Jan 23 03:14:50 np0005593232 kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 23 03:14:50 np0005593232 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 23 03:14:50 np0005593232 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 23 03:14:50 np0005593232 kernel: BIOS-provided physical RAM map:
Jan 23 03:14:50 np0005593232 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 23 03:14:50 np0005593232 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 23 03:14:50 np0005593232 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 23 03:14:50 np0005593232 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 23 03:14:50 np0005593232 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 23 03:14:50 np0005593232 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 03:14:50 np0005593232 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 23 03:14:50 np0005593232 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 23 03:14:50 np0005593232 kernel: NX (Execute Disable) protection: active
Jan 23 03:14:50 np0005593232 kernel: APIC: Static calls initialized
Jan 23 03:14:50 np0005593232 kernel: SMBIOS 2.8 present.
Jan 23 03:14:50 np0005593232 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 23 03:14:50 np0005593232 kernel: Hypervisor detected: KVM
Jan 23 03:14:50 np0005593232 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 03:14:50 np0005593232 kernel: kvm-clock: using sched offset of 3392750971 cycles
Jan 23 03:14:50 np0005593232 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 03:14:50 np0005593232 kernel: tsc: Detected 2800.000 MHz processor
Jan 23 03:14:50 np0005593232 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 23 03:14:50 np0005593232 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 23 03:14:50 np0005593232 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 23 03:14:50 np0005593232 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 23 03:14:50 np0005593232 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 23 03:14:50 np0005593232 kernel: Using GB pages for direct mapping
Jan 23 03:14:50 np0005593232 kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 23 03:14:50 np0005593232 kernel: ACPI: Early table checksum verification disabled
Jan 23 03:14:50 np0005593232 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 23 03:14:50 np0005593232 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 23 03:14:50 np0005593232 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 23 03:14:50 np0005593232 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 23 03:14:50 np0005593232 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 23 03:14:50 np0005593232 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 23 03:14:50 np0005593232 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 23 03:14:50 np0005593232 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 23 03:14:50 np0005593232 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 23 03:14:50 np0005593232 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 23 03:14:50 np0005593232 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 23 03:14:50 np0005593232 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 23 03:14:50 np0005593232 kernel: No NUMA configuration found
Jan 23 03:14:50 np0005593232 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 23 03:14:50 np0005593232 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 23 03:14:50 np0005593232 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 23 03:14:50 np0005593232 kernel: Zone ranges:
Jan 23 03:14:50 np0005593232 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 03:14:50 np0005593232 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 23 03:14:50 np0005593232 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 23 03:14:50 np0005593232 kernel:  Device   empty
Jan 23 03:14:50 np0005593232 kernel: Movable zone start for each node
Jan 23 03:14:50 np0005593232 kernel: Early memory node ranges
Jan 23 03:14:50 np0005593232 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 23 03:14:50 np0005593232 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 23 03:14:50 np0005593232 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 23 03:14:50 np0005593232 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 23 03:14:50 np0005593232 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 03:14:50 np0005593232 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 23 03:14:50 np0005593232 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 23 03:14:50 np0005593232 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 03:14:50 np0005593232 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 03:14:50 np0005593232 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 03:14:50 np0005593232 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 03:14:50 np0005593232 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 03:14:50 np0005593232 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 03:14:50 np0005593232 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 03:14:50 np0005593232 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 03:14:50 np0005593232 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 03:14:50 np0005593232 kernel: TSC deadline timer available
Jan 23 03:14:50 np0005593232 kernel: CPU topo: Max. logical packages:   8
Jan 23 03:14:50 np0005593232 kernel: CPU topo: Max. logical dies:       8
Jan 23 03:14:50 np0005593232 kernel: CPU topo: Max. dies per package:   1
Jan 23 03:14:50 np0005593232 kernel: CPU topo: Max. threads per core:   1
Jan 23 03:14:50 np0005593232 kernel: CPU topo: Num. cores per package:     1
Jan 23 03:14:50 np0005593232 kernel: CPU topo: Num. threads per package:   1
Jan 23 03:14:50 np0005593232 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 23 03:14:50 np0005593232 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 03:14:50 np0005593232 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 23 03:14:50 np0005593232 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 23 03:14:50 np0005593232 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 23 03:14:50 np0005593232 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 23 03:14:50 np0005593232 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 23 03:14:50 np0005593232 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 23 03:14:50 np0005593232 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 23 03:14:50 np0005593232 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 23 03:14:50 np0005593232 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 23 03:14:50 np0005593232 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 23 03:14:50 np0005593232 kernel: Booting paravirtualized kernel on KVM
Jan 23 03:14:50 np0005593232 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 03:14:50 np0005593232 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 23 03:14:50 np0005593232 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 23 03:14:50 np0005593232 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 23 03:14:50 np0005593232 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 23 03:14:50 np0005593232 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 23 03:14:50 np0005593232 kernel: random: crng init done
Jan 23 03:14:50 np0005593232 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 23 03:14:50 np0005593232 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 03:14:50 np0005593232 kernel: Fallback order for Node 0: 0 
Jan 23 03:14:50 np0005593232 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 23 03:14:50 np0005593232 kernel: Policy zone: Normal
Jan 23 03:14:50 np0005593232 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 03:14:50 np0005593232 kernel: software IO TLB: area num 8.
Jan 23 03:14:50 np0005593232 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 23 03:14:50 np0005593232 kernel: ftrace: allocating 49417 entries in 194 pages
Jan 23 03:14:50 np0005593232 kernel: ftrace: allocated 194 pages with 3 groups
Jan 23 03:14:50 np0005593232 kernel: Dynamic Preempt: voluntary
Jan 23 03:14:50 np0005593232 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 03:14:50 np0005593232 kernel: rcu: 	RCU event tracing is enabled.
Jan 23 03:14:50 np0005593232 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 23 03:14:50 np0005593232 kernel: 	Trampoline variant of Tasks RCU enabled.
Jan 23 03:14:50 np0005593232 kernel: 	Rude variant of Tasks RCU enabled.
Jan 23 03:14:50 np0005593232 kernel: 	Tracing variant of Tasks RCU enabled.
Jan 23 03:14:50 np0005593232 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 03:14:50 np0005593232 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 23 03:14:50 np0005593232 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 23 03:14:50 np0005593232 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 23 03:14:50 np0005593232 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 23 03:14:50 np0005593232 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 23 03:14:50 np0005593232 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 03:14:50 np0005593232 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 23 03:14:50 np0005593232 kernel: Console: colour VGA+ 80x25
Jan 23 03:14:50 np0005593232 kernel: printk: console [ttyS0] enabled
Jan 23 03:14:50 np0005593232 kernel: ACPI: Core revision 20230331
Jan 23 03:14:50 np0005593232 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 03:14:50 np0005593232 kernel: x2apic enabled
Jan 23 03:14:50 np0005593232 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 03:14:50 np0005593232 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 23 03:14:50 np0005593232 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Jan 23 03:14:50 np0005593232 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 03:14:50 np0005593232 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 23 03:14:50 np0005593232 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 23 03:14:50 np0005593232 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 03:14:50 np0005593232 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 03:14:50 np0005593232 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 03:14:50 np0005593232 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 23 03:14:50 np0005593232 kernel: RETBleed: Mitigation: untrained return thunk
Jan 23 03:14:50 np0005593232 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 23 03:14:50 np0005593232 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 23 03:14:50 np0005593232 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 23 03:14:50 np0005593232 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 23 03:14:50 np0005593232 kernel: x86/bugs: return thunk changed
Jan 23 03:14:50 np0005593232 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 23 03:14:50 np0005593232 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 03:14:50 np0005593232 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 03:14:50 np0005593232 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 03:14:50 np0005593232 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 23 03:14:50 np0005593232 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 23 03:14:50 np0005593232 kernel: Freeing SMP alternatives memory: 40K
Jan 23 03:14:50 np0005593232 kernel: pid_max: default: 32768 minimum: 301
Jan 23 03:14:50 np0005593232 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 23 03:14:50 np0005593232 kernel: landlock: Up and running.
Jan 23 03:14:50 np0005593232 kernel: Yama: becoming mindful.
Jan 23 03:14:50 np0005593232 kernel: SELinux:  Initializing.
Jan 23 03:14:50 np0005593232 kernel: LSM support for eBPF active
Jan 23 03:14:50 np0005593232 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 03:14:50 np0005593232 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 03:14:50 np0005593232 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 23 03:14:50 np0005593232 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 23 03:14:50 np0005593232 kernel: ... version:                0
Jan 23 03:14:50 np0005593232 kernel: ... bit width:              48
Jan 23 03:14:50 np0005593232 kernel: ... generic registers:      6
Jan 23 03:14:50 np0005593232 kernel: ... value mask:             0000ffffffffffff
Jan 23 03:14:50 np0005593232 kernel: ... max period:             00007fffffffffff
Jan 23 03:14:50 np0005593232 kernel: ... fixed-purpose events:   0
Jan 23 03:14:50 np0005593232 kernel: ... event mask:             000000000000003f
Jan 23 03:14:50 np0005593232 kernel: signal: max sigframe size: 1776
Jan 23 03:14:50 np0005593232 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 03:14:50 np0005593232 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 23 03:14:50 np0005593232 kernel: smp: Bringing up secondary CPUs ...
Jan 23 03:14:50 np0005593232 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 03:14:50 np0005593232 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 23 03:14:50 np0005593232 kernel: smp: Brought up 1 node, 8 CPUs
Jan 23 03:14:50 np0005593232 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Jan 23 03:14:50 np0005593232 kernel: node 0 deferred pages initialised in 57ms
Jan 23 03:14:50 np0005593232 kernel: Memory: 7763852K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618356K reserved, 0K cma-reserved)
Jan 23 03:14:50 np0005593232 kernel: devtmpfs: initialized
Jan 23 03:14:50 np0005593232 kernel: x86/mm: Memory block size: 128MB
Jan 23 03:14:50 np0005593232 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 03:14:50 np0005593232 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 23 03:14:50 np0005593232 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 03:14:50 np0005593232 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 03:14:50 np0005593232 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 23 03:14:50 np0005593232 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 03:14:50 np0005593232 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 03:14:50 np0005593232 kernel: audit: initializing netlink subsys (disabled)
Jan 23 03:14:50 np0005593232 kernel: audit: type=2000 audit(1769156086.740:1): state=initialized audit_enabled=0 res=1
Jan 23 03:14:50 np0005593232 kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 23 03:14:50 np0005593232 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 03:14:50 np0005593232 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 03:14:50 np0005593232 kernel: cpuidle: using governor menu
Jan 23 03:14:50 np0005593232 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 03:14:50 np0005593232 kernel: PCI: Using configuration type 1 for base access
Jan 23 03:14:50 np0005593232 kernel: PCI: Using configuration type 1 for extended access
Jan 23 03:14:50 np0005593232 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 03:14:50 np0005593232 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 03:14:50 np0005593232 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 03:14:50 np0005593232 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 03:14:50 np0005593232 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 03:14:50 np0005593232 kernel: Demotion targets for Node 0: null
Jan 23 03:14:50 np0005593232 kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 03:14:50 np0005593232 kernel: ACPI: Added _OSI(Module Device)
Jan 23 03:14:50 np0005593232 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 03:14:50 np0005593232 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 03:14:50 np0005593232 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 03:14:50 np0005593232 kernel: ACPI: Interpreter enabled
Jan 23 03:14:50 np0005593232 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 23 03:14:50 np0005593232 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 03:14:50 np0005593232 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 03:14:50 np0005593232 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 03:14:50 np0005593232 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 23 03:14:50 np0005593232 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 03:14:50 np0005593232 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [3] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [4] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [5] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [6] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [7] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [8] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [9] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [10] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [11] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [12] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [13] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [14] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [15] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [16] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [17] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [18] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [19] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [20] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [21] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [22] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [23] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [24] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [25] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [26] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [27] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [28] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [29] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [30] registered
Jan 23 03:14:50 np0005593232 kernel: acpiphp: Slot [31] registered
Jan 23 03:14:50 np0005593232 kernel: PCI host bridge to bus 0000:00
Jan 23 03:14:50 np0005593232 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 23 03:14:50 np0005593232 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 23 03:14:50 np0005593232 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 03:14:50 np0005593232 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 23 03:14:50 np0005593232 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 23 03:14:50 np0005593232 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 23 03:14:50 np0005593232 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 03:14:50 np0005593232 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 03:14:50 np0005593232 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 03:14:50 np0005593232 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 03:14:50 np0005593232 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 23 03:14:50 np0005593232 kernel: iommu: Default domain type: Translated
Jan 23 03:14:50 np0005593232 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 03:14:50 np0005593232 kernel: SCSI subsystem initialized
Jan 23 03:14:50 np0005593232 kernel: ACPI: bus type USB registered
Jan 23 03:14:50 np0005593232 kernel: usbcore: registered new interface driver usbfs
Jan 23 03:14:50 np0005593232 kernel: usbcore: registered new interface driver hub
Jan 23 03:14:50 np0005593232 kernel: usbcore: registered new device driver usb
Jan 23 03:14:50 np0005593232 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 23 03:14:50 np0005593232 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 23 03:14:50 np0005593232 kernel: PTP clock support registered
Jan 23 03:14:50 np0005593232 kernel: EDAC MC: Ver: 3.0.0
Jan 23 03:14:50 np0005593232 kernel: NetLabel: Initializing
Jan 23 03:14:50 np0005593232 kernel: NetLabel:  domain hash size = 128
Jan 23 03:14:50 np0005593232 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 23 03:14:50 np0005593232 kernel: NetLabel:  unlabeled traffic allowed by default
Jan 23 03:14:50 np0005593232 kernel: PCI: Using ACPI for IRQ routing
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 03:14:50 np0005593232 kernel: vgaarb: loaded
Jan 23 03:14:50 np0005593232 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 03:14:50 np0005593232 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 03:14:50 np0005593232 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 03:14:50 np0005593232 kernel: pnp: PnP ACPI init
Jan 23 03:14:50 np0005593232 kernel: pnp: PnP ACPI: found 5 devices
Jan 23 03:14:50 np0005593232 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 03:14:50 np0005593232 kernel: NET: Registered PF_INET protocol family
Jan 23 03:14:50 np0005593232 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 23 03:14:50 np0005593232 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 23 03:14:50 np0005593232 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 03:14:50 np0005593232 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 03:14:50 np0005593232 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 23 03:14:50 np0005593232 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 23 03:14:50 np0005593232 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 23 03:14:50 np0005593232 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 23 03:14:50 np0005593232 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 23 03:14:50 np0005593232 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 03:14:50 np0005593232 kernel: NET: Registered PF_XDP protocol family
Jan 23 03:14:50 np0005593232 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 23 03:14:50 np0005593232 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 23 03:14:50 np0005593232 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 03:14:50 np0005593232 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 23 03:14:50 np0005593232 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 23 03:14:50 np0005593232 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 23 03:14:50 np0005593232 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 71986 usecs
Jan 23 03:14:50 np0005593232 kernel: PCI: CLS 0 bytes, default 64
Jan 23 03:14:50 np0005593232 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 23 03:14:50 np0005593232 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 23 03:14:50 np0005593232 kernel: ACPI: bus type thunderbolt registered
Jan 23 03:14:50 np0005593232 kernel: Trying to unpack rootfs image as initramfs...
Jan 23 03:14:50 np0005593232 kernel: Initialise system trusted keyrings
Jan 23 03:14:50 np0005593232 kernel: Key type blacklist registered
Jan 23 03:14:50 np0005593232 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 23 03:14:50 np0005593232 kernel: zbud: loaded
Jan 23 03:14:50 np0005593232 kernel: integrity: Platform Keyring initialized
Jan 23 03:14:50 np0005593232 kernel: integrity: Machine keyring initialized
Jan 23 03:14:50 np0005593232 kernel: Freeing initrd memory: 87956K
Jan 23 03:14:50 np0005593232 kernel: NET: Registered PF_ALG protocol family
Jan 23 03:14:50 np0005593232 kernel: xor: automatically using best checksumming function   avx       
Jan 23 03:14:50 np0005593232 kernel: Key type asymmetric registered
Jan 23 03:14:50 np0005593232 kernel: Asymmetric key parser 'x509' registered
Jan 23 03:14:50 np0005593232 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 23 03:14:50 np0005593232 kernel: io scheduler mq-deadline registered
Jan 23 03:14:50 np0005593232 kernel: io scheduler kyber registered
Jan 23 03:14:50 np0005593232 kernel: io scheduler bfq registered
Jan 23 03:14:50 np0005593232 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 23 03:14:50 np0005593232 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 23 03:14:50 np0005593232 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 23 03:14:50 np0005593232 kernel: ACPI: button: Power Button [PWRF]
Jan 23 03:14:50 np0005593232 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 23 03:14:50 np0005593232 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 23 03:14:50 np0005593232 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 23 03:14:50 np0005593232 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 03:14:50 np0005593232 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 03:14:50 np0005593232 kernel: Non-volatile memory driver v1.3
Jan 23 03:14:50 np0005593232 kernel: rdac: device handler registered
Jan 23 03:14:50 np0005593232 kernel: hp_sw: device handler registered
Jan 23 03:14:50 np0005593232 kernel: emc: device handler registered
Jan 23 03:14:50 np0005593232 kernel: alua: device handler registered
Jan 23 03:14:50 np0005593232 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 23 03:14:50 np0005593232 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 23 03:14:50 np0005593232 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 23 03:14:50 np0005593232 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 23 03:14:50 np0005593232 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 23 03:14:50 np0005593232 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 23 03:14:50 np0005593232 kernel: usb usb1: Product: UHCI Host Controller
Jan 23 03:14:50 np0005593232 kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 23 03:14:50 np0005593232 kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 23 03:14:50 np0005593232 kernel: hub 1-0:1.0: USB hub found
Jan 23 03:14:50 np0005593232 kernel: hub 1-0:1.0: 2 ports detected
Jan 23 03:14:50 np0005593232 kernel: usbcore: registered new interface driver usbserial_generic
Jan 23 03:14:50 np0005593232 kernel: usbserial: USB Serial support registered for generic
Jan 23 03:14:50 np0005593232 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 03:14:50 np0005593232 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 03:14:50 np0005593232 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 03:14:50 np0005593232 kernel: mousedev: PS/2 mouse device common for all mice
Jan 23 03:14:50 np0005593232 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 23 03:14:50 np0005593232 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 23 03:14:50 np0005593232 kernel: rtc_cmos 00:04: registered as rtc0
Jan 23 03:14:50 np0005593232 kernel: rtc_cmos 00:04: setting system clock to 2026-01-23T08:14:49 UTC (1769156089)
Jan 23 03:14:50 np0005593232 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 23 03:14:50 np0005593232 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 23 03:14:50 np0005593232 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 23 03:14:50 np0005593232 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 23 03:14:50 np0005593232 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 03:14:50 np0005593232 kernel: usbcore: registered new interface driver usbhid
Jan 23 03:14:50 np0005593232 kernel: usbhid: USB HID core driver
Jan 23 03:14:50 np0005593232 kernel: drop_monitor: Initializing network drop monitor service
Jan 23 03:14:50 np0005593232 kernel: Initializing XFRM netlink socket
Jan 23 03:14:50 np0005593232 kernel: NET: Registered PF_INET6 protocol family
Jan 23 03:14:50 np0005593232 kernel: Segment Routing with IPv6
Jan 23 03:14:50 np0005593232 kernel: NET: Registered PF_PACKET protocol family
Jan 23 03:14:50 np0005593232 kernel: mpls_gso: MPLS GSO support
Jan 23 03:14:50 np0005593232 kernel: IPI shorthand broadcast: enabled
Jan 23 03:14:50 np0005593232 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 23 03:14:50 np0005593232 kernel: AES CTR mode by8 optimization enabled
Jan 23 03:14:50 np0005593232 kernel: sched_clock: Marking stable (2929025540, 155690999)->(3572692879, -487976340)
Jan 23 03:14:50 np0005593232 kernel: registered taskstats version 1
Jan 23 03:14:50 np0005593232 kernel: Loading compiled-in X.509 certificates
Jan 23 03:14:50 np0005593232 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 23 03:14:50 np0005593232 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 23 03:14:50 np0005593232 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 23 03:14:50 np0005593232 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 23 03:14:50 np0005593232 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 23 03:14:50 np0005593232 kernel: Demotion targets for Node 0: null
Jan 23 03:14:50 np0005593232 kernel: page_owner is disabled
Jan 23 03:14:50 np0005593232 kernel: Key type .fscrypt registered
Jan 23 03:14:50 np0005593232 kernel: Key type fscrypt-provisioning registered
Jan 23 03:14:50 np0005593232 kernel: Key type big_key registered
Jan 23 03:14:50 np0005593232 kernel: Key type encrypted registered
Jan 23 03:14:50 np0005593232 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 03:14:50 np0005593232 kernel: Loading compiled-in module X.509 certificates
Jan 23 03:14:50 np0005593232 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 23 03:14:50 np0005593232 kernel: ima: Allocated hash algorithm: sha256
Jan 23 03:14:50 np0005593232 kernel: ima: No architecture policies found
Jan 23 03:14:50 np0005593232 kernel: evm: Initialising EVM extended attributes:
Jan 23 03:14:50 np0005593232 kernel: evm: security.selinux
Jan 23 03:14:50 np0005593232 kernel: evm: security.SMACK64 (disabled)
Jan 23 03:14:50 np0005593232 kernel: evm: security.SMACK64EXEC (disabled)
Jan 23 03:14:50 np0005593232 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 23 03:14:50 np0005593232 kernel: evm: security.SMACK64MMAP (disabled)
Jan 23 03:14:50 np0005593232 kernel: evm: security.apparmor (disabled)
Jan 23 03:14:50 np0005593232 kernel: evm: security.ima
Jan 23 03:14:50 np0005593232 kernel: evm: security.capability
Jan 23 03:14:50 np0005593232 kernel: evm: HMAC attrs: 0x1
Jan 23 03:14:50 np0005593232 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 23 03:14:50 np0005593232 kernel: Running certificate verification RSA selftest
Jan 23 03:14:50 np0005593232 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 23 03:14:50 np0005593232 kernel: Running certificate verification ECDSA selftest
Jan 23 03:14:50 np0005593232 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 23 03:14:50 np0005593232 kernel: clk: Disabling unused clocks
Jan 23 03:14:50 np0005593232 kernel: Freeing unused decrypted memory: 2028K
Jan 23 03:14:50 np0005593232 kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 23 03:14:50 np0005593232 kernel: Write protecting the kernel read-only data: 30720k
Jan 23 03:14:50 np0005593232 kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 23 03:14:50 np0005593232 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 23 03:14:50 np0005593232 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 23 03:14:50 np0005593232 kernel: usb 1-1: Product: QEMU USB Tablet
Jan 23 03:14:50 np0005593232 kernel: usb 1-1: Manufacturer: QEMU
Jan 23 03:14:50 np0005593232 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 23 03:14:50 np0005593232 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 23 03:14:50 np0005593232 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 23 03:14:50 np0005593232 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 23 03:14:50 np0005593232 kernel: Run /init as init process
Jan 23 03:14:50 np0005593232 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 23 03:14:50 np0005593232 systemd: Detected virtualization kvm.
Jan 23 03:14:50 np0005593232 systemd: Detected architecture x86-64.
Jan 23 03:14:50 np0005593232 systemd: Running in initrd.
Jan 23 03:14:50 np0005593232 systemd: No hostname configured, using default hostname.
Jan 23 03:14:50 np0005593232 systemd: Hostname set to <localhost>.
Jan 23 03:14:50 np0005593232 systemd: Initializing machine ID from VM UUID.
Jan 23 03:14:50 np0005593232 systemd: Queued start job for default target Initrd Default Target.
Jan 23 03:14:50 np0005593232 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 23 03:14:50 np0005593232 systemd: Reached target Local Encrypted Volumes.
Jan 23 03:14:50 np0005593232 systemd: Reached target Initrd /usr File System.
Jan 23 03:14:50 np0005593232 systemd: Reached target Local File Systems.
Jan 23 03:14:50 np0005593232 systemd: Reached target Path Units.
Jan 23 03:14:50 np0005593232 systemd: Reached target Slice Units.
Jan 23 03:14:50 np0005593232 systemd: Reached target Swaps.
Jan 23 03:14:50 np0005593232 systemd: Reached target Timer Units.
Jan 23 03:14:50 np0005593232 systemd: Listening on D-Bus System Message Bus Socket.
Jan 23 03:14:50 np0005593232 systemd: Listening on Journal Socket (/dev/log).
Jan 23 03:14:50 np0005593232 systemd: Listening on Journal Socket.
Jan 23 03:14:50 np0005593232 systemd: Listening on udev Control Socket.
Jan 23 03:14:50 np0005593232 systemd: Listening on udev Kernel Socket.
Jan 23 03:14:50 np0005593232 systemd: Reached target Socket Units.
Jan 23 03:14:50 np0005593232 systemd: Starting Create List of Static Device Nodes...
Jan 23 03:14:50 np0005593232 systemd: Starting Journal Service...
Jan 23 03:14:50 np0005593232 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 23 03:14:50 np0005593232 systemd: Starting Apply Kernel Variables...
Jan 23 03:14:50 np0005593232 systemd: Starting Create System Users...
Jan 23 03:14:50 np0005593232 systemd: Starting Setup Virtual Console...
Jan 23 03:14:50 np0005593232 systemd: Finished Create List of Static Device Nodes.
Jan 23 03:14:50 np0005593232 systemd: Finished Apply Kernel Variables.
Jan 23 03:14:50 np0005593232 systemd: Finished Create System Users.
Jan 23 03:14:50 np0005593232 systemd: Starting Create Static Device Nodes in /dev...
Jan 23 03:14:50 np0005593232 systemd-journald[311]: Journal started
Jan 23 03:14:50 np0005593232 systemd-journald[311]: Runtime Journal (/run/log/journal/ebc95db9b3894cf6b3a57ae0afc322d2) is 8.0M, max 153.6M, 145.6M free.
Jan 23 03:14:50 np0005593232 systemd-sysusers[315]: Creating group 'users' with GID 100.
Jan 23 03:14:50 np0005593232 systemd-sysusers[315]: Creating group 'dbus' with GID 81.
Jan 23 03:14:50 np0005593232 systemd-sysusers[315]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 23 03:14:50 np0005593232 systemd: Started Journal Service.
Jan 23 03:14:50 np0005593232 systemd[1]: Starting Create Volatile Files and Directories...
Jan 23 03:14:50 np0005593232 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 23 03:14:50 np0005593232 systemd[1]: Finished Create Volatile Files and Directories.
Jan 23 03:14:50 np0005593232 systemd[1]: Finished Setup Virtual Console.
Jan 23 03:14:50 np0005593232 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 23 03:14:50 np0005593232 systemd[1]: Starting dracut cmdline hook...
Jan 23 03:14:50 np0005593232 dracut-cmdline[329]: dracut-9 dracut-057-102.git20250818.el9
Jan 23 03:14:50 np0005593232 dracut-cmdline[329]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 23 03:14:50 np0005593232 systemd[1]: Finished dracut cmdline hook.
Jan 23 03:14:50 np0005593232 systemd[1]: Starting dracut pre-udev hook...
Jan 23 03:14:50 np0005593232 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 03:14:50 np0005593232 kernel: device-mapper: uevent: version 1.0.3
Jan 23 03:14:50 np0005593232 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 23 03:14:51 np0005593232 kernel: RPC: Registered named UNIX socket transport module.
Jan 23 03:14:51 np0005593232 kernel: RPC: Registered udp transport module.
Jan 23 03:14:51 np0005593232 kernel: RPC: Registered tcp transport module.
Jan 23 03:14:51 np0005593232 kernel: RPC: Registered tcp-with-tls transport module.
Jan 23 03:14:51 np0005593232 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 23 03:14:51 np0005593232 rpc.statd[446]: Version 2.5.4 starting
Jan 23 03:14:51 np0005593232 rpc.statd[446]: Initializing NSM state
Jan 23 03:14:51 np0005593232 rpc.idmapd[451]: Setting log level to 0
Jan 23 03:14:51 np0005593232 systemd[1]: Finished dracut pre-udev hook.
Jan 23 03:14:51 np0005593232 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 23 03:14:51 np0005593232 systemd-udevd[464]: Using default interface naming scheme 'rhel-9.0'.
Jan 23 03:14:51 np0005593232 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 23 03:14:51 np0005593232 systemd[1]: Starting dracut pre-trigger hook...
Jan 23 03:14:51 np0005593232 systemd[1]: Finished dracut pre-trigger hook.
Jan 23 03:14:51 np0005593232 systemd[1]: Starting Coldplug All udev Devices...
Jan 23 03:14:51 np0005593232 systemd[1]: Created slice Slice /system/modprobe.
Jan 23 03:14:51 np0005593232 systemd[1]: Starting Load Kernel Module configfs...
Jan 23 03:14:51 np0005593232 systemd[1]: Finished Coldplug All udev Devices.
Jan 23 03:14:51 np0005593232 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 03:14:51 np0005593232 systemd[1]: Finished Load Kernel Module configfs.
Jan 23 03:14:51 np0005593232 systemd[1]: Mounting Kernel Configuration File System...
Jan 23 03:14:51 np0005593232 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 23 03:14:51 np0005593232 systemd[1]: Reached target Network.
Jan 23 03:14:51 np0005593232 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 23 03:14:51 np0005593232 systemd[1]: Starting dracut initqueue hook...
Jan 23 03:14:51 np0005593232 systemd[1]: Mounted Kernel Configuration File System.
Jan 23 03:14:51 np0005593232 systemd[1]: Reached target System Initialization.
Jan 23 03:14:51 np0005593232 systemd[1]: Reached target Basic System.
Jan 23 03:14:51 np0005593232 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 23 03:14:51 np0005593232 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 23 03:14:51 np0005593232 systemd-udevd[489]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 03:14:51 np0005593232 kernel: vda: vda1
Jan 23 03:14:51 np0005593232 kernel: scsi host0: ata_piix
Jan 23 03:14:51 np0005593232 kernel: scsi host1: ata_piix
Jan 23 03:14:51 np0005593232 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 23 03:14:51 np0005593232 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 23 03:14:51 np0005593232 kernel: ata1: found unknown device (class 0)
Jan 23 03:14:51 np0005593232 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 23 03:14:51 np0005593232 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 23 03:14:51 np0005593232 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 23 03:14:51 np0005593232 systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 23 03:14:51 np0005593232 systemd[1]: Reached target Initrd Root Device.
Jan 23 03:14:51 np0005593232 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 23 03:14:51 np0005593232 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 03:14:52 np0005593232 systemd[1]: Finished dracut initqueue hook.
Jan 23 03:14:52 np0005593232 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 23 03:14:52 np0005593232 systemd[1]: Reached target Remote Encrypted Volumes.
Jan 23 03:14:52 np0005593232 systemd[1]: Reached target Remote File Systems.
Jan 23 03:14:52 np0005593232 systemd[1]: Starting dracut pre-mount hook...
Jan 23 03:14:52 np0005593232 systemd[1]: Finished dracut pre-mount hook.
Jan 23 03:14:52 np0005593232 systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 23 03:14:52 np0005593232 systemd-fsck[560]: /usr/sbin/fsck.xfs: XFS file system.
Jan 23 03:14:52 np0005593232 systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 23 03:14:52 np0005593232 systemd[1]: Mounting /sysroot...
Jan 23 03:14:53 np0005593232 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 23 03:14:53 np0005593232 kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 23 03:14:53 np0005593232 kernel: XFS (vda1): Ending clean mount
Jan 23 03:14:54 np0005593232 systemd[1]: Mounted /sysroot.
Jan 23 03:14:54 np0005593232 systemd[1]: Reached target Initrd Root File System.
Jan 23 03:14:54 np0005593232 systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 23 03:14:54 np0005593232 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 23 03:14:54 np0005593232 systemd[1]: Reached target Initrd File Systems.
Jan 23 03:14:54 np0005593232 systemd[1]: Reached target Initrd Default Target.
Jan 23 03:14:54 np0005593232 systemd[1]: Starting dracut mount hook...
Jan 23 03:14:54 np0005593232 systemd[1]: Finished dracut mount hook.
Jan 23 03:14:54 np0005593232 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 23 03:14:54 np0005593232 rpc.idmapd[451]: exiting on signal 15
Jan 23 03:14:54 np0005593232 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 23 03:14:54 np0005593232 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped target Network.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped target Timer Units.
Jan 23 03:14:54 np0005593232 systemd[1]: dbus.socket: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 23 03:14:54 np0005593232 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped target Initrd Default Target.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped target Basic System.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped target Initrd Root Device.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped target Initrd /usr File System.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped target Path Units.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped target Remote File Systems.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped target Slice Units.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped target Socket Units.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped target System Initialization.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped target Local File Systems.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped target Swaps.
Jan 23 03:14:54 np0005593232 systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped dracut mount hook.
Jan 23 03:14:54 np0005593232 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped dracut pre-mount hook.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped target Local Encrypted Volumes.
Jan 23 03:14:54 np0005593232 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 23 03:14:54 np0005593232 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped dracut initqueue hook.
Jan 23 03:14:54 np0005593232 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped Apply Kernel Variables.
Jan 23 03:14:54 np0005593232 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped Create Volatile Files and Directories.
Jan 23 03:14:54 np0005593232 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped Coldplug All udev Devices.
Jan 23 03:14:54 np0005593232 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped dracut pre-trigger hook.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 23 03:14:54 np0005593232 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped Setup Virtual Console.
Jan 23 03:14:54 np0005593232 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 23 03:14:54 np0005593232 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 23 03:14:54 np0005593232 systemd[1]: systemd-udevd.service: Consumed 1.179s CPU time.
Jan 23 03:14:54 np0005593232 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: Closed udev Control Socket.
Jan 23 03:14:54 np0005593232 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: Closed udev Kernel Socket.
Jan 23 03:14:54 np0005593232 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped dracut pre-udev hook.
Jan 23 03:14:54 np0005593232 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped dracut cmdline hook.
Jan 23 03:14:54 np0005593232 systemd[1]: Starting Cleanup udev Database...
Jan 23 03:14:54 np0005593232 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 23 03:14:54 np0005593232 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped Create List of Static Device Nodes.
Jan 23 03:14:54 np0005593232 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: Stopped Create System Users.
Jan 23 03:14:54 np0005593232 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 03:14:54 np0005593232 systemd[1]: Finished Cleanup udev Database.
Jan 23 03:14:54 np0005593232 systemd[1]: Reached target Switch Root.
Jan 23 03:14:54 np0005593232 systemd[1]: Starting Switch Root...
Jan 23 03:14:54 np0005593232 systemd[1]: Switching root.
Jan 23 03:14:54 np0005593232 systemd-journald[311]: Journal stopped
Jan 23 03:14:55 np0005593232 systemd-journald: Received SIGTERM from PID 1 (systemd).
Jan 23 03:14:55 np0005593232 kernel: audit: type=1404 audit(1769156094.902:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 23 03:14:55 np0005593232 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 03:14:55 np0005593232 kernel: SELinux:  policy capability open_perms=1
Jan 23 03:14:55 np0005593232 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 03:14:55 np0005593232 kernel: SELinux:  policy capability always_check_network=0
Jan 23 03:14:55 np0005593232 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 03:14:55 np0005593232 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 03:14:55 np0005593232 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 03:14:55 np0005593232 kernel: audit: type=1403 audit(1769156095.050:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 03:14:55 np0005593232 systemd: Successfully loaded SELinux policy in 151.265ms.
Jan 23 03:14:55 np0005593232 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 43.761ms.
Jan 23 03:14:55 np0005593232 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 23 03:14:55 np0005593232 systemd: Detected virtualization kvm.
Jan 23 03:14:55 np0005593232 systemd: Detected architecture x86-64.
Jan 23 03:14:55 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:14:55 np0005593232 systemd: initrd-switch-root.service: Deactivated successfully.
Jan 23 03:14:55 np0005593232 systemd: Stopped Switch Root.
Jan 23 03:14:55 np0005593232 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 03:14:55 np0005593232 systemd: Created slice Slice /system/getty.
Jan 23 03:14:55 np0005593232 systemd: Created slice Slice /system/serial-getty.
Jan 23 03:14:55 np0005593232 systemd: Created slice Slice /system/sshd-keygen.
Jan 23 03:14:55 np0005593232 systemd: Created slice User and Session Slice.
Jan 23 03:14:55 np0005593232 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 23 03:14:55 np0005593232 systemd: Started Forward Password Requests to Wall Directory Watch.
Jan 23 03:14:55 np0005593232 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 23 03:14:55 np0005593232 systemd: Reached target Local Encrypted Volumes.
Jan 23 03:14:55 np0005593232 systemd: Stopped target Switch Root.
Jan 23 03:14:55 np0005593232 systemd: Stopped target Initrd File Systems.
Jan 23 03:14:55 np0005593232 systemd: Stopped target Initrd Root File System.
Jan 23 03:14:55 np0005593232 systemd: Reached target Local Integrity Protected Volumes.
Jan 23 03:14:55 np0005593232 systemd: Reached target Path Units.
Jan 23 03:14:55 np0005593232 systemd: Reached target rpc_pipefs.target.
Jan 23 03:14:55 np0005593232 systemd: Reached target Slice Units.
Jan 23 03:14:55 np0005593232 systemd: Reached target Swaps.
Jan 23 03:14:55 np0005593232 systemd: Reached target Local Verity Protected Volumes.
Jan 23 03:14:55 np0005593232 systemd: Listening on RPCbind Server Activation Socket.
Jan 23 03:14:55 np0005593232 systemd: Reached target RPC Port Mapper.
Jan 23 03:14:55 np0005593232 systemd: Listening on Process Core Dump Socket.
Jan 23 03:14:55 np0005593232 systemd: Listening on initctl Compatibility Named Pipe.
Jan 23 03:14:55 np0005593232 systemd: Listening on udev Control Socket.
Jan 23 03:14:55 np0005593232 systemd: Listening on udev Kernel Socket.
Jan 23 03:14:55 np0005593232 systemd: Mounting Huge Pages File System...
Jan 23 03:14:55 np0005593232 systemd: Mounting POSIX Message Queue File System...
Jan 23 03:14:55 np0005593232 systemd: Mounting Kernel Debug File System...
Jan 23 03:14:55 np0005593232 systemd: Mounting Kernel Trace File System...
Jan 23 03:14:55 np0005593232 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 23 03:14:55 np0005593232 systemd: Starting Create List of Static Device Nodes...
Jan 23 03:14:55 np0005593232 systemd: Starting Load Kernel Module configfs...
Jan 23 03:14:55 np0005593232 systemd: Starting Load Kernel Module drm...
Jan 23 03:14:55 np0005593232 systemd: Starting Load Kernel Module efi_pstore...
Jan 23 03:14:55 np0005593232 systemd: Starting Load Kernel Module fuse...
Jan 23 03:14:55 np0005593232 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 23 03:14:55 np0005593232 systemd: systemd-fsck-root.service: Deactivated successfully.
Jan 23 03:14:55 np0005593232 systemd: Stopped File System Check on Root Device.
Jan 23 03:14:55 np0005593232 systemd: Stopped Journal Service.
Jan 23 03:14:55 np0005593232 systemd: Starting Journal Service...
Jan 23 03:14:55 np0005593232 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 23 03:14:55 np0005593232 systemd: Starting Generate network units from Kernel command line...
Jan 23 03:14:55 np0005593232 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 23 03:14:55 np0005593232 systemd: Starting Remount Root and Kernel File Systems...
Jan 23 03:14:55 np0005593232 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 03:14:55 np0005593232 systemd: Starting Apply Kernel Variables...
Jan 23 03:14:55 np0005593232 systemd: Starting Coldplug All udev Devices...
Jan 23 03:14:55 np0005593232 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 23 03:14:55 np0005593232 systemd: Mounted Huge Pages File System.
Jan 23 03:14:55 np0005593232 systemd: Mounted POSIX Message Queue File System.
Jan 23 03:14:55 np0005593232 systemd: Mounted Kernel Debug File System.
Jan 23 03:14:55 np0005593232 systemd: Mounted Kernel Trace File System.
Jan 23 03:14:55 np0005593232 systemd: Finished Create List of Static Device Nodes.
Jan 23 03:14:55 np0005593232 kernel: ACPI: bus type drm_connector registered
Jan 23 03:14:55 np0005593232 systemd: modprobe@configfs.service: Deactivated successfully.
Jan 23 03:14:55 np0005593232 kernel: fuse: init (API version 7.37)
Jan 23 03:14:55 np0005593232 systemd: Finished Load Kernel Module configfs.
Jan 23 03:14:55 np0005593232 systemd: modprobe@drm.service: Deactivated successfully.
Jan 23 03:14:55 np0005593232 systemd: Finished Load Kernel Module drm.
Jan 23 03:14:55 np0005593232 systemd: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 03:14:55 np0005593232 systemd: Finished Load Kernel Module efi_pstore.
Jan 23 03:14:55 np0005593232 systemd: modprobe@fuse.service: Deactivated successfully.
Jan 23 03:14:55 np0005593232 systemd: Finished Load Kernel Module fuse.
Jan 23 03:14:55 np0005593232 systemd: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 23 03:14:55 np0005593232 systemd-journald[683]: Journal started
Jan 23 03:14:55 np0005593232 systemd-journald[683]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 23 03:14:55 np0005593232 systemd[1]: Queued start job for default target Multi-User System.
Jan 23 03:14:55 np0005593232 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 03:14:55 np0005593232 systemd: Started Journal Service.
Jan 23 03:14:55 np0005593232 systemd[1]: Finished Generate network units from Kernel command line.
Jan 23 03:14:55 np0005593232 systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 23 03:14:55 np0005593232 systemd[1]: Finished Apply Kernel Variables.
Jan 23 03:14:55 np0005593232 systemd[1]: Mounting FUSE Control File System...
Jan 23 03:14:55 np0005593232 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 23 03:14:55 np0005593232 systemd[1]: Starting Rebuild Hardware Database...
Jan 23 03:14:55 np0005593232 systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 23 03:14:55 np0005593232 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 03:14:55 np0005593232 systemd[1]: Starting Load/Save OS Random Seed...
Jan 23 03:14:55 np0005593232 systemd[1]: Starting Create System Users...
Jan 23 03:14:55 np0005593232 systemd[1]: Mounted FUSE Control File System.
Jan 23 03:14:55 np0005593232 systemd-journald[683]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 23 03:14:55 np0005593232 systemd-journald[683]: Received client request to flush runtime journal.
Jan 23 03:14:55 np0005593232 systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 23 03:14:55 np0005593232 systemd[1]: Finished Coldplug All udev Devices.
Jan 23 03:14:56 np0005593232 systemd[1]: Finished Create System Users.
Jan 23 03:14:56 np0005593232 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 23 03:14:56 np0005593232 systemd[1]: Finished Load/Save OS Random Seed.
Jan 23 03:14:56 np0005593232 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 23 03:14:56 np0005593232 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 23 03:14:56 np0005593232 systemd[1]: Reached target Preparation for Local File Systems.
Jan 23 03:14:56 np0005593232 systemd[1]: Reached target Local File Systems.
Jan 23 03:14:56 np0005593232 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 23 03:14:56 np0005593232 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 23 03:14:56 np0005593232 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 03:14:56 np0005593232 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 23 03:14:56 np0005593232 systemd[1]: Starting Automatic Boot Loader Update...
Jan 23 03:14:56 np0005593232 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 23 03:14:56 np0005593232 systemd[1]: Starting Create Volatile Files and Directories...
Jan 23 03:14:56 np0005593232 bootctl[699]: Couldn't find EFI system partition, skipping.
Jan 23 03:14:56 np0005593232 systemd[1]: Finished Automatic Boot Loader Update.
Jan 23 03:14:56 np0005593232 systemd[1]: Finished Create Volatile Files and Directories.
Jan 23 03:14:56 np0005593232 systemd[1]: Starting Security Auditing Service...
Jan 23 03:14:56 np0005593232 systemd[1]: Starting RPC Bind...
Jan 23 03:14:56 np0005593232 systemd[1]: Starting Rebuild Journal Catalog...
Jan 23 03:14:56 np0005593232 auditd[705]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 23 03:14:56 np0005593232 auditd[705]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 23 03:14:56 np0005593232 systemd[1]: Started RPC Bind.
Jan 23 03:14:56 np0005593232 systemd[1]: Finished Rebuild Journal Catalog.
Jan 23 03:14:56 np0005593232 augenrules[710]: /sbin/augenrules: No change
Jan 23 03:14:56 np0005593232 augenrules[725]: No rules
Jan 23 03:14:56 np0005593232 augenrules[725]: enabled 1
Jan 23 03:14:56 np0005593232 augenrules[725]: failure 1
Jan 23 03:14:56 np0005593232 augenrules[725]: pid 705
Jan 23 03:14:56 np0005593232 augenrules[725]: rate_limit 0
Jan 23 03:14:56 np0005593232 augenrules[725]: backlog_limit 8192
Jan 23 03:14:56 np0005593232 augenrules[725]: lost 0
Jan 23 03:14:56 np0005593232 augenrules[725]: backlog 0
Jan 23 03:14:56 np0005593232 augenrules[725]: backlog_wait_time 60000
Jan 23 03:14:56 np0005593232 augenrules[725]: backlog_wait_time_actual 0
Jan 23 03:14:56 np0005593232 augenrules[725]: enabled 1
Jan 23 03:14:56 np0005593232 augenrules[725]: failure 1
Jan 23 03:14:56 np0005593232 augenrules[725]: pid 705
Jan 23 03:14:56 np0005593232 augenrules[725]: rate_limit 0
Jan 23 03:14:56 np0005593232 augenrules[725]: backlog_limit 8192
Jan 23 03:14:56 np0005593232 augenrules[725]: lost 0
Jan 23 03:14:56 np0005593232 augenrules[725]: backlog 0
Jan 23 03:14:56 np0005593232 augenrules[725]: backlog_wait_time 60000
Jan 23 03:14:56 np0005593232 augenrules[725]: backlog_wait_time_actual 0
Jan 23 03:14:56 np0005593232 augenrules[725]: enabled 1
Jan 23 03:14:56 np0005593232 augenrules[725]: failure 1
Jan 23 03:14:56 np0005593232 augenrules[725]: pid 705
Jan 23 03:14:56 np0005593232 augenrules[725]: rate_limit 0
Jan 23 03:14:56 np0005593232 augenrules[725]: backlog_limit 8192
Jan 23 03:14:56 np0005593232 augenrules[725]: lost 0
Jan 23 03:14:56 np0005593232 augenrules[725]: backlog 0
Jan 23 03:14:56 np0005593232 augenrules[725]: backlog_wait_time 60000
Jan 23 03:14:56 np0005593232 augenrules[725]: backlog_wait_time_actual 0
Jan 23 03:14:56 np0005593232 systemd[1]: Started Security Auditing Service.
Jan 23 03:14:56 np0005593232 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 23 03:14:56 np0005593232 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 23 03:14:57 np0005593232 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 23 03:14:57 np0005593232 systemd[1]: Finished Rebuild Hardware Database.
Jan 23 03:14:57 np0005593232 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 23 03:14:57 np0005593232 systemd[1]: Starting Update is Completed...
Jan 23 03:14:57 np0005593232 systemd[1]: Finished Update is Completed.
Jan 23 03:14:57 np0005593232 systemd-udevd[733]: Using default interface naming scheme 'rhel-9.0'.
Jan 23 03:14:57 np0005593232 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 23 03:14:57 np0005593232 systemd[1]: Reached target System Initialization.
Jan 23 03:14:57 np0005593232 systemd[1]: Started dnf makecache --timer.
Jan 23 03:14:57 np0005593232 systemd[1]: Started Daily rotation of log files.
Jan 23 03:14:57 np0005593232 systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 23 03:14:57 np0005593232 systemd[1]: Reached target Timer Units.
Jan 23 03:14:57 np0005593232 systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 23 03:14:57 np0005593232 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 23 03:14:57 np0005593232 systemd[1]: Reached target Socket Units.
Jan 23 03:14:57 np0005593232 systemd[1]: Starting D-Bus System Message Bus...
Jan 23 03:14:57 np0005593232 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 23 03:14:57 np0005593232 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 23 03:14:57 np0005593232 systemd[1]: Starting Load Kernel Module configfs...
Jan 23 03:14:57 np0005593232 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 03:14:57 np0005593232 systemd[1]: Finished Load Kernel Module configfs.
Jan 23 03:14:57 np0005593232 systemd-udevd[736]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 03:14:57 np0005593232 systemd[1]: Started D-Bus System Message Bus.
Jan 23 03:14:57 np0005593232 systemd[1]: Reached target Basic System.
Jan 23 03:14:57 np0005593232 dbus-broker-lau[750]: Ready
Jan 23 03:14:58 np0005593232 systemd[1]: Starting NTP client/server...
Jan 23 03:14:58 np0005593232 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 23 03:14:58 np0005593232 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 23 03:14:58 np0005593232 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 23 03:14:58 np0005593232 systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 23 03:14:58 np0005593232 chronyd[785]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 23 03:14:58 np0005593232 chronyd[785]: Loaded 0 symmetric keys
Jan 23 03:14:58 np0005593232 chronyd[785]: Using right/UTC timezone to obtain leap second data
Jan 23 03:14:58 np0005593232 chronyd[785]: Loaded seccomp filter (level 2)
Jan 23 03:14:58 np0005593232 systemd[1]: Starting IPv4 firewall with iptables...
Jan 23 03:14:58 np0005593232 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 23 03:14:58 np0005593232 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 23 03:14:58 np0005593232 systemd[1]: Started irqbalance daemon.
Jan 23 03:14:58 np0005593232 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 23 03:14:58 np0005593232 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 23 03:14:58 np0005593232 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 23 03:14:58 np0005593232 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 23 03:14:58 np0005593232 systemd[1]: Reached target sshd-keygen.target.
Jan 23 03:14:58 np0005593232 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 23 03:14:58 np0005593232 systemd[1]: Reached target User and Group Name Lookups.
Jan 23 03:14:58 np0005593232 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 23 03:14:58 np0005593232 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 23 03:14:58 np0005593232 kernel: Console: switching to colour dummy device 80x25
Jan 23 03:14:58 np0005593232 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 23 03:14:58 np0005593232 kernel: [drm] features: -context_init
Jan 23 03:14:58 np0005593232 kernel: [drm] number of scanouts: 1
Jan 23 03:14:58 np0005593232 kernel: [drm] number of cap sets: 0
Jan 23 03:14:58 np0005593232 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 23 03:14:58 np0005593232 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 23 03:14:58 np0005593232 kernel: Console: switching to colour frame buffer device 128x48
Jan 23 03:14:58 np0005593232 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 23 03:14:58 np0005593232 kernel: kvm_amd: TSC scaling supported
Jan 23 03:14:58 np0005593232 kernel: kvm_amd: Nested Virtualization enabled
Jan 23 03:14:58 np0005593232 kernel: kvm_amd: Nested Paging enabled
Jan 23 03:14:58 np0005593232 kernel: kvm_amd: LBR virtualization supported
Jan 23 03:14:58 np0005593232 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 23 03:14:58 np0005593232 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 23 03:14:58 np0005593232 systemd[1]: Starting User Login Management...
Jan 23 03:14:58 np0005593232 systemd[1]: Started NTP client/server.
Jan 23 03:14:58 np0005593232 systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 23 03:14:58 np0005593232 systemd-logind[808]: New seat seat0.
Jan 23 03:14:58 np0005593232 systemd-logind[808]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 23 03:14:58 np0005593232 systemd-logind[808]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 23 03:14:58 np0005593232 systemd[1]: Started User Login Management.
Jan 23 03:14:58 np0005593232 iptables.init[786]: iptables: Applying firewall rules: [  OK  ]
Jan 23 03:14:58 np0005593232 systemd[1]: Finished IPv4 firewall with iptables.
Jan 23 03:14:59 np0005593232 cloud-init[842]: Cloud-init v. 24.4-8.el9 running 'init-local' at Fri, 23 Jan 2026 08:14:59 +0000. Up 12.84 seconds.
Jan 23 03:14:59 np0005593232 systemd[1]: run-cloud\x2dinit-tmp-tmp7npb2hi8.mount: Deactivated successfully.
Jan 23 03:14:59 np0005593232 systemd[1]: Starting Hostname Service...
Jan 23 03:14:59 np0005593232 systemd[1]: Started Hostname Service.
Jan 23 03:14:59 np0005593232 systemd-hostnamed[856]: Hostname set to <np0005593232.novalocal> (static)
Jan 23 03:15:00 np0005593232 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 23 03:15:00 np0005593232 systemd[1]: Reached target Preparation for Network.
Jan 23 03:15:00 np0005593232 systemd[1]: Starting Network Manager...
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.0811] NetworkManager (version 1.54.3-2.el9) is starting... (boot:a77cb431-adbb-4a02-bdb3-9bbe07725eca)
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.0817] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.0910] manager[0x562eef54e000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.0950] hostname: hostname: using hostnamed
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.0951] hostname: static hostname changed from (none) to "np0005593232.novalocal"
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.0955] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1157] manager[0x562eef54e000]: rfkill: Wi-Fi hardware radio set enabled
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1157] manager[0x562eef54e000]: rfkill: WWAN hardware radio set enabled
Jan 23 03:15:00 np0005593232 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1198] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1199] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1199] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1200] manager: Networking is enabled by state file
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1202] settings: Loaded settings plugin: keyfile (internal)
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1215] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1234] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1244] dhcp: init: Using DHCP client 'internal'
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1246] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1259] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1265] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1272] device (lo): Activation: starting connection 'lo' (42068fe1-a1cc-4ff6-98c6-929f9f15c702)
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1280] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1282] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1317] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1321] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1323] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1324] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1326] device (eth0): carrier: link connected
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1329] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1334] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1340] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1343] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1344] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 03:15:00 np0005593232 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1346] manager: NetworkManager state is now CONNECTING
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1347] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1354] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1357] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 23 03:15:00 np0005593232 systemd[1]: Started Network Manager.
Jan 23 03:15:00 np0005593232 systemd[1]: Reached target Network.
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1398] dhcp4 (eth0): state changed new lease, address=38.102.83.174
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1406] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1423] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 03:15:00 np0005593232 systemd[1]: Starting Network Manager Wait Online...
Jan 23 03:15:00 np0005593232 systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 23 03:15:00 np0005593232 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1587] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1589] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1594] device (lo): Activation: successful, device activated.
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1601] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1602] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1604] manager: NetworkManager state is now CONNECTED_SITE
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1607] device (eth0): Activation: successful, device activated.
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1611] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 23 03:15:00 np0005593232 NetworkManager[860]: <info>  [1769156100.1613] manager: startup complete
Jan 23 03:15:00 np0005593232 systemd[1]: Started GSSAPI Proxy Daemon.
Jan 23 03:15:00 np0005593232 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 23 03:15:00 np0005593232 systemd[1]: Reached target NFS client services.
Jan 23 03:15:00 np0005593232 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 23 03:15:00 np0005593232 systemd[1]: Reached target Remote File Systems.
Jan 23 03:15:00 np0005593232 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 23 03:15:00 np0005593232 systemd[1]: Finished Network Manager Wait Online.
Jan 23 03:15:00 np0005593232 systemd[1]: Starting Cloud-init: Network Stage...
Jan 23 03:15:00 np0005593232 cloud-init[923]: Cloud-init v. 24.4-8.el9 running 'init' at Fri, 23 Jan 2026 08:15:00 +0000. Up 14.03 seconds.
Jan 23 03:15:00 np0005593232 cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 23 03:15:00 np0005593232 cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 23 03:15:00 np0005593232 cloud-init[923]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 23 03:15:00 np0005593232 cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 23 03:15:00 np0005593232 cloud-init[923]: ci-info: |  eth0  | True |        38.102.83.174         | 255.255.255.0 | global | fa:16:3e:d8:8f:1f |
Jan 23 03:15:00 np0005593232 cloud-init[923]: ci-info: |  eth0  | True | fe80::f816:3eff:fed8:8f1f/64 |       .       |  link  | fa:16:3e:d8:8f:1f |
Jan 23 03:15:00 np0005593232 cloud-init[923]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 23 03:15:00 np0005593232 cloud-init[923]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 23 03:15:00 np0005593232 cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 23 03:15:00 np0005593232 cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 23 03:15:00 np0005593232 cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 23 03:15:00 np0005593232 cloud-init[923]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 23 03:15:00 np0005593232 cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 23 03:15:00 np0005593232 cloud-init[923]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 23 03:15:00 np0005593232 cloud-init[923]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 23 03:15:00 np0005593232 cloud-init[923]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 23 03:15:00 np0005593232 cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 23 03:15:00 np0005593232 cloud-init[923]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 23 03:15:00 np0005593232 cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 23 03:15:00 np0005593232 cloud-init[923]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 23 03:15:00 np0005593232 cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 23 03:15:00 np0005593232 cloud-init[923]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 23 03:15:00 np0005593232 cloud-init[923]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 23 03:15:00 np0005593232 cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 23 03:15:02 np0005593232 cloud-init[923]: Generating public/private rsa key pair.
Jan 23 03:15:02 np0005593232 cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 23 03:15:02 np0005593232 cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 23 03:15:02 np0005593232 cloud-init[923]: The key fingerprint is:
Jan 23 03:15:02 np0005593232 cloud-init[923]: SHA256:oZzNfqcehJ5yz1o9zFkSJ64qjC+O368II1PTha/tZFM root@np0005593232.novalocal
Jan 23 03:15:02 np0005593232 cloud-init[923]: The key's randomart image is:
Jan 23 03:15:02 np0005593232 cloud-init[923]: +---[RSA 3072]----+
Jan 23 03:15:02 np0005593232 cloud-init[923]: |                 |
Jan 23 03:15:02 np0005593232 cloud-init[923]: |     .           |
Jan 23 03:15:02 np0005593232 cloud-init[923]: |    . . .   o .  |
Jan 23 03:15:02 np0005593232 cloud-init[923]: |   . + = o . +   |
Jan 23 03:15:02 np0005593232 cloud-init[923]: |  o . = E . o .  |
Jan 23 03:15:02 np0005593232 cloud-init[923]: | . . o + o = +   |
Jan 23 03:15:02 np0005593232 cloud-init[923]: |o o .o* = = B    |
Jan 23 03:15:02 np0005593232 cloud-init[923]: | o +o*o+ * + .   |
Jan 23 03:15:02 np0005593232 cloud-init[923]: |  .o+o=++o=      |
Jan 23 03:15:02 np0005593232 cloud-init[923]: +----[SHA256]-----+
Jan 23 03:15:02 np0005593232 cloud-init[923]: Generating public/private ecdsa key pair.
Jan 23 03:15:02 np0005593232 cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 23 03:15:02 np0005593232 cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 23 03:15:02 np0005593232 cloud-init[923]: The key fingerprint is:
Jan 23 03:15:02 np0005593232 cloud-init[923]: SHA256:p0exAG4I6JVCq6S5eXP+FASHtOTyH8B/O//njq/GhTw root@np0005593232.novalocal
Jan 23 03:15:02 np0005593232 cloud-init[923]: The key's randomart image is:
Jan 23 03:15:02 np0005593232 cloud-init[923]: +---[ECDSA 256]---+
Jan 23 03:15:02 np0005593232 cloud-init[923]: |.o..=.o          |
Jan 23 03:15:02 np0005593232 cloud-init[923]: |...B.* .         |
Jan 23 03:15:02 np0005593232 cloud-init[923]: |.o+ * + . .      |
Jan 23 03:15:02 np0005593232 cloud-init[923]: |+o o =   . o     |
Jan 23 03:15:02 np0005593232 cloud-init[923]: |+   . + S +. .   |
Jan 23 03:15:02 np0005593232 cloud-init[923]: | o   . + =  E .  |
Jan 23 03:15:02 np0005593232 cloud-init[923]: |o o . o + .. o   |
Jan 23 03:15:02 np0005593232 cloud-init[923]: | . + .   +  o..  |
Jan 23 03:15:02 np0005593232 cloud-init[923]: |    ...   .o+*+  |
Jan 23 03:15:02 np0005593232 cloud-init[923]: +----[SHA256]-----+
Jan 23 03:15:02 np0005593232 cloud-init[923]: Generating public/private ed25519 key pair.
Jan 23 03:15:02 np0005593232 cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 23 03:15:02 np0005593232 cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 23 03:15:02 np0005593232 cloud-init[923]: The key fingerprint is:
Jan 23 03:15:02 np0005593232 cloud-init[923]: SHA256:+o1sqPDCJx7fBt+VxC+vaHT3BMoiSfEz1QRkxRydmms root@np0005593232.novalocal
Jan 23 03:15:02 np0005593232 cloud-init[923]: The key's randomart image is:
Jan 23 03:15:02 np0005593232 cloud-init[923]: +--[ED25519 256]--+
Jan 23 03:15:02 np0005593232 cloud-init[923]: |         .+B+o . |
Jan 23 03:15:02 np0005593232 cloud-init[923]: |      .  .. + o  |
Jan 23 03:15:02 np0005593232 cloud-init[923]: |       o o   o   |
Jan 23 03:15:02 np0005593232 cloud-init[923]: |      . + o +    |
Jan 23 03:15:02 np0005593232 cloud-init[923]: |     . .S= + o   |
Jan 23 03:15:02 np0005593232 cloud-init[923]: |    . o.o B E .  |
Jan 23 03:15:02 np0005593232 cloud-init[923]: | .o  o.= + = o   |
Jan 23 03:15:02 np0005593232 cloud-init[923]: |  +=..+o++  . .  |
Jan 23 03:15:02 np0005593232 cloud-init[923]: | ..++o.o= o.     |
Jan 23 03:15:02 np0005593232 cloud-init[923]: +----[SHA256]-----+
Jan 23 03:15:02 np0005593232 systemd[1]: Finished Cloud-init: Network Stage.
Jan 23 03:15:02 np0005593232 systemd[1]: Reached target Cloud-config availability.
Jan 23 03:15:02 np0005593232 systemd[1]: Reached target Network is Online.
Jan 23 03:15:02 np0005593232 systemd[1]: Starting Cloud-init: Config Stage...
Jan 23 03:15:02 np0005593232 systemd[1]: Starting Crash recovery kernel arming...
Jan 23 03:15:02 np0005593232 systemd[1]: Starting Notify NFS peers of a restart...
Jan 23 03:15:02 np0005593232 systemd[1]: Starting System Logging Service...
Jan 23 03:15:02 np0005593232 sm-notify[1007]: Version 2.5.4 starting
Jan 23 03:15:02 np0005593232 systemd[1]: Starting OpenSSH server daemon...
Jan 23 03:15:02 np0005593232 systemd[1]: Starting Permit User Sessions...
Jan 23 03:15:02 np0005593232 systemd[1]: Started Notify NFS peers of a restart.
Jan 23 03:15:02 np0005593232 systemd[1]: Started OpenSSH server daemon.
Jan 23 03:15:02 np0005593232 systemd[1]: Finished Permit User Sessions.
Jan 23 03:15:02 np0005593232 systemd[1]: Started Command Scheduler.
Jan 23 03:15:02 np0005593232 systemd[1]: Started Getty on tty1.
Jan 23 03:15:02 np0005593232 systemd[1]: Started Serial Getty on ttyS0.
Jan 23 03:15:02 np0005593232 systemd[1]: Reached target Login Prompts.
Jan 23 03:15:02 np0005593232 rsyslogd[1008]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1008" x-info="https://www.rsyslog.com"] start
Jan 23 03:15:02 np0005593232 rsyslogd[1008]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 23 03:15:02 np0005593232 systemd[1]: Started System Logging Service.
Jan 23 03:15:02 np0005593232 systemd[1]: Reached target Multi-User System.
Jan 23 03:15:02 np0005593232 systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 23 03:15:02 np0005593232 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 23 03:15:02 np0005593232 systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 23 03:15:02 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 03:15:02 np0005593232 kdumpctl[1022]: kdump: No kdump initial ramdisk found.
Jan 23 03:15:02 np0005593232 kdumpctl[1022]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 23 03:15:02 np0005593232 cloud-init[1217]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Fri, 23 Jan 2026 08:15:02 +0000. Up 15.97 seconds.
Jan 23 03:15:02 np0005593232 systemd[1]: Finished Cloud-init: Config Stage.
Jan 23 03:15:02 np0005593232 systemd[1]: Starting Cloud-init: Final Stage...
Jan 23 03:15:02 np0005593232 dracut[1287]: dracut-057-102.git20250818.el9
Jan 23 03:15:02 np0005593232 dracut[1289]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 23 03:15:03 np0005593232 dracut[1289]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 23 03:15:03 np0005593232 dracut[1289]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 23 03:15:03 np0005593232 dracut[1289]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 23 03:15:03 np0005593232 dracut[1289]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 23 03:15:03 np0005593232 dracut[1289]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 23 03:15:03 np0005593232 dracut[1289]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 23 03:15:03 np0005593232 dracut[1289]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 23 03:15:03 np0005593232 dracut[1289]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 23 03:15:03 np0005593232 dracut[1289]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 23 03:15:03 np0005593232 cloud-init[1468]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Fri, 23 Jan 2026 08:15:03 +0000. Up 16.92 seconds.
Jan 23 03:15:03 np0005593232 dracut[1289]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 23 03:15:03 np0005593232 dracut[1289]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 23 03:15:03 np0005593232 dracut[1289]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 23 03:15:03 np0005593232 dracut[1289]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 23 03:15:03 np0005593232 dracut[1289]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 23 03:15:03 np0005593232 dracut[1289]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 23 03:15:03 np0005593232 dracut[1289]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 23 03:15:03 np0005593232 dracut[1289]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 23 03:15:03 np0005593232 dracut[1289]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 23 03:15:03 np0005593232 cloud-init[1517]: #############################################################
Jan 23 03:15:03 np0005593232 dracut[1289]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 23 03:15:03 np0005593232 cloud-init[1521]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 23 03:15:03 np0005593232 cloud-init[1530]: 256 SHA256:p0exAG4I6JVCq6S5eXP+FASHtOTyH8B/O//njq/GhTw root@np0005593232.novalocal (ECDSA)
Jan 23 03:15:03 np0005593232 dracut[1289]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 23 03:15:03 np0005593232 dracut[1289]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 23 03:15:03 np0005593232 cloud-init[1537]: 256 SHA256:+o1sqPDCJx7fBt+VxC+vaHT3BMoiSfEz1QRkxRydmms root@np0005593232.novalocal (ED25519)
Jan 23 03:15:03 np0005593232 cloud-init[1543]: 3072 SHA256:oZzNfqcehJ5yz1o9zFkSJ64qjC+O368II1PTha/tZFM root@np0005593232.novalocal (RSA)
Jan 23 03:15:03 np0005593232 cloud-init[1546]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 23 03:15:03 np0005593232 cloud-init[1548]: #############################################################
Jan 23 03:15:03 np0005593232 cloud-init[1468]: Cloud-init v. 24.4-8.el9 finished at Fri, 23 Jan 2026 08:15:03 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 17.27 seconds
Jan 23 03:15:03 np0005593232 dracut[1289]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 23 03:15:03 np0005593232 dracut[1289]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 23 03:15:03 np0005593232 dracut[1289]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 23 03:15:04 np0005593232 systemd[1]: Finished Cloud-init: Final Stage.
Jan 23 03:15:04 np0005593232 systemd[1]: Reached target Cloud-init target.
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: memstrack is not available
Jan 23 03:15:04 np0005593232 dracut[1289]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 23 03:15:04 np0005593232 dracut[1289]: memstrack is not available
Jan 23 03:15:04 np0005593232 dracut[1289]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 23 03:15:04 np0005593232 dracut[1289]: *** Including module: systemd ***
Jan 23 03:15:04 np0005593232 dracut[1289]: *** Including module: fips ***
Jan 23 03:15:05 np0005593232 chronyd[785]: Selected source 167.160.187.179 (2.centos.pool.ntp.org)
Jan 23 03:15:05 np0005593232 chronyd[785]: System clock TAI offset set to 37 seconds
Jan 23 03:15:05 np0005593232 dracut[1289]: *** Including module: systemd-initrd ***
Jan 23 03:15:05 np0005593232 dracut[1289]: *** Including module: i18n ***
Jan 23 03:15:05 np0005593232 dracut[1289]: *** Including module: drm ***
Jan 23 03:15:05 np0005593232 dracut[1289]: *** Including module: prefixdevname ***
Jan 23 03:15:05 np0005593232 dracut[1289]: *** Including module: kernel-modules ***
Jan 23 03:15:05 np0005593232 kernel: block vda: the capability attribute has been deprecated.
Jan 23 03:15:06 np0005593232 dracut[1289]: *** Including module: kernel-modules-extra ***
Jan 23 03:15:06 np0005593232 dracut[1289]: *** Including module: qemu ***
Jan 23 03:15:06 np0005593232 dracut[1289]: *** Including module: fstab-sys ***
Jan 23 03:15:06 np0005593232 dracut[1289]: *** Including module: rootfs-block ***
Jan 23 03:15:06 np0005593232 chronyd[785]: Selected source 54.39.23.64 (2.centos.pool.ntp.org)
Jan 23 03:15:06 np0005593232 dracut[1289]: *** Including module: terminfo ***
Jan 23 03:15:06 np0005593232 dracut[1289]: *** Including module: udev-rules ***
Jan 23 03:15:07 np0005593232 dracut[1289]: Skipping udev rule: 91-permissions.rules
Jan 23 03:15:07 np0005593232 dracut[1289]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 23 03:15:07 np0005593232 dracut[1289]: *** Including module: virtiofs ***
Jan 23 03:15:07 np0005593232 dracut[1289]: *** Including module: dracut-systemd ***
Jan 23 03:15:07 np0005593232 dracut[1289]: *** Including module: usrmount ***
Jan 23 03:15:07 np0005593232 dracut[1289]: *** Including module: base ***
Jan 23 03:15:07 np0005593232 dracut[1289]: *** Including module: fs-lib ***
Jan 23 03:15:07 np0005593232 dracut[1289]: *** Including module: kdumpbase ***
Jan 23 03:15:07 np0005593232 dracut[1289]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 23 03:15:07 np0005593232 dracut[1289]:  microcode_ctl module: mangling fw_dir
Jan 23 03:15:07 np0005593232 dracut[1289]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 23 03:15:07 np0005593232 dracut[1289]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 23 03:15:07 np0005593232 dracut[1289]:    microcode_ctl: configuration "intel" is ignored
Jan 23 03:15:07 np0005593232 dracut[1289]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 23 03:15:07 np0005593232 dracut[1289]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 23 03:15:07 np0005593232 dracut[1289]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 23 03:15:07 np0005593232 dracut[1289]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 23 03:15:07 np0005593232 dracut[1289]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 23 03:15:08 np0005593232 dracut[1289]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 23 03:15:08 np0005593232 dracut[1289]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 23 03:15:08 np0005593232 dracut[1289]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 23 03:15:08 np0005593232 dracut[1289]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 23 03:15:08 np0005593232 dracut[1289]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 23 03:15:08 np0005593232 dracut[1289]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 23 03:15:08 np0005593232 dracut[1289]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 23 03:15:08 np0005593232 dracut[1289]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 23 03:15:08 np0005593232 dracut[1289]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 23 03:15:08 np0005593232 dracut[1289]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 23 03:15:08 np0005593232 dracut[1289]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 23 03:15:08 np0005593232 dracut[1289]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 23 03:15:08 np0005593232 dracut[1289]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 23 03:15:08 np0005593232 dracut[1289]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 23 03:15:08 np0005593232 dracut[1289]: *** Including module: openssl ***
Jan 23 03:15:08 np0005593232 dracut[1289]: *** Including module: shutdown ***
Jan 23 03:15:08 np0005593232 dracut[1289]: *** Including module: squash ***
Jan 23 03:15:08 np0005593232 dracut[1289]: *** Including modules done ***
Jan 23 03:15:08 np0005593232 dracut[1289]: *** Installing kernel module dependencies ***
Jan 23 03:15:08 np0005593232 irqbalance[796]: Cannot change IRQ 35 affinity: Operation not permitted
Jan 23 03:15:08 np0005593232 irqbalance[796]: IRQ 35 affinity is now unmanaged
Jan 23 03:15:08 np0005593232 irqbalance[796]: Cannot change IRQ 33 affinity: Operation not permitted
Jan 23 03:15:08 np0005593232 irqbalance[796]: IRQ 33 affinity is now unmanaged
Jan 23 03:15:08 np0005593232 irqbalance[796]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 23 03:15:08 np0005593232 irqbalance[796]: IRQ 31 affinity is now unmanaged
Jan 23 03:15:08 np0005593232 irqbalance[796]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 23 03:15:08 np0005593232 irqbalance[796]: IRQ 28 affinity is now unmanaged
Jan 23 03:15:08 np0005593232 irqbalance[796]: Cannot change IRQ 34 affinity: Operation not permitted
Jan 23 03:15:08 np0005593232 irqbalance[796]: IRQ 34 affinity is now unmanaged
Jan 23 03:15:08 np0005593232 irqbalance[796]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 23 03:15:08 np0005593232 irqbalance[796]: IRQ 32 affinity is now unmanaged
Jan 23 03:15:08 np0005593232 irqbalance[796]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 23 03:15:08 np0005593232 irqbalance[796]: IRQ 30 affinity is now unmanaged
Jan 23 03:15:08 np0005593232 irqbalance[796]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 23 03:15:08 np0005593232 irqbalance[796]: IRQ 29 affinity is now unmanaged
Jan 23 03:15:09 np0005593232 dracut[1289]: *** Installing kernel module dependencies done ***
Jan 23 03:15:09 np0005593232 dracut[1289]: *** Resolving executable dependencies ***
Jan 23 03:15:10 np0005593232 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 23 03:15:10 np0005593232 dracut[1289]: *** Resolving executable dependencies done ***
Jan 23 03:15:10 np0005593232 dracut[1289]: *** Generating early-microcode cpio image ***
Jan 23 03:15:10 np0005593232 dracut[1289]: *** Store current command line parameters ***
Jan 23 03:15:10 np0005593232 dracut[1289]: Stored kernel commandline:
Jan 23 03:15:10 np0005593232 dracut[1289]: No dracut internal kernel commandline stored in the initramfs
Jan 23 03:15:11 np0005593232 dracut[1289]: *** Install squash loader ***
Jan 23 03:15:11 np0005593232 dracut[1289]: *** Squashing the files inside the initramfs ***
Jan 23 03:15:13 np0005593232 dracut[1289]: *** Squashing the files inside the initramfs done ***
Jan 23 03:15:13 np0005593232 dracut[1289]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 23 03:15:13 np0005593232 dracut[1289]: *** Hardlinking files ***
Jan 23 03:15:13 np0005593232 dracut[1289]: *** Hardlinking files done ***
Jan 23 03:15:13 np0005593232 dracut[1289]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 23 03:15:15 np0005593232 kdumpctl[1022]: kdump: kexec: loaded kdump kernel
Jan 23 03:15:15 np0005593232 kdumpctl[1022]: kdump: Starting kdump: [OK]
Jan 23 03:15:15 np0005593232 systemd[1]: Finished Crash recovery kernel arming.
Jan 23 03:15:15 np0005593232 systemd[1]: Startup finished in 3.315s (kernel) + 4.968s (initrd) + 20.302s (userspace) = 28.585s.
Jan 23 03:15:30 np0005593232 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 03:16:11 np0005593232 systemd[1]: Created slice User Slice of UID 1000.
Jan 23 03:16:11 np0005593232 systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 23 03:16:11 np0005593232 systemd-logind[808]: New session 1 of user zuul.
Jan 23 03:16:11 np0005593232 systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 23 03:16:11 np0005593232 systemd[1]: Starting User Manager for UID 1000...
Jan 23 03:16:11 np0005593232 systemd[4313]: Queued start job for default target Main User Target.
Jan 23 03:16:12 np0005593232 systemd[4313]: Created slice User Application Slice.
Jan 23 03:16:12 np0005593232 systemd[4313]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 23 03:16:12 np0005593232 systemd[4313]: Started Daily Cleanup of User's Temporary Directories.
Jan 23 03:16:12 np0005593232 systemd[4313]: Reached target Paths.
Jan 23 03:16:12 np0005593232 systemd[4313]: Reached target Timers.
Jan 23 03:16:12 np0005593232 systemd[4313]: Starting D-Bus User Message Bus Socket...
Jan 23 03:16:12 np0005593232 systemd[4313]: Starting Create User's Volatile Files and Directories...
Jan 23 03:16:12 np0005593232 systemd[4313]: Listening on D-Bus User Message Bus Socket.
Jan 23 03:16:12 np0005593232 systemd[4313]: Reached target Sockets.
Jan 23 03:16:12 np0005593232 systemd[4313]: Finished Create User's Volatile Files and Directories.
Jan 23 03:16:12 np0005593232 systemd[4313]: Reached target Basic System.
Jan 23 03:16:12 np0005593232 systemd[4313]: Reached target Main User Target.
Jan 23 03:16:12 np0005593232 systemd[4313]: Startup finished in 128ms.
Jan 23 03:16:12 np0005593232 systemd[1]: Started User Manager for UID 1000.
Jan 23 03:16:12 np0005593232 systemd[1]: Started Session 1 of User zuul.
Jan 23 03:16:12 np0005593232 python3[4396]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 03:16:17 np0005593232 python3[4424]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 03:16:30 np0005593232 python3[4482]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 03:16:32 np0005593232 python3[4522]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 23 03:16:34 np0005593232 python3[4548]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9EiUM/Wr1dKB5RqtCvSqVc226WpmY1hGtuMMJuG9WgOJFIcFQvrVt53c29e+5OTV1q1e4Rmvj7g1SyULPxSS7/DtzxyTWY79kkBpxNFmAUeUA4U0adsVRTquvEONpBa4UE3bqCTtaRQa8XED98xqS4bCBXmbcLROlQ4Qc91Uj3wxKY4/fplPdXYZdZXz3cxwEsyC6dRkYcfiUSowlrmecr3FZO6SJfG9H4YFxzwAu1R4led86PwzjZJyHfDeIHcdaDUVFcX2hGQv9iIqgYP58aTb2gRp2PxSQJfGAevolpgA3xrQKo2uBDBuRTC/hE81toPd5IIPQ3lX2JDXxauMMbmmxSjYCltaP2/bcvZ697yZh1vEmyz62itMHt6GV69XsjsX5jHWhY2RtQ6ZpsNSqrOHSUj4jlPcZEFk+4UshKJJZNaM1psuS+KAGeodosF43EuKDbWMGeqCe/kwZaBXj/Xxob+rLcVQBMVOBq+EHuNNKxSIqaNZiMz0RBf11CUk= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:34 np0005593232 python3[4572]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:16:35 np0005593232 python3[4671]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 03:16:35 np0005593232 python3[4742]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769156194.7982419-251-97163449527158/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=87e5a27e13f74e79b84f2ddd13a58bce_id_rsa follow=False checksum=40e82d1acb27268baed51ce64c7c4dfd80f45a5d backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:16:36 np0005593232 python3[4865]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 03:16:36 np0005593232 python3[4936]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769156196.1101189-306-5708075619197/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=87e5a27e13f74e79b84f2ddd13a58bce_id_rsa.pub follow=False checksum=1c043d58f1d2a49f415267f4f9437247d6d980d7 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:16:38 np0005593232 python3[4984]: ansible-ping Invoked with data=pong
Jan 23 03:16:39 np0005593232 python3[5008]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 03:16:41 np0005593232 python3[5066]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 23 03:16:42 np0005593232 python3[5098]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:16:42 np0005593232 python3[5122]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:16:42 np0005593232 python3[5146]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:16:43 np0005593232 python3[5170]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:16:43 np0005593232 python3[5194]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:16:43 np0005593232 python3[5218]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:16:45 np0005593232 python3[5244]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:16:46 np0005593232 python3[5322]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 03:16:46 np0005593232 python3[5395]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769156205.7252612-31-220606413246617/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:16:47 np0005593232 python3[5443]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:47 np0005593232 python3[5467]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:47 np0005593232 python3[5491]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:48 np0005593232 python3[5515]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:48 np0005593232 python3[5539]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:48 np0005593232 python3[5563]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:48 np0005593232 python3[5587]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:49 np0005593232 python3[5611]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:49 np0005593232 python3[5635]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:49 np0005593232 python3[5659]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:50 np0005593232 python3[5683]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:50 np0005593232 python3[5707]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:50 np0005593232 python3[5731]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:51 np0005593232 python3[5755]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:51 np0005593232 python3[5779]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:51 np0005593232 python3[5803]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:52 np0005593232 python3[5827]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:52 np0005593232 python3[5851]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:52 np0005593232 python3[5875]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:52 np0005593232 python3[5899]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:53 np0005593232 python3[5923]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:53 np0005593232 python3[5947]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:53 np0005593232 python3[5971]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:54 np0005593232 python3[5995]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:54 np0005593232 python3[6019]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:54 np0005593232 python3[6043]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:16:56 np0005593232 python3[6069]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 23 03:16:56 np0005593232 systemd[1]: Starting Time & Date Service...
Jan 23 03:16:56 np0005593232 systemd[1]: Started Time & Date Service.
Jan 23 03:16:56 np0005593232 systemd-timedated[6071]: Changed time zone to 'UTC' (UTC).
Jan 23 03:16:58 np0005593232 python3[6100]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:16:58 np0005593232 python3[6176]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 03:16:59 np0005593232 python3[6247]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769156218.4417949-251-50786119504110/source _original_basename=tmpufqrd4nz follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:16:59 np0005593232 python3[6347]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 03:16:59 np0005593232 python3[6418]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769156219.27171-301-31451190153062/source _original_basename=tmpd6dgcj6k follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:17:00 np0005593232 python3[6520]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 03:17:01 np0005593232 python3[6593]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769156220.4773722-381-209504642867284/source _original_basename=tmp4soyw1vi follow=False checksum=4443522d106e75a5e1be95297fe05ddba04454bc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:17:01 np0005593232 python3[6641]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:17:01 np0005593232 python3[6667]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:17:02 np0005593232 python3[6747]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 03:17:02 np0005593232 python3[6820]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769156222.175092-451-181135635526605/source _original_basename=tmpvkj7k5gm follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:17:03 np0005593232 python3[6871]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-51e8-11f9-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:17:04 np0005593232 python3[6899]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-51e8-11f9-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 23 03:17:05 np0005593232 python3[6927]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:17:15 np0005593232 chronyd[785]: Selected source 23.159.16.194 (2.centos.pool.ntp.org)
Jan 23 03:17:26 np0005593232 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 23 03:17:37 np0005593232 python3[6957]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:18:20 np0005593232 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 23 03:18:20 np0005593232 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 23 03:18:20 np0005593232 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 23 03:18:20 np0005593232 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 23 03:18:20 np0005593232 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 23 03:18:20 np0005593232 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 23 03:18:20 np0005593232 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 23 03:18:20 np0005593232 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 23 03:18:20 np0005593232 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 23 03:18:20 np0005593232 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 23 03:18:20 np0005593232 NetworkManager[860]: <info>  [1769156300.6243] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 23 03:18:20 np0005593232 systemd-udevd[6958]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 03:18:20 np0005593232 NetworkManager[860]: <info>  [1769156300.6457] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 03:18:20 np0005593232 NetworkManager[860]: <info>  [1769156300.6493] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 23 03:18:20 np0005593232 NetworkManager[860]: <info>  [1769156300.6497] device (eth1): carrier: link connected
Jan 23 03:18:20 np0005593232 NetworkManager[860]: <info>  [1769156300.6499] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 23 03:18:20 np0005593232 NetworkManager[860]: <info>  [1769156300.6505] policy: auto-activating connection 'Wired connection 1' (243170e2-823e-31e8-b127-988fe6a5c415)
Jan 23 03:18:20 np0005593232 NetworkManager[860]: <info>  [1769156300.6509] device (eth1): Activation: starting connection 'Wired connection 1' (243170e2-823e-31e8-b127-988fe6a5c415)
Jan 23 03:18:20 np0005593232 NetworkManager[860]: <info>  [1769156300.6510] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 03:18:20 np0005593232 NetworkManager[860]: <info>  [1769156300.6513] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 03:18:20 np0005593232 NetworkManager[860]: <info>  [1769156300.6517] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 03:18:20 np0005593232 NetworkManager[860]: <info>  [1769156300.6522] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 23 03:18:20 np0005593232 systemd[4313]: Starting Mark boot as successful...
Jan 23 03:18:20 np0005593232 systemd[4313]: Finished Mark boot as successful.
Jan 23 03:18:21 np0005593232 python3[6986]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-ea81-0856-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:18:31 np0005593232 python3[7066]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 03:18:32 np0005593232 python3[7139]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769156311.5782044-104-248148584064715/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=4329afcdd269ac4f04cecf68e145c863d7893e52 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:18:33 np0005593232 python3[7189]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 03:18:33 np0005593232 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 23 03:18:33 np0005593232 systemd[1]: Stopped Network Manager Wait Online.
Jan 23 03:18:33 np0005593232 systemd[1]: Stopping Network Manager Wait Online...
Jan 23 03:18:33 np0005593232 systemd[1]: Stopping Network Manager...
Jan 23 03:18:33 np0005593232 NetworkManager[860]: <info>  [1769156313.1268] caught SIGTERM, shutting down normally.
Jan 23 03:18:33 np0005593232 NetworkManager[860]: <info>  [1769156313.1282] dhcp4 (eth0): canceled DHCP transaction
Jan 23 03:18:33 np0005593232 NetworkManager[860]: <info>  [1769156313.1283] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 23 03:18:33 np0005593232 NetworkManager[860]: <info>  [1769156313.1283] dhcp4 (eth0): state changed no lease
Jan 23 03:18:33 np0005593232 NetworkManager[860]: <info>  [1769156313.1285] manager: NetworkManager state is now CONNECTING
Jan 23 03:18:33 np0005593232 NetworkManager[860]: <info>  [1769156313.1399] dhcp4 (eth1): canceled DHCP transaction
Jan 23 03:18:33 np0005593232 NetworkManager[860]: <info>  [1769156313.1400] dhcp4 (eth1): state changed no lease
Jan 23 03:18:33 np0005593232 NetworkManager[860]: <info>  [1769156313.1469] exiting (success)
Jan 23 03:18:33 np0005593232 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 23 03:18:33 np0005593232 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 23 03:18:33 np0005593232 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 23 03:18:33 np0005593232 systemd[1]: Stopped Network Manager.
Jan 23 03:18:33 np0005593232 systemd[1]: NetworkManager.service: Consumed 1.659s CPU time, 10.1M memory peak.
Jan 23 03:18:33 np0005593232 systemd[1]: Starting Network Manager...
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.2052] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:a77cb431-adbb-4a02-bdb3-9bbe07725eca)
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.2054] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.2110] manager[0x564801708000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 23 03:18:33 np0005593232 systemd[1]: Starting Hostname Service...
Jan 23 03:18:33 np0005593232 systemd[1]: Started Hostname Service.
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.2947] hostname: hostname: using hostnamed
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.2949] hostname: static hostname changed from (none) to "np0005593232.novalocal"
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.2959] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.2966] manager[0x564801708000]: rfkill: Wi-Fi hardware radio set enabled
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.2967] manager[0x564801708000]: rfkill: WWAN hardware radio set enabled
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3020] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3021] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3023] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3024] manager: Networking is enabled by state file
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3029] settings: Loaded settings plugin: keyfile (internal)
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3038] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3092] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3109] dhcp: init: Using DHCP client 'internal'
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3114] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3123] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3133] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3147] device (lo): Activation: starting connection 'lo' (42068fe1-a1cc-4ff6-98c6-929f9f15c702)
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3159] device (eth0): carrier: link connected
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3166] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3174] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3175] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3186] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3199] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3210] device (eth1): carrier: link connected
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3219] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3230] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (243170e2-823e-31e8-b127-988fe6a5c415) (indicated)
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3231] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3243] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3255] device (eth1): Activation: starting connection 'Wired connection 1' (243170e2-823e-31e8-b127-988fe6a5c415)
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3264] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 23 03:18:33 np0005593232 systemd[1]: Started Network Manager.
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3273] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3276] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3280] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3283] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3286] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3291] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3295] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3300] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3314] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3319] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3335] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3340] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3365] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3373] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3381] device (lo): Activation: successful, device activated.
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3394] dhcp4 (eth0): state changed new lease, address=38.102.83.174
Jan 23 03:18:33 np0005593232 systemd[1]: Starting Network Manager Wait Online...
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.3407] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.5875] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.5902] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.5904] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.5909] manager: NetworkManager state is now CONNECTED_SITE
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.5912] device (eth0): Activation: successful, device activated.
Jan 23 03:18:33 np0005593232 NetworkManager[7198]: <info>  [1769156313.5917] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 23 03:18:33 np0005593232 python3[7254]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-ea81-0856-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:18:43 np0005593232 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 23 03:19:03 np0005593232 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 03:19:18 np0005593232 NetworkManager[7198]: <info>  [1769156358.6151] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 23 03:19:18 np0005593232 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 23 03:19:18 np0005593232 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 23 03:19:18 np0005593232 NetworkManager[7198]: <info>  [1769156358.6503] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 23 03:19:18 np0005593232 NetworkManager[7198]: <info>  [1769156358.6507] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 23 03:19:18 np0005593232 NetworkManager[7198]: <info>  [1769156358.6546] device (eth1): Activation: successful, device activated.
Jan 23 03:19:18 np0005593232 NetworkManager[7198]: <info>  [1769156358.6554] manager: startup complete
Jan 23 03:19:18 np0005593232 NetworkManager[7198]: <info>  [1769156358.6572] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 23 03:19:18 np0005593232 NetworkManager[7198]: <warn>  [1769156358.6578] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 23 03:19:18 np0005593232 NetworkManager[7198]: <info>  [1769156358.6586] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 23 03:19:18 np0005593232 systemd[1]: Finished Network Manager Wait Online.
Jan 23 03:19:18 np0005593232 NetworkManager[7198]: <info>  [1769156358.6719] dhcp4 (eth1): canceled DHCP transaction
Jan 23 03:19:18 np0005593232 NetworkManager[7198]: <info>  [1769156358.6719] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 23 03:19:18 np0005593232 NetworkManager[7198]: <info>  [1769156358.6719] dhcp4 (eth1): state changed no lease
Jan 23 03:19:18 np0005593232 NetworkManager[7198]: <info>  [1769156358.6730] policy: auto-activating connection 'ci-private-network' (acf508d8-e849-5591-a169-47979e016835)
Jan 23 03:19:18 np0005593232 NetworkManager[7198]: <info>  [1769156358.6733] device (eth1): Activation: starting connection 'ci-private-network' (acf508d8-e849-5591-a169-47979e016835)
Jan 23 03:19:18 np0005593232 NetworkManager[7198]: <info>  [1769156358.6733] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 03:19:18 np0005593232 NetworkManager[7198]: <info>  [1769156358.6735] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 03:19:18 np0005593232 NetworkManager[7198]: <info>  [1769156358.6740] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 03:19:18 np0005593232 NetworkManager[7198]: <info>  [1769156358.6745] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 03:19:18 np0005593232 NetworkManager[7198]: <info>  [1769156358.9215] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 03:19:18 np0005593232 NetworkManager[7198]: <info>  [1769156358.9220] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 03:19:18 np0005593232 NetworkManager[7198]: <info>  [1769156358.9233] device (eth1): Activation: successful, device activated.
Jan 23 03:19:28 np0005593232 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 23 03:19:33 np0005593232 systemd-logind[808]: Session 1 logged out. Waiting for processes to exit.
Jan 23 03:20:31 np0005593232 systemd-logind[808]: New session 3 of user zuul.
Jan 23 03:20:31 np0005593232 systemd[1]: Started Session 3 of User zuul.
Jan 23 03:20:32 np0005593232 python3[7395]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 03:20:32 np0005593232 python3[7468]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769156431.7838457-373-12044739094922/source _original_basename=tmptixvi2r1 follow=False checksum=2312614d64ff3d0f4e5be15d02ba4fd1b13cd8df backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:20:36 np0005593232 systemd[1]: session-3.scope: Deactivated successfully.
Jan 23 03:20:36 np0005593232 systemd-logind[808]: Session 3 logged out. Waiting for processes to exit.
Jan 23 03:20:36 np0005593232 systemd-logind[808]: Removed session 3.
Jan 23 03:21:30 np0005593232 systemd[4313]: Created slice User Background Tasks Slice.
Jan 23 03:21:30 np0005593232 systemd[4313]: Starting Cleanup of User's Temporary Files and Directories...
Jan 23 03:21:30 np0005593232 systemd[4313]: Finished Cleanup of User's Temporary Files and Directories.
Jan 23 03:26:26 np0005593232 systemd-logind[808]: New session 4 of user zuul.
Jan 23 03:26:26 np0005593232 systemd[1]: Started Session 4 of User zuul.
Jan 23 03:26:26 np0005593232 python3[7538]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-81c6-2885-000000000ca6-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:26:27 np0005593232 python3[7567]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:26:27 np0005593232 python3[7593]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:26:27 np0005593232 python3[7619]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:26:28 np0005593232 python3[7645]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:26:28 np0005593232 python3[7671]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:26:29 np0005593232 python3[7749]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 03:26:29 np0005593232 python3[7822]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769156789.041811-365-124056549595477/source _original_basename=tmp5ocu99x4 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:26:31 np0005593232 python3[7872]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 03:26:31 np0005593232 systemd[1]: Reloading.
Jan 23 03:26:32 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:26:34 np0005593232 python3[7928]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 23 03:26:34 np0005593232 python3[7954]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:26:34 np0005593232 python3[7982]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:26:35 np0005593232 python3[8010]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:26:35 np0005593232 python3[8038]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:26:36 np0005593232 python3[8065]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-81c6-2885-000000000cad-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:26:37 np0005593232 python3[8095]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 03:26:40 np0005593232 systemd[1]: session-4.scope: Deactivated successfully.
Jan 23 03:26:40 np0005593232 systemd[1]: session-4.scope: Consumed 4.392s CPU time.
Jan 23 03:26:40 np0005593232 systemd-logind[808]: Session 4 logged out. Waiting for processes to exit.
Jan 23 03:26:40 np0005593232 systemd-logind[808]: Removed session 4.
Jan 23 03:26:41 np0005593232 systemd-logind[808]: New session 5 of user zuul.
Jan 23 03:26:41 np0005593232 systemd[1]: Started Session 5 of User zuul.
Jan 23 03:26:42 np0005593232 python3[8130]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 23 03:26:47 np0005593232 setsebool[8173]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 23 03:26:47 np0005593232 setsebool[8173]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 23 03:27:01 np0005593232 kernel: SELinux:  Converting 385 SID table entries...
Jan 23 03:27:01 np0005593232 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 03:27:01 np0005593232 kernel: SELinux:  policy capability open_perms=1
Jan 23 03:27:01 np0005593232 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 03:27:01 np0005593232 kernel: SELinux:  policy capability always_check_network=0
Jan 23 03:27:01 np0005593232 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 03:27:01 np0005593232 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 03:27:01 np0005593232 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 03:27:12 np0005593232 kernel: SELinux:  Converting 388 SID table entries...
Jan 23 03:27:12 np0005593232 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 03:27:12 np0005593232 kernel: SELinux:  policy capability open_perms=1
Jan 23 03:27:12 np0005593232 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 03:27:12 np0005593232 kernel: SELinux:  policy capability always_check_network=0
Jan 23 03:27:12 np0005593232 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 03:27:12 np0005593232 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 03:27:12 np0005593232 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 03:27:31 np0005593232 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 23 03:27:32 np0005593232 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 03:27:32 np0005593232 systemd[1]: Starting man-db-cache-update.service...
Jan 23 03:27:32 np0005593232 systemd[1]: Reloading.
Jan 23 03:27:32 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:27:32 np0005593232 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 03:28:19 np0005593232 python3[28227]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-57be-5ddd-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:28:20 np0005593232 kernel: evm: overlay not supported
Jan 23 03:28:20 np0005593232 systemd[4313]: Starting D-Bus User Message Bus...
Jan 23 03:28:20 np0005593232 dbus-broker-launch[28695]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 23 03:28:20 np0005593232 dbus-broker-launch[28695]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 23 03:28:20 np0005593232 systemd[4313]: Started D-Bus User Message Bus.
Jan 23 03:28:20 np0005593232 dbus-broker-lau[28695]: Ready
Jan 23 03:28:20 np0005593232 systemd[4313]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 23 03:28:20 np0005593232 systemd[4313]: Created slice Slice /user.
Jan 23 03:28:20 np0005593232 systemd[4313]: podman-28620.scope: unit configures an IP firewall, but not running as root.
Jan 23 03:28:20 np0005593232 systemd[4313]: (This warning is only shown for the first unit using IP firewalling.)
Jan 23 03:28:20 np0005593232 systemd[4313]: Started podman-28620.scope.
Jan 23 03:28:20 np0005593232 systemd[4313]: Started podman-pause-64aca057.scope.
Jan 23 03:28:21 np0005593232 python3[28979]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.129.56.147:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.129.56.147:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:28:21 np0005593232 python3[28979]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 23 03:28:21 np0005593232 systemd[1]: session-5.scope: Deactivated successfully.
Jan 23 03:28:21 np0005593232 systemd[1]: session-5.scope: Consumed 48.637s CPU time.
Jan 23 03:28:21 np0005593232 systemd-logind[808]: Session 5 logged out. Waiting for processes to exit.
Jan 23 03:28:21 np0005593232 systemd-logind[808]: Removed session 5.
Jan 23 03:28:22 np0005593232 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 03:28:22 np0005593232 systemd[1]: Finished man-db-cache-update.service.
Jan 23 03:28:22 np0005593232 systemd[1]: man-db-cache-update.service: Consumed 1min 1.274s CPU time.
Jan 23 03:28:22 np0005593232 systemd[1]: run-r7908a98f189d45e389e76b5162e94006.service: Deactivated successfully.
Jan 23 03:28:49 np0005593232 systemd-logind[808]: New session 6 of user zuul.
Jan 23 03:28:49 np0005593232 systemd[1]: Started Session 6 of User zuul.
Jan 23 03:28:50 np0005593232 python3[29657]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO5lLQe2ste4Gmi1Ir356q/C15WL/7fzZRoS9rMZVgSYU6jYrJKHH43bSyQAo3PQspZm2qMkx0r+2fgxF65A8l0= zuul@np0005593231.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:28:50 np0005593232 python3[29683]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO5lLQe2ste4Gmi1Ir356q/C15WL/7fzZRoS9rMZVgSYU6jYrJKHH43bSyQAo3PQspZm2qMkx0r+2fgxF65A8l0= zuul@np0005593231.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:28:51 np0005593232 python3[29709]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005593232.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 23 03:28:52 np0005593232 python3[29743]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO5lLQe2ste4Gmi1Ir356q/C15WL/7fzZRoS9rMZVgSYU6jYrJKHH43bSyQAo3PQspZm2qMkx0r+2fgxF65A8l0= zuul@np0005593231.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 23 03:28:52 np0005593232 python3[29821]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 03:28:53 np0005593232 python3[29894]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769156932.304224-167-241293580715006/source _original_basename=tmp8njhr6nn follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:28:53 np0005593232 python3[29944]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 23 03:28:53 np0005593232 systemd[1]: Starting Hostname Service...
Jan 23 03:28:54 np0005593232 systemd[1]: Started Hostname Service.
Jan 23 03:28:54 np0005593232 systemd-hostnamed[29948]: Changed pretty hostname to 'compute-0'
Jan 23 03:28:54 np0005593232 systemd-hostnamed[29948]: Hostname set to <compute-0> (static)
Jan 23 03:28:54 np0005593232 NetworkManager[7198]: <info>  [1769156934.1018] hostname: static hostname changed from "np0005593232.novalocal" to "compute-0"
Jan 23 03:28:54 np0005593232 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 23 03:28:54 np0005593232 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 23 03:28:54 np0005593232 systemd[1]: session-6.scope: Deactivated successfully.
Jan 23 03:28:54 np0005593232 systemd[1]: session-6.scope: Consumed 2.414s CPU time.
Jan 23 03:28:54 np0005593232 systemd-logind[808]: Session 6 logged out. Waiting for processes to exit.
Jan 23 03:28:54 np0005593232 systemd-logind[808]: Removed session 6.
Jan 23 03:29:04 np0005593232 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 23 03:29:24 np0005593232 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 03:29:56 np0005593232 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 23 03:29:56 np0005593232 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 23 03:29:56 np0005593232 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 23 03:29:56 np0005593232 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 23 03:33:53 np0005593232 systemd-logind[808]: New session 7 of user zuul.
Jan 23 03:33:53 np0005593232 systemd[1]: Started Session 7 of User zuul.
Jan 23 03:33:54 np0005593232 python3[30078]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 03:33:56 np0005593232 python3[30194]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 03:33:57 np0005593232 python3[30267]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769157236.1311767-34015-186643991153815/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:33:57 np0005593232 python3[30293]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 03:33:57 np0005593232 python3[30366]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769157236.1311767-34015-186643991153815/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:33:57 np0005593232 python3[30392]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 03:33:58 np0005593232 python3[30465]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769157236.1311767-34015-186643991153815/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:33:58 np0005593232 python3[30491]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 03:33:59 np0005593232 python3[30564]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769157236.1311767-34015-186643991153815/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:33:59 np0005593232 python3[30590]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 03:33:59 np0005593232 python3[30663]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769157236.1311767-34015-186643991153815/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:34:00 np0005593232 python3[30689]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 03:34:00 np0005593232 python3[30762]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769157236.1311767-34015-186643991153815/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:34:00 np0005593232 python3[30788]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 03:34:01 np0005593232 python3[30861]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769157236.1311767-34015-186643991153815/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:34:14 np0005593232 python3[30920]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:39:14 np0005593232 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 03:39:14 np0005593232 systemd[1]: session-7.scope: Consumed 5.594s CPU time.
Jan 23 03:39:14 np0005593232 systemd-logind[808]: Session 7 logged out. Waiting for processes to exit.
Jan 23 03:39:14 np0005593232 systemd-logind[808]: Removed session 7.
Jan 23 03:44:30 np0005593232 systemd[1]: Starting dnf makecache...
Jan 23 03:44:30 np0005593232 dnf[30939]: Failed determining last makecache time.
Jan 23 03:44:30 np0005593232 dnf[30939]: delorean-openstack-barbican-42b4c41831408a8e323 382 kB/s |  13 kB     00:00
Jan 23 03:44:30 np0005593232 dnf[30939]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 2.9 MB/s |  65 kB     00:00
Jan 23 03:44:30 np0005593232 dnf[30939]: delorean-openstack-cinder-1c00d6490d88e436f26ef 1.4 MB/s |  32 kB     00:00
Jan 23 03:44:30 np0005593232 dnf[30939]: delorean-python-stevedore-c4acc5639fd2329372142 5.7 MB/s | 131 kB     00:00
Jan 23 03:44:30 np0005593232 dnf[30939]: delorean-python-cloudkitty-tests-tempest-2c80f8 1.6 MB/s |  32 kB     00:00
Jan 23 03:44:30 np0005593232 dnf[30939]: delorean-os-refresh-config-9bfc52b5049be2d8de61  11 MB/s | 349 kB     00:00
Jan 23 03:44:31 np0005593232 dnf[30939]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 1.9 MB/s |  42 kB     00:00
Jan 23 03:44:31 np0005593232 dnf[30939]: delorean-python-designate-tests-tempest-347fdbc 847 kB/s |  18 kB     00:00
Jan 23 03:44:31 np0005593232 dnf[30939]: delorean-openstack-glance-1fd12c29b339f30fe823e 829 kB/s |  18 kB     00:00
Jan 23 03:44:31 np0005593232 dnf[30939]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 1.4 MB/s |  29 kB     00:00
Jan 23 03:44:31 np0005593232 dnf[30939]: delorean-openstack-manila-3c01b7181572c95dac462 994 kB/s |  25 kB     00:00
Jan 23 03:44:31 np0005593232 dnf[30939]: delorean-python-whitebox-neutron-tests-tempest- 6.8 MB/s | 154 kB     00:00
Jan 23 03:44:31 np0005593232 dnf[30939]: delorean-openstack-octavia-ba397f07a7331190208c 1.2 MB/s |  26 kB     00:00
Jan 23 03:44:31 np0005593232 dnf[30939]: delorean-openstack-watcher-c014f81a8647287f6dcc 723 kB/s |  16 kB     00:00
Jan 23 03:44:31 np0005593232 dnf[30939]: delorean-ansible-config_template-5ccaa22121a7ff 276 kB/s | 7.4 kB     00:00
Jan 23 03:44:31 np0005593232 dnf[30939]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 3.1 MB/s | 144 kB     00:00
Jan 23 03:44:31 np0005593232 dnf[30939]: delorean-openstack-swift-dc98a8463506ac520c469a 600 kB/s |  14 kB     00:00
Jan 23 03:44:31 np0005593232 dnf[30939]: delorean-python-tempestconf-8515371b7cceebd4282 2.1 MB/s |  53 kB     00:00
Jan 23 03:44:31 np0005593232 dnf[30939]: delorean-openstack-heat-ui-013accbfd179753bc3f0 4.1 MB/s |  96 kB     00:00
Jan 23 03:44:31 np0005593232 dnf[30939]: CentOS Stream 9 - BaseOS                         29 kB/s | 6.7 kB     00:00
Jan 23 03:44:31 np0005593232 dnf[30939]: CentOS Stream 9 - AppStream                      65 kB/s | 6.8 kB     00:00
Jan 23 03:44:32 np0005593232 dnf[30939]: CentOS Stream 9 - CRB                            29 kB/s | 6.6 kB     00:00
Jan 23 03:44:32 np0005593232 dnf[30939]: CentOS Stream 9 - Extras packages                32 kB/s | 7.3 kB     00:00
Jan 23 03:44:32 np0005593232 dnf[30939]: dlrn-antelope-testing                            30 MB/s | 1.1 MB     00:00
Jan 23 03:44:33 np0005593232 dnf[30939]: dlrn-antelope-build-deps                        4.8 MB/s | 461 kB     00:00
Jan 23 03:44:33 np0005593232 dnf[30939]: centos9-rabbitmq                                 11 MB/s | 123 kB     00:00
Jan 23 03:44:33 np0005593232 dnf[30939]: centos9-storage                                  21 MB/s | 415 kB     00:00
Jan 23 03:44:33 np0005593232 dnf[30939]: centos9-opstools                                4.7 MB/s |  51 kB     00:00
Jan 23 03:44:33 np0005593232 dnf[30939]: NFV SIG OpenvSwitch                              21 MB/s | 461 kB     00:00
Jan 23 03:44:34 np0005593232 dnf[30939]: repo-setup-centos-appstream                      86 MB/s |  26 MB     00:00
Jan 23 03:44:42 np0005593232 dnf[30939]: repo-setup-centos-baseos                         72 MB/s | 8.9 MB     00:00
Jan 23 03:44:43 np0005593232 dnf[30939]: repo-setup-centos-highavailability               29 MB/s | 744 kB     00:00
Jan 23 03:44:44 np0005593232 dnf[30939]: repo-setup-centos-powertools                     69 MB/s | 7.6 MB     00:00
Jan 23 03:44:46 np0005593232 dnf[30939]: Extra Packages for Enterprise Linux 9 - x86_64   42 MB/s |  20 MB     00:00
Jan 23 03:45:06 np0005593232 dnf[30939]: Metadata cache created.
Jan 23 03:45:06 np0005593232 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 23 03:45:06 np0005593232 systemd[1]: Finished dnf makecache.
Jan 23 03:45:06 np0005593232 systemd[1]: dnf-makecache.service: Consumed 34.418s CPU time.
Jan 23 03:49:32 np0005593232 systemd-logind[808]: New session 8 of user zuul.
Jan 23 03:49:32 np0005593232 systemd[1]: Started Session 8 of User zuul.
Jan 23 03:49:34 np0005593232 python3.9[31202]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 03:49:35 np0005593232 python3.9[31383]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:49:43 np0005593232 systemd[1]: session-8.scope: Deactivated successfully.
Jan 23 03:49:43 np0005593232 systemd[1]: session-8.scope: Consumed 8.315s CPU time.
Jan 23 03:49:43 np0005593232 systemd-logind[808]: Session 8 logged out. Waiting for processes to exit.
Jan 23 03:49:43 np0005593232 systemd-logind[808]: Removed session 8.
Jan 23 03:50:00 np0005593232 systemd-logind[808]: New session 9 of user zuul.
Jan 23 03:50:00 np0005593232 systemd[1]: Started Session 9 of User zuul.
Jan 23 03:50:01 np0005593232 python3.9[31593]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 23 03:50:02 np0005593232 python3.9[31767]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 03:50:03 np0005593232 python3.9[31919]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:50:05 np0005593232 python3.9[32072]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 03:50:06 np0005593232 python3.9[32224]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:50:06 np0005593232 python3.9[32376]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:50:07 np0005593232 python3.9[32499]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769158206.3330624-177-238813725461330/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:50:08 np0005593232 python3.9[32651]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 03:50:09 np0005593232 python3.9[32807]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 03:50:09 np0005593232 python3.9[32959]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 03:50:10 np0005593232 python3.9[33109]: ansible-ansible.builtin.service_facts Invoked
Jan 23 03:50:16 np0005593232 python3.9[33362]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:50:17 np0005593232 python3.9[33512]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 03:50:18 np0005593232 python3.9[33666]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 03:50:19 np0005593232 python3.9[33824]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 03:50:20 np0005593232 python3.9[33908]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 03:51:06 np0005593232 systemd[1]: Reloading.
Jan 23 03:51:06 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:51:06 np0005593232 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 23 03:51:06 np0005593232 systemd[1]: Reloading.
Jan 23 03:51:06 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:51:06 np0005593232 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 23 03:51:06 np0005593232 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 23 03:51:06 np0005593232 systemd[1]: Reloading.
Jan 23 03:51:06 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:51:07 np0005593232 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 23 03:51:07 np0005593232 dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Jan 23 03:51:07 np0005593232 dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Jan 23 03:51:07 np0005593232 dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Jan 23 03:52:12 np0005593232 kernel: SELinux:  Converting 2725 SID table entries...
Jan 23 03:52:12 np0005593232 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 03:52:12 np0005593232 kernel: SELinux:  policy capability open_perms=1
Jan 23 03:52:12 np0005593232 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 03:52:12 np0005593232 kernel: SELinux:  policy capability always_check_network=0
Jan 23 03:52:12 np0005593232 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 03:52:12 np0005593232 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 03:52:12 np0005593232 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 03:52:12 np0005593232 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 23 03:52:12 np0005593232 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 03:52:12 np0005593232 systemd[1]: Starting man-db-cache-update.service...
Jan 23 03:52:12 np0005593232 systemd[1]: Reloading.
Jan 23 03:52:12 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:52:12 np0005593232 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 03:52:13 np0005593232 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 03:52:13 np0005593232 systemd[1]: Finished man-db-cache-update.service.
Jan 23 03:52:13 np0005593232 systemd[1]: man-db-cache-update.service: Consumed 1.060s CPU time.
Jan 23 03:52:13 np0005593232 systemd[1]: run-r1cafac0517124abc87049d54ad5e9ba3.service: Deactivated successfully.
Jan 23 03:52:14 np0005593232 python3.9[35430]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:52:16 np0005593232 python3.9[35711]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 23 03:52:17 np0005593232 python3.9[35863]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 23 03:52:20 np0005593232 python3.9[36016]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:52:21 np0005593232 python3.9[36168]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 23 03:52:22 np0005593232 python3.9[36320]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 03:52:23 np0005593232 python3.9[36472]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:52:24 np0005593232 python3.9[36595]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769158343.2079706-666-69596738754672/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=193e99f8e1220a4ec0ffff2d0cee79b79a562ce2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:52:34 np0005593232 python3.9[36747]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 03:52:35 np0005593232 python3.9[36899]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:52:36 np0005593232 python3.9[37052]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:52:37 np0005593232 python3.9[37204]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 23 03:52:37 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 03:52:38 np0005593232 python3.9[37358]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 23 03:52:39 np0005593232 python3.9[37516]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 23 03:52:40 np0005593232 python3.9[37676]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 23 03:52:41 np0005593232 python3.9[37829]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 23 03:52:42 np0005593232 python3.9[37987]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 23 03:52:43 np0005593232 python3.9[38139]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 03:52:46 np0005593232 python3.9[38293]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 03:52:47 np0005593232 python3.9[38445]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:52:47 np0005593232 python3.9[38568]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769158366.6291263-1023-58286270148924/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 03:52:48 np0005593232 python3.9[38720]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 03:52:48 np0005593232 systemd[1]: Starting Load Kernel Modules...
Jan 23 03:52:48 np0005593232 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 03:52:48 np0005593232 kernel: Bridge firewalling registered
Jan 23 03:52:48 np0005593232 systemd-modules-load[38724]: Inserted module 'br_netfilter'
Jan 23 03:52:48 np0005593232 systemd[1]: Finished Load Kernel Modules.
Jan 23 03:52:49 np0005593232 python3.9[38879]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:52:50 np0005593232 python3.9[39002]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769158369.2372143-1092-22709524674456/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 03:52:51 np0005593232 python3.9[39154]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 03:52:54 np0005593232 dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Jan 23 03:52:54 np0005593232 dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Jan 23 03:52:55 np0005593232 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 03:52:55 np0005593232 systemd[1]: Starting man-db-cache-update.service...
Jan 23 03:52:55 np0005593232 systemd[1]: Reloading.
Jan 23 03:52:55 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:52:55 np0005593232 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 03:52:58 np0005593232 python3.9[42596]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 03:52:58 np0005593232 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 03:52:58 np0005593232 systemd[1]: Finished man-db-cache-update.service.
Jan 23 03:52:58 np0005593232 systemd[1]: man-db-cache-update.service: Consumed 4.547s CPU time.
Jan 23 03:52:58 np0005593232 systemd[1]: run-raf5ba8a5517540f7ae68c8898beb621c.service: Deactivated successfully.
Jan 23 03:52:59 np0005593232 python3.9[43022]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 23 03:53:00 np0005593232 python3.9[43172]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 03:53:01 np0005593232 python3.9[43324]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:53:01 np0005593232 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 23 03:53:01 np0005593232 systemd[1]: Starting Authorization Manager...
Jan 23 03:53:01 np0005593232 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 23 03:53:01 np0005593232 polkitd[43541]: Started polkitd version 0.117
Jan 23 03:53:01 np0005593232 systemd[1]: Started Authorization Manager.
Jan 23 03:53:02 np0005593232 python3.9[43711]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 03:53:02 np0005593232 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 23 03:53:02 np0005593232 systemd[1]: tuned.service: Deactivated successfully.
Jan 23 03:53:02 np0005593232 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 23 03:53:02 np0005593232 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 23 03:53:02 np0005593232 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 23 03:53:03 np0005593232 python3.9[43872]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 23 03:53:08 np0005593232 python3.9[44024]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 03:53:08 np0005593232 systemd[1]: Reloading.
Jan 23 03:53:08 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:53:09 np0005593232 python3.9[44214]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 03:53:09 np0005593232 systemd[1]: Reloading.
Jan 23 03:53:09 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:53:10 np0005593232 python3.9[44403]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:53:11 np0005593232 python3.9[44556]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:53:11 np0005593232 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 23 03:53:12 np0005593232 python3.9[44709]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:53:14 np0005593232 python3.9[44871]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:53:15 np0005593232 python3.9[45024]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 03:53:15 np0005593232 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 03:53:15 np0005593232 systemd[1]: Stopped Apply Kernel Variables.
Jan 23 03:53:15 np0005593232 systemd[1]: Stopping Apply Kernel Variables...
Jan 23 03:53:15 np0005593232 systemd[1]: Starting Apply Kernel Variables...
Jan 23 03:53:15 np0005593232 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 03:53:15 np0005593232 systemd[1]: Finished Apply Kernel Variables.
Jan 23 03:53:15 np0005593232 systemd[1]: session-9.scope: Deactivated successfully.
Jan 23 03:53:15 np0005593232 systemd[1]: session-9.scope: Consumed 2min 18.684s CPU time.
Jan 23 03:53:15 np0005593232 systemd-logind[808]: Session 9 logged out. Waiting for processes to exit.
Jan 23 03:53:15 np0005593232 systemd-logind[808]: Removed session 9.
Jan 23 03:53:21 np0005593232 systemd-logind[808]: New session 10 of user zuul.
Jan 23 03:53:21 np0005593232 systemd[1]: Started Session 10 of User zuul.
Jan 23 03:53:22 np0005593232 python3.9[45209]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 03:53:24 np0005593232 python3.9[45365]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 23 03:53:24 np0005593232 python3.9[45518]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 23 03:53:26 np0005593232 python3.9[45676]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 23 03:53:27 np0005593232 python3.9[45836]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 03:53:28 np0005593232 python3.9[45920]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 23 03:53:31 np0005593232 python3.9[46084]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 03:53:45 np0005593232 kernel: SELinux:  Converting 2737 SID table entries...
Jan 23 03:53:45 np0005593232 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 03:53:45 np0005593232 kernel: SELinux:  policy capability open_perms=1
Jan 23 03:53:45 np0005593232 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 03:53:45 np0005593232 kernel: SELinux:  policy capability always_check_network=0
Jan 23 03:53:45 np0005593232 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 03:53:45 np0005593232 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 03:53:45 np0005593232 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 03:53:45 np0005593232 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 23 03:53:45 np0005593232 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 23 03:53:46 np0005593232 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 03:53:46 np0005593232 systemd[1]: Starting man-db-cache-update.service...
Jan 23 03:53:47 np0005593232 systemd[1]: Reloading.
Jan 23 03:53:47 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:53:47 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 03:53:47 np0005593232 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 03:53:47 np0005593232 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 03:53:47 np0005593232 systemd[1]: Finished man-db-cache-update.service.
Jan 23 03:53:47 np0005593232 systemd[1]: run-ra3c44607f4cf40cabcfbf645242ede7a.service: Deactivated successfully.
Jan 23 03:53:48 np0005593232 python3.9[47183]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 03:53:49 np0005593232 systemd[1]: Reloading.
Jan 23 03:53:49 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:53:49 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 03:53:49 np0005593232 systemd[1]: Starting Open vSwitch Database Unit...
Jan 23 03:53:49 np0005593232 chown[47225]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 23 03:53:49 np0005593232 ovs-ctl[47230]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 23 03:53:49 np0005593232 ovs-ctl[47230]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 23 03:53:49 np0005593232 ovs-ctl[47230]: Starting ovsdb-server [  OK  ]
Jan 23 03:53:49 np0005593232 ovs-vsctl[47279]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 23 03:53:49 np0005593232 ovs-vsctl[47299]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"d80bc768-e67f-4e48-bcf3-42912cda98f1\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 23 03:53:49 np0005593232 ovs-ctl[47230]: Configuring Open vSwitch system IDs [  OK  ]
Jan 23 03:53:49 np0005593232 ovs-vsctl[47305]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 23 03:53:49 np0005593232 ovs-ctl[47230]: Enabling remote OVSDB managers [  OK  ]
Jan 23 03:53:49 np0005593232 systemd[1]: Started Open vSwitch Database Unit.
Jan 23 03:53:49 np0005593232 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 23 03:53:49 np0005593232 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 23 03:53:49 np0005593232 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 23 03:53:49 np0005593232 kernel: openvswitch: Open vSwitch switching datapath
Jan 23 03:53:49 np0005593232 ovs-ctl[47349]: Inserting openvswitch module [  OK  ]
Jan 23 03:53:50 np0005593232 ovs-ctl[47318]: Starting ovs-vswitchd [  OK  ]
Jan 23 03:53:50 np0005593232 ovs-vsctl[47367]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 23 03:53:50 np0005593232 ovs-ctl[47318]: Enabling remote OVSDB managers [  OK  ]
Jan 23 03:53:50 np0005593232 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 23 03:53:50 np0005593232 systemd[1]: Starting Open vSwitch...
Jan 23 03:53:50 np0005593232 systemd[1]: Finished Open vSwitch.
Jan 23 03:53:50 np0005593232 python3.9[47518]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 03:53:51 np0005593232 python3.9[47670]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 23 03:53:53 np0005593232 kernel: SELinux:  Converting 2751 SID table entries...
Jan 23 03:53:53 np0005593232 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 03:53:53 np0005593232 kernel: SELinux:  policy capability open_perms=1
Jan 23 03:53:53 np0005593232 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 03:53:53 np0005593232 kernel: SELinux:  policy capability always_check_network=0
Jan 23 03:53:53 np0005593232 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 03:53:53 np0005593232 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 03:53:53 np0005593232 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 03:53:54 np0005593232 python3.9[47825]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 03:53:55 np0005593232 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 23 03:53:55 np0005593232 python3.9[47983]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 03:53:57 np0005593232 python3.9[48136]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:53:59 np0005593232 python3.9[48423]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 23 03:54:00 np0005593232 python3.9[48573]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 03:54:01 np0005593232 python3.9[48727]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 03:54:03 np0005593232 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 03:54:03 np0005593232 systemd[1]: Starting man-db-cache-update.service...
Jan 23 03:54:03 np0005593232 systemd[1]: Reloading.
Jan 23 03:54:03 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 03:54:03 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:54:03 np0005593232 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 03:54:05 np0005593232 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 03:54:05 np0005593232 systemd[1]: Finished man-db-cache-update.service.
Jan 23 03:54:05 np0005593232 systemd[1]: run-re32b566302c54b32a8fc06bbb1b64e8c.service: Deactivated successfully.
Jan 23 03:54:06 np0005593232 python3.9[49044]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 03:54:06 np0005593232 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 23 03:54:06 np0005593232 systemd[1]: Stopped Network Manager Wait Online.
Jan 23 03:54:06 np0005593232 systemd[1]: Stopping Network Manager Wait Online...
Jan 23 03:54:06 np0005593232 NetworkManager[7198]: <info>  [1769158446.3917] caught SIGTERM, shutting down normally.
Jan 23 03:54:06 np0005593232 systemd[1]: Stopping Network Manager...
Jan 23 03:54:06 np0005593232 NetworkManager[7198]: <info>  [1769158446.3942] dhcp4 (eth0): canceled DHCP transaction
Jan 23 03:54:06 np0005593232 NetworkManager[7198]: <info>  [1769158446.3942] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 23 03:54:06 np0005593232 NetworkManager[7198]: <info>  [1769158446.3943] dhcp4 (eth0): state changed no lease
Jan 23 03:54:06 np0005593232 NetworkManager[7198]: <info>  [1769158446.3945] manager: NetworkManager state is now CONNECTED_SITE
Jan 23 03:54:06 np0005593232 NetworkManager[7198]: <info>  [1769158446.4016] exiting (success)
Jan 23 03:54:06 np0005593232 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 23 03:54:06 np0005593232 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 23 03:54:06 np0005593232 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 23 03:54:06 np0005593232 systemd[1]: Stopped Network Manager.
Jan 23 03:54:06 np0005593232 systemd[1]: NetworkManager.service: Consumed 15.906s CPU time, 4.1M memory peak, read 0B from disk, written 25.5K to disk.
Jan 23 03:54:06 np0005593232 systemd[1]: Starting Network Manager...
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.4877] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:a77cb431-adbb-4a02-bdb3-9bbe07725eca)
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.4879] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.4935] manager[0x55a084a5c000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 23 03:54:06 np0005593232 systemd[1]: Starting Hostname Service...
Jan 23 03:54:06 np0005593232 systemd[1]: Started Hostname Service.
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5699] hostname: hostname: using hostnamed
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5700] hostname: static hostname changed from (none) to "compute-0"
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5704] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5709] manager[0x55a084a5c000]: rfkill: Wi-Fi hardware radio set enabled
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5709] manager[0x55a084a5c000]: rfkill: WWAN hardware radio set enabled
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5729] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5738] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5739] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5739] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5739] manager: Networking is enabled by state file
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5742] settings: Loaded settings plugin: keyfile (internal)
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5745] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5780] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5790] dhcp: init: Using DHCP client 'internal'
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5792] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5798] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5803] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5810] device (lo): Activation: starting connection 'lo' (42068fe1-a1cc-4ff6-98c6-929f9f15c702)
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5815] device (eth0): carrier: link connected
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5819] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5823] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5824] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5830] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5835] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5841] device (eth1): carrier: link connected
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5844] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5849] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (acf508d8-e849-5591-a169-47979e016835) (indicated)
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5849] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5854] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5861] device (eth1): Activation: starting connection 'ci-private-network' (acf508d8-e849-5591-a169-47979e016835)
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5866] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 23 03:54:06 np0005593232 systemd[1]: Started Network Manager.
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5875] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5877] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5879] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5889] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5896] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5899] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5904] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5909] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5921] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5929] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5941] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5955] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5966] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5970] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5975] device (lo): Activation: successful, device activated.
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5981] dhcp4 (eth0): state changed new lease, address=38.102.83.174
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.5986] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 23 03:54:06 np0005593232 systemd[1]: Starting Network Manager Wait Online...
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.6059] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.6066] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.6072] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.6075] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.6077] device (eth1): Activation: successful, device activated.
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.6094] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.6096] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.6100] manager: NetworkManager state is now CONNECTED_SITE
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.6102] device (eth0): Activation: successful, device activated.
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.6107] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 23 03:54:06 np0005593232 NetworkManager[49057]: <info>  [1769158446.6109] manager: startup complete
Jan 23 03:54:06 np0005593232 systemd[1]: Finished Network Manager Wait Online.
Jan 23 03:54:07 np0005593232 python3.9[49271]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 03:54:12 np0005593232 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 03:54:12 np0005593232 systemd[1]: Starting man-db-cache-update.service...
Jan 23 03:54:12 np0005593232 systemd[1]: Reloading.
Jan 23 03:54:12 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:54:12 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 03:54:12 np0005593232 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 03:54:13 np0005593232 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 03:54:13 np0005593232 systemd[1]: Finished man-db-cache-update.service.
Jan 23 03:54:13 np0005593232 systemd[1]: run-r7211a4dc9bcf43adb90017457141867d.service: Deactivated successfully.
Jan 23 03:54:16 np0005593232 python3.9[49731]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 03:54:16 np0005593232 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 23 03:54:17 np0005593232 python3.9[49883]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:54:18 np0005593232 python3.9[50037]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:54:18 np0005593232 python3.9[50189]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:54:19 np0005593232 python3.9[50341]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:54:20 np0005593232 python3.9[50493]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:54:21 np0005593232 python3.9[50645]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:54:21 np0005593232 python3.9[50768]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769158460.6904707-647-241796086642755/.source _original_basename=.krlzw1de follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:54:22 np0005593232 python3.9[50920]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:54:23 np0005593232 python3.9[51072]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 23 03:54:24 np0005593232 python3.9[51224]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:54:26 np0005593232 python3.9[51651]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 23 03:54:28 np0005593232 ansible-async_wrapper.py[51826]: Invoked with j1408923657 300 /home/zuul/.ansible/tmp/ansible-tmp-1769158467.1471498-845-245255104955939/AnsiballZ_edpm_os_net_config.py _
Jan 23 03:54:28 np0005593232 ansible-async_wrapper.py[51829]: Starting module and watcher
Jan 23 03:54:28 np0005593232 ansible-async_wrapper.py[51829]: Start watching 51830 (300)
Jan 23 03:54:28 np0005593232 ansible-async_wrapper.py[51830]: Start module (51830)
Jan 23 03:54:28 np0005593232 ansible-async_wrapper.py[51826]: Return async_wrapper task started.
Jan 23 03:54:28 np0005593232 python3.9[51831]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 23 03:54:29 np0005593232 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 23 03:54:29 np0005593232 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 23 03:54:29 np0005593232 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 23 03:54:29 np0005593232 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 23 03:54:29 np0005593232 kernel: cfg80211: failed to load regulatory.db
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.1651] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51832 uid=0 result="success"
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.1671] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51832 uid=0 result="success"
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2169] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2172] audit: op="connection-add" uuid="704f9cf5-d3cd-4aa8-9b51-d5f190bb92a2" name="br-ex-br" pid=51832 uid=0 result="success"
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2186] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2188] audit: op="connection-add" uuid="f6bdc43c-e43c-4a98-adcf-be5b06ed37f4" name="br-ex-port" pid=51832 uid=0 result="success"
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2198] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2200] audit: op="connection-add" uuid="47df9c61-a8e9-4643-a5aa-1adb1f85a68d" name="eth1-port" pid=51832 uid=0 result="success"
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2211] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2213] audit: op="connection-add" uuid="f4eb0db3-83a2-4400-b6f6-4bf5bd4556f8" name="vlan20-port" pid=51832 uid=0 result="success"
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2223] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2225] audit: op="connection-add" uuid="ab4bb8c8-32be-4bf9-8a52-b05f4b2528a8" name="vlan21-port" pid=51832 uid=0 result="success"
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2235] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2237] audit: op="connection-add" uuid="ca17c271-f5a5-4105-bf7d-009a00d7e39b" name="vlan22-port" pid=51832 uid=0 result="success"
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2247] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2249] audit: op="connection-add" uuid="c8b9b269-b173-43e2-9c9c-1c4698c9c13b" name="vlan23-port" pid=51832 uid=0 result="success"
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2269] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,connection.timestamp,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method" pid=51832 uid=0 result="success"
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2283] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2285] audit: op="connection-add" uuid="02dbd158-1427-47a4-87f4-cbac37323c4b" name="br-ex-if" pid=51832 uid=0 result="success"
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2345] audit: op="connection-update" uuid="acf508d8-e849-5591-a169-47979e016835" name="ci-private-network" args="ipv4.routing-rules,ipv4.dns,ipv4.routes,ipv4.never-default,ipv4.addresses,ipv4.method,ovs-external-ids.data,connection.master,connection.port-type,connection.controller,connection.slave-type,connection.timestamp,ipv6.routing-rules,ipv6.dns,ipv6.routes,ipv6.addr-gen-mode,ipv6.addresses,ipv6.method,ovs-interface.type" pid=51832 uid=0 result="success"
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2358] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2360] audit: op="connection-add" uuid="7e6fb90e-fc50-4703-933e-483a2ba4e12f" name="vlan20-if" pid=51832 uid=0 result="success"
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2374] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2376] audit: op="connection-add" uuid="cee724e1-238e-406b-9181-31918336792d" name="vlan21-if" pid=51832 uid=0 result="success"
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2388] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2390] audit: op="connection-add" uuid="7ebf35e4-af92-4ef8-bdc9-9f4d48b59a45" name="vlan22-if" pid=51832 uid=0 result="success"
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2404] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2405] audit: op="connection-add" uuid="4ac5deed-bf8f-4702-bd4c-f565e91325aa" name="vlan23-if" pid=51832 uid=0 result="success"
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2416] audit: op="connection-delete" uuid="243170e2-823e-31e8-b127-988fe6a5c415" name="Wired connection 1" pid=51832 uid=0 result="success"
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2429] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <warn>  [1769158470.2433] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2440] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2445] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (704f9cf5-d3cd-4aa8-9b51-d5f190bb92a2)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2446] audit: op="connection-activate" uuid="704f9cf5-d3cd-4aa8-9b51-d5f190bb92a2" name="br-ex-br" pid=51832 uid=0 result="success"
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2449] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <warn>  [1769158470.2450] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Success
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2455] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2458] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (f6bdc43c-e43c-4a98-adcf-be5b06ed37f4)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2460] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <warn>  [1769158470.2461] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2464] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2468] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (47df9c61-a8e9-4643-a5aa-1adb1f85a68d)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2470] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <warn>  [1769158470.2471] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2475] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2479] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (f4eb0db3-83a2-4400-b6f6-4bf5bd4556f8)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2481] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <warn>  [1769158470.2482] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2486] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2490] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (ab4bb8c8-32be-4bf9-8a52-b05f4b2528a8)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2492] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <warn>  [1769158470.2493] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2497] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2501] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (ca17c271-f5a5-4105-bf7d-009a00d7e39b)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2503] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <warn>  [1769158470.2505] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2510] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2515] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (c8b9b269-b173-43e2-9c9c-1c4698c9c13b)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2516] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2519] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2522] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2529] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <warn>  [1769158470.2531] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2534] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2539] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (02dbd158-1427-47a4-87f4-cbac37323c4b)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2540] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2544] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2546] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2548] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2549] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2557] device (eth1): disconnecting for new activation request.
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2559] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2563] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2565] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2567] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2570] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <warn>  [1769158470.2572] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2576] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2581] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (7e6fb90e-fc50-4703-933e-483a2ba4e12f)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2582] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2585] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2588] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2590] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2593] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <warn>  [1769158470.2595] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2598] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2602] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (cee724e1-238e-406b-9181-31918336792d)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2603] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2606] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2608] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2609] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2612] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <warn>  [1769158470.2613] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2616] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2621] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (7ebf35e4-af92-4ef8-bdc9-9f4d48b59a45)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2622] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2625] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2626] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2628] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2631] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <warn>  [1769158470.2632] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2635] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2640] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (4ac5deed-bf8f-4702-bd4c-f565e91325aa)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2642] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2646] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2648] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2650] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2652] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2664] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method" pid=51832 uid=0 result="success"
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2666] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2669] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2671] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2677] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2681] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2684] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2687] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2689] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2693] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2697] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2700] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2702] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2706] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 kernel: ovs-system: entered promiscuous mode
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2710] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2712] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2713] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2717] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2720] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2722] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2723] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2727] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2730] dhcp4 (eth0): canceled DHCP transaction
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2731] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2731] dhcp4 (eth0): state changed no lease
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2734] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 23 03:54:30 np0005593232 kernel: Timeout policy base is empty
Jan 23 03:54:30 np0005593232 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 23 03:54:30 np0005593232 systemd-udevd[51835]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2745] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.2747] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51832 uid=0 result="fail" reason="Device is not activated"
Jan 23 03:54:30 np0005593232 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 23 03:54:30 np0005593232 kernel: br-ex: entered promiscuous mode
Jan 23 03:54:30 np0005593232 kernel: vlan20: entered promiscuous mode
Jan 23 03:54:30 np0005593232 systemd-udevd[51837]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.4706] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.4712] dhcp4 (eth0): state changed new lease, address=38.102.83.174
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.4727] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.4735] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.4747] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.4755] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 23 03:54:30 np0005593232 kernel: vlan21: entered promiscuous mode
Jan 23 03:54:30 np0005593232 kernel: vlan22: entered promiscuous mode
Jan 23 03:54:30 np0005593232 kernel: vlan23: entered promiscuous mode
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.5833] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.5971] device (eth1): Activation: starting connection 'ci-private-network' (acf508d8-e849-5591-a169-47979e016835)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.5978] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.5981] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.5982] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.5983] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.5985] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.5987] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.5988] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.5990] device (eth1): state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.5999] device (eth1): disconnecting for new activation request.
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.5999] audit: op="connection-activate" uuid="acf508d8-e849-5591-a169-47979e016835" name="ci-private-network" pid=51832 uid=0 result="success"
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6003] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6031] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6038] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6044] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6047] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6051] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6055] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6058] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6062] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6066] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6070] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6073] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6077] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6081] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6085] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6089] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6092] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6095] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6098] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6105] device (eth1): Activation: starting connection 'ci-private-network' (acf508d8-e849-5591-a169-47979e016835)
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6132] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6136] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6143] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51832 uid=0 result="success"
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6144] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6164] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6172] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6177] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6193] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6196] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6199] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6206] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6214] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6219] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6227] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6232] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6234] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6236] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6240] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6245] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6250] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6255] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6257] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6260] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6264] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6269] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6274] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6279] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6281] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 23 03:54:30 np0005593232 NetworkManager[49057]: <info>  [1769158470.6285] device (eth1): Activation: successful, device activated.
Jan 23 03:54:31 np0005593232 NetworkManager[49057]: <info>  [1769158471.8346] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51832 uid=0 result="success"
Jan 23 03:54:32 np0005593232 NetworkManager[49057]: <info>  [1769158472.0026] checkpoint[0x55a084a32950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 23 03:54:32 np0005593232 NetworkManager[49057]: <info>  [1769158472.0028] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51832 uid=0 result="success"
Jan 23 03:54:32 np0005593232 python3.9[52194]: ansible-ansible.legacy.async_status Invoked with jid=j1408923657.51826 mode=status _async_dir=/root/.ansible_async
Jan 23 03:54:32 np0005593232 NetworkManager[49057]: <info>  [1769158472.3323] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51832 uid=0 result="success"
Jan 23 03:54:32 np0005593232 NetworkManager[49057]: <info>  [1769158472.3339] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51832 uid=0 result="success"
Jan 23 03:54:32 np0005593232 NetworkManager[49057]: <info>  [1769158472.7413] audit: op="networking-control" arg="global-dns-configuration" pid=51832 uid=0 result="success"
Jan 23 03:54:32 np0005593232 NetworkManager[49057]: <info>  [1769158472.7502] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 23 03:54:32 np0005593232 NetworkManager[49057]: <info>  [1769158472.7565] audit: op="networking-control" arg="global-dns-configuration" pid=51832 uid=0 result="success"
Jan 23 03:54:32 np0005593232 NetworkManager[49057]: <info>  [1769158472.7597] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51832 uid=0 result="success"
Jan 23 03:54:32 np0005593232 NetworkManager[49057]: <info>  [1769158472.9252] checkpoint[0x55a084a32a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 23 03:54:32 np0005593232 NetworkManager[49057]: <info>  [1769158472.9257] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51832 uid=0 result="success"
Jan 23 03:54:32 np0005593232 ansible-async_wrapper.py[51830]: Module complete (51830)
Jan 23 03:54:33 np0005593232 ansible-async_wrapper.py[51829]: Done in kid B.
Jan 23 03:54:35 np0005593232 python3.9[52301]: ansible-ansible.legacy.async_status Invoked with jid=j1408923657.51826 mode=status _async_dir=/root/.ansible_async
Jan 23 03:54:36 np0005593232 python3.9[52400]: ansible-ansible.legacy.async_status Invoked with jid=j1408923657.51826 mode=cleanup _async_dir=/root/.ansible_async
Jan 23 03:54:36 np0005593232 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 03:54:37 np0005593232 python3.9[52554]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:54:37 np0005593232 python3.9[52677]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769158476.6644664-926-261527416244674/.source.returncode _original_basename=.te4v9ng5 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:54:38 np0005593232 python3.9[52829]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:54:39 np0005593232 python3.9[52953]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769158478.0547028-974-34893556999283/.source.cfg _original_basename=.2xavqe1k follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:54:40 np0005593232 python3.9[53105]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 03:54:40 np0005593232 systemd[1]: Reloading Network Manager...
Jan 23 03:54:40 np0005593232 NetworkManager[49057]: <info>  [1769158480.1599] audit: op="reload" arg="0" pid=53109 uid=0 result="success"
Jan 23 03:54:40 np0005593232 NetworkManager[49057]: <info>  [1769158480.1613] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 23 03:54:40 np0005593232 systemd[1]: Reloaded Network Manager.
Jan 23 03:54:41 np0005593232 systemd[1]: session-10.scope: Deactivated successfully.
Jan 23 03:54:41 np0005593232 systemd[1]: session-10.scope: Consumed 52.289s CPU time.
Jan 23 03:54:41 np0005593232 systemd-logind[808]: Session 10 logged out. Waiting for processes to exit.
Jan 23 03:54:41 np0005593232 systemd-logind[808]: Removed session 10.
Jan 23 03:54:46 np0005593232 systemd-logind[808]: New session 11 of user zuul.
Jan 23 03:54:46 np0005593232 systemd[1]: Started Session 11 of User zuul.
Jan 23 03:54:47 np0005593232 python3.9[53293]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 03:54:48 np0005593232 python3.9[53447]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 03:54:49 np0005593232 python3.9[53641]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:54:50 np0005593232 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 23 03:54:50 np0005593232 systemd-logind[808]: Session 11 logged out. Waiting for processes to exit.
Jan 23 03:54:50 np0005593232 systemd[1]: session-11.scope: Deactivated successfully.
Jan 23 03:54:50 np0005593232 systemd[1]: session-11.scope: Consumed 2.410s CPU time.
Jan 23 03:54:50 np0005593232 systemd-logind[808]: Removed session 11.
Jan 23 03:54:56 np0005593232 systemd-logind[808]: New session 12 of user zuul.
Jan 23 03:54:56 np0005593232 systemd[1]: Started Session 12 of User zuul.
Jan 23 03:54:57 np0005593232 python3.9[53823]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 03:54:58 np0005593232 python3.9[53978]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 03:54:59 np0005593232 python3.9[54134]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 03:55:00 np0005593232 python3.9[54218]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 03:55:02 np0005593232 python3.9[54372]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 03:55:04 np0005593232 python3.9[54567]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:55:05 np0005593232 python3.9[54719]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:55:05 np0005593232 podman[54720]: 2026-01-23 08:55:05.829619066 +0000 UTC m=+0.050556518 system refresh
Jan 23 03:55:06 np0005593232 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 03:55:09 np0005593232 python3.9[54881]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:55:09 np0005593232 python3.9[55004]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769158508.5055625-197-74888563609297/.source.json follow=False _original_basename=podman_network_config.j2 checksum=fb2673bf3f015427cd75501f158555e8dea54049 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:55:10 np0005593232 python3.9[55156]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:55:11 np0005593232 python3.9[55279]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769158510.163146-242-102511043908925/.source.conf follow=False _original_basename=registries.conf.j2 checksum=7d6103ee1a01cd01d921f72f1af62704e0a47ff2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 03:55:12 np0005593232 python3.9[55431]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 03:55:12 np0005593232 python3.9[55583]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 03:55:13 np0005593232 python3.9[55735]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 03:55:14 np0005593232 python3.9[55887]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 03:55:15 np0005593232 python3.9[56039]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 03:55:17 np0005593232 python3.9[56192]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 03:55:18 np0005593232 python3.9[56346]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 03:55:19 np0005593232 python3.9[56498]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 03:55:20 np0005593232 python3.9[56650]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:55:21 np0005593232 python3.9[56803]: ansible-service_facts Invoked
Jan 23 03:55:21 np0005593232 network[56820]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 03:55:21 np0005593232 network[56821]: 'network-scripts' will be removed from distribution in near future.
Jan 23 03:55:21 np0005593232 network[56822]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 03:55:27 np0005593232 python3.9[57274]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 03:55:30 np0005593232 python3.9[57427]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 23 03:55:31 np0005593232 python3.9[57579]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:55:32 np0005593232 python3.9[57704]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769158531.2767992-674-268330514294887/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:55:33 np0005593232 python3.9[57858]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:55:33 np0005593232 python3.9[57983]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769158532.7494335-719-154989695660083/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:55:35 np0005593232 python3.9[58137]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:55:37 np0005593232 python3.9[58291]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 03:55:38 np0005593232 python3.9[58375]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 03:55:41 np0005593232 python3.9[58529]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 03:55:42 np0005593232 python3.9[58613]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 03:55:42 np0005593232 systemd[1]: Stopping NTP client/server...
Jan 23 03:55:42 np0005593232 chronyd[785]: chronyd exiting
Jan 23 03:55:42 np0005593232 systemd[1]: chronyd.service: Deactivated successfully.
Jan 23 03:55:42 np0005593232 systemd[1]: Stopped NTP client/server.
Jan 23 03:55:42 np0005593232 systemd[1]: Starting NTP client/server...
Jan 23 03:55:42 np0005593232 chronyd[58621]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 23 03:55:42 np0005593232 chronyd[58621]: Frequency -28.390 +/- 0.216 ppm read from /var/lib/chrony/drift
Jan 23 03:55:42 np0005593232 chronyd[58621]: Loaded seccomp filter (level 2)
Jan 23 03:55:42 np0005593232 systemd[1]: Started NTP client/server.
Jan 23 03:55:43 np0005593232 systemd[1]: session-12.scope: Deactivated successfully.
Jan 23 03:55:43 np0005593232 systemd[1]: session-12.scope: Consumed 25.485s CPU time.
Jan 23 03:55:43 np0005593232 systemd-logind[808]: Session 12 logged out. Waiting for processes to exit.
Jan 23 03:55:43 np0005593232 systemd-logind[808]: Removed session 12.
Jan 23 03:55:49 np0005593232 systemd-logind[808]: New session 13 of user zuul.
Jan 23 03:55:49 np0005593232 systemd[1]: Started Session 13 of User zuul.
Jan 23 03:55:49 np0005593232 python3.9[58802]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:55:50 np0005593232 python3.9[58954]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:55:51 np0005593232 python3.9[59077]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769158550.2458937-62-232324621839893/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:55:51 np0005593232 systemd[1]: session-13.scope: Deactivated successfully.
Jan 23 03:55:51 np0005593232 systemd[1]: session-13.scope: Consumed 1.585s CPU time.
Jan 23 03:55:51 np0005593232 systemd-logind[808]: Session 13 logged out. Waiting for processes to exit.
Jan 23 03:55:51 np0005593232 systemd-logind[808]: Removed session 13.
Jan 23 03:55:57 np0005593232 systemd-logind[808]: New session 14 of user zuul.
Jan 23 03:55:57 np0005593232 systemd[1]: Started Session 14 of User zuul.
Jan 23 03:55:58 np0005593232 python3.9[59255]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 03:55:59 np0005593232 python3.9[59411]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:56:00 np0005593232 python3.9[59586]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:56:01 np0005593232 python3.9[59709]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769158559.5557084-83-240266288947317/.source.json _original_basename=.kuvb5u0x follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:56:02 np0005593232 python3.9[59861]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:56:02 np0005593232 python3.9[59984]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769158561.873216-152-52711717454441/.source _original_basename=.plrodjzq follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:56:03 np0005593232 python3.9[60136]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 03:56:04 np0005593232 python3.9[60288]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:56:04 np0005593232 python3.9[60411]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769158563.8467407-224-40882879715265/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 03:56:05 np0005593232 python3.9[60563]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:56:06 np0005593232 python3.9[60686]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769158564.9982083-224-90713658742221/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 03:56:06 np0005593232 python3.9[60838]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:56:07 np0005593232 python3.9[60990]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:56:07 np0005593232 python3.9[61113]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769158566.922744-335-133938197304361/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:56:08 np0005593232 python3.9[61265]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:56:09 np0005593232 python3.9[61388]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769158568.133824-380-258858451900565/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:56:10 np0005593232 python3.9[61540]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 03:56:10 np0005593232 systemd[1]: Reloading.
Jan 23 03:56:10 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:56:10 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 03:56:10 np0005593232 systemd[1]: Reloading.
Jan 23 03:56:10 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 03:56:10 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:56:10 np0005593232 systemd[1]: Starting EDPM Container Shutdown...
Jan 23 03:56:10 np0005593232 systemd[1]: Finished EDPM Container Shutdown.
Jan 23 03:56:11 np0005593232 python3.9[61769]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:56:12 np0005593232 python3.9[61892]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769158571.238475-449-175155403109617/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:56:13 np0005593232 python3.9[62044]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:56:13 np0005593232 python3.9[62167]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769158572.6312075-494-218025566821274/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:56:14 np0005593232 python3.9[62319]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 03:56:14 np0005593232 systemd[1]: Reloading.
Jan 23 03:56:14 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:56:14 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 03:56:14 np0005593232 systemd[1]: Reloading.
Jan 23 03:56:14 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 03:56:14 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:56:15 np0005593232 systemd[1]: Starting Create netns directory...
Jan 23 03:56:15 np0005593232 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 23 03:56:15 np0005593232 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 23 03:56:15 np0005593232 systemd[1]: Finished Create netns directory.
Jan 23 03:56:15 np0005593232 python3.9[62544]: ansible-ansible.builtin.service_facts Invoked
Jan 23 03:56:16 np0005593232 network[62561]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 03:56:16 np0005593232 network[62562]: 'network-scripts' will be removed from distribution in near future.
Jan 23 03:56:16 np0005593232 network[62563]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 03:56:20 np0005593232 python3.9[62825]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 03:56:20 np0005593232 systemd[1]: Reloading.
Jan 23 03:56:20 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:56:20 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 03:56:20 np0005593232 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 23 03:56:20 np0005593232 iptables.init[62864]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 23 03:56:20 np0005593232 iptables.init[62864]: iptables: Flushing firewall rules: [  OK  ]
Jan 23 03:56:20 np0005593232 systemd[1]: iptables.service: Deactivated successfully.
Jan 23 03:56:20 np0005593232 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 23 03:56:21 np0005593232 python3.9[63060]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 03:56:22 np0005593232 python3.9[63214]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 03:56:22 np0005593232 systemd[1]: Reloading.
Jan 23 03:56:22 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:56:22 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 03:56:23 np0005593232 systemd[1]: Starting Netfilter Tables...
Jan 23 03:56:23 np0005593232 systemd[1]: Finished Netfilter Tables.
Jan 23 03:56:24 np0005593232 python3.9[63406]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:56:25 np0005593232 python3.9[63559]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:56:25 np0005593232 python3.9[63684]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769158584.7541478-701-117863515837753/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:56:26 np0005593232 python3.9[63837]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 03:56:26 np0005593232 systemd[1]: Reloading OpenSSH server daemon...
Jan 23 03:56:26 np0005593232 systemd[1]: Reloaded OpenSSH server daemon.
Jan 23 03:56:27 np0005593232 python3.9[63993]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:56:28 np0005593232 python3.9[64145]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:56:28 np0005593232 python3.9[64268]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769158587.607297-794-135891415516936/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:56:29 np0005593232 python3.9[64420]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 23 03:56:29 np0005593232 systemd[1]: Starting Time & Date Service...
Jan 23 03:56:29 np0005593232 systemd[1]: Started Time & Date Service.
Jan 23 03:56:31 np0005593232 python3.9[64576]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:56:32 np0005593232 python3.9[64728]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:56:33 np0005593232 python3.9[64851]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769158592.0554056-899-89019739510369/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:56:33 np0005593232 python3.9[65003]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:56:34 np0005593232 python3.9[65126]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769158593.2715352-944-202727994684116/.source.yaml _original_basename=.nsjyizxq follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:56:35 np0005593232 python3.9[65278]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:56:35 np0005593232 python3.9[65401]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769158594.5744631-989-266836910577195/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:56:36 np0005593232 python3.9[65553]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:56:36 np0005593232 python3.9[65706]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:56:37 np0005593232 python3[65859]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 23 03:56:38 np0005593232 python3.9[66011]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:56:39 np0005593232 python3.9[66134]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769158598.3022695-1106-197312587493887/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:56:40 np0005593232 python3.9[66286]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:56:40 np0005593232 python3.9[66409]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769158599.8002408-1151-168149312863592/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:56:41 np0005593232 python3.9[66561]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:56:42 np0005593232 python3.9[66684]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769158601.0497825-1196-148265497464489/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:56:42 np0005593232 python3.9[66836]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:56:43 np0005593232 python3.9[66959]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769158602.3801513-1241-39354643434539/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:56:44 np0005593232 python3.9[67111]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 03:56:44 np0005593232 python3.9[67234]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769158603.669803-1286-38460937554328/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:56:45 np0005593232 python3.9[67386]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:56:46 np0005593232 python3.9[67538]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:56:47 np0005593232 python3.9[67697]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:56:48 np0005593232 python3.9[67850]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:56:48 np0005593232 python3.9[68002]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:56:49 np0005593232 python3.9[68154]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 23 03:56:50 np0005593232 python3.9[68307]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 23 03:56:51 np0005593232 systemd[1]: session-14.scope: Deactivated successfully.
Jan 23 03:56:51 np0005593232 systemd[1]: session-14.scope: Consumed 34.255s CPU time.
Jan 23 03:56:51 np0005593232 systemd-logind[808]: Session 14 logged out. Waiting for processes to exit.
Jan 23 03:56:51 np0005593232 systemd-logind[808]: Removed session 14.
Jan 23 03:56:56 np0005593232 systemd-logind[808]: New session 15 of user zuul.
Jan 23 03:56:56 np0005593232 systemd[1]: Started Session 15 of User zuul.
Jan 23 03:56:57 np0005593232 python3.9[68488]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 23 03:56:58 np0005593232 python3.9[68640]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 03:56:59 np0005593232 python3.9[68792]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 03:57:00 np0005593232 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 23 03:57:00 np0005593232 python3.9[68946]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD7jdzOPltwN8PSb4q9DCiO5zY7TIK6sENpltjjN4gdZgxOTsj/dxnfxJlO2lYI1dFyyFnDdZj88a4x1KI5Bnnvl5KRvvZiianfivZWKq9Ngf9fzf7+5CsDFBiu6a7GAfXMf9FocVpqlXf7fsXmb5Iv2xUpNnye4EFIuW965X3SNrRpujRnDe+i0lIwrOsus4R86qn38MWOLfPBAWFYdBaVfTUYjC0eT/I81Y/T2RKqf7XK/bsuHobZ+/a7lymuPsS9L0DFg25ZoIlvkPUVfZxTO5FCyw8GMR+AgbnMQyHwx2JAmewwH3M2l+zVdDQjsE1ZRFlJCmwle9LBa1oFhuLfxLqsykQploeB5Ch/VppbnRQ/GamwWLU5HEKMH2wZ6IymURW7nSStlEhNWvK+Bb9rIy65M6AFOEW94xId4nc+IraS6rc2cuM3Rp97S/6olqjlFDZisdUwdAlhIKuJjA7SsYZ6HyCEbRN3mvMnWbkqpyY605kewQ6kdmucNeWgRtk=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE++PPNOKtggGl2mGWEm1DV2WpblvGA/F2TEEVeMrsU2#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP3uOoytpWGDF46u3wwDFxwF05HMnZd51GvbceZrDgZRmc5sxbF+OawPD9kGTcjnaUTzvqWgbFNvcmpuaNTnpzc=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsLbdPIA8nc52wSKcOItc1xJ6faU3FwhWecUgXZZC+Q1wLSrdN9vgOExBhQSwwodluzJ5/GT9VbCuujyBvk7RMEim1+fw7T58Th56PR8y2lL6F6F3ni4S21QxInTLml+/id8wwEZAkFjbCF/AjCRDyH7a6H4wIZtd5ZuzWJuuBENNdtu/qD1QQYkNegqllogNpkdpAFZgvee26yw2sbCX8kpbJoJsowaQUckoRtT2jj7985CLxErKZ8YO8ZozjfuCDCKbcJT0KFimievJZmKXvGaWG5H+P509XDsfN62aQr22US8FbYjdK1lfrJoetkc/MK4h7QuCs6MH2qYiqXIkJYKMSReM+sH3X7V7pSWSUkr0DHREVvBGcC2lRSx45lUCTEtcTY7XmxGORvCORMYla0l1H3mEIkfYLS4sXYtRSHkyFnyQgbNP5MnrmXlK0vrAA81r5U+dOhIL/H2e7S4xcLItH7weUOHIAmCj266mm9+xJyyd7NZ+eUgS0Md5p4Bc=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUSudroiFEdRPXgUCqRHbNRLelYP5RQGMMCn6zD8pfH#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDJLsx8RxJz6M7PIyGcFdzR+Ldl788501Y8ZWLJ8hnDzMCaRkGjzE+kzO/uN75IEtV3aVEl1jNQlk7wON+lORGQ=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq2Yxebv3BUxXHPuf6nN00teEMYUUVEWMZOqcwNO1dyibdbyxre6VweeeiBR/lerW1mIcmB67juCuLffEgDo8uPtZx9HrD1psd+ji78YeJuvbKIEcTwdtGF0I8PeogHunx+4KBxFsHeF6JHN9+H7lTHiSSIDFzk9BwDkAKEWsYHe8z+5SPDU//XiYNv0drE59KiQF586rnjPR3VZk6WaR+hp2PiHbUUSOvnyB4kI4bCXSCU/Oxv7HDvgeCJapABjisMZg4aiteZ7EaD1yVndkQiS6OxfOGP1srgtNkRL4Idc/XCFXH754lbRd8GzUF0n8N0HbWTcFDuTU+bvhuIH+3EDNxsDQkSCdJTw2EPb/mqZVdXSFxLXUBcXnYkBWZirpgC3g6okg2RQU2bxigFs7lFwJT6QE+wz0DK7Z3ib0XQxjRlY6PIwn1D2soMwKVarxpeM2FfsGrHMHaHioRTVbKpzBMA1oUICSUCvzyhd0I43cO2rUEK/8EMYSsTVRulKs=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII4nVnNUbCVQAtKJF7UUtMQxNhMw9eVlRVofBpQ70iUi#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPqfkBgoQjr/gZBK1F9K576GMtkxSY6lVgROItGrW+R9EA2lvnOt71IGO0M0lGVvCkTtLktdNpSsYnBu2cJn+4c=#012 create=True mode=0644 path=/tmp/ansible.y0_iwtv3 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:57:01 np0005593232 python3.9[69098]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.y0_iwtv3' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:57:02 np0005593232 python3.9[69252]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.y0_iwtv3 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:57:03 np0005593232 systemd[1]: session-15.scope: Deactivated successfully.
Jan 23 03:57:03 np0005593232 systemd[1]: session-15.scope: Consumed 3.275s CPU time.
Jan 23 03:57:03 np0005593232 systemd-logind[808]: Session 15 logged out. Waiting for processes to exit.
Jan 23 03:57:03 np0005593232 systemd-logind[808]: Removed session 15.
Jan 23 03:57:09 np0005593232 systemd-logind[808]: New session 16 of user zuul.
Jan 23 03:57:09 np0005593232 systemd[1]: Started Session 16 of User zuul.
Jan 23 03:57:10 np0005593232 python3.9[69431]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 03:57:11 np0005593232 python3.9[69587]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 23 03:57:13 np0005593232 python3.9[69741]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 03:57:14 np0005593232 python3.9[69894]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:57:15 np0005593232 python3.9[70047]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 03:57:16 np0005593232 python3.9[70201]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:57:17 np0005593232 python3.9[70356]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:57:17 np0005593232 systemd[1]: session-16.scope: Deactivated successfully.
Jan 23 03:57:17 np0005593232 systemd[1]: session-16.scope: Consumed 5.026s CPU time.
Jan 23 03:57:17 np0005593232 systemd-logind[808]: Session 16 logged out. Waiting for processes to exit.
Jan 23 03:57:17 np0005593232 systemd-logind[808]: Removed session 16.
Jan 23 03:57:23 np0005593232 systemd-logind[808]: New session 17 of user zuul.
Jan 23 03:57:23 np0005593232 systemd[1]: Started Session 17 of User zuul.
Jan 23 03:57:24 np0005593232 python3.9[70534]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 03:57:26 np0005593232 python3.9[70690]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 03:57:26 np0005593232 python3.9[70774]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 23 03:57:29 np0005593232 python3.9[70925]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:57:30 np0005593232 python3.9[71076]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 03:57:31 np0005593232 python3.9[71226]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 03:57:31 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 03:57:31 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 03:57:32 np0005593232 python3.9[71377]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 03:57:33 np0005593232 systemd[1]: session-17.scope: Deactivated successfully.
Jan 23 03:57:33 np0005593232 systemd[1]: session-17.scope: Consumed 6.240s CPU time.
Jan 23 03:57:33 np0005593232 systemd-logind[808]: Session 17 logged out. Waiting for processes to exit.
Jan 23 03:57:33 np0005593232 systemd-logind[808]: Removed session 17.
Jan 23 03:57:40 np0005593232 systemd-logind[808]: New session 18 of user zuul.
Jan 23 03:57:40 np0005593232 systemd[1]: Started Session 18 of User zuul.
Jan 23 03:57:46 np0005593232 python3[72143]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 03:57:48 np0005593232 python3[72238]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 23 03:57:50 np0005593232 python3[72265]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 03:57:50 np0005593232 python3[72291]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:57:50 np0005593232 kernel: loop: module loaded
Jan 23 03:57:50 np0005593232 kernel: loop3: detected capacity change from 0 to 14680064
Jan 23 03:57:51 np0005593232 python3[72326]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:57:51 np0005593232 lvm[72329]: PV /dev/loop3 not used.
Jan 23 03:57:51 np0005593232 lvm[72338]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 03:57:51 np0005593232 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 23 03:57:51 np0005593232 lvm[72340]:  1 logical volume(s) in volume group "ceph_vg0" now active
Jan 23 03:57:51 np0005593232 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 23 03:57:51 np0005593232 python3[72418]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 03:57:52 np0005593232 python3[72491]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769158671.4980736-36869-119555111808332/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:57:52 np0005593232 chronyd[58621]: Selected source 167.160.187.179 (pool.ntp.org)
Jan 23 03:57:52 np0005593232 python3[72541]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 03:57:53 np0005593232 systemd[1]: Reloading.
Jan 23 03:57:53 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:57:53 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 03:57:53 np0005593232 systemd[1]: Starting Ceph OSD losetup...
Jan 23 03:57:53 np0005593232 bash[72583]: /dev/loop3: [64513]:4328477 (/var/lib/ceph-osd-0.img)
Jan 23 03:57:53 np0005593232 systemd[1]: Finished Ceph OSD losetup.
Jan 23 03:57:53 np0005593232 lvm[72584]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 03:57:53 np0005593232 lvm[72584]: VG ceph_vg0 finished
Jan 23 03:57:56 np0005593232 python3[72608]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 03:58:00 np0005593232 python3[72701]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 23 03:58:01 np0005593232 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 03:58:01 np0005593232 systemd[1]: Starting man-db-cache-update.service...
Jan 23 03:58:02 np0005593232 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 03:58:02 np0005593232 systemd[1]: Finished man-db-cache-update.service.
Jan 23 03:58:02 np0005593232 systemd[1]: run-r20a948e43e71424099948b35bbd08696.service: Deactivated successfully.
Jan 23 03:58:03 np0005593232 python3[72812]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 03:58:03 np0005593232 python3[72840]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:58:03 np0005593232 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 03:58:03 np0005593232 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 03:58:04 np0005593232 python3[72902]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:58:04 np0005593232 python3[72928]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:58:04 np0005593232 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 03:58:05 np0005593232 python3[73006]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 03:58:05 np0005593232 python3[73079]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769158684.8887668-37060-146010107716042/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:58:06 np0005593232 python3[73181]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 03:58:06 np0005593232 python3[73254]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769158685.957872-37078-38834410572605/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 03:58:06 np0005593232 python3[73304]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 03:58:07 np0005593232 python3[73332]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 03:58:07 np0005593232 python3[73360]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 03:58:08 np0005593232 python3[73386]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 03:58:08 np0005593232 python3[73412]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:58:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 03:58:08 np0005593232 systemd[1]: Created slice User Slice of UID 42477.
Jan 23 03:58:08 np0005593232 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 23 03:58:08 np0005593232 systemd-logind[808]: New session 19 of user ceph-admin.
Jan 23 03:58:08 np0005593232 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 23 03:58:08 np0005593232 systemd[1]: Starting User Manager for UID 42477...
Jan 23 03:58:08 np0005593232 systemd[73433]: Queued start job for default target Main User Target.
Jan 23 03:58:08 np0005593232 systemd[73433]: Created slice User Application Slice.
Jan 23 03:58:08 np0005593232 systemd[73433]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 23 03:58:08 np0005593232 systemd[73433]: Started Daily Cleanup of User's Temporary Directories.
Jan 23 03:58:08 np0005593232 systemd[73433]: Reached target Paths.
Jan 23 03:58:08 np0005593232 systemd[73433]: Reached target Timers.
Jan 23 03:58:08 np0005593232 systemd[73433]: Starting D-Bus User Message Bus Socket...
Jan 23 03:58:08 np0005593232 systemd[73433]: Starting Create User's Volatile Files and Directories...
Jan 23 03:58:08 np0005593232 systemd[73433]: Listening on D-Bus User Message Bus Socket.
Jan 23 03:58:08 np0005593232 systemd[73433]: Finished Create User's Volatile Files and Directories.
Jan 23 03:58:08 np0005593232 systemd[73433]: Reached target Sockets.
Jan 23 03:58:08 np0005593232 systemd[73433]: Reached target Basic System.
Jan 23 03:58:08 np0005593232 systemd[73433]: Reached target Main User Target.
Jan 23 03:58:08 np0005593232 systemd[73433]: Startup finished in 113ms.
Jan 23 03:58:08 np0005593232 systemd[1]: Started User Manager for UID 42477.
Jan 23 03:58:08 np0005593232 systemd[1]: Started Session 19 of User ceph-admin.
Jan 23 03:58:08 np0005593232 systemd[1]: session-19.scope: Deactivated successfully.
Jan 23 03:58:08 np0005593232 systemd-logind[808]: Session 19 logged out. Waiting for processes to exit.
Jan 23 03:58:08 np0005593232 systemd-logind[808]: Removed session 19.
Jan 23 03:58:10 np0005593232 systemd[1]: var-lib-containers-storage-overlay-compat3842376490-merged.mount: Deactivated successfully.
Jan 23 03:58:11 np0005593232 systemd[1]: var-lib-containers-storage-overlay-compat3842376490-lower\x2dmapped.mount: Deactivated successfully.
Jan 23 03:58:19 np0005593232 systemd[1]: Stopping User Manager for UID 42477...
Jan 23 03:58:19 np0005593232 systemd[73433]: Activating special unit Exit the Session...
Jan 23 03:58:19 np0005593232 systemd[73433]: Stopped target Main User Target.
Jan 23 03:58:19 np0005593232 systemd[73433]: Stopped target Basic System.
Jan 23 03:58:19 np0005593232 systemd[73433]: Stopped target Paths.
Jan 23 03:58:19 np0005593232 systemd[73433]: Stopped target Sockets.
Jan 23 03:58:19 np0005593232 systemd[73433]: Stopped target Timers.
Jan 23 03:58:19 np0005593232 systemd[73433]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 23 03:58:19 np0005593232 systemd[73433]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 23 03:58:19 np0005593232 systemd[73433]: Closed D-Bus User Message Bus Socket.
Jan 23 03:58:19 np0005593232 systemd[73433]: Stopped Create User's Volatile Files and Directories.
Jan 23 03:58:19 np0005593232 systemd[73433]: Removed slice User Application Slice.
Jan 23 03:58:19 np0005593232 systemd[73433]: Reached target Shutdown.
Jan 23 03:58:19 np0005593232 systemd[73433]: Finished Exit the Session.
Jan 23 03:58:19 np0005593232 systemd[73433]: Reached target Exit the Session.
Jan 23 03:58:19 np0005593232 systemd[1]: user@42477.service: Deactivated successfully.
Jan 23 03:58:19 np0005593232 systemd[1]: Stopped User Manager for UID 42477.
Jan 23 03:58:19 np0005593232 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 23 03:58:19 np0005593232 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 23 03:58:19 np0005593232 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 23 03:58:19 np0005593232 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 23 03:58:19 np0005593232 systemd[1]: Removed slice User Slice of UID 42477.
Jan 23 03:58:39 np0005593232 podman[73488]: 2026-01-23 08:58:39.236417309 +0000 UTC m=+30.297091458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:58:39 np0005593232 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 03:58:39 np0005593232 podman[73548]: 2026-01-23 08:58:39.316529064 +0000 UTC m=+0.049367353 container create c5e766388e28aacb78233fb3c65f8cda737fec1df69dda1112b3c9ed45647da1 (image=quay.io/ceph/ceph:v18, name=practical_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 03:58:39 np0005593232 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck1701965829-merged.mount: Deactivated successfully.
Jan 23 03:58:39 np0005593232 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 23 03:58:39 np0005593232 systemd[1]: Started libpod-conmon-c5e766388e28aacb78233fb3c65f8cda737fec1df69dda1112b3c9ed45647da1.scope.
Jan 23 03:58:39 np0005593232 podman[73548]: 2026-01-23 08:58:39.292000967 +0000 UTC m=+0.024839306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:58:39 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:58:39 np0005593232 podman[73548]: 2026-01-23 08:58:39.442225982 +0000 UTC m=+0.175064291 container init c5e766388e28aacb78233fb3c65f8cda737fec1df69dda1112b3c9ed45647da1 (image=quay.io/ceph/ceph:v18, name=practical_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 23 03:58:39 np0005593232 podman[73548]: 2026-01-23 08:58:39.449038725 +0000 UTC m=+0.181877014 container start c5e766388e28aacb78233fb3c65f8cda737fec1df69dda1112b3c9ed45647da1 (image=quay.io/ceph/ceph:v18, name=practical_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 03:58:39 np0005593232 podman[73548]: 2026-01-23 08:58:39.452659868 +0000 UTC m=+0.185498157 container attach c5e766388e28aacb78233fb3c65f8cda737fec1df69dda1112b3c9ed45647da1 (image=quay.io/ceph/ceph:v18, name=practical_elgamal, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 03:58:39 np0005593232 practical_elgamal[73565]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Jan 23 03:58:39 np0005593232 systemd[1]: libpod-c5e766388e28aacb78233fb3c65f8cda737fec1df69dda1112b3c9ed45647da1.scope: Deactivated successfully.
Jan 23 03:58:39 np0005593232 podman[73548]: 2026-01-23 08:58:39.797834517 +0000 UTC m=+0.530672806 container died c5e766388e28aacb78233fb3c65f8cda737fec1df69dda1112b3c9ed45647da1 (image=quay.io/ceph/ceph:v18, name=practical_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 03:58:39 np0005593232 podman[73548]: 2026-01-23 08:58:39.838214513 +0000 UTC m=+0.571052792 container remove c5e766388e28aacb78233fb3c65f8cda737fec1df69dda1112b3c9ed45647da1 (image=quay.io/ceph/ceph:v18, name=practical_elgamal, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 03:58:39 np0005593232 systemd[1]: libpod-conmon-c5e766388e28aacb78233fb3c65f8cda737fec1df69dda1112b3c9ed45647da1.scope: Deactivated successfully.
Jan 23 03:58:39 np0005593232 podman[73581]: 2026-01-23 08:58:39.90398368 +0000 UTC m=+0.044514625 container create 69559b8d10b04332141722d31361986614e7440da1a679259493d185911388b1 (image=quay.io/ceph/ceph:v18, name=laughing_shtern, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 03:58:39 np0005593232 systemd[1]: Started libpod-conmon-69559b8d10b04332141722d31361986614e7440da1a679259493d185911388b1.scope.
Jan 23 03:58:39 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:58:39 np0005593232 podman[73581]: 2026-01-23 08:58:39.963693735 +0000 UTC m=+0.104224680 container init 69559b8d10b04332141722d31361986614e7440da1a679259493d185911388b1 (image=quay.io/ceph/ceph:v18, name=laughing_shtern, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 03:58:39 np0005593232 podman[73581]: 2026-01-23 08:58:39.968202693 +0000 UTC m=+0.108733638 container start 69559b8d10b04332141722d31361986614e7440da1a679259493d185911388b1 (image=quay.io/ceph/ceph:v18, name=laughing_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 03:58:39 np0005593232 laughing_shtern[73598]: 167 167
Jan 23 03:58:39 np0005593232 podman[73581]: 2026-01-23 08:58:39.972054092 +0000 UTC m=+0.112585057 container attach 69559b8d10b04332141722d31361986614e7440da1a679259493d185911388b1 (image=quay.io/ceph/ceph:v18, name=laughing_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 03:58:39 np0005593232 systemd[1]: libpod-69559b8d10b04332141722d31361986614e7440da1a679259493d185911388b1.scope: Deactivated successfully.
Jan 23 03:58:39 np0005593232 podman[73581]: 2026-01-23 08:58:39.973070321 +0000 UTC m=+0.113601266 container died 69559b8d10b04332141722d31361986614e7440da1a679259493d185911388b1 (image=quay.io/ceph/ceph:v18, name=laughing_shtern, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Jan 23 03:58:39 np0005593232 podman[73581]: 2026-01-23 08:58:39.884857137 +0000 UTC m=+0.025388102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:58:40 np0005593232 podman[73581]: 2026-01-23 08:58:40.000917762 +0000 UTC m=+0.141448717 container remove 69559b8d10b04332141722d31361986614e7440da1a679259493d185911388b1 (image=quay.io/ceph/ceph:v18, name=laughing_shtern, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 03:58:40 np0005593232 systemd[1]: libpod-conmon-69559b8d10b04332141722d31361986614e7440da1a679259493d185911388b1.scope: Deactivated successfully.
Jan 23 03:58:40 np0005593232 podman[73616]: 2026-01-23 08:58:40.055882682 +0000 UTC m=+0.036534918 container create c63ac7daa416cedff3632af3e6e0f4d53805786000da9218d317f0e87a717e00 (image=quay.io/ceph/ceph:v18, name=dazzling_bartik, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 03:58:40 np0005593232 systemd[1]: Started libpod-conmon-c63ac7daa416cedff3632af3e6e0f4d53805786000da9218d317f0e87a717e00.scope.
Jan 23 03:58:40 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:58:40 np0005593232 podman[73616]: 2026-01-23 08:58:40.125250341 +0000 UTC m=+0.105902577 container init c63ac7daa416cedff3632af3e6e0f4d53805786000da9218d317f0e87a717e00 (image=quay.io/ceph/ceph:v18, name=dazzling_bartik, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 03:58:40 np0005593232 podman[73616]: 2026-01-23 08:58:40.129543333 +0000 UTC m=+0.110195569 container start c63ac7daa416cedff3632af3e6e0f4d53805786000da9218d317f0e87a717e00 (image=quay.io/ceph/ceph:v18, name=dazzling_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 03:58:40 np0005593232 podman[73616]: 2026-01-23 08:58:40.132438095 +0000 UTC m=+0.113090351 container attach c63ac7daa416cedff3632af3e6e0f4d53805786000da9218d317f0e87a717e00 (image=quay.io/ceph/ceph:v18, name=dazzling_bartik, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 03:58:40 np0005593232 podman[73616]: 2026-01-23 08:58:40.040602658 +0000 UTC m=+0.021254904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:58:40 np0005593232 dazzling_bartik[73631]: AQBAOHNpchgFCRAAGjJOezcnO0LDybdwMPmKLA==
Jan 23 03:58:40 np0005593232 systemd[1]: libpod-c63ac7daa416cedff3632af3e6e0f4d53805786000da9218d317f0e87a717e00.scope: Deactivated successfully.
Jan 23 03:58:40 np0005593232 podman[73616]: 2026-01-23 08:58:40.154814861 +0000 UTC m=+0.135467087 container died c63ac7daa416cedff3632af3e6e0f4d53805786000da9218d317f0e87a717e00 (image=quay.io/ceph/ceph:v18, name=dazzling_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 03:58:40 np0005593232 podman[73616]: 2026-01-23 08:58:40.191285746 +0000 UTC m=+0.171937982 container remove c63ac7daa416cedff3632af3e6e0f4d53805786000da9218d317f0e87a717e00 (image=quay.io/ceph/ceph:v18, name=dazzling_bartik, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 23 03:58:40 np0005593232 systemd[1]: libpod-conmon-c63ac7daa416cedff3632af3e6e0f4d53805786000da9218d317f0e87a717e00.scope: Deactivated successfully.
Jan 23 03:58:40 np0005593232 systemd[1]: var-lib-containers-storage-overlay-bcfb32f1a63bb5dc3926f7c9d3a8740616e943361e2db530316411597d63167a-merged.mount: Deactivated successfully.
Jan 23 03:58:40 np0005593232 podman[73651]: 2026-01-23 08:58:40.271357379 +0000 UTC m=+0.061275151 container create 58f508fbe0b8345b64075a713b9d6ff5da582c2cad76e9cc5abbc71187f3b0fe (image=quay.io/ceph/ceph:v18, name=stupefied_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 03:58:40 np0005593232 systemd[1]: Started libpod-conmon-58f508fbe0b8345b64075a713b9d6ff5da582c2cad76e9cc5abbc71187f3b0fe.scope.
Jan 23 03:58:40 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:58:40 np0005593232 podman[73651]: 2026-01-23 08:58:40.229343186 +0000 UTC m=+0.019260948 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:58:40 np0005593232 podman[73651]: 2026-01-23 08:58:40.336863348 +0000 UTC m=+0.126781130 container init 58f508fbe0b8345b64075a713b9d6ff5da582c2cad76e9cc5abbc71187f3b0fe (image=quay.io/ceph/ceph:v18, name=stupefied_villani, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 03:58:40 np0005593232 podman[73651]: 2026-01-23 08:58:40.342327634 +0000 UTC m=+0.132245406 container start 58f508fbe0b8345b64075a713b9d6ff5da582c2cad76e9cc5abbc71187f3b0fe (image=quay.io/ceph/ceph:v18, name=stupefied_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Jan 23 03:58:40 np0005593232 podman[73651]: 2026-01-23 08:58:40.345276837 +0000 UTC m=+0.135194609 container attach 58f508fbe0b8345b64075a713b9d6ff5da582c2cad76e9cc5abbc71187f3b0fe (image=quay.io/ceph/ceph:v18, name=stupefied_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Jan 23 03:58:40 np0005593232 stupefied_villani[73666]: AQBAOHNp5L2iFRAA86HKsgR2FPenTBGma95rfw==
Jan 23 03:58:40 np0005593232 systemd[1]: libpod-58f508fbe0b8345b64075a713b9d6ff5da582c2cad76e9cc5abbc71187f3b0fe.scope: Deactivated successfully.
Jan 23 03:58:40 np0005593232 podman[73651]: 2026-01-23 08:58:40.367011214 +0000 UTC m=+0.156928986 container died 58f508fbe0b8345b64075a713b9d6ff5da582c2cad76e9cc5abbc71187f3b0fe (image=quay.io/ceph/ceph:v18, name=stupefied_villani, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 03:58:40 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6546bad74be5027296074a5b869e8b17cf525b83d227099914437c7c40609ad0-merged.mount: Deactivated successfully.
Jan 23 03:58:40 np0005593232 podman[73651]: 2026-01-23 08:58:40.397564552 +0000 UTC m=+0.187482324 container remove 58f508fbe0b8345b64075a713b9d6ff5da582c2cad76e9cc5abbc71187f3b0fe (image=quay.io/ceph/ceph:v18, name=stupefied_villani, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 03:58:40 np0005593232 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 03:58:40 np0005593232 systemd[1]: libpod-conmon-58f508fbe0b8345b64075a713b9d6ff5da582c2cad76e9cc5abbc71187f3b0fe.scope: Deactivated successfully.
Jan 23 03:58:40 np0005593232 podman[73683]: 2026-01-23 08:58:40.45282565 +0000 UTC m=+0.035725365 container create a9a94cd2af53918ec00ee5b92a2e5cddc1b92c081a7e6106855d97b34850be28 (image=quay.io/ceph/ceph:v18, name=bold_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 23 03:58:40 np0005593232 systemd[1]: Started libpod-conmon-a9a94cd2af53918ec00ee5b92a2e5cddc1b92c081a7e6106855d97b34850be28.scope.
Jan 23 03:58:40 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:58:40 np0005593232 podman[73683]: 2026-01-23 08:58:40.437061963 +0000 UTC m=+0.019961698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:58:42 np0005593232 podman[73683]: 2026-01-23 08:58:42.049047473 +0000 UTC m=+1.631947208 container init a9a94cd2af53918ec00ee5b92a2e5cddc1b92c081a7e6106855d97b34850be28 (image=quay.io/ceph/ceph:v18, name=bold_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 23 03:58:42 np0005593232 podman[73683]: 2026-01-23 08:58:42.054157658 +0000 UTC m=+1.637057373 container start a9a94cd2af53918ec00ee5b92a2e5cddc1b92c081a7e6106855d97b34850be28 (image=quay.io/ceph/ceph:v18, name=bold_ramanujan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 23 03:58:42 np0005593232 bold_ramanujan[73699]: AQBCOHNph76CBBAAAKQmWQKpGFNMS25WQq3jZw==
Jan 23 03:58:42 np0005593232 systemd[1]: libpod-a9a94cd2af53918ec00ee5b92a2e5cddc1b92c081a7e6106855d97b34850be28.scope: Deactivated successfully.
Jan 23 03:58:42 np0005593232 podman[73683]: 2026-01-23 08:58:42.100619616 +0000 UTC m=+1.683519351 container attach a9a94cd2af53918ec00ee5b92a2e5cddc1b92c081a7e6106855d97b34850be28 (image=quay.io/ceph/ceph:v18, name=bold_ramanujan, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 03:58:42 np0005593232 podman[73683]: 2026-01-23 08:58:42.101257034 +0000 UTC m=+1.684156749 container died a9a94cd2af53918ec00ee5b92a2e5cddc1b92c081a7e6106855d97b34850be28 (image=quay.io/ceph/ceph:v18, name=bold_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 03:58:42 np0005593232 systemd[1]: var-lib-containers-storage-overlay-365a01926634b5e56a9f51485a626a387d969e6e186cd7bf0d6e4e575494d06d-merged.mount: Deactivated successfully.
Jan 23 03:58:42 np0005593232 podman[73683]: 2026-01-23 08:58:42.194509023 +0000 UTC m=+1.777408738 container remove a9a94cd2af53918ec00ee5b92a2e5cddc1b92c081a7e6106855d97b34850be28 (image=quay.io/ceph/ceph:v18, name=bold_ramanujan, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 23 03:58:42 np0005593232 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 03:58:42 np0005593232 systemd[1]: libpod-conmon-a9a94cd2af53918ec00ee5b92a2e5cddc1b92c081a7e6106855d97b34850be28.scope: Deactivated successfully.
Jan 23 03:58:42 np0005593232 podman[73719]: 2026-01-23 08:58:42.283242171 +0000 UTC m=+0.064636316 container create c31c36c948a48d7fefab4e9d0aafd0a3c9fe6b825b4aecac491dba2e7c6582f4 (image=quay.io/ceph/ceph:v18, name=stoic_thompson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 23 03:58:42 np0005593232 podman[73719]: 2026-01-23 08:58:42.241344631 +0000 UTC m=+0.022738786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:58:42 np0005593232 systemd[1]: Started libpod-conmon-c31c36c948a48d7fefab4e9d0aafd0a3c9fe6b825b4aecac491dba2e7c6582f4.scope.
Jan 23 03:58:42 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:58:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acb87c84333201f8f12cc1615bf2e452d1bbe0e21e1019088b983817a615c2d6/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:42 np0005593232 podman[73719]: 2026-01-23 08:58:42.409893506 +0000 UTC m=+0.191287701 container init c31c36c948a48d7fefab4e9d0aafd0a3c9fe6b825b4aecac491dba2e7c6582f4 (image=quay.io/ceph/ceph:v18, name=stoic_thompson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 03:58:42 np0005593232 podman[73719]: 2026-01-23 08:58:42.415044962 +0000 UTC m=+0.196439097 container start c31c36c948a48d7fefab4e9d0aafd0a3c9fe6b825b4aecac491dba2e7c6582f4 (image=quay.io/ceph/ceph:v18, name=stoic_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 03:58:42 np0005593232 podman[73719]: 2026-01-23 08:58:42.426394104 +0000 UTC m=+0.207788309 container attach c31c36c948a48d7fefab4e9d0aafd0a3c9fe6b825b4aecac491dba2e7c6582f4 (image=quay.io/ceph/ceph:v18, name=stoic_thompson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 23 03:58:42 np0005593232 stoic_thompson[73734]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 23 03:58:42 np0005593232 stoic_thompson[73734]: setting min_mon_release = pacific
Jan 23 03:58:42 np0005593232 stoic_thompson[73734]: /usr/bin/monmaptool: set fsid to e1533653-0a5a-584c-b34b-8689f0d32e77
Jan 23 03:58:42 np0005593232 stoic_thompson[73734]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 23 03:58:42 np0005593232 systemd[1]: libpod-c31c36c948a48d7fefab4e9d0aafd0a3c9fe6b825b4aecac491dba2e7c6582f4.scope: Deactivated successfully.
Jan 23 03:58:42 np0005593232 podman[73719]: 2026-01-23 08:58:42.445698902 +0000 UTC m=+0.227093037 container died c31c36c948a48d7fefab4e9d0aafd0a3c9fe6b825b4aecac491dba2e7c6582f4 (image=quay.io/ceph/ceph:v18, name=stoic_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 03:58:42 np0005593232 systemd[1]: var-lib-containers-storage-overlay-acb87c84333201f8f12cc1615bf2e452d1bbe0e21e1019088b983817a615c2d6-merged.mount: Deactivated successfully.
Jan 23 03:58:42 np0005593232 podman[73719]: 2026-01-23 08:58:42.56751512 +0000 UTC m=+0.348909255 container remove c31c36c948a48d7fefab4e9d0aafd0a3c9fe6b825b4aecac491dba2e7c6582f4 (image=quay.io/ceph/ceph:v18, name=stoic_thompson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 03:58:42 np0005593232 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 03:58:42 np0005593232 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 03:58:42 np0005593232 systemd[1]: libpod-conmon-c31c36c948a48d7fefab4e9d0aafd0a3c9fe6b825b4aecac491dba2e7c6582f4.scope: Deactivated successfully.
Jan 23 03:58:42 np0005593232 podman[73756]: 2026-01-23 08:58:42.62984178 +0000 UTC m=+0.042445556 container create f5705030dada72b1a1787b0ee0ba78caed42597176f345877fa2eb8ca2c6b41a (image=quay.io/ceph/ceph:v18, name=relaxed_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 23 03:58:42 np0005593232 systemd[1]: Started libpod-conmon-f5705030dada72b1a1787b0ee0ba78caed42597176f345877fa2eb8ca2c6b41a.scope.
Jan 23 03:58:42 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:58:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a0949ab55ca0bba11b735de1e4781ea5185727ac53e1627e329444f9cd72ab1/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a0949ab55ca0bba11b735de1e4781ea5185727ac53e1627e329444f9cd72ab1/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a0949ab55ca0bba11b735de1e4781ea5185727ac53e1627e329444f9cd72ab1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a0949ab55ca0bba11b735de1e4781ea5185727ac53e1627e329444f9cd72ab1/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:42 np0005593232 podman[73756]: 2026-01-23 08:58:42.607421253 +0000 UTC m=+0.020025059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:58:42 np0005593232 podman[73756]: 2026-01-23 08:58:42.75667302 +0000 UTC m=+0.169276826 container init f5705030dada72b1a1787b0ee0ba78caed42597176f345877fa2eb8ca2c6b41a (image=quay.io/ceph/ceph:v18, name=relaxed_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 03:58:42 np0005593232 podman[73756]: 2026-01-23 08:58:42.763891535 +0000 UTC m=+0.176495311 container start f5705030dada72b1a1787b0ee0ba78caed42597176f345877fa2eb8ca2c6b41a (image=quay.io/ceph/ceph:v18, name=relaxed_jang, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 03:58:42 np0005593232 podman[73756]: 2026-01-23 08:58:42.774162557 +0000 UTC m=+0.186766353 container attach f5705030dada72b1a1787b0ee0ba78caed42597176f345877fa2eb8ca2c6b41a (image=quay.io/ceph/ceph:v18, name=relaxed_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 03:58:42 np0005593232 systemd[1]: libpod-f5705030dada72b1a1787b0ee0ba78caed42597176f345877fa2eb8ca2c6b41a.scope: Deactivated successfully.
Jan 23 03:58:42 np0005593232 podman[73756]: 2026-01-23 08:58:42.889224993 +0000 UTC m=+0.301828769 container died f5705030dada72b1a1787b0ee0ba78caed42597176f345877fa2eb8ca2c6b41a (image=quay.io/ceph/ceph:v18, name=relaxed_jang, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 03:58:42 np0005593232 podman[73756]: 2026-01-23 08:58:42.929850126 +0000 UTC m=+0.342453902 container remove f5705030dada72b1a1787b0ee0ba78caed42597176f345877fa2eb8ca2c6b41a (image=quay.io/ceph/ceph:v18, name=relaxed_jang, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 03:58:42 np0005593232 systemd[1]: libpod-conmon-f5705030dada72b1a1787b0ee0ba78caed42597176f345877fa2eb8ca2c6b41a.scope: Deactivated successfully.
Jan 23 03:58:42 np0005593232 systemd[1]: Reloading.
Jan 23 03:58:43 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:58:43 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 03:58:43 np0005593232 systemd[1]: Reloading.
Jan 23 03:58:43 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:58:43 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 03:58:43 np0005593232 systemd[1]: Reached target All Ceph clusters and services.
Jan 23 03:58:43 np0005593232 systemd[1]: Reloading.
Jan 23 03:58:43 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 03:58:43 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:58:43 np0005593232 systemd[1]: Reached target Ceph cluster e1533653-0a5a-584c-b34b-8689f0d32e77.
Jan 23 03:58:43 np0005593232 systemd[1]: Reloading.
Jan 23 03:58:43 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:58:43 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 03:58:43 np0005593232 systemd[1]: Reloading.
Jan 23 03:58:44 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:58:44 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 03:58:44 np0005593232 systemd[1]: Created slice Slice /system/ceph-e1533653-0a5a-584c-b34b-8689f0d32e77.
Jan 23 03:58:44 np0005593232 systemd[1]: Reached target System Time Set.
Jan 23 03:58:44 np0005593232 systemd[1]: Reached target System Time Synchronized.
Jan 23 03:58:44 np0005593232 systemd[1]: Starting Ceph mon.compute-0 for e1533653-0a5a-584c-b34b-8689f0d32e77...
Jan 23 03:58:44 np0005593232 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 03:58:44 np0005593232 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 03:58:44 np0005593232 podman[74048]: 2026-01-23 08:58:44.436448515 +0000 UTC m=+0.020796111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:58:44 np0005593232 podman[74048]: 2026-01-23 08:58:44.602256891 +0000 UTC m=+0.186604497 container create 9a236b2b6abd4bb79f8b86336fcb41b93a164546cc17e0c0699560ffeea76515 (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 03:58:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0927ab2196d45f16576f85669efbcf4f31a9e7e270d805414d32d6a59d96fe0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0927ab2196d45f16576f85669efbcf4f31a9e7e270d805414d32d6a59d96fe0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0927ab2196d45f16576f85669efbcf4f31a9e7e270d805414d32d6a59d96fe0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0927ab2196d45f16576f85669efbcf4f31a9e7e270d805414d32d6a59d96fe0b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:44 np0005593232 podman[74048]: 2026-01-23 08:58:44.678348231 +0000 UTC m=+0.262695827 container init 9a236b2b6abd4bb79f8b86336fcb41b93a164546cc17e0c0699560ffeea76515 (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 03:58:44 np0005593232 podman[74048]: 2026-01-23 08:58:44.68464864 +0000 UTC m=+0.268996216 container start 9a236b2b6abd4bb79f8b86336fcb41b93a164546cc17e0c0699560ffeea76515 (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 03:58:44 np0005593232 bash[74048]: 9a236b2b6abd4bb79f8b86336fcb41b93a164546cc17e0c0699560ffeea76515
Jan 23 03:58:44 np0005593232 systemd[1]: Started Ceph mon.compute-0 for e1533653-0a5a-584c-b34b-8689f0d32e77.
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: set uid:gid to 167:167 (ceph:ceph)
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: pidfile_write: ignore empty --pid-file
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: load: jerasure load: lrc 
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: RocksDB version: 7.9.2
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: Git sha 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: DB SUMMARY
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: DB Session ID:  BCM57DXMJUH2ILPVW5ZD
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: CURRENT file:  CURRENT
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: IDENTITY file:  IDENTITY
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                         Options.error_if_exists: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                       Options.create_if_missing: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                         Options.paranoid_checks: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                                     Options.env: 0x55afc7974c40
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                                Options.info_log: 0x55afc8a60ec0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                Options.max_file_opening_threads: 16
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                              Options.statistics: (nil)
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                               Options.use_fsync: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                       Options.max_log_file_size: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                         Options.allow_fallocate: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                        Options.use_direct_reads: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:          Options.create_missing_column_families: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                              Options.db_log_dir: 
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                                 Options.wal_dir: 
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                   Options.advise_random_on_open: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                    Options.write_buffer_manager: 0x55afc8a70b40
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                            Options.rate_limiter: (nil)
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                  Options.unordered_write: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                               Options.row_cache: None
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                              Options.wal_filter: None
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:             Options.allow_ingest_behind: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:             Options.two_write_queues: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:             Options.manual_wal_flush: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:             Options.wal_compression: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:             Options.atomic_flush: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                 Options.log_readahead_size: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:             Options.allow_data_in_errors: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:             Options.db_host_id: __hostname__
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:             Options.max_background_jobs: 2
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:             Options.max_background_compactions: -1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:             Options.max_subcompactions: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:             Options.max_total_wal_size: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                          Options.max_open_files: -1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                          Options.bytes_per_sync: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:       Options.compaction_readahead_size: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                  Options.max_background_flushes: -1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: Compression algorithms supported:
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: #011kZSTD supported: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: #011kXpressCompression supported: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: #011kBZip2Compression supported: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: #011kLZ4Compression supported: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: #011kZlibCompression supported: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: #011kSnappyCompression supported: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:           Options.merge_operator: 
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:        Options.compaction_filter: None
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55afc8a60aa0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55afc8a591f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:        Options.write_buffer_size: 33554432
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:  Options.max_write_buffer_number: 2
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:          Options.compression: NoCompression
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:             Options.num_levels: 7
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                           Options.bloom_locality: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                               Options.ttl: 2592000
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                       Options.enable_blob_files: false
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                           Options.min_blob_size: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a81b8e15-1a54-43c0-85f4-c7c545c93281
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769158724734284, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769158724736763, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "BCM57DXMJUH2ILPVW5ZD", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769158724736943, "job": 1, "event": "recovery_finished"}
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55afc8a82e00
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: DB pointer 0x55afc8b0c000
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55afc8a591f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid e1533653-0a5a-584c-b34b-8689f0d32e77
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: mon.compute-0@-1(???) e0 preinit fsid e1533653-0a5a-584c-b34b-8689f0d32e77
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 23 03:58:44 np0005593232 podman[74069]: 2026-01-23 08:58:44.758648561 +0000 UTC m=+0.031318980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2026-01-23T08:58:42.799507Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,os=Linux}
Jan 23 03:58:44 np0005593232 podman[74069]: 2026-01-23 08:58:44.941409209 +0000 UTC m=+0.214079608 container create 0ce1be62ef7f083e47eff4a4140fbea54f5bf22dee1fe45464477d96629ddd9c (image=quay.io/ceph/ceph:v18, name=pedantic_black, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 03:58:44 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Jan 23 03:58:45 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader).mds e1 new map
Jan 23 03:58:45 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 23 03:58:45 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 23 03:58:45 np0005593232 ceph-mon[74068]: log_channel(cluster) log [DBG] : fsmap 
Jan 23 03:58:45 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 23 03:58:45 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 23 03:58:45 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 23 03:58:45 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 23 03:58:45 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 23 03:58:45 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 23 03:58:45 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 23 03:58:45 np0005593232 ceph-mon[74068]: mkfs e1533653-0a5a-584c-b34b-8689f0d32e77
Jan 23 03:58:45 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 23 03:58:45 np0005593232 systemd[1]: Started libpod-conmon-0ce1be62ef7f083e47eff4a4140fbea54f5bf22dee1fe45464477d96629ddd9c.scope.
Jan 23 03:58:45 np0005593232 ceph-mon[74068]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 23 03:58:45 np0005593232 ceph-mon[74068]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 23 03:58:45 np0005593232 ceph-mon[74068]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 23 03:58:45 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:58:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54f2aea205b4e9bbf06319e5fb1750952af6386775748fe50b7c5f7903c4c7c7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54f2aea205b4e9bbf06319e5fb1750952af6386775748fe50b7c5f7903c4c7c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54f2aea205b4e9bbf06319e5fb1750952af6386775748fe50b7c5f7903c4c7c7/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:45 np0005593232 podman[74069]: 2026-01-23 08:58:45.165070088 +0000 UTC m=+0.437740517 container init 0ce1be62ef7f083e47eff4a4140fbea54f5bf22dee1fe45464477d96629ddd9c (image=quay.io/ceph/ceph:v18, name=pedantic_black, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 03:58:45 np0005593232 podman[74069]: 2026-01-23 08:58:45.174960649 +0000 UTC m=+0.447631048 container start 0ce1be62ef7f083e47eff4a4140fbea54f5bf22dee1fe45464477d96629ddd9c (image=quay.io/ceph/ceph:v18, name=pedantic_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 03:58:45 np0005593232 podman[74069]: 2026-01-23 08:58:45.178707695 +0000 UTC m=+0.451378114 container attach 0ce1be62ef7f083e47eff4a4140fbea54f5bf22dee1fe45464477d96629ddd9c (image=quay.io/ceph/ceph:v18, name=pedantic_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 03:58:45 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 23 03:58:45 np0005593232 ceph-mon[74068]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1039721581' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 23 03:58:45 np0005593232 pedantic_black[74123]:  cluster:
Jan 23 03:58:45 np0005593232 pedantic_black[74123]:    id:     e1533653-0a5a-584c-b34b-8689f0d32e77
Jan 23 03:58:45 np0005593232 pedantic_black[74123]:    health: HEALTH_OK
Jan 23 03:58:45 np0005593232 pedantic_black[74123]: 
Jan 23 03:58:45 np0005593232 pedantic_black[74123]:  services:
Jan 23 03:58:45 np0005593232 pedantic_black[74123]:    mon: 1 daemons, quorum compute-0 (age 0.785753s)
Jan 23 03:58:45 np0005593232 pedantic_black[74123]:    mgr: no daemons active
Jan 23 03:58:45 np0005593232 pedantic_black[74123]:    osd: 0 osds: 0 up, 0 in
Jan 23 03:58:45 np0005593232 pedantic_black[74123]: 
Jan 23 03:58:45 np0005593232 pedantic_black[74123]:  data:
Jan 23 03:58:45 np0005593232 pedantic_black[74123]:    pools:   0 pools, 0 pgs
Jan 23 03:58:45 np0005593232 pedantic_black[74123]:    objects: 0 objects, 0 B
Jan 23 03:58:45 np0005593232 pedantic_black[74123]:    usage:   0 B used, 0 B / 0 B avail
Jan 23 03:58:45 np0005593232 pedantic_black[74123]:    pgs:     
Jan 23 03:58:45 np0005593232 pedantic_black[74123]: 
Jan 23 03:58:45 np0005593232 systemd[1]: libpod-0ce1be62ef7f083e47eff4a4140fbea54f5bf22dee1fe45464477d96629ddd9c.scope: Deactivated successfully.
Jan 23 03:58:45 np0005593232 podman[74069]: 2026-01-23 08:58:45.621013891 +0000 UTC m=+0.893684300 container died 0ce1be62ef7f083e47eff4a4140fbea54f5bf22dee1fe45464477d96629ddd9c (image=quay.io/ceph/ceph:v18, name=pedantic_black, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 23 03:58:45 np0005593232 systemd[1]: var-lib-containers-storage-overlay-54f2aea205b4e9bbf06319e5fb1750952af6386775748fe50b7c5f7903c4c7c7-merged.mount: Deactivated successfully.
Jan 23 03:58:45 np0005593232 podman[74069]: 2026-01-23 08:58:45.931473754 +0000 UTC m=+1.204144153 container remove 0ce1be62ef7f083e47eff4a4140fbea54f5bf22dee1fe45464477d96629ddd9c (image=quay.io/ceph/ceph:v18, name=pedantic_black, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 03:58:45 np0005593232 systemd[1]: libpod-conmon-0ce1be62ef7f083e47eff4a4140fbea54f5bf22dee1fe45464477d96629ddd9c.scope: Deactivated successfully.
Jan 23 03:58:45 np0005593232 podman[74163]: 2026-01-23 08:58:45.995198593 +0000 UTC m=+0.044649078 container create b861adf15b64c232dd6711636a6047d9714b3910ab3049c63b5294e3758e9eca (image=quay.io/ceph/ceph:v18, name=fervent_wilson, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 03:58:46 np0005593232 systemd[1]: Started libpod-conmon-b861adf15b64c232dd6711636a6047d9714b3910ab3049c63b5294e3758e9eca.scope.
Jan 23 03:58:46 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:58:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5788e78801612354ea3f9877adbd9a7337cd2114afd82e083d9c5c6468dc1a54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5788e78801612354ea3f9877adbd9a7337cd2114afd82e083d9c5c6468dc1a54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5788e78801612354ea3f9877adbd9a7337cd2114afd82e083d9c5c6468dc1a54/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5788e78801612354ea3f9877adbd9a7337cd2114afd82e083d9c5c6468dc1a54/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:46 np0005593232 podman[74163]: 2026-01-23 08:58:45.974098854 +0000 UTC m=+0.023549389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:58:46 np0005593232 podman[74163]: 2026-01-23 08:58:46.088254605 +0000 UTC m=+0.137705110 container init b861adf15b64c232dd6711636a6047d9714b3910ab3049c63b5294e3758e9eca (image=quay.io/ceph/ceph:v18, name=fervent_wilson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 23 03:58:46 np0005593232 podman[74163]: 2026-01-23 08:58:46.093672599 +0000 UTC m=+0.143123104 container start b861adf15b64c232dd6711636a6047d9714b3910ab3049c63b5294e3758e9eca (image=quay.io/ceph/ceph:v18, name=fervent_wilson, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 03:58:46 np0005593232 podman[74163]: 2026-01-23 08:58:46.097465206 +0000 UTC m=+0.146915721 container attach b861adf15b64c232dd6711636a6047d9714b3910ab3049c63b5294e3758e9eca (image=quay.io/ceph/ceph:v18, name=fervent_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 03:58:46 np0005593232 ceph-mon[74068]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 23 03:58:46 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 23 03:58:46 np0005593232 ceph-mon[74068]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2912359229' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 23 03:58:46 np0005593232 ceph-mon[74068]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2912359229' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 23 03:58:46 np0005593232 fervent_wilson[74179]: 
Jan 23 03:58:46 np0005593232 fervent_wilson[74179]: [global]
Jan 23 03:58:46 np0005593232 fervent_wilson[74179]: #011fsid = e1533653-0a5a-584c-b34b-8689f0d32e77
Jan 23 03:58:46 np0005593232 fervent_wilson[74179]: #011mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 23 03:58:46 np0005593232 systemd[1]: libpod-b861adf15b64c232dd6711636a6047d9714b3910ab3049c63b5294e3758e9eca.scope: Deactivated successfully.
Jan 23 03:58:46 np0005593232 podman[74163]: 2026-01-23 08:58:46.493741206 +0000 UTC m=+0.543191701 container died b861adf15b64c232dd6711636a6047d9714b3910ab3049c63b5294e3758e9eca (image=quay.io/ceph/ceph:v18, name=fervent_wilson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 03:58:46 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5788e78801612354ea3f9877adbd9a7337cd2114afd82e083d9c5c6468dc1a54-merged.mount: Deactivated successfully.
Jan 23 03:58:46 np0005593232 podman[74163]: 2026-01-23 08:58:46.53547896 +0000 UTC m=+0.584929455 container remove b861adf15b64c232dd6711636a6047d9714b3910ab3049c63b5294e3758e9eca (image=quay.io/ceph/ceph:v18, name=fervent_wilson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 03:58:46 np0005593232 systemd[1]: libpod-conmon-b861adf15b64c232dd6711636a6047d9714b3910ab3049c63b5294e3758e9eca.scope: Deactivated successfully.
Jan 23 03:58:46 np0005593232 podman[74220]: 2026-01-23 08:58:46.600307931 +0000 UTC m=+0.042406755 container create 454f9e5433438d444fcf78b722a9b7968aad9096eef414d8f2932a5bb6bb0cf4 (image=quay.io/ceph/ceph:v18, name=kind_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 23 03:58:46 np0005593232 systemd[1]: Started libpod-conmon-454f9e5433438d444fcf78b722a9b7968aad9096eef414d8f2932a5bb6bb0cf4.scope.
Jan 23 03:58:46 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:58:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e9cebc300d523a741c12259e445830e6ba917de4f392422d7b3b0ca4a27bfda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e9cebc300d523a741c12259e445830e6ba917de4f392422d7b3b0ca4a27bfda/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e9cebc300d523a741c12259e445830e6ba917de4f392422d7b3b0ca4a27bfda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e9cebc300d523a741c12259e445830e6ba917de4f392422d7b3b0ca4a27bfda/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:46 np0005593232 podman[74220]: 2026-01-23 08:58:46.663227667 +0000 UTC m=+0.105326541 container init 454f9e5433438d444fcf78b722a9b7968aad9096eef414d8f2932a5bb6bb0cf4 (image=quay.io/ceph/ceph:v18, name=kind_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 23 03:58:46 np0005593232 podman[74220]: 2026-01-23 08:58:46.669533796 +0000 UTC m=+0.111632630 container start 454f9e5433438d444fcf78b722a9b7968aad9096eef414d8f2932a5bb6bb0cf4 (image=quay.io/ceph/ceph:v18, name=kind_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 23 03:58:46 np0005593232 podman[74220]: 2026-01-23 08:58:46.673077416 +0000 UTC m=+0.115176270 container attach 454f9e5433438d444fcf78b722a9b7968aad9096eef414d8f2932a5bb6bb0cf4 (image=quay.io/ceph/ceph:v18, name=kind_booth, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 03:58:46 np0005593232 podman[74220]: 2026-01-23 08:58:46.583950196 +0000 UTC m=+0.026049030 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:58:47 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 03:58:47 np0005593232 ceph-mon[74068]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3287051418' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 03:58:47 np0005593232 systemd[1]: libpod-454f9e5433438d444fcf78b722a9b7968aad9096eef414d8f2932a5bb6bb0cf4.scope: Deactivated successfully.
Jan 23 03:58:47 np0005593232 podman[74220]: 2026-01-23 08:58:47.077506487 +0000 UTC m=+0.519605321 container died 454f9e5433438d444fcf78b722a9b7968aad9096eef414d8f2932a5bb6bb0cf4 (image=quay.io/ceph/ceph:v18, name=kind_booth, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 03:58:47 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1e9cebc300d523a741c12259e445830e6ba917de4f392422d7b3b0ca4a27bfda-merged.mount: Deactivated successfully.
Jan 23 03:58:47 np0005593232 podman[74220]: 2026-01-23 08:58:47.120607551 +0000 UTC m=+0.562706375 container remove 454f9e5433438d444fcf78b722a9b7968aad9096eef414d8f2932a5bb6bb0cf4 (image=quay.io/ceph/ceph:v18, name=kind_booth, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 03:58:47 np0005593232 systemd[1]: libpod-conmon-454f9e5433438d444fcf78b722a9b7968aad9096eef414d8f2932a5bb6bb0cf4.scope: Deactivated successfully.
Jan 23 03:58:47 np0005593232 ceph-mon[74068]: from='client.? 192.168.122.100:0/2912359229' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 23 03:58:47 np0005593232 ceph-mon[74068]: from='client.? 192.168.122.100:0/2912359229' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 23 03:58:47 np0005593232 systemd[1]: Stopping Ceph mon.compute-0 for e1533653-0a5a-584c-b34b-8689f0d32e77...
Jan 23 03:58:47 np0005593232 ceph-mon[74068]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 23 03:58:47 np0005593232 ceph-mon[74068]: mon.compute-0@0(leader) e1 shutdown
Jan 23 03:58:47 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0[74064]: 2026-01-23T08:58:47.291+0000 7ff01e35d640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 23 03:58:47 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0[74064]: 2026-01-23T08:58:47.291+0000 7ff01e35d640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 23 03:58:47 np0005593232 ceph-mon[74068]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 23 03:58:47 np0005593232 ceph-mon[74068]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 23 03:58:47 np0005593232 podman[74302]: 2026-01-23 08:58:47.324065366 +0000 UTC m=+0.062164355 container died 9a236b2b6abd4bb79f8b86336fcb41b93a164546cc17e0c0699560ffeea76515 (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 03:58:47 np0005593232 systemd[1]: var-lib-containers-storage-overlay-0927ab2196d45f16576f85669efbcf4f31a9e7e270d805414d32d6a59d96fe0b-merged.mount: Deactivated successfully.
Jan 23 03:58:47 np0005593232 podman[74302]: 2026-01-23 08:58:47.353202234 +0000 UTC m=+0.091301213 container remove 9a236b2b6abd4bb79f8b86336fcb41b93a164546cc17e0c0699560ffeea76515 (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 23 03:58:47 np0005593232 bash[74302]: ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0
Jan 23 03:58:47 np0005593232 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 23 03:58:47 np0005593232 systemd[1]: ceph-e1533653-0a5a-584c-b34b-8689f0d32e77@mon.compute-0.service: Deactivated successfully.
Jan 23 03:58:47 np0005593232 systemd[1]: Stopped Ceph mon.compute-0 for e1533653-0a5a-584c-b34b-8689f0d32e77.
Jan 23 03:58:47 np0005593232 systemd[1]: Starting Ceph mon.compute-0 for e1533653-0a5a-584c-b34b-8689f0d32e77...
Jan 23 03:58:47 np0005593232 podman[74404]: 2026-01-23 08:58:47.722305222 +0000 UTC m=+0.052646636 container create 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 23 03:58:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/415f462710b5713230460019558c0e741a40fe303897fbbdf5c4c9d4ee667db5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/415f462710b5713230460019558c0e741a40fe303897fbbdf5c4c9d4ee667db5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/415f462710b5713230460019558c0e741a40fe303897fbbdf5c4c9d4ee667db5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/415f462710b5713230460019558c0e741a40fe303897fbbdf5c4c9d4ee667db5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:47 np0005593232 podman[74404]: 2026-01-23 08:58:47.778110626 +0000 UTC m=+0.108452060 container init 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 03:58:47 np0005593232 podman[74404]: 2026-01-23 08:58:47.786408161 +0000 UTC m=+0.116749575 container start 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 23 03:58:47 np0005593232 bash[74404]: 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d
Jan 23 03:58:47 np0005593232 podman[74404]: 2026-01-23 08:58:47.696272252 +0000 UTC m=+0.026613686 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:58:47 np0005593232 systemd[1]: Started Ceph mon.compute-0 for e1533653-0a5a-584c-b34b-8689f0d32e77.
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: set uid:gid to 167:167 (ceph:ceph)
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: pidfile_write: ignore empty --pid-file
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: load: jerasure load: lrc 
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: RocksDB version: 7.9.2
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: Git sha 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: DB SUMMARY
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: DB Session ID:  V2SI29UDTZDZQYJD4090
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: CURRENT file:  CURRENT
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: IDENTITY file:  IDENTITY
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55210 ; 
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                         Options.error_if_exists: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                       Options.create_if_missing: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                         Options.paranoid_checks: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                                     Options.env: 0x557a82bb2c40
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                                Options.info_log: 0x557a8442b040
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                Options.max_file_opening_threads: 16
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                              Options.statistics: (nil)
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                               Options.use_fsync: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                       Options.max_log_file_size: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                         Options.allow_fallocate: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                        Options.use_direct_reads: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:          Options.create_missing_column_families: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                              Options.db_log_dir: 
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                                 Options.wal_dir: 
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                   Options.advise_random_on_open: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                    Options.write_buffer_manager: 0x557a8443ab40
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                            Options.rate_limiter: (nil)
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                  Options.unordered_write: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                               Options.row_cache: None
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                              Options.wal_filter: None
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:             Options.allow_ingest_behind: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:             Options.two_write_queues: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:             Options.manual_wal_flush: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:             Options.wal_compression: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:             Options.atomic_flush: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                 Options.log_readahead_size: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:             Options.allow_data_in_errors: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:             Options.db_host_id: __hostname__
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:             Options.max_background_jobs: 2
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:             Options.max_background_compactions: -1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:             Options.max_subcompactions: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:             Options.max_total_wal_size: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                          Options.max_open_files: -1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                          Options.bytes_per_sync: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:       Options.compaction_readahead_size: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                  Options.max_background_flushes: -1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: Compression algorithms supported:
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:         kZSTD supported: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:         kXpressCompression supported: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:         kBZip2Compression supported: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:         kLZ4Compression supported: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:         kZlibCompression supported: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:         kLZ4HCCompression supported: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:         kSnappyCompression supported: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:           Options.merge_operator: 
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:        Options.compaction_filter: None
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557a8442ac40)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x557a844231f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:        Options.write_buffer_size: 33554432
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:  Options.max_write_buffer_number: 2
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:          Options.compression: NoCompression
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:             Options.num_levels: 7
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                           Options.bloom_locality: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                               Options.ttl: 2592000
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                       Options.enable_blob_files: false
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                           Options.min_blob_size: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a81b8e15-1a54-43c0-85f4-c7c545c93281
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769158727830109, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769158727834219, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 54849, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 136, "table_properties": {"data_size": 53385, "index_size": 170, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 2933, "raw_average_key_size": 29, "raw_value_size": 51027, "raw_average_value_size": 515, "num_data_blocks": 9, "num_entries": 99, "num_filter_entries": 99, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158727, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769158727834323, "job": 1, "event": "recovery_finished"}
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557a8444ce00
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: DB pointer 0x557a844fc000
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   55.46 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0   55.46 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 3.59 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 3.59 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a844231f0#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid e1533653-0a5a-584c-b34b-8689f0d32e77
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: mon.compute-0@-1(???) e1 preinit fsid e1533653-0a5a-584c-b34b-8689f0d32e77
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: mon.compute-0@-1(???).mds e1 new map
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: mon.compute-0@-1(???).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : fsmap 
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 23 03:58:47 np0005593232 podman[74424]: 2026-01-23 08:58:47.855290937 +0000 UTC m=+0.041974373 container create 7b4caf1fd5ae4d160f9e1c78d1769669a340593a404d67d02ceaf95e374a2b10 (image=quay.io/ceph/ceph:v18, name=elated_herschel, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 23 03:58:47 np0005593232 systemd[1]: Started libpod-conmon-7b4caf1fd5ae4d160f9e1c78d1769669a340593a404d67d02ceaf95e374a2b10.scope.
Jan 23 03:58:47 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:58:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06316ec50c2fe619b79a59ab20a91b5ca059e5eeba7d1b3e5a22ca06c0c2e99e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06316ec50c2fe619b79a59ab20a91b5ca059e5eeba7d1b3e5a22ca06c0c2e99e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06316ec50c2fe619b79a59ab20a91b5ca059e5eeba7d1b3e5a22ca06c0c2e99e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:47 np0005593232 ceph-mon[74423]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 23 03:58:47 np0005593232 podman[74424]: 2026-01-23 08:58:47.921057754 +0000 UTC m=+0.107741200 container init 7b4caf1fd5ae4d160f9e1c78d1769669a340593a404d67d02ceaf95e374a2b10 (image=quay.io/ceph/ceph:v18, name=elated_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 03:58:47 np0005593232 podman[74424]: 2026-01-23 08:58:47.926969142 +0000 UTC m=+0.113652578 container start 7b4caf1fd5ae4d160f9e1c78d1769669a340593a404d67d02ceaf95e374a2b10 (image=quay.io/ceph/ceph:v18, name=elated_herschel, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 03:58:47 np0005593232 podman[74424]: 2026-01-23 08:58:47.929893795 +0000 UTC m=+0.116577231 container attach 7b4caf1fd5ae4d160f9e1c78d1769669a340593a404d67d02ceaf95e374a2b10 (image=quay.io/ceph/ceph:v18, name=elated_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 03:58:47 np0005593232 podman[74424]: 2026-01-23 08:58:47.838774778 +0000 UTC m=+0.025458234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:58:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Jan 23 03:58:48 np0005593232 systemd[1]: libpod-7b4caf1fd5ae4d160f9e1c78d1769669a340593a404d67d02ceaf95e374a2b10.scope: Deactivated successfully.
Jan 23 03:58:48 np0005593232 podman[74424]: 2026-01-23 08:58:48.336373862 +0000 UTC m=+0.523057288 container died 7b4caf1fd5ae4d160f9e1c78d1769669a340593a404d67d02ceaf95e374a2b10 (image=quay.io/ceph/ceph:v18, name=elated_herschel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 03:58:48 np0005593232 systemd[1]: var-lib-containers-storage-overlay-06316ec50c2fe619b79a59ab20a91b5ca059e5eeba7d1b3e5a22ca06c0c2e99e-merged.mount: Deactivated successfully.
Jan 23 03:58:48 np0005593232 podman[74424]: 2026-01-23 08:58:48.386915627 +0000 UTC m=+0.573599053 container remove 7b4caf1fd5ae4d160f9e1c78d1769669a340593a404d67d02ceaf95e374a2b10 (image=quay.io/ceph/ceph:v18, name=elated_herschel, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 03:58:48 np0005593232 systemd[1]: libpod-conmon-7b4caf1fd5ae4d160f9e1c78d1769669a340593a404d67d02ceaf95e374a2b10.scope: Deactivated successfully.
Jan 23 03:58:48 np0005593232 podman[74519]: 2026-01-23 08:58:48.489376428 +0000 UTC m=+0.047091537 container create 1aa3eb419dd3d24788a31747098305e7eaa94f8b0febe863d4904398521649c7 (image=quay.io/ceph/ceph:v18, name=dreamy_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 03:58:48 np0005593232 systemd[1]: Started libpod-conmon-1aa3eb419dd3d24788a31747098305e7eaa94f8b0febe863d4904398521649c7.scope.
Jan 23 03:58:48 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:58:48 np0005593232 podman[74519]: 2026-01-23 08:58:48.466766356 +0000 UTC m=+0.024481485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:58:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/524b85177ffd506ae1583dbcdf3becb7c5747762922cd099e956782342c0bfc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/524b85177ffd506ae1583dbcdf3becb7c5747762922cd099e956782342c0bfc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/524b85177ffd506ae1583dbcdf3becb7c5747762922cd099e956782342c0bfc6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:48 np0005593232 podman[74519]: 2026-01-23 08:58:48.58215597 +0000 UTC m=+0.139871109 container init 1aa3eb419dd3d24788a31747098305e7eaa94f8b0febe863d4904398521649c7 (image=quay.io/ceph/ceph:v18, name=dreamy_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 03:58:48 np0005593232 podman[74519]: 2026-01-23 08:58:48.587764549 +0000 UTC m=+0.145479658 container start 1aa3eb419dd3d24788a31747098305e7eaa94f8b0febe863d4904398521649c7 (image=quay.io/ceph/ceph:v18, name=dreamy_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 03:58:48 np0005593232 podman[74519]: 2026-01-23 08:58:48.591838415 +0000 UTC m=+0.149553524 container attach 1aa3eb419dd3d24788a31747098305e7eaa94f8b0febe863d4904398521649c7 (image=quay.io/ceph/ceph:v18, name=dreamy_chatelet, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Jan 23 03:58:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Jan 23 03:58:48 np0005593232 systemd[1]: libpod-1aa3eb419dd3d24788a31747098305e7eaa94f8b0febe863d4904398521649c7.scope: Deactivated successfully.
Jan 23 03:58:49 np0005593232 podman[74562]: 2026-01-23 08:58:49.020789691 +0000 UTC m=+0.024204578 container died 1aa3eb419dd3d24788a31747098305e7eaa94f8b0febe863d4904398521649c7 (image=quay.io/ceph/ceph:v18, name=dreamy_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 23 03:58:49 np0005593232 systemd[1]: var-lib-containers-storage-overlay-524b85177ffd506ae1583dbcdf3becb7c5747762922cd099e956782342c0bfc6-merged.mount: Deactivated successfully.
Jan 23 03:58:49 np0005593232 podman[74562]: 2026-01-23 08:58:49.116823088 +0000 UTC m=+0.120237995 container remove 1aa3eb419dd3d24788a31747098305e7eaa94f8b0febe863d4904398521649c7 (image=quay.io/ceph/ceph:v18, name=dreamy_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 03:58:49 np0005593232 systemd[1]: libpod-conmon-1aa3eb419dd3d24788a31747098305e7eaa94f8b0febe863d4904398521649c7.scope: Deactivated successfully.
Jan 23 03:58:49 np0005593232 systemd[1]: Reloading.
Jan 23 03:58:49 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 03:58:49 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:58:49 np0005593232 systemd[1]: Reloading.
Jan 23 03:58:49 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 03:58:49 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 03:58:49 np0005593232 systemd[1]: Starting Ceph mgr.compute-0.yntofk for e1533653-0a5a-584c-b34b-8689f0d32e77...
Jan 23 03:58:49 np0005593232 podman[74706]: 2026-01-23 08:58:49.939144793 +0000 UTC m=+0.115867132 container create f3f14f229951b88bdf8758bfdd2b9e6b76401761df1552491c79ef9f7339fa00 (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 23 03:58:49 np0005593232 podman[74706]: 2026-01-23 08:58:49.843127035 +0000 UTC m=+0.019849394 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:58:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a73a659897f335ad1bfa9d972d89b6f27f522698a26b0ccb8444d967647b1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a73a659897f335ad1bfa9d972d89b6f27f522698a26b0ccb8444d967647b1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a73a659897f335ad1bfa9d972d89b6f27f522698a26b0ccb8444d967647b1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a73a659897f335ad1bfa9d972d89b6f27f522698a26b0ccb8444d967647b1c/merged/var/lib/ceph/mgr/ceph-compute-0.yntofk supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:50 np0005593232 podman[74706]: 2026-01-23 08:58:50.000375801 +0000 UTC m=+0.177098150 container init f3f14f229951b88bdf8758bfdd2b9e6b76401761df1552491c79ef9f7339fa00 (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 03:58:50 np0005593232 podman[74706]: 2026-01-23 08:58:50.007663267 +0000 UTC m=+0.184385606 container start f3f14f229951b88bdf8758bfdd2b9e6b76401761df1552491c79ef9f7339fa00 (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 03:58:50 np0005593232 bash[74706]: f3f14f229951b88bdf8758bfdd2b9e6b76401761df1552491c79ef9f7339fa00
Jan 23 03:58:50 np0005593232 systemd[1]: Started Ceph mgr.compute-0.yntofk for e1533653-0a5a-584c-b34b-8689f0d32e77.
Jan 23 03:58:50 np0005593232 ceph-mgr[74726]: set uid:gid to 167:167 (ceph:ceph)
Jan 23 03:58:50 np0005593232 ceph-mgr[74726]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 23 03:58:50 np0005593232 ceph-mgr[74726]: pidfile_write: ignore empty --pid-file
Jan 23 03:58:50 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'alerts'
Jan 23 03:58:50 np0005593232 podman[74751]: 2026-01-23 08:58:50.263716885 +0000 UTC m=+0.040337436 container create 0981b7c9a77e1d77ec38468990a60a5a6f1a94e383f5e51e88f456f3ed6f8982 (image=quay.io/ceph/ceph:v18, name=beautiful_hamilton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 03:58:50 np0005593232 systemd[1]: Started libpod-conmon-0981b7c9a77e1d77ec38468990a60a5a6f1a94e383f5e51e88f456f3ed6f8982.scope.
Jan 23 03:58:50 np0005593232 podman[74751]: 2026-01-23 08:58:50.247618478 +0000 UTC m=+0.024239049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:58:50 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:58:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9756e5a96b56795849abc404b7561ca76a3d116649784f973d60e3b8033c5f35/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9756e5a96b56795849abc404b7561ca76a3d116649784f973d60e3b8033c5f35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9756e5a96b56795849abc404b7561ca76a3d116649784f973d60e3b8033c5f35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:50 np0005593232 podman[74751]: 2026-01-23 08:58:50.432508867 +0000 UTC m=+0.209129418 container init 0981b7c9a77e1d77ec38468990a60a5a6f1a94e383f5e51e88f456f3ed6f8982 (image=quay.io/ceph/ceph:v18, name=beautiful_hamilton, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 03:58:50 np0005593232 podman[74751]: 2026-01-23 08:58:50.43968104 +0000 UTC m=+0.216301591 container start 0981b7c9a77e1d77ec38468990a60a5a6f1a94e383f5e51e88f456f3ed6f8982 (image=quay.io/ceph/ceph:v18, name=beautiful_hamilton, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 03:58:50 np0005593232 podman[74751]: 2026-01-23 08:58:50.445138175 +0000 UTC m=+0.221758756 container attach 0981b7c9a77e1d77ec38468990a60a5a6f1a94e383f5e51e88f456f3ed6f8982 (image=quay.io/ceph/ceph:v18, name=beautiful_hamilton, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 03:58:50 np0005593232 ceph-mgr[74726]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 23 03:58:50 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'balancer'
Jan 23 03:58:50 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:58:50.541+0000 7f157465a140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 23 03:58:50 np0005593232 ceph-mgr[74726]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 23 03:58:50 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:58:50.847+0000 7f157465a140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 23 03:58:50 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'cephadm'
Jan 23 03:58:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 23 03:58:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2171364009' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]: 
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]: {
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:    "fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:    "health": {
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "status": "HEALTH_OK",
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "checks": {},
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "mutes": []
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:    },
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:    "election_epoch": 5,
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:    "quorum": [
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        0
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:    ],
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:    "quorum_names": [
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "compute-0"
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:    ],
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:    "quorum_age": 3,
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:    "monmap": {
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "epoch": 1,
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "min_mon_release_name": "reef",
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "num_mons": 1
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:    },
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:    "osdmap": {
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "epoch": 1,
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "num_osds": 0,
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "num_up_osds": 0,
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "osd_up_since": 0,
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "num_in_osds": 0,
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "osd_in_since": 0,
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "num_remapped_pgs": 0
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:    },
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:    "pgmap": {
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "pgs_by_state": [],
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "num_pgs": 0,
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "num_pools": 0,
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "num_objects": 0,
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "data_bytes": 0,
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "bytes_used": 0,
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "bytes_avail": 0,
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "bytes_total": 0
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:    },
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:    "fsmap": {
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "epoch": 1,
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "by_rank": [],
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "up:standby": 0
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:    },
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:    "mgrmap": {
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "available": false,
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "num_standbys": 0,
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "modules": [
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:            "iostat",
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:            "nfs",
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:            "restful"
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        ],
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "services": {}
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:    },
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:    "servicemap": {
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "epoch": 1,
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "modified": "2026-01-23T08:58:44.980509+0000",
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:        "services": {}
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:    },
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]:    "progress_events": {}
Jan 23 03:58:50 np0005593232 beautiful_hamilton[74768]: }
Jan 23 03:58:50 np0005593232 systemd[1]: libpod-0981b7c9a77e1d77ec38468990a60a5a6f1a94e383f5e51e88f456f3ed6f8982.scope: Deactivated successfully.
Jan 23 03:58:50 np0005593232 podman[74751]: 2026-01-23 08:58:50.909186758 +0000 UTC m=+0.685807329 container died 0981b7c9a77e1d77ec38468990a60a5a6f1a94e383f5e51e88f456f3ed6f8982 (image=quay.io/ceph/ceph:v18, name=beautiful_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 03:58:51 np0005593232 systemd[1]: var-lib-containers-storage-overlay-9756e5a96b56795849abc404b7561ca76a3d116649784f973d60e3b8033c5f35-merged.mount: Deactivated successfully.
Jan 23 03:58:51 np0005593232 podman[74751]: 2026-01-23 08:58:51.100094878 +0000 UTC m=+0.876715429 container remove 0981b7c9a77e1d77ec38468990a60a5a6f1a94e383f5e51e88f456f3ed6f8982 (image=quay.io/ceph/ceph:v18, name=beautiful_hamilton, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Jan 23 03:58:51 np0005593232 systemd[1]: libpod-conmon-0981b7c9a77e1d77ec38468990a60a5a6f1a94e383f5e51e88f456f3ed6f8982.scope: Deactivated successfully.
Jan 23 03:58:52 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'crash'
Jan 23 03:58:53 np0005593232 ceph-mgr[74726]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 23 03:58:53 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'dashboard'
Jan 23 03:58:53 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:58:53.167+0000 7f157465a140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 23 03:58:53 np0005593232 podman[74817]: 2026-01-23 08:58:53.176984085 +0000 UTC m=+0.045501973 container create 5657038b863d55b0963d79ca1f5fd29c542f9dfaf015dc3feab335f73fd30f00 (image=quay.io/ceph/ceph:v18, name=inspiring_bouman, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 23 03:58:53 np0005593232 systemd[1]: Started libpod-conmon-5657038b863d55b0963d79ca1f5fd29c542f9dfaf015dc3feab335f73fd30f00.scope.
Jan 23 03:58:53 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:58:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/855ed0afde93698d47105c97ec1a0d0339b4b468170651c26a92c46067cd0da6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:53 np0005593232 podman[74817]: 2026-01-23 08:58:53.158546291 +0000 UTC m=+0.027064199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:58:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/855ed0afde93698d47105c97ec1a0d0339b4b468170651c26a92c46067cd0da6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/855ed0afde93698d47105c97ec1a0d0339b4b468170651c26a92c46067cd0da6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:53 np0005593232 podman[74817]: 2026-01-23 08:58:53.266962179 +0000 UTC m=+0.135480117 container init 5657038b863d55b0963d79ca1f5fd29c542f9dfaf015dc3feab335f73fd30f00 (image=quay.io/ceph/ceph:v18, name=inspiring_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 03:58:53 np0005593232 podman[74817]: 2026-01-23 08:58:53.273122444 +0000 UTC m=+0.141640332 container start 5657038b863d55b0963d79ca1f5fd29c542f9dfaf015dc3feab335f73fd30f00 (image=quay.io/ceph/ceph:v18, name=inspiring_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 03:58:53 np0005593232 podman[74817]: 2026-01-23 08:58:53.27723033 +0000 UTC m=+0.145748228 container attach 5657038b863d55b0963d79ca1f5fd29c542f9dfaf015dc3feab335f73fd30f00 (image=quay.io/ceph/ceph:v18, name=inspiring_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 23 03:58:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 23 03:58:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/328310135' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]: 
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]: {
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:    "fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:    "health": {
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "status": "HEALTH_OK",
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "checks": {},
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "mutes": []
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:    },
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:    "election_epoch": 5,
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:    "quorum": [
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        0
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:    ],
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:    "quorum_names": [
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "compute-0"
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:    ],
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:    "quorum_age": 5,
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:    "monmap": {
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "epoch": 1,
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "min_mon_release_name": "reef",
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "num_mons": 1
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:    },
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:    "osdmap": {
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "epoch": 1,
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "num_osds": 0,
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "num_up_osds": 0,
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "osd_up_since": 0,
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "num_in_osds": 0,
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "osd_in_since": 0,
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "num_remapped_pgs": 0
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:    },
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:    "pgmap": {
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "pgs_by_state": [],
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "num_pgs": 0,
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "num_pools": 0,
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "num_objects": 0,
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "data_bytes": 0,
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "bytes_used": 0,
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "bytes_avail": 0,
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "bytes_total": 0
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:    },
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:    "fsmap": {
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "epoch": 1,
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "by_rank": [],
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "up:standby": 0
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:    },
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:    "mgrmap": {
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "available": false,
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "num_standbys": 0,
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "modules": [
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:            "iostat",
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:            "nfs",
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:            "restful"
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        ],
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "services": {}
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:    },
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:    "servicemap": {
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "epoch": 1,
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "modified": "2026-01-23T08:58:44.980509+0000",
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:        "services": {}
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:    },
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]:    "progress_events": {}
Jan 23 03:58:53 np0005593232 inspiring_bouman[74833]: }
Jan 23 03:58:53 np0005593232 systemd[1]: libpod-5657038b863d55b0963d79ca1f5fd29c542f9dfaf015dc3feab335f73fd30f00.scope: Deactivated successfully.
Jan 23 03:58:53 np0005593232 podman[74817]: 2026-01-23 08:58:53.696713279 +0000 UTC m=+0.565231167 container died 5657038b863d55b0963d79ca1f5fd29c542f9dfaf015dc3feab335f73fd30f00 (image=quay.io/ceph/ceph:v18, name=inspiring_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 03:58:53 np0005593232 systemd[1]: var-lib-containers-storage-overlay-855ed0afde93698d47105c97ec1a0d0339b4b468170651c26a92c46067cd0da6-merged.mount: Deactivated successfully.
Jan 23 03:58:53 np0005593232 podman[74817]: 2026-01-23 08:58:53.741730386 +0000 UTC m=+0.610248264 container remove 5657038b863d55b0963d79ca1f5fd29c542f9dfaf015dc3feab335f73fd30f00 (image=quay.io/ceph/ceph:v18, name=inspiring_bouman, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 23 03:58:53 np0005593232 systemd[1]: libpod-conmon-5657038b863d55b0963d79ca1f5fd29c542f9dfaf015dc3feab335f73fd30f00.scope: Deactivated successfully.
Jan 23 03:58:54 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'devicehealth'
Jan 23 03:58:55 np0005593232 ceph-mgr[74726]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 23 03:58:55 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'diskprediction_local'
Jan 23 03:58:55 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:58:55.069+0000 7f157465a140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 23 03:58:55 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 23 03:58:55 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 23 03:58:55 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]:  from numpy import show_config as show_numpy_config
Jan 23 03:58:55 np0005593232 ceph-mgr[74726]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 23 03:58:55 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:58:55.659+0000 7f157465a140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 23 03:58:55 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'influx'
Jan 23 03:58:55 np0005593232 podman[74871]: 2026-01-23 08:58:55.821987629 +0000 UTC m=+0.053226332 container create 74d8fd6eeaed676e00705817bcad26006ee81f1681aa6b2154a64a9253188730 (image=quay.io/ceph/ceph:v18, name=quizzical_yonath, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 23 03:58:55 np0005593232 systemd[1]: Started libpod-conmon-74d8fd6eeaed676e00705817bcad26006ee81f1681aa6b2154a64a9253188730.scope.
Jan 23 03:58:55 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:58:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6e710f6b3cfa0400dc390c998ccfce6193268d34806a0f31a4c781295e6c65/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6e710f6b3cfa0400dc390c998ccfce6193268d34806a0f31a4c781295e6c65/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6e710f6b3cfa0400dc390c998ccfce6193268d34806a0f31a4c781295e6c65/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:55 np0005593232 podman[74871]: 2026-01-23 08:58:55.891834142 +0000 UTC m=+0.123072845 container init 74d8fd6eeaed676e00705817bcad26006ee81f1681aa6b2154a64a9253188730 (image=quay.io/ceph/ceph:v18, name=quizzical_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 03:58:55 np0005593232 podman[74871]: 2026-01-23 08:58:55.798514053 +0000 UTC m=+0.029752786 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:58:55 np0005593232 podman[74871]: 2026-01-23 08:58:55.896572606 +0000 UTC m=+0.127811309 container start 74d8fd6eeaed676e00705817bcad26006ee81f1681aa6b2154a64a9253188730 (image=quay.io/ceph/ceph:v18, name=quizzical_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 03:58:55 np0005593232 podman[74871]: 2026-01-23 08:58:55.899926691 +0000 UTC m=+0.131165394 container attach 74d8fd6eeaed676e00705817bcad26006ee81f1681aa6b2154a64a9253188730 (image=quay.io/ceph/ceph:v18, name=quizzical_yonath, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 03:58:55 np0005593232 ceph-mgr[74726]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 23 03:58:55 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:58:55.928+0000 7f157465a140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 23 03:58:55 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'insights'
Jan 23 03:58:56 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'iostat'
Jan 23 03:58:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 23 03:58:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3607466' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]: 
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]: {
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:    "fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:    "health": {
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "status": "HEALTH_OK",
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "checks": {},
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "mutes": []
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:    },
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:    "election_epoch": 5,
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:    "quorum": [
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        0
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:    ],
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:    "quorum_names": [
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "compute-0"
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:    ],
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:    "quorum_age": 8,
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:    "monmap": {
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "epoch": 1,
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "min_mon_release_name": "reef",
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "num_mons": 1
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:    },
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:    "osdmap": {
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "epoch": 1,
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "num_osds": 0,
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "num_up_osds": 0,
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "osd_up_since": 0,
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "num_in_osds": 0,
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "osd_in_since": 0,
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "num_remapped_pgs": 0
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:    },
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:    "pgmap": {
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "pgs_by_state": [],
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "num_pgs": 0,
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "num_pools": 0,
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "num_objects": 0,
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "data_bytes": 0,
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "bytes_used": 0,
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "bytes_avail": 0,
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "bytes_total": 0
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:    },
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:    "fsmap": {
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "epoch": 1,
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "by_rank": [],
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "up:standby": 0
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:    },
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:    "mgrmap": {
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "available": false,
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "num_standbys": 0,
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "modules": [
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:            "iostat",
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:            "nfs",
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:            "restful"
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        ],
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "services": {}
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:    },
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:    "servicemap": {
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "epoch": 1,
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "modified": "2026-01-23T08:58:44.980509+0000",
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:        "services": {}
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:    },
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]:    "progress_events": {}
Jan 23 03:58:56 np0005593232 quizzical_yonath[74887]: }
Jan 23 03:58:56 np0005593232 systemd[1]: libpod-74d8fd6eeaed676e00705817bcad26006ee81f1681aa6b2154a64a9253188730.scope: Deactivated successfully.
Jan 23 03:58:56 np0005593232 podman[74871]: 2026-01-23 08:58:56.303512918 +0000 UTC m=+0.534751631 container died 74d8fd6eeaed676e00705817bcad26006ee81f1681aa6b2154a64a9253188730 (image=quay.io/ceph/ceph:v18, name=quizzical_yonath, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 03:58:56 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1b6e710f6b3cfa0400dc390c998ccfce6193268d34806a0f31a4c781295e6c65-merged.mount: Deactivated successfully.
Jan 23 03:58:56 np0005593232 podman[74871]: 2026-01-23 08:58:56.347863717 +0000 UTC m=+0.579102420 container remove 74d8fd6eeaed676e00705817bcad26006ee81f1681aa6b2154a64a9253188730 (image=quay.io/ceph/ceph:v18, name=quizzical_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 23 03:58:56 np0005593232 systemd[1]: libpod-conmon-74d8fd6eeaed676e00705817bcad26006ee81f1681aa6b2154a64a9253188730.scope: Deactivated successfully.
Jan 23 03:58:56 np0005593232 ceph-mgr[74726]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 23 03:58:56 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'k8sevents'
Jan 23 03:58:56 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:58:56.443+0000 7f157465a140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 23 03:58:58 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'localpool'
Jan 23 03:58:58 np0005593232 podman[74927]: 2026-01-23 08:58:58.422396504 +0000 UTC m=+0.043321876 container create cda1eac625283b20641a734f633aeeb21b189e516742019b6dbd5aa82d3709ed (image=quay.io/ceph/ceph:v18, name=gifted_lamarr, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Jan 23 03:58:58 np0005593232 systemd[1]: Started libpod-conmon-cda1eac625283b20641a734f633aeeb21b189e516742019b6dbd5aa82d3709ed.scope.
Jan 23 03:58:58 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:58:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/718417e20dcf26daf58cd388741c2fe23778dea3ba2d61c7e667092fa2327ea8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/718417e20dcf26daf58cd388741c2fe23778dea3ba2d61c7e667092fa2327ea8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/718417e20dcf26daf58cd388741c2fe23778dea3ba2d61c7e667092fa2327ea8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:58:58 np0005593232 podman[74927]: 2026-01-23 08:58:58.49404904 +0000 UTC m=+0.114974412 container init cda1eac625283b20641a734f633aeeb21b189e516742019b6dbd5aa82d3709ed (image=quay.io/ceph/ceph:v18, name=gifted_lamarr, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 03:58:58 np0005593232 podman[74927]: 2026-01-23 08:58:58.500550844 +0000 UTC m=+0.121476226 container start cda1eac625283b20641a734f633aeeb21b189e516742019b6dbd5aa82d3709ed (image=quay.io/ceph/ceph:v18, name=gifted_lamarr, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 03:58:58 np0005593232 podman[74927]: 2026-01-23 08:58:58.407342779 +0000 UTC m=+0.028268151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:58:58 np0005593232 podman[74927]: 2026-01-23 08:58:58.504480395 +0000 UTC m=+0.125405767 container attach cda1eac625283b20641a734f633aeeb21b189e516742019b6dbd5aa82d3709ed (image=quay.io/ceph/ceph:v18, name=gifted_lamarr, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 03:58:58 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'mds_autoscaler'
Jan 23 03:58:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 23 03:58:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1276644482' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]: 
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]: {
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:    "fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:    "health": {
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "status": "HEALTH_OK",
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "checks": {},
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "mutes": []
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:    },
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:    "election_epoch": 5,
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:    "quorum": [
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        0
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:    ],
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:    "quorum_names": [
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "compute-0"
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:    ],
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:    "quorum_age": 11,
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:    "monmap": {
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "epoch": 1,
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "min_mon_release_name": "reef",
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "num_mons": 1
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:    },
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:    "osdmap": {
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "epoch": 1,
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "num_osds": 0,
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "num_up_osds": 0,
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "osd_up_since": 0,
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "num_in_osds": 0,
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "osd_in_since": 0,
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "num_remapped_pgs": 0
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:    },
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:    "pgmap": {
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "pgs_by_state": [],
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "num_pgs": 0,
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "num_pools": 0,
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "num_objects": 0,
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "data_bytes": 0,
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "bytes_used": 0,
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "bytes_avail": 0,
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "bytes_total": 0
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:    },
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:    "fsmap": {
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "epoch": 1,
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "by_rank": [],
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "up:standby": 0
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:    },
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:    "mgrmap": {
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "available": false,
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "num_standbys": 0,
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "modules": [
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:            "iostat",
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:            "nfs",
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:            "restful"
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        ],
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "services": {}
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:    },
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:    "servicemap": {
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "epoch": 1,
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "modified": "2026-01-23T08:58:44.980509+0000",
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:        "services": {}
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:    },
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]:    "progress_events": {}
Jan 23 03:58:58 np0005593232 gifted_lamarr[74944]: }
Jan 23 03:58:58 np0005593232 systemd[1]: libpod-cda1eac625283b20641a734f633aeeb21b189e516742019b6dbd5aa82d3709ed.scope: Deactivated successfully.
Jan 23 03:58:58 np0005593232 podman[74927]: 2026-01-23 08:58:58.907502588 +0000 UTC m=+0.528427960 container died cda1eac625283b20641a734f633aeeb21b189e516742019b6dbd5aa82d3709ed (image=quay.io/ceph/ceph:v18, name=gifted_lamarr, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 03:58:58 np0005593232 systemd[1]: var-lib-containers-storage-overlay-718417e20dcf26daf58cd388741c2fe23778dea3ba2d61c7e667092fa2327ea8-merged.mount: Deactivated successfully.
Jan 23 03:58:58 np0005593232 podman[74927]: 2026-01-23 08:58:58.954401104 +0000 UTC m=+0.575326486 container remove cda1eac625283b20641a734f633aeeb21b189e516742019b6dbd5aa82d3709ed (image=quay.io/ceph/ceph:v18, name=gifted_lamarr, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 03:58:58 np0005593232 systemd[1]: libpod-conmon-cda1eac625283b20641a734f633aeeb21b189e516742019b6dbd5aa82d3709ed.scope: Deactivated successfully.
Jan 23 03:58:59 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'mirroring'
Jan 23 03:58:59 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'nfs'
Jan 23 03:59:00 np0005593232 ceph-mgr[74726]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 23 03:59:00 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'orchestrator'
Jan 23 03:59:00 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:00.414+0000 7f157465a140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 23 03:59:01 np0005593232 podman[74980]: 2026-01-23 08:59:01.029472569 +0000 UTC m=+0.046289749 container create 8a4d2648f1cd1b3b92419515b7d0775ca84dd0fb85722799e1f31881a6744b3f (image=quay.io/ceph/ceph:v18, name=elegant_hertz, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 03:59:01 np0005593232 systemd[1]: Started libpod-conmon-8a4d2648f1cd1b3b92419515b7d0775ca84dd0fb85722799e1f31881a6744b3f.scope.
Jan 23 03:59:01 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8d5f5df0c99ea2d45d2122f9cfa215e18020fc3fbd29c6180a94fbf074805d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8d5f5df0c99ea2d45d2122f9cfa215e18020fc3fbd29c6180a94fbf074805d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc8d5f5df0c99ea2d45d2122f9cfa215e18020fc3fbd29c6180a94fbf074805d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:01 np0005593232 podman[74980]: 2026-01-23 08:59:01.100191389 +0000 UTC m=+0.117008649 container init 8a4d2648f1cd1b3b92419515b7d0775ca84dd0fb85722799e1f31881a6744b3f (image=quay.io/ceph/ceph:v18, name=elegant_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 03:59:01 np0005593232 podman[74980]: 2026-01-23 08:59:01.10553725 +0000 UTC m=+0.122354430 container start 8a4d2648f1cd1b3b92419515b7d0775ca84dd0fb85722799e1f31881a6744b3f (image=quay.io/ceph/ceph:v18, name=elegant_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 03:59:01 np0005593232 podman[74980]: 2026-01-23 08:59:01.011633355 +0000 UTC m=+0.028450565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:01 np0005593232 podman[74980]: 2026-01-23 08:59:01.109003068 +0000 UTC m=+0.125820248 container attach 8a4d2648f1cd1b3b92419515b7d0775ca84dd0fb85722799e1f31881a6744b3f (image=quay.io/ceph/ceph:v18, name=elegant_hertz, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 03:59:01 np0005593232 ceph-mgr[74726]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 23 03:59:01 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'osd_perf_query'
Jan 23 03:59:01 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:01.152+0000 7f157465a140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 23 03:59:01 np0005593232 ceph-mgr[74726]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 23 03:59:01 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'osd_support'
Jan 23 03:59:01 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:01.451+0000 7f157465a140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 23 03:59:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 23 03:59:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1082353391' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]: 
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]: {
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:    "fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:    "health": {
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "status": "HEALTH_OK",
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "checks": {},
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "mutes": []
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:    },
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:    "election_epoch": 5,
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:    "quorum": [
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        0
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:    ],
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:    "quorum_names": [
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "compute-0"
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:    ],
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:    "quorum_age": 13,
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:    "monmap": {
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "epoch": 1,
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "min_mon_release_name": "reef",
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "num_mons": 1
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:    },
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:    "osdmap": {
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "epoch": 1,
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "num_osds": 0,
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "num_up_osds": 0,
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "osd_up_since": 0,
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "num_in_osds": 0,
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "osd_in_since": 0,
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "num_remapped_pgs": 0
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:    },
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:    "pgmap": {
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "pgs_by_state": [],
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "num_pgs": 0,
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "num_pools": 0,
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "num_objects": 0,
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "data_bytes": 0,
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "bytes_used": 0,
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "bytes_avail": 0,
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "bytes_total": 0
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:    },
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:    "fsmap": {
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "epoch": 1,
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "by_rank": [],
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "up:standby": 0
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:    },
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:    "mgrmap": {
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "available": false,
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "num_standbys": 0,
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "modules": [
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:            "iostat",
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:            "nfs",
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:            "restful"
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        ],
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "services": {}
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:    },
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:    "servicemap": {
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "epoch": 1,
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "modified": "2026-01-23T08:58:44.980509+0000",
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:        "services": {}
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:    },
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]:    "progress_events": {}
Jan 23 03:59:01 np0005593232 elegant_hertz[74996]: }
Jan 23 03:59:01 np0005593232 systemd[1]: libpod-8a4d2648f1cd1b3b92419515b7d0775ca84dd0fb85722799e1f31881a6744b3f.scope: Deactivated successfully.
Jan 23 03:59:01 np0005593232 podman[74980]: 2026-01-23 08:59:01.531344678 +0000 UTC m=+0.548161878 container died 8a4d2648f1cd1b3b92419515b7d0775ca84dd0fb85722799e1f31881a6744b3f (image=quay.io/ceph/ceph:v18, name=elegant_hertz, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 03:59:01 np0005593232 systemd[1]: var-lib-containers-storage-overlay-cc8d5f5df0c99ea2d45d2122f9cfa215e18020fc3fbd29c6180a94fbf074805d-merged.mount: Deactivated successfully.
Jan 23 03:59:01 np0005593232 podman[74980]: 2026-01-23 08:59:01.577589986 +0000 UTC m=+0.594407166 container remove 8a4d2648f1cd1b3b92419515b7d0775ca84dd0fb85722799e1f31881a6744b3f (image=quay.io/ceph/ceph:v18, name=elegant_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 03:59:01 np0005593232 systemd[1]: libpod-conmon-8a4d2648f1cd1b3b92419515b7d0775ca84dd0fb85722799e1f31881a6744b3f.scope: Deactivated successfully.
Jan 23 03:59:01 np0005593232 ceph-mgr[74726]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 23 03:59:01 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'pg_autoscaler'
Jan 23 03:59:01 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:01.693+0000 7f157465a140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 23 03:59:01 np0005593232 ceph-mgr[74726]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 23 03:59:01 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'progress'
Jan 23 03:59:01 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:01.988+0000 7f157465a140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 23 03:59:02 np0005593232 ceph-mgr[74726]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 23 03:59:02 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'prometheus'
Jan 23 03:59:02 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:02.242+0000 7f157465a140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 23 03:59:03 np0005593232 ceph-mgr[74726]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 23 03:59:03 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'rbd_support'
Jan 23 03:59:03 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:03.326+0000 7f157465a140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 23 03:59:03 np0005593232 podman[75035]: 2026-01-23 08:59:03.640660871 +0000 UTC m=+0.041707240 container create 4919efd8a7c86d0490eed1ae662febacf2eeadd00e6efae3ba6b8430da3f2171 (image=quay.io/ceph/ceph:v18, name=silly_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 03:59:03 np0005593232 systemd[1]: Started libpod-conmon-4919efd8a7c86d0490eed1ae662febacf2eeadd00e6efae3ba6b8430da3f2171.scope.
Jan 23 03:59:03 np0005593232 ceph-mgr[74726]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 23 03:59:03 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:03.682+0000 7f157465a140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 23 03:59:03 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'restful'
Jan 23 03:59:03 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:03 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba09ae3f28825e2d4447225a1a2a249e842b3f5aa39d3a96c6952e3577499944/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:03 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba09ae3f28825e2d4447225a1a2a249e842b3f5aa39d3a96c6952e3577499944/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:03 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba09ae3f28825e2d4447225a1a2a249e842b3f5aa39d3a96c6952e3577499944/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:03 np0005593232 podman[75035]: 2026-01-23 08:59:03.71136561 +0000 UTC m=+0.112412009 container init 4919efd8a7c86d0490eed1ae662febacf2eeadd00e6efae3ba6b8430da3f2171 (image=quay.io/ceph/ceph:v18, name=silly_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 03:59:03 np0005593232 podman[75035]: 2026-01-23 08:59:03.621793428 +0000 UTC m=+0.022839817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:03 np0005593232 podman[75035]: 2026-01-23 08:59:03.716582647 +0000 UTC m=+0.117629016 container start 4919efd8a7c86d0490eed1ae662febacf2eeadd00e6efae3ba6b8430da3f2171 (image=quay.io/ceph/ceph:v18, name=silly_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 03:59:03 np0005593232 podman[75035]: 2026-01-23 08:59:03.720324893 +0000 UTC m=+0.121371312 container attach 4919efd8a7c86d0490eed1ae662febacf2eeadd00e6efae3ba6b8430da3f2171 (image=quay.io/ceph/ceph:v18, name=silly_ritchie, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 03:59:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 23 03:59:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2161214803' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]: 
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]: {
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:    "fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:    "health": {
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "status": "HEALTH_OK",
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "checks": {},
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "mutes": []
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:    },
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:    "election_epoch": 5,
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:    "quorum": [
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        0
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:    ],
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:    "quorum_names": [
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "compute-0"
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:    ],
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:    "quorum_age": 16,
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:    "monmap": {
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "epoch": 1,
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "min_mon_release_name": "reef",
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "num_mons": 1
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:    },
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:    "osdmap": {
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "epoch": 1,
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "num_osds": 0,
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "num_up_osds": 0,
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "osd_up_since": 0,
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "num_in_osds": 0,
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "osd_in_since": 0,
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "num_remapped_pgs": 0
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:    },
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:    "pgmap": {
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "pgs_by_state": [],
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "num_pgs": 0,
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "num_pools": 0,
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "num_objects": 0,
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "data_bytes": 0,
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "bytes_used": 0,
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "bytes_avail": 0,
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "bytes_total": 0
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:    },
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:    "fsmap": {
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "epoch": 1,
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "by_rank": [],
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "up:standby": 0
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:    },
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:    "mgrmap": {
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "available": false,
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "num_standbys": 0,
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "modules": [
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:            "iostat",
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:            "nfs",
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:            "restful"
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        ],
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "services": {}
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:    },
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:    "servicemap": {
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "epoch": 1,
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "modified": "2026-01-23T08:58:44.980509+0000",
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:        "services": {}
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:    },
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]:    "progress_events": {}
Jan 23 03:59:04 np0005593232 silly_ritchie[75051]: }
Jan 23 03:59:04 np0005593232 systemd[1]: libpod-4919efd8a7c86d0490eed1ae662febacf2eeadd00e6efae3ba6b8430da3f2171.scope: Deactivated successfully.
Jan 23 03:59:04 np0005593232 podman[75035]: 2026-01-23 08:59:04.139108723 +0000 UTC m=+0.540155162 container died 4919efd8a7c86d0490eed1ae662febacf2eeadd00e6efae3ba6b8430da3f2171 (image=quay.io/ceph/ceph:v18, name=silly_ritchie, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 03:59:04 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ba09ae3f28825e2d4447225a1a2a249e842b3f5aa39d3a96c6952e3577499944-merged.mount: Deactivated successfully.
Jan 23 03:59:04 np0005593232 podman[75035]: 2026-01-23 08:59:04.186171483 +0000 UTC m=+0.587217852 container remove 4919efd8a7c86d0490eed1ae662febacf2eeadd00e6efae3ba6b8430da3f2171 (image=quay.io/ceph/ceph:v18, name=silly_ritchie, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 03:59:04 np0005593232 systemd[1]: libpod-conmon-4919efd8a7c86d0490eed1ae662febacf2eeadd00e6efae3ba6b8430da3f2171.scope: Deactivated successfully.
Jan 23 03:59:04 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'rgw'
Jan 23 03:59:05 np0005593232 ceph-mgr[74726]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 23 03:59:05 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'rook'
Jan 23 03:59:05 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:05.369+0000 7f157465a140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 23 03:59:06 np0005593232 podman[75088]: 2026-01-23 08:59:06.265978402 +0000 UTC m=+0.045993641 container create 12a6a33d2b1f3034efdae5f93bcbaf3231da8e9fd7ddff8e5bc9435a0ca3ba17 (image=quay.io/ceph/ceph:v18, name=busy_bose, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 03:59:06 np0005593232 systemd[1]: Started libpod-conmon-12a6a33d2b1f3034efdae5f93bcbaf3231da8e9fd7ddff8e5bc9435a0ca3ba17.scope.
Jan 23 03:59:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:06 np0005593232 podman[75088]: 2026-01-23 08:59:06.247884021 +0000 UTC m=+0.027899290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83d39ffb5d3e5c033d0d4878948dddbc65f4fe9c163e3bfe9073b0279314da5e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83d39ffb5d3e5c033d0d4878948dddbc65f4fe9c163e3bfe9073b0279314da5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83d39ffb5d3e5c033d0d4878948dddbc65f4fe9c163e3bfe9073b0279314da5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:06 np0005593232 podman[75088]: 2026-01-23 08:59:06.362375977 +0000 UTC m=+0.142391236 container init 12a6a33d2b1f3034efdae5f93bcbaf3231da8e9fd7ddff8e5bc9435a0ca3ba17 (image=quay.io/ceph/ceph:v18, name=busy_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 03:59:06 np0005593232 podman[75088]: 2026-01-23 08:59:06.36775988 +0000 UTC m=+0.147775119 container start 12a6a33d2b1f3034efdae5f93bcbaf3231da8e9fd7ddff8e5bc9435a0ca3ba17 (image=quay.io/ceph/ceph:v18, name=busy_bose, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 03:59:06 np0005593232 podman[75088]: 2026-01-23 08:59:06.37061189 +0000 UTC m=+0.150627269 container attach 12a6a33d2b1f3034efdae5f93bcbaf3231da8e9fd7ddff8e5bc9435a0ca3ba17 (image=quay.io/ceph/ceph:v18, name=busy_bose, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 03:59:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 23 03:59:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/889889332' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 23 03:59:06 np0005593232 busy_bose[75103]: 
Jan 23 03:59:06 np0005593232 busy_bose[75103]: {
Jan 23 03:59:06 np0005593232 busy_bose[75103]:    "fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 03:59:06 np0005593232 busy_bose[75103]:    "health": {
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "status": "HEALTH_OK",
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "checks": {},
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "mutes": []
Jan 23 03:59:06 np0005593232 busy_bose[75103]:    },
Jan 23 03:59:06 np0005593232 busy_bose[75103]:    "election_epoch": 5,
Jan 23 03:59:06 np0005593232 busy_bose[75103]:    "quorum": [
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        0
Jan 23 03:59:06 np0005593232 busy_bose[75103]:    ],
Jan 23 03:59:06 np0005593232 busy_bose[75103]:    "quorum_names": [
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "compute-0"
Jan 23 03:59:06 np0005593232 busy_bose[75103]:    ],
Jan 23 03:59:06 np0005593232 busy_bose[75103]:    "quorum_age": 18,
Jan 23 03:59:06 np0005593232 busy_bose[75103]:    "monmap": {
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "epoch": 1,
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "min_mon_release_name": "reef",
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "num_mons": 1
Jan 23 03:59:06 np0005593232 busy_bose[75103]:    },
Jan 23 03:59:06 np0005593232 busy_bose[75103]:    "osdmap": {
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "epoch": 1,
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "num_osds": 0,
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "num_up_osds": 0,
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "osd_up_since": 0,
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "num_in_osds": 0,
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "osd_in_since": 0,
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "num_remapped_pgs": 0
Jan 23 03:59:06 np0005593232 busy_bose[75103]:    },
Jan 23 03:59:06 np0005593232 busy_bose[75103]:    "pgmap": {
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "pgs_by_state": [],
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "num_pgs": 0,
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "num_pools": 0,
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "num_objects": 0,
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "data_bytes": 0,
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "bytes_used": 0,
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "bytes_avail": 0,
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "bytes_total": 0
Jan 23 03:59:06 np0005593232 busy_bose[75103]:    },
Jan 23 03:59:06 np0005593232 busy_bose[75103]:    "fsmap": {
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "epoch": 1,
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "by_rank": [],
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "up:standby": 0
Jan 23 03:59:06 np0005593232 busy_bose[75103]:    },
Jan 23 03:59:06 np0005593232 busy_bose[75103]:    "mgrmap": {
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "available": false,
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "num_standbys": 0,
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "modules": [
Jan 23 03:59:06 np0005593232 busy_bose[75103]:            "iostat",
Jan 23 03:59:06 np0005593232 busy_bose[75103]:            "nfs",
Jan 23 03:59:06 np0005593232 busy_bose[75103]:            "restful"
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        ],
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "services": {}
Jan 23 03:59:06 np0005593232 busy_bose[75103]:    },
Jan 23 03:59:06 np0005593232 busy_bose[75103]:    "servicemap": {
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "epoch": 1,
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "modified": "2026-01-23T08:58:44.980509+0000",
Jan 23 03:59:06 np0005593232 busy_bose[75103]:        "services": {}
Jan 23 03:59:06 np0005593232 busy_bose[75103]:    },
Jan 23 03:59:06 np0005593232 busy_bose[75103]:    "progress_events": {}
Jan 23 03:59:06 np0005593232 busy_bose[75103]: }
Jan 23 03:59:06 np0005593232 systemd[1]: libpod-12a6a33d2b1f3034efdae5f93bcbaf3231da8e9fd7ddff8e5bc9435a0ca3ba17.scope: Deactivated successfully.
Jan 23 03:59:06 np0005593232 podman[75088]: 2026-01-23 08:59:06.782297339 +0000 UTC m=+0.562312578 container died 12a6a33d2b1f3034efdae5f93bcbaf3231da8e9fd7ddff8e5bc9435a0ca3ba17 (image=quay.io/ceph/ceph:v18, name=busy_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 23 03:59:07 np0005593232 systemd[1]: var-lib-containers-storage-overlay-83d39ffb5d3e5c033d0d4878948dddbc65f4fe9c163e3bfe9073b0279314da5e-merged.mount: Deactivated successfully.
Jan 23 03:59:07 np0005593232 podman[75088]: 2026-01-23 08:59:07.662256407 +0000 UTC m=+1.442271646 container remove 12a6a33d2b1f3034efdae5f93bcbaf3231da8e9fd7ddff8e5bc9435a0ca3ba17 (image=quay.io/ceph/ceph:v18, name=busy_bose, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 03:59:07 np0005593232 systemd[1]: libpod-conmon-12a6a33d2b1f3034efdae5f93bcbaf3231da8e9fd7ddff8e5bc9435a0ca3ba17.scope: Deactivated successfully.
Jan 23 03:59:07 np0005593232 ceph-mgr[74726]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 23 03:59:07 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'selftest'
Jan 23 03:59:07 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:07.820+0000 7f157465a140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 23 03:59:08 np0005593232 ceph-mgr[74726]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 23 03:59:08 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'snap_schedule'
Jan 23 03:59:08 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:08.095+0000 7f157465a140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 23 03:59:08 np0005593232 ceph-mgr[74726]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 23 03:59:08 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'stats'
Jan 23 03:59:08 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:08.358+0000 7f157465a140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 23 03:59:08 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'status'
Jan 23 03:59:08 np0005593232 ceph-mgr[74726]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 23 03:59:08 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'telegraf'
Jan 23 03:59:08 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:08.936+0000 7f157465a140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 23 03:59:09 np0005593232 ceph-mgr[74726]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 23 03:59:09 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'telemetry'
Jan 23 03:59:09 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:09.202+0000 7f157465a140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 23 03:59:09 np0005593232 podman[75143]: 2026-01-23 08:59:09.727087492 +0000 UTC m=+0.040353712 container create b437fe2bc1c21f68fa30387db8b7a26fb46e94ff8b0ecec59d64488d531ee673 (image=quay.io/ceph/ceph:v18, name=festive_haslett, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 03:59:09 np0005593232 systemd[1]: Started libpod-conmon-b437fe2bc1c21f68fa30387db8b7a26fb46e94ff8b0ecec59d64488d531ee673.scope.
Jan 23 03:59:09 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e650cf878f6327b716ce84663dec478bb692982c05b33bdedc8cc7241bfbf108/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e650cf878f6327b716ce84663dec478bb692982c05b33bdedc8cc7241bfbf108/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e650cf878f6327b716ce84663dec478bb692982c05b33bdedc8cc7241bfbf108/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:09 np0005593232 podman[75143]: 2026-01-23 08:59:09.709050382 +0000 UTC m=+0.022316642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:09 np0005593232 podman[75143]: 2026-01-23 08:59:09.805634633 +0000 UTC m=+0.118900893 container init b437fe2bc1c21f68fa30387db8b7a26fb46e94ff8b0ecec59d64488d531ee673 (image=quay.io/ceph/ceph:v18, name=festive_haslett, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 23 03:59:09 np0005593232 podman[75143]: 2026-01-23 08:59:09.81153656 +0000 UTC m=+0.124802790 container start b437fe2bc1c21f68fa30387db8b7a26fb46e94ff8b0ecec59d64488d531ee673 (image=quay.io/ceph/ceph:v18, name=festive_haslett, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 03:59:09 np0005593232 podman[75143]: 2026-01-23 08:59:09.819959058 +0000 UTC m=+0.133225308 container attach b437fe2bc1c21f68fa30387db8b7a26fb46e94ff8b0ecec59d64488d531ee673 (image=quay.io/ceph/ceph:v18, name=festive_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 03:59:09 np0005593232 ceph-mgr[74726]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 23 03:59:09 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'test_orchestrator'
Jan 23 03:59:09 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:09.870+0000 7f157465a140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 23 03:59:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 23 03:59:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2141839912' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 23 03:59:10 np0005593232 festive_haslett[75159]: 
Jan 23 03:59:10 np0005593232 festive_haslett[75159]: {
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:    "fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:    "health": {
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "status": "HEALTH_OK",
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "checks": {},
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "mutes": []
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:    },
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:    "election_epoch": 5,
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:    "quorum": [
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        0
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:    ],
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:    "quorum_names": [
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "compute-0"
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:    ],
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:    "quorum_age": 22,
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:    "monmap": {
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "epoch": 1,
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "min_mon_release_name": "reef",
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "num_mons": 1
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:    },
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:    "osdmap": {
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "epoch": 1,
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "num_osds": 0,
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "num_up_osds": 0,
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "osd_up_since": 0,
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "num_in_osds": 0,
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "osd_in_since": 0,
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "num_remapped_pgs": 0
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:    },
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:    "pgmap": {
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "pgs_by_state": [],
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "num_pgs": 0,
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "num_pools": 0,
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "num_objects": 0,
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "data_bytes": 0,
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "bytes_used": 0,
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "bytes_avail": 0,
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "bytes_total": 0
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:    },
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:    "fsmap": {
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "epoch": 1,
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "by_rank": [],
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "up:standby": 0
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:    },
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:    "mgrmap": {
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "available": false,
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "num_standbys": 0,
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "modules": [
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:            "iostat",
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:            "nfs",
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:            "restful"
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        ],
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "services": {}
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:    },
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:    "servicemap": {
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "epoch": 1,
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "modified": "2026-01-23T08:58:44.980509+0000",
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:        "services": {}
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:    },
Jan 23 03:59:10 np0005593232 festive_haslett[75159]:    "progress_events": {}
Jan 23 03:59:10 np0005593232 festive_haslett[75159]: }
Jan 23 03:59:10 np0005593232 systemd[1]: libpod-b437fe2bc1c21f68fa30387db8b7a26fb46e94ff8b0ecec59d64488d531ee673.scope: Deactivated successfully.
Jan 23 03:59:10 np0005593232 podman[75143]: 2026-01-23 08:59:10.210302564 +0000 UTC m=+0.523568794 container died b437fe2bc1c21f68fa30387db8b7a26fb46e94ff8b0ecec59d64488d531ee673 (image=quay.io/ceph/ceph:v18, name=festive_haslett, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 03:59:10 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e650cf878f6327b716ce84663dec478bb692982c05b33bdedc8cc7241bfbf108-merged.mount: Deactivated successfully.
Jan 23 03:59:10 np0005593232 podman[75143]: 2026-01-23 08:59:10.252943389 +0000 UTC m=+0.566209619 container remove b437fe2bc1c21f68fa30387db8b7a26fb46e94ff8b0ecec59d64488d531ee673 (image=quay.io/ceph/ceph:v18, name=festive_haslett, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 03:59:10 np0005593232 systemd[1]: libpod-conmon-b437fe2bc1c21f68fa30387db8b7a26fb46e94ff8b0ecec59d64488d531ee673.scope: Deactivated successfully.
Jan 23 03:59:10 np0005593232 ceph-mgr[74726]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 23 03:59:10 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'volumes'
Jan 23 03:59:10 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:10.596+0000 7f157465a140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'zabbix'
Jan 23 03:59:11 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:11.382+0000 7f157465a140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 23 03:59:11 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:11.672+0000 7f157465a140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: ms_deliver_dispatch: unhandled message 0x55ecddee4f20 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.yntofk
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: mgr handle_mgr_map Activating!
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: mgr handle_mgr_map I am now activating
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.yntofk(active, starting, since 0.0123039s)
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/323176679' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).mds e1 all = 1
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/323176679' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/323176679' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/323176679' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.yntofk", "id": "compute-0.yntofk"} v 0) v1
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/323176679' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mgr metadata", "who": "compute-0.yntofk", "id": "compute-0.yntofk"}]: dispatch
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : Manager daemon compute-0.yntofk is now available
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: balancer
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [balancer INFO root] Starting
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: crash
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_08:59:11
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [balancer INFO root] No pools available
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: devicehealth
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [devicehealth INFO root] Starting
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: iostat
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: nfs
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: orchestrator
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: pg_autoscaler
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: progress
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [progress INFO root] Loading...
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [progress INFO root] No stored events to load
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [progress INFO root] Loaded [] historic events
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [progress INFO root] Loaded OSDMap, ready.
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] recovery thread starting
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] starting setup
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: rbd_support
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: restful
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [restful INFO root] server_addr: :: server_port: 8003
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.yntofk/mirror_snapshot_schedule"} v 0) v1
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/323176679' entity='mgr.compute-0.yntofk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.yntofk/mirror_snapshot_schedule"}]: dispatch
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: status
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: telemetry
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [restful WARNING root] server not running: no certificate configured
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] PerfHandler: starting
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TaskHandler: starting
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.yntofk/trash_purge_schedule"} v 0) v1
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/323176679' entity='mgr.compute-0.yntofk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.yntofk/trash_purge_schedule"}]: dispatch
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: Activating manager daemon compute-0.yntofk
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] setup complete
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: Manager daemon compute-0.yntofk is now available
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: from='mgr.14102 192.168.122.100:0/323176679' entity='mgr.compute-0.yntofk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.yntofk/mirror_snapshot_schedule"}]: dispatch
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/323176679' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Jan 23 03:59:11 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: volumes
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/323176679' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Jan 23 03:59:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/323176679' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:12 np0005593232 podman[75277]: 2026-01-23 08:59:12.525698895 +0000 UTC m=+0.246855361 container create 444db53c822fc31f41fd35942d8cd360bbaaeeddac94e97b7e8d7860b5e98bce (image=quay.io/ceph/ceph:v18, name=gifted_tharp, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 03:59:12 np0005593232 podman[75277]: 2026-01-23 08:59:12.432019445 +0000 UTC m=+0.153175931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:12 np0005593232 systemd[1]: Started libpod-conmon-444db53c822fc31f41fd35942d8cd360bbaaeeddac94e97b7e8d7860b5e98bce.scope.
Jan 23 03:59:12 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f967303b02327404a96b6295662e652086a09b36be86958a8bc1694310cf2de/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f967303b02327404a96b6295662e652086a09b36be86958a8bc1694310cf2de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f967303b02327404a96b6295662e652086a09b36be86958a8bc1694310cf2de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:12 np0005593232 podman[75277]: 2026-01-23 08:59:12.615921525 +0000 UTC m=+0.337078011 container init 444db53c822fc31f41fd35942d8cd360bbaaeeddac94e97b7e8d7860b5e98bce (image=quay.io/ceph/ceph:v18, name=gifted_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 03:59:12 np0005593232 podman[75277]: 2026-01-23 08:59:12.62602142 +0000 UTC m=+0.347177886 container start 444db53c822fc31f41fd35942d8cd360bbaaeeddac94e97b7e8d7860b5e98bce (image=quay.io/ceph/ceph:v18, name=gifted_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 23 03:59:12 np0005593232 podman[75277]: 2026-01-23 08:59:12.629485908 +0000 UTC m=+0.350642394 container attach 444db53c822fc31f41fd35942d8cd360bbaaeeddac94e97b7e8d7860b5e98bce (image=quay.io/ceph/ceph:v18, name=gifted_tharp, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 03:59:12 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.yntofk(active, since 1.02174s)
Jan 23 03:59:12 np0005593232 ceph-mon[74423]: from='mgr.14102 192.168.122.100:0/323176679' entity='mgr.compute-0.yntofk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.yntofk/trash_purge_schedule"}]: dispatch
Jan 23 03:59:12 np0005593232 ceph-mon[74423]: from='mgr.14102 192.168.122.100:0/323176679' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:12 np0005593232 ceph-mon[74423]: from='mgr.14102 192.168.122.100:0/323176679' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:12 np0005593232 ceph-mon[74423]: from='mgr.14102 192.168.122.100:0/323176679' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 23 03:59:13 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2481853949' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]: 
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]: {
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:    "fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:    "health": {
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "status": "HEALTH_OK",
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "checks": {},
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "mutes": []
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:    },
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:    "election_epoch": 5,
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:    "quorum": [
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        0
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:    ],
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:    "quorum_names": [
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "compute-0"
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:    ],
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:    "quorum_age": 25,
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:    "monmap": {
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "epoch": 1,
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "min_mon_release_name": "reef",
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "num_mons": 1
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:    },
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:    "osdmap": {
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "epoch": 1,
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "num_osds": 0,
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "num_up_osds": 0,
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "osd_up_since": 0,
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "num_in_osds": 0,
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "osd_in_since": 0,
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "num_remapped_pgs": 0
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:    },
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:    "pgmap": {
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "pgs_by_state": [],
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "num_pgs": 0,
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "num_pools": 0,
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "num_objects": 0,
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "data_bytes": 0,
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "bytes_used": 0,
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "bytes_avail": 0,
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "bytes_total": 0
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:    },
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:    "fsmap": {
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "epoch": 1,
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "by_rank": [],
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "up:standby": 0
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:    },
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:    "mgrmap": {
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "available": true,
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "num_standbys": 0,
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "modules": [
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:            "iostat",
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:            "nfs",
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:            "restful"
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        ],
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "services": {}
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:    },
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:    "servicemap": {
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "epoch": 1,
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "modified": "2026-01-23T08:58:44.980509+0000",
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:        "services": {}
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:    },
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]:    "progress_events": {}
Jan 23 03:59:13 np0005593232 gifted_tharp[75293]: }
Jan 23 03:59:13 np0005593232 systemd[1]: libpod-444db53c822fc31f41fd35942d8cd360bbaaeeddac94e97b7e8d7860b5e98bce.scope: Deactivated successfully.
Jan 23 03:59:13 np0005593232 podman[75277]: 2026-01-23 08:59:13.38264652 +0000 UTC m=+1.103802996 container died 444db53c822fc31f41fd35942d8cd360bbaaeeddac94e97b7e8d7860b5e98bce (image=quay.io/ceph/ceph:v18, name=gifted_tharp, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 23 03:59:13 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6f967303b02327404a96b6295662e652086a09b36be86958a8bc1694310cf2de-merged.mount: Deactivated successfully.
Jan 23 03:59:13 np0005593232 podman[75277]: 2026-01-23 08:59:13.42223459 +0000 UTC m=+1.143391056 container remove 444db53c822fc31f41fd35942d8cd360bbaaeeddac94e97b7e8d7860b5e98bce (image=quay.io/ceph/ceph:v18, name=gifted_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 23 03:59:13 np0005593232 systemd[1]: libpod-conmon-444db53c822fc31f41fd35942d8cd360bbaaeeddac94e97b7e8d7860b5e98bce.scope: Deactivated successfully.
Jan 23 03:59:13 np0005593232 podman[75330]: 2026-01-23 08:59:13.478666325 +0000 UTC m=+0.037593834 container create e93761258acc02e56e302d00a571fc789bc59e653aba9d0761a318424c47862a (image=quay.io/ceph/ceph:v18, name=zen_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 03:59:13 np0005593232 systemd[1]: Started libpod-conmon-e93761258acc02e56e302d00a571fc789bc59e653aba9d0761a318424c47862a.scope.
Jan 23 03:59:13 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3204fc3bff1f702fee9eccacabd1c5cbbfb6b82e96e4f2decbdd35dfc5e0009b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3204fc3bff1f702fee9eccacabd1c5cbbfb6b82e96e4f2decbdd35dfc5e0009b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3204fc3bff1f702fee9eccacabd1c5cbbfb6b82e96e4f2decbdd35dfc5e0009b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3204fc3bff1f702fee9eccacabd1c5cbbfb6b82e96e4f2decbdd35dfc5e0009b/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:13 np0005593232 podman[75330]: 2026-01-23 08:59:13.462271042 +0000 UTC m=+0.021198581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:13 np0005593232 podman[75330]: 2026-01-23 08:59:13.559449249 +0000 UTC m=+0.118376768 container init e93761258acc02e56e302d00a571fc789bc59e653aba9d0761a318424c47862a (image=quay.io/ceph/ceph:v18, name=zen_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 03:59:13 np0005593232 podman[75330]: 2026-01-23 08:59:13.571726006 +0000 UTC m=+0.130653515 container start e93761258acc02e56e302d00a571fc789bc59e653aba9d0761a318424c47862a (image=quay.io/ceph/ceph:v18, name=zen_saha, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 23 03:59:13 np0005593232 podman[75330]: 2026-01-23 08:59:13.575351418 +0000 UTC m=+0.134278947 container attach e93761258acc02e56e302d00a571fc789bc59e653aba9d0761a318424c47862a (image=quay.io/ceph/ceph:v18, name=zen_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 23 03:59:13 np0005593232 ceph-mgr[74726]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 03:59:13 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.yntofk(active, since 2s)
Jan 23 03:59:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 23 03:59:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3954079545' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 23 03:59:14 np0005593232 systemd[1]: libpod-e93761258acc02e56e302d00a571fc789bc59e653aba9d0761a318424c47862a.scope: Deactivated successfully.
Jan 23 03:59:14 np0005593232 conmon[75346]: conmon e93761258acc02e56e30 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e93761258acc02e56e302d00a571fc789bc59e653aba9d0761a318424c47862a.scope/container/memory.events
Jan 23 03:59:14 np0005593232 podman[75330]: 2026-01-23 08:59:14.366060603 +0000 UTC m=+0.924988132 container died e93761258acc02e56e302d00a571fc789bc59e653aba9d0761a318424c47862a (image=quay.io/ceph/ceph:v18, name=zen_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 23 03:59:14 np0005593232 systemd[1]: var-lib-containers-storage-overlay-3204fc3bff1f702fee9eccacabd1c5cbbfb6b82e96e4f2decbdd35dfc5e0009b-merged.mount: Deactivated successfully.
Jan 23 03:59:14 np0005593232 podman[75330]: 2026-01-23 08:59:14.407101314 +0000 UTC m=+0.966028833 container remove e93761258acc02e56e302d00a571fc789bc59e653aba9d0761a318424c47862a (image=quay.io/ceph/ceph:v18, name=zen_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 03:59:14 np0005593232 systemd[1]: libpod-conmon-e93761258acc02e56e302d00a571fc789bc59e653aba9d0761a318424c47862a.scope: Deactivated successfully.
Jan 23 03:59:14 np0005593232 podman[75384]: 2026-01-23 08:59:14.478310627 +0000 UTC m=+0.045391745 container create e324ccd7aec8bba6199e9b481c89fab056e3f7f077056f50742005222201936f (image=quay.io/ceph/ceph:v18, name=nervous_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 23 03:59:14 np0005593232 systemd[1]: Started libpod-conmon-e324ccd7aec8bba6199e9b481c89fab056e3f7f077056f50742005222201936f.scope.
Jan 23 03:59:14 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3731ec8bd6538f7d3f3b02b304e747cff0209a875f8e77f129875067b0282ab0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3731ec8bd6538f7d3f3b02b304e747cff0209a875f8e77f129875067b0282ab0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3731ec8bd6538f7d3f3b02b304e747cff0209a875f8e77f129875067b0282ab0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:14 np0005593232 podman[75384]: 2026-01-23 08:59:14.53642247 +0000 UTC m=+0.103503618 container init e324ccd7aec8bba6199e9b481c89fab056e3f7f077056f50742005222201936f (image=quay.io/ceph/ceph:v18, name=nervous_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 03:59:14 np0005593232 podman[75384]: 2026-01-23 08:59:14.541464982 +0000 UTC m=+0.108546100 container start e324ccd7aec8bba6199e9b481c89fab056e3f7f077056f50742005222201936f (image=quay.io/ceph/ceph:v18, name=nervous_roentgen, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 03:59:14 np0005593232 podman[75384]: 2026-01-23 08:59:14.545531867 +0000 UTC m=+0.112613005 container attach e324ccd7aec8bba6199e9b481c89fab056e3f7f077056f50742005222201936f (image=quay.io/ceph/ceph:v18, name=nervous_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 03:59:14 np0005593232 podman[75384]: 2026-01-23 08:59:14.459422843 +0000 UTC m=+0.026503981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:14 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/3954079545' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 23 03:59:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Jan 23 03:59:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1841516100' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 23 03:59:15 np0005593232 ceph-mgr[74726]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 03:59:15 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/1841516100' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 23 03:59:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1841516100' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 23 03:59:15 np0005593232 ceph-mgr[74726]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 23 03:59:15 np0005593232 ceph-mgr[74726]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 23 03:59:15 np0005593232 ceph-mgr[74726]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 23 03:59:15 np0005593232 ceph-mgr[74726]: mgr respawn  1: '-n'
Jan 23 03:59:15 np0005593232 ceph-mgr[74726]: mgr respawn  2: 'mgr.compute-0.yntofk'
Jan 23 03:59:15 np0005593232 ceph-mgr[74726]: mgr respawn  3: '-f'
Jan 23 03:59:15 np0005593232 ceph-mgr[74726]: mgr respawn  4: '--setuser'
Jan 23 03:59:15 np0005593232 ceph-mgr[74726]: mgr respawn  5: 'ceph'
Jan 23 03:59:15 np0005593232 ceph-mgr[74726]: mgr respawn  6: '--setgroup'
Jan 23 03:59:15 np0005593232 ceph-mgr[74726]: mgr respawn  7: 'ceph'
Jan 23 03:59:15 np0005593232 ceph-mgr[74726]: mgr respawn  8: '--default-log-to-file=false'
Jan 23 03:59:15 np0005593232 ceph-mgr[74726]: mgr respawn  9: '--default-log-to-journald=true'
Jan 23 03:59:15 np0005593232 ceph-mgr[74726]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 23 03:59:15 np0005593232 ceph-mgr[74726]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 23 03:59:15 np0005593232 ceph-mgr[74726]: mgr respawn  exe_path /proc/self/exe
Jan 23 03:59:15 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.yntofk(active, since 4s)
Jan 23 03:59:15 np0005593232 systemd[1]: libpod-e324ccd7aec8bba6199e9b481c89fab056e3f7f077056f50742005222201936f.scope: Deactivated successfully.
Jan 23 03:59:15 np0005593232 podman[75384]: 2026-01-23 08:59:15.797609586 +0000 UTC m=+1.364690724 container died e324ccd7aec8bba6199e9b481c89fab056e3f7f077056f50742005222201936f (image=quay.io/ceph/ceph:v18, name=nervous_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 03:59:15 np0005593232 systemd[1]: var-lib-containers-storage-overlay-3731ec8bd6538f7d3f3b02b304e747cff0209a875f8e77f129875067b0282ab0-merged.mount: Deactivated successfully.
Jan 23 03:59:15 np0005593232 podman[75384]: 2026-01-23 08:59:15.841151207 +0000 UTC m=+1.408232365 container remove e324ccd7aec8bba6199e9b481c89fab056e3f7f077056f50742005222201936f (image=quay.io/ceph/ceph:v18, name=nervous_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 03:59:15 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: ignoring --setuser ceph since I am not root
Jan 23 03:59:15 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: ignoring --setgroup ceph since I am not root
Jan 23 03:59:15 np0005593232 systemd[1]: libpod-conmon-e324ccd7aec8bba6199e9b481c89fab056e3f7f077056f50742005222201936f.scope: Deactivated successfully.
Jan 23 03:59:15 np0005593232 ceph-mgr[74726]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 23 03:59:15 np0005593232 ceph-mgr[74726]: pidfile_write: ignore empty --pid-file
Jan 23 03:59:15 np0005593232 podman[75439]: 2026-01-23 08:59:15.935615157 +0000 UTC m=+0.063605419 container create df09e55745e509a62e03bbde80e307dd22dfc0b1cc9a9b2297cef36732b02f03 (image=quay.io/ceph/ceph:v18, name=condescending_bohr, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 03:59:15 np0005593232 systemd[1]: Started libpod-conmon-df09e55745e509a62e03bbde80e307dd22dfc0b1cc9a9b2297cef36732b02f03.scope.
Jan 23 03:59:15 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'alerts'
Jan 23 03:59:15 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:15 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5994df2ac0a4640ea50072a989c781c347c2f455ab4ec677aec6b3bec4404c9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:15 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5994df2ac0a4640ea50072a989c781c347c2f455ab4ec677aec6b3bec4404c9f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:15 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5994df2ac0a4640ea50072a989c781c347c2f455ab4ec677aec6b3bec4404c9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:15 np0005593232 podman[75439]: 2026-01-23 08:59:15.900749492 +0000 UTC m=+0.028739784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:16 np0005593232 podman[75439]: 2026-01-23 08:59:16.010749622 +0000 UTC m=+0.138739934 container init df09e55745e509a62e03bbde80e307dd22dfc0b1cc9a9b2297cef36732b02f03 (image=quay.io/ceph/ceph:v18, name=condescending_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 03:59:16 np0005593232 podman[75439]: 2026-01-23 08:59:16.016916106 +0000 UTC m=+0.144906378 container start df09e55745e509a62e03bbde80e307dd22dfc0b1cc9a9b2297cef36732b02f03 (image=quay.io/ceph/ceph:v18, name=condescending_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 03:59:16 np0005593232 podman[75439]: 2026-01-23 08:59:16.021076333 +0000 UTC m=+0.149066615 container attach df09e55745e509a62e03bbde80e307dd22dfc0b1cc9a9b2297cef36732b02f03 (image=quay.io/ceph/ceph:v18, name=condescending_bohr, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 03:59:16 np0005593232 ceph-mgr[74726]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 23 03:59:16 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:16.299+0000 7f3481ce1140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 23 03:59:16 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'balancer'
Jan 23 03:59:16 np0005593232 ceph-mgr[74726]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 23 03:59:16 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'cephadm'
Jan 23 03:59:16 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:16.564+0000 7f3481ce1140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 23 03:59:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 23 03:59:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3706609094' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 23 03:59:16 np0005593232 condescending_bohr[75479]: {
Jan 23 03:59:16 np0005593232 condescending_bohr[75479]:    "epoch": 5,
Jan 23 03:59:16 np0005593232 condescending_bohr[75479]:    "available": true,
Jan 23 03:59:16 np0005593232 condescending_bohr[75479]:    "active_name": "compute-0.yntofk",
Jan 23 03:59:16 np0005593232 condescending_bohr[75479]:    "num_standby": 0
Jan 23 03:59:16 np0005593232 condescending_bohr[75479]: }
Jan 23 03:59:16 np0005593232 systemd[1]: libpod-df09e55745e509a62e03bbde80e307dd22dfc0b1cc9a9b2297cef36732b02f03.scope: Deactivated successfully.
Jan 23 03:59:16 np0005593232 podman[75439]: 2026-01-23 08:59:16.609334204 +0000 UTC m=+0.737324486 container died df09e55745e509a62e03bbde80e307dd22dfc0b1cc9a9b2297cef36732b02f03 (image=quay.io/ceph/ceph:v18, name=condescending_bohr, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 03:59:16 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5994df2ac0a4640ea50072a989c781c347c2f455ab4ec677aec6b3bec4404c9f-merged.mount: Deactivated successfully.
Jan 23 03:59:16 np0005593232 podman[75439]: 2026-01-23 08:59:16.653294677 +0000 UTC m=+0.781284949 container remove df09e55745e509a62e03bbde80e307dd22dfc0b1cc9a9b2297cef36732b02f03 (image=quay.io/ceph/ceph:v18, name=condescending_bohr, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 03:59:16 np0005593232 systemd[1]: libpod-conmon-df09e55745e509a62e03bbde80e307dd22dfc0b1cc9a9b2297cef36732b02f03.scope: Deactivated successfully.
Jan 23 03:59:16 np0005593232 podman[75517]: 2026-01-23 08:59:16.716556896 +0000 UTC m=+0.043315966 container create af8b2e383eba50567449aafdcf454ac96d47f8afe925714396322a1ca2e53c9f (image=quay.io/ceph/ceph:v18, name=nifty_carver, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 23 03:59:16 np0005593232 systemd[1]: Started libpod-conmon-af8b2e383eba50567449aafdcf454ac96d47f8afe925714396322a1ca2e53c9f.scope.
Jan 23 03:59:16 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/1841516100' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 23 03:59:16 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/186c0cf85b6e31219361edb1aad08a6a2f291a3f4f418e6d5bfe15d21980a4be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/186c0cf85b6e31219361edb1aad08a6a2f291a3f4f418e6d5bfe15d21980a4be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:16 np0005593232 podman[75517]: 2026-01-23 08:59:16.696247902 +0000 UTC m=+0.023007002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/186c0cf85b6e31219361edb1aad08a6a2f291a3f4f418e6d5bfe15d21980a4be/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:16 np0005593232 podman[75517]: 2026-01-23 08:59:16.799893531 +0000 UTC m=+0.126652621 container init af8b2e383eba50567449aafdcf454ac96d47f8afe925714396322a1ca2e53c9f (image=quay.io/ceph/ceph:v18, name=nifty_carver, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 03:59:16 np0005593232 podman[75517]: 2026-01-23 08:59:16.810347456 +0000 UTC m=+0.137106526 container start af8b2e383eba50567449aafdcf454ac96d47f8afe925714396322a1ca2e53c9f (image=quay.io/ceph/ceph:v18, name=nifty_carver, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 03:59:16 np0005593232 podman[75517]: 2026-01-23 08:59:16.813349461 +0000 UTC m=+0.140108551 container attach af8b2e383eba50567449aafdcf454ac96d47f8afe925714396322a1ca2e53c9f (image=quay.io/ceph/ceph:v18, name=nifty_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 03:59:18 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'crash'
Jan 23 03:59:18 np0005593232 ceph-mgr[74726]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 23 03:59:18 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:18.850+0000 7f3481ce1140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 23 03:59:18 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'dashboard'
Jan 23 03:59:20 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'devicehealth'
Jan 23 03:59:20 np0005593232 ceph-mgr[74726]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 23 03:59:20 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'diskprediction_local'
Jan 23 03:59:20 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:20.667+0000 7f3481ce1140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 23 03:59:21 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 23 03:59:21 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 23 03:59:21 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]:  from numpy import show_config as show_numpy_config
Jan 23 03:59:21 np0005593232 ceph-mgr[74726]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 23 03:59:21 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:21.241+0000 7f3481ce1140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 23 03:59:21 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'influx'
Jan 23 03:59:21 np0005593232 ceph-mgr[74726]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 23 03:59:21 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'insights'
Jan 23 03:59:21 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:21.504+0000 7f3481ce1140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 23 03:59:21 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'iostat'
Jan 23 03:59:22 np0005593232 ceph-mgr[74726]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 23 03:59:22 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:22.013+0000 7f3481ce1140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 23 03:59:22 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'k8sevents'
Jan 23 03:59:23 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'localpool'
Jan 23 03:59:24 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'mds_autoscaler'
Jan 23 03:59:24 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'mirroring'
Jan 23 03:59:25 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'nfs'
Jan 23 03:59:26 np0005593232 ceph-mgr[74726]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 23 03:59:26 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:26.045+0000 7f3481ce1140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 23 03:59:26 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'orchestrator'
Jan 23 03:59:26 np0005593232 ceph-mgr[74726]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 23 03:59:26 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'osd_perf_query'
Jan 23 03:59:26 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:26.756+0000 7f3481ce1140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 23 03:59:27 np0005593232 ceph-mgr[74726]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 23 03:59:27 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:27.031+0000 7f3481ce1140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 23 03:59:27 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'osd_support'
Jan 23 03:59:27 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:27.304+0000 7f3481ce1140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 23 03:59:27 np0005593232 ceph-mgr[74726]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 23 03:59:27 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'pg_autoscaler'
Jan 23 03:59:27 np0005593232 ceph-mgr[74726]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 23 03:59:27 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:27.616+0000 7f3481ce1140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 23 03:59:27 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'progress'
Jan 23 03:59:27 np0005593232 ceph-mgr[74726]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 23 03:59:27 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:27.893+0000 7f3481ce1140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 23 03:59:27 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'prometheus'
Jan 23 03:59:29 np0005593232 ceph-mgr[74726]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 23 03:59:29 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:29.027+0000 7f3481ce1140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 23 03:59:29 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'rbd_support'
Jan 23 03:59:29 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:29.350+0000 7f3481ce1140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 23 03:59:29 np0005593232 ceph-mgr[74726]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 23 03:59:29 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'restful'
Jan 23 03:59:30 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'rgw'
Jan 23 03:59:30 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:30.886+0000 7f3481ce1140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 23 03:59:30 np0005593232 ceph-mgr[74726]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 23 03:59:30 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'rook'
Jan 23 03:59:33 np0005593232 ceph-mgr[74726]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 23 03:59:33 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:33.155+0000 7f3481ce1140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 23 03:59:33 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'selftest'
Jan 23 03:59:33 np0005593232 ceph-mgr[74726]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 23 03:59:33 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:33.413+0000 7f3481ce1140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 23 03:59:33 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'snap_schedule'
Jan 23 03:59:33 np0005593232 ceph-mgr[74726]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 23 03:59:33 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:33.687+0000 7f3481ce1140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 23 03:59:33 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'stats'
Jan 23 03:59:33 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'status'
Jan 23 03:59:34 np0005593232 ceph-mgr[74726]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 23 03:59:34 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:34.252+0000 7f3481ce1140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 23 03:59:34 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'telegraf'
Jan 23 03:59:34 np0005593232 ceph-mgr[74726]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 23 03:59:34 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'telemetry'
Jan 23 03:59:34 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:34.509+0000 7f3481ce1140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 23 03:59:35 np0005593232 ceph-mgr[74726]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 23 03:59:35 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'test_orchestrator'
Jan 23 03:59:35 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:35.158+0000 7f3481ce1140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 23 03:59:35 np0005593232 ceph-mgr[74726]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 23 03:59:35 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'volumes'
Jan 23 03:59:35 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:35.876+0000 7f3481ce1140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 23 03:59:36 np0005593232 ceph-mgr[74726]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 23 03:59:36 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:36.626+0000 7f3481ce1140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 23 03:59:36 np0005593232 ceph-mgr[74726]: mgr[py] Loading python module 'zabbix'
Jan 23 03:59:36 np0005593232 ceph-mgr[74726]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 23 03:59:36 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T08:59:36.884+0000 7f3481ce1140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 23 03:59:36 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : Active manager daemon compute-0.yntofk restarted
Jan 23 03:59:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 23 03:59:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 03:59:36 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.yntofk
Jan 23 03:59:36 np0005593232 ceph-mgr[74726]: ms_deliver_dispatch: unhandled message 0x55c59091e420 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Jan 23 03:59:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 23 03:59:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 23 03:59:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 23 03:59:36 np0005593232 ceph-mgr[74726]: mgr handle_mgr_map Activating!
Jan 23 03:59:36 np0005593232 ceph-mgr[74726]: mgr handle_mgr_map I am now activating
Jan 23 03:59:36 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 23 03:59:36 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.yntofk(active, starting, since 0.0258668s)
Jan 23 03:59:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 23 03:59:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 23 03:59:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.yntofk", "id": "compute-0.yntofk"} v 0) v1
Jan 23 03:59:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mgr metadata", "who": "compute-0.yntofk", "id": "compute-0.yntofk"}]: dispatch
Jan 23 03:59:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Jan 23 03:59:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 23 03:59:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).mds e1 all = 1
Jan 23 03:59:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 23 03:59:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 23 03:59:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Jan 23 03:59:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 23 03:59:36 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : Manager daemon compute-0.yntofk is now available
Jan 23 03:59:36 np0005593232 ceph-mgr[74726]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:36 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: balancer
Jan 23 03:59:36 np0005593232 ceph-mgr[74726]: [balancer INFO root] Starting
Jan 23 03:59:36 np0005593232 ceph-mgr[74726]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:36 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_08:59:36
Jan 23 03:59:36 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 03:59:36 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 03:59:36 np0005593232 ceph-mgr[74726]: [balancer INFO root] No pools available
Jan 23 03:59:36 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 23 03:59:36 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 23 03:59:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Jan 23 03:59:37 np0005593232 ceph-mon[74423]: Active manager daemon compute-0.yntofk restarted
Jan 23 03:59:37 np0005593232 ceph-mon[74423]: Activating manager daemon compute-0.yntofk
Jan 23 03:59:37 np0005593232 ceph-mon[74423]: Manager daemon compute-0.yntofk is now available
Jan 23 03:59:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Jan 23 03:59:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: cephadm
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: crash
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: devicehealth
Jan 23 03:59:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 23 03:59:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: iostat
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [devicehealth INFO root] Starting
Jan 23 03:59:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 23 03:59:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: nfs
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: orchestrator
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: pg_autoscaler
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: progress
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [progress INFO root] Loading...
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [progress INFO root] No stored events to load
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [progress INFO root] Loaded [] historic events
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [progress INFO root] Loaded OSDMap, ready.
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] recovery thread starting
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] starting setup
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: rbd_support
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: restful
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [restful INFO root] server_addr: :: server_port: 8003
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: status
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.yntofk/mirror_snapshot_schedule"} v 0) v1
Jan 23 03:59:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.yntofk/mirror_snapshot_schedule"}]: dispatch
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: telemetry
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [restful WARNING root] server not running: no certificate configured
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] PerfHandler: starting
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TaskHandler: starting
Jan 23 03:59:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.yntofk/trash_purge_schedule"} v 0) v1
Jan 23 03:59:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.yntofk/trash_purge_schedule"}]: dispatch
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] setup complete
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: mgr load Constructed class from module: volumes
Jan 23 03:59:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019930374 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 03:59:37 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.yntofk(active, since 1.03156s)
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 23 03:59:37 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 23 03:59:37 np0005593232 nifty_carver[75534]: {
Jan 23 03:59:37 np0005593232 nifty_carver[75534]:    "mgrmap_epoch": 7,
Jan 23 03:59:37 np0005593232 nifty_carver[75534]:    "initialized": true
Jan 23 03:59:37 np0005593232 nifty_carver[75534]: }
Jan 23 03:59:37 np0005593232 systemd[1]: libpod-af8b2e383eba50567449aafdcf454ac96d47f8afe925714396322a1ca2e53c9f.scope: Deactivated successfully.
Jan 23 03:59:37 np0005593232 podman[75517]: 2026-01-23 08:59:37.963809004 +0000 UTC m=+21.290568114 container died af8b2e383eba50567449aafdcf454ac96d47f8afe925714396322a1ca2e53c9f (image=quay.io/ceph/ceph:v18, name=nifty_carver, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 03:59:37 np0005593232 systemd[1]: var-lib-containers-storage-overlay-186c0cf85b6e31219361edb1aad08a6a2f291a3f4f418e6d5bfe15d21980a4be-merged.mount: Deactivated successfully.
Jan 23 03:59:38 np0005593232 podman[75517]: 2026-01-23 08:59:38.026342352 +0000 UTC m=+21.353101422 container remove af8b2e383eba50567449aafdcf454ac96d47f8afe925714396322a1ca2e53c9f (image=quay.io/ceph/ceph:v18, name=nifty_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 03:59:38 np0005593232 systemd[1]: libpod-conmon-af8b2e383eba50567449aafdcf454ac96d47f8afe925714396322a1ca2e53c9f.scope: Deactivated successfully.
Jan 23 03:59:38 np0005593232 podman[75696]: 2026-01-23 08:59:38.10303969 +0000 UTC m=+0.044549620 container create 993db46a440c88344b2acd79cdbcf2bfa70a2210fe0be1ccc65a5494cce8fe70 (image=quay.io/ceph/ceph:v18, name=adoring_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 03:59:38 np0005593232 ceph-mon[74423]: Found migration_current of "None". Setting to last migration.
Jan 23 03:59:38 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:38 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:38 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.yntofk/mirror_snapshot_schedule"}]: dispatch
Jan 23 03:59:38 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.yntofk/trash_purge_schedule"}]: dispatch
Jan 23 03:59:38 np0005593232 systemd[1]: Started libpod-conmon-993db46a440c88344b2acd79cdbcf2bfa70a2210fe0be1ccc65a5494cce8fe70.scope.
Jan 23 03:59:38 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60fa138c1962dbb2976ed51cf406244a71592f755aba0d8698ab001de5bacee8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60fa138c1962dbb2976ed51cf406244a71592f755aba0d8698ab001de5bacee8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60fa138c1962dbb2976ed51cf406244a71592f755aba0d8698ab001de5bacee8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:38 np0005593232 podman[75696]: 2026-01-23 08:59:38.081668856 +0000 UTC m=+0.023178826 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:38 np0005593232 podman[75696]: 2026-01-23 08:59:38.183920647 +0000 UTC m=+0.125430577 container init 993db46a440c88344b2acd79cdbcf2bfa70a2210fe0be1ccc65a5494cce8fe70 (image=quay.io/ceph/ceph:v18, name=adoring_kare, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 03:59:38 np0005593232 podman[75696]: 2026-01-23 08:59:38.19003587 +0000 UTC m=+0.131545800 container start 993db46a440c88344b2acd79cdbcf2bfa70a2210fe0be1ccc65a5494cce8fe70 (image=quay.io/ceph/ceph:v18, name=adoring_kare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 03:59:38 np0005593232 podman[75696]: 2026-01-23 08:59:38.192761977 +0000 UTC m=+0.134271927 container attach 993db46a440c88344b2acd79cdbcf2bfa70a2210fe0be1ccc65a5494cce8fe70 (image=quay.io/ceph/ceph:v18, name=adoring_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 03:59:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Jan 23 03:59:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Jan 23 03:59:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:38 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 03:59:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Jan 23 03:59:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 23 03:59:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 23 03:59:38 np0005593232 systemd[1]: libpod-993db46a440c88344b2acd79cdbcf2bfa70a2210fe0be1ccc65a5494cce8fe70.scope: Deactivated successfully.
Jan 23 03:59:38 np0005593232 podman[75696]: 2026-01-23 08:59:38.844879472 +0000 UTC m=+0.786389402 container died 993db46a440c88344b2acd79cdbcf2bfa70a2210fe0be1ccc65a5494cce8fe70 (image=quay.io/ceph/ceph:v18, name=adoring_kare, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 03:59:38 np0005593232 systemd[1]: var-lib-containers-storage-overlay-60fa138c1962dbb2976ed51cf406244a71592f755aba0d8698ab001de5bacee8-merged.mount: Deactivated successfully.
Jan 23 03:59:38 np0005593232 podman[75696]: 2026-01-23 08:59:38.882926228 +0000 UTC m=+0.824436158 container remove 993db46a440c88344b2acd79cdbcf2bfa70a2210fe0be1ccc65a5494cce8fe70 (image=quay.io/ceph/ceph:v18, name=adoring_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 03:59:38 np0005593232 systemd[1]: libpod-conmon-993db46a440c88344b2acd79cdbcf2bfa70a2210fe0be1ccc65a5494cce8fe70.scope: Deactivated successfully.
Jan 23 03:59:38 np0005593232 ceph-mgr[74726]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 03:59:38 np0005593232 podman[75750]: 2026-01-23 08:59:38.936669488 +0000 UTC m=+0.033347544 container create 2905365cf38391366dc6eec37016993df4a60c2462f304038b86eefb6da34970 (image=quay.io/ceph/ceph:v18, name=suspicious_babbage, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 23 03:59:38 np0005593232 systemd[1]: Started libpod-conmon-2905365cf38391366dc6eec37016993df4a60c2462f304038b86eefb6da34970.scope.
Jan 23 03:59:38 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45b690742a26d3625eec522e45f880eb43e7950f9bf055644d5c67941395ce7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45b690742a26d3625eec522e45f880eb43e7950f9bf055644d5c67941395ce7c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45b690742a26d3625eec522e45f880eb43e7950f9bf055644d5c67941395ce7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:39 np0005593232 podman[75750]: 2026-01-23 08:59:39.001935073 +0000 UTC m=+0.098613139 container init 2905365cf38391366dc6eec37016993df4a60c2462f304038b86eefb6da34970 (image=quay.io/ceph/ceph:v18, name=suspicious_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 03:59:39 np0005593232 podman[75750]: 2026-01-23 08:59:39.007055267 +0000 UTC m=+0.103733323 container start 2905365cf38391366dc6eec37016993df4a60c2462f304038b86eefb6da34970 (image=quay.io/ceph/ceph:v18, name=suspicious_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 03:59:39 np0005593232 podman[75750]: 2026-01-23 08:59:39.014903429 +0000 UTC m=+0.111581475 container attach 2905365cf38391366dc6eec37016993df4a60c2462f304038b86eefb6da34970 (image=quay.io/ceph/ceph:v18, name=suspicious_babbage, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 03:59:39 np0005593232 podman[75750]: 2026-01-23 08:59:38.921944921 +0000 UTC m=+0.018622997 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:39 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 03:59:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Jan 23 03:59:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:39 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Set ssh ssh_user
Jan 23 03:59:39 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 23 03:59:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Jan 23 03:59:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:39 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Set ssh ssh_config
Jan 23 03:59:39 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 23 03:59:39 np0005593232 ceph-mgr[74726]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 23 03:59:39 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 23 03:59:39 np0005593232 suspicious_babbage[75766]: ssh user set to ceph-admin. sudo will be used
Jan 23 03:59:39 np0005593232 systemd[1]: libpod-2905365cf38391366dc6eec37016993df4a60c2462f304038b86eefb6da34970.scope: Deactivated successfully.
Jan 23 03:59:39 np0005593232 podman[75750]: 2026-01-23 08:59:39.563953802 +0000 UTC m=+0.660631868 container died 2905365cf38391366dc6eec37016993df4a60c2462f304038b86eefb6da34970 (image=quay.io/ceph/ceph:v18, name=suspicious_babbage, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 03:59:39 np0005593232 systemd[1]: var-lib-containers-storage-overlay-45b690742a26d3625eec522e45f880eb43e7950f9bf055644d5c67941395ce7c-merged.mount: Deactivated successfully.
Jan 23 03:59:39 np0005593232 podman[75750]: 2026-01-23 08:59:39.605067274 +0000 UTC m=+0.701745330 container remove 2905365cf38391366dc6eec37016993df4a60c2462f304038b86eefb6da34970 (image=quay.io/ceph/ceph:v18, name=suspicious_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 03:59:39 np0005593232 systemd[1]: libpod-conmon-2905365cf38391366dc6eec37016993df4a60c2462f304038b86eefb6da34970.scope: Deactivated successfully.
Jan 23 03:59:39 np0005593232 podman[75805]: 2026-01-23 08:59:39.662992842 +0000 UTC m=+0.038068878 container create e6ab585949ba0d67845b13d3deaa542bffaaed1acb66fa0f711272fb62771f09 (image=quay.io/ceph/ceph:v18, name=jovial_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 03:59:39 np0005593232 systemd[1]: Started libpod-conmon-e6ab585949ba0d67845b13d3deaa542bffaaed1acb66fa0f711272fb62771f09.scope.
Jan 23 03:59:39 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:39 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6df657ee7a8a570ed08b1f3dba2b3b86b9340e99d5c795ade23f0dee8968494f/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:39 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6df657ee7a8a570ed08b1f3dba2b3b86b9340e99d5c795ade23f0dee8968494f/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:39 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6df657ee7a8a570ed08b1f3dba2b3b86b9340e99d5c795ade23f0dee8968494f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:39 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6df657ee7a8a570ed08b1f3dba2b3b86b9340e99d5c795ade23f0dee8968494f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:39 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6df657ee7a8a570ed08b1f3dba2b3b86b9340e99d5c795ade23f0dee8968494f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:39 np0005593232 podman[75805]: 2026-01-23 08:59:39.73439038 +0000 UTC m=+0.109466476 container init e6ab585949ba0d67845b13d3deaa542bffaaed1acb66fa0f711272fb62771f09 (image=quay.io/ceph/ceph:v18, name=jovial_zhukovsky, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 03:59:39 np0005593232 podman[75805]: 2026-01-23 08:59:39.740893274 +0000 UTC m=+0.115969300 container start e6ab585949ba0d67845b13d3deaa542bffaaed1acb66fa0f711272fb62771f09 (image=quay.io/ceph/ceph:v18, name=jovial_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 03:59:39 np0005593232 podman[75805]: 2026-01-23 08:59:39.64666867 +0000 UTC m=+0.021744726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:39 np0005593232 podman[75805]: 2026-01-23 08:59:39.746681868 +0000 UTC m=+0.121757954 container attach e6ab585949ba0d67845b13d3deaa542bffaaed1acb66fa0f711272fb62771f09 (image=quay.io/ceph/ceph:v18, name=jovial_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 03:59:39 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.yntofk(active, since 2s)
Jan 23 03:59:40 np0005593232 ceph-mgr[74726]: [cephadm INFO cherrypy.error] [23/Jan/2026:08:59:40] ENGINE Bus STARTING
Jan 23 03:59:40 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : [23/Jan/2026:08:59:40] ENGINE Bus STARTING
Jan 23 03:59:40 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 03:59:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Jan 23 03:59:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:40 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 23 03:59:40 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 23 03:59:40 np0005593232 ceph-mgr[74726]: [cephadm INFO cherrypy.error] [23/Jan/2026:08:59:40] ENGINE Serving on http://192.168.122.100:8765
Jan 23 03:59:40 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Set ssh private key
Jan 23 03:59:40 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : [23/Jan/2026:08:59:40] ENGINE Serving on http://192.168.122.100:8765
Jan 23 03:59:40 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 23 03:59:40 np0005593232 systemd[1]: libpod-e6ab585949ba0d67845b13d3deaa542bffaaed1acb66fa0f711272fb62771f09.scope: Deactivated successfully.
Jan 23 03:59:40 np0005593232 podman[75805]: 2026-01-23 08:59:40.308727668 +0000 UTC m=+0.683803734 container died e6ab585949ba0d67845b13d3deaa542bffaaed1acb66fa0f711272fb62771f09 (image=quay.io/ceph/ceph:v18, name=jovial_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 23 03:59:40 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6df657ee7a8a570ed08b1f3dba2b3b86b9340e99d5c795ade23f0dee8968494f-merged.mount: Deactivated successfully.
Jan 23 03:59:40 np0005593232 podman[75805]: 2026-01-23 08:59:40.366202352 +0000 UTC m=+0.741278428 container remove e6ab585949ba0d67845b13d3deaa542bffaaed1acb66fa0f711272fb62771f09 (image=quay.io/ceph/ceph:v18, name=jovial_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 03:59:40 np0005593232 systemd[1]: libpod-conmon-e6ab585949ba0d67845b13d3deaa542bffaaed1acb66fa0f711272fb62771f09.scope: Deactivated successfully.
Jan 23 03:59:40 np0005593232 ceph-mgr[74726]: [cephadm INFO cherrypy.error] [23/Jan/2026:08:59:40] ENGINE Serving on https://192.168.122.100:7150
Jan 23 03:59:40 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : [23/Jan/2026:08:59:40] ENGINE Serving on https://192.168.122.100:7150
Jan 23 03:59:40 np0005593232 ceph-mgr[74726]: [cephadm INFO cherrypy.error] [23/Jan/2026:08:59:40] ENGINE Bus STARTED
Jan 23 03:59:40 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : [23/Jan/2026:08:59:40] ENGINE Bus STARTED
Jan 23 03:59:40 np0005593232 ceph-mgr[74726]: [cephadm INFO cherrypy.error] [23/Jan/2026:08:59:40] ENGINE Client ('192.168.122.100', 60732) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 23 03:59:40 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : [23/Jan/2026:08:59:40] ENGINE Client ('192.168.122.100', 60732) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 23 03:59:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 23 03:59:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 23 03:59:40 np0005593232 podman[75881]: 2026-01-23 08:59:40.436480229 +0000 UTC m=+0.045111246 container create ee0fc117c7cd77ff45f448cc9f16e85402e9b85f78d9190248865599e04cf17d (image=quay.io/ceph/ceph:v18, name=interesting_mirzakhani, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 03:59:40 np0005593232 systemd[1]: Started libpod-conmon-ee0fc117c7cd77ff45f448cc9f16e85402e9b85f78d9190248865599e04cf17d.scope.
Jan 23 03:59:40 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72139baa341bfe9a706a31facb0c6f09244418e1809701185d2244b4e9e93ba9/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72139baa341bfe9a706a31facb0c6f09244418e1809701185d2244b4e9e93ba9/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72139baa341bfe9a706a31facb0c6f09244418e1809701185d2244b4e9e93ba9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72139baa341bfe9a706a31facb0c6f09244418e1809701185d2244b4e9e93ba9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72139baa341bfe9a706a31facb0c6f09244418e1809701185d2244b4e9e93ba9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:40 np0005593232 podman[75881]: 2026-01-23 08:59:40.510991586 +0000 UTC m=+0.119622643 container init ee0fc117c7cd77ff45f448cc9f16e85402e9b85f78d9190248865599e04cf17d (image=quay.io/ceph/ceph:v18, name=interesting_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 23 03:59:40 np0005593232 podman[75881]: 2026-01-23 08:59:40.418522292 +0000 UTC m=+0.027153339 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:40 np0005593232 podman[75881]: 2026-01-23 08:59:40.517618673 +0000 UTC m=+0.126249680 container start ee0fc117c7cd77ff45f448cc9f16e85402e9b85f78d9190248865599e04cf17d (image=quay.io/ceph/ceph:v18, name=interesting_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 03:59:40 np0005593232 podman[75881]: 2026-01-23 08:59:40.521664638 +0000 UTC m=+0.130295655 container attach ee0fc117c7cd77ff45f448cc9f16e85402e9b85f78d9190248865599e04cf17d (image=quay.io/ceph/ceph:v18, name=interesting_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 03:59:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:40 np0005593232 ceph-mon[74423]: Set ssh ssh_user
Jan 23 03:59:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:40 np0005593232 ceph-mon[74423]: Set ssh ssh_config
Jan 23 03:59:40 np0005593232 ceph-mon[74423]: ssh user set to ceph-admin. sudo will be used
Jan 23 03:59:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:40 np0005593232 ceph-mgr[74726]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 03:59:41 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 03:59:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Jan 23 03:59:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:41 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 23 03:59:41 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 23 03:59:41 np0005593232 systemd[1]: libpod-ee0fc117c7cd77ff45f448cc9f16e85402e9b85f78d9190248865599e04cf17d.scope: Deactivated successfully.
Jan 23 03:59:41 np0005593232 podman[75881]: 2026-01-23 08:59:41.103845117 +0000 UTC m=+0.712476144 container died ee0fc117c7cd77ff45f448cc9f16e85402e9b85f78d9190248865599e04cf17d (image=quay.io/ceph/ceph:v18, name=interesting_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 23 03:59:41 np0005593232 systemd[1]: var-lib-containers-storage-overlay-72139baa341bfe9a706a31facb0c6f09244418e1809701185d2244b4e9e93ba9-merged.mount: Deactivated successfully.
Jan 23 03:59:41 np0005593232 podman[75881]: 2026-01-23 08:59:41.163939076 +0000 UTC m=+0.772570083 container remove ee0fc117c7cd77ff45f448cc9f16e85402e9b85f78d9190248865599e04cf17d (image=quay.io/ceph/ceph:v18, name=interesting_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 03:59:41 np0005593232 systemd[1]: libpod-conmon-ee0fc117c7cd77ff45f448cc9f16e85402e9b85f78d9190248865599e04cf17d.scope: Deactivated successfully.
Jan 23 03:59:41 np0005593232 podman[75935]: 2026-01-23 08:59:41.233305937 +0000 UTC m=+0.039019984 container create 13618946b3e24e2e3ffa9a408eaa2a700459c593ddb24a5e7ce0a8d7578b76b7 (image=quay.io/ceph/ceph:v18, name=nostalgic_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 03:59:41 np0005593232 systemd[1]: Started libpod-conmon-13618946b3e24e2e3ffa9a408eaa2a700459c593ddb24a5e7ce0a8d7578b76b7.scope.
Jan 23 03:59:41 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e194273cb825418b41b6dffb1c33beef2215b43e0286984d158e7ac32dbf8eb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e194273cb825418b41b6dffb1c33beef2215b43e0286984d158e7ac32dbf8eb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e194273cb825418b41b6dffb1c33beef2215b43e0286984d158e7ac32dbf8eb1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:41 np0005593232 podman[75935]: 2026-01-23 08:59:41.217722836 +0000 UTC m=+0.023436903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:41 np0005593232 podman[75935]: 2026-01-23 08:59:41.316367225 +0000 UTC m=+0.122081352 container init 13618946b3e24e2e3ffa9a408eaa2a700459c593ddb24a5e7ce0a8d7578b76b7 (image=quay.io/ceph/ceph:v18, name=nostalgic_engelbart, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 03:59:41 np0005593232 podman[75935]: 2026-01-23 08:59:41.32218745 +0000 UTC m=+0.127901497 container start 13618946b3e24e2e3ffa9a408eaa2a700459c593ddb24a5e7ce0a8d7578b76b7 (image=quay.io/ceph/ceph:v18, name=nostalgic_engelbart, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 03:59:41 np0005593232 podman[75935]: 2026-01-23 08:59:41.325553925 +0000 UTC m=+0.131267992 container attach 13618946b3e24e2e3ffa9a408eaa2a700459c593ddb24a5e7ce0a8d7578b76b7 (image=quay.io/ceph/ceph:v18, name=nostalgic_engelbart, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 03:59:41 np0005593232 ceph-mon[74423]: [23/Jan/2026:08:59:40] ENGINE Bus STARTING
Jan 23 03:59:41 np0005593232 ceph-mon[74423]: Set ssh ssh_identity_key
Jan 23 03:59:41 np0005593232 ceph-mon[74423]: [23/Jan/2026:08:59:40] ENGINE Serving on http://192.168.122.100:8765
Jan 23 03:59:41 np0005593232 ceph-mon[74423]: Set ssh private key
Jan 23 03:59:41 np0005593232 ceph-mon[74423]: [23/Jan/2026:08:59:40] ENGINE Serving on https://192.168.122.100:7150
Jan 23 03:59:41 np0005593232 ceph-mon[74423]: [23/Jan/2026:08:59:40] ENGINE Bus STARTED
Jan 23 03:59:41 np0005593232 ceph-mon[74423]: [23/Jan/2026:08:59:40] ENGINE Client ('192.168.122.100', 60732) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 23 03:59:41 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:41 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 03:59:41 np0005593232 nostalgic_engelbart[75951]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC/d4keOyL2LCWVWs06uljDg4fIWrfNwUv+Wz4/woP5MWiE7sZQKeqZA12Zb2s3Yx+S/e1dF+VtJtpvhASsL5t4KA8hZWyZKk+rcWFlyYR5X3G7HRrxKV55VqBOGMoktQmyb1yEudRJjYQnd2d6tSz6AwlfMs2EfB7BldOpjxOkfod0g+ozYKuMOwVFsgsM4Ps6awUMZ2ceXg/CRyxv92yBHVPvJ1g/4eVKonMAdTS9aHNkpi+rX9HFsJy6AtdL89SuK3p29kFYMycwnLGWQT692klVigKAEAD4zzrzsoOYI0T2Oyskn9fhgfvk0IfKv5if/xm3EGwV3JM58vihTrlXVIInfz8D26/FSwRi1c8AiQikgVwfEQ99LZTCLtRTXqL/P0WmBIFpcInmbg0TZeTR0TCRH5ByLXAcFgzTE6D1l7ZOq2RMW9ff2g6qTlP1MAfS10jKWmbvxP4yDYbi0D6IRBvINFxztoFzu+jE2GX2AI+JVw/eVm065csl+kgQmRM= zuul@controller
Jan 23 03:59:41 np0005593232 systemd[1]: libpod-13618946b3e24e2e3ffa9a408eaa2a700459c593ddb24a5e7ce0a8d7578b76b7.scope: Deactivated successfully.
Jan 23 03:59:41 np0005593232 podman[75935]: 2026-01-23 08:59:41.884214598 +0000 UTC m=+0.689928655 container died 13618946b3e24e2e3ffa9a408eaa2a700459c593ddb24a5e7ce0a8d7578b76b7 (image=quay.io/ceph/ceph:v18, name=nostalgic_engelbart, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 03:59:41 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e194273cb825418b41b6dffb1c33beef2215b43e0286984d158e7ac32dbf8eb1-merged.mount: Deactivated successfully.
Jan 23 03:59:41 np0005593232 podman[75935]: 2026-01-23 08:59:41.946438137 +0000 UTC m=+0.752152184 container remove 13618946b3e24e2e3ffa9a408eaa2a700459c593ddb24a5e7ce0a8d7578b76b7 (image=quay.io/ceph/ceph:v18, name=nostalgic_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 03:59:41 np0005593232 systemd[1]: libpod-conmon-13618946b3e24e2e3ffa9a408eaa2a700459c593ddb24a5e7ce0a8d7578b76b7.scope: Deactivated successfully.
Jan 23 03:59:42 np0005593232 podman[75991]: 2026-01-23 08:59:41.987803757 +0000 UTC m=+0.021077617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:42 np0005593232 podman[75991]: 2026-01-23 08:59:42.086845697 +0000 UTC m=+0.120119537 container create 837e52039dcefe34065ca7ad28e76e7d6d7cfa0d982eb76f307fb09c3c96d62f (image=quay.io/ceph/ceph:v18, name=trusting_maxwell, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 23 03:59:42 np0005593232 systemd[1]: Started libpod-conmon-837e52039dcefe34065ca7ad28e76e7d6d7cfa0d982eb76f307fb09c3c96d62f.scope.
Jan 23 03:59:42 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb84ef29fcb797c1939446ac14b2cf2628b826ea30cc5a6aa2da253efb81ced/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb84ef29fcb797c1939446ac14b2cf2628b826ea30cc5a6aa2da253efb81ced/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb84ef29fcb797c1939446ac14b2cf2628b826ea30cc5a6aa2da253efb81ced/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:42 np0005593232 podman[75991]: 2026-01-23 08:59:42.436316207 +0000 UTC m=+0.469590137 container init 837e52039dcefe34065ca7ad28e76e7d6d7cfa0d982eb76f307fb09c3c96d62f (image=quay.io/ceph/ceph:v18, name=trusting_maxwell, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 03:59:42 np0005593232 podman[75991]: 2026-01-23 08:59:42.444631682 +0000 UTC m=+0.477905522 container start 837e52039dcefe34065ca7ad28e76e7d6d7cfa0d982eb76f307fb09c3c96d62f (image=quay.io/ceph/ceph:v18, name=trusting_maxwell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 23 03:59:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053142 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 03:59:42 np0005593232 podman[75991]: 2026-01-23 08:59:42.864486892 +0000 UTC m=+0.897760752 container attach 837e52039dcefe34065ca7ad28e76e7d6d7cfa0d982eb76f307fb09c3c96d62f (image=quay.io/ceph/ceph:v18, name=trusting_maxwell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 03:59:42 np0005593232 ceph-mgr[74726]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 03:59:42 np0005593232 ceph-mon[74423]: Set ssh ssh_identity_pub
Jan 23 03:59:43 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 03:59:43 np0005593232 systemd[1]: Created slice User Slice of UID 42477.
Jan 23 03:59:43 np0005593232 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 23 03:59:43 np0005593232 systemd-logind[808]: New session 21 of user ceph-admin.
Jan 23 03:59:43 np0005593232 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 23 03:59:43 np0005593232 systemd[1]: Starting User Manager for UID 42477...
Jan 23 03:59:43 np0005593232 systemd[76038]: Queued start job for default target Main User Target.
Jan 23 03:59:43 np0005593232 systemd-logind[808]: New session 23 of user ceph-admin.
Jan 23 03:59:43 np0005593232 systemd[76038]: Created slice User Application Slice.
Jan 23 03:59:43 np0005593232 systemd[76038]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 23 03:59:43 np0005593232 systemd[76038]: Started Daily Cleanup of User's Temporary Directories.
Jan 23 03:59:43 np0005593232 systemd[76038]: Reached target Paths.
Jan 23 03:59:43 np0005593232 systemd[76038]: Reached target Timers.
Jan 23 03:59:43 np0005593232 systemd[76038]: Starting D-Bus User Message Bus Socket...
Jan 23 03:59:43 np0005593232 systemd[76038]: Starting Create User's Volatile Files and Directories...
Jan 23 03:59:43 np0005593232 systemd[76038]: Listening on D-Bus User Message Bus Socket.
Jan 23 03:59:43 np0005593232 systemd[76038]: Reached target Sockets.
Jan 23 03:59:43 np0005593232 systemd[76038]: Finished Create User's Volatile Files and Directories.
Jan 23 03:59:43 np0005593232 systemd[76038]: Reached target Basic System.
Jan 23 03:59:43 np0005593232 systemd[76038]: Reached target Main User Target.
Jan 23 03:59:43 np0005593232 systemd[76038]: Startup finished in 133ms.
Jan 23 03:59:43 np0005593232 systemd[1]: Started User Manager for UID 42477.
Jan 23 03:59:43 np0005593232 systemd[1]: Started Session 21 of User ceph-admin.
Jan 23 03:59:43 np0005593232 systemd[1]: Started Session 23 of User ceph-admin.
Jan 23 03:59:43 np0005593232 systemd-logind[808]: New session 24 of user ceph-admin.
Jan 23 03:59:43 np0005593232 systemd[1]: Started Session 24 of User ceph-admin.
Jan 23 03:59:44 np0005593232 systemd-logind[808]: New session 25 of user ceph-admin.
Jan 23 03:59:44 np0005593232 systemd[1]: Started Session 25 of User ceph-admin.
Jan 23 03:59:44 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 23 03:59:44 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 23 03:59:44 np0005593232 systemd-logind[808]: New session 26 of user ceph-admin.
Jan 23 03:59:44 np0005593232 systemd[1]: Started Session 26 of User ceph-admin.
Jan 23 03:59:44 np0005593232 ceph-mgr[74726]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 03:59:44 np0005593232 ceph-mon[74423]: Deploying cephadm binary to compute-0
Jan 23 03:59:45 np0005593232 systemd-logind[808]: New session 27 of user ceph-admin.
Jan 23 03:59:45 np0005593232 systemd[1]: Started Session 27 of User ceph-admin.
Jan 23 03:59:45 np0005593232 systemd-logind[808]: New session 28 of user ceph-admin.
Jan 23 03:59:45 np0005593232 systemd[1]: Started Session 28 of User ceph-admin.
Jan 23 03:59:45 np0005593232 systemd-logind[808]: New session 29 of user ceph-admin.
Jan 23 03:59:45 np0005593232 systemd[1]: Started Session 29 of User ceph-admin.
Jan 23 03:59:46 np0005593232 systemd-logind[808]: New session 30 of user ceph-admin.
Jan 23 03:59:46 np0005593232 systemd[1]: Started Session 30 of User ceph-admin.
Jan 23 03:59:46 np0005593232 systemd-logind[808]: New session 31 of user ceph-admin.
Jan 23 03:59:46 np0005593232 systemd[1]: Started Session 31 of User ceph-admin.
Jan 23 03:59:46 np0005593232 ceph-mgr[74726]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 03:59:47 np0005593232 systemd-logind[808]: New session 32 of user ceph-admin.
Jan 23 03:59:47 np0005593232 systemd[1]: Started Session 32 of User ceph-admin.
Jan 23 03:59:47 np0005593232 systemd-logind[808]: New session 33 of user ceph-admin.
Jan 23 03:59:47 np0005593232 systemd[1]: Started Session 33 of User ceph-admin.
Jan 23 03:59:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054711 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 03:59:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 23 03:59:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:48 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Added host compute-0
Jan 23 03:59:48 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 23 03:59:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 23 03:59:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 23 03:59:48 np0005593232 trusting_maxwell[76008]: Added host 'compute-0' with addr '192.168.122.100'
Jan 23 03:59:48 np0005593232 systemd[1]: libpod-837e52039dcefe34065ca7ad28e76e7d6d7cfa0d982eb76f307fb09c3c96d62f.scope: Deactivated successfully.
Jan 23 03:59:48 np0005593232 podman[75991]: 2026-01-23 08:59:48.057719602 +0000 UTC m=+6.090993452 container died 837e52039dcefe34065ca7ad28e76e7d6d7cfa0d982eb76f307fb09c3c96d62f (image=quay.io/ceph/ceph:v18, name=trusting_maxwell, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 03:59:48 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6eb84ef29fcb797c1939446ac14b2cf2628b826ea30cc5a6aa2da253efb81ced-merged.mount: Deactivated successfully.
Jan 23 03:59:48 np0005593232 podman[75991]: 2026-01-23 08:59:48.110185665 +0000 UTC m=+6.143459505 container remove 837e52039dcefe34065ca7ad28e76e7d6d7cfa0d982eb76f307fb09c3c96d62f (image=quay.io/ceph/ceph:v18, name=trusting_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 03:59:48 np0005593232 systemd[1]: libpod-conmon-837e52039dcefe34065ca7ad28e76e7d6d7cfa0d982eb76f307fb09c3c96d62f.scope: Deactivated successfully.
Jan 23 03:59:48 np0005593232 podman[76692]: 2026-01-23 08:59:48.175239684 +0000 UTC m=+0.042810701 container create df641171e4cf2e93d1d3a281b7c7095bd6bda528987ff3ce5d925422d475c438 (image=quay.io/ceph/ceph:v18, name=stoic_lewin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 03:59:48 np0005593232 systemd[1]: Started libpod-conmon-df641171e4cf2e93d1d3a281b7c7095bd6bda528987ff3ce5d925422d475c438.scope.
Jan 23 03:59:48 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32330a37a9ed039e4c14271911d326c4c4731526daaf10dfc74015225345599e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32330a37a9ed039e4c14271911d326c4c4731526daaf10dfc74015225345599e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32330a37a9ed039e4c14271911d326c4c4731526daaf10dfc74015225345599e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:48 np0005593232 podman[76692]: 2026-01-23 08:59:48.156215737 +0000 UTC m=+0.023786784 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:48 np0005593232 podman[76692]: 2026-01-23 08:59:48.255398461 +0000 UTC m=+0.122969478 container init df641171e4cf2e93d1d3a281b7c7095bd6bda528987ff3ce5d925422d475c438 (image=quay.io/ceph/ceph:v18, name=stoic_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 03:59:48 np0005593232 podman[76692]: 2026-01-23 08:59:48.262638795 +0000 UTC m=+0.130209812 container start df641171e4cf2e93d1d3a281b7c7095bd6bda528987ff3ce5d925422d475c438 (image=quay.io/ceph/ceph:v18, name=stoic_lewin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 03:59:48 np0005593232 podman[76692]: 2026-01-23 08:59:48.265963289 +0000 UTC m=+0.133534306 container attach df641171e4cf2e93d1d3a281b7c7095bd6bda528987ff3ce5d925422d475c438 (image=quay.io/ceph/ceph:v18, name=stoic_lewin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 03:59:48 np0005593232 podman[76801]: 2026-01-23 08:59:48.562022979 +0000 UTC m=+0.053990387 container create 65d0ce88b1cd1531be9ce21db944e0df8e3f1e40e711c34782cd138bd7787c9e (image=quay.io/ceph/ceph:v18, name=sleepy_clarke, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 03:59:48 np0005593232 systemd[1]: Started libpod-conmon-65d0ce88b1cd1531be9ce21db944e0df8e3f1e40e711c34782cd138bd7787c9e.scope.
Jan 23 03:59:48 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:48 np0005593232 podman[76801]: 2026-01-23 08:59:48.536909299 +0000 UTC m=+0.028876737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:48 np0005593232 podman[76801]: 2026-01-23 08:59:48.635152217 +0000 UTC m=+0.127119645 container init 65d0ce88b1cd1531be9ce21db944e0df8e3f1e40e711c34782cd138bd7787c9e (image=quay.io/ceph/ceph:v18, name=sleepy_clarke, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 03:59:48 np0005593232 podman[76801]: 2026-01-23 08:59:48.640517358 +0000 UTC m=+0.132484766 container start 65d0ce88b1cd1531be9ce21db944e0df8e3f1e40e711c34782cd138bd7787c9e (image=quay.io/ceph/ceph:v18, name=sleepy_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 03:59:48 np0005593232 podman[76801]: 2026-01-23 08:59:48.643940595 +0000 UTC m=+0.135908003 container attach 65d0ce88b1cd1531be9ce21db944e0df8e3f1e40e711c34782cd138bd7787c9e (image=quay.io/ceph/ceph:v18, name=sleepy_clarke, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 03:59:48 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 03:59:48 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 23 03:59:48 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 23 03:59:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 23 03:59:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:48 np0005593232 stoic_lewin[76744]: Scheduled mon update...
Jan 23 03:59:48 np0005593232 systemd[1]: libpod-df641171e4cf2e93d1d3a281b7c7095bd6bda528987ff3ce5d925422d475c438.scope: Deactivated successfully.
Jan 23 03:59:48 np0005593232 podman[76692]: 2026-01-23 08:59:48.845302618 +0000 UTC m=+0.712873635 container died df641171e4cf2e93d1d3a281b7c7095bd6bda528987ff3ce5d925422d475c438 (image=quay.io/ceph/ceph:v18, name=stoic_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 23 03:59:48 np0005593232 systemd[1]: var-lib-containers-storage-overlay-32330a37a9ed039e4c14271911d326c4c4731526daaf10dfc74015225345599e-merged.mount: Deactivated successfully.
Jan 23 03:59:48 np0005593232 podman[76692]: 2026-01-23 08:59:48.885544796 +0000 UTC m=+0.753115813 container remove df641171e4cf2e93d1d3a281b7c7095bd6bda528987ff3ce5d925422d475c438 (image=quay.io/ceph/ceph:v18, name=stoic_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 03:59:48 np0005593232 systemd[1]: libpod-conmon-df641171e4cf2e93d1d3a281b7c7095bd6bda528987ff3ce5d925422d475c438.scope: Deactivated successfully.
Jan 23 03:59:48 np0005593232 ceph-mgr[74726]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 03:59:49 np0005593232 podman[76856]: 2026-01-23 08:59:49.010094856 +0000 UTC m=+0.102124797 container create 2335ddc2ea163f994cc98963f4589af1057f99c0bc88f8dc3dfc892b836c84ce (image=quay.io/ceph/ceph:v18, name=relaxed_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 03:59:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:49 np0005593232 ceph-mon[74423]: Added host compute-0
Jan 23 03:59:49 np0005593232 ceph-mon[74423]: Saving service mon spec with placement count:5
Jan 23 03:59:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:49 np0005593232 systemd[1]: Started libpod-conmon-2335ddc2ea163f994cc98963f4589af1057f99c0bc88f8dc3dfc892b836c84ce.scope.
Jan 23 03:59:49 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7dddae105952bbd4b0cc38ef8796fdb694192c359c7e7545c9e6a7cb52a8f89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7dddae105952bbd4b0cc38ef8796fdb694192c359c7e7545c9e6a7cb52a8f89/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7dddae105952bbd4b0cc38ef8796fdb694192c359c7e7545c9e6a7cb52a8f89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:49 np0005593232 podman[76856]: 2026-01-23 08:59:48.994834595 +0000 UTC m=+0.086864556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:49 np0005593232 podman[76856]: 2026-01-23 08:59:49.098827535 +0000 UTC m=+0.190857486 container init 2335ddc2ea163f994cc98963f4589af1057f99c0bc88f8dc3dfc892b836c84ce (image=quay.io/ceph/ceph:v18, name=relaxed_aryabhata, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 03:59:49 np0005593232 podman[76856]: 2026-01-23 08:59:49.105053511 +0000 UTC m=+0.197083452 container start 2335ddc2ea163f994cc98963f4589af1057f99c0bc88f8dc3dfc892b836c84ce (image=quay.io/ceph/ceph:v18, name=relaxed_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 03:59:49 np0005593232 podman[76856]: 2026-01-23 08:59:49.108276622 +0000 UTC m=+0.200306563 container attach 2335ddc2ea163f994cc98963f4589af1057f99c0bc88f8dc3dfc892b836c84ce (image=quay.io/ceph/ceph:v18, name=relaxed_aryabhata, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 03:59:49 np0005593232 sleepy_clarke[76837]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Jan 23 03:59:49 np0005593232 systemd[1]: libpod-65d0ce88b1cd1531be9ce21db944e0df8e3f1e40e711c34782cd138bd7787c9e.scope: Deactivated successfully.
Jan 23 03:59:49 np0005593232 podman[76801]: 2026-01-23 08:59:49.127064713 +0000 UTC m=+0.619032121 container died 65d0ce88b1cd1531be9ce21db944e0df8e3f1e40e711c34782cd138bd7787c9e (image=quay.io/ceph/ceph:v18, name=sleepy_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 03:59:49 np0005593232 systemd[1]: var-lib-containers-storage-overlay-43161dc7149cc24da84268809c275e49767bec02500af552858818645125c9eb-merged.mount: Deactivated successfully.
Jan 23 03:59:49 np0005593232 podman[76801]: 2026-01-23 08:59:49.166820907 +0000 UTC m=+0.658788315 container remove 65d0ce88b1cd1531be9ce21db944e0df8e3f1e40e711c34782cd138bd7787c9e (image=quay.io/ceph/ceph:v18, name=sleepy_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 03:59:49 np0005593232 systemd[1]: libpod-conmon-65d0ce88b1cd1531be9ce21db944e0df8e3f1e40e711c34782cd138bd7787c9e.scope: Deactivated successfully.
Jan 23 03:59:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Jan 23 03:59:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:49 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 03:59:49 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 23 03:59:49 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 23 03:59:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 23 03:59:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:49 np0005593232 relaxed_aryabhata[76873]: Scheduled mgr update...
Jan 23 03:59:49 np0005593232 podman[76856]: 2026-01-23 08:59:49.742828521 +0000 UTC m=+0.834858462 container died 2335ddc2ea163f994cc98963f4589af1057f99c0bc88f8dc3dfc892b836c84ce (image=quay.io/ceph/ceph:v18, name=relaxed_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 03:59:49 np0005593232 systemd[1]: libpod-2335ddc2ea163f994cc98963f4589af1057f99c0bc88f8dc3dfc892b836c84ce.scope: Deactivated successfully.
Jan 23 03:59:49 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d7dddae105952bbd4b0cc38ef8796fdb694192c359c7e7545c9e6a7cb52a8f89-merged.mount: Deactivated successfully.
Jan 23 03:59:49 np0005593232 podman[76856]: 2026-01-23 08:59:49.784010596 +0000 UTC m=+0.876040537 container remove 2335ddc2ea163f994cc98963f4589af1057f99c0bc88f8dc3dfc892b836c84ce (image=quay.io/ceph/ceph:v18, name=relaxed_aryabhata, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 03:59:49 np0005593232 systemd[1]: libpod-conmon-2335ddc2ea163f994cc98963f4589af1057f99c0bc88f8dc3dfc892b836c84ce.scope: Deactivated successfully.
Jan 23 03:59:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 03:59:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:49 np0005593232 podman[77045]: 2026-01-23 08:59:49.843274171 +0000 UTC m=+0.041594847 container create 72bbcb09f7428cbd747d3065e71cb80391c7d4b9292fecb82b0ec9c330022771 (image=quay.io/ceph/ceph:v18, name=blissful_beaver, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 03:59:49 np0005593232 systemd[1]: Started libpod-conmon-72bbcb09f7428cbd747d3065e71cb80391c7d4b9292fecb82b0ec9c330022771.scope.
Jan 23 03:59:49 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17486264faf42c9cc63802cf80d16c5697bcc0b34a9e624ec36d610eee2d0523/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17486264faf42c9cc63802cf80d16c5697bcc0b34a9e624ec36d610eee2d0523/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17486264faf42c9cc63802cf80d16c5697bcc0b34a9e624ec36d610eee2d0523/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:49 np0005593232 podman[77045]: 2026-01-23 08:59:49.914515716 +0000 UTC m=+0.112836652 container init 72bbcb09f7428cbd747d3065e71cb80391c7d4b9292fecb82b0ec9c330022771 (image=quay.io/ceph/ceph:v18, name=blissful_beaver, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 03:59:49 np0005593232 podman[77045]: 2026-01-23 08:59:49.920091763 +0000 UTC m=+0.118412439 container start 72bbcb09f7428cbd747d3065e71cb80391c7d4b9292fecb82b0ec9c330022771 (image=quay.io/ceph/ceph:v18, name=blissful_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 03:59:49 np0005593232 podman[77045]: 2026-01-23 08:59:49.824793289 +0000 UTC m=+0.023113985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:49 np0005593232 podman[77045]: 2026-01-23 08:59:49.923732816 +0000 UTC m=+0.122053512 container attach 72bbcb09f7428cbd747d3065e71cb80391c7d4b9292fecb82b0ec9c330022771 (image=quay.io/ceph/ceph:v18, name=blissful_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 03:59:50 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:50 np0005593232 ceph-mon[74423]: Saving service mgr spec with placement count:2
Jan 23 03:59:50 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:50 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:50 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 03:59:50 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Saving service crash spec with placement *
Jan 23 03:59:50 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 23 03:59:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 23 03:59:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:50 np0005593232 blissful_beaver[77084]: Scheduled crash update...
Jan 23 03:59:50 np0005593232 podman[77045]: 2026-01-23 08:59:50.534360919 +0000 UTC m=+0.732681595 container died 72bbcb09f7428cbd747d3065e71cb80391c7d4b9292fecb82b0ec9c330022771 (image=quay.io/ceph/ceph:v18, name=blissful_beaver, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 03:59:50 np0005593232 systemd[1]: libpod-72bbcb09f7428cbd747d3065e71cb80391c7d4b9292fecb82b0ec9c330022771.scope: Deactivated successfully.
Jan 23 03:59:50 np0005593232 systemd[1]: var-lib-containers-storage-overlay-17486264faf42c9cc63802cf80d16c5697bcc0b34a9e624ec36d610eee2d0523-merged.mount: Deactivated successfully.
Jan 23 03:59:50 np0005593232 podman[77045]: 2026-01-23 08:59:50.579090304 +0000 UTC m=+0.777411000 container remove 72bbcb09f7428cbd747d3065e71cb80391c7d4b9292fecb82b0ec9c330022771 (image=quay.io/ceph/ceph:v18, name=blissful_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 03:59:50 np0005593232 podman[77256]: 2026-01-23 08:59:50.583367735 +0000 UTC m=+0.077694668 container exec 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 03:59:50 np0005593232 systemd[1]: libpod-conmon-72bbcb09f7428cbd747d3065e71cb80391c7d4b9292fecb82b0ec9c330022771.scope: Deactivated successfully.
Jan 23 03:59:50 np0005593232 podman[77290]: 2026-01-23 08:59:50.655943847 +0000 UTC m=+0.053198565 container create 13ee1c96548b3788f216c472283a122ce26ee72afc4c310853543bbab15862ca (image=quay.io/ceph/ceph:v18, name=eloquent_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 03:59:50 np0005593232 systemd[1]: Started libpod-conmon-13ee1c96548b3788f216c472283a122ce26ee72afc4c310853543bbab15862ca.scope.
Jan 23 03:59:50 np0005593232 podman[77290]: 2026-01-23 08:59:50.631624879 +0000 UTC m=+0.028879647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:50 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d0fd5724c7f29dfda3afc7aa67de1575d1810a49a089b0733d1979aa0cf9d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d0fd5724c7f29dfda3afc7aa67de1575d1810a49a089b0733d1979aa0cf9d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d0fd5724c7f29dfda3afc7aa67de1575d1810a49a089b0733d1979aa0cf9d0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:50 np0005593232 podman[77290]: 2026-01-23 08:59:50.885078605 +0000 UTC m=+0.282333343 container init 13ee1c96548b3788f216c472283a122ce26ee72afc4c310853543bbab15862ca (image=quay.io/ceph/ceph:v18, name=eloquent_wright, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 03:59:50 np0005593232 podman[77290]: 2026-01-23 08:59:50.895822889 +0000 UTC m=+0.293077607 container start 13ee1c96548b3788f216c472283a122ce26ee72afc4c310853543bbab15862ca (image=quay.io/ceph/ceph:v18, name=eloquent_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 03:59:50 np0005593232 ceph-mgr[74726]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 03:59:51 np0005593232 podman[77290]: 2026-01-23 08:59:51.063083667 +0000 UTC m=+0.460338415 container attach 13ee1c96548b3788f216c472283a122ce26ee72afc4c310853543bbab15862ca (image=quay.io/ceph/ceph:v18, name=eloquent_wright, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 03:59:51 np0005593232 podman[77256]: 2026-01-23 08:59:51.090642516 +0000 UTC m=+0.584969469 container exec_died 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 03:59:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Jan 23 03:59:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/552874767' entity='client.admin' 
Jan 23 03:59:51 np0005593232 ceph-mon[74423]: Saving service crash spec with placement *
Jan 23 03:59:51 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:51 np0005593232 systemd[1]: libpod-13ee1c96548b3788f216c472283a122ce26ee72afc4c310853543bbab15862ca.scope: Deactivated successfully.
Jan 23 03:59:51 np0005593232 podman[77290]: 2026-01-23 08:59:51.865591065 +0000 UTC m=+1.262845783 container died 13ee1c96548b3788f216c472283a122ce26ee72afc4c310853543bbab15862ca (image=quay.io/ceph/ceph:v18, name=eloquent_wright, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 03:59:51 np0005593232 systemd[1]: var-lib-containers-storage-overlay-12d0fd5724c7f29dfda3afc7aa67de1575d1810a49a089b0733d1979aa0cf9d0-merged.mount: Deactivated successfully.
Jan 23 03:59:51 np0005593232 podman[77290]: 2026-01-23 08:59:51.910550817 +0000 UTC m=+1.307805535 container remove 13ee1c96548b3788f216c472283a122ce26ee72afc4c310853543bbab15862ca (image=quay.io/ceph/ceph:v18, name=eloquent_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 03:59:51 np0005593232 systemd[1]: libpod-conmon-13ee1c96548b3788f216c472283a122ce26ee72afc4c310853543bbab15862ca.scope: Deactivated successfully.
Jan 23 03:59:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 03:59:51 np0005593232 podman[77377]: 2026-01-23 08:59:51.974279008 +0000 UTC m=+0.043866231 container create 345163bb226184568fc65f5d85257e958fbb0c53f1c215660d393977f6aa43ce (image=quay.io/ceph/ceph:v18, name=festive_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Jan 23 03:59:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:52 np0005593232 systemd[1]: Started libpod-conmon-345163bb226184568fc65f5d85257e958fbb0c53f1c215660d393977f6aa43ce.scope.
Jan 23 03:59:52 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77b31a96f5fa0a9e45823410574775bc5c3aff79d400296bf8b11a8812760bed/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77b31a96f5fa0a9e45823410574775bc5c3aff79d400296bf8b11a8812760bed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77b31a96f5fa0a9e45823410574775bc5c3aff79d400296bf8b11a8812760bed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:52 np0005593232 podman[77377]: 2026-01-23 08:59:51.95453074 +0000 UTC m=+0.024117983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:52 np0005593232 podman[77377]: 2026-01-23 08:59:52.059365114 +0000 UTC m=+0.128952367 container init 345163bb226184568fc65f5d85257e958fbb0c53f1c215660d393977f6aa43ce (image=quay.io/ceph/ceph:v18, name=festive_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 03:59:52 np0005593232 podman[77377]: 2026-01-23 08:59:52.065358913 +0000 UTC m=+0.134946136 container start 345163bb226184568fc65f5d85257e958fbb0c53f1c215660d393977f6aa43ce (image=quay.io/ceph/ceph:v18, name=festive_gauss, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 23 03:59:52 np0005593232 podman[77377]: 2026-01-23 08:59:52.068723818 +0000 UTC m=+0.138311041 container attach 345163bb226184568fc65f5d85257e958fbb0c53f1c215660d393977f6aa43ce (image=quay.io/ceph/ceph:v18, name=festive_gauss, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Jan 23 03:59:52 np0005593232 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77530 (sysctl)
Jan 23 03:59:52 np0005593232 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 23 03:59:52 np0005593232 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 23 03:59:52 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 03:59:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Jan 23 03:59:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:52 np0005593232 systemd[1]: libpod-345163bb226184568fc65f5d85257e958fbb0c53f1c215660d393977f6aa43ce.scope: Deactivated successfully.
Jan 23 03:59:52 np0005593232 podman[77377]: 2026-01-23 08:59:52.632422784 +0000 UTC m=+0.702010007 container died 345163bb226184568fc65f5d85257e958fbb0c53f1c215660d393977f6aa43ce (image=quay.io/ceph/ceph:v18, name=festive_gauss, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 03:59:52 np0005593232 systemd[1]: var-lib-containers-storage-overlay-77b31a96f5fa0a9e45823410574775bc5c3aff79d400296bf8b11a8812760bed-merged.mount: Deactivated successfully.
Jan 23 03:59:52 np0005593232 podman[77377]: 2026-01-23 08:59:52.687073199 +0000 UTC m=+0.756660422 container remove 345163bb226184568fc65f5d85257e958fbb0c53f1c215660d393977f6aa43ce (image=quay.io/ceph/ceph:v18, name=festive_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 03:59:52 np0005593232 systemd[1]: libpod-conmon-345163bb226184568fc65f5d85257e958fbb0c53f1c215660d393977f6aa43ce.scope: Deactivated successfully.
Jan 23 03:59:52 np0005593232 podman[77554]: 2026-01-23 08:59:52.752206971 +0000 UTC m=+0.042819812 container create 08f93a38224da16773897eeea33c3cfd02d853d6db151ab68d31c0445fe5243a (image=quay.io/ceph/ceph:v18, name=awesome_satoshi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 23 03:59:52 np0005593232 systemd[1]: Started libpod-conmon-08f93a38224da16773897eeea33c3cfd02d853d6db151ab68d31c0445fe5243a.scope.
Jan 23 03:59:52 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2bbbad084efcbea1ddd35e69d8de022c04f56c812e275ca17844a3490cd3333/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2bbbad084efcbea1ddd35e69d8de022c04f56c812e275ca17844a3490cd3333/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2bbbad084efcbea1ddd35e69d8de022c04f56c812e275ca17844a3490cd3333/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:52 np0005593232 podman[77554]: 2026-01-23 08:59:52.734967773 +0000 UTC m=+0.025580644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:52 np0005593232 podman[77554]: 2026-01-23 08:59:52.838083568 +0000 UTC m=+0.128696429 container init 08f93a38224da16773897eeea33c3cfd02d853d6db151ab68d31c0445fe5243a (image=quay.io/ceph/ceph:v18, name=awesome_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 03:59:52 np0005593232 podman[77554]: 2026-01-23 08:59:52.844272073 +0000 UTC m=+0.134884914 container start 08f93a38224da16773897eeea33c3cfd02d853d6db151ab68d31c0445fe5243a (image=quay.io/ceph/ceph:v18, name=awesome_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 03:59:52 np0005593232 podman[77554]: 2026-01-23 08:59:52.847483984 +0000 UTC m=+0.138096835 container attach 08f93a38224da16773897eeea33c3cfd02d853d6db151ab68d31c0445fe5243a (image=quay.io/ceph/ceph:v18, name=awesome_satoshi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 03:59:52 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/552874767' entity='client.admin' 
Jan 23 03:59:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 03:59:52 np0005593232 ceph-mgr[74726]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 03:59:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 03:59:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:53 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 03:59:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 23 03:59:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:53 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Added label _admin to host compute-0
Jan 23 03:59:53 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 23 03:59:53 np0005593232 awesome_satoshi[77580]: Added label _admin to host compute-0
Jan 23 03:59:53 np0005593232 systemd[1]: libpod-08f93a38224da16773897eeea33c3cfd02d853d6db151ab68d31c0445fe5243a.scope: Deactivated successfully.
Jan 23 03:59:53 np0005593232 podman[77554]: 2026-01-23 08:59:53.462285876 +0000 UTC m=+0.752898717 container died 08f93a38224da16773897eeea33c3cfd02d853d6db151ab68d31c0445fe5243a (image=quay.io/ceph/ceph:v18, name=awesome_satoshi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 03:59:53 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c2bbbad084efcbea1ddd35e69d8de022c04f56c812e275ca17844a3490cd3333-merged.mount: Deactivated successfully.
Jan 23 03:59:53 np0005593232 podman[77554]: 2026-01-23 08:59:53.501767722 +0000 UTC m=+0.792380563 container remove 08f93a38224da16773897eeea33c3cfd02d853d6db151ab68d31c0445fe5243a (image=quay.io/ceph/ceph:v18, name=awesome_satoshi, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 03:59:53 np0005593232 systemd[1]: libpod-conmon-08f93a38224da16773897eeea33c3cfd02d853d6db151ab68d31c0445fe5243a.scope: Deactivated successfully.
Jan 23 03:59:53 np0005593232 podman[77835]: 2026-01-23 08:59:53.569548068 +0000 UTC m=+0.045538078 container create 39890fec0b4514bedf301cf66627fce4d21db907f010c1e5803408108f2dfc0f (image=quay.io/ceph/ceph:v18, name=gracious_leakey, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 23 03:59:53 np0005593232 systemd[1]: Started libpod-conmon-39890fec0b4514bedf301cf66627fce4d21db907f010c1e5803408108f2dfc0f.scope.
Jan 23 03:59:53 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740ddcd60d4a91ac0e45d6bd15f7454ee64225d602b21423927e14a170b43d60/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740ddcd60d4a91ac0e45d6bd15f7454ee64225d602b21423927e14a170b43d60/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/740ddcd60d4a91ac0e45d6bd15f7454ee64225d602b21423927e14a170b43d60/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:53 np0005593232 podman[77835]: 2026-01-23 08:59:53.551449816 +0000 UTC m=+0.027439846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:53 np0005593232 podman[77835]: 2026-01-23 08:59:53.655966501 +0000 UTC m=+0.131956531 container init 39890fec0b4514bedf301cf66627fce4d21db907f010c1e5803408108f2dfc0f (image=quay.io/ceph/ceph:v18, name=gracious_leakey, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 03:59:53 np0005593232 podman[77835]: 2026-01-23 08:59:53.663095013 +0000 UTC m=+0.139085013 container start 39890fec0b4514bedf301cf66627fce4d21db907f010c1e5803408108f2dfc0f (image=quay.io/ceph/ceph:v18, name=gracious_leakey, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 03:59:53 np0005593232 podman[77835]: 2026-01-23 08:59:53.667286561 +0000 UTC m=+0.143276571 container attach 39890fec0b4514bedf301cf66627fce4d21db907f010c1e5803408108f2dfc0f (image=quay.io/ceph/ceph:v18, name=gracious_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 03:59:53 np0005593232 podman[77895]: 2026-01-23 08:59:53.845268653 +0000 UTC m=+0.050947471 container create 625c788a706aee8a0fc9751a8eaee97d4d84959e9bc09113904389e09b77066c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 23 03:59:53 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:53 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:53 np0005593232 systemd[1]: Started libpod-conmon-625c788a706aee8a0fc9751a8eaee97d4d84959e9bc09113904389e09b77066c.scope.
Jan 23 03:59:53 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:53 np0005593232 podman[77895]: 2026-01-23 08:59:53.820483203 +0000 UTC m=+0.026162031 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 03:59:53 np0005593232 podman[77895]: 2026-01-23 08:59:53.929040962 +0000 UTC m=+0.134719780 container init 625c788a706aee8a0fc9751a8eaee97d4d84959e9bc09113904389e09b77066c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 23 03:59:53 np0005593232 podman[77895]: 2026-01-23 08:59:53.935460343 +0000 UTC m=+0.141139151 container start 625c788a706aee8a0fc9751a8eaee97d4d84959e9bc09113904389e09b77066c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 03:59:53 np0005593232 podman[77895]: 2026-01-23 08:59:53.938949642 +0000 UTC m=+0.144628450 container attach 625c788a706aee8a0fc9751a8eaee97d4d84959e9bc09113904389e09b77066c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 03:59:53 np0005593232 elegant_mcnulty[77911]: 167 167
Jan 23 03:59:53 np0005593232 systemd[1]: libpod-625c788a706aee8a0fc9751a8eaee97d4d84959e9bc09113904389e09b77066c.scope: Deactivated successfully.
Jan 23 03:59:53 np0005593232 podman[77895]: 2026-01-23 08:59:53.940966159 +0000 UTC m=+0.146644967 container died 625c788a706aee8a0fc9751a8eaee97d4d84959e9bc09113904389e09b77066c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 03:59:53 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ca41fa756dac0399a294f5777c69fcd63a9ee31befcec1d41bd5f12136dc0fb6-merged.mount: Deactivated successfully.
Jan 23 03:59:53 np0005593232 podman[77895]: 2026-01-23 08:59:53.976504613 +0000 UTC m=+0.182183421 container remove 625c788a706aee8a0fc9751a8eaee97d4d84959e9bc09113904389e09b77066c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mcnulty, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 03:59:53 np0005593232 systemd[1]: libpod-conmon-625c788a706aee8a0fc9751a8eaee97d4d84959e9bc09113904389e09b77066c.scope: Deactivated successfully.
Jan 23 03:59:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Jan 23 03:59:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4109462229' entity='client.admin' 
Jan 23 03:59:54 np0005593232 systemd[1]: libpod-39890fec0b4514bedf301cf66627fce4d21db907f010c1e5803408108f2dfc0f.scope: Deactivated successfully.
Jan 23 03:59:54 np0005593232 podman[77949]: 2026-01-23 08:59:54.276453954 +0000 UTC m=+0.025669517 container died 39890fec0b4514bedf301cf66627fce4d21db907f010c1e5803408108f2dfc0f (image=quay.io/ceph/ceph:v18, name=gracious_leakey, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 03:59:54 np0005593232 systemd[1]: var-lib-containers-storage-overlay-740ddcd60d4a91ac0e45d6bd15f7454ee64225d602b21423927e14a170b43d60-merged.mount: Deactivated successfully.
Jan 23 03:59:54 np0005593232 podman[77949]: 2026-01-23 08:59:54.314989063 +0000 UTC m=+0.064204596 container remove 39890fec0b4514bedf301cf66627fce4d21db907f010c1e5803408108f2dfc0f (image=quay.io/ceph/ceph:v18, name=gracious_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 03:59:54 np0005593232 systemd[1]: libpod-conmon-39890fec0b4514bedf301cf66627fce4d21db907f010c1e5803408108f2dfc0f.scope: Deactivated successfully.
Jan 23 03:59:54 np0005593232 podman[77964]: 2026-01-23 08:59:54.382023978 +0000 UTC m=+0.043258344 container create c98eae540a1c5c78d11c74aa6bc04f364a5fd975d064cb932a5e3775fa6850da (image=quay.io/ceph/ceph:v18, name=agitated_easley, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 03:59:54 np0005593232 systemd[1]: Started libpod-conmon-c98eae540a1c5c78d11c74aa6bc04f364a5fd975d064cb932a5e3775fa6850da.scope.
Jan 23 03:59:54 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:54 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac8eab3d1c89d7fca6c306bb6ab444f22b9f1d0b4c6ac6002af84f0773065cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:54 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac8eab3d1c89d7fca6c306bb6ab444f22b9f1d0b4c6ac6002af84f0773065cb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:54 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac8eab3d1c89d7fca6c306bb6ab444f22b9f1d0b4c6ac6002af84f0773065cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:54 np0005593232 podman[77964]: 2026-01-23 08:59:54.360065107 +0000 UTC m=+0.021299493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:54 np0005593232 podman[77964]: 2026-01-23 08:59:54.459604542 +0000 UTC m=+0.120838928 container init c98eae540a1c5c78d11c74aa6bc04f364a5fd975d064cb932a5e3775fa6850da (image=quay.io/ceph/ceph:v18, name=agitated_easley, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 03:59:54 np0005593232 podman[77964]: 2026-01-23 08:59:54.46628675 +0000 UTC m=+0.127521116 container start c98eae540a1c5c78d11c74aa6bc04f364a5fd975d064cb932a5e3775fa6850da (image=quay.io/ceph/ceph:v18, name=agitated_easley, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 03:59:54 np0005593232 podman[77964]: 2026-01-23 08:59:54.475248474 +0000 UTC m=+0.136482860 container attach c98eae540a1c5c78d11c74aa6bc04f364a5fd975d064cb932a5e3775fa6850da (image=quay.io/ceph/ceph:v18, name=agitated_easley, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 03:59:54 np0005593232 ceph-mon[74423]: Added label _admin to host compute-0
Jan 23 03:59:54 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/4109462229' entity='client.admin' 
Jan 23 03:59:54 np0005593232 ceph-mgr[74726]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 23 03:59:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Jan 23 03:59:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2410914374' entity='client.admin' 
Jan 23 03:59:55 np0005593232 agitated_easley[77979]: set mgr/dashboard/cluster/status
Jan 23 03:59:55 np0005593232 systemd[1]: libpod-c98eae540a1c5c78d11c74aa6bc04f364a5fd975d064cb932a5e3775fa6850da.scope: Deactivated successfully.
Jan 23 03:59:55 np0005593232 podman[77964]: 2026-01-23 08:59:55.178898657 +0000 UTC m=+0.840133023 container died c98eae540a1c5c78d11c74aa6bc04f364a5fd975d064cb932a5e3775fa6850da (image=quay.io/ceph/ceph:v18, name=agitated_easley, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 03:59:55 np0005593232 systemd[1]: var-lib-containers-storage-overlay-9ac8eab3d1c89d7fca6c306bb6ab444f22b9f1d0b4c6ac6002af84f0773065cb-merged.mount: Deactivated successfully.
Jan 23 03:59:55 np0005593232 podman[77964]: 2026-01-23 08:59:55.225048082 +0000 UTC m=+0.886282448 container remove c98eae540a1c5c78d11c74aa6bc04f364a5fd975d064cb932a5e3775fa6850da (image=quay.io/ceph/ceph:v18, name=agitated_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 03:59:55 np0005593232 systemd[1]: libpod-conmon-c98eae540a1c5c78d11c74aa6bc04f364a5fd975d064cb932a5e3775fa6850da.scope: Deactivated successfully.
Jan 23 03:59:55 np0005593232 podman[78029]: 2026-01-23 08:59:55.394668227 +0000 UTC m=+0.049515851 container create 2754b49855103d66b73ce4e370e359abb21aadd69b24cda541a84d2f660dc7fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wilson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 23 03:59:55 np0005593232 systemd[1]: Started libpod-conmon-2754b49855103d66b73ce4e370e359abb21aadd69b24cda541a84d2f660dc7fd.scope.
Jan 23 03:59:55 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed9dccdd9d8b9f19a5d0382b5a6c8a6d535c77c3090898811c416ed39d05c75/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed9dccdd9d8b9f19a5d0382b5a6c8a6d535c77c3090898811c416ed39d05c75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed9dccdd9d8b9f19a5d0382b5a6c8a6d535c77c3090898811c416ed39d05c75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed9dccdd9d8b9f19a5d0382b5a6c8a6d535c77c3090898811c416ed39d05c75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:55 np0005593232 podman[78029]: 2026-01-23 08:59:55.371217794 +0000 UTC m=+0.026065438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 03:59:55 np0005593232 podman[78029]: 2026-01-23 08:59:55.466363864 +0000 UTC m=+0.121211508 container init 2754b49855103d66b73ce4e370e359abb21aadd69b24cda541a84d2f660dc7fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wilson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 03:59:55 np0005593232 podman[78029]: 2026-01-23 08:59:55.474196986 +0000 UTC m=+0.129044610 container start 2754b49855103d66b73ce4e370e359abb21aadd69b24cda541a84d2f660dc7fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wilson, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 03:59:55 np0005593232 podman[78029]: 2026-01-23 08:59:55.476892172 +0000 UTC m=+0.131739806 container attach 2754b49855103d66b73ce4e370e359abb21aadd69b24cda541a84d2f660dc7fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Jan 23 03:59:55 np0005593232 python3[78076]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:59:55 np0005593232 podman[78077]: 2026-01-23 08:59:55.956049819 +0000 UTC m=+0.070065552 container create 028a499718aa8f3472e0c34658a1eaf1344d8559ccbc02f15410c5133ba8d407 (image=quay.io/ceph/ceph:v18, name=sweet_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 03:59:56 np0005593232 podman[78077]: 2026-01-23 08:59:55.92286275 +0000 UTC m=+0.036878523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:56 np0005593232 systemd[1]: Started libpod-conmon-028a499718aa8f3472e0c34658a1eaf1344d8559ccbc02f15410c5133ba8d407.scope.
Jan 23 03:59:56 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4baf8bf209be682d6e570a4700a9b6c3fda3b31872207957d7b32bb1f6e0779a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4baf8bf209be682d6e570a4700a9b6c3fda3b31872207957d7b32bb1f6e0779a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:56 np0005593232 podman[78077]: 2026-01-23 08:59:56.080127946 +0000 UTC m=+0.194143689 container init 028a499718aa8f3472e0c34658a1eaf1344d8559ccbc02f15410c5133ba8d407 (image=quay.io/ceph/ceph:v18, name=sweet_keldysh, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 03:59:56 np0005593232 podman[78077]: 2026-01-23 08:59:56.091952481 +0000 UTC m=+0.205968204 container start 028a499718aa8f3472e0c34658a1eaf1344d8559ccbc02f15410c5133ba8d407 (image=quay.io/ceph/ceph:v18, name=sweet_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 03:59:56 np0005593232 podman[78077]: 2026-01-23 08:59:56.095331716 +0000 UTC m=+0.209347469 container attach 028a499718aa8f3472e0c34658a1eaf1344d8559ccbc02f15410c5133ba8d407 (image=quay.io/ceph/ceph:v18, name=sweet_keldysh, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 03:59:56 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/2410914374' entity='client.admin' 
Jan 23 03:59:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Jan 23 03:59:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3580714139' entity='client.admin' 
Jan 23 03:59:56 np0005593232 systemd[1]: libpod-028a499718aa8f3472e0c34658a1eaf1344d8559ccbc02f15410c5133ba8d407.scope: Deactivated successfully.
Jan 23 03:59:56 np0005593232 podman[78077]: 2026-01-23 08:59:56.64728193 +0000 UTC m=+0.761297663 container died 028a499718aa8f3472e0c34658a1eaf1344d8559ccbc02f15410c5133ba8d407 (image=quay.io/ceph/ceph:v18, name=sweet_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 03:59:56 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4baf8bf209be682d6e570a4700a9b6c3fda3b31872207957d7b32bb1f6e0779a-merged.mount: Deactivated successfully.
Jan 23 03:59:56 np0005593232 podman[78077]: 2026-01-23 08:59:56.687141207 +0000 UTC m=+0.801156940 container remove 028a499718aa8f3472e0c34658a1eaf1344d8559ccbc02f15410c5133ba8d407 (image=quay.io/ceph/ceph:v18, name=sweet_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 03:59:56 np0005593232 systemd[1]: libpod-conmon-028a499718aa8f3472e0c34658a1eaf1344d8559ccbc02f15410c5133ba8d407.scope: Deactivated successfully.
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]: [
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:    {
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:        "available": false,
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:        "ceph_device": false,
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:        "lsm_data": {},
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:        "lvs": [],
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:        "path": "/dev/sr0",
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:        "rejected_reasons": [
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "Insufficient space (<5GB)",
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "Has a FileSystem"
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:        ],
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:        "sys_api": {
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "actuators": null,
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "device_nodes": "sr0",
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "devname": "sr0",
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "human_readable_size": "482.00 KB",
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "id_bus": "ata",
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "model": "QEMU DVD-ROM",
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "nr_requests": "2",
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "parent": "/dev/sr0",
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "partitions": {},
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "path": "/dev/sr0",
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "removable": "1",
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "rev": "2.5+",
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "ro": "0",
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "rotational": "1",
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "sas_address": "",
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "sas_device_handle": "",
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "scheduler_mode": "mq-deadline",
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "sectors": 0,
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "sectorsize": "2048",
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "size": 493568.0,
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "support_discard": "2048",
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "type": "disk",
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:            "vendor": "QEMU"
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:        }
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]:    }
Jan 23 03:59:56 np0005593232 quizzical_wilson[78046]: ]
Jan 23 03:59:56 np0005593232 systemd[1]: libpod-2754b49855103d66b73ce4e370e359abb21aadd69b24cda541a84d2f660dc7fd.scope: Deactivated successfully.
Jan 23 03:59:56 np0005593232 systemd[1]: libpod-2754b49855103d66b73ce4e370e359abb21aadd69b24cda541a84d2f660dc7fd.scope: Consumed 1.273s CPU time.
Jan 23 03:59:56 np0005593232 podman[78029]: 2026-01-23 08:59:56.764316219 +0000 UTC m=+1.419163843 container died 2754b49855103d66b73ce4e370e359abb21aadd69b24cda541a84d2f660dc7fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wilson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 03:59:56 np0005593232 systemd[1]: var-lib-containers-storage-overlay-3ed9dccdd9d8b9f19a5d0382b5a6c8a6d535c77c3090898811c416ed39d05c75-merged.mount: Deactivated successfully.
Jan 23 03:59:56 np0005593232 podman[78029]: 2026-01-23 08:59:56.81141744 +0000 UTC m=+1.466265064 container remove 2754b49855103d66b73ce4e370e359abb21aadd69b24cda541a84d2f660dc7fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wilson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 03:59:56 np0005593232 systemd[1]: libpod-conmon-2754b49855103d66b73ce4e370e359abb21aadd69b24cda541a84d2f660dc7fd.scope: Deactivated successfully.
Jan 23 03:59:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 03:59:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 03:59:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 03:59:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 03:59:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 23 03:59:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 03:59:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 03:59:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 03:59:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 03:59:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 03:59:56 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 23 03:59:56 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 23 03:59:56 np0005593232 ceph-mgr[74726]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 23 03:59:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 03:59:56 np0005593232 ceph-mon[74423]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 23 03:59:57 np0005593232 ansible-async_wrapper.py[79672]: Invoked with j448339117636 30 /home/zuul/.ansible/tmp/ansible-tmp-1769158797.0471573-37161-105093554726673/AnsiballZ_command.py _
Jan 23 03:59:57 np0005593232 ansible-async_wrapper.py[79745]: Starting module and watcher
Jan 23 03:59:57 np0005593232 ansible-async_wrapper.py[79745]: Start watching 79746 (30)
Jan 23 03:59:57 np0005593232 ansible-async_wrapper.py[79746]: Start module (79746)
Jan 23 03:59:57 np0005593232 ansible-async_wrapper.py[79672]: Return async_wrapper task started.
Jan 23 03:59:57 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/3580714139' entity='client.admin' 
Jan 23 03:59:57 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:57 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:57 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:57 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 03:59:57 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 03:59:57 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 03:59:57 np0005593232 ceph-mon[74423]: Updating compute-0:/etc/ceph/ceph.conf
Jan 23 03:59:57 np0005593232 ceph-mon[74423]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 23 03:59:57 np0005593232 python3[79747]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 03:59:57 np0005593232 podman[79812]: 2026-01-23 08:59:57.813730537 +0000 UTC m=+0.040457555 container create 25c183b3f6c3b1e35e7d6cedac32fbc0fa2a60619ac7124e0aa4dcc267cdd2b9 (image=quay.io/ceph/ceph:v18, name=tender_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 03:59:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 03:59:57 np0005593232 systemd[1]: Started libpod-conmon-25c183b3f6c3b1e35e7d6cedac32fbc0fa2a60619ac7124e0aa4dcc267cdd2b9.scope.
Jan 23 03:59:57 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.conf
Jan 23 03:59:57 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.conf
Jan 23 03:59:57 np0005593232 systemd[1]: Started libcrun container.
Jan 23 03:59:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78408bc0019cc98d7042fc65749d5f3ad04dfcdd7cb6b898f9cad7393c7ac542/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78408bc0019cc98d7042fc65749d5f3ad04dfcdd7cb6b898f9cad7393c7ac542/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 03:59:57 np0005593232 podman[79812]: 2026-01-23 08:59:57.795301196 +0000 UTC m=+0.022028244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 03:59:57 np0005593232 podman[79812]: 2026-01-23 08:59:57.905387158 +0000 UTC m=+0.132114196 container init 25c183b3f6c3b1e35e7d6cedac32fbc0fa2a60619ac7124e0aa4dcc267cdd2b9 (image=quay.io/ceph/ceph:v18, name=tender_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 03:59:57 np0005593232 podman[79812]: 2026-01-23 08:59:57.912719116 +0000 UTC m=+0.139446134 container start 25c183b3f6c3b1e35e7d6cedac32fbc0fa2a60619ac7124e0aa4dcc267cdd2b9 (image=quay.io/ceph/ceph:v18, name=tender_shamir, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 23 03:59:57 np0005593232 podman[79812]: 2026-01-23 08:59:57.915841294 +0000 UTC m=+0.142568342 container attach 25c183b3f6c3b1e35e7d6cedac32fbc0fa2a60619ac7124e0aa4dcc267cdd2b9 (image=quay.io/ceph/ceph:v18, name=tender_shamir, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 03:59:58 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 03:59:58 np0005593232 tender_shamir[79863]: 
Jan 23 03:59:58 np0005593232 tender_shamir[79863]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 23 03:59:58 np0005593232 systemd[1]: libpod-25c183b3f6c3b1e35e7d6cedac32fbc0fa2a60619ac7124e0aa4dcc267cdd2b9.scope: Deactivated successfully.
Jan 23 03:59:58 np0005593232 podman[79812]: 2026-01-23 08:59:58.506924105 +0000 UTC m=+0.733651133 container died 25c183b3f6c3b1e35e7d6cedac32fbc0fa2a60619ac7124e0aa4dcc267cdd2b9 (image=quay.io/ceph/ceph:v18, name=tender_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 03:59:58 np0005593232 systemd[1]: var-lib-containers-storage-overlay-78408bc0019cc98d7042fc65749d5f3ad04dfcdd7cb6b898f9cad7393c7ac542-merged.mount: Deactivated successfully.
Jan 23 03:59:58 np0005593232 podman[79812]: 2026-01-23 08:59:58.625498647 +0000 UTC m=+0.852225665 container remove 25c183b3f6c3b1e35e7d6cedac32fbc0fa2a60619ac7124e0aa4dcc267cdd2b9 (image=quay.io/ceph/ceph:v18, name=tender_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 23 03:59:58 np0005593232 ceph-mon[74423]: Updating compute-0:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.conf
Jan 23 03:59:58 np0005593232 ansible-async_wrapper.py[79746]: Module complete (79746)
Jan 23 03:59:58 np0005593232 systemd[1]: libpod-conmon-25c183b3f6c3b1e35e7d6cedac32fbc0fa2a60619ac7124e0aa4dcc267cdd2b9.scope: Deactivated successfully.
Jan 23 03:59:58 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 23 03:59:58 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 23 03:59:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 03:59:59 np0005593232 python3[80419]: ansible-ansible.legacy.async_status Invoked with jid=j448339117636.79672 mode=status _async_dir=/root/.ansible_async
Jan 23 03:59:59 np0005593232 python3[80591]: ansible-ansible.legacy.async_status Invoked with jid=j448339117636.79672 mode=cleanup _async_dir=/root/.ansible_async
Jan 23 03:59:59 np0005593232 ceph-mon[74423]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 23 03:59:59 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.client.admin.keyring
Jan 23 03:59:59 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.client.admin.keyring
Jan 23 03:59:59 np0005593232 python3[80817]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 23 04:00:00 np0005593232 python3[81046]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:00:00 np0005593232 podman[81116]: 2026-01-23 09:00:00.291006723 +0000 UTC m=+0.051640201 container create 88e84c507921a492845f7536c3a6cd9f139f70f605cde059195114eb834eb5f5 (image=quay.io/ceph/ceph:v18, name=zealous_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 04:00:00 np0005593232 systemd[1]: Started libpod-conmon-88e84c507921a492845f7536c3a6cd9f139f70f605cde059195114eb834eb5f5.scope.
Jan 23 04:00:00 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:00:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5700cb1fcc03f6426ab38c5020c1fcd9020b86ec9e80cf5cfc26b2b300bd620e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5700cb1fcc03f6426ab38c5020c1fcd9020b86ec9e80cf5cfc26b2b300bd620e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5700cb1fcc03f6426ab38c5020c1fcd9020b86ec9e80cf5cfc26b2b300bd620e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:00 np0005593232 podman[81116]: 2026-01-23 09:00:00.263601708 +0000 UTC m=+0.024235206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:00:00 np0005593232 podman[81116]: 2026-01-23 09:00:00.366200619 +0000 UTC m=+0.126834117 container init 88e84c507921a492845f7536c3a6cd9f139f70f605cde059195114eb834eb5f5 (image=quay.io/ceph/ceph:v18, name=zealous_gagarin, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:00:00 np0005593232 podman[81116]: 2026-01-23 09:00:00.372185598 +0000 UTC m=+0.132819076 container start 88e84c507921a492845f7536c3a6cd9f139f70f605cde059195114eb834eb5f5 (image=quay.io/ceph/ceph:v18, name=zealous_gagarin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 04:00:00 np0005593232 podman[81116]: 2026-01-23 09:00:00.374400101 +0000 UTC m=+0.135033599 container attach 88e84c507921a492845f7536c3a6cd9f139f70f605cde059195114eb834eb5f5 (image=quay.io/ceph/ceph:v18, name=zealous_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:00:00 np0005593232 ceph-mon[74423]: Updating compute-0:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.client.admin.keyring
Jan 23 04:00:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:00:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:00:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:00:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:00 np0005593232 ceph-mgr[74726]: [progress INFO root] update: starting ev ae2023ff-abe8-4085-81f8-dbe8bea00401 (Updating crash deployment (+1 -> 1))
Jan 23 04:00:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 23 04:00:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 23 04:00:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 23 04:00:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:00:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:00:00 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 23 04:00:00 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 23 04:00:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:00 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 04:00:00 np0005593232 zealous_gagarin[81162]: 
Jan 23 04:00:00 np0005593232 zealous_gagarin[81162]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 23 04:00:00 np0005593232 systemd[1]: libpod-88e84c507921a492845f7536c3a6cd9f139f70f605cde059195114eb834eb5f5.scope: Deactivated successfully.
Jan 23 04:00:00 np0005593232 podman[81116]: 2026-01-23 09:00:00.952997138 +0000 UTC m=+0.713630666 container died 88e84c507921a492845f7536c3a6cd9f139f70f605cde059195114eb834eb5f5 (image=quay.io/ceph/ceph:v18, name=zealous_gagarin, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 04:00:00 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5700cb1fcc03f6426ab38c5020c1fcd9020b86ec9e80cf5cfc26b2b300bd620e-merged.mount: Deactivated successfully.
Jan 23 04:00:00 np0005593232 podman[81116]: 2026-01-23 09:00:00.993800302 +0000 UTC m=+0.754433780 container remove 88e84c507921a492845f7536c3a6cd9f139f70f605cde059195114eb834eb5f5 (image=quay.io/ceph/ceph:v18, name=zealous_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 04:00:01 np0005593232 systemd[1]: libpod-conmon-88e84c507921a492845f7536c3a6cd9f139f70f605cde059195114eb834eb5f5.scope: Deactivated successfully.
Jan 23 04:00:01 np0005593232 podman[81513]: 2026-01-23 09:00:01.295915013 +0000 UTC m=+0.042399319 container create b066de4b609371b7da23c4ec5c363ff71d39a81679009ab9c93996b7d3a5adef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bell, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 23 04:00:01 np0005593232 systemd[1]: Started libpod-conmon-b066de4b609371b7da23c4ec5c363ff71d39a81679009ab9c93996b7d3a5adef.scope.
Jan 23 04:00:01 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:00:01 np0005593232 podman[81513]: 2026-01-23 09:00:01.356170687 +0000 UTC m=+0.102655033 container init b066de4b609371b7da23c4ec5c363ff71d39a81679009ab9c93996b7d3a5adef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bell, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 04:00:01 np0005593232 podman[81513]: 2026-01-23 09:00:01.364138042 +0000 UTC m=+0.110622348 container start b066de4b609371b7da23c4ec5c363ff71d39a81679009ab9c93996b7d3a5adef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bell, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:00:01 np0005593232 podman[81513]: 2026-01-23 09:00:01.367576009 +0000 UTC m=+0.114060345 container attach b066de4b609371b7da23c4ec5c363ff71d39a81679009ab9c93996b7d3a5adef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bell, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:00:01 np0005593232 goofy_bell[81555]: 167 167
Jan 23 04:00:01 np0005593232 systemd[1]: libpod-b066de4b609371b7da23c4ec5c363ff71d39a81679009ab9c93996b7d3a5adef.scope: Deactivated successfully.
Jan 23 04:00:01 np0005593232 podman[81513]: 2026-01-23 09:00:01.369816513 +0000 UTC m=+0.116300839 container died b066de4b609371b7da23c4ec5c363ff71d39a81679009ab9c93996b7d3a5adef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 04:00:01 np0005593232 podman[81513]: 2026-01-23 09:00:01.278275935 +0000 UTC m=+0.024760271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:00:01 np0005593232 systemd[1]: var-lib-containers-storage-overlay-58de8b7a1668ef6bb46cbb2baab62a54214bf96e8e3b3a82150308d00b6ba063-merged.mount: Deactivated successfully.
Jan 23 04:00:01 np0005593232 podman[81513]: 2026-01-23 09:00:01.402803225 +0000 UTC m=+0.149287531 container remove b066de4b609371b7da23c4ec5c363ff71d39a81679009ab9c93996b7d3a5adef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:00:01 np0005593232 systemd[1]: libpod-conmon-b066de4b609371b7da23c4ec5c363ff71d39a81679009ab9c93996b7d3a5adef.scope: Deactivated successfully.
Jan 23 04:00:01 np0005593232 systemd[1]: Reloading.
Jan 23 04:00:01 np0005593232 python3[81552]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:00:01 np0005593232 podman[81577]: 2026-01-23 09:00:01.510948473 +0000 UTC m=+0.040854816 container create aa98defb8308fc152bd63222c4344c512fc1c3710e9d655ba8472c47497c3a2a (image=quay.io/ceph/ceph:v18, name=brave_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 04:00:01 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:00:01 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:00:01 np0005593232 podman[81577]: 2026-01-23 09:00:01.493122209 +0000 UTC m=+0.023028552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:00:01 np0005593232 systemd[1]: Started libpod-conmon-aa98defb8308fc152bd63222c4344c512fc1c3710e9d655ba8472c47497c3a2a.scope.
Jan 23 04:00:01 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:01 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:01 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:01 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 23 04:00:01 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 23 04:00:01 np0005593232 ceph-mon[74423]: Deploying daemon crash.compute-0 on compute-0
Jan 23 04:00:01 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:00:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/228aaa41a51b3ce0511a9702523409eb394c15ad234ec567f468962ba6d06eff/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/228aaa41a51b3ce0511a9702523409eb394c15ad234ec567f468962ba6d06eff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/228aaa41a51b3ce0511a9702523409eb394c15ad234ec567f468962ba6d06eff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:01 np0005593232 podman[81577]: 2026-01-23 09:00:01.769378506 +0000 UTC m=+0.299284879 container init aa98defb8308fc152bd63222c4344c512fc1c3710e9d655ba8472c47497c3a2a (image=quay.io/ceph/ceph:v18, name=brave_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 23 04:00:01 np0005593232 systemd[1]: Reloading.
Jan 23 04:00:01 np0005593232 podman[81577]: 2026-01-23 09:00:01.777183778 +0000 UTC m=+0.307090121 container start aa98defb8308fc152bd63222c4344c512fc1c3710e9d655ba8472c47497c3a2a (image=quay.io/ceph/ceph:v18, name=brave_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:00:01 np0005593232 podman[81577]: 2026-01-23 09:00:01.780599865 +0000 UTC m=+0.310506208 container attach aa98defb8308fc152bd63222c4344c512fc1c3710e9d655ba8472c47497c3a2a (image=quay.io/ceph/ceph:v18, name=brave_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:00:01 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:00:01 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:00:02 np0005593232 systemd[1]: Starting Ceph crash.compute-0 for e1533653-0a5a-584c-b34b-8689f0d32e77...
Jan 23 04:00:02 np0005593232 podman[81736]: 2026-01-23 09:00:02.240805865 +0000 UTC m=+0.051121915 container create 75130cd954df41dbbbb502048c9c30d9222012c5114a815dc32c9854f0b4683f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 04:00:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22fb78cc3b0faa36ab0eb08195d01314e578895c50a6f4165ffe6e3541587b20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22fb78cc3b0faa36ab0eb08195d01314e578895c50a6f4165ffe6e3541587b20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22fb78cc3b0faa36ab0eb08195d01314e578895c50a6f4165ffe6e3541587b20/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22fb78cc3b0faa36ab0eb08195d01314e578895c50a6f4165ffe6e3541587b20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:02 np0005593232 podman[81736]: 2026-01-23 09:00:02.302820739 +0000 UTC m=+0.113136799 container init 75130cd954df41dbbbb502048c9c30d9222012c5114a815dc32c9854f0b4683f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:00:02 np0005593232 podman[81736]: 2026-01-23 09:00:02.2195326 +0000 UTC m=+0.029848670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:00:02 np0005593232 podman[81736]: 2026-01-23 09:00:02.308511701 +0000 UTC m=+0.118827761 container start 75130cd954df41dbbbb502048c9c30d9222012c5114a815dc32c9854f0b4683f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:00:02 np0005593232 bash[81736]: 75130cd954df41dbbbb502048c9c30d9222012c5114a815dc32c9854f0b4683f
Jan 23 04:00:02 np0005593232 systemd[1]: Started Ceph crash.compute-0 for e1533653-0a5a-584c-b34b-8689f0d32e77.
Jan 23 04:00:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Jan 23 04:00:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4135208595' entity='client.admin' 
Jan 23 04:00:02 np0005593232 systemd[1]: libpod-aa98defb8308fc152bd63222c4344c512fc1c3710e9d655ba8472c47497c3a2a.scope: Deactivated successfully.
Jan 23 04:00:02 np0005593232 podman[81577]: 2026-01-23 09:00:02.381940669 +0000 UTC m=+0.911847012 container died aa98defb8308fc152bd63222c4344c512fc1c3710e9d655ba8472c47497c3a2a (image=quay.io/ceph/ceph:v18, name=brave_banach, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 04:00:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:00:02 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0[81752]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 23 04:00:02 np0005593232 ansible-async_wrapper.py[79745]: Done in kid B.
Jan 23 04:00:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:00:02 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0[81752]: 2026-01-23T09:00:02.730+0000 7f3064e21640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 23 04:00:02 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0[81752]: 2026-01-23T09:00:02.730+0000 7f3064e21640 -1 AuthRegistry(0x7f3060066fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 23 04:00:02 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0[81752]: 2026-01-23T09:00:02.731+0000 7f3064e21640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 23 04:00:02 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0[81752]: 2026-01-23T09:00:02.731+0000 7f3064e21640 -1 AuthRegistry(0x7f3064e20000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 23 04:00:02 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0[81752]: 2026-01-23T09:00:02.732+0000 7f305e575640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 23 04:00:02 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0[81752]: 2026-01-23T09:00:02.732+0000 7f3064e21640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 23 04:00:02 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0[81752]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 23 04:00:02 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0[81752]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 23 04:00:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:00:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 23 04:00:03 np0005593232 systemd[1]: var-lib-containers-storage-overlay-228aaa41a51b3ce0511a9702523409eb394c15ad234ec567f468962ba6d06eff-merged.mount: Deactivated successfully.
Jan 23 04:00:03 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/4135208595' entity='client.admin' 
Jan 23 04:00:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:03 np0005593232 podman[81577]: 2026-01-23 09:00:03.178212327 +0000 UTC m=+1.708118680 container remove aa98defb8308fc152bd63222c4344c512fc1c3710e9d655ba8472c47497c3a2a (image=quay.io/ceph/ceph:v18, name=brave_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:00:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:03 np0005593232 ceph-mgr[74726]: [progress INFO root] complete: finished ev ae2023ff-abe8-4085-81f8-dbe8bea00401 (Updating crash deployment (+1 -> 1))
Jan 23 04:00:03 np0005593232 ceph-mgr[74726]: [progress INFO root] Completed event ae2023ff-abe8-4085-81f8-dbe8bea00401 (Updating crash deployment (+1 -> 1)) in 2 seconds
Jan 23 04:00:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 23 04:00:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:03 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c59fbc65-8bc6-4923-80e7-90f3f0043585 does not exist
Jan 23 04:00:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 23 04:00:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:03 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a4af9402-8e67-42e4-8af2-b540bc5478d3 does not exist
Jan 23 04:00:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 23 04:00:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:03 np0005593232 systemd[1]: libpod-conmon-aa98defb8308fc152bd63222c4344c512fc1c3710e9d655ba8472c47497c3a2a.scope: Deactivated successfully.
Jan 23 04:00:03 np0005593232 python3[81859]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:00:03 np0005593232 podman[81934]: 2026-01-23 09:00:03.527998425 +0000 UTC m=+0.042384807 container create c321b57ea92f4244238d74d95cc5a04147ca57e40e8c51e078ad229daafecb2c (image=quay.io/ceph/ceph:v18, name=condescending_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:00:03 np0005593232 systemd[1]: Started libpod-conmon-c321b57ea92f4244238d74d95cc5a04147ca57e40e8c51e078ad229daafecb2c.scope.
Jan 23 04:00:03 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:00:03 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3599304e72090793d19b679e4bae31a9750fc5b4595b3eac8cf4bf6313ea87fa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:03 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3599304e72090793d19b679e4bae31a9750fc5b4595b3eac8cf4bf6313ea87fa/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:03 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3599304e72090793d19b679e4bae31a9750fc5b4595b3eac8cf4bf6313ea87fa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:03 np0005593232 podman[81934]: 2026-01-23 09:00:03.51060251 +0000 UTC m=+0.024988912 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:00:03 np0005593232 podman[81934]: 2026-01-23 09:00:03.611983843 +0000 UTC m=+0.126370235 container init c321b57ea92f4244238d74d95cc5a04147ca57e40e8c51e078ad229daafecb2c (image=quay.io/ceph/ceph:v18, name=condescending_joliot, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 04:00:03 np0005593232 podman[81934]: 2026-01-23 09:00:03.61820849 +0000 UTC m=+0.132594872 container start c321b57ea92f4244238d74d95cc5a04147ca57e40e8c51e078ad229daafecb2c (image=quay.io/ceph/ceph:v18, name=condescending_joliot, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 04:00:03 np0005593232 podman[81934]: 2026-01-23 09:00:03.62136184 +0000 UTC m=+0.135748242 container attach c321b57ea92f4244238d74d95cc5a04147ca57e40e8c51e078ad229daafecb2c (image=quay.io/ceph/ceph:v18, name=condescending_joliot, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:00:03 np0005593232 podman[82052]: 2026-01-23 09:00:03.999113774 +0000 UTC m=+0.052781032 container exec 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 04:00:04 np0005593232 podman[82052]: 2026-01-23 09:00:04.093117728 +0000 UTC m=+0.146784976 container exec_died 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4005357121' entity='client.admin' 
Jan 23 04:00:04 np0005593232 systemd[1]: libpod-c321b57ea92f4244238d74d95cc5a04147ca57e40e8c51e078ad229daafecb2c.scope: Deactivated successfully.
Jan 23 04:00:04 np0005593232 podman[81934]: 2026-01-23 09:00:04.248781155 +0000 UTC m=+0.763167537 container died c321b57ea92f4244238d74d95cc5a04147ca57e40e8c51e078ad229daafecb2c (image=quay.io/ceph/ceph:v18, name=condescending_joliot, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 23 04:00:04 np0005593232 systemd[1]: var-lib-containers-storage-overlay-3599304e72090793d19b679e4bae31a9750fc5b4595b3eac8cf4bf6313ea87fa-merged.mount: Deactivated successfully.
Jan 23 04:00:04 np0005593232 podman[81934]: 2026-01-23 09:00:04.293193528 +0000 UTC m=+0.807579910 container remove c321b57ea92f4244238d74d95cc5a04147ca57e40e8c51e078ad229daafecb2c (image=quay.io/ceph/ceph:v18, name=condescending_joliot, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 04:00:04 np0005593232 systemd[1]: libpod-conmon-c321b57ea92f4244238d74d95cc5a04147ca57e40e8c51e078ad229daafecb2c.scope: Deactivated successfully.
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:04 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c45cbed4-0e07-48f9-a664-80396d9ffd47 does not exist
Jan 23 04:00:04 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7eab216e-b29c-43a9-9088-f0886c2406bc does not exist
Jan 23 04:00:04 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 8c004839-2280-453f-8f77-2875e2116bab does not exist
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:04 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 23 04:00:04 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:00:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:00:04 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 23 04:00:04 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 23 04:00:04 np0005593232 python3[82228]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:00:04 np0005593232 podman[82284]: 2026-01-23 09:00:04.703281292 +0000 UTC m=+0.048189981 container create ccd02d614c7cf92ca2ec1f813e4036c7fb789e152e505aa7d6a74e0224276644 (image=quay.io/ceph/ceph:v18, name=strange_johnson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:00:04 np0005593232 podman[82284]: 2026-01-23 09:00:04.68245434 +0000 UTC m=+0.027363039 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:00:04 np0005593232 systemd[1]: Started libpod-conmon-ccd02d614c7cf92ca2ec1f813e4036c7fb789e152e505aa7d6a74e0224276644.scope.
Jan 23 04:00:04 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:00:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c0e4dc73095e14c75b582de06a86fdf1f59bc4d9ee3886abe3a06571472775e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c0e4dc73095e14c75b582de06a86fdf1f59bc4d9ee3886abe3a06571472775e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c0e4dc73095e14c75b582de06a86fdf1f59bc4d9ee3886abe3a06571472775e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:04 np0005593232 podman[82284]: 2026-01-23 09:00:04.850040056 +0000 UTC m=+0.194948755 container init ccd02d614c7cf92ca2ec1f813e4036c7fb789e152e505aa7d6a74e0224276644 (image=quay.io/ceph/ceph:v18, name=strange_johnson, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 04:00:04 np0005593232 podman[82284]: 2026-01-23 09:00:04.855912764 +0000 UTC m=+0.200821443 container start ccd02d614c7cf92ca2ec1f813e4036c7fb789e152e505aa7d6a74e0224276644 (image=quay.io/ceph/ceph:v18, name=strange_johnson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 04:00:04 np0005593232 podman[82284]: 2026-01-23 09:00:04.858796656 +0000 UTC m=+0.203705355 container attach ccd02d614c7cf92ca2ec1f813e4036c7fb789e152e505aa7d6a74e0224276644 (image=quay.io/ceph/ceph:v18, name=strange_johnson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:00:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:04 np0005593232 podman[82365]: 2026-01-23 09:00:04.991330415 +0000 UTC m=+0.035129370 container create 18702ff038503fb4658068f70073203b76cc8ac04758e39bbd20e90d9f9ff27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bhaskara, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 04:00:05 np0005593232 systemd[1]: Started libpod-conmon-18702ff038503fb4658068f70073203b76cc8ac04758e39bbd20e90d9f9ff27c.scope.
Jan 23 04:00:05 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:00:05 np0005593232 podman[82365]: 2026-01-23 09:00:05.060982456 +0000 UTC m=+0.104781431 container init 18702ff038503fb4658068f70073203b76cc8ac04758e39bbd20e90d9f9ff27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bhaskara, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:00:05 np0005593232 podman[82365]: 2026-01-23 09:00:05.066434651 +0000 UTC m=+0.110233606 container start 18702ff038503fb4658068f70073203b76cc8ac04758e39bbd20e90d9f9ff27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:00:05 np0005593232 podman[82365]: 2026-01-23 09:00:05.069977322 +0000 UTC m=+0.113776277 container attach 18702ff038503fb4658068f70073203b76cc8ac04758e39bbd20e90d9f9ff27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bhaskara, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:00:05 np0005593232 beautiful_bhaskara[82382]: 167 167
Jan 23 04:00:05 np0005593232 systemd[1]: libpod-18702ff038503fb4658068f70073203b76cc8ac04758e39bbd20e90d9f9ff27c.scope: Deactivated successfully.
Jan 23 04:00:05 np0005593232 podman[82365]: 2026-01-23 09:00:05.071187276 +0000 UTC m=+0.114986231 container died 18702ff038503fb4658068f70073203b76cc8ac04758e39bbd20e90d9f9ff27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bhaskara, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:00:05 np0005593232 podman[82365]: 2026-01-23 09:00:04.975195816 +0000 UTC m=+0.018994781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:00:05 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b8a5237fc8ee6b428d0c57471a5d52889320ce26c88d6e2cfd2217021e715ec7-merged.mount: Deactivated successfully.
Jan 23 04:00:05 np0005593232 podman[82365]: 2026-01-23 09:00:05.107346825 +0000 UTC m=+0.151145780 container remove 18702ff038503fb4658068f70073203b76cc8ac04758e39bbd20e90d9f9ff27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:00:05 np0005593232 systemd[1]: libpod-conmon-18702ff038503fb4658068f70073203b76cc8ac04758e39bbd20e90d9f9ff27c.scope: Deactivated successfully.
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:05 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.yntofk (unknown last config time)...
Jan 23 04:00:05 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.yntofk (unknown last config time)...
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.yntofk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.yntofk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:00:05 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.yntofk on compute-0
Jan 23 04:00:05 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.yntofk on compute-0
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/4005357121' entity='client.admin' 
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.yntofk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/950853076' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 23 04:00:05 np0005593232 podman[82536]: 2026-01-23 09:00:05.623986459 +0000 UTC m=+0.043280082 container create cfba8ca60429c962b85acda14892b149f7840b6a7e3855fe5d1c76b262ecb6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_germain, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:00:05 np0005593232 systemd[1]: Started libpod-conmon-cfba8ca60429c962b85acda14892b149f7840b6a7e3855fe5d1c76b262ecb6f5.scope.
Jan 23 04:00:05 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:00:05 np0005593232 podman[82536]: 2026-01-23 09:00:05.689890854 +0000 UTC m=+0.109184487 container init cfba8ca60429c962b85acda14892b149f7840b6a7e3855fe5d1c76b262ecb6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_germain, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 04:00:05 np0005593232 podman[82536]: 2026-01-23 09:00:05.696307826 +0000 UTC m=+0.115601449 container start cfba8ca60429c962b85acda14892b149f7840b6a7e3855fe5d1c76b262ecb6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_germain, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 04:00:05 np0005593232 podman[82536]: 2026-01-23 09:00:05.601771737 +0000 UTC m=+0.021065380 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:00:05 np0005593232 podman[82536]: 2026-01-23 09:00:05.699286741 +0000 UTC m=+0.118580384 container attach cfba8ca60429c962b85acda14892b149f7840b6a7e3855fe5d1c76b262ecb6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_germain, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 04:00:05 np0005593232 elated_germain[82552]: 167 167
Jan 23 04:00:05 np0005593232 systemd[1]: libpod-cfba8ca60429c962b85acda14892b149f7840b6a7e3855fe5d1c76b262ecb6f5.scope: Deactivated successfully.
Jan 23 04:00:05 np0005593232 podman[82536]: 2026-01-23 09:00:05.705291672 +0000 UTC m=+0.124585315 container died cfba8ca60429c962b85acda14892b149f7840b6a7e3855fe5d1c76b262ecb6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 04:00:05 np0005593232 systemd[1]: var-lib-containers-storage-overlay-2f7043da6945a4ff97e26d617f6f94da6521685f24a04b5fb4375b9acb2fbb23-merged.mount: Deactivated successfully.
Jan 23 04:00:05 np0005593232 podman[82536]: 2026-01-23 09:00:05.775915811 +0000 UTC m=+0.195209434 container remove cfba8ca60429c962b85acda14892b149f7840b6a7e3855fe5d1c76b262ecb6f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_germain, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Jan 23 04:00:05 np0005593232 systemd[1]: libpod-conmon-cfba8ca60429c962b85acda14892b149f7840b6a7e3855fe5d1c76b262ecb6f5.scope: Deactivated successfully.
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:00:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:05 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 25743afc-53d4-4b7f-bccd-fc46b2dddbc0 does not exist
Jan 23 04:00:05 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d7eb8521-624a-41ef-8cd3-cae9bd845c1d does not exist
Jan 23 04:00:05 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 260ca650-0a12-4c7a-8314-8482eb2c03e6 does not exist
Jan 23 04:00:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 23 04:00:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 04:00:06 np0005593232 ceph-mon[74423]: Reconfiguring mgr.compute-0.yntofk (unknown last config time)...
Jan 23 04:00:06 np0005593232 ceph-mon[74423]: Reconfiguring daemon mgr.compute-0.yntofk on compute-0
Jan 23 04:00:06 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/950853076' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 23 04:00:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:00:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/950853076' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 23 04:00:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 23 04:00:06 np0005593232 strange_johnson[82345]: set require_min_compat_client to mimic
Jan 23 04:00:06 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 23 04:00:06 np0005593232 systemd[1]: libpod-ccd02d614c7cf92ca2ec1f813e4036c7fb789e152e505aa7d6a74e0224276644.scope: Deactivated successfully.
Jan 23 04:00:06 np0005593232 podman[82284]: 2026-01-23 09:00:06.261565473 +0000 UTC m=+1.606474182 container died ccd02d614c7cf92ca2ec1f813e4036c7fb789e152e505aa7d6a74e0224276644 (image=quay.io/ceph/ceph:v18, name=strange_johnson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:00:06 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4c0e4dc73095e14c75b582de06a86fdf1f59bc4d9ee3886abe3a06571472775e-merged.mount: Deactivated successfully.
Jan 23 04:00:06 np0005593232 podman[82284]: 2026-01-23 09:00:06.305232415 +0000 UTC m=+1.650141094 container remove ccd02d614c7cf92ca2ec1f813e4036c7fb789e152e505aa7d6a74e0224276644 (image=quay.io/ceph/ceph:v18, name=strange_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 23 04:00:06 np0005593232 systemd[1]: libpod-conmon-ccd02d614c7cf92ca2ec1f813e4036c7fb789e152e505aa7d6a74e0224276644.scope: Deactivated successfully.
Jan 23 04:00:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:06 np0005593232 python3[82659]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:00:07 np0005593232 podman[82660]: 2026-01-23 09:00:07.004169604 +0000 UTC m=+0.023668975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:00:07 np0005593232 podman[82660]: 2026-01-23 09:00:07.100811062 +0000 UTC m=+0.120310413 container create 0becd2b823409af127dfb6b4972624d5e7affb13d0d033f264031926e01b7bba (image=quay.io/ceph/ceph:v18, name=zen_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 23 04:00:07 np0005593232 systemd[1]: Started libpod-conmon-0becd2b823409af127dfb6b4972624d5e7affb13d0d033f264031926e01b7bba.scope.
Jan 23 04:00:07 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:00:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9d4a7a8f86730512741b76a6c8155170bed0d86493bba8a8dcd3a98a1d8a18/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9d4a7a8f86730512741b76a6c8155170bed0d86493bba8a8dcd3a98a1d8a18/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba9d4a7a8f86730512741b76a6c8155170bed0d86493bba8a8dcd3a98a1d8a18/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:07 np0005593232 ceph-mgr[74726]: [progress INFO root] Writing back 1 completed events
Jan 23 04:00:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 23 04:00:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:00:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:00:07 np0005593232 podman[82660]: 2026-01-23 09:00:07.185567023 +0000 UTC m=+0.205066394 container init 0becd2b823409af127dfb6b4972624d5e7affb13d0d033f264031926e01b7bba (image=quay.io/ceph/ceph:v18, name=zen_khorana, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 04:00:07 np0005593232 podman[82660]: 2026-01-23 09:00:07.194205859 +0000 UTC m=+0.213705210 container start 0becd2b823409af127dfb6b4972624d5e7affb13d0d033f264031926e01b7bba (image=quay.io/ceph/ceph:v18, name=zen_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 23 04:00:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:00:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:00:07 np0005593232 podman[82660]: 2026-01-23 09:00:07.197473201 +0000 UTC m=+0.216972572 container attach 0becd2b823409af127dfb6b4972624d5e7affb13d0d033f264031926e01b7bba (image=quay.io/ceph/ceph:v18, name=zen_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 04:00:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:00:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:00:07 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/950853076' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 23 04:00:07 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:07 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 04:00:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:00:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 23 04:00:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 23 04:00:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 23 04:00:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 23 04:00:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:08 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Added host compute-0
Jan 23 04:00:08 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 23 04:00:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:00:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:00:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:00:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:00:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:00:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:08 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c5067911-756c-4f28-9b3a-14fc9958ff54 does not exist
Jan 23 04:00:08 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev daf20f7f-2643-42eb-8958-4509636e7720 does not exist
Jan 23 04:00:08 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ce59506d-b156-4964-8503-232e2461a8d6 does not exist
Jan 23 04:00:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:09 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:09 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:09 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:09 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:09 np0005593232 ceph-mon[74423]: Added host compute-0
Jan 23 04:00:09 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:00:09 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:10 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Jan 23 04:00:10 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Jan 23 04:00:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:11 np0005593232 ceph-mon[74423]: Deploying cephadm binary to compute-1
Jan 23 04:00:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:00:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 23 04:00:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:14 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Added host compute-1
Jan 23 04:00:14 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Added host compute-1
Jan 23 04:00:14 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:00:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:00:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:15 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Jan 23 04:00:15 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Jan 23 04:00:15 np0005593232 ceph-mon[74423]: Added host compute-1
Jan 23 04:00:15 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:15 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:16 np0005593232 ceph-mon[74423]: Deploying cephadm binary to compute-2
Jan 23 04:00:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:00:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:00:18 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Jan 23 04:00:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:18 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Added host compute-2
Jan 23 04:00:18 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Added host compute-2
Jan 23 04:00:18 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 23 04:00:18 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 23 04:00:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 23 04:00:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:19 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 23 04:00:19 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 23 04:00:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 23 04:00:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:19 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 23 04:00:19 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 23 04:00:19 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Jan 23 04:00:19 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Jan 23 04:00:19 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 23 04:00:19 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 23 04:00:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Jan 23 04:00:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:19 np0005593232 zen_khorana[82675]: Added host 'compute-0' with addr '192.168.122.100'
Jan 23 04:00:19 np0005593232 zen_khorana[82675]: Added host 'compute-1' with addr '192.168.122.101'
Jan 23 04:00:19 np0005593232 zen_khorana[82675]: Added host 'compute-2' with addr '192.168.122.102'
Jan 23 04:00:19 np0005593232 zen_khorana[82675]: Scheduled mon update...
Jan 23 04:00:19 np0005593232 zen_khorana[82675]: Scheduled mgr update...
Jan 23 04:00:19 np0005593232 zen_khorana[82675]: Scheduled osd.default_drive_group update...
Jan 23 04:00:19 np0005593232 systemd[1]: libpod-0becd2b823409af127dfb6b4972624d5e7affb13d0d033f264031926e01b7bba.scope: Deactivated successfully.
Jan 23 04:00:19 np0005593232 podman[82660]: 2026-01-23 09:00:19.04127697 +0000 UTC m=+12.060776321 container died 0becd2b823409af127dfb6b4972624d5e7affb13d0d033f264031926e01b7bba (image=quay.io/ceph/ceph:v18, name=zen_khorana, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 04:00:19 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ba9d4a7a8f86730512741b76a6c8155170bed0d86493bba8a8dcd3a98a1d8a18-merged.mount: Deactivated successfully.
Jan 23 04:00:19 np0005593232 podman[82660]: 2026-01-23 09:00:19.096185222 +0000 UTC m=+12.115684573 container remove 0becd2b823409af127dfb6b4972624d5e7affb13d0d033f264031926e01b7bba (image=quay.io/ceph/ceph:v18, name=zen_khorana, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:00:19 np0005593232 systemd[1]: libpod-conmon-0becd2b823409af127dfb6b4972624d5e7affb13d0d033f264031926e01b7bba.scope: Deactivated successfully.
Jan 23 04:00:19 np0005593232 python3[82906]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:00:19 np0005593232 podman[82908]: 2026-01-23 09:00:19.602714408 +0000 UTC m=+0.044805545 container create 0d69765b43d0f1abbe6b30f043310c85e9a75dc48153c0691318f45d58e85020 (image=quay.io/ceph/ceph:v18, name=hopeful_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 04:00:19 np0005593232 systemd[1]: Started libpod-conmon-0d69765b43d0f1abbe6b30f043310c85e9a75dc48153c0691318f45d58e85020.scope.
Jan 23 04:00:19 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:00:19 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a37c0f962a19206ac5142407c2cd39d427474c39d5d35f276a7ef48bcf9b0d8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:19 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a37c0f962a19206ac5142407c2cd39d427474c39d5d35f276a7ef48bcf9b0d8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:19 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a37c0f962a19206ac5142407c2cd39d427474c39d5d35f276a7ef48bcf9b0d8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:19 np0005593232 podman[82908]: 2026-01-23 09:00:19.583806331 +0000 UTC m=+0.025897488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:00:19 np0005593232 podman[82908]: 2026-01-23 09:00:19.690606648 +0000 UTC m=+0.132697785 container init 0d69765b43d0f1abbe6b30f043310c85e9a75dc48153c0691318f45d58e85020 (image=quay.io/ceph/ceph:v18, name=hopeful_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 04:00:19 np0005593232 podman[82908]: 2026-01-23 09:00:19.697657419 +0000 UTC m=+0.139748556 container start 0d69765b43d0f1abbe6b30f043310c85e9a75dc48153c0691318f45d58e85020 (image=quay.io/ceph/ceph:v18, name=hopeful_moore, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:00:19 np0005593232 podman[82908]: 2026-01-23 09:00:19.700795948 +0000 UTC m=+0.142887085 container attach 0d69765b43d0f1abbe6b30f043310c85e9a75dc48153c0691318f45d58e85020 (image=quay.io/ceph/ceph:v18, name=hopeful_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:00:20 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:20 np0005593232 ceph-mon[74423]: Added host compute-2
Jan 23 04:00:20 np0005593232 ceph-mon[74423]: Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 23 04:00:20 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:20 np0005593232 ceph-mon[74423]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 23 04:00:20 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:20 np0005593232 ceph-mon[74423]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 23 04:00:20 np0005593232 ceph-mon[74423]: Marking host: compute-1 for OSDSpec preview refresh.
Jan 23 04:00:20 np0005593232 ceph-mon[74423]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 23 04:00:20 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 23 04:00:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2729909822' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 23 04:00:20 np0005593232 hopeful_moore[82924]: 
Jan 23 04:00:20 np0005593232 hopeful_moore[82924]: {"fsid":"e1533653-0a5a-584c-b34b-8689f0d32e77","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":92,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-23T08:58:44.980509+0000","services":{}},"progress_events":{}}
Jan 23 04:00:20 np0005593232 systemd[1]: libpod-0d69765b43d0f1abbe6b30f043310c85e9a75dc48153c0691318f45d58e85020.scope: Deactivated successfully.
Jan 23 04:00:20 np0005593232 podman[82949]: 2026-01-23 09:00:20.530683022 +0000 UTC m=+0.028950664 container died 0d69765b43d0f1abbe6b30f043310c85e9a75dc48153c0691318f45d58e85020 (image=quay.io/ceph/ceph:v18, name=hopeful_moore, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:00:20 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7a37c0f962a19206ac5142407c2cd39d427474c39d5d35f276a7ef48bcf9b0d8-merged.mount: Deactivated successfully.
Jan 23 04:00:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:20 np0005593232 podman[82949]: 2026-01-23 09:00:20.929371022 +0000 UTC m=+0.427638674 container remove 0d69765b43d0f1abbe6b30f043310c85e9a75dc48153c0691318f45d58e85020 (image=quay.io/ceph/ceph:v18, name=hopeful_moore, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:00:20 np0005593232 systemd[1]: libpod-conmon-0d69765b43d0f1abbe6b30f043310c85e9a75dc48153c0691318f45d58e85020.scope: Deactivated successfully.
Jan 23 04:00:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:00:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:00:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:00:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:36 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:00:36
Jan 23 04:00:36 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:00:36 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:00:36 np0005593232 ceph-mgr[74726]: [balancer INFO root] No pools available
Jan 23 04:00:37 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:00:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:00:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:00:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:00:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:00:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:00:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:00:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:00:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:00:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:00:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:00:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:00:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:00:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:00:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:00:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:00:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 23 04:00:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 04:00:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:00:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:00:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:00:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:00:52 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 23 04:00:52 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 23 04:00:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 04:00:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:00:52 np0005593232 python3[82989]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:00:52 np0005593232 podman[82991]: 2026-01-23 09:00:52.244739634 +0000 UTC m=+0.040066081 container create 57a2f2945c941d12a75dae839ba83fd1858cfb8fbd9eb3bc82aed49f63cc3e37 (image=quay.io/ceph/ceph:v18, name=sad_heyrovsky, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 04:00:52 np0005593232 systemd[1]: Started libpod-conmon-57a2f2945c941d12a75dae839ba83fd1858cfb8fbd9eb3bc82aed49f63cc3e37.scope.
Jan 23 04:00:52 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:00:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a848e2976e951627a812753118f92d5feef3226e1f31e62b375c7b7581cbc525/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a848e2976e951627a812753118f92d5feef3226e1f31e62b375c7b7581cbc525/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a848e2976e951627a812753118f92d5feef3226e1f31e62b375c7b7581cbc525/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 04:00:52 np0005593232 podman[82991]: 2026-01-23 09:00:52.316020521 +0000 UTC m=+0.111347008 container init 57a2f2945c941d12a75dae839ba83fd1858cfb8fbd9eb3bc82aed49f63cc3e37 (image=quay.io/ceph/ceph:v18, name=sad_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 04:00:52 np0005593232 podman[82991]: 2026-01-23 09:00:52.324019939 +0000 UTC m=+0.119346396 container start 57a2f2945c941d12a75dae839ba83fd1858cfb8fbd9eb3bc82aed49f63cc3e37 (image=quay.io/ceph/ceph:v18, name=sad_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 23 04:00:52 np0005593232 podman[82991]: 2026-01-23 09:00:52.228277215 +0000 UTC m=+0.023603682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:00:52 np0005593232 podman[82991]: 2026-01-23 09:00:52.327827347 +0000 UTC m=+0.123153804 container attach 57a2f2945c941d12a75dae839ba83fd1858cfb8fbd9eb3bc82aed49f63cc3e37 (image=quay.io/ceph/ceph:v18, name=sad_heyrovsky, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:00:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:00:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 23 04:00:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/852242592' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 23 04:00:52 np0005593232 sad_heyrovsky[83007]: 
Jan 23 04:00:52 np0005593232 sad_heyrovsky[83007]: {"fsid":"e1533653-0a5a-584c-b34b-8689f0d32e77","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":125,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-23T09:00:38.931319+0000","services":{}},"progress_events":{}}
Jan 23 04:00:52 np0005593232 systemd[1]: libpod-57a2f2945c941d12a75dae839ba83fd1858cfb8fbd9eb3bc82aed49f63cc3e37.scope: Deactivated successfully.
Jan 23 04:00:52 np0005593232 podman[82991]: 2026-01-23 09:00:52.985644677 +0000 UTC m=+0.780971134 container died 57a2f2945c941d12a75dae839ba83fd1858cfb8fbd9eb3bc82aed49f63cc3e37 (image=quay.io/ceph/ceph:v18, name=sad_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Jan 23 04:00:53 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a848e2976e951627a812753118f92d5feef3226e1f31e62b375c7b7581cbc525-merged.mount: Deactivated successfully.
Jan 23 04:00:53 np0005593232 podman[82991]: 2026-01-23 09:00:53.024180783 +0000 UTC m=+0.819507230 container remove 57a2f2945c941d12a75dae839ba83fd1858cfb8fbd9eb3bc82aed49f63cc3e37 (image=quay.io/ceph/ceph:v18, name=sad_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 04:00:53 np0005593232 systemd[1]: libpod-conmon-57a2f2945c941d12a75dae839ba83fd1858cfb8fbd9eb3bc82aed49f63cc3e37.scope: Deactivated successfully.
Jan 23 04:00:53 np0005593232 ceph-mon[74423]: Updating compute-1:/etc/ceph/ceph.conf
Jan 23 04:00:53 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.conf
Jan 23 04:00:53 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.conf
Jan 23 04:00:54 np0005593232 ceph-mon[74423]: Updating compute-1:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.conf
Jan 23 04:00:54 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 23 04:00:54 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 23 04:00:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:55 np0005593232 ceph-mon[74423]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 23 04:00:55 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.client.admin.keyring
Jan 23 04:00:55 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.client.admin.keyring
Jan 23 04:00:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:00:56 np0005593232 ceph-mon[74423]: Updating compute-1:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.client.admin.keyring
Jan 23 04:00:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:00:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:00:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:56 np0005593232 ceph-mgr[74726]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 23 04:00:56 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 23 04:00:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:56 np0005593232 ceph-mgr[74726]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 23 04:00:56 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 23 04:00:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:56 np0005593232 ceph-mgr[74726]: [progress INFO root] update: starting ev 96893bb7-0c5d-4deb-93aa-43a14cf6f08e (Updating crash deployment (+1 -> 2))
Jan 23 04:00:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 23 04:00:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 23 04:00:56 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T09:00:56.648+0000 7f34116ec640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Jan 23 04:00:56 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: service_name: mon
Jan 23 04:00:56 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: placement:
Jan 23 04:00:56 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]:  hosts:
Jan 23 04:00:56 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]:  - compute-0
Jan 23 04:00:56 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]:  - compute-1
Jan 23 04:00:56 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]:  - compute-2
Jan 23 04:00:56 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 23 04:00:56 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T09:00:56.649+0000 7f34116ec640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Jan 23 04:00:56 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: service_name: mgr
Jan 23 04:00:56 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: placement:
Jan 23 04:00:56 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]:  hosts:
Jan 23 04:00:56 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]:  - compute-0
Jan 23 04:00:56 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]:  - compute-1
Jan 23 04:00:56 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]:  - compute-2
Jan 23 04:00:56 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 23 04:00:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 23 04:00:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:00:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:00:56 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Jan 23 04:00:56 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Jan 23 04:00:57 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:57 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:57 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:57 np0005593232 ceph-mon[74423]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 23 04:00:57 np0005593232 ceph-mon[74423]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 23 04:00:57 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 23 04:00:57 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 23 04:00:57 np0005593232 ceph-mon[74423]: Deploying daemon crash.compute-1 on compute-1
Jan 23 04:00:57 np0005593232 ceph-mon[74423]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 23 04:00:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:00:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:00:58 np0005593232 ceph-mon[74423]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 23 04:00:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:00:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:00:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 23 04:00:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:59 np0005593232 ceph-mgr[74726]: [progress INFO root] complete: finished ev 96893bb7-0c5d-4deb-93aa-43a14cf6f08e (Updating crash deployment (+1 -> 2))
Jan 23 04:00:59 np0005593232 ceph-mgr[74726]: [progress INFO root] Completed event 96893bb7-0c5d-4deb-93aa-43a14cf6f08e (Updating crash deployment (+1 -> 2)) in 3 seconds
Jan 23 04:00:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 23 04:00:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:00:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:00:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:00:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:00:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:00:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:00:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:00:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:00:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:00:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:00:59 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:59 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:59 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:59 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:00:59 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:00:59 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:01:00 np0005593232 podman[83185]: 2026-01-23 09:01:00.250920626 +0000 UTC m=+0.038262299 container create f74fbca1d951720711beba4053dee568d825144af1841118ca4d0ee77460764a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chebyshev, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 04:01:00 np0005593232 systemd[1]: Started libpod-conmon-f74fbca1d951720711beba4053dee568d825144af1841118ca4d0ee77460764a.scope.
Jan 23 04:01:00 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:01:00 np0005593232 podman[83185]: 2026-01-23 09:01:00.233695286 +0000 UTC m=+0.021036969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:01:00 np0005593232 podman[83185]: 2026-01-23 09:01:00.328725119 +0000 UTC m=+0.116066812 container init f74fbca1d951720711beba4053dee568d825144af1841118ca4d0ee77460764a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chebyshev, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:01:00 np0005593232 podman[83185]: 2026-01-23 09:01:00.335036679 +0000 UTC m=+0.122378352 container start f74fbca1d951720711beba4053dee568d825144af1841118ca4d0ee77460764a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 04:01:00 np0005593232 podman[83185]: 2026-01-23 09:01:00.338712463 +0000 UTC m=+0.126054166 container attach f74fbca1d951720711beba4053dee568d825144af1841118ca4d0ee77460764a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:01:00 np0005593232 beautiful_chebyshev[83202]: 167 167
Jan 23 04:01:00 np0005593232 systemd[1]: libpod-f74fbca1d951720711beba4053dee568d825144af1841118ca4d0ee77460764a.scope: Deactivated successfully.
Jan 23 04:01:00 np0005593232 podman[83185]: 2026-01-23 09:01:00.34070563 +0000 UTC m=+0.128047303 container died f74fbca1d951720711beba4053dee568d825144af1841118ca4d0ee77460764a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chebyshev, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:01:00 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6e193c863f4cff6c2558d96bd9af2f35b55d8739ea7d7968482fd588937e2443-merged.mount: Deactivated successfully.
Jan 23 04:01:00 np0005593232 podman[83185]: 2026-01-23 09:01:00.376996922 +0000 UTC m=+0.164338595 container remove f74fbca1d951720711beba4053dee568d825144af1841118ca4d0ee77460764a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_chebyshev, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 04:01:00 np0005593232 systemd[1]: libpod-conmon-f74fbca1d951720711beba4053dee568d825144af1841118ca4d0ee77460764a.scope: Deactivated successfully.
Jan 23 04:01:00 np0005593232 podman[83224]: 2026-01-23 09:01:00.542583472 +0000 UTC m=+0.039561116 container create 9b3145380b742dcbf407a6dfedba6c20955b423073b6b34b806d30129e85836f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:01:00 np0005593232 systemd[1]: Started libpod-conmon-9b3145380b742dcbf407a6dfedba6c20955b423073b6b34b806d30129e85836f.scope.
Jan 23 04:01:00 np0005593232 podman[83224]: 2026-01-23 09:01:00.526125784 +0000 UTC m=+0.023103428 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:01:00 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:01:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bae419d143076f4fa347ea1a832f803941ad8cded3ca1dd3e1b12dfb6f8f6b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bae419d143076f4fa347ea1a832f803941ad8cded3ca1dd3e1b12dfb6f8f6b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bae419d143076f4fa347ea1a832f803941ad8cded3ca1dd3e1b12dfb6f8f6b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bae419d143076f4fa347ea1a832f803941ad8cded3ca1dd3e1b12dfb6f8f6b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bae419d143076f4fa347ea1a832f803941ad8cded3ca1dd3e1b12dfb6f8f6b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:01:00 np0005593232 podman[83224]: 2026-01-23 09:01:00.665390684 +0000 UTC m=+0.162368358 container init 9b3145380b742dcbf407a6dfedba6c20955b423073b6b34b806d30129e85836f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:01:00 np0005593232 podman[83224]: 2026-01-23 09:01:00.674320678 +0000 UTC m=+0.171298322 container start 9b3145380b742dcbf407a6dfedba6c20955b423073b6b34b806d30129e85836f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:01:00 np0005593232 podman[83224]: 2026-01-23 09:01:00.678027353 +0000 UTC m=+0.175005017 container attach 9b3145380b742dcbf407a6dfedba6c20955b423073b6b34b806d30129e85836f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bartik, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:01:01 np0005593232 fervent_bartik[83240]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:01:01 np0005593232 fervent_bartik[83240]: --> relative data size: 1.0
Jan 23 04:01:01 np0005593232 fervent_bartik[83240]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 04:01:01 np0005593232 fervent_bartik[83240]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 90ada044-56e3-48f6-a1ea-fdd1f369ab0a
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a"} v 0) v1
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/569395463' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a"}]: dispatch
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/569395463' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a"}]': finished
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 04:01:02 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 04:01:02 np0005593232 fervent_bartik[83240]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 23 04:01:02 np0005593232 ceph-mgr[74726]: [progress INFO root] Writing back 2 completed events
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:02 np0005593232 lvm[83303]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 04:01:02 np0005593232 lvm[83303]: VG ceph_vg0 finished
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "6df7941f-8366-4880-b94b-b9b3810e23e9"} v 0) v1
Jan 23 04:01:02 np0005593232 fervent_bartik[83240]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2619658229' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6df7941f-8366-4880-b94b-b9b3810e23e9"}]: dispatch
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/2619658229' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6df7941f-8366-4880-b94b-b9b3810e23e9"}]': finished
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 23 04:01:02 np0005593232 fervent_bartik[83240]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 04:01:02 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 04:01:02 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 04:01:02 np0005593232 fervent_bartik[83240]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 23 04:01:02 np0005593232 fervent_bartik[83240]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 23 04:01:02 np0005593232 fervent_bartik[83240]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 23 04:01:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2370671694' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 23 04:01:02 np0005593232 fervent_bartik[83240]: stderr: got monmap epoch 1
Jan 23 04:01:02 np0005593232 fervent_bartik[83240]: --> Creating keyring file for osd.0
Jan 23 04:01:02 np0005593232 fervent_bartik[83240]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/569395463' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a"}]: dispatch
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/569395463' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a"}]': finished
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.101:0/2619658229' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "6df7941f-8366-4880-b94b-b9b3810e23e9"}]: dispatch
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.101:0/2619658229' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "6df7941f-8366-4880-b94b-b9b3810e23e9"}]': finished
Jan 23 04:01:02 np0005593232 fervent_bartik[83240]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 23 04:01:02 np0005593232 fervent_bartik[83240]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 90ada044-56e3-48f6-a1ea-fdd1f369ab0a --setuser ceph --setgroup ceph
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/333123229' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 23 04:01:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:01:03 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 23 04:01:03 np0005593232 ceph-mon[74423]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 23 04:01:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:01:06 np0005593232 fervent_bartik[83240]: stderr: 2026-01-23T09:01:02.816+0000 7f9087a10740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 23 04:01:06 np0005593232 fervent_bartik[83240]: stderr: 2026-01-23T09:01:02.816+0000 7f9087a10740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 23 04:01:06 np0005593232 fervent_bartik[83240]: stderr: 2026-01-23T09:01:02.816+0000 7f9087a10740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 23 04:01:06 np0005593232 fervent_bartik[83240]: stderr: 2026-01-23T09:01:02.816+0000 7f9087a10740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 23 04:01:06 np0005593232 fervent_bartik[83240]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 23 04:01:06 np0005593232 fervent_bartik[83240]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 23 04:01:06 np0005593232 fervent_bartik[83240]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 23 04:01:06 np0005593232 fervent_bartik[83240]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 23 04:01:06 np0005593232 fervent_bartik[83240]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 23 04:01:06 np0005593232 fervent_bartik[83240]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 23 04:01:06 np0005593232 fervent_bartik[83240]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 23 04:01:06 np0005593232 fervent_bartik[83240]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 23 04:01:06 np0005593232 fervent_bartik[83240]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 23 04:01:06 np0005593232 systemd[1]: libpod-9b3145380b742dcbf407a6dfedba6c20955b423073b6b34b806d30129e85836f.scope: Deactivated successfully.
Jan 23 04:01:06 np0005593232 conmon[83240]: conmon 9b3145380b742dcbf407 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9b3145380b742dcbf407a6dfedba6c20955b423073b6b34b806d30129e85836f.scope/container/memory.events
Jan 23 04:01:06 np0005593232 systemd[1]: libpod-9b3145380b742dcbf407a6dfedba6c20955b423073b6b34b806d30129e85836f.scope: Consumed 3.193s CPU time.
Jan 23 04:01:06 np0005593232 podman[83224]: 2026-01-23 09:01:06.24152692 +0000 UTC m=+5.738504554 container died 9b3145380b742dcbf407a6dfedba6c20955b423073b6b34b806d30129e85836f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bartik, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 04:01:06 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1bae419d143076f4fa347ea1a832f803941ad8cded3ca1dd3e1b12dfb6f8f6b6-merged.mount: Deactivated successfully.
Jan 23 04:01:06 np0005593232 podman[83224]: 2026-01-23 09:01:06.387951245 +0000 UTC m=+5.884928889 container remove 9b3145380b742dcbf407a6dfedba6c20955b423073b6b34b806d30129e85836f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 23 04:01:06 np0005593232 systemd[1]: libpod-conmon-9b3145380b742dcbf407a6dfedba6c20955b423073b6b34b806d30129e85836f.scope: Deactivated successfully.
Jan 23 04:01:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:01:07 np0005593232 podman[84370]: 2026-01-23 09:01:06.966033045 +0000 UTC m=+0.020980956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:01:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:01:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:01:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:01:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:01:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:01:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:01:07 np0005593232 podman[84370]: 2026-01-23 09:01:07.616156564 +0000 UTC m=+0.671104455 container create 65b6227139f32fadeaa87750009c244e6f8a9bf8641dd1f33dfd3c72c1d72a94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_heisenberg, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:01:07 np0005593232 systemd[1]: Started libpod-conmon-65b6227139f32fadeaa87750009c244e6f8a9bf8641dd1f33dfd3c72c1d72a94.scope.
Jan 23 04:01:07 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:01:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:01:08 np0005593232 podman[84370]: 2026-01-23 09:01:08.515683319 +0000 UTC m=+1.570631250 container init 65b6227139f32fadeaa87750009c244e6f8a9bf8641dd1f33dfd3c72c1d72a94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_heisenberg, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 23 04:01:08 np0005593232 podman[84370]: 2026-01-23 09:01:08.526342326 +0000 UTC m=+1.581290227 container start 65b6227139f32fadeaa87750009c244e6f8a9bf8641dd1f33dfd3c72c1d72a94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:01:08 np0005593232 sharp_heisenberg[84386]: 167 167
Jan 23 04:01:08 np0005593232 systemd[1]: libpod-65b6227139f32fadeaa87750009c244e6f8a9bf8641dd1f33dfd3c72c1d72a94.scope: Deactivated successfully.
Jan 23 04:01:08 np0005593232 conmon[84386]: conmon 65b6227139f32fadeaa8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-65b6227139f32fadeaa87750009c244e6f8a9bf8641dd1f33dfd3c72c1d72a94.scope/container/memory.events
Jan 23 04:01:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:01:08 np0005593232 podman[84370]: 2026-01-23 09:01:08.734807791 +0000 UTC m=+1.789755682 container attach 65b6227139f32fadeaa87750009c244e6f8a9bf8641dd1f33dfd3c72c1d72a94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 23 04:01:08 np0005593232 podman[84370]: 2026-01-23 09:01:08.735394248 +0000 UTC m=+1.790342149 container died 65b6227139f32fadeaa87750009c244e6f8a9bf8641dd1f33dfd3c72c1d72a94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_heisenberg, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:01:09 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f9e38fe41b1d9227201893e9064c05a063d6df5483c17778b3cc71af38cee875-merged.mount: Deactivated successfully.
Jan 23 04:01:10 np0005593232 podman[84370]: 2026-01-23 09:01:10.00768074 +0000 UTC m=+3.062628671 container remove 65b6227139f32fadeaa87750009c244e6f8a9bf8641dd1f33dfd3c72c1d72a94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:01:10 np0005593232 systemd[1]: libpod-conmon-65b6227139f32fadeaa87750009c244e6f8a9bf8641dd1f33dfd3c72c1d72a94.scope: Deactivated successfully.
Jan 23 04:01:10 np0005593232 podman[84410]: 2026-01-23 09:01:10.191979588 +0000 UTC m=+0.041081197 container create a1ee7580524b0e1e00e1fd43cc65c3d7d6647e1a4cd2abf8ccbba1ebecdb9672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_elbakyan, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 04:01:10 np0005593232 systemd[1]: Started libpod-conmon-a1ee7580524b0e1e00e1fd43cc65c3d7d6647e1a4cd2abf8ccbba1ebecdb9672.scope.
Jan 23 04:01:10 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:01:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0661c0162ba937ebc1ba3531f36337d34f473bee83460ca24fd32995465aa1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0661c0162ba937ebc1ba3531f36337d34f473bee83460ca24fd32995465aa1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0661c0162ba937ebc1ba3531f36337d34f473bee83460ca24fd32995465aa1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0661c0162ba937ebc1ba3531f36337d34f473bee83460ca24fd32995465aa1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:10 np0005593232 podman[84410]: 2026-01-23 09:01:10.175402789 +0000 UTC m=+0.024504418 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:01:10 np0005593232 podman[84410]: 2026-01-23 09:01:10.27905857 +0000 UTC m=+0.128160209 container init a1ee7580524b0e1e00e1fd43cc65c3d7d6647e1a4cd2abf8ccbba1ebecdb9672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_elbakyan, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 23 04:01:10 np0005593232 podman[84410]: 2026-01-23 09:01:10.285140506 +0000 UTC m=+0.134242115 container start a1ee7580524b0e1e00e1fd43cc65c3d7d6647e1a4cd2abf8ccbba1ebecdb9672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_elbakyan, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 23 04:01:10 np0005593232 podman[84410]: 2026-01-23 09:01:10.292942831 +0000 UTC m=+0.142044460 container attach a1ee7580524b0e1e00e1fd43cc65c3d7d6647e1a4cd2abf8ccbba1ebecdb9672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_elbakyan, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:01:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]: {
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:    "0": [
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:        {
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:            "devices": [
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:                "/dev/loop3"
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:            ],
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:            "lv_name": "ceph_lv0",
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:            "lv_size": "7511998464",
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:            "name": "ceph_lv0",
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:            "tags": {
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:                "ceph.cluster_name": "ceph",
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:                "ceph.crush_device_class": "",
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:                "ceph.encrypted": "0",
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:                "ceph.osd_id": "0",
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:                "ceph.type": "block",
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:                "ceph.vdo": "0"
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:            },
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:            "type": "block",
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:            "vg_name": "ceph_vg0"
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:        }
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]:    ]
Jan 23 04:01:11 np0005593232 reverent_elbakyan[84426]: }
Jan 23 04:01:11 np0005593232 systemd[1]: libpod-a1ee7580524b0e1e00e1fd43cc65c3d7d6647e1a4cd2abf8ccbba1ebecdb9672.scope: Deactivated successfully.
Jan 23 04:01:11 np0005593232 podman[84410]: 2026-01-23 09:01:11.095428367 +0000 UTC m=+0.944529996 container died a1ee7580524b0e1e00e1fd43cc65c3d7d6647e1a4cd2abf8ccbba1ebecdb9672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_elbakyan, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 04:01:11 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f0661c0162ba937ebc1ba3531f36337d34f473bee83460ca24fd32995465aa1f-merged.mount: Deactivated successfully.
Jan 23 04:01:11 np0005593232 podman[84410]: 2026-01-23 09:01:11.298720172 +0000 UTC m=+1.147821771 container remove a1ee7580524b0e1e00e1fd43cc65c3d7d6647e1a4cd2abf8ccbba1ebecdb9672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:01:11 np0005593232 systemd[1]: libpod-conmon-a1ee7580524b0e1e00e1fd43cc65c3d7d6647e1a4cd2abf8ccbba1ebecdb9672.scope: Deactivated successfully.
Jan 23 04:01:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Jan 23 04:01:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 23 04:01:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:01:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:01:11 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 23 04:01:11 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 23 04:01:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Jan 23 04:01:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 23 04:01:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:01:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:01:11 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-1
Jan 23 04:01:11 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-1
Jan 23 04:01:11 np0005593232 podman[84586]: 2026-01-23 09:01:11.893194085 +0000 UTC m=+0.036223236 container create f69bbb27d276bd35fedf6c5b4821c176c36b7c381873bc275bb554f47e92528d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:01:11 np0005593232 systemd[1]: Started libpod-conmon-f69bbb27d276bd35fedf6c5b4821c176c36b7c381873bc275bb554f47e92528d.scope.
Jan 23 04:01:11 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:01:11 np0005593232 podman[84586]: 2026-01-23 09:01:11.963937986 +0000 UTC m=+0.106967157 container init f69bbb27d276bd35fedf6c5b4821c176c36b7c381873bc275bb554f47e92528d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wozniak, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:01:11 np0005593232 podman[84586]: 2026-01-23 09:01:11.970050662 +0000 UTC m=+0.113079813 container start f69bbb27d276bd35fedf6c5b4821c176c36b7c381873bc275bb554f47e92528d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wozniak, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 23 04:01:11 np0005593232 podman[84586]: 2026-01-23 09:01:11.877724008 +0000 UTC m=+0.020753179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:01:11 np0005593232 cranky_wozniak[84602]: 167 167
Jan 23 04:01:11 np0005593232 systemd[1]: libpod-f69bbb27d276bd35fedf6c5b4821c176c36b7c381873bc275bb554f47e92528d.scope: Deactivated successfully.
Jan 23 04:01:11 np0005593232 podman[84586]: 2026-01-23 09:01:11.974759728 +0000 UTC m=+0.117789119 container attach f69bbb27d276bd35fedf6c5b4821c176c36b7c381873bc275bb554f47e92528d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wozniak, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:01:11 np0005593232 podman[84586]: 2026-01-23 09:01:11.97515929 +0000 UTC m=+0.118188441 container died f69bbb27d276bd35fedf6c5b4821c176c36b7c381873bc275bb554f47e92528d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 23 04:01:11 np0005593232 systemd[1]: var-lib-containers-storage-overlay-3e5903cb748b7ab3394326f3c8bbc47872f02f01435a02213fee3868d20d2258-merged.mount: Deactivated successfully.
Jan 23 04:01:12 np0005593232 podman[84586]: 2026-01-23 09:01:12.006828644 +0000 UTC m=+0.149857795 container remove f69bbb27d276bd35fedf6c5b4821c176c36b7c381873bc275bb554f47e92528d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:01:12 np0005593232 systemd[1]: libpod-conmon-f69bbb27d276bd35fedf6c5b4821c176c36b7c381873bc275bb554f47e92528d.scope: Deactivated successfully.
Jan 23 04:01:12 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 23 04:01:12 np0005593232 ceph-mon[74423]: Deploying daemon osd.0 on compute-0
Jan 23 04:01:12 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 23 04:01:12 np0005593232 ceph-mon[74423]: Deploying daemon osd.1 on compute-1
Jan 23 04:01:12 np0005593232 podman[84634]: 2026-01-23 09:01:12.237100968 +0000 UTC m=+0.037379850 container create ea777e0347ebdeaee0644c5705368f716c75289c0deebc6c7b9301215ec7686d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0-activate-test, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:01:12 np0005593232 systemd[1]: Started libpod-conmon-ea777e0347ebdeaee0644c5705368f716c75289c0deebc6c7b9301215ec7686d.scope.
Jan 23 04:01:12 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:01:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b67ddb05e39d3ad4a2cc29f4eb8add4b2b3401c5abe2205c5a7d9ce5b874c3c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b67ddb05e39d3ad4a2cc29f4eb8add4b2b3401c5abe2205c5a7d9ce5b874c3c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b67ddb05e39d3ad4a2cc29f4eb8add4b2b3401c5abe2205c5a7d9ce5b874c3c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b67ddb05e39d3ad4a2cc29f4eb8add4b2b3401c5abe2205c5a7d9ce5b874c3c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b67ddb05e39d3ad4a2cc29f4eb8add4b2b3401c5abe2205c5a7d9ce5b874c3c/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:12 np0005593232 podman[84634]: 2026-01-23 09:01:12.3148099 +0000 UTC m=+0.115088802 container init ea777e0347ebdeaee0644c5705368f716c75289c0deebc6c7b9301215ec7686d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 04:01:12 np0005593232 podman[84634]: 2026-01-23 09:01:12.222579439 +0000 UTC m=+0.022858341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:01:12 np0005593232 podman[84634]: 2026-01-23 09:01:12.324330625 +0000 UTC m=+0.124609507 container start ea777e0347ebdeaee0644c5705368f716c75289c0deebc6c7b9301215ec7686d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0-activate-test, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 23 04:01:12 np0005593232 podman[84634]: 2026-01-23 09:01:12.328214017 +0000 UTC m=+0.128492919 container attach ea777e0347ebdeaee0644c5705368f716c75289c0deebc6c7b9301215ec7686d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0-activate-test, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 23 04:01:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:01:12 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0-activate-test[84650]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Jan 23 04:01:12 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0-activate-test[84650]:                            [--no-systemd] [--no-tmpfs]
Jan 23 04:01:12 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0-activate-test[84650]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 23 04:01:13 np0005593232 systemd[1]: libpod-ea777e0347ebdeaee0644c5705368f716c75289c0deebc6c7b9301215ec7686d.scope: Deactivated successfully.
Jan 23 04:01:13 np0005593232 podman[84634]: 2026-01-23 09:01:13.017728063 +0000 UTC m=+0.818006945 container died ea777e0347ebdeaee0644c5705368f716c75289c0deebc6c7b9301215ec7686d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0-activate-test, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 23 04:01:13 np0005593232 systemd[1]: var-lib-containers-storage-overlay-9b67ddb05e39d3ad4a2cc29f4eb8add4b2b3401c5abe2205c5a7d9ce5b874c3c-merged.mount: Deactivated successfully.
Jan 23 04:01:13 np0005593232 podman[84634]: 2026-01-23 09:01:13.067733026 +0000 UTC m=+0.868011908 container remove ea777e0347ebdeaee0644c5705368f716c75289c0deebc6c7b9301215ec7686d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:01:13 np0005593232 systemd[1]: libpod-conmon-ea777e0347ebdeaee0644c5705368f716c75289c0deebc6c7b9301215ec7686d.scope: Deactivated successfully.
Jan 23 04:01:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:01:13 np0005593232 systemd[1]: Reloading.
Jan 23 04:01:13 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:01:13 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:01:13 np0005593232 systemd[1]: Reloading.
Jan 23 04:01:13 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:01:13 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:01:13 np0005593232 systemd[1]: Starting Ceph osd.0 for e1533653-0a5a-584c-b34b-8689f0d32e77...
Jan 23 04:01:14 np0005593232 podman[84814]: 2026-01-23 09:01:14.050076461 +0000 UTC m=+0.036991648 container create 15b1e301daa29fa1ba00b5c82181e79077e80c5ec51cf59d537c5063ec72b634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 04:01:14 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:01:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a607ec10769c5ccbf4744ba9e6df8f3bfbdd0e26e5a2e62dfb452ff0a18a493d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a607ec10769c5ccbf4744ba9e6df8f3bfbdd0e26e5a2e62dfb452ff0a18a493d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a607ec10769c5ccbf4744ba9e6df8f3bfbdd0e26e5a2e62dfb452ff0a18a493d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a607ec10769c5ccbf4744ba9e6df8f3bfbdd0e26e5a2e62dfb452ff0a18a493d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a607ec10769c5ccbf4744ba9e6df8f3bfbdd0e26e5a2e62dfb452ff0a18a493d/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:14 np0005593232 podman[84814]: 2026-01-23 09:01:14.115140928 +0000 UTC m=+0.102056135 container init 15b1e301daa29fa1ba00b5c82181e79077e80c5ec51cf59d537c5063ec72b634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0-activate, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:01:14 np0005593232 podman[84814]: 2026-01-23 09:01:14.122348416 +0000 UTC m=+0.109263603 container start 15b1e301daa29fa1ba00b5c82181e79077e80c5ec51cf59d537c5063ec72b634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0-activate, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 04:01:14 np0005593232 podman[84814]: 2026-01-23 09:01:14.125444986 +0000 UTC m=+0.112360203 container attach 15b1e301daa29fa1ba00b5c82181e79077e80c5ec51cf59d537c5063ec72b634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:01:14 np0005593232 podman[84814]: 2026-01-23 09:01:14.033517423 +0000 UTC m=+0.020432620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:01:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:01:15 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0-activate[84829]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 23 04:01:15 np0005593232 bash[84814]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 23 04:01:15 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0-activate[84829]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 23 04:01:15 np0005593232 bash[84814]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 23 04:01:15 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0-activate[84829]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 23 04:01:15 np0005593232 bash[84814]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 23 04:01:15 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0-activate[84829]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 23 04:01:15 np0005593232 bash[84814]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 23 04:01:15 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0-activate[84829]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 23 04:01:15 np0005593232 bash[84814]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 23 04:01:15 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0-activate[84829]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 23 04:01:15 np0005593232 bash[84814]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 23 04:01:15 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0-activate[84829]: --> ceph-volume raw activate successful for osd ID: 0
Jan 23 04:01:15 np0005593232 bash[84814]: --> ceph-volume raw activate successful for osd ID: 0
Jan 23 04:01:15 np0005593232 systemd[1]: libpod-15b1e301daa29fa1ba00b5c82181e79077e80c5ec51cf59d537c5063ec72b634.scope: Deactivated successfully.
Jan 23 04:01:15 np0005593232 systemd[1]: libpod-15b1e301daa29fa1ba00b5c82181e79077e80c5ec51cf59d537c5063ec72b634.scope: Consumed 1.004s CPU time.
Jan 23 04:01:15 np0005593232 podman[84814]: 2026-01-23 09:01:15.115571845 +0000 UTC m=+1.102487032 container died 15b1e301daa29fa1ba00b5c82181e79077e80c5ec51cf59d537c5063ec72b634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0-activate, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:01:15 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a607ec10769c5ccbf4744ba9e6df8f3bfbdd0e26e5a2e62dfb452ff0a18a493d-merged.mount: Deactivated successfully.
Jan 23 04:01:15 np0005593232 podman[84814]: 2026-01-23 09:01:15.170387376 +0000 UTC m=+1.157302563 container remove 15b1e301daa29fa1ba00b5c82181e79077e80c5ec51cf59d537c5063ec72b634 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:01:15 np0005593232 podman[84991]: 2026-01-23 09:01:15.345735386 +0000 UTC m=+0.038730279 container create 8c77c123b1330c6b33417abe8bdcf1270c3aef4a8fcb166dd457bd34ad66abc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 04:01:15 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33a1f0d79cca2b27a1ea17910c827dd86893ba32a4d81d63ace6272331d51bdc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:15 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33a1f0d79cca2b27a1ea17910c827dd86893ba32a4d81d63ace6272331d51bdc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:15 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33a1f0d79cca2b27a1ea17910c827dd86893ba32a4d81d63ace6272331d51bdc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:15 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33a1f0d79cca2b27a1ea17910c827dd86893ba32a4d81d63ace6272331d51bdc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:15 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33a1f0d79cca2b27a1ea17910c827dd86893ba32a4d81d63ace6272331d51bdc/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:15 np0005593232 podman[84991]: 2026-01-23 09:01:15.398719215 +0000 UTC m=+0.091714108 container init 8c77c123b1330c6b33417abe8bdcf1270c3aef4a8fcb166dd457bd34ad66abc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:01:15 np0005593232 podman[84991]: 2026-01-23 09:01:15.404329337 +0000 UTC m=+0.097324210 container start 8c77c123b1330c6b33417abe8bdcf1270c3aef4a8fcb166dd457bd34ad66abc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:01:15 np0005593232 bash[84991]: 8c77c123b1330c6b33417abe8bdcf1270c3aef4a8fcb166dd457bd34ad66abc4
Jan 23 04:01:15 np0005593232 podman[84991]: 2026-01-23 09:01:15.328916401 +0000 UTC m=+0.021911304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:01:15 np0005593232 systemd[1]: Started Ceph osd.0 for e1533653-0a5a-584c-b34b-8689f0d32e77.
Jan 23 04:01:15 np0005593232 ceph-osd[85010]: set uid:gid to 167:167 (ceph:ceph)
Jan 23 04:01:15 np0005593232 ceph-osd[85010]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Jan 23 04:01:15 np0005593232 ceph-osd[85010]: pidfile_write: ignore empty --pid-file
Jan 23 04:01:15 np0005593232 ceph-osd[85010]: bdev(0x55fb9ef4d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 23 04:01:15 np0005593232 ceph-osd[85010]: bdev(0x55fb9ef4d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 23 04:01:15 np0005593232 ceph-osd[85010]: bdev(0x55fb9ef4d800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 04:01:15 np0005593232 ceph-osd[85010]: bdev(0x55fb9ef4d800 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 04:01:15 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 23 04:01:15 np0005593232 ceph-osd[85010]: bdev(0x55fb9fd8f000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 23 04:01:15 np0005593232 ceph-osd[85010]: bdev(0x55fb9fd8f000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 23 04:01:15 np0005593232 ceph-osd[85010]: bdev(0x55fb9fd8f000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 04:01:15 np0005593232 ceph-osd[85010]: bdev(0x55fb9fd8f000 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 04:01:15 np0005593232 ceph-osd[85010]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Jan 23 04:01:15 np0005593232 ceph-osd[85010]: bdev(0x55fb9fd8f000 /var/lib/ceph/osd/ceph-0/block) close
Jan 23 04:01:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:01:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:01:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:15 np0005593232 ceph-osd[85010]: bdev(0x55fb9ef4d800 /var/lib/ceph/osd/ceph-0/block) close
Jan 23 04:01:15 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:15 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:01:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:01:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:15 np0005593232 ceph-osd[85010]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 23 04:01:15 np0005593232 ceph-osd[85010]: load: jerasure load: lrc 
Jan 23 04:01:15 np0005593232 ceph-osd[85010]: bdev(0x55fb9fd8fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 23 04:01:15 np0005593232 ceph-osd[85010]: bdev(0x55fb9fd8fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 23 04:01:15 np0005593232 ceph-osd[85010]: bdev(0x55fb9fd8fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 04:01:15 np0005593232 ceph-osd[85010]: bdev(0x55fb9fd8fc00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 04:01:15 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 23 04:01:15 np0005593232 ceph-osd[85010]: bdev(0x55fb9fd8fc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 23 04:01:16 np0005593232 podman[85170]: 2026-01-23 09:01:16.029491025 +0000 UTC m=+0.037239585 container create 3251f2e8ed19265072662dfbb84aa53433a78fe5d04702ae7b335a2f2585d707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:01:16 np0005593232 systemd[1]: Started libpod-conmon-3251f2e8ed19265072662dfbb84aa53433a78fe5d04702ae7b335a2f2585d707.scope.
Jan 23 04:01:16 np0005593232 podman[85170]: 2026-01-23 09:01:16.014320258 +0000 UTC m=+0.022068838 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:01:16 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:01:16 np0005593232 podman[85170]: 2026-01-23 09:01:16.15961824 +0000 UTC m=+0.167366840 container init 3251f2e8ed19265072662dfbb84aa53433a78fe5d04702ae7b335a2f2585d707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_driscoll, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:01:16 np0005593232 podman[85170]: 2026-01-23 09:01:16.173996865 +0000 UTC m=+0.181745435 container start 3251f2e8ed19265072662dfbb84aa53433a78fe5d04702ae7b335a2f2585d707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:01:16 np0005593232 podman[85170]: 2026-01-23 09:01:16.17903185 +0000 UTC m=+0.186780420 container attach 3251f2e8ed19265072662dfbb84aa53433a78fe5d04702ae7b335a2f2585d707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_driscoll, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:01:16 np0005593232 cool_driscoll[85187]: 167 167
Jan 23 04:01:16 np0005593232 systemd[1]: libpod-3251f2e8ed19265072662dfbb84aa53433a78fe5d04702ae7b335a2f2585d707.scope: Deactivated successfully.
Jan 23 04:01:16 np0005593232 podman[85170]: 2026-01-23 09:01:16.184167438 +0000 UTC m=+0.191915998 container died 3251f2e8ed19265072662dfbb84aa53433a78fe5d04702ae7b335a2f2585d707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_driscoll, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 04:01:16 np0005593232 systemd[1]: var-lib-containers-storage-overlay-200a35dfb07b0671419d07f9f755d275519b11b819835787c1a9cb664f0ace17-merged.mount: Deactivated successfully.
Jan 23 04:01:16 np0005593232 podman[85170]: 2026-01-23 09:01:16.24139295 +0000 UTC m=+0.249141520 container remove 3251f2e8ed19265072662dfbb84aa53433a78fe5d04702ae7b335a2f2585d707 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_driscoll, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Jan 23 04:01:16 np0005593232 systemd[1]: libpod-conmon-3251f2e8ed19265072662dfbb84aa53433a78fe5d04702ae7b335a2f2585d707.scope: Deactivated successfully.
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bdev(0x55fb9fd8fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bdev(0x55fb9fd8fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bdev(0x55fb9fd8fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bdev(0x55fb9fd8fc00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bdev(0x55fb9fd8fc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 23 04:01:16 np0005593232 podman[85217]: 2026-01-23 09:01:16.446373924 +0000 UTC m=+0.053423712 container create f04cf346ac72829e2d62ecf674db3453cd355d8b69205e87630c3636e2b3fc47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_hamilton, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 23 04:01:16 np0005593232 systemd[1]: Started libpod-conmon-f04cf346ac72829e2d62ecf674db3453cd355d8b69205e87630c3636e2b3fc47.scope.
Jan 23 04:01:16 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:01:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/381ab88094f5121062c3b5b4cbee27d65730f002a2bda309458c2b61708e6289/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/381ab88094f5121062c3b5b4cbee27d65730f002a2bda309458c2b61708e6289/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/381ab88094f5121062c3b5b4cbee27d65730f002a2bda309458c2b61708e6289/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/381ab88094f5121062c3b5b4cbee27d65730f002a2bda309458c2b61708e6289/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:16 np0005593232 podman[85217]: 2026-01-23 09:01:16.422718632 +0000 UTC m=+0.029768440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:01:16 np0005593232 podman[85217]: 2026-01-23 09:01:16.525660792 +0000 UTC m=+0.132710600 container init f04cf346ac72829e2d62ecf674db3453cd355d8b69205e87630c3636e2b3fc47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_hamilton, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bdev(0x55fb9fd8fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bdev(0x55fb9fd8fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bdev(0x55fb9fd8fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bdev(0x55fb9fd8fc00 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bdev(0x55fb9ff72400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bdev(0x55fb9ff72400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bdev(0x55fb9ff72400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bdev(0x55fb9ff72400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bluefs mount
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 23 04:01:16 np0005593232 podman[85217]: 2026-01-23 09:01:16.534752105 +0000 UTC m=+0.141801903 container start f04cf346ac72829e2d62ecf674db3453cd355d8b69205e87630c3636e2b3fc47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bluefs mount shared_bdev_used = 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: RocksDB version: 7.9.2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Git sha 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: DB SUMMARY
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: DB Session ID:  JI1KCQEQ0AL0R6FZNEW5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: CURRENT file:  CURRENT
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: IDENTITY file:  IDENTITY
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                         Options.error_if_exists: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.create_if_missing: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                         Options.paranoid_checks: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                                     Options.env: 0x55fb9fde1c70
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                                Options.info_log: 0x55fb9efcab60
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.max_file_opening_threads: 16
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                              Options.statistics: (nil)
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.use_fsync: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.max_log_file_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                         Options.allow_fallocate: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.use_direct_reads: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.create_missing_column_families: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                              Options.db_log_dir: 
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                                 Options.wal_dir: db.wal
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.advise_random_on_open: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.write_buffer_manager: 0x55fb9feec460
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                            Options.rate_limiter: (nil)
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.unordered_write: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.row_cache: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                              Options.wal_filter: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.allow_ingest_behind: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.two_write_queues: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.manual_wal_flush: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.wal_compression: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.atomic_flush: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.log_readahead_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 23 04:01:16 np0005593232 podman[85217]: 2026-01-23 09:01:16.54602763 +0000 UTC m=+0.153077438 container attach f04cf346ac72829e2d62ecf674db3453cd355d8b69205e87630c3636e2b3fc47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_hamilton, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.allow_data_in_errors: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.db_host_id: __hostname__
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.max_background_jobs: 4
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.max_background_compactions: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.max_subcompactions: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.max_open_files: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.bytes_per_sync: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.max_background_flushes: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Compression algorithms supported:
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: #011kZSTD supported: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: #011kXpressCompression supported: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: #011kBZip2Compression supported: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: #011kLZ4Compression supported: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: #011kZlibCompression supported: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: #011kSnappyCompression supported: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fb9efca5c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fb9efc0dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.compression: LZ4
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.num_levels: 7
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.bloom_locality: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.ttl: 2592000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.enable_blob_files: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.min_blob_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:           Options.merge_operator: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fb9efca5c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fb9efc0dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.compression: LZ4
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.num_levels: 7
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.bloom_locality: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.ttl: 2592000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.enable_blob_files: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.min_blob_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:           Options.merge_operator: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fb9efca5c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fb9efc0dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.compression: LZ4
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.num_levels: 7
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.bloom_locality: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.ttl: 2592000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.enable_blob_files: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.min_blob_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:           Options.merge_operator: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fb9efca5c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fb9efc0dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.compression: LZ4
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.num_levels: 7
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.bloom_locality: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.ttl: 2592000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.enable_blob_files: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.min_blob_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:           Options.merge_operator: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fb9efca5c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fb9efc0dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.compression: LZ4
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.num_levels: 7
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.bloom_locality: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.ttl: 2592000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.enable_blob_files: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.min_blob_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:           Options.merge_operator: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fb9efca5c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fb9efc0dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.compression: LZ4
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.num_levels: 7
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.bloom_locality: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.ttl: 2592000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.enable_blob_files: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.min_blob_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:           Options.merge_operator: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fb9efca5c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fb9efc0dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.compression: LZ4
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.num_levels: 7
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.bloom_locality: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.ttl: 2592000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.enable_blob_files: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.min_blob_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:           Options.merge_operator: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fb9efca5a0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fb9efc0430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.compression: LZ4
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.num_levels: 7
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.bloom_locality: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.ttl: 2592000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.enable_blob_files: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.min_blob_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:           Options.merge_operator: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fb9efca5a0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fb9efc0430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.compression: LZ4
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.num_levels: 7
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.bloom_locality: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.ttl: 2592000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.enable_blob_files: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.min_blob_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:           Options.merge_operator: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fb9efca5a0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fb9efc0430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.compression: LZ4
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.num_levels: 7
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.bloom_locality: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.ttl: 2592000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.enable_blob_files: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.min_blob_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: be080271-2f9b-4a57-9996-f2667e192b12
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769158876552170, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769158876552437, "job": 1, "event": "recovery_finished"}
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: freelist init
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: freelist _read_cfg
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bluefs umount
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bdev(0x55fb9ff72400 /var/lib/ceph/osd/ceph-0/block) close
Jan 23 04:01:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v47: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bdev(0x55fb9ff72400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bdev(0x55fb9ff72400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bdev(0x55fb9ff72400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bdev(0x55fb9ff72400 /var/lib/ceph/osd/ceph-0/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 7.0 GiB
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bluefs mount
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bluefs mount shared_bdev_used = 4718592
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: RocksDB version: 7.9.2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Git sha 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: DB SUMMARY
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: DB Session ID:  JI1KCQEQ0AL0R6FZNEW4
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: CURRENT file:  CURRENT
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: IDENTITY file:  IDENTITY
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                         Options.error_if_exists: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.create_if_missing: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                         Options.paranoid_checks: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                                     Options.env: 0x55fb9f1183f0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                                Options.info_log: 0x55fb9efa7cc0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.max_file_opening_threads: 16
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                              Options.statistics: (nil)
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.use_fsync: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.max_log_file_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                         Options.allow_fallocate: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.use_direct_reads: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.create_missing_column_families: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                              Options.db_log_dir: 
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                                 Options.wal_dir: db.wal
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.advise_random_on_open: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.write_buffer_manager: 0x55fb9feec8c0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                            Options.rate_limiter: (nil)
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.unordered_write: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.row_cache: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                              Options.wal_filter: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.allow_ingest_behind: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.two_write_queues: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.manual_wal_flush: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.wal_compression: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.atomic_flush: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.log_readahead_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.allow_data_in_errors: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.db_host_id: __hostname__
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.max_background_jobs: 4
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.max_background_compactions: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.max_subcompactions: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.max_open_files: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.bytes_per_sync: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.max_background_flushes: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Compression algorithms supported:
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: 	kZSTD supported: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: 	kXpressCompression supported: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: 	kBZip2Compression supported: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: 	kLZ4Compression supported: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: 	kZlibCompression supported: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: 	kLZ4HCCompression supported: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: 	kSnappyCompression supported: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fb9efcb0a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fb9efc0430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.compression: LZ4
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.num_levels: 7
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.bloom_locality: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.ttl: 2592000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.enable_blob_files: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.min_blob_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:           Options.merge_operator: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fb9efcb0a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fb9efc0430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.compression: LZ4
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.num_levels: 7
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.bloom_locality: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.ttl: 2592000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.enable_blob_files: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.min_blob_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:           Options.merge_operator: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fb9efcb0a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fb9efc0430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.compression: LZ4
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.num_levels: 7
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 04:01:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 04:01:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.bloom_locality: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.ttl: 2592000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.enable_blob_files: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.min_blob_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:           Options.merge_operator: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fb9efcb0a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fb9efc0430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.compression: LZ4
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.num_levels: 7
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.bloom_locality: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.ttl: 2592000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.enable_blob_files: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.min_blob_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:           Options.merge_operator: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fb9efcb0a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fb9efc0430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.compression: LZ4
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.num_levels: 7
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.bloom_locality: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.ttl: 2592000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.enable_blob_files: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.min_blob_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:           Options.merge_operator: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fb9efcb0a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fb9efc0430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.compression: LZ4
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.num_levels: 7
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.bloom_locality: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.ttl: 2592000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.enable_blob_files: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.min_blob_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:           Options.merge_operator: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fb9efcb0a0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fb9efc0430
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.compression: LZ4
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.num_levels: 7
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.bloom_locality: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.ttl: 2592000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.enable_blob_files: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.min_blob_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:           Options.merge_operator: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fb9efa7340)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fb9efc0dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.compression: LZ4
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.num_levels: 7
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.bloom_locality: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.ttl: 2592000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.enable_blob_files: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.min_blob_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:           Options.merge_operator: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fb9efa7340)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fb9efc0dd0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.compression: LZ4
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.num_levels: 7
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.bloom_locality: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.ttl: 2592000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.enable_blob_files: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.min_blob_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:           Options.merge_operator: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.compaction_filter_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.sst_partitioner_factory: None
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fb9efa7340)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55fb9efc0dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.write_buffer_size: 16777216
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.max_write_buffer_number: 64
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.compression: LZ4
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.num_levels: 7
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.level: 32767
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.compression_opts.strategy: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                  Options.compression_opts.enabled: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.arena_block_size: 1048576
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.disable_auto_compactions: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.inplace_update_support: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.bloom_locality: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                    Options.max_successive_merges: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.paranoid_file_checks: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.force_consistency_checks: 1
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.report_bg_io_stats: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                               Options.ttl: 2592000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                       Options.enable_blob_files: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                           Options.min_blob_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                          Options.blob_file_size: 268435456
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb:                Options.blob_file_starting_level: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: be080271-2f9b-4a57-9996-f2667e192b12
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769158876826363, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769158876829622, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158876, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "be080271-2f9b-4a57-9996-f2667e192b12", "db_session_id": "JI1KCQEQ0AL0R6FZNEW4", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769158876832306, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 467, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158876, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "be080271-2f9b-4a57-9996-f2667e192b12", "db_session_id": "JI1KCQEQ0AL0R6FZNEW4", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769158876834805, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158876, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "be080271-2f9b-4a57-9996-f2667e192b12", "db_session_id": "JI1KCQEQ0AL0R6FZNEW4", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769158876836170, "job": 1, "event": "recovery_finished"}
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55fb9f090700
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: DB pointer 0x55fb9fed5a00
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fb9efc0430#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fb9efc0430#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 
GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fb9efc0430#2 capacity: 460.80 MB usage: 0
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: _get_class not permitted to load lua
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: _get_class not permitted to load sdk
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: _get_class not permitted to load test_remote_reads
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: osd.0 0 load_pgs
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: osd.0 0 load_pgs opened 0 pgs
Jan 23 04:01:16 np0005593232 ceph-osd[85010]: osd.0 0 log_to_monitors true
Jan 23 04:01:16 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0[85006]: 2026-01-23T09:01:16.888+0000 7f63be11a740 -1 osd.0 0 log_to_monitors true
Jan 23 04:01:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Jan 23 04:01:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3293965730,v1:192.168.122.100:6803/3293965730]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 23 04:01:17 np0005593232 sharp_hamilton[85234]: {
Jan 23 04:01:17 np0005593232 sharp_hamilton[85234]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:01:17 np0005593232 sharp_hamilton[85234]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:01:17 np0005593232 sharp_hamilton[85234]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:01:17 np0005593232 sharp_hamilton[85234]:        "osd_id": 0,
Jan 23 04:01:17 np0005593232 sharp_hamilton[85234]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:01:17 np0005593232 sharp_hamilton[85234]:        "type": "bluestore"
Jan 23 04:01:17 np0005593232 sharp_hamilton[85234]:    }
Jan 23 04:01:17 np0005593232 sharp_hamilton[85234]: }
Jan 23 04:01:17 np0005593232 systemd[1]: libpod-f04cf346ac72829e2d62ecf674db3453cd355d8b69205e87630c3636e2b3fc47.scope: Deactivated successfully.
Jan 23 04:01:17 np0005593232 podman[85217]: 2026-01-23 09:01:17.440962083 +0000 UTC m=+1.048011891 container died f04cf346ac72829e2d62ecf674db3453cd355d8b69205e87630c3636e2b3fc47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_hamilton, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 04:01:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay-381ab88094f5121062c3b5b4cbee27d65730f002a2bda309458c2b61708e6289-merged.mount: Deactivated successfully.
Jan 23 04:01:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 23 04:01:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 04:01:17 np0005593232 ceph-mon[74423]: from='osd.0 [v2:192.168.122.100:6802/3293965730,v1:192.168.122.100:6803/3293965730]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 23 04:01:17 np0005593232 podman[85217]: 2026-01-23 09:01:17.87372783 +0000 UTC m=+1.480777618 container remove f04cf346ac72829e2d62ecf674db3453cd355d8b69205e87630c3636e2b3fc47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_hamilton, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 04:01:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3293965730,v1:192.168.122.100:6803/3293965730]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 23 04:01:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Jan 23 04:01:17 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 23 04:01:17 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 23 04:01:17 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Jan 23 04:01:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]} v 0) v1
Jan 23 04:01:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3293965730,v1:192.168.122.100:6803/3293965730]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 23 04:01:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0068 at location {host=compute-0,root=default}
Jan 23 04:01:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 23 04:01:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 04:01:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 23 04:01:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 04:01:17 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 04:01:17 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 04:01:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:01:17 np0005593232 systemd[1]: libpod-conmon-f04cf346ac72829e2d62ecf674db3453cd355d8b69205e87630c3636e2b3fc47.scope: Deactivated successfully.
Jan 23 04:01:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:01:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Jan 23 04:01:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/1103297493,v1:192.168.122.101:6801/1103297493]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 23 04:01:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:01:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v49: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:01:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 23 04:01:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 04:01:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3293965730,v1:192.168.122.100:6803/3293965730]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Jan 23 04:01:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/1103297493,v1:192.168.122.101:6801/1103297493]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 23 04:01:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Jan 23 04:01:18 np0005593232 ceph-osd[85010]: osd.0 0 done with init, starting boot process
Jan 23 04:01:18 np0005593232 ceph-osd[85010]: osd.0 0 start_boot
Jan 23 04:01:18 np0005593232 ceph-osd[85010]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 23 04:01:18 np0005593232 ceph-osd[85010]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 23 04:01:18 np0005593232 ceph-osd[85010]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 23 04:01:18 np0005593232 ceph-osd[85010]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 23 04:01:18 np0005593232 ceph-osd[85010]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 23 04:01:18 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Jan 23 04:01:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]} v 0) v1
Jan 23 04:01:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/1103297493,v1:192.168.122.101:6801/1103297493]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 23 04:01:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.1' initial_weight 0.0068 at location {host=compute-1,root=default}
Jan 23 04:01:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 23 04:01:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 04:01:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 23 04:01:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 04:01:18 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 04:01:18 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 04:01:18 np0005593232 ceph-mon[74423]: from='osd.0 [v2:192.168.122.100:6802/3293965730,v1:192.168.122.100:6803/3293965730]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 23 04:01:18 np0005593232 ceph-mon[74423]: from='osd.0 [v2:192.168.122.100:6802/3293965730,v1:192.168.122.100:6803/3293965730]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 23 04:01:18 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:18 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:18 np0005593232 ceph-mon[74423]: from='osd.1 [v2:192.168.122.101:6800/1103297493,v1:192.168.122.101:6801/1103297493]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 23 04:01:18 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3293965730; not ready for session (expect reconnect)
Jan 23 04:01:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 23 04:01:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 04:01:18 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 04:01:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:01:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:01:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 23 04:01:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 04:01:19 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3293965730; not ready for session (expect reconnect)
Jan 23 04:01:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 23 04:01:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 04:01:19 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 04:01:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/1103297493,v1:192.168.122.101:6801/1103297493]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Jan 23 04:01:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e8 e8: 2 total, 0 up, 2 in
Jan 23 04:01:19 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 0 up, 2 in
Jan 23 04:01:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 23 04:01:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 04:01:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 23 04:01:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 04:01:19 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 04:01:19 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 04:01:19 np0005593232 ceph-mon[74423]: from='osd.0 [v2:192.168.122.100:6802/3293965730,v1:192.168.122.100:6803/3293965730]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0068, "args": ["host=compute-0", "root=default"]}]': finished
Jan 23 04:01:19 np0005593232 ceph-mon[74423]: from='osd.1 [v2:192.168.122.101:6800/1103297493,v1:192.168.122.101:6801/1103297493]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 23 04:01:19 np0005593232 ceph-mon[74423]: from='osd.1 [v2:192.168.122.101:6800/1103297493,v1:192.168.122.101:6801/1103297493]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 23 04:01:19 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:19 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:19 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1103297493; not ready for session (expect reconnect)
Jan 23 04:01:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 23 04:01:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 04:01:20 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 04:01:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:01:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v52: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:01:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:20 np0005593232 podman[85893]: 2026-01-23 09:01:20.841920046 +0000 UTC m=+0.062943517 container exec 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 04:01:20 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3293965730; not ready for session (expect reconnect)
Jan 23 04:01:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 23 04:01:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 04:01:20 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 04:01:20 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1103297493; not ready for session (expect reconnect)
Jan 23 04:01:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 23 04:01:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 04:01:20 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 04:01:21 np0005593232 podman[85893]: 2026-01-23 09:01:21.027267794 +0000 UTC m=+0.248291205 container exec_died 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 04:01:21 np0005593232 ceph-mon[74423]: from='osd.1 [v2:192.168.122.101:6800/1103297493,v1:192.168.122.101:6801/1103297493]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0068, "args": ["host=compute-1", "root=default"]}]': finished
Jan 23 04:01:21 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:01:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:01:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:01:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:01:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:01:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:21 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3293965730; not ready for session (expect reconnect)
Jan 23 04:01:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 23 04:01:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 04:01:21 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 04:01:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 23 04:01:21 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1103297493; not ready for session (expect reconnect)
Jan 23 04:01:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 04:01:21 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 04:01:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v53: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:01:22 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3293965730; not ready for session (expect reconnect)
Jan 23 04:01:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 23 04:01:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 04:01:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:01:22 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 04:01:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:22 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1103297493; not ready for session (expect reconnect)
Jan 23 04:01:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 23 04:01:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 04:01:22 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 04:01:23 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:01:23 np0005593232 python3[86204]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:01:23 np0005593232 podman[86234]: 2026-01-23 09:01:23.472601023 +0000 UTC m=+0.029370319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:01:23 np0005593232 podman[86234]: 2026-01-23 09:01:23.60629698 +0000 UTC m=+0.163066246 container create 56279cc2ac4d55c61f2a91d8b4ae038857d857160fb87032ce8d52617023370d (image=quay.io/ceph/ceph:v18, name=nostalgic_lamarr, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:01:23 np0005593232 systemd[1]: Started libpod-conmon-56279cc2ac4d55c61f2a91d8b4ae038857d857160fb87032ce8d52617023370d.scope.
Jan 23 04:01:23 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3293965730; not ready for session (expect reconnect)
Jan 23 04:01:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 23 04:01:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 04:01:23 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 04:01:23 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:01:23 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e3d8af3e51dfde1757136bd56309b48af9ed36f0ce139bc575b3814c402d16/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:23 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e3d8af3e51dfde1757136bd56309b48af9ed36f0ce139bc575b3814c402d16/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:23 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e3d8af3e51dfde1757136bd56309b48af9ed36f0ce139bc575b3814c402d16/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:23 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1103297493; not ready for session (expect reconnect)
Jan 23 04:01:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 23 04:01:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 04:01:23 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 04:01:23 np0005593232 podman[86234]: 2026-01-23 09:01:23.986982655 +0000 UTC m=+0.543751951 container init 56279cc2ac4d55c61f2a91d8b4ae038857d857160fb87032ce8d52617023370d (image=quay.io/ceph/ceph:v18, name=nostalgic_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 23 04:01:23 np0005593232 podman[86234]: 2026-01-23 09:01:23.995588683 +0000 UTC m=+0.552357949 container start 56279cc2ac4d55c61f2a91d8b4ae038857d857160fb87032ce8d52617023370d (image=quay.io/ceph/ceph:v18, name=nostalgic_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:01:24 np0005593232 podman[86234]: 2026-01-23 09:01:24.021068888 +0000 UTC m=+0.577838164 container attach 56279cc2ac4d55c61f2a91d8b4ae038857d857160fb87032ce8d52617023370d (image=quay.io/ceph/ceph:v18, name=nostalgic_lamarr, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 04:01:24 np0005593232 podman[86292]: 2026-01-23 09:01:24.296954599 +0000 UTC m=+0.021593774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:01:24 np0005593232 podman[86292]: 2026-01-23 09:01:24.409811346 +0000 UTC m=+0.134450501 container create 2e58ba3f0b3eaf43b89d931af10f742df5f3f7262124b81d4bfee02bb74a8005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_antonelli, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 04:01:24 np0005593232 systemd[1]: Started libpod-conmon-2e58ba3f0b3eaf43b89d931af10f742df5f3f7262124b81d4bfee02bb74a8005.scope.
Jan 23 04:01:24 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:01:24 np0005593232 podman[86292]: 2026-01-23 09:01:24.603771902 +0000 UTC m=+0.328411077 container init 2e58ba3f0b3eaf43b89d931af10f742df5f3f7262124b81d4bfee02bb74a8005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_antonelli, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:01:24 np0005593232 podman[86292]: 2026-01-23 09:01:24.609207809 +0000 UTC m=+0.333846964 container start 2e58ba3f0b3eaf43b89d931af10f742df5f3f7262124b81d4bfee02bb74a8005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 04:01:24 np0005593232 nostalgic_antonelli[86308]: 167 167
Jan 23 04:01:24 np0005593232 systemd[1]: libpod-2e58ba3f0b3eaf43b89d931af10f742df5f3f7262124b81d4bfee02bb74a8005.scope: Deactivated successfully.
Jan 23 04:01:24 np0005593232 podman[86292]: 2026-01-23 09:01:24.61617145 +0000 UTC m=+0.340810605 container attach 2e58ba3f0b3eaf43b89d931af10f742df5f3f7262124b81d4bfee02bb74a8005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_antonelli, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 04:01:24 np0005593232 podman[86292]: 2026-01-23 09:01:24.616443048 +0000 UTC m=+0.341082203 container died 2e58ba3f0b3eaf43b89d931af10f742df5f3f7262124b81d4bfee02bb74a8005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 04:01:24 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a816e291cf0667af70f1e5510f3fa2f80a40cf4332ea5022400877aea586c482-merged.mount: Deactivated successfully.
Jan 23 04:01:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v54: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 23 04:01:24 np0005593232 podman[86292]: 2026-01-23 09:01:24.771012888 +0000 UTC m=+0.495652033 container remove 2e58ba3f0b3eaf43b89d931af10f742df5f3f7262124b81d4bfee02bb74a8005 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_antonelli, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:01:24 np0005593232 systemd[1]: libpod-conmon-2e58ba3f0b3eaf43b89d931af10f742df5f3f7262124b81d4bfee02bb74a8005.scope: Deactivated successfully.
Jan 23 04:01:24 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3293965730; not ready for session (expect reconnect)
Jan 23 04:01:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 23 04:01:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 04:01:24 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 04:01:24 np0005593232 podman[86353]: 2026-01-23 09:01:24.947647385 +0000 UTC m=+0.064796381 container create 7cb0210a080ce15738901dbe01c081a385f12178a53425c31f28e71098226e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 23 04:01:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 23 04:01:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2044457908' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 23 04:01:24 np0005593232 nostalgic_lamarr[86263]: 
Jan 23 04:01:24 np0005593232 nostalgic_lamarr[86263]: {"fsid":"e1533653-0a5a-584c-b34b-8689f0d32e77","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":157,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":8,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1769158862,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-23T09:00:38.931319+0000","services":{}},"progress_events":{}}
Jan 23 04:01:24 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/1103297493; not ready for session (expect reconnect)
Jan 23 04:01:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 23 04:01:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 04:01:24 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 23 04:01:25 np0005593232 podman[86353]: 2026-01-23 09:01:24.924902248 +0000 UTC m=+0.042051274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:01:25 np0005593232 podman[86234]: 2026-01-23 09:01:25.113557842 +0000 UTC m=+1.670327118 container died 56279cc2ac4d55c61f2a91d8b4ae038857d857160fb87032ce8d52617023370d (image=quay.io/ceph/ceph:v18, name=nostalgic_lamarr, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:01:25 np0005593232 systemd[1]: Started libpod-conmon-7cb0210a080ce15738901dbe01c081a385f12178a53425c31f28e71098226e2d.scope.
Jan 23 04:01:25 np0005593232 systemd[1]: libpod-56279cc2ac4d55c61f2a91d8b4ae038857d857160fb87032ce8d52617023370d.scope: Deactivated successfully.
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: OSD bench result of 4310.201773 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 23 04:01:25 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:01:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba91255b535f5e08c7c1e301652dcbcc3022f419802a718b520aa0ff52cb8efa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba91255b535f5e08c7c1e301652dcbcc3022f419802a718b520aa0ff52cb8efa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba91255b535f5e08c7c1e301652dcbcc3022f419802a718b520aa0ff52cb8efa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba91255b535f5e08c7c1e301652dcbcc3022f419802a718b520aa0ff52cb8efa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e9 e9: 2 total, 1 up, 2 in
Jan 23 04:01:25 np0005593232 systemd[1]: var-lib-containers-storage-overlay-12e3d8af3e51dfde1757136bd56309b48af9ed36f0ce139bc575b3814c402d16-merged.mount: Deactivated successfully.
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.101:6800/1103297493,v1:192.168.122.101:6801/1103297493] boot
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 1 up, 2 in
Jan 23 04:01:25 np0005593232 podman[86353]: 2026-01-23 09:01:25.168036734 +0000 UTC m=+0.285185770 container init 7cb0210a080ce15738901dbe01c081a385f12178a53425c31f28e71098226e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 04:01:25 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 23 04:01:25 np0005593232 podman[86353]: 2026-01-23 09:01:25.179358611 +0000 UTC m=+0.296507617 container start 7cb0210a080ce15738901dbe01c081a385f12178a53425c31f28e71098226e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_golick, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 04:01:25 np0005593232 ceph-mgr[74726]: [devicehealth INFO root] creating mgr pool
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 23 04:01:25 np0005593232 podman[86353]: 2026-01-23 09:01:25.19734055 +0000 UTC m=+0.314489556 container attach 7cb0210a080ce15738901dbe01c081a385f12178a53425c31f28e71098226e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_golick, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:01:25 np0005593232 podman[86234]: 2026-01-23 09:01:25.218908612 +0000 UTC m=+1.775677878 container remove 56279cc2ac4d55c61f2a91d8b4ae038857d857160fb87032ce8d52617023370d (image=quay.io/ceph/ceph:v18, name=nostalgic_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 04:01:25 np0005593232 systemd[1]: libpod-conmon-56279cc2ac4d55c61f2a91d8b4ae038857d857160fb87032ce8d52617023370d.scope: Deactivated successfully.
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:01:25 np0005593232 ceph-osd[85010]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 16.319 iops: 4177.623 elapsed_sec: 0.718
Jan 23 04:01:25 np0005593232 ceph-osd[85010]: log_channel(cluster) log [WRN] : OSD bench result of 4177.622709 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:25 np0005593232 ceph-osd[85010]: osd.0 0 waiting for initial osdmap
Jan 23 04:01:25 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0[85006]: 2026-01-23T09:01:25.336+0000 7f63ba09a640 -1 osd.0 0 waiting for initial osdmap
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 23 04:01:25 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Jan 23 04:01:25 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 23 04:01:25 np0005593232 ceph-osd[85010]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 23 04:01:25 np0005593232 ceph-osd[85010]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 23 04:01:25 np0005593232 ceph-osd[85010]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 23 04:01:25 np0005593232 ceph-osd[85010]: osd.0 9 check_osdmap_features require_osd_release unknown -> reef
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:25 np0005593232 ceph-osd[85010]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 23 04:01:25 np0005593232 ceph-osd[85010]: osd.0 9 set_numa_affinity not setting numa affinity
Jan 23 04:01:25 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-osd-0[85006]: 2026-01-23T09:01:25.369+0000 7f63b56c2640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 23 04:01:25 np0005593232 ceph-osd[85010]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Jan 23 04:01:25 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3293965730; not ready for session (expect reconnect)
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 23 04:01:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 04:01:25 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: osd.1 [v2:192.168.122.101:6800/1103297493,v1:192.168.122.101:6801/1103297493] boot
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: OSD bench result of 4177.622709 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: Adjusting osd_memory_target on compute-1 to  5247M
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e10 e10: 2 total, 2 up, 2 in
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e10 crush map has features 3314933000852226048, adjusting msgr requires
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e10 crush map has features 288514051259236352, adjusting msgr requires
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3293965730,v1:192.168.122.100:6803/3293965730] boot
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 2 up, 2 in
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 23 04:01:26 np0005593232 ceph-osd[85010]: osd.0 10 state: booting -> active
Jan 23 04:01:26 np0005593232 ceph-osd[85010]: osd.0 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 23 04:01:26 np0005593232 ceph-osd[85010]: osd.0 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 23 04:01:26 np0005593232 ceph-osd[85010]: osd.0 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]: [
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:    {
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:        "available": false,
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:        "ceph_device": false,
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:        "lsm_data": {},
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:        "lvs": [],
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:        "path": "/dev/sr0",
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:        "rejected_reasons": [
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "Insufficient space (<5GB)",
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "Has a FileSystem"
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:        ],
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:        "sys_api": {
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "actuators": null,
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "device_nodes": "sr0",
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "devname": "sr0",
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "human_readable_size": "482.00 KB",
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "id_bus": "ata",
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "model": "QEMU DVD-ROM",
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "nr_requests": "2",
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "parent": "/dev/sr0",
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "partitions": {},
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "path": "/dev/sr0",
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "removable": "1",
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "rev": "2.5+",
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "ro": "0",
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "rotational": "1",
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "sas_address": "",
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "sas_device_handle": "",
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "scheduler_mode": "mq-deadline",
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "sectors": 0,
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "sectorsize": "2048",
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "size": 493568.0,
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "support_discard": "2048",
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "type": "disk",
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:            "vendor": "QEMU"
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:        }
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]:    }
Jan 23 04:01:26 np0005593232 inspiring_golick[86373]: ]
Jan 23 04:01:26 np0005593232 systemd[1]: libpod-7cb0210a080ce15738901dbe01c081a385f12178a53425c31f28e71098226e2d.scope: Deactivated successfully.
Jan 23 04:01:26 np0005593232 systemd[1]: libpod-7cb0210a080ce15738901dbe01c081a385f12178a53425c31f28e71098226e2d.scope: Consumed 1.372s CPU time.
Jan 23 04:01:26 np0005593232 podman[86353]: 2026-01-23 09:01:26.544658185 +0000 UTC m=+1.661807191 container died 7cb0210a080ce15738901dbe01c081a385f12178a53425c31f28e71098226e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_golick, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 04:01:26 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ba91255b535f5e08c7c1e301652dcbcc3022f419802a718b520aa0ff52cb8efa-merged.mount: Deactivated successfully.
Jan 23 04:01:26 np0005593232 podman[86353]: 2026-01-23 09:01:26.598442277 +0000 UTC m=+1.715591303 container remove 7cb0210a080ce15738901dbe01c081a385f12178a53425c31f28e71098226e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 04:01:26 np0005593232 systemd[1]: libpod-conmon-7cb0210a080ce15738901dbe01c081a385f12178a53425c31f28e71098226e2d.scope: Deactivated successfully.
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 23 04:01:26 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Jan 23 04:01:26 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Jan 23 04:01:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 23 04:01:26 np0005593232 ceph-mgr[74726]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 23 04:01:26 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 23 04:01:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 6.6 GiB / 7.0 GiB avail
Jan 23 04:01:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 23 04:01:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 23 04:01:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Jan 23 04:01:27 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Jan 23 04:01:27 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 23 04:01:27 np0005593232 ceph-mon[74423]: osd.0 [v2:192.168.122.100:6802/3293965730,v1:192.168.122.100:6803/3293965730] boot
Jan 23 04:01:27 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 23 04:01:27 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:27 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:27 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:27 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:27 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 23 04:01:27 np0005593232 ceph-mon[74423]: Adjusting osd_memory_target on compute-0 to 127.9M
Jan 23 04:01:27 np0005593232 ceph-mon[74423]: Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 23 04:01:27 np0005593232 ceph-mgr[74726]: [devicehealth INFO root] creating main.db for devicehealth
Jan 23 04:01:27 np0005593232 ceph-mgr[74726]: [devicehealth INFO root] Check health
Jan 23 04:01:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 23 04:01:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 23 04:01:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 23 04:01:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 23 04:01:27 np0005593232 ceph-mon[74423]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 04:01:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 23 04:01:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 23 04:01:28 np0005593232 ceph-mon[74423]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 23 04:01:28 np0005593232 ceph-mon[74423]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 23 04:01:28 np0005593232 ceph-mon[74423]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 04:01:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Jan 23 04:01:28 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Jan 23 04:01:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:01:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 23 04:01:29 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 23 04:01:29 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.yntofk(active, since 112s)
Jan 23 04:01:30 np0005593232 ceph-mon[74423]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 23 04:01:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 23 04:01:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 23 04:01:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:01:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 23 04:01:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 23 04:01:36 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:01:36
Jan 23 04:01:36 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:01:36 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:01:36 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['.mgr']
Jan 23 04:01:36 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:01:37 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:01:37 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 23 04:01:37 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156896 quantized to 1 (current 1)
Jan 23 04:01:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:01:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:01:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:01:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:01:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:01:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:01:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:01:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:01:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:01:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 23 04:01:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 23 04:01:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 23 04:01:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:01:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 23 04:01:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 23 04:01:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:01:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:01:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:01:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:01:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 23 04:01:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 04:01:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:01:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:01:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:01:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:01:47 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 23 04:01:47 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 23 04:01:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:01:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 23 04:01:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 04:01:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:01:49 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.conf
Jan 23 04:01:49 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.conf
Jan 23 04:01:49 np0005593232 ceph-mon[74423]: Updating compute-2:/etc/ceph/ceph.conf
Jan 23 04:01:49 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 23 04:01:49 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 23 04:01:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 23 04:01:50 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.client.admin.keyring
Jan 23 04:01:50 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.client.admin.keyring
Jan 23 04:01:51 np0005593232 ceph-mon[74423]: Updating compute-2:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.conf
Jan 23 04:01:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:01:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:01:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:01:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 23 04:01:51 np0005593232 ceph-mgr[74726]: [progress INFO root] update: starting ev 380f328f-7d43-4540-b217-12d19f7a2c80 (Updating mon deployment (+2 -> 3))
Jan 23 04:01:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 23 04:01:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 23 04:01:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 23 04:01:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 23 04:01:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:01:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:01:51 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Jan 23 04:01:51 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Jan 23 04:01:52 np0005593232 ceph-mon[74423]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 23 04:01:52 np0005593232 ceph-mon[74423]: Updating compute-2:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.client.admin.keyring
Jan 23 04:01:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 23 04:01:52 np0005593232 ceph-mon[74423]: Deploying daemon mon.compute-2 on compute-2
Jan 23 04:01:53 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 23 04:01:53 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 23 04:01:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e12 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:01:53 np0005593232 ceph-mon[74423]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 23 04:01:53 np0005593232 ceph-mon[74423]: Cluster is now healthy
Jan 23 04:01:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:01:54 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Jan 23 04:01:54 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Jan 23 04:01:54 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1101659439; not ready for session (expect reconnect)
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 04:01:54 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 04:01:54 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Jan 23 04:01:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 04:01:55 np0005593232 python3[87603]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:01:55 np0005593232 podman[87605]: 2026-01-23 09:01:55.605354436 +0000 UTC m=+0.085652193 container create 16acb38e61cff64b01732de72e47c531c9dfb2f5471eec189cd83573c5f409e6 (image=quay.io/ceph/ceph:v18, name=laughing_robinson, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:01:55 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1101659439; not ready for session (expect reconnect)
Jan 23 04:01:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 23 04:01:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 04:01:55 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 23 04:01:55 np0005593232 systemd[76038]: Starting Mark boot as successful...
Jan 23 04:01:55 np0005593232 systemd[76038]: Finished Mark boot as successful.
Jan 23 04:01:55 np0005593232 systemd[1]: Started libpod-conmon-16acb38e61cff64b01732de72e47c531c9dfb2f5471eec189cd83573c5f409e6.scope.
Jan 23 04:01:55 np0005593232 podman[87605]: 2026-01-23 09:01:55.545466548 +0000 UTC m=+0.025764335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:01:55 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:01:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c7032323125194c275b292c34004681be6218f782e73739bc0fe484072cb87/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c7032323125194c275b292c34004681be6218f782e73739bc0fe484072cb87/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c7032323125194c275b292c34004681be6218f782e73739bc0fe484072cb87/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:01:55 np0005593232 podman[87605]: 2026-01-23 09:01:55.682335427 +0000 UTC m=+0.162633214 container init 16acb38e61cff64b01732de72e47c531c9dfb2f5471eec189cd83573c5f409e6 (image=quay.io/ceph/ceph:v18, name=laughing_robinson, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 04:01:55 np0005593232 podman[87605]: 2026-01-23 09:01:55.688819264 +0000 UTC m=+0.169117021 container start 16acb38e61cff64b01732de72e47c531c9dfb2f5471eec189cd83573c5f409e6 (image=quay.io/ceph/ceph:v18, name=laughing_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 04:01:55 np0005593232 podman[87605]: 2026-01-23 09:01:55.692445009 +0000 UTC m=+0.172742766 container attach 16acb38e61cff64b01732de72e47c531c9dfb2f5471eec189cd83573c5f409e6 (image=quay.io/ceph/ceph:v18, name=laughing_robinson, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:01:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 23 04:01:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 23 04:01:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 23 04:01:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 23 04:01:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 23 04:01:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 04:01:56 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1101659439; not ready for session (expect reconnect)
Jan 23 04:01:56 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 23 04:01:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:01:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 23 04:01:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 23 04:01:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 23 04:01:56 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3496794167; not ready for session (expect reconnect)
Jan 23 04:01:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 23 04:01:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 04:01:56 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 23 04:01:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 23 04:01:57 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1101659439; not ready for session (expect reconnect)
Jan 23 04:01:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 23 04:01:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 04:01:57 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 23 04:01:57 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3496794167; not ready for session (expect reconnect)
Jan 23 04:01:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 23 04:01:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 04:01:57 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 23 04:01:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 23 04:01:58 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1101659439; not ready for session (expect reconnect)
Jan 23 04:01:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 23 04:01:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 04:01:58 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 23 04:01:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 23 04:01:58 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3496794167; not ready for session (expect reconnect)
Jan 23 04:01:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 23 04:01:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 04:01:58 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 23 04:01:59 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1101659439; not ready for session (expect reconnect)
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 04:01:59 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : monmap e2: 2 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : fsmap 
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.yntofk(active, since 2m)
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:59 np0005593232 ceph-mgr[74726]: [progress INFO root] complete: finished ev 380f328f-7d43-4540-b217-12d19f7a2c80 (Updating mon deployment (+2 -> 3))
Jan 23 04:01:59 np0005593232 ceph-mgr[74726]: [progress INFO root] Completed event 380f328f-7d43-4540-b217-12d19f7a2c80 (Updating mon deployment (+2 -> 3)) in 8 seconds
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:59 np0005593232 ceph-mgr[74726]: [progress INFO root] update: starting ev 17817cf8-4036-4063-b4f7-e1b9574e0f5e (Updating mgr deployment (+2 -> 3))
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.nrjyzu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.nrjyzu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.nrjyzu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 23 04:01:59 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3496794167; not ready for session (expect reconnect)
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 04:01:59 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:01:59 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.nrjyzu on compute-2
Jan 23 04:01:59 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.nrjyzu on compute-2
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: Deploying daemon mon.compute-1 on compute-1
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: mon.compute-0 calling monitor election
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: mon.compute-2 calling monitor election
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: overall HEALTH_OK
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.nrjyzu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 23 04:01:59 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.nrjyzu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 23 04:01:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 23 04:02:00 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/1101659439; not ready for session (expect reconnect)
Jan 23 04:02:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 23 04:02:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 04:02:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 23 04:02:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/439638452' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 23 04:02:00 np0005593232 laughing_robinson[87622]: 
Jan 23 04:02:00 np0005593232 laughing_robinson[87622]: {"fsid":"e1533653-0a5a-584c-b34b-8689f0d32e77","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":10,"quorum":[0,1],"quorum_names":["compute-0","compute-2"],"quorum_age":0,"monmap":{"epoch":2,"min_mon_release_name":"reef","num_mons":2},"osdmap":{"epoch":12,"num_osds":2,"num_up_osds":2,"osd_up_since":1769158886,"num_in_osds":2,"osd_in_since":1769158862,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":894631936,"bytes_avail":14129364992,"bytes_total":15023996928},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-23T09:00:38.931319+0000","services":{}},"progress_events":{"380f328f-7d43-4540-b217-12d19f7a2c80":{"message":"Updating mon deployment (+2 -> 3) (2s)\n      [==============..............] (remaining: 2s)","progress":0.5,"add_to_ceph_s":true}}}
Jan 23 04:02:00 np0005593232 systemd[1]: libpod-16acb38e61cff64b01732de72e47c531c9dfb2f5471eec189cd83573c5f409e6.scope: Deactivated successfully.
Jan 23 04:02:00 np0005593232 podman[87605]: 2026-01-23 09:02:00.738355785 +0000 UTC m=+5.218653532 container died 16acb38e61cff64b01732de72e47c531c9dfb2f5471eec189cd83573c5f409e6 (image=quay.io/ceph/ceph:v18, name=laughing_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 04:02:00 np0005593232 systemd[1]: var-lib-containers-storage-overlay-32c7032323125194c275b292c34004681be6218f782e73739bc0fe484072cb87-merged.mount: Deactivated successfully.
Jan 23 04:02:00 np0005593232 podman[87605]: 2026-01-23 09:02:00.792405665 +0000 UTC m=+5.272703422 container remove 16acb38e61cff64b01732de72e47c531c9dfb2f5471eec189cd83573c5f409e6 (image=quay.io/ceph/ceph:v18, name=laughing_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 04:02:00 np0005593232 systemd[1]: libpod-conmon-16acb38e61cff64b01732de72e47c531c9dfb2f5471eec189cd83573c5f409e6.scope: Deactivated successfully.
Jan 23 04:02:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 23 04:02:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Jan 23 04:02:00 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3496794167; not ready for session (expect reconnect)
Jan 23 04:02:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 23 04:02:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 04:02:00 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 23 04:02:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Jan 23 04:02:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 23 04:02:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 23 04:02:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 04:02:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0) v1
Jan 23 04:02:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 23 04:02:00 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 23 04:02:00 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 23 04:02:00 np0005593232 ceph-mon[74423]: paxos.0).electionLogic(10) init, last seen epoch 10
Jan 23 04:02:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 04:02:01 np0005593232 python3[87683]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:02:01 np0005593232 podman[87684]: 2026-01-23 09:02:01.312662986 +0000 UTC m=+0.022943863 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:02:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 23 04:02:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:02:01 np0005593232 podman[87684]: 2026-01-23 09:02:01.517152026 +0000 UTC m=+0.227432883 container create 35774c7c7a762d49829ec1597c686ff74e375ef69938bfd7a2c87831753274b3 (image=quay.io/ceph/ceph:v18, name=musing_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 04:02:01 np0005593232 systemd[1]: Started libpod-conmon-35774c7c7a762d49829ec1597c686ff74e375ef69938bfd7a2c87831753274b3.scope.
Jan 23 04:02:01 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb79b735741fedbb3f8ddc27890bdb52cabb201f0d650090a6adb7e7e7436cc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bb79b735741fedbb3f8ddc27890bdb52cabb201f0d650090a6adb7e7e7436cc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:01 np0005593232 podman[87684]: 2026-01-23 09:02:01.652530042 +0000 UTC m=+0.362810919 container init 35774c7c7a762d49829ec1597c686ff74e375ef69938bfd7a2c87831753274b3 (image=quay.io/ceph/ceph:v18, name=musing_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 04:02:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 23 04:02:01 np0005593232 podman[87684]: 2026-01-23 09:02:01.658380661 +0000 UTC m=+0.368661508 container start 35774c7c7a762d49829ec1597c686ff74e375ef69938bfd7a2c87831753274b3 (image=quay.io/ceph/ceph:v18, name=musing_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 23 04:02:01 np0005593232 podman[87684]: 2026-01-23 09:02:01.661759859 +0000 UTC m=+0.372040776 container attach 35774c7c7a762d49829ec1597c686ff74e375ef69938bfd7a2c87831753274b3 (image=quay.io/ceph/ceph:v18, name=musing_merkle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:02:01 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3496794167; not ready for session (expect reconnect)
Jan 23 04:02:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 23 04:02:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 04:02:01 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 23 04:02:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 23 04:02:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 23 04:02:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 23 04:02:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 23 04:02:02 np0005593232 ceph-mgr[74726]: [progress INFO root] Writing back 3 completed events
Jan 23 04:02:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 23 04:02:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 23 04:02:02 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3496794167; not ready for session (expect reconnect)
Jan 23 04:02:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 23 04:02:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 04:02:02 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 23 04:02:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 23 04:02:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 23 04:02:03 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3496794167; not ready for session (expect reconnect)
Jan 23 04:02:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 23 04:02:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 04:02:03 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 23 04:02:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 23 04:02:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 23 04:02:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 23 04:02:04 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3496794167; not ready for session (expect reconnect)
Jan 23 04:02:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 23 04:02:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 04:02:04 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 23 04:02:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 23 04:02:05 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3496794167; not ready for session (expect reconnect)
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 04:02:05 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : monmap e3: 3 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],compute-1=[v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0],compute-2=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]} removed_ranks: {} disallowed_leaders: {}
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : fsmap 
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.yntofk(active, since 2m)
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.wsgywz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.wsgywz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: mon.compute-0 calling monitor election
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: mon.compute-2 calling monitor election
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: mon.compute-1 calling monitor election
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: overall HEALTH_OK
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.wsgywz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:02:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:02:05 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.wsgywz on compute-1
Jan 23 04:02:05 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.wsgywz on compute-1
Jan 23 04:02:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 23 04:02:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/936567403' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 23 04:02:06 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3496794167; not ready for session (expect reconnect)
Jan 23 04:02:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0) v1
Jan 23 04:02:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 23 04:02:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:02:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:02:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:02:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:02:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:02:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:02:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 23 04:02:07 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:07 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.wsgywz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 23 04:02:07 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.wsgywz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 23 04:02:07 np0005593232 ceph-mon[74423]: Deploying daemon mgr.compute-1.wsgywz on compute-1
Jan 23 04:02:07 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/936567403' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 23 04:02:07 np0005593232 ceph-mgr[74726]: mgr.server handle_report got status from non-daemon mon.compute-1
Jan 23 04:02:07 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T09:02:07.829+0000 7f341f708640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Jan 23 04:02:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 23 04:02:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/936567403' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 04:02:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Jan 23 04:02:08 np0005593232 musing_merkle[87698]: pool 'vms' created
Jan 23 04:02:08 np0005593232 systemd[1]: libpod-35774c7c7a762d49829ec1597c686ff74e375ef69938bfd7a2c87831753274b3.scope: Deactivated successfully.
Jan 23 04:02:08 np0005593232 podman[87684]: 2026-01-23 09:02:08.755826494 +0000 UTC m=+7.466107361 container died 35774c7c7a762d49829ec1597c686ff74e375ef69938bfd7a2c87831753274b3 (image=quay.io/ceph/ceph:v18, name=musing_merkle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:02:08 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Jan 23 04:02:09 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/936567403' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 04:02:09 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7bb79b735741fedbb3f8ddc27890bdb52cabb201f0d650090a6adb7e7e7436cc-merged.mount: Deactivated successfully.
Jan 23 04:02:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:02:09 np0005593232 podman[87684]: 2026-01-23 09:02:09.061049361 +0000 UTC m=+7.771330218 container remove 35774c7c7a762d49829ec1597c686ff74e375ef69938bfd7a2c87831753274b3 (image=quay.io/ceph/ceph:v18, name=musing_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 04:02:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:02:09 np0005593232 systemd[1]: libpod-conmon-35774c7c7a762d49829ec1597c686ff74e375ef69938bfd7a2c87831753274b3.scope: Deactivated successfully.
Jan 23 04:02:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 23 04:02:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:09 np0005593232 ceph-mgr[74726]: [progress INFO root] complete: finished ev 17817cf8-4036-4063-b4f7-e1b9574e0f5e (Updating mgr deployment (+2 -> 3))
Jan 23 04:02:09 np0005593232 ceph-mgr[74726]: [progress INFO root] Completed event 17817cf8-4036-4063-b4f7-e1b9574e0f5e (Updating mgr deployment (+2 -> 3)) in 9 seconds
Jan 23 04:02:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Jan 23 04:02:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:09 np0005593232 ceph-mgr[74726]: [progress INFO root] update: starting ev 8ba37624-b6e5-4537-a0fa-1336bb0c8200 (Updating crash deployment (+1 -> 3))
Jan 23 04:02:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 23 04:02:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 23 04:02:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 23 04:02:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:02:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:02:09 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Jan 23 04:02:09 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Jan 23 04:02:09 np0005593232 python3[87764]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:02:09 np0005593232 podman[87765]: 2026-01-23 09:02:09.437558336 +0000 UTC m=+0.071350731 container create 46b266ebcbee46e89ad51c79d11e35f6e48f47dd734d62eaaa0eb272e0f9c5ed (image=quay.io/ceph/ceph:v18, name=upbeat_meninsky, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 04:02:09 np0005593232 podman[87765]: 2026-01-23 09:02:09.392603578 +0000 UTC m=+0.026396003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:02:09 np0005593232 systemd[1]: Started libpod-conmon-46b266ebcbee46e89ad51c79d11e35f6e48f47dd734d62eaaa0eb272e0f9c5ed.scope.
Jan 23 04:02:09 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d021591ceb38db88f45b89c2aadf0d77be79fd37eed89b927f67358bf7454a17/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d021591ceb38db88f45b89c2aadf0d77be79fd37eed89b927f67358bf7454a17/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:09 np0005593232 podman[87765]: 2026-01-23 09:02:09.528815449 +0000 UTC m=+0.162607854 container init 46b266ebcbee46e89ad51c79d11e35f6e48f47dd734d62eaaa0eb272e0f9c5ed (image=quay.io/ceph/ceph:v18, name=upbeat_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:02:09 np0005593232 podman[87765]: 2026-01-23 09:02:09.535472071 +0000 UTC m=+0.169264466 container start 46b266ebcbee46e89ad51c79d11e35f6e48f47dd734d62eaaa0eb272e0f9c5ed (image=quay.io/ceph/ceph:v18, name=upbeat_meninsky, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 04:02:09 np0005593232 podman[87765]: 2026-01-23 09:02:09.538723535 +0000 UTC m=+0.172515930 container attach 46b266ebcbee46e89ad51c79d11e35f6e48f47dd734d62eaaa0eb272e0f9c5ed (image=quay.io/ceph/ceph:v18, name=upbeat_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 04:02:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 23 04:02:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v82: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 853 MiB used, 13 GiB / 14 GiB avail
Jan 23 04:02:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 23 04:02:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2880218519' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 23 04:02:10 np0005593232 ceph-mon[74423]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 04:02:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Jan 23 04:02:10 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Jan 23 04:02:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 23 04:02:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 23 04:02:10 np0005593232 ceph-mgr[74726]: [progress INFO root] Writing back 4 completed events
Jan 23 04:02:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 23 04:02:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 23 04:02:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v84: 2 pgs: 2 active+clean; 449 KiB data, 453 MiB used, 14 GiB / 14 GiB avail
Jan 23 04:02:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:13 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2880218519' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 04:02:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Jan 23 04:02:13 np0005593232 upbeat_meninsky[87780]: pool 'volumes' created
Jan 23 04:02:13 np0005593232 ceph-mon[74423]: Deploying daemon crash.compute-2 on compute-2
Jan 23 04:02:13 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/2880218519' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 23 04:02:13 np0005593232 ceph-mon[74423]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 04:02:13 np0005593232 systemd[1]: libpod-46b266ebcbee46e89ad51c79d11e35f6e48f47dd734d62eaaa0eb272e0f9c5ed.scope: Deactivated successfully.
Jan 23 04:02:13 np0005593232 podman[87765]: 2026-01-23 09:02:13.201643315 +0000 UTC m=+3.835435700 container died 46b266ebcbee46e89ad51c79d11e35f6e48f47dd734d62eaaa0eb272e0f9c5ed (image=quay.io/ceph/ceph:v18, name=upbeat_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:02:13 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Jan 23 04:02:13 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 15 pg[3.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [0] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:02:13 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d021591ceb38db88f45b89c2aadf0d77be79fd37eed89b927f67358bf7454a17-merged.mount: Deactivated successfully.
Jan 23 04:02:13 np0005593232 podman[87765]: 2026-01-23 09:02:13.285494196 +0000 UTC m=+3.919286591 container remove 46b266ebcbee46e89ad51c79d11e35f6e48f47dd734d62eaaa0eb272e0f9c5ed (image=quay.io/ceph/ceph:v18, name=upbeat_meninsky, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Jan 23 04:02:13 np0005593232 systemd[1]: libpod-conmon-46b266ebcbee46e89ad51c79d11e35f6e48f47dd734d62eaaa0eb272e0f9c5ed.scope: Deactivated successfully.
Jan 23 04:02:13 np0005593232 python3[87845]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:02:13 np0005593232 podman[87846]: 2026-01-23 09:02:13.64472456 +0000 UTC m=+0.044214966 container create 03d9661eb6e4c08a205f05bd6315e00757790695ee6c95e69bb0988968cf2f69 (image=quay.io/ceph/ceph:v18, name=musing_hypatia, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 04:02:13 np0005593232 systemd[1]: Started libpod-conmon-03d9661eb6e4c08a205f05bd6315e00757790695ee6c95e69bb0988968cf2f69.scope.
Jan 23 04:02:13 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcb46c7b7ebfbea98fddfef0fb2cb1bdc2de5e0ec1e6ef11113d6d8a6a9be39a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcb46c7b7ebfbea98fddfef0fb2cb1bdc2de5e0ec1e6ef11113d6d8a6a9be39a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:13 np0005593232 podman[87846]: 2026-01-23 09:02:13.624648864 +0000 UTC m=+0.024139290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:02:13 np0005593232 podman[87846]: 2026-01-23 09:02:13.726343127 +0000 UTC m=+0.125833553 container init 03d9661eb6e4c08a205f05bd6315e00757790695ee6c95e69bb0988968cf2f69 (image=quay.io/ceph/ceph:v18, name=musing_hypatia, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:02:13 np0005593232 podman[87846]: 2026-01-23 09:02:13.732845191 +0000 UTC m=+0.132335597 container start 03d9661eb6e4c08a205f05bd6315e00757790695ee6c95e69bb0988968cf2f69 (image=quay.io/ceph/ceph:v18, name=musing_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 04:02:13 np0005593232 podman[87846]: 2026-01-23 09:02:13.736581946 +0000 UTC m=+0.136072352 container attach 03d9661eb6e4c08a205f05bd6315e00757790695ee6c95e69bb0988968cf2f69 (image=quay.io/ceph/ceph:v18, name=musing_hypatia, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:02:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v86: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 23 04:02:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:02:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 23 04:02:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 23 04:02:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/136452763' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 23 04:02:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:02:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Jan 23 04:02:15 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:15 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/2880218519' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 04:02:15 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Jan 23 04:02:15 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 16 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [0] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 23 04:02:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:15 np0005593232 ceph-mgr[74726]: [progress INFO root] complete: finished ev 8ba37624-b6e5-4537-a0fa-1336bb0c8200 (Updating crash deployment (+1 -> 3))
Jan 23 04:02:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Jan 23 04:02:15 np0005593232 ceph-mgr[74726]: [progress INFO root] Completed event 8ba37624-b6e5-4537-a0fa-1336bb0c8200 (Updating crash deployment (+1 -> 3)) in 6 seconds
Jan 23 04:02:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:02:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:02:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:02:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:02:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:02:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:02:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:02:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:02:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:02:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:02:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v88: 3 pgs: 1 unknown, 2 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 23 04:02:16 np0005593232 podman[88024]: 2026-01-23 09:02:16.273450646 +0000 UTC m=+0.040385758 container create cf131ec475941e5634c4bd9cee7a742a1f0a120b63fefcc159765797e85c7f67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 04:02:16 np0005593232 systemd[1]: Started libpod-conmon-cf131ec475941e5634c4bd9cee7a742a1f0a120b63fefcc159765797e85c7f67.scope.
Jan 23 04:02:16 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:16 np0005593232 podman[88024]: 2026-01-23 09:02:16.320989974 +0000 UTC m=+0.087925106 container init cf131ec475941e5634c4bd9cee7a742a1f0a120b63fefcc159765797e85c7f67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 23 04:02:16 np0005593232 podman[88024]: 2026-01-23 09:02:16.326126869 +0000 UTC m=+0.093061981 container start cf131ec475941e5634c4bd9cee7a742a1f0a120b63fefcc159765797e85c7f67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 04:02:16 np0005593232 podman[88024]: 2026-01-23 09:02:16.328947288 +0000 UTC m=+0.095882480 container attach cf131ec475941e5634c4bd9cee7a742a1f0a120b63fefcc159765797e85c7f67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_greider, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 04:02:16 np0005593232 inspiring_greider[88040]: 167 167
Jan 23 04:02:16 np0005593232 systemd[1]: libpod-cf131ec475941e5634c4bd9cee7a742a1f0a120b63fefcc159765797e85c7f67.scope: Deactivated successfully.
Jan 23 04:02:16 np0005593232 podman[88024]: 2026-01-23 09:02:16.330468161 +0000 UTC m=+0.097403293 container died cf131ec475941e5634c4bd9cee7a742a1f0a120b63fefcc159765797e85c7f67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_greider, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 04:02:16 np0005593232 systemd[1]: var-lib-containers-storage-overlay-9f5c5f639758f5f652282c6fe936e81140d1df86839e9f80b2a2058ad4d19159-merged.mount: Deactivated successfully.
Jan 23 04:02:16 np0005593232 podman[88024]: 2026-01-23 09:02:16.25867312 +0000 UTC m=+0.025608252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:02:16 np0005593232 podman[88024]: 2026-01-23 09:02:16.3630996 +0000 UTC m=+0.130034722 container remove cf131ec475941e5634c4bd9cee7a742a1f0a120b63fefcc159765797e85c7f67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:02:16 np0005593232 systemd[1]: libpod-conmon-cf131ec475941e5634c4bd9cee7a742a1f0a120b63fefcc159765797e85c7f67.scope: Deactivated successfully.
Jan 23 04:02:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 23 04:02:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/136452763' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 04:02:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Jan 23 04:02:16 np0005593232 musing_hypatia[87861]: pool 'backups' created
Jan 23 04:02:16 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Jan 23 04:02:16 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/136452763' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 23 04:02:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:02:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:02:16 np0005593232 systemd[1]: libpod-03d9661eb6e4c08a205f05bd6315e00757790695ee6c95e69bb0988968cf2f69.scope: Deactivated successfully.
Jan 23 04:02:16 np0005593232 conmon[87861]: conmon 03d9661eb6e4c08a205f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-03d9661eb6e4c08a205f05bd6315e00757790695ee6c95e69bb0988968cf2f69.scope/container/memory.events
Jan 23 04:02:16 np0005593232 podman[87846]: 2026-01-23 09:02:16.465100591 +0000 UTC m=+2.864591017 container died 03d9661eb6e4c08a205f05bd6315e00757790695ee6c95e69bb0988968cf2f69 (image=quay.io/ceph/ceph:v18, name=musing_hypatia, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:02:16 np0005593232 systemd[1]: var-lib-containers-storage-overlay-fcb46c7b7ebfbea98fddfef0fb2cb1bdc2de5e0ec1e6ef11113d6d8a6a9be39a-merged.mount: Deactivated successfully.
Jan 23 04:02:16 np0005593232 podman[87846]: 2026-01-23 09:02:16.527130518 +0000 UTC m=+2.926620924 container remove 03d9661eb6e4c08a205f05bd6315e00757790695ee6c95e69bb0988968cf2f69 (image=quay.io/ceph/ceph:v18, name=musing_hypatia, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:02:16 np0005593232 systemd[1]: libpod-conmon-03d9661eb6e4c08a205f05bd6315e00757790695ee6c95e69bb0988968cf2f69.scope: Deactivated successfully.
Jan 23 04:02:16 np0005593232 podman[88064]: 2026-01-23 09:02:16.556951137 +0000 UTC m=+0.073818289 container create 6400a3aee99ceb5561e2a74f78b5782dfa5b0a5ff3e2da942e3d22c35a6eb87f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:02:16 np0005593232 systemd[1]: Started libpod-conmon-6400a3aee99ceb5561e2a74f78b5782dfa5b0a5ff3e2da942e3d22c35a6eb87f.scope.
Jan 23 04:02:16 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e91dba52947cf64a5ce36277a660158c6a6edbaa9a6a7d7b137aff56033b769c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e91dba52947cf64a5ce36277a660158c6a6edbaa9a6a7d7b137aff56033b769c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e91dba52947cf64a5ce36277a660158c6a6edbaa9a6a7d7b137aff56033b769c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e91dba52947cf64a5ce36277a660158c6a6edbaa9a6a7d7b137aff56033b769c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e91dba52947cf64a5ce36277a660158c6a6edbaa9a6a7d7b137aff56033b769c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:16 np0005593232 podman[88064]: 2026-01-23 09:02:16.535307838 +0000 UTC m=+0.052175020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:02:16 np0005593232 podman[88064]: 2026-01-23 09:02:16.634665275 +0000 UTC m=+0.151532457 container init 6400a3aee99ceb5561e2a74f78b5782dfa5b0a5ff3e2da942e3d22c35a6eb87f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 23 04:02:16 np0005593232 podman[88064]: 2026-01-23 09:02:16.642975229 +0000 UTC m=+0.159842391 container start 6400a3aee99ceb5561e2a74f78b5782dfa5b0a5ff3e2da942e3d22c35a6eb87f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_matsumoto, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:02:16 np0005593232 podman[88064]: 2026-01-23 09:02:16.652461216 +0000 UTC m=+0.169328368 container attach 6400a3aee99ceb5561e2a74f78b5782dfa5b0a5ff3e2da942e3d22c35a6eb87f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:02:16 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 17 pg[4.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:16 np0005593232 python3[88121]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:02:16 np0005593232 podman[88122]: 2026-01-23 09:02:16.995762601 +0000 UTC m=+0.044090032 container create b7bd50d0cc765405f7c810c95a7a835fb41cc3f0fe1a9f0b613528859e996961 (image=quay.io/ceph/ceph:v18, name=goofy_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 04:02:17 np0005593232 podman[88122]: 2026-01-23 09:02:16.979468443 +0000 UTC m=+0.027795874 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:02:17 np0005593232 systemd[1]: Started libpod-conmon-b7bd50d0cc765405f7c810c95a7a835fb41cc3f0fe1a9f0b613528859e996961.scope.
Jan 23 04:02:17 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52891b438ee03221e03bd23c7d0c303cfc814b8c072aeb70e228ab8b4ce97d7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52891b438ee03221e03bd23c7d0c303cfc814b8c072aeb70e228ab8b4ce97d7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:17 np0005593232 podman[88122]: 2026-01-23 09:02:17.164102721 +0000 UTC m=+0.212430172 container init b7bd50d0cc765405f7c810c95a7a835fb41cc3f0fe1a9f0b613528859e996961 (image=quay.io/ceph/ceph:v18, name=goofy_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:02:17 np0005593232 podman[88122]: 2026-01-23 09:02:17.169804661 +0000 UTC m=+0.218132092 container start b7bd50d0cc765405f7c810c95a7a835fb41cc3f0fe1a9f0b613528859e996961 (image=quay.io/ceph/ceph:v18, name=goofy_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 04:02:17 np0005593232 podman[88122]: 2026-01-23 09:02:17.172792686 +0000 UTC m=+0.221120137 container attach b7bd50d0cc765405f7c810c95a7a835fb41cc3f0fe1a9f0b613528859e996961 (image=quay.io/ceph/ceph:v18, name=goofy_lumiere, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 04:02:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 23 04:02:17 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/136452763' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 04:02:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Jan 23 04:02:17 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Jan 23 04:02:17 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 18 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:17 np0005593232 musing_matsumoto[88091]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:02:17 np0005593232 musing_matsumoto[88091]: --> relative data size: 1.0
Jan 23 04:02:17 np0005593232 musing_matsumoto[88091]: --> All data devices are unavailable
Jan 23 04:02:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "1694e9fb-559e-40c4-a465-98d21c9c2b03"} v 0) v1
Jan 23 04:02:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1694e9fb-559e-40c4-a465-98d21c9c2b03"}]: dispatch
Jan 23 04:02:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 23 04:02:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1694e9fb-559e-40c4-a465-98d21c9c2b03"}]': finished
Jan 23 04:02:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e19 e19: 3 total, 2 up, 3 in
Jan 23 04:02:17 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 2 up, 3 in
Jan 23 04:02:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 23 04:02:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 04:02:17 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 04:02:17 np0005593232 systemd[1]: libpod-6400a3aee99ceb5561e2a74f78b5782dfa5b0a5ff3e2da942e3d22c35a6eb87f.scope: Deactivated successfully.
Jan 23 04:02:17 np0005593232 systemd[1]: libpod-6400a3aee99ceb5561e2a74f78b5782dfa5b0a5ff3e2da942e3d22c35a6eb87f.scope: Consumed 1.221s CPU time.
Jan 23 04:02:17 np0005593232 podman[88064]: 2026-01-23 09:02:17.899463274 +0000 UTC m=+1.416330426 container died 6400a3aee99ceb5561e2a74f78b5782dfa5b0a5ff3e2da942e3d22c35a6eb87f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_matsumoto, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 04:02:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e91dba52947cf64a5ce36277a660158c6a6edbaa9a6a7d7b137aff56033b769c-merged.mount: Deactivated successfully.
Jan 23 04:02:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v92: 4 pgs: 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 23 04:02:17 np0005593232 podman[88064]: 2026-01-23 09:02:17.952280921 +0000 UTC m=+1.469148073 container remove 6400a3aee99ceb5561e2a74f78b5782dfa5b0a5ff3e2da942e3d22c35a6eb87f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_matsumoto, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 23 04:02:17 np0005593232 systemd[1]: libpod-conmon-6400a3aee99ceb5561e2a74f78b5782dfa5b0a5ff3e2da942e3d22c35a6eb87f.scope: Deactivated successfully.
Jan 23 04:02:17 np0005593232 ceph-mgr[74726]: [progress INFO root] Writing back 5 completed events
Jan 23 04:02:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 23 04:02:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 23 04:02:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3224534207' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 23 04:02:18 np0005593232 ceph-mon[74423]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 04:02:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:02:18 np0005593232 podman[88326]: 2026-01-23 09:02:18.578775649 +0000 UTC m=+0.041262423 container create 6b50e72cdfee37f14c08df154bd2dccf83579cf39cf4e3686043630d03325505 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swanson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 04:02:18 np0005593232 systemd[1]: Started libpod-conmon-6b50e72cdfee37f14c08df154bd2dccf83579cf39cf4e3686043630d03325505.scope.
Jan 23 04:02:18 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:18 np0005593232 podman[88326]: 2026-01-23 09:02:18.652850764 +0000 UTC m=+0.115337548 container init 6b50e72cdfee37f14c08df154bd2dccf83579cf39cf4e3686043630d03325505 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swanson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:02:18 np0005593232 podman[88326]: 2026-01-23 09:02:18.558784516 +0000 UTC m=+0.021271310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:02:18 np0005593232 podman[88326]: 2026-01-23 09:02:18.660699515 +0000 UTC m=+0.123186289 container start 6b50e72cdfee37f14c08df154bd2dccf83579cf39cf4e3686043630d03325505 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swanson, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:02:18 np0005593232 podman[88326]: 2026-01-23 09:02:18.665063508 +0000 UTC m=+0.127550312 container attach 6b50e72cdfee37f14c08df154bd2dccf83579cf39cf4e3686043630d03325505 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swanson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 04:02:18 np0005593232 happy_swanson[88342]: 167 167
Jan 23 04:02:18 np0005593232 systemd[1]: libpod-6b50e72cdfee37f14c08df154bd2dccf83579cf39cf4e3686043630d03325505.scope: Deactivated successfully.
Jan 23 04:02:18 np0005593232 podman[88326]: 2026-01-23 09:02:18.667104556 +0000 UTC m=+0.129591330 container died 6b50e72cdfee37f14c08df154bd2dccf83579cf39cf4e3686043630d03325505 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swanson, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 23 04:02:18 np0005593232 systemd[1]: var-lib-containers-storage-overlay-920f343594b46da5d14a863ac8f15d540d60aac634f7cf77e35d9f624e04513d-merged.mount: Deactivated successfully.
Jan 23 04:02:18 np0005593232 podman[88326]: 2026-01-23 09:02:18.70240928 +0000 UTC m=+0.164896054 container remove 6b50e72cdfee37f14c08df154bd2dccf83579cf39cf4e3686043630d03325505 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:02:18 np0005593232 systemd[1]: libpod-conmon-6b50e72cdfee37f14c08df154bd2dccf83579cf39cf4e3686043630d03325505.scope: Deactivated successfully.
Jan 23 04:02:18 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.102:0/4125312751' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1694e9fb-559e-40c4-a465-98d21c9c2b03"}]: dispatch
Jan 23 04:02:18 np0005593232 ceph-mon[74423]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "1694e9fb-559e-40c4-a465-98d21c9c2b03"}]: dispatch
Jan 23 04:02:18 np0005593232 ceph-mon[74423]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "1694e9fb-559e-40c4-a465-98d21c9c2b03"}]': finished
Jan 23 04:02:18 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:18 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/3224534207' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 23 04:02:18 np0005593232 ceph-mon[74423]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 04:02:18 np0005593232 podman[88365]: 2026-01-23 09:02:18.847105743 +0000 UTC m=+0.039389700 container create 249f9d82196c66e392e39b4f0a77157ee692e85c1f5371e1c6f0b0c462cb5365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 04:02:18 np0005593232 systemd[1]: Started libpod-conmon-249f9d82196c66e392e39b4f0a77157ee692e85c1f5371e1c6f0b0c462cb5365.scope.
Jan 23 04:02:18 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a70618f2c60de7d98d7e875cf99293897a0a55ca167d5bcb0e31a971d906c3fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a70618f2c60de7d98d7e875cf99293897a0a55ca167d5bcb0e31a971d906c3fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a70618f2c60de7d98d7e875cf99293897a0a55ca167d5bcb0e31a971d906c3fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a70618f2c60de7d98d7e875cf99293897a0a55ca167d5bcb0e31a971d906c3fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:18 np0005593232 podman[88365]: 2026-01-23 09:02:18.828793638 +0000 UTC m=+0.021077585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:02:18 np0005593232 podman[88365]: 2026-01-23 09:02:18.933127825 +0000 UTC m=+0.125411762 container init 249f9d82196c66e392e39b4f0a77157ee692e85c1f5371e1c6f0b0c462cb5365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 04:02:18 np0005593232 podman[88365]: 2026-01-23 09:02:18.939254038 +0000 UTC m=+0.131537955 container start 249f9d82196c66e392e39b4f0a77157ee692e85c1f5371e1c6f0b0c462cb5365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:02:18 np0005593232 podman[88365]: 2026-01-23 09:02:18.943897628 +0000 UTC m=+0.136181565 container attach 249f9d82196c66e392e39b4f0a77157ee692e85c1f5371e1c6f0b0c462cb5365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_babbage, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 04:02:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 23 04:02:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3224534207' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 04:02:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e20 e20: 3 total, 2 up, 3 in
Jan 23 04:02:19 np0005593232 goofy_lumiere[88138]: pool 'images' created
Jan 23 04:02:19 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 2 up, 3 in
Jan 23 04:02:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 23 04:02:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 04:02:19 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 04:02:19 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 20 pg[5.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:19 np0005593232 systemd[1]: libpod-b7bd50d0cc765405f7c810c95a7a835fb41cc3f0fe1a9f0b613528859e996961.scope: Deactivated successfully.
Jan 23 04:02:19 np0005593232 podman[88122]: 2026-01-23 09:02:19.107741261 +0000 UTC m=+2.156068722 container died b7bd50d0cc765405f7c810c95a7a835fb41cc3f0fe1a9f0b613528859e996961 (image=quay.io/ceph/ceph:v18, name=goofy_lumiere, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 23 04:02:19 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c52891b438ee03221e03bd23c7d0c303cfc814b8c072aeb70e228ab8b4ce97d7-merged.mount: Deactivated successfully.
Jan 23 04:02:19 np0005593232 podman[88122]: 2026-01-23 09:02:19.151092132 +0000 UTC m=+2.199419563 container remove b7bd50d0cc765405f7c810c95a7a835fb41cc3f0fe1a9f0b613528859e996961 (image=quay.io/ceph/ceph:v18, name=goofy_lumiere, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:02:19 np0005593232 systemd[1]: libpod-conmon-b7bd50d0cc765405f7c810c95a7a835fb41cc3f0fe1a9f0b613528859e996961.scope: Deactivated successfully.
Jan 23 04:02:19 np0005593232 python3[88425]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:02:19 np0005593232 podman[88426]: 2026-01-23 09:02:19.482614474 +0000 UTC m=+0.040288815 container create 057fc88bea475c7af37b8f9d85eeaa2739406687ba1c29a7e564c15474719728 (image=quay.io/ceph/ceph:v18, name=interesting_agnesi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 04:02:19 np0005593232 systemd[1]: Started libpod-conmon-057fc88bea475c7af37b8f9d85eeaa2739406687ba1c29a7e564c15474719728.scope.
Jan 23 04:02:19 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:19 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e082d3d0ca9bc54a40d2812d04fe7f7e559acd41b7d75f3cd7a7239230e056e1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:19 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e082d3d0ca9bc54a40d2812d04fe7f7e559acd41b7d75f3cd7a7239230e056e1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:19 np0005593232 podman[88426]: 2026-01-23 09:02:19.464531025 +0000 UTC m=+0.022205376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:02:19 np0005593232 podman[88426]: 2026-01-23 09:02:19.570487648 +0000 UTC m=+0.128162019 container init 057fc88bea475c7af37b8f9d85eeaa2739406687ba1c29a7e564c15474719728 (image=quay.io/ceph/ceph:v18, name=interesting_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:02:19 np0005593232 podman[88426]: 2026-01-23 09:02:19.57624275 +0000 UTC m=+0.133917111 container start 057fc88bea475c7af37b8f9d85eeaa2739406687ba1c29a7e564c15474719728 (image=quay.io/ceph/ceph:v18, name=interesting_agnesi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 23 04:02:19 np0005593232 podman[88426]: 2026-01-23 09:02:19.57977904 +0000 UTC m=+0.137453401 container attach 057fc88bea475c7af37b8f9d85eeaa2739406687ba1c29a7e564c15474719728 (image=quay.io/ceph/ceph:v18, name=interesting_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]: {
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:    "0": [
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:        {
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:            "devices": [
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:                "/dev/loop3"
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:            ],
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:            "lv_name": "ceph_lv0",
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:            "lv_size": "7511998464",
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:            "name": "ceph_lv0",
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:            "tags": {
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:                "ceph.cluster_name": "ceph",
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:                "ceph.crush_device_class": "",
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:                "ceph.encrypted": "0",
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:                "ceph.osd_id": "0",
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:                "ceph.type": "block",
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:                "ceph.vdo": "0"
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:            },
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:            "type": "block",
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:            "vg_name": "ceph_vg0"
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:        }
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]:    ]
Jan 23 04:02:19 np0005593232 jolly_babbage[88381]: }
Jan 23 04:02:19 np0005593232 systemd[1]: libpod-249f9d82196c66e392e39b4f0a77157ee692e85c1f5371e1c6f0b0c462cb5365.scope: Deactivated successfully.
Jan 23 04:02:19 np0005593232 podman[88449]: 2026-01-23 09:02:19.773665908 +0000 UTC m=+0.022581467 container died 249f9d82196c66e392e39b4f0a77157ee692e85c1f5371e1c6f0b0c462cb5365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 04:02:19 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a70618f2c60de7d98d7e875cf99293897a0a55ca167d5bcb0e31a971d906c3fe-merged.mount: Deactivated successfully.
Jan 23 04:02:19 np0005593232 podman[88449]: 2026-01-23 09:02:19.818392557 +0000 UTC m=+0.067308086 container remove 249f9d82196c66e392e39b4f0a77157ee692e85c1f5371e1c6f0b0c462cb5365 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_babbage, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:02:19 np0005593232 systemd[1]: libpod-conmon-249f9d82196c66e392e39b4f0a77157ee692e85c1f5371e1c6f0b0c462cb5365.scope: Deactivated successfully.
Jan 23 04:02:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v94: 5 pgs: 1 unknown, 1 creating+peering, 3 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 23 04:02:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 23 04:02:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e21 e21: 3 total, 2 up, 3 in
Jan 23 04:02:20 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/3224534207' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 04:02:20 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 2 up, 3 in
Jan 23 04:02:20 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 21 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 23 04:02:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 04:02:20 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 04:02:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 23 04:02:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2806909960' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 23 04:02:20 np0005593232 podman[88625]: 2026-01-23 09:02:20.375479651 +0000 UTC m=+0.034279496 container create 0c3ccaa9bab4aeedc997c41dbf6a6189efb1438c1252d811a7cf3d9becec5868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 04:02:20 np0005593232 systemd[1]: Started libpod-conmon-0c3ccaa9bab4aeedc997c41dbf6a6189efb1438c1252d811a7cf3d9becec5868.scope.
Jan 23 04:02:20 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:20 np0005593232 podman[88625]: 2026-01-23 09:02:20.455615107 +0000 UTC m=+0.114414972 container init 0c3ccaa9bab4aeedc997c41dbf6a6189efb1438c1252d811a7cf3d9becec5868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:02:20 np0005593232 podman[88625]: 2026-01-23 09:02:20.3608813 +0000 UTC m=+0.019681165 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:02:20 np0005593232 podman[88625]: 2026-01-23 09:02:20.461776981 +0000 UTC m=+0.120576826 container start 0c3ccaa9bab4aeedc997c41dbf6a6189efb1438c1252d811a7cf3d9becec5868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 04:02:20 np0005593232 cool_nobel[88643]: 167 167
Jan 23 04:02:20 np0005593232 systemd[1]: libpod-0c3ccaa9bab4aeedc997c41dbf6a6189efb1438c1252d811a7cf3d9becec5868.scope: Deactivated successfully.
Jan 23 04:02:20 np0005593232 podman[88625]: 2026-01-23 09:02:20.465241778 +0000 UTC m=+0.124041703 container attach 0c3ccaa9bab4aeedc997c41dbf6a6189efb1438c1252d811a7cf3d9becec5868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 04:02:20 np0005593232 podman[88625]: 2026-01-23 09:02:20.469863148 +0000 UTC m=+0.128663023 container died 0c3ccaa9bab4aeedc997c41dbf6a6189efb1438c1252d811a7cf3d9becec5868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 04:02:20 np0005593232 systemd[1]: var-lib-containers-storage-overlay-3f300530af2a5632c3de13434a2af8ef315ffcb96e615b8695a4675460575b58-merged.mount: Deactivated successfully.
Jan 23 04:02:20 np0005593232 podman[88625]: 2026-01-23 09:02:20.513048714 +0000 UTC m=+0.171848599 container remove 0c3ccaa9bab4aeedc997c41dbf6a6189efb1438c1252d811a7cf3d9becec5868 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:02:20 np0005593232 systemd[1]: libpod-conmon-0c3ccaa9bab4aeedc997c41dbf6a6189efb1438c1252d811a7cf3d9becec5868.scope: Deactivated successfully.
Jan 23 04:02:20 np0005593232 podman[88669]: 2026-01-23 09:02:20.666858374 +0000 UTC m=+0.046678585 container create 5c2be12cc17c3e2745da1d23df95269458afc90a54cdb168c314627a4fc9c943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 04:02:20 np0005593232 systemd[1]: Started libpod-conmon-5c2be12cc17c3e2745da1d23df95269458afc90a54cdb168c314627a4fc9c943.scope.
Jan 23 04:02:20 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63ecc9da732fbe84b82c388a219e2573b5cf241c6ab607ab4df0082473c35d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63ecc9da732fbe84b82c388a219e2573b5cf241c6ab607ab4df0082473c35d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63ecc9da732fbe84b82c388a219e2573b5cf241c6ab607ab4df0082473c35d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63ecc9da732fbe84b82c388a219e2573b5cf241c6ab607ab4df0082473c35d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:20 np0005593232 podman[88669]: 2026-01-23 09:02:20.645425511 +0000 UTC m=+0.025245732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:02:20 np0005593232 podman[88669]: 2026-01-23 09:02:20.743703058 +0000 UTC m=+0.123523319 container init 5c2be12cc17c3e2745da1d23df95269458afc90a54cdb168c314627a4fc9c943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 23 04:02:20 np0005593232 podman[88669]: 2026-01-23 09:02:20.75051469 +0000 UTC m=+0.130334901 container start 5c2be12cc17c3e2745da1d23df95269458afc90a54cdb168c314627a4fc9c943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldwasser, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:02:20 np0005593232 podman[88669]: 2026-01-23 09:02:20.754717858 +0000 UTC m=+0.134538069 container attach 5c2be12cc17c3e2745da1d23df95269458afc90a54cdb168c314627a4fc9c943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldwasser, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:02:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 23 04:02:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2806909960' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 04:02:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e22 e22: 3 total, 2 up, 3 in
Jan 23 04:02:21 np0005593232 interesting_agnesi[88441]: pool 'cephfs.cephfs.meta' created
Jan 23 04:02:21 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 2 up, 3 in
Jan 23 04:02:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 23 04:02:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 04:02:21 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 04:02:21 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/2806909960' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 23 04:02:21 np0005593232 systemd[1]: libpod-057fc88bea475c7af37b8f9d85eeaa2739406687ba1c29a7e564c15474719728.scope: Deactivated successfully.
Jan 23 04:02:21 np0005593232 podman[88426]: 2026-01-23 09:02:21.135176289 +0000 UTC m=+1.692850670 container died 057fc88bea475c7af37b8f9d85eeaa2739406687ba1c29a7e564c15474719728 (image=quay.io/ceph/ceph:v18, name=interesting_agnesi, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 04:02:21 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e082d3d0ca9bc54a40d2812d04fe7f7e559acd41b7d75f3cd7a7239230e056e1-merged.mount: Deactivated successfully.
Jan 23 04:02:21 np0005593232 podman[88426]: 2026-01-23 09:02:21.192079301 +0000 UTC m=+1.749753652 container remove 057fc88bea475c7af37b8f9d85eeaa2739406687ba1c29a7e564c15474719728 (image=quay.io/ceph/ceph:v18, name=interesting_agnesi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:02:21 np0005593232 systemd[1]: libpod-conmon-057fc88bea475c7af37b8f9d85eeaa2739406687ba1c29a7e564c15474719728.scope: Deactivated successfully.
Jan 23 04:02:21 np0005593232 python3[88731]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:02:21 np0005593232 podman[88740]: 2026-01-23 09:02:21.578730697 +0000 UTC m=+0.046147350 container create 93a79b9fc454f609bfec17d581901f28dcd373c32aa8ba594cbbcb36cadaaed6 (image=quay.io/ceph/ceph:v18, name=vibrant_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:02:21 np0005593232 systemd[1]: Started libpod-conmon-93a79b9fc454f609bfec17d581901f28dcd373c32aa8ba594cbbcb36cadaaed6.scope.
Jan 23 04:02:21 np0005593232 lucid_goldwasser[88686]: {
Jan 23 04:02:21 np0005593232 lucid_goldwasser[88686]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:02:21 np0005593232 lucid_goldwasser[88686]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:02:21 np0005593232 lucid_goldwasser[88686]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:02:21 np0005593232 lucid_goldwasser[88686]:        "osd_id": 0,
Jan 23 04:02:21 np0005593232 lucid_goldwasser[88686]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:02:21 np0005593232 lucid_goldwasser[88686]:        "type": "bluestore"
Jan 23 04:02:21 np0005593232 lucid_goldwasser[88686]:    }
Jan 23 04:02:21 np0005593232 lucid_goldwasser[88686]: }
Jan 23 04:02:21 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506da208e3d5184498df6d1497612ee8a729eb89b36a278c3668fa55fbfbd02d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/506da208e3d5184498df6d1497612ee8a729eb89b36a278c3668fa55fbfbd02d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:21 np0005593232 podman[88740]: 2026-01-23 09:02:21.557645753 +0000 UTC m=+0.025062426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:02:21 np0005593232 podman[88740]: 2026-01-23 09:02:21.655993392 +0000 UTC m=+0.123410055 container init 93a79b9fc454f609bfec17d581901f28dcd373c32aa8ba594cbbcb36cadaaed6 (image=quay.io/ceph/ceph:v18, name=vibrant_hellman, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:02:21 np0005593232 podman[88740]: 2026-01-23 09:02:21.661765724 +0000 UTC m=+0.129182377 container start 93a79b9fc454f609bfec17d581901f28dcd373c32aa8ba594cbbcb36cadaaed6 (image=quay.io/ceph/ceph:v18, name=vibrant_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:02:21 np0005593232 podman[88740]: 2026-01-23 09:02:21.664592274 +0000 UTC m=+0.132008927 container attach 93a79b9fc454f609bfec17d581901f28dcd373c32aa8ba594cbbcb36cadaaed6 (image=quay.io/ceph/ceph:v18, name=vibrant_hellman, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:02:21 np0005593232 systemd[1]: libpod-5c2be12cc17c3e2745da1d23df95269458afc90a54cdb168c314627a4fc9c943.scope: Deactivated successfully.
Jan 23 04:02:21 np0005593232 podman[88669]: 2026-01-23 09:02:21.666830897 +0000 UTC m=+1.046651118 container died 5c2be12cc17c3e2745da1d23df95269458afc90a54cdb168c314627a4fc9c943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldwasser, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 23 04:02:21 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b63ecc9da732fbe84b82c388a219e2573b5cf241c6ab607ab4df0082473c35d2-merged.mount: Deactivated successfully.
Jan 23 04:02:21 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:21 np0005593232 podman[88669]: 2026-01-23 09:02:21.71984616 +0000 UTC m=+1.099666381 container remove 5c2be12cc17c3e2745da1d23df95269458afc90a54cdb168c314627a4fc9c943 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 04:02:21 np0005593232 systemd[1]: libpod-conmon-5c2be12cc17c3e2745da1d23df95269458afc90a54cdb168c314627a4fc9c943.scope: Deactivated successfully.
Jan 23 04:02:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:02:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:02:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v97: 6 pgs: 1 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 23 04:02:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 23 04:02:22 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/2806909960' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 04:02:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e23 e23: 3 total, 2 up, 3 in
Jan 23 04:02:22 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 2 up, 3 in
Jan 23 04:02:22 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 04:02:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 23 04:02:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 04:02:22 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 23 pg[6.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Jan 23 04:02:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3443060591' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 23 04:02:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Jan 23 04:02:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 23 04:02:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:02:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:02:23 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Jan 23 04:02:23 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Jan 23 04:02:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 23 04:02:23 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/3443060591' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 23 04:02:23 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 23 04:02:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3443060591' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 04:02:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e24 e24: 3 total, 2 up, 3 in
Jan 23 04:02:23 np0005593232 vibrant_hellman[88761]: pool 'cephfs.cephfs.data' created
Jan 23 04:02:23 np0005593232 systemd[1]: libpod-93a79b9fc454f609bfec17d581901f28dcd373c32aa8ba594cbbcb36cadaaed6.scope: Deactivated successfully.
Jan 23 04:02:23 np0005593232 podman[88740]: 2026-01-23 09:02:23.276571416 +0000 UTC m=+1.743988069 container died 93a79b9fc454f609bfec17d581901f28dcd373c32aa8ba594cbbcb36cadaaed6 (image=quay.io/ceph/ceph:v18, name=vibrant_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:02:23 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 2 up, 3 in
Jan 23 04:02:23 np0005593232 ceph-mon[74423]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 04:02:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:02:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 23 04:02:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 04:02:23 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 04:02:23 np0005593232 systemd[1]: var-lib-containers-storage-overlay-506da208e3d5184498df6d1497612ee8a729eb89b36a278c3668fa55fbfbd02d-merged.mount: Deactivated successfully.
Jan 23 04:02:23 np0005593232 podman[88740]: 2026-01-23 09:02:23.362381942 +0000 UTC m=+1.829798595 container remove 93a79b9fc454f609bfec17d581901f28dcd373c32aa8ba594cbbcb36cadaaed6 (image=quay.io/ceph/ceph:v18, name=vibrant_hellman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:02:23 np0005593232 systemd[1]: libpod-conmon-93a79b9fc454f609bfec17d581901f28dcd373c32aa8ba594cbbcb36cadaaed6.scope: Deactivated successfully.
Jan 23 04:02:23 np0005593232 python3[88843]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:02:23 np0005593232 podman[88844]: 2026-01-23 09:02:23.784334601 +0000 UTC m=+0.061824972 container create 101c4f634022a154bea0cda7bd7dfbc01f789bee84153507e1124d91a83875a9 (image=quay.io/ceph/ceph:v18, name=awesome_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 23 04:02:23 np0005593232 systemd[1]: Started libpod-conmon-101c4f634022a154bea0cda7bd7dfbc01f789bee84153507e1124d91a83875a9.scope.
Jan 23 04:02:23 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:23 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce4f3021fe53dbb8cf68e45b8db057f9913012ea9fccbd8c8c710921e70f00c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:23 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce4f3021fe53dbb8cf68e45b8db057f9913012ea9fccbd8c8c710921e70f00c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:23 np0005593232 podman[88844]: 2026-01-23 09:02:23.76156704 +0000 UTC m=+0.039057411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:02:23 np0005593232 podman[88844]: 2026-01-23 09:02:23.868536341 +0000 UTC m=+0.146026712 container init 101c4f634022a154bea0cda7bd7dfbc01f789bee84153507e1124d91a83875a9 (image=quay.io/ceph/ceph:v18, name=awesome_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 04:02:23 np0005593232 podman[88844]: 2026-01-23 09:02:23.875509318 +0000 UTC m=+0.152999709 container start 101c4f634022a154bea0cda7bd7dfbc01f789bee84153507e1124d91a83875a9 (image=quay.io/ceph/ceph:v18, name=awesome_bhabha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 23 04:02:23 np0005593232 podman[88844]: 2026-01-23 09:02:23.880398025 +0000 UTC m=+0.157888396 container attach 101c4f634022a154bea0cda7bd7dfbc01f789bee84153507e1124d91a83875a9 (image=quay.io/ceph/ceph:v18, name=awesome_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:02:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v100: 7 pgs: 1 unknown, 1 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 23 04:02:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Jan 23 04:02:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/927391621' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 23 04:02:24 np0005593232 ceph-mon[74423]: Deploying daemon osd.2 on compute-2
Jan 23 04:02:24 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/3443060591' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 23 04:02:24 np0005593232 ceph-mon[74423]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 04:02:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 23 04:02:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/927391621' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 23 04:02:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e25 e25: 3 total, 2 up, 3 in
Jan 23 04:02:24 np0005593232 awesome_bhabha[88859]: enabled application 'rbd' on pool 'vms'
Jan 23 04:02:24 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 2 up, 3 in
Jan 23 04:02:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 23 04:02:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 04:02:24 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 04:02:24 np0005593232 systemd[1]: libpod-101c4f634022a154bea0cda7bd7dfbc01f789bee84153507e1124d91a83875a9.scope: Deactivated successfully.
Jan 23 04:02:24 np0005593232 podman[88844]: 2026-01-23 09:02:24.523193512 +0000 UTC m=+0.800683873 container died 101c4f634022a154bea0cda7bd7dfbc01f789bee84153507e1124d91a83875a9 (image=quay.io/ceph/ceph:v18, name=awesome_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 23 04:02:24 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1ce4f3021fe53dbb8cf68e45b8db057f9913012ea9fccbd8c8c710921e70f00c-merged.mount: Deactivated successfully.
Jan 23 04:02:24 np0005593232 podman[88844]: 2026-01-23 09:02:24.566353477 +0000 UTC m=+0.843843828 container remove 101c4f634022a154bea0cda7bd7dfbc01f789bee84153507e1124d91a83875a9 (image=quay.io/ceph/ceph:v18, name=awesome_bhabha, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:02:24 np0005593232 systemd[1]: libpod-conmon-101c4f634022a154bea0cda7bd7dfbc01f789bee84153507e1124d91a83875a9.scope: Deactivated successfully.
Jan 23 04:02:24 np0005593232 python3[88921]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:02:25 np0005593232 podman[88922]: 2026-01-23 09:02:24.927290809 +0000 UTC m=+0.020942751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:02:25 np0005593232 podman[88922]: 2026-01-23 09:02:25.226353309 +0000 UTC m=+0.320005231 container create 01656345f1ba64bb78ac84653a138d019c2e5adb81c6734b24f36759b9672feb (image=quay.io/ceph/ceph:v18, name=elated_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 23 04:02:25 np0005593232 systemd[1]: Started libpod-conmon-01656345f1ba64bb78ac84653a138d019c2e5adb81c6734b24f36759b9672feb.scope.
Jan 23 04:02:25 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c38d407290b539ba2354443789106d7a5fe8ef9268acc18454b69f65585756e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c38d407290b539ba2354443789106d7a5fe8ef9268acc18454b69f65585756e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:25 np0005593232 podman[88922]: 2026-01-23 09:02:25.309041636 +0000 UTC m=+0.402693558 container init 01656345f1ba64bb78ac84653a138d019c2e5adb81c6734b24f36759b9672feb (image=quay.io/ceph/ceph:v18, name=elated_euler, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:02:25 np0005593232 podman[88922]: 2026-01-23 09:02:25.317234307 +0000 UTC m=+0.410886219 container start 01656345f1ba64bb78ac84653a138d019c2e5adb81c6734b24f36759b9672feb (image=quay.io/ceph/ceph:v18, name=elated_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:02:25 np0005593232 podman[88922]: 2026-01-23 09:02:25.321277261 +0000 UTC m=+0.414929203 container attach 01656345f1ba64bb78ac84653a138d019c2e5adb81c6734b24f36759b9672feb (image=quay.io/ceph/ceph:v18, name=elated_euler, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 23 04:02:25 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/927391621' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 23 04:02:25 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/927391621' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 23 04:02:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 23 04:02:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e26 e26: 3 total, 2 up, 3 in
Jan 23 04:02:25 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 2 up, 3 in
Jan 23 04:02:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 23 04:02:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 04:02:25 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 04:02:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Jan 23 04:02:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/174650588' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 23 04:02:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v103: 7 pgs: 1 unknown, 1 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 23 04:02:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 23 04:02:26 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/174650588' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 23 04:02:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/174650588' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 23 04:02:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e27 e27: 3 total, 2 up, 3 in
Jan 23 04:02:26 np0005593232 elated_euler[88937]: enabled application 'rbd' on pool 'volumes'
Jan 23 04:02:26 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 2 up, 3 in
Jan 23 04:02:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 23 04:02:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 04:02:26 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 04:02:26 np0005593232 systemd[1]: libpod-01656345f1ba64bb78ac84653a138d019c2e5adb81c6734b24f36759b9672feb.scope: Deactivated successfully.
Jan 23 04:02:26 np0005593232 podman[88922]: 2026-01-23 09:02:26.825247782 +0000 UTC m=+1.918899714 container died 01656345f1ba64bb78ac84653a138d019c2e5adb81c6734b24f36759b9672feb (image=quay.io/ceph/ceph:v18, name=elated_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 04:02:26 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5c38d407290b539ba2354443789106d7a5fe8ef9268acc18454b69f65585756e-merged.mount: Deactivated successfully.
Jan 23 04:02:26 np0005593232 podman[88922]: 2026-01-23 09:02:26.863157529 +0000 UTC m=+1.956809431 container remove 01656345f1ba64bb78ac84653a138d019c2e5adb81c6734b24f36759b9672feb (image=quay.io/ceph/ceph:v18, name=elated_euler, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 23 04:02:26 np0005593232 systemd[1]: libpod-conmon-01656345f1ba64bb78ac84653a138d019c2e5adb81c6734b24f36759b9672feb.scope: Deactivated successfully.
Jan 23 04:02:27 np0005593232 python3[89002]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:02:27 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.nrjyzu started
Jan 23 04:02:27 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from mgr.compute-2.nrjyzu 192.168.122.102:0/1942145204; not ready for session (expect reconnect)
Jan 23 04:02:27 np0005593232 podman[89003]: 2026-01-23 09:02:27.231081968 +0000 UTC m=+0.043736843 container create aafd52c3e9ec056e54f5bd30f540fa9994a8fdb7fd70a6056a1706c7980d8451 (image=quay.io/ceph/ceph:v18, name=boring_bouman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 04:02:27 np0005593232 podman[89003]: 2026-01-23 09:02:27.211418064 +0000 UTC m=+0.024072959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:02:27 np0005593232 systemd[1]: Started libpod-conmon-aafd52c3e9ec056e54f5bd30f540fa9994a8fdb7fd70a6056a1706c7980d8451.scope.
Jan 23 04:02:27 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:27 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10eb5709ae9b5ebd016f8f3edd9fcda4f221f64dc855ced451aa142c2e5e182e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:27 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10eb5709ae9b5ebd016f8f3edd9fcda4f221f64dc855ced451aa142c2e5e182e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:27 np0005593232 podman[89003]: 2026-01-23 09:02:27.353234156 +0000 UTC m=+0.165889051 container init aafd52c3e9ec056e54f5bd30f540fa9994a8fdb7fd70a6056a1706c7980d8451 (image=quay.io/ceph/ceph:v18, name=boring_bouman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:02:27 np0005593232 podman[89003]: 2026-01-23 09:02:27.358748952 +0000 UTC m=+0.171403817 container start aafd52c3e9ec056e54f5bd30f540fa9994a8fdb7fd70a6056a1706c7980d8451 (image=quay.io/ceph/ceph:v18, name=boring_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 04:02:27 np0005593232 podman[89003]: 2026-01-23 09:02:27.361718405 +0000 UTC m=+0.174373310 container attach aafd52c3e9ec056e54f5bd30f540fa9994a8fdb7fd70a6056a1706c7980d8451 (image=quay.io/ceph/ceph:v18, name=boring_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Jan 23 04:02:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:02:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:02:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:27 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/174650588' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 23 04:02:27 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:27 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:27 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.yntofk(active, since 2m), standbys: compute-2.nrjyzu
Jan 23 04:02:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.nrjyzu", "id": "compute-2.nrjyzu"} v 0) v1
Jan 23 04:02:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mgr metadata", "who": "compute-2.nrjyzu", "id": "compute-2.nrjyzu"}]: dispatch
Jan 23 04:02:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Jan 23 04:02:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4038784885' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 23 04:02:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v105: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 23 04:02:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:02:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 23 04:02:28 np0005593232 ceph-mon[74423]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 04:02:29 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4038784885' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 23 04:02:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Jan 23 04:02:29 np0005593232 boring_bouman[89018]: enabled application 'rbd' on pool 'backups'
Jan 23 04:02:29 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Jan 23 04:02:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 23 04:02:29 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 04:02:29 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 04:02:29 np0005593232 systemd[1]: libpod-aafd52c3e9ec056e54f5bd30f540fa9994a8fdb7fd70a6056a1706c7980d8451.scope: Deactivated successfully.
Jan 23 04:02:29 np0005593232 podman[89003]: 2026-01-23 09:02:29.120249743 +0000 UTC m=+1.932904648 container died aafd52c3e9ec056e54f5bd30f540fa9994a8fdb7fd70a6056a1706c7980d8451 (image=quay.io/ceph/ceph:v18, name=boring_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 04:02:29 np0005593232 systemd[1]: var-lib-containers-storage-overlay-10eb5709ae9b5ebd016f8f3edd9fcda4f221f64dc855ced451aa142c2e5e182e-merged.mount: Deactivated successfully.
Jan 23 04:02:29 np0005593232 podman[89003]: 2026-01-23 09:02:29.161432283 +0000 UTC m=+1.974087158 container remove aafd52c3e9ec056e54f5bd30f540fa9994a8fdb7fd70a6056a1706c7980d8451 (image=quay.io/ceph/ceph:v18, name=boring_bouman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:02:29 np0005593232 systemd[1]: libpod-conmon-aafd52c3e9ec056e54f5bd30f540fa9994a8fdb7fd70a6056a1706c7980d8451.scope: Deactivated successfully.
Jan 23 04:02:29 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/4038784885' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 23 04:02:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Jan 23 04:02:29 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 23 04:02:29 np0005593232 python3[89083]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:02:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:02:29 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:02:29 np0005593232 podman[89084]: 2026-01-23 09:02:29.495445656 +0000 UTC m=+0.039804621 container create e65650d2f2af3ea1a693d7874f6efd2f8645e032bfee3f7a44f357dcf0242ca4 (image=quay.io/ceph/ceph:v18, name=frosty_cray, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:02:29 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:29 np0005593232 systemd[1]: Started libpod-conmon-e65650d2f2af3ea1a693d7874f6efd2f8645e032bfee3f7a44f357dcf0242ca4.scope.
Jan 23 04:02:29 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:29 np0005593232 podman[89084]: 2026-01-23 09:02:29.477284175 +0000 UTC m=+0.021643160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:02:29 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa19cb41b41e56a34c8fddf7a40c0c9d2667496be08f378c615f846572f7aacf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:29 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa19cb41b41e56a34c8fddf7a40c0c9d2667496be08f378c615f846572f7aacf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:29 np0005593232 podman[89084]: 2026-01-23 09:02:29.587729044 +0000 UTC m=+0.132088009 container init e65650d2f2af3ea1a693d7874f6efd2f8645e032bfee3f7a44f357dcf0242ca4 (image=quay.io/ceph/ceph:v18, name=frosty_cray, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:02:29 np0005593232 podman[89084]: 2026-01-23 09:02:29.595165004 +0000 UTC m=+0.139523969 container start e65650d2f2af3ea1a693d7874f6efd2f8645e032bfee3f7a44f357dcf0242ca4 (image=quay.io/ceph/ceph:v18, name=frosty_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 04:02:29 np0005593232 podman[89084]: 2026-01-23 09:02:29.598452596 +0000 UTC m=+0.142811561 container attach e65650d2f2af3ea1a693d7874f6efd2f8645e032bfee3f7a44f357dcf0242ca4 (image=quay.io/ceph/ceph:v18, name=frosty_cray, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:02:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v107: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 23 04:02:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Jan 23 04:02:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1670773661' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 23 04:02:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 23 04:02:30 np0005593232 ceph-mon[74423]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 04:02:30 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/4038784885' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 23 04:02:30 np0005593232 ceph-mon[74423]: from='osd.2 [v2:192.168.122.102:6800/2199131998,v1:192.168.122.102:6801/2199131998]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 23 04:02:30 np0005593232 ceph-mon[74423]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 23 04:02:30 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:30 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:30 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/1670773661' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 23 04:02:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 23 04:02:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1670773661' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 23 04:02:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e29 e29: 3 total, 2 up, 3 in
Jan 23 04:02:30 np0005593232 frosty_cray[89121]: enabled application 'rbd' on pool 'images'
Jan 23 04:02:30 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 2 up, 3 in
Jan 23 04:02:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 23 04:02:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 04:02:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]} v 0) v1
Jan 23 04:02:30 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 04:02:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 23 04:02:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e29 create-or-move crush item name 'osd.2' initial_weight 0.0068000000000000005 at location {host=compute-2,root=default}
Jan 23 04:02:30 np0005593232 systemd[1]: libpod-e65650d2f2af3ea1a693d7874f6efd2f8645e032bfee3f7a44f357dcf0242ca4.scope: Deactivated successfully.
Jan 23 04:02:30 np0005593232 podman[89084]: 2026-01-23 09:02:30.343624984 +0000 UTC m=+0.887983959 container died e65650d2f2af3ea1a693d7874f6efd2f8645e032bfee3f7a44f357dcf0242ca4 (image=quay.io/ceph/ceph:v18, name=frosty_cray, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:02:30 np0005593232 systemd[1]: var-lib-containers-storage-overlay-fa19cb41b41e56a34c8fddf7a40c0c9d2667496be08f378c615f846572f7aacf-merged.mount: Deactivated successfully.
Jan 23 04:02:30 np0005593232 podman[89084]: 2026-01-23 09:02:30.381149071 +0000 UTC m=+0.925508036 container remove e65650d2f2af3ea1a693d7874f6efd2f8645e032bfee3f7a44f357dcf0242ca4 (image=quay.io/ceph/ceph:v18, name=frosty_cray, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 04:02:30 np0005593232 systemd[1]: libpod-conmon-e65650d2f2af3ea1a693d7874f6efd2f8645e032bfee3f7a44f357dcf0242ca4.scope: Deactivated successfully.
Jan 23 04:02:30 np0005593232 python3[89341]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:02:30 np0005593232 podman[89342]: 2026-01-23 09:02:30.779488525 +0000 UTC m=+0.039664787 container create 7ee41efc1cdcf7b6f2905b1fc10dccf5a6b4b50eeea9a333d8d6e0b57c7dd8e0 (image=quay.io/ceph/ceph:v18, name=jovial_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 04:02:30 np0005593232 systemd[1]: Started libpod-conmon-7ee41efc1cdcf7b6f2905b1fc10dccf5a6b4b50eeea9a333d8d6e0b57c7dd8e0.scope.
Jan 23 04:02:30 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:30 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a9f4264aa93cacac9a68dea86ef415612fb6d67927ee190a0baba4db664be8e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:30 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a9f4264aa93cacac9a68dea86ef415612fb6d67927ee190a0baba4db664be8e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:30 np0005593232 podman[89342]: 2026-01-23 09:02:30.830852522 +0000 UTC m=+0.091028794 container init 7ee41efc1cdcf7b6f2905b1fc10dccf5a6b4b50eeea9a333d8d6e0b57c7dd8e0 (image=quay.io/ceph/ceph:v18, name=jovial_mahavira, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:02:30 np0005593232 podman[89342]: 2026-01-23 09:02:30.836577323 +0000 UTC m=+0.096753585 container start 7ee41efc1cdcf7b6f2905b1fc10dccf5a6b4b50eeea9a333d8d6e0b57c7dd8e0 (image=quay.io/ceph/ceph:v18, name=jovial_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 04:02:30 np0005593232 podman[89342]: 2026-01-23 09:02:30.839479864 +0000 UTC m=+0.099656126 container attach 7ee41efc1cdcf7b6f2905b1fc10dccf5a6b4b50eeea9a333d8d6e0b57c7dd8e0 (image=quay.io/ceph/ceph:v18, name=jovial_mahavira, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:02:30 np0005593232 podman[89342]: 2026-01-23 09:02:30.761477588 +0000 UTC m=+0.021653870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:02:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:02:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 23 04:02:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:02:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Jan 23 04:02:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3627784438' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 23 04:02:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 23 04:02:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e30 e30: 3 total, 2 up, 3 in
Jan 23 04:02:31 np0005593232 ceph-mon[74423]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 23 04:02:31 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/1670773661' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 23 04:02:31 np0005593232 ceph-mon[74423]: from='osd.2 [v2:192.168.122.102:6800/2199131998,v1:192.168.122.102:6801/2199131998]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 23 04:02:31 np0005593232 ceph-mon[74423]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 23 04:02:31 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 30 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=30 pruub=12.598067284s) [] r=-1 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 active pruub 87.231460571s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:31 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 30 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=30 pruub=12.598067284s) [] r=-1 lpr=30 pi=[20,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.231460571s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:31 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 30 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=30 pruub=15.978895187s) [] r=-1 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 active pruub 90.612724304s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:31 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 30 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=30 pruub=15.978895187s) [] r=-1 lpr=30 pi=[15,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.612724304s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:31 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 2 up, 3 in
Jan 23 04:02:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 23 04:02:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 04:02:31 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 04:02:31 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2199131998; not ready for session (expect reconnect)
Jan 23 04:02:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 23 04:02:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 04:02:31 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 04:02:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:02:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:02:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v110: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 23 04:02:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 23 04:02:32 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2199131998; not ready for session (expect reconnect)
Jan 23 04:02:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 23 04:02:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 04:02:32 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 04:02:32 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:32 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/3627784438' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 23 04:02:32 np0005593232 ceph-mon[74423]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 23 04:02:32 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:32 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:32 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3627784438' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 23 04:02:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e31 e31: 3 total, 2 up, 3 in
Jan 23 04:02:32 np0005593232 jovial_mahavira[89358]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 23 04:02:32 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 2 up, 3 in
Jan 23 04:02:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 23 04:02:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 04:02:32 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 04:02:32 np0005593232 systemd[1]: libpod-7ee41efc1cdcf7b6f2905b1fc10dccf5a6b4b50eeea9a333d8d6e0b57c7dd8e0.scope: Deactivated successfully.
Jan 23 04:02:32 np0005593232 podman[89342]: 2026-01-23 09:02:32.580921941 +0000 UTC m=+1.841098203 container died 7ee41efc1cdcf7b6f2905b1fc10dccf5a6b4b50eeea9a333d8d6e0b57c7dd8e0 (image=quay.io/ceph/ceph:v18, name=jovial_mahavira, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:02:32 np0005593232 systemd[1]: var-lib-containers-storage-overlay-0a9f4264aa93cacac9a68dea86ef415612fb6d67927ee190a0baba4db664be8e-merged.mount: Deactivated successfully.
Jan 23 04:02:32 np0005593232 podman[89342]: 2026-01-23 09:02:32.625887847 +0000 UTC m=+1.886064109 container remove 7ee41efc1cdcf7b6f2905b1fc10dccf5a6b4b50eeea9a333d8d6e0b57c7dd8e0 (image=quay.io/ceph/ceph:v18, name=jovial_mahavira, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:02:32 np0005593232 systemd[1]: libpod-conmon-7ee41efc1cdcf7b6f2905b1fc10dccf5a6b4b50eeea9a333d8d6e0b57c7dd8e0.scope: Deactivated successfully.
Jan 23 04:02:32 np0005593232 python3[89418]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:02:32 np0005593232 podman[89419]: 2026-01-23 09:02:32.956777403 +0000 UTC m=+0.040688527 container create e18f97fee11465c34625e981c4c9736aac368e604421644ecbb071e3331554a7 (image=quay.io/ceph/ceph:v18, name=infallible_darwin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 04:02:32 np0005593232 systemd[1]: Started libpod-conmon-e18f97fee11465c34625e981c4c9736aac368e604421644ecbb071e3331554a7.scope.
Jan 23 04:02:33 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43ea2ea810b2ba57e82a046b4b68cff926852ea3ab58281a5a714d623f0825d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43ea2ea810b2ba57e82a046b4b68cff926852ea3ab58281a5a714d623f0825d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:33 np0005593232 podman[89419]: 2026-01-23 09:02:33.026380882 +0000 UTC m=+0.110292006 container init e18f97fee11465c34625e981c4c9736aac368e604421644ecbb071e3331554a7 (image=quay.io/ceph/ceph:v18, name=infallible_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 04:02:33 np0005593232 podman[89419]: 2026-01-23 09:02:33.031759294 +0000 UTC m=+0.115670418 container start e18f97fee11465c34625e981c4c9736aac368e604421644ecbb071e3331554a7 (image=quay.io/ceph/ceph:v18, name=infallible_darwin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 23 04:02:33 np0005593232 podman[89419]: 2026-01-23 09:02:32.939011563 +0000 UTC m=+0.022922707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:02:33 np0005593232 podman[89419]: 2026-01-23 09:02:33.034798129 +0000 UTC m=+0.118709253 container attach e18f97fee11465c34625e981c4c9736aac368e604421644ecbb071e3331554a7 (image=quay.io/ceph/ceph:v18, name=infallible_darwin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 23 04:02:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:02:33 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2199131998; not ready for session (expect reconnect)
Jan 23 04:02:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 23 04:02:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 04:02:33 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 04:02:33 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/3627784438' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 23 04:02:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Jan 23 04:02:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1955789217' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 23 04:02:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v112: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 23 04:02:34 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.wsgywz started
Jan 23 04:02:34 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from mgr.compute-1.wsgywz 192.168.122.101:0/266564918; not ready for session (expect reconnect)
Jan 23 04:02:34 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2199131998; not ready for session (expect reconnect)
Jan 23 04:02:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 23 04:02:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 04:02:34 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 04:02:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 23 04:02:34 np0005593232 ceph-mon[74423]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 04:02:34 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/1955789217' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 23 04:02:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1955789217' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 23 04:02:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e32 e32: 3 total, 2 up, 3 in
Jan 23 04:02:34 np0005593232 infallible_darwin[89434]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 23 04:02:34 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 2 up, 3 in
Jan 23 04:02:34 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.yntofk(active, since 2m), standbys: compute-2.nrjyzu, compute-1.wsgywz
Jan 23 04:02:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 23 04:02:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 04:02:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.wsgywz", "id": "compute-1.wsgywz"} v 0) v1
Jan 23 04:02:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mgr metadata", "who": "compute-1.wsgywz", "id": "compute-1.wsgywz"}]: dispatch
Jan 23 04:02:34 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 04:02:34 np0005593232 systemd[1]: libpod-e18f97fee11465c34625e981c4c9736aac368e604421644ecbb071e3331554a7.scope: Deactivated successfully.
Jan 23 04:02:34 np0005593232 podman[89419]: 2026-01-23 09:02:34.867985409 +0000 UTC m=+1.951896533 container died e18f97fee11465c34625e981c4c9736aac368e604421644ecbb071e3331554a7 (image=quay.io/ceph/ceph:v18, name=infallible_darwin, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:02:34 np0005593232 systemd[1]: var-lib-containers-storage-overlay-43ea2ea810b2ba57e82a046b4b68cff926852ea3ab58281a5a714d623f0825d0-merged.mount: Deactivated successfully.
Jan 23 04:02:34 np0005593232 podman[89419]: 2026-01-23 09:02:34.913642494 +0000 UTC m=+1.997553608 container remove e18f97fee11465c34625e981c4c9736aac368e604421644ecbb071e3331554a7 (image=quay.io/ceph/ceph:v18, name=infallible_darwin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:02:34 np0005593232 systemd[1]: libpod-conmon-e18f97fee11465c34625e981c4c9736aac368e604421644ecbb071e3331554a7.scope: Deactivated successfully.
Jan 23 04:02:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:02:35 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:02:35 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Jan 23 04:02:35 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 23 04:02:35 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Jan 23 04:02:35 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Jan 23 04:02:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Jan 23 04:02:35 np0005593232 ceph-mgr[74726]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 23 04:02:35 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 23 04:02:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:02:35 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:02:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:02:35 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:02:35 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 23 04:02:35 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 23 04:02:35 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 23 04:02:35 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 23 04:02:35 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 23 04:02:35 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 23 04:02:35 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2199131998; not ready for session (expect reconnect)
Jan 23 04:02:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 23 04:02:35 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 04:02:35 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 04:02:35 np0005593232 ceph-mon[74423]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 23 04:02:35 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/1955789217' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 23 04:02:35 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:35 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:35 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 23 04:02:35 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:02:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v114: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 14 GiB / 14 GiB avail
Jan 23 04:02:35 np0005593232 python3[89775]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 04:02:36 np0005593232 python3[89982]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769158955.7155437-37294-126473401644475/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:02:36 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.conf
Jan 23 04:02:36 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.conf
Jan 23 04:02:36 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.conf
Jan 23 04:02:36 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.conf
Jan 23 04:02:36 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.conf
Jan 23 04:02:36 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.conf
Jan 23 04:02:36 np0005593232 ceph-mgr[74726]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2199131998; not ready for session (expect reconnect)
Jan 23 04:02:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 23 04:02:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 04:02:36 np0005593232 ceph-mgr[74726]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 23 04:02:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 23 04:02:36 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 23 04:02:36 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 23 04:02:36 np0005593232 ceph-mon[74423]: Adjusting osd_memory_target on compute-2 to 127.9M
Jan 23 04:02:36 np0005593232 ceph-mon[74423]: Unable to set osd_memory_target on compute-2 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 23 04:02:36 np0005593232 ceph-mon[74423]: Updating compute-0:/etc/ceph/ceph.conf
Jan 23 04:02:36 np0005593232 ceph-mon[74423]: Updating compute-1:/etc/ceph/ceph.conf
Jan 23 04:02:36 np0005593232 ceph-mon[74423]: Updating compute-2:/etc/ceph/ceph.conf
Jan 23 04:02:36 np0005593232 ceph-mon[74423]: OSD bench result of 5183.157191 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 23 04:02:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Jan 23 04:02:36 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/2199131998,v1:192.168.122.102:6801/2199131998] boot
Jan 23 04:02:36 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Jan 23 04:02:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Jan 23 04:02:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 23 04:02:36 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 33 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.460808754s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.231460571s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:36 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 33 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=33 pruub=10.841969490s) [2] r=-1 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.612724304s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:36 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 33 pg[5.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=33 pruub=7.460680485s) [2] r=-1 lpr=33 pi=[20,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.231460571s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:36 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 33 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=33 pruub=10.841874123s) [2] r=-1 lpr=33 pi=[15,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.612724304s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:36 np0005593232 python3[90319]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 04:02:36 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:02:36
Jan 23 04:02:36 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:02:36 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:02:36 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['images', '.mgr', 'cephfs.cephfs.meta', 'vms', 'backups', 'cephfs.cephfs.data', 'volumes']
Jan 23 04:02:36 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.009242174413735343 quantized to 1 (current 1)
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 15023996928
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:02:37 np0005593232 python3[90563]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769158956.6337483-37308-230062769105872/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=3aae307f56819c17c1e42d999bca327bbbc6767f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7916070c-c319-431b-8f5b-91c1c4f58a6a does not exist
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev f2f7ed28-a996-41c8-b791-388c2168fd3d does not exist
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b7269d55-086b-4b6d-9a81-a3ba028adf6a does not exist
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: Updating compute-0:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.conf
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: Updating compute-2:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.conf
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: Updating compute-1:/var/lib/ceph/e1533653-0a5a-584c-b34b-8689f0d32e77/config/ceph.conf
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: Cluster is now healthy
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: osd.2 [v2:192.168.122.102:6800/2199131998,v1:192.168.122.102:6801/2199131998] boot
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [progress INFO root] update: starting ev dc7ec880-e534-4c78-aab8-1d8145e0e8a9 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:02:37 np0005593232 python3[90713]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:02:37 np0005593232 podman[90767]: 2026-01-23 09:02:37.774165646 +0000 UTC m=+0.040220843 container create c7d591d9e6e33c142061c39877a61a26f1aba96a19cf45caca8c1dc7f8aeeec4 (image=quay.io/ceph/ceph:v18, name=naughty_rubin, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:02:37 np0005593232 systemd[1]: Started libpod-conmon-c7d591d9e6e33c142061c39877a61a26f1aba96a19cf45caca8c1dc7f8aeeec4.scope.
Jan 23 04:02:37 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cfdbd85ecec4178d88c86e745cea674702093752176d6b5c192202c6b20b2a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cfdbd85ecec4178d88c86e745cea674702093752176d6b5c192202c6b20b2a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cfdbd85ecec4178d88c86e745cea674702093752176d6b5c192202c6b20b2a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:37 np0005593232 podman[90767]: 2026-01-23 09:02:37.850987579 +0000 UTC m=+0.117042766 container init c7d591d9e6e33c142061c39877a61a26f1aba96a19cf45caca8c1dc7f8aeeec4 (image=quay.io/ceph/ceph:v18, name=naughty_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 04:02:37 np0005593232 podman[90767]: 2026-01-23 09:02:37.757416325 +0000 UTC m=+0.023471542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:02:37 np0005593232 podman[90767]: 2026-01-23 09:02:37.857465721 +0000 UTC m=+0.123520898 container start c7d591d9e6e33c142061c39877a61a26f1aba96a19cf45caca8c1dc7f8aeeec4 (image=quay.io/ceph/ceph:v18, name=naughty_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:02:37 np0005593232 podman[90767]: 2026-01-23 09:02:37.861540536 +0000 UTC m=+0.127595753 container attach c7d591d9e6e33c142061c39877a61a26f1aba96a19cf45caca8c1dc7f8aeeec4 (image=quay.io/ceph/ceph:v18, name=naughty_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v117: 7 pgs: 2 peering, 5 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 23 04:02:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 04:02:37 np0005593232 ceph-mgr[74726]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Jan 23 04:02:38 np0005593232 podman[90850]: 2026-01-23 09:02:38.068122302 +0000 UTC m=+0.040005697 container create 47cd1343e79e8a6c246370694b1245eac41a776f632f32079a98eb2779ab660b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shtern, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:02:38 np0005593232 systemd[1]: Started libpod-conmon-47cd1343e79e8a6c246370694b1245eac41a776f632f32079a98eb2779ab660b.scope.
Jan 23 04:02:38 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:38 np0005593232 podman[90850]: 2026-01-23 09:02:38.124629733 +0000 UTC m=+0.096513158 container init 47cd1343e79e8a6c246370694b1245eac41a776f632f32079a98eb2779ab660b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:02:38 np0005593232 podman[90850]: 2026-01-23 09:02:38.130735095 +0000 UTC m=+0.102618490 container start 47cd1343e79e8a6c246370694b1245eac41a776f632f32079a98eb2779ab660b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 04:02:38 np0005593232 flamboyant_shtern[90866]: 167 167
Jan 23 04:02:38 np0005593232 systemd[1]: libpod-47cd1343e79e8a6c246370694b1245eac41a776f632f32079a98eb2779ab660b.scope: Deactivated successfully.
Jan 23 04:02:38 np0005593232 podman[90850]: 2026-01-23 09:02:38.13481344 +0000 UTC m=+0.106696835 container attach 47cd1343e79e8a6c246370694b1245eac41a776f632f32079a98eb2779ab660b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shtern, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 04:02:38 np0005593232 podman[90850]: 2026-01-23 09:02:38.135421857 +0000 UTC m=+0.107305252 container died 47cd1343e79e8a6c246370694b1245eac41a776f632f32079a98eb2779ab660b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shtern, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:02:38 np0005593232 podman[90850]: 2026-01-23 09:02:38.051681359 +0000 UTC m=+0.023564774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:02:38 np0005593232 systemd[1]: var-lib-containers-storage-overlay-8f9c4d3ff8099f6b256334b093ccfa03f7ab1dc681041de1f6eaf24774a92f43-merged.mount: Deactivated successfully.
Jan 23 04:02:38 np0005593232 podman[90850]: 2026-01-23 09:02:38.193541943 +0000 UTC m=+0.165425338 container remove 47cd1343e79e8a6c246370694b1245eac41a776f632f32079a98eb2779ab660b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shtern, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:02:38 np0005593232 systemd[1]: libpod-conmon-47cd1343e79e8a6c246370694b1245eac41a776f632f32079a98eb2779ab660b.scope: Deactivated successfully.
Jan 23 04:02:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:02:38 np0005593232 podman[90911]: 2026-01-23 09:02:38.33834573 +0000 UTC m=+0.039844683 container create fa81a8ae26e40fd3dec8f52c6dc83a2ee248c6b90baf9aa76f8a77913a6c0672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_chaum, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 04:02:38 np0005593232 systemd[1]: Started libpod-conmon-fa81a8ae26e40fd3dec8f52c6dc83a2ee248c6b90baf9aa76f8a77913a6c0672.scope.
Jan 23 04:02:38 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7510e30088a67841f38b65be517b7c315454a7a1d4b12706e0e83a21d656b512/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7510e30088a67841f38b65be517b7c315454a7a1d4b12706e0e83a21d656b512/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7510e30088a67841f38b65be517b7c315454a7a1d4b12706e0e83a21d656b512/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7510e30088a67841f38b65be517b7c315454a7a1d4b12706e0e83a21d656b512/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7510e30088a67841f38b65be517b7c315454a7a1d4b12706e0e83a21d656b512/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:38 np0005593232 podman[90911]: 2026-01-23 09:02:38.323153892 +0000 UTC m=+0.024652865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:02:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Jan 23 04:02:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2470226038' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 23 04:02:38 np0005593232 podman[90911]: 2026-01-23 09:02:38.427048507 +0000 UTC m=+0.128547490 container init fa81a8ae26e40fd3dec8f52c6dc83a2ee248c6b90baf9aa76f8a77913a6c0672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_chaum, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 23 04:02:38 np0005593232 podman[90911]: 2026-01-23 09:02:38.438017416 +0000 UTC m=+0.139516369 container start fa81a8ae26e40fd3dec8f52c6dc83a2ee248c6b90baf9aa76f8a77913a6c0672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_chaum, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:02:38 np0005593232 podman[90911]: 2026-01-23 09:02:38.441165864 +0000 UTC m=+0.142664837 container attach fa81a8ae26e40fd3dec8f52c6dc83a2ee248c6b90baf9aa76f8a77913a6c0672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_chaum, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:02:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2470226038' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 23 04:02:38 np0005593232 naughty_rubin[90806]: 
Jan 23 04:02:38 np0005593232 naughty_rubin[90806]: [global]
Jan 23 04:02:38 np0005593232 naughty_rubin[90806]: #011fsid = e1533653-0a5a-584c-b34b-8689f0d32e77
Jan 23 04:02:38 np0005593232 naughty_rubin[90806]: #011mon_host = 192.168.122.100
Jan 23 04:02:38 np0005593232 systemd[1]: libpod-c7d591d9e6e33c142061c39877a61a26f1aba96a19cf45caca8c1dc7f8aeeec4.scope: Deactivated successfully.
Jan 23 04:02:38 np0005593232 podman[90767]: 2026-01-23 09:02:38.521261399 +0000 UTC m=+0.787316626 container died c7d591d9e6e33c142061c39877a61a26f1aba96a19cf45caca8c1dc7f8aeeec4 (image=quay.io/ceph/ceph:v18, name=naughty_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:02:38 np0005593232 systemd[1]: var-lib-containers-storage-overlay-98cfdbd85ecec4178d88c86e745cea674702093752176d6b5c192202c6b20b2a-merged.mount: Deactivated successfully.
Jan 23 04:02:38 np0005593232 podman[90767]: 2026-01-23 09:02:38.568944762 +0000 UTC m=+0.834999949 container remove c7d591d9e6e33c142061c39877a61a26f1aba96a19cf45caca8c1dc7f8aeeec4 (image=quay.io/ceph/ceph:v18, name=naughty_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 23 04:02:38 np0005593232 systemd[1]: libpod-conmon-c7d591d9e6e33c142061c39877a61a26f1aba96a19cf45caca8c1dc7f8aeeec4.scope: Deactivated successfully.
Jan 23 04:02:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 23 04:02:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 23 04:02:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 04:02:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Jan 23 04:02:38 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Jan 23 04:02:38 np0005593232 ceph-mgr[74726]: [progress INFO root] update: starting ev 92704d0d-701f-4c62-a830-f121333b215a (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 23 04:02:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Jan 23 04:02:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 04:02:38 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 23 04:02:38 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 04:02:38 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 04:02:38 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/2470226038' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 23 04:02:38 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/2470226038' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 23 04:02:38 np0005593232 python3[90970]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:02:39 np0005593232 podman[90971]: 2026-01-23 09:02:38.911026063 +0000 UTC m=+0.024330456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:02:39 np0005593232 modest_chaum[90927]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:02:39 np0005593232 modest_chaum[90927]: --> relative data size: 1.0
Jan 23 04:02:39 np0005593232 modest_chaum[90927]: --> All data devices are unavailable
Jan 23 04:02:39 np0005593232 systemd[1]: libpod-fa81a8ae26e40fd3dec8f52c6dc83a2ee248c6b90baf9aa76f8a77913a6c0672.scope: Deactivated successfully.
Jan 23 04:02:39 np0005593232 conmon[90927]: conmon fa81a8ae26e40fd3dec8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fa81a8ae26e40fd3dec8f52c6dc83a2ee248c6b90baf9aa76f8a77913a6c0672.scope/container/memory.events
Jan 23 04:02:39 np0005593232 podman[90971]: 2026-01-23 09:02:39.597561551 +0000 UTC m=+0.710865924 container create 3589ddbc8255dc7eff22cade25fafacb41916a935fbe18b95360220459bb796e (image=quay.io/ceph/ceph:v18, name=laughing_carver, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 04:02:39 np0005593232 podman[90911]: 2026-01-23 09:02:39.61281271 +0000 UTC m=+1.314311673 container died fa81a8ae26e40fd3dec8f52c6dc83a2ee248c6b90baf9aa76f8a77913a6c0672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 04:02:39 np0005593232 systemd[1]: Started libpod-conmon-3589ddbc8255dc7eff22cade25fafacb41916a935fbe18b95360220459bb796e.scope.
Jan 23 04:02:39 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 23 04:02:39 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626a0f250f8675fcd4ebc444e843a2ea12c2fad847eb19128bfaac1274f3eff7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:39 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626a0f250f8675fcd4ebc444e843a2ea12c2fad847eb19128bfaac1274f3eff7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:39 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626a0f250f8675fcd4ebc444e843a2ea12c2fad847eb19128bfaac1274f3eff7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:39 np0005593232 podman[90971]: 2026-01-23 09:02:39.687805731 +0000 UTC m=+0.801110124 container init 3589ddbc8255dc7eff22cade25fafacb41916a935fbe18b95360220459bb796e (image=quay.io/ceph/ceph:v18, name=laughing_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 23 04:02:39 np0005593232 podman[90971]: 2026-01-23 09:02:39.695814337 +0000 UTC m=+0.809118710 container start 3589ddbc8255dc7eff22cade25fafacb41916a935fbe18b95360220459bb796e (image=quay.io/ceph/ceph:v18, name=laughing_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 04:02:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 23 04:02:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Jan 23 04:02:39 np0005593232 podman[90971]: 2026-01-23 09:02:39.715939833 +0000 UTC m=+0.829244206 container attach 3589ddbc8255dc7eff22cade25fafacb41916a935fbe18b95360220459bb796e (image=quay.io/ceph/ceph:v18, name=laughing_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Jan 23 04:02:39 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Jan 23 04:02:39 np0005593232 ceph-mgr[74726]: [progress INFO root] update: starting ev 84ce9d90-4ff1-46c1-9562-651def63480a (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 23 04:02:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Jan 23 04:02:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 04:02:39 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7510e30088a67841f38b65be517b7c315454a7a1d4b12706e0e83a21d656b512-merged.mount: Deactivated successfully.
Jan 23 04:02:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 23 04:02:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 04:02:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 04:02:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 23 04:02:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 04:02:39 np0005593232 podman[90911]: 2026-01-23 09:02:39.833147703 +0000 UTC m=+1.534646656 container remove fa81a8ae26e40fd3dec8f52c6dc83a2ee248c6b90baf9aa76f8a77913a6c0672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_chaum, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:02:39 np0005593232 systemd[1]: libpod-conmon-fa81a8ae26e40fd3dec8f52c6dc83a2ee248c6b90baf9aa76f8a77913a6c0672.scope: Deactivated successfully.
Jan 23 04:02:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v120: 38 pgs: 31 unknown, 2 peering, 5 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:02:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 23 04:02:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 04:02:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Jan 23 04:02:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2903992486' entity='client.admin' 
Jan 23 04:02:40 np0005593232 laughing_carver[91006]: set ssl_option
Jan 23 04:02:40 np0005593232 systemd[1]: libpod-3589ddbc8255dc7eff22cade25fafacb41916a935fbe18b95360220459bb796e.scope: Deactivated successfully.
Jan 23 04:02:40 np0005593232 podman[90971]: 2026-01-23 09:02:40.452368406 +0000 UTC m=+1.565672779 container died 3589ddbc8255dc7eff22cade25fafacb41916a935fbe18b95360220459bb796e (image=quay.io/ceph/ceph:v18, name=laughing_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 04:02:40 np0005593232 systemd[1]: var-lib-containers-storage-overlay-626a0f250f8675fcd4ebc444e843a2ea12c2fad847eb19128bfaac1274f3eff7-merged.mount: Deactivated successfully.
Jan 23 04:02:40 np0005593232 podman[90971]: 2026-01-23 09:02:40.549767579 +0000 UTC m=+1.663071952 container remove 3589ddbc8255dc7eff22cade25fafacb41916a935fbe18b95360220459bb796e (image=quay.io/ceph/ceph:v18, name=laughing_carver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:02:40 np0005593232 podman[91171]: 2026-01-23 09:02:40.557854756 +0000 UTC m=+0.103756622 container create c607b93f5b437607e725dd62e323a87164fef8d783d9cfcbb9994748f356175c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:02:40 np0005593232 systemd[1]: libpod-conmon-3589ddbc8255dc7eff22cade25fafacb41916a935fbe18b95360220459bb796e.scope: Deactivated successfully.
Jan 23 04:02:40 np0005593232 podman[91171]: 2026-01-23 09:02:40.474587152 +0000 UTC m=+0.020489048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:02:40 np0005593232 systemd[1]: Started libpod-conmon-c607b93f5b437607e725dd62e323a87164fef8d783d9cfcbb9994748f356175c.scope.
Jan 23 04:02:40 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:40 np0005593232 podman[91171]: 2026-01-23 09:02:40.622602309 +0000 UTC m=+0.168504175 container init c607b93f5b437607e725dd62e323a87164fef8d783d9cfcbb9994748f356175c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:02:40 np0005593232 podman[91171]: 2026-01-23 09:02:40.629777241 +0000 UTC m=+0.175679107 container start c607b93f5b437607e725dd62e323a87164fef8d783d9cfcbb9994748f356175c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_heyrovsky, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 04:02:40 np0005593232 eloquent_heyrovsky[91198]: 167 167
Jan 23 04:02:40 np0005593232 systemd[1]: libpod-c607b93f5b437607e725dd62e323a87164fef8d783d9cfcbb9994748f356175c.scope: Deactivated successfully.
Jan 23 04:02:40 np0005593232 conmon[91198]: conmon c607b93f5b437607e725 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c607b93f5b437607e725dd62e323a87164fef8d783d9cfcbb9994748f356175c.scope/container/memory.events
Jan 23 04:02:40 np0005593232 podman[91171]: 2026-01-23 09:02:40.632804636 +0000 UTC m=+0.178706622 container attach c607b93f5b437607e725dd62e323a87164fef8d783d9cfcbb9994748f356175c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:02:40 np0005593232 podman[91171]: 2026-01-23 09:02:40.645605467 +0000 UTC m=+0.191507353 container died c607b93f5b437607e725dd62e323a87164fef8d783d9cfcbb9994748f356175c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 04:02:40 np0005593232 systemd[1]: var-lib-containers-storage-overlay-8b41640f02d1da05f6a262861c4d0b7d34cc2c3f427bafc2a492013cf4ef225a-merged.mount: Deactivated successfully.
Jan 23 04:02:40 np0005593232 podman[91171]: 2026-01-23 09:02:40.684221264 +0000 UTC m=+0.230123130 container remove c607b93f5b437607e725dd62e323a87164fef8d783d9cfcbb9994748f356175c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_heyrovsky, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 04:02:40 np0005593232 systemd[1]: libpod-conmon-c607b93f5b437607e725dd62e323a87164fef8d783d9cfcbb9994748f356175c.scope: Deactivated successfully.
Jan 23 04:02:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 23 04:02:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 23 04:02:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 04:02:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Jan 23 04:02:40 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Jan 23 04:02:40 np0005593232 ceph-mgr[74726]: [progress INFO root] update: starting ev 2651791f-fa12-45a1-9f7d-6187ee21404e (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 23 04:02:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0) v1
Jan 23 04:02:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 04:02:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 04:02:40 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/2903992486' entity='client.admin' 
Jan 23 04:02:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 23 04:02:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 04:02:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 04:02:40 np0005593232 podman[91249]: 2026-01-23 09:02:40.854692872 +0000 UTC m=+0.038728811 container create 2447714d304a6bb09d1117e6a81c055b1bf0cc54f94cc8b041cdf56130434fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_thompson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 23 04:02:40 np0005593232 systemd[1]: Started libpod-conmon-2447714d304a6bb09d1117e6a81c055b1bf0cc54f94cc8b041cdf56130434fad.scope.
Jan 23 04:02:40 np0005593232 python3[91243]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:02:40 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61b2af1ec6a847aa098190604156efa737018ca591456b3ac295e11d74ada8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61b2af1ec6a847aa098190604156efa737018ca591456b3ac295e11d74ada8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61b2af1ec6a847aa098190604156efa737018ca591456b3ac295e11d74ada8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61b2af1ec6a847aa098190604156efa737018ca591456b3ac295e11d74ada8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:40 np0005593232 podman[91249]: 2026-01-23 09:02:40.838651281 +0000 UTC m=+0.022687240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:02:40 np0005593232 podman[91249]: 2026-01-23 09:02:40.939069588 +0000 UTC m=+0.123105547 container init 2447714d304a6bb09d1117e6a81c055b1bf0cc54f94cc8b041cdf56130434fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_thompson, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 04:02:40 np0005593232 podman[91249]: 2026-01-23 09:02:40.9451723 +0000 UTC m=+0.129208239 container start 2447714d304a6bb09d1117e6a81c055b1bf0cc54f94cc8b041cdf56130434fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_thompson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:02:40 np0005593232 podman[91249]: 2026-01-23 09:02:40.947983319 +0000 UTC m=+0.132019278 container attach 2447714d304a6bb09d1117e6a81c055b1bf0cc54f94cc8b041cdf56130434fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_thompson, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 04:02:40 np0005593232 podman[91267]: 2026-01-23 09:02:40.954234345 +0000 UTC m=+0.046994184 container create 8b935cf77ac459f0547643cc9e1a2f146db3acf577d2366b162cc7b1470c6a5d (image=quay.io/ceph/ceph:v18, name=frosty_hoover, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:02:40 np0005593232 systemd[1]: Started libpod-conmon-8b935cf77ac459f0547643cc9e1a2f146db3acf577d2366b162cc7b1470c6a5d.scope.
Jan 23 04:02:41 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cfe82048da9b5a78e1ef558bc652cb333cedfd80e967e0ffde8a92d3c4c30d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cfe82048da9b5a78e1ef558bc652cb333cedfd80e967e0ffde8a92d3c4c30d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55cfe82048da9b5a78e1ef558bc652cb333cedfd80e967e0ffde8a92d3c4c30d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:41 np0005593232 podman[91267]: 2026-01-23 09:02:41.027924289 +0000 UTC m=+0.120684138 container init 8b935cf77ac459f0547643cc9e1a2f146db3acf577d2366b162cc7b1470c6a5d (image=quay.io/ceph/ceph:v18, name=frosty_hoover, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 04:02:41 np0005593232 podman[91267]: 2026-01-23 09:02:40.935331763 +0000 UTC m=+0.028091662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:02:41 np0005593232 podman[91267]: 2026-01-23 09:02:41.033057694 +0000 UTC m=+0.125817533 container start 8b935cf77ac459f0547643cc9e1a2f146db3acf577d2366b162cc7b1470c6a5d (image=quay.io/ceph/ceph:v18, name=frosty_hoover, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:02:41 np0005593232 podman[91267]: 2026-01-23 09:02:41.035961256 +0000 UTC m=+0.128721095 container attach 8b935cf77ac459f0547643cc9e1a2f146db3acf577d2366b162cc7b1470c6a5d (image=quay.io/ceph/ceph:v18, name=frosty_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 37 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=37 pruub=8.005745888s) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active pruub 92.841362000s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 37 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=37 pruub=8.005745888s) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown pruub 92.841362000s@ mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 23 04:02:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Jan 23 04:02:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 23 04:02:41 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 23 04:02:41 np0005593232 ceph-mgr[74726]: [progress INFO root] update: starting ev d4be3bed-bac4-4aea-9c57-2fbdbb24cf97 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Jan 23 04:02:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Jan 23 04:02:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.1d( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.1e( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.1c( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.1f( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.1b( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.8( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.6( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.5( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.1a( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.a( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.9( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.4( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.3( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.1( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.19( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.7( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.2( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.b( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.d( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.e( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.c( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.f( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.10( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.12( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.11( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.13( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.14( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.15( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.16( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.17( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.18( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.1e( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.1d( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14307 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.1f( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.6( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.3( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.19( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.0( empty local-lis/les=37/38 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.c( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.b( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.15( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.16( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 38 pg[4.17( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:41 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 23 04:02:41 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 23 04:02:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 23 04:02:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:41 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Jan 23 04:02:41 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Jan 23 04:02:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 23 04:02:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:41 np0005593232 frosty_hoover[91287]: Scheduled rgw.rgw update...
Jan 23 04:02:41 np0005593232 frosty_hoover[91287]: Scheduled ingress.rgw.default update...
Jan 23 04:02:41 np0005593232 systemd[1]: libpod-8b935cf77ac459f0547643cc9e1a2f146db3acf577d2366b162cc7b1470c6a5d.scope: Deactivated successfully.
Jan 23 04:02:41 np0005593232 podman[91267]: 2026-01-23 09:02:41.797753933 +0000 UTC m=+0.890513802 container died 8b935cf77ac459f0547643cc9e1a2f146db3acf577d2366b162cc7b1470c6a5d (image=quay.io/ceph/ceph:v18, name=frosty_hoover, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 04:02:41 np0005593232 systemd[1]: var-lib-containers-storage-overlay-55cfe82048da9b5a78e1ef558bc652cb333cedfd80e967e0ffde8a92d3c4c30d-merged.mount: Deactivated successfully.
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]: {
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:    "0": [
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:        {
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:            "devices": [
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:                "/dev/loop3"
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:            ],
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:            "lv_name": "ceph_lv0",
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:            "lv_size": "7511998464",
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:            "name": "ceph_lv0",
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:            "tags": {
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:                "ceph.cluster_name": "ceph",
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:                "ceph.crush_device_class": "",
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:                "ceph.encrypted": "0",
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:                "ceph.osd_id": "0",
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:                "ceph.type": "block",
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:                "ceph.vdo": "0"
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:            },
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:            "type": "block",
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:            "vg_name": "ceph_vg0"
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:        }
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]:    ]
Jan 23 04:02:41 np0005593232 crazy_thompson[91268]: }
Jan 23 04:02:41 np0005593232 podman[91267]: 2026-01-23 09:02:41.847166824 +0000 UTC m=+0.939926663 container remove 8b935cf77ac459f0547643cc9e1a2f146db3acf577d2366b162cc7b1470c6a5d (image=quay.io/ceph/ceph:v18, name=frosty_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 04:02:41 np0005593232 systemd[1]: libpod-conmon-8b935cf77ac459f0547643cc9e1a2f146db3acf577d2366b162cc7b1470c6a5d.scope: Deactivated successfully.
Jan 23 04:02:41 np0005593232 systemd[1]: libpod-2447714d304a6bb09d1117e6a81c055b1bf0cc54f94cc8b041cdf56130434fad.scope: Deactivated successfully.
Jan 23 04:02:41 np0005593232 podman[91249]: 2026-01-23 09:02:41.866147248 +0000 UTC m=+1.050183187 container died 2447714d304a6bb09d1117e6a81c055b1bf0cc54f94cc8b041cdf56130434fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_thompson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 04:02:41 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e61b2af1ec6a847aa098190604156efa737018ca591456b3ac295e11d74ada8f-merged.mount: Deactivated successfully.
Jan 23 04:02:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v123: 69 pgs: 2 peering, 67 active+clean; 449 KiB data, 480 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:02:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 23 04:02:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 04:02:41 np0005593232 podman[91249]: 2026-01-23 09:02:41.983732088 +0000 UTC m=+1.167768027 container remove 2447714d304a6bb09d1117e6a81c055b1bf0cc54f94cc8b041cdf56130434fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_thompson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 23 04:02:41 np0005593232 systemd[1]: libpod-conmon-2447714d304a6bb09d1117e6a81c055b1bf0cc54f94cc8b041cdf56130434fad.scope: Deactivated successfully.
Jan 23 04:02:42 np0005593232 podman[91479]: 2026-01-23 09:02:42.638305797 +0000 UTC m=+0.047406506 container create 39337db817b8205cfa1d5f410b7d03c9bcc508c63043bbb5e2fb4a4f2243c59f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 04:02:42 np0005593232 systemd[1]: Started libpod-conmon-39337db817b8205cfa1d5f410b7d03c9bcc508c63043bbb5e2fb4a4f2243c59f.scope.
Jan 23 04:02:42 np0005593232 podman[91479]: 2026-01-23 09:02:42.614795645 +0000 UTC m=+0.023896394 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:02:42 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:42 np0005593232 podman[91479]: 2026-01-23 09:02:42.72367367 +0000 UTC m=+0.132774369 container init 39337db817b8205cfa1d5f410b7d03c9bcc508c63043bbb5e2fb4a4f2243c59f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 04:02:42 np0005593232 podman[91479]: 2026-01-23 09:02:42.731661715 +0000 UTC m=+0.140762414 container start 39337db817b8205cfa1d5f410b7d03c9bcc508c63043bbb5e2fb4a4f2243c59f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mclean, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 04:02:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 23 04:02:42 np0005593232 podman[91479]: 2026-01-23 09:02:42.736945414 +0000 UTC m=+0.146046133 container attach 39337db817b8205cfa1d5f410b7d03c9bcc508c63043bbb5e2fb4a4f2243c59f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mclean, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 23 04:02:42 np0005593232 frosty_mclean[91499]: 167 167
Jan 23 04:02:42 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Jan 23 04:02:42 np0005593232 systemd[1]: libpod-39337db817b8205cfa1d5f410b7d03c9bcc508c63043bbb5e2fb4a4f2243c59f.scope: Deactivated successfully.
Jan 23 04:02:42 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 04:02:42 np0005593232 ceph-mon[74423]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 23 04:02:42 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:42 np0005593232 ceph-mon[74423]: Saving service ingress.rgw.default spec with placement count:2
Jan 23 04:02:42 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:42 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 04:02:42 np0005593232 podman[91479]: 2026-01-23 09:02:42.740081842 +0000 UTC m=+0.149182541 container died 39337db817b8205cfa1d5f410b7d03c9bcc508c63043bbb5e2fb4a4f2243c59f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 04:02:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 23 04:02:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 04:02:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 23 04:02:42 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 23 04:02:42 np0005593232 ceph-mgr[74726]: [progress INFO root] update: starting ev d9cfedf3-1192-4ba6-9fd8-80d1f4fd1144 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 23 04:02:42 np0005593232 ceph-mgr[74726]: [progress INFO root] complete: finished ev dc7ec880-e534-4c78-aab8-1d8145e0e8a9 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 23 04:02:42 np0005593232 ceph-mgr[74726]: [progress INFO root] Completed event dc7ec880-e534-4c78-aab8-1d8145e0e8a9 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Jan 23 04:02:42 np0005593232 ceph-mgr[74726]: [progress INFO root] complete: finished ev 92704d0d-701f-4c62-a830-f121333b215a (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 23 04:02:42 np0005593232 ceph-mgr[74726]: [progress INFO root] Completed event 92704d0d-701f-4c62-a830-f121333b215a (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Jan 23 04:02:42 np0005593232 ceph-mgr[74726]: [progress INFO root] complete: finished ev 84ce9d90-4ff1-46c1-9562-651def63480a (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 23 04:02:42 np0005593232 ceph-mgr[74726]: [progress INFO root] Completed event 84ce9d90-4ff1-46c1-9562-651def63480a (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Jan 23 04:02:42 np0005593232 ceph-mgr[74726]: [progress INFO root] complete: finished ev 2651791f-fa12-45a1-9f7d-6187ee21404e (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 23 04:02:42 np0005593232 ceph-mgr[74726]: [progress INFO root] Completed event 2651791f-fa12-45a1-9f7d-6187ee21404e (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Jan 23 04:02:42 np0005593232 ceph-mgr[74726]: [progress INFO root] complete: finished ev d4be3bed-bac4-4aea-9c57-2fbdbb24cf97 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Jan 23 04:02:42 np0005593232 ceph-mgr[74726]: [progress INFO root] Completed event d4be3bed-bac4-4aea-9c57-2fbdbb24cf97 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Jan 23 04:02:42 np0005593232 ceph-mgr[74726]: [progress INFO root] complete: finished ev d9cfedf3-1192-4ba6-9fd8-80d1f4fd1144 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 23 04:02:42 np0005593232 ceph-mgr[74726]: [progress INFO root] Completed event d9cfedf3-1192-4ba6-9fd8-80d1f4fd1144 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Jan 23 04:02:42 np0005593232 systemd[1]: var-lib-containers-storage-overlay-09ff71bd75a8989e9f5e147a1dcb3be555fe3ec519645596c50d94a74e7c8448-merged.mount: Deactivated successfully.
Jan 23 04:02:42 np0005593232 podman[91479]: 2026-01-23 09:02:42.786701325 +0000 UTC m=+0.195802024 container remove 39337db817b8205cfa1d5f410b7d03c9bcc508c63043bbb5e2fb4a4f2243c59f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_mclean, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:02:42 np0005593232 systemd[1]: libpod-conmon-39337db817b8205cfa1d5f410b7d03c9bcc508c63043bbb5e2fb4a4f2243c59f.scope: Deactivated successfully.
Jan 23 04:02:42 np0005593232 podman[91596]: 2026-01-23 09:02:42.95278184 +0000 UTC m=+0.046300724 container create 475cadac84169f989fc058d15866ae04657b6cee7cbb19307c28c218e9f256e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 04:02:43 np0005593232 podman[91596]: 2026-01-23 09:02:42.93039364 +0000 UTC m=+0.023912544 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:02:43 np0005593232 ceph-mgr[74726]: [progress INFO root] Writing back 11 completed events
Jan 23 04:02:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 23 04:02:43 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 39 pg[6.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=11.102962494s) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active pruub 97.290000916s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:43 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 39 pg[6.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39 pruub=11.102962494s) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown pruub 97.290000916s@ mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:43 np0005593232 systemd[1]: Started libpod-conmon-475cadac84169f989fc058d15866ae04657b6cee7cbb19307c28c218e9f256e3.scope.
Jan 23 04:02:43 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0624bd23323e12267ad308a11c1354eda8c94e40b6f2198a5553a6c7c6e63508/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0624bd23323e12267ad308a11c1354eda8c94e40b6f2198a5553a6c7c6e63508/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0624bd23323e12267ad308a11c1354eda8c94e40b6f2198a5553a6c7c6e63508/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0624bd23323e12267ad308a11c1354eda8c94e40b6f2198a5553a6c7c6e63508/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:43 np0005593232 python3[91590]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 04:02:43 np0005593232 podman[91596]: 2026-01-23 09:02:43.138186115 +0000 UTC m=+0.231705029 container init 475cadac84169f989fc058d15866ae04657b6cee7cbb19307c28c218e9f256e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_proskuriakova, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 04:02:43 np0005593232 podman[91596]: 2026-01-23 09:02:43.146005845 +0000 UTC m=+0.239524719 container start 475cadac84169f989fc058d15866ae04657b6cee7cbb19307c28c218e9f256e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:02:43 np0005593232 podman[91596]: 2026-01-23 09:02:43.149223285 +0000 UTC m=+0.242742189 container attach 475cadac84169f989fc058d15866ae04657b6cee7cbb19307c28c218e9f256e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 23 04:02:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:02:43 np0005593232 python3[91687]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769158962.718004-37349-213157847075280/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:02:43 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 23 04:02:43 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 23 04:02:43 np0005593232 python3[91737]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:02:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v125: 100 pgs: 31 unknown, 69 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:02:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 23 04:02:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 04:02:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 23 04:02:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 04:02:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 23 04:02:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:44 np0005593232 podman[91738]: 2026-01-23 09:02:43.948832893 +0000 UTC m=+0.023837573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:02:44 np0005593232 podman[91738]: 2026-01-23 09:02:44.044137656 +0000 UTC m=+0.119142306 container create f237a1425dbb3aaa7ac3194ad6dc4a6f4ab3aa923c41e79fcd9821e13bc6fa51 (image=quay.io/ceph/ceph:v18, name=vigilant_williamson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 23 04:02:44 np0005593232 systemd[1]: Started libpod-conmon-f237a1425dbb3aaa7ac3194ad6dc4a6f4ab3aa923c41e79fcd9821e13bc6fa51.scope.
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.15( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.14( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.1a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.17( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.16( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.10( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.13( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.11( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.12( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.9( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.2( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.3( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.5( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.1( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.1b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.b( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.6( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.18( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.7( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.8( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.4( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.a( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.1e( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.19( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.1f( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.1c( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.1d( empty local-lis/les=22/23 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.17( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.13( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.14( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.11( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f23f4d7f2b22fca9c33b3816b002efd1965aeb9a87aedbe27e7e043c98098fd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f23f4d7f2b22fca9c33b3816b002efd1965aeb9a87aedbe27e7e043c98098fd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f23f4d7f2b22fca9c33b3816b002efd1965aeb9a87aedbe27e7e043c98098fd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.9( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.10( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.0( empty local-lis/les=39/40 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.6( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.16( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.1b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.18( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.8( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.1f( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.1c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.1d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 40 pg[6.4( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=22/22 les/c/f=23/23/0 sis=39) [0] r=0 lpr=39 pi=[22,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:44 np0005593232 podman[91738]: 2026-01-23 09:02:44.122945144 +0000 UTC m=+0.197949824 container init f237a1425dbb3aaa7ac3194ad6dc4a6f4ab3aa923c41e79fcd9821e13bc6fa51 (image=quay.io/ceph/ceph:v18, name=vigilant_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 04:02:44 np0005593232 peaceful_proskuriakova[91612]: {
Jan 23 04:02:44 np0005593232 peaceful_proskuriakova[91612]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:02:44 np0005593232 peaceful_proskuriakova[91612]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:02:44 np0005593232 peaceful_proskuriakova[91612]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:02:44 np0005593232 peaceful_proskuriakova[91612]:        "osd_id": 0,
Jan 23 04:02:44 np0005593232 peaceful_proskuriakova[91612]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:02:44 np0005593232 peaceful_proskuriakova[91612]:        "type": "bluestore"
Jan 23 04:02:44 np0005593232 peaceful_proskuriakova[91612]:    }
Jan 23 04:02:44 np0005593232 peaceful_proskuriakova[91612]: }
Jan 23 04:02:44 np0005593232 podman[91738]: 2026-01-23 09:02:44.12955508 +0000 UTC m=+0.204559740 container start f237a1425dbb3aaa7ac3194ad6dc4a6f4ab3aa923c41e79fcd9821e13bc6fa51 (image=quay.io/ceph/ceph:v18, name=vigilant_williamson, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:02:44 np0005593232 podman[91738]: 2026-01-23 09:02:44.133068669 +0000 UTC m=+0.208073329 container attach f237a1425dbb3aaa7ac3194ad6dc4a6f4ab3aa923c41e79fcd9821e13bc6fa51 (image=quay.io/ceph/ceph:v18, name=vigilant_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:02:44 np0005593232 systemd[1]: libpod-475cadac84169f989fc058d15866ae04657b6cee7cbb19307c28c218e9f256e3.scope: Deactivated successfully.
Jan 23 04:02:44 np0005593232 systemd[1]: libpod-475cadac84169f989fc058d15866ae04657b6cee7cbb19307c28c218e9f256e3.scope: Consumed 1.004s CPU time.
Jan 23 04:02:44 np0005593232 conmon[91612]: conmon 475cadac84169f989fc0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-475cadac84169f989fc058d15866ae04657b6cee7cbb19307c28c218e9f256e3.scope/container/memory.events
Jan 23 04:02:44 np0005593232 podman[91596]: 2026-01-23 09:02:44.157278891 +0000 UTC m=+1.250797775 container died 475cadac84169f989fc058d15866ae04657b6cee7cbb19307c28c218e9f256e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_proskuriakova, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 04:02:44 np0005593232 systemd[1]: var-lib-containers-storage-overlay-0624bd23323e12267ad308a11c1354eda8c94e40b6f2198a5553a6c7c6e63508-merged.mount: Deactivated successfully.
Jan 23 04:02:44 np0005593232 podman[91596]: 2026-01-23 09:02:44.23538271 +0000 UTC m=+1.328901594 container remove 475cadac84169f989fc058d15866ae04657b6cee7cbb19307c28c218e9f256e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_proskuriakova, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:02:44 np0005593232 systemd[1]: libpod-conmon-475cadac84169f989fc058d15866ae04657b6cee7cbb19307c28c218e9f256e3.scope: Deactivated successfully.
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:44 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14313 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 04:02:44 np0005593232 ceph-mgr[74726]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 23 04:02:44 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0[74419]: 2026-01-23T09:02:44.728+0000 7f19f60cf640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).mds e2 new map
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-23T09:02:44.728889+0000#012modified#0112026-01-23T09:02:44.729005+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 23 04:02:44 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 23 04:02:44 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:44 np0005593232 ceph-mgr[74726]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 23 04:02:44 np0005593232 systemd[1]: libpod-f237a1425dbb3aaa7ac3194ad6dc4a6f4ab3aa923c41e79fcd9821e13bc6fa51.scope: Deactivated successfully.
Jan 23 04:02:44 np0005593232 podman[91738]: 2026-01-23 09:02:44.785565528 +0000 UTC m=+0.860570198 container died f237a1425dbb3aaa7ac3194ad6dc4a6f4ab3aa923c41e79fcd9821e13bc6fa51 (image=quay.io/ceph/ceph:v18, name=vigilant_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 23 04:02:44 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1f23f4d7f2b22fca9c33b3816b002efd1965aeb9a87aedbe27e7e043c98098fd-merged.mount: Deactivated successfully.
Jan 23 04:02:44 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Jan 23 04:02:44 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:02:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:02:44 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 23 04:02:44 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 23 04:02:44 np0005593232 podman[91738]: 2026-01-23 09:02:44.833378715 +0000 UTC m=+0.908383375 container remove f237a1425dbb3aaa7ac3194ad6dc4a6f4ab3aa923c41e79fcd9821e13bc6fa51 (image=quay.io/ceph/ceph:v18, name=vigilant_williamson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 23 04:02:44 np0005593232 systemd[1]: libpod-conmon-f237a1425dbb3aaa7ac3194ad6dc4a6f4ab3aa923c41e79fcd9821e13bc6fa51.scope: Deactivated successfully.
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 23 04:02:45 np0005593232 python3[91970]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:02:45 np0005593232 podman[91993]: 2026-01-23 09:02:45.250128487 +0000 UTC m=+0.085685853 container create 07c0c03c0c2676d50f8152af474461cd78b3219afc37dbdf7819f11486808a43 (image=quay.io/ceph/ceph:v18, name=epic_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:02:45 np0005593232 systemd[1]: Started libpod-conmon-07c0c03c0c2676d50f8152af474461cd78b3219afc37dbdf7819f11486808a43.scope.
Jan 23 04:02:45 np0005593232 podman[91993]: 2026-01-23 09:02:45.192706531 +0000 UTC m=+0.028263907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:02:45 np0005593232 podman[92019]: 2026-01-23 09:02:45.293953421 +0000 UTC m=+0.045160472 container create 8caa4f0b03a1e2586e64f52385ca38414c07b6c9e117c1ed6cdd84b285bdf5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 23 04:02:45 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d648c176eb608c96f2b3db7bf17b2c19b64c44d08a157e3d4a730968bc4213/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d648c176eb608c96f2b3db7bf17b2c19b64c44d08a157e3d4a730968bc4213/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d648c176eb608c96f2b3db7bf17b2c19b64c44d08a157e3d4a730968bc4213/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:45 np0005593232 systemd[1]: Started libpod-conmon-8caa4f0b03a1e2586e64f52385ca38414c07b6c9e117c1ed6cdd84b285bdf5c5.scope.
Jan 23 04:02:45 np0005593232 podman[91993]: 2026-01-23 09:02:45.334379889 +0000 UTC m=+0.169937265 container init 07c0c03c0c2676d50f8152af474461cd78b3219afc37dbdf7819f11486808a43 (image=quay.io/ceph/ceph:v18, name=epic_ardinghelli, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:02:45 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:45 np0005593232 podman[91993]: 2026-01-23 09:02:45.341582662 +0000 UTC m=+0.177140018 container start 07c0c03c0c2676d50f8152af474461cd78b3219afc37dbdf7819f11486808a43 (image=quay.io/ceph/ceph:v18, name=epic_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:02:45 np0005593232 podman[91993]: 2026-01-23 09:02:45.34507272 +0000 UTC m=+0.180630076 container attach 07c0c03c0c2676d50f8152af474461cd78b3219afc37dbdf7819f11486808a43 (image=quay.io/ceph/ceph:v18, name=epic_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 04:02:45 np0005593232 podman[92019]: 2026-01-23 09:02:45.349515566 +0000 UTC m=+0.100722617 container init 8caa4f0b03a1e2586e64f52385ca38414c07b6c9e117c1ed6cdd84b285bdf5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bardeen, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 04:02:45 np0005593232 podman[92019]: 2026-01-23 09:02:45.354738843 +0000 UTC m=+0.105945894 container start 8caa4f0b03a1e2586e64f52385ca38414c07b6c9e117c1ed6cdd84b285bdf5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 04:02:45 np0005593232 podman[92019]: 2026-01-23 09:02:45.358153719 +0000 UTC m=+0.109360770 container attach 8caa4f0b03a1e2586e64f52385ca38414c07b6c9e117c1ed6cdd84b285bdf5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 04:02:45 np0005593232 goofy_bardeen[92041]: 167 167
Jan 23 04:02:45 np0005593232 systemd[1]: libpod-8caa4f0b03a1e2586e64f52385ca38414c07b6c9e117c1ed6cdd84b285bdf5c5.scope: Deactivated successfully.
Jan 23 04:02:45 np0005593232 conmon[92041]: conmon 8caa4f0b03a1e2586e64 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8caa4f0b03a1e2586e64f52385ca38414c07b6c9e117c1ed6cdd84b285bdf5c5.scope/container/memory.events
Jan 23 04:02:45 np0005593232 podman[92019]: 2026-01-23 09:02:45.360427063 +0000 UTC m=+0.111634134 container died 8caa4f0b03a1e2586e64f52385ca38414c07b6c9e117c1ed6cdd84b285bdf5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 04:02:45 np0005593232 podman[92019]: 2026-01-23 09:02:45.272470026 +0000 UTC m=+0.023677097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:02:45 np0005593232 systemd[1]: var-lib-containers-storage-overlay-66100318b42d33639422e81e123a4dc319895418cd39bf797b8914441597e24d-merged.mount: Deactivated successfully.
Jan 23 04:02:45 np0005593232 podman[92019]: 2026-01-23 09:02:45.39477392 +0000 UTC m=+0.145980991 container remove 8caa4f0b03a1e2586e64f52385ca38414c07b6c9e117c1ed6cdd84b285bdf5c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 04:02:45 np0005593232 systemd[1]: libpod-conmon-8caa4f0b03a1e2586e64f52385ca38414c07b6c9e117c1ed6cdd84b285bdf5c5.scope: Deactivated successfully.
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:45 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.yntofk (monmap changed)...
Jan 23 04:02:45 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.yntofk (monmap changed)...
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.yntofk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.yntofk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:02:45 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.yntofk on compute-0
Jan 23 04:02:45 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.yntofk on compute-0
Jan 23 04:02:45 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14319 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 04:02:45 np0005593232 ceph-mgr[74726]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 23 04:02:45 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 23 04:02:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:45 np0005593232 epic_ardinghelli[92035]: Scheduled mds.cephfs update...
Jan 23 04:02:45 np0005593232 systemd[1]: libpod-07c0c03c0c2676d50f8152af474461cd78b3219afc37dbdf7819f11486808a43.scope: Deactivated successfully.
Jan 23 04:02:45 np0005593232 podman[91993]: 2026-01-23 09:02:45.938439016 +0000 UTC m=+0.773996382 container died 07c0c03c0c2676d50f8152af474461cd78b3219afc37dbdf7819f11486808a43 (image=quay.io/ceph/ceph:v18, name=epic_ardinghelli, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:02:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v128: 193 pgs: 124 unknown, 69 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:02:45 np0005593232 systemd[1]: var-lib-containers-storage-overlay-02d648c176eb608c96f2b3db7bf17b2c19b64c44d08a157e3d4a730968bc4213-merged.mount: Deactivated successfully.
Jan 23 04:02:45 np0005593232 podman[91993]: 2026-01-23 09:02:45.976331442 +0000 UTC m=+0.811888798 container remove 07c0c03c0c2676d50f8152af474461cd78b3219afc37dbdf7819f11486808a43 (image=quay.io/ceph/ceph:v18, name=epic_ardinghelli, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:02:45 np0005593232 systemd[1]: libpod-conmon-07c0c03c0c2676d50f8152af474461cd78b3219afc37dbdf7819f11486808a43.scope: Deactivated successfully.
Jan 23 04:02:46 np0005593232 ceph-mon[74423]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 23 04:02:46 np0005593232 ceph-mon[74423]: Reconfiguring mon.compute-0 (monmap changed)...
Jan 23 04:02:46 np0005593232 ceph-mon[74423]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 23 04:02:46 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:46 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:46 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.yntofk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 23 04:02:46 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:46 np0005593232 podman[92209]: 2026-01-23 09:02:46.277521712 +0000 UTC m=+0.036351724 container create 74059ed137d40ea25f9a91f960202004b9c77d97da0c61aa16cc45389c5fb2a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 04:02:46 np0005593232 systemd[1]: Started libpod-conmon-74059ed137d40ea25f9a91f960202004b9c77d97da0c61aa16cc45389c5fb2a3.scope.
Jan 23 04:02:46 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:46 np0005593232 podman[92209]: 2026-01-23 09:02:46.341055671 +0000 UTC m=+0.099885703 container init 74059ed137d40ea25f9a91f960202004b9c77d97da0c61aa16cc45389c5fb2a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 04:02:46 np0005593232 podman[92209]: 2026-01-23 09:02:46.347125011 +0000 UTC m=+0.105955043 container start 74059ed137d40ea25f9a91f960202004b9c77d97da0c61aa16cc45389c5fb2a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_greider, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 04:02:46 np0005593232 podman[92209]: 2026-01-23 09:02:46.350876557 +0000 UTC m=+0.109706599 container attach 74059ed137d40ea25f9a91f960202004b9c77d97da0c61aa16cc45389c5fb2a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:02:46 np0005593232 eager_greider[92225]: 167 167
Jan 23 04:02:46 np0005593232 systemd[1]: libpod-74059ed137d40ea25f9a91f960202004b9c77d97da0c61aa16cc45389c5fb2a3.scope: Deactivated successfully.
Jan 23 04:02:46 np0005593232 podman[92209]: 2026-01-23 09:02:46.352834872 +0000 UTC m=+0.111664904 container died 74059ed137d40ea25f9a91f960202004b9c77d97da0c61aa16cc45389c5fb2a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_greider, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 04:02:46 np0005593232 podman[92209]: 2026-01-23 09:02:46.261779439 +0000 UTC m=+0.020609471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:02:46 np0005593232 systemd[1]: var-lib-containers-storage-overlay-8eb39406bff0313e45c846da49bc5430b06af32a779feab05122752d45dcec8f-merged.mount: Deactivated successfully.
Jan 23 04:02:46 np0005593232 podman[92209]: 2026-01-23 09:02:46.39038931 +0000 UTC m=+0.149219322 container remove 74059ed137d40ea25f9a91f960202004b9c77d97da0c61aa16cc45389c5fb2a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_greider, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:02:46 np0005593232 systemd[1]: libpod-conmon-74059ed137d40ea25f9a91f960202004b9c77d97da0c61aa16cc45389c5fb2a3.scope: Deactivated successfully.
Jan 23 04:02:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:02:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:02:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:46 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Jan 23 04:02:46 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Jan 23 04:02:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 23 04:02:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 23 04:02:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:02:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:02:46 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Jan 23 04:02:46 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Jan 23 04:02:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 23 04:02:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 23 04:02:46 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 23 04:02:46 np0005593232 podman[92359]: 2026-01-23 09:02:46.913565489 +0000 UTC m=+0.042069486 container create deadd87a2e008bc032a4301276723211071977954f3e61b3ddcdfc9bb0a35174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mccarthy, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 04:02:46 np0005593232 systemd[1]: Started libpod-conmon-deadd87a2e008bc032a4301276723211071977954f3e61b3ddcdfc9bb0a35174.scope.
Jan 23 04:02:46 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:46 np0005593232 podman[92359]: 2026-01-23 09:02:46.985978867 +0000 UTC m=+0.114482874 container init deadd87a2e008bc032a4301276723211071977954f3e61b3ddcdfc9bb0a35174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:02:46 np0005593232 podman[92359]: 2026-01-23 09:02:46.894614655 +0000 UTC m=+0.023118662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:02:46 np0005593232 podman[92359]: 2026-01-23 09:02:46.992196702 +0000 UTC m=+0.120700679 container start deadd87a2e008bc032a4301276723211071977954f3e61b3ddcdfc9bb0a35174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 04:02:46 np0005593232 wonderful_mccarthy[92410]: 167 167
Jan 23 04:02:46 np0005593232 podman[92359]: 2026-01-23 09:02:46.995836205 +0000 UTC m=+0.124340192 container attach deadd87a2e008bc032a4301276723211071977954f3e61b3ddcdfc9bb0a35174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mccarthy, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 23 04:02:46 np0005593232 systemd[1]: libpod-deadd87a2e008bc032a4301276723211071977954f3e61b3ddcdfc9bb0a35174.scope: Deactivated successfully.
Jan 23 04:02:46 np0005593232 podman[92359]: 2026-01-23 09:02:46.996926896 +0000 UTC m=+0.125430903 container died deadd87a2e008bc032a4301276723211071977954f3e61b3ddcdfc9bb0a35174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mccarthy, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 04:02:47 np0005593232 systemd[1]: var-lib-containers-storage-overlay-370920e35279ee00d59571ad73eaa533868fee47183bebd3ccda67251453dfc1-merged.mount: Deactivated successfully.
Jan 23 04:02:47 np0005593232 podman[92359]: 2026-01-23 09:02:47.02977411 +0000 UTC m=+0.158278097 container remove deadd87a2e008bc032a4301276723211071977954f3e61b3ddcdfc9bb0a35174 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_mccarthy, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:02:47 np0005593232 systemd[1]: libpod-conmon-deadd87a2e008bc032a4301276723211071977954f3e61b3ddcdfc9bb0a35174.scope: Deactivated successfully.
Jan 23 04:02:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:02:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:02:47 np0005593232 ceph-mon[74423]: Reconfiguring mgr.compute-0.yntofk (monmap changed)...
Jan 23 04:02:47 np0005593232 ceph-mon[74423]: Reconfiguring daemon mgr.compute-0.yntofk on compute-0
Jan 23 04:02:47 np0005593232 ceph-mon[74423]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 23 04:02:47 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:47 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:47 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 23 04:02:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:47 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Jan 23 04:02:47 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Jan 23 04:02:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Jan 23 04:02:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 23 04:02:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:02:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:02:47 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-0
Jan 23 04:02:47 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-0
Jan 23 04:02:47 np0005593232 python3[92469]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 23 04:02:47 np0005593232 python3[92642]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769158966.9173775-37401-60741278527211/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=9a6a528427b32e6ef98709d36c90302cf328f9ef backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:02:47 np0005593232 podman[92658]: 2026-01-23 09:02:47.601017663 +0000 UTC m=+0.045651696 container create 964f58a94bdb6d29d411ad2d5549cec72046dbdfb0203061e8b22a69ceadd2a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_brown, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:02:47 np0005593232 systemd[1]: Started libpod-conmon-964f58a94bdb6d29d411ad2d5549cec72046dbdfb0203061e8b22a69ceadd2a6.scope.
Jan 23 04:02:47 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:47 np0005593232 podman[92658]: 2026-01-23 09:02:47.577796919 +0000 UTC m=+0.022430972 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:02:47 np0005593232 podman[92658]: 2026-01-23 09:02:47.675469939 +0000 UTC m=+0.120103972 container init 964f58a94bdb6d29d411ad2d5549cec72046dbdfb0203061e8b22a69ceadd2a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 04:02:47 np0005593232 podman[92658]: 2026-01-23 09:02:47.681266882 +0000 UTC m=+0.125900915 container start 964f58a94bdb6d29d411ad2d5549cec72046dbdfb0203061e8b22a69ceadd2a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 04:02:47 np0005593232 xenodochial_brown[92680]: 167 167
Jan 23 04:02:47 np0005593232 podman[92658]: 2026-01-23 09:02:47.684790021 +0000 UTC m=+0.129424054 container attach 964f58a94bdb6d29d411ad2d5549cec72046dbdfb0203061e8b22a69ceadd2a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_brown, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 23 04:02:47 np0005593232 conmon[92680]: conmon 964f58a94bdb6d29d411 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-964f58a94bdb6d29d411ad2d5549cec72046dbdfb0203061e8b22a69ceadd2a6.scope/container/memory.events
Jan 23 04:02:47 np0005593232 systemd[1]: libpod-964f58a94bdb6d29d411ad2d5549cec72046dbdfb0203061e8b22a69ceadd2a6.scope: Deactivated successfully.
Jan 23 04:02:47 np0005593232 podman[92658]: 2026-01-23 09:02:47.685983845 +0000 UTC m=+0.130617878 container died 964f58a94bdb6d29d411ad2d5549cec72046dbdfb0203061e8b22a69ceadd2a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_brown, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Jan 23 04:02:47 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e07af555137ce814b1acc09a552e1b9514486bf630f6ea91e4947b48949fae11-merged.mount: Deactivated successfully.
Jan 23 04:02:47 np0005593232 podman[92658]: 2026-01-23 09:02:47.718641064 +0000 UTC m=+0.163275097 container remove 964f58a94bdb6d29d411ad2d5549cec72046dbdfb0203061e8b22a69ceadd2a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 04:02:47 np0005593232 systemd[1]: libpod-conmon-964f58a94bdb6d29d411ad2d5549cec72046dbdfb0203061e8b22a69ceadd2a6.scope: Deactivated successfully.
Jan 23 04:02:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:02:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:02:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:47 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Jan 23 04:02:47 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Jan 23 04:02:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Jan 23 04:02:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 23 04:02:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:02:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:02:47 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Jan 23 04:02:47 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Jan 23 04:02:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v130: 193 pgs: 32 peering, 161 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:02:48 np0005593232 python3[92749]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:02:48 np0005593232 ceph-mon[74423]: Reconfiguring crash.compute-0 (monmap changed)...
Jan 23 04:02:48 np0005593232 ceph-mon[74423]: Reconfiguring daemon crash.compute-0 on compute-0
Jan 23 04:02:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:48 np0005593232 ceph-mon[74423]: Reconfiguring osd.0 (monmap changed)...
Jan 23 04:02:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 23 04:02:48 np0005593232 ceph-mon[74423]: Reconfiguring daemon osd.0 on compute-0
Jan 23 04:02:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 23 04:02:48 np0005593232 podman[92750]: 2026-01-23 09:02:48.149897835 +0000 UTC m=+0.043805315 container create 6612e64999b851ff1132249f3995f06ce47c3b6f8e10a62da19b486170e4b8d7 (image=quay.io/ceph/ceph:v18, name=lucid_cohen, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 23 04:02:48 np0005593232 systemd[1]: Started libpod-conmon-6612e64999b851ff1132249f3995f06ce47c3b6f8e10a62da19b486170e4b8d7.scope.
Jan 23 04:02:48 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a946031586010857c79f960062e7d5ba7e663fbd7a06300a34283a759c6c5bc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a946031586010857c79f960062e7d5ba7e663fbd7a06300a34283a759c6c5bc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:48 np0005593232 podman[92750]: 2026-01-23 09:02:48.226519962 +0000 UTC m=+0.120427492 container init 6612e64999b851ff1132249f3995f06ce47c3b6f8e10a62da19b486170e4b8d7 (image=quay.io/ceph/ceph:v18, name=lucid_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:02:48 np0005593232 podman[92750]: 2026-01-23 09:02:48.133252786 +0000 UTC m=+0.027160296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:02:48 np0005593232 podman[92750]: 2026-01-23 09:02:48.232032727 +0000 UTC m=+0.125940207 container start 6612e64999b851ff1132249f3995f06ce47c3b6f8e10a62da19b486170e4b8d7 (image=quay.io/ceph/ceph:v18, name=lucid_cohen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:02:48 np0005593232 podman[92750]: 2026-01-23 09:02:48.235917526 +0000 UTC m=+0.129825006 container attach 6612e64999b851ff1132249f3995f06ce47c3b6f8e10a62da19b486170e4b8d7 (image=quay.io/ceph/ceph:v18, name=lucid_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 23 04:02:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:02:48 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 23 04:02:48 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 23 04:02:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:02:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:02:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:48 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Jan 23 04:02:48 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Jan 23 04:02:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Jan 23 04:02:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 23 04:02:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:02:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:02:48 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-1
Jan 23 04:02:48 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-1
Jan 23 04:02:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0) v1
Jan 23 04:02:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2579638420' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 23 04:02:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2579638420' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 23 04:02:49 np0005593232 systemd[1]: libpod-6612e64999b851ff1132249f3995f06ce47c3b6f8e10a62da19b486170e4b8d7.scope: Deactivated successfully.
Jan 23 04:02:49 np0005593232 podman[92750]: 2026-01-23 09:02:49.233002948 +0000 UTC m=+1.126910438 container died 6612e64999b851ff1132249f3995f06ce47c3b6f8e10a62da19b486170e4b8d7 (image=quay.io/ceph/ceph:v18, name=lucid_cohen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 04:02:49 np0005593232 ceph-mon[74423]: Reconfiguring crash.compute-1 (monmap changed)...
Jan 23 04:02:49 np0005593232 ceph-mon[74423]: Reconfiguring daemon crash.compute-1 on compute-1
Jan 23 04:02:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 23 04:02:49 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/2579638420' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 23 04:02:49 np0005593232 systemd[1]: var-lib-containers-storage-overlay-8a946031586010857c79f960062e7d5ba7e663fbd7a06300a34283a759c6c5bc-merged.mount: Deactivated successfully.
Jan 23 04:02:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v131: 193 pgs: 32 peering, 161 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:02:49 np0005593232 podman[92750]: 2026-01-23 09:02:49.971360505 +0000 UTC m=+1.865267985 container remove 6612e64999b851ff1132249f3995f06ce47c3b6f8e10a62da19b486170e4b8d7 (image=quay.io/ceph/ceph:v18, name=lucid_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 04:02:50 np0005593232 systemd[1]: libpod-conmon-6612e64999b851ff1132249f3995f06ce47c3b6f8e10a62da19b486170e4b8d7.scope: Deactivated successfully.
Jan 23 04:02:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:02:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:02:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:50 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Jan 23 04:02:50 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Jan 23 04:02:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 23 04:02:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 23 04:02:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 23 04:02:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 23 04:02:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:02:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:02:50 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Jan 23 04:02:50 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Jan 23 04:02:50 np0005593232 ceph-mon[74423]: Reconfiguring osd.1 (monmap changed)...
Jan 23 04:02:50 np0005593232 ceph-mon[74423]: Reconfiguring daemon osd.1 on compute-1
Jan 23 04:02:50 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/2579638420' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 23 04:02:50 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:50 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:50 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 23 04:02:50 np0005593232 python3[92827]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:02:50 np0005593232 podman[92829]: 2026-01-23 09:02:50.825641386 +0000 UTC m=+0.043170697 container create a0232e51f29ff58c58aaaf65bba82de6c44404427ecac133bb41abd5e2865373 (image=quay.io/ceph/ceph:v18, name=fervent_newton, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 04:02:50 np0005593232 systemd[1]: Started libpod-conmon-a0232e51f29ff58c58aaaf65bba82de6c44404427ecac133bb41abd5e2865373.scope.
Jan 23 04:02:50 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17ad01138f9c93b23b585ccda5cbbfd3f0e987a1af8d462089b6f688da5e418d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17ad01138f9c93b23b585ccda5cbbfd3f0e987a1af8d462089b6f688da5e418d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:50 np0005593232 podman[92829]: 2026-01-23 09:02:50.891415667 +0000 UTC m=+0.108944978 container init a0232e51f29ff58c58aaaf65bba82de6c44404427ecac133bb41abd5e2865373 (image=quay.io/ceph/ceph:v18, name=fervent_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 04:02:50 np0005593232 podman[92829]: 2026-01-23 09:02:50.89719697 +0000 UTC m=+0.114726281 container start a0232e51f29ff58c58aaaf65bba82de6c44404427ecac133bb41abd5e2865373 (image=quay.io/ceph/ceph:v18, name=fervent_newton, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 04:02:50 np0005593232 podman[92829]: 2026-01-23 09:02:50.901004097 +0000 UTC m=+0.118533408 container attach a0232e51f29ff58c58aaaf65bba82de6c44404427ecac133bb41abd5e2865373 (image=quay.io/ceph/ceph:v18, name=fervent_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 23 04:02:50 np0005593232 podman[92829]: 2026-01-23 09:02:50.806192418 +0000 UTC m=+0.023721729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:51 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Jan 23 04:02:51 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:02:51 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Jan 23 04:02:51 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Jan 23 04:02:51 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/601927195' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 23 04:02:51 np0005593232 fervent_newton[92846]: 
Jan 23 04:02:51 np0005593232 fervent_newton[92846]: {"fsid":"e1533653-0a5a-584c-b34b-8689f0d32e77","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":45,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":42,"num_osds":3,"num_up_osds":3,"osd_up_since":1769158956,"num_in_osds":3,"osd_in_since":1769158937,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":161},{"state_name":"peering","count":32}],"num_pgs":193,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":84164608,"bytes_avail":22451830784,"bytes_total":22535995392,"inactive_pgs_ratio":0.16580310463905334},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":3,"modified":"2026-01-23T09:02:27.943765+0000","services":{"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"0abe61f5-b38a-4099-b7a4-9ace72f7c54f":{"message":"Global Recovery Event (10s)\n      [=======================.....] (remaining: 2s)","progress":0.83419686555862427,"add_to_ceph_s":true}}}
Jan 23 04:02:51 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:51 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.nrjyzu (monmap changed)...
Jan 23 04:02:51 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.nrjyzu (monmap changed)...
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.nrjyzu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.nrjyzu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:02:51 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.nrjyzu on compute-2
Jan 23 04:02:51 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.nrjyzu on compute-2
Jan 23 04:02:51 np0005593232 systemd[1]: libpod-a0232e51f29ff58c58aaaf65bba82de6c44404427ecac133bb41abd5e2865373.scope: Deactivated successfully.
Jan 23 04:02:51 np0005593232 podman[92829]: 2026-01-23 09:02:51.90137434 +0000 UTC m=+1.118903641 container died a0232e51f29ff58c58aaaf65bba82de6c44404427ecac133bb41abd5e2865373 (image=quay.io/ceph/ceph:v18, name=fervent_newton, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:02:51 np0005593232 systemd[1]: var-lib-containers-storage-overlay-17ad01138f9c93b23b585ccda5cbbfd3f0e987a1af8d462089b6f688da5e418d-merged.mount: Deactivated successfully.
Jan 23 04:02:51 np0005593232 podman[92829]: 2026-01-23 09:02:51.951524522 +0000 UTC m=+1.169053813 container remove a0232e51f29ff58c58aaaf65bba82de6c44404427ecac133bb41abd5e2865373 (image=quay.io/ceph/ceph:v18, name=fervent_newton, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:02:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v132: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 23 04:02:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 04:02:51 np0005593232 systemd[1]: libpod-conmon-a0232e51f29ff58c58aaaf65bba82de6c44404427ecac133bb41abd5e2865373.scope: Deactivated successfully.
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: Reconfiguring mon.compute-1 (monmap changed)...
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: Reconfiguring daemon mon.compute-1 on compute-1
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: Reconfiguring mon.compute-2 (monmap changed)...
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: Reconfiguring daemon mon.compute-2 on compute-2
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.nrjyzu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 04:02:52 np0005593232 python3[92907]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:02:52 np0005593232 podman[92908]: 2026-01-23 09:02:52.290697591 +0000 UTC m=+0.040250194 container create 5e9a0e5ca82081a281363ed995396bdb45b4bedb774384a4ebebd1374ff6deb6 (image=quay.io/ceph/ceph:v18, name=interesting_davinci, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:02:52 np0005593232 systemd[1]: Started libpod-conmon-5e9a0e5ca82081a281363ed995396bdb45b4bedb774384a4ebebd1374ff6deb6.scope.
Jan 23 04:02:52 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ad4128835d2431e777b2b5a66e2f1e942444899eb34d4289ae4e2a41af2932/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ad4128835d2431e777b2b5a66e2f1e942444899eb34d4289ae4e2a41af2932/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:52 np0005593232 podman[92908]: 2026-01-23 09:02:52.366250918 +0000 UTC m=+0.115803551 container init 5e9a0e5ca82081a281363ed995396bdb45b4bedb774384a4ebebd1374ff6deb6 (image=quay.io/ceph/ceph:v18, name=interesting_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 04:02:52 np0005593232 podman[92908]: 2026-01-23 09:02:52.275645337 +0000 UTC m=+0.025197960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:02:52 np0005593232 podman[92908]: 2026-01-23 09:02:52.37095589 +0000 UTC m=+0.120508503 container start 5e9a0e5ca82081a281363ed995396bdb45b4bedb774384a4ebebd1374ff6deb6 (image=quay.io/ceph/ceph:v18, name=interesting_davinci, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:02:52 np0005593232 podman[92908]: 2026-01-23 09:02:52.374255823 +0000 UTC m=+0.123808486 container attach 5e9a0e5ca82081a281363ed995396bdb45b4bedb774384a4ebebd1374ff6deb6 (image=quay.io/ceph/ceph:v18, name=interesting_davinci, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 23 04:02:52 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=37/37/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[7.1e( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[2.e( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=37/37/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[7.b( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=37/37/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[7.2( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[2.1( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=37/37/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=37/37/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=37/37/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[7.8( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[7.e( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=37/37/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[7.10( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[2.1e( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=37/37/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.204210281s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 111.231330872s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.1a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.204150200s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.231330872s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.199254990s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 111.226692200s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.15( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.199224472s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.226692200s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.15( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.844615936s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 108.872222900s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.15( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.844593048s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.872222900s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.17( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.198850632s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 111.226661682s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.17( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.198819160s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.226661682s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.844200134s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 108.872192383s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.844180107s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.872192383s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.844165802s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 108.872367859s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.203250885s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 111.231536865s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.844073296s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.872367859s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.d( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.203217506s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.231536865s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.203164101s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 111.231567383s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.12( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.203146935s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.231567383s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.843609810s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 108.872146606s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.843495369s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 108.872077942s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.843563080s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.872146606s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.843469620s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.872077942s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.c( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.843367577s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 108.872177124s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.202689171s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 111.231582642s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.202904701s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 111.231895447s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.c( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.843347549s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.872177124s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.2( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.202826500s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.231895447s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.843393326s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 108.872619629s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.843333244s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.872619629s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.202605247s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 111.231948853s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.5( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.202578545s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.231948853s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.842301369s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 108.871795654s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.842563629s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 108.872100830s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.842253685s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.871795654s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.842539787s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.872100830s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.3( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.841848373s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 108.871635437s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.3( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.841813087s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.871635437s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.202033997s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 111.231933594s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.3( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.202007294s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.231933594s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.202035904s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 111.232032776s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.1( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.202004433s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.232032776s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.1b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.201926231s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 111.232009888s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.1b( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.201901436s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.232009888s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.19( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.841576576s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 108.871788025s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.19( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.841534615s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.871788025s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.841151237s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 108.871444702s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.841123581s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.871444702s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.840708733s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 108.871154785s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.840680122s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.871154785s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.840874672s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 108.871505737s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.840787888s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.871505737s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.201089859s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 111.232017517s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.7( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.201053619s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.232017517s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.202530861s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.231582642s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.840220451s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 108.871315002s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.840185165s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.871315002s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.8( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.200842857s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 111.232017517s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.8( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.200813293s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.232017517s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.6( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.840174675s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 108.871429443s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.6( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.840147972s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.871429443s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.839790344s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 108.871170044s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.200658798s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 111.232070923s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.839762688s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.871170044s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.a( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.200638771s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.232070923s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.839634895s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 108.871177673s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.200540543s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 111.232093811s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.839605331s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.871177673s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.835580826s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 108.867362976s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.835546494s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.867362976s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.19( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.200512886s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.232093811s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.200095177s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 111.232078552s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.1e( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.200066566s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.232078552s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.1c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.199923515s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 111.232101440s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[6.1c( empty local-lis/les=39/40 n=0 ec=39/22 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=15.199874878s) [2] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.232101440s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.1f( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.838673592s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 108.871002197s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.1f( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.838651657s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.871002197s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.1d( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.834846497s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 108.867294312s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[4.1d( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=12.834812164s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 108.867294312s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[5.1d( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[3.18( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[3.19( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[5.17( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[5.a( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[3.2( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[5.6( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[3.4( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[3.7( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[5.c( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[3.b( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[3.1e( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=0/0 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 43 pg[5.19( empty local-lis/les=0/0 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:02:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:02:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/302047102' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:02:53 np0005593232 interesting_davinci[92924]: 
Jan 23 04:02:53 np0005593232 interesting_davinci[92924]: {"epoch":3,"fsid":"e1533653-0a5a-584c-b34b-8689f0d32e77","modified":"2026-01-23T09:02:00.827232Z","created":"2026-01-23T08:58:42.441366Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Jan 23 04:02:53 np0005593232 interesting_davinci[92924]: dumped monmap epoch 3
Jan 23 04:02:53 np0005593232 systemd[1]: libpod-5e9a0e5ca82081a281363ed995396bdb45b4bedb774384a4ebebd1374ff6deb6.scope: Deactivated successfully.
Jan 23 04:02:53 np0005593232 podman[92908]: 2026-01-23 09:02:53.045563183 +0000 UTC m=+0.795115786 container died 5e9a0e5ca82081a281363ed995396bdb45b4bedb774384a4ebebd1374ff6deb6 (image=quay.io/ceph/ceph:v18, name=interesting_davinci, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 23 04:02:53 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c7ad4128835d2431e777b2b5a66e2f1e942444899eb34d4289ae4e2a41af2932-merged.mount: Deactivated successfully.
Jan 23 04:02:53 np0005593232 ceph-mgr[74726]: [progress INFO root] Completed event 0abe61f5-b38a-4099-b7a4-9ace72f7c54f (Global Recovery Event) in 15 seconds
Jan 23 04:02:53 np0005593232 podman[92908]: 2026-01-23 09:02:53.082098932 +0000 UTC m=+0.831651535 container remove 5e9a0e5ca82081a281363ed995396bdb45b4bedb774384a4ebebd1374ff6deb6 (image=quay.io/ceph/ceph:v18, name=interesting_davinci, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:02:53 np0005593232 systemd[1]: libpod-conmon-5e9a0e5ca82081a281363ed995396bdb45b4bedb774384a4ebebd1374ff6deb6.scope: Deactivated successfully.
Jan 23 04:02:53 np0005593232 ceph-mon[74423]: Reconfiguring mgr.compute-2.nrjyzu (monmap changed)...
Jan 23 04:02:53 np0005593232 ceph-mon[74423]: Reconfiguring daemon mgr.compute-2.nrjyzu on compute-2
Jan 23 04:02:53 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:53 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:53 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 04:02:53 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 04:02:53 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 04:02:53 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 04:02:53 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 04:02:53 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 04:02:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 23 04:02:53 np0005593232 python3[92986]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:02:53 np0005593232 podman[92987]: 2026-01-23 09:02:53.868714678 +0000 UTC m=+0.065276059 container create da40d13f9e1fd9f87ea62a3d2a06ca2f61e6926182b07a15ad6c5357d2471a6e (image=quay.io/ceph/ceph:v18, name=objective_gould, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:02:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 23 04:02:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:02:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 23 04:02:53 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 23 04:02:53 np0005593232 systemd[1]: Started libpod-conmon-da40d13f9e1fd9f87ea62a3d2a06ca2f61e6926182b07a15ad6c5357d2471a6e.scope.
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=37/37/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[5.1d( empty local-lis/les=43/44 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=43/44 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[2.e( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=37/37/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[3.19( empty local-lis/les=43/44 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[3.18( empty local-lis/les=43/44 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[7.b( empty local-lis/les=43/44 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=43/44 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=43/44 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[3.2( empty local-lis/les=43/44 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=43/44 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=43/44 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=37/37/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[7.1e( empty local-lis/les=43/44 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[7.2( empty local-lis/les=43/44 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[3.4( empty local-lis/les=43/44 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[3.1e( empty local-lis/les=43/44 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=43/44 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[5.6( empty local-lis/les=43/44 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[2.4( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=37/37/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[2.1( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=37/37/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[2.6( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=37/37/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=43/44 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=43/44 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[3.7( empty local-lis/les=43/44 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[7.8( empty local-lis/les=43/44 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[5.a( empty local-lis/les=43/44 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[3.b( empty local-lis/les=43/44 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=43/44 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[5.c( empty local-lis/les=43/44 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[2.9( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=37/37/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=43/44 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[7.10( empty local-lis/les=43/44 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=43/44 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=43/44 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=43/44 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=43/44 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=43/44 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[7.e( empty local-lis/les=43/44 n=0 ec=40/24 lis/c=40/40 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[2.1e( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=37/37/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=43/44 n=0 ec=40/15 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[5.19( empty local-lis/les=43/44 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=43/44 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 44 pg[5.17( empty local-lis/les=43/44 n=0 ec=40/20 lis/c=40/40 les/c/f=41/41/0 sis=43) [0] r=0 lpr=43 pi=[40,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:02:53 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:53 np0005593232 podman[92987]: 2026-01-23 09:02:53.843523338 +0000 UTC m=+0.040084759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:02:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9677992a988ebab04b1abc871534d39180a6fee5aeee510c22abb631103782/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a9677992a988ebab04b1abc871534d39180a6fee5aeee510c22abb631103782/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v135: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:02:53 np0005593232 podman[92987]: 2026-01-23 09:02:53.955679566 +0000 UTC m=+0.152240957 container init da40d13f9e1fd9f87ea62a3d2a06ca2f61e6926182b07a15ad6c5357d2471a6e (image=quay.io/ceph/ceph:v18, name=objective_gould, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 23 04:02:53 np0005593232 podman[92987]: 2026-01-23 09:02:53.962605811 +0000 UTC m=+0.159167202 container start da40d13f9e1fd9f87ea62a3d2a06ca2f61e6926182b07a15ad6c5357d2471a6e (image=quay.io/ceph/ceph:v18, name=objective_gould, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Jan 23 04:02:53 np0005593232 podman[92987]: 2026-01-23 09:02:53.966306985 +0000 UTC m=+0.162868366 container attach da40d13f9e1fd9f87ea62a3d2a06ca2f61e6926182b07a15ad6c5357d2471a6e (image=quay.io/ceph/ceph:v18, name=objective_gould, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:54 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 6a9168c7-12f1-42ca-bfee-bc13d0f71f2a does not exist
Jan 23 04:02:54 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b9e1d7ec-4f9b-4091-bccf-0066af9c65d7 does not exist
Jan 23 04:02:54 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 77a55d79-19bf-4034-860f-ff66e3432182 does not exist
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:02:54 np0005593232 podman[93165]: 2026-01-23 09:02:54.647707049 +0000 UTC m=+0.045891903 container create cd4a782ac78ea2714c05bd2acd369a368400fcb9cb5ce54237aa3112b299ac07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:02:54 np0005593232 systemd[1]: Started libpod-conmon-cd4a782ac78ea2714c05bd2acd369a368400fcb9cb5ce54237aa3112b299ac07.scope.
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Jan 23 04:02:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2587378952' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 23 04:02:54 np0005593232 objective_gould[93002]: [client.openstack]
Jan 23 04:02:54 np0005593232 objective_gould[93002]: 	key = AQAWOHNpAAAAABAA2dnV5fyT8k0NgImFHzKGBA==
Jan 23 04:02:54 np0005593232 objective_gould[93002]: 	caps mgr = "allow *"
Jan 23 04:02:54 np0005593232 objective_gould[93002]: 	caps mon = "profile rbd"
Jan 23 04:02:54 np0005593232 objective_gould[93002]: 	caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 23 04:02:54 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:54 np0005593232 systemd[1]: libpod-da40d13f9e1fd9f87ea62a3d2a06ca2f61e6926182b07a15ad6c5357d2471a6e.scope: Deactivated successfully.
Jan 23 04:02:54 np0005593232 podman[92987]: 2026-01-23 09:02:54.709330524 +0000 UTC m=+0.905891935 container died da40d13f9e1fd9f87ea62a3d2a06ca2f61e6926182b07a15ad6c5357d2471a6e (image=quay.io/ceph/ceph:v18, name=objective_gould, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 04:02:54 np0005593232 podman[93165]: 2026-01-23 09:02:54.722184366 +0000 UTC m=+0.120369240 container init cd4a782ac78ea2714c05bd2acd369a368400fcb9cb5ce54237aa3112b299ac07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 23 04:02:54 np0005593232 podman[93165]: 2026-01-23 09:02:54.628501838 +0000 UTC m=+0.026686722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:02:54 np0005593232 podman[93165]: 2026-01-23 09:02:54.73228265 +0000 UTC m=+0.130467504 container start cd4a782ac78ea2714c05bd2acd369a368400fcb9cb5ce54237aa3112b299ac07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ganguly, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:02:54 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1a9677992a988ebab04b1abc871534d39180a6fee5aeee510c22abb631103782-merged.mount: Deactivated successfully.
Jan 23 04:02:54 np0005593232 goofy_ganguly[93181]: 167 167
Jan 23 04:02:54 np0005593232 systemd[1]: libpod-cd4a782ac78ea2714c05bd2acd369a368400fcb9cb5ce54237aa3112b299ac07.scope: Deactivated successfully.
Jan 23 04:02:54 np0005593232 podman[93165]: 2026-01-23 09:02:54.736775096 +0000 UTC m=+0.134959950 container attach cd4a782ac78ea2714c05bd2acd369a368400fcb9cb5ce54237aa3112b299ac07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 23 04:02:54 np0005593232 podman[93165]: 2026-01-23 09:02:54.737353523 +0000 UTC m=+0.135538387 container died cd4a782ac78ea2714c05bd2acd369a368400fcb9cb5ce54237aa3112b299ac07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ganguly, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:02:54 np0005593232 systemd[1]: var-lib-containers-storage-overlay-202958e4538a23361d5dfb4f475527f33d596e774a44068ae66e17838993b06f-merged.mount: Deactivated successfully.
Jan 23 04:02:54 np0005593232 podman[92987]: 2026-01-23 09:02:54.775540838 +0000 UTC m=+0.972102219 container remove da40d13f9e1fd9f87ea62a3d2a06ca2f61e6926182b07a15ad6c5357d2471a6e (image=quay.io/ceph/ceph:v18, name=objective_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:02:54 np0005593232 podman[93165]: 2026-01-23 09:02:54.788071081 +0000 UTC m=+0.186255945 container remove cd4a782ac78ea2714c05bd2acd369a368400fcb9cb5ce54237aa3112b299ac07 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ganguly, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:02:54 np0005593232 systemd[1]: libpod-conmon-cd4a782ac78ea2714c05bd2acd369a368400fcb9cb5ce54237aa3112b299ac07.scope: Deactivated successfully.
Jan 23 04:02:54 np0005593232 systemd[1]: libpod-conmon-da40d13f9e1fd9f87ea62a3d2a06ca2f61e6926182b07a15ad6c5357d2471a6e.scope: Deactivated successfully.
Jan 23 04:02:54 np0005593232 podman[93220]: 2026-01-23 09:02:54.954213968 +0000 UTC m=+0.039868153 container create 7a32c3c17b5a4ed43040305c86f02916a6a8f19bb8befd59955e74a673e43d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_payne, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 04:02:54 np0005593232 systemd[1]: Started libpod-conmon-7a32c3c17b5a4ed43040305c86f02916a6a8f19bb8befd59955e74a673e43d6d.scope.
Jan 23 04:02:55 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e55f724cd4d59b345d236e4451529683d796d81625b58064d988f533f61e57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e55f724cd4d59b345d236e4451529683d796d81625b58064d988f533f61e57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e55f724cd4d59b345d236e4451529683d796d81625b58064d988f533f61e57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e55f724cd4d59b345d236e4451529683d796d81625b58064d988f533f61e57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9e55f724cd4d59b345d236e4451529683d796d81625b58064d988f533f61e57/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:55 np0005593232 podman[93220]: 2026-01-23 09:02:54.936401907 +0000 UTC m=+0.022056112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:02:55 np0005593232 podman[93220]: 2026-01-23 09:02:55.04129259 +0000 UTC m=+0.126946805 container init 7a32c3c17b5a4ed43040305c86f02916a6a8f19bb8befd59955e74a673e43d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_payne, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 04:02:55 np0005593232 podman[93220]: 2026-01-23 09:02:55.04663379 +0000 UTC m=+0.132287975 container start 7a32c3c17b5a4ed43040305c86f02916a6a8f19bb8befd59955e74a673e43d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 23 04:02:55 np0005593232 podman[93220]: 2026-01-23 09:02:55.049970544 +0000 UTC m=+0.135624729 container attach 7a32c3c17b5a4ed43040305c86f02916a6a8f19bb8befd59955e74a673e43d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_payne, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 04:02:55 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/2587378952' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 23 04:02:55 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 23 04:02:55 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 23 04:02:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v136: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:02:55 np0005593232 jolly_payne[93236]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:02:55 np0005593232 jolly_payne[93236]: --> relative data size: 1.0
Jan 23 04:02:55 np0005593232 jolly_payne[93236]: --> All data devices are unavailable
Jan 23 04:02:56 np0005593232 systemd[1]: libpod-7a32c3c17b5a4ed43040305c86f02916a6a8f19bb8befd59955e74a673e43d6d.scope: Deactivated successfully.
Jan 23 04:02:56 np0005593232 podman[93220]: 2026-01-23 09:02:56.007933093 +0000 UTC m=+1.093587288 container died 7a32c3c17b5a4ed43040305c86f02916a6a8f19bb8befd59955e74a673e43d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:02:56 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b9e55f724cd4d59b345d236e4451529683d796d81625b58064d988f533f61e57-merged.mount: Deactivated successfully.
Jan 23 04:02:56 np0005593232 podman[93220]: 2026-01-23 09:02:56.074189418 +0000 UTC m=+1.159843603 container remove 7a32c3c17b5a4ed43040305c86f02916a6a8f19bb8befd59955e74a673e43d6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_payne, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 04:02:56 np0005593232 systemd[1]: libpod-conmon-7a32c3c17b5a4ed43040305c86f02916a6a8f19bb8befd59955e74a673e43d6d.scope: Deactivated successfully.
Jan 23 04:02:56 np0005593232 ansible-async_wrapper.py[93427]: Invoked with j332769941179 30 /home/zuul/.ansible/tmp/ansible-tmp-1769158975.7832122-37473-17155408416057/AnsiballZ_command.py _
Jan 23 04:02:56 np0005593232 ansible-async_wrapper.py[93503]: Starting module and watcher
Jan 23 04:02:56 np0005593232 ansible-async_wrapper.py[93503]: Start watching 93506 (30)
Jan 23 04:02:56 np0005593232 ansible-async_wrapper.py[93506]: Start module (93506)
Jan 23 04:02:56 np0005593232 ansible-async_wrapper.py[93427]: Return async_wrapper task started.
Jan 23 04:02:56 np0005593232 python3[93515]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:02:56 np0005593232 podman[93522]: 2026-01-23 09:02:56.509175174 +0000 UTC m=+0.047807376 container create 5f37373395155cea2a5674382890b6e9765d4e3064b730d64213072018a3da39 (image=quay.io/ceph/ceph:v18, name=sweet_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:02:56 np0005593232 systemd[1]: Started libpod-conmon-5f37373395155cea2a5674382890b6e9765d4e3064b730d64213072018a3da39.scope.
Jan 23 04:02:56 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83257817523f86d4c498c34e2491a2b61e8521eadc5653ddd39498a54d842f6f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83257817523f86d4c498c34e2491a2b61e8521eadc5653ddd39498a54d842f6f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:56 np0005593232 podman[93522]: 2026-01-23 09:02:56.488853242 +0000 UTC m=+0.027485484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:02:56 np0005593232 podman[93522]: 2026-01-23 09:02:56.588019414 +0000 UTC m=+0.126651616 container init 5f37373395155cea2a5674382890b6e9765d4e3064b730d64213072018a3da39 (image=quay.io/ceph/ceph:v18, name=sweet_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:02:56 np0005593232 podman[93522]: 2026-01-23 09:02:56.595409352 +0000 UTC m=+0.134041554 container start 5f37373395155cea2a5674382890b6e9765d4e3064b730d64213072018a3da39 (image=quay.io/ceph/ceph:v18, name=sweet_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:02:56 np0005593232 podman[93522]: 2026-01-23 09:02:56.598499219 +0000 UTC m=+0.137131421 container attach 5f37373395155cea2a5674382890b6e9765d4e3064b730d64213072018a3da39 (image=quay.io/ceph/ceph:v18, name=sweet_hellman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 23 04:02:56 np0005593232 podman[93581]: 2026-01-23 09:02:56.720458873 +0000 UTC m=+0.044349070 container create 6f69bcf8d3eadc6030a567a3eb5ee24b93da57b44fbc208dbd2b37b2ef60de05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_kirch, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:02:56 np0005593232 systemd[1]: Started libpod-conmon-6f69bcf8d3eadc6030a567a3eb5ee24b93da57b44fbc208dbd2b37b2ef60de05.scope.
Jan 23 04:02:56 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:56 np0005593232 podman[93581]: 2026-01-23 09:02:56.774234937 +0000 UTC m=+0.098125124 container init 6f69bcf8d3eadc6030a567a3eb5ee24b93da57b44fbc208dbd2b37b2ef60de05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_kirch, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 04:02:56 np0005593232 podman[93581]: 2026-01-23 09:02:56.779288309 +0000 UTC m=+0.103178496 container start 6f69bcf8d3eadc6030a567a3eb5ee24b93da57b44fbc208dbd2b37b2ef60de05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_kirch, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 23 04:02:56 np0005593232 podman[93581]: 2026-01-23 09:02:56.78217352 +0000 UTC m=+0.106063747 container attach 6f69bcf8d3eadc6030a567a3eb5ee24b93da57b44fbc208dbd2b37b2ef60de05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 04:02:56 np0005593232 tender_kirch[93598]: 167 167
Jan 23 04:02:56 np0005593232 systemd[1]: libpod-6f69bcf8d3eadc6030a567a3eb5ee24b93da57b44fbc208dbd2b37b2ef60de05.scope: Deactivated successfully.
Jan 23 04:02:56 np0005593232 podman[93581]: 2026-01-23 09:02:56.784627829 +0000 UTC m=+0.108518016 container died 6f69bcf8d3eadc6030a567a3eb5ee24b93da57b44fbc208dbd2b37b2ef60de05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_kirch, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 23 04:02:56 np0005593232 podman[93581]: 2026-01-23 09:02:56.703128735 +0000 UTC m=+0.027018932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:02:56 np0005593232 systemd[1]: var-lib-containers-storage-overlay-db6577dbfbc0611b33c23df2d3224df7d68dd36c5cb9bacfdc19a26f59c26165-merged.mount: Deactivated successfully.
Jan 23 04:02:56 np0005593232 podman[93581]: 2026-01-23 09:02:56.821008114 +0000 UTC m=+0.144898301 container remove 6f69bcf8d3eadc6030a567a3eb5ee24b93da57b44fbc208dbd2b37b2ef60de05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_kirch, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Jan 23 04:02:56 np0005593232 systemd[1]: libpod-conmon-6f69bcf8d3eadc6030a567a3eb5ee24b93da57b44fbc208dbd2b37b2ef60de05.scope: Deactivated successfully.
Jan 23 04:02:56 np0005593232 podman[93639]: 2026-01-23 09:02:56.97717438 +0000 UTC m=+0.041445988 container create fb52d8a01e70397aaad70d6ca200754dbbd6cfab15b21911ca74b0af85123ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatterjee, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:02:57 np0005593232 systemd[1]: Started libpod-conmon-fb52d8a01e70397aaad70d6ca200754dbbd6cfab15b21911ca74b0af85123ccc.scope.
Jan 23 04:02:57 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c93403401b4da8ddd75033fd34f179ad62450faec81a14a6bea9a77a045f718/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c93403401b4da8ddd75033fd34f179ad62450faec81a14a6bea9a77a045f718/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c93403401b4da8ddd75033fd34f179ad62450faec81a14a6bea9a77a045f718/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c93403401b4da8ddd75033fd34f179ad62450faec81a14a6bea9a77a045f718/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:57 np0005593232 podman[93639]: 2026-01-23 09:02:56.960284915 +0000 UTC m=+0.024556543 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:02:57 np0005593232 podman[93639]: 2026-01-23 09:02:57.064401186 +0000 UTC m=+0.128672814 container init fb52d8a01e70397aaad70d6ca200754dbbd6cfab15b21911ca74b0af85123ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:02:57 np0005593232 podman[93639]: 2026-01-23 09:02:57.072164114 +0000 UTC m=+0.136435722 container start fb52d8a01e70397aaad70d6ca200754dbbd6cfab15b21911ca74b0af85123ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatterjee, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:02:57 np0005593232 podman[93639]: 2026-01-23 09:02:57.075727965 +0000 UTC m=+0.139999573 container attach fb52d8a01e70397aaad70d6ca200754dbbd6cfab15b21911ca74b0af85123ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatterjee, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:02:57 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14349 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 04:02:57 np0005593232 sweet_hellman[93563]: 
Jan 23 04:02:57 np0005593232 sweet_hellman[93563]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 23 04:02:57 np0005593232 systemd[1]: libpod-5f37373395155cea2a5674382890b6e9765d4e3064b730d64213072018a3da39.scope: Deactivated successfully.
Jan 23 04:02:57 np0005593232 podman[93665]: 2026-01-23 09:02:57.24180252 +0000 UTC m=+0.033106283 container died 5f37373395155cea2a5674382890b6e9765d4e3064b730d64213072018a3da39 (image=quay.io/ceph/ceph:v18, name=sweet_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:02:57 np0005593232 systemd[1]: var-lib-containers-storage-overlay-83257817523f86d4c498c34e2491a2b61e8521eadc5653ddd39498a54d842f6f-merged.mount: Deactivated successfully.
Jan 23 04:02:57 np0005593232 podman[93665]: 2026-01-23 09:02:57.281847308 +0000 UTC m=+0.073151071 container remove 5f37373395155cea2a5674382890b6e9765d4e3064b730d64213072018a3da39 (image=quay.io/ceph/ceph:v18, name=sweet_hellman, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:02:57 np0005593232 systemd[1]: libpod-conmon-5f37373395155cea2a5674382890b6e9765d4e3064b730d64213072018a3da39.scope: Deactivated successfully.
Jan 23 04:02:57 np0005593232 ansible-async_wrapper.py[93506]: Module complete (93506)
Jan 23 04:02:57 np0005593232 python3[93728]: ansible-ansible.legacy.async_status Invoked with jid=j332769941179.93427 mode=status _async_dir=/root/.ansible_async
Jan 23 04:02:57 np0005593232 python3[93777]: ansible-ansible.legacy.async_status Invoked with jid=j332769941179.93427 mode=cleanup _async_dir=/root/.ansible_async
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]: {
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:    "0": [
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:        {
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:            "devices": [
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:                "/dev/loop3"
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:            ],
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:            "lv_name": "ceph_lv0",
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:            "lv_size": "7511998464",
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:            "name": "ceph_lv0",
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:            "tags": {
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:                "ceph.cluster_name": "ceph",
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:                "ceph.crush_device_class": "",
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:                "ceph.encrypted": "0",
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:                "ceph.osd_id": "0",
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:                "ceph.type": "block",
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:                "ceph.vdo": "0"
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:            },
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:            "type": "block",
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:            "vg_name": "ceph_vg0"
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:        }
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]:    ]
Jan 23 04:02:57 np0005593232 magical_chatterjee[93658]: }
Jan 23 04:02:57 np0005593232 systemd[1]: libpod-fb52d8a01e70397aaad70d6ca200754dbbd6cfab15b21911ca74b0af85123ccc.scope: Deactivated successfully.
Jan 23 04:02:57 np0005593232 podman[93639]: 2026-01-23 09:02:57.926381174 +0000 UTC m=+0.990652802 container died fb52d8a01e70397aaad70d6ca200754dbbd6cfab15b21911ca74b0af85123ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatterjee, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 04:02:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v137: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:02:58 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1c93403401b4da8ddd75033fd34f179ad62450faec81a14a6bea9a77a045f718-merged.mount: Deactivated successfully.
Jan 23 04:02:58 np0005593232 podman[93639]: 2026-01-23 09:02:58.059905573 +0000 UTC m=+1.124177181 container remove fb52d8a01e70397aaad70d6ca200754dbbd6cfab15b21911ca74b0af85123ccc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_chatterjee, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 23 04:02:58 np0005593232 ceph-mgr[74726]: [progress INFO root] Writing back 12 completed events
Jan 23 04:02:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 23 04:02:58 np0005593232 systemd[1]: libpod-conmon-fb52d8a01e70397aaad70d6ca200754dbbd6cfab15b21911ca74b0af85123ccc.scope: Deactivated successfully.
Jan 23 04:02:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:02:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:58 np0005593232 python3[93919]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:02:58 np0005593232 podman[93934]: 2026-01-23 09:02:58.595398119 +0000 UTC m=+0.046918602 container create d9ffec3efc613c416b614990bb6cc303a9a4f4435b70cce1f5c2ea1d59260c71 (image=quay.io/ceph/ceph:v18, name=gifted_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:02:58 np0005593232 systemd[1]: Started libpod-conmon-d9ffec3efc613c416b614990bb6cc303a9a4f4435b70cce1f5c2ea1d59260c71.scope.
Jan 23 04:02:58 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d66e3dcd75c7b334fb5a71ed6d7c0c6718498ccd34fdc4176d0a661bf7d7372/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d66e3dcd75c7b334fb5a71ed6d7c0c6718498ccd34fdc4176d0a661bf7d7372/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:58 np0005593232 podman[93934]: 2026-01-23 09:02:58.571446444 +0000 UTC m=+0.022966957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:02:58 np0005593232 podman[93934]: 2026-01-23 09:02:58.696158614 +0000 UTC m=+0.147679127 container init d9ffec3efc613c416b614990bb6cc303a9a4f4435b70cce1f5c2ea1d59260c71 (image=quay.io/ceph/ceph:v18, name=gifted_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 23 04:02:58 np0005593232 podman[93934]: 2026-01-23 09:02:58.702952636 +0000 UTC m=+0.154473119 container start d9ffec3efc613c416b614990bb6cc303a9a4f4435b70cce1f5c2ea1d59260c71 (image=quay.io/ceph/ceph:v18, name=gifted_carver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 23 04:02:58 np0005593232 podman[93934]: 2026-01-23 09:02:58.707313468 +0000 UTC m=+0.158833951 container attach d9ffec3efc613c416b614990bb6cc303a9a4f4435b70cce1f5c2ea1d59260c71 (image=quay.io/ceph/ceph:v18, name=gifted_carver, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 23 04:02:58 np0005593232 podman[93978]: 2026-01-23 09:02:58.772518774 +0000 UTC m=+0.039780411 container create 387a771fd70b135da349eed78da9b2e6269fd7aa1d574181dd918e81d39a7882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:02:58 np0005593232 systemd[1]: Started libpod-conmon-387a771fd70b135da349eed78da9b2e6269fd7aa1d574181dd918e81d39a7882.scope.
Jan 23 04:02:58 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:58 np0005593232 podman[93978]: 2026-01-23 09:02:58.845114148 +0000 UTC m=+0.112375805 container init 387a771fd70b135da349eed78da9b2e6269fd7aa1d574181dd918e81d39a7882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 23 04:02:58 np0005593232 podman[93978]: 2026-01-23 09:02:58.755418183 +0000 UTC m=+0.022679850 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:02:58 np0005593232 podman[93978]: 2026-01-23 09:02:58.852382923 +0000 UTC m=+0.119644560 container start 387a771fd70b135da349eed78da9b2e6269fd7aa1d574181dd918e81d39a7882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banach, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 04:02:58 np0005593232 podman[93978]: 2026-01-23 09:02:58.855797589 +0000 UTC m=+0.123059226 container attach 387a771fd70b135da349eed78da9b2e6269fd7aa1d574181dd918e81d39a7882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banach, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:02:58 np0005593232 competent_banach[93994]: 167 167
Jan 23 04:02:58 np0005593232 systemd[1]: libpod-387a771fd70b135da349eed78da9b2e6269fd7aa1d574181dd918e81d39a7882.scope: Deactivated successfully.
Jan 23 04:02:58 np0005593232 conmon[93994]: conmon 387a771fd70b135da349 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-387a771fd70b135da349eed78da9b2e6269fd7aa1d574181dd918e81d39a7882.scope/container/memory.events
Jan 23 04:02:58 np0005593232 podman[93978]: 2026-01-23 09:02:58.862458246 +0000 UTC m=+0.129719883 container died 387a771fd70b135da349eed78da9b2e6269fd7aa1d574181dd918e81d39a7882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banach, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:02:58 np0005593232 systemd[1]: var-lib-containers-storage-overlay-9ab7dea9b0bd07386f9bb9a28a950dbf1b972409415d3a626b9fbc840837507d-merged.mount: Deactivated successfully.
Jan 23 04:02:58 np0005593232 podman[93978]: 2026-01-23 09:02:58.899638413 +0000 UTC m=+0.166900050 container remove 387a771fd70b135da349eed78da9b2e6269fd7aa1d574181dd918e81d39a7882 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banach, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 04:02:58 np0005593232 systemd[1]: libpod-conmon-387a771fd70b135da349eed78da9b2e6269fd7aa1d574181dd918e81d39a7882.scope: Deactivated successfully.
Jan 23 04:02:59 np0005593232 podman[94017]: 2026-01-23 09:02:59.052197768 +0000 UTC m=+0.037850967 container create ae7c6b8fe534fcca53f326def0f4976d7773c6ee103b125b306b14555c5acc21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Jan 23 04:02:59 np0005593232 systemd[1]: Started libpod-conmon-ae7c6b8fe534fcca53f326def0f4976d7773c6ee103b125b306b14555c5acc21.scope.
Jan 23 04:02:59 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:02:59 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20909866962e470bd804d55e523363a88cf1f5d4ded527dc3cf68701bb371201/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:59 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20909866962e470bd804d55e523363a88cf1f5d4ded527dc3cf68701bb371201/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:59 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20909866962e470bd804d55e523363a88cf1f5d4ded527dc3cf68701bb371201/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:59 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20909866962e470bd804d55e523363a88cf1f5d4ded527dc3cf68701bb371201/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:02:59 np0005593232 podman[94017]: 2026-01-23 09:02:59.035335123 +0000 UTC m=+0.020988342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:02:59 np0005593232 podman[94017]: 2026-01-23 09:02:59.14708909 +0000 UTC m=+0.132742309 container init ae7c6b8fe534fcca53f326def0f4976d7773c6ee103b125b306b14555c5acc21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:02:59 np0005593232 podman[94017]: 2026-01-23 09:02:59.157827402 +0000 UTC m=+0.143480601 container start ae7c6b8fe534fcca53f326def0f4976d7773c6ee103b125b306b14555c5acc21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 23 04:02:59 np0005593232 podman[94017]: 2026-01-23 09:02:59.161165786 +0000 UTC m=+0.146819015 container attach ae7c6b8fe534fcca53f326def0f4976d7773c6ee103b125b306b14555c5acc21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_faraday, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:02:59 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14355 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 04:02:59 np0005593232 gifted_carver[93973]: 
Jan 23 04:02:59 np0005593232 gifted_carver[93973]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 23 04:02:59 np0005593232 systemd[1]: libpod-d9ffec3efc613c416b614990bb6cc303a9a4f4435b70cce1f5c2ea1d59260c71.scope: Deactivated successfully.
Jan 23 04:02:59 np0005593232 podman[93934]: 2026-01-23 09:02:59.291553737 +0000 UTC m=+0.743074220 container died d9ffec3efc613c416b614990bb6cc303a9a4f4435b70cce1f5c2ea1d59260c71 (image=quay.io/ceph/ceph:v18, name=gifted_carver, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:02:59 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1d66e3dcd75c7b334fb5a71ed6d7c0c6718498ccd34fdc4176d0a661bf7d7372-merged.mount: Deactivated successfully.
Jan 23 04:02:59 np0005593232 podman[93934]: 2026-01-23 09:02:59.331545903 +0000 UTC m=+0.783066386 container remove d9ffec3efc613c416b614990bb6cc303a9a4f4435b70cce1f5c2ea1d59260c71 (image=quay.io/ceph/ceph:v18, name=gifted_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 04:02:59 np0005593232 systemd[1]: libpod-conmon-d9ffec3efc613c416b614990bb6cc303a9a4f4435b70cce1f5c2ea1d59260c71.scope: Deactivated successfully.
Jan 23 04:02:59 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:02:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v138: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:02:59 np0005593232 flamboyant_faraday[94051]: {
Jan 23 04:02:59 np0005593232 flamboyant_faraday[94051]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:02:59 np0005593232 flamboyant_faraday[94051]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:02:59 np0005593232 flamboyant_faraday[94051]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:02:59 np0005593232 flamboyant_faraday[94051]:        "osd_id": 0,
Jan 23 04:02:59 np0005593232 flamboyant_faraday[94051]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:02:59 np0005593232 flamboyant_faraday[94051]:        "type": "bluestore"
Jan 23 04:02:59 np0005593232 flamboyant_faraday[94051]:    }
Jan 23 04:02:59 np0005593232 flamboyant_faraday[94051]: }
Jan 23 04:03:00 np0005593232 systemd[1]: libpod-ae7c6b8fe534fcca53f326def0f4976d7773c6ee103b125b306b14555c5acc21.scope: Deactivated successfully.
Jan 23 04:03:00 np0005593232 podman[94088]: 2026-01-23 09:03:00.051032649 +0000 UTC m=+0.024887742 container died ae7c6b8fe534fcca53f326def0f4976d7773c6ee103b125b306b14555c5acc21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_faraday, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:03:00 np0005593232 systemd[1]: var-lib-containers-storage-overlay-20909866962e470bd804d55e523363a88cf1f5d4ded527dc3cf68701bb371201-merged.mount: Deactivated successfully.
Jan 23 04:03:00 np0005593232 podman[94088]: 2026-01-23 09:03:00.102623701 +0000 UTC m=+0.076478734 container remove ae7c6b8fe534fcca53f326def0f4976d7773c6ee103b125b306b14555c5acc21 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 04:03:00 np0005593232 systemd[1]: libpod-conmon-ae7c6b8fe534fcca53f326def0f4976d7773c6ee103b125b306b14555c5acc21.scope: Deactivated successfully.
Jan 23 04:03:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:03:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:03:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:00 np0005593232 ceph-mgr[74726]: [progress INFO root] update: starting ev d743b8f7-f92b-48f9-806d-fc82556e0c2d (Updating rgw.rgw deployment (+3 -> 3))
Jan 23 04:03:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.nxrebk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 23 04:03:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.nxrebk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 23 04:03:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.nxrebk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 23 04:03:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 23 04:03:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:03:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:03:00 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.nxrebk on compute-2
Jan 23 04:03:00 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.nxrebk on compute-2
Jan 23 04:03:00 np0005593232 python3[94128]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:03:00 np0005593232 podman[94129]: 2026-01-23 09:03:00.384265 +0000 UTC m=+0.049636018 container create d318499155279de8fdddd64560539fec19831294331c36bb849fff9ce0c07cf2 (image=quay.io/ceph/ceph:v18, name=blissful_mclean, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:03:00 np0005593232 systemd[1]: Started libpod-conmon-d318499155279de8fdddd64560539fec19831294331c36bb849fff9ce0c07cf2.scope.
Jan 23 04:03:00 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:03:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa7291cce4009f4b68aa155bc20e6bad55ca9ed79e04da1c87c4fe6f1e41928/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa7291cce4009f4b68aa155bc20e6bad55ca9ed79e04da1c87c4fe6f1e41928/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:00 np0005593232 podman[94129]: 2026-01-23 09:03:00.366319325 +0000 UTC m=+0.031690363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:03:00 np0005593232 podman[94129]: 2026-01-23 09:03:00.469197961 +0000 UTC m=+0.134568999 container init d318499155279de8fdddd64560539fec19831294331c36bb849fff9ce0c07cf2 (image=quay.io/ceph/ceph:v18, name=blissful_mclean, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:03:00 np0005593232 podman[94129]: 2026-01-23 09:03:00.474929693 +0000 UTC m=+0.140300701 container start d318499155279de8fdddd64560539fec19831294331c36bb849fff9ce0c07cf2 (image=quay.io/ceph/ceph:v18, name=blissful_mclean, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:03:00 np0005593232 podman[94129]: 2026-01-23 09:03:00.478625967 +0000 UTC m=+0.143996985 container attach d318499155279de8fdddd64560539fec19831294331c36bb849fff9ce0c07cf2 (image=quay.io/ceph/ceph:v18, name=blissful_mclean, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 23 04:03:00 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:00 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:00 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.nxrebk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 23 04:03:00 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.nxrebk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 23 04:03:00 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:00 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 23 04:03:00 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 23 04:03:01 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14361 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 04:03:01 np0005593232 blissful_mclean[94144]: 
Jan 23 04:03:01 np0005593232 blissful_mclean[94144]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Jan 23 04:03:01 np0005593232 systemd[1]: libpod-d318499155279de8fdddd64560539fec19831294331c36bb849fff9ce0c07cf2.scope: Deactivated successfully.
Jan 23 04:03:01 np0005593232 conmon[94144]: conmon d318499155279de8fddd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d318499155279de8fdddd64560539fec19831294331c36bb849fff9ce0c07cf2.scope/container/memory.events
Jan 23 04:03:01 np0005593232 podman[94129]: 2026-01-23 09:03:01.055250121 +0000 UTC m=+0.720621139 container died d318499155279de8fdddd64560539fec19831294331c36bb849fff9ce0c07cf2 (image=quay.io/ceph/ceph:v18, name=blissful_mclean, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:03:01 np0005593232 systemd[1]: var-lib-containers-storage-overlay-cfa7291cce4009f4b68aa155bc20e6bad55ca9ed79e04da1c87c4fe6f1e41928-merged.mount: Deactivated successfully.
Jan 23 04:03:01 np0005593232 podman[94129]: 2026-01-23 09:03:01.108616183 +0000 UTC m=+0.773987241 container remove d318499155279de8fdddd64560539fec19831294331c36bb849fff9ce0c07cf2 (image=quay.io/ceph/ceph:v18, name=blissful_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 04:03:01 np0005593232 systemd[1]: libpod-conmon-d318499155279de8fdddd64560539fec19831294331c36bb849fff9ce0c07cf2.scope: Deactivated successfully.
Jan 23 04:03:01 np0005593232 ansible-async_wrapper.py[93503]: Done in kid B.
Jan 23 04:03:01 np0005593232 ceph-mon[74423]: Deploying daemon rgw.rgw.compute-2.nxrebk on compute-2
Jan 23 04:03:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v139: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:03:02 np0005593232 python3[94204]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:03:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:03:02 np0005593232 podman[94207]: 2026-01-23 09:03:02.245645465 +0000 UTC m=+0.027333611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:03:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 23 04:03:02 np0005593232 podman[94207]: 2026-01-23 09:03:02.605732181 +0000 UTC m=+0.387420287 container create 9fa1bb29ce10de2a4f49081eac03ff32a0041448b33ce9e85937be8ddcacb49a (image=quay.io/ceph/ceph:v18, name=hardcore_ride, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Jan 23 04:03:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:03:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 23 04:03:02 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 23 04:03:02 np0005593232 systemd[1]: Started libpod-conmon-9fa1bb29ce10de2a4f49081eac03ff32a0041448b33ce9e85937be8ddcacb49a.scope.
Jan 23 04:03:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 23 04:03:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Jan 23 04:03:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.nxrebk' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 23 04:03:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.odtvxh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 23 04:03:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.odtvxh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 23 04:03:02 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:03:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.odtvxh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 23 04:03:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff4dea807e16808c1c5c38bf3d0c2b8d513b463213b2c18a1d3f182c5f865c7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ff4dea807e16808c1c5c38bf3d0c2b8d513b463213b2c18a1d3f182c5f865c7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 23 04:03:02 np0005593232 podman[94207]: 2026-01-23 09:03:02.699562153 +0000 UTC m=+0.481250279 container init 9fa1bb29ce10de2a4f49081eac03ff32a0041448b33ce9e85937be8ddcacb49a (image=quay.io/ceph/ceph:v18, name=hardcore_ride, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Jan 23 04:03:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:02 np0005593232 podman[94207]: 2026-01-23 09:03:02.70514315 +0000 UTC m=+0.486831256 container start 9fa1bb29ce10de2a4f49081eac03ff32a0041448b33ce9e85937be8ddcacb49a (image=quay.io/ceph/ceph:v18, name=hardcore_ride, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:03:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:03:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:03:02 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.odtvxh on compute-1
Jan 23 04:03:02 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.odtvxh on compute-1
Jan 23 04:03:02 np0005593232 podman[94207]: 2026-01-23 09:03:02.708913176 +0000 UTC m=+0.490601282 container attach 9fa1bb29ce10de2a4f49081eac03ff32a0041448b33ce9e85937be8ddcacb49a (image=quay.io/ceph/ceph:v18, name=hardcore_ride, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 23 04:03:03 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.14373 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 23 04:03:03 np0005593232 hardcore_ride[94222]: 
Jan 23 04:03:03 np0005593232 hardcore_ride[94222]: [{"container_id": "75130cd954df", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.55%", "created": "2026-01-23T09:00:02.332545Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-23T09:00:03.030423Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\"", "2026-01-23T09:02:47.114181Z daemon:crash.compute-0 [INFO] \"Reconfigured crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-23T09:01:21.581472Z", "memory_usage": 11785994, "ports": [], "service_name": "crash", "started": "2026-01-23T09:00:02.224909Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e1533653-0a5a-584c-b34b-8689f0d32e77@crash.compute-0", "version": "18.2.7"}, {"container_id": "0b0f174e85f8", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.70%", "created": "2026-01-23T09:00:59.607813Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "events": ["2026-01-23T09:00:59.657951Z daemon:crash.compute-1 [INFO] \"Deployed crash.compute-1 on host 'compute-1'\"", "2026-01-23T09:02:48.692181Z daemon:crash.compute-1 [INFO] \"Reconfigured crash.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-23T09:02:53.893353Z", "memory_usage": 11723079, "ports": [], "service_name": "crash", "started": "2026-01-23T09:00:59.489019Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e1533653-0a5a-584c-b34b-8689f0d32e77@crash.compute-1", "version": "18.2.7"}, {"container_id": "5d87f145630e", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.05%", "created": "2026-01-23T09:02:14.047363Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "events": ["2026-01-23T09:02:15.650186Z daemon:crash.compute-2 [INFO] \"Deployed crash.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-23T09:02:54.016698Z", "memory_usage": 11670650, "ports": [], "service_name": "crash", "started": "2026-01-23T09:02:13.938457Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e1533653-0a5a-584c-b34b-8689f0d32e77@crash.compute-2", "version": "18.2.7"}, {"container_id": "f3f14f229951", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "37.22%", "created": "2026-01-23T08:58:50.021499Z", "daemon_id": "compute-0.yntofk", "daemon_name": "mgr.compute-0.yntofk", "daemon_type": "mgr", "events": ["2026-01-23T09:02:46.461264Z daemon:mgr.compute-0.yntofk [INFO] \"Reconfigured mgr.compute-0.yntofk on host 'compute-0'\""], "hostname": "compute-0", "is_active": true, "last_refresh": "2026-01-23T09:01:21.581414Z", "memory_usage": 547356672, "ports": [9283, 8765, 8765], "service_name": "mgr", "started": "2026-01-23T08:58:49.847517Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e1533653-0a5a-584c-b34b-8689f0d32e77@mgr.compute-0.yntofk", "version": "18.2.7"}, {"container_id": "792b1d7bb8e5", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "57.80%", "created": "2026-01-23T09:02:08.958775Z", "daemon_id": "compute-1.wsgywz", "daemon_name": "mgr.compute-1.wsgywz", "daemon_type": "mgr", "events": ["2026-01-23T09:02:09.132843Z daemon:mgr.compute-1.wsgywz [INFO] \"Deployed mgr.compute-1.wsgywz on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-23T09:02:53.893628Z", "memory_usage": 514011955, "ports": [8765], "service_name": "mgr", "started": "2026-01-23T09:02:07.695822Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e1533653-0a5a-584c-b34b-8689f0d32e77@mgr.compute-1.wsgywz", "version": "18.2.7"}, {"container_id": "f6d49bbf16ca", "container_image_digests": ["quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "41.29%", "created": "2026-01-23T09:02:01.418967Z", "daemon_id": "compute-2.nrjyzu", "daemon_name": "mgr.compute-2.nrjyzu", "daemon_type": "mgr", "events": ["2026-01-23T09:02:05.912186Z daemon:mgr.compute-2.nrjyzu [INFO] \"Deployed mgr.compute-2.nrjyzu on host 'compute-2'\"", "2026-01-23T09:02:52.752856Z daemon:mgr.compute-2.nrjyzu [INFO] \"Reconfigured mgr.compute-2.nrjyzu on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-23T09:02:54.016599Z", "memory_usage": 513068236, "ports": [8765], "service_name": "mgr", "started": "2026-01-23T09:02:01.341775Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e1533653-0a5a-584c-b34b-8689f0d32e77@mgr.compute-2.nrjyzu", "version": "18.2.7"}, {"container_id": "5952394e9ece", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "1.44%", "created": "2026-01-23T08:58:44.699348Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-23T09:02:45.807978Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-23T09:01:21.581338Z", "memory_request": 2147483648, "memory_usage": 34802237, "ports": [], "service_name": "mon", "started": "2026-01-23T08:58:47.700790Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-e1533653-0a5a-584c-b34b-8689f0d32e77@mon.compute-0", "version": "18.2.7"}, {"container_id": "af70d30f5114", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f4
Jan 23 04:03:03 np0005593232 systemd[1]: libpod-9fa1bb29ce10de2a4f49081eac03ff32a0041448b33ce9e85937be8ddcacb49a.scope: Deactivated successfully.
Jan 23 04:03:03 np0005593232 podman[94207]: 2026-01-23 09:03:03.271614668 +0000 UTC m=+1.053302784 container died 9fa1bb29ce10de2a4f49081eac03ff32a0041448b33ce9e85937be8ddcacb49a (image=quay.io/ceph/ceph:v18, name=hardcore_ride, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 04:03:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:03:03 np0005593232 systemd[1]: var-lib-containers-storage-overlay-2ff4dea807e16808c1c5c38bf3d0c2b8d513b463213b2c18a1d3f182c5f865c7-merged.mount: Deactivated successfully.
Jan 23 04:03:03 np0005593232 podman[94207]: 2026-01-23 09:03:03.326640607 +0000 UTC m=+1.108328723 container remove 9fa1bb29ce10de2a4f49081eac03ff32a0041448b33ce9e85937be8ddcacb49a (image=quay.io/ceph/ceph:v18, name=hardcore_ride, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 23 04:03:03 np0005593232 systemd[1]: libpod-conmon-9fa1bb29ce10de2a4f49081eac03ff32a0041448b33ce9e85937be8ddcacb49a.scope: Deactivated successfully.
Jan 23 04:03:03 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 45 pg[8.0( empty local-lis/les=0/0 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [0] r=0 lpr=45 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:03 np0005593232 ceph-mgr[74726]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Jan 23 04:03:03 np0005593232 rsyslogd[1008]: message too long (13968) with configured size 8096, begin of message is: [{"container_id": "75130cd954df", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 23 04:03:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:03 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.102:0/1632836641' entity='client.rgw.rgw.compute-2.nxrebk' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 23 04:03:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:03 np0005593232 ceph-mon[74423]: from='client.? ' entity='client.rgw.rgw.compute-2.nxrebk' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 23 04:03:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.odtvxh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 23 04:03:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.odtvxh", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 23 04:03:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:03 np0005593232 ceph-mon[74423]: Deploying daemon rgw.rgw.compute-1.odtvxh on compute-1
Jan 23 04:03:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 23 04:03:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.nxrebk' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 23 04:03:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 23 04:03:03 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 23 04:03:03 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 46 pg[8.0( empty local-lis/les=45/46 n=0 ec=45/45 lis/c=0/0 les/c/f=0/0/0 sis=45) [0] r=0 lpr=45 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v142: 194 pgs: 1 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:03:04 np0005593232 python3[94286]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:03:04 np0005593232 podman[94287]: 2026-01-23 09:03:04.369812546 +0000 UTC m=+0.044533455 container create 7c85977f71098b3b214235032838ddf811f4b4fdd119819b9fad03a8bf22a5b2 (image=quay.io/ceph/ceph:v18, name=adoring_lamarr, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 04:03:04 np0005593232 systemd[1]: Started libpod-conmon-7c85977f71098b3b214235032838ddf811f4b4fdd119819b9fad03a8bf22a5b2.scope.
Jan 23 04:03:04 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:03:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50907a4dc2cf9291b251a218257ffa971f5e96c64af53778f0ad8f19bbc70a0a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50907a4dc2cf9291b251a218257ffa971f5e96c64af53778f0ad8f19bbc70a0a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:04 np0005593232 podman[94287]: 2026-01-23 09:03:04.437275575 +0000 UTC m=+0.111996504 container init 7c85977f71098b3b214235032838ddf811f4b4fdd119819b9fad03a8bf22a5b2 (image=quay.io/ceph/ceph:v18, name=adoring_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:03:04 np0005593232 podman[94287]: 2026-01-23 09:03:04.4427662 +0000 UTC m=+0.117487109 container start 7c85977f71098b3b214235032838ddf811f4b4fdd119819b9fad03a8bf22a5b2 (image=quay.io/ceph/ceph:v18, name=adoring_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 23 04:03:04 np0005593232 podman[94287]: 2026-01-23 09:03:04.350929815 +0000 UTC m=+0.025650754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:03:04 np0005593232 podman[94287]: 2026-01-23 09:03:04.44594985 +0000 UTC m=+0.120670759 container attach 7c85977f71098b3b214235032838ddf811f4b4fdd119819b9fad03a8bf22a5b2 (image=quay.io/ceph/ceph:v18, name=adoring_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:03:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:03:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 23 04:03:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:03:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 23 04:03:04 np0005593232 ceph-mon[74423]: from='client.? ' entity='client.rgw.rgw.compute-2.nxrebk' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 23 04:03:04 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 23 04:03:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 23 04:03:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 23 04:03:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.nxrebk' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 23 04:03:04 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 47 pg[9.0( empty local-lis/les=0/0 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [0] r=0 lpr=47 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jgxhia", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Jan 23 04:03:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jgxhia", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 23 04:03:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jgxhia", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 23 04:03:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Jan 23 04:03:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:03:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:03:04 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.jgxhia on compute-0
Jan 23 04:03:04 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.jgxhia on compute-0
Jan 23 04:03:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Jan 23 04:03:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2831391461' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 23 04:03:05 np0005593232 adoring_lamarr[94303]: 
Jan 23 04:03:05 np0005593232 adoring_lamarr[94303]: {"fsid":"e1533653-0a5a-584c-b34b-8689f0d32e77","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":59,"monmap":{"epoch":3,"min_mon_release_name":"reef","num_mons":3},"osdmap":{"epoch":47,"num_osds":3,"num_up_osds":3,"osd_up_since":1769158956,"num_in_osds":3,"osd_in_since":1769158937,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":193},{"state_name":"unknown","count":1}],"num_pgs":194,"num_pools":8,"num_objects":2,"data_bytes":459280,"bytes_used":84254720,"bytes_avail":22451740672,"bytes_total":22535995392,"unknown_pgs_ratio":0.0051546390168368816},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":4,"modified":"2026-01-23T09:03:01.955939+0000","services":{"mgr":{"daemons":{"summary":"","compute-1.wsgywz":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.nrjyzu":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{"b3afd72c-094e-4778-a831-36fc80bbd787":{"message":"Global Recovery Event (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true},"d743b8f7-f92b-48f9-806d-fc82556e0c2d":{"message":"Updating rgw.rgw deployment (+3 -> 3) (2s)\n      [=========...................] (remaining: 4s)","progress":0.3333333432674408,"add_to_ceph_s":true}}}
Jan 23 04:03:05 np0005593232 systemd[1]: libpod-7c85977f71098b3b214235032838ddf811f4b4fdd119819b9fad03a8bf22a5b2.scope: Deactivated successfully.
Jan 23 04:03:05 np0005593232 podman[94287]: 2026-01-23 09:03:05.091466323 +0000 UTC m=+0.766187232 container died 7c85977f71098b3b214235032838ddf811f4b4fdd119819b9fad03a8bf22a5b2 (image=quay.io/ceph/ceph:v18, name=adoring_lamarr, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:03:05 np0005593232 systemd[1]: var-lib-containers-storage-overlay-50907a4dc2cf9291b251a218257ffa971f5e96c64af53778f0ad8f19bbc70a0a-merged.mount: Deactivated successfully.
Jan 23 04:03:05 np0005593232 podman[94287]: 2026-01-23 09:03:05.134005451 +0000 UTC m=+0.808726360 container remove 7c85977f71098b3b214235032838ddf811f4b4fdd119819b9fad03a8bf22a5b2 (image=quay.io/ceph/ceph:v18, name=adoring_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:03:05 np0005593232 systemd[1]: libpod-conmon-7c85977f71098b3b214235032838ddf811f4b4fdd119819b9fad03a8bf22a5b2.scope: Deactivated successfully.
Jan 23 04:03:05 np0005593232 podman[94479]: 2026-01-23 09:03:05.319195354 +0000 UTC m=+0.039865273 container create 8f4367c05bdecf551e91b4ddda7042a730e8245e7a37e01dc0b04dbad599d435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 23 04:03:05 np0005593232 systemd[1]: Started libpod-conmon-8f4367c05bdecf551e91b4ddda7042a730e8245e7a37e01dc0b04dbad599d435.scope.
Jan 23 04:03:05 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:03:05 np0005593232 podman[94479]: 2026-01-23 09:03:05.392836898 +0000 UTC m=+0.113506847 container init 8f4367c05bdecf551e91b4ddda7042a730e8245e7a37e01dc0b04dbad599d435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_einstein, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:03:05 np0005593232 podman[94479]: 2026-01-23 09:03:05.301906838 +0000 UTC m=+0.022576787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:03:05 np0005593232 podman[94479]: 2026-01-23 09:03:05.399586108 +0000 UTC m=+0.120256027 container start 8f4367c05bdecf551e91b4ddda7042a730e8245e7a37e01dc0b04dbad599d435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 04:03:05 np0005593232 adoring_einstein[94495]: 167 167
Jan 23 04:03:05 np0005593232 systemd[1]: libpod-8f4367c05bdecf551e91b4ddda7042a730e8245e7a37e01dc0b04dbad599d435.scope: Deactivated successfully.
Jan 23 04:03:05 np0005593232 podman[94479]: 2026-01-23 09:03:05.403475377 +0000 UTC m=+0.124145306 container attach 8f4367c05bdecf551e91b4ddda7042a730e8245e7a37e01dc0b04dbad599d435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_einstein, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 04:03:05 np0005593232 podman[94479]: 2026-01-23 09:03:05.404608349 +0000 UTC m=+0.125278278 container died 8f4367c05bdecf551e91b4ddda7042a730e8245e7a37e01dc0b04dbad599d435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 04:03:05 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f3286458f2228f0e4ca17aaf31d4a54e15410edb900764517366fb7f7b91e37a-merged.mount: Deactivated successfully.
Jan 23 04:03:05 np0005593232 podman[94479]: 2026-01-23 09:03:05.441383554 +0000 UTC m=+0.162053473 container remove 8f4367c05bdecf551e91b4ddda7042a730e8245e7a37e01dc0b04dbad599d435 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 23 04:03:05 np0005593232 systemd[1]: libpod-conmon-8f4367c05bdecf551e91b4ddda7042a730e8245e7a37e01dc0b04dbad599d435.scope: Deactivated successfully.
Jan 23 04:03:05 np0005593232 systemd[1]: Reloading.
Jan 23 04:03:05 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:03:05 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:03:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:05 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.102:0/1632836641' entity='client.rgw.rgw.compute-2.nxrebk' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 23 04:03:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:05 np0005593232 ceph-mon[74423]: from='client.? ' entity='client.rgw.rgw.compute-2.nxrebk' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 23 04:03:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jgxhia", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 23 04:03:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.jgxhia", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 23 04:03:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:05 np0005593232 ceph-mon[74423]: Deploying daemon rgw.rgw.compute-0.jgxhia on compute-0
Jan 23 04:03:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 23 04:03:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.nxrebk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 23 04:03:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 23 04:03:05 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 23 04:03:05 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 48 pg[9.0( empty local-lis/les=47/48 n=0 ec=47/47 lis/c=0/0 les/c/f=0/0/0 sis=47) [0] r=0 lpr=47 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:05 np0005593232 systemd[1]: Reloading.
Jan 23 04:03:05 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:03:05 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:03:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v145: 195 pgs: 2 unknown, 193 active+clean; 449 KiB data, 80 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:03:06 np0005593232 systemd[1]: Starting Ceph rgw.rgw.compute-0.jgxhia for e1533653-0a5a-584c-b34b-8689f0d32e77...
Jan 23 04:03:06 np0005593232 podman[94668]: 2026-01-23 09:03:06.241752667 +0000 UTC m=+0.041962123 container create b99d0856fdadc62a85b0ff76482e31f6bb9728d053579141ca98c4d52fa5357a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-rgw-rgw-compute-0-jgxhia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:03:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8679321ba79f95061473599029fd719d9fb0f4abe5d293911a6ad013c1e7ef50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8679321ba79f95061473599029fd719d9fb0f4abe5d293911a6ad013c1e7ef50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8679321ba79f95061473599029fd719d9fb0f4abe5d293911a6ad013c1e7ef50/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8679321ba79f95061473599029fd719d9fb0f4abe5d293911a6ad013c1e7ef50/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.jgxhia supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:06 np0005593232 podman[94668]: 2026-01-23 09:03:06.290601282 +0000 UTC m=+0.090810758 container init b99d0856fdadc62a85b0ff76482e31f6bb9728d053579141ca98c4d52fa5357a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-rgw-rgw-compute-0-jgxhia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Jan 23 04:03:06 np0005593232 podman[94668]: 2026-01-23 09:03:06.297683421 +0000 UTC m=+0.097892877 container start b99d0856fdadc62a85b0ff76482e31f6bb9728d053579141ca98c4d52fa5357a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-rgw-rgw-compute-0-jgxhia, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 23 04:03:06 np0005593232 bash[94668]: b99d0856fdadc62a85b0ff76482e31f6bb9728d053579141ca98c4d52fa5357a
Jan 23 04:03:06 np0005593232 podman[94668]: 2026-01-23 09:03:06.22163734 +0000 UTC m=+0.021846816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:03:06 np0005593232 systemd[1]: Started Ceph rgw.rgw.compute-0.jgxhia for e1533653-0a5a-584c-b34b-8689f0d32e77.
Jan 23 04:03:06 np0005593232 python3[94663]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:03:06 np0005593232 radosgw[94687]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 23 04:03:06 np0005593232 radosgw[94687]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Jan 23 04:03:06 np0005593232 radosgw[94687]: framework: beast
Jan 23 04:03:06 np0005593232 radosgw[94687]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 23 04:03:06 np0005593232 radosgw[94687]: init_numa not setting numa affinity
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 23 04:03:06 np0005593232 podman[94688]: 2026-01-23 09:03:06.381611194 +0000 UTC m=+0.042491327 container create cbfecefac7beddc502a79d00c0384ef86d86172dc1285261b441d79be875f8b8 (image=quay.io/ceph/ceph:v18, name=adoring_nightingale, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:06 np0005593232 ceph-mgr[74726]: [progress INFO root] complete: finished ev d743b8f7-f92b-48f9-806d-fc82556e0c2d (Updating rgw.rgw deployment (+3 -> 3))
Jan 23 04:03:06 np0005593232 ceph-mgr[74726]: [progress INFO root] Completed event d743b8f7-f92b-48f9-806d-fc82556e0c2d (Updating rgw.rgw deployment (+3 -> 3)) in 6 seconds
Jan 23 04:03:06 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 23 04:03:06 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:06 np0005593232 systemd[1]: Started libpod-conmon-cbfecefac7beddc502a79d00c0384ef86d86172dc1285261b441d79be875f8b8.scope.
Jan 23 04:03:06 np0005593232 ceph-mgr[74726]: [progress INFO root] update: starting ev 37c9c624-6e0a-4002-b8ad-860e2d1a7a9b (Updating mds.cephfs deployment (+3 -> 3))
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.cfzfln", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.cfzfln", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.cfzfln", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:03:06 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.cfzfln on compute-2
Jan 23 04:03:06 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.cfzfln on compute-2
Jan 23 04:03:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:03:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13cf858ccd027c6ef4bd3a84ddbdeaac5e848c4a84cd6a59b4bab63f52da1517/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/13cf858ccd027c6ef4bd3a84ddbdeaac5e848c4a84cd6a59b4bab63f52da1517/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:06 np0005593232 podman[94688]: 2026-01-23 09:03:06.364192514 +0000 UTC m=+0.025072667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:03:06 np0005593232 podman[94688]: 2026-01-23 09:03:06.4723832 +0000 UTC m=+0.133263363 container init cbfecefac7beddc502a79d00c0384ef86d86172dc1285261b441d79be875f8b8 (image=quay.io/ceph/ceph:v18, name=adoring_nightingale, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:03:06 np0005593232 podman[94688]: 2026-01-23 09:03:06.479438018 +0000 UTC m=+0.140318151 container start cbfecefac7beddc502a79d00c0384ef86d86172dc1285261b441d79be875f8b8 (image=quay.io/ceph/ceph:v18, name=adoring_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 04:03:06 np0005593232 podman[94688]: 2026-01-23 09:03:06.482590977 +0000 UTC m=+0.143471140 container attach cbfecefac7beddc502a79d00c0384ef86d86172dc1285261b441d79be875f8b8 (image=quay.io/ceph/ceph:v18, name=adoring_nightingale, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Jan 23 04:03:06 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 23 04:03:06 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/671819638' entity='client.rgw.rgw.compute-0.jgxhia' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: from='client.? ' entity='client.rgw.rgw.compute-2.nxrebk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.cfzfln", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.cfzfln", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.odtvxh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 23 04:03:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.nxrebk' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 23 04:03:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Jan 23 04:03:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4125697578' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 23 04:03:07 np0005593232 adoring_nightingale[94762]: 
Jan 23 04:03:07 np0005593232 adoring_nightingale[94762]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target","value":"5502918246","level":"basic","can_update_at_runtime":true,"mask":"host:compute-1","location_type":"host","location_value":"compute-1"},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.jgxhia","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.odtvxh","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.nxrebk","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Jan 23 04:03:07 np0005593232 systemd[1]: libpod-cbfecefac7beddc502a79d00c0384ef86d86172dc1285261b441d79be875f8b8.scope: Deactivated successfully.
Jan 23 04:03:07 np0005593232 podman[94688]: 2026-01-23 09:03:07.055845806 +0000 UTC m=+0.716725939 container died cbfecefac7beddc502a79d00c0384ef86d86172dc1285261b441d79be875f8b8 (image=quay.io/ceph/ceph:v18, name=adoring_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 04:03:07 np0005593232 systemd[1]: var-lib-containers-storage-overlay-13cf858ccd027c6ef4bd3a84ddbdeaac5e848c4a84cd6a59b4bab63f52da1517-merged.mount: Deactivated successfully.
Jan 23 04:03:07 np0005593232 podman[94688]: 2026-01-23 09:03:07.093252129 +0000 UTC m=+0.754132262 container remove cbfecefac7beddc502a79d00c0384ef86d86172dc1285261b441d79be875f8b8 (image=quay.io/ceph/ceph:v18, name=adoring_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Jan 23 04:03:07 np0005593232 systemd[1]: libpod-conmon-cbfecefac7beddc502a79d00c0384ef86d86172dc1285261b441d79be875f8b8.scope: Deactivated successfully.
Jan 23 04:03:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:03:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:03:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:03:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:03:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:03:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:03:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 23 04:03:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/671819638' entity='client.rgw.rgw.compute-0.jgxhia' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 23 04:03:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.odtvxh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 23 04:03:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.nxrebk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 23 04:03:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 23 04:03:07 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 23 04:03:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v148: 196 pgs: 1 unknown, 195 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 04:03:07 np0005593232 ceph-mon[74423]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 23 04:03:07 np0005593232 ceph-mon[74423]: Deploying daemon mds.cephfs.compute-2.cfzfln on compute-2
Jan 23 04:03:07 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/671819638' entity='client.rgw.rgw.compute-0.jgxhia' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 23 04:03:07 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.101:0/2162450091' entity='client.rgw.rgw.compute-1.odtvxh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 23 04:03:07 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.102:0/1632836641' entity='client.rgw.rgw.compute-2.nxrebk' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 23 04:03:07 np0005593232 ceph-mon[74423]: from='client.? ' entity='client.rgw.rgw.compute-1.odtvxh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 23 04:03:07 np0005593232 ceph-mon[74423]: from='client.? ' entity='client.rgw.rgw.compute-2.nxrebk' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 23 04:03:08 np0005593232 python3[94836]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:03:08 np0005593232 podman[94837]: 2026-01-23 09:03:08.073850296 +0000 UTC m=+0.041290613 container create 29c199f98efb589bf8e5217f278b87d88ac7ae91656a4c64a2cae714347bdb64 (image=quay.io/ceph/ceph:v18, name=lucid_murdock, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:03:08 np0005593232 systemd[1]: Started libpod-conmon-29c199f98efb589bf8e5217f278b87d88ac7ae91656a4c64a2cae714347bdb64.scope.
Jan 23 04:03:08 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:03:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b3ea42bf4296aabac7f7fd5f6e91a73f1e6af1c0adc7535424e59ba07186083/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b3ea42bf4296aabac7f7fd5f6e91a73f1e6af1c0adc7535424e59ba07186083/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:08 np0005593232 podman[94837]: 2026-01-23 09:03:08.125513531 +0000 UTC m=+0.092953878 container init 29c199f98efb589bf8e5217f278b87d88ac7ae91656a4c64a2cae714347bdb64 (image=quay.io/ceph/ceph:v18, name=lucid_murdock, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:03:08 np0005593232 podman[94837]: 2026-01-23 09:03:08.131032906 +0000 UTC m=+0.098473223 container start 29c199f98efb589bf8e5217f278b87d88ac7ae91656a4c64a2cae714347bdb64 (image=quay.io/ceph/ceph:v18, name=lucid_murdock, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 23 04:03:08 np0005593232 podman[94837]: 2026-01-23 09:03:08.133938278 +0000 UTC m=+0.101378625 container attach 29c199f98efb589bf8e5217f278b87d88ac7ae91656a4c64a2cae714347bdb64 (image=quay.io/ceph/ceph:v18, name=lucid_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 23 04:03:08 np0005593232 podman[94837]: 2026-01-23 09:03:08.058317349 +0000 UTC m=+0.025757696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.djntrk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.djntrk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.djntrk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:03:08 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.djntrk on compute-0
Jan 23 04:03:08 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.djntrk on compute-0
Jan 23 04:03:08 np0005593232 ceph-mgr[74726]: [progress INFO root] Writing back 13 completed events
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/116356520' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 23 04:03:08 np0005593232 lucid_murdock[94856]: mimic
Jan 23 04:03:08 np0005593232 systemd[1]: libpod-29c199f98efb589bf8e5217f278b87d88ac7ae91656a4c64a2cae714347bdb64.scope: Deactivated successfully.
Jan 23 04:03:08 np0005593232 podman[94837]: 2026-01-23 09:03:08.704403748 +0000 UTC m=+0.671844065 container died 29c199f98efb589bf8e5217f278b87d88ac7ae91656a4c64a2cae714347bdb64 (image=quay.io/ceph/ceph:v18, name=lucid_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:03:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5b3ea42bf4296aabac7f7fd5f6e91a73f1e6af1c0adc7535424e59ba07186083-merged.mount: Deactivated successfully.
Jan 23 04:03:08 np0005593232 podman[94837]: 2026-01-23 09:03:08.74318794 +0000 UTC m=+0.710628257 container remove 29c199f98efb589bf8e5217f278b87d88ac7ae91656a4c64a2cae714347bdb64 (image=quay.io/ceph/ceph:v18, name=lucid_murdock, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 23 04:03:08 np0005593232 systemd[1]: libpod-conmon-29c199f98efb589bf8e5217f278b87d88ac7ae91656a4c64a2cae714347bdb64.scope: Deactivated successfully.
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.nxrebk' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.odtvxh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1806314718' entity='client.rgw.rgw.compute-0.jgxhia' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 23 04:03:08 np0005593232 podman[95030]: 2026-01-23 09:03:08.90762503 +0000 UTC m=+0.045123112 container create 5f6cc4494ecae44a825eb69d40590941fed80844e7b4ac260bac1a697374b8ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 04:03:08 np0005593232 systemd[1]: Started libpod-conmon-5f6cc4494ecae44a825eb69d40590941fed80844e7b4ac260bac1a697374b8ae.scope.
Jan 23 04:03:08 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:03:08 np0005593232 podman[95030]: 2026-01-23 09:03:08.972934418 +0000 UTC m=+0.110432510 container init 5f6cc4494ecae44a825eb69d40590941fed80844e7b4ac260bac1a697374b8ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:03:08 np0005593232 podman[95030]: 2026-01-23 09:03:08.978834004 +0000 UTC m=+0.116332096 container start 5f6cc4494ecae44a825eb69d40590941fed80844e7b4ac260bac1a697374b8ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_archimedes, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 04:03:08 np0005593232 nervous_archimedes[95047]: 167 167
Jan 23 04:03:08 np0005593232 podman[95030]: 2026-01-23 09:03:08.888042548 +0000 UTC m=+0.025540660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:03:08 np0005593232 podman[95030]: 2026-01-23 09:03:08.982787516 +0000 UTC m=+0.120285608 container attach 5f6cc4494ecae44a825eb69d40590941fed80844e7b4ac260bac1a697374b8ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 04:03:08 np0005593232 systemd[1]: libpod-5f6cc4494ecae44a825eb69d40590941fed80844e7b4ac260bac1a697374b8ae.scope: Deactivated successfully.
Jan 23 04:03:08 np0005593232 podman[95030]: 2026-01-23 09:03:08.983179137 +0000 UTC m=+0.120677229 container died 5f6cc4494ecae44a825eb69d40590941fed80844e7b4ac260bac1a697374b8ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_archimedes, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/671819638' entity='client.rgw.rgw.compute-0.jgxhia' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: from='client.? ' entity='client.rgw.rgw.compute-1.odtvxh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: from='client.? ' entity='client.rgw.rgw.compute-2.nxrebk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.djntrk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.djntrk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.102:0/448850748' entity='client.rgw.rgw.compute-2.nxrebk' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.101:0/249273750' entity='client.rgw.rgw.compute-1.odtvxh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: from='client.? ' entity='client.rgw.rgw.compute-2.nxrebk' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: from='client.? ' entity='client.rgw.rgw.compute-1.odtvxh' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/1806314718' entity='client.rgw.rgw.compute-0.jgxhia' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).mds e3 new map
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-23T09:02:44.728889+0000#012modified#0112026-01-23T09:02:44.729005+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.cfzfln{-1:24154} state up:standby seq 1 addr [v2:192.168.122.102:6804/817154036,v1:192.168.122.102:6805/817154036] compat {c=[1],r=[1],i=[7ff]}]
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/817154036,v1:192.168.122.102:6805/817154036] up:boot
Jan 23 04:03:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/817154036,v1:192.168.122.102:6805/817154036] as mds.0
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.cfzfln assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.cfzfln"} v 0) v1
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.cfzfln"}]: dispatch
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).mds e3 all = 0
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).mds e4 new map
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-23T09:02:44.728889+0000#012modified#0112026-01-23T09:03:08.999863+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24154}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.cfzfln{0:24154} state up:creating seq 1 addr [v2:192.168.122.102:6804/817154036,v1:192.168.122.102:6805/817154036] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Jan 23 04:03:09 np0005593232 systemd[1]: var-lib-containers-storage-overlay-0ff6161a2502684a5c5cda8bf651f727e27ee5ca0328aae6b847c5f3e0d14f93-merged.mount: Deactivated successfully.
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.cfzfln=up:creating}
Jan 23 04:03:09 np0005593232 podman[95030]: 2026-01-23 09:03:09.020857848 +0000 UTC m=+0.158355940 container remove 5f6cc4494ecae44a825eb69d40590941fed80844e7b4ac260bac1a697374b8ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 04:03:09 np0005593232 systemd[1]: libpod-conmon-5f6cc4494ecae44a825eb69d40590941fed80844e7b4ac260bac1a697374b8ae.scope: Deactivated successfully.
Jan 23 04:03:09 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 51 pg[11.0( empty local-lis/les=0/0 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [0] r=0 lpr=51 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.cfzfln is now active in filesystem cephfs as rank 0
Jan 23 04:03:09 np0005593232 systemd[1]: Reloading.
Jan 23 04:03:09 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:03:09 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:03:09 np0005593232 systemd[1]: Reloading.
Jan 23 04:03:09 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:03:09 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:03:09 np0005593232 systemd[1]: Starting Ceph mds.cephfs.compute-0.djntrk for e1533653-0a5a-584c-b34b-8689f0d32e77...
Jan 23 04:03:09 np0005593232 python3[95166]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.nxrebk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.odtvxh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1806314718' entity='client.rgw.rgw.compute-0.jgxhia' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1806314718' entity='client.rgw.rgw.compute-0.jgxhia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.nxrebk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.odtvxh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 23 04:03:09 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 52 pg[11.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=0/0 les/c/f=0/0/0 sis=51) [0] r=0 lpr=51 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:09 np0005593232 podman[95205]: 2026-01-23 09:03:09.793601393 +0000 UTC m=+0.044021781 container create 71e0d29e8a6d5dace59a0bdbfffc434996841211fe6242edc4d19c537421d9e1 (image=quay.io/ceph/ceph:v18, name=gifted_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 04:03:09 np0005593232 systemd[1]: Started libpod-conmon-71e0d29e8a6d5dace59a0bdbfffc434996841211fe6242edc4d19c537421d9e1.scope.
Jan 23 04:03:09 np0005593232 podman[95227]: 2026-01-23 09:03:09.837222801 +0000 UTC m=+0.058128868 container create ad95761c7b54038290776a3d2e82e50cfbcf9229462023a6a0a5ac37c27f2c70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mds-cephfs-compute-0-djntrk, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 04:03:09 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:03:09 np0005593232 podman[95205]: 2026-01-23 09:03:09.776952294 +0000 UTC m=+0.027372712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:03:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8f841e0a5577818c7bc72251ed06df781cb2de53690ad767aacfa8cb8090d61/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8f841e0a5577818c7bc72251ed06df781cb2de53690ad767aacfa8cb8090d61/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b389e72bf49270ac907b2c2debc137a28613cc784d3696c5501622b97d386ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b389e72bf49270ac907b2c2debc137a28613cc784d3696c5501622b97d386ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b389e72bf49270ac907b2c2debc137a28613cc784d3696c5501622b97d386ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b389e72bf49270ac907b2c2debc137a28613cc784d3696c5501622b97d386ba/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.djntrk supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:09 np0005593232 podman[95205]: 2026-01-23 09:03:09.891235331 +0000 UTC m=+0.141655749 container init 71e0d29e8a6d5dace59a0bdbfffc434996841211fe6242edc4d19c537421d9e1 (image=quay.io/ceph/ceph:v18, name=gifted_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 23 04:03:09 np0005593232 podman[95205]: 2026-01-23 09:03:09.899383081 +0000 UTC m=+0.149803469 container start 71e0d29e8a6d5dace59a0bdbfffc434996841211fe6242edc4d19c537421d9e1 (image=quay.io/ceph/ceph:v18, name=gifted_dubinsky, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:03:09 np0005593232 podman[95227]: 2026-01-23 09:03:09.900509812 +0000 UTC m=+0.121415909 container init ad95761c7b54038290776a3d2e82e50cfbcf9229462023a6a0a5ac37c27f2c70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mds-cephfs-compute-0-djntrk, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 04:03:09 np0005593232 podman[95205]: 2026-01-23 09:03:09.90324979 +0000 UTC m=+0.153670178 container attach 71e0d29e8a6d5dace59a0bdbfffc434996841211fe6242edc4d19c537421d9e1 (image=quay.io/ceph/ceph:v18, name=gifted_dubinsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 23 04:03:09 np0005593232 podman[95227]: 2026-01-23 09:03:09.90789251 +0000 UTC m=+0.128798577 container start ad95761c7b54038290776a3d2e82e50cfbcf9229462023a6a0a5ac37c27f2c70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mds-cephfs-compute-0-djntrk, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 04:03:09 np0005593232 bash[95227]: ad95761c7b54038290776a3d2e82e50cfbcf9229462023a6a0a5ac37c27f2c70
Jan 23 04:03:09 np0005593232 podman[95227]: 2026-01-23 09:03:09.819555343 +0000 UTC m=+0.040461430 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:03:09 np0005593232 systemd[1]: Started Ceph mds.cephfs.compute-0.djntrk for e1533653-0a5a-584c-b34b-8689f0d32e77.
Jan 23 04:03:09 np0005593232 ceph-mds[95253]: set uid:gid to 167:167 (ceph:ceph)
Jan 23 04:03:09 np0005593232 ceph-mds[95253]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Jan 23 04:03:09 np0005593232 ceph-mds[95253]: main not setting numa affinity
Jan 23 04:03:09 np0005593232 ceph-mds[95253]: pidfile_write: ignore empty --pid-file
Jan 23 04:03:09 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mds-cephfs-compute-0-djntrk[95248]: starting mds.cephfs.compute-0.djntrk at 
Jan 23 04:03:09 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Updating MDS map to version 4 from mon.0
Jan 23 04:03:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v151: 197 pgs: 2 unknown, 195 active+clean; 450 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 3.0 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 23 04:03:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.elkrlx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.elkrlx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: Deploying daemon mds.cephfs.compute-0.djntrk on compute-0
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: daemon mds.cephfs.compute-2.cfzfln assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: Cluster is now healthy
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: daemon mds.cephfs.compute-2.cfzfln is now active in filesystem cephfs as rank 0
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: from='client.? ' entity='client.rgw.rgw.compute-2.nxrebk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: from='client.? ' entity='client.rgw.rgw.compute-1.odtvxh' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/1806314718' entity='client.rgw.rgw.compute-0.jgxhia' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/1806314718' entity='client.rgw.rgw.compute-0.jgxhia' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.102:0/448850748' entity='client.rgw.rgw.compute-2.nxrebk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: from='client.? ' entity='client.rgw.rgw.compute-2.nxrebk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.101:0/249273750' entity='client.rgw.rgw.compute-1.odtvxh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: from='client.? ' entity='client.rgw.rgw.compute-1.odtvxh' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).mds e5 new map
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-23T09:02:44.728889+0000#012modified#0112026-01-23T09:03:10.006927+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24154}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.cfzfln{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/817154036,v1:192.168.122.102:6805/817154036] compat {c=[1],r=[1],i=[7ff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.djntrk{-1:14418} state up:standby seq 1 addr [v2:192.168.122.100:6806/376811981,v1:192.168.122.100:6807/376811981] compat {c=[1],r=[1],i=[7ff]}]
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.elkrlx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 23 04:03:10 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Updating MDS map to version 5 from mon.0
Jan 23 04:03:10 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Monitors have assigned me to become a standby.
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/817154036,v1:192.168.122.102:6805/817154036] up:active
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/376811981,v1:192.168.122.100:6807/376811981] up:boot
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.cfzfln=up:active} 1 up:standby
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.djntrk"} v 0) v1
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.djntrk"}]: dispatch
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).mds e5 all = 0
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:03:10 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.elkrlx on compute-1
Jan 23 04:03:10 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.elkrlx on compute-1
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).mds e6 new map
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).mds e6 print_map
e6
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name  cephfs
epoch  5
flags  12 joinable allow_snaps allow_multimds_snaps
created  2026-01-23T09:02:44.728889+0000
modified  2026-01-23T09:03:10.006927+0000
tableserver  0
root  0
session_timeout  60
session_autoclose  300
max_file_size  1099511627776
max_xattr_size  65536
required_client_features  {}
last_failure  0
last_failure_osd_epoch  0
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds  1
in  0
up  {0=24154}
failed
damaged
stopped
data_pools  [7]
metadata_pool  6
inline_data  disabled
balancer
bal_rank_mask  -1
standby_count_wanted  1
[mds.cephfs.compute-2.cfzfln{0:24154} state up:active seq 2 addr [v2:192.168.122.102:6804/817154036,v1:192.168.122.102:6805/817154036] compat {c=[1],r=[1],i=[7ff]}]


Standby daemons:

[mds.cephfs.compute-0.djntrk{-1:14418} state up:standby seq 1 addr [v2:192.168.122.100:6806/376811981,v1:192.168.122.100:6807/376811981] compat {c=[1],r=[1],i=[7ff]}]
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.cfzfln=up:active} 1 up:standby
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2556096981' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 23 04:03:10 np0005593232 gifted_dubinsky[95242]: 
Jan 23 04:03:10 np0005593232 gifted_dubinsky[95242]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":2},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":11}}
Jan 23 04:03:10 np0005593232 systemd[1]: libpod-71e0d29e8a6d5dace59a0bdbfffc434996841211fe6242edc4d19c537421d9e1.scope: Deactivated successfully.
Jan 23 04:03:10 np0005593232 podman[95205]: 2026-01-23 09:03:10.531355003 +0000 UTC m=+0.781775391 container died 71e0d29e8a6d5dace59a0bdbfffc434996841211fe6242edc4d19c537421d9e1 (image=quay.io/ceph/ceph:v18, name=gifted_dubinsky, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 04:03:10 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f8f841e0a5577818c7bc72251ed06df781cb2de53690ad767aacfa8cb8090d61-merged.mount: Deactivated successfully.
Jan 23 04:03:10 np0005593232 podman[95205]: 2026-01-23 09:03:10.573281643 +0000 UTC m=+0.823702031 container remove 71e0d29e8a6d5dace59a0bdbfffc434996841211fe6242edc4d19c537421d9e1 (image=quay.io/ceph/ceph:v18, name=gifted_dubinsky, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:03:10 np0005593232 systemd[1]: libpod-conmon-71e0d29e8a6d5dace59a0bdbfffc434996841211fe6242edc4d19c537421d9e1.scope: Deactivated successfully.
Jan 23 04:03:10 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 23 04:03:10 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1806314718' entity='client.rgw.rgw.compute-0.jgxhia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.nxrebk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.odtvxh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 23 04:03:10 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 23 04:03:11 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:11 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.elkrlx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 23 04:03:11 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.elkrlx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 23 04:03:11 np0005593232 ceph-mon[74423]: Deploying daemon mds.cephfs.compute-1.elkrlx on compute-1
Jan 23 04:03:11 np0005593232 ceph-mon[74423]: from='client.? 192.168.122.100:0/1806314718' entity='client.rgw.rgw.compute-0.jgxhia' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 23 04:03:11 np0005593232 ceph-mon[74423]: from='client.? ' entity='client.rgw.rgw.compute-2.nxrebk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 23 04:03:11 np0005593232 ceph-mon[74423]: from='client.? ' entity='client.rgw.rgw.compute-1.odtvxh' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 23 04:03:11 np0005593232 radosgw[94687]: LDAP not started since no server URIs were provided in the configuration.
Jan 23 04:03:11 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-rgw-rgw-compute-0-jgxhia[94683]: 2026-01-23T09:03:11.025+0000 7f0b2ecbc940 -1 LDAP not started since no server URIs were provided in the configuration.
Jan 23 04:03:11 np0005593232 radosgw[94687]: framework: beast
Jan 23 04:03:11 np0005593232 radosgw[94687]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 23 04:03:11 np0005593232 radosgw[94687]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 23 04:03:11 np0005593232 radosgw[94687]: starting handler: beast
Jan 23 04:03:11 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 23 04:03:11 np0005593232 radosgw[94687]: set uid:gid to 167:167 (ceph:ceph)
Jan 23 04:03:11 np0005593232 radosgw[94687]: mgrc service_daemon_register rgw.14406 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.jgxhia,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=3e406311-b9dd-4a98-8793-19b1bcf2f2db,zone_name=default,zonegroup_id=b6190aad-4d81-46cc-a15a-858fefbf7de5,zonegroup_name=default}
Jan 23 04:03:11 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 23 04:03:11 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 23 04:03:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v153: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 277 KiB/s rd, 11 KiB/s wr, 503 op/s
Jan 23 04:03:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:03:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:03:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 23 04:03:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:12 np0005593232 ceph-mgr[74726]: [progress INFO root] complete: finished ev 37c9c624-6e0a-4002-b8ad-860e2d1a7a9b (Updating mds.cephfs deployment (+3 -> 3))
Jan 23 04:03:12 np0005593232 ceph-mgr[74726]: [progress INFO root] Completed event 37c9c624-6e0a-4002-b8ad-860e2d1a7a9b (Updating mds.cephfs deployment (+3 -> 3)) in 6 seconds
Jan 23 04:03:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Jan 23 04:03:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Jan 23 04:03:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:12 np0005593232 ceph-mgr[74726]: [progress INFO root] update: starting ev 1c0a8bcf-7a80-4fb9-85d6-5a81df7f4ff3 (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 23 04:03:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0) v1
Jan 23 04:03:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:12 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.iyrury on compute-0
Jan 23 04:03:12 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.iyrury on compute-0
Jan 23 04:03:12 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 23 04:03:12 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 23 04:03:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).mds e7 new map
Jan 23 04:03:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).mds e7 print_map
e7
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name  cephfs
epoch  7
flags  12 joinable allow_snaps allow_multimds_snaps
created  2026-01-23T09:02:44.728889+0000
modified  2026-01-23T09:03:13.047531+0000
tableserver  0
root  0
session_timeout  60
session_autoclose  300
max_file_size  1099511627776
max_xattr_size  65536
required_client_features  {}
last_failure  0
last_failure_osd_epoch  0
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds  1
in  0
up  {0=24154}
failed
damaged
stopped
data_pools  [7]
metadata_pool  6
inline_data  disabled
balancer
bal_rank_mask  -1
standby_count_wanted  1
[mds.cephfs.compute-2.cfzfln{0:24154} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/817154036,v1:192.168.122.102:6805/817154036] compat {c=[1],r=[1],i=[7ff]}]


Standby daemons:

[mds.cephfs.compute-0.djntrk{-1:14418} state up:standby seq 1 addr [v2:192.168.122.100:6806/376811981,v1:192.168.122.100:6807/376811981] compat {c=[1],r=[1],i=[7ff]}]
[mds.cephfs.compute-1.elkrlx{-1:24167} state up:standby seq 1 addr [v2:192.168.122.101:6804/4162024387,v1:192.168.122.101:6805/4162024387] compat {c=[1],r=[1],i=[7ff]}]
Jan 23 04:03:13 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/4162024387,v1:192.168.122.101:6805/4162024387] up:boot
Jan 23 04:03:13 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/817154036,v1:192.168.122.102:6805/817154036] up:active
Jan 23 04:03:13 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.cfzfln=up:active} 2 up:standby
Jan 23 04:03:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.elkrlx"} v 0) v1
Jan 23 04:03:13 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.elkrlx"}]: dispatch
Jan 23 04:03:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).mds e7 all = 0
Jan 23 04:03:13 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:13 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:13 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:13 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:13 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:13 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:03:13 np0005593232 ceph-mgr[74726]: [progress INFO root] Writing back 14 completed events
Jan 23 04:03:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 23 04:03:13 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:13 np0005593232 ceph-mgr[74726]: [progress INFO root] Completed event b3afd72c-094e-4778-a831-36fc80bbd787 (Global Recovery Event) in 10 seconds
Jan 23 04:03:13 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.16 deep-scrub starts
Jan 23 04:03:13 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.16 deep-scrub ok
Jan 23 04:03:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v154: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 194 KiB/s rd, 8.0 KiB/s wr, 353 op/s
Jan 23 04:03:14 np0005593232 ceph-mon[74423]: Deploying daemon haproxy.rgw.default.compute-0.iyrury on compute-0
Jan 23 04:03:14 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:14 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 23 04:03:14 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 23 04:03:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).mds e8 new map
Jan 23 04:03:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).mds e8 print_map
e8
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name  cephfs
epoch  7
flags  12 joinable allow_snaps allow_multimds_snaps
created  2026-01-23T09:02:44.728889+0000
modified  2026-01-23T09:03:13.047531+0000
tableserver  0
root  0
session_timeout  60
session_autoclose  300
max_file_size  1099511627776
max_xattr_size  65536
required_client_features  {}
last_failure  0
last_failure_osd_epoch  0
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds  1
in  0
up  {0=24154}
failed
damaged
stopped
data_pools  [7]
metadata_pool  6
inline_data  disabled
balancer
bal_rank_mask  -1
standby_count_wanted  1
[mds.cephfs.compute-2.cfzfln{0:24154} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/817154036,v1:192.168.122.102:6805/817154036] compat {c=[1],r=[1],i=[7ff]}]


Standby daemons:

[mds.cephfs.compute-0.djntrk{-1:14418} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/376811981,v1:192.168.122.100:6807/376811981] compat {c=[1],r=[1],i=[7ff]}]
[mds.cephfs.compute-1.elkrlx{-1:24167} state up:standby seq 1 addr [v2:192.168.122.101:6804/4162024387,v1:192.168.122.101:6805/4162024387] compat {c=[1],r=[1],i=[7ff]}]
Jan 23 04:03:14 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Updating MDS map to version 8 from mon.0
Jan 23 04:03:14 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/376811981,v1:192.168.122.100:6807/376811981] up:standby
Jan 23 04:03:14 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.cfzfln=up:active} 2 up:standby
Jan 23 04:03:15 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 23 04:03:15 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 23 04:03:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v155: 197 pgs: 197 active+clean; 456 KiB data, 81 MiB used, 21 GiB / 21 GiB avail; 162 KiB/s rd, 6.7 KiB/s wr, 294 op/s
Jan 23 04:03:16 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 23 04:03:16 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 23 04:03:16 np0005593232 podman[95993]: 2026-01-23 09:03:16.879668761 +0000 UTC m=+3.858661895 container create d7394309fce1b0e4c87761e3bb395aedeeb41eee24d3c90565b825bc10437146 (image=quay.io/ceph/haproxy:2.3, name=eloquent_ellis)
Jan 23 04:03:16 np0005593232 podman[95993]: 2026-01-23 09:03:16.864328399 +0000 UTC m=+3.843321553 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 23 04:03:16 np0005593232 systemd[1]: Started libpod-conmon-d7394309fce1b0e4c87761e3bb395aedeeb41eee24d3c90565b825bc10437146.scope.
Jan 23 04:03:16 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:03:16 np0005593232 podman[95993]: 2026-01-23 09:03:16.968835421 +0000 UTC m=+3.947828645 container init d7394309fce1b0e4c87761e3bb395aedeeb41eee24d3c90565b825bc10437146 (image=quay.io/ceph/haproxy:2.3, name=eloquent_ellis)
Jan 23 04:03:16 np0005593232 podman[95993]: 2026-01-23 09:03:16.976345412 +0000 UTC m=+3.955338546 container start d7394309fce1b0e4c87761e3bb395aedeeb41eee24d3c90565b825bc10437146 (image=quay.io/ceph/haproxy:2.3, name=eloquent_ellis)
Jan 23 04:03:16 np0005593232 podman[95993]: 2026-01-23 09:03:16.980127459 +0000 UTC m=+3.959120633 container attach d7394309fce1b0e4c87761e3bb395aedeeb41eee24d3c90565b825bc10437146 (image=quay.io/ceph/haproxy:2.3, name=eloquent_ellis)
Jan 23 04:03:16 np0005593232 eloquent_ellis[96111]: 0 0
Jan 23 04:03:16 np0005593232 systemd[1]: libpod-d7394309fce1b0e4c87761e3bb395aedeeb41eee24d3c90565b825bc10437146.scope: Deactivated successfully.
Jan 23 04:03:16 np0005593232 podman[95993]: 2026-01-23 09:03:16.983229826 +0000 UTC m=+3.962222960 container died d7394309fce1b0e4c87761e3bb395aedeeb41eee24d3c90565b825bc10437146 (image=quay.io/ceph/haproxy:2.3, name=eloquent_ellis)
Jan 23 04:03:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay-aec7c285f56f63d6110c82cd338631b777507b02feabac664041f3b967a3432a-merged.mount: Deactivated successfully.
Jan 23 04:03:17 np0005593232 podman[95993]: 2026-01-23 09:03:17.019074705 +0000 UTC m=+3.998067839 container remove d7394309fce1b0e4c87761e3bb395aedeeb41eee24d3c90565b825bc10437146 (image=quay.io/ceph/haproxy:2.3, name=eloquent_ellis)
Jan 23 04:03:17 np0005593232 systemd[1]: libpod-conmon-d7394309fce1b0e4c87761e3bb395aedeeb41eee24d3c90565b825bc10437146.scope: Deactivated successfully.
Jan 23 04:03:17 np0005593232 systemd[1]: Reloading.
Jan 23 04:03:17 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:03:17 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:03:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).mds e9 new map
Jan 23 04:03:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).mds e9 print_map
e9
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name  cephfs
epoch  7
flags  12 joinable allow_snaps allow_multimds_snaps
created  2026-01-23T09:02:44.728889+0000
modified  2026-01-23T09:03:13.047531+0000
tableserver  0
root  0
session_timeout  60
session_autoclose  300
max_file_size  1099511627776
max_xattr_size  65536
required_client_features  {}
last_failure  0
last_failure_osd_epoch  0
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds  1
in  0
up  {0=24154}
failed
damaged
stopped
data_pools  [7]
metadata_pool  6
inline_data  disabled
balancer
bal_rank_mask  -1
standby_count_wanted  1
[mds.cephfs.compute-2.cfzfln{0:24154} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/817154036,v1:192.168.122.102:6805/817154036] compat {c=[1],r=[1],i=[7ff]}]


Standby daemons:

[mds.cephfs.compute-0.djntrk{-1:14418} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/376811981,v1:192.168.122.100:6807/376811981] compat {c=[1],r=[1],i=[7ff]}]
[mds.cephfs.compute-1.elkrlx{-1:24167} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/4162024387,v1:192.168.122.101:6805/4162024387] compat {c=[1],r=[1],i=[7ff]}]
Jan 23 04:03:17 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/4162024387,v1:192.168.122.101:6805/4162024387] up:standby
Jan 23 04:03:17 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.cfzfln=up:active} 2 up:standby
Jan 23 04:03:17 np0005593232 systemd[1]: Reloading.
Jan 23 04:03:17 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:03:17 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:03:17 np0005593232 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.iyrury for e1533653-0a5a-584c-b34b-8689f0d32e77...
Jan 23 04:03:17 np0005593232 podman[96254]: 2026-01-23 09:03:17.851507981 +0000 UTC m=+0.036424496 container create b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 04:03:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965d3df5c3ea1ae1be39e8c6739ca2ca43110d881b86bdf197b030d4fb2e528c/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:17 np0005593232 podman[96254]: 2026-01-23 09:03:17.908595708 +0000 UTC m=+0.093512253 container init b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 04:03:17 np0005593232 podman[96254]: 2026-01-23 09:03:17.913366013 +0000 UTC m=+0.098282528 container start b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 04:03:17 np0005593232 bash[96254]: b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d
Jan 23 04:03:17 np0005593232 podman[96254]: 2026-01-23 09:03:17.834999726 +0000 UTC m=+0.019916271 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 23 04:03:17 np0005593232 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.iyrury for e1533653-0a5a-584c-b34b-8689f0d32e77.
Jan 23 04:03:17 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury[96269]: [NOTICE] 022/090317 (2) : New worker #1 (4) forked
Jan 23 04:03:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000056s ======
Jan 23 04:03:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:03:17.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Jan 23 04:03:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:03:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v156: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 204 KiB/s rd, 5.9 KiB/s wr, 372 op/s
Jan 23 04:03:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:03:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 23 04:03:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:17 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.xmknsp on compute-2
Jan 23 04:03:17 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.xmknsp on compute-2
Jan 23 04:03:18 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:18 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:18 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:18 np0005593232 ceph-mon[74423]: Deploying daemon haproxy.rgw.default.compute-2.xmknsp on compute-2
Jan 23 04:03:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:03:18 np0005593232 ceph-mgr[74726]: [progress INFO root] Writing back 15 completed events
Jan 23 04:03:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 23 04:03:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:19 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:03:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:03:19.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:03:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v157: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 167 KiB/s rd, 4.8 KiB/s wr, 305 op/s
Jan 23 04:03:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:03:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:03:21.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:03:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v158: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 150 KiB/s rd, 4.3 KiB/s wr, 273 op/s
Jan 23 04:03:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:03:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:03:22.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:03:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:03:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:03:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 23 04:03:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0) v1
Jan 23 04:03:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:22 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 23 04:03:22 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 23 04:03:22 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 23 04:03:22 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 23 04:03:22 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.tkmlem on compute-2
Jan 23 04:03:22 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.tkmlem on compute-2
Jan 23 04:03:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:03:23 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 23 04:03:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:03:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:03:23.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:03:23 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 23 04:03:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v159: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 0 B/s wr, 78 op/s
Jan 23 04:03:24 np0005593232 ceph-mon[74423]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 23 04:03:24 np0005593232 ceph-mon[74423]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 23 04:03:24 np0005593232 ceph-mon[74423]: Deploying daemon keepalived.rgw.default.compute-2.tkmlem on compute-2
Jan 23 04:03:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:03:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:03:24.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:03:25 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 23 04:03:25 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 23 04:03:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:03:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:03:25.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:03:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v160: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 0 B/s wr, 77 op/s
Jan 23 04:03:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:03:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:03:26.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:03:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:03:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:03:27.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:03:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v161: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 0 B/s wr, 77 op/s
Jan 23 04:03:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:03:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:03:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:03:28.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:03:28 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 23 04:03:28 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 23 04:03:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:03:29 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:03:29 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.c deep-scrub starts
Jan 23 04:03:29 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.c deep-scrub ok
Jan 23 04:03:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:03:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:03:29.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:03:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v162: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:03:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 23 04:03:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:03:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:03:30.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:03:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:30 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 23 04:03:30 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 23 04:03:30 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 23 04:03:30 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 23 04:03:30 np0005593232 ceph-mgr[74726]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.qsixev on compute-0
Jan 23 04:03:30 np0005593232 ceph-mgr[74726]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.qsixev on compute-0
Jan 23 04:03:30 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:30 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:30 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:30 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 23 04:03:30 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 23 04:03:31 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Jan 23 04:03:31 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Jan 23 04:03:31 np0005593232 ceph-mon[74423]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 23 04:03:31 np0005593232 ceph-mon[74423]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 23 04:03:31 np0005593232 ceph-mon[74423]: Deploying daemon keepalived.rgw.default.compute-0.qsixev on compute-0
Jan 23 04:03:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:03:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:03:31.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:03:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v163: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:03:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:03:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:03:32.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:03:32 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.11 deep-scrub starts
Jan 23 04:03:32 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.11 deep-scrub ok
Jan 23 04:03:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:03:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:03:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:03:33.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:03:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v164: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:03:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:03:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:03:34.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:03:35 np0005593232 podman[96432]: 2026-01-23 09:03:35.551050528 +0000 UTC m=+4.524115950 container create 70db62c9f9824b31ab9bd57c84087820ff465c278881eefbb930604fe2aad106 (image=quay.io/ceph/keepalived:2.2.4, name=crazy_liskov, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, io.buildah.version=1.28.2, vendor=Red Hat, Inc., description=keepalived for Ceph, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, release=1793, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 23 04:03:35 np0005593232 podman[96432]: 2026-01-23 09:03:35.532944258 +0000 UTC m=+4.506009710 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 23 04:03:35 np0005593232 systemd[1]: Started libpod-conmon-70db62c9f9824b31ab9bd57c84087820ff465c278881eefbb930604fe2aad106.scope.
Jan 23 04:03:35 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:03:35 np0005593232 podman[96432]: 2026-01-23 09:03:35.630316459 +0000 UTC m=+4.603381902 container init 70db62c9f9824b31ab9bd57c84087820ff465c278881eefbb930604fe2aad106 (image=quay.io/ceph/keepalived:2.2.4, name=crazy_liskov, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, build-date=2023-02-22T09:23:20, vcs-type=git, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 23 04:03:35 np0005593232 podman[96432]: 2026-01-23 09:03:35.63921943 +0000 UTC m=+4.612284872 container start 70db62c9f9824b31ab9bd57c84087820ff465c278881eefbb930604fe2aad106 (image=quay.io/ceph/keepalived:2.2.4, name=crazy_liskov, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, name=keepalived, io.openshift.tags=Ceph keepalived, version=2.2.4, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, description=keepalived for Ceph, distribution-scope=public)
Jan 23 04:03:35 np0005593232 podman[96432]: 2026-01-23 09:03:35.642892854 +0000 UTC m=+4.615958276 container attach 70db62c9f9824b31ab9bd57c84087820ff465c278881eefbb930604fe2aad106 (image=quay.io/ceph/keepalived:2.2.4, name=crazy_liskov, description=keepalived for Ceph, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, vendor=Red Hat, Inc., distribution-scope=public, name=keepalived, version=2.2.4, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git)
Jan 23 04:03:35 np0005593232 crazy_liskov[96526]: 0 0
Jan 23 04:03:35 np0005593232 systemd[1]: libpod-70db62c9f9824b31ab9bd57c84087820ff465c278881eefbb930604fe2aad106.scope: Deactivated successfully.
Jan 23 04:03:35 np0005593232 podman[96432]: 2026-01-23 09:03:35.645306992 +0000 UTC m=+4.618372434 container died 70db62c9f9824b31ab9bd57c84087820ff465c278881eefbb930604fe2aad106 (image=quay.io/ceph/keepalived:2.2.4, name=crazy_liskov, name=keepalived, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., description=keepalived for Ceph, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 04:03:35 np0005593232 systemd[1]: var-lib-containers-storage-overlay-30bb8515d76827238c45b1072781ad691f5ab222f0793175b69e831596b3e86b-merged.mount: Deactivated successfully.
Jan 23 04:03:35 np0005593232 podman[96432]: 2026-01-23 09:03:35.685893144 +0000 UTC m=+4.658958566 container remove 70db62c9f9824b31ab9bd57c84087820ff465c278881eefbb930604fe2aad106 (image=quay.io/ceph/keepalived:2.2.4, name=crazy_liskov, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, name=keepalived, release=1793, version=2.2.4, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, architecture=x86_64, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 23 04:03:35 np0005593232 systemd[1]: libpod-conmon-70db62c9f9824b31ab9bd57c84087820ff465c278881eefbb930604fe2aad106.scope: Deactivated successfully.
Jan 23 04:03:35 np0005593232 systemd[1]: Reloading.
Jan 23 04:03:35 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:03:35 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:03:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:03:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:03:35.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:03:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v165: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:03:36 np0005593232 systemd[1]: Reloading.
Jan 23 04:03:36 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:03:36 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:03:36 np0005593232 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.qsixev for e1533653-0a5a-584c-b34b-8689f0d32e77...
Jan 23 04:03:36 np0005593232 podman[96672]: 2026-01-23 09:03:36.569934003 +0000 UTC m=+0.042257111 container create 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, release=1793, com.redhat.component=keepalived-container, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, name=keepalived, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git)
Jan 23 04:03:36 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62c26e7d50744534dfb25197d02e6f3544b23efeb1cff87666333cd7740d1d3d/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:36 np0005593232 podman[96672]: 2026-01-23 09:03:36.625530318 +0000 UTC m=+0.097853426 container init 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, name=keepalived, release=1793, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, description=keepalived for Ceph)
Jan 23 04:03:36 np0005593232 podman[96672]: 2026-01-23 09:03:36.631414604 +0000 UTC m=+0.103737712 container start 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, version=2.2.4, description=keepalived for Ceph, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, com.redhat.component=keepalived-container, architecture=x86_64, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 23 04:03:36 np0005593232 bash[96672]: 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027
Jan 23 04:03:36 np0005593232 podman[96672]: 2026-01-23 09:03:36.554339764 +0000 UTC m=+0.026662872 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 23 04:03:36 np0005593232 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.qsixev for e1533653-0a5a-584c-b34b-8689f0d32e77.
Jan 23 04:03:36 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev[96687]: Fri Jan 23 09:03:36 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 23 04:03:36 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev[96687]: Fri Jan 23 09:03:36 2026: Running on Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 (built for Linux 5.14.0)
Jan 23 04:03:36 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev[96687]: Fri Jan 23 09:03:36 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 23 04:03:36 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev[96687]: Fri Jan 23 09:03:36 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 23 04:03:36 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev[96687]: Fri Jan 23 09:03:36 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 23 04:03:36 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev[96687]: Fri Jan 23 09:03:36 2026: Starting VRRP child process, pid=4
Jan 23 04:03:36 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev[96687]: Fri Jan 23 09:03:36 2026: Startup complete
Jan 23 04:03:36 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev[96687]: Fri Jan 23 09:03:36 2026: (VI_0) Entering BACKUP STATE (init)
Jan 23 04:03:36 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev[96687]: Fri Jan 23 09:03:36 2026: VRRP_Script(check_backend) succeeded
Jan 23 04:03:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:03:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:03:36.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:03:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:03:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:03:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 23 04:03:36 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Jan 23 04:03:36 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Jan 23 04:03:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:36 np0005593232 ceph-mgr[74726]: [progress INFO root] complete: finished ev 1c0a8bcf-7a80-4fb9-85d6-5a81df7f4ff3 (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 23 04:03:36 np0005593232 ceph-mgr[74726]: [progress INFO root] Completed event 1c0a8bcf-7a80-4fb9-85d6-5a81df7f4ff3 (Updating ingress.rgw.default deployment (+4 -> 4)) in 24 seconds
Jan 23 04:03:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0) v1
Jan 23 04:03:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:03:36
Jan 23 04:03:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:03:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:03:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'backups', 'images', 'vms', '.mgr']
Jan 23 04:03:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:03:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:03:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:03:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:03:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:03:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:03:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:03:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:03:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:03:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:03:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:03:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:03:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:03:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:03:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:03:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:03:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:03:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:03:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:03:37.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:03:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v166: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:03:38 np0005593232 podman[96968]: 2026-01-23 09:03:38.150616614 +0000 UTC m=+0.055678839 container exec 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:03:38 np0005593232 podman[96968]: 2026-01-23 09:03:38.28225941 +0000 UTC m=+0.187321635 container exec_died 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:03:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:03:38 np0005593232 ceph-mgr[74726]: [progress INFO root] Writing back 16 completed events
Jan 23 04:03:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 23 04:03:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:03:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:03:38.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:03:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:38 np0005593232 podman[97121]: 2026-01-23 09:03:38.856059354 +0000 UTC m=+0.052459988 container exec b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 04:03:38 np0005593232 podman[97121]: 2026-01-23 09:03:38.870346096 +0000 UTC m=+0.066746710 container exec_died b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 04:03:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:03:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:03:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:39 np0005593232 podman[97184]: 2026-01-23 09:03:39.199622657 +0000 UTC m=+0.062346127 container exec 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, distribution-scope=public, io.openshift.expose-services=, name=keepalived, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived)
Jan 23 04:03:39 np0005593232 podman[97184]: 2026-01-23 09:03:39.214557037 +0000 UTC m=+0.077280507 container exec_died 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=keepalived, architecture=x86_64, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, vendor=Red Hat, Inc.)
Jan 23 04:03:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:03:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:03:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:03:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:03:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:03:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:03:39.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:03:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v167: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:03:40 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev[96687]: Fri Jan 23 09:03:40 2026: (VI_0) Entering MASTER STATE
Jan 23 04:03:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:03:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:03:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:03:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:03:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:03:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:40 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev bb363f60-ffbb-477e-8bb2-ddf6027cceea does not exist
Jan 23 04:03:40 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 575c411d-c222-4ddc-b828-f20a1cb0028d does not exist
Jan 23 04:03:40 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev cec455db-53ee-4d40-bd02-552285d6b476 does not exist
Jan 23 04:03:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:03:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:03:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:03:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:03:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:03:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:03:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:03:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:03:40.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:03:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:03:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:03:41 np0005593232 podman[97487]: 2026-01-23 09:03:41.234856645 +0000 UTC m=+0.056298766 container create c079bca0c46c0cbe25cd315563517dd5806048a0723b1c9601d1e18bcaf3b792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 23 04:03:41 np0005593232 systemd[1]: Started libpod-conmon-c079bca0c46c0cbe25cd315563517dd5806048a0723b1c9601d1e18bcaf3b792.scope.
Jan 23 04:03:41 np0005593232 podman[97487]: 2026-01-23 09:03:41.215150581 +0000 UTC m=+0.036592722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:03:41 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:03:41 np0005593232 podman[97487]: 2026-01-23 09:03:41.334147421 +0000 UTC m=+0.155589622 container init c079bca0c46c0cbe25cd315563517dd5806048a0723b1c9601d1e18bcaf3b792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shtern, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:03:41 np0005593232 podman[97487]: 2026-01-23 09:03:41.340822609 +0000 UTC m=+0.162264730 container start c079bca0c46c0cbe25cd315563517dd5806048a0723b1c9601d1e18bcaf3b792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 04:03:41 np0005593232 podman[97487]: 2026-01-23 09:03:41.344280256 +0000 UTC m=+0.165722417 container attach c079bca0c46c0cbe25cd315563517dd5806048a0723b1c9601d1e18bcaf3b792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:03:41 np0005593232 zealous_shtern[97503]: 167 167
Jan 23 04:03:41 np0005593232 systemd[1]: libpod-c079bca0c46c0cbe25cd315563517dd5806048a0723b1c9601d1e18bcaf3b792.scope: Deactivated successfully.
Jan 23 04:03:41 np0005593232 podman[97487]: 2026-01-23 09:03:41.346295993 +0000 UTC m=+0.167738114 container died c079bca0c46c0cbe25cd315563517dd5806048a0723b1c9601d1e18bcaf3b792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shtern, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 23 04:03:41 np0005593232 systemd[1]: var-lib-containers-storage-overlay-15c713b2e5d2813cdaf6d91cdf1d4d40ef51e6eb4a3956915debe6b07c8be3e2-merged.mount: Deactivated successfully.
Jan 23 04:03:41 np0005593232 podman[97487]: 2026-01-23 09:03:41.415645865 +0000 UTC m=+0.237087986 container remove c079bca0c46c0cbe25cd315563517dd5806048a0723b1c9601d1e18bcaf3b792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 04:03:41 np0005593232 systemd[1]: libpod-conmon-c079bca0c46c0cbe25cd315563517dd5806048a0723b1c9601d1e18bcaf3b792.scope: Deactivated successfully.
Jan 23 04:03:41 np0005593232 podman[97529]: 2026-01-23 09:03:41.577629186 +0000 UTC m=+0.043434134 container create ee6222cba72e151cf5e936e991598a3b4dea1549b88abbc1e05bb2aae2c3aff9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:03:41 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Jan 23 04:03:41 np0005593232 systemd[1]: Started libpod-conmon-ee6222cba72e151cf5e936e991598a3b4dea1549b88abbc1e05bb2aae2c3aff9.scope.
Jan 23 04:03:41 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Jan 23 04:03:41 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:03:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c45c5abf42b623a0f118376d7a4e91b5a2adcd50e020142d0b0d831bb1f5d095/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c45c5abf42b623a0f118376d7a4e91b5a2adcd50e020142d0b0d831bb1f5d095/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c45c5abf42b623a0f118376d7a4e91b5a2adcd50e020142d0b0d831bb1f5d095/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c45c5abf42b623a0f118376d7a4e91b5a2adcd50e020142d0b0d831bb1f5d095/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c45c5abf42b623a0f118376d7a4e91b5a2adcd50e020142d0b0d831bb1f5d095/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:41 np0005593232 podman[97529]: 2026-01-23 09:03:41.558282291 +0000 UTC m=+0.024087239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:03:41 np0005593232 podman[97529]: 2026-01-23 09:03:41.656115684 +0000 UTC m=+0.121920622 container init ee6222cba72e151cf5e936e991598a3b4dea1549b88abbc1e05bb2aae2c3aff9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_roentgen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:03:41 np0005593232 podman[97529]: 2026-01-23 09:03:41.661888947 +0000 UTC m=+0.127693875 container start ee6222cba72e151cf5e936e991598a3b4dea1549b88abbc1e05bb2aae2c3aff9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 04:03:41 np0005593232 podman[97529]: 2026-01-23 09:03:41.664443209 +0000 UTC m=+0.130248137 container attach ee6222cba72e151cf5e936e991598a3b4dea1549b88abbc1e05bb2aae2c3aff9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:03:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:03:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:03:41.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:03:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v168: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:03:42 np0005593232 interesting_roentgen[97545]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:03:42 np0005593232 interesting_roentgen[97545]: --> relative data size: 1.0
Jan 23 04:03:42 np0005593232 interesting_roentgen[97545]: --> All data devices are unavailable
Jan 23 04:03:42 np0005593232 systemd[1]: libpod-ee6222cba72e151cf5e936e991598a3b4dea1549b88abbc1e05bb2aae2c3aff9.scope: Deactivated successfully.
Jan 23 04:03:42 np0005593232 podman[97529]: 2026-01-23 09:03:42.516073965 +0000 UTC m=+0.981878903 container died ee6222cba72e151cf5e936e991598a3b4dea1549b88abbc1e05bb2aae2c3aff9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:03:42 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c45c5abf42b623a0f118376d7a4e91b5a2adcd50e020142d0b0d831bb1f5d095-merged.mount: Deactivated successfully.
Jan 23 04:03:42 np0005593232 podman[97529]: 2026-01-23 09:03:42.571442764 +0000 UTC m=+1.037247692 container remove ee6222cba72e151cf5e936e991598a3b4dea1549b88abbc1e05bb2aae2c3aff9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 23 04:03:42 np0005593232 systemd[1]: libpod-conmon-ee6222cba72e151cf5e936e991598a3b4dea1549b88abbc1e05bb2aae2c3aff9.scope: Deactivated successfully.
Jan 23 04:03:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:03:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:03:42.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:03:42 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:03:42 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:03:42 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:03:42 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:03:42 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:03:42 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:03:42 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:03:42 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:03:42 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:03:42 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:03:42 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:03:42 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:03:42 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:03:42 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:03:42 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:03:42 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:03:42 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 1)
Jan 23 04:03:42 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:03:42 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 1)
Jan 23 04:03:42 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:03:42 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 23 04:03:42 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:03:42 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Jan 23 04:03:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Jan 23 04:03:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 04:03:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 23 04:03:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 23 04:03:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 23 04:03:43 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 23 04:03:43 np0005593232 ceph-mgr[74726]: [progress INFO root] update: starting ev f08e90bd-0957-4769-98dd-5c4293466b3b (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 23 04:03:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Jan 23 04:03:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 04:03:43 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 04:03:43 np0005593232 podman[97713]: 2026-01-23 09:03:43.182458996 +0000 UTC m=+0.040937554 container create 432fa1d5a385ec7cdbb4e920f62cc12b364b92451d4ce9d12b829ccd9db286df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_merkle, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:03:43 np0005593232 systemd[1]: Started libpod-conmon-432fa1d5a385ec7cdbb4e920f62cc12b364b92451d4ce9d12b829ccd9db286df.scope.
Jan 23 04:03:43 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:03:43 np0005593232 podman[97713]: 2026-01-23 09:03:43.165064576 +0000 UTC m=+0.023543164 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:03:43 np0005593232 podman[97713]: 2026-01-23 09:03:43.276604196 +0000 UTC m=+0.135082774 container init 432fa1d5a385ec7cdbb4e920f62cc12b364b92451d4ce9d12b829ccd9db286df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 04:03:43 np0005593232 podman[97713]: 2026-01-23 09:03:43.282652387 +0000 UTC m=+0.141130945 container start 432fa1d5a385ec7cdbb4e920f62cc12b364b92451d4ce9d12b829ccd9db286df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:03:43 np0005593232 podman[97713]: 2026-01-23 09:03:43.286258158 +0000 UTC m=+0.144736716 container attach 432fa1d5a385ec7cdbb4e920f62cc12b364b92451d4ce9d12b829ccd9db286df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:03:43 np0005593232 stoic_merkle[97729]: 167 167
Jan 23 04:03:43 np0005593232 systemd[1]: libpod-432fa1d5a385ec7cdbb4e920f62cc12b364b92451d4ce9d12b829ccd9db286df.scope: Deactivated successfully.
Jan 23 04:03:43 np0005593232 podman[97713]: 2026-01-23 09:03:43.287095322 +0000 UTC m=+0.145573880 container died 432fa1d5a385ec7cdbb4e920f62cc12b364b92451d4ce9d12b829ccd9db286df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_merkle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:03:43 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5f21b061a499f64e461155bbf3bc6801a5e5401c76581ebfface80400fe74057-merged.mount: Deactivated successfully.
Jan 23 04:03:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:03:43 np0005593232 podman[97713]: 2026-01-23 09:03:43.32824778 +0000 UTC m=+0.186726338 container remove 432fa1d5a385ec7cdbb4e920f62cc12b364b92451d4ce9d12b829ccd9db286df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_merkle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 04:03:43 np0005593232 systemd[1]: libpod-conmon-432fa1d5a385ec7cdbb4e920f62cc12b364b92451d4ce9d12b829ccd9db286df.scope: Deactivated successfully.
Jan 23 04:03:43 np0005593232 podman[97753]: 2026-01-23 09:03:43.480541178 +0000 UTC m=+0.040288785 container create f2a5c780a302aed9b4c95072a6d137b17d6815cffc706a392246dcaa98eb1218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:03:43 np0005593232 systemd[1]: Started libpod-conmon-f2a5c780a302aed9b4c95072a6d137b17d6815cffc706a392246dcaa98eb1218.scope.
Jan 23 04:03:43 np0005593232 podman[97753]: 2026-01-23 09:03:43.461703728 +0000 UTC m=+0.021451315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:03:43 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:03:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a010fc1114663f08822bce17c0a568b2cbe794f91aacd3196e3d942cfee79761/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a010fc1114663f08822bce17c0a568b2cbe794f91aacd3196e3d942cfee79761/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a010fc1114663f08822bce17c0a568b2cbe794f91aacd3196e3d942cfee79761/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a010fc1114663f08822bce17c0a568b2cbe794f91aacd3196e3d942cfee79761/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:43 np0005593232 podman[97753]: 2026-01-23 09:03:43.578075384 +0000 UTC m=+0.137823051 container init f2a5c780a302aed9b4c95072a6d137b17d6815cffc706a392246dcaa98eb1218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:03:43 np0005593232 podman[97753]: 2026-01-23 09:03:43.592647724 +0000 UTC m=+0.152395321 container start f2a5c780a302aed9b4c95072a6d137b17d6815cffc706a392246dcaa98eb1218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goldberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:03:43 np0005593232 podman[97753]: 2026-01-23 09:03:43.597415498 +0000 UTC m=+0.157163155 container attach f2a5c780a302aed9b4c95072a6d137b17d6815cffc706a392246dcaa98eb1218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goldberg, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 04:03:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:03:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:03:43.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:03:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v170: 197 pgs: 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:03:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 23 04:03:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 04:03:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 23 04:03:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 23 04:03:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 04:03:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 23 04:03:44 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 23 04:03:44 np0005593232 ceph-mgr[74726]: [progress INFO root] update: starting ev 406abcad-ece6-43ad-a47e-4c42200d0566 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 23 04:03:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Jan 23 04:03:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 04:03:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 55 pg[8.0( v 46'4 (0'0,46'4] local-lis/les=45/46 n=4 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=55 pruub=15.553189278s) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 46'3 mlcod 46'3 active pruub 162.794906616s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 55 pg[8.0( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=55 pruub=15.553189278s) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 46'3 mlcod 0'0 unknown pruub 162.794906616s@ mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:44 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 23 04:03:44 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 04:03:44 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 04:03:44 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 23 04:03:44 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]: {
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:    "0": [
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:        {
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:            "devices": [
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:                "/dev/loop3"
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:            ],
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:            "lv_name": "ceph_lv0",
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:            "lv_size": "7511998464",
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:            "name": "ceph_lv0",
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:            "tags": {
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:                "ceph.cluster_name": "ceph",
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:                "ceph.crush_device_class": "",
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:                "ceph.encrypted": "0",
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:                "ceph.osd_id": "0",
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:                "ceph.type": "block",
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:                "ceph.vdo": "0"
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:            },
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:            "type": "block",
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:            "vg_name": "ceph_vg0"
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:        }
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]:    ]
Jan 23 04:03:44 np0005593232 naughty_goldberg[97769]: }
Jan 23 04:03:44 np0005593232 systemd[1]: libpod-f2a5c780a302aed9b4c95072a6d137b17d6815cffc706a392246dcaa98eb1218.scope: Deactivated successfully.
Jan 23 04:03:44 np0005593232 podman[97753]: 2026-01-23 09:03:44.406442565 +0000 UTC m=+0.966190142 container died f2a5c780a302aed9b4c95072a6d137b17d6815cffc706a392246dcaa98eb1218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goldberg, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:03:44 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a010fc1114663f08822bce17c0a568b2cbe794f91aacd3196e3d942cfee79761-merged.mount: Deactivated successfully.
Jan 23 04:03:44 np0005593232 podman[97753]: 2026-01-23 09:03:44.485082149 +0000 UTC m=+1.044829726 container remove f2a5c780a302aed9b4c95072a6d137b17d6815cffc706a392246dcaa98eb1218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 04:03:44 np0005593232 systemd[1]: libpod-conmon-f2a5c780a302aed9b4c95072a6d137b17d6815cffc706a392246dcaa98eb1218.scope: Deactivated successfully.
Jan 23 04:03:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:03:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:03:44.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:03:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 23 04:03:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 23 04:03:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 23 04:03:45 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 23 04:03:45 np0005593232 ceph-mgr[74726]: [progress INFO root] update: starting ev da45354b-e8f0-42f9-8007-9520caf32952 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 23 04:03:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Jan 23 04:03:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.14( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.1a( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.1b( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.19( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.1f( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.1e( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.1d( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.1c( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.2( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.3( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.1( v 46'4 (0'0,46'4] local-lis/les=45/46 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.7( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.c( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.e( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.b( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.d( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.f( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.15( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.8( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.5( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.16( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.6( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.a( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.4( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.17( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.10( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.11( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.12( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.13( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.9( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.18( v 46'4 lc 0'0 (0'0,46'4] local-lis/les=45/46 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.1a( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.19( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.1e( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.14( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.1d( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.1f( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.1c( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.2( v 46'4 (0'0,46'4] local-lis/les=55/56 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.3( v 46'4 (0'0,46'4] local-lis/les=55/56 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.0( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=45/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 46'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.7( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.c( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.e( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.b( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.1( v 46'4 (0'0,46'4] local-lis/les=55/56 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.d( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.f( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.15( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.8( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.16( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.5( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.1b( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.6( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.a( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.4( v 46'4 (0'0,46'4] local-lis/les=55/56 n=1 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.17( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.10( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.11( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.12( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.13( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.18( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 56 pg[8.9( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=45/45 les/c/f=46/46/0 sis=55) [0] r=0 lpr=55 pi=[45,55)/1 crt=46'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 04:03:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 23 04:03:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 23 04:03:45 np0005593232 podman[97929]: 2026-01-23 09:03:45.189930453 +0000 UTC m=+0.045506122 container create 6b601f1687a20ad795067cf64ef6f9e0e102f2c593a5cc85f3f3f1528bb17bdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_torvalds, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:03:45 np0005593232 systemd[1]: Started libpod-conmon-6b601f1687a20ad795067cf64ef6f9e0e102f2c593a5cc85f3f3f1528bb17bdb.scope.
Jan 23 04:03:45 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:03:45 np0005593232 podman[97929]: 2026-01-23 09:03:45.173535652 +0000 UTC m=+0.029111321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:03:45 np0005593232 podman[97929]: 2026-01-23 09:03:45.284796633 +0000 UTC m=+0.140372302 container init 6b601f1687a20ad795067cf64ef6f9e0e102f2c593a5cc85f3f3f1528bb17bdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:03:45 np0005593232 podman[97929]: 2026-01-23 09:03:45.292986054 +0000 UTC m=+0.148561723 container start 6b601f1687a20ad795067cf64ef6f9e0e102f2c593a5cc85f3f3f1528bb17bdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 04:03:45 np0005593232 podman[97929]: 2026-01-23 09:03:45.29747761 +0000 UTC m=+0.153053299 container attach 6b601f1687a20ad795067cf64ef6f9e0e102f2c593a5cc85f3f3f1528bb17bdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_torvalds, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 04:03:45 np0005593232 wizardly_torvalds[97945]: 167 167
Jan 23 04:03:45 np0005593232 systemd[1]: libpod-6b601f1687a20ad795067cf64ef6f9e0e102f2c593a5cc85f3f3f1528bb17bdb.scope: Deactivated successfully.
Jan 23 04:03:45 np0005593232 conmon[97945]: conmon 6b601f1687a20ad79506 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6b601f1687a20ad795067cf64ef6f9e0e102f2c593a5cc85f3f3f1528bb17bdb.scope/container/memory.events
Jan 23 04:03:45 np0005593232 podman[97929]: 2026-01-23 09:03:45.301137723 +0000 UTC m=+0.156713392 container died 6b601f1687a20ad795067cf64ef6f9e0e102f2c593a5cc85f3f3f1528bb17bdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_torvalds, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 04:03:45 np0005593232 systemd[1]: var-lib-containers-storage-overlay-59707db2241d6bf6aa81910aa718c27537082eb86c59d7ea3dcb7292a3421b0e-merged.mount: Deactivated successfully.
Jan 23 04:03:45 np0005593232 podman[97929]: 2026-01-23 09:03:45.341063027 +0000 UTC m=+0.196638706 container remove 6b601f1687a20ad795067cf64ef6f9e0e102f2c593a5cc85f3f3f1528bb17bdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_torvalds, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:03:45 np0005593232 systemd[1]: libpod-conmon-6b601f1687a20ad795067cf64ef6f9e0e102f2c593a5cc85f3f3f1528bb17bdb.scope: Deactivated successfully.
Jan 23 04:03:45 np0005593232 podman[97968]: 2026-01-23 09:03:45.503954694 +0000 UTC m=+0.050227045 container create 5f13a6434fc8fbe134e049ccb0e8c623763281ab512c1f624eeddde2b9238efa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Jan 23 04:03:45 np0005593232 systemd[1]: Started libpod-conmon-5f13a6434fc8fbe134e049ccb0e8c623763281ab512c1f624eeddde2b9238efa.scope.
Jan 23 04:03:45 np0005593232 podman[97968]: 2026-01-23 09:03:45.478380684 +0000 UTC m=+0.024653065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:03:45 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:03:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29c99c9bbbecc34cae3f280b3e426917040cfc2a66badb52e1d02282cb91f56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29c99c9bbbecc34cae3f280b3e426917040cfc2a66badb52e1d02282cb91f56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29c99c9bbbecc34cae3f280b3e426917040cfc2a66badb52e1d02282cb91f56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c29c99c9bbbecc34cae3f280b3e426917040cfc2a66badb52e1d02282cb91f56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:45 np0005593232 podman[97968]: 2026-01-23 09:03:45.598321711 +0000 UTC m=+0.144594052 container init 5f13a6434fc8fbe134e049ccb0e8c623763281ab512c1f624eeddde2b9238efa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Jan 23 04:03:45 np0005593232 podman[97968]: 2026-01-23 09:03:45.604856375 +0000 UTC m=+0.151128716 container start 5f13a6434fc8fbe134e049ccb0e8c623763281ab512c1f624eeddde2b9238efa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 04:03:45 np0005593232 podman[97968]: 2026-01-23 09:03:45.608335333 +0000 UTC m=+0.154607704 container attach 5f13a6434fc8fbe134e049ccb0e8c623763281ab512c1f624eeddde2b9238efa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 04:03:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:03:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:03:45.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:03:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v173: 228 pgs: 31 unknown, 197 active+clean; 456 KiB data, 85 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:03:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 23 04:03:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 04:03:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 23 04:03:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 04:03:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 23 04:03:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 23 04:03:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 04:03:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 04:03:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 23 04:03:46 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 23 04:03:46 np0005593232 ceph-mgr[74726]: [progress INFO root] update: starting ev 73606363-10cc-48b3-aa9b-0a783538868d (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 23 04:03:46 np0005593232 ceph-mgr[74726]: [progress INFO root] complete: finished ev f08e90bd-0957-4769-98dd-5c4293466b3b (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 23 04:03:46 np0005593232 ceph-mgr[74726]: [progress INFO root] Completed event f08e90bd-0957-4769-98dd-5c4293466b3b (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Jan 23 04:03:46 np0005593232 ceph-mgr[74726]: [progress INFO root] complete: finished ev 406abcad-ece6-43ad-a47e-4c42200d0566 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 23 04:03:46 np0005593232 ceph-mgr[74726]: [progress INFO root] Completed event 406abcad-ece6-43ad-a47e-4c42200d0566 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Jan 23 04:03:46 np0005593232 ceph-mgr[74726]: [progress INFO root] complete: finished ev da45354b-e8f0-42f9-8007-9520caf32952 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 23 04:03:46 np0005593232 ceph-mgr[74726]: [progress INFO root] Completed event da45354b-e8f0-42f9-8007-9520caf32952 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Jan 23 04:03:46 np0005593232 ceph-mgr[74726]: [progress INFO root] complete: finished ev 73606363-10cc-48b3-aa9b-0a783538868d (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 23 04:03:46 np0005593232 ceph-mgr[74726]: [progress INFO root] Completed event 73606363-10cc-48b3-aa9b-0a783538868d (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Jan 23 04:03:46 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 04:03:46 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 04:03:46 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 23 04:03:46 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 04:03:46 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 04:03:46 np0005593232 bold_keldysh[97985]: {
Jan 23 04:03:46 np0005593232 bold_keldysh[97985]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:03:46 np0005593232 bold_keldysh[97985]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:03:46 np0005593232 bold_keldysh[97985]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:03:46 np0005593232 bold_keldysh[97985]:        "osd_id": 0,
Jan 23 04:03:46 np0005593232 bold_keldysh[97985]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:03:46 np0005593232 bold_keldysh[97985]:        "type": "bluestore"
Jan 23 04:03:46 np0005593232 bold_keldysh[97985]:    }
Jan 23 04:03:46 np0005593232 bold_keldysh[97985]: }
Jan 23 04:03:46 np0005593232 systemd[1]: libpod-5f13a6434fc8fbe134e049ccb0e8c623763281ab512c1f624eeddde2b9238efa.scope: Deactivated successfully.
Jan 23 04:03:46 np0005593232 podman[97968]: 2026-01-23 09:03:46.512969601 +0000 UTC m=+1.059241942 container died 5f13a6434fc8fbe134e049ccb0e8c623763281ab512c1f624eeddde2b9238efa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_keldysh, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 04:03:46 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c29c99c9bbbecc34cae3f280b3e426917040cfc2a66badb52e1d02282cb91f56-merged.mount: Deactivated successfully.
Jan 23 04:03:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:03:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:03:46.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:03:46 np0005593232 podman[97968]: 2026-01-23 09:03:46.742663248 +0000 UTC m=+1.288935579 container remove 5f13a6434fc8fbe134e049ccb0e8c623763281ab512c1f624eeddde2b9238efa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:03:46 np0005593232 systemd[1]: libpod-conmon-5f13a6434fc8fbe134e049ccb0e8c623763281ab512c1f624eeddde2b9238efa.scope: Deactivated successfully.
Jan 23 04:03:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:03:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:03:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:46 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d36b48b6-a66e-4f00-a7ab-5ecc1c06025b does not exist
Jan 23 04:03:46 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 2c4e2a42-2064-4753-b844-6d74e5374c61 does not exist
Jan 23 04:03:46 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 96a4cce8-7195-4b06-bc20-07be67dba6af does not exist
Jan 23 04:03:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 57 pg[9.0( v 53'1142 (0'0,53'1142] local-lis/les=47/48 n=177 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=14.558039665s) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 53'1141 mlcod 53'1141 active pruub 164.856506348s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 57 pg[9.0( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=5 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=57 pruub=14.558039665s) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 53'1141 mlcod 0'0 unknown pruub 164.856506348s@ mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.1( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.4( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.6( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.3( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.5( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.8( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.7( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.9( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.2( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.a( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.b( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.c( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.d( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.e( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.f( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.10( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.11( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.12( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.13( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.14( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.15( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.16( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.17( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.18( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.19( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.1a( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.1b( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.1c( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.1d( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.1e( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 58 pg[9.1f( v 53'1142 lc 0'0 (0'0,53'1142] local-lis/les=47/48 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:47 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:47 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:03:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:03:47.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:03:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v176: 290 pgs: 62 unknown, 228 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:03:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Jan 23 04:03:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 04:03:47 np0005593232 podman[98240]: 2026-01-23 09:03:47.973981505 +0000 UTC m=+0.053431266 container exec 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 04:03:48 np0005593232 podman[98240]: 2026-01-23 09:03:48.066250312 +0000 UTC m=+0.145700043 container exec_died 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 04:03:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 23 04:03:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 04:03:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 23 04:03:48 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[11.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=9.558847427s) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active pruub 160.909683228s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.13( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[11.0( empty local-lis/les=51/52 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59 pruub=9.558847427s) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown pruub 160.909683228s@ mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.11( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.12( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.16( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.5( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.b( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.7( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.10( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.4( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.14( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.e( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.9( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.c( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.a( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.17( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.f( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.d( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.6( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.1( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.0( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=47/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 53'1141 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.3( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.2( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.1d( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.1c( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.1f( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.1e( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.19( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.18( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.1b( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.1a( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.15( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 59 pg[9.8( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=47/47 les/c/f=48/48/0 sis=57) [0] r=0 lpr=57 pi=[47,57)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 23 04:03:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 23 04:03:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:03:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:03:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:03:48.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:03:48 np0005593232 podman[98397]: 2026-01-23 09:03:48.780936833 +0000 UTC m=+0.085209870 container exec b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 04:03:48 np0005593232 ceph-mgr[74726]: [progress INFO root] Writing back 20 completed events
Jan 23 04:03:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Jan 23 04:03:48 np0005593232 podman[98397]: 2026-01-23 09:03:48.816538324 +0000 UTC m=+0.120811351 container exec_died b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 04:03:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:03:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:03:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:49 np0005593232 podman[98460]: 2026-01-23 09:03:49.043069142 +0000 UTC m=+0.054420323 container exec 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, vcs-type=git, architecture=x86_64)
Jan 23 04:03:49 np0005593232 podman[98460]: 2026-01-23 09:03:49.057494388 +0000 UTC m=+0.068845569 container exec_died 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, version=2.2.4, architecture=x86_64, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container)
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.11( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.12( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.13( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.10( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.7( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.9( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.14( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.5( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.a( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.15( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.6( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.b( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.16( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.c( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.e( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.8( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.d( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.f( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.4( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.3( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.2( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.1( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.1f( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.1e( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.1d( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.1c( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.1b( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.1a( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.19( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.18( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.17( empty local-lis/les=51/52 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.11( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.13( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.12( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.7( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.9( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.14( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.a( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.10( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.15( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.6( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.5( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.16( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.e( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.8( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.c( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.f( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.4( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.2( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.d( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.3( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.1( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.1f( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.0( empty local-lis/les=59/60 n=0 ec=51/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.1e( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.b( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.1d( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.1c( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.1b( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.1a( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.19( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.17( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 60 pg[11.18( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=51/51 les/c/f=52/52/0 sis=59) [0] r=0 lpr=59 pi=[51,59)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:49 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 090e76b4-d165-4107-81e1-b9aafbad50d5 does not exist
Jan 23 04:03:49 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 9a0e92da-1ef0-4710-a859-93264ac61abb does not exist
Jan 23 04:03:49 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 38c69c9b-3657-4d21-adb1-d2ed4577a5bd does not exist
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:03:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:03:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:03:49.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:03:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v179: 321 pgs: 93 unknown, 228 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:03:50 np0005593232 podman[98631]: 2026-01-23 09:03:50.256510865 +0000 UTC m=+0.069463817 container create 0c660e11409c547b3b1e4ec7cd21b68d39f3456c917e9e34ba66514abe3e0985 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mccarthy, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 23 04:03:50 np0005593232 systemd[1]: Started libpod-conmon-0c660e11409c547b3b1e4ec7cd21b68d39f3456c917e9e34ba66514abe3e0985.scope.
Jan 23 04:03:50 np0005593232 podman[98631]: 2026-01-23 09:03:50.230096241 +0000 UTC m=+0.043049263 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:03:50 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:03:50 np0005593232 podman[98631]: 2026-01-23 09:03:50.342585218 +0000 UTC m=+0.155538180 container init 0c660e11409c547b3b1e4ec7cd21b68d39f3456c917e9e34ba66514abe3e0985 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Jan 23 04:03:50 np0005593232 podman[98631]: 2026-01-23 09:03:50.349431021 +0000 UTC m=+0.162383993 container start 0c660e11409c547b3b1e4ec7cd21b68d39f3456c917e9e34ba66514abe3e0985 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Jan 23 04:03:50 np0005593232 podman[98631]: 2026-01-23 09:03:50.353053633 +0000 UTC m=+0.166006605 container attach 0c660e11409c547b3b1e4ec7cd21b68d39f3456c917e9e34ba66514abe3e0985 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Jan 23 04:03:50 np0005593232 dazzling_mccarthy[98648]: 167 167
Jan 23 04:03:50 np0005593232 systemd[1]: libpod-0c660e11409c547b3b1e4ec7cd21b68d39f3456c917e9e34ba66514abe3e0985.scope: Deactivated successfully.
Jan 23 04:03:50 np0005593232 podman[98631]: 2026-01-23 09:03:50.354353009 +0000 UTC m=+0.167305961 container died 0c660e11409c547b3b1e4ec7cd21b68d39f3456c917e9e34ba66514abe3e0985 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Jan 23 04:03:50 np0005593232 systemd[1]: var-lib-containers-storage-overlay-11982b5c7ed06643e9b24001c596b6b41234dd29880a4867b208a8228b9fa294-merged.mount: Deactivated successfully.
Jan 23 04:03:50 np0005593232 podman[98631]: 2026-01-23 09:03:50.397461213 +0000 UTC m=+0.210414175 container remove 0c660e11409c547b3b1e4ec7cd21b68d39f3456c917e9e34ba66514abe3e0985 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mccarthy, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 04:03:50 np0005593232 systemd[1]: libpod-conmon-0c660e11409c547b3b1e4ec7cd21b68d39f3456c917e9e34ba66514abe3e0985.scope: Deactivated successfully.
Jan 23 04:03:50 np0005593232 podman[98673]: 2026-01-23 09:03:50.590114837 +0000 UTC m=+0.046723967 container create 306b75d5a9c63601c5289445cc896d453834f67923306a5b854fce50c135c7cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_murdock, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:03:50 np0005593232 systemd[1]: Started libpod-conmon-306b75d5a9c63601c5289445cc896d453834f67923306a5b854fce50c135c7cf.scope.
Jan 23 04:03:50 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:03:50 np0005593232 podman[98673]: 2026-01-23 09:03:50.566191153 +0000 UTC m=+0.022800303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:03:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/051545f7f35392cd4213e97853b8c0b6aa54790ea5a8269ecd9c96d0613e901c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/051545f7f35392cd4213e97853b8c0b6aa54790ea5a8269ecd9c96d0613e901c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/051545f7f35392cd4213e97853b8c0b6aa54790ea5a8269ecd9c96d0613e901c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/051545f7f35392cd4213e97853b8c0b6aa54790ea5a8269ecd9c96d0613e901c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/051545f7f35392cd4213e97853b8c0b6aa54790ea5a8269ecd9c96d0613e901c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:50 np0005593232 podman[98673]: 2026-01-23 09:03:50.682241981 +0000 UTC m=+0.138851141 container init 306b75d5a9c63601c5289445cc896d453834f67923306a5b854fce50c135c7cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_murdock, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:03:50 np0005593232 podman[98673]: 2026-01-23 09:03:50.692100458 +0000 UTC m=+0.148709598 container start 306b75d5a9c63601c5289445cc896d453834f67923306a5b854fce50c135c7cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 04:03:50 np0005593232 podman[98673]: 2026-01-23 09:03:50.695803642 +0000 UTC m=+0.152412812 container attach 306b75d5a9c63601c5289445cc896d453834f67923306a5b854fce50c135c7cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 04:03:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:03:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:03:50.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:03:51 np0005593232 admiring_murdock[98689]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:03:51 np0005593232 admiring_murdock[98689]: --> relative data size: 1.0
Jan 23 04:03:51 np0005593232 admiring_murdock[98689]: --> All data devices are unavailable
Jan 23 04:03:51 np0005593232 systemd[1]: libpod-306b75d5a9c63601c5289445cc896d453834f67923306a5b854fce50c135c7cf.scope: Deactivated successfully.
Jan 23 04:03:51 np0005593232 podman[98673]: 2026-01-23 09:03:51.545459233 +0000 UTC m=+1.002068383 container died 306b75d5a9c63601c5289445cc896d453834f67923306a5b854fce50c135c7cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_murdock, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:03:51 np0005593232 systemd[1]: var-lib-containers-storage-overlay-051545f7f35392cd4213e97853b8c0b6aa54790ea5a8269ecd9c96d0613e901c-merged.mount: Deactivated successfully.
Jan 23 04:03:51 np0005593232 podman[98673]: 2026-01-23 09:03:51.945956228 +0000 UTC m=+1.402565358 container remove 306b75d5a9c63601c5289445cc896d453834f67923306a5b854fce50c135c7cf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:03:51 np0005593232 systemd[1]: libpod-conmon-306b75d5a9c63601c5289445cc896d453834f67923306a5b854fce50c135c7cf.scope: Deactivated successfully.
Jan 23 04:03:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v180: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:03:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 23 04:03:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 04:03:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 23 04:03:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 04:03:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Jan 23 04:03:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 23 04:03:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 23 04:03:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 04:03:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:03:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:03:51.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:03:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 23 04:03:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 04:03:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 04:03:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 23 04:03:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 04:03:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 23 04:03:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 04:03:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 04:03:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 23 04:03:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 04:03:52 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[10.19( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[10.18( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[10.1b( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[10.2( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[10.5( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[10.14( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[10.8( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[10.15( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[10.13( empty local-lis/les=0/0 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.12( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.064902306s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 164.257156372s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.12( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.064852715s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.257156372s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.12( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.171599388s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 168.364151001s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.12( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.171570778s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 168.364151001s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.11( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.064303398s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 164.257125854s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.11( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.064274788s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.257125854s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.10( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.064041138s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 164.257110596s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.10( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.064003944s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.257110596s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.13( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.171047211s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 168.364074707s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.13( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.170868874s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 168.364074707s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.17( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.063762665s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 164.257110596s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.17( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.063734055s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.257110596s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.7( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.170786858s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 168.364196777s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.7( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.170747757s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 168.364196777s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.4( v 46'4 (0'0,46'4] local-lis/les=55/56 n=1 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.063592911s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 164.257064819s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.4( v 46'4 (0'0,46'4] local-lis/les=55/56 n=1 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.063510895s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.257064819s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.14( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.170326233s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 168.364257812s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.14( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.170253754s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 168.364257812s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.a( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.062995911s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 164.257064819s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.5( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.170386314s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 168.364456177s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.5( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.170361519s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 168.364456177s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.a( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.062967300s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.257064819s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.6( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.062839508s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 164.257049561s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.6( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.062822342s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.257049561s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.a( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.170101166s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 168.364379883s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.a( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.170074463s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 168.364379883s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.9( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.064542770s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 164.258926392s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.9( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.064496994s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.258926392s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.16( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.062500954s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 164.256958008s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.16( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.062455177s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.256958008s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.5( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.062453270s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 164.256958008s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.5( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.062422752s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.256958008s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.8( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.062306404s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 164.256927490s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.16( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.169877052s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 168.364501953s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.16( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.169825554s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 168.364501953s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.8( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.062280655s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.256927490s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.15( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.062150955s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 164.256896973s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.15( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.062103271s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.256896973s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.f( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.062054634s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 164.256896973s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.f( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.062032700s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.256896973s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.d( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.061964035s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 164.256851196s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.e( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.169626236s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 168.364517212s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.d( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.061941147s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.256851196s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.e( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.169589996s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 168.364517212s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.8( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.169550896s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 168.364532471s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.8( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.169521332s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 168.364532471s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.b( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.061698914s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 164.256820679s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.b( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.061678886s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.256820679s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.f( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.169280052s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 168.364562988s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.f( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.169239998s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 168.364562988s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.4( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.169254303s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 168.364593506s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.4( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.169231415s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 168.364593506s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.c( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.061404228s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 164.256774902s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.3( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.169189453s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 168.364624023s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.c( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.061348915s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.256774902s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.3( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.169075012s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 168.364624023s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.2( v 46'4 (0'0,46'4] local-lis/les=55/56 n=1 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.060813904s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 164.256439209s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.1( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.169054985s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 168.364685059s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.2( v 46'4 (0'0,46'4] local-lis/les=55/56 n=1 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.060789108s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.256439209s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.1( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.169025421s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 168.364685059s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.3( v 46'4 (0'0,46'4] local-lis/les=55/56 n=1 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.060658455s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 164.256439209s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.3( v 46'4 (0'0,46'4] local-lis/les=55/56 n=1 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.060624123s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.256439209s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.1e( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.168731689s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 168.364715576s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.1e( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.168705940s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 168.364715576s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.1d( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.171013832s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 168.367065430s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.1d( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.170972824s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 168.367065430s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.1f( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.060254097s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 164.256362915s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.1f( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.060220718s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.256362915s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.1b( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.170881271s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 168.367111206s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.1b( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.170841217s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 168.367111206s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.1a( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.170736313s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 168.367141724s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.1c( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.170687675s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 168.367095947s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.18( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.062483788s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 164.258911133s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.19( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.059788704s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 164.256240845s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.18( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.062444687s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.258911133s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.1a( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.170664787s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 168.367141724s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.19( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.059770584s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.256240845s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.19( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.170689583s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 168.367279053s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.19( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.170668602s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 168.367279053s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.1b( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.060340881s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 164.257019043s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.1b( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.060324669s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.257019043s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.17( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.170500755s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 active pruub 168.367324829s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.14( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.059343338s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 164.256210327s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.17( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.170463562s) [2] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 168.367324829s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.14( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.059318542s) [1] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.256210327s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[11.1c( empty local-lis/les=59/60 n=0 ec=59/51 lis/c=59/59 les/c/f=60/60/0 sis=61 pruub=13.170651436s) [1] r=-1 lpr=61 pi=[59,61)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 168.367095947s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.1c( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.059340477s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 active pruub 164.256393433s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 61 pg[8.1c( v 46'4 (0'0,46'4] local-lis/les=55/56 n=0 ec=55/45 lis/c=55/55 les/c/f=56/56/0 sis=61 pruub=9.059166908s) [2] r=-1 lpr=61 pi=[55,61)/1 crt=46'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 164.256393433s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:03:52 np0005593232 podman[98859]: 2026-01-23 09:03:52.579121203 +0000 UTC m=+0.036526469 container create 3a4004f2fd8ba6937fde210de78d97a9472d001dca0596fe37c67d9f4bdfdfa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:03:52 np0005593232 systemd[1]: Started libpod-conmon-3a4004f2fd8ba6937fde210de78d97a9472d001dca0596fe37c67d9f4bdfdfa1.scope.
Jan 23 04:03:52 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Jan 23 04:03:52 np0005593232 podman[98859]: 2026-01-23 09:03:52.652598602 +0000 UTC m=+0.110003898 container init 3a4004f2fd8ba6937fde210de78d97a9472d001dca0596fe37c67d9f4bdfdfa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 23 04:03:52 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Jan 23 04:03:52 np0005593232 podman[98859]: 2026-01-23 09:03:52.659457345 +0000 UTC m=+0.116862611 container start 3a4004f2fd8ba6937fde210de78d97a9472d001dca0596fe37c67d9f4bdfdfa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_leavitt, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:03:52 np0005593232 podman[98859]: 2026-01-23 09:03:52.563293538 +0000 UTC m=+0.020698824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:03:52 np0005593232 busy_leavitt[98875]: 167 167
Jan 23 04:03:52 np0005593232 podman[98859]: 2026-01-23 09:03:52.662919582 +0000 UTC m=+0.120324868 container attach 3a4004f2fd8ba6937fde210de78d97a9472d001dca0596fe37c67d9f4bdfdfa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 23 04:03:52 np0005593232 systemd[1]: libpod-3a4004f2fd8ba6937fde210de78d97a9472d001dca0596fe37c67d9f4bdfdfa1.scope: Deactivated successfully.
Jan 23 04:03:52 np0005593232 podman[98859]: 2026-01-23 09:03:52.666809772 +0000 UTC m=+0.124215048 container died 3a4004f2fd8ba6937fde210de78d97a9472d001dca0596fe37c67d9f4bdfdfa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 04:03:52 np0005593232 systemd[1]: var-lib-containers-storage-overlay-05487caee5deca6cca24c615d20ab801aee50f6abd2063f36d3f99b71a021c1d-merged.mount: Deactivated successfully.
Jan 23 04:03:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:03:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:03:52.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:03:52 np0005593232 podman[98859]: 2026-01-23 09:03:52.70403537 +0000 UTC m=+0.161440666 container remove 3a4004f2fd8ba6937fde210de78d97a9472d001dca0596fe37c67d9f4bdfdfa1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 23 04:03:52 np0005593232 systemd[1]: libpod-conmon-3a4004f2fd8ba6937fde210de78d97a9472d001dca0596fe37c67d9f4bdfdfa1.scope: Deactivated successfully.
Jan 23 04:03:52 np0005593232 podman[98901]: 2026-01-23 09:03:52.871355991 +0000 UTC m=+0.039601746 container create 9864b9919653905e2ab48502267bb52622d087b5605f86e5bd4f36f1b61b8805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 04:03:52 np0005593232 systemd[1]: Started libpod-conmon-9864b9919653905e2ab48502267bb52622d087b5605f86e5bd4f36f1b61b8805.scope.
Jan 23 04:03:52 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:03:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afc8d93589f6ddeab8c13cea4e3ac9fc5960d381ac00bea83d2a87c1285404a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afc8d93589f6ddeab8c13cea4e3ac9fc5960d381ac00bea83d2a87c1285404a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afc8d93589f6ddeab8c13cea4e3ac9fc5960d381ac00bea83d2a87c1285404a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:52 np0005593232 podman[98901]: 2026-01-23 09:03:52.855634208 +0000 UTC m=+0.023879993 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:03:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afc8d93589f6ddeab8c13cea4e3ac9fc5960d381ac00bea83d2a87c1285404a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:52 np0005593232 podman[98901]: 2026-01-23 09:03:52.962832076 +0000 UTC m=+0.131077851 container init 9864b9919653905e2ab48502267bb52622d087b5605f86e5bd4f36f1b61b8805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sanderson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:03:52 np0005593232 podman[98901]: 2026-01-23 09:03:52.970002168 +0000 UTC m=+0.138247913 container start 9864b9919653905e2ab48502267bb52622d087b5605f86e5bd4f36f1b61b8805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sanderson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:03:52 np0005593232 podman[98901]: 2026-01-23 09:03:52.973111395 +0000 UTC m=+0.141357170 container attach 9864b9919653905e2ab48502267bb52622d087b5605f86e5bd4f36f1b61b8805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sanderson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:03:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 23 04:03:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 23 04:03:53 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 23 04:03:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 62 pg[10.13( v 50'48 (0'0,50'48] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 62 pg[10.15( v 60'51 lc 50'27 (0'0,60'51] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=60'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 62 pg[10.8( v 50'48 (0'0,50'48] local-lis/les=61/62 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 62 pg[10.14( v 60'51 lc 50'43 (0'0,60'51] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=60'51 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 62 pg[10.5( v 50'48 (0'0,50'48] local-lis/les=61/62 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 62 pg[10.1b( v 50'48 (0'0,50'48] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 62 pg[10.18( v 50'48 (0'0,50'48] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 62 pg[10.19( v 50'48 (0'0,50'48] local-lis/les=61/62 n=0 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 62 pg[10.2( v 50'48 (0'0,50'48] local-lis/les=61/62 n=1 ec=57/49 lis/c=57/57 les/c/f=58/58/0 sis=61) [0] r=0 lpr=61 pi=[57,61)/1 crt=50'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:03:53 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 04:03:53 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 04:03:53 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 23 04:03:53 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 04:03:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:03:53 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]: {
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:    "0": [
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:        {
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:            "devices": [
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:                "/dev/loop3"
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:            ],
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:            "lv_name": "ceph_lv0",
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:            "lv_size": "7511998464",
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:            "name": "ceph_lv0",
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:            "tags": {
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:                "ceph.cluster_name": "ceph",
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:                "ceph.crush_device_class": "",
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:                "ceph.encrypted": "0",
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:                "ceph.osd_id": "0",
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:                "ceph.type": "block",
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:                "ceph.vdo": "0"
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:            },
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:            "type": "block",
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:            "vg_name": "ceph_vg0"
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:        }
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]:    ]
Jan 23 04:03:53 np0005593232 vigorous_sanderson[98917]: }
Jan 23 04:03:53 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Jan 23 04:03:53 np0005593232 systemd[1]: libpod-9864b9919653905e2ab48502267bb52622d087b5605f86e5bd4f36f1b61b8805.scope: Deactivated successfully.
Jan 23 04:03:53 np0005593232 podman[98901]: 2026-01-23 09:03:53.777716088 +0000 UTC m=+0.945961863 container died 9864b9919653905e2ab48502267bb52622d087b5605f86e5bd4f36f1b61b8805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sanderson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:03:53 np0005593232 systemd[1]: var-lib-containers-storage-overlay-afc8d93589f6ddeab8c13cea4e3ac9fc5960d381ac00bea83d2a87c1285404a4-merged.mount: Deactivated successfully.
Jan 23 04:03:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v183: 321 pgs: 31 peering, 290 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:03:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:03:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:03:53.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:03:54 np0005593232 podman[98901]: 2026-01-23 09:03:54.120236171 +0000 UTC m=+1.288481926 container remove 9864b9919653905e2ab48502267bb52622d087b5605f86e5bd4f36f1b61b8805 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:03:54 np0005593232 systemd[1]: libpod-conmon-9864b9919653905e2ab48502267bb52622d087b5605f86e5bd4f36f1b61b8805.scope: Deactivated successfully.
Jan 23 04:03:54 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Jan 23 04:03:54 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Jan 23 04:03:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:03:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:03:54.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:03:54 np0005593232 podman[99078]: 2026-01-23 09:03:54.730562733 +0000 UTC m=+0.038953927 container create 9949eea5ccd18ddf5dd754b33d11aa8f95a4cd815135629157349d7f5b5a8e39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:03:54 np0005593232 systemd[1]: Started libpod-conmon-9949eea5ccd18ddf5dd754b33d11aa8f95a4cd815135629157349d7f5b5a8e39.scope.
Jan 23 04:03:54 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:03:54 np0005593232 podman[99078]: 2026-01-23 09:03:54.796952452 +0000 UTC m=+0.105343666 container init 9949eea5ccd18ddf5dd754b33d11aa8f95a4cd815135629157349d7f5b5a8e39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 04:03:54 np0005593232 podman[99078]: 2026-01-23 09:03:54.803305911 +0000 UTC m=+0.111697105 container start 9949eea5ccd18ddf5dd754b33d11aa8f95a4cd815135629157349d7f5b5a8e39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:03:54 np0005593232 podman[99078]: 2026-01-23 09:03:54.806241214 +0000 UTC m=+0.114632728 container attach 9949eea5ccd18ddf5dd754b33d11aa8f95a4cd815135629157349d7f5b5a8e39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:03:54 np0005593232 wizardly_carson[99094]: 167 167
Jan 23 04:03:54 np0005593232 systemd[1]: libpod-9949eea5ccd18ddf5dd754b33d11aa8f95a4cd815135629157349d7f5b5a8e39.scope: Deactivated successfully.
Jan 23 04:03:54 np0005593232 conmon[99094]: conmon 9949eea5ccd18ddf5dd7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9949eea5ccd18ddf5dd754b33d11aa8f95a4cd815135629157349d7f5b5a8e39.scope/container/memory.events
Jan 23 04:03:54 np0005593232 podman[99078]: 2026-01-23 09:03:54.809906087 +0000 UTC m=+0.118297281 container died 9949eea5ccd18ddf5dd754b33d11aa8f95a4cd815135629157349d7f5b5a8e39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 23 04:03:54 np0005593232 podman[99078]: 2026-01-23 09:03:54.714840081 +0000 UTC m=+0.023231295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:03:54 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ea8424ef3acba404e9484e704fa97a75abf4771e2483b442bff0d58e3644a7e1-merged.mount: Deactivated successfully.
Jan 23 04:03:54 np0005593232 podman[99078]: 2026-01-23 09:03:54.85690428 +0000 UTC m=+0.165295494 container remove 9949eea5ccd18ddf5dd754b33d11aa8f95a4cd815135629157349d7f5b5a8e39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_carson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 04:03:54 np0005593232 systemd[1]: libpod-conmon-9949eea5ccd18ddf5dd754b33d11aa8f95a4cd815135629157349d7f5b5a8e39.scope: Deactivated successfully.
Jan 23 04:03:55 np0005593232 podman[99118]: 2026-01-23 09:03:55.024606862 +0000 UTC m=+0.045155243 container create 3cf8e5cff0aab7d3ff567c579091a561af33f1c3737b7b03e3f81985cbd643e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kalam, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:03:55 np0005593232 systemd[1]: Started libpod-conmon-3cf8e5cff0aab7d3ff567c579091a561af33f1c3737b7b03e3f81985cbd643e3.scope.
Jan 23 04:03:55 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:03:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a6ba4dd2143500cc463117b6ec91058a24ffc92e2429176c6b3fa1f8dd28f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a6ba4dd2143500cc463117b6ec91058a24ffc92e2429176c6b3fa1f8dd28f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a6ba4dd2143500cc463117b6ec91058a24ffc92e2429176c6b3fa1f8dd28f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5a6ba4dd2143500cc463117b6ec91058a24ffc92e2429176c6b3fa1f8dd28f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:03:55 np0005593232 podman[99118]: 2026-01-23 09:03:55.104306335 +0000 UTC m=+0.124854746 container init 3cf8e5cff0aab7d3ff567c579091a561af33f1c3737b7b03e3f81985cbd643e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 23 04:03:55 np0005593232 podman[99118]: 2026-01-23 09:03:55.002671184 +0000 UTC m=+0.023219625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:03:55 np0005593232 podman[99118]: 2026-01-23 09:03:55.113684719 +0000 UTC m=+0.134233070 container start 3cf8e5cff0aab7d3ff567c579091a561af33f1c3737b7b03e3f81985cbd643e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:03:55 np0005593232 podman[99118]: 2026-01-23 09:03:55.116796317 +0000 UTC m=+0.137344698 container attach 3cf8e5cff0aab7d3ff567c579091a561af33f1c3737b7b03e3f81985cbd643e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kalam, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 04:03:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v184: 321 pgs: 31 peering, 290 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:03:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:03:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:03:55.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:03:56 np0005593232 optimistic_kalam[99134]: {
Jan 23 04:03:56 np0005593232 optimistic_kalam[99134]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:03:56 np0005593232 optimistic_kalam[99134]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:03:56 np0005593232 optimistic_kalam[99134]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:03:56 np0005593232 optimistic_kalam[99134]:        "osd_id": 0,
Jan 23 04:03:56 np0005593232 optimistic_kalam[99134]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:03:56 np0005593232 optimistic_kalam[99134]:        "type": "bluestore"
Jan 23 04:03:56 np0005593232 optimistic_kalam[99134]:    }
Jan 23 04:03:56 np0005593232 optimistic_kalam[99134]: }
Jan 23 04:03:56 np0005593232 podman[99118]: 2026-01-23 09:03:56.096737695 +0000 UTC m=+1.117286066 container died 3cf8e5cff0aab7d3ff567c579091a561af33f1c3737b7b03e3f81985cbd643e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kalam, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:03:56 np0005593232 systemd[1]: libpod-3cf8e5cff0aab7d3ff567c579091a561af33f1c3737b7b03e3f81985cbd643e3.scope: Deactivated successfully.
Jan 23 04:03:56 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a5a6ba4dd2143500cc463117b6ec91058a24ffc92e2429176c6b3fa1f8dd28f9-merged.mount: Deactivated successfully.
Jan 23 04:03:56 np0005593232 podman[99118]: 2026-01-23 09:03:56.154076219 +0000 UTC m=+1.174624590 container remove 3cf8e5cff0aab7d3ff567c579091a561af33f1c3737b7b03e3f81985cbd643e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 04:03:56 np0005593232 systemd[1]: libpod-conmon-3cf8e5cff0aab7d3ff567c579091a561af33f1c3737b7b03e3f81985cbd643e3.scope: Deactivated successfully.
Jan 23 04:03:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:03:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:03:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:56 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 65dced10-a0d2-4c6d-b2df-c16e36412f83 does not exist
Jan 23 04:03:56 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7a78b3df-29dc-43f7-9cda-2e7e2c471184 does not exist
Jan 23 04:03:56 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7db12fd0-c844-4cf5-b136-bd572424d23e does not exist
Jan 23 04:03:56 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Jan 23 04:03:56 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Jan 23 04:03:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:03:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:03:56.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:03:57 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:57 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:03:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v185: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 173 B/s, 0 objects/s recovering
Jan 23 04:03:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Jan 23 04:03:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 23 04:03:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:03:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:03:57.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:03:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 23 04:03:58 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 23 04:03:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 23 04:03:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 23 04:03:58 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 23 04:03:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:03:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:03:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:03:58.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:03:59 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 23 04:03:59 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 23 04:03:59 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 23 04:03:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v187: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 174 B/s, 0 objects/s recovering
Jan 23 04:03:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Jan 23 04:03:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 23 04:03:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:03:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:03:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:03:59.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 23 04:04:00 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 23 04:04:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 23 04:04:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 23 04:04:00 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 23 04:04:00 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 23 04:04:00 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 23 04:04:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:04:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:00.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:04:01 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 23 04:04:01 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 64 pg[9.13( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=10.292912483s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 175.352722168s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:01 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 64 pg[9.b( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=10.296989441s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 175.357101440s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:01 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 64 pg[9.b( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=10.296928406s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.357101440s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:01 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 64 pg[9.13( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=10.292685509s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.352722168s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:01 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 64 pg[9.17( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=10.297074318s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 175.357559204s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:01 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 64 pg[9.17( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=10.297057152s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.357559204s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:01 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 64 pg[9.7( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=10.296552658s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 175.357254028s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:01 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 64 pg[9.7( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=10.296236038s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.357254028s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:01 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 64 pg[9.3( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=10.296413422s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 175.357543945s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:01 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 64 pg[9.3( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=10.296358109s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.357543945s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:01 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 64 pg[9.f( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=10.295981407s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 175.357467651s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:01 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 64 pg[9.1b( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=10.296047211s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 175.357666016s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:01 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 64 pg[9.1b( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=10.296002388s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.357666016s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:01 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 64 pg[9.f( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=10.295689583s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.357467651s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:01 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 64 pg[9.1f( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=10.295741081s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 175.357635498s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:01 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 64 pg[9.1f( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=64 pruub=10.295458794s) [2] r=-1 lpr=64 pi=[57,64)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 175.357635498s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v189: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 173 B/s, 0 objects/s recovering
Jan 23 04:04:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Jan 23 04:04:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 23 04:04:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:04:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:01.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:04:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 23 04:04:02 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 23 04:04:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 23 04:04:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 23 04:04:02 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 23 04:04:02 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 65 pg[9.13( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:02 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 65 pg[9.b( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:02 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 65 pg[9.7( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:02 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 65 pg[9.13( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:04:02 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 65 pg[9.7( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:04:02 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 65 pg[9.17( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:02 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 65 pg[9.17( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:04:02 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 65 pg[9.b( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:04:02 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 65 pg[9.f( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:02 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 65 pg[9.f( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:04:02 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 65 pg[9.3( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:02 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 65 pg[9.3( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:04:02 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 65 pg[9.1f( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:02 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 65 pg[9.1f( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:04:02 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 65 pg[9.1b( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:02 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 65 pg[9.1b( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=65) [2]/[0] r=0 lpr=65 pi=[57,65)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:04:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:04:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:02.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:04:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:04:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 23 04:04:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 23 04:04:03 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 23 04:04:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 23 04:04:03 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 66 pg[9.17( v 53'1142 (0'0,53'1142] local-lis/les=65/66 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:04:03 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 66 pg[9.7( v 53'1142 (0'0,53'1142] local-lis/les=65/66 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:04:03 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 66 pg[9.1b( v 53'1142 (0'0,53'1142] local-lis/les=65/66 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:04:03 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 66 pg[9.f( v 53'1142 (0'0,53'1142] local-lis/les=65/66 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:04:03 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 66 pg[9.13( v 53'1142 (0'0,53'1142] local-lis/les=65/66 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:04:03 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 66 pg[9.1f( v 53'1142 (0'0,53'1142] local-lis/les=65/66 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:04:03 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 66 pg[9.3( v 53'1142 (0'0,53'1142] local-lis/les=65/66 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:04:03 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 66 pg[9.b( v 53'1142 (0'0,53'1142] local-lis/les=65/66 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[57,65)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:04:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v192: 321 pgs: 8 unknown, 1 active+clean+scrubbing, 312 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:04:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:04:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:03.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:04:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 23 04:04:04 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 23 04:04:04 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 23 04:04:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 23 04:04:04 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 23 04:04:04 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 67 pg[9.13( v 53'1142 (0'0,53'1142] local-lis/les=65/66 n=5 ec=57/47 lis/c=65/57 les/c/f=66/59/0 sis=67 pruub=14.963817596s) [2] async=[2] r=-1 lpr=67 pi=[57,67)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 182.650192261s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:04 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 67 pg[9.b( v 53'1142 (0'0,53'1142] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/57 les/c/f=66/59/0 sis=67 pruub=14.963843346s) [2] async=[2] r=-1 lpr=67 pi=[57,67)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 182.650299072s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:04 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 67 pg[9.7( v 53'1142 (0'0,53'1142] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/57 les/c/f=66/59/0 sis=67 pruub=14.963621140s) [2] async=[2] r=-1 lpr=67 pi=[57,67)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 182.650070190s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:04 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 67 pg[9.17( v 53'1142 (0'0,53'1142] local-lis/les=65/66 n=5 ec=57/47 lis/c=65/57 les/c/f=66/59/0 sis=67 pruub=14.958462715s) [2] async=[2] r=-1 lpr=67 pi=[57,67)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 182.644958496s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:04 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 67 pg[9.17( v 53'1142 (0'0,53'1142] local-lis/les=65/66 n=5 ec=57/47 lis/c=65/57 les/c/f=66/59/0 sis=67 pruub=14.958388329s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.644958496s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:04 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 67 pg[9.b( v 53'1142 (0'0,53'1142] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/57 les/c/f=66/59/0 sis=67 pruub=14.963719368s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.650299072s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:04 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 67 pg[9.13( v 53'1142 (0'0,53'1142] local-lis/les=65/66 n=5 ec=57/47 lis/c=65/57 les/c/f=66/59/0 sis=67 pruub=14.963597298s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.650192261s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:04 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 67 pg[9.7( v 53'1142 (0'0,53'1142] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/57 les/c/f=66/59/0 sis=67 pruub=14.963480949s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.650070190s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:04 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 67 pg[9.f( v 53'1142 (0'0,53'1142] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/57 les/c/f=66/59/0 sis=67 pruub=14.962926865s) [2] async=[2] r=-1 lpr=67 pi=[57,67)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 182.650115967s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:04 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 67 pg[9.f( v 53'1142 (0'0,53'1142] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/57 les/c/f=66/59/0 sis=67 pruub=14.962759018s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.650115967s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:04 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 67 pg[9.3( v 53'1142 (0'0,53'1142] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/57 les/c/f=66/59/0 sis=67 pruub=14.962615013s) [2] async=[2] r=-1 lpr=67 pi=[57,67)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 182.650192261s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:04 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 67 pg[9.3( v 53'1142 (0'0,53'1142] local-lis/les=65/66 n=6 ec=57/47 lis/c=65/57 les/c/f=66/59/0 sis=67 pruub=14.962480545s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.650192261s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:04 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 67 pg[9.1f( v 53'1142 (0'0,53'1142] local-lis/les=65/66 n=5 ec=57/47 lis/c=65/57 les/c/f=66/59/0 sis=67 pruub=14.962496758s) [2] async=[2] r=-1 lpr=67 pi=[57,67)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 182.650238037s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:04 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 67 pg[9.1f( v 53'1142 (0'0,53'1142] local-lis/les=65/66 n=5 ec=57/47 lis/c=65/57 les/c/f=66/59/0 sis=67 pruub=14.962266922s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.650238037s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:04 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 67 pg[9.1b( v 53'1142 (0'0,53'1142] local-lis/les=65/66 n=5 ec=57/47 lis/c=65/57 les/c/f=66/59/0 sis=67 pruub=14.962099075s) [2] async=[2] r=-1 lpr=67 pi=[57,67)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 182.650115967s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:04 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 67 pg[9.1b( v 53'1142 (0'0,53'1142] local-lis/les=65/66 n=5 ec=57/47 lis/c=65/57 les/c/f=66/59/0 sis=67 pruub=14.962038040s) [2] r=-1 lpr=67 pi=[57,67)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 182.650115967s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:04.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 23 04:04:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 23 04:04:05 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 23 04:04:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v195: 321 pgs: 8 unknown, 1 active+clean+scrubbing, 312 active+clean; 455 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:04:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:05.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:06.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:04:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:04:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:04:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:04:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:04:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:04:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v196: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 220 B/s, 7 objects/s recovering
Jan 23 04:04:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Jan 23 04:04:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 23 04:04:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:04:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:07.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:04:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:08.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 23 04:04:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 23 04:04:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 23 04:04:08 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 23 04:04:08 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 23 04:04:08 np0005593232 python3[99299]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:04:08 np0005593232 podman[99301]: 2026-01-23 09:04:08.972455928 +0000 UTC m=+0.045777890 container create e73773161a09a64d414886114dd07e5507635d658c860877542b0b025542e790 (image=quay.io/ceph/ceph:v18, name=infallible_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 04:04:09 np0005593232 systemd[1]: Started libpod-conmon-e73773161a09a64d414886114dd07e5507635d658c860877542b0b025542e790.scope.
Jan 23 04:04:09 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:04:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d28d2f1d3dcdcd64034bc27a157c9852b8472a6cb0e74cb3ef3fb84de23fc2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:04:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d28d2f1d3dcdcd64034bc27a157c9852b8472a6cb0e74cb3ef3fb84de23fc2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:04:09 np0005593232 podman[99301]: 2026-01-23 09:04:08.953098163 +0000 UTC m=+0.026420155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:04:09 np0005593232 podman[99301]: 2026-01-23 09:04:09.057992466 +0000 UTC m=+0.131314448 container init e73773161a09a64d414886114dd07e5507635d658c860877542b0b025542e790 (image=quay.io/ceph/ceph:v18, name=infallible_moore, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 04:04:09 np0005593232 podman[99301]: 2026-01-23 09:04:09.064208611 +0000 UTC m=+0.137530573 container start e73773161a09a64d414886114dd07e5507635d658c860877542b0b025542e790 (image=quay.io/ceph/ceph:v18, name=infallible_moore, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 04:04:09 np0005593232 podman[99301]: 2026-01-23 09:04:09.068482732 +0000 UTC m=+0.141804704 container attach e73773161a09a64d414886114dd07e5507635d658c860877542b0b025542e790 (image=quay.io/ceph/ceph:v18, name=infallible_moore, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 04:04:09 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 69 pg[9.5( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=69 pruub=11.086408615s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 183.357452393s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:09 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 69 pg[9.5( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=69 pruub=11.086348534s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.357452393s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:09 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 69 pg[9.d( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=69 pruub=11.087430954s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 183.358779907s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:09 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 69 pg[9.d( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=69 pruub=11.087412834s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.358779907s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:09 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 69 pg[9.1d( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=69 pruub=11.086727142s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 183.358428955s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:09 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 69 pg[9.1d( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=69 pruub=11.086679459s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.358428955s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:09 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 69 pg[9.15( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=69 pruub=11.086874008s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 183.358825684s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:09 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 69 pg[9.15( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=69 pruub=11.086851120s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.358825684s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:09 np0005593232 infallible_moore[99316]: could not fetch user info: no user info saved
Jan 23 04:04:09 np0005593232 systemd[1]: libpod-e73773161a09a64d414886114dd07e5507635d658c860877542b0b025542e790.scope: Deactivated successfully.
Jan 23 04:04:09 np0005593232 podman[99301]: 2026-01-23 09:04:09.300498544 +0000 UTC m=+0.373820506 container died e73773161a09a64d414886114dd07e5507635d658c860877542b0b025542e790 (image=quay.io/ceph/ceph:v18, name=infallible_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 23 04:04:09 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e0d28d2f1d3dcdcd64034bc27a157c9852b8472a6cb0e74cb3ef3fb84de23fc2-merged.mount: Deactivated successfully.
Jan 23 04:04:09 np0005593232 podman[99301]: 2026-01-23 09:04:09.340361706 +0000 UTC m=+0.413683668 container remove e73773161a09a64d414886114dd07e5507635d658c860877542b0b025542e790 (image=quay.io/ceph/ceph:v18, name=infallible_moore, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 23 04:04:09 np0005593232 systemd[1]: libpod-conmon-e73773161a09a64d414886114dd07e5507635d658c860877542b0b025542e790.scope: Deactivated successfully.
Jan 23 04:04:09 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 23 04:04:09 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 23 04:04:09 np0005593232 python3[99438]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid e1533653-0a5a-584c-b34b-8689f0d32e77 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:04:09 np0005593232 podman[99439]: 2026-01-23 09:04:09.737661671 +0000 UTC m=+0.040702177 container create 31fdb6afe6defc53b1d00d2aa65f557fb8d9af472a4814e85d99e9e5dec386d8 (image=quay.io/ceph/ceph:v18, name=sharp_cohen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:04:09 np0005593232 systemd[1]: Started libpod-conmon-31fdb6afe6defc53b1d00d2aa65f557fb8d9af472a4814e85d99e9e5dec386d8.scope.
Jan 23 04:04:09 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:04:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8205ebe864e7cc16a0413131b4e5bab7874f2855545130b7ff38e1dac3c488e2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:04:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8205ebe864e7cc16a0413131b4e5bab7874f2855545130b7ff38e1dac3c488e2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:04:09 np0005593232 podman[99439]: 2026-01-23 09:04:09.808769883 +0000 UTC m=+0.111810419 container init 31fdb6afe6defc53b1d00d2aa65f557fb8d9af472a4814e85d99e9e5dec386d8 (image=quay.io/ceph/ceph:v18, name=sharp_cohen, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:04:09 np0005593232 podman[99439]: 2026-01-23 09:04:09.814373251 +0000 UTC m=+0.117413757 container start 31fdb6afe6defc53b1d00d2aa65f557fb8d9af472a4814e85d99e9e5dec386d8 (image=quay.io/ceph/ceph:v18, name=sharp_cohen, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:04:09 np0005593232 podman[99439]: 2026-01-23 09:04:09.721317841 +0000 UTC m=+0.024358367 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Jan 23 04:04:09 np0005593232 podman[99439]: 2026-01-23 09:04:09.817830858 +0000 UTC m=+0.120871364 container attach 31fdb6afe6defc53b1d00d2aa65f557fb8d9af472a4814e85d99e9e5dec386d8 (image=quay.io/ceph/ceph:v18, name=sharp_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 04:04:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 23 04:04:09 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 23 04:04:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 23 04:04:09 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 23 04:04:09 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 70 pg[9.1d( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[0] r=0 lpr=70 pi=[57,70)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:09 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 70 pg[9.d( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[0] r=0 lpr=70 pi=[57,70)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:09 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 70 pg[9.15( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[0] r=0 lpr=70 pi=[57,70)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:09 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 70 pg[9.5( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[0] r=0 lpr=70 pi=[57,70)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:09 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 70 pg[9.1d( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[0] r=0 lpr=70 pi=[57,70)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:04:09 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 70 pg[9.d( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[0] r=0 lpr=70 pi=[57,70)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:04:09 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 70 pg[9.15( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[0] r=0 lpr=70 pi=[57,70)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:04:09 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 70 pg[9.5( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[0] r=0 lpr=70 pi=[57,70)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:04:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v199: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 77 KiB/s rd, 1.5 KiB/s wr, 138 op/s; 223 B/s, 7 objects/s recovering
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]: {
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:    "user_id": "openstack",
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:    "display_name": "openstack",
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:    "email": "",
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:    "suspended": 0,
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:    "max_buckets": 1000,
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:    "subusers": [],
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:    "keys": [
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:        {
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:            "user": "openstack",
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:            "access_key": "MAMOAQ8ZROOEJWPW8VTL",
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:            "secret_key": "KBAgAIQH7Ycs6WJCidD03RFRQpXPq5fAKizww8B3"
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:        }
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:    ],
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:    "swift_keys": [],
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:    "caps": [],
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:    "op_mask": "read, write, delete",
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:    "default_placement": "",
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:    "default_storage_class": "",
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:    "placement_tags": [],
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:    "bucket_quota": {
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:        "enabled": false,
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:        "check_on_raw": false,
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:        "max_size": -1,
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:        "max_size_kb": 0,
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:        "max_objects": -1
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:    },
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:    "user_quota": {
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:        "enabled": false,
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:        "check_on_raw": false,
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:        "max_size": -1,
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:        "max_size_kb": 0,
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:        "max_objects": -1
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:    },
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:    "temp_url_keys": [],
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:    "type": "rgw",
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]:    "mfa_ids": []
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]: }
Jan 23 04:04:09 np0005593232 sharp_cohen[99453]: 
Jan 23 04:04:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Jan 23 04:04:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 23 04:04:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:09.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:10 np0005593232 systemd[1]: libpod-31fdb6afe6defc53b1d00d2aa65f557fb8d9af472a4814e85d99e9e5dec386d8.scope: Deactivated successfully.
Jan 23 04:04:10 np0005593232 podman[99439]: 2026-01-23 09:04:10.030153476 +0000 UTC m=+0.333193982 container died 31fdb6afe6defc53b1d00d2aa65f557fb8d9af472a4814e85d99e9e5dec386d8 (image=quay.io/ceph/ceph:v18, name=sharp_cohen, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:04:10 np0005593232 systemd[1]: var-lib-containers-storage-overlay-8205ebe864e7cc16a0413131b4e5bab7874f2855545130b7ff38e1dac3c488e2-merged.mount: Deactivated successfully.
Jan 23 04:04:10 np0005593232 podman[99439]: 2026-01-23 09:04:10.0675709 +0000 UTC m=+0.370611406 container remove 31fdb6afe6defc53b1d00d2aa65f557fb8d9af472a4814e85d99e9e5dec386d8 (image=quay.io/ceph/ceph:v18, name=sharp_cohen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 23 04:04:10 np0005593232 systemd[1]: libpod-conmon-31fdb6afe6defc53b1d00d2aa65f557fb8d9af472a4814e85d99e9e5dec386d8.scope: Deactivated successfully.
Jan 23 04:04:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:10.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 23 04:04:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 23 04:04:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 23 04:04:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 23 04:04:10 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 23 04:04:10 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 71 pg[9.16( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=71 pruub=9.354490280s) [1] r=-1 lpr=71 pi=[57,71)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 183.357467651s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:10 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 71 pg[9.16( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=71 pruub=9.354415894s) [1] r=-1 lpr=71 pi=[57,71)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.357467651s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:10 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 71 pg[9.e( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=71 pruub=9.354041100s) [1] r=-1 lpr=71 pi=[57,71)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 183.357803345s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:10 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 71 pg[9.6( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=71 pruub=9.354354858s) [1] r=-1 lpr=71 pi=[57,71)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 183.358245850s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:10 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 71 pg[9.e( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=71 pruub=9.353910446s) [1] r=-1 lpr=71 pi=[57,71)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.357803345s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:10 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 71 pg[9.6( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=71 pruub=9.354326248s) [1] r=-1 lpr=71 pi=[57,71)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.358245850s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:10 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 71 pg[9.1e( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=71 pruub=9.354469299s) [1] r=-1 lpr=71 pi=[57,71)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 183.358856201s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:10 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 71 pg[9.1e( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=71 pruub=9.354407310s) [1] r=-1 lpr=71 pi=[57,71)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 183.358856201s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:10 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 71 pg[9.1d( v 53'1142 (0'0,53'1142] local-lis/les=70/71 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[57,70)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:04:10 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 71 pg[9.5( v 53'1142 (0'0,53'1142] local-lis/les=70/71 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[57,70)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:04:10 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 71 pg[9.d( v 53'1142 (0'0,53'1142] local-lis/les=70/71 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[57,70)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:04:10 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 71 pg[9.15( v 53'1142 (0'0,53'1142] local-lis/les=70/71 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[57,70)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:04:11 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 23 04:04:11 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 23 04:04:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 23 04:04:11 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 23 04:04:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 23 04:04:11 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 23 04:04:11 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 72 pg[9.e( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=72) [1]/[0] r=0 lpr=72 pi=[57,72)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:11 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 72 pg[9.16( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=72) [1]/[0] r=0 lpr=72 pi=[57,72)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:11 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 72 pg[9.1e( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=72) [1]/[0] r=0 lpr=72 pi=[57,72)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:11 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 72 pg[9.1e( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=72) [1]/[0] r=0 lpr=72 pi=[57,72)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:04:11 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 72 pg[9.e( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=72) [1]/[0] r=0 lpr=72 pi=[57,72)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:04:11 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 72 pg[9.6( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=72) [1]/[0] r=0 lpr=72 pi=[57,72)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:11 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 72 pg[9.16( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=72) [1]/[0] r=0 lpr=72 pi=[57,72)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:04:11 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 72 pg[9.6( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=72) [1]/[0] r=0 lpr=72 pi=[57,72)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:04:11 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 72 pg[9.d( v 53'1142 (0'0,53'1142] local-lis/les=70/71 n=6 ec=57/47 lis/c=70/57 les/c/f=71/59/0 sis=72 pruub=14.983472824s) [2] async=[2] r=-1 lpr=72 pi=[57,72)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 190.017471313s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:11 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 72 pg[9.1d( v 53'1142 (0'0,53'1142] local-lis/les=70/71 n=5 ec=57/47 lis/c=70/57 les/c/f=71/59/0 sis=72 pruub=14.979680061s) [2] async=[2] r=-1 lpr=72 pi=[57,72)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 190.013885498s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:11 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 72 pg[9.1d( v 53'1142 (0'0,53'1142] local-lis/les=70/71 n=5 ec=57/47 lis/c=70/57 les/c/f=71/59/0 sis=72 pruub=14.979637146s) [2] r=-1 lpr=72 pi=[57,72)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.013885498s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:11 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 72 pg[9.5( v 53'1142 (0'0,53'1142] local-lis/les=70/71 n=6 ec=57/47 lis/c=70/57 les/c/f=71/59/0 sis=72 pruub=14.983077049s) [2] async=[2] r=-1 lpr=72 pi=[57,72)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 190.017425537s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:11 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 72 pg[9.d( v 53'1142 (0'0,53'1142] local-lis/les=70/71 n=6 ec=57/47 lis/c=70/57 les/c/f=71/59/0 sis=72 pruub=14.983262062s) [2] r=-1 lpr=72 pi=[57,72)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.017471313s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:11 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 72 pg[9.5( v 53'1142 (0'0,53'1142] local-lis/les=70/71 n=6 ec=57/47 lis/c=70/57 les/c/f=71/59/0 sis=72 pruub=14.982692719s) [2] r=-1 lpr=72 pi=[57,72)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.017425537s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:11 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 72 pg[9.15( v 53'1142 (0'0,53'1142] local-lis/les=70/71 n=5 ec=57/47 lis/c=70/57 les/c/f=71/59/0 sis=72 pruub=14.982493401s) [2] async=[2] r=-1 lpr=72 pi=[57,72)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 190.017578125s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:11 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 72 pg[9.15( v 53'1142 (0'0,53'1142] local-lis/les=70/71 n=5 ec=57/47 lis/c=70/57 les/c/f=71/59/0 sis=72 pruub=14.982375145s) [2] r=-1 lpr=72 pi=[57,72)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 190.017578125s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v202: 321 pgs: 4 unknown, 317 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:04:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:11.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:12 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 23 04:04:12 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 23 04:04:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:12.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 23 04:04:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 23 04:04:12 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 23 04:04:12 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 73 pg[9.16( v 53'1142 (0'0,53'1142] local-lis/les=72/73 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=72) [1]/[0] async=[1] r=0 lpr=72 pi=[57,72)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:04:12 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 73 pg[9.1e( v 53'1142 (0'0,53'1142] local-lis/les=72/73 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=72) [1]/[0] async=[1] r=0 lpr=72 pi=[57,72)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:04:12 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 73 pg[9.e( v 53'1142 (0'0,53'1142] local-lis/les=72/73 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=72) [1]/[0] async=[1] r=0 lpr=72 pi=[57,72)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:04:12 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 73 pg[9.6( v 53'1142 (0'0,53'1142] local-lis/les=72/73 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=72) [1]/[0] async=[1] r=0 lpr=72 pi=[57,72)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:04:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:04:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 23 04:04:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 23 04:04:13 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 23 04:04:13 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 74 pg[9.16( v 53'1142 (0'0,53'1142] local-lis/les=72/73 n=5 ec=57/47 lis/c=72/57 les/c/f=73/59/0 sis=74 pruub=15.120002747s) [1] async=[1] r=-1 lpr=74 pi=[57,74)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 192.068511963s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:13 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 74 pg[9.e( v 53'1142 (0'0,53'1142] local-lis/les=72/73 n=6 ec=57/47 lis/c=72/57 les/c/f=73/59/0 sis=74 pruub=15.119952202s) [1] async=[1] r=-1 lpr=74 pi=[57,74)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 192.068756104s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:13 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 74 pg[9.16( v 53'1142 (0'0,53'1142] local-lis/les=72/73 n=5 ec=57/47 lis/c=72/57 les/c/f=73/59/0 sis=74 pruub=15.119918823s) [1] r=-1 lpr=74 pi=[57,74)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.068511963s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:13 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 74 pg[9.e( v 53'1142 (0'0,53'1142] local-lis/les=72/73 n=6 ec=57/47 lis/c=72/57 les/c/f=73/59/0 sis=74 pruub=15.119884491s) [1] r=-1 lpr=74 pi=[57,74)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.068756104s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:13 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 74 pg[9.6( v 53'1142 (0'0,53'1142] local-lis/les=72/73 n=6 ec=57/47 lis/c=72/57 les/c/f=73/59/0 sis=74 pruub=15.123119354s) [1] async=[1] r=-1 lpr=74 pi=[57,74)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 192.072280884s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:13 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 74 pg[9.6( v 53'1142 (0'0,53'1142] local-lis/les=72/73 n=6 ec=57/47 lis/c=72/57 les/c/f=73/59/0 sis=74 pruub=15.123064995s) [1] r=-1 lpr=74 pi=[57,74)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.072280884s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:13 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 74 pg[9.1e( v 53'1142 (0'0,53'1142] local-lis/les=72/73 n=5 ec=57/47 lis/c=72/57 les/c/f=73/59/0 sis=74 pruub=15.119236946s) [1] async=[1] r=-1 lpr=74 pi=[57,74)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 192.068527222s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:13 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 74 pg[9.1e( v 53'1142 (0'0,53'1142] local-lis/les=72/73 n=5 ec=57/47 lis/c=72/57 les/c/f=73/59/0 sis=74 pruub=15.119159698s) [1] r=-1 lpr=74 pi=[57,74)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 192.068527222s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v205: 321 pgs: 4 peering, 4 unknown, 313 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 148 B/s, 5 objects/s recovering
Jan 23 04:04:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:14.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:04:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:14.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:04:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 23 04:04:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 23 04:04:14 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 23 04:04:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v207: 321 pgs: 4 peering, 4 unknown, 313 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 2.9 KiB/s rd, 250 B/s wr, 3 op/s; 145 B/s, 5 objects/s recovering
Jan 23 04:04:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:16.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:16.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:17 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 2.1f deep-scrub starts
Jan 23 04:04:17 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 2.1f deep-scrub ok
Jan 23 04:04:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v208: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 36 KiB/s rd, 682 B/s wr, 63 op/s; 135 B/s, 7 objects/s recovering
Jan 23 04:04:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Jan 23 04:04:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 23 04:04:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:04:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:18.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:04:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 23 04:04:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 23 04:04:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 23 04:04:18 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 23 04:04:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:18.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:04:19 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 23 04:04:19 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 23 04:04:19 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 23 04:04:19 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 23 04:04:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v210: 321 pgs: 321 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 663 B/s wr, 60 op/s; 35 B/s, 3 objects/s recovering
Jan 23 04:04:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Jan 23 04:04:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 23 04:04:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:04:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:20.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:04:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 23 04:04:20 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 23 04:04:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 23 04:04:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 23 04:04:20 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 23 04:04:20 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 77 pg[9.8( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=77 pruub=15.795166016s) [2] r=-1 lpr=77 pi=[57,77)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 199.359161377s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:20 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 77 pg[9.8( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=77 pruub=15.795102119s) [2] r=-1 lpr=77 pi=[57,77)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.359161377s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:20 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 77 pg[9.18( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=77 pruub=15.793843269s) [2] r=-1 lpr=77 pi=[57,77)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 199.358810425s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:20 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 77 pg[9.18( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=77 pruub=15.793787956s) [2] r=-1 lpr=77 pi=[57,77)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.358810425s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:20.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:21 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 23 04:04:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 23 04:04:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 23 04:04:21 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 23 04:04:21 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 78 pg[9.18( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=78) [2]/[0] r=0 lpr=78 pi=[57,78)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:21 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 78 pg[9.18( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=78) [2]/[0] r=0 lpr=78 pi=[57,78)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:04:21 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 78 pg[9.8( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=78) [2]/[0] r=0 lpr=78 pi=[57,78)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:21 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 78 pg[9.8( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=78) [2]/[0] r=0 lpr=78 pi=[57,78)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:04:21 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.2 deep-scrub starts
Jan 23 04:04:21 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.2 deep-scrub ok
Jan 23 04:04:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v213: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail; 35 KiB/s rd, 682 B/s wr, 62 op/s; 36 B/s, 3 objects/s recovering
Jan 23 04:04:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:04:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:22.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:04:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 23 04:04:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 23 04:04:22 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 23 04:04:22 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 79 pg[9.18( v 53'1142 (0'0,53'1142] local-lis/les=78/79 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=78) [2]/[0] async=[2] r=0 lpr=78 pi=[57,78)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:04:22 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 79 pg[9.8( v 53'1142 (0'0,53'1142] local-lis/les=78/79 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=78) [2]/[0] async=[2] r=0 lpr=78 pi=[57,78)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:04:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:22.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 23 04:04:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 23 04:04:23 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 23 04:04:23 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 80 pg[9.8( v 53'1142 (0'0,53'1142] local-lis/les=78/79 n=6 ec=57/47 lis/c=78/57 les/c/f=79/59/0 sis=80 pruub=14.981110573s) [2] async=[2] r=-1 lpr=80 pi=[57,80)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 201.338531494s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:23 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 80 pg[9.8( v 53'1142 (0'0,53'1142] local-lis/les=78/79 n=6 ec=57/47 lis/c=78/57 les/c/f=79/59/0 sis=80 pruub=14.981016159s) [2] r=-1 lpr=80 pi=[57,80)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.338531494s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:23 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 80 pg[9.18( v 53'1142 (0'0,53'1142] local-lis/les=78/79 n=5 ec=57/47 lis/c=78/57 les/c/f=79/59/0 sis=80 pruub=14.979028702s) [2] async=[2] r=-1 lpr=80 pi=[57,80)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 201.337600708s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:23 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 80 pg[9.18( v 53'1142 (0'0,53'1142] local-lis/les=78/79 n=5 ec=57/47 lis/c=78/57 les/c/f=79/59/0 sis=80 pruub=14.978890419s) [2] r=-1 lpr=80 pi=[57,80)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 201.337600708s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:23 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 23 04:04:23 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 23 04:04:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:04:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v216: 321 pgs: 1 active+clean+scrubbing+deep, 2 remapped+peering, 318 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:04:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:24.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 23 04:04:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 23 04:04:24 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 23 04:04:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:24.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:25 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 23 04:04:25 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 23 04:04:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v218: 321 pgs: 1 active+clean+scrubbing+deep, 2 remapped+peering, 318 active+clean; 456 KiB data, 103 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:04:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:26.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:26 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 23 04:04:26 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 23 04:04:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:26.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v219: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 1 objects/s recovering
Jan 23 04:04:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Jan 23 04:04:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 23 04:04:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:04:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:28.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:04:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 23 04:04:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 23 04:04:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 23 04:04:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 23 04:04:28 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 23 04:04:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:28.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:04:29 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 23 04:04:29 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.3 deep-scrub starts
Jan 23 04:04:29 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.3 deep-scrub ok
Jan 23 04:04:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v221: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 32 B/s, 1 objects/s recovering
Jan 23 04:04:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Jan 23 04:04:29 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 23 04:04:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:04:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:30.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:04:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 23 04:04:30 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 23 04:04:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 23 04:04:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 23 04:04:30 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 23 04:04:30 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 82 pg[9.9( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=82 pruub=13.612620354s) [2] r=-1 lpr=82 pi=[57,82)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 207.358261108s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:30 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 83 pg[9.9( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=82 pruub=13.612466812s) [2] r=-1 lpr=82 pi=[57,82)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 207.358261108s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:30 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 83 pg[9.a( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=83 pruub=13.612299919s) [1] r=-1 lpr=83 pi=[57,83)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 207.358322144s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:30 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 83 pg[9.a( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=83 pruub=13.612011909s) [1] r=-1 lpr=83 pi=[57,83)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 207.358322144s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:30 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 82 pg[9.19( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=82 pruub=13.611680984s) [2] r=-1 lpr=82 pi=[57,82)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 207.358917236s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:30 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 83 pg[9.19( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=82 pruub=13.611519814s) [2] r=-1 lpr=82 pi=[57,82)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 207.358917236s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:30 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 83 pg[9.1a( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=83 pruub=13.611418724s) [1] r=-1 lpr=83 pi=[57,83)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 207.359161377s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:30 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 83 pg[9.1a( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=83 pruub=13.611279488s) [1] r=-1 lpr=83 pi=[57,83)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 207.359161377s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:30.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 23 04:04:31 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 23 04:04:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 23 04:04:31 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 23 04:04:31 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 84 pg[9.1a( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=84) [1]/[0] r=0 lpr=84 pi=[57,84)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:31 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 84 pg[9.a( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=84) [1]/[0] r=0 lpr=84 pi=[57,84)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:31 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 84 pg[9.1a( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=84) [1]/[0] r=0 lpr=84 pi=[57,84)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:04:31 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 84 pg[9.a( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=84) [1]/[0] r=0 lpr=84 pi=[57,84)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:04:31 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 84 pg[9.19( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=84) [2]/[0] r=0 lpr=84 pi=[57,84)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:31 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 84 pg[9.9( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=84) [2]/[0] r=0 lpr=84 pi=[57,84)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:31 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 84 pg[9.9( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=84) [2]/[0] r=0 lpr=84 pi=[57,84)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:04:31 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 84 pg[9.19( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=84) [2]/[0] r=0 lpr=84 pi=[57,84)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:04:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v224: 321 pgs: 2 remapped+peering, 2 unknown, 317 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 36 B/s, 1 objects/s recovering
Jan 23 04:04:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:32.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:32 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 23 04:04:32 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 23 04:04:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 23 04:04:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 23 04:04:32 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 23 04:04:32 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 85 pg[9.9( v 53'1142 (0'0,53'1142] local-lis/les=84/85 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[57,84)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:04:32 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 85 pg[9.19( v 53'1142 (0'0,53'1142] local-lis/les=84/85 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=84) [2]/[0] async=[2] r=0 lpr=84 pi=[57,84)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:04:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:32.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:33 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 85 pg[9.1a( v 53'1142 (0'0,53'1142] local-lis/les=84/85 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=84) [1]/[0] async=[1] r=0 lpr=84 pi=[57,84)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:04:33 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 85 pg[9.a( v 53'1142 (0'0,53'1142] local-lis/les=84/85 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=84) [1]/[0] async=[1] r=0 lpr=84 pi=[57,84)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:04:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 23 04:04:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 23 04:04:33 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 23 04:04:33 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 86 pg[9.19( v 53'1142 (0'0,53'1142] local-lis/les=84/85 n=5 ec=57/47 lis/c=84/57 les/c/f=85/59/0 sis=86 pruub=14.968559265s) [2] async=[2] r=-1 lpr=86 pi=[57,86)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 211.800003052s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:33 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 86 pg[9.9( v 53'1142 (0'0,53'1142] local-lis/les=84/85 n=6 ec=57/47 lis/c=84/57 les/c/f=85/59/0 sis=86 pruub=14.968458176s) [2] async=[2] r=-1 lpr=86 pi=[57,86)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 211.799972534s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:33 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 86 pg[9.19( v 53'1142 (0'0,53'1142] local-lis/les=84/85 n=5 ec=57/47 lis/c=84/57 les/c/f=85/59/0 sis=86 pruub=14.967984200s) [2] r=-1 lpr=86 pi=[57,86)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.800003052s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:33 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 86 pg[9.9( v 53'1142 (0'0,53'1142] local-lis/les=84/85 n=6 ec=57/47 lis/c=84/57 les/c/f=85/59/0 sis=86 pruub=14.967484474s) [2] r=-1 lpr=86 pi=[57,86)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.799972534s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:33 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 86 pg[9.a( v 53'1142 (0'0,53'1142] local-lis/les=84/85 n=6 ec=57/47 lis/c=84/57 les/c/f=85/59/0 sis=86 pruub=15.479506493s) [1] async=[1] r=-1 lpr=86 pi=[57,86)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 212.312057495s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:33 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 86 pg[9.a( v 53'1142 (0'0,53'1142] local-lis/les=84/85 n=6 ec=57/47 lis/c=84/57 les/c/f=85/59/0 sis=86 pruub=15.479333878s) [1] r=-1 lpr=86 pi=[57,86)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 212.312057495s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:33 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 86 pg[9.1a( v 53'1142 (0'0,53'1142] local-lis/les=84/85 n=5 ec=57/47 lis/c=84/57 les/c/f=85/59/0 sis=86 pruub=15.478722572s) [1] async=[1] r=-1 lpr=86 pi=[57,86)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 212.311950684s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:33 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 86 pg[9.1a( v 53'1142 (0'0,53'1142] local-lis/les=84/85 n=5 ec=57/47 lis/c=84/57 les/c/f=85/59/0 sis=86 pruub=15.478574753s) [1] r=-1 lpr=86 pi=[57,86)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 212.311950684s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:04:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v227: 321 pgs: 2 remapped+peering, 2 unknown, 317 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:04:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:34.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 23 04:04:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 23 04:04:34 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 23 04:04:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:34.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:35 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 23 04:04:35 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 23 04:04:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v229: 321 pgs: 2 remapped+peering, 2 unknown, 317 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:04:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:36.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:36 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 23 04:04:36 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 23 04:04:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:36.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:04:37
Jan 23 04:04:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:04:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Some PGs (0.006231) are unknown; try again later
Jan 23 04:04:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:04:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:04:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:04:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:04:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:04:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:04:37 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 23 04:04:37 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 23 04:04:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:04:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:04:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:04:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:04:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:04:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:04:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:04:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:04:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:04:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:04:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v230: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 341 B/s wr, 31 op/s; 91 B/s, 4 objects/s recovering
Jan 23 04:04:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Jan 23 04:04:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 23 04:04:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:38.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:04:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:38.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:04:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:04:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 23 04:04:38 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 23 04:04:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 23 04:04:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 23 04:04:38 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 23 04:04:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 23 04:04:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v232: 321 pgs: 321 active+clean; 456 KiB data, 104 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 325 B/s wr, 29 op/s; 87 B/s, 3 objects/s recovering
Jan 23 04:04:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Jan 23 04:04:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 23 04:04:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:04:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:40.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:04:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:40.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 23 04:04:41 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 23 04:04:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 23 04:04:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 23 04:04:41 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 23 04:04:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v234: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s rd, 281 B/s wr, 25 op/s; 75 B/s, 3 objects/s recovering
Jan 23 04:04:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Jan 23 04:04:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 23 04:04:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:04:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:42.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:04:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 23 04:04:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 23 04:04:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 23 04:04:42 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 23 04:04:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:42.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:43 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 23 04:04:43 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 23 04:04:43 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 23 04:04:43 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 23 04:04:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 23 04:04:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v236: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:04:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Jan 23 04:04:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 23 04:04:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:44.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 23 04:04:44 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 23 04:04:44 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 23 04:04:44 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 23 04:04:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:04:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:44.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:04:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 23 04:04:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 23 04:04:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 23 04:04:45 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 23 04:04:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 23 04:04:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v239: 321 pgs: 321 active+clean; 456 KiB data, 121 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:04:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Jan 23 04:04:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 23 04:04:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:46.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 23 04:04:46 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 23 04:04:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 23 04:04:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 23 04:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:04:46 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 23 04:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.724886004094547e-06 of space, bias 1.0, pg target 0.002017465801228364 quantized to 32 (current 32)
Jan 23 04:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:04:46 np0005593232 systemd-logind[808]: New session 34 of user zuul.
Jan 23 04:04:46 np0005593232 systemd[1]: Started Session 34 of User zuul.
Jan 23 04:04:46 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 23 04:04:46 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 23 04:04:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:04:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:46.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:04:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 23 04:04:47 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 23 04:04:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 23 04:04:47 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 23 04:04:47 np0005593232 python3.9[99824]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:04:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v242: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:04:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:48.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 23 04:04:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 23 04:04:48 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 23 04:04:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:48.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:04:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 23 04:04:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 23 04:04:48 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 23 04:04:49 np0005593232 python3.9[100039]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:04:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 23 04:04:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 23 04:04:49 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 23 04:04:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v246: 321 pgs: 2 remapped+peering, 319 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:04:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:04:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:50.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:04:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:04:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:50.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:04:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v247: 321 pgs: 321 active+clean; 456 KiB data, 122 MiB used, 21 GiB / 21 GiB avail; 114 B/s, 5 objects/s recovering
Jan 23 04:04:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Jan 23 04:04:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 23 04:04:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 23 04:04:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:04:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:52.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:04:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 23 04:04:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 23 04:04:52 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 23 04:04:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 23 04:04:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 98 pg[9.10( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=98 pruub=15.592073441s) [1] r=-1 lpr=98 pi=[57,98)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 231.358978271s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:52 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 98 pg[9.10( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=98 pruub=15.592007637s) [1] r=-1 lpr=98 pi=[57,98)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.358978271s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:52.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 23 04:04:53 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 23 04:04:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 23 04:04:53 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 23 04:04:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 99 pg[9.10( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=99) [1]/[0] r=0 lpr=99 pi=[57,99)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:53 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 99 pg[9.10( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=99) [1]/[0] r=0 lpr=99 pi=[57,99)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:04:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:04:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v250: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 106 B/s, 4 objects/s recovering
Jan 23 04:04:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Jan 23 04:04:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 23 04:04:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:04:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:54.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:04:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 23 04:04:54 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 23 04:04:54 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 23 04:04:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 23 04:04:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 23 04:04:54 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 23 04:04:54 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 100 pg[9.11( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=100 pruub=13.469819069s) [1] r=-1 lpr=100 pi=[57,100)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 231.359313965s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:04:54 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 100 pg[9.11( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=100 pruub=13.469747543s) [1] r=-1 lpr=100 pi=[57,100)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 231.359313965s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:04:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:54.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:54 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 100 pg[9.10( v 53'1142 (0'0,53'1142] local-lis/les=99/100 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[57,99)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:04:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 23 04:04:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v252: 321 pgs: 321 active+clean; 456 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 91 B/s, 4 objects/s recovering
Jan 23 04:04:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Jan 23 04:04:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 23 04:04:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:04:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:56.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:04:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:56.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:57 np0005593232 podman[100275]: 2026-01-23 09:04:57.448990948 +0000 UTC m=+0.065304492 container exec 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 04:04:57 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 23 04:04:57 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 23 04:04:57 np0005593232 podman[100275]: 2026-01-23 09:04:57.566856074 +0000 UTC m=+0.183169588 container exec_died 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:04:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v253: 321 pgs: 1 unknown, 1 active+remapped, 319 active+clean; 456 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 23 04:04:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:04:58.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:04:58 np0005593232 podman[100477]: 2026-01-23 09:04:58.153001944 +0000 UTC m=+0.051045574 container exec b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 04:04:58 np0005593232 podman[100477]: 2026-01-23 09:04:58.163416354 +0000 UTC m=+0.061459984 container exec_died b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 04:04:58 np0005593232 podman[100543]: 2026-01-23 09:04:58.374846958 +0000 UTC m=+0.055036605 container exec 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., version=2.2.4, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, name=keepalived, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 23 04:04:58 np0005593232 podman[100543]: 2026-01-23 09:04:58.390256978 +0000 UTC m=+0.070446595 container exec_died 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.28.2, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9)
Jan 23 04:04:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:04:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:04:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:04:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:04:58.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:04:58 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 23 04:04:59 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 23 04:04:59 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 23 04:05:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v254: 321 pgs: 1 active+clean+scrubbing, 1 unknown, 1 active+remapped, 318 active+clean; 456 KiB data, 139 MiB used, 21 GiB / 21 GiB avail; 0 B/s, 0 objects/s recovering
Jan 23 04:05:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:00.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 23 04:05:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:05:00 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 23 04:05:00 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 101 pg[9.10( v 53'1142 (0'0,53'1142] local-lis/les=99/100 n=6 ec=57/47 lis/c=99/57 les/c/f=100/59/0 sis=101 pruub=10.661913872s) [1] async=[1] r=-1 lpr=101 pi=[57,101)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 233.952774048s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:05:00 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 101 pg[9.11( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=101) [1]/[0] r=0 lpr=101 pi=[57,101)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:05:00 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 101 pg[9.11( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=101) [1]/[0] r=0 lpr=101 pi=[57,101)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:05:00 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 101 pg[9.10( v 53'1142 (0'0,53'1142] local-lis/les=99/100 n=6 ec=57/47 lis/c=99/57 les/c/f=100/59/0 sis=101 pruub=10.661807060s) [1] r=-1 lpr=101 pi=[57,101)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 233.952774048s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:05:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:05:00 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 23 04:05:00 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 23 04:05:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:05:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:05:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:05:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:05:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:00.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 23 04:05:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 23 04:05:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 23 04:05:01 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 23 04:05:01 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 102 pg[9.12( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=102 pruub=14.820915222s) [1] r=-1 lpr=102 pi=[57,102)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 239.359619141s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:05:01 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 102 pg[9.12( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=102 pruub=14.820637703s) [1] r=-1 lpr=102 pi=[57,102)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 239.359619141s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:05:01 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 102 pg[9.11( v 53'1142 (0'0,53'1142] local-lis/les=101/102 n=6 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=101) [1]/[0] async=[1] r=0 lpr=101 pi=[57,101)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:05:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v257: 321 pgs: 1 active+remapped, 1 peering, 1 active+clean+scrubbing, 318 active+clean; 456 KiB data, 140 MiB used, 21 GiB / 21 GiB avail; 15 B/s, 0 objects/s recovering
Jan 23 04:05:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:02.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:05:02.324673) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159102324978, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7282, "num_deletes": 251, "total_data_size": 8950280, "memory_usage": 9200304, "flush_reason": "Manual Compaction"}
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159102420384, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7269410, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 145, "largest_seqno": 7418, "table_properties": {"data_size": 7243342, "index_size": 16929, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8069, "raw_key_size": 74620, "raw_average_key_size": 23, "raw_value_size": 7181399, "raw_average_value_size": 2232, "num_data_blocks": 752, "num_entries": 3217, "num_filter_entries": 3217, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158727, "oldest_key_time": 1769158727, "file_creation_time": 1769159102, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 95731 microseconds, and 32624 cpu microseconds.
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:05:02.420495) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7269410 bytes OK
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:05:02.420532) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:05:02.422685) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:05:02.422717) EVENT_LOG_v1 {"time_micros": 1769159102422711, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:05:02.422741) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 8918723, prev total WAL file size 8926154, number of live WAL files 2.
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:05:02.425630) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7099KB) 13(53KB) 8(1944B)]
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159102425800, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7326203, "oldest_snapshot_seqno": -1}
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 23 04:05:02 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 103 pg[9.11( v 53'1142 (0'0,53'1142] local-lis/les=101/102 n=6 ec=57/47 lis/c=101/57 les/c/f=102/59/0 sis=103 pruub=14.988105774s) [1] async=[1] r=-1 lpr=103 pi=[57,103)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 240.550918579s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:05:02 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 103 pg[9.11( v 53'1142 (0'0,53'1142] local-lis/les=101/102 n=6 ec=57/47 lis/c=101/57 les/c/f=102/59/0 sis=103 pruub=14.988009453s) [1] r=-1 lpr=103 pi=[57,103)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 240.550918579s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:05:02 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 103 pg[9.12( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=103) [1]/[0] r=0 lpr=103 pi=[57,103)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:05:02 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 103 pg[9.12( v 53'1142 (0'0,53'1142] local-lis/les=57/59 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=103) [1]/[0] r=0 lpr=103 pi=[57,103)/1 crt=53'1142 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3029 keys, 7281685 bytes, temperature: kUnknown
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159102507440, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7281685, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7256048, "index_size": 16951, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7621, "raw_key_size": 72525, "raw_average_key_size": 23, "raw_value_size": 7195822, "raw_average_value_size": 2375, "num_data_blocks": 756, "num_entries": 3029, "num_filter_entries": 3029, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769159102, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:05:02.507722) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7281685 bytes
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:05:02.509490) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 89.6 rd, 89.1 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.0, 0.0 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3321, records dropped: 292 output_compression: NoCompression
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:05:02.509513) EVENT_LOG_v1 {"time_micros": 1769159102509503, "job": 4, "event": "compaction_finished", "compaction_time_micros": 81730, "compaction_time_cpu_micros": 19124, "output_level": 6, "num_output_files": 1, "total_output_size": 7281685, "num_input_records": 3321, "num_output_records": 3029, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159102510652, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159102510717, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159102510764, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:05:02.425114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:05:02 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 23 04:05:02 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 23 04:05:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:05:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:02.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:05:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:05:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 23 04:05:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 23 04:05:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v259: 321 pgs: 1 active+remapped, 1 peering, 1 active+clean+scrubbing, 318 active+clean; 456 KiB data, 140 MiB used, 21 GiB / 21 GiB avail; 8.7 KiB/s rd, 170 B/s wr, 15 op/s; 18 B/s, 0 objects/s recovering
Jan 23 04:05:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:05:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:04.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:05:04 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 23 04:05:04 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 23 04:05:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 23 04:05:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:05:04 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 23 04:05:04 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 104 pg[9.12( v 53'1142 (0'0,53'1142] local-lis/les=103/104 n=5 ec=57/47 lis/c=57/57 les/c/f=59/59/0 sis=103) [1]/[0] async=[1] r=0 lpr=103 pi=[57,103)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:05:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:05:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:04.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:05:04 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:05:04 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:05:05 np0005593232 systemd[1]: session-34.scope: Deactivated successfully.
Jan 23 04:05:05 np0005593232 systemd[1]: session-34.scope: Consumed 8.721s CPU time.
Jan 23 04:05:05 np0005593232 systemd-logind[808]: Session 34 logged out. Waiting for processes to exit.
Jan 23 04:05:05 np0005593232 systemd-logind[808]: Removed session 34.
Jan 23 04:05:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:05:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:05:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:05:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:05:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:05:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:05:05 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 2ec02f36-a45d-43fc-8adf-1d173af632eb does not exist
Jan 23 04:05:05 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 626f90a9-82a5-42f0-9c07-008dc040eece does not exist
Jan 23 04:05:05 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 2eb6a475-ded6-4a45-a5ff-015da2163b3e does not exist
Jan 23 04:05:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:05:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:05:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:05:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:05:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:05:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:05:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 23 04:05:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v261: 321 pgs: 1 active+remapped, 1 peering, 319 active+clean; 456 KiB data, 140 MiB used, 21 GiB / 21 GiB avail; 8.9 KiB/s rd, 175 B/s wr, 16 op/s; 18 B/s, 0 objects/s recovering
Jan 23 04:05:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 23 04:05:06 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 105 pg[9.12( v 53'1142 (0'0,53'1142] local-lis/les=103/104 n=5 ec=57/47 lis/c=103/57 les/c/f=104/59/0 sis=105 pruub=14.774005890s) [1] async=[1] r=-1 lpr=105 pi=[57,105)/1 crt=53'1142 lcod 0'0 mlcod 0'0 active pruub 243.928878784s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:05:06 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 23 04:05:06 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 105 pg[9.12( v 53'1142 (0'0,53'1142] local-lis/les=103/104 n=5 ec=57/47 lis/c=103/57 les/c/f=104/59/0 sis=105 pruub=14.773596764s) [1] r=-1 lpr=105 pi=[57,105)/1 crt=53'1142 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 243.928878784s@ mbc={}] state<Start>: transitioning to Stray
Jan 23 04:05:06 np0005593232 podman[100855]: 2026-01-23 09:05:06.06628239 +0000 UTC m=+0.048472442 container create 4d9bfd5b1785fa102c3f8e29d5022cca123950336da13d5ee3839b2616ce2e91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_buck, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 23 04:05:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:06.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:06 np0005593232 systemd[76038]: Created slice User Background Tasks Slice.
Jan 23 04:05:06 np0005593232 systemd[76038]: Starting Cleanup of User's Temporary Files and Directories...
Jan 23 04:05:06 np0005593232 systemd[1]: Started libpod-conmon-4d9bfd5b1785fa102c3f8e29d5022cca123950336da13d5ee3839b2616ce2e91.scope.
Jan 23 04:05:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:05:06 np0005593232 systemd[76038]: Finished Cleanup of User's Temporary Files and Directories.
Jan 23 04:05:06 np0005593232 podman[100855]: 2026-01-23 09:05:06.124729089 +0000 UTC m=+0.106919171 container init 4d9bfd5b1785fa102c3f8e29d5022cca123950336da13d5ee3839b2616ce2e91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_buck, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 23 04:05:06 np0005593232 podman[100855]: 2026-01-23 09:05:06.136036854 +0000 UTC m=+0.118226906 container start 4d9bfd5b1785fa102c3f8e29d5022cca123950336da13d5ee3839b2616ce2e91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_buck, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:05:06 np0005593232 podman[100855]: 2026-01-23 09:05:06.040242474 +0000 UTC m=+0.022432546 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:05:06 np0005593232 hungry_buck[100872]: 167 167
Jan 23 04:05:06 np0005593232 systemd[1]: libpod-4d9bfd5b1785fa102c3f8e29d5022cca123950336da13d5ee3839b2616ce2e91.scope: Deactivated successfully.
Jan 23 04:05:06 np0005593232 podman[100855]: 2026-01-23 09:05:06.148584524 +0000 UTC m=+0.130774606 container attach 4d9bfd5b1785fa102c3f8e29d5022cca123950336da13d5ee3839b2616ce2e91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_buck, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 04:05:06 np0005593232 podman[100855]: 2026-01-23 09:05:06.149024246 +0000 UTC m=+0.131214298 container died 4d9bfd5b1785fa102c3f8e29d5022cca123950336da13d5ee3839b2616ce2e91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_buck, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 04:05:06 np0005593232 systemd[1]: var-lib-containers-storage-overlay-aa49561a1c5bcdb7637cae9cc938a06b491675e3c5cb87efe28542d3287adaf6-merged.mount: Deactivated successfully.
Jan 23 04:05:06 np0005593232 podman[100855]: 2026-01-23 09:05:06.188756454 +0000 UTC m=+0.170946506 container remove 4d9bfd5b1785fa102c3f8e29d5022cca123950336da13d5ee3839b2616ce2e91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_buck, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:05:06 np0005593232 systemd[1]: libpod-conmon-4d9bfd5b1785fa102c3f8e29d5022cca123950336da13d5ee3839b2616ce2e91.scope: Deactivated successfully.
Jan 23 04:05:06 np0005593232 podman[100894]: 2026-01-23 09:05:06.344251379 +0000 UTC m=+0.044649096 container create 09bae57a53007310456dd00c758b5630d279f5e711a490381f0482747aeb1e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_saha, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 23 04:05:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:05:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:05:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:05:06 np0005593232 systemd[1]: Started libpod-conmon-09bae57a53007310456dd00c758b5630d279f5e711a490381f0482747aeb1e6c.scope.
Jan 23 04:05:06 np0005593232 podman[100894]: 2026-01-23 09:05:06.324398335 +0000 UTC m=+0.024796082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:05:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:05:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e8b0d5d49e691d6e3cfcc12deb260fdffb026247d746542661e9ffcac5bef5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:05:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e8b0d5d49e691d6e3cfcc12deb260fdffb026247d746542661e9ffcac5bef5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:05:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e8b0d5d49e691d6e3cfcc12deb260fdffb026247d746542661e9ffcac5bef5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:05:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e8b0d5d49e691d6e3cfcc12deb260fdffb026247d746542661e9ffcac5bef5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:05:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56e8b0d5d49e691d6e3cfcc12deb260fdffb026247d746542661e9ffcac5bef5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:05:06 np0005593232 podman[100894]: 2026-01-23 09:05:06.4408028 +0000 UTC m=+0.141200547 container init 09bae57a53007310456dd00c758b5630d279f5e711a490381f0482747aeb1e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_saha, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 23 04:05:06 np0005593232 podman[100894]: 2026-01-23 09:05:06.450420359 +0000 UTC m=+0.150818066 container start 09bae57a53007310456dd00c758b5630d279f5e711a490381f0482747aeb1e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:05:06 np0005593232 podman[100894]: 2026-01-23 09:05:06.454007879 +0000 UTC m=+0.154405626 container attach 09bae57a53007310456dd00c758b5630d279f5e711a490381f0482747aeb1e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:05:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:06.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 23 04:05:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 23 04:05:07 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 23 04:05:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:05:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:05:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:05:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:05:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:05:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:05:07 np0005593232 boring_saha[100910]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:05:07 np0005593232 boring_saha[100910]: --> relative data size: 1.0
Jan 23 04:05:07 np0005593232 boring_saha[100910]: --> All data devices are unavailable
Jan 23 04:05:07 np0005593232 systemd[1]: libpod-09bae57a53007310456dd00c758b5630d279f5e711a490381f0482747aeb1e6c.scope: Deactivated successfully.
Jan 23 04:05:07 np0005593232 podman[100894]: 2026-01-23 09:05:07.318325653 +0000 UTC m=+1.018723400 container died 09bae57a53007310456dd00c758b5630d279f5e711a490381f0482747aeb1e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 04:05:07 np0005593232 systemd[1]: var-lib-containers-storage-overlay-56e8b0d5d49e691d6e3cfcc12deb260fdffb026247d746542661e9ffcac5bef5-merged.mount: Deactivated successfully.
Jan 23 04:05:07 np0005593232 podman[100894]: 2026-01-23 09:05:07.37344219 +0000 UTC m=+1.073839907 container remove 09bae57a53007310456dd00c758b5630d279f5e711a490381f0482747aeb1e6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_saha, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 04:05:07 np0005593232 systemd[1]: libpod-conmon-09bae57a53007310456dd00c758b5630d279f5e711a490381f0482747aeb1e6c.scope: Deactivated successfully.
Jan 23 04:05:07 np0005593232 podman[101079]: 2026-01-23 09:05:07.933131061 +0000 UTC m=+0.041994641 container create 7ce62eb1f1ab632f087a778d038b08da91399c38ee5eb95c2629aefd7f891235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:05:07 np0005593232 systemd[1]: Started libpod-conmon-7ce62eb1f1ab632f087a778d038b08da91399c38ee5eb95c2629aefd7f891235.scope.
Jan 23 04:05:07 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:05:08 np0005593232 podman[101079]: 2026-01-23 09:05:08.003822322 +0000 UTC m=+0.112685892 container init 7ce62eb1f1ab632f087a778d038b08da91399c38ee5eb95c2629aefd7f891235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Jan 23 04:05:08 np0005593232 podman[101079]: 2026-01-23 09:05:07.915090139 +0000 UTC m=+0.023953739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:05:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v264: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:05:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Jan 23 04:05:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 23 04:05:08 np0005593232 podman[101079]: 2026-01-23 09:05:08.011326851 +0000 UTC m=+0.120190431 container start 7ce62eb1f1ab632f087a778d038b08da91399c38ee5eb95c2629aefd7f891235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 04:05:08 np0005593232 podman[101079]: 2026-01-23 09:05:08.014921271 +0000 UTC m=+0.123784861 container attach 7ce62eb1f1ab632f087a778d038b08da91399c38ee5eb95c2629aefd7f891235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_archimedes, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 04:05:08 np0005593232 gracious_archimedes[101095]: 167 167
Jan 23 04:05:08 np0005593232 systemd[1]: libpod-7ce62eb1f1ab632f087a778d038b08da91399c38ee5eb95c2629aefd7f891235.scope: Deactivated successfully.
Jan 23 04:05:08 np0005593232 podman[101079]: 2026-01-23 09:05:08.018552583 +0000 UTC m=+0.127416163 container died 7ce62eb1f1ab632f087a778d038b08da91399c38ee5eb95c2629aefd7f891235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:05:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay-edbff2d690e71bf6af6bb90bb08f4ad870e1cab26e3e4516cd92d9e8d569542c-merged.mount: Deactivated successfully.
Jan 23 04:05:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:08.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:08 np0005593232 podman[101079]: 2026-01-23 09:05:08.072888287 +0000 UTC m=+0.181751857 container remove 7ce62eb1f1ab632f087a778d038b08da91399c38ee5eb95c2629aefd7f891235 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_archimedes, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 04:05:08 np0005593232 systemd[1]: libpod-conmon-7ce62eb1f1ab632f087a778d038b08da91399c38ee5eb95c2629aefd7f891235.scope: Deactivated successfully.
Jan 23 04:05:08 np0005593232 podman[101122]: 2026-01-23 09:05:08.223529967 +0000 UTC m=+0.043889855 container create a872deae975f22cbf6ac79e4469d1fa4094f7a02fbe559532b8f3b03fdf68404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_euclid, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:05:08 np0005593232 systemd[1]: Started libpod-conmon-a872deae975f22cbf6ac79e4469d1fa4094f7a02fbe559532b8f3b03fdf68404.scope.
Jan 23 04:05:08 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:05:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cea9f4f4f3a5d2939e04ec038343e7b4374f5a7f1c197927edf0e193afd7395b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:05:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cea9f4f4f3a5d2939e04ec038343e7b4374f5a7f1c197927edf0e193afd7395b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:05:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cea9f4f4f3a5d2939e04ec038343e7b4374f5a7f1c197927edf0e193afd7395b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:05:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cea9f4f4f3a5d2939e04ec038343e7b4374f5a7f1c197927edf0e193afd7395b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:05:08 np0005593232 podman[101122]: 2026-01-23 09:05:08.203597481 +0000 UTC m=+0.023957389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:05:08 np0005593232 podman[101122]: 2026-01-23 09:05:08.304441573 +0000 UTC m=+0.124801481 container init a872deae975f22cbf6ac79e4469d1fa4094f7a02fbe559532b8f3b03fdf68404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:05:08 np0005593232 podman[101122]: 2026-01-23 09:05:08.31760988 +0000 UTC m=+0.137969768 container start a872deae975f22cbf6ac79e4469d1fa4094f7a02fbe559532b8f3b03fdf68404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 04:05:08 np0005593232 podman[101122]: 2026-01-23 09:05:08.321120358 +0000 UTC m=+0.141480296 container attach a872deae975f22cbf6ac79e4469d1fa4094f7a02fbe559532b8f3b03fdf68404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 04:05:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 23 04:05:08 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 23 04:05:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:05:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:08.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:05:09 np0005593232 focused_euclid[101139]: {
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:    "0": [
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:        {
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:            "devices": [
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:                "/dev/loop3"
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:            ],
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:            "lv_name": "ceph_lv0",
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:            "lv_size": "7511998464",
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:            "name": "ceph_lv0",
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:            "tags": {
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:                "ceph.cluster_name": "ceph",
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:                "ceph.crush_device_class": "",
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:                "ceph.encrypted": "0",
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:                "ceph.osd_id": "0",
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:                "ceph.type": "block",
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:                "ceph.vdo": "0"
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:            },
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:            "type": "block",
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:            "vg_name": "ceph_vg0"
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:        }
Jan 23 04:05:09 np0005593232 focused_euclid[101139]:    ]
Jan 23 04:05:09 np0005593232 focused_euclid[101139]: }
Jan 23 04:05:09 np0005593232 systemd[1]: libpod-a872deae975f22cbf6ac79e4469d1fa4094f7a02fbe559532b8f3b03fdf68404.scope: Deactivated successfully.
Jan 23 04:05:09 np0005593232 podman[101122]: 2026-01-23 09:05:09.112380126 +0000 UTC m=+0.932740054 container died a872deae975f22cbf6ac79e4469d1fa4094f7a02fbe559532b8f3b03fdf68404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_euclid, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 04:05:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 23 04:05:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 23 04:05:09 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 23 04:05:09 np0005593232 systemd[1]: var-lib-containers-storage-overlay-cea9f4f4f3a5d2939e04ec038343e7b4374f5a7f1c197927edf0e193afd7395b-merged.mount: Deactivated successfully.
Jan 23 04:05:09 np0005593232 podman[101122]: 2026-01-23 09:05:09.16849613 +0000 UTC m=+0.988856018 container remove a872deae975f22cbf6ac79e4469d1fa4094f7a02fbe559532b8f3b03fdf68404 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 04:05:09 np0005593232 systemd[1]: libpod-conmon-a872deae975f22cbf6ac79e4469d1fa4094f7a02fbe559532b8f3b03fdf68404.scope: Deactivated successfully.
Jan 23 04:05:09 np0005593232 podman[101303]: 2026-01-23 09:05:09.698614318 +0000 UTC m=+0.034836542 container create 553c5196aa9ac1ae6c440e6797c1fc2b1d9ea9a6036c5c7195dd694b55c9cb42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ganguly, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Jan 23 04:05:09 np0005593232 systemd[1]: Started libpod-conmon-553c5196aa9ac1ae6c440e6797c1fc2b1d9ea9a6036c5c7195dd694b55c9cb42.scope.
Jan 23 04:05:09 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:05:09 np0005593232 podman[101303]: 2026-01-23 09:05:09.77867452 +0000 UTC m=+0.114896794 container init 553c5196aa9ac1ae6c440e6797c1fc2b1d9ea9a6036c5c7195dd694b55c9cb42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ganguly, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 23 04:05:09 np0005593232 podman[101303]: 2026-01-23 09:05:09.684458604 +0000 UTC m=+0.020680858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:05:09 np0005593232 podman[101303]: 2026-01-23 09:05:09.7847783 +0000 UTC m=+0.121000524 container start 553c5196aa9ac1ae6c440e6797c1fc2b1d9ea9a6036c5c7195dd694b55c9cb42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ganguly, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 23 04:05:09 np0005593232 dreamy_ganguly[101319]: 167 167
Jan 23 04:05:09 np0005593232 podman[101303]: 2026-01-23 09:05:09.787796564 +0000 UTC m=+0.124018838 container attach 553c5196aa9ac1ae6c440e6797c1fc2b1d9ea9a6036c5c7195dd694b55c9cb42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:05:09 np0005593232 systemd[1]: libpod-553c5196aa9ac1ae6c440e6797c1fc2b1d9ea9a6036c5c7195dd694b55c9cb42.scope: Deactivated successfully.
Jan 23 04:05:09 np0005593232 podman[101324]: 2026-01-23 09:05:09.828786407 +0000 UTC m=+0.026007876 container died 553c5196aa9ac1ae6c440e6797c1fc2b1d9ea9a6036c5c7195dd694b55c9cb42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ganguly, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:05:09 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1d0db18852603b5b54898066a5a1f5c0f820ac20654d5901a17fea3a75e937d6-merged.mount: Deactivated successfully.
Jan 23 04:05:09 np0005593232 podman[101324]: 2026-01-23 09:05:09.863570517 +0000 UTC m=+0.060792006 container remove 553c5196aa9ac1ae6c440e6797c1fc2b1d9ea9a6036c5c7195dd694b55c9cb42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_ganguly, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 04:05:09 np0005593232 systemd[1]: libpod-conmon-553c5196aa9ac1ae6c440e6797c1fc2b1d9ea9a6036c5c7195dd694b55c9cb42.scope: Deactivated successfully.
Jan 23 04:05:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v266: 321 pgs: 321 active+clean; 456 KiB data, 140 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:05:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Jan 23 04:05:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 23 04:05:10 np0005593232 podman[101346]: 2026-01-23 09:05:10.015744069 +0000 UTC m=+0.041653522 container create 844fd6a2410b363a43939f77e501e0451939b8a193810c002d620bed0768625d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bardeen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 23 04:05:10 np0005593232 systemd[1]: Started libpod-conmon-844fd6a2410b363a43939f77e501e0451939b8a193810c002d620bed0768625d.scope.
Jan 23 04:05:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:10.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:10 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:05:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeb121a6485fb6d170792761618aa1e66e0d71dd2f7bb2184fd22edb56958410/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:05:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeb121a6485fb6d170792761618aa1e66e0d71dd2f7bb2184fd22edb56958410/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:05:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeb121a6485fb6d170792761618aa1e66e0d71dd2f7bb2184fd22edb56958410/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:05:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeb121a6485fb6d170792761618aa1e66e0d71dd2f7bb2184fd22edb56958410/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:05:10 np0005593232 podman[101346]: 2026-01-23 09:05:09.995392332 +0000 UTC m=+0.021301795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:05:10 np0005593232 podman[101346]: 2026-01-23 09:05:10.097651962 +0000 UTC m=+0.123561395 container init 844fd6a2410b363a43939f77e501e0451939b8a193810c002d620bed0768625d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:05:10 np0005593232 podman[101346]: 2026-01-23 09:05:10.103109954 +0000 UTC m=+0.129019387 container start 844fd6a2410b363a43939f77e501e0451939b8a193810c002d620bed0768625d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bardeen, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 04:05:10 np0005593232 podman[101346]: 2026-01-23 09:05:10.107309571 +0000 UTC m=+0.133219014 container attach 844fd6a2410b363a43939f77e501e0451939b8a193810c002d620bed0768625d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:05:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 23 04:05:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 23 04:05:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 23 04:05:10 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 23 04:05:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 23 04:05:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 23 04:05:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:10.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:10 np0005593232 hungry_bardeen[101363]: {
Jan 23 04:05:10 np0005593232 hungry_bardeen[101363]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:05:10 np0005593232 hungry_bardeen[101363]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:05:10 np0005593232 hungry_bardeen[101363]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:05:10 np0005593232 hungry_bardeen[101363]:        "osd_id": 0,
Jan 23 04:05:10 np0005593232 hungry_bardeen[101363]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:05:10 np0005593232 hungry_bardeen[101363]:        "type": "bluestore"
Jan 23 04:05:10 np0005593232 hungry_bardeen[101363]:    }
Jan 23 04:05:10 np0005593232 hungry_bardeen[101363]: }
Jan 23 04:05:10 np0005593232 systemd[1]: libpod-844fd6a2410b363a43939f77e501e0451939b8a193810c002d620bed0768625d.scope: Deactivated successfully.
Jan 23 04:05:10 np0005593232 conmon[101363]: conmon 844fd6a2410b363a4393 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-844fd6a2410b363a43939f77e501e0451939b8a193810c002d620bed0768625d.scope/container/memory.events
Jan 23 04:05:10 np0005593232 podman[101346]: 2026-01-23 09:05:10.961484083 +0000 UTC m=+0.987393526 container died 844fd6a2410b363a43939f77e501e0451939b8a193810c002d620bed0768625d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:05:11 np0005593232 systemd[1]: var-lib-containers-storage-overlay-aeb121a6485fb6d170792761618aa1e66e0d71dd2f7bb2184fd22edb56958410-merged.mount: Deactivated successfully.
Jan 23 04:05:11 np0005593232 podman[101346]: 2026-01-23 09:05:11.196010371 +0000 UTC m=+1.221919804 container remove 844fd6a2410b363a43939f77e501e0451939b8a193810c002d620bed0768625d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bardeen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 04:05:11 np0005593232 systemd[1]: libpod-conmon-844fd6a2410b363a43939f77e501e0451939b8a193810c002d620bed0768625d.scope: Deactivated successfully.
Jan 23 04:05:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:05:11 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 23 04:05:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:05:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:05:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:05:11 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 9b92a2c0-e244-49a7-837a-2d08cedd1cfb does not exist
Jan 23 04:05:11 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 03143369-70fa-4fc0-96dc-582691e902c3 does not exist
Jan 23 04:05:11 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 71754909-654c-4328-b404-e1c92fb67d5f does not exist
Jan 23 04:05:11 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 23 04:05:11 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 23 04:05:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v268: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Jan 23 04:05:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Jan 23 04:05:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 23 04:05:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:05:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:12.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:05:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 23 04:05:12 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:05:12 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:05:12 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 23 04:05:12 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 23 04:05:12 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 23 04:05:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 23 04:05:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 23 04:05:12 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 23 04:05:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:12.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 23 04:05:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 23 04:05:13 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 23 04:05:13 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 23 04:05:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:05:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v271: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 22 B/s, 0 objects/s recovering
Jan 23 04:05:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Jan 23 04:05:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 23 04:05:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:14.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:14 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.e deep-scrub starts
Jan 23 04:05:14 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 7.e deep-scrub ok
Jan 23 04:05:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 23 04:05:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:14.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:14 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 23 04:05:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 23 04:05:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 23 04:05:15 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 23 04:05:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v273: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 18 B/s, 0 objects/s recovering
Jan 23 04:05:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:05:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:16.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:05:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 23 04:05:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 23 04:05:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 23 04:05:16 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 23 04:05:16 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 23 04:05:16 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 23 04:05:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:16.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 23 04:05:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 23 04:05:17 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 23 04:05:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v276: 321 pgs: 1 remapped+peering, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:05:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:05:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:18.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:05:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 23 04:05:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 23 04:05:18 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 23 04:05:18 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 23 04:05:18 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 23 04:05:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:05:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:05:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:18.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:05:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 23 04:05:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 23 04:05:19 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 23 04:05:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v279: 321 pgs: 1 peering, 1 remapped+peering, 319 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 0 objects/s recovering
Jan 23 04:05:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:20.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:20 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 23 04:05:20 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 23 04:05:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:20.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:20 np0005593232 systemd-logind[808]: New session 35 of user zuul.
Jan 23 04:05:20 np0005593232 systemd[1]: Started Session 35 of User zuul.
Jan 23 04:05:21 np0005593232 python3.9[101659]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 23 04:05:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v280: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 39 B/s, 1 objects/s recovering
Jan 23 04:05:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:22.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:22.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:23 np0005593232 python3.9[101834]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:05:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:05:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v281: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 33 B/s, 1 objects/s recovering
Jan 23 04:05:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:24.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:24.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:25 np0005593232 python3.9[101991]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:05:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v282: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 0 objects/s recovering
Jan 23 04:05:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Jan 23 04:05:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 23 04:05:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:05:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:26.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:05:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 23 04:05:26 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 23 04:05:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 23 04:05:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 23 04:05:26 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 23 04:05:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:26.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:27 np0005593232 python3.9[102145]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:05:27 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 23 04:05:27 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 23 04:05:27 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 23 04:05:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v284: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 12 B/s, 0 objects/s recovering
Jan 23 04:05:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Jan 23 04:05:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 23 04:05:28 np0005593232 python3.9[102299]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:05:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:28.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 23 04:05:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 23 04:05:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 23 04:05:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 23 04:05:28 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 23 04:05:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:05:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:05:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:28.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:05:29 np0005593232 python3.9[102452]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:05:29 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 23 04:05:29 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 23 04:05:29 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 23 04:05:30 np0005593232 python3.9[102602]: ansible-ansible.builtin.service_facts Invoked
Jan 23 04:05:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v286: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:05:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Jan 23 04:05:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 23 04:05:30 np0005593232 network[102619]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 04:05:30 np0005593232 network[102620]: 'network-scripts' will be removed from distribution in near future.
Jan 23 04:05:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:05:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:30.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:05:30 np0005593232 network[102621]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 04:05:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 23 04:05:30 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 23 04:05:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 23 04:05:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 23 04:05:30 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 23 04:05:30 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 118 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=86/86 les/c/f=87/87/0 sis=118) [0] r=0 lpr=118 pi=[86,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:05:30 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 23 04:05:30 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 23 04:05:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:30.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:31 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 23 04:05:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 23 04:05:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 23 04:05:31 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 23 04:05:31 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 119 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=86/86 les/c/f=87/87/0 sis=119) [0]/[2] r=-1 lpr=119 pi=[86,119)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:05:31 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 119 pg[9.19( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=86/86 les/c/f=87/87/0 sis=119) [0]/[2] r=-1 lpr=119 pi=[86,119)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 04:05:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v289: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:05:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Jan 23 04:05:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 23 04:05:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:32.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 23 04:05:32 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 23 04:05:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 23 04:05:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 23 04:05:32 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 23 04:05:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:32.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:33 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 23 04:05:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 23 04:05:33 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 23 04:05:33 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 23 04:05:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 23 04:05:33 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 23 04:05:33 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 121 pg[9.1a( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=86/86 les/c/f=87/87/0 sis=120) [0] r=0 lpr=121 pi=[86,120)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:05:33 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 121 pg[9.19( v 53'1142 (0'0,53'1142] local-lis/les=0/0 n=5 ec=57/47 lis/c=119/86 les/c/f=120/87/0 sis=121) [0] r=0 lpr=121 pi=[86,121)/1 luod=0'0 crt=53'1142 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:05:33 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 121 pg[9.19( v 53'1142 (0'0,53'1142] local-lis/les=0/0 n=5 ec=57/47 lis/c=119/86 les/c/f=120/87/0 sis=121) [0] r=0 lpr=121 pi=[86,121)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:05:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:05:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 23 04:05:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 23 04:05:33 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 23 04:05:33 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 122 pg[9.1a( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=86/86 les/c/f=87/87/0 sis=122) [0]/[1] r=-1 lpr=122 pi=[86,122)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:05:33 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 122 pg[9.1a( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=86/86 les/c/f=87/87/0 sis=122) [0]/[1] r=-1 lpr=122 pi=[86,122)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 04:05:33 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 122 pg[9.19( v 53'1142 (0'0,53'1142] local-lis/les=121/122 n=5 ec=57/47 lis/c=119/86 les/c/f=120/87/0 sis=121) [0] r=0 lpr=121 pi=[86,121)/1 crt=53'1142 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:05:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v293: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:05:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Jan 23 04:05:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 23 04:05:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000056s ======
Jan 23 04:05:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:34.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Jan 23 04:05:34 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 23 04:05:34 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 23 04:05:34 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 23 04:05:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Jan 23 04:05:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 23 04:05:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Jan 23 04:05:34 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Jan 23 04:05:34 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 123 pg[9.1b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=67/67 les/c/f=68/68/0 sis=123) [0] r=0 lpr=123 pi=[67,123)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:05:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:34.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:35 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 23 04:05:35 np0005593232 python3.9[102884]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:05:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Jan 23 04:05:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Jan 23 04:05:35 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Jan 23 04:05:35 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 124 pg[9.1b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=67/67 les/c/f=68/68/0 sis=124) [0]/[2] r=-1 lpr=124 pi=[67,124)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:05:35 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 124 pg[9.1a( v 53'1142 (0'0,53'1142] local-lis/les=0/0 n=5 ec=57/47 lis/c=122/86 les/c/f=123/87/0 sis=124) [0] r=0 lpr=124 pi=[86,124)/1 luod=0'0 crt=53'1142 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:05:35 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 124 pg[9.1b( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=67/67 les/c/f=68/68/0 sis=124) [0]/[2] r=-1 lpr=124 pi=[67,124)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 04:05:35 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 124 pg[9.1a( v 53'1142 (0'0,53'1142] local-lis/les=0/0 n=5 ec=57/47 lis/c=122/86 les/c/f=123/87/0 sis=124) [0] r=0 lpr=124 pi=[86,124)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:05:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v296: 321 pgs: 1 active+clean+scrubbing, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:05:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Jan 23 04:05:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 23 04:05:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:36.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:36 np0005593232 python3.9[103034]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:05:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Jan 23 04:05:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:36.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 23 04:05:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Jan 23 04:05:36 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 23 04:05:36 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Jan 23 04:05:36 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 125 pg[9.1a( v 53'1142 (0'0,53'1142] local-lis/les=124/125 n=5 ec=57/47 lis/c=122/86 les/c/f=123/87/0 sis=124) [0] r=0 lpr=124 pi=[86,124)/1 crt=53'1142 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:05:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:05:37
Jan 23 04:05:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:05:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:05:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'images', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', '.mgr', 'backups', 'default.rgw.control']
Jan 23 04:05:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:05:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:05:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:05:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:05:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:05:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:05:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:05:37 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 23 04:05:37 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 23 04:05:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:05:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:05:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:05:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:05:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:05:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:05:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:05:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:05:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:05:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:05:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Jan 23 04:05:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 23 04:05:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Jan 23 04:05:37 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Jan 23 04:05:37 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 126 pg[9.1b( v 53'1142 (0'0,53'1142] local-lis/les=0/0 n=5 ec=57/47 lis/c=124/67 les/c/f=125/68/0 sis=126) [0] r=0 lpr=126 pi=[67,126)/1 luod=0'0 crt=53'1142 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:05:37 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 126 pg[9.1b( v 53'1142 (0'0,53'1142] local-lis/les=0/0 n=5 ec=57/47 lis/c=124/67 les/c/f=125/68/0 sis=126) [0] r=0 lpr=126 pi=[67,126)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:05:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v299: 321 pgs: 1 active+clean+scrubbing, 320 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Jan 23 04:05:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Jan 23 04:05:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 23 04:05:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:05:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:38.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:05:38 np0005593232 python3.9[103212]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:05:38 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 23 04:05:38 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 23 04:05:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:05:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:38.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Jan 23 04:05:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 23 04:05:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 23 04:05:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Jan 23 04:05:39 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Jan 23 04:05:39 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 127 pg[9.1b( v 53'1142 (0'0,53'1142] local-lis/les=126/127 n=5 ec=57/47 lis/c=124/67 les/c/f=125/68/0 sis=126) [0] r=0 lpr=126 pi=[67,126)/1 crt=53'1142 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:05:39 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 3.b deep-scrub starts
Jan 23 04:05:39 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 3.b deep-scrub ok
Jan 23 04:05:39 np0005593232 python3.9[103398]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 04:05:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v301: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 53 B/s, 2 objects/s recovering
Jan 23 04:05:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Jan 23 04:05:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 23 04:05:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:40.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Jan 23 04:05:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 23 04:05:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 23 04:05:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Jan 23 04:05:40 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 23 04:05:40 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 128 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=74/74 les/c/f=75/75/0 sis=128) [0] r=0 lpr=128 pi=[74,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:05:40 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 23 04:05:40 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 23 04:05:40 np0005593232 python3.9[103482]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 04:05:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:40.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:41 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 23 04:05:41 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 23 04:05:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Jan 23 04:05:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Jan 23 04:05:41 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Jan 23 04:05:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=74/74 les/c/f=75/75/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[74,129)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:05:41 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=74/74 les/c/f=75/75/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[74,129)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 04:05:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v304: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 26 B/s, 0 objects/s recovering
Jan 23 04:05:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Jan 23 04:05:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 04:05:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:05:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:42.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:05:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Jan 23 04:05:42 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 23 04:05:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 04:05:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Jan 23 04:05:42 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Jan 23 04:05:42 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 130 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=96/96 les/c/f=97/97/0 sis=130) [0] r=0 lpr=130 pi=[96,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:05:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:05:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:42.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:05:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 23 04:05:43 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 23 04:05:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 23 04:05:43 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 23 04:05:43 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 131 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=96/96 les/c/f=97/97/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[96,131)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:05:43 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 131 pg[9.1f( empty local-lis/les=0/0 n=0 ec=57/47 lis/c=96/96 les/c/f=97/97/0 sis=131) [0]/[1] r=-1 lpr=131 pi=[96,131)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 23 04:05:43 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 131 pg[9.1e( v 53'1142 (0'0,53'1142] local-lis/les=0/0 n=5 ec=57/47 lis/c=129/74 les/c/f=130/75/0 sis=131) [0] r=0 lpr=131 pi=[74,131)/1 luod=0'0 crt=53'1142 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:05:43 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 131 pg[9.1e( v 53'1142 (0'0,53'1142] local-lis/les=0/0 n=5 ec=57/47 lis/c=129/74 les/c/f=130/75/0 sis=131) [0] r=0 lpr=131 pi=[74,131)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:05:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:05:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v307: 321 pgs: 321 active+clean; 456 KiB data, 144 MiB used, 21 GiB / 21 GiB avail; 27 B/s, 0 objects/s recovering
Jan 23 04:05:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:05:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:44.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:05:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 23 04:05:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 23 04:05:44 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 23 04:05:44 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 132 pg[9.1e( v 53'1142 (0'0,53'1142] local-lis/les=131/132 n=5 ec=57/47 lis/c=129/74 les/c/f=130/75/0 sis=131) [0] r=0 lpr=131 pi=[74,131)/1 crt=53'1142 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:05:44 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 23 04:05:44 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 23 04:05:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:44.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 23 04:05:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 23 04:05:45 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 23 04:05:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 133 pg[9.1f( v 53'1142 (0'0,53'1142] local-lis/les=0/0 n=5 ec=57/47 lis/c=131/96 les/c/f=132/97/0 sis=133) [0] r=0 lpr=133 pi=[96,133)/1 luod=0'0 crt=53'1142 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 23 04:05:45 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 133 pg[9.1f( v 53'1142 (0'0,53'1142] local-lis/les=0/0 n=5 ec=57/47 lis/c=131/96 les/c/f=132/97/0 sis=133) [0] r=0 lpr=133 pi=[96,133)/1 crt=53'1142 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 23 04:05:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v310: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 1 objects/s recovering
Jan 23 04:05:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:05:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:46.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.543132328308208e-06 of space, bias 1.0, pg target 0.001962939698492462 quantized to 32 (current 32)
Jan 23 04:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:05:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 23 04:05:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:46.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 23 04:05:46 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 23 04:05:46 np0005593232 ceph-osd[85010]: osd.0 pg_epoch: 134 pg[9.1f( v 53'1142 (0'0,53'1142] local-lis/les=133/134 n=5 ec=57/47 lis/c=131/96 les/c/f=132/97/0 sis=133) [0] r=0 lpr=133 pi=[96,133)/1 crt=53'1142 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 23 04:05:47 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 23 04:05:47 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 23 04:05:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v312: 321 pgs: 2 peering, 319 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 70 B/s, 3 objects/s recovering
Jan 23 04:05:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000056s ======
Jan 23 04:05:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:48.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Jan 23 04:05:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:05:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:05:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:48.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:05:49 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 23 04:05:49 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 23 04:05:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v313: 321 pgs: 1 peering, 320 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 54 B/s, 2 objects/s recovering
Jan 23 04:05:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:05:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:50.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:05:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:50.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:51 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 23 04:05:51 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 23 04:05:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v314: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 14 B/s, 1 objects/s recovering
Jan 23 04:05:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:05:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:52.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:05:52 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 23 04:05:52 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 23 04:05:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:52.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:53 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 23 04:05:53 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 23 04:05:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:05:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v315: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 12 B/s, 1 objects/s recovering
Jan 23 04:05:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:54.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:54 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 8.e deep-scrub starts
Jan 23 04:05:54 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 8.e deep-scrub ok
Jan 23 04:05:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:54.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v316: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 10 B/s, 0 objects/s recovering
Jan 23 04:05:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:05:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:56.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:05:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:56.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:57 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 23 04:05:57 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 23 04:05:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v317: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:05:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:05:58.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:05:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:05:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:05:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:05:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:05:58.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v318: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:06:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:00.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:06:00 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 23 04:06:00 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 23 04:06:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:00.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v319: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:02.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:06:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:02.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:06:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:06:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v320: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:04.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:04.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:05 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 23 04:06:05 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 23 04:06:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v321: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:06:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:06.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:06:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:06.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:06:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:06:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:06:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:06:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:06:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:06:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v322: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:08.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:06:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:08.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v323: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:10.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:10.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v324: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:06:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:12.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:06:12 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 23 04:06:12 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 23 04:06:12 np0005593232 podman[103870]: 2026-01-23 09:06:12.648072404 +0000 UTC m=+0.056829060 container exec 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:06:12 np0005593232 podman[103870]: 2026-01-23 09:06:12.778356076 +0000 UTC m=+0.187112702 container exec_died 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:06:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:06:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:12.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:06:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:06:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v325: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:14 np0005593232 podman[104025]: 2026-01-23 09:06:14.053219856 +0000 UTC m=+0.691103683 container exec b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 04:06:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:06:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:14.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:14 np0005593232 podman[104025]: 2026-01-23 09:06:14.236293175 +0000 UTC m=+0.874177002 container exec_died b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 04:06:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:06:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:06:14 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 23 04:06:14 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 23 04:06:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:06:14 np0005593232 podman[104097]: 2026-01-23 09:06:14.807887475 +0000 UTC m=+0.078279417 container exec 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, description=keepalived for Ceph, distribution-scope=public, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20)
Jan 23 04:06:14 np0005593232 podman[104097]: 2026-01-23 09:06:14.820388212 +0000 UTC m=+0.090780124 container exec_died 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, description=keepalived for Ceph, vcs-type=git, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, distribution-scope=public, version=2.2.4, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., architecture=x86_64)
Jan 23 04:06:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:06:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:14.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:14 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:06:14 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:06:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:06:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:06:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:06:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:06:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:06:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:06:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:06:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:06:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:06:16 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 09c373a2-79a6-4770-b2e0-4e6330fece97 does not exist
Jan 23 04:06:16 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 0619ad1c-9f57-48fa-97b8-ecc4802fee1e does not exist
Jan 23 04:06:16 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1cdc8f5c-bfd5-4998-baf4-7a8c5456bcf5 does not exist
Jan 23 04:06:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:06:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:06:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:06:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:06:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:06:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:06:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:06:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:06:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:06:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v326: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:16.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:16 np0005593232 podman[104401]: 2026-01-23 09:06:16.609587219 +0000 UTC m=+0.045467294 container create 1547c9250f27e29ebc13001bbe1803d05c0e147693e3b04658cb7fbcf63d5de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_joliot, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 04:06:16 np0005593232 systemd[1]: Started libpod-conmon-1547c9250f27e29ebc13001bbe1803d05c0e147693e3b04658cb7fbcf63d5de3.scope.
Jan 23 04:06:16 np0005593232 podman[104401]: 2026-01-23 09:06:16.593786791 +0000 UTC m=+0.029666866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:06:16 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:06:16 np0005593232 podman[104401]: 2026-01-23 09:06:16.728102724 +0000 UTC m=+0.163982859 container init 1547c9250f27e29ebc13001bbe1803d05c0e147693e3b04658cb7fbcf63d5de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 23 04:06:16 np0005593232 podman[104401]: 2026-01-23 09:06:16.742586717 +0000 UTC m=+0.178466802 container start 1547c9250f27e29ebc13001bbe1803d05c0e147693e3b04658cb7fbcf63d5de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 04:06:16 np0005593232 elastic_joliot[104417]: 167 167
Jan 23 04:06:16 np0005593232 systemd[1]: libpod-1547c9250f27e29ebc13001bbe1803d05c0e147693e3b04658cb7fbcf63d5de3.scope: Deactivated successfully.
Jan 23 04:06:16 np0005593232 podman[104401]: 2026-01-23 09:06:16.805509156 +0000 UTC m=+0.241389251 container attach 1547c9250f27e29ebc13001bbe1803d05c0e147693e3b04658cb7fbcf63d5de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 04:06:16 np0005593232 podman[104401]: 2026-01-23 09:06:16.806999038 +0000 UTC m=+0.242879133 container died 1547c9250f27e29ebc13001bbe1803d05c0e147693e3b04658cb7fbcf63d5de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_joliot, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 23 04:06:16 np0005593232 systemd[1]: var-lib-containers-storage-overlay-0af10c0950f6d15404267720c88eb5541ad296485ceea298edf2b62bd7006734-merged.mount: Deactivated successfully.
Jan 23 04:06:16 np0005593232 podman[104401]: 2026-01-23 09:06:16.85637286 +0000 UTC m=+0.292252935 container remove 1547c9250f27e29ebc13001bbe1803d05c0e147693e3b04658cb7fbcf63d5de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_joliot, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:06:16 np0005593232 systemd[1]: libpod-conmon-1547c9250f27e29ebc13001bbe1803d05c0e147693e3b04658cb7fbcf63d5de3.scope: Deactivated successfully.
Jan 23 04:06:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:06:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:16.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:06:17 np0005593232 podman[104440]: 2026-01-23 09:06:17.001248608 +0000 UTC m=+0.038583144 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:06:17 np0005593232 podman[104440]: 2026-01-23 09:06:17.158805858 +0000 UTC m=+0.196140374 container create ad6e69374d6ac8c6a63fa2275dcf060e7eb95858a5a015f1716b1e453f66742d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_nash, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 23 04:06:17 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:06:17 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:06:17 np0005593232 systemd[1]: Started libpod-conmon-ad6e69374d6ac8c6a63fa2275dcf060e7eb95858a5a015f1716b1e453f66742d.scope.
Jan 23 04:06:17 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:06:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb9888f85bfcb4560fae2dbff7c9954f52fd59faf2ebd48d6fa65dd87d0fd910/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:06:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb9888f85bfcb4560fae2dbff7c9954f52fd59faf2ebd48d6fa65dd87d0fd910/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:06:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb9888f85bfcb4560fae2dbff7c9954f52fd59faf2ebd48d6fa65dd87d0fd910/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:06:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb9888f85bfcb4560fae2dbff7c9954f52fd59faf2ebd48d6fa65dd87d0fd910/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:06:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb9888f85bfcb4560fae2dbff7c9954f52fd59faf2ebd48d6fa65dd87d0fd910/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:06:17 np0005593232 podman[104440]: 2026-01-23 09:06:17.311679238 +0000 UTC m=+0.349013814 container init ad6e69374d6ac8c6a63fa2275dcf060e7eb95858a5a015f1716b1e453f66742d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_nash, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 04:06:17 np0005593232 podman[104440]: 2026-01-23 09:06:17.320553844 +0000 UTC m=+0.357888370 container start ad6e69374d6ac8c6a63fa2275dcf060e7eb95858a5a015f1716b1e453f66742d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_nash, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:06:17 np0005593232 podman[104440]: 2026-01-23 09:06:17.334959064 +0000 UTC m=+0.372293630 container attach ad6e69374d6ac8c6a63fa2275dcf060e7eb95858a5a015f1716b1e453f66742d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 23 04:06:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v327: 321 pgs: 321 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:18.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:18 np0005593232 gallant_nash[104457]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:06:18 np0005593232 gallant_nash[104457]: --> relative data size: 1.0
Jan 23 04:06:18 np0005593232 gallant_nash[104457]: --> All data devices are unavailable
Jan 23 04:06:18 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 23 04:06:18 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 23 04:06:18 np0005593232 systemd[1]: libpod-ad6e69374d6ac8c6a63fa2275dcf060e7eb95858a5a015f1716b1e453f66742d.scope: Deactivated successfully.
Jan 23 04:06:18 np0005593232 podman[104440]: 2026-01-23 09:06:18.286719442 +0000 UTC m=+1.324053958 container died ad6e69374d6ac8c6a63fa2275dcf060e7eb95858a5a015f1716b1e453f66742d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_nash, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 04:06:18 np0005593232 systemd[1]: var-lib-containers-storage-overlay-bb9888f85bfcb4560fae2dbff7c9954f52fd59faf2ebd48d6fa65dd87d0fd910-merged.mount: Deactivated successfully.
Jan 23 04:06:18 np0005593232 podman[104440]: 2026-01-23 09:06:18.345088185 +0000 UTC m=+1.382422701 container remove ad6e69374d6ac8c6a63fa2275dcf060e7eb95858a5a015f1716b1e453f66742d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_nash, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 04:06:18 np0005593232 systemd[1]: libpod-conmon-ad6e69374d6ac8c6a63fa2275dcf060e7eb95858a5a015f1716b1e453f66742d.scope: Deactivated successfully.
Jan 23 04:06:18 np0005593232 podman[104680]: 2026-01-23 09:06:18.946580186 +0000 UTC m=+0.038377438 container create 5248fee912a3db6de5a26b03233da523715805140dfeb05895be2f8ef0c7e70a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_knuth, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 23 04:06:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:18.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:18 np0005593232 systemd[1]: Started libpod-conmon-5248fee912a3db6de5a26b03233da523715805140dfeb05895be2f8ef0c7e70a.scope.
Jan 23 04:06:19 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:06:19 np0005593232 podman[104680]: 2026-01-23 09:06:19.022711032 +0000 UTC m=+0.114508304 container init 5248fee912a3db6de5a26b03233da523715805140dfeb05895be2f8ef0c7e70a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_knuth, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 04:06:19 np0005593232 podman[104680]: 2026-01-23 09:06:18.931296301 +0000 UTC m=+0.023093563 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:06:19 np0005593232 podman[104680]: 2026-01-23 09:06:19.029912182 +0000 UTC m=+0.121709434 container start 5248fee912a3db6de5a26b03233da523715805140dfeb05895be2f8ef0c7e70a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_knuth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 23 04:06:19 np0005593232 podman[104680]: 2026-01-23 09:06:19.03305949 +0000 UTC m=+0.124856762 container attach 5248fee912a3db6de5a26b03233da523715805140dfeb05895be2f8ef0c7e70a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_knuth, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:06:19 np0005593232 trusting_knuth[104695]: 167 167
Jan 23 04:06:19 np0005593232 systemd[1]: libpod-5248fee912a3db6de5a26b03233da523715805140dfeb05895be2f8ef0c7e70a.scope: Deactivated successfully.
Jan 23 04:06:19 np0005593232 podman[104680]: 2026-01-23 09:06:19.035091336 +0000 UTC m=+0.126888588 container died 5248fee912a3db6de5a26b03233da523715805140dfeb05895be2f8ef0c7e70a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_knuth, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 04:06:19 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e0ffb73b9a8ee4b0b5294e2c02dc07cd69bc0a13f4f62438739ccbf375d1ecaa-merged.mount: Deactivated successfully.
Jan 23 04:06:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:06:19 np0005593232 podman[104680]: 2026-01-23 09:06:19.07155573 +0000 UTC m=+0.163352982 container remove 5248fee912a3db6de5a26b03233da523715805140dfeb05895be2f8ef0c7e70a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:06:19 np0005593232 systemd[1]: libpod-conmon-5248fee912a3db6de5a26b03233da523715805140dfeb05895be2f8ef0c7e70a.scope: Deactivated successfully.
Jan 23 04:06:19 np0005593232 podman[104721]: 2026-01-23 09:06:19.255539703 +0000 UTC m=+0.056921633 container create 218e16c7449e011de0c4f2386326d80148d175b39e75b670498e424555da2a42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_black, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Jan 23 04:06:19 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 23 04:06:19 np0005593232 systemd[1]: Started libpod-conmon-218e16c7449e011de0c4f2386326d80148d175b39e75b670498e424555da2a42.scope.
Jan 23 04:06:19 np0005593232 podman[104721]: 2026-01-23 09:06:19.231199567 +0000 UTC m=+0.032581537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:06:19 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:06:19 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 23 04:06:19 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3583dadbc46db8a500121f455caaf4e2296b7617ff59de48cfabdbc12b0e88a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:06:19 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3583dadbc46db8a500121f455caaf4e2296b7617ff59de48cfabdbc12b0e88a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:06:19 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3583dadbc46db8a500121f455caaf4e2296b7617ff59de48cfabdbc12b0e88a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:06:19 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3583dadbc46db8a500121f455caaf4e2296b7617ff59de48cfabdbc12b0e88a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:06:19 np0005593232 podman[104721]: 2026-01-23 09:06:19.352763116 +0000 UTC m=+0.154145086 container init 218e16c7449e011de0c4f2386326d80148d175b39e75b670498e424555da2a42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_black, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:06:19 np0005593232 podman[104721]: 2026-01-23 09:06:19.360274595 +0000 UTC m=+0.161656575 container start 218e16c7449e011de0c4f2386326d80148d175b39e75b670498e424555da2a42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:06:19 np0005593232 podman[104721]: 2026-01-23 09:06:19.365181341 +0000 UTC m=+0.166563291 container attach 218e16c7449e011de0c4f2386326d80148d175b39e75b670498e424555da2a42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Jan 23 04:06:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v328: 321 pgs: 321 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:20 np0005593232 jolly_black[104737]: {
Jan 23 04:06:20 np0005593232 jolly_black[104737]:    "0": [
Jan 23 04:06:20 np0005593232 jolly_black[104737]:        {
Jan 23 04:06:20 np0005593232 jolly_black[104737]:            "devices": [
Jan 23 04:06:20 np0005593232 jolly_black[104737]:                "/dev/loop3"
Jan 23 04:06:20 np0005593232 jolly_black[104737]:            ],
Jan 23 04:06:20 np0005593232 jolly_black[104737]:            "lv_name": "ceph_lv0",
Jan 23 04:06:20 np0005593232 jolly_black[104737]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:06:20 np0005593232 jolly_black[104737]:            "lv_size": "7511998464",
Jan 23 04:06:20 np0005593232 jolly_black[104737]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:06:20 np0005593232 jolly_black[104737]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:06:20 np0005593232 jolly_black[104737]:            "name": "ceph_lv0",
Jan 23 04:06:20 np0005593232 jolly_black[104737]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:06:20 np0005593232 jolly_black[104737]:            "tags": {
Jan 23 04:06:20 np0005593232 jolly_black[104737]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:06:20 np0005593232 jolly_black[104737]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:06:20 np0005593232 jolly_black[104737]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:06:20 np0005593232 jolly_black[104737]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:06:20 np0005593232 jolly_black[104737]:                "ceph.cluster_name": "ceph",
Jan 23 04:06:20 np0005593232 jolly_black[104737]:                "ceph.crush_device_class": "",
Jan 23 04:06:20 np0005593232 jolly_black[104737]:                "ceph.encrypted": "0",
Jan 23 04:06:20 np0005593232 jolly_black[104737]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:06:20 np0005593232 jolly_black[104737]:                "ceph.osd_id": "0",
Jan 23 04:06:20 np0005593232 jolly_black[104737]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:06:20 np0005593232 jolly_black[104737]:                "ceph.type": "block",
Jan 23 04:06:20 np0005593232 jolly_black[104737]:                "ceph.vdo": "0"
Jan 23 04:06:20 np0005593232 jolly_black[104737]:            },
Jan 23 04:06:20 np0005593232 jolly_black[104737]:            "type": "block",
Jan 23 04:06:20 np0005593232 jolly_black[104737]:            "vg_name": "ceph_vg0"
Jan 23 04:06:20 np0005593232 jolly_black[104737]:        }
Jan 23 04:06:20 np0005593232 jolly_black[104737]:    ]
Jan 23 04:06:20 np0005593232 jolly_black[104737]: }
Jan 23 04:06:20 np0005593232 systemd[1]: libpod-218e16c7449e011de0c4f2386326d80148d175b39e75b670498e424555da2a42.scope: Deactivated successfully.
Jan 23 04:06:20 np0005593232 podman[104721]: 2026-01-23 09:06:20.138838978 +0000 UTC m=+0.940220908 container died 218e16c7449e011de0c4f2386326d80148d175b39e75b670498e424555da2a42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_black, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 04:06:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:06:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:20.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:06:20 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f3583dadbc46db8a500121f455caaf4e2296b7617ff59de48cfabdbc12b0e88a-merged.mount: Deactivated successfully.
Jan 23 04:06:20 np0005593232 podman[104721]: 2026-01-23 09:06:20.200577364 +0000 UTC m=+1.001959294 container remove 218e16c7449e011de0c4f2386326d80148d175b39e75b670498e424555da2a42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_black, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 04:06:20 np0005593232 systemd[1]: libpod-conmon-218e16c7449e011de0c4f2386326d80148d175b39e75b670498e424555da2a42.scope: Deactivated successfully.
Jan 23 04:06:20 np0005593232 podman[104899]: 2026-01-23 09:06:20.901465268 +0000 UTC m=+0.038639495 container create bb477262014bef5e6ab165e11a93c1c9afef857bfe89b3a2e54f981e64039f04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_goldstine, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:06:20 np0005593232 systemd[1]: Started libpod-conmon-bb477262014bef5e6ab165e11a93c1c9afef857bfe89b3a2e54f981e64039f04.scope.
Jan 23 04:06:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:06:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:20.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:06:20 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:06:20 np0005593232 podman[104899]: 2026-01-23 09:06:20.980509026 +0000 UTC m=+0.117683253 container init bb477262014bef5e6ab165e11a93c1c9afef857bfe89b3a2e54f981e64039f04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 04:06:20 np0005593232 podman[104899]: 2026-01-23 09:06:20.884767354 +0000 UTC m=+0.021941611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:06:20 np0005593232 podman[104899]: 2026-01-23 09:06:20.98677021 +0000 UTC m=+0.123944437 container start bb477262014bef5e6ab165e11a93c1c9afef857bfe89b3a2e54f981e64039f04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_goldstine, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:06:20 np0005593232 podman[104899]: 2026-01-23 09:06:20.990167964 +0000 UTC m=+0.127342211 container attach bb477262014bef5e6ab165e11a93c1c9afef857bfe89b3a2e54f981e64039f04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_goldstine, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:06:20 np0005593232 trusting_goldstine[104916]: 167 167
Jan 23 04:06:20 np0005593232 systemd[1]: libpod-bb477262014bef5e6ab165e11a93c1c9afef857bfe89b3a2e54f981e64039f04.scope: Deactivated successfully.
Jan 23 04:06:20 np0005593232 podman[104899]: 2026-01-23 09:06:20.992808868 +0000 UTC m=+0.129983095 container died bb477262014bef5e6ab165e11a93c1c9afef857bfe89b3a2e54f981e64039f04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_goldstine, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 04:06:21 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1aeac40a8d165dd052374f42139f2e929b93b6f844dfb424e4cb875d4787d85a-merged.mount: Deactivated successfully.
Jan 23 04:06:21 np0005593232 podman[104899]: 2026-01-23 09:06:21.023542962 +0000 UTC m=+0.160717189 container remove bb477262014bef5e6ab165e11a93c1c9afef857bfe89b3a2e54f981e64039f04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_goldstine, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:06:21 np0005593232 systemd[1]: libpod-conmon-bb477262014bef5e6ab165e11a93c1c9afef857bfe89b3a2e54f981e64039f04.scope: Deactivated successfully.
Jan 23 04:06:21 np0005593232 podman[104940]: 2026-01-23 09:06:21.199466852 +0000 UTC m=+0.052156810 container create d15ff4a1acee2f92d6666e3b06ea6629292891496c589c868ff923f862245fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hofstadter, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:06:21 np0005593232 systemd[1]: Started libpod-conmon-d15ff4a1acee2f92d6666e3b06ea6629292891496c589c868ff923f862245fb3.scope.
Jan 23 04:06:21 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:06:21 np0005593232 podman[104940]: 2026-01-23 09:06:21.174271622 +0000 UTC m=+0.026961600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:06:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa52ed9405ec7e33126d64181ce71f63b55f19781ac84c6a9ef39682b4d8c1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:06:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa52ed9405ec7e33126d64181ce71f63b55f19781ac84c6a9ef39682b4d8c1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:06:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa52ed9405ec7e33126d64181ce71f63b55f19781ac84c6a9ef39682b4d8c1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:06:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daa52ed9405ec7e33126d64181ce71f63b55f19781ac84c6a9ef39682b4d8c1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:06:21 np0005593232 podman[104940]: 2026-01-23 09:06:21.282337656 +0000 UTC m=+0.135027634 container init d15ff4a1acee2f92d6666e3b06ea6629292891496c589c868ff923f862245fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hofstadter, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 04:06:21 np0005593232 podman[104940]: 2026-01-23 09:06:21.289952238 +0000 UTC m=+0.142642196 container start d15ff4a1acee2f92d6666e3b06ea6629292891496c589c868ff923f862245fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:06:21 np0005593232 podman[104940]: 2026-01-23 09:06:21.293368473 +0000 UTC m=+0.146058431 container attach d15ff4a1acee2f92d6666e3b06ea6629292891496c589c868ff923f862245fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hofstadter, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 04:06:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v329: 321 pgs: 321 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:22.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:22 np0005593232 nervous_hofstadter[104957]: {
Jan 23 04:06:22 np0005593232 nervous_hofstadter[104957]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:06:22 np0005593232 nervous_hofstadter[104957]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:06:22 np0005593232 nervous_hofstadter[104957]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:06:22 np0005593232 nervous_hofstadter[104957]:        "osd_id": 0,
Jan 23 04:06:22 np0005593232 nervous_hofstadter[104957]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:06:22 np0005593232 nervous_hofstadter[104957]:        "type": "bluestore"
Jan 23 04:06:22 np0005593232 nervous_hofstadter[104957]:    }
Jan 23 04:06:22 np0005593232 nervous_hofstadter[104957]: }
Jan 23 04:06:22 np0005593232 systemd[1]: libpod-d15ff4a1acee2f92d6666e3b06ea6629292891496c589c868ff923f862245fb3.scope: Deactivated successfully.
Jan 23 04:06:22 np0005593232 podman[104940]: 2026-01-23 09:06:22.213181213 +0000 UTC m=+1.065871171 container died d15ff4a1acee2f92d6666e3b06ea6629292891496c589c868ff923f862245fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hofstadter, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 23 04:06:22 np0005593232 systemd[1]: var-lib-containers-storage-overlay-daa52ed9405ec7e33126d64181ce71f63b55f19781ac84c6a9ef39682b4d8c1a-merged.mount: Deactivated successfully.
Jan 23 04:06:22 np0005593232 podman[104940]: 2026-01-23 09:06:22.283190929 +0000 UTC m=+1.135880907 container remove d15ff4a1acee2f92d6666e3b06ea6629292891496c589c868ff923f862245fb3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:06:22 np0005593232 systemd[1]: libpod-conmon-d15ff4a1acee2f92d6666e3b06ea6629292891496c589c868ff923f862245fb3.scope: Deactivated successfully.
Jan 23 04:06:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:06:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:06:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:06:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:06:22 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev fad4c2c1-e4cc-4256-b3d2-0e8ab48cd796 does not exist
Jan 23 04:06:22 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev becc9431-cc93-4a70-a4b9-d45f9f3324ea does not exist
Jan 23 04:06:22 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7216743a-d9bd-422c-a502-bdaadcab24fc does not exist
Jan 23 04:06:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:06:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:06:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:22.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v330: 321 pgs: 321 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:06:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:06:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:24.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:06:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:24.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:25 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 23 04:06:25 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 23 04:06:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v331: 321 pgs: 321 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:26.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:26 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 23 04:06:26 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 23 04:06:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:06:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:26.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:06:27 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 23 04:06:27 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 23 04:06:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v332: 321 pgs: 321 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:28.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:28 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 11.2 deep-scrub starts
Jan 23 04:06:28 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 11.2 deep-scrub ok
Jan 23 04:06:28 np0005593232 python3.9[105201]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:06:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:28.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:06:29 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 23 04:06:29 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 23 04:06:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v333: 321 pgs: 321 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:06:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:30.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:06:30 np0005593232 python3.9[105489]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 23 04:06:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:30.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:31 np0005593232 python3.9[105642]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 23 04:06:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v334: 321 pgs: 321 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:32.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:32 np0005593232 python3.9[105794]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:06:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:32.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:33 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 23 04:06:33 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 23 04:06:33 np0005593232 python3.9[105947]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 23 04:06:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v335: 321 pgs: 321 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:06:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:34.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:34.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:35 np0005593232 python3.9[106100]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:06:35 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 23 04:06:35 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 23 04:06:35 np0005593232 python3.9[106252]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:06:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v336: 321 pgs: 321 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:36.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:36 np0005593232 python3.9[106330]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:06:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:36.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:06:37
Jan 23 04:06:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:06:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:06:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'backups', 'vms', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta']
Jan 23 04:06:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:06:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:06:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:06:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:06:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:06:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:06:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:06:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:06:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:06:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:06:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:06:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:06:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:06:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:06:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:06:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:06:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:06:37 np0005593232 python3.9[106483]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:06:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v337: 321 pgs: 321 active+clean; 455 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:38.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:38.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:06:39 np0005593232 python3.9[106688]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 23 04:06:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v338: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:40 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 23 04:06:40 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 23 04:06:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:06:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:40.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:06:40 np0005593232 python3.9[106841]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 23 04:06:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:06:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:40.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:06:41 np0005593232 python3.9[106996]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 23 04:06:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v339: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:42.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:42 np0005593232 python3.9[107148]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 23 04:06:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:06:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:43.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:06:43 np0005593232 python3.9[107302]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 04:06:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v340: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:06:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:44.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:45.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:45 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 23 04:06:45 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 23 04:06:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v341: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:06:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:46.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:46 np0005593232 python3.9[107457]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:06:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:06:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:47.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:06:47 np0005593232 python3.9[107610]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:06:47 np0005593232 python3.9[107688]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:06:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v342: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:48 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 23 04:06:48 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 23 04:06:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:48.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:49.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:06:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v343: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:50.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:50 np0005593232 python3.9[107842]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:06:50 np0005593232 python3.9[107921]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:06:51 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 23 04:06:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:51.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:51 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 23 04:06:52 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 23 04:06:52 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 23 04:06:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v344: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:52.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:52 np0005593232 python3.9[108074]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 04:06:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:06:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:53.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:06:54 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 23 04:06:54 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 23 04:06:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v345: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:06:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:54.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:55 np0005593232 python3.9[108227]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:06:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:55.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:55 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 23 04:06:55 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 23 04:06:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v346: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:56 np0005593232 python3.9[108380]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 23 04:06:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:06:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:56.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:06:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:06:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:57.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:06:57 np0005593232 python3.9[108532]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:06:58 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 23 04:06:58 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 23 04:06:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v347: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:06:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:06:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:06:58.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:06:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:06:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:06:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:06:59.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:06:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:06:59 np0005593232 python3.9[108735]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:06:59 np0005593232 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 23 04:06:59 np0005593232 systemd[1]: tuned.service: Deactivated successfully.
Jan 23 04:06:59 np0005593232 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 23 04:06:59 np0005593232 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 23 04:06:59 np0005593232 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 23 04:07:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v348: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:07:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:00.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:07:00 np0005593232 python3.9[108898]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 23 04:07:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:07:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:01.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:07:01 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 23 04:07:01 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 23 04:07:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v349: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:02.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:03.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:03 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 23 04:07:03 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 23 04:07:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v350: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:07:04 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 10.8 deep-scrub starts
Jan 23 04:07:04 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 10.8 deep-scrub ok
Jan 23 04:07:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:07:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:04.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:07:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:05.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v351: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:06 np0005593232 python3.9[109052]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:07:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:06.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:06 np0005593232 python3.9[109206]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:07:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:07.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:07:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:07:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:07:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:07:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:07:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:07:07 np0005593232 systemd[1]: session-35.scope: Deactivated successfully.
Jan 23 04:07:07 np0005593232 systemd[1]: session-35.scope: Consumed 1min 6.825s CPU time.
Jan 23 04:07:07 np0005593232 systemd-logind[808]: Session 35 logged out. Waiting for processes to exit.
Jan 23 04:07:07 np0005593232 systemd-logind[808]: Removed session 35.
Jan 23 04:07:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v352: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:08 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 23 04:07:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:08.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:08 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 23 04:07:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:07:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:09.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:07:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:07:09 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 23 04:07:09 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 23 04:07:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v353: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:10.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:07:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:11.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:07:11 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 23 04:07:11 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 23 04:07:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v354: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:12.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:07:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:13.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:07:13 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Jan 23 04:07:13 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Jan 23 04:07:13 np0005593232 systemd-logind[808]: New session 36 of user zuul.
Jan 23 04:07:13 np0005593232 systemd[1]: Started Session 36 of User zuul.
Jan 23 04:07:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v355: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:07:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:07:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:14.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:07:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:07:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:15.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:07:15 np0005593232 python3.9[109393]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:07:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v356: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:16 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 23 04:07:16 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 23 04:07:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:16.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:07:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:17.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:07:17 np0005593232 python3.9[109550]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 23 04:07:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v357: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:18.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:18 np0005593232 python3.9[109704]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 04:07:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:19.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:07:19 np0005593232 python3.9[109840]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 23 04:07:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v358: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:20.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:07:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:21.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:07:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v359: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:22.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:22 np0005593232 python3.9[109994]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 04:07:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:23.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:24 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 23 04:07:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v360: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:07:24 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 23 04:07:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:24.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:07:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:07:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:07:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:07:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:07:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:25.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:07:25 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:07:25 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:07:25 np0005593232 python3.9[110280]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 04:07:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:07:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:07:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:07:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:07:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:07:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:07:25 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 497afbf5-e1a6-4308-a578-c74f053d3ec8 does not exist
Jan 23 04:07:25 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 872f0066-3586-4a44-98a2-2dd34de643c8 does not exist
Jan 23 04:07:25 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a3250bdf-6ea4-4eb6-a97f-9b689ec800dc does not exist
Jan 23 04:07:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:07:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:07:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:07:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:07:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:07:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:07:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v361: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:26.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:26 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:07:26 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:07:26 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:07:26 np0005593232 podman[110554]: 2026-01-23 09:07:26.287753517 +0000 UTC m=+0.049987852 container create 4cefa193442549cee40cf1f32053838fb2b57ec6341dbec507990dcc1d2471b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jennings, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 04:07:26 np0005593232 systemd[1]: Started libpod-conmon-4cefa193442549cee40cf1f32053838fb2b57ec6341dbec507990dcc1d2471b3.scope.
Jan 23 04:07:26 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:07:26 np0005593232 podman[110554]: 2026-01-23 09:07:26.265297502 +0000 UTC m=+0.027531867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:07:26 np0005593232 podman[110554]: 2026-01-23 09:07:26.381759505 +0000 UTC m=+0.143993840 container init 4cefa193442549cee40cf1f32053838fb2b57ec6341dbec507990dcc1d2471b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jennings, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Jan 23 04:07:26 np0005593232 podman[110554]: 2026-01-23 09:07:26.391222918 +0000 UTC m=+0.153457253 container start 4cefa193442549cee40cf1f32053838fb2b57ec6341dbec507990dcc1d2471b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 04:07:26 np0005593232 podman[110554]: 2026-01-23 09:07:26.394777467 +0000 UTC m=+0.157011802 container attach 4cefa193442549cee40cf1f32053838fb2b57ec6341dbec507990dcc1d2471b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:07:26 np0005593232 systemd[1]: libpod-4cefa193442549cee40cf1f32053838fb2b57ec6341dbec507990dcc1d2471b3.scope: Deactivated successfully.
Jan 23 04:07:26 np0005593232 quizzical_jennings[110590]: 167 167
Jan 23 04:07:26 np0005593232 podman[110554]: 2026-01-23 09:07:26.399286062 +0000 UTC m=+0.161520397 container died 4cefa193442549cee40cf1f32053838fb2b57ec6341dbec507990dcc1d2471b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jennings, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:07:26 np0005593232 conmon[110590]: conmon 4cefa193442549cee40c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4cefa193442549cee40cf1f32053838fb2b57ec6341dbec507990dcc1d2471b3.scope/container/memory.events
Jan 23 04:07:26 np0005593232 systemd[1]: var-lib-containers-storage-overlay-bf92675913df8506395fee10e43f38d65b8e9f7ac83f851e5324a4c2671b2f13-merged.mount: Deactivated successfully.
Jan 23 04:07:26 np0005593232 podman[110554]: 2026-01-23 09:07:26.446385144 +0000 UTC m=+0.208619479 container remove 4cefa193442549cee40cf1f32053838fb2b57ec6341dbec507990dcc1d2471b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 04:07:26 np0005593232 systemd[1]: libpod-conmon-4cefa193442549cee40cf1f32053838fb2b57ec6341dbec507990dcc1d2471b3.scope: Deactivated successfully.
Jan 23 04:07:26 np0005593232 python3.9[110583]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:07:26 np0005593232 podman[110614]: 2026-01-23 09:07:26.640666713 +0000 UTC m=+0.052533134 container create 2762d6e00b18e7f603a473f68aff6389bdfd9a9bea5c529cb520c79ace956b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_merkle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 04:07:26 np0005593232 systemd[1]: Started libpod-conmon-2762d6e00b18e7f603a473f68aff6389bdfd9a9bea5c529cb520c79ace956b5e.scope.
Jan 23 04:07:26 np0005593232 podman[110614]: 2026-01-23 09:07:26.618634859 +0000 UTC m=+0.030501300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:07:26 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:07:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77885cc03669c657c1bce2204403a0f19950af6eed0980e0e22f73d0afd37e7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:07:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77885cc03669c657c1bce2204403a0f19950af6eed0980e0e22f73d0afd37e7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:07:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77885cc03669c657c1bce2204403a0f19950af6eed0980e0e22f73d0afd37e7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:07:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77885cc03669c657c1bce2204403a0f19950af6eed0980e0e22f73d0afd37e7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:07:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77885cc03669c657c1bce2204403a0f19950af6eed0980e0e22f73d0afd37e7f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:07:26 np0005593232 podman[110614]: 2026-01-23 09:07:26.747274031 +0000 UTC m=+0.159140472 container init 2762d6e00b18e7f603a473f68aff6389bdfd9a9bea5c529cb520c79ace956b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_merkle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:07:26 np0005593232 podman[110614]: 2026-01-23 09:07:26.75730091 +0000 UTC m=+0.169167331 container start 2762d6e00b18e7f603a473f68aff6389bdfd9a9bea5c529cb520c79ace956b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_merkle, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 04:07:26 np0005593232 podman[110614]: 2026-01-23 09:07:26.761852077 +0000 UTC m=+0.173718528 container attach 2762d6e00b18e7f603a473f68aff6389bdfd9a9bea5c529cb520c79ace956b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_merkle, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 04:07:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:07:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:27.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:07:27 np0005593232 interesting_merkle[110652]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:07:27 np0005593232 interesting_merkle[110652]: --> relative data size: 1.0
Jan 23 04:07:27 np0005593232 interesting_merkle[110652]: --> All data devices are unavailable
Jan 23 04:07:27 np0005593232 systemd[1]: libpod-2762d6e00b18e7f603a473f68aff6389bdfd9a9bea5c529cb520c79ace956b5e.scope: Deactivated successfully.
Jan 23 04:07:27 np0005593232 podman[110614]: 2026-01-23 09:07:27.700753196 +0000 UTC m=+1.112619637 container died 2762d6e00b18e7f603a473f68aff6389bdfd9a9bea5c529cb520c79ace956b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_merkle, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:07:27 np0005593232 python3.9[110788]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 23 04:07:27 np0005593232 systemd[1]: var-lib-containers-storage-overlay-77885cc03669c657c1bce2204403a0f19950af6eed0980e0e22f73d0afd37e7f-merged.mount: Deactivated successfully.
Jan 23 04:07:27 np0005593232 podman[110614]: 2026-01-23 09:07:27.950297564 +0000 UTC m=+1.362163985 container remove 2762d6e00b18e7f603a473f68aff6389bdfd9a9bea5c529cb520c79ace956b5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_merkle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:07:27 np0005593232 systemd[1]: libpod-conmon-2762d6e00b18e7f603a473f68aff6389bdfd9a9bea5c529cb520c79ace956b5e.scope: Deactivated successfully.
Jan 23 04:07:28 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 23 04:07:28 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 23 04:07:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v362: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:28.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:28 np0005593232 podman[111034]: 2026-01-23 09:07:28.604532748 +0000 UTC m=+0.051765312 container create 718439308e464f33fc78de8dcd9096327fd7839ce94116b897cadcacf1ece7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shannon, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 23 04:07:28 np0005593232 systemd[1]: Started libpod-conmon-718439308e464f33fc78de8dcd9096327fd7839ce94116b897cadcacf1ece7f6.scope.
Jan 23 04:07:28 np0005593232 podman[111034]: 2026-01-23 09:07:28.582916236 +0000 UTC m=+0.030148830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:07:28 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:07:28 np0005593232 podman[111034]: 2026-01-23 09:07:28.700741157 +0000 UTC m=+0.147973761 container init 718439308e464f33fc78de8dcd9096327fd7839ce94116b897cadcacf1ece7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shannon, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 04:07:28 np0005593232 podman[111034]: 2026-01-23 09:07:28.708378559 +0000 UTC m=+0.155611143 container start 718439308e464f33fc78de8dcd9096327fd7839ce94116b897cadcacf1ece7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 04:07:28 np0005593232 podman[111034]: 2026-01-23 09:07:28.712751561 +0000 UTC m=+0.159984165 container attach 718439308e464f33fc78de8dcd9096327fd7839ce94116b897cadcacf1ece7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:07:28 np0005593232 elastic_shannon[111087]: 167 167
Jan 23 04:07:28 np0005593232 systemd[1]: libpod-718439308e464f33fc78de8dcd9096327fd7839ce94116b897cadcacf1ece7f6.scope: Deactivated successfully.
Jan 23 04:07:28 np0005593232 podman[111034]: 2026-01-23 09:07:28.715292062 +0000 UTC m=+0.162524646 container died 718439308e464f33fc78de8dcd9096327fd7839ce94116b897cadcacf1ece7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shannon, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:07:28 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7a031758b43185e30fc9ca1ef35db47dfec2a38f5adf26916322c71b66a17eaa-merged.mount: Deactivated successfully.
Jan 23 04:07:28 np0005593232 podman[111034]: 2026-01-23 09:07:28.753853326 +0000 UTC m=+0.201085910 container remove 718439308e464f33fc78de8dcd9096327fd7839ce94116b897cadcacf1ece7f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shannon, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 04:07:28 np0005593232 systemd[1]: libpod-conmon-718439308e464f33fc78de8dcd9096327fd7839ce94116b897cadcacf1ece7f6.scope: Deactivated successfully.
Jan 23 04:07:28 np0005593232 podman[111140]: 2026-01-23 09:07:28.919326953 +0000 UTC m=+0.051858555 container create 2a94938edba9345a5de52617caf6c41e104123938d90ef5b0a1dd08b2274e852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 04:07:28 np0005593232 systemd[1]: Started libpod-conmon-2a94938edba9345a5de52617caf6c41e104123938d90ef5b0a1dd08b2274e852.scope.
Jan 23 04:07:28 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:07:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf6695d7b2bd7197e1139435ef02a2a94ca3c316357e2a7865e7e4da7f1df0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:07:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf6695d7b2bd7197e1139435ef02a2a94ca3c316357e2a7865e7e4da7f1df0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:07:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf6695d7b2bd7197e1139435ef02a2a94ca3c316357e2a7865e7e4da7f1df0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:07:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf6695d7b2bd7197e1139435ef02a2a94ca3c316357e2a7865e7e4da7f1df0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:07:28 np0005593232 podman[111140]: 2026-01-23 09:07:28.897737292 +0000 UTC m=+0.030268914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:07:29 np0005593232 podman[111140]: 2026-01-23 09:07:29.002435026 +0000 UTC m=+0.134966648 container init 2a94938edba9345a5de52617caf6c41e104123938d90ef5b0a1dd08b2274e852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:07:29 np0005593232 podman[111140]: 2026-01-23 09:07:29.009682548 +0000 UTC m=+0.142214150 container start 2a94938edba9345a5de52617caf6c41e104123938d90ef5b0a1dd08b2274e852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatterjee, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 04:07:29 np0005593232 podman[111140]: 2026-01-23 09:07:29.014557734 +0000 UTC m=+0.147089336 container attach 2a94938edba9345a5de52617caf6c41e104123938d90ef5b0a1dd08b2274e852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatterjee, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 04:07:29 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 23 04:07:29 np0005593232 python3.9[111128]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:07:29 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 23 04:07:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:29.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]: {
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:    "0": [
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:        {
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:            "devices": [
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:                "/dev/loop3"
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:            ],
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:            "lv_name": "ceph_lv0",
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:            "lv_size": "7511998464",
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:            "name": "ceph_lv0",
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:            "tags": {
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:                "ceph.cluster_name": "ceph",
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:                "ceph.crush_device_class": "",
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:                "ceph.encrypted": "0",
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:                "ceph.osd_id": "0",
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:                "ceph.type": "block",
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:                "ceph.vdo": "0"
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:            },
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:            "type": "block",
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:            "vg_name": "ceph_vg0"
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:        }
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]:    ]
Jan 23 04:07:29 np0005593232 infallible_chatterjee[111157]: }
Jan 23 04:07:29 np0005593232 systemd[1]: libpod-2a94938edba9345a5de52617caf6c41e104123938d90ef5b0a1dd08b2274e852.scope: Deactivated successfully.
Jan 23 04:07:29 np0005593232 podman[111140]: 2026-01-23 09:07:29.843213335 +0000 UTC m=+0.975744957 container died 2a94938edba9345a5de52617caf6c41e104123938d90ef5b0a1dd08b2274e852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatterjee, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Jan 23 04:07:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v363: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:30 np0005593232 systemd[1]: var-lib-containers-storage-overlay-9bf6695d7b2bd7197e1139435ef02a2a94ca3c316357e2a7865e7e4da7f1df0e-merged.mount: Deactivated successfully.
Jan 23 04:07:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:30.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:30 np0005593232 podman[111140]: 2026-01-23 09:07:30.27118923 +0000 UTC m=+1.403720832 container remove 2a94938edba9345a5de52617caf6c41e104123938d90ef5b0a1dd08b2274e852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chatterjee, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 04:07:30 np0005593232 systemd[1]: libpod-conmon-2a94938edba9345a5de52617caf6c41e104123938d90ef5b0a1dd08b2274e852.scope: Deactivated successfully.
Jan 23 04:07:30 np0005593232 python3.9[111335]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 04:07:30 np0005593232 podman[111479]: 2026-01-23 09:07:30.88648111 +0000 UTC m=+0.042584957 container create 94e9b3a19855576800332360f3a2c059117dce217091899d6862157be79956bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 04:07:30 np0005593232 systemd[1]: Started libpod-conmon-94e9b3a19855576800332360f3a2c059117dce217091899d6862157be79956bc.scope.
Jan 23 04:07:30 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:07:30 np0005593232 podman[111479]: 2026-01-23 09:07:30.866505073 +0000 UTC m=+0.022608940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:07:30 np0005593232 podman[111479]: 2026-01-23 09:07:30.967160116 +0000 UTC m=+0.123263983 container init 94e9b3a19855576800332360f3a2c059117dce217091899d6862157be79956bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mestorf, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:07:30 np0005593232 podman[111479]: 2026-01-23 09:07:30.975076936 +0000 UTC m=+0.131180783 container start 94e9b3a19855576800332360f3a2c059117dce217091899d6862157be79956bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 04:07:30 np0005593232 podman[111479]: 2026-01-23 09:07:30.982061161 +0000 UTC m=+0.138165008 container attach 94e9b3a19855576800332360f3a2c059117dce217091899d6862157be79956bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mestorf, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:07:30 np0005593232 hardcore_mestorf[111495]: 167 167
Jan 23 04:07:30 np0005593232 systemd[1]: libpod-94e9b3a19855576800332360f3a2c059117dce217091899d6862157be79956bc.scope: Deactivated successfully.
Jan 23 04:07:30 np0005593232 podman[111479]: 2026-01-23 09:07:30.984439517 +0000 UTC m=+0.140543364 container died 94e9b3a19855576800332360f3a2c059117dce217091899d6862157be79956bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mestorf, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 23 04:07:31 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6491a1b8d00025a0b5c7af027c002851ac7ec1b3b0f6828cbf73ead7c6f65d66-merged.mount: Deactivated successfully.
Jan 23 04:07:31 np0005593232 podman[111479]: 2026-01-23 09:07:31.025537041 +0000 UTC m=+0.181640888 container remove 94e9b3a19855576800332360f3a2c059117dce217091899d6862157be79956bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_mestorf, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Jan 23 04:07:31 np0005593232 systemd[1]: libpod-conmon-94e9b3a19855576800332360f3a2c059117dce217091899d6862157be79956bc.scope: Deactivated successfully.
Jan 23 04:07:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:31.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:31 np0005593232 podman[111517]: 2026-01-23 09:07:31.188018935 +0000 UTC m=+0.039711557 container create 243b01dc1975a6812939ae49aa807b4834d035be580840815283b18d8506c3bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:07:31 np0005593232 systemd[1]: Started libpod-conmon-243b01dc1975a6812939ae49aa807b4834d035be580840815283b18d8506c3bb.scope.
Jan 23 04:07:31 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:07:31 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbf0a5999fbaa10aa2ebbb3c959c804eb4d33e080040443953e8e8de23a298f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:07:31 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbf0a5999fbaa10aa2ebbb3c959c804eb4d33e080040443953e8e8de23a298f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:07:31 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbf0a5999fbaa10aa2ebbb3c959c804eb4d33e080040443953e8e8de23a298f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:07:31 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbf0a5999fbaa10aa2ebbb3c959c804eb4d33e080040443953e8e8de23a298f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:07:31 np0005593232 podman[111517]: 2026-01-23 09:07:31.170559919 +0000 UTC m=+0.022252561 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:07:31 np0005593232 podman[111517]: 2026-01-23 09:07:31.269324608 +0000 UTC m=+0.121017260 container init 243b01dc1975a6812939ae49aa807b4834d035be580840815283b18d8506c3bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_agnesi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:07:31 np0005593232 podman[111517]: 2026-01-23 09:07:31.279237934 +0000 UTC m=+0.130930556 container start 243b01dc1975a6812939ae49aa807b4834d035be580840815283b18d8506c3bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_agnesi, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 04:07:31 np0005593232 podman[111517]: 2026-01-23 09:07:31.282717401 +0000 UTC m=+0.134410103 container attach 243b01dc1975a6812939ae49aa807b4834d035be580840815283b18d8506c3bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 04:07:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v364: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:32 np0005593232 happy_agnesi[111533]: {
Jan 23 04:07:32 np0005593232 happy_agnesi[111533]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:07:32 np0005593232 happy_agnesi[111533]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:07:32 np0005593232 happy_agnesi[111533]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:07:32 np0005593232 happy_agnesi[111533]:        "osd_id": 0,
Jan 23 04:07:32 np0005593232 happy_agnesi[111533]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:07:32 np0005593232 happy_agnesi[111533]:        "type": "bluestore"
Jan 23 04:07:32 np0005593232 happy_agnesi[111533]:    }
Jan 23 04:07:32 np0005593232 happy_agnesi[111533]: }
Jan 23 04:07:32 np0005593232 systemd[1]: libpod-243b01dc1975a6812939ae49aa807b4834d035be580840815283b18d8506c3bb.scope: Deactivated successfully.
Jan 23 04:07:32 np0005593232 podman[111579]: 2026-01-23 09:07:32.191552424 +0000 UTC m=+0.034947054 container died 243b01dc1975a6812939ae49aa807b4834d035be580840815283b18d8506c3bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_agnesi, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 04:07:32 np0005593232 systemd[1]: var-lib-containers-storage-overlay-dbf0a5999fbaa10aa2ebbb3c959c804eb4d33e080040443953e8e8de23a298f3-merged.mount: Deactivated successfully.
Jan 23 04:07:32 np0005593232 podman[111579]: 2026-01-23 09:07:32.246814823 +0000 UTC m=+0.090209373 container remove 243b01dc1975a6812939ae49aa807b4834d035be580840815283b18d8506c3bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 23 04:07:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:32.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:32 np0005593232 systemd[1]: libpod-conmon-243b01dc1975a6812939ae49aa807b4834d035be580840815283b18d8506c3bb.scope: Deactivated successfully.
Jan 23 04:07:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:07:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:07:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:07:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:07:32 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ef0673da-f841-4db4-afab-8e8cfd42aca4 does not exist
Jan 23 04:07:32 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 4d1b4575-7839-4c55-bc89-ea3538c38a98 does not exist
Jan 23 04:07:32 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 4b155984-8cd9-4323-9097-f1615a3fd5f0 does not exist
Jan 23 04:07:33 np0005593232 python3.9[111771]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:07:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:07:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:33.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:07:33 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Jan 23 04:07:33 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Jan 23 04:07:33 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:07:33 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:07:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v365: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:07:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:34.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:34 np0005593232 python3.9[112059]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 23 04:07:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:35.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:35 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 23 04:07:35 np0005593232 ceph-osd[85010]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 23 04:07:35 np0005593232 python3.9[112209]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:07:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v366: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:36.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:36 np0005593232 python3.9[112364]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 04:07:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:07:37
Jan 23 04:07:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:07:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:07:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['backups', 'default.rgw.meta', '.rgw.root', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'images', 'vms']
Jan 23 04:07:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:07:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:37.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:07:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:07:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:07:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:07:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:07:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:07:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:07:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:07:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:07:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:07:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:07:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:07:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:07:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:07:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:07:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:07:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v367: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:38.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:07:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:39.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:39 np0005593232 python3.9[112570]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 04:07:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v368: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:40.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:07:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:41.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:07:42 np0005593232 python3.9[112724]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:07:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v369: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:42.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:42 np0005593232 python3.9[112879]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 23 04:07:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:43.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:07:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v370: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:07:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:44.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:07:44 np0005593232 systemd[1]: session-36.scope: Deactivated successfully.
Jan 23 04:07:44 np0005593232 systemd[1]: session-36.scope: Consumed 19.837s CPU time.
Jan 23 04:07:44 np0005593232 systemd-logind[808]: Session 36 logged out. Waiting for processes to exit.
Jan 23 04:07:44 np0005593232 systemd-logind[808]: Removed session 36.
Jan 23 04:07:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:07:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:45.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:07:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v371: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:07:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:46.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:47.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 23 04:07:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:07:47.733507) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:07:47 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 23 04:07:47 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159267733692, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2576, "num_deletes": 251, "total_data_size": 3811378, "memory_usage": 3885192, "flush_reason": "Manual Compaction"}
Jan 23 04:07:47 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 23 04:07:47 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159267941531, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 3706461, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7419, "largest_seqno": 9994, "table_properties": {"data_size": 3695135, "index_size": 6860, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 29501, "raw_average_key_size": 21, "raw_value_size": 3669857, "raw_average_value_size": 2732, "num_data_blocks": 302, "num_entries": 1343, "num_filter_entries": 1343, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769159102, "oldest_key_time": 1769159102, "file_creation_time": 1769159267, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:07:47 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 208042 microseconds, and 10813 cpu microseconds.
Jan 23 04:07:47 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:07:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v372: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:07:47.941604) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 3706461 bytes OK
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:07:47.941632) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:07:48.095775) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:07:48.095824) EVENT_LOG_v1 {"time_micros": 1769159268095814, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:07:48.095849) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 3799899, prev total WAL file size 3799899, number of live WAL files 2.
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:07:48.097384) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(3619KB)], [20(7111KB)]
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159268097535, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 10988146, "oldest_snapshot_seqno": -1}
Jan 23 04:07:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:48.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3851 keys, 9441699 bytes, temperature: kUnknown
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159268320034, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 9441699, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9410252, "index_size": 20713, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9669, "raw_key_size": 92850, "raw_average_key_size": 24, "raw_value_size": 9335064, "raw_average_value_size": 2424, "num_data_blocks": 905, "num_entries": 3851, "num_filter_entries": 3851, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769159268, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:07:48.320686) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 9441699 bytes
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:07:48.322692) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 49.3 rd, 42.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 6.9 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(5.5) write-amplify(2.5) OK, records in: 4372, records dropped: 521 output_compression: NoCompression
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:07:48.322747) EVENT_LOG_v1 {"time_micros": 1769159268322727, "job": 6, "event": "compaction_finished", "compaction_time_micros": 222922, "compaction_time_cpu_micros": 28972, "output_level": 6, "num_output_files": 1, "total_output_size": 9441699, "num_input_records": 4372, "num_output_records": 3851, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159268324057, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159268325603, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:07:48.097188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:07:48.325746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:07:48.325753) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:07:48.325755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:07:48.325756) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:07:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:07:48.325758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:07:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:07:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:49.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v373: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:50.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:51.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:51 np0005593232 systemd-logind[808]: New session 37 of user zuul.
Jan 23 04:07:51 np0005593232 systemd[1]: Started Session 37 of User zuul.
Jan 23 04:07:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v374: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:07:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:52.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:07:52 np0005593232 python3.9[113061]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:07:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:07:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:53.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:07:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:07:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v375: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:54 np0005593232 python3.9[113216]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 04:07:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:54.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:07:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:55.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:07:55 np0005593232 python3.9[113410]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:07:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v376: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:56 np0005593232 systemd[1]: session-37.scope: Deactivated successfully.
Jan 23 04:07:56 np0005593232 systemd[1]: session-37.scope: Consumed 2.245s CPU time.
Jan 23 04:07:56 np0005593232 systemd-logind[808]: Session 37 logged out. Waiting for processes to exit.
Jan 23 04:07:56 np0005593232 systemd-logind[808]: Removed session 37.
Jan 23 04:07:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:07:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:56.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:07:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:07:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:57.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:07:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v377: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:07:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:07:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:07:58.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:07:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:07:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:07:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:07:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:07:59.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:08:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v378: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:00.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:01.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v379: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:08:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:02.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:08:02 np0005593232 systemd-logind[808]: New session 38 of user zuul.
Jan 23 04:08:02 np0005593232 systemd[1]: Started Session 38 of User zuul.
Jan 23 04:08:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:03.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:08:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v380: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:04 np0005593232 python3.9[113643]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:08:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:04.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:05.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:05 np0005593232 python3.9[113798]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:08:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v381: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:06 np0005593232 python3.9[113954]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 04:08:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:06.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:07 np0005593232 python3.9[114039]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 04:08:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:07.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:08:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:08:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:08:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:08:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:08:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:08:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v382: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:08.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:08:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:08:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:09.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:08:09 np0005593232 python3.9[114193]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 04:08:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v383: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:08:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:10.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:08:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:11.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:11 np0005593232 python3.9[114389]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:08:12 np0005593232 python3.9[114541]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:08:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v384: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:12.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:12 np0005593232 python3.9[114707]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:08:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:13.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:13 np0005593232 python3.9[114785]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:08:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:08:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v385: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:14 np0005593232 python3.9[114937]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:08:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:08:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:14.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:08:14 np0005593232 python3.9[115015]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:08:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:08:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:15.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:08:15 np0005593232 python3.9[115168]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:08:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v386: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:16 np0005593232 python3.9[115320]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:08:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:16.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:16 np0005593232 python3.9[115472]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:08:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:17.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:17 np0005593232 python3.9[115625]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:08:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v387: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:18.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:18 np0005593232 python3.9[115777]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 04:08:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:08:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:08:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:19.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:08:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v388: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:20.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:21.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:21 np0005593232 python3.9[115982]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:08:21 np0005593232 python3.9[116136]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:08:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v389: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:22.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:22 np0005593232 python3.9[116288]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:08:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:08:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:23.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:08:23 np0005593232 python3.9[116441]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:08:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:08:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v390: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:08:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:24.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:08:24 np0005593232 python3.9[116595]: ansible-service_facts Invoked
Jan 23 04:08:24 np0005593232 network[116612]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 04:08:24 np0005593232 network[116613]: 'network-scripts' will be removed from distribution in near future.
Jan 23 04:08:24 np0005593232 network[116614]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 04:08:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:25.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v391: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:08:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:26.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:08:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:27.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v392: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:28.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:08:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:29.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v393: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:30.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:31.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v394: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:32.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:33.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:33 np0005593232 python3.9[117170]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 04:08:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:08:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v395: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:08:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:34.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:08:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:08:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:35.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:08:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v396: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:08:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:36.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:08:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:08:37
Jan 23 04:08:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:08:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:08:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'volumes', 'vms', 'default.rgw.control', 'backups', 'default.rgw.meta']
Jan 23 04:08:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:08:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:08:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:37.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:08:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:08:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:08:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:08:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:08:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:08:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:08:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:08:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:08:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:08:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:08:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:08:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:08:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:08:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:08:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:08:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:08:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:08:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:08:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:08:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:08:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v397: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:38.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:08:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:08:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:08:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:08:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:08:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:08:38 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev fa637f82-c186-43bb-a7e9-23d5de7ca44d does not exist
Jan 23 04:08:38 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b6440097-a89f-43c7-b0d1-b708b969d796 does not exist
Jan 23 04:08:38 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 2266e022-7e61-4ac7-9653-323a682df01e does not exist
Jan 23 04:08:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:08:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:08:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:08:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:08:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:08:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:08:38 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:08:38 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:08:38 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:08:38 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:08:38 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:08:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:08:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:08:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:39.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:08:39 np0005593232 podman[117420]: 2026-01-23 09:08:39.43869897 +0000 UTC m=+0.024004076 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:08:39 np0005593232 podman[117420]: 2026-01-23 09:08:39.598130006 +0000 UTC m=+0.183435102 container create a02e268c33d12dd22ec4d7942adf202caa5f200ce44f347a4b950c5d6de8ecbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 23 04:08:39 np0005593232 systemd[1]: Started libpod-conmon-a02e268c33d12dd22ec4d7942adf202caa5f200ce44f347a4b950c5d6de8ecbd.scope.
Jan 23 04:08:40 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:08:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v398: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:40 np0005593232 podman[117420]: 2026-01-23 09:08:40.149758165 +0000 UTC m=+0.735063351 container init a02e268c33d12dd22ec4d7942adf202caa5f200ce44f347a4b950c5d6de8ecbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rubin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 04:08:40 np0005593232 podman[117420]: 2026-01-23 09:08:40.162126163 +0000 UTC m=+0.747431299 container start a02e268c33d12dd22ec4d7942adf202caa5f200ce44f347a4b950c5d6de8ecbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 23 04:08:40 np0005593232 eager_rubin[117436]: 167 167
Jan 23 04:08:40 np0005593232 systemd[1]: libpod-a02e268c33d12dd22ec4d7942adf202caa5f200ce44f347a4b950c5d6de8ecbd.scope: Deactivated successfully.
Jan 23 04:08:40 np0005593232 podman[117420]: 2026-01-23 09:08:40.256638103 +0000 UTC m=+0.841943209 container attach a02e268c33d12dd22ec4d7942adf202caa5f200ce44f347a4b950c5d6de8ecbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rubin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:08:40 np0005593232 podman[117420]: 2026-01-23 09:08:40.257431715 +0000 UTC m=+0.842736821 container died a02e268c33d12dd22ec4d7942adf202caa5f200ce44f347a4b950c5d6de8ecbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 04:08:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:08:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:40.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:08:40 np0005593232 systemd[1]: var-lib-containers-storage-overlay-3d02e3c26fade6acdc5dbc267deed48609e9af2472666662a8d024cb5f7b2874-merged.mount: Deactivated successfully.
Jan 23 04:08:40 np0005593232 podman[117420]: 2026-01-23 09:08:40.783165726 +0000 UTC m=+1.368470822 container remove a02e268c33d12dd22ec4d7942adf202caa5f200ce44f347a4b950c5d6de8ecbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:08:40 np0005593232 systemd[1]: libpod-conmon-a02e268c33d12dd22ec4d7942adf202caa5f200ce44f347a4b950c5d6de8ecbd.scope: Deactivated successfully.
Jan 23 04:08:40 np0005593232 podman[117464]: 2026-01-23 09:08:40.964446106 +0000 UTC m=+0.068634602 container create 3fc046891f908d99731942ab9b0ab8429a98a8118dd15c7e02d4fe745f667d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:08:41 np0005593232 systemd[1]: Started libpod-conmon-3fc046891f908d99731942ab9b0ab8429a98a8118dd15c7e02d4fe745f667d2e.scope.
Jan 23 04:08:41 np0005593232 podman[117464]: 2026-01-23 09:08:40.91941896 +0000 UTC m=+0.023607476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:08:41 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:08:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bbf224bed6a2444f5820bf3aab458127d7a58c302179a5b8d4d6729bead9b63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:08:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bbf224bed6a2444f5820bf3aab458127d7a58c302179a5b8d4d6729bead9b63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:08:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bbf224bed6a2444f5820bf3aab458127d7a58c302179a5b8d4d6729bead9b63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:08:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bbf224bed6a2444f5820bf3aab458127d7a58c302179a5b8d4d6729bead9b63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:08:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bbf224bed6a2444f5820bf3aab458127d7a58c302179a5b8d4d6729bead9b63/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:08:41 np0005593232 podman[117464]: 2026-01-23 09:08:41.039273552 +0000 UTC m=+0.143462068 container init 3fc046891f908d99731942ab9b0ab8429a98a8118dd15c7e02d4fe745f667d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 04:08:41 np0005593232 podman[117464]: 2026-01-23 09:08:41.048610374 +0000 UTC m=+0.152798870 container start 3fc046891f908d99731942ab9b0ab8429a98a8118dd15c7e02d4fe745f667d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 04:08:41 np0005593232 podman[117464]: 2026-01-23 09:08:41.18496017 +0000 UTC m=+0.289148726 container attach 3fc046891f908d99731942ab9b0ab8429a98a8118dd15c7e02d4fe745f667d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_easley, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 23 04:08:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:41.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:41 np0005593232 lucid_easley[117480]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:08:41 np0005593232 lucid_easley[117480]: --> relative data size: 1.0
Jan 23 04:08:41 np0005593232 lucid_easley[117480]: --> All data devices are unavailable
Jan 23 04:08:41 np0005593232 systemd[1]: libpod-3fc046891f908d99731942ab9b0ab8429a98a8118dd15c7e02d4fe745f667d2e.scope: Deactivated successfully.
Jan 23 04:08:41 np0005593232 podman[117495]: 2026-01-23 09:08:41.915959527 +0000 UTC m=+0.025859569 container died 3fc046891f908d99731942ab9b0ab8429a98a8118dd15c7e02d4fe745f667d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_easley, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:08:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v399: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:42.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:42 np0005593232 systemd[1]: var-lib-containers-storage-overlay-3bbf224bed6a2444f5820bf3aab458127d7a58c302179a5b8d4d6729bead9b63-merged.mount: Deactivated successfully.
Jan 23 04:08:42 np0005593232 podman[117495]: 2026-01-23 09:08:42.502393835 +0000 UTC m=+0.612293847 container remove 3fc046891f908d99731942ab9b0ab8429a98a8118dd15c7e02d4fe745f667d2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_easley, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:08:42 np0005593232 systemd[1]: libpod-conmon-3fc046891f908d99731942ab9b0ab8429a98a8118dd15c7e02d4fe745f667d2e.scope: Deactivated successfully.
Jan 23 04:08:43 np0005593232 podman[117653]: 2026-01-23 09:08:43.062324958 +0000 UTC m=+0.039873573 container create d3e937e226ef95dcaa5a377edb988d072394b32a33f377ae51590de6e5731b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lewin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 23 04:08:43 np0005593232 systemd[1]: Started libpod-conmon-d3e937e226ef95dcaa5a377edb988d072394b32a33f377ae51590de6e5731b56.scope.
Jan 23 04:08:43 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:08:43 np0005593232 podman[117653]: 2026-01-23 09:08:43.042527121 +0000 UTC m=+0.020075756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:08:43 np0005593232 podman[117653]: 2026-01-23 09:08:43.183983301 +0000 UTC m=+0.161531926 container init d3e937e226ef95dcaa5a377edb988d072394b32a33f377ae51590de6e5731b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lewin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 23 04:08:43 np0005593232 podman[117653]: 2026-01-23 09:08:43.191543983 +0000 UTC m=+0.169092598 container start d3e937e226ef95dcaa5a377edb988d072394b32a33f377ae51590de6e5731b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lewin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 04:08:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:43.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:43 np0005593232 charming_lewin[117670]: 167 167
Jan 23 04:08:43 np0005593232 systemd[1]: libpod-d3e937e226ef95dcaa5a377edb988d072394b32a33f377ae51590de6e5731b56.scope: Deactivated successfully.
Jan 23 04:08:43 np0005593232 podman[117653]: 2026-01-23 09:08:43.221609869 +0000 UTC m=+0.199158514 container attach d3e937e226ef95dcaa5a377edb988d072394b32a33f377ae51590de6e5731b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 04:08:43 np0005593232 podman[117653]: 2026-01-23 09:08:43.222318689 +0000 UTC m=+0.199867304 container died d3e937e226ef95dcaa5a377edb988d072394b32a33f377ae51590de6e5731b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 04:08:43 np0005593232 systemd[1]: var-lib-containers-storage-overlay-44152e626111c47a4e9216213cd8c1d2ffe70796c86650dea440e40a0cae0bc8-merged.mount: Deactivated successfully.
Jan 23 04:08:43 np0005593232 podman[117653]: 2026-01-23 09:08:43.413253062 +0000 UTC m=+0.390801677 container remove d3e937e226ef95dcaa5a377edb988d072394b32a33f377ae51590de6e5731b56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_lewin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:08:43 np0005593232 systemd[1]: libpod-conmon-d3e937e226ef95dcaa5a377edb988d072394b32a33f377ae51590de6e5731b56.scope: Deactivated successfully.
Jan 23 04:08:43 np0005593232 podman[117696]: 2026-01-23 09:08:43.552810838 +0000 UTC m=+0.029923773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:08:43 np0005593232 podman[117696]: 2026-01-23 09:08:43.728078729 +0000 UTC m=+0.205191634 container create f2a6c006ad7cd81b4292e7212067b8553bcd58e5218ca0191f7d86112819a79b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_dubinsky, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:08:43 np0005593232 systemd[1]: Started libpod-conmon-f2a6c006ad7cd81b4292e7212067b8553bcd58e5218ca0191f7d86112819a79b.scope.
Jan 23 04:08:43 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:08:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc996a32e7a3af320622d639729cc87dcf7a27208f4d0762948413ef43bc5352/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:08:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc996a32e7a3af320622d639729cc87dcf7a27208f4d0762948413ef43bc5352/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:08:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc996a32e7a3af320622d639729cc87dcf7a27208f4d0762948413ef43bc5352/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:08:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc996a32e7a3af320622d639729cc87dcf7a27208f4d0762948413ef43bc5352/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:08:43 np0005593232 podman[117696]: 2026-01-23 09:08:43.985147191 +0000 UTC m=+0.462260136 container init f2a6c006ad7cd81b4292e7212067b8553bcd58e5218ca0191f7d86112819a79b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_dubinsky, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 23 04:08:43 np0005593232 podman[117696]: 2026-01-23 09:08:43.995714098 +0000 UTC m=+0.472827013 container start f2a6c006ad7cd81b4292e7212067b8553bcd58e5218ca0191f7d86112819a79b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:08:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:08:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v400: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:44 np0005593232 podman[117696]: 2026-01-23 09:08:44.151434019 +0000 UTC m=+0.628546924 container attach f2a6c006ad7cd81b4292e7212067b8553bcd58e5218ca0191f7d86112819a79b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_dubinsky, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Jan 23 04:08:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:08:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:44.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:08:44 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 23 04:08:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:08:44.441530) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:08:44 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 23 04:08:44 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159324441787, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 693, "num_deletes": 254, "total_data_size": 968133, "memory_usage": 980000, "flush_reason": "Manual Compaction"}
Jan 23 04:08:44 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 23 04:08:44 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159324631921, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 622003, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9995, "largest_seqno": 10687, "table_properties": {"data_size": 618910, "index_size": 1001, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7879, "raw_average_key_size": 19, "raw_value_size": 612447, "raw_average_value_size": 1527, "num_data_blocks": 45, "num_entries": 401, "num_filter_entries": 401, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769159268, "oldest_key_time": 1769159268, "file_creation_time": 1769159324, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:08:44 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 190396 microseconds, and 4614 cpu microseconds.
Jan 23 04:08:44 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]: {
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:    "0": [
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:        {
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:            "devices": [
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:                "/dev/loop3"
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:            ],
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:            "lv_name": "ceph_lv0",
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:            "lv_size": "7511998464",
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:            "name": "ceph_lv0",
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:            "tags": {
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:                "ceph.cluster_name": "ceph",
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:                "ceph.crush_device_class": "",
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:                "ceph.encrypted": "0",
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:                "ceph.osd_id": "0",
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:                "ceph.type": "block",
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:                "ceph.vdo": "0"
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:            },
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:            "type": "block",
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:            "vg_name": "ceph_vg0"
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:        }
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]:    ]
Jan 23 04:08:44 np0005593232 silly_dubinsky[117712]: }
Jan 23 04:08:44 np0005593232 systemd[1]: libpod-f2a6c006ad7cd81b4292e7212067b8553bcd58e5218ca0191f7d86112819a79b.scope: Deactivated successfully.
Jan 23 04:08:44 np0005593232 podman[117774]: 2026-01-23 09:08:44.886750087 +0000 UTC m=+0.023096581 container died f2a6c006ad7cd81b4292e7212067b8553bcd58e5218ca0191f7d86112819a79b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 04:08:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:08:44.632013) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 622003 bytes OK
Jan 23 04:08:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:08:44.632044) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 23 04:08:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:08:44.891575) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 23 04:08:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:08:44.891650) EVENT_LOG_v1 {"time_micros": 1769159324891628, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 04:08:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:08:44.891677) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 04:08:44 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 964600, prev total WAL file size 980573, number of live WAL files 2.
Jan 23 04:08:44 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:08:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:08:44.893459) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323535' seq:0, type:0; will stop at (end)
Jan 23 04:08:44 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 04:08:44 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(607KB)], [23(9220KB)]
Jan 23 04:08:44 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159324893634, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 10063702, "oldest_snapshot_seqno": -1}
Jan 23 04:08:45 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3753 keys, 7583564 bytes, temperature: kUnknown
Jan 23 04:08:45 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159325093012, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 7583564, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7555661, "index_size": 17425, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9413, "raw_key_size": 91283, "raw_average_key_size": 24, "raw_value_size": 7484968, "raw_average_value_size": 1994, "num_data_blocks": 762, "num_entries": 3753, "num_filter_entries": 3753, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769159324, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:08:45 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:08:45 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:08:45.093310) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7583564 bytes
Jan 23 04:08:45 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:08:45.095768) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 50.5 rd, 38.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.0 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(28.4) write-amplify(12.2) OK, records in: 4252, records dropped: 499 output_compression: NoCompression
Jan 23 04:08:45 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:08:45.095793) EVENT_LOG_v1 {"time_micros": 1769159325095782, "job": 8, "event": "compaction_finished", "compaction_time_micros": 199475, "compaction_time_cpu_micros": 26334, "output_level": 6, "num_output_files": 1, "total_output_size": 7583564, "num_input_records": 4252, "num_output_records": 3753, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:08:45 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:08:45 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159325096090, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 23 04:08:45 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:08:45 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159325098254, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 23 04:08:45 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:08:44.893230) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:08:45 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:08:45.098354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:08:45 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:08:45.098361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:08:45 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:08:45.098364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:08:45 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:08:45.098366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:08:45 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:08:45.098368) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:08:45 np0005593232 systemd[1]: var-lib-containers-storage-overlay-cc996a32e7a3af320622d639729cc87dcf7a27208f4d0762948413ef43bc5352-merged.mount: Deactivated successfully.
Jan 23 04:08:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:45.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:45 np0005593232 podman[117774]: 2026-01-23 09:08:45.206675148 +0000 UTC m=+0.343021602 container remove f2a6c006ad7cd81b4292e7212067b8553bcd58e5218ca0191f7d86112819a79b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_dubinsky, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 04:08:45 np0005593232 systemd[1]: libpod-conmon-f2a6c006ad7cd81b4292e7212067b8553bcd58e5218ca0191f7d86112819a79b.scope: Deactivated successfully.
Jan 23 04:08:45 np0005593232 python3.9[117864]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 23 04:08:45 np0005593232 podman[118018]: 2026-01-23 09:08:45.762266049 +0000 UTC m=+0.022743551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:08:45 np0005593232 podman[118018]: 2026-01-23 09:08:45.895381393 +0000 UTC m=+0.155858875 container create 38b84a4744dfabd7cd2e42e9b97fe2bd3bfe5425a62a1dc76cd535d14f4fccb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 04:08:46 np0005593232 systemd[1]: Started libpod-conmon-38b84a4744dfabd7cd2e42e9b97fe2bd3bfe5425a62a1dc76cd535d14f4fccb6.scope.
Jan 23 04:08:46 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:08:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v401: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:08:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:08:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:08:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:08:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:08:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:08:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:08:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:08:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:08:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:08:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:08:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:08:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:08:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:08:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:08:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:08:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:08:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:08:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:08:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:08:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:08:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:08:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:08:46 np0005593232 podman[118018]: 2026-01-23 09:08:46.254822396 +0000 UTC m=+0.515299908 container init 38b84a4744dfabd7cd2e42e9b97fe2bd3bfe5425a62a1dc76cd535d14f4fccb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:08:46 np0005593232 podman[118018]: 2026-01-23 09:08:46.268014907 +0000 UTC m=+0.528492379 container start 38b84a4744dfabd7cd2e42e9b97fe2bd3bfe5425a62a1dc76cd535d14f4fccb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_dijkstra, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:08:46 np0005593232 nice_dijkstra[118047]: 167 167
Jan 23 04:08:46 np0005593232 systemd[1]: libpod-38b84a4744dfabd7cd2e42e9b97fe2bd3bfe5425a62a1dc76cd535d14f4fccb6.scope: Deactivated successfully.
Jan 23 04:08:46 np0005593232 podman[118018]: 2026-01-23 09:08:46.282935247 +0000 UTC m=+0.543412749 container attach 38b84a4744dfabd7cd2e42e9b97fe2bd3bfe5425a62a1dc76cd535d14f4fccb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 04:08:46 np0005593232 podman[118018]: 2026-01-23 09:08:46.283453082 +0000 UTC m=+0.543930554 container died 38b84a4744dfabd7cd2e42e9b97fe2bd3bfe5425a62a1dc76cd535d14f4fccb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_dijkstra, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 04:08:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:46.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:46 np0005593232 systemd[1]: var-lib-containers-storage-overlay-53cdc9b32663c6bb2893ce29f02ca9ad6af2893cd156ee04871cb969f71bb041-merged.mount: Deactivated successfully.
Jan 23 04:08:46 np0005593232 podman[118018]: 2026-01-23 09:08:46.646814594 +0000 UTC m=+0.907292066 container remove 38b84a4744dfabd7cd2e42e9b97fe2bd3bfe5425a62a1dc76cd535d14f4fccb6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_dijkstra, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:08:46 np0005593232 systemd[1]: libpod-conmon-38b84a4744dfabd7cd2e42e9b97fe2bd3bfe5425a62a1dc76cd535d14f4fccb6.scope: Deactivated successfully.
Jan 23 04:08:46 np0005593232 podman[118154]: 2026-01-23 09:08:46.774541868 +0000 UTC m=+0.022206266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:08:46 np0005593232 podman[118154]: 2026-01-23 09:08:46.935228149 +0000 UTC m=+0.182892527 container create edf8448e82e3931641b8ef8bd5ac43882d47125ee793d915970635ca867197ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_taussig, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:08:47 np0005593232 python3.9[118213]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:08:47 np0005593232 systemd[1]: Started libpod-conmon-edf8448e82e3931641b8ef8bd5ac43882d47125ee793d915970635ca867197ac.scope.
Jan 23 04:08:47 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:08:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5fa28c6cd9c010518d08e3448f013686b51e8d704dac03e6c302edc1c8de3f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:08:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5fa28c6cd9c010518d08e3448f013686b51e8d704dac03e6c302edc1c8de3f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:08:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5fa28c6cd9c010518d08e3448f013686b51e8d704dac03e6c302edc1c8de3f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:08:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5fa28c6cd9c010518d08e3448f013686b51e8d704dac03e6c302edc1c8de3f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:08:47 np0005593232 podman[118154]: 2026-01-23 09:08:47.153995784 +0000 UTC m=+0.401660182 container init edf8448e82e3931641b8ef8bd5ac43882d47125ee793d915970635ca867197ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_taussig, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:08:47 np0005593232 podman[118154]: 2026-01-23 09:08:47.161183306 +0000 UTC m=+0.408847684 container start edf8448e82e3931641b8ef8bd5ac43882d47125ee793d915970635ca867197ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_taussig, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 04:08:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:47.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:47 np0005593232 podman[118154]: 2026-01-23 09:08:47.355734599 +0000 UTC m=+0.603398977 container attach edf8448e82e3931641b8ef8bd5ac43882d47125ee793d915970635ca867197ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_taussig, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 04:08:47 np0005593232 python3.9[118299]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:08:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 04:08:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2344 writes, 10K keys, 2344 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2344 writes, 2344 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2344 writes, 10K keys, 2344 commit groups, 1.0 writes per commit group, ingest: 13.11 MB, 0.02 MB/s#012Interval WAL: 2344 writes, 2344 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     22.3      0.50              0.05         4    0.124       0      0       0.0       0.0#012  L6      1/0    7.23 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.1     53.7     46.0      0.50              0.07         3    0.168     11K   1312       0.0       0.0#012 Sum      1/0    7.23 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     27.0     34.2      1.00              0.12         7    0.143     11K   1312       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.1     27.1     34.3      1.00              0.12         6    0.166     11K   1312       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) 
Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     53.7     46.0      0.50              0.07         3    0.168     11K   1312       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     22.4      0.49              0.05         3    0.165       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.011, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 1.0 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 1.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a844231f0#2 capacity: 308.00 MB usage: 1.27 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 7.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(66,1.14 MB,0.369099%) FilterBlock(8,41.73 KB,0.0132325%) IndexBlock(8,93.83 KB,0.0297497%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 23 04:08:48 np0005593232 compassionate_taussig[118219]: {
Jan 23 04:08:48 np0005593232 compassionate_taussig[118219]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:08:48 np0005593232 compassionate_taussig[118219]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:08:48 np0005593232 compassionate_taussig[118219]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:08:48 np0005593232 compassionate_taussig[118219]:        "osd_id": 0,
Jan 23 04:08:48 np0005593232 compassionate_taussig[118219]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:08:48 np0005593232 compassionate_taussig[118219]:        "type": "bluestore"
Jan 23 04:08:48 np0005593232 compassionate_taussig[118219]:    }
Jan 23 04:08:48 np0005593232 compassionate_taussig[118219]: }
Jan 23 04:08:48 np0005593232 systemd[1]: libpod-edf8448e82e3931641b8ef8bd5ac43882d47125ee793d915970635ca867197ac.scope: Deactivated successfully.
Jan 23 04:08:48 np0005593232 podman[118154]: 2026-01-23 09:08:48.034811135 +0000 UTC m=+1.282475523 container died edf8448e82e3931641b8ef8bd5ac43882d47125ee793d915970635ca867197ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 04:08:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v402: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:48 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d5fa28c6cd9c010518d08e3448f013686b51e8d704dac03e6c302edc1c8de3f1-merged.mount: Deactivated successfully.
Jan 23 04:08:48 np0005593232 podman[118154]: 2026-01-23 09:08:48.253616821 +0000 UTC m=+1.501281199 container remove edf8448e82e3931641b8ef8bd5ac43882d47125ee793d915970635ca867197ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_taussig, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:08:48 np0005593232 systemd[1]: libpod-conmon-edf8448e82e3931641b8ef8bd5ac43882d47125ee793d915970635ca867197ac.scope: Deactivated successfully.
Jan 23 04:08:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:08:48 np0005593232 python3.9[118478]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:08:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:48.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:08:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:08:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:08:48 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 112fb26f-9cac-4487-ab27-e8edc1106ef6 does not exist
Jan 23 04:08:48 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 137e72cb-b79f-43fa-88c0-395d6ef59380 does not exist
Jan 23 04:08:48 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 20252bfe-c27b-4516-8dde-17405ece8fe3 does not exist
Jan 23 04:08:48 np0005593232 python3.9[118607]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:08:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:08:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v403: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:50.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:50.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:50 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:08:50 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:08:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v404: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:08:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:52.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:08:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:52.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:52 np0005593232 python3.9[118762]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:08:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v405: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:08:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:54.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:54.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:55 np0005593232 python3.9[118915]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 04:08:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v406: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:08:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:56.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:08:56 np0005593232 python3.9[118999]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:08:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:08:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:56.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:08:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v407: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:08:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:08:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:08:58.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:08:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:08:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:08:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:08:58.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:08:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:09:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v408: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:00.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:09:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:00.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:09:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v409: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:02.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:09:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:02.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:09:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v410: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:09:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:04.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:04 np0005593232 systemd[1]: session-38.scope: Deactivated successfully.
Jan 23 04:09:04 np0005593232 systemd[1]: session-38.scope: Consumed 23.998s CPU time.
Jan 23 04:09:04 np0005593232 systemd-logind[808]: Session 38 logged out. Waiting for processes to exit.
Jan 23 04:09:04 np0005593232 systemd-logind[808]: Removed session 38.
Jan 23 04:09:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:09:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:04.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:09:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v411: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:09:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:06.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:09:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:09:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:06.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:09:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:09:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:09:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:09:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:09:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:09:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:09:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v412: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:09:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:08.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:09:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:08.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:09:09 np0005593232 systemd[1]: session-18.scope: Deactivated successfully.
Jan 23 04:09:09 np0005593232 systemd[1]: session-18.scope: Consumed 1min 41.934s CPU time.
Jan 23 04:09:09 np0005593232 systemd-logind[808]: Session 18 logged out. Waiting for processes to exit.
Jan 23 04:09:09 np0005593232 systemd-logind[808]: Removed session 18.
Jan 23 04:09:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v413: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:10.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:10.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v414: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:12.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:09:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:12.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:09:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v415: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:09:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:14.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:14.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v416: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:09:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:16.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:09:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:16.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v417: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:09:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:18.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:09:18 np0005593232 systemd-logind[808]: New session 39 of user zuul.
Jan 23 04:09:18 np0005593232 systemd[1]: Started Session 39 of User zuul.
Jan 23 04:09:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:18.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:09:19 np0005593232 python3.9[119246]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:09:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v418: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:20.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:20 np0005593232 python3.9[119447]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:09:20 np0005593232 python3.9[119526]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:09:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:20.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:21 np0005593232 systemd[1]: session-39.scope: Deactivated successfully.
Jan 23 04:09:21 np0005593232 systemd[1]: session-39.scope: Consumed 1.410s CPU time.
Jan 23 04:09:21 np0005593232 systemd-logind[808]: Session 39 logged out. Waiting for processes to exit.
Jan 23 04:09:21 np0005593232 systemd-logind[808]: Removed session 39.
Jan 23 04:09:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v419: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:09:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:22.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:09:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:22.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v420: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:09:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:24.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:24.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v421: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:09:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:26.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:09:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:09:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:26.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:09:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v422: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:09:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:28.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:09:28 np0005593232 systemd-logind[808]: New session 40 of user zuul.
Jan 23 04:09:28 np0005593232 systemd[1]: Started Session 40 of User zuul.
Jan 23 04:09:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:28.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:09:29 np0005593232 python3.9[119709]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:09:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v423: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:09:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:30.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:09:30 np0005593232 python3.9[119865]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:09:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:30.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:31 np0005593232 python3.9[120041]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:09:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v424: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:32.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:32 np0005593232 python3.9[120119]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.xtl3rj1h recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:09:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:32.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:33 np0005593232 python3.9[120272]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:09:33 np0005593232 python3.9[120350]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.ul4kg3ui recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:09:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v425: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:09:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:09:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:34.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:09:34 np0005593232 python3.9[120502]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:09:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:34.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:35 np0005593232 python3.9[120655]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:09:35 np0005593232 python3.9[120733]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:09:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v426: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:36.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:36 np0005593232 python3.9[120885]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:09:36 np0005593232 python3.9[120964]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:09:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:36.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:09:37
Jan 23 04:09:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:09:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:09:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', '.mgr', '.rgw.root', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.log']
Jan 23 04:09:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:09:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:09:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:09:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:09:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:09:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:09:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:09:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:09:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:09:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:09:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:09:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:09:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:09:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:09:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:09:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:09:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:09:37 np0005593232 python3.9[121116]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:09:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v427: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:38.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:38 np0005593232 python3.9[121268]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:09:38 np0005593232 python3.9[121347]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:09:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:09:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:38.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:09:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:09:39 np0005593232 python3.9[121549]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:09:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v428: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:40 np0005593232 python3.9[121627]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:09:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:40.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:40.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:41 np0005593232 python3.9[121780]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:09:41 np0005593232 systemd[1]: Reloading.
Jan 23 04:09:41 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:09:41 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:09:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v429: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:42.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:42 np0005593232 python3.9[121970]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:09:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:42.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:43 np0005593232 python3.9[122049]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:09:43 np0005593232 python3.9[122201]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:09:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v430: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:09:44 np0005593232 python3.9[122279]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:09:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:09:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:44.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:09:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:44.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:45 np0005593232 python3.9[122432]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:09:45 np0005593232 systemd[1]: Reloading.
Jan 23 04:09:45 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:09:45 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:09:45 np0005593232 systemd[1]: Starting Create netns directory...
Jan 23 04:09:45 np0005593232 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 23 04:09:45 np0005593232 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 23 04:09:45 np0005593232 systemd[1]: Finished Create netns directory.
Jan 23 04:09:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v431: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:09:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:09:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:09:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:09:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:09:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:09:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:09:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:09:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:09:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:09:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:09:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:09:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:09:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:09:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:09:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:09:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:09:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:09:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:09:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:09:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:09:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:09:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:09:46 np0005593232 python3.9[122624]: ansible-ansible.builtin.service_facts Invoked
Jan 23 04:09:46 np0005593232 network[122641]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 04:09:46 np0005593232 network[122642]: 'network-scripts' will be removed from distribution in near future.
Jan 23 04:09:46 np0005593232 network[122643]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 04:09:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:09:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:46.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:09:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:09:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:47.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:09:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v432: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:09:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:48.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:09:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:49.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:09:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v433: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:50.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:51.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:09:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:09:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:09:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:09:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v434: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:09:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:09:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:09:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:09:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:09:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:09:52 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 483ac51e-1d9a-4194-b2cc-1f37ba6b4ae2 does not exist
Jan 23 04:09:52 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 04374456-14ad-47bf-a7fd-78056e2ee53f does not exist
Jan 23 04:09:52 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a4e6391e-75fd-45f6-b453-3387a17908a7 does not exist
Jan 23 04:09:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:09:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:09:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:09:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:09:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:09:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:09:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:09:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:52.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:09:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:09:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:09:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:09:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:09:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:09:52 np0005593232 python3.9[123258]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:09:52 np0005593232 podman[123300]: 2026-01-23 09:09:52.764975936 +0000 UTC m=+0.039863852 container create d9a6c354774d85f931c6c8deb9a38edff9c0975900bd025ba67812c9af9e03ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lewin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 04:09:52 np0005593232 systemd[1]: Started libpod-conmon-d9a6c354774d85f931c6c8deb9a38edff9c0975900bd025ba67812c9af9e03ba.scope.
Jan 23 04:09:52 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:09:52 np0005593232 podman[123300]: 2026-01-23 09:09:52.747449773 +0000 UTC m=+0.022337709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:09:52 np0005593232 podman[123300]: 2026-01-23 09:09:52.847549078 +0000 UTC m=+0.122437014 container init d9a6c354774d85f931c6c8deb9a38edff9c0975900bd025ba67812c9af9e03ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lewin, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 04:09:52 np0005593232 podman[123300]: 2026-01-23 09:09:52.85404594 +0000 UTC m=+0.128933856 container start d9a6c354774d85f931c6c8deb9a38edff9c0975900bd025ba67812c9af9e03ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 04:09:52 np0005593232 podman[123300]: 2026-01-23 09:09:52.857650032 +0000 UTC m=+0.132537978 container attach d9a6c354774d85f931c6c8deb9a38edff9c0975900bd025ba67812c9af9e03ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:09:52 np0005593232 happy_lewin[123318]: 167 167
Jan 23 04:09:52 np0005593232 systemd[1]: libpod-d9a6c354774d85f931c6c8deb9a38edff9c0975900bd025ba67812c9af9e03ba.scope: Deactivated successfully.
Jan 23 04:09:52 np0005593232 podman[123300]: 2026-01-23 09:09:52.85901065 +0000 UTC m=+0.133898566 container died d9a6c354774d85f931c6c8deb9a38edff9c0975900bd025ba67812c9af9e03ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:09:52 np0005593232 systemd[1]: var-lib-containers-storage-overlay-73651ef97dbbbdd56693678312df62c5b8a9f5a70e64f6be944d92dea9b3ba8d-merged.mount: Deactivated successfully.
Jan 23 04:09:52 np0005593232 podman[123300]: 2026-01-23 09:09:52.900608539 +0000 UTC m=+0.175496455 container remove d9a6c354774d85f931c6c8deb9a38edff9c0975900bd025ba67812c9af9e03ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:09:52 np0005593232 systemd[1]: libpod-conmon-d9a6c354774d85f931c6c8deb9a38edff9c0975900bd025ba67812c9af9e03ba.scope: Deactivated successfully.
Jan 23 04:09:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:53.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:53 np0005593232 podman[123418]: 2026-01-23 09:09:53.076059452 +0000 UTC m=+0.056579752 container create 0c272494520ac85e41cc62d09384bd5f9aedffe40fcec19110cd7ea0d523962f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:09:53 np0005593232 systemd[1]: Started libpod-conmon-0c272494520ac85e41cc62d09384bd5f9aedffe40fcec19110cd7ea0d523962f.scope.
Jan 23 04:09:53 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:09:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b8c25822fbdf7e2a22fd8b0b1e614479e0940a690eec9b8ce23efc6279c7a49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:09:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b8c25822fbdf7e2a22fd8b0b1e614479e0940a690eec9b8ce23efc6279c7a49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:09:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b8c25822fbdf7e2a22fd8b0b1e614479e0940a690eec9b8ce23efc6279c7a49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:09:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b8c25822fbdf7e2a22fd8b0b1e614479e0940a690eec9b8ce23efc6279c7a49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:09:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b8c25822fbdf7e2a22fd8b0b1e614479e0940a690eec9b8ce23efc6279c7a49/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:09:53 np0005593232 podman[123418]: 2026-01-23 09:09:53.051130521 +0000 UTC m=+0.031650851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:09:53 np0005593232 podman[123418]: 2026-01-23 09:09:53.150860065 +0000 UTC m=+0.131380385 container init 0c272494520ac85e41cc62d09384bd5f9aedffe40fcec19110cd7ea0d523962f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mendeleev, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 04:09:53 np0005593232 podman[123418]: 2026-01-23 09:09:53.159553159 +0000 UTC m=+0.140073459 container start 0c272494520ac85e41cc62d09384bd5f9aedffe40fcec19110cd7ea0d523962f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 04:09:53 np0005593232 podman[123418]: 2026-01-23 09:09:53.162926814 +0000 UTC m=+0.143447114 container attach 0c272494520ac85e41cc62d09384bd5f9aedffe40fcec19110cd7ea0d523962f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 04:09:53 np0005593232 python3.9[123419]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:09:54 np0005593232 python3.9[123591]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:09:54 np0005593232 keen_mendeleev[123435]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:09:54 np0005593232 keen_mendeleev[123435]: --> relative data size: 1.0
Jan 23 04:09:54 np0005593232 keen_mendeleev[123435]: --> All data devices are unavailable
Jan 23 04:09:54 np0005593232 systemd[1]: libpod-0c272494520ac85e41cc62d09384bd5f9aedffe40fcec19110cd7ea0d523962f.scope: Deactivated successfully.
Jan 23 04:09:54 np0005593232 podman[123418]: 2026-01-23 09:09:54.049693785 +0000 UTC m=+1.030214075 container died 0c272494520ac85e41cc62d09384bd5f9aedffe40fcec19110cd7ea0d523962f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:09:54 np0005593232 systemd[1]: var-lib-containers-storage-overlay-0b8c25822fbdf7e2a22fd8b0b1e614479e0940a690eec9b8ce23efc6279c7a49-merged.mount: Deactivated successfully.
Jan 23 04:09:54 np0005593232 podman[123418]: 2026-01-23 09:09:54.109659111 +0000 UTC m=+1.090179411 container remove 0c272494520ac85e41cc62d09384bd5f9aedffe40fcec19110cd7ea0d523962f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_mendeleev, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:09:54 np0005593232 systemd[1]: libpod-conmon-0c272494520ac85e41cc62d09384bd5f9aedffe40fcec19110cd7ea0d523962f.scope: Deactivated successfully.
Jan 23 04:09:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v435: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:09:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:54.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:54 np0005593232 podman[123903]: 2026-01-23 09:09:54.683134625 +0000 UTC m=+0.038033391 container create 1416bfc5353bc48a133dcaa6592f4dfbbd3d0ebcb30fe8923b82b51eb22f80a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pasteur, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 04:09:54 np0005593232 python3.9[123873]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:09:54 np0005593232 systemd[1]: Started libpod-conmon-1416bfc5353bc48a133dcaa6592f4dfbbd3d0ebcb30fe8923b82b51eb22f80a6.scope.
Jan 23 04:09:54 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:09:54 np0005593232 podman[123903]: 2026-01-23 09:09:54.758774791 +0000 UTC m=+0.113673577 container init 1416bfc5353bc48a133dcaa6592f4dfbbd3d0ebcb30fe8923b82b51eb22f80a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 04:09:54 np0005593232 podman[123903]: 2026-01-23 09:09:54.665242211 +0000 UTC m=+0.020140997 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:09:54 np0005593232 podman[123903]: 2026-01-23 09:09:54.765251793 +0000 UTC m=+0.120150569 container start 1416bfc5353bc48a133dcaa6592f4dfbbd3d0ebcb30fe8923b82b51eb22f80a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pasteur, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 04:09:54 np0005593232 podman[123903]: 2026-01-23 09:09:54.771040726 +0000 UTC m=+0.125939492 container attach 1416bfc5353bc48a133dcaa6592f4dfbbd3d0ebcb30fe8923b82b51eb22f80a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 04:09:54 np0005593232 systemd[1]: libpod-1416bfc5353bc48a133dcaa6592f4dfbbd3d0ebcb30fe8923b82b51eb22f80a6.scope: Deactivated successfully.
Jan 23 04:09:54 np0005593232 eloquent_pasteur[123921]: 167 167
Jan 23 04:09:54 np0005593232 conmon[123921]: conmon 1416bfc5353bc48a133d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1416bfc5353bc48a133dcaa6592f4dfbbd3d0ebcb30fe8923b82b51eb22f80a6.scope/container/memory.events
Jan 23 04:09:54 np0005593232 podman[123903]: 2026-01-23 09:09:54.773549817 +0000 UTC m=+0.128448603 container died 1416bfc5353bc48a133dcaa6592f4dfbbd3d0ebcb30fe8923b82b51eb22f80a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 04:09:54 np0005593232 systemd[1]: var-lib-containers-storage-overlay-126c452ff53ceb422a84c28b1a5bc597b64e4e09a5b33dae638d1c1971ace8d6-merged.mount: Deactivated successfully.
Jan 23 04:09:54 np0005593232 podman[123903]: 2026-01-23 09:09:54.996178046 +0000 UTC m=+0.351076812 container remove 1416bfc5353bc48a133dcaa6592f4dfbbd3d0ebcb30fe8923b82b51eb22f80a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_pasteur, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:09:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:55.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:55 np0005593232 systemd[1]: libpod-conmon-1416bfc5353bc48a133dcaa6592f4dfbbd3d0ebcb30fe8923b82b51eb22f80a6.scope: Deactivated successfully.
Jan 23 04:09:55 np0005593232 python3.9[124015]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:09:55 np0005593232 podman[124023]: 2026-01-23 09:09:55.16313552 +0000 UTC m=+0.047483986 container create 2c765c97bcc9908608cb860aa4283aa8f59a1fd29555d9635978007cecd7aabb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 04:09:55 np0005593232 systemd[1]: Started libpod-conmon-2c765c97bcc9908608cb860aa4283aa8f59a1fd29555d9635978007cecd7aabb.scope.
Jan 23 04:09:55 np0005593232 podman[124023]: 2026-01-23 09:09:55.140485583 +0000 UTC m=+0.024834069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:09:55 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:09:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e45bbb62d2568dca8e8c74cd5f06c74fa0e6d804bc39b8c4c38ae3607782ba63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:09:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e45bbb62d2568dca8e8c74cd5f06c74fa0e6d804bc39b8c4c38ae3607782ba63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:09:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e45bbb62d2568dca8e8c74cd5f06c74fa0e6d804bc39b8c4c38ae3607782ba63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:09:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e45bbb62d2568dca8e8c74cd5f06c74fa0e6d804bc39b8c4c38ae3607782ba63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:09:55 np0005593232 podman[124023]: 2026-01-23 09:09:55.255650841 +0000 UTC m=+0.139999327 container init 2c765c97bcc9908608cb860aa4283aa8f59a1fd29555d9635978007cecd7aabb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_benz, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:09:55 np0005593232 podman[124023]: 2026-01-23 09:09:55.26237509 +0000 UTC m=+0.146723556 container start 2c765c97bcc9908608cb860aa4283aa8f59a1fd29555d9635978007cecd7aabb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_benz, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 04:09:55 np0005593232 podman[124023]: 2026-01-23 09:09:55.276844847 +0000 UTC m=+0.161193323 container attach 2c765c97bcc9908608cb860aa4283aa8f59a1fd29555d9635978007cecd7aabb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]: {
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:    "0": [
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:        {
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:            "devices": [
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:                "/dev/loop3"
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:            ],
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:            "lv_name": "ceph_lv0",
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:            "lv_size": "7511998464",
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:            "name": "ceph_lv0",
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:            "tags": {
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:                "ceph.cluster_name": "ceph",
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:                "ceph.crush_device_class": "",
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:                "ceph.encrypted": "0",
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:                "ceph.osd_id": "0",
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:                "ceph.type": "block",
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:                "ceph.vdo": "0"
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:            },
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:            "type": "block",
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:            "vg_name": "ceph_vg0"
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:        }
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]:    ]
Jan 23 04:09:56 np0005593232 hopeful_benz[124047]: }
Jan 23 04:09:56 np0005593232 systemd[1]: libpod-2c765c97bcc9908608cb860aa4283aa8f59a1fd29555d9635978007cecd7aabb.scope: Deactivated successfully.
Jan 23 04:09:56 np0005593232 podman[124023]: 2026-01-23 09:09:56.092051476 +0000 UTC m=+0.976399962 container died 2c765c97bcc9908608cb860aa4283aa8f59a1fd29555d9635978007cecd7aabb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 04:09:56 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e45bbb62d2568dca8e8c74cd5f06c74fa0e6d804bc39b8c4c38ae3607782ba63-merged.mount: Deactivated successfully.
Jan 23 04:09:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v436: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:56 np0005593232 podman[124023]: 2026-01-23 09:09:56.161821798 +0000 UTC m=+1.046170264 container remove 2c765c97bcc9908608cb860aa4283aa8f59a1fd29555d9635978007cecd7aabb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_benz, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 04:09:56 np0005593232 systemd[1]: libpod-conmon-2c765c97bcc9908608cb860aa4283aa8f59a1fd29555d9635978007cecd7aabb.scope: Deactivated successfully.
Jan 23 04:09:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:09:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:56.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:09:56 np0005593232 python3.9[124256]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 23 04:09:56 np0005593232 systemd[1]: Starting Time & Date Service...
Jan 23 04:09:56 np0005593232 systemd[1]: Started Time & Date Service.
Jan 23 04:09:56 np0005593232 podman[124355]: 2026-01-23 09:09:56.74876763 +0000 UTC m=+0.043259487 container create b301d7ee0474d0bfe799795a591124f5491d798c3c5953b595d800f90e4ec6ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:09:56 np0005593232 systemd[1]: Started libpod-conmon-b301d7ee0474d0bfe799795a591124f5491d798c3c5953b595d800f90e4ec6ef.scope.
Jan 23 04:09:56 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:09:56 np0005593232 podman[124355]: 2026-01-23 09:09:56.727549124 +0000 UTC m=+0.022041011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:09:56 np0005593232 podman[124355]: 2026-01-23 09:09:56.83837322 +0000 UTC m=+0.132865097 container init b301d7ee0474d0bfe799795a591124f5491d798c3c5953b595d800f90e4ec6ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_noyce, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 23 04:09:56 np0005593232 podman[124355]: 2026-01-23 09:09:56.847216148 +0000 UTC m=+0.141708005 container start b301d7ee0474d0bfe799795a591124f5491d798c3c5953b595d800f90e4ec6ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_noyce, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:09:56 np0005593232 podman[124355]: 2026-01-23 09:09:56.852816456 +0000 UTC m=+0.147308313 container attach b301d7ee0474d0bfe799795a591124f5491d798c3c5953b595d800f90e4ec6ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 04:09:56 np0005593232 systemd[1]: libpod-b301d7ee0474d0bfe799795a591124f5491d798c3c5953b595d800f90e4ec6ef.scope: Deactivated successfully.
Jan 23 04:09:56 np0005593232 jovial_noyce[124395]: 167 167
Jan 23 04:09:56 np0005593232 conmon[124395]: conmon b301d7ee0474d0bfe799 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b301d7ee0474d0bfe799795a591124f5491d798c3c5953b595d800f90e4ec6ef.scope/container/memory.events
Jan 23 04:09:56 np0005593232 podman[124355]: 2026-01-23 09:09:56.854345999 +0000 UTC m=+0.148837866 container died b301d7ee0474d0bfe799795a591124f5491d798c3c5953b595d800f90e4ec6ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:09:56 np0005593232 systemd[1]: var-lib-containers-storage-overlay-884bf400ee18e7c9290f989bb213300afc2c1e7a6b86efc72a9191d21de5c2b2-merged.mount: Deactivated successfully.
Jan 23 04:09:56 np0005593232 podman[124355]: 2026-01-23 09:09:56.905798215 +0000 UTC m=+0.200290072 container remove b301d7ee0474d0bfe799795a591124f5491d798c3c5953b595d800f90e4ec6ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_noyce, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:09:56 np0005593232 systemd[1]: libpod-conmon-b301d7ee0474d0bfe799795a591124f5491d798c3c5953b595d800f90e4ec6ef.scope: Deactivated successfully.
Jan 23 04:09:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:09:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:57.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:09:57 np0005593232 podman[124473]: 2026-01-23 09:09:57.055594617 +0000 UTC m=+0.041292982 container create 1d25ddcf270b9aa64dcf5fca946b3ddc05a395fbcfd6f112d4f1dd254312b1dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 04:09:57 np0005593232 systemd[1]: Started libpod-conmon-1d25ddcf270b9aa64dcf5fca946b3ddc05a395fbcfd6f112d4f1dd254312b1dc.scope.
Jan 23 04:09:57 np0005593232 podman[124473]: 2026-01-23 09:09:57.036471439 +0000 UTC m=+0.022169824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:09:57 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:09:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e062ad6359efd663b2c914dac94b3310816452d13874a694a6adf12ceb9423d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:09:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e062ad6359efd663b2c914dac94b3310816452d13874a694a6adf12ceb9423d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:09:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e062ad6359efd663b2c914dac94b3310816452d13874a694a6adf12ceb9423d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:09:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e062ad6359efd663b2c914dac94b3310816452d13874a694a6adf12ceb9423d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:09:57 np0005593232 podman[124473]: 2026-01-23 09:09:57.151211555 +0000 UTC m=+0.136909940 container init 1d25ddcf270b9aa64dcf5fca946b3ddc05a395fbcfd6f112d4f1dd254312b1dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 23 04:09:57 np0005593232 podman[124473]: 2026-01-23 09:09:57.159238941 +0000 UTC m=+0.144937306 container start 1d25ddcf270b9aa64dcf5fca946b3ddc05a395fbcfd6f112d4f1dd254312b1dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:09:57 np0005593232 podman[124473]: 2026-01-23 09:09:57.16631307 +0000 UTC m=+0.152011435 container attach 1d25ddcf270b9aa64dcf5fca946b3ddc05a395fbcfd6f112d4f1dd254312b1dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mccarthy, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:09:57 np0005593232 python3.9[124569]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:09:58 np0005593232 nifty_mccarthy[124512]: {
Jan 23 04:09:58 np0005593232 nifty_mccarthy[124512]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:09:58 np0005593232 nifty_mccarthy[124512]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:09:58 np0005593232 nifty_mccarthy[124512]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:09:58 np0005593232 nifty_mccarthy[124512]:        "osd_id": 0,
Jan 23 04:09:58 np0005593232 nifty_mccarthy[124512]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:09:58 np0005593232 nifty_mccarthy[124512]:        "type": "bluestore"
Jan 23 04:09:58 np0005593232 nifty_mccarthy[124512]:    }
Jan 23 04:09:58 np0005593232 nifty_mccarthy[124512]: }
Jan 23 04:09:58 np0005593232 systemd[1]: libpod-1d25ddcf270b9aa64dcf5fca946b3ddc05a395fbcfd6f112d4f1dd254312b1dc.scope: Deactivated successfully.
Jan 23 04:09:58 np0005593232 podman[124473]: 2026-01-23 09:09:58.052479683 +0000 UTC m=+1.038178048 container died 1d25ddcf270b9aa64dcf5fca946b3ddc05a395fbcfd6f112d4f1dd254312b1dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mccarthy, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 04:09:58 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e062ad6359efd663b2c914dac94b3310816452d13874a694a6adf12ceb9423d8-merged.mount: Deactivated successfully.
Jan 23 04:09:58 np0005593232 podman[124473]: 2026-01-23 09:09:58.119440076 +0000 UTC m=+1.105138441 container remove 1d25ddcf270b9aa64dcf5fca946b3ddc05a395fbcfd6f112d4f1dd254312b1dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_mccarthy, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 04:09:58 np0005593232 systemd[1]: libpod-conmon-1d25ddcf270b9aa64dcf5fca946b3ddc05a395fbcfd6f112d4f1dd254312b1dc.scope: Deactivated successfully.
Jan 23 04:09:58 np0005593232 python3.9[124729]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:09:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:09:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v437: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:09:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:09:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:09:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:09:58 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1c949b43-e11a-40c8-92bf-537b8967b147 does not exist
Jan 23 04:09:58 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 5eb7dc58-f491-45b3-b550-127544ebeb57 does not exist
Jan 23 04:09:58 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 50718730-f585-44ea-bb33-391afa21cb8a does not exist
Jan 23 04:09:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:09:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:09:58.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:09:58 np0005593232 python3.9[124879]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:09:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:09:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:09:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:09:59.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:09:59 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:09:59 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:09:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:09:59 np0005593232 python3.9[125032]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:09:59 np0005593232 python3.9[125112]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.6bznbsh8 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:10:00 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 23 04:10:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v438: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:00 np0005593232 ceph-mon[74423]: overall HEALTH_OK
Jan 23 04:10:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:00.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:00 np0005593232 python3.9[125312]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:10:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:01.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:01 np0005593232 python3.9[125391]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:10:02 np0005593232 python3.9[125543]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:10:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v439: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:02.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:02 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0[81752]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 23 04:10:03 np0005593232 python3[125697]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 23 04:10:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:03.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:03 np0005593232 python3.9[125849]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:10:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v440: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:10:04 np0005593232 python3.9[125927]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:10:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:04.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:05 np0005593232 python3.9[126080]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:10:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:05.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:05 np0005593232 python3.9[126205]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159404.538217-899-266660978152591/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:10:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v441: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:06.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:06 np0005593232 python3.9[126357]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:10:06 np0005593232 python3.9[126436]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:10:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:07.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:10:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:10:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:10:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:10:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:10:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:10:08 np0005593232 python3.9[126588]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:10:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v442: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:08.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:08 np0005593232 python3.9[126666]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:10:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:09.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:10:09 np0005593232 python3.9[126819]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:10:09 np0005593232 python3.9[126897]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:10:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v443: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:10.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:10 np0005593232 python3.9[127049]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:10:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:11.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:11 np0005593232 python3.9[127205]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:10:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v444: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:12 np0005593232 python3.9[127357]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:10:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:12.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:12 np0005593232 python3.9[127510]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:10:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:13.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:13 np0005593232 python3.9[127662]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 23 04:10:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v445: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:10:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:10:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:14.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:10:14 np0005593232 python3.9[127814]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 23 04:10:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:15.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:15 np0005593232 systemd[1]: session-40.scope: Deactivated successfully.
Jan 23 04:10:15 np0005593232 systemd[1]: session-40.scope: Consumed 28.206s CPU time.
Jan 23 04:10:15 np0005593232 systemd-logind[808]: Session 40 logged out. Waiting for processes to exit.
Jan 23 04:10:15 np0005593232 systemd-logind[808]: Removed session 40.
Jan 23 04:10:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v446: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:16.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:17.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v447: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:18.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000056s ======
Jan 23 04:10:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:19.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Jan 23 04:10:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:10:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v448: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:20.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:20 np0005593232 systemd-logind[808]: New session 41 of user zuul.
Jan 23 04:10:20 np0005593232 systemd[1]: Started Session 41 of User zuul.
Jan 23 04:10:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:21.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:21 np0005593232 python3.9[128048]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 23 04:10:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v449: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:22 np0005593232 python3.9[128200]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:10:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:22.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:23 np0005593232 python3.9[128355]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 23 04:10:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:23.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:23 np0005593232 python3.9[128507]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.occi5bl5 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:10:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v450: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:10:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:24.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:24 np0005593232 python3.9[128632]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.occi5bl5 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769159423.2830338-107-199255192034358/.source.occi5bl5 _original_basename=.dg_pdssh follow=False checksum=10ad371b9444ca89894e9504601831d6af2e14d2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:10:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:25.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:25 np0005593232 python3.9[128785]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:10:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v451: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:26.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:26 np0005593232 python3.9[128937]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsLbdPIA8nc52wSKcOItc1xJ6faU3FwhWecUgXZZC+Q1wLSrdN9vgOExBhQSwwodluzJ5/GT9VbCuujyBvk7RMEim1+fw7T58Th56PR8y2lL6F6F3ni4S21QxInTLml+/id8wwEZAkFjbCF/AjCRDyH7a6H4wIZtd5ZuzWJuuBENNdtu/qD1QQYkNegqllogNpkdpAFZgvee26yw2sbCX8kpbJoJsowaQUckoRtT2jj7985CLxErKZ8YO8ZozjfuCDCKbcJT0KFimievJZmKXvGaWG5H+P509XDsfN62aQr22US8FbYjdK1lfrJoetkc/MK4h7QuCs6MH2qYiqXIkJYKMSReM+sH3X7V7pSWSUkr0DHREVvBGcC2lRSx45lUCTEtcTY7XmxGORvCORMYla0l1H3mEIkfYLS4sXYtRSHkyFnyQgbNP5MnrmXlK0vrAA81r5U+dOhIL/H2e7S4xcLItH7weUOHIAmCj266mm9+xJyyd7NZ+eUgS0Md5p4Bc=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUSudroiFEdRPXgUCqRHbNRLelYP5RQGMMCn6zD8pfH#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDJLsx8RxJz6M7PIyGcFdzR+Ldl788501Y8ZWLJ8hnDzMCaRkGjzE+kzO/uN75IEtV3aVEl1jNQlk7wON+lORGQ=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD7jdzOPltwN8PSb4q9DCiO5zY7TIK6sENpltjjN4gdZgxOTsj/dxnfxJlO2lYI1dFyyFnDdZj88a4x1KI5Bnnvl5KRvvZiianfivZWKq9Ngf9fzf7+5CsDFBiu6a7GAfXMf9FocVpqlXf7fsXmb5Iv2xUpNnye4EFIuW965X3SNrRpujRnDe+i0lIwrOsus4R86qn38MWOLfPBAWFYdBaVfTUYjC0eT/I81Y/T2RKqf7XK/bsuHobZ+/a7lymuPsS9L0DFg25ZoIlvkPUVfZxTO5FCyw8GMR+AgbnMQyHwx2JAmewwH3M2l+zVdDQjsE1ZRFlJCmwle9LBa1oFhuLfxLqsykQploeB5Ch/VppbnRQ/GamwWLU5HEKMH2wZ6IymURW7nSStlEhNWvK+Bb9rIy65M6AFOEW94xId4nc+IraS6rc2cuM3Rp97S/6olqjlFDZisdUwdAlhIKuJjA7SsYZ6HyCEbRN3mvMnWbkqpyY605kewQ6kdmucNeWgRtk=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE++PPNOKtggGl2mGWEm1DV2WpblvGA/F2TEEVeMrsU2#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP3uOoytpWGDF46u3wwDFxwF05HMnZd51GvbceZrDgZRmc5sxbF+OawPD9kGTcjnaUTzvqWgbFNvcmpuaNTnpzc=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq2Yxebv3BUxXHPuf6nN00teEMYUUVEWMZOqcwNO1dyibdbyxre6VweeeiBR/lerW1mIcmB67juCuLffEgDo8uPtZx9HrD1psd+ji78YeJuvbKIEcTwdtGF0I8PeogHunx+4KBxFsHeF6JHN9+H7lTHiSSIDFzk9BwDkAKEWsYHe8z+5SPDU//XiYNv0drE59KiQF586rnjPR3VZk6WaR+hp2PiHbUUSOvnyB4kI4bCXSCU/Oxv7HDvgeCJapABjisMZg4aiteZ7EaD1yVndkQiS6OxfOGP1srgtNkRL4Idc/XCFXH754lbRd8GzUF0n8N0HbWTcFDuTU+bvhuIH+3EDNxsDQkSCdJTw2EPb/mqZVdXSFxLXUBcXnYkBWZirpgC3g6okg2RQU2bxigFs7lFwJT6QE+wz0DK7Z3ib0XQxjRlY6PIwn1D2soMwKVarxpeM2FfsGrHMHaHioRTVbKpzBMA1oUICSUCvzyhd0I43cO2rUEK/8EMYSsTVRulKs=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII4nVnNUbCVQAtKJF7UUtMQxNhMw9eVlRVofBpQ70iUi#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPqfkBgoQjr/gZBK1F9K576GMtkxSY6lVgROItGrW+R9EA2lvnOt71IGO0M0lGVvCkTtLktdNpSsYnBu2cJn+4c=#012 create=True mode=0644 path=/tmp/ansible.occi5bl5 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:10:26 np0005593232 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 23 04:10:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:27.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:27 np0005593232 python3.9[129092]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.occi5bl5' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:10:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v452: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:28 np0005593232 python3.9[129246]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.occi5bl5 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:10:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:28.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:28 np0005593232 systemd[1]: session-41.scope: Deactivated successfully.
Jan 23 04:10:28 np0005593232 systemd[1]: session-41.scope: Consumed 4.825s CPU time.
Jan 23 04:10:28 np0005593232 systemd-logind[808]: Session 41 logged out. Waiting for processes to exit.
Jan 23 04:10:28 np0005593232 systemd-logind[808]: Removed session 41.
Jan 23 04:10:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:29.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:10:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v453: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:30.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:31.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v454: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:32.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:33.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v455: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:10:34 np0005593232 systemd-logind[808]: New session 42 of user zuul.
Jan 23 04:10:34 np0005593232 systemd[1]: Started Session 42 of User zuul.
Jan 23 04:10:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:34.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:35.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:35 np0005593232 python3.9[129428]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:10:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v456: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:36.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:36 np0005593232 python3.9[129584]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 23 04:10:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:10:37
Jan 23 04:10:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:10:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:10:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.log', '.mgr', '.rgw.root', 'vms', 'backups', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta']
Jan 23 04:10:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:10:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:37.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:10:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:10:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:10:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:10:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:10:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:10:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:10:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:10:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:10:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:10:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:10:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:10:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:10:37 np0005593232 python3.9[129739]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 04:10:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:10:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:10:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:10:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v457: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:38.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:38 np0005593232 python3.9[129892]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:10:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:39.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:10:39 np0005593232 python3.9[130046]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:10:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v458: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:40.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:40 np0005593232 python3.9[130248]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:10:41 np0005593232 systemd[1]: session-42.scope: Deactivated successfully.
Jan 23 04:10:41 np0005593232 systemd[1]: session-42.scope: Consumed 4.156s CPU time.
Jan 23 04:10:41 np0005593232 systemd-logind[808]: Session 42 logged out. Waiting for processes to exit.
Jan 23 04:10:41 np0005593232 systemd-logind[808]: Removed session 42.
Jan 23 04:10:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:41.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v459: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:42.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:43.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v460: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:10:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:10:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:44.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:10:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:45.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v461: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:10:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:10:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:10:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:10:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:10:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:10:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:10:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:10:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:10:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:10:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:10:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:10:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:10:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:10:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:10:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:10:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:10:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:10:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:10:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:10:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:10:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:10:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:10:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:46.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:46 np0005593232 systemd-logind[808]: New session 43 of user zuul.
Jan 23 04:10:46 np0005593232 systemd[1]: Started Session 43 of User zuul.
Jan 23 04:10:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:47.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:47 np0005593232 python3.9[130430]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:10:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v462: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:48.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:48 np0005593232 python3.9[130586]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 04:10:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:49.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:10:50 np0005593232 python3.9[130671]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 23 04:10:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v463: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:50.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:51.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v464: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:52.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:52 np0005593232 python3.9[130823]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:10:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:53.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v465: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:10:54 np0005593232 python3.9[130975]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 04:10:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:54.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:10:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:55.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:10:55 np0005593232 python3.9[131126]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:10:56 np0005593232 python3.9[131276]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:10:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v466: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:56.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:56 np0005593232 systemd[1]: session-43.scope: Deactivated successfully.
Jan 23 04:10:56 np0005593232 systemd[1]: session-43.scope: Consumed 6.692s CPU time.
Jan 23 04:10:56 np0005593232 systemd-logind[808]: Session 43 logged out. Waiting for processes to exit.
Jan 23 04:10:56 np0005593232 systemd-logind[808]: Removed session 43.
Jan 23 04:10:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:10:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:57.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:10:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v467: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:10:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:10:58.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:10:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:10:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:10:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:10:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:10:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:10:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:10:59.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:10:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:10:59 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:10:59 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:11:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:11:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:11:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:11:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:11:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:11:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:11:00 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev f417b753-9c9c-4f98-a463-cc9ce133be01 does not exist
Jan 23 04:11:00 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 94ff43f3-669f-4581-8a3a-29c268f9a852 does not exist
Jan 23 04:11:00 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev fea1b6dc-d370-4908-853f-b4c6c09f5259 does not exist
Jan 23 04:11:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:11:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:11:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:11:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:11:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:11:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:11:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v468: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:00.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:00 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:11:00 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:11:00 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:11:00 np0005593232 podman[131624]: 2026-01-23 09:11:00.615264755 +0000 UTC m=+0.045873932 container create a3cc384b2f3241617c7900771979826126695e1e5940bc603fd1dc9beafbd606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_fermat, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:11:00 np0005593232 systemd[1]: Started libpod-conmon-a3cc384b2f3241617c7900771979826126695e1e5940bc603fd1dc9beafbd606.scope.
Jan 23 04:11:00 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:11:00 np0005593232 podman[131624]: 2026-01-23 09:11:00.593886778 +0000 UTC m=+0.024495975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:11:00 np0005593232 podman[131624]: 2026-01-23 09:11:00.699989141 +0000 UTC m=+0.130598338 container init a3cc384b2f3241617c7900771979826126695e1e5940bc603fd1dc9beafbd606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:11:00 np0005593232 podman[131624]: 2026-01-23 09:11:00.70746089 +0000 UTC m=+0.138070067 container start a3cc384b2f3241617c7900771979826126695e1e5940bc603fd1dc9beafbd606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_fermat, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 23 04:11:00 np0005593232 podman[131624]: 2026-01-23 09:11:00.711100012 +0000 UTC m=+0.141709189 container attach a3cc384b2f3241617c7900771979826126695e1e5940bc603fd1dc9beafbd606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 04:11:00 np0005593232 clever_fermat[131641]: 167 167
Jan 23 04:11:00 np0005593232 systemd[1]: libpod-a3cc384b2f3241617c7900771979826126695e1e5940bc603fd1dc9beafbd606.scope: Deactivated successfully.
Jan 23 04:11:00 np0005593232 conmon[131641]: conmon a3cc384b2f3241617c79 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a3cc384b2f3241617c7900771979826126695e1e5940bc603fd1dc9beafbd606.scope/container/memory.events
Jan 23 04:11:00 np0005593232 podman[131624]: 2026-01-23 09:11:00.717412528 +0000 UTC m=+0.148021705 container died a3cc384b2f3241617c7900771979826126695e1e5940bc603fd1dc9beafbd606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 23 04:11:00 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6b8be935ad75c22beef7a4a9c095b656f06aeb28c9e410ec336358995d451b08-merged.mount: Deactivated successfully.
Jan 23 04:11:00 np0005593232 podman[131624]: 2026-01-23 09:11:00.761684635 +0000 UTC m=+0.192293822 container remove a3cc384b2f3241617c7900771979826126695e1e5940bc603fd1dc9beafbd606 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:11:00 np0005593232 systemd[1]: libpod-conmon-a3cc384b2f3241617c7900771979826126695e1e5940bc603fd1dc9beafbd606.scope: Deactivated successfully.
Jan 23 04:11:00 np0005593232 podman[131666]: 2026-01-23 09:11:00.922202498 +0000 UTC m=+0.044691789 container create a833f284cbbcce1639749b7c8f730ee442dbaafc1547d541cbda42cc22679ae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_banzai, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 04:11:00 np0005593232 systemd[1]: Started libpod-conmon-a833f284cbbcce1639749b7c8f730ee442dbaafc1547d541cbda42cc22679ae3.scope.
Jan 23 04:11:00 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:11:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35071cdbaa6e4d407489aa3f4c3bffd5baa3afcc0c0fcd807eb358f87859bdb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:11:01 np0005593232 podman[131666]: 2026-01-23 09:11:00.905128351 +0000 UTC m=+0.027617662 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:11:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35071cdbaa6e4d407489aa3f4c3bffd5baa3afcc0c0fcd807eb358f87859bdb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:11:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35071cdbaa6e4d407489aa3f4c3bffd5baa3afcc0c0fcd807eb358f87859bdb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:11:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35071cdbaa6e4d407489aa3f4c3bffd5baa3afcc0c0fcd807eb358f87859bdb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:11:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35071cdbaa6e4d407489aa3f4c3bffd5baa3afcc0c0fcd807eb358f87859bdb4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:11:01 np0005593232 podman[131666]: 2026-01-23 09:11:01.01179516 +0000 UTC m=+0.134284451 container init a833f284cbbcce1639749b7c8f730ee442dbaafc1547d541cbda42cc22679ae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_banzai, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 04:11:01 np0005593232 podman[131666]: 2026-01-23 09:11:01.021679826 +0000 UTC m=+0.144169117 container start a833f284cbbcce1639749b7c8f730ee442dbaafc1547d541cbda42cc22679ae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_banzai, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 04:11:01 np0005593232 podman[131666]: 2026-01-23 09:11:01.025567935 +0000 UTC m=+0.148057226 container attach a833f284cbbcce1639749b7c8f730ee442dbaafc1547d541cbda42cc22679ae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_banzai, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 04:11:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:01.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:01 np0005593232 fervent_banzai[131682]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:11:01 np0005593232 fervent_banzai[131682]: --> relative data size: 1.0
Jan 23 04:11:01 np0005593232 fervent_banzai[131682]: --> All data devices are unavailable
Jan 23 04:11:01 np0005593232 systemd[1]: libpod-a833f284cbbcce1639749b7c8f730ee442dbaafc1547d541cbda42cc22679ae3.scope: Deactivated successfully.
Jan 23 04:11:01 np0005593232 podman[131666]: 2026-01-23 09:11:01.894756031 +0000 UTC m=+1.017245332 container died a833f284cbbcce1639749b7c8f730ee442dbaafc1547d541cbda42cc22679ae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 04:11:01 np0005593232 systemd[1]: var-lib-containers-storage-overlay-35071cdbaa6e4d407489aa3f4c3bffd5baa3afcc0c0fcd807eb358f87859bdb4-merged.mount: Deactivated successfully.
Jan 23 04:11:01 np0005593232 podman[131666]: 2026-01-23 09:11:01.955200659 +0000 UTC m=+1.077689950 container remove a833f284cbbcce1639749b7c8f730ee442dbaafc1547d541cbda42cc22679ae3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:11:01 np0005593232 systemd[1]: libpod-conmon-a833f284cbbcce1639749b7c8f730ee442dbaafc1547d541cbda42cc22679ae3.scope: Deactivated successfully.
Jan 23 04:11:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v469: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:02.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:02 np0005593232 podman[131849]: 2026-01-23 09:11:02.615033109 +0000 UTC m=+0.044381931 container create e413fb4e6a7ddeff9019a6f469876f2373a3c111598414ec6b298100493d457b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 23 04:11:02 np0005593232 systemd[1]: Started libpod-conmon-e413fb4e6a7ddeff9019a6f469876f2373a3c111598414ec6b298100493d457b.scope.
Jan 23 04:11:02 np0005593232 systemd-logind[808]: New session 44 of user zuul.
Jan 23 04:11:02 np0005593232 systemd[1]: Started Session 44 of User zuul.
Jan 23 04:11:02 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:11:02 np0005593232 podman[131849]: 2026-01-23 09:11:02.684384166 +0000 UTC m=+0.113733008 container init e413fb4e6a7ddeff9019a6f469876f2373a3c111598414ec6b298100493d457b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_agnesi, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 04:11:02 np0005593232 podman[131849]: 2026-01-23 09:11:02.591664436 +0000 UTC m=+0.021013288 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:11:02 np0005593232 podman[131849]: 2026-01-23 09:11:02.691550556 +0000 UTC m=+0.120899368 container start e413fb4e6a7ddeff9019a6f469876f2373a3c111598414ec6b298100493d457b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 04:11:02 np0005593232 podman[131849]: 2026-01-23 09:11:02.695025173 +0000 UTC m=+0.124373985 container attach e413fb4e6a7ddeff9019a6f469876f2373a3c111598414ec6b298100493d457b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:11:02 np0005593232 naughty_agnesi[131866]: 167 167
Jan 23 04:11:02 np0005593232 systemd[1]: libpod-e413fb4e6a7ddeff9019a6f469876f2373a3c111598414ec6b298100493d457b.scope: Deactivated successfully.
Jan 23 04:11:02 np0005593232 podman[131849]: 2026-01-23 09:11:02.69707464 +0000 UTC m=+0.126423462 container died e413fb4e6a7ddeff9019a6f469876f2373a3c111598414ec6b298100493d457b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_agnesi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:11:02 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f24da00c8125c8b87344049a736ea353f5287f0dc8140f05116328d94c448687-merged.mount: Deactivated successfully.
Jan 23 04:11:02 np0005593232 podman[131849]: 2026-01-23 09:11:02.731719118 +0000 UTC m=+0.161067930 container remove e413fb4e6a7ddeff9019a6f469876f2373a3c111598414ec6b298100493d457b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_agnesi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 04:11:02 np0005593232 systemd[1]: libpod-conmon-e413fb4e6a7ddeff9019a6f469876f2373a3c111598414ec6b298100493d457b.scope: Deactivated successfully.
Jan 23 04:11:02 np0005593232 podman[131944]: 2026-01-23 09:11:02.886489451 +0000 UTC m=+0.048748403 container create 5da5bcf2bd2d18c6ba387f6da88e15239970ff09ea39409930fb6561ee35c237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 04:11:02 np0005593232 systemd[1]: Started libpod-conmon-5da5bcf2bd2d18c6ba387f6da88e15239970ff09ea39409930fb6561ee35c237.scope.
Jan 23 04:11:02 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:11:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3727e8e670050ddfd1f86301ad32903aded051f0eb2c5bd341a4feba509a109a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:11:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3727e8e670050ddfd1f86301ad32903aded051f0eb2c5bd341a4feba509a109a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:11:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3727e8e670050ddfd1f86301ad32903aded051f0eb2c5bd341a4feba509a109a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:11:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3727e8e670050ddfd1f86301ad32903aded051f0eb2c5bd341a4feba509a109a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:11:02 np0005593232 podman[131944]: 2026-01-23 09:11:02.86393 +0000 UTC m=+0.026189002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:11:02 np0005593232 podman[131944]: 2026-01-23 09:11:02.966047003 +0000 UTC m=+0.128305975 container init 5da5bcf2bd2d18c6ba387f6da88e15239970ff09ea39409930fb6561ee35c237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_meninsky, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 23 04:11:02 np0005593232 podman[131944]: 2026-01-23 09:11:02.973751698 +0000 UTC m=+0.136010650 container start 5da5bcf2bd2d18c6ba387f6da88e15239970ff09ea39409930fb6561ee35c237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 04:11:02 np0005593232 podman[131944]: 2026-01-23 09:11:02.977638826 +0000 UTC m=+0.139897778 container attach 5da5bcf2bd2d18c6ba387f6da88e15239970ff09ea39409930fb6561ee35c237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:11:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:03.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]: {
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:    "0": [
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:        {
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:            "devices": [
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:                "/dev/loop3"
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:            ],
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:            "lv_name": "ceph_lv0",
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:            "lv_size": "7511998464",
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:            "name": "ceph_lv0",
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:            "tags": {
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:                "ceph.cluster_name": "ceph",
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:                "ceph.crush_device_class": "",
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:                "ceph.encrypted": "0",
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:                "ceph.osd_id": "0",
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:                "ceph.type": "block",
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:                "ceph.vdo": "0"
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:            },
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:            "type": "block",
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:            "vg_name": "ceph_vg0"
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:        }
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]:    ]
Jan 23 04:11:03 np0005593232 beautiful_meninsky[131960]: }
Jan 23 04:11:03 np0005593232 systemd[1]: libpod-5da5bcf2bd2d18c6ba387f6da88e15239970ff09ea39409930fb6561ee35c237.scope: Deactivated successfully.
Jan 23 04:11:03 np0005593232 podman[131944]: 2026-01-23 09:11:03.776294923 +0000 UTC m=+0.938553875 container died 5da5bcf2bd2d18c6ba387f6da88e15239970ff09ea39409930fb6561ee35c237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_meninsky, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:11:03 np0005593232 systemd[1]: var-lib-containers-storage-overlay-3727e8e670050ddfd1f86301ad32903aded051f0eb2c5bd341a4feba509a109a-merged.mount: Deactivated successfully.
Jan 23 04:11:03 np0005593232 podman[131944]: 2026-01-23 09:11:03.839836408 +0000 UTC m=+1.002095360 container remove 5da5bcf2bd2d18c6ba387f6da88e15239970ff09ea39409930fb6561ee35c237 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 04:11:03 np0005593232 systemd[1]: libpod-conmon-5da5bcf2bd2d18c6ba387f6da88e15239970ff09ea39409930fb6561ee35c237.scope: Deactivated successfully.
Jan 23 04:11:04 np0005593232 python3.9[132066]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:11:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v470: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:11:04 np0005593232 podman[132251]: 2026-01-23 09:11:04.423714916 +0000 UTC m=+0.038556608 container create 54b184d58f04bf73d7703f7186c1c44902b10e1ea907c592794526287cff62bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shockley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 04:11:04 np0005593232 systemd[1]: Started libpod-conmon-54b184d58f04bf73d7703f7186c1c44902b10e1ea907c592794526287cff62bb.scope.
Jan 23 04:11:04 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:11:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:04.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:04 np0005593232 podman[132251]: 2026-01-23 09:11:04.408125981 +0000 UTC m=+0.022967703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:11:04 np0005593232 podman[132251]: 2026-01-23 09:11:04.505045598 +0000 UTC m=+0.119887320 container init 54b184d58f04bf73d7703f7186c1c44902b10e1ea907c592794526287cff62bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shockley, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 04:11:04 np0005593232 podman[132251]: 2026-01-23 09:11:04.511998232 +0000 UTC m=+0.126839924 container start 54b184d58f04bf73d7703f7186c1c44902b10e1ea907c592794526287cff62bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shockley, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 04:11:04 np0005593232 affectionate_shockley[132267]: 167 167
Jan 23 04:11:04 np0005593232 systemd[1]: libpod-54b184d58f04bf73d7703f7186c1c44902b10e1ea907c592794526287cff62bb.scope: Deactivated successfully.
Jan 23 04:11:04 np0005593232 podman[132251]: 2026-01-23 09:11:04.517751693 +0000 UTC m=+0.132593385 container attach 54b184d58f04bf73d7703f7186c1c44902b10e1ea907c592794526287cff62bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shockley, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 04:11:04 np0005593232 podman[132251]: 2026-01-23 09:11:04.5179994 +0000 UTC m=+0.132841092 container died 54b184d58f04bf73d7703f7186c1c44902b10e1ea907c592794526287cff62bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shockley, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:11:04 np0005593232 systemd[1]: var-lib-containers-storage-overlay-8d2381d05b584d1459bdeb69d9465eb2461e8461685ba729c33ee522417ca7ec-merged.mount: Deactivated successfully.
Jan 23 04:11:04 np0005593232 podman[132251]: 2026-01-23 09:11:04.549770517 +0000 UTC m=+0.164612209 container remove 54b184d58f04bf73d7703f7186c1c44902b10e1ea907c592794526287cff62bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_shockley, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 23 04:11:04 np0005593232 systemd[1]: libpod-conmon-54b184d58f04bf73d7703f7186c1c44902b10e1ea907c592794526287cff62bb.scope: Deactivated successfully.
Jan 23 04:11:04 np0005593232 podman[132290]: 2026-01-23 09:11:04.722532892 +0000 UTC m=+0.040694577 container create 51e8c9d65945fe30be5a19583ef4aa9f5f27b9b7e2ac9deb95c30ad03745b454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wiles, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:11:04 np0005593232 systemd[1]: Started libpod-conmon-51e8c9d65945fe30be5a19583ef4aa9f5f27b9b7e2ac9deb95c30ad03745b454.scope.
Jan 23 04:11:04 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:11:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70b7f88135d1babcd11986c5a984b82795373763d1cece481ffab4a03640ebcd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:11:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70b7f88135d1babcd11986c5a984b82795373763d1cece481ffab4a03640ebcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:11:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70b7f88135d1babcd11986c5a984b82795373763d1cece481ffab4a03640ebcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:11:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70b7f88135d1babcd11986c5a984b82795373763d1cece481ffab4a03640ebcd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:11:04 np0005593232 podman[132290]: 2026-01-23 09:11:04.70526039 +0000 UTC m=+0.023422085 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:11:04 np0005593232 podman[132290]: 2026-01-23 09:11:04.816479316 +0000 UTC m=+0.134641041 container init 51e8c9d65945fe30be5a19583ef4aa9f5f27b9b7e2ac9deb95c30ad03745b454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wiles, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 23 04:11:04 np0005593232 podman[132290]: 2026-01-23 09:11:04.823828782 +0000 UTC m=+0.141990457 container start 51e8c9d65945fe30be5a19583ef4aa9f5f27b9b7e2ac9deb95c30ad03745b454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Jan 23 04:11:04 np0005593232 podman[132290]: 2026-01-23 09:11:04.827303069 +0000 UTC m=+0.145464794 container attach 51e8c9d65945fe30be5a19583ef4aa9f5f27b9b7e2ac9deb95c30ad03745b454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 04:11:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:05.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:05 np0005593232 silly_wiles[132307]: {
Jan 23 04:11:05 np0005593232 silly_wiles[132307]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:11:05 np0005593232 silly_wiles[132307]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:11:05 np0005593232 silly_wiles[132307]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:11:05 np0005593232 silly_wiles[132307]:        "osd_id": 0,
Jan 23 04:11:05 np0005593232 silly_wiles[132307]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:11:05 np0005593232 silly_wiles[132307]:        "type": "bluestore"
Jan 23 04:11:05 np0005593232 silly_wiles[132307]:    }
Jan 23 04:11:05 np0005593232 silly_wiles[132307]: }
Jan 23 04:11:05 np0005593232 systemd[1]: libpod-51e8c9d65945fe30be5a19583ef4aa9f5f27b9b7e2ac9deb95c30ad03745b454.scope: Deactivated successfully.
Jan 23 04:11:05 np0005593232 podman[132290]: 2026-01-23 09:11:05.730126234 +0000 UTC m=+1.048287919 container died 51e8c9d65945fe30be5a19583ef4aa9f5f27b9b7e2ac9deb95c30ad03745b454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 04:11:05 np0005593232 systemd[1]: var-lib-containers-storage-overlay-70b7f88135d1babcd11986c5a984b82795373763d1cece481ffab4a03640ebcd-merged.mount: Deactivated successfully.
Jan 23 04:11:05 np0005593232 podman[132290]: 2026-01-23 09:11:05.845079506 +0000 UTC m=+1.163241181 container remove 51e8c9d65945fe30be5a19583ef4aa9f5f27b9b7e2ac9deb95c30ad03745b454 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 23 04:11:05 np0005593232 systemd[1]: libpod-conmon-51e8c9d65945fe30be5a19583ef4aa9f5f27b9b7e2ac9deb95c30ad03745b454.scope: Deactivated successfully.
Jan 23 04:11:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:11:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:11:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:11:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:11:05 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1dd8f609-3e28-4763-bca6-b403a47dd60b does not exist
Jan 23 04:11:05 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 23d22408-46ff-47e7-b3b9-d385268a5310 does not exist
Jan 23 04:11:05 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 6bffdeb7-d53b-4b0e-9a46-39db4c437620 does not exist
Jan 23 04:11:06 np0005593232 python3.9[132468]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:11:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v471: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:06.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:06 np0005593232 python3.9[132670]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:11:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:11:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:11:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:11:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:11:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:07.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:11:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:11:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:11:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:11:07 np0005593232 python3.9[132823]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:11:08 np0005593232 python3.9[132946]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159466.978708-161-234650553819732/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=55a7baccbe705f22d6ffac6de692594a46770960 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:11:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v472: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:08.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:08 np0005593232 python3.9[133098]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:11:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:11:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:09.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:09 np0005593232 python3.9[133222]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159468.334749-161-117469953038814/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=b5c90b44c6774a0fb2738dc9aefa548e4239c50f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:11:09 np0005593232 python3.9[133374]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:11:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v473: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:10 np0005593232 python3.9[133497]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159469.4271638-161-221710712966927/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ade55979fed860256086549215fb4d0c072c98c5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:11:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:10.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:11 np0005593232 python3.9[133650]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:11:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:11.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:11 np0005593232 python3.9[133802]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:11:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v474: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:12.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:12 np0005593232 python3.9[133954]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:11:13 np0005593232 python3.9[134078]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159472.1381361-330-142245474545239/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b1bea02119dad146e80098e23793304fbfd2f748 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:11:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:13.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:13 np0005593232 python3.9[134230]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:11:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v475: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:11:14 np0005593232 python3.9[134353]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159473.2887356-330-270904125036486/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=4d54572c36838e9e23d527be56268c8c0160f31d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:11:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:14.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:14 np0005593232 python3.9[134506]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:11:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:15.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:15 np0005593232 python3.9[134629]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159474.5157943-330-211016966947437/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ffb65e96ac393eb8a44a68a3d82ce9208b6716b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:11:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v476: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:16 np0005593232 python3.9[134781]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:11:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:16.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 04:11:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 7749 writes, 32K keys, 7749 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 7749 writes, 1502 syncs, 5.16 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7749 writes, 32K keys, 7749 commit groups, 1.0 writes per commit group, ingest: 20.66 MB, 0.03 MB/s#012Interval WAL: 7749 writes, 1502 syncs, 5.16 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fb9efc0430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 8.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fb9efc0430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 8.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Jan 23 04:11:16 np0005593232 python3.9[134934]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:11:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:17.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:17 np0005593232 python3.9[135086]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:11:18 np0005593232 python3.9[135209]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159477.1069386-500-84314024034327/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=9ad46c51287a6f97a7f8be8935f06033c17f8d97 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:11:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v477: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:18.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:18 np0005593232 python3.9[135361]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:11:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:11:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:19.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:19 np0005593232 python3.9[135485]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159478.284707-500-174604601350456/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=4d54572c36838e9e23d527be56268c8c0160f31d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:11:20 np0005593232 python3.9[135637]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:11:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v478: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:20.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:20 np0005593232 python3.9[135810]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159479.4697065-500-161926498930840/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=75980fac94be3fc1d5f54b26a7bcad27c561e181 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:11:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:21.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:21 np0005593232 python3.9[135963]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:11:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v479: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:22.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:22 np0005593232 python3.9[136115]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:11:23 np0005593232 python3.9[136239]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159482.0613184-700-210115362661994/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=193e99f8e1220a4ec0ffff2d0cee79b79a562ce2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:11:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:23.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:23 np0005593232 python3.9[136391]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:11:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v480: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:11:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:24.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:24 np0005593232 python3.9[136543]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:11:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:25.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:25 np0005593232 python3.9[136667]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159484.249144-774-130503016261607/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=193e99f8e1220a4ec0ffff2d0cee79b79a562ce2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:11:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v481: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:26 np0005593232 python3.9[136819]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:11:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:26.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:26 np0005593232 python3.9[136972]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:11:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:27.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:27 np0005593232 ceph-mgr[74726]: [devicehealth INFO root] Check health
Jan 23 04:11:28 np0005593232 python3.9[137095]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159486.4401987-851-256912559055133/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=193e99f8e1220a4ec0ffff2d0cee79b79a562ce2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:11:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v482: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:28.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:28 np0005593232 python3.9[137247]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:11:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:11:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:29.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:29 np0005593232 python3.9[137400]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:11:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v483: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:30 np0005593232 python3.9[137523]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159489.2987807-929-146539480563820/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=193e99f8e1220a4ec0ffff2d0cee79b79a562ce2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:11:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:30.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:31.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:31 np0005593232 python3.9[137676]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:11:32 np0005593232 python3.9[137828]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:11:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v484: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:32.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:32 np0005593232 python3.9[137951]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159491.500241-1000-156842339825151/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=193e99f8e1220a4ec0ffff2d0cee79b79a562ce2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:11:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:33.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:33 np0005593232 python3.9[138104]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:11:34 np0005593232 python3.9[138256]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:11:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v485: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:11:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:34.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:34 np0005593232 python3.9[138379]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159493.7285867-1072-45050108346868/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=193e99f8e1220a4ec0ffff2d0cee79b79a562ce2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:11:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:35.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v486: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:36 np0005593232 systemd[1]: session-44.scope: Deactivated successfully.
Jan 23 04:11:36 np0005593232 systemd[1]: session-44.scope: Consumed 22.539s CPU time.
Jan 23 04:11:36 np0005593232 systemd-logind[808]: Session 44 logged out. Waiting for processes to exit.
Jan 23 04:11:36 np0005593232 systemd-logind[808]: Removed session 44.
Jan 23 04:11:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:36.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:11:37
Jan 23 04:11:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:11:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:11:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['backups', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'volumes', '.mgr', '.rgw.root', 'default.rgw.control', 'default.rgw.log']
Jan 23 04:11:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:11:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:11:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:11:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:11:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:11:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:11:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:11:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:37.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:11:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:11:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:11:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:11:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:11:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:11:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:11:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:11:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:11:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:11:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v487: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:38.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:11:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:39.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v488: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:40.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:11:41.022581) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159501022818, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1651, "num_deletes": 251, "total_data_size": 3108117, "memory_usage": 3147040, "flush_reason": "Manual Compaction"}
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159501071713, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3042263, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10688, "largest_seqno": 12338, "table_properties": {"data_size": 3034673, "index_size": 4597, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14784, "raw_average_key_size": 19, "raw_value_size": 3019675, "raw_average_value_size": 3962, "num_data_blocks": 207, "num_entries": 762, "num_filter_entries": 762, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769159324, "oldest_key_time": 1769159324, "file_creation_time": 1769159501, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 49206 microseconds, and 9547 cpu microseconds.
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:11:41.071828) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3042263 bytes OK
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:11:41.071905) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:11:41.073588) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:11:41.073616) EVENT_LOG_v1 {"time_micros": 1769159501073610, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:11:41.073642) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3101325, prev total WAL file size 3101325, number of live WAL files 2.
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:11:41.075510) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2970KB)], [26(7405KB)]
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159501075721, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 10625827, "oldest_snapshot_seqno": -1}
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3998 keys, 8626020 bytes, temperature: kUnknown
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159501172856, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8626020, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8596031, "index_size": 18871, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 96999, "raw_average_key_size": 24, "raw_value_size": 8520526, "raw_average_value_size": 2131, "num_data_blocks": 817, "num_entries": 3998, "num_filter_entries": 3998, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769159501, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:11:41.173375) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8626020 bytes
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:11:41.175257) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 109.2 rd, 88.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 7.2 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(6.3) write-amplify(2.8) OK, records in: 4515, records dropped: 517 output_compression: NoCompression
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:11:41.175317) EVENT_LOG_v1 {"time_micros": 1769159501175296, "job": 10, "event": "compaction_finished", "compaction_time_micros": 97330, "compaction_time_cpu_micros": 30715, "output_level": 6, "num_output_files": 1, "total_output_size": 8626020, "num_input_records": 4515, "num_output_records": 3998, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159501176433, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159501178216, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:11:41.075122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:11:41.178429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:11:41.178442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:11:41.178444) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:11:41.178446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:11:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:11:41.178448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:11:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:41.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v489: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:42.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:43.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:11:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v490: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:44.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:44 np0005593232 systemd-logind[808]: New session 45 of user zuul.
Jan 23 04:11:44 np0005593232 systemd[1]: Started Session 45 of User zuul.
Jan 23 04:11:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:45.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:45 np0005593232 python3.9[138615]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:11:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v491: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:11:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:11:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:11:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:11:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:11:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:11:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:11:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:11:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:11:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:11:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:11:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:11:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:11:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:11:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:11:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:11:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:11:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:11:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:11:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:11:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:11:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:11:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:11:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:46.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:46 np0005593232 python3.9[138768]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:11:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:47.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:47 np0005593232 python3.9[138891]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769159506.1437871-62-150531126264840/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=9a6a528427b32e6ef98709d36c90302cf328f9ef backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:11:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v492: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:48.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:48 np0005593232 python3.9[139043]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:11:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:11:49 np0005593232 python3.9[139167]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769159507.794996-62-216848028332311/.source.conf _original_basename=ceph.conf follow=False checksum=e4aedaaab1f9b40918a770d92609389e4ab78681 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:11:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:49.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:49 np0005593232 systemd[1]: session-45.scope: Deactivated successfully.
Jan 23 04:11:49 np0005593232 systemd[1]: session-45.scope: Consumed 2.627s CPU time.
Jan 23 04:11:49 np0005593232 systemd-logind[808]: Session 45 logged out. Waiting for processes to exit.
Jan 23 04:11:49 np0005593232 systemd-logind[808]: Removed session 45.
Jan 23 04:11:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v493: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:50.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:51.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v494: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:52.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:53.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:11:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v495: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:54.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:55.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v496: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:11:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:56.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:11:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:57.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:57 np0005593232 systemd-logind[808]: New session 46 of user zuul.
Jan 23 04:11:57 np0005593232 systemd[1]: Started Session 46 of User zuul.
Jan 23 04:11:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v497: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:11:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:11:58.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:11:58 np0005593232 python3.9[139350]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:11:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:11:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:11:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:11:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:11:59.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:12:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v498: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:00 np0005593232 python3.9[139507]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:12:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:00.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:00 np0005593232 python3.9[139710]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:12:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:12:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:01.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:12:01 np0005593232 python3.9[139860]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:12:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v499: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:02.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:02 np0005593232 python3.9[140012]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 23 04:12:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:03.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v500: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:12:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:12:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:04.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:12:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:05.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:05 np0005593232 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 23 04:12:05 np0005593232 python3.9[140170]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 04:12:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v501: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:06.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:06 np0005593232 python3.9[140336]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 04:12:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:12:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:12:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:12:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:12:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 23 04:12:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 04:12:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 23 04:12:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 04:12:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 23 04:12:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:12:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:12:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:12:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:12:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:12:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:12:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:07.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:07 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:12:07 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:12:07 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 04:12:07 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 04:12:07 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 23 04:12:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:12:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:12:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:12:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:12:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:12:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:12:07 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev dfe1dfaf-0c53-4021-90ad-186bcada4a40 does not exist
Jan 23 04:12:07 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 619cd335-a843-4ccb-9d64-e11a1b04c466 does not exist
Jan 23 04:12:07 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev cf2be64d-5785-4ce3-a227-7dc54e4e4dce does not exist
Jan 23 04:12:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:12:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:12:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:12:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:12:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:12:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:12:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v502: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:08 np0005593232 podman[140527]: 2026-01-23 09:12:08.273472834 +0000 UTC m=+0.040533087 container create 4d3ce2c1307c9037cdce379fee2161bb4849757bfa0ed379597736d19a337e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_cartwright, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Jan 23 04:12:08 np0005593232 systemd[1]: Started libpod-conmon-4d3ce2c1307c9037cdce379fee2161bb4849757bfa0ed379597736d19a337e2d.scope.
Jan 23 04:12:08 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:12:08 np0005593232 podman[140527]: 2026-01-23 09:12:08.254163003 +0000 UTC m=+0.021223276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:12:08 np0005593232 podman[140527]: 2026-01-23 09:12:08.367769799 +0000 UTC m=+0.134830072 container init 4d3ce2c1307c9037cdce379fee2161bb4849757bfa0ed379597736d19a337e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_cartwright, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 04:12:08 np0005593232 podman[140527]: 2026-01-23 09:12:08.377719618 +0000 UTC m=+0.144779881 container start 4d3ce2c1307c9037cdce379fee2161bb4849757bfa0ed379597736d19a337e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_cartwright, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 23 04:12:08 np0005593232 podman[140527]: 2026-01-23 09:12:08.381419601 +0000 UTC m=+0.148479894 container attach 4d3ce2c1307c9037cdce379fee2161bb4849757bfa0ed379597736d19a337e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_cartwright, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 23 04:12:08 np0005593232 gracious_cartwright[140550]: 167 167
Jan 23 04:12:08 np0005593232 systemd[1]: libpod-4d3ce2c1307c9037cdce379fee2161bb4849757bfa0ed379597736d19a337e2d.scope: Deactivated successfully.
Jan 23 04:12:08 np0005593232 podman[140527]: 2026-01-23 09:12:08.387854722 +0000 UTC m=+0.154914975 container died 4d3ce2c1307c9037cdce379fee2161bb4849757bfa0ed379597736d19a337e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_cartwright, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:12:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay-009bda52df8d4d34f27bfae44e239e3e5d97b3bfbfe8aed56b10d92cb76b9d62-merged.mount: Deactivated successfully.
Jan 23 04:12:08 np0005593232 podman[140527]: 2026-01-23 09:12:08.431226728 +0000 UTC m=+0.198286991 container remove 4d3ce2c1307c9037cdce379fee2161bb4849757bfa0ed379597736d19a337e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 04:12:08 np0005593232 systemd[1]: libpod-conmon-4d3ce2c1307c9037cdce379fee2161bb4849757bfa0ed379597736d19a337e2d.scope: Deactivated successfully.
Jan 23 04:12:08 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:12:08 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:12:08 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:12:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:12:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:08.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:12:08 np0005593232 podman[140641]: 2026-01-23 09:12:08.58427798 +0000 UTC m=+0.044339425 container create d5feb8751b6bdbf3824f6dc490adf5bf35c404f4381d28ab2179b29277384fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 04:12:08 np0005593232 systemd[1]: Started libpod-conmon-d5feb8751b6bdbf3824f6dc490adf5bf35c404f4381d28ab2179b29277384fd2.scope.
Jan 23 04:12:08 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:12:08 np0005593232 podman[140641]: 2026-01-23 09:12:08.566558983 +0000 UTC m=+0.026620458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:12:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e36bd688e4b122a0a62258a01495e37bd9ef3c77c2f8b26f17bb0fe98faaec7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:12:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e36bd688e4b122a0a62258a01495e37bd9ef3c77c2f8b26f17bb0fe98faaec7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:12:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e36bd688e4b122a0a62258a01495e37bd9ef3c77c2f8b26f17bb0fe98faaec7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:12:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e36bd688e4b122a0a62258a01495e37bd9ef3c77c2f8b26f17bb0fe98faaec7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:12:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e36bd688e4b122a0a62258a01495e37bd9ef3c77c2f8b26f17bb0fe98faaec7c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:12:08 np0005593232 podman[140641]: 2026-01-23 09:12:08.684532111 +0000 UTC m=+0.144593576 container init d5feb8751b6bdbf3824f6dc490adf5bf35c404f4381d28ab2179b29277384fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_swanson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:12:08 np0005593232 podman[140641]: 2026-01-23 09:12:08.690994322 +0000 UTC m=+0.151055767 container start d5feb8751b6bdbf3824f6dc490adf5bf35c404f4381d28ab2179b29277384fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 23 04:12:08 np0005593232 podman[140641]: 2026-01-23 09:12:08.694328766 +0000 UTC m=+0.154390231 container attach d5feb8751b6bdbf3824f6dc490adf5bf35c404f4381d28ab2179b29277384fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_swanson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 04:12:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:12:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:09.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:09 np0005593232 python3.9[140741]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 04:12:09 np0005593232 magical_swanson[140660]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:12:09 np0005593232 magical_swanson[140660]: --> relative data size: 1.0
Jan 23 04:12:09 np0005593232 magical_swanson[140660]: --> All data devices are unavailable
Jan 23 04:12:09 np0005593232 systemd[1]: libpod-d5feb8751b6bdbf3824f6dc490adf5bf35c404f4381d28ab2179b29277384fd2.scope: Deactivated successfully.
Jan 23 04:12:09 np0005593232 podman[140641]: 2026-01-23 09:12:09.66052283 +0000 UTC m=+1.120584285 container died d5feb8751b6bdbf3824f6dc490adf5bf35c404f4381d28ab2179b29277384fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_swanson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:12:09 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e36bd688e4b122a0a62258a01495e37bd9ef3c77c2f8b26f17bb0fe98faaec7c-merged.mount: Deactivated successfully.
Jan 23 04:12:09 np0005593232 podman[140641]: 2026-01-23 09:12:09.840377473 +0000 UTC m=+1.300438928 container remove d5feb8751b6bdbf3824f6dc490adf5bf35c404f4381d28ab2179b29277384fd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:12:09 np0005593232 systemd[1]: libpod-conmon-d5feb8751b6bdbf3824f6dc490adf5bf35c404f4381d28ab2179b29277384fd2.scope: Deactivated successfully.
Jan 23 04:12:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v503: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:10 np0005593232 podman[141058]: 2026-01-23 09:12:10.45189339 +0000 UTC m=+0.041315720 container create 067e46227026ff09d9b3c8752f2b7de3eebc6c70eb9075185141277265319584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 23 04:12:10 np0005593232 systemd[1]: Started libpod-conmon-067e46227026ff09d9b3c8752f2b7de3eebc6c70eb9075185141277265319584.scope.
Jan 23 04:12:10 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:12:10 np0005593232 podman[141058]: 2026-01-23 09:12:10.526689587 +0000 UTC m=+0.116111947 container init 067e46227026ff09d9b3c8752f2b7de3eebc6c70eb9075185141277265319584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:12:10 np0005593232 podman[141058]: 2026-01-23 09:12:10.433108873 +0000 UTC m=+0.022531233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:12:10 np0005593232 python3[141041]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 23 04:12:10 np0005593232 podman[141058]: 2026-01-23 09:12:10.534802295 +0000 UTC m=+0.124224615 container start 067e46227026ff09d9b3c8752f2b7de3eebc6c70eb9075185141277265319584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:12:10 np0005593232 podman[141058]: 2026-01-23 09:12:10.538944281 +0000 UTC m=+0.128366631 container attach 067e46227026ff09d9b3c8752f2b7de3eebc6c70eb9075185141277265319584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:12:10 np0005593232 tender_mcclintock[141075]: 167 167
Jan 23 04:12:10 np0005593232 systemd[1]: libpod-067e46227026ff09d9b3c8752f2b7de3eebc6c70eb9075185141277265319584.scope: Deactivated successfully.
Jan 23 04:12:10 np0005593232 podman[141058]: 2026-01-23 09:12:10.541596155 +0000 UTC m=+0.131018505 container died 067e46227026ff09d9b3c8752f2b7de3eebc6c70eb9075185141277265319584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:12:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:10.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:10 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1ab6b8082464d22b938acd4e619dc2735d047012c813460a743e3c14dd5b57a5-merged.mount: Deactivated successfully.
Jan 23 04:12:10 np0005593232 podman[141058]: 2026-01-23 09:12:10.575026443 +0000 UTC m=+0.164448773 container remove 067e46227026ff09d9b3c8752f2b7de3eebc6c70eb9075185141277265319584 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mcclintock, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 04:12:10 np0005593232 systemd[1]: libpod-conmon-067e46227026ff09d9b3c8752f2b7de3eebc6c70eb9075185141277265319584.scope: Deactivated successfully.
Jan 23 04:12:10 np0005593232 podman[141122]: 2026-01-23 09:12:10.746572293 +0000 UTC m=+0.047851873 container create 2235c5879a9052b4e6011657074d3ee11eee90353d5c23a66a0f0dc767d65460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wiles, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:12:10 np0005593232 systemd[1]: Started libpod-conmon-2235c5879a9052b4e6011657074d3ee11eee90353d5c23a66a0f0dc767d65460.scope.
Jan 23 04:12:10 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:12:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fb97e6ee8131f70f72bf924fdc27e07da1a65a16ad89da8c96c34e0bd837baa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:12:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fb97e6ee8131f70f72bf924fdc27e07da1a65a16ad89da8c96c34e0bd837baa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:12:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fb97e6ee8131f70f72bf924fdc27e07da1a65a16ad89da8c96c34e0bd837baa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:12:10 np0005593232 podman[141122]: 2026-01-23 09:12:10.724906756 +0000 UTC m=+0.026186356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:12:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fb97e6ee8131f70f72bf924fdc27e07da1a65a16ad89da8c96c34e0bd837baa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:12:10 np0005593232 podman[141122]: 2026-01-23 09:12:10.832153473 +0000 UTC m=+0.133433073 container init 2235c5879a9052b4e6011657074d3ee11eee90353d5c23a66a0f0dc767d65460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 23 04:12:10 np0005593232 podman[141122]: 2026-01-23 09:12:10.842468872 +0000 UTC m=+0.143748452 container start 2235c5879a9052b4e6011657074d3ee11eee90353d5c23a66a0f0dc767d65460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wiles, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 04:12:10 np0005593232 podman[141122]: 2026-01-23 09:12:10.849226612 +0000 UTC m=+0.150506192 container attach 2235c5879a9052b4e6011657074d3ee11eee90353d5c23a66a0f0dc767d65460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wiles, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:12:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:12:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:11.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:12:11 np0005593232 python3.9[141272]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]: {
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:    "0": [
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:        {
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:            "devices": [
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:                "/dev/loop3"
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:            ],
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:            "lv_name": "ceph_lv0",
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:            "lv_size": "7511998464",
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:            "name": "ceph_lv0",
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:            "tags": {
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:                "ceph.cluster_name": "ceph",
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:                "ceph.crush_device_class": "",
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:                "ceph.encrypted": "0",
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:                "ceph.osd_id": "0",
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:                "ceph.type": "block",
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:                "ceph.vdo": "0"
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:            },
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:            "type": "block",
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:            "vg_name": "ceph_vg0"
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:        }
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]:    ]
Jan 23 04:12:11 np0005593232 friendly_wiles[141140]: }
Jan 23 04:12:11 np0005593232 systemd[1]: libpod-2235c5879a9052b4e6011657074d3ee11eee90353d5c23a66a0f0dc767d65460.scope: Deactivated successfully.
Jan 23 04:12:11 np0005593232 podman[141122]: 2026-01-23 09:12:11.764189409 +0000 UTC m=+1.065468999 container died 2235c5879a9052b4e6011657074d3ee11eee90353d5c23a66a0f0dc767d65460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 04:12:11 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7fb97e6ee8131f70f72bf924fdc27e07da1a65a16ad89da8c96c34e0bd837baa-merged.mount: Deactivated successfully.
Jan 23 04:12:11 np0005593232 podman[141122]: 2026-01-23 09:12:11.827149474 +0000 UTC m=+1.128429044 container remove 2235c5879a9052b4e6011657074d3ee11eee90353d5c23a66a0f0dc767d65460 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wiles, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 04:12:11 np0005593232 systemd[1]: libpod-conmon-2235c5879a9052b4e6011657074d3ee11eee90353d5c23a66a0f0dc767d65460.scope: Deactivated successfully.
Jan 23 04:12:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v504: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:12 np0005593232 python3.9[141540]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:12:12 np0005593232 podman[141581]: 2026-01-23 09:12:12.443809267 +0000 UTC m=+0.069705016 container create 464e3b57a573b26ac147f7e670d0b37792024faf0daf62d50bc44820aa2a15a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bassi, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Jan 23 04:12:12 np0005593232 systemd[1]: Started libpod-conmon-464e3b57a573b26ac147f7e670d0b37792024faf0daf62d50bc44820aa2a15a8.scope.
Jan 23 04:12:12 np0005593232 podman[141581]: 2026-01-23 09:12:12.398019743 +0000 UTC m=+0.023915512 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:12:12 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:12:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:12:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:12.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:12:12 np0005593232 podman[141581]: 2026-01-23 09:12:12.565087307 +0000 UTC m=+0.190983076 container init 464e3b57a573b26ac147f7e670d0b37792024faf0daf62d50bc44820aa2a15a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bassi, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 23 04:12:12 np0005593232 podman[141581]: 2026-01-23 09:12:12.572073533 +0000 UTC m=+0.197969282 container start 464e3b57a573b26ac147f7e670d0b37792024faf0daf62d50bc44820aa2a15a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bassi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 23 04:12:12 np0005593232 systemd[1]: libpod-464e3b57a573b26ac147f7e670d0b37792024faf0daf62d50bc44820aa2a15a8.scope: Deactivated successfully.
Jan 23 04:12:12 np0005593232 clever_bassi[141620]: 167 167
Jan 23 04:12:12 np0005593232 conmon[141620]: conmon 464e3b57a573b26ac147 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-464e3b57a573b26ac147f7e670d0b37792024faf0daf62d50bc44820aa2a15a8.scope/container/memory.events
Jan 23 04:12:12 np0005593232 podman[141581]: 2026-01-23 09:12:12.586693503 +0000 UTC m=+0.212589262 container attach 464e3b57a573b26ac147f7e670d0b37792024faf0daf62d50bc44820aa2a15a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:12:12 np0005593232 podman[141581]: 2026-01-23 09:12:12.587585038 +0000 UTC m=+0.213480787 container died 464e3b57a573b26ac147f7e670d0b37792024faf0daf62d50bc44820aa2a15a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bassi, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:12:12 np0005593232 systemd[1]: var-lib-containers-storage-overlay-046251ebd597055a47f564ddd9d7963c103246f80f6a039b8fa30faad5a8038a-merged.mount: Deactivated successfully.
Jan 23 04:12:12 np0005593232 python3.9[141688]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:12:12 np0005593232 podman[141581]: 2026-01-23 09:12:12.88721299 +0000 UTC m=+0.513108739 container remove 464e3b57a573b26ac147f7e670d0b37792024faf0daf62d50bc44820aa2a15a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:12:12 np0005593232 systemd[1]: libpod-conmon-464e3b57a573b26ac147f7e670d0b37792024faf0daf62d50bc44820aa2a15a8.scope: Deactivated successfully.
Jan 23 04:12:13 np0005593232 podman[141724]: 2026-01-23 09:12:13.020548649 +0000 UTC m=+0.023482789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:12:13 np0005593232 podman[141724]: 2026-01-23 09:12:13.140480962 +0000 UTC m=+0.143415072 container create aa2e957147b29eb9a79157dea211fd42930a7f6efb734a27ebf7510035b64ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ritchie, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:12:13 np0005593232 systemd[1]: Started libpod-conmon-aa2e957147b29eb9a79157dea211fd42930a7f6efb734a27ebf7510035b64ae2.scope.
Jan 23 04:12:13 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:12:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aebfe73e9f09132b010e190803fe57d23c90696859556af3098073cdfad55d07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:12:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aebfe73e9f09132b010e190803fe57d23c90696859556af3098073cdfad55d07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:12:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aebfe73e9f09132b010e190803fe57d23c90696859556af3098073cdfad55d07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:12:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aebfe73e9f09132b010e190803fe57d23c90696859556af3098073cdfad55d07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:12:13 np0005593232 podman[141724]: 2026-01-23 09:12:13.252969967 +0000 UTC m=+0.255904087 container init aa2e957147b29eb9a79157dea211fd42930a7f6efb734a27ebf7510035b64ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ritchie, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 04:12:13 np0005593232 podman[141724]: 2026-01-23 09:12:13.259657124 +0000 UTC m=+0.262591244 container start aa2e957147b29eb9a79157dea211fd42930a7f6efb734a27ebf7510035b64ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:12:13 np0005593232 podman[141724]: 2026-01-23 09:12:13.262937256 +0000 UTC m=+0.265871506 container attach aa2e957147b29eb9a79157dea211fd42930a7f6efb734a27ebf7510035b64ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ritchie, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:12:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:12:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:13.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:12:13 np0005593232 python3.9[141873]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:12:14 np0005593232 youthful_ritchie[141741]: {
Jan 23 04:12:14 np0005593232 youthful_ritchie[141741]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:12:14 np0005593232 youthful_ritchie[141741]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:12:14 np0005593232 youthful_ritchie[141741]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:12:14 np0005593232 youthful_ritchie[141741]:        "osd_id": 0,
Jan 23 04:12:14 np0005593232 youthful_ritchie[141741]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:12:14 np0005593232 youthful_ritchie[141741]:        "type": "bluestore"
Jan 23 04:12:14 np0005593232 youthful_ritchie[141741]:    }
Jan 23 04:12:14 np0005593232 youthful_ritchie[141741]: }
Jan 23 04:12:14 np0005593232 systemd[1]: libpod-aa2e957147b29eb9a79157dea211fd42930a7f6efb734a27ebf7510035b64ae2.scope: Deactivated successfully.
Jan 23 04:12:14 np0005593232 podman[141724]: 2026-01-23 09:12:14.185497835 +0000 UTC m=+1.188431955 container died aa2e957147b29eb9a79157dea211fd42930a7f6efb734a27ebf7510035b64ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ritchie, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 04:12:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:12:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v505: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:14 np0005593232 python3.9[141962]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.k19qrqh2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:12:14 np0005593232 systemd[1]: var-lib-containers-storage-overlay-aebfe73e9f09132b010e190803fe57d23c90696859556af3098073cdfad55d07-merged.mount: Deactivated successfully.
Jan 23 04:12:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:14.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:14 np0005593232 podman[141968]: 2026-01-23 09:12:14.577056136 +0000 UTC m=+0.380253284 container remove aa2e957147b29eb9a79157dea211fd42930a7f6efb734a27ebf7510035b64ae2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ritchie, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:12:14 np0005593232 systemd[1]: libpod-conmon-aa2e957147b29eb9a79157dea211fd42930a7f6efb734a27ebf7510035b64ae2.scope: Deactivated successfully.
Jan 23 04:12:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:12:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:12:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:12:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:12:14 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 6f055f7d-ef02-4eea-80b2-94a61f5217b5 does not exist
Jan 23 04:12:14 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev dd8e3fa7-6e37-4fe8-a6f1-669fadbe6a7c does not exist
Jan 23 04:12:14 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 404e1c1e-ed94-423f-8ba7-10c7f4abfc5f does not exist
Jan 23 04:12:15 np0005593232 python3.9[142185]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:12:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:12:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:15.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:12:15 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:12:15 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:12:15 np0005593232 python3.9[142263]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:12:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v506: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:16.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:16 np0005593232 python3.9[142415]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:12:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:17.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:17 np0005593232 python3[142569]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 23 04:12:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v507: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:18.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:18 np0005593232 python3.9[142721]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:12:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:12:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:12:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:19.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:12:19 np0005593232 python3.9[142847]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159538.1577106-431-185211994895330/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:12:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v508: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:12:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:20.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:12:20 np0005593232 python3.9[142999]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:12:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:12:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:21.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:12:21 np0005593232 python3.9[143175]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159539.970262-476-97246678921382/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:12:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v509: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:22 np0005593232 python3.9[143327]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:12:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:22.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:23 np0005593232 python3.9[143453]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159541.842367-521-204793689217914/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:12:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:23.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:23 np0005593232 python3.9[143605]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:12:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:12:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v510: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:24.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:24 np0005593232 python3.9[143730]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159543.317333-566-49152379036232/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:12:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:25.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:25 np0005593232 python3.9[143883]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:12:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v511: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:26 np0005593232 python3.9[144008]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159544.8836472-611-256299494022233/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:12:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:12:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:26.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:12:27 np0005593232 python3.9[144161]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:12:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:12:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:27.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:12:28 np0005593232 python3.9[144313]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:12:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v512: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:12:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:28.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:12:29 np0005593232 python3.9[144469]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:12:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:12:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:29.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v513: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:30 np0005593232 python3.9[144621]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:12:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:30.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:31 np0005593232 python3.9[144775]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:12:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:31.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:32 np0005593232 python3.9[144929]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:12:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v514: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:12:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:32.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:12:32 np0005593232 python3.9[145084]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:12:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:12:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:33.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:12:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:12:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v515: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:34 np0005593232 python3.9[145235]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:12:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:34.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:35.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:36 np0005593232 python3.9[145389]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:12:36 np0005593232 ovs-vsctl[145390]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 23 04:12:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v516: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:36.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:37 np0005593232 python3.9[145543]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:12:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:12:37
Jan 23 04:12:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:12:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:12:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'volumes', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta']
Jan 23 04:12:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:12:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:12:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:12:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:12:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:12:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:12:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:12:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:12:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:37.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:12:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:12:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:12:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:12:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:12:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:12:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:12:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:12:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:12:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:12:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:12:38 np0005593232 python3.9[145698]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:12:38 np0005593232 ovs-vsctl[145699]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 23 04:12:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v517: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:38.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:38 np0005593232 python3.9[145850]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:12:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:12:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:12:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:39.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:12:39 np0005593232 python3.9[146004]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:12:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v518: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:40.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:40 np0005593232 python3.9[146156]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:12:41 np0005593232 python3.9[146285]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:12:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:41.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:41 np0005593232 python3.9[146437]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:12:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v519: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:42 np0005593232 python3.9[146515]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:12:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:12:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:42.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:12:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:12:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:43.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:12:43 np0005593232 python3.9[146668]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:12:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:12:44 np0005593232 python3.9[146820]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:12:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v520: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:12:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:44.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:12:44 np0005593232 python3.9[146898]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:12:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:45.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:45 np0005593232 python3.9[147051]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:12:46 np0005593232 python3.9[147129]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:12:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v521: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:12:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:12:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:12:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:12:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:12:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:12:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:12:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:12:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:12:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:12:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:12:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:12:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:12:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:12:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:12:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:12:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:12:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:12:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:12:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:12:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:12:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:12:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:12:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:46.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:46 np0005593232 python3.9[147281]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:12:46 np0005593232 systemd[1]: Reloading.
Jan 23 04:12:47 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:12:47 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:12:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:47.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v522: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:48 np0005593232 python3.9[147471]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:12:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:48.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:48 np0005593232 python3.9[147550]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:12:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:12:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:49.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:49 np0005593232 python3.9[147702]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:12:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v523: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:50 np0005593232 python3.9[147780]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:12:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:50.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:51 np0005593232 python3.9[147933]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:12:51 np0005593232 systemd[1]: Reloading.
Jan 23 04:12:51 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:12:51 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:12:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:12:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:51.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:12:51 np0005593232 systemd[1]: Starting Create netns directory...
Jan 23 04:12:51 np0005593232 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 23 04:12:51 np0005593232 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 23 04:12:51 np0005593232 systemd[1]: Finished Create netns directory.
Jan 23 04:12:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v524: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:52.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:52 np0005593232 python3.9[148127]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:12:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:53.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:53 np0005593232 python3.9[148280]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:12:54 np0005593232 python3.9[148403]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769159572.950467-1364-155903137639569/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:12:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:12:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v525: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:54.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:55.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:55 np0005593232 python3.9[148556]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:12:56 np0005593232 python3.9[148708]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:12:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v526: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:12:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:56.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:12:56 np0005593232 python3.9[148861]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:12:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:12:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:57.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:12:57 np0005593232 python3.9[148984]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769159576.4923356-1463-182598475130020/.source.json _original_basename=.z6ci_2lw follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:12:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v527: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:12:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:12:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:12:58.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:12:59 np0005593232 python3.9[149135]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:12:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:12:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:12:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:12:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:12:59.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v528: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:13:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:00.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:13:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:01.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:13:01 np0005593232 python3.9[149609]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 23 04:13:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v529: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:13:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:13:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:02.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:13:02 np0005593232 python3.9[149762]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 23 04:13:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:03.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:13:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v530: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:13:04 np0005593232 python3[149914]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 23 04:13:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:04.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:05.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v531: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:13:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:06.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:13:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:13:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:13:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:13:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:13:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:13:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:07.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v532: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:13:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:13:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:08.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:13:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:13:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:09.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:09 np0005593232 podman[149927]: 2026-01-23 09:13:09.566515042 +0000 UTC m=+5.034326647 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 23 04:13:09 np0005593232 podman[150048]: 2026-01-23 09:13:09.698995398 +0000 UTC m=+0.047018397 container create 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_controller, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 04:13:09 np0005593232 podman[150048]: 2026-01-23 09:13:09.675729707 +0000 UTC m=+0.023752726 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 23 04:13:09 np0005593232 python3[149914]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host 
--privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 23 04:13:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v533: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:13:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:10.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:11.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:11 np0005593232 python3.9[150242]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:13:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v534: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 0 B/s wr, 17 op/s
Jan 23 04:13:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:13:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:12.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:13:12 np0005593232 python3.9[150396]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:13:13 np0005593232 python3.9[150473]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:13:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:13.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:13 np0005593232 python3.9[150624]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769159593.2489102-1697-18193456986158/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:13:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:13:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v535: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 0 B/s wr, 61 op/s
Jan 23 04:13:14 np0005593232 python3.9[150700]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 04:13:14 np0005593232 systemd[1]: Reloading.
Jan 23 04:13:14 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:13:14 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:13:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:13:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:14.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:13:15 np0005593232 python3.9[150813]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:13:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:15.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:15 np0005593232 systemd[1]: Reloading.
Jan 23 04:13:15 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:13:15 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:13:15 np0005593232 systemd[1]: Starting ovn_controller container...
Jan 23 04:13:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 23 04:13:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 04:13:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:13:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:13:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:13:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:13:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:13:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v536: 321 pgs: 321 active+clean; 456 KiB data, 149 MiB used, 21 GiB / 21 GiB avail; 62 KiB/s rd, 0 B/s wr, 104 op/s
Jan 23 04:13:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:16.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:17 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:13:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b87113db9995339022d3b8919d8980cb7a8d857bd905fd5fd5ce15f93ada948d/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 23 04:13:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:13:17 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d7e1cae1-349b-48d1-ad9d-3d947f652750 does not exist
Jan 23 04:13:17 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev e1a40c59-3509-4bae-b802-54651fe69676 does not exist
Jan 23 04:13:17 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 2917bae4-1093-41af-97a0-385f0651badf does not exist
Jan 23 04:13:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:13:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:13:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:13:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:13:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:13:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:13:17 np0005593232 systemd[1]: Started /usr/bin/podman healthcheck run 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5.
Jan 23 04:13:17 np0005593232 podman[150972]: 2026-01-23 09:13:17.067533655 +0000 UTC m=+1.359607110 container init 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: + sudo -E kolla_set_configs
Jan 23 04:13:17 np0005593232 podman[150972]: 2026-01-23 09:13:17.099962317 +0000 UTC m=+1.392035782 container start 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 23 04:13:17 np0005593232 edpm-start-podman-container[150972]: ovn_controller
Jan 23 04:13:17 np0005593232 systemd[1]: Created slice User Slice of UID 0.
Jan 23 04:13:17 np0005593232 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 23 04:13:17 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 04:13:17 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:13:17 np0005593232 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 23 04:13:17 np0005593232 systemd[1]: Starting User Manager for UID 0...
Jan 23 04:13:17 np0005593232 podman[151021]: 2026-01-23 09:13:17.180623011 +0000 UTC m=+0.073781659 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 23 04:13:17 np0005593232 edpm-start-podman-container[150971]: Creating additional drop-in dependency for "ovn_controller" (709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5)
Jan 23 04:13:17 np0005593232 systemd[1]: 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5-4bf483fb5083d838.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 04:13:17 np0005593232 systemd[1]: 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5-4bf483fb5083d838.service: Failed with result 'exit-code'.
Jan 23 04:13:17 np0005593232 systemd[1]: Reloading.
Jan 23 04:13:17 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:13:17 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:13:17 np0005593232 systemd[151055]: Queued start job for default target Main User Target.
Jan 23 04:13:17 np0005593232 systemd[151055]: Created slice User Application Slice.
Jan 23 04:13:17 np0005593232 systemd[151055]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 23 04:13:17 np0005593232 systemd[151055]: Started Daily Cleanup of User's Temporary Directories.
Jan 23 04:13:17 np0005593232 systemd[151055]: Reached target Paths.
Jan 23 04:13:17 np0005593232 systemd[151055]: Reached target Timers.
Jan 23 04:13:17 np0005593232 systemd[151055]: Starting D-Bus User Message Bus Socket...
Jan 23 04:13:17 np0005593232 systemd[151055]: Starting Create User's Volatile Files and Directories...
Jan 23 04:13:17 np0005593232 systemd[151055]: Finished Create User's Volatile Files and Directories.
Jan 23 04:13:17 np0005593232 systemd[151055]: Listening on D-Bus User Message Bus Socket.
Jan 23 04:13:17 np0005593232 systemd[151055]: Reached target Sockets.
Jan 23 04:13:17 np0005593232 systemd[151055]: Reached target Basic System.
Jan 23 04:13:17 np0005593232 systemd[151055]: Reached target Main User Target.
Jan 23 04:13:17 np0005593232 systemd[151055]: Startup finished in 179ms.
Jan 23 04:13:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:13:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:17.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:13:17 np0005593232 systemd[1]: Started User Manager for UID 0.
Jan 23 04:13:17 np0005593232 systemd[1]: Started ovn_controller container.
Jan 23 04:13:17 np0005593232 systemd[1]: Started Session c1 of User root.
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: INFO:__main__:Validating config file
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: INFO:__main__:Writing out command to execute
Jan 23 04:13:17 np0005593232 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: ++ cat /run_command
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: + ARGS=
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: + sudo kolla_copy_cacerts
Jan 23 04:13:17 np0005593232 systemd[1]: Started Session c2 of User root.
Jan 23 04:13:17 np0005593232 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: + [[ ! -n '' ]]
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: + . kolla_extend_start
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: + umask 0022
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 23 04:13:17 np0005593232 NetworkManager[49057]: <info>  [1769159597.6775] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 23 04:13:17 np0005593232 NetworkManager[49057]: <info>  [1769159597.6783] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 04:13:17 np0005593232 NetworkManager[49057]: <warn>  [1769159597.6786] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 04:13:17 np0005593232 NetworkManager[49057]: <info>  [1769159597.6796] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 23 04:13:17 np0005593232 NetworkManager[49057]: <info>  [1769159597.6802] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 23 04:13:17 np0005593232 NetworkManager[49057]: <info>  [1769159597.6806] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 23 04:13:17 np0005593232 kernel: br-int: entered promiscuous mode
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 23 04:13:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:17Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 23 04:13:17 np0005593232 NetworkManager[49057]: <info>  [1769159597.7053] manager: (ovn-3ec410-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 23 04:13:17 np0005593232 NetworkManager[49057]: <info>  [1769159597.7060] manager: (ovn-e9717b-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Jan 23 04:13:17 np0005593232 NetworkManager[49057]: <info>  [1769159597.7065] manager: (ovn-539cfa-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Jan 23 04:13:17 np0005593232 systemd-udevd[151233]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:13:17 np0005593232 systemd-udevd[151235]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:13:17 np0005593232 NetworkManager[49057]: <info>  [1769159597.7227] device (genev_sys_6081): carrier: link connected
Jan 23 04:13:17 np0005593232 NetworkManager[49057]: <info>  [1769159597.7231] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Jan 23 04:13:17 np0005593232 kernel: genev_sys_6081: entered promiscuous mode
Jan 23 04:13:18 np0005593232 podman[151329]: 2026-01-23 09:13:18.015803697 +0000 UTC m=+0.040437151 container create 35274e8e993871e0ffc8275e3e9f2a252eb905f4efc75339996d154bff32970b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:13:18 np0005593232 systemd[1]: Started libpod-conmon-35274e8e993871e0ffc8275e3e9f2a252eb905f4efc75339996d154bff32970b.scope.
Jan 23 04:13:18 np0005593232 podman[151329]: 2026-01-23 09:13:17.996768576 +0000 UTC m=+0.021402050 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:13:18 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:13:18 np0005593232 podman[151329]: 2026-01-23 09:13:18.117043666 +0000 UTC m=+0.141677140 container init 35274e8e993871e0ffc8275e3e9f2a252eb905f4efc75339996d154bff32970b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:13:18 np0005593232 podman[151329]: 2026-01-23 09:13:18.130473687 +0000 UTC m=+0.155107141 container start 35274e8e993871e0ffc8275e3e9f2a252eb905f4efc75339996d154bff32970b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_aryabhata, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:13:18 np0005593232 podman[151329]: 2026-01-23 09:13:18.134124341 +0000 UTC m=+0.158757875 container attach 35274e8e993871e0ffc8275e3e9f2a252eb905f4efc75339996d154bff32970b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_aryabhata, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 23 04:13:18 np0005593232 lucid_aryabhata[151346]: 167 167
Jan 23 04:13:18 np0005593232 systemd[1]: libpod-35274e8e993871e0ffc8275e3e9f2a252eb905f4efc75339996d154bff32970b.scope: Deactivated successfully.
Jan 23 04:13:18 np0005593232 podman[151329]: 2026-01-23 09:13:18.139303018 +0000 UTC m=+0.163936482 container died 35274e8e993871e0ffc8275e3e9f2a252eb905f4efc75339996d154bff32970b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 04:13:18 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b94b6b2d0fa107ae4ca7f47dbfb1521d4d7c3e0327fd5ac0bc0582f455119479-merged.mount: Deactivated successfully.
Jan 23 04:13:18 np0005593232 podman[151329]: 2026-01-23 09:13:18.186169251 +0000 UTC m=+0.210802705 container remove 35274e8e993871e0ffc8275e3e9f2a252eb905f4efc75339996d154bff32970b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_aryabhata, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:13:18 np0005593232 systemd[1]: libpod-conmon-35274e8e993871e0ffc8275e3e9f2a252eb905f4efc75339996d154bff32970b.scope: Deactivated successfully.
Jan 23 04:13:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v537: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 90 KiB/s rd, 0 B/s wr, 150 op/s
Jan 23 04:13:18 np0005593232 podman[151394]: 2026-01-23 09:13:18.352467359 +0000 UTC m=+0.043625621 container create 7ecb40ee1db12e69eedd30a94b8d9e4a969519df8c50cbc33115fdc82be30794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:13:18 np0005593232 systemd[1]: Started libpod-conmon-7ecb40ee1db12e69eedd30a94b8d9e4a969519df8c50cbc33115fdc82be30794.scope.
Jan 23 04:13:18 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:13:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07e8016f747bbbd52f8f2c39698ea44f4fa07f1f0be7151acaf77a2f9dad3bcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:13:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07e8016f747bbbd52f8f2c39698ea44f4fa07f1f0be7151acaf77a2f9dad3bcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:13:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07e8016f747bbbd52f8f2c39698ea44f4fa07f1f0be7151acaf77a2f9dad3bcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:13:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07e8016f747bbbd52f8f2c39698ea44f4fa07f1f0be7151acaf77a2f9dad3bcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:13:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07e8016f747bbbd52f8f2c39698ea44f4fa07f1f0be7151acaf77a2f9dad3bcb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:13:18 np0005593232 podman[151394]: 2026-01-23 09:13:18.333383107 +0000 UTC m=+0.024541379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:13:18 np0005593232 podman[151394]: 2026-01-23 09:13:18.443503028 +0000 UTC m=+0.134661300 container init 7ecb40ee1db12e69eedd30a94b8d9e4a969519df8c50cbc33115fdc82be30794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tu, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:13:18 np0005593232 podman[151394]: 2026-01-23 09:13:18.449695894 +0000 UTC m=+0.140854156 container start 7ecb40ee1db12e69eedd30a94b8d9e4a969519df8c50cbc33115fdc82be30794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tu, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 04:13:18 np0005593232 podman[151394]: 2026-01-23 09:13:18.455028936 +0000 UTC m=+0.146187198 container attach 7ecb40ee1db12e69eedd30a94b8d9e4a969519df8c50cbc33115fdc82be30794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tu, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 23 04:13:18 np0005593232 python3.9[151463]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 23 04:13:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:18.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:13:19 np0005593232 competent_tu[151450]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:13:19 np0005593232 competent_tu[151450]: --> relative data size: 1.0
Jan 23 04:13:19 np0005593232 competent_tu[151450]: --> All data devices are unavailable
Jan 23 04:13:19 np0005593232 systemd[1]: libpod-7ecb40ee1db12e69eedd30a94b8d9e4a969519df8c50cbc33115fdc82be30794.scope: Deactivated successfully.
Jan 23 04:13:19 np0005593232 podman[151394]: 2026-01-23 09:13:19.333739801 +0000 UTC m=+1.024898063 container died 7ecb40ee1db12e69eedd30a94b8d9e4a969519df8c50cbc33115fdc82be30794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tu, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:13:19 np0005593232 systemd[1]: var-lib-containers-storage-overlay-07e8016f747bbbd52f8f2c39698ea44f4fa07f1f0be7151acaf77a2f9dad3bcb-merged.mount: Deactivated successfully.
Jan 23 04:13:19 np0005593232 podman[151394]: 2026-01-23 09:13:19.389137956 +0000 UTC m=+1.080296218 container remove 7ecb40ee1db12e69eedd30a94b8d9e4a969519df8c50cbc33115fdc82be30794 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_tu, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:13:19 np0005593232 systemd[1]: libpod-conmon-7ecb40ee1db12e69eedd30a94b8d9e4a969519df8c50cbc33115fdc82be30794.scope: Deactivated successfully.
Jan 23 04:13:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:19.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:19 np0005593232 podman[151652]: 2026-01-23 09:13:19.979639487 +0000 UTC m=+0.040527203 container create 8211d45530e0efc6ef119b27fa2baf519b311632af2166af2953ac1cb52ad95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_cohen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Jan 23 04:13:20 np0005593232 systemd[1]: Started libpod-conmon-8211d45530e0efc6ef119b27fa2baf519b311632af2166af2953ac1cb52ad95a.scope.
Jan 23 04:13:20 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:13:20 np0005593232 podman[151652]: 2026-01-23 09:13:20.042644468 +0000 UTC m=+0.103532224 container init 8211d45530e0efc6ef119b27fa2baf519b311632af2166af2953ac1cb52ad95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_cohen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 23 04:13:20 np0005593232 podman[151652]: 2026-01-23 09:13:20.048976448 +0000 UTC m=+0.109864164 container start 8211d45530e0efc6ef119b27fa2baf519b311632af2166af2953ac1cb52ad95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 23 04:13:20 np0005593232 podman[151652]: 2026-01-23 09:13:20.052416726 +0000 UTC m=+0.113304442 container attach 8211d45530e0efc6ef119b27fa2baf519b311632af2166af2953ac1cb52ad95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_cohen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:13:20 np0005593232 unruffled_cohen[151668]: 167 167
Jan 23 04:13:20 np0005593232 systemd[1]: libpod-8211d45530e0efc6ef119b27fa2baf519b311632af2166af2953ac1cb52ad95a.scope: Deactivated successfully.
Jan 23 04:13:20 np0005593232 podman[151652]: 2026-01-23 09:13:20.056102601 +0000 UTC m=+0.116990317 container died 8211d45530e0efc6ef119b27fa2baf519b311632af2166af2953ac1cb52ad95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_cohen, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:13:20 np0005593232 podman[151652]: 2026-01-23 09:13:19.960979686 +0000 UTC m=+0.021867432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:13:20 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ec43c9b367651d797c4ddd8ead966d91a76d73422057b6d549af0fe525e75d03-merged.mount: Deactivated successfully.
Jan 23 04:13:20 np0005593232 podman[151652]: 2026-01-23 09:13:20.092776834 +0000 UTC m=+0.153664550 container remove 8211d45530e0efc6ef119b27fa2baf519b311632af2166af2953ac1cb52ad95a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_cohen, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 04:13:20 np0005593232 systemd[1]: libpod-conmon-8211d45530e0efc6ef119b27fa2baf519b311632af2166af2953ac1cb52ad95a.scope: Deactivated successfully.
Jan 23 04:13:20 np0005593232 podman[151691]: 2026-01-23 09:13:20.246007921 +0000 UTC m=+0.039068912 container create 167617fef5c63e7eba9be58751c8daa7fb7d8a952fcf1cb0a81f35ff671e39ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_khorana, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 04:13:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v538: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 90 KiB/s rd, 0 B/s wr, 150 op/s
Jan 23 04:13:20 np0005593232 systemd[1]: Started libpod-conmon-167617fef5c63e7eba9be58751c8daa7fb7d8a952fcf1cb0a81f35ff671e39ce.scope.
Jan 23 04:13:20 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:13:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fa469a5871a0f3f20d76bc99a31c1fe6e9051ce8b26cb6abf997d56757c7871/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:13:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fa469a5871a0f3f20d76bc99a31c1fe6e9051ce8b26cb6abf997d56757c7871/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:13:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fa469a5871a0f3f20d76bc99a31c1fe6e9051ce8b26cb6abf997d56757c7871/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:13:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fa469a5871a0f3f20d76bc99a31c1fe6e9051ce8b26cb6abf997d56757c7871/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:13:20 np0005593232 podman[151691]: 2026-01-23 09:13:20.227797453 +0000 UTC m=+0.020858454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:13:20 np0005593232 podman[151691]: 2026-01-23 09:13:20.339013605 +0000 UTC m=+0.132074586 container init 167617fef5c63e7eba9be58751c8daa7fb7d8a952fcf1cb0a81f35ff671e39ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 04:13:20 np0005593232 podman[151691]: 2026-01-23 09:13:20.346706974 +0000 UTC m=+0.139767975 container start 167617fef5c63e7eba9be58751c8daa7fb7d8a952fcf1cb0a81f35ff671e39ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_khorana, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:13:20 np0005593232 podman[151691]: 2026-01-23 09:13:20.354189497 +0000 UTC m=+0.147250508 container attach 167617fef5c63e7eba9be58751c8daa7fb7d8a952fcf1cb0a81f35ff671e39ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_khorana, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 23 04:13:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:13:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:20.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]: {
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:    "0": [
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:        {
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:            "devices": [
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:                "/dev/loop3"
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:            ],
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:            "lv_name": "ceph_lv0",
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:            "lv_size": "7511998464",
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:            "name": "ceph_lv0",
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:            "tags": {
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:                "ceph.cluster_name": "ceph",
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:                "ceph.crush_device_class": "",
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:                "ceph.encrypted": "0",
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:                "ceph.osd_id": "0",
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:                "ceph.type": "block",
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:                "ceph.vdo": "0"
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:            },
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:            "type": "block",
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:            "vg_name": "ceph_vg0"
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:        }
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]:    ]
Jan 23 04:13:21 np0005593232 dreamy_khorana[151708]: }
Jan 23 04:13:21 np0005593232 systemd[1]: libpod-167617fef5c63e7eba9be58751c8daa7fb7d8a952fcf1cb0a81f35ff671e39ce.scope: Deactivated successfully.
Jan 23 04:13:21 np0005593232 podman[151844]: 2026-01-23 09:13:21.218217905 +0000 UTC m=+0.025937119 container died 167617fef5c63e7eba9be58751c8daa7fb7d8a952fcf1cb0a81f35ff671e39ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_khorana, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 04:13:21 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4fa469a5871a0f3f20d76bc99a31c1fe6e9051ce8b26cb6abf997d56757c7871-merged.mount: Deactivated successfully.
Jan 23 04:13:21 np0005593232 podman[151844]: 2026-01-23 09:13:21.283204663 +0000 UTC m=+0.090923857 container remove 167617fef5c63e7eba9be58751c8daa7fb7d8a952fcf1cb0a81f35ff671e39ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_khorana, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 04:13:21 np0005593232 systemd[1]: libpod-conmon-167617fef5c63e7eba9be58751c8daa7fb7d8a952fcf1cb0a81f35ff671e39ce.scope: Deactivated successfully.
Jan 23 04:13:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:21.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:21 np0005593232 python3.9[151919]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:13:21 np0005593232 podman[152133]: 2026-01-23 09:13:21.863199323 +0000 UTC m=+0.041419569 container create b117ee9ae2fb06ff6267c60363099ba99f6400c448831d601ea68d8d50817599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Jan 23 04:13:21 np0005593232 systemd[1]: Started libpod-conmon-b117ee9ae2fb06ff6267c60363099ba99f6400c448831d601ea68d8d50817599.scope.
Jan 23 04:13:21 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:13:21 np0005593232 podman[152133]: 2026-01-23 09:13:21.844691697 +0000 UTC m=+0.022911963 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:13:21 np0005593232 podman[152133]: 2026-01-23 09:13:21.943025683 +0000 UTC m=+0.121245949 container init b117ee9ae2fb06ff6267c60363099ba99f6400c448831d601ea68d8d50817599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:13:21 np0005593232 podman[152133]: 2026-01-23 09:13:21.950772413 +0000 UTC m=+0.128992659 container start b117ee9ae2fb06ff6267c60363099ba99f6400c448831d601ea68d8d50817599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 23 04:13:21 np0005593232 podman[152133]: 2026-01-23 09:13:21.954005075 +0000 UTC m=+0.132225381 container attach b117ee9ae2fb06ff6267c60363099ba99f6400c448831d601ea68d8d50817599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:13:21 np0005593232 great_agnesi[152189]: 167 167
Jan 23 04:13:21 np0005593232 systemd[1]: libpod-b117ee9ae2fb06ff6267c60363099ba99f6400c448831d601ea68d8d50817599.scope: Deactivated successfully.
Jan 23 04:13:21 np0005593232 podman[152133]: 2026-01-23 09:13:21.957539556 +0000 UTC m=+0.135759802 container died b117ee9ae2fb06ff6267c60363099ba99f6400c448831d601ea68d8d50817599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_agnesi, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:13:21 np0005593232 systemd[1]: var-lib-containers-storage-overlay-18974e1771553287d5a265b07f89ad697cb47b3a4c85e9294742edf2f7dbf4c6-merged.mount: Deactivated successfully.
Jan 23 04:13:21 np0005593232 podman[152133]: 2026-01-23 09:13:21.999039976 +0000 UTC m=+0.177260222 container remove b117ee9ae2fb06ff6267c60363099ba99f6400c448831d601ea68d8d50817599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:13:22 np0005593232 systemd[1]: libpod-conmon-b117ee9ae2fb06ff6267c60363099ba99f6400c448831d601ea68d8d50817599.scope: Deactivated successfully.
Jan 23 04:13:22 np0005593232 python3.9[152191]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769159601.0312257-1832-24421178372813/.source.yaml _original_basename=.x2my7j_w follow=False checksum=d3cbb0a9c550a24d080b6861631678a3f2e708bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:13:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:13:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:13:22 np0005593232 podman[152213]: 2026-01-23 09:13:22.153795226 +0000 UTC m=+0.042076717 container create 0cd87b06d1c6e02e2021369aa1c3cf54c91d9993a7504f5b2898c2dc8a854f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclean, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:13:22 np0005593232 systemd[1]: Started libpod-conmon-0cd87b06d1c6e02e2021369aa1c3cf54c91d9993a7504f5b2898c2dc8a854f3a.scope.
Jan 23 04:13:22 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:13:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465df5e042b17265b9c32062c7ee88eb805440345b9b1435decb6676cdf94c98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:13:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465df5e042b17265b9c32062c7ee88eb805440345b9b1435decb6676cdf94c98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:13:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465df5e042b17265b9c32062c7ee88eb805440345b9b1435decb6676cdf94c98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:13:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465df5e042b17265b9c32062c7ee88eb805440345b9b1435decb6676cdf94c98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:13:22 np0005593232 podman[152213]: 2026-01-23 09:13:22.228468079 +0000 UTC m=+0.116749570 container init 0cd87b06d1c6e02e2021369aa1c3cf54c91d9993a7504f5b2898c2dc8a854f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 23 04:13:22 np0005593232 podman[152213]: 2026-01-23 09:13:22.13635826 +0000 UTC m=+0.024639781 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:13:22 np0005593232 podman[152213]: 2026-01-23 09:13:22.23410992 +0000 UTC m=+0.122391411 container start 0cd87b06d1c6e02e2021369aa1c3cf54c91d9993a7504f5b2898c2dc8a854f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:13:22 np0005593232 podman[152213]: 2026-01-23 09:13:22.236784576 +0000 UTC m=+0.125066067 container attach 0cd87b06d1c6e02e2021369aa1c3cf54c91d9993a7504f5b2898c2dc8a854f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 23 04:13:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v539: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 90 KiB/s rd, 0 B/s wr, 150 op/s
Jan 23 04:13:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:22.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:22 np0005593232 python3.9[152385]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:13:22 np0005593232 ovs-vsctl[152387]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 23 04:13:23 np0005593232 tender_mclean[152247]: {
Jan 23 04:13:23 np0005593232 tender_mclean[152247]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:13:23 np0005593232 tender_mclean[152247]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:13:23 np0005593232 tender_mclean[152247]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:13:23 np0005593232 tender_mclean[152247]:        "osd_id": 0,
Jan 23 04:13:23 np0005593232 tender_mclean[152247]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:13:23 np0005593232 tender_mclean[152247]:        "type": "bluestore"
Jan 23 04:13:23 np0005593232 tender_mclean[152247]:    }
Jan 23 04:13:23 np0005593232 tender_mclean[152247]: }
Jan 23 04:13:23 np0005593232 systemd[1]: libpod-0cd87b06d1c6e02e2021369aa1c3cf54c91d9993a7504f5b2898c2dc8a854f3a.scope: Deactivated successfully.
Jan 23 04:13:23 np0005593232 podman[152213]: 2026-01-23 09:13:23.066080876 +0000 UTC m=+0.954362367 container died 0cd87b06d1c6e02e2021369aa1c3cf54c91d9993a7504f5b2898c2dc8a854f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclean, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 04:13:23 np0005593232 systemd[1]: var-lib-containers-storage-overlay-465df5e042b17265b9c32062c7ee88eb805440345b9b1435decb6676cdf94c98-merged.mount: Deactivated successfully.
Jan 23 04:13:23 np0005593232 podman[152213]: 2026-01-23 09:13:23.119488895 +0000 UTC m=+1.007770386 container remove 0cd87b06d1c6e02e2021369aa1c3cf54c91d9993a7504f5b2898c2dc8a854f3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Jan 23 04:13:23 np0005593232 systemd[1]: libpod-conmon-0cd87b06d1c6e02e2021369aa1c3cf54c91d9993a7504f5b2898c2dc8a854f3a.scope: Deactivated successfully.
Jan 23 04:13:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:13:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:13:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:13:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:13:23 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 4517d1e7-2e25-4c20-977d-95673751b1b2 does not exist
Jan 23 04:13:23 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 46e5ee2e-42eb-4397-a6bd-1d7d322de575 does not exist
Jan 23 04:13:23 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 4750ed63-86a9-46e3-850f-b451ff77380d does not exist
Jan 23 04:13:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:23.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:23 np0005593232 python3.9[152619]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:13:23 np0005593232 ovs-vsctl[152621]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 23 04:13:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:13:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v540: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 80 KiB/s rd, 0 B/s wr, 133 op/s
Jan 23 04:13:24 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:13:24 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:13:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:13:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:24.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:13:24 np0005593232 python3.9[152774]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:13:24 np0005593232 ovs-vsctl[152776]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 23 04:13:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:25.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:25 np0005593232 systemd[1]: session-46.scope: Deactivated successfully.
Jan 23 04:13:25 np0005593232 systemd[1]: session-46.scope: Consumed 56.692s CPU time.
Jan 23 04:13:25 np0005593232 systemd-logind[808]: Session 46 logged out. Waiting for processes to exit.
Jan 23 04:13:25 np0005593232 systemd-logind[808]: Removed session 46.
Jan 23 04:13:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v541: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 53 KiB/s rd, 0 B/s wr, 89 op/s
Jan 23 04:13:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:13:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:26.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:13:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:27.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:27 np0005593232 systemd[1]: Stopping User Manager for UID 0...
Jan 23 04:13:27 np0005593232 systemd[151055]: Activating special unit Exit the Session...
Jan 23 04:13:27 np0005593232 systemd[151055]: Stopped target Main User Target.
Jan 23 04:13:27 np0005593232 systemd[151055]: Stopped target Basic System.
Jan 23 04:13:27 np0005593232 systemd[151055]: Stopped target Paths.
Jan 23 04:13:27 np0005593232 systemd[151055]: Stopped target Sockets.
Jan 23 04:13:27 np0005593232 systemd[151055]: Stopped target Timers.
Jan 23 04:13:27 np0005593232 systemd[151055]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 23 04:13:27 np0005593232 systemd[151055]: Closed D-Bus User Message Bus Socket.
Jan 23 04:13:27 np0005593232 systemd[151055]: Stopped Create User's Volatile Files and Directories.
Jan 23 04:13:27 np0005593232 systemd[151055]: Removed slice User Application Slice.
Jan 23 04:13:27 np0005593232 systemd[151055]: Reached target Shutdown.
Jan 23 04:13:27 np0005593232 systemd[151055]: Finished Exit the Session.
Jan 23 04:13:27 np0005593232 systemd[151055]: Reached target Exit the Session.
Jan 23 04:13:27 np0005593232 systemd[1]: user@0.service: Deactivated successfully.
Jan 23 04:13:27 np0005593232 systemd[1]: Stopped User Manager for UID 0.
Jan 23 04:13:27 np0005593232 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 23 04:13:27 np0005593232 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 23 04:13:27 np0005593232 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 23 04:13:27 np0005593232 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 23 04:13:27 np0005593232 systemd[1]: Removed slice User Slice of UID 0.
Jan 23 04:13:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v542: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Jan 23 04:13:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:28.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:13:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:29.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v543: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:13:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:13:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:30.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:13:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:13:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:31.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:13:32 np0005593232 systemd-logind[808]: New session 48 of user zuul.
Jan 23 04:13:32 np0005593232 systemd[1]: Started Session 48 of User zuul.
Jan 23 04:13:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v544: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:13:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:32.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:33 np0005593232 python3.9[152959]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:13:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:33.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:13:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v545: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:13:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:13:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:34.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:13:34 np0005593232 python3.9[153115]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:13:35 np0005593232 python3.9[153268]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:13:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:13:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:35.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:13:36 np0005593232 python3.9[153421]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:13:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v546: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:13:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:13:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:36.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:13:36 np0005593232 python3.9[153573]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:13:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:13:37
Jan 23 04:13:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:13:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:13:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'vms', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'default.rgw.meta']
Jan 23 04:13:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:13:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:13:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:13:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:13:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:13:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:13:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:13:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:37.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:37 np0005593232 python3.9[153726]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:13:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:13:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:13:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:13:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:13:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:13:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:13:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:13:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:13:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:13:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:13:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v547: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:13:38 np0005593232 python3.9[153876]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:13:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:38.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:39.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:39 np0005593232 python3.9[154029]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 23 04:13:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:13:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v548: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:13:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:13:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:40.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:13:41 np0005593232 python3.9[154180]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:13:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:13:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:41.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:13:41 np0005593232 python3.9[154351]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769159620.6598835-218-184973195350377/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:13:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v549: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:13:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:13:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:42.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:13:42 np0005593232 python3.9[154501]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:13:43 np0005593232 python3.9[154623]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769159622.2598367-263-112292737462037/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:13:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:43.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v550: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:13:44 np0005593232 python3.9[154775]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 04:13:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:13:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:13:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:44.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:13:45 np0005593232 python3.9[154860]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 04:13:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:45.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v551: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:13:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:13:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:13:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:13:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:13:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:13:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:13:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:13:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:13:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:13:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:13:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:13:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:13:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:13:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:13:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:13:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:13:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:13:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:13:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:13:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:13:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:13:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:13:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:13:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:13:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:46.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:13:47 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:47Z|00025|memory|INFO|15872 kB peak resident set size after 29.7 seconds
Jan 23 04:13:47 np0005593232 ovn_controller[151001]: 2026-01-23T09:13:47Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Jan 23 04:13:47 np0005593232 podman[154939]: 2026-01-23 09:13:47.426377576 +0000 UTC m=+0.085109461 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 23 04:13:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:47.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:47 np0005593232 python3.9[155041]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 04:13:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v552: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:13:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:48.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:48 np0005593232 python3.9[155194]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:13:49 np0005593232 python3.9[155316]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769159628.2182791-374-211025107777929/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:13:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:13:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:49.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:13:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:13:49 np0005593232 python3.9[155466]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:13:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v553: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:13:50 np0005593232 python3.9[155587]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769159629.4683964-374-215558785813346/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:13:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:50.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:51.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:52 np0005593232 python3.9[155738]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:13:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v554: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:13:52 np0005593232 python3.9[155859]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769159631.5742803-506-167761990166955/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:13:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:13:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:52.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:13:53 np0005593232 python3.9[156010]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:13:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:13:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:53.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:13:53 np0005593232 python3.9[156131]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769159632.7006488-506-151251653236514/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:13:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v555: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:13:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:13:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:13:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:54.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:13:55 np0005593232 python3.9[156282]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:13:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:55.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v556: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:13:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:13:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:56.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:13:56 np0005593232 python3.9[156436]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:13:57 np0005593232 python3.9[156589]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:13:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:13:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:57.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:13:57 np0005593232 python3.9[156667]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:13:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v557: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:13:58 np0005593232 python3.9[156819]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:13:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:13:58.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:58 np0005593232 python3.9[156898]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:13:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:13:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:13:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:13:59.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:13:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:13:59 np0005593232 python3.9[157050]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:14:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v558: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:00 np0005593232 python3.9[157202]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:14:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:00.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:01 np0005593232 python3.9[157281]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:14:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:01.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:02 np0005593232 python3.9[157483]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:14:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v559: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:02 np0005593232 python3.9[157561]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:14:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:02.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:03 np0005593232 python3.9[157714]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:14:03 np0005593232 systemd[1]: Reloading.
Jan 23 04:14:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:14:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:03.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:14:03 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:14:03 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:14:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v560: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:14:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:04.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:05 np0005593232 python3.9[157903]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:14:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:14:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:05.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:14:05 np0005593232 python3.9[157981]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:14:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v561: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:06.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:06 np0005593232 python3.9[158133]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:14:07 np0005593232 python3.9[158212]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:14:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:14:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:14:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:14:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:14:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:14:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:14:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:07.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:08 np0005593232 python3.9[158364]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:14:08 np0005593232 systemd[1]: Reloading.
Jan 23 04:14:08 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:14:08 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:14:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v562: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:08 np0005593232 systemd[1]: Starting Create netns directory...
Jan 23 04:14:08 np0005593232 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 23 04:14:08 np0005593232 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 23 04:14:08 np0005593232 systemd[1]: Finished Create netns directory.
Jan 23 04:14:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:08.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:09.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:09 np0005593232 python3.9[158559]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:14:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:14:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v563: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:10 np0005593232 python3.9[158711]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:14:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:14:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:10.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:14:10 np0005593232 python3.9[158835]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769159649.826565-959-28921713969819/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:14:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:11.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:12 np0005593232 python3.9[158987]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:14:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v564: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:14:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:12.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:14:12 np0005593232 python3.9[159140]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:14:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:13.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:13 np0005593232 python3.9[159292]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:14:14 np0005593232 python3.9[159415]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769159653.1749563-1058-264485518630651/.source.json _original_basename=.r70g0rpt follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:14:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v565: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:14:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:14:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:14.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:14:15 np0005593232 python3.9[159566]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:14:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:15.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v566: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:14:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:16.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:14:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:17.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:17 np0005593232 podman[159962]: 2026-01-23 09:14:17.605829245 +0000 UTC m=+0.102819325 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 04:14:17 np0005593232 python3.9[159997]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 23 04:14:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v567: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:18.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:18 np0005593232 python3.9[160169]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 23 04:14:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:19.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:14:20 np0005593232 python3[160322]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 23 04:14:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v568: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:20.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:21.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v569: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:14:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:22.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:14:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:14:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:23.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:14:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v570: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:14:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:24.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:14:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:25.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:14:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v571: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:14:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:26.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:14:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:27.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v572: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:28.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:29.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:14:29 np0005593232 podman[160336]: 2026-01-23 09:14:29.655011897 +0000 UTC m=+9.411528820 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:14:29 np0005593232 podman[160657]: 2026-01-23 09:14:29.841259712 +0000 UTC m=+0.056046990 container create 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:14:29 np0005593232 podman[160657]: 2026-01-23 09:14:29.806587062 +0000 UTC m=+0.021374360 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:14:29 np0005593232 python3[160322]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:14:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v573: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:14:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:30.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:14:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:14:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:31.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:14:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v574: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:32.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:33.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v575: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:14:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:14:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:14:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:14:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:14:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:14:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:34.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:35.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:36 np0005593232 python3.9[160848]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:14:36 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:14:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:14:36 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c322e304-4f93-4b49-a7f2-4967831fb8ab does not exist
Jan 23 04:14:36 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 70701e4c-80d2-44b0-9d05-427ab26176d9 does not exist
Jan 23 04:14:36 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 5ed3a223-3c3f-407c-a45b-9dc9709fb5c8 does not exist
Jan 23 04:14:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:14:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:14:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:14:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:14:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:14:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:14:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v576: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:36.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:36 np0005593232 podman[161127]: 2026-01-23 09:14:36.911524896 +0000 UTC m=+0.043087611 container create 3fa1fa90f40984a20834556f02daa0e282bc3a55efa0a77bbe0e49dcc4b038d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_varahamihira, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:14:36 np0005593232 systemd[1]: Started libpod-conmon-3fa1fa90f40984a20834556f02daa0e282bc3a55efa0a77bbe0e49dcc4b038d7.scope.
Jan 23 04:14:36 np0005593232 podman[161127]: 2026-01-23 09:14:36.894549891 +0000 UTC m=+0.026112626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:14:37 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:14:37 np0005593232 podman[161127]: 2026-01-23 09:14:37.024785588 +0000 UTC m=+0.156348333 container init 3fa1fa90f40984a20834556f02daa0e282bc3a55efa0a77bbe0e49dcc4b038d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_varahamihira, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 04:14:37 np0005593232 podman[161127]: 2026-01-23 09:14:37.031929021 +0000 UTC m=+0.163491736 container start 3fa1fa90f40984a20834556f02daa0e282bc3a55efa0a77bbe0e49dcc4b038d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_varahamihira, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 04:14:37 np0005593232 podman[161127]: 2026-01-23 09:14:37.035441682 +0000 UTC m=+0.167004397 container attach 3fa1fa90f40984a20834556f02daa0e282bc3a55efa0a77bbe0e49dcc4b038d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 04:14:37 np0005593232 compassionate_varahamihira[161161]: 167 167
Jan 23 04:14:37 np0005593232 systemd[1]: libpod-3fa1fa90f40984a20834556f02daa0e282bc3a55efa0a77bbe0e49dcc4b038d7.scope: Deactivated successfully.
Jan 23 04:14:37 np0005593232 podman[161127]: 2026-01-23 09:14:37.043260575 +0000 UTC m=+0.174823290 container died 3fa1fa90f40984a20834556f02daa0e282bc3a55efa0a77bbe0e49dcc4b038d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 04:14:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:14:37
Jan 23 04:14:37 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a4b92378ba976bc0850ceb78e244d70468547dde68b1da6d38daa790b06f3e42-merged.mount: Deactivated successfully.
Jan 23 04:14:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:14:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:14:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['vms', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'images', 'default.rgw.meta', 'volumes']
Jan 23 04:14:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:14:37 np0005593232 podman[161127]: 2026-01-23 09:14:37.087658932 +0000 UTC m=+0.219221667 container remove 3fa1fa90f40984a20834556f02daa0e282bc3a55efa0a77bbe0e49dcc4b038d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:14:37 np0005593232 systemd[1]: libpod-conmon-3fa1fa90f40984a20834556f02daa0e282bc3a55efa0a77bbe0e49dcc4b038d7.scope: Deactivated successfully.
Jan 23 04:14:37 np0005593232 python3.9[161158]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:14:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:14:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:14:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:14:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:14:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:14:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:14:37 np0005593232 podman[161186]: 2026-01-23 09:14:37.271815267 +0000 UTC m=+0.048476435 container create 00b66f34b7c74fa5eedbe1e02069bf1732ec5f0c388fcb3cb8dd273100b12d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 04:14:37 np0005593232 systemd[1]: Started libpod-conmon-00b66f34b7c74fa5eedbe1e02069bf1732ec5f0c388fcb3cb8dd273100b12d87.scope.
Jan 23 04:14:37 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:14:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7dd553fbcbe8851d15600f81ca59cc21db679485652893a89ea0c0bbe902cfe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:14:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7dd553fbcbe8851d15600f81ca59cc21db679485652893a89ea0c0bbe902cfe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:14:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7dd553fbcbe8851d15600f81ca59cc21db679485652893a89ea0c0bbe902cfe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:14:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7dd553fbcbe8851d15600f81ca59cc21db679485652893a89ea0c0bbe902cfe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:14:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7dd553fbcbe8851d15600f81ca59cc21db679485652893a89ea0c0bbe902cfe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:14:37 np0005593232 podman[161186]: 2026-01-23 09:14:37.253650458 +0000 UTC m=+0.030311646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:14:37 np0005593232 podman[161186]: 2026-01-23 09:14:37.35851039 +0000 UTC m=+0.135171578 container init 00b66f34b7c74fa5eedbe1e02069bf1732ec5f0c388fcb3cb8dd273100b12d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 04:14:37 np0005593232 podman[161186]: 2026-01-23 09:14:37.366535519 +0000 UTC m=+0.143196687 container start 00b66f34b7c74fa5eedbe1e02069bf1732ec5f0c388fcb3cb8dd273100b12d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:14:37 np0005593232 podman[161186]: 2026-01-23 09:14:37.370217474 +0000 UTC m=+0.146878682 container attach 00b66f34b7c74fa5eedbe1e02069bf1732ec5f0c388fcb3cb8dd273100b12d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:14:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:37.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:14:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:14:37 np0005593232 python3.9[161282]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:14:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:14:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:14:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:14:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:14:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:14:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:14:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:14:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:14:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:14:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:14:37 np0005593232 ceph-mgr[74726]: client.0 ms_handle_reset on v2:192.168.122.100:6800/530399322
Jan 23 04:14:38 np0005593232 zen_blackwell[161248]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:14:38 np0005593232 zen_blackwell[161248]: --> relative data size: 1.0
Jan 23 04:14:38 np0005593232 zen_blackwell[161248]: --> All data devices are unavailable
Jan 23 04:14:38 np0005593232 systemd[1]: libpod-00b66f34b7c74fa5eedbe1e02069bf1732ec5f0c388fcb3cb8dd273100b12d87.scope: Deactivated successfully.
Jan 23 04:14:38 np0005593232 podman[161186]: 2026-01-23 09:14:38.207692431 +0000 UTC m=+0.984353609 container died 00b66f34b7c74fa5eedbe1e02069bf1732ec5f0c388fcb3cb8dd273100b12d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 23 04:14:38 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d7dd553fbcbe8851d15600f81ca59cc21db679485652893a89ea0c0bbe902cfe-merged.mount: Deactivated successfully.
Jan 23 04:14:38 np0005593232 podman[161186]: 2026-01-23 09:14:38.27071017 +0000 UTC m=+1.047371338 container remove 00b66f34b7c74fa5eedbe1e02069bf1732ec5f0c388fcb3cb8dd273100b12d87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:14:38 np0005593232 systemd[1]: libpod-conmon-00b66f34b7c74fa5eedbe1e02069bf1732ec5f0c388fcb3cb8dd273100b12d87.scope: Deactivated successfully.
Jan 23 04:14:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v577: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:38 np0005593232 python3.9[161443]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769159677.685997-1292-176090905512595/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:14:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:38.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:38 np0005593232 podman[161670]: 2026-01-23 09:14:38.880425737 +0000 UTC m=+0.049662358 container create c8657d46961f3b76d66a9bdd1d3b34f6052c47b3d2ac0238221853168c90d3e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:14:38 np0005593232 python3.9[161631]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 04:14:38 np0005593232 systemd[1]: Started libpod-conmon-c8657d46961f3b76d66a9bdd1d3b34f6052c47b3d2ac0238221853168c90d3e1.scope.
Jan 23 04:14:38 np0005593232 systemd[1]: Reloading.
Jan 23 04:14:38 np0005593232 podman[161670]: 2026-01-23 09:14:38.855026153 +0000 UTC m=+0.024262864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:14:39 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:14:39 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:14:39 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:14:39 np0005593232 podman[161670]: 2026-01-23 09:14:39.274296696 +0000 UTC m=+0.443533317 container init c8657d46961f3b76d66a9bdd1d3b34f6052c47b3d2ac0238221853168c90d3e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_nash, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 04:14:39 np0005593232 podman[161670]: 2026-01-23 09:14:39.28460004 +0000 UTC m=+0.453836661 container start c8657d46961f3b76d66a9bdd1d3b34f6052c47b3d2ac0238221853168c90d3e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 23 04:14:39 np0005593232 podman[161670]: 2026-01-23 09:14:39.288233934 +0000 UTC m=+0.457470555 container attach c8657d46961f3b76d66a9bdd1d3b34f6052c47b3d2ac0238221853168c90d3e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_nash, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:14:39 np0005593232 naughty_nash[161687]: 167 167
Jan 23 04:14:39 np0005593232 systemd[1]: libpod-c8657d46961f3b76d66a9bdd1d3b34f6052c47b3d2ac0238221853168c90d3e1.scope: Deactivated successfully.
Jan 23 04:14:39 np0005593232 podman[161670]: 2026-01-23 09:14:39.290951092 +0000 UTC m=+0.460187723 container died c8657d46961f3b76d66a9bdd1d3b34f6052c47b3d2ac0238221853168c90d3e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:14:39 np0005593232 systemd[1]: var-lib-containers-storage-overlay-bd5a33f21adab80abed9a11ef65103b198fc6d280fe56e02843606b5d04f7fde-merged.mount: Deactivated successfully.
Jan 23 04:14:39 np0005593232 podman[161670]: 2026-01-23 09:14:39.327429742 +0000 UTC m=+0.496666363 container remove c8657d46961f3b76d66a9bdd1d3b34f6052c47b3d2ac0238221853168c90d3e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_nash, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 04:14:39 np0005593232 systemd[1]: libpod-conmon-c8657d46961f3b76d66a9bdd1d3b34f6052c47b3d2ac0238221853168c90d3e1.scope: Deactivated successfully.
Jan 23 04:14:39 np0005593232 podman[161791]: 2026-01-23 09:14:39.495969142 +0000 UTC m=+0.043111872 container create d6bea1558d9395bfab3902bc9f32499c1c4ebf049a332e3216a7c612780e9739 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_golick, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:14:39 np0005593232 systemd[1]: Started libpod-conmon-d6bea1558d9395bfab3902bc9f32499c1c4ebf049a332e3216a7c612780e9739.scope.
Jan 23 04:14:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:39.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:39 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:14:39 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5001fb49f2ce0a10726fea639ad36ac544b98ebfe670698aa26154ee02245eb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:14:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:14:39 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5001fb49f2ce0a10726fea639ad36ac544b98ebfe670698aa26154ee02245eb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:14:39 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5001fb49f2ce0a10726fea639ad36ac544b98ebfe670698aa26154ee02245eb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:14:39 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5001fb49f2ce0a10726fea639ad36ac544b98ebfe670698aa26154ee02245eb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:14:39 np0005593232 podman[161791]: 2026-01-23 09:14:39.476136286 +0000 UTC m=+0.023279016 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:14:39 np0005593232 podman[161791]: 2026-01-23 09:14:39.576339965 +0000 UTC m=+0.123482715 container init d6bea1558d9395bfab3902bc9f32499c1c4ebf049a332e3216a7c612780e9739 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_golick, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:14:39 np0005593232 podman[161791]: 2026-01-23 09:14:39.582548382 +0000 UTC m=+0.129691112 container start d6bea1558d9395bfab3902bc9f32499c1c4ebf049a332e3216a7c612780e9739 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_golick, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Jan 23 04:14:39 np0005593232 podman[161791]: 2026-01-23 09:14:39.586029251 +0000 UTC m=+0.133172001 container attach d6bea1558d9395bfab3902bc9f32499c1c4ebf049a332e3216a7c612780e9739 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_golick, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 04:14:39 np0005593232 python3.9[161836]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:14:39 np0005593232 systemd[1]: Reloading.
Jan 23 04:14:39 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:14:39 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:14:40 np0005593232 systemd[1]: Starting ovn_metadata_agent container...
Jan 23 04:14:40 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:14:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baccd023c59bb60e966fcb7b6f175725c926e2658d7041651c2475dd3985e6ea/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 23 04:14:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/baccd023c59bb60e966fcb7b6f175725c926e2658d7041651c2475dd3985e6ea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:14:40 np0005593232 systemd[1]: Started /usr/bin/podman healthcheck run 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f.
Jan 23 04:14:40 np0005593232 podman[161879]: 2026-01-23 09:14:40.325047328 +0000 UTC m=+0.128769485 container init 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 23 04:14:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v578: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: + sudo -E kolla_set_configs
Jan 23 04:14:40 np0005593232 podman[161879]: 2026-01-23 09:14:40.354926201 +0000 UTC m=+0.158648348 container start 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 04:14:40 np0005593232 edpm-start-podman-container[161879]: ovn_metadata_agent
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]: {
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:    "0": [
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:        {
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:            "devices": [
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:                "/dev/loop3"
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:            ],
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:            "lv_name": "ceph_lv0",
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:            "lv_size": "7511998464",
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:            "name": "ceph_lv0",
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:            "tags": {
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:                "ceph.cluster_name": "ceph",
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:                "ceph.crush_device_class": "",
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:                "ceph.encrypted": "0",
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:                "ceph.osd_id": "0",
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:                "ceph.type": "block",
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:                "ceph.vdo": "0"
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:            },
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:            "type": "block",
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:            "vg_name": "ceph_vg0"
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:        }
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]:    ]
Jan 23 04:14:40 np0005593232 sleepy_golick[161834]: }
Jan 23 04:14:40 np0005593232 edpm-start-podman-container[161878]: Creating additional drop-in dependency for "ovn_metadata_agent" (8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f)
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: INFO:__main__:Validating config file
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: INFO:__main__:Copying service configuration files
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 23 04:14:40 np0005593232 systemd[1]: Reloading.
Jan 23 04:14:40 np0005593232 podman[161791]: 2026-01-23 09:14:40.437785275 +0000 UTC m=+0.984927995 container died d6bea1558d9395bfab3902bc9f32499c1c4ebf049a332e3216a7c612780e9739 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 23 04:14:40 np0005593232 podman[161904]: 2026-01-23 09:14:40.442403147 +0000 UTC m=+0.078513992 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: INFO:__main__:Writing out command to execute
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: ++ cat /run_command
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: + CMD=neutron-ovn-metadata-agent
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: + ARGS=
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: + sudo kolla_copy_cacerts
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: + [[ ! -n '' ]]
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: + . kolla_extend_start
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: Running command: 'neutron-ovn-metadata-agent'
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: + umask 0022
Jan 23 04:14:40 np0005593232 ovn_metadata_agent[161895]: + exec neutron-ovn-metadata-agent
Jan 23 04:14:40 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:14:40 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:14:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:40.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:40 np0005593232 systemd[1]: Started ovn_metadata_agent container.
Jan 23 04:14:40 np0005593232 systemd[1]: libpod-d6bea1558d9395bfab3902bc9f32499c1c4ebf049a332e3216a7c612780e9739.scope: Deactivated successfully.
Jan 23 04:14:40 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5001fb49f2ce0a10726fea639ad36ac544b98ebfe670698aa26154ee02245eb2-merged.mount: Deactivated successfully.
Jan 23 04:14:40 np0005593232 podman[161791]: 2026-01-23 09:14:40.783440778 +0000 UTC m=+1.330583518 container remove d6bea1558d9395bfab3902bc9f32499c1c4ebf049a332e3216a7c612780e9739 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 23 04:14:40 np0005593232 systemd[1]: libpod-conmon-d6bea1558d9395bfab3902bc9f32499c1c4ebf049a332e3216a7c612780e9739.scope: Deactivated successfully.
Jan 23 04:14:41 np0005593232 podman[162170]: 2026-01-23 09:14:41.427615199 +0000 UTC m=+0.042697999 container create 883a25ec19e2d13bbb7afd34415aceda63afa9de4cf1bca96eace89a15fc2eaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_germain, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:14:41 np0005593232 systemd[1]: Started libpod-conmon-883a25ec19e2d13bbb7afd34415aceda63afa9de4cf1bca96eace89a15fc2eaf.scope.
Jan 23 04:14:41 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:14:41 np0005593232 podman[162170]: 2026-01-23 09:14:41.412064165 +0000 UTC m=+0.027146975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:14:41 np0005593232 podman[162170]: 2026-01-23 09:14:41.514457607 +0000 UTC m=+0.129540437 container init 883a25ec19e2d13bbb7afd34415aceda63afa9de4cf1bca96eace89a15fc2eaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:14:41 np0005593232 podman[162170]: 2026-01-23 09:14:41.523751972 +0000 UTC m=+0.138834752 container start 883a25ec19e2d13bbb7afd34415aceda63afa9de4cf1bca96eace89a15fc2eaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 04:14:41 np0005593232 podman[162170]: 2026-01-23 09:14:41.526725467 +0000 UTC m=+0.141808337 container attach 883a25ec19e2d13bbb7afd34415aceda63afa9de4cf1bca96eace89a15fc2eaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:14:41 np0005593232 wonderful_germain[162226]: 167 167
Jan 23 04:14:41 np0005593232 systemd[1]: libpod-883a25ec19e2d13bbb7afd34415aceda63afa9de4cf1bca96eace89a15fc2eaf.scope: Deactivated successfully.
Jan 23 04:14:41 np0005593232 podman[162170]: 2026-01-23 09:14:41.531712059 +0000 UTC m=+0.146794859 container died 883a25ec19e2d13bbb7afd34415aceda63afa9de4cf1bca96eace89a15fc2eaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_germain, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:14:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:41.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:41 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d0f4b8398c63d6b865f7629c7f83141dc97658ac76a779260a8a1504eaef2879-merged.mount: Deactivated successfully.
Jan 23 04:14:41 np0005593232 podman[162170]: 2026-01-23 09:14:41.563049924 +0000 UTC m=+0.178132714 container remove 883a25ec19e2d13bbb7afd34415aceda63afa9de4cf1bca96eace89a15fc2eaf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_germain, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 23 04:14:41 np0005593232 systemd[1]: libpod-conmon-883a25ec19e2d13bbb7afd34415aceda63afa9de4cf1bca96eace89a15fc2eaf.scope: Deactivated successfully.
Jan 23 04:14:41 np0005593232 podman[162279]: 2026-01-23 09:14:41.727709392 +0000 UTC m=+0.047627170 container create be4a06f540746a8be8a13049b02589a2672c365a2eae9785fe325aa66c055394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 04:14:41 np0005593232 systemd[1]: Started libpod-conmon-be4a06f540746a8be8a13049b02589a2672c365a2eae9785fe325aa66c055394.scope.
Jan 23 04:14:41 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:14:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4766273dc03213c0d3ccc80569988b34e4b07eed2b27ab184df5c64729dfe395/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:14:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4766273dc03213c0d3ccc80569988b34e4b07eed2b27ab184df5c64729dfe395/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:14:41 np0005593232 podman[162279]: 2026-01-23 09:14:41.708240757 +0000 UTC m=+0.028158565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:14:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4766273dc03213c0d3ccc80569988b34e4b07eed2b27ab184df5c64729dfe395/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:14:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4766273dc03213c0d3ccc80569988b34e4b07eed2b27ab184df5c64729dfe395/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:14:41 np0005593232 podman[162279]: 2026-01-23 09:14:41.822691743 +0000 UTC m=+0.142609541 container init be4a06f540746a8be8a13049b02589a2672c365a2eae9785fe325aa66c055394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:14:41 np0005593232 podman[162279]: 2026-01-23 09:14:41.830573247 +0000 UTC m=+0.150491025 container start be4a06f540746a8be8a13049b02589a2672c365a2eae9785fe325aa66c055394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kepler, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 04:14:41 np0005593232 podman[162279]: 2026-01-23 09:14:41.835389995 +0000 UTC m=+0.155307793 container attach be4a06f540746a8be8a13049b02589a2672c365a2eae9785fe325aa66c055394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Jan 23 04:14:42 np0005593232 python3.9[162399]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 23 04:14:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v579: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.468 161902 INFO neutron.common.config [-] Logging enabled!#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.469 161902 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.469 161902 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.469 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.469 161902 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.469 161902 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.470 161902 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.470 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.470 161902 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.470 161902 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.470 161902 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.470 161902 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.470 161902 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.470 161902 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.470 161902 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.471 161902 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.471 161902 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.471 161902 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.471 161902 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.471 161902 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.471 161902 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.471 161902 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.471 161902 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.471 161902 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.472 161902 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.472 161902 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.472 161902 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.472 161902 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.472 161902 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.472 161902 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.472 161902 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.472 161902 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.472 161902 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.473 161902 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.473 161902 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.473 161902 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.473 161902 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.473 161902 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.473 161902 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.473 161902 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.473 161902 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.474 161902 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.474 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.474 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.474 161902 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.474 161902 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.474 161902 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.474 161902 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.474 161902 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.474 161902 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.474 161902 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.475 161902 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.475 161902 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.475 161902 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.475 161902 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.475 161902 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.475 161902 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.475 161902 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.475 161902 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.475 161902 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.476 161902 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.476 161902 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.476 161902 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.476 161902 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.476 161902 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.476 161902 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.476 161902 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.476 161902 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.476 161902 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.477 161902 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.477 161902 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.477 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.477 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.477 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.477 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.477 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.477 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.477 161902 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.478 161902 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.478 161902 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.478 161902 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.478 161902 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.478 161902 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.478 161902 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.478 161902 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.478 161902 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.479 161902 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.479 161902 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.479 161902 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.479 161902 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.479 161902 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.479 161902 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.479 161902 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.479 161902 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.479 161902 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.480 161902 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.480 161902 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.480 161902 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.480 161902 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.480 161902 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.480 161902 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.480 161902 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.480 161902 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.480 161902 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.480 161902 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.481 161902 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.481 161902 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.481 161902 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.481 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.481 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.481 161902 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.481 161902 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.481 161902 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.482 161902 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.482 161902 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.482 161902 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.482 161902 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.482 161902 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.482 161902 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.482 161902 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.482 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.482 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.483 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.483 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.483 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.483 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.483 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.483 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.483 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.483 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.483 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.484 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.484 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.484 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.484 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.484 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.484 161902 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.484 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.484 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.484 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.485 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.485 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.485 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.485 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.485 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.485 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.485 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.485 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.485 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.485 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.486 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.486 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.486 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.486 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.486 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.486 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.486 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.486 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.486 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.487 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.487 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.487 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.487 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.487 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.487 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.487 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.487 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.487 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.487 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.488 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.488 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.488 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.488 161902 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.488 161902 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.488 161902 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.488 161902 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.488 161902 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.488 161902 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.489 161902 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.489 161902 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.489 161902 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.489 161902 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.489 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.489 161902 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.489 161902 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.489 161902 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.489 161902 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.489 161902 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.490 161902 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.490 161902 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.490 161902 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.490 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.490 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.490 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.490 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.490 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.490 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.491 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.491 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.491 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.491 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.491 161902 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.491 161902 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.491 161902 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.491 161902 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.491 161902 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.492 161902 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.492 161902 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.492 161902 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.492 161902 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.492 161902 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.492 161902 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.492 161902 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.492 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.492 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.492 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.493 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.493 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.493 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.493 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.493 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.493 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.493 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.493 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.493 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.494 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.494 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.494 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.494 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.494 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.494 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.494 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.494 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.494 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.494 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.495 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.495 161902 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.495 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.495 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.495 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.495 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.495 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.495 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.495 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.496 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.496 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.496 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.496 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.496 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.496 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.496 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.496 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.496 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.496 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.497 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.497 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.497 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.497 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.497 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.497 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.497 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.497 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.497 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.498 161902 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.498 161902 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.498 161902 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.498 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.498 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.498 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.498 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.498 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.498 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.499 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.499 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.499 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.499 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.499 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.499 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.499 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.499 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.499 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.500 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.500 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.500 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.500 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.500 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.500 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.500 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.500 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.500 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.501 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.501 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.501 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.501 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.501 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.501 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.501 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.501 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.501 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.501 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.502 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.502 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.502 161902 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.502 161902 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.513 161902 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.514 161902 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.514 161902 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.515 161902 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.515 161902 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.530 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name d80bc768-e67f-4e48-bcf3-42912cda98f1 (UUID: d80bc768-e67f-4e48-bcf3-42912cda98f1) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.556 161902 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.556 161902 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.556 161902 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.556 161902 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.559 161902 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.565 161902 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.571 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'd80bc768-e67f-4e48-bcf3-42912cda98f1'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], external_ids={}, name=d80bc768-e67f-4e48-bcf3-42912cda98f1, nb_cfg_timestamp=1769159605709, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.572 161902 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f955c555f40>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.573 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.573 161902 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.573 161902 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.573 161902 INFO oslo_service.service [-] Starting 1 workers#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.579 161902 DEBUG oslo_service.service [-] Started child 162432 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.582 162432 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-168924'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.582 161902 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpfqklrb5a/privsep.sock']#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.605 162432 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.606 162432 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.606 162432 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.608 162432 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.614 162432 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 23 04:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:42.620 162432 INFO eventlet.wsgi.server [-] (162432) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Jan 23 04:14:42 np0005593232 stoic_kepler[162364]: {
Jan 23 04:14:42 np0005593232 stoic_kepler[162364]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:14:42 np0005593232 stoic_kepler[162364]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:14:42 np0005593232 stoic_kepler[162364]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:14:42 np0005593232 stoic_kepler[162364]:        "osd_id": 0,
Jan 23 04:14:42 np0005593232 stoic_kepler[162364]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:14:42 np0005593232 stoic_kepler[162364]:        "type": "bluestore"
Jan 23 04:14:42 np0005593232 stoic_kepler[162364]:    }
Jan 23 04:14:42 np0005593232 stoic_kepler[162364]: }
Jan 23 04:14:42 np0005593232 systemd[1]: libpod-be4a06f540746a8be8a13049b02589a2672c365a2eae9785fe325aa66c055394.scope: Deactivated successfully.
Jan 23 04:14:42 np0005593232 podman[162279]: 2026-01-23 09:14:42.705009609 +0000 UTC m=+1.024927397 container died be4a06f540746a8be8a13049b02589a2672c365a2eae9785fe325aa66c055394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:14:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:42.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:42 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4766273dc03213c0d3ccc80569988b34e4b07eed2b27ab184df5c64729dfe395-merged.mount: Deactivated successfully.
Jan 23 04:14:42 np0005593232 podman[162279]: 2026-01-23 09:14:42.765892866 +0000 UTC m=+1.085810654 container remove be4a06f540746a8be8a13049b02589a2672c365a2eae9785fe325aa66c055394 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kepler, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:14:42 np0005593232 systemd[1]: libpod-conmon-be4a06f540746a8be8a13049b02589a2672c365a2eae9785fe325aa66c055394.scope: Deactivated successfully.
Jan 23 04:14:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:14:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:14:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:14:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:14:42 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 65c10d9b-f90f-4e44-a9b3-a9cc8ce6173d does not exist
Jan 23 04:14:42 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 67ddb058-30e2-4f12-96f1-f86eb7fc287d does not exist
Jan 23 04:14:42 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 66787b28-cee0-44dd-9244-f37d9b0e5523 does not exist
Jan 23 04:14:43 np0005593232 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 23 04:14:43 np0005593232 python3.9[162636]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:14:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:43.280 161902 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 23 04:14:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:43.281 161902 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpfqklrb5a/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 23 04:14:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:43.162 162637 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 23 04:14:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:43.166 162637 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 23 04:14:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:43.167 162637 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Jan 23 04:14:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:43.168 162637 INFO oslo.privsep.daemon [-] privsep daemon running as pid 162637#033[00m
Jan 23 04:14:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:43.283 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[f3b4d6b3-e982-4f32-b5ef-1bae51f75544]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:14:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:14:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:43.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:14:43 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:14:43 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:14:43 np0005593232 python3.9[162766]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769159682.7642584-1427-108184166357282/.source.yaml _original_basename=.wsba_ksh follow=False checksum=29c9ae8bd33f53131de391173ae7a464927d83f8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:14:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:43.804 162637 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:14:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:43.804 162637 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:14:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:43.804 162637 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:14:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v580: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.372 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[9c20902c-bce9-4ef3-ba8e-f979b465b5bb]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.375 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, column=external_ids, values=({'neutron:ovn-metadata-id': '1112b2d7-9077-5e11-ac8b-fd5107f0b2dd'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:14:44 np0005593232 systemd[1]: session-48.scope: Deactivated successfully.
Jan 23 04:14:44 np0005593232 systemd[1]: session-48.scope: Consumed 56.888s CPU time.
Jan 23 04:14:44 np0005593232 systemd-logind[808]: Session 48 logged out. Waiting for processes to exit.
Jan 23 04:14:44 np0005593232 systemd-logind[808]: Removed session 48.
Jan 23 04:14:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:14:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:44.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.758 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.769 161902 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.769 161902 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.770 161902 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.770 161902 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.770 161902 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.770 161902 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.770 161902 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.770 161902 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.770 161902 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.770 161902 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.771 161902 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.771 161902 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.771 161902 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.771 161902 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.771 161902 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.771 161902 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.771 161902 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.771 161902 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.772 161902 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.772 161902 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.772 161902 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.772 161902 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.772 161902 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.772 161902 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.772 161902 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.772 161902 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.773 161902 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.773 161902 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.773 161902 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.773 161902 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.773 161902 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.773 161902 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.773 161902 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.774 161902 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.774 161902 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.774 161902 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.774 161902 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.774 161902 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.774 161902 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.775 161902 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.775 161902 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.775 161902 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.775 161902 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.775 161902 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.775 161902 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.775 161902 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.775 161902 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.776 161902 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.776 161902 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.776 161902 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.776 161902 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.776 161902 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.776 161902 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.776 161902 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.776 161902 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.777 161902 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.777 161902 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.777 161902 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.777 161902 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.777 161902 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.777 161902 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.777 161902 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.777 161902 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.778 161902 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.778 161902 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.778 161902 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.778 161902 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.778 161902 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.778 161902 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.778 161902 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.779 161902 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.779 161902 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.779 161902 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.779 161902 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.779 161902 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.779 161902 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.779 161902 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.779 161902 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.779 161902 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.780 161902 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.780 161902 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.780 161902 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.780 161902 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.780 161902 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.780 161902 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.780 161902 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.780 161902 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.781 161902 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.781 161902 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.781 161902 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.781 161902 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.781 161902 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.781 161902 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.781 161902 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.781 161902 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.782 161902 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.782 161902 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.782 161902 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.782 161902 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.782 161902 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.782 161902 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.782 161902 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.783 161902 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.783 161902 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.783 161902 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.783 161902 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.783 161902 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.783 161902 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.783 161902 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.784 161902 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.784 161902 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.784 161902 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.784 161902 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.784 161902 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.784 161902 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.784 161902 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.785 161902 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.785 161902 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.785 161902 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.785 161902 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.785 161902 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.785 161902 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.785 161902 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.785 161902 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.786 161902 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.786 161902 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.786 161902 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.786 161902 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.786 161902 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.786 161902 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.786 161902 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.786 161902 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.786 161902 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.787 161902 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.787 161902 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.787 161902 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.787 161902 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.787 161902 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.787 161902 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.787 161902 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.787 161902 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.787 161902 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.787 161902 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.788 161902 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.788 161902 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.788 161902 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.788 161902 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.788 161902 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.788 161902 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.788 161902 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.788 161902 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.788 161902 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.788 161902 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.789 161902 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.789 161902 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.789 161902 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.789 161902 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.789 161902 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.789 161902 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.789 161902 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.789 161902 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.789 161902 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.789 161902 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.789 161902 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.790 161902 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.790 161902 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.790 161902 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.790 161902 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.790 161902 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.790 161902 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.790 161902 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.790 161902 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.790 161902 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.791 161902 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.791 161902 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.791 161902 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.791 161902 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.791 161902 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.791 161902 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.792 161902 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.792 161902 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.792 161902 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.792 161902 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.792 161902 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.792 161902 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.792 161902 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.793 161902 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.793 161902 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.793 161902 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.793 161902 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.793 161902 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.793 161902 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.793 161902 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.794 161902 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.794 161902 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.794 161902 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.794 161902 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.794 161902 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.794 161902 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.794 161902 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.795 161902 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.795 161902 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.795 161902 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.795 161902 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.795 161902 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.795 161902 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.795 161902 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.796 161902 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.797 161902 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.798 161902 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.798 161902 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.799 161902 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.800 161902 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.801 161902 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.801 161902 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.802 161902 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.802 161902 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.803 161902 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.805 161902 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.806 161902 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.806 161902 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.807 161902 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.807 161902 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.808 161902 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.809 161902 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.810 161902 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.811 161902 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.811 161902 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.812 161902 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.812 161902 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.812 161902 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.812 161902 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.812 161902 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.812 161902 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.812 161902 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.812 161902 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.813 161902 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.813 161902 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.813 161902 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.813 161902 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.813 161902 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.813 161902 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.813 161902 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.814 161902 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.814 161902 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.814 161902 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.814 161902 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.814 161902 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.814 161902 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.814 161902 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.814 161902 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.815 161902 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.815 161902 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.815 161902 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.815 161902 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.815 161902 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.815 161902 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.815 161902 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.815 161902 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.816 161902 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.816 161902 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.816 161902 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.816 161902 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.816 161902 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.816 161902 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.816 161902 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.816 161902 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.817 161902 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.817 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.817 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.817 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.817 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.817 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.817 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.818 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.818 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.818 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.818 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.818 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.818 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.818 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.818 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.819 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.819 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.819 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.819 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.819 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.819 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.819 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.819 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.820 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.820 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.820 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.820 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.820 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.820 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.820 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.821 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.821 161902 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.821 161902 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.821 161902 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.821 161902 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.821 161902 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:14:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:14:44.821 161902 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 23 04:14:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:45.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:14:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:14:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:14:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:14:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:14:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:14:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:14:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:14:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:14:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:14:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:14:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:14:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:14:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:14:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:14:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:14:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:14:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:14:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:14:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:14:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:14:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:14:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:14:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v581: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:14:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:46.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:14:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:47.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v582: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:48 np0005593232 podman[162793]: 2026-01-23 09:14:48.483814312 +0000 UTC m=+0.141897470 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 04:14:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:48.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:14:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:49.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v583: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:50 np0005593232 systemd-logind[808]: New session 49 of user zuul.
Jan 23 04:14:50 np0005593232 systemd[1]: Started Session 49 of User zuul.
Jan 23 04:14:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:14:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:50.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:14:51 np0005593232 python3.9[162976]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:14:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:51.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v584: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:52.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:53 np0005593232 python3.9[163133]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:14:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:14:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:53.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:14:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v585: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:14:54 np0005593232 python3.9[163298]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 04:14:54 np0005593232 systemd[1]: Reloading.
Jan 23 04:14:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:14:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:54.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:14:54 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:14:54 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:14:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:55.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:14:56 np0005593232 python3.9[163483]: ansible-ansible.builtin.service_facts Invoked
Jan 23 04:14:56 np0005593232 network[163500]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 04:14:56 np0005593232 network[163501]: 'network-scripts' will be removed from distribution in near future.
Jan 23 04:14:56 np0005593232 network[163502]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 04:14:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v586: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:14:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:56.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:14:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:14:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:57.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:14:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v587: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:14:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:14:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:14:58.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:14:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:14:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:14:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:14:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:14:59.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v588: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:00.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:15:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:01.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:15:02 np0005593232 python3.9[163767]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:15:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v589: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:15:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:02.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:15:02 np0005593232 python3.9[163970]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:15:03 np0005593232 python3.9[164124]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:15:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:15:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:03.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:15:04 np0005593232 python3.9[164277]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:15:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v590: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:15:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:15:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:04.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:15:05 np0005593232 python3.9[164431]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:15:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:05.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:05 np0005593232 python3.9[164584]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:15:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v591: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:06 np0005593232 python3.9[164737]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:15:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:06.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:15:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:15:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:15:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:15:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:15:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:15:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:07.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:08 np0005593232 python3.9[164891]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:15:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v592: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:15:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:08.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:15:08 np0005593232 python3.9[165044]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:15:09 np0005593232 python3.9[165196]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:15:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:15:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:09.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:10 np0005593232 python3.9[165348]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:15:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v593: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:10 np0005593232 python3.9[165500]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:15:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:10.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:11 np0005593232 podman[165625]: 2026-01-23 09:15:11.166391122 +0000 UTC m=+0.076063732 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 04:15:11 np0005593232 python3.9[165671]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:15:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:11.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:11 np0005593232 python3.9[165825]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:15:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v594: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:12.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:13 np0005593232 python3.9[165978]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:15:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:13.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:13 np0005593232 python3.9[166130]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:15:14 np0005593232 auditd[705]: Audit daemon rotating log files
Jan 23 04:15:14 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 04:15:14 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 04:15:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v595: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:14 np0005593232 python3.9[166282]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:15:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:15:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:15:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:14.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:15:14 np0005593232 python3.9[166436]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:15:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:15.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:15 np0005593232 python3.9[166588]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:15:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v596: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:16 np0005593232 python3.9[166740]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:15:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:15:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:16.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:15:17 np0005593232 python3.9[166893]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:15:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:15:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:17.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:15:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v597: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:18 np0005593232 python3.9[167045]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:15:18 np0005593232 podman[167048]: 2026-01-23 09:15:18.635097649 +0000 UTC m=+0.106524158 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 04:15:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:15:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:18.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:15:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:15:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:19.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:19 np0005593232 python3.9[167224]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 04:15:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v598: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:15:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:20.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:15:20 np0005593232 python3.9[167376]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 04:15:20 np0005593232 systemd[1]: Reloading.
Jan 23 04:15:20 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:15:20 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:15:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:15:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:21.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:15:21 np0005593232 python3.9[167564]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:15:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v599: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:22 np0005593232 python3.9[167767]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:15:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:22.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:23 np0005593232 python3.9[167921]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:15:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:23.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:23 np0005593232 python3.9[168074]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:15:24 np0005593232 python3.9[168227]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:15:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v600: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:15:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:24.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:24 np0005593232 python3.9[168380]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:15:25 np0005593232 python3.9[168534]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:15:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:25.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v601: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:26.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:27.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v602: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:28.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:28 np0005593232 python3.9[168689]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 23 04:15:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:15:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:29.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:29 np0005593232 python3.9[168842]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 23 04:15:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v603: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:30.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:31 np0005593232 python3.9[169001]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 23 04:15:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:15:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:31.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:15:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v604: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:32 np0005593232 python3.9[169161]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 04:15:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:32.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:33 np0005593232 python3.9[169246]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 04:15:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:15:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:33.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:15:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v605: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:15:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:34.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:35.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v606: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:15:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:36.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:15:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:15:37
Jan 23 04:15:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:15:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:15:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', '.mgr']
Jan 23 04:15:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:15:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:15:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:15:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:15:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:15:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:15:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:15:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:37.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:15:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:15:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:15:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:15:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:15:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:15:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:15:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:15:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:15:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:15:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v607: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:38.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:15:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:15:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:39.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:15:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v608: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:15:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:40.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:15:41 np0005593232 podman[169324]: 2026-01-23 09:15:41.401555415 +0000 UTC m=+0.058712320 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:15:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:41.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v609: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:15:42.516 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:15:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:15:42.518 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:15:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:15:42.518 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:15:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:42.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:43.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v610: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:15:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:15:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:15:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:15:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:15:44 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:15:44 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:15:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:44.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:15:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:15:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:15:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:15:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:15:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:15:45 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 187339b4-1ba8-445a-83c6-347a91a22da5 does not exist
Jan 23 04:15:45 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 0cf79ec7-096e-44f5-ae28-c1e1106e18ce does not exist
Jan 23 04:15:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:15:45 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b32dbc85-df76-41c3-a9d9-ef065b27d7fe does not exist
Jan 23 04:15:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:15:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:15:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:15:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:15:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:15:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:15:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:45.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:15:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:15:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:15:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:15:46 np0005593232 podman[169777]: 2026-01-23 09:15:46.126026382 +0000 UTC m=+0.039744690 container create 8833b38e32e98bb11e6a71b9df203d2d8a40a4982739bbca15a5aa7b9312ed61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bouman, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 04:15:46 np0005593232 systemd[1]: Started libpod-conmon-8833b38e32e98bb11e6a71b9df203d2d8a40a4982739bbca15a5aa7b9312ed61.scope.
Jan 23 04:15:46 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:15:46 np0005593232 podman[169777]: 2026-01-23 09:15:46.108751901 +0000 UTC m=+0.022470229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:15:46 np0005593232 podman[169777]: 2026-01-23 09:15:46.209841964 +0000 UTC m=+0.123560292 container init 8833b38e32e98bb11e6a71b9df203d2d8a40a4982739bbca15a5aa7b9312ed61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bouman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 04:15:46 np0005593232 podman[169777]: 2026-01-23 09:15:46.217827161 +0000 UTC m=+0.131545469 container start 8833b38e32e98bb11e6a71b9df203d2d8a40a4982739bbca15a5aa7b9312ed61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bouman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 04:15:46 np0005593232 podman[169777]: 2026-01-23 09:15:46.222102263 +0000 UTC m=+0.135820571 container attach 8833b38e32e98bb11e6a71b9df203d2d8a40a4982739bbca15a5aa7b9312ed61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bouman, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 04:15:46 np0005593232 strange_bouman[169793]: 167 167
Jan 23 04:15:46 np0005593232 systemd[1]: libpod-8833b38e32e98bb11e6a71b9df203d2d8a40a4982739bbca15a5aa7b9312ed61.scope: Deactivated successfully.
Jan 23 04:15:46 np0005593232 conmon[169793]: conmon 8833b38e32e98bb11e6a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8833b38e32e98bb11e6a71b9df203d2d8a40a4982739bbca15a5aa7b9312ed61.scope/container/memory.events
Jan 23 04:15:46 np0005593232 podman[169777]: 2026-01-23 09:15:46.226006784 +0000 UTC m=+0.139725092 container died 8833b38e32e98bb11e6a71b9df203d2d8a40a4982739bbca15a5aa7b9312ed61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bouman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 04:15:46 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d2780ed8b8959d4606cc4daa3b5a1711f9d157e81db61e4feba0742bcf9d944a-merged.mount: Deactivated successfully.
Jan 23 04:15:46 np0005593232 podman[169777]: 2026-01-23 09:15:46.282807108 +0000 UTC m=+0.196525416 container remove 8833b38e32e98bb11e6a71b9df203d2d8a40a4982739bbca15a5aa7b9312ed61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 04:15:46 np0005593232 systemd[1]: libpod-conmon-8833b38e32e98bb11e6a71b9df203d2d8a40a4982739bbca15a5aa7b9312ed61.scope: Deactivated successfully.
Jan 23 04:15:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:15:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:15:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:15:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:15:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:15:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:15:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:15:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:15:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:15:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:15:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:15:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:15:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:15:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:15:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:15:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:15:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:15:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:15:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:15:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:15:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:15:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:15:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:15:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v611: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:46 np0005593232 podman[169818]: 2026-01-23 09:15:46.464119551 +0000 UTC m=+0.059858572 container create 0cc0271c84ed71a1c249640f4cc6b27729c389c33584891eda9a5bf3eedc08c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_nightingale, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 04:15:46 np0005593232 systemd[1]: Started libpod-conmon-0cc0271c84ed71a1c249640f4cc6b27729c389c33584891eda9a5bf3eedc08c8.scope.
Jan 23 04:15:46 np0005593232 podman[169818]: 2026-01-23 09:15:46.428608972 +0000 UTC m=+0.024348003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:15:46 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:15:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68fa85fa701cf2c6d8babfc96a79282e1deb1fb1f0307af053b1e1f84e372b0f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:15:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68fa85fa701cf2c6d8babfc96a79282e1deb1fb1f0307af053b1e1f84e372b0f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:15:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68fa85fa701cf2c6d8babfc96a79282e1deb1fb1f0307af053b1e1f84e372b0f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:15:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68fa85fa701cf2c6d8babfc96a79282e1deb1fb1f0307af053b1e1f84e372b0f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:15:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68fa85fa701cf2c6d8babfc96a79282e1deb1fb1f0307af053b1e1f84e372b0f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:15:46 np0005593232 podman[169818]: 2026-01-23 09:15:46.553653965 +0000 UTC m=+0.149392986 container init 0cc0271c84ed71a1c249640f4cc6b27729c389c33584891eda9a5bf3eedc08c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 23 04:15:46 np0005593232 podman[169818]: 2026-01-23 09:15:46.559401859 +0000 UTC m=+0.155140880 container start 0cc0271c84ed71a1c249640f4cc6b27729c389c33584891eda9a5bf3eedc08c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:15:46 np0005593232 podman[169818]: 2026-01-23 09:15:46.565180303 +0000 UTC m=+0.160919384 container attach 0cc0271c84ed71a1c249640f4cc6b27729c389c33584891eda9a5bf3eedc08c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_nightingale, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:15:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:15:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:46.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:15:47 np0005593232 zen_nightingale[169835]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:15:47 np0005593232 zen_nightingale[169835]: --> relative data size: 1.0
Jan 23 04:15:47 np0005593232 zen_nightingale[169835]: --> All data devices are unavailable
Jan 23 04:15:47 np0005593232 systemd[1]: libpod-0cc0271c84ed71a1c249640f4cc6b27729c389c33584891eda9a5bf3eedc08c8.scope: Deactivated successfully.
Jan 23 04:15:47 np0005593232 podman[169818]: 2026-01-23 09:15:47.459335515 +0000 UTC m=+1.055074536 container died 0cc0271c84ed71a1c249640f4cc6b27729c389c33584891eda9a5bf3eedc08c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_nightingale, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:15:47 np0005593232 systemd[1]: var-lib-containers-storage-overlay-68fa85fa701cf2c6d8babfc96a79282e1deb1fb1f0307af053b1e1f84e372b0f-merged.mount: Deactivated successfully.
Jan 23 04:15:47 np0005593232 podman[169818]: 2026-01-23 09:15:47.516626213 +0000 UTC m=+1.112365224 container remove 0cc0271c84ed71a1c249640f4cc6b27729c389c33584891eda9a5bf3eedc08c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 04:15:47 np0005593232 systemd[1]: libpod-conmon-0cc0271c84ed71a1c249640f4cc6b27729c389c33584891eda9a5bf3eedc08c8.scope: Deactivated successfully.
Jan 23 04:15:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:47.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:48 np0005593232 podman[170007]: 2026-01-23 09:15:48.099810987 +0000 UTC m=+0.038566347 container create bd3633d9099540c0c8f7f7ecd3b34128e7d6ba8249e97f5b05e8399cd90bff83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 23 04:15:48 np0005593232 systemd[1]: Started libpod-conmon-bd3633d9099540c0c8f7f7ecd3b34128e7d6ba8249e97f5b05e8399cd90bff83.scope.
Jan 23 04:15:48 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:15:48 np0005593232 podman[170007]: 2026-01-23 09:15:48.164288139 +0000 UTC m=+0.103043549 container init bd3633d9099540c0c8f7f7ecd3b34128e7d6ba8249e97f5b05e8399cd90bff83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bouman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 04:15:48 np0005593232 podman[170007]: 2026-01-23 09:15:48.172806321 +0000 UTC m=+0.111561681 container start bd3633d9099540c0c8f7f7ecd3b34128e7d6ba8249e97f5b05e8399cd90bff83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Jan 23 04:15:48 np0005593232 podman[170007]: 2026-01-23 09:15:48.175833427 +0000 UTC m=+0.114588807 container attach bd3633d9099540c0c8f7f7ecd3b34128e7d6ba8249e97f5b05e8399cd90bff83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:15:48 np0005593232 nice_bouman[170023]: 167 167
Jan 23 04:15:48 np0005593232 systemd[1]: libpod-bd3633d9099540c0c8f7f7ecd3b34128e7d6ba8249e97f5b05e8399cd90bff83.scope: Deactivated successfully.
Jan 23 04:15:48 np0005593232 podman[170007]: 2026-01-23 09:15:48.177303029 +0000 UTC m=+0.116058389 container died bd3633d9099540c0c8f7f7ecd3b34128e7d6ba8249e97f5b05e8399cd90bff83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 04:15:48 np0005593232 podman[170007]: 2026-01-23 09:15:48.083316468 +0000 UTC m=+0.022071848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:15:48 np0005593232 systemd[1]: var-lib-containers-storage-overlay-90eb2a8dbbeef3517e4168494430bd711caff4348e112389dd806da9dcca3556-merged.mount: Deactivated successfully.
Jan 23 04:15:48 np0005593232 podman[170007]: 2026-01-23 09:15:48.224027827 +0000 UTC m=+0.162783187 container remove bd3633d9099540c0c8f7f7ecd3b34128e7d6ba8249e97f5b05e8399cd90bff83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:15:48 np0005593232 systemd[1]: libpod-conmon-bd3633d9099540c0c8f7f7ecd3b34128e7d6ba8249e97f5b05e8399cd90bff83.scope: Deactivated successfully.
Jan 23 04:15:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v612: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:48 np0005593232 podman[170047]: 2026-01-23 09:15:48.414959882 +0000 UTC m=+0.054710335 container create 1f91da9290d1a517128f38e7efd6011643d2d4114b4cf3b6cb87d315e88b4b93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 23 04:15:48 np0005593232 systemd[1]: Started libpod-conmon-1f91da9290d1a517128f38e7efd6011643d2d4114b4cf3b6cb87d315e88b4b93.scope.
Jan 23 04:15:48 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:15:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab07abf17861ebededd7eff7c2bd99cb3b315a3617950e414b874527ff0ae751/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:15:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab07abf17861ebededd7eff7c2bd99cb3b315a3617950e414b874527ff0ae751/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:15:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab07abf17861ebededd7eff7c2bd99cb3b315a3617950e414b874527ff0ae751/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:15:48 np0005593232 podman[170047]: 2026-01-23 09:15:48.399714319 +0000 UTC m=+0.039464792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:15:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab07abf17861ebededd7eff7c2bd99cb3b315a3617950e414b874527ff0ae751/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:15:48 np0005593232 podman[170047]: 2026-01-23 09:15:48.517462835 +0000 UTC m=+0.157213308 container init 1f91da9290d1a517128f38e7efd6011643d2d4114b4cf3b6cb87d315e88b4b93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:15:48 np0005593232 podman[170047]: 2026-01-23 09:15:48.525635728 +0000 UTC m=+0.165386171 container start 1f91da9290d1a517128f38e7efd6011643d2d4114b4cf3b6cb87d315e88b4b93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_roentgen, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:15:48 np0005593232 podman[170047]: 2026-01-23 09:15:48.530373162 +0000 UTC m=+0.170123625 container attach 1f91da9290d1a517128f38e7efd6011643d2d4114b4cf3b6cb87d315e88b4b93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_roentgen, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 04:15:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:48.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]: {
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:    "0": [
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:        {
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:            "devices": [
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:                "/dev/loop3"
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:            ],
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:            "lv_name": "ceph_lv0",
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:            "lv_size": "7511998464",
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:            "name": "ceph_lv0",
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:            "tags": {
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:                "ceph.cluster_name": "ceph",
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:                "ceph.crush_device_class": "",
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:                "ceph.encrypted": "0",
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:                "ceph.osd_id": "0",
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:                "ceph.type": "block",
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:                "ceph.vdo": "0"
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:            },
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:            "type": "block",
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:            "vg_name": "ceph_vg0"
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:        }
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]:    ]
Jan 23 04:15:49 np0005593232 relaxed_roentgen[170064]: }
Jan 23 04:15:49 np0005593232 systemd[1]: libpod-1f91da9290d1a517128f38e7efd6011643d2d4114b4cf3b6cb87d315e88b4b93.scope: Deactivated successfully.
Jan 23 04:15:49 np0005593232 podman[170047]: 2026-01-23 09:15:49.442886926 +0000 UTC m=+1.082637379 container died 1f91da9290d1a517128f38e7efd6011643d2d4114b4cf3b6cb87d315e88b4b93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_roentgen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:15:49 np0005593232 podman[170075]: 2026-01-23 09:15:49.449617057 +0000 UTC m=+0.106981761 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:15:49 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ab07abf17861ebededd7eff7c2bd99cb3b315a3617950e414b874527ff0ae751-merged.mount: Deactivated successfully.
Jan 23 04:15:49 np0005593232 podman[170047]: 2026-01-23 09:15:49.49829734 +0000 UTC m=+1.138047793 container remove 1f91da9290d1a517128f38e7efd6011643d2d4114b4cf3b6cb87d315e88b4b93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 23 04:15:49 np0005593232 systemd[1]: libpod-conmon-1f91da9290d1a517128f38e7efd6011643d2d4114b4cf3b6cb87d315e88b4b93.scope: Deactivated successfully.
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:15:49.596047) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159749596227, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 2424, "num_deletes": 502, "total_data_size": 4171613, "memory_usage": 4244400, "flush_reason": "Manual Compaction"}
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159749628355, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 4103421, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12339, "largest_seqno": 14762, "table_properties": {"data_size": 4092973, "index_size": 6302, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 22417, "raw_average_key_size": 18, "raw_value_size": 4070378, "raw_average_value_size": 3366, "num_data_blocks": 281, "num_entries": 1209, "num_filter_entries": 1209, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769159502, "oldest_key_time": 1769159502, "file_creation_time": 1769159749, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 32429 microseconds, and 11114 cpu microseconds.
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:15:49.628536) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 4103421 bytes OK
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:15:49.628592) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:15:49.630968) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:15:49.630988) EVENT_LOG_v1 {"time_micros": 1769159749630984, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:15:49.631007) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 4161007, prev total WAL file size 4161007, number of live WAL files 2.
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:15:49.632643) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(4007KB)], [29(8423KB)]
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159749632749, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 12729441, "oldest_snapshot_seqno": -1}
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4184 keys, 10315936 bytes, temperature: kUnknown
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159749721252, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 10315936, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10283133, "index_size": 21237, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10501, "raw_key_size": 103259, "raw_average_key_size": 24, "raw_value_size": 10202621, "raw_average_value_size": 2438, "num_data_blocks": 898, "num_entries": 4184, "num_filter_entries": 4184, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769159749, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:15:49.722027) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 10315936 bytes
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:15:49.729368) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.1 rd, 115.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 8.2 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(5.6) write-amplify(2.5) OK, records in: 5207, records dropped: 1023 output_compression: NoCompression
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:15:49.729439) EVENT_LOG_v1 {"time_micros": 1769159749729414, "job": 12, "event": "compaction_finished", "compaction_time_micros": 88975, "compaction_time_cpu_micros": 31182, "output_level": 6, "num_output_files": 1, "total_output_size": 10315936, "num_input_records": 5207, "num_output_records": 4184, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159749730352, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159749731895, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:15:49.632460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:15:49.731934) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:15:49.731940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:15:49.731943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:15:49.731945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:15:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:15:49.731947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:15:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:49.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:50 np0005593232 podman[170256]: 2026-01-23 09:15:50.108120042 +0000 UTC m=+0.044810915 container create ce8df1dae7de592f056ccc6f48777caed182ab75bfd914b55a5fb440eeff8609 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_poincare, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:15:50 np0005593232 systemd[1]: Started libpod-conmon-ce8df1dae7de592f056ccc6f48777caed182ab75bfd914b55a5fb440eeff8609.scope.
Jan 23 04:15:50 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:15:50 np0005593232 podman[170256]: 2026-01-23 09:15:50.174292802 +0000 UTC m=+0.110983675 container init ce8df1dae7de592f056ccc6f48777caed182ab75bfd914b55a5fb440eeff8609 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_poincare, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 04:15:50 np0005593232 podman[170256]: 2026-01-23 09:15:50.182558547 +0000 UTC m=+0.119249420 container start ce8df1dae7de592f056ccc6f48777caed182ab75bfd914b55a5fb440eeff8609 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 04:15:50 np0005593232 podman[170256]: 2026-01-23 09:15:50.088859574 +0000 UTC m=+0.025550467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:15:50 np0005593232 podman[170256]: 2026-01-23 09:15:50.187348583 +0000 UTC m=+0.124039476 container attach ce8df1dae7de592f056ccc6f48777caed182ab75bfd914b55a5fb440eeff8609 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_poincare, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:15:50 np0005593232 hungry_poincare[170272]: 167 167
Jan 23 04:15:50 np0005593232 systemd[1]: libpod-ce8df1dae7de592f056ccc6f48777caed182ab75bfd914b55a5fb440eeff8609.scope: Deactivated successfully.
Jan 23 04:15:50 np0005593232 podman[170256]: 2026-01-23 09:15:50.190947396 +0000 UTC m=+0.127638299 container died ce8df1dae7de592f056ccc6f48777caed182ab75bfd914b55a5fb440eeff8609 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:15:50 np0005593232 systemd[1]: var-lib-containers-storage-overlay-0494db92a18137a1710ef220559d2048aff928e284bc2e3fdb06186c61f38f6c-merged.mount: Deactivated successfully.
Jan 23 04:15:50 np0005593232 podman[170256]: 2026-01-23 09:15:50.239151725 +0000 UTC m=+0.175842598 container remove ce8df1dae7de592f056ccc6f48777caed182ab75bfd914b55a5fb440eeff8609 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_poincare, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Jan 23 04:15:50 np0005593232 systemd[1]: libpod-conmon-ce8df1dae7de592f056ccc6f48777caed182ab75bfd914b55a5fb440eeff8609.scope: Deactivated successfully.
Jan 23 04:15:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v613: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:50 np0005593232 podman[170297]: 2026-01-23 09:15:50.408624992 +0000 UTC m=+0.044449714 container create 4504b3246357908ae1fa0f382a71fd43b4771d2c1bbe800e55acdc1fce2339a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kare, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:15:50 np0005593232 systemd[1]: Started libpod-conmon-4504b3246357908ae1fa0f382a71fd43b4771d2c1bbe800e55acdc1fce2339a5.scope.
Jan 23 04:15:50 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:15:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35890cd63c56d11ded02a9a38fae15bdc23f86b06cd6a51867ae8e0df3e4d3e0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:15:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35890cd63c56d11ded02a9a38fae15bdc23f86b06cd6a51867ae8e0df3e4d3e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:15:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35890cd63c56d11ded02a9a38fae15bdc23f86b06cd6a51867ae8e0df3e4d3e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:15:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35890cd63c56d11ded02a9a38fae15bdc23f86b06cd6a51867ae8e0df3e4d3e0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:15:50 np0005593232 podman[170297]: 2026-01-23 09:15:50.391425893 +0000 UTC m=+0.027250615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:15:50 np0005593232 podman[170297]: 2026-01-23 09:15:50.496745166 +0000 UTC m=+0.132569898 container init 4504b3246357908ae1fa0f382a71fd43b4771d2c1bbe800e55acdc1fce2339a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:15:50 np0005593232 podman[170297]: 2026-01-23 09:15:50.503387495 +0000 UTC m=+0.139212217 container start 4504b3246357908ae1fa0f382a71fd43b4771d2c1bbe800e55acdc1fce2339a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:15:50 np0005593232 podman[170297]: 2026-01-23 09:15:50.50671599 +0000 UTC m=+0.142540762 container attach 4504b3246357908ae1fa0f382a71fd43b4771d2c1bbe800e55acdc1fce2339a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kare, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 04:15:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:50.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:51 np0005593232 charming_kare[170314]: {
Jan 23 04:15:51 np0005593232 charming_kare[170314]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:15:51 np0005593232 charming_kare[170314]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:15:51 np0005593232 charming_kare[170314]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:15:51 np0005593232 charming_kare[170314]:        "osd_id": 0,
Jan 23 04:15:51 np0005593232 charming_kare[170314]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:15:51 np0005593232 charming_kare[170314]:        "type": "bluestore"
Jan 23 04:15:51 np0005593232 charming_kare[170314]:    }
Jan 23 04:15:51 np0005593232 charming_kare[170314]: }
Jan 23 04:15:51 np0005593232 systemd[1]: libpod-4504b3246357908ae1fa0f382a71fd43b4771d2c1bbe800e55acdc1fce2339a5.scope: Deactivated successfully.
Jan 23 04:15:51 np0005593232 podman[170297]: 2026-01-23 09:15:51.463545733 +0000 UTC m=+1.099370465 container died 4504b3246357908ae1fa0f382a71fd43b4771d2c1bbe800e55acdc1fce2339a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:15:51 np0005593232 systemd[1]: var-lib-containers-storage-overlay-35890cd63c56d11ded02a9a38fae15bdc23f86b06cd6a51867ae8e0df3e4d3e0-merged.mount: Deactivated successfully.
Jan 23 04:15:51 np0005593232 podman[170297]: 2026-01-23 09:15:51.519832562 +0000 UTC m=+1.155657284 container remove 4504b3246357908ae1fa0f382a71fd43b4771d2c1bbe800e55acdc1fce2339a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_kare, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:15:51 np0005593232 systemd[1]: libpod-conmon-4504b3246357908ae1fa0f382a71fd43b4771d2c1bbe800e55acdc1fce2339a5.scope: Deactivated successfully.
Jan 23 04:15:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:15:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:15:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:15:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:15:51 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 093124e0-226a-496a-8ef1-7ab9743ac339 does not exist
Jan 23 04:15:51 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ea0c5bbd-c42a-406a-aa40-cc54f1713dff does not exist
Jan 23 04:15:51 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 4d5139f3-d90a-4340-b364-802c90560dac does not exist
Jan 23 04:15:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:51.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v614: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:15:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:15:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:15:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:52.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:15:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:53.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v615: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:15:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:54.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:55.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v616: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:56.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:15:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:15:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:57.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:15:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v617: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:15:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:15:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:15:58.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:15:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:15:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:15:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:15:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:15:59.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:00 np0005593232 kernel: SELinux:  Converting 2776 SID table entries...
Jan 23 04:16:00 np0005593232 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 04:16:00 np0005593232 kernel: SELinux:  policy capability open_perms=1
Jan 23 04:16:00 np0005593232 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 04:16:00 np0005593232 kernel: SELinux:  policy capability always_check_network=0
Jan 23 04:16:00 np0005593232 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 04:16:00 np0005593232 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 04:16:00 np0005593232 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 04:16:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v618: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:00.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:16:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:01.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:16:02 np0005593232 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 23 04:16:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v619: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:16:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:02.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:16:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:03.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v620: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:16:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:16:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:04.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:16:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:16:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:05.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:16:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v621: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:06.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:16:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:16:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:16:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:16:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:16:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:16:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:07.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v622: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:08.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:16:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:09.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v623: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:10 np0005593232 kernel: SELinux:  Converting 2776 SID table entries...
Jan 23 04:16:10 np0005593232 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 04:16:10 np0005593232 kernel: SELinux:  policy capability open_perms=1
Jan 23 04:16:10 np0005593232 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 04:16:10 np0005593232 kernel: SELinux:  policy capability always_check_network=0
Jan 23 04:16:10 np0005593232 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 04:16:10 np0005593232 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 04:16:10 np0005593232 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 04:16:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:16:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:10.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:16:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:16:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:11.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:16:12 np0005593232 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 23 04:16:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v624: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:12 np0005593232 podman[170474]: 2026-01-23 09:16:12.439582879 +0000 UTC m=+0.082298054 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true)
Jan 23 04:16:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:12.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:13.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v625: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:16:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:16:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:14.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:16:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:16:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:15.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:16:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v626: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:16:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:16.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:16:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:16:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:17.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:16:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v627: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:18.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:16:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:19.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v628: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:20 np0005593232 podman[170499]: 2026-01-23 09:16:20.42492062 +0000 UTC m=+0.083774468 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 23 04:16:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:20.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:16:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:21.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:16:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v629: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:22.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:23.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v630: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:16:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:24.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:25.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v631: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:26.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:16:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:27.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:16:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v632: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:28.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:16:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:29.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v633: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:16:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:30.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:16:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:31.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v634: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:16:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:32.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:16:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:33.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v635: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:16:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:16:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:34.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:16:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:16:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:35.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:16:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v636: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:16:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:36.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:16:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:16:37
Jan 23 04:16:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:16:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:16:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', '.rgw.root', '.mgr', 'vms']
Jan 23 04:16:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:16:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:16:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:16:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:16:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:16:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:16:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:16:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:16:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:16:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:16:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:16:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:16:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:16:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:16:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:16:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:16:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:16:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:37.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v637: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:38.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:16:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:39.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v638: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:16:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:40.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:16:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:41.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v639: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:16:42.567 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:16:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:16:42.569 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:16:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:16:42.569 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:16:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:42.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:42 np0005593232 podman[183284]: 2026-01-23 09:16:42.975814791 +0000 UTC m=+0.091131097 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:16:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:16:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:43.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:16:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v640: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:16:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:44.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:16:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:45.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:16:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:16:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:16:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:16:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:16:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:16:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:16:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:16:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:16:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:16:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:16:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:16:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:16:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:16:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:16:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:16:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:16:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:16:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:16:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:16:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:16:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:16:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:16:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:16:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v641: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:16:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:46.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:16:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:16:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:47.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:16:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v642: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:48.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:16:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:49.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v643: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:16:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:50.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:16:51 np0005593232 podman[187517]: 2026-01-23 09:16:51.479286728 +0000 UTC m=+0.133045772 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3)
Jan 23 04:16:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:51.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v644: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:52.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:53 np0005593232 podman[187726]: 2026-01-23 09:16:53.065228017 +0000 UTC m=+0.204086281 container exec 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:16:53 np0005593232 podman[187726]: 2026-01-23 09:16:53.250599551 +0000 UTC m=+0.389457795 container exec_died 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 04:16:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:53.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:54 np0005593232 podman[187889]: 2026-01-23 09:16:54.345880853 +0000 UTC m=+0.079509922 container exec b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 04:16:54 np0005593232 podman[187889]: 2026-01-23 09:16:54.3612883 +0000 UTC m=+0.094917339 container exec_died b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 04:16:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v645: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:54 np0005593232 podman[187955]: 2026-01-23 09:16:54.61253268 +0000 UTC m=+0.072289797 container exec 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.buildah.version=1.28.2, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, vcs-type=git, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, release=1793, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 04:16:54 np0005593232 podman[187955]: 2026-01-23 09:16:54.630085811 +0000 UTC m=+0.089842908 container exec_died 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, description=keepalived for Ceph, release=1793, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, vcs-type=git, name=keepalived)
Jan 23 04:16:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:16:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:16:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:16:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:54.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:16:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:16:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:16:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:16:55 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:16:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:55.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:16:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:16:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:16:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:16:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:16:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:16:56 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 571768f8-828d-4ae5-8469-5ae43472dbb3 does not exist
Jan 23 04:16:56 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 97d7093f-9f54-4a1f-b7f7-d59e5651470a does not exist
Jan 23 04:16:56 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 900c94d5-6bb8-4d10-910c-ad7bd07f8341 does not exist
Jan 23 04:16:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:16:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:16:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:16:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:16:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:16:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:16:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v646: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:56 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:16:56 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:16:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:56.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:56 np0005593232 podman[188261]: 2026-01-23 09:16:56.936976217 +0000 UTC m=+0.038278027 container create 0e7cc2965461898e0c10e223adba92042e0dc341324aeba5d30f4504644965ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rubin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 04:16:56 np0005593232 systemd[1]: Started libpod-conmon-0e7cc2965461898e0c10e223adba92042e0dc341324aeba5d30f4504644965ed.scope.
Jan 23 04:16:56 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:16:57 np0005593232 podman[188261]: 2026-01-23 09:16:56.920587831 +0000 UTC m=+0.021889661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:16:57 np0005593232 podman[188261]: 2026-01-23 09:16:57.017634462 +0000 UTC m=+0.118936302 container init 0e7cc2965461898e0c10e223adba92042e0dc341324aeba5d30f4504644965ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:16:57 np0005593232 podman[188261]: 2026-01-23 09:16:57.024748413 +0000 UTC m=+0.126050223 container start 0e7cc2965461898e0c10e223adba92042e0dc341324aeba5d30f4504644965ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rubin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:16:57 np0005593232 podman[188261]: 2026-01-23 09:16:57.028550316 +0000 UTC m=+0.129852146 container attach 0e7cc2965461898e0c10e223adba92042e0dc341324aeba5d30f4504644965ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:16:57 np0005593232 busy_rubin[188278]: 167 167
Jan 23 04:16:57 np0005593232 podman[188261]: 2026-01-23 09:16:57.032509394 +0000 UTC m=+0.133811204 container died 0e7cc2965461898e0c10e223adba92042e0dc341324aeba5d30f4504644965ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rubin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 23 04:16:57 np0005593232 systemd[1]: libpod-0e7cc2965461898e0c10e223adba92042e0dc341324aeba5d30f4504644965ed.scope: Deactivated successfully.
Jan 23 04:16:57 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6da39603bbf496c88531b7a390c9a804afaace0d69042af55757653070075728-merged.mount: Deactivated successfully.
Jan 23 04:16:57 np0005593232 podman[188261]: 2026-01-23 09:16:57.069954966 +0000 UTC m=+0.171256766 container remove 0e7cc2965461898e0c10e223adba92042e0dc341324aeba5d30f4504644965ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:16:57 np0005593232 systemd[1]: libpod-conmon-0e7cc2965461898e0c10e223adba92042e0dc341324aeba5d30f4504644965ed.scope: Deactivated successfully.
Jan 23 04:16:57 np0005593232 podman[188302]: 2026-01-23 09:16:57.242712605 +0000 UTC m=+0.056488458 container create da85f8b48014b24099bd474f1e23737ce40fe802dd6fd2086e0df4b28248158c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_herschel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 04:16:57 np0005593232 systemd[1]: Started libpod-conmon-da85f8b48014b24099bd474f1e23737ce40fe802dd6fd2086e0df4b28248158c.scope.
Jan 23 04:16:57 np0005593232 podman[188302]: 2026-01-23 09:16:57.210986013 +0000 UTC m=+0.024761866 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:16:57 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:16:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbc527d53d30195fcce859d71421281a3b358b2e071076bf790b9f9b7a60d1cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:16:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbc527d53d30195fcce859d71421281a3b358b2e071076bf790b9f9b7a60d1cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:16:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbc527d53d30195fcce859d71421281a3b358b2e071076bf790b9f9b7a60d1cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:16:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbc527d53d30195fcce859d71421281a3b358b2e071076bf790b9f9b7a60d1cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:16:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbc527d53d30195fcce859d71421281a3b358b2e071076bf790b9f9b7a60d1cc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:16:57 np0005593232 podman[188302]: 2026-01-23 09:16:57.446092994 +0000 UTC m=+0.259868937 container init da85f8b48014b24099bd474f1e23737ce40fe802dd6fd2086e0df4b28248158c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_herschel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:16:57 np0005593232 podman[188302]: 2026-01-23 09:16:57.469444247 +0000 UTC m=+0.283220140 container start da85f8b48014b24099bd474f1e23737ce40fe802dd6fd2086e0df4b28248158c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_herschel, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 23 04:16:57 np0005593232 podman[188302]: 2026-01-23 09:16:57.474997922 +0000 UTC m=+0.288773865 container attach da85f8b48014b24099bd474f1e23737ce40fe802dd6fd2086e0df4b28248158c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 04:16:57 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:16:57 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:16:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:57.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:16:58 np0005593232 loving_herschel[188318]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:16:58 np0005593232 loving_herschel[188318]: --> relative data size: 1.0
Jan 23 04:16:58 np0005593232 loving_herschel[188318]: --> All data devices are unavailable
Jan 23 04:16:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v647: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:16:58 np0005593232 systemd[1]: libpod-da85f8b48014b24099bd474f1e23737ce40fe802dd6fd2086e0df4b28248158c.scope: Deactivated successfully.
Jan 23 04:16:58 np0005593232 podman[188302]: 2026-01-23 09:16:58.415115307 +0000 UTC m=+1.228891190 container died da85f8b48014b24099bd474f1e23737ce40fe802dd6fd2086e0df4b28248158c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_herschel, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 04:16:58 np0005593232 systemd[1]: var-lib-containers-storage-overlay-dbc527d53d30195fcce859d71421281a3b358b2e071076bf790b9f9b7a60d1cc-merged.mount: Deactivated successfully.
Jan 23 04:16:58 np0005593232 podman[188302]: 2026-01-23 09:16:58.486040752 +0000 UTC m=+1.299816626 container remove da85f8b48014b24099bd474f1e23737ce40fe802dd6fd2086e0df4b28248158c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:16:58 np0005593232 systemd[1]: libpod-conmon-da85f8b48014b24099bd474f1e23737ce40fe802dd6fd2086e0df4b28248158c.scope: Deactivated successfully.
Jan 23 04:16:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:16:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:16:58.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:16:59 np0005593232 podman[188486]: 2026-01-23 09:16:59.037490646 +0000 UTC m=+0.039981248 container create 277d889a9c5490b79a4066c874e427af6a7583572ebe8b53b96a9e357343325b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cartwright, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 04:16:59 np0005593232 systemd[1]: Started libpod-conmon-277d889a9c5490b79a4066c874e427af6a7583572ebe8b53b96a9e357343325b.scope.
Jan 23 04:16:59 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:16:59 np0005593232 podman[188486]: 2026-01-23 09:16:59.099819447 +0000 UTC m=+0.102310049 container init 277d889a9c5490b79a4066c874e427af6a7583572ebe8b53b96a9e357343325b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 23 04:16:59 np0005593232 podman[188486]: 2026-01-23 09:16:59.106080523 +0000 UTC m=+0.108571145 container start 277d889a9c5490b79a4066c874e427af6a7583572ebe8b53b96a9e357343325b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 23 04:16:59 np0005593232 charming_cartwright[188502]: 167 167
Jan 23 04:16:59 np0005593232 systemd[1]: libpod-277d889a9c5490b79a4066c874e427af6a7583572ebe8b53b96a9e357343325b.scope: Deactivated successfully.
Jan 23 04:16:59 np0005593232 podman[188486]: 2026-01-23 09:16:59.110596487 +0000 UTC m=+0.113087099 container attach 277d889a9c5490b79a4066c874e427af6a7583572ebe8b53b96a9e357343325b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cartwright, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:16:59 np0005593232 conmon[188502]: conmon 277d889a9c5490b79a40 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-277d889a9c5490b79a4066c874e427af6a7583572ebe8b53b96a9e357343325b.scope/container/memory.events
Jan 23 04:16:59 np0005593232 podman[188486]: 2026-01-23 09:16:59.11238108 +0000 UTC m=+0.114871802 container died 277d889a9c5490b79a4066c874e427af6a7583572ebe8b53b96a9e357343325b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:16:59 np0005593232 podman[188486]: 2026-01-23 09:16:59.01877034 +0000 UTC m=+0.021260942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:16:59 np0005593232 systemd[1]: var-lib-containers-storage-overlay-9ea2a57a0ee0c45ea8148e11f10d832ab266f33ddcac1aab8c1e14039bfe9777-merged.mount: Deactivated successfully.
Jan 23 04:16:59 np0005593232 podman[188486]: 2026-01-23 09:16:59.154714067 +0000 UTC m=+0.157204659 container remove 277d889a9c5490b79a4066c874e427af6a7583572ebe8b53b96a9e357343325b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cartwright, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 23 04:16:59 np0005593232 systemd[1]: libpod-conmon-277d889a9c5490b79a4066c874e427af6a7583572ebe8b53b96a9e357343325b.scope: Deactivated successfully.
Jan 23 04:16:59 np0005593232 podman[188528]: 2026-01-23 09:16:59.314905323 +0000 UTC m=+0.043378939 container create dac518b0d29dbad73042cb5d39967cfc8fed6cd8d2eaff35a63af929e3f41fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galileo, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:16:59 np0005593232 systemd[1]: Started libpod-conmon-dac518b0d29dbad73042cb5d39967cfc8fed6cd8d2eaff35a63af929e3f41fa5.scope.
Jan 23 04:16:59 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:16:59 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fae13ec9207eeec4d20f7d340a4980c152dbfaa2cc2c38800199f16b5a47446e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:16:59 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fae13ec9207eeec4d20f7d340a4980c152dbfaa2cc2c38800199f16b5a47446e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:16:59 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fae13ec9207eeec4d20f7d340a4980c152dbfaa2cc2c38800199f16b5a47446e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:16:59 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fae13ec9207eeec4d20f7d340a4980c152dbfaa2cc2c38800199f16b5a47446e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:16:59 np0005593232 podman[188528]: 2026-01-23 09:16:59.388071326 +0000 UTC m=+0.116544972 container init dac518b0d29dbad73042cb5d39967cfc8fed6cd8d2eaff35a63af929e3f41fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:16:59 np0005593232 podman[188528]: 2026-01-23 09:16:59.296391554 +0000 UTC m=+0.024865220 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:16:59 np0005593232 podman[188528]: 2026-01-23 09:16:59.394810886 +0000 UTC m=+0.123284502 container start dac518b0d29dbad73042cb5d39967cfc8fed6cd8d2eaff35a63af929e3f41fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galileo, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:16:59 np0005593232 podman[188528]: 2026-01-23 09:16:59.416061347 +0000 UTC m=+0.144534983 container attach dac518b0d29dbad73042cb5d39967cfc8fed6cd8d2eaff35a63af929e3f41fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 23 04:16:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:16:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:16:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:16:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:16:59.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]: {
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:    "0": [
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:        {
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:            "devices": [
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:                "/dev/loop3"
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:            ],
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:            "lv_name": "ceph_lv0",
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:            "lv_size": "7511998464",
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:            "name": "ceph_lv0",
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:            "tags": {
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:                "ceph.cluster_name": "ceph",
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:                "ceph.crush_device_class": "",
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:                "ceph.encrypted": "0",
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:                "ceph.osd_id": "0",
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:                "ceph.type": "block",
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:                "ceph.vdo": "0"
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:            },
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:            "type": "block",
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:            "vg_name": "ceph_vg0"
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:        }
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]:    ]
Jan 23 04:17:00 np0005593232 fervent_galileo[188544]: }
Jan 23 04:17:00 np0005593232 systemd[1]: libpod-dac518b0d29dbad73042cb5d39967cfc8fed6cd8d2eaff35a63af929e3f41fa5.scope: Deactivated successfully.
Jan 23 04:17:00 np0005593232 podman[188528]: 2026-01-23 09:17:00.207837696 +0000 UTC m=+0.936311342 container died dac518b0d29dbad73042cb5d39967cfc8fed6cd8d2eaff35a63af929e3f41fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:17:00 np0005593232 systemd[1]: var-lib-containers-storage-overlay-fae13ec9207eeec4d20f7d340a4980c152dbfaa2cc2c38800199f16b5a47446e-merged.mount: Deactivated successfully.
Jan 23 04:17:00 np0005593232 podman[188528]: 2026-01-23 09:17:00.259764227 +0000 UTC m=+0.988237843 container remove dac518b0d29dbad73042cb5d39967cfc8fed6cd8d2eaff35a63af929e3f41fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_galileo, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 23 04:17:00 np0005593232 systemd[1]: libpod-conmon-dac518b0d29dbad73042cb5d39967cfc8fed6cd8d2eaff35a63af929e3f41fa5.scope: Deactivated successfully.
Jan 23 04:17:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v648: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:17:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:00.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:17:00 np0005593232 podman[188707]: 2026-01-23 09:17:00.876089898 +0000 UTC m=+0.043878184 container create aa7d5928590d076e8f7347627f1a9cef64913ec86411c338188db0819b7e23a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_cohen, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:17:00 np0005593232 systemd[1]: Started libpod-conmon-aa7d5928590d076e8f7347627f1a9cef64913ec86411c338188db0819b7e23a8.scope.
Jan 23 04:17:00 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:17:00 np0005593232 podman[188707]: 2026-01-23 09:17:00.859568667 +0000 UTC m=+0.027356983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:17:00 np0005593232 podman[188707]: 2026-01-23 09:17:00.962221175 +0000 UTC m=+0.130009461 container init aa7d5928590d076e8f7347627f1a9cef64913ec86411c338188db0819b7e23a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 04:17:00 np0005593232 podman[188707]: 2026-01-23 09:17:00.97013291 +0000 UTC m=+0.137921196 container start aa7d5928590d076e8f7347627f1a9cef64913ec86411c338188db0819b7e23a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_cohen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 04:17:00 np0005593232 podman[188707]: 2026-01-23 09:17:00.974018215 +0000 UTC m=+0.141806551 container attach aa7d5928590d076e8f7347627f1a9cef64913ec86411c338188db0819b7e23a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 23 04:17:00 np0005593232 dazzling_cohen[188724]: 167 167
Jan 23 04:17:00 np0005593232 systemd[1]: libpod-aa7d5928590d076e8f7347627f1a9cef64913ec86411c338188db0819b7e23a8.scope: Deactivated successfully.
Jan 23 04:17:00 np0005593232 podman[188707]: 2026-01-23 09:17:00.975957073 +0000 UTC m=+0.143745379 container died aa7d5928590d076e8f7347627f1a9cef64913ec86411c338188db0819b7e23a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 04:17:00 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7ad3198b5a69a85035084a727d7469cf5d6d920b1d2413547df437fe1a445a5b-merged.mount: Deactivated successfully.
Jan 23 04:17:01 np0005593232 podman[188707]: 2026-01-23 09:17:01.014076345 +0000 UTC m=+0.181864631 container remove aa7d5928590d076e8f7347627f1a9cef64913ec86411c338188db0819b7e23a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_cohen, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 04:17:01 np0005593232 systemd[1]: libpod-conmon-aa7d5928590d076e8f7347627f1a9cef64913ec86411c338188db0819b7e23a8.scope: Deactivated successfully.
Jan 23 04:17:01 np0005593232 podman[188748]: 2026-01-23 09:17:01.171147958 +0000 UTC m=+0.039553045 container create 3ceee6a010d79ad6092c98a7ee92f86f900f07ab772ae9f7850fa3a37fac17f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_noyce, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:17:01 np0005593232 systemd[1]: Started libpod-conmon-3ceee6a010d79ad6092c98a7ee92f86f900f07ab772ae9f7850fa3a37fac17f3.scope.
Jan 23 04:17:01 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:17:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ba9128550429c63b03004f3e7b82a9899ce88b277bf3679e7406feec766329/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:17:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ba9128550429c63b03004f3e7b82a9899ce88b277bf3679e7406feec766329/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:17:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ba9128550429c63b03004f3e7b82a9899ce88b277bf3679e7406feec766329/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:17:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2ba9128550429c63b03004f3e7b82a9899ce88b277bf3679e7406feec766329/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:17:01 np0005593232 podman[188748]: 2026-01-23 09:17:01.247376452 +0000 UTC m=+0.115781559 container init 3ceee6a010d79ad6092c98a7ee92f86f900f07ab772ae9f7850fa3a37fac17f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_noyce, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:17:01 np0005593232 podman[188748]: 2026-01-23 09:17:01.155576976 +0000 UTC m=+0.023982063 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:17:01 np0005593232 podman[188748]: 2026-01-23 09:17:01.25372465 +0000 UTC m=+0.122129727 container start 3ceee6a010d79ad6092c98a7ee92f86f900f07ab772ae9f7850fa3a37fac17f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:17:01 np0005593232 podman[188748]: 2026-01-23 09:17:01.257240145 +0000 UTC m=+0.125645222 container attach 3ceee6a010d79ad6092c98a7ee92f86f900f07ab772ae9f7850fa3a37fac17f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_noyce, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 04:17:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:01.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:02 np0005593232 zen_noyce[188765]: {
Jan 23 04:17:02 np0005593232 zen_noyce[188765]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:17:02 np0005593232 zen_noyce[188765]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:17:02 np0005593232 zen_noyce[188765]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:17:02 np0005593232 zen_noyce[188765]:        "osd_id": 0,
Jan 23 04:17:02 np0005593232 zen_noyce[188765]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:17:02 np0005593232 zen_noyce[188765]:        "type": "bluestore"
Jan 23 04:17:02 np0005593232 zen_noyce[188765]:    }
Jan 23 04:17:02 np0005593232 zen_noyce[188765]: }
Jan 23 04:17:02 np0005593232 systemd[1]: libpod-3ceee6a010d79ad6092c98a7ee92f86f900f07ab772ae9f7850fa3a37fac17f3.scope: Deactivated successfully.
Jan 23 04:17:02 np0005593232 podman[188748]: 2026-01-23 09:17:02.133185734 +0000 UTC m=+1.001590821 container died 3ceee6a010d79ad6092c98a7ee92f86f900f07ab772ae9f7850fa3a37fac17f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_noyce, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Jan 23 04:17:02 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c2ba9128550429c63b03004f3e7b82a9899ce88b277bf3679e7406feec766329-merged.mount: Deactivated successfully.
Jan 23 04:17:02 np0005593232 podman[188748]: 2026-01-23 09:17:02.1856181 +0000 UTC m=+1.054023187 container remove 3ceee6a010d79ad6092c98a7ee92f86f900f07ab772ae9f7850fa3a37fac17f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_noyce, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 04:17:02 np0005593232 systemd[1]: libpod-conmon-3ceee6a010d79ad6092c98a7ee92f86f900f07ab772ae9f7850fa3a37fac17f3.scope: Deactivated successfully.
Jan 23 04:17:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:17:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:17:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:17:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:17:02 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b75396d2-75ef-499f-8fae-c028338a2539 does not exist
Jan 23 04:17:02 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev efd1d1ba-dfe3-409c-a610-53fd22361b1c does not exist
Jan 23 04:17:02 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 5a25e896-7e6d-420e-8a32-80852b13b5a3 does not exist
Jan 23 04:17:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v649: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:02.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:17:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:17:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:03.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v650: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:04.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:17:05 np0005593232 kernel: SELinux:  Converting 2777 SID table entries...
Jan 23 04:17:05 np0005593232 kernel: SELinux:  policy capability network_peer_controls=1
Jan 23 04:17:05 np0005593232 kernel: SELinux:  policy capability open_perms=1
Jan 23 04:17:05 np0005593232 kernel: SELinux:  policy capability extended_socket_class=1
Jan 23 04:17:05 np0005593232 kernel: SELinux:  policy capability always_check_network=0
Jan 23 04:17:05 np0005593232 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 23 04:17:05 np0005593232 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 23 04:17:05 np0005593232 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 23 04:17:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:05.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v651: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:06 np0005593232 dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Jan 23 04:17:06 np0005593232 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 23 04:17:06 np0005593232 dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Jan 23 04:17:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:17:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:06.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:17:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:17:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:17:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:17:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:17:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:17:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:17:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:17:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:07.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:17:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v652: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:08.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:09.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:17:09 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 23 04:17:09 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:17:09.970089) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:17:09 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 23 04:17:09 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159829970256, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 896, "num_deletes": 250, "total_data_size": 1443719, "memory_usage": 1475464, "flush_reason": "Manual Compaction"}
Jan 23 04:17:09 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 23 04:17:09 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159829986040, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 887777, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14763, "largest_seqno": 15658, "table_properties": {"data_size": 884206, "index_size": 1351, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9135, "raw_average_key_size": 19, "raw_value_size": 876562, "raw_average_value_size": 1913, "num_data_blocks": 62, "num_entries": 458, "num_filter_entries": 458, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769159750, "oldest_key_time": 1769159750, "file_creation_time": 1769159829, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:17:09 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 16046 microseconds, and 4717 cpu microseconds.
Jan 23 04:17:09 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:17:09 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:17:09.986170) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 887777 bytes OK
Jan 23 04:17:09 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:17:09.986219) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 23 04:17:09 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:17:09.988336) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 23 04:17:09 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:17:09.988370) EVENT_LOG_v1 {"time_micros": 1769159829988358, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 04:17:09 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:17:09.988392) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 04:17:09 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1439501, prev total WAL file size 1439501, number of live WAL files 2.
Jan 23 04:17:09 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:17:09 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:17:09.989397) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323534' seq:72057594037927935, type:22 .. '6D67727374617400353035' seq:0, type:0; will stop at (end)
Jan 23 04:17:09 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 04:17:09 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(866KB)], [32(10074KB)]
Jan 23 04:17:09 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159829989530, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 11203713, "oldest_snapshot_seqno": -1}
Jan 23 04:17:10 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 4163 keys, 7889462 bytes, temperature: kUnknown
Jan 23 04:17:10 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159830035845, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7889462, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7860343, "index_size": 17617, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10437, "raw_key_size": 103112, "raw_average_key_size": 24, "raw_value_size": 7783653, "raw_average_value_size": 1869, "num_data_blocks": 741, "num_entries": 4163, "num_filter_entries": 4163, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769159829, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:17:10 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:17:10 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:17:10.036123) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7889462 bytes
Jan 23 04:17:10 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:17:10.059190) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 241.4 rd, 170.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.8 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(21.5) write-amplify(8.9) OK, records in: 4642, records dropped: 479 output_compression: NoCompression
Jan 23 04:17:10 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:17:10.059244) EVENT_LOG_v1 {"time_micros": 1769159830059222, "job": 14, "event": "compaction_finished", "compaction_time_micros": 46419, "compaction_time_cpu_micros": 20178, "output_level": 6, "num_output_files": 1, "total_output_size": 7889462, "num_input_records": 4642, "num_output_records": 4163, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:17:10 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:17:10 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159830059555, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 23 04:17:10 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:17:10 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159830061230, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 23 04:17:10 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:17:09.989190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:17:10 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:17:10.061301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:17:10 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:17:10.061309) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:17:10 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:17:10.061311) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:17:10 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:17:10.061313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:17:10 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:17:10.061315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:17:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v653: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:17:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:10.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:17:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:11.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v654: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:12.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:13 np0005593232 podman[189163]: 2026-01-23 09:17:13.152010918 +0000 UTC m=+0.063644834 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:17:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:13.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:14 np0005593232 systemd[1]: Stopping OpenSSH server daemon...
Jan 23 04:17:14 np0005593232 systemd[1]: sshd.service: Deactivated successfully.
Jan 23 04:17:14 np0005593232 systemd[1]: Stopped OpenSSH server daemon.
Jan 23 04:17:14 np0005593232 systemd[1]: sshd.service: Consumed 4.960s CPU time, read 32.0K from disk, written 36.0K to disk.
Jan 23 04:17:14 np0005593232 systemd[1]: Stopped target sshd-keygen.target.
Jan 23 04:17:14 np0005593232 systemd[1]: Stopping sshd-keygen.target...
Jan 23 04:17:14 np0005593232 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 23 04:17:14 np0005593232 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 23 04:17:14 np0005593232 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 23 04:17:14 np0005593232 systemd[1]: Reached target sshd-keygen.target.
Jan 23 04:17:14 np0005593232 systemd[1]: Starting OpenSSH server daemon...
Jan 23 04:17:14 np0005593232 systemd[1]: Started OpenSSH server daemon.
Jan 23 04:17:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v655: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:14.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:17:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:15.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:16 np0005593232 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 04:17:16 np0005593232 systemd[1]: Starting man-db-cache-update.service...
Jan 23 04:17:16 np0005593232 systemd[1]: Reloading.
Jan 23 04:17:16 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:17:16 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:17:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v656: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:16 np0005593232 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 04:17:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:16.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:17.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v657: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:18.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:19.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:17:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v658: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:17:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:20.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:17:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:21.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v659: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:22 np0005593232 podman[196335]: 2026-01-23 09:17:22.494901741 +0000 UTC m=+0.149105708 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 04:17:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:22.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:23.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v660: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:24.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:17:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:25.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:26 np0005593232 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 04:17:26 np0005593232 systemd[1]: Finished man-db-cache-update.service.
Jan 23 04:17:26 np0005593232 systemd[1]: man-db-cache-update.service: Consumed 10.770s CPU time.
Jan 23 04:17:26 np0005593232 systemd[1]: run-r04785e489b1e48138eb2ad832ba2ae3c.service: Deactivated successfully.
Jan 23 04:17:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v661: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:26.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:27.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v662: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:17:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:28.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:17:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:17:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:29.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v663: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:30.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:17:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:31.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:17:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v664: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:32.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:33.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v665: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:17:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:34.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:17:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:17:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:17:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:35.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:17:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v666: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:36.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:17:37
Jan 23 04:17:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:17:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:17:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'vms', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta']
Jan 23 04:17:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:17:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:17:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:17:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:17:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:17:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:17:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:17:37 np0005593232 python3.9[198676]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 04:17:37 np0005593232 systemd[1]: Reloading.
Jan 23 04:17:37 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:17:37 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:17:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:17:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:17:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:17:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:17:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:17:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:17:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:17:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:17:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:17:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:17:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:37.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v667: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:38 np0005593232 python3.9[198866]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 04:17:38 np0005593232 systemd[1]: Reloading.
Jan 23 04:17:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:38.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:38 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:17:38 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:17:39 np0005593232 python3.9[199057]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 04:17:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:17:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:39.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:40 np0005593232 systemd[1]: Reloading.
Jan 23 04:17:40 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:17:40 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:17:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v668: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:40.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:41 np0005593232 python3.9[199248]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 04:17:41 np0005593232 systemd[1]: Reloading.
Jan 23 04:17:41 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:17:41 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:17:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:41.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v669: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:17:42.569 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:17:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:17:42.571 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:17:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:17:42.571 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:17:42 np0005593232 python3.9[199437]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 04:17:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:42.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:42 np0005593232 systemd[1]: Reloading.
Jan 23 04:17:43 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:17:43 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:17:43 np0005593232 podman[199495]: 2026-01-23 09:17:43.380950969 +0000 UTC m=+0.070728917 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 23 04:17:43 np0005593232 python3.9[199694]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 04:17:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:43.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:44 np0005593232 systemd[1]: Reloading.
Jan 23 04:17:44 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:17:44 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:17:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v670: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:17:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:44.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:17:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:17:45 np0005593232 python3.9[199885]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 04:17:45 np0005593232 systemd[1]: Reloading.
Jan 23 04:17:45 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:17:45 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:17:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:46.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:17:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:17:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:17:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:17:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:17:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:17:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:17:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:17:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:17:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:17:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:17:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:17:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:17:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:17:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:17:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:17:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:17:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:17:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:17:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:17:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:17:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:17:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:17:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v671: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:46 np0005593232 python3.9[200075]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 04:17:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:17:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:46.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:17:47 np0005593232 python3.9[200231]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 04:17:47 np0005593232 systemd[1]: Reloading.
Jan 23 04:17:47 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:17:47 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:17:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:48.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v672: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:17:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:48.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:17:49 np0005593232 python3.9[200420]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 23 04:17:49 np0005593232 systemd[1]: Reloading.
Jan 23 04:17:49 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:17:49 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:17:49 np0005593232 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 23 04:17:49 np0005593232 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 23 04:17:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:17:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:17:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:50.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:17:50 np0005593232 python3.9[200614]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 04:17:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v673: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:50.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:51 np0005593232 python3.9[200770]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 04:17:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:52.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:52 np0005593232 python3.9[200925]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 04:17:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v674: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:52 np0005593232 podman[201052]: 2026-01-23 09:17:52.689131784 +0000 UTC m=+0.105107907 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 23 04:17:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:52.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:52 np0005593232 python3.9[201097]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 04:17:53 np0005593232 python3.9[201259]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 04:17:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:54.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v675: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:54 np0005593232 python3.9[201414]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 04:17:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:54.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:17:55 np0005593232 python3.9[201570]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 04:17:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:17:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:56.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:17:56 np0005593232 python3.9[201725]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 04:17:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v676: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:56.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:56 np0005593232 python3.9[201880]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 04:17:57 np0005593232 python3.9[202036]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 04:17:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:17:58.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v677: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:17:58 np0005593232 python3.9[202191]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 04:17:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:17:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:17:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:17:58.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:17:59 np0005593232 python3.9[202347]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 04:17:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:18:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:00.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:00 np0005593232 python3.9[202502]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 04:18:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v678: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:00 np0005593232 python3.9[202657]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 23 04:18:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:18:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:00.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:18:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:02.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v679: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:02.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:03 np0005593232 python3.9[202914]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:18:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:18:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:18:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:18:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:18:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:18:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:18:03 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 56773ff9-ba06-421c-b636-3aa93c9dba99 does not exist
Jan 23 04:18:03 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 005426d3-65bf-40fe-a6a2-231ef86c05a7 does not exist
Jan 23 04:18:03 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 437b2fdd-6f6d-47d1-84e3-eeef22ec04f8 does not exist
Jan 23 04:18:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:18:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:18:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:18:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:18:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:18:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:18:04 np0005593232 python3.9[203147]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:18:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:18:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:04.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:18:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v680: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:04 np0005593232 podman[203438]: 2026-01-23 09:18:04.642528803 +0000 UTC m=+0.046976226 container create 5f4ecbeb10ebcf8844283eb69d13d99384584d449ad1d8c037bd41e6a1bb0dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:18:04 np0005593232 systemd[1]: Started libpod-conmon-5f4ecbeb10ebcf8844283eb69d13d99384584d449ad1d8c037bd41e6a1bb0dbd.scope.
Jan 23 04:18:04 np0005593232 python3.9[203410]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:18:04 np0005593232 podman[203438]: 2026-01-23 09:18:04.623254689 +0000 UTC m=+0.027702122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:18:04 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:18:04 np0005593232 podman[203438]: 2026-01-23 09:18:04.742486764 +0000 UTC m=+0.146934197 container init 5f4ecbeb10ebcf8844283eb69d13d99384584d449ad1d8c037bd41e6a1bb0dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ptolemy, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:18:04 np0005593232 podman[203438]: 2026-01-23 09:18:04.751773956 +0000 UTC m=+0.156221379 container start 5f4ecbeb10ebcf8844283eb69d13d99384584d449ad1d8c037bd41e6a1bb0dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ptolemy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:18:04 np0005593232 podman[203438]: 2026-01-23 09:18:04.756134749 +0000 UTC m=+0.160582192 container attach 5f4ecbeb10ebcf8844283eb69d13d99384584d449ad1d8c037bd41e6a1bb0dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 23 04:18:04 np0005593232 hopeful_ptolemy[203455]: 167 167
Jan 23 04:18:04 np0005593232 systemd[1]: libpod-5f4ecbeb10ebcf8844283eb69d13d99384584d449ad1d8c037bd41e6a1bb0dbd.scope: Deactivated successfully.
Jan 23 04:18:04 np0005593232 podman[203438]: 2026-01-23 09:18:04.761379827 +0000 UTC m=+0.165827280 container died 5f4ecbeb10ebcf8844283eb69d13d99384584d449ad1d8c037bd41e6a1bb0dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ptolemy, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 04:18:04 np0005593232 systemd[1]: var-lib-containers-storage-overlay-650eef714a223aff7f311e3efe00b1d5534e2ca6bc2882b2db2f899d6d613461-merged.mount: Deactivated successfully.
Jan 23 04:18:04 np0005593232 podman[203438]: 2026-01-23 09:18:04.812431477 +0000 UTC m=+0.216878890 container remove 5f4ecbeb10ebcf8844283eb69d13d99384584d449ad1d8c037bd41e6a1bb0dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_ptolemy, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:18:04 np0005593232 systemd[1]: libpod-conmon-5f4ecbeb10ebcf8844283eb69d13d99384584d449ad1d8c037bd41e6a1bb0dbd.scope: Deactivated successfully.
Jan 23 04:18:04 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:18:04 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:18:04 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:18:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:04.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:04 np0005593232 podman[203526]: 2026-01-23 09:18:04.963550911 +0000 UTC m=+0.039842445 container create 287e4ec55b6296f38f46f108b8251d250cb9dfd50462bba7cc31764092fa9752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lumiere, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 23 04:18:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:18:05 np0005593232 systemd[1]: Started libpod-conmon-287e4ec55b6296f38f46f108b8251d250cb9dfd50462bba7cc31764092fa9752.scope.
Jan 23 04:18:05 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:18:05 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9acd49d67b5234be3e229673f7263ec51ada8e6f716581a29a31ea7f8202de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:18:05 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9acd49d67b5234be3e229673f7263ec51ada8e6f716581a29a31ea7f8202de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:18:05 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9acd49d67b5234be3e229673f7263ec51ada8e6f716581a29a31ea7f8202de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:18:05 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9acd49d67b5234be3e229673f7263ec51ada8e6f716581a29a31ea7f8202de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:18:05 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd9acd49d67b5234be3e229673f7263ec51ada8e6f716581a29a31ea7f8202de/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:18:05 np0005593232 podman[203526]: 2026-01-23 09:18:04.947458497 +0000 UTC m=+0.023750051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:18:05 np0005593232 podman[203526]: 2026-01-23 09:18:05.050208076 +0000 UTC m=+0.126499640 container init 287e4ec55b6296f38f46f108b8251d250cb9dfd50462bba7cc31764092fa9752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:18:05 np0005593232 podman[203526]: 2026-01-23 09:18:05.059151319 +0000 UTC m=+0.135442853 container start 287e4ec55b6296f38f46f108b8251d250cb9dfd50462bba7cc31764092fa9752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lumiere, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 04:18:05 np0005593232 podman[203526]: 2026-01-23 09:18:05.062213835 +0000 UTC m=+0.138505369 container attach 287e4ec55b6296f38f46f108b8251d250cb9dfd50462bba7cc31764092fa9752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lumiere, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 04:18:05 np0005593232 python3.9[203651]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:18:05 np0005593232 objective_lumiere[203588]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:18:05 np0005593232 objective_lumiere[203588]: --> relative data size: 1.0
Jan 23 04:18:05 np0005593232 objective_lumiere[203588]: --> All data devices are unavailable
Jan 23 04:18:06 np0005593232 systemd[1]: libpod-287e4ec55b6296f38f46f108b8251d250cb9dfd50462bba7cc31764092fa9752.scope: Deactivated successfully.
Jan 23 04:18:06 np0005593232 podman[203526]: 2026-01-23 09:18:06.003460273 +0000 UTC m=+1.079751817 container died 287e4ec55b6296f38f46f108b8251d250cb9dfd50462bba7cc31764092fa9752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lumiere, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:18:06 np0005593232 systemd[1]: var-lib-containers-storage-overlay-bd9acd49d67b5234be3e229673f7263ec51ada8e6f716581a29a31ea7f8202de-merged.mount: Deactivated successfully.
Jan 23 04:18:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:06.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:06 np0005593232 python3.9[203807]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:18:06 np0005593232 podman[203526]: 2026-01-23 09:18:06.053833474 +0000 UTC m=+1.130125008 container remove 287e4ec55b6296f38f46f108b8251d250cb9dfd50462bba7cc31764092fa9752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_lumiere, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Jan 23 04:18:06 np0005593232 systemd[1]: libpod-conmon-287e4ec55b6296f38f46f108b8251d250cb9dfd50462bba7cc31764092fa9752.scope: Deactivated successfully.
Jan 23 04:18:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v681: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:06 np0005593232 podman[204119]: 2026-01-23 09:18:06.632686017 +0000 UTC m=+0.047466280 container create 283918558b46dc2d0f4fc449a2aec1a922f2f3d8a68a2afb46d6e381679d583c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:18:06 np0005593232 python3.9[204090]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:18:06 np0005593232 systemd[1]: Started libpod-conmon-283918558b46dc2d0f4fc449a2aec1a922f2f3d8a68a2afb46d6e381679d583c.scope.
Jan 23 04:18:06 np0005593232 podman[204119]: 2026-01-23 09:18:06.606127518 +0000 UTC m=+0.020907801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:18:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:18:06 np0005593232 podman[204119]: 2026-01-23 09:18:06.721674098 +0000 UTC m=+0.136454381 container init 283918558b46dc2d0f4fc449a2aec1a922f2f3d8a68a2afb46d6e381679d583c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_kapitsa, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 04:18:06 np0005593232 podman[204119]: 2026-01-23 09:18:06.728062208 +0000 UTC m=+0.142842471 container start 283918558b46dc2d0f4fc449a2aec1a922f2f3d8a68a2afb46d6e381679d583c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 04:18:06 np0005593232 podman[204119]: 2026-01-23 09:18:06.73165326 +0000 UTC m=+0.146433523 container attach 283918558b46dc2d0f4fc449a2aec1a922f2f3d8a68a2afb46d6e381679d583c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_kapitsa, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 04:18:06 np0005593232 reverent_kapitsa[204136]: 167 167
Jan 23 04:18:06 np0005593232 systemd[1]: libpod-283918558b46dc2d0f4fc449a2aec1a922f2f3d8a68a2afb46d6e381679d583c.scope: Deactivated successfully.
Jan 23 04:18:06 np0005593232 podman[204119]: 2026-01-23 09:18:06.73380227 +0000 UTC m=+0.148582523 container died 283918558b46dc2d0f4fc449a2aec1a922f2f3d8a68a2afb46d6e381679d583c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_kapitsa, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 04:18:06 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a26d954c310bf2ca6beab8c390328a7cb61b81c3795589332c3a0f7f6c722364-merged.mount: Deactivated successfully.
Jan 23 04:18:06 np0005593232 podman[204119]: 2026-01-23 09:18:06.767921363 +0000 UTC m=+0.182701656 container remove 283918558b46dc2d0f4fc449a2aec1a922f2f3d8a68a2afb46d6e381679d583c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_kapitsa, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 04:18:06 np0005593232 systemd[1]: libpod-conmon-283918558b46dc2d0f4fc449a2aec1a922f2f3d8a68a2afb46d6e381679d583c.scope: Deactivated successfully.
Jan 23 04:18:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:06.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:06 np0005593232 podman[204209]: 2026-01-23 09:18:06.953440067 +0000 UTC m=+0.043358324 container create 67d1babb6de4d79fcd08b054d51d38c6e526aa5445ac86222518e6f3886da6d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elgamal, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 04:18:06 np0005593232 systemd[1]: Started libpod-conmon-67d1babb6de4d79fcd08b054d51d38c6e526aa5445ac86222518e6f3886da6d6.scope.
Jan 23 04:18:07 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:18:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ca65d2add32f67f490287a86d6a0a9200344e7d2f164795576d46f76734597e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:18:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ca65d2add32f67f490287a86d6a0a9200344e7d2f164795576d46f76734597e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:18:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ca65d2add32f67f490287a86d6a0a9200344e7d2f164795576d46f76734597e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:18:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ca65d2add32f67f490287a86d6a0a9200344e7d2f164795576d46f76734597e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:18:07 np0005593232 podman[204209]: 2026-01-23 09:18:06.936084408 +0000 UTC m=+0.026002695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:18:07 np0005593232 podman[204209]: 2026-01-23 09:18:07.037374856 +0000 UTC m=+0.127293133 container init 67d1babb6de4d79fcd08b054d51d38c6e526aa5445ac86222518e6f3886da6d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elgamal, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:18:07 np0005593232 podman[204209]: 2026-01-23 09:18:07.044648541 +0000 UTC m=+0.134566788 container start 67d1babb6de4d79fcd08b054d51d38c6e526aa5445ac86222518e6f3886da6d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 04:18:07 np0005593232 podman[204209]: 2026-01-23 09:18:07.047834501 +0000 UTC m=+0.137752778 container attach 67d1babb6de4d79fcd08b054d51d38c6e526aa5445ac86222518e6f3886da6d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elgamal, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:18:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:18:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:18:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:18:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:18:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:18:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:18:07 np0005593232 python3.9[204332]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]: {
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:    "0": [
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:        {
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:            "devices": [
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:                "/dev/loop3"
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:            ],
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:            "lv_name": "ceph_lv0",
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:            "lv_size": "7511998464",
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:            "name": "ceph_lv0",
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:            "tags": {
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:                "ceph.cluster_name": "ceph",
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:                "ceph.crush_device_class": "",
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:                "ceph.encrypted": "0",
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:                "ceph.osd_id": "0",
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:                "ceph.type": "block",
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:                "ceph.vdo": "0"
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:            },
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:            "type": "block",
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:            "vg_name": "ceph_vg0"
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:        }
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]:    ]
Jan 23 04:18:07 np0005593232 stupefied_elgamal[204260]: }
Jan 23 04:18:07 np0005593232 systemd[1]: libpod-67d1babb6de4d79fcd08b054d51d38c6e526aa5445ac86222518e6f3886da6d6.scope: Deactivated successfully.
Jan 23 04:18:07 np0005593232 podman[204209]: 2026-01-23 09:18:07.824987598 +0000 UTC m=+0.914905875 container died 67d1babb6de4d79fcd08b054d51d38c6e526aa5445ac86222518e6f3886da6d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 23 04:18:07 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4ca65d2add32f67f490287a86d6a0a9200344e7d2f164795576d46f76734597e-merged.mount: Deactivated successfully.
Jan 23 04:18:07 np0005593232 podman[204209]: 2026-01-23 09:18:07.882994874 +0000 UTC m=+0.972913131 container remove 67d1babb6de4d79fcd08b054d51d38c6e526aa5445ac86222518e6f3886da6d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 04:18:07 np0005593232 systemd[1]: libpod-conmon-67d1babb6de4d79fcd08b054d51d38c6e526aa5445ac86222518e6f3886da6d6.scope: Deactivated successfully.
Jan 23 04:18:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:08.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v682: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:08 np0005593232 podman[204642]: 2026-01-23 09:18:08.535602408 +0000 UTC m=+0.038685912 container create ac550cbe089fc8adf7e45b3f71f8e3d69e451dcd2300c9662aefba3476725baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:18:08 np0005593232 python3.9[204607]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:18:08 np0005593232 systemd[1]: Started libpod-conmon-ac550cbe089fc8adf7e45b3f71f8e3d69e451dcd2300c9662aefba3476725baa.scope.
Jan 23 04:18:08 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:18:08 np0005593232 podman[204642]: 2026-01-23 09:18:08.519825573 +0000 UTC m=+0.022909077 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:18:08 np0005593232 podman[204642]: 2026-01-23 09:18:08.6260296 +0000 UTC m=+0.129113124 container init ac550cbe089fc8adf7e45b3f71f8e3d69e451dcd2300c9662aefba3476725baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 04:18:08 np0005593232 podman[204642]: 2026-01-23 09:18:08.633514501 +0000 UTC m=+0.136598005 container start ac550cbe089fc8adf7e45b3f71f8e3d69e451dcd2300c9662aefba3476725baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:18:08 np0005593232 podman[204642]: 2026-01-23 09:18:08.636885516 +0000 UTC m=+0.139969010 container attach ac550cbe089fc8adf7e45b3f71f8e3d69e451dcd2300c9662aefba3476725baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:18:08 np0005593232 determined_aryabhata[204660]: 167 167
Jan 23 04:18:08 np0005593232 systemd[1]: libpod-ac550cbe089fc8adf7e45b3f71f8e3d69e451dcd2300c9662aefba3476725baa.scope: Deactivated successfully.
Jan 23 04:18:08 np0005593232 podman[204642]: 2026-01-23 09:18:08.639746197 +0000 UTC m=+0.142829701 container died ac550cbe089fc8adf7e45b3f71f8e3d69e451dcd2300c9662aefba3476725baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:18:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay-3e579cebd26c16397f0fbb9f2c999cc42eefb5184c5fd51803dc1c4fd88e424c-merged.mount: Deactivated successfully.
Jan 23 04:18:08 np0005593232 podman[204642]: 2026-01-23 09:18:08.671390799 +0000 UTC m=+0.174474303 container remove ac550cbe089fc8adf7e45b3f71f8e3d69e451dcd2300c9662aefba3476725baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_aryabhata, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 04:18:08 np0005593232 systemd[1]: libpod-conmon-ac550cbe089fc8adf7e45b3f71f8e3d69e451dcd2300c9662aefba3476725baa.scope: Deactivated successfully.
Jan 23 04:18:08 np0005593232 podman[204731]: 2026-01-23 09:18:08.841251292 +0000 UTC m=+0.051469753 container create 88b5af9f450f81871b7650d5419f36070a51ce2bdb43ff87803fca564adb02c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Jan 23 04:18:08 np0005593232 systemd[1]: Started libpod-conmon-88b5af9f450f81871b7650d5419f36070a51ce2bdb43ff87803fca564adb02c5.scope.
Jan 23 04:18:08 np0005593232 podman[204731]: 2026-01-23 09:18:08.816661468 +0000 UTC m=+0.026879989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:18:08 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:18:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fede11719da23d3ee90e0f3733abe6a780662bbb59c26ab288f1c428f83e0ce5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:18:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fede11719da23d3ee90e0f3733abe6a780662bbb59c26ab288f1c428f83e0ce5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:18:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fede11719da23d3ee90e0f3733abe6a780662bbb59c26ab288f1c428f83e0ce5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:18:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fede11719da23d3ee90e0f3733abe6a780662bbb59c26ab288f1c428f83e0ce5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:18:08 np0005593232 podman[204731]: 2026-01-23 09:18:08.93332005 +0000 UTC m=+0.143538491 container init 88b5af9f450f81871b7650d5419f36070a51ce2bdb43ff87803fca564adb02c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_gauss, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:18:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:08.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:08 np0005593232 podman[204731]: 2026-01-23 09:18:08.942998793 +0000 UTC m=+0.153217234 container start 88b5af9f450f81871b7650d5419f36070a51ce2bdb43ff87803fca564adb02c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 04:18:08 np0005593232 podman[204731]: 2026-01-23 09:18:08.947722576 +0000 UTC m=+0.157941007 container attach 88b5af9f450f81871b7650d5419f36070a51ce2bdb43ff87803fca564adb02c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_gauss, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:18:09 np0005593232 python3.9[204828]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769159887.8623126-1646-51819095731641/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:09 np0005593232 beautiful_gauss[204748]: {
Jan 23 04:18:09 np0005593232 beautiful_gauss[204748]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:18:09 np0005593232 beautiful_gauss[204748]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:18:09 np0005593232 beautiful_gauss[204748]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:18:09 np0005593232 beautiful_gauss[204748]:        "osd_id": 0,
Jan 23 04:18:09 np0005593232 beautiful_gauss[204748]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:18:09 np0005593232 beautiful_gauss[204748]:        "type": "bluestore"
Jan 23 04:18:09 np0005593232 beautiful_gauss[204748]:    }
Jan 23 04:18:09 np0005593232 beautiful_gauss[204748]: }
Jan 23 04:18:09 np0005593232 systemd[1]: libpod-88b5af9f450f81871b7650d5419f36070a51ce2bdb43ff87803fca564adb02c5.scope: Deactivated successfully.
Jan 23 04:18:09 np0005593232 podman[204731]: 2026-01-23 09:18:09.772613361 +0000 UTC m=+0.982831822 container died 88b5af9f450f81871b7650d5419f36070a51ce2bdb43ff87803fca564adb02c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_gauss, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 23 04:18:09 np0005593232 systemd[1]: var-lib-containers-storage-overlay-fede11719da23d3ee90e0f3733abe6a780662bbb59c26ab288f1c428f83e0ce5-merged.mount: Deactivated successfully.
Jan 23 04:18:09 np0005593232 podman[204731]: 2026-01-23 09:18:09.82429716 +0000 UTC m=+1.034515621 container remove 88b5af9f450f81871b7650d5419f36070a51ce2bdb43ff87803fca564adb02c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:18:09 np0005593232 systemd[1]: libpod-conmon-88b5af9f450f81871b7650d5419f36070a51ce2bdb43ff87803fca564adb02c5.scope: Deactivated successfully.
Jan 23 04:18:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:18:09 np0005593232 python3.9[204991]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:18:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:18:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:18:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:18:09 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 056b68c8-d40f-422b-9b1a-9018427c38ab does not exist
Jan 23 04:18:09 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1fe7cd27-22d2-4e51-8aa3-73a8281a72f0 does not exist
Jan 23 04:18:09 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a3c9989b-e3cc-43b6-be46-45a9d9d2d924 does not exist
Jan 23 04:18:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:18:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:10.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:10 np0005593232 python3.9[205185]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769159889.433354-1646-194876552554374/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v683: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:10.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:11 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:18:11 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:18:11 np0005593232 python3.9[205338]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:18:11 np0005593232 python3.9[205463]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769159890.5653465-1646-50938358063103/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:12.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:12 np0005593232 python3.9[205615]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:18:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v684: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:12 np0005593232 python3.9[205740]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769159891.8255382-1646-116347021339243/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:12.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:13 np0005593232 python3.9[205893]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:18:13 np0005593232 podman[205990]: 2026-01-23 09:18:13.817017356 +0000 UTC m=+0.063393810 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 04:18:14 np0005593232 python3.9[206034]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769159892.9344904-1646-109924919353722/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:14.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v685: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:14 np0005593232 python3.9[206186]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:18:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:18:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:14.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:18:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:18:15 np0005593232 python3.9[206312]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769159894.146994-1646-118440113181627/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:15 np0005593232 python3.9[206464]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:18:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:16.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:16 np0005593232 python3.9[206587]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769159895.375668-1646-85459000260469/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v686: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:16.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:17 np0005593232 python3.9[206740]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:18:17 np0005593232 python3.9[206865]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769159896.5083323-1646-56765187683744/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000056s ======
Jan 23 04:18:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:18.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Jan 23 04:18:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v687: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:18:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:18.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:18:19 np0005593232 python3.9[207018]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 23 04:18:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:18:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:18:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:20.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:18:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v688: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:20 np0005593232 python3.9[207171]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:20.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:21 np0005593232 python3.9[207324]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:21 np0005593232 python3.9[207476]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:18:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:22.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:18:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v689: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:22 np0005593232 python3.9[207628]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:22 np0005593232 podman[207752]: 2026-01-23 09:18:22.931366023 +0000 UTC m=+0.077709857 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 23 04:18:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:22.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:23 np0005593232 python3.9[207800]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:23 np0005593232 python3.9[207962]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:24.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:24 np0005593232 python3.9[208160]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v690: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:24.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:18:25 np0005593232 python3.9[208313]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:25 np0005593232 python3.9[208465]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:18:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:26.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:18:26 np0005593232 python3.9[208617]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v691: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:26.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:27 np0005593232 python3.9[208769]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:27 np0005593232 python3.9[208922]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:28.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:28 np0005593232 python3.9[209074]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v692: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:28 np0005593232 python3.9[209226]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:18:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:28.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:18:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:18:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:18:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:30.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:18:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v693: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:30 np0005593232 python3.9[209379]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:18:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:30.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:31 np0005593232 python3.9[209503]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159910.1441195-2309-20916832674317/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:31 np0005593232 python3.9[209655]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:18:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:18:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:32.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:18:32 np0005593232 python3.9[209778]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159911.3260977-2309-149971857436626/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v694: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:32 np0005593232 python3.9[209930]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:18:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:32.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:33 np0005593232 python3.9[210054]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159912.4138594-2309-36112225628582/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:34.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:34 np0005593232 python3.9[210206]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:18:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v695: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:34 np0005593232 python3.9[210329]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159913.6652155-2309-58297506409760/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:34.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:18:35 np0005593232 python3.9[210482]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:18:35 np0005593232 python3.9[210605]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159914.9043167-2309-65110160583364/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:36.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:36 np0005593232 python3.9[210757]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:18:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v696: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:36 np0005593232 python3.9[210880]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159916.0000968-2309-260376814222244/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:36.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:18:37
Jan 23 04:18:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:18:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:18:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['.mgr', 'vms', 'backups', 'volumes', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', '.rgw.root']
Jan 23 04:18:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:18:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:18:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:18:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:18:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:18:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:18:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:18:37 np0005593232 python3.9[211033]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:18:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:18:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:18:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:18:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:18:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:18:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:18:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:18:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:18:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:18:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:18:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:18:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:38.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:18:38 np0005593232 python3.9[211156]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159917.104619-2309-55476885895990/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v697: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:38 np0005593232 python3.9[211308]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:18:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:38.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:39 np0005593232 python3.9[211432]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159918.3570104-2309-57787718476125/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:18:40 np0005593232 python3.9[211584]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:18:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:40.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v698: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:40 np0005593232 python3.9[211707]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159919.588724-2309-80963020464493/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:18:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:40.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:18:41 np0005593232 python3.9[211860]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:18:41 np0005593232 python3.9[211983]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159920.7632062-2309-21794688120093/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:42.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:42 np0005593232 python3.9[212135]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:18:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v699: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:18:42.570 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:18:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:18:42.571 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:18:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:18:42.571 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:18:42 np0005593232 python3.9[212258]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159921.922788-2309-43015650779112/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:42.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:43 np0005593232 python3.9[212411]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:18:43 np0005593232 python3.9[212577]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159922.995776-2309-247642288431155/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:18:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:44.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:18:44 np0005593232 podman[212708]: 2026-01-23 09:18:44.420386651 +0000 UTC m=+0.074935029 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 23 04:18:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v700: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:44 np0005593232 python3.9[212755]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:18:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:18:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:45.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:45 np0005593232 python3.9[212879]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159924.108208-2309-222189409286542/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:45 np0005593232 python3.9[213031]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:18:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:18:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:46.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:18:46 np0005593232 python3.9[213154]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159925.2544754-2309-208928175267215/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:18:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:18:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:18:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:18:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:18:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:18:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:18:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:18:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:18:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:18:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:18:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:18:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:18:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:18:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:18:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:18:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:18:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:18:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:18:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:18:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:18:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:18:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:18:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v701: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:18:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:47.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:18:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 04:18:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1111] 
** DB Stats **
Uptime(secs): 1200.0 total, 600.0 interval
Cumulative writes: 3742 writes, 16K keys, 3742 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 3742 writes, 3742 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1398 writes, 5688 keys, 1398 commit groups, 1.0 writes per commit group, ingest: 9.80 MB, 0.02 MB/s
Interval WAL: 1398 writes, 1398 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     31.5      0.60              0.07         7    0.085       0      0       0.0       0.0
  L6      1/0    7.52 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.6     81.5     66.2      0.74              0.16         6    0.123     26K   3331       0.0       0.0
 Sum      1/0    7.52 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.6     45.0     50.7      1.33              0.23        13    0.102     26K   3331       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.3     99.8    100.6      0.33              0.11         6    0.055     14K   2019       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0     81.5     66.2      0.74              0.16         6    0.123     26K   3331       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     31.6      0.59              0.07         6    0.099       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.018, interval 0.007
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.07 GB write, 0.06 MB/s write, 0.06 GB read, 0.05 MB/s read, 1.3 seconds
Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.3 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557a844231f0#2 capacity: 308.00 MB usage: 2.26 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.000123 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(115,2.01 MB,0.654077%) FilterBlock(14,82.42 KB,0.0261332%) IndexBlock(14,170.67 KB,0.0541142%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Jan 23 04:18:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:48.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v702: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:48 np0005593232 python3.9[213305]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:18:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:49.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:49 np0005593232 python3.9[213461]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 23 04:18:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:18:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:18:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:50.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:18:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v703: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:51.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:52 np0005593232 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 23 04:18:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:18:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:52.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:18:52 np0005593232 python3.9[213618]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v704: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:52 np0005593232 python3.9[213770]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:53.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:53 np0005593232 podman[213895]: 2026-01-23 09:18:53.412767702 +0000 UTC m=+0.092396553 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:18:53 np0005593232 python3.9[213935]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:18:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:54.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:18:54 np0005593232 python3.9[214098]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v705: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:54 np0005593232 python3.9[214250]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:18:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:55.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:55 np0005593232 python3.9[214403]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:56.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:56 np0005593232 python3.9[214555]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v706: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:57.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:57 np0005593232 python3.9[214708]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:57 np0005593232 python3.9[214860]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:18:58.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:58 np0005593232 python3.9[215012]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:18:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v707: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:18:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:18:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:18:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:18:59.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:18:59 np0005593232 python3.9[215165]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 04:18:59 np0005593232 systemd[1]: Reloading.
Jan 23 04:18:59 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:18:59 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:18:59 np0005593232 systemd[1]: Starting libvirt logging daemon socket...
Jan 23 04:18:59 np0005593232 systemd[1]: Listening on libvirt logging daemon socket.
Jan 23 04:18:59 np0005593232 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 23 04:18:59 np0005593232 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 23 04:18:59 np0005593232 systemd[1]: Starting libvirt logging daemon...
Jan 23 04:18:59 np0005593232 systemd[1]: Started libvirt logging daemon.
Jan 23 04:18:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:19:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:19:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:00.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:19:00 np0005593232 python3.9[215357]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 04:19:00 np0005593232 systemd[1]: Reloading.
Jan 23 04:19:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v708: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:00 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:19:00 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:19:00 np0005593232 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 23 04:19:00 np0005593232 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 23 04:19:00 np0005593232 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 23 04:19:00 np0005593232 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 23 04:19:00 np0005593232 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 23 04:19:00 np0005593232 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 23 04:19:00 np0005593232 systemd[1]: Starting libvirt nodedev daemon...
Jan 23 04:19:00 np0005593232 systemd[1]: Started libvirt nodedev daemon.
Jan 23 04:19:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:01.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:01 np0005593232 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 23 04:19:01 np0005593232 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 23 04:19:01 np0005593232 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 23 04:19:01 np0005593232 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 23 04:19:01 np0005593232 python3.9[215575]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 04:19:01 np0005593232 systemd[1]: Reloading.
Jan 23 04:19:01 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:19:01 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:19:01 np0005593232 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 23 04:19:01 np0005593232 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 23 04:19:01 np0005593232 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 23 04:19:01 np0005593232 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 23 04:19:01 np0005593232 systemd[1]: Starting libvirt proxy daemon...
Jan 23 04:19:02 np0005593232 systemd[1]: Started libvirt proxy daemon.
Jan 23 04:19:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:02.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v709: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:02 np0005593232 setroubleshoot[215470]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 8b343788-1d78-42c4-8c18-1405fd3bcb72
Jan 23 04:19:02 np0005593232 setroubleshoot[215470]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

*****  Plugin dac_override (91.4 confidence) suggests   **********************

If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.
Do

Turn on full auditing
# auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
# ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file, and fix it,
otherwise report as a bugzilla.

*****  Plugin catchall (9.59 confidence) suggests   **************************

If you believe that virtlogd should have the dac_read_search capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
# semodule -X 300 -i my-virtlogd.pp
Jan 23 04:19:02 np0005593232 python3.9[215794]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 04:19:02 np0005593232 systemd[1]: Reloading.
Jan 23 04:19:02 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:19:02 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:19:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:03.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:03 np0005593232 systemd[1]: Listening on libvirt locking daemon socket.
Jan 23 04:19:03 np0005593232 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 23 04:19:03 np0005593232 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 04:19:03 np0005593232 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 23 04:19:03 np0005593232 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 23 04:19:03 np0005593232 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 23 04:19:03 np0005593232 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 23 04:19:03 np0005593232 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 23 04:19:03 np0005593232 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 23 04:19:03 np0005593232 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 23 04:19:03 np0005593232 systemd[1]: Starting libvirt QEMU daemon...
Jan 23 04:19:03 np0005593232 systemd[1]: Started libvirt QEMU daemon.
Jan 23 04:19:04 np0005593232 python3.9[216011]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 04:19:04 np0005593232 systemd[1]: Reloading.
Jan 23 04:19:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:19:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:04.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:19:04 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:19:04 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:19:04 np0005593232 systemd[1]: Starting libvirt secret daemon socket...
Jan 23 04:19:04 np0005593232 systemd[1]: Listening on libvirt secret daemon socket.
Jan 23 04:19:04 np0005593232 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 23 04:19:04 np0005593232 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 23 04:19:04 np0005593232 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 23 04:19:04 np0005593232 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 23 04:19:04 np0005593232 systemd[1]: Starting libvirt secret daemon...
Jan 23 04:19:04 np0005593232 systemd[1]: Started libvirt secret daemon.
Jan 23 04:19:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v710: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:19:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:05.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:05 np0005593232 python3.9[216275]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:19:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:19:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:06.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:19:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v711: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:06 np0005593232 python3.9[216427]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 04:19:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:19:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:07.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:19:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:19:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:19:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:19:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:19:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:19:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:19:07 np0005593232 python3.9[216580]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:19:08 np0005593232 python3.9[216734]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 04:19:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:19:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:08.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:19:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v712: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:09.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:09 np0005593232 python3.9[216885]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:19:09 np0005593232 python3.9[217006]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769159948.5979993-3383-40065345939626/.source.xml follow=False _original_basename=secret.xml.j2 checksum=4390443d357de49206cd2f69bdb29495711c4544 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:19:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:19:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:19:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:10.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:19:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v713: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:10 np0005593232 python3.9[217206]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine e1533653-0a5a-584c-b34b-8689f0d32e77#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:19:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:11.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:11 np0005593232 python3.9[217451]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:19:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:19:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:12.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:19:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v714: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:12 np0005593232 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 23 04:19:12 np0005593232 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.036s CPU time.
Jan 23 04:19:12 np0005593232 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 23 04:19:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:13.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:19:13 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:19:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:19:14 np0005593232 python3.9[217915]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:19:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:19:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:14.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:19:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:19:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v715: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:14 np0005593232 podman[218039]: 2026-01-23 09:19:14.631002336 +0000 UTC m=+0.053284527 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:19:14 np0005593232 python3.9[218086]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:19:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:19:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:19:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:19:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:19:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:19:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:19:15 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:19:15 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:19:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:19:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:15.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:19:15 np0005593232 python3.9[218210]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769159954.3079858-3548-42921569205370/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:19:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:19:15 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b46efe05-a7b0-46e5-9400-f813db3fbb9b does not exist
Jan 23 04:19:15 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 19c3cf0f-4986-4b55-9b66-b94e1eb85c93 does not exist
Jan 23 04:19:15 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c1c4439f-1828-4bce-8240-9edaa8ff8807 does not exist
Jan 23 04:19:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:19:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:19:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:19:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:19:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:19:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:19:15 np0005593232 podman[218434]: 2026-01-23 09:19:15.954640716 +0000 UTC m=+0.036453342 container create 1a3cc05baed1d114a8f4104b9848b2c773cc5b5e2f835ee8fdef57c100f9336d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendeleev, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Jan 23 04:19:15 np0005593232 systemd[1]: Started libpod-conmon-1a3cc05baed1d114a8f4104b9848b2c773cc5b5e2f835ee8fdef57c100f9336d.scope.
Jan 23 04:19:16 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:19:16 np0005593232 podman[218434]: 2026-01-23 09:19:15.937722318 +0000 UTC m=+0.019534964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:19:16 np0005593232 podman[218434]: 2026-01-23 09:19:16.045178726 +0000 UTC m=+0.126991392 container init 1a3cc05baed1d114a8f4104b9848b2c773cc5b5e2f835ee8fdef57c100f9336d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendeleev, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Jan 23 04:19:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:19:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:19:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:19:16 np0005593232 podman[218434]: 2026-01-23 09:19:16.052994236 +0000 UTC m=+0.134806862 container start 1a3cc05baed1d114a8f4104b9848b2c773cc5b5e2f835ee8fdef57c100f9336d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:19:16 np0005593232 podman[218434]: 2026-01-23 09:19:16.056852035 +0000 UTC m=+0.138664661 container attach 1a3cc05baed1d114a8f4104b9848b2c773cc5b5e2f835ee8fdef57c100f9336d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendeleev, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 04:19:16 np0005593232 systemd[1]: libpod-1a3cc05baed1d114a8f4104b9848b2c773cc5b5e2f835ee8fdef57c100f9336d.scope: Deactivated successfully.
Jan 23 04:19:16 np0005593232 elegant_mendeleev[218487]: 167 167
Jan 23 04:19:16 np0005593232 conmon[218487]: conmon 1a3cc05baed1d114a8f4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1a3cc05baed1d114a8f4104b9848b2c773cc5b5e2f835ee8fdef57c100f9336d.scope/container/memory.events
Jan 23 04:19:16 np0005593232 podman[218434]: 2026-01-23 09:19:16.062338831 +0000 UTC m=+0.144151457 container died 1a3cc05baed1d114a8f4104b9848b2c773cc5b5e2f835ee8fdef57c100f9336d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 23 04:19:16 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e8ee3569ae416aac61c349ecc2e8f9655e5bcd65d90b6642854c56999d97b188-merged.mount: Deactivated successfully.
Jan 23 04:19:16 np0005593232 podman[218434]: 2026-01-23 09:19:16.098062591 +0000 UTC m=+0.179875217 container remove 1a3cc05baed1d114a8f4104b9848b2c773cc5b5e2f835ee8fdef57c100f9336d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mendeleev, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 23 04:19:16 np0005593232 systemd[1]: libpod-conmon-1a3cc05baed1d114a8f4104b9848b2c773cc5b5e2f835ee8fdef57c100f9336d.scope: Deactivated successfully.
Jan 23 04:19:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:16.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:16 np0005593232 python3.9[218536]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:19:16 np0005593232 podman[218542]: 2026-01-23 09:19:16.228340934 +0000 UTC m=+0.021259292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:19:16 np0005593232 podman[218542]: 2026-01-23 09:19:16.33433841 +0000 UTC m=+0.127256768 container create 8a9c6311428cc09b1181b959995156188b818bf642476d1c7582b0d5c2db8987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_blackwell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 04:19:16 np0005593232 systemd[1]: Started libpod-conmon-8a9c6311428cc09b1181b959995156188b818bf642476d1c7582b0d5c2db8987.scope.
Jan 23 04:19:16 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:19:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f851d438c713e9a513df2a3694febdc1ac3e5e4acebad61f44784b79f45b53b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:19:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f851d438c713e9a513df2a3694febdc1ac3e5e4acebad61f44784b79f45b53b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:19:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f851d438c713e9a513df2a3694febdc1ac3e5e4acebad61f44784b79f45b53b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:19:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f851d438c713e9a513df2a3694febdc1ac3e5e4acebad61f44784b79f45b53b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:19:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f851d438c713e9a513df2a3694febdc1ac3e5e4acebad61f44784b79f45b53b7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:19:16 np0005593232 podman[218542]: 2026-01-23 09:19:16.469552083 +0000 UTC m=+0.262470451 container init 8a9c6311428cc09b1181b959995156188b818bf642476d1c7582b0d5c2db8987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:19:16 np0005593232 podman[218542]: 2026-01-23 09:19:16.478050153 +0000 UTC m=+0.270968501 container start 8a9c6311428cc09b1181b959995156188b818bf642476d1c7582b0d5c2db8987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 23 04:19:16 np0005593232 podman[218542]: 2026-01-23 09:19:16.481751708 +0000 UTC m=+0.274670056 container attach 8a9c6311428cc09b1181b959995156188b818bf642476d1c7582b0d5c2db8987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:19:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v716: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:19:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:17.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:19:17 np0005593232 python3.9[218715]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:19:17 np0005593232 distracted_blackwell[218582]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:19:17 np0005593232 distracted_blackwell[218582]: --> relative data size: 1.0
Jan 23 04:19:17 np0005593232 distracted_blackwell[218582]: --> All data devices are unavailable
Jan 23 04:19:17 np0005593232 systemd[1]: libpod-8a9c6311428cc09b1181b959995156188b818bf642476d1c7582b0d5c2db8987.scope: Deactivated successfully.
Jan 23 04:19:17 np0005593232 podman[218542]: 2026-01-23 09:19:17.377299556 +0000 UTC m=+1.170217904 container died 8a9c6311428cc09b1181b959995156188b818bf642476d1c7582b0d5c2db8987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True)
Jan 23 04:19:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f851d438c713e9a513df2a3694febdc1ac3e5e4acebad61f44784b79f45b53b7-merged.mount: Deactivated successfully.
Jan 23 04:19:17 np0005593232 podman[218542]: 2026-01-23 09:19:17.42836537 +0000 UTC m=+1.221283728 container remove 8a9c6311428cc09b1181b959995156188b818bf642476d1c7582b0d5c2db8987 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:19:17 np0005593232 systemd[1]: libpod-conmon-8a9c6311428cc09b1181b959995156188b818bf642476d1c7582b0d5c2db8987.scope: Deactivated successfully.
Jan 23 04:19:17 np0005593232 python3.9[218816]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:19:18 np0005593232 podman[219033]: 2026-01-23 09:19:18.043751087 +0000 UTC m=+0.077632606 container create a2ca202fa73c3af08521691a0c421b2608cec3ee8b200a1e783a56a16816153d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 23 04:19:18 np0005593232 systemd[1]: Started libpod-conmon-a2ca202fa73c3af08521691a0c421b2608cec3ee8b200a1e783a56a16816153d.scope.
Jan 23 04:19:18 np0005593232 podman[219033]: 2026-01-23 09:19:17.988533946 +0000 UTC m=+0.022415485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:19:18 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:19:18 np0005593232 podman[219033]: 2026-01-23 09:19:18.110298358 +0000 UTC m=+0.144179877 container init a2ca202fa73c3af08521691a0c421b2608cec3ee8b200a1e783a56a16816153d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 04:19:18 np0005593232 podman[219033]: 2026-01-23 09:19:18.118573352 +0000 UTC m=+0.152454871 container start a2ca202fa73c3af08521691a0c421b2608cec3ee8b200a1e783a56a16816153d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:19:18 np0005593232 podman[219033]: 2026-01-23 09:19:18.122116063 +0000 UTC m=+0.155997582 container attach a2ca202fa73c3af08521691a0c421b2608cec3ee8b200a1e783a56a16816153d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bell, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 23 04:19:18 np0005593232 peaceful_bell[219073]: 167 167
Jan 23 04:19:18 np0005593232 systemd[1]: libpod-a2ca202fa73c3af08521691a0c421b2608cec3ee8b200a1e783a56a16816153d.scope: Deactivated successfully.
Jan 23 04:19:18 np0005593232 podman[219033]: 2026-01-23 09:19:18.124215952 +0000 UTC m=+0.158097471 container died a2ca202fa73c3af08521691a0c421b2608cec3ee8b200a1e783a56a16816153d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:19:18 np0005593232 systemd[1]: var-lib-containers-storage-overlay-38f4f8df0607748e2beacec7ddde667035403e0b990bca422efd049ab3f924a0-merged.mount: Deactivated successfully.
Jan 23 04:19:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:18.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:18 np0005593232 podman[219033]: 2026-01-23 09:19:18.165732686 +0000 UTC m=+0.199614205 container remove a2ca202fa73c3af08521691a0c421b2608cec3ee8b200a1e783a56a16816153d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_bell, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 04:19:18 np0005593232 systemd[1]: libpod-conmon-a2ca202fa73c3af08521691a0c421b2608cec3ee8b200a1e783a56a16816153d.scope: Deactivated successfully.
Jan 23 04:19:18 np0005593232 podman[219149]: 2026-01-23 09:19:18.309584542 +0000 UTC m=+0.026918692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:19:18 np0005593232 python3.9[219143]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:19:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v717: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:18 np0005593232 podman[219149]: 2026-01-23 09:19:18.59920898 +0000 UTC m=+0.316543140 container create 68db79ebbda86d55dfbf1304a4c165188535082694ef5406eb24d8d76bf866b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:19:18 np0005593232 systemd[1]: Started libpod-conmon-68db79ebbda86d55dfbf1304a4c165188535082694ef5406eb24d8d76bf866b9.scope.
Jan 23 04:19:18 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:19:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0eecbdbc5bfff7df5ef028c7dd304e3c63b6d7ce460d6b439e8b7744d32b521/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:19:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0eecbdbc5bfff7df5ef028c7dd304e3c63b6d7ce460d6b439e8b7744d32b521/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:19:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0eecbdbc5bfff7df5ef028c7dd304e3c63b6d7ce460d6b439e8b7744d32b521/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:19:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0eecbdbc5bfff7df5ef028c7dd304e3c63b6d7ce460d6b439e8b7744d32b521/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:19:18 np0005593232 podman[219149]: 2026-01-23 09:19:18.708713446 +0000 UTC m=+0.426047606 container init 68db79ebbda86d55dfbf1304a4c165188535082694ef5406eb24d8d76bf866b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 04:19:18 np0005593232 podman[219149]: 2026-01-23 09:19:18.716359472 +0000 UTC m=+0.433693602 container start 68db79ebbda86d55dfbf1304a4c165188535082694ef5406eb24d8d76bf866b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_yonath, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 23 04:19:18 np0005593232 podman[219149]: 2026-01-23 09:19:18.740719081 +0000 UTC m=+0.458053211 container attach 68db79ebbda86d55dfbf1304a4c165188535082694ef5406eb24d8d76bf866b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_yonath, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 04:19:18 np0005593232 python3.9[219244]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.qdeaphrr recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:19:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:19.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]: {
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:    "0": [
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:        {
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:            "devices": [
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:                "/dev/loop3"
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:            ],
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:            "lv_name": "ceph_lv0",
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:            "lv_size": "7511998464",
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:            "name": "ceph_lv0",
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:            "tags": {
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:                "ceph.cluster_name": "ceph",
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:                "ceph.crush_device_class": "",
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:                "ceph.encrypted": "0",
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:                "ceph.osd_id": "0",
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:                "ceph.type": "block",
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:                "ceph.vdo": "0"
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:            },
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:            "type": "block",
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:            "vg_name": "ceph_vg0"
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:        }
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]:    ]
Jan 23 04:19:19 np0005593232 awesome_yonath[219242]: }
Jan 23 04:19:19 np0005593232 systemd[1]: libpod-68db79ebbda86d55dfbf1304a4c165188535082694ef5406eb24d8d76bf866b9.scope: Deactivated successfully.
Jan 23 04:19:19 np0005593232 podman[219149]: 2026-01-23 09:19:19.573354059 +0000 UTC m=+1.290688189 container died 68db79ebbda86d55dfbf1304a4c165188535082694ef5406eb24d8d76bf866b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:19:19 np0005593232 python3.9[219400]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:19:19 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d0eecbdbc5bfff7df5ef028c7dd304e3c63b6d7ce460d6b439e8b7744d32b521-merged.mount: Deactivated successfully.
Jan 23 04:19:19 np0005593232 podman[219149]: 2026-01-23 09:19:19.631327498 +0000 UTC m=+1.348661618 container remove 68db79ebbda86d55dfbf1304a4c165188535082694ef5406eb24d8d76bf866b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_yonath, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 04:19:19 np0005593232 systemd[1]: libpod-conmon-68db79ebbda86d55dfbf1304a4c165188535082694ef5406eb24d8d76bf866b9.scope: Deactivated successfully.
Jan 23 04:19:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:19:20 np0005593232 python3.9[219590]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:19:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:20.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:20 np0005593232 podman[219657]: 2026-01-23 09:19:20.191262378 +0000 UTC m=+0.036057510 container create 782adeb438e89e5289c873003d83a419531ff0260dcc57859347c3cdcba698bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_fermat, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 04:19:20 np0005593232 systemd[1]: Started libpod-conmon-782adeb438e89e5289c873003d83a419531ff0260dcc57859347c3cdcba698bd.scope.
Jan 23 04:19:20 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:19:20 np0005593232 podman[219657]: 2026-01-23 09:19:20.269987244 +0000 UTC m=+0.114782416 container init 782adeb438e89e5289c873003d83a419531ff0260dcc57859347c3cdcba698bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_fermat, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 04:19:20 np0005593232 podman[219657]: 2026-01-23 09:19:20.175304297 +0000 UTC m=+0.020099449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:19:20 np0005593232 podman[219657]: 2026-01-23 09:19:20.278944917 +0000 UTC m=+0.123740049 container start 782adeb438e89e5289c873003d83a419531ff0260dcc57859347c3cdcba698bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_fermat, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:19:20 np0005593232 podman[219657]: 2026-01-23 09:19:20.283050263 +0000 UTC m=+0.127845435 container attach 782adeb438e89e5289c873003d83a419531ff0260dcc57859347c3cdcba698bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 04:19:20 np0005593232 priceless_fermat[219673]: 167 167
Jan 23 04:19:20 np0005593232 systemd[1]: libpod-782adeb438e89e5289c873003d83a419531ff0260dcc57859347c3cdcba698bd.scope: Deactivated successfully.
Jan 23 04:19:20 np0005593232 podman[219657]: 2026-01-23 09:19:20.28469874 +0000 UTC m=+0.129493872 container died 782adeb438e89e5289c873003d83a419531ff0260dcc57859347c3cdcba698bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:19:20 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c853bc1977ecf12f4d1e417fd6d8b3b9ba1d94b20d8f08f4e80e7ab964efd6dd-merged.mount: Deactivated successfully.
Jan 23 04:19:20 np0005593232 podman[219657]: 2026-01-23 09:19:20.326469321 +0000 UTC m=+0.171264443 container remove 782adeb438e89e5289c873003d83a419531ff0260dcc57859347c3cdcba698bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:19:20 np0005593232 systemd[1]: libpod-conmon-782adeb438e89e5289c873003d83a419531ff0260dcc57859347c3cdcba698bd.scope: Deactivated successfully.
Jan 23 04:19:20 np0005593232 podman[219741]: 2026-01-23 09:19:20.481843693 +0000 UTC m=+0.046081994 container create 881dd4aeae7a1ae2d0db44db719dc6e3cde1d80df5f0938eab56bb0d8ac50aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:19:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v718: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:20 np0005593232 systemd[1]: Started libpod-conmon-881dd4aeae7a1ae2d0db44db719dc6e3cde1d80df5f0938eab56bb0d8ac50aad.scope.
Jan 23 04:19:20 np0005593232 podman[219741]: 2026-01-23 09:19:20.455745915 +0000 UTC m=+0.019984246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:19:20 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:19:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a73e4c430184345916853677a86d2ee9dde7c35c051961c8f589e8e56742ae54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:19:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a73e4c430184345916853677a86d2ee9dde7c35c051961c8f589e8e56742ae54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:19:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a73e4c430184345916853677a86d2ee9dde7c35c051961c8f589e8e56742ae54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:19:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a73e4c430184345916853677a86d2ee9dde7c35c051961c8f589e8e56742ae54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:19:20 np0005593232 podman[219741]: 2026-01-23 09:19:20.569705717 +0000 UTC m=+0.133944028 container init 881dd4aeae7a1ae2d0db44db719dc6e3cde1d80df5f0938eab56bb0d8ac50aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ardinghelli, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 04:19:20 np0005593232 podman[219741]: 2026-01-23 09:19:20.577887499 +0000 UTC m=+0.142125820 container start 881dd4aeae7a1ae2d0db44db719dc6e3cde1d80df5f0938eab56bb0d8ac50aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ardinghelli, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 04:19:20 np0005593232 podman[219741]: 2026-01-23 09:19:20.581054288 +0000 UTC m=+0.145292609 container attach 881dd4aeae7a1ae2d0db44db719dc6e3cde1d80df5f0938eab56bb0d8ac50aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:19:20 np0005593232 python3.9[219844]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:19:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:19:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:21.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:19:21 np0005593232 dazzling_ardinghelli[219788]: {
Jan 23 04:19:21 np0005593232 dazzling_ardinghelli[219788]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:19:21 np0005593232 dazzling_ardinghelli[219788]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:19:21 np0005593232 dazzling_ardinghelli[219788]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:19:21 np0005593232 dazzling_ardinghelli[219788]:        "osd_id": 0,
Jan 23 04:19:21 np0005593232 dazzling_ardinghelli[219788]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:19:21 np0005593232 dazzling_ardinghelli[219788]:        "type": "bluestore"
Jan 23 04:19:21 np0005593232 dazzling_ardinghelli[219788]:    }
Jan 23 04:19:21 np0005593232 dazzling_ardinghelli[219788]: }
Jan 23 04:19:21 np0005593232 systemd[1]: libpod-881dd4aeae7a1ae2d0db44db719dc6e3cde1d80df5f0938eab56bb0d8ac50aad.scope: Deactivated successfully.
Jan 23 04:19:21 np0005593232 podman[219741]: 2026-01-23 09:19:21.463350431 +0000 UTC m=+1.027588762 container died 881dd4aeae7a1ae2d0db44db719dc6e3cde1d80df5f0938eab56bb0d8ac50aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 04:19:21 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a73e4c430184345916853677a86d2ee9dde7c35c051961c8f589e8e56742ae54-merged.mount: Deactivated successfully.
Jan 23 04:19:21 np0005593232 podman[219741]: 2026-01-23 09:19:21.523969945 +0000 UTC m=+1.088208256 container remove 881dd4aeae7a1ae2d0db44db719dc6e3cde1d80df5f0938eab56bb0d8ac50aad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ardinghelli, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 04:19:21 np0005593232 systemd[1]: libpod-conmon-881dd4aeae7a1ae2d0db44db719dc6e3cde1d80df5f0938eab56bb0d8ac50aad.scope: Deactivated successfully.
Jan 23 04:19:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:19:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:19:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:19:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:19:21 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 8fc559a3-b49c-41f7-9080-3428d995e481 does not exist
Jan 23 04:19:21 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 719a160a-00e3-4c1e-b5ed-82d99103f786 does not exist
Jan 23 04:19:21 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7ef9186c-47bc-4146-a6e2-e82a7764b27f does not exist
Jan 23 04:19:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:19:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:22.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:19:22 np0005593232 python3[220075]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 23 04:19:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v719: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:19:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:19:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:19:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:23.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:19:23 np0005593232 python3.9[220228]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:19:23 np0005593232 podman[220278]: 2026-01-23 09:19:23.798923499 +0000 UTC m=+0.119424677 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 23 04:19:23 np0005593232 python3.9[220322]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:19:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:24.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v720: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:24 np0005593232 python3.9[220531]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:25.012383) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159965012526, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1323, "num_deletes": 257, "total_data_size": 2372729, "memory_usage": 2405824, "flush_reason": "Manual Compaction"}
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159965033978, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2337204, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15659, "largest_seqno": 16981, "table_properties": {"data_size": 2330955, "index_size": 3512, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12158, "raw_average_key_size": 18, "raw_value_size": 2318598, "raw_average_value_size": 3567, "num_data_blocks": 159, "num_entries": 650, "num_filter_entries": 650, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769159830, "oldest_key_time": 1769159830, "file_creation_time": 1769159965, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 21630 microseconds, and 8088 cpu microseconds.
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:25.034047) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2337204 bytes OK
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:25.034079) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:25.035853) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:25.035910) EVENT_LOG_v1 {"time_micros": 1769159965035905, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:25.035927) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2367027, prev total WAL file size 2367027, number of live WAL files 2.
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:25.036799) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323534' seq:0, type:0; will stop at (end)
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2282KB)], [35(7704KB)]
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159965036967, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 10226666, "oldest_snapshot_seqno": -1}
Jan 23 04:19:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:25.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4284 keys, 9889277 bytes, temperature: kUnknown
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159965103495, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 9889277, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9857540, "index_size": 19915, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10757, "raw_key_size": 106691, "raw_average_key_size": 24, "raw_value_size": 9776820, "raw_average_value_size": 2282, "num_data_blocks": 833, "num_entries": 4284, "num_filter_entries": 4284, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769159965, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:25.103752) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 9889277 bytes
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:25.104724) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.6 rd, 148.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 7.5 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(8.6) write-amplify(4.2) OK, records in: 4813, records dropped: 529 output_compression: NoCompression
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:25.104740) EVENT_LOG_v1 {"time_micros": 1769159965104732, "job": 16, "event": "compaction_finished", "compaction_time_micros": 66594, "compaction_time_cpu_micros": 29593, "output_level": 6, "num_output_files": 1, "total_output_size": 9889277, "num_input_records": 4813, "num_output_records": 4284, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159965105237, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159965106610, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:25.036650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:25.106725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:25.106732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:25.106733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:25.106735) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:19:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:25.106736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:19:25 np0005593232 python3.9[220657]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159964.2858818-3815-138449324110163/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:19:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:19:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:26.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:19:26 np0005593232 python3.9[220809]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:19:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v721: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:27 np0005593232 python3.9[220887]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:19:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:27.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:27 np0005593232 python3.9[221040]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:19:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:19:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:28.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:19:28 np0005593232 python3.9[221118]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:19:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v722: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:29.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:29 np0005593232 python3.9[221271]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:19:30 np0005593232 python3.9[221396]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769159968.6392565-3932-216200713987920/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:19:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:30.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v723: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:30 np0005593232 python3.9[221548]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:30.879509) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159970880055, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 300, "num_deletes": 251, "total_data_size": 109761, "memory_usage": 115552, "flush_reason": "Manual Compaction"}
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159970882696, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 108869, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16983, "largest_seqno": 17281, "table_properties": {"data_size": 106898, "index_size": 199, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5054, "raw_average_key_size": 18, "raw_value_size": 103029, "raw_average_value_size": 374, "num_data_blocks": 9, "num_entries": 275, "num_filter_entries": 275, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769159965, "oldest_key_time": 1769159965, "file_creation_time": 1769159970, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 3233 microseconds, and 916 cpu microseconds.
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:30.882739) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 108869 bytes OK
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:30.882758) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:30.884266) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:30.884327) EVENT_LOG_v1 {"time_micros": 1769159970884313, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:30.884359) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 107602, prev total WAL file size 107602, number of live WAL files 2.
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:30.885033) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(106KB)], [38(9657KB)]
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159970885096, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 9998146, "oldest_snapshot_seqno": -1}
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4049 keys, 7955959 bytes, temperature: kUnknown
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159970977051, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 7955959, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7927471, "index_size": 17242, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 102491, "raw_average_key_size": 25, "raw_value_size": 7852475, "raw_average_value_size": 1939, "num_data_blocks": 712, "num_entries": 4049, "num_filter_entries": 4049, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769159970, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:30.977602) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7955959 bytes
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:30.979529) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 108.3 rd, 86.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 9.4 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(164.9) write-amplify(73.1) OK, records in: 4559, records dropped: 510 output_compression: NoCompression
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:30.979563) EVENT_LOG_v1 {"time_micros": 1769159970979548, "job": 18, "event": "compaction_finished", "compaction_time_micros": 92300, "compaction_time_cpu_micros": 27440, "output_level": 6, "num_output_files": 1, "total_output_size": 7955959, "num_input_records": 4559, "num_output_records": 4049, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159970980540, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769159970985215, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:30.884856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:30.985414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:30.985425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:30.985428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:30.985430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:19:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:19:30.985432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:19:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:19:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:31.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:19:31 np0005593232 python3.9[221701]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:19:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:19:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:32.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:19:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v724: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:32 np0005593232 python3.9[221856]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:19:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:19:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:33.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:19:33 np0005593232 python3.9[222009]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:19:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:19:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:34.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:19:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v725: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:34 np0005593232 python3.9[222162]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:19:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:19:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:35.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:35 np0005593232 python3.9[222317]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:19:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:36.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:36 np0005593232 python3.9[222472]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:19:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v726: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:37.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:19:37
Jan 23 04:19:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:19:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:19:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'volumes', 'images', 'backups']
Jan 23 04:19:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:19:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:19:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:19:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:19:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:19:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:19:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:19:37 np0005593232 python3.9[222625]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:19:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:19:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:19:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:19:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:19:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:19:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:19:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:19:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:19:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:19:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:19:38 np0005593232 python3.9[222748]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769159976.7255485-4148-147602277664005/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:19:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:38.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v727: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:38 np0005593232 python3.9[222900]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:19:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:39.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:39 np0005593232 python3.9[223024]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769159978.2964535-4193-127546930083426/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:19:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:19:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:40.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:40 np0005593232 python3.9[223176]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:19:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v728: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:40 np0005593232 python3.9[223299]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769159979.79824-4238-244042874695845/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:19:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:41.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:41 np0005593232 python3.9[223452]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:19:41 np0005593232 systemd[1]: Reloading.
Jan 23 04:19:41 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:19:41 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:19:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:19:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:42.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:19:42 np0005593232 systemd[1]: Reached target edpm_libvirt.target.
Jan 23 04:19:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v729: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:19:42.572 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:19:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:19:42.573 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:19:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:19:42.573 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:19:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:43.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:43 np0005593232 python3.9[223643]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 23 04:19:43 np0005593232 systemd[1]: Reloading.
Jan 23 04:19:43 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:19:43 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:19:43 np0005593232 systemd[1]: Reloading.
Jan 23 04:19:43 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:19:43 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:19:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:44.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v730: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:44 np0005593232 systemd[1]: session-49.scope: Deactivated successfully.
Jan 23 04:19:44 np0005593232 systemd[1]: session-49.scope: Consumed 3min 26.178s CPU time.
Jan 23 04:19:44 np0005593232 systemd-logind[808]: Session 49 logged out. Waiting for processes to exit.
Jan 23 04:19:44 np0005593232 systemd-logind[808]: Removed session 49.
Jan 23 04:19:44 np0005593232 podman[223789]: 2026-01-23 09:19:44.735265073 +0000 UTC m=+0.056994752 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:19:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:19:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:45.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:46.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:19:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:19:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:19:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:19:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:19:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:19:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:19:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:19:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:19:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:19:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:19:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:19:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:19:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:19:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:19:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:19:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:19:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:19:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:19:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:19:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:19:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:19:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:19:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v731: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:47.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:48.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v732: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:49.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:19:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:50.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:50 np0005593232 systemd-logind[808]: New session 50 of user zuul.
Jan 23 04:19:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v733: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:50 np0005593232 systemd[1]: Started Session 50 of User zuul.
Jan 23 04:19:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:51.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:51 np0005593232 python3.9[223965]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:19:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:52.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v734: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:53.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:53 np0005593232 python3.9[224120]: ansible-ansible.builtin.service_facts Invoked
Jan 23 04:19:53 np0005593232 network[224137]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 04:19:53 np0005593232 network[224138]: 'network-scripts' will be removed from distribution in near future.
Jan 23 04:19:53 np0005593232 network[224139]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 04:19:54 np0005593232 podman[224146]: 2026-01-23 09:19:54.171884822 +0000 UTC m=+0.083453140 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 04:19:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:19:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:54.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:19:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v735: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:19:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:19:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:55.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:19:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:19:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:56.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:19:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v736: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:57.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:19:58.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:58 np0005593232 python3.9[224439]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 23 04:19:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v737: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:19:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:19:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:19:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:19:59.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:19:59 np0005593232 python3.9[224524]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 04:20:00 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 23 04:20:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:20:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:00.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v738: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:01 np0005593232 ceph-mon[74423]: overall HEALTH_OK
Jan 23 04:20:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:01.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:20:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:02.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:20:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v739: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:02 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0[81752]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 23 04:20:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:20:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:03.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:20:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:04.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v740: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:20:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:05.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:06.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:06 np0005593232 python3.9[224730]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:20:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v741: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:07.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:20:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:20:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:20:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:20:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:20:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:20:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:08.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v742: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:20:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:09.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:20:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:20:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:10.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v743: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:11 np0005593232 python3.9[224884]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:20:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:11.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:12.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v744: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:20:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:13.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:20:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:14.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v745: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:15 np0005593232 podman[225012]: 2026-01-23 09:20:15.011810179 +0000 UTC m=+0.063711832 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:20:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:20:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:15.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:15 np0005593232 python3.9[225059]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:20:16 np0005593232 python3.9[225211]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:20:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:16.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v746: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:16 np0005593232 python3.9[225364]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:20:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:17.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:17 np0005593232 python3.9[225488]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769160016.2354722-245-50238241095991/.source.iscsi _original_basename=.54dm3mfg follow=False checksum=bbd2addbbcb5deb0599547e1c8d4399cfee05429 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:20:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:20:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:18.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:20:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v747: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:18 np0005593232 python3.9[225640]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:20:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:19.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:19 np0005593232 python3.9[225793]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:20:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:20:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:20.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v748: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:20 np0005593232 python3.9[225945]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:20:20 np0005593232 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 23 04:20:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:21.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:20:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:22.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:20:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v749: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:20:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:20:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:20:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:20:22 np0005593232 python3.9[226222]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:20:23 np0005593232 systemd[1]: Reloading.
Jan 23 04:20:23 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:20:23 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:20:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:23.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:23 np0005593232 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 23 04:20:23 np0005593232 systemd[1]: Starting Open-iSCSI...
Jan 23 04:20:23 np0005593232 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 04:20:23 np0005593232 systemd[1]: Started Open-iSCSI.
Jan 23 04:20:23 np0005593232 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 23 04:20:23 np0005593232 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 23 04:20:23 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:20:23 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:20:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:20:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:20:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:20:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:20:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:20:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:20:24 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev e69a2085-55e7-452d-9e13-10cd60f38794 does not exist
Jan 23 04:20:24 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1cf4f1d1-2adb-4ee5-8781-2404eedeeff5 does not exist
Jan 23 04:20:24 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b58b5e1e-f868-4fde-9191-f6e190c24595 does not exist
Jan 23 04:20:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:20:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:20:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:20:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:20:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:20:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:20:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:20:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:24.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:20:24 np0005593232 podman[226477]: 2026-01-23 09:20:24.359410432 +0000 UTC m=+0.122563156 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:20:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v750: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:24 np0005593232 podman[226694]: 2026-01-23 09:20:24.695420812 +0000 UTC m=+0.052180457 container create 69870eaed9262faada26b4169f79b534daf1faae5d82c70e7b7a818a8febd8f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 04:20:24 np0005593232 systemd[1]: Started libpod-conmon-69870eaed9262faada26b4169f79b534daf1faae5d82c70e7b7a818a8febd8f9.scope.
Jan 23 04:20:24 np0005593232 podman[226694]: 2026-01-23 09:20:24.669324524 +0000 UTC m=+0.026084199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:20:24 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:20:24 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:20:24 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:20:24 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:20:24 np0005593232 podman[226694]: 2026-01-23 09:20:24.832996771 +0000 UTC m=+0.189756436 container init 69870eaed9262faada26b4169f79b534daf1faae5d82c70e7b7a818a8febd8f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 23 04:20:24 np0005593232 podman[226694]: 2026-01-23 09:20:24.840614166 +0000 UTC m=+0.197373811 container start 69870eaed9262faada26b4169f79b534daf1faae5d82c70e7b7a818a8febd8f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_williams, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 04:20:24 np0005593232 podman[226694]: 2026-01-23 09:20:24.844613309 +0000 UTC m=+0.201372974 container attach 69870eaed9262faada26b4169f79b534daf1faae5d82c70e7b7a818a8febd8f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_williams, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 23 04:20:24 np0005593232 lucid_williams[226750]: 167 167
Jan 23 04:20:24 np0005593232 systemd[1]: libpod-69870eaed9262faada26b4169f79b534daf1faae5d82c70e7b7a818a8febd8f9.scope: Deactivated successfully.
Jan 23 04:20:24 np0005593232 podman[226694]: 2026-01-23 09:20:24.848118248 +0000 UTC m=+0.204877893 container died 69870eaed9262faada26b4169f79b534daf1faae5d82c70e7b7a818a8febd8f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 04:20:24 np0005593232 systemd[1]: var-lib-containers-storage-overlay-9895f2e2de1b35bcce2dc43688d3590200b65a3f0bdfcd160841c12c5860964d-merged.mount: Deactivated successfully.
Jan 23 04:20:24 np0005593232 podman[226694]: 2026-01-23 09:20:24.887672927 +0000 UTC m=+0.244432572 container remove 69870eaed9262faada26b4169f79b534daf1faae5d82c70e7b7a818a8febd8f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:20:24 np0005593232 systemd[1]: libpod-conmon-69870eaed9262faada26b4169f79b534daf1faae5d82c70e7b7a818a8febd8f9.scope: Deactivated successfully.
Jan 23 04:20:25 np0005593232 python3.9[226786]: ansible-ansible.builtin.service_facts Invoked
Jan 23 04:20:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:20:25 np0005593232 network[226835]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 04:20:25 np0005593232 network[226836]: 'network-scripts' will be removed from distribution in near future.
Jan 23 04:20:25 np0005593232 network[226839]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 04:20:25 np0005593232 podman[226808]: 2026-01-23 09:20:25.093961989 +0000 UTC m=+0.059902395 container create 669cbbed67b6d0072ae339aaa6a4d6c1503d4e5992190b5b65f7ac8a1f5c78ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:20:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:25.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:25 np0005593232 systemd[1]: Started libpod-conmon-669cbbed67b6d0072ae339aaa6a4d6c1503d4e5992190b5b65f7ac8a1f5c78ae.scope.
Jan 23 04:20:25 np0005593232 podman[226808]: 2026-01-23 09:20:25.075382213 +0000 UTC m=+0.041322639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:20:25 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:20:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/665d80ed8e75b4fa72c7c5e5ec171889d71a82ce538a219a5d9be0390a13b2bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:20:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/665d80ed8e75b4fa72c7c5e5ec171889d71a82ce538a219a5d9be0390a13b2bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:20:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/665d80ed8e75b4fa72c7c5e5ec171889d71a82ce538a219a5d9be0390a13b2bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:20:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/665d80ed8e75b4fa72c7c5e5ec171889d71a82ce538a219a5d9be0390a13b2bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:20:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/665d80ed8e75b4fa72c7c5e5ec171889d71a82ce538a219a5d9be0390a13b2bc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:20:26 np0005593232 podman[226808]: 2026-01-23 09:20:26.000170718 +0000 UTC m=+0.966111134 container init 669cbbed67b6d0072ae339aaa6a4d6c1503d4e5992190b5b65f7ac8a1f5c78ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:20:26 np0005593232 podman[226808]: 2026-01-23 09:20:26.007761733 +0000 UTC m=+0.973702139 container start 669cbbed67b6d0072ae339aaa6a4d6c1503d4e5992190b5b65f7ac8a1f5c78ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:20:26 np0005593232 podman[226808]: 2026-01-23 09:20:26.075111436 +0000 UTC m=+1.041051842 container attach 669cbbed67b6d0072ae339aaa6a4d6c1503d4e5992190b5b65f7ac8a1f5c78ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:20:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:26.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v751: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:26 np0005593232 funny_mirzakhani[226847]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:20:26 np0005593232 funny_mirzakhani[226847]: --> relative data size: 1.0
Jan 23 04:20:26 np0005593232 funny_mirzakhani[226847]: --> All data devices are unavailable
Jan 23 04:20:27 np0005593232 systemd[1]: libpod-669cbbed67b6d0072ae339aaa6a4d6c1503d4e5992190b5b65f7ac8a1f5c78ae.scope: Deactivated successfully.
Jan 23 04:20:27 np0005593232 podman[226808]: 2026-01-23 09:20:27.024062774 +0000 UTC m=+1.990003190 container died 669cbbed67b6d0072ae339aaa6a4d6c1503d4e5992190b5b65f7ac8a1f5c78ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:20:27 np0005593232 systemd[1]: libpod-669cbbed67b6d0072ae339aaa6a4d6c1503d4e5992190b5b65f7ac8a1f5c78ae.scope: Consumed 1.000s CPU time.
Jan 23 04:20:27 np0005593232 systemd[1]: var-lib-containers-storage-overlay-665d80ed8e75b4fa72c7c5e5ec171889d71a82ce538a219a5d9be0390a13b2bc-merged.mount: Deactivated successfully.
Jan 23 04:20:27 np0005593232 podman[226808]: 2026-01-23 09:20:27.078875044 +0000 UTC m=+2.044815450 container remove 669cbbed67b6d0072ae339aaa6a4d6c1503d4e5992190b5b65f7ac8a1f5c78ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_mirzakhani, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 23 04:20:27 np0005593232 systemd[1]: libpod-conmon-669cbbed67b6d0072ae339aaa6a4d6c1503d4e5992190b5b65f7ac8a1f5c78ae.scope: Deactivated successfully.
Jan 23 04:20:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:20:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:27.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:20:27 np0005593232 podman[227050]: 2026-01-23 09:20:27.727766608 +0000 UTC m=+0.044174800 container create 346b5cab8f141cf7297d9c1cd0fb15e04fd00216d9aef105f9dfb35a1f013c94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 04:20:27 np0005593232 systemd[1]: Started libpod-conmon-346b5cab8f141cf7297d9c1cd0fb15e04fd00216d9aef105f9dfb35a1f013c94.scope.
Jan 23 04:20:27 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:20:27 np0005593232 podman[227050]: 2026-01-23 09:20:27.707499125 +0000 UTC m=+0.023907337 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:20:27 np0005593232 podman[227050]: 2026-01-23 09:20:27.813154952 +0000 UTC m=+0.129563134 container init 346b5cab8f141cf7297d9c1cd0fb15e04fd00216d9aef105f9dfb35a1f013c94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 23 04:20:27 np0005593232 podman[227050]: 2026-01-23 09:20:27.821613801 +0000 UTC m=+0.138021983 container start 346b5cab8f141cf7297d9c1cd0fb15e04fd00216d9aef105f9dfb35a1f013c94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mccarthy, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:20:27 np0005593232 systemd[1]: libpod-346b5cab8f141cf7297d9c1cd0fb15e04fd00216d9aef105f9dfb35a1f013c94.scope: Deactivated successfully.
Jan 23 04:20:27 np0005593232 elegant_mccarthy[227071]: 167 167
Jan 23 04:20:27 np0005593232 podman[227050]: 2026-01-23 09:20:27.826302363 +0000 UTC m=+0.142710565 container attach 346b5cab8f141cf7297d9c1cd0fb15e04fd00216d9aef105f9dfb35a1f013c94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:20:27 np0005593232 conmon[227071]: conmon 346b5cab8f141cf7297d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-346b5cab8f141cf7297d9c1cd0fb15e04fd00216d9aef105f9dfb35a1f013c94.scope/container/memory.events
Jan 23 04:20:27 np0005593232 podman[227050]: 2026-01-23 09:20:27.827730684 +0000 UTC m=+0.144138866 container died 346b5cab8f141cf7297d9c1cd0fb15e04fd00216d9aef105f9dfb35a1f013c94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mccarthy, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 04:20:27 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a4cfab5dd4c1a1824d22449341dacc7fdb9fe6fdbccbf0867739b84f369c1ed4-merged.mount: Deactivated successfully.
Jan 23 04:20:27 np0005593232 podman[227050]: 2026-01-23 09:20:27.924677255 +0000 UTC m=+0.241085437 container remove 346b5cab8f141cf7297d9c1cd0fb15e04fd00216d9aef105f9dfb35a1f013c94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:20:27 np0005593232 systemd[1]: libpod-conmon-346b5cab8f141cf7297d9c1cd0fb15e04fd00216d9aef105f9dfb35a1f013c94.scope: Deactivated successfully.
Jan 23 04:20:28 np0005593232 podman[227110]: 2026-01-23 09:20:28.078550445 +0000 UTC m=+0.039551229 container create 4d631d142c9a4e47e928c0c046a8bed5149fe9aec33fb3b73a777e2e7c5da572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:20:28 np0005593232 systemd[1]: Started libpod-conmon-4d631d142c9a4e47e928c0c046a8bed5149fe9aec33fb3b73a777e2e7c5da572.scope.
Jan 23 04:20:28 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:20:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/736c7d0d5d58a840edb2494f2f2dbbbf320fd96b36da3e7a71464ab627ff4213/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:20:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/736c7d0d5d58a840edb2494f2f2dbbbf320fd96b36da3e7a71464ab627ff4213/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:20:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/736c7d0d5d58a840edb2494f2f2dbbbf320fd96b36da3e7a71464ab627ff4213/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:20:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/736c7d0d5d58a840edb2494f2f2dbbbf320fd96b36da3e7a71464ab627ff4213/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:20:28 np0005593232 podman[227110]: 2026-01-23 09:20:28.157169137 +0000 UTC m=+0.118169931 container init 4d631d142c9a4e47e928c0c046a8bed5149fe9aec33fb3b73a777e2e7c5da572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hofstadter, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:20:28 np0005593232 podman[227110]: 2026-01-23 09:20:28.062743528 +0000 UTC m=+0.023744342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:20:28 np0005593232 podman[227110]: 2026-01-23 09:20:28.16503405 +0000 UTC m=+0.126034834 container start 4d631d142c9a4e47e928c0c046a8bed5149fe9aec33fb3b73a777e2e7c5da572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 04:20:28 np0005593232 podman[227110]: 2026-01-23 09:20:28.167973223 +0000 UTC m=+0.128974017 container attach 4d631d142c9a4e47e928c0c046a8bed5149fe9aec33fb3b73a777e2e7c5da572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 23 04:20:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:28.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v752: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]: {
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:    "0": [
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:        {
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:            "devices": [
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:                "/dev/loop3"
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:            ],
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:            "lv_name": "ceph_lv0",
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:            "lv_size": "7511998464",
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:            "name": "ceph_lv0",
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:            "tags": {
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:                "ceph.cluster_name": "ceph",
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:                "ceph.crush_device_class": "",
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:                "ceph.encrypted": "0",
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:                "ceph.osd_id": "0",
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:                "ceph.type": "block",
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:                "ceph.vdo": "0"
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:            },
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:            "type": "block",
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:            "vg_name": "ceph_vg0"
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:        }
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]:    ]
Jan 23 04:20:28 np0005593232 funny_hofstadter[227130]: }
Jan 23 04:20:28 np0005593232 systemd[1]: libpod-4d631d142c9a4e47e928c0c046a8bed5149fe9aec33fb3b73a777e2e7c5da572.scope: Deactivated successfully.
Jan 23 04:20:28 np0005593232 podman[227110]: 2026-01-23 09:20:28.977369775 +0000 UTC m=+0.938370559 container died 4d631d142c9a4e47e928c0c046a8bed5149fe9aec33fb3b73a777e2e7c5da572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:20:29 np0005593232 systemd[1]: var-lib-containers-storage-overlay-736c7d0d5d58a840edb2494f2f2dbbbf320fd96b36da3e7a71464ab627ff4213-merged.mount: Deactivated successfully.
Jan 23 04:20:29 np0005593232 podman[227110]: 2026-01-23 09:20:29.036647191 +0000 UTC m=+0.997647975 container remove 4d631d142c9a4e47e928c0c046a8bed5149fe9aec33fb3b73a777e2e7c5da572 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 23 04:20:29 np0005593232 systemd[1]: libpod-conmon-4d631d142c9a4e47e928c0c046a8bed5149fe9aec33fb3b73a777e2e7c5da572.scope: Deactivated successfully.
Jan 23 04:20:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:29.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:29 np0005593232 podman[227376]: 2026-01-23 09:20:29.660008994 +0000 UTC m=+0.036597126 container create 92d0706d8cfcfe1aa9c156af8b47d3b82564a86918071ecbd2950b477b657b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cori, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 23 04:20:29 np0005593232 systemd[1]: Started libpod-conmon-92d0706d8cfcfe1aa9c156af8b47d3b82564a86918071ecbd2950b477b657b97.scope.
Jan 23 04:20:29 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:20:29 np0005593232 podman[227376]: 2026-01-23 09:20:29.713852746 +0000 UTC m=+0.090440898 container init 92d0706d8cfcfe1aa9c156af8b47d3b82564a86918071ecbd2950b477b657b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cori, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:20:29 np0005593232 podman[227376]: 2026-01-23 09:20:29.719448394 +0000 UTC m=+0.096036536 container start 92d0706d8cfcfe1aa9c156af8b47d3b82564a86918071ecbd2950b477b657b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cori, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:20:29 np0005593232 podman[227376]: 2026-01-23 09:20:29.722739787 +0000 UTC m=+0.099327929 container attach 92d0706d8cfcfe1aa9c156af8b47d3b82564a86918071ecbd2950b477b657b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cori, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:20:29 np0005593232 hopeful_cori[227392]: 167 167
Jan 23 04:20:29 np0005593232 systemd[1]: libpod-92d0706d8cfcfe1aa9c156af8b47d3b82564a86918071ecbd2950b477b657b97.scope: Deactivated successfully.
Jan 23 04:20:29 np0005593232 podman[227376]: 2026-01-23 09:20:29.723978073 +0000 UTC m=+0.100566215 container died 92d0706d8cfcfe1aa9c156af8b47d3b82564a86918071ecbd2950b477b657b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cori, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:20:29 np0005593232 podman[227376]: 2026-01-23 09:20:29.64536816 +0000 UTC m=+0.021956322 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:20:29 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b46d684f72c61448734fabf2715bb119b07043e6f1b579405c2482a452efdbf1-merged.mount: Deactivated successfully.
Jan 23 04:20:29 np0005593232 podman[227376]: 2026-01-23 09:20:29.757996094 +0000 UTC m=+0.134584246 container remove 92d0706d8cfcfe1aa9c156af8b47d3b82564a86918071ecbd2950b477b657b97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cori, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:20:29 np0005593232 systemd[1]: libpod-conmon-92d0706d8cfcfe1aa9c156af8b47d3b82564a86918071ecbd2950b477b657b97.scope: Deactivated successfully.
Jan 23 04:20:29 np0005593232 podman[227469]: 2026-01-23 09:20:29.904836816 +0000 UTC m=+0.041279369 container create 3bbda20e737165fc2a7378ef46fc87b1d6d0dd136c7c30272fd2056e3dbb1b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 04:20:29 np0005593232 systemd[1]: Started libpod-conmon-3bbda20e737165fc2a7378ef46fc87b1d6d0dd136c7c30272fd2056e3dbb1b9e.scope.
Jan 23 04:20:29 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:20:29 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae701b5ef21b702ba1a2a113818e12b3865b30302109a69c295415a97d88625/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:20:29 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae701b5ef21b702ba1a2a113818e12b3865b30302109a69c295415a97d88625/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:20:29 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae701b5ef21b702ba1a2a113818e12b3865b30302109a69c295415a97d88625/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:20:29 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ae701b5ef21b702ba1a2a113818e12b3865b30302109a69c295415a97d88625/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:20:29 np0005593232 podman[227469]: 2026-01-23 09:20:29.980439243 +0000 UTC m=+0.116881876 container init 3bbda20e737165fc2a7378ef46fc87b1d6d0dd136c7c30272fd2056e3dbb1b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:20:29 np0005593232 podman[227469]: 2026-01-23 09:20:29.887580378 +0000 UTC m=+0.024022931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:20:29 np0005593232 podman[227469]: 2026-01-23 09:20:29.988201142 +0000 UTC m=+0.124643695 container start 3bbda20e737165fc2a7378ef46fc87b1d6d0dd136c7c30272fd2056e3dbb1b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 04:20:29 np0005593232 podman[227469]: 2026-01-23 09:20:29.990983431 +0000 UTC m=+0.127426024 container attach 3bbda20e737165fc2a7378ef46fc87b1d6d0dd136c7c30272fd2056e3dbb1b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lewin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:20:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:20:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:30.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:30 np0005593232 python3.9[227566]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 04:20:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v753: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:30 np0005593232 quizzical_lewin[227486]: {
Jan 23 04:20:30 np0005593232 quizzical_lewin[227486]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:20:30 np0005593232 quizzical_lewin[227486]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:20:30 np0005593232 quizzical_lewin[227486]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:20:30 np0005593232 quizzical_lewin[227486]:        "osd_id": 0,
Jan 23 04:20:30 np0005593232 quizzical_lewin[227486]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:20:30 np0005593232 quizzical_lewin[227486]:        "type": "bluestore"
Jan 23 04:20:30 np0005593232 quizzical_lewin[227486]:    }
Jan 23 04:20:30 np0005593232 quizzical_lewin[227486]: }
Jan 23 04:20:30 np0005593232 systemd[1]: libpod-3bbda20e737165fc2a7378ef46fc87b1d6d0dd136c7c30272fd2056e3dbb1b9e.scope: Deactivated successfully.
Jan 23 04:20:30 np0005593232 podman[227585]: 2026-01-23 09:20:30.920628103 +0000 UTC m=+0.029016391 container died 3bbda20e737165fc2a7378ef46fc87b1d6d0dd136c7c30272fd2056e3dbb1b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lewin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:20:30 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7ae701b5ef21b702ba1a2a113818e12b3865b30302109a69c295415a97d88625-merged.mount: Deactivated successfully.
Jan 23 04:20:30 np0005593232 podman[227585]: 2026-01-23 09:20:30.968602299 +0000 UTC m=+0.076990597 container remove 3bbda20e737165fc2a7378ef46fc87b1d6d0dd136c7c30272fd2056e3dbb1b9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_lewin, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 04:20:31 np0005593232 systemd[1]: libpod-conmon-3bbda20e737165fc2a7378ef46fc87b1d6d0dd136c7c30272fd2056e3dbb1b9e.scope: Deactivated successfully.
Jan 23 04:20:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:20:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:20:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:20:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:20:31 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 3bedfc25-c2f0-468b-b6a6-93ff9a759627 does not exist
Jan 23 04:20:31 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 58337811-b736-4513-b2b6-b75853e00872 does not exist
Jan 23 04:20:31 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 3ccea5b3-e23e-4661-a886-ce1fed6415b5 does not exist
Jan 23 04:20:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:31.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:31 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:20:31 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:20:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:20:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:32.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:20:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v754: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:32 np0005593232 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 04:20:32 np0005593232 systemd[1]: Starting man-db-cache-update.service...
Jan 23 04:20:32 np0005593232 systemd[1]: Reloading.
Jan 23 04:20:32 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:20:33 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:20:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:33.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:33 np0005593232 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 04:20:33 np0005593232 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 04:20:33 np0005593232 systemd[1]: Finished man-db-cache-update.service.
Jan 23 04:20:33 np0005593232 systemd[1]: run-r684b34a0a6d94fae875232c6d801f12d.service: Deactivated successfully.
Jan 23 04:20:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:20:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:34.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:20:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v755: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:20:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:35.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:35 np0005593232 python3.9[227968]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 23 04:20:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:20:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:36.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:20:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v756: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:36 np0005593232 python3.9[228120]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 23 04:20:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:20:37
Jan 23 04:20:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:20:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:20:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'vms', 'default.rgw.meta', 'volumes', '.mgr', 'images', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control']
Jan 23 04:20:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:20:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:37.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:20:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:20:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:20:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:20:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:20:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:20:37 np0005593232 python3.9[228277]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:20:37 np0005593232 python3.9[228400]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769160036.81415-509-223002365473485/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:20:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:20:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:20:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:20:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:20:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:20:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:20:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:20:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:20:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:20:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:20:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:38.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v757: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:38 np0005593232 python3.9[228552]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:20:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:39.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:20:40 np0005593232 python3.9[228705]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 04:20:40 np0005593232 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 04:20:40 np0005593232 systemd[1]: Stopped Load Kernel Modules.
Jan 23 04:20:40 np0005593232 systemd[1]: Stopping Load Kernel Modules...
Jan 23 04:20:40 np0005593232 systemd[1]: Starting Load Kernel Modules...
Jan 23 04:20:40 np0005593232 systemd[1]: Finished Load Kernel Modules.
Jan 23 04:20:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:20:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:40.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:20:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v758: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:41 np0005593232 python3.9[228861]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:20:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:41.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:41 np0005593232 python3.9[229015]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:20:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:20:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:42.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:20:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v759: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:20:42.573 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:20:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:20:42.575 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:20:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:20:42.575 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:20:42 np0005593232 python3.9[229167]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:20:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:43.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:43 np0005593232 python3.9[229291]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769160042.2636104-662-44956657817588/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:20:44 np0005593232 python3.9[229443]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:20:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:20:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:44.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:20:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v760: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:44 np0005593232 python3.9[229646]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:20:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:20:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:20:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:45.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:20:45 np0005593232 podman[229724]: 2026-01-23 09:20:45.415845669 +0000 UTC m=+0.073359272 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 23 04:20:45 np0005593232 python3.9[229818]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:20:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:20:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:46.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:20:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:20:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:20:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:20:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:20:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:20:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:20:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:20:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:20:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:20:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:20:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:20:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:20:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:20:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:20:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:20:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:20:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:20:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:20:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:20:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:20:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:20:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:20:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:20:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v761: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:46 np0005593232 python3.9[229970]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:20:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:47.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:47 np0005593232 python3.9[230123]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:20:48 np0005593232 python3.9[230275]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:20:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:20:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:48.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:20:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v762: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:48 np0005593232 python3.9[230427]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:20:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:20:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:49.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:20:49 np0005593232 python3.9[230580]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:20:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:20:50 np0005593232 python3.9[230732]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:20:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:20:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:50.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:20:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v763: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:50 np0005593232 python3.9[230886]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:20:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:20:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:51.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:20:51 np0005593232 python3.9[231040]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:20:52 np0005593232 systemd[1]: Listening on multipathd control socket.
Jan 23 04:20:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:20:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:52.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:20:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v764: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:52 np0005593232 python3.9[231196]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:20:52 np0005593232 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 23 04:20:52 np0005593232 udevadm[231202]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 23 04:20:52 np0005593232 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 23 04:20:53 np0005593232 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 23 04:20:53 np0005593232 multipathd[231206]: --------start up--------
Jan 23 04:20:53 np0005593232 multipathd[231206]: read /etc/multipath.conf
Jan 23 04:20:53 np0005593232 multipathd[231206]: path checkers start up
Jan 23 04:20:53 np0005593232 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 23 04:20:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:53.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:54 np0005593232 python3.9[231365]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 23 04:20:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:20:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:54.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:20:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v765: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:54 np0005593232 podman[231489]: 2026-01-23 09:20:54.957818023 +0000 UTC m=+0.092920714 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 23 04:20:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:20:55 np0005593232 python3.9[231537]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 23 04:20:55 np0005593232 kernel: Key type psk registered
Jan 23 04:20:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:55.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:55 np0005593232 python3.9[231707]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:20:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:20:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:56.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:20:56 np0005593232 python3.9[231830]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769160055.394713-1052-59027880963323/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:20:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v766: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:20:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:57.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:20:57 np0005593232 python3.9[231983]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:20:58 np0005593232 python3.9[232135]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 04:20:58 np0005593232 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 04:20:58 np0005593232 systemd[1]: Stopped Load Kernel Modules.
Jan 23 04:20:58 np0005593232 systemd[1]: Stopping Load Kernel Modules...
Jan 23 04:20:58 np0005593232 systemd[1]: Starting Load Kernel Modules...
Jan 23 04:20:58 np0005593232 systemd[1]: Finished Load Kernel Modules.
Jan 23 04:20:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:20:58.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v767: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:20:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:20:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:20:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:20:59.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:20:59 np0005593232 python3.9[232292]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 23 04:21:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:21:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:00.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v768: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:00 np0005593232 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 23 04:21:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:21:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:01.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:21:02 np0005593232 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 23 04:21:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:21:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:02.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:21:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v769: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:03.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:03 np0005593232 systemd[1]: Reloading.
Jan 23 04:21:03 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:21:03 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:21:04 np0005593232 systemd[1]: Reloading.
Jan 23 04:21:04 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:21:04 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:21:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:04.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v770: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:04 np0005593232 systemd-logind[808]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 23 04:21:04 np0005593232 systemd-logind[808]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 23 04:21:04 np0005593232 lvm[232433]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 04:21:04 np0005593232 lvm[232433]: VG ceph_vg0 finished
Jan 23 04:21:04 np0005593232 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 23 04:21:04 np0005593232 systemd[1]: Starting man-db-cache-update.service...
Jan 23 04:21:04 np0005593232 systemd[1]: Reloading.
Jan 23 04:21:05 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:21:05 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:21:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:21:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:05.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:05 np0005593232 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 23 04:21:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:06.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:06 np0005593232 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 23 04:21:06 np0005593232 systemd[1]: Finished man-db-cache-update.service.
Jan 23 04:21:06 np0005593232 systemd[1]: man-db-cache-update.service: Consumed 1.574s CPU time.
Jan 23 04:21:06 np0005593232 systemd[1]: run-r1d4dbbe29ecc48c9a9b929de6eb47790.service: Deactivated successfully.
Jan 23 04:21:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v771: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:06 np0005593232 python3.9[233813]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 04:21:06 np0005593232 systemd[1]: Stopping Open-iSCSI...
Jan 23 04:21:06 np0005593232 iscsid[226338]: iscsid shutting down.
Jan 23 04:21:06 np0005593232 systemd[1]: iscsid.service: Deactivated successfully.
Jan 23 04:21:06 np0005593232 systemd[1]: Stopped Open-iSCSI.
Jan 23 04:21:06 np0005593232 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 23 04:21:06 np0005593232 systemd[1]: Starting Open-iSCSI...
Jan 23 04:21:06 np0005593232 systemd[1]: Started Open-iSCSI.
Jan 23 04:21:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:07.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:21:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:21:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:21:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:21:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:21:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:21:07 np0005593232 python3.9[233971]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 04:21:07 np0005593232 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 23 04:21:07 np0005593232 multipathd[231206]: exit (signal)
Jan 23 04:21:07 np0005593232 multipathd[231206]: --------shut down-------
Jan 23 04:21:07 np0005593232 systemd[1]: multipathd.service: Deactivated successfully.
Jan 23 04:21:07 np0005593232 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 23 04:21:07 np0005593232 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 23 04:21:07 np0005593232 multipathd[233977]: --------start up--------
Jan 23 04:21:07 np0005593232 multipathd[233977]: read /etc/multipath.conf
Jan 23 04:21:07 np0005593232 multipathd[233977]: path checkers start up
Jan 23 04:21:07 np0005593232 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 23 04:21:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:08.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v772: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:09.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:09 np0005593232 python3.9[234135]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 23 04:21:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:21:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:10.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:10 np0005593232 python3.9[234291]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:21:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v773: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:21:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:11.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:21:11 np0005593232 python3.9[234444]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 04:21:11 np0005593232 systemd[1]: Reloading.
Jan 23 04:21:11 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:21:11 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:21:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:21:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:12.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:21:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v774: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:12 np0005593232 python3.9[234629]: ansible-ansible.builtin.service_facts Invoked
Jan 23 04:21:12 np0005593232 network[234646]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 23 04:21:12 np0005593232 network[234647]: 'network-scripts' will be removed from distribution in near future.
Jan 23 04:21:12 np0005593232 network[234648]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 23 04:21:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:21:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:13.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:21:13 np0005593232 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 23 04:21:13 np0005593232 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 23 04:21:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:21:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:14.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:21:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v775: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:21:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:15.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:15 np0005593232 podman[234701]: 2026-01-23 09:21:15.513682528 +0000 UTC m=+0.055865568 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 23 04:21:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:16.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v776: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 04:21:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 8381 writes, 33K keys, 8381 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 8381 writes, 1803 syncs, 4.65 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 632 writes, 971 keys, 632 commit groups, 1.0 writes per commit group, ingest: 0.32 MB, 0.00 MB/s#012Interval WAL: 632 writes, 301 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fb9efc0430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 0.0001 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fb9efc0430#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 0.0001 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
Jan 23 04:21:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:17.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:17 np0005593232 python3.9[234945]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:21:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:18.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:18 np0005593232 python3.9[235098]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:21:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v777: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:19.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:19 np0005593232 python3.9[235252]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:21:20 np0005593232 python3.9[235405]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:21:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:21:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:20.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v778: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:20 np0005593232 python3.9[235558]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:21:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:21:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:21.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:21:21 np0005593232 python3.9[235712]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:21:22 np0005593232 python3.9[235865]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:21:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:22.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v779: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:22 np0005593232 python3.9[236018]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:21:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:23.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:24 np0005593232 python3.9[236172]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:21:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:21:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:24.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:21:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v780: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:24 np0005593232 python3.9[236324]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:21:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:21:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:25.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:25 np0005593232 podman[236499]: 2026-01-23 09:21:25.332613669 +0000 UTC m=+0.091148774 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 23 04:21:25 np0005593232 python3.9[236545]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:21:26 np0005593232 python3.9[236705]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:21:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:26.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v781: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:26 np0005593232 python3.9[236857]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:21:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:27.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:27 np0005593232 python3.9[237010]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:21:27 np0005593232 ceph-mgr[74726]: [devicehealth INFO root] Check health
Jan 23 04:21:28 np0005593232 python3.9[237162]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:21:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:28.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v782: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:28 np0005593232 python3.9[237314]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:21:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:29.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:29 np0005593232 python3.9[237467]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:21:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:21:30 np0005593232 python3.9[237619]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:21:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:21:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:30.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:21:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v783: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:30 np0005593232 python3.9[237771]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:21:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:31.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:31 np0005593232 python3.9[237924]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:21:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:21:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:21:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:21:32 np0005593232 python3.9[238176]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:21:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:21:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:21:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:32.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:21:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v784: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:32 np0005593232 python3.9[238358]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:21:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:21:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:21:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:21:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:21:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:21:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:21:32 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 54acd1b1-ef6e-4510-a313-f18482fa026d does not exist
Jan 23 04:21:32 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 72050ad1-127b-4dda-82b0-75efeae0c98b does not exist
Jan 23 04:21:32 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 6a89f76c-4e3a-4a83-88c3-6374649dc414 does not exist
Jan 23 04:21:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:21:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:21:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:21:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:21:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:21:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:21:33 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:21:33 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:21:33 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:21:33 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:21:33 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:21:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:33.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:33 np0005593232 python3.9[238584]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:21:33 np0005593232 podman[238646]: 2026-01-23 09:21:33.387044424 +0000 UTC m=+0.039559398 container create fc174122cf06c3257f7e2e7e62c1ecc8eaa016fab48b2215578f9d5c9ccae275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_raman, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:21:33 np0005593232 systemd[1]: Started libpod-conmon-fc174122cf06c3257f7e2e7e62c1ecc8eaa016fab48b2215578f9d5c9ccae275.scope.
Jan 23 04:21:33 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:21:33 np0005593232 podman[238646]: 2026-01-23 09:21:33.370198568 +0000 UTC m=+0.022713582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:21:33 np0005593232 podman[238646]: 2026-01-23 09:21:33.473492854 +0000 UTC m=+0.126007848 container init fc174122cf06c3257f7e2e7e62c1ecc8eaa016fab48b2215578f9d5c9ccae275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Jan 23 04:21:33 np0005593232 podman[238646]: 2026-01-23 09:21:33.48044764 +0000 UTC m=+0.132962614 container start fc174122cf06c3257f7e2e7e62c1ecc8eaa016fab48b2215578f9d5c9ccae275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_raman, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 04:21:33 np0005593232 podman[238646]: 2026-01-23 09:21:33.4853842 +0000 UTC m=+0.137899174 container attach fc174122cf06c3257f7e2e7e62c1ecc8eaa016fab48b2215578f9d5c9ccae275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_raman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:21:33 np0005593232 laughing_raman[238663]: 167 167
Jan 23 04:21:33 np0005593232 systemd[1]: libpod-fc174122cf06c3257f7e2e7e62c1ecc8eaa016fab48b2215578f9d5c9ccae275.scope: Deactivated successfully.
Jan 23 04:21:33 np0005593232 podman[238646]: 2026-01-23 09:21:33.490429822 +0000 UTC m=+0.142944796 container died fc174122cf06c3257f7e2e7e62c1ecc8eaa016fab48b2215578f9d5c9ccae275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_raman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 04:21:33 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6620441fb8082257f1b7698446aa32047997f9a2e35c90747617ede421c7edbd-merged.mount: Deactivated successfully.
Jan 23 04:21:33 np0005593232 podman[238646]: 2026-01-23 09:21:33.529986059 +0000 UTC m=+0.182501033 container remove fc174122cf06c3257f7e2e7e62c1ecc8eaa016fab48b2215578f9d5c9ccae275 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:21:33 np0005593232 systemd[1]: libpod-conmon-fc174122cf06c3257f7e2e7e62c1ecc8eaa016fab48b2215578f9d5c9ccae275.scope: Deactivated successfully.
Jan 23 04:21:33 np0005593232 podman[238761]: 2026-01-23 09:21:33.680085225 +0000 UTC m=+0.039418183 container create a55b9452d5081d0bd147d796d08b66b9ef8dbf24d4220dde89c0bae10ad73dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_fermi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 04:21:33 np0005593232 systemd[1]: Started libpod-conmon-a55b9452d5081d0bd147d796d08b66b9ef8dbf24d4220dde89c0bae10ad73dff.scope.
Jan 23 04:21:33 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:21:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/140f622fb2ea569937354a567cda656e142b93e94bff5ee3228a3941a04c1bd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:21:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/140f622fb2ea569937354a567cda656e142b93e94bff5ee3228a3941a04c1bd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:21:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/140f622fb2ea569937354a567cda656e142b93e94bff5ee3228a3941a04c1bd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:21:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/140f622fb2ea569937354a567cda656e142b93e94bff5ee3228a3941a04c1bd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:21:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/140f622fb2ea569937354a567cda656e142b93e94bff5ee3228a3941a04c1bd6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:21:33 np0005593232 podman[238761]: 2026-01-23 09:21:33.663361763 +0000 UTC m=+0.022694741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:21:33 np0005593232 podman[238761]: 2026-01-23 09:21:33.762126911 +0000 UTC m=+0.121459889 container init a55b9452d5081d0bd147d796d08b66b9ef8dbf24d4220dde89c0bae10ad73dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:21:33 np0005593232 podman[238761]: 2026-01-23 09:21:33.77129676 +0000 UTC m=+0.130629718 container start a55b9452d5081d0bd147d796d08b66b9ef8dbf24d4220dde89c0bae10ad73dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_fermi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:21:33 np0005593232 podman[238761]: 2026-01-23 09:21:33.77447369 +0000 UTC m=+0.133806688 container attach a55b9452d5081d0bd147d796d08b66b9ef8dbf24d4220dde89c0bae10ad73dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:21:34 np0005593232 python3.9[238860]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:21:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:34.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v785: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:34 np0005593232 frosty_fermi[238803]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:21:34 np0005593232 frosty_fermi[238803]: --> relative data size: 1.0
Jan 23 04:21:34 np0005593232 frosty_fermi[238803]: --> All data devices are unavailable
Jan 23 04:21:34 np0005593232 systemd[1]: libpod-a55b9452d5081d0bd147d796d08b66b9ef8dbf24d4220dde89c0bae10ad73dff.scope: Deactivated successfully.
Jan 23 04:21:34 np0005593232 podman[238761]: 2026-01-23 09:21:34.626341586 +0000 UTC m=+0.985674584 container died a55b9452d5081d0bd147d796d08b66b9ef8dbf24d4220dde89c0bae10ad73dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:21:34 np0005593232 systemd[1]: var-lib-containers-storage-overlay-140f622fb2ea569937354a567cda656e142b93e94bff5ee3228a3941a04c1bd6-merged.mount: Deactivated successfully.
Jan 23 04:21:34 np0005593232 podman[238761]: 2026-01-23 09:21:34.691885386 +0000 UTC m=+1.051218344 container remove a55b9452d5081d0bd147d796d08b66b9ef8dbf24d4220dde89c0bae10ad73dff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_fermi, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:21:34 np0005593232 systemd[1]: libpod-conmon-a55b9452d5081d0bd147d796d08b66b9ef8dbf24d4220dde89c0bae10ad73dff.scope: Deactivated successfully.
Jan 23 04:21:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:21:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:21:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:35.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:21:35 np0005593232 python3.9[239148]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:21:35 np0005593232 podman[239175]: 2026-01-23 09:21:35.344614381 +0000 UTC m=+0.063870644 container create e40e2631b6c55c2a9c013675e091f24973dc45dfd1278ac84a0ae64a0e560fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 23 04:21:35 np0005593232 systemd[1]: Started libpod-conmon-e40e2631b6c55c2a9c013675e091f24973dc45dfd1278ac84a0ae64a0e560fdd.scope.
Jan 23 04:21:35 np0005593232 podman[239175]: 2026-01-23 09:21:35.305273121 +0000 UTC m=+0.024529404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:21:35 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:21:35 np0005593232 podman[239175]: 2026-01-23 09:21:35.422765236 +0000 UTC m=+0.142021499 container init e40e2631b6c55c2a9c013675e091f24973dc45dfd1278ac84a0ae64a0e560fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:21:35 np0005593232 podman[239175]: 2026-01-23 09:21:35.431719889 +0000 UTC m=+0.150976152 container start e40e2631b6c55c2a9c013675e091f24973dc45dfd1278ac84a0ae64a0e560fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Jan 23 04:21:35 np0005593232 podman[239175]: 2026-01-23 09:21:35.435758543 +0000 UTC m=+0.155014816 container attach e40e2631b6c55c2a9c013675e091f24973dc45dfd1278ac84a0ae64a0e560fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 04:21:35 np0005593232 distracted_almeida[239193]: 167 167
Jan 23 04:21:35 np0005593232 systemd[1]: libpod-e40e2631b6c55c2a9c013675e091f24973dc45dfd1278ac84a0ae64a0e560fdd.scope: Deactivated successfully.
Jan 23 04:21:35 np0005593232 podman[239175]: 2026-01-23 09:21:35.437805161 +0000 UTC m=+0.157061434 container died e40e2631b6c55c2a9c013675e091f24973dc45dfd1278ac84a0ae64a0e560fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_almeida, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:21:35 np0005593232 systemd[1]: var-lib-containers-storage-overlay-95aa3a5b9424a00748dfe6e5b26cba4ce6b9411534155d26174b8773532b45b9-merged.mount: Deactivated successfully.
Jan 23 04:21:35 np0005593232 podman[239175]: 2026-01-23 09:21:35.474512957 +0000 UTC m=+0.193769220 container remove e40e2631b6c55c2a9c013675e091f24973dc45dfd1278ac84a0ae64a0e560fdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 04:21:35 np0005593232 systemd[1]: libpod-conmon-e40e2631b6c55c2a9c013675e091f24973dc45dfd1278ac84a0ae64a0e560fdd.scope: Deactivated successfully.
Jan 23 04:21:35 np0005593232 podman[239266]: 2026-01-23 09:21:35.643848937 +0000 UTC m=+0.045256359 container create e391f7f1a7f6f1789d2936a90d3aa663fe9b764fc4c937d4d0271e1ad88fdbb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sanderson, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 04:21:35 np0005593232 systemd[1]: Started libpod-conmon-e391f7f1a7f6f1789d2936a90d3aa663fe9b764fc4c937d4d0271e1ad88fdbb2.scope.
Jan 23 04:21:35 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:21:35 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa4d389a18077db968afc1af16216b0795ef3e68a801087b457f6a724d127284/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:21:35 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa4d389a18077db968afc1af16216b0795ef3e68a801087b457f6a724d127284/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:21:35 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa4d389a18077db968afc1af16216b0795ef3e68a801087b457f6a724d127284/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:21:35 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa4d389a18077db968afc1af16216b0795ef3e68a801087b457f6a724d127284/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:21:35 np0005593232 podman[239266]: 2026-01-23 09:21:35.623845462 +0000 UTC m=+0.025252914 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:21:35 np0005593232 podman[239266]: 2026-01-23 09:21:35.724169794 +0000 UTC m=+0.125577236 container init e391f7f1a7f6f1789d2936a90d3aa663fe9b764fc4c937d4d0271e1ad88fdbb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Jan 23 04:21:35 np0005593232 podman[239266]: 2026-01-23 09:21:35.734057343 +0000 UTC m=+0.135464765 container start e391f7f1a7f6f1789d2936a90d3aa663fe9b764fc4c937d4d0271e1ad88fdbb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:21:35 np0005593232 podman[239266]: 2026-01-23 09:21:35.737231162 +0000 UTC m=+0.138638604 container attach e391f7f1a7f6f1789d2936a90d3aa663fe9b764fc4c937d4d0271e1ad88fdbb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sanderson, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Jan 23 04:21:36 np0005593232 python3.9[239386]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 23 04:21:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:21:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:36.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]: {
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:    "0": [
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:        {
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:            "devices": [
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:                "/dev/loop3"
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:            ],
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:            "lv_name": "ceph_lv0",
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:            "lv_size": "7511998464",
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:            "name": "ceph_lv0",
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:            "tags": {
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:                "ceph.cluster_name": "ceph",
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:                "ceph.crush_device_class": "",
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:                "ceph.encrypted": "0",
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:                "ceph.osd_id": "0",
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:                "ceph.type": "block",
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:                "ceph.vdo": "0"
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:            },
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:            "type": "block",
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:            "vg_name": "ceph_vg0"
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:        }
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]:    ]
Jan 23 04:21:36 np0005593232 elated_sanderson[239308]: }
Jan 23 04:21:36 np0005593232 systemd[1]: libpod-e391f7f1a7f6f1789d2936a90d3aa663fe9b764fc4c937d4d0271e1ad88fdbb2.scope: Deactivated successfully.
Jan 23 04:21:36 np0005593232 podman[239266]: 2026-01-23 09:21:36.501112185 +0000 UTC m=+0.902519607 container died e391f7f1a7f6f1789d2936a90d3aa663fe9b764fc4c937d4d0271e1ad88fdbb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sanderson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:21:36 np0005593232 systemd[1]: var-lib-containers-storage-overlay-aa4d389a18077db968afc1af16216b0795ef3e68a801087b457f6a724d127284-merged.mount: Deactivated successfully.
Jan 23 04:21:36 np0005593232 podman[239266]: 2026-01-23 09:21:36.555109119 +0000 UTC m=+0.956516541 container remove e391f7f1a7f6f1789d2936a90d3aa663fe9b764fc4c937d4d0271e1ad88fdbb2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 04:21:36 np0005593232 systemd[1]: libpod-conmon-e391f7f1a7f6f1789d2936a90d3aa663fe9b764fc4c937d4d0271e1ad88fdbb2.scope: Deactivated successfully.
Jan 23 04:21:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v786: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:21:37
Jan 23 04:21:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:21:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:21:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', '.mgr', 'default.rgw.log']
Jan 23 04:21:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:21:37 np0005593232 podman[239693]: 2026-01-23 09:21:37.177655372 +0000 UTC m=+0.039510266 container create 2c71e4a0166bcc84f6e674020d88b3d32c36235fa927e4d7ab16403f75fa4ed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 04:21:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:37.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:21:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:21:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:21:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:21:37 np0005593232 systemd[1]: Started libpod-conmon-2c71e4a0166bcc84f6e674020d88b3d32c36235fa927e4d7ab16403f75fa4ed2.scope.
Jan 23 04:21:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:21:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:21:37 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:21:37 np0005593232 podman[239693]: 2026-01-23 09:21:37.253351089 +0000 UTC m=+0.115205993 container init 2c71e4a0166bcc84f6e674020d88b3d32c36235fa927e4d7ab16403f75fa4ed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_blackburn, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:21:37 np0005593232 podman[239693]: 2026-01-23 09:21:37.160860348 +0000 UTC m=+0.022715252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:21:37 np0005593232 podman[239693]: 2026-01-23 09:21:37.259804731 +0000 UTC m=+0.121659625 container start 2c71e4a0166bcc84f6e674020d88b3d32c36235fa927e4d7ab16403f75fa4ed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:21:37 np0005593232 podman[239693]: 2026-01-23 09:21:37.263526646 +0000 UTC m=+0.125381550 container attach 2c71e4a0166bcc84f6e674020d88b3d32c36235fa927e4d7ab16403f75fa4ed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_blackburn, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 04:21:37 np0005593232 thirsty_blackburn[239710]: 167 167
Jan 23 04:21:37 np0005593232 systemd[1]: libpod-2c71e4a0166bcc84f6e674020d88b3d32c36235fa927e4d7ab16403f75fa4ed2.scope: Deactivated successfully.
Jan 23 04:21:37 np0005593232 podman[239693]: 2026-01-23 09:21:37.265315756 +0000 UTC m=+0.127170640 container died 2c71e4a0166bcc84f6e674020d88b3d32c36235fa927e4d7ab16403f75fa4ed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_blackburn, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:21:37 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f8e4c12b95162b0486b3790bb926ef191465183e2712f7cada2cdaa7812b1845-merged.mount: Deactivated successfully.
Jan 23 04:21:37 np0005593232 podman[239693]: 2026-01-23 09:21:37.29910988 +0000 UTC m=+0.160964774 container remove 2c71e4a0166bcc84f6e674020d88b3d32c36235fa927e4d7ab16403f75fa4ed2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:21:37 np0005593232 systemd[1]: libpod-conmon-2c71e4a0166bcc84f6e674020d88b3d32c36235fa927e4d7ab16403f75fa4ed2.scope: Deactivated successfully.
Jan 23 04:21:37 np0005593232 python3.9[239689]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 04:21:37 np0005593232 systemd[1]: Reloading.
Jan 23 04:21:37 np0005593232 podman[239735]: 2026-01-23 09:21:37.496598285 +0000 UTC m=+0.053633035 container create d776d6b678432fcbe97d87b7ff0592348beee55055e7a2c4f89cba53b5b7adc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:21:37 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:21:37 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:21:37 np0005593232 podman[239735]: 2026-01-23 09:21:37.471746403 +0000 UTC m=+0.028781173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:21:37 np0005593232 systemd[1]: Started libpod-conmon-d776d6b678432fcbe97d87b7ff0592348beee55055e7a2c4f89cba53b5b7adc0.scope.
Jan 23 04:21:37 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:21:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3621e410f87db2c39db02a6a275e793b477ea324053260e305260fcab1fce55e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:21:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3621e410f87db2c39db02a6a275e793b477ea324053260e305260fcab1fce55e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:21:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3621e410f87db2c39db02a6a275e793b477ea324053260e305260fcab1fce55e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:21:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3621e410f87db2c39db02a6a275e793b477ea324053260e305260fcab1fce55e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:21:37 np0005593232 podman[239735]: 2026-01-23 09:21:37.788554586 +0000 UTC m=+0.345589356 container init d776d6b678432fcbe97d87b7ff0592348beee55055e7a2c4f89cba53b5b7adc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ellis, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 23 04:21:37 np0005593232 podman[239735]: 2026-01-23 09:21:37.798993571 +0000 UTC m=+0.356028321 container start d776d6b678432fcbe97d87b7ff0592348beee55055e7a2c4f89cba53b5b7adc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ellis, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 23 04:21:37 np0005593232 podman[239735]: 2026-01-23 09:21:37.802124629 +0000 UTC m=+0.359159409 container attach d776d6b678432fcbe97d87b7ff0592348beee55055e7a2c4f89cba53b5b7adc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ellis, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:21:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:21:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:21:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:21:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:21:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:21:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:21:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:21:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:21:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:21:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:21:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:21:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:38.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:21:38 np0005593232 python3.9[239943]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:21:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v787: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:38 np0005593232 mystifying_ellis[239787]: {
Jan 23 04:21:38 np0005593232 mystifying_ellis[239787]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:21:38 np0005593232 mystifying_ellis[239787]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:21:38 np0005593232 mystifying_ellis[239787]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:21:38 np0005593232 mystifying_ellis[239787]:        "osd_id": 0,
Jan 23 04:21:38 np0005593232 mystifying_ellis[239787]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:21:38 np0005593232 mystifying_ellis[239787]:        "type": "bluestore"
Jan 23 04:21:38 np0005593232 mystifying_ellis[239787]:    }
Jan 23 04:21:38 np0005593232 mystifying_ellis[239787]: }
Jan 23 04:21:38 np0005593232 systemd[1]: libpod-d776d6b678432fcbe97d87b7ff0592348beee55055e7a2c4f89cba53b5b7adc0.scope: Deactivated successfully.
Jan 23 04:21:38 np0005593232 podman[239735]: 2026-01-23 09:21:38.666154128 +0000 UTC m=+1.223188878 container died d776d6b678432fcbe97d87b7ff0592348beee55055e7a2c4f89cba53b5b7adc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 04:21:38 np0005593232 systemd[1]: var-lib-containers-storage-overlay-3621e410f87db2c39db02a6a275e793b477ea324053260e305260fcab1fce55e-merged.mount: Deactivated successfully.
Jan 23 04:21:38 np0005593232 podman[239735]: 2026-01-23 09:21:38.777186003 +0000 UTC m=+1.334220753 container remove d776d6b678432fcbe97d87b7ff0592348beee55055e7a2c4f89cba53b5b7adc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_ellis, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:21:38 np0005593232 systemd[1]: libpod-conmon-d776d6b678432fcbe97d87b7ff0592348beee55055e7a2c4f89cba53b5b7adc0.scope: Deactivated successfully.
Jan 23 04:21:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:21:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:21:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:21:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:21:38 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ce8648c3-2166-44e0-959e-77c2dc253ee1 does not exist
Jan 23 04:21:38 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 53c5a04c-99f0-4cb0-abfd-7165806913fe does not exist
Jan 23 04:21:38 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 67622276-5782-48df-a4a1-9c81b44075df does not exist
Jan 23 04:21:39 np0005593232 python3.9[240126]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:21:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:39.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:39 np0005593232 python3.9[240330]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:21:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:21:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:21:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:21:40 np0005593232 python3.9[240483]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:21:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:21:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:40.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:21:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v788: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:41 np0005593232 python3.9[240636]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:21:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:21:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:41.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:21:41 np0005593232 python3.9[240790]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:21:42 np0005593232 python3.9[240943]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:21:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:42.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:21:42.574 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:21:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:21:42.575 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:21:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:21:42.576 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:21:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v789: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:42 np0005593232 python3.9[241096]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 23 04:21:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:21:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:43.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:21:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:44.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v790: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:21:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:45.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:45 np0005593232 python3.9[241301]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:21:45 np0005593232 podman[241425]: 2026-01-23 09:21:45.924175982 +0000 UTC m=+0.064364468 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 23 04:21:46 np0005593232 python3.9[241468]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:21:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:46.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:21:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:21:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:21:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:21:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:21:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:21:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:21:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:21:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:21:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:21:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:21:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:21:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:21:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:21:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:21:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:21:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:21:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:21:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:21:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:21:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:21:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:21:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:21:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v791: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:46 np0005593232 python3.9[241624]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:21:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:21:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:47.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:21:47 np0005593232 python3.9[241777]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:21:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:48.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:48 np0005593232 python3.9[241929]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:21:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v792: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:49 np0005593232 python3.9[242081]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:21:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:49.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:49 np0005593232 python3.9[242234]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:21:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:21:50 np0005593232 python3.9[242386]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:21:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:21:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:50.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:21:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v793: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:50 np0005593232 python3.9[242538]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:21:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:51.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:51 np0005593232 python3.9[242691]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:21:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:52.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v794: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:53.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:54.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v795: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:21:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:55.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:21:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:56.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:21:56 np0005593232 podman[242718]: 2026-01-23 09:21:56.438409639 +0000 UTC m=+0.095195978 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 04:21:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v796: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:57.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:57 np0005593232 python3.9[242873]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 23 04:21:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:21:58.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v797: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:21:58 np0005593232 python3.9[243026]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 23 04:21:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:21:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:21:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:21:59.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:21:59 np0005593232 python3.9[243185]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 23 04:22:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:22:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:00.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v798: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:01.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:01 np0005593232 systemd-logind[808]: New session 51 of user zuul.
Jan 23 04:22:01 np0005593232 systemd[1]: Started Session 51 of User zuul.
Jan 23 04:22:01 np0005593232 systemd[1]: session-51.scope: Deactivated successfully.
Jan 23 04:22:01 np0005593232 systemd-logind[808]: Session 51 logged out. Waiting for processes to exit.
Jan 23 04:22:01 np0005593232 systemd-logind[808]: Removed session 51.
Jan 23 04:22:02 np0005593232 python3.9[243372]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:22:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:02.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:02 np0005593232 python3.9[243493]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769160121.6363513-2659-181059603389274/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:22:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v799: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:03 np0005593232 python3.9[243644]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:22:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:03.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:03 np0005593232 python3.9[243720]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:22:04 np0005593232 python3.9[243870]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:22:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:22:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:04.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:22:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v800: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:04 np0005593232 python3.9[243991]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769160123.7279685-2659-107632222608912/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:22:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:22:05 np0005593232 python3.9[244142]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:22:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:05.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:05 np0005593232 python3.9[244313]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769160124.759296-2659-25219184808095/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:22:06 np0005593232 python3.9[244463]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:22:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:06.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v801: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:06 np0005593232 python3.9[244584]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769160125.8992007-2659-155866570689336/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:22:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:22:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:22:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:22:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:22:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:22:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:22:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:07.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:07 np0005593232 python3.9[244735]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:22:07 np0005593232 python3.9[244856]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769160127.04523-2659-107336995019484/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:22:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:22:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:08.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:22:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v802: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:09.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:09 np0005593232 python3.9[245009]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:22:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:22:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:22:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:10.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:22:10 np0005593232 python3.9[245161]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:22:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v803: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:11.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:11 np0005593232 python3.9[245314]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:22:11 np0005593232 python3.9[245466]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:22:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:22:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:12.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:22:12 np0005593232 python3.9[245589]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769160131.557221-2980-130812577130733/.source _original_basename=.4vcx38gv follow=False checksum=d0f7039eb0db75cfe8dc7424054348ff82b6b421 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 23 04:22:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v804: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:13.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:13 np0005593232 python3.9[245742]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:22:14 np0005593232 python3.9[245894]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:22:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:22:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:14.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:22:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v805: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:14 np0005593232 python3.9[246015]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769160133.9313152-3058-196454674999872/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:22:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:22:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:22:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:15.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:22:15 np0005593232 python3.9[246166]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 23 04:22:16 np0005593232 python3.9[246287]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769160135.1465712-3103-260248358171815/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 23 04:22:16 np0005593232 podman[246288]: 2026-01-23 09:22:16.263816813 +0000 UTC m=+0.091285867 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:22:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:16.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v806: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:17.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:17 np0005593232 python3.9[246459]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 23 04:22:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:18.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v807: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:18 np0005593232 python3.9[246611]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 23 04:22:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:22:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:19.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:22:19 np0005593232 python3[246764]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 23 04:22:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:22:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:22:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:20.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:22:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v808: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:22:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:21.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:22:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:22.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v809: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:23.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:24.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v810: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:22:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:22:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:25.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:22:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:26.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v811: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:22:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:27.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:22:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:28.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v812: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:22:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:29.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:22:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:30.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v813: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:31.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:22:31 np0005593232 podman[246887]: 2026-01-23 09:22:31.909768346 +0000 UTC m=+4.562527609 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 23 04:22:32 np0005593232 podman[246778]: 2026-01-23 09:22:32.082479221 +0000 UTC m=+12.031067315 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 23 04:22:32 np0005593232 podman[246937]: 2026-01-23 09:22:32.192065425 +0000 UTC m=+0.021108397 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 23 04:22:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:32.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v814: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:32 np0005593232 podman[246937]: 2026-01-23 09:22:32.62338148 +0000 UTC m=+0.452424432 container create c88a517ff3a488d4f388aa08289192a3752cfc75f8374d0b05bb7fd93f71ca35 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute_init)
Jan 23 04:22:32 np0005593232 python3[246764]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 23 04:22:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:33.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:22:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:34.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:22:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v815: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:34 np0005593232 python3.9[247127]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:22:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:35.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:36 np0005593232 python3.9[247282]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 23 04:22:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:36.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v816: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:22:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:22:37
Jan 23 04:22:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:22:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:22:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'backups', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'images', 'vms']
Jan 23 04:22:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:22:37 np0005593232 python3.9[247434]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 23 04:22:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:22:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:22:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:22:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:22:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:22:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:22:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:37.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:22:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:22:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:22:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:22:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:22:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:22:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:22:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:22:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:22:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:22:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:38.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v817: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:39 np0005593232 python3[247587]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 23 04:22:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:39.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:39 np0005593232 podman[247624]: 2026-01-23 09:22:39.34068958 +0000 UTC m=+0.048466449 container create 72b3e3988b666c9faed31dcd32e7423610a5c49411ab60248ac62495f45112e5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:22:39 np0005593232 podman[247624]: 2026-01-23 09:22:39.313530773 +0000 UTC m=+0.021307662 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 23 04:22:39 np0005593232 python3[247587]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 23 04:22:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:22:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:22:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:22:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:22:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 23 04:22:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 04:22:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 23 04:22:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 04:22:40 np0005593232 python3.9[247941]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:22:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:40.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:22:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:22:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:22:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:22:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:22:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:22:40 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 3d96efff-be33-4ec4-9945-1a5ca51ba526 does not exist
Jan 23 04:22:40 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 31a8b26c-1f30-41ea-87f4-9bed61aa2c20 does not exist
Jan 23 04:22:40 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7ec23cee-c9af-422d-a9f4-25d26cab2cf4 does not exist
Jan 23 04:22:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:22:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:22:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:22:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:22:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:22:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:22:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v818: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:22:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:22:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 04:22:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 04:22:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:22:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:22:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:22:40 np0005593232 python3.9[248170]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:22:41 np0005593232 podman[248267]: 2026-01-23 09:22:41.159394414 +0000 UTC m=+0.043924353 container create b6ddf477525cdde8dd2184c144b142d29da61011402eb62eff44e6314424ef7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:22:41 np0005593232 systemd[1]: Started libpod-conmon-b6ddf477525cdde8dd2184c144b142d29da61011402eb62eff44e6314424ef7d.scope.
Jan 23 04:22:41 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:22:41 np0005593232 podman[248267]: 2026-01-23 09:22:41.142468582 +0000 UTC m=+0.026998551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:22:41 np0005593232 podman[248267]: 2026-01-23 09:22:41.247026062 +0000 UTC m=+0.131556021 container init b6ddf477525cdde8dd2184c144b142d29da61011402eb62eff44e6314424ef7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:22:41 np0005593232 podman[248267]: 2026-01-23 09:22:41.253679791 +0000 UTC m=+0.138209730 container start b6ddf477525cdde8dd2184c144b142d29da61011402eb62eff44e6314424ef7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:22:41 np0005593232 podman[248267]: 2026-01-23 09:22:41.256859512 +0000 UTC m=+0.141389481 container attach b6ddf477525cdde8dd2184c144b142d29da61011402eb62eff44e6314424ef7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 23 04:22:41 np0005593232 epic_johnson[248309]: 167 167
Jan 23 04:22:41 np0005593232 systemd[1]: libpod-b6ddf477525cdde8dd2184c144b142d29da61011402eb62eff44e6314424ef7d.scope: Deactivated successfully.
Jan 23 04:22:41 np0005593232 podman[248267]: 2026-01-23 09:22:41.261020011 +0000 UTC m=+0.145549950 container died b6ddf477525cdde8dd2184c144b142d29da61011402eb62eff44e6314424ef7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 04:22:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:41.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:41 np0005593232 systemd[1]: var-lib-containers-storage-overlay-45075aed75d9018cab538fa519f2b9bb8a4ca3f898106e5c4260e53465427d9e-merged.mount: Deactivated successfully.
Jan 23 04:22:41 np0005593232 podman[248267]: 2026-01-23 09:22:41.302112342 +0000 UTC m=+0.186642281 container remove b6ddf477525cdde8dd2184c144b142d29da61011402eb62eff44e6314424ef7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_johnson, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 04:22:41 np0005593232 systemd[1]: libpod-conmon-b6ddf477525cdde8dd2184c144b142d29da61011402eb62eff44e6314424ef7d.scope: Deactivated successfully.
Jan 23 04:22:41 np0005593232 podman[248407]: 2026-01-23 09:22:41.456666337 +0000 UTC m=+0.039565829 container create f0b9ad7df5631089839bf83f135ab5a212a83eb1595ceb66cedd060a04c74b0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:22:41 np0005593232 systemd[1]: Started libpod-conmon-f0b9ad7df5631089839bf83f135ab5a212a83eb1595ceb66cedd060a04c74b0c.scope.
Jan 23 04:22:41 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:22:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3469b232ccb172d7d328cac11437b8bc2a0984a6ba5b139d4b6b1fcee15da3ef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:41 np0005593232 podman[248407]: 2026-01-23 09:22:41.439095196 +0000 UTC m=+0.021994708 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:22:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3469b232ccb172d7d328cac11437b8bc2a0984a6ba5b139d4b6b1fcee15da3ef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3469b232ccb172d7d328cac11437b8bc2a0984a6ba5b139d4b6b1fcee15da3ef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3469b232ccb172d7d328cac11437b8bc2a0984a6ba5b139d4b6b1fcee15da3ef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3469b232ccb172d7d328cac11437b8bc2a0984a6ba5b139d4b6b1fcee15da3ef/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:41 np0005593232 podman[248407]: 2026-01-23 09:22:41.547004822 +0000 UTC m=+0.129904324 container init f0b9ad7df5631089839bf83f135ab5a212a83eb1595ceb66cedd060a04c74b0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rubin, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 04:22:41 np0005593232 podman[248407]: 2026-01-23 09:22:41.560111595 +0000 UTC m=+0.143011087 container start f0b9ad7df5631089839bf83f135ab5a212a83eb1595ceb66cedd060a04c74b0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:22:41 np0005593232 podman[248407]: 2026-01-23 09:22:41.563626886 +0000 UTC m=+0.146526428 container attach f0b9ad7df5631089839bf83f135ab5a212a83eb1595ceb66cedd060a04c74b0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rubin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:22:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:22:41 np0005593232 python3.9[248439]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769160161.064543-3391-111060597857044/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 23 04:22:42 np0005593232 python3.9[248525]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 23 04:22:42 np0005593232 systemd[1]: Reloading.
Jan 23 04:22:42 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:22:42 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:22:42 np0005593232 musing_rubin[248445]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:22:42 np0005593232 musing_rubin[248445]: --> relative data size: 1.0
Jan 23 04:22:42 np0005593232 musing_rubin[248445]: --> All data devices are unavailable
Jan 23 04:22:42 np0005593232 podman[248407]: 2026-01-23 09:22:42.474602311 +0000 UTC m=+1.057501803 container died f0b9ad7df5631089839bf83f135ab5a212a83eb1595ceb66cedd060a04c74b0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rubin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:22:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:42.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:42 np0005593232 systemd[1]: libpod-f0b9ad7df5631089839bf83f135ab5a212a83eb1595ceb66cedd060a04c74b0c.scope: Deactivated successfully.
Jan 23 04:22:42 np0005593232 systemd[1]: var-lib-containers-storage-overlay-3469b232ccb172d7d328cac11437b8bc2a0984a6ba5b139d4b6b1fcee15da3ef-merged.mount: Deactivated successfully.
Jan 23 04:22:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:22:42.576 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:22:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:22:42.585 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:22:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:22:42.585 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:22:42 np0005593232 podman[248407]: 2026-01-23 09:22:42.609677381 +0000 UTC m=+1.192576863 container remove f0b9ad7df5631089839bf83f135ab5a212a83eb1595ceb66cedd060a04c74b0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_rubin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 04:22:42 np0005593232 systemd[1]: libpod-conmon-f0b9ad7df5631089839bf83f135ab5a212a83eb1595ceb66cedd060a04c74b0c.scope: Deactivated successfully.
Jan 23 04:22:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v819: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:43 np0005593232 python3.9[248753]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 23 04:22:43 np0005593232 podman[248799]: 2026-01-23 09:22:43.196420835 +0000 UTC m=+0.047978109 container create 1495e60b408a10770a6c8c9ec92484b956f5228ef6b4ee7b42a3a8bca2358280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shockley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:22:43 np0005593232 systemd[1]: Reloading.
Jan 23 04:22:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:22:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:43.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:22:43 np0005593232 podman[248799]: 2026-01-23 09:22:43.174091498 +0000 UTC m=+0.025648792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:22:43 np0005593232 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 23 04:22:43 np0005593232 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 23 04:22:43 np0005593232 systemd[1]: Started libpod-conmon-1495e60b408a10770a6c8c9ec92484b956f5228ef6b4ee7b42a3a8bca2358280.scope.
Jan 23 04:22:43 np0005593232 systemd[1]: Starting nova_compute container...
Jan 23 04:22:43 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:22:43 np0005593232 podman[248799]: 2026-01-23 09:22:43.54572966 +0000 UTC m=+0.397286944 container init 1495e60b408a10770a6c8c9ec92484b956f5228ef6b4ee7b42a3a8bca2358280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:22:43 np0005593232 podman[248799]: 2026-01-23 09:22:43.553830991 +0000 UTC m=+0.405388265 container start 1495e60b408a10770a6c8c9ec92484b956f5228ef6b4ee7b42a3a8bca2358280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shockley, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:22:43 np0005593232 naughty_shockley[248854]: 167 167
Jan 23 04:22:43 np0005593232 systemd[1]: libpod-1495e60b408a10770a6c8c9ec92484b956f5228ef6b4ee7b42a3a8bca2358280.scope: Deactivated successfully.
Jan 23 04:22:43 np0005593232 podman[248799]: 2026-01-23 09:22:43.561125829 +0000 UTC m=+0.412683133 container attach 1495e60b408a10770a6c8c9ec92484b956f5228ef6b4ee7b42a3a8bca2358280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shockley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:22:43 np0005593232 podman[248799]: 2026-01-23 09:22:43.562520318 +0000 UTC m=+0.414077612 container died 1495e60b408a10770a6c8c9ec92484b956f5228ef6b4ee7b42a3a8bca2358280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:22:43 np0005593232 systemd[1]: var-lib-containers-storage-overlay-48e35ad5e1949c0021385732973dde710f9fc87b01875ad8d15350eab4c20db0-merged.mount: Deactivated successfully.
Jan 23 04:22:43 np0005593232 podman[248799]: 2026-01-23 09:22:43.616383844 +0000 UTC m=+0.467941118 container remove 1495e60b408a10770a6c8c9ec92484b956f5228ef6b4ee7b42a3a8bca2358280 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 04:22:43 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:22:43 np0005593232 systemd[1]: libpod-conmon-1495e60b408a10770a6c8c9ec92484b956f5228ef6b4ee7b42a3a8bca2358280.scope: Deactivated successfully.
Jan 23 04:22:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/486621eff1fe750ea0328b3bab4468890d1c85c55745d598364f1b899a177ff8/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/486621eff1fe750ea0328b3bab4468890d1c85c55745d598364f1b899a177ff8/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/486621eff1fe750ea0328b3bab4468890d1c85c55745d598364f1b899a177ff8/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/486621eff1fe750ea0328b3bab4468890d1c85c55745d598364f1b899a177ff8/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/486621eff1fe750ea0328b3bab4468890d1c85c55745d598364f1b899a177ff8/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:43 np0005593232 podman[248856]: 2026-01-23 09:22:43.665478483 +0000 UTC m=+0.134187096 container init 72b3e3988b666c9faed31dcd32e7423610a5c49411ab60248ac62495f45112e5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=edpm)
Jan 23 04:22:43 np0005593232 podman[248856]: 2026-01-23 09:22:43.67416272 +0000 UTC m=+0.142871303 container start 72b3e3988b666c9faed31dcd32e7423610a5c49411ab60248ac62495f45112e5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 04:22:43 np0005593232 podman[248856]: nova_compute
Jan 23 04:22:43 np0005593232 nova_compute[248885]: + sudo -E kolla_set_configs
Jan 23 04:22:43 np0005593232 systemd[1]: Started nova_compute container.
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Validating config file
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Copying service configuration files
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Deleting /etc/ceph
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Creating directory /etc/ceph
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Setting permission for /etc/ceph
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Writing out command to execute
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 23 04:22:43 np0005593232 nova_compute[248885]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 23 04:22:43 np0005593232 nova_compute[248885]: ++ cat /run_command
Jan 23 04:22:43 np0005593232 nova_compute[248885]: + CMD=nova-compute
Jan 23 04:22:43 np0005593232 nova_compute[248885]: + ARGS=
Jan 23 04:22:43 np0005593232 nova_compute[248885]: + sudo kolla_copy_cacerts
Jan 23 04:22:43 np0005593232 podman[248900]: 2026-01-23 09:22:43.792486893 +0000 UTC m=+0.047225797 container create c0cf84950124483921f80860a60353902a474e304eb8e671b8b6913d7cd8e2ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tu, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 04:22:43 np0005593232 nova_compute[248885]: + [[ ! -n '' ]]
Jan 23 04:22:43 np0005593232 nova_compute[248885]: + . kolla_extend_start
Jan 23 04:22:43 np0005593232 nova_compute[248885]: + echo 'Running command: '\''nova-compute'\'''
Jan 23 04:22:43 np0005593232 nova_compute[248885]: + umask 0022
Jan 23 04:22:43 np0005593232 nova_compute[248885]: + exec nova-compute
Jan 23 04:22:43 np0005593232 nova_compute[248885]: Running command: 'nova-compute'
Jan 23 04:22:43 np0005593232 systemd[1]: Started libpod-conmon-c0cf84950124483921f80860a60353902a474e304eb8e671b8b6913d7cd8e2ea.scope.
Jan 23 04:22:43 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:22:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/953a122e67ac90ac3c7136f9d231a498582bab45ff266d61cc80deda456f349f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/953a122e67ac90ac3c7136f9d231a498582bab45ff266d61cc80deda456f349f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/953a122e67ac90ac3c7136f9d231a498582bab45ff266d61cc80deda456f349f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/953a122e67ac90ac3c7136f9d231a498582bab45ff266d61cc80deda456f349f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:43 np0005593232 podman[248900]: 2026-01-23 09:22:43.775990143 +0000 UTC m=+0.030729047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:22:43 np0005593232 podman[248900]: 2026-01-23 09:22:43.887603914 +0000 UTC m=+0.142342828 container init c0cf84950124483921f80860a60353902a474e304eb8e671b8b6913d7cd8e2ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tu, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 04:22:43 np0005593232 podman[248900]: 2026-01-23 09:22:43.895437277 +0000 UTC m=+0.150176181 container start c0cf84950124483921f80860a60353902a474e304eb8e671b8b6913d7cd8e2ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tu, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:22:43 np0005593232 podman[248900]: 2026-01-23 09:22:43.900894013 +0000 UTC m=+0.155632947 container attach c0cf84950124483921f80860a60353902a474e304eb8e671b8b6913d7cd8e2ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:22:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:44.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v820: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]: {
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:    "0": [
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:        {
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:            "devices": [
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:                "/dev/loop3"
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:            ],
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:            "lv_name": "ceph_lv0",
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:            "lv_size": "7511998464",
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:            "name": "ceph_lv0",
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:            "tags": {
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:                "ceph.cluster_name": "ceph",
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:                "ceph.crush_device_class": "",
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:                "ceph.encrypted": "0",
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:                "ceph.osd_id": "0",
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:                "ceph.type": "block",
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:                "ceph.vdo": "0"
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:            },
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:            "type": "block",
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:            "vg_name": "ceph_vg0"
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:        }
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]:    ]
Jan 23 04:22:44 np0005593232 mystifying_tu[248946]: }
Jan 23 04:22:44 np0005593232 systemd[1]: libpod-c0cf84950124483921f80860a60353902a474e304eb8e671b8b6913d7cd8e2ea.scope: Deactivated successfully.
Jan 23 04:22:44 np0005593232 conmon[248946]: conmon c0cf84950124483921f8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c0cf84950124483921f80860a60353902a474e304eb8e671b8b6913d7cd8e2ea.scope/container/memory.events
Jan 23 04:22:44 np0005593232 podman[248900]: 2026-01-23 09:22:44.756091758 +0000 UTC m=+1.010830662 container died c0cf84950124483921f80860a60353902a474e304eb8e671b8b6913d7cd8e2ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tu, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 04:22:44 np0005593232 systemd[1]: var-lib-containers-storage-overlay-953a122e67ac90ac3c7136f9d231a498582bab45ff266d61cc80deda456f349f-merged.mount: Deactivated successfully.
Jan 23 04:22:44 np0005593232 podman[248900]: 2026-01-23 09:22:44.806378972 +0000 UTC m=+1.061117876 container remove c0cf84950124483921f80860a60353902a474e304eb8e671b8b6913d7cd8e2ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 04:22:44 np0005593232 systemd[1]: libpod-conmon-c0cf84950124483921f80860a60353902a474e304eb8e671b8b6913d7cd8e2ea.scope: Deactivated successfully.
Jan 23 04:22:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:45.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:45 np0005593232 podman[249125]: 2026-01-23 09:22:45.403214573 +0000 UTC m=+0.040940158 container create 8a64a998c0c68e0dedbbd9f0a3475ea9eb6d6cffc43e7085f0090e0256d54d94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 23 04:22:45 np0005593232 systemd[1]: Started libpod-conmon-8a64a998c0c68e0dedbbd9f0a3475ea9eb6d6cffc43e7085f0090e0256d54d94.scope.
Jan 23 04:22:45 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:22:45 np0005593232 podman[249125]: 2026-01-23 09:22:45.476637646 +0000 UTC m=+0.114363241 container init 8a64a998c0c68e0dedbbd9f0a3475ea9eb6d6cffc43e7085f0090e0256d54d94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_maxwell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:22:45 np0005593232 podman[249125]: 2026-01-23 09:22:45.383783539 +0000 UTC m=+0.021509134 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:22:45 np0005593232 podman[249125]: 2026-01-23 09:22:45.482844923 +0000 UTC m=+0.120570488 container start 8a64a998c0c68e0dedbbd9f0a3475ea9eb6d6cffc43e7085f0090e0256d54d94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_maxwell, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 04:22:45 np0005593232 podman[249125]: 2026-01-23 09:22:45.486913109 +0000 UTC m=+0.124638684 container attach 8a64a998c0c68e0dedbbd9f0a3475ea9eb6d6cffc43e7085f0090e0256d54d94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_maxwell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:22:45 np0005593232 systemd[1]: libpod-8a64a998c0c68e0dedbbd9f0a3475ea9eb6d6cffc43e7085f0090e0256d54d94.scope: Deactivated successfully.
Jan 23 04:22:45 np0005593232 laughing_maxwell[249171]: 167 167
Jan 23 04:22:45 np0005593232 conmon[249171]: conmon 8a64a998c0c68e0dedbb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8a64a998c0c68e0dedbbd9f0a3475ea9eb6d6cffc43e7085f0090e0256d54d94.scope/container/memory.events
Jan 23 04:22:45 np0005593232 podman[249125]: 2026-01-23 09:22:45.489021849 +0000 UTC m=+0.126747424 container died 8a64a998c0c68e0dedbbd9f0a3475ea9eb6d6cffc43e7085f0090e0256d54d94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_maxwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 04:22:45 np0005593232 systemd[1]: var-lib-containers-storage-overlay-13377975a83a59ce91a779a524286dacd379dca07899ca133632f1f733dd5750-merged.mount: Deactivated successfully.
Jan 23 04:22:45 np0005593232 podman[249125]: 2026-01-23 09:22:45.521628468 +0000 UTC m=+0.159354043 container remove 8a64a998c0c68e0dedbbd9f0a3475ea9eb6d6cffc43e7085f0090e0256d54d94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_maxwell, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 04:22:45 np0005593232 systemd[1]: libpod-conmon-8a64a998c0c68e0dedbbd9f0a3475ea9eb6d6cffc43e7085f0090e0256d54d94.scope: Deactivated successfully.
Jan 23 04:22:45 np0005593232 podman[249196]: 2026-01-23 09:22:45.698368796 +0000 UTC m=+0.044004136 container create 1b766d6e4bebe1c7f7592800c53e15e709baf9d57faf7d766840c28bbfde2d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:22:45 np0005593232 systemd[1]: Started libpod-conmon-1b766d6e4bebe1c7f7592800c53e15e709baf9d57faf7d766840c28bbfde2d2a.scope.
Jan 23 04:22:45 np0005593232 podman[249196]: 2026-01-23 09:22:45.680376123 +0000 UTC m=+0.026011493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:22:45 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:22:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70dbdbccb849b98ef7182cd92f6fee345485cc16d585fba4702c8621458691b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70dbdbccb849b98ef7182cd92f6fee345485cc16d585fba4702c8621458691b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70dbdbccb849b98ef7182cd92f6fee345485cc16d585fba4702c8621458691b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70dbdbccb849b98ef7182cd92f6fee345485cc16d585fba4702c8621458691b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:45 np0005593232 podman[249196]: 2026-01-23 09:22:45.79991228 +0000 UTC m=+0.145547640 container init 1b766d6e4bebe1c7f7592800c53e15e709baf9d57faf7d766840c28bbfde2d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 04:22:45 np0005593232 podman[249196]: 2026-01-23 09:22:45.809823892 +0000 UTC m=+0.155459232 container start 1b766d6e4bebe1c7f7592800c53e15e709baf9d57faf7d766840c28bbfde2d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lovelace, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 04:22:45 np0005593232 podman[249196]: 2026-01-23 09:22:45.813899919 +0000 UTC m=+0.159535279 container attach 1b766d6e4bebe1c7f7592800c53e15e709baf9d57faf7d766840c28bbfde2d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lovelace, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:22:46 np0005593232 nova_compute[248885]: 2026-01-23 09:22:46.144 248891 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 23 04:22:46 np0005593232 nova_compute[248885]: 2026-01-23 09:22:46.146 248891 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 23 04:22:46 np0005593232 nova_compute[248885]: 2026-01-23 09:22:46.146 248891 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 23 04:22:46 np0005593232 nova_compute[248885]: 2026-01-23 09:22:46.146 248891 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 23 04:22:46 np0005593232 python3.9[249343]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:22:46 np0005593232 nova_compute[248885]: 2026-01-23 09:22:46.306 248891 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:22:46 np0005593232 nova_compute[248885]: 2026-01-23 09:22:46.330 248891 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:22:46 np0005593232 nova_compute[248885]: 2026-01-23 09:22:46.331 248891 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 23 04:22:46 np0005593232 podman[249371]: 2026-01-23 09:22:46.406761167 +0000 UTC m=+0.052944580 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 23 04:22:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:22:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:22:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:22:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:22:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:22:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:22:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:22:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:22:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:22:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:22:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:22:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:22:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:22:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:22:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:22:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:22:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:22:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:22:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:22:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:22:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:22:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:22:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:22:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:46.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:46 np0005593232 tender_lovelace[249257]: {
Jan 23 04:22:46 np0005593232 tender_lovelace[249257]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:22:46 np0005593232 tender_lovelace[249257]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:22:46 np0005593232 tender_lovelace[249257]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:22:46 np0005593232 tender_lovelace[249257]:        "osd_id": 0,
Jan 23 04:22:46 np0005593232 tender_lovelace[249257]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:22:46 np0005593232 tender_lovelace[249257]:        "type": "bluestore"
Jan 23 04:22:46 np0005593232 tender_lovelace[249257]:    }
Jan 23 04:22:46 np0005593232 tender_lovelace[249257]: }
Jan 23 04:22:46 np0005593232 systemd[1]: libpod-1b766d6e4bebe1c7f7592800c53e15e709baf9d57faf7d766840c28bbfde2d2a.scope: Deactivated successfully.
Jan 23 04:22:46 np0005593232 podman[249196]: 2026-01-23 09:22:46.651275376 +0000 UTC m=+0.996910716 container died 1b766d6e4bebe1c7f7592800c53e15e709baf9d57faf7d766840c28bbfde2d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lovelace, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 04:22:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:22:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v821: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:46 np0005593232 systemd[1]: var-lib-containers-storage-overlay-70dbdbccb849b98ef7182cd92f6fee345485cc16d585fba4702c8621458691b2-merged.mount: Deactivated successfully.
Jan 23 04:22:46 np0005593232 podman[249196]: 2026-01-23 09:22:46.711589355 +0000 UTC m=+1.057224725 container remove 1b766d6e4bebe1c7f7592800c53e15e709baf9d57faf7d766840c28bbfde2d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lovelace, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:22:46 np0005593232 systemd[1]: libpod-conmon-1b766d6e4bebe1c7f7592800c53e15e709baf9d57faf7d766840c28bbfde2d2a.scope: Deactivated successfully.
Jan 23 04:22:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.040 248891 INFO nova.virt.driver [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 23 04:22:47 np0005593232 python3.9[249545]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.198 248891 INFO nova.compute.provider_config [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.210 248891 DEBUG oslo_concurrency.lockutils [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.211 248891 DEBUG oslo_concurrency.lockutils [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.211 248891 DEBUG oslo_concurrency.lockutils [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.211 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.212 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.212 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.212 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.212 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.212 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.213 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.213 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.213 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.213 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.213 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.213 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.214 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.214 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.214 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.214 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.214 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.215 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.215 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.215 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.215 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.215 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.216 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.216 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.216 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.216 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.216 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.217 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.217 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.217 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.217 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.217 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.218 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.218 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.218 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.218 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.219 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.219 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.219 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.219 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.219 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.220 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.220 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.220 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.220 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.220 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.221 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.221 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.221 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.221 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.221 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.222 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.222 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.222 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.222 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.222 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.223 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.223 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.223 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.223 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.223 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.224 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.224 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.224 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.224 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.224 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.224 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.225 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.225 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.225 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.225 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.225 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.226 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.226 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.226 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.226 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.226 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.227 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.227 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.227 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.227 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.227 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.228 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.228 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.228 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.228 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.228 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.228 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.229 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.229 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.229 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.229 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.229 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.230 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.230 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.230 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.230 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.230 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.231 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.231 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.231 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.231 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.231 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.231 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.232 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.232 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.232 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.232 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.232 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.233 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.233 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.233 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.233 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.233 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.234 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.234 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.234 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.234 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.234 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.235 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.235 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.235 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.235 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.235 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.235 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.236 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.236 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.236 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.236 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.236 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.237 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.237 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.237 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.237 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.237 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.238 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.238 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.238 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.238 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.238 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.239 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.239 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.239 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.239 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.239 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.239 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.240 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.240 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.240 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.240 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.240 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.241 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.241 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.241 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.241 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.241 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.242 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.242 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.242 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.242 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.242 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.243 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.243 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.243 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.243 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.243 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.244 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.244 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.244 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.244 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.244 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.245 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.245 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.245 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.245 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.245 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.246 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.246 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.246 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.246 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.247 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.247 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.247 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.247 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.247 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.248 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.248 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.248 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.248 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.249 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.249 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.249 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.249 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.249 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.250 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.250 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.250 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.250 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.251 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.251 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.251 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.251 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.251 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.252 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.252 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.252 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.252 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.252 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.253 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.253 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.253 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.253 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.253 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.254 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.254 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.254 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.254 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.255 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.255 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.255 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.255 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.255 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.255 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.256 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.256 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.256 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.256 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.256 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.257 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.257 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.257 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.257 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.257 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.258 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.258 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.258 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.258 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.259 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.259 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.259 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.259 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.259 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.260 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.260 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.260 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.260 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.260 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.260 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.261 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.261 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.261 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.261 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.261 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.262 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.262 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.262 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.262 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.262 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.263 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.263 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.263 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.263 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.263 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.264 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.264 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.264 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.264 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.264 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.265 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.265 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.265 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.265 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.265 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.266 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.266 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.266 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.266 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.266 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.267 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.267 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.267 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.267 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.268 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.268 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.268 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.268 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.269 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.269 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.269 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.269 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:47.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.269 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.270 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.270 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.270 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.270 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.270 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.271 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.271 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.271 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.271 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.271 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.272 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.272 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.272 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.272 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.272 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.273 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.273 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.273 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.273 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.273 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.274 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.274 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.274 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.274 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.274 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.275 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.275 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.276 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.276 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.276 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.277 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.277 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.277 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.277 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.278 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.278 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.278 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.278 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.278 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.279 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.279 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.279 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.279 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.279 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.280 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.280 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.280 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.280 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.281 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.281 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.281 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.281 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.281 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.282 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.282 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.282 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.282 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.282 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.283 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.283 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.283 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.283 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.283 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.283 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.284 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.284 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.284 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.284 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.285 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.285 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.285 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.285 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.285 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.286 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.286 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.286 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.286 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.286 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.287 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.287 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.287 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.287 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.287 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.287 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.288 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.288 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.288 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.288 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.288 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.289 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.289 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.289 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.289 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.289 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.289 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.290 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.290 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.290 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.290 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.291 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.291 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.291 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.291 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.291 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.291 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.292 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.292 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.292 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.292 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.292 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.293 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.293 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.293 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.293 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.293 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.294 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.294 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.294 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.294 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.294 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.295 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.295 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.295 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.295 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.295 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.295 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.296 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.296 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.296 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.296 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.297 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.297 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.297 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.297 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.297 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.297 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.298 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.298 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.298 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.298 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.298 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.299 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.299 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.299 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.299 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.299 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.300 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.300 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.300 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.300 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.300 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.300 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.301 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.301 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.301 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.301 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.301 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.301 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.302 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.302 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.302 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.302 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.302 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.303 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.303 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.303 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.303 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.303 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.304 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.304 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.304 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.304 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.305 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.305 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.305 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.305 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.305 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.306 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.306 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.306 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.306 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.307 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.307 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.307 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.307 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.307 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.308 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.308 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.308 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.308 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.309 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.309 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.309 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.310 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.310 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.310 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.310 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.311 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.311 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.311 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.311 248891 WARNING oslo_config.cfg [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 23 04:22:47 np0005593232 nova_compute[248885]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 23 04:22:47 np0005593232 nova_compute[248885]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 23 04:22:47 np0005593232 nova_compute[248885]: and ``live_migration_inbound_addr`` respectively.
Jan 23 04:22:47 np0005593232 nova_compute[248885]: ).  Its value may be silently ignored in the future.#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.312 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.312 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.312 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.312 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.312 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.313 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.313 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.313 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.313 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.314 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.314 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.314 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.314 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.314 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.315 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.315 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.315 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.315 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.316 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.rbd_secret_uuid        = e1533653-0a5a-584c-b34b-8689f0d32e77 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.316 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.316 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.316 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.316 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.317 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.317 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.317 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.317 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.317 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.318 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.318 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.318 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.318 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.318 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.319 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.319 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.319 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.319 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.319 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.320 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.320 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.320 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.320 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.321 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.321 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.321 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.321 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.321 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.322 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.322 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.322 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.322 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.322 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.323 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.323 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.323 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.323 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.323 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.323 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.324 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.324 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.324 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.324 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.324 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.325 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.325 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.325 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.325 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.325 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.325 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.326 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.326 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.326 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.326 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.326 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.327 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.327 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.327 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.327 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.327 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.328 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.328 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.328 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.328 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.328 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.329 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.329 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.329 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.329 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.329 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.330 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.330 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.330 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.330 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.330 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.330 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.331 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.331 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.331 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.331 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.331 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.332 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.332 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.332 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.332 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.332 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.333 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.333 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.333 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.333 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.333 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.334 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.334 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.334 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.334 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.334 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.334 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.335 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.335 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.335 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.335 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.335 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.336 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.336 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.336 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.336 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.336 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.337 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.337 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.337 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.337 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.337 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.338 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.338 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.338 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.338 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.338 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.339 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.339 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.339 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.339 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.340 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.340 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.340 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.340 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.340 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.341 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.341 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.341 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.341 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.341 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.342 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.342 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.342 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.342 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.342 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.343 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.343 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.343 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.343 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.343 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.344 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.344 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.344 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.344 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.344 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.345 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.345 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.345 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.345 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.345 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.346 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.346 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.346 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.346 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.346 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.347 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.347 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.347 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.347 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.347 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.348 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.348 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.348 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.348 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.348 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.349 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.349 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.349 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.349 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.349 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.349 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.350 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.350 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.350 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.350 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.350 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.351 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.351 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.351 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.351 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.352 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.352 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.352 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.352 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.352 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.353 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.353 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.353 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.353 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.353 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.353 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.354 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.354 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.354 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.354 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.355 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.355 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.355 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.355 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.356 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.356 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.356 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.356 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.356 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.357 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.357 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.357 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.357 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.357 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.357 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.358 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.358 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.358 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.358 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.358 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.359 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.359 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.359 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.359 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.359 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.360 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.360 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.360 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.360 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.361 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.361 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.361 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.361 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.361 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.362 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.362 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.362 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.362 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.363 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.363 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.363 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.363 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.363 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.364 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.364 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.364 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.364 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.365 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.365 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.365 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.365 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.365 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.366 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.366 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.366 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.366 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.367 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.367 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.367 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.367 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.367 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.368 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.368 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.368 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.368 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.369 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.369 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.369 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.369 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.370 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.370 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.370 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.370 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.370 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.371 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.371 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.371 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.371 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.371 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.372 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.372 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.372 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.372 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.372 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.373 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.373 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.373 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.373 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.373 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.374 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.374 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.374 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.374 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.374 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.375 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.375 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.375 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.375 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.375 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.376 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.376 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.376 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.376 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.376 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.377 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.377 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.377 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.377 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.377 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.378 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.378 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.378 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.378 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.379 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.379 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.379 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.379 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.379 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.380 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.380 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.380 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.380 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.381 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.381 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.381 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.381 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.381 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.382 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.382 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.382 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.382 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.382 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.383 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.383 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.383 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.383 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.383 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.384 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.384 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.384 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.384 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.384 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.385 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.385 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.385 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.385 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.385 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.385 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.386 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.386 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.386 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.386 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.386 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.387 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.387 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.387 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.387 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.387 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.387 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.388 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.388 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.388 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.388 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.388 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.389 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.389 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.389 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.389 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.390 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.390 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.390 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.390 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.391 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.391 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.391 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.391 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.392 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.392 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.392 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.392 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.393 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.393 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.393 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.393 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.394 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.394 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.394 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.394 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.395 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.395 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.395 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.395 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.396 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.396 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.396 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.396 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.396 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.397 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.397 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.397 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.397 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.398 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.398 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.398 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.398 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.399 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.399 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.399 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.399 248891 DEBUG oslo_service.service [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.401 248891 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.415 248891 DEBUG nova.virt.libvirt.host [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.416 248891 DEBUG nova.virt.libvirt.host [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.416 248891 DEBUG nova.virt.libvirt.host [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.417 248891 DEBUG nova.virt.libvirt.host [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 23 04:22:47 np0005593232 systemd[1]: Starting libvirt QEMU daemon...
Jan 23 04:22:47 np0005593232 systemd[1]: Started libvirt QEMU daemon.
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.497 248891 DEBUG nova.virt.libvirt.host [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7ff3a6abd730> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.500 248891 DEBUG nova.virt.libvirt.host [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7ff3a6abd730> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.501 248891 INFO nova.virt.libvirt.driver [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.515 248891 WARNING nova.virt.libvirt.driver [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 23 04:22:47 np0005593232 nova_compute[248885]: 2026-01-23 09:22:47.515 248891 DEBUG nova.virt.libvirt.volume.mount [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Jan 23 04:22:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:22:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:22:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:22:47 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 9caa4208-4f54-491e-85c0-ea894ecb2c8a does not exist
Jan 23 04:22:47 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 6c3d291f-7d9e-4dc8-b877-93f9a0f3bc31 does not exist
Jan 23 04:22:47 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a921e486-e1be-4c40-bb6c-353288e22f36 does not exist
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 2026-01-23 09:22:48.392 248891 INFO nova.virt.libvirt.host [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Libvirt host capabilities <capabilities>
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <host>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <uuid>ebc95db9-b389-4cf6-b3a5-7ae0afc322d2</uuid>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <cpu>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <arch>x86_64</arch>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model>EPYC-Rome-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <vendor>AMD</vendor>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <microcode version='16777317'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <signature family='23' model='49' stepping='0'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature name='x2apic'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature name='tsc-deadline'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature name='osxsave'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature name='hypervisor'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature name='tsc_adjust'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature name='spec-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature name='stibp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature name='arch-capabilities'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature name='ssbd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature name='cmp_legacy'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature name='topoext'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature name='virt-ssbd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature name='lbrv'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature name='tsc-scale'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature name='vmcb-clean'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature name='pause-filter'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature name='pfthreshold'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature name='svme-addr-chk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature name='rdctl-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature name='skip-l1dfl-vmentry'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature name='mds-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature name='pschange-mc-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <pages unit='KiB' size='4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <pages unit='KiB' size='2048'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <pages unit='KiB' size='1048576'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </cpu>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <power_management>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <suspend_mem/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </power_management>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <iommu support='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <migration_features>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <live/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <uri_transports>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <uri_transport>tcp</uri_transport>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <uri_transport>rdma</uri_transport>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </uri_transports>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </migration_features>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <topology>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <cells num='1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <cell id='0'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:          <memory unit='KiB'>7864316</memory>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:          <pages unit='KiB' size='4'>1966079</pages>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:          <pages unit='KiB' size='2048'>0</pages>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:          <distances>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:            <sibling id='0' value='10'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:          </distances>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:          <cpus num='8'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:          </cpus>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        </cell>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </cells>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </topology>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <cache>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </cache>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <secmodel>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model>selinux</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <doi>0</doi>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </secmodel>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <secmodel>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model>dac</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <doi>0</doi>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </secmodel>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  </host>
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <guest>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <os_type>hvm</os_type>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <arch name='i686'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <wordsize>32</wordsize>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <domain type='qemu'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <domain type='kvm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </arch>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <features>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <pae/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <nonpae/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <acpi default='on' toggle='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <apic default='on' toggle='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <cpuselection/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <deviceboot/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <disksnapshot default='on' toggle='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <externalSnapshot/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </features>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  </guest>
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <guest>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <os_type>hvm</os_type>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <arch name='x86_64'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <wordsize>64</wordsize>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <domain type='qemu'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <domain type='kvm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </arch>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <features>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <acpi default='on' toggle='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <apic default='on' toggle='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <cpuselection/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <deviceboot/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <disksnapshot default='on' toggle='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <externalSnapshot/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </features>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  </guest>
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 
Jan 23 04:22:48 np0005593232 nova_compute[248885]: </capabilities>
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 2026-01-23 09:22:48.399 248891 DEBUG nova.virt.libvirt.host [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 2026-01-23 09:22:48.428 248891 DEBUG nova.virt.libvirt.host [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 23 04:22:48 np0005593232 nova_compute[248885]: <domainCapabilities>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <path>/usr/libexec/qemu-kvm</path>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <domain>kvm</domain>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <arch>i686</arch>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <vcpu max='240'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <iothreads supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <os supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <enum name='firmware'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <loader supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='type'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>rom</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>pflash</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='readonly'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>yes</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>no</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='secure'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>no</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </loader>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  </os>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <cpu>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <mode name='host-passthrough' supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='hostPassthroughMigratable'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>on</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>off</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </mode>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <mode name='maximum' supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='maximumMigratable'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>on</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>off</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </mode>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <mode name='host-model' supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <vendor>AMD</vendor>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='x2apic'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='tsc-deadline'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='hypervisor'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='tsc_adjust'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='spec-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='stibp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='ssbd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='cmp_legacy'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='overflow-recov'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='succor'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='ibrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='amd-ssbd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='virt-ssbd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='lbrv'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='tsc-scale'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='vmcb-clean'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='flushbyasid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='pause-filter'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='pfthreshold'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='svme-addr-chk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='disable' name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </mode>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <mode name='custom' supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-noTSX'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server-v5'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='ClearwaterForest'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bhi-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cmpccxadd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ddpd-u'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='intel-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='lam'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchiti'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sha512'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sm3'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sm4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='ClearwaterForest-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bhi-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cmpccxadd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ddpd-u'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='intel-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='lam'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchiti'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sha512'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sm3'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sm4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cooperlake'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cooperlake-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cooperlake-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Denverton'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mpx'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Denverton-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mpx'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Denverton-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Denverton-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Dhyana-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Genoa'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='auto-ibrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Genoa-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='auto-ibrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Genoa-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='auto-ibrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fs-gs-base-ns'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='perfmon-v2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Milan'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Milan-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Milan-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Milan-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Rome'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Rome-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Rome-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Rome-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Turin'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='auto-ibrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vp2intersect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fs-gs-base-ns'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibpb-brtype'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='perfmon-v2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbpb'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='srso-user-kernel-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Turin-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='auto-ibrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vp2intersect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fs-gs-base-ns'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibpb-brtype'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='perfmon-v2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbpb'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='srso-user-kernel-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-v5'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='GraniteRapids'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchiti'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='GraniteRapids-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchiti'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='GraniteRapids-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10-128'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10-256'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10-512'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchiti'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='GraniteRapids-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10-128'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10-256'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10-512'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchiti'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-noTSX'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-noTSX'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v5'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v6'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v7'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='IvyBridge'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='IvyBridge-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='IvyBridge-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='IvyBridge-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='KnightsMill'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-4fmaps'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-4vnniw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512er'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512pf'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='KnightsMill-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-4fmaps'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-4vnniw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512er'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512pf'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Opteron_G4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fma4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xop'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Opteron_G4-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fma4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xop'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Opteron_G5'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fma4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tbm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xop'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Opteron_G5-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fma4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tbm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xop'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SapphireRapids'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SapphireRapids-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SapphireRapids-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SapphireRapids-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SapphireRapids-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SierraForest'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cmpccxadd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SierraForest-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cmpccxadd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SierraForest-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cmpccxadd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='intel-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='lam'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SierraForest-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cmpccxadd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='intel-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='lam'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-v5'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Snowridge'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='core-capability'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mpx'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='split-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Snowridge-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='core-capability'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mpx'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='split-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Snowridge-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='core-capability'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='split-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Snowridge-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='core-capability'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='split-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Snowridge-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='athlon'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnow'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnowext'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='athlon-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnow'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnowext'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='core2duo'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='core2duo-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='coreduo'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='coreduo-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='n270'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='n270-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='phenom'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnow'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnowext'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='phenom-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnow'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnowext'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </mode>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  </cpu>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <memoryBacking supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <enum name='sourceType'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <value>file</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <value>anonymous</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <value>memfd</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  </memoryBacking>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <devices>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <disk supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='diskDevice'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>disk</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>cdrom</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>floppy</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>lun</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='bus'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>ide</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>fdc</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>scsi</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>usb</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>sata</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='model'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio-transitional</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio-non-transitional</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </disk>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <graphics supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='type'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>vnc</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>egl-headless</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>dbus</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </graphics>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <video supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='modelType'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>vga</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>cirrus</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>none</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>bochs</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>ramfb</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </video>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <hostdev supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='mode'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>subsystem</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='startupPolicy'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>default</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>mandatory</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>requisite</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>optional</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='subsysType'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>usb</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>pci</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>scsi</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='capsType'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='pciBackend'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </hostdev>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <rng supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='model'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio-transitional</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio-non-transitional</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='backendModel'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>random</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>egd</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>builtin</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </rng>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <filesystem supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='driverType'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>path</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>handle</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtiofs</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </filesystem>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <tpm supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='model'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>tpm-tis</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>tpm-crb</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='backendModel'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>emulator</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>external</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='backendVersion'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>2.0</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </tpm>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <redirdev supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='bus'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>usb</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </redirdev>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <channel supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='type'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>pty</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>unix</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </channel>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <crypto supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='model'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='type'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>qemu</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='backendModel'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>builtin</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </crypto>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <interface supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='backendType'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>default</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>passt</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </interface>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <panic supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='model'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>isa</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>hyperv</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </panic>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <console supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='type'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>null</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>vc</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>pty</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>dev</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>file</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>pipe</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>stdio</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>udp</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>tcp</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>unix</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>qemu-vdagent</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>dbus</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </console>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  </devices>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <features>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <gic supported='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <vmcoreinfo supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <genid supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <backingStoreInput supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <backup supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <async-teardown supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <s390-pv supported='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <ps2 supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <tdx supported='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <sev supported='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <sgx supported='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <hyperv supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='features'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>relaxed</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>vapic</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>spinlocks</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>vpindex</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>runtime</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>synic</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>stimer</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>reset</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>vendor_id</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>frequencies</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>reenlightenment</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>tlbflush</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>ipi</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>avic</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>emsr_bitmap</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>xmm_input</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <defaults>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <spinlocks>4095</spinlocks>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <stimer_direct>on</stimer_direct>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <tlbflush_direct>on</tlbflush_direct>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <tlbflush_extended>on</tlbflush_extended>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </defaults>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </hyperv>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <launchSecurity supported='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  </features>
Jan 23 04:22:48 np0005593232 nova_compute[248885]: </domainCapabilities>
Jan 23 04:22:48 np0005593232 nova_compute[248885]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 2026-01-23 09:22:48.439 248891 DEBUG nova.virt.libvirt.host [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 23 04:22:48 np0005593232 nova_compute[248885]: <domainCapabilities>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <path>/usr/libexec/qemu-kvm</path>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <domain>kvm</domain>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <arch>i686</arch>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <vcpu max='4096'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <iothreads supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <os supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <enum name='firmware'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <loader supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='type'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>rom</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>pflash</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='readonly'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>yes</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>no</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='secure'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>no</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </loader>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  </os>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <cpu>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <mode name='host-passthrough' supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='hostPassthroughMigratable'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>on</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>off</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </mode>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <mode name='maximum' supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='maximumMigratable'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>on</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>off</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </mode>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <mode name='host-model' supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <vendor>AMD</vendor>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='x2apic'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='tsc-deadline'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='hypervisor'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='tsc_adjust'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='spec-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='stibp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='ssbd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='cmp_legacy'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='overflow-recov'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='succor'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='ibrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='amd-ssbd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='virt-ssbd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='lbrv'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='tsc-scale'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='vmcb-clean'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='flushbyasid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='pause-filter'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='pfthreshold'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='svme-addr-chk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='disable' name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </mode>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <mode name='custom' supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-noTSX'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:48.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server-v5'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='ClearwaterForest'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bhi-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cmpccxadd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ddpd-u'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='intel-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='lam'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchiti'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sha512'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sm3'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sm4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='ClearwaterForest-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bhi-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cmpccxadd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ddpd-u'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='intel-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='lam'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchiti'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sha512'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sm3'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sm4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cooperlake'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cooperlake-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cooperlake-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Denverton'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mpx'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Denverton-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mpx'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Denverton-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Denverton-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Dhyana-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Genoa'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='auto-ibrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Genoa-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='auto-ibrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Genoa-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='auto-ibrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fs-gs-base-ns'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='perfmon-v2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Milan'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Milan-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Milan-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Milan-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Rome'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Rome-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Rome-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Rome-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Turin'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='auto-ibrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vp2intersect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fs-gs-base-ns'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibpb-brtype'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='perfmon-v2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbpb'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='srso-user-kernel-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Turin-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='auto-ibrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vp2intersect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fs-gs-base-ns'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibpb-brtype'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='perfmon-v2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbpb'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='srso-user-kernel-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-v5'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='GraniteRapids'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchiti'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='GraniteRapids-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchiti'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='GraniteRapids-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10-128'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10-256'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10-512'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchiti'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='GraniteRapids-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10-128'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10-256'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10-512'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchiti'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-noTSX'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-noTSX'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v5'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v6'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v7'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='IvyBridge'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='IvyBridge-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='IvyBridge-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='IvyBridge-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='KnightsMill'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-4fmaps'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-4vnniw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512er'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512pf'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='KnightsMill-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-4fmaps'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-4vnniw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512er'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512pf'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Opteron_G4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fma4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xop'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Opteron_G4-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fma4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xop'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Opteron_G5'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fma4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tbm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xop'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Opteron_G5-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fma4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tbm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xop'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SapphireRapids'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SapphireRapids-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SapphireRapids-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SapphireRapids-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SapphireRapids-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SierraForest'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cmpccxadd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SierraForest-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cmpccxadd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SierraForest-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cmpccxadd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='intel-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='lam'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SierraForest-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cmpccxadd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='intel-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='lam'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-v5'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Snowridge'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='core-capability'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mpx'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='split-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Snowridge-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='core-capability'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mpx'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='split-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Snowridge-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='core-capability'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='split-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Snowridge-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='core-capability'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='split-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Snowridge-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='athlon'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnow'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnowext'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='athlon-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnow'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnowext'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='core2duo'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='core2duo-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='coreduo'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='coreduo-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='n270'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='n270-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='phenom'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnow'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnowext'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='phenom-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnow'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnowext'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </mode>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  </cpu>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <memoryBacking supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <enum name='sourceType'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <value>file</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <value>anonymous</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <value>memfd</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  </memoryBacking>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <devices>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <disk supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='diskDevice'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>disk</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>cdrom</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>floppy</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>lun</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='bus'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>fdc</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>scsi</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>usb</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>sata</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='model'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio-transitional</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio-non-transitional</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </disk>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <graphics supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='type'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>vnc</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>egl-headless</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>dbus</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </graphics>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <video supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='modelType'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>vga</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>cirrus</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>none</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>bochs</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>ramfb</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </video>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <hostdev supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='mode'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>subsystem</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='startupPolicy'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>default</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>mandatory</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>requisite</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>optional</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='subsysType'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>usb</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>pci</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>scsi</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='capsType'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='pciBackend'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </hostdev>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <rng supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='model'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio-transitional</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio-non-transitional</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='backendModel'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>random</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>egd</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>builtin</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </rng>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <filesystem supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='driverType'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>path</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>handle</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtiofs</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </filesystem>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <tpm supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='model'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>tpm-tis</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>tpm-crb</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='backendModel'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>emulator</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>external</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='backendVersion'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>2.0</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </tpm>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <redirdev supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='bus'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>usb</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </redirdev>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <channel supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='type'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>pty</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>unix</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </channel>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <crypto supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='model'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='type'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>qemu</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='backendModel'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>builtin</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </crypto>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <interface supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='backendType'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>default</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>passt</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </interface>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <panic supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='model'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>isa</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>hyperv</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </panic>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <console supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='type'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>null</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>vc</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>pty</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>dev</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>file</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>pipe</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>stdio</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>udp</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>tcp</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>unix</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>qemu-vdagent</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>dbus</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </console>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  </devices>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <features>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <gic supported='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <vmcoreinfo supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <genid supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <backingStoreInput supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <backup supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <async-teardown supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <s390-pv supported='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <ps2 supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <tdx supported='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <sev supported='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <sgx supported='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <hyperv supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='features'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>relaxed</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>vapic</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>spinlocks</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>vpindex</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>runtime</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>synic</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>stimer</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>reset</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>vendor_id</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>frequencies</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>reenlightenment</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>tlbflush</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>ipi</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>avic</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>emsr_bitmap</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>xmm_input</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <defaults>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <spinlocks>4095</spinlocks>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <stimer_direct>on</stimer_direct>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <tlbflush_direct>on</tlbflush_direct>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <tlbflush_extended>on</tlbflush_extended>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </defaults>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </hyperv>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <launchSecurity supported='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  </features>
Jan 23 04:22:48 np0005593232 nova_compute[248885]: </domainCapabilities>
Jan 23 04:22:48 np0005593232 nova_compute[248885]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 2026-01-23 09:22:48.492 248891 DEBUG nova.virt.libvirt.host [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 2026-01-23 09:22:48.496 248891 DEBUG nova.virt.libvirt.host [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 23 04:22:48 np0005593232 nova_compute[248885]: <domainCapabilities>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <path>/usr/libexec/qemu-kvm</path>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <domain>kvm</domain>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <arch>x86_64</arch>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <vcpu max='4096'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <iothreads supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <os supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <enum name='firmware'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <value>efi</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <loader supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='type'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>rom</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>pflash</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='readonly'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>yes</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>no</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='secure'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>yes</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>no</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </loader>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  </os>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <cpu>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <mode name='host-passthrough' supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='hostPassthroughMigratable'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>on</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>off</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </mode>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <mode name='maximum' supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='maximumMigratable'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>on</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>off</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </mode>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <mode name='host-model' supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <vendor>AMD</vendor>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='x2apic'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='tsc-deadline'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='hypervisor'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='tsc_adjust'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='spec-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='stibp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='ssbd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='cmp_legacy'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='overflow-recov'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='succor'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='ibrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='amd-ssbd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='virt-ssbd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='lbrv'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='tsc-scale'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='vmcb-clean'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='flushbyasid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='pause-filter'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='pfthreshold'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='svme-addr-chk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='disable' name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </mode>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <mode name='custom' supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-noTSX'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server-v5'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='ClearwaterForest'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bhi-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cmpccxadd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ddpd-u'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='intel-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='lam'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchiti'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sha512'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sm3'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sm4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='ClearwaterForest-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bhi-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cmpccxadd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ddpd-u'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='intel-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='lam'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchiti'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sha512'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sm3'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sm4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cooperlake'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cooperlake-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cooperlake-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Denverton'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mpx'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Denverton-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mpx'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Denverton-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Denverton-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Dhyana-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Genoa'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='auto-ibrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Genoa-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='auto-ibrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Genoa-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='auto-ibrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fs-gs-base-ns'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='perfmon-v2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Milan'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Milan-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Milan-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Milan-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Rome'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Rome-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Rome-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Rome-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Turin'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='auto-ibrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vp2intersect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fs-gs-base-ns'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibpb-brtype'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='perfmon-v2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbpb'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='srso-user-kernel-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Turin-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='auto-ibrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vp2intersect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fs-gs-base-ns'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibpb-brtype'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='perfmon-v2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbpb'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='srso-user-kernel-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-v5'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='GraniteRapids'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchiti'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='GraniteRapids-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchiti'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='GraniteRapids-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10-128'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10-256'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10-512'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchiti'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='GraniteRapids-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10-128'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10-256'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10-512'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchiti'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-noTSX'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-noTSX'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v5'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v6'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v7'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='IvyBridge'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='IvyBridge-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='IvyBridge-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='IvyBridge-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='KnightsMill'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-4fmaps'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-4vnniw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512er'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512pf'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='KnightsMill-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-4fmaps'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-4vnniw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512er'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512pf'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Opteron_G4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fma4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xop'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Opteron_G4-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fma4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xop'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Opteron_G5'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fma4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tbm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xop'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Opteron_G5-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fma4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tbm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xop'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SapphireRapids'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SapphireRapids-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SapphireRapids-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SapphireRapids-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SapphireRapids-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SierraForest'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cmpccxadd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SierraForest-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cmpccxadd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SierraForest-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cmpccxadd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='intel-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='lam'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SierraForest-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cmpccxadd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='intel-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='lam'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-v5'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Snowridge'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='core-capability'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mpx'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='split-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Snowridge-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='core-capability'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mpx'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='split-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Snowridge-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='core-capability'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='split-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Snowridge-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='core-capability'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='split-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Snowridge-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='athlon'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnow'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnowext'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='athlon-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnow'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnowext'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='core2duo'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='core2duo-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='coreduo'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='coreduo-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='n270'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='n270-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='phenom'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnow'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnowext'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='phenom-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnow'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnowext'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </mode>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  </cpu>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <memoryBacking supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <enum name='sourceType'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <value>file</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <value>anonymous</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <value>memfd</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  </memoryBacking>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <devices>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <disk supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='diskDevice'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>disk</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>cdrom</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>floppy</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>lun</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='bus'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>fdc</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>scsi</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>usb</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>sata</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='model'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio-transitional</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio-non-transitional</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </disk>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <graphics supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='type'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>vnc</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>egl-headless</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>dbus</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </graphics>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <video supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='modelType'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>vga</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>cirrus</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>none</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>bochs</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>ramfb</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </video>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <hostdev supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='mode'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>subsystem</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='startupPolicy'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>default</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>mandatory</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>requisite</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>optional</value>
Jan 23 04:22:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='subsysType'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>usb</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>pci</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>scsi</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='capsType'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='pciBackend'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </hostdev>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <rng supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='model'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio-transitional</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio-non-transitional</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='backendModel'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>random</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>egd</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>builtin</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </rng>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <filesystem supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='driverType'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>path</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>handle</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtiofs</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </filesystem>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <tpm supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='model'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>tpm-tis</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>tpm-crb</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='backendModel'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>emulator</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>external</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='backendVersion'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>2.0</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </tpm>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <redirdev supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='bus'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>usb</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </redirdev>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <channel supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='type'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>pty</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>unix</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </channel>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <crypto supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='model'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='type'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>qemu</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='backendModel'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>builtin</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </crypto>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <interface supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='backendType'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>default</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>passt</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </interface>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <panic supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='model'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>isa</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>hyperv</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </panic>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <console supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='type'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>null</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>vc</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>pty</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>dev</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>file</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>pipe</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>stdio</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>udp</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>tcp</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>unix</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>qemu-vdagent</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>dbus</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </console>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  </devices>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <features>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <gic supported='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <vmcoreinfo supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <genid supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <backingStoreInput supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <backup supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <async-teardown supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <s390-pv supported='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <ps2 supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <tdx supported='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <sev supported='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <sgx supported='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <hyperv supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='features'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>relaxed</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>vapic</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>spinlocks</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>vpindex</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>runtime</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>synic</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>stimer</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>reset</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>vendor_id</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>frequencies</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>reenlightenment</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>tlbflush</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>ipi</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>avic</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>emsr_bitmap</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>xmm_input</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <defaults>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <spinlocks>4095</spinlocks>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <stimer_direct>on</stimer_direct>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <tlbflush_direct>on</tlbflush_direct>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <tlbflush_extended>on</tlbflush_extended>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </defaults>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </hyperv>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <launchSecurity supported='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  </features>
Jan 23 04:22:48 np0005593232 nova_compute[248885]: </domainCapabilities>
Jan 23 04:22:48 np0005593232 nova_compute[248885]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 2026-01-23 09:22:48.578 248891 DEBUG nova.virt.libvirt.host [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 23 04:22:48 np0005593232 nova_compute[248885]: <domainCapabilities>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <path>/usr/libexec/qemu-kvm</path>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <domain>kvm</domain>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <arch>x86_64</arch>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <vcpu max='240'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <iothreads supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <os supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <enum name='firmware'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <loader supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='type'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>rom</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>pflash</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='readonly'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>yes</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>no</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='secure'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>no</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </loader>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  </os>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <cpu>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <mode name='host-passthrough' supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='hostPassthroughMigratable'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>on</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>off</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </mode>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <mode name='maximum' supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='maximumMigratable'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>on</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>off</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </mode>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <mode name='host-model' supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <vendor>AMD</vendor>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='x2apic'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='tsc-deadline'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='hypervisor'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='tsc_adjust'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='spec-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='stibp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='ssbd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='cmp_legacy'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='overflow-recov'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='succor'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='ibrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='amd-ssbd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='virt-ssbd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='lbrv'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='tsc-scale'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='vmcb-clean'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='flushbyasid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='pause-filter'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='pfthreshold'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='svme-addr-chk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <feature policy='disable' name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </mode>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <mode name='custom' supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-noTSX'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Broadwell-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cascadelake-Server-v5'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='ClearwaterForest'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bhi-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cmpccxadd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ddpd-u'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='intel-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='lam'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchiti'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sha512'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sm3'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sm4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='ClearwaterForest-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bhi-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cmpccxadd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ddpd-u'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='intel-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='lam'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchiti'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sha512'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sm3'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sm4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cooperlake'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cooperlake-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Cooperlake-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Denverton'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mpx'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Denverton-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mpx'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Denverton-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Denverton-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Dhyana-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Genoa'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='auto-ibrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Genoa-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='auto-ibrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Genoa-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='auto-ibrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fs-gs-base-ns'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='perfmon-v2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Milan'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Milan-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Milan-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Milan-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Rome'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Rome-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Rome-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Rome-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Turin'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='auto-ibrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vp2intersect'/>
Jan 23 04:22:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v822: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fs-gs-base-ns'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibpb-brtype'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='perfmon-v2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbpb'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='srso-user-kernel-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-Turin-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amd-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='auto-ibrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vp2intersect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fs-gs-base-ns'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibpb-brtype'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='perfmon-v2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbpb'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='srso-user-kernel-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='stibp-always-on'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='EPYC-v5'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='GraniteRapids'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchiti'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='GraniteRapids-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchiti'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='GraniteRapids-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10-128'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10-256'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10-512'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchiti'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='GraniteRapids-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10-128'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10-256'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx10-512'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='prefetchiti'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-noTSX'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Haswell-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-noTSX'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v5'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v6'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Icelake-Server-v7'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='IvyBridge'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='IvyBridge-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='IvyBridge-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='IvyBridge-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='KnightsMill'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-4fmaps'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-4vnniw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512er'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512pf'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='KnightsMill-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-4fmaps'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-4vnniw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512er'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512pf'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Opteron_G4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fma4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xop'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Opteron_G4-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fma4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xop'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Opteron_G5'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fma4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tbm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xop'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Opteron_G5-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fma4'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tbm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xop'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SapphireRapids'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SapphireRapids-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SapphireRapids-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SapphireRapids-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SapphireRapids-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='amx-tile'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-bf16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-fp16'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bitalg'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrc'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fzrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='la57'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='taa-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SierraForest'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cmpccxadd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SierraForest-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cmpccxadd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SierraForest-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cmpccxadd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='intel-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='lam'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='SierraForest-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ifma'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cmpccxadd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fbsdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='fsrs'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ibrs-all'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='intel-psfd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='lam'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mcdt-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pbrsb-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='psdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='serialize'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vaes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Client-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='hle'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='rtm'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Skylake-Server-v5'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512bw'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512cd'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512dq'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512f'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='avx512vl'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='invpcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pcid'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='pku'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Snowridge'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='core-capability'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mpx'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='split-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Snowridge-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='core-capability'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='mpx'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='split-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Snowridge-v2'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='core-capability'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='split-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Snowridge-v3'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='core-capability'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='split-lock-detect'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='Snowridge-v4'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='cldemote'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='erms'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='gfni'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdir64b'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='movdiri'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='xsaves'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='athlon'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnow'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnowext'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='athlon-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnow'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnowext'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='core2duo'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='core2duo-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='coreduo'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='coreduo-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='n270'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='n270-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='ss'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='phenom'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnow'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnowext'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <blockers model='phenom-v1'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnow'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <feature name='3dnowext'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </blockers>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </mode>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  </cpu>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <memoryBacking supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <enum name='sourceType'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <value>file</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <value>anonymous</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <value>memfd</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  </memoryBacking>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <devices>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <disk supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='diskDevice'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>disk</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>cdrom</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>floppy</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>lun</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='bus'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>ide</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>fdc</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>scsi</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>usb</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>sata</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='model'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio-transitional</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio-non-transitional</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </disk>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <graphics supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='type'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>vnc</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>egl-headless</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>dbus</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </graphics>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <video supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='modelType'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>vga</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>cirrus</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>none</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>bochs</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>ramfb</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </video>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <hostdev supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='mode'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>subsystem</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='startupPolicy'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>default</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>mandatory</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>requisite</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>optional</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='subsysType'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>usb</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>pci</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>scsi</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='capsType'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='pciBackend'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </hostdev>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <rng supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='model'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio-transitional</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtio-non-transitional</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='backendModel'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>random</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>egd</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>builtin</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </rng>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <filesystem supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='driverType'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>path</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>handle</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>virtiofs</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </filesystem>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <tpm supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='model'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>tpm-tis</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>tpm-crb</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='backendModel'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>emulator</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>external</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='backendVersion'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>2.0</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </tpm>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <redirdev supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='bus'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>usb</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </redirdev>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <channel supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='type'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>pty</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>unix</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </channel>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <crypto supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='model'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='type'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>qemu</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='backendModel'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>builtin</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </crypto>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <interface supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='backendType'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>default</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>passt</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </interface>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <panic supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='model'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>isa</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>hyperv</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </panic>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <console supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='type'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>null</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>vc</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>pty</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>dev</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>file</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>pipe</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>stdio</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>udp</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>tcp</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>unix</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>qemu-vdagent</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>dbus</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </console>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  </devices>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <features>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <gic supported='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <vmcoreinfo supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <genid supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <backingStoreInput supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <backup supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <async-teardown supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <s390-pv supported='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <ps2 supported='yes'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <tdx supported='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <sev supported='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <sgx supported='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <hyperv supported='yes'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <enum name='features'>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>relaxed</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>vapic</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>spinlocks</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>vpindex</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>runtime</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>synic</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>stimer</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>reset</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>vendor_id</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>frequencies</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>reenlightenment</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>tlbflush</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>ipi</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>avic</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>emsr_bitmap</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <value>xmm_input</value>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </enum>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      <defaults>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <spinlocks>4095</spinlocks>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <stimer_direct>on</stimer_direct>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <tlbflush_direct>on</tlbflush_direct>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <tlbflush_extended>on</tlbflush_extended>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:      </defaults>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    </hyperv>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:    <launchSecurity supported='no'/>
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  </features>
Jan 23 04:22:48 np0005593232 nova_compute[248885]: </domainCapabilities>
Jan 23 04:22:48 np0005593232 nova_compute[248885]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 2026-01-23 09:22:48.661 248891 DEBUG nova.virt.libvirt.host [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 2026-01-23 09:22:48.661 248891 INFO nova.virt.libvirt.host [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Secure Boot support detected#033[00m
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 2026-01-23 09:22:48.663 248891 INFO nova.virt.libvirt.driver [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 2026-01-23 09:22:48.663 248891 INFO nova.virt.libvirt.driver [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 2026-01-23 09:22:48.673 248891 DEBUG nova.virt.libvirt.driver [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] cpu compare xml: <cpu match="exact">
Jan 23 04:22:48 np0005593232 nova_compute[248885]:  <model>Nehalem</model>
Jan 23 04:22:48 np0005593232 nova_compute[248885]: </cpu>
Jan 23 04:22:48 np0005593232 nova_compute[248885]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019#033[00m
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 2026-01-23 09:22:48.677 248891 DEBUG nova.virt.libvirt.driver [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 2026-01-23 09:22:48.755 248891 INFO nova.virt.node [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Determined node identity 0e4a8508-835c-4c0a-aa74-aae2c6536573 from /var/lib/nova/compute_id#033[00m
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 2026-01-23 09:22:48.779 248891 WARNING nova.compute.manager [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Compute nodes ['0e4a8508-835c-4c0a-aa74-aae2c6536573'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 2026-01-23 09:22:48.840 248891 INFO nova.compute.manager [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 2026-01-23 09:22:48.890 248891 WARNING nova.compute.manager [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 2026-01-23 09:22:48.890 248891 DEBUG oslo_concurrency.lockutils [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 2026-01-23 09:22:48.890 248891 DEBUG oslo_concurrency.lockutils [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 2026-01-23 09:22:48.891 248891 DEBUG oslo_concurrency.lockutils [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 2026-01-23 09:22:48.891 248891 DEBUG nova.compute.resource_tracker [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:22:48 np0005593232 nova_compute[248885]: 2026-01-23 09:22:48.891 248891 DEBUG oslo_concurrency.processutils [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:22:48 np0005593232 python3.9[249810]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 23 04:22:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:22:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:49.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:22:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:22:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1646354825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:22:49 np0005593232 nova_compute[248885]: 2026-01-23 09:22:49.319 248891 DEBUG oslo_concurrency.processutils [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:22:49 np0005593232 systemd[1]: Starting libvirt nodedev daemon...
Jan 23 04:22:49 np0005593232 systemd[1]: Started libvirt nodedev daemon.
Jan 23 04:22:49 np0005593232 nova_compute[248885]: 2026-01-23 09:22:49.633 248891 WARNING nova.virt.libvirt.driver [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:22:49 np0005593232 nova_compute[248885]: 2026-01-23 09:22:49.635 248891 DEBUG nova.compute.resource_tracker [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5256MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:22:49 np0005593232 nova_compute[248885]: 2026-01-23 09:22:49.635 248891 DEBUG oslo_concurrency.lockutils [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:22:49 np0005593232 nova_compute[248885]: 2026-01-23 09:22:49.635 248891 DEBUG oslo_concurrency.lockutils [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:22:49 np0005593232 nova_compute[248885]: 2026-01-23 09:22:49.654 248891 WARNING nova.compute.resource_tracker [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] No compute node record for compute-0.ctlplane.example.com:0e4a8508-835c-4c0a-aa74-aae2c6536573: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 0e4a8508-835c-4c0a-aa74-aae2c6536573 could not be found.#033[00m
Jan 23 04:22:49 np0005593232 nova_compute[248885]: 2026-01-23 09:22:49.674 248891 INFO nova.compute.resource_tracker [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 0e4a8508-835c-4c0a-aa74-aae2c6536573#033[00m
Jan 23 04:22:49 np0005593232 nova_compute[248885]: 2026-01-23 09:22:49.747 248891 DEBUG nova.compute.resource_tracker [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:22:49 np0005593232 nova_compute[248885]: 2026-01-23 09:22:49.747 248891 DEBUG nova.compute.resource_tracker [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:22:50 np0005593232 python3.9[250008]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 23 04:22:50 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 04:22:50 np0005593232 nova_compute[248885]: 2026-01-23 09:22:50.375 248891 INFO nova.scheduler.client.report [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] [req-41dd979d-d949-4b64-a39d-080c73af62dd] Created resource provider record via placement API for resource provider with UUID 0e4a8508-835c-4c0a-aa74-aae2c6536573 and name compute-0.ctlplane.example.com.#033[00m
Jan 23 04:22:50 np0005593232 nova_compute[248885]: 2026-01-23 09:22:50.399 248891 DEBUG oslo_concurrency.processutils [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:22:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:22:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:50.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:22:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v823: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:22:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3613243730' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:22:50 np0005593232 nova_compute[248885]: 2026-01-23 09:22:50.852 248891 DEBUG oslo_concurrency.processutils [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:22:50 np0005593232 nova_compute[248885]: 2026-01-23 09:22:50.858 248891 DEBUG nova.virt.libvirt.host [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 23 04:22:50 np0005593232 nova_compute[248885]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Jan 23 04:22:50 np0005593232 nova_compute[248885]: 2026-01-23 09:22:50.858 248891 INFO nova.virt.libvirt.host [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] kernel doesn't support AMD SEV#033[00m
Jan 23 04:22:50 np0005593232 nova_compute[248885]: 2026-01-23 09:22:50.859 248891 DEBUG nova.compute.provider_tree [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 04:22:50 np0005593232 nova_compute[248885]: 2026-01-23 09:22:50.860 248891 DEBUG nova.virt.libvirt.driver [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:22:50 np0005593232 nova_compute[248885]: 2026-01-23 09:22:50.863 248891 DEBUG nova.virt.libvirt.driver [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Libvirt baseline CPU <cpu>
Jan 23 04:22:50 np0005593232 nova_compute[248885]:  <arch>x86_64</arch>
Jan 23 04:22:50 np0005593232 nova_compute[248885]:  <model>Nehalem</model>
Jan 23 04:22:50 np0005593232 nova_compute[248885]:  <vendor>AMD</vendor>
Jan 23 04:22:50 np0005593232 nova_compute[248885]:  <topology sockets="8" cores="1" threads="1"/>
Jan 23 04:22:50 np0005593232 nova_compute[248885]: </cpu>
Jan 23 04:22:50 np0005593232 nova_compute[248885]: _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537#033[00m
Jan 23 04:22:50 np0005593232 nova_compute[248885]: 2026-01-23 09:22:50.917 248891 DEBUG nova.scheduler.client.report [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Updated inventory for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Jan 23 04:22:50 np0005593232 nova_compute[248885]: 2026-01-23 09:22:50.918 248891 DEBUG nova.compute.provider_tree [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Updating resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 23 04:22:50 np0005593232 nova_compute[248885]: 2026-01-23 09:22:50.918 248891 DEBUG nova.compute.provider_tree [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 04:22:51 np0005593232 nova_compute[248885]: 2026-01-23 09:22:51.085 248891 DEBUG nova.compute.provider_tree [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Updating resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 23 04:22:51 np0005593232 nova_compute[248885]: 2026-01-23 09:22:51.119 248891 DEBUG nova.compute.resource_tracker [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:22:51 np0005593232 nova_compute[248885]: 2026-01-23 09:22:51.120 248891 DEBUG oslo_concurrency.lockutils [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.485s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:22:51 np0005593232 nova_compute[248885]: 2026-01-23 09:22:51.120 248891 DEBUG nova.service [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Jan 23 04:22:51 np0005593232 python3.9[250206]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 23 04:22:51 np0005593232 systemd[1]: Stopping nova_compute container...
Jan 23 04:22:51 np0005593232 nova_compute[248885]: 2026-01-23 09:22:51.220 248891 DEBUG nova.service [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Jan 23 04:22:51 np0005593232 nova_compute[248885]: 2026-01-23 09:22:51.221 248891 DEBUG nova.servicegroup.drivers.db [None req-ced6c3aa-c441-47c0-8dd9-599ed422ba0c - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Jan 23 04:22:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:51.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:51 np0005593232 nova_compute[248885]: 2026-01-23 09:22:51.303 248891 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored#033[00m
Jan 23 04:22:51 np0005593232 nova_compute[248885]: 2026-01-23 09:22:51.306 248891 DEBUG oslo_concurrency.lockutils [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:22:51 np0005593232 nova_compute[248885]: 2026-01-23 09:22:51.306 248891 DEBUG oslo_concurrency.lockutils [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:22:51 np0005593232 nova_compute[248885]: 2026-01-23 09:22:51.307 248891 DEBUG oslo_concurrency.lockutils [None req-380b0c94-34af-4e67-99c5-ac613a721624 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:22:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 23 04:22:51 np0005593232 virtqemud[249592]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 23 04:22:51 np0005593232 virtqemud[249592]: hostname: compute-0
Jan 23 04:22:51 np0005593232 virtqemud[249592]: End of file while reading data: Input/output error
Jan 23 04:22:51 np0005593232 systemd[1]: libpod-72b3e3988b666c9faed31dcd32e7423610a5c49411ab60248ac62495f45112e5.scope: Deactivated successfully.
Jan 23 04:22:51 np0005593232 systemd[1]: libpod-72b3e3988b666c9faed31dcd32e7423610a5c49411ab60248ac62495f45112e5.scope: Consumed 4.577s CPU time.
Jan 23 04:22:51 np0005593232 podman[250211]: 2026-01-23 09:22:51.770488085 +0000 UTC m=+0.554291860 container died 72b3e3988b666c9faed31dcd32e7423610a5c49411ab60248ac62495f45112e5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 23 04:22:51 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-72b3e3988b666c9faed31dcd32e7423610a5c49411ab60248ac62495f45112e5-userdata-shm.mount: Deactivated successfully.
Jan 23 04:22:51 np0005593232 systemd[1]: var-lib-containers-storage-overlay-486621eff1fe750ea0328b3bab4468890d1c85c55745d598364f1b899a177ff8-merged.mount: Deactivated successfully.
Jan 23 04:22:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:22:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:52.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:22:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v824: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:52 np0005593232 podman[250211]: 2026-01-23 09:22:52.68004402 +0000 UTC m=+1.463847765 container cleanup 72b3e3988b666c9faed31dcd32e7423610a5c49411ab60248ac62495f45112e5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:22:52 np0005593232 podman[250211]: nova_compute
Jan 23 04:22:52 np0005593232 podman[250240]: nova_compute
Jan 23 04:22:52 np0005593232 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 23 04:22:52 np0005593232 systemd[1]: Stopped nova_compute container.
Jan 23 04:22:52 np0005593232 systemd[1]: Starting nova_compute container...
Jan 23 04:22:52 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:22:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/486621eff1fe750ea0328b3bab4468890d1c85c55745d598364f1b899a177ff8/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/486621eff1fe750ea0328b3bab4468890d1c85c55745d598364f1b899a177ff8/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/486621eff1fe750ea0328b3bab4468890d1c85c55745d598364f1b899a177ff8/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/486621eff1fe750ea0328b3bab4468890d1c85c55745d598364f1b899a177ff8/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/486621eff1fe750ea0328b3bab4468890d1c85c55745d598364f1b899a177ff8/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:52 np0005593232 podman[250253]: 2026-01-23 09:22:52.859182196 +0000 UTC m=+0.087702181 container init 72b3e3988b666c9faed31dcd32e7423610a5c49411ab60248ac62495f45112e5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
Jan 23 04:22:52 np0005593232 podman[250253]: 2026-01-23 09:22:52.865521147 +0000 UTC m=+0.094041102 container start 72b3e3988b666c9faed31dcd32e7423610a5c49411ab60248ac62495f45112e5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute)
Jan 23 04:22:52 np0005593232 podman[250253]: nova_compute
Jan 23 04:22:52 np0005593232 nova_compute[250269]: + sudo -E kolla_set_configs
Jan 23 04:22:52 np0005593232 systemd[1]: Started nova_compute container.
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Validating config file
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Copying service configuration files
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Deleting /etc/ceph
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Creating directory /etc/ceph
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Setting permission for /etc/ceph
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Writing out command to execute
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 23 04:22:52 np0005593232 nova_compute[250269]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 23 04:22:52 np0005593232 nova_compute[250269]: ++ cat /run_command
Jan 23 04:22:52 np0005593232 nova_compute[250269]: + CMD=nova-compute
Jan 23 04:22:52 np0005593232 nova_compute[250269]: + ARGS=
Jan 23 04:22:52 np0005593232 nova_compute[250269]: + sudo kolla_copy_cacerts
Jan 23 04:22:52 np0005593232 nova_compute[250269]: + [[ ! -n '' ]]
Jan 23 04:22:52 np0005593232 nova_compute[250269]: + . kolla_extend_start
Jan 23 04:22:52 np0005593232 nova_compute[250269]: + echo 'Running command: '\''nova-compute'\'''
Jan 23 04:22:52 np0005593232 nova_compute[250269]: Running command: 'nova-compute'
Jan 23 04:22:52 np0005593232 nova_compute[250269]: + umask 0022
Jan 23 04:22:52 np0005593232 nova_compute[250269]: + exec nova-compute
Jan 23 04:22:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:53.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:53 np0005593232 python3.9[250433]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 23 04:22:54 np0005593232 systemd[1]: Started libpod-conmon-c88a517ff3a488d4f388aa08289192a3752cfc75f8374d0b05bb7fd93f71ca35.scope.
Jan 23 04:22:54 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:22:54 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf31a4539748b609b327c821311df56315ecf3d521ec337c6df89b10f77558ef/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:54 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf31a4539748b609b327c821311df56315ecf3d521ec337c6df89b10f77558ef/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:54 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf31a4539748b609b327c821311df56315ecf3d521ec337c6df89b10f77558ef/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 23 04:22:54 np0005593232 podman[250459]: 2026-01-23 09:22:54.204016746 +0000 UTC m=+0.120170535 container init c88a517ff3a488d4f388aa08289192a3752cfc75f8374d0b05bb7fd93f71ca35 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 04:22:54 np0005593232 podman[250459]: 2026-01-23 09:22:54.211781028 +0000 UTC m=+0.127934797 container start c88a517ff3a488d4f388aa08289192a3752cfc75f8374d0b05bb7fd93f71ca35 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 04:22:54 np0005593232 python3.9[250433]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 23 04:22:54 np0005593232 nova_compute_init[250480]: INFO:nova_statedir:Applying nova statedir ownership
Jan 23 04:22:54 np0005593232 nova_compute_init[250480]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 23 04:22:54 np0005593232 nova_compute_init[250480]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 23 04:22:54 np0005593232 nova_compute_init[250480]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 23 04:22:54 np0005593232 nova_compute_init[250480]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 23 04:22:54 np0005593232 nova_compute_init[250480]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 23 04:22:54 np0005593232 nova_compute_init[250480]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 23 04:22:54 np0005593232 nova_compute_init[250480]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 23 04:22:54 np0005593232 nova_compute_init[250480]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 23 04:22:54 np0005593232 nova_compute_init[250480]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 23 04:22:54 np0005593232 nova_compute_init[250480]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 23 04:22:54 np0005593232 nova_compute_init[250480]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 23 04:22:54 np0005593232 nova_compute_init[250480]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 23 04:22:54 np0005593232 nova_compute_init[250480]: INFO:nova_statedir:Nova statedir ownership complete
Jan 23 04:22:54 np0005593232 systemd[1]: libpod-c88a517ff3a488d4f388aa08289192a3752cfc75f8374d0b05bb7fd93f71ca35.scope: Deactivated successfully.
Jan 23 04:22:54 np0005593232 podman[250494]: 2026-01-23 09:22:54.334625059 +0000 UTC m=+0.024551131 container died c88a517ff3a488d4f388aa08289192a3752cfc75f8374d0b05bb7fd93f71ca35 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 23 04:22:54 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c88a517ff3a488d4f388aa08289192a3752cfc75f8374d0b05bb7fd93f71ca35-userdata-shm.mount: Deactivated successfully.
Jan 23 04:22:54 np0005593232 systemd[1]: var-lib-containers-storage-overlay-cf31a4539748b609b327c821311df56315ecf3d521ec337c6df89b10f77558ef-merged.mount: Deactivated successfully.
Jan 23 04:22:54 np0005593232 podman[250494]: 2026-01-23 09:22:54.37149364 +0000 UTC m=+0.061419712 container cleanup c88a517ff3a488d4f388aa08289192a3752cfc75f8374d0b05bb7fd93f71ca35 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 23 04:22:54 np0005593232 systemd[1]: libpod-conmon-c88a517ff3a488d4f388aa08289192a3752cfc75f8374d0b05bb7fd93f71ca35.scope: Deactivated successfully.
Jan 23 04:22:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:22:54.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v825: 321 pgs: 321 active+clean; 456 KiB data, 153 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.091 250273 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.092 250273 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.092 250273 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.092 250273 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 23 04:22:55 np0005593232 systemd[1]: session-50.scope: Deactivated successfully.
Jan 23 04:22:55 np0005593232 systemd[1]: session-50.scope: Consumed 1min 59.038s CPU time.
Jan 23 04:22:55 np0005593232 systemd-logind[808]: Session 50 logged out. Waiting for processes to exit.
Jan 23 04:22:55 np0005593232 systemd-logind[808]: Removed session 50.
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.255 250273 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.281 250273 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.281 250273 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 23 04:22:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:22:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:22:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:22:55.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.775 250273 INFO nova.virt.driver [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.914 250273 INFO nova.compute.provider_config [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.926 250273 DEBUG oslo_concurrency.lockutils [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.926 250273 DEBUG oslo_concurrency.lockutils [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.927 250273 DEBUG oslo_concurrency.lockutils [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.927 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.928 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.928 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.928 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.928 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.928 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.929 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.929 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.929 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.929 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.930 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.930 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.930 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.930 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.931 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.931 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.931 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.931 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.931 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.932 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.932 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.932 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.932 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.932 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.933 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.933 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.933 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.933 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.934 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.934 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.934 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.934 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.934 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.935 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.935 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.935 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.935 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.935 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.936 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.936 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.936 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.936 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.937 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.937 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.937 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.938 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.938 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.938 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.938 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.938 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.939 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.939 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.939 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.939 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.940 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.940 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.940 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.940 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.940 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.941 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.941 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.941 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.941 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.941 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.941 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.942 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.942 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.942 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.942 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.942 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.943 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.943 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.943 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.943 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.943 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.943 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.944 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.944 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.944 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.944 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.944 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.945 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.945 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.945 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.945 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.945 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.946 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.946 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.946 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.946 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.946 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.946 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.947 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.947 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.947 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.947 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.947 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.948 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.948 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.948 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.948 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.948 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.948 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.949 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.949 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.949 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.949 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.949 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.950 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.950 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.950 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.950 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.950 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.950 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.951 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.952 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.952 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.952 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.952 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.952 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.953 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.953 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.953 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.953 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.953 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.953 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.954 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.954 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.954 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.954 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.954 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.955 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.955 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.955 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.955 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.955 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.955 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.956 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.956 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.956 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.956 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.956 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.957 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.957 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.957 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.957 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.957 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.958 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.958 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.958 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.958 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.958 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.959 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.959 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.959 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.959 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.959 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.960 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.960 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.960 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.960 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.961 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.961 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.961 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.961 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.961 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.961 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.962 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.962 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.962 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.962 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.963 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.963 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.963 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.963 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.963 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.963 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.964 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.964 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.964 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.964 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.965 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.965 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.965 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.965 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.965 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.966 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.966 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.966 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.966 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.966 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.967 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.967 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.967 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.967 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.967 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.968 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.968 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.968 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.968 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.968 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.968 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.969 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.969 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.969 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.969 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.969 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.969 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.969 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.969 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.970 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.970 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.970 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.970 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.970 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.970 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.971 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.971 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.971 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.971 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.971 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.971 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.971 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.972 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.972 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.972 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.972 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.972 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.972 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.972 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.973 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.973 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.973 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.973 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.973 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.973 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.973 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.974 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.974 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.974 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.974 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.974 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.975 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.975 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.975 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.975 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.975 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.975 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.976 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.976 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.976 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.976 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.976 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.976 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.976 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.977 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.977 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.977 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.977 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.977 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.977 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.978 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.978 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.978 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.978 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.978 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.978 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.979 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.979 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.979 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.979 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.979 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.979 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.979 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.980 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.980 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.980 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.980 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.980 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.980 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.980 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.981 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.981 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.981 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.981 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.981 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.981 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.982 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.982 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.982 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.982 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.982 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.982 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.982 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.983 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.983 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.983 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.983 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.983 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.983 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.983 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.984 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.984 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.984 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.984 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.984 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.984 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.985 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.985 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.985 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.985 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.985 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.985 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.985 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.985 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.986 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.986 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.986 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.986 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.986 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.986 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.987 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.987 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.987 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.987 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.987 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.987 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.987 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.988 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.988 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.988 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.988 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.988 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.988 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.989 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.989 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.989 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.989 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.989 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.989 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.989 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.990 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.990 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.990 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.990 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.990 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.990 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.990 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.991 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.991 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.991 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.991 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.991 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.991 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.991 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.992 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.992 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.992 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.992 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.992 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.993 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.993 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.993 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.993 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.993 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.993 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.994 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.994 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.994 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.994 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.994 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.994 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.995 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.995 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.995 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.995 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.995 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.995 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.996 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.996 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.996 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.996 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.996 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.996 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.997 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.997 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.997 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.997 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.997 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.997 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.998 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.998 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.998 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.998 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.998 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.998 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.998 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.999 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.999 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.999 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.999 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.999 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:55 np0005593232 nova_compute[250269]: 2026-01-23 09:22:55.999 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.000 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.000 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.000 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.000 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.000 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.000 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.001 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.001 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.001 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.001 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.001 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.001 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.001 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.002 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.002 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.002 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.002 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.002 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.002 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.002 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.003 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.003 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.003 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.003 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.003 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.003 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.003 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.003 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.004 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.004 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.004 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.004 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.004 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.005 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.005 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.005 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.005 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.005 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.005 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.005 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.006 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.006 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.006 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.006 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.006 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.007 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.007 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.007 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.007 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.007 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.008 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.008 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.008 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.008 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.008 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.008 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.009 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.009 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.009 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.009 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.009 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.010 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.010 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.010 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.010 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.010 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.010 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.010 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.011 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.011 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.011 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.011 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.011 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.011 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.011 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.012 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.012 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.012 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.012 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.012 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.012 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.012 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.013 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.013 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.013 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.013 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.013 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.013 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.014 250273 WARNING oslo_config.cfg [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 23 04:22:56 np0005593232 nova_compute[250269]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 23 04:22:56 np0005593232 nova_compute[250269]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 23 04:22:56 np0005593232 nova_compute[250269]: and ``live_migration_inbound_addr`` respectively.
Jan 23 04:22:56 np0005593232 nova_compute[250269]: ).  Its value may be silently ignored in the future.#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.014 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
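The deprecation warning above says `live_migration_uri` is superseded by `live_migration_scheme` and `live_migration_inbound_addr`. Given the logged value `qemu+tls://%s/system`, a roughly equivalent `nova.conf` fragment would look like the sketch below; note that this deployment logs `live_migration_inbound_addr = None`, so the address shown is purely a hypothetical placeholder, not a value taken from this host:

```ini
[libvirt]
# Replaces the deprecated: live_migration_uri = qemu+tls://%s/system
# "tls" yields a qemu+tls:// migration URI, matching the logged setting.
live_migration_scheme = tls
# Optional: pin the target address/hostname used for inbound migrations.
# Hypothetical example only; the log above shows this option unset (None).
# live_migration_inbound_addr = migration.example.internal
```

With `live_migration_with_native_tls = True` also set (as logged a few lines below), the migration stream itself is TLS-encrypted by QEMU rather than tunnelled through libvirtd.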
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.014 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.014 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.014 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.014 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.015 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.015 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.015 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.015 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.015 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.015 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.016 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.016 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.016 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.016 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.017 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.017 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.017 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.017 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.rbd_secret_uuid        = e1533653-0a5a-584c-b34b-8689f0d32e77 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.017 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.017 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.018 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.018 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.018 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.018 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.018 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.018 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.019 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.019 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.019 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.019 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.019 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.019 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.020 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.020 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.020 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.020 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.020 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.020 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.021 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.021 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.021 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.021 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.021 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.021 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.022 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.022 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.022 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.022 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.022 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.023 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.023 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.023 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.023 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.023 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.023 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.024 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.024 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.024 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.024 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.024 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.024 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.025 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.025 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.025 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.025 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.025 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.025 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.026 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.026 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.026 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.026 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.026 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.026 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.027 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.027 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.027 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.027 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.027 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.027 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.027 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.028 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.028 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.028 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.028 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.028 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.028 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.029 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.029 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.029 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.029 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.029 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.029 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.030 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.030 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.030 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.030 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.030 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.030 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.031 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.031 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.031 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.031 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.031 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.031 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.031 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.032 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.032 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.032 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.032 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.032 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.032 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.033 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.033 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.033 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.033 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.033 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.033 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.033 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.034 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.034 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.034 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.034 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.034 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.034 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.035 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.035 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.035 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.035 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.035 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.036 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.036 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.036 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.036 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.037 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.037 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.037 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.037 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.037 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.038 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.038 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.038 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.038 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.038 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.039 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.039 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.039 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.039 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.039 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.039 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.040 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.040 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.040 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.040 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.040 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.040 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.040 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.041 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.041 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.041 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.041 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.041 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.041 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.042 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.042 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.042 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.042 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.042 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.042 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.043 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.043 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.043 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.043 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.043 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.043 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.043 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.044 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.044 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.044 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.044 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.044 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.045 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.045 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.045 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.045 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.045 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.045 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.046 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.046 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.046 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.046 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.046 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.046 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.046 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.047 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.047 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.047 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.047 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.047 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.047 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.048 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.048 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.048 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.048 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.048 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.048 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.049 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.049 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.049 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.049 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.049 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.049 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.050 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.050 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.050 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.050 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.050 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.050 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.050 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.051 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.051 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.051 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.051 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.051 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.051 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.051 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.051 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.052 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.052 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.052 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.052 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.052 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.052 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.052 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.053 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.053 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.053 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.053 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.053 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.053 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.053 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.054 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.054 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.054 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.054 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.054 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.054 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.054 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.055 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.055 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.055 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.055 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.055 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.056 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.056 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.056 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.056 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.056 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.057 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.057 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.057 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.057 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.057 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.058 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.058 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.058 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.058 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.058 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.059 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.059 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.059 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.059 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.059 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.059 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.060 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.060 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.060 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.060 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.060 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.060 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.060 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.061 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.061 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.061 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.061 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.061 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.061 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.062 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.062 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.062 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.062 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.062 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.062 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.063 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.063 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.063 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.063 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.063 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.064 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.064 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.064 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.064 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.064 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.064 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.065 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.065 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.065 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.065 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.065 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.066 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.066 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.066 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.066 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.066 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.066 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.066 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.067 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.067 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.067 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.067 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.067 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.067 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.068 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.068 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.068 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.068 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.068 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.068 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.069 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.069 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.069 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.069 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.069 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.069 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.069 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.070 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.070 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.070 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.070 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.070 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.070 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.071 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.071 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.071 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.071 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.071 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.072 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.072 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.072 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.072 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.072 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.072 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.072 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.073 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.073 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.073 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.073 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.073 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.073 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.073 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.074 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.074 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.074 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.074 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.074 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.074 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.074 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.075 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.075 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.075 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.075 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.075 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.075 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.075 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.076 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.076 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.076 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.076 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.076 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.076 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.076 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.077 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.077 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.077 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.077 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.077 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.077 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.078 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.078 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.078 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.078 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.078 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.078 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.078 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.079 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.079 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.079 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.079 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.079 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.080 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.080 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.080 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.080 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.080 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.080 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.081 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.081 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.081 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.081 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.081 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.081 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.082 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.082 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.082 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.082 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.082 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.083 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.083 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.083 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.083 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.083 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.083 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.083 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.083 250273 DEBUG oslo_service.service [None req-817f07f7-ebb6-4b69-b9ec-6193ac2448b2 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.085 250273 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.104 250273 INFO nova.virt.node [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Determined node identity 0e4a8508-835c-4c0a-aa74-aae2c6536573 from /var/lib/nova/compute_id#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.104 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.105 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.105 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.105 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.117 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f97dc4ab2b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.120 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f97dc4ab2b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.120 250273 INFO nova.virt.libvirt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.127 250273 INFO nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Libvirt host capabilities <capabilities>
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <host>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <uuid>ebc95db9-b389-4cf6-b3a5-7ae0afc322d2</uuid>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <cpu>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <arch>x86_64</arch>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model>EPYC-Rome-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <vendor>AMD</vendor>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <microcode version='16777317'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <signature family='23' model='49' stepping='0'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature name='x2apic'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature name='tsc-deadline'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature name='osxsave'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature name='hypervisor'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature name='tsc_adjust'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature name='spec-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature name='stibp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature name='arch-capabilities'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature name='ssbd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature name='cmp_legacy'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature name='topoext'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature name='virt-ssbd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature name='lbrv'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature name='tsc-scale'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature name='vmcb-clean'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature name='pause-filter'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature name='pfthreshold'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature name='svme-addr-chk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature name='rdctl-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature name='skip-l1dfl-vmentry'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature name='mds-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature name='pschange-mc-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <pages unit='KiB' size='4'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <pages unit='KiB' size='2048'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <pages unit='KiB' size='1048576'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </cpu>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <power_management>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <suspend_mem/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </power_management>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <iommu support='no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <migration_features>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <live/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <uri_transports>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <uri_transport>tcp</uri_transport>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <uri_transport>rdma</uri_transport>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </uri_transports>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </migration_features>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <topology>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <cells num='1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <cell id='0'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:          <memory unit='KiB'>7864316</memory>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:          <pages unit='KiB' size='4'>1966079</pages>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:          <pages unit='KiB' size='2048'>0</pages>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:          <distances>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:            <sibling id='0' value='10'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:          </distances>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:          <cpus num='8'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:          </cpus>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        </cell>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </cells>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </topology>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <cache>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </cache>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <secmodel>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model>selinux</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <doi>0</doi>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </secmodel>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <secmodel>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model>dac</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <doi>0</doi>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </secmodel>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  </host>
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <guest>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <os_type>hvm</os_type>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <arch name='i686'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <wordsize>32</wordsize>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <domain type='qemu'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <domain type='kvm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </arch>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <features>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <pae/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <nonpae/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <acpi default='on' toggle='yes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <apic default='on' toggle='no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <cpuselection/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <deviceboot/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <disksnapshot default='on' toggle='no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <externalSnapshot/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </features>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  </guest>
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <guest>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <os_type>hvm</os_type>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <arch name='x86_64'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <wordsize>64</wordsize>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <domain type='qemu'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <domain type='kvm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </arch>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <features>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <acpi default='on' toggle='yes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <apic default='on' toggle='no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <cpuselection/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <deviceboot/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <disksnapshot default='on' toggle='no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <externalSnapshot/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </features>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  </guest>
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 
Jan 23 04:22:56 np0005593232 nova_compute[250269]: </capabilities>
Jan 23 04:22:56 np0005593232 nova_compute[250269]: #033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.133 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.140 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 23 04:22:56 np0005593232 nova_compute[250269]: <domainCapabilities>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <path>/usr/libexec/qemu-kvm</path>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <domain>kvm</domain>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <arch>i686</arch>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <vcpu max='240'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <iothreads supported='yes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <os supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <enum name='firmware'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <loader supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='type'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>rom</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>pflash</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='readonly'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>yes</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>no</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='secure'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>no</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </loader>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <cpu>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <mode name='host-passthrough' supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='hostPassthroughMigratable'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>on</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>off</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </mode>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <mode name='maximum' supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='maximumMigratable'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>on</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>off</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </mode>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <mode name='host-model' supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <vendor>AMD</vendor>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='x2apic'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='tsc-deadline'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='hypervisor'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='tsc_adjust'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='spec-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='stibp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='ssbd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='cmp_legacy'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='overflow-recov'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='succor'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='ibrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='amd-ssbd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='virt-ssbd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='lbrv'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='tsc-scale'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='vmcb-clean'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='flushbyasid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='pause-filter'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='pfthreshold'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='svme-addr-chk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='disable' name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </mode>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <mode name='custom' supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Broadwell'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Broadwell-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Broadwell-noTSX'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Broadwell-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Broadwell-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Broadwell-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Broadwell-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cascadelake-Server'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cascadelake-Server-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cascadelake-Server-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cascadelake-Server-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cascadelake-Server-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cascadelake-Server-v5'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='ClearwaterForest'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni-int16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bhi-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cmpccxadd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ddpd-u'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='intel-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='lam'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='prefetchiti'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sha512'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sm3'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sm4'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='ClearwaterForest-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni-int16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bhi-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cmpccxadd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ddpd-u'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='intel-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='lam'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='prefetchiti'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sha512'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sm3'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sm4'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cooperlake'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cooperlake-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cooperlake-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Denverton'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mpx'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Denverton-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mpx'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Denverton-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Denverton-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Dhyana-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Genoa'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amd-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='auto-ibrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='stibp-always-on'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Genoa-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amd-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='auto-ibrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='stibp-always-on'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Genoa-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amd-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='auto-ibrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fs-gs-base-ns'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='perfmon-v2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='stibp-always-on'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Milan'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Milan-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Milan-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amd-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='stibp-always-on'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Milan-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amd-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='stibp-always-on'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Rome'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Rome-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Rome-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Rome-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Turin'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amd-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='auto-ibrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vp2intersect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fs-gs-base-ns'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibpb-brtype'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='perfmon-v2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='prefetchi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbpb'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='srso-user-kernel-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='stibp-always-on'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Turin-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amd-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='auto-ibrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vp2intersect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fs-gs-base-ns'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibpb-brtype'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='perfmon-v2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='prefetchi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbpb'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='srso-user-kernel-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='stibp-always-on'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-v5'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='GraniteRapids'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='prefetchiti'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='GraniteRapids-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='prefetchiti'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='GraniteRapids-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx10'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx10-128'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx10-256'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx10-512'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='prefetchiti'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='GraniteRapids-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx10'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx10-128'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx10-256'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx10-512'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='prefetchiti'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Haswell'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Haswell-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Haswell-noTSX'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Haswell-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Haswell-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Haswell-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Haswell-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server-noTSX'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server-v5'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server-v6'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server-v7'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='IvyBridge'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='IvyBridge-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='IvyBridge-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='IvyBridge-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='KnightsMill'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-4fmaps'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-4vnniw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512er'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512pf'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='KnightsMill-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-4fmaps'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-4vnniw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512er'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512pf'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Opteron_G4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fma4'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xop'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Opteron_G4-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fma4'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xop'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Opteron_G5'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fma4'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tbm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xop'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Opteron_G5-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fma4'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tbm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xop'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SapphireRapids'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SapphireRapids-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SapphireRapids-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SapphireRapids-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SapphireRapids-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SierraForest'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cmpccxadd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SierraForest-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cmpccxadd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SierraForest-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cmpccxadd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='intel-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='lam'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SierraForest-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cmpccxadd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='intel-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='lam'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Client'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Client-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Client-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Client-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Client-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Client-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Server'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Server-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Server-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Server-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Server-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Server-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Server-v5'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Snowridge'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='core-capability'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mpx'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='split-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Snowridge-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='core-capability'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mpx'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='split-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Snowridge-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='core-capability'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='split-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Snowridge-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='core-capability'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='split-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Snowridge-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='athlon'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='3dnow'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='3dnowext'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='athlon-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='3dnow'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='3dnowext'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='core2duo'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='core2duo-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='coreduo'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='coreduo-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='n270'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='n270-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='phenom'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='3dnow'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='3dnowext'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='phenom-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='3dnow'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='3dnowext'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </mode>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <memoryBacking supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <enum name='sourceType'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <value>file</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <value>anonymous</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <value>memfd</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  </memoryBacking>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <disk supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='diskDevice'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>disk</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>cdrom</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>floppy</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>lun</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='bus'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>ide</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>fdc</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>scsi</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>virtio</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>usb</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>sata</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='model'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>virtio</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>virtio-transitional</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>virtio-non-transitional</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <graphics supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='type'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>vnc</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>egl-headless</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>dbus</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </graphics>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <video supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='modelType'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>vga</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>cirrus</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>virtio</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>none</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>bochs</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>ramfb</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <hostdev supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='mode'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>subsystem</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='startupPolicy'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>default</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>mandatory</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>requisite</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>optional</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='subsysType'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>usb</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>pci</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>scsi</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='capsType'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='pciBackend'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </hostdev>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <rng supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='model'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>virtio</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>virtio-transitional</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>virtio-non-transitional</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='backendModel'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>random</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>egd</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>builtin</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <filesystem supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='driverType'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>path</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>handle</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>virtiofs</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </filesystem>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <tpm supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='model'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>tpm-tis</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>tpm-crb</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='backendModel'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>emulator</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>external</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='backendVersion'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>2.0</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </tpm>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <redirdev supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='bus'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>usb</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </redirdev>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <channel supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='type'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>pty</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>unix</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </channel>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <crypto supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='model'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='type'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>qemu</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='backendModel'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>builtin</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </crypto>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <interface supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='backendType'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>default</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>passt</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <panic supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='model'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>isa</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>hyperv</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </panic>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <console supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='type'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>null</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>vc</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>pty</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>dev</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>file</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>pipe</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>stdio</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>udp</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>tcp</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>unix</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>qemu-vdagent</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>dbus</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </console>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <gic supported='no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <vmcoreinfo supported='yes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <genid supported='yes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <backingStoreInput supported='yes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <backup supported='yes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <async-teardown supported='yes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <s390-pv supported='no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <ps2 supported='yes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <tdx supported='no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <sev supported='no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <sgx supported='no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <hyperv supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='features'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>relaxed</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>vapic</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>spinlocks</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>vpindex</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>runtime</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>synic</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>stimer</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>reset</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>vendor_id</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>frequencies</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>reenlightenment</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>tlbflush</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>ipi</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>avic</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>emsr_bitmap</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>xmm_input</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <defaults>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <spinlocks>4095</spinlocks>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <stimer_direct>on</stimer_direct>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <tlbflush_direct>on</tlbflush_direct>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <tlbflush_extended>on</tlbflush_extended>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </defaults>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </hyperv>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <launchSecurity supported='no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:22:56 np0005593232 nova_compute[250269]: </domainCapabilities>
Jan 23 04:22:56 np0005593232 nova_compute[250269]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.153 250273 DEBUG nova.virt.libvirt.volume.mount [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.154 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 23 04:22:56 np0005593232 nova_compute[250269]: <domainCapabilities>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <path>/usr/libexec/qemu-kvm</path>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <domain>kvm</domain>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <arch>i686</arch>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <vcpu max='4096'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <iothreads supported='yes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <os supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <enum name='firmware'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <loader supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='type'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>rom</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>pflash</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='readonly'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>yes</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>no</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='secure'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>no</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </loader>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <cpu>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <mode name='host-passthrough' supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='hostPassthroughMigratable'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>on</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>off</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </mode>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <mode name='maximum' supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='maximumMigratable'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>on</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>off</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </mode>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <mode name='host-model' supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <vendor>AMD</vendor>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='x2apic'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='tsc-deadline'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='hypervisor'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='tsc_adjust'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='spec-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='stibp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='ssbd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='cmp_legacy'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='overflow-recov'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='succor'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='ibrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='amd-ssbd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='virt-ssbd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='lbrv'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='tsc-scale'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='vmcb-clean'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='flushbyasid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='pause-filter'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='pfthreshold'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='svme-addr-chk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='disable' name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </mode>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <mode name='custom' supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Broadwell'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Broadwell-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Broadwell-noTSX'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Broadwell-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Broadwell-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Broadwell-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Broadwell-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cascadelake-Server'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cascadelake-Server-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cascadelake-Server-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cascadelake-Server-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cascadelake-Server-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cascadelake-Server-v5'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='ClearwaterForest'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni-int16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bhi-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cmpccxadd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ddpd-u'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='intel-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='lam'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='prefetchiti'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sha512'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sm3'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sm4'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='ClearwaterForest-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni-int16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bhi-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cmpccxadd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ddpd-u'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='intel-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='lam'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='prefetchiti'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sha512'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sm3'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sm4'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cooperlake'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cooperlake-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cooperlake-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Denverton'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mpx'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Denverton-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mpx'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Denverton-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Denverton-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Dhyana-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Genoa'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amd-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='auto-ibrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='stibp-always-on'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Genoa-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amd-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='auto-ibrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='stibp-always-on'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Genoa-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amd-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='auto-ibrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fs-gs-base-ns'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='perfmon-v2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='stibp-always-on'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Milan'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Milan-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Milan-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amd-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='stibp-always-on'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Milan-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amd-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='stibp-always-on'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Rome'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Rome-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Rome-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Rome-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Turin'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amd-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='auto-ibrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vp2intersect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fs-gs-base-ns'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibpb-brtype'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='perfmon-v2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='prefetchi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbpb'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='srso-user-kernel-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='stibp-always-on'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Turin-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amd-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='auto-ibrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vp2intersect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fs-gs-base-ns'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibpb-brtype'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='perfmon-v2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='prefetchi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbpb'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='srso-user-kernel-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='stibp-always-on'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-v5'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='GraniteRapids'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='prefetchiti'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='GraniteRapids-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='prefetchiti'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='GraniteRapids-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx10'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx10-128'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx10-256'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx10-512'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='prefetchiti'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='GraniteRapids-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx10'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx10-128'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx10-256'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx10-512'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='prefetchiti'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Haswell'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Haswell-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Haswell-noTSX'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Haswell-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Haswell-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Haswell-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Haswell-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server-noTSX'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server-v5'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server-v6'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server-v7'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='IvyBridge'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='IvyBridge-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='IvyBridge-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='IvyBridge-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='KnightsMill'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-4fmaps'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-4vnniw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512er'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512pf'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='KnightsMill-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-4fmaps'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-4vnniw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512er'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512pf'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Opteron_G4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fma4'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xop'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Opteron_G4-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fma4'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xop'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Opteron_G5'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fma4'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tbm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xop'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Opteron_G5-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fma4'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tbm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xop'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SapphireRapids'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SapphireRapids-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SapphireRapids-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SapphireRapids-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SapphireRapids-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SierraForest'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cmpccxadd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SierraForest-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cmpccxadd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SierraForest-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cmpccxadd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='intel-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='lam'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SierraForest-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cmpccxadd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='intel-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='lam'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Client'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Client-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Client-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Client-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Client-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Client-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Server'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Server-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Server-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Server-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Server-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Server-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Server-v5'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Snowridge'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='core-capability'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mpx'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='split-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Snowridge-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='core-capability'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mpx'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='split-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Snowridge-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='core-capability'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='split-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Snowridge-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='core-capability'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='split-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Snowridge-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='athlon'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='3dnow'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='3dnowext'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='athlon-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='3dnow'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='3dnowext'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='core2duo'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='core2duo-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='coreduo'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='coreduo-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='n270'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='n270-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='phenom'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='3dnow'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='3dnowext'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='phenom-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='3dnow'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='3dnowext'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </mode>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <memoryBacking supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <enum name='sourceType'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <value>file</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <value>anonymous</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <value>memfd</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  </memoryBacking>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <disk supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='diskDevice'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>disk</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>cdrom</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>floppy</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>lun</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='bus'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>fdc</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>scsi</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>virtio</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>usb</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>sata</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='model'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>virtio</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>virtio-transitional</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>virtio-non-transitional</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <graphics supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='type'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>vnc</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>egl-headless</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>dbus</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </graphics>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <video supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='modelType'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>vga</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>cirrus</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>virtio</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>none</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>bochs</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>ramfb</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <hostdev supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='mode'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>subsystem</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='startupPolicy'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>default</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>mandatory</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>requisite</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>optional</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='subsysType'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>usb</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>pci</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>scsi</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='capsType'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='pciBackend'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </hostdev>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <rng supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='model'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>virtio</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>virtio-transitional</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>virtio-non-transitional</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='backendModel'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>random</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>egd</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>builtin</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <filesystem supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='driverType'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>path</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>handle</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>virtiofs</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </filesystem>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <tpm supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='model'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>tpm-tis</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>tpm-crb</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='backendModel'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>emulator</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>external</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='backendVersion'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>2.0</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </tpm>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <redirdev supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='bus'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>usb</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </redirdev>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <channel supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='type'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>pty</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>unix</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </channel>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <crypto supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='model'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='type'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>qemu</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='backendModel'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>builtin</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </crypto>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <interface supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='backendType'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>default</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>passt</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <panic supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='model'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>isa</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>hyperv</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </panic>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <console supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='type'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>null</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>vc</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>pty</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>dev</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>file</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>pipe</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>stdio</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>udp</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>tcp</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>unix</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>qemu-vdagent</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>dbus</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </console>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <gic supported='no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <vmcoreinfo supported='yes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <genid supported='yes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <backingStoreInput supported='yes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <backup supported='yes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <async-teardown supported='yes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <s390-pv supported='no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <ps2 supported='yes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <tdx supported='no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <sev supported='no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <sgx supported='no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <hyperv supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='features'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>relaxed</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>vapic</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>spinlocks</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>vpindex</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>runtime</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>synic</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>stimer</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>reset</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>vendor_id</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>frequencies</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>reenlightenment</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>tlbflush</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>ipi</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>avic</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>emsr_bitmap</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>xmm_input</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <defaults>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <spinlocks>4095</spinlocks>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <stimer_direct>on</stimer_direct>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <tlbflush_direct>on</tlbflush_direct>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <tlbflush_extended>on</tlbflush_extended>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </defaults>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </hyperv>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <launchSecurity supported='no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:22:56 np0005593232 nova_compute[250269]: </domainCapabilities>
Jan 23 04:22:56 np0005593232 nova_compute[250269]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.205 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 23 04:22:56 np0005593232 nova_compute[250269]: 2026-01-23 09:22:56.210 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 23 04:22:56 np0005593232 nova_compute[250269]: <domainCapabilities>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <path>/usr/libexec/qemu-kvm</path>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <domain>kvm</domain>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <arch>x86_64</arch>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <vcpu max='240'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <iothreads supported='yes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <os supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <enum name='firmware'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <loader supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='type'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>rom</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>pflash</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='readonly'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>yes</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>no</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='secure'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>no</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </loader>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:  <cpu>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <mode name='host-passthrough' supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='hostPassthroughMigratable'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>on</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>off</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </mode>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <mode name='maximum' supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <enum name='maximumMigratable'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>on</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <value>off</value>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </enum>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </mode>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <mode name='host-model' supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <vendor>AMD</vendor>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='x2apic'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='tsc-deadline'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='hypervisor'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='tsc_adjust'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='spec-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='stibp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='ssbd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='cmp_legacy'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='overflow-recov'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='succor'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='ibrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='amd-ssbd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='virt-ssbd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='lbrv'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='tsc-scale'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='vmcb-clean'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='flushbyasid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='pause-filter'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='pfthreshold'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='svme-addr-chk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <feature policy='disable' name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    </mode>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:    <mode name='custom' supported='yes'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Broadwell'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Broadwell-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Broadwell-noTSX'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Broadwell-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Broadwell-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Broadwell-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Broadwell-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cascadelake-Server'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cascadelake-Server-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cascadelake-Server-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cascadelake-Server-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cascadelake-Server-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cascadelake-Server-v5'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='ClearwaterForest'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni-int16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bhi-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cmpccxadd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ddpd-u'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='intel-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='lam'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='prefetchiti'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sha512'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sm3'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sm4'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='ClearwaterForest-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni-int16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bhi-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cmpccxadd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ddpd-u'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='intel-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='lam'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='prefetchiti'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sha512'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sm3'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sm4'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cooperlake'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cooperlake-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Cooperlake-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Denverton'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mpx'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Denverton-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mpx'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Denverton-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Denverton-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Dhyana-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Genoa'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amd-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='auto-ibrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='stibp-always-on'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Genoa-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amd-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='auto-ibrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='stibp-always-on'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Genoa-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amd-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='auto-ibrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fs-gs-base-ns'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='perfmon-v2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='stibp-always-on'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Milan'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Milan-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Milan-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amd-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='stibp-always-on'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Milan-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amd-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='stibp-always-on'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Rome'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Rome-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Rome-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Rome-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Turin'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amd-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='auto-ibrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vp2intersect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fs-gs-base-ns'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibpb-brtype'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='perfmon-v2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='prefetchi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbpb'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='srso-user-kernel-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='stibp-always-on'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-Turin-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amd-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='auto-ibrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vp2intersect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fs-gs-base-ns'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibpb-brtype'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='no-nested-data-bp'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='null-sel-clr-base'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='perfmon-v2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='prefetchi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbpb'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='srso-user-kernel-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='stibp-always-on'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='EPYC-v5'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='GraniteRapids'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='prefetchiti'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='GraniteRapids-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='prefetchiti'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='GraniteRapids-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx10'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx10-128'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx10-256'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx10-512'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='prefetchiti'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='GraniteRapids-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx10'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx10-128'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx10-256'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx10-512'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='prefetchiti'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Haswell'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Haswell-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Haswell-noTSX'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Haswell-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Haswell-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Haswell-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Haswell-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server-noTSX'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server-v5'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server-v6'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Icelake-Server-v7'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='IvyBridge'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='IvyBridge-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='IvyBridge-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='IvyBridge-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='KnightsMill'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-4fmaps'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-4vnniw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512er'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512pf'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='KnightsMill-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-4fmaps'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-4vnniw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512er'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512pf'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Opteron_G4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fma4'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xop'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Opteron_G4-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fma4'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xop'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Opteron_G5'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fma4'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tbm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xop'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Opteron_G5-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fma4'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tbm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xop'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SapphireRapids'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SapphireRapids-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SapphireRapids-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SapphireRapids-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SapphireRapids-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='amx-tile'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-bf16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-fp16'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512-vpopcntdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bitalg'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vbmi2'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrc'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fzrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='la57'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='taa-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='tsx-ldtrk'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SierraForest'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cmpccxadd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SierraForest-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cmpccxadd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SierraForest-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cmpccxadd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='intel-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='lam'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='SierraForest-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ifma'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-ne-convert'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx-vnni-int8'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bhi-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='bus-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cmpccxadd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fbsdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='fsrs'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ibrs-all'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='intel-psfd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ipred-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='lam'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mcdt-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pbrsb-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='psdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rrsba-ctrl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='sbdr-ssdp-no'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='serialize'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vaes'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='vpclmulqdq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Client'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Client-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Client-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Client-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Client-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Client-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Server'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Server-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Server-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Server-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='hle'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='rtm'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Server-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Server-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Skylake-Server-v5'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512bw'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512cd'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512dq'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512f'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='avx512vl'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='invpcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pcid'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='pku'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Snowridge'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='core-capability'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mpx'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='split-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Snowridge-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='core-capability'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='mpx'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='split-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Snowridge-v2'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='core-capability'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='split-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Snowridge-v3'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='core-capability'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='split-lock-detect'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='Snowridge-v4'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='cldemote'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='erms'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='gfni'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdir64b'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='movdiri'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='xsaves'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='athlon'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='3dnow'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='3dnowext'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='athlon-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='3dnow'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='3dnowext'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='core2duo'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='core2duo-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='coreduo'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      <blockers model='coreduo-v1'>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:        <feature name='ss'/>
Jan 23 04:22:56 np0005593232 nova_compute[250269]:      </blockers>
Jan 23 04:28:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:28:13.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:13 np0005593232 nova_compute[250269]: 2026-01-23 09:28:13.662 250273 DEBUG oslo_concurrency.processutils [None req-daba8113-e18f-4348-a0aa-35cb93b446d9 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/24a2f423-4385-453b-86f4-8e7cd37e96a6/disk.config 24a2f423-4385-453b-86f4-8e7cd37e96a6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:28:13 np0005593232 nova_compute[250269]: 2026-01-23 09:28:13.664 250273 INFO nova.virt.libvirt.driver [None req-daba8113-e18f-4348-a0aa-35cb93b446d9 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Deleting local config drive /var/lib/nova/instances/24a2f423-4385-453b-86f4-8e7cd37e96a6/disk.config because it was imported into RBD.
Jan 23 04:28:13 np0005593232 systemd[1]: Starting libvirt secret daemon...
Jan 23 04:28:13 np0005593232 systemd[1]: Started libvirt secret daemon.
Jan 23 04:28:13 np0005593232 systemd-machined[215836]: New machine qemu-1-instance-00000001.
Jan 23 04:28:13 np0005593232 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Jan 23 04:28:14 np0005593232 rsyslogd[1008]: imjournal: 5557 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 23 04:28:14 np0005593232 nova_compute[250269]: 2026-01-23 09:28:14.398 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160494.3972075, 24a2f423-4385-453b-86f4-8e7cd37e96a6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 04:28:14 np0005593232 nova_compute[250269]: 2026-01-23 09:28:14.399 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] VM Resumed (Lifecycle Event)
Jan 23 04:28:14 np0005593232 nova_compute[250269]: 2026-01-23 09:28:14.401 250273 DEBUG nova.compute.manager [None req-daba8113-e18f-4348-a0aa-35cb93b446d9 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 23 04:28:14 np0005593232 nova_compute[250269]: 2026-01-23 09:28:14.402 250273 DEBUG nova.virt.libvirt.driver [None req-daba8113-e18f-4348-a0aa-35cb93b446d9 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 23 04:28:14 np0005593232 nova_compute[250269]: 2026-01-23 09:28:14.405 250273 INFO nova.virt.libvirt.driver [-] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Instance spawned successfully.
Jan 23 04:28:14 np0005593232 nova_compute[250269]: 2026-01-23 09:28:14.405 250273 DEBUG nova.virt.libvirt.driver [None req-daba8113-e18f-4348-a0aa-35cb93b446d9 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 23 04:28:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v992: 321 pgs: 321 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.7 MiB/s wr, 52 op/s
Jan 23 04:28:14 np0005593232 nova_compute[250269]: 2026-01-23 09:28:14.848 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 04:28:14 np0005593232 nova_compute[250269]: 2026-01-23 09:28:14.852 250273 DEBUG nova.virt.libvirt.driver [None req-daba8113-e18f-4348-a0aa-35cb93b446d9 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 04:28:14 np0005593232 nova_compute[250269]: 2026-01-23 09:28:14.852 250273 DEBUG nova.virt.libvirt.driver [None req-daba8113-e18f-4348-a0aa-35cb93b446d9 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 04:28:14 np0005593232 nova_compute[250269]: 2026-01-23 09:28:14.853 250273 DEBUG nova.virt.libvirt.driver [None req-daba8113-e18f-4348-a0aa-35cb93b446d9 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 04:28:14 np0005593232 nova_compute[250269]: 2026-01-23 09:28:14.853 250273 DEBUG nova.virt.libvirt.driver [None req-daba8113-e18f-4348-a0aa-35cb93b446d9 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 04:28:14 np0005593232 nova_compute[250269]: 2026-01-23 09:28:14.854 250273 DEBUG nova.virt.libvirt.driver [None req-daba8113-e18f-4348-a0aa-35cb93b446d9 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 04:28:14 np0005593232 nova_compute[250269]: 2026-01-23 09:28:14.854 250273 DEBUG nova.virt.libvirt.driver [None req-daba8113-e18f-4348-a0aa-35cb93b446d9 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 04:28:14 np0005593232 nova_compute[250269]: 2026-01-23 09:28:14.858 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 04:28:14 np0005593232 nova_compute[250269]: 2026-01-23 09:28:14.911 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 04:28:14 np0005593232 nova_compute[250269]: 2026-01-23 09:28:14.912 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160494.3986182, 24a2f423-4385-453b-86f4-8e7cd37e96a6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 04:28:14 np0005593232 nova_compute[250269]: 2026-01-23 09:28:14.912 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] VM Started (Lifecycle Event)
Jan 23 04:28:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:28:15.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:15 np0005593232 nova_compute[250269]: 2026-01-23 09:28:15.302 250273 INFO nova.compute.manager [None req-daba8113-e18f-4348-a0aa-35cb93b446d9 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Took 13.55 seconds to spawn the instance on the hypervisor.
Jan 23 04:28:15 np0005593232 nova_compute[250269]: 2026-01-23 09:28:15.303 250273 DEBUG nova.compute.manager [None req-daba8113-e18f-4348-a0aa-35cb93b446d9 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 04:28:15 np0005593232 nova_compute[250269]: 2026-01-23 09:28:15.329 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 04:28:15 np0005593232 nova_compute[250269]: 2026-01-23 09:28:15.334 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 04:28:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:28:15.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:16 np0005593232 nova_compute[250269]: 2026-01-23 09:28:16.003 250273 INFO nova.compute.manager [None req-daba8113-e18f-4348-a0aa-35cb93b446d9 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Took 15.77 seconds to build instance.
Jan 23 04:28:16 np0005593232 nova_compute[250269]: 2026-01-23 09:28:16.179 250273 DEBUG oslo_concurrency.lockutils [None req-daba8113-e18f-4348-a0aa-35cb93b446d9 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lock "24a2f423-4385-453b-86f4-8e7cd37e96a6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:28:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:28:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Jan 23 04:28:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v993: 321 pgs: 321 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.4 MiB/s wr, 72 op/s
Jan 23 04:28:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Jan 23 04:28:17 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Jan 23 04:28:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:28:17.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:28:17.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v995: 321 pgs: 321 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 98 op/s
Jan 23 04:28:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:28:19.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:28:19.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v996: 321 pgs: 321 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.5 MiB/s wr, 118 op/s
Jan 23 04:28:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:28:21.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:28:21.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:28:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v997: 321 pgs: 321 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Jan 23 04:28:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:28:23.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:28:23.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:28:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:28:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:28:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:28:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:28:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:28:24 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 38970a36-1634-470b-a0a4-fcf9766e499c does not exist
Jan 23 04:28:24 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 8a66ba27-8c9e-47dd-80a3-1a4a0a69478b does not exist
Jan 23 04:28:24 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b0b0148b-764d-47c2-a43e-809015346ccb does not exist
Jan 23 04:28:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:28:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:28:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:28:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:28:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:28:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:28:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v998: 321 pgs: 321 active+clean; 88 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 79 op/s
Jan 23 04:28:24 np0005593232 podman[256667]: 2026-01-23 09:28:24.840675374 +0000 UTC m=+0.059439584 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:28:25 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:28:25 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:28:25 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:28:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:28:25.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:25 np0005593232 nova_compute[250269]: 2026-01-23 09:28:25.117 250273 DEBUG oslo_concurrency.lockutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Acquiring lock "9160345b-33cb-4242-a3c0-a1f41d03934f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:28:25 np0005593232 nova_compute[250269]: 2026-01-23 09:28:25.119 250273 DEBUG oslo_concurrency.lockutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lock "9160345b-33cb-4242-a3c0-a1f41d03934f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:28:25 np0005593232 nova_compute[250269]: 2026-01-23 09:28:25.171 250273 DEBUG nova.compute.manager [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:28:25 np0005593232 podman[256803]: 2026-01-23 09:28:25.284072703 +0000 UTC m=+0.041675720 container create 4914a4850c3639e7b4715d7208e2c63b5e263845423b45ebbd94df39a5b0f2fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bouman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:28:25 np0005593232 systemd[1]: Started libpod-conmon-4914a4850c3639e7b4715d7208e2c63b5e263845423b45ebbd94df39a5b0f2fa.scope.
Jan 23 04:28:25 np0005593232 nova_compute[250269]: 2026-01-23 09:28:25.346 250273 DEBUG oslo_concurrency.lockutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:28:25 np0005593232 nova_compute[250269]: 2026-01-23 09:28:25.348 250273 DEBUG oslo_concurrency.lockutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:28:25 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:28:25 np0005593232 nova_compute[250269]: 2026-01-23 09:28:25.357 250273 DEBUG nova.virt.hardware [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:28:25 np0005593232 nova_compute[250269]: 2026-01-23 09:28:25.358 250273 INFO nova.compute.claims [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:28:25 np0005593232 podman[256803]: 2026-01-23 09:28:25.265757855 +0000 UTC m=+0.023360882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:28:25 np0005593232 podman[256803]: 2026-01-23 09:28:25.376581291 +0000 UTC m=+0.134184318 container init 4914a4850c3639e7b4715d7208e2c63b5e263845423b45ebbd94df39a5b0f2fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 23 04:28:25 np0005593232 podman[256803]: 2026-01-23 09:28:25.384981149 +0000 UTC m=+0.142584176 container start 4914a4850c3639e7b4715d7208e2c63b5e263845423b45ebbd94df39a5b0f2fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:28:25 np0005593232 podman[256803]: 2026-01-23 09:28:25.38854728 +0000 UTC m=+0.146150307 container attach 4914a4850c3639e7b4715d7208e2c63b5e263845423b45ebbd94df39a5b0f2fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 04:28:25 np0005593232 nice_bouman[256819]: 167 167
Jan 23 04:28:25 np0005593232 systemd[1]: libpod-4914a4850c3639e7b4715d7208e2c63b5e263845423b45ebbd94df39a5b0f2fa.scope: Deactivated successfully.
Jan 23 04:28:25 np0005593232 conmon[256819]: conmon 4914a4850c3639e7b471 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4914a4850c3639e7b4715d7208e2c63b5e263845423b45ebbd94df39a5b0f2fa.scope/container/memory.events
Jan 23 04:28:25 np0005593232 podman[256803]: 2026-01-23 09:28:25.39312637 +0000 UTC m=+0.150729397 container died 4914a4850c3639e7b4715d7208e2c63b5e263845423b45ebbd94df39a5b0f2fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bouman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:28:25 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e5fa573fdf4244b5c1e7101e21a35121939871cde31a7174d88346861be6f4b4-merged.mount: Deactivated successfully.
Jan 23 04:28:25 np0005593232 podman[256803]: 2026-01-23 09:28:25.437400083 +0000 UTC m=+0.195003110 container remove 4914a4850c3639e7b4715d7208e2c63b5e263845423b45ebbd94df39a5b0f2fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_bouman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:28:25 np0005593232 systemd[1]: libpod-conmon-4914a4850c3639e7b4715d7208e2c63b5e263845423b45ebbd94df39a5b0f2fa.scope: Deactivated successfully.
Jan 23 04:28:25 np0005593232 nova_compute[250269]: 2026-01-23 09:28:25.581 250273 DEBUG oslo_concurrency.processutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:28:25 np0005593232 podman[256843]: 2026-01-23 09:28:25.618655973 +0000 UTC m=+0.047475095 container create eb2373ccaf196d5f85a6982ca246ff0a5f3c0dc9b4f236c35a6b3dfb6d4d1a3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_engelbart, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:28:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:28:25.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:25 np0005593232 systemd[1]: Started libpod-conmon-eb2373ccaf196d5f85a6982ca246ff0a5f3c0dc9b4f236c35a6b3dfb6d4d1a3e.scope.
Jan 23 04:28:25 np0005593232 podman[256843]: 2026-01-23 09:28:25.596353912 +0000 UTC m=+0.025173054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:28:25 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:28:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db90887f070fea6e05b642efe864491d6d18be32389cb4c6317818997de20386/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:28:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db90887f070fea6e05b642efe864491d6d18be32389cb4c6317818997de20386/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:28:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db90887f070fea6e05b642efe864491d6d18be32389cb4c6317818997de20386/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:28:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db90887f070fea6e05b642efe864491d6d18be32389cb4c6317818997de20386/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:28:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db90887f070fea6e05b642efe864491d6d18be32389cb4c6317818997de20386/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:28:25 np0005593232 podman[256843]: 2026-01-23 09:28:25.723102569 +0000 UTC m=+0.151921711 container init eb2373ccaf196d5f85a6982ca246ff0a5f3c0dc9b4f236c35a6b3dfb6d4d1a3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:28:25 np0005593232 podman[256843]: 2026-01-23 09:28:25.730802077 +0000 UTC m=+0.159621199 container start eb2373ccaf196d5f85a6982ca246ff0a5f3c0dc9b4f236c35a6b3dfb6d4d1a3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_engelbart, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:28:25 np0005593232 podman[256843]: 2026-01-23 09:28:25.734521272 +0000 UTC m=+0.163340414 container attach eb2373ccaf196d5f85a6982ca246ff0a5f3c0dc9b4f236c35a6b3dfb6d4d1a3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_engelbart, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 04:28:26 np0005593232 nova_compute[250269]: 2026-01-23 09:28:26.068 250273 DEBUG oslo_concurrency.processutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:28:26 np0005593232 nova_compute[250269]: 2026-01-23 09:28:26.078 250273 DEBUG nova.compute.provider_tree [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 04:28:26 np0005593232 nova_compute[250269]: 2026-01-23 09:28:26.149 250273 ERROR nova.scheduler.client.report [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [req-941bf5b4-b661-4506-ae0b-6506634ea981] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 0e4a8508-835c-4c0a-aa74-aae2c6536573.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-941bf5b4-b661-4506-ae0b-6506634ea981"}]}#033[00m
Jan 23 04:28:26 np0005593232 nova_compute[250269]: 2026-01-23 09:28:26.204 250273 DEBUG nova.scheduler.client.report [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Refreshing inventories for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 23 04:28:26 np0005593232 nova_compute[250269]: 2026-01-23 09:28:26.247 250273 DEBUG nova.scheduler.client.report [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Updating ProviderTree inventory for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 23 04:28:26 np0005593232 nova_compute[250269]: 2026-01-23 09:28:26.248 250273 DEBUG nova.compute.provider_tree [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 04:28:26 np0005593232 nova_compute[250269]: 2026-01-23 09:28:26.269 250273 DEBUG nova.scheduler.client.report [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Refreshing aggregate associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 23 04:28:26 np0005593232 nova_compute[250269]: 2026-01-23 09:28:26.316 250273 DEBUG nova.scheduler.client.report [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Refreshing trait associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 23 04:28:26 np0005593232 nova_compute[250269]: 2026-01-23 09:28:26.376 250273 DEBUG oslo_concurrency.processutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:28:26 np0005593232 dreamy_engelbart[256860]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:28:26 np0005593232 dreamy_engelbart[256860]: --> relative data size: 1.0
Jan 23 04:28:26 np0005593232 dreamy_engelbart[256860]: --> All data devices are unavailable
Jan 23 04:28:26 np0005593232 systemd[1]: libpod-eb2373ccaf196d5f85a6982ca246ff0a5f3c0dc9b4f236c35a6b3dfb6d4d1a3e.scope: Deactivated successfully.
Jan 23 04:28:26 np0005593232 conmon[256860]: conmon eb2373ccaf196d5f85a6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eb2373ccaf196d5f85a6982ca246ff0a5f3c0dc9b4f236c35a6b3dfb6d4d1a3e.scope/container/memory.events
Jan 23 04:28:26 np0005593232 podman[256843]: 2026-01-23 09:28:26.67706715 +0000 UTC m=+1.105886272 container died eb2373ccaf196d5f85a6982ca246ff0a5f3c0dc9b4f236c35a6b3dfb6d4d1a3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_engelbart, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:28:26 np0005593232 systemd[1]: var-lib-containers-storage-overlay-db90887f070fea6e05b642efe864491d6d18be32389cb4c6317818997de20386-merged.mount: Deactivated successfully.
Jan 23 04:28:26 np0005593232 podman[256843]: 2026-01-23 09:28:26.737632214 +0000 UTC m=+1.166451336 container remove eb2373ccaf196d5f85a6982ca246ff0a5f3c0dc9b4f236c35a6b3dfb6d4d1a3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:28:26 np0005593232 systemd[1]: libpod-conmon-eb2373ccaf196d5f85a6982ca246ff0a5f3c0dc9b4f236c35a6b3dfb6d4d1a3e.scope: Deactivated successfully.
Jan 23 04:28:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v999: 321 pgs: 321 active+clean; 91 MiB data, 215 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 255 KiB/s wr, 59 op/s
Jan 23 04:28:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:28:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2036010582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:28:26 np0005593232 nova_compute[250269]: 2026-01-23 09:28:26.878 250273 DEBUG oslo_concurrency.processutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:28:26 np0005593232 nova_compute[250269]: 2026-01-23 09:28:26.886 250273 DEBUG nova.compute.provider_tree [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 04:28:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:28:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:28:27.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:27 np0005593232 podman[257070]: 2026-01-23 09:28:27.328646971 +0000 UTC m=+0.036301049 container create 222e88f94a85970d02eecb8bdacfb02f35bd905d80046b8a09bbab14adb81dcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 23 04:28:27 np0005593232 systemd[1]: Started libpod-conmon-222e88f94a85970d02eecb8bdacfb02f35bd905d80046b8a09bbab14adb81dcb.scope.
Jan 23 04:28:27 np0005593232 podman[257070]: 2026-01-23 09:28:27.313225784 +0000 UTC m=+0.020879882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:28:27 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:28:27 np0005593232 podman[257070]: 2026-01-23 09:28:27.559690196 +0000 UTC m=+0.267344274 container init 222e88f94a85970d02eecb8bdacfb02f35bd905d80046b8a09bbab14adb81dcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_babbage, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 04:28:27 np0005593232 podman[257070]: 2026-01-23 09:28:27.568054522 +0000 UTC m=+0.275708600 container start 222e88f94a85970d02eecb8bdacfb02f35bd905d80046b8a09bbab14adb81dcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:28:27 np0005593232 podman[257070]: 2026-01-23 09:28:27.571488129 +0000 UTC m=+0.279142217 container attach 222e88f94a85970d02eecb8bdacfb02f35bd905d80046b8a09bbab14adb81dcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 04:28:27 np0005593232 goofy_babbage[257086]: 167 167
Jan 23 04:28:27 np0005593232 systemd[1]: libpod-222e88f94a85970d02eecb8bdacfb02f35bd905d80046b8a09bbab14adb81dcb.scope: Deactivated successfully.
Jan 23 04:28:27 np0005593232 podman[257070]: 2026-01-23 09:28:27.57576963 +0000 UTC m=+0.283423718 container died 222e88f94a85970d02eecb8bdacfb02f35bd905d80046b8a09bbab14adb81dcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_babbage, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:28:27 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5cadf0b47a157905b2532fe2a0c4a34aea685a7f66fa21f5a635d96a5b20d35f-merged.mount: Deactivated successfully.
Jan 23 04:28:27 np0005593232 podman[257070]: 2026-01-23 09:28:27.612317463 +0000 UTC m=+0.319971541 container remove 222e88f94a85970d02eecb8bdacfb02f35bd905d80046b8a09bbab14adb81dcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_babbage, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:28:27 np0005593232 systemd[1]: libpod-conmon-222e88f94a85970d02eecb8bdacfb02f35bd905d80046b8a09bbab14adb81dcb.scope: Deactivated successfully.
Jan 23 04:28:27 np0005593232 nova_compute[250269]: 2026-01-23 09:28:27.638 250273 DEBUG nova.scheduler.client.report [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Updated inventory for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with generation 8 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Jan 23 04:28:27 np0005593232 nova_compute[250269]: 2026-01-23 09:28:27.639 250273 DEBUG nova.compute.provider_tree [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Updating resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 generation from 8 to 9 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 23 04:28:27 np0005593232 nova_compute[250269]: 2026-01-23 09:28:27.640 250273 DEBUG nova.compute.provider_tree [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 04:28:27 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 23 04:28:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:28:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:28:27.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:28:27 np0005593232 podman[257109]: 2026-01-23 09:28:27.802148782 +0000 UTC m=+0.066451421 container create 165979fb12c80c3c262193c14a811ef9ee26959f31aca478de87d0f61a0bee98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chatelet, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 04:28:27 np0005593232 podman[257109]: 2026-01-23 09:28:27.76671875 +0000 UTC m=+0.031021419 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:28:27 np0005593232 systemd[1]: Started libpod-conmon-165979fb12c80c3c262193c14a811ef9ee26959f31aca478de87d0f61a0bee98.scope.
Jan 23 04:28:27 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:28:27 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b57a01e86bb9535fd8dd48a7499883bfe7ebdf4f2afdf0e9667360985e4cf306/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:28:27 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b57a01e86bb9535fd8dd48a7499883bfe7ebdf4f2afdf0e9667360985e4cf306/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:28:27 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b57a01e86bb9535fd8dd48a7499883bfe7ebdf4f2afdf0e9667360985e4cf306/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:28:27 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b57a01e86bb9535fd8dd48a7499883bfe7ebdf4f2afdf0e9667360985e4cf306/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:28:27 np0005593232 podman[257109]: 2026-01-23 09:28:27.92855606 +0000 UTC m=+0.192858719 container init 165979fb12c80c3c262193c14a811ef9ee26959f31aca478de87d0f61a0bee98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 04:28:27 np0005593232 podman[257109]: 2026-01-23 09:28:27.936168785 +0000 UTC m=+0.200471424 container start 165979fb12c80c3c262193c14a811ef9ee26959f31aca478de87d0f61a0bee98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chatelet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:28:27 np0005593232 podman[257109]: 2026-01-23 09:28:27.940615531 +0000 UTC m=+0.204918170 container attach 165979fb12c80c3c262193c14a811ef9ee26959f31aca478de87d0f61a0bee98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:28:28 np0005593232 nova_compute[250269]: 2026-01-23 09:28:28.042 250273 DEBUG oslo_concurrency.lockutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:28:28 np0005593232 nova_compute[250269]: 2026-01-23 09:28:28.045 250273 DEBUG nova.compute.manager [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 04:28:28 np0005593232 nova_compute[250269]: 2026-01-23 09:28:28.347 250273 DEBUG nova.compute.manager [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 04:28:28 np0005593232 nova_compute[250269]: 2026-01-23 09:28:28.348 250273 DEBUG nova.network.neutron [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:28:28 np0005593232 nova_compute[250269]: 2026-01-23 09:28:28.554 250273 INFO nova.virt.libvirt.driver [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]: {
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:    "0": [
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:        {
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:            "devices": [
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:                "/dev/loop3"
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:            ],
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:            "lv_name": "ceph_lv0",
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:            "lv_size": "7511998464",
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:            "name": "ceph_lv0",
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:            "tags": {
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:                "ceph.cluster_name": "ceph",
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:                "ceph.crush_device_class": "",
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:                "ceph.encrypted": "0",
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:                "ceph.osd_id": "0",
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:                "ceph.type": "block",
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:                "ceph.vdo": "0"
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:            },
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:            "type": "block",
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:            "vg_name": "ceph_vg0"
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:        }
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]:    ]
Jan 23 04:28:28 np0005593232 unruffled_chatelet[257174]: }
Jan 23 04:28:28 np0005593232 systemd[1]: libpod-165979fb12c80c3c262193c14a811ef9ee26959f31aca478de87d0f61a0bee98.scope: Deactivated successfully.
Jan 23 04:28:28 np0005593232 podman[257109]: 2026-01-23 09:28:28.782664284 +0000 UTC m=+1.046966933 container died 165979fb12c80c3c262193c14a811ef9ee26959f31aca478de87d0f61a0bee98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 04:28:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1000: 321 pgs: 321 active+clean; 96 MiB data, 218 MiB used, 21 GiB / 21 GiB avail; 1.5 MiB/s rd, 431 KiB/s wr, 71 op/s
Jan 23 04:28:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:28:29.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:29 np0005593232 nova_compute[250269]: 2026-01-23 09:28:29.327 250273 DEBUG nova.compute.manager [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:28:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:28:29.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:29 np0005593232 nova_compute[250269]: 2026-01-23 09:28:29.699 250273 DEBUG nova.network.neutron [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Automatically allocating a network for project 7ce4d2b2bd9d4e648ef6fd351b972262. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460#033[00m
Jan 23 04:28:30 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b57a01e86bb9535fd8dd48a7499883bfe7ebdf4f2afdf0e9667360985e4cf306-merged.mount: Deactivated successfully.
Jan 23 04:28:30 np0005593232 nova_compute[250269]: 2026-01-23 09:28:30.414 250273 DEBUG nova.compute.manager [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:28:30 np0005593232 nova_compute[250269]: 2026-01-23 09:28:30.415 250273 DEBUG nova.virt.libvirt.driver [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:28:30 np0005593232 nova_compute[250269]: 2026-01-23 09:28:30.415 250273 INFO nova.virt.libvirt.driver [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Creating image(s)#033[00m
Jan 23 04:28:30 np0005593232 nova_compute[250269]: 2026-01-23 09:28:30.440 250273 DEBUG nova.storage.rbd_utils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] rbd image 9160345b-33cb-4242-a3c0-a1f41d03934f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:28:30 np0005593232 nova_compute[250269]: 2026-01-23 09:28:30.471 250273 DEBUG nova.storage.rbd_utils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] rbd image 9160345b-33cb-4242-a3c0-a1f41d03934f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:28:30 np0005593232 nova_compute[250269]: 2026-01-23 09:28:30.504 250273 DEBUG nova.storage.rbd_utils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] rbd image 9160345b-33cb-4242-a3c0-a1f41d03934f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:28:30 np0005593232 nova_compute[250269]: 2026-01-23 09:28:30.508 250273 DEBUG oslo_concurrency.processutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:28:30 np0005593232 nova_compute[250269]: 2026-01-23 09:28:30.586 250273 DEBUG oslo_concurrency.processutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:28:30 np0005593232 nova_compute[250269]: 2026-01-23 09:28:30.587 250273 DEBUG oslo_concurrency.lockutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:28:30 np0005593232 nova_compute[250269]: 2026-01-23 09:28:30.588 250273 DEBUG oslo_concurrency.lockutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:28:30 np0005593232 nova_compute[250269]: 2026-01-23 09:28:30.588 250273 DEBUG oslo_concurrency.lockutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:28:30 np0005593232 nova_compute[250269]: 2026-01-23 09:28:30.614 250273 DEBUG nova.storage.rbd_utils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] rbd image 9160345b-33cb-4242-a3c0-a1f41d03934f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:28:30 np0005593232 nova_compute[250269]: 2026-01-23 09:28:30.617 250273 DEBUG oslo_concurrency.processutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 9160345b-33cb-4242-a3c0-a1f41d03934f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:28:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1001: 321 pgs: 321 active+clean; 129 MiB data, 237 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.4 MiB/s wr, 72 op/s
Jan 23 04:28:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:28:31.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:28:31.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:31 np0005593232 podman[257109]: 2026-01-23 09:28:31.70269706 +0000 UTC m=+3.966999699 container remove 165979fb12c80c3c262193c14a811ef9ee26959f31aca478de87d0f61a0bee98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:28:31 np0005593232 systemd[1]: libpod-conmon-165979fb12c80c3c262193c14a811ef9ee26959f31aca478de87d0f61a0bee98.scope: Deactivated successfully.
Jan 23 04:28:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:28:32 np0005593232 podman[257437]: 2026-01-23 09:28:32.29410707 +0000 UTC m=+0.043615126 container create 6d3e10ddfe262c251c9a75ffbee24073fb786a480624ff92569897b729877f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_nash, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 04:28:32 np0005593232 systemd[1]: Started libpod-conmon-6d3e10ddfe262c251c9a75ffbee24073fb786a480624ff92569897b729877f26.scope.
Jan 23 04:28:32 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:28:32 np0005593232 podman[257437]: 2026-01-23 09:28:32.273683392 +0000 UTC m=+0.023191478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:28:32 np0005593232 podman[257437]: 2026-01-23 09:28:32.425575691 +0000 UTC m=+0.175083747 container init 6d3e10ddfe262c251c9a75ffbee24073fb786a480624ff92569897b729877f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 23 04:28:32 np0005593232 podman[257437]: 2026-01-23 09:28:32.432772604 +0000 UTC m=+0.182280660 container start 6d3e10ddfe262c251c9a75ffbee24073fb786a480624ff92569897b729877f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_nash, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:28:32 np0005593232 gifted_nash[257453]: 167 167
Jan 23 04:28:32 np0005593232 systemd[1]: libpod-6d3e10ddfe262c251c9a75ffbee24073fb786a480624ff92569897b729877f26.scope: Deactivated successfully.
Jan 23 04:28:32 np0005593232 podman[257437]: 2026-01-23 09:28:32.444847426 +0000 UTC m=+0.194355502 container attach 6d3e10ddfe262c251c9a75ffbee24073fb786a480624ff92569897b729877f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:28:32 np0005593232 podman[257437]: 2026-01-23 09:28:32.445572587 +0000 UTC m=+0.195080643 container died 6d3e10ddfe262c251c9a75ffbee24073fb786a480624ff92569897b729877f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Jan 23 04:28:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1002: 321 pgs: 321 active+clean; 174 MiB data, 295 MiB used, 21 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.1 MiB/s wr, 98 op/s
Jan 23 04:28:32 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e3d9d85381aea5ccebb65d723249653eaa7e6989dca17db51954fe51a9129e4f-merged.mount: Deactivated successfully.
Jan 23 04:28:32 np0005593232 nova_compute[250269]: 2026-01-23 09:28:32.863 250273 DEBUG oslo_concurrency.processutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 9160345b-33cb-4242-a3c0-a1f41d03934f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.245s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:28:32 np0005593232 podman[257437]: 2026-01-23 09:28:32.880380043 +0000 UTC m=+0.629888099 container remove 6d3e10ddfe262c251c9a75ffbee24073fb786a480624ff92569897b729877f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_nash, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:28:32 np0005593232 systemd[1]: libpod-conmon-6d3e10ddfe262c251c9a75ffbee24073fb786a480624ff92569897b729877f26.scope: Deactivated successfully.
Jan 23 04:28:32 np0005593232 nova_compute[250269]: 2026-01-23 09:28:32.954 250273 DEBUG nova.storage.rbd_utils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] resizing rbd image 9160345b-33cb-4242-a3c0-a1f41d03934f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 04:28:33 np0005593232 podman[257532]: 2026-01-23 09:28:33.054134011 +0000 UTC m=+0.045344414 container create 7285db3de2e0d4d3de119b8b8abd96b1b38e14623000161a7318fa134d26a9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 04:28:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:28:33.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:33 np0005593232 systemd[1]: Started libpod-conmon-7285db3de2e0d4d3de119b8b8abd96b1b38e14623000161a7318fa134d26a9f7.scope.
Jan 23 04:28:33 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:28:33 np0005593232 podman[257532]: 2026-01-23 09:28:33.035448782 +0000 UTC m=+0.026659205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:28:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d14c377aa24c2db3370b664e2593d650bcafb8252ed308aefffdbeb45799925/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:28:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d14c377aa24c2db3370b664e2593d650bcafb8252ed308aefffdbeb45799925/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:28:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d14c377aa24c2db3370b664e2593d650bcafb8252ed308aefffdbeb45799925/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:28:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d14c377aa24c2db3370b664e2593d650bcafb8252ed308aefffdbeb45799925/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:28:33 np0005593232 podman[257532]: 2026-01-23 09:28:33.145528348 +0000 UTC m=+0.136738751 container init 7285db3de2e0d4d3de119b8b8abd96b1b38e14623000161a7318fa134d26a9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:28:33 np0005593232 podman[257532]: 2026-01-23 09:28:33.152526506 +0000 UTC m=+0.143736909 container start 7285db3de2e0d4d3de119b8b8abd96b1b38e14623000161a7318fa134d26a9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mendeleev, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 04:28:33 np0005593232 podman[257532]: 2026-01-23 09:28:33.157060764 +0000 UTC m=+0.148271167 container attach 7285db3de2e0d4d3de119b8b8abd96b1b38e14623000161a7318fa134d26a9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mendeleev, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:28:33 np0005593232 nova_compute[250269]: 2026-01-23 09:28:33.178 250273 DEBUG nova.objects.instance [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lazy-loading 'migration_context' on Instance uuid 9160345b-33cb-4242-a3c0-a1f41d03934f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:28:33 np0005593232 nova_compute[250269]: 2026-01-23 09:28:33.348 250273 DEBUG nova.virt.libvirt.driver [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:28:33 np0005593232 nova_compute[250269]: 2026-01-23 09:28:33.348 250273 DEBUG nova.virt.libvirt.driver [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Ensure instance console log exists: /var/lib/nova/instances/9160345b-33cb-4242-a3c0-a1f41d03934f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:28:33 np0005593232 nova_compute[250269]: 2026-01-23 09:28:33.348 250273 DEBUG oslo_concurrency.lockutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:28:33 np0005593232 nova_compute[250269]: 2026-01-23 09:28:33.349 250273 DEBUG oslo_concurrency.lockutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:28:33 np0005593232 nova_compute[250269]: 2026-01-23 09:28:33.349 250273 DEBUG oslo_concurrency.lockutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:28:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:28:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:28:33.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:28:34 np0005593232 condescending_mendeleev[257549]: {
Jan 23 04:28:34 np0005593232 condescending_mendeleev[257549]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:28:34 np0005593232 condescending_mendeleev[257549]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:28:34 np0005593232 condescending_mendeleev[257549]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:28:34 np0005593232 condescending_mendeleev[257549]:        "osd_id": 0,
Jan 23 04:28:34 np0005593232 condescending_mendeleev[257549]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:28:34 np0005593232 condescending_mendeleev[257549]:        "type": "bluestore"
Jan 23 04:28:34 np0005593232 condescending_mendeleev[257549]:    }
Jan 23 04:28:34 np0005593232 condescending_mendeleev[257549]: }
Jan 23 04:28:34 np0005593232 systemd[1]: libpod-7285db3de2e0d4d3de119b8b8abd96b1b38e14623000161a7318fa134d26a9f7.scope: Deactivated successfully.
Jan 23 04:28:34 np0005593232 podman[257588]: 2026-01-23 09:28:34.11016688 +0000 UTC m=+0.024130054 container died 7285db3de2e0d4d3de119b8b8abd96b1b38e14623000161a7318fa134d26a9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:28:34 np0005593232 systemd[1]: var-lib-containers-storage-overlay-9d14c377aa24c2db3370b664e2593d650bcafb8252ed308aefffdbeb45799925-merged.mount: Deactivated successfully.
Jan 23 04:28:34 np0005593232 podman[257588]: 2026-01-23 09:28:34.16882166 +0000 UTC m=+0.082784804 container remove 7285db3de2e0d4d3de119b8b8abd96b1b38e14623000161a7318fa134d26a9f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_mendeleev, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 04:28:34 np0005593232 systemd[1]: libpod-conmon-7285db3de2e0d4d3de119b8b8abd96b1b38e14623000161a7318fa134d26a9f7.scope: Deactivated successfully.
Jan 23 04:28:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:28:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:28:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:28:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:28:34 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d6137375-c56c-45ef-88a3-5dce731959eb does not exist
Jan 23 04:28:34 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ae10532b-8448-474d-9267-a66691a91fbd does not exist
Jan 23 04:28:34 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 143c6551-da0b-4a1f-99e6-c53f0e241c5e does not exist
Jan 23 04:28:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1003: 321 pgs: 321 active+clean; 241 MiB data, 333 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 6.9 MiB/s wr, 138 op/s
Jan 23 04:28:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:28:35.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:35 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:28:35 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:28:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:28:35.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1004: 321 pgs: 321 active+clean; 286 MiB data, 353 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 8.6 MiB/s wr, 174 op/s
Jan 23 04:28:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:28:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:28:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:28:37.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:28:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:28:37
Jan 23 04:28:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:28:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:28:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'backups', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.meta']
Jan 23 04:28:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:28:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:28:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:28:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:28:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:28:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:28:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:28:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:28:37.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:28:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:28:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:28:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:28:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:28:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:28:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:28:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:28:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:28:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:28:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1005: 321 pgs: 321 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 9.0 MiB/s wr, 182 op/s
Jan 23 04:28:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:28:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:28:39.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:28:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:28:39.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1006: 321 pgs: 321 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 8.8 MiB/s wr, 190 op/s
Jan 23 04:28:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:28:41.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:28:41.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:28:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:28:42.583 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:28:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:28:42.584 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:28:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:28:42.584 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:28:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1007: 321 pgs: 321 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 6.8 MiB/s wr, 182 op/s
Jan 23 04:28:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:28:43.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:43 np0005593232 podman[257660]: 2026-01-23 09:28:43.470935597 +0000 UTC m=+0.123224758 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 23 04:28:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:28:43.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1008: 321 pgs: 321 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.1 MiB/s wr, 160 op/s
Jan 23 04:28:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:28:45.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:28:45.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:28:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:28:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:28:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:28:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006128552193839568 of space, bias 1.0, pg target 1.8385656581518706 quantized to 32 (current 32)
Jan 23 04:28:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:28:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:28:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:28:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:28:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:28:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 23 04:28:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:28:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 04:28:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:28:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:28:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:28:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 04:28:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:28:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 04:28:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:28:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:28:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:28:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 04:28:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1009: 321 pgs: 321 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 119 op/s
Jan 23 04:28:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:28:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:28:47.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:28:47.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 04:28:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 5176 writes, 22K keys, 5176 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 5176 writes, 5176 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1434 writes, 6478 keys, 1434 commit groups, 1.0 writes per commit group, ingest: 10.06 MB, 0.02 MB/s#012Interval WAL: 1434 writes, 1434 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     39.2      0.71              0.10        13    0.054       0      0       0.0       0.0#012  L6      1/0    7.26 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.6     92.1     76.4      1.30              0.30        12    0.108     55K   6375       0.0       0.0#012 Sum      1/0    7.26 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.6     59.7     63.3      2.01              0.41        25    0.080     55K   6375       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.7     88.7     88.3      0.67              0.18        12    0.056     29K   3044       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     92.1     76.4      1.30              0.30        12    0.108     55K   6375       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     39.3      0.70              0.10        12    0.058       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.027, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.12 GB write, 0.07 MB/s write, 0.12 GB read, 0.07 MB/s read, 2.0 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a844231f0#2 capacity: 304.00 MB usage: 10.19 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000112 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(587,9.72 MB,3.19874%) FilterBlock(26,161.73 KB,0.0519552%) IndexBlock(26,310.38 KB,0.0997041%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 23 04:28:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1010: 321 pgs: 321 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 653 KiB/s wr, 83 op/s
Jan 23 04:28:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:28:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:28:49.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:28:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:28:49.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1011: 321 pgs: 321 active+clean; 306 MiB data, 361 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 04:28:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:28:51.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:28:51.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:28:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1012: 321 pgs: 321 active+clean; 325 MiB data, 372 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.3 MiB/s wr, 64 op/s
Jan 23 04:28:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:28:53.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:28:53.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1013: 321 pgs: 321 active+clean; 296 MiB data, 382 MiB used, 21 GiB / 21 GiB avail; 952 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Jan 23 04:28:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:28:55.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:55 np0005593232 podman[257742]: 2026-01-23 09:28:55.399888225 +0000 UTC m=+0.056380527 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:28:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:28:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:28:55.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:28:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1014: 321 pgs: 321 active+clean; 281 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Jan 23 04:28:56 np0005593232 nova_compute[250269]: 2026-01-23 09:28:56.898 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:28:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:28:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:28:57.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:28:57.226 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:28:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:28:57.227 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:28:57 np0005593232 nova_compute[250269]: 2026-01-23 09:28:57.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:28:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:28:57.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:28:58.229 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:28:58 np0005593232 nova_compute[250269]: 2026-01-23 09:28:58.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:28:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1015: 321 pgs: 321 active+clean; 259 MiB data, 371 MiB used, 21 GiB / 21 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Jan 23 04:28:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:28:59.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:28:59 np0005593232 nova_compute[250269]: 2026-01-23 09:28:59.521 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:28:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:28:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:28:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:28:59.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:00 np0005593232 nova_compute[250269]: 2026-01-23 09:29:00.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:29:00 np0005593232 nova_compute[250269]: 2026-01-23 09:29:00.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:29:00 np0005593232 nova_compute[250269]: 2026-01-23 09:29:00.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:29:00 np0005593232 nova_compute[250269]: 2026-01-23 09:29:00.365 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 23 04:29:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1016: 321 pgs: 321 active+clean; 259 MiB data, 371 MiB used, 21 GiB / 21 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Jan 23 04:29:00 np0005593232 nova_compute[250269]: 2026-01-23 09:29:00.965 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-24a2f423-4385-453b-86f4-8e7cd37e96a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:29:00 np0005593232 nova_compute[250269]: 2026-01-23 09:29:00.965 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-24a2f423-4385-453b-86f4-8e7cd37e96a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:29:00 np0005593232 nova_compute[250269]: 2026-01-23 09:29:00.965 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 04:29:00 np0005593232 nova_compute[250269]: 2026-01-23 09:29:00.966 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 24a2f423-4385-453b-86f4-8e7cd37e96a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:29:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:01.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:01 np0005593232 nova_compute[250269]: 2026-01-23 09:29:01.425 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:29:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:01.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:29:02 np0005593232 nova_compute[250269]: 2026-01-23 09:29:02.260 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:29:02 np0005593232 nova_compute[250269]: 2026-01-23 09:29:02.286 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-24a2f423-4385-453b-86f4-8e7cd37e96a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:29:02 np0005593232 nova_compute[250269]: 2026-01-23 09:29:02.286 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 04:29:02 np0005593232 nova_compute[250269]: 2026-01-23 09:29:02.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:29:02 np0005593232 nova_compute[250269]: 2026-01-23 09:29:02.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:29:02 np0005593232 nova_compute[250269]: 2026-01-23 09:29:02.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:29:02 np0005593232 nova_compute[250269]: 2026-01-23 09:29:02.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:29:02 np0005593232 nova_compute[250269]: 2026-01-23 09:29:02.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:29:02 np0005593232 nova_compute[250269]: 2026-01-23 09:29:02.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:29:02 np0005593232 nova_compute[250269]: 2026-01-23 09:29:02.325 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:29:02 np0005593232 nova_compute[250269]: 2026-01-23 09:29:02.326 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:29:02 np0005593232 nova_compute[250269]: 2026-01-23 09:29:02.326 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:29:02 np0005593232 nova_compute[250269]: 2026-01-23 09:29:02.326 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:29:02 np0005593232 nova_compute[250269]: 2026-01-23 09:29:02.326 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:29:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:29:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1847774534' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:29:02 np0005593232 nova_compute[250269]: 2026-01-23 09:29:02.764 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:29:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1017: 321 pgs: 321 active+clean; 259 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 91 op/s
Jan 23 04:29:02 np0005593232 nova_compute[250269]: 2026-01-23 09:29:02.907 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:29:02 np0005593232 nova_compute[250269]: 2026-01-23 09:29:02.907 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:29:03 np0005593232 nova_compute[250269]: 2026-01-23 09:29:03.115 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:29:03 np0005593232 nova_compute[250269]: 2026-01-23 09:29:03.116 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4932MB free_disk=20.880550384521484GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:29:03 np0005593232 nova_compute[250269]: 2026-01-23 09:29:03.116 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:29:03 np0005593232 nova_compute[250269]: 2026-01-23 09:29:03.117 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:29:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:29:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:03.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:29:03 np0005593232 nova_compute[250269]: 2026-01-23 09:29:03.237 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 24a2f423-4385-453b-86f4-8e7cd37e96a6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:29:03 np0005593232 nova_compute[250269]: 2026-01-23 09:29:03.237 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 9160345b-33cb-4242-a3c0-a1f41d03934f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:29:03 np0005593232 nova_compute[250269]: 2026-01-23 09:29:03.237 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:29:03 np0005593232 nova_compute[250269]: 2026-01-23 09:29:03.237 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:29:03 np0005593232 nova_compute[250269]: 2026-01-23 09:29:03.350 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:29:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:03.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:29:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3156155692' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:29:03 np0005593232 nova_compute[250269]: 2026-01-23 09:29:03.760 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:29:03 np0005593232 nova_compute[250269]: 2026-01-23 09:29:03.765 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:29:03 np0005593232 nova_compute[250269]: 2026-01-23 09:29:03.787 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:29:03 np0005593232 nova_compute[250269]: 2026-01-23 09:29:03.819 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:29:03 np0005593232 nova_compute[250269]: 2026-01-23 09:29:03.820 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:29:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1018: 321 pgs: 321 active+clean; 259 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 239 KiB/s rd, 875 KiB/s wr, 71 op/s
Jan 23 04:29:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:05.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Jan 23 04:29:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Jan 23 04:29:05 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Jan 23 04:29:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:05.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Jan 23 04:29:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Jan 23 04:29:06 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Jan 23 04:29:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1021: 321 pgs: 321 active+clean; 259 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 14 KiB/s wr, 2 op/s
Jan 23 04:29:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:29:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:29:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:07.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:29:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:29:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:29:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:29:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:29:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:29:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:29:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:07.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1022: 321 pgs: 321 active+clean; 272 MiB data, 370 MiB used, 21 GiB / 21 GiB avail; 127 B/s rd, 582 KiB/s wr, 3 op/s
Jan 23 04:29:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:09.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:09.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1023: 321 pgs: 321 active+clean; 281 MiB data, 366 MiB used, 21 GiB / 21 GiB avail; 24 KiB/s rd, 1.1 MiB/s wr, 35 op/s
Jan 23 04:29:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:11.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:11.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:29:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1024: 321 pgs: 321 active+clean; 306 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 32 KiB/s rd, 2.7 MiB/s wr, 50 op/s
Jan 23 04:29:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:13.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:13.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:14 np0005593232 podman[257866]: 2026-01-23 09:29:14.430535277 +0000 UTC m=+0.092526587 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 23 04:29:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1025: 321 pgs: 321 active+clean; 306 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 504 KiB/s rd, 2.3 MiB/s wr, 62 op/s
Jan 23 04:29:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:29:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:15.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:29:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:15.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1026: 321 pgs: 321 active+clean; 306 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 100 op/s
Jan 23 04:29:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:29:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Jan 23 04:29:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:17.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Jan 23 04:29:17 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Jan 23 04:29:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:17.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1028: 321 pgs: 321 active+clean; 306 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.7 MiB/s wr, 120 op/s
Jan 23 04:29:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:19.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:19.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:20 np0005593232 nova_compute[250269]: 2026-01-23 09:29:20.540 250273 DEBUG nova.network.neutron [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Automatically allocated network: {'id': 'f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7', 'name': 'auto_allocated_network', 'tenant_id': '7ce4d2b2bd9d4e648ef6fd351b972262', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['23d287b3-60e5-4a28-b726-19dba2a5fcc9', '327a0f17-5714-4333-a846-376efac64bfd'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2026-01-23T09:28:31Z', 'updated_at': '2026-01-23T09:29:07Z', 'revision_number': 4, 'project_id': '7ce4d2b2bd9d4e648ef6fd351b972262'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478#033[00m
Jan 23 04:29:20 np0005593232 nova_compute[250269]: 2026-01-23 09:29:20.552 250273 WARNING oslo_policy.policy [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Jan 23 04:29:20 np0005593232 nova_compute[250269]: 2026-01-23 09:29:20.553 250273 WARNING oslo_policy.policy [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Jan 23 04:29:20 np0005593232 nova_compute[250269]: 2026-01-23 09:29:20.555 250273 DEBUG nova.policy [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3f4fe5f838cb42d0ae4285971b115141', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7ce4d2b2bd9d4e648ef6fd351b972262', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 04:29:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1029: 321 pgs: 321 active+clean; 297 MiB data, 376 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 109 op/s
Jan 23 04:29:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:21.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:21 np0005593232 nova_compute[250269]: 2026-01-23 09:29:21.562 250273 DEBUG oslo_concurrency.processutils [None req-f7b0f381-c075-4161-b3f7-08d2ff824050 20e4e3a6b5674f81b3bf7d3b0f033a4a c2ca35ecb934411998938df9d0c08c34 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:29:21 np0005593232 nova_compute[250269]: 2026-01-23 09:29:21.584 250273 DEBUG oslo_concurrency.processutils [None req-f7b0f381-c075-4161-b3f7-08d2ff824050 20e4e3a6b5674f81b3bf7d3b0f033a4a c2ca35ecb934411998938df9d0c08c34 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:29:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:21.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:29:22 np0005593232 nova_compute[250269]: 2026-01-23 09:29:22.741 250273 DEBUG nova.network.neutron [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Successfully created port: 3051f461-de35-45b5-aa63-1e67e3732c66 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 04:29:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1030: 321 pgs: 321 active+clean; 267 MiB data, 365 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 591 KiB/s wr, 112 op/s
Jan 23 04:29:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:29:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:23.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:29:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:23.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:24 np0005593232 nova_compute[250269]: 2026-01-23 09:29:24.789 250273 DEBUG nova.network.neutron [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Successfully updated port: 3051f461-de35-45b5-aa63-1e67e3732c66 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 04:29:24 np0005593232 nova_compute[250269]: 2026-01-23 09:29:24.817 250273 DEBUG oslo_concurrency.lockutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Acquiring lock "refresh_cache-9160345b-33cb-4242-a3c0-a1f41d03934f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:29:24 np0005593232 nova_compute[250269]: 2026-01-23 09:29:24.818 250273 DEBUG oslo_concurrency.lockutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Acquired lock "refresh_cache-9160345b-33cb-4242-a3c0-a1f41d03934f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:29:24 np0005593232 nova_compute[250269]: 2026-01-23 09:29:24.818 250273 DEBUG nova.network.neutron [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:29:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1031: 321 pgs: 321 active+clean; 295 MiB data, 374 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 122 op/s
Jan 23 04:29:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:29:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:25.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:29:25 np0005593232 nova_compute[250269]: 2026-01-23 09:29:25.246 250273 DEBUG nova.network.neutron [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:29:25 np0005593232 nova_compute[250269]: 2026-01-23 09:29:25.426 250273 DEBUG nova.compute.manager [req-8783e8f7-4fcd-44ec-8d0e-8b3b944a4087 req-92f129fc-1822-482a-97e5-11f1752d52fb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Received event network-changed-3051f461-de35-45b5-aa63-1e67e3732c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:29:25 np0005593232 nova_compute[250269]: 2026-01-23 09:29:25.427 250273 DEBUG nova.compute.manager [req-8783e8f7-4fcd-44ec-8d0e-8b3b944a4087 req-92f129fc-1822-482a-97e5-11f1752d52fb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Refreshing instance network info cache due to event network-changed-3051f461-de35-45b5-aa63-1e67e3732c66. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:29:25 np0005593232 nova_compute[250269]: 2026-01-23 09:29:25.427 250273 DEBUG oslo_concurrency.lockutils [req-8783e8f7-4fcd-44ec-8d0e-8b3b944a4087 req-92f129fc-1822-482a-97e5-11f1752d52fb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-9160345b-33cb-4242-a3c0-a1f41d03934f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:29:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:25.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:26 np0005593232 podman[257898]: 2026-01-23 09:29:26.387049218 +0000 UTC m=+0.048957340 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 23 04:29:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1032: 321 pgs: 321 active+clean; 306 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 638 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Jan 23 04:29:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:27.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:29:27 np0005593232 nova_compute[250269]: 2026-01-23 09:29:27.579 250273 DEBUG nova.network.neutron [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Updating instance_info_cache with network_info: [{"id": "3051f461-de35-45b5-aa63-1e67e3732c66", "address": "fa:16:3e:6d:56:03", "network": {"id": "f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::272", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.128/26", "dns": [], "gateway": {"address": "10.1.0.129", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ce4d2b2bd9d4e648ef6fd351b972262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3051f461-de", "ovs_interfaceid": "3051f461-de35-45b5-aa63-1e67e3732c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:29:27 np0005593232 nova_compute[250269]: 2026-01-23 09:29:27.604 250273 DEBUG oslo_concurrency.lockutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Releasing lock "refresh_cache-9160345b-33cb-4242-a3c0-a1f41d03934f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:29:27 np0005593232 nova_compute[250269]: 2026-01-23 09:29:27.605 250273 DEBUG nova.compute.manager [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Instance network_info: |[{"id": "3051f461-de35-45b5-aa63-1e67e3732c66", "address": "fa:16:3e:6d:56:03", "network": {"id": "f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::272", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.128/26", "dns": [], "gateway": {"address": "10.1.0.129", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ce4d2b2bd9d4e648ef6fd351b972262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3051f461-de", "ovs_interfaceid": "3051f461-de35-45b5-aa63-1e67e3732c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:29:27 np0005593232 nova_compute[250269]: 2026-01-23 09:29:27.605 250273 DEBUG oslo_concurrency.lockutils [req-8783e8f7-4fcd-44ec-8d0e-8b3b944a4087 req-92f129fc-1822-482a-97e5-11f1752d52fb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-9160345b-33cb-4242-a3c0-a1f41d03934f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:29:27 np0005593232 nova_compute[250269]: 2026-01-23 09:29:27.605 250273 DEBUG nova.network.neutron [req-8783e8f7-4fcd-44ec-8d0e-8b3b944a4087 req-92f129fc-1822-482a-97e5-11f1752d52fb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Refreshing network info cache for port 3051f461-de35-45b5-aa63-1e67e3732c66 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:29:27 np0005593232 nova_compute[250269]: 2026-01-23 09:29:27.608 250273 DEBUG nova.virt.libvirt.driver [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Start _get_guest_xml network_info=[{"id": "3051f461-de35-45b5-aa63-1e67e3732c66", "address": "fa:16:3e:6d:56:03", "network": {"id": "f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::272", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.128/26", "dns": [], "gateway": {"address": "10.1.0.129", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ce4d2b2bd9d4e648ef6fd351b972262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3051f461-de", "ovs_interfaceid": "3051f461-de35-45b5-aa63-1e67e3732c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:29:27 np0005593232 nova_compute[250269]: 2026-01-23 09:29:27.612 250273 WARNING nova.virt.libvirt.driver [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:29:27 np0005593232 nova_compute[250269]: 2026-01-23 09:29:27.618 250273 DEBUG nova.virt.libvirt.host [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:29:27 np0005593232 nova_compute[250269]: 2026-01-23 09:29:27.619 250273 DEBUG nova.virt.libvirt.host [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:29:27 np0005593232 nova_compute[250269]: 2026-01-23 09:29:27.621 250273 DEBUG nova.virt.libvirt.host [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:29:27 np0005593232 nova_compute[250269]: 2026-01-23 09:29:27.622 250273 DEBUG nova.virt.libvirt.host [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:29:27 np0005593232 nova_compute[250269]: 2026-01-23 09:29:27.623 250273 DEBUG nova.virt.libvirt.driver [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:29:27 np0005593232 nova_compute[250269]: 2026-01-23 09:29:27.623 250273 DEBUG nova.virt.hardware [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:29:27 np0005593232 nova_compute[250269]: 2026-01-23 09:29:27.623 250273 DEBUG nova.virt.hardware [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:29:27 np0005593232 nova_compute[250269]: 2026-01-23 09:29:27.623 250273 DEBUG nova.virt.hardware [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:29:27 np0005593232 nova_compute[250269]: 2026-01-23 09:29:27.624 250273 DEBUG nova.virt.hardware [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:29:27 np0005593232 nova_compute[250269]: 2026-01-23 09:29:27.624 250273 DEBUG nova.virt.hardware [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:29:27 np0005593232 nova_compute[250269]: 2026-01-23 09:29:27.624 250273 DEBUG nova.virt.hardware [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:29:27 np0005593232 nova_compute[250269]: 2026-01-23 09:29:27.624 250273 DEBUG nova.virt.hardware [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:29:27 np0005593232 nova_compute[250269]: 2026-01-23 09:29:27.625 250273 DEBUG nova.virt.hardware [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:29:27 np0005593232 nova_compute[250269]: 2026-01-23 09:29:27.625 250273 DEBUG nova.virt.hardware [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:29:27 np0005593232 nova_compute[250269]: 2026-01-23 09:29:27.625 250273 DEBUG nova.virt.hardware [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:29:27 np0005593232 nova_compute[250269]: 2026-01-23 09:29:27.625 250273 DEBUG nova.virt.hardware [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:29:27 np0005593232 nova_compute[250269]: 2026-01-23 09:29:27.628 250273 DEBUG oslo_concurrency.processutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:29:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:27.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:29:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3036888616' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.076 250273 DEBUG oslo_concurrency.processutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.110 250273 DEBUG nova.storage.rbd_utils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] rbd image 9160345b-33cb-4242-a3c0-a1f41d03934f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.115 250273 DEBUG oslo_concurrency.processutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:29:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:29:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/781504465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.592 250273 DEBUG oslo_concurrency.processutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.594 250273 DEBUG nova.virt.libvirt.vif [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:28:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-590995285-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-590995285-3',id=5,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ce4d2b2bd9d4e648ef6fd351b972262',ramdisk_id='',reservation_id='r-lqgmnwk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-335645779',owner_user_name='tempest-AutoAllocateNetworkTest-335645779-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:28:29Z,user_data=None,user_id='3f4fe5f838cb42d0ae4285971b115141',uuid=9160345b-33cb-4242-a3c0-a1f41d03934f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3051f461-de35-45b5-aa63-1e67e3732c66", "address": "fa:16:3e:6d:56:03", "network": {"id": "f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::272", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.128/26", "dns": [], "gateway": {"address": "10.1.0.129", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ce4d2b2bd9d4e648ef6fd351b972262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3051f461-de", "ovs_interfaceid": "3051f461-de35-45b5-aa63-1e67e3732c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.595 250273 DEBUG nova.network.os_vif_util [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Converting VIF {"id": "3051f461-de35-45b5-aa63-1e67e3732c66", "address": "fa:16:3e:6d:56:03", "network": {"id": "f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::272", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.128/26", "dns": [], "gateway": {"address": "10.1.0.129", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ce4d2b2bd9d4e648ef6fd351b972262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3051f461-de", "ovs_interfaceid": "3051f461-de35-45b5-aa63-1e67e3732c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.596 250273 DEBUG nova.network.os_vif_util [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:56:03,bridge_name='br-int',has_traffic_filtering=True,id=3051f461-de35-45b5-aa63-1e67e3732c66,network=Network(f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3051f461-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.599 250273 DEBUG nova.objects.instance [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9160345b-33cb-4242-a3c0-a1f41d03934f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.634 250273 DEBUG nova.virt.libvirt.driver [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:29:28 np0005593232 nova_compute[250269]:  <uuid>9160345b-33cb-4242-a3c0-a1f41d03934f</uuid>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:  <name>instance-00000005</name>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <nova:name>tempest-tempest.common.compute-instance-590995285-3</nova:name>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:29:27</nova:creationTime>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:29:28 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:        <nova:user uuid="3f4fe5f838cb42d0ae4285971b115141">tempest-AutoAllocateNetworkTest-335645779-project-member</nova:user>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:        <nova:project uuid="7ce4d2b2bd9d4e648ef6fd351b972262">tempest-AutoAllocateNetworkTest-335645779</nova:project>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:        <nova:port uuid="3051f461-de35-45b5-aa63-1e67e3732c66">
Jan 23 04:29:28 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="fdfe:381f:8400:1::272" ipVersion="6"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.1.0.137" ipVersion="4"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <entry name="serial">9160345b-33cb-4242-a3c0-a1f41d03934f</entry>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <entry name="uuid">9160345b-33cb-4242-a3c0-a1f41d03934f</entry>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/9160345b-33cb-4242-a3c0-a1f41d03934f_disk">
Jan 23 04:29:28 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:29:28 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/9160345b-33cb-4242-a3c0-a1f41d03934f_disk.config">
Jan 23 04:29:28 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:29:28 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:6d:56:03"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <target dev="tap3051f461-de"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/9160345b-33cb-4242-a3c0-a1f41d03934f/console.log" append="off"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:29:28 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:29:28 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:29:28 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:29:28 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.635 250273 DEBUG nova.compute.manager [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Preparing to wait for external event network-vif-plugged-3051f461-de35-45b5-aa63-1e67e3732c66 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.636 250273 DEBUG oslo_concurrency.lockutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Acquiring lock "9160345b-33cb-4242-a3c0-a1f41d03934f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.636 250273 DEBUG oslo_concurrency.lockutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lock "9160345b-33cb-4242-a3c0-a1f41d03934f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.636 250273 DEBUG oslo_concurrency.lockutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lock "9160345b-33cb-4242-a3c0-a1f41d03934f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.637 250273 DEBUG nova.virt.libvirt.vif [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:28:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-590995285-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-590995285-3',id=5,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7ce4d2b2bd9d4e648ef6fd351b972262',ramdisk_id='',reservation_id='r-lqgmnwk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-335645779',owner_user_name='tempest-AutoAllocateNetworkTest-335645779-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:28:29Z,user_data=None,user_id='3f4fe5f838cb42d0ae4285971b115141',uuid=9160345b-33cb-4242-a3c0-a1f41d03934f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3051f461-de35-45b5-aa63-1e67e3732c66", "address": "fa:16:3e:6d:56:03", "network": {"id": "f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::272", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.128/26", "dns": [], "gateway": {"address": "10.1.0.129", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ce4d2b2bd9d4e648ef6fd351b972262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3051f461-de", "ovs_interfaceid": "3051f461-de35-45b5-aa63-1e67e3732c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.637 250273 DEBUG nova.network.os_vif_util [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Converting VIF {"id": "3051f461-de35-45b5-aa63-1e67e3732c66", "address": "fa:16:3e:6d:56:03", "network": {"id": "f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::272", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.128/26", "dns": [], "gateway": {"address": "10.1.0.129", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ce4d2b2bd9d4e648ef6fd351b972262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3051f461-de", "ovs_interfaceid": "3051f461-de35-45b5-aa63-1e67e3732c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.638 250273 DEBUG nova.network.os_vif_util [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:56:03,bridge_name='br-int',has_traffic_filtering=True,id=3051f461-de35-45b5-aa63-1e67e3732c66,network=Network(f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3051f461-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.639 250273 DEBUG os_vif [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:56:03,bridge_name='br-int',has_traffic_filtering=True,id=3051f461-de35-45b5-aa63-1e67e3732c66,network=Network(f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3051f461-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.682 250273 DEBUG ovsdbapp.backend.ovs_idl [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.682 250273 DEBUG ovsdbapp.backend.ovs_idl [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.683 250273 DEBUG ovsdbapp.backend.ovs_idl [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.683 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.684 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [POLLOUT] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.684 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.685 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.686 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.688 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.698 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.699 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.699 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:29:28 np0005593232 nova_compute[250269]: 2026-01-23 09:29:28.701 250273 INFO oslo.privsep.daemon [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpg08lg416/privsep.sock']#033[00m
Jan 23 04:29:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1033: 321 pgs: 321 active+clean; 306 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 45 KiB/s rd, 1.9 MiB/s wr, 65 op/s
Jan 23 04:29:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:29.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:29 np0005593232 nova_compute[250269]: 2026-01-23 09:29:29.408 250273 INFO oslo.privsep.daemon [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Jan 23 04:29:29 np0005593232 nova_compute[250269]: 2026-01-23 09:29:29.268 258036 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 23 04:29:29 np0005593232 nova_compute[250269]: 2026-01-23 09:29:29.273 258036 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 23 04:29:29 np0005593232 nova_compute[250269]: 2026-01-23 09:29:29.275 258036 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Jan 23 04:29:29 np0005593232 nova_compute[250269]: 2026-01-23 09:29:29.275 258036 INFO oslo.privsep.daemon [-] privsep daemon running as pid 258036#033[00m
Jan 23 04:29:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:29.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:29 np0005593232 nova_compute[250269]: 2026-01-23 09:29:29.768 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:29:29 np0005593232 nova_compute[250269]: 2026-01-23 09:29:29.768 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3051f461-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:29:29 np0005593232 nova_compute[250269]: 2026-01-23 09:29:29.769 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3051f461-de, col_values=(('external_ids', {'iface-id': '3051f461-de35-45b5-aa63-1e67e3732c66', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:56:03', 'vm-uuid': '9160345b-33cb-4242-a3c0-a1f41d03934f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:29:29 np0005593232 nova_compute[250269]: 2026-01-23 09:29:29.771 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:29:29 np0005593232 NetworkManager[49057]: <info>  [1769160569.7721] manager: (tap3051f461-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Jan 23 04:29:29 np0005593232 nova_compute[250269]: 2026-01-23 09:29:29.774 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:29:29 np0005593232 nova_compute[250269]: 2026-01-23 09:29:29.778 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:29:29 np0005593232 nova_compute[250269]: 2026-01-23 09:29:29.779 250273 INFO os_vif [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:56:03,bridge_name='br-int',has_traffic_filtering=True,id=3051f461-de35-45b5-aa63-1e67e3732c66,network=Network(f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3051f461-de')#033[00m
Jan 23 04:29:29 np0005593232 nova_compute[250269]: 2026-01-23 09:29:29.908 250273 DEBUG nova.virt.libvirt.driver [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:29:29 np0005593232 nova_compute[250269]: 2026-01-23 09:29:29.908 250273 DEBUG nova.virt.libvirt.driver [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:29:29 np0005593232 nova_compute[250269]: 2026-01-23 09:29:29.909 250273 DEBUG nova.virt.libvirt.driver [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] No VIF found with MAC fa:16:3e:6d:56:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:29:29 np0005593232 nova_compute[250269]: 2026-01-23 09:29:29.909 250273 INFO nova.virt.libvirt.driver [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Using config drive#033[00m
Jan 23 04:29:29 np0005593232 nova_compute[250269]: 2026-01-23 09:29:29.940 250273 DEBUG nova.storage.rbd_utils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] rbd image 9160345b-33cb-4242-a3c0-a1f41d03934f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:29:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1034: 321 pgs: 321 active+clean; 306 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 986 KiB/s rd, 1.8 MiB/s wr, 93 op/s
Jan 23 04:29:31 np0005593232 nova_compute[250269]: 2026-01-23 09:29:31.012 250273 INFO nova.virt.libvirt.driver [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Creating config drive at /var/lib/nova/instances/9160345b-33cb-4242-a3c0-a1f41d03934f/disk.config#033[00m
Jan 23 04:29:31 np0005593232 nova_compute[250269]: 2026-01-23 09:29:31.018 250273 DEBUG oslo_concurrency.processutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9160345b-33cb-4242-a3c0-a1f41d03934f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpisqfsm6v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:29:31 np0005593232 nova_compute[250269]: 2026-01-23 09:29:31.145 250273 DEBUG oslo_concurrency.processutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9160345b-33cb-4242-a3c0-a1f41d03934f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpisqfsm6v" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:29:31 np0005593232 nova_compute[250269]: 2026-01-23 09:29:31.174 250273 DEBUG nova.storage.rbd_utils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] rbd image 9160345b-33cb-4242-a3c0-a1f41d03934f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:29:31 np0005593232 nova_compute[250269]: 2026-01-23 09:29:31.178 250273 DEBUG oslo_concurrency.processutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9160345b-33cb-4242-a3c0-a1f41d03934f/disk.config 9160345b-33cb-4242-a3c0-a1f41d03934f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:29:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:29:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:31.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:29:31 np0005593232 nova_compute[250269]: 2026-01-23 09:29:31.252 250273 DEBUG nova.network.neutron [req-8783e8f7-4fcd-44ec-8d0e-8b3b944a4087 req-92f129fc-1822-482a-97e5-11f1752d52fb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Updated VIF entry in instance network info cache for port 3051f461-de35-45b5-aa63-1e67e3732c66. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:29:31 np0005593232 nova_compute[250269]: 2026-01-23 09:29:31.253 250273 DEBUG nova.network.neutron [req-8783e8f7-4fcd-44ec-8d0e-8b3b944a4087 req-92f129fc-1822-482a-97e5-11f1752d52fb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Updating instance_info_cache with network_info: [{"id": "3051f461-de35-45b5-aa63-1e67e3732c66", "address": "fa:16:3e:6d:56:03", "network": {"id": "f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::272", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.128/26", "dns": [], "gateway": {"address": "10.1.0.129", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ce4d2b2bd9d4e648ef6fd351b972262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3051f461-de", "ovs_interfaceid": "3051f461-de35-45b5-aa63-1e67e3732c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:29:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:31.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:32 np0005593232 nova_compute[250269]: 2026-01-23 09:29:32.020 250273 DEBUG oslo_concurrency.processutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9160345b-33cb-4242-a3c0-a1f41d03934f/disk.config 9160345b-33cb-4242-a3c0-a1f41d03934f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.842s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:29:32 np0005593232 nova_compute[250269]: 2026-01-23 09:29:32.020 250273 INFO nova.virt.libvirt.driver [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Deleting local config drive /var/lib/nova/instances/9160345b-33cb-4242-a3c0-a1f41d03934f/disk.config because it was imported into RBD.#033[00m
Jan 23 04:29:32 np0005593232 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 23 04:29:32 np0005593232 kernel: tap3051f461-de: entered promiscuous mode
Jan 23 04:29:32 np0005593232 NetworkManager[49057]: <info>  [1769160572.0829] manager: (tap3051f461-de): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Jan 23 04:29:32 np0005593232 ovn_controller[151001]: 2026-01-23T09:29:32Z|00027|binding|INFO|Claiming lport 3051f461-de35-45b5-aa63-1e67e3732c66 for this chassis.
Jan 23 04:29:32 np0005593232 ovn_controller[151001]: 2026-01-23T09:29:32Z|00028|binding|INFO|3051f461-de35-45b5-aa63-1e67e3732c66: Claiming fa:16:3e:6d:56:03 10.1.0.137 fdfe:381f:8400:1::272
Jan 23 04:29:32 np0005593232 nova_compute[250269]: 2026-01-23 09:29:32.084 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:29:32 np0005593232 nova_compute[250269]: 2026-01-23 09:29:32.089 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:29:32 np0005593232 systemd-udevd[258116]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:29:32 np0005593232 NetworkManager[49057]: <info>  [1769160572.1281] device (tap3051f461-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:29:32 np0005593232 NetworkManager[49057]: <info>  [1769160572.1295] device (tap3051f461-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:29:32 np0005593232 systemd-machined[215836]: New machine qemu-2-instance-00000005.
Jan 23 04:29:32 np0005593232 systemd[1]: Started Virtual Machine qemu-2-instance-00000005.
Jan 23 04:29:32 np0005593232 nova_compute[250269]: 2026-01-23 09:29:32.166 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:29:32 np0005593232 ovn_controller[151001]: 2026-01-23T09:29:32Z|00029|binding|INFO|Setting lport 3051f461-de35-45b5-aa63-1e67e3732c66 ovn-installed in OVS
Jan 23 04:29:32 np0005593232 nova_compute[250269]: 2026-01-23 09:29:32.174 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:29:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:29:32 np0005593232 ovn_controller[151001]: 2026-01-23T09:29:32Z|00030|binding|INFO|Setting lport 3051f461-de35-45b5-aa63-1e67e3732c66 up in Southbound
Jan 23 04:29:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:32.313 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:56:03 10.1.0.137 fdfe:381f:8400:1::272'], port_security=['fa:16:3e:6d:56:03 10.1.0.137 fdfe:381f:8400:1::272'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.137/26 fdfe:381f:8400:1::272/64', 'neutron:device_id': '9160345b-33cb-4242-a3c0-a1f41d03934f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ce4d2b2bd9d4e648ef6fd351b972262', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a1793278-8c6f-49e9-be94-8a60e6a54c2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5658ab5-291d-4119-8cb7-9ecc0ad5a8b4, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=3051f461-de35-45b5-aa63-1e67e3732c66) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:29:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:32.314 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 3051f461-de35-45b5-aa63-1e67e3732c66 in datapath f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7 bound to our chassis#033[00m
Jan 23 04:29:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:32.316 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7#033[00m
Jan 23 04:29:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:32.318 161902 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp4oy3goyp/privsep.sock']#033[00m
Jan 23 04:29:32 np0005593232 nova_compute[250269]: 2026-01-23 09:29:32.325 250273 DEBUG oslo_concurrency.lockutils [req-8783e8f7-4fcd-44ec-8d0e-8b3b944a4087 req-92f129fc-1822-482a-97e5-11f1752d52fb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-9160345b-33cb-4242-a3c0-a1f41d03934f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:29:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1035: 321 pgs: 321 active+clean; 306 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Jan 23 04:29:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:33.111 161902 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 23 04:29:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:33.112 161902 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp4oy3goyp/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 23 04:29:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:32.972 258153 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 23 04:29:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:32.976 258153 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 23 04:29:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:32.978 258153 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Jan 23 04:29:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:32.978 258153 INFO oslo.privsep.daemon [-] privsep daemon running as pid 258153#033[00m
Jan 23 04:29:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:33.115 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c3a312c5-ec16-4591-9085-ddadb0943594]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:29:33 np0005593232 nova_compute[250269]: 2026-01-23 09:29:33.150 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:29:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:33.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:33 np0005593232 nova_compute[250269]: 2026-01-23 09:29:33.467 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160573.4664512, 9160345b-33cb-4242-a3c0-a1f41d03934f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:29:33 np0005593232 nova_compute[250269]: 2026-01-23 09:29:33.468 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] VM Started (Lifecycle Event)#033[00m
Jan 23 04:29:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:33.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:34.235 258153 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:29:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:34.235 258153 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:29:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:34.235 258153 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:29:34 np0005593232 nova_compute[250269]: 2026-01-23 09:29:34.771 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:29:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1036: 321 pgs: 321 active+clean; 306 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.4 MiB/s wr, 186 op/s
Jan 23 04:29:34 np0005593232 nova_compute[250269]: 2026-01-23 09:29:34.932 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:29:34 np0005593232 nova_compute[250269]: 2026-01-23 09:29:34.938 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160573.466712, 9160345b-33cb-4242-a3c0-a1f41d03934f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:29:34 np0005593232 nova_compute[250269]: 2026-01-23 09:29:34.939 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:29:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:34.982 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[34214327-cb8e-4d42-9604-c6889cd218b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:29:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:34.983 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf0fce0a3-e1 in ovnmeta-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:29:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:34.986 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf0fce0a3-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:29:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:34.986 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[78067fbb-b2cd-4a5d-a0e8-780edbf389e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:29:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:34.991 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1d516d87-58cd-4b6e-bcf1-3a7598645eff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:29:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:35.028 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[2aaa4152-df75-4d2b-ba3b-7d7c7410567a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:29:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:35.045 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[aac389b8-6329-4b62-9640-3873bd2d5d00]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:29:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:35.047 161902 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpiwdc4iu4/privsep.sock']#033[00m
Jan 23 04:29:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:29:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:35.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:29:35 np0005593232 nova_compute[250269]: 2026-01-23 09:29:35.407 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:29:35 np0005593232 nova_compute[250269]: 2026-01-23 09:29:35.412 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:29:35 np0005593232 nova_compute[250269]: 2026-01-23 09:29:35.620 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:29:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:29:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:35.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:29:35 np0005593232 nova_compute[250269]: 2026-01-23 09:29:35.941 250273 DEBUG nova.compute.manager [req-06ad1ddd-d53f-4e0c-8c97-e3a856e6762c req-d897016a-14c9-4db8-88f2-8342a45216d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Received event network-vif-plugged-3051f461-de35-45b5-aa63-1e67e3732c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:29:35 np0005593232 nova_compute[250269]: 2026-01-23 09:29:35.942 250273 DEBUG oslo_concurrency.lockutils [req-06ad1ddd-d53f-4e0c-8c97-e3a856e6762c req-d897016a-14c9-4db8-88f2-8342a45216d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9160345b-33cb-4242-a3c0-a1f41d03934f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:29:35 np0005593232 nova_compute[250269]: 2026-01-23 09:29:35.942 250273 DEBUG oslo_concurrency.lockutils [req-06ad1ddd-d53f-4e0c-8c97-e3a856e6762c req-d897016a-14c9-4db8-88f2-8342a45216d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9160345b-33cb-4242-a3c0-a1f41d03934f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:29:35 np0005593232 nova_compute[250269]: 2026-01-23 09:29:35.943 250273 DEBUG oslo_concurrency.lockutils [req-06ad1ddd-d53f-4e0c-8c97-e3a856e6762c req-d897016a-14c9-4db8-88f2-8342a45216d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9160345b-33cb-4242-a3c0-a1f41d03934f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:29:35 np0005593232 nova_compute[250269]: 2026-01-23 09:29:35.943 250273 DEBUG nova.compute.manager [req-06ad1ddd-d53f-4e0c-8c97-e3a856e6762c req-d897016a-14c9-4db8-88f2-8342a45216d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Processing event network-vif-plugged-3051f461-de35-45b5-aa63-1e67e3732c66 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 04:29:35 np0005593232 nova_compute[250269]: 2026-01-23 09:29:35.944 250273 DEBUG nova.compute.manager [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:29:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:35.948 161902 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 23 04:29:35 np0005593232 nova_compute[250269]: 2026-01-23 09:29:35.948 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160575.9480853, 9160345b-33cb-4242-a3c0-a1f41d03934f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:29:35 np0005593232 nova_compute[250269]: 2026-01-23 09:29:35.948 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:29:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:35.949 161902 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpiwdc4iu4/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 23 04:29:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:35.812 258324 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 23 04:29:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:35.816 258324 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 23 04:29:35 np0005593232 nova_compute[250269]: 2026-01-23 09:29:35.950 250273 DEBUG nova.virt.libvirt.driver [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:29:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:35.818 258324 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Jan 23 04:29:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:35.819 258324 INFO oslo.privsep.daemon [-] privsep daemon running as pid 258324#033[00m
Jan 23 04:29:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:35.952 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[0220bbf7-deac-44ba-b633-55b3eaa4d028]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:29:35 np0005593232 nova_compute[250269]: 2026-01-23 09:29:35.953 250273 INFO nova.virt.libvirt.driver [-] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Instance spawned successfully.#033[00m
Jan 23 04:29:35 np0005593232 nova_compute[250269]: 2026-01-23 09:29:35.953 250273 DEBUG nova.virt.libvirt.driver [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:29:35 np0005593232 nova_compute[250269]: 2026-01-23 09:29:35.973 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:29:35 np0005593232 nova_compute[250269]: 2026-01-23 09:29:35.988 250273 DEBUG nova.virt.libvirt.driver [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:29:35 np0005593232 nova_compute[250269]: 2026-01-23 09:29:35.989 250273 DEBUG nova.virt.libvirt.driver [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:29:35 np0005593232 nova_compute[250269]: 2026-01-23 09:29:35.989 250273 DEBUG nova.virt.libvirt.driver [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:29:35 np0005593232 nova_compute[250269]: 2026-01-23 09:29:35.990 250273 DEBUG nova.virt.libvirt.driver [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:29:35 np0005593232 nova_compute[250269]: 2026-01-23 09:29:35.990 250273 DEBUG nova.virt.libvirt.driver [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:29:35 np0005593232 nova_compute[250269]: 2026-01-23 09:29:35.991 250273 DEBUG nova.virt.libvirt.driver [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:29:35 np0005593232 nova_compute[250269]: 2026-01-23 09:29:35.994 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:29:36 np0005593232 nova_compute[250269]: 2026-01-23 09:29:36.041 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:29:36 np0005593232 nova_compute[250269]: 2026-01-23 09:29:36.089 250273 INFO nova.compute.manager [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Took 65.68 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:29:36 np0005593232 nova_compute[250269]: 2026-01-23 09:29:36.090 250273 DEBUG nova.compute.manager [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:29:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:36.500 258324 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:29:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:36.500 258324 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:29:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:36.501 258324 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:29:36 np0005593232 nova_compute[250269]: 2026-01-23 09:29:36.582 250273 INFO nova.compute.manager [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Took 71.30 seconds to build instance.#033[00m
Jan 23 04:29:36 np0005593232 nova_compute[250269]: 2026-01-23 09:29:36.633 250273 DEBUG oslo_concurrency.lockutils [None req-0d6b6a9c-d266-47ae-bccd-b0f31cda7bf4 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lock "9160345b-33cb-4242-a3c0-a1f41d03934f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 71.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:29:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1037: 321 pgs: 321 active+clean; 306 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 5.1 MiB/s rd, 437 KiB/s wr, 210 op/s
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:37.122 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[ec728aef-ff54-4c7a-bab3-9ce8ca217288]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:29:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:29:37
Jan 23 04:29:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:29:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:29:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', '.rgw.root', 'vms', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'images']
Jan 23 04:29:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:37.143 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[01df62ea-7f90-4f88-b4fb-c469ecc272af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:29:37 np0005593232 NetworkManager[49057]: <info>  [1769160577.1442] manager: (tapf0fce0a3-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Jan 23 04:29:37 np0005593232 systemd-udevd[258337]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:37.183 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[5e8859f0-371a-4fb9-8025-1d7565e6e504]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:37.186 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[349780b3-bd9a-4798-b83c-8e54974a18f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:29:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:29:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:37.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:29:37 np0005593232 NetworkManager[49057]: <info>  [1769160577.2161] device (tapf0fce0a3-e0): carrier: link connected
Jan 23 04:29:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:29:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:29:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:29:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:37.221 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[0d57325c-0207-4d18-86ed-cdcdf0aaa613]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:29:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:29:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:37.238 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a9128f-f9f5-4a1f-875c-66c700877b61]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0fce0a3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:e7:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449035, 'reachable_time': 20036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258355, 'error': None, 'target': 'ovnmeta-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:37.254 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5ea3ceac-1c93-4e16-afe9-de5f7b726039]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe40:e729'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449035, 'tstamp': 449035}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258357, 'error': None, 'target': 'ovnmeta-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:37.273 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f97fde40-2c51-4f69-9b67-4f660239d18d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf0fce0a3-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:40:e7:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449035, 'reachable_time': 20036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258358, 'error': None, 'target': 'ovnmeta-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:37.303 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ead7042e-8785-4637-8f5b-a082d8296aad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:37.352 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2b6df048-9c95-4940-9786-ae75817d9202]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:37.355 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0fce0a3-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:37.355 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:37.356 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf0fce0a3-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 04:29:37 np0005593232 nova_compute[250269]: 2026-01-23 09:29:37.358 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:29:37 np0005593232 kernel: tapf0fce0a3-e0: entered promiscuous mode
Jan 23 04:29:37 np0005593232 NetworkManager[49057]: <info>  [1769160577.3591] manager: (tapf0fce0a3-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:37.361 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf0fce0a3-e0, col_values=(('external_ids', {'iface-id': '7b9b8a00-a18e-4fcd-9afc-7e5a3b0670b9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 04:29:37 np0005593232 nova_compute[250269]: 2026-01-23 09:29:37.362 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:29:37 np0005593232 ovn_controller[151001]: 2026-01-23T09:29:37Z|00031|binding|INFO|Releasing lport 7b9b8a00-a18e-4fcd-9afc-7e5a3b0670b9 from this chassis (sb_readonly=0)
Jan 23 04:29:37 np0005593232 nova_compute[250269]: 2026-01-23 09:29:37.379 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:37.381 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:37.382 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0edf8b6f-1a73-4982-b31b-112d026b90da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:37.384 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7.pid.haproxy
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 23 04:29:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:37.385 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7', 'env', 'PROCESS_TAG=haproxy-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 23 04:29:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:37.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:37 np0005593232 podman[258391]: 2026-01-23 09:29:37.76702115 +0000 UTC m=+0.034396807 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:29:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:29:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:29:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:29:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:29:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:29:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:29:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:29:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:29:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:29:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:29:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:29:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:29:38 np0005593232 podman[258391]: 2026-01-23 09:29:38.064031617 +0000 UTC m=+0.331407244 container create c2dcd537924cce108d9c2b874a89269f285c3913826795e40b0a06b3262b7118 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 23 04:29:38 np0005593232 systemd[1]: Started libpod-conmon-c2dcd537924cce108d9c2b874a89269f285c3913826795e40b0a06b3262b7118.scope.
Jan 23 04:29:38 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:29:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67f922385e5f6216d07fd5328cd413806790f833a917d9eb16d1bf72598fa5a5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:29:38 np0005593232 nova_compute[250269]: 2026-01-23 09:29:38.152 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:29:38 np0005593232 podman[258391]: 2026-01-23 09:29:38.189977571 +0000 UTC m=+0.457353218 container init c2dcd537924cce108d9c2b874a89269f285c3913826795e40b0a06b3262b7118 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:29:38 np0005593232 podman[258391]: 2026-01-23 09:29:38.195801046 +0000 UTC m=+0.463176673 container start c2dcd537924cce108d9c2b874a89269f285c3913826795e40b0a06b3262b7118 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 23 04:29:38 np0005593232 neutron-haproxy-ovnmeta-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7[258406]: [NOTICE]   (258410) : New worker (258412) forked
Jan 23 04:29:38 np0005593232 neutron-haproxy-ovnmeta-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7[258406]: [NOTICE]   (258410) : Loading success.
Jan 23 04:29:38 np0005593232 nova_compute[250269]: 2026-01-23 09:29:38.398 250273 DEBUG nova.compute.manager [req-e250417c-8221-4bab-9113-e621f86ee3e4 req-5f44ce0d-b0e7-41b7-80fb-0115bed3c5f8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Received event network-vif-plugged-3051f461-de35-45b5-aa63-1e67e3732c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 04:29:38 np0005593232 nova_compute[250269]: 2026-01-23 09:29:38.399 250273 DEBUG oslo_concurrency.lockutils [req-e250417c-8221-4bab-9113-e621f86ee3e4 req-5f44ce0d-b0e7-41b7-80fb-0115bed3c5f8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9160345b-33cb-4242-a3c0-a1f41d03934f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:29:38 np0005593232 nova_compute[250269]: 2026-01-23 09:29:38.399 250273 DEBUG oslo_concurrency.lockutils [req-e250417c-8221-4bab-9113-e621f86ee3e4 req-5f44ce0d-b0e7-41b7-80fb-0115bed3c5f8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9160345b-33cb-4242-a3c0-a1f41d03934f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:29:38 np0005593232 nova_compute[250269]: 2026-01-23 09:29:38.400 250273 DEBUG oslo_concurrency.lockutils [req-e250417c-8221-4bab-9113-e621f86ee3e4 req-5f44ce0d-b0e7-41b7-80fb-0115bed3c5f8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9160345b-33cb-4242-a3c0-a1f41d03934f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:29:38 np0005593232 nova_compute[250269]: 2026-01-23 09:29:38.400 250273 DEBUG nova.compute.manager [req-e250417c-8221-4bab-9113-e621f86ee3e4 req-5f44ce0d-b0e7-41b7-80fb-0115bed3c5f8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] No waiting events found dispatching network-vif-plugged-3051f461-de35-45b5-aa63-1e67e3732c66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 04:29:38 np0005593232 nova_compute[250269]: 2026-01-23 09:29:38.400 250273 WARNING nova.compute.manager [req-e250417c-8221-4bab-9113-e621f86ee3e4 req-5f44ce0d-b0e7-41b7-80fb-0115bed3c5f8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Received unexpected event network-vif-plugged-3051f461-de35-45b5-aa63-1e67e3732c66 for instance with vm_state active and task_state None.
Jan 23 04:29:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:29:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:29:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1038: 321 pgs: 321 active+clean; 306 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 6.3 MiB/s rd, 40 KiB/s wr, 241 op/s
Jan 23 04:29:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:29:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:29:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:39.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:29:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:39.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:39 np0005593232 nova_compute[250269]: 2026-01-23 09:29:39.773 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:29:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:29:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:29:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:29:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:29:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:29:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:29:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:29:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1039: 321 pgs: 321 active+clean; 312 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 7.1 MiB/s rd, 824 KiB/s wr, 277 op/s
Jan 23 04:29:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:29:40 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 53f43b25-417a-4923-8d92-74bd0fed348e does not exist
Jan 23 04:29:40 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 34cfa04b-06d4-465e-b015-ea8054988fa3 does not exist
Jan 23 04:29:40 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a50f07e4-6020-4016-86ac-872ecdb518b4 does not exist
Jan 23 04:29:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:29:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:29:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:29:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:29:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:29:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:29:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:29:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:41.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:29:41 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:29:41 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:29:41 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:29:41 np0005593232 podman[258567]: 2026-01-23 09:29:41.520845227 +0000 UTC m=+0.070838761 container create 0c134937b55ee90efe6cc01d412d99ffbf6d40ba87f0e677050652253a3ad1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lehmann, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 04:29:41 np0005593232 podman[258567]: 2026-01-23 09:29:41.477501467 +0000 UTC m=+0.027494991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:29:41 np0005593232 systemd[1]: Started libpod-conmon-0c134937b55ee90efe6cc01d412d99ffbf6d40ba87f0e677050652253a3ad1a9.scope.
Jan 23 04:29:41 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:29:41 np0005593232 podman[258567]: 2026-01-23 09:29:41.649623101 +0000 UTC m=+0.199616665 container init 0c134937b55ee90efe6cc01d412d99ffbf6d40ba87f0e677050652253a3ad1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lehmann, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:29:41 np0005593232 podman[258567]: 2026-01-23 09:29:41.661627891 +0000 UTC m=+0.211621395 container start 0c134937b55ee90efe6cc01d412d99ffbf6d40ba87f0e677050652253a3ad1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:29:41 np0005593232 condescending_lehmann[258583]: 167 167
Jan 23 04:29:41 np0005593232 systemd[1]: libpod-0c134937b55ee90efe6cc01d412d99ffbf6d40ba87f0e677050652253a3ad1a9.scope: Deactivated successfully.
Jan 23 04:29:41 np0005593232 podman[258567]: 2026-01-23 09:29:41.754708082 +0000 UTC m=+0.304701636 container attach 0c134937b55ee90efe6cc01d412d99ffbf6d40ba87f0e677050652253a3ad1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Jan 23 04:29:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:41.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:41 np0005593232 podman[258567]: 2026-01-23 09:29:41.758835619 +0000 UTC m=+0.308829143 container died 0c134937b55ee90efe6cc01d412d99ffbf6d40ba87f0e677050652253a3ad1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lehmann, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 23 04:29:41 np0005593232 systemd[1]: var-lib-containers-storage-overlay-449bde9e4166a7bc0f498a71656abc5e0cc2c2da46f49c00c2e926a5c56ae21f-merged.mount: Deactivated successfully.
Jan 23 04:29:41 np0005593232 podman[258567]: 2026-01-23 09:29:41.879369489 +0000 UTC m=+0.429362993 container remove 0c134937b55ee90efe6cc01d412d99ffbf6d40ba87f0e677050652253a3ad1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 04:29:41 np0005593232 systemd[1]: libpod-conmon-0c134937b55ee90efe6cc01d412d99ffbf6d40ba87f0e677050652253a3ad1a9.scope: Deactivated successfully.
Jan 23 04:29:42 np0005593232 podman[258608]: 2026-01-23 09:29:42.051905864 +0000 UTC m=+0.047852458 container create c38a95b245d1983a5381386fb8351b3b25d14fe2bd14e2b0264563f038109b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 04:29:42 np0005593232 systemd[1]: Started libpod-conmon-c38a95b245d1983a5381386fb8351b3b25d14fe2bd14e2b0264563f038109b37.scope.
Jan 23 04:29:42 np0005593232 podman[258608]: 2026-01-23 09:29:42.027471451 +0000 UTC m=+0.023418065 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:29:42 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:29:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8dc6e85bd1f657654c5bc02396c4f2ef8a3df530f43215fcff568da01508c71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:29:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8dc6e85bd1f657654c5bc02396c4f2ef8a3df530f43215fcff568da01508c71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:29:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8dc6e85bd1f657654c5bc02396c4f2ef8a3df530f43215fcff568da01508c71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:29:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8dc6e85bd1f657654c5bc02396c4f2ef8a3df530f43215fcff568da01508c71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:29:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8dc6e85bd1f657654c5bc02396c4f2ef8a3df530f43215fcff568da01508c71/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:29:42 np0005593232 podman[258608]: 2026-01-23 09:29:42.319027413 +0000 UTC m=+0.314974027 container init c38a95b245d1983a5381386fb8351b3b25d14fe2bd14e2b0264563f038109b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_euler, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:29:42 np0005593232 podman[258608]: 2026-01-23 09:29:42.326469104 +0000 UTC m=+0.322415698 container start c38a95b245d1983a5381386fb8351b3b25d14fe2bd14e2b0264563f038109b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_euler, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:29:42 np0005593232 podman[258608]: 2026-01-23 09:29:42.362478145 +0000 UTC m=+0.358424799 container attach c38a95b245d1983a5381386fb8351b3b25d14fe2bd14e2b0264563f038109b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_euler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:29:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:42.585 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:29:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:42.587 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:29:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:42.588 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:29:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1040: 321 pgs: 321 active+clean; 323 MiB data, 388 MiB used, 21 GiB / 21 GiB avail; 6.8 MiB/s rd, 1.7 MiB/s wr, 276 op/s
Jan 23 04:29:43 np0005593232 nova_compute[250269]: 2026-01-23 09:29:43.189 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:29:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:43.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:43 np0005593232 jovial_euler[258624]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:29:43 np0005593232 jovial_euler[258624]: --> relative data size: 1.0
Jan 23 04:29:43 np0005593232 jovial_euler[258624]: --> All data devices are unavailable
Jan 23 04:29:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:29:43 np0005593232 systemd[1]: libpod-c38a95b245d1983a5381386fb8351b3b25d14fe2bd14e2b0264563f038109b37.scope: Deactivated successfully.
Jan 23 04:29:43 np0005593232 podman[258608]: 2026-01-23 09:29:43.3632464 +0000 UTC m=+1.359193004 container died c38a95b245d1983a5381386fb8351b3b25d14fe2bd14e2b0264563f038109b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 04:29:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:43.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:43 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d8dc6e85bd1f657654c5bc02396c4f2ef8a3df530f43215fcff568da01508c71-merged.mount: Deactivated successfully.
Jan 23 04:29:43 np0005593232 podman[258608]: 2026-01-23 09:29:43.947410755 +0000 UTC m=+1.943357359 container remove c38a95b245d1983a5381386fb8351b3b25d14fe2bd14e2b0264563f038109b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 04:29:43 np0005593232 systemd[1]: libpod-conmon-c38a95b245d1983a5381386fb8351b3b25d14fe2bd14e2b0264563f038109b37.scope: Deactivated successfully.
Jan 23 04:29:44 np0005593232 podman[258797]: 2026-01-23 09:29:44.531985882 +0000 UTC m=+0.021609795 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:29:44 np0005593232 nova_compute[250269]: 2026-01-23 09:29:44.775 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:29:44 np0005593232 podman[258797]: 2026-01-23 09:29:44.791173015 +0000 UTC m=+0.280796898 container create 95064b715ae50ab0019109921b4c98ae94b93dc3217e680a56a77d1f02878b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 04:29:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1041: 321 pgs: 321 active+clean; 296 MiB data, 377 MiB used, 21 GiB / 21 GiB avail; 5.8 MiB/s rd, 2.0 MiB/s wr, 250 op/s
Jan 23 04:29:44 np0005593232 systemd[1]: Started libpod-conmon-95064b715ae50ab0019109921b4c98ae94b93dc3217e680a56a77d1f02878b0b.scope.
Jan 23 04:29:44 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:29:44 np0005593232 podman[258797]: 2026-01-23 09:29:44.981912357 +0000 UTC m=+0.471536270 container init 95064b715ae50ab0019109921b4c98ae94b93dc3217e680a56a77d1f02878b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_dewdney, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 23 04:29:44 np0005593232 podman[258797]: 2026-01-23 09:29:44.988804343 +0000 UTC m=+0.478428236 container start 95064b715ae50ab0019109921b4c98ae94b93dc3217e680a56a77d1f02878b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_dewdney, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 23 04:29:44 np0005593232 podman[258811]: 2026-01-23 09:29:44.989104701 +0000 UTC m=+0.155121732 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 23 04:29:44 np0005593232 determined_dewdney[258827]: 167 167
Jan 23 04:29:44 np0005593232 systemd[1]: libpod-95064b715ae50ab0019109921b4c98ae94b93dc3217e680a56a77d1f02878b0b.scope: Deactivated successfully.
Jan 23 04:29:45 np0005593232 podman[258797]: 2026-01-23 09:29:45.038080331 +0000 UTC m=+0.527704224 container attach 95064b715ae50ab0019109921b4c98ae94b93dc3217e680a56a77d1f02878b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_dewdney, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:29:45 np0005593232 podman[258797]: 2026-01-23 09:29:45.039421869 +0000 UTC m=+0.529045762 container died 95064b715ae50ab0019109921b4c98ae94b93dc3217e680a56a77d1f02878b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_dewdney, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 04:29:45 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a3dddc6db4f482268ed94b8ab3c31388e15f7ff668fe375e23f7c368f52570ab-merged.mount: Deactivated successfully.
Jan 23 04:29:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:29:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:45.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:29:45 np0005593232 podman[258797]: 2026-01-23 09:29:45.37672816 +0000 UTC m=+0.866352053 container remove 95064b715ae50ab0019109921b4c98ae94b93dc3217e680a56a77d1f02878b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:29:45 np0005593232 systemd[1]: libpod-conmon-95064b715ae50ab0019109921b4c98ae94b93dc3217e680a56a77d1f02878b0b.scope: Deactivated successfully.
Jan 23 04:29:45 np0005593232 podman[258867]: 2026-01-23 09:29:45.625616781 +0000 UTC m=+0.116258549 container create aa1870a8d522778426f23ff7a00cd31fa2d477e65b1bd5b158c3ecb61dce1455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 04:29:45 np0005593232 podman[258867]: 2026-01-23 09:29:45.534597789 +0000 UTC m=+0.025239587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:29:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:45.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:45 np0005593232 systemd[1]: Started libpod-conmon-aa1870a8d522778426f23ff7a00cd31fa2d477e65b1bd5b158c3ecb61dce1455.scope.
Jan 23 04:29:45 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:29:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6684ed50bc91e266b5e4f7dd465e0d6f1cceaa6ea363bff9c2606d706fec548e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:29:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6684ed50bc91e266b5e4f7dd465e0d6f1cceaa6ea363bff9c2606d706fec548e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:29:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6684ed50bc91e266b5e4f7dd465e0d6f1cceaa6ea363bff9c2606d706fec548e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:29:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6684ed50bc91e266b5e4f7dd465e0d6f1cceaa6ea363bff9c2606d706fec548e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:29:46 np0005593232 podman[258867]: 2026-01-23 09:29:46.090063698 +0000 UTC m=+0.580705486 container init aa1870a8d522778426f23ff7a00cd31fa2d477e65b1bd5b158c3ecb61dce1455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:29:46 np0005593232 podman[258867]: 2026-01-23 09:29:46.099022892 +0000 UTC m=+0.589664660 container start aa1870a8d522778426f23ff7a00cd31fa2d477e65b1bd5b158c3ecb61dce1455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pike, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 04:29:46 np0005593232 podman[258867]: 2026-01-23 09:29:46.263858639 +0000 UTC m=+0.754500437 container attach aa1870a8d522778426f23ff7a00cd31fa2d477e65b1bd5b158c3ecb61dce1455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pike, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:29:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:29:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:29:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:29:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:29:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006132005513679509 of space, bias 1.0, pg target 1.8396016541038527 quantized to 32 (current 32)
Jan 23 04:29:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:29:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Jan 23 04:29:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:29:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:29:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:29:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 23 04:29:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:29:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 04:29:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:29:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:29:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:29:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 04:29:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:29:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 04:29:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:29:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:29:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:29:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 04:29:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1042: 321 pgs: 321 active+clean; 286 MiB data, 373 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.5 MiB/s wr, 199 op/s
Jan 23 04:29:46 np0005593232 pensive_pike[258885]: {
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:    "0": [
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:        {
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:            "devices": [
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:                "/dev/loop3"
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:            ],
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:            "lv_name": "ceph_lv0",
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:            "lv_size": "7511998464",
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:            "name": "ceph_lv0",
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:            "tags": {
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:                "ceph.cluster_name": "ceph",
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:                "ceph.crush_device_class": "",
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:                "ceph.encrypted": "0",
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:                "ceph.osd_id": "0",
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:                "ceph.type": "block",
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:                "ceph.vdo": "0"
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:            },
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:            "type": "block",
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:            "vg_name": "ceph_vg0"
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:        }
Jan 23 04:29:46 np0005593232 pensive_pike[258885]:    ]
Jan 23 04:29:46 np0005593232 pensive_pike[258885]: }
Jan 23 04:29:46 np0005593232 systemd[1]: libpod-aa1870a8d522778426f23ff7a00cd31fa2d477e65b1bd5b158c3ecb61dce1455.scope: Deactivated successfully.
Jan 23 04:29:46 np0005593232 podman[258867]: 2026-01-23 09:29:46.96137307 +0000 UTC m=+1.452014838 container died aa1870a8d522778426f23ff7a00cd31fa2d477e65b1bd5b158c3ecb61dce1455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pike, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:29:47 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6684ed50bc91e266b5e4f7dd465e0d6f1cceaa6ea363bff9c2606d706fec548e-merged.mount: Deactivated successfully.
Jan 23 04:29:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:47.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:47 np0005593232 podman[258867]: 2026-01-23 09:29:47.349968316 +0000 UTC m=+1.840610094 container remove aa1870a8d522778426f23ff7a00cd31fa2d477e65b1bd5b158c3ecb61dce1455 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:29:47 np0005593232 systemd[1]: libpod-conmon-aa1870a8d522778426f23ff7a00cd31fa2d477e65b1bd5b158c3ecb61dce1455.scope: Deactivated successfully.
Jan 23 04:29:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:29:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:47.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:29:48 np0005593232 podman[259049]: 2026-01-23 09:29:48.001004887 +0000 UTC m=+0.051540603 container create 56a261a5e708920aea14ce2b247a0e3db7de88a69f820a37353154e4dbd75619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 23 04:29:48 np0005593232 systemd[1]: Started libpod-conmon-56a261a5e708920aea14ce2b247a0e3db7de88a69f820a37353154e4dbd75619.scope.
Jan 23 04:29:48 np0005593232 podman[259049]: 2026-01-23 09:29:47.971283724 +0000 UTC m=+0.021819470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:29:48 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:29:48 np0005593232 podman[259049]: 2026-01-23 09:29:48.11177823 +0000 UTC m=+0.162313966 container init 56a261a5e708920aea14ce2b247a0e3db7de88a69f820a37353154e4dbd75619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:29:48 np0005593232 podman[259049]: 2026-01-23 09:29:48.119182721 +0000 UTC m=+0.169718437 container start 56a261a5e708920aea14ce2b247a0e3db7de88a69f820a37353154e4dbd75619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 04:29:48 np0005593232 stupefied_archimedes[259065]: 167 167
Jan 23 04:29:48 np0005593232 systemd[1]: libpod-56a261a5e708920aea14ce2b247a0e3db7de88a69f820a37353154e4dbd75619.scope: Deactivated successfully.
Jan 23 04:29:48 np0005593232 podman[259049]: 2026-01-23 09:29:48.129854243 +0000 UTC m=+0.180389959 container attach 56a261a5e708920aea14ce2b247a0e3db7de88a69f820a37353154e4dbd75619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:29:48 np0005593232 podman[259049]: 2026-01-23 09:29:48.130203653 +0000 UTC m=+0.180739369 container died 56a261a5e708920aea14ce2b247a0e3db7de88a69f820a37353154e4dbd75619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:29:48 np0005593232 nova_compute[250269]: 2026-01-23 09:29:48.191 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:29:48 np0005593232 systemd[1]: var-lib-containers-storage-overlay-827d52c74814f2ab6b07d33116ac7b596c7c01c68ed7b802be12fcf697c40c2e-merged.mount: Deactivated successfully.
Jan 23 04:29:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:29:48 np0005593232 podman[259049]: 2026-01-23 09:29:48.377394217 +0000 UTC m=+0.427929933 container remove 56a261a5e708920aea14ce2b247a0e3db7de88a69f820a37353154e4dbd75619 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 04:29:48 np0005593232 systemd[1]: libpod-conmon-56a261a5e708920aea14ce2b247a0e3db7de88a69f820a37353154e4dbd75619.scope: Deactivated successfully.
Jan 23 04:29:48 np0005593232 podman[259139]: 2026-01-23 09:29:48.606808666 +0000 UTC m=+0.064581343 container create 6413ab95c146be75495635b71c79aa66072f94d4cdb8f548ffa01f0e23c4c79a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gates, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 04:29:48 np0005593232 podman[259139]: 2026-01-23 09:29:48.57698496 +0000 UTC m=+0.034757657 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:29:48 np0005593232 systemd[1]: Started libpod-conmon-6413ab95c146be75495635b71c79aa66072f94d4cdb8f548ffa01f0e23c4c79a.scope.
Jan 23 04:29:48 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:29:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b0ff26d4fb5e4d6559942483ea683edbf9d5c316ddb81f50278b2cfb7edd7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:29:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b0ff26d4fb5e4d6559942483ea683edbf9d5c316ddb81f50278b2cfb7edd7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:29:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b0ff26d4fb5e4d6559942483ea683edbf9d5c316ddb81f50278b2cfb7edd7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:29:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74b0ff26d4fb5e4d6559942483ea683edbf9d5c316ddb81f50278b2cfb7edd7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:29:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1043: 321 pgs: 321 active+clean; 281 MiB data, 375 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 162 op/s
Jan 23 04:29:49 np0005593232 podman[259139]: 2026-01-23 09:29:49.141387394 +0000 UTC m=+0.599160091 container init 6413ab95c146be75495635b71c79aa66072f94d4cdb8f548ffa01f0e23c4c79a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gates, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 04:29:49 np0005593232 podman[259139]: 2026-01-23 09:29:49.150946735 +0000 UTC m=+0.608719412 container start 6413ab95c146be75495635b71c79aa66072f94d4cdb8f548ffa01f0e23c4c79a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:29:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:49.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:49 np0005593232 podman[259139]: 2026-01-23 09:29:49.260474432 +0000 UTC m=+0.718247129 container attach 6413ab95c146be75495635b71c79aa66072f94d4cdb8f548ffa01f0e23c4c79a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 04:29:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:29:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:49.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:29:49 np0005593232 nova_compute[250269]: 2026-01-23 09:29:49.779 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:29:49 np0005593232 eager_gates[259156]: {
Jan 23 04:29:49 np0005593232 eager_gates[259156]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:29:49 np0005593232 eager_gates[259156]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:29:49 np0005593232 eager_gates[259156]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:29:49 np0005593232 eager_gates[259156]:        "osd_id": 0,
Jan 23 04:29:49 np0005593232 eager_gates[259156]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:29:49 np0005593232 eager_gates[259156]:        "type": "bluestore"
Jan 23 04:29:49 np0005593232 eager_gates[259156]:    }
Jan 23 04:29:49 np0005593232 eager_gates[259156]: }
Jan 23 04:29:50 np0005593232 systemd[1]: libpod-6413ab95c146be75495635b71c79aa66072f94d4cdb8f548ffa01f0e23c4c79a.scope: Deactivated successfully.
Jan 23 04:29:50 np0005593232 podman[259139]: 2026-01-23 09:29:50.02275821 +0000 UTC m=+1.480530917 container died 6413ab95c146be75495635b71c79aa66072f94d4cdb8f548ffa01f0e23c4c79a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gates, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 04:29:50 np0005593232 systemd[1]: var-lib-containers-storage-overlay-74b0ff26d4fb5e4d6559942483ea683edbf9d5c316ddb81f50278b2cfb7edd7b-merged.mount: Deactivated successfully.
Jan 23 04:29:50 np0005593232 podman[259139]: 2026-01-23 09:29:50.653234098 +0000 UTC m=+2.111006775 container remove 6413ab95c146be75495635b71c79aa66072f94d4cdb8f548ffa01f0e23c4c79a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:29:50 np0005593232 systemd[1]: libpod-conmon-6413ab95c146be75495635b71c79aa66072f94d4cdb8f548ffa01f0e23c4c79a.scope: Deactivated successfully.
Jan 23 04:29:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:29:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:29:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:29:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1044: 321 pgs: 321 active+clean; 301 MiB data, 405 MiB used, 21 GiB / 21 GiB avail; 1.8 MiB/s rd, 5.8 MiB/s wr, 174 op/s
Jan 23 04:29:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:29:51 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev adbe78c5-dfc5-495f-8023-35db13b5b9fe does not exist
Jan 23 04:29:51 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev aef3e713-755b-4199-a8a5-4ed3389ae5b2 does not exist
Jan 23 04:29:51 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 9bd45405-138b-4bb3-b084-c365a7f3949d does not exist
Jan 23 04:29:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:51.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:29:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:51.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:29:51 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:29:51 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:29:52 np0005593232 ovn_controller[151001]: 2026-01-23T09:29:52Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6d:56:03 10.1.0.137
Jan 23 04:29:52 np0005593232 ovn_controller[151001]: 2026-01-23T09:29:52Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6d:56:03 10.1.0.137
Jan 23 04:29:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1045: 321 pgs: 321 active+clean; 334 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 1.1 MiB/s rd, 6.6 MiB/s wr, 193 op/s
Jan 23 04:29:53 np0005593232 nova_compute[250269]: 2026-01-23 09:29:53.192 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:29:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:29:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:53.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:29:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:29:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:29:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:53.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:29:54 np0005593232 nova_compute[250269]: 2026-01-23 09:29:54.783 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:29:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1046: 321 pgs: 321 active+clean; 319 MiB data, 422 MiB used, 21 GiB / 21 GiB avail; 729 KiB/s rd, 6.5 MiB/s wr, 207 op/s
Jan 23 04:29:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:29:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:55.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:29:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:29:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:55.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:29:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1047: 321 pgs: 321 active+clean; 287 MiB data, 408 MiB used, 21 GiB / 21 GiB avail; 845 KiB/s rd, 6.3 MiB/s wr, 208 op/s
Jan 23 04:29:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:57.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:57 np0005593232 podman[259247]: 2026-01-23 09:29:57.427898735 +0000 UTC m=+0.078400915 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:29:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:57.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:57 np0005593232 nova_compute[250269]: 2026-01-23 09:29:57.821 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:29:58 np0005593232 nova_compute[250269]: 2026-01-23 09:29:58.194 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:29:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:29:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1048: 321 pgs: 321 active+clean; 279 MiB data, 402 MiB used, 21 GiB / 21 GiB avail; 854 KiB/s rd, 5.8 MiB/s wr, 194 op/s
Jan 23 04:29:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:29:59.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:59 np0005593232 nova_compute[250269]: 2026-01-23 09:29:59.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:29:59 np0005593232 nova_compute[250269]: 2026-01-23 09:29:59.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:29:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:29:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:29:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:29:59.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:29:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:59.778 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:29:59 np0005593232 nova_compute[250269]: 2026-01-23 09:29:59.778 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:29:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:29:59.779 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:29:59 np0005593232 nova_compute[250269]: 2026-01-23 09:29:59.786 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:00 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 23 04:30:00 np0005593232 nova_compute[250269]: 2026-01-23 09:30:00.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:30:00 np0005593232 ceph-mon[74423]: overall HEALTH_OK
Jan 23 04:30:00 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:00.782 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:30:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1049: 321 pgs: 321 active+clean; 279 MiB data, 398 MiB used, 21 GiB / 21 GiB avail; 834 KiB/s rd, 4.7 MiB/s wr, 185 op/s
Jan 23 04:30:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:01.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:01 np0005593232 nova_compute[250269]: 2026-01-23 09:30:01.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:30:01 np0005593232 nova_compute[250269]: 2026-01-23 09:30:01.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:30:01 np0005593232 nova_compute[250269]: 2026-01-23 09:30:01.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:30:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:01.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:01 np0005593232 nova_compute[250269]: 2026-01-23 09:30:01.837 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-24a2f423-4385-453b-86f4-8e7cd37e96a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:30:01 np0005593232 nova_compute[250269]: 2026-01-23 09:30:01.837 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-24a2f423-4385-453b-86f4-8e7cd37e96a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:30:01 np0005593232 nova_compute[250269]: 2026-01-23 09:30:01.837 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 04:30:01 np0005593232 nova_compute[250269]: 2026-01-23 09:30:01.838 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 24a2f423-4385-453b-86f4-8e7cd37e96a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:30:02 np0005593232 nova_compute[250269]: 2026-01-23 09:30:02.379 250273 DEBUG oslo_concurrency.lockutils [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Acquiring lock "9160345b-33cb-4242-a3c0-a1f41d03934f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:30:02 np0005593232 nova_compute[250269]: 2026-01-23 09:30:02.380 250273 DEBUG oslo_concurrency.lockutils [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lock "9160345b-33cb-4242-a3c0-a1f41d03934f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:30:02 np0005593232 nova_compute[250269]: 2026-01-23 09:30:02.380 250273 DEBUG oslo_concurrency.lockutils [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Acquiring lock "9160345b-33cb-4242-a3c0-a1f41d03934f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:30:02 np0005593232 nova_compute[250269]: 2026-01-23 09:30:02.380 250273 DEBUG oslo_concurrency.lockutils [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lock "9160345b-33cb-4242-a3c0-a1f41d03934f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:30:02 np0005593232 nova_compute[250269]: 2026-01-23 09:30:02.380 250273 DEBUG oslo_concurrency.lockutils [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lock "9160345b-33cb-4242-a3c0-a1f41d03934f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:30:02 np0005593232 nova_compute[250269]: 2026-01-23 09:30:02.382 250273 INFO nova.compute.manager [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Terminating instance#033[00m
Jan 23 04:30:02 np0005593232 nova_compute[250269]: 2026-01-23 09:30:02.382 250273 DEBUG nova.compute.manager [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 04:30:02 np0005593232 kernel: tap3051f461-de (unregistering): left promiscuous mode
Jan 23 04:30:02 np0005593232 NetworkManager[49057]: <info>  [1769160602.4607] device (tap3051f461-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:30:02 np0005593232 ovn_controller[151001]: 2026-01-23T09:30:02Z|00032|binding|INFO|Releasing lport 3051f461-de35-45b5-aa63-1e67e3732c66 from this chassis (sb_readonly=0)
Jan 23 04:30:02 np0005593232 nova_compute[250269]: 2026-01-23 09:30:02.471 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:02 np0005593232 ovn_controller[151001]: 2026-01-23T09:30:02Z|00033|binding|INFO|Setting lport 3051f461-de35-45b5-aa63-1e67e3732c66 down in Southbound
Jan 23 04:30:02 np0005593232 ovn_controller[151001]: 2026-01-23T09:30:02Z|00034|binding|INFO|Removing iface tap3051f461-de ovn-installed in OVS
Jan 23 04:30:02 np0005593232 nova_compute[250269]: 2026-01-23 09:30:02.473 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:02 np0005593232 nova_compute[250269]: 2026-01-23 09:30:02.492 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:02 np0005593232 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Deactivated successfully.
Jan 23 04:30:02 np0005593232 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Consumed 15.842s CPU time.
Jan 23 04:30:02 np0005593232 systemd-machined[215836]: Machine qemu-2-instance-00000005 terminated.
Jan 23 04:30:02 np0005593232 nova_compute[250269]: 2026-01-23 09:30:02.621 250273 INFO nova.virt.libvirt.driver [-] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Instance destroyed successfully.#033[00m
Jan 23 04:30:02 np0005593232 nova_compute[250269]: 2026-01-23 09:30:02.622 250273 DEBUG nova.objects.instance [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lazy-loading 'resources' on Instance uuid 9160345b-33cb-4242-a3c0-a1f41d03934f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:30:02 np0005593232 nova_compute[250269]: 2026-01-23 09:30:02.662 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:30:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1050: 321 pgs: 321 active+clean; 279 MiB data, 395 MiB used, 21 GiB / 21 GiB avail; 544 KiB/s rd, 2.5 MiB/s wr, 129 op/s
Jan 23 04:30:02 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0[81752]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 23 04:30:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:03.066 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:56:03 10.1.0.137 fdfe:381f:8400:1::272'], port_security=['fa:16:3e:6d:56:03 10.1.0.137 fdfe:381f:8400:1::272'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.137/26 fdfe:381f:8400:1::272/64', 'neutron:device_id': '9160345b-33cb-4242-a3c0-a1f41d03934f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7ce4d2b2bd9d4e648ef6fd351b972262', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a1793278-8c6f-49e9-be94-8a60e6a54c2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5658ab5-291d-4119-8cb7-9ecc0ad5a8b4, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=3051f461-de35-45b5-aa63-1e67e3732c66) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:30:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:03.067 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 3051f461-de35-45b5-aa63-1e67e3732c66 in datapath f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7 unbound from our chassis#033[00m
Jan 23 04:30:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:03.069 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 04:30:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:03.070 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4156ebb4-189d-434f-bccb-b6000579a09b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:30:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:03.071 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7 namespace which is not needed anymore#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.183 250273 DEBUG nova.virt.libvirt.vif [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:28:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-590995285-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-590995285-3',id=5,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2026-01-23T09:29:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7ce4d2b2bd9d4e648ef6fd351b972262',ramdisk_id='',reservation_id='r-lqgmnwk0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AutoAllocateNetworkTest-335645779',owner_user_name='tempest-AutoAllocateNetworkTest-335645779-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:29:36Z,user_data=None,user_id='3f4fe5f838cb42d0ae4285971b115141',uuid=9160345b-33cb-4242-a3c0-a1f41d03934f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3051f461-de35-45b5-aa63-1e67e3732c66", "address": "fa:16:3e:6d:56:03", "network": {"id": "f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::272", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.128/26", "dns": [], "gateway": {"address": "10.1.0.129", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ce4d2b2bd9d4e648ef6fd351b972262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3051f461-de", "ovs_interfaceid": "3051f461-de35-45b5-aa63-1e67e3732c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.184 250273 DEBUG nova.network.os_vif_util [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Converting VIF {"id": "3051f461-de35-45b5-aa63-1e67e3732c66", "address": "fa:16:3e:6d:56:03", "network": {"id": "f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "fdfe:381f:8400:1::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400:1::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400:1::272", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}, {"cidr": "10.1.0.128/26", "dns": [], "gateway": {"address": "10.1.0.129", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.137", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7ce4d2b2bd9d4e648ef6fd351b972262", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3051f461-de", "ovs_interfaceid": "3051f461-de35-45b5-aa63-1e67e3732c66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.185 250273 DEBUG nova.network.os_vif_util [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:56:03,bridge_name='br-int',has_traffic_filtering=True,id=3051f461-de35-45b5-aa63-1e67e3732c66,network=Network(f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3051f461-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.185 250273 DEBUG os_vif [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:56:03,bridge_name='br-int',has_traffic_filtering=True,id=3051f461-de35-45b5-aa63-1e67e3732c66,network=Network(f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3051f461-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.188 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.188 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3051f461-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.196 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.198 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.201 250273 INFO os_vif [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:56:03,bridge_name='br-int',has_traffic_filtering=True,id=3051f461-de35-45b5-aa63-1e67e3732c66,network=Network(f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3051f461-de')#033[00m
Jan 23 04:30:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:30:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:03.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:30:03 np0005593232 neutron-haproxy-ovnmeta-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7[258406]: [NOTICE]   (258410) : haproxy version is 2.8.14-c23fe91
Jan 23 04:30:03 np0005593232 neutron-haproxy-ovnmeta-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7[258406]: [NOTICE]   (258410) : path to executable is /usr/sbin/haproxy
Jan 23 04:30:03 np0005593232 neutron-haproxy-ovnmeta-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7[258406]: [WARNING]  (258410) : Exiting Master process...
Jan 23 04:30:03 np0005593232 neutron-haproxy-ovnmeta-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7[258406]: [ALERT]    (258410) : Current worker (258412) exited with code 143 (Terminated)
Jan 23 04:30:03 np0005593232 neutron-haproxy-ovnmeta-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7[258406]: [WARNING]  (258410) : All workers exited. Exiting... (0)
Jan 23 04:30:03 np0005593232 systemd[1]: libpod-c2dcd537924cce108d9c2b874a89269f285c3913826795e40b0a06b3262b7118.scope: Deactivated successfully.
Jan 23 04:30:03 np0005593232 podman[259303]: 2026-01-23 09:30:03.277509396 +0000 UTC m=+0.104823295 container died c2dcd537924cce108d9c2b874a89269f285c3913826795e40b0a06b3262b7118 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 04:30:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:30:03 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c2dcd537924cce108d9c2b874a89269f285c3913826795e40b0a06b3262b7118-userdata-shm.mount: Deactivated successfully.
Jan 23 04:30:03 np0005593232 systemd[1]: var-lib-containers-storage-overlay-67f922385e5f6216d07fd5328cd413806790f833a917d9eb16d1bf72598fa5a5-merged.mount: Deactivated successfully.
Jan 23 04:30:03 np0005593232 podman[259303]: 2026-01-23 09:30:03.449694882 +0000 UTC m=+0.277008781 container cleanup c2dcd537924cce108d9c2b874a89269f285c3913826795e40b0a06b3262b7118 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 23 04:30:03 np0005593232 systemd[1]: libpod-conmon-c2dcd537924cce108d9c2b874a89269f285c3913826795e40b0a06b3262b7118.scope: Deactivated successfully.
Jan 23 04:30:03 np0005593232 podman[259353]: 2026-01-23 09:30:03.691989025 +0000 UTC m=+0.222768410 container remove c2dcd537924cce108d9c2b874a89269f285c3913826795e40b0a06b3262b7118 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 04:30:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:03.698 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[de6bcffe-91df-4db1-a11f-7b19a8733bbd]: (4, ('Fri Jan 23 09:30:03 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7 (c2dcd537924cce108d9c2b874a89269f285c3913826795e40b0a06b3262b7118)\nc2dcd537924cce108d9c2b874a89269f285c3913826795e40b0a06b3262b7118\nFri Jan 23 09:30:03 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7 (c2dcd537924cce108d9c2b874a89269f285c3913826795e40b0a06b3262b7118)\nc2dcd537924cce108d9c2b874a89269f285c3913826795e40b0a06b3262b7118\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:30:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:03.700 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[234caa8c-36cb-4970-b774-331729f0b88a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:30:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:03.701 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf0fce0a3-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.702 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:03 np0005593232 kernel: tapf0fce0a3-e0: left promiscuous mode
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.715 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:03.718 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c00f0f97-994e-434e-a825-f7882448ac63]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:30:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:03.731 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[cfc34e72-ed30-4130-a9bf-3fc9e9486f60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:30:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:03.732 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4bae4084-81e6-4169-9147-5469096aebac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:30:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:03.746 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[92d69ee4-12e5-4419-92cc-a8df64d89e56]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449025, 'reachable_time': 18414, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259369, 'error': None, 'target': 'ovnmeta-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:30:03 np0005593232 systemd[1]: run-netns-ovnmeta\x2df0fce0a3\x2de4b8\x2d4f67\x2db93f\x2d3b825f52a4b7.mount: Deactivated successfully.
Jan 23 04:30:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:03.757 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f0fce0a3-e4b8-4f67-b93f-3b825f52a4b7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 04:30:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:03.758 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[a5e631b0-f90f-48a2-b2b7-29df9d07cf0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:30:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:03.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.797 250273 DEBUG nova.compute.manager [req-0fd3b573-e368-4686-b940-59eadfb15a4d req-fc596d0b-0145-411b-81e7-14437ff43907 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Received event network-vif-unplugged-3051f461-de35-45b5-aa63-1e67e3732c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.798 250273 DEBUG oslo_concurrency.lockutils [req-0fd3b573-e368-4686-b940-59eadfb15a4d req-fc596d0b-0145-411b-81e7-14437ff43907 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9160345b-33cb-4242-a3c0-a1f41d03934f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.798 250273 DEBUG oslo_concurrency.lockutils [req-0fd3b573-e368-4686-b940-59eadfb15a4d req-fc596d0b-0145-411b-81e7-14437ff43907 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9160345b-33cb-4242-a3c0-a1f41d03934f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.798 250273 DEBUG oslo_concurrency.lockutils [req-0fd3b573-e368-4686-b940-59eadfb15a4d req-fc596d0b-0145-411b-81e7-14437ff43907 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9160345b-33cb-4242-a3c0-a1f41d03934f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.798 250273 DEBUG nova.compute.manager [req-0fd3b573-e368-4686-b940-59eadfb15a4d req-fc596d0b-0145-411b-81e7-14437ff43907 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] No waiting events found dispatching network-vif-unplugged-3051f461-de35-45b5-aa63-1e67e3732c66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.799 250273 DEBUG nova.compute.manager [req-0fd3b573-e368-4686-b940-59eadfb15a4d req-fc596d0b-0145-411b-81e7-14437ff43907 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Received event network-vif-unplugged-3051f461-de35-45b5-aa63-1e67e3732c66 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.809 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.846 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-24a2f423-4385-453b-86f4-8e7cd37e96a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.847 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.847 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.847 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.848 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.848 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.848 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.945 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.946 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.946 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.946 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:30:03 np0005593232 nova_compute[250269]: 2026-01-23 09:30:03.947 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:30:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:30:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3069194468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:30:04 np0005593232 nova_compute[250269]: 2026-01-23 09:30:04.452 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:30:04 np0005593232 nova_compute[250269]: 2026-01-23 09:30:04.466 250273 INFO nova.virt.libvirt.driver [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Deleting instance files /var/lib/nova/instances/9160345b-33cb-4242-a3c0-a1f41d03934f_del#033[00m
Jan 23 04:30:04 np0005593232 nova_compute[250269]: 2026-01-23 09:30:04.467 250273 INFO nova.virt.libvirt.driver [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Deletion of /var/lib/nova/instances/9160345b-33cb-4242-a3c0-a1f41d03934f_del complete#033[00m
Jan 23 04:30:04 np0005593232 nova_compute[250269]: 2026-01-23 09:30:04.775 250273 DEBUG nova.virt.libvirt.host [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Jan 23 04:30:04 np0005593232 nova_compute[250269]: 2026-01-23 09:30:04.775 250273 INFO nova.virt.libvirt.host [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] UEFI support detected#033[00m
Jan 23 04:30:04 np0005593232 nova_compute[250269]: 2026-01-23 09:30:04.778 250273 INFO nova.compute.manager [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Took 2.40 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 04:30:04 np0005593232 nova_compute[250269]: 2026-01-23 09:30:04.779 250273 DEBUG oslo.service.loopingcall [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 04:30:04 np0005593232 nova_compute[250269]: 2026-01-23 09:30:04.780 250273 DEBUG nova.compute.manager [-] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 04:30:04 np0005593232 nova_compute[250269]: 2026-01-23 09:30:04.780 250273 DEBUG nova.network.neutron [-] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 04:30:04 np0005593232 nova_compute[250269]: 2026-01-23 09:30:04.787 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Error from libvirt while getting description of instance-00000005: [Error Code 42] Domain not found: no domain with matching uuid '9160345b-33cb-4242-a3c0-a1f41d03934f' (instance-00000005): libvirt.libvirtError: Domain not found: no domain with matching uuid '9160345b-33cb-4242-a3c0-a1f41d03934f' (instance-00000005)#033[00m
Jan 23 04:30:04 np0005593232 nova_compute[250269]: 2026-01-23 09:30:04.792 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:30:04 np0005593232 nova_compute[250269]: 2026-01-23 09:30:04.793 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:30:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1051: 321 pgs: 321 active+clean; 247 MiB data, 383 MiB used, 21 GiB / 21 GiB avail; 307 KiB/s rd, 1.0 MiB/s wr, 80 op/s
Jan 23 04:30:04 np0005593232 nova_compute[250269]: 2026-01-23 09:30:04.948 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:30:04 np0005593232 nova_compute[250269]: 2026-01-23 09:30:04.949 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4682MB free_disk=20.851776123046875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:30:04 np0005593232 nova_compute[250269]: 2026-01-23 09:30:04.949 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:30:04 np0005593232 nova_compute[250269]: 2026-01-23 09:30:04.949 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:30:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:05.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:05 np0005593232 nova_compute[250269]: 2026-01-23 09:30:05.390 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 24a2f423-4385-453b-86f4-8e7cd37e96a6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:30:05 np0005593232 nova_compute[250269]: 2026-01-23 09:30:05.391 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 9160345b-33cb-4242-a3c0-a1f41d03934f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:30:05 np0005593232 nova_compute[250269]: 2026-01-23 09:30:05.391 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:30:05 np0005593232 nova_compute[250269]: 2026-01-23 09:30:05.391 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:30:05 np0005593232 nova_compute[250269]: 2026-01-23 09:30:05.454 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:30:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:30:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:05.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:30:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:30:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2718255701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:30:05 np0005593232 nova_compute[250269]: 2026-01-23 09:30:05.902 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:30:05 np0005593232 nova_compute[250269]: 2026-01-23 09:30:05.908 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:30:05 np0005593232 nova_compute[250269]: 2026-01-23 09:30:05.948 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:30:05 np0005593232 nova_compute[250269]: 2026-01-23 09:30:05.974 250273 DEBUG nova.compute.manager [req-1c1ad4d1-058e-4c06-9bb8-f242cf7290d5 req-5705ace1-ae3d-4112-8d5c-e6b608401f68 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Received event network-vif-plugged-3051f461-de35-45b5-aa63-1e67e3732c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:30:05 np0005593232 nova_compute[250269]: 2026-01-23 09:30:05.975 250273 DEBUG oslo_concurrency.lockutils [req-1c1ad4d1-058e-4c06-9bb8-f242cf7290d5 req-5705ace1-ae3d-4112-8d5c-e6b608401f68 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9160345b-33cb-4242-a3c0-a1f41d03934f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:30:05 np0005593232 nova_compute[250269]: 2026-01-23 09:30:05.975 250273 DEBUG oslo_concurrency.lockutils [req-1c1ad4d1-058e-4c06-9bb8-f242cf7290d5 req-5705ace1-ae3d-4112-8d5c-e6b608401f68 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9160345b-33cb-4242-a3c0-a1f41d03934f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:30:05 np0005593232 nova_compute[250269]: 2026-01-23 09:30:05.975 250273 DEBUG oslo_concurrency.lockutils [req-1c1ad4d1-058e-4c06-9bb8-f242cf7290d5 req-5705ace1-ae3d-4112-8d5c-e6b608401f68 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9160345b-33cb-4242-a3c0-a1f41d03934f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:30:05 np0005593232 nova_compute[250269]: 2026-01-23 09:30:05.976 250273 DEBUG nova.compute.manager [req-1c1ad4d1-058e-4c06-9bb8-f242cf7290d5 req-5705ace1-ae3d-4112-8d5c-e6b608401f68 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] No waiting events found dispatching network-vif-plugged-3051f461-de35-45b5-aa63-1e67e3732c66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:30:05 np0005593232 nova_compute[250269]: 2026-01-23 09:30:05.976 250273 WARNING nova.compute.manager [req-1c1ad4d1-058e-4c06-9bb8-f242cf7290d5 req-5705ace1-ae3d-4112-8d5c-e6b608401f68 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Received unexpected event network-vif-plugged-3051f461-de35-45b5-aa63-1e67e3732c66 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 04:30:05 np0005593232 nova_compute[250269]: 2026-01-23 09:30:05.977 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:30:05 np0005593232 nova_compute[250269]: 2026-01-23 09:30:05.978 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:30:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1052: 321 pgs: 321 active+clean; 224 MiB data, 368 MiB used, 21 GiB / 21 GiB avail; 191 KiB/s rd, 99 KiB/s wr, 40 op/s
Jan 23 04:30:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:30:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:30:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:30:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:30:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:30:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:30:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:07.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:07.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:07 np0005593232 nova_compute[250269]: 2026-01-23 09:30:07.952 250273 DEBUG nova.network.neutron [-] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:30:08 np0005593232 nova_compute[250269]: 2026-01-23 09:30:08.062 250273 INFO nova.compute.manager [-] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Took 3.28 seconds to deallocate network for instance.#033[00m
Jan 23 04:30:08 np0005593232 nova_compute[250269]: 2026-01-23 09:30:08.193 250273 DEBUG oslo_concurrency.lockutils [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:30:08 np0005593232 nova_compute[250269]: 2026-01-23 09:30:08.194 250273 DEBUG oslo_concurrency.lockutils [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:30:08 np0005593232 nova_compute[250269]: 2026-01-23 09:30:08.194 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:08 np0005593232 nova_compute[250269]: 2026-01-23 09:30:08.197 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:08 np0005593232 nova_compute[250269]: 2026-01-23 09:30:08.278 250273 DEBUG nova.compute.manager [req-c54c2f51-0098-4fb7-aa0c-03bb5df86710 req-c5e25cfd-bf56-4870-b081-64047cbeaa3e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Received event network-vif-deleted-3051f461-de35-45b5-aa63-1e67e3732c66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:30:08 np0005593232 nova_compute[250269]: 2026-01-23 09:30:08.279 250273 DEBUG oslo_concurrency.processutils [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:30:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:30:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:30:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/910682808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:30:08 np0005593232 nova_compute[250269]: 2026-01-23 09:30:08.729 250273 DEBUG oslo_concurrency.processutils [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:30:08 np0005593232 nova_compute[250269]: 2026-01-23 09:30:08.738 250273 DEBUG nova.compute.provider_tree [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:30:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1053: 321 pgs: 321 active+clean; 200 MiB data, 358 MiB used, 21 GiB / 21 GiB avail; 71 KiB/s rd, 91 KiB/s wr, 38 op/s
Jan 23 04:30:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:09.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:09.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:10 np0005593232 nova_compute[250269]: 2026-01-23 09:30:10.249 250273 DEBUG nova.scheduler.client.report [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:30:10 np0005593232 nova_compute[250269]: 2026-01-23 09:30:10.283 250273 DEBUG oslo_concurrency.lockutils [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:30:10 np0005593232 nova_compute[250269]: 2026-01-23 09:30:10.371 250273 INFO nova.scheduler.client.report [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Deleted allocations for instance 9160345b-33cb-4242-a3c0-a1f41d03934f#033[00m
Jan 23 04:30:10 np0005593232 nova_compute[250269]: 2026-01-23 09:30:10.487 250273 DEBUG oslo_concurrency.lockutils [None req-7a23fc10-732e-4ef2-a98b-5358fe4ee388 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lock "9160345b-33cb-4242-a3c0-a1f41d03934f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:30:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1054: 321 pgs: 321 active+clean; 200 MiB data, 358 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 26 KiB/s wr, 33 op/s
Jan 23 04:30:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:30:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:11.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:30:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:30:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:11.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:30:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1055: 321 pgs: 321 active+clean; 200 MiB data, 358 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 15 KiB/s wr, 42 op/s
Jan 23 04:30:13 np0005593232 nova_compute[250269]: 2026-01-23 09:30:13.198 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:13 np0005593232 nova_compute[250269]: 2026-01-23 09:30:13.199 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:13.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:30:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:13.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1056: 321 pgs: 321 active+clean; 161 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 4.2 KiB/s wr, 52 op/s
Jan 23 04:30:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:15.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:15 np0005593232 podman[259495]: 2026-01-23 09:30:15.430567233 +0000 UTC m=+0.091886748 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 23 04:30:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:30:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:15.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:30:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1057: 321 pgs: 321 active+clean; 137 MiB data, 340 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 1.9 KiB/s wr, 53 op/s
Jan 23 04:30:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:17.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:17 np0005593232 nova_compute[250269]: 2026-01-23 09:30:17.620 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769160602.6192422, 9160345b-33cb-4242-a3c0-a1f41d03934f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:30:17 np0005593232 nova_compute[250269]: 2026-01-23 09:30:17.621 250273 INFO nova.compute.manager [-] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:30:17 np0005593232 nova_compute[250269]: 2026-01-23 09:30:17.650 250273 DEBUG nova.compute.manager [None req-06443c28-da5f-4785-9700-1be8ca573229 - - - - - -] [instance: 9160345b-33cb-4242-a3c0-a1f41d03934f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:30:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:17.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:18 np0005593232 nova_compute[250269]: 2026-01-23 09:30:18.201 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:30:18 np0005593232 nova_compute[250269]: 2026-01-23 09:30:18.202 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:18 np0005593232 nova_compute[250269]: 2026-01-23 09:30:18.202 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 23 04:30:18 np0005593232 nova_compute[250269]: 2026-01-23 09:30:18.202 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 23 04:30:18 np0005593232 nova_compute[250269]: 2026-01-23 09:30:18.203 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 23 04:30:18 np0005593232 nova_compute[250269]: 2026-01-23 09:30:18.204 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:30:18 np0005593232 nova_compute[250269]: 2026-01-23 09:30:18.630 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1058: 321 pgs: 321 active+clean; 128 MiB data, 329 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 69 KiB/s wr, 74 op/s
Jan 23 04:30:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:30:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:19.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:30:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:19.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1059: 321 pgs: 321 active+clean; 151 MiB data, 317 MiB used, 21 GiB / 21 GiB avail; 51 KiB/s rd, 1.2 MiB/s wr, 75 op/s
Jan 23 04:30:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:21.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:21 np0005593232 nova_compute[250269]: 2026-01-23 09:30:21.605 250273 DEBUG oslo_concurrency.lockutils [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Acquiring lock "24a2f423-4385-453b-86f4-8e7cd37e96a6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:30:21 np0005593232 nova_compute[250269]: 2026-01-23 09:30:21.605 250273 DEBUG oslo_concurrency.lockutils [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lock "24a2f423-4385-453b-86f4-8e7cd37e96a6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:30:21 np0005593232 nova_compute[250269]: 2026-01-23 09:30:21.605 250273 DEBUG oslo_concurrency.lockutils [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Acquiring lock "24a2f423-4385-453b-86f4-8e7cd37e96a6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:30:21 np0005593232 nova_compute[250269]: 2026-01-23 09:30:21.606 250273 DEBUG oslo_concurrency.lockutils [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lock "24a2f423-4385-453b-86f4-8e7cd37e96a6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:30:21 np0005593232 nova_compute[250269]: 2026-01-23 09:30:21.606 250273 DEBUG oslo_concurrency.lockutils [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lock "24a2f423-4385-453b-86f4-8e7cd37e96a6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:30:21 np0005593232 nova_compute[250269]: 2026-01-23 09:30:21.607 250273 INFO nova.compute.manager [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Terminating instance#033[00m
Jan 23 04:30:21 np0005593232 nova_compute[250269]: 2026-01-23 09:30:21.608 250273 DEBUG oslo_concurrency.lockutils [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Acquiring lock "refresh_cache-24a2f423-4385-453b-86f4-8e7cd37e96a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:30:21 np0005593232 nova_compute[250269]: 2026-01-23 09:30:21.608 250273 DEBUG oslo_concurrency.lockutils [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Acquired lock "refresh_cache-24a2f423-4385-453b-86f4-8e7cd37e96a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:30:21 np0005593232 nova_compute[250269]: 2026-01-23 09:30:21.608 250273 DEBUG nova.network.neutron [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:30:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:21.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:21 np0005593232 nova_compute[250269]: 2026-01-23 09:30:21.865 250273 DEBUG nova.network.neutron [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:30:22 np0005593232 nova_compute[250269]: 2026-01-23 09:30:22.545 250273 DEBUG nova.network.neutron [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:30:22 np0005593232 nova_compute[250269]: 2026-01-23 09:30:22.571 250273 DEBUG oslo_concurrency.lockutils [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Releasing lock "refresh_cache-24a2f423-4385-453b-86f4-8e7cd37e96a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:30:22 np0005593232 nova_compute[250269]: 2026-01-23 09:30:22.573 250273 DEBUG nova.compute.manager [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 04:30:22 np0005593232 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Jan 23 04:30:22 np0005593232 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 18.257s CPU time.
Jan 23 04:30:22 np0005593232 systemd-machined[215836]: Machine qemu-1-instance-00000001 terminated.
Jan 23 04:30:22 np0005593232 nova_compute[250269]: 2026-01-23 09:30:22.798 250273 INFO nova.virt.libvirt.driver [-] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Instance destroyed successfully.#033[00m
Jan 23 04:30:22 np0005593232 nova_compute[250269]: 2026-01-23 09:30:22.799 250273 DEBUG nova.objects.instance [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lazy-loading 'resources' on Instance uuid 24a2f423-4385-453b-86f4-8e7cd37e96a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:30:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1060: 321 pgs: 321 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 834 KiB/s rd, 1.8 MiB/s wr, 104 op/s
Jan 23 04:30:23 np0005593232 nova_compute[250269]: 2026-01-23 09:30:23.205 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:23.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:30:23 np0005593232 nova_compute[250269]: 2026-01-23 09:30:23.397 250273 INFO nova.virt.libvirt.driver [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Deleting instance files /var/lib/nova/instances/24a2f423-4385-453b-86f4-8e7cd37e96a6_del#033[00m
Jan 23 04:30:23 np0005593232 nova_compute[250269]: 2026-01-23 09:30:23.398 250273 INFO nova.virt.libvirt.driver [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Deletion of /var/lib/nova/instances/24a2f423-4385-453b-86f4-8e7cd37e96a6_del complete#033[00m
Jan 23 04:30:23 np0005593232 nova_compute[250269]: 2026-01-23 09:30:23.508 250273 INFO nova.compute.manager [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 04:30:23 np0005593232 nova_compute[250269]: 2026-01-23 09:30:23.509 250273 DEBUG oslo.service.loopingcall [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 04:30:23 np0005593232 nova_compute[250269]: 2026-01-23 09:30:23.509 250273 DEBUG nova.compute.manager [-] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 04:30:23 np0005593232 nova_compute[250269]: 2026-01-23 09:30:23.509 250273 DEBUG nova.network.neutron [-] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 04:30:23 np0005593232 nova_compute[250269]: 2026-01-23 09:30:23.677 250273 DEBUG nova.network.neutron [-] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:30:23 np0005593232 nova_compute[250269]: 2026-01-23 09:30:23.709 250273 DEBUG nova.network.neutron [-] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:30:23 np0005593232 nova_compute[250269]: 2026-01-23 09:30:23.766 250273 INFO nova.compute.manager [-] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Took 0.26 seconds to deallocate network for instance.#033[00m
Jan 23 04:30:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:30:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:23.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:30:23 np0005593232 nova_compute[250269]: 2026-01-23 09:30:23.833 250273 DEBUG oslo_concurrency.lockutils [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:30:23 np0005593232 nova_compute[250269]: 2026-01-23 09:30:23.834 250273 DEBUG oslo_concurrency.lockutils [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:30:23 np0005593232 nova_compute[250269]: 2026-01-23 09:30:23.902 250273 DEBUG oslo_concurrency.processutils [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:30:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:30:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1591053204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:30:24 np0005593232 nova_compute[250269]: 2026-01-23 09:30:24.336 250273 DEBUG oslo_concurrency.processutils [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:30:24 np0005593232 nova_compute[250269]: 2026-01-23 09:30:24.342 250273 DEBUG nova.compute.provider_tree [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 04:30:24 np0005593232 nova_compute[250269]: 2026-01-23 09:30:24.367 250273 DEBUG nova.scheduler.client.report [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 04:30:24 np0005593232 nova_compute[250269]: 2026-01-23 09:30:24.390 250273 DEBUG oslo_concurrency.lockutils [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:30:24 np0005593232 nova_compute[250269]: 2026-01-23 09:30:24.424 250273 INFO nova.scheduler.client.report [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Deleted allocations for instance 24a2f423-4385-453b-86f4-8e7cd37e96a6
Jan 23 04:30:24 np0005593232 nova_compute[250269]: 2026-01-23 09:30:24.521 250273 DEBUG oslo_concurrency.lockutils [None req-7fa08ab7-af70-4c77-b921-256dbb118017 3f4fe5f838cb42d0ae4285971b115141 7ce4d2b2bd9d4e648ef6fd351b972262 - - default default] Lock "24a2f423-4385-453b-86f4-8e7cd37e96a6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:30:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1061: 321 pgs: 321 active+clean; 140 MiB data, 304 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 143 op/s
Jan 23 04:30:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:25.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:25.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1062: 321 pgs: 321 active+clean; 113 MiB data, 288 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 143 op/s
Jan 23 04:30:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:30:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:27.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:30:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:27.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:28 np0005593232 nova_compute[250269]: 2026-01-23 09:30:28.206 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 04:30:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:30:28 np0005593232 podman[259572]: 2026-01-23 09:30:28.384691669 +0000 UTC m=+0.050475263 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 23 04:30:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1063: 321 pgs: 321 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 139 op/s
Jan 23 04:30:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:29.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:30:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:29.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:30:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1064: 321 pgs: 321 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 115 op/s
Jan 23 04:30:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:30:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:31.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:30:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:31.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1065: 321 pgs: 321 active+clean; 88 MiB data, 277 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 561 KiB/s wr, 96 op/s
Jan 23 04:30:33 np0005593232 nova_compute[250269]: 2026-01-23 09:30:33.208 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 04:30:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:30:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:33.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:30:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:30:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:33.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1066: 321 pgs: 321 active+clean; 100 MiB data, 284 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 771 KiB/s wr, 77 op/s
Jan 23 04:30:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:35.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:30:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:35.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:30:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1067: 321 pgs: 321 active+clean; 113 MiB data, 296 MiB used, 21 GiB / 21 GiB avail; 219 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Jan 23 04:30:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:30:37
Jan 23 04:30:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:30:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:30:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'volumes', 'images', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', '.mgr']
Jan 23 04:30:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:30:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:30:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:30:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:30:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:30:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:30:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:30:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:37.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:30:37.365000) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769160637365263, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2166, "num_deletes": 251, "total_data_size": 3841769, "memory_usage": 3906080, "flush_reason": "Manual Compaction"}
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769160637425990, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 3768804, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21963, "largest_seqno": 24128, "table_properties": {"data_size": 3758978, "index_size": 6192, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20542, "raw_average_key_size": 20, "raw_value_size": 3739190, "raw_average_value_size": 3735, "num_data_blocks": 274, "num_entries": 1001, "num_filter_entries": 1001, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769160432, "oldest_key_time": 1769160432, "file_creation_time": 1769160637, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 60864 microseconds, and 8649 cpu microseconds.
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:30:37.426038) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 3768804 bytes OK
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:30:37.426057) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:30:37.430813) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:30:37.430828) EVENT_LOG_v1 {"time_micros": 1769160637430824, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:30:37.430845) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 3832764, prev total WAL file size 3832764, number of live WAL files 2.
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:30:37.431955) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(3680KB)], [53(7436KB)]
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769160637432075, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11383743, "oldest_snapshot_seqno": -1}
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4865 keys, 9349781 bytes, temperature: kUnknown
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769160637502851, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 9349781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9315969, "index_size": 20518, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12229, "raw_key_size": 122773, "raw_average_key_size": 25, "raw_value_size": 9226581, "raw_average_value_size": 1896, "num_data_blocks": 842, "num_entries": 4865, "num_filter_entries": 4865, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769160637, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:30:37.503135) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 9349781 bytes
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:30:37.534841) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 160.6 rd, 131.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 7.3 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(5.5) write-amplify(2.5) OK, records in: 5386, records dropped: 521 output_compression: NoCompression
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:30:37.534938) EVENT_LOG_v1 {"time_micros": 1769160637534920, "job": 28, "event": "compaction_finished", "compaction_time_micros": 70890, "compaction_time_cpu_micros": 23725, "output_level": 6, "num_output_files": 1, "total_output_size": 9349781, "num_input_records": 5386, "num_output_records": 4865, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769160637535955, "job": 28, "event": "table_file_deletion", "file_number": 55}
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769160637537678, "job": 28, "event": "table_file_deletion", "file_number": 53}
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:30:37.431810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:30:37.537716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:30:37.537722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:30:37.537723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:30:37.537725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:30:37 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:30:37.537727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:30:37 np0005593232 nova_compute[250269]: 2026-01-23 09:30:37.798 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769160622.7962198, 24a2f423-4385-453b-86f4-8e7cd37e96a6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 04:30:37 np0005593232 nova_compute[250269]: 2026-01-23 09:30:37.798 250273 INFO nova.compute.manager [-] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] VM Stopped (Lifecycle Event)
Jan 23 04:30:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:30:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:37.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:30:37 np0005593232 nova_compute[250269]: 2026-01-23 09:30:37.820 250273 DEBUG nova.compute.manager [None req-bc8a5780-f9e2-469d-a56e-cdeaa26c3bbc - - - - - -] [instance: 24a2f423-4385-453b-86f4-8e7cd37e96a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 04:30:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:30:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:30:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:30:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:30:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:30:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:30:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:30:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:30:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:30:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:30:38 np0005593232 nova_compute[250269]: 2026-01-23 09:30:38.211 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 04:30:38 np0005593232 nova_compute[250269]: 2026-01-23 09:30:38.213 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:30:38 np0005593232 nova_compute[250269]: 2026-01-23 09:30:38.213 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 23 04:30:38 np0005593232 nova_compute[250269]: 2026-01-23 09:30:38.213 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 23 04:30:38 np0005593232 nova_compute[250269]: 2026-01-23 09:30:38.214 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 23 04:30:38 np0005593232 nova_compute[250269]: 2026-01-23 09:30:38.216 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 04:30:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:30:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1068: 321 pgs: 321 active+clean; 115 MiB data, 299 MiB used, 21 GiB / 21 GiB avail; 237 KiB/s rd, 2.1 MiB/s wr, 46 op/s
Jan 23 04:30:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:39.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:39.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1069: 321 pgs: 321 active+clean; 118 MiB data, 301 MiB used, 21 GiB / 21 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 23 04:30:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:41.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:41.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:42.586 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:30:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:42.586 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:30:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:42.587 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:30:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1070: 321 pgs: 321 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 374 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 23 04:30:43 np0005593232 nova_compute[250269]: 2026-01-23 09:30:43.216 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:30:43 np0005593232 nova_compute[250269]: 2026-01-23 09:30:43.218 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:30:43 np0005593232 nova_compute[250269]: 2026-01-23 09:30:43.218 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 23 04:30:43 np0005593232 nova_compute[250269]: 2026-01-23 09:30:43.218 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 23 04:30:43 np0005593232 nova_compute[250269]: 2026-01-23 09:30:43.223 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:43 np0005593232 nova_compute[250269]: 2026-01-23 09:30:43.224 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 23 04:30:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:43.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:30:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:43.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1071: 321 pgs: 321 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 374 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 23 04:30:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:45.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Jan 23 04:30:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:45.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Jan 23 04:30:45 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Jan 23 04:30:46 np0005593232 podman[259651]: 2026-01-23 09:30:46.456721144 +0000 UTC m=+0.103158028 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 23 04:30:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:30:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:30:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:30:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:30:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021672308300763076 of space, bias 1.0, pg target 0.6501692490228923 quantized to 32 (current 32)
Jan 23 04:30:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:30:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 04:30:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:30:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:30:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:30:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 04:30:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:30:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:30:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:30:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:30:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:30:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:30:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:30:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:30:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:30:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:30:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:30:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:30:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1073: 321 pgs: 321 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 199 KiB/s rd, 443 KiB/s wr, 38 op/s
Jan 23 04:30:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:47.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:47 np0005593232 nova_compute[250269]: 2026-01-23 09:30:47.692 250273 DEBUG oslo_concurrency.lockutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Acquiring lock "f62791ad-fc40-451f-b02a-ba991f2dbc32" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:30:47 np0005593232 nova_compute[250269]: 2026-01-23 09:30:47.692 250273 DEBUG oslo_concurrency.lockutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Lock "f62791ad-fc40-451f-b02a-ba991f2dbc32" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:30:47 np0005593232 nova_compute[250269]: 2026-01-23 09:30:47.765 250273 DEBUG nova.compute.manager [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:30:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:47.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:47 np0005593232 nova_compute[250269]: 2026-01-23 09:30:47.988 250273 DEBUG oslo_concurrency.lockutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:30:47 np0005593232 nova_compute[250269]: 2026-01-23 09:30:47.989 250273 DEBUG oslo_concurrency.lockutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:30:48 np0005593232 nova_compute[250269]: 2026-01-23 09:30:47.999 250273 DEBUG nova.virt.hardware [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:30:48 np0005593232 nova_compute[250269]: 2026-01-23 09:30:48.000 250273 INFO nova.compute.claims [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:30:48 np0005593232 nova_compute[250269]: 2026-01-23 09:30:48.225 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:30:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:30:48 np0005593232 nova_compute[250269]: 2026-01-23 09:30:48.371 250273 DEBUG oslo_concurrency.processutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:30:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:30:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/664404146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:30:48 np0005593232 nova_compute[250269]: 2026-01-23 09:30:48.846 250273 DEBUG oslo_concurrency.processutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:30:48 np0005593232 nova_compute[250269]: 2026-01-23 09:30:48.853 250273 DEBUG nova.compute.provider_tree [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:30:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1074: 321 pgs: 321 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 177 KiB/s rd, 85 KiB/s wr, 44 op/s
Jan 23 04:30:48 np0005593232 nova_compute[250269]: 2026-01-23 09:30:48.917 250273 DEBUG nova.scheduler.client.report [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:30:48 np0005593232 nova_compute[250269]: 2026-01-23 09:30:48.987 250273 DEBUG oslo_concurrency.lockutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.997s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:30:48 np0005593232 nova_compute[250269]: 2026-01-23 09:30:48.987 250273 DEBUG nova.compute.manager [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 04:30:49 np0005593232 nova_compute[250269]: 2026-01-23 09:30:49.087 250273 DEBUG nova.compute.manager [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 04:30:49 np0005593232 nova_compute[250269]: 2026-01-23 09:30:49.088 250273 DEBUG nova.network.neutron [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:30:49 np0005593232 nova_compute[250269]: 2026-01-23 09:30:49.112 250273 INFO nova.virt.libvirt.driver [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:30:49 np0005593232 nova_compute[250269]: 2026-01-23 09:30:49.141 250273 DEBUG nova.compute.manager [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:30:49 np0005593232 nova_compute[250269]: 2026-01-23 09:30:49.324 250273 DEBUG nova.compute.manager [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:30:49 np0005593232 nova_compute[250269]: 2026-01-23 09:30:49.325 250273 DEBUG nova.virt.libvirt.driver [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:30:49 np0005593232 nova_compute[250269]: 2026-01-23 09:30:49.325 250273 INFO nova.virt.libvirt.driver [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Creating image(s)#033[00m
Jan 23 04:30:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:30:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:49.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:30:49 np0005593232 nova_compute[250269]: 2026-01-23 09:30:49.352 250273 DEBUG nova.storage.rbd_utils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] rbd image f62791ad-fc40-451f-b02a-ba991f2dbc32_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:30:49 np0005593232 nova_compute[250269]: 2026-01-23 09:30:49.380 250273 DEBUG nova.storage.rbd_utils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] rbd image f62791ad-fc40-451f-b02a-ba991f2dbc32_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:30:49 np0005593232 nova_compute[250269]: 2026-01-23 09:30:49.476 250273 DEBUG nova.storage.rbd_utils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] rbd image f62791ad-fc40-451f-b02a-ba991f2dbc32_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:30:49 np0005593232 nova_compute[250269]: 2026-01-23 09:30:49.480 250273 DEBUG oslo_concurrency.processutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:30:49 np0005593232 nova_compute[250269]: 2026-01-23 09:30:49.558 250273 DEBUG oslo_concurrency.processutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:30:49 np0005593232 nova_compute[250269]: 2026-01-23 09:30:49.559 250273 DEBUG oslo_concurrency.lockutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:30:49 np0005593232 nova_compute[250269]: 2026-01-23 09:30:49.560 250273 DEBUG oslo_concurrency.lockutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:30:49 np0005593232 nova_compute[250269]: 2026-01-23 09:30:49.560 250273 DEBUG oslo_concurrency.lockutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:30:49 np0005593232 nova_compute[250269]: 2026-01-23 09:30:49.671 250273 DEBUG nova.storage.rbd_utils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] rbd image f62791ad-fc40-451f-b02a-ba991f2dbc32_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:30:49 np0005593232 nova_compute[250269]: 2026-01-23 09:30:49.675 250273 DEBUG oslo_concurrency.processutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 f62791ad-fc40-451f-b02a-ba991f2dbc32_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:30:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:49.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:50 np0005593232 nova_compute[250269]: 2026-01-23 09:30:50.026 250273 DEBUG nova.policy [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a43b680a6019491aafe42c0a10e648df', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c56e53b3339e4e4db30b7a9d330bc380', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 04:30:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1075: 321 pgs: 321 active+clean; 121 MiB data, 302 MiB used, 21 GiB / 21 GiB avail; 130 KiB/s rd, 60 KiB/s wr, 33 op/s
Jan 23 04:30:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:51.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:51.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:51 np0005593232 nova_compute[250269]: 2026-01-23 09:30:51.890 250273 DEBUG nova.network.neutron [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Successfully updated port: 857f8a0c-0bda-43ca-85aa-7f22568eddc7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 04:30:51 np0005593232 nova_compute[250269]: 2026-01-23 09:30:51.906 250273 DEBUG oslo_concurrency.lockutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Acquiring lock "refresh_cache-f62791ad-fc40-451f-b02a-ba991f2dbc32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:30:51 np0005593232 nova_compute[250269]: 2026-01-23 09:30:51.906 250273 DEBUG oslo_concurrency.lockutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Acquired lock "refresh_cache-f62791ad-fc40-451f-b02a-ba991f2dbc32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:30:51 np0005593232 nova_compute[250269]: 2026-01-23 09:30:51.907 250273 DEBUG nova.network.neutron [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:30:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:30:52 np0005593232 nova_compute[250269]: 2026-01-23 09:30:52.070 250273 DEBUG nova.compute.manager [req-cd674c0e-ec6f-4b1d-819e-3a61d9dc7549 req-f959bd55-bee9-45a1-baa6-a762bc2cc0cc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Received event network-changed-857f8a0c-0bda-43ca-85aa-7f22568eddc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:30:52 np0005593232 nova_compute[250269]: 2026-01-23 09:30:52.071 250273 DEBUG nova.compute.manager [req-cd674c0e-ec6f-4b1d-819e-3a61d9dc7549 req-f959bd55-bee9-45a1-baa6-a762bc2cc0cc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Refreshing instance network info cache due to event network-changed-857f8a0c-0bda-43ca-85aa-7f22568eddc7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:30:52 np0005593232 nova_compute[250269]: 2026-01-23 09:30:52.071 250273 DEBUG oslo_concurrency.lockutils [req-cd674c0e-ec6f-4b1d-819e-3a61d9dc7549 req-f959bd55-bee9-45a1-baa6-a762bc2cc0cc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-f62791ad-fc40-451f-b02a-ba991f2dbc32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:30:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:30:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:30:52 np0005593232 nova_compute[250269]: 2026-01-23 09:30:52.244 250273 DEBUG nova.network.neutron [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:30:52 np0005593232 nova_compute[250269]: 2026-01-23 09:30:52.252 250273 DEBUG oslo_concurrency.processutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 f62791ad-fc40-451f-b02a-ba991f2dbc32_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:30:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:30:52 np0005593232 nova_compute[250269]: 2026-01-23 09:30:52.339 250273 DEBUG nova.storage.rbd_utils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] resizing rbd image f62791ad-fc40-451f-b02a-ba991f2dbc32_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 23 04:30:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:30:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1076: 321 pgs: 321 active+clean; 126 MiB data, 305 MiB used, 21 GiB / 21 GiB avail; 1.0 MiB/s rd, 313 KiB/s wr, 88 op/s
Jan 23 04:30:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:30:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:30:53 np0005593232 nova_compute[250269]: 2026-01-23 09:30:53.111 250273 DEBUG nova.objects.instance [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Lazy-loading 'migration_context' on Instance uuid f62791ad-fc40-451f-b02a-ba991f2dbc32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 04:30:53 np0005593232 nova_compute[250269]: 2026-01-23 09:30:53.139 250273 DEBUG nova.virt.libvirt.driver [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 04:30:53 np0005593232 nova_compute[250269]: 2026-01-23 09:30:53.140 250273 DEBUG nova.virt.libvirt.driver [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Ensure instance console log exists: /var/lib/nova/instances/f62791ad-fc40-451f-b02a-ba991f2dbc32/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 04:30:53 np0005593232 nova_compute[250269]: 2026-01-23 09:30:53.140 250273 DEBUG oslo_concurrency.lockutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:30:53 np0005593232 nova_compute[250269]: 2026-01-23 09:30:53.140 250273 DEBUG oslo_concurrency.lockutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:30:53 np0005593232 nova_compute[250269]: 2026-01-23 09:30:53.140 250273 DEBUG oslo_concurrency.lockutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:30:53 np0005593232 nova_compute[250269]: 2026-01-23 09:30:53.226 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Jan 23 04:30:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:53.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:30:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:53.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:30:53 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 97353e04-c393-4164-afc6-75233f363eea does not exist
Jan 23 04:30:53 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 04347f5e-785b-43ce-b873-f5cff44a2672 does not exist
Jan 23 04:30:53 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7f3070fe-c14e-40eb-acfc-80053cf9ef79 does not exist
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:30:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.146 250273 DEBUG nova.network.neutron [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Updating instance_info_cache with network_info: [{"id": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "address": "fa:16:3e:d9:aa:f3", "network": {"id": "385e7a4d-f87e-44c5-9fc0-5a322eecd4b4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1143816535-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e53b3339e4e4db30b7a9d330bc380", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap857f8a0c-0b", "ovs_interfaceid": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.191 250273 DEBUG oslo_concurrency.lockutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Releasing lock "refresh_cache-f62791ad-fc40-451f-b02a-ba991f2dbc32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.191 250273 DEBUG nova.compute.manager [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Instance network_info: |[{"id": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "address": "fa:16:3e:d9:aa:f3", "network": {"id": "385e7a4d-f87e-44c5-9fc0-5a322eecd4b4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1143816535-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e53b3339e4e4db30b7a9d330bc380", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap857f8a0c-0b", "ovs_interfaceid": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.191 250273 DEBUG oslo_concurrency.lockutils [req-cd674c0e-ec6f-4b1d-819e-3a61d9dc7549 req-f959bd55-bee9-45a1-baa6-a762bc2cc0cc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-f62791ad-fc40-451f-b02a-ba991f2dbc32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.192 250273 DEBUG nova.network.neutron [req-cd674c0e-ec6f-4b1d-819e-3a61d9dc7549 req-f959bd55-bee9-45a1-baa6-a762bc2cc0cc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Refreshing network info cache for port 857f8a0c-0bda-43ca-85aa-7f22568eddc7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.194 250273 DEBUG nova.virt.libvirt.driver [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Start _get_guest_xml network_info=[{"id": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "address": "fa:16:3e:d9:aa:f3", "network": {"id": "385e7a4d-f87e-44c5-9fc0-5a322eecd4b4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1143816535-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e53b3339e4e4db30b7a9d330bc380", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap857f8a0c-0b", "ovs_interfaceid": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.197 250273 WARNING nova.virt.libvirt.driver [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.203 250273 DEBUG nova.virt.libvirt.host [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.204 250273 DEBUG nova.virt.libvirt.host [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.206 250273 DEBUG nova.virt.libvirt.host [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.207 250273 DEBUG nova.virt.libvirt.host [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.208 250273 DEBUG nova.virt.libvirt.driver [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.208 250273 DEBUG nova.virt.hardware [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.208 250273 DEBUG nova.virt.hardware [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.209 250273 DEBUG nova.virt.hardware [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.209 250273 DEBUG nova.virt.hardware [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.209 250273 DEBUG nova.virt.hardware [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.209 250273 DEBUG nova.virt.hardware [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.209 250273 DEBUG nova.virt.hardware [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.210 250273 DEBUG nova.virt.hardware [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.210 250273 DEBUG nova.virt.hardware [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.210 250273 DEBUG nova.virt.hardware [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.210 250273 DEBUG nova.virt.hardware [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.213 250273 DEBUG oslo_concurrency.processutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:30:54 np0005593232 podman[260453]: 2026-01-23 09:30:54.520172696 +0000 UTC m=+0.023753435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:30:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:30:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2509956730' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:30:54 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:30:54 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:30:54 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:30:54 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:30:54 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.675 250273 DEBUG oslo_concurrency.processutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:30:54 np0005593232 podman[260453]: 2026-01-23 09:30:54.752100957 +0000 UTC m=+0.255681686 container create 7f93148c67c8ab0e324f669fea2a4513b11553765729ae14c119a29cde417205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 23 04:30:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1078: 321 pgs: 321 active+clean; 144 MiB data, 315 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.5 MiB/s wr, 148 op/s
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.987 250273 DEBUG nova.storage.rbd_utils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] rbd image f62791ad-fc40-451f-b02a-ba991f2dbc32_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 04:30:54 np0005593232 nova_compute[250269]: 2026-01-23 09:30:54.992 250273 DEBUG oslo_concurrency.processutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:30:55 np0005593232 systemd[1]: Started libpod-conmon-7f93148c67c8ab0e324f669fea2a4513b11553765729ae14c119a29cde417205.scope.
Jan 23 04:30:55 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:30:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:55.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:30:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2790679689' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.438 250273 DEBUG oslo_concurrency.processutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.440 250273 DEBUG nova.virt.libvirt.vif [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:30:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1280958077',display_name='tempest-LiveMigrationTest-server-1280958077',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1280958077',id=9,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c56e53b3339e4e4db30b7a9d330bc380',ramdisk_id='',reservation_id='r-xkzmzoa6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-1903931568',owner_user_name='tempest-LiveMigrationTest-1903931568-project
-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:30:49Z,user_data=None,user_id='a43b680a6019491aafe42c0a10e648df',uuid=f62791ad-fc40-451f-b02a-ba991f2dbc32,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "address": "fa:16:3e:d9:aa:f3", "network": {"id": "385e7a4d-f87e-44c5-9fc0-5a322eecd4b4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1143816535-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e53b3339e4e4db30b7a9d330bc380", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap857f8a0c-0b", "ovs_interfaceid": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.441 250273 DEBUG nova.network.os_vif_util [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Converting VIF {"id": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "address": "fa:16:3e:d9:aa:f3", "network": {"id": "385e7a4d-f87e-44c5-9fc0-5a322eecd4b4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1143816535-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e53b3339e4e4db30b7a9d330bc380", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap857f8a0c-0b", "ovs_interfaceid": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.442 250273 DEBUG nova.network.os_vif_util [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:aa:f3,bridge_name='br-int',has_traffic_filtering=True,id=857f8a0c-0bda-43ca-85aa-7f22568eddc7,network=Network(385e7a4d-f87e-44c5-9fc0-5a322eecd4b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap857f8a0c-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.443 250273 DEBUG nova.objects.instance [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Lazy-loading 'pci_devices' on Instance uuid f62791ad-fc40-451f-b02a-ba991f2dbc32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.461 250273 DEBUG nova.virt.libvirt.driver [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:30:55 np0005593232 nova_compute[250269]:  <uuid>f62791ad-fc40-451f-b02a-ba991f2dbc32</uuid>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:  <name>instance-00000009</name>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <nova:name>tempest-LiveMigrationTest-server-1280958077</nova:name>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:30:54</nova:creationTime>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:30:55 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:        <nova:user uuid="a43b680a6019491aafe42c0a10e648df">tempest-LiveMigrationTest-1903931568-project-member</nova:user>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:        <nova:project uuid="c56e53b3339e4e4db30b7a9d330bc380">tempest-LiveMigrationTest-1903931568</nova:project>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:        <nova:port uuid="857f8a0c-0bda-43ca-85aa-7f22568eddc7">
Jan 23 04:30:55 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <entry name="serial">f62791ad-fc40-451f-b02a-ba991f2dbc32</entry>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <entry name="uuid">f62791ad-fc40-451f-b02a-ba991f2dbc32</entry>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/f62791ad-fc40-451f-b02a-ba991f2dbc32_disk">
Jan 23 04:30:55 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:30:55 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/f62791ad-fc40-451f-b02a-ba991f2dbc32_disk.config">
Jan 23 04:30:55 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:30:55 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:d9:aa:f3"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <target dev="tap857f8a0c-0b"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/f62791ad-fc40-451f-b02a-ba991f2dbc32/console.log" append="off"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:30:55 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:30:55 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:30:55 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:30:55 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.463 250273 DEBUG nova.compute.manager [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Preparing to wait for external event network-vif-plugged-857f8a0c-0bda-43ca-85aa-7f22568eddc7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.463 250273 DEBUG oslo_concurrency.lockutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Acquiring lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.464 250273 DEBUG oslo_concurrency.lockutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.464 250273 DEBUG oslo_concurrency.lockutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.465 250273 DEBUG nova.virt.libvirt.vif [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:30:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1280958077',display_name='tempest-LiveMigrationTest-server-1280958077',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1280958077',id=9,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c56e53b3339e4e4db30b7a9d330bc380',ramdisk_id='',reservation_id='r-xkzmzoa6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveMigrationTest-1903931568',owner_user_name='tempest-LiveMigrationTest-19039315
68-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:30:49Z,user_data=None,user_id='a43b680a6019491aafe42c0a10e648df',uuid=f62791ad-fc40-451f-b02a-ba991f2dbc32,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "address": "fa:16:3e:d9:aa:f3", "network": {"id": "385e7a4d-f87e-44c5-9fc0-5a322eecd4b4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1143816535-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e53b3339e4e4db30b7a9d330bc380", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap857f8a0c-0b", "ovs_interfaceid": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.465 250273 DEBUG nova.network.os_vif_util [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Converting VIF {"id": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "address": "fa:16:3e:d9:aa:f3", "network": {"id": "385e7a4d-f87e-44c5-9fc0-5a322eecd4b4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1143816535-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e53b3339e4e4db30b7a9d330bc380", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap857f8a0c-0b", "ovs_interfaceid": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.466 250273 DEBUG nova.network.os_vif_util [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:aa:f3,bridge_name='br-int',has_traffic_filtering=True,id=857f8a0c-0bda-43ca-85aa-7f22568eddc7,network=Network(385e7a4d-f87e-44c5-9fc0-5a322eecd4b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap857f8a0c-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.466 250273 DEBUG os_vif [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:aa:f3,bridge_name='br-int',has_traffic_filtering=True,id=857f8a0c-0bda-43ca-85aa-7f22568eddc7,network=Network(385e7a4d-f87e-44c5-9fc0-5a322eecd4b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap857f8a0c-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.467 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.467 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.468 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.472 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.472 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap857f8a0c-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.473 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap857f8a0c-0b, col_values=(('external_ids', {'iface-id': '857f8a0c-0bda-43ca-85aa-7f22568eddc7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:aa:f3', 'vm-uuid': 'f62791ad-fc40-451f-b02a-ba991f2dbc32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.474 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:55 np0005593232 NetworkManager[49057]: <info>  [1769160655.4751] manager: (tap857f8a0c-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.476 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.481 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.482 250273 INFO os_vif [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:aa:f3,bridge_name='br-int',has_traffic_filtering=True,id=857f8a0c-0bda-43ca-85aa-7f22568eddc7,network=Network(385e7a4d-f87e-44c5-9fc0-5a322eecd4b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap857f8a0c-0b')#033[00m
Jan 23 04:30:55 np0005593232 podman[260453]: 2026-01-23 09:30:55.509690902 +0000 UTC m=+1.013271641 container init 7f93148c67c8ab0e324f669fea2a4513b11553765729ae14c119a29cde417205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mcnulty, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 23 04:30:55 np0005593232 podman[260453]: 2026-01-23 09:30:55.522386252 +0000 UTC m=+1.025967011 container start 7f93148c67c8ab0e324f669fea2a4513b11553765729ae14c119a29cde417205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 04:30:55 np0005593232 confident_mcnulty[260512]: 167 167
Jan 23 04:30:55 np0005593232 systemd[1]: libpod-7f93148c67c8ab0e324f669fea2a4513b11553765729ae14c119a29cde417205.scope: Deactivated successfully.
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.695 250273 DEBUG nova.virt.libvirt.driver [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.695 250273 DEBUG nova.virt.libvirt.driver [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.696 250273 DEBUG nova.virt.libvirt.driver [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] No VIF found with MAC fa:16:3e:d9:aa:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.696 250273 INFO nova.virt.libvirt.driver [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Using config drive#033[00m
Jan 23 04:30:55 np0005593232 podman[260453]: 2026-01-23 09:30:55.701944897 +0000 UTC m=+1.205525646 container attach 7f93148c67c8ab0e324f669fea2a4513b11553765729ae14c119a29cde417205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 04:30:55 np0005593232 podman[260453]: 2026-01-23 09:30:55.703013057 +0000 UTC m=+1.206593776 container died 7f93148c67c8ab0e324f669fea2a4513b11553765729ae14c119a29cde417205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:30:55 np0005593232 nova_compute[250269]: 2026-01-23 09:30:55.809 250273 DEBUG nova.storage.rbd_utils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] rbd image f62791ad-fc40-451f-b02a-ba991f2dbc32_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:30:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:55.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:55 np0005593232 systemd[1]: var-lib-containers-storage-overlay-63321b174910c89d7813c6f586c69fa7d068ab3d03b20d388c834bc017640ca9-merged.mount: Deactivated successfully.
Jan 23 04:30:55 np0005593232 podman[260453]: 2026-01-23 09:30:55.929674878 +0000 UTC m=+1.433255597 container remove 7f93148c67c8ab0e324f669fea2a4513b11553765729ae14c119a29cde417205 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 04:30:55 np0005593232 systemd[1]: libpod-conmon-7f93148c67c8ab0e324f669fea2a4513b11553765729ae14c119a29cde417205.scope: Deactivated successfully.
Jan 23 04:30:56 np0005593232 podman[260560]: 2026-01-23 09:30:56.100083833 +0000 UTC m=+0.045766979 container create c8e8b2247994e3a62cc40ded354ac5754a0cacb593d1973ffc5a73841c36c3ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_archimedes, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:30:56 np0005593232 systemd[1]: Started libpod-conmon-c8e8b2247994e3a62cc40ded354ac5754a0cacb593d1973ffc5a73841c36c3ee.scope.
Jan 23 04:30:56 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:30:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deeaadfd6113a516e941b7b6df59999601332f9aacb32d0146fe88cd05d7e3d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:30:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deeaadfd6113a516e941b7b6df59999601332f9aacb32d0146fe88cd05d7e3d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:30:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deeaadfd6113a516e941b7b6df59999601332f9aacb32d0146fe88cd05d7e3d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:30:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deeaadfd6113a516e941b7b6df59999601332f9aacb32d0146fe88cd05d7e3d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:30:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deeaadfd6113a516e941b7b6df59999601332f9aacb32d0146fe88cd05d7e3d9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:30:56 np0005593232 podman[260560]: 2026-01-23 09:30:56.079074327 +0000 UTC m=+0.024757513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:30:56 np0005593232 podman[260560]: 2026-01-23 09:30:56.251075888 +0000 UTC m=+0.196759074 container init c8e8b2247994e3a62cc40ded354ac5754a0cacb593d1973ffc5a73841c36c3ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 04:30:56 np0005593232 podman[260560]: 2026-01-23 09:30:56.263917432 +0000 UTC m=+0.209600618 container start c8e8b2247994e3a62cc40ded354ac5754a0cacb593d1973ffc5a73841c36c3ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_archimedes, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Jan 23 04:30:56 np0005593232 podman[260560]: 2026-01-23 09:30:56.469383342 +0000 UTC m=+0.415066498 container attach c8e8b2247994e3a62cc40ded354ac5754a0cacb593d1973ffc5a73841c36c3ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 04:30:56 np0005593232 ovn_controller[151001]: 2026-01-23T09:30:56Z|00035|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 23 04:30:56 np0005593232 nova_compute[250269]: 2026-01-23 09:30:56.595 250273 DEBUG nova.network.neutron [req-cd674c0e-ec6f-4b1d-819e-3a61d9dc7549 req-f959bd55-bee9-45a1-baa6-a762bc2cc0cc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Updated VIF entry in instance network info cache for port 857f8a0c-0bda-43ca-85aa-7f22568eddc7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:30:56 np0005593232 nova_compute[250269]: 2026-01-23 09:30:56.596 250273 DEBUG nova.network.neutron [req-cd674c0e-ec6f-4b1d-819e-3a61d9dc7549 req-f959bd55-bee9-45a1-baa6-a762bc2cc0cc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Updating instance_info_cache with network_info: [{"id": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "address": "fa:16:3e:d9:aa:f3", "network": {"id": "385e7a4d-f87e-44c5-9fc0-5a322eecd4b4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1143816535-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e53b3339e4e4db30b7a9d330bc380", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap857f8a0c-0b", "ovs_interfaceid": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:30:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1079: 321 pgs: 321 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Jan 23 04:30:57 np0005593232 flamboyant_archimedes[260578]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:30:57 np0005593232 flamboyant_archimedes[260578]: --> relative data size: 1.0
Jan 23 04:30:57 np0005593232 flamboyant_archimedes[260578]: --> All data devices are unavailable
Jan 23 04:30:57 np0005593232 nova_compute[250269]: 2026-01-23 09:30:57.081 250273 DEBUG oslo_concurrency.lockutils [req-cd674c0e-ec6f-4b1d-819e-3a61d9dc7549 req-f959bd55-bee9-45a1-baa6-a762bc2cc0cc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-f62791ad-fc40-451f-b02a-ba991f2dbc32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:30:57 np0005593232 nova_compute[250269]: 2026-01-23 09:30:57.088 250273 INFO nova.virt.libvirt.driver [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Creating config drive at /var/lib/nova/instances/f62791ad-fc40-451f-b02a-ba991f2dbc32/disk.config#033[00m
Jan 23 04:30:57 np0005593232 nova_compute[250269]: 2026-01-23 09:30:57.094 250273 DEBUG oslo_concurrency.processutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f62791ad-fc40-451f-b02a-ba991f2dbc32/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo6fbshvj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:30:57 np0005593232 systemd[1]: libpod-c8e8b2247994e3a62cc40ded354ac5754a0cacb593d1973ffc5a73841c36c3ee.scope: Deactivated successfully.
Jan 23 04:30:57 np0005593232 podman[260596]: 2026-01-23 09:30:57.155693845 +0000 UTC m=+0.028352226 container died c8e8b2247994e3a62cc40ded354ac5754a0cacb593d1973ffc5a73841c36c3ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 23 04:30:57 np0005593232 systemd[1]: var-lib-containers-storage-overlay-deeaadfd6113a516e941b7b6df59999601332f9aacb32d0146fe88cd05d7e3d9-merged.mount: Deactivated successfully.
Jan 23 04:30:57 np0005593232 nova_compute[250269]: 2026-01-23 09:30:57.220 250273 DEBUG oslo_concurrency.processutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f62791ad-fc40-451f-b02a-ba991f2dbc32/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo6fbshvj" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:30:57 np0005593232 podman[260596]: 2026-01-23 09:30:57.238895815 +0000 UTC m=+0.111554176 container remove c8e8b2247994e3a62cc40ded354ac5754a0cacb593d1973ffc5a73841c36c3ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:30:57 np0005593232 systemd[1]: libpod-conmon-c8e8b2247994e3a62cc40ded354ac5754a0cacb593d1973ffc5a73841c36c3ee.scope: Deactivated successfully.
Jan 23 04:30:57 np0005593232 nova_compute[250269]: 2026-01-23 09:30:57.253 250273 DEBUG nova.storage.rbd_utils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] rbd image f62791ad-fc40-451f-b02a-ba991f2dbc32_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:30:57 np0005593232 nova_compute[250269]: 2026-01-23 09:30:57.258 250273 DEBUG oslo_concurrency.processutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f62791ad-fc40-451f-b02a-ba991f2dbc32/disk.config f62791ad-fc40-451f-b02a-ba991f2dbc32_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:30:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:30:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:57.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:30:57 np0005593232 nova_compute[250269]: 2026-01-23 09:30:57.533 250273 DEBUG oslo_concurrency.processutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f62791ad-fc40-451f-b02a-ba991f2dbc32/disk.config f62791ad-fc40-451f-b02a-ba991f2dbc32_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:30:57 np0005593232 nova_compute[250269]: 2026-01-23 09:30:57.534 250273 INFO nova.virt.libvirt.driver [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Deleting local config drive /var/lib/nova/instances/f62791ad-fc40-451f-b02a-ba991f2dbc32/disk.config because it was imported into RBD.#033[00m
Jan 23 04:30:57 np0005593232 kernel: tap857f8a0c-0b: entered promiscuous mode
Jan 23 04:30:57 np0005593232 NetworkManager[49057]: <info>  [1769160657.5826] manager: (tap857f8a0c-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Jan 23 04:30:57 np0005593232 nova_compute[250269]: 2026-01-23 09:30:57.583 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:57 np0005593232 ovn_controller[151001]: 2026-01-23T09:30:57Z|00036|binding|INFO|Claiming lport 857f8a0c-0bda-43ca-85aa-7f22568eddc7 for this chassis.
Jan 23 04:30:57 np0005593232 ovn_controller[151001]: 2026-01-23T09:30:57Z|00037|binding|INFO|857f8a0c-0bda-43ca-85aa-7f22568eddc7: Claiming fa:16:3e:d9:aa:f3 10.100.0.10
Jan 23 04:30:57 np0005593232 ovn_controller[151001]: 2026-01-23T09:30:57Z|00038|binding|INFO|Claiming lport 7dc28ada-b6f3-4524-9e75-42c4d4604d63 for this chassis.
Jan 23 04:30:57 np0005593232 ovn_controller[151001]: 2026-01-23T09:30:57Z|00039|binding|INFO|7dc28ada-b6f3-4524-9e75-42c4d4604d63: Claiming fa:16:3e:4b:1d:32 19.80.0.19
Jan 23 04:30:57 np0005593232 nova_compute[250269]: 2026-01-23 09:30:57.591 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:57 np0005593232 nova_compute[250269]: 2026-01-23 09:30:57.593 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.599 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:aa:f3 10.100.0.10'], port_security=['fa:16:3e:d9:aa:f3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-412021528', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f62791ad-fc40-451f-b02a-ba991f2dbc32', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-412021528', 'neutron:project_id': 'c56e53b3339e4e4db30b7a9d330bc380', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c0c0e09a-b9c3-4a3a-af9e-c3b66e9f8bc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cabb3d88-013b-4542-b789-52d49c567d53, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=857f8a0c-0bda-43ca-85aa-7f22568eddc7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.600 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:1d:32 19.80.0.19'], port_security=['fa:16:3e:4b:1d:32 19.80.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['857f8a0c-0bda-43ca-85aa-7f22568eddc7'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1183347964', 'neutron:cidrs': '19.80.0.19/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48c9624b-33de-47f9-a720-02dd9028b5ea', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1183347964', 'neutron:project_id': 'c56e53b3339e4e4db30b7a9d330bc380', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c0c0e09a-b9c3-4a3a-af9e-c3b66e9f8bc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=7ac2005c-13d2-4227-8eb4-3d332da8f5d6, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7dc28ada-b6f3-4524-9e75-42c4d4604d63) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.601 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 857f8a0c-0bda-43ca-85aa-7f22568eddc7 in datapath 385e7a4d-f87e-44c5-9fc0-5a322eecd4b4 bound to our chassis#033[00m
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.603 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 385e7a4d-f87e-44c5-9fc0-5a322eecd4b4#033[00m
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.616 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[68244414-4b90-4235-bdbe-78005e62c8f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.616 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap385e7a4d-f1 in ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.620 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap385e7a4d-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.620 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[63639243-3a81-47b3-9db9-8f70333c27dd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.621 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5bf836ee-8ca0-436e-815d-c30bc83a823a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:30:57 np0005593232 systemd-machined[215836]: New machine qemu-3-instance-00000009.
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.636 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[71de0aae-8782-47e9-b5bf-eb0aa5631341]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.663 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3f67f157-fe32-4ffa-ba05-c7a7ee3a2125]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:30:57 np0005593232 systemd[1]: Started Virtual Machine qemu-3-instance-00000009.
Jan 23 04:30:57 np0005593232 nova_compute[250269]: 2026-01-23 09:30:57.691 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:57 np0005593232 ovn_controller[151001]: 2026-01-23T09:30:57Z|00040|binding|INFO|Setting lport 857f8a0c-0bda-43ca-85aa-7f22568eddc7 ovn-installed in OVS
Jan 23 04:30:57 np0005593232 ovn_controller[151001]: 2026-01-23T09:30:57Z|00041|binding|INFO|Setting lport 857f8a0c-0bda-43ca-85aa-7f22568eddc7 up in Southbound
Jan 23 04:30:57 np0005593232 ovn_controller[151001]: 2026-01-23T09:30:57Z|00042|binding|INFO|Setting lport 7dc28ada-b6f3-4524-9e75-42c4d4604d63 up in Southbound
Jan 23 04:30:57 np0005593232 nova_compute[250269]: 2026-01-23 09:30:57.697 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.703 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c733db-9a82-4aaf-9497-701b1537400e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:30:57 np0005593232 NetworkManager[49057]: <info>  [1769160657.7106] manager: (tap385e7a4d-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/29)
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.709 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ed736465-f79c-416b-9832-02ad49ad7274]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:30:57 np0005593232 systemd-udevd[260785]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:30:57 np0005593232 systemd-udevd[260786]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:30:57 np0005593232 NetworkManager[49057]: <info>  [1769160657.7280] device (tap857f8a0c-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:30:57 np0005593232 NetworkManager[49057]: <info>  [1769160657.7286] device (tap857f8a0c-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.740 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[fa32ee4f-a222-412b-9bd5-052d2b7b7d06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.742 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[99b6c3b5-52f4-4e94-b433-1b7edb7b5152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:30:57 np0005593232 NetworkManager[49057]: <info>  [1769160657.7730] device (tap385e7a4d-f0): carrier: link connected
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.773 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[f40e1790-8092-4773-9321-1e5e54d9b349]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.790 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c1f2d938-2a66-4390-b128-d5f53b8f4538]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap385e7a4d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:a3:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 457091, 'reachable_time': 19465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260824, 'error': None, 'target': 'ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.803 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d0671f35-8416-4a77-b963-67fa67a26458]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0e:a31a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 457091, 'tstamp': 457091}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260831, 'error': None, 'target': 'ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.818 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7fce7533-188d-4ce2-a0ab-4bf33e9c0ee7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap385e7a4d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:a3:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 457091, 'reachable_time': 19465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260833, 'error': None, 'target': 'ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:30:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:30:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:57.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.848 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a37dce02-9326-45ee-89f3-24a0b4d5596e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.898 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[12437e98-336e-460a-ad24-a40909478a80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.900 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap385e7a4d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.900 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.901 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap385e7a4d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:30:57 np0005593232 NetworkManager[49057]: <info>  [1769160657.9029] manager: (tap385e7a4d-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Jan 23 04:30:57 np0005593232 nova_compute[250269]: 2026-01-23 09:30:57.902 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:57 np0005593232 kernel: tap385e7a4d-f0: entered promiscuous mode
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.906 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap385e7a4d-f0, col_values=(('external_ids', {'iface-id': '7b93c40e-1f44-4d5a-9bad-e23468f98d69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:30:57 np0005593232 nova_compute[250269]: 2026-01-23 09:30:57.908 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:57 np0005593232 ovn_controller[151001]: 2026-01-23T09:30:57Z|00043|binding|INFO|Releasing lport 7b93c40e-1f44-4d5a-9bad-e23468f98d69 from this chassis (sb_readonly=0)
Jan 23 04:30:57 np0005593232 nova_compute[250269]: 2026-01-23 09:30:57.921 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.922 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/385e7a4d-f87e-44c5-9fc0-5a322eecd4b4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/385e7a4d-f87e-44c5-9fc0-5a322eecd4b4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.926 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[cf78317f-b99e-49af-bec2-0f977e7dac40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.927 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/385e7a4d-f87e-44c5-9fc0-5a322eecd4b4.pid.haproxy
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 385e7a4d-f87e-44c5-9fc0-5a322eecd4b4
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:30:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:30:57.927 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4', 'env', 'PROCESS_TAG=haproxy-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/385e7a4d-f87e-44c5-9fc0-5a322eecd4b4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 04:30:57 np0005593232 podman[260834]: 2026-01-23 09:30:57.856666762 +0000 UTC m=+0.024491566 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:30:58 np0005593232 podman[260834]: 2026-01-23 09:30:58.073288639 +0000 UTC m=+0.241113423 container create fd5c1f4cbe64e3c54877ebac51882b02df52dbe4d75b1cae06d375ae6048d1eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_thompson, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:30:58 np0005593232 systemd[1]: Started libpod-conmon-fd5c1f4cbe64e3c54877ebac51882b02df52dbe4d75b1cae06d375ae6048d1eb.scope.
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.228 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:30:58 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.332 250273 DEBUG nova.compute.manager [req-08f28459-ce1b-441f-a8fc-fd6770c5fac1 req-3db4bbec-afad-4691-80ff-d2e8c7be5e78 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Received event network-vif-plugged-857f8a0c-0bda-43ca-85aa-7f22568eddc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.334 250273 DEBUG oslo_concurrency.lockutils [req-08f28459-ce1b-441f-a8fc-fd6770c5fac1 req-3db4bbec-afad-4691-80ff-d2e8c7be5e78 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.334 250273 DEBUG oslo_concurrency.lockutils [req-08f28459-ce1b-441f-a8fc-fd6770c5fac1 req-3db4bbec-afad-4691-80ff-d2e8c7be5e78 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.335 250273 DEBUG oslo_concurrency.lockutils [req-08f28459-ce1b-441f-a8fc-fd6770c5fac1 req-3db4bbec-afad-4691-80ff-d2e8c7be5e78 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.335 250273 DEBUG nova.compute.manager [req-08f28459-ce1b-441f-a8fc-fd6770c5fac1 req-3db4bbec-afad-4691-80ff-d2e8c7be5e78 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Processing event network-vif-plugged-857f8a0c-0bda-43ca-85aa-7f22568eddc7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 04:30:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:30:58 np0005593232 podman[260834]: 2026-01-23 09:30:58.40639157 +0000 UTC m=+0.574216374 container init fd5c1f4cbe64e3c54877ebac51882b02df52dbe4d75b1cae06d375ae6048d1eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_thompson, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:30:58 np0005593232 podman[260834]: 2026-01-23 09:30:58.414796368 +0000 UTC m=+0.582621152 container start fd5c1f4cbe64e3c54877ebac51882b02df52dbe4d75b1cae06d375ae6048d1eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_thompson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 23 04:30:58 np0005593232 systemd[1]: libpod-fd5c1f4cbe64e3c54877ebac51882b02df52dbe4d75b1cae06d375ae6048d1eb.scope: Deactivated successfully.
Jan 23 04:30:58 np0005593232 laughing_thompson[260909]: 167 167
Jan 23 04:30:58 np0005593232 conmon[260909]: conmon fd5c1f4cbe64e3c54877 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fd5c1f4cbe64e3c54877ebac51882b02df52dbe4d75b1cae06d375ae6048d1eb.scope/container/memory.events
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.451 250273 DEBUG nova.compute.manager [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.452 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160658.4510732, f62791ad-fc40-451f-b02a-ba991f2dbc32 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.453 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] VM Started (Lifecycle Event)#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.459 250273 DEBUG nova.virt.libvirt.driver [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.463 250273 INFO nova.virt.libvirt.driver [-] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Instance spawned successfully.#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.464 250273 DEBUG nova.virt.libvirt.driver [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.480 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.484 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.490 250273 DEBUG nova.virt.libvirt.driver [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.491 250273 DEBUG nova.virt.libvirt.driver [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.491 250273 DEBUG nova.virt.libvirt.driver [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.492 250273 DEBUG nova.virt.libvirt.driver [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.492 250273 DEBUG nova.virt.libvirt.driver [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.493 250273 DEBUG nova.virt.libvirt.driver [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.528 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.528 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160658.4522228, f62791ad-fc40-451f-b02a-ba991f2dbc32 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.528 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.558 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.561 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160658.4572027, f62791ad-fc40-451f-b02a-ba991f2dbc32 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.562 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.570 250273 INFO nova.compute.manager [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Took 9.25 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.570 250273 DEBUG nova.compute.manager [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:30:58 np0005593232 podman[260834]: 2026-01-23 09:30:58.578250086 +0000 UTC m=+0.746074860 container attach fd5c1f4cbe64e3c54877ebac51882b02df52dbe4d75b1cae06d375ae6048d1eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 04:30:58 np0005593232 podman[260834]: 2026-01-23 09:30:58.578843743 +0000 UTC m=+0.746668527 container died fd5c1f4cbe64e3c54877ebac51882b02df52dbe4d75b1cae06d375ae6048d1eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_thompson, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.582 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.585 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.624 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.659 250273 INFO nova.compute.manager [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Took 10.71 seconds to build instance.#033[00m
Jan 23 04:30:58 np0005593232 nova_compute[250269]: 2026-01-23 09:30:58.687 250273 DEBUG oslo_concurrency.lockutils [None req-8aec0479-1d6f-41ed-b325-20a3aeb9d13b a43b680a6019491aafe42c0a10e648df c56e53b3339e4e4db30b7a9d330bc380 - - default default] Lock "f62791ad-fc40-451f-b02a-ba991f2dbc32" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:30:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1080: 321 pgs: 321 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Jan 23 04:30:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:30:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:30:59.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:30:59 np0005593232 nova_compute[250269]: 2026-01-23 09:30:59.421 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:30:59 np0005593232 nova_compute[250269]: 2026-01-23 09:30:59.422 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:30:59 np0005593232 nova_compute[250269]: 2026-01-23 09:30:59.454 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:30:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:30:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:30:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:30:59.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:30:59 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5ded88314838f1ac01411493c14c0b9ebc695c03da802597b013964098f285e5-merged.mount: Deactivated successfully.
Jan 23 04:31:00 np0005593232 nova_compute[250269]: 2026-01-23 09:31:00.475 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:00 np0005593232 nova_compute[250269]: 2026-01-23 09:31:00.490 250273 DEBUG nova.compute.manager [req-ffe1408b-e411-4e78-a3a2-62f78f4173fb req-0f780e0a-925e-4b84-9217-cec80efae636 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Received event network-vif-plugged-857f8a0c-0bda-43ca-85aa-7f22568eddc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:31:00 np0005593232 nova_compute[250269]: 2026-01-23 09:31:00.490 250273 DEBUG oslo_concurrency.lockutils [req-ffe1408b-e411-4e78-a3a2-62f78f4173fb req-0f780e0a-925e-4b84-9217-cec80efae636 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:31:00 np0005593232 nova_compute[250269]: 2026-01-23 09:31:00.491 250273 DEBUG oslo_concurrency.lockutils [req-ffe1408b-e411-4e78-a3a2-62f78f4173fb req-0f780e0a-925e-4b84-9217-cec80efae636 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:31:00 np0005593232 nova_compute[250269]: 2026-01-23 09:31:00.491 250273 DEBUG oslo_concurrency.lockutils [req-ffe1408b-e411-4e78-a3a2-62f78f4173fb req-0f780e0a-925e-4b84-9217-cec80efae636 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:31:00 np0005593232 nova_compute[250269]: 2026-01-23 09:31:00.491 250273 DEBUG nova.compute.manager [req-ffe1408b-e411-4e78-a3a2-62f78f4173fb req-0f780e0a-925e-4b84-9217-cec80efae636 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] No waiting events found dispatching network-vif-plugged-857f8a0c-0bda-43ca-85aa-7f22568eddc7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:31:00 np0005593232 nova_compute[250269]: 2026-01-23 09:31:00.491 250273 WARNING nova.compute.manager [req-ffe1408b-e411-4e78-a3a2-62f78f4173fb req-0f780e0a-925e-4b84-9217-cec80efae636 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Received unexpected event network-vif-plugged-857f8a0c-0bda-43ca-85aa-7f22568eddc7 for instance with vm_state active and task_state None.#033[00m
Jan 23 04:31:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1081: 321 pgs: 321 active+clean; 167 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.1 MiB/s wr, 143 op/s
Jan 23 04:31:00 np0005593232 podman[260834]: 2026-01-23 09:31:00.936570208 +0000 UTC m=+3.104395032 container remove fd5c1f4cbe64e3c54877ebac51882b02df52dbe4d75b1cae06d375ae6048d1eb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:31:01 np0005593232 systemd[1]: libpod-conmon-fd5c1f4cbe64e3c54877ebac51882b02df52dbe4d75b1cae06d375ae6048d1eb.scope: Deactivated successfully.
Jan 23 04:31:01 np0005593232 podman[260937]: 2026-01-23 09:31:01.059387693 +0000 UTC m=+2.606583697 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 23 04:31:01 np0005593232 podman[260919]: 2026-01-23 09:31:01.037475401 +0000 UTC m=+2.764776236 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:31:01 np0005593232 nova_compute[250269]: 2026-01-23 09:31:01.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:31:01 np0005593232 nova_compute[250269]: 2026-01-23 09:31:01.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:31:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:01.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:01 np0005593232 nova_compute[250269]: 2026-01-23 09:31:01.407 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 04:31:01 np0005593232 nova_compute[250269]: 2026-01-23 09:31:01.407 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:31:01 np0005593232 podman[260980]: 2026-01-23 09:31:01.397370652 +0000 UTC m=+0.309741529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:31:01 np0005593232 podman[260919]: 2026-01-23 09:31:01.541327997 +0000 UTC m=+3.268628822 container create e85a8c582c8b355e088ed1f2b341d0f1625d3824c8a0ca90752cbfb9b1d2f6f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 23 04:31:01 np0005593232 podman[260980]: 2026-01-23 09:31:01.596980736 +0000 UTC m=+0.509351583 container create 05f7f535c2be0eeffee6b6784b2fc6a30d251d6de92fce3e24ab3b19776e9fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_keller, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:31:01 np0005593232 systemd[1]: Started libpod-conmon-e85a8c582c8b355e088ed1f2b341d0f1625d3824c8a0ca90752cbfb9b1d2f6f8.scope.
Jan 23 04:31:01 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:31:01 np0005593232 systemd[1]: Started libpod-conmon-05f7f535c2be0eeffee6b6784b2fc6a30d251d6de92fce3e24ab3b19776e9fad.scope.
Jan 23 04:31:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39516766458a59d235390c83fa6296973b3cdef1627bb40f5761926c537de8a8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:31:01 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:31:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb5d4b758533cba46a59c626b2b5b7498423b32fee523866efe308d6924a22f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:31:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb5d4b758533cba46a59c626b2b5b7498423b32fee523866efe308d6924a22f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:31:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb5d4b758533cba46a59c626b2b5b7498423b32fee523866efe308d6924a22f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:31:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb5d4b758533cba46a59c626b2b5b7498423b32fee523866efe308d6924a22f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:31:01 np0005593232 podman[260919]: 2026-01-23 09:31:01.68242448 +0000 UTC m=+3.409725325 container init e85a8c582c8b355e088ed1f2b341d0f1625d3824c8a0ca90752cbfb9b1d2f6f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 23 04:31:01 np0005593232 podman[260980]: 2026-01-23 09:31:01.691005603 +0000 UTC m=+0.603376470 container init 05f7f535c2be0eeffee6b6784b2fc6a30d251d6de92fce3e24ab3b19776e9fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:31:01 np0005593232 podman[260919]: 2026-01-23 09:31:01.692857516 +0000 UTC m=+3.420158341 container start e85a8c582c8b355e088ed1f2b341d0f1625d3824c8a0ca90752cbfb9b1d2f6f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 23 04:31:01 np0005593232 podman[260980]: 2026-01-23 09:31:01.699661349 +0000 UTC m=+0.612032186 container start 05f7f535c2be0eeffee6b6784b2fc6a30d251d6de92fce3e24ab3b19776e9fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:31:01 np0005593232 podman[260980]: 2026-01-23 09:31:01.703673043 +0000 UTC m=+0.616043910 container attach 05f7f535c2be0eeffee6b6784b2fc6a30d251d6de92fce3e24ab3b19776e9fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 23 04:31:01 np0005593232 neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4[260996]: [NOTICE]   (261006) : New worker (261009) forked
Jan 23 04:31:01 np0005593232 neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4[260996]: [NOTICE]   (261006) : Loading success.
Jan 23 04:31:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:01.771 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 7dc28ada-b6f3-4524-9e75-42c4d4604d63 in datapath 48c9624b-33de-47f9-a720-02dd9028b5ea unbound from our chassis#033[00m
Jan 23 04:31:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:01.774 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48c9624b-33de-47f9-a720-02dd9028b5ea#033[00m
Jan 23 04:31:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:01.787 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[bc4da351-f492-44eb-9dad-630f5fc42eef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:01.788 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48c9624b-31 in ovnmeta-48c9624b-33de-47f9-a720-02dd9028b5ea namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:31:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:01.791 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48c9624b-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:31:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:01.791 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6c5c18d5-c344-4b25-b740-f82ec041466c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:01.792 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[433bedc9-424d-447d-947d-b309aa5122a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:01.806 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[e171d835-19f9-4789-a33b-ac829a3cff88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:01.827 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[69f0d3b9-4b48-4be1-8f83-29d33683773b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:31:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:01.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:31:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:01.854 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[3ebd7f55-620f-4653-9205-b15909c88b7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:01.860 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[18b8ba5f-63b3-4253-9920-e0ffda6991f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:01 np0005593232 NetworkManager[49057]: <info>  [1769160661.8661] manager: (tap48c9624b-30): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Jan 23 04:31:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:01.893 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[64a670d4-3b87-4ee6-8ec5-a4c4e922ea79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:01 np0005593232 systemd-udevd[261026]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:31:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:01.903 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[33f5d7b7-9722-47a0-8f11-3aad1499352b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:01 np0005593232 NetworkManager[49057]: <info>  [1769160661.9282] device (tap48c9624b-30): carrier: link connected
Jan 23 04:31:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:01.933 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[1669a5f2-b2fd-49c1-91b4-0101e1cf3480]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:01.955 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ad3501eb-c56f-4d9c-b6e4-0c8a42e7efec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48c9624b-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:8f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 457507, 'reachable_time': 21067, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261030, 'error': None, 'target': 'ovnmeta-48c9624b-33de-47f9-a720-02dd9028b5ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:01.971 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[07b38085-aa7a-4755-bf76-62a7f2512b90]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed6:8fae'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 457507, 'tstamp': 457507}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261046, 'error': None, 'target': 'ovnmeta-48c9624b-33de-47f9-a720-02dd9028b5ea', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:01.986 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[592a4858-ed63-4134-ab56-85321c9a8880]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48c9624b-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:8f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 17], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 457507, 'reachable_time': 21067, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261047, 'error': None, 'target': 'ovnmeta-48c9624b-33de-47f9-a720-02dd9028b5ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:02.017 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[92f8a0a5-b03c-450f-a483-68cd3b5e08d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:02.068 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5c07a420-fb2e-4c41-a47e-5a8673d9ba92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:02.070 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48c9624b-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:02.070 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:02.070 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48c9624b-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:31:02 np0005593232 nova_compute[250269]: 2026-01-23 09:31:02.072 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:02 np0005593232 kernel: tap48c9624b-30: entered promiscuous mode
Jan 23 04:31:02 np0005593232 NetworkManager[49057]: <info>  [1769160662.0727] manager: (tap48c9624b-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Jan 23 04:31:02 np0005593232 nova_compute[250269]: 2026-01-23 09:31:02.074 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:02.075 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48c9624b-30, col_values=(('external_ids', {'iface-id': '8e19ba82-19a8-44be-8cf0-66f5e53af8a2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:31:02 np0005593232 nova_compute[250269]: 2026-01-23 09:31:02.076 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:02 np0005593232 ovn_controller[151001]: 2026-01-23T09:31:02Z|00044|binding|INFO|Releasing lport 8e19ba82-19a8-44be-8cf0-66f5e53af8a2 from this chassis (sb_readonly=0)
Jan 23 04:31:02 np0005593232 nova_compute[250269]: 2026-01-23 09:31:02.094 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:02.094 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48c9624b-33de-47f9-a720-02dd9028b5ea.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48c9624b-33de-47f9-a720-02dd9028b5ea.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:02.095 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2a41a34b-d557-432f-b76b-3a50bd319d3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:02.096 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-48c9624b-33de-47f9-a720-02dd9028b5ea
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/48c9624b-33de-47f9-a720-02dd9028b5ea.pid.haproxy
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 48c9624b-33de-47f9-a720-02dd9028b5ea
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:02.096 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48c9624b-33de-47f9-a720-02dd9028b5ea', 'env', 'PROCESS_TAG=haproxy-48c9624b-33de-47f9-a720-02dd9028b5ea', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48c9624b-33de-47f9-a720-02dd9028b5ea.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:02.217 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:31:02 np0005593232 nova_compute[250269]: 2026-01-23 09:31:02.217 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:02 np0005593232 nova_compute[250269]: 2026-01-23 09:31:02.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:31:02 np0005593232 podman[261079]: 2026-01-23 09:31:02.490292392 +0000 UTC m=+0.098347462 container create 87992c0c2b2decccfea69f791a71c114e853dc22395d0dd47d0875a3a09eabdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48c9624b-33de-47f9-a720-02dd9028b5ea, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 04:31:02 np0005593232 podman[261079]: 2026-01-23 09:31:02.412038891 +0000 UTC m=+0.020093981 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]: {
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:    "0": [
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:        {
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:            "devices": [
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:                "/dev/loop3"
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:            ],
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:            "lv_name": "ceph_lv0",
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:            "lv_size": "7511998464",
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:            "name": "ceph_lv0",
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:            "tags": {
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:                "ceph.cluster_name": "ceph",
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:                "ceph.crush_device_class": "",
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:                "ceph.encrypted": "0",
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:                "ceph.osd_id": "0",
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:                "ceph.type": "block",
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:                "ceph.vdo": "0"
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:            },
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:            "type": "block",
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:            "vg_name": "ceph_vg0"
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:        }
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]:    ]
Jan 23 04:31:02 np0005593232 heuristic_keller[261001]: }
Jan 23 04:31:02 np0005593232 systemd[1]: Started libpod-conmon-87992c0c2b2decccfea69f791a71c114e853dc22395d0dd47d0875a3a09eabdd.scope.
Jan 23 04:31:02 np0005593232 podman[260980]: 2026-01-23 09:31:02.564210729 +0000 UTC m=+1.476581576 container died 05f7f535c2be0eeffee6b6784b2fc6a30d251d6de92fce3e24ab3b19776e9fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_keller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 04:31:02 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:31:02 np0005593232 systemd[1]: libpod-05f7f535c2be0eeffee6b6784b2fc6a30d251d6de92fce3e24ab3b19776e9fad.scope: Deactivated successfully.
Jan 23 04:31:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5c471c124784cd5923a6d4e2d2f81648745eea5177739486428356cee852e48/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:31:02 np0005593232 podman[261079]: 2026-01-23 09:31:02.623347797 +0000 UTC m=+0.231402897 container init 87992c0c2b2decccfea69f791a71c114e853dc22395d0dd47d0875a3a09eabdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48c9624b-33de-47f9-a720-02dd9028b5ea, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 23 04:31:02 np0005593232 podman[261079]: 2026-01-23 09:31:02.629041289 +0000 UTC m=+0.237096359 container start 87992c0c2b2decccfea69f791a71c114e853dc22395d0dd47d0875a3a09eabdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48c9624b-33de-47f9-a720-02dd9028b5ea, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 04:31:02 np0005593232 systemd[1]: var-lib-containers-storage-overlay-fb5d4b758533cba46a59c626b2b5b7498423b32fee523866efe308d6924a22f0-merged.mount: Deactivated successfully.
Jan 23 04:31:02 np0005593232 neutron-haproxy-ovnmeta-48c9624b-33de-47f9-a720-02dd9028b5ea[261098]: [NOTICE]   (261114) : New worker (261116) forked
Jan 23 04:31:02 np0005593232 neutron-haproxy-ovnmeta-48c9624b-33de-47f9-a720-02dd9028b5ea[261098]: [NOTICE]   (261114) : Loading success.
Jan 23 04:31:02 np0005593232 podman[260980]: 2026-01-23 09:31:02.675558858 +0000 UTC m=+1.587929705 container remove 05f7f535c2be0eeffee6b6784b2fc6a30d251d6de92fce3e24ab3b19776e9fad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_keller, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:31:02 np0005593232 systemd[1]: libpod-conmon-05f7f535c2be0eeffee6b6784b2fc6a30d251d6de92fce3e24ab3b19776e9fad.scope: Deactivated successfully.
Jan 23 04:31:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:02.697 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:31:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1082: 321 pgs: 321 active+clean; 180 MiB data, 323 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.5 MiB/s wr, 149 op/s
Jan 23 04:31:03 np0005593232 nova_compute[250269]: 2026-01-23 09:31:03.231 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:03 np0005593232 podman[261268]: 2026-01-23 09:31:03.283019434 +0000 UTC m=+0.044638268 container create 449c61e4efbaf203033fcd578482554f6c85a9bea13b0b9dc89e8de39b2909f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:31:03 np0005593232 nova_compute[250269]: 2026-01-23 09:31:03.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:31:03 np0005593232 systemd[1]: Started libpod-conmon-449c61e4efbaf203033fcd578482554f6c85a9bea13b0b9dc89e8de39b2909f0.scope.
Jan 23 04:31:03 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:31:03 np0005593232 podman[261268]: 2026-01-23 09:31:03.260959178 +0000 UTC m=+0.022578042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:31:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:31:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Jan 23 04:31:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:03.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:03 np0005593232 podman[261268]: 2026-01-23 09:31:03.391517572 +0000 UTC m=+0.153136426 container init 449c61e4efbaf203033fcd578482554f6c85a9bea13b0b9dc89e8de39b2909f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 04:31:03 np0005593232 podman[261268]: 2026-01-23 09:31:03.39812982 +0000 UTC m=+0.159748654 container start 449c61e4efbaf203033fcd578482554f6c85a9bea13b0b9dc89e8de39b2909f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_stonebraker, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 23 04:31:03 np0005593232 podman[261268]: 2026-01-23 09:31:03.401384022 +0000 UTC m=+0.163002856 container attach 449c61e4efbaf203033fcd578482554f6c85a9bea13b0b9dc89e8de39b2909f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 04:31:03 np0005593232 systemd[1]: libpod-449c61e4efbaf203033fcd578482554f6c85a9bea13b0b9dc89e8de39b2909f0.scope: Deactivated successfully.
Jan 23 04:31:03 np0005593232 jovial_stonebraker[261285]: 167 167
Jan 23 04:31:03 np0005593232 conmon[261285]: conmon 449c61e4efbaf203033f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-449c61e4efbaf203033fcd578482554f6c85a9bea13b0b9dc89e8de39b2909f0.scope/container/memory.events
Jan 23 04:31:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Jan 23 04:31:03 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Jan 23 04:31:03 np0005593232 podman[261290]: 2026-01-23 09:31:03.453596754 +0000 UTC m=+0.034670085 container died 449c61e4efbaf203033fcd578482554f6c85a9bea13b0b9dc89e8de39b2909f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_stonebraker, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:31:03 np0005593232 systemd[1]: var-lib-containers-storage-overlay-55dd9d0b640d4417f5c52ef72c3c43015e44aeefbf9d2626b406345d48290215-merged.mount: Deactivated successfully.
Jan 23 04:31:03 np0005593232 podman[261290]: 2026-01-23 09:31:03.490928253 +0000 UTC m=+0.072001574 container remove 449c61e4efbaf203033fcd578482554f6c85a9bea13b0b9dc89e8de39b2909f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:31:03 np0005593232 systemd[1]: libpod-conmon-449c61e4efbaf203033fcd578482554f6c85a9bea13b0b9dc89e8de39b2909f0.scope: Deactivated successfully.
Jan 23 04:31:03 np0005593232 podman[261313]: 2026-01-23 09:31:03.677065874 +0000 UTC m=+0.042191108 container create e1dc6cbde1ced0d445c055b81b922a6dc515a89882f873bcfaa8941279757117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_cartwright, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:31:03 np0005593232 systemd[1]: Started libpod-conmon-e1dc6cbde1ced0d445c055b81b922a6dc515a89882f873bcfaa8941279757117.scope.
Jan 23 04:31:03 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:31:03 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c32c5ff3ce97ee65d1053f8653271d46ad8c69a1fa156e366708854380247b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:31:03 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c32c5ff3ce97ee65d1053f8653271d46ad8c69a1fa156e366708854380247b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:31:03 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c32c5ff3ce97ee65d1053f8653271d46ad8c69a1fa156e366708854380247b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:31:03 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c32c5ff3ce97ee65d1053f8653271d46ad8c69a1fa156e366708854380247b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:31:03 np0005593232 podman[261313]: 2026-01-23 09:31:03.660920486 +0000 UTC m=+0.026045760 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:31:03 np0005593232 podman[261313]: 2026-01-23 09:31:03.754729908 +0000 UTC m=+0.119855182 container init e1dc6cbde1ced0d445c055b81b922a6dc515a89882f873bcfaa8941279757117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_cartwright, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:31:03 np0005593232 podman[261313]: 2026-01-23 09:31:03.768391076 +0000 UTC m=+0.133516350 container start e1dc6cbde1ced0d445c055b81b922a6dc515a89882f873bcfaa8941279757117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_cartwright, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:31:03 np0005593232 podman[261313]: 2026-01-23 09:31:03.80448896 +0000 UTC m=+0.169614224 container attach e1dc6cbde1ced0d445c055b81b922a6dc515a89882f873bcfaa8941279757117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 23 04:31:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:03.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:04 np0005593232 nova_compute[250269]: 2026-01-23 09:31:04.259 250273 DEBUG nova.virt.libvirt.driver [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Check if temp file /var/lib/nova/instances/tmpvzlwa358 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Jan 23 04:31:04 np0005593232 nova_compute[250269]: 2026-01-23 09:31:04.261 250273 DEBUG nova.compute.manager [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvzlwa358',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='f62791ad-fc40-451f-b02a-ba991f2dbc32',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Jan 23 04:31:04 np0005593232 nova_compute[250269]: 2026-01-23 09:31:04.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:31:04 np0005593232 nova_compute[250269]: 2026-01-23 09:31:04.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:31:04 np0005593232 romantic_cartwright[261330]: {
Jan 23 04:31:04 np0005593232 romantic_cartwright[261330]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:31:04 np0005593232 romantic_cartwright[261330]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:31:04 np0005593232 romantic_cartwright[261330]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:31:04 np0005593232 romantic_cartwright[261330]:        "osd_id": 0,
Jan 23 04:31:04 np0005593232 romantic_cartwright[261330]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:31:04 np0005593232 romantic_cartwright[261330]:        "type": "bluestore"
Jan 23 04:31:04 np0005593232 romantic_cartwright[261330]:    }
Jan 23 04:31:04 np0005593232 romantic_cartwright[261330]: }
Jan 23 04:31:04 np0005593232 systemd[1]: libpod-e1dc6cbde1ced0d445c055b81b922a6dc515a89882f873bcfaa8941279757117.scope: Deactivated successfully.
Jan 23 04:31:04 np0005593232 podman[261313]: 2026-01-23 09:31:04.680288198 +0000 UTC m=+1.045413452 container died e1dc6cbde1ced0d445c055b81b922a6dc515a89882f873bcfaa8941279757117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_cartwright, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 04:31:04 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4c32c5ff3ce97ee65d1053f8653271d46ad8c69a1fa156e366708854380247b3-merged.mount: Deactivated successfully.
Jan 23 04:31:04 np0005593232 podman[261313]: 2026-01-23 09:31:04.744185531 +0000 UTC m=+1.109310775 container remove e1dc6cbde1ced0d445c055b81b922a6dc515a89882f873bcfaa8941279757117 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_cartwright, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 04:31:04 np0005593232 systemd[1]: libpod-conmon-e1dc6cbde1ced0d445c055b81b922a6dc515a89882f873bcfaa8941279757117.scope: Deactivated successfully.
Jan 23 04:31:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:31:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:31:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:31:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:31:04 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 8d667026-b067-49a4-aecc-1234a9ca1aff does not exist
Jan 23 04:31:04 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c641821a-b59e-440c-bf17-e4709db2c015 does not exist
Jan 23 04:31:04 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 72816df1-ead8-4e55-a9be-d6558fb108e1 does not exist
Jan 23 04:31:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1084: 321 pgs: 321 active+clean; 202 MiB data, 329 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.1 MiB/s wr, 150 op/s
Jan 23 04:31:05 np0005593232 nova_compute[250269]: 2026-01-23 09:31:05.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:31:05 np0005593232 nova_compute[250269]: 2026-01-23 09:31:05.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:31:05 np0005593232 nova_compute[250269]: 2026-01-23 09:31:05.348 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:31:05 np0005593232 nova_compute[250269]: 2026-01-23 09:31:05.349 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:31:05 np0005593232 nova_compute[250269]: 2026-01-23 09:31:05.349 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:31:05 np0005593232 nova_compute[250269]: 2026-01-23 09:31:05.349 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:31:05 np0005593232 nova_compute[250269]: 2026-01-23 09:31:05.350 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:31:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:31:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:05.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:31:05 np0005593232 nova_compute[250269]: 2026-01-23 09:31:05.478 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:31:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3943747428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:31:05 np0005593232 nova_compute[250269]: 2026-01-23 09:31:05.791 250273 DEBUG oslo_concurrency.lockutils [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:31:05 np0005593232 nova_compute[250269]: 2026-01-23 09:31:05.792 250273 DEBUG oslo_concurrency.lockutils [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:31:05 np0005593232 nova_compute[250269]: 2026-01-23 09:31:05.799 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:31:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:31:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:31:05 np0005593232 nova_compute[250269]: 2026-01-23 09:31:05.804 250273 INFO nova.compute.rpcapi [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66#033[00m
Jan 23 04:31:05 np0005593232 nova_compute[250269]: 2026-01-23 09:31:05.805 250273 DEBUG oslo_concurrency.lockutils [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:31:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:05.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:06 np0005593232 nova_compute[250269]: 2026-01-23 09:31:06.220 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:31:06 np0005593232 nova_compute[250269]: 2026-01-23 09:31:06.220 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000009 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:31:06 np0005593232 nova_compute[250269]: 2026-01-23 09:31:06.386 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:31:06 np0005593232 nova_compute[250269]: 2026-01-23 09:31:06.388 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4684MB free_disk=20.915721893310547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:31:06 np0005593232 nova_compute[250269]: 2026-01-23 09:31:06.388 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:31:06 np0005593232 nova_compute[250269]: 2026-01-23 09:31:06.388 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:31:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1085: 321 pgs: 321 active+clean; 213 MiB data, 338 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 191 op/s
Jan 23 04:31:06 np0005593232 nova_compute[250269]: 2026-01-23 09:31:06.921 250273 INFO nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Updating resource usage from migration c757740f-0e06-4982-9054-d0a23d1c3501#033[00m
Jan 23 04:31:06 np0005593232 nova_compute[250269]: 2026-01-23 09:31:06.969 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Migration c757740f-0e06-4982-9054-d0a23d1c3501 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 23 04:31:06 np0005593232 nova_compute[250269]: 2026-01-23 09:31:06.969 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:31:06 np0005593232 nova_compute[250269]: 2026-01-23 09:31:06.969 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:31:07 np0005593232 nova_compute[250269]: 2026-01-23 09:31:07.018 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:31:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:31:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:31:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:31:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:31:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:31:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:31:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:07.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:31:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1952391256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:31:07 np0005593232 nova_compute[250269]: 2026-01-23 09:31:07.461 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:31:07 np0005593232 nova_compute[250269]: 2026-01-23 09:31:07.467 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:31:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:31:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:07.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:31:08 np0005593232 nova_compute[250269]: 2026-01-23 09:31:08.233 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:31:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:08.700 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:31:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1086: 321 pgs: 321 active+clean; 213 MiB data, 344 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.2 MiB/s wr, 212 op/s
Jan 23 04:31:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:31:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:09.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:31:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:09.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:09 np0005593232 nova_compute[250269]: 2026-01-23 09:31:09.947 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:31:10 np0005593232 nova_compute[250269]: 2026-01-23 09:31:10.462 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:31:10 np0005593232 nova_compute[250269]: 2026-01-23 09:31:10.463 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:31:10 np0005593232 nova_compute[250269]: 2026-01-23 09:31:10.480 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1087: 321 pgs: 321 active+clean; 246 MiB data, 363 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.0 MiB/s wr, 230 op/s
Jan 23 04:31:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:31:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:11.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:31:11 np0005593232 ovn_controller[151001]: 2026-01-23T09:31:11Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d9:aa:f3 10.100.0.10
Jan 23 04:31:11 np0005593232 ovn_controller[151001]: 2026-01-23T09:31:11Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d9:aa:f3 10.100.0.10
Jan 23 04:31:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:31:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:11.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:31:12 np0005593232 nova_compute[250269]: 2026-01-23 09:31:12.676 250273 DEBUG nova.compute.manager [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Jan 23 04:31:12 np0005593232 nova_compute[250269]: 2026-01-23 09:31:12.895 250273 DEBUG nova.compute.manager [req-f7759c6d-c0bf-4294-af54-121df7585cea req-9ab15cf4-ae3b-4091-86d0-c64a97b55e87 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Received event network-vif-unplugged-857f8a0c-0bda-43ca-85aa-7f22568eddc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:31:12 np0005593232 nova_compute[250269]: 2026-01-23 09:31:12.895 250273 DEBUG oslo_concurrency.lockutils [req-f7759c6d-c0bf-4294-af54-121df7585cea req-9ab15cf4-ae3b-4091-86d0-c64a97b55e87 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:31:12 np0005593232 nova_compute[250269]: 2026-01-23 09:31:12.896 250273 DEBUG oslo_concurrency.lockutils [req-f7759c6d-c0bf-4294-af54-121df7585cea req-9ab15cf4-ae3b-4091-86d0-c64a97b55e87 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:31:12 np0005593232 nova_compute[250269]: 2026-01-23 09:31:12.896 250273 DEBUG oslo_concurrency.lockutils [req-f7759c6d-c0bf-4294-af54-121df7585cea req-9ab15cf4-ae3b-4091-86d0-c64a97b55e87 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:31:12 np0005593232 nova_compute[250269]: 2026-01-23 09:31:12.896 250273 DEBUG nova.compute.manager [req-f7759c6d-c0bf-4294-af54-121df7585cea req-9ab15cf4-ae3b-4091-86d0-c64a97b55e87 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] No waiting events found dispatching network-vif-unplugged-857f8a0c-0bda-43ca-85aa-7f22568eddc7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:31:12 np0005593232 nova_compute[250269]: 2026-01-23 09:31:12.896 250273 DEBUG nova.compute.manager [req-f7759c6d-c0bf-4294-af54-121df7585cea req-9ab15cf4-ae3b-4091-86d0-c64a97b55e87 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Received event network-vif-unplugged-857f8a0c-0bda-43ca-85aa-7f22568eddc7 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 04:31:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1088: 321 pgs: 321 active+clean; 277 MiB data, 372 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 5.1 MiB/s wr, 239 op/s
Jan 23 04:31:12 np0005593232 nova_compute[250269]: 2026-01-23 09:31:12.956 250273 DEBUG oslo_concurrency.lockutils [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:31:12 np0005593232 nova_compute[250269]: 2026-01-23 09:31:12.959 250273 DEBUG oslo_concurrency.lockutils [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:31:12 np0005593232 nova_compute[250269]: 2026-01-23 09:31:12.990 250273 DEBUG nova.objects.instance [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Lazy-loading 'pci_requests' on Instance uuid f2d1fdc0-baaf-4566-8655-aafdbcf1f473 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:31:13 np0005593232 nova_compute[250269]: 2026-01-23 09:31:13.013 250273 DEBUG nova.virt.hardware [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:31:13 np0005593232 nova_compute[250269]: 2026-01-23 09:31:13.013 250273 INFO nova.compute.claims [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:31:13 np0005593232 nova_compute[250269]: 2026-01-23 09:31:13.014 250273 DEBUG nova.objects.instance [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Lazy-loading 'resources' on Instance uuid f2d1fdc0-baaf-4566-8655-aafdbcf1f473 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:31:13 np0005593232 nova_compute[250269]: 2026-01-23 09:31:13.036 250273 DEBUG nova.objects.instance [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Lazy-loading 'pci_devices' on Instance uuid f2d1fdc0-baaf-4566-8655-aafdbcf1f473 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:31:13 np0005593232 nova_compute[250269]: 2026-01-23 09:31:13.105 250273 INFO nova.compute.resource_tracker [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Updating resource usage from migration cc07d23d-e40d-4b7a-98b0-a8f2611399a1#033[00m
Jan 23 04:31:13 np0005593232 nova_compute[250269]: 2026-01-23 09:31:13.106 250273 DEBUG nova.compute.resource_tracker [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Starting to track incoming migration cc07d23d-e40d-4b7a-98b0-a8f2611399a1 with flavor eebea5f8-9b11-45ad-873d-c4ea90d3de87 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Jan 23 04:31:13 np0005593232 nova_compute[250269]: 2026-01-23 09:31:13.227 250273 DEBUG oslo_concurrency.processutils [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:31:13 np0005593232 nova_compute[250269]: 2026-01-23 09:31:13.247 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:13.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:31:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:31:13 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/625346252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:31:13 np0005593232 nova_compute[250269]: 2026-01-23 09:31:13.669 250273 DEBUG oslo_concurrency.processutils [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:31:13 np0005593232 nova_compute[250269]: 2026-01-23 09:31:13.675 250273 DEBUG nova.compute.provider_tree [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:31:13 np0005593232 nova_compute[250269]: 2026-01-23 09:31:13.708 250273 DEBUG nova.scheduler.client.report [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:31:13 np0005593232 nova_compute[250269]: 2026-01-23 09:31:13.734 250273 DEBUG oslo_concurrency.lockutils [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:31:13 np0005593232 nova_compute[250269]: 2026-01-23 09:31:13.735 250273 INFO nova.compute.manager [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Migrating#033[00m
Jan 23 04:31:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:31:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:13.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:31:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1089: 321 pgs: 321 active+clean; 290 MiB data, 382 MiB used, 21 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.3 MiB/s wr, 223 op/s
Jan 23 04:31:15 np0005593232 nova_compute[250269]: 2026-01-23 09:31:15.170 250273 DEBUG nova.compute.manager [req-d2700604-3203-4c94-8dc1-af0f2194dbf6 req-6f1784bd-8b33-4e27-8ddd-05bcd0ecf721 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Received event network-vif-plugged-857f8a0c-0bda-43ca-85aa-7f22568eddc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:31:15 np0005593232 nova_compute[250269]: 2026-01-23 09:31:15.171 250273 DEBUG oslo_concurrency.lockutils [req-d2700604-3203-4c94-8dc1-af0f2194dbf6 req-6f1784bd-8b33-4e27-8ddd-05bcd0ecf721 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:31:15 np0005593232 nova_compute[250269]: 2026-01-23 09:31:15.171 250273 DEBUG oslo_concurrency.lockutils [req-d2700604-3203-4c94-8dc1-af0f2194dbf6 req-6f1784bd-8b33-4e27-8ddd-05bcd0ecf721 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:31:15 np0005593232 nova_compute[250269]: 2026-01-23 09:31:15.171 250273 DEBUG oslo_concurrency.lockutils [req-d2700604-3203-4c94-8dc1-af0f2194dbf6 req-6f1784bd-8b33-4e27-8ddd-05bcd0ecf721 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:31:15 np0005593232 nova_compute[250269]: 2026-01-23 09:31:15.172 250273 DEBUG nova.compute.manager [req-d2700604-3203-4c94-8dc1-af0f2194dbf6 req-6f1784bd-8b33-4e27-8ddd-05bcd0ecf721 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] No waiting events found dispatching network-vif-plugged-857f8a0c-0bda-43ca-85aa-7f22568eddc7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:31:15 np0005593232 nova_compute[250269]: 2026-01-23 09:31:15.172 250273 WARNING nova.compute.manager [req-d2700604-3203-4c94-8dc1-af0f2194dbf6 req-6f1784bd-8b33-4e27-8ddd-05bcd0ecf721 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Received unexpected event network-vif-plugged-857f8a0c-0bda-43ca-85aa-7f22568eddc7 for instance with vm_state active and task_state migrating.#033[00m
Jan 23 04:31:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:31:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:15.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:31:15 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 04:31:15 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 04:31:15 np0005593232 nova_compute[250269]: 2026-01-23 09:31:15.484 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:15 np0005593232 nova_compute[250269]: 2026-01-23 09:31:15.499 250273 INFO nova.compute.manager [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Took 9.71 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.#033[00m
Jan 23 04:31:15 np0005593232 nova_compute[250269]: 2026-01-23 09:31:15.499 250273 DEBUG nova.compute.manager [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:31:15 np0005593232 nova_compute[250269]: 2026-01-23 09:31:15.560 250273 DEBUG nova.compute.manager [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpvzlwa358',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='f62791ad-fc40-451f-b02a-ba991f2dbc32',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(c757740f-0e06-4982-9054-d0a23d1c3501),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Jan 23 04:31:15 np0005593232 nova_compute[250269]: 2026-01-23 09:31:15.563 250273 DEBUG nova.objects.instance [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Lazy-loading 'migration_context' on Instance uuid f62791ad-fc40-451f-b02a-ba991f2dbc32 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:31:15 np0005593232 nova_compute[250269]: 2026-01-23 09:31:15.565 250273 DEBUG nova.virt.libvirt.driver [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Jan 23 04:31:15 np0005593232 nova_compute[250269]: 2026-01-23 09:31:15.566 250273 DEBUG nova.virt.libvirt.driver [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Jan 23 04:31:15 np0005593232 nova_compute[250269]: 2026-01-23 09:31:15.566 250273 DEBUG nova.virt.libvirt.driver [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Jan 23 04:31:15 np0005593232 nova_compute[250269]: 2026-01-23 09:31:15.590 250273 DEBUG nova.virt.libvirt.vif [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:30:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1280958077',display_name='tempest-LiveMigrationTest-server-1280958077',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1280958077',id=9,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c56e53b3339e4e4db30b7a9d330bc380',ramdisk_id='',reservation_id='r-xkzmzoa6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',im
age_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1903931568',owner_user_name='tempest-LiveMigrationTest-1903931568-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:30:58Z,user_data=None,user_id='a43b680a6019491aafe42c0a10e648df',uuid=f62791ad-fc40-451f-b02a-ba991f2dbc32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "address": "fa:16:3e:d9:aa:f3", "network": {"id": "385e7a4d-f87e-44c5-9fc0-5a322eecd4b4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1143816535-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e53b3339e4e4db30b7a9d330bc380", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap857f8a0c-0b", "ovs_interfaceid": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:31:15 np0005593232 nova_compute[250269]: 2026-01-23 09:31:15.590 250273 DEBUG nova.network.os_vif_util [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Converting VIF {"id": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "address": "fa:16:3e:d9:aa:f3", "network": {"id": "385e7a4d-f87e-44c5-9fc0-5a322eecd4b4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1143816535-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e53b3339e4e4db30b7a9d330bc380", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap857f8a0c-0b", "ovs_interfaceid": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:31:15 np0005593232 nova_compute[250269]: 2026-01-23 09:31:15.591 250273 DEBUG nova.network.os_vif_util [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:aa:f3,bridge_name='br-int',has_traffic_filtering=True,id=857f8a0c-0bda-43ca-85aa-7f22568eddc7,network=Network(385e7a4d-f87e-44c5-9fc0-5a322eecd4b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap857f8a0c-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:31:15 np0005593232 nova_compute[250269]: 2026-01-23 09:31:15.591 250273 DEBUG nova.virt.libvirt.migration [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Updating guest XML with vif config: <interface type="ethernet">
Jan 23 04:31:15 np0005593232 nova_compute[250269]:  <mac address="fa:16:3e:d9:aa:f3"/>
Jan 23 04:31:15 np0005593232 nova_compute[250269]:  <model type="virtio"/>
Jan 23 04:31:15 np0005593232 nova_compute[250269]:  <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:31:15 np0005593232 nova_compute[250269]:  <mtu size="1442"/>
Jan 23 04:31:15 np0005593232 nova_compute[250269]:  <target dev="tap857f8a0c-0b"/>
Jan 23 04:31:15 np0005593232 nova_compute[250269]: </interface>
Jan 23 04:31:15 np0005593232 nova_compute[250269]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Jan 23 04:31:15 np0005593232 nova_compute[250269]: 2026-01-23 09:31:15.592 250273 DEBUG nova.virt.libvirt.driver [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Jan 23 04:31:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:15.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:15 np0005593232 systemd-logind[808]: New session 52 of user nova.
Jan 23 04:31:15 np0005593232 systemd[1]: Created slice User Slice of UID 42436.
Jan 23 04:31:15 np0005593232 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 23 04:31:15 np0005593232 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 23 04:31:15 np0005593232 systemd[1]: Starting User Manager for UID 42436...
Jan 23 04:31:16 np0005593232 systemd[261547]: Queued start job for default target Main User Target.
Jan 23 04:31:16 np0005593232 nova_compute[250269]: 2026-01-23 09:31:16.069 250273 DEBUG nova.virt.libvirt.migration [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 23 04:31:16 np0005593232 nova_compute[250269]: 2026-01-23 09:31:16.070 250273 INFO nova.virt.libvirt.migration [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Jan 23 04:31:16 np0005593232 systemd[261547]: Created slice User Application Slice.
Jan 23 04:31:16 np0005593232 systemd[261547]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 23 04:31:16 np0005593232 systemd[261547]: Started Daily Cleanup of User's Temporary Directories.
Jan 23 04:31:16 np0005593232 systemd[261547]: Reached target Paths.
Jan 23 04:31:16 np0005593232 systemd[261547]: Reached target Timers.
Jan 23 04:31:16 np0005593232 systemd[261547]: Starting D-Bus User Message Bus Socket...
Jan 23 04:31:16 np0005593232 systemd[261547]: Starting Create User's Volatile Files and Directories...
Jan 23 04:31:16 np0005593232 systemd[261547]: Listening on D-Bus User Message Bus Socket.
Jan 23 04:31:16 np0005593232 systemd[261547]: Finished Create User's Volatile Files and Directories.
Jan 23 04:31:16 np0005593232 systemd[261547]: Reached target Sockets.
Jan 23 04:31:16 np0005593232 systemd[261547]: Reached target Basic System.
Jan 23 04:31:16 np0005593232 systemd[261547]: Reached target Main User Target.
Jan 23 04:31:16 np0005593232 systemd[261547]: Startup finished in 150ms.
Jan 23 04:31:16 np0005593232 systemd[1]: Started User Manager for UID 42436.
Jan 23 04:31:16 np0005593232 systemd[1]: Started Session 52 of User nova.
Jan 23 04:31:16 np0005593232 systemd[1]: session-52.scope: Deactivated successfully.
Jan 23 04:31:16 np0005593232 systemd-logind[808]: Session 52 logged out. Waiting for processes to exit.
Jan 23 04:31:16 np0005593232 systemd-logind[808]: Removed session 52.
Jan 23 04:31:16 np0005593232 nova_compute[250269]: 2026-01-23 09:31:16.229 250273 INFO nova.virt.libvirt.driver [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Jan 23 04:31:16 np0005593232 systemd-logind[808]: New session 54 of user nova.
Jan 23 04:31:16 np0005593232 systemd[1]: Started Session 54 of User nova.
Jan 23 04:31:16 np0005593232 systemd[1]: session-54.scope: Deactivated successfully.
Jan 23 04:31:16 np0005593232 systemd-logind[808]: Session 54 logged out. Waiting for processes to exit.
Jan 23 04:31:16 np0005593232 systemd-logind[808]: Removed session 54.
Jan 23 04:31:16 np0005593232 nova_compute[250269]: 2026-01-23 09:31:16.735 250273 DEBUG nova.virt.libvirt.migration [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 23 04:31:16 np0005593232 nova_compute[250269]: 2026-01-23 09:31:16.736 250273 DEBUG nova.virt.libvirt.migration [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Jan 23 04:31:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 04:31:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 2706 syncs, 3.90 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2183 writes, 6958 keys, 2183 commit groups, 1.0 writes per commit group, ingest: 6.80 MB, 0.01 MB/s#012Interval WAL: 2183 writes, 903 syncs, 2.42 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 04:31:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1090: 321 pgs: 321 active+clean; 291 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.6 MiB/s wr, 211 op/s
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.175 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160677.1753495, f62791ad-fc40-451f-b02a-ba991f2dbc32 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.176 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.217 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.221 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.238 250273 DEBUG nova.virt.libvirt.migration [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.238 250273 DEBUG nova.virt.libvirt.migration [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.271 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.351 250273 DEBUG nova.compute.manager [req-0bac02da-c2ee-4dde-be23-a20cabe9b41f req-45026248-ce20-4e7d-8f80-1fdcc454cc08 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Received event network-changed-857f8a0c-0bda-43ca-85aa-7f22568eddc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.352 250273 DEBUG nova.compute.manager [req-0bac02da-c2ee-4dde-be23-a20cabe9b41f req-45026248-ce20-4e7d-8f80-1fdcc454cc08 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Refreshing instance network info cache due to event network-changed-857f8a0c-0bda-43ca-85aa-7f22568eddc7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.352 250273 DEBUG oslo_concurrency.lockutils [req-0bac02da-c2ee-4dde-be23-a20cabe9b41f req-45026248-ce20-4e7d-8f80-1fdcc454cc08 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-f62791ad-fc40-451f-b02a-ba991f2dbc32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.352 250273 DEBUG oslo_concurrency.lockutils [req-0bac02da-c2ee-4dde-be23-a20cabe9b41f req-45026248-ce20-4e7d-8f80-1fdcc454cc08 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-f62791ad-fc40-451f-b02a-ba991f2dbc32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.352 250273 DEBUG nova.network.neutron [req-0bac02da-c2ee-4dde-be23-a20cabe9b41f req-45026248-ce20-4e7d-8f80-1fdcc454cc08 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Refreshing network info cache for port 857f8a0c-0bda-43ca-85aa-7f22568eddc7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:31:17 np0005593232 kernel: tap857f8a0c-0b (unregistering): left promiscuous mode
Jan 23 04:31:17 np0005593232 NetworkManager[49057]: <info>  [1769160677.3609] device (tap857f8a0c-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:31:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:31:17Z|00045|binding|INFO|Releasing lport 857f8a0c-0bda-43ca-85aa-7f22568eddc7 from this chassis (sb_readonly=0)
Jan 23 04:31:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:31:17Z|00046|binding|INFO|Setting lport 857f8a0c-0bda-43ca-85aa-7f22568eddc7 down in Southbound
Jan 23 04:31:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:31:17Z|00047|binding|INFO|Releasing lport 7dc28ada-b6f3-4524-9e75-42c4d4604d63 from this chassis (sb_readonly=0)
Jan 23 04:31:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:31:17Z|00048|binding|INFO|Setting lport 7dc28ada-b6f3-4524-9e75-42c4d4604d63 down in Southbound
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.373 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:31:17Z|00049|binding|INFO|Removing iface tap857f8a0c-0b ovn-installed in OVS
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.382 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:31:17Z|00050|binding|INFO|Releasing lport 8e19ba82-19a8-44be-8cf0-66f5e53af8a2 from this chassis (sb_readonly=0)
Jan 23 04:31:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:31:17Z|00051|binding|INFO|Releasing lport 7b93c40e-1f44-4d5a-9bad-e23468f98d69 from this chassis (sb_readonly=0)
Jan 23 04:31:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:17.389 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:aa:f3 10.100.0.10'], port_security=['fa:16:3e:d9:aa:f3 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '539cfa5a-1c2f-4cb4-97af-2edb819f72fc'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-412021528', 'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f62791ad-fc40-451f-b02a-ba991f2dbc32', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-412021528', 'neutron:project_id': 'c56e53b3339e4e4db30b7a9d330bc380', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'c0c0e09a-b9c3-4a3a-af9e-c3b66e9f8bc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cabb3d88-013b-4542-b789-52d49c567d53, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=857f8a0c-0bda-43ca-85aa-7f22568eddc7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:31:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:17.391 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:1d:32 19.80.0.19'], port_security=['fa:16:3e:4b:1d:32 19.80.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['857f8a0c-0bda-43ca-85aa-7f22568eddc7'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1183347964', 'neutron:cidrs': '19.80.0.19/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48c9624b-33de-47f9-a720-02dd9028b5ea', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1183347964', 'neutron:project_id': 'c56e53b3339e4e4db30b7a9d330bc380', 'neutron:revision_number': '3', 'neutron:security_group_ids': 'c0c0e09a-b9c3-4a3a-af9e-c3b66e9f8bc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=7ac2005c-13d2-4227-8eb4-3d332da8f5d6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=7dc28ada-b6f3-4524-9e75-42c4d4604d63) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:31:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:17.393 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 857f8a0c-0bda-43ca-85aa-7f22568eddc7 in datapath 385e7a4d-f87e-44c5-9fc0-5a322eecd4b4 unbound from our chassis#033[00m
Jan 23 04:31:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:17.394 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 385e7a4d-f87e-44c5-9fc0-5a322eecd4b4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 04:31:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:31:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:17.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:31:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:17.397 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[67c7147e-884c-432c-9b4a-dfeba4224c9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:17.399 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4 namespace which is not needed anymore#033[00m
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.413 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:17 np0005593232 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000009.scope: Deactivated successfully.
Jan 23 04:31:17 np0005593232 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000009.scope: Consumed 14.318s CPU time.
Jan 23 04:31:17 np0005593232 systemd-machined[215836]: Machine qemu-3-instance-00000009 terminated.
Jan 23 04:31:17 np0005593232 virtqemud[249592]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/f62791ad-fc40-451f-b02a-ba991f2dbc32_disk: No such file or directory
Jan 23 04:31:17 np0005593232 virtqemud[249592]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/f62791ad-fc40-451f-b02a-ba991f2dbc32_disk: No such file or directory
Jan 23 04:31:17 np0005593232 podman[261575]: 2026-01-23 09:31:17.535799897 +0000 UTC m=+0.196710763 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.585 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.586 250273 DEBUG nova.virt.libvirt.driver [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.586 250273 DEBUG nova.virt.libvirt.driver [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.586 250273 DEBUG nova.virt.libvirt.driver [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Jan 23 04:31:17 np0005593232 neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4[260996]: [NOTICE]   (261006) : haproxy version is 2.8.14-c23fe91
Jan 23 04:31:17 np0005593232 neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4[260996]: [NOTICE]   (261006) : path to executable is /usr/sbin/haproxy
Jan 23 04:31:17 np0005593232 neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4[260996]: [WARNING]  (261006) : Exiting Master process...
Jan 23 04:31:17 np0005593232 neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4[260996]: [WARNING]  (261006) : Exiting Master process...
Jan 23 04:31:17 np0005593232 neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4[260996]: [ALERT]    (261006) : Current worker (261009) exited with code 143 (Terminated)
Jan 23 04:31:17 np0005593232 neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4[260996]: [WARNING]  (261006) : All workers exited. Exiting... (0)
Jan 23 04:31:17 np0005593232 systemd[1]: libpod-e85a8c582c8b355e088ed1f2b341d0f1625d3824c8a0ca90752cbfb9b1d2f6f8.scope: Deactivated successfully.
Jan 23 04:31:17 np0005593232 podman[261625]: 2026-01-23 09:31:17.629056902 +0000 UTC m=+0.044500443 container died e85a8c582c8b355e088ed1f2b341d0f1625d3824c8a0ca90752cbfb9b1d2f6f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 23 04:31:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e85a8c582c8b355e088ed1f2b341d0f1625d3824c8a0ca90752cbfb9b1d2f6f8-userdata-shm.mount: Deactivated successfully.
Jan 23 04:31:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay-39516766458a59d235390c83fa6296973b3cdef1627bb40f5761926c537de8a8-merged.mount: Deactivated successfully.
Jan 23 04:31:17 np0005593232 podman[261625]: 2026-01-23 09:31:17.687356636 +0000 UTC m=+0.102800167 container cleanup e85a8c582c8b355e088ed1f2b341d0f1625d3824c8a0ca90752cbfb9b1d2f6f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 23 04:31:17 np0005593232 systemd[1]: libpod-conmon-e85a8c582c8b355e088ed1f2b341d0f1625d3824c8a0ca90752cbfb9b1d2f6f8.scope: Deactivated successfully.
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.741 250273 DEBUG nova.virt.libvirt.guest [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid 'f62791ad-fc40-451f-b02a-ba991f2dbc32' (instance-00000009) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.742 250273 INFO nova.virt.libvirt.driver [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Migration operation has completed#033[00m
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.742 250273 INFO nova.compute.manager [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] _post_live_migration() is started..#033[00m
Jan 23 04:31:17 np0005593232 podman[261660]: 2026-01-23 09:31:17.801292819 +0000 UTC m=+0.094841272 container remove e85a8c582c8b355e088ed1f2b341d0f1625d3824c8a0ca90752cbfb9b1d2f6f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 04:31:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:17.807 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c6abdbc8-178e-47c1-961a-ffd11e9badf3]: (4, ('Fri Jan 23 09:31:17 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4 (e85a8c582c8b355e088ed1f2b341d0f1625d3824c8a0ca90752cbfb9b1d2f6f8)\ne85a8c582c8b355e088ed1f2b341d0f1625d3824c8a0ca90752cbfb9b1d2f6f8\nFri Jan 23 09:31:17 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4 (e85a8c582c8b355e088ed1f2b341d0f1625d3824c8a0ca90752cbfb9b1d2f6f8)\ne85a8c582c8b355e088ed1f2b341d0f1625d3824c8a0ca90752cbfb9b1d2f6f8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:17.809 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f8936b77-46e4-4ca6-b0ec-c44e5ba481e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:17.810 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap385e7a4d-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.811 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:17 np0005593232 kernel: tap385e7a4d-f0: left promiscuous mode
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.829 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:17 np0005593232 nova_compute[250269]: 2026-01-23 09:31:17.830 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:17.832 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9071bcc8-331f-47e8-bfcf-fe76d8a1c104]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:17.851 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5fceae-8268-40f8-b6a6-1ec62ca0308e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:17.852 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[602f6210-baf0-4f4c-88aa-6370921b6c94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:31:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:17.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:31:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:17.866 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[565a0a97-316d-4c33-a113-e91d6f29041d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 457084, 'reachable_time': 24682, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261679, 'error': None, 'target': 'ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:17 np0005593232 systemd[1]: run-netns-ovnmeta\x2d385e7a4d\x2df87e\x2d44c5\x2d9fc0\x2d5a322eecd4b4.mount: Deactivated successfully.
Jan 23 04:31:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:17.870 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 04:31:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:17.870 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[b9725d08-e99f-42c1-b59f-04e4c4bd0bf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:17.872 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 7dc28ada-b6f3-4524-9e75-42c4d4604d63 in datapath 48c9624b-33de-47f9-a720-02dd9028b5ea unbound from our chassis#033[00m
Jan 23 04:31:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:17.874 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48c9624b-33de-47f9-a720-02dd9028b5ea, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 04:31:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:17.875 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4ebdaa17-400d-4c16-a4b4-2123f0022696]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:17.875 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48c9624b-33de-47f9-a720-02dd9028b5ea namespace which is not needed anymore#033[00m
Jan 23 04:31:17 np0005593232 neutron-haproxy-ovnmeta-48c9624b-33de-47f9-a720-02dd9028b5ea[261098]: [NOTICE]   (261114) : haproxy version is 2.8.14-c23fe91
Jan 23 04:31:17 np0005593232 neutron-haproxy-ovnmeta-48c9624b-33de-47f9-a720-02dd9028b5ea[261098]: [NOTICE]   (261114) : path to executable is /usr/sbin/haproxy
Jan 23 04:31:17 np0005593232 neutron-haproxy-ovnmeta-48c9624b-33de-47f9-a720-02dd9028b5ea[261098]: [WARNING]  (261114) : Exiting Master process...
Jan 23 04:31:17 np0005593232 neutron-haproxy-ovnmeta-48c9624b-33de-47f9-a720-02dd9028b5ea[261098]: [ALERT]    (261114) : Current worker (261116) exited with code 143 (Terminated)
Jan 23 04:31:17 np0005593232 neutron-haproxy-ovnmeta-48c9624b-33de-47f9-a720-02dd9028b5ea[261098]: [WARNING]  (261114) : All workers exited. Exiting... (0)
Jan 23 04:31:17 np0005593232 systemd[1]: libpod-87992c0c2b2decccfea69f791a71c114e853dc22395d0dd47d0875a3a09eabdd.scope: Deactivated successfully.
Jan 23 04:31:18 np0005593232 podman[261697]: 2026-01-23 09:31:18.001127079 +0000 UTC m=+0.045793870 container died 87992c0c2b2decccfea69f791a71c114e853dc22395d0dd47d0875a3a09eabdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48c9624b-33de-47f9-a720-02dd9028b5ea, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 04:31:18 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-87992c0c2b2decccfea69f791a71c114e853dc22395d0dd47d0875a3a09eabdd-userdata-shm.mount: Deactivated successfully.
Jan 23 04:31:18 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f5c471c124784cd5923a6d4e2d2f81648745eea5177739486428356cee852e48-merged.mount: Deactivated successfully.
Jan 23 04:31:18 np0005593232 podman[261697]: 2026-01-23 09:31:18.111809999 +0000 UTC m=+0.156476770 container cleanup 87992c0c2b2decccfea69f791a71c114e853dc22395d0dd47d0875a3a09eabdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48c9624b-33de-47f9-a720-02dd9028b5ea, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 04:31:18 np0005593232 systemd[1]: libpod-conmon-87992c0c2b2decccfea69f791a71c114e853dc22395d0dd47d0875a3a09eabdd.scope: Deactivated successfully.
Jan 23 04:31:18 np0005593232 podman[261727]: 2026-01-23 09:31:18.173045187 +0000 UTC m=+0.042022304 container remove 87992c0c2b2decccfea69f791a71c114e853dc22395d0dd47d0875a3a09eabdd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-48c9624b-33de-47f9-a720-02dd9028b5ea, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:31:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:18.178 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2f2d4f09-c73e-4ba1-b3ad-b41262e63876]: (4, ('Fri Jan 23 09:31:17 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-48c9624b-33de-47f9-a720-02dd9028b5ea (87992c0c2b2decccfea69f791a71c114e853dc22395d0dd47d0875a3a09eabdd)\n87992c0c2b2decccfea69f791a71c114e853dc22395d0dd47d0875a3a09eabdd\nFri Jan 23 09:31:18 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-48c9624b-33de-47f9-a720-02dd9028b5ea (87992c0c2b2decccfea69f791a71c114e853dc22395d0dd47d0875a3a09eabdd)\n87992c0c2b2decccfea69f791a71c114e853dc22395d0dd47d0875a3a09eabdd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:18.179 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b257394a-917b-48df-8f57-1561bf8e8189]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:18.180 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48c9624b-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:31:18 np0005593232 kernel: tap48c9624b-30: left promiscuous mode
Jan 23 04:31:18 np0005593232 nova_compute[250269]: 2026-01-23 09:31:18.183 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:18 np0005593232 nova_compute[250269]: 2026-01-23 09:31:18.199 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:18.202 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b24eaac2-ca26-46ec-82a8-4584b5c95c93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:18.219 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f8bc73-fb1b-44e3-a3c7-5876d6469c01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:18.220 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[29ff3207-6858-4b0d-88df-4776b751d9e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:18.235 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0f05ccbf-9931-4564-9c1a-e7bd3ac82a57]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 457499, 'reachable_time': 21115, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261746, 'error': None, 'target': 'ovnmeta-48c9624b-33de-47f9-a720-02dd9028b5ea', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:18 np0005593232 nova_compute[250269]: 2026-01-23 09:31:18.236 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:18.239 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48c9624b-33de-47f9-a720-02dd9028b5ea deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 04:31:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:18.239 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[032eb035-2d8e-4d95-ae2a-a26b278f97da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:31:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:31:18 np0005593232 systemd[1]: run-netns-ovnmeta\x2d48c9624b\x2d33de\x2d47f9\x2da720\x2d02dd9028b5ea.mount: Deactivated successfully.
Jan 23 04:31:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1091: 321 pgs: 321 active+clean; 295 MiB data, 391 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.9 MiB/s wr, 184 op/s
Jan 23 04:31:19 np0005593232 nova_compute[250269]: 2026-01-23 09:31:19.149 250273 DEBUG nova.compute.manager [req-5b04b4e3-b543-4f8b-ba98-2e5c8da91a61 req-4fe81bef-9d27-4b5d-b1b6-14a317d95af4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Received event network-vif-unplugged-857f8a0c-0bda-43ca-85aa-7f22568eddc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:31:19 np0005593232 nova_compute[250269]: 2026-01-23 09:31:19.149 250273 DEBUG oslo_concurrency.lockutils [req-5b04b4e3-b543-4f8b-ba98-2e5c8da91a61 req-4fe81bef-9d27-4b5d-b1b6-14a317d95af4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:31:19 np0005593232 nova_compute[250269]: 2026-01-23 09:31:19.150 250273 DEBUG oslo_concurrency.lockutils [req-5b04b4e3-b543-4f8b-ba98-2e5c8da91a61 req-4fe81bef-9d27-4b5d-b1b6-14a317d95af4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:31:19 np0005593232 nova_compute[250269]: 2026-01-23 09:31:19.150 250273 DEBUG oslo_concurrency.lockutils [req-5b04b4e3-b543-4f8b-ba98-2e5c8da91a61 req-4fe81bef-9d27-4b5d-b1b6-14a317d95af4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:31:19 np0005593232 nova_compute[250269]: 2026-01-23 09:31:19.150 250273 DEBUG nova.compute.manager [req-5b04b4e3-b543-4f8b-ba98-2e5c8da91a61 req-4fe81bef-9d27-4b5d-b1b6-14a317d95af4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] No waiting events found dispatching network-vif-unplugged-857f8a0c-0bda-43ca-85aa-7f22568eddc7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:31:19 np0005593232 nova_compute[250269]: 2026-01-23 09:31:19.151 250273 DEBUG nova.compute.manager [req-5b04b4e3-b543-4f8b-ba98-2e5c8da91a61 req-4fe81bef-9d27-4b5d-b1b6-14a317d95af4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Received event network-vif-unplugged-857f8a0c-0bda-43ca-85aa-7f22568eddc7 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 04:31:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:19.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:19.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:20 np0005593232 nova_compute[250269]: 2026-01-23 09:31:20.488 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1092: 321 pgs: 321 active+clean; 269 MiB data, 387 MiB used, 21 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.6 MiB/s wr, 247 op/s
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.180 250273 DEBUG nova.network.neutron [req-0bac02da-c2ee-4dde-be23-a20cabe9b41f req-45026248-ce20-4e7d-8f80-1fdcc454cc08 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Updated VIF entry in instance network info cache for port 857f8a0c-0bda-43ca-85aa-7f22568eddc7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.181 250273 DEBUG nova.network.neutron [req-0bac02da-c2ee-4dde-be23-a20cabe9b41f req-45026248-ce20-4e7d-8f80-1fdcc454cc08 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Updating instance_info_cache with network_info: [{"id": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "address": "fa:16:3e:d9:aa:f3", "network": {"id": "385e7a4d-f87e-44c5-9fc0-5a322eecd4b4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1143816535-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e53b3339e4e4db30b7a9d330bc380", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap857f8a0c-0b", "ovs_interfaceid": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.212 250273 DEBUG oslo_concurrency.lockutils [req-0bac02da-c2ee-4dde-be23-a20cabe9b41f req-45026248-ce20-4e7d-8f80-1fdcc454cc08 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-f62791ad-fc40-451f-b02a-ba991f2dbc32" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:31:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:21.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.403 250273 DEBUG nova.network.neutron [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Activated binding for port 857f8a0c-0bda-43ca-85aa-7f22568eddc7 and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.404 250273 DEBUG nova.compute.manager [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "address": "fa:16:3e:d9:aa:f3", "network": {"id": "385e7a4d-f87e-44c5-9fc0-5a322eecd4b4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1143816535-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e53b3339e4e4db30b7a9d330bc380", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap857f8a0c-0b", "ovs_interfaceid": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.405 250273 DEBUG nova.virt.libvirt.vif [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:30:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1280958077',display_name='tempest-LiveMigrationTest-server-1280958077',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1280958077',id=9,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:30:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c56e53b3339e4e4db30b7a9d330bc380',ramdisk_id='',reservation_id='r-xkzmzoa6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveMigrationTest-1903931568',owner_user_name='tempest-LiveMigrationTest-1903931568-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:31:02Z,user_data=None,user_id='a43b680a6019491aafe42c0a10e648df',uuid=f62791ad-fc40-451f-b02a-ba991f2dbc32,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "address": "fa:16:3e:d9:aa:f3", "network": {"id": "385e7a4d-f87e-44c5-9fc0-5a322eecd4b4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1143816535-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e53b3339e4e4db30b7a9d330bc380", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap857f8a0c-0b", "ovs_interfaceid": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.405 250273 DEBUG nova.network.os_vif_util [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Converting VIF {"id": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "address": "fa:16:3e:d9:aa:f3", "network": {"id": "385e7a4d-f87e-44c5-9fc0-5a322eecd4b4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1143816535-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e53b3339e4e4db30b7a9d330bc380", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap857f8a0c-0b", "ovs_interfaceid": "857f8a0c-0bda-43ca-85aa-7f22568eddc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.405 250273 DEBUG nova.network.os_vif_util [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:aa:f3,bridge_name='br-int',has_traffic_filtering=True,id=857f8a0c-0bda-43ca-85aa-7f22568eddc7,network=Network(385e7a4d-f87e-44c5-9fc0-5a322eecd4b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap857f8a0c-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.406 250273 DEBUG os_vif [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:aa:f3,bridge_name='br-int',has_traffic_filtering=True,id=857f8a0c-0bda-43ca-85aa-7f22568eddc7,network=Network(385e7a4d-f87e-44c5-9fc0-5a322eecd4b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap857f8a0c-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.408 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.408 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap857f8a0c-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.410 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.412 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.415 250273 INFO os_vif [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:aa:f3,bridge_name='br-int',has_traffic_filtering=True,id=857f8a0c-0bda-43ca-85aa-7f22568eddc7,network=Network(385e7a4d-f87e-44c5-9fc0-5a322eecd4b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap857f8a0c-0b')#033[00m
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.416 250273 DEBUG oslo_concurrency.lockutils [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.417 250273 DEBUG oslo_concurrency.lockutils [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.417 250273 DEBUG oslo_concurrency.lockutils [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.417 250273 DEBUG nova.compute.manager [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.418 250273 INFO nova.virt.libvirt.driver [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Deleting instance files /var/lib/nova/instances/f62791ad-fc40-451f-b02a-ba991f2dbc32_del#033[00m
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.418 250273 INFO nova.virt.libvirt.driver [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Deletion of /var/lib/nova/instances/f62791ad-fc40-451f-b02a-ba991f2dbc32_del complete#033[00m
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.632 250273 DEBUG nova.compute.manager [req-0fc9cba9-5932-47f3-9b0d-118ebd43e223 req-95f76fef-a385-4f9e-87aa-e467054c75db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Received event network-vif-plugged-857f8a0c-0bda-43ca-85aa-7f22568eddc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.633 250273 DEBUG oslo_concurrency.lockutils [req-0fc9cba9-5932-47f3-9b0d-118ebd43e223 req-95f76fef-a385-4f9e-87aa-e467054c75db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.633 250273 DEBUG oslo_concurrency.lockutils [req-0fc9cba9-5932-47f3-9b0d-118ebd43e223 req-95f76fef-a385-4f9e-87aa-e467054c75db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.633 250273 DEBUG oslo_concurrency.lockutils [req-0fc9cba9-5932-47f3-9b0d-118ebd43e223 req-95f76fef-a385-4f9e-87aa-e467054c75db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.633 250273 DEBUG nova.compute.manager [req-0fc9cba9-5932-47f3-9b0d-118ebd43e223 req-95f76fef-a385-4f9e-87aa-e467054c75db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] No waiting events found dispatching network-vif-plugged-857f8a0c-0bda-43ca-85aa-7f22568eddc7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:31:21 np0005593232 nova_compute[250269]: 2026-01-23 09:31:21.633 250273 WARNING nova.compute.manager [req-0fc9cba9-5932-47f3-9b0d-118ebd43e223 req-95f76fef-a385-4f9e-87aa-e467054c75db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Received unexpected event network-vif-plugged-857f8a0c-0bda-43ca-85aa-7f22568eddc7 for instance with vm_state active and task_state migrating.#033[00m
Jan 23 04:31:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:31:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:21.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:31:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1093: 321 pgs: 321 active+clean; 274 MiB data, 405 MiB used, 21 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.5 MiB/s wr, 252 op/s
Jan 23 04:31:23 np0005593232 nova_compute[250269]: 2026-01-23 09:31:23.238 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:31:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:31:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:23.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:31:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:23.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:23 np0005593232 nova_compute[250269]: 2026-01-23 09:31:23.890 250273 DEBUG nova.compute.manager [req-7a391214-8344-429d-98ae-6aeb94687976 req-fda8c33d-e675-4fc3-8994-51f837808395 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Received event network-vif-plugged-857f8a0c-0bda-43ca-85aa-7f22568eddc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:31:23 np0005593232 nova_compute[250269]: 2026-01-23 09:31:23.890 250273 DEBUG oslo_concurrency.lockutils [req-7a391214-8344-429d-98ae-6aeb94687976 req-fda8c33d-e675-4fc3-8994-51f837808395 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:31:23 np0005593232 nova_compute[250269]: 2026-01-23 09:31:23.890 250273 DEBUG oslo_concurrency.lockutils [req-7a391214-8344-429d-98ae-6aeb94687976 req-fda8c33d-e675-4fc3-8994-51f837808395 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:31:23 np0005593232 nova_compute[250269]: 2026-01-23 09:31:23.891 250273 DEBUG oslo_concurrency.lockutils [req-7a391214-8344-429d-98ae-6aeb94687976 req-fda8c33d-e675-4fc3-8994-51f837808395 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:31:23 np0005593232 nova_compute[250269]: 2026-01-23 09:31:23.891 250273 DEBUG nova.compute.manager [req-7a391214-8344-429d-98ae-6aeb94687976 req-fda8c33d-e675-4fc3-8994-51f837808395 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] No waiting events found dispatching network-vif-plugged-857f8a0c-0bda-43ca-85aa-7f22568eddc7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:31:23 np0005593232 nova_compute[250269]: 2026-01-23 09:31:23.891 250273 WARNING nova.compute.manager [req-7a391214-8344-429d-98ae-6aeb94687976 req-fda8c33d-e675-4fc3-8994-51f837808395 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Received unexpected event network-vif-plugged-857f8a0c-0bda-43ca-85aa-7f22568eddc7 for instance with vm_state active and task_state migrating.#033[00m
Jan 23 04:31:23 np0005593232 nova_compute[250269]: 2026-01-23 09:31:23.891 250273 DEBUG nova.compute.manager [req-7a391214-8344-429d-98ae-6aeb94687976 req-fda8c33d-e675-4fc3-8994-51f837808395 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Received event network-vif-plugged-857f8a0c-0bda-43ca-85aa-7f22568eddc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:31:23 np0005593232 nova_compute[250269]: 2026-01-23 09:31:23.891 250273 DEBUG oslo_concurrency.lockutils [req-7a391214-8344-429d-98ae-6aeb94687976 req-fda8c33d-e675-4fc3-8994-51f837808395 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:31:23 np0005593232 nova_compute[250269]: 2026-01-23 09:31:23.892 250273 DEBUG oslo_concurrency.lockutils [req-7a391214-8344-429d-98ae-6aeb94687976 req-fda8c33d-e675-4fc3-8994-51f837808395 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:31:23 np0005593232 nova_compute[250269]: 2026-01-23 09:31:23.892 250273 DEBUG oslo_concurrency.lockutils [req-7a391214-8344-429d-98ae-6aeb94687976 req-fda8c33d-e675-4fc3-8994-51f837808395 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:31:23 np0005593232 nova_compute[250269]: 2026-01-23 09:31:23.892 250273 DEBUG nova.compute.manager [req-7a391214-8344-429d-98ae-6aeb94687976 req-fda8c33d-e675-4fc3-8994-51f837808395 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] No waiting events found dispatching network-vif-plugged-857f8a0c-0bda-43ca-85aa-7f22568eddc7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:31:23 np0005593232 nova_compute[250269]: 2026-01-23 09:31:23.892 250273 WARNING nova.compute.manager [req-7a391214-8344-429d-98ae-6aeb94687976 req-fda8c33d-e675-4fc3-8994-51f837808395 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Received unexpected event network-vif-plugged-857f8a0c-0bda-43ca-85aa-7f22568eddc7 for instance with vm_state active and task_state migrating.#033[00m
Jan 23 04:31:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1094: 321 pgs: 321 active+clean; 281 MiB data, 399 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.0 MiB/s wr, 203 op/s
Jan 23 04:31:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:25.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:25.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:26 np0005593232 nova_compute[250269]: 2026-01-23 09:31:26.411 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:26 np0005593232 systemd[1]: Stopping User Manager for UID 42436...
Jan 23 04:31:26 np0005593232 systemd[261547]: Activating special unit Exit the Session...
Jan 23 04:31:26 np0005593232 systemd[261547]: Stopped target Main User Target.
Jan 23 04:31:26 np0005593232 systemd[261547]: Stopped target Basic System.
Jan 23 04:31:26 np0005593232 systemd[261547]: Stopped target Paths.
Jan 23 04:31:26 np0005593232 systemd[261547]: Stopped target Sockets.
Jan 23 04:31:26 np0005593232 systemd[261547]: Stopped target Timers.
Jan 23 04:31:26 np0005593232 systemd[261547]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 23 04:31:26 np0005593232 systemd[261547]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 23 04:31:26 np0005593232 systemd[261547]: Closed D-Bus User Message Bus Socket.
Jan 23 04:31:26 np0005593232 systemd[261547]: Stopped Create User's Volatile Files and Directories.
Jan 23 04:31:26 np0005593232 systemd[261547]: Removed slice User Application Slice.
Jan 23 04:31:26 np0005593232 systemd[261547]: Reached target Shutdown.
Jan 23 04:31:26 np0005593232 systemd[261547]: Finished Exit the Session.
Jan 23 04:31:26 np0005593232 systemd[261547]: Reached target Exit the Session.
Jan 23 04:31:26 np0005593232 systemd[1]: user@42436.service: Deactivated successfully.
Jan 23 04:31:26 np0005593232 systemd[1]: Stopped User Manager for UID 42436.
Jan 23 04:31:26 np0005593232 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 23 04:31:26 np0005593232 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 23 04:31:26 np0005593232 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 23 04:31:26 np0005593232 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 23 04:31:26 np0005593232 systemd[1]: Removed slice User Slice of UID 42436.
Jan 23 04:31:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1095: 321 pgs: 321 active+clean; 281 MiB data, 399 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 190 op/s
Jan 23 04:31:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:27.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:27 np0005593232 ceph-mgr[74726]: [devicehealth INFO root] Check health
Jan 23 04:31:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:31:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:27.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:31:28 np0005593232 nova_compute[250269]: 2026-01-23 09:31:28.240 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:31:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1096: 321 pgs: 321 active+clean; 281 MiB data, 399 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 155 op/s
Jan 23 04:31:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.006000171s ======
Jan 23 04:31:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:29.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.006000171s
Jan 23 04:31:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:29.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:30 np0005593232 nova_compute[250269]: 2026-01-23 09:31:30.113 250273 DEBUG oslo_concurrency.lockutils [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Acquiring lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:31:30 np0005593232 nova_compute[250269]: 2026-01-23 09:31:30.113 250273 DEBUG oslo_concurrency.lockutils [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:31:30 np0005593232 nova_compute[250269]: 2026-01-23 09:31:30.113 250273 DEBUG oslo_concurrency.lockutils [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Lock "f62791ad-fc40-451f-b02a-ba991f2dbc32-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:31:30 np0005593232 nova_compute[250269]: 2026-01-23 09:31:30.202 250273 DEBUG oslo_concurrency.lockutils [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:31:30 np0005593232 nova_compute[250269]: 2026-01-23 09:31:30.203 250273 DEBUG oslo_concurrency.lockutils [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:31:30 np0005593232 nova_compute[250269]: 2026-01-23 09:31:30.203 250273 DEBUG oslo_concurrency.lockutils [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:31:30 np0005593232 nova_compute[250269]: 2026-01-23 09:31:30.203 250273 DEBUG nova.compute.resource_tracker [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:31:30 np0005593232 nova_compute[250269]: 2026-01-23 09:31:30.203 250273 DEBUG oslo_concurrency.processutils [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:31:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:31:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/450680387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:31:30 np0005593232 nova_compute[250269]: 2026-01-23 09:31:30.616 250273 DEBUG oslo_concurrency.processutils [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:31:30 np0005593232 nova_compute[250269]: 2026-01-23 09:31:30.811 250273 WARNING nova.virt.libvirt.driver [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:31:30 np0005593232 nova_compute[250269]: 2026-01-23 09:31:30.813 250273 DEBUG nova.compute.resource_tracker [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4845MB free_disk=20.851802825927734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": 
"0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:31:30 np0005593232 nova_compute[250269]: 2026-01-23 09:31:30.813 250273 DEBUG oslo_concurrency.lockutils [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:31:30 np0005593232 nova_compute[250269]: 2026-01-23 09:31:30.814 250273 DEBUG oslo_concurrency.lockutils [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:31:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1097: 321 pgs: 321 active+clean; 281 MiB data, 399 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 146 op/s
Jan 23 04:31:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:31.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:31 np0005593232 nova_compute[250269]: 2026-01-23 09:31:31.413 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:31.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:32 np0005593232 nova_compute[250269]: 2026-01-23 09:31:32.105 250273 DEBUG nova.compute.resource_tracker [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Applying migration context for instance f2d1fdc0-baaf-4566-8655-aafdbcf1f473 as it has an incoming, in-progress migration cc07d23d-e40d-4b7a-98b0-a8f2611399a1. Migration status is post-migrating _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950#033[00m
Jan 23 04:31:32 np0005593232 nova_compute[250269]: 2026-01-23 09:31:32.106 250273 DEBUG nova.compute.resource_tracker [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Migration for instance f62791ad-fc40-451f-b02a-ba991f2dbc32 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 23 04:31:32 np0005593232 nova_compute[250269]: 2026-01-23 09:31:32.112 250273 DEBUG oslo_concurrency.lockutils [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Acquiring lock "refresh_cache-f2d1fdc0-baaf-4566-8655-aafdbcf1f473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:31:32 np0005593232 nova_compute[250269]: 2026-01-23 09:31:32.112 250273 DEBUG oslo_concurrency.lockutils [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Acquired lock "refresh_cache-f2d1fdc0-baaf-4566-8655-aafdbcf1f473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:31:32 np0005593232 nova_compute[250269]: 2026-01-23 09:31:32.113 250273 DEBUG nova.network.neutron [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:31:32 np0005593232 nova_compute[250269]: 2026-01-23 09:31:32.304 250273 DEBUG nova.compute.resource_tracker [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Jan 23 04:31:32 np0005593232 nova_compute[250269]: 2026-01-23 09:31:32.305 250273 INFO nova.compute.resource_tracker [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Updating resource usage from migration cc07d23d-e40d-4b7a-98b0-a8f2611399a1#033[00m
Jan 23 04:31:32 np0005593232 nova_compute[250269]: 2026-01-23 09:31:32.346 250273 DEBUG nova.compute.resource_tracker [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Migration c757740f-0e06-4982-9054-d0a23d1c3501 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 23 04:31:32 np0005593232 nova_compute[250269]: 2026-01-23 09:31:32.347 250273 DEBUG nova.compute.resource_tracker [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Instance f2d1fdc0-baaf-4566-8655-aafdbcf1f473 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:31:32 np0005593232 nova_compute[250269]: 2026-01-23 09:31:32.347 250273 DEBUG nova.compute.resource_tracker [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:31:32 np0005593232 nova_compute[250269]: 2026-01-23 09:31:32.347 250273 DEBUG nova.compute.resource_tracker [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:31:32 np0005593232 podman[261838]: 2026-01-23 09:31:32.392601404 +0000 UTC m=+0.054555166 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 04:31:32 np0005593232 nova_compute[250269]: 2026-01-23 09:31:32.416 250273 DEBUG oslo_concurrency.processutils [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:31:32 np0005593232 nova_compute[250269]: 2026-01-23 09:31:32.551 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769160677.5499794, f62791ad-fc40-451f-b02a-ba991f2dbc32 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:31:32 np0005593232 nova_compute[250269]: 2026-01-23 09:31:32.553 250273 INFO nova.compute.manager [-] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:31:32 np0005593232 nova_compute[250269]: 2026-01-23 09:31:32.615 250273 DEBUG nova.compute.manager [None req-7d17104e-8c82-4442-8146-b27faac353be - - - - - -] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:31:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:31:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1879443532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:31:32 np0005593232 nova_compute[250269]: 2026-01-23 09:31:32.893 250273 DEBUG oslo_concurrency.processutils [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:31:32 np0005593232 nova_compute[250269]: 2026-01-23 09:31:32.898 250273 DEBUG nova.compute.provider_tree [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:31:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1098: 321 pgs: 321 active+clean; 281 MiB data, 399 MiB used, 21 GiB / 21 GiB avail; 169 KiB/s rd, 1.5 MiB/s wr, 58 op/s
Jan 23 04:31:32 np0005593232 nova_compute[250269]: 2026-01-23 09:31:32.953 250273 DEBUG nova.scheduler.client.report [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:31:32 np0005593232 nova_compute[250269]: 2026-01-23 09:31:32.960 250273 DEBUG nova.network.neutron [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:31:32 np0005593232 nova_compute[250269]: 2026-01-23 09:31:32.993 250273 DEBUG nova.compute.resource_tracker [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:31:32 np0005593232 nova_compute[250269]: 2026-01-23 09:31:32.993 250273 DEBUG oslo_concurrency.lockutils [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:31:33 np0005593232 nova_compute[250269]: 2026-01-23 09:31:33.000 250273 INFO nova.compute.manager [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Migrating instance to compute-1.ctlplane.example.com finished successfully.#033[00m
Jan 23 04:31:33 np0005593232 nova_compute[250269]: 2026-01-23 09:31:33.240 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:31:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:33.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:33 np0005593232 nova_compute[250269]: 2026-01-23 09:31:33.874 250273 INFO nova.scheduler.client.report [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Deleted allocation for migration c757740f-0e06-4982-9054-d0a23d1c3501#033[00m
Jan 23 04:31:33 np0005593232 nova_compute[250269]: 2026-01-23 09:31:33.875 250273 DEBUG nova.virt.libvirt.driver [None req-3587debb-5fd3-47d7-8686-c5801ff95138 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: f62791ad-fc40-451f-b02a-ba991f2dbc32] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Jan 23 04:31:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:31:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:33.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:31:34 np0005593232 nova_compute[250269]: 2026-01-23 09:31:34.707 250273 DEBUG nova.network.neutron [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:31:34 np0005593232 nova_compute[250269]: 2026-01-23 09:31:34.856 250273 DEBUG oslo_concurrency.lockutils [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Releasing lock "refresh_cache-f2d1fdc0-baaf-4566-8655-aafdbcf1f473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:31:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1099: 321 pgs: 321 active+clean; 281 MiB data, 399 MiB used, 21 GiB / 21 GiB avail; 76 KiB/s rd, 63 KiB/s wr, 20 op/s
Jan 23 04:31:35 np0005593232 nova_compute[250269]: 2026-01-23 09:31:35.069 250273 DEBUG nova.virt.libvirt.driver [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Jan 23 04:31:35 np0005593232 nova_compute[250269]: 2026-01-23 09:31:35.071 250273 DEBUG nova.virt.libvirt.driver [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 23 04:31:35 np0005593232 nova_compute[250269]: 2026-01-23 09:31:35.072 250273 INFO nova.virt.libvirt.driver [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Creating image(s)#033[00m
Jan 23 04:31:35 np0005593232 nova_compute[250269]: 2026-01-23 09:31:35.112 250273 DEBUG nova.storage.rbd_utils [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] creating snapshot(nova-resize) on rbd image(f2d1fdc0-baaf-4566-8655-aafdbcf1f473_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 04:31:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:31:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:35.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:31:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:35.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Jan 23 04:31:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Jan 23 04:31:36 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.057 250273 DEBUG nova.objects.instance [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Lazy-loading 'trusted_certs' on Instance uuid f2d1fdc0-baaf-4566-8655-aafdbcf1f473 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.338 250273 DEBUG nova.virt.libvirt.driver [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.339 250273 DEBUG nova.virt.libvirt.driver [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Ensure instance console log exists: /var/lib/nova/instances/f2d1fdc0-baaf-4566-8655-aafdbcf1f473/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.340 250273 DEBUG oslo_concurrency.lockutils [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.340 250273 DEBUG oslo_concurrency.lockutils [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.341 250273 DEBUG oslo_concurrency.lockutils [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.344 250273 DEBUG nova.virt.libvirt.driver [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.351 250273 WARNING nova.virt.libvirt.driver [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.360 250273 DEBUG nova.virt.libvirt.host [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.361 250273 DEBUG nova.virt.libvirt.host [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.367 250273 DEBUG nova.virt.libvirt.host [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.368 250273 DEBUG nova.virt.libvirt.host [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.370 250273 DEBUG nova.virt.libvirt.driver [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.370 250273 DEBUG nova.virt.hardware [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='eebea5f8-9b11-45ad-873d-c4ea90d3de87',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.371 250273 DEBUG nova.virt.hardware [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.371 250273 DEBUG nova.virt.hardware [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.371 250273 DEBUG nova.virt.hardware [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.372 250273 DEBUG nova.virt.hardware [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.372 250273 DEBUG nova.virt.hardware [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.372 250273 DEBUG nova.virt.hardware [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.372 250273 DEBUG nova.virt.hardware [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.373 250273 DEBUG nova.virt.hardware [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.373 250273 DEBUG nova.virt.hardware [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.373 250273 DEBUG nova.virt.hardware [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.374 250273 DEBUG nova.objects.instance [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Lazy-loading 'vcpu_model' on Instance uuid f2d1fdc0-baaf-4566-8655-aafdbcf1f473 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.416 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.449 250273 DEBUG oslo_concurrency.processutils [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:31:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:31:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1632897020' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.901 250273 DEBUG oslo_concurrency.processutils [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:31:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1101: 321 pgs: 321 active+clean; 281 MiB data, 399 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 16 KiB/s wr, 16 op/s
Jan 23 04:31:36 np0005593232 nova_compute[250269]: 2026-01-23 09:31:36.942 250273 DEBUG oslo_concurrency.processutils [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:31:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:31:37
Jan 23 04:31:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:31:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:31:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'images', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', '.mgr']
Jan 23 04:31:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:31:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:31:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:31:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:31:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:31:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:31:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:31:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:31:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2851472895' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:31:37 np0005593232 nova_compute[250269]: 2026-01-23 09:31:37.393 250273 DEBUG oslo_concurrency.processutils [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:31:37 np0005593232 nova_compute[250269]: 2026-01-23 09:31:37.399 250273 DEBUG nova.virt.libvirt.driver [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:31:37 np0005593232 nova_compute[250269]:  <uuid>f2d1fdc0-baaf-4566-8655-aafdbcf1f473</uuid>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:  <name>instance-0000000a</name>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:  <memory>196608</memory>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <nova:name>tempest-MigrationsAdminTest-server-1871952171</nova:name>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:31:36</nova:creationTime>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.micro">
Jan 23 04:31:37 np0005593232 nova_compute[250269]:        <nova:memory>192</nova:memory>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:        <nova:user uuid="7536fa2e625541fba613dc32a49a4c5b">tempest-MigrationsAdminTest-2056264627-project-member</nova:user>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:        <nova:project uuid="11def90dfdc14cfe928302bec2835794">tempest-MigrationsAdminTest-2056264627</nova:project>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <nova:ports/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <entry name="serial">f2d1fdc0-baaf-4566-8655-aafdbcf1f473</entry>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <entry name="uuid">f2d1fdc0-baaf-4566-8655-aafdbcf1f473</entry>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/f2d1fdc0-baaf-4566-8655-aafdbcf1f473_disk">
Jan 23 04:31:37 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:31:37 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/f2d1fdc0-baaf-4566-8655-aafdbcf1f473_disk.config">
Jan 23 04:31:37 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:31:37 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/f2d1fdc0-baaf-4566-8655-aafdbcf1f473/console.log" append="off"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:31:37 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:31:37 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:31:37 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:31:37 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:31:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:37.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:37 np0005593232 nova_compute[250269]: 2026-01-23 09:31:37.546 250273 DEBUG nova.virt.libvirt.driver [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:31:37 np0005593232 nova_compute[250269]: 2026-01-23 09:31:37.546 250273 DEBUG nova.virt.libvirt.driver [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:31:37 np0005593232 nova_compute[250269]: 2026-01-23 09:31:37.547 250273 INFO nova.virt.libvirt.driver [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Using config drive#033[00m
Jan 23 04:31:37 np0005593232 systemd-machined[215836]: New machine qemu-4-instance-0000000a.
Jan 23 04:31:37 np0005593232 systemd[1]: Started Virtual Machine qemu-4-instance-0000000a.
Jan 23 04:31:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:31:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:37.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:31:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:31:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:31:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:31:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:31:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:31:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:31:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:31:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:31:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:31:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:31:38 np0005593232 nova_compute[250269]: 2026-01-23 09:31:38.269 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:31:38 np0005593232 nova_compute[250269]: 2026-01-23 09:31:38.598 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160698.5978482, f2d1fdc0-baaf-4566-8655-aafdbcf1f473 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:31:38 np0005593232 nova_compute[250269]: 2026-01-23 09:31:38.598 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:31:38 np0005593232 nova_compute[250269]: 2026-01-23 09:31:38.601 250273 DEBUG nova.compute.manager [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:31:38 np0005593232 nova_compute[250269]: 2026-01-23 09:31:38.604 250273 INFO nova.virt.libvirt.driver [-] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Instance running successfully.#033[00m
Jan 23 04:31:38 np0005593232 virtqemud[249592]: argument unsupported: QEMU guest agent is not configured
Jan 23 04:31:38 np0005593232 nova_compute[250269]: 2026-01-23 09:31:38.607 250273 DEBUG nova.virt.libvirt.guest [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 23 04:31:38 np0005593232 nova_compute[250269]: 2026-01-23 09:31:38.607 250273 DEBUG nova.virt.libvirt.driver [None req-cd8134c6-df22-42b8-953d-6e61fa57fbdb 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Jan 23 04:31:38 np0005593232 nova_compute[250269]: 2026-01-23 09:31:38.683 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:31:38 np0005593232 nova_compute[250269]: 2026-01-23 09:31:38.686 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:31:38 np0005593232 nova_compute[250269]: 2026-01-23 09:31:38.752 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 23 04:31:38 np0005593232 nova_compute[250269]: 2026-01-23 09:31:38.753 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160698.601563, f2d1fdc0-baaf-4566-8655-aafdbcf1f473 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:31:38 np0005593232 nova_compute[250269]: 2026-01-23 09:31:38.753 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] VM Started (Lifecycle Event)#033[00m
Jan 23 04:31:38 np0005593232 nova_compute[250269]: 2026-01-23 09:31:38.844 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:31:38 np0005593232 nova_compute[250269]: 2026-01-23 09:31:38.847 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:31:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1102: 321 pgs: 321 active+clean; 281 MiB data, 399 MiB used, 21 GiB / 21 GiB avail; 12 KiB/s rd, 16 KiB/s wr, 16 op/s
Jan 23 04:31:38 np0005593232 nova_compute[250269]: 2026-01-23 09:31:38.946 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 23 04:31:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:39.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:39.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1103: 321 pgs: 321 active+clean; 281 MiB data, 399 MiB used, 21 GiB / 21 GiB avail; 413 KiB/s rd, 4.1 KiB/s wr, 29 op/s
Jan 23 04:31:41 np0005593232 nova_compute[250269]: 2026-01-23 09:31:41.418 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:41.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:41.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Jan 23 04:31:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Jan 23 04:31:42 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Jan 23 04:31:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:42.586 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:31:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:42.587 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:31:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:31:42.587 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:31:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1105: 321 pgs: 321 active+clean; 281 MiB data, 399 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.5 KiB/s wr, 119 op/s
Jan 23 04:31:43 np0005593232 nova_compute[250269]: 2026-01-23 09:31:43.270 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:31:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:43.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:43.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1106: 321 pgs: 321 active+clean; 292 MiB data, 405 MiB used, 21 GiB / 21 GiB avail; 3.5 MiB/s rd, 642 KiB/s wr, 119 op/s
Jan 23 04:31:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:45.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:45.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:46 np0005593232 nova_compute[250269]: 2026-01-23 09:31:46.420 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:31:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:31:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:31:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:31:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006762872522333892 of space, bias 1.0, pg target 2.0288617567001674 quantized to 32 (current 32)
Jan 23 04:31:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:31:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Jan 23 04:31:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:31:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:31:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:31:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 23 04:31:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:31:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 23 04:31:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:31:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:31:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:31:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 23 04:31:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:31:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 23 04:31:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:31:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:31:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:31:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 23 04:31:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1107: 321 pgs: 321 active+clean; 327 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Jan 23 04:31:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:31:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:47.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:31:47 np0005593232 nova_compute[250269]: 2026-01-23 09:31:47.530 250273 DEBUG oslo_concurrency.lockutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Acquiring lock "6b164305-fb4f-4e3a-9090-bb4dfb7ab779" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:31:47 np0005593232 nova_compute[250269]: 2026-01-23 09:31:47.530 250273 DEBUG oslo_concurrency.lockutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "6b164305-fb4f-4e3a-9090-bb4dfb7ab779" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:31:47 np0005593232 nova_compute[250269]: 2026-01-23 09:31:47.550 250273 DEBUG nova.compute.manager [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:31:47 np0005593232 nova_compute[250269]: 2026-01-23 09:31:47.634 250273 DEBUG oslo_concurrency.lockutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:31:47 np0005593232 nova_compute[250269]: 2026-01-23 09:31:47.635 250273 DEBUG oslo_concurrency.lockutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:31:47 np0005593232 nova_compute[250269]: 2026-01-23 09:31:47.643 250273 DEBUG nova.virt.hardware [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:31:47 np0005593232 nova_compute[250269]: 2026-01-23 09:31:47.643 250273 INFO nova.compute.claims [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:31:47 np0005593232 nova_compute[250269]: 2026-01-23 09:31:47.818 250273 DEBUG oslo_concurrency.processutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:31:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:31:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:47.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:31:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:31:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3727470881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:31:48 np0005593232 nova_compute[250269]: 2026-01-23 09:31:48.269 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:48 np0005593232 nova_compute[250269]: 2026-01-23 09:31:48.283 250273 DEBUG oslo_concurrency.processutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:31:48 np0005593232 nova_compute[250269]: 2026-01-23 09:31:48.290 250273 DEBUG nova.compute.provider_tree [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:31:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:31:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Jan 23 04:31:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Jan 23 04:31:48 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Jan 23 04:31:48 np0005593232 podman[262135]: 2026-01-23 09:31:48.517912153 +0000 UTC m=+0.178571251 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:31:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1109: 321 pgs: 321 active+clean; 327 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 5.0 MiB/s rd, 2.7 MiB/s wr, 162 op/s
Jan 23 04:31:49 np0005593232 nova_compute[250269]: 2026-01-23 09:31:49.278 250273 DEBUG nova.scheduler.client.report [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:31:49 np0005593232 nova_compute[250269]: 2026-01-23 09:31:49.374 250273 DEBUG oslo_concurrency.lockutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:31:49 np0005593232 nova_compute[250269]: 2026-01-23 09:31:49.375 250273 DEBUG nova.compute.manager [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 04:31:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:49.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:49 np0005593232 nova_compute[250269]: 2026-01-23 09:31:49.486 250273 DEBUG nova.compute.manager [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 04:31:49 np0005593232 nova_compute[250269]: 2026-01-23 09:31:49.486 250273 DEBUG nova.network.neutron [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:31:49 np0005593232 nova_compute[250269]: 2026-01-23 09:31:49.529 250273 INFO nova.virt.libvirt.driver [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:31:49 np0005593232 nova_compute[250269]: 2026-01-23 09:31:49.568 250273 DEBUG nova.compute.manager [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:31:49 np0005593232 nova_compute[250269]: 2026-01-23 09:31:49.817 250273 DEBUG nova.policy [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9914fa5b09794fda94ca3cb12e25549f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5071dae1c732441291c3cea4201538d1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 04:31:49 np0005593232 nova_compute[250269]: 2026-01-23 09:31:49.835 250273 DEBUG nova.compute.manager [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:31:49 np0005593232 nova_compute[250269]: 2026-01-23 09:31:49.836 250273 DEBUG nova.virt.libvirt.driver [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:31:49 np0005593232 nova_compute[250269]: 2026-01-23 09:31:49.837 250273 INFO nova.virt.libvirt.driver [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Creating image(s)#033[00m
Jan 23 04:31:49 np0005593232 nova_compute[250269]: 2026-01-23 09:31:49.860 250273 DEBUG nova.storage.rbd_utils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] rbd image 6b164305-fb4f-4e3a-9090-bb4dfb7ab779_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:31:49 np0005593232 nova_compute[250269]: 2026-01-23 09:31:49.882 250273 DEBUG nova.storage.rbd_utils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] rbd image 6b164305-fb4f-4e3a-9090-bb4dfb7ab779_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:31:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:49.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:49 np0005593232 nova_compute[250269]: 2026-01-23 09:31:49.906 250273 DEBUG nova.storage.rbd_utils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] rbd image 6b164305-fb4f-4e3a-9090-bb4dfb7ab779_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:31:49 np0005593232 nova_compute[250269]: 2026-01-23 09:31:49.909 250273 DEBUG oslo_concurrency.processutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:31:49 np0005593232 nova_compute[250269]: 2026-01-23 09:31:49.965 250273 DEBUG oslo_concurrency.processutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:31:49 np0005593232 nova_compute[250269]: 2026-01-23 09:31:49.966 250273 DEBUG oslo_concurrency.lockutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:31:49 np0005593232 nova_compute[250269]: 2026-01-23 09:31:49.966 250273 DEBUG oslo_concurrency.lockutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:31:49 np0005593232 nova_compute[250269]: 2026-01-23 09:31:49.967 250273 DEBUG oslo_concurrency.lockutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:31:49 np0005593232 nova_compute[250269]: 2026-01-23 09:31:49.994 250273 DEBUG nova.storage.rbd_utils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] rbd image 6b164305-fb4f-4e3a-9090-bb4dfb7ab779_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:31:49 np0005593232 nova_compute[250269]: 2026-01-23 09:31:49.997 250273 DEBUG oslo_concurrency.processutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 6b164305-fb4f-4e3a-9090-bb4dfb7ab779_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:31:50 np0005593232 nova_compute[250269]: 2026-01-23 09:31:50.295 250273 DEBUG oslo_concurrency.processutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 6b164305-fb4f-4e3a-9090-bb4dfb7ab779_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:31:50 np0005593232 nova_compute[250269]: 2026-01-23 09:31:50.369 250273 DEBUG nova.storage.rbd_utils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] resizing rbd image 6b164305-fb4f-4e3a-9090-bb4dfb7ab779_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 04:31:50 np0005593232 nova_compute[250269]: 2026-01-23 09:31:50.479 250273 DEBUG nova.objects.instance [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lazy-loading 'migration_context' on Instance uuid 6b164305-fb4f-4e3a-9090-bb4dfb7ab779 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:31:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1110: 321 pgs: 321 active+clean; 327 MiB data, 420 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.4 MiB/s wr, 99 op/s
Jan 23 04:31:51 np0005593232 nova_compute[250269]: 2026-01-23 09:31:51.423 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:51.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:51.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:52 np0005593232 nova_compute[250269]: 2026-01-23 09:31:52.439 250273 DEBUG nova.virt.libvirt.driver [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:31:52 np0005593232 nova_compute[250269]: 2026-01-23 09:31:52.439 250273 DEBUG nova.virt.libvirt.driver [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Ensure instance console log exists: /var/lib/nova/instances/6b164305-fb4f-4e3a-9090-bb4dfb7ab779/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:31:52 np0005593232 nova_compute[250269]: 2026-01-23 09:31:52.440 250273 DEBUG oslo_concurrency.lockutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:31:52 np0005593232 nova_compute[250269]: 2026-01-23 09:31:52.440 250273 DEBUG oslo_concurrency.lockutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:31:52 np0005593232 nova_compute[250269]: 2026-01-23 09:31:52.441 250273 DEBUG oslo_concurrency.lockutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:31:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1111: 321 pgs: 321 active+clean; 359 MiB data, 436 MiB used, 21 GiB / 21 GiB avail; 5.0 MiB/s rd, 3.7 MiB/s wr, 187 op/s
Jan 23 04:31:53 np0005593232 nova_compute[250269]: 2026-01-23 09:31:53.271 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:31:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:53.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:53.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1112: 321 pgs: 321 active+clean; 393 MiB data, 452 MiB used, 21 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.7 MiB/s wr, 227 op/s
Jan 23 04:31:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:55.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:55 np0005593232 nova_compute[250269]: 2026-01-23 09:31:55.784 250273 DEBUG nova.network.neutron [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Successfully created port: ae608b13-3393-41c9-9fc7-3bdad96a3218 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 04:31:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:55.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:56 np0005593232 nova_compute[250269]: 2026-01-23 09:31:56.427 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1113: 321 pgs: 321 active+clean; 386 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.3 MiB/s wr, 233 op/s
Jan 23 04:31:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:57.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:57 np0005593232 nova_compute[250269]: 2026-01-23 09:31:57.745 250273 DEBUG nova.network.neutron [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Successfully updated port: ae608b13-3393-41c9-9fc7-3bdad96a3218 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 04:31:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:57.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:57 np0005593232 nova_compute[250269]: 2026-01-23 09:31:57.943 250273 DEBUG nova.compute.manager [req-b8ad6139-04cc-46bf-9b7c-470fc88be780 req-ee31e4ca-0bca-49b5-bda2-e89f1d793f26 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Received event network-changed-ae608b13-3393-41c9-9fc7-3bdad96a3218 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:31:57 np0005593232 nova_compute[250269]: 2026-01-23 09:31:57.943 250273 DEBUG nova.compute.manager [req-b8ad6139-04cc-46bf-9b7c-470fc88be780 req-ee31e4ca-0bca-49b5-bda2-e89f1d793f26 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Refreshing instance network info cache due to event network-changed-ae608b13-3393-41c9-9fc7-3bdad96a3218. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:31:57 np0005593232 nova_compute[250269]: 2026-01-23 09:31:57.943 250273 DEBUG oslo_concurrency.lockutils [req-b8ad6139-04cc-46bf-9b7c-470fc88be780 req-ee31e4ca-0bca-49b5-bda2-e89f1d793f26 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-6b164305-fb4f-4e3a-9090-bb4dfb7ab779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:31:57 np0005593232 nova_compute[250269]: 2026-01-23 09:31:57.944 250273 DEBUG oslo_concurrency.lockutils [req-b8ad6139-04cc-46bf-9b7c-470fc88be780 req-ee31e4ca-0bca-49b5-bda2-e89f1d793f26 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-6b164305-fb4f-4e3a-9090-bb4dfb7ab779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:31:57 np0005593232 nova_compute[250269]: 2026-01-23 09:31:57.944 250273 DEBUG nova.network.neutron [req-b8ad6139-04cc-46bf-9b7c-470fc88be780 req-ee31e4ca-0bca-49b5-bda2-e89f1d793f26 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Refreshing network info cache for port ae608b13-3393-41c9-9fc7-3bdad96a3218 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:31:57 np0005593232 nova_compute[250269]: 2026-01-23 09:31:57.963 250273 DEBUG oslo_concurrency.lockutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Acquiring lock "refresh_cache-6b164305-fb4f-4e3a-9090-bb4dfb7ab779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:31:58 np0005593232 nova_compute[250269]: 2026-01-23 09:31:58.273 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:31:58 np0005593232 nova_compute[250269]: 2026-01-23 09:31:58.371 250273 DEBUG nova.network.neutron [req-b8ad6139-04cc-46bf-9b7c-470fc88be780 req-ee31e4ca-0bca-49b5-bda2-e89f1d793f26 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:31:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:31:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1114: 321 pgs: 321 active+clean; 386 MiB data, 448 MiB used, 21 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.1 MiB/s wr, 222 op/s
Jan 23 04:31:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:31:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:31:59.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:31:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:31:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:31:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:31:59.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:31:59 np0005593232 nova_compute[250269]: 2026-01-23 09:31:59.976 250273 DEBUG nova.network.neutron [req-b8ad6139-04cc-46bf-9b7c-470fc88be780 req-ee31e4ca-0bca-49b5-bda2-e89f1d793f26 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:32:00 np0005593232 nova_compute[250269]: 2026-01-23 09:32:00.010 250273 DEBUG oslo_concurrency.lockutils [req-b8ad6139-04cc-46bf-9b7c-470fc88be780 req-ee31e4ca-0bca-49b5-bda2-e89f1d793f26 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-6b164305-fb4f-4e3a-9090-bb4dfb7ab779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:32:00 np0005593232 nova_compute[250269]: 2026-01-23 09:32:00.011 250273 DEBUG oslo_concurrency.lockutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Acquired lock "refresh_cache-6b164305-fb4f-4e3a-9090-bb4dfb7ab779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:32:00 np0005593232 nova_compute[250269]: 2026-01-23 09:32:00.011 250273 DEBUG nova.network.neutron [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:32:00 np0005593232 nova_compute[250269]: 2026-01-23 09:32:00.521 250273 DEBUG nova.network.neutron [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:32:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1115: 321 pgs: 321 active+clean; 375 MiB data, 443 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.6 MiB/s wr, 204 op/s
Jan 23 04:32:01 np0005593232 nova_compute[250269]: 2026-01-23 09:32:01.429 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:32:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:01.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:32:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:01.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:02 np0005593232 nova_compute[250269]: 2026-01-23 09:32:02.465 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:32:02 np0005593232 nova_compute[250269]: 2026-01-23 09:32:02.466 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:32:02 np0005593232 nova_compute[250269]: 2026-01-23 09:32:02.467 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:32:02 np0005593232 nova_compute[250269]: 2026-01-23 09:32:02.467 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:32:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1116: 321 pgs: 321 active+clean; 375 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 184 op/s
Jan 23 04:32:03 np0005593232 nova_compute[250269]: 2026-01-23 09:32:03.278 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:03 np0005593232 podman[262397]: 2026-01-23 09:32:03.408603255 +0000 UTC m=+0.065513719 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 23 04:32:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:32:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:03.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:03 np0005593232 nova_compute[250269]: 2026-01-23 09:32:03.778 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 23 04:32:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:03.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.598 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-f2d1fdc0-baaf-4566-8655-aafdbcf1f473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.599 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-f2d1fdc0-baaf-4566-8655-aafdbcf1f473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.599 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.600 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f2d1fdc0-baaf-4566-8655-aafdbcf1f473 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.705 250273 DEBUG nova.network.neutron [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Updating instance_info_cache with network_info: [{"id": "ae608b13-3393-41c9-9fc7-3bdad96a3218", "address": "fa:16:3e:f0:b4:9a", "network": {"id": "e1f93ca8-d3d7-4404-b1f0-385022a03154", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1466517221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5071dae1c732441291c3cea4201538d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae608b13-33", "ovs_interfaceid": "ae608b13-3393-41c9-9fc7-3bdad96a3218", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.733 250273 DEBUG oslo_concurrency.lockutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Releasing lock "refresh_cache-6b164305-fb4f-4e3a-9090-bb4dfb7ab779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.733 250273 DEBUG nova.compute.manager [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Instance network_info: |[{"id": "ae608b13-3393-41c9-9fc7-3bdad96a3218", "address": "fa:16:3e:f0:b4:9a", "network": {"id": "e1f93ca8-d3d7-4404-b1f0-385022a03154", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1466517221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5071dae1c732441291c3cea4201538d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae608b13-33", "ovs_interfaceid": "ae608b13-3393-41c9-9fc7-3bdad96a3218", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.735 250273 DEBUG nova.virt.libvirt.driver [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Start _get_guest_xml network_info=[{"id": "ae608b13-3393-41c9-9fc7-3bdad96a3218", "address": "fa:16:3e:f0:b4:9a", "network": {"id": "e1f93ca8-d3d7-4404-b1f0-385022a03154", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1466517221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5071dae1c732441291c3cea4201538d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae608b13-33", "ovs_interfaceid": "ae608b13-3393-41c9-9fc7-3bdad96a3218", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.740 250273 WARNING nova.virt.libvirt.driver [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.745 250273 DEBUG nova.virt.libvirt.host [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.746 250273 DEBUG nova.virt.libvirt.host [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.749 250273 DEBUG nova.virt.libvirt.host [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.750 250273 DEBUG nova.virt.libvirt.host [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.751 250273 DEBUG nova.virt.libvirt.driver [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.751 250273 DEBUG nova.virt.hardware [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:31:34Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='967643117',id=15,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-1676572698',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.752 250273 DEBUG nova.virt.hardware [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.752 250273 DEBUG nova.virt.hardware [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.752 250273 DEBUG nova.virt.hardware [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.752 250273 DEBUG nova.virt.hardware [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.753 250273 DEBUG nova.virt.hardware [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.753 250273 DEBUG nova.virt.hardware [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.753 250273 DEBUG nova.virt.hardware [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.753 250273 DEBUG nova.virt.hardware [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.754 250273 DEBUG nova.virt.hardware [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.754 250273 DEBUG nova.virt.hardware [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.756 250273 DEBUG oslo_concurrency.processutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:32:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1117: 321 pgs: 321 active+clean; 375 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 320 KiB/s rd, 2.3 MiB/s wr, 101 op/s
Jan 23 04:32:04 np0005593232 nova_compute[250269]: 2026-01-23 09:32:04.936 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:32:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:32:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2221469357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:32:05 np0005593232 nova_compute[250269]: 2026-01-23 09:32:05.278 250273 DEBUG oslo_concurrency.processutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:32:05 np0005593232 nova_compute[250269]: 2026-01-23 09:32:05.308 250273 DEBUG nova.storage.rbd_utils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] rbd image 6b164305-fb4f-4e3a-9090-bb4dfb7ab779_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:32:05 np0005593232 nova_compute[250269]: 2026-01-23 09:32:05.312 250273 DEBUG oslo_concurrency.processutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:32:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:32:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:05.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:32:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:32:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1666728588' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:32:05 np0005593232 nova_compute[250269]: 2026-01-23 09:32:05.781 250273 DEBUG oslo_concurrency.processutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:32:05 np0005593232 nova_compute[250269]: 2026-01-23 09:32:05.785 250273 DEBUG nova.virt.libvirt.vif [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:31:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1083116106',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1083116106',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(15),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1083116106',id=13,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=15,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG55OzgmhMbHj/AdekeVIJzfHbYBd/FfQqGtG06NkSPUh44muiV0W4+jTbr0/+5N+bSuPO5vRZ2E3ny+4RPwCoV8mIh43dtiZy7+WnjnptN/wd9TpQ5NxnJIVtbxdhFRlA==',key_name='tempest-keypair-1596118567',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5071dae1c732441291c3cea4201538d1',ramdisk_id='',reservation_id='r-aby54ryw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-2097391700',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-2097391700-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:31:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9914fa5b09794fda94ca3cb12e25549f',uuid=6b164305-fb4f-4e3a-9090-bb4dfb7ab779,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ae608b13-3393-41c9-9fc7-3bdad96a3218", "address": "fa:16:3e:f0:b4:9a", "network": {"id": "e1f93ca8-d3d7-4404-b1f0-385022a03154", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1466517221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5071dae1c732441291c3cea4201538d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae608b13-33", "ovs_interfaceid": "ae608b13-3393-41c9-9fc7-3bdad96a3218", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:32:05 np0005593232 nova_compute[250269]: 2026-01-23 09:32:05.785 250273 DEBUG nova.network.os_vif_util [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Converting VIF {"id": "ae608b13-3393-41c9-9fc7-3bdad96a3218", "address": "fa:16:3e:f0:b4:9a", "network": {"id": "e1f93ca8-d3d7-4404-b1f0-385022a03154", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1466517221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5071dae1c732441291c3cea4201538d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae608b13-33", "ovs_interfaceid": "ae608b13-3393-41c9-9fc7-3bdad96a3218", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:32:05 np0005593232 nova_compute[250269]: 2026-01-23 09:32:05.786 250273 DEBUG nova.network.os_vif_util [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:b4:9a,bridge_name='br-int',has_traffic_filtering=True,id=ae608b13-3393-41c9-9fc7-3bdad96a3218,network=Network(e1f93ca8-d3d7-4404-b1f0-385022a03154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae608b13-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:32:05 np0005593232 nova_compute[250269]: 2026-01-23 09:32:05.787 250273 DEBUG nova.objects.instance [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6b164305-fb4f-4e3a-9090-bb4dfb7ab779 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:32:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:32:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:32:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:32:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:32:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:05.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:32:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.392 250273 DEBUG nova.virt.libvirt.driver [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:32:06 np0005593232 nova_compute[250269]:  <uuid>6b164305-fb4f-4e3a-9090-bb4dfb7ab779</uuid>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:  <name>instance-0000000d</name>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1083116106</nova:name>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:32:04</nova:creationTime>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <nova:flavor name="tempest-flavor_with_ephemeral_0-1676572698">
Jan 23 04:32:06 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:        <nova:user uuid="9914fa5b09794fda94ca3cb12e25549f">tempest-ServersWithSpecificFlavorTestJSON-2097391700-project-member</nova:user>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:        <nova:project uuid="5071dae1c732441291c3cea4201538d1">tempest-ServersWithSpecificFlavorTestJSON-2097391700</nova:project>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:        <nova:port uuid="ae608b13-3393-41c9-9fc7-3bdad96a3218">
Jan 23 04:32:06 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <entry name="serial">6b164305-fb4f-4e3a-9090-bb4dfb7ab779</entry>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <entry name="uuid">6b164305-fb4f-4e3a-9090-bb4dfb7ab779</entry>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/6b164305-fb4f-4e3a-9090-bb4dfb7ab779_disk">
Jan 23 04:32:06 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:32:06 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/6b164305-fb4f-4e3a-9090-bb4dfb7ab779_disk.config">
Jan 23 04:32:06 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:32:06 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:f0:b4:9a"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <target dev="tapae608b13-33"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/6b164305-fb4f-4e3a-9090-bb4dfb7ab779/console.log" append="off"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:32:06 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:32:06 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:32:06 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:32:06 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.392 250273 DEBUG nova.compute.manager [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Preparing to wait for external event network-vif-plugged-ae608b13-3393-41c9-9fc7-3bdad96a3218 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.393 250273 DEBUG oslo_concurrency.lockutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Acquiring lock "6b164305-fb4f-4e3a-9090-bb4dfb7ab779-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.393 250273 DEBUG oslo_concurrency.lockutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "6b164305-fb4f-4e3a-9090-bb4dfb7ab779-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.393 250273 DEBUG oslo_concurrency.lockutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "6b164305-fb4f-4e3a-9090-bb4dfb7ab779-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.394 250273 DEBUG nova.virt.libvirt.vif [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:31:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1083116106',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1083116106',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(15),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1083116106',id=13,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=15,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG55OzgmhMbHj/AdekeVIJzfHbYBd/FfQqGtG06NkSPUh44muiV0W4+jTbr0/+5N+bSuPO5vRZ2E3ny+4RPwCoV8mIh43dtiZy7+WnjnptN/wd9TpQ5NxnJIVtbxdhFRlA==',key_name='tempest-keypair-1596118567',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5071dae1c732441291c3cea4201538d1',ramdisk_id='',reservation_id='r-aby54ryw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-2097391700',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-2097391700-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:31:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9914fa5b09794fda94ca3cb12e25549f',uuid=6b164305-fb4f-4e3a-9090-bb4dfb7ab779,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ae608b13-3393-41c9-9fc7-3bdad96a3218", "address": "fa:16:3e:f0:b4:9a", "network": {"id": "e1f93ca8-d3d7-4404-b1f0-385022a03154", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1466517221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5071dae1c732441291c3cea4201538d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae608b13-33", "ovs_interfaceid": "ae608b13-3393-41c9-9fc7-3bdad96a3218", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.394 250273 DEBUG nova.network.os_vif_util [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Converting VIF {"id": "ae608b13-3393-41c9-9fc7-3bdad96a3218", "address": "fa:16:3e:f0:b4:9a", "network": {"id": "e1f93ca8-d3d7-4404-b1f0-385022a03154", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1466517221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5071dae1c732441291c3cea4201538d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae608b13-33", "ovs_interfaceid": "ae608b13-3393-41c9-9fc7-3bdad96a3218", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.394 250273 DEBUG nova.network.os_vif_util [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f0:b4:9a,bridge_name='br-int',has_traffic_filtering=True,id=ae608b13-3393-41c9-9fc7-3bdad96a3218,network=Network(e1f93ca8-d3d7-4404-b1f0-385022a03154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae608b13-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.397 250273 DEBUG os_vif [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:b4:9a,bridge_name='br-int',has_traffic_filtering=True,id=ae608b13-3393-41c9-9fc7-3bdad96a3218,network=Network(e1f93ca8-d3d7-4404-b1f0-385022a03154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae608b13-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.398 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.398 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.399 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.403 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.403 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapae608b13-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.403 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapae608b13-33, col_values=(('external_ids', {'iface-id': 'ae608b13-3393-41c9-9fc7-3bdad96a3218', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f0:b4:9a', 'vm-uuid': '6b164305-fb4f-4e3a-9090-bb4dfb7ab779'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.405 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:06 np0005593232 NetworkManager[49057]: <info>  [1769160726.4069] manager: (tapae608b13-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.407 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.413 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.414 250273 INFO os_vif [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f0:b4:9a,bridge_name='br-int',has_traffic_filtering=True,id=ae608b13-3393-41c9-9fc7-3bdad96a3218,network=Network(e1f93ca8-d3d7-4404-b1f0-385022a03154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae608b13-33')#033[00m
Jan 23 04:32:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:32:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:32:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:32:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:32:06 np0005593232 podman[262756]: 2026-01-23 09:32:06.64257765 +0000 UTC m=+0.051361484 container create 4ab383ca4b93327bca4a1b92039332a56c8bcc4193468ef3f9861e9903e0fb31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:32:06 np0005593232 systemd[1]: Started libpod-conmon-4ab383ca4b93327bca4a1b92039332a56c8bcc4193468ef3f9861e9903e0fb31.scope.
Jan 23 04:32:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:32:06 np0005593232 podman[262756]: 2026-01-23 09:32:06.622836148 +0000 UTC m=+0.031619982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:32:06 np0005593232 podman[262756]: 2026-01-23 09:32:06.726554934 +0000 UTC m=+0.135338788 container init 4ab383ca4b93327bca4a1b92039332a56c8bcc4193468ef3f9861e9903e0fb31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 04:32:06 np0005593232 podman[262756]: 2026-01-23 09:32:06.736403084 +0000 UTC m=+0.145186918 container start 4ab383ca4b93327bca4a1b92039332a56c8bcc4193468ef3f9861e9903e0fb31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:32:06 np0005593232 podman[262756]: 2026-01-23 09:32:06.741586372 +0000 UTC m=+0.150370226 container attach 4ab383ca4b93327bca4a1b92039332a56c8bcc4193468ef3f9861e9903e0fb31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:32:06 np0005593232 great_grothendieck[262772]: 167 167
Jan 23 04:32:06 np0005593232 systemd[1]: libpod-4ab383ca4b93327bca4a1b92039332a56c8bcc4193468ef3f9861e9903e0fb31.scope: Deactivated successfully.
Jan 23 04:32:06 np0005593232 podman[262756]: 2026-01-23 09:32:06.74713735 +0000 UTC m=+0.155921184 container died 4ab383ca4b93327bca4a1b92039332a56c8bcc4193468ef3f9861e9903e0fb31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:32:06 np0005593232 systemd[1]: var-lib-containers-storage-overlay-57b3224e11f9a87d3a13d5c069353d86ba6d60ffa5cca2989344c002e2b1849d-merged.mount: Deactivated successfully.
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.772 250273 DEBUG nova.virt.libvirt.driver [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.773 250273 DEBUG nova.virt.libvirt.driver [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.773 250273 DEBUG nova.virt.libvirt.driver [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] No VIF found with MAC fa:16:3e:f0:b4:9a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.774 250273 INFO nova.virt.libvirt.driver [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Using config drive#033[00m
Jan 23 04:32:06 np0005593232 podman[262756]: 2026-01-23 09:32:06.784468224 +0000 UTC m=+0.193252048 container remove 4ab383ca4b93327bca4a1b92039332a56c8bcc4193468ef3f9861e9903e0fb31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 04:32:06 np0005593232 systemd[1]: libpod-conmon-4ab383ca4b93327bca4a1b92039332a56c8bcc4193468ef3f9861e9903e0fb31.scope: Deactivated successfully.
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.806 250273 DEBUG nova.storage.rbd_utils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] rbd image 6b164305-fb4f-4e3a-9090-bb4dfb7ab779_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.815 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.860 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-f2d1fdc0-baaf-4566-8655-aafdbcf1f473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.860 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.861 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.861 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.861 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.862 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.862 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.862 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.862 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.863 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.919 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.919 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.920 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.921 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 04:32:06 np0005593232 nova_compute[250269]: 2026-01-23 09:32:06.921 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:32:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1118: 321 pgs: 321 active+clean; 375 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 39 KiB/s rd, 983 KiB/s wr, 59 op/s
Jan 23 04:32:06 np0005593232 podman[262815]: 2026-01-23 09:32:06.963565518 +0000 UTC m=+0.043349466 container create ff38e39812b12c279fafcfbcc17075a3165a69b7e9b2c6b33b75cc4ab924efdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 04:32:07 np0005593232 systemd[1]: Started libpod-conmon-ff38e39812b12c279fafcfbcc17075a3165a69b7e9b2c6b33b75cc4ab924efdf.scope.
Jan 23 04:32:07 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:32:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d51552e59b374d94a1e979c6b83a39ceff65d5104f001f4d13ec2b490acbeec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:32:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d51552e59b374d94a1e979c6b83a39ceff65d5104f001f4d13ec2b490acbeec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:32:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d51552e59b374d94a1e979c6b83a39ceff65d5104f001f4d13ec2b490acbeec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:32:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d51552e59b374d94a1e979c6b83a39ceff65d5104f001f4d13ec2b490acbeec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:32:07 np0005593232 podman[262815]: 2026-01-23 09:32:06.944730622 +0000 UTC m=+0.024514590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:32:07 np0005593232 podman[262815]: 2026-01-23 09:32:07.054142 +0000 UTC m=+0.133925958 container init ff38e39812b12c279fafcfbcc17075a3165a69b7e9b2c6b33b75cc4ab924efdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:32:07 np0005593232 podman[262815]: 2026-01-23 09:32:07.061280313 +0000 UTC m=+0.141064261 container start ff38e39812b12c279fafcfbcc17075a3165a69b7e9b2c6b33b75cc4ab924efdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_galileo, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 23 04:32:07 np0005593232 podman[262815]: 2026-01-23 09:32:07.070554338 +0000 UTC m=+0.150338316 container attach ff38e39812b12c279fafcfbcc17075a3165a69b7e9b2c6b33b75cc4ab924efdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_galileo, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 04:32:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:32:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:32:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:32:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:32:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:32:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:32:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:32:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/224111815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:32:07 np0005593232 nova_compute[250269]: 2026-01-23 09:32:07.348 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:32:07 np0005593232 nova_compute[250269]: 2026-01-23 09:32:07.430 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 23 04:32:07 np0005593232 nova_compute[250269]: 2026-01-23 09:32:07.431 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 23 04:32:07 np0005593232 nova_compute[250269]: 2026-01-23 09:32:07.435 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 23 04:32:07 np0005593232 nova_compute[250269]: 2026-01-23 09:32:07.435 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 23 04:32:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:07.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:07 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:32:07 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:32:07 np0005593232 nova_compute[250269]: 2026-01-23 09:32:07.575 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 04:32:07 np0005593232 nova_compute[250269]: 2026-01-23 09:32:07.577 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4623MB free_disk=20.83100128173828GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 04:32:07 np0005593232 nova_compute[250269]: 2026-01-23 09:32:07.577 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:32:07 np0005593232 nova_compute[250269]: 2026-01-23 09:32:07.577 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:32:07 np0005593232 nova_compute[250269]: 2026-01-23 09:32:07.625 250273 INFO nova.virt.libvirt.driver [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Creating config drive at /var/lib/nova/instances/6b164305-fb4f-4e3a-9090-bb4dfb7ab779/disk.config
Jan 23 04:32:07 np0005593232 nova_compute[250269]: 2026-01-23 09:32:07.633 250273 DEBUG oslo_concurrency.processutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6b164305-fb4f-4e3a-9090-bb4dfb7ab779/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptthxyaav execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:32:07 np0005593232 nova_compute[250269]: 2026-01-23 09:32:07.760 250273 DEBUG oslo_concurrency.processutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6b164305-fb4f-4e3a-9090-bb4dfb7ab779/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptthxyaav" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:32:07 np0005593232 nova_compute[250269]: 2026-01-23 09:32:07.790 250273 DEBUG nova.storage.rbd_utils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] rbd image 6b164305-fb4f-4e3a-9090-bb4dfb7ab779_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 04:32:07 np0005593232 nova_compute[250269]: 2026-01-23 09:32:07.794 250273 DEBUG oslo_concurrency.processutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6b164305-fb4f-4e3a-9090-bb4dfb7ab779/disk.config 6b164305-fb4f-4e3a-9090-bb4dfb7ab779_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:32:07 np0005593232 nova_compute[250269]: 2026-01-23 09:32:07.813 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance f2d1fdc0-baaf-4566-8655-aafdbcf1f473 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 04:32:07 np0005593232 nova_compute[250269]: 2026-01-23 09:32:07.814 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 6b164305-fb4f-4e3a-9090-bb4dfb7ab779 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 04:32:07 np0005593232 nova_compute[250269]: 2026-01-23 09:32:07.814 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 04:32:07 np0005593232 nova_compute[250269]: 2026-01-23 09:32:07.815 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 04:32:07 np0005593232 nova_compute[250269]: 2026-01-23 09:32:07.889 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:32:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:32:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:07.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:32:07 np0005593232 nova_compute[250269]: 2026-01-23 09:32:07.944 250273 DEBUG oslo_concurrency.processutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6b164305-fb4f-4e3a-9090-bb4dfb7ab779/disk.config 6b164305-fb4f-4e3a-9090-bb4dfb7ab779_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:32:07 np0005593232 nova_compute[250269]: 2026-01-23 09:32:07.946 250273 INFO nova.virt.libvirt.driver [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Deleting local config drive /var/lib/nova/instances/6b164305-fb4f-4e3a-9090-bb4dfb7ab779/disk.config because it was imported into RBD.
Jan 23 04:32:08 np0005593232 kernel: tapae608b13-33: entered promiscuous mode
Jan 23 04:32:08 np0005593232 NetworkManager[49057]: <info>  [1769160728.0033] manager: (tapae608b13-33): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Jan 23 04:32:08 np0005593232 ovn_controller[151001]: 2026-01-23T09:32:08Z|00052|binding|INFO|Claiming lport ae608b13-3393-41c9-9fc7-3bdad96a3218 for this chassis.
Jan 23 04:32:08 np0005593232 ovn_controller[151001]: 2026-01-23T09:32:08Z|00053|binding|INFO|ae608b13-3393-41c9-9fc7-3bdad96a3218: Claiming fa:16:3e:f0:b4:9a 10.100.0.6
Jan 23 04:32:08 np0005593232 nova_compute[250269]: 2026-01-23 09:32:08.006 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:32:08 np0005593232 nova_compute[250269]: 2026-01-23 09:32:08.012 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:32:08 np0005593232 systemd-machined[215836]: New machine qemu-5-instance-0000000d.
Jan 23 04:32:08 np0005593232 systemd-udevd[263118]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:32:08 np0005593232 NetworkManager[49057]: <info>  [1769160728.0591] device (tapae608b13-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:32:08 np0005593232 NetworkManager[49057]: <info>  [1769160728.0600] device (tapae608b13-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:32:08 np0005593232 systemd[1]: Started Virtual Machine qemu-5-instance-0000000d.
Jan 23 04:32:08 np0005593232 nova_compute[250269]: 2026-01-23 09:32:08.085 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:32:08 np0005593232 ovn_controller[151001]: 2026-01-23T09:32:08Z|00054|binding|INFO|Setting lport ae608b13-3393-41c9-9fc7-3bdad96a3218 ovn-installed in OVS
Jan 23 04:32:08 np0005593232 nova_compute[250269]: 2026-01-23 09:32:08.093 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]: [
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:    {
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:        "available": false,
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:        "ceph_device": false,
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:        "lsm_data": {},
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:        "lvs": [],
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:        "path": "/dev/sr0",
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:        "rejected_reasons": [
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "Has a FileSystem",
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "Insufficient space (<5GB)"
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:        ],
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:        "sys_api": {
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "actuators": null,
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "device_nodes": "sr0",
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "devname": "sr0",
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "human_readable_size": "482.00 KB",
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "id_bus": "ata",
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "model": "QEMU DVD-ROM",
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "nr_requests": "2",
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "parent": "/dev/sr0",
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "partitions": {},
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "path": "/dev/sr0",
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "removable": "1",
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "rev": "2.5+",
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "ro": "0",
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "rotational": "1",
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "sas_address": "",
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "sas_device_handle": "",
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "scheduler_mode": "mq-deadline",
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "sectors": 0,
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "sectorsize": "2048",
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "size": 493568.0,
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "support_discard": "2048",
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "type": "disk",
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:            "vendor": "QEMU"
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:        }
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]:    }
Jan 23 04:32:08 np0005593232 intelligent_galileo[262834]: ]
Jan 23 04:32:08 np0005593232 nova_compute[250269]: 2026-01-23 09:32:08.279 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:32:08 np0005593232 systemd[1]: libpod-ff38e39812b12c279fafcfbcc17075a3165a69b7e9b2c6b33b75cc4ab924efdf.scope: Deactivated successfully.
Jan 23 04:32:08 np0005593232 systemd[1]: libpod-ff38e39812b12c279fafcfbcc17075a3165a69b7e9b2c6b33b75cc4ab924efdf.scope: Consumed 1.183s CPU time.
Jan 23 04:32:08 np0005593232 podman[262815]: 2026-01-23 09:32:08.295589971 +0000 UTC m=+1.375373919 container died ff38e39812b12c279fafcfbcc17075a3165a69b7e9b2c6b33b75cc4ab924efdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_galileo, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 23 04:32:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1d51552e59b374d94a1e979c6b83a39ceff65d5104f001f4d13ec2b490acbeec-merged.mount: Deactivated successfully.
Jan 23 04:32:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:32:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1362904222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:32:08 np0005593232 nova_compute[250269]: 2026-01-23 09:32:08.353 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:32:08 np0005593232 podman[262815]: 2026-01-23 09:32:08.354254632 +0000 UTC m=+1.434038580 container remove ff38e39812b12c279fafcfbcc17075a3165a69b7e9b2c6b33b75cc4ab924efdf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_galileo, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 23 04:32:08 np0005593232 nova_compute[250269]: 2026-01-23 09:32:08.367 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 04:32:08 np0005593232 ovn_controller[151001]: 2026-01-23T09:32:08Z|00055|binding|INFO|Setting lport ae608b13-3393-41c9-9fc7-3bdad96a3218 up in Southbound
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.372 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:b4:9a 10.100.0.6'], port_security=['fa:16:3e:f0:b4:9a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6b164305-fb4f-4e3a-9090-bb4dfb7ab779', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e1f93ca8-d3d7-4404-b1f0-385022a03154', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5071dae1c732441291c3cea4201538d1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '485475b6-fa5d-49cf-9e7a-d68656021da2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd9ebd18-6b3b-45ea-8383-db75ba482bdb, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=ae608b13-3393-41c9-9fc7-3bdad96a3218) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 04:32:08 np0005593232 systemd[1]: libpod-conmon-ff38e39812b12c279fafcfbcc17075a3165a69b7e9b2c6b33b75cc4ab924efdf.scope: Deactivated successfully.
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.375 161902 INFO neutron.agent.ovn.metadata.agent [-] Port ae608b13-3393-41c9-9fc7-3bdad96a3218 in datapath e1f93ca8-d3d7-4404-b1f0-385022a03154 bound to our chassis
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.376 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e1f93ca8-d3d7-4404-b1f0-385022a03154
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.396 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9007b250-de36-4fd9-bf00-b3bc5e43d3bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.398 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape1f93ca8-d1 in ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.401 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape1f93ca8-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.401 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f975f843-e6cb-442d-8320-e045dcb976fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.403 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0a431653-5924-4880-a7f4-11c4aebedc9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:32:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:32:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.415 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[a6e25a62-1142-47f6-979b-f8446b8cea6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:32:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:32:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:32:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.437 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[73202ca6-24ff-4fbd-846c-e3325829c1c2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.466 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d750bc-e294-44d5-a06a-7227f8a17e10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:08 np0005593232 NetworkManager[49057]: <info>  [1769160728.4725] manager: (tape1f93ca8-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.471 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4a4086d7-8a83-4e66-a3fb-90d8479eb4a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.507 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[04e0dd86-267c-4abd-b0b1-1571e866d4fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.510 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[e4a77ea8-b5af-455c-bb61-6389e15f1b98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:08 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:32:08 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:32:08 np0005593232 nova_compute[250269]: 2026-01-23 09:32:08.529 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:32:08 np0005593232 NetworkManager[49057]: <info>  [1769160728.5381] device (tape1f93ca8-d0): carrier: link connected
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.542 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[6a7adcbb-8a65-483e-8934-998fa92a9bfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.563 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2d289c2a-30d9-41ca-b578-bc48519d92f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape1f93ca8-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:6f:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464168, 'reachable_time': 40919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264056, 'error': None, 'target': 'ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.578 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e2664aab-e413-4054-b197-e9d0bbf1eecb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefa:6fac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 464168, 'tstamp': 464168}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264057, 'error': None, 'target': 'ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:08 np0005593232 nova_compute[250269]: 2026-01-23 09:32:08.579 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:32:08 np0005593232 nova_compute[250269]: 2026-01-23 09:32:08.580 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.590 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2ce4972c-ce0c-498f-9efe-b716b71ed2c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape1f93ca8-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:6f:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464168, 'reachable_time': 40919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264058, 'error': None, 'target': 'ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.614 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c80c1af3-4d70-4fa5-88b0-4245daf5e9be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.672 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b89596a8-398f-4c0f-a094-a32cc12ec417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.674 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape1f93ca8-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.675 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.675 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape1f93ca8-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:32:08 np0005593232 nova_compute[250269]: 2026-01-23 09:32:08.677 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:08 np0005593232 NetworkManager[49057]: <info>  [1769160728.6782] manager: (tape1f93ca8-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Jan 23 04:32:08 np0005593232 kernel: tape1f93ca8-d0: entered promiscuous mode
Jan 23 04:32:08 np0005593232 nova_compute[250269]: 2026-01-23 09:32:08.681 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.682 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape1f93ca8-d0, col_values=(('external_ids', {'iface-id': '7bc99e1a-c88e-4c7e-a925-28a3494f0f6e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:32:08 np0005593232 ovn_controller[151001]: 2026-01-23T09:32:08Z|00056|binding|INFO|Releasing lport 7bc99e1a-c88e-4c7e-a925-28a3494f0f6e from this chassis (sb_readonly=0)
Jan 23 04:32:08 np0005593232 nova_compute[250269]: 2026-01-23 09:32:08.702 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.704 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e1f93ca8-d3d7-4404-b1f0-385022a03154.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e1f93ca8-d3d7-4404-b1f0-385022a03154.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.705 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[52a50eac-58d7-4f3c-9f62-0908fc724bcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.706 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-e1f93ca8-d3d7-4404-b1f0-385022a03154
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/e1f93ca8-d3d7-4404-b1f0-385022a03154.pid.haproxy
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID e1f93ca8-d3d7-4404-b1f0-385022a03154
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:08.707 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154', 'env', 'PROCESS_TAG=haproxy-e1f93ca8-d3d7-4404-b1f0-385022a03154', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e1f93ca8-d3d7-4404-b1f0-385022a03154.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 04:32:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1119: 321 pgs: 321 active+clean; 375 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 7.3 KiB/s rd, 22 KiB/s wr, 11 op/s
Jan 23 04:32:09 np0005593232 podman[264106]: 2026-01-23 09:32:09.077550535 +0000 UTC m=+0.051429307 container create 1377c239da614ab04e96300043ed0870b25fccd60c0d4fed1eb055338c9aff24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:32:09 np0005593232 systemd[1]: Started libpod-conmon-1377c239da614ab04e96300043ed0870b25fccd60c0d4fed1eb055338c9aff24.scope.
Jan 23 04:32:09 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:32:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e443e1d4f65913647b73004d965aa9535843c518294fa3bd9e3978c15b00e73c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:32:09 np0005593232 podman[264106]: 2026-01-23 09:32:09.05105288 +0000 UTC m=+0.024931662 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:32:09 np0005593232 podman[264106]: 2026-01-23 09:32:09.160154009 +0000 UTC m=+0.134032801 container init 1377c239da614ab04e96300043ed0870b25fccd60c0d4fed1eb055338c9aff24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:32:09 np0005593232 podman[264106]: 2026-01-23 09:32:09.166474439 +0000 UTC m=+0.140353211 container start 1377c239da614ab04e96300043ed0870b25fccd60c0d4fed1eb055338c9aff24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 23 04:32:09 np0005593232 neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154[264145]: [NOTICE]   (264150) : New worker (264153) forked
Jan 23 04:32:09 np0005593232 neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154[264145]: [NOTICE]   (264150) : Loading success.
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.211 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160729.210862, 6b164305-fb4f-4e3a-9090-bb4dfb7ab779 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.212 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] VM Started (Lifecycle Event)#033[00m
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.247 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.250 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160729.2111745, 6b164305-fb4f-4e3a-9090-bb4dfb7ab779 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.251 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.285 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.289 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:32:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:09.297 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.298 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:09.299 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:32:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:09.300 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.322 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.368 250273 DEBUG nova.compute.manager [req-51ad11d0-e45b-411e-b36a-3389a4e1608b req-857aa33b-dbb3-4c3e-bad0-2500bc4301cd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Received event network-vif-plugged-ae608b13-3393-41c9-9fc7-3bdad96a3218 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.370 250273 DEBUG oslo_concurrency.lockutils [req-51ad11d0-e45b-411e-b36a-3389a4e1608b req-857aa33b-dbb3-4c3e-bad0-2500bc4301cd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "6b164305-fb4f-4e3a-9090-bb4dfb7ab779-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.370 250273 DEBUG oslo_concurrency.lockutils [req-51ad11d0-e45b-411e-b36a-3389a4e1608b req-857aa33b-dbb3-4c3e-bad0-2500bc4301cd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6b164305-fb4f-4e3a-9090-bb4dfb7ab779-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.371 250273 DEBUG oslo_concurrency.lockutils [req-51ad11d0-e45b-411e-b36a-3389a4e1608b req-857aa33b-dbb3-4c3e-bad0-2500bc4301cd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6b164305-fb4f-4e3a-9090-bb4dfb7ab779-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.371 250273 DEBUG nova.compute.manager [req-51ad11d0-e45b-411e-b36a-3389a4e1608b req-857aa33b-dbb3-4c3e-bad0-2500bc4301cd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Processing event network-vif-plugged-ae608b13-3393-41c9-9fc7-3bdad96a3218 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.372 250273 DEBUG nova.compute.manager [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.376 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160729.3760026, 6b164305-fb4f-4e3a-9090-bb4dfb7ab779 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.376 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] VM Resumed (Lifecycle Event)
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.379 250273 DEBUG nova.virt.libvirt.driver [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.383 250273 INFO nova.virt.libvirt.driver [-] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Instance spawned successfully.
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.384 250273 DEBUG nova.virt.libvirt.driver [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 23 04:32:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:09.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.617 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.627 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 04:32:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:32:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.633 250273 DEBUG nova.virt.libvirt.driver [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.634 250273 DEBUG nova.virt.libvirt.driver [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.635 250273 DEBUG nova.virt.libvirt.driver [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.636 250273 DEBUG nova.virt.libvirt.driver [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.637 250273 DEBUG nova.virt.libvirt.driver [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.638 250273 DEBUG nova.virt.libvirt.driver [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 04:32:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:32:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:32:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:32:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:32:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:32:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:32:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:32:09 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1c64313e-30c8-4657-81e9-012411886207 does not exist
Jan 23 04:32:09 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a7ccad86-a61b-4014-b593-b27d07ef0647 does not exist
Jan 23 04:32:09 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 6e283da3-7adf-4eb7-b632-2b46de22feec does not exist
Jan 23 04:32:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:32:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:32:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:32:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:32:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:32:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.709 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.835 250273 INFO nova.compute.manager [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Took 20.00 seconds to spawn the instance on the hypervisor.
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.836 250273 DEBUG nova.compute.manager [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 04:32:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:32:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:09.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:32:09 np0005593232 nova_compute[250269]: 2026-01-23 09:32:09.963 250273 INFO nova.compute.manager [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Took 22.36 seconds to build instance.
Jan 23 04:32:10 np0005593232 nova_compute[250269]: 2026-01-23 09:32:10.006 250273 DEBUG oslo_concurrency.lockutils [None req-8da92510-f2e5-49b9-88fc-fcc9671c061a 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "6b164305-fb4f-4e3a-9090-bb4dfb7ab779" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.475s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:32:10 np0005593232 podman[264352]: 2026-01-23 09:32:10.272499351 +0000 UTC m=+0.041494954 container create 985d279b4b625c118b7143c6222bb44fbb7ee57ef069b41c6f7dc37838cd495c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_greider, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 23 04:32:10 np0005593232 systemd[1]: Started libpod-conmon-985d279b4b625c118b7143c6222bb44fbb7ee57ef069b41c6f7dc37838cd495c.scope.
Jan 23 04:32:10 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:32:10 np0005593232 podman[264352]: 2026-01-23 09:32:10.254096666 +0000 UTC m=+0.023092329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:32:10 np0005593232 podman[264352]: 2026-01-23 09:32:10.368388983 +0000 UTC m=+0.137384616 container init 985d279b4b625c118b7143c6222bb44fbb7ee57ef069b41c6f7dc37838cd495c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_greider, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 23 04:32:10 np0005593232 podman[264352]: 2026-01-23 09:32:10.377973956 +0000 UTC m=+0.146969569 container start 985d279b4b625c118b7143c6222bb44fbb7ee57ef069b41c6f7dc37838cd495c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_greider, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:32:10 np0005593232 podman[264352]: 2026-01-23 09:32:10.381700323 +0000 UTC m=+0.150695986 container attach 985d279b4b625c118b7143c6222bb44fbb7ee57ef069b41c6f7dc37838cd495c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_greider, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:32:10 np0005593232 ecstatic_greider[264368]: 167 167
Jan 23 04:32:10 np0005593232 systemd[1]: libpod-985d279b4b625c118b7143c6222bb44fbb7ee57ef069b41c6f7dc37838cd495c.scope: Deactivated successfully.
Jan 23 04:32:10 np0005593232 podman[264352]: 2026-01-23 09:32:10.388183457 +0000 UTC m=+0.157179080 container died 985d279b4b625c118b7143c6222bb44fbb7ee57ef069b41c6f7dc37838cd495c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 04:32:10 np0005593232 systemd[1]: var-lib-containers-storage-overlay-576b88d61a0da06117541fb34f056defb26aae112d98d3cc8d9758813b99bce7-merged.mount: Deactivated successfully.
Jan 23 04:32:10 np0005593232 podman[264352]: 2026-01-23 09:32:10.432338976 +0000 UTC m=+0.201334599 container remove 985d279b4b625c118b7143c6222bb44fbb7ee57ef069b41c6f7dc37838cd495c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 04:32:10 np0005593232 systemd[1]: libpod-conmon-985d279b4b625c118b7143c6222bb44fbb7ee57ef069b41c6f7dc37838cd495c.scope: Deactivated successfully.
Jan 23 04:32:10 np0005593232 podman[264390]: 2026-01-23 09:32:10.625678716 +0000 UTC m=+0.054140614 container create e87071abefd54d5e21cafdefcb08c1408c9b4bcbd66e48b041af0949dedf8c71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:32:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:32:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:32:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:32:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:32:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:32:10 np0005593232 systemd[1]: Started libpod-conmon-e87071abefd54d5e21cafdefcb08c1408c9b4bcbd66e48b041af0949dedf8c71.scope.
Jan 23 04:32:10 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:32:10 np0005593232 podman[264390]: 2026-01-23 09:32:10.605383438 +0000 UTC m=+0.033845336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:32:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286fa0d07009e6b550d5a6682b6e64bf978b0e4ccd850c298caa12df09e38b8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:32:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286fa0d07009e6b550d5a6682b6e64bf978b0e4ccd850c298caa12df09e38b8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:32:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286fa0d07009e6b550d5a6682b6e64bf978b0e4ccd850c298caa12df09e38b8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:32:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286fa0d07009e6b550d5a6682b6e64bf978b0e4ccd850c298caa12df09e38b8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:32:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/286fa0d07009e6b550d5a6682b6e64bf978b0e4ccd850c298caa12df09e38b8e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:32:10 np0005593232 podman[264390]: 2026-01-23 09:32:10.73038434 +0000 UTC m=+0.158846258 container init e87071abefd54d5e21cafdefcb08c1408c9b4bcbd66e48b041af0949dedf8c71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nightingale, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 04:32:10 np0005593232 podman[264390]: 2026-01-23 09:32:10.737792291 +0000 UTC m=+0.166254189 container start e87071abefd54d5e21cafdefcb08c1408c9b4bcbd66e48b041af0949dedf8c71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 04:32:10 np0005593232 podman[264390]: 2026-01-23 09:32:10.741033664 +0000 UTC m=+0.169495592 container attach e87071abefd54d5e21cafdefcb08c1408c9b4bcbd66e48b041af0949dedf8c71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:32:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1120: 321 pgs: 321 active+clean; 394 MiB data, 450 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 694 KiB/s wr, 38 op/s
Jan 23 04:32:11 np0005593232 nova_compute[250269]: 2026-01-23 09:32:11.405 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:32:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:11.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:11 np0005593232 nova_compute[250269]: 2026-01-23 09:32:11.471 250273 DEBUG nova.compute.manager [req-c4097142-0529-47cb-b954-fbca2899aaf0 req-52a03dd0-d6c7-4586-bb7e-77f60f40c66a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Received event network-vif-plugged-ae608b13-3393-41c9-9fc7-3bdad96a3218 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 04:32:11 np0005593232 nova_compute[250269]: 2026-01-23 09:32:11.473 250273 DEBUG oslo_concurrency.lockutils [req-c4097142-0529-47cb-b954-fbca2899aaf0 req-52a03dd0-d6c7-4586-bb7e-77f60f40c66a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "6b164305-fb4f-4e3a-9090-bb4dfb7ab779-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:32:11 np0005593232 nova_compute[250269]: 2026-01-23 09:32:11.473 250273 DEBUG oslo_concurrency.lockutils [req-c4097142-0529-47cb-b954-fbca2899aaf0 req-52a03dd0-d6c7-4586-bb7e-77f60f40c66a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6b164305-fb4f-4e3a-9090-bb4dfb7ab779-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:32:11 np0005593232 nova_compute[250269]: 2026-01-23 09:32:11.473 250273 DEBUG oslo_concurrency.lockutils [req-c4097142-0529-47cb-b954-fbca2899aaf0 req-52a03dd0-d6c7-4586-bb7e-77f60f40c66a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6b164305-fb4f-4e3a-9090-bb4dfb7ab779-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:32:11 np0005593232 nova_compute[250269]: 2026-01-23 09:32:11.474 250273 DEBUG nova.compute.manager [req-c4097142-0529-47cb-b954-fbca2899aaf0 req-52a03dd0-d6c7-4586-bb7e-77f60f40c66a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] No waiting events found dispatching network-vif-plugged-ae608b13-3393-41c9-9fc7-3bdad96a3218 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 04:32:11 np0005593232 nova_compute[250269]: 2026-01-23 09:32:11.474 250273 WARNING nova.compute.manager [req-c4097142-0529-47cb-b954-fbca2899aaf0 req-52a03dd0-d6c7-4586-bb7e-77f60f40c66a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Received unexpected event network-vif-plugged-ae608b13-3393-41c9-9fc7-3bdad96a3218 for instance with vm_state active and task_state None.
Jan 23 04:32:11 np0005593232 determined_nightingale[264406]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:32:11 np0005593232 determined_nightingale[264406]: --> relative data size: 1.0
Jan 23 04:32:11 np0005593232 determined_nightingale[264406]: --> All data devices are unavailable
Jan 23 04:32:11 np0005593232 systemd[1]: libpod-e87071abefd54d5e21cafdefcb08c1408c9b4bcbd66e48b041af0949dedf8c71.scope: Deactivated successfully.
Jan 23 04:32:11 np0005593232 podman[264422]: 2026-01-23 09:32:11.671237704 +0000 UTC m=+0.029748129 container died e87071abefd54d5e21cafdefcb08c1408c9b4bcbd66e48b041af0949dedf8c71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 04:32:11 np0005593232 systemd[1]: var-lib-containers-storage-overlay-286fa0d07009e6b550d5a6682b6e64bf978b0e4ccd850c298caa12df09e38b8e-merged.mount: Deactivated successfully.
Jan 23 04:32:11 np0005593232 podman[264422]: 2026-01-23 09:32:11.736615797 +0000 UTC m=+0.095126192 container remove e87071abefd54d5e21cafdefcb08c1408c9b4bcbd66e48b041af0949dedf8c71 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:32:11 np0005593232 systemd[1]: libpod-conmon-e87071abefd54d5e21cafdefcb08c1408c9b4bcbd66e48b041af0949dedf8c71.scope: Deactivated successfully.
Jan 23 04:32:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:32:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:11.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:32:12 np0005593232 podman[264575]: 2026-01-23 09:32:12.410829522 +0000 UTC m=+0.042299517 container create b6e3b7d488511c95b2d9bf0c4d21fc660157c97a82c26bb8999b0ef85c1da9aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 04:32:12 np0005593232 systemd[1]: Started libpod-conmon-b6e3b7d488511c95b2d9bf0c4d21fc660157c97a82c26bb8999b0ef85c1da9aa.scope.
Jan 23 04:32:12 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:32:12 np0005593232 podman[264575]: 2026-01-23 09:32:12.393603741 +0000 UTC m=+0.025073756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:32:12 np0005593232 podman[264575]: 2026-01-23 09:32:12.502452533 +0000 UTC m=+0.133922548 container init b6e3b7d488511c95b2d9bf0c4d21fc660157c97a82c26bb8999b0ef85c1da9aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 04:32:12 np0005593232 podman[264575]: 2026-01-23 09:32:12.510850963 +0000 UTC m=+0.142320958 container start b6e3b7d488511c95b2d9bf0c4d21fc660157c97a82c26bb8999b0ef85c1da9aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 04:32:12 np0005593232 vigilant_hamilton[264592]: 167 167
Jan 23 04:32:12 np0005593232 systemd[1]: libpod-b6e3b7d488511c95b2d9bf0c4d21fc660157c97a82c26bb8999b0ef85c1da9aa.scope: Deactivated successfully.
Jan 23 04:32:12 np0005593232 podman[264575]: 2026-01-23 09:32:12.517976025 +0000 UTC m=+0.149446120 container attach b6e3b7d488511c95b2d9bf0c4d21fc660157c97a82c26bb8999b0ef85c1da9aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:32:12 np0005593232 podman[264575]: 2026-01-23 09:32:12.518817479 +0000 UTC m=+0.150287514 container died b6e3b7d488511c95b2d9bf0c4d21fc660157c97a82c26bb8999b0ef85c1da9aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 04:32:12 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1d700665031dfef7653117506a702b18f2370cb062e608c803bdba95e104a22a-merged.mount: Deactivated successfully.
Jan 23 04:32:12 np0005593232 podman[264575]: 2026-01-23 09:32:12.575916576 +0000 UTC m=+0.207386611 container remove b6e3b7d488511c95b2d9bf0c4d21fc660157c97a82c26bb8999b0ef85c1da9aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hamilton, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:32:12 np0005593232 systemd[1]: libpod-conmon-b6e3b7d488511c95b2d9bf0c4d21fc660157c97a82c26bb8999b0ef85c1da9aa.scope: Deactivated successfully.
Jan 23 04:32:12 np0005593232 podman[264615]: 2026-01-23 09:32:12.779079546 +0000 UTC m=+0.045559859 container create c962cfd386931673c5a9e947238ac778b2b0ebf9b64224400abd1bb7ffcc1778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 04:32:12 np0005593232 systemd[1]: Started libpod-conmon-c962cfd386931673c5a9e947238ac778b2b0ebf9b64224400abd1bb7ffcc1778.scope.
Jan 23 04:32:12 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:32:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c49c33e3fe3322ff20d8aca1dfbcbbb7d557319a937af2904f6271363904c3a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:32:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c49c33e3fe3322ff20d8aca1dfbcbbb7d557319a937af2904f6271363904c3a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:32:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c49c33e3fe3322ff20d8aca1dfbcbbb7d557319a937af2904f6271363904c3a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:32:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c49c33e3fe3322ff20d8aca1dfbcbbb7d557319a937af2904f6271363904c3a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:32:12 np0005593232 podman[264615]: 2026-01-23 09:32:12.760470666 +0000 UTC m=+0.026950969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:32:12 np0005593232 podman[264615]: 2026-01-23 09:32:12.862586846 +0000 UTC m=+0.129067169 container init c962cfd386931673c5a9e947238ac778b2b0ebf9b64224400abd1bb7ffcc1778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ardinghelli, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:32:12 np0005593232 podman[264615]: 2026-01-23 09:32:12.869843153 +0000 UTC m=+0.136323456 container start c962cfd386931673c5a9e947238ac778b2b0ebf9b64224400abd1bb7ffcc1778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ardinghelli, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 04:32:12 np0005593232 podman[264615]: 2026-01-23 09:32:12.87395156 +0000 UTC m=+0.140431883 container attach c962cfd386931673c5a9e947238ac778b2b0ebf9b64224400abd1bb7ffcc1778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ardinghelli, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:32:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1121: 321 pgs: 321 active+clean; 422 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 80 op/s
Jan 23 04:32:13 np0005593232 nova_compute[250269]: 2026-01-23 09:32:13.281 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:32:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:32:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:13.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]: {
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:    "0": [
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:        {
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:            "devices": [
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:                "/dev/loop3"
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:            ],
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:            "lv_name": "ceph_lv0",
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:            "lv_size": "7511998464",
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:            "name": "ceph_lv0",
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:            "tags": {
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:                "ceph.cluster_name": "ceph",
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:                "ceph.crush_device_class": "",
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:                "ceph.encrypted": "0",
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:                "ceph.osd_id": "0",
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:                "ceph.type": "block",
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:                "ceph.vdo": "0"
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:            },
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:            "type": "block",
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:            "vg_name": "ceph_vg0"
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:        }
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]:    ]
Jan 23 04:32:13 np0005593232 nostalgic_ardinghelli[264631]: }
Jan 23 04:32:13 np0005593232 systemd[1]: libpod-c962cfd386931673c5a9e947238ac778b2b0ebf9b64224400abd1bb7ffcc1778.scope: Deactivated successfully.
Jan 23 04:32:13 np0005593232 podman[264615]: 2026-01-23 09:32:13.747021722 +0000 UTC m=+1.013502035 container died c962cfd386931673c5a9e947238ac778b2b0ebf9b64224400abd1bb7ffcc1778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:32:13 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c49c33e3fe3322ff20d8aca1dfbcbbb7d557319a937af2904f6271363904c3a1-merged.mount: Deactivated successfully.
Jan 23 04:32:13 np0005593232 podman[264615]: 2026-01-23 09:32:13.803000967 +0000 UTC m=+1.069481270 container remove c962cfd386931673c5a9e947238ac778b2b0ebf9b64224400abd1bb7ffcc1778 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 23 04:32:13 np0005593232 systemd[1]: libpod-conmon-c962cfd386931673c5a9e947238ac778b2b0ebf9b64224400abd1bb7ffcc1778.scope: Deactivated successfully.
Jan 23 04:32:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:13.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:14 np0005593232 nova_compute[250269]: 2026-01-23 09:32:14.104 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:14 np0005593232 NetworkManager[49057]: <info>  [1769160734.1064] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/37)
Jan 23 04:32:14 np0005593232 NetworkManager[49057]: <info>  [1769160734.1072] device (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 04:32:14 np0005593232 NetworkManager[49057]: <warn>  [1769160734.1075] device (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 04:32:14 np0005593232 NetworkManager[49057]: <info>  [1769160734.1081] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/38)
Jan 23 04:32:14 np0005593232 NetworkManager[49057]: <info>  [1769160734.1083] device (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 23 04:32:14 np0005593232 NetworkManager[49057]: <warn>  [1769160734.1083] device (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 23 04:32:14 np0005593232 NetworkManager[49057]: <info>  [1769160734.1089] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Jan 23 04:32:14 np0005593232 NetworkManager[49057]: <info>  [1769160734.1094] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Jan 23 04:32:14 np0005593232 NetworkManager[49057]: <info>  [1769160734.1097] device (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 23 04:32:14 np0005593232 NetworkManager[49057]: <info>  [1769160734.1100] device (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 23 04:32:14 np0005593232 nova_compute[250269]: 2026-01-23 09:32:14.281 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:14 np0005593232 ovn_controller[151001]: 2026-01-23T09:32:14Z|00057|binding|INFO|Releasing lport 7bc99e1a-c88e-4c7e-a925-28a3494f0f6e from this chassis (sb_readonly=0)
Jan 23 04:32:14 np0005593232 nova_compute[250269]: 2026-01-23 09:32:14.301 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:14 np0005593232 podman[264796]: 2026-01-23 09:32:14.463665906 +0000 UTC m=+0.062990756 container create 1ad274c3102ca1850e947b9a0c38bfdfffbd9e2af547df25ec0efccf3efa8e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:32:14 np0005593232 systemd[1]: Started libpod-conmon-1ad274c3102ca1850e947b9a0c38bfdfffbd9e2af547df25ec0efccf3efa8e4c.scope.
Jan 23 04:32:14 np0005593232 podman[264796]: 2026-01-23 09:32:14.447387622 +0000 UTC m=+0.046712502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:32:14 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:32:14 np0005593232 podman[264796]: 2026-01-23 09:32:14.552479757 +0000 UTC m=+0.151804627 container init 1ad274c3102ca1850e947b9a0c38bfdfffbd9e2af547df25ec0efccf3efa8e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shaw, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 23 04:32:14 np0005593232 podman[264796]: 2026-01-23 09:32:14.559896158 +0000 UTC m=+0.159221008 container start 1ad274c3102ca1850e947b9a0c38bfdfffbd9e2af547df25ec0efccf3efa8e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shaw, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 23 04:32:14 np0005593232 podman[264796]: 2026-01-23 09:32:14.563956814 +0000 UTC m=+0.163281664 container attach 1ad274c3102ca1850e947b9a0c38bfdfffbd9e2af547df25ec0efccf3efa8e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shaw, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 04:32:14 np0005593232 elated_shaw[264812]: 167 167
Jan 23 04:32:14 np0005593232 systemd[1]: libpod-1ad274c3102ca1850e947b9a0c38bfdfffbd9e2af547df25ec0efccf3efa8e4c.scope: Deactivated successfully.
Jan 23 04:32:14 np0005593232 podman[264796]: 2026-01-23 09:32:14.565649892 +0000 UTC m=+0.164974752 container died 1ad274c3102ca1850e947b9a0c38bfdfffbd9e2af547df25ec0efccf3efa8e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shaw, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:32:14 np0005593232 systemd[1]: var-lib-containers-storage-overlay-730388812e21e1ffee123575816ffb52ea927dc8daedbc383d182817f617bf6c-merged.mount: Deactivated successfully.
Jan 23 04:32:14 np0005593232 podman[264796]: 2026-01-23 09:32:14.607139255 +0000 UTC m=+0.206464095 container remove 1ad274c3102ca1850e947b9a0c38bfdfffbd9e2af547df25ec0efccf3efa8e4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 23 04:32:14 np0005593232 systemd[1]: libpod-conmon-1ad274c3102ca1850e947b9a0c38bfdfffbd9e2af547df25ec0efccf3efa8e4c.scope: Deactivated successfully.
Jan 23 04:32:14 np0005593232 podman[264836]: 2026-01-23 09:32:14.771149819 +0000 UTC m=+0.041160284 container create 6dde4acc717cfeba94a08c9f5729a4fbaee7cb3214cf6f17868a18bf8d092c49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 04:32:14 np0005593232 systemd[1]: Started libpod-conmon-6dde4acc717cfeba94a08c9f5729a4fbaee7cb3214cf6f17868a18bf8d092c49.scope.
Jan 23 04:32:14 np0005593232 podman[264836]: 2026-01-23 09:32:14.752824337 +0000 UTC m=+0.022834822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:32:14 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:32:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de62301d370ec73620e591c73c3b4368ad7cecebaa54b83883d9acfb5b479498/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:32:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de62301d370ec73620e591c73c3b4368ad7cecebaa54b83883d9acfb5b479498/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:32:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de62301d370ec73620e591c73c3b4368ad7cecebaa54b83883d9acfb5b479498/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:32:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de62301d370ec73620e591c73c3b4368ad7cecebaa54b83883d9acfb5b479498/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:32:14 np0005593232 podman[264836]: 2026-01-23 09:32:14.882620456 +0000 UTC m=+0.152630941 container init 6dde4acc717cfeba94a08c9f5729a4fbaee7cb3214cf6f17868a18bf8d092c49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kirch, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:32:14 np0005593232 podman[264836]: 2026-01-23 09:32:14.893544357 +0000 UTC m=+0.163554822 container start 6dde4acc717cfeba94a08c9f5729a4fbaee7cb3214cf6f17868a18bf8d092c49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 04:32:14 np0005593232 podman[264836]: 2026-01-23 09:32:14.896943254 +0000 UTC m=+0.166953829 container attach 6dde4acc717cfeba94a08c9f5729a4fbaee7cb3214cf6f17868a18bf8d092c49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kirch, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:32:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1122: 321 pgs: 321 active+clean; 422 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Jan 23 04:32:15 np0005593232 nova_compute[250269]: 2026-01-23 09:32:15.142 250273 DEBUG nova.compute.manager [req-5e546b65-8768-4db5-9de8-5ce397081b8e req-2b2ac560-4afc-4f6c-880e-2ccdcb1244d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Received event network-changed-ae608b13-3393-41c9-9fc7-3bdad96a3218 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:32:15 np0005593232 nova_compute[250269]: 2026-01-23 09:32:15.143 250273 DEBUG nova.compute.manager [req-5e546b65-8768-4db5-9de8-5ce397081b8e req-2b2ac560-4afc-4f6c-880e-2ccdcb1244d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Refreshing instance network info cache due to event network-changed-ae608b13-3393-41c9-9fc7-3bdad96a3218. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:32:15 np0005593232 nova_compute[250269]: 2026-01-23 09:32:15.143 250273 DEBUG oslo_concurrency.lockutils [req-5e546b65-8768-4db5-9de8-5ce397081b8e req-2b2ac560-4afc-4f6c-880e-2ccdcb1244d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-6b164305-fb4f-4e3a-9090-bb4dfb7ab779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:32:15 np0005593232 nova_compute[250269]: 2026-01-23 09:32:15.143 250273 DEBUG oslo_concurrency.lockutils [req-5e546b65-8768-4db5-9de8-5ce397081b8e req-2b2ac560-4afc-4f6c-880e-2ccdcb1244d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-6b164305-fb4f-4e3a-9090-bb4dfb7ab779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:32:15 np0005593232 nova_compute[250269]: 2026-01-23 09:32:15.143 250273 DEBUG nova.network.neutron [req-5e546b65-8768-4db5-9de8-5ce397081b8e req-2b2ac560-4afc-4f6c-880e-2ccdcb1244d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Refreshing network info cache for port ae608b13-3393-41c9-9fc7-3bdad96a3218 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:32:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:15.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:15 np0005593232 peaceful_kirch[264855]: {
Jan 23 04:32:15 np0005593232 peaceful_kirch[264855]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:32:15 np0005593232 peaceful_kirch[264855]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:32:15 np0005593232 peaceful_kirch[264855]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:32:15 np0005593232 peaceful_kirch[264855]:        "osd_id": 0,
Jan 23 04:32:15 np0005593232 peaceful_kirch[264855]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:32:15 np0005593232 peaceful_kirch[264855]:        "type": "bluestore"
Jan 23 04:32:15 np0005593232 peaceful_kirch[264855]:    }
Jan 23 04:32:15 np0005593232 peaceful_kirch[264855]: }
Jan 23 04:32:15 np0005593232 systemd[1]: libpod-6dde4acc717cfeba94a08c9f5729a4fbaee7cb3214cf6f17868a18bf8d092c49.scope: Deactivated successfully.
Jan 23 04:32:15 np0005593232 podman[264877]: 2026-01-23 09:32:15.802504602 +0000 UTC m=+0.023056978 container died 6dde4acc717cfeba94a08c9f5729a4fbaee7cb3214cf6f17868a18bf8d092c49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kirch, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 04:32:15 np0005593232 systemd[1]: var-lib-containers-storage-overlay-de62301d370ec73620e591c73c3b4368ad7cecebaa54b83883d9acfb5b479498-merged.mount: Deactivated successfully.
Jan 23 04:32:15 np0005593232 podman[264877]: 2026-01-23 09:32:15.856892302 +0000 UTC m=+0.077444658 container remove 6dde4acc717cfeba94a08c9f5729a4fbaee7cb3214cf6f17868a18bf8d092c49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 04:32:15 np0005593232 systemd[1]: libpod-conmon-6dde4acc717cfeba94a08c9f5729a4fbaee7cb3214cf6f17868a18bf8d092c49.scope: Deactivated successfully.
Jan 23 04:32:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:32:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:15.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:32:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:32:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:32:15 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 9f7a4a08-348c-4afb-92c0-8d6fb3e6a421 does not exist
Jan 23 04:32:15 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev e48a988f-3e06-42a6-9572-dbf33d00a030 does not exist
Jan 23 04:32:15 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 4de577cd-1f11-4bd9-948e-b6acfde1ad42 does not exist
Jan 23 04:32:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:32:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:32:16 np0005593232 nova_compute[250269]: 2026-01-23 09:32:16.492 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1123: 321 pgs: 321 active+clean; 422 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 175 op/s
Jan 23 04:32:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:17.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:17.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:18 np0005593232 nova_compute[250269]: 2026-01-23 09:32:18.042 250273 DEBUG nova.network.neutron [req-5e546b65-8768-4db5-9de8-5ce397081b8e req-2b2ac560-4afc-4f6c-880e-2ccdcb1244d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Updated VIF entry in instance network info cache for port ae608b13-3393-41c9-9fc7-3bdad96a3218. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:32:18 np0005593232 nova_compute[250269]: 2026-01-23 09:32:18.044 250273 DEBUG nova.network.neutron [req-5e546b65-8768-4db5-9de8-5ce397081b8e req-2b2ac560-4afc-4f6c-880e-2ccdcb1244d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Updating instance_info_cache with network_info: [{"id": "ae608b13-3393-41c9-9fc7-3bdad96a3218", "address": "fa:16:3e:f0:b4:9a", "network": {"id": "e1f93ca8-d3d7-4404-b1f0-385022a03154", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1466517221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5071dae1c732441291c3cea4201538d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae608b13-33", "ovs_interfaceid": "ae608b13-3393-41c9-9fc7-3bdad96a3218", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:32:18 np0005593232 nova_compute[250269]: 2026-01-23 09:32:18.151 250273 DEBUG oslo_concurrency.lockutils [req-5e546b65-8768-4db5-9de8-5ce397081b8e req-2b2ac560-4afc-4f6c-880e-2ccdcb1244d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-6b164305-fb4f-4e3a-9090-bb4dfb7ab779" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:32:18 np0005593232 nova_compute[250269]: 2026-01-23 09:32:18.284 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:32:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1124: 321 pgs: 321 active+clean; 422 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Jan 23 04:32:19 np0005593232 podman[264946]: 2026-01-23 09:32:19.43816482 +0000 UTC m=+0.100068492 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:32:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:19.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:32:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:19.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:32:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1125: 321 pgs: 321 active+clean; 422 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 176 op/s
Jan 23 04:32:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:21.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:21 np0005593232 nova_compute[250269]: 2026-01-23 09:32:21.496 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:21.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1126: 321 pgs: 321 active+clean; 422 MiB data, 463 MiB used, 21 GiB / 21 GiB avail; 4.9 MiB/s rd, 1.2 MiB/s wr, 196 op/s
Jan 23 04:32:23 np0005593232 nova_compute[250269]: 2026-01-23 09:32:23.286 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:32:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:23.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:23.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1127: 321 pgs: 321 active+clean; 461 MiB data, 483 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 1.6 MiB/s wr, 194 op/s
Jan 23 04:32:25 np0005593232 nova_compute[250269]: 2026-01-23 09:32:25.344 250273 DEBUG nova.virt.libvirt.driver [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Creating tmpfile /var/lib/nova/instances/tmpc8knh3fr to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Jan 23 04:32:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:32:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:25.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:32:25 np0005593232 nova_compute[250269]: 2026-01-23 09:32:25.535 250273 DEBUG nova.compute.manager [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpc8knh3fr',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Jan 23 04:32:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:25.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:26 np0005593232 nova_compute[250269]: 2026-01-23 09:32:26.498 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:26 np0005593232 ovn_controller[151001]: 2026-01-23T09:32:26Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f0:b4:9a 10.100.0.6
Jan 23 04:32:26 np0005593232 ovn_controller[151001]: 2026-01-23T09:32:26Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f0:b4:9a 10.100.0.6
Jan 23 04:32:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1128: 321 pgs: 321 active+clean; 504 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 4.8 MiB/s wr, 224 op/s
Jan 23 04:32:27 np0005593232 nova_compute[250269]: 2026-01-23 09:32:27.244 250273 DEBUG nova.compute.manager [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpc8knh3fr',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='54a1ad4e-6fc9-42dc-aa4c-99d3f1297520',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Jan 23 04:32:27 np0005593232 nova_compute[250269]: 2026-01-23 09:32:27.283 250273 DEBUG oslo_concurrency.lockutils [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Acquiring lock "refresh_cache-54a1ad4e-6fc9-42dc-aa4c-99d3f1297520" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:32:27 np0005593232 nova_compute[250269]: 2026-01-23 09:32:27.284 250273 DEBUG oslo_concurrency.lockutils [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Acquired lock "refresh_cache-54a1ad4e-6fc9-42dc-aa4c-99d3f1297520" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:32:27 np0005593232 nova_compute[250269]: 2026-01-23 09:32:27.284 250273 DEBUG nova.network.neutron [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:32:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:27.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:27.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:28 np0005593232 nova_compute[250269]: 2026-01-23 09:32:28.289 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:32:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1129: 321 pgs: 321 active+clean; 504 MiB data, 520 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.8 MiB/s wr, 177 op/s
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.130 250273 DEBUG nova.network.neutron [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Updating instance_info_cache with network_info: [{"id": "edc7d28f-eaba-44b8-9916-f2089618ca70", "address": "fa:16:3e:c7:78:59", "network": {"id": "385e7a4d-f87e-44c5-9fc0-5a322eecd4b4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1143816535-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e53b3339e4e4db30b7a9d330bc380", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedc7d28f-ea", "ovs_interfaceid": "edc7d28f-eaba-44b8-9916-f2089618ca70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.162 250273 DEBUG oslo_concurrency.lockutils [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Releasing lock "refresh_cache-54a1ad4e-6fc9-42dc-aa4c-99d3f1297520" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.164 250273 DEBUG os_brick.utils [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.166 250273 INFO oslo.privsep.daemon [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmppfs__paa/privsep.sock']#033[00m
Jan 23 04:32:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:29.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.847 250273 INFO oslo.privsep.daemon [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.716 265031 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.720 265031 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.722 265031 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.722 265031 INFO oslo.privsep.daemon [-] privsep daemon running as pid 265031#033[00m
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.850 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[21d17b8e-8b75-4de8-9ac4-fe5eccd6b1dd]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:29.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.939 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.954 265031 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.955 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[b809391a-4ed3-41c3-98ba-9a665e461072]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.956 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.963 265031 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.964 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[bc813d19-7726-4a77-99e6-fccb89cd688b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:c6473626b456', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.966 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.973 265031 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.974 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[fea88994-df59-4d6d-b2de-c4c0c2ca32df]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.975 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[da5ffaea-854a-498c-b012-52a9c7550195]: (4, 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.976 250273 DEBUG oslo_concurrency.processutils [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.995 250273 DEBUG oslo_concurrency.processutils [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] CMD "nvme version" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.997 250273 DEBUG os_brick.initiator.connectors.lightos [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.998 250273 DEBUG os_brick.initiator.connectors.lightos [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.998 250273 DEBUG os_brick.initiator.connectors.lightos [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 23 04:32:29 np0005593232 nova_compute[250269]: 2026-01-23 09:32:29.999 250273 DEBUG os_brick.utils [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] <== get_connector_properties: return (833ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:c6473626b456', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 23 04:32:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1130: 321 pgs: 321 active+clean; 529 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 6.0 MiB/s wr, 250 op/s
Jan 23 04:32:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:32:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:31.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.500 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.715 250273 DEBUG nova.virt.libvirt.driver [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpc8knh3fr',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='54a1ad4e-6fc9-42dc-aa4c-99d3f1297520',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={2c9770c1-d351-43fa-b18d-aaf9291801fe='0f3f1f70-9837-4df7-bac9-a17bfd4c3a5f'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.716 250273 DEBUG nova.virt.libvirt.driver [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Creating instance directory: /var/lib/nova/instances/54a1ad4e-6fc9-42dc-aa4c-99d3f1297520 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.716 250273 DEBUG nova.virt.libvirt.driver [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Ensure instance console log exists: /var/lib/nova/instances/54a1ad4e-6fc9-42dc-aa4c-99d3f1297520/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.716 250273 DEBUG nova.virt.libvirt.driver [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901#033[00m
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.717 250273 DEBUG oslo_concurrency.lockutils [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.717 250273 DEBUG oslo_concurrency.lockutils [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.718 250273 DEBUG oslo_concurrency.lockutils [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.726 250273 DEBUG nova.virt.libvirt.driver [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.728 250273 DEBUG nova.virt.libvirt.vif [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:32:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1815710456',display_name='tempest-LiveMigrationTest-server-1815710456',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1815710456',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:32:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-1.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c56e53b3339e4e4db30b7a9d330bc380',ramdisk_id='',reservation_id='r-mz7qsyn0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1903931568',owner_user_name='tempest-LiveMigrationTest-1903931568-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:32:19Z,user_data=None,user_id='a43b680a6019491aafe42c0a10e648df',uuid=54a1ad4e-6fc9-42dc-aa4c-99d3f1297520,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "edc7d28f-eaba-44b8-9916-f2089618ca70", "address": "fa:16:3e:c7:78:59", "network": {"id": "385e7a4d-f87e-44c5-9fc0-5a322eecd4b4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1143816535-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e53b3339e4e4db30b7a9d330bc380", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapedc7d28f-ea", "ovs_interfaceid": "edc7d28f-eaba-44b8-9916-f2089618ca70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.728 250273 DEBUG nova.network.os_vif_util [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Converting VIF {"id": "edc7d28f-eaba-44b8-9916-f2089618ca70", "address": "fa:16:3e:c7:78:59", "network": {"id": "385e7a4d-f87e-44c5-9fc0-5a322eecd4b4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1143816535-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e53b3339e4e4db30b7a9d330bc380", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapedc7d28f-ea", "ovs_interfaceid": "edc7d28f-eaba-44b8-9916-f2089618ca70", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.729 250273 DEBUG nova.network.os_vif_util [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:78:59,bridge_name='br-int',has_traffic_filtering=True,id=edc7d28f-eaba-44b8-9916-f2089618ca70,network=Network(385e7a4d-f87e-44c5-9fc0-5a322eecd4b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedc7d28f-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.730 250273 DEBUG os_vif [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:78:59,bridge_name='br-int',has_traffic_filtering=True,id=edc7d28f-eaba-44b8-9916-f2089618ca70,network=Network(385e7a4d-f87e-44c5-9fc0-5a322eecd4b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedc7d28f-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.730 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.731 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.731 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.737 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.737 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapedc7d28f-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.737 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapedc7d28f-ea, col_values=(('external_ids', {'iface-id': 'edc7d28f-eaba-44b8-9916-f2089618ca70', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:78:59', 'vm-uuid': '54a1ad4e-6fc9-42dc-aa4c-99d3f1297520'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.739 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:31 np0005593232 NetworkManager[49057]: <info>  [1769160751.7405] manager: (tapedc7d28f-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.741 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.746 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.747 250273 INFO os_vif [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:78:59,bridge_name='br-int',has_traffic_filtering=True,id=edc7d28f-eaba-44b8-9916-f2089618ca70,network=Network(385e7a4d-f87e-44c5-9fc0-5a322eecd4b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedc7d28f-ea')#033[00m
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.751 250273 DEBUG nova.virt.libvirt.driver [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Jan 23 04:32:31 np0005593232 nova_compute[250269]: 2026-01-23 09:32:31.752 250273 DEBUG nova.compute.manager [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpc8knh3fr',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='54a1ad4e-6fc9-42dc-aa4c-99d3f1297520',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={2c9770c1-d351-43fa-b18d-aaf9291801fe='0f3f1f70-9837-4df7-bac9-a17bfd4c3a5f'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Jan 23 04:32:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:31.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1131: 321 pgs: 321 active+clean; 544 MiB data, 547 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 7.1 MiB/s wr, 314 op/s
Jan 23 04:32:33 np0005593232 nova_compute[250269]: 2026-01-23 09:32:33.291 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:32:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:33.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:33 np0005593232 nova_compute[250269]: 2026-01-23 09:32:33.696 250273 DEBUG oslo_concurrency.lockutils [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Acquiring lock "6b164305-fb4f-4e3a-9090-bb4dfb7ab779" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:32:33 np0005593232 nova_compute[250269]: 2026-01-23 09:32:33.696 250273 DEBUG oslo_concurrency.lockutils [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "6b164305-fb4f-4e3a-9090-bb4dfb7ab779" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:32:33 np0005593232 nova_compute[250269]: 2026-01-23 09:32:33.697 250273 DEBUG oslo_concurrency.lockutils [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Acquiring lock "6b164305-fb4f-4e3a-9090-bb4dfb7ab779-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:32:33 np0005593232 nova_compute[250269]: 2026-01-23 09:32:33.697 250273 DEBUG oslo_concurrency.lockutils [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "6b164305-fb4f-4e3a-9090-bb4dfb7ab779-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:32:33 np0005593232 nova_compute[250269]: 2026-01-23 09:32:33.697 250273 DEBUG oslo_concurrency.lockutils [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "6b164305-fb4f-4e3a-9090-bb4dfb7ab779-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:32:33 np0005593232 nova_compute[250269]: 2026-01-23 09:32:33.699 250273 INFO nova.compute.manager [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Terminating instance#033[00m
Jan 23 04:32:33 np0005593232 nova_compute[250269]: 2026-01-23 09:32:33.700 250273 DEBUG nova.compute.manager [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 04:32:33 np0005593232 kernel: tapae608b13-33 (unregistering): left promiscuous mode
Jan 23 04:32:33 np0005593232 NetworkManager[49057]: <info>  [1769160753.7610] device (tapae608b13-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:32:33 np0005593232 nova_compute[250269]: 2026-01-23 09:32:33.761 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:33 np0005593232 ovn_controller[151001]: 2026-01-23T09:32:33Z|00058|binding|INFO|Releasing lport ae608b13-3393-41c9-9fc7-3bdad96a3218 from this chassis (sb_readonly=0)
Jan 23 04:32:33 np0005593232 ovn_controller[151001]: 2026-01-23T09:32:33Z|00059|binding|INFO|Setting lport ae608b13-3393-41c9-9fc7-3bdad96a3218 down in Southbound
Jan 23 04:32:33 np0005593232 nova_compute[250269]: 2026-01-23 09:32:33.771 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:33 np0005593232 ovn_controller[151001]: 2026-01-23T09:32:33Z|00060|binding|INFO|Removing iface tapae608b13-33 ovn-installed in OVS
Jan 23 04:32:33 np0005593232 nova_compute[250269]: 2026-01-23 09:32:33.773 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:33 np0005593232 nova_compute[250269]: 2026-01-23 09:32:33.788 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:33 np0005593232 podman[265052]: 2026-01-23 09:32:33.866348337 +0000 UTC m=+0.069213324 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 23 04:32:33 np0005593232 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Jan 23 04:32:33 np0005593232 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000d.scope: Consumed 17.267s CPU time.
Jan 23 04:32:33 np0005593232 systemd-machined[215836]: Machine qemu-5-instance-0000000d terminated.
Jan 23 04:32:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:33.882 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:b4:9a 10.100.0.6'], port_security=['fa:16:3e:f0:b4:9a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6b164305-fb4f-4e3a-9090-bb4dfb7ab779', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e1f93ca8-d3d7-4404-b1f0-385022a03154', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5071dae1c732441291c3cea4201538d1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '485475b6-fa5d-49cf-9e7a-d68656021da2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.221'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd9ebd18-6b3b-45ea-8383-db75ba482bdb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=ae608b13-3393-41c9-9fc7-3bdad96a3218) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:32:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:33.884 161902 INFO neutron.agent.ovn.metadata.agent [-] Port ae608b13-3393-41c9-9fc7-3bdad96a3218 in datapath e1f93ca8-d3d7-4404-b1f0-385022a03154 unbound from our chassis#033[00m
Jan 23 04:32:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:33.885 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e1f93ca8-d3d7-4404-b1f0-385022a03154, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 04:32:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:33.887 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b0139474-c985-495f-9171-3ba0cc6c3915]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:33.887 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154 namespace which is not needed anymore#033[00m
Jan 23 04:32:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:33.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:33 np0005593232 nova_compute[250269]: 2026-01-23 09:32:33.945 250273 INFO nova.virt.libvirt.driver [-] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Instance destroyed successfully.#033[00m
Jan 23 04:32:33 np0005593232 nova_compute[250269]: 2026-01-23 09:32:33.946 250273 DEBUG nova.objects.instance [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lazy-loading 'resources' on Instance uuid 6b164305-fb4f-4e3a-9090-bb4dfb7ab779 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:32:33 np0005593232 nova_compute[250269]: 2026-01-23 09:32:33.962 250273 DEBUG nova.virt.libvirt.vif [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:31:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1083116106',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1083116106',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(15),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1083116106',id=13,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=15,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG55OzgmhMbHj/AdekeVIJzfHbYBd/FfQqGtG06NkSPUh44muiV0W4+jTbr0/+5N+bSuPO5vRZ2E3ny+4RPwCoV8mIh43dtiZy7+WnjnptN/wd9TpQ5NxnJIVtbxdhFRlA==',key_name='tempest-keypair-1596118567',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:32:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5071dae1c732441291c3cea4201538d1',ramdisk_id='',reservation_id='r-aby54ryw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-2097391700',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-2097391700-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:32:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9914fa5b09794fda94ca3cb12e25549f',uuid=6b164305-fb4f-4e3a-9090-bb4dfb7ab779,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ae608b13-3393-41c9-9fc7-3bdad96a3218", "address": "fa:16:3e:f0:b4:9a", "network": {"id": "e1f93ca8-d3d7-4404-b1f0-385022a03154", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1466517221-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5071dae1c732441291c3cea4201538d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae608b13-33", "ovs_interfaceid": "ae608b13-3393-41c9-9fc7-3bdad96a3218", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:32:33 np0005593232 nova_compute[250269]: 2026-01-23 09:32:33.963 250273 DEBUG nova.network.os_vif_util [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Converting VIF {"id": "ae608b13-3393-41c9-9fc7-3bdad96a3218", "address": "fa:16:3e:f0:b4:9a", "network": {"id": "e1f93ca8-d3d7-4404-b1f0-385022a03154", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1466517221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5071dae1c732441291c3cea4201538d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae608b13-33", "ovs_interfaceid": "ae608b13-3393-41c9-9fc7-3bdad96a3218", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:32:33 np0005593232 nova_compute[250269]: 2026-01-23 09:32:33.964 250273 DEBUG nova.network.os_vif_util [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f0:b4:9a,bridge_name='br-int',has_traffic_filtering=True,id=ae608b13-3393-41c9-9fc7-3bdad96a3218,network=Network(e1f93ca8-d3d7-4404-b1f0-385022a03154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae608b13-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:32:33 np0005593232 nova_compute[250269]: 2026-01-23 09:32:33.964 250273 DEBUG os_vif [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:b4:9a,bridge_name='br-int',has_traffic_filtering=True,id=ae608b13-3393-41c9-9fc7-3bdad96a3218,network=Network(e1f93ca8-d3d7-4404-b1f0-385022a03154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae608b13-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:32:33 np0005593232 nova_compute[250269]: 2026-01-23 09:32:33.966 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:33 np0005593232 nova_compute[250269]: 2026-01-23 09:32:33.967 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapae608b13-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:32:34 np0005593232 nova_compute[250269]: 2026-01-23 09:32:34.027 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:34 np0005593232 nova_compute[250269]: 2026-01-23 09:32:34.029 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:32:34 np0005593232 nova_compute[250269]: 2026-01-23 09:32:34.033 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:34 np0005593232 nova_compute[250269]: 2026-01-23 09:32:34.036 250273 INFO os_vif [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f0:b4:9a,bridge_name='br-int',has_traffic_filtering=True,id=ae608b13-3393-41c9-9fc7-3bdad96a3218,network=Network(e1f93ca8-d3d7-4404-b1f0-385022a03154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae608b13-33')#033[00m
Jan 23 04:32:34 np0005593232 neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154[264145]: [NOTICE]   (264150) : haproxy version is 2.8.14-c23fe91
Jan 23 04:32:34 np0005593232 neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154[264145]: [NOTICE]   (264150) : path to executable is /usr/sbin/haproxy
Jan 23 04:32:34 np0005593232 neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154[264145]: [WARNING]  (264150) : Exiting Master process...
Jan 23 04:32:34 np0005593232 neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154[264145]: [ALERT]    (264150) : Current worker (264153) exited with code 143 (Terminated)
Jan 23 04:32:34 np0005593232 neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154[264145]: [WARNING]  (264150) : All workers exited. Exiting... (0)
Jan 23 04:32:34 np0005593232 systemd[1]: libpod-1377c239da614ab04e96300043ed0870b25fccd60c0d4fed1eb055338c9aff24.scope: Deactivated successfully.
Jan 23 04:32:34 np0005593232 podman[265107]: 2026-01-23 09:32:34.088147107 +0000 UTC m=+0.049085780 container died 1377c239da614ab04e96300043ed0870b25fccd60c0d4fed1eb055338c9aff24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 23 04:32:34 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1377c239da614ab04e96300043ed0870b25fccd60c0d4fed1eb055338c9aff24-userdata-shm.mount: Deactivated successfully.
Jan 23 04:32:34 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e443e1d4f65913647b73004d965aa9535843c518294fa3bd9e3978c15b00e73c-merged.mount: Deactivated successfully.
Jan 23 04:32:34 np0005593232 podman[265107]: 2026-01-23 09:32:34.138323637 +0000 UTC m=+0.099262310 container cleanup 1377c239da614ab04e96300043ed0870b25fccd60c0d4fed1eb055338c9aff24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:32:34 np0005593232 systemd[1]: libpod-conmon-1377c239da614ab04e96300043ed0870b25fccd60c0d4fed1eb055338c9aff24.scope: Deactivated successfully.
Jan 23 04:32:34 np0005593232 podman[265156]: 2026-01-23 09:32:34.205141631 +0000 UTC m=+0.045169218 container remove 1377c239da614ab04e96300043ed0870b25fccd60c0d4fed1eb055338c9aff24 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:32:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:34.211 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3b53bed6-56ba-4919-a51e-d3ce7c692d37]: (4, ('Fri Jan 23 09:32:33 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154 (1377c239da614ab04e96300043ed0870b25fccd60c0d4fed1eb055338c9aff24)\n1377c239da614ab04e96300043ed0870b25fccd60c0d4fed1eb055338c9aff24\nFri Jan 23 09:32:34 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154 (1377c239da614ab04e96300043ed0870b25fccd60c0d4fed1eb055338c9aff24)\n1377c239da614ab04e96300043ed0870b25fccd60c0d4fed1eb055338c9aff24\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:34.213 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c2cfb299-5bf8-441a-8e9b-929393ea81a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:34.215 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape1f93ca8-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:32:34 np0005593232 nova_compute[250269]: 2026-01-23 09:32:34.217 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:34 np0005593232 kernel: tape1f93ca8-d0: left promiscuous mode
Jan 23 04:32:34 np0005593232 nova_compute[250269]: 2026-01-23 09:32:34.237 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:34 np0005593232 nova_compute[250269]: 2026-01-23 09:32:34.237 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:34.240 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e58ffa-3d09-4732-898d-12bd64f964f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:34.259 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d4660f-443d-4869-b788-39a417e02372]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:34.262 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1fbe36af-4a21-460c-85cc-10821c8d2701]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:34.293 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[381c8d38-59ea-4d97-879d-7dc6190f0712]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 464160, 'reachable_time': 20330, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265175, 'error': None, 'target': 'ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:34 np0005593232 systemd[1]: run-netns-ovnmeta\x2de1f93ca8\x2dd3d7\x2d4404\x2db1f0\x2d385022a03154.mount: Deactivated successfully.
Jan 23 04:32:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:34.300 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 04:32:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:34.300 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[4262f65d-e520-4a48-9f01-6442849fbe3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:34 np0005593232 nova_compute[250269]: 2026-01-23 09:32:34.458 250273 INFO nova.virt.libvirt.driver [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Deleting instance files /var/lib/nova/instances/6b164305-fb4f-4e3a-9090-bb4dfb7ab779_del#033[00m
Jan 23 04:32:34 np0005593232 nova_compute[250269]: 2026-01-23 09:32:34.459 250273 INFO nova.virt.libvirt.driver [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Deletion of /var/lib/nova/instances/6b164305-fb4f-4e3a-9090-bb4dfb7ab779_del complete#033[00m
Jan 23 04:32:34 np0005593232 nova_compute[250269]: 2026-01-23 09:32:34.760 250273 INFO nova.compute.manager [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Took 1.06 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 04:32:34 np0005593232 nova_compute[250269]: 2026-01-23 09:32:34.760 250273 DEBUG oslo.service.loopingcall [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 04:32:34 np0005593232 nova_compute[250269]: 2026-01-23 09:32:34.761 250273 DEBUG nova.compute.manager [-] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 04:32:34 np0005593232 nova_compute[250269]: 2026-01-23 09:32:34.761 250273 DEBUG nova.network.neutron [-] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 04:32:34 np0005593232 nova_compute[250269]: 2026-01-23 09:32:34.791 250273 DEBUG nova.compute.manager [req-e84b83c5-d8e0-440f-891a-5e279def56bc req-84406004-b02c-4feb-a0ca-3a8e144cf139 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Received event network-vif-unplugged-ae608b13-3393-41c9-9fc7-3bdad96a3218 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:32:34 np0005593232 nova_compute[250269]: 2026-01-23 09:32:34.792 250273 DEBUG oslo_concurrency.lockutils [req-e84b83c5-d8e0-440f-891a-5e279def56bc req-84406004-b02c-4feb-a0ca-3a8e144cf139 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "6b164305-fb4f-4e3a-9090-bb4dfb7ab779-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:32:34 np0005593232 nova_compute[250269]: 2026-01-23 09:32:34.792 250273 DEBUG oslo_concurrency.lockutils [req-e84b83c5-d8e0-440f-891a-5e279def56bc req-84406004-b02c-4feb-a0ca-3a8e144cf139 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6b164305-fb4f-4e3a-9090-bb4dfb7ab779-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:32:34 np0005593232 nova_compute[250269]: 2026-01-23 09:32:34.792 250273 DEBUG oslo_concurrency.lockutils [req-e84b83c5-d8e0-440f-891a-5e279def56bc req-84406004-b02c-4feb-a0ca-3a8e144cf139 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6b164305-fb4f-4e3a-9090-bb4dfb7ab779-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:32:34 np0005593232 nova_compute[250269]: 2026-01-23 09:32:34.792 250273 DEBUG nova.compute.manager [req-e84b83c5-d8e0-440f-891a-5e279def56bc req-84406004-b02c-4feb-a0ca-3a8e144cf139 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] No waiting events found dispatching network-vif-unplugged-ae608b13-3393-41c9-9fc7-3bdad96a3218 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:32:34 np0005593232 nova_compute[250269]: 2026-01-23 09:32:34.793 250273 DEBUG nova.compute.manager [req-e84b83c5-d8e0-440f-891a-5e279def56bc req-84406004-b02c-4feb-a0ca-3a8e144cf139 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Received event network-vif-unplugged-ae608b13-3393-41c9-9fc7-3bdad96a3218 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 04:32:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1132: 321 pgs: 321 active+clean; 524 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 7.9 MiB/s wr, 312 op/s
Jan 23 04:32:35 np0005593232 nova_compute[250269]: 2026-01-23 09:32:35.383 250273 DEBUG nova.network.neutron [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Port edc7d28f-eaba-44b8-9916-f2089618ca70 updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Jan 23 04:32:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:35.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:35 np0005593232 nova_compute[250269]: 2026-01-23 09:32:35.649 250273 DEBUG nova.compute.manager [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpc8knh3fr',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='54a1ad4e-6fc9-42dc-aa4c-99d3f1297520',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={2c9770c1-d351-43fa-b18d-aaf9291801fe='0f3f1f70-9837-4df7-bac9-a17bfd4c3a5f'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Jan 23 04:32:35 np0005593232 systemd[1]: Starting libvirt proxy daemon...
Jan 23 04:32:35 np0005593232 systemd[1]: Started libvirt proxy daemon.
Jan 23 04:32:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:35.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Jan 23 04:32:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Jan 23 04:32:36 np0005593232 kernel: tapedc7d28f-ea: entered promiscuous mode
Jan 23 04:32:36 np0005593232 NetworkManager[49057]: <info>  [1769160756.0491] manager: (tapedc7d28f-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Jan 23 04:32:36 np0005593232 ovn_controller[151001]: 2026-01-23T09:32:36Z|00061|binding|INFO|Claiming lport edc7d28f-eaba-44b8-9916-f2089618ca70 for this additional chassis.
Jan 23 04:32:36 np0005593232 ovn_controller[151001]: 2026-01-23T09:32:36Z|00062|binding|INFO|edc7d28f-eaba-44b8-9916-f2089618ca70: Claiming fa:16:3e:c7:78:59 10.100.0.14
Jan 23 04:32:36 np0005593232 systemd-udevd[265059]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:32:36 np0005593232 nova_compute[250269]: 2026-01-23 09:32:36.049 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:36 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Jan 23 04:32:36 np0005593232 NetworkManager[49057]: <info>  [1769160756.0673] device (tapedc7d28f-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:32:36 np0005593232 NetworkManager[49057]: <info>  [1769160756.0681] device (tapedc7d28f-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:32:36 np0005593232 ovn_controller[151001]: 2026-01-23T09:32:36Z|00063|binding|INFO|Setting lport edc7d28f-eaba-44b8-9916-f2089618ca70 ovn-installed in OVS
Jan 23 04:32:36 np0005593232 nova_compute[250269]: 2026-01-23 09:32:36.077 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:36 np0005593232 nova_compute[250269]: 2026-01-23 09:32:36.081 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:36 np0005593232 systemd-machined[215836]: New machine qemu-6-instance-0000000f.
Jan 23 04:32:36 np0005593232 systemd[1]: Started Virtual Machine qemu-6-instance-0000000f.
Jan 23 04:32:36 np0005593232 nova_compute[250269]: 2026-01-23 09:32:36.577 250273 DEBUG nova.network.neutron [-] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:32:36 np0005593232 nova_compute[250269]: 2026-01-23 09:32:36.603 250273 INFO nova.compute.manager [-] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Took 1.84 seconds to deallocate network for instance.#033[00m
Jan 23 04:32:36 np0005593232 nova_compute[250269]: 2026-01-23 09:32:36.771 250273 DEBUG nova.compute.manager [req-02d57a04-efdb-48ae-856b-e62f7a2abd94 req-0735e976-965f-453b-a52b-8d49258fa01e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Received event network-vif-deleted-ae608b13-3393-41c9-9fc7-3bdad96a3218 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:32:36 np0005593232 nova_compute[250269]: 2026-01-23 09:32:36.823 250273 DEBUG oslo_concurrency.lockutils [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:32:36 np0005593232 nova_compute[250269]: 2026-01-23 09:32:36.824 250273 DEBUG oslo_concurrency.lockutils [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:32:36 np0005593232 nova_compute[250269]: 2026-01-23 09:32:36.919 250273 DEBUG nova.compute.manager [req-7af7422e-f3a6-4e1a-ba6e-2dd7cb59cf52 req-491a80cd-8056-42f7-b334-228fa1df255d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Received event network-vif-plugged-ae608b13-3393-41c9-9fc7-3bdad96a3218 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:32:36 np0005593232 nova_compute[250269]: 2026-01-23 09:32:36.920 250273 DEBUG oslo_concurrency.lockutils [req-7af7422e-f3a6-4e1a-ba6e-2dd7cb59cf52 req-491a80cd-8056-42f7-b334-228fa1df255d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "6b164305-fb4f-4e3a-9090-bb4dfb7ab779-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:32:36 np0005593232 nova_compute[250269]: 2026-01-23 09:32:36.920 250273 DEBUG oslo_concurrency.lockutils [req-7af7422e-f3a6-4e1a-ba6e-2dd7cb59cf52 req-491a80cd-8056-42f7-b334-228fa1df255d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6b164305-fb4f-4e3a-9090-bb4dfb7ab779-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:32:36 np0005593232 nova_compute[250269]: 2026-01-23 09:32:36.920 250273 DEBUG oslo_concurrency.lockutils [req-7af7422e-f3a6-4e1a-ba6e-2dd7cb59cf52 req-491a80cd-8056-42f7-b334-228fa1df255d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6b164305-fb4f-4e3a-9090-bb4dfb7ab779-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:32:36 np0005593232 nova_compute[250269]: 2026-01-23 09:32:36.920 250273 DEBUG nova.compute.manager [req-7af7422e-f3a6-4e1a-ba6e-2dd7cb59cf52 req-491a80cd-8056-42f7-b334-228fa1df255d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] No waiting events found dispatching network-vif-plugged-ae608b13-3393-41c9-9fc7-3bdad96a3218 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:32:36 np0005593232 nova_compute[250269]: 2026-01-23 09:32:36.920 250273 WARNING nova.compute.manager [req-7af7422e-f3a6-4e1a-ba6e-2dd7cb59cf52 req-491a80cd-8056-42f7-b334-228fa1df255d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Received unexpected event network-vif-plugged-ae608b13-3393-41c9-9fc7-3bdad96a3218 for instance with vm_state deleted and task_state None.#033[00m
Jan 23 04:32:36 np0005593232 nova_compute[250269]: 2026-01-23 09:32:36.944 250273 DEBUG oslo_concurrency.processutils [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:32:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1134: 321 pgs: 321 active+clean; 488 MiB data, 518 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.1 MiB/s wr, 260 op/s
Jan 23 04:32:37 np0005593232 nova_compute[250269]: 2026-01-23 09:32:37.044 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160757.044253, 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:32:37 np0005593232 nova_compute[250269]: 2026-01-23 09:32:37.045 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] VM Started (Lifecycle Event)#033[00m
Jan 23 04:32:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:32:37
Jan 23 04:32:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:32:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:32:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'vms', 'volumes', '.mgr']
Jan 23 04:32:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:32:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:32:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:32:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:32:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:32:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:32:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:32:37 np0005593232 nova_compute[250269]: 2026-01-23 09:32:37.285 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:32:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:32:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1674098212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:32:37 np0005593232 nova_compute[250269]: 2026-01-23 09:32:37.482 250273 DEBUG oslo_concurrency.processutils [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:32:37 np0005593232 nova_compute[250269]: 2026-01-23 09:32:37.488 250273 DEBUG nova.compute.provider_tree [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:32:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:32:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:37.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:32:37 np0005593232 nova_compute[250269]: 2026-01-23 09:32:37.529 250273 DEBUG nova.scheduler.client.report [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:32:37 np0005593232 nova_compute[250269]: 2026-01-23 09:32:37.583 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160757.5825758, 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:32:37 np0005593232 nova_compute[250269]: 2026-01-23 09:32:37.583 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:32:37 np0005593232 nova_compute[250269]: 2026-01-23 09:32:37.720 250273 DEBUG oslo_concurrency.lockutils [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.896s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:32:37 np0005593232 nova_compute[250269]: 2026-01-23 09:32:37.748 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:32:37 np0005593232 nova_compute[250269]: 2026-01-23 09:32:37.751 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:32:37 np0005593232 nova_compute[250269]: 2026-01-23 09:32:37.791 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com#033[00m
Jan 23 04:32:37 np0005593232 nova_compute[250269]: 2026-01-23 09:32:37.819 250273 INFO nova.scheduler.client.report [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Deleted allocations for instance 6b164305-fb4f-4e3a-9090-bb4dfb7ab779#033[00m
Jan 23 04:32:37 np0005593232 nova_compute[250269]: 2026-01-23 09:32:37.926 250273 DEBUG oslo_concurrency.lockutils [None req-7451683b-410c-44e2-b018-59232a7d1684 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "6b164305-fb4f-4e3a-9090-bb4dfb7ab779" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:32:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:37.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:32:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:32:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:32:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:32:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:32:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:32:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:32:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:32:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:32:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:32:38 np0005593232 nova_compute[250269]: 2026-01-23 09:32:38.293 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:32:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1135: 321 pgs: 321 active+clean; 488 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.1 MiB/s wr, 277 op/s
Jan 23 04:32:39 np0005593232 nova_compute[250269]: 2026-01-23 09:32:39.027 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:39.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:32:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:39.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:32:40 np0005593232 ovn_controller[151001]: 2026-01-23T09:32:40Z|00064|binding|INFO|Claiming lport edc7d28f-eaba-44b8-9916-f2089618ca70 for this chassis.
Jan 23 04:32:40 np0005593232 ovn_controller[151001]: 2026-01-23T09:32:40Z|00065|binding|INFO|edc7d28f-eaba-44b8-9916-f2089618ca70: Claiming fa:16:3e:c7:78:59 10.100.0.14
Jan 23 04:32:40 np0005593232 ovn_controller[151001]: 2026-01-23T09:32:40Z|00066|binding|INFO|Setting lport edc7d28f-eaba-44b8-9916-f2089618ca70 up in Southbound
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.179 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:78:59 10.100.0.14'], port_security=['fa:16:3e:c7:78:59 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '54a1ad4e-6fc9-42dc-aa4c-99d3f1297520', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c56e53b3339e4e4db30b7a9d330bc380', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'c0c0e09a-b9c3-4a3a-af9e-c3b66e9f8bc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-1.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cabb3d88-013b-4542-b789-52d49c567d53, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=edc7d28f-eaba-44b8-9916-f2089618ca70) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.180 161902 INFO neutron.agent.ovn.metadata.agent [-] Port edc7d28f-eaba-44b8-9916-f2089618ca70 in datapath 385e7a4d-f87e-44c5-9fc0-5a322eecd4b4 bound to our chassis#033[00m
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.181 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 385e7a4d-f87e-44c5-9fc0-5a322eecd4b4#033[00m
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.192 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5309129e-1f69-4b64-8723-bcf988c8e745]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.193 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap385e7a4d-f1 in ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.195 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap385e7a4d-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.195 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f3f679-4fab-4497-9ce1-6cd3d6accc2b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.196 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[dff90761-39f8-4019-b1ab-61e3792ad4e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.207 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[98d64f17-203f-459f-920b-6048f25b3f1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.231 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[eee9b062-3d3b-43ac-8622-37bc5bb8f85c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.262 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[17a59b9c-de4d-4ab2-8259-c72879668896]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:40 np0005593232 NetworkManager[49057]: <info>  [1769160760.2711] manager: (tap385e7a4d-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.270 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6ad1fe67-849f-4c66-a78c-518e25c2313a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:40 np0005593232 systemd-udevd[265292]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.305 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3ba233-72b8-409e-91ac-a0c260b0266f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.307 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[9344f4a5-1b07-47ae-9cd7-948b27afb5de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:40 np0005593232 NetworkManager[49057]: <info>  [1769160760.3337] device (tap385e7a4d-f0): carrier: link connected
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.338 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[2dcc201e-a8b4-48ad-b9e9-b0602b767f71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:40 np0005593232 nova_compute[250269]: 2026-01-23 09:32:40.344 250273 DEBUG oslo_concurrency.lockutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Acquiring lock "dcffb4e0-398b-4fa3-9eac-2020fa2f9b75" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:32:40 np0005593232 nova_compute[250269]: 2026-01-23 09:32:40.344 250273 DEBUG oslo_concurrency.lockutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "dcffb4e0-398b-4fa3-9eac-2020fa2f9b75" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.352 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[23f7b07d-1ec4-4b6f-9e91-73f86d1007b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap385e7a4d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:a3:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467347, 'reachable_time': 40003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265311, 'error': None, 'target': 'ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.365 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d55a0d15-4d17-483a-b097-368fa1394f50]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0e:a31a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467347, 'tstamp': 467347}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265312, 'error': None, 'target': 'ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.378 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[433e940e-d341-4de7-ae4f-1f7ce90b1e4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap385e7a4d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:a3:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467347, 'reachable_time': 40003, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265313, 'error': None, 'target': 'ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:40 np0005593232 nova_compute[250269]: 2026-01-23 09:32:40.383 250273 DEBUG nova.compute.manager [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.408 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[56a96d4c-d4a4-46f3-b59f-766cbdea01be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:40 np0005593232 nova_compute[250269]: 2026-01-23 09:32:40.433 250273 INFO nova.compute.manager [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Post operation of migration started#033[00m
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.462 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[950e2470-22e8-4138-baba-d0d472cffdf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.463 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap385e7a4d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.464 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.465 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap385e7a4d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:32:40 np0005593232 NetworkManager[49057]: <info>  [1769160760.4681] manager: (tap385e7a4d-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Jan 23 04:32:40 np0005593232 nova_compute[250269]: 2026-01-23 09:32:40.467 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:40 np0005593232 kernel: tap385e7a4d-f0: entered promiscuous mode
Jan 23 04:32:40 np0005593232 nova_compute[250269]: 2026-01-23 09:32:40.470 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.472 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap385e7a4d-f0, col_values=(('external_ids', {'iface-id': '7b93c40e-1f44-4d5a-9bad-e23468f98d69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:32:40 np0005593232 ovn_controller[151001]: 2026-01-23T09:32:40Z|00067|binding|INFO|Releasing lport 7b93c40e-1f44-4d5a-9bad-e23468f98d69 from this chassis (sb_readonly=0)
Jan 23 04:32:40 np0005593232 nova_compute[250269]: 2026-01-23 09:32:40.474 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.475 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/385e7a4d-f87e-44c5-9fc0-5a322eecd4b4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/385e7a4d-f87e-44c5-9fc0-5a322eecd4b4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.476 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[690fcfc5-fd7f-4e10-813c-9e13a61a5b83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.477 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/385e7a4d-f87e-44c5-9fc0-5a322eecd4b4.pid.haproxy
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 385e7a4d-f87e-44c5-9fc0-5a322eecd4b4
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:32:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:40.478 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4', 'env', 'PROCESS_TAG=haproxy-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/385e7a4d-f87e-44c5-9fc0-5a322eecd4b4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 04:32:40 np0005593232 nova_compute[250269]: 2026-01-23 09:32:40.487 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:40 np0005593232 nova_compute[250269]: 2026-01-23 09:32:40.541 250273 DEBUG oslo_concurrency.lockutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:32:40 np0005593232 nova_compute[250269]: 2026-01-23 09:32:40.541 250273 DEBUG oslo_concurrency.lockutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:32:40 np0005593232 nova_compute[250269]: 2026-01-23 09:32:40.548 250273 DEBUG nova.virt.hardware [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:32:40 np0005593232 nova_compute[250269]: 2026-01-23 09:32:40.549 250273 INFO nova.compute.claims [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:32:40 np0005593232 nova_compute[250269]: 2026-01-23 09:32:40.687 250273 DEBUG oslo_concurrency.processutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:32:40 np0005593232 podman[265346]: 2026-01-23 09:32:40.821020411 +0000 UTC m=+0.031788897 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:32:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1136: 321 pgs: 321 active+clean; 488 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 190 op/s
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.002 250273 DEBUG oslo_concurrency.lockutils [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Acquiring lock "refresh_cache-54a1ad4e-6fc9-42dc-aa4c-99d3f1297520" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.002 250273 DEBUG oslo_concurrency.lockutils [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Acquired lock "refresh_cache-54a1ad4e-6fc9-42dc-aa4c-99d3f1297520" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.002 250273 DEBUG nova.network.neutron [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:32:41 np0005593232 podman[265346]: 2026-01-23 09:32:41.005020955 +0000 UTC m=+0.215789461 container create 5675f454984a7069ca6c4114a4b8f9cc6016b3e99c720fb63b4ce4b38c281089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 04:32:41 np0005593232 systemd[1]: Started libpod-conmon-5675f454984a7069ca6c4114a4b8f9cc6016b3e99c720fb63b4ce4b38c281089.scope.
Jan 23 04:32:41 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:32:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e50f84ade3e66b912a1fbea03833a8e0b8f1def8ef73bd6cd8ecc26cc701df20/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:32:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:32:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3546167233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:32:41 np0005593232 podman[265346]: 2026-01-23 09:32:41.099381624 +0000 UTC m=+0.310150130 container init 5675f454984a7069ca6c4114a4b8f9cc6016b3e99c720fb63b4ce4b38c281089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:32:41 np0005593232 podman[265346]: 2026-01-23 09:32:41.108489154 +0000 UTC m=+0.319257630 container start 5675f454984a7069ca6c4114a4b8f9cc6016b3e99c720fb63b4ce4b38c281089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.110 250273 DEBUG oslo_concurrency.processutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.117 250273 DEBUG nova.compute.provider_tree [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:32:41 np0005593232 neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4[265381]: [NOTICE]   (265387) : New worker (265389) forked
Jan 23 04:32:41 np0005593232 neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4[265381]: [NOTICE]   (265387) : Loading success.
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.143 250273 DEBUG nova.scheduler.client.report [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.186 250273 DEBUG oslo_concurrency.lockutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.187 250273 DEBUG nova.compute.manager [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.258 250273 DEBUG nova.compute.manager [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.258 250273 DEBUG nova.network.neutron [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.292 250273 INFO nova.virt.libvirt.driver [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.339 250273 DEBUG nova.compute.manager [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.477 250273 DEBUG nova.compute.manager [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.478 250273 DEBUG nova.virt.libvirt.driver [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.478 250273 INFO nova.virt.libvirt.driver [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Creating image(s)#033[00m
Jan 23 04:32:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:41.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.507 250273 DEBUG nova.storage.rbd_utils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] rbd image dcffb4e0-398b-4fa3-9eac-2020fa2f9b75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.539 250273 DEBUG nova.storage.rbd_utils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] rbd image dcffb4e0-398b-4fa3-9eac-2020fa2f9b75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.566 250273 DEBUG nova.storage.rbd_utils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] rbd image dcffb4e0-398b-4fa3-9eac-2020fa2f9b75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.570 250273 DEBUG oslo_concurrency.processutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.629 250273 DEBUG oslo_concurrency.processutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.631 250273 DEBUG oslo_concurrency.lockutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.631 250273 DEBUG oslo_concurrency.lockutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.632 250273 DEBUG oslo_concurrency.lockutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.658 250273 DEBUG nova.storage.rbd_utils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] rbd image dcffb4e0-398b-4fa3-9eac-2020fa2f9b75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.661 250273 DEBUG oslo_concurrency.processutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 dcffb4e0-398b-4fa3-9eac-2020fa2f9b75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:32:41 np0005593232 nova_compute[250269]: 2026-01-23 09:32:41.685 250273 DEBUG nova.policy [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9914fa5b09794fda94ca3cb12e25549f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5071dae1c732441291c3cea4201538d1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 04:32:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:41.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:42 np0005593232 nova_compute[250269]: 2026-01-23 09:32:42.389 250273 DEBUG oslo_concurrency.processutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 dcffb4e0-398b-4fa3-9eac-2020fa2f9b75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.727s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:32:42 np0005593232 nova_compute[250269]: 2026-01-23 09:32:42.497 250273 DEBUG nova.storage.rbd_utils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] resizing rbd image dcffb4e0-398b-4fa3-9eac-2020fa2f9b75_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 04:32:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:42.588 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:32:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:42.589 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:32:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:42.589 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:32:42 np0005593232 nova_compute[250269]: 2026-01-23 09:32:42.625 250273 DEBUG nova.objects.instance [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lazy-loading 'migration_context' on Instance uuid dcffb4e0-398b-4fa3-9eac-2020fa2f9b75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:32:42 np0005593232 nova_compute[250269]: 2026-01-23 09:32:42.674 250273 DEBUG nova.storage.rbd_utils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] rbd image dcffb4e0-398b-4fa3-9eac-2020fa2f9b75_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:32:42 np0005593232 nova_compute[250269]: 2026-01-23 09:32:42.708 250273 DEBUG nova.storage.rbd_utils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] rbd image dcffb4e0-398b-4fa3-9eac-2020fa2f9b75_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:32:42 np0005593232 nova_compute[250269]: 2026-01-23 09:32:42.713 250273 DEBUG oslo_concurrency.lockutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:32:42 np0005593232 nova_compute[250269]: 2026-01-23 09:32:42.714 250273 DEBUG oslo_concurrency.lockutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:32:42 np0005593232 nova_compute[250269]: 2026-01-23 09:32:42.715 250273 DEBUG oslo_concurrency.processutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:32:42 np0005593232 nova_compute[250269]: 2026-01-23 09:32:42.745 250273 DEBUG oslo_concurrency.processutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:32:42 np0005593232 nova_compute[250269]: 2026-01-23 09:32:42.747 250273 DEBUG oslo_concurrency.processutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:32:42 np0005593232 nova_compute[250269]: 2026-01-23 09:32:42.782 250273 DEBUG oslo_concurrency.processutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:32:42 np0005593232 nova_compute[250269]: 2026-01-23 09:32:42.783 250273 DEBUG oslo_concurrency.lockutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:32:42 np0005593232 nova_compute[250269]: 2026-01-23 09:32:42.809 250273 DEBUG nova.storage.rbd_utils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] rbd image dcffb4e0-398b-4fa3-9eac-2020fa2f9b75_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:32:42 np0005593232 nova_compute[250269]: 2026-01-23 09:32:42.813 250273 DEBUG oslo_concurrency.processutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 dcffb4e0-398b-4fa3-9eac-2020fa2f9b75_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:32:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1137: 321 pgs: 321 active+clean; 541 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.7 MiB/s wr, 296 op/s
Jan 23 04:32:43 np0005593232 nova_compute[250269]: 2026-01-23 09:32:43.296 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:32:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:43.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:43 np0005593232 nova_compute[250269]: 2026-01-23 09:32:43.764 250273 DEBUG nova.network.neutron [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Successfully created port: b3082eca-f363-4d6f-91c2-78579e8ff7fd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 04:32:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:43.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:44 np0005593232 nova_compute[250269]: 2026-01-23 09:32:44.029 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:44 np0005593232 nova_compute[250269]: 2026-01-23 09:32:44.071 250273 DEBUG oslo_concurrency.processutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 dcffb4e0-398b-4fa3-9eac-2020fa2f9b75_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:32:44 np0005593232 nova_compute[250269]: 2026-01-23 09:32:44.146 250273 DEBUG nova.network.neutron [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Updating instance_info_cache with network_info: [{"id": "edc7d28f-eaba-44b8-9916-f2089618ca70", "address": "fa:16:3e:c7:78:59", "network": {"id": "385e7a4d-f87e-44c5-9fc0-5a322eecd4b4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1143816535-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e53b3339e4e4db30b7a9d330bc380", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedc7d28f-ea", "ovs_interfaceid": "edc7d28f-eaba-44b8-9916-f2089618ca70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:32:44 np0005593232 nova_compute[250269]: 2026-01-23 09:32:44.195 250273 DEBUG nova.virt.libvirt.driver [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:32:44 np0005593232 nova_compute[250269]: 2026-01-23 09:32:44.196 250273 DEBUG nova.virt.libvirt.driver [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Ensure instance console log exists: /var/lib/nova/instances/dcffb4e0-398b-4fa3-9eac-2020fa2f9b75/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:32:44 np0005593232 nova_compute[250269]: 2026-01-23 09:32:44.196 250273 DEBUG oslo_concurrency.lockutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:32:44 np0005593232 nova_compute[250269]: 2026-01-23 09:32:44.197 250273 DEBUG oslo_concurrency.lockutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:32:44 np0005593232 nova_compute[250269]: 2026-01-23 09:32:44.198 250273 DEBUG oslo_concurrency.lockutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:32:44 np0005593232 nova_compute[250269]: 2026-01-23 09:32:44.386 250273 DEBUG oslo_concurrency.lockutils [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Releasing lock "refresh_cache-54a1ad4e-6fc9-42dc-aa4c-99d3f1297520" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:32:44 np0005593232 nova_compute[250269]: 2026-01-23 09:32:44.411 250273 DEBUG oslo_concurrency.lockutils [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:32:44 np0005593232 nova_compute[250269]: 2026-01-23 09:32:44.412 250273 DEBUG oslo_concurrency.lockutils [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:32:44 np0005593232 nova_compute[250269]: 2026-01-23 09:32:44.412 250273 DEBUG oslo_concurrency.lockutils [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:32:44 np0005593232 nova_compute[250269]: 2026-01-23 09:32:44.417 250273 INFO nova.virt.libvirt.driver [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Jan 23 04:32:44 np0005593232 virtqemud[249592]: Domain id=6 name='instance-0000000f' uuid=54a1ad4e-6fc9-42dc-aa4c-99d3f1297520 is tainted: custom-monitor
Jan 23 04:32:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1138: 321 pgs: 321 active+clean; 541 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.7 MiB/s wr, 243 op/s
Jan 23 04:32:45 np0005593232 nova_compute[250269]: 2026-01-23 09:32:45.425 250273 INFO nova.virt.libvirt.driver [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Jan 23 04:32:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:45.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:45 np0005593232 nova_compute[250269]: 2026-01-23 09:32:45.892 250273 DEBUG nova.network.neutron [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Successfully updated port: b3082eca-f363-4d6f-91c2-78579e8ff7fd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 04:32:45 np0005593232 nova_compute[250269]: 2026-01-23 09:32:45.915 250273 DEBUG oslo_concurrency.lockutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Acquiring lock "refresh_cache-dcffb4e0-398b-4fa3-9eac-2020fa2f9b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:32:45 np0005593232 nova_compute[250269]: 2026-01-23 09:32:45.915 250273 DEBUG oslo_concurrency.lockutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Acquired lock "refresh_cache-dcffb4e0-398b-4fa3-9eac-2020fa2f9b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:32:45 np0005593232 nova_compute[250269]: 2026-01-23 09:32:45.916 250273 DEBUG nova.network.neutron [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:32:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:45.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:46 np0005593232 nova_compute[250269]: 2026-01-23 09:32:46.132 250273 DEBUG nova.network.neutron [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:32:46 np0005593232 nova_compute[250269]: 2026-01-23 09:32:46.136 250273 DEBUG nova.compute.manager [req-5b6ff8aa-1e96-49ff-b12c-90eabc1511a9 req-8af4e31b-ba91-4e7c-9fe2-c8bc336d5da8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Received event network-changed-b3082eca-f363-4d6f-91c2-78579e8ff7fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:32:46 np0005593232 nova_compute[250269]: 2026-01-23 09:32:46.136 250273 DEBUG nova.compute.manager [req-5b6ff8aa-1e96-49ff-b12c-90eabc1511a9 req-8af4e31b-ba91-4e7c-9fe2-c8bc336d5da8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Refreshing instance network info cache due to event network-changed-b3082eca-f363-4d6f-91c2-78579e8ff7fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:32:46 np0005593232 nova_compute[250269]: 2026-01-23 09:32:46.137 250273 DEBUG oslo_concurrency.lockutils [req-5b6ff8aa-1e96-49ff-b12c-90eabc1511a9 req-8af4e31b-ba91-4e7c-9fe2-c8bc336d5da8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-dcffb4e0-398b-4fa3-9eac-2020fa2f9b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:32:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Jan 23 04:32:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Jan 23 04:32:46 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Jan 23 04:32:46 np0005593232 nova_compute[250269]: 2026-01-23 09:32:46.431 250273 INFO nova.virt.libvirt.driver [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Jan 23 04:32:46 np0005593232 nova_compute[250269]: 2026-01-23 09:32:46.437 250273 DEBUG nova.compute.manager [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:32:46 np0005593232 nova_compute[250269]: 2026-01-23 09:32:46.470 250273 DEBUG nova.objects.instance [None req-4686ec0f-b2f4-43d4-9b84-cf40e2256e17 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 23 04:32:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:32:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:32:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:32:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:32:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.011179486843941933 of space, bias 1.0, pg target 3.35384605318258 quantized to 32 (current 32)
Jan 23 04:32:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:32:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002160505944072213 of space, bias 1.0, pg target 0.6416702653894472 quantized to 32 (current 32)
Jan 23 04:32:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:32:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:32:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:32:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Jan 23 04:32:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:32:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 32)
Jan 23 04:32:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:32:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:32:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:32:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Jan 23 04:32:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:32:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 23 04:32:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:32:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:32:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:32:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 23 04:32:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1140: 321 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 311 active+clean; 542 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 5.2 MiB/s wr, 248 op/s
Jan 23 04:32:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:47.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:47 np0005593232 nova_compute[250269]: 2026-01-23 09:32:47.766 250273 DEBUG nova.network.neutron [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Updating instance_info_cache with network_info: [{"id": "b3082eca-f363-4d6f-91c2-78579e8ff7fd", "address": "fa:16:3e:55:80:d7", "network": {"id": "e1f93ca8-d3d7-4404-b1f0-385022a03154", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1466517221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5071dae1c732441291c3cea4201538d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3082eca-f3", "ovs_interfaceid": "b3082eca-f363-4d6f-91c2-78579e8ff7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:32:47 np0005593232 nova_compute[250269]: 2026-01-23 09:32:47.916 250273 DEBUG oslo_concurrency.lockutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Releasing lock "refresh_cache-dcffb4e0-398b-4fa3-9eac-2020fa2f9b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:32:47 np0005593232 nova_compute[250269]: 2026-01-23 09:32:47.917 250273 DEBUG nova.compute.manager [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Instance network_info: |[{"id": "b3082eca-f363-4d6f-91c2-78579e8ff7fd", "address": "fa:16:3e:55:80:d7", "network": {"id": "e1f93ca8-d3d7-4404-b1f0-385022a03154", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1466517221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5071dae1c732441291c3cea4201538d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3082eca-f3", "ovs_interfaceid": "b3082eca-f363-4d6f-91c2-78579e8ff7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:32:47 np0005593232 nova_compute[250269]: 2026-01-23 09:32:47.918 250273 DEBUG oslo_concurrency.lockutils [req-5b6ff8aa-1e96-49ff-b12c-90eabc1511a9 req-8af4e31b-ba91-4e7c-9fe2-c8bc336d5da8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-dcffb4e0-398b-4fa3-9eac-2020fa2f9b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:32:47 np0005593232 nova_compute[250269]: 2026-01-23 09:32:47.919 250273 DEBUG nova.network.neutron [req-5b6ff8aa-1e96-49ff-b12c-90eabc1511a9 req-8af4e31b-ba91-4e7c-9fe2-c8bc336d5da8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Refreshing network info cache for port b3082eca-f363-4d6f-91c2-78579e8ff7fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:32:47 np0005593232 nova_compute[250269]: 2026-01-23 09:32:47.925 250273 DEBUG nova.virt.libvirt.driver [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Start _get_guest_xml network_info=[{"id": "b3082eca-f363-4d6f-91c2-78579e8ff7fd", "address": "fa:16:3e:55:80:d7", "network": {"id": "e1f93ca8-d3d7-4404-b1f0-385022a03154", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1466517221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5071dae1c732441291c3cea4201538d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3082eca-f3", "ovs_interfaceid": "b3082eca-f363-4d6f-91c2-78579e8ff7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [{'encryption_options': None, 'device_name': '/dev/vdb', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'size': 1, 'encrypted': False, 'guest_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:32:47 np0005593232 nova_compute[250269]: 2026-01-23 09:32:47.930 250273 WARNING nova.virt.libvirt.driver [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 04:32:47 np0005593232 nova_compute[250269]: 2026-01-23 09:32:47.936 250273 DEBUG nova.virt.libvirt.host [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 23 04:32:47 np0005593232 nova_compute[250269]: 2026-01-23 09:32:47.936 250273 DEBUG nova.virt.libvirt.host [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 23 04:32:47 np0005593232 nova_compute[250269]: 2026-01-23 09:32:47.940 250273 DEBUG nova.virt.libvirt.host [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 23 04:32:47 np0005593232 nova_compute[250269]: 2026-01-23 09:32:47.940 250273 DEBUG nova.virt.libvirt.host [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 23 04:32:47 np0005593232 nova_compute[250269]: 2026-01-23 09:32:47.942 250273 DEBUG nova.virt.libvirt.driver [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 23 04:32:47 np0005593232 nova_compute[250269]: 2026-01-23 09:32:47.943 250273 DEBUG nova.virt.hardware [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:31:33Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='1041441995',id=14,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-381013747',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 23 04:32:47 np0005593232 nova_compute[250269]: 2026-01-23 09:32:47.943 250273 DEBUG nova.virt.hardware [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 23 04:32:47 np0005593232 nova_compute[250269]: 2026-01-23 09:32:47.944 250273 DEBUG nova.virt.hardware [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 23 04:32:47 np0005593232 nova_compute[250269]: 2026-01-23 09:32:47.944 250273 DEBUG nova.virt.hardware [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 23 04:32:47 np0005593232 nova_compute[250269]: 2026-01-23 09:32:47.944 250273 DEBUG nova.virt.hardware [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 23 04:32:47 np0005593232 nova_compute[250269]: 2026-01-23 09:32:47.945 250273 DEBUG nova.virt.hardware [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 23 04:32:47 np0005593232 nova_compute[250269]: 2026-01-23 09:32:47.945 250273 DEBUG nova.virt.hardware [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 23 04:32:47 np0005593232 nova_compute[250269]: 2026-01-23 09:32:47.946 250273 DEBUG nova.virt.hardware [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 23 04:32:47 np0005593232 nova_compute[250269]: 2026-01-23 09:32:47.946 250273 DEBUG nova.virt.hardware [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 23 04:32:47 np0005593232 nova_compute[250269]: 2026-01-23 09:32:47.946 250273 DEBUG nova.virt.hardware [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 23 04:32:47 np0005593232 nova_compute[250269]: 2026-01-23 09:32:47.947 250273 DEBUG nova.virt.hardware [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 23 04:32:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:47.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:47 np0005593232 nova_compute[250269]: 2026-01-23 09:32:47.952 250273 DEBUG oslo_concurrency.processutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:32:48 np0005593232 nova_compute[250269]: 2026-01-23 09:32:48.298 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:32:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:32:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2859987831' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:32:48 np0005593232 nova_compute[250269]: 2026-01-23 09:32:48.395 250273 DEBUG oslo_concurrency.processutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:32:48 np0005593232 nova_compute[250269]: 2026-01-23 09:32:48.396 250273 DEBUG oslo_concurrency.processutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:32:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:32:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:32:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3525514360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:32:48 np0005593232 nova_compute[250269]: 2026-01-23 09:32:48.827 250273 DEBUG oslo_concurrency.processutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:32:48 np0005593232 nova_compute[250269]: 2026-01-23 09:32:48.857 250273 DEBUG nova.storage.rbd_utils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] rbd image dcffb4e0-398b-4fa3-9eac-2020fa2f9b75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 04:32:48 np0005593232 nova_compute[250269]: 2026-01-23 09:32:48.862 250273 DEBUG oslo_concurrency.processutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:32:48 np0005593232 nova_compute[250269]: 2026-01-23 09:32:48.943 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769160753.9421778, 6b164305-fb4f-4e3a-9090-bb4dfb7ab779 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 04:32:48 np0005593232 nova_compute[250269]: 2026-01-23 09:32:48.944 250273 INFO nova.compute.manager [-] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] VM Stopped (Lifecycle Event)
Jan 23 04:32:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1141: 321 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 311 active+clean; 520 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 6.7 MiB/s wr, 291 op/s
Jan 23 04:32:48 np0005593232 nova_compute[250269]: 2026-01-23 09:32:48.984 250273 DEBUG nova.compute.manager [None req-0fa983ce-0ab4-40ba-af65-a189812eafe5 - - - - - -] [instance: 6b164305-fb4f-4e3a-9090-bb4dfb7ab779] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.033 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:32:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:32:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4138820481' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.315 250273 DEBUG oslo_concurrency.processutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.317 250273 DEBUG nova.virt.libvirt.vif [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:32:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-63411154',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-63411154',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(14),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-63411154',id=17,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=14,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG55OzgmhMbHj/AdekeVIJzfHbYBd/FfQqGtG06NkSPUh44muiV0W4+jTbr0/+5N+bSuPO5vRZ2E3ny+4RPwCoV8mIh43dtiZy7+WnjnptN/wd9TpQ5NxnJIVtbxdhFRlA==',key_name='tempest-keypair-1596118567',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5071dae1c732441291c3cea4201538d1',ramdisk_id='',reservation_id='r-dzpuoyof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-2097391700',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-2097391700-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:32:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9914fa5b09794fda94ca3cb12e25549f',uuid=dcffb4e0-398b-4fa3-9eac-2020fa2f9b75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3082eca-f363-4d6f-91c2-78579e8ff7fd", "address": "fa:16:3e:55:80:d7", "network": {"id": "e1f93ca8-d3d7-4404-b1f0-385022a03154", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1466517221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5071dae1c732441291c3cea4201538d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3082eca-f3", "ovs_interfaceid": "b3082eca-f363-4d6f-91c2-78579e8ff7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.318 250273 DEBUG nova.network.os_vif_util [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Converting VIF {"id": "b3082eca-f363-4d6f-91c2-78579e8ff7fd", "address": "fa:16:3e:55:80:d7", "network": {"id": "e1f93ca8-d3d7-4404-b1f0-385022a03154", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1466517221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5071dae1c732441291c3cea4201538d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3082eca-f3", "ovs_interfaceid": "b3082eca-f363-4d6f-91c2-78579e8ff7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.319 250273 DEBUG nova.network.os_vif_util [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:80:d7,bridge_name='br-int',has_traffic_filtering=True,id=b3082eca-f363-4d6f-91c2-78579e8ff7fd,network=Network(e1f93ca8-d3d7-4404-b1f0-385022a03154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3082eca-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.321 250273 DEBUG nova.objects.instance [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lazy-loading 'pci_devices' on Instance uuid dcffb4e0-398b-4fa3-9eac-2020fa2f9b75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.344 250273 DEBUG nova.virt.libvirt.driver [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:32:49 np0005593232 nova_compute[250269]:  <uuid>dcffb4e0-398b-4fa3-9eac-2020fa2f9b75</uuid>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:  <name>instance-00000011</name>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-63411154</nova:name>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:32:47</nova:creationTime>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <nova:flavor name="tempest-flavor_with_ephemeral_1-381013747">
Jan 23 04:32:49 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:        <nova:ephemeral>1</nova:ephemeral>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:        <nova:user uuid="9914fa5b09794fda94ca3cb12e25549f">tempest-ServersWithSpecificFlavorTestJSON-2097391700-project-member</nova:user>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:        <nova:project uuid="5071dae1c732441291c3cea4201538d1">tempest-ServersWithSpecificFlavorTestJSON-2097391700</nova:project>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:        <nova:port uuid="b3082eca-f363-4d6f-91c2-78579e8ff7fd">
Jan 23 04:32:49 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <entry name="serial">dcffb4e0-398b-4fa3-9eac-2020fa2f9b75</entry>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <entry name="uuid">dcffb4e0-398b-4fa3-9eac-2020fa2f9b75</entry>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/dcffb4e0-398b-4fa3-9eac-2020fa2f9b75_disk">
Jan 23 04:32:49 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:32:49 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/dcffb4e0-398b-4fa3-9eac-2020fa2f9b75_disk.eph0">
Jan 23 04:32:49 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:32:49 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <target dev="vdb" bus="virtio"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/dcffb4e0-398b-4fa3-9eac-2020fa2f9b75_disk.config">
Jan 23 04:32:49 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:32:49 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:55:80:d7"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <target dev="tapb3082eca-f3"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/dcffb4e0-398b-4fa3-9eac-2020fa2f9b75/console.log" append="off"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:32:49 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:32:49 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:32:49 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:32:49 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.346 250273 DEBUG nova.compute.manager [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Preparing to wait for external event network-vif-plugged-b3082eca-f363-4d6f-91c2-78579e8ff7fd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.346 250273 DEBUG oslo_concurrency.lockutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Acquiring lock "dcffb4e0-398b-4fa3-9eac-2020fa2f9b75-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.347 250273 DEBUG oslo_concurrency.lockutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "dcffb4e0-398b-4fa3-9eac-2020fa2f9b75-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.347 250273 DEBUG oslo_concurrency.lockutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "dcffb4e0-398b-4fa3-9eac-2020fa2f9b75-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.348 250273 DEBUG nova.virt.libvirt.vif [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:32:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-63411154',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-63411154',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(14),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-63411154',id=17,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=14,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG55OzgmhMbHj/AdekeVIJzfHbYBd/FfQqGtG06NkSPUh44muiV0W4+jTbr0/+5N+bSuPO5vRZ2E3ny+4RPwCoV8mIh43dtiZy7+WnjnptN/wd9TpQ5NxnJIVtbxdhFRlA==',key_name='tempest-keypair-1596118567',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5071dae1c732441291c3cea4201538d1',ramdisk_id='',reservation_id='r-dzpuoyof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-2097391700',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-2097391700-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:32:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9914fa5b09794fda94ca3cb12e25549f',uuid=dcffb4e0-398b-4fa3-9eac-2020fa2f9b75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3082eca-f363-4d6f-91c2-78579e8ff7fd", "address": "fa:16:3e:55:80:d7", "network": {"id": "e1f93ca8-d3d7-4404-b1f0-385022a03154", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1466517221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5071dae1c732441291c3cea4201538d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3082eca-f3", "ovs_interfaceid": "b3082eca-f363-4d6f-91c2-78579e8ff7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.348 250273 DEBUG nova.network.os_vif_util [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Converting VIF {"id": "b3082eca-f363-4d6f-91c2-78579e8ff7fd", "address": "fa:16:3e:55:80:d7", "network": {"id": "e1f93ca8-d3d7-4404-b1f0-385022a03154", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1466517221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5071dae1c732441291c3cea4201538d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3082eca-f3", "ovs_interfaceid": "b3082eca-f363-4d6f-91c2-78579e8ff7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.348 250273 DEBUG nova.network.os_vif_util [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:55:80:d7,bridge_name='br-int',has_traffic_filtering=True,id=b3082eca-f363-4d6f-91c2-78579e8ff7fd,network=Network(e1f93ca8-d3d7-4404-b1f0-385022a03154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3082eca-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.349 250273 DEBUG os_vif [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:80:d7,bridge_name='br-int',has_traffic_filtering=True,id=b3082eca-f363-4d6f-91c2-78579e8ff7fd,network=Network(e1f93ca8-d3d7-4404-b1f0-385022a03154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3082eca-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.349 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.350 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.350 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.353 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.353 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb3082eca-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.353 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb3082eca-f3, col_values=(('external_ids', {'iface-id': 'b3082eca-f363-4d6f-91c2-78579e8ff7fd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:55:80:d7', 'vm-uuid': 'dcffb4e0-398b-4fa3-9eac-2020fa2f9b75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.355 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:49 np0005593232 NetworkManager[49057]: <info>  [1769160769.3563] manager: (tapb3082eca-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.358 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.362 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.363 250273 INFO os_vif [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:55:80:d7,bridge_name='br-int',has_traffic_filtering=True,id=b3082eca-f363-4d6f-91c2-78579e8ff7fd,network=Network(e1f93ca8-d3d7-4404-b1f0-385022a03154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3082eca-f3')#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.423 250273 DEBUG nova.virt.libvirt.driver [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.423 250273 DEBUG nova.virt.libvirt.driver [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.423 250273 DEBUG nova.virt.libvirt.driver [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.424 250273 DEBUG nova.virt.libvirt.driver [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] No VIF found with MAC fa:16:3e:55:80:d7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.424 250273 INFO nova.virt.libvirt.driver [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Using config drive#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.447 250273 DEBUG nova.storage.rbd_utils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] rbd image dcffb4e0-398b-4fa3-9eac-2020fa2f9b75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:32:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:49.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.782 250273 DEBUG nova.network.neutron [req-5b6ff8aa-1e96-49ff-b12c-90eabc1511a9 req-8af4e31b-ba91-4e7c-9fe2-c8bc336d5da8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Updated VIF entry in instance network info cache for port b3082eca-f363-4d6f-91c2-78579e8ff7fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.782 250273 DEBUG nova.network.neutron [req-5b6ff8aa-1e96-49ff-b12c-90eabc1511a9 req-8af4e31b-ba91-4e7c-9fe2-c8bc336d5da8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Updating instance_info_cache with network_info: [{"id": "b3082eca-f363-4d6f-91c2-78579e8ff7fd", "address": "fa:16:3e:55:80:d7", "network": {"id": "e1f93ca8-d3d7-4404-b1f0-385022a03154", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1466517221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5071dae1c732441291c3cea4201538d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3082eca-f3", "ovs_interfaceid": "b3082eca-f363-4d6f-91c2-78579e8ff7fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:32:49 np0005593232 podman[265831]: 2026-01-23 09:32:49.796106724 +0000 UTC m=+0.104943571 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.827 250273 DEBUG oslo_concurrency.lockutils [req-5b6ff8aa-1e96-49ff-b12c-90eabc1511a9 req-8af4e31b-ba91-4e7c-9fe2-c8bc336d5da8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-dcffb4e0-398b-4fa3-9eac-2020fa2f9b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.854 250273 DEBUG nova.virt.libvirt.driver [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Check if temp file /var/lib/nova/instances/tmpo7nk37a9 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.855 250273 DEBUG nova.compute.manager [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpo7nk37a9',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='54a1ad4e-6fc9-42dc-aa4c-99d3f1297520',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.871 250273 INFO nova.virt.libvirt.driver [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Creating config drive at /var/lib/nova/instances/dcffb4e0-398b-4fa3-9eac-2020fa2f9b75/disk.config#033[00m
Jan 23 04:32:49 np0005593232 nova_compute[250269]: 2026-01-23 09:32:49.880 250273 DEBUG oslo_concurrency.processutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dcffb4e0-398b-4fa3-9eac-2020fa2f9b75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc8rd0kle execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:32:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:49.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:50 np0005593232 nova_compute[250269]: 2026-01-23 09:32:50.010 250273 DEBUG oslo_concurrency.processutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dcffb4e0-398b-4fa3-9eac-2020fa2f9b75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc8rd0kle" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:32:50 np0005593232 nova_compute[250269]: 2026-01-23 09:32:50.039 250273 DEBUG nova.storage.rbd_utils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] rbd image dcffb4e0-398b-4fa3-9eac-2020fa2f9b75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:32:50 np0005593232 nova_compute[250269]: 2026-01-23 09:32:50.042 250273 DEBUG oslo_concurrency.processutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dcffb4e0-398b-4fa3-9eac-2020fa2f9b75/disk.config dcffb4e0-398b-4fa3-9eac-2020fa2f9b75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:32:50 np0005593232 nova_compute[250269]: 2026-01-23 09:32:50.281 250273 DEBUG oslo_concurrency.processutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dcffb4e0-398b-4fa3-9eac-2020fa2f9b75/disk.config dcffb4e0-398b-4fa3-9eac-2020fa2f9b75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:32:50 np0005593232 nova_compute[250269]: 2026-01-23 09:32:50.282 250273 INFO nova.virt.libvirt.driver [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Deleting local config drive /var/lib/nova/instances/dcffb4e0-398b-4fa3-9eac-2020fa2f9b75/disk.config because it was imported into RBD.#033[00m
Jan 23 04:32:50 np0005593232 kernel: tapb3082eca-f3: entered promiscuous mode
Jan 23 04:32:50 np0005593232 NetworkManager[49057]: <info>  [1769160770.3397] manager: (tapb3082eca-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Jan 23 04:32:50 np0005593232 ovn_controller[151001]: 2026-01-23T09:32:50Z|00068|binding|INFO|Claiming lport b3082eca-f363-4d6f-91c2-78579e8ff7fd for this chassis.
Jan 23 04:32:50 np0005593232 ovn_controller[151001]: 2026-01-23T09:32:50Z|00069|binding|INFO|b3082eca-f363-4d6f-91c2-78579e8ff7fd: Claiming fa:16:3e:55:80:d7 10.100.0.8
Jan 23 04:32:50 np0005593232 nova_compute[250269]: 2026-01-23 09:32:50.340 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:50 np0005593232 ovn_controller[151001]: 2026-01-23T09:32:50Z|00070|binding|INFO|Setting lport b3082eca-f363-4d6f-91c2-78579e8ff7fd ovn-installed in OVS
Jan 23 04:32:50 np0005593232 nova_compute[250269]: 2026-01-23 09:32:50.359 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:50 np0005593232 nova_compute[250269]: 2026-01-23 09:32:50.362 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:50 np0005593232 systemd-udevd[265931]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:32:50 np0005593232 NetworkManager[49057]: <info>  [1769160770.3811] device (tapb3082eca-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:32:50 np0005593232 NetworkManager[49057]: <info>  [1769160770.3819] device (tapb3082eca-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:32:50 np0005593232 systemd-machined[215836]: New machine qemu-7-instance-00000011.
Jan 23 04:32:50 np0005593232 systemd[1]: Started Virtual Machine qemu-7-instance-00000011.
Jan 23 04:32:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1142: 321 pgs: 2 active+clean+snaptrim, 8 active+clean+snaptrim_wait, 311 active+clean; 520 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 6.7 MiB/s wr, 291 op/s
Jan 23 04:32:50 np0005593232 ovn_controller[151001]: 2026-01-23T09:32:50Z|00071|binding|INFO|Setting lport b3082eca-f363-4d6f-91c2-78579e8ff7fd up in Southbound
Jan 23 04:32:50 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:50.984 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:80:d7 10.100.0.8'], port_security=['fa:16:3e:55:80:d7 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'dcffb4e0-398b-4fa3-9eac-2020fa2f9b75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e1f93ca8-d3d7-4404-b1f0-385022a03154', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5071dae1c732441291c3cea4201538d1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '485475b6-fa5d-49cf-9e7a-d68656021da2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd9ebd18-6b3b-45ea-8383-db75ba482bdb, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=b3082eca-f363-4d6f-91c2-78579e8ff7fd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:32:50 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:50.986 161902 INFO neutron.agent.ovn.metadata.agent [-] Port b3082eca-f363-4d6f-91c2-78579e8ff7fd in datapath e1f93ca8-d3d7-4404-b1f0-385022a03154 bound to our chassis#033[00m
Jan 23 04:32:50 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:50.987 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e1f93ca8-d3d7-4404-b1f0-385022a03154#033[00m
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:50.999 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0b8bb8ec-f3b8-4013-bdd6-5e2db024b1dd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:51.000 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape1f93ca8-d1 in ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:51.002 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape1f93ca8-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:51.003 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c7b327e7-7b39-4fd4-bde2-76d5fcad6a79]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:51.003 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6f1a9c23-02b2-4706-8687-16ae65d23107]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:51.019 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[23913a1b-4697-4116-ad5e-709b6cd29fb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:51.031 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6bf5c35a-df86-4b93-a6e1-bffda31d22a8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:51.077 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[3aac9766-95e9-49e4-a870-ac2b58c3e064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:51 np0005593232 NetworkManager[49057]: <info>  [1769160771.0844] manager: (tape1f93ca8-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Jan 23 04:32:51 np0005593232 systemd-udevd[265933]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:51.084 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8b566c44-1557-4900-8819-64d202522a9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:51.134 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[7d195d73-fe9a-42ad-b0c7-86963af2dca8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:51.138 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[3c4fd5d5-4547-4a29-8984-bef60af84e1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:51 np0005593232 NetworkManager[49057]: <info>  [1769160771.1826] device (tape1f93ca8-d0): carrier: link connected
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:51.193 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[98c5b692-9faf-40c1-8a48-9c75703ccfc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:51.215 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ed5475c8-f5e6-4d53-aac9-6278f7b113ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape1f93ca8-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:6f:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468432, 'reachable_time': 25865, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266022, 'error': None, 'target': 'ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:51.232 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c2d44d5f-7fbd-427f-9d71-7830c678908f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefa:6fac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 468432, 'tstamp': 468432}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266024, 'error': None, 'target': 'ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:51.257 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ff43ea5b-11f5-446f-aa7c-076ec7b007d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape1f93ca8-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:6f:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468432, 'reachable_time': 25865, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266025, 'error': None, 'target': 'ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:51.289 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a17e36bb-cdcc-43a7-890f-fa5949e5d8b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:51.353 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[590b0ceb-cb2a-4957-a98d-077d5cc587d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:51.355 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape1f93ca8-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:51.355 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:51.356 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape1f93ca8-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:32:51 np0005593232 kernel: tape1f93ca8-d0: entered promiscuous mode
Jan 23 04:32:51 np0005593232 NetworkManager[49057]: <info>  [1769160771.3589] manager: (tape1f93ca8-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Jan 23 04:32:51 np0005593232 nova_compute[250269]: 2026-01-23 09:32:51.358 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:51.362 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape1f93ca8-d0, col_values=(('external_ids', {'iface-id': '7bc99e1a-c88e-4c7e-a925-28a3494f0f6e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:32:51 np0005593232 nova_compute[250269]: 2026-01-23 09:32:51.363 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:51 np0005593232 ovn_controller[151001]: 2026-01-23T09:32:51Z|00072|binding|INFO|Releasing lport 7bc99e1a-c88e-4c7e-a925-28a3494f0f6e from this chassis (sb_readonly=0)
Jan 23 04:32:51 np0005593232 nova_compute[250269]: 2026-01-23 09:32:51.364 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:51.366 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e1f93ca8-d3d7-4404-b1f0-385022a03154.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e1f93ca8-d3d7-4404-b1f0-385022a03154.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:51.367 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a20cc304-031a-4c3f-87b6-f3bff7d2d72f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:51.368 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-e1f93ca8-d3d7-4404-b1f0-385022a03154
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/e1f93ca8-d3d7-4404-b1f0-385022a03154.pid.haproxy
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID e1f93ca8-d3d7-4404-b1f0-385022a03154
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:32:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:32:51.370 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154', 'env', 'PROCESS_TAG=haproxy-e1f93ca8-d3d7-4404-b1f0-385022a03154', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e1f93ca8-d3d7-4404-b1f0-385022a03154.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 04:32:51 np0005593232 nova_compute[250269]: 2026-01-23 09:32:51.379 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:51 np0005593232 nova_compute[250269]: 2026-01-23 09:32:51.398 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160771.3969665, dcffb4e0-398b-4fa3-9eac-2020fa2f9b75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:32:51 np0005593232 nova_compute[250269]: 2026-01-23 09:32:51.399 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] VM Started (Lifecycle Event)#033[00m
Jan 23 04:32:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:51.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:51 np0005593232 podman[266064]: 2026-01-23 09:32:51.756829734 +0000 UTC m=+0.055973496 container create cddb948bb43d3ae9897b8a56d57bbc673ee84c86d19279c9225d157a8eb3bee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 23 04:32:51 np0005593232 systemd[1]: Started libpod-conmon-cddb948bb43d3ae9897b8a56d57bbc673ee84c86d19279c9225d157a8eb3bee8.scope.
Jan 23 04:32:51 np0005593232 podman[266064]: 2026-01-23 09:32:51.729467754 +0000 UTC m=+0.028611576 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:32:51 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:32:51 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d54c5d522df6f52ef49db82b3ec3927ec7ad170319d5987cb5e875d5155b822a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:32:51 np0005593232 podman[266064]: 2026-01-23 09:32:51.849753162 +0000 UTC m=+0.148897054 container init cddb948bb43d3ae9897b8a56d57bbc673ee84c86d19279c9225d157a8eb3bee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:32:51 np0005593232 podman[266064]: 2026-01-23 09:32:51.854966911 +0000 UTC m=+0.154110723 container start cddb948bb43d3ae9897b8a56d57bbc673ee84c86d19279c9225d157a8eb3bee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:32:51 np0005593232 neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154[266080]: [NOTICE]   (266084) : New worker (266086) forked
Jan 23 04:32:51 np0005593232 neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154[266080]: [NOTICE]   (266084) : Loading success.
Jan 23 04:32:51 np0005593232 nova_compute[250269]: 2026-01-23 09:32:51.940 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:32:51 np0005593232 nova_compute[250269]: 2026-01-23 09:32:51.948 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160771.3981118, dcffb4e0-398b-4fa3-9eac-2020fa2f9b75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:32:51 np0005593232 nova_compute[250269]: 2026-01-23 09:32:51.948 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:32:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:51.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:51 np0005593232 nova_compute[250269]: 2026-01-23 09:32:51.986 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:32:51 np0005593232 nova_compute[250269]: 2026-01-23 09:32:51.995 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:32:52 np0005593232 nova_compute[250269]: 2026-01-23 09:32:52.039 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:32:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1143: 321 pgs: 321 active+clean; 536 MiB data, 574 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.6 MiB/s wr, 218 op/s
Jan 23 04:32:52 np0005593232 nova_compute[250269]: 2026-01-23 09:32:52.972 250273 DEBUG nova.compute.manager [req-6dd7bce9-4ab3-426c-86ee-e210fe49083c req-adcb14fe-5f08-4293-a976-940069d47a50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Received event network-vif-plugged-b3082eca-f363-4d6f-91c2-78579e8ff7fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:32:52 np0005593232 nova_compute[250269]: 2026-01-23 09:32:52.973 250273 DEBUG oslo_concurrency.lockutils [req-6dd7bce9-4ab3-426c-86ee-e210fe49083c req-adcb14fe-5f08-4293-a976-940069d47a50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "dcffb4e0-398b-4fa3-9eac-2020fa2f9b75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:32:52 np0005593232 nova_compute[250269]: 2026-01-23 09:32:52.973 250273 DEBUG oslo_concurrency.lockutils [req-6dd7bce9-4ab3-426c-86ee-e210fe49083c req-adcb14fe-5f08-4293-a976-940069d47a50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "dcffb4e0-398b-4fa3-9eac-2020fa2f9b75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:32:52 np0005593232 nova_compute[250269]: 2026-01-23 09:32:52.973 250273 DEBUG oslo_concurrency.lockutils [req-6dd7bce9-4ab3-426c-86ee-e210fe49083c req-adcb14fe-5f08-4293-a976-940069d47a50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "dcffb4e0-398b-4fa3-9eac-2020fa2f9b75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:32:52 np0005593232 nova_compute[250269]: 2026-01-23 09:32:52.974 250273 DEBUG nova.compute.manager [req-6dd7bce9-4ab3-426c-86ee-e210fe49083c req-adcb14fe-5f08-4293-a976-940069d47a50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Processing event network-vif-plugged-b3082eca-f363-4d6f-91c2-78579e8ff7fd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 04:32:52 np0005593232 nova_compute[250269]: 2026-01-23 09:32:52.974 250273 DEBUG nova.compute.manager [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:32:52 np0005593232 nova_compute[250269]: 2026-01-23 09:32:52.978 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160772.9780443, dcffb4e0-398b-4fa3-9eac-2020fa2f9b75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:32:52 np0005593232 nova_compute[250269]: 2026-01-23 09:32:52.978 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:32:52 np0005593232 nova_compute[250269]: 2026-01-23 09:32:52.980 250273 DEBUG nova.virt.libvirt.driver [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:32:52 np0005593232 nova_compute[250269]: 2026-01-23 09:32:52.983 250273 INFO nova.virt.libvirt.driver [-] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Instance spawned successfully.#033[00m
Jan 23 04:32:52 np0005593232 nova_compute[250269]: 2026-01-23 09:32:52.983 250273 DEBUG nova.virt.libvirt.driver [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:32:53 np0005593232 nova_compute[250269]: 2026-01-23 09:32:53.016 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:32:53 np0005593232 nova_compute[250269]: 2026-01-23 09:32:53.021 250273 DEBUG nova.virt.libvirt.driver [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:32:53 np0005593232 nova_compute[250269]: 2026-01-23 09:32:53.022 250273 DEBUG nova.virt.libvirt.driver [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:32:53 np0005593232 nova_compute[250269]: 2026-01-23 09:32:53.022 250273 DEBUG nova.virt.libvirt.driver [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:32:53 np0005593232 nova_compute[250269]: 2026-01-23 09:32:53.023 250273 DEBUG nova.virt.libvirt.driver [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:32:53 np0005593232 nova_compute[250269]: 2026-01-23 09:32:53.023 250273 DEBUG nova.virt.libvirt.driver [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:32:53 np0005593232 nova_compute[250269]: 2026-01-23 09:32:53.024 250273 DEBUG nova.virt.libvirt.driver [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:32:53 np0005593232 nova_compute[250269]: 2026-01-23 09:32:53.027 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:32:53 np0005593232 nova_compute[250269]: 2026-01-23 09:32:53.065 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:32:53 np0005593232 nova_compute[250269]: 2026-01-23 09:32:53.098 250273 INFO nova.compute.manager [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Took 11.62 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:32:53 np0005593232 nova_compute[250269]: 2026-01-23 09:32:53.098 250273 DEBUG nova.compute.manager [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:32:53 np0005593232 nova_compute[250269]: 2026-01-23 09:32:53.242 250273 INFO nova.compute.manager [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Took 12.73 seconds to build instance.#033[00m
Jan 23 04:32:53 np0005593232 nova_compute[250269]: 2026-01-23 09:32:53.301 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:53 np0005593232 nova_compute[250269]: 2026-01-23 09:32:53.313 250273 DEBUG oslo_concurrency.lockutils [None req-f3935d19-f5a4-4c79-8267-294b5edcfcb6 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "dcffb4e0-398b-4fa3-9eac-2020fa2f9b75" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.969s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:32:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:32:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Jan 23 04:32:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Jan 23 04:32:53 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Jan 23 04:32:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:32:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:53.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:32:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:53.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:54 np0005593232 nova_compute[250269]: 2026-01-23 09:32:54.357 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1145: 321 pgs: 321 active+clean; 536 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 4.1 MiB/s wr, 354 op/s
Jan 23 04:32:55 np0005593232 nova_compute[250269]: 2026-01-23 09:32:55.007 250273 DEBUG nova.compute.manager [req-53b05946-e46e-434f-8177-f2a639b05408 req-5ab0b3be-776c-4dba-b997-dcd5172fe97e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Received event network-vif-plugged-b3082eca-f363-4d6f-91c2-78579e8ff7fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:32:55 np0005593232 nova_compute[250269]: 2026-01-23 09:32:55.008 250273 DEBUG oslo_concurrency.lockutils [req-53b05946-e46e-434f-8177-f2a639b05408 req-5ab0b3be-776c-4dba-b997-dcd5172fe97e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "dcffb4e0-398b-4fa3-9eac-2020fa2f9b75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:32:55 np0005593232 nova_compute[250269]: 2026-01-23 09:32:55.008 250273 DEBUG oslo_concurrency.lockutils [req-53b05946-e46e-434f-8177-f2a639b05408 req-5ab0b3be-776c-4dba-b997-dcd5172fe97e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "dcffb4e0-398b-4fa3-9eac-2020fa2f9b75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:32:55 np0005593232 nova_compute[250269]: 2026-01-23 09:32:55.008 250273 DEBUG oslo_concurrency.lockutils [req-53b05946-e46e-434f-8177-f2a639b05408 req-5ab0b3be-776c-4dba-b997-dcd5172fe97e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "dcffb4e0-398b-4fa3-9eac-2020fa2f9b75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:32:55 np0005593232 nova_compute[250269]: 2026-01-23 09:32:55.008 250273 DEBUG nova.compute.manager [req-53b05946-e46e-434f-8177-f2a639b05408 req-5ab0b3be-776c-4dba-b997-dcd5172fe97e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] No waiting events found dispatching network-vif-plugged-b3082eca-f363-4d6f-91c2-78579e8ff7fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:32:55 np0005593232 nova_compute[250269]: 2026-01-23 09:32:55.008 250273 WARNING nova.compute.manager [req-53b05946-e46e-434f-8177-f2a639b05408 req-5ab0b3be-776c-4dba-b997-dcd5172fe97e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Received unexpected event network-vif-plugged-b3082eca-f363-4d6f-91c2-78579e8ff7fd for instance with vm_state active and task_state None.#033[00m
Jan 23 04:32:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:55.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:32:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:55.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:32:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:32:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3018630603' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:32:56 np0005593232 nova_compute[250269]: 2026-01-23 09:32:56.785 250273 DEBUG nova.compute.manager [req-f2405c23-f6d9-499b-a1ee-a7dfc4aa4b17 req-d4978480-7fd0-44eb-803d-f350766360c9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Received event network-changed-b3082eca-f363-4d6f-91c2-78579e8ff7fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:32:56 np0005593232 nova_compute[250269]: 2026-01-23 09:32:56.785 250273 DEBUG nova.compute.manager [req-f2405c23-f6d9-499b-a1ee-a7dfc4aa4b17 req-d4978480-7fd0-44eb-803d-f350766360c9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Refreshing instance network info cache due to event network-changed-b3082eca-f363-4d6f-91c2-78579e8ff7fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:32:56 np0005593232 nova_compute[250269]: 2026-01-23 09:32:56.786 250273 DEBUG oslo_concurrency.lockutils [req-f2405c23-f6d9-499b-a1ee-a7dfc4aa4b17 req-d4978480-7fd0-44eb-803d-f350766360c9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-dcffb4e0-398b-4fa3-9eac-2020fa2f9b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:32:56 np0005593232 nova_compute[250269]: 2026-01-23 09:32:56.786 250273 DEBUG oslo_concurrency.lockutils [req-f2405c23-f6d9-499b-a1ee-a7dfc4aa4b17 req-d4978480-7fd0-44eb-803d-f350766360c9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-dcffb4e0-398b-4fa3-9eac-2020fa2f9b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:32:56 np0005593232 nova_compute[250269]: 2026-01-23 09:32:56.787 250273 DEBUG nova.network.neutron [req-f2405c23-f6d9-499b-a1ee-a7dfc4aa4b17 req-d4978480-7fd0-44eb-803d-f350766360c9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Refreshing network info cache for port b3082eca-f363-4d6f-91c2-78579e8ff7fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:32:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1146: 321 pgs: 321 active+clean; 536 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 1.7 MiB/s wr, 314 op/s
Jan 23 04:32:57 np0005593232 nova_compute[250269]: 2026-01-23 09:32:57.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:32:57 np0005593232 nova_compute[250269]: 2026-01-23 09:32:57.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 23 04:32:57 np0005593232 nova_compute[250269]: 2026-01-23 09:32:57.329 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 23 04:32:57 np0005593232 nova_compute[250269]: 2026-01-23 09:32:57.329 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:32:57 np0005593232 nova_compute[250269]: 2026-01-23 09:32:57.330 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 23 04:32:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:57.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:57.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:58 np0005593232 nova_compute[250269]: 2026-01-23 09:32:58.301 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:32:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1147: 321 pgs: 321 active+clean; 555 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 1.0 MiB/s wr, 312 op/s
Jan 23 04:32:59 np0005593232 nova_compute[250269]: 2026-01-23 09:32:59.315 250273 DEBUG nova.compute.manager [req-326e380f-48f8-4be5-835c-1130554cbc37 req-fa527f01-696d-4475-891d-f1a38fe557d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Received event network-vif-unplugged-edc7d28f-eaba-44b8-9916-f2089618ca70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:32:59 np0005593232 nova_compute[250269]: 2026-01-23 09:32:59.317 250273 DEBUG oslo_concurrency.lockutils [req-326e380f-48f8-4be5-835c-1130554cbc37 req-fa527f01-696d-4475-891d-f1a38fe557d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:32:59 np0005593232 nova_compute[250269]: 2026-01-23 09:32:59.317 250273 DEBUG oslo_concurrency.lockutils [req-326e380f-48f8-4be5-835c-1130554cbc37 req-fa527f01-696d-4475-891d-f1a38fe557d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:32:59 np0005593232 nova_compute[250269]: 2026-01-23 09:32:59.318 250273 DEBUG oslo_concurrency.lockutils [req-326e380f-48f8-4be5-835c-1130554cbc37 req-fa527f01-696d-4475-891d-f1a38fe557d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:32:59 np0005593232 nova_compute[250269]: 2026-01-23 09:32:59.318 250273 DEBUG nova.compute.manager [req-326e380f-48f8-4be5-835c-1130554cbc37 req-fa527f01-696d-4475-891d-f1a38fe557d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] No waiting events found dispatching network-vif-unplugged-edc7d28f-eaba-44b8-9916-f2089618ca70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:32:59 np0005593232 nova_compute[250269]: 2026-01-23 09:32:59.318 250273 DEBUG nova.compute.manager [req-326e380f-48f8-4be5-835c-1130554cbc37 req-fa527f01-696d-4475-891d-f1a38fe557d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Received event network-vif-unplugged-edc7d28f-eaba-44b8-9916-f2089618ca70 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 04:32:59 np0005593232 nova_compute[250269]: 2026-01-23 09:32:59.344 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:32:59 np0005593232 nova_compute[250269]: 2026-01-23 09:32:59.359 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:32:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:32:59.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:59 np0005593232 nova_compute[250269]: 2026-01-23 09:32:59.960 250273 INFO nova.compute.manager [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Took 7.97 seconds for pre_live_migration on destination host compute-1.ctlplane.example.com.#033[00m
Jan 23 04:32:59 np0005593232 nova_compute[250269]: 2026-01-23 09:32:59.961 250273 DEBUG nova.compute.manager [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:32:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:32:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:32:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:32:59.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:32:59 np0005593232 nova_compute[250269]: 2026-01-23 09:32:59.977 250273 DEBUG nova.compute.manager [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpo7nk37a9',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='54a1ad4e-6fc9-42dc-aa4c-99d3f1297520',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(b15a37ef-d8e5-44a3-85e1-37de93fc8299),old_vol_attachment_ids={2c9770c1-d351-43fa-b18d-aaf9291801fe='27ad6cc5-03f7-4ff3-b319-d6a2b0f2119f'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Jan 23 04:32:59 np0005593232 nova_compute[250269]: 2026-01-23 09:32:59.980 250273 DEBUG nova.objects.instance [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Lazy-loading 'migration_context' on Instance uuid 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:32:59 np0005593232 nova_compute[250269]: 2026-01-23 09:32:59.982 250273 DEBUG nova.virt.libvirt.driver [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Jan 23 04:32:59 np0005593232 nova_compute[250269]: 2026-01-23 09:32:59.983 250273 DEBUG nova.virt.libvirt.driver [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Jan 23 04:32:59 np0005593232 nova_compute[250269]: 2026-01-23 09:32:59.984 250273 DEBUG nova.virt.libvirt.driver [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Jan 23 04:33:00 np0005593232 nova_compute[250269]: 2026-01-23 09:33:00.003 250273 DEBUG nova.virt.libvirt.migration [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Find same serial number: pos=1, serial=2c9770c1-d351-43fa-b18d-aaf9291801fe _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242#033[00m
Jan 23 04:33:00 np0005593232 nova_compute[250269]: 2026-01-23 09:33:00.004 250273 DEBUG nova.virt.libvirt.vif [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-23T09:32:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1815710456',display_name='tempest-LiveMigrationTest-server-1815710456',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1815710456',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:32:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c56e53b3339e4e4db30b7a9d330bc380',ramdisk_id='',reservation_id='r-mz7qsyn0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1903931568',owner_user_name='tempest-LiveMigrationTest-1903931568-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:32:46Z,user_data=None,user_id='a43b680a6019491aafe42c0a10e648df',uuid=54a1ad4e-6fc9-42dc-aa4c-99d3f1297520,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "edc7d28f-eaba-44b8-9916-f2089618ca70", "address": "fa:16:3e:c7:78:59", "network": {"id": "385e7a4d-f87e-44c5-9fc0-5a322eecd4b4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1143816535-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e53b3339e4e4db30b7a9d330bc380", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapedc7d28f-ea", "ovs_interfaceid": "edc7d28f-eaba-44b8-9916-f2089618ca70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:33:00 np0005593232 nova_compute[250269]: 2026-01-23 09:33:00.004 250273 DEBUG nova.network.os_vif_util [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Converting VIF {"id": "edc7d28f-eaba-44b8-9916-f2089618ca70", "address": "fa:16:3e:c7:78:59", "network": {"id": "385e7a4d-f87e-44c5-9fc0-5a322eecd4b4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1143816535-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e53b3339e4e4db30b7a9d330bc380", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapedc7d28f-ea", "ovs_interfaceid": "edc7d28f-eaba-44b8-9916-f2089618ca70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:33:00 np0005593232 nova_compute[250269]: 2026-01-23 09:33:00.005 250273 DEBUG nova.network.os_vif_util [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c7:78:59,bridge_name='br-int',has_traffic_filtering=True,id=edc7d28f-eaba-44b8-9916-f2089618ca70,network=Network(385e7a4d-f87e-44c5-9fc0-5a322eecd4b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedc7d28f-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:33:00 np0005593232 nova_compute[250269]: 2026-01-23 09:33:00.005 250273 DEBUG nova.virt.libvirt.migration [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Updating guest XML with vif config: <interface type="ethernet">
Jan 23 04:33:00 np0005593232 nova_compute[250269]:  <mac address="fa:16:3e:c7:78:59"/>
Jan 23 04:33:00 np0005593232 nova_compute[250269]:  <model type="virtio"/>
Jan 23 04:33:00 np0005593232 nova_compute[250269]:  <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:33:00 np0005593232 nova_compute[250269]:  <mtu size="1442"/>
Jan 23 04:33:00 np0005593232 nova_compute[250269]:  <target dev="tapedc7d28f-ea"/>
Jan 23 04:33:00 np0005593232 nova_compute[250269]: </interface>
Jan 23 04:33:00 np0005593232 nova_compute[250269]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Jan 23 04:33:00 np0005593232 nova_compute[250269]: 2026-01-23 09:33:00.006 250273 DEBUG nova.virt.libvirt.driver [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Jan 23 04:33:00 np0005593232 nova_compute[250269]: 2026-01-23 09:33:00.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:33:00 np0005593232 nova_compute[250269]: 2026-01-23 09:33:00.486 250273 DEBUG nova.virt.libvirt.migration [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 23 04:33:00 np0005593232 nova_compute[250269]: 2026-01-23 09:33:00.487 250273 INFO nova.virt.libvirt.migration [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Jan 23 04:33:00 np0005593232 nova_compute[250269]: 2026-01-23 09:33:00.601 250273 INFO nova.virt.libvirt.driver [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Jan 23 04:33:00 np0005593232 nova_compute[250269]: 2026-01-23 09:33:00.783 250273 DEBUG nova.network.neutron [req-f2405c23-f6d9-499b-a1ee-a7dfc4aa4b17 req-d4978480-7fd0-44eb-803d-f350766360c9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Updated VIF entry in instance network info cache for port b3082eca-f363-4d6f-91c2-78579e8ff7fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:33:00 np0005593232 nova_compute[250269]: 2026-01-23 09:33:00.784 250273 DEBUG nova.network.neutron [req-f2405c23-f6d9-499b-a1ee-a7dfc4aa4b17 req-d4978480-7fd0-44eb-803d-f350766360c9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Updating instance_info_cache with network_info: [{"id": "b3082eca-f363-4d6f-91c2-78579e8ff7fd", "address": "fa:16:3e:55:80:d7", "network": {"id": "e1f93ca8-d3d7-4404-b1f0-385022a03154", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1466517221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5071dae1c732441291c3cea4201538d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3082eca-f3", "ovs_interfaceid": "b3082eca-f363-4d6f-91c2-78579e8ff7fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:33:00 np0005593232 nova_compute[250269]: 2026-01-23 09:33:00.808 250273 DEBUG oslo_concurrency.lockutils [req-f2405c23-f6d9-499b-a1ee-a7dfc4aa4b17 req-d4978480-7fd0-44eb-803d-f350766360c9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-dcffb4e0-398b-4fa3-9eac-2020fa2f9b75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:33:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1148: 321 pgs: 321 active+clean; 555 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 1.0 MiB/s wr, 312 op/s
Jan 23 04:33:01 np0005593232 nova_compute[250269]: 2026-01-23 09:33:01.104 250273 DEBUG nova.virt.libvirt.migration [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 23 04:33:01 np0005593232 nova_compute[250269]: 2026-01-23 09:33:01.105 250273 DEBUG nova.virt.libvirt.migration [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Jan 23 04:33:01 np0005593232 nova_compute[250269]: 2026-01-23 09:33:01.328 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:33:01 np0005593232 nova_compute[250269]: 2026-01-23 09:33:01.328 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:33:01 np0005593232 nova_compute[250269]: 2026-01-23 09:33:01.329 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:33:01 np0005593232 nova_compute[250269]: 2026-01-23 09:33:01.329 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:33:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:01.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:01 np0005593232 nova_compute[250269]: 2026-01-23 09:33:01.594 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-f2d1fdc0-baaf-4566-8655-aafdbcf1f473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:33:01 np0005593232 nova_compute[250269]: 2026-01-23 09:33:01.596 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-f2d1fdc0-baaf-4566-8655-aafdbcf1f473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:33:01 np0005593232 nova_compute[250269]: 2026-01-23 09:33:01.596 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 04:33:01 np0005593232 nova_compute[250269]: 2026-01-23 09:33:01.596 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f2d1fdc0-baaf-4566-8655-aafdbcf1f473 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:33:01 np0005593232 nova_compute[250269]: 2026-01-23 09:33:01.608 250273 DEBUG nova.virt.libvirt.migration [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 23 04:33:01 np0005593232 nova_compute[250269]: 2026-01-23 09:33:01.610 250273 DEBUG nova.virt.libvirt.migration [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Jan 23 04:33:01 np0005593232 nova_compute[250269]: 2026-01-23 09:33:01.797 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:33:01 np0005593232 nova_compute[250269]: 2026-01-23 09:33:01.819 250273 DEBUG nova.compute.manager [req-af456a8b-f227-4d76-b6c4-a1a9f0a2dff4 req-ded67481-e6b0-4cd3-9c19-9808806e7412 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Received event network-vif-plugged-edc7d28f-eaba-44b8-9916-f2089618ca70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:33:01 np0005593232 nova_compute[250269]: 2026-01-23 09:33:01.820 250273 DEBUG oslo_concurrency.lockutils [req-af456a8b-f227-4d76-b6c4-a1a9f0a2dff4 req-ded67481-e6b0-4cd3-9c19-9808806e7412 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:01 np0005593232 nova_compute[250269]: 2026-01-23 09:33:01.820 250273 DEBUG oslo_concurrency.lockutils [req-af456a8b-f227-4d76-b6c4-a1a9f0a2dff4 req-ded67481-e6b0-4cd3-9c19-9808806e7412 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:01 np0005593232 nova_compute[250269]: 2026-01-23 09:33:01.821 250273 DEBUG oslo_concurrency.lockutils [req-af456a8b-f227-4d76-b6c4-a1a9f0a2dff4 req-ded67481-e6b0-4cd3-9c19-9808806e7412 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:01 np0005593232 nova_compute[250269]: 2026-01-23 09:33:01.821 250273 DEBUG nova.compute.manager [req-af456a8b-f227-4d76-b6c4-a1a9f0a2dff4 req-ded67481-e6b0-4cd3-9c19-9808806e7412 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] No waiting events found dispatching network-vif-plugged-edc7d28f-eaba-44b8-9916-f2089618ca70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:33:01 np0005593232 nova_compute[250269]: 2026-01-23 09:33:01.822 250273 WARNING nova.compute.manager [req-af456a8b-f227-4d76-b6c4-a1a9f0a2dff4 req-ded67481-e6b0-4cd3-9c19-9808806e7412 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Received unexpected event network-vif-plugged-edc7d28f-eaba-44b8-9916-f2089618ca70 for instance with vm_state active and task_state migrating.#033[00m
Jan 23 04:33:01 np0005593232 nova_compute[250269]: 2026-01-23 09:33:01.822 250273 DEBUG nova.compute.manager [req-af456a8b-f227-4d76-b6c4-a1a9f0a2dff4 req-ded67481-e6b0-4cd3-9c19-9808806e7412 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Received event network-changed-edc7d28f-eaba-44b8-9916-f2089618ca70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:33:01 np0005593232 nova_compute[250269]: 2026-01-23 09:33:01.822 250273 DEBUG nova.compute.manager [req-af456a8b-f227-4d76-b6c4-a1a9f0a2dff4 req-ded67481-e6b0-4cd3-9c19-9808806e7412 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Refreshing instance network info cache due to event network-changed-edc7d28f-eaba-44b8-9916-f2089618ca70. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:33:01 np0005593232 nova_compute[250269]: 2026-01-23 09:33:01.822 250273 DEBUG oslo_concurrency.lockutils [req-af456a8b-f227-4d76-b6c4-a1a9f0a2dff4 req-ded67481-e6b0-4cd3-9c19-9808806e7412 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-54a1ad4e-6fc9-42dc-aa4c-99d3f1297520" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:33:01 np0005593232 nova_compute[250269]: 2026-01-23 09:33:01.823 250273 DEBUG oslo_concurrency.lockutils [req-af456a8b-f227-4d76-b6c4-a1a9f0a2dff4 req-ded67481-e6b0-4cd3-9c19-9808806e7412 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-54a1ad4e-6fc9-42dc-aa4c-99d3f1297520" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:33:01 np0005593232 nova_compute[250269]: 2026-01-23 09:33:01.823 250273 DEBUG nova.network.neutron [req-af456a8b-f227-4d76-b6c4-a1a9f0a2dff4 req-ded67481-e6b0-4cd3-9c19-9808806e7412 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Refreshing network info cache for port edc7d28f-eaba-44b8-9916-f2089618ca70 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:33:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 04:33:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:01.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 04:33:02 np0005593232 nova_compute[250269]: 2026-01-23 09:33:02.112 250273 DEBUG nova.virt.libvirt.migration [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 23 04:33:02 np0005593232 nova_compute[250269]: 2026-01-23 09:33:02.113 250273 DEBUG nova.virt.libvirt.migration [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Jan 23 04:33:02 np0005593232 nova_compute[250269]: 2026-01-23 09:33:02.541 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:33:02 np0005593232 nova_compute[250269]: 2026-01-23 09:33:02.564 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-f2d1fdc0-baaf-4566-8655-aafdbcf1f473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:33:02 np0005593232 nova_compute[250269]: 2026-01-23 09:33:02.564 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 04:33:02 np0005593232 nova_compute[250269]: 2026-01-23 09:33:02.616 250273 DEBUG nova.virt.libvirt.migration [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 23 04:33:02 np0005593232 nova_compute[250269]: 2026-01-23 09:33:02.617 250273 DEBUG nova.virt.libvirt.migration [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Jan 23 04:33:02 np0005593232 nova_compute[250269]: 2026-01-23 09:33:02.715 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160782.7144575, 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:33:02 np0005593232 nova_compute[250269]: 2026-01-23 09:33:02.716 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:33:02 np0005593232 nova_compute[250269]: 2026-01-23 09:33:02.747 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:33:02 np0005593232 nova_compute[250269]: 2026-01-23 09:33:02.752 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:33:02 np0005593232 nova_compute[250269]: 2026-01-23 09:33:02.773 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Jan 23 04:33:02 np0005593232 kernel: tapedc7d28f-ea (unregistering): left promiscuous mode
Jan 23 04:33:02 np0005593232 NetworkManager[49057]: <info>  [1769160782.9127] device (tapedc7d28f-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:33:02 np0005593232 ovn_controller[151001]: 2026-01-23T09:33:02Z|00073|binding|INFO|Releasing lport edc7d28f-eaba-44b8-9916-f2089618ca70 from this chassis (sb_readonly=0)
Jan 23 04:33:02 np0005593232 ovn_controller[151001]: 2026-01-23T09:33:02Z|00074|binding|INFO|Setting lport edc7d28f-eaba-44b8-9916-f2089618ca70 down in Southbound
Jan 23 04:33:02 np0005593232 ovn_controller[151001]: 2026-01-23T09:33:02Z|00075|binding|INFO|Removing iface tapedc7d28f-ea ovn-installed in OVS
Jan 23 04:33:02 np0005593232 nova_compute[250269]: 2026-01-23 09:33:02.926 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:02.939 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:78:59 10.100.0.14'], port_security=['fa:16:3e:c7:78:59 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-1.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '539cfa5a-1c2f-4cb4-97af-2edb819f72fc'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '54a1ad4e-6fc9-42dc-aa4c-99d3f1297520', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c56e53b3339e4e4db30b7a9d330bc380', 'neutron:revision_number': '18', 'neutron:security_group_ids': 'c0c0e09a-b9c3-4a3a-af9e-c3b66e9f8bc1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cabb3d88-013b-4542-b789-52d49c567d53, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=edc7d28f-eaba-44b8-9916-f2089618ca70) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:33:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:02.941 161902 INFO neutron.agent.ovn.metadata.agent [-] Port edc7d28f-eaba-44b8-9916-f2089618ca70 in datapath 385e7a4d-f87e-44c5-9fc0-5a322eecd4b4 unbound from our chassis#033[00m
Jan 23 04:33:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:02.943 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 385e7a4d-f87e-44c5-9fc0-5a322eecd4b4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 04:33:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:02.944 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8e14a9aa-1803-43b7-be01-37ad59fc00f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:33:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:02.945 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4 namespace which is not needed anymore#033[00m
Jan 23 04:33:02 np0005593232 nova_compute[250269]: 2026-01-23 09:33:02.957 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1149: 321 pgs: 321 active+clean; 604 MiB data, 638 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 4.6 MiB/s wr, 406 op/s
Jan 23 04:33:02 np0005593232 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Jan 23 04:33:02 np0005593232 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000f.scope: Consumed 3.033s CPU time.
Jan 23 04:33:02 np0005593232 systemd-machined[215836]: Machine qemu-6-instance-0000000f terminated.
Jan 23 04:33:03 np0005593232 virtqemud[249592]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-2c9770c1-d351-43fa-b18d-aaf9291801fe: No such file or directory
Jan 23 04:33:03 np0005593232 virtqemud[249592]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-2c9770c1-d351-43fa-b18d-aaf9291801fe: No such file or directory
Jan 23 04:33:03 np0005593232 neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4[265381]: [NOTICE]   (265387) : haproxy version is 2.8.14-c23fe91
Jan 23 04:33:03 np0005593232 neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4[265381]: [NOTICE]   (265387) : path to executable is /usr/sbin/haproxy
Jan 23 04:33:03 np0005593232 neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4[265381]: [WARNING]  (265387) : Exiting Master process...
Jan 23 04:33:03 np0005593232 neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4[265381]: [ALERT]    (265387) : Current worker (265389) exited with code 143 (Terminated)
Jan 23 04:33:03 np0005593232 neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4[265381]: [WARNING]  (265387) : All workers exited. Exiting... (0)
Jan 23 04:33:03 np0005593232 systemd[1]: libpod-5675f454984a7069ca6c4114a4b8f9cc6016b3e99c720fb63b4ce4b38c281089.scope: Deactivated successfully.
Jan 23 04:33:03 np0005593232 podman[266125]: 2026-01-23 09:33:03.087489909 +0000 UTC m=+0.049281975 container died 5675f454984a7069ca6c4114a4b8f9cc6016b3e99c720fb63b4ce4b38c281089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:33:03 np0005593232 nova_compute[250269]: 2026-01-23 09:33:03.094 250273 DEBUG nova.virt.libvirt.driver [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Jan 23 04:33:03 np0005593232 nova_compute[250269]: 2026-01-23 09:33:03.094 250273 DEBUG nova.virt.libvirt.driver [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Jan 23 04:33:03 np0005593232 nova_compute[250269]: 2026-01-23 09:33:03.094 250273 DEBUG nova.virt.libvirt.driver [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Jan 23 04:33:03 np0005593232 nova_compute[250269]: 2026-01-23 09:33:03.119 250273 DEBUG nova.virt.libvirt.guest [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Domain has shutdown/gone away: Domain not found: no domain with matching uuid '54a1ad4e-6fc9-42dc-aa4c-99d3f1297520' (instance-0000000f) get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Jan 23 04:33:03 np0005593232 nova_compute[250269]: 2026-01-23 09:33:03.120 250273 INFO nova.virt.libvirt.driver [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Migration operation has completed#033[00m
Jan 23 04:33:03 np0005593232 nova_compute[250269]: 2026-01-23 09:33:03.120 250273 INFO nova.compute.manager [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] _post_live_migration() is started..#033[00m
Jan 23 04:33:03 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5675f454984a7069ca6c4114a4b8f9cc6016b3e99c720fb63b4ce4b38c281089-userdata-shm.mount: Deactivated successfully.
Jan 23 04:33:03 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e50f84ade3e66b912a1fbea03833a8e0b8f1def8ef73bd6cd8ecc26cc701df20-merged.mount: Deactivated successfully.
Jan 23 04:33:03 np0005593232 podman[266125]: 2026-01-23 09:33:03.133732667 +0000 UTC m=+0.095524713 container cleanup 5675f454984a7069ca6c4114a4b8f9cc6016b3e99c720fb63b4ce4b38c281089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:33:03 np0005593232 systemd[1]: libpod-conmon-5675f454984a7069ca6c4114a4b8f9cc6016b3e99c720fb63b4ce4b38c281089.scope: Deactivated successfully.
Jan 23 04:33:03 np0005593232 podman[266163]: 2026-01-23 09:33:03.193343966 +0000 UTC m=+0.038682373 container remove 5675f454984a7069ca6c4114a4b8f9cc6016b3e99c720fb63b4ce4b38c281089 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 04:33:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:03.213 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[92314513-1fbf-4642-a3f8-243ef1c97de4]: (4, ('Fri Jan 23 09:33:03 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4 (5675f454984a7069ca6c4114a4b8f9cc6016b3e99c720fb63b4ce4b38c281089)\n5675f454984a7069ca6c4114a4b8f9cc6016b3e99c720fb63b4ce4b38c281089\nFri Jan 23 09:33:03 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4 (5675f454984a7069ca6c4114a4b8f9cc6016b3e99c720fb63b4ce4b38c281089)\n5675f454984a7069ca6c4114a4b8f9cc6016b3e99c720fb63b4ce4b38c281089\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:33:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:03.215 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e443df64-01c9-42cc-bafa-226fdf5af7ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:33:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:03.216 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap385e7a4d-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:33:03 np0005593232 nova_compute[250269]: 2026-01-23 09:33:03.254 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:03 np0005593232 kernel: tap385e7a4d-f0: left promiscuous mode
Jan 23 04:33:03 np0005593232 nova_compute[250269]: 2026-01-23 09:33:03.277 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:03.280 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1f3b9013-d80b-478e-aaf2-f6d30001ab6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:33:03 np0005593232 nova_compute[250269]: 2026-01-23 09:33:03.302 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:03.303 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0323c601-0874-40a4-bf28-139fafcd756e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:33:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:03.304 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[19694975-b951-456e-b256-e36c2bbfe474]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:33:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:03.321 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[987b3267-7898-47f2-8516-d7e48a7031e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467340, 'reachable_time': 16214, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266182, 'error': None, 'target': 'ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:33:03 np0005593232 systemd[1]: run-netns-ovnmeta\x2d385e7a4d\x2df87e\x2d44c5\x2d9fc0\x2d5a322eecd4b4.mount: Deactivated successfully.
Jan 23 04:33:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:03.325 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-385e7a4d-f87e-44c5-9fc0-5a322eecd4b4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 04:33:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:03.326 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[f57e0d2f-f1a0-4f18-98f4-2a3fdb9474ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:33:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:33:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:03.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:03.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:04 np0005593232 nova_compute[250269]: 2026-01-23 09:33:04.027 250273 DEBUG nova.compute.manager [req-7a086259-2a33-4fd7-82bc-ead082bbe47e req-b947c671-80cb-43b1-adc2-84e544f0968e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Received event network-vif-unplugged-edc7d28f-eaba-44b8-9916-f2089618ca70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:33:04 np0005593232 nova_compute[250269]: 2026-01-23 09:33:04.027 250273 DEBUG oslo_concurrency.lockutils [req-7a086259-2a33-4fd7-82bc-ead082bbe47e req-b947c671-80cb-43b1-adc2-84e544f0968e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:04 np0005593232 nova_compute[250269]: 2026-01-23 09:33:04.028 250273 DEBUG oslo_concurrency.lockutils [req-7a086259-2a33-4fd7-82bc-ead082bbe47e req-b947c671-80cb-43b1-adc2-84e544f0968e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:04 np0005593232 nova_compute[250269]: 2026-01-23 09:33:04.028 250273 DEBUG oslo_concurrency.lockutils [req-7a086259-2a33-4fd7-82bc-ead082bbe47e req-b947c671-80cb-43b1-adc2-84e544f0968e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:04 np0005593232 nova_compute[250269]: 2026-01-23 09:33:04.028 250273 DEBUG nova.compute.manager [req-7a086259-2a33-4fd7-82bc-ead082bbe47e req-b947c671-80cb-43b1-adc2-84e544f0968e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] No waiting events found dispatching network-vif-unplugged-edc7d28f-eaba-44b8-9916-f2089618ca70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:33:04 np0005593232 nova_compute[250269]: 2026-01-23 09:33:04.028 250273 DEBUG nova.compute.manager [req-7a086259-2a33-4fd7-82bc-ead082bbe47e req-b947c671-80cb-43b1-adc2-84e544f0968e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Received event network-vif-unplugged-edc7d28f-eaba-44b8-9916-f2089618ca70 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 04:33:04 np0005593232 nova_compute[250269]: 2026-01-23 09:33:04.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:33:04 np0005593232 nova_compute[250269]: 2026-01-23 09:33:04.328 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:33:04 np0005593232 nova_compute[250269]: 2026-01-23 09:33:04.328 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:33:04 np0005593232 nova_compute[250269]: 2026-01-23 09:33:04.329 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:33:04 np0005593232 nova_compute[250269]: 2026-01-23 09:33:04.361 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:04 np0005593232 podman[266183]: 2026-01-23 09:33:04.414145378 +0000 UTC m=+0.070133379 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 23 04:33:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1150: 321 pgs: 321 active+clean; 604 MiB data, 638 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 4.0 MiB/s wr, 272 op/s
Jan 23 04:33:05 np0005593232 nova_compute[250269]: 2026-01-23 09:33:05.035 250273 DEBUG nova.network.neutron [req-af456a8b-f227-4d76-b6c4-a1a9f0a2dff4 req-ded67481-e6b0-4cd3-9c19-9808806e7412 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Updated VIF entry in instance network info cache for port edc7d28f-eaba-44b8-9916-f2089618ca70. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:33:05 np0005593232 nova_compute[250269]: 2026-01-23 09:33:05.036 250273 DEBUG nova.network.neutron [req-af456a8b-f227-4d76-b6c4-a1a9f0a2dff4 req-ded67481-e6b0-4cd3-9c19-9808806e7412 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Updating instance_info_cache with network_info: [{"id": "edc7d28f-eaba-44b8-9916-f2089618ca70", "address": "fa:16:3e:c7:78:59", "network": {"id": "385e7a4d-f87e-44c5-9fc0-5a322eecd4b4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1143816535-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e53b3339e4e4db30b7a9d330bc380", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedc7d28f-ea", "ovs_interfaceid": "edc7d28f-eaba-44b8-9916-f2089618ca70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true, "migrating_to": "compute-1.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:33:05 np0005593232 nova_compute[250269]: 2026-01-23 09:33:05.057 250273 DEBUG oslo_concurrency.lockutils [req-af456a8b-f227-4d76-b6c4-a1a9f0a2dff4 req-ded67481-e6b0-4cd3-9c19-9808806e7412 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-54a1ad4e-6fc9-42dc-aa4c-99d3f1297520" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:33:05 np0005593232 nova_compute[250269]: 2026-01-23 09:33:05.171 250273 DEBUG nova.compute.manager [req-b17e6536-ac7b-4b06-a294-dccd2b7bc6d2 req-8e54cf13-de87-4a85-8445-ff4ebe028f3f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Received event network-vif-unplugged-edc7d28f-eaba-44b8-9916-f2089618ca70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:33:05 np0005593232 nova_compute[250269]: 2026-01-23 09:33:05.172 250273 DEBUG oslo_concurrency.lockutils [req-b17e6536-ac7b-4b06-a294-dccd2b7bc6d2 req-8e54cf13-de87-4a85-8445-ff4ebe028f3f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:05 np0005593232 nova_compute[250269]: 2026-01-23 09:33:05.172 250273 DEBUG oslo_concurrency.lockutils [req-b17e6536-ac7b-4b06-a294-dccd2b7bc6d2 req-8e54cf13-de87-4a85-8445-ff4ebe028f3f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:05 np0005593232 nova_compute[250269]: 2026-01-23 09:33:05.172 250273 DEBUG oslo_concurrency.lockutils [req-b17e6536-ac7b-4b06-a294-dccd2b7bc6d2 req-8e54cf13-de87-4a85-8445-ff4ebe028f3f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:05 np0005593232 nova_compute[250269]: 2026-01-23 09:33:05.173 250273 DEBUG nova.compute.manager [req-b17e6536-ac7b-4b06-a294-dccd2b7bc6d2 req-8e54cf13-de87-4a85-8445-ff4ebe028f3f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] No waiting events found dispatching network-vif-unplugged-edc7d28f-eaba-44b8-9916-f2089618ca70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:33:05 np0005593232 nova_compute[250269]: 2026-01-23 09:33:05.173 250273 DEBUG nova.compute.manager [req-b17e6536-ac7b-4b06-a294-dccd2b7bc6d2 req-8e54cf13-de87-4a85-8445-ff4ebe028f3f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Received event network-vif-unplugged-edc7d28f-eaba-44b8-9916-f2089618ca70 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 04:33:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:05.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:05 np0005593232 nova_compute[250269]: 2026-01-23 09:33:05.913 250273 DEBUG nova.network.neutron [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Activated binding for port edc7d28f-eaba-44b8-9916-f2089618ca70 and host compute-1.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Jan 23 04:33:05 np0005593232 nova_compute[250269]: 2026-01-23 09:33:05.914 250273 DEBUG nova.compute.manager [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "edc7d28f-eaba-44b8-9916-f2089618ca70", "address": "fa:16:3e:c7:78:59", "network": {"id": "385e7a4d-f87e-44c5-9fc0-5a322eecd4b4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1143816535-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e53b3339e4e4db30b7a9d330bc380", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedc7d28f-ea", "ovs_interfaceid": "edc7d28f-eaba-44b8-9916-f2089618ca70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Jan 23 04:33:05 np0005593232 nova_compute[250269]: 2026-01-23 09:33:05.915 250273 DEBUG nova.virt.libvirt.vif [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-23T09:32:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1815710456',display_name='tempest-LiveMigrationTest-server-1815710456',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1815710456',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:32:19Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c56e53b3339e4e4db30b7a9d330bc380',ramdisk_id='',reservation_id='r-mz7qsyn0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigration
Test-1903931568',owner_user_name='tempest-LiveMigrationTest-1903931568-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:32:49Z,user_data=None,user_id='a43b680a6019491aafe42c0a10e648df',uuid=54a1ad4e-6fc9-42dc-aa4c-99d3f1297520,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "edc7d28f-eaba-44b8-9916-f2089618ca70", "address": "fa:16:3e:c7:78:59", "network": {"id": "385e7a4d-f87e-44c5-9fc0-5a322eecd4b4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1143816535-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e53b3339e4e4db30b7a9d330bc380", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedc7d28f-ea", "ovs_interfaceid": "edc7d28f-eaba-44b8-9916-f2089618ca70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:33:05 np0005593232 nova_compute[250269]: 2026-01-23 09:33:05.915 250273 DEBUG nova.network.os_vif_util [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Converting VIF {"id": "edc7d28f-eaba-44b8-9916-f2089618ca70", "address": "fa:16:3e:c7:78:59", "network": {"id": "385e7a4d-f87e-44c5-9fc0-5a322eecd4b4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-1143816535-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c56e53b3339e4e4db30b7a9d330bc380", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedc7d28f-ea", "ovs_interfaceid": "edc7d28f-eaba-44b8-9916-f2089618ca70", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:33:05 np0005593232 nova_compute[250269]: 2026-01-23 09:33:05.916 250273 DEBUG nova.network.os_vif_util [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c7:78:59,bridge_name='br-int',has_traffic_filtering=True,id=edc7d28f-eaba-44b8-9916-f2089618ca70,network=Network(385e7a4d-f87e-44c5-9fc0-5a322eecd4b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedc7d28f-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:33:05 np0005593232 nova_compute[250269]: 2026-01-23 09:33:05.916 250273 DEBUG os_vif [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:78:59,bridge_name='br-int',has_traffic_filtering=True,id=edc7d28f-eaba-44b8-9916-f2089618ca70,network=Network(385e7a4d-f87e-44c5-9fc0-5a322eecd4b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedc7d28f-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:33:05 np0005593232 nova_compute[250269]: 2026-01-23 09:33:05.919 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:05 np0005593232 nova_compute[250269]: 2026-01-23 09:33:05.919 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapedc7d28f-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:33:05 np0005593232 nova_compute[250269]: 2026-01-23 09:33:05.921 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:05.923 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:33:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:06.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.031 250273 INFO os_vif [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:78:59,bridge_name='br-int',has_traffic_filtering=True,id=edc7d28f-eaba-44b8-9916-f2089618ca70,network=Network(385e7a4d-f87e-44c5-9fc0-5a322eecd4b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedc7d28f-ea')#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.032 250273 DEBUG oslo_concurrency.lockutils [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.032 250273 DEBUG oslo_concurrency.lockutils [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.032 250273 DEBUG oslo_concurrency.lockutils [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.032 250273 DEBUG nova.compute.manager [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.033 250273 INFO nova.virt.libvirt.driver [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Deleting instance files /var/lib/nova/instances/54a1ad4e-6fc9-42dc-aa4c-99d3f1297520_del#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.033 250273 INFO nova.virt.libvirt.driver [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Deletion of /var/lib/nova/instances/54a1ad4e-6fc9-42dc-aa4c-99d3f1297520_del complete#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.260 250273 DEBUG nova.compute.manager [req-6c476bcf-fe94-4155-a5af-95b511f06e7b req-4ff4bfaf-8556-4764-953f-70d6b6c1581f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Received event network-vif-plugged-edc7d28f-eaba-44b8-9916-f2089618ca70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.261 250273 DEBUG oslo_concurrency.lockutils [req-6c476bcf-fe94-4155-a5af-95b511f06e7b req-4ff4bfaf-8556-4764-953f-70d6b6c1581f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.261 250273 DEBUG oslo_concurrency.lockutils [req-6c476bcf-fe94-4155-a5af-95b511f06e7b req-4ff4bfaf-8556-4764-953f-70d6b6c1581f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.262 250273 DEBUG oslo_concurrency.lockutils [req-6c476bcf-fe94-4155-a5af-95b511f06e7b req-4ff4bfaf-8556-4764-953f-70d6b6c1581f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.262 250273 DEBUG nova.compute.manager [req-6c476bcf-fe94-4155-a5af-95b511f06e7b req-4ff4bfaf-8556-4764-953f-70d6b6c1581f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] No waiting events found dispatching network-vif-plugged-edc7d28f-eaba-44b8-9916-f2089618ca70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.262 250273 WARNING nova.compute.manager [req-6c476bcf-fe94-4155-a5af-95b511f06e7b req-4ff4bfaf-8556-4764-953f-70d6b6c1581f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Received unexpected event network-vif-plugged-edc7d28f-eaba-44b8-9916-f2089618ca70 for instance with vm_state active and task_state migrating.#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.262 250273 DEBUG nova.compute.manager [req-6c476bcf-fe94-4155-a5af-95b511f06e7b req-4ff4bfaf-8556-4764-953f-70d6b6c1581f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Received event network-vif-plugged-edc7d28f-eaba-44b8-9916-f2089618ca70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.262 250273 DEBUG oslo_concurrency.lockutils [req-6c476bcf-fe94-4155-a5af-95b511f06e7b req-4ff4bfaf-8556-4764-953f-70d6b6c1581f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.263 250273 DEBUG oslo_concurrency.lockutils [req-6c476bcf-fe94-4155-a5af-95b511f06e7b req-4ff4bfaf-8556-4764-953f-70d6b6c1581f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.263 250273 DEBUG oslo_concurrency.lockutils [req-6c476bcf-fe94-4155-a5af-95b511f06e7b req-4ff4bfaf-8556-4764-953f-70d6b6c1581f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.263 250273 DEBUG nova.compute.manager [req-6c476bcf-fe94-4155-a5af-95b511f06e7b req-4ff4bfaf-8556-4764-953f-70d6b6c1581f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] No waiting events found dispatching network-vif-plugged-edc7d28f-eaba-44b8-9916-f2089618ca70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.264 250273 WARNING nova.compute.manager [req-6c476bcf-fe94-4155-a5af-95b511f06e7b req-4ff4bfaf-8556-4764-953f-70d6b6c1581f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Received unexpected event network-vif-plugged-edc7d28f-eaba-44b8-9916-f2089618ca70 for instance with vm_state active and task_state migrating.#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.264 250273 DEBUG nova.compute.manager [req-6c476bcf-fe94-4155-a5af-95b511f06e7b req-4ff4bfaf-8556-4764-953f-70d6b6c1581f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Received event network-vif-plugged-edc7d28f-eaba-44b8-9916-f2089618ca70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.264 250273 DEBUG oslo_concurrency.lockutils [req-6c476bcf-fe94-4155-a5af-95b511f06e7b req-4ff4bfaf-8556-4764-953f-70d6b6c1581f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.264 250273 DEBUG oslo_concurrency.lockutils [req-6c476bcf-fe94-4155-a5af-95b511f06e7b req-4ff4bfaf-8556-4764-953f-70d6b6c1581f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.265 250273 DEBUG oslo_concurrency.lockutils [req-6c476bcf-fe94-4155-a5af-95b511f06e7b req-4ff4bfaf-8556-4764-953f-70d6b6c1581f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.265 250273 DEBUG nova.compute.manager [req-6c476bcf-fe94-4155-a5af-95b511f06e7b req-4ff4bfaf-8556-4764-953f-70d6b6c1581f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] No waiting events found dispatching network-vif-plugged-edc7d28f-eaba-44b8-9916-f2089618ca70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.265 250273 WARNING nova.compute.manager [req-6c476bcf-fe94-4155-a5af-95b511f06e7b req-4ff4bfaf-8556-4764-953f-70d6b6c1581f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Received unexpected event network-vif-plugged-edc7d28f-eaba-44b8-9916-f2089618ca70 for instance with vm_state active and task_state migrating.#033[00m
Jan 23 04:33:06 np0005593232 nova_compute[250269]: 2026-01-23 09:33:06.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:33:06 np0005593232 ovn_controller[151001]: 2026-01-23T09:33:06Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:55:80:d7 10.100.0.8
Jan 23 04:33:06 np0005593232 ovn_controller[151001]: 2026-01-23T09:33:06Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:55:80:d7 10.100.0.8
Jan 23 04:33:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1151: 321 pgs: 321 active+clean; 622 MiB data, 656 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 5.4 MiB/s wr, 312 op/s
Jan 23 04:33:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:33:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:33:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:33:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:33:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:33:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:33:07 np0005593232 nova_compute[250269]: 2026-01-23 09:33:07.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:33:07 np0005593232 nova_compute[250269]: 2026-01-23 09:33:07.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:33:07 np0005593232 nova_compute[250269]: 2026-01-23 09:33:07.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:33:07 np0005593232 nova_compute[250269]: 2026-01-23 09:33:07.327 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:07 np0005593232 nova_compute[250269]: 2026-01-23 09:33:07.328 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:07 np0005593232 nova_compute[250269]: 2026-01-23 09:33:07.328 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:07 np0005593232 nova_compute[250269]: 2026-01-23 09:33:07.328 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:33:07 np0005593232 nova_compute[250269]: 2026-01-23 09:33:07.328 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:33:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:33:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:07.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:33:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:33:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3172821414' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:33:07 np0005593232 nova_compute[250269]: 2026-01-23 09:33:07.928 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:33:08 np0005593232 nova_compute[250269]: 2026-01-23 09:33:08.030 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:33:08 np0005593232 nova_compute[250269]: 2026-01-23 09:33:08.030 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:33:08 np0005593232 nova_compute[250269]: 2026-01-23 09:33:08.031 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:33:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:08.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:08 np0005593232 nova_compute[250269]: 2026-01-23 09:33:08.036 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:33:08 np0005593232 nova_compute[250269]: 2026-01-23 09:33:08.036 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:33:08 np0005593232 nova_compute[250269]: 2026-01-23 09:33:08.190 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:33:08 np0005593232 nova_compute[250269]: 2026-01-23 09:33:08.191 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4413MB free_disk=20.707080841064453GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:33:08 np0005593232 nova_compute[250269]: 2026-01-23 09:33:08.192 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:08 np0005593232 nova_compute[250269]: 2026-01-23 09:33:08.192 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:08 np0005593232 nova_compute[250269]: 2026-01-23 09:33:08.279 250273 INFO nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Updating resource usage from migration b15a37ef-d8e5-44a3-85e1-37de93fc8299#033[00m
Jan 23 04:33:08 np0005593232 nova_compute[250269]: 2026-01-23 09:33:08.305 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:08 np0005593232 nova_compute[250269]: 2026-01-23 09:33:08.403 250273 DEBUG nova.compute.manager [req-677976d9-58b9-43f7-ab5d-cdb6977c01c4 req-ac581b7d-ee75-4d04-ba9e-37e3f5c2d863 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Received event network-vif-plugged-edc7d28f-eaba-44b8-9916-f2089618ca70 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:33:08 np0005593232 nova_compute[250269]: 2026-01-23 09:33:08.403 250273 DEBUG oslo_concurrency.lockutils [req-677976d9-58b9-43f7-ab5d-cdb6977c01c4 req-ac581b7d-ee75-4d04-ba9e-37e3f5c2d863 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:08 np0005593232 nova_compute[250269]: 2026-01-23 09:33:08.404 250273 DEBUG oslo_concurrency.lockutils [req-677976d9-58b9-43f7-ab5d-cdb6977c01c4 req-ac581b7d-ee75-4d04-ba9e-37e3f5c2d863 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:08 np0005593232 nova_compute[250269]: 2026-01-23 09:33:08.404 250273 DEBUG oslo_concurrency.lockutils [req-677976d9-58b9-43f7-ab5d-cdb6977c01c4 req-ac581b7d-ee75-4d04-ba9e-37e3f5c2d863 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:08 np0005593232 nova_compute[250269]: 2026-01-23 09:33:08.404 250273 DEBUG nova.compute.manager [req-677976d9-58b9-43f7-ab5d-cdb6977c01c4 req-ac581b7d-ee75-4d04-ba9e-37e3f5c2d863 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] No waiting events found dispatching network-vif-plugged-edc7d28f-eaba-44b8-9916-f2089618ca70 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:33:08 np0005593232 nova_compute[250269]: 2026-01-23 09:33:08.405 250273 WARNING nova.compute.manager [req-677976d9-58b9-43f7-ab5d-cdb6977c01c4 req-ac581b7d-ee75-4d04-ba9e-37e3f5c2d863 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Received unexpected event network-vif-plugged-edc7d28f-eaba-44b8-9916-f2089618ca70 for instance with vm_state active and task_state migrating.#033[00m
Jan 23 04:33:08 np0005593232 nova_compute[250269]: 2026-01-23 09:33:08.435 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance f2d1fdc0-baaf-4566-8655-aafdbcf1f473 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:33:08 np0005593232 nova_compute[250269]: 2026-01-23 09:33:08.436 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance dcffb4e0-398b-4fa3-9eac-2020fa2f9b75 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:33:08 np0005593232 nova_compute[250269]: 2026-01-23 09:33:08.436 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Migration b15a37ef-d8e5-44a3-85e1-37de93fc8299 is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 23 04:33:08 np0005593232 nova_compute[250269]: 2026-01-23 09:33:08.436 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:33:08 np0005593232 nova_compute[250269]: 2026-01-23 09:33:08.436 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=960MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:33:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:33:08 np0005593232 nova_compute[250269]: 2026-01-23 09:33:08.608 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:33:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1152: 321 pgs: 321 active+clean; 595 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 6.0 MiB/s wr, 330 op/s
Jan 23 04:33:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:33:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3759786397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:33:09 np0005593232 nova_compute[250269]: 2026-01-23 09:33:09.345 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.737s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:33:09 np0005593232 nova_compute[250269]: 2026-01-23 09:33:09.353 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:33:09 np0005593232 nova_compute[250269]: 2026-01-23 09:33:09.397 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:33:09 np0005593232 nova_compute[250269]: 2026-01-23 09:33:09.429 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:33:09 np0005593232 nova_compute[250269]: 2026-01-23 09:33:09.429 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:09.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:33:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:10.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:33:10 np0005593232 nova_compute[250269]: 2026-01-23 09:33:10.922 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1153: 321 pgs: 321 active+clean; 595 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.4 MiB/s wr, 282 op/s
Jan 23 04:33:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:11.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:11 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Jan 23 04:33:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:33:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:12.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:33:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1154: 321 pgs: 321 active+clean; 628 MiB data, 665 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 7.9 MiB/s wr, 407 op/s
Jan 23 04:33:13 np0005593232 nova_compute[250269]: 2026-01-23 09:33:13.037 250273 DEBUG oslo_concurrency.lockutils [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Acquiring lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:13 np0005593232 nova_compute[250269]: 2026-01-23 09:33:13.038 250273 DEBUG oslo_concurrency.lockutils [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:13 np0005593232 nova_compute[250269]: 2026-01-23 09:33:13.038 250273 DEBUG oslo_concurrency.lockutils [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Lock "54a1ad4e-6fc9-42dc-aa4c-99d3f1297520-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:13 np0005593232 nova_compute[250269]: 2026-01-23 09:33:13.063 250273 DEBUG oslo_concurrency.lockutils [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:13 np0005593232 nova_compute[250269]: 2026-01-23 09:33:13.063 250273 DEBUG oslo_concurrency.lockutils [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:13 np0005593232 nova_compute[250269]: 2026-01-23 09:33:13.063 250273 DEBUG oslo_concurrency.lockutils [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:13 np0005593232 nova_compute[250269]: 2026-01-23 09:33:13.064 250273 DEBUG nova.compute.resource_tracker [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:33:13 np0005593232 nova_compute[250269]: 2026-01-23 09:33:13.064 250273 DEBUG oslo_concurrency.processutils [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:33:13 np0005593232 nova_compute[250269]: 2026-01-23 09:33:13.307 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:33:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:33:13 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3141836134' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:33:13 np0005593232 nova_compute[250269]: 2026-01-23 09:33:13.560 250273 DEBUG oslo_concurrency.processutils [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:33:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:13.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:13 np0005593232 nova_compute[250269]: 2026-01-23 09:33:13.733 250273 DEBUG nova.virt.libvirt.driver [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:33:13 np0005593232 nova_compute[250269]: 2026-01-23 09:33:13.734 250273 DEBUG nova.virt.libvirt.driver [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:33:13 np0005593232 nova_compute[250269]: 2026-01-23 09:33:13.734 250273 DEBUG nova.virt.libvirt.driver [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:33:13 np0005593232 nova_compute[250269]: 2026-01-23 09:33:13.737 250273 DEBUG nova.virt.libvirt.driver [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:33:13 np0005593232 nova_compute[250269]: 2026-01-23 09:33:13.737 250273 DEBUG nova.virt.libvirt.driver [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:33:13 np0005593232 nova_compute[250269]: 2026-01-23 09:33:13.905 250273 WARNING nova.virt.libvirt.driver [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:33:13 np0005593232 nova_compute[250269]: 2026-01-23 09:33:13.906 250273 DEBUG nova.compute.resource_tracker [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4400MB free_disk=20.71023941040039GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": 
"0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:33:13 np0005593232 nova_compute[250269]: 2026-01-23 09:33:13.906 250273 DEBUG oslo_concurrency.lockutils [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:13 np0005593232 nova_compute[250269]: 2026-01-23 09:33:13.906 250273 DEBUG oslo_concurrency.lockutils [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:13 np0005593232 nova_compute[250269]: 2026-01-23 09:33:13.986 250273 DEBUG nova.compute.resource_tracker [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Migration for instance 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.024 250273 DEBUG nova.compute.resource_tracker [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Jan 23 04:33:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:33:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:14.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.060 250273 DEBUG nova.compute.resource_tracker [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Instance f2d1fdc0-baaf-4566-8655-aafdbcf1f473 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.060 250273 DEBUG nova.compute.resource_tracker [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Instance dcffb4e0-398b-4fa3-9eac-2020fa2f9b75 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.061 250273 DEBUG nova.compute.resource_tracker [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Migration b15a37ef-d8e5-44a3-85e1-37de93fc8299 is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.061 250273 DEBUG nova.compute.resource_tracker [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.062 250273 DEBUG nova.compute.resource_tracker [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.185 250273 DEBUG oslo_concurrency.processutils [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.496 250273 DEBUG oslo_concurrency.lockutils [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Acquiring lock "dcffb4e0-398b-4fa3-9eac-2020fa2f9b75" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.496 250273 DEBUG oslo_concurrency.lockutils [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "dcffb4e0-398b-4fa3-9eac-2020fa2f9b75" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.497 250273 DEBUG oslo_concurrency.lockutils [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Acquiring lock "dcffb4e0-398b-4fa3-9eac-2020fa2f9b75-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.497 250273 DEBUG oslo_concurrency.lockutils [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "dcffb4e0-398b-4fa3-9eac-2020fa2f9b75-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.498 250273 DEBUG oslo_concurrency.lockutils [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "dcffb4e0-398b-4fa3-9eac-2020fa2f9b75-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.500 250273 INFO nova.compute.manager [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Terminating instance#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.501 250273 DEBUG nova.compute.manager [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 04:33:14 np0005593232 kernel: tapb3082eca-f3 (unregistering): left promiscuous mode
Jan 23 04:33:14 np0005593232 NetworkManager[49057]: <info>  [1769160794.5863] device (tapb3082eca-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:33:14 np0005593232 ovn_controller[151001]: 2026-01-23T09:33:14Z|00076|binding|INFO|Releasing lport b3082eca-f363-4d6f-91c2-78579e8ff7fd from this chassis (sb_readonly=0)
Jan 23 04:33:14 np0005593232 ovn_controller[151001]: 2026-01-23T09:33:14Z|00077|binding|INFO|Setting lport b3082eca-f363-4d6f-91c2-78579e8ff7fd down in Southbound
Jan 23 04:33:14 np0005593232 ovn_controller[151001]: 2026-01-23T09:33:14Z|00078|binding|INFO|Removing iface tapb3082eca-f3 ovn-installed in OVS
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.614 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:14.634 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:55:80:d7 10.100.0.8'], port_security=['fa:16:3e:55:80:d7 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'dcffb4e0-398b-4fa3-9eac-2020fa2f9b75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e1f93ca8-d3d7-4404-b1f0-385022a03154', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5071dae1c732441291c3cea4201538d1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '485475b6-fa5d-49cf-9e7a-d68656021da2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.221'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd9ebd18-6b3b-45ea-8383-db75ba482bdb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=b3082eca-f363-4d6f-91c2-78579e8ff7fd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:33:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:14.635 161902 INFO neutron.agent.ovn.metadata.agent [-] Port b3082eca-f363-4d6f-91c2-78579e8ff7fd in datapath e1f93ca8-d3d7-4404-b1f0-385022a03154 unbound from our chassis#033[00m
Jan 23 04:33:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:14.637 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e1f93ca8-d3d7-4404-b1f0-385022a03154, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 04:33:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:14.639 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[df031c97-23f9-4388-b06e-6a61a18b722e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:33:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:14.639 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154 namespace which is not needed anymore#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.642 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:33:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2960758617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:33:14 np0005593232 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000011.scope: Deactivated successfully.
Jan 23 04:33:14 np0005593232 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000011.scope: Consumed 15.251s CPU time.
Jan 23 04:33:14 np0005593232 systemd-machined[215836]: Machine qemu-7-instance-00000011 terminated.
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.672 250273 DEBUG oslo_concurrency.processutils [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.678 250273 DEBUG nova.compute.provider_tree [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.699 250273 DEBUG nova.scheduler.client.report [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.731 250273 DEBUG nova.compute.resource_tracker [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.732 250273 DEBUG oslo_concurrency.lockutils [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.740 250273 INFO nova.compute.manager [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Migrating instance to compute-1.ctlplane.example.com finished successfully.#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.744 250273 INFO nova.virt.libvirt.driver [-] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Instance destroyed successfully.#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.745 250273 DEBUG nova.objects.instance [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lazy-loading 'resources' on Instance uuid dcffb4e0-398b-4fa3-9eac-2020fa2f9b75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.762 250273 DEBUG nova.virt.libvirt.vif [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:32:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-63411154',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-63411154',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(14),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-63411154',id=17,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=14,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG55OzgmhMbHj/AdekeVIJzfHbYBd/FfQqGtG06NkSPUh44muiV0W4+jTbr0/+5N+bSuPO5vRZ2E3ny+4RPwCoV8mIh43dtiZy7+WnjnptN/wd9TpQ5NxnJIVtbxdhFRlA==',key_name='tempest-keypair-1596118567',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:32:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5071dae1c732441291c3cea4201538d1',ramdisk_id='',reservation_id='r-dzpuoyof',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-2097391700',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-2097391700-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:32:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9914fa5b09794fda94ca3cb12e25549f',uuid=dcffb4e0-398b-4fa3-9eac-2020fa2f9b75,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3082eca-f363-4d6f-91c2-78579e8ff7fd", "address": "fa:16:3e:55:80:d7", "network": {"id": "e1f93ca8-d3d7-4404-b1f0-385022a03154", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1466517221-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5071dae1c732441291c3cea4201538d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3082eca-f3", "ovs_interfaceid": "b3082eca-f363-4d6f-91c2-78579e8ff7fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.762 250273 DEBUG nova.network.os_vif_util [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Converting VIF {"id": "b3082eca-f363-4d6f-91c2-78579e8ff7fd", "address": "fa:16:3e:55:80:d7", "network": {"id": "e1f93ca8-d3d7-4404-b1f0-385022a03154", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-1466517221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5071dae1c732441291c3cea4201538d1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3082eca-f3", "ovs_interfaceid": "b3082eca-f363-4d6f-91c2-78579e8ff7fd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.763 250273 DEBUG nova.network.os_vif_util [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:55:80:d7,bridge_name='br-int',has_traffic_filtering=True,id=b3082eca-f363-4d6f-91c2-78579e8ff7fd,network=Network(e1f93ca8-d3d7-4404-b1f0-385022a03154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3082eca-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.764 250273 DEBUG os_vif [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:80:d7,bridge_name='br-int',has_traffic_filtering=True,id=b3082eca-f363-4d6f-91c2-78579e8ff7fd,network=Network(e1f93ca8-d3d7-4404-b1f0-385022a03154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3082eca-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.765 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.766 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3082eca-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.767 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.770 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.772 250273 INFO os_vif [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:55:80:d7,bridge_name='br-int',has_traffic_filtering=True,id=b3082eca-f363-4d6f-91c2-78579e8ff7fd,network=Network(e1f93ca8-d3d7-4404-b1f0-385022a03154),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3082eca-f3')#033[00m
Jan 23 04:33:14 np0005593232 neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154[266080]: [NOTICE]   (266084) : haproxy version is 2.8.14-c23fe91
Jan 23 04:33:14 np0005593232 neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154[266080]: [NOTICE]   (266084) : path to executable is /usr/sbin/haproxy
Jan 23 04:33:14 np0005593232 neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154[266080]: [WARNING]  (266084) : Exiting Master process...
Jan 23 04:33:14 np0005593232 neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154[266080]: [ALERT]    (266084) : Current worker (266086) exited with code 143 (Terminated)
Jan 23 04:33:14 np0005593232 neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154[266080]: [WARNING]  (266084) : All workers exited. Exiting... (0)
Jan 23 04:33:14 np0005593232 systemd[1]: libpod-cddb948bb43d3ae9897b8a56d57bbc673ee84c86d19279c9225d157a8eb3bee8.scope: Deactivated successfully.
Jan 23 04:33:14 np0005593232 podman[266376]: 2026-01-23 09:33:14.802241091 +0000 UTC m=+0.061386771 container died cddb948bb43d3ae9897b8a56d57bbc673ee84c86d19279c9225d157a8eb3bee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 04:33:14 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cddb948bb43d3ae9897b8a56d57bbc673ee84c86d19279c9225d157a8eb3bee8-userdata-shm.mount: Deactivated successfully.
Jan 23 04:33:14 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d54c5d522df6f52ef49db82b3ec3927ec7ad170319d5987cb5e875d5155b822a-merged.mount: Deactivated successfully.
Jan 23 04:33:14 np0005593232 podman[266376]: 2026-01-23 09:33:14.841961823 +0000 UTC m=+0.101107503 container cleanup cddb948bb43d3ae9897b8a56d57bbc673ee84c86d19279c9225d157a8eb3bee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 23 04:33:14 np0005593232 systemd[1]: libpod-conmon-cddb948bb43d3ae9897b8a56d57bbc673ee84c86d19279c9225d157a8eb3bee8.scope: Deactivated successfully.
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.906 250273 INFO nova.scheduler.client.report [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] Deleted allocation for migration b15a37ef-d8e5-44a3-85e1-37de93fc8299#033[00m
Jan 23 04:33:14 np0005593232 podman[266426]: 2026-01-23 09:33:14.907263854 +0000 UTC m=+0.044062867 container remove cddb948bb43d3ae9897b8a56d57bbc673ee84c86d19279c9225d157a8eb3bee8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.907 250273 DEBUG nova.virt.libvirt.driver [None req-9959f827-27bc-4670-9623-69661ee17463 5a5194678c634c8fb09b5397d1ed31fe 985865a35e144fc6b78d4b87561eb207 - - default default] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Jan 23 04:33:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:14.912 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a6833e2a-48e9-4170-990c-1887edf0fc86]: (4, ('Fri Jan 23 09:33:14 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154 (cddb948bb43d3ae9897b8a56d57bbc673ee84c86d19279c9225d157a8eb3bee8)\ncddb948bb43d3ae9897b8a56d57bbc673ee84c86d19279c9225d157a8eb3bee8\nFri Jan 23 09:33:14 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154 (cddb948bb43d3ae9897b8a56d57bbc673ee84c86d19279c9225d157a8eb3bee8)\ncddb948bb43d3ae9897b8a56d57bbc673ee84c86d19279c9225d157a8eb3bee8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:33:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:14.914 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[123fdb2c-09ca-4ee0-be13-a0b9f75838ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:33:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:14.915 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape1f93ca8-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:33:14 np0005593232 kernel: tape1f93ca8-d0: left promiscuous mode
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.917 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:14.922 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[094a2fb1-8078-454c-afbc-92823999b3ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:33:14 np0005593232 nova_compute[250269]: 2026-01-23 09:33:14.936 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:14.938 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8c01a552-b0a6-4211-85cc-704c8f8c27ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:33:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:14.940 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ce37c042-acf6-468b-9af9-ebb20cc494fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:33:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:14.958 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[622e1418-189d-41a1-9b5a-ba50c6aba402]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 468421, 'reachable_time': 25432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266442, 'error': None, 'target': 'ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:33:14 np0005593232 systemd[1]: run-netns-ovnmeta\x2de1f93ca8\x2dd3d7\x2d4404\x2db1f0\x2d385022a03154.mount: Deactivated successfully.
Jan 23 04:33:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:14.964 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e1f93ca8-d3d7-4404-b1f0-385022a03154 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 04:33:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:14.964 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[b554b4a3-c4d6-4dd9-a914-d6b4944049e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:33:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1155: 321 pgs: 321 active+clean; 628 MiB data, 665 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.8 MiB/s wr, 235 op/s
Jan 23 04:33:15 np0005593232 nova_compute[250269]: 2026-01-23 09:33:15.162 250273 DEBUG nova.compute.manager [req-3562587c-09a8-4741-923a-49123d21e665 req-ed8a4733-acfe-4681-845f-cefe2e98991f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Received event network-vif-unplugged-b3082eca-f363-4d6f-91c2-78579e8ff7fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:33:15 np0005593232 nova_compute[250269]: 2026-01-23 09:33:15.163 250273 DEBUG oslo_concurrency.lockutils [req-3562587c-09a8-4741-923a-49123d21e665 req-ed8a4733-acfe-4681-845f-cefe2e98991f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "dcffb4e0-398b-4fa3-9eac-2020fa2f9b75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:15 np0005593232 nova_compute[250269]: 2026-01-23 09:33:15.163 250273 DEBUG oslo_concurrency.lockutils [req-3562587c-09a8-4741-923a-49123d21e665 req-ed8a4733-acfe-4681-845f-cefe2e98991f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "dcffb4e0-398b-4fa3-9eac-2020fa2f9b75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:15 np0005593232 nova_compute[250269]: 2026-01-23 09:33:15.163 250273 DEBUG oslo_concurrency.lockutils [req-3562587c-09a8-4741-923a-49123d21e665 req-ed8a4733-acfe-4681-845f-cefe2e98991f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "dcffb4e0-398b-4fa3-9eac-2020fa2f9b75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:15 np0005593232 nova_compute[250269]: 2026-01-23 09:33:15.164 250273 DEBUG nova.compute.manager [req-3562587c-09a8-4741-923a-49123d21e665 req-ed8a4733-acfe-4681-845f-cefe2e98991f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] No waiting events found dispatching network-vif-unplugged-b3082eca-f363-4d6f-91c2-78579e8ff7fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:33:15 np0005593232 nova_compute[250269]: 2026-01-23 09:33:15.164 250273 DEBUG nova.compute.manager [req-3562587c-09a8-4741-923a-49123d21e665 req-ed8a4733-acfe-4681-845f-cefe2e98991f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Received event network-vif-unplugged-b3082eca-f363-4d6f-91c2-78579e8ff7fd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 04:33:15 np0005593232 nova_compute[250269]: 2026-01-23 09:33:15.375 250273 INFO nova.virt.libvirt.driver [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Deleting instance files /var/lib/nova/instances/dcffb4e0-398b-4fa3-9eac-2020fa2f9b75_del#033[00m
Jan 23 04:33:15 np0005593232 nova_compute[250269]: 2026-01-23 09:33:15.376 250273 INFO nova.virt.libvirt.driver [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Deletion of /var/lib/nova/instances/dcffb4e0-398b-4fa3-9eac-2020fa2f9b75_del complete#033[00m
Jan 23 04:33:15 np0005593232 nova_compute[250269]: 2026-01-23 09:33:15.485 250273 INFO nova.compute.manager [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Took 0.98 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 04:33:15 np0005593232 nova_compute[250269]: 2026-01-23 09:33:15.486 250273 DEBUG oslo.service.loopingcall [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 04:33:15 np0005593232 nova_compute[250269]: 2026-01-23 09:33:15.486 250273 DEBUG nova.compute.manager [-] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 04:33:15 np0005593232 nova_compute[250269]: 2026-01-23 09:33:15.486 250273 DEBUG nova.network.neutron [-] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 04:33:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:15.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:16.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:33:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1156: 321 pgs: 321 active+clean; 578 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.0 MiB/s wr, 417 op/s
Jan 23 04:33:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:33:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:33:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:33:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 23 04:33:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 04:33:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 23 04:33:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 04:33:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:17.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:17 np0005593232 nova_compute[250269]: 2026-01-23 09:33:17.784 250273 DEBUG nova.network.neutron [-] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:33:17 np0005593232 nova_compute[250269]: 2026-01-23 09:33:17.803 250273 INFO nova.compute.manager [-] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Took 2.32 seconds to deallocate network for instance.#033[00m
Jan 23 04:33:17 np0005593232 nova_compute[250269]: 2026-01-23 09:33:17.857 250273 DEBUG oslo_concurrency.lockutils [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:17 np0005593232 nova_compute[250269]: 2026-01-23 09:33:17.857 250273 DEBUG oslo_concurrency.lockutils [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:17 np0005593232 nova_compute[250269]: 2026-01-23 09:33:17.959 250273 DEBUG nova.compute.manager [req-1a206932-2978-44e4-ac06-960c48dc224d req-c88c2ea4-7663-465a-b676-a193d7765ec0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Received event network-vif-deleted-b3082eca-f363-4d6f-91c2-78579e8ff7fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:33:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:17.965 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:33:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:17.966 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:33:17 np0005593232 nova_compute[250269]: 2026-01-23 09:33:17.966 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:17 np0005593232 nova_compute[250269]: 2026-01-23 09:33:17.988 250273 DEBUG nova.compute.manager [req-74c66f86-d177-4b4e-bf35-28c3db403f4a req-29cf73a1-0a8a-4ba2-a753-a3ee1ee4d647 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Received event network-vif-plugged-b3082eca-f363-4d6f-91c2-78579e8ff7fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:33:17 np0005593232 nova_compute[250269]: 2026-01-23 09:33:17.988 250273 DEBUG oslo_concurrency.lockutils [req-74c66f86-d177-4b4e-bf35-28c3db403f4a req-29cf73a1-0a8a-4ba2-a753-a3ee1ee4d647 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "dcffb4e0-398b-4fa3-9eac-2020fa2f9b75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:17 np0005593232 nova_compute[250269]: 2026-01-23 09:33:17.988 250273 DEBUG oslo_concurrency.lockutils [req-74c66f86-d177-4b4e-bf35-28c3db403f4a req-29cf73a1-0a8a-4ba2-a753-a3ee1ee4d647 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "dcffb4e0-398b-4fa3-9eac-2020fa2f9b75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:17 np0005593232 nova_compute[250269]: 2026-01-23 09:33:17.989 250273 DEBUG oslo_concurrency.lockutils [req-74c66f86-d177-4b4e-bf35-28c3db403f4a req-29cf73a1-0a8a-4ba2-a753-a3ee1ee4d647 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "dcffb4e0-398b-4fa3-9eac-2020fa2f9b75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:17 np0005593232 nova_compute[250269]: 2026-01-23 09:33:17.989 250273 DEBUG nova.compute.manager [req-74c66f86-d177-4b4e-bf35-28c3db403f4a req-29cf73a1-0a8a-4ba2-a753-a3ee1ee4d647 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] No waiting events found dispatching network-vif-plugged-b3082eca-f363-4d6f-91c2-78579e8ff7fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:33:17 np0005593232 nova_compute[250269]: 2026-01-23 09:33:17.989 250273 WARNING nova.compute.manager [req-74c66f86-d177-4b4e-bf35-28c3db403f4a req-29cf73a1-0a8a-4ba2-a753-a3ee1ee4d647 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Received unexpected event network-vif-plugged-b3082eca-f363-4d6f-91c2-78579e8ff7fd for instance with vm_state deleted and task_state None.#033[00m
Jan 23 04:33:17 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:33:17 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:33:17 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 04:33:17 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 04:33:18 np0005593232 nova_compute[250269]: 2026-01-23 09:33:18.016 250273 DEBUG oslo_concurrency.processutils [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:33:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:33:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:18.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:33:18 np0005593232 nova_compute[250269]: 2026-01-23 09:33:18.092 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769160783.0914578, 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:33:18 np0005593232 nova_compute[250269]: 2026-01-23 09:33:18.093 250273 INFO nova.compute.manager [-] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:33:18 np0005593232 nova_compute[250269]: 2026-01-23 09:33:18.119 250273 DEBUG nova.compute.manager [None req-a94c74b9-3eb5-410a-90f4-6b0f085f443e - - - - - -] [instance: 54a1ad4e-6fc9-42dc-aa4c-99d3f1297520] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:33:18 np0005593232 nova_compute[250269]: 2026-01-23 09:33:18.309 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:33:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1171706661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:33:18 np0005593232 nova_compute[250269]: 2026-01-23 09:33:18.448 250273 DEBUG oslo_concurrency.processutils [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:33:18 np0005593232 nova_compute[250269]: 2026-01-23 09:33:18.454 250273 DEBUG nova.compute.provider_tree [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:33:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:33:18 np0005593232 nova_compute[250269]: 2026-01-23 09:33:18.486 250273 DEBUG nova.scheduler.client.report [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:33:18 np0005593232 nova_compute[250269]: 2026-01-23 09:33:18.529 250273 DEBUG oslo_concurrency.lockutils [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:18 np0005593232 nova_compute[250269]: 2026-01-23 09:33:18.590 250273 INFO nova.scheduler.client.report [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Deleted allocations for instance dcffb4e0-398b-4fa3-9eac-2020fa2f9b75#033[00m
Jan 23 04:33:18 np0005593232 nova_compute[250269]: 2026-01-23 09:33:18.698 250273 DEBUG oslo_concurrency.lockutils [None req-d7b6f81b-97f0-4e54-be7b-19d1c5b26380 9914fa5b09794fda94ca3cb12e25549f 5071dae1c732441291c3cea4201538d1 - - default default] Lock "dcffb4e0-398b-4fa3-9eac-2020fa2f9b75" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1157: 321 pgs: 321 active+clean; 522 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.7 MiB/s wr, 469 op/s
Jan 23 04:33:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.003000085s ======
Jan 23 04:33:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:19.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000085s
Jan 23 04:33:19 np0005593232 nova_compute[250269]: 2026-01-23 09:33:19.769 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:20.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:33:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:33:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:33:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:33:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:33:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:33:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:33:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:33:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:33:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:33:20 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 090824e6-2000-4ab4-bd2b-b68decf2caeb does not exist
Jan 23 04:33:20 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 8ee2821f-ed4f-4b74-b89a-3baf11bac2ad does not exist
Jan 23 04:33:20 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a9b7577b-77d8-4e0e-8c4d-ac5f064e0dac does not exist
Jan 23 04:33:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:33:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:33:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:33:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:33:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:33:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:33:20 np0005593232 podman[266599]: 2026-01-23 09:33:20.422916576 +0000 UTC m=+0.078294162 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 23 04:33:20 np0005593232 podman[266766]: 2026-01-23 09:33:20.872141168 +0000 UTC m=+0.044249742 container create 87ec367a8fd1ed0ba8161f9c407e30f206f6d659ba49c144c33a6c79efdfa75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:33:20 np0005593232 systemd[1]: Started libpod-conmon-87ec367a8fd1ed0ba8161f9c407e30f206f6d659ba49c144c33a6c79efdfa75d.scope.
Jan 23 04:33:20 np0005593232 podman[266766]: 2026-01-23 09:33:20.851249633 +0000 UTC m=+0.023358227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:33:20 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:33:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1158: 321 pgs: 321 active+clean; 522 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.0 MiB/s wr, 411 op/s
Jan 23 04:33:20 np0005593232 podman[266766]: 2026-01-23 09:33:20.975590996 +0000 UTC m=+0.147699590 container init 87ec367a8fd1ed0ba8161f9c407e30f206f6d659ba49c144c33a6c79efdfa75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 04:33:20 np0005593232 podman[266766]: 2026-01-23 09:33:20.982654958 +0000 UTC m=+0.154763532 container start 87ec367a8fd1ed0ba8161f9c407e30f206f6d659ba49c144c33a6c79efdfa75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:33:20 np0005593232 podman[266766]: 2026-01-23 09:33:20.985843029 +0000 UTC m=+0.157951603 container attach 87ec367a8fd1ed0ba8161f9c407e30f206f6d659ba49c144c33a6c79efdfa75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:33:20 np0005593232 practical_kowalevski[266783]: 167 167
Jan 23 04:33:20 np0005593232 systemd[1]: libpod-87ec367a8fd1ed0ba8161f9c407e30f206f6d659ba49c144c33a6c79efdfa75d.scope: Deactivated successfully.
Jan 23 04:33:20 np0005593232 podman[266766]: 2026-01-23 09:33:20.990949554 +0000 UTC m=+0.163058128 container died 87ec367a8fd1ed0ba8161f9c407e30f206f6d659ba49c144c33a6c79efdfa75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_kowalevski, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:33:21 np0005593232 systemd[1]: var-lib-containers-storage-overlay-bde41e93163a1f5fdfe0a8f8885f1ab634d4815b2f5d571391a4ff3e3bd96922-merged.mount: Deactivated successfully.
Jan 23 04:33:21 np0005593232 podman[266766]: 2026-01-23 09:33:21.028788512 +0000 UTC m=+0.200897086 container remove 87ec367a8fd1ed0ba8161f9c407e30f206f6d659ba49c144c33a6c79efdfa75d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 23 04:33:21 np0005593232 systemd[1]: libpod-conmon-87ec367a8fd1ed0ba8161f9c407e30f206f6d659ba49c144c33a6c79efdfa75d.scope: Deactivated successfully.
Jan 23 04:33:21 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:33:21 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:33:21 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:33:21 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:33:21 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:33:21 np0005593232 podman[266807]: 2026-01-23 09:33:21.198393026 +0000 UTC m=+0.047048292 container create 26dc6d23fe4dac1a9029ba0dd19f4c078f337ef34466e9b60a29d517dbb3e62d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lumiere, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 23 04:33:21 np0005593232 systemd[1]: Started libpod-conmon-26dc6d23fe4dac1a9029ba0dd19f4c078f337ef34466e9b60a29d517dbb3e62d.scope.
Jan 23 04:33:21 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:33:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8537465d77d755d8407fedbedc00183767cad713775c610ff9c0b6dd7bc588c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:33:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8537465d77d755d8407fedbedc00183767cad713775c610ff9c0b6dd7bc588c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:33:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8537465d77d755d8407fedbedc00183767cad713775c610ff9c0b6dd7bc588c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:33:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8537465d77d755d8407fedbedc00183767cad713775c610ff9c0b6dd7bc588c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:33:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8537465d77d755d8407fedbedc00183767cad713775c610ff9c0b6dd7bc588c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:33:21 np0005593232 podman[266807]: 2026-01-23 09:33:21.177212502 +0000 UTC m=+0.025867818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:33:21 np0005593232 podman[266807]: 2026-01-23 09:33:21.2866078 +0000 UTC m=+0.135263116 container init 26dc6d23fe4dac1a9029ba0dd19f4c078f337ef34466e9b60a29d517dbb3e62d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lumiere, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:33:21 np0005593232 podman[266807]: 2026-01-23 09:33:21.294147185 +0000 UTC m=+0.142802451 container start 26dc6d23fe4dac1a9029ba0dd19f4c078f337ef34466e9b60a29d517dbb3e62d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lumiere, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 04:33:21 np0005593232 podman[266807]: 2026-01-23 09:33:21.298390936 +0000 UTC m=+0.147046252 container attach 26dc6d23fe4dac1a9029ba0dd19f4c078f337ef34466e9b60a29d517dbb3e62d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:33:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:21.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:33:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:22.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:33:22 np0005593232 gallant_lumiere[266824]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:33:22 np0005593232 gallant_lumiere[266824]: --> relative data size: 1.0
Jan 23 04:33:22 np0005593232 gallant_lumiere[266824]: --> All data devices are unavailable
Jan 23 04:33:22 np0005593232 systemd[1]: libpod-26dc6d23fe4dac1a9029ba0dd19f4c078f337ef34466e9b60a29d517dbb3e62d.scope: Deactivated successfully.
Jan 23 04:33:22 np0005593232 podman[266807]: 2026-01-23 09:33:22.190427359 +0000 UTC m=+1.039082645 container died 26dc6d23fe4dac1a9029ba0dd19f4c078f337ef34466e9b60a29d517dbb3e62d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 04:33:22 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f8537465d77d755d8407fedbedc00183767cad713775c610ff9c0b6dd7bc588c-merged.mount: Deactivated successfully.
Jan 23 04:33:22 np0005593232 podman[266807]: 2026-01-23 09:33:22.247122984 +0000 UTC m=+1.095778260 container remove 26dc6d23fe4dac1a9029ba0dd19f4c078f337ef34466e9b60a29d517dbb3e62d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Jan 23 04:33:22 np0005593232 systemd[1]: libpod-conmon-26dc6d23fe4dac1a9029ba0dd19f4c078f337ef34466e9b60a29d517dbb3e62d.scope: Deactivated successfully.
Jan 23 04:33:22 np0005593232 podman[266991]: 2026-01-23 09:33:22.881161274 +0000 UTC m=+0.041578816 container create 873466f0c3573748b0d26e45ebe71fc0d3eb29a1e0a2996e778b53c993cc2e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lovelace, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 04:33:22 np0005593232 systemd[1]: Started libpod-conmon-873466f0c3573748b0d26e45ebe71fc0d3eb29a1e0a2996e778b53c993cc2e2d.scope.
Jan 23 04:33:22 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:33:22 np0005593232 podman[266991]: 2026-01-23 09:33:22.865558419 +0000 UTC m=+0.025975991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:33:22 np0005593232 podman[266991]: 2026-01-23 09:33:22.973588348 +0000 UTC m=+0.134005900 container init 873466f0c3573748b0d26e45ebe71fc0d3eb29a1e0a2996e778b53c993cc2e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 04:33:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1159: 321 pgs: 321 active+clean; 396 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.0 MiB/s wr, 450 op/s
Jan 23 04:33:22 np0005593232 podman[266991]: 2026-01-23 09:33:22.982239025 +0000 UTC m=+0.142656587 container start 873466f0c3573748b0d26e45ebe71fc0d3eb29a1e0a2996e778b53c993cc2e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:33:22 np0005593232 podman[266991]: 2026-01-23 09:33:22.985668923 +0000 UTC m=+0.146086555 container attach 873466f0c3573748b0d26e45ebe71fc0d3eb29a1e0a2996e778b53c993cc2e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:33:22 np0005593232 gracious_lovelace[267007]: 167 167
Jan 23 04:33:22 np0005593232 systemd[1]: libpod-873466f0c3573748b0d26e45ebe71fc0d3eb29a1e0a2996e778b53c993cc2e2d.scope: Deactivated successfully.
Jan 23 04:33:22 np0005593232 podman[266991]: 2026-01-23 09:33:22.987431353 +0000 UTC m=+0.147848905 container died 873466f0c3573748b0d26e45ebe71fc0d3eb29a1e0a2996e778b53c993cc2e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lovelace, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 04:33:23 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5fd9ec3e458c1b161ab5ccb019611f22217cb78512f56948fd5b961efcabd317-merged.mount: Deactivated successfully.
Jan 23 04:33:23 np0005593232 podman[266991]: 2026-01-23 09:33:23.032032964 +0000 UTC m=+0.192450516 container remove 873466f0c3573748b0d26e45ebe71fc0d3eb29a1e0a2996e778b53c993cc2e2d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:33:23 np0005593232 systemd[1]: libpod-conmon-873466f0c3573748b0d26e45ebe71fc0d3eb29a1e0a2996e778b53c993cc2e2d.scope: Deactivated successfully.
Jan 23 04:33:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Jan 23 04:33:23 np0005593232 podman[267031]: 2026-01-23 09:33:23.216970164 +0000 UTC m=+0.049699907 container create 8cf718ed4de77011bee5b7869e1d522ff1665e180b2da8b3f4a763ea7f02769e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_payne, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 04:33:23 np0005593232 systemd[1]: Started libpod-conmon-8cf718ed4de77011bee5b7869e1d522ff1665e180b2da8b3f4a763ea7f02769e.scope.
Jan 23 04:33:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Jan 23 04:33:23 np0005593232 podman[267031]: 2026-01-23 09:33:23.193542557 +0000 UTC m=+0.026272260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:33:23 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Jan 23 04:33:23 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:33:23 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c750dfd69743503c9606c19faabdbf6b879684570a0271c0780f327fd23aba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:33:23 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c750dfd69743503c9606c19faabdbf6b879684570a0271c0780f327fd23aba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:33:23 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c750dfd69743503c9606c19faabdbf6b879684570a0271c0780f327fd23aba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:33:23 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88c750dfd69743503c9606c19faabdbf6b879684570a0271c0780f327fd23aba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:33:23 np0005593232 nova_compute[250269]: 2026-01-23 09:33:23.312 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:23 np0005593232 podman[267031]: 2026-01-23 09:33:23.316305276 +0000 UTC m=+0.149034979 container init 8cf718ed4de77011bee5b7869e1d522ff1665e180b2da8b3f4a763ea7f02769e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_payne, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 04:33:23 np0005593232 podman[267031]: 2026-01-23 09:33:23.322843642 +0000 UTC m=+0.155573345 container start 8cf718ed4de77011bee5b7869e1d522ff1665e180b2da8b3f4a763ea7f02769e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_payne, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 04:33:23 np0005593232 podman[267031]: 2026-01-23 09:33:23.326426564 +0000 UTC m=+0.159156327 container attach 8cf718ed4de77011bee5b7869e1d522ff1665e180b2da8b3f4a763ea7f02769e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Jan 23 04:33:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:33:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:33:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:23.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:33:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:23.968 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:33:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 04:33:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:24.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 04:33:24 np0005593232 festive_payne[267048]: {
Jan 23 04:33:24 np0005593232 festive_payne[267048]:    "0": [
Jan 23 04:33:24 np0005593232 festive_payne[267048]:        {
Jan 23 04:33:24 np0005593232 festive_payne[267048]:            "devices": [
Jan 23 04:33:24 np0005593232 festive_payne[267048]:                "/dev/loop3"
Jan 23 04:33:24 np0005593232 festive_payne[267048]:            ],
Jan 23 04:33:24 np0005593232 festive_payne[267048]:            "lv_name": "ceph_lv0",
Jan 23 04:33:24 np0005593232 festive_payne[267048]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:33:24 np0005593232 festive_payne[267048]:            "lv_size": "7511998464",
Jan 23 04:33:24 np0005593232 festive_payne[267048]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:33:24 np0005593232 festive_payne[267048]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:33:24 np0005593232 festive_payne[267048]:            "name": "ceph_lv0",
Jan 23 04:33:24 np0005593232 festive_payne[267048]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:33:24 np0005593232 festive_payne[267048]:            "tags": {
Jan 23 04:33:24 np0005593232 festive_payne[267048]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:33:24 np0005593232 festive_payne[267048]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:33:24 np0005593232 festive_payne[267048]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:33:24 np0005593232 festive_payne[267048]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:33:24 np0005593232 festive_payne[267048]:                "ceph.cluster_name": "ceph",
Jan 23 04:33:24 np0005593232 festive_payne[267048]:                "ceph.crush_device_class": "",
Jan 23 04:33:24 np0005593232 festive_payne[267048]:                "ceph.encrypted": "0",
Jan 23 04:33:24 np0005593232 festive_payne[267048]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:33:24 np0005593232 festive_payne[267048]:                "ceph.osd_id": "0",
Jan 23 04:33:24 np0005593232 festive_payne[267048]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:33:24 np0005593232 festive_payne[267048]:                "ceph.type": "block",
Jan 23 04:33:24 np0005593232 festive_payne[267048]:                "ceph.vdo": "0"
Jan 23 04:33:24 np0005593232 festive_payne[267048]:            },
Jan 23 04:33:24 np0005593232 festive_payne[267048]:            "type": "block",
Jan 23 04:33:24 np0005593232 festive_payne[267048]:            "vg_name": "ceph_vg0"
Jan 23 04:33:24 np0005593232 festive_payne[267048]:        }
Jan 23 04:33:24 np0005593232 festive_payne[267048]:    ]
Jan 23 04:33:24 np0005593232 festive_payne[267048]: }
Jan 23 04:33:24 np0005593232 systemd[1]: libpod-8cf718ed4de77011bee5b7869e1d522ff1665e180b2da8b3f4a763ea7f02769e.scope: Deactivated successfully.
Jan 23 04:33:24 np0005593232 podman[267031]: 2026-01-23 09:33:24.143299464 +0000 UTC m=+0.976029167 container died 8cf718ed4de77011bee5b7869e1d522ff1665e180b2da8b3f4a763ea7f02769e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:33:24 np0005593232 systemd[1]: var-lib-containers-storage-overlay-88c750dfd69743503c9606c19faabdbf6b879684570a0271c0780f327fd23aba-merged.mount: Deactivated successfully.
Jan 23 04:33:24 np0005593232 podman[267031]: 2026-01-23 09:33:24.204811437 +0000 UTC m=+1.037541150 container remove 8cf718ed4de77011bee5b7869e1d522ff1665e180b2da8b3f4a763ea7f02769e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_payne, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 04:33:24 np0005593232 systemd[1]: libpod-conmon-8cf718ed4de77011bee5b7869e1d522ff1665e180b2da8b3f4a763ea7f02769e.scope: Deactivated successfully.
Jan 23 04:33:24 np0005593232 nova_compute[250269]: 2026-01-23 09:33:24.773 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:24 np0005593232 podman[267210]: 2026-01-23 09:33:24.806161265 +0000 UTC m=+0.047727521 container create 8bb196e35ba2c9ac64932a0f9c055707551114b98ec21effd9465a71a659db5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_driscoll, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:33:24 np0005593232 systemd[1]: Started libpod-conmon-8bb196e35ba2c9ac64932a0f9c055707551114b98ec21effd9465a71a659db5e.scope.
Jan 23 04:33:24 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:33:24 np0005593232 podman[267210]: 2026-01-23 09:33:24.788923273 +0000 UTC m=+0.030489559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:33:24 np0005593232 podman[267210]: 2026-01-23 09:33:24.887814762 +0000 UTC m=+0.129381028 container init 8bb196e35ba2c9ac64932a0f9c055707551114b98ec21effd9465a71a659db5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:33:24 np0005593232 podman[267210]: 2026-01-23 09:33:24.89405192 +0000 UTC m=+0.135618176 container start 8bb196e35ba2c9ac64932a0f9c055707551114b98ec21effd9465a71a659db5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_driscoll, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:33:24 np0005593232 podman[267210]: 2026-01-23 09:33:24.897340163 +0000 UTC m=+0.138906439 container attach 8bb196e35ba2c9ac64932a0f9c055707551114b98ec21effd9465a71a659db5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 04:33:24 np0005593232 naughty_driscoll[267226]: 167 167
Jan 23 04:33:24 np0005593232 systemd[1]: libpod-8bb196e35ba2c9ac64932a0f9c055707551114b98ec21effd9465a71a659db5e.scope: Deactivated successfully.
Jan 23 04:33:24 np0005593232 conmon[267226]: conmon 8bb196e35ba2c9ac6493 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8bb196e35ba2c9ac64932a0f9c055707551114b98ec21effd9465a71a659db5e.scope/container/memory.events
Jan 23 04:33:24 np0005593232 podman[267210]: 2026-01-23 09:33:24.900416341 +0000 UTC m=+0.141982597 container died 8bb196e35ba2c9ac64932a0f9c055707551114b98ec21effd9465a71a659db5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_driscoll, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:33:24 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5c31173a4d4db39621f9c0894a3374f88fb09b69f27afa1635e7b98277a9c18f-merged.mount: Deactivated successfully.
Jan 23 04:33:24 np0005593232 podman[267210]: 2026-01-23 09:33:24.945750093 +0000 UTC m=+0.187316359 container remove 8bb196e35ba2c9ac64932a0f9c055707551114b98ec21effd9465a71a659db5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_driscoll, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:33:24 np0005593232 systemd[1]: libpod-conmon-8bb196e35ba2c9ac64932a0f9c055707551114b98ec21effd9465a71a659db5e.scope: Deactivated successfully.
Jan 23 04:33:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1161: 321 pgs: 321 active+clean; 396 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.7 MiB/s wr, 391 op/s
Jan 23 04:33:25 np0005593232 podman[267250]: 2026-01-23 09:33:25.10635764 +0000 UTC m=+0.040219597 container create 712c8b6607898402901e49e0180918a2ff58d73f689b9a24b99f223698050d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 04:33:25 np0005593232 systemd[1]: Started libpod-conmon-712c8b6607898402901e49e0180918a2ff58d73f689b9a24b99f223698050d38.scope.
Jan 23 04:33:25 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:33:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d822f093a8aeeadb4afe5411692640c788c5045956e08401cc3840615cfa822/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:33:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d822f093a8aeeadb4afe5411692640c788c5045956e08401cc3840615cfa822/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:33:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d822f093a8aeeadb4afe5411692640c788c5045956e08401cc3840615cfa822/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:33:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d822f093a8aeeadb4afe5411692640c788c5045956e08401cc3840615cfa822/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:33:25 np0005593232 podman[267250]: 2026-01-23 09:33:25.089398827 +0000 UTC m=+0.023260804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:33:25 np0005593232 podman[267250]: 2026-01-23 09:33:25.19161147 +0000 UTC m=+0.125473457 container init 712c8b6607898402901e49e0180918a2ff58d73f689b9a24b99f223698050d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 04:33:25 np0005593232 podman[267250]: 2026-01-23 09:33:25.19757622 +0000 UTC m=+0.131438177 container start 712c8b6607898402901e49e0180918a2ff58d73f689b9a24b99f223698050d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_dijkstra, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 04:33:25 np0005593232 podman[267250]: 2026-01-23 09:33:25.201253445 +0000 UTC m=+0.135115402 container attach 712c8b6607898402901e49e0180918a2ff58d73f689b9a24b99f223698050d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 23 04:33:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:25.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:25 np0005593232 nova_compute[250269]: 2026-01-23 09:33:25.962 250273 DEBUG oslo_concurrency.lockutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Acquiring lock "641f6008-576e-4221-a1d8-33ddfca6d069" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:25 np0005593232 nova_compute[250269]: 2026-01-23 09:33:25.963 250273 DEBUG oslo_concurrency.lockutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Lock "641f6008-576e-4221-a1d8-33ddfca6d069" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:25 np0005593232 nova_compute[250269]: 2026-01-23 09:33:25.998 250273 DEBUG nova.compute.manager [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:33:26 np0005593232 practical_dijkstra[267266]: {
Jan 23 04:33:26 np0005593232 practical_dijkstra[267266]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:33:26 np0005593232 practical_dijkstra[267266]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:33:26 np0005593232 practical_dijkstra[267266]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:33:26 np0005593232 practical_dijkstra[267266]:        "osd_id": 0,
Jan 23 04:33:26 np0005593232 practical_dijkstra[267266]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:33:26 np0005593232 practical_dijkstra[267266]:        "type": "bluestore"
Jan 23 04:33:26 np0005593232 practical_dijkstra[267266]:    }
Jan 23 04:33:26 np0005593232 practical_dijkstra[267266]: }
Jan 23 04:33:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:26.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:26 np0005593232 nova_compute[250269]: 2026-01-23 09:33:26.083 250273 DEBUG oslo_concurrency.lockutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:26 np0005593232 nova_compute[250269]: 2026-01-23 09:33:26.083 250273 DEBUG oslo_concurrency.lockutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:26 np0005593232 systemd[1]: libpod-712c8b6607898402901e49e0180918a2ff58d73f689b9a24b99f223698050d38.scope: Deactivated successfully.
Jan 23 04:33:26 np0005593232 podman[267250]: 2026-01-23 09:33:26.087999467 +0000 UTC m=+1.021861414 container died 712c8b6607898402901e49e0180918a2ff58d73f689b9a24b99f223698050d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_dijkstra, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Jan 23 04:33:26 np0005593232 nova_compute[250269]: 2026-01-23 09:33:26.093 250273 DEBUG nova.virt.hardware [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:33:26 np0005593232 nova_compute[250269]: 2026-01-23 09:33:26.094 250273 INFO nova.compute.claims [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:33:26 np0005593232 systemd[1]: var-lib-containers-storage-overlay-2d822f093a8aeeadb4afe5411692640c788c5045956e08401cc3840615cfa822-merged.mount: Deactivated successfully.
Jan 23 04:33:26 np0005593232 podman[267250]: 2026-01-23 09:33:26.139821653 +0000 UTC m=+1.073683610 container remove 712c8b6607898402901e49e0180918a2ff58d73f689b9a24b99f223698050d38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 23 04:33:26 np0005593232 systemd[1]: libpod-conmon-712c8b6607898402901e49e0180918a2ff58d73f689b9a24b99f223698050d38.scope: Deactivated successfully.
Jan 23 04:33:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:33:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:33:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:33:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:33:26 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c5139fba-236f-4d41-bff3-f612e0933c07 does not exist
Jan 23 04:33:26 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev dee9d431-e487-48ee-bc88-8313d9e84524 does not exist
Jan 23 04:33:26 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1f05eb2a-fb29-493b-8e7c-af10c3385866 does not exist
Jan 23 04:33:26 np0005593232 nova_compute[250269]: 2026-01-23 09:33:26.244 250273 DEBUG oslo_concurrency.processutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:33:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:33:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2156768168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:33:26 np0005593232 nova_compute[250269]: 2026-01-23 09:33:26.693 250273 DEBUG oslo_concurrency.processutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:33:26 np0005593232 nova_compute[250269]: 2026-01-23 09:33:26.702 250273 DEBUG nova.compute.provider_tree [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:33:26 np0005593232 nova_compute[250269]: 2026-01-23 09:33:26.704 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:26 np0005593232 nova_compute[250269]: 2026-01-23 09:33:26.740 250273 DEBUG nova.scheduler.client.report [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:33:26 np0005593232 nova_compute[250269]: 2026-01-23 09:33:26.774 250273 DEBUG oslo_concurrency.lockutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:26 np0005593232 nova_compute[250269]: 2026-01-23 09:33:26.775 250273 DEBUG nova.compute.manager [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 04:33:26 np0005593232 nova_compute[250269]: 2026-01-23 09:33:26.826 250273 DEBUG nova.compute.manager [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Jan 23 04:33:26 np0005593232 nova_compute[250269]: 2026-01-23 09:33:26.847 250273 INFO nova.virt.libvirt.driver [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:33:26 np0005593232 nova_compute[250269]: 2026-01-23 09:33:26.865 250273 DEBUG nova.compute.manager [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:33:26 np0005593232 nova_compute[250269]: 2026-01-23 09:33:26.965 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1162: 321 pgs: 321 active+clean; 364 MiB data, 526 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 225 KiB/s wr, 220 op/s
Jan 23 04:33:26 np0005593232 nova_compute[250269]: 2026-01-23 09:33:26.984 250273 DEBUG nova.compute.manager [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:33:26 np0005593232 nova_compute[250269]: 2026-01-23 09:33:26.986 250273 DEBUG nova.virt.libvirt.driver [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:33:26 np0005593232 nova_compute[250269]: 2026-01-23 09:33:26.986 250273 INFO nova.virt.libvirt.driver [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Creating image(s)#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.013 250273 DEBUG nova.storage.rbd_utils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] rbd image 641f6008-576e-4221-a1d8-33ddfca6d069_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.037 250273 DEBUG nova.storage.rbd_utils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] rbd image 641f6008-576e-4221-a1d8-33ddfca6d069_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.062 250273 DEBUG nova.storage.rbd_utils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] rbd image 641f6008-576e-4221-a1d8-33ddfca6d069_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.066 250273 DEBUG oslo_concurrency.processutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.124 250273 DEBUG oslo_concurrency.processutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.125 250273 DEBUG oslo_concurrency.lockutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.126 250273 DEBUG oslo_concurrency.lockutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.126 250273 DEBUG oslo_concurrency.lockutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.150 250273 DEBUG nova.storage.rbd_utils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] rbd image 641f6008-576e-4221-a1d8-33ddfca6d069_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.153 250273 DEBUG oslo_concurrency.processutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 641f6008-576e-4221-a1d8-33ddfca6d069_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:33:27 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:33:27 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.464 250273 DEBUG oslo_concurrency.processutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 641f6008-576e-4221-a1d8-33ddfca6d069_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.535 250273 DEBUG nova.storage.rbd_utils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] resizing rbd image 641f6008-576e-4221-a1d8-33ddfca6d069_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.636 250273 DEBUG nova.objects.instance [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Lazy-loading 'migration_context' on Instance uuid 641f6008-576e-4221-a1d8-33ddfca6d069 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.659 250273 DEBUG nova.virt.libvirt.driver [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.659 250273 DEBUG nova.virt.libvirt.driver [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Ensure instance console log exists: /var/lib/nova/instances/641f6008-576e-4221-a1d8-33ddfca6d069/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.660 250273 DEBUG oslo_concurrency.lockutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.660 250273 DEBUG oslo_concurrency.lockutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.660 250273 DEBUG oslo_concurrency.lockutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.662 250273 DEBUG nova.virt.libvirt.driver [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.666 250273 WARNING nova.virt.libvirt.driver [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.671 250273 DEBUG nova.virt.libvirt.host [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.671 250273 DEBUG nova.virt.libvirt.host [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.674 250273 DEBUG nova.virt.libvirt.host [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.675 250273 DEBUG nova.virt.libvirt.host [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.676 250273 DEBUG nova.virt.libvirt.driver [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.676 250273 DEBUG nova.virt.hardware [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.676 250273 DEBUG nova.virt.hardware [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.677 250273 DEBUG nova.virt.hardware [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.677 250273 DEBUG nova.virt.hardware [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.677 250273 DEBUG nova.virt.hardware [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.677 250273 DEBUG nova.virt.hardware [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.677 250273 DEBUG nova.virt.hardware [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.677 250273 DEBUG nova.virt.hardware [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.678 250273 DEBUG nova.virt.hardware [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.678 250273 DEBUG nova.virt.hardware [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.678 250273 DEBUG nova.virt.hardware [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:33:27 np0005593232 nova_compute[250269]: 2026-01-23 09:33:27.680 250273 DEBUG oslo_concurrency.processutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:33:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:27.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:28.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:33:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2190378317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:33:28 np0005593232 nova_compute[250269]: 2026-01-23 09:33:28.102 250273 DEBUG oslo_concurrency.processutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:33:28 np0005593232 nova_compute[250269]: 2026-01-23 09:33:28.127 250273 DEBUG nova.storage.rbd_utils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] rbd image 641f6008-576e-4221-a1d8-33ddfca6d069_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:33:28 np0005593232 nova_compute[250269]: 2026-01-23 09:33:28.131 250273 DEBUG oslo_concurrency.processutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:33:28 np0005593232 nova_compute[250269]: 2026-01-23 09:33:28.437 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:33:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:33:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2766331851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:33:28 np0005593232 nova_compute[250269]: 2026-01-23 09:33:28.567 250273 DEBUG oslo_concurrency.processutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:33:28 np0005593232 nova_compute[250269]: 2026-01-23 09:33:28.569 250273 DEBUG nova.objects.instance [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Lazy-loading 'pci_devices' on Instance uuid 641f6008-576e-4221-a1d8-33ddfca6d069 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:33:28 np0005593232 nova_compute[250269]: 2026-01-23 09:33:28.595 250273 DEBUG nova.virt.libvirt.driver [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:33:28 np0005593232 nova_compute[250269]:  <uuid>641f6008-576e-4221-a1d8-33ddfca6d069</uuid>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:  <name>instance-00000013</name>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <nova:name>tempest-UnshelveToHostMultiNodesTest-server-1701597104</nova:name>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:33:27</nova:creationTime>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:33:28 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:        <nova:user uuid="5874ba32b4a94f68aaa43252721d2fb0">tempest-UnshelveToHostMultiNodesTest-1879363435-project-member</nova:user>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:        <nova:project uuid="307173cd6ebb4dd5ad3883dedac0271e">tempest-UnshelveToHostMultiNodesTest-1879363435</nova:project>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <nova:ports/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <entry name="serial">641f6008-576e-4221-a1d8-33ddfca6d069</entry>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <entry name="uuid">641f6008-576e-4221-a1d8-33ddfca6d069</entry>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/641f6008-576e-4221-a1d8-33ddfca6d069_disk">
Jan 23 04:33:28 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:33:28 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/641f6008-576e-4221-a1d8-33ddfca6d069_disk.config">
Jan 23 04:33:28 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:33:28 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/641f6008-576e-4221-a1d8-33ddfca6d069/console.log" append="off"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:33:28 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:33:28 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:33:28 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:33:28 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:33:28 np0005593232 nova_compute[250269]: 2026-01-23 09:33:28.705 250273 DEBUG nova.virt.libvirt.driver [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:33:28 np0005593232 nova_compute[250269]: 2026-01-23 09:33:28.706 250273 DEBUG nova.virt.libvirt.driver [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:33:28 np0005593232 nova_compute[250269]: 2026-01-23 09:33:28.707 250273 INFO nova.virt.libvirt.driver [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Using config drive#033[00m
Jan 23 04:33:28 np0005593232 nova_compute[250269]: 2026-01-23 09:33:28.736 250273 DEBUG nova.storage.rbd_utils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] rbd image 641f6008-576e-4221-a1d8-33ddfca6d069_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:33:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1163: 321 pgs: 321 active+clean; 377 MiB data, 526 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 746 KiB/s wr, 149 op/s
Jan 23 04:33:29 np0005593232 nova_compute[250269]: 2026-01-23 09:33:29.070 250273 INFO nova.virt.libvirt.driver [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Creating config drive at /var/lib/nova/instances/641f6008-576e-4221-a1d8-33ddfca6d069/disk.config#033[00m
Jan 23 04:33:29 np0005593232 nova_compute[250269]: 2026-01-23 09:33:29.075 250273 DEBUG oslo_concurrency.processutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/641f6008-576e-4221-a1d8-33ddfca6d069/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp766hrhgu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:33:29 np0005593232 nova_compute[250269]: 2026-01-23 09:33:29.202 250273 DEBUG oslo_concurrency.processutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/641f6008-576e-4221-a1d8-33ddfca6d069/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp766hrhgu" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:33:29 np0005593232 nova_compute[250269]: 2026-01-23 09:33:29.231 250273 DEBUG nova.storage.rbd_utils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] rbd image 641f6008-576e-4221-a1d8-33ddfca6d069_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:33:29 np0005593232 nova_compute[250269]: 2026-01-23 09:33:29.234 250273 DEBUG oslo_concurrency.processutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/641f6008-576e-4221-a1d8-33ddfca6d069/disk.config 641f6008-576e-4221-a1d8-33ddfca6d069_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:33:29 np0005593232 nova_compute[250269]: 2026-01-23 09:33:29.475 250273 DEBUG oslo_concurrency.processutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/641f6008-576e-4221-a1d8-33ddfca6d069/disk.config 641f6008-576e-4221-a1d8-33ddfca6d069_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:33:29 np0005593232 nova_compute[250269]: 2026-01-23 09:33:29.477 250273 INFO nova.virt.libvirt.driver [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Deleting local config drive /var/lib/nova/instances/641f6008-576e-4221-a1d8-33ddfca6d069/disk.config because it was imported into RBD.#033[00m
Jan 23 04:33:29 np0005593232 systemd-machined[215836]: New machine qemu-8-instance-00000013.
Jan 23 04:33:29 np0005593232 systemd[1]: Started Virtual Machine qemu-8-instance-00000013.
Jan 23 04:33:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:29.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:29 np0005593232 nova_compute[250269]: 2026-01-23 09:33:29.738 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769160794.7372267, dcffb4e0-398b-4fa3-9eac-2020fa2f9b75 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:33:29 np0005593232 nova_compute[250269]: 2026-01-23 09:33:29.740 250273 INFO nova.compute.manager [-] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:33:29 np0005593232 nova_compute[250269]: 2026-01-23 09:33:29.776 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:29 np0005593232 nova_compute[250269]: 2026-01-23 09:33:29.815 250273 DEBUG nova.compute.manager [None req-fef8c713-b72e-420f-a033-b2c7933a331c - - - - - -] [instance: dcffb4e0-398b-4fa3-9eac-2020fa2f9b75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:33:29 np0005593232 nova_compute[250269]: 2026-01-23 09:33:29.977 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160809.9770138, 641f6008-576e-4221-a1d8-33ddfca6d069 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:33:29 np0005593232 nova_compute[250269]: 2026-01-23 09:33:29.978 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:33:29 np0005593232 nova_compute[250269]: 2026-01-23 09:33:29.980 250273 DEBUG nova.compute.manager [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:33:29 np0005593232 nova_compute[250269]: 2026-01-23 09:33:29.980 250273 DEBUG nova.virt.libvirt.driver [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:33:29 np0005593232 nova_compute[250269]: 2026-01-23 09:33:29.983 250273 INFO nova.virt.libvirt.driver [-] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Instance spawned successfully.#033[00m
Jan 23 04:33:29 np0005593232 nova_compute[250269]: 2026-01-23 09:33:29.983 250273 DEBUG nova.virt.libvirt.driver [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:33:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:30.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:30 np0005593232 nova_compute[250269]: 2026-01-23 09:33:30.128 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:33:30 np0005593232 nova_compute[250269]: 2026-01-23 09:33:30.133 250273 DEBUG nova.virt.libvirt.driver [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:33:30 np0005593232 nova_compute[250269]: 2026-01-23 09:33:30.134 250273 DEBUG nova.virt.libvirt.driver [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:33:30 np0005593232 nova_compute[250269]: 2026-01-23 09:33:30.134 250273 DEBUG nova.virt.libvirt.driver [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:33:30 np0005593232 nova_compute[250269]: 2026-01-23 09:33:30.135 250273 DEBUG nova.virt.libvirt.driver [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:33:30 np0005593232 nova_compute[250269]: 2026-01-23 09:33:30.135 250273 DEBUG nova.virt.libvirt.driver [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:33:30 np0005593232 nova_compute[250269]: 2026-01-23 09:33:30.135 250273 DEBUG nova.virt.libvirt.driver [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:33:30 np0005593232 nova_compute[250269]: 2026-01-23 09:33:30.139 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:33:30 np0005593232 nova_compute[250269]: 2026-01-23 09:33:30.203 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:33:30 np0005593232 nova_compute[250269]: 2026-01-23 09:33:30.203 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160809.978174, 641f6008-576e-4221-a1d8-33ddfca6d069 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:33:30 np0005593232 nova_compute[250269]: 2026-01-23 09:33:30.203 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] VM Started (Lifecycle Event)#033[00m
Jan 23 04:33:30 np0005593232 nova_compute[250269]: 2026-01-23 09:33:30.250 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:33:30 np0005593232 nova_compute[250269]: 2026-01-23 09:33:30.253 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:33:30 np0005593232 nova_compute[250269]: 2026-01-23 09:33:30.387 250273 INFO nova.compute.manager [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Took 3.40 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:33:30 np0005593232 nova_compute[250269]: 2026-01-23 09:33:30.388 250273 DEBUG nova.compute.manager [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:33:30 np0005593232 nova_compute[250269]: 2026-01-23 09:33:30.391 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:33:30 np0005593232 nova_compute[250269]: 2026-01-23 09:33:30.475 250273 INFO nova.compute.manager [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Took 4.42 seconds to build instance.#033[00m
Jan 23 04:33:30 np0005593232 nova_compute[250269]: 2026-01-23 09:33:30.508 250273 DEBUG oslo_concurrency.lockutils [None req-68467767-ec56-4989-982b-c19546667721 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Lock "641f6008-576e-4221-a1d8-33ddfca6d069" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1164: 321 pgs: 321 active+clean; 377 MiB data, 526 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 746 KiB/s wr, 149 op/s
Jan 23 04:33:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:33:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:31.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:33:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:33:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:32.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:33:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1165: 321 pgs: 321 active+clean; 410 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.1 MiB/s wr, 236 op/s
Jan 23 04:33:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:33:33 np0005593232 nova_compute[250269]: 2026-01-23 09:33:33.487 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:33.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:34.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Jan 23 04:33:34 np0005593232 nova_compute[250269]: 2026-01-23 09:33:34.779 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Jan 23 04:33:34 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Jan 23 04:33:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1167: 321 pgs: 321 active+clean; 410 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.1 MiB/s wr, 236 op/s
Jan 23 04:33:35 np0005593232 podman[267771]: 2026-01-23 09:33:35.419990341 +0000 UTC m=+0.066830146 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent)
Jan 23 04:33:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:35.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:36.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:36 np0005593232 nova_compute[250269]: 2026-01-23 09:33:36.359 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:33:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1168: 321 pgs: 321 active+clean; 410 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 188 op/s
Jan 23 04:33:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:33:37
Jan 23 04:33:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:33:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:33:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'vms', 'backups', 'default.rgw.log', '.mgr', 'images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta']
Jan 23 04:33:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:33:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:33:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:33:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:33:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:33:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:33:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:33:37 np0005593232 nova_compute[250269]: 2026-01-23 09:33:37.345 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Triggering sync for uuid f2d1fdc0-baaf-4566-8655-aafdbcf1f473 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 23 04:33:37 np0005593232 nova_compute[250269]: 2026-01-23 09:33:37.345 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Triggering sync for uuid 641f6008-576e-4221-a1d8-33ddfca6d069 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 23 04:33:37 np0005593232 nova_compute[250269]: 2026-01-23 09:33:37.346 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "f2d1fdc0-baaf-4566-8655-aafdbcf1f473" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:37 np0005593232 nova_compute[250269]: 2026-01-23 09:33:37.346 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "f2d1fdc0-baaf-4566-8655-aafdbcf1f473" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:37 np0005593232 nova_compute[250269]: 2026-01-23 09:33:37.346 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "641f6008-576e-4221-a1d8-33ddfca6d069" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:37 np0005593232 nova_compute[250269]: 2026-01-23 09:33:37.347 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "641f6008-576e-4221-a1d8-33ddfca6d069" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:37 np0005593232 nova_compute[250269]: 2026-01-23 09:33:37.430 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "f2d1fdc0-baaf-4566-8655-aafdbcf1f473" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:37 np0005593232 nova_compute[250269]: 2026-01-23 09:33:37.431 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "641f6008-576e-4221-a1d8-33ddfca6d069" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:33:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:37.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:33:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:33:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:33:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:33:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:33:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:33:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:33:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:33:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:33:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:33:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:33:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:38.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:33:38 np0005593232 nova_compute[250269]: 2026-01-23 09:33:38.490 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1169: 321 pgs: 321 active+clean; 410 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.4 MiB/s wr, 177 op/s
Jan 23 04:33:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:33:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:39.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:33:39 np0005593232 nova_compute[250269]: 2026-01-23 09:33:39.781 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:40.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:40 np0005593232 nova_compute[250269]: 2026-01-23 09:33:40.516 250273 DEBUG oslo_concurrency.lockutils [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Acquiring lock "641f6008-576e-4221-a1d8-33ddfca6d069" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:40 np0005593232 nova_compute[250269]: 2026-01-23 09:33:40.517 250273 DEBUG oslo_concurrency.lockutils [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Lock "641f6008-576e-4221-a1d8-33ddfca6d069" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:40 np0005593232 nova_compute[250269]: 2026-01-23 09:33:40.517 250273 INFO nova.compute.manager [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Shelving#033[00m
Jan 23 04:33:40 np0005593232 nova_compute[250269]: 2026-01-23 09:33:40.566 250273 DEBUG nova.virt.libvirt.driver [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 23 04:33:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1170: 321 pgs: 321 active+clean; 410 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.4 MiB/s wr, 177 op/s
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:41.057677) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769160821057817, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2201, "num_deletes": 254, "total_data_size": 3862385, "memory_usage": 3916592, "flush_reason": "Manual Compaction"}
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769160821097795, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 3726777, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24129, "largest_seqno": 26329, "table_properties": {"data_size": 3716845, "index_size": 6234, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21599, "raw_average_key_size": 20, "raw_value_size": 3696614, "raw_average_value_size": 3557, "num_data_blocks": 274, "num_entries": 1039, "num_filter_entries": 1039, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769160639, "oldest_key_time": 1769160639, "file_creation_time": 1769160821, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 40185 microseconds, and 9346 cpu microseconds.
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:41.097911) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 3726777 bytes OK
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:41.097941) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:41.099784) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:41.099807) EVENT_LOG_v1 {"time_micros": 1769160821099802, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:41.099825) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 3853096, prev total WAL file size 3853096, number of live WAL files 2.
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:41.101160) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(3639KB)], [56(9130KB)]
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769160821101371, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 13076558, "oldest_snapshot_seqno": -1}
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5377 keys, 11100579 bytes, temperature: kUnknown
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769160821248231, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 11100579, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11061988, "index_size": 24017, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 134398, "raw_average_key_size": 24, "raw_value_size": 10962377, "raw_average_value_size": 2038, "num_data_blocks": 991, "num_entries": 5377, "num_filter_entries": 5377, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769160821, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:41.248776) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 11100579 bytes
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:41.250380) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 89.0 rd, 75.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 8.9 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 5904, records dropped: 527 output_compression: NoCompression
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:41.250411) EVENT_LOG_v1 {"time_micros": 1769160821250395, "job": 30, "event": "compaction_finished", "compaction_time_micros": 146988, "compaction_time_cpu_micros": 47833, "output_level": 6, "num_output_files": 1, "total_output_size": 11100579, "num_input_records": 5904, "num_output_records": 5377, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769160821251733, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769160821254419, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:41.100849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:41.254494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:41.254500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:41.254501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:41.254503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:33:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:41.254504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:33:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:41.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:42.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:42.589 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:33:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:42.590 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:33:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:33:42.590 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:33:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1171: 321 pgs: 321 active+clean; 427 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.4 MiB/s wr, 137 op/s
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:43.491013) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769160823491092, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 287, "num_deletes": 256, "total_data_size": 60597, "memory_usage": 66776, "flush_reason": "Manual Compaction"}
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 23 04:33:43 np0005593232 nova_compute[250269]: 2026-01-23 09:33:43.492 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769160823505718, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 60856, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26330, "largest_seqno": 26616, "table_properties": {"data_size": 58936, "index_size": 148, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4706, "raw_average_key_size": 16, "raw_value_size": 55011, "raw_average_value_size": 197, "num_data_blocks": 7, "num_entries": 278, "num_filter_entries": 278, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769160822, "oldest_key_time": 1769160822, "file_creation_time": 1769160823, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 14794 microseconds, and 2065 cpu microseconds.
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:43.505812) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 60856 bytes OK
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:43.505838) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:43.508279) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:43.508414) EVENT_LOG_v1 {"time_micros": 1769160823508405, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:43.508715) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 58440, prev total WAL file size 58440, number of live WAL files 2.
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:43.509918) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373534' seq:0, type:0; will stop at (end)
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(59KB)], [59(10MB)]
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769160823509972, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 11161435, "oldest_snapshot_seqno": -1}
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5131 keys, 11075180 bytes, temperature: kUnknown
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769160823618563, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 11075180, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11037497, "index_size": 23749, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12869, "raw_key_size": 130463, "raw_average_key_size": 25, "raw_value_size": 10941493, "raw_average_value_size": 2132, "num_data_blocks": 976, "num_entries": 5131, "num_filter_entries": 5131, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769160823, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:43.618836) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 11075180 bytes
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:43.620487) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 102.7 rd, 101.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 10.6 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(365.4) write-amplify(182.0) OK, records in: 5655, records dropped: 524 output_compression: NoCompression
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:43.620509) EVENT_LOG_v1 {"time_micros": 1769160823620499, "job": 32, "event": "compaction_finished", "compaction_time_micros": 108671, "compaction_time_cpu_micros": 34182, "output_level": 6, "num_output_files": 1, "total_output_size": 11075180, "num_input_records": 5655, "num_output_records": 5131, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769160823620650, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769160823622674, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:43.509818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:43.622711) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:43.622716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:43.622718) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:43.622720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:33:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:33:43.622722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:33:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:43.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:44.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:44 np0005593232 nova_compute[250269]: 2026-01-23 09:33:44.784 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1173: 321 pgs: 321 active+clean; 427 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.4 MiB/s wr, 137 op/s
Jan 23 04:33:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:45.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:46.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:33:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:33:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:33:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:33:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010330333670668155 of space, bias 1.0, pg target 3.0991001012004467 quantized to 32 (current 32)
Jan 23 04:33:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:33:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.398084170854272e-05 quantized to 32 (current 32)
Jan 23 04:33:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:33:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:33:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:33:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Jan 23 04:33:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:33:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 32)
Jan 23 04:33:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:33:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:33:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:33:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Jan 23 04:33:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:33:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 23 04:33:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:33:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:33:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:33:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 23 04:33:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1174: 321 pgs: 321 active+clean; 442 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 169 op/s
Jan 23 04:33:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:47.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:48.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:33:48 np0005593232 nova_compute[250269]: 2026-01-23 09:33:48.528 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1175: 321 pgs: 321 active+clean; 443 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 132 op/s
Jan 23 04:33:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:49.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:49 np0005593232 nova_compute[250269]: 2026-01-23 09:33:49.786 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:50.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:50 np0005593232 nova_compute[250269]: 2026-01-23 09:33:50.619 250273 DEBUG nova.virt.libvirt.driver [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 23 04:33:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1176: 321 pgs: 321 active+clean; 443 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 132 op/s
Jan 23 04:33:51 np0005593232 podman[267849]: 2026-01-23 09:33:51.430778796 +0000 UTC m=+0.085075585 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 23 04:33:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:51.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:33:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:52.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:33:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1177: 321 pgs: 321 active+clean; 364 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 441 KiB/s rd, 1.2 MiB/s wr, 96 op/s
Jan 23 04:33:53 np0005593232 nova_compute[250269]: 2026-01-23 09:33:53.530 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:33:53 np0005593232 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000013.scope: Deactivated successfully.
Jan 23 04:33:53 np0005593232 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000013.scope: Consumed 13.714s CPU time.
Jan 23 04:33:53 np0005593232 systemd-machined[215836]: Machine qemu-8-instance-00000013 terminated.
Jan 23 04:33:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:53.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:53 np0005593232 nova_compute[250269]: 2026-01-23 09:33:53.766 250273 INFO nova.virt.libvirt.driver [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Instance shutdown successfully after 13 seconds.#033[00m
Jan 23 04:33:53 np0005593232 nova_compute[250269]: 2026-01-23 09:33:53.771 250273 INFO nova.virt.libvirt.driver [-] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Instance destroyed successfully.#033[00m
Jan 23 04:33:53 np0005593232 nova_compute[250269]: 2026-01-23 09:33:53.771 250273 DEBUG nova.objects.instance [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Lazy-loading 'numa_topology' on Instance uuid 641f6008-576e-4221-a1d8-33ddfca6d069 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:33:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:54.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:54 np0005593232 nova_compute[250269]: 2026-01-23 09:33:54.790 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1178: 321 pgs: 321 active+clean; 364 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 383 KiB/s rd, 1.0 MiB/s wr, 83 op/s
Jan 23 04:33:55 np0005593232 nova_compute[250269]: 2026-01-23 09:33:55.069 250273 INFO nova.virt.libvirt.driver [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Beginning cold snapshot process#033[00m
Jan 23 04:33:55 np0005593232 nova_compute[250269]: 2026-01-23 09:33:55.620 250273 DEBUG nova.virt.libvirt.imagebackend [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] No parent info for 84c0ef19-7f67-4bd3-95d8-507c3e0942ed; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 23 04:33:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:55.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:56.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:56 np0005593232 nova_compute[250269]: 2026-01-23 09:33:56.487 250273 DEBUG nova.storage.rbd_utils [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] creating snapshot(b8f744f6ba3342d787101fdb9971e8af) on rbd image(641f6008-576e-4221-a1d8-33ddfca6d069_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 04:33:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Jan 23 04:33:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Jan 23 04:33:56 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Jan 23 04:33:56 np0005593232 nova_compute[250269]: 2026-01-23 09:33:56.928 250273 DEBUG nova.storage.rbd_utils [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] cloning vms/641f6008-576e-4221-a1d8-33ddfca6d069_disk@b8f744f6ba3342d787101fdb9971e8af to images/317c5b81-1ed5-4e60-8bf3-e997901e644f clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 23 04:33:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1180: 321 pgs: 321 active+clean; 364 MiB data, 525 MiB used, 20 GiB / 21 GiB avail; 338 KiB/s rd, 66 KiB/s wr, 66 op/s
Jan 23 04:33:57 np0005593232 nova_compute[250269]: 2026-01-23 09:33:57.071 250273 DEBUG nova.storage.rbd_utils [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] flattening images/317c5b81-1ed5-4e60-8bf3-e997901e644f flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 23 04:33:57 np0005593232 nova_compute[250269]: 2026-01-23 09:33:57.521 250273 DEBUG nova.storage.rbd_utils [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] removing snapshot(b8f744f6ba3342d787101fdb9971e8af) on rbd image(641f6008-576e-4221-a1d8-33ddfca6d069_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 23 04:33:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:57.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Jan 23 04:33:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Jan 23 04:33:57 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Jan 23 04:33:57 np0005593232 nova_compute[250269]: 2026-01-23 09:33:57.933 250273 DEBUG nova.storage.rbd_utils [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] creating snapshot(snap) on rbd image(317c5b81-1ed5-4e60-8bf3-e997901e644f) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 04:33:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:33:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:33:58.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:33:58 np0005593232 nova_compute[250269]: 2026-01-23 09:33:58.532 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:33:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:33:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Jan 23 04:33:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Jan 23 04:33:58 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Jan 23 04:33:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1183: 321 pgs: 321 active+clean; 385 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.3 MiB/s wr, 17 op/s
Jan 23 04:33:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:33:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:33:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:33:59.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:33:59 np0005593232 nova_compute[250269]: 2026-01-23 09:33:59.792 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:34:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:00.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1184: 321 pgs: 321 active+clean; 385 MiB data, 539 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.3 MiB/s wr, 17 op/s
Jan 23 04:34:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:01.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:02.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:02 np0005593232 nova_compute[250269]: 2026-01-23 09:34:02.274 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:34:02 np0005593232 nova_compute[250269]: 2026-01-23 09:34:02.275 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:34:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1185: 321 pgs: 321 active+clean; 443 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 7.6 MiB/s wr, 156 op/s
Jan 23 04:34:03 np0005593232 nova_compute[250269]: 2026-01-23 09:34:03.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:34:03 np0005593232 nova_compute[250269]: 2026-01-23 09:34:03.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:34:03 np0005593232 nova_compute[250269]: 2026-01-23 09:34:03.413 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 04:34:03 np0005593232 nova_compute[250269]: 2026-01-23 09:34:03.533 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:34:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:34:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Jan 23 04:34:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Jan 23 04:34:03 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Jan 23 04:34:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:34:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:03.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:34:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:04.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:04 np0005593232 nova_compute[250269]: 2026-01-23 09:34:04.795 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:34:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1187: 321 pgs: 321 active+clean; 443 MiB data, 571 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.6 MiB/s wr, 124 op/s
Jan 23 04:34:05 np0005593232 nova_compute[250269]: 2026-01-23 09:34:05.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:34:05 np0005593232 nova_compute[250269]: 2026-01-23 09:34:05.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:34:05 np0005593232 nova_compute[250269]: 2026-01-23 09:34:05.444 250273 INFO nova.virt.libvirt.driver [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Snapshot image upload complete#033[00m
Jan 23 04:34:05 np0005593232 nova_compute[250269]: 2026-01-23 09:34:05.444 250273 DEBUG nova.compute.manager [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:34:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:05.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:06.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:06 np0005593232 nova_compute[250269]: 2026-01-23 09:34:06.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:34:06 np0005593232 podman[268027]: 2026-01-23 09:34:06.393098454 +0000 UTC m=+0.051811627 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 04:34:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1188: 321 pgs: 321 active+clean; 396 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.1 MiB/s wr, 134 op/s
Jan 23 04:34:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:34:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:34:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:34:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:34:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:34:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:34:07 np0005593232 nova_compute[250269]: 2026-01-23 09:34:07.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:34:07 np0005593232 nova_compute[250269]: 2026-01-23 09:34:07.425 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:34:07 np0005593232 nova_compute[250269]: 2026-01-23 09:34:07.426 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:34:07 np0005593232 nova_compute[250269]: 2026-01-23 09:34:07.426 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:34:07 np0005593232 nova_compute[250269]: 2026-01-23 09:34:07.426 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:34:07 np0005593232 nova_compute[250269]: 2026-01-23 09:34:07.426 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:34:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:07.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:34:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4227065802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:34:07 np0005593232 nova_compute[250269]: 2026-01-23 09:34:07.866 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.083 250273 INFO nova.compute.manager [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Shelve offloading#033[00m
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.093 250273 INFO nova.virt.libvirt.driver [-] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Instance destroyed successfully.#033[00m
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.093 250273 DEBUG nova.compute.manager [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:34:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.097 250273 DEBUG oslo_concurrency.lockutils [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Acquiring lock "refresh_cache-641f6008-576e-4221-a1d8-33ddfca6d069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.097 250273 DEBUG oslo_concurrency.lockutils [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Acquired lock "refresh_cache-641f6008-576e-4221-a1d8-33ddfca6d069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:34:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:34:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:08.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.098 250273 DEBUG nova.network.neutron [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.349 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.350 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.354 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.355 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.535 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:34:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.559 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.560 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4525MB free_disk=20.83106231689453GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.561 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.561 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.766 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769160833.765496, 641f6008-576e-4221-a1d8-33ddfca6d069 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.767 250273 INFO nova.compute.manager [-] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.797 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance f2d1fdc0-baaf-4566-8655-aafdbcf1f473 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.798 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 641f6008-576e-4221-a1d8-33ddfca6d069 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.798 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.798 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.808 250273 DEBUG nova.compute.manager [None req-b3f2e78f-1936-4525-a1b7-3490eef8b6ae - - - - - -] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.813 250273 DEBUG nova.compute.manager [None req-b3f2e78f-1936-4525-a1b7-3490eef8b6ae - - - - - -] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: shelved, current task_state: shelving_offloading, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.853 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing inventories for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.868 250273 INFO nova.compute.manager [None req-b3f2e78f-1936-4525-a1b7-3490eef8b6ae - - - - - -] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] During sync_power_state the instance has a pending task (shelving_offloading). Skip.#033[00m
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.979 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating ProviderTree inventory for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 23 04:34:08 np0005593232 nova_compute[250269]: 2026-01-23 09:34:08.980 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 04:34:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1189: 321 pgs: 321 active+clean; 362 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.3 MiB/s wr, 127 op/s
Jan 23 04:34:09 np0005593232 nova_compute[250269]: 2026-01-23 09:34:09.059 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing aggregate associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 23 04:34:09 np0005593232 nova_compute[250269]: 2026-01-23 09:34:09.142 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing trait associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 23 04:34:09 np0005593232 nova_compute[250269]: 2026-01-23 09:34:09.326 250273 DEBUG nova.network.neutron [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:34:09 np0005593232 nova_compute[250269]: 2026-01-23 09:34:09.430 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:34:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:09.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:09 np0005593232 nova_compute[250269]: 2026-01-23 09:34:09.798 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:34:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:34:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2943214550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:34:09 np0005593232 nova_compute[250269]: 2026-01-23 09:34:09.892 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:34:09 np0005593232 nova_compute[250269]: 2026-01-23 09:34:09.898 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:34:09 np0005593232 nova_compute[250269]: 2026-01-23 09:34:09.923 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:34:09 np0005593232 nova_compute[250269]: 2026-01-23 09:34:09.960 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:34:09 np0005593232 nova_compute[250269]: 2026-01-23 09:34:09.961 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.400s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:34:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:10.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:10 np0005593232 nova_compute[250269]: 2026-01-23 09:34:10.234 250273 DEBUG nova.network.neutron [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:34:10 np0005593232 nova_compute[250269]: 2026-01-23 09:34:10.285 250273 DEBUG oslo_concurrency.lockutils [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Releasing lock "refresh_cache-641f6008-576e-4221-a1d8-33ddfca6d069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:34:10 np0005593232 nova_compute[250269]: 2026-01-23 09:34:10.298 250273 INFO nova.virt.libvirt.driver [-] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Instance destroyed successfully.#033[00m
Jan 23 04:34:10 np0005593232 nova_compute[250269]: 2026-01-23 09:34:10.299 250273 DEBUG nova.objects.instance [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Lazy-loading 'resources' on Instance uuid 641f6008-576e-4221-a1d8-33ddfca6d069 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:34:10 np0005593232 nova_compute[250269]: 2026-01-23 09:34:10.899 250273 INFO nova.virt.libvirt.driver [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Deleting instance files /var/lib/nova/instances/641f6008-576e-4221-a1d8-33ddfca6d069_del#033[00m
Jan 23 04:34:10 np0005593232 nova_compute[250269]: 2026-01-23 09:34:10.902 250273 INFO nova.virt.libvirt.driver [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Deletion of /var/lib/nova/instances/641f6008-576e-4221-a1d8-33ddfca6d069_del complete#033[00m
Jan 23 04:34:10 np0005593232 nova_compute[250269]: 2026-01-23 09:34:10.961 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:34:10 np0005593232 nova_compute[250269]: 2026-01-23 09:34:10.962 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:34:10 np0005593232 nova_compute[250269]: 2026-01-23 09:34:10.962 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:34:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1190: 321 pgs: 321 active+clean; 362 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.3 MiB/s wr, 127 op/s
Jan 23 04:34:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:11.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:12.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1191: 321 pgs: 321 active+clean; 283 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 48 KiB/s rd, 7.6 KiB/s wr, 71 op/s
Jan 23 04:34:13 np0005593232 nova_compute[250269]: 2026-01-23 09:34:13.538 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:34:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:34:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:13.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:13 np0005593232 nova_compute[250269]: 2026-01-23 09:34:13.952 250273 INFO nova.scheduler.client.report [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Deleted allocations for instance 641f6008-576e-4221-a1d8-33ddfca6d069#033[00m
Jan 23 04:34:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:14.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:14 np0005593232 nova_compute[250269]: 2026-01-23 09:34:14.196 250273 DEBUG oslo_concurrency.lockutils [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:34:14 np0005593232 nova_compute[250269]: 2026-01-23 09:34:14.196 250273 DEBUG oslo_concurrency.lockutils [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:34:14 np0005593232 nova_compute[250269]: 2026-01-23 09:34:14.350 250273 DEBUG oslo_concurrency.processutils [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:34:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:34:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3647779932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:34:14 np0005593232 nova_compute[250269]: 2026-01-23 09:34:14.772 250273 DEBUG oslo_concurrency.processutils [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:34:14 np0005593232 nova_compute[250269]: 2026-01-23 09:34:14.778 250273 DEBUG nova.compute.provider_tree [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:34:14 np0005593232 nova_compute[250269]: 2026-01-23 09:34:14.805 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:34:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1192: 321 pgs: 321 active+clean; 283 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 42 KiB/s rd, 6.6 KiB/s wr, 62 op/s
Jan 23 04:34:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:34:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:15.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:34:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:16.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1193: 321 pgs: 321 active+clean; 283 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 6.3 KiB/s wr, 59 op/s
Jan 23 04:34:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:34:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:17.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:34:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:18.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:18 np0005593232 nova_compute[250269]: 2026-01-23 09:34:18.541 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:34:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:34:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1194: 321 pgs: 321 active+clean; 283 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 1.8 KiB/s wr, 43 op/s
Jan 23 04:34:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:34:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:19.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:34:19 np0005593232 nova_compute[250269]: 2026-01-23 09:34:19.807 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:34:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:20.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1195: 321 pgs: 321 active+clean; 283 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 04:34:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:34:21.022 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:34:21 np0005593232 nova_compute[250269]: 2026-01-23 09:34:21.022 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:34:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:34:21.023 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:34:21 np0005593232 nova_compute[250269]: 2026-01-23 09:34:21.091 250273 DEBUG nova.scheduler.client.report [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:34:21 np0005593232 nova_compute[250269]: 2026-01-23 09:34:21.305 250273 DEBUG oslo_concurrency.lockutils [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 7.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:34:21 np0005593232 nova_compute[250269]: 2026-01-23 09:34:21.409 250273 DEBUG oslo_concurrency.lockutils [None req-95465372-7d4a-4d4f-8ee6-82b84841c88b 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Lock "641f6008-576e-4221-a1d8-33ddfca6d069" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 40.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:34:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:21.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:21 np0005593232 nova_compute[250269]: 2026-01-23 09:34:21.835 250273 DEBUG oslo_concurrency.lockutils [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Acquiring lock "f2d1fdc0-baaf-4566-8655-aafdbcf1f473" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:34:21 np0005593232 nova_compute[250269]: 2026-01-23 09:34:21.836 250273 DEBUG oslo_concurrency.lockutils [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Lock "f2d1fdc0-baaf-4566-8655-aafdbcf1f473" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:34:21 np0005593232 nova_compute[250269]: 2026-01-23 09:34:21.836 250273 DEBUG oslo_concurrency.lockutils [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Acquiring lock "f2d1fdc0-baaf-4566-8655-aafdbcf1f473-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:34:21 np0005593232 nova_compute[250269]: 2026-01-23 09:34:21.836 250273 DEBUG oslo_concurrency.lockutils [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Lock "f2d1fdc0-baaf-4566-8655-aafdbcf1f473-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:34:21 np0005593232 nova_compute[250269]: 2026-01-23 09:34:21.837 250273 DEBUG oslo_concurrency.lockutils [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Lock "f2d1fdc0-baaf-4566-8655-aafdbcf1f473-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:34:21 np0005593232 nova_compute[250269]: 2026-01-23 09:34:21.838 250273 INFO nova.compute.manager [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Terminating instance#033[00m
Jan 23 04:34:21 np0005593232 nova_compute[250269]: 2026-01-23 09:34:21.839 250273 DEBUG oslo_concurrency.lockutils [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Acquiring lock "refresh_cache-f2d1fdc0-baaf-4566-8655-aafdbcf1f473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:34:21 np0005593232 nova_compute[250269]: 2026-01-23 09:34:21.839 250273 DEBUG oslo_concurrency.lockutils [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Acquired lock "refresh_cache-f2d1fdc0-baaf-4566-8655-aafdbcf1f473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:34:21 np0005593232 nova_compute[250269]: 2026-01-23 09:34:21.839 250273 DEBUG nova.network.neutron [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:34:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:22.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:22 np0005593232 nova_compute[250269]: 2026-01-23 09:34:22.259 250273 DEBUG nova.network.neutron [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:34:22 np0005593232 podman[268190]: 2026-01-23 09:34:22.42525922 +0000 UTC m=+0.084873040 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 04:34:22 np0005593232 nova_compute[250269]: 2026-01-23 09:34:22.851 250273 DEBUG nova.network.neutron [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:34:22 np0005593232 nova_compute[250269]: 2026-01-23 09:34:22.873 250273 DEBUG oslo_concurrency.lockutils [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Releasing lock "refresh_cache-f2d1fdc0-baaf-4566-8655-aafdbcf1f473" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:34:22 np0005593232 nova_compute[250269]: 2026-01-23 09:34:22.873 250273 DEBUG nova.compute.manager [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 04:34:22 np0005593232 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Jan 23 04:34:22 np0005593232 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000a.scope: Consumed 20.244s CPU time.
Jan 23 04:34:22 np0005593232 systemd-machined[215836]: Machine qemu-4-instance-0000000a terminated.
Jan 23 04:34:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1196: 321 pgs: 321 active+clean; 283 MiB data, 478 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 04:34:23 np0005593232 nova_compute[250269]: 2026-01-23 09:34:23.154 250273 INFO nova.virt.libvirt.driver [-] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Instance destroyed successfully.#033[00m
Jan 23 04:34:23 np0005593232 nova_compute[250269]: 2026-01-23 09:34:23.155 250273 DEBUG nova.objects.instance [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Lazy-loading 'resources' on Instance uuid f2d1fdc0-baaf-4566-8655-aafdbcf1f473 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:34:23 np0005593232 nova_compute[250269]: 2026-01-23 09:34:23.543 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:34:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:34:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:34:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:23.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:34:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:24.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:24 np0005593232 nova_compute[250269]: 2026-01-23 09:34:24.810 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:34:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1197: 321 pgs: 321 active+clean; 283 MiB data, 478 MiB used, 21 GiB / 21 GiB avail
Jan 23 04:34:25 np0005593232 nova_compute[250269]: 2026-01-23 09:34:25.527 250273 DEBUG oslo_concurrency.lockutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Acquiring lock "641f6008-576e-4221-a1d8-33ddfca6d069" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:34:25 np0005593232 nova_compute[250269]: 2026-01-23 09:34:25.527 250273 DEBUG oslo_concurrency.lockutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Lock "641f6008-576e-4221-a1d8-33ddfca6d069" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:34:25 np0005593232 nova_compute[250269]: 2026-01-23 09:34:25.528 250273 INFO nova.compute.manager [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Unshelving#033[00m
Jan 23 04:34:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:25.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:26 np0005593232 nova_compute[250269]: 2026-01-23 09:34:26.092 250273 DEBUG oslo_concurrency.lockutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:34:26 np0005593232 nova_compute[250269]: 2026-01-23 09:34:26.093 250273 DEBUG oslo_concurrency.lockutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:34:26 np0005593232 nova_compute[250269]: 2026-01-23 09:34:26.101 250273 DEBUG nova.objects.instance [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Lazy-loading 'pci_requests' on Instance uuid 641f6008-576e-4221-a1d8-33ddfca6d069 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:34:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:34:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:26.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:34:26 np0005593232 nova_compute[250269]: 2026-01-23 09:34:26.156 250273 DEBUG nova.objects.instance [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Lazy-loading 'numa_topology' on Instance uuid 641f6008-576e-4221-a1d8-33ddfca6d069 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:34:26 np0005593232 nova_compute[250269]: 2026-01-23 09:34:26.206 250273 DEBUG nova.virt.hardware [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:34:26 np0005593232 nova_compute[250269]: 2026-01-23 09:34:26.207 250273 INFO nova.compute.claims [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:34:26 np0005593232 nova_compute[250269]: 2026-01-23 09:34:26.578 250273 DEBUG oslo_concurrency.processutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:34:26 np0005593232 nova_compute[250269]: 2026-01-23 09:34:26.848 250273 INFO nova.virt.libvirt.driver [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Deleting instance files /var/lib/nova/instances/f2d1fdc0-baaf-4566-8655-aafdbcf1f473_del#033[00m
Jan 23 04:34:26 np0005593232 nova_compute[250269]: 2026-01-23 09:34:26.849 250273 INFO nova.virt.libvirt.driver [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Deletion of /var/lib/nova/instances/f2d1fdc0-baaf-4566-8655-aafdbcf1f473_del complete#033[00m
Jan 23 04:34:26 np0005593232 nova_compute[250269]: 2026-01-23 09:34:26.953 250273 INFO nova.compute.manager [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Took 4.08 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 04:34:26 np0005593232 nova_compute[250269]: 2026-01-23 09:34:26.954 250273 DEBUG oslo.service.loopingcall [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 04:34:26 np0005593232 nova_compute[250269]: 2026-01-23 09:34:26.954 250273 DEBUG nova.compute.manager [-] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 04:34:26 np0005593232 nova_compute[250269]: 2026-01-23 09:34:26.955 250273 DEBUG nova.network.neutron [-] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 04:34:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1198: 321 pgs: 321 active+clean; 236 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 15 KiB/s rd, 597 B/s wr, 20 op/s
Jan 23 04:34:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:34:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3124466471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:34:27 np0005593232 nova_compute[250269]: 2026-01-23 09:34:27.037 250273 DEBUG oslo_concurrency.processutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:34:27 np0005593232 nova_compute[250269]: 2026-01-23 09:34:27.043 250273 DEBUG nova.compute.provider_tree [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:34:27 np0005593232 nova_compute[250269]: 2026-01-23 09:34:27.066 250273 DEBUG nova.scheduler.client.report [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:34:27 np0005593232 nova_compute[250269]: 2026-01-23 09:34:27.115 250273 DEBUG oslo_concurrency.lockutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:34:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 23 04:34:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 04:34:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:34:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:34:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:34:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:34:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:34:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:34:27 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 5ec2e396-ba40-4a59-8540-ef0d54467d84 does not exist
Jan 23 04:34:27 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 534011d9-fe27-4b08-a84f-10671a78f34b does not exist
Jan 23 04:34:27 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b877aef9-f69a-454e-ba19-7f0eb749550c does not exist
Jan 23 04:34:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:34:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:34:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:34:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:34:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:34:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:34:27 np0005593232 nova_compute[250269]: 2026-01-23 09:34:27.522 250273 DEBUG nova.network.neutron [-] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 04:34:27 np0005593232 nova_compute[250269]: 2026-01-23 09:34:27.570 250273 DEBUG nova.network.neutron [-] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 04:34:27 np0005593232 nova_compute[250269]: 2026-01-23 09:34:27.625 250273 INFO nova.compute.manager [-] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Took 0.67 seconds to deallocate network for instance.
Jan 23 04:34:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:27.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:27 np0005593232 nova_compute[250269]: 2026-01-23 09:34:27.794 250273 DEBUG oslo_concurrency.lockutils [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:34:27 np0005593232 nova_compute[250269]: 2026-01-23 09:34:27.795 250273 DEBUG oslo_concurrency.lockutils [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:34:27 np0005593232 nova_compute[250269]: 2026-01-23 09:34:27.892 250273 DEBUG oslo_concurrency.processutils [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:34:27 np0005593232 nova_compute[250269]: 2026-01-23 09:34:27.957 250273 DEBUG oslo_concurrency.lockutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Acquiring lock "refresh_cache-641f6008-576e-4221-a1d8-33ddfca6d069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 04:34:27 np0005593232 nova_compute[250269]: 2026-01-23 09:34:27.958 250273 DEBUG oslo_concurrency.lockutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Acquired lock "refresh_cache-641f6008-576e-4221-a1d8-33ddfca6d069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 04:34:27 np0005593232 nova_compute[250269]: 2026-01-23 09:34:27.959 250273 DEBUG nova.network.neutron [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 04:34:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:28.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:28 np0005593232 podman[268554]: 2026-01-23 09:34:28.149542799 +0000 UTC m=+0.037352916 container create d7d30ce96b79e4ee1d3ca189843efee9d0a8f897beb1cd138df9c5155e338e7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:34:28 np0005593232 systemd[1]: Started libpod-conmon-d7d30ce96b79e4ee1d3ca189843efee9d0a8f897beb1cd138df9c5155e338e7b.scope.
Jan 23 04:34:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 04:34:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:34:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:34:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:34:28 np0005593232 nova_compute[250269]: 2026-01-23 09:34:28.211 250273 DEBUG nova.network.neutron [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 04:34:28 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:34:28 np0005593232 podman[268554]: 2026-01-23 09:34:28.133882583 +0000 UTC m=+0.021692730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:34:28 np0005593232 podman[268554]: 2026-01-23 09:34:28.240521442 +0000 UTC m=+0.128331589 container init d7d30ce96b79e4ee1d3ca189843efee9d0a8f897beb1cd138df9c5155e338e7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 23 04:34:28 np0005593232 podman[268554]: 2026-01-23 09:34:28.247980594 +0000 UTC m=+0.135790731 container start d7d30ce96b79e4ee1d3ca189843efee9d0a8f897beb1cd138df9c5155e338e7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_benz, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:34:28 np0005593232 podman[268554]: 2026-01-23 09:34:28.251492204 +0000 UTC m=+0.139302331 container attach d7d30ce96b79e4ee1d3ca189843efee9d0a8f897beb1cd138df9c5155e338e7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 23 04:34:28 np0005593232 modest_benz[268570]: 167 167
Jan 23 04:34:28 np0005593232 systemd[1]: libpod-d7d30ce96b79e4ee1d3ca189843efee9d0a8f897beb1cd138df9c5155e338e7b.scope: Deactivated successfully.
Jan 23 04:34:28 np0005593232 podman[268554]: 2026-01-23 09:34:28.256826586 +0000 UTC m=+0.144636723 container died d7d30ce96b79e4ee1d3ca189843efee9d0a8f897beb1cd138df9c5155e338e7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_benz, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 04:34:28 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4e0ae1d66ef311a0e10d3a99c0b375ee857df5bd7b0671fda4136b34dea382c1-merged.mount: Deactivated successfully.
Jan 23 04:34:28 np0005593232 podman[268554]: 2026-01-23 09:34:28.304045892 +0000 UTC m=+0.191856039 container remove d7d30ce96b79e4ee1d3ca189843efee9d0a8f897beb1cd138df9c5155e338e7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_benz, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 23 04:34:28 np0005593232 systemd[1]: libpod-conmon-d7d30ce96b79e4ee1d3ca189843efee9d0a8f897beb1cd138df9c5155e338e7b.scope: Deactivated successfully.
Jan 23 04:34:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:34:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2562496653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:34:28 np0005593232 nova_compute[250269]: 2026-01-23 09:34:28.346 250273 DEBUG oslo_concurrency.processutils [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:34:28 np0005593232 nova_compute[250269]: 2026-01-23 09:34:28.353 250273 DEBUG nova.compute.provider_tree [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 04:34:28 np0005593232 podman[268594]: 2026-01-23 09:34:28.491202496 +0000 UTC m=+0.049635996 container create 5e5bdb490f8f13c47525d513d8e9c47533709dc2eda5189b941e559860b0dffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:34:28 np0005593232 systemd[1]: Started libpod-conmon-5e5bdb490f8f13c47525d513d8e9c47533709dc2eda5189b941e559860b0dffd.scope.
Jan 23 04:34:28 np0005593232 nova_compute[250269]: 2026-01-23 09:34:28.546 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:34:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:34:28 np0005593232 podman[268594]: 2026-01-23 09:34:28.472074261 +0000 UTC m=+0.030507811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:34:28 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:34:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e836a37bd51bf06b383f494892dc18415b6c71c0d3128457a0b413396cfae0c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:34:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e836a37bd51bf06b383f494892dc18415b6c71c0d3128457a0b413396cfae0c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:34:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e836a37bd51bf06b383f494892dc18415b6c71c0d3128457a0b413396cfae0c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:34:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e836a37bd51bf06b383f494892dc18415b6c71c0d3128457a0b413396cfae0c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:34:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e836a37bd51bf06b383f494892dc18415b6c71c0d3128457a0b413396cfae0c5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:34:28 np0005593232 podman[268594]: 2026-01-23 09:34:28.599517972 +0000 UTC m=+0.157951492 container init 5e5bdb490f8f13c47525d513d8e9c47533709dc2eda5189b941e559860b0dffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bardeen, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:34:28 np0005593232 podman[268594]: 2026-01-23 09:34:28.606006207 +0000 UTC m=+0.164439707 container start 5e5bdb490f8f13c47525d513d8e9c47533709dc2eda5189b941e559860b0dffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:34:28 np0005593232 podman[268594]: 2026-01-23 09:34:28.60893827 +0000 UTC m=+0.167371770 container attach 5e5bdb490f8f13c47525d513d8e9c47533709dc2eda5189b941e559860b0dffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bardeen, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 04:34:28 np0005593232 nova_compute[250269]: 2026-01-23 09:34:28.671 250273 DEBUG nova.scheduler.client.report [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 04:34:28 np0005593232 nova_compute[250269]: 2026-01-23 09:34:28.998 250273 DEBUG oslo_concurrency.lockutils [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:34:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1199: 321 pgs: 321 active+clean; 202 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 938 B/s wr, 26 op/s
Jan 23 04:34:29 np0005593232 nova_compute[250269]: 2026-01-23 09:34:29.041 250273 INFO nova.scheduler.client.report [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Deleted allocations for instance f2d1fdc0-baaf-4566-8655-aafdbcf1f473
Jan 23 04:34:29 np0005593232 nova_compute[250269]: 2026-01-23 09:34:29.254 250273 DEBUG nova.network.neutron [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 04:34:29 np0005593232 nova_compute[250269]: 2026-01-23 09:34:29.305 250273 DEBUG oslo_concurrency.lockutils [None req-c02203d0-8c40-4c70-bde7-6c3ee456aa3c 7536fa2e625541fba613dc32a49a4c5b 11def90dfdc14cfe928302bec2835794 - - default default] Lock "f2d1fdc0-baaf-4566-8655-aafdbcf1f473" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:34:29 np0005593232 laughing_bardeen[268610]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:34:29 np0005593232 laughing_bardeen[268610]: --> relative data size: 1.0
Jan 23 04:34:29 np0005593232 laughing_bardeen[268610]: --> All data devices are unavailable
Jan 23 04:34:29 np0005593232 systemd[1]: libpod-5e5bdb490f8f13c47525d513d8e9c47533709dc2eda5189b941e559860b0dffd.scope: Deactivated successfully.
Jan 23 04:34:29 np0005593232 podman[268594]: 2026-01-23 09:34:29.431831242 +0000 UTC m=+0.990264742 container died 5e5bdb490f8f13c47525d513d8e9c47533709dc2eda5189b941e559860b0dffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 04:34:29 np0005593232 nova_compute[250269]: 2026-01-23 09:34:29.453 250273 DEBUG oslo_concurrency.lockutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Releasing lock "refresh_cache-641f6008-576e-4221-a1d8-33ddfca6d069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 04:34:29 np0005593232 nova_compute[250269]: 2026-01-23 09:34:29.455 250273 DEBUG nova.virt.libvirt.driver [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 04:34:29 np0005593232 nova_compute[250269]: 2026-01-23 09:34:29.455 250273 INFO nova.virt.libvirt.driver [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Creating image(s)
Jan 23 04:34:29 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e836a37bd51bf06b383f494892dc18415b6c71c0d3128457a0b413396cfae0c5-merged.mount: Deactivated successfully.
Jan 23 04:34:29 np0005593232 nova_compute[250269]: 2026-01-23 09:34:29.490 250273 DEBUG nova.storage.rbd_utils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] rbd image 641f6008-576e-4221-a1d8-33ddfca6d069_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 04:34:29 np0005593232 podman[268594]: 2026-01-23 09:34:29.492755809 +0000 UTC m=+1.051189319 container remove 5e5bdb490f8f13c47525d513d8e9c47533709dc2eda5189b941e559860b0dffd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_bardeen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:34:29 np0005593232 nova_compute[250269]: 2026-01-23 09:34:29.493 250273 DEBUG nova.objects.instance [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Lazy-loading 'trusted_certs' on Instance uuid 641f6008-576e-4221-a1d8-33ddfca6d069 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 04:34:29 np0005593232 systemd[1]: libpod-conmon-5e5bdb490f8f13c47525d513d8e9c47533709dc2eda5189b941e559860b0dffd.scope: Deactivated successfully.
Jan 23 04:34:29 np0005593232 nova_compute[250269]: 2026-01-23 09:34:29.723 250273 DEBUG nova.storage.rbd_utils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] rbd image 641f6008-576e-4221-a1d8-33ddfca6d069_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 04:34:29 np0005593232 nova_compute[250269]: 2026-01-23 09:34:29.750 250273 DEBUG nova.storage.rbd_utils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] rbd image 641f6008-576e-4221-a1d8-33ddfca6d069_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 04:34:29 np0005593232 nova_compute[250269]: 2026-01-23 09:34:29.753 250273 DEBUG oslo_concurrency.lockutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Acquiring lock "cd2f3d7941295024f995616af97e7cfde40107ae" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:34:29 np0005593232 nova_compute[250269]: 2026-01-23 09:34:29.754 250273 DEBUG oslo_concurrency.lockutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Lock "cd2f3d7941295024f995616af97e7cfde40107ae" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:34:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:34:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:29.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:34:29 np0005593232 nova_compute[250269]: 2026-01-23 09:34:29.812 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:34:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:30.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:30 np0005593232 podman[268832]: 2026-01-23 09:34:30.127070566 +0000 UTC m=+0.037117098 container create 49256bb0739a7ded8af1be446fab26c8c7473096dac0e564b65946a329ab8bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_dhawan, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 04:34:30 np0005593232 systemd[1]: Started libpod-conmon-49256bb0739a7ded8af1be446fab26c8c7473096dac0e564b65946a329ab8bd3.scope.
Jan 23 04:34:30 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:34:30 np0005593232 podman[268832]: 2026-01-23 09:34:30.110562026 +0000 UTC m=+0.020608548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:34:30 np0005593232 podman[268832]: 2026-01-23 09:34:30.211032079 +0000 UTC m=+0.121078621 container init 49256bb0739a7ded8af1be446fab26c8c7473096dac0e564b65946a329ab8bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 23 04:34:30 np0005593232 podman[268832]: 2026-01-23 09:34:30.223785403 +0000 UTC m=+0.133831955 container start 49256bb0739a7ded8af1be446fab26c8c7473096dac0e564b65946a329ab8bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 04:34:30 np0005593232 beautiful_dhawan[268848]: 167 167
Jan 23 04:34:30 np0005593232 systemd[1]: libpod-49256bb0739a7ded8af1be446fab26c8c7473096dac0e564b65946a329ab8bd3.scope: Deactivated successfully.
Jan 23 04:34:30 np0005593232 podman[268832]: 2026-01-23 09:34:30.23138867 +0000 UTC m=+0.141435202 container attach 49256bb0739a7ded8af1be446fab26c8c7473096dac0e564b65946a329ab8bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_dhawan, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 04:34:30 np0005593232 podman[268832]: 2026-01-23 09:34:30.231913734 +0000 UTC m=+0.141960246 container died 49256bb0739a7ded8af1be446fab26c8c7473096dac0e564b65946a329ab8bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_dhawan, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:34:30 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d2182cc49eaa4a33ca39b5709d73dcd8a766ab6c84562086ab2a6ccccf949939-merged.mount: Deactivated successfully.
Jan 23 04:34:30 np0005593232 podman[268832]: 2026-01-23 09:34:30.26369333 +0000 UTC m=+0.173739842 container remove 49256bb0739a7ded8af1be446fab26c8c7473096dac0e564b65946a329ab8bd3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_dhawan, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 04:34:30 np0005593232 systemd[1]: libpod-conmon-49256bb0739a7ded8af1be446fab26c8c7473096dac0e564b65946a329ab8bd3.scope: Deactivated successfully.
Jan 23 04:34:30 np0005593232 podman[268870]: 2026-01-23 09:34:30.423031881 +0000 UTC m=+0.043943143 container create ef2385a111b711769fa357d2579d08bc51a81b2c0f70d23cb74f586b3c1eed2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 04:34:30 np0005593232 systemd[1]: Started libpod-conmon-ef2385a111b711769fa357d2579d08bc51a81b2c0f70d23cb74f586b3c1eed2e.scope.
Jan 23 04:34:30 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:34:30 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954e44714fa6b78fc8677367e524dec3f9d5e9a5b13953f5ceb6d40994919367/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:34:30 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954e44714fa6b78fc8677367e524dec3f9d5e9a5b13953f5ceb6d40994919367/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:34:30 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954e44714fa6b78fc8677367e524dec3f9d5e9a5b13953f5ceb6d40994919367/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:34:30 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954e44714fa6b78fc8677367e524dec3f9d5e9a5b13953f5ceb6d40994919367/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:34:30 np0005593232 podman[268870]: 2026-01-23 09:34:30.402948689 +0000 UTC m=+0.023860001 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:34:30 np0005593232 podman[268870]: 2026-01-23 09:34:30.59142917 +0000 UTC m=+0.212340472 container init ef2385a111b711769fa357d2579d08bc51a81b2c0f70d23cb74f586b3c1eed2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lamarr, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:34:30 np0005593232 podman[268870]: 2026-01-23 09:34:30.599052107 +0000 UTC m=+0.219963369 container start ef2385a111b711769fa357d2579d08bc51a81b2c0f70d23cb74f586b3c1eed2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:34:30 np0005593232 podman[268870]: 2026-01-23 09:34:30.602731852 +0000 UTC m=+0.223643114 container attach ef2385a111b711769fa357d2579d08bc51a81b2c0f70d23cb74f586b3c1eed2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lamarr, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:34:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1200: 321 pgs: 321 active+clean; 202 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 18 KiB/s rd, 938 B/s wr, 26 op/s
Jan 23 04:34:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:34:31.027 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:34:31 np0005593232 great_lamarr[268911]: {
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:    "0": [
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:        {
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:            "devices": [
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:                "/dev/loop3"
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:            ],
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:            "lv_name": "ceph_lv0",
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:            "lv_size": "7511998464",
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:            "name": "ceph_lv0",
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:            "tags": {
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:                "ceph.cluster_name": "ceph",
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:                "ceph.crush_device_class": "",
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:                "ceph.encrypted": "0",
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:                "ceph.osd_id": "0",
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:                "ceph.type": "block",
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:                "ceph.vdo": "0"
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:            },
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:            "type": "block",
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:            "vg_name": "ceph_vg0"
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:        }
Jan 23 04:34:31 np0005593232 great_lamarr[268911]:    ]
Jan 23 04:34:31 np0005593232 great_lamarr[268911]: }
Jan 23 04:34:31 np0005593232 systemd[1]: libpod-ef2385a111b711769fa357d2579d08bc51a81b2c0f70d23cb74f586b3c1eed2e.scope: Deactivated successfully.
Jan 23 04:34:31 np0005593232 podman[268870]: 2026-01-23 09:34:31.414501227 +0000 UTC m=+1.035412499 container died ef2385a111b711769fa357d2579d08bc51a81b2c0f70d23cb74f586b3c1eed2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:34:31 np0005593232 systemd[1]: var-lib-containers-storage-overlay-954e44714fa6b78fc8677367e524dec3f9d5e9a5b13953f5ceb6d40994919367-merged.mount: Deactivated successfully.
Jan 23 04:34:31 np0005593232 podman[268870]: 2026-01-23 09:34:31.477769991 +0000 UTC m=+1.098681243 container remove ef2385a111b711769fa357d2579d08bc51a81b2c0f70d23cb74f586b3c1eed2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:34:31 np0005593232 systemd[1]: libpod-conmon-ef2385a111b711769fa357d2579d08bc51a81b2c0f70d23cb74f586b3c1eed2e.scope: Deactivated successfully.
Jan 23 04:34:31 np0005593232 nova_compute[250269]: 2026-01-23 09:34:31.518 250273 DEBUG nova.virt.libvirt.imagebackend [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Image locations are: [{'url': 'rbd://e1533653-0a5a-584c-b34b-8689f0d32e77/images/317c5b81-1ed5-4e60-8bf3-e997901e644f/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e1533653-0a5a-584c-b34b-8689f0d32e77/images/317c5b81-1ed5-4e60-8bf3-e997901e644f/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 23 04:34:31 np0005593232 nova_compute[250269]: 2026-01-23 09:34:31.589 250273 DEBUG nova.virt.libvirt.imagebackend [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Selected location: {'url': 'rbd://e1533653-0a5a-584c-b34b-8689f0d32e77/images/317c5b81-1ed5-4e60-8bf3-e997901e644f/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 23 04:34:31 np0005593232 nova_compute[250269]: 2026-01-23 09:34:31.590 250273 DEBUG nova.storage.rbd_utils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] cloning images/317c5b81-1ed5-4e60-8bf3-e997901e644f@snap to None/641f6008-576e-4221-a1d8-33ddfca6d069_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 23 04:34:31 np0005593232 nova_compute[250269]: 2026-01-23 09:34:31.732 250273 DEBUG oslo_concurrency.lockutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Lock "cd2f3d7941295024f995616af97e7cfde40107ae" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.978s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:34:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:31.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:31 np0005593232 nova_compute[250269]: 2026-01-23 09:34:31.876 250273 DEBUG nova.objects.instance [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Lazy-loading 'migration_context' on Instance uuid 641f6008-576e-4221-a1d8-33ddfca6d069 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.015 250273 DEBUG nova.storage.rbd_utils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] flattening vms/641f6008-576e-4221-a1d8-33ddfca6d069_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 23 04:34:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:32.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:32 np0005593232 podman[269258]: 2026-01-23 09:34:32.130565424 +0000 UTC m=+0.047514914 container create ce43d3a42f85b98b761d3972efdb4bebb2bbeff86a5f65c17d81509cbb3423d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 04:34:32 np0005593232 podman[269258]: 2026-01-23 09:34:32.109761591 +0000 UTC m=+0.026711111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:34:32 np0005593232 systemd[1]: Started libpod-conmon-ce43d3a42f85b98b761d3972efdb4bebb2bbeff86a5f65c17d81509cbb3423d9.scope.
Jan 23 04:34:32 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:34:32 np0005593232 podman[269258]: 2026-01-23 09:34:32.721716102 +0000 UTC m=+0.638665712 container init ce43d3a42f85b98b761d3972efdb4bebb2bbeff86a5f65c17d81509cbb3423d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_brahmagupta, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 23 04:34:32 np0005593232 podman[269258]: 2026-01-23 09:34:32.730694168 +0000 UTC m=+0.647643668 container start ce43d3a42f85b98b761d3972efdb4bebb2bbeff86a5f65c17d81509cbb3423d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 23 04:34:32 np0005593232 elegant_brahmagupta[269274]: 167 167
Jan 23 04:34:32 np0005593232 systemd[1]: libpod-ce43d3a42f85b98b761d3972efdb4bebb2bbeff86a5f65c17d81509cbb3423d9.scope: Deactivated successfully.
Jan 23 04:34:32 np0005593232 conmon[269274]: conmon ce43d3a42f85b98b761d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce43d3a42f85b98b761d3972efdb4bebb2bbeff86a5f65c17d81509cbb3423d9.scope/container/memory.events
Jan 23 04:34:32 np0005593232 podman[269258]: 2026-01-23 09:34:32.787334182 +0000 UTC m=+0.704283682 container attach ce43d3a42f85b98b761d3972efdb4bebb2bbeff86a5f65c17d81509cbb3423d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_brahmagupta, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 04:34:32 np0005593232 podman[269258]: 2026-01-23 09:34:32.787835726 +0000 UTC m=+0.704785226 container died ce43d3a42f85b98b761d3972efdb4bebb2bbeff86a5f65c17d81509cbb3423d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:34:32 np0005593232 systemd[1]: var-lib-containers-storage-overlay-594c480d6d01e2199adc9ad5160a12fe312a611fa0fe75b0eff64910690adc6b-merged.mount: Deactivated successfully.
Jan 23 04:34:32 np0005593232 podman[269258]: 2026-01-23 09:34:32.879931571 +0000 UTC m=+0.796881081 container remove ce43d3a42f85b98b761d3972efdb4bebb2bbeff86a5f65c17d81509cbb3423d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 04:34:32 np0005593232 systemd[1]: libpod-conmon-ce43d3a42f85b98b761d3972efdb4bebb2bbeff86a5f65c17d81509cbb3423d9.scope: Deactivated successfully.
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.936 250273 DEBUG nova.virt.libvirt.driver [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Image rbd:vms/641f6008-576e-4221-a1d8-33ddfca6d069_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.937 250273 DEBUG nova.virt.libvirt.driver [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.938 250273 DEBUG nova.virt.libvirt.driver [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Ensure instance console log exists: /var/lib/nova/instances/641f6008-576e-4221-a1d8-33ddfca6d069/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.939 250273 DEBUG oslo_concurrency.lockutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.939 250273 DEBUG oslo_concurrency.lockutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.939 250273 DEBUG oslo_concurrency.lockutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.941 250273 DEBUG nova.virt.libvirt.driver [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-23T09:33:38Z,direct_url=<?>,disk_format='raw',id=317c5b81-1ed5-4e60-8bf3-e997901e644f,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1701597104-shelved',owner='307173cd6ebb4dd5ad3883dedac0271e',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-23T09:34:02Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.946 250273 WARNING nova.virt.libvirt.driver [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.951 250273 DEBUG nova.virt.libvirt.host [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.952 250273 DEBUG nova.virt.libvirt.host [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.957 250273 DEBUG nova.virt.libvirt.host [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.958 250273 DEBUG nova.virt.libvirt.host [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.959 250273 DEBUG nova.virt.libvirt.driver [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.959 250273 DEBUG nova.virt.hardware [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-23T09:33:38Z,direct_url=<?>,disk_format='raw',id=317c5b81-1ed5-4e60-8bf3-e997901e644f,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1701597104-shelved',owner='307173cd6ebb4dd5ad3883dedac0271e',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-23T09:34:02Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.960 250273 DEBUG nova.virt.hardware [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.960 250273 DEBUG nova.virt.hardware [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.960 250273 DEBUG nova.virt.hardware [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.960 250273 DEBUG nova.virt.hardware [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.960 250273 DEBUG nova.virt.hardware [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.961 250273 DEBUG nova.virt.hardware [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.961 250273 DEBUG nova.virt.hardware [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.961 250273 DEBUG nova.virt.hardware [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.961 250273 DEBUG nova.virt.hardware [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.961 250273 DEBUG nova.virt.hardware [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:34:32 np0005593232 nova_compute[250269]: 2026-01-23 09:34:32.962 250273 DEBUG nova.objects.instance [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Lazy-loading 'vcpu_model' on Instance uuid 641f6008-576e-4221-a1d8-33ddfca6d069 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:34:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1201: 321 pgs: 321 active+clean; 225 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.6 MiB/s wr, 75 op/s
Jan 23 04:34:33 np0005593232 nova_compute[250269]: 2026-01-23 09:34:33.019 250273 DEBUG oslo_concurrency.processutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:34:33 np0005593232 podman[269301]: 2026-01-23 09:34:33.046534729 +0000 UTC m=+0.025591800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:34:33 np0005593232 podman[269301]: 2026-01-23 09:34:33.229550045 +0000 UTC m=+0.208607116 container create f3f6073f2e2060a71e2c1b4be0a5342ba19bed6254a76846873c8b88a2763d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:34:33 np0005593232 systemd[1]: Started libpod-conmon-f3f6073f2e2060a71e2c1b4be0a5342ba19bed6254a76846873c8b88a2763d8b.scope.
Jan 23 04:34:33 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:34:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d7994ff35c5ccfdc6506e2c8371e2ad3275f820647f6a3edac3469b64b7c17d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:34:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d7994ff35c5ccfdc6506e2c8371e2ad3275f820647f6a3edac3469b64b7c17d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:34:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d7994ff35c5ccfdc6506e2c8371e2ad3275f820647f6a3edac3469b64b7c17d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:34:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d7994ff35c5ccfdc6506e2c8371e2ad3275f820647f6a3edac3469b64b7c17d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:34:33 np0005593232 podman[269301]: 2026-01-23 09:34:33.337663586 +0000 UTC m=+0.316720637 container init f3f6073f2e2060a71e2c1b4be0a5342ba19bed6254a76846873c8b88a2763d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_diffie, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 04:34:33 np0005593232 podman[269301]: 2026-01-23 09:34:33.343809671 +0000 UTC m=+0.322866722 container start f3f6073f2e2060a71e2c1b4be0a5342ba19bed6254a76846873c8b88a2763d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:34:33 np0005593232 podman[269301]: 2026-01-23 09:34:33.346765635 +0000 UTC m=+0.325822686 container attach f3f6073f2e2060a71e2c1b4be0a5342ba19bed6254a76846873c8b88a2763d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_diffie, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Jan 23 04:34:33 np0005593232 nova_compute[250269]: 2026-01-23 09:34:33.548 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:34:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:34:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:34:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1439335444' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:34:33 np0005593232 nova_compute[250269]: 2026-01-23 09:34:33.606 250273 DEBUG oslo_concurrency.processutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:34:33 np0005593232 nova_compute[250269]: 2026-01-23 09:34:33.629 250273 DEBUG nova.storage.rbd_utils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] rbd image 641f6008-576e-4221-a1d8-33ddfca6d069_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:34:33 np0005593232 nova_compute[250269]: 2026-01-23 09:34:33.633 250273 DEBUG oslo_concurrency.processutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:34:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:33.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:34:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3046870225' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:34:34 np0005593232 nova_compute[250269]: 2026-01-23 09:34:34.052 250273 DEBUG oslo_concurrency.processutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:34:34 np0005593232 nova_compute[250269]: 2026-01-23 09:34:34.055 250273 DEBUG nova.objects.instance [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Lazy-loading 'pci_devices' on Instance uuid 641f6008-576e-4221-a1d8-33ddfca6d069 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:34:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:34.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:34 np0005593232 nova_compute[250269]: 2026-01-23 09:34:34.151 250273 DEBUG nova.virt.libvirt.driver [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:34:34 np0005593232 nova_compute[250269]:  <uuid>641f6008-576e-4221-a1d8-33ddfca6d069</uuid>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:  <name>instance-00000013</name>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <nova:name>tempest-UnshelveToHostMultiNodesTest-server-1701597104</nova:name>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:34:32</nova:creationTime>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:34:34 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:        <nova:user uuid="5874ba32b4a94f68aaa43252721d2fb0">tempest-UnshelveToHostMultiNodesTest-1879363435-project-member</nova:user>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:        <nova:project uuid="307173cd6ebb4dd5ad3883dedac0271e">tempest-UnshelveToHostMultiNodesTest-1879363435</nova:project>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="317c5b81-1ed5-4e60-8bf3-e997901e644f"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <nova:ports/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <entry name="serial">641f6008-576e-4221-a1d8-33ddfca6d069</entry>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <entry name="uuid">641f6008-576e-4221-a1d8-33ddfca6d069</entry>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/641f6008-576e-4221-a1d8-33ddfca6d069_disk">
Jan 23 04:34:34 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:34:34 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/641f6008-576e-4221-a1d8-33ddfca6d069_disk.config">
Jan 23 04:34:34 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:34:34 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/641f6008-576e-4221-a1d8-33ddfca6d069/console.log" append="off"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <input type="keyboard" bus="usb"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:34:34 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:34:34 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:34:34 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:34:34 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:34:34 np0005593232 elated_diffie[269336]: {
Jan 23 04:34:34 np0005593232 elated_diffie[269336]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:34:34 np0005593232 elated_diffie[269336]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:34:34 np0005593232 elated_diffie[269336]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:34:34 np0005593232 elated_diffie[269336]:        "osd_id": 0,
Jan 23 04:34:34 np0005593232 elated_diffie[269336]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:34:34 np0005593232 elated_diffie[269336]:        "type": "bluestore"
Jan 23 04:34:34 np0005593232 elated_diffie[269336]:    }
Jan 23 04:34:34 np0005593232 elated_diffie[269336]: }
Jan 23 04:34:34 np0005593232 systemd[1]: libpod-f3f6073f2e2060a71e2c1b4be0a5342ba19bed6254a76846873c8b88a2763d8b.scope: Deactivated successfully.
Jan 23 04:34:34 np0005593232 podman[269301]: 2026-01-23 09:34:34.208593257 +0000 UTC m=+1.187650308 container died f3f6073f2e2060a71e2c1b4be0a5342ba19bed6254a76846873c8b88a2763d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_diffie, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 23 04:34:34 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7d7994ff35c5ccfdc6506e2c8371e2ad3275f820647f6a3edac3469b64b7c17d-merged.mount: Deactivated successfully.
Jan 23 04:34:34 np0005593232 podman[269301]: 2026-01-23 09:34:34.262773161 +0000 UTC m=+1.241830212 container remove f3f6073f2e2060a71e2c1b4be0a5342ba19bed6254a76846873c8b88a2763d8b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_diffie, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:34:34 np0005593232 systemd[1]: libpod-conmon-f3f6073f2e2060a71e2c1b4be0a5342ba19bed6254a76846873c8b88a2763d8b.scope: Deactivated successfully.
Jan 23 04:34:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:34:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:34:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:34:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:34:34 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 45af5316-6443-4278-a2ec-197422e1c7bb does not exist
Jan 23 04:34:34 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ac9614af-9c84-438b-9284-96bf12e47b60 does not exist
Jan 23 04:34:34 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c9a37eec-81d7-49fb-a4af-815e3f9a6097 does not exist
Jan 23 04:34:34 np0005593232 nova_compute[250269]: 2026-01-23 09:34:34.358 250273 DEBUG nova.virt.libvirt.driver [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:34:34 np0005593232 nova_compute[250269]: 2026-01-23 09:34:34.359 250273 DEBUG nova.virt.libvirt.driver [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:34:34 np0005593232 nova_compute[250269]: 2026-01-23 09:34:34.360 250273 INFO nova.virt.libvirt.driver [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Using config drive#033[00m
Jan 23 04:34:34 np0005593232 nova_compute[250269]: 2026-01-23 09:34:34.388 250273 DEBUG nova.storage.rbd_utils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] rbd image 641f6008-576e-4221-a1d8-33ddfca6d069_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:34:34 np0005593232 nova_compute[250269]: 2026-01-23 09:34:34.574 250273 DEBUG nova.objects.instance [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Lazy-loading 'ec2_ids' on Instance uuid 641f6008-576e-4221-a1d8-33ddfca6d069 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:34:34 np0005593232 nova_compute[250269]: 2026-01-23 09:34:34.816 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:34:34 np0005593232 nova_compute[250269]: 2026-01-23 09:34:34.873 250273 DEBUG nova.objects.instance [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Lazy-loading 'keypairs' on Instance uuid 641f6008-576e-4221-a1d8-33ddfca6d069 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:34:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1202: 321 pgs: 321 active+clean; 225 MiB data, 431 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.6 MiB/s wr, 75 op/s
Jan 23 04:34:35 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:34:35 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:34:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:34:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:35.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:34:35 np0005593232 nova_compute[250269]: 2026-01-23 09:34:35.952 250273 INFO nova.virt.libvirt.driver [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Creating config drive at /var/lib/nova/instances/641f6008-576e-4221-a1d8-33ddfca6d069/disk.config#033[00m
Jan 23 04:34:35 np0005593232 nova_compute[250269]: 2026-01-23 09:34:35.958 250273 DEBUG oslo_concurrency.processutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/641f6008-576e-4221-a1d8-33ddfca6d069/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbx87w1pc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:34:36 np0005593232 nova_compute[250269]: 2026-01-23 09:34:36.091 250273 DEBUG oslo_concurrency.processutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/641f6008-576e-4221-a1d8-33ddfca6d069/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbx87w1pc" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:34:36 np0005593232 nova_compute[250269]: 2026-01-23 09:34:36.126 250273 DEBUG nova.storage.rbd_utils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] rbd image 641f6008-576e-4221-a1d8-33ddfca6d069_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:34:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:36.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:36 np0005593232 nova_compute[250269]: 2026-01-23 09:34:36.132 250273 DEBUG oslo_concurrency.processutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/641f6008-576e-4221-a1d8-33ddfca6d069/disk.config 641f6008-576e-4221-a1d8-33ddfca6d069_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:34:36 np0005593232 nova_compute[250269]: 2026-01-23 09:34:36.328 250273 DEBUG oslo_concurrency.processutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/641f6008-576e-4221-a1d8-33ddfca6d069/disk.config 641f6008-576e-4221-a1d8-33ddfca6d069_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:34:36 np0005593232 nova_compute[250269]: 2026-01-23 09:34:36.329 250273 INFO nova.virt.libvirt.driver [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Deleting local config drive /var/lib/nova/instances/641f6008-576e-4221-a1d8-33ddfca6d069/disk.config because it was imported into RBD.#033[00m
Jan 23 04:34:36 np0005593232 systemd-machined[215836]: New machine qemu-9-instance-00000013.
Jan 23 04:34:36 np0005593232 systemd[1]: Started Virtual Machine qemu-9-instance-00000013.
Jan 23 04:34:36 np0005593232 podman[269532]: 2026-01-23 09:34:36.487652698 +0000 UTC m=+0.057119928 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 23 04:34:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1203: 321 pgs: 321 active+clean; 230 MiB data, 443 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 108 op/s
Jan 23 04:34:37 np0005593232 nova_compute[250269]: 2026-01-23 09:34:37.099 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160877.0988395, 641f6008-576e-4221-a1d8-33ddfca6d069 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:34:37 np0005593232 nova_compute[250269]: 2026-01-23 09:34:37.100 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:34:37 np0005593232 nova_compute[250269]: 2026-01-23 09:34:37.103 250273 DEBUG nova.compute.manager [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:34:37 np0005593232 nova_compute[250269]: 2026-01-23 09:34:37.104 250273 DEBUG nova.virt.libvirt.driver [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:34:37 np0005593232 nova_compute[250269]: 2026-01-23 09:34:37.107 250273 INFO nova.virt.libvirt.driver [-] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Instance spawned successfully.#033[00m
Jan 23 04:34:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:34:37
Jan 23 04:34:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:34:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:34:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'default.rgw.log', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'images']
Jan 23 04:34:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:34:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:34:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:34:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:34:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:34:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:34:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:34:37 np0005593232 nova_compute[250269]: 2026-01-23 09:34:37.344 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:34:37 np0005593232 nova_compute[250269]: 2026-01-23 09:34:37.347 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:34:37 np0005593232 nova_compute[250269]: 2026-01-23 09:34:37.458 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:34:37 np0005593232 nova_compute[250269]: 2026-01-23 09:34:37.459 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160877.0993981, 641f6008-576e-4221-a1d8-33ddfca6d069 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:34:37 np0005593232 nova_compute[250269]: 2026-01-23 09:34:37.459 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] VM Started (Lifecycle Event)#033[00m
Jan 23 04:34:37 np0005593232 nova_compute[250269]: 2026-01-23 09:34:37.502 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:34:37 np0005593232 nova_compute[250269]: 2026-01-23 09:34:37.507 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:34:37 np0005593232 nova_compute[250269]: 2026-01-23 09:34:37.539 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:34:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:37.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:37 np0005593232 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 23 04:34:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:34:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:34:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:34:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:34:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:34:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:34:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:34:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:34:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:34:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:34:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:38.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:38 np0005593232 nova_compute[250269]: 2026-01-23 09:34:38.152 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769160863.150859, f2d1fdc0-baaf-4566-8655-aafdbcf1f473 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:34:38 np0005593232 nova_compute[250269]: 2026-01-23 09:34:38.152 250273 INFO nova.compute.manager [-] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:34:38 np0005593232 nova_compute[250269]: 2026-01-23 09:34:38.224 250273 DEBUG nova.compute.manager [None req-e03bd646-b07f-40c0-bbd3-e860103fe2f2 - - - - - -] [instance: f2d1fdc0-baaf-4566-8655-aafdbcf1f473] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:34:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Jan 23 04:34:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Jan 23 04:34:38 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Jan 23 04:34:38 np0005593232 nova_compute[250269]: 2026-01-23 09:34:38.551 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:34:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:34:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1205: 321 pgs: 321 active+clean; 200 MiB data, 444 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 136 op/s
Jan 23 04:34:39 np0005593232 nova_compute[250269]: 2026-01-23 09:34:39.149 250273 DEBUG nova.compute.manager [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:34:39 np0005593232 nova_compute[250269]: 2026-01-23 09:34:39.276 250273 DEBUG oslo_concurrency.lockutils [None req-cfc8f3a1-b700-4a6a-9a38-b6d52e779b5e 86b66718f3a54282b1d7a2f58d62706f 782083282cf74a109e9cc81fd3a64fef - - default default] Lock "641f6008-576e-4221-a1d8-33ddfca6d069" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 13.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:34:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:39.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:39 np0005593232 nova_compute[250269]: 2026-01-23 09:34:39.819 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:34:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:40.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1206: 321 pgs: 321 active+clean; 200 MiB data, 444 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 136 op/s
Jan 23 04:34:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:34:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:41.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:34:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:42.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:42 np0005593232 nova_compute[250269]: 2026-01-23 09:34:42.193 250273 DEBUG oslo_concurrency.lockutils [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Acquiring lock "641f6008-576e-4221-a1d8-33ddfca6d069" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:34:42 np0005593232 nova_compute[250269]: 2026-01-23 09:34:42.194 250273 DEBUG oslo_concurrency.lockutils [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Lock "641f6008-576e-4221-a1d8-33ddfca6d069" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:34:42 np0005593232 nova_compute[250269]: 2026-01-23 09:34:42.194 250273 INFO nova.compute.manager [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Shelving#033[00m
Jan 23 04:34:42 np0005593232 nova_compute[250269]: 2026-01-23 09:34:42.223 250273 DEBUG nova.virt.libvirt.driver [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 23 04:34:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:34:42.590 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:34:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:34:42.590 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:34:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:34:42.591 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:34:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1207: 321 pgs: 321 active+clean; 121 MiB data, 393 MiB used, 21 GiB / 21 GiB avail; 5.6 MiB/s rd, 2.7 MiB/s wr, 184 op/s
Jan 23 04:34:43 np0005593232 nova_compute[250269]: 2026-01-23 09:34:43.553 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:34:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:34:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Jan 23 04:34:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Jan 23 04:34:43 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Jan 23 04:34:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:43.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:34:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:44.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:34:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 04:34:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2546838625' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 04:34:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 04:34:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2546838625' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 04:34:44 np0005593232 nova_compute[250269]: 2026-01-23 09:34:44.822 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:34:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1209: 321 pgs: 321 active+clean; 121 MiB data, 393 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 32 KiB/s wr, 180 op/s
Jan 23 04:34:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:34:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:45.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:34:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:46.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:34:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:34:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:34:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:34:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021694118741857433 of space, bias 1.0, pg target 0.650823562255723 quantized to 32 (current 32)
Jan 23 04:34:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:34:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 04:34:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:34:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:34:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:34:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 04:34:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:34:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:34:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:34:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:34:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:34:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:34:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:34:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:34:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:34:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:34:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:34:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:34:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1210: 321 pgs: 321 active+clean; 139 MiB data, 396 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 390 KiB/s wr, 160 op/s
Jan 23 04:34:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:47.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:48.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:48 np0005593232 nova_compute[250269]: 2026-01-23 09:34:48.556 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:34:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:34:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1211: 321 pgs: 321 active+clean; 167 MiB data, 414 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Jan 23 04:34:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:49.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:49 np0005593232 nova_compute[250269]: 2026-01-23 09:34:49.825 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:34:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:34:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:50.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:34:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1212: 321 pgs: 321 active+clean; 167 MiB data, 414 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Jan 23 04:34:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:34:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:51.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:34:52 np0005593232 ovn_controller[151001]: 2026-01-23T09:34:52Z|00079|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 23 04:34:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:52.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:52 np0005593232 nova_compute[250269]: 2026-01-23 09:34:52.274 250273 DEBUG nova.virt.libvirt.driver [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 23 04:34:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1213: 321 pgs: 321 active+clean; 167 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 171 op/s
Jan 23 04:34:53 np0005593232 podman[269659]: 2026-01-23 09:34:53.461676176 +0000 UTC m=+0.108326028 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:34:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:34:53 np0005593232 nova_compute[250269]: 2026-01-23 09:34:53.576 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:34:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:53.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:34:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:54.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:34:54 np0005593232 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000013.scope: Deactivated successfully.
Jan 23 04:34:54 np0005593232 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000013.scope: Consumed 13.725s CPU time.
Jan 23 04:34:54 np0005593232 systemd-machined[215836]: Machine qemu-9-instance-00000013 terminated.
Jan 23 04:34:54 np0005593232 nova_compute[250269]: 2026-01-23 09:34:54.874 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:34:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1214: 321 pgs: 321 active+clean; 167 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.9 MiB/s wr, 149 op/s
Jan 23 04:34:55 np0005593232 nova_compute[250269]: 2026-01-23 09:34:55.290 250273 INFO nova.virt.libvirt.driver [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Instance shutdown successfully after 13 seconds.#033[00m
Jan 23 04:34:55 np0005593232 nova_compute[250269]: 2026-01-23 09:34:55.295 250273 INFO nova.virt.libvirt.driver [-] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Instance destroyed successfully.#033[00m
Jan 23 04:34:55 np0005593232 nova_compute[250269]: 2026-01-23 09:34:55.296 250273 DEBUG nova.objects.instance [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Lazy-loading 'numa_topology' on Instance uuid 641f6008-576e-4221-a1d8-33ddfca6d069 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:34:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:55.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:55 np0005593232 nova_compute[250269]: 2026-01-23 09:34:55.948 250273 INFO nova.virt.libvirt.driver [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Beginning cold snapshot process#033[00m
Jan 23 04:34:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:34:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:56.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:34:56 np0005593232 nova_compute[250269]: 2026-01-23 09:34:56.170 250273 DEBUG nova.virt.libvirt.imagebackend [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] No parent info for 84c0ef19-7f67-4bd3-95d8-507c3e0942ed; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 23 04:34:56 np0005593232 nova_compute[250269]: 2026-01-23 09:34:56.608 250273 DEBUG nova.storage.rbd_utils [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] creating snapshot(ee9b8b967647451db52222c1dba9cf39) on rbd image(641f6008-576e-4221-a1d8-33ddfca6d069_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 04:34:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1215: 321 pgs: 321 active+clean; 169 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 145 op/s
Jan 23 04:34:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Jan 23 04:34:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Jan 23 04:34:57 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Jan 23 04:34:57 np0005593232 nova_compute[250269]: 2026-01-23 09:34:57.224 250273 DEBUG nova.storage.rbd_utils [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] cloning vms/641f6008-576e-4221-a1d8-33ddfca6d069_disk@ee9b8b967647451db52222c1dba9cf39 to images/cb273736-b860-41f6-b2fe-c26bfa105d90 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 23 04:34:57 np0005593232 nova_compute[250269]: 2026-01-23 09:34:57.346 250273 DEBUG nova.storage.rbd_utils [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] flattening images/cb273736-b860-41f6-b2fe-c26bfa105d90 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 23 04:34:57 np0005593232 nova_compute[250269]: 2026-01-23 09:34:57.816 250273 DEBUG nova.storage.rbd_utils [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] removing snapshot(ee9b8b967647451db52222c1dba9cf39) on rbd image(641f6008-576e-4221-a1d8-33ddfca6d069_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 23 04:34:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:57.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:34:58.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Jan 23 04:34:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Jan 23 04:34:58 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Jan 23 04:34:58 np0005593232 nova_compute[250269]: 2026-01-23 09:34:58.208 250273 DEBUG nova.storage.rbd_utils [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] creating snapshot(snap) on rbd image(cb273736-b860-41f6-b2fe-c26bfa105d90) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 04:34:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:34:58 np0005593232 nova_compute[250269]: 2026-01-23 09:34:58.578 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:34:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1218: 321 pgs: 321 active+clean; 181 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 6.1 MiB/s rd, 1.5 MiB/s wr, 220 op/s
Jan 23 04:34:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Jan 23 04:34:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Jan 23 04:34:59 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Jan 23 04:34:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:34:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:34:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:34:59.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:34:59 np0005593232 nova_compute[250269]: 2026-01-23 09:34:59.876 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:35:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:00.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1220: 321 pgs: 321 active+clean; 181 MiB data, 415 MiB used, 21 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.9 MiB/s wr, 66 op/s
Jan 23 04:35:01 np0005593232 nova_compute[250269]: 2026-01-23 09:35:01.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:35:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:01.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:02.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:02 np0005593232 nova_compute[250269]: 2026-01-23 09:35:02.288 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:35:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1221: 321 pgs: 321 active+clean; 328 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 11 MiB/s rd, 16 MiB/s wr, 452 op/s
Jan 23 04:35:03 np0005593232 nova_compute[250269]: 2026-01-23 09:35:03.039 250273 INFO nova.virt.libvirt.driver [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Snapshot image upload complete#033[00m
Jan 23 04:35:03 np0005593232 nova_compute[250269]: 2026-01-23 09:35:03.039 250273 DEBUG nova.compute.manager [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:35:03 np0005593232 nova_compute[250269]: 2026-01-23 09:35:03.232 250273 INFO nova.compute.manager [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Shelve offloading#033[00m
Jan 23 04:35:03 np0005593232 nova_compute[250269]: 2026-01-23 09:35:03.238 250273 INFO nova.virt.libvirt.driver [-] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Instance destroyed successfully.#033[00m
Jan 23 04:35:03 np0005593232 nova_compute[250269]: 2026-01-23 09:35:03.239 250273 DEBUG nova.compute.manager [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:35:03 np0005593232 nova_compute[250269]: 2026-01-23 09:35:03.241 250273 DEBUG oslo_concurrency.lockutils [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Acquiring lock "refresh_cache-641f6008-576e-4221-a1d8-33ddfca6d069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:35:03 np0005593232 nova_compute[250269]: 2026-01-23 09:35:03.241 250273 DEBUG oslo_concurrency.lockutils [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Acquired lock "refresh_cache-641f6008-576e-4221-a1d8-33ddfca6d069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:35:03 np0005593232 nova_compute[250269]: 2026-01-23 09:35:03.241 250273 DEBUG nova.network.neutron [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:35:03 np0005593232 nova_compute[250269]: 2026-01-23 09:35:03.290 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:35:03 np0005593232 nova_compute[250269]: 2026-01-23 09:35:03.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:35:03 np0005593232 nova_compute[250269]: 2026-01-23 09:35:03.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:35:03 np0005593232 nova_compute[250269]: 2026-01-23 09:35:03.325 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-641f6008-576e-4221-a1d8-33ddfca6d069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:35:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:35:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Jan 23 04:35:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Jan 23 04:35:03 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Jan 23 04:35:03 np0005593232 nova_compute[250269]: 2026-01-23 09:35:03.580 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:35:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:03.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:04 np0005593232 nova_compute[250269]: 2026-01-23 09:35:04.149 250273 DEBUG nova.network.neutron [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:35:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:04.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:04 np0005593232 nova_compute[250269]: 2026-01-23 09:35:04.879 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:35:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1223: 321 pgs: 321 active+clean; 328 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 6.8 MiB/s rd, 12 MiB/s wr, 342 op/s
Jan 23 04:35:05 np0005593232 nova_compute[250269]: 2026-01-23 09:35:05.777 250273 DEBUG nova.network.neutron [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:35:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:05 np0005593232 nova_compute[250269]: 2026-01-23 09:35:05.830 250273 DEBUG oslo_concurrency.lockutils [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Releasing lock "refresh_cache-641f6008-576e-4221-a1d8-33ddfca6d069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:35:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:05.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:05 np0005593232 nova_compute[250269]: 2026-01-23 09:35:05.832 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-641f6008-576e-4221-a1d8-33ddfca6d069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:35:05 np0005593232 nova_compute[250269]: 2026-01-23 09:35:05.832 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 04:35:05 np0005593232 nova_compute[250269]: 2026-01-23 09:35:05.832 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 641f6008-576e-4221-a1d8-33ddfca6d069 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:35:05 np0005593232 nova_compute[250269]: 2026-01-23 09:35:05.839 250273 INFO nova.virt.libvirt.driver [-] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Instance destroyed successfully.#033[00m
Jan 23 04:35:05 np0005593232 nova_compute[250269]: 2026-01-23 09:35:05.839 250273 DEBUG nova.objects.instance [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Lazy-loading 'resources' on Instance uuid 641f6008-576e-4221-a1d8-33ddfca6d069 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:35:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:06.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:06 np0005593232 nova_compute[250269]: 2026-01-23 09:35:06.987 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:35:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1224: 321 pgs: 321 active+clean; 308 MiB data, 483 MiB used, 21 GiB / 21 GiB avail; 6.7 MiB/s rd, 10 MiB/s wr, 324 op/s
Jan 23 04:35:07 np0005593232 nova_compute[250269]: 2026-01-23 09:35:07.050 250273 INFO nova.virt.libvirt.driver [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Deleting instance files /var/lib/nova/instances/641f6008-576e-4221-a1d8-33ddfca6d069_del
Jan 23 04:35:07 np0005593232 nova_compute[250269]: 2026-01-23 09:35:07.051 250273 INFO nova.virt.libvirt.driver [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Deletion of /var/lib/nova/instances/641f6008-576e-4221-a1d8-33ddfca6d069_del complete
Jan 23 04:35:07 np0005593232 nova_compute[250269]: 2026-01-23 09:35:07.157 250273 INFO nova.scheduler.client.report [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Deleted allocations for instance 641f6008-576e-4221-a1d8-33ddfca6d069
Jan 23 04:35:07 np0005593232 nova_compute[250269]: 2026-01-23 09:35:07.216 250273 DEBUG oslo_concurrency.lockutils [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:35:07 np0005593232 nova_compute[250269]: 2026-01-23 09:35:07.216 250273 DEBUG oslo_concurrency.lockutils [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:35:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:35:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:35:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:35:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:35:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:35:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:35:07 np0005593232 nova_compute[250269]: 2026-01-23 09:35:07.259 250273 DEBUG oslo_concurrency.processutils [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:35:07 np0005593232 podman[269856]: 2026-01-23 09:35:07.402065807 +0000 UTC m=+0.057359885 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:35:07 np0005593232 nova_compute[250269]: 2026-01-23 09:35:07.497 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 04:35:07 np0005593232 nova_compute[250269]: 2026-01-23 09:35:07.520 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-641f6008-576e-4221-a1d8-33ddfca6d069" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 04:35:07 np0005593232 nova_compute[250269]: 2026-01-23 09:35:07.522 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 23 04:35:07 np0005593232 nova_compute[250269]: 2026-01-23 09:35:07.523 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 04:35:07 np0005593232 nova_compute[250269]: 2026-01-23 09:35:07.524 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 04:35:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:35:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1280044772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:35:07 np0005593232 nova_compute[250269]: 2026-01-23 09:35:07.693 250273 DEBUG oslo_concurrency.processutils [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:35:07 np0005593232 nova_compute[250269]: 2026-01-23 09:35:07.699 250273 DEBUG nova.compute.provider_tree [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 04:35:07 np0005593232 nova_compute[250269]: 2026-01-23 09:35:07.720 250273 DEBUG nova.scheduler.client.report [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 04:35:07 np0005593232 nova_compute[250269]: 2026-01-23 09:35:07.744 250273 DEBUG oslo_concurrency.lockutils [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:35:07 np0005593232 nova_compute[250269]: 2026-01-23 09:35:07.798 250273 DEBUG oslo_concurrency.lockutils [None req-d06e87c4-9744-47f1-98e4-f064566e7df3 5874ba32b4a94f68aaa43252721d2fb0 307173cd6ebb4dd5ad3883dedac0271e - - default default] Lock "641f6008-576e-4221-a1d8-33ddfca6d069" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 25.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:35:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:07.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:08.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:08 np0005593232 nova_compute[250269]: 2026-01-23 09:35:08.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 04:35:08 np0005593232 nova_compute[250269]: 2026-01-23 09:35:08.321 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 04:35:08 np0005593232 nova_compute[250269]: 2026-01-23 09:35:08.322 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 04:35:08 np0005593232 nova_compute[250269]: 2026-01-23 09:35:08.322 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 04:35:08 np0005593232 nova_compute[250269]: 2026-01-23 09:35:08.355 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:35:08 np0005593232 nova_compute[250269]: 2026-01-23 09:35:08.356 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:35:08 np0005593232 nova_compute[250269]: 2026-01-23 09:35:08.356 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:35:08 np0005593232 nova_compute[250269]: 2026-01-23 09:35:08.356 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 04:35:08 np0005593232 nova_compute[250269]: 2026-01-23 09:35:08.357 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:35:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:35:08 np0005593232 nova_compute[250269]: 2026-01-23 09:35:08.611 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:35:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:35:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4096576038' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:35:08 np0005593232 nova_compute[250269]: 2026-01-23 09:35:08.782 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:35:08 np0005593232 nova_compute[250269]: 2026-01-23 09:35:08.946 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 04:35:08 np0005593232 nova_compute[250269]: 2026-01-23 09:35:08.947 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4670MB free_disk=20.88443374633789GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 04:35:08 np0005593232 nova_compute[250269]: 2026-01-23 09:35:08.947 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:35:08 np0005593232 nova_compute[250269]: 2026-01-23 09:35:08.948 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:35:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1225: 321 pgs: 321 active+clean; 276 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 5.4 MiB/s rd, 8.4 MiB/s wr, 289 op/s
Jan 23 04:35:09 np0005593232 nova_compute[250269]: 2026-01-23 09:35:09.036 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 04:35:09 np0005593232 nova_compute[250269]: 2026-01-23 09:35:09.037 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 04:35:09 np0005593232 nova_compute[250269]: 2026-01-23 09:35:09.063 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:35:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:35:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2159843528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:35:09 np0005593232 nova_compute[250269]: 2026-01-23 09:35:09.479 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:35:09 np0005593232 nova_compute[250269]: 2026-01-23 09:35:09.485 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 04:35:09 np0005593232 nova_compute[250269]: 2026-01-23 09:35:09.524 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 04:35:09 np0005593232 nova_compute[250269]: 2026-01-23 09:35:09.556 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 04:35:09 np0005593232 nova_compute[250269]: 2026-01-23 09:35:09.557 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:35:09 np0005593232 nova_compute[250269]: 2026-01-23 09:35:09.753 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769160894.7515254, 641f6008-576e-4221-a1d8-33ddfca6d069 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 04:35:09 np0005593232 nova_compute[250269]: 2026-01-23 09:35:09.753 250273 INFO nova.compute.manager [-] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] VM Stopped (Lifecycle Event)
Jan 23 04:35:09 np0005593232 nova_compute[250269]: 2026-01-23 09:35:09.779 250273 DEBUG nova.compute.manager [None req-88fbe7cd-f3f9-496e-960e-7a17bdacd295 - - - - - -] [instance: 641f6008-576e-4221-a1d8-33ddfca6d069] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 04:35:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:35:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:09.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:35:09 np0005593232 nova_compute[250269]: 2026-01-23 09:35:09.883 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:35:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:10.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:10 np0005593232 nova_compute[250269]: 2026-01-23 09:35:10.527 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 04:35:10 np0005593232 nova_compute[250269]: 2026-01-23 09:35:10.528 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 04:35:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1226: 321 pgs: 321 active+clean; 276 MiB data, 464 MiB used, 21 GiB / 21 GiB avail; 5.4 MiB/s rd, 8.2 MiB/s wr, 284 op/s
Jan 23 04:35:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:35:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:11.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:35:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:12.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1227: 321 pgs: 321 active+clean; 202 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 746 KiB/s rd, 20 KiB/s wr, 88 op/s
Jan 23 04:35:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:35:13 np0005593232 nova_compute[250269]: 2026-01-23 09:35:13.613 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:35:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:13.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:35:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:14.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:35:14 np0005593232 nova_compute[250269]: 2026-01-23 09:35:14.886 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:35:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1228: 321 pgs: 321 active+clean; 202 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 651 KiB/s rd, 17 KiB/s wr, 76 op/s
Jan 23 04:35:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:35:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:15.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:35:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:16.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1229: 321 pgs: 321 active+clean; 204 MiB data, 438 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.3 MiB/s wr, 142 op/s
Jan 23 04:35:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:35:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:17.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:35:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:18.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:35:18 np0005593232 nova_compute[250269]: 2026-01-23 09:35:18.616 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:35:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1230: 321 pgs: 321 active+clean; 203 MiB data, 425 MiB used, 21 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Jan 23 04:35:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Jan 23 04:35:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Jan 23 04:35:19 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Jan 23 04:35:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:35:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:19.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:35:19 np0005593232 nova_compute[250269]: 2026-01-23 09:35:19.889 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:35:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:20.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1232: 321 pgs: 321 active+clean; 203 MiB data, 425 MiB used, 21 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 168 op/s
Jan 23 04:35:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:21.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:22.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1233: 321 pgs: 321 active+clean; 122 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 241 op/s
Jan 23 04:35:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:35:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Jan 23 04:35:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Jan 23 04:35:23 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Jan 23 04:35:23 np0005593232 nova_compute[250269]: 2026-01-23 09:35:23.618 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:35:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:23.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:24.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:24 np0005593232 podman[269998]: 2026-01-23 09:35:24.416055144 +0000 UTC m=+0.076331126 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller)
Jan 23 04:35:24 np0005593232 nova_compute[250269]: 2026-01-23 09:35:24.891 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:35:24 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:35:24.947 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:35:24 np0005593232 nova_compute[250269]: 2026-01-23 09:35:24.947 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:35:24 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:35:24.949 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:35:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1235: 321 pgs: 321 active+clean; 122 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.4 MiB/s wr, 198 op/s
Jan 23 04:35:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:25.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:26.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1236: 321 pgs: 321 active+clean; 79 MiB data, 354 MiB used, 21 GiB / 21 GiB avail; 2.9 MiB/s rd, 3.9 KiB/s wr, 165 op/s
Jan 23 04:35:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:27.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:35:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:28.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:35:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:35:28 np0005593232 nova_compute[250269]: 2026-01-23 09:35:28.620 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:35:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1237: 321 pgs: 321 active+clean; 41 MiB data, 334 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.3 KiB/s wr, 152 op/s
Jan 23 04:35:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:29.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:29 np0005593232 nova_compute[250269]: 2026-01-23 09:35:29.894 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:35:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:30.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1238: 321 pgs: 321 active+clean; 41 MiB data, 334 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.1 KiB/s wr, 144 op/s
Jan 23 04:35:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:31.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:32.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:35:32.951 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:35:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1239: 321 pgs: 321 active+clean; 41 MiB data, 334 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Jan 23 04:35:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:35:33 np0005593232 nova_compute[250269]: 2026-01-23 09:35:33.622 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:35:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:33.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:34.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:34 np0005593232 nova_compute[250269]: 2026-01-23 09:35:34.909 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:35:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1240: 321 pgs: 321 active+clean; 41 MiB data, 334 MiB used, 21 GiB / 21 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 23 04:35:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:35:35 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:35:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:35:35 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:35:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:35:35 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:35:35 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 94af5704-fbb4-4247-af95-ecc0c368f6cf does not exist
Jan 23 04:35:35 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 67f0c953-80b5-4556-9dd9-6b252f92384f does not exist
Jan 23 04:35:35 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev cd9a161d-d257-438b-ab93-3ac25b60056b does not exist
Jan 23 04:35:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:35:35 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:35:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:35:35 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:35:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:35:35 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:35:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:35.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:36.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:36 np0005593232 podman[270355]: 2026-01-23 09:35:36.255034946 +0000 UTC m=+0.039727383 container create 32acabafd0da340fe25119ae1d4c290d1d29bffe0cc9d300188c7093ecd30f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 04:35:36 np0005593232 systemd[1]: Started libpod-conmon-32acabafd0da340fe25119ae1d4c290d1d29bffe0cc9d300188c7093ecd30f1d.scope.
Jan 23 04:35:36 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:35:36 np0005593232 podman[270355]: 2026-01-23 09:35:36.238071603 +0000 UTC m=+0.022764070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:35:36 np0005593232 podman[270355]: 2026-01-23 09:35:36.34251914 +0000 UTC m=+0.127211627 container init 32acabafd0da340fe25119ae1d4c290d1d29bffe0cc9d300188c7093ecd30f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 04:35:36 np0005593232 podman[270355]: 2026-01-23 09:35:36.349518549 +0000 UTC m=+0.134211006 container start 32acabafd0da340fe25119ae1d4c290d1d29bffe0cc9d300188c7093ecd30f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_black, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 23 04:35:36 np0005593232 podman[270355]: 2026-01-23 09:35:36.353202824 +0000 UTC m=+0.137895261 container attach 32acabafd0da340fe25119ae1d4c290d1d29bffe0cc9d300188c7093ecd30f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_black, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:35:36 np0005593232 musing_black[270371]: 167 167
Jan 23 04:35:36 np0005593232 systemd[1]: libpod-32acabafd0da340fe25119ae1d4c290d1d29bffe0cc9d300188c7093ecd30f1d.scope: Deactivated successfully.
Jan 23 04:35:36 np0005593232 podman[270355]: 2026-01-23 09:35:36.357491646 +0000 UTC m=+0.142184083 container died 32acabafd0da340fe25119ae1d4c290d1d29bffe0cc9d300188c7093ecd30f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_black, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:35:36 np0005593232 systemd[1]: var-lib-containers-storage-overlay-dfe5e4148bc506944417ed0ae74da15f34c5d2c022ea0085e2dd845d172bf45d-merged.mount: Deactivated successfully.
Jan 23 04:35:36 np0005593232 podman[270355]: 2026-01-23 09:35:36.414578203 +0000 UTC m=+0.199270640 container remove 32acabafd0da340fe25119ae1d4c290d1d29bffe0cc9d300188c7093ecd30f1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_black, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 04:35:36 np0005593232 systemd[1]: libpod-conmon-32acabafd0da340fe25119ae1d4c290d1d29bffe0cc9d300188c7093ecd30f1d.scope: Deactivated successfully.
Jan 23 04:35:36 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:35:36 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:35:36 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:35:36 np0005593232 podman[270393]: 2026-01-23 09:35:36.587679945 +0000 UTC m=+0.047487214 container create b971fcdd851cda3115cec1c06aeefe3942bb6ad79c3bf597752e8c3c491c0ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:35:36 np0005593232 systemd[1]: Started libpod-conmon-b971fcdd851cda3115cec1c06aeefe3942bb6ad79c3bf597752e8c3c491c0ea2.scope.
Jan 23 04:35:36 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:35:36 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29553d9cad9e17e9d24b7b4341dc2828cdc4db555df977153cb1395cd69f21f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:35:36 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29553d9cad9e17e9d24b7b4341dc2828cdc4db555df977153cb1395cd69f21f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:35:36 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29553d9cad9e17e9d24b7b4341dc2828cdc4db555df977153cb1395cd69f21f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:35:36 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29553d9cad9e17e9d24b7b4341dc2828cdc4db555df977153cb1395cd69f21f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:35:36 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f29553d9cad9e17e9d24b7b4341dc2828cdc4db555df977153cb1395cd69f21f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:35:36 np0005593232 podman[270393]: 2026-01-23 09:35:36.565978647 +0000 UTC m=+0.025785946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:35:36 np0005593232 podman[270393]: 2026-01-23 09:35:36.671068082 +0000 UTC m=+0.130875371 container init b971fcdd851cda3115cec1c06aeefe3942bb6ad79c3bf597752e8c3c491c0ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 23 04:35:36 np0005593232 podman[270393]: 2026-01-23 09:35:36.679614696 +0000 UTC m=+0.139421965 container start b971fcdd851cda3115cec1c06aeefe3942bb6ad79c3bf597752e8c3c491c0ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kalam, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:35:36 np0005593232 podman[270393]: 2026-01-23 09:35:36.684613038 +0000 UTC m=+0.144420327 container attach b971fcdd851cda3115cec1c06aeefe3942bb6ad79c3bf597752e8c3c491c0ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kalam, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 04:35:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1241: 321 pgs: 321 active+clean; 47 MiB data, 337 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 388 KiB/s wr, 28 op/s
Jan 23 04:35:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:35:37
Jan 23 04:35:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:35:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:35:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['vms', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'backups']
Jan 23 04:35:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:35:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:35:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:35:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:35:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:35:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:35:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:35:37 np0005593232 relaxed_kalam[270410]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:35:37 np0005593232 relaxed_kalam[270410]: --> relative data size: 1.0
Jan 23 04:35:37 np0005593232 relaxed_kalam[270410]: --> All data devices are unavailable
Jan 23 04:35:37 np0005593232 systemd[1]: libpod-b971fcdd851cda3115cec1c06aeefe3942bb6ad79c3bf597752e8c3c491c0ea2.scope: Deactivated successfully.
Jan 23 04:35:37 np0005593232 podman[270427]: 2026-01-23 09:35:37.599018188 +0000 UTC m=+0.030782828 container died b971fcdd851cda3115cec1c06aeefe3942bb6ad79c3bf597752e8c3c491c0ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:35:37 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f29553d9cad9e17e9d24b7b4341dc2828cdc4db555df977153cb1395cd69f21f-merged.mount: Deactivated successfully.
Jan 23 04:35:37 np0005593232 podman[270427]: 2026-01-23 09:35:37.652950685 +0000 UTC m=+0.084715295 container remove b971fcdd851cda3115cec1c06aeefe3942bb6ad79c3bf597752e8c3c491c0ea2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kalam, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:35:37 np0005593232 systemd[1]: libpod-conmon-b971fcdd851cda3115cec1c06aeefe3942bb6ad79c3bf597752e8c3c491c0ea2.scope: Deactivated successfully.
Jan 23 04:35:37 np0005593232 podman[270426]: 2026-01-23 09:35:37.658556885 +0000 UTC m=+0.078330933 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 04:35:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:35:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:37.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:35:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:35:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:35:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:35:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:35:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:35:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:35:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:35:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:35:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:35:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:35:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:38.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:38 np0005593232 podman[270601]: 2026-01-23 09:35:38.274431777 +0000 UTC m=+0.043045798 container create c451e89dfab22821404552a4061f991f848a0a97d3d42478238c86bf9b3bf0ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 23 04:35:38 np0005593232 systemd[1]: Started libpod-conmon-c451e89dfab22821404552a4061f991f848a0a97d3d42478238c86bf9b3bf0ee.scope.
Jan 23 04:35:38 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:35:38 np0005593232 podman[270601]: 2026-01-23 09:35:38.352753419 +0000 UTC m=+0.121367470 container init c451e89dfab22821404552a4061f991f848a0a97d3d42478238c86bf9b3bf0ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_volhard, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 04:35:38 np0005593232 podman[270601]: 2026-01-23 09:35:38.257735841 +0000 UTC m=+0.026349892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:35:38 np0005593232 podman[270601]: 2026-01-23 09:35:38.360443369 +0000 UTC m=+0.129057390 container start c451e89dfab22821404552a4061f991f848a0a97d3d42478238c86bf9b3bf0ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:35:38 np0005593232 podman[270601]: 2026-01-23 09:35:38.364146014 +0000 UTC m=+0.132760035 container attach c451e89dfab22821404552a4061f991f848a0a97d3d42478238c86bf9b3bf0ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:35:38 np0005593232 systemd[1]: libpod-c451e89dfab22821404552a4061f991f848a0a97d3d42478238c86bf9b3bf0ee.scope: Deactivated successfully.
Jan 23 04:35:38 np0005593232 lucid_volhard[270617]: 167 167
Jan 23 04:35:38 np0005593232 conmon[270617]: conmon c451e89dfab228214045 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c451e89dfab22821404552a4061f991f848a0a97d3d42478238c86bf9b3bf0ee.scope/container/memory.events
Jan 23 04:35:38 np0005593232 podman[270601]: 2026-01-23 09:35:38.369226599 +0000 UTC m=+0.137840620 container died c451e89dfab22821404552a4061f991f848a0a97d3d42478238c86bf9b3bf0ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_volhard, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 04:35:38 np0005593232 systemd[1]: var-lib-containers-storage-overlay-408dc9efb25dce6ce7b83d8409ecb4a4330f62f7bb214b8f48bec15de425b7b4-merged.mount: Deactivated successfully.
Jan 23 04:35:38 np0005593232 podman[270601]: 2026-01-23 09:35:38.412469591 +0000 UTC m=+0.181083612 container remove c451e89dfab22821404552a4061f991f848a0a97d3d42478238c86bf9b3bf0ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:35:38 np0005593232 systemd[1]: libpod-conmon-c451e89dfab22821404552a4061f991f848a0a97d3d42478238c86bf9b3bf0ee.scope: Deactivated successfully.
Jan 23 04:35:38 np0005593232 podman[270643]: 2026-01-23 09:35:38.569100765 +0000 UTC m=+0.041743350 container create 101636abe95f0333024607d14d0b1e1432fe4e2b88110d6f165e561073832a4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hawking, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:35:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:35:38 np0005593232 systemd[1]: Started libpod-conmon-101636abe95f0333024607d14d0b1e1432fe4e2b88110d6f165e561073832a4b.scope.
Jan 23 04:35:38 np0005593232 nova_compute[250269]: 2026-01-23 09:35:38.624 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:35:38 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:35:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b6530bd5fa7eadb2aa1cdd83402a6582d83928f0c7b688d042e118c1135a84c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:35:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b6530bd5fa7eadb2aa1cdd83402a6582d83928f0c7b688d042e118c1135a84c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:35:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b6530bd5fa7eadb2aa1cdd83402a6582d83928f0c7b688d042e118c1135a84c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:35:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b6530bd5fa7eadb2aa1cdd83402a6582d83928f0c7b688d042e118c1135a84c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:35:38 np0005593232 podman[270643]: 2026-01-23 09:35:38.644308899 +0000 UTC m=+0.116951494 container init 101636abe95f0333024607d14d0b1e1432fe4e2b88110d6f165e561073832a4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hawking, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:35:38 np0005593232 podman[270643]: 2026-01-23 09:35:38.551518064 +0000 UTC m=+0.024160759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:35:38 np0005593232 podman[270643]: 2026-01-23 09:35:38.655089306 +0000 UTC m=+0.127731891 container start 101636abe95f0333024607d14d0b1e1432fe4e2b88110d6f165e561073832a4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hawking, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 04:35:38 np0005593232 podman[270643]: 2026-01-23 09:35:38.659178742 +0000 UTC m=+0.131821327 container attach 101636abe95f0333024607d14d0b1e1432fe4e2b88110d6f165e561073832a4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hawking, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:35:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1242: 321 pgs: 321 active+clean; 64 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 17 KiB/s rd, 1.3 MiB/s wr, 23 op/s
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]: {
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:    "0": [
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:        {
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:            "devices": [
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:                "/dev/loop3"
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:            ],
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:            "lv_name": "ceph_lv0",
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:            "lv_size": "7511998464",
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:            "name": "ceph_lv0",
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:            "tags": {
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:                "ceph.cluster_name": "ceph",
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:                "ceph.crush_device_class": "",
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:                "ceph.encrypted": "0",
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:                "ceph.osd_id": "0",
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:                "ceph.type": "block",
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:                "ceph.vdo": "0"
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:            },
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:            "type": "block",
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:            "vg_name": "ceph_vg0"
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:        }
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]:    ]
Jan 23 04:35:39 np0005593232 heuristic_hawking[270659]: }
Jan 23 04:35:39 np0005593232 systemd[1]: libpod-101636abe95f0333024607d14d0b1e1432fe4e2b88110d6f165e561073832a4b.scope: Deactivated successfully.
Jan 23 04:35:39 np0005593232 podman[270643]: 2026-01-23 09:35:39.430466674 +0000 UTC m=+0.903109269 container died 101636abe95f0333024607d14d0b1e1432fe4e2b88110d6f165e561073832a4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hawking, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Jan 23 04:35:39 np0005593232 systemd[1]: var-lib-containers-storage-overlay-9b6530bd5fa7eadb2aa1cdd83402a6582d83928f0c7b688d042e118c1135a84c-merged.mount: Deactivated successfully.
Jan 23 04:35:39 np0005593232 podman[270643]: 2026-01-23 09:35:39.636082054 +0000 UTC m=+1.108724649 container remove 101636abe95f0333024607d14d0b1e1432fe4e2b88110d6f165e561073832a4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hawking, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:35:39 np0005593232 systemd[1]: libpod-conmon-101636abe95f0333024607d14d0b1e1432fe4e2b88110d6f165e561073832a4b.scope: Deactivated successfully.
Jan 23 04:35:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:35:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:39.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:35:39 np0005593232 nova_compute[250269]: 2026-01-23 09:35:39.911 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:35:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:40.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:40 np0005593232 podman[270821]: 2026-01-23 09:35:40.223152784 +0000 UTC m=+0.040629109 container create 57777807e908b7e92dd1053ba72c18cdfb44056df0a1d15fe84813a72a47cac5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:35:40 np0005593232 systemd[1]: Started libpod-conmon-57777807e908b7e92dd1053ba72c18cdfb44056df0a1d15fe84813a72a47cac5.scope.
Jan 23 04:35:40 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:35:40 np0005593232 podman[270821]: 2026-01-23 09:35:40.205547262 +0000 UTC m=+0.023023607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:35:40 np0005593232 podman[270821]: 2026-01-23 09:35:40.302180926 +0000 UTC m=+0.119657271 container init 57777807e908b7e92dd1053ba72c18cdfb44056df0a1d15fe84813a72a47cac5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 04:35:40 np0005593232 podman[270821]: 2026-01-23 09:35:40.307979061 +0000 UTC m=+0.125455386 container start 57777807e908b7e92dd1053ba72c18cdfb44056df0a1d15fe84813a72a47cac5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 23 04:35:40 np0005593232 podman[270821]: 2026-01-23 09:35:40.311236404 +0000 UTC m=+0.128712729 container attach 57777807e908b7e92dd1053ba72c18cdfb44056df0a1d15fe84813a72a47cac5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_babbage, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 23 04:35:40 np0005593232 inspiring_babbage[270837]: 167 167
Jan 23 04:35:40 np0005593232 systemd[1]: libpod-57777807e908b7e92dd1053ba72c18cdfb44056df0a1d15fe84813a72a47cac5.scope: Deactivated successfully.
Jan 23 04:35:40 np0005593232 podman[270821]: 2026-01-23 09:35:40.312661795 +0000 UTC m=+0.130138120 container died 57777807e908b7e92dd1053ba72c18cdfb44056df0a1d15fe84813a72a47cac5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_babbage, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:35:40 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e6b57b987352b0d6264f09fe3f11a160d238331fe30705ef8af966415e774c98-merged.mount: Deactivated successfully.
Jan 23 04:35:40 np0005593232 podman[270821]: 2026-01-23 09:35:40.354898109 +0000 UTC m=+0.172374434 container remove 57777807e908b7e92dd1053ba72c18cdfb44056df0a1d15fe84813a72a47cac5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_babbage, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Jan 23 04:35:40 np0005593232 systemd[1]: libpod-conmon-57777807e908b7e92dd1053ba72c18cdfb44056df0a1d15fe84813a72a47cac5.scope: Deactivated successfully.
Jan 23 04:35:40 np0005593232 podman[270860]: 2026-01-23 09:35:40.509440453 +0000 UTC m=+0.038081916 container create e84e068d806804961cc76b6934b8206474354c8cb79b67a0b227cdcb4ebc1780 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 04:35:40 np0005593232 systemd[1]: Started libpod-conmon-e84e068d806804961cc76b6934b8206474354c8cb79b67a0b227cdcb4ebc1780.scope.
Jan 23 04:35:40 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:35:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5b58dc52a31ecaf027b1c6ab5f09de457620a24033de80325059e00d4226ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:35:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5b58dc52a31ecaf027b1c6ab5f09de457620a24033de80325059e00d4226ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:35:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5b58dc52a31ecaf027b1c6ab5f09de457620a24033de80325059e00d4226ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:35:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c5b58dc52a31ecaf027b1c6ab5f09de457620a24033de80325059e00d4226ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:35:40 np0005593232 podman[270860]: 2026-01-23 09:35:40.491546143 +0000 UTC m=+0.020187626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:35:40 np0005593232 podman[270860]: 2026-01-23 09:35:40.594468346 +0000 UTC m=+0.123109849 container init e84e068d806804961cc76b6934b8206474354c8cb79b67a0b227cdcb4ebc1780 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kare, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:35:40 np0005593232 podman[270860]: 2026-01-23 09:35:40.601648291 +0000 UTC m=+0.130289754 container start e84e068d806804961cc76b6934b8206474354c8cb79b67a0b227cdcb4ebc1780 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kare, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:35:40 np0005593232 podman[270860]: 2026-01-23 09:35:40.60547024 +0000 UTC m=+0.134111713 container attach e84e068d806804961cc76b6934b8206474354c8cb79b67a0b227cdcb4ebc1780 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kare, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Jan 23 04:35:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1243: 321 pgs: 321 active+clean; 64 MiB data, 348 MiB used, 21 GiB / 21 GiB avail; 10 KiB/s rd, 1.3 MiB/s wr, 13 op/s
Jan 23 04:35:41 np0005593232 nova_compute[250269]: 2026-01-23 09:35:41.193 250273 DEBUG oslo_concurrency.lockutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Acquiring lock "29ef4211-b5dc-4b2e-a7ec-94232e37ffde" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:35:41 np0005593232 nova_compute[250269]: 2026-01-23 09:35:41.194 250273 DEBUG oslo_concurrency.lockutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Lock "29ef4211-b5dc-4b2e-a7ec-94232e37ffde" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:35:41 np0005593232 nova_compute[250269]: 2026-01-23 09:35:41.231 250273 DEBUG nova.compute.manager [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:35:41 np0005593232 nova_compute[250269]: 2026-01-23 09:35:41.319 250273 DEBUG oslo_concurrency.lockutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:35:41 np0005593232 nova_compute[250269]: 2026-01-23 09:35:41.320 250273 DEBUG oslo_concurrency.lockutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:35:41 np0005593232 nova_compute[250269]: 2026-01-23 09:35:41.327 250273 DEBUG nova.virt.hardware [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:35:41 np0005593232 nova_compute[250269]: 2026-01-23 09:35:41.327 250273 INFO nova.compute.claims [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:35:41 np0005593232 nova_compute[250269]: 2026-01-23 09:35:41.454 250273 DEBUG oslo_concurrency.processutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:35:41 np0005593232 busy_kare[270876]: {
Jan 23 04:35:41 np0005593232 busy_kare[270876]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:35:41 np0005593232 busy_kare[270876]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:35:41 np0005593232 busy_kare[270876]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:35:41 np0005593232 busy_kare[270876]:        "osd_id": 0,
Jan 23 04:35:41 np0005593232 busy_kare[270876]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:35:41 np0005593232 busy_kare[270876]:        "type": "bluestore"
Jan 23 04:35:41 np0005593232 busy_kare[270876]:    }
Jan 23 04:35:41 np0005593232 busy_kare[270876]: }
Jan 23 04:35:41 np0005593232 systemd[1]: libpod-e84e068d806804961cc76b6934b8206474354c8cb79b67a0b227cdcb4ebc1780.scope: Deactivated successfully.
Jan 23 04:35:41 np0005593232 podman[270860]: 2026-01-23 09:35:41.500575 +0000 UTC m=+1.029216463 container died e84e068d806804961cc76b6934b8206474354c8cb79b67a0b227cdcb4ebc1780 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 04:35:41 np0005593232 systemd[1]: var-lib-containers-storage-overlay-9c5b58dc52a31ecaf027b1c6ab5f09de457620a24033de80325059e00d4226ed-merged.mount: Deactivated successfully.
Jan 23 04:35:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:35:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:41.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:35:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:35:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1053844224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:35:41 np0005593232 nova_compute[250269]: 2026-01-23 09:35:41.915 250273 DEBUG oslo_concurrency.processutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:35:41 np0005593232 nova_compute[250269]: 2026-01-23 09:35:41.923 250273 DEBUG nova.compute.provider_tree [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:35:41 np0005593232 nova_compute[250269]: 2026-01-23 09:35:41.969 250273 DEBUG nova.scheduler.client.report [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:35:42 np0005593232 nova_compute[250269]: 2026-01-23 09:35:42.044 250273 DEBUG oslo_concurrency.lockutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:35:42 np0005593232 nova_compute[250269]: 2026-01-23 09:35:42.045 250273 DEBUG nova.compute.manager [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 04:35:42 np0005593232 nova_compute[250269]: 2026-01-23 09:35:42.091 250273 DEBUG nova.compute.manager [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 04:35:42 np0005593232 nova_compute[250269]: 2026-01-23 09:35:42.091 250273 DEBUG nova.network.neutron [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:35:42 np0005593232 nova_compute[250269]: 2026-01-23 09:35:42.112 250273 INFO nova.virt.libvirt.driver [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:35:42 np0005593232 nova_compute[250269]: 2026-01-23 09:35:42.164 250273 DEBUG nova.compute.manager [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:35:42 np0005593232 podman[270860]: 2026-01-23 09:35:42.170205674 +0000 UTC m=+1.698847137 container remove e84e068d806804961cc76b6934b8206474354c8cb79b67a0b227cdcb4ebc1780 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 23 04:35:42 np0005593232 systemd[1]: libpod-conmon-e84e068d806804961cc76b6934b8206474354c8cb79b67a0b227cdcb4ebc1780.scope: Deactivated successfully.
Jan 23 04:35:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:35:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:42.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:35:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:35:42 np0005593232 nova_compute[250269]: 2026-01-23 09:35:42.286 250273 DEBUG nova.compute.manager [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:35:42 np0005593232 nova_compute[250269]: 2026-01-23 09:35:42.287 250273 DEBUG nova.virt.libvirt.driver [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:35:42 np0005593232 nova_compute[250269]: 2026-01-23 09:35:42.287 250273 INFO nova.virt.libvirt.driver [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Creating image(s)#033[00m
Jan 23 04:35:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:35:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:35:42 np0005593232 nova_compute[250269]: 2026-01-23 09:35:42.313 250273 DEBUG nova.storage.rbd_utils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] rbd image 29ef4211-b5dc-4b2e-a7ec-94232e37ffde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:35:42 np0005593232 nova_compute[250269]: 2026-01-23 09:35:42.337 250273 DEBUG nova.storage.rbd_utils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] rbd image 29ef4211-b5dc-4b2e-a7ec-94232e37ffde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:35:42 np0005593232 nova_compute[250269]: 2026-01-23 09:35:42.363 250273 DEBUG nova.storage.rbd_utils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] rbd image 29ef4211-b5dc-4b2e-a7ec-94232e37ffde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:35:42 np0005593232 nova_compute[250269]: 2026-01-23 09:35:42.367 250273 DEBUG oslo_concurrency.processutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:35:42 np0005593232 nova_compute[250269]: 2026-01-23 09:35:42.429 250273 DEBUG oslo_concurrency.processutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:35:42 np0005593232 nova_compute[250269]: 2026-01-23 09:35:42.430 250273 DEBUG oslo_concurrency.lockutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:35:42 np0005593232 nova_compute[250269]: 2026-01-23 09:35:42.430 250273 DEBUG oslo_concurrency.lockutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:35:42 np0005593232 nova_compute[250269]: 2026-01-23 09:35:42.431 250273 DEBUG oslo_concurrency.lockutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:35:42 np0005593232 nova_compute[250269]: 2026-01-23 09:35:42.454 250273 DEBUG nova.storage.rbd_utils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] rbd image 29ef4211-b5dc-4b2e-a7ec-94232e37ffde_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:35:42 np0005593232 nova_compute[250269]: 2026-01-23 09:35:42.457 250273 DEBUG oslo_concurrency.processutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 29ef4211-b5dc-4b2e-a7ec-94232e37ffde_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:35:42 np0005593232 nova_compute[250269]: 2026-01-23 09:35:42.580 250273 DEBUG nova.network.neutron [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 23 04:35:42 np0005593232 nova_compute[250269]: 2026-01-23 09:35:42.581 250273 DEBUG nova.compute.manager [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:35:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:35:42.590 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:35:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:35:42.591 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:35:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:35:42.592 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:35:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:35:42 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 9c656946-029a-4f92-9f33-69a5ac79473b does not exist
Jan 23 04:35:42 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 849954a5-3bbc-476a-b7cb-cdb18f07eedc does not exist
Jan 23 04:35:42 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 03f03b10-4a05-4fce-ab4f-e4c5e4a0d580 does not exist
Jan 23 04:35:43 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:35:43 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:35:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1244: 321 pgs: 321 active+clean; 88 MiB data, 354 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.228 250273 DEBUG oslo_concurrency.processutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 29ef4211-b5dc-4b2e-a7ec-94232e37ffde_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.772s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.300 250273 DEBUG nova.storage.rbd_utils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] resizing rbd image 29ef4211-b5dc-4b2e-a7ec-94232e37ffde_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.417 250273 DEBUG nova.objects.instance [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Lazy-loading 'migration_context' on Instance uuid 29ef4211-b5dc-4b2e-a7ec-94232e37ffde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.439 250273 DEBUG nova.virt.libvirt.driver [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.440 250273 DEBUG nova.virt.libvirt.driver [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Ensure instance console log exists: /var/lib/nova/instances/29ef4211-b5dc-4b2e-a7ec-94232e37ffde/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.441 250273 DEBUG oslo_concurrency.lockutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.441 250273 DEBUG oslo_concurrency.lockutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.441 250273 DEBUG oslo_concurrency.lockutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.444 250273 DEBUG nova.virt.libvirt.driver [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.449 250273 WARNING nova.virt.libvirt.driver [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.455 250273 DEBUG nova.virt.libvirt.host [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.455 250273 DEBUG nova.virt.libvirt.host [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.459 250273 DEBUG nova.virt.libvirt.host [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.460 250273 DEBUG nova.virt.libvirt.host [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.462 250273 DEBUG nova.virt.libvirt.driver [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.462 250273 DEBUG nova.virt.hardware [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.463 250273 DEBUG nova.virt.hardware [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.463 250273 DEBUG nova.virt.hardware [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.463 250273 DEBUG nova.virt.hardware [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.463 250273 DEBUG nova.virt.hardware [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.464 250273 DEBUG nova.virt.hardware [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.464 250273 DEBUG nova.virt.hardware [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.464 250273 DEBUG nova.virt.hardware [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.464 250273 DEBUG nova.virt.hardware [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.464 250273 DEBUG nova.virt.hardware [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.465 250273 DEBUG nova.virt.hardware [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.467 250273 DEBUG oslo_concurrency.processutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:35:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.625 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:35:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:35:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:43.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:35:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:35:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1826771537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.912 250273 DEBUG oslo_concurrency.processutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.940 250273 DEBUG nova.storage.rbd_utils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] rbd image 29ef4211-b5dc-4b2e-a7ec-94232e37ffde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 04:35:43 np0005593232 nova_compute[250269]: 2026-01-23 09:35:43.944 250273 DEBUG oslo_concurrency.processutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:35:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:44.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:35:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1519958115' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:35:44 np0005593232 nova_compute[250269]: 2026-01-23 09:35:44.379 250273 DEBUG oslo_concurrency.processutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:35:44 np0005593232 nova_compute[250269]: 2026-01-23 09:35:44.381 250273 DEBUG nova.objects.instance [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Lazy-loading 'pci_devices' on Instance uuid 29ef4211-b5dc-4b2e-a7ec-94232e37ffde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 04:35:44 np0005593232 nova_compute[250269]: 2026-01-23 09:35:44.401 250273 DEBUG nova.virt.libvirt.driver [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:35:44 np0005593232 nova_compute[250269]:  <uuid>29ef4211-b5dc-4b2e-a7ec-94232e37ffde</uuid>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:  <name>instance-00000017</name>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <nova:name>tempest-TenantUsagesTestJSON-server-1468990267</nova:name>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:35:43</nova:creationTime>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:35:44 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:        <nova:user uuid="e9c891dd457e49108b23ec021ef1c519">tempest-TenantUsagesTestJSON-1592711133-project-member</nova:user>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:        <nova:project uuid="9863673018e54c29b0920027d7711f24">tempest-TenantUsagesTestJSON-1592711133</nova:project>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <nova:ports/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <entry name="serial">29ef4211-b5dc-4b2e-a7ec-94232e37ffde</entry>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <entry name="uuid">29ef4211-b5dc-4b2e-a7ec-94232e37ffde</entry>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/29ef4211-b5dc-4b2e-a7ec-94232e37ffde_disk">
Jan 23 04:35:44 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:35:44 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/29ef4211-b5dc-4b2e-a7ec-94232e37ffde_disk.config">
Jan 23 04:35:44 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:35:44 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/29ef4211-b5dc-4b2e-a7ec-94232e37ffde/console.log" append="off"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:35:44 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:35:44 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:35:44 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:35:44 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 23 04:35:44 np0005593232 nova_compute[250269]: 2026-01-23 09:35:44.512 250273 DEBUG nova.virt.libvirt.driver [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 04:35:44 np0005593232 nova_compute[250269]: 2026-01-23 09:35:44.512 250273 DEBUG nova.virt.libvirt.driver [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 04:35:44 np0005593232 nova_compute[250269]: 2026-01-23 09:35:44.516 250273 INFO nova.virt.libvirt.driver [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Using config drive
Jan 23 04:35:44 np0005593232 nova_compute[250269]: 2026-01-23 09:35:44.542 250273 DEBUG nova.storage.rbd_utils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] rbd image 29ef4211-b5dc-4b2e-a7ec-94232e37ffde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 04:35:44 np0005593232 nova_compute[250269]: 2026-01-23 09:35:44.761 250273 INFO nova.virt.libvirt.driver [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Creating config drive at /var/lib/nova/instances/29ef4211-b5dc-4b2e-a7ec-94232e37ffde/disk.config
Jan 23 04:35:44 np0005593232 nova_compute[250269]: 2026-01-23 09:35:44.765 250273 DEBUG oslo_concurrency.processutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29ef4211-b5dc-4b2e-a7ec-94232e37ffde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp41km_96n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:35:44 np0005593232 nova_compute[250269]: 2026-01-23 09:35:44.896 250273 DEBUG oslo_concurrency.processutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29ef4211-b5dc-4b2e-a7ec-94232e37ffde/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp41km_96n" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:35:44 np0005593232 nova_compute[250269]: 2026-01-23 09:35:44.932 250273 DEBUG nova.storage.rbd_utils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] rbd image 29ef4211-b5dc-4b2e-a7ec-94232e37ffde_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 04:35:44 np0005593232 nova_compute[250269]: 2026-01-23 09:35:44.937 250273 DEBUG oslo_concurrency.processutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29ef4211-b5dc-4b2e-a7ec-94232e37ffde/disk.config 29ef4211-b5dc-4b2e-a7ec-94232e37ffde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:35:44 np0005593232 nova_compute[250269]: 2026-01-23 09:35:44.959 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:35:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1245: 321 pgs: 321 active+clean; 88 MiB data, 354 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 23 04:35:45 np0005593232 nova_compute[250269]: 2026-01-23 09:35:45.675 250273 DEBUG oslo_concurrency.processutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29ef4211-b5dc-4b2e-a7ec-94232e37ffde/disk.config 29ef4211-b5dc-4b2e-a7ec-94232e37ffde_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.737s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:35:45 np0005593232 nova_compute[250269]: 2026-01-23 09:35:45.677 250273 INFO nova.virt.libvirt.driver [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Deleting local config drive /var/lib/nova/instances/29ef4211-b5dc-4b2e-a7ec-94232e37ffde/disk.config because it was imported into RBD.
Jan 23 04:35:45 np0005593232 systemd-machined[215836]: New machine qemu-10-instance-00000017.
Jan 23 04:35:45 np0005593232 systemd[1]: Started Virtual Machine qemu-10-instance-00000017.
Jan 23 04:35:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:35:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:45.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:35:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:46.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:46 np0005593232 nova_compute[250269]: 2026-01-23 09:35:46.357 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160946.3572335, 29ef4211-b5dc-4b2e-a7ec-94232e37ffde => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 04:35:46 np0005593232 nova_compute[250269]: 2026-01-23 09:35:46.358 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] VM Resumed (Lifecycle Event)
Jan 23 04:35:46 np0005593232 nova_compute[250269]: 2026-01-23 09:35:46.361 250273 DEBUG nova.compute.manager [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 23 04:35:46 np0005593232 nova_compute[250269]: 2026-01-23 09:35:46.362 250273 DEBUG nova.virt.libvirt.driver [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 23 04:35:46 np0005593232 nova_compute[250269]: 2026-01-23 09:35:46.365 250273 INFO nova.virt.libvirt.driver [-] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Instance spawned successfully.
Jan 23 04:35:46 np0005593232 nova_compute[250269]: 2026-01-23 09:35:46.365 250273 DEBUG nova.virt.libvirt.driver [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 23 04:35:46 np0005593232 nova_compute[250269]: 2026-01-23 09:35:46.479 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 04:35:46 np0005593232 nova_compute[250269]: 2026-01-23 09:35:46.484 250273 DEBUG nova.virt.libvirt.driver [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 04:35:46 np0005593232 nova_compute[250269]: 2026-01-23 09:35:46.484 250273 DEBUG nova.virt.libvirt.driver [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 04:35:46 np0005593232 nova_compute[250269]: 2026-01-23 09:35:46.485 250273 DEBUG nova.virt.libvirt.driver [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 04:35:46 np0005593232 nova_compute[250269]: 2026-01-23 09:35:46.485 250273 DEBUG nova.virt.libvirt.driver [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 04:35:46 np0005593232 nova_compute[250269]: 2026-01-23 09:35:46.486 250273 DEBUG nova.virt.libvirt.driver [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 04:35:46 np0005593232 nova_compute[250269]: 2026-01-23 09:35:46.486 250273 DEBUG nova.virt.libvirt.driver [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 04:35:46 np0005593232 nova_compute[250269]: 2026-01-23 09:35:46.489 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 04:35:46 np0005593232 nova_compute[250269]: 2026-01-23 09:35:46.524 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 04:35:46 np0005593232 nova_compute[250269]: 2026-01-23 09:35:46.525 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160946.3581927, 29ef4211-b5dc-4b2e-a7ec-94232e37ffde => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 04:35:46 np0005593232 nova_compute[250269]: 2026-01-23 09:35:46.525 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] VM Started (Lifecycle Event)#033[00m
Jan 23 04:35:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:35:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:35:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:35:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:35:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009960101433091383 of space, bias 1.0, pg target 0.29880304299274146 quantized to 32 (current 32)
Jan 23 04:35:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:35:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 04:35:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:35:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:35:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:35:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 04:35:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:35:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:35:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:35:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:35:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:35:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:35:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:35:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:35:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:35:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:35:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:35:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:35:46 np0005593232 nova_compute[250269]: 2026-01-23 09:35:46.601 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:35:46 np0005593232 nova_compute[250269]: 2026-01-23 09:35:46.605 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:35:46 np0005593232 nova_compute[250269]: 2026-01-23 09:35:46.797 250273 INFO nova.compute.manager [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Took 4.51 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:35:46 np0005593232 nova_compute[250269]: 2026-01-23 09:35:46.800 250273 DEBUG nova.compute.manager [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:35:46 np0005593232 nova_compute[250269]: 2026-01-23 09:35:46.880 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:35:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1246: 321 pgs: 321 active+clean; 82 MiB data, 357 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 149 op/s
Jan 23 04:35:47 np0005593232 nova_compute[250269]: 2026-01-23 09:35:47.257 250273 INFO nova.compute.manager [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Took 5.97 seconds to build instance.#033[00m
Jan 23 04:35:47 np0005593232 nova_compute[250269]: 2026-01-23 09:35:47.572 250273 DEBUG oslo_concurrency.lockutils [None req-041d18bf-29a2-4ff6-bb7c-dc2ee9b15eec e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Lock "29ef4211-b5dc-4b2e-a7ec-94232e37ffde" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.378s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:35:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:47.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:35:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:48.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:35:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:35:48 np0005593232 nova_compute[250269]: 2026-01-23 09:35:48.627 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:35:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1247: 321 pgs: 321 active+clean; 88 MiB data, 356 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.2 MiB/s wr, 187 op/s
Jan 23 04:35:49 np0005593232 nova_compute[250269]: 2026-01-23 09:35:49.688 250273 DEBUG oslo_concurrency.lockutils [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Acquiring lock "29ef4211-b5dc-4b2e-a7ec-94232e37ffde" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:35:49 np0005593232 nova_compute[250269]: 2026-01-23 09:35:49.689 250273 DEBUG oslo_concurrency.lockutils [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Lock "29ef4211-b5dc-4b2e-a7ec-94232e37ffde" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:35:49 np0005593232 nova_compute[250269]: 2026-01-23 09:35:49.689 250273 DEBUG oslo_concurrency.lockutils [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Acquiring lock "29ef4211-b5dc-4b2e-a7ec-94232e37ffde-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:35:49 np0005593232 nova_compute[250269]: 2026-01-23 09:35:49.689 250273 DEBUG oslo_concurrency.lockutils [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Lock "29ef4211-b5dc-4b2e-a7ec-94232e37ffde-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:35:49 np0005593232 nova_compute[250269]: 2026-01-23 09:35:49.689 250273 DEBUG oslo_concurrency.lockutils [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Lock "29ef4211-b5dc-4b2e-a7ec-94232e37ffde-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:35:49 np0005593232 nova_compute[250269]: 2026-01-23 09:35:49.690 250273 INFO nova.compute.manager [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Terminating instance#033[00m
Jan 23 04:35:49 np0005593232 nova_compute[250269]: 2026-01-23 09:35:49.691 250273 DEBUG oslo_concurrency.lockutils [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Acquiring lock "refresh_cache-29ef4211-b5dc-4b2e-a7ec-94232e37ffde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:35:49 np0005593232 nova_compute[250269]: 2026-01-23 09:35:49.691 250273 DEBUG oslo_concurrency.lockutils [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Acquired lock "refresh_cache-29ef4211-b5dc-4b2e-a7ec-94232e37ffde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:35:49 np0005593232 nova_compute[250269]: 2026-01-23 09:35:49.691 250273 DEBUG nova.network.neutron [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:35:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:49.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:49 np0005593232 nova_compute[250269]: 2026-01-23 09:35:49.963 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:35:50 np0005593232 nova_compute[250269]: 2026-01-23 09:35:50.013 250273 DEBUG nova.network.neutron [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:35:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:35:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:50.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:35:50 np0005593232 nova_compute[250269]: 2026-01-23 09:35:50.697 250273 DEBUG nova.network.neutron [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:35:50 np0005593232 nova_compute[250269]: 2026-01-23 09:35:50.717 250273 DEBUG oslo_concurrency.lockutils [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Releasing lock "refresh_cache-29ef4211-b5dc-4b2e-a7ec-94232e37ffde" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:35:50 np0005593232 nova_compute[250269]: 2026-01-23 09:35:50.718 250273 DEBUG nova.compute.manager [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 04:35:50 np0005593232 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000017.scope: Deactivated successfully.
Jan 23 04:35:50 np0005593232 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000017.scope: Consumed 5.118s CPU time.
Jan 23 04:35:50 np0005593232 systemd-machined[215836]: Machine qemu-10-instance-00000017 terminated.
Jan 23 04:35:50 np0005593232 nova_compute[250269]: 2026-01-23 09:35:50.939 250273 INFO nova.virt.libvirt.driver [-] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Instance destroyed successfully.#033[00m
Jan 23 04:35:50 np0005593232 nova_compute[250269]: 2026-01-23 09:35:50.940 250273 DEBUG nova.objects.instance [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Lazy-loading 'resources' on Instance uuid 29ef4211-b5dc-4b2e-a7ec-94232e37ffde obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:35:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1248: 321 pgs: 321 active+clean; 88 MiB data, 356 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.2 MiB/s wr, 175 op/s
Jan 23 04:35:51 np0005593232 nova_compute[250269]: 2026-01-23 09:35:51.535 250273 INFO nova.virt.libvirt.driver [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Deleting instance files /var/lib/nova/instances/29ef4211-b5dc-4b2e-a7ec-94232e37ffde_del#033[00m
Jan 23 04:35:51 np0005593232 nova_compute[250269]: 2026-01-23 09:35:51.536 250273 INFO nova.virt.libvirt.driver [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Deletion of /var/lib/nova/instances/29ef4211-b5dc-4b2e-a7ec-94232e37ffde_del complete#033[00m
Jan 23 04:35:51 np0005593232 nova_compute[250269]: 2026-01-23 09:35:51.593 250273 INFO nova.compute.manager [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Took 0.87 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 04:35:51 np0005593232 nova_compute[250269]: 2026-01-23 09:35:51.594 250273 DEBUG oslo.service.loopingcall [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 04:35:51 np0005593232 nova_compute[250269]: 2026-01-23 09:35:51.594 250273 DEBUG nova.compute.manager [-] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 04:35:51 np0005593232 nova_compute[250269]: 2026-01-23 09:35:51.594 250273 DEBUG nova.network.neutron [-] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 04:35:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:51.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:51 np0005593232 nova_compute[250269]: 2026-01-23 09:35:51.938 250273 DEBUG nova.network.neutron [-] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:35:51 np0005593232 nova_compute[250269]: 2026-01-23 09:35:51.957 250273 DEBUG nova.network.neutron [-] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:35:51 np0005593232 nova_compute[250269]: 2026-01-23 09:35:51.973 250273 INFO nova.compute.manager [-] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Took 0.38 seconds to deallocate network for instance.#033[00m
Jan 23 04:35:52 np0005593232 nova_compute[250269]: 2026-01-23 09:35:52.018 250273 DEBUG oslo_concurrency.lockutils [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:35:52 np0005593232 nova_compute[250269]: 2026-01-23 09:35:52.019 250273 DEBUG oslo_concurrency.lockutils [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:35:52 np0005593232 nova_compute[250269]: 2026-01-23 09:35:52.081 250273 DEBUG oslo_concurrency.processutils [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:35:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:52.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:35:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1481134942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:35:52 np0005593232 nova_compute[250269]: 2026-01-23 09:35:52.518 250273 DEBUG oslo_concurrency.processutils [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:35:52 np0005593232 nova_compute[250269]: 2026-01-23 09:35:52.526 250273 DEBUG nova.compute.provider_tree [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:35:52 np0005593232 nova_compute[250269]: 2026-01-23 09:35:52.555 250273 DEBUG nova.scheduler.client.report [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:35:52 np0005593232 nova_compute[250269]: 2026-01-23 09:35:52.596 250273 DEBUG oslo_concurrency.lockutils [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:35:52 np0005593232 nova_compute[250269]: 2026-01-23 09:35:52.624 250273 INFO nova.scheduler.client.report [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Deleted allocations for instance 29ef4211-b5dc-4b2e-a7ec-94232e37ffde#033[00m
Jan 23 04:35:52 np0005593232 nova_compute[250269]: 2026-01-23 09:35:52.709 250273 DEBUG oslo_concurrency.lockutils [None req-fca29793-2924-48ed-ad80-d9c5588a28b9 e9c891dd457e49108b23ec021ef1c519 9863673018e54c29b0920027d7711f24 - - default default] Lock "29ef4211-b5dc-4b2e-a7ec-94232e37ffde" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:35:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1249: 321 pgs: 321 active+clean; 41 MiB data, 336 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 237 op/s
Jan 23 04:35:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:35:53 np0005593232 nova_compute[250269]: 2026-01-23 09:35:53.630 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:35:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:53.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:54.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:55 np0005593232 nova_compute[250269]: 2026-01-23 09:35:55.008 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:35:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1250: 321 pgs: 321 active+clean; 41 MiB data, 336 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 149 op/s
Jan 23 04:35:55 np0005593232 podman[271426]: 2026-01-23 09:35:55.433913335 +0000 UTC m=+0.095203038 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 23 04:35:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:55.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:56 np0005593232 nova_compute[250269]: 2026-01-23 09:35:56.130 250273 DEBUG oslo_concurrency.lockutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Acquiring lock "10d9de49-07bc-45e5-8935-d36bcbef1c0a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:35:56 np0005593232 nova_compute[250269]: 2026-01-23 09:35:56.130 250273 DEBUG oslo_concurrency.lockutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "10d9de49-07bc-45e5-8935-d36bcbef1c0a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:35:56 np0005593232 nova_compute[250269]: 2026-01-23 09:35:56.165 250273 DEBUG nova.compute.manager [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:35:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:35:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:56.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:35:56 np0005593232 nova_compute[250269]: 2026-01-23 09:35:56.259 250273 DEBUG oslo_concurrency.lockutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:35:56 np0005593232 nova_compute[250269]: 2026-01-23 09:35:56.259 250273 DEBUG oslo_concurrency.lockutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:35:56 np0005593232 nova_compute[250269]: 2026-01-23 09:35:56.269 250273 DEBUG nova.virt.hardware [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:35:56 np0005593232 nova_compute[250269]: 2026-01-23 09:35:56.270 250273 INFO nova.compute.claims [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:35:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1251: 321 pgs: 321 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 153 op/s
Jan 23 04:35:57 np0005593232 nova_compute[250269]: 2026-01-23 09:35:57.375 250273 DEBUG oslo_concurrency.processutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:35:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:35:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:57.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:35:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:35:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1166270336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.021 250273 DEBUG oslo_concurrency.processutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.028 250273 DEBUG nova.compute.provider_tree [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.051 250273 DEBUG nova.scheduler.client.report [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.078 250273 DEBUG oslo_concurrency.lockutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.079 250273 DEBUG nova.compute.manager [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.144 250273 DEBUG nova.compute.manager [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.145 250273 DEBUG nova.network.neutron [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.182 250273 INFO nova.virt.libvirt.driver [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.205 250273 DEBUG nova.compute.manager [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:35:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:35:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:35:58.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.328 250273 DEBUG nova.compute.manager [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.330 250273 DEBUG nova.virt.libvirt.driver [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.330 250273 INFO nova.virt.libvirt.driver [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Creating image(s)#033[00m
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.363 250273 DEBUG nova.storage.rbd_utils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] rbd image 10d9de49-07bc-45e5-8935-d36bcbef1c0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.400 250273 DEBUG nova.storage.rbd_utils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] rbd image 10d9de49-07bc-45e5-8935-d36bcbef1c0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.432 250273 DEBUG nova.storage.rbd_utils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] rbd image 10d9de49-07bc-45e5-8935-d36bcbef1c0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.436 250273 DEBUG oslo_concurrency.processutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.500 250273 DEBUG oslo_concurrency.processutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.501 250273 DEBUG oslo_concurrency.lockutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.502 250273 DEBUG oslo_concurrency.lockutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.502 250273 DEBUG oslo_concurrency.lockutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.530 250273 DEBUG nova.storage.rbd_utils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] rbd image 10d9de49-07bc-45e5-8935-d36bcbef1c0a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.534 250273 DEBUG oslo_concurrency.processutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 10d9de49-07bc-45e5-8935-d36bcbef1c0a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:35:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.707 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.774 250273 DEBUG nova.network.neutron [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 23 04:35:58 np0005593232 nova_compute[250269]: 2026-01-23 09:35:58.775 250273 DEBUG nova.compute.manager [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:35:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1252: 321 pgs: 321 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 859 KiB/s wr, 104 op/s
Jan 23 04:35:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:35:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:35:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:35:59.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:36:00 np0005593232 nova_compute[250269]: 2026-01-23 09:36:00.011 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:00.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1253: 321 pgs: 321 active+clean; 41 MiB data, 332 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.2 KiB/s wr, 65 op/s
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.078 250273 DEBUG oslo_concurrency.processutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 10d9de49-07bc-45e5-8935-d36bcbef1c0a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.158 250273 DEBUG nova.storage.rbd_utils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] resizing rbd image 10d9de49-07bc-45e5-8935-d36bcbef1c0a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.276 250273 DEBUG nova.objects.instance [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lazy-loading 'migration_context' on Instance uuid 10d9de49-07bc-45e5-8935-d36bcbef1c0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.293 250273 DEBUG nova.virt.libvirt.driver [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.293 250273 DEBUG nova.virt.libvirt.driver [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Ensure instance console log exists: /var/lib/nova/instances/10d9de49-07bc-45e5-8935-d36bcbef1c0a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.294 250273 DEBUG oslo_concurrency.lockutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.294 250273 DEBUG oslo_concurrency.lockutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.294 250273 DEBUG oslo_concurrency.lockutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.296 250273 DEBUG nova.virt.libvirt.driver [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.301 250273 WARNING nova.virt.libvirt.driver [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.307 250273 DEBUG nova.virt.libvirt.host [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.308 250273 DEBUG nova.virt.libvirt.host [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.311 250273 DEBUG nova.virt.libvirt.host [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.312 250273 DEBUG nova.virt.libvirt.host [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.314 250273 DEBUG nova.virt.libvirt.driver [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.314 250273 DEBUG nova.virt.hardware [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.315 250273 DEBUG nova.virt.hardware [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.316 250273 DEBUG nova.virt.hardware [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.316 250273 DEBUG nova.virt.hardware [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.316 250273 DEBUG nova.virt.hardware [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.317 250273 DEBUG nova.virt.hardware [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.317 250273 DEBUG nova.virt.hardware [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.318 250273 DEBUG nova.virt.hardware [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.318 250273 DEBUG nova.virt.hardware [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.319 250273 DEBUG nova.virt.hardware [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.319 250273 DEBUG nova.virt.hardware [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.324 250273 DEBUG oslo_concurrency.processutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:36:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:36:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1929972637' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.749 250273 DEBUG oslo_concurrency.processutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.777 250273 DEBUG nova.storage.rbd_utils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] rbd image 10d9de49-07bc-45e5-8935-d36bcbef1c0a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:36:01 np0005593232 nova_compute[250269]: 2026-01-23 09:36:01.782 250273 DEBUG oslo_concurrency.processutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:36:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:01.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:02.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:36:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1607227918' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:36:02 np0005593232 nova_compute[250269]: 2026-01-23 09:36:02.340 250273 DEBUG oslo_concurrency.processutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:36:02 np0005593232 nova_compute[250269]: 2026-01-23 09:36:02.342 250273 DEBUG nova.objects.instance [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lazy-loading 'pci_devices' on Instance uuid 10d9de49-07bc-45e5-8935-d36bcbef1c0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:36:02 np0005593232 nova_compute[250269]: 2026-01-23 09:36:02.365 250273 DEBUG nova.virt.libvirt.driver [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:36:02 np0005593232 nova_compute[250269]:  <uuid>10d9de49-07bc-45e5-8935-d36bcbef1c0a</uuid>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:  <name>instance-00000018</name>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-656982695</nova:name>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:36:01</nova:creationTime>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:36:02 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:        <nova:user uuid="3f23a08c373543188394924b6b01739b">tempest-ServersAdminNegativeTestJSON-1654129510-project-member</nova:user>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:        <nova:project uuid="8e2e23ed016b4d0f959d9af65ac543af">tempest-ServersAdminNegativeTestJSON-1654129510</nova:project>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <nova:ports/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <entry name="serial">10d9de49-07bc-45e5-8935-d36bcbef1c0a</entry>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <entry name="uuid">10d9de49-07bc-45e5-8935-d36bcbef1c0a</entry>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/10d9de49-07bc-45e5-8935-d36bcbef1c0a_disk">
Jan 23 04:36:02 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:36:02 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/10d9de49-07bc-45e5-8935-d36bcbef1c0a_disk.config">
Jan 23 04:36:02 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:36:02 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/10d9de49-07bc-45e5-8935-d36bcbef1c0a/console.log" append="off"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:36:02 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:36:02 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:36:02 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:36:02 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:36:02 np0005593232 nova_compute[250269]: 2026-01-23 09:36:02.427 250273 DEBUG nova.virt.libvirt.driver [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:36:02 np0005593232 nova_compute[250269]: 2026-01-23 09:36:02.428 250273 DEBUG nova.virt.libvirt.driver [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:36:02 np0005593232 nova_compute[250269]: 2026-01-23 09:36:02.429 250273 INFO nova.virt.libvirt.driver [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Using config drive#033[00m
Jan 23 04:36:02 np0005593232 nova_compute[250269]: 2026-01-23 09:36:02.459 250273 DEBUG nova.storage.rbd_utils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] rbd image 10d9de49-07bc-45e5-8935-d36bcbef1c0a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:36:02 np0005593232 nova_compute[250269]: 2026-01-23 09:36:02.674 250273 INFO nova.virt.libvirt.driver [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Creating config drive at /var/lib/nova/instances/10d9de49-07bc-45e5-8935-d36bcbef1c0a/disk.config#033[00m
Jan 23 04:36:02 np0005593232 nova_compute[250269]: 2026-01-23 09:36:02.684 250273 DEBUG oslo_concurrency.processutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/10d9de49-07bc-45e5-8935-d36bcbef1c0a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_9e5ngjw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:36:02 np0005593232 nova_compute[250269]: 2026-01-23 09:36:02.826 250273 DEBUG oslo_concurrency.processutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/10d9de49-07bc-45e5-8935-d36bcbef1c0a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_9e5ngjw" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:36:02 np0005593232 nova_compute[250269]: 2026-01-23 09:36:02.866 250273 DEBUG nova.storage.rbd_utils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] rbd image 10d9de49-07bc-45e5-8935-d36bcbef1c0a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:36:02 np0005593232 nova_compute[250269]: 2026-01-23 09:36:02.872 250273 DEBUG oslo_concurrency.processutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/10d9de49-07bc-45e5-8935-d36bcbef1c0a/disk.config 10d9de49-07bc-45e5-8935-d36bcbef1c0a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:36:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1254: 321 pgs: 321 active+clean; 134 MiB data, 376 MiB used, 21 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.5 MiB/s wr, 109 op/s
Jan 23 04:36:03 np0005593232 nova_compute[250269]: 2026-01-23 09:36:03.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:36:03 np0005593232 nova_compute[250269]: 2026-01-23 09:36:03.709 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:36:03 np0005593232 nova_compute[250269]: 2026-01-23 09:36:03.828 250273 DEBUG oslo_concurrency.processutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/10d9de49-07bc-45e5-8935-d36bcbef1c0a/disk.config 10d9de49-07bc-45e5-8935-d36bcbef1c0a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.956s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:36:03 np0005593232 nova_compute[250269]: 2026-01-23 09:36:03.829 250273 INFO nova.virt.libvirt.driver [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Deleting local config drive /var/lib/nova/instances/10d9de49-07bc-45e5-8935-d36bcbef1c0a/disk.config because it was imported into RBD.#033[00m
Jan 23 04:36:03 np0005593232 systemd-machined[215836]: New machine qemu-11-instance-00000018.
Jan 23 04:36:03 np0005593232 systemd[1]: Started Virtual Machine qemu-11-instance-00000018.
Jan 23 04:36:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:03.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:04.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:04 np0005593232 nova_compute[250269]: 2026-01-23 09:36:04.506 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160964.505997, 10d9de49-07bc-45e5-8935-d36bcbef1c0a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:36:04 np0005593232 nova_compute[250269]: 2026-01-23 09:36:04.506 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:36:04 np0005593232 nova_compute[250269]: 2026-01-23 09:36:04.509 250273 DEBUG nova.compute.manager [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:36:04 np0005593232 nova_compute[250269]: 2026-01-23 09:36:04.509 250273 DEBUG nova.virt.libvirt.driver [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:36:04 np0005593232 nova_compute[250269]: 2026-01-23 09:36:04.513 250273 INFO nova.virt.libvirt.driver [-] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Instance spawned successfully.#033[00m
Jan 23 04:36:04 np0005593232 nova_compute[250269]: 2026-01-23 09:36:04.513 250273 DEBUG nova.virt.libvirt.driver [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:36:05 np0005593232 nova_compute[250269]: 2026-01-23 09:36:05.014 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1255: 321 pgs: 321 active+clean; 134 MiB data, 376 MiB used, 21 GiB / 21 GiB avail; 28 KiB/s rd, 3.5 MiB/s wr, 46 op/s
Jan 23 04:36:05 np0005593232 nova_compute[250269]: 2026-01-23 09:36:05.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:36:05 np0005593232 nova_compute[250269]: 2026-01-23 09:36:05.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:36:05 np0005593232 nova_compute[250269]: 2026-01-23 09:36:05.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:36:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:05.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:05 np0005593232 nova_compute[250269]: 2026-01-23 09:36:05.937 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769160950.9371061, 29ef4211-b5dc-4b2e-a7ec-94232e37ffde => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:36:05 np0005593232 nova_compute[250269]: 2026-01-23 09:36:05.938 250273 INFO nova.compute.manager [-] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:36:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:36:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:06.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:36:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1256: 321 pgs: 321 active+clean; 134 MiB data, 378 MiB used, 21 GiB / 21 GiB avail; 644 KiB/s rd, 3.6 MiB/s wr, 87 op/s
Jan 23 04:36:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:36:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:36:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:36:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:36:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:36:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:36:07 np0005593232 nova_compute[250269]: 2026-01-23 09:36:07.232 250273 DEBUG nova.virt.libvirt.driver [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:36:07 np0005593232 nova_compute[250269]: 2026-01-23 09:36:07.232 250273 DEBUG nova.virt.libvirt.driver [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:36:07 np0005593232 nova_compute[250269]: 2026-01-23 09:36:07.232 250273 DEBUG nova.virt.libvirt.driver [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:36:07 np0005593232 nova_compute[250269]: 2026-01-23 09:36:07.233 250273 DEBUG nova.virt.libvirt.driver [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:36:07 np0005593232 nova_compute[250269]: 2026-01-23 09:36:07.233 250273 DEBUG nova.virt.libvirt.driver [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:36:07 np0005593232 nova_compute[250269]: 2026-01-23 09:36:07.234 250273 DEBUG nova.virt.libvirt.driver [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:36:07 np0005593232 nova_compute[250269]: 2026-01-23 09:36:07.306 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 23 04:36:07 np0005593232 nova_compute[250269]: 2026-01-23 09:36:07.307 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 04:36:07 np0005593232 nova_compute[250269]: 2026-01-23 09:36:07.307 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:36:07 np0005593232 nova_compute[250269]: 2026-01-23 09:36:07.328 250273 DEBUG nova.compute.manager [None req-75fbe676-6532-4f88-a060-bcad0b117499 - - - - - -] [instance: 29ef4211-b5dc-4b2e-a7ec-94232e37ffde] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:36:07 np0005593232 nova_compute[250269]: 2026-01-23 09:36:07.329 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:36:07 np0005593232 nova_compute[250269]: 2026-01-23 09:36:07.333 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:36:07 np0005593232 nova_compute[250269]: 2026-01-23 09:36:07.366 250273 INFO nova.compute.manager [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Took 9.04 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:36:07 np0005593232 nova_compute[250269]: 2026-01-23 09:36:07.366 250273 DEBUG nova.compute.manager [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:36:07 np0005593232 nova_compute[250269]: 2026-01-23 09:36:07.367 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:36:07 np0005593232 nova_compute[250269]: 2026-01-23 09:36:07.368 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160964.5089383, 10d9de49-07bc-45e5-8935-d36bcbef1c0a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:36:07 np0005593232 nova_compute[250269]: 2026-01-23 09:36:07.368 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] VM Started (Lifecycle Event)#033[00m
Jan 23 04:36:07 np0005593232 nova_compute[250269]: 2026-01-23 09:36:07.412 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:36:07 np0005593232 nova_compute[250269]: 2026-01-23 09:36:07.417 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:36:07 np0005593232 nova_compute[250269]: 2026-01-23 09:36:07.450 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:36:07 np0005593232 nova_compute[250269]: 2026-01-23 09:36:07.452 250273 INFO nova.compute.manager [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Took 11.22 seconds to build instance.#033[00m
Jan 23 04:36:07 np0005593232 nova_compute[250269]: 2026-01-23 09:36:07.471 250273 DEBUG oslo_concurrency.lockutils [None req-84b56c13-a0ba-4462-9ddf-227d613323a7 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "10d9de49-07bc-45e5-8935-d36bcbef1c0a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.341s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:36:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:07.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:08 np0005593232 ovn_controller[151001]: 2026-01-23T09:36:08Z|00080|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 23 04:36:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:08.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:08 np0005593232 nova_compute[250269]: 2026-01-23 09:36:08.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:36:08 np0005593232 podman[271826]: 2026-01-23 09:36:08.410728639 +0000 UTC m=+0.064162588 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 23 04:36:08 np0005593232 nova_compute[250269]: 2026-01-23 09:36:08.709 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:36:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1257: 321 pgs: 321 active+clean; 134 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Jan 23 04:36:09 np0005593232 nova_compute[250269]: 2026-01-23 09:36:09.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:36:09 np0005593232 nova_compute[250269]: 2026-01-23 09:36:09.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:36:09 np0005593232 nova_compute[250269]: 2026-01-23 09:36:09.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:36:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:36:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:09.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:36:10 np0005593232 nova_compute[250269]: 2026-01-23 09:36:10.017 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:10.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:10 np0005593232 nova_compute[250269]: 2026-01-23 09:36:10.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:36:10 np0005593232 nova_compute[250269]: 2026-01-23 09:36:10.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:36:10 np0005593232 nova_compute[250269]: 2026-01-23 09:36:10.359 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:36:10 np0005593232 nova_compute[250269]: 2026-01-23 09:36:10.360 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:36:10 np0005593232 nova_compute[250269]: 2026-01-23 09:36:10.360 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:36:10 np0005593232 nova_compute[250269]: 2026-01-23 09:36:10.360 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:36:10 np0005593232 nova_compute[250269]: 2026-01-23 09:36:10.361 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:36:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:36:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2011853698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:36:10 np0005593232 nova_compute[250269]: 2026-01-23 09:36:10.822 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:36:10 np0005593232 nova_compute[250269]: 2026-01-23 09:36:10.890 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:36:10 np0005593232 nova_compute[250269]: 2026-01-23 09:36:10.891 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000018 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:36:11 np0005593232 nova_compute[250269]: 2026-01-23 09:36:11.025 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:36:11 np0005593232 nova_compute[250269]: 2026-01-23 09:36:11.027 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4501MB free_disk=20.94662857055664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:36:11 np0005593232 nova_compute[250269]: 2026-01-23 09:36:11.027 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:36:11 np0005593232 nova_compute[250269]: 2026-01-23 09:36:11.027 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:36:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1258: 321 pgs: 321 active+clean; 134 MiB data, 379 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Jan 23 04:36:11 np0005593232 nova_compute[250269]: 2026-01-23 09:36:11.242 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 10d9de49-07bc-45e5-8935-d36bcbef1c0a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:36:11 np0005593232 nova_compute[250269]: 2026-01-23 09:36:11.243 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:36:11 np0005593232 nova_compute[250269]: 2026-01-23 09:36:11.243 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:36:11 np0005593232 nova_compute[250269]: 2026-01-23 09:36:11.293 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:36:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:36:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4115771718' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:36:11 np0005593232 nova_compute[250269]: 2026-01-23 09:36:11.727 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:36:11 np0005593232 nova_compute[250269]: 2026-01-23 09:36:11.733 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:36:11 np0005593232 nova_compute[250269]: 2026-01-23 09:36:11.754 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:36:11 np0005593232 nova_compute[250269]: 2026-01-23 09:36:11.774 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:36:11 np0005593232 nova_compute[250269]: 2026-01-23 09:36:11.774 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:36:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:11.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:36:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:12.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:36:12 np0005593232 nova_compute[250269]: 2026-01-23 09:36:12.580 250273 DEBUG oslo_concurrency.lockutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Acquiring lock "b6d0c9a4-fe40-4f45-be0a-2784569cfb83" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:36:12 np0005593232 nova_compute[250269]: 2026-01-23 09:36:12.581 250273 DEBUG oslo_concurrency.lockutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "b6d0c9a4-fe40-4f45-be0a-2784569cfb83" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:36:12 np0005593232 nova_compute[250269]: 2026-01-23 09:36:12.613 250273 DEBUG nova.compute.manager [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:36:12 np0005593232 nova_compute[250269]: 2026-01-23 09:36:12.724 250273 DEBUG oslo_concurrency.lockutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:36:12 np0005593232 nova_compute[250269]: 2026-01-23 09:36:12.726 250273 DEBUG oslo_concurrency.lockutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:36:12 np0005593232 nova_compute[250269]: 2026-01-23 09:36:12.737 250273 DEBUG nova.virt.hardware [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:36:12 np0005593232 nova_compute[250269]: 2026-01-23 09:36:12.738 250273 INFO nova.compute.claims [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:36:12 np0005593232 nova_compute[250269]: 2026-01-23 09:36:12.917 250273 DEBUG oslo_concurrency.processutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:36:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1259: 321 pgs: 321 active+clean; 181 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 5.3 MiB/s wr, 228 op/s
Jan 23 04:36:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:36:13 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3979342436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:36:13 np0005593232 nova_compute[250269]: 2026-01-23 09:36:13.379 250273 DEBUG oslo_concurrency.processutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:36:13 np0005593232 nova_compute[250269]: 2026-01-23 09:36:13.384 250273 DEBUG nova.compute.provider_tree [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:36:13 np0005593232 nova_compute[250269]: 2026-01-23 09:36:13.409 250273 DEBUG nova.scheduler.client.report [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:36:13 np0005593232 nova_compute[250269]: 2026-01-23 09:36:13.441 250273 DEBUG oslo_concurrency.lockutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:36:13 np0005593232 nova_compute[250269]: 2026-01-23 09:36:13.442 250273 DEBUG nova.compute.manager [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 04:36:13 np0005593232 nova_compute[250269]: 2026-01-23 09:36:13.515 250273 DEBUG nova.compute.manager [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 04:36:13 np0005593232 nova_compute[250269]: 2026-01-23 09:36:13.515 250273 DEBUG nova.network.neutron [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:36:13 np0005593232 nova_compute[250269]: 2026-01-23 09:36:13.541 250273 INFO nova.virt.libvirt.driver [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:36:13 np0005593232 nova_compute[250269]: 2026-01-23 09:36:13.572 250273 DEBUG nova.compute.manager [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:36:13 np0005593232 nova_compute[250269]: 2026-01-23 09:36:13.686 250273 DEBUG nova.compute.manager [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:36:13 np0005593232 nova_compute[250269]: 2026-01-23 09:36:13.687 250273 DEBUG nova.virt.libvirt.driver [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:36:13 np0005593232 nova_compute[250269]: 2026-01-23 09:36:13.688 250273 INFO nova.virt.libvirt.driver [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Creating image(s)#033[00m
Jan 23 04:36:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:36:13 np0005593232 nova_compute[250269]: 2026-01-23 09:36:13.749 250273 DEBUG nova.storage.rbd_utils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] rbd image b6d0c9a4-fe40-4f45-be0a-2784569cfb83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:36:13 np0005593232 nova_compute[250269]: 2026-01-23 09:36:13.780 250273 DEBUG nova.storage.rbd_utils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] rbd image b6d0c9a4-fe40-4f45-be0a-2784569cfb83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:36:13 np0005593232 nova_compute[250269]: 2026-01-23 09:36:13.811 250273 DEBUG nova.storage.rbd_utils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] rbd image b6d0c9a4-fe40-4f45-be0a-2784569cfb83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:36:13 np0005593232 nova_compute[250269]: 2026-01-23 09:36:13.818 250273 DEBUG oslo_concurrency.processutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:36:13 np0005593232 nova_compute[250269]: 2026-01-23 09:36:13.847 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:13 np0005593232 nova_compute[250269]: 2026-01-23 09:36:13.884 250273 DEBUG oslo_concurrency.processutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:36:13 np0005593232 nova_compute[250269]: 2026-01-23 09:36:13.886 250273 DEBUG oslo_concurrency.lockutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:36:13 np0005593232 nova_compute[250269]: 2026-01-23 09:36:13.888 250273 DEBUG oslo_concurrency.lockutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:36:13 np0005593232 nova_compute[250269]: 2026-01-23 09:36:13.889 250273 DEBUG oslo_concurrency.lockutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:36:13 np0005593232 nova_compute[250269]: 2026-01-23 09:36:13.923 250273 DEBUG nova.storage.rbd_utils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] rbd image b6d0c9a4-fe40-4f45-be0a-2784569cfb83_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:36:13 np0005593232 nova_compute[250269]: 2026-01-23 09:36:13.928 250273 DEBUG oslo_concurrency.processutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 b6d0c9a4-fe40-4f45-be0a-2784569cfb83_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:36:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:13.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.063 250273 DEBUG nova.network.neutron [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.065 250273 DEBUG nova.compute.manager [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:36:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:14.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.284 250273 DEBUG oslo_concurrency.processutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 b6d0c9a4-fe40-4f45-be0a-2784569cfb83_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.365 250273 DEBUG nova.storage.rbd_utils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] resizing rbd image b6d0c9a4-fe40-4f45-be0a-2784569cfb83_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.485 250273 DEBUG nova.objects.instance [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lazy-loading 'migration_context' on Instance uuid b6d0c9a4-fe40-4f45-be0a-2784569cfb83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.505 250273 DEBUG nova.virt.libvirt.driver [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.505 250273 DEBUG nova.virt.libvirt.driver [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Ensure instance console log exists: /var/lib/nova/instances/b6d0c9a4-fe40-4f45-be0a-2784569cfb83/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.506 250273 DEBUG oslo_concurrency.lockutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.507 250273 DEBUG oslo_concurrency.lockutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.507 250273 DEBUG oslo_concurrency.lockutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.509 250273 DEBUG nova.virt.libvirt.driver [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.515 250273 WARNING nova.virt.libvirt.driver [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.521 250273 DEBUG nova.virt.libvirt.host [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.522 250273 DEBUG nova.virt.libvirt.host [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.525 250273 DEBUG nova.virt.libvirt.host [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.526 250273 DEBUG nova.virt.libvirt.host [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.527 250273 DEBUG nova.virt.libvirt.driver [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.528 250273 DEBUG nova.virt.hardware [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.528 250273 DEBUG nova.virt.hardware [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.529 250273 DEBUG nova.virt.hardware [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.529 250273 DEBUG nova.virt.hardware [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.529 250273 DEBUG nova.virt.hardware [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.530 250273 DEBUG nova.virt.hardware [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.530 250273 DEBUG nova.virt.hardware [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.530 250273 DEBUG nova.virt.hardware [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.531 250273 DEBUG nova.virt.hardware [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.531 250273 DEBUG nova.virt.hardware [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.531 250273 DEBUG nova.virt.hardware [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.534 250273 DEBUG oslo_concurrency.processutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:36:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:36:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1513477114' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:36:14 np0005593232 nova_compute[250269]: 2026-01-23 09:36:14.990 250273 DEBUG oslo_concurrency.processutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:36:15 np0005593232 nova_compute[250269]: 2026-01-23 09:36:15.014 250273 DEBUG nova.storage.rbd_utils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] rbd image b6d0c9a4-fe40-4f45-be0a-2784569cfb83_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:36:15 np0005593232 nova_compute[250269]: 2026-01-23 09:36:15.017 250273 DEBUG oslo_concurrency.processutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:36:15 np0005593232 nova_compute[250269]: 2026-01-23 09:36:15.041 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1260: 321 pgs: 321 active+clean; 181 MiB data, 400 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 185 op/s
Jan 23 04:36:15 np0005593232 nova_compute[250269]: 2026-01-23 09:36:15.535 250273 DEBUG oslo_concurrency.processutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:36:15 np0005593232 nova_compute[250269]: 2026-01-23 09:36:15.537 250273 DEBUG nova.objects.instance [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lazy-loading 'pci_devices' on Instance uuid b6d0c9a4-fe40-4f45-be0a-2784569cfb83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:36:15 np0005593232 nova_compute[250269]: 2026-01-23 09:36:15.555 250273 DEBUG nova.virt.libvirt.driver [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:36:15 np0005593232 nova_compute[250269]:  <uuid>b6d0c9a4-fe40-4f45-be0a-2784569cfb83</uuid>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:  <name>instance-0000001b</name>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-1013522132</nova:name>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:36:14</nova:creationTime>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:36:15 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:        <nova:user uuid="3f23a08c373543188394924b6b01739b">tempest-ServersAdminNegativeTestJSON-1654129510-project-member</nova:user>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:        <nova:project uuid="8e2e23ed016b4d0f959d9af65ac543af">tempest-ServersAdminNegativeTestJSON-1654129510</nova:project>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <nova:ports/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <entry name="serial">b6d0c9a4-fe40-4f45-be0a-2784569cfb83</entry>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <entry name="uuid">b6d0c9a4-fe40-4f45-be0a-2784569cfb83</entry>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/b6d0c9a4-fe40-4f45-be0a-2784569cfb83_disk">
Jan 23 04:36:15 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:36:15 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/b6d0c9a4-fe40-4f45-be0a-2784569cfb83_disk.config">
Jan 23 04:36:15 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:36:15 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/b6d0c9a4-fe40-4f45-be0a-2784569cfb83/console.log" append="off"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:36:15 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:36:15 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:36:15 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:36:15 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:36:15 np0005593232 nova_compute[250269]: 2026-01-23 09:36:15.607 250273 DEBUG nova.virt.libvirt.driver [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:36:15 np0005593232 nova_compute[250269]: 2026-01-23 09:36:15.608 250273 DEBUG nova.virt.libvirt.driver [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:36:15 np0005593232 nova_compute[250269]: 2026-01-23 09:36:15.609 250273 INFO nova.virt.libvirt.driver [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Using config drive#033[00m
Jan 23 04:36:15 np0005593232 nova_compute[250269]: 2026-01-23 09:36:15.635 250273 DEBUG nova.storage.rbd_utils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] rbd image b6d0c9a4-fe40-4f45-be0a-2784569cfb83_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:36:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:15.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:16 np0005593232 nova_compute[250269]: 2026-01-23 09:36:16.037 250273 INFO nova.virt.libvirt.driver [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Creating config drive at /var/lib/nova/instances/b6d0c9a4-fe40-4f45-be0a-2784569cfb83/disk.config#033[00m
Jan 23 04:36:16 np0005593232 nova_compute[250269]: 2026-01-23 09:36:16.046 250273 DEBUG oslo_concurrency.processutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b6d0c9a4-fe40-4f45-be0a-2784569cfb83/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgpjoq4e3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:36:16 np0005593232 nova_compute[250269]: 2026-01-23 09:36:16.179 250273 DEBUG oslo_concurrency.processutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b6d0c9a4-fe40-4f45-be0a-2784569cfb83/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgpjoq4e3" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:36:16 np0005593232 nova_compute[250269]: 2026-01-23 09:36:16.211 250273 DEBUG nova.storage.rbd_utils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] rbd image b6d0c9a4-fe40-4f45-be0a-2784569cfb83_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:36:16 np0005593232 nova_compute[250269]: 2026-01-23 09:36:16.216 250273 DEBUG oslo_concurrency.processutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b6d0c9a4-fe40-4f45-be0a-2784569cfb83/disk.config b6d0c9a4-fe40-4f45-be0a-2784569cfb83_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:36:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:16.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:16 np0005593232 nova_compute[250269]: 2026-01-23 09:36:16.401 250273 DEBUG oslo_concurrency.processutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b6d0c9a4-fe40-4f45-be0a-2784569cfb83/disk.config b6d0c9a4-fe40-4f45-be0a-2784569cfb83_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:36:16 np0005593232 nova_compute[250269]: 2026-01-23 09:36:16.402 250273 INFO nova.virt.libvirt.driver [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Deleting local config drive /var/lib/nova/instances/b6d0c9a4-fe40-4f45-be0a-2784569cfb83/disk.config because it was imported into RBD.#033[00m
Jan 23 04:36:16 np0005593232 systemd-machined[215836]: New machine qemu-12-instance-0000001b.
Jan 23 04:36:16 np0005593232 systemd[1]: Started Virtual Machine qemu-12-instance-0000001b.
Jan 23 04:36:16 np0005593232 nova_compute[250269]: 2026-01-23 09:36:16.940 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160976.9401162, b6d0c9a4-fe40-4f45-be0a-2784569cfb83 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:36:16 np0005593232 nova_compute[250269]: 2026-01-23 09:36:16.941 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:36:16 np0005593232 nova_compute[250269]: 2026-01-23 09:36:16.944 250273 DEBUG nova.compute.manager [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:36:16 np0005593232 nova_compute[250269]: 2026-01-23 09:36:16.944 250273 DEBUG nova.virt.libvirt.driver [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:36:16 np0005593232 nova_compute[250269]: 2026-01-23 09:36:16.949 250273 INFO nova.virt.libvirt.driver [-] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Instance spawned successfully.#033[00m
Jan 23 04:36:16 np0005593232 nova_compute[250269]: 2026-01-23 09:36:16.950 250273 DEBUG nova.virt.libvirt.driver [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:36:16 np0005593232 nova_compute[250269]: 2026-01-23 09:36:16.979 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:36:16 np0005593232 nova_compute[250269]: 2026-01-23 09:36:16.986 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:36:16 np0005593232 nova_compute[250269]: 2026-01-23 09:36:16.989 250273 DEBUG nova.virt.libvirt.driver [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:36:16 np0005593232 nova_compute[250269]: 2026-01-23 09:36:16.990 250273 DEBUG nova.virt.libvirt.driver [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:36:16 np0005593232 nova_compute[250269]: 2026-01-23 09:36:16.990 250273 DEBUG nova.virt.libvirt.driver [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:36:16 np0005593232 nova_compute[250269]: 2026-01-23 09:36:16.990 250273 DEBUG nova.virt.libvirt.driver [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:36:16 np0005593232 nova_compute[250269]: 2026-01-23 09:36:16.991 250273 DEBUG nova.virt.libvirt.driver [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:36:16 np0005593232 nova_compute[250269]: 2026-01-23 09:36:16.991 250273 DEBUG nova.virt.libvirt.driver [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:36:17 np0005593232 nova_compute[250269]: 2026-01-23 09:36:17.021 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:36:17 np0005593232 nova_compute[250269]: 2026-01-23 09:36:17.022 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160976.9410665, b6d0c9a4-fe40-4f45-be0a-2784569cfb83 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:36:17 np0005593232 nova_compute[250269]: 2026-01-23 09:36:17.022 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] VM Started (Lifecycle Event)#033[00m
Jan 23 04:36:17 np0005593232 nova_compute[250269]: 2026-01-23 09:36:17.053 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:36:17 np0005593232 nova_compute[250269]: 2026-01-23 09:36:17.057 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:36:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1261: 321 pgs: 321 active+clean; 214 MiB data, 419 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.4 MiB/s wr, 213 op/s
Jan 23 04:36:17 np0005593232 nova_compute[250269]: 2026-01-23 09:36:17.064 250273 INFO nova.compute.manager [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Took 3.38 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:36:17 np0005593232 nova_compute[250269]: 2026-01-23 09:36:17.065 250273 DEBUG nova.compute.manager [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:36:17 np0005593232 nova_compute[250269]: 2026-01-23 09:36:17.077 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:36:17 np0005593232 nova_compute[250269]: 2026-01-23 09:36:17.141 250273 INFO nova.compute.manager [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Took 4.45 seconds to build instance.#033[00m
Jan 23 04:36:17 np0005593232 nova_compute[250269]: 2026-01-23 09:36:17.195 250273 DEBUG oslo_concurrency.lockutils [None req-a409c6d8-126e-4913-8ef9-87e3eb3991c5 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "b6d0c9a4-fe40-4f45-be0a-2784569cfb83" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:36:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:17.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:18 np0005593232 nova_compute[250269]: 2026-01-23 09:36:18.006 250273 DEBUG nova.objects.instance [None req-f185f4e5-e9cb-4e2e-9126-e5b4ad3004a5 3e83d7614f5b435a85a8c70e9aae698b b5988c78db6044d9828d2df5aeeb92ac - - default default] Lazy-loading 'pci_devices' on Instance uuid b6d0c9a4-fe40-4f45-be0a-2784569cfb83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:36:18 np0005593232 nova_compute[250269]: 2026-01-23 09:36:18.043 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160978.0430722, b6d0c9a4-fe40-4f45-be0a-2784569cfb83 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:36:18 np0005593232 nova_compute[250269]: 2026-01-23 09:36:18.044 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:36:18 np0005593232 nova_compute[250269]: 2026-01-23 09:36:18.083 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:36:18 np0005593232 nova_compute[250269]: 2026-01-23 09:36:18.093 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:36:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:18.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:18 np0005593232 nova_compute[250269]: 2026-01-23 09:36:18.278 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Jan 23 04:36:18 np0005593232 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Jan 23 04:36:18 np0005593232 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000001b.scope: Consumed 1.538s CPU time.
Jan 23 04:36:18 np0005593232 systemd-machined[215836]: Machine qemu-12-instance-0000001b terminated.
Jan 23 04:36:18 np0005593232 nova_compute[250269]: 2026-01-23 09:36:18.482 250273 DEBUG nova.compute.manager [None req-f185f4e5-e9cb-4e2e-9126-e5b4ad3004a5 3e83d7614f5b435a85a8c70e9aae698b b5988c78db6044d9828d2df5aeeb92ac - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:36:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:36:18 np0005593232 nova_compute[250269]: 2026-01-23 09:36:18.773 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1262: 321 pgs: 321 active+clean; 246 MiB data, 445 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.5 MiB/s wr, 217 op/s
Jan 23 04:36:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:19.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:20 np0005593232 nova_compute[250269]: 2026-01-23 09:36:20.083 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:20.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1263: 321 pgs: 321 active+clean; 246 MiB data, 445 MiB used, 21 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.5 MiB/s wr, 173 op/s
Jan 23 04:36:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:21.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:22.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1264: 321 pgs: 321 active+clean; 268 MiB data, 447 MiB used, 21 GiB / 21 GiB avail; 3.9 MiB/s rd, 6.5 MiB/s wr, 272 op/s
Jan 23 04:36:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:36:23 np0005593232 nova_compute[250269]: 2026-01-23 09:36:23.825 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:23.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:24.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:24 np0005593232 nova_compute[250269]: 2026-01-23 09:36:24.506 250273 DEBUG oslo_concurrency.lockutils [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Acquiring lock "b6d0c9a4-fe40-4f45-be0a-2784569cfb83" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:36:24 np0005593232 nova_compute[250269]: 2026-01-23 09:36:24.506 250273 DEBUG oslo_concurrency.lockutils [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "b6d0c9a4-fe40-4f45-be0a-2784569cfb83" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:36:24 np0005593232 nova_compute[250269]: 2026-01-23 09:36:24.507 250273 DEBUG oslo_concurrency.lockutils [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Acquiring lock "b6d0c9a4-fe40-4f45-be0a-2784569cfb83-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:36:24 np0005593232 nova_compute[250269]: 2026-01-23 09:36:24.507 250273 DEBUG oslo_concurrency.lockutils [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "b6d0c9a4-fe40-4f45-be0a-2784569cfb83-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:36:24 np0005593232 nova_compute[250269]: 2026-01-23 09:36:24.508 250273 DEBUG oslo_concurrency.lockutils [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "b6d0c9a4-fe40-4f45-be0a-2784569cfb83-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:36:24 np0005593232 nova_compute[250269]: 2026-01-23 09:36:24.509 250273 INFO nova.compute.manager [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Terminating instance#033[00m
Jan 23 04:36:24 np0005593232 nova_compute[250269]: 2026-01-23 09:36:24.510 250273 DEBUG oslo_concurrency.lockutils [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Acquiring lock "refresh_cache-b6d0c9a4-fe40-4f45-be0a-2784569cfb83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:36:24 np0005593232 nova_compute[250269]: 2026-01-23 09:36:24.511 250273 DEBUG oslo_concurrency.lockutils [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Acquired lock "refresh_cache-b6d0c9a4-fe40-4f45-be0a-2784569cfb83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:36:24 np0005593232 nova_compute[250269]: 2026-01-23 09:36:24.511 250273 DEBUG nova.network.neutron [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:36:24 np0005593232 nova_compute[250269]: 2026-01-23 09:36:24.763 250273 DEBUG nova.network.neutron [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:36:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1265: 321 pgs: 321 active+clean; 268 MiB data, 447 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.7 MiB/s wr, 171 op/s
Jan 23 04:36:25 np0005593232 nova_compute[250269]: 2026-01-23 09:36:25.085 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:25 np0005593232 nova_compute[250269]: 2026-01-23 09:36:25.246 250273 DEBUG nova.network.neutron [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:36:25 np0005593232 nova_compute[250269]: 2026-01-23 09:36:25.286 250273 DEBUG oslo_concurrency.lockutils [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Releasing lock "refresh_cache-b6d0c9a4-fe40-4f45-be0a-2784569cfb83" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:36:25 np0005593232 nova_compute[250269]: 2026-01-23 09:36:25.287 250273 DEBUG nova.compute.manager [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 04:36:25 np0005593232 nova_compute[250269]: 2026-01-23 09:36:25.294 250273 INFO nova.virt.libvirt.driver [-] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Instance destroyed successfully.#033[00m
Jan 23 04:36:25 np0005593232 nova_compute[250269]: 2026-01-23 09:36:25.294 250273 DEBUG nova.objects.instance [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lazy-loading 'resources' on Instance uuid b6d0c9a4-fe40-4f45-be0a-2784569cfb83 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:36:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:25.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:26.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:26 np0005593232 podman[272337]: 2026-01-23 09:36:26.441485903 +0000 UTC m=+0.091724329 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 04:36:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1266: 321 pgs: 321 active+clean; 227 MiB data, 447 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 5.2 MiB/s wr, 256 op/s
Jan 23 04:36:27 np0005593232 nova_compute[250269]: 2026-01-23 09:36:27.170 250273 DEBUG nova.virt.libvirt.driver [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Creating tmpfile /var/lib/nova/instances/tmpet99ib7x to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Jan 23 04:36:27 np0005593232 nova_compute[250269]: 2026-01-23 09:36:27.172 250273 DEBUG nova.compute.manager [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpet99ib7x',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Jan 23 04:36:27 np0005593232 nova_compute[250269]: 2026-01-23 09:36:27.319 250273 INFO nova.virt.libvirt.driver [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Deleting instance files /var/lib/nova/instances/b6d0c9a4-fe40-4f45-be0a-2784569cfb83_del#033[00m
Jan 23 04:36:27 np0005593232 nova_compute[250269]: 2026-01-23 09:36:27.320 250273 INFO nova.virt.libvirt.driver [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Deletion of /var/lib/nova/instances/b6d0c9a4-fe40-4f45-be0a-2784569cfb83_del complete#033[00m
Jan 23 04:36:27 np0005593232 nova_compute[250269]: 2026-01-23 09:36:27.411 250273 INFO nova.compute.manager [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Took 2.12 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 04:36:27 np0005593232 nova_compute[250269]: 2026-01-23 09:36:27.412 250273 DEBUG oslo.service.loopingcall [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 04:36:27 np0005593232 nova_compute[250269]: 2026-01-23 09:36:27.413 250273 DEBUG nova.compute.manager [-] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 04:36:27 np0005593232 nova_compute[250269]: 2026-01-23 09:36:27.413 250273 DEBUG nova.network.neutron [-] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 04:36:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:36:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:27.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:36:28 np0005593232 nova_compute[250269]: 2026-01-23 09:36:28.036 250273 DEBUG nova.network.neutron [-] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:36:28 np0005593232 nova_compute[250269]: 2026-01-23 09:36:28.062 250273 DEBUG nova.network.neutron [-] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:36:28 np0005593232 nova_compute[250269]: 2026-01-23 09:36:28.089 250273 INFO nova.compute.manager [-] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Took 0.68 seconds to deallocate network for instance.#033[00m
Jan 23 04:36:28 np0005593232 nova_compute[250269]: 2026-01-23 09:36:28.148 250273 DEBUG oslo_concurrency.lockutils [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:36:28 np0005593232 nova_compute[250269]: 2026-01-23 09:36:28.149 250273 DEBUG oslo_concurrency.lockutils [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:36:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:28.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:28 np0005593232 nova_compute[250269]: 2026-01-23 09:36:28.278 250273 DEBUG oslo_concurrency.processutils [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:36:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:28.439 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:36:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:28.440 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:36:28 np0005593232 nova_compute[250269]: 2026-01-23 09:36:28.496 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:36:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:36:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2056149219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:36:28 np0005593232 nova_compute[250269]: 2026-01-23 09:36:28.781 250273 DEBUG oslo_concurrency.processutils [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:36:28 np0005593232 nova_compute[250269]: 2026-01-23 09:36:28.788 250273 DEBUG nova.compute.provider_tree [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:36:28 np0005593232 nova_compute[250269]: 2026-01-23 09:36:28.810 250273 DEBUG nova.scheduler.client.report [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:36:28 np0005593232 nova_compute[250269]: 2026-01-23 09:36:28.826 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:28 np0005593232 nova_compute[250269]: 2026-01-23 09:36:28.856 250273 DEBUG oslo_concurrency.lockutils [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:36:28 np0005593232 nova_compute[250269]: 2026-01-23 09:36:28.895 250273 INFO nova.scheduler.client.report [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Deleted allocations for instance b6d0c9a4-fe40-4f45-be0a-2784569cfb83#033[00m
Jan 23 04:36:29 np0005593232 nova_compute[250269]: 2026-01-23 09:36:29.012 250273 DEBUG oslo_concurrency.lockutils [None req-284742d4-e74b-4a17-82a8-6db162148839 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "b6d0c9a4-fe40-4f45-be0a-2784569cfb83" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:36:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1267: 321 pgs: 321 active+clean; 174 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 257 op/s
Jan 23 04:36:29 np0005593232 nova_compute[250269]: 2026-01-23 09:36:29.449 250273 DEBUG nova.compute.manager [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpet99ib7x',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5cea9bfc-e97a-4d07-a251-8ca3978b5f98',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Jan 23 04:36:29 np0005593232 nova_compute[250269]: 2026-01-23 09:36:29.505 250273 DEBUG oslo_concurrency.lockutils [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Acquiring lock "refresh_cache-5cea9bfc-e97a-4d07-a251-8ca3978b5f98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:36:29 np0005593232 nova_compute[250269]: 2026-01-23 09:36:29.505 250273 DEBUG oslo_concurrency.lockutils [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Acquired lock "refresh_cache-5cea9bfc-e97a-4d07-a251-8ca3978b5f98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:36:29 np0005593232 nova_compute[250269]: 2026-01-23 09:36:29.505 250273 DEBUG nova.network.neutron [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:36:29 np0005593232 nova_compute[250269]: 2026-01-23 09:36:29.661 250273 DEBUG oslo_concurrency.lockutils [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Acquiring lock "10d9de49-07bc-45e5-8935-d36bcbef1c0a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:36:29 np0005593232 nova_compute[250269]: 2026-01-23 09:36:29.661 250273 DEBUG oslo_concurrency.lockutils [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "10d9de49-07bc-45e5-8935-d36bcbef1c0a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:36:29 np0005593232 nova_compute[250269]: 2026-01-23 09:36:29.662 250273 DEBUG oslo_concurrency.lockutils [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Acquiring lock "10d9de49-07bc-45e5-8935-d36bcbef1c0a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:36:29 np0005593232 nova_compute[250269]: 2026-01-23 09:36:29.662 250273 DEBUG oslo_concurrency.lockutils [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "10d9de49-07bc-45e5-8935-d36bcbef1c0a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:36:29 np0005593232 nova_compute[250269]: 2026-01-23 09:36:29.662 250273 DEBUG oslo_concurrency.lockutils [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "10d9de49-07bc-45e5-8935-d36bcbef1c0a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:36:29 np0005593232 nova_compute[250269]: 2026-01-23 09:36:29.663 250273 INFO nova.compute.manager [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Terminating instance#033[00m
Jan 23 04:36:29 np0005593232 nova_compute[250269]: 2026-01-23 09:36:29.664 250273 DEBUG oslo_concurrency.lockutils [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Acquiring lock "refresh_cache-10d9de49-07bc-45e5-8935-d36bcbef1c0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:36:29 np0005593232 nova_compute[250269]: 2026-01-23 09:36:29.664 250273 DEBUG oslo_concurrency.lockutils [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Acquired lock "refresh_cache-10d9de49-07bc-45e5-8935-d36bcbef1c0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:36:29 np0005593232 nova_compute[250269]: 2026-01-23 09:36:29.665 250273 DEBUG nova.network.neutron [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:36:29 np0005593232 nova_compute[250269]: 2026-01-23 09:36:29.835 250273 DEBUG nova.network.neutron [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:36:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:36:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:29.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:36:30 np0005593232 nova_compute[250269]: 2026-01-23 09:36:30.087 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:30.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.011 250273 DEBUG nova.network.neutron [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.059 250273 DEBUG oslo_concurrency.lockutils [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Releasing lock "refresh_cache-10d9de49-07bc-45e5-8935-d36bcbef1c0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.060 250273 DEBUG nova.compute.manager [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 04:36:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1268: 321 pgs: 321 active+clean; 174 MiB data, 427 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.5 MiB/s wr, 212 op/s
Jan 23 04:36:31 np0005593232 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000018.scope: Deactivated successfully.
Jan 23 04:36:31 np0005593232 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000018.scope: Consumed 14.179s CPU time.
Jan 23 04:36:31 np0005593232 systemd-machined[215836]: Machine qemu-11-instance-00000018 terminated.
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.292 250273 INFO nova.virt.libvirt.driver [-] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Instance destroyed successfully.#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.294 250273 DEBUG nova.objects.instance [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lazy-loading 'resources' on Instance uuid 10d9de49-07bc-45e5-8935-d36bcbef1c0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.803 250273 INFO nova.virt.libvirt.driver [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Deleting instance files /var/lib/nova/instances/10d9de49-07bc-45e5-8935-d36bcbef1c0a_del#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.804 250273 INFO nova.virt.libvirt.driver [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Deletion of /var/lib/nova/instances/10d9de49-07bc-45e5-8935-d36bcbef1c0a_del complete#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.866 250273 DEBUG nova.network.neutron [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Updating instance_info_cache with network_info: [{"id": "a19a3bde-2463-4f15-afe7-f8df8c608bb7", "address": "fa:16:3e:7b:ec:dc", "network": {"id": "8eab8076-0848-4daf-bbac-f3f8b65ca750", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1369330040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0dce6e339c349d4ab97cee5e49fff3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19a3bde-24", "ovs_interfaceid": "a19a3bde-2463-4f15-afe7-f8df8c608bb7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.880 250273 INFO nova.compute.manager [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.881 250273 DEBUG oslo.service.loopingcall [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.881 250273 DEBUG nova.compute.manager [-] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.882 250273 DEBUG nova.network.neutron [-] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.889 250273 DEBUG oslo_concurrency.lockutils [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Releasing lock "refresh_cache-5cea9bfc-e97a-4d07-a251-8ca3978b5f98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.891 250273 DEBUG nova.virt.libvirt.driver [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpet99ib7x',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5cea9bfc-e97a-4d07-a251-8ca3978b5f98',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.892 250273 DEBUG nova.virt.libvirt.driver [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Creating instance directory: /var/lib/nova/instances/5cea9bfc-e97a-4d07-a251-8ca3978b5f98 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.893 250273 DEBUG nova.virt.libvirt.driver [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Ensure instance console log exists: /var/lib/nova/instances/5cea9bfc-e97a-4d07-a251-8ca3978b5f98/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.893 250273 DEBUG nova.virt.libvirt.driver [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.895 250273 DEBUG nova.virt.libvirt.vif [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:36:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1674522276',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1674522276',id=26,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:36:21Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d0dce6e339c349d4ab97cee5e49fff3a',ramdisk_id='',reservation_id='r-lql1hhru',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1207260646',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1207260646-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:36:21Z,user_data=None,user_id='4f72965e950c4761bfedd99fdc411a83',uuid=5cea9bfc-e97a-4d07-a251-8ca3978b5f98,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a19a3bde-2463-4f15-afe7-f8df8c608bb7", "address": "fa:16:3e:7b:ec:dc", "network": {"id": "8eab8076-0848-4daf-bbac-f3f8b65ca750", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1369330040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0dce6e339c349d4ab97cee5e49fff3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapa19a3bde-24", "ovs_interfaceid": "a19a3bde-2463-4f15-afe7-f8df8c608bb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.896 250273 DEBUG nova.network.os_vif_util [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Converting VIF {"id": "a19a3bde-2463-4f15-afe7-f8df8c608bb7", "address": "fa:16:3e:7b:ec:dc", "network": {"id": "8eab8076-0848-4daf-bbac-f3f8b65ca750", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1369330040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0dce6e339c349d4ab97cee5e49fff3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tapa19a3bde-24", "ovs_interfaceid": "a19a3bde-2463-4f15-afe7-f8df8c608bb7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.897 250273 DEBUG nova.network.os_vif_util [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7b:ec:dc,bridge_name='br-int',has_traffic_filtering=True,id=a19a3bde-2463-4f15-afe7-f8df8c608bb7,network=Network(8eab8076-0848-4daf-bbac-f3f8b65ca750),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa19a3bde-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.898 250273 DEBUG os_vif [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:ec:dc,bridge_name='br-int',has_traffic_filtering=True,id=a19a3bde-2463-4f15-afe7-f8df8c608bb7,network=Network(8eab8076-0848-4daf-bbac-f3f8b65ca750),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa19a3bde-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.899 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.900 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.901 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.905 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.906 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa19a3bde-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.907 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa19a3bde-24, col_values=(('external_ids', {'iface-id': 'a19a3bde-2463-4f15-afe7-f8df8c608bb7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7b:ec:dc', 'vm-uuid': '5cea9bfc-e97a-4d07-a251-8ca3978b5f98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.936 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:31 np0005593232 NetworkManager[49057]: <info>  [1769160991.9377] manager: (tapa19a3bde-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.938 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.947 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.949 250273 INFO os_vif [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7b:ec:dc,bridge_name='br-int',has_traffic_filtering=True,id=a19a3bde-2463-4f15-afe7-f8df8c608bb7,network=Network(8eab8076-0848-4daf-bbac-f3f8b65ca750),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa19a3bde-24')#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.949 250273 DEBUG nova.virt.libvirt.driver [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Jan 23 04:36:31 np0005593232 nova_compute[250269]: 2026-01-23 09:36:31.950 250273 DEBUG nova.compute.manager [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpet99ib7x',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5cea9bfc-e97a-4d07-a251-8ca3978b5f98',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Jan 23 04:36:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:31.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:32 np0005593232 nova_compute[250269]: 2026-01-23 09:36:32.069 250273 DEBUG nova.network.neutron [-] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:36:32 np0005593232 nova_compute[250269]: 2026-01-23 09:36:32.097 250273 DEBUG nova.network.neutron [-] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:36:32 np0005593232 nova_compute[250269]: 2026-01-23 09:36:32.121 250273 INFO nova.compute.manager [-] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Took 0.24 seconds to deallocate network for instance.#033[00m
Jan 23 04:36:32 np0005593232 nova_compute[250269]: 2026-01-23 09:36:32.224 250273 DEBUG oslo_concurrency.lockutils [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:36:32 np0005593232 nova_compute[250269]: 2026-01-23 09:36:32.225 250273 DEBUG oslo_concurrency.lockutils [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:36:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:32.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:32 np0005593232 nova_compute[250269]: 2026-01-23 09:36:32.308 250273 DEBUG oslo_concurrency.processutils [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:36:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:36:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2860122213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:36:32 np0005593232 nova_compute[250269]: 2026-01-23 09:36:32.799 250273 DEBUG oslo_concurrency.processutils [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:36:32 np0005593232 nova_compute[250269]: 2026-01-23 09:36:32.811 250273 DEBUG nova.compute.provider_tree [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:36:32 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 23 04:36:32 np0005593232 nova_compute[250269]: 2026-01-23 09:36:32.855 250273 DEBUG nova.scheduler.client.report [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:36:32 np0005593232 nova_compute[250269]: 2026-01-23 09:36:32.898 250273 DEBUG oslo_concurrency.lockutils [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:36:32 np0005593232 nova_compute[250269]: 2026-01-23 09:36:32.943 250273 INFO nova.scheduler.client.report [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Deleted allocations for instance 10d9de49-07bc-45e5-8935-d36bcbef1c0a#033[00m
Jan 23 04:36:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1269: 321 pgs: 321 active+clean; 100 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.5 MiB/s wr, 226 op/s
Jan 23 04:36:33 np0005593232 nova_compute[250269]: 2026-01-23 09:36:33.155 250273 DEBUG oslo_concurrency.lockutils [None req-3493890b-0df3-49e4-b5ed-4bf410d510a9 3f23a08c373543188394924b6b01739b 8e2e23ed016b4d0f959d9af65ac543af - - default default] Lock "10d9de49-07bc-45e5-8935-d36bcbef1c0a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:36:33 np0005593232 nova_compute[250269]: 2026-01-23 09:36:33.484 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769160978.4812589, b6d0c9a4-fe40-4f45-be0a-2784569cfb83 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:36:33 np0005593232 nova_compute[250269]: 2026-01-23 09:36:33.484 250273 INFO nova.compute.manager [-] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:36:33 np0005593232 nova_compute[250269]: 2026-01-23 09:36:33.565 250273 DEBUG nova.compute.manager [None req-eb7e4abe-4e46-4d32-b35e-0886d708f67f - - - - - -] [instance: b6d0c9a4-fe40-4f45-be0a-2784569cfb83] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:36:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:36:33 np0005593232 nova_compute[250269]: 2026-01-23 09:36:33.899 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:33.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:34.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1270: 321 pgs: 321 active+clean; 100 MiB data, 360 MiB used, 21 GiB / 21 GiB avail; 1.7 MiB/s rd, 500 KiB/s wr, 127 op/s
Jan 23 04:36:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:35.442 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:36:35 np0005593232 nova_compute[250269]: 2026-01-23 09:36:35.610 250273 DEBUG nova.network.neutron [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Port a19a3bde-2463-4f15-afe7-f8df8c608bb7 updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Jan 23 04:36:35 np0005593232 nova_compute[250269]: 2026-01-23 09:36:35.612 250273 DEBUG nova.compute.manager [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpet99ib7x',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5cea9bfc-e97a-4d07-a251-8ca3978b5f98',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Jan 23 04:36:35 np0005593232 systemd[1]: Starting libvirt proxy daemon...
Jan 23 04:36:35 np0005593232 systemd[1]: Started libvirt proxy daemon.
Jan 23 04:36:35 np0005593232 kernel: tapa19a3bde-24: entered promiscuous mode
Jan 23 04:36:35 np0005593232 ovn_controller[151001]: 2026-01-23T09:36:35Z|00081|binding|INFO|Claiming lport a19a3bde-2463-4f15-afe7-f8df8c608bb7 for this additional chassis.
Jan 23 04:36:35 np0005593232 ovn_controller[151001]: 2026-01-23T09:36:35Z|00082|binding|INFO|a19a3bde-2463-4f15-afe7-f8df8c608bb7: Claiming fa:16:3e:7b:ec:dc 10.100.0.6
Jan 23 04:36:35 np0005593232 ovn_controller[151001]: 2026-01-23T09:36:35Z|00083|binding|INFO|Claiming lport 9852d6c7-7b56-465e-865f-eb8c24e61417 for this additional chassis.
Jan 23 04:36:35 np0005593232 ovn_controller[151001]: 2026-01-23T09:36:35Z|00084|binding|INFO|9852d6c7-7b56-465e-865f-eb8c24e61417: Claiming fa:16:3e:9e:36:a6 19.80.0.21
Jan 23 04:36:35 np0005593232 NetworkManager[49057]: <info>  [1769160995.9135] manager: (tapa19a3bde-24): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Jan 23 04:36:35 np0005593232 nova_compute[250269]: 2026-01-23 09:36:35.912 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:35 np0005593232 nova_compute[250269]: 2026-01-23 09:36:35.921 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:35 np0005593232 systemd-machined[215836]: New machine qemu-13-instance-0000001a.
Jan 23 04:36:35 np0005593232 systemd-udevd[272521]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:36:35 np0005593232 NetworkManager[49057]: <info>  [1769160995.9575] device (tapa19a3bde-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:36:35 np0005593232 NetworkManager[49057]: <info>  [1769160995.9583] device (tapa19a3bde-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:36:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:35.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:35 np0005593232 systemd[1]: Started Virtual Machine qemu-13-instance-0000001a.
Jan 23 04:36:36 np0005593232 ovn_controller[151001]: 2026-01-23T09:36:36Z|00085|binding|INFO|Setting lport a19a3bde-2463-4f15-afe7-f8df8c608bb7 ovn-installed in OVS
Jan 23 04:36:36 np0005593232 nova_compute[250269]: 2026-01-23 09:36:36.041 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:36.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:36 np0005593232 nova_compute[250269]: 2026-01-23 09:36:36.693 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160996.693077, 5cea9bfc-e97a-4d07-a251-8ca3978b5f98 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:36:36 np0005593232 nova_compute[250269]: 2026-01-23 09:36:36.694 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] VM Started (Lifecycle Event)#033[00m
Jan 23 04:36:36 np0005593232 nova_compute[250269]: 2026-01-23 09:36:36.749 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:36:36 np0005593232 nova_compute[250269]: 2026-01-23 09:36:36.936 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1271: 321 pgs: 321 active+clean; 110 MiB data, 377 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 206 op/s
Jan 23 04:36:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:36:37
Jan 23 04:36:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:36:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:36:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'vms', 'default.rgw.meta', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'images', 'default.rgw.log', 'cephfs.cephfs.meta']
Jan 23 04:36:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:36:37 np0005593232 nova_compute[250269]: 2026-01-23 09:36:37.207 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769160997.207481, 5cea9bfc-e97a-4d07-a251-8ca3978b5f98 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:36:37 np0005593232 nova_compute[250269]: 2026-01-23 09:36:37.208 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:36:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:36:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:36:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:36:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:36:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:36:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:36:37 np0005593232 nova_compute[250269]: 2026-01-23 09:36:37.242 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:36:37 np0005593232 nova_compute[250269]: 2026-01-23 09:36:37.246 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:36:37 np0005593232 nova_compute[250269]: 2026-01-23 09:36:37.277 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com#033[00m
Jan 23 04:36:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:37.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:36:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:36:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:36:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:36:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:36:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:36:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:36:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:36:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:36:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:36:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:38.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:36:38 np0005593232 nova_compute[250269]: 2026-01-23 09:36:38.901 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1272: 321 pgs: 321 active+clean; 121 MiB data, 384 MiB used, 21 GiB / 21 GiB avail; 691 KiB/s rd, 2.1 MiB/s wr, 128 op/s
Jan 23 04:36:39 np0005593232 podman[272574]: 2026-01-23 09:36:39.407860731 +0000 UTC m=+0.063422718 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 23 04:36:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:39.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:40.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:40 np0005593232 ovn_controller[151001]: 2026-01-23T09:36:40Z|00086|binding|INFO|Claiming lport a19a3bde-2463-4f15-afe7-f8df8c608bb7 for this chassis.
Jan 23 04:36:40 np0005593232 ovn_controller[151001]: 2026-01-23T09:36:40Z|00087|binding|INFO|a19a3bde-2463-4f15-afe7-f8df8c608bb7: Claiming fa:16:3e:7b:ec:dc 10.100.0.6
Jan 23 04:36:40 np0005593232 ovn_controller[151001]: 2026-01-23T09:36:40Z|00088|binding|INFO|Claiming lport 9852d6c7-7b56-465e-865f-eb8c24e61417 for this chassis.
Jan 23 04:36:40 np0005593232 ovn_controller[151001]: 2026-01-23T09:36:40Z|00089|binding|INFO|9852d6c7-7b56-465e-865f-eb8c24e61417: Claiming fa:16:3e:9e:36:a6 19.80.0.21
Jan 23 04:36:40 np0005593232 ovn_controller[151001]: 2026-01-23T09:36:40Z|00090|binding|INFO|Setting lport a19a3bde-2463-4f15-afe7-f8df8c608bb7 up in Southbound
Jan 23 04:36:40 np0005593232 ovn_controller[151001]: 2026-01-23T09:36:40Z|00091|binding|INFO|Setting lport 9852d6c7-7b56-465e-865f-eb8c24e61417 up in Southbound
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.398 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:36:a6 19.80.0.21'], port_security=['fa:16:3e:9e:36:a6 19.80.0.21'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': ''}, parent_port=['a19a3bde-2463-4f15-afe7-f8df8c608bb7'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1709862236', 'neutron:cidrs': '19.80.0.21/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-914246be-3a6e-47b3-afc0-463db5fa1dae', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1709862236', 'neutron:project_id': 'd0dce6e339c349d4ab97cee5e49fff3a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0179c400-b2f2-4914-b563-942a61ef1858', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=6bd4ce00-6348-4d9c-ba3b-d576a6d3e856, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9852d6c7-7b56-465e-865f-eb8c24e61417) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.400 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:ec:dc 10.100.0.6'], port_security=['fa:16:3e:7b:ec:dc 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-178752437', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5cea9bfc-e97a-4d07-a251-8ca3978b5f98', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8eab8076-0848-4daf-bbac-f3f8b65ca750', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-178752437', 'neutron:project_id': 'd0dce6e339c349d4ab97cee5e49fff3a', 'neutron:revision_number': '11', 'neutron:security_group_ids': '0179c400-b2f2-4914-b563-942a61ef1858', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbb60528-b878-42fd-9c2f-0a3345010b1a, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=a19a3bde-2463-4f15-afe7-f8df8c608bb7) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.400 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 9852d6c7-7b56-465e-865f-eb8c24e61417 in datapath 914246be-3a6e-47b3-afc0-463db5fa1dae bound to our chassis#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.402 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 914246be-3a6e-47b3-afc0-463db5fa1dae#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.412 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5401759f-02f9-46e0-bfd6-1a707a2e8c4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.413 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap914246be-31 in ovnmeta-914246be-3a6e-47b3-afc0-463db5fa1dae namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.415 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap914246be-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.415 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[08f7550d-4c85-4023-997a-2cbd81631843]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.416 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[83a6ac83-4bf4-4ac6-90d0-adabd0e34920]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.428 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e6c8d9-80b1-4326-9147-a7d56da31db2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.440 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0e58408e-7853-468a-98d5-63c48e111d34]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.470 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[0e122b47-5f38-4504-bcaf-468b9ab96acd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.476 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7df52c41-56a6-4b0a-b8fe-331838beb5cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:40 np0005593232 NetworkManager[49057]: <info>  [1769161000.4773] manager: (tap914246be-30): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Jan 23 04:36:40 np0005593232 systemd-udevd[272601]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.506 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[e5d52a24-540a-480d-b91b-2a71c6036e29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.509 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[c3540b1d-6485-4c7a-9e66-5c589f4022d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:40 np0005593232 NetworkManager[49057]: <info>  [1769161000.5333] device (tap914246be-30): carrier: link connected
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.536 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[0e884361-fdad-44ca-b454-4a9c4783b1fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.552 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[443a0b93-b5f9-4bec-b8e9-14ae61884310]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap914246be-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:ab:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491367, 'reachable_time': 28689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272620, 'error': None, 'target': 'ovnmeta-914246be-3a6e-47b3-afc0-463db5fa1dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.567 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[843fe688-15f0-4d63-b40d-095bd892c584]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe91:abf6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491367, 'tstamp': 491367}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272621, 'error': None, 'target': 'ovnmeta-914246be-3a6e-47b3-afc0-463db5fa1dae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.583 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a262b92d-b6c9-4291-bc2e-d0e009862184]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap914246be-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:91:ab:f6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491367, 'reachable_time': 28689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272622, 'error': None, 'target': 'ovnmeta-914246be-3a6e-47b3-afc0-463db5fa1dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:40 np0005593232 nova_compute[250269]: 2026-01-23 09:36:40.597 250273 INFO nova.compute.manager [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Post operation of migration started#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.615 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[87169400-f27b-4632-9ec1-1a24af213d7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.671 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[59341f9d-2e4f-4da2-91d8-a8c56f251040]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.673 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap914246be-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.674 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.674 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap914246be-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:36:40 np0005593232 nova_compute[250269]: 2026-01-23 09:36:40.676 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:40 np0005593232 NetworkManager[49057]: <info>  [1769161000.6770] manager: (tap914246be-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Jan 23 04:36:40 np0005593232 kernel: tap914246be-30: entered promiscuous mode
Jan 23 04:36:40 np0005593232 nova_compute[250269]: 2026-01-23 09:36:40.680 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.681 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap914246be-30, col_values=(('external_ids', {'iface-id': '3fd7a4e0-3e1c-454c-a93e-1fa905fbcde2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:36:40 np0005593232 nova_compute[250269]: 2026-01-23 09:36:40.682 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:40 np0005593232 ovn_controller[151001]: 2026-01-23T09:36:40Z|00092|binding|INFO|Releasing lport 3fd7a4e0-3e1c-454c-a93e-1fa905fbcde2 from this chassis (sb_readonly=0)
Jan 23 04:36:40 np0005593232 nova_compute[250269]: 2026-01-23 09:36:40.698 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.699 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/914246be-3a6e-47b3-afc0-463db5fa1dae.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/914246be-3a6e-47b3-afc0-463db5fa1dae.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.700 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1b968ce4-85e0-4f75-a108-2a370c233209]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.701 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-914246be-3a6e-47b3-afc0-463db5fa1dae
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/914246be-3a6e-47b3-afc0-463db5fa1dae.pid.haproxy
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 914246be-3a6e-47b3-afc0-463db5fa1dae
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:36:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:40.702 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-914246be-3a6e-47b3-afc0-463db5fa1dae', 'env', 'PROCESS_TAG=haproxy-914246be-3a6e-47b3-afc0-463db5fa1dae', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/914246be-3a6e-47b3-afc0-463db5fa1dae.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 04:36:41 np0005593232 podman[272655]: 2026-01-23 09:36:41.084181932 +0000 UTC m=+0.047609519 container create 479779ca116ef5a9f2c8dd4d7fc058c7c950a23d64dd7fce681e62050904894b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-914246be-3a6e-47b3-afc0-463db5fa1dae, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 23 04:36:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1273: 321 pgs: 321 active+clean; 121 MiB data, 384 MiB used, 21 GiB / 21 GiB avail; 411 KiB/s rd, 2.1 MiB/s wr, 99 op/s
Jan 23 04:36:41 np0005593232 systemd[1]: Started libpod-conmon-479779ca116ef5a9f2c8dd4d7fc058c7c950a23d64dd7fce681e62050904894b.scope.
Jan 23 04:36:41 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:36:41 np0005593232 podman[272655]: 2026-01-23 09:36:41.058574957 +0000 UTC m=+0.022002564 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:36:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14216a362cf9d56e7493dc1ce3066f9cbd2c28ed860b0c43a1cf7f6075534319/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:36:41 np0005593232 podman[272655]: 2026-01-23 09:36:41.168747747 +0000 UTC m=+0.132175344 container init 479779ca116ef5a9f2c8dd4d7fc058c7c950a23d64dd7fce681e62050904894b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-914246be-3a6e-47b3-afc0-463db5fa1dae, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 23 04:36:41 np0005593232 podman[272655]: 2026-01-23 09:36:41.173746499 +0000 UTC m=+0.137174076 container start 479779ca116ef5a9f2c8dd4d7fc058c7c950a23d64dd7fce681e62050904894b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-914246be-3a6e-47b3-afc0-463db5fa1dae, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 23 04:36:41 np0005593232 neutron-haproxy-ovnmeta-914246be-3a6e-47b3-afc0-463db5fa1dae[272670]: [NOTICE]   (272674) : New worker (272676) forked
Jan 23 04:36:41 np0005593232 neutron-haproxy-ovnmeta-914246be-3a6e-47b3-afc0-463db5fa1dae[272670]: [NOTICE]   (272674) : Loading success.
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.231 161902 INFO neutron.agent.ovn.metadata.agent [-] Port a19a3bde-2463-4f15-afe7-f8df8c608bb7 in datapath 8eab8076-0848-4daf-bbac-f3f8b65ca750 unbound from our chassis#033[00m
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.235 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8eab8076-0848-4daf-bbac-f3f8b65ca750#033[00m
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.253 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[34ea6cc4-2818-4b74-bb2d-bd8f6ed899c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.255 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8eab8076-01 in ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.258 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8eab8076-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.258 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4a4a7e85-b76d-4e28-9383-b98395d14b50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.260 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c95e54da-5bef-4842-8e8c-340f377edb68]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.270 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[48bc1248-4f96-496d-9a34-a9f155d7a861]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.294 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3f84e2e0-db4a-4de6-8b35-56b991be28c2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.334 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[b2b7d7c2-44a4-472a-88ed-fde140ec67bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.343 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a9b87d79-ec8f-4b85-851c-ed9ed1e81198]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:41 np0005593232 NetworkManager[49057]: <info>  [1769161001.3444] manager: (tap8eab8076-00): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Jan 23 04:36:41 np0005593232 systemd-udevd[272612]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.377 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[9cfb273f-1c54-46b2-a316-71e3897af9ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.380 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[9b445398-7794-444c-b9bf-cb22485d5fc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:41 np0005593232 NetworkManager[49057]: <info>  [1769161001.4033] device (tap8eab8076-00): carrier: link connected
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.410 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[b0bd0b74-92c3-4d42-a26a-f54632cc07ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.429 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[cd1b32ed-e05c-4cfc-926f-68ac8994157e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8eab8076-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:5b:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491454, 'reachable_time': 30252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272695, 'error': None, 'target': 'ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.448 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c00ba124-35b5-4b60-b933-6a2dfda7d4ec]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe46:5b99'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491454, 'tstamp': 491454}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272696, 'error': None, 'target': 'ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.464 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[fb63f2b1-7f3f-49fb-9af6-8c2a17fd9c6b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8eab8076-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:5b:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491454, 'reachable_time': 30252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272697, 'error': None, 'target': 'ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
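The two privsep replies above dump raw pyroute2-style RTM_NEWLINK messages for tap8eab8076-01; the useful fields are buried in nested `[name, value]` attr lists. A minimal sketch of pulling them out — the `get_attr`/`summarize_link` helpers are illustrative, not neutron or pyroute2 API, and `sample` is a hand-reduced stand-in for the full logged record:

```python
def get_attr(attrs, name):
    """Return the value of the first attr with the given name, else None."""
    for key, value in attrs:
        if key == name:
            return value
    return None

def summarize_link(msg):
    """Reduce an RTM_NEWLINK message dict to the fields worth eyeballing."""
    attrs = msg["attrs"]
    return {
        "ifname": get_attr(attrs, "IFLA_IFNAME"),
        "mac": get_attr(attrs, "IFLA_ADDRESS"),
        "mtu": get_attr(attrs, "IFLA_MTU"),
        "operstate": get_attr(attrs, "IFLA_OPERSTATE"),
        "state": msg.get("state"),
    }

# Trimmed from the logged message; the real record carries dozens more attrs.
sample = {
    "attrs": [
        ["IFLA_IFNAME", "tap8eab8076-01"],
        ["IFLA_MTU", 1500],
        ["IFLA_OPERSTATE", "UP"],
        ["IFLA_ADDRESS", "fa:16:3e:46:5b:99"],
    ],
    "state": "up",
}

print(summarize_link(sample))
```

The same walk works on any message in these dumps; unknown attrs (the `['UNKNOWN', ...]` entries) are simply never matched.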
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.491 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b902f4d0-7105-4972-8d52-43171f89f0f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.549 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[81679485-d2e6-4317-a1ee-d667186c3105]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.551 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8eab8076-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.551 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.551 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8eab8076-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:36:41 np0005593232 nova_compute[250269]: 2026-01-23 09:36:41.553 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:41 np0005593232 kernel: tap8eab8076-00: entered promiscuous mode
Jan 23 04:36:41 np0005593232 NetworkManager[49057]: <info>  [1769161001.5538] manager: (tap8eab8076-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.556 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8eab8076-00, col_values=(('external_ids', {'iface-id': 'b545a870-aa18-4f64-a8a7-f8512824c4cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:36:41 np0005593232 ovn_controller[151001]: 2026-01-23T09:36:41Z|00093|binding|INFO|Releasing lport b545a870-aa18-4f64-a8a7-f8512824c4cc from this chassis (sb_readonly=0)
Jan 23 04:36:41 np0005593232 nova_compute[250269]: 2026-01-23 09:36:41.557 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:41 np0005593232 nova_compute[250269]: 2026-01-23 09:36:41.572 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.573 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8eab8076-0848-4daf-bbac-f3f8b65ca750.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8eab8076-0848-4daf-bbac-f3f8b65ca750.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.574 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc0e9ad-c1e9-4f3e-9fac-99ea95e07d03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
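The "Unable to access ... .pid.haproxy; Error: [Errno 2]" line above is benign: on first provisioning the pid file does not exist yet, so the agent's file read just yields nothing. A minimal sketch of that tolerant-read pattern — not the actual neutron `get_value_from_file` implementation, and the path is illustrative:

```python
def get_value_from_file(path, converter=None):
    """Read a small value file; return None (quietly) if it is absent."""
    try:
        with open(path) as f:
            value = f.read().strip()
            return converter(value) if converter else value
    except FileNotFoundError:
        # First start: no haproxy pid file yet, so there is nothing to kill.
        return None

pid = get_value_from_file(
    "/var/lib/neutron/external/pids/example.pid.haproxy", int)
print(pid)  # None when the file does not exist yet
```

Because the miss is expected, it surfaces as DEBUG rather than an error, and provisioning proceeds to write a fresh config and spawn haproxy.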
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.575 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-8eab8076-0848-4daf-bbac-f3f8b65ca750
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/8eab8076-0848-4daf-bbac-f3f8b65ca750.pid.haproxy
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 8eab8076-0848-4daf-bbac-f3f8b65ca750
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:36:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:41.575 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750', 'env', 'PROCESS_TAG=haproxy-8eab8076-0848-4daf-bbac-f3f8b65ca750', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8eab8076-0848-4daf-bbac-f3f8b65ca750.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
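The config dumped above is what haproxy is then launched with inside the ovnmeta namespace: it pins a pidfile per network and binds the metadata address 169.254.169.254:80. A small sketch of sanity-checking such a generated config by line parsing — `parse_haproxy_cfg` is an illustrative helper, and the sample text is trimmed from the logged config:

```python
def parse_haproxy_cfg(text):
    """Pick the pidfile and bind address out of a generated haproxy config."""
    out = {}
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "pidfile":
            out["pidfile"] = parts[1]
        elif parts[0] == "bind":
            out["bind"] = parts[1]
    return out

cfg = """\
global
   pidfile     /var/lib/neutron/external/pids/8eab8076-0848-4daf-bbac-f3f8b65ca750.pid.haproxy
   daemon

listen listener
   bind 169.254.169.254:80
"""

print(parse_haproxy_cfg(cfg))
```

The pidfile path is what the agent probed (and missed) a moment earlier, which is why the ENOENT debug line precedes the config dump.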
Jan 23 04:36:41 np0005593232 nova_compute[250269]: 2026-01-23 09:36:41.673 250273 DEBUG oslo_concurrency.lockutils [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Acquiring lock "refresh_cache-5cea9bfc-e97a-4d07-a251-8ca3978b5f98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:36:41 np0005593232 nova_compute[250269]: 2026-01-23 09:36:41.673 250273 DEBUG oslo_concurrency.lockutils [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Acquired lock "refresh_cache-5cea9bfc-e97a-4d07-a251-8ca3978b5f98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:36:41 np0005593232 nova_compute[250269]: 2026-01-23 09:36:41.673 250273 DEBUG nova.network.neutron [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:36:41 np0005593232 podman[272728]: 2026-01-23 09:36:41.931315217 +0000 UTC m=+0.044660286 container create 7cd6cd44455a87f9fc1ea9caaa97d23f11ab5660c25cfa841811d3ccd2011eec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:36:41 np0005593232 nova_compute[250269]: 2026-01-23 09:36:41.939 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:41 np0005593232 systemd[1]: Started libpod-conmon-7cd6cd44455a87f9fc1ea9caaa97d23f11ab5660c25cfa841811d3ccd2011eec.scope.
Jan 23 04:36:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:36:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:41.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:36:41 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:36:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cd26fcad317a5eb192e672116216a3fc3727b065e793f91b473b43baa8b5e78/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:36:42 np0005593232 podman[272728]: 2026-01-23 09:36:42.002089242 +0000 UTC m=+0.115434331 container init 7cd6cd44455a87f9fc1ea9caaa97d23f11ab5660c25cfa841811d3ccd2011eec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:36:42 np0005593232 podman[272728]: 2026-01-23 09:36:41.909726536 +0000 UTC m=+0.023071605 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:36:42 np0005593232 podman[272728]: 2026-01-23 09:36:42.009914054 +0000 UTC m=+0.123259123 container start 7cd6cd44455a87f9fc1ea9caaa97d23f11ab5660c25cfa841811d3ccd2011eec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 04:36:42 np0005593232 neutron-haproxy-ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750[272743]: [NOTICE]   (272747) : New worker (272749) forked
Jan 23 04:36:42 np0005593232 neutron-haproxy-ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750[272743]: [NOTICE]   (272747) : Loading success.
Jan 23 04:36:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:42.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:42.591 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:36:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:42.592 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:36:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:36:42.593 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:36:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1274: 321 pgs: 321 active+clean; 167 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 431 KiB/s rd, 3.9 MiB/s wr, 130 op/s
Jan 23 04:36:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:36:43 np0005593232 nova_compute[250269]: 2026-01-23 09:36:43.944 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:36:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:43.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:36:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:44.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Jan 23 04:36:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Jan 23 04:36:44 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Jan 23 04:36:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:36:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:36:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:36:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:36:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1276: 321 pgs: 321 active+clean; 167 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 510 KiB/s rd, 4.7 MiB/s wr, 139 op/s
Jan 23 04:36:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:36:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:36:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:36:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:36:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:36:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:36:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:36:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:36:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:45.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:45 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 76317835-654c-454f-a13e-f4cf68dd4a03 does not exist
Jan 23 04:36:45 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 9edc48b1-b789-4fbb-a69e-a33c3fd5a965 does not exist
Jan 23 04:36:45 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 2f0cc178-5b11-49fe-8001-5aae8929c97c does not exist
Jan 23 04:36:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:36:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:36:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:36:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:36:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:36:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:36:46 np0005593232 nova_compute[250269]: 2026-01-23 09:36:46.209 250273 DEBUG nova.network.neutron [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Updating instance_info_cache with network_info: [{"id": "a19a3bde-2463-4f15-afe7-f8df8c608bb7", "address": "fa:16:3e:7b:ec:dc", "network": {"id": "8eab8076-0848-4daf-bbac-f3f8b65ca750", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1369330040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0dce6e339c349d4ab97cee5e49fff3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19a3bde-24", "ovs_interfaceid": "a19a3bde-2463-4f15-afe7-f8df8c608bb7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:36:46 np0005593232 nova_compute[250269]: 2026-01-23 09:36:46.268 250273 DEBUG oslo_concurrency.lockutils [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Releasing lock "refresh_cache-5cea9bfc-e97a-4d07-a251-8ca3978b5f98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:36:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:46.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:46 np0005593232 nova_compute[250269]: 2026-01-23 09:36:46.289 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769160991.288595, 10d9de49-07bc-45e5-8935-d36bcbef1c0a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:36:46 np0005593232 nova_compute[250269]: 2026-01-23 09:36:46.290 250273 INFO nova.compute.manager [-] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:36:46 np0005593232 nova_compute[250269]: 2026-01-23 09:36:46.292 250273 DEBUG oslo_concurrency.lockutils [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:36:46 np0005593232 nova_compute[250269]: 2026-01-23 09:36:46.292 250273 DEBUG oslo_concurrency.lockutils [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:36:46 np0005593232 nova_compute[250269]: 2026-01-23 09:36:46.292 250273 DEBUG oslo_concurrency.lockutils [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:36:46 np0005593232 nova_compute[250269]: 2026-01-23 09:36:46.297 250273 INFO nova.virt.libvirt.driver [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Jan 23 04:36:46 np0005593232 virtqemud[249592]: Domain id=13 name='instance-0000001a' uuid=5cea9bfc-e97a-4d07-a251-8ca3978b5f98 is tainted: custom-monitor
Jan 23 04:36:46 np0005593232 nova_compute[250269]: 2026-01-23 09:36:46.319 250273 DEBUG nova.compute.manager [None req-eeb02915-35be-492d-9e9c-3dcb0f8a6290 - - - - - -] [instance: 10d9de49-07bc-45e5-8935-d36bcbef1c0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:36:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:36:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:36:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:36:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:36:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031559708263539923 of space, bias 1.0, pg target 0.9467912479061977 quantized to 32 (current 32)
Jan 23 04:36:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:36:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 04:36:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:36:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:36:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:36:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 04:36:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:36:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:36:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:36:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:36:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:36:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:36:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:36:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:36:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:36:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:36:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:36:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:36:46 np0005593232 podman[273034]: 2026-01-23 09:36:46.577355574 +0000 UTC m=+0.024983898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:36:46 np0005593232 podman[273034]: 2026-01-23 09:36:46.727621141 +0000 UTC m=+0.175249455 container create 4d23dd36d19afad9ed8e48ee1f9da7089d2e8fe507313b1ffd0b1205a8ed0453 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 04:36:46 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:36:46 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:36:46 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:36:46 np0005593232 systemd[1]: Started libpod-conmon-4d23dd36d19afad9ed8e48ee1f9da7089d2e8fe507313b1ffd0b1205a8ed0453.scope.
Jan 23 04:36:46 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:36:46 np0005593232 podman[273034]: 2026-01-23 09:36:46.817826886 +0000 UTC m=+0.265455200 container init 4d23dd36d19afad9ed8e48ee1f9da7089d2e8fe507313b1ffd0b1205a8ed0453 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mclaren, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 04:36:46 np0005593232 podman[273034]: 2026-01-23 09:36:46.828694644 +0000 UTC m=+0.276322958 container start 4d23dd36d19afad9ed8e48ee1f9da7089d2e8fe507313b1ffd0b1205a8ed0453 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:36:46 np0005593232 podman[273034]: 2026-01-23 09:36:46.832215983 +0000 UTC m=+0.279844287 container attach 4d23dd36d19afad9ed8e48ee1f9da7089d2e8fe507313b1ffd0b1205a8ed0453 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:36:46 np0005593232 laughing_mclaren[273051]: 167 167
Jan 23 04:36:46 np0005593232 systemd[1]: libpod-4d23dd36d19afad9ed8e48ee1f9da7089d2e8fe507313b1ffd0b1205a8ed0453.scope: Deactivated successfully.
Jan 23 04:36:46 np0005593232 podman[273034]: 2026-01-23 09:36:46.835134726 +0000 UTC m=+0.282763030 container died 4d23dd36d19afad9ed8e48ee1f9da7089d2e8fe507313b1ffd0b1205a8ed0453 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 23 04:36:46 np0005593232 systemd[1]: var-lib-containers-storage-overlay-13dd1b95efe82ead0c674dde95c7762656d444de8e6657709ca3b3178af3a67e-merged.mount: Deactivated successfully.
Jan 23 04:36:46 np0005593232 podman[273034]: 2026-01-23 09:36:46.89248198 +0000 UTC m=+0.340110284 container remove 4d23dd36d19afad9ed8e48ee1f9da7089d2e8fe507313b1ffd0b1205a8ed0453 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 04:36:46 np0005593232 systemd[1]: libpod-conmon-4d23dd36d19afad9ed8e48ee1f9da7089d2e8fe507313b1ffd0b1205a8ed0453.scope: Deactivated successfully.
Jan 23 04:36:46 np0005593232 nova_compute[250269]: 2026-01-23 09:36:46.941 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:47 np0005593232 podman[273076]: 2026-01-23 09:36:47.067123907 +0000 UTC m=+0.043904104 container create 8dec9f607e30c579eb0d27f48fae066a87527014766fea03dbbca2dd6998a969 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jennings, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 23 04:36:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1277: 321 pgs: 321 active+clean; 167 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 55 KiB/s rd, 2.8 MiB/s wr, 58 op/s
Jan 23 04:36:47 np0005593232 systemd[1]: Started libpod-conmon-8dec9f607e30c579eb0d27f48fae066a87527014766fea03dbbca2dd6998a969.scope.
Jan 23 04:36:47 np0005593232 podman[273076]: 2026-01-23 09:36:47.045131604 +0000 UTC m=+0.021911821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:36:47 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:36:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cd8ad90f45ba98671241cbe7bd8af7f795c83cef2fc09d8caac542e5c4e74c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:36:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cd8ad90f45ba98671241cbe7bd8af7f795c83cef2fc09d8caac542e5c4e74c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:36:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cd8ad90f45ba98671241cbe7bd8af7f795c83cef2fc09d8caac542e5c4e74c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:36:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cd8ad90f45ba98671241cbe7bd8af7f795c83cef2fc09d8caac542e5c4e74c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:36:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cd8ad90f45ba98671241cbe7bd8af7f795c83cef2fc09d8caac542e5c4e74c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:36:47 np0005593232 podman[273076]: 2026-01-23 09:36:47.168739385 +0000 UTC m=+0.145519592 container init 8dec9f607e30c579eb0d27f48fae066a87527014766fea03dbbca2dd6998a969 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:36:47 np0005593232 podman[273076]: 2026-01-23 09:36:47.175521758 +0000 UTC m=+0.152301955 container start 8dec9f607e30c579eb0d27f48fae066a87527014766fea03dbbca2dd6998a969 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:36:47 np0005593232 podman[273076]: 2026-01-23 09:36:47.178963555 +0000 UTC m=+0.155743742 container attach 8dec9f607e30c579eb0d27f48fae066a87527014766fea03dbbca2dd6998a969 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:36:47 np0005593232 nova_compute[250269]: 2026-01-23 09:36:47.305 250273 INFO nova.virt.libvirt.driver [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Jan 23 04:36:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:47.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:48 np0005593232 cranky_jennings[273092]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:36:48 np0005593232 cranky_jennings[273092]: --> relative data size: 1.0
Jan 23 04:36:48 np0005593232 cranky_jennings[273092]: --> All data devices are unavailable
Jan 23 04:36:48 np0005593232 systemd[1]: libpod-8dec9f607e30c579eb0d27f48fae066a87527014766fea03dbbca2dd6998a969.scope: Deactivated successfully.
Jan 23 04:36:48 np0005593232 podman[273076]: 2026-01-23 09:36:48.042424562 +0000 UTC m=+1.019204769 container died 8dec9f607e30c579eb0d27f48fae066a87527014766fea03dbbca2dd6998a969 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 23 04:36:48 np0005593232 systemd[1]: var-lib-containers-storage-overlay-00cd8ad90f45ba98671241cbe7bd8af7f795c83cef2fc09d8caac542e5c4e74c-merged.mount: Deactivated successfully.
Jan 23 04:36:48 np0005593232 podman[273076]: 2026-01-23 09:36:48.097045729 +0000 UTC m=+1.073825926 container remove 8dec9f607e30c579eb0d27f48fae066a87527014766fea03dbbca2dd6998a969 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:36:48 np0005593232 systemd[1]: libpod-conmon-8dec9f607e30c579eb0d27f48fae066a87527014766fea03dbbca2dd6998a969.scope: Deactivated successfully.
Jan 23 04:36:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:48.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:48 np0005593232 nova_compute[250269]: 2026-01-23 09:36:48.315 250273 INFO nova.virt.libvirt.driver [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Jan 23 04:36:48 np0005593232 nova_compute[250269]: 2026-01-23 09:36:48.321 250273 DEBUG nova.compute.manager [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:36:48 np0005593232 nova_compute[250269]: 2026-01-23 09:36:48.362 250273 DEBUG nova.objects.instance [None req-91d1a2e7-3db5-44bc-a87f-8b8335c98ccb 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 23 04:36:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:36:48 np0005593232 podman[273261]: 2026-01-23 09:36:48.704276857 +0000 UTC m=+0.024709210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:36:48 np0005593232 podman[273261]: 2026-01-23 09:36:48.818220775 +0000 UTC m=+0.138653108 container create 425279e0db63f98a61bc93143563f216104830176fb340e3751a4b061500ffae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:36:48 np0005593232 systemd[1]: Started libpod-conmon-425279e0db63f98a61bc93143563f216104830176fb340e3751a4b061500ffae.scope.
Jan 23 04:36:48 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:36:48 np0005593232 podman[273261]: 2026-01-23 09:36:48.908365199 +0000 UTC m=+0.228797542 container init 425279e0db63f98a61bc93143563f216104830176fb340e3751a4b061500ffae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:36:48 np0005593232 podman[273261]: 2026-01-23 09:36:48.914882913 +0000 UTC m=+0.235315246 container start 425279e0db63f98a61bc93143563f216104830176fb340e3751a4b061500ffae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:36:48 np0005593232 zen_hodgkin[273277]: 167 167
Jan 23 04:36:48 np0005593232 podman[273261]: 2026-01-23 09:36:48.918618629 +0000 UTC m=+0.239050992 container attach 425279e0db63f98a61bc93143563f216104830176fb340e3751a4b061500ffae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hodgkin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 23 04:36:48 np0005593232 systemd[1]: libpod-425279e0db63f98a61bc93143563f216104830176fb340e3751a4b061500ffae.scope: Deactivated successfully.
Jan 23 04:36:48 np0005593232 conmon[273277]: conmon 425279e0db63f98a61bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-425279e0db63f98a61bc93143563f216104830176fb340e3751a4b061500ffae.scope/container/memory.events
Jan 23 04:36:48 np0005593232 podman[273261]: 2026-01-23 09:36:48.920158063 +0000 UTC m=+0.240590396 container died 425279e0db63f98a61bc93143563f216104830176fb340e3751a4b061500ffae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hodgkin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:36:48 np0005593232 systemd[1]: var-lib-containers-storage-overlay-137c6ec5949d37cff34ef43f7dfabab5e484c24908dc0cba6b3ea0589354f2fa-merged.mount: Deactivated successfully.
Jan 23 04:36:48 np0005593232 nova_compute[250269]: 2026-01-23 09:36:48.946 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:48 np0005593232 podman[273261]: 2026-01-23 09:36:48.954361922 +0000 UTC m=+0.274794255 container remove 425279e0db63f98a61bc93143563f216104830176fb340e3751a4b061500ffae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 04:36:48 np0005593232 systemd[1]: libpod-conmon-425279e0db63f98a61bc93143563f216104830176fb340e3751a4b061500ffae.scope: Deactivated successfully.
Jan 23 04:36:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1278: 321 pgs: 321 active+clean; 167 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 34 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Jan 23 04:36:49 np0005593232 podman[273300]: 2026-01-23 09:36:49.164075002 +0000 UTC m=+0.047047064 container create e1fc762593387f83f461993c98b11856760537c1ca1b0be08a8e435460317b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:36:49 np0005593232 systemd[1]: Started libpod-conmon-e1fc762593387f83f461993c98b11856760537c1ca1b0be08a8e435460317b51.scope.
Jan 23 04:36:49 np0005593232 podman[273300]: 2026-01-23 09:36:49.146145374 +0000 UTC m=+0.029117466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:36:49 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:36:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb6af6abff66b69d00b829cdc87853e1101d174e3cfeab5908d4173ad0e12c9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:36:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb6af6abff66b69d00b829cdc87853e1101d174e3cfeab5908d4173ad0e12c9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:36:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb6af6abff66b69d00b829cdc87853e1101d174e3cfeab5908d4173ad0e12c9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:36:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb6af6abff66b69d00b829cdc87853e1101d174e3cfeab5908d4173ad0e12c9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:36:49 np0005593232 podman[273300]: 2026-01-23 09:36:49.259846754 +0000 UTC m=+0.142818856 container init e1fc762593387f83f461993c98b11856760537c1ca1b0be08a8e435460317b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brahmagupta, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:36:49 np0005593232 podman[273300]: 2026-01-23 09:36:49.266212135 +0000 UTC m=+0.149184207 container start e1fc762593387f83f461993c98b11856760537c1ca1b0be08a8e435460317b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 04:36:49 np0005593232 podman[273300]: 2026-01-23 09:36:49.270550158 +0000 UTC m=+0.153522250 container attach e1fc762593387f83f461993c98b11856760537c1ca1b0be08a8e435460317b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brahmagupta, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:36:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:36:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:49.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]: {
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:    "0": [
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:        {
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:            "devices": [
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:                "/dev/loop3"
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:            ],
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:            "lv_name": "ceph_lv0",
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:            "lv_size": "7511998464",
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:            "name": "ceph_lv0",
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:            "tags": {
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:                "ceph.cluster_name": "ceph",
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:                "ceph.crush_device_class": "",
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:                "ceph.encrypted": "0",
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:                "ceph.osd_id": "0",
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:                "ceph.type": "block",
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:                "ceph.vdo": "0"
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:            },
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:            "type": "block",
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:            "vg_name": "ceph_vg0"
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:        }
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]:    ]
Jan 23 04:36:50 np0005593232 confident_brahmagupta[273317]: }
Jan 23 04:36:50 np0005593232 systemd[1]: libpod-e1fc762593387f83f461993c98b11856760537c1ca1b0be08a8e435460317b51.scope: Deactivated successfully.
Jan 23 04:36:50 np0005593232 conmon[273317]: conmon e1fc762593387f83f461 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e1fc762593387f83f461993c98b11856760537c1ca1b0be08a8e435460317b51.scope/container/memory.events
Jan 23 04:36:50 np0005593232 podman[273300]: 2026-01-23 09:36:50.067280265 +0000 UTC m=+0.950252337 container died e1fc762593387f83f461993c98b11856760537c1ca1b0be08a8e435460317b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 04:36:50 np0005593232 systemd[1]: var-lib-containers-storage-overlay-fb6af6abff66b69d00b829cdc87853e1101d174e3cfeab5908d4173ad0e12c9d-merged.mount: Deactivated successfully.
Jan 23 04:36:50 np0005593232 podman[273300]: 2026-01-23 09:36:50.121713537 +0000 UTC m=+1.004685609 container remove e1fc762593387f83f461993c98b11856760537c1ca1b0be08a8e435460317b51 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:36:50 np0005593232 systemd[1]: libpod-conmon-e1fc762593387f83f461993c98b11856760537c1ca1b0be08a8e435460317b51.scope: Deactivated successfully.
Jan 23 04:36:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Jan 23 04:36:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Jan 23 04:36:50 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Jan 23 04:36:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:50.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:50 np0005593232 podman[273479]: 2026-01-23 09:36:50.69482678 +0000 UTC m=+0.040069466 container create 85765bd69e8abeafd8f9fc7c1934eaa86dcac14e142b21f4e57e0f9a4868d653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chebyshev, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 04:36:50 np0005593232 systemd[1]: Started libpod-conmon-85765bd69e8abeafd8f9fc7c1934eaa86dcac14e142b21f4e57e0f9a4868d653.scope.
Jan 23 04:36:50 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:36:50 np0005593232 podman[273479]: 2026-01-23 09:36:50.677136739 +0000 UTC m=+0.022379445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:36:50 np0005593232 podman[273479]: 2026-01-23 09:36:50.772950703 +0000 UTC m=+0.118193399 container init 85765bd69e8abeafd8f9fc7c1934eaa86dcac14e142b21f4e57e0f9a4868d653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chebyshev, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:36:50 np0005593232 podman[273479]: 2026-01-23 09:36:50.778791299 +0000 UTC m=+0.124033985 container start 85765bd69e8abeafd8f9fc7c1934eaa86dcac14e142b21f4e57e0f9a4868d653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:36:50 np0005593232 upbeat_chebyshev[273495]: 167 167
Jan 23 04:36:50 np0005593232 systemd[1]: libpod-85765bd69e8abeafd8f9fc7c1934eaa86dcac14e142b21f4e57e0f9a4868d653.scope: Deactivated successfully.
Jan 23 04:36:50 np0005593232 podman[273479]: 2026-01-23 09:36:50.783561414 +0000 UTC m=+0.128804100 container attach 85765bd69e8abeafd8f9fc7c1934eaa86dcac14e142b21f4e57e0f9a4868d653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 23 04:36:50 np0005593232 podman[273479]: 2026-01-23 09:36:50.783736589 +0000 UTC m=+0.128979275 container died 85765bd69e8abeafd8f9fc7c1934eaa86dcac14e142b21f4e57e0f9a4868d653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chebyshev, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:36:50 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b5e5a7474fb91ea3d0935c57bd44629b81f7d0e5d250a9e6195db5ab6ffac0e6-merged.mount: Deactivated successfully.
Jan 23 04:36:50 np0005593232 podman[273479]: 2026-01-23 09:36:50.821328364 +0000 UTC m=+0.166571050 container remove 85765bd69e8abeafd8f9fc7c1934eaa86dcac14e142b21f4e57e0f9a4868d653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_chebyshev, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:36:50 np0005593232 systemd[1]: libpod-conmon-85765bd69e8abeafd8f9fc7c1934eaa86dcac14e142b21f4e57e0f9a4868d653.scope: Deactivated successfully.
Jan 23 04:36:50 np0005593232 podman[273520]: 2026-01-23 09:36:50.996986789 +0000 UTC m=+0.041798885 container create cbd70795c661ae980c8dd3b93b1a4e07d293eabbcabcb6b3a06a67d7e473f0e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chatterjee, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 04:36:51 np0005593232 systemd[1]: Started libpod-conmon-cbd70795c661ae980c8dd3b93b1a4e07d293eabbcabcb6b3a06a67d7e473f0e1.scope.
Jan 23 04:36:51 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:36:51 np0005593232 podman[273520]: 2026-01-23 09:36:50.979966217 +0000 UTC m=+0.024778333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:36:51 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fe41ddc124b202bcba7de8b1adf4c84e4d31de9673ca9b3d459d306abcf6cd1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:36:51 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fe41ddc124b202bcba7de8b1adf4c84e4d31de9673ca9b3d459d306abcf6cd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:36:51 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fe41ddc124b202bcba7de8b1adf4c84e4d31de9673ca9b3d459d306abcf6cd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:36:51 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fe41ddc124b202bcba7de8b1adf4c84e4d31de9673ca9b3d459d306abcf6cd1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:36:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1280: 321 pgs: 321 active+clean; 167 MiB data, 406 MiB used, 21 GiB / 21 GiB avail; 13 KiB/s rd, 2.1 KiB/s wr, 18 op/s
Jan 23 04:36:51 np0005593232 podman[273520]: 2026-01-23 09:36:51.178331805 +0000 UTC m=+0.223143981 container init cbd70795c661ae980c8dd3b93b1a4e07d293eabbcabcb6b3a06a67d7e473f0e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chatterjee, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:36:51 np0005593232 podman[273520]: 2026-01-23 09:36:51.185656563 +0000 UTC m=+0.230468699 container start cbd70795c661ae980c8dd3b93b1a4e07d293eabbcabcb6b3a06a67d7e473f0e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:36:51 np0005593232 podman[273520]: 2026-01-23 09:36:51.224458372 +0000 UTC m=+0.269270478 container attach cbd70795c661ae980c8dd3b93b1a4e07d293eabbcabcb6b3a06a67d7e473f0e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chatterjee, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:36:51 np0005593232 nova_compute[250269]: 2026-01-23 09:36:51.943 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:51.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:52 np0005593232 stoic_chatterjee[273537]: {
Jan 23 04:36:52 np0005593232 stoic_chatterjee[273537]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:36:52 np0005593232 stoic_chatterjee[273537]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:36:52 np0005593232 stoic_chatterjee[273537]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:36:52 np0005593232 stoic_chatterjee[273537]:        "osd_id": 0,
Jan 23 04:36:52 np0005593232 stoic_chatterjee[273537]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:36:52 np0005593232 stoic_chatterjee[273537]:        "type": "bluestore"
Jan 23 04:36:52 np0005593232 stoic_chatterjee[273537]:    }
Jan 23 04:36:52 np0005593232 stoic_chatterjee[273537]: }
Jan 23 04:36:52 np0005593232 systemd[1]: libpod-cbd70795c661ae980c8dd3b93b1a4e07d293eabbcabcb6b3a06a67d7e473f0e1.scope: Deactivated successfully.
Jan 23 04:36:52 np0005593232 podman[273520]: 2026-01-23 09:36:52.05337144 +0000 UTC m=+1.098183556 container died cbd70795c661ae980c8dd3b93b1a4e07d293eabbcabcb6b3a06a67d7e473f0e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chatterjee, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:36:52 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4fe41ddc124b202bcba7de8b1adf4c84e4d31de9673ca9b3d459d306abcf6cd1-merged.mount: Deactivated successfully.
Jan 23 04:36:52 np0005593232 podman[273520]: 2026-01-23 09:36:52.117794105 +0000 UTC m=+1.162606201 container remove cbd70795c661ae980c8dd3b93b1a4e07d293eabbcabcb6b3a06a67d7e473f0e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_chatterjee, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:36:52 np0005593232 systemd[1]: libpod-conmon-cbd70795c661ae980c8dd3b93b1a4e07d293eabbcabcb6b3a06a67d7e473f0e1.scope: Deactivated successfully.
Jan 23 04:36:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:36:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:36:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:36:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:36:52 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 4bbb4583-1f28-49a6-bfcd-c63dad7d3981 does not exist
Jan 23 04:36:52 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 4ffbdd15-5200-416c-be90-11f10b2e24e2 does not exist
Jan 23 04:36:52 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 99f86e38-5411-49f1-a99e-02b5971359f1 does not exist
Jan 23 04:36:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:36:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:36:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:52.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1281: 321 pgs: 321 active+clean; 167 MiB data, 423 MiB used, 21 GiB / 21 GiB avail; 33 KiB/s rd, 3.6 KiB/s wr, 46 op/s
Jan 23 04:36:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:36:53 np0005593232 nova_compute[250269]: 2026-01-23 09:36:53.980 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:53.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:54.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1282: 321 pgs: 321 active+clean; 167 MiB data, 423 MiB used, 21 GiB / 21 GiB avail; 29 KiB/s rd, 3.1 KiB/s wr, 39 op/s
Jan 23 04:36:55 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Jan 23 04:36:55 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:36:55.904259) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:36:55 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Jan 23 04:36:55 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161015904404, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2406, "num_deletes": 505, "total_data_size": 3658907, "memory_usage": 3726576, "flush_reason": "Manual Compaction"}
Jan 23 04:36:55 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Jan 23 04:36:55 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161015948063, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3579232, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26617, "largest_seqno": 29022, "table_properties": {"data_size": 3568832, "index_size": 6126, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3205, "raw_key_size": 25701, "raw_average_key_size": 20, "raw_value_size": 3545803, "raw_average_value_size": 2820, "num_data_blocks": 266, "num_entries": 1257, "num_filter_entries": 1257, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769160824, "oldest_key_time": 1769160824, "file_creation_time": 1769161015, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:36:55 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 43851 microseconds, and 10794 cpu microseconds.
Jan 23 04:36:55 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:36:55 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:36:55.948153) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3579232 bytes OK
Jan 23 04:36:55 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:36:55.948193) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Jan 23 04:36:55 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:36:55.950783) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Jan 23 04:36:55 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:36:55.950850) EVENT_LOG_v1 {"time_micros": 1769161015950838, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 04:36:55 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:36:55.950912) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 04:36:55 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3647884, prev total WAL file size 3647884, number of live WAL files 2.
Jan 23 04:36:55 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:36:55 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:36:55.952341) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 23 04:36:55 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 04:36:55 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3495KB)], [62(10MB)]
Jan 23 04:36:55 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161015952463, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 14654412, "oldest_snapshot_seqno": -1}
Jan 23 04:36:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:56.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:56 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5361 keys, 8870157 bytes, temperature: kUnknown
Jan 23 04:36:56 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161016070689, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 8870157, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8834244, "index_size": 21403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 136737, "raw_average_key_size": 25, "raw_value_size": 8737487, "raw_average_value_size": 1629, "num_data_blocks": 864, "num_entries": 5361, "num_filter_entries": 5361, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769161015, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:36:56 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:36:56 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:36:56.071050) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 8870157 bytes
Jan 23 04:36:56 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:36:56.072779) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 123.8 rd, 75.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 10.6 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(6.6) write-amplify(2.5) OK, records in: 6388, records dropped: 1027 output_compression: NoCompression
Jan 23 04:36:56 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:36:56.072829) EVENT_LOG_v1 {"time_micros": 1769161016072818, "job": 34, "event": "compaction_finished", "compaction_time_micros": 118326, "compaction_time_cpu_micros": 30682, "output_level": 6, "num_output_files": 1, "total_output_size": 8870157, "num_input_records": 6388, "num_output_records": 5361, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:36:56 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:36:56 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161016073758, "job": 34, "event": "table_file_deletion", "file_number": 64}
Jan 23 04:36:56 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:36:56 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161016076252, "job": 34, "event": "table_file_deletion", "file_number": 62}
Jan 23 04:36:56 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:36:55.952209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:36:56 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:36:56.076325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:36:56 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:36:56.076330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:36:56 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:36:56.076332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:36:56 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:36:56.076334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:36:56 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:36:56.076335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:36:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:36:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:56.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:36:56 np0005593232 nova_compute[250269]: 2026-01-23 09:36:56.945 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1283: 321 pgs: 321 active+clean; 167 MiB data, 423 MiB used, 21 GiB / 21 GiB avail; 22 KiB/s rd, 2.9 KiB/s wr, 30 op/s
Jan 23 04:36:57 np0005593232 podman[273673]: 2026-01-23 09:36:57.429104417 +0000 UTC m=+0.088062595 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:36:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:36:58.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:36:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:36:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:36:58.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:36:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:36:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Jan 23 04:36:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Jan 23 04:36:58 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Jan 23 04:36:58 np0005593232 nova_compute[250269]: 2026-01-23 09:36:58.985 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:36:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1285: 321 pgs: 321 active+clean; 167 MiB data, 423 MiB used, 21 GiB / 21 GiB avail; 25 KiB/s rd, 22 KiB/s wr, 35 op/s
Jan 23 04:37:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:37:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:00.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:37:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:00.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Jan 23 04:37:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Jan 23 04:37:00 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Jan 23 04:37:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1287: 321 pgs: 321 active+clean; 167 MiB data, 423 MiB used, 21 GiB / 21 GiB avail; 4.7 KiB/s rd, 23 KiB/s wr, 8 op/s
Jan 23 04:37:01 np0005593232 nova_compute[250269]: 2026-01-23 09:37:01.946 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:02.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:02.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:02 np0005593232 nova_compute[250269]: 2026-01-23 09:37:02.774 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:37:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1288: 321 pgs: 321 active+clean; 167 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 25 KiB/s wr, 130 op/s
Jan 23 04:37:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:37:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:04.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:04 np0005593232 nova_compute[250269]: 2026-01-23 09:37:04.020 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:04 np0005593232 nova_compute[250269]: 2026-01-23 09:37:04.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:37:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:04.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1289: 321 pgs: 321 active+clean; 167 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 23 KiB/s wr, 125 op/s
Jan 23 04:37:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:06.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:37:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:06.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:37:06 np0005593232 nova_compute[250269]: 2026-01-23 09:37:06.947 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1290: 321 pgs: 321 active+clean; 167 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 5.2 MiB/s rd, 5.4 KiB/s wr, 127 op/s
Jan 23 04:37:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:37:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:37:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:37:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:37:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:37:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:37:07 np0005593232 nova_compute[250269]: 2026-01-23 09:37:07.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:37:07 np0005593232 nova_compute[250269]: 2026-01-23 09:37:07.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:37:07 np0005593232 nova_compute[250269]: 2026-01-23 09:37:07.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:37:07 np0005593232 nova_compute[250269]: 2026-01-23 09:37:07.754 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-5cea9bfc-e97a-4d07-a251-8ca3978b5f98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:37:07 np0005593232 nova_compute[250269]: 2026-01-23 09:37:07.754 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-5cea9bfc-e97a-4d07-a251-8ca3978b5f98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:37:07 np0005593232 nova_compute[250269]: 2026-01-23 09:37:07.754 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 04:37:07 np0005593232 nova_compute[250269]: 2026-01-23 09:37:07.754 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5cea9bfc-e97a-4d07-a251-8ca3978b5f98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:37:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:08.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:08.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:37:09 np0005593232 nova_compute[250269]: 2026-01-23 09:37:09.022 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1291: 321 pgs: 321 active+clean; 167 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 4.4 MiB/s rd, 4.9 KiB/s wr, 107 op/s
Jan 23 04:37:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:10.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:10.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:10 np0005593232 podman[273705]: 2026-01-23 09:37:10.40160035 +0000 UTC m=+0.060842704 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 04:37:10 np0005593232 nova_compute[250269]: 2026-01-23 09:37:10.883 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Updating instance_info_cache with network_info: [{"id": "a19a3bde-2463-4f15-afe7-f8df8c608bb7", "address": "fa:16:3e:7b:ec:dc", "network": {"id": "8eab8076-0848-4daf-bbac-f3f8b65ca750", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1369330040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0dce6e339c349d4ab97cee5e49fff3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19a3bde-24", "ovs_interfaceid": "a19a3bde-2463-4f15-afe7-f8df8c608bb7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:37:10 np0005593232 nova_compute[250269]: 2026-01-23 09:37:10.908 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-5cea9bfc-e97a-4d07-a251-8ca3978b5f98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:37:10 np0005593232 nova_compute[250269]: 2026-01-23 09:37:10.908 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 04:37:10 np0005593232 nova_compute[250269]: 2026-01-23 09:37:10.909 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:37:10 np0005593232 nova_compute[250269]: 2026-01-23 09:37:10.909 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:37:10 np0005593232 nova_compute[250269]: 2026-01-23 09:37:10.909 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:37:10 np0005593232 nova_compute[250269]: 2026-01-23 09:37:10.909 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:37:10 np0005593232 nova_compute[250269]: 2026-01-23 09:37:10.910 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:37:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1292: 321 pgs: 321 active+clean; 167 MiB data, 424 MiB used, 21 GiB / 21 GiB avail; 4.3 MiB/s rd, 4.8 KiB/s wr, 105 op/s
Jan 23 04:37:11 np0005593232 nova_compute[250269]: 2026-01-23 09:37:11.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:37:11 np0005593232 nova_compute[250269]: 2026-01-23 09:37:11.369 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:37:11 np0005593232 nova_compute[250269]: 2026-01-23 09:37:11.369 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:37:11 np0005593232 nova_compute[250269]: 2026-01-23 09:37:11.398 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:37:11 np0005593232 nova_compute[250269]: 2026-01-23 09:37:11.398 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:37:11 np0005593232 nova_compute[250269]: 2026-01-23 09:37:11.399 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:37:11 np0005593232 nova_compute[250269]: 2026-01-23 09:37:11.399 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:37:11 np0005593232 nova_compute[250269]: 2026-01-23 09:37:11.399 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:37:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:37:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3596787790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:37:11 np0005593232 nova_compute[250269]: 2026-01-23 09:37:11.869 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:37:11 np0005593232 nova_compute[250269]: 2026-01-23 09:37:11.950 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:11 np0005593232 nova_compute[250269]: 2026-01-23 09:37:11.958 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:37:11 np0005593232 nova_compute[250269]: 2026-01-23 09:37:11.958 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:37:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:12.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:12 np0005593232 nova_compute[250269]: 2026-01-23 09:37:12.134 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:37:12 np0005593232 nova_compute[250269]: 2026-01-23 09:37:12.135 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4504MB free_disk=20.921859741210938GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:37:12 np0005593232 nova_compute[250269]: 2026-01-23 09:37:12.136 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:37:12 np0005593232 nova_compute[250269]: 2026-01-23 09:37:12.136 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:37:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:12.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:12 np0005593232 nova_compute[250269]: 2026-01-23 09:37:12.298 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 5cea9bfc-e97a-4d07-a251-8ca3978b5f98 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:37:12 np0005593232 nova_compute[250269]: 2026-01-23 09:37:12.299 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:37:12 np0005593232 nova_compute[250269]: 2026-01-23 09:37:12.299 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:37:12 np0005593232 nova_compute[250269]: 2026-01-23 09:37:12.362 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:37:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:37:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/355159754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:37:12 np0005593232 nova_compute[250269]: 2026-01-23 09:37:12.809 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:37:12 np0005593232 nova_compute[250269]: 2026-01-23 09:37:12.816 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:37:12 np0005593232 nova_compute[250269]: 2026-01-23 09:37:12.841 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:37:12 np0005593232 nova_compute[250269]: 2026-01-23 09:37:12.867 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:37:12 np0005593232 nova_compute[250269]: 2026-01-23 09:37:12.868 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:37:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1293: 321 pgs: 321 active+clean; 245 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.9 MiB/s wr, 179 op/s
Jan 23 04:37:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:37:14 np0005593232 nova_compute[250269]: 2026-01-23 09:37:14.024 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:14.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:14.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1294: 321 pgs: 321 active+clean; 245 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 910 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Jan 23 04:37:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:37:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:16.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:37:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:37:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:16.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:37:16 np0005593232 nova_compute[250269]: 2026-01-23 09:37:16.953 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1295: 321 pgs: 321 active+clean; 246 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 919 KiB/s rd, 3.9 MiB/s wr, 100 op/s
Jan 23 04:37:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:17.253 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:37:17 np0005593232 nova_compute[250269]: 2026-01-23 09:37:17.253 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:17.256 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:37:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:18.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:18.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:37:19 np0005593232 nova_compute[250269]: 2026-01-23 09:37:19.025 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1296: 321 pgs: 321 active+clean; 246 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 232 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Jan 23 04:37:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:20.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:20.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1297: 321 pgs: 321 active+clean; 246 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 232 KiB/s rd, 3.9 MiB/s wr, 93 op/s
Jan 23 04:37:21 np0005593232 nova_compute[250269]: 2026-01-23 09:37:21.955 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:22.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:22.259 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:37:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:22.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:22 np0005593232 nova_compute[250269]: 2026-01-23 09:37:22.334 250273 DEBUG oslo_concurrency.lockutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Acquiring lock "ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:37:22 np0005593232 nova_compute[250269]: 2026-01-23 09:37:22.334 250273 DEBUG oslo_concurrency.lockutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Lock "ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:37:22 np0005593232 nova_compute[250269]: 2026-01-23 09:37:22.359 250273 DEBUG nova.compute.manager [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:37:22 np0005593232 nova_compute[250269]: 2026-01-23 09:37:22.514 250273 DEBUG oslo_concurrency.lockutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:37:22 np0005593232 nova_compute[250269]: 2026-01-23 09:37:22.515 250273 DEBUG oslo_concurrency.lockutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:37:22 np0005593232 nova_compute[250269]: 2026-01-23 09:37:22.521 250273 DEBUG nova.virt.hardware [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:37:22 np0005593232 nova_compute[250269]: 2026-01-23 09:37:22.522 250273 INFO nova.compute.claims [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:37:22 np0005593232 nova_compute[250269]: 2026-01-23 09:37:22.700 250273 DEBUG oslo_concurrency.processutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:37:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:37:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3429884606' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:37:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1298: 321 pgs: 321 active+clean; 293 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 249 KiB/s rd, 5.7 MiB/s wr, 122 op/s
Jan 23 04:37:23 np0005593232 nova_compute[250269]: 2026-01-23 09:37:23.140 250273 DEBUG oslo_concurrency.processutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:37:23 np0005593232 nova_compute[250269]: 2026-01-23 09:37:23.148 250273 DEBUG nova.compute.provider_tree [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:37:23 np0005593232 nova_compute[250269]: 2026-01-23 09:37:23.505 250273 DEBUG nova.scheduler.client.report [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:37:23 np0005593232 nova_compute[250269]: 2026-01-23 09:37:23.572 250273 DEBUG oslo_concurrency.lockutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:37:23 np0005593232 nova_compute[250269]: 2026-01-23 09:37:23.573 250273 DEBUG nova.compute.manager [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 04:37:23 np0005593232 nova_compute[250269]: 2026-01-23 09:37:23.656 250273 DEBUG nova.compute.manager [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 04:37:23 np0005593232 nova_compute[250269]: 2026-01-23 09:37:23.657 250273 DEBUG nova.network.neutron [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:37:23 np0005593232 nova_compute[250269]: 2026-01-23 09:37:23.684 250273 INFO nova.virt.libvirt.driver [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:37:23 np0005593232 nova_compute[250269]: 2026-01-23 09:37:23.717 250273 DEBUG nova.compute.manager [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:37:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:37:23 np0005593232 nova_compute[250269]: 2026-01-23 09:37:23.891 250273 DEBUG nova.compute.manager [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:37:23 np0005593232 nova_compute[250269]: 2026-01-23 09:37:23.892 250273 DEBUG nova.virt.libvirt.driver [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:37:23 np0005593232 nova_compute[250269]: 2026-01-23 09:37:23.892 250273 INFO nova.virt.libvirt.driver [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Creating image(s)#033[00m
Jan 23 04:37:23 np0005593232 nova_compute[250269]: 2026-01-23 09:37:23.920 250273 DEBUG nova.storage.rbd_utils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] rbd image ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:37:23 np0005593232 nova_compute[250269]: 2026-01-23 09:37:23.950 250273 DEBUG nova.storage.rbd_utils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] rbd image ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:37:23 np0005593232 nova_compute[250269]: 2026-01-23 09:37:23.981 250273 DEBUG nova.storage.rbd_utils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] rbd image ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:37:23 np0005593232 nova_compute[250269]: 2026-01-23 09:37:23.984 250273 DEBUG oslo_concurrency.processutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:37:24 np0005593232 nova_compute[250269]: 2026-01-23 09:37:24.027 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:24.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:24 np0005593232 nova_compute[250269]: 2026-01-23 09:37:24.047 250273 DEBUG oslo_concurrency.processutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:37:24 np0005593232 nova_compute[250269]: 2026-01-23 09:37:24.048 250273 DEBUG oslo_concurrency.lockutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:37:24 np0005593232 nova_compute[250269]: 2026-01-23 09:37:24.048 250273 DEBUG oslo_concurrency.lockutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:37:24 np0005593232 nova_compute[250269]: 2026-01-23 09:37:24.049 250273 DEBUG oslo_concurrency.lockutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:37:24 np0005593232 nova_compute[250269]: 2026-01-23 09:37:24.076 250273 DEBUG nova.storage.rbd_utils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] rbd image ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:37:24 np0005593232 nova_compute[250269]: 2026-01-23 09:37:24.080 250273 DEBUG oslo_concurrency.processutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:37:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:24.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:24 np0005593232 nova_compute[250269]: 2026-01-23 09:37:24.535 250273 DEBUG oslo_concurrency.processutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:37:24 np0005593232 nova_compute[250269]: 2026-01-23 09:37:24.600 250273 DEBUG nova.policy [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '191a72cfd0a841e9806246e07eb62fa6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1a5f46b255cd4387bd3e4c0acaa39466', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 04:37:24 np0005593232 nova_compute[250269]: 2026-01-23 09:37:24.607 250273 DEBUG nova.storage.rbd_utils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] resizing rbd image ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 04:37:24 np0005593232 nova_compute[250269]: 2026-01-23 09:37:24.802 250273 DEBUG nova.objects.instance [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Lazy-loading 'migration_context' on Instance uuid ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:37:24 np0005593232 nova_compute[250269]: 2026-01-23 09:37:24.831 250273 DEBUG nova.virt.libvirt.driver [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:37:24 np0005593232 nova_compute[250269]: 2026-01-23 09:37:24.831 250273 DEBUG nova.virt.libvirt.driver [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Ensure instance console log exists: /var/lib/nova/instances/ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:37:24 np0005593232 nova_compute[250269]: 2026-01-23 09:37:24.832 250273 DEBUG oslo_concurrency.lockutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:37:24 np0005593232 nova_compute[250269]: 2026-01-23 09:37:24.832 250273 DEBUG oslo_concurrency.lockutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:37:24 np0005593232 nova_compute[250269]: 2026-01-23 09:37:24.833 250273 DEBUG oslo_concurrency.lockutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:37:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1299: 321 pgs: 321 active+clean; 293 MiB data, 491 MiB used, 21 GiB / 21 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 23 04:37:25 np0005593232 nova_compute[250269]: 2026-01-23 09:37:25.804 250273 DEBUG nova.network.neutron [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Successfully created port: 1b51b4db-a755-47c2-9d6b-f75e5cdb0204 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 04:37:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:26.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:26.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Jan 23 04:37:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Jan 23 04:37:26 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Jan 23 04:37:26 np0005593232 nova_compute[250269]: 2026-01-23 09:37:26.956 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1301: 321 pgs: 321 active+clean; 322 MiB data, 505 MiB used, 20 GiB / 21 GiB avail; 306 KiB/s rd, 3.5 MiB/s wr, 93 op/s
Jan 23 04:37:27 np0005593232 nova_compute[250269]: 2026-01-23 09:37:27.809 250273 DEBUG nova.network.neutron [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Successfully updated port: 1b51b4db-a755-47c2-9d6b-f75e5cdb0204 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 04:37:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:37:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:28.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:37:28 np0005593232 nova_compute[250269]: 2026-01-23 09:37:28.181 250273 DEBUG oslo_concurrency.lockutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Acquiring lock "refresh_cache-ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:37:28 np0005593232 nova_compute[250269]: 2026-01-23 09:37:28.181 250273 DEBUG oslo_concurrency.lockutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Acquired lock "refresh_cache-ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:37:28 np0005593232 nova_compute[250269]: 2026-01-23 09:37:28.182 250273 DEBUG nova.network.neutron [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:37:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:28.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:28 np0005593232 podman[274016]: 2026-01-23 09:37:28.42560672 +0000 UTC m=+0.090459273 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 04:37:28 np0005593232 nova_compute[250269]: 2026-01-23 09:37:28.482 250273 DEBUG nova.compute.manager [req-ccdd917c-0144-4085-bed4-d73df31141d8 req-2005a1cd-1cda-467b-bd5a-982575db4e66 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Received event network-changed-1b51b4db-a755-47c2-9d6b-f75e5cdb0204 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:37:28 np0005593232 nova_compute[250269]: 2026-01-23 09:37:28.482 250273 DEBUG nova.compute.manager [req-ccdd917c-0144-4085-bed4-d73df31141d8 req-2005a1cd-1cda-467b-bd5a-982575db4e66 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Refreshing instance network info cache due to event network-changed-1b51b4db-a755-47c2-9d6b-f75e5cdb0204. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:37:28 np0005593232 nova_compute[250269]: 2026-01-23 09:37:28.483 250273 DEBUG oslo_concurrency.lockutils [req-ccdd917c-0144-4085-bed4-d73df31141d8 req-2005a1cd-1cda-467b-bd5a-982575db4e66 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:37:28 np0005593232 nova_compute[250269]: 2026-01-23 09:37:28.642 250273 DEBUG nova.network.neutron [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:37:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:37:29 np0005593232 nova_compute[250269]: 2026-01-23 09:37:29.065 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1302: 321 pgs: 321 active+clean; 339 MiB data, 513 MiB used, 20 GiB / 21 GiB avail; 684 KiB/s rd, 4.3 MiB/s wr, 110 op/s
Jan 23 04:37:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:30.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.145 250273 DEBUG nova.network.neutron [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Updating instance_info_cache with network_info: [{"id": "1b51b4db-a755-47c2-9d6b-f75e5cdb0204", "address": "fa:16:3e:c8:6c:17", "network": {"id": "1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-62484463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5f46b255cd4387bd3e4c0acaa39466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b51b4db-a7", "ovs_interfaceid": "1b51b4db-a755-47c2-9d6b-f75e5cdb0204", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.204 250273 DEBUG oslo_concurrency.lockutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Releasing lock "refresh_cache-ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.204 250273 DEBUG nova.compute.manager [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Instance network_info: |[{"id": "1b51b4db-a755-47c2-9d6b-f75e5cdb0204", "address": "fa:16:3e:c8:6c:17", "network": {"id": "1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-62484463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5f46b255cd4387bd3e4c0acaa39466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b51b4db-a7", "ovs_interfaceid": "1b51b4db-a755-47c2-9d6b-f75e5cdb0204", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.205 250273 DEBUG oslo_concurrency.lockutils [req-ccdd917c-0144-4085-bed4-d73df31141d8 req-2005a1cd-1cda-467b-bd5a-982575db4e66 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.205 250273 DEBUG nova.network.neutron [req-ccdd917c-0144-4085-bed4-d73df31141d8 req-2005a1cd-1cda-467b-bd5a-982575db4e66 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Refreshing network info cache for port 1b51b4db-a755-47c2-9d6b-f75e5cdb0204 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.208 250273 DEBUG nova.virt.libvirt.driver [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Start _get_guest_xml network_info=[{"id": "1b51b4db-a755-47c2-9d6b-f75e5cdb0204", "address": "fa:16:3e:c8:6c:17", "network": {"id": "1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-62484463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5f46b255cd4387bd3e4c0acaa39466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b51b4db-a7", "ovs_interfaceid": "1b51b4db-a755-47c2-9d6b-f75e5cdb0204", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.214 250273 WARNING nova.virt.libvirt.driver [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.220 250273 DEBUG nova.virt.libvirt.host [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.220 250273 DEBUG nova.virt.libvirt.host [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.224 250273 DEBUG nova.virt.libvirt.host [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.225 250273 DEBUG nova.virt.libvirt.host [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.226 250273 DEBUG nova.virt.libvirt.driver [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.226 250273 DEBUG nova.virt.hardware [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.226 250273 DEBUG nova.virt.hardware [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.226 250273 DEBUG nova.virt.hardware [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.227 250273 DEBUG nova.virt.hardware [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.227 250273 DEBUG nova.virt.hardware [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.227 250273 DEBUG nova.virt.hardware [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.227 250273 DEBUG nova.virt.hardware [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.227 250273 DEBUG nova.virt.hardware [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.227 250273 DEBUG nova.virt.hardware [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.228 250273 DEBUG nova.virt.hardware [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.228 250273 DEBUG nova.virt.hardware [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.230 250273 DEBUG oslo_concurrency.processutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:37:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:30.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:37:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2612696128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.676 250273 DEBUG oslo_concurrency.processutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.702 250273 DEBUG nova.storage.rbd_utils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] rbd image ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:37:30 np0005593232 nova_compute[250269]: 2026-01-23 09:37:30.706 250273 DEBUG oslo_concurrency.processutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:37:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:37:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1998089347' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:37:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1303: 321 pgs: 321 active+clean; 339 MiB data, 513 MiB used, 20 GiB / 21 GiB avail; 684 KiB/s rd, 4.3 MiB/s wr, 110 op/s
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.142 250273 DEBUG oslo_concurrency.processutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.144 250273 DEBUG nova.virt.libvirt.vif [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:37:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2022362159',display_name='tempest-ServersAdminTestJSON-server-2022362159',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2022362159',id=31,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a5f46b255cd4387bd3e4c0acaa39466',ramdisk_id='',reservation_id='r-s5k184ms',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1167530593',owner_user_name='tempest-ServersAdminTestJSON-1167530593-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:37:23Z,user_data=None,user_id='191a72cfd0a841e9806246e07eb62fa6',uuid=ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b51b4db-a755-47c2-9d6b-f75e5cdb0204", "address": "fa:16:3e:c8:6c:17", "network": {"id": "1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-62484463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5f46b255cd4387bd3e4c0acaa39466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b51b4db-a7", "ovs_interfaceid": "1b51b4db-a755-47c2-9d6b-f75e5cdb0204", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.144 250273 DEBUG nova.network.os_vif_util [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Converting VIF {"id": "1b51b4db-a755-47c2-9d6b-f75e5cdb0204", "address": "fa:16:3e:c8:6c:17", "network": {"id": "1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-62484463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5f46b255cd4387bd3e4c0acaa39466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b51b4db-a7", "ovs_interfaceid": "1b51b4db-a755-47c2-9d6b-f75e5cdb0204", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.146 250273 DEBUG nova.network.os_vif_util [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:6c:17,bridge_name='br-int',has_traffic_filtering=True,id=1b51b4db-a755-47c2-9d6b-f75e5cdb0204,network=Network(1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b51b4db-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.147 250273 DEBUG nova.objects.instance [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Lazy-loading 'pci_devices' on Instance uuid ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.172 250273 DEBUG nova.virt.libvirt.driver [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:37:31 np0005593232 nova_compute[250269]:  <uuid>ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae</uuid>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:  <name>instance-0000001f</name>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServersAdminTestJSON-server-2022362159</nova:name>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:37:30</nova:creationTime>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:37:31 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:        <nova:user uuid="191a72cfd0a841e9806246e07eb62fa6">tempest-ServersAdminTestJSON-1167530593-project-member</nova:user>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:        <nova:project uuid="1a5f46b255cd4387bd3e4c0acaa39466">tempest-ServersAdminTestJSON-1167530593</nova:project>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:        <nova:port uuid="1b51b4db-a755-47c2-9d6b-f75e5cdb0204">
Jan 23 04:37:31 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <entry name="serial">ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae</entry>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <entry name="uuid">ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae</entry>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae_disk">
Jan 23 04:37:31 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:37:31 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae_disk.config">
Jan 23 04:37:31 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:37:31 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:c8:6c:17"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <target dev="tap1b51b4db-a7"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae/console.log" append="off"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:37:31 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:37:31 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:37:31 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:37:31 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.174 250273 DEBUG nova.compute.manager [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Preparing to wait for external event network-vif-plugged-1b51b4db-a755-47c2-9d6b-f75e5cdb0204 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.174 250273 DEBUG oslo_concurrency.lockutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Acquiring lock "ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.174 250273 DEBUG oslo_concurrency.lockutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Lock "ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.174 250273 DEBUG oslo_concurrency.lockutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Lock "ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.175 250273 DEBUG nova.virt.libvirt.vif [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:37:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2022362159',display_name='tempest-ServersAdminTestJSON-server-2022362159',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2022362159',id=31,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a5f46b255cd4387bd3e4c0acaa39466',ramdisk_id='',reservation_id='r-s5k184ms',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1167530593',owner_user_name='tempest-ServersAdminTestJSON-1167530593-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:37:23Z,user_data=None,user_id='191a72cfd0a841e9806246e07eb62fa6',uuid=ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b51b4db-a755-47c2-9d6b-f75e5cdb0204", "address": "fa:16:3e:c8:6c:17", "network": {"id": "1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-62484463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5f46b255cd4387bd3e4c0acaa39466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b51b4db-a7", "ovs_interfaceid": "1b51b4db-a755-47c2-9d6b-f75e5cdb0204", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.175 250273 DEBUG nova.network.os_vif_util [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Converting VIF {"id": "1b51b4db-a755-47c2-9d6b-f75e5cdb0204", "address": "fa:16:3e:c8:6c:17", "network": {"id": "1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-62484463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5f46b255cd4387bd3e4c0acaa39466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b51b4db-a7", "ovs_interfaceid": "1b51b4db-a755-47c2-9d6b-f75e5cdb0204", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.176 250273 DEBUG nova.network.os_vif_util [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:6c:17,bridge_name='br-int',has_traffic_filtering=True,id=1b51b4db-a755-47c2-9d6b-f75e5cdb0204,network=Network(1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b51b4db-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.176 250273 DEBUG os_vif [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:6c:17,bridge_name='br-int',has_traffic_filtering=True,id=1b51b4db-a755-47c2-9d6b-f75e5cdb0204,network=Network(1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b51b4db-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.177 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.177 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.178 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.181 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.181 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b51b4db-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.181 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1b51b4db-a7, col_values=(('external_ids', {'iface-id': '1b51b4db-a755-47c2-9d6b-f75e5cdb0204', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c8:6c:17', 'vm-uuid': 'ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:37:31 np0005593232 NetworkManager[49057]: <info>  [1769161051.1839] manager: (tap1b51b4db-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.186 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.190 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.191 250273 INFO os_vif [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:6c:17,bridge_name='br-int',has_traffic_filtering=True,id=1b51b4db-a755-47c2-9d6b-f75e5cdb0204,network=Network(1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b51b4db-a7')#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.252 250273 DEBUG nova.virt.libvirt.driver [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.254 250273 DEBUG nova.virt.libvirt.driver [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.254 250273 DEBUG nova.virt.libvirt.driver [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] No VIF found with MAC fa:16:3e:c8:6c:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.256 250273 INFO nova.virt.libvirt.driver [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Using config drive#033[00m
Jan 23 04:37:31 np0005593232 nova_compute[250269]: 2026-01-23 09:37:31.290 250273 DEBUG nova.storage.rbd_utils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] rbd image ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:37:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:32.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:32.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:32 np0005593232 nova_compute[250269]: 2026-01-23 09:37:32.621 250273 INFO nova.virt.libvirt.driver [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Creating config drive at /var/lib/nova/instances/ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae/disk.config#033[00m
Jan 23 04:37:32 np0005593232 nova_compute[250269]: 2026-01-23 09:37:32.628 250273 DEBUG oslo_concurrency.processutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8tmbdv1g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:37:32 np0005593232 nova_compute[250269]: 2026-01-23 09:37:32.761 250273 DEBUG oslo_concurrency.processutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8tmbdv1g" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:37:32 np0005593232 nova_compute[250269]: 2026-01-23 09:37:32.796 250273 DEBUG nova.storage.rbd_utils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] rbd image ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:37:32 np0005593232 nova_compute[250269]: 2026-01-23 09:37:32.800 250273 DEBUG oslo_concurrency.processutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae/disk.config ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:37:33 np0005593232 nova_compute[250269]: 2026-01-23 09:37:33.070 250273 DEBUG oslo_concurrency.processutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae/disk.config ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:37:33 np0005593232 nova_compute[250269]: 2026-01-23 09:37:33.071 250273 INFO nova.virt.libvirt.driver [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Deleting local config drive /var/lib/nova/instances/ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae/disk.config because it was imported into RBD.#033[00m
Jan 23 04:37:33 np0005593232 kernel: tap1b51b4db-a7: entered promiscuous mode
Jan 23 04:37:33 np0005593232 NetworkManager[49057]: <info>  [1769161053.1249] manager: (tap1b51b4db-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Jan 23 04:37:33 np0005593232 ovn_controller[151001]: 2026-01-23T09:37:33Z|00094|binding|INFO|Claiming lport 1b51b4db-a755-47c2-9d6b-f75e5cdb0204 for this chassis.
Jan 23 04:37:33 np0005593232 ovn_controller[151001]: 2026-01-23T09:37:33Z|00095|binding|INFO|1b51b4db-a755-47c2-9d6b-f75e5cdb0204: Claiming fa:16:3e:c8:6c:17 10.100.0.7
Jan 23 04:37:33 np0005593232 nova_compute[250269]: 2026-01-23 09:37:33.126 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1304: 321 pgs: 321 active+clean; 339 MiB data, 513 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.2 MiB/s wr, 198 op/s
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.147 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:6c:17 10.100.0.7'], port_security=['fa:16:3e:c8:6c:17 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a5f46b255cd4387bd3e4c0acaa39466', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7d939c30-94ef-4237-8ee8-7374d4fefcd9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bd55a4d-ba72-4dcd-bf4e-ec1dab31b370, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=1b51b4db-a755-47c2-9d6b-f75e5cdb0204) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.148 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 1b51b4db-a755-47c2-9d6b-f75e5cdb0204 in datapath 1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c bound to our chassis#033[00m
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.150 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c#033[00m
Jan 23 04:37:33 np0005593232 systemd-udevd[274228]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.163 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4f517cb4-f42f-4e49-b667-edfc5510eef5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.165 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1f2b13ad-71 in ovnmeta-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:37:33 np0005593232 NetworkManager[49057]: <info>  [1769161053.1671] device (tap1b51b4db-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:37:33 np0005593232 NetworkManager[49057]: <info>  [1769161053.1683] device (tap1b51b4db-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.166 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1f2b13ad-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.166 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d11b9ce7-9a88-46f5-b034-46ef39778ff2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.170 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[59bacaa7-7a5b-48f4-9898-8da93d30fefe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:37:33 np0005593232 systemd-machined[215836]: New machine qemu-14-instance-0000001f.
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.183 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[bd387696-0ed1-4b69-8553-1bd26a01ad23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:37:33 np0005593232 nova_compute[250269]: 2026-01-23 09:37:33.196 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.197 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5d15c49d-d8d2-4b41-9db0-c383ff265816]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:37:33 np0005593232 ovn_controller[151001]: 2026-01-23T09:37:33Z|00096|binding|INFO|Setting lport 1b51b4db-a755-47c2-9d6b-f75e5cdb0204 ovn-installed in OVS
Jan 23 04:37:33 np0005593232 ovn_controller[151001]: 2026-01-23T09:37:33Z|00097|binding|INFO|Setting lport 1b51b4db-a755-47c2-9d6b-f75e5cdb0204 up in Southbound
Jan 23 04:37:33 np0005593232 nova_compute[250269]: 2026-01-23 09:37:33.206 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:33 np0005593232 systemd[1]: Started Virtual Machine qemu-14-instance-0000001f.
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.227 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[22cf5a6b-1289-48a5-82aa-ed70ebb954de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.234 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d67183c5-b9b5-48d6-96a8-46966a47c977]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:37:33 np0005593232 NetworkManager[49057]: <info>  [1769161053.2356] manager: (tap1f2b13ad-70): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.265 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[d33c57b9-0fe5-49d4-8e37-04c79aa4ce6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.269 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[917f9f5b-6a2b-4b50-bad4-198184a73efe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:37:33 np0005593232 NetworkManager[49057]: <info>  [1769161053.2893] device (tap1f2b13ad-70): carrier: link connected
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.294 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b517cc-65d9-4f3f-a753-fe0953319aba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.313 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[829d4839-6e99-4435-9f61-b50940dbf227]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f2b13ad-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:78:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496643, 'reachable_time': 25283, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274264, 'error': None, 'target': 'ovnmeta-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.331 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5c8e031d-3df0-47c7-b891-2c82ed016b3f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec8:78b8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496643, 'tstamp': 496643}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274265, 'error': None, 'target': 'ovnmeta-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.355 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8a880a78-5b4f-423c-b950-3721be6e2ae7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f2b13ad-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:78:b8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496643, 'reachable_time': 25283, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274266, 'error': None, 'target': 'ovnmeta-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.386 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[40d163c6-9fee-4878-a745-32f795803272]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.443 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d19ec002-b7ef-4d7c-b41a-74fd0f8c70d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.445 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f2b13ad-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.445 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.445 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f2b13ad-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:37:33 np0005593232 nova_compute[250269]: 2026-01-23 09:37:33.447 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:33 np0005593232 kernel: tap1f2b13ad-70: entered promiscuous mode
Jan 23 04:37:33 np0005593232 NetworkManager[49057]: <info>  [1769161053.4478] manager: (tap1f2b13ad-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.450 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f2b13ad-70, col_values=(('external_ids', {'iface-id': '5880c863-f7b0-4399-b221-f31849823320'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:37:33 np0005593232 ovn_controller[151001]: 2026-01-23T09:37:33Z|00098|binding|INFO|Releasing lport 5880c863-f7b0-4399-b221-f31849823320 from this chassis (sb_readonly=0)
Jan 23 04:37:33 np0005593232 nova_compute[250269]: 2026-01-23 09:37:33.451 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:33 np0005593232 nova_compute[250269]: 2026-01-23 09:37:33.466 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.467 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.468 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ea0f4398-9560-4be9-b2e5-c587aaf233b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.469 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c.pid.haproxy
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:37:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:33.470 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c', 'env', 'PROCESS_TAG=haproxy-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 04:37:33 np0005593232 nova_compute[250269]: 2026-01-23 09:37:33.613 250273 DEBUG nova.virt.libvirt.driver [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Creating tmpfile /var/lib/nova/instances/tmpc7fzoq9f to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Jan 23 04:37:33 np0005593232 nova_compute[250269]: 2026-01-23 09:37:33.620 250273 DEBUG nova.compute.manager [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=<?>,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpc7fzoq9f',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Jan 23 04:37:33 np0005593232 nova_compute[250269]: 2026-01-23 09:37:33.706 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161053.705539, ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:37:33 np0005593232 nova_compute[250269]: 2026-01-23 09:37:33.707 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] VM Started (Lifecycle Event)#033[00m
Jan 23 04:37:33 np0005593232 nova_compute[250269]: 2026-01-23 09:37:33.740 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:37:33 np0005593232 nova_compute[250269]: 2026-01-23 09:37:33.745 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161053.7060788, ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:37:33 np0005593232 nova_compute[250269]: 2026-01-23 09:37:33.745 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:37:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:37:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Jan 23 04:37:33 np0005593232 nova_compute[250269]: 2026-01-23 09:37:33.776 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:37:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Jan 23 04:37:33 np0005593232 nova_compute[250269]: 2026-01-23 09:37:33.780 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:37:33 np0005593232 nova_compute[250269]: 2026-01-23 09:37:33.790 250273 DEBUG nova.network.neutron [req-ccdd917c-0144-4085-bed4-d73df31141d8 req-2005a1cd-1cda-467b-bd5a-982575db4e66 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Updated VIF entry in instance network info cache for port 1b51b4db-a755-47c2-9d6b-f75e5cdb0204. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:37:33 np0005593232 nova_compute[250269]: 2026-01-23 09:37:33.790 250273 DEBUG nova.network.neutron [req-ccdd917c-0144-4085-bed4-d73df31141d8 req-2005a1cd-1cda-467b-bd5a-982575db4e66 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Updating instance_info_cache with network_info: [{"id": "1b51b4db-a755-47c2-9d6b-f75e5cdb0204", "address": "fa:16:3e:c8:6c:17", "network": {"id": "1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-62484463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5f46b255cd4387bd3e4c0acaa39466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b51b4db-a7", "ovs_interfaceid": "1b51b4db-a755-47c2-9d6b-f75e5cdb0204", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:37:33 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Jan 23 04:37:33 np0005593232 podman[274338]: 2026-01-23 09:37:33.898421417 +0000 UTC m=+0.084848324 container create dbcb2b93545371eb91ecbeaa840bd86bc814e66c5438663a842f4b91d6c0d3f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:37:33 np0005593232 podman[274338]: 2026-01-23 09:37:33.837380778 +0000 UTC m=+0.023807705 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:37:33 np0005593232 nova_compute[250269]: 2026-01-23 09:37:33.934 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:37:33 np0005593232 nova_compute[250269]: 2026-01-23 09:37:33.935 250273 DEBUG oslo_concurrency.lockutils [req-ccdd917c-0144-4085-bed4-d73df31141d8 req-2005a1cd-1cda-467b-bd5a-982575db4e66 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:37:33 np0005593232 systemd[1]: Started libpod-conmon-dbcb2b93545371eb91ecbeaa840bd86bc814e66c5438663a842f4b91d6c0d3f8.scope.
Jan 23 04:37:33 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:37:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8e6cb4bf5bd9695192cffbd81adaf2c61a1f2a01af467ab53cf8f3148ca1c50/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:37:34 np0005593232 podman[274338]: 2026-01-23 09:37:34.003211755 +0000 UTC m=+0.189638672 container init dbcb2b93545371eb91ecbeaa840bd86bc814e66c5438663a842f4b91d6c0d3f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 04:37:34 np0005593232 podman[274338]: 2026-01-23 09:37:34.009464092 +0000 UTC m=+0.195890989 container start dbcb2b93545371eb91ecbeaa840bd86bc814e66c5438663a842f4b91d6c0d3f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:37:34 np0005593232 neutron-haproxy-ovnmeta-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c[274353]: [NOTICE]   (274357) : New worker (274359) forked
Jan 23 04:37:34 np0005593232 neutron-haproxy-ovnmeta-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c[274353]: [NOTICE]   (274357) : Loading success.
Jan 23 04:37:34 np0005593232 nova_compute[250269]: 2026-01-23 09:37:34.056 250273 DEBUG nova.compute.manager [req-feb78e29-a114-47de-ac29-b1ed293749f4 req-ce41fe1b-756b-4298-be38-a3d98422f535 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Received event network-vif-plugged-1b51b4db-a755-47c2-9d6b-f75e5cdb0204 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:37:34 np0005593232 nova_compute[250269]: 2026-01-23 09:37:34.057 250273 DEBUG oslo_concurrency.lockutils [req-feb78e29-a114-47de-ac29-b1ed293749f4 req-ce41fe1b-756b-4298-be38-a3d98422f535 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:37:34 np0005593232 nova_compute[250269]: 2026-01-23 09:37:34.057 250273 DEBUG oslo_concurrency.lockutils [req-feb78e29-a114-47de-ac29-b1ed293749f4 req-ce41fe1b-756b-4298-be38-a3d98422f535 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:37:34 np0005593232 nova_compute[250269]: 2026-01-23 09:37:34.057 250273 DEBUG oslo_concurrency.lockutils [req-feb78e29-a114-47de-ac29-b1ed293749f4 req-ce41fe1b-756b-4298-be38-a3d98422f535 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:37:34 np0005593232 nova_compute[250269]: 2026-01-23 09:37:34.058 250273 DEBUG nova.compute.manager [req-feb78e29-a114-47de-ac29-b1ed293749f4 req-ce41fe1b-756b-4298-be38-a3d98422f535 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Processing event network-vif-plugged-1b51b4db-a755-47c2-9d6b-f75e5cdb0204 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 04:37:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:34 np0005593232 nova_compute[250269]: 2026-01-23 09:37:34.058 250273 DEBUG nova.compute.manager [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:37:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:34.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:34 np0005593232 nova_compute[250269]: 2026-01-23 09:37:34.062 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161054.0618794, ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:37:34 np0005593232 nova_compute[250269]: 2026-01-23 09:37:34.062 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:37:34 np0005593232 nova_compute[250269]: 2026-01-23 09:37:34.067 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:34 np0005593232 nova_compute[250269]: 2026-01-23 09:37:34.077 250273 DEBUG nova.virt.libvirt.driver [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:37:34 np0005593232 nova_compute[250269]: 2026-01-23 09:37:34.079 250273 INFO nova.virt.libvirt.driver [-] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Instance spawned successfully.#033[00m
Jan 23 04:37:34 np0005593232 nova_compute[250269]: 2026-01-23 09:37:34.080 250273 DEBUG nova.virt.libvirt.driver [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:37:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:34.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:34 np0005593232 nova_compute[250269]: 2026-01-23 09:37:34.325 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:37:34 np0005593232 nova_compute[250269]: 2026-01-23 09:37:34.331 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:37:34 np0005593232 nova_compute[250269]: 2026-01-23 09:37:34.334 250273 DEBUG nova.virt.libvirt.driver [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:37:34 np0005593232 nova_compute[250269]: 2026-01-23 09:37:34.335 250273 DEBUG nova.virt.libvirt.driver [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:37:34 np0005593232 nova_compute[250269]: 2026-01-23 09:37:34.335 250273 DEBUG nova.virt.libvirt.driver [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:37:34 np0005593232 nova_compute[250269]: 2026-01-23 09:37:34.335 250273 DEBUG nova.virt.libvirt.driver [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:37:34 np0005593232 nova_compute[250269]: 2026-01-23 09:37:34.336 250273 DEBUG nova.virt.libvirt.driver [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:37:34 np0005593232 nova_compute[250269]: 2026-01-23 09:37:34.336 250273 DEBUG nova.virt.libvirt.driver [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:37:34 np0005593232 nova_compute[250269]: 2026-01-23 09:37:34.814 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:37:35 np0005593232 nova_compute[250269]: 2026-01-23 09:37:35.018 250273 INFO nova.compute.manager [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Took 11.13 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:37:35 np0005593232 nova_compute[250269]: 2026-01-23 09:37:35.019 250273 DEBUG nova.compute.manager [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:37:35 np0005593232 nova_compute[250269]: 2026-01-23 09:37:35.106 250273 INFO nova.compute.manager [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Took 12.63 seconds to build instance.#033[00m
Jan 23 04:37:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1306: 321 pgs: 321 active+clean; 339 MiB data, 513 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 1.6 MiB/s wr, 185 op/s
Jan 23 04:37:35 np0005593232 nova_compute[250269]: 2026-01-23 09:37:35.178 250273 DEBUG oslo_concurrency.lockutils [None req-09b3b975-f95d-463d-af0f-7ada418b4964 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Lock "ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:37:35 np0005593232 nova_compute[250269]: 2026-01-23 09:37:35.788 250273 DEBUG nova.compute.manager [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpc7fzoq9f',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='261ab1ec-f79b-4867-bcb6-1c1d7491120e',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Jan 23 04:37:35 np0005593232 nova_compute[250269]: 2026-01-23 09:37:35.840 250273 DEBUG oslo_concurrency.lockutils [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Acquiring lock "refresh_cache-261ab1ec-f79b-4867-bcb6-1c1d7491120e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:37:35 np0005593232 nova_compute[250269]: 2026-01-23 09:37:35.840 250273 DEBUG oslo_concurrency.lockutils [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Acquired lock "refresh_cache-261ab1ec-f79b-4867-bcb6-1c1d7491120e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:37:35 np0005593232 nova_compute[250269]: 2026-01-23 09:37:35.841 250273 DEBUG nova.network.neutron [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:37:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:36.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:36 np0005593232 nova_compute[250269]: 2026-01-23 09:37:36.184 250273 DEBUG nova.compute.manager [req-614bbf30-2c8a-49d4-8896-5b361d509580 req-9f279056-158a-4c2e-8e97-632b357efcb0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Received event network-vif-plugged-1b51b4db-a755-47c2-9d6b-f75e5cdb0204 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:37:36 np0005593232 nova_compute[250269]: 2026-01-23 09:37:36.184 250273 DEBUG oslo_concurrency.lockutils [req-614bbf30-2c8a-49d4-8896-5b361d509580 req-9f279056-158a-4c2e-8e97-632b357efcb0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:37:36 np0005593232 nova_compute[250269]: 2026-01-23 09:37:36.185 250273 DEBUG oslo_concurrency.lockutils [req-614bbf30-2c8a-49d4-8896-5b361d509580 req-9f279056-158a-4c2e-8e97-632b357efcb0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:37:36 np0005593232 nova_compute[250269]: 2026-01-23 09:37:36.185 250273 DEBUG oslo_concurrency.lockutils [req-614bbf30-2c8a-49d4-8896-5b361d509580 req-9f279056-158a-4c2e-8e97-632b357efcb0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:37:36 np0005593232 nova_compute[250269]: 2026-01-23 09:37:36.185 250273 DEBUG nova.compute.manager [req-614bbf30-2c8a-49d4-8896-5b361d509580 req-9f279056-158a-4c2e-8e97-632b357efcb0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] No waiting events found dispatching network-vif-plugged-1b51b4db-a755-47c2-9d6b-f75e5cdb0204 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:37:36 np0005593232 nova_compute[250269]: 2026-01-23 09:37:36.185 250273 WARNING nova.compute.manager [req-614bbf30-2c8a-49d4-8896-5b361d509580 req-9f279056-158a-4c2e-8e97-632b357efcb0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Received unexpected event network-vif-plugged-1b51b4db-a755-47c2-9d6b-f75e5cdb0204 for instance with vm_state active and task_state None.#033[00m
Jan 23 04:37:36 np0005593232 nova_compute[250269]: 2026-01-23 09:37:36.186 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:36.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1307: 321 pgs: 321 active+clean; 349 MiB data, 522 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 1.8 MiB/s wr, 217 op/s
Jan 23 04:37:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:37:37
Jan 23 04:37:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:37:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:37:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['vms', 'volumes', 'default.rgw.control', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', '.mgr']
Jan 23 04:37:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:37:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:37:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:37:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:37:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:37:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:37:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:37:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:37:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:37:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:37:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:37:37 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:37:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:37:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:37:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:37:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:37:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:37:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:38.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:38.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:37:39 np0005593232 nova_compute[250269]: 2026-01-23 09:37:39.068 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:37:39 np0005593232 nova_compute[250269]: 2026-01-23 09:37:39.114 250273 DEBUG nova.network.neutron [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Updating instance_info_cache with network_info: [{"id": "27e277b3-2135-4e3e-b336-e0da87509465", "address": "fa:16:3e:34:06:0e", "network": {"id": "8eab8076-0848-4daf-bbac-f3f8b65ca750", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1369330040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0dce6e339c349d4ab97cee5e49fff3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27e277b3-21", "ovs_interfaceid": "27e277b3-2135-4e3e-b336-e0da87509465", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 04:37:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1308: 321 pgs: 321 active+clean; 377 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 1.8 MiB/s wr, 257 op/s
Jan 23 04:37:39 np0005593232 nova_compute[250269]: 2026-01-23 09:37:39.138 250273 DEBUG oslo_concurrency.lockutils [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Releasing lock "refresh_cache-261ab1ec-f79b-4867-bcb6-1c1d7491120e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 04:37:39 np0005593232 nova_compute[250269]: 2026-01-23 09:37:39.140 250273 DEBUG os_brick.utils [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 23 04:37:39 np0005593232 nova_compute[250269]: 2026-01-23 09:37:39.143 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:37:39 np0005593232 nova_compute[250269]: 2026-01-23 09:37:39.158 265031 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:37:39 np0005593232 nova_compute[250269]: 2026-01-23 09:37:39.158 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[407aff3c-6584-49a6-b6c3-1caaa8b46424]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:37:39 np0005593232 nova_compute[250269]: 2026-01-23 09:37:39.159 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:37:39 np0005593232 nova_compute[250269]: 2026-01-23 09:37:39.168 265031 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:37:39 np0005593232 nova_compute[250269]: 2026-01-23 09:37:39.168 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[fdb260ff-6f4f-4daa-a64b-dcc1e56ff00d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:c6473626b456', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:37:39 np0005593232 nova_compute[250269]: 2026-01-23 09:37:39.170 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:37:39 np0005593232 nova_compute[250269]: 2026-01-23 09:37:39.179 265031 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:37:39 np0005593232 nova_compute[250269]: 2026-01-23 09:37:39.180 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[ac9afef4-c935-4c09-8bed-82e7e5d79423]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:37:39 np0005593232 nova_compute[250269]: 2026-01-23 09:37:39.181 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[572babef-7905-402e-812d-f54d64c12ef8]: (4, 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:37:39 np0005593232 nova_compute[250269]: 2026-01-23 09:37:39.182 250273 DEBUG oslo_concurrency.processutils [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:37:39 np0005593232 nova_compute[250269]: 2026-01-23 09:37:39.217 250273 DEBUG oslo_concurrency.processutils [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:37:39 np0005593232 nova_compute[250269]: 2026-01-23 09:37:39.220 250273 DEBUG os_brick.initiator.connectors.lightos [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 23 04:37:39 np0005593232 nova_compute[250269]: 2026-01-23 09:37:39.220 250273 DEBUG os_brick.initiator.connectors.lightos [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 23 04:37:39 np0005593232 nova_compute[250269]: 2026-01-23 09:37:39.220 250273 DEBUG os_brick.initiator.connectors.lightos [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 23 04:37:39 np0005593232 nova_compute[250269]: 2026-01-23 09:37:39.221 250273 DEBUG os_brick.utils [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] <== get_connector_properties: return (80ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:c6473626b456', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 23 04:37:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:40.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:40.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:40 np0005593232 nova_compute[250269]: 2026-01-23 09:37:40.695 250273 DEBUG nova.virt.libvirt.driver [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpc7fzoq9f',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='261ab1ec-f79b-4867-bcb6-1c1d7491120e',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={b06791ec-66fd-4114-8448-7ea0b7f88f25='157a81cb-fd76-48d4-abf5-e6fb564e20a5'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827
Jan 23 04:37:40 np0005593232 nova_compute[250269]: 2026-01-23 09:37:40.696 250273 DEBUG nova.virt.libvirt.driver [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Creating instance directory: /var/lib/nova/instances/261ab1ec-f79b-4867-bcb6-1c1d7491120e pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840
Jan 23 04:37:40 np0005593232 nova_compute[250269]: 2026-01-23 09:37:40.697 250273 DEBUG nova.virt.libvirt.driver [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Ensure instance console log exists: /var/lib/nova/instances/261ab1ec-f79b-4867-bcb6-1c1d7491120e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 04:37:40 np0005593232 nova_compute[250269]: 2026-01-23 09:37:40.697 250273 DEBUG nova.virt.libvirt.driver [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901
Jan 23 04:37:40 np0005593232 nova_compute[250269]: 2026-01-23 09:37:40.702 250273 DEBUG nova.virt.libvirt.driver [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794
Jan 23 04:37:40 np0005593232 nova_compute[250269]: 2026-01-23 09:37:40.705 250273 DEBUG nova.virt.libvirt.vif [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:37:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-724421301',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-724421301',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:37:26Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d0dce6e339c349d4ab97cee5e49fff3a',ramdisk_id='',reservation_id='r-106tqp53',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1207260646',owner_
user_name='tempest-LiveAutoBlockMigrationV225Test-1207260646-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:37:26Z,user_data=None,user_id='4f72965e950c4761bfedd99fdc411a83',uuid=261ab1ec-f79b-4867-bcb6-1c1d7491120e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27e277b3-2135-4e3e-b336-e0da87509465", "address": "fa:16:3e:34:06:0e", "network": {"id": "8eab8076-0848-4daf-bbac-f3f8b65ca750", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1369330040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0dce6e339c349d4ab97cee5e49fff3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap27e277b3-21", "ovs_interfaceid": "27e277b3-2135-4e3e-b336-e0da87509465", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 23 04:37:40 np0005593232 nova_compute[250269]: 2026-01-23 09:37:40.705 250273 DEBUG nova.network.os_vif_util [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Converting VIF {"id": "27e277b3-2135-4e3e-b336-e0da87509465", "address": "fa:16:3e:34:06:0e", "network": {"id": "8eab8076-0848-4daf-bbac-f3f8b65ca750", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1369330040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0dce6e339c349d4ab97cee5e49fff3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap27e277b3-21", "ovs_interfaceid": "27e277b3-2135-4e3e-b336-e0da87509465", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 04:37:40 np0005593232 nova_compute[250269]: 2026-01-23 09:37:40.706 250273 DEBUG nova.network.os_vif_util [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:06:0e,bridge_name='br-int',has_traffic_filtering=True,id=27e277b3-2135-4e3e-b336-e0da87509465,network=Network(8eab8076-0848-4daf-bbac-f3f8b65ca750),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27e277b3-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 04:37:40 np0005593232 nova_compute[250269]: 2026-01-23 09:37:40.708 250273 DEBUG os_vif [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:06:0e,bridge_name='br-int',has_traffic_filtering=True,id=27e277b3-2135-4e3e-b336-e0da87509465,network=Network(8eab8076-0848-4daf-bbac-f3f8b65ca750),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27e277b3-21') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 23 04:37:40 np0005593232 nova_compute[250269]: 2026-01-23 09:37:40.710 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:37:40 np0005593232 nova_compute[250269]: 2026-01-23 09:37:40.710 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 04:37:40 np0005593232 nova_compute[250269]: 2026-01-23 09:37:40.710 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 04:37:40 np0005593232 nova_compute[250269]: 2026-01-23 09:37:40.714 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:37:40 np0005593232 nova_compute[250269]: 2026-01-23 09:37:40.715 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap27e277b3-21, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 04:37:40 np0005593232 nova_compute[250269]: 2026-01-23 09:37:40.716 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap27e277b3-21, col_values=(('external_ids', {'iface-id': '27e277b3-2135-4e3e-b336-e0da87509465', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:34:06:0e', 'vm-uuid': '261ab1ec-f79b-4867-bcb6-1c1d7491120e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 04:37:40 np0005593232 nova_compute[250269]: 2026-01-23 09:37:40.754 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:37:40 np0005593232 NetworkManager[49057]: <info>  [1769161060.7550] manager: (tap27e277b3-21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Jan 23 04:37:40 np0005593232 nova_compute[250269]: 2026-01-23 09:37:40.757 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 04:37:40 np0005593232 nova_compute[250269]: 2026-01-23 09:37:40.761 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:37:40 np0005593232 nova_compute[250269]: 2026-01-23 09:37:40.761 250273 INFO os_vif [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:06:0e,bridge_name='br-int',has_traffic_filtering=True,id=27e277b3-2135-4e3e-b336-e0da87509465,network=Network(8eab8076-0848-4daf-bbac-f3f8b65ca750),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27e277b3-21')
Jan 23 04:37:40 np0005593232 nova_compute[250269]: 2026-01-23 09:37:40.765 250273 DEBUG nova.virt.libvirt.driver [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954
Jan 23 04:37:40 np0005593232 nova_compute[250269]: 2026-01-23 09:37:40.766 250273 DEBUG nova.compute.manager [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpc7fzoq9f',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='261ab1ec-f79b-4867-bcb6-1c1d7491120e',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={b06791ec-66fd-4114-8448-7ea0b7f88f25='157a81cb-fd76-48d4-abf5-e6fb564e20a5'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668
Jan 23 04:37:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1309: 321 pgs: 321 active+clean; 377 MiB data, 531 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 1.8 MiB/s wr, 257 op/s
Jan 23 04:37:41 np0005593232 podman[274381]: 2026-01-23 09:37:41.407652844 +0000 UTC m=+0.062843151 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:37:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:42.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000056s ======
Jan 23 04:37:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:42.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Jan 23 04:37:42 np0005593232 nova_compute[250269]: 2026-01-23 09:37:42.513 250273 DEBUG nova.network.neutron [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Port 27e277b3-2135-4e3e-b336-e0da87509465 updated with migration profile {'migrating_to': 'compute-0.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354
Jan 23 04:37:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:42.592 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:37:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:42.593 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:37:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:42.594 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:37:42 np0005593232 nova_compute[250269]: 2026-01-23 09:37:42.790 250273 DEBUG nova.compute.manager [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpc7fzoq9f',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='261ab1ec-f79b-4867-bcb6-1c1d7491120e',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={b06791ec-66fd-4114-8448-7ea0b7f88f25='157a81cb-fd76-48d4-abf5-e6fb564e20a5'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723
Jan 23 04:37:43 np0005593232 kernel: tap27e277b3-21: entered promiscuous mode
Jan 23 04:37:43 np0005593232 NetworkManager[49057]: <info>  [1769161063.0103] manager: (tap27e277b3-21): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Jan 23 04:37:43 np0005593232 ovn_controller[151001]: 2026-01-23T09:37:43Z|00099|binding|INFO|Claiming lport 27e277b3-2135-4e3e-b336-e0da87509465 for this additional chassis.
Jan 23 04:37:43 np0005593232 ovn_controller[151001]: 2026-01-23T09:37:43Z|00100|binding|INFO|27e277b3-2135-4e3e-b336-e0da87509465: Claiming fa:16:3e:34:06:0e 10.100.0.11
Jan 23 04:37:43 np0005593232 nova_compute[250269]: 2026-01-23 09:37:43.011 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:43 np0005593232 ovn_controller[151001]: 2026-01-23T09:37:43Z|00101|binding|INFO|Setting lport 27e277b3-2135-4e3e-b336-e0da87509465 ovn-installed in OVS
Jan 23 04:37:43 np0005593232 nova_compute[250269]: 2026-01-23 09:37:43.030 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:43 np0005593232 nova_compute[250269]: 2026-01-23 09:37:43.034 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:43 np0005593232 systemd-machined[215836]: New machine qemu-15-instance-0000001d.
Jan 23 04:37:43 np0005593232 systemd-udevd[274417]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:37:43 np0005593232 systemd[1]: Started Virtual Machine qemu-15-instance-0000001d.
Jan 23 04:37:43 np0005593232 NetworkManager[49057]: <info>  [1769161063.0668] device (tap27e277b3-21): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:37:43 np0005593232 NetworkManager[49057]: <info>  [1769161063.0674] device (tap27e277b3-21): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:37:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1310: 321 pgs: 321 active+clean; 418 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 4.7 MiB/s wr, 230 op/s
Jan 23 04:37:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:37:43 np0005593232 nova_compute[250269]: 2026-01-23 09:37:43.959 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161063.9588947, 261ab1ec-f79b-4867-bcb6-1c1d7491120e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:37:43 np0005593232 nova_compute[250269]: 2026-01-23 09:37:43.960 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] VM Started (Lifecycle Event)#033[00m
Jan 23 04:37:43 np0005593232 nova_compute[250269]: 2026-01-23 09:37:43.992 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:37:44 np0005593232 nova_compute[250269]: 2026-01-23 09:37:44.070 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:44.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:44.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:44 np0005593232 nova_compute[250269]: 2026-01-23 09:37:44.637 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161064.636951, 261ab1ec-f79b-4867-bcb6-1c1d7491120e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:37:44 np0005593232 nova_compute[250269]: 2026-01-23 09:37:44.637 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:37:44 np0005593232 nova_compute[250269]: 2026-01-23 09:37:44.683 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:37:44 np0005593232 nova_compute[250269]: 2026-01-23 09:37:44.687 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:37:44 np0005593232 nova_compute[250269]: 2026-01-23 09:37:44.740 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com#033[00m
Jan 23 04:37:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1311: 321 pgs: 321 active+clean; 418 MiB data, 563 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 4.1 MiB/s wr, 203 op/s
Jan 23 04:37:45 np0005593232 nova_compute[250269]: 2026-01-23 09:37:45.755 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:45 np0005593232 ovn_controller[151001]: 2026-01-23T09:37:45Z|00102|binding|INFO|Claiming lport 27e277b3-2135-4e3e-b336-e0da87509465 for this chassis.
Jan 23 04:37:45 np0005593232 ovn_controller[151001]: 2026-01-23T09:37:45Z|00103|binding|INFO|27e277b3-2135-4e3e-b336-e0da87509465: Claiming fa:16:3e:34:06:0e 10.100.0.11
Jan 23 04:37:45 np0005593232 ovn_controller[151001]: 2026-01-23T09:37:45Z|00104|binding|INFO|Setting lport 27e277b3-2135-4e3e-b336-e0da87509465 up in Southbound
Jan 23 04:37:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:45.995 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:06:0e 10.100.0.11'], port_security=['fa:16:3e:34:06:0e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '261ab1ec-f79b-4867-bcb6-1c1d7491120e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8eab8076-0848-4daf-bbac-f3f8b65ca750', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0dce6e339c349d4ab97cee5e49fff3a', 'neutron:revision_number': '10', 'neutron:security_group_ids': '0179c400-b2f2-4914-b563-942a61ef1858', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbb60528-b878-42fd-9c2f-0a3345010b1a, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=27e277b3-2135-4e3e-b336-e0da87509465) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:37:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:45.996 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 27e277b3-2135-4e3e-b336-e0da87509465 in datapath 8eab8076-0848-4daf-bbac-f3f8b65ca750 bound to our chassis#033[00m
Jan 23 04:37:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:45.998 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8eab8076-0848-4daf-bbac-f3f8b65ca750#033[00m
Jan 23 04:37:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:46.012 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c07e3066-34c8-405a-a55a-ab7f920eeddc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:37:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:46.043 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[8ef8b432-3054-40ae-96f8-70a24ce26629]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:37:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:46.047 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[ab2c897f-8fe9-45d7-80a1-3ba35b6443ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:37:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:46.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:46.076 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[bec09e87-087e-438e-bdcf-902c6c13742d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:37:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:46.093 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d54be5f4-8f3c-4be8-bcf9-6ef03c0910d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8eab8076-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:5b:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 28, 'tx_packets': 6, 'rx_bytes': 1456, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 28, 'tx_packets': 6, 'rx_bytes': 1456, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491454, 'reachable_time': 30252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274475, 'error': None, 'target': 'ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:37:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:46.108 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[506a7ab5-5410-4663-a856-f64701727be6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8eab8076-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491465, 'tstamp': 491465}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274476, 'error': None, 'target': 'ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8eab8076-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491468, 'tstamp': 491468}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274476, 'error': None, 'target': 'ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:37:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:46.110 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8eab8076-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:37:46 np0005593232 nova_compute[250269]: 2026-01-23 09:37:46.111 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:46 np0005593232 nova_compute[250269]: 2026-01-23 09:37:46.112 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:46.113 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8eab8076-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:37:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:46.113 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:37:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:46.114 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8eab8076-00, col_values=(('external_ids', {'iface-id': 'b545a870-aa18-4f64-a8a7-f8512824c4cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:37:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:46.114 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:37:46 np0005593232 nova_compute[250269]: 2026-01-23 09:37:46.158 250273 INFO nova.compute.manager [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Post operation of migration started#033[00m
Jan 23 04:37:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:46.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:46 np0005593232 nova_compute[250269]: 2026-01-23 09:37:46.543 250273 DEBUG oslo_concurrency.lockutils [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Acquiring lock "refresh_cache-261ab1ec-f79b-4867-bcb6-1c1d7491120e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:37:46 np0005593232 nova_compute[250269]: 2026-01-23 09:37:46.543 250273 DEBUG oslo_concurrency.lockutils [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Acquired lock "refresh_cache-261ab1ec-f79b-4867-bcb6-1c1d7491120e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:37:46 np0005593232 nova_compute[250269]: 2026-01-23 09:37:46.544 250273 DEBUG nova.network.neutron [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:37:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:37:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:37:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:37:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:37:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007328126454029406 of space, bias 1.0, pg target 2.1984379362088218 quantized to 32 (current 32)
Jan 23 04:37:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:37:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002154144565419691 of space, bias 1.0, pg target 0.6419350804950679 quantized to 32 (current 32)
Jan 23 04:37:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:37:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:37:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:37:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 23 04:37:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:37:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 23 04:37:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:37:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:37:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:37:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 23 04:37:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:37:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 23 04:37:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:37:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:37:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:37:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 23 04:37:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1312: 321 pgs: 321 active+clean; 461 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 6.0 MiB/s wr, 283 op/s
Jan 23 04:37:47 np0005593232 nova_compute[250269]: 2026-01-23 09:37:47.963 250273 DEBUG nova.network.neutron [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Updating instance_info_cache with network_info: [{"id": "27e277b3-2135-4e3e-b336-e0da87509465", "address": "fa:16:3e:34:06:0e", "network": {"id": "8eab8076-0848-4daf-bbac-f3f8b65ca750", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1369330040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0dce6e339c349d4ab97cee5e49fff3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27e277b3-21", "ovs_interfaceid": "27e277b3-2135-4e3e-b336-e0da87509465", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:37:47 np0005593232 nova_compute[250269]: 2026-01-23 09:37:47.987 250273 DEBUG oslo_concurrency.lockutils [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Releasing lock "refresh_cache-261ab1ec-f79b-4867-bcb6-1c1d7491120e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:37:48 np0005593232 nova_compute[250269]: 2026-01-23 09:37:48.024 250273 DEBUG oslo_concurrency.lockutils [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:37:48 np0005593232 nova_compute[250269]: 2026-01-23 09:37:48.024 250273 DEBUG oslo_concurrency.lockutils [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:37:48 np0005593232 nova_compute[250269]: 2026-01-23 09:37:48.025 250273 DEBUG oslo_concurrency.lockutils [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:37:48 np0005593232 nova_compute[250269]: 2026-01-23 09:37:48.029 250273 INFO nova.virt.libvirt.driver [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Jan 23 04:37:48 np0005593232 virtqemud[249592]: Domain id=15 name='instance-0000001d' uuid=261ab1ec-f79b-4867-bcb6-1c1d7491120e is tainted: custom-monitor
Jan 23 04:37:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:48.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:48.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:48 np0005593232 ovn_controller[151001]: 2026-01-23T09:37:48Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c8:6c:17 10.100.0.7
Jan 23 04:37:48 np0005593232 ovn_controller[151001]: 2026-01-23T09:37:48Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c8:6c:17 10.100.0.7
Jan 23 04:37:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:37:49 np0005593232 nova_compute[250269]: 2026-01-23 09:37:49.036 250273 INFO nova.virt.libvirt.driver [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Jan 23 04:37:49 np0005593232 nova_compute[250269]: 2026-01-23 09:37:49.072 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1313: 321 pgs: 321 active+clean; 504 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 7.8 MiB/s wr, 312 op/s
Jan 23 04:37:50 np0005593232 nova_compute[250269]: 2026-01-23 09:37:50.043 250273 INFO nova.virt.libvirt.driver [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Jan 23 04:37:50 np0005593232 nova_compute[250269]: 2026-01-23 09:37:50.048 250273 DEBUG nova.compute.manager [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:37:50 np0005593232 nova_compute[250269]: 2026-01-23 09:37:50.079 250273 DEBUG nova.objects.instance [None req-5cef1da9-f042-4a0d-a3f1-4c769aeb546d 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 23 04:37:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:50.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:50.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:50 np0005593232 nova_compute[250269]: 2026-01-23 09:37:50.521 250273 DEBUG nova.compute.manager [req-f32b61f1-ef63-4611-93b5-5b3b8bdf827b req-80190171-a248-420a-a29a-86f829c1e0ca 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Received event network-vif-plugged-27e277b3-2135-4e3e-b336-e0da87509465 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 04:37:50 np0005593232 nova_compute[250269]: 2026-01-23 09:37:50.521 250273 DEBUG oslo_concurrency.lockutils [req-f32b61f1-ef63-4611-93b5-5b3b8bdf827b req-80190171-a248-420a-a29a-86f829c1e0ca 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:37:50 np0005593232 nova_compute[250269]: 2026-01-23 09:37:50.521 250273 DEBUG oslo_concurrency.lockutils [req-f32b61f1-ef63-4611-93b5-5b3b8bdf827b req-80190171-a248-420a-a29a-86f829c1e0ca 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:37:50 np0005593232 nova_compute[250269]: 2026-01-23 09:37:50.522 250273 DEBUG oslo_concurrency.lockutils [req-f32b61f1-ef63-4611-93b5-5b3b8bdf827b req-80190171-a248-420a-a29a-86f829c1e0ca 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:37:50 np0005593232 nova_compute[250269]: 2026-01-23 09:37:50.522 250273 DEBUG nova.compute.manager [req-f32b61f1-ef63-4611-93b5-5b3b8bdf827b req-80190171-a248-420a-a29a-86f829c1e0ca 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] No waiting events found dispatching network-vif-plugged-27e277b3-2135-4e3e-b336-e0da87509465 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 04:37:50 np0005593232 nova_compute[250269]: 2026-01-23 09:37:50.522 250273 WARNING nova.compute.manager [req-f32b61f1-ef63-4611-93b5-5b3b8bdf827b req-80190171-a248-420a-a29a-86f829c1e0ca 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Received unexpected event network-vif-plugged-27e277b3-2135-4e3e-b336-e0da87509465 for instance with vm_state active and task_state None.
Jan 23 04:37:50 np0005593232 nova_compute[250269]: 2026-01-23 09:37:50.758 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:37:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1314: 321 pgs: 321 active+clean; 504 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 7.1 MiB/s wr, 265 op/s
Jan 23 04:37:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 04:37:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1104260040' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 04:37:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 04:37:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1104260040' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 04:37:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:37:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:52.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:37:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:52.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1315: 321 pgs: 321 active+clean; 503 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 8.5 MiB/s wr, 334 op/s
Jan 23 04:37:53 np0005593232 podman[274704]: 2026-01-23 09:37:53.344967516 +0000 UTC m=+0.063488239 container exec 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:37:53 np0005593232 podman[274704]: 2026-01-23 09:37:53.45030983 +0000 UTC m=+0.168830553 container exec_died 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Jan 23 04:37:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:37:54 np0005593232 nova_compute[250269]: 2026-01-23 09:37:54.074 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:37:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:54.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:54 np0005593232 podman[274855]: 2026-01-23 09:37:54.108972346 +0000 UTC m=+0.051134759 container exec b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 04:37:54 np0005593232 podman[274855]: 2026-01-23 09:37:54.127182032 +0000 UTC m=+0.069344415 container exec_died b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 04:37:54 np0005593232 podman[274919]: 2026-01-23 09:37:54.334578306 +0000 UTC m=+0.049370729 container exec 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, io.buildah.version=1.28.2, version=2.2.4, io.openshift.expose-services=, release=1793, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, name=keepalived, vcs-type=git, build-date=2023-02-22T09:23:20, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 23 04:37:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:54.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:54 np0005593232 podman[274919]: 2026-01-23 09:37:54.350193148 +0000 UTC m=+0.064985551 container exec_died 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, version=2.2.4, distribution-scope=public, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, architecture=x86_64)
Jan 23 04:37:54 np0005593232 nova_compute[250269]: 2026-01-23 09:37:54.361 250273 DEBUG nova.virt.libvirt.driver [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Check if temp file /var/lib/nova/instances/tmphmsj_e0s exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Jan 23 04:37:54 np0005593232 nova_compute[250269]: 2026-01-23 09:37:54.362 250273 DEBUG nova.compute.manager [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmphmsj_e0s',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='261ab1ec-f79b-4867-bcb6-1c1d7491120e',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Jan 23 04:37:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:37:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:37:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:37:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:37:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1316: 321 pgs: 321 active+clean; 503 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 254 op/s
Jan 23 04:37:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:37:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:37:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:37:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:37:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:37:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:37:55 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1247da2c-118b-4bd9-a4be-ab81a922e825 does not exist
Jan 23 04:37:55 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 56eef849-aeef-48c2-98ea-d5c490f73a09 does not exist
Jan 23 04:37:55 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 8b9a1acd-58ba-411e-872a-f99194a143f0 does not exist
Jan 23 04:37:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:37:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:37:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:37:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:37:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:37:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:37:55 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:37:55 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:37:55 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:37:55 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:37:55 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:37:55 np0005593232 nova_compute[250269]: 2026-01-23 09:37:55.761 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:37:55 np0005593232 podman[275224]: 2026-01-23 09:37:55.926574718 +0000 UTC m=+0.076437046 container create 6f32a4cd4b1be4cd683d1c9ed9979fee902638f29745648a5306f832b2dfdaa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:37:55 np0005593232 podman[275224]: 2026-01-23 09:37:55.871416156 +0000 UTC m=+0.021278464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:37:55 np0005593232 systemd[1]: Started libpod-conmon-6f32a4cd4b1be4cd683d1c9ed9979fee902638f29745648a5306f832b2dfdaa4.scope.
Jan 23 04:37:55 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:37:56 np0005593232 podman[275224]: 2026-01-23 09:37:56.018580424 +0000 UTC m=+0.168442742 container init 6f32a4cd4b1be4cd683d1c9ed9979fee902638f29745648a5306f832b2dfdaa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:37:56 np0005593232 podman[275224]: 2026-01-23 09:37:56.028487455 +0000 UTC m=+0.178349753 container start 6f32a4cd4b1be4cd683d1c9ed9979fee902638f29745648a5306f832b2dfdaa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 23 04:37:56 np0005593232 wonderful_babbage[275241]: 167 167
Jan 23 04:37:56 np0005593232 systemd[1]: libpod-6f32a4cd4b1be4cd683d1c9ed9979fee902638f29745648a5306f832b2dfdaa4.scope: Deactivated successfully.
Jan 23 04:37:56 np0005593232 podman[275224]: 2026-01-23 09:37:56.034475494 +0000 UTC m=+0.184337782 container attach 6f32a4cd4b1be4cd683d1c9ed9979fee902638f29745648a5306f832b2dfdaa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_babbage, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 23 04:37:56 np0005593232 podman[275224]: 2026-01-23 09:37:56.034987459 +0000 UTC m=+0.184849797 container died 6f32a4cd4b1be4cd683d1c9ed9979fee902638f29745648a5306f832b2dfdaa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_babbage, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 04:37:56 np0005593232 systemd[1]: var-lib-containers-storage-overlay-827aa6349d74bae3fa8c1943476055255c014af85ffcba07c0e7c9dbe306692c-merged.mount: Deactivated successfully.
Jan 23 04:37:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:37:56 np0005593232 podman[275224]: 2026-01-23 09:37:56.092803966 +0000 UTC m=+0.242666254 container remove 6f32a4cd4b1be4cd683d1c9ed9979fee902638f29745648a5306f832b2dfdaa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_babbage, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Jan 23 04:37:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:56.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:37:56 np0005593232 systemd[1]: libpod-conmon-6f32a4cd4b1be4cd683d1c9ed9979fee902638f29745648a5306f832b2dfdaa4.scope: Deactivated successfully.
Jan 23 04:37:56 np0005593232 podman[275264]: 2026-01-23 09:37:56.31254494 +0000 UTC m=+0.046008495 container create cfb50dfe052e9bc6b09b546546ff2654f48ac09434bc29e323fd2a87ec37f45c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 04:37:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:56.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:56 np0005593232 systemd[1]: Started libpod-conmon-cfb50dfe052e9bc6b09b546546ff2654f48ac09434bc29e323fd2a87ec37f45c.scope.
Jan 23 04:37:56 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:37:56 np0005593232 podman[275264]: 2026-01-23 09:37:56.288740875 +0000 UTC m=+0.022204450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:37:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bf34509dc63892c2854a394ce278b0c1b2dfcd99d63f992a212361d905023a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:37:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bf34509dc63892c2854a394ce278b0c1b2dfcd99d63f992a212361d905023a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:37:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bf34509dc63892c2854a394ce278b0c1b2dfcd99d63f992a212361d905023a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:37:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bf34509dc63892c2854a394ce278b0c1b2dfcd99d63f992a212361d905023a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:37:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bf34509dc63892c2854a394ce278b0c1b2dfcd99d63f992a212361d905023a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:37:56 np0005593232 podman[275264]: 2026-01-23 09:37:56.403471145 +0000 UTC m=+0.136934720 container init cfb50dfe052e9bc6b09b546546ff2654f48ac09434bc29e323fd2a87ec37f45c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_tesla, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 23 04:37:56 np0005593232 podman[275264]: 2026-01-23 09:37:56.410182455 +0000 UTC m=+0.143646010 container start cfb50dfe052e9bc6b09b546546ff2654f48ac09434bc29e323fd2a87ec37f45c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 04:37:56 np0005593232 podman[275264]: 2026-01-23 09:37:56.413937432 +0000 UTC m=+0.147401017 container attach cfb50dfe052e9bc6b09b546546ff2654f48ac09434bc29e323fd2a87ec37f45c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 04:37:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1317: 321 pgs: 321 active+clean; 484 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 6.1 MiB/s wr, 326 op/s
Jan 23 04:37:57 np0005593232 unruffled_tesla[275281]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:37:57 np0005593232 unruffled_tesla[275281]: --> relative data size: 1.0
Jan 23 04:37:57 np0005593232 unruffled_tesla[275281]: --> All data devices are unavailable
Jan 23 04:37:57 np0005593232 systemd[1]: libpod-cfb50dfe052e9bc6b09b546546ff2654f48ac09434bc29e323fd2a87ec37f45c.scope: Deactivated successfully.
Jan 23 04:37:57 np0005593232 conmon[275281]: conmon cfb50dfe052e9bc6b09b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cfb50dfe052e9bc6b09b546546ff2654f48ac09434bc29e323fd2a87ec37f45c.scope/container/memory.events
Jan 23 04:37:57 np0005593232 podman[275297]: 2026-01-23 09:37:57.281471994 +0000 UTC m=+0.044025578 container died cfb50dfe052e9bc6b09b546546ff2654f48ac09434bc29e323fd2a87ec37f45c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 23 04:37:57 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6bf34509dc63892c2854a394ce278b0c1b2dfcd99d63f992a212361d905023a4-merged.mount: Deactivated successfully.
Jan 23 04:37:57 np0005593232 podman[275297]: 2026-01-23 09:37:57.350252373 +0000 UTC m=+0.112805857 container remove cfb50dfe052e9bc6b09b546546ff2654f48ac09434bc29e323fd2a87ec37f45c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_tesla, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 04:37:57 np0005593232 systemd[1]: libpod-conmon-cfb50dfe052e9bc6b09b546546ff2654f48ac09434bc29e323fd2a87ec37f45c.scope: Deactivated successfully.
Jan 23 04:37:58 np0005593232 podman[275451]: 2026-01-23 09:37:58.010845574 +0000 UTC m=+0.045815259 container create 11109ba02adba16116488fe36cf4035387a44a2f3314fd3da2b0f0b742102418 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lewin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 04:37:58 np0005593232 systemd[1]: Started libpod-conmon-11109ba02adba16116488fe36cf4035387a44a2f3314fd3da2b0f0b742102418.scope.
Jan 23 04:37:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:58.082 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:37:58 np0005593232 nova_compute[250269]: 2026-01-23 09:37:58.083 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:37:58.084 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:37:58 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:37:58 np0005593232 podman[275451]: 2026-01-23 09:37:57.992255717 +0000 UTC m=+0.027225422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:37:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:37:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:37:58.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:37:58 np0005593232 podman[275451]: 2026-01-23 09:37:58.114228932 +0000 UTC m=+0.149198637 container init 11109ba02adba16116488fe36cf4035387a44a2f3314fd3da2b0f0b742102418 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:37:58 np0005593232 podman[275451]: 2026-01-23 09:37:58.122088335 +0000 UTC m=+0.157058020 container start 11109ba02adba16116488fe36cf4035387a44a2f3314fd3da2b0f0b742102418 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lewin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 04:37:58 np0005593232 hungry_lewin[275467]: 167 167
Jan 23 04:37:58 np0005593232 podman[275451]: 2026-01-23 09:37:58.125860482 +0000 UTC m=+0.160830167 container attach 11109ba02adba16116488fe36cf4035387a44a2f3314fd3da2b0f0b742102418 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lewin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 23 04:37:58 np0005593232 systemd[1]: libpod-11109ba02adba16116488fe36cf4035387a44a2f3314fd3da2b0f0b742102418.scope: Deactivated successfully.
Jan 23 04:37:58 np0005593232 conmon[275467]: conmon 11109ba02adba1611648 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-11109ba02adba16116488fe36cf4035387a44a2f3314fd3da2b0f0b742102418.scope/container/memory.events
Jan 23 04:37:58 np0005593232 podman[275451]: 2026-01-23 09:37:58.127890789 +0000 UTC m=+0.162860494 container died 11109ba02adba16116488fe36cf4035387a44a2f3314fd3da2b0f0b742102418 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:37:58 np0005593232 systemd[1]: var-lib-containers-storage-overlay-94f3215912ebfabb3ffbb6728697e85224ef8703da6156a8cef270a1fe180056-merged.mount: Deactivated successfully.
Jan 23 04:37:58 np0005593232 podman[275451]: 2026-01-23 09:37:58.172450481 +0000 UTC m=+0.207420166 container remove 11109ba02adba16116488fe36cf4035387a44a2f3314fd3da2b0f0b742102418 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 04:37:58 np0005593232 systemd[1]: libpod-conmon-11109ba02adba16116488fe36cf4035387a44a2f3314fd3da2b0f0b742102418.scope: Deactivated successfully.
Jan 23 04:37:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:37:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:37:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:37:58.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:37:58 np0005593232 podman[275490]: 2026-01-23 09:37:58.357154613 +0000 UTC m=+0.040555150 container create 03866e15f445900010a9c792abab603913add77040f829ea74a388d6cc6176e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lehmann, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:37:58 np0005593232 systemd[1]: Started libpod-conmon-03866e15f445900010a9c792abab603913add77040f829ea74a388d6cc6176e5.scope.
Jan 23 04:37:58 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:37:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fa1145e36e72d661321fe5814fcd4d754a73fb33e5b54958ea0df6e44dc73e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:37:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fa1145e36e72d661321fe5814fcd4d754a73fb33e5b54958ea0df6e44dc73e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:37:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fa1145e36e72d661321fe5814fcd4d754a73fb33e5b54958ea0df6e44dc73e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:37:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8fa1145e36e72d661321fe5814fcd4d754a73fb33e5b54958ea0df6e44dc73e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:37:58 np0005593232 podman[275490]: 2026-01-23 09:37:58.421219808 +0000 UTC m=+0.104620365 container init 03866e15f445900010a9c792abab603913add77040f829ea74a388d6cc6176e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 04:37:58 np0005593232 podman[275490]: 2026-01-23 09:37:58.428616477 +0000 UTC m=+0.112017004 container start 03866e15f445900010a9c792abab603913add77040f829ea74a388d6cc6176e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lehmann, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 04:37:58 np0005593232 podman[275490]: 2026-01-23 09:37:58.431946512 +0000 UTC m=+0.115347079 container attach 03866e15f445900010a9c792abab603913add77040f829ea74a388d6cc6176e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lehmann, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 04:37:58 np0005593232 podman[275490]: 2026-01-23 09:37:58.339181984 +0000 UTC m=+0.022582551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:37:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:37:59 np0005593232 nova_compute[250269]: 2026-01-23 09:37:59.076 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:37:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1318: 321 pgs: 321 active+clean; 484 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.0 MiB/s wr, 257 op/s
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]: {
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:    "0": [
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:        {
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:            "devices": [
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:                "/dev/loop3"
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:            ],
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:            "lv_name": "ceph_lv0",
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:            "lv_size": "7511998464",
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:            "name": "ceph_lv0",
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:            "tags": {
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:                "ceph.cluster_name": "ceph",
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:                "ceph.crush_device_class": "",
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:                "ceph.encrypted": "0",
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:                "ceph.osd_id": "0",
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:                "ceph.type": "block",
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:                "ceph.vdo": "0"
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:            },
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:            "type": "block",
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:            "vg_name": "ceph_vg0"
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:        }
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]:    ]
Jan 23 04:37:59 np0005593232 pedantic_lehmann[275506]: }
Jan 23 04:37:59 np0005593232 systemd[1]: libpod-03866e15f445900010a9c792abab603913add77040f829ea74a388d6cc6176e5.scope: Deactivated successfully.
Jan 23 04:37:59 np0005593232 podman[275490]: 2026-01-23 09:37:59.274794425 +0000 UTC m=+0.958194972 container died 03866e15f445900010a9c792abab603913add77040f829ea74a388d6cc6176e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:37:59 np0005593232 nova_compute[250269]: 2026-01-23 09:37:59.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:37:59 np0005593232 nova_compute[250269]: 2026-01-23 09:37:59.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 23 04:37:59 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a8fa1145e36e72d661321fe5814fcd4d754a73fb33e5b54958ea0df6e44dc73e-merged.mount: Deactivated successfully.
Jan 23 04:37:59 np0005593232 podman[275490]: 2026-01-23 09:37:59.397580873 +0000 UTC m=+1.080981410 container remove 03866e15f445900010a9c792abab603913add77040f829ea74a388d6cc6176e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:37:59 np0005593232 systemd[1]: libpod-conmon-03866e15f445900010a9c792abab603913add77040f829ea74a388d6cc6176e5.scope: Deactivated successfully.
Jan 23 04:37:59 np0005593232 podman[275517]: 2026-01-23 09:37:59.40557543 +0000 UTC m=+0.100858218 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 04:38:00 np0005593232 podman[275694]: 2026-01-23 09:38:00.072849149 +0000 UTC m=+0.040230380 container create d3ade8154c83e4ae092a14e1d3d8b4a0ceca9f07399945d91d61ab457155ce58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shockley, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 23 04:38:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:00.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:00 np0005593232 systemd[1]: Started libpod-conmon-d3ade8154c83e4ae092a14e1d3d8b4a0ceca9f07399945d91d61ab457155ce58.scope.
Jan 23 04:38:00 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:38:00 np0005593232 podman[275694]: 2026-01-23 09:38:00.054891891 +0000 UTC m=+0.022273142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:38:00 np0005593232 podman[275694]: 2026-01-23 09:38:00.178124951 +0000 UTC m=+0.145506192 container init d3ade8154c83e4ae092a14e1d3d8b4a0ceca9f07399945d91d61ab457155ce58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:38:00 np0005593232 podman[275694]: 2026-01-23 09:38:00.185889851 +0000 UTC m=+0.153271082 container start d3ade8154c83e4ae092a14e1d3d8b4a0ceca9f07399945d91d61ab457155ce58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 04:38:00 np0005593232 relaxed_shockley[275710]: 167 167
Jan 23 04:38:00 np0005593232 systemd[1]: libpod-d3ade8154c83e4ae092a14e1d3d8b4a0ceca9f07399945d91d61ab457155ce58.scope: Deactivated successfully.
Jan 23 04:38:00 np0005593232 podman[275694]: 2026-01-23 09:38:00.203230752 +0000 UTC m=+0.170612003 container attach d3ade8154c83e4ae092a14e1d3d8b4a0ceca9f07399945d91d61ab457155ce58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shockley, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 04:38:00 np0005593232 podman[275694]: 2026-01-23 09:38:00.203621384 +0000 UTC m=+0.171002615 container died d3ade8154c83e4ae092a14e1d3d8b4a0ceca9f07399945d91d61ab457155ce58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shockley, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:38:00 np0005593232 systemd[1]: var-lib-containers-storage-overlay-90a017bb4030a49d54e67d349b7c8897da50adb464ebb89c3ba58af2566ead6f-merged.mount: Deactivated successfully.
Jan 23 04:38:00 np0005593232 podman[275694]: 2026-01-23 09:38:00.302566846 +0000 UTC m=+0.269948077 container remove d3ade8154c83e4ae092a14e1d3d8b4a0ceca9f07399945d91d61ab457155ce58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_shockley, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:38:00 np0005593232 systemd[1]: libpod-conmon-d3ade8154c83e4ae092a14e1d3d8b4a0ceca9f07399945d91d61ab457155ce58.scope: Deactivated successfully.
Jan 23 04:38:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:00.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:00 np0005593232 podman[275736]: 2026-01-23 09:38:00.498945019 +0000 UTC m=+0.055391330 container create 99edce5b40faeb6b48898ec0bb05602a527975cc55737d1c685e8a56925e95a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bohr, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:38:00 np0005593232 systemd[1]: Started libpod-conmon-99edce5b40faeb6b48898ec0bb05602a527975cc55737d1c685e8a56925e95a1.scope.
Jan 23 04:38:00 np0005593232 podman[275736]: 2026-01-23 09:38:00.466435368 +0000 UTC m=+0.022881759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:38:00 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:38:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c2cddae74d315628ae97f03b22a4140a174d0e8a872477895866a0e34798a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:38:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c2cddae74d315628ae97f03b22a4140a174d0e8a872477895866a0e34798a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:38:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c2cddae74d315628ae97f03b22a4140a174d0e8a872477895866a0e34798a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:38:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25c2cddae74d315628ae97f03b22a4140a174d0e8a872477895866a0e34798a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:38:00 np0005593232 podman[275736]: 2026-01-23 09:38:00.608843631 +0000 UTC m=+0.165289942 container init 99edce5b40faeb6b48898ec0bb05602a527975cc55737d1c685e8a56925e95a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bohr, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:38:00 np0005593232 podman[275736]: 2026-01-23 09:38:00.617374493 +0000 UTC m=+0.173820804 container start 99edce5b40faeb6b48898ec0bb05602a527975cc55737d1c685e8a56925e95a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bohr, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:38:00 np0005593232 podman[275736]: 2026-01-23 09:38:00.621029057 +0000 UTC m=+0.177475388 container attach 99edce5b40faeb6b48898ec0bb05602a527975cc55737d1c685e8a56925e95a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:38:00 np0005593232 nova_compute[250269]: 2026-01-23 09:38:00.751 250273 DEBUG nova.compute.manager [req-be716531-006b-4c55-b126-6fd8953f413f req-136226e2-74cf-4ead-bccf-dc10fe98d879 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Received event network-vif-unplugged-27e277b3-2135-4e3e-b336-e0da87509465 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:38:00 np0005593232 nova_compute[250269]: 2026-01-23 09:38:00.754 250273 DEBUG oslo_concurrency.lockutils [req-be716531-006b-4c55-b126-6fd8953f413f req-136226e2-74cf-4ead-bccf-dc10fe98d879 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:38:00 np0005593232 nova_compute[250269]: 2026-01-23 09:38:00.754 250273 DEBUG oslo_concurrency.lockutils [req-be716531-006b-4c55-b126-6fd8953f413f req-136226e2-74cf-4ead-bccf-dc10fe98d879 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:38:00 np0005593232 nova_compute[250269]: 2026-01-23 09:38:00.755 250273 DEBUG oslo_concurrency.lockutils [req-be716531-006b-4c55-b126-6fd8953f413f req-136226e2-74cf-4ead-bccf-dc10fe98d879 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:38:00 np0005593232 nova_compute[250269]: 2026-01-23 09:38:00.755 250273 DEBUG nova.compute.manager [req-be716531-006b-4c55-b126-6fd8953f413f req-136226e2-74cf-4ead-bccf-dc10fe98d879 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] No waiting events found dispatching network-vif-unplugged-27e277b3-2135-4e3e-b336-e0da87509465 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:38:00 np0005593232 nova_compute[250269]: 2026-01-23 09:38:00.755 250273 DEBUG nova.compute.manager [req-be716531-006b-4c55-b126-6fd8953f413f req-136226e2-74cf-4ead-bccf-dc10fe98d879 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Received event network-vif-unplugged-27e277b3-2135-4e3e-b336-e0da87509465 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 04:38:00 np0005593232 nova_compute[250269]: 2026-01-23 09:38:00.763 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1319: 321 pgs: 321 active+clean; 484 MiB data, 632 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.4 MiB/s wr, 164 op/s
Jan 23 04:38:01 np0005593232 nova_compute[250269]: 2026-01-23 09:38:01.442 250273 INFO nova.compute.manager [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Took 6.09 seconds for pre_live_migration on destination host compute-2.ctlplane.example.com.#033[00m
Jan 23 04:38:01 np0005593232 nova_compute[250269]: 2026-01-23 09:38:01.443 250273 DEBUG nova.compute.manager [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:38:01 np0005593232 nova_compute[250269]: 2026-01-23 09:38:01.466 250273 DEBUG nova.compute.manager [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmphmsj_e0s',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='261ab1ec-f79b-4867-bcb6-1c1d7491120e',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(6e07873c-3bce-4fe1-8af8-b92dbe7fb0df),old_vol_attachment_ids={b06791ec-66fd-4114-8448-7ea0b7f88f25='1efcf993-49fb-4692-9e1a-35930d237781'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Jan 23 04:38:01 np0005593232 nova_compute[250269]: 2026-01-23 09:38:01.470 250273 DEBUG nova.objects.instance [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Lazy-loading 'migration_context' on Instance uuid 261ab1ec-f79b-4867-bcb6-1c1d7491120e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:38:01 np0005593232 nova_compute[250269]: 2026-01-23 09:38:01.472 250273 DEBUG nova.virt.libvirt.driver [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Jan 23 04:38:01 np0005593232 nova_compute[250269]: 2026-01-23 09:38:01.473 250273 DEBUG nova.virt.libvirt.driver [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Jan 23 04:38:01 np0005593232 nova_compute[250269]: 2026-01-23 09:38:01.474 250273 DEBUG nova.virt.libvirt.driver [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Jan 23 04:38:01 np0005593232 hungry_bohr[275753]: {
Jan 23 04:38:01 np0005593232 hungry_bohr[275753]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:38:01 np0005593232 hungry_bohr[275753]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:38:01 np0005593232 hungry_bohr[275753]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:38:01 np0005593232 hungry_bohr[275753]:        "osd_id": 0,
Jan 23 04:38:01 np0005593232 hungry_bohr[275753]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:38:01 np0005593232 hungry_bohr[275753]:        "type": "bluestore"
Jan 23 04:38:01 np0005593232 hungry_bohr[275753]:    }
Jan 23 04:38:01 np0005593232 hungry_bohr[275753]: }
Jan 23 04:38:01 np0005593232 nova_compute[250269]: 2026-01-23 09:38:01.499 250273 DEBUG nova.virt.libvirt.migration [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Find same serial number: pos=1, serial=b06791ec-66fd-4114-8448-7ea0b7f88f25 _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242#033[00m
Jan 23 04:38:01 np0005593232 nova_compute[250269]: 2026-01-23 09:38:01.500 250273 DEBUG nova.virt.libvirt.vif [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-23T09:37:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-724421301',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-724421301',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:37:26Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d0dce6e339c349d4ab97cee5e49fff3a',ramdisk_id='',reservation_id='r-106tqp53',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1207260646',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1207260646-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:37:50Z,user_data=None,user_id='4f72965e950c4761bfedd99fdc411a83',uuid=261ab1ec-f79b-4867-bcb6-1c1d7491120e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27e277b3-2135-4e3e-b336-e0da87509465", "address": "fa:16:3e:34:06:0e", "network": {"id": "8eab8076-0848-4daf-bbac-f3f8b65ca750", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1369330040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0dce6e339c349d4ab97cee5e49fff3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap27e277b3-21", "ovs_interfaceid": "27e277b3-2135-4e3e-b336-e0da87509465", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:38:01 np0005593232 nova_compute[250269]: 2026-01-23 09:38:01.501 250273 DEBUG nova.network.os_vif_util [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Converting VIF {"id": "27e277b3-2135-4e3e-b336-e0da87509465", "address": "fa:16:3e:34:06:0e", "network": {"id": "8eab8076-0848-4daf-bbac-f3f8b65ca750", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1369330040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0dce6e339c349d4ab97cee5e49fff3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap27e277b3-21", "ovs_interfaceid": "27e277b3-2135-4e3e-b336-e0da87509465", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:38:01 np0005593232 nova_compute[250269]: 2026-01-23 09:38:01.501 250273 DEBUG nova.network.os_vif_util [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:06:0e,bridge_name='br-int',has_traffic_filtering=True,id=27e277b3-2135-4e3e-b336-e0da87509465,network=Network(8eab8076-0848-4daf-bbac-f3f8b65ca750),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27e277b3-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:38:01 np0005593232 nova_compute[250269]: 2026-01-23 09:38:01.502 250273 DEBUG nova.virt.libvirt.migration [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Updating guest XML with vif config: <interface type="ethernet">
Jan 23 04:38:01 np0005593232 nova_compute[250269]:  <mac address="fa:16:3e:34:06:0e"/>
Jan 23 04:38:01 np0005593232 nova_compute[250269]:  <model type="virtio"/>
Jan 23 04:38:01 np0005593232 nova_compute[250269]:  <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:38:01 np0005593232 nova_compute[250269]:  <mtu size="1442"/>
Jan 23 04:38:01 np0005593232 nova_compute[250269]:  <target dev="tap27e277b3-21"/>
Jan 23 04:38:01 np0005593232 nova_compute[250269]: </interface>
Jan 23 04:38:01 np0005593232 nova_compute[250269]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Jan 23 04:38:01 np0005593232 nova_compute[250269]: 2026-01-23 09:38:01.502 250273 DEBUG nova.virt.libvirt.driver [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Jan 23 04:38:01 np0005593232 systemd[1]: libpod-99edce5b40faeb6b48898ec0bb05602a527975cc55737d1c685e8a56925e95a1.scope: Deactivated successfully.
Jan 23 04:38:01 np0005593232 podman[275736]: 2026-01-23 09:38:01.520526245 +0000 UTC m=+1.076972546 container died 99edce5b40faeb6b48898ec0bb05602a527975cc55737d1c685e8a56925e95a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bohr, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 23 04:38:01 np0005593232 systemd[1]: var-lib-containers-storage-overlay-25c2cddae74d315628ae97f03b22a4140a174d0e8a872477895866a0e34798a9-merged.mount: Deactivated successfully.
Jan 23 04:38:01 np0005593232 podman[275736]: 2026-01-23 09:38:01.577292853 +0000 UTC m=+1.133739184 container remove 99edce5b40faeb6b48898ec0bb05602a527975cc55737d1c685e8a56925e95a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_bohr, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 04:38:01 np0005593232 systemd[1]: libpod-conmon-99edce5b40faeb6b48898ec0bb05602a527975cc55737d1c685e8a56925e95a1.scope: Deactivated successfully.
Jan 23 04:38:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:38:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:38:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:38:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:38:01 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d1f1a2bf-ca67-40a3-8e7e-6157c1cabd6d does not exist
Jan 23 04:38:01 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 5b2e3f83-6c0a-42aa-a568-91375ea4b916 does not exist
Jan 23 04:38:01 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev e0191d3c-2f48-4bcd-ad75-0c9c0f25d310 does not exist
Jan 23 04:38:01 np0005593232 nova_compute[250269]: 2026-01-23 09:38:01.977 250273 DEBUG nova.virt.libvirt.migration [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 23 04:38:01 np0005593232 nova_compute[250269]: 2026-01-23 09:38:01.978 250273 INFO nova.virt.libvirt.migration [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Jan 23 04:38:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:02.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:02 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:38:02 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:38:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:02.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:02 np0005593232 nova_compute[250269]: 2026-01-23 09:38:02.381 250273 INFO nova.virt.libvirt.driver [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Jan 23 04:38:02 np0005593232 nova_compute[250269]: 2026-01-23 09:38:02.864 250273 DEBUG nova.compute.manager [req-c8d3235f-7a84-4409-9147-a71f72ce3c98 req-35a5dde1-b0aa-4a8c-a104-9edd18809bb9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Received event network-vif-plugged-27e277b3-2135-4e3e-b336-e0da87509465 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:38:02 np0005593232 nova_compute[250269]: 2026-01-23 09:38:02.865 250273 DEBUG oslo_concurrency.lockutils [req-c8d3235f-7a84-4409-9147-a71f72ce3c98 req-35a5dde1-b0aa-4a8c-a104-9edd18809bb9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:38:02 np0005593232 nova_compute[250269]: 2026-01-23 09:38:02.866 250273 DEBUG oslo_concurrency.lockutils [req-c8d3235f-7a84-4409-9147-a71f72ce3c98 req-35a5dde1-b0aa-4a8c-a104-9edd18809bb9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:38:02 np0005593232 nova_compute[250269]: 2026-01-23 09:38:02.866 250273 DEBUG oslo_concurrency.lockutils [req-c8d3235f-7a84-4409-9147-a71f72ce3c98 req-35a5dde1-b0aa-4a8c-a104-9edd18809bb9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:38:02 np0005593232 nova_compute[250269]: 2026-01-23 09:38:02.866 250273 DEBUG nova.compute.manager [req-c8d3235f-7a84-4409-9147-a71f72ce3c98 req-35a5dde1-b0aa-4a8c-a104-9edd18809bb9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] No waiting events found dispatching network-vif-plugged-27e277b3-2135-4e3e-b336-e0da87509465 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:38:02 np0005593232 nova_compute[250269]: 2026-01-23 09:38:02.867 250273 WARNING nova.compute.manager [req-c8d3235f-7a84-4409-9147-a71f72ce3c98 req-35a5dde1-b0aa-4a8c-a104-9edd18809bb9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Received unexpected event network-vif-plugged-27e277b3-2135-4e3e-b336-e0da87509465 for instance with vm_state active and task_state migrating.#033[00m
Jan 23 04:38:02 np0005593232 nova_compute[250269]: 2026-01-23 09:38:02.867 250273 DEBUG nova.compute.manager [req-c8d3235f-7a84-4409-9147-a71f72ce3c98 req-35a5dde1-b0aa-4a8c-a104-9edd18809bb9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Received event network-changed-27e277b3-2135-4e3e-b336-e0da87509465 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:38:02 np0005593232 nova_compute[250269]: 2026-01-23 09:38:02.867 250273 DEBUG nova.compute.manager [req-c8d3235f-7a84-4409-9147-a71f72ce3c98 req-35a5dde1-b0aa-4a8c-a104-9edd18809bb9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Refreshing instance network info cache due to event network-changed-27e277b3-2135-4e3e-b336-e0da87509465. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:38:02 np0005593232 nova_compute[250269]: 2026-01-23 09:38:02.868 250273 DEBUG oslo_concurrency.lockutils [req-c8d3235f-7a84-4409-9147-a71f72ce3c98 req-35a5dde1-b0aa-4a8c-a104-9edd18809bb9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-261ab1ec-f79b-4867-bcb6-1c1d7491120e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:38:02 np0005593232 nova_compute[250269]: 2026-01-23 09:38:02.868 250273 DEBUG oslo_concurrency.lockutils [req-c8d3235f-7a84-4409-9147-a71f72ce3c98 req-35a5dde1-b0aa-4a8c-a104-9edd18809bb9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-261ab1ec-f79b-4867-bcb6-1c1d7491120e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:38:02 np0005593232 nova_compute[250269]: 2026-01-23 09:38:02.868 250273 DEBUG nova.network.neutron [req-c8d3235f-7a84-4409-9147-a71f72ce3c98 req-35a5dde1-b0aa-4a8c-a104-9edd18809bb9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Refreshing network info cache for port 27e277b3-2135-4e3e-b336-e0da87509465 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:38:02 np0005593232 nova_compute[250269]: 2026-01-23 09:38:02.885 250273 DEBUG nova.virt.libvirt.migration [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 23 04:38:02 np0005593232 nova_compute[250269]: 2026-01-23 09:38:02.886 250273 DEBUG nova.virt.libvirt.migration [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Jan 23 04:38:03 np0005593232 nova_compute[250269]: 2026-01-23 09:38:03.015 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161083.014945, 261ab1ec-f79b-4867-bcb6-1c1d7491120e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:38:03 np0005593232 nova_compute[250269]: 2026-01-23 09:38:03.017 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:38:03 np0005593232 nova_compute[250269]: 2026-01-23 09:38:03.052 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:38:03 np0005593232 nova_compute[250269]: 2026-01-23 09:38:03.056 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:38:03 np0005593232 nova_compute[250269]: 2026-01-23 09:38:03.086 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Jan 23 04:38:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1320: 321 pgs: 321 active+clean; 451 MiB data, 606 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 231 op/s
Jan 23 04:38:03 np0005593232 nova_compute[250269]: 2026-01-23 09:38:03.314 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:38:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:38:03 np0005593232 kernel: tap27e277b3-21 (unregistering): left promiscuous mode
Jan 23 04:38:03 np0005593232 NetworkManager[49057]: <info>  [1769161083.8622] device (tap27e277b3-21): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:38:03 np0005593232 nova_compute[250269]: 2026-01-23 09:38:03.880 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:03 np0005593232 ovn_controller[151001]: 2026-01-23T09:38:03Z|00105|binding|INFO|Releasing lport 27e277b3-2135-4e3e-b336-e0da87509465 from this chassis (sb_readonly=0)
Jan 23 04:38:03 np0005593232 ovn_controller[151001]: 2026-01-23T09:38:03Z|00106|binding|INFO|Setting lport 27e277b3-2135-4e3e-b336-e0da87509465 down in Southbound
Jan 23 04:38:03 np0005593232 ovn_controller[151001]: 2026-01-23T09:38:03Z|00107|binding|INFO|Removing iface tap27e277b3-21 ovn-installed in OVS
Jan 23 04:38:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:03.898 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:06:0e 10.100.0.11'], port_security=['fa:16:3e:34:06:0e 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com,compute-2.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '3ec410d4-99bb-47ec-9f70-86f8400b2621'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '261ab1ec-f79b-4867-bcb6-1c1d7491120e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8eab8076-0848-4daf-bbac-f3f8b65ca750', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd0dce6e339c349d4ab97cee5e49fff3a', 'neutron:revision_number': '17', 'neutron:security_group_ids': '0179c400-b2f2-4914-b563-942a61ef1858', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbb60528-b878-42fd-9c2f-0a3345010b1a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=27e277b3-2135-4e3e-b336-e0da87509465) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:38:03 np0005593232 nova_compute[250269]: 2026-01-23 09:38:03.900 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:03.901 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 27e277b3-2135-4e3e-b336-e0da87509465 in datapath 8eab8076-0848-4daf-bbac-f3f8b65ca750 unbound from our chassis#033[00m
Jan 23 04:38:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:03.903 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8eab8076-0848-4daf-bbac-f3f8b65ca750#033[00m
Jan 23 04:38:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:03.918 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c78387e3-89da-49b5-8425-b83f899f22ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:38:03 np0005593232 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Jan 23 04:38:03 np0005593232 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000001d.scope: Consumed 2.541s CPU time.
Jan 23 04:38:03 np0005593232 systemd-machined[215836]: Machine qemu-15-instance-0000001d terminated.
Jan 23 04:38:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:03.946 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[153b3318-c84f-4c12-a3ac-4269b60f3db1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:38:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:03.949 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[53ce0140-13fe-45ec-a80b-78051394f095]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:38:03 np0005593232 virtqemud[249592]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-b06791ec-66fd-4114-8448-7ea0b7f88f25: No such file or directory
Jan 23 04:38:03 np0005593232 virtqemud[249592]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-b06791ec-66fd-4114-8448-7ea0b7f88f25: No such file or directory
Jan 23 04:38:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:03.976 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[11c699b7-82e6-4013-96df-dea593c913de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:38:03 np0005593232 nova_compute[250269]: 2026-01-23 09:38:03.980 250273 DEBUG nova.virt.libvirt.guest [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Jan 23 04:38:03 np0005593232 nova_compute[250269]: 2026-01-23 09:38:03.980 250273 INFO nova.virt.libvirt.driver [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Migration operation has completed#033[00m
Jan 23 04:38:03 np0005593232 nova_compute[250269]: 2026-01-23 09:38:03.981 250273 INFO nova.compute.manager [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] _post_live_migration() is started..#033[00m
Jan 23 04:38:03 np0005593232 nova_compute[250269]: 2026-01-23 09:38:03.982 250273 DEBUG nova.virt.libvirt.driver [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Jan 23 04:38:03 np0005593232 nova_compute[250269]: 2026-01-23 09:38:03.982 250273 DEBUG nova.virt.libvirt.driver [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Jan 23 04:38:03 np0005593232 nova_compute[250269]: 2026-01-23 09:38:03.983 250273 DEBUG nova.virt.libvirt.driver [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Jan 23 04:38:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:03.992 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[cc9e4b64-3929-44f9-b8da-c4cc1c5631da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8eab8076-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:5b:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 43, 'tx_packets': 8, 'rx_bytes': 2086, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 43, 'tx_packets': 8, 'rx_bytes': 2086, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491454, 'reachable_time': 30252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275863, 'error': None, 'target': 'ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:38:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:04.010 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[bd85bc40-a232-4b44-95ac-7d433d92ed08]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8eab8076-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491465, 'tstamp': 491465}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275866, 'error': None, 'target': 'ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8eab8076-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491468, 'tstamp': 491468}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275866, 'error': None, 'target': 'ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:38:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:04.012 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8eab8076-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:38:04 np0005593232 nova_compute[250269]: 2026-01-23 09:38:04.013 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:04 np0005593232 nova_compute[250269]: 2026-01-23 09:38:04.017 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:04.017 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8eab8076-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:38:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:04.018 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:38:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:04.018 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8eab8076-00, col_values=(('external_ids', {'iface-id': 'b545a870-aa18-4f64-a8a7-f8512824c4cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:38:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:04.018 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:38:04 np0005593232 nova_compute[250269]: 2026-01-23 09:38:04.079 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:04.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:04 np0005593232 nova_compute[250269]: 2026-01-23 09:38:04.224 250273 DEBUG nova.compute.manager [req-7bc79bf9-5b5a-4098-a0c9-764c24fce58e req-2b61784e-943c-4140-8779-ecd06c8ed59d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Received event network-vif-unplugged-27e277b3-2135-4e3e-b336-e0da87509465 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:38:04 np0005593232 nova_compute[250269]: 2026-01-23 09:38:04.225 250273 DEBUG oslo_concurrency.lockutils [req-7bc79bf9-5b5a-4098-a0c9-764c24fce58e req-2b61784e-943c-4140-8779-ecd06c8ed59d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:38:04 np0005593232 nova_compute[250269]: 2026-01-23 09:38:04.226 250273 DEBUG oslo_concurrency.lockutils [req-7bc79bf9-5b5a-4098-a0c9-764c24fce58e req-2b61784e-943c-4140-8779-ecd06c8ed59d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:38:04 np0005593232 nova_compute[250269]: 2026-01-23 09:38:04.226 250273 DEBUG oslo_concurrency.lockutils [req-7bc79bf9-5b5a-4098-a0c9-764c24fce58e req-2b61784e-943c-4140-8779-ecd06c8ed59d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:38:04 np0005593232 nova_compute[250269]: 2026-01-23 09:38:04.227 250273 DEBUG nova.compute.manager [req-7bc79bf9-5b5a-4098-a0c9-764c24fce58e req-2b61784e-943c-4140-8779-ecd06c8ed59d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] No waiting events found dispatching network-vif-unplugged-27e277b3-2135-4e3e-b336-e0da87509465 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:38:04 np0005593232 nova_compute[250269]: 2026-01-23 09:38:04.227 250273 DEBUG nova.compute.manager [req-7bc79bf9-5b5a-4098-a0c9-764c24fce58e req-2b61784e-943c-4140-8779-ecd06c8ed59d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Received event network-vif-unplugged-27e277b3-2135-4e3e-b336-e0da87509465 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 04:38:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:38:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:04.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:38:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1321: 321 pgs: 321 active+clean; 451 MiB data, 606 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.257 250273 DEBUG nova.network.neutron [req-c8d3235f-7a84-4409-9147-a71f72ce3c98 req-35a5dde1-b0aa-4a8c-a104-9edd18809bb9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Updated VIF entry in instance network info cache for port 27e277b3-2135-4e3e-b336-e0da87509465. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.258 250273 DEBUG nova.network.neutron [req-c8d3235f-7a84-4409-9147-a71f72ce3c98 req-35a5dde1-b0aa-4a8c-a104-9edd18809bb9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Updating instance_info_cache with network_info: [{"id": "27e277b3-2135-4e3e-b336-e0da87509465", "address": "fa:16:3e:34:06:0e", "network": {"id": "8eab8076-0848-4daf-bbac-f3f8b65ca750", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1369330040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0dce6e339c349d4ab97cee5e49fff3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27e277b3-21", "ovs_interfaceid": "27e277b3-2135-4e3e-b336-e0da87509465", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true, "migrating_to": "compute-2.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.379 250273 DEBUG oslo_concurrency.lockutils [req-c8d3235f-7a84-4409-9147-a71f72ce3c98 req-35a5dde1-b0aa-4a8c-a104-9edd18809bb9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-261ab1ec-f79b-4867-bcb6-1c1d7491120e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.710 250273 DEBUG nova.network.neutron [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Activated binding for port 27e277b3-2135-4e3e-b336-e0da87509465 and host compute-2.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.710 250273 DEBUG nova.compute.manager [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "27e277b3-2135-4e3e-b336-e0da87509465", "address": "fa:16:3e:34:06:0e", "network": {"id": "8eab8076-0848-4daf-bbac-f3f8b65ca750", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1369330040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0dce6e339c349d4ab97cee5e49fff3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27e277b3-21", "ovs_interfaceid": "27e277b3-2135-4e3e-b336-e0da87509465", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.711 250273 DEBUG nova.virt.libvirt.vif [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-23T09:37:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-724421301',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-724421301',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:37:26Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='d0dce6e339c349d4ab97cee5e49fff3a',ramdisk_id='',reservation_id='r-106tqp53',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1207260646',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1207260646-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:37:53Z,user_data=None,user_id='4f72965e950c4761bfedd99fdc411a83',uuid=261ab1ec-f79b-4867-bcb6-1c1d7491120e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "27e277b3-2135-4e3e-b336-e0da87509465", "address": "fa:16:3e:34:06:0e", "network": {"id": "8eab8076-0848-4daf-bbac-f3f8b65ca750", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1369330040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0dce6e339c349d4ab97cee5e49fff3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27e277b3-21", "ovs_interfaceid": "27e277b3-2135-4e3e-b336-e0da87509465", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.712 250273 DEBUG nova.network.os_vif_util [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Converting VIF {"id": "27e277b3-2135-4e3e-b336-e0da87509465", "address": "fa:16:3e:34:06:0e", "network": {"id": "8eab8076-0848-4daf-bbac-f3f8b65ca750", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1369330040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0dce6e339c349d4ab97cee5e49fff3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap27e277b3-21", "ovs_interfaceid": "27e277b3-2135-4e3e-b336-e0da87509465", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.712 250273 DEBUG nova.network.os_vif_util [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:06:0e,bridge_name='br-int',has_traffic_filtering=True,id=27e277b3-2135-4e3e-b336-e0da87509465,network=Network(8eab8076-0848-4daf-bbac-f3f8b65ca750),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27e277b3-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.712 250273 DEBUG os_vif [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:06:0e,bridge_name='br-int',has_traffic_filtering=True,id=27e277b3-2135-4e3e-b336-e0da87509465,network=Network(8eab8076-0848-4daf-bbac-f3f8b65ca750),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27e277b3-21') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.715 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.715 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap27e277b3-21, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.718 250273 DEBUG nova.compute.manager [req-529d9d7d-763e-4dca-a16c-98a5119feb58 req-d0545983-9176-47a9-bea9-beed587497a9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Received event network-vif-unplugged-27e277b3-2135-4e3e-b336-e0da87509465 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.719 250273 DEBUG oslo_concurrency.lockutils [req-529d9d7d-763e-4dca-a16c-98a5119feb58 req-d0545983-9176-47a9-bea9-beed587497a9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.719 250273 DEBUG oslo_concurrency.lockutils [req-529d9d7d-763e-4dca-a16c-98a5119feb58 req-d0545983-9176-47a9-bea9-beed587497a9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.719 250273 DEBUG oslo_concurrency.lockutils [req-529d9d7d-763e-4dca-a16c-98a5119feb58 req-d0545983-9176-47a9-bea9-beed587497a9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.719 250273 DEBUG nova.compute.manager [req-529d9d7d-763e-4dca-a16c-98a5119feb58 req-d0545983-9176-47a9-bea9-beed587497a9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] No waiting events found dispatching network-vif-unplugged-27e277b3-2135-4e3e-b336-e0da87509465 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.719 250273 DEBUG nova.compute.manager [req-529d9d7d-763e-4dca-a16c-98a5119feb58 req-d0545983-9176-47a9-bea9-beed587497a9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Received event network-vif-unplugged-27e277b3-2135-4e3e-b336-e0da87509465 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.720 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.723 250273 INFO os_vif [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:06:0e,bridge_name='br-int',has_traffic_filtering=True,id=27e277b3-2135-4e3e-b336-e0da87509465,network=Network(8eab8076-0848-4daf-bbac-f3f8b65ca750),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap27e277b3-21')#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.723 250273 DEBUG oslo_concurrency.lockutils [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.723 250273 DEBUG oslo_concurrency.lockutils [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.723 250273 DEBUG oslo_concurrency.lockutils [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.724 250273 DEBUG nova.compute.manager [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.724 250273 INFO nova.virt.libvirt.driver [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Deleting instance files /var/lib/nova/instances/261ab1ec-f79b-4867-bcb6-1c1d7491120e_del#033[00m
Jan 23 04:38:05 np0005593232 nova_compute[250269]: 2026-01-23 09:38:05.725 250273 INFO nova.virt.libvirt.driver [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Deletion of /var/lib/nova/instances/261ab1ec-f79b-4867-bcb6-1c1d7491120e_del complete#033[00m
Jan 23 04:38:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:06.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:06.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:06 np0005593232 nova_compute[250269]: 2026-01-23 09:38:06.813 250273 DEBUG nova.compute.manager [req-5b569c2d-3661-4575-814a-1ca02ed7b969 req-f0012e3a-925a-4f3f-91d4-e09dea6b8ba8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Received event network-vif-plugged-27e277b3-2135-4e3e-b336-e0da87509465 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:38:06 np0005593232 nova_compute[250269]: 2026-01-23 09:38:06.813 250273 DEBUG oslo_concurrency.lockutils [req-5b569c2d-3661-4575-814a-1ca02ed7b969 req-f0012e3a-925a-4f3f-91d4-e09dea6b8ba8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:38:06 np0005593232 nova_compute[250269]: 2026-01-23 09:38:06.814 250273 DEBUG oslo_concurrency.lockutils [req-5b569c2d-3661-4575-814a-1ca02ed7b969 req-f0012e3a-925a-4f3f-91d4-e09dea6b8ba8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:38:06 np0005593232 nova_compute[250269]: 2026-01-23 09:38:06.814 250273 DEBUG oslo_concurrency.lockutils [req-5b569c2d-3661-4575-814a-1ca02ed7b969 req-f0012e3a-925a-4f3f-91d4-e09dea6b8ba8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:38:06 np0005593232 nova_compute[250269]: 2026-01-23 09:38:06.814 250273 DEBUG nova.compute.manager [req-5b569c2d-3661-4575-814a-1ca02ed7b969 req-f0012e3a-925a-4f3f-91d4-e09dea6b8ba8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] No waiting events found dispatching network-vif-plugged-27e277b3-2135-4e3e-b336-e0da87509465 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:38:06 np0005593232 nova_compute[250269]: 2026-01-23 09:38:06.815 250273 WARNING nova.compute.manager [req-5b569c2d-3661-4575-814a-1ca02ed7b969 req-f0012e3a-925a-4f3f-91d4-e09dea6b8ba8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Received unexpected event network-vif-plugged-27e277b3-2135-4e3e-b336-e0da87509465 for instance with vm_state active and task_state migrating.#033[00m
Jan 23 04:38:06 np0005593232 nova_compute[250269]: 2026-01-23 09:38:06.815 250273 DEBUG nova.compute.manager [req-5b569c2d-3661-4575-814a-1ca02ed7b969 req-f0012e3a-925a-4f3f-91d4-e09dea6b8ba8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Received event network-vif-plugged-27e277b3-2135-4e3e-b336-e0da87509465 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:38:06 np0005593232 nova_compute[250269]: 2026-01-23 09:38:06.815 250273 DEBUG oslo_concurrency.lockutils [req-5b569c2d-3661-4575-814a-1ca02ed7b969 req-f0012e3a-925a-4f3f-91d4-e09dea6b8ba8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:38:06 np0005593232 nova_compute[250269]: 2026-01-23 09:38:06.816 250273 DEBUG oslo_concurrency.lockutils [req-5b569c2d-3661-4575-814a-1ca02ed7b969 req-f0012e3a-925a-4f3f-91d4-e09dea6b8ba8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:38:06 np0005593232 nova_compute[250269]: 2026-01-23 09:38:06.816 250273 DEBUG oslo_concurrency.lockutils [req-5b569c2d-3661-4575-814a-1ca02ed7b969 req-f0012e3a-925a-4f3f-91d4-e09dea6b8ba8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:38:06 np0005593232 nova_compute[250269]: 2026-01-23 09:38:06.816 250273 DEBUG nova.compute.manager [req-5b569c2d-3661-4575-814a-1ca02ed7b969 req-f0012e3a-925a-4f3f-91d4-e09dea6b8ba8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] No waiting events found dispatching network-vif-plugged-27e277b3-2135-4e3e-b336-e0da87509465 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:38:06 np0005593232 nova_compute[250269]: 2026-01-23 09:38:06.817 250273 WARNING nova.compute.manager [req-5b569c2d-3661-4575-814a-1ca02ed7b969 req-f0012e3a-925a-4f3f-91d4-e09dea6b8ba8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Received unexpected event network-vif-plugged-27e277b3-2135-4e3e-b336-e0da87509465 for instance with vm_state active and task_state migrating.#033[00m
Jan 23 04:38:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:07.086 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:38:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1322: 321 pgs: 321 active+clean; 483 MiB data, 622 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 181 op/s
Jan 23 04:38:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:38:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:38:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:38:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:38:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:38:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:38:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:08.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:08 np0005593232 nova_compute[250269]: 2026-01-23 09:38:08.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:38:08 np0005593232 nova_compute[250269]: 2026-01-23 09:38:08.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:38:08 np0005593232 nova_compute[250269]: 2026-01-23 09:38:08.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:38:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:08.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:08 np0005593232 nova_compute[250269]: 2026-01-23 09:38:08.525 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:38:08 np0005593232 nova_compute[250269]: 2026-01-23 09:38:08.526 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:38:08 np0005593232 nova_compute[250269]: 2026-01-23 09:38:08.526 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 04:38:08 np0005593232 nova_compute[250269]: 2026-01-23 09:38:08.526 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:38:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:38:09 np0005593232 nova_compute[250269]: 2026-01-23 09:38:09.080 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1323: 321 pgs: 321 active+clean; 497 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 874 KiB/s rd, 3.6 MiB/s wr, 126 op/s
Jan 23 04:38:09 np0005593232 nova_compute[250269]: 2026-01-23 09:38:09.401 250273 DEBUG nova.compute.manager [req-4bf9d18a-5ddd-417f-b709-b94dfdd04be8 req-ebeaa3d4-e178-4f32-be7b-e42d9bc0ad8f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Received event network-vif-plugged-27e277b3-2135-4e3e-b336-e0da87509465 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:38:09 np0005593232 nova_compute[250269]: 2026-01-23 09:38:09.401 250273 DEBUG oslo_concurrency.lockutils [req-4bf9d18a-5ddd-417f-b709-b94dfdd04be8 req-ebeaa3d4-e178-4f32-be7b-e42d9bc0ad8f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:38:09 np0005593232 nova_compute[250269]: 2026-01-23 09:38:09.401 250273 DEBUG oslo_concurrency.lockutils [req-4bf9d18a-5ddd-417f-b709-b94dfdd04be8 req-ebeaa3d4-e178-4f32-be7b-e42d9bc0ad8f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:38:09 np0005593232 nova_compute[250269]: 2026-01-23 09:38:09.401 250273 DEBUG oslo_concurrency.lockutils [req-4bf9d18a-5ddd-417f-b709-b94dfdd04be8 req-ebeaa3d4-e178-4f32-be7b-e42d9bc0ad8f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:38:09 np0005593232 nova_compute[250269]: 2026-01-23 09:38:09.402 250273 DEBUG nova.compute.manager [req-4bf9d18a-5ddd-417f-b709-b94dfdd04be8 req-ebeaa3d4-e178-4f32-be7b-e42d9bc0ad8f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] No waiting events found dispatching network-vif-plugged-27e277b3-2135-4e3e-b336-e0da87509465 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:38:09 np0005593232 nova_compute[250269]: 2026-01-23 09:38:09.402 250273 WARNING nova.compute.manager [req-4bf9d18a-5ddd-417f-b709-b94dfdd04be8 req-ebeaa3d4-e178-4f32-be7b-e42d9bc0ad8f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Received unexpected event network-vif-plugged-27e277b3-2135-4e3e-b336-e0da87509465 for instance with vm_state active and task_state migrating.#033[00m
Jan 23 04:38:09 np0005593232 nova_compute[250269]: 2026-01-23 09:38:09.402 250273 DEBUG nova.compute.manager [req-4bf9d18a-5ddd-417f-b709-b94dfdd04be8 req-ebeaa3d4-e178-4f32-be7b-e42d9bc0ad8f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Received event network-vif-plugged-27e277b3-2135-4e3e-b336-e0da87509465 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:38:09 np0005593232 nova_compute[250269]: 2026-01-23 09:38:09.402 250273 DEBUG oslo_concurrency.lockutils [req-4bf9d18a-5ddd-417f-b709-b94dfdd04be8 req-ebeaa3d4-e178-4f32-be7b-e42d9bc0ad8f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:38:09 np0005593232 nova_compute[250269]: 2026-01-23 09:38:09.402 250273 DEBUG oslo_concurrency.lockutils [req-4bf9d18a-5ddd-417f-b709-b94dfdd04be8 req-ebeaa3d4-e178-4f32-be7b-e42d9bc0ad8f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:38:09 np0005593232 nova_compute[250269]: 2026-01-23 09:38:09.402 250273 DEBUG oslo_concurrency.lockutils [req-4bf9d18a-5ddd-417f-b709-b94dfdd04be8 req-ebeaa3d4-e178-4f32-be7b-e42d9bc0ad8f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:38:09 np0005593232 nova_compute[250269]: 2026-01-23 09:38:09.403 250273 DEBUG nova.compute.manager [req-4bf9d18a-5ddd-417f-b709-b94dfdd04be8 req-ebeaa3d4-e178-4f32-be7b-e42d9bc0ad8f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] No waiting events found dispatching network-vif-plugged-27e277b3-2135-4e3e-b336-e0da87509465 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:38:09 np0005593232 nova_compute[250269]: 2026-01-23 09:38:09.403 250273 WARNING nova.compute.manager [req-4bf9d18a-5ddd-417f-b709-b94dfdd04be8 req-ebeaa3d4-e178-4f32-be7b-e42d9bc0ad8f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Received unexpected event network-vif-plugged-27e277b3-2135-4e3e-b336-e0da87509465 for instance with vm_state active and task_state migrating.#033[00m
Jan 23 04:38:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:10.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:10.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:10 np0005593232 nova_compute[250269]: 2026-01-23 09:38:10.718 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1324: 321 pgs: 321 active+clean; 497 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 448 KiB/s rd, 3.6 MiB/s wr, 103 op/s
Jan 23 04:38:11 np0005593232 nova_compute[250269]: 2026-01-23 09:38:11.710 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Updating instance_info_cache with network_info: [{"id": "1b51b4db-a755-47c2-9d6b-f75e5cdb0204", "address": "fa:16:3e:c8:6c:17", "network": {"id": "1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-62484463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5f46b255cd4387bd3e4c0acaa39466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b51b4db-a7", "ovs_interfaceid": "1b51b4db-a755-47c2-9d6b-f75e5cdb0204", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:38:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:12.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:12 np0005593232 podman[275896]: 2026-01-23 09:38:12.162699003 +0000 UTC m=+0.057481910 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 04:38:12 np0005593232 nova_compute[250269]: 2026-01-23 09:38:12.242 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:38:12 np0005593232 nova_compute[250269]: 2026-01-23 09:38:12.242 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 04:38:12 np0005593232 nova_compute[250269]: 2026-01-23 09:38:12.242 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:38:12 np0005593232 nova_compute[250269]: 2026-01-23 09:38:12.242 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:38:12 np0005593232 nova_compute[250269]: 2026-01-23 09:38:12.243 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:38:12 np0005593232 nova_compute[250269]: 2026-01-23 09:38:12.243 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:38:12 np0005593232 nova_compute[250269]: 2026-01-23 09:38:12.243 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:38:12 np0005593232 nova_compute[250269]: 2026-01-23 09:38:12.243 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:38:12 np0005593232 nova_compute[250269]: 2026-01-23 09:38:12.243 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:38:12 np0005593232 nova_compute[250269]: 2026-01-23 09:38:12.243 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 23 04:38:12 np0005593232 nova_compute[250269]: 2026-01-23 09:38:12.280 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 23 04:38:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:12.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1325: 321 pgs: 321 active+clean; 484 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.7 MiB/s wr, 268 op/s
Jan 23 04:38:13 np0005593232 nova_compute[250269]: 2026-01-23 09:38:13.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:38:13 np0005593232 nova_compute[250269]: 2026-01-23 09:38:13.332 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:38:13 np0005593232 nova_compute[250269]: 2026-01-23 09:38:13.332 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:38:13 np0005593232 nova_compute[250269]: 2026-01-23 09:38:13.333 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:38:13 np0005593232 nova_compute[250269]: 2026-01-23 09:38:13.333 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:38:13 np0005593232 nova_compute[250269]: 2026-01-23 09:38:13.333 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:38:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:38:13 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2745068445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:38:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:38:13 np0005593232 nova_compute[250269]: 2026-01-23 09:38:13.778 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:38:14 np0005593232 nova_compute[250269]: 2026-01-23 09:38:14.063 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:38:14 np0005593232 nova_compute[250269]: 2026-01-23 09:38:14.063 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:38:14 np0005593232 nova_compute[250269]: 2026-01-23 09:38:14.067 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:38:14 np0005593232 nova_compute[250269]: 2026-01-23 09:38:14.068 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:38:14 np0005593232 nova_compute[250269]: 2026-01-23 09:38:14.085 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:14.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:14 np0005593232 nova_compute[250269]: 2026-01-23 09:38:14.271 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:38:14 np0005593232 nova_compute[250269]: 2026-01-23 09:38:14.272 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4285MB free_disk=20.78537368774414GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:38:14 np0005593232 nova_compute[250269]: 2026-01-23 09:38:14.273 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:38:14 np0005593232 nova_compute[250269]: 2026-01-23 09:38:14.273 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:38:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:38:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:14.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:38:14 np0005593232 nova_compute[250269]: 2026-01-23 09:38:14.590 250273 DEBUG oslo_concurrency.lockutils [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Acquiring lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:38:14 np0005593232 nova_compute[250269]: 2026-01-23 09:38:14.590 250273 DEBUG oslo_concurrency.lockutils [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:38:14 np0005593232 nova_compute[250269]: 2026-01-23 09:38:14.591 250273 DEBUG oslo_concurrency.lockutils [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Lock "261ab1ec-f79b-4867-bcb6-1c1d7491120e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:38:14 np0005593232 nova_compute[250269]: 2026-01-23 09:38:14.626 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Migration for instance 261ab1ec-f79b-4867-bcb6-1c1d7491120e refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 23 04:38:14 np0005593232 nova_compute[250269]: 2026-01-23 09:38:14.649 250273 DEBUG oslo_concurrency.lockutils [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:38:14 np0005593232 nova_compute[250269]: 2026-01-23 09:38:14.749 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Jan 23 04:38:15 np0005593232 nova_compute[250269]: 2026-01-23 09:38:15.113 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 5cea9bfc-e97a-4d07-a251-8ca3978b5f98 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:38:15 np0005593232 nova_compute[250269]: 2026-01-23 09:38:15.114 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:38:15 np0005593232 nova_compute[250269]: 2026-01-23 09:38:15.114 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Migration 6e07873c-3bce-4fe1-8af8-b92dbe7fb0df is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 23 04:38:15 np0005593232 nova_compute[250269]: 2026-01-23 09:38:15.114 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:38:15 np0005593232 nova_compute[250269]: 2026-01-23 09:38:15.114 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:38:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1326: 321 pgs: 321 active+clean; 484 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 201 op/s
Jan 23 04:38:15 np0005593232 nova_compute[250269]: 2026-01-23 09:38:15.721 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:16.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:16 np0005593232 nova_compute[250269]: 2026-01-23 09:38:16.307 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:38:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:16.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:38:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2403723560' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:38:16 np0005593232 nova_compute[250269]: 2026-01-23 09:38:16.727 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:38:16 np0005593232 nova_compute[250269]: 2026-01-23 09:38:16.732 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:38:17 np0005593232 nova_compute[250269]: 2026-01-23 09:38:17.128 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:38:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1327: 321 pgs: 321 active+clean; 484 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.9 MiB/s wr, 251 op/s
Jan 23 04:38:17 np0005593232 nova_compute[250269]: 2026-01-23 09:38:17.228 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:38:17 np0005593232 nova_compute[250269]: 2026-01-23 09:38:17.229 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:38:17 np0005593232 nova_compute[250269]: 2026-01-23 09:38:17.229 250273 DEBUG oslo_concurrency.lockutils [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 2.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:38:17 np0005593232 nova_compute[250269]: 2026-01-23 09:38:17.229 250273 DEBUG oslo_concurrency.lockutils [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:38:17 np0005593232 nova_compute[250269]: 2026-01-23 09:38:17.230 250273 DEBUG nova.compute.resource_tracker [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:38:17 np0005593232 nova_compute[250269]: 2026-01-23 09:38:17.230 250273 DEBUG oslo_concurrency.processutils [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:38:17 np0005593232 nova_compute[250269]: 2026-01-23 09:38:17.251 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:38:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:38:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1223618870' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:38:17 np0005593232 nova_compute[250269]: 2026-01-23 09:38:17.676 250273 DEBUG oslo_concurrency.processutils [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:38:17 np0005593232 nova_compute[250269]: 2026-01-23 09:38:17.830 250273 DEBUG nova.virt.libvirt.driver [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:38:17 np0005593232 nova_compute[250269]: 2026-01-23 09:38:17.831 250273 DEBUG nova.virt.libvirt.driver [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:38:17 np0005593232 nova_compute[250269]: 2026-01-23 09:38:17.834 250273 DEBUG nova.virt.libvirt.driver [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:38:17 np0005593232 nova_compute[250269]: 2026-01-23 09:38:17.835 250273 DEBUG nova.virt.libvirt.driver [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:38:18 np0005593232 nova_compute[250269]: 2026-01-23 09:38:18.002 250273 WARNING nova.virt.libvirt.driver [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:38:18 np0005593232 nova_compute[250269]: 2026-01-23 09:38:18.003 250273 DEBUG nova.compute.resource_tracker [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4273MB free_disk=20.78524398803711GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": 
"0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:38:18 np0005593232 nova_compute[250269]: 2026-01-23 09:38:18.003 250273 DEBUG oslo_concurrency.lockutils [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:38:18 np0005593232 nova_compute[250269]: 2026-01-23 09:38:18.004 250273 DEBUG oslo_concurrency.lockutils [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:38:18 np0005593232 nova_compute[250269]: 2026-01-23 09:38:18.108 250273 DEBUG nova.compute.resource_tracker [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Migration for instance 261ab1ec-f79b-4867-bcb6-1c1d7491120e refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 23 04:38:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:38:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:18.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:38:18 np0005593232 nova_compute[250269]: 2026-01-23 09:38:18.170 250273 DEBUG nova.compute.resource_tracker [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Jan 23 04:38:18 np0005593232 nova_compute[250269]: 2026-01-23 09:38:18.275 250273 DEBUG nova.compute.resource_tracker [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Instance 5cea9bfc-e97a-4d07-a251-8ca3978b5f98 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:38:18 np0005593232 nova_compute[250269]: 2026-01-23 09:38:18.276 250273 DEBUG nova.compute.resource_tracker [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Instance ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:38:18 np0005593232 nova_compute[250269]: 2026-01-23 09:38:18.276 250273 DEBUG nova.compute.resource_tracker [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Migration 6e07873c-3bce-4fe1-8af8-b92dbe7fb0df is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 23 04:38:18 np0005593232 nova_compute[250269]: 2026-01-23 09:38:18.276 250273 DEBUG nova.compute.resource_tracker [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:38:18 np0005593232 nova_compute[250269]: 2026-01-23 09:38:18.277 250273 DEBUG nova.compute.resource_tracker [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:38:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:18.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:18 np0005593232 nova_compute[250269]: 2026-01-23 09:38:18.587 250273 DEBUG oslo_concurrency.processutils [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:38:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:38:18 np0005593232 nova_compute[250269]: 2026-01-23 09:38:18.981 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769161083.9796898, 261ab1ec-f79b-4867-bcb6-1c1d7491120e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:38:18 np0005593232 nova_compute[250269]: 2026-01-23 09:38:18.981 250273 INFO nova.compute.manager [-] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:38:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:38:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1638202772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:38:19 np0005593232 nova_compute[250269]: 2026-01-23 09:38:19.042 250273 DEBUG nova.compute.manager [None req-8561a8c2-aa33-4dea-ba01-e49ea7859f24 - - - - - -] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:38:19 np0005593232 nova_compute[250269]: 2026-01-23 09:38:19.067 250273 DEBUG oslo_concurrency.processutils [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:38:19 np0005593232 nova_compute[250269]: 2026-01-23 09:38:19.075 250273 DEBUG nova.compute.provider_tree [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:38:19 np0005593232 nova_compute[250269]: 2026-01-23 09:38:19.088 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:19 np0005593232 nova_compute[250269]: 2026-01-23 09:38:19.103 250273 DEBUG nova.scheduler.client.report [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:38:19 np0005593232 nova_compute[250269]: 2026-01-23 09:38:19.105 250273 DEBUG nova.compute.resource_tracker [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:38:19 np0005593232 nova_compute[250269]: 2026-01-23 09:38:19.105 250273 DEBUG oslo_concurrency.lockutils [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:38:19 np0005593232 nova_compute[250269]: 2026-01-23 09:38:19.111 250273 INFO nova.compute.manager [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Migrating instance to compute-2.ctlplane.example.com finished successfully.#033[00m
Jan 23 04:38:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1328: 321 pgs: 321 active+clean; 484 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.7 MiB/s wr, 251 op/s
Jan 23 04:38:19 np0005593232 nova_compute[250269]: 2026-01-23 09:38:19.577 250273 INFO nova.scheduler.client.report [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] Deleted allocation for migration 6e07873c-3bce-4fe1-8af8-b92dbe7fb0df#033[00m
Jan 23 04:38:19 np0005593232 nova_compute[250269]: 2026-01-23 09:38:19.578 250273 DEBUG nova.virt.libvirt.driver [None req-5be3b4d0-1559-4f86-9a92-4b6b08004a67 933b0942f8ee41568f9bab0377f99d4a cf9f72c217124dc988cfe3d1b549fa02 - - default default] [instance: 261ab1ec-f79b-4867-bcb6-1c1d7491120e] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Jan 23 04:38:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:20.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:38:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:20.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:38:20 np0005593232 nova_compute[250269]: 2026-01-23 09:38:20.724 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1329: 321 pgs: 321 active+clean; 484 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 234 op/s
Jan 23 04:38:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:22.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:22.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1330: 321 pgs: 321 active+clean; 484 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 248 op/s
Jan 23 04:38:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:38:24 np0005593232 nova_compute[250269]: 2026-01-23 09:38:24.089 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:24.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:24.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1331: 321 pgs: 321 active+clean; 484 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 83 op/s
Jan 23 04:38:25 np0005593232 nova_compute[250269]: 2026-01-23 09:38:25.727 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:26.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:26.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1332: 321 pgs: 321 active+clean; 458 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 136 op/s
Jan 23 04:38:27 np0005593232 nova_compute[250269]: 2026-01-23 09:38:27.712 250273 DEBUG oslo_concurrency.lockutils [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] Acquiring lock "5cea9bfc-e97a-4d07-a251-8ca3978b5f98" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:38:27 np0005593232 nova_compute[250269]: 2026-01-23 09:38:27.712 250273 DEBUG oslo_concurrency.lockutils [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] Lock "5cea9bfc-e97a-4d07-a251-8ca3978b5f98" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:38:27 np0005593232 nova_compute[250269]: 2026-01-23 09:38:27.713 250273 DEBUG oslo_concurrency.lockutils [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] Acquiring lock "5cea9bfc-e97a-4d07-a251-8ca3978b5f98-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:38:27 np0005593232 nova_compute[250269]: 2026-01-23 09:38:27.713 250273 DEBUG oslo_concurrency.lockutils [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] Lock "5cea9bfc-e97a-4d07-a251-8ca3978b5f98-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:38:27 np0005593232 nova_compute[250269]: 2026-01-23 09:38:27.713 250273 DEBUG oslo_concurrency.lockutils [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] Lock "5cea9bfc-e97a-4d07-a251-8ca3978b5f98-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:38:27 np0005593232 nova_compute[250269]: 2026-01-23 09:38:27.714 250273 INFO nova.compute.manager [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Terminating instance#033[00m
Jan 23 04:38:27 np0005593232 nova_compute[250269]: 2026-01-23 09:38:27.715 250273 DEBUG nova.compute.manager [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 04:38:27 np0005593232 kernel: tapa19a3bde-24 (unregistering): left promiscuous mode
Jan 23 04:38:27 np0005593232 NetworkManager[49057]: <info>  [1769161107.7699] device (tapa19a3bde-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:38:27 np0005593232 nova_compute[250269]: 2026-01-23 09:38:27.777 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:27 np0005593232 ovn_controller[151001]: 2026-01-23T09:38:27Z|00108|binding|INFO|Releasing lport a19a3bde-2463-4f15-afe7-f8df8c608bb7 from this chassis (sb_readonly=0)
Jan 23 04:38:27 np0005593232 ovn_controller[151001]: 2026-01-23T09:38:27Z|00109|binding|INFO|Setting lport a19a3bde-2463-4f15-afe7-f8df8c608bb7 down in Southbound
Jan 23 04:38:27 np0005593232 ovn_controller[151001]: 2026-01-23T09:38:27Z|00110|binding|INFO|Releasing lport 9852d6c7-7b56-465e-865f-eb8c24e61417 from this chassis (sb_readonly=0)
Jan 23 04:38:27 np0005593232 ovn_controller[151001]: 2026-01-23T09:38:27Z|00111|binding|INFO|Setting lport 9852d6c7-7b56-465e-865f-eb8c24e61417 down in Southbound
Jan 23 04:38:27 np0005593232 ovn_controller[151001]: 2026-01-23T09:38:27Z|00112|binding|INFO|Removing iface tapa19a3bde-24 ovn-installed in OVS
Jan 23 04:38:27 np0005593232 nova_compute[250269]: 2026-01-23 09:38:27.780 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:27 np0005593232 ovn_controller[151001]: 2026-01-23T09:38:27Z|00113|binding|INFO|Releasing lport 3fd7a4e0-3e1c-454c-a93e-1fa905fbcde2 from this chassis (sb_readonly=0)
Jan 23 04:38:27 np0005593232 ovn_controller[151001]: 2026-01-23T09:38:27Z|00114|binding|INFO|Releasing lport 5880c863-f7b0-4399-b221-f31849823320 from this chassis (sb_readonly=0)
Jan 23 04:38:27 np0005593232 ovn_controller[151001]: 2026-01-23T09:38:27Z|00115|binding|INFO|Releasing lport b545a870-aa18-4f64-a8a7-f8512824c4cc from this chassis (sb_readonly=0)
Jan 23 04:38:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:27.785 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:36:a6 19.80.0.21'], port_security=['fa:16:3e:9e:36:a6 19.80.0.21'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['a19a3bde-2463-4f15-afe7-f8df8c608bb7'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1709862236', 'neutron:cidrs': '19.80.0.21/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-914246be-3a6e-47b3-afc0-463db5fa1dae', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1709862236', 'neutron:project_id': 'd0dce6e339c349d4ab97cee5e49fff3a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '0179c400-b2f2-4914-b563-942a61ef1858', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=6bd4ce00-6348-4d9c-ba3b-d576a6d3e856, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[], logical_port=9852d6c7-7b56-465e-865f-eb8c24e61417) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:38:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:27.787 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7b:ec:dc 10.100.0.6'], port_security=['fa:16:3e:7b:ec:dc 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-178752437', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5cea9bfc-e97a-4d07-a251-8ca3978b5f98', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8eab8076-0848-4daf-bbac-f3f8b65ca750', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-178752437', 'neutron:project_id': 'd0dce6e339c349d4ab97cee5e49fff3a', 'neutron:revision_number': '13', 'neutron:security_group_ids': '0179c400-b2f2-4914-b563-942a61ef1858', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbb60528-b878-42fd-9c2f-0a3345010b1a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=a19a3bde-2463-4f15-afe7-f8df8c608bb7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:38:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:27.787 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 9852d6c7-7b56-465e-865f-eb8c24e61417 in datapath 914246be-3a6e-47b3-afc0-463db5fa1dae unbound from our chassis#033[00m
Jan 23 04:38:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:27.789 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 914246be-3a6e-47b3-afc0-463db5fa1dae, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 04:38:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:27.790 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2c8d1635-39fe-48f0-9cbe-ea31740320f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:38:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:27.791 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-914246be-3a6e-47b3-afc0-463db5fa1dae namespace which is not needed anymore#033[00m
Jan 23 04:38:27 np0005593232 nova_compute[250269]: 2026-01-23 09:38:27.805 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:27 np0005593232 nova_compute[250269]: 2026-01-23 09:38:27.880 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:27 np0005593232 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Jan 23 04:38:27 np0005593232 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000001a.scope: Consumed 5.788s CPU time.
Jan 23 04:38:27 np0005593232 systemd-machined[215836]: Machine qemu-13-instance-0000001a terminated.
Jan 23 04:38:27 np0005593232 nova_compute[250269]: 2026-01-23 09:38:27.937 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:27 np0005593232 nova_compute[250269]: 2026-01-23 09:38:27.944 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:27 np0005593232 neutron-haproxy-ovnmeta-914246be-3a6e-47b3-afc0-463db5fa1dae[272670]: [NOTICE]   (272674) : haproxy version is 2.8.14-c23fe91
Jan 23 04:38:27 np0005593232 neutron-haproxy-ovnmeta-914246be-3a6e-47b3-afc0-463db5fa1dae[272670]: [NOTICE]   (272674) : path to executable is /usr/sbin/haproxy
Jan 23 04:38:27 np0005593232 neutron-haproxy-ovnmeta-914246be-3a6e-47b3-afc0-463db5fa1dae[272670]: [WARNING]  (272674) : Exiting Master process...
Jan 23 04:38:27 np0005593232 neutron-haproxy-ovnmeta-914246be-3a6e-47b3-afc0-463db5fa1dae[272670]: [ALERT]    (272674) : Current worker (272676) exited with code 143 (Terminated)
Jan 23 04:38:27 np0005593232 neutron-haproxy-ovnmeta-914246be-3a6e-47b3-afc0-463db5fa1dae[272670]: [WARNING]  (272674) : All workers exited. Exiting... (0)
Jan 23 04:38:27 np0005593232 systemd[1]: libpod-479779ca116ef5a9f2c8dd4d7fc058c7c950a23d64dd7fce681e62050904894b.scope: Deactivated successfully.
Jan 23 04:38:27 np0005593232 nova_compute[250269]: 2026-01-23 09:38:27.954 250273 INFO nova.virt.libvirt.driver [-] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Instance destroyed successfully.#033[00m
Jan 23 04:38:27 np0005593232 nova_compute[250269]: 2026-01-23 09:38:27.954 250273 DEBUG nova.objects.instance [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] Lazy-loading 'resources' on Instance uuid 5cea9bfc-e97a-4d07-a251-8ca3978b5f98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:38:27 np0005593232 podman[276062]: 2026-01-23 09:38:27.958638251 +0000 UTC m=+0.053398433 container died 479779ca116ef5a9f2c8dd4d7fc058c7c950a23d64dd7fce681e62050904894b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-914246be-3a6e-47b3-afc0-463db5fa1dae, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 04:38:27 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-479779ca116ef5a9f2c8dd4d7fc058c7c950a23d64dd7fce681e62050904894b-userdata-shm.mount: Deactivated successfully.
Jan 23 04:38:27 np0005593232 systemd[1]: var-lib-containers-storage-overlay-14216a362cf9d56e7493dc1ce3066f9cbd2c28ed860b0c43a1cf7f6075534319-merged.mount: Deactivated successfully.
Jan 23 04:38:28 np0005593232 podman[276062]: 2026-01-23 09:38:28.005231961 +0000 UTC m=+0.099992143 container cleanup 479779ca116ef5a9f2c8dd4d7fc058c7c950a23d64dd7fce681e62050904894b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-914246be-3a6e-47b3-afc0-463db5fa1dae, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 04:38:28 np0005593232 systemd[1]: libpod-conmon-479779ca116ef5a9f2c8dd4d7fc058c7c950a23d64dd7fce681e62050904894b.scope: Deactivated successfully.
Jan 23 04:38:28 np0005593232 nova_compute[250269]: 2026-01-23 09:38:28.029 250273 DEBUG nova.virt.libvirt.vif [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-23T09:36:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1674522276',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1674522276',id=26,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:36:21Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d0dce6e339c349d4ab97cee5e49fff3a',ramdisk_id='',reservation_id='r-lql1hhru',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1207260646',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1207260646-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:36:48Z,user_data=None,user_id='4f72965e950c4761bfedd99fdc411a83',uuid=5cea9bfc-e97a-4d07-a251-8ca3978b5f98,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a19a3bde-2463-4f15-afe7-f8df8c608bb7", "address": "fa:16:3e:7b:ec:dc", "network": {"id": "8eab8076-0848-4daf-bbac-f3f8b65ca750", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1369330040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0dce6e339c349d4ab97cee5e49fff3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19a3bde-24", "ovs_interfaceid": "a19a3bde-2463-4f15-afe7-f8df8c608bb7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:38:28 np0005593232 nova_compute[250269]: 2026-01-23 09:38:28.030 250273 DEBUG nova.network.os_vif_util [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] Converting VIF {"id": "a19a3bde-2463-4f15-afe7-f8df8c608bb7", "address": "fa:16:3e:7b:ec:dc", "network": {"id": "8eab8076-0848-4daf-bbac-f3f8b65ca750", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-1369330040-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d0dce6e339c349d4ab97cee5e49fff3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa19a3bde-24", "ovs_interfaceid": "a19a3bde-2463-4f15-afe7-f8df8c608bb7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:38:28 np0005593232 nova_compute[250269]: 2026-01-23 09:38:28.030 250273 DEBUG nova.network.os_vif_util [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7b:ec:dc,bridge_name='br-int',has_traffic_filtering=True,id=a19a3bde-2463-4f15-afe7-f8df8c608bb7,network=Network(8eab8076-0848-4daf-bbac-f3f8b65ca750),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa19a3bde-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 04:38:28 np0005593232 nova_compute[250269]: 2026-01-23 09:38:28.031 250273 DEBUG os_vif [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:ec:dc,bridge_name='br-int',has_traffic_filtering=True,id=a19a3bde-2463-4f15-afe7-f8df8c608bb7,network=Network(8eab8076-0848-4daf-bbac-f3f8b65ca750),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa19a3bde-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 23 04:38:28 np0005593232 nova_compute[250269]: 2026-01-23 09:38:28.034 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:38:28 np0005593232 nova_compute[250269]: 2026-01-23 09:38:28.034 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa19a3bde-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 04:38:28 np0005593232 nova_compute[250269]: 2026-01-23 09:38:28.036 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:38:28 np0005593232 nova_compute[250269]: 2026-01-23 09:38:28.038 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 04:38:28 np0005593232 nova_compute[250269]: 2026-01-23 09:38:28.039 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:38:28 np0005593232 nova_compute[250269]: 2026-01-23 09:38:28.041 250273 INFO os_vif [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7b:ec:dc,bridge_name='br-int',has_traffic_filtering=True,id=a19a3bde-2463-4f15-afe7-f8df8c608bb7,network=Network(8eab8076-0848-4daf-bbac-f3f8b65ca750),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa19a3bde-24')
Jan 23 04:38:28 np0005593232 podman[276103]: 2026-01-23 09:38:28.072660561 +0000 UTC m=+0.047102915 container remove 479779ca116ef5a9f2c8dd4d7fc058c7c950a23d64dd7fce681e62050904894b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-914246be-3a6e-47b3-afc0-463db5fa1dae, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 23 04:38:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:28.079 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2e425f99-5a63-4c8b-af95-65f950538723]: (4, ('Fri Jan 23 09:38:27 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-914246be-3a6e-47b3-afc0-463db5fa1dae (479779ca116ef5a9f2c8dd4d7fc058c7c950a23d64dd7fce681e62050904894b)\n479779ca116ef5a9f2c8dd4d7fc058c7c950a23d64dd7fce681e62050904894b\nFri Jan 23 09:38:28 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-914246be-3a6e-47b3-afc0-463db5fa1dae (479779ca116ef5a9f2c8dd4d7fc058c7c950a23d64dd7fce681e62050904894b)\n479779ca116ef5a9f2c8dd4d7fc058c7c950a23d64dd7fce681e62050904894b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:38:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:28.082 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a47e47d2-2299-4a91-9e50-86b8cbe3c42a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:38:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:28.083 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap914246be-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 04:38:28 np0005593232 nova_compute[250269]: 2026-01-23 09:38:28.085 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:38:28 np0005593232 kernel: tap914246be-30: left promiscuous mode
Jan 23 04:38:28 np0005593232 nova_compute[250269]: 2026-01-23 09:38:28.103 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:38:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:28.107 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[524e0fe0-7f60-4731-93f2-bd862ba1838b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:38:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:28.122 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e338e809-b16e-4ca3-9932-10a1c31cc0d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:38:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:28.123 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e9ef9976-4dae-4184-9616-41991a0c0781]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:38:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:38:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:28.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:38:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:28.142 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[15c0db91-a88a-416f-af1d-f4b04eb74b14]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491360, 'reachable_time': 22501, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276137, 'error': None, 'target': 'ovnmeta-914246be-3a6e-47b3-afc0-463db5fa1dae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:38:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:28.145 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-914246be-3a6e-47b3-afc0-463db5fa1dae deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 23 04:38:28 np0005593232 systemd[1]: run-netns-ovnmeta\x2d914246be\x2d3a6e\x2d47b3\x2dafc0\x2d463db5fa1dae.mount: Deactivated successfully.
Jan 23 04:38:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:28.145 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[c6188af3-6918-4d91-9fd4-0d4133486a5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:38:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:28.146 161902 INFO neutron.agent.ovn.metadata.agent [-] Port a19a3bde-2463-4f15-afe7-f8df8c608bb7 in datapath 8eab8076-0848-4daf-bbac-f3f8b65ca750 unbound from our chassis
Jan 23 04:38:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:28.148 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8eab8076-0848-4daf-bbac-f3f8b65ca750, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 23 04:38:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:28.149 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[45e70c00-069d-43e2-9e17-1fe61c2060b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:38:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:28.149 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750 namespace which is not needed anymore
Jan 23 04:38:28 np0005593232 neutron-haproxy-ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750[272743]: [NOTICE]   (272747) : haproxy version is 2.8.14-c23fe91
Jan 23 04:38:28 np0005593232 neutron-haproxy-ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750[272743]: [NOTICE]   (272747) : path to executable is /usr/sbin/haproxy
Jan 23 04:38:28 np0005593232 neutron-haproxy-ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750[272743]: [WARNING]  (272747) : Exiting Master process...
Jan 23 04:38:28 np0005593232 neutron-haproxy-ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750[272743]: [ALERT]    (272747) : Current worker (272749) exited with code 143 (Terminated)
Jan 23 04:38:28 np0005593232 neutron-haproxy-ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750[272743]: [WARNING]  (272747) : All workers exited. Exiting... (0)
Jan 23 04:38:28 np0005593232 systemd[1]: libpod-7cd6cd44455a87f9fc1ea9caaa97d23f11ab5660c25cfa841811d3ccd2011eec.scope: Deactivated successfully.
Jan 23 04:38:28 np0005593232 podman[276155]: 2026-01-23 09:38:28.278973215 +0000 UTC m=+0.049547074 container died 7cd6cd44455a87f9fc1ea9caaa97d23f11ab5660c25cfa841811d3ccd2011eec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 23 04:38:28 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7cd6cd44455a87f9fc1ea9caaa97d23f11ab5660c25cfa841811d3ccd2011eec-userdata-shm.mount: Deactivated successfully.
Jan 23 04:38:28 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6cd26fcad317a5eb192e672116216a3fc3727b065e793f91b473b43baa8b5e78-merged.mount: Deactivated successfully.
Jan 23 04:38:28 np0005593232 podman[276155]: 2026-01-23 09:38:28.314851941 +0000 UTC m=+0.085425770 container cleanup 7cd6cd44455a87f9fc1ea9caaa97d23f11ab5660c25cfa841811d3ccd2011eec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:38:28 np0005593232 systemd[1]: libpod-conmon-7cd6cd44455a87f9fc1ea9caaa97d23f11ab5660c25cfa841811d3ccd2011eec.scope: Deactivated successfully.
Jan 23 04:38:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:38:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:28.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:38:28 np0005593232 podman[276185]: 2026-01-23 09:38:28.425205126 +0000 UTC m=+0.088733273 container remove 7cd6cd44455a87f9fc1ea9caaa97d23f11ab5660c25cfa841811d3ccd2011eec (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 23 04:38:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:28.431 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a93e48f7-32bf-4fab-b0c9-fc925a5730a8]: (4, ('Fri Jan 23 09:38:28 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750 (7cd6cd44455a87f9fc1ea9caaa97d23f11ab5660c25cfa841811d3ccd2011eec)\n7cd6cd44455a87f9fc1ea9caaa97d23f11ab5660c25cfa841811d3ccd2011eec\nFri Jan 23 09:38:28 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750 (7cd6cd44455a87f9fc1ea9caaa97d23f11ab5660c25cfa841811d3ccd2011eec)\n7cd6cd44455a87f9fc1ea9caaa97d23f11ab5660c25cfa841811d3ccd2011eec\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:38:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:28.433 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[59e0b94d-bccd-49f5-aeef-9b2fc2f99b29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:38:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:28.433 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8eab8076-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 04:38:28 np0005593232 nova_compute[250269]: 2026-01-23 09:38:28.470 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:38:28 np0005593232 kernel: tap8eab8076-00: left promiscuous mode
Jan 23 04:38:28 np0005593232 nova_compute[250269]: 2026-01-23 09:38:28.485 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:38:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:28.488 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[38c64503-5ca1-4a60-8e9e-f8cca4d2899b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:38:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:28.516 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[849b1a48-8bc4-465a-9cdd-6925ad231cbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:38:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:28.519 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2a60df06-dc30-4f08-b4f8-1406fb128641]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:38:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:28.541 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b693c5b3-c1db-445c-920e-caa5df1b15b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491447, 'reachable_time': 23362, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276200, 'error': None, 'target': 'ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:38:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:28.544 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8eab8076-0848-4daf-bbac-f3f8b65ca750 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 23 04:38:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:28.544 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[d576f96f-339f-4b85-8c35-cb251903e011]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:38:28 np0005593232 nova_compute[250269]: 2026-01-23 09:38:28.564 250273 INFO nova.virt.libvirt.driver [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Deleting instance files /var/lib/nova/instances/5cea9bfc-e97a-4d07-a251-8ca3978b5f98_del
Jan 23 04:38:28 np0005593232 nova_compute[250269]: 2026-01-23 09:38:28.564 250273 INFO nova.virt.libvirt.driver [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Deletion of /var/lib/nova/instances/5cea9bfc-e97a-4d07-a251-8ca3978b5f98_del complete
Jan 23 04:38:28 np0005593232 nova_compute[250269]: 2026-01-23 09:38:28.666 250273 INFO nova.compute.manager [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Took 0.95 seconds to destroy the instance on the hypervisor.
Jan 23 04:38:28 np0005593232 nova_compute[250269]: 2026-01-23 09:38:28.667 250273 DEBUG oslo.service.loopingcall [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 23 04:38:28 np0005593232 nova_compute[250269]: 2026-01-23 09:38:28.668 250273 DEBUG nova.compute.manager [-] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 23 04:38:28 np0005593232 nova_compute[250269]: 2026-01-23 09:38:28.669 250273 DEBUG nova.network.neutron [-] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 23 04:38:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:38:28 np0005593232 systemd[1]: run-netns-ovnmeta\x2d8eab8076\x2d0848\x2d4daf\x2dbbac\x2df3f8b65ca750.mount: Deactivated successfully.
Jan 23 04:38:29 np0005593232 nova_compute[250269]: 2026-01-23 09:38:29.049 250273 DEBUG nova.compute.manager [req-b0b674e8-e8b6-4adb-a164-7d8aa7015b55 req-0faf1476-ef42-48cd-992d-5d94fe33beed 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Received event network-vif-unplugged-a19a3bde-2463-4f15-afe7-f8df8c608bb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 04:38:29 np0005593232 nova_compute[250269]: 2026-01-23 09:38:29.050 250273 DEBUG oslo_concurrency.lockutils [req-b0b674e8-e8b6-4adb-a164-7d8aa7015b55 req-0faf1476-ef42-48cd-992d-5d94fe33beed 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "5cea9bfc-e97a-4d07-a251-8ca3978b5f98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:38:29 np0005593232 nova_compute[250269]: 2026-01-23 09:38:29.050 250273 DEBUG oslo_concurrency.lockutils [req-b0b674e8-e8b6-4adb-a164-7d8aa7015b55 req-0faf1476-ef42-48cd-992d-5d94fe33beed 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5cea9bfc-e97a-4d07-a251-8ca3978b5f98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:38:29 np0005593232 nova_compute[250269]: 2026-01-23 09:38:29.050 250273 DEBUG oslo_concurrency.lockutils [req-b0b674e8-e8b6-4adb-a164-7d8aa7015b55 req-0faf1476-ef42-48cd-992d-5d94fe33beed 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5cea9bfc-e97a-4d07-a251-8ca3978b5f98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:38:29 np0005593232 nova_compute[250269]: 2026-01-23 09:38:29.051 250273 DEBUG nova.compute.manager [req-b0b674e8-e8b6-4adb-a164-7d8aa7015b55 req-0faf1476-ef42-48cd-992d-5d94fe33beed 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] No waiting events found dispatching network-vif-unplugged-a19a3bde-2463-4f15-afe7-f8df8c608bb7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 04:38:29 np0005593232 nova_compute[250269]: 2026-01-23 09:38:29.051 250273 DEBUG nova.compute.manager [req-b0b674e8-e8b6-4adb-a164-7d8aa7015b55 req-0faf1476-ef42-48cd-992d-5d94fe33beed 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Received event network-vif-unplugged-a19a3bde-2463-4f15-afe7-f8df8c608bb7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 23 04:38:29 np0005593232 nova_compute[250269]: 2026-01-23 09:38:29.092 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:38:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1333: 321 pgs: 321 active+clean; 438 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 912 KiB/s rd, 2.1 MiB/s wr, 113 op/s
Jan 23 04:38:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:30.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:30.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:30 np0005593232 podman[276202]: 2026-01-23 09:38:30.42180755 +0000 UTC m=+0.076598841 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 04:38:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1334: 321 pgs: 321 active+clean; 438 MiB data, 618 MiB used, 20 GiB / 21 GiB avail; 345 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Jan 23 04:38:31 np0005593232 nova_compute[250269]: 2026-01-23 09:38:31.472 250273 DEBUG nova.compute.manager [req-ee469c76-048f-4f7a-b49c-f975782ec5ad req-4882f03d-3813-47e9-903e-eec663f8be07 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Received event network-vif-plugged-a19a3bde-2463-4f15-afe7-f8df8c608bb7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:38:31 np0005593232 nova_compute[250269]: 2026-01-23 09:38:31.472 250273 DEBUG oslo_concurrency.lockutils [req-ee469c76-048f-4f7a-b49c-f975782ec5ad req-4882f03d-3813-47e9-903e-eec663f8be07 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "5cea9bfc-e97a-4d07-a251-8ca3978b5f98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:38:31 np0005593232 nova_compute[250269]: 2026-01-23 09:38:31.473 250273 DEBUG oslo_concurrency.lockutils [req-ee469c76-048f-4f7a-b49c-f975782ec5ad req-4882f03d-3813-47e9-903e-eec663f8be07 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5cea9bfc-e97a-4d07-a251-8ca3978b5f98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:38:31 np0005593232 nova_compute[250269]: 2026-01-23 09:38:31.474 250273 DEBUG oslo_concurrency.lockutils [req-ee469c76-048f-4f7a-b49c-f975782ec5ad req-4882f03d-3813-47e9-903e-eec663f8be07 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5cea9bfc-e97a-4d07-a251-8ca3978b5f98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:38:31 np0005593232 nova_compute[250269]: 2026-01-23 09:38:31.474 250273 DEBUG nova.compute.manager [req-ee469c76-048f-4f7a-b49c-f975782ec5ad req-4882f03d-3813-47e9-903e-eec663f8be07 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] No waiting events found dispatching network-vif-plugged-a19a3bde-2463-4f15-afe7-f8df8c608bb7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:38:31 np0005593232 nova_compute[250269]: 2026-01-23 09:38:31.475 250273 WARNING nova.compute.manager [req-ee469c76-048f-4f7a-b49c-f975782ec5ad req-4882f03d-3813-47e9-903e-eec663f8be07 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Received unexpected event network-vif-plugged-a19a3bde-2463-4f15-afe7-f8df8c608bb7 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 04:38:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:32.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:38:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:32.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:38:33 np0005593232 nova_compute[250269]: 2026-01-23 09:38:33.037 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1335: 321 pgs: 321 active+clean; 358 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 365 KiB/s rd, 2.2 MiB/s wr, 124 op/s
Jan 23 04:38:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:38:33 np0005593232 nova_compute[250269]: 2026-01-23 09:38:33.811 250273 DEBUG nova.network.neutron [-] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:38:33 np0005593232 nova_compute[250269]: 2026-01-23 09:38:33.857 250273 INFO nova.compute.manager [-] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Took 5.19 seconds to deallocate network for instance.#033[00m
Jan 23 04:38:33 np0005593232 nova_compute[250269]: 2026-01-23 09:38:33.950 250273 DEBUG oslo_concurrency.lockutils [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:38:33 np0005593232 nova_compute[250269]: 2026-01-23 09:38:33.951 250273 DEBUG oslo_concurrency.lockutils [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:38:34 np0005593232 nova_compute[250269]: 2026-01-23 09:38:34.094 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:34 np0005593232 nova_compute[250269]: 2026-01-23 09:38:34.104 250273 DEBUG oslo_concurrency.processutils [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:38:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:34.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:34.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:38:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3665733448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:38:34 np0005593232 nova_compute[250269]: 2026-01-23 09:38:34.566 250273 DEBUG oslo_concurrency.processutils [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:38:34 np0005593232 nova_compute[250269]: 2026-01-23 09:38:34.573 250273 DEBUG nova.compute.provider_tree [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:38:34 np0005593232 nova_compute[250269]: 2026-01-23 09:38:34.609 250273 DEBUG nova.scheduler.client.report [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:38:34 np0005593232 nova_compute[250269]: 2026-01-23 09:38:34.675 250273 DEBUG oslo_concurrency.lockutils [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:38:34 np0005593232 nova_compute[250269]: 2026-01-23 09:38:34.732 250273 INFO nova.scheduler.client.report [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] Deleted allocations for instance 5cea9bfc-e97a-4d07-a251-8ca3978b5f98#033[00m
Jan 23 04:38:34 np0005593232 nova_compute[250269]: 2026-01-23 09:38:34.865 250273 DEBUG oslo_concurrency.lockutils [None req-e5772bea-b73f-4904-8fb2-632028d959e4 4f72965e950c4761bfedd99fdc411a83 d0dce6e339c349d4ab97cee5e49fff3a - - default default] Lock "5cea9bfc-e97a-4d07-a251-8ca3978b5f98" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:38:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1336: 321 pgs: 321 active+clean; 358 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 355 KiB/s rd, 2.2 MiB/s wr, 111 op/s
Jan 23 04:38:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:38:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:36.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:38:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:36.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1337: 321 pgs: 321 active+clean; 327 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 363 KiB/s rd, 2.2 MiB/s wr, 123 op/s
Jan 23 04:38:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:38:37
Jan 23 04:38:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:38:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:38:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['volumes', '.rgw.root', 'backups', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'vms']
Jan 23 04:38:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:38:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:38:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:38:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:38:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:38:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:38:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:38:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:38:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:38:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:38:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:38:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:38:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:38:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:38:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:38:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:38:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:38:38 np0005593232 nova_compute[250269]: 2026-01-23 09:38:38.040 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:38:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:38.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:38:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:38.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:38:39 np0005593232 nova_compute[250269]: 2026-01-23 09:38:39.095 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1338: 321 pgs: 321 active+clean; 279 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 346 KiB/s wr, 87 op/s
Jan 23 04:38:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:40.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:38:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:40.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:38:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1339: 321 pgs: 321 active+clean; 279 MiB data, 546 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 40 KiB/s wr, 60 op/s
Jan 23 04:38:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:38:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:42.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:38:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:42.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:42 np0005593232 podman[276307]: 2026-01-23 09:38:42.414772507 +0000 UTC m=+0.060427483 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 23 04:38:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:42.592 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:38:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:42.593 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:38:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:42.594 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:38:42 np0005593232 nova_compute[250269]: 2026-01-23 09:38:42.953 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769161107.9523046, 5cea9bfc-e97a-4d07-a251-8ca3978b5f98 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:38:42 np0005593232 nova_compute[250269]: 2026-01-23 09:38:42.953 250273 INFO nova.compute.manager [-] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:38:43 np0005593232 nova_compute[250269]: 2026-01-23 09:38:43.044 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:43 np0005593232 nova_compute[250269]: 2026-01-23 09:38:43.057 250273 DEBUG nova.compute.manager [None req-0916a3eb-7c7b-4fb0-91f9-a56eec19eac9 - - - - - -] [instance: 5cea9bfc-e97a-4d07-a251-8ca3978b5f98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:38:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1340: 321 pgs: 321 active+clean; 325 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 94 op/s
Jan 23 04:38:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:38:44 np0005593232 nova_compute[250269]: 2026-01-23 09:38:44.097 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:44.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:44.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1341: 321 pgs: 321 active+clean; 325 MiB data, 555 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 63 op/s
Jan 23 04:38:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:46.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:46.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:38:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:38:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:38:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:38:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007494431067373907 of space, bias 1.0, pg target 2.248329320212172 quantized to 32 (current 32)
Jan 23 04:38:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:38:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Jan 23 04:38:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:38:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:38:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:38:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 23 04:38:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:38:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 23 04:38:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:38:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:38:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:38:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 23 04:38:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:38:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 23 04:38:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:38:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:38:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:38:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 23 04:38:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1342: 321 pgs: 321 active+clean; 325 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 63 op/s
Jan 23 04:38:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 04:38:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 6820 writes, 29K keys, 6820 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 6820 writes, 6820 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1644 writes, 7220 keys, 1644 commit groups, 1.0 writes per commit group, ingest: 11.05 MB, 0.02 MB/s#012Interval WAL: 1644 writes, 1644 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     44.2      0.87              0.14        17    0.051       0      0       0.0       0.0#012  L6      1/0    8.46 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   3.6     96.1     79.0      1.75              0.44        16    0.109     79K   8974       0.0       0.0#012 Sum      1/0    8.46 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.6     64.3     67.5      2.61              0.57        33    0.079     79K   8974       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6     79.3     81.3      0.60              0.17         8    0.076     23K   2599       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0     96.1     79.0      1.75              0.44        16    0.109     79K   8974       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     44.4      0.86              0.14        16    0.054       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.037, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.17 GB write, 0.07 MB/s write, 0.16 GB read, 0.07 MB/s read, 2.6 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a844231f0#2 capacity: 304.00 MB usage: 17.52 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000297 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1025,16.88 MB,5.5516%) FilterBlock(34,228.48 KB,0.0733978%) IndexBlock(34,426.16 KB,0.136897%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 23 04:38:48 np0005593232 nova_compute[250269]: 2026-01-23 09:38:48.047 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:48.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:48.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:38:49 np0005593232 nova_compute[250269]: 2026-01-23 09:38:49.100 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1343: 321 pgs: 321 active+clean; 325 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 23 04:38:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:38:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:50.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:38:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:38:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:50.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:38:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1344: 321 pgs: 321 active+clean; 325 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 23 04:38:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:52.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:52.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:53 np0005593232 nova_compute[250269]: 2026-01-23 09:38:53.051 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1345: 321 pgs: 321 active+clean; 326 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Jan 23 04:38:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:53.250 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:38:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:53.251 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:38:53 np0005593232 nova_compute[250269]: 2026-01-23 09:38:53.251 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:38:54 np0005593232 nova_compute[250269]: 2026-01-23 09:38:54.100 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:38:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:54.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:38:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 04:38:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:54.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 04:38:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1346: 321 pgs: 321 active+clean; 326 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Jan 23 04:38:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:38:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:56.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:38:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:38:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:56.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:38:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1347: 321 pgs: 321 active+clean; 326 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 23 04:38:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:38:57.253 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:38:58 np0005593232 nova_compute[250269]: 2026-01-23 09:38:58.056 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:38:58.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:38:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:38:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:38:58.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:38:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:38:59 np0005593232 nova_compute[250269]: 2026-01-23 09:38:59.101 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:38:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1348: 321 pgs: 321 active+clean; 326 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Jan 23 04:39:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:39:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:00.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:39:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:00.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1349: 321 pgs: 321 active+clean; 326 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 71 op/s
Jan 23 04:39:01 np0005593232 podman[276388]: 2026-01-23 09:39:01.416621939 +0000 UTC m=+0.070921700 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 23 04:39:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:02.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000056s ======
Jan 23 04:39:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:02.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Jan 23 04:39:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:39:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:39:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:39:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:39:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:39:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:39:03 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1be06666-4303-4be0-a1a1-79222792e8a5 does not exist
Jan 23 04:39:03 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev f7212813-98b3-4f4a-8356-1f259286fdb3 does not exist
Jan 23 04:39:03 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 9dea4f30-56a6-4405-b38d-518c05abf848 does not exist
Jan 23 04:39:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:39:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:39:03 np0005593232 nova_compute[250269]: 2026-01-23 09:39:03.059 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:39:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:39:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:39:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:39:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:39:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:39:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:39:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1350: 321 pgs: 321 active+clean; 351 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 108 op/s
Jan 23 04:39:03 np0005593232 podman[276688]: 2026-01-23 09:39:03.646293354 +0000 UTC m=+0.046565140 container create fc3f3cc3fba0fad8ab17de3fd3ee4576cdc9fac9f65b0321e81db65b8cb39d5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_yalow, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 23 04:39:03 np0005593232 systemd[1]: Started libpod-conmon-fc3f3cc3fba0fad8ab17de3fd3ee4576cdc9fac9f65b0321e81db65b8cb39d5a.scope.
Jan 23 04:39:03 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:39:03 np0005593232 podman[276688]: 2026-01-23 09:39:03.720042473 +0000 UTC m=+0.120314279 container init fc3f3cc3fba0fad8ab17de3fd3ee4576cdc9fac9f65b0321e81db65b8cb39d5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_yalow, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 23 04:39:03 np0005593232 podman[276688]: 2026-01-23 09:39:03.626222876 +0000 UTC m=+0.026494682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:39:03 np0005593232 podman[276688]: 2026-01-23 09:39:03.728370009 +0000 UTC m=+0.128641795 container start fc3f3cc3fba0fad8ab17de3fd3ee4576cdc9fac9f65b0321e81db65b8cb39d5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 23 04:39:03 np0005593232 podman[276688]: 2026-01-23 09:39:03.731989232 +0000 UTC m=+0.132261038 container attach fc3f3cc3fba0fad8ab17de3fd3ee4576cdc9fac9f65b0321e81db65b8cb39d5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 04:39:03 np0005593232 stupefied_yalow[276704]: 167 167
Jan 23 04:39:03 np0005593232 systemd[1]: libpod-fc3f3cc3fba0fad8ab17de3fd3ee4576cdc9fac9f65b0321e81db65b8cb39d5a.scope: Deactivated successfully.
Jan 23 04:39:03 np0005593232 podman[276688]: 2026-01-23 09:39:03.737907809 +0000 UTC m=+0.138179605 container died fc3f3cc3fba0fad8ab17de3fd3ee4576cdc9fac9f65b0321e81db65b8cb39d5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:39:03 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1a8e03c02abc066f13be9642bd53d8547a3a19800e034b7ed5c6cece503495dd-merged.mount: Deactivated successfully.
Jan 23 04:39:03 np0005593232 podman[276688]: 2026-01-23 09:39:03.775220076 +0000 UTC m=+0.175491862 container remove fc3f3cc3fba0fad8ab17de3fd3ee4576cdc9fac9f65b0321e81db65b8cb39d5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:39:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:39:03 np0005593232 systemd[1]: libpod-conmon-fc3f3cc3fba0fad8ab17de3fd3ee4576cdc9fac9f65b0321e81db65b8cb39d5a.scope: Deactivated successfully.
Jan 23 04:39:03 np0005593232 podman[276726]: 2026-01-23 09:39:03.959916878 +0000 UTC m=+0.043773991 container create b0ace19b418494fb48ff8136f36db50e739c0011b97b58d06a3f7813c7389ea7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Jan 23 04:39:04 np0005593232 systemd[1]: Started libpod-conmon-b0ace19b418494fb48ff8136f36db50e739c0011b97b58d06a3f7813c7389ea7.scope.
Jan 23 04:39:04 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:39:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b5a40e235f27cf6dfcb75395d52b457a95bfce91f0d7e3dca94a11eca2309c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:39:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b5a40e235f27cf6dfcb75395d52b457a95bfce91f0d7e3dca94a11eca2309c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:39:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b5a40e235f27cf6dfcb75395d52b457a95bfce91f0d7e3dca94a11eca2309c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:39:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b5a40e235f27cf6dfcb75395d52b457a95bfce91f0d7e3dca94a11eca2309c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:39:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b5a40e235f27cf6dfcb75395d52b457a95bfce91f0d7e3dca94a11eca2309c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:39:04 np0005593232 podman[276726]: 2026-01-23 09:39:03.941916108 +0000 UTC m=+0.025773241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:39:04 np0005593232 podman[276726]: 2026-01-23 09:39:04.038820813 +0000 UTC m=+0.122677946 container init b0ace19b418494fb48ff8136f36db50e739c0011b97b58d06a3f7813c7389ea7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:39:04 np0005593232 podman[276726]: 2026-01-23 09:39:04.046748137 +0000 UTC m=+0.130605250 container start b0ace19b418494fb48ff8136f36db50e739c0011b97b58d06a3f7813c7389ea7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 23 04:39:04 np0005593232 podman[276726]: 2026-01-23 09:39:04.051484581 +0000 UTC m=+0.135341684 container attach b0ace19b418494fb48ff8136f36db50e739c0011b97b58d06a3f7813c7389ea7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kalam, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 04:39:04 np0005593232 nova_compute[250269]: 2026-01-23 09:39:04.104 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:04.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:04.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:04 np0005593232 zen_kalam[276742]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:39:04 np0005593232 zen_kalam[276742]: --> relative data size: 1.0
Jan 23 04:39:04 np0005593232 zen_kalam[276742]: --> All data devices are unavailable
Jan 23 04:39:04 np0005593232 systemd[1]: libpod-b0ace19b418494fb48ff8136f36db50e739c0011b97b58d06a3f7813c7389ea7.scope: Deactivated successfully.
Jan 23 04:39:04 np0005593232 podman[276726]: 2026-01-23 09:39:04.877781875 +0000 UTC m=+0.961638998 container died b0ace19b418494fb48ff8136f36db50e739c0011b97b58d06a3f7813c7389ea7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kalam, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 04:39:04 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f5b5a40e235f27cf6dfcb75395d52b457a95bfce91f0d7e3dca94a11eca2309c-merged.mount: Deactivated successfully.
Jan 23 04:39:04 np0005593232 podman[276726]: 2026-01-23 09:39:04.930168339 +0000 UTC m=+1.014025452 container remove b0ace19b418494fb48ff8136f36db50e739c0011b97b58d06a3f7813c7389ea7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 04:39:04 np0005593232 systemd[1]: libpod-conmon-b0ace19b418494fb48ff8136f36db50e739c0011b97b58d06a3f7813c7389ea7.scope: Deactivated successfully.
Jan 23 04:39:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1351: 321 pgs: 321 active+clean; 351 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 120 KiB/s rd, 2.0 MiB/s wr, 37 op/s
Jan 23 04:39:05 np0005593232 podman[276912]: 2026-01-23 09:39:05.510115026 +0000 UTC m=+0.037723739 container create 5c15b985be089729c8e79b78b3d1dddd882680843bc816dc7a6f544400fc5652 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:39:05 np0005593232 systemd[1]: Started libpod-conmon-5c15b985be089729c8e79b78b3d1dddd882680843bc816dc7a6f544400fc5652.scope.
Jan 23 04:39:05 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:39:05 np0005593232 podman[276912]: 2026-01-23 09:39:05.576151867 +0000 UTC m=+0.103760600 container init 5c15b985be089729c8e79b78b3d1dddd882680843bc816dc7a6f544400fc5652 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hugle, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:39:05 np0005593232 podman[276912]: 2026-01-23 09:39:05.583984969 +0000 UTC m=+0.111593682 container start 5c15b985be089729c8e79b78b3d1dddd882680843bc816dc7a6f544400fc5652 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 04:39:05 np0005593232 sweet_hugle[276929]: 167 167
Jan 23 04:39:05 np0005593232 systemd[1]: libpod-5c15b985be089729c8e79b78b3d1dddd882680843bc816dc7a6f544400fc5652.scope: Deactivated successfully.
Jan 23 04:39:05 np0005593232 podman[276912]: 2026-01-23 09:39:05.589536946 +0000 UTC m=+0.117145679 container attach 5c15b985be089729c8e79b78b3d1dddd882680843bc816dc7a6f544400fc5652 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 04:39:05 np0005593232 podman[276912]: 2026-01-23 09:39:05.494300508 +0000 UTC m=+0.021909251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:39:05 np0005593232 podman[276912]: 2026-01-23 09:39:05.59002424 +0000 UTC m=+0.117632953 container died 5c15b985be089729c8e79b78b3d1dddd882680843bc816dc7a6f544400fc5652 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:39:05 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d8d8a00336f226d76f6637b2349ef55b31a2a7e4bc343f43c08326c907bdb847-merged.mount: Deactivated successfully.
Jan 23 04:39:05 np0005593232 podman[276912]: 2026-01-23 09:39:05.622810989 +0000 UTC m=+0.150419712 container remove 5c15b985be089729c8e79b78b3d1dddd882680843bc816dc7a6f544400fc5652 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_hugle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:39:05 np0005593232 systemd[1]: libpod-conmon-5c15b985be089729c8e79b78b3d1dddd882680843bc816dc7a6f544400fc5652.scope: Deactivated successfully.
Jan 23 04:39:05 np0005593232 podman[276953]: 2026-01-23 09:39:05.791307251 +0000 UTC m=+0.026673266 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:39:06 np0005593232 podman[276953]: 2026-01-23 09:39:06.031846435 +0000 UTC m=+0.267212430 container create 317b215b2f0597f4e11a21cc6c42b96a16c3a42a684421114900bee0ae76dbc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:39:06 np0005593232 systemd[1]: Started libpod-conmon-317b215b2f0597f4e11a21cc6c42b96a16c3a42a684421114900bee0ae76dbc5.scope.
Jan 23 04:39:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:39:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcdb2a0c9d42a7e10d378ddf9fd688e95d767f37682d9f694f6e2e330a522ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:39:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcdb2a0c9d42a7e10d378ddf9fd688e95d767f37682d9f694f6e2e330a522ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:39:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcdb2a0c9d42a7e10d378ddf9fd688e95d767f37682d9f694f6e2e330a522ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:39:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdcdb2a0c9d42a7e10d378ddf9fd688e95d767f37682d9f694f6e2e330a522ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:39:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:06.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:39:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:06.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:39:06 np0005593232 podman[276953]: 2026-01-23 09:39:06.496424064 +0000 UTC m=+0.731790049 container init 317b215b2f0597f4e11a21cc6c42b96a16c3a42a684421114900bee0ae76dbc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_newton, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 04:39:06 np0005593232 podman[276953]: 2026-01-23 09:39:06.504540524 +0000 UTC m=+0.739906489 container start 317b215b2f0597f4e11a21cc6c42b96a16c3a42a684421114900bee0ae76dbc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:39:06 np0005593232 podman[276953]: 2026-01-23 09:39:06.778199155 +0000 UTC m=+1.013565120 container attach 317b215b2f0597f4e11a21cc6c42b96a16c3a42a684421114900bee0ae76dbc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Jan 23 04:39:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1352: 321 pgs: 321 active+clean; 357 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 242 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Jan 23 04:39:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:39:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:39:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:39:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:39:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:39:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:39:07 np0005593232 great_newton[276969]: {
Jan 23 04:39:07 np0005593232 great_newton[276969]:    "0": [
Jan 23 04:39:07 np0005593232 great_newton[276969]:        {
Jan 23 04:39:07 np0005593232 great_newton[276969]:            "devices": [
Jan 23 04:39:07 np0005593232 great_newton[276969]:                "/dev/loop3"
Jan 23 04:39:07 np0005593232 great_newton[276969]:            ],
Jan 23 04:39:07 np0005593232 great_newton[276969]:            "lv_name": "ceph_lv0",
Jan 23 04:39:07 np0005593232 great_newton[276969]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:39:07 np0005593232 great_newton[276969]:            "lv_size": "7511998464",
Jan 23 04:39:07 np0005593232 great_newton[276969]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:39:07 np0005593232 great_newton[276969]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:39:07 np0005593232 great_newton[276969]:            "name": "ceph_lv0",
Jan 23 04:39:07 np0005593232 great_newton[276969]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:39:07 np0005593232 great_newton[276969]:            "tags": {
Jan 23 04:39:07 np0005593232 great_newton[276969]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:39:07 np0005593232 great_newton[276969]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:39:07 np0005593232 great_newton[276969]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:39:07 np0005593232 great_newton[276969]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:39:07 np0005593232 great_newton[276969]:                "ceph.cluster_name": "ceph",
Jan 23 04:39:07 np0005593232 great_newton[276969]:                "ceph.crush_device_class": "",
Jan 23 04:39:07 np0005593232 great_newton[276969]:                "ceph.encrypted": "0",
Jan 23 04:39:07 np0005593232 great_newton[276969]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:39:07 np0005593232 great_newton[276969]:                "ceph.osd_id": "0",
Jan 23 04:39:07 np0005593232 great_newton[276969]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:39:07 np0005593232 great_newton[276969]:                "ceph.type": "block",
Jan 23 04:39:07 np0005593232 great_newton[276969]:                "ceph.vdo": "0"
Jan 23 04:39:07 np0005593232 great_newton[276969]:            },
Jan 23 04:39:07 np0005593232 great_newton[276969]:            "type": "block",
Jan 23 04:39:07 np0005593232 great_newton[276969]:            "vg_name": "ceph_vg0"
Jan 23 04:39:07 np0005593232 great_newton[276969]:        }
Jan 23 04:39:07 np0005593232 great_newton[276969]:    ]
Jan 23 04:39:07 np0005593232 great_newton[276969]: }
Jan 23 04:39:07 np0005593232 systemd[1]: libpod-317b215b2f0597f4e11a21cc6c42b96a16c3a42a684421114900bee0ae76dbc5.scope: Deactivated successfully.
Jan 23 04:39:07 np0005593232 podman[276953]: 2026-01-23 09:39:07.337645471 +0000 UTC m=+1.573011436 container died 317b215b2f0597f4e11a21cc6c42b96a16c3a42a684421114900bee0ae76dbc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_newton, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:39:07 np0005593232 systemd[1]: var-lib-containers-storage-overlay-bdcdb2a0c9d42a7e10d378ddf9fd688e95d767f37682d9f694f6e2e330a522ec-merged.mount: Deactivated successfully.
Jan 23 04:39:07 np0005593232 podman[276953]: 2026-01-23 09:39:07.449799308 +0000 UTC m=+1.685165273 container remove 317b215b2f0597f4e11a21cc6c42b96a16c3a42a684421114900bee0ae76dbc5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_newton, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:39:07 np0005593232 systemd[1]: libpod-conmon-317b215b2f0597f4e11a21cc6c42b96a16c3a42a684421114900bee0ae76dbc5.scope: Deactivated successfully.
Jan 23 04:39:08 np0005593232 nova_compute[250269]: 2026-01-23 09:39:08.061 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:08 np0005593232 podman[277131]: 2026-01-23 09:39:08.148112337 +0000 UTC m=+0.101991070 container create 2c94ea336d766f5acd2c8421b6bf6fcde70ccaaf4b28169d674a72f01664e90a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 04:39:08 np0005593232 podman[277131]: 2026-01-23 09:39:08.071231299 +0000 UTC m=+0.025110042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:39:08 np0005593232 systemd[1]: Started libpod-conmon-2c94ea336d766f5acd2c8421b6bf6fcde70ccaaf4b28169d674a72f01664e90a.scope.
Jan 23 04:39:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:39:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:08.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:39:08 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:39:08 np0005593232 podman[277131]: 2026-01-23 09:39:08.242533301 +0000 UTC m=+0.196412034 container init 2c94ea336d766f5acd2c8421b6bf6fcde70ccaaf4b28169d674a72f01664e90a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wright, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:39:08 np0005593232 podman[277131]: 2026-01-23 09:39:08.249345004 +0000 UTC m=+0.203223727 container start 2c94ea336d766f5acd2c8421b6bf6fcde70ccaaf4b28169d674a72f01664e90a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Jan 23 04:39:08 np0005593232 vigilant_wright[277148]: 167 167
Jan 23 04:39:08 np0005593232 podman[277131]: 2026-01-23 09:39:08.253751749 +0000 UTC m=+0.207630472 container attach 2c94ea336d766f5acd2c8421b6bf6fcde70ccaaf4b28169d674a72f01664e90a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wright, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 04:39:08 np0005593232 systemd[1]: libpod-2c94ea336d766f5acd2c8421b6bf6fcde70ccaaf4b28169d674a72f01664e90a.scope: Deactivated successfully.
Jan 23 04:39:08 np0005593232 podman[277131]: 2026-01-23 09:39:08.254605183 +0000 UTC m=+0.208483916 container died 2c94ea336d766f5acd2c8421b6bf6fcde70ccaaf4b28169d674a72f01664e90a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 04:39:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay-95571100024aad3412884298b4a6360e69ee0c9ecfe924095d0e8df13b776a0e-merged.mount: Deactivated successfully.
Jan 23 04:39:08 np0005593232 podman[277131]: 2026-01-23 09:39:08.287033922 +0000 UTC m=+0.240912645 container remove 2c94ea336d766f5acd2c8421b6bf6fcde70ccaaf4b28169d674a72f01664e90a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:39:08 np0005593232 systemd[1]: libpod-conmon-2c94ea336d766f5acd2c8421b6bf6fcde70ccaaf4b28169d674a72f01664e90a.scope: Deactivated successfully.
Jan 23 04:39:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:39:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:08.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:39:08 np0005593232 podman[277173]: 2026-01-23 09:39:08.491814222 +0000 UTC m=+0.054057462 container create c7c80e77a20502d4e77336b5c2ec9f97059ce83af2985f8a3738d3bb0862f782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:39:08 np0005593232 systemd[1]: Started libpod-conmon-c7c80e77a20502d4e77336b5c2ec9f97059ce83af2985f8a3738d3bb0862f782.scope.
Jan 23 04:39:08 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:39:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a55ef7ccc50e3c650237326724b37d55b1bbb120fdc431ef6dc31791502a57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:39:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a55ef7ccc50e3c650237326724b37d55b1bbb120fdc431ef6dc31791502a57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:39:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a55ef7ccc50e3c650237326724b37d55b1bbb120fdc431ef6dc31791502a57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:39:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7a55ef7ccc50e3c650237326724b37d55b1bbb120fdc431ef6dc31791502a57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:39:08 np0005593232 podman[277173]: 2026-01-23 09:39:08.47514766 +0000 UTC m=+0.037390930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:39:08 np0005593232 podman[277173]: 2026-01-23 09:39:08.582452209 +0000 UTC m=+0.144695449 container init c7c80e77a20502d4e77336b5c2ec9f97059ce83af2985f8a3738d3bb0862f782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:39:08 np0005593232 podman[277173]: 2026-01-23 09:39:08.588313485 +0000 UTC m=+0.150556725 container start c7c80e77a20502d4e77336b5c2ec9f97059ce83af2985f8a3738d3bb0862f782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brahmagupta, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:39:08 np0005593232 podman[277173]: 2026-01-23 09:39:08.591707131 +0000 UTC m=+0.153950371 container attach c7c80e77a20502d4e77336b5c2ec9f97059ce83af2985f8a3738d3bb0862f782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 23 04:39:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:39:09 np0005593232 nova_compute[250269]: 2026-01-23 09:39:09.105 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1353: 321 pgs: 321 active+clean; 358 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 254 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 23 04:39:09 np0005593232 nova_compute[250269]: 2026-01-23 09:39:09.281 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:39:09 np0005593232 nova_compute[250269]: 2026-01-23 09:39:09.281 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:39:09 np0005593232 nova_compute[250269]: 2026-01-23 09:39:09.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:39:09 np0005593232 nova_compute[250269]: 2026-01-23 09:39:09.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:39:09 np0005593232 nova_compute[250269]: 2026-01-23 09:39:09.408 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 04:39:09 np0005593232 distracted_brahmagupta[277190]: {
Jan 23 04:39:09 np0005593232 distracted_brahmagupta[277190]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:39:09 np0005593232 distracted_brahmagupta[277190]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:39:09 np0005593232 distracted_brahmagupta[277190]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:39:09 np0005593232 distracted_brahmagupta[277190]:        "osd_id": 0,
Jan 23 04:39:09 np0005593232 distracted_brahmagupta[277190]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:39:09 np0005593232 distracted_brahmagupta[277190]:        "type": "bluestore"
Jan 23 04:39:09 np0005593232 distracted_brahmagupta[277190]:    }
Jan 23 04:39:09 np0005593232 distracted_brahmagupta[277190]: }
Jan 23 04:39:09 np0005593232 systemd[1]: libpod-c7c80e77a20502d4e77336b5c2ec9f97059ce83af2985f8a3738d3bb0862f782.scope: Deactivated successfully.
Jan 23 04:39:09 np0005593232 podman[277173]: 2026-01-23 09:39:09.450658031 +0000 UTC m=+1.012901301 container died c7c80e77a20502d4e77336b5c2ec9f97059ce83af2985f8a3738d3bb0862f782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 23 04:39:09 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e7a55ef7ccc50e3c650237326724b37d55b1bbb120fdc431ef6dc31791502a57-merged.mount: Deactivated successfully.
Jan 23 04:39:09 np0005593232 podman[277173]: 2026-01-23 09:39:09.507952314 +0000 UTC m=+1.070195554 container remove c7c80e77a20502d4e77336b5c2ec9f97059ce83af2985f8a3738d3bb0862f782 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 04:39:09 np0005593232 systemd[1]: libpod-conmon-c7c80e77a20502d4e77336b5c2ec9f97059ce83af2985f8a3738d3bb0862f782.scope: Deactivated successfully.
Jan 23 04:39:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:39:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:39:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:39:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:39:09 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 48650f76-fb44-47aa-a388-fe2e29e5e076 does not exist
Jan 23 04:39:09 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 0053e199-466a-4fae-bc3c-999b679e1c27 does not exist
Jan 23 04:39:09 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 5cce8b8a-8a7a-4f23-8f11-039942025b59 does not exist
Jan 23 04:39:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:10.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:10.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:39:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:39:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1354: 321 pgs: 321 active+clean; 358 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 254 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 23 04:39:11 np0005593232 nova_compute[250269]: 2026-01-23 09:39:11.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:39:11 np0005593232 nova_compute[250269]: 2026-01-23 09:39:11.356 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:39:11 np0005593232 nova_compute[250269]: 2026-01-23 09:39:11.356 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:39:11 np0005593232 nova_compute[250269]: 2026-01-23 09:39:11.356 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:39:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:12.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:12 np0005593232 nova_compute[250269]: 2026-01-23 09:39:12.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:39:12 np0005593232 nova_compute[250269]: 2026-01-23 09:39:12.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:39:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:12.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:12 np0005593232 podman[277299]: 2026-01-23 09:39:12.665633444 +0000 UTC m=+0.061404150 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 23 04:39:13 np0005593232 nova_compute[250269]: 2026-01-23 09:39:13.064 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1355: 321 pgs: 321 active+clean; 358 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 255 KiB/s rd, 2.2 MiB/s wr, 62 op/s
Jan 23 04:39:13 np0005593232 nova_compute[250269]: 2026-01-23 09:39:13.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:39:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:39:14 np0005593232 nova_compute[250269]: 2026-01-23 09:39:14.108 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:14.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:14 np0005593232 nova_compute[250269]: 2026-01-23 09:39:14.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:39:14 np0005593232 nova_compute[250269]: 2026-01-23 09:39:14.323 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:39:14 np0005593232 nova_compute[250269]: 2026-01-23 09:39:14.323 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:39:14 np0005593232 nova_compute[250269]: 2026-01-23 09:39:14.324 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:39:14 np0005593232 nova_compute[250269]: 2026-01-23 09:39:14.324 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:39:14 np0005593232 nova_compute[250269]: 2026-01-23 09:39:14.324 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:39:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000056s ======
Jan 23 04:39:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:14.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Jan 23 04:39:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:39:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1959071815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:39:14 np0005593232 nova_compute[250269]: 2026-01-23 09:39:14.793 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:39:14 np0005593232 nova_compute[250269]: 2026-01-23 09:39:14.893 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:39:14 np0005593232 nova_compute[250269]: 2026-01-23 09:39:14.894 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:39:15 np0005593232 nova_compute[250269]: 2026-01-23 09:39:15.054 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:39:15 np0005593232 nova_compute[250269]: 2026-01-23 09:39:15.055 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4430MB free_disk=20.806194305419922GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:39:15 np0005593232 nova_compute[250269]: 2026-01-23 09:39:15.055 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:39:15 np0005593232 nova_compute[250269]: 2026-01-23 09:39:15.056 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:39:15 np0005593232 nova_compute[250269]: 2026-01-23 09:39:15.170 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:39:15 np0005593232 nova_compute[250269]: 2026-01-23 09:39:15.170 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:39:15 np0005593232 nova_compute[250269]: 2026-01-23 09:39:15.170 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:39:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1356: 321 pgs: 321 active+clean; 358 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 135 KiB/s rd, 163 KiB/s wr, 25 op/s
Jan 23 04:39:15 np0005593232 nova_compute[250269]: 2026-01-23 09:39:15.205 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing inventories for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 23 04:39:15 np0005593232 nova_compute[250269]: 2026-01-23 09:39:15.241 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating ProviderTree inventory for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 23 04:39:15 np0005593232 nova_compute[250269]: 2026-01-23 09:39:15.241 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 04:39:15 np0005593232 nova_compute[250269]: 2026-01-23 09:39:15.299 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing aggregate associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 23 04:39:15 np0005593232 nova_compute[250269]: 2026-01-23 09:39:15.356 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing trait associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 23 04:39:15 np0005593232 nova_compute[250269]: 2026-01-23 09:39:15.454 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:39:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:39:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3281403747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:39:15 np0005593232 nova_compute[250269]: 2026-01-23 09:39:15.971 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:39:15 np0005593232 nova_compute[250269]: 2026-01-23 09:39:15.980 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:39:16 np0005593232 nova_compute[250269]: 2026-01-23 09:39:16.028 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:39:16 np0005593232 nova_compute[250269]: 2026-01-23 09:39:16.094 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:39:16 np0005593232 nova_compute[250269]: 2026-01-23 09:39:16.095 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:39:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:16.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:16.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1357: 321 pgs: 321 active+clean; 358 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 142 KiB/s rd, 165 KiB/s wr, 34 op/s
Jan 23 04:39:18 np0005593232 nova_compute[250269]: 2026-01-23 09:39:18.119 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:39:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:18.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:39:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:18.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:39:19 np0005593232 nova_compute[250269]: 2026-01-23 09:39:19.110 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1358: 321 pgs: 321 active+clean; 333 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 73 KiB/s wr, 18 op/s
Jan 23 04:39:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:20.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:20.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1359: 321 pgs: 321 active+clean; 333 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 37 KiB/s wr, 14 op/s
Jan 23 04:39:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:22.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:22.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:23 np0005593232 nova_compute[250269]: 2026-01-23 09:39:23.122 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1360: 321 pgs: 321 active+clean; 325 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 23 04:39:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:39:24 np0005593232 nova_compute[250269]: 2026-01-23 09:39:24.114 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:24.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:39:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:24.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:39:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1361: 321 pgs: 321 active+clean; 325 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 23 04:39:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:26.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:26.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1362: 321 pgs: 321 active+clean; 326 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Jan 23 04:39:28 np0005593232 nova_compute[250269]: 2026-01-23 09:39:28.125 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:28.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:28.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:39:29 np0005593232 nova_compute[250269]: 2026-01-23 09:39:29.116 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1363: 321 pgs: 321 active+clean; 326 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 121 op/s
Jan 23 04:39:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:30.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:30.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1364: 321 pgs: 321 active+clean; 326 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Jan 23 04:39:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:39:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:32.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:39:32 np0005593232 podman[277400]: 2026-01-23 09:39:32.438686581 +0000 UTC m=+0.095602939 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 04:39:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:32.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:39:32.517 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:39:32 np0005593232 nova_compute[250269]: 2026-01-23 09:39:32.518 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:39:32.519 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:39:33 np0005593232 nova_compute[250269]: 2026-01-23 09:39:33.164 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1365: 321 pgs: 321 active+clean; 326 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Jan 23 04:39:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:39:34 np0005593232 nova_compute[250269]: 2026-01-23 09:39:34.118 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:34.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:34.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1366: 321 pgs: 321 active+clean; 326 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 20 KiB/s wr, 74 op/s
Jan 23 04:39:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:39:35.521 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:39:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:36.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:36.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:39:37
Jan 23 04:39:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:39:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:39:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'images', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta']
Jan 23 04:39:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:39:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1367: 321 pgs: 321 active+clean; 279 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 119 op/s
Jan 23 04:39:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:39:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:39:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:39:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:39:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:39:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:39:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:39:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:39:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:39:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:39:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:39:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:39:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:39:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:39:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:39:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:39:38 np0005593232 nova_compute[250269]: 2026-01-23 09:39:38.168 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:38.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:38.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:39:39 np0005593232 nova_compute[250269]: 2026-01-23 09:39:39.121 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1368: 321 pgs: 321 active+clean; 273 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 721 KiB/s rd, 2.1 MiB/s wr, 98 op/s
Jan 23 04:39:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:40.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:39:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:40.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:39:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1369: 321 pgs: 321 active+clean; 273 MiB data, 542 MiB used, 20 GiB / 21 GiB avail; 241 KiB/s rd, 2.1 MiB/s wr, 79 op/s
Jan 23 04:39:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:42.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:39:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:42.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:39:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:39:42.594 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:39:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:39:42.594 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:39:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:39:42.594 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:39:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1370: 321 pgs: 321 active+clean; 275 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 299 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Jan 23 04:39:43 np0005593232 nova_compute[250269]: 2026-01-23 09:39:43.204 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:43 np0005593232 podman[277485]: 2026-01-23 09:39:43.386668019 +0000 UTC m=+0.052418006 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 23 04:39:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:39:44 np0005593232 nova_compute[250269]: 2026-01-23 09:39:44.122 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:44.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:44.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:44 np0005593232 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 23 04:39:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1371: 321 pgs: 321 active+clean; 275 MiB data, 560 MiB used, 20 GiB / 21 GiB avail; 299 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Jan 23 04:39:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:46.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:39:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:46.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:39:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:39:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:39:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:39:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:39:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005890636632235251 of space, bias 1.0, pg target 1.767190989670575 quantized to 32 (current 32)
Jan 23 04:39:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:39:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 23 04:39:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:39:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:39:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:39:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 23 04:39:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:39:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 04:39:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:39:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:39:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:39:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 04:39:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:39:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 04:39:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:39:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:39:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:39:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 04:39:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1372: 321 pgs: 321 active+clean; 223 MiB data, 527 MiB used, 20 GiB / 21 GiB avail; 316 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.207 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:39:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:48.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.284 250273 DEBUG oslo_concurrency.lockutils [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Acquiring lock "ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.285 250273 DEBUG oslo_concurrency.lockutils [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Lock "ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.285 250273 DEBUG oslo_concurrency.lockutils [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Acquiring lock "ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.285 250273 DEBUG oslo_concurrency.lockutils [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Lock "ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.285 250273 DEBUG oslo_concurrency.lockutils [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Lock "ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.287 250273 INFO nova.compute.manager [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Terminating instance#033[00m
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.287 250273 DEBUG nova.compute.manager [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 04:39:48 np0005593232 kernel: tap1b51b4db-a7 (unregistering): left promiscuous mode
Jan 23 04:39:48 np0005593232 NetworkManager[49057]: <info>  [1769161188.3473] device (tap1b51b4db-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:39:48 np0005593232 ovn_controller[151001]: 2026-01-23T09:39:48Z|00116|binding|INFO|Releasing lport 1b51b4db-a755-47c2-9d6b-f75e5cdb0204 from this chassis (sb_readonly=0)
Jan 23 04:39:48 np0005593232 ovn_controller[151001]: 2026-01-23T09:39:48Z|00117|binding|INFO|Setting lport 1b51b4db-a755-47c2-9d6b-f75e5cdb0204 down in Southbound
Jan 23 04:39:48 np0005593232 ovn_controller[151001]: 2026-01-23T09:39:48Z|00118|binding|INFO|Removing iface tap1b51b4db-a7 ovn-installed in OVS
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.356 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.358 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:39:48.372 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:6c:17 10.100.0.7'], port_security=['fa:16:3e:c8:6c:17 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a5f46b255cd4387bd3e4c0acaa39466', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7d939c30-94ef-4237-8ee8-7374d4fefcd9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bd55a4d-ba72-4dcd-bf4e-ec1dab31b370, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=1b51b4db-a755-47c2-9d6b-f75e5cdb0204) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:39:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:39:48.374 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 1b51b4db-a755-47c2-9d6b-f75e5cdb0204 in datapath 1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c unbound from our chassis#033[00m
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.375 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:39:48.376 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 04:39:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:39:48.378 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[640506c4-3e4e-4d0c-9959-1e0bc508e32c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:39:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:39:48.378 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c namespace which is not needed anymore#033[00m
Jan 23 04:39:48 np0005593232 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Jan 23 04:39:48 np0005593232 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000001f.scope: Consumed 19.224s CPU time.
Jan 23 04:39:48 np0005593232 systemd-machined[215836]: Machine qemu-14-instance-0000001f terminated.
Jan 23 04:39:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:39:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:48.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:39:48 np0005593232 neutron-haproxy-ovnmeta-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c[274353]: [NOTICE]   (274357) : haproxy version is 2.8.14-c23fe91
Jan 23 04:39:48 np0005593232 neutron-haproxy-ovnmeta-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c[274353]: [NOTICE]   (274357) : path to executable is /usr/sbin/haproxy
Jan 23 04:39:48 np0005593232 neutron-haproxy-ovnmeta-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c[274353]: [WARNING]  (274357) : Exiting Master process...
Jan 23 04:39:48 np0005593232 neutron-haproxy-ovnmeta-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c[274353]: [WARNING]  (274357) : Exiting Master process...
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.504 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:48 np0005593232 neutron-haproxy-ovnmeta-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c[274353]: [ALERT]    (274357) : Current worker (274359) exited with code 143 (Terminated)
Jan 23 04:39:48 np0005593232 neutron-haproxy-ovnmeta-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c[274353]: [WARNING]  (274357) : All workers exited. Exiting... (0)
Jan 23 04:39:48 np0005593232 systemd[1]: libpod-dbcb2b93545371eb91ecbeaa840bd86bc814e66c5438663a842f4b91d6c0d3f8.scope: Deactivated successfully.
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.509 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:48 np0005593232 podman[277535]: 2026-01-23 09:39:48.514221966 +0000 UTC m=+0.048292619 container died dbcb2b93545371eb91ecbeaa840bd86bc814e66c5438663a842f4b91d6c0d3f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.538 250273 INFO nova.virt.libvirt.driver [-] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Instance destroyed successfully.#033[00m
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.538 250273 DEBUG nova.objects.instance [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Lazy-loading 'resources' on Instance uuid ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:39:48 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dbcb2b93545371eb91ecbeaa840bd86bc814e66c5438663a842f4b91d6c0d3f8-userdata-shm.mount: Deactivated successfully.
Jan 23 04:39:48 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b8e6cb4bf5bd9695192cffbd81adaf2c61a1f2a01af467ab53cf8f3148ca1c50-merged.mount: Deactivated successfully.
Jan 23 04:39:48 np0005593232 podman[277535]: 2026-01-23 09:39:48.561437284 +0000 UTC m=+0.095507937 container cleanup dbcb2b93545371eb91ecbeaa840bd86bc814e66c5438663a842f4b91d6c0d3f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.564 250273 DEBUG nova.virt.libvirt.vif [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:37:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-2022362159',display_name='tempest-ServersAdminTestJSON-server-2022362159',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-2022362159',id=31,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:37:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a5f46b255cd4387bd3e4c0acaa39466',ramdisk_id='',reservation_id='r-s5k184ms',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min
_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1167530593',owner_user_name='tempest-ServersAdminTestJSON-1167530593-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:37:35Z,user_data=None,user_id='191a72cfd0a841e9806246e07eb62fa6',uuid=ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b51b4db-a755-47c2-9d6b-f75e5cdb0204", "address": "fa:16:3e:c8:6c:17", "network": {"id": "1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-62484463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5f46b255cd4387bd3e4c0acaa39466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b51b4db-a7", "ovs_interfaceid": "1b51b4db-a755-47c2-9d6b-f75e5cdb0204", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.565 250273 DEBUG nova.network.os_vif_util [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Converting VIF {"id": "1b51b4db-a755-47c2-9d6b-f75e5cdb0204", "address": "fa:16:3e:c8:6c:17", "network": {"id": "1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-62484463-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a5f46b255cd4387bd3e4c0acaa39466", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b51b4db-a7", "ovs_interfaceid": "1b51b4db-a755-47c2-9d6b-f75e5cdb0204", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.565 250273 DEBUG nova.network.os_vif_util [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c8:6c:17,bridge_name='br-int',has_traffic_filtering=True,id=1b51b4db-a755-47c2-9d6b-f75e5cdb0204,network=Network(1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b51b4db-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.566 250273 DEBUG os_vif [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:6c:17,bridge_name='br-int',has_traffic_filtering=True,id=1b51b4db-a755-47c2-9d6b-f75e5cdb0204,network=Network(1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b51b4db-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.568 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.568 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b51b4db-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:39:48 np0005593232 systemd[1]: libpod-conmon-dbcb2b93545371eb91ecbeaa840bd86bc814e66c5438663a842f4b91d6c0d3f8.scope: Deactivated successfully.
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.603 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.606 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.608 250273 INFO os_vif [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:6c:17,bridge_name='br-int',has_traffic_filtering=True,id=1b51b4db-a755-47c2-9d6b-f75e5cdb0204,network=Network(1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b51b4db-a7')#033[00m
Jan 23 04:39:48 np0005593232 podman[277576]: 2026-01-23 09:39:48.662562958 +0000 UTC m=+0.047393493 container remove dbcb2b93545371eb91ecbeaa840bd86bc814e66c5438663a842f4b91d6c0d3f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:39:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:39:48.668 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9af446ae-0ed2-4dfc-bd2c-de2af7a4f4ce]: (4, ('Fri Jan 23 09:39:48 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c (dbcb2b93545371eb91ecbeaa840bd86bc814e66c5438663a842f4b91d6c0d3f8)\ndbcb2b93545371eb91ecbeaa840bd86bc814e66c5438663a842f4b91d6c0d3f8\nFri Jan 23 09:39:48 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c (dbcb2b93545371eb91ecbeaa840bd86bc814e66c5438663a842f4b91d6c0d3f8)\ndbcb2b93545371eb91ecbeaa840bd86bc814e66c5438663a842f4b91d6c0d3f8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:39:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:39:48.670 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[30e5184a-60d8-4da1-b9c2-1cf8164d0395]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:39:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:39:48.671 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f2b13ad-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.672 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:48 np0005593232 kernel: tap1f2b13ad-70: left promiscuous mode
Jan 23 04:39:48 np0005593232 nova_compute[250269]: 2026-01-23 09:39:48.699 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:39:48.702 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[595498c0-7f8b-40b9-95a4-e1a90fe92541]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:39:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:39:48.718 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0d607298-701e-4b00-9109-ed1dfe64aa0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:39:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:39:48.719 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1730bcac-0b01-416f-93f9-8fb5e70dde0e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:39:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:39:48.736 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ff621b91-a9d7-4e62-baa1-2449f83a3b33]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496636, 'reachable_time': 26704, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277606, 'error': None, 'target': 'ovnmeta-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:39:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:39:48.738 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1f2b13ad-7b25-4a2b-b4d5-7432a67ce12c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 04:39:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:39:48.738 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[f1054e24-889e-4f9b-b233-2824754632b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:39:48 np0005593232 systemd[1]: run-netns-ovnmeta\x2d1f2b13ad\x2d7b25\x2d4a2b\x2db4d5\x2d7432a67ce12c.mount: Deactivated successfully.
Jan 23 04:39:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:39:49 np0005593232 nova_compute[250269]: 2026-01-23 09:39:49.026 250273 INFO nova.virt.libvirt.driver [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Deleting instance files /var/lib/nova/instances/ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae_del#033[00m
Jan 23 04:39:49 np0005593232 nova_compute[250269]: 2026-01-23 09:39:49.027 250273 INFO nova.virt.libvirt.driver [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Deletion of /var/lib/nova/instances/ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae_del complete#033[00m
Jan 23 04:39:49 np0005593232 nova_compute[250269]: 2026-01-23 09:39:49.124 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:49 np0005593232 nova_compute[250269]: 2026-01-23 09:39:49.140 250273 INFO nova.compute.manager [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 04:39:49 np0005593232 nova_compute[250269]: 2026-01-23 09:39:49.141 250273 DEBUG oslo.service.loopingcall [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 04:39:49 np0005593232 nova_compute[250269]: 2026-01-23 09:39:49.142 250273 DEBUG nova.compute.manager [-] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 04:39:49 np0005593232 nova_compute[250269]: 2026-01-23 09:39:49.142 250273 DEBUG nova.network.neutron [-] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 04:39:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1373: 321 pgs: 321 active+clean; 200 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 177 KiB/s rd, 641 KiB/s wr, 70 op/s
Jan 23 04:39:49 np0005593232 nova_compute[250269]: 2026-01-23 09:39:49.429 250273 DEBUG nova.compute.manager [req-6ee23f53-c5ca-4c5b-927f-376c15345610 req-b8ddd2cf-2cad-42d8-81ad-fd7b048c17ce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Received event network-vif-unplugged-1b51b4db-a755-47c2-9d6b-f75e5cdb0204 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:39:49 np0005593232 nova_compute[250269]: 2026-01-23 09:39:49.430 250273 DEBUG oslo_concurrency.lockutils [req-6ee23f53-c5ca-4c5b-927f-376c15345610 req-b8ddd2cf-2cad-42d8-81ad-fd7b048c17ce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:39:49 np0005593232 nova_compute[250269]: 2026-01-23 09:39:49.430 250273 DEBUG oslo_concurrency.lockutils [req-6ee23f53-c5ca-4c5b-927f-376c15345610 req-b8ddd2cf-2cad-42d8-81ad-fd7b048c17ce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:39:49 np0005593232 nova_compute[250269]: 2026-01-23 09:39:49.430 250273 DEBUG oslo_concurrency.lockutils [req-6ee23f53-c5ca-4c5b-927f-376c15345610 req-b8ddd2cf-2cad-42d8-81ad-fd7b048c17ce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:39:49 np0005593232 nova_compute[250269]: 2026-01-23 09:39:49.430 250273 DEBUG nova.compute.manager [req-6ee23f53-c5ca-4c5b-927f-376c15345610 req-b8ddd2cf-2cad-42d8-81ad-fd7b048c17ce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] No waiting events found dispatching network-vif-unplugged-1b51b4db-a755-47c2-9d6b-f75e5cdb0204 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:39:49 np0005593232 nova_compute[250269]: 2026-01-23 09:39:49.431 250273 DEBUG nova.compute.manager [req-6ee23f53-c5ca-4c5b-927f-376c15345610 req-b8ddd2cf-2cad-42d8-81ad-fd7b048c17ce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Received event network-vif-unplugged-1b51b4db-a755-47c2-9d6b-f75e5cdb0204 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 04:39:49 np0005593232 nova_compute[250269]: 2026-01-23 09:39:49.431 250273 DEBUG nova.compute.manager [req-6ee23f53-c5ca-4c5b-927f-376c15345610 req-b8ddd2cf-2cad-42d8-81ad-fd7b048c17ce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Received event network-vif-plugged-1b51b4db-a755-47c2-9d6b-f75e5cdb0204 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:39:49 np0005593232 nova_compute[250269]: 2026-01-23 09:39:49.431 250273 DEBUG oslo_concurrency.lockutils [req-6ee23f53-c5ca-4c5b-927f-376c15345610 req-b8ddd2cf-2cad-42d8-81ad-fd7b048c17ce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:39:49 np0005593232 nova_compute[250269]: 2026-01-23 09:39:49.431 250273 DEBUG oslo_concurrency.lockutils [req-6ee23f53-c5ca-4c5b-927f-376c15345610 req-b8ddd2cf-2cad-42d8-81ad-fd7b048c17ce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:39:49 np0005593232 nova_compute[250269]: 2026-01-23 09:39:49.431 250273 DEBUG oslo_concurrency.lockutils [req-6ee23f53-c5ca-4c5b-927f-376c15345610 req-b8ddd2cf-2cad-42d8-81ad-fd7b048c17ce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:39:49 np0005593232 nova_compute[250269]: 2026-01-23 09:39:49.432 250273 DEBUG nova.compute.manager [req-6ee23f53-c5ca-4c5b-927f-376c15345610 req-b8ddd2cf-2cad-42d8-81ad-fd7b048c17ce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] No waiting events found dispatching network-vif-plugged-1b51b4db-a755-47c2-9d6b-f75e5cdb0204 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:39:49 np0005593232 nova_compute[250269]: 2026-01-23 09:39:49.432 250273 WARNING nova.compute.manager [req-6ee23f53-c5ca-4c5b-927f-376c15345610 req-b8ddd2cf-2cad-42d8-81ad-fd7b048c17ce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Received unexpected event network-vif-plugged-1b51b4db-a755-47c2-9d6b-f75e5cdb0204 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 04:39:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:39:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:50.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:39:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:39:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:50.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:39:51 np0005593232 nova_compute[250269]: 2026-01-23 09:39:51.119 250273 DEBUG nova.network.neutron [-] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:39:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1374: 321 pgs: 321 active+clean; 200 MiB data, 514 MiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 43 KiB/s wr, 36 op/s
Jan 23 04:39:51 np0005593232 nova_compute[250269]: 2026-01-23 09:39:51.196 250273 INFO nova.compute.manager [-] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Took 2.05 seconds to deallocate network for instance.#033[00m
Jan 23 04:39:51 np0005593232 nova_compute[250269]: 2026-01-23 09:39:51.274 250273 DEBUG oslo_concurrency.lockutils [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:39:51 np0005593232 nova_compute[250269]: 2026-01-23 09:39:51.275 250273 DEBUG oslo_concurrency.lockutils [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:39:51 np0005593232 nova_compute[250269]: 2026-01-23 09:39:51.376 250273 DEBUG oslo_concurrency.processutils [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:39:51 np0005593232 nova_compute[250269]: 2026-01-23 09:39:51.574 250273 DEBUG nova.compute.manager [req-f68c37bd-eb06-4bd9-adfa-dc6d696f4c3d req-3d3fe338-66f5-4bf8-a44d-078956d9803e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Received event network-vif-deleted-1b51b4db-a755-47c2-9d6b-f75e5cdb0204 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:39:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:39:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/607151538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:39:51 np0005593232 nova_compute[250269]: 2026-01-23 09:39:51.834 250273 DEBUG oslo_concurrency.processutils [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:39:51 np0005593232 nova_compute[250269]: 2026-01-23 09:39:51.840 250273 DEBUG nova.compute.provider_tree [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:39:51 np0005593232 nova_compute[250269]: 2026-01-23 09:39:51.871 250273 DEBUG nova.scheduler.client.report [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:39:51 np0005593232 nova_compute[250269]: 2026-01-23 09:39:51.907 250273 DEBUG oslo_concurrency.lockutils [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:39:51 np0005593232 nova_compute[250269]: 2026-01-23 09:39:51.990 250273 INFO nova.scheduler.client.report [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Deleted allocations for instance ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae#033[00m
Jan 23 04:39:52 np0005593232 nova_compute[250269]: 2026-01-23 09:39:52.145 250273 DEBUG oslo_concurrency.lockutils [None req-253a4d9b-4ad5-4a7a-b3e2-b93fc7e8e048 191a72cfd0a841e9806246e07eb62fa6 1a5f46b255cd4387bd3e4c0acaa39466 - - default default] Lock "ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:39:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:52.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:39:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:52.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:39:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1375: 321 pgs: 321 active+clean; 121 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 95 KiB/s rd, 45 KiB/s wr, 64 op/s
Jan 23 04:39:53 np0005593232 nova_compute[250269]: 2026-01-23 09:39:53.606 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:39:54 np0005593232 nova_compute[250269]: 2026-01-23 09:39:54.126 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:54.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:54.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1376: 321 pgs: 321 active+clean; 121 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 37 KiB/s rd, 15 KiB/s wr, 54 op/s
Jan 23 04:39:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:56.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:56.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1377: 321 pgs: 321 active+clean; 70 MiB data, 445 MiB used, 21 GiB / 21 GiB avail; 47 KiB/s rd, 15 KiB/s wr, 69 op/s
Jan 23 04:39:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:39:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:39:58.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:39:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:39:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:39:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:39:58.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:39:58 np0005593232 nova_compute[250269]: 2026-01-23 09:39:58.608 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:39:59 np0005593232 nova_compute[250269]: 2026-01-23 09:39:59.128 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:39:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1378: 321 pgs: 321 active+clean; 41 MiB data, 432 MiB used, 21 GiB / 21 GiB avail; 40 KiB/s rd, 15 KiB/s wr, 60 op/s
Jan 23 04:40:00 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 23 04:40:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:00.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:00 np0005593232 ceph-mon[74423]: overall HEALTH_OK
Jan 23 04:40:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:00.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1379: 321 pgs: 321 active+clean; 41 MiB data, 432 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Jan 23 04:40:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:40:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:02.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:40:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:02.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:03 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0[81752]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 23 04:40:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1380: 321 pgs: 321 active+clean; 41 MiB data, 421 MiB used, 21 GiB / 21 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Jan 23 04:40:03 np0005593232 podman[277692]: 2026-01-23 09:40:03.447108774 +0000 UTC m=+0.099985892 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 23 04:40:03 np0005593232 nova_compute[250269]: 2026-01-23 09:40:03.528 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769161188.5249605, ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:40:03 np0005593232 nova_compute[250269]: 2026-01-23 09:40:03.529 250273 INFO nova.compute.manager [-] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:40:03 np0005593232 nova_compute[250269]: 2026-01-23 09:40:03.604 250273 DEBUG nova.compute.manager [None req-0d75378a-be23-4423-b6f6-6c26290bea0d - - - - - -] [instance: ef092bb0-6bff-41d7-9159-cf8d2ef5b7ae] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:40:03 np0005593232 nova_compute[250269]: 2026-01-23 09:40:03.610 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:40:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:40:04 np0005593232 nova_compute[250269]: 2026-01-23 09:40:04.129 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:40:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:04.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:40:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:04.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:40:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1381: 321 pgs: 321 active+clean; 41 MiB data, 421 MiB used, 21 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 04:40:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:06.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:06.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:07 np0005593232 nova_compute[250269]: 2026-01-23 09:40:07.079 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:40:07 np0005593232 nova_compute[250269]: 2026-01-23 09:40:07.096 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:40:07 np0005593232 nova_compute[250269]: 2026-01-23 09:40:07.097 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:40:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1382: 321 pgs: 321 active+clean; 62 MiB data, 429 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 749 KiB/s wr, 44 op/s
Jan 23 04:40:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:40:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:40:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:40:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:40:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:40:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:40:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:40:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:08.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:40:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:40:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:08.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:40:08 np0005593232 nova_compute[250269]: 2026-01-23 09:40:08.650 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:40:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:40:09 np0005593232 nova_compute[250269]: 2026-01-23 09:40:09.131 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:40:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1383: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 581 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Jan 23 04:40:10 np0005593232 nova_compute[250269]: 2026-01-23 09:40:10.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 04:40:10 np0005593232 nova_compute[250269]: 2026-01-23 09:40:10.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 04:40:10 np0005593232 nova_compute[250269]: 2026-01-23 09:40:10.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 04:40:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:40:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:10.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:40:10 np0005593232 nova_compute[250269]: 2026-01-23 09:40:10.329 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 04:40:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:10.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1384: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 572 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Jan 23 04:40:11 np0005593232 nova_compute[250269]: 2026-01-23 09:40:11.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 04:40:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:40:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:40:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:40:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:40:12 np0005593232 nova_compute[250269]: 2026-01-23 09:40:12.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 04:40:12 np0005593232 nova_compute[250269]: 2026-01-23 09:40:12.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 04:40:12 np0005593232 nova_compute[250269]: 2026-01-23 09:40:12.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 04:40:12 np0005593232 nova_compute[250269]: 2026-01-23 09:40:12.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 04:40:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:12.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:12 np0005593232 nova_compute[250269]: 2026-01-23 09:40:12.345 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Acquiring lock "58bac0ef-6888-4869-bd19-5e7aa21353e7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:40:12 np0005593232 nova_compute[250269]: 2026-01-23 09:40:12.345 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "58bac0ef-6888-4869-bd19-5e7aa21353e7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:40:12 np0005593232 nova_compute[250269]: 2026-01-23 09:40:12.372 250273 DEBUG nova.compute.manager [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 23 04:40:12 np0005593232 nova_compute[250269]: 2026-01-23 09:40:12.411 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Acquiring lock "cd90446c-4f34-48ec-890c-da3641fbe66f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:40:12 np0005593232 nova_compute[250269]: 2026-01-23 09:40:12.412 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "cd90446c-4f34-48ec-890c-da3641fbe66f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:40:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:40:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2152848929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:40:12 np0005593232 nova_compute[250269]: 2026-01-23 09:40:12.458 250273 DEBUG nova.compute.manager [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 23 04:40:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:12.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:12 np0005593232 nova_compute[250269]: 2026-01-23 09:40:12.536 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:40:12 np0005593232 nova_compute[250269]: 2026-01-23 09:40:12.536 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:40:12 np0005593232 nova_compute[250269]: 2026-01-23 09:40:12.546 250273 DEBUG nova.virt.hardware [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 23 04:40:12 np0005593232 nova_compute[250269]: 2026-01-23 09:40:12.547 250273 INFO nova.compute.claims [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Claim successful on node compute-0.ctlplane.example.com
Jan 23 04:40:12 np0005593232 nova_compute[250269]: 2026-01-23 09:40:12.599 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:40:12 np0005593232 nova_compute[250269]: 2026-01-23 09:40:12.729 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:40:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:40:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:40:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:40:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:40:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:40:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:40:12 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 4f6df13c-fee2-4d20-b1ac-4e52caf54d85 does not exist
Jan 23 04:40:12 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 96add23d-de34-4e1b-bc70-311440ae799f does not exist
Jan 23 04:40:12 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 115883ef-31b8-4823-9ba6-c466e81503d0 does not exist
Jan 23 04:40:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:40:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:40:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:40:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:40:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:40:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:40:13 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:40:13 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:40:13 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:40:13 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:40:13 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:40:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:40:13 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2746076077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.180 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.187 250273 DEBUG nova.compute.provider_tree [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 04:40:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1385: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.206 250273 DEBUG nova.scheduler.client.report [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.245 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.247 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.253 250273 DEBUG nova.virt.hardware [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.254 250273 INFO nova.compute.claims [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Claim successful on node compute-0.ctlplane.example.com
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.301 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Acquiring lock "2c9cdb1b-fc2b-46a0-861f-54f1bd562d2c" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.302 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "2c9cdb1b-fc2b-46a0-861f-54f1bd562d2c" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.342 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "2c9cdb1b-fc2b-46a0-861f-54f1bd562d2c" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.344 250273 DEBUG nova.compute.manager [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.463 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.502 250273 DEBUG nova.compute.manager [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.503 250273 DEBUG nova.network.neutron [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.586 250273 INFO nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.610 250273 DEBUG nova.compute.manager [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 04:40:13 np0005593232 podman[278065]: 2026-01-23 09:40:13.555522008 +0000 UTC m=+0.037588864 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.653 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.723 250273 DEBUG nova.compute.manager [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.724 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.725 250273 INFO nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Creating image(s)
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.757 250273 DEBUG nova.storage.rbd_utils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] rbd image 58bac0ef-6888-4869-bd19-5e7aa21353e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.785 250273 DEBUG nova.storage.rbd_utils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] rbd image 58bac0ef-6888-4869-bd19-5e7aa21353e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 04:40:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.813 250273 DEBUG nova.storage.rbd_utils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] rbd image 58bac0ef-6888-4869-bd19-5e7aa21353e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.817 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.878 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.879 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.879 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.880 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.905 250273 DEBUG nova.storage.rbd_utils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] rbd image 58bac0ef-6888-4869-bd19-5e7aa21353e7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.908 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 58bac0ef-6888-4869-bd19-5e7aa21353e7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:40:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:40:13 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/274512732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:40:13 np0005593232 podman[278065]: 2026-01-23 09:40:13.917072014 +0000 UTC m=+0.399138780 container create 24a302f0009fc8bf372d4172b1fda611a1e46b1efd695bd906ac685300f25653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.939 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.947 250273 DEBUG nova.compute.provider_tree [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.966 250273 DEBUG nova.scheduler.client.report [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 04:40:13 np0005593232 nova_compute[250269]: 2026-01-23 09:40:13.996 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.065 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Acquiring lock "2c9cdb1b-fc2b-46a0-861f-54f1bd562d2c" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.065 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "2c9cdb1b-fc2b-46a0-861f-54f1bd562d2c" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.088 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "2c9cdb1b-fc2b-46a0-861f-54f1bd562d2c" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.089 250273 DEBUG nova.compute.manager [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.099 250273 DEBUG nova.network.neutron [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.100 250273 DEBUG nova.compute.manager [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.133 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:40:14 np0005593232 systemd[1]: Started libpod-conmon-24a302f0009fc8bf372d4172b1fda611a1e46b1efd695bd906ac685300f25653.scope.
Jan 23 04:40:14 np0005593232 podman[278176]: 2026-01-23 09:40:14.154673988 +0000 UTC m=+0.195874033 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.159 250273 DEBUG nova.compute.manager [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.159 250273 DEBUG nova.network.neutron [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.189 250273 INFO nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:40:14 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.214 250273 DEBUG nova.compute.manager [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:40:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:40:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:14.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.348 250273 DEBUG nova.compute.manager [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.350 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.350 250273 INFO nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Creating image(s)#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.386 250273 DEBUG nova.storage.rbd_utils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] rbd image cd90446c-4f34-48ec-890c-da3641fbe66f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:40:14 np0005593232 podman[278065]: 2026-01-23 09:40:14.400348556 +0000 UTC m=+0.882415342 container init 24a302f0009fc8bf372d4172b1fda611a1e46b1efd695bd906ac685300f25653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 04:40:14 np0005593232 podman[278065]: 2026-01-23 09:40:14.409994224 +0000 UTC m=+0.892060990 container start 24a302f0009fc8bf372d4172b1fda611a1e46b1efd695bd906ac685300f25653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:40:14 np0005593232 suspicious_chebyshev[278212]: 167 167
Jan 23 04:40:14 np0005593232 systemd[1]: libpod-24a302f0009fc8bf372d4172b1fda611a1e46b1efd695bd906ac685300f25653.scope: Deactivated successfully.
Jan 23 04:40:14 np0005593232 podman[278065]: 2026-01-23 09:40:14.42929104 +0000 UTC m=+0.911357876 container attach 24a302f0009fc8bf372d4172b1fda611a1e46b1efd695bd906ac685300f25653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_chebyshev, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:40:14 np0005593232 podman[278065]: 2026-01-23 09:40:14.429833655 +0000 UTC m=+0.911900461 container died 24a302f0009fc8bf372d4172b1fda611a1e46b1efd695bd906ac685300f25653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:40:14 np0005593232 systemd[1]: var-lib-containers-storage-overlay-fc05bd0e9409a77b468f294103bab7d5c3302c709d4bf6e6ee0316998023d958-merged.mount: Deactivated successfully.
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.463 250273 DEBUG nova.storage.rbd_utils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] rbd image cd90446c-4f34-48ec-890c-da3641fbe66f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:40:14 np0005593232 podman[278065]: 2026-01-23 09:40:14.48274472 +0000 UTC m=+0.964811506 container remove 24a302f0009fc8bf372d4172b1fda611a1e46b1efd695bd906ac685300f25653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_chebyshev, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:40:14 np0005593232 systemd[1]: libpod-conmon-24a302f0009fc8bf372d4172b1fda611a1e46b1efd695bd906ac685300f25653.scope: Deactivated successfully.
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.504 250273 DEBUG nova.storage.rbd_utils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] rbd image cd90446c-4f34-48ec-890c-da3641fbe66f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.508 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:40:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:14.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.594 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.595 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.596 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.596 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.619 250273 DEBUG nova.storage.rbd_utils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] rbd image cd90446c-4f34-48ec-890c-da3641fbe66f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.624 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 cd90446c-4f34-48ec-890c-da3641fbe66f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.650 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 58bac0ef-6888-4869-bd19-5e7aa21353e7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.742s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:40:14 np0005593232 podman[278297]: 2026-01-23 09:40:14.658818232 +0000 UTC m=+0.046646605 container create 4abdfcb740ccc857b5c910a60845595daa0e83d97068dd055c7f9839bf9cee85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_colden, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 04:40:14 np0005593232 systemd[1]: Started libpod-conmon-4abdfcb740ccc857b5c910a60845595daa0e83d97068dd055c7f9839bf9cee85.scope.
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.721 250273 DEBUG nova.storage.rbd_utils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] resizing rbd image 58bac0ef-6888-4869-bd19-5e7aa21353e7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 04:40:14 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:40:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4cf91668d8ac1d7d5f72457106c130a14771111b3b8088a5a9a2e4e4eea719/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:40:14 np0005593232 podman[278297]: 2026-01-23 09:40:14.640626668 +0000 UTC m=+0.028455061 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:40:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4cf91668d8ac1d7d5f72457106c130a14771111b3b8088a5a9a2e4e4eea719/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:40:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4cf91668d8ac1d7d5f72457106c130a14771111b3b8088a5a9a2e4e4eea719/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:40:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4cf91668d8ac1d7d5f72457106c130a14771111b3b8088a5a9a2e4e4eea719/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:40:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d4cf91668d8ac1d7d5f72457106c130a14771111b3b8088a5a9a2e4e4eea719/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:40:14 np0005593232 podman[278297]: 2026-01-23 09:40:14.754022915 +0000 UTC m=+0.141851328 container init 4abdfcb740ccc857b5c910a60845595daa0e83d97068dd055c7f9839bf9cee85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_colden, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 23 04:40:14 np0005593232 podman[278297]: 2026-01-23 09:40:14.760614425 +0000 UTC m=+0.148442798 container start 4abdfcb740ccc857b5c910a60845595daa0e83d97068dd055c7f9839bf9cee85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_colden, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:40:14 np0005593232 podman[278297]: 2026-01-23 09:40:14.765692691 +0000 UTC m=+0.153521074 container attach 4abdfcb740ccc857b5c910a60845595daa0e83d97068dd055c7f9839bf9cee85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.846 250273 DEBUG nova.objects.instance [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lazy-loading 'migration_context' on Instance uuid 58bac0ef-6888-4869-bd19-5e7aa21353e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.903 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.904 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Ensure instance console log exists: /var/lib/nova/instances/58bac0ef-6888-4869-bd19-5e7aa21353e7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.905 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.905 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.905 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.907 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.918 250273 DEBUG nova.network.neutron [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.919 250273 DEBUG nova.compute.manager [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.922 250273 WARNING nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.928 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 cd90446c-4f34-48ec-890c-da3641fbe66f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.960 250273 DEBUG nova.virt.libvirt.host [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:40:14 np0005593232 nova_compute[250269]: 2026-01-23 09:40:14.962 250273 DEBUG nova.virt.libvirt.host [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.000 250273 DEBUG nova.virt.libvirt.host [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.001 250273 DEBUG nova.virt.libvirt.host [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.003 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.003 250273 DEBUG nova.virt.hardware [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.003 250273 DEBUG nova.virt.hardware [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.004 250273 DEBUG nova.virt.hardware [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.004 250273 DEBUG nova.virt.hardware [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.004 250273 DEBUG nova.virt.hardware [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.005 250273 DEBUG nova.virt.hardware [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.005 250273 DEBUG nova.virt.hardware [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.005 250273 DEBUG nova.virt.hardware [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.006 250273 DEBUG nova.virt.hardware [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.006 250273 DEBUG nova.virt.hardware [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.006 250273 DEBUG nova.virt.hardware [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.009 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.036 250273 DEBUG nova.storage.rbd_utils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] resizing rbd image cd90446c-4f34-48ec-890c-da3641fbe66f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 04:40:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1386: 321 pgs: 321 active+clean; 88 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.415 250273 DEBUG nova.objects.instance [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lazy-loading 'migration_context' on Instance uuid cd90446c-4f34-48ec-890c-da3641fbe66f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:40:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:40:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/124703568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.439 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.440 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Ensure instance console log exists: /var/lib/nova/instances/cd90446c-4f34-48ec-890c-da3641fbe66f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.440 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.441 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.441 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.442 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.447 250273 WARNING nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.453 250273 DEBUG nova.virt.libvirt.host [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.454 250273 DEBUG nova.virt.libvirt.host [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.462 250273 DEBUG nova.virt.libvirt.host [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.462 250273 DEBUG nova.virt.libvirt.host [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.464 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.464 250273 DEBUG nova.virt.hardware [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.464 250273 DEBUG nova.virt.hardware [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.464 250273 DEBUG nova.virt.hardware [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.465 250273 DEBUG nova.virt.hardware [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.465 250273 DEBUG nova.virt.hardware [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.465 250273 DEBUG nova.virt.hardware [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.465 250273 DEBUG nova.virt.hardware [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.465 250273 DEBUG nova.virt.hardware [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.466 250273 DEBUG nova.virt.hardware [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.466 250273 DEBUG nova.virt.hardware [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.466 250273 DEBUG nova.virt.hardware [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.469 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.493 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.522 250273 DEBUG nova.storage.rbd_utils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] rbd image 58bac0ef-6888-4869-bd19-5e7aa21353e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.526 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:40:15 np0005593232 festive_colden[278381]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:40:15 np0005593232 festive_colden[278381]: --> relative data size: 1.0
Jan 23 04:40:15 np0005593232 festive_colden[278381]: --> All data devices are unavailable
Jan 23 04:40:15 np0005593232 systemd[1]: libpod-4abdfcb740ccc857b5c910a60845595daa0e83d97068dd055c7f9839bf9cee85.scope: Deactivated successfully.
Jan 23 04:40:15 np0005593232 conmon[278381]: conmon 4abdfcb740ccc857b5c9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4abdfcb740ccc857b5c910a60845595daa0e83d97068dd055c7f9839bf9cee85.scope/container/memory.events
Jan 23 04:40:15 np0005593232 podman[278297]: 2026-01-23 09:40:15.629448424 +0000 UTC m=+1.017276797 container died 4abdfcb740ccc857b5c910a60845595daa0e83d97068dd055c7f9839bf9cee85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_colden, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:40:15 np0005593232 systemd[1]: var-lib-containers-storage-overlay-2d4cf91668d8ac1d7d5f72457106c130a14771111b3b8088a5a9a2e4e4eea719-merged.mount: Deactivated successfully.
Jan 23 04:40:15 np0005593232 podman[278297]: 2026-01-23 09:40:15.693820618 +0000 UTC m=+1.081648991 container remove 4abdfcb740ccc857b5c910a60845595daa0e83d97068dd055c7f9839bf9cee85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_colden, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Jan 23 04:40:15 np0005593232 systemd[1]: libpod-conmon-4abdfcb740ccc857b5c910a60845595daa0e83d97068dd055c7f9839bf9cee85.scope: Deactivated successfully.
Jan 23 04:40:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:40:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/367487707' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.933 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:40:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:40:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2639132003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.963 250273 DEBUG nova.storage.rbd_utils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] rbd image cd90446c-4f34-48ec-890c-da3641fbe66f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.968 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.994 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:40:15 np0005593232 nova_compute[250269]: 2026-01-23 09:40:15.996 250273 DEBUG nova.objects.instance [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lazy-loading 'pci_devices' on Instance uuid 58bac0ef-6888-4869-bd19-5e7aa21353e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.119 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  <uuid>58bac0ef-6888-4869-bd19-5e7aa21353e7</uuid>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  <name>instance-00000025</name>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServersOnMultiNodesTest-server-2012671541-1</nova:name>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:40:14</nova:creationTime>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <nova:user uuid="a3b5a7f627074988a8a05a20558595fe">tempest-ServersOnMultiNodesTest-288318576-project-member</nova:user>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <nova:project uuid="e8778f3a187440f3879f9d9533d45855">tempest-ServersOnMultiNodesTest-288318576</nova:project>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <nova:ports/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <entry name="serial">58bac0ef-6888-4869-bd19-5e7aa21353e7</entry>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <entry name="uuid">58bac0ef-6888-4869-bd19-5e7aa21353e7</entry>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/58bac0ef-6888-4869-bd19-5e7aa21353e7_disk">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/58bac0ef-6888-4869-bd19-5e7aa21353e7_disk.config">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/58bac0ef-6888-4869-bd19-5e7aa21353e7/console.log" append="off"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:40:16 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:40:16 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.188 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.188 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.189 250273 INFO nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Using config drive#033[00m
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.224 250273 DEBUG nova.storage.rbd_utils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] rbd image 58bac0ef-6888-4869-bd19-5e7aa21353e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:40:16 np0005593232 podman[278803]: 2026-01-23 09:40:16.305025075 +0000 UTC m=+0.040713194 container create 64fe6e069c13c265a931154cb91b224c7da20721e2eb3453372040b11e2a5abf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wescoff, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 04:40:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:16.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:16 np0005593232 systemd[1]: Started libpod-conmon-64fe6e069c13c265a931154cb91b224c7da20721e2eb3453372040b11e2a5abf.scope.
Jan 23 04:40:16 np0005593232 podman[278803]: 2026-01-23 09:40:16.289881018 +0000 UTC m=+0.025569157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:40:16 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.397 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.398 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.398 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.398 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.400 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:40:16 np0005593232 podman[278803]: 2026-01-23 09:40:16.406476067 +0000 UTC m=+0.142164216 container init 64fe6e069c13c265a931154cb91b224c7da20721e2eb3453372040b11e2a5abf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wescoff, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:40:16.415013) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161216415081, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2380, "num_deletes": 508, "total_data_size": 3626022, "memory_usage": 3678336, "flush_reason": "Manual Compaction"}
Jan 23 04:40:16 np0005593232 podman[278803]: 2026-01-23 09:40:16.414978812 +0000 UTC m=+0.150666941 container start 64fe6e069c13c265a931154cb91b224c7da20721e2eb3453372040b11e2a5abf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wescoff, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Jan 23 04:40:16 np0005593232 podman[278803]: 2026-01-23 09:40:16.42220988 +0000 UTC m=+0.157898029 container attach 64fe6e069c13c265a931154cb91b224c7da20721e2eb3453372040b11e2a5abf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:40:16 np0005593232 stupefied_wescoff[278820]: 167 167
Jan 23 04:40:16 np0005593232 systemd[1]: libpod-64fe6e069c13c265a931154cb91b224c7da20721e2eb3453372040b11e2a5abf.scope: Deactivated successfully.
Jan 23 04:40:16 np0005593232 podman[278803]: 2026-01-23 09:40:16.424440015 +0000 UTC m=+0.160128144 container died 64fe6e069c13c265a931154cb91b224c7da20721e2eb3453372040b11e2a5abf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.433 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.436 250273 DEBUG nova.objects.instance [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lazy-loading 'pci_devices' on Instance uuid cd90446c-4f34-48ec-890c-da3641fbe66f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161216446622, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 3563880, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29023, "largest_seqno": 31402, "table_properties": {"data_size": 3553785, "index_size": 5885, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 24606, "raw_average_key_size": 19, "raw_value_size": 3531218, "raw_average_value_size": 2827, "num_data_blocks": 256, "num_entries": 1249, "num_filter_entries": 1249, "num_deletions": 508, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161017, "oldest_key_time": 1769161017, "file_creation_time": 1769161216, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 31678 microseconds, and 7675 cpu microseconds.
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:40:16.446693) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 3563880 bytes OK
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:40:16.446724) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:40:16.448977) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:40:16.448998) EVENT_LOG_v1 {"time_micros": 1769161216448991, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:40:16.449020) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 3615166, prev total WAL file size 3615166, number of live WAL files 2.
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:40:16.450422) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3480KB)], [65(8662KB)]
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161216450479, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 12434037, "oldest_snapshot_seqno": -1}
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.456 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  <uuid>cd90446c-4f34-48ec-890c-da3641fbe66f</uuid>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  <name>instance-00000026</name>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServersOnMultiNodesTest-server-2012671541-2</nova:name>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:40:15</nova:creationTime>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <nova:user uuid="a3b5a7f627074988a8a05a20558595fe">tempest-ServersOnMultiNodesTest-288318576-project-member</nova:user>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <nova:project uuid="e8778f3a187440f3879f9d9533d45855">tempest-ServersOnMultiNodesTest-288318576</nova:project>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <nova:ports/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <entry name="serial">cd90446c-4f34-48ec-890c-da3641fbe66f</entry>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <entry name="uuid">cd90446c-4f34-48ec-890c-da3641fbe66f</entry>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/cd90446c-4f34-48ec-890c-da3641fbe66f_disk">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/cd90446c-4f34-48ec-890c-da3641fbe66f_disk.config">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/cd90446c-4f34-48ec-890c-da3641fbe66f/console.log" append="off"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:40:16 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:40:16 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:40:16 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:40:16 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:40:16 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5b14fe404f7259d26f3b686edbc5e5f8b782ea5961c31744109d98f37c2aba0a-merged.mount: Deactivated successfully.
Jan 23 04:40:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:16.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5574 keys, 10381480 bytes, temperature: kUnknown
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161216567244, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 10381480, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10342324, "index_size": 24134, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13957, "raw_key_size": 143231, "raw_average_key_size": 25, "raw_value_size": 10240147, "raw_average_value_size": 1837, "num_data_blocks": 974, "num_entries": 5574, "num_filter_entries": 5574, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769161216, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:40:16.567503) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 10381480 bytes
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:40:16.569002) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 106.4 rd, 88.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 8.5 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(6.4) write-amplify(2.9) OK, records in: 6610, records dropped: 1036 output_compression: NoCompression
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:40:16.569043) EVENT_LOG_v1 {"time_micros": 1769161216569028, "job": 36, "event": "compaction_finished", "compaction_time_micros": 116855, "compaction_time_cpu_micros": 24253, "output_level": 6, "num_output_files": 1, "total_output_size": 10381480, "num_input_records": 6610, "num_output_records": 5574, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161216570001, "job": 36, "event": "table_file_deletion", "file_number": 67}
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161216571406, "job": 36, "event": "table_file_deletion", "file_number": 65}
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:40:16.450359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:40:16.571523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:40:16.571529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:40:16.571532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:40:16.571536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:40:16.571537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:40:16 np0005593232 podman[278803]: 2026-01-23 09:40:16.573489999 +0000 UTC m=+0.309178118 container remove 64fe6e069c13c265a931154cb91b224c7da20721e2eb3453372040b11e2a5abf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_wescoff, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:40:16 np0005593232 systemd[1]: libpod-conmon-64fe6e069c13c265a931154cb91b224c7da20721e2eb3453372040b11e2a5abf.scope: Deactivated successfully.
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.606 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.607 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.607 250273 INFO nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Using config drive#033[00m
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.634 250273 DEBUG nova.storage.rbd_utils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] rbd image cd90446c-4f34-48ec-890c-da3641fbe66f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.659 250273 INFO nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Creating config drive at /var/lib/nova/instances/58bac0ef-6888-4869-bd19-5e7aa21353e7/disk.config#033[00m
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.663 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/58bac0ef-6888-4869-bd19-5e7aa21353e7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuqu2k8ah execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:40:16 np0005593232 podman[278886]: 2026-01-23 09:40:16.749421817 +0000 UTC m=+0.039967752 container create b0ca6544bc0bc957afdcd65daf87bcac982db6f145ffb551a5ea0ccf5787eece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_goodall, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:40:16 np0005593232 systemd[1]: Started libpod-conmon-b0ca6544bc0bc957afdcd65daf87bcac982db6f145ffb551a5ea0ccf5787eece.scope.
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.792 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/58bac0ef-6888-4869-bd19-5e7aa21353e7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuqu2k8ah" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:40:16 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:40:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073db24c97b028fbab514be9184cbe2e7a03106180236e42abd3cb36641b0305/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:40:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073db24c97b028fbab514be9184cbe2e7a03106180236e42abd3cb36641b0305/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:40:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073db24c97b028fbab514be9184cbe2e7a03106180236e42abd3cb36641b0305/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:40:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/073db24c97b028fbab514be9184cbe2e7a03106180236e42abd3cb36641b0305/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:40:16 np0005593232 podman[278886]: 2026-01-23 09:40:16.731109079 +0000 UTC m=+0.021655034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.832 250273 DEBUG nova.storage.rbd_utils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] rbd image 58bac0ef-6888-4869-bd19-5e7aa21353e7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.837 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/58bac0ef-6888-4869-bd19-5e7aa21353e7/disk.config 58bac0ef-6888-4869-bd19-5e7aa21353e7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:40:16 np0005593232 podman[278886]: 2026-01-23 09:40:16.839535403 +0000 UTC m=+0.130081358 container init b0ca6544bc0bc957afdcd65daf87bcac982db6f145ffb551a5ea0ccf5787eece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_goodall, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 23 04:40:16 np0005593232 podman[278886]: 2026-01-23 09:40:16.845609028 +0000 UTC m=+0.136154963 container start b0ca6544bc0bc957afdcd65daf87bcac982db6f145ffb551a5ea0ccf5787eece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_goodall, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:40:16 np0005593232 podman[278886]: 2026-01-23 09:40:16.849754127 +0000 UTC m=+0.140300062 container attach b0ca6544bc0bc957afdcd65daf87bcac982db6f145ffb551a5ea0ccf5787eece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.869 250273 INFO nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Creating config drive at /var/lib/nova/instances/cd90446c-4f34-48ec-890c-da3641fbe66f/disk.config#033[00m
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.874 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cd90446c-4f34-48ec-890c-da3641fbe66f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphxf9b7ej execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:40:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3861208277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:40:16 np0005593232 nova_compute[250269]: 2026-01-23 09:40:16.959 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.003 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/58bac0ef-6888-4869-bd19-5e7aa21353e7/disk.config 58bac0ef-6888-4869-bd19-5e7aa21353e7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.004 250273 INFO nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Deleting local config drive /var/lib/nova/instances/58bac0ef-6888-4869-bd19-5e7aa21353e7/disk.config because it was imported into RBD.#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.010 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cd90446c-4f34-48ec-890c-da3641fbe66f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphxf9b7ej" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.047 250273 DEBUG nova.storage.rbd_utils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] rbd image cd90446c-4f34-48ec-890c-da3641fbe66f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.054 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cd90446c-4f34-48ec-890c-da3641fbe66f/disk.config cd90446c-4f34-48ec-890c-da3641fbe66f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:40:17 np0005593232 systemd-machined[215836]: New machine qemu-16-instance-00000025.
Jan 23 04:40:17 np0005593232 systemd[1]: Started Virtual Machine qemu-16-instance-00000025.
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.130 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000026 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.130 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000026 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.149 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.149 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000025 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:40:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1387: 321 pgs: 321 active+clean; 136 MiB data, 470 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.1 MiB/s wr, 142 op/s
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.238 250273 DEBUG oslo_concurrency.processutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cd90446c-4f34-48ec-890c-da3641fbe66f/disk.config cd90446c-4f34-48ec-890c-da3641fbe66f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.238 250273 INFO nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Deleting local config drive /var/lib/nova/instances/cd90446c-4f34-48ec-890c-da3641fbe66f/disk.config because it was imported into RBD.#033[00m
Jan 23 04:40:17 np0005593232 systemd-machined[215836]: New machine qemu-17-instance-00000026.
Jan 23 04:40:17 np0005593232 systemd[1]: Started Virtual Machine qemu-17-instance-00000026.
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.385 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.386 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4573MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.386 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.386 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.480 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 58bac0ef-6888-4869-bd19-5e7aa21353e7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.480 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance cd90446c-4f34-48ec-890c-da3641fbe66f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.481 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.481 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.550 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.665 250273 DEBUG nova.compute.manager [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.667 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.668 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161217.6639569, 58bac0ef-6888-4869-bd19-5e7aa21353e7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.668 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.676 250273 INFO nova.virt.libvirt.driver [-] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Instance spawned successfully.#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.677 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]: {
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:    "0": [
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:        {
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:            "devices": [
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:                "/dev/loop3"
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:            ],
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:            "lv_name": "ceph_lv0",
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:            "lv_size": "7511998464",
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:            "name": "ceph_lv0",
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:            "tags": {
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:                "ceph.cluster_name": "ceph",
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:                "ceph.crush_device_class": "",
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:                "ceph.encrypted": "0",
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:                "ceph.osd_id": "0",
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:                "ceph.type": "block",
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:                "ceph.vdo": "0"
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:            },
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:            "type": "block",
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:            "vg_name": "ceph_vg0"
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:        }
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]:    ]
Jan 23 04:40:17 np0005593232 distracted_goodall[278900]: }
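The JSON above (emitted by a transient cephadm container, most likely `ceph-volume lvm list --format json`) maps OSD id `0` to its backing LV. Note that the `lv_tags` string is just the flat, comma-separated form of the structured `tags` object that follows it. A minimal sketch of recovering the structured form from the flat string — ceph-volume already provides it parsed, so this is illustrative only:

```python
# Parse a ceph-volume "lv_tags" string (comma-separated key=value pairs,
# as in the log above) into a dict. Sketch only: ceph-volume's JSON
# already carries the same data pre-parsed under the "tags" key, and
# values containing commas would defeat this simple split.
def parse_lv_tags(lv_tags: str) -> dict:
    tags = {}
    for pair in lv_tags.split(","):
        key, _, value = pair.partition("=")
        tags[key] = value
    return tags

tags = parse_lv_tags(
    "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.osd_id=0,ceph.type=block"
)
```

The tag keys (`ceph.osd_id`, `ceph.osd_fsid`, `ceph.cluster_fsid`, ...) are what lets `ceph-volume lvm activate` rediscover which LV belongs to which OSD after a reboot, without any external database.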
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.713 250273 DEBUG nova.compute.manager [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.714 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.717 250273 INFO nova.virt.libvirt.driver [-] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Instance spawned successfully.#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.718 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:40:17 np0005593232 systemd[1]: libpod-b0ca6544bc0bc957afdcd65daf87bcac982db6f145ffb551a5ea0ccf5787eece.scope: Deactivated successfully.
Jan 23 04:40:17 np0005593232 podman[278886]: 2026-01-23 09:40:17.727903155 +0000 UTC m=+1.018449120 container died b0ca6544bc0bc957afdcd65daf87bcac982db6f145ffb551a5ea0ccf5787eece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_goodall, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.753 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:40:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay-073db24c97b028fbab514be9184cbe2e7a03106180236e42abd3cb36641b0305-merged.mount: Deactivated successfully.
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.773 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.777 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.777 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.778 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.778 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.779 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.781 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:40:17 np0005593232 podman[278886]: 2026-01-23 09:40:17.787274745 +0000 UTC m=+1.077820680 container remove b0ca6544bc0bc957afdcd65daf87bcac982db6f145ffb551a5ea0ccf5787eece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_goodall, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.788 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.789 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.789 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.789 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.790 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.790 250273 DEBUG nova.virt.libvirt.driver [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:40:17 np0005593232 systemd[1]: libpod-conmon-b0ca6544bc0bc957afdcd65daf87bcac982db6f145ffb551a5ea0ccf5787eece.scope: Deactivated successfully.
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.815 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.816 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161217.666392, 58bac0ef-6888-4869-bd19-5e7aa21353e7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.816 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] VM Started (Lifecycle Event)#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.965 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:40:17 np0005593232 nova_compute[250269]: 2026-01-23 09:40:17.979 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:40:18 np0005593232 nova_compute[250269]: 2026-01-23 09:40:18.007 250273 INFO nova.compute.manager [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Took 3.66 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:40:18 np0005593232 nova_compute[250269]: 2026-01-23 09:40:18.008 250273 DEBUG nova.compute.manager [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:40:18 np0005593232 nova_compute[250269]: 2026-01-23 09:40:18.012 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:40:18 np0005593232 nova_compute[250269]: 2026-01-23 09:40:18.013 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161217.7147334, cd90446c-4f34-48ec-890c-da3641fbe66f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:40:18 np0005593232 nova_compute[250269]: 2026-01-23 09:40:18.013 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:40:18 np0005593232 nova_compute[250269]: 2026-01-23 09:40:18.026 250273 INFO nova.compute.manager [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Took 4.30 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:40:18 np0005593232 nova_compute[250269]: 2026-01-23 09:40:18.027 250273 DEBUG nova.compute.manager [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:40:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:40:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/790144878' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:40:18 np0005593232 nova_compute[250269]: 2026-01-23 09:40:18.060 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
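The `ceph df --format=json` call above is how nova's resource tracker sizes RBD-backed storage: it shells out with the `openstack` cephx id and reads per-pool statistics from the JSON reply. A hedged sketch of that parsing step — the sample payload below is illustrative (not captured from this log), and real `ceph df` output carries many more fields per pool:

```python
import json

# Illustrative `ceph df --format=json` payload; field names bytes_used /
# max_avail match the real per-pool stats, but the values are invented.
CEPH_DF = json.dumps({
    "stats": {"total_bytes": 21474836480, "total_avail_bytes": 19327352832},
    "pools": [
        {"name": "vms", "stats": {"bytes_used": 1073741824,
                                  "max_avail": 19327352832}},
    ],
})

def pool_usage(raw: str, pool: str) -> dict:
    """Return used/available bytes for one pool from `ceph df` JSON."""
    data = json.loads(raw)
    for entry in data["pools"]:
        if entry["name"] == pool:
            stats = entry["stats"]
            return {"used": stats["bytes_used"], "free": stats["max_avail"]}
    raise KeyError(pool)

usage = pool_usage(CEPH_DF, "vms")
```

The 0.510s wall time logged for the command is dominated by the round trip to the monitor, as the matching `handle_command` dispatch in the ceph-mon log above shows.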
Jan 23 04:40:18 np0005593232 nova_compute[250269]: 2026-01-23 09:40:18.066 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:40:18 np0005593232 nova_compute[250269]: 2026-01-23 09:40:18.068 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:40:18 np0005593232 nova_compute[250269]: 2026-01-23 09:40:18.074 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:40:18 np0005593232 nova_compute[250269]: 2026-01-23 09:40:18.118 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
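The inventory dict in the line above is what this host reports to Placement, where schedulable capacity per resource class follows the convention `(total - reserved) * allocation_ratio` — so 8 physical vCPUs advertise as 32 schedulable ones. A sketch using the exact values from the log (the `int()` truncation is an assumption of this sketch):

```python
# Effective schedulable capacity per resource class, following the
# Placement convention capacity = (total - reserved) * allocation_ratio.
# Inventory values are copied verbatim from the log line above.
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 20, "reserved": 1, "allocation_ratio": 0.9},
}

capacity = {
    rc: int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    for rc, inv in inventory.items()
}
```

Note the DISK_GB ratio below 1.0: with `allocation_ratio=0.9` and 1 GB reserved, only 17 of the 20 GB are offered to the scheduler, which matches the conservative sizing for a small Ceph-backed node.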
Jan 23 04:40:18 np0005593232 nova_compute[250269]: 2026-01-23 09:40:18.123 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:40:18 np0005593232 nova_compute[250269]: 2026-01-23 09:40:18.124 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161217.714862, cd90446c-4f34-48ec-890c-da3641fbe66f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:40:18 np0005593232 nova_compute[250269]: 2026-01-23 09:40:18.124 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] VM Started (Lifecycle Event)#033[00m
Jan 23 04:40:18 np0005593232 nova_compute[250269]: 2026-01-23 09:40:18.155 250273 INFO nova.compute.manager [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Took 5.60 seconds to build instance.#033[00m
Jan 23 04:40:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:40:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:18.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:40:18 np0005593232 podman[279271]: 2026-01-23 09:40:18.423988197 +0000 UTC m=+0.040929090 container create b1af04221b1a7b1bdaedd215654691b511889e9257dec2eea3effb6ca84567fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lamarr, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:40:18 np0005593232 systemd[1]: Started libpod-conmon-b1af04221b1a7b1bdaedd215654691b511889e9257dec2eea3effb6ca84567fc.scope.
Jan 23 04:40:18 np0005593232 nova_compute[250269]: 2026-01-23 09:40:18.466 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:40:18 np0005593232 nova_compute[250269]: 2026-01-23 09:40:18.472 250273 INFO nova.compute.manager [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Took 6.01 seconds to build instance.#033[00m
Jan 23 04:40:18 np0005593232 nova_compute[250269]: 2026-01-23 09:40:18.473 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
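The repeated "Synchronizing instance power state" / "pending task (spawning). Skip." pairs above show nova's lifecycle handler comparing the DB power_state (0) against what libvirt reports (1), and deferring any correction while a task is in flight. A toy version of that guard — the constant values match nova's `nova.compute.power_state` module, but the decision logic here is a simplification of `handle_lifecycle_event`:

```python
# Nova's power_state constants (values match nova.compute.power_state).
NOSTATE, RUNNING, PAUSED, SHUTDOWN, CRASHED, SUSPENDED = 0, 1, 3, 4, 6, 7

def sync_power_state(db_power_state, vm_power_state, task_state):
    """Simplified guard from handle_lifecycle_event: never fight an
    in-progress task (e.g. 'spawning'); otherwise reconcile the DB."""
    if task_state is not None:
        return "skip"      # matches "pending task (spawning). Skip."
    if db_power_state != vm_power_state:
        return "update"    # DB record would be refreshed from libvirt
    return "in-sync"

decision = sync_power_state(NOSTATE, RUNNING, "spawning")
```

This is why the "Resumed"/"Started" events during the build are skipped, while the final "Started" event at 09:40:18.473 (vm_state active, task_state None, both power states 1) needs no update at all.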
Jan 23 04:40:18 np0005593232 nova_compute[250269]: 2026-01-23 09:40:18.490 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:40:18 np0005593232 nova_compute[250269]: 2026-01-23 09:40:18.490 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:40:18 np0005593232 nova_compute[250269]: 2026-01-23 09:40:18.491 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "cd90446c-4f34-48ec-890c-da3641fbe66f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:40:18 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:40:18 np0005593232 podman[279271]: 2026-01-23 09:40:18.405777892 +0000 UTC m=+0.022718805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:40:18 np0005593232 podman[279271]: 2026-01-23 09:40:18.511537809 +0000 UTC m=+0.128478722 container init b1af04221b1a7b1bdaedd215654691b511889e9257dec2eea3effb6ca84567fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lamarr, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:40:18 np0005593232 nova_compute[250269]: 2026-01-23 09:40:18.511 250273 DEBUG oslo_concurrency.lockutils [None req-8f6d10be-bb50-4d76-8052-b2bcc0da2cd2 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "58bac0ef-6888-4869-bd19-5e7aa21353e7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:40:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:18.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
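The radosgw "beast" access lines above (here, haproxy health-check `HEAD /` probes from the .100 and .102 controllers) have a stable shape: client address, user, timestamp, request line, status, bytes, latency. A field-extraction sketch — the pattern is inferred from these two log lines, not from the radosgw source, so treat it as an assumption:

```python
import re

# Extract fields from a radosgw beast access-log line shaped like the
# entries above. Pattern inferred from this log, not from radosgw docs.
BEAST_RE = re.compile(
    r'beast: \S+: (?P<addr>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
    r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous '
        '[23/Jan/2026:09:40:18.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000029s')
m = BEAST_RE.search(line)
```

A parser like this is handy for turning journal captures into per-client latency histograms when no structured RGW ops log is enabled.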
Jan 23 04:40:18 np0005593232 podman[279271]: 2026-01-23 09:40:18.518928842 +0000 UTC m=+0.135869735 container start b1af04221b1a7b1bdaedd215654691b511889e9257dec2eea3effb6ca84567fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:40:18 np0005593232 podman[279271]: 2026-01-23 09:40:18.523156554 +0000 UTC m=+0.140097447 container attach b1af04221b1a7b1bdaedd215654691b511889e9257dec2eea3effb6ca84567fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lamarr, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 04:40:18 np0005593232 systemd[1]: libpod-b1af04221b1a7b1bdaedd215654691b511889e9257dec2eea3effb6ca84567fc.scope: Deactivated successfully.
Jan 23 04:40:18 np0005593232 zealous_lamarr[279288]: 167 167
Jan 23 04:40:18 np0005593232 conmon[279288]: conmon b1af04221b1a7b1bdaed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b1af04221b1a7b1bdaedd215654691b511889e9257dec2eea3effb6ca84567fc.scope/container/memory.events
Jan 23 04:40:18 np0005593232 podman[279293]: 2026-01-23 09:40:18.570149757 +0000 UTC m=+0.025652729 container died b1af04221b1a7b1bdaedd215654691b511889e9257dec2eea3effb6ca84567fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lamarr, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:40:18 np0005593232 systemd[1]: var-lib-containers-storage-overlay-cafc6c3b50e2ec927d2e2e50cac6f94480164ff37afaab6ca36ad703f4adb449-merged.mount: Deactivated successfully.
Jan 23 04:40:18 np0005593232 podman[279293]: 2026-01-23 09:40:18.622398963 +0000 UTC m=+0.077901925 container remove b1af04221b1a7b1bdaedd215654691b511889e9257dec2eea3effb6ca84567fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_lamarr, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 04:40:18 np0005593232 systemd[1]: libpod-conmon-b1af04221b1a7b1bdaedd215654691b511889e9257dec2eea3effb6ca84567fc.scope: Deactivated successfully.
Jan 23 04:40:18 np0005593232 nova_compute[250269]: 2026-01-23 09:40:18.656 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:40:18 np0005593232 podman[279314]: 2026-01-23 09:40:18.789411044 +0000 UTC m=+0.041774854 container create f1c55087b76b98ba3b32030fe1ea4d667b67f1ab8032afad1ac66620dc6ad7b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kilby, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:40:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:40:18 np0005593232 systemd[1]: Started libpod-conmon-f1c55087b76b98ba3b32030fe1ea4d667b67f1ab8032afad1ac66620dc6ad7b5.scope.
Jan 23 04:40:18 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:40:18 np0005593232 podman[279314]: 2026-01-23 09:40:18.770962773 +0000 UTC m=+0.023326613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:40:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25177f63f717d6f3f4d5d5dde846447ea3afab8d39714ff071241cf67e44fca1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:40:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25177f63f717d6f3f4d5d5dde846447ea3afab8d39714ff071241cf67e44fca1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:40:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25177f63f717d6f3f4d5d5dde846447ea3afab8d39714ff071241cf67e44fca1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:40:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25177f63f717d6f3f4d5d5dde846447ea3afab8d39714ff071241cf67e44fca1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:40:18 np0005593232 podman[279314]: 2026-01-23 09:40:18.886063718 +0000 UTC m=+0.138427548 container init f1c55087b76b98ba3b32030fe1ea4d667b67f1ab8032afad1ac66620dc6ad7b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kilby, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 04:40:18 np0005593232 podman[279314]: 2026-01-23 09:40:18.892389141 +0000 UTC m=+0.144752951 container start f1c55087b76b98ba3b32030fe1ea4d667b67f1ab8032afad1ac66620dc6ad7b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kilby, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 04:40:18 np0005593232 podman[279314]: 2026-01-23 09:40:18.895178731 +0000 UTC m=+0.147542561 container attach f1c55087b76b98ba3b32030fe1ea4d667b67f1ab8032afad1ac66620dc6ad7b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kilby, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:40:19 np0005593232 nova_compute[250269]: 2026-01-23 09:40:19.135 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:40:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1388: 321 pgs: 321 active+clean; 181 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.6 MiB/s wr, 149 op/s
Jan 23 04:40:19 np0005593232 elegant_kilby[279331]: {
Jan 23 04:40:19 np0005593232 elegant_kilby[279331]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:40:19 np0005593232 elegant_kilby[279331]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:40:19 np0005593232 elegant_kilby[279331]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:40:19 np0005593232 elegant_kilby[279331]:        "osd_id": 0,
Jan 23 04:40:19 np0005593232 elegant_kilby[279331]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:40:19 np0005593232 elegant_kilby[279331]:        "type": "bluestore"
Jan 23 04:40:19 np0005593232 elegant_kilby[279331]:    }
Jan 23 04:40:19 np0005593232 elegant_kilby[279331]: }
Jan 23 04:40:19 np0005593232 systemd[1]: libpod-f1c55087b76b98ba3b32030fe1ea4d667b67f1ab8032afad1ac66620dc6ad7b5.scope: Deactivated successfully.
Jan 23 04:40:19 np0005593232 podman[279314]: 2026-01-23 09:40:19.794109687 +0000 UTC m=+1.046473517 container died f1c55087b76b98ba3b32030fe1ea4d667b67f1ab8032afad1ac66620dc6ad7b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 04:40:19 np0005593232 systemd[1]: var-lib-containers-storage-overlay-25177f63f717d6f3f4d5d5dde846447ea3afab8d39714ff071241cf67e44fca1-merged.mount: Deactivated successfully.
Jan 23 04:40:19 np0005593232 podman[279314]: 2026-01-23 09:40:19.871108045 +0000 UTC m=+1.123471855 container remove f1c55087b76b98ba3b32030fe1ea4d667b67f1ab8032afad1ac66620dc6ad7b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kilby, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:40:19 np0005593232 systemd[1]: libpod-conmon-f1c55087b76b98ba3b32030fe1ea4d667b67f1ab8032afad1ac66620dc6ad7b5.scope: Deactivated successfully.
Jan 23 04:40:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:40:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:40:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:40:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:40:19 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev e5dc304b-fd64-4f7d-b996-59a14542d468 does not exist
Jan 23 04:40:19 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 6f687bca-247b-42db-8072-95aa8b94a04d does not exist
Jan 23 04:40:19 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d9a120de-2952-4299-bd9a-74fc18a9dbd1 does not exist
Jan 23 04:40:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:20.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:20.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:20 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:40:20 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:40:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1389: 321 pgs: 321 active+clean; 181 MiB data, 485 MiB used, 21 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 113 op/s
Jan 23 04:40:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:22.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:22.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1390: 321 pgs: 321 active+clean; 213 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 5.5 MiB/s rd, 5.7 MiB/s wr, 310 op/s
Jan 23 04:40:23 np0005593232 nova_compute[250269]: 2026-01-23 09:40:23.661 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:40:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:40:24 np0005593232 nova_compute[250269]: 2026-01-23 09:40:24.136 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:40:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:24.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:24.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1391: 321 pgs: 321 active+clean; 213 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 262 op/s
Jan 23 04:40:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:26.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:40:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:26.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:40:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1392: 321 pgs: 321 active+clean; 214 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 262 op/s
Jan 23 04:40:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:28.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:28.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:28 np0005593232 nova_compute[250269]: 2026-01-23 09:40:28.665 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:40:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:40:29 np0005593232 nova_compute[250269]: 2026-01-23 09:40:29.138 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:40:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1393: 321 pgs: 321 active+clean; 214 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.4 MiB/s wr, 221 op/s
Jan 23 04:40:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:30.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:30.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1394: 321 pgs: 321 active+clean; 214 MiB data, 510 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.1 MiB/s wr, 197 op/s
Jan 23 04:40:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:32.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:32.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1395: 321 pgs: 321 active+clean; 346 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 9.3 MiB/s wr, 296 op/s
Jan 23 04:40:33 np0005593232 nova_compute[250269]: 2026-01-23 09:40:33.670 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:40:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:40:34 np0005593232 nova_compute[250269]: 2026-01-23 09:40:34.139 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:40:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:34.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:34 np0005593232 podman[279474]: 2026-01-23 09:40:34.469707273 +0000 UTC m=+0.127517635 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:40:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:34.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:34 np0005593232 nova_compute[250269]: 2026-01-23 09:40:34.726 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:40:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:40:34.726 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:40:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:40:34.728 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:40:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1396: 321 pgs: 321 active+clean; 346 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 96 KiB/s rd, 7.2 MiB/s wr, 98 op/s
Jan 23 04:40:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:36.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:36.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:40:37
Jan 23 04:40:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:40:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:40:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'vms', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control']
Jan 23 04:40:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:40:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1397: 321 pgs: 321 active+clean; 382 MiB data, 620 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 9.2 MiB/s wr, 213 op/s
Jan 23 04:40:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:40:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:40:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:40:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:40:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:40:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:40:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:40:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:40:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:40:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:40:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:40:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:40:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:40:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:40:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:40:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:40:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:38.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:38.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:38 np0005593232 nova_compute[250269]: 2026-01-23 09:40:38.674 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:40:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:40:39 np0005593232 nova_compute[250269]: 2026-01-23 09:40:39.195 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:40:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1398: 321 pgs: 321 active+clean; 411 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 9.6 MiB/s wr, 331 op/s
Jan 23 04:40:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:40.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:40:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:40.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:40:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1399: 321 pgs: 321 active+clean; 411 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 9.6 MiB/s wr, 331 op/s
Jan 23 04:40:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:40:41.731 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:40:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:42.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:42.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:40:42.594 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:40:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:40:42.595 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:40:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:40:42.595 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:40:42 np0005593232 ovn_controller[151001]: 2026-01-23T09:40:42Z|00119|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 23 04:40:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1400: 321 pgs: 321 active+clean; 325 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 9.6 MiB/s wr, 412 op/s
Jan 23 04:40:43 np0005593232 nova_compute[250269]: 2026-01-23 09:40:43.677 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:40:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:40:44 np0005593232 nova_compute[250269]: 2026-01-23 09:40:44.044 250273 DEBUG oslo_concurrency.lockutils [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Acquiring lock "58bac0ef-6888-4869-bd19-5e7aa21353e7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:40:44 np0005593232 nova_compute[250269]: 2026-01-23 09:40:44.044 250273 DEBUG oslo_concurrency.lockutils [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "58bac0ef-6888-4869-bd19-5e7aa21353e7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:40:44 np0005593232 nova_compute[250269]: 2026-01-23 09:40:44.045 250273 DEBUG oslo_concurrency.lockutils [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Acquiring lock "58bac0ef-6888-4869-bd19-5e7aa21353e7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:40:44 np0005593232 nova_compute[250269]: 2026-01-23 09:40:44.046 250273 DEBUG oslo_concurrency.lockutils [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "58bac0ef-6888-4869-bd19-5e7aa21353e7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:40:44 np0005593232 nova_compute[250269]: 2026-01-23 09:40:44.046 250273 DEBUG oslo_concurrency.lockutils [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "58bac0ef-6888-4869-bd19-5e7aa21353e7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:40:44 np0005593232 nova_compute[250269]: 2026-01-23 09:40:44.048 250273 INFO nova.compute.manager [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Terminating instance
Jan 23 04:40:44 np0005593232 nova_compute[250269]: 2026-01-23 09:40:44.049 250273 DEBUG oslo_concurrency.lockutils [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Acquiring lock "refresh_cache-58bac0ef-6888-4869-bd19-5e7aa21353e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 04:40:44 np0005593232 nova_compute[250269]: 2026-01-23 09:40:44.049 250273 DEBUG oslo_concurrency.lockutils [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Acquired lock "refresh_cache-58bac0ef-6888-4869-bd19-5e7aa21353e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 04:40:44 np0005593232 nova_compute[250269]: 2026-01-23 09:40:44.049 250273 DEBUG nova.network.neutron [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 04:40:44 np0005593232 nova_compute[250269]: 2026-01-23 09:40:44.197 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:40:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:44.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 04:40:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3639674211' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 04:40:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 04:40:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3639674211' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 04:40:44 np0005593232 podman[279510]: 2026-01-23 09:40:44.394841683 +0000 UTC m=+0.057859926 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 23 04:40:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:44.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:44 np0005593232 nova_compute[250269]: 2026-01-23 09:40:44.671 250273 DEBUG nova.network.neutron [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 04:40:44 np0005593232 nova_compute[250269]: 2026-01-23 09:40:44.919 250273 DEBUG oslo_concurrency.lockutils [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Acquiring lock "cd90446c-4f34-48ec-890c-da3641fbe66f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:40:44 np0005593232 nova_compute[250269]: 2026-01-23 09:40:44.920 250273 DEBUG oslo_concurrency.lockutils [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "cd90446c-4f34-48ec-890c-da3641fbe66f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:40:44 np0005593232 nova_compute[250269]: 2026-01-23 09:40:44.920 250273 DEBUG oslo_concurrency.lockutils [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Acquiring lock "cd90446c-4f34-48ec-890c-da3641fbe66f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:40:44 np0005593232 nova_compute[250269]: 2026-01-23 09:40:44.920 250273 DEBUG oslo_concurrency.lockutils [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "cd90446c-4f34-48ec-890c-da3641fbe66f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:40:44 np0005593232 nova_compute[250269]: 2026-01-23 09:40:44.920 250273 DEBUG oslo_concurrency.lockutils [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "cd90446c-4f34-48ec-890c-da3641fbe66f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:40:44 np0005593232 nova_compute[250269]: 2026-01-23 09:40:44.921 250273 INFO nova.compute.manager [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Terminating instance
Jan 23 04:40:44 np0005593232 nova_compute[250269]: 2026-01-23 09:40:44.925 250273 DEBUG oslo_concurrency.lockutils [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Acquiring lock "refresh_cache-cd90446c-4f34-48ec-890c-da3641fbe66f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 04:40:44 np0005593232 nova_compute[250269]: 2026-01-23 09:40:44.926 250273 DEBUG oslo_concurrency.lockutils [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Acquired lock "refresh_cache-cd90446c-4f34-48ec-890c-da3641fbe66f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 04:40:44 np0005593232 nova_compute[250269]: 2026-01-23 09:40:44.926 250273 DEBUG nova.network.neutron [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 04:40:44 np0005593232 nova_compute[250269]: 2026-01-23 09:40:44.970 250273 DEBUG nova.network.neutron [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 04:40:44 np0005593232 nova_compute[250269]: 2026-01-23 09:40:44.996 250273 DEBUG oslo_concurrency.lockutils [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Releasing lock "refresh_cache-58bac0ef-6888-4869-bd19-5e7aa21353e7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 04:40:44 np0005593232 nova_compute[250269]: 2026-01-23 09:40:44.997 250273 DEBUG nova.compute.manager [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 23 04:40:45 np0005593232 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000025.scope: Deactivated successfully.
Jan 23 04:40:45 np0005593232 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000025.scope: Consumed 14.057s CPU time.
Jan 23 04:40:45 np0005593232 systemd-machined[215836]: Machine qemu-16-instance-00000025 terminated.
Jan 23 04:40:45 np0005593232 nova_compute[250269]: 2026-01-23 09:40:45.149 250273 DEBUG nova.network.neutron [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 04:40:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1401: 321 pgs: 321 active+clean; 325 MiB data, 597 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 2.4 MiB/s wr, 313 op/s
Jan 23 04:40:45 np0005593232 nova_compute[250269]: 2026-01-23 09:40:45.216 250273 INFO nova.virt.libvirt.driver [-] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Instance destroyed successfully.
Jan 23 04:40:45 np0005593232 nova_compute[250269]: 2026-01-23 09:40:45.217 250273 DEBUG nova.objects.instance [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lazy-loading 'resources' on Instance uuid 58bac0ef-6888-4869-bd19-5e7aa21353e7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 04:40:45 np0005593232 nova_compute[250269]: 2026-01-23 09:40:45.608 250273 INFO nova.virt.libvirt.driver [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Deleting instance files /var/lib/nova/instances/58bac0ef-6888-4869-bd19-5e7aa21353e7_del
Jan 23 04:40:45 np0005593232 nova_compute[250269]: 2026-01-23 09:40:45.609 250273 INFO nova.virt.libvirt.driver [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Deletion of /var/lib/nova/instances/58bac0ef-6888-4869-bd19-5e7aa21353e7_del complete
Jan 23 04:40:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:40:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:46.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:40:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:46.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:40:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:40:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:40:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:40:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007476800960822632 of space, bias 1.0, pg target 2.2430402882467897 quantized to 32 (current 32)
Jan 23 04:40:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:40:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Jan 23 04:40:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:40:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:40:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:40:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 23 04:40:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:40:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 23 04:40:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:40:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:40:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:40:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 23 04:40:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:40:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 23 04:40:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:40:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:40:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:40:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 23 04:40:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1402: 321 pgs: 321 active+clean; 292 MiB data, 567 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 2.4 MiB/s wr, 400 op/s
Jan 23 04:40:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:48.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:40:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:48.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:40:48 np0005593232 nova_compute[250269]: 2026-01-23 09:40:48.576 250273 INFO nova.compute.manager [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Took 3.58 seconds to destroy the instance on the hypervisor.
Jan 23 04:40:48 np0005593232 nova_compute[250269]: 2026-01-23 09:40:48.577 250273 DEBUG oslo.service.loopingcall [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 23 04:40:48 np0005593232 nova_compute[250269]: 2026-01-23 09:40:48.577 250273 DEBUG nova.compute.manager [-] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 23 04:40:48 np0005593232 nova_compute[250269]: 2026-01-23 09:40:48.577 250273 DEBUG nova.network.neutron [-] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 23 04:40:48 np0005593232 nova_compute[250269]: 2026-01-23 09:40:48.667 250273 DEBUG nova.network.neutron [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 04:40:48 np0005593232 nova_compute[250269]: 2026-01-23 09:40:48.680 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:40:48 np0005593232 nova_compute[250269]: 2026-01-23 09:40:48.704 250273 DEBUG oslo_concurrency.lockutils [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Releasing lock "refresh_cache-cd90446c-4f34-48ec-890c-da3641fbe66f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 04:40:48 np0005593232 nova_compute[250269]: 2026-01-23 09:40:48.705 250273 DEBUG nova.compute.manager [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 23 04:40:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:40:49 np0005593232 nova_compute[250269]: 2026-01-23 09:40:49.037 250273 DEBUG nova.network.neutron [-] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 04:40:49 np0005593232 nova_compute[250269]: 2026-01-23 09:40:49.063 250273 DEBUG nova.network.neutron [-] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 04:40:49 np0005593232 nova_compute[250269]: 2026-01-23 09:40:49.083 250273 INFO nova.compute.manager [-] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Took 0.51 seconds to deallocate network for instance.
Jan 23 04:40:49 np0005593232 nova_compute[250269]: 2026-01-23 09:40:49.198 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:40:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1403: 321 pgs: 321 active+clean; 246 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 420 KiB/s wr, 300 op/s
Jan 23 04:40:49 np0005593232 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000026.scope: Deactivated successfully.
Jan 23 04:40:49 np0005593232 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000026.scope: Consumed 14.362s CPU time.
Jan 23 04:40:49 np0005593232 systemd-machined[215836]: Machine qemu-17-instance-00000026 terminated.
Jan 23 04:40:49 np0005593232 nova_compute[250269]: 2026-01-23 09:40:49.456 250273 DEBUG oslo_concurrency.lockutils [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:40:49 np0005593232 nova_compute[250269]: 2026-01-23 09:40:49.456 250273 DEBUG oslo_concurrency.lockutils [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:40:49 np0005593232 nova_compute[250269]: 2026-01-23 09:40:49.527 250273 INFO nova.virt.libvirt.driver [-] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Instance destroyed successfully.
Jan 23 04:40:49 np0005593232 nova_compute[250269]: 2026-01-23 09:40:49.528 250273 DEBUG nova.objects.instance [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lazy-loading 'resources' on Instance uuid cd90446c-4f34-48ec-890c-da3641fbe66f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 04:40:49 np0005593232 nova_compute[250269]: 2026-01-23 09:40:49.561 250273 DEBUG oslo_concurrency.processutils [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:40:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:40:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3671302147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:40:49 np0005593232 nova_compute[250269]: 2026-01-23 09:40:49.997 250273 DEBUG oslo_concurrency.processutils [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:40:50 np0005593232 nova_compute[250269]: 2026-01-23 09:40:50.003 250273 DEBUG nova.compute.provider_tree [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 04:40:50 np0005593232 nova_compute[250269]: 2026-01-23 09:40:50.009 250273 INFO nova.virt.libvirt.driver [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Deleting instance files /var/lib/nova/instances/cd90446c-4f34-48ec-890c-da3641fbe66f_del
Jan 23 04:40:50 np0005593232 nova_compute[250269]: 2026-01-23 09:40:50.009 250273 INFO nova.virt.libvirt.driver [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Deletion of /var/lib/nova/instances/cd90446c-4f34-48ec-890c-da3641fbe66f_del complete
Jan 23 04:40:50 np0005593232 nova_compute[250269]: 2026-01-23 09:40:50.051 250273 DEBUG nova.scheduler.client.report [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 04:40:50 np0005593232 nova_compute[250269]: 2026-01-23 09:40:50.112 250273 DEBUG oslo_concurrency.lockutils [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:40:50 np0005593232 nova_compute[250269]: 2026-01-23 09:40:50.119 250273 INFO nova.compute.manager [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Took 1.41 seconds to destroy the instance on the hypervisor.
Jan 23 04:40:50 np0005593232 nova_compute[250269]: 2026-01-23 09:40:50.119 250273 DEBUG oslo.service.loopingcall [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 23 04:40:50 np0005593232 nova_compute[250269]: 2026-01-23 09:40:50.120 250273 DEBUG nova.compute.manager [-] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 04:40:50 np0005593232 nova_compute[250269]: 2026-01-23 09:40:50.120 250273 DEBUG nova.network.neutron [-] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 04:40:50 np0005593232 nova_compute[250269]: 2026-01-23 09:40:50.181 250273 INFO nova.scheduler.client.report [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Deleted allocations for instance 58bac0ef-6888-4869-bd19-5e7aa21353e7#033[00m
Jan 23 04:40:50 np0005593232 nova_compute[250269]: 2026-01-23 09:40:50.273 250273 DEBUG oslo_concurrency.lockutils [None req-a800d6f8-381e-4b05-82bf-ce4a21e119f3 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "58bac0ef-6888-4869-bd19-5e7aa21353e7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:40:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:50.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:50 np0005593232 nova_compute[250269]: 2026-01-23 09:40:50.384 250273 DEBUG nova.network.neutron [-] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:40:50 np0005593232 nova_compute[250269]: 2026-01-23 09:40:50.400 250273 DEBUG nova.network.neutron [-] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:40:50 np0005593232 nova_compute[250269]: 2026-01-23 09:40:50.421 250273 INFO nova.compute.manager [-] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Took 0.30 seconds to deallocate network for instance.#033[00m
Jan 23 04:40:50 np0005593232 nova_compute[250269]: 2026-01-23 09:40:50.475 250273 DEBUG oslo_concurrency.lockutils [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:40:50 np0005593232 nova_compute[250269]: 2026-01-23 09:40:50.476 250273 DEBUG oslo_concurrency.lockutils [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:40:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:50.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:50 np0005593232 nova_compute[250269]: 2026-01-23 09:40:50.554 250273 DEBUG oslo_concurrency.processutils [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:40:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:40:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1549255416' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:40:50 np0005593232 nova_compute[250269]: 2026-01-23 09:40:50.968 250273 DEBUG oslo_concurrency.processutils [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:40:50 np0005593232 nova_compute[250269]: 2026-01-23 09:40:50.974 250273 DEBUG nova.compute.provider_tree [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:40:51 np0005593232 nova_compute[250269]: 2026-01-23 09:40:51.105 250273 DEBUG nova.scheduler.client.report [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:40:51 np0005593232 nova_compute[250269]: 2026-01-23 09:40:51.137 250273 DEBUG oslo_concurrency.lockutils [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:40:51 np0005593232 nova_compute[250269]: 2026-01-23 09:40:51.198 250273 INFO nova.scheduler.client.report [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Deleted allocations for instance cd90446c-4f34-48ec-890c-da3641fbe66f#033[00m
Jan 23 04:40:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1404: 321 pgs: 321 active+clean; 246 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 57 KiB/s wr, 182 op/s
Jan 23 04:40:51 np0005593232 nova_compute[250269]: 2026-01-23 09:40:51.279 250273 DEBUG oslo_concurrency.lockutils [None req-4f7aa8c7-53c8-4f2d-b8aa-13ca3109f044 a3b5a7f627074988a8a05a20558595fe e8778f3a187440f3879f9d9533d45855 - - default default] Lock "cd90446c-4f34-48ec-890c-da3641fbe66f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.359s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:40:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:52.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:52.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1405: 321 pgs: 321 active+clean; 167 MiB data, 493 MiB used, 21 GiB / 21 GiB avail; 2.4 MiB/s rd, 60 KiB/s wr, 211 op/s
Jan 23 04:40:53 np0005593232 nova_compute[250269]: 2026-01-23 09:40:53.683 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:40:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:40:54 np0005593232 nova_compute[250269]: 2026-01-23 09:40:54.248 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:40:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:54.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:54.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Jan 23 04:40:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Jan 23 04:40:55 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Jan 23 04:40:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1407: 321 pgs: 321 active+clean; 167 MiB data, 493 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 46 KiB/s wr, 156 op/s
Jan 23 04:40:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:56.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:56.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Jan 23 04:40:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Jan 23 04:40:57 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Jan 23 04:40:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1409: 321 pgs: 321 active+clean; 159 MiB data, 479 MiB used, 21 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.7 MiB/s wr, 119 op/s
Jan 23 04:40:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Jan 23 04:40:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Jan 23 04:40:58 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Jan 23 04:40:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:40:58.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:40:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:40:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:40:58.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:40:58 np0005593232 nova_compute[250269]: 2026-01-23 09:40:58.686 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:40:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:40:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1411: 321 pgs: 321 active+clean; 134 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 155 op/s
Jan 23 04:40:59 np0005593232 nova_compute[250269]: 2026-01-23 09:40:59.249 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:00 np0005593232 nova_compute[250269]: 2026-01-23 09:41:00.214 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769161245.2133417, 58bac0ef-6888-4869-bd19-5e7aa21353e7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:41:00 np0005593232 nova_compute[250269]: 2026-01-23 09:41:00.215 250273 INFO nova.compute.manager [-] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:41:00 np0005593232 nova_compute[250269]: 2026-01-23 09:41:00.237 250273 DEBUG nova.compute.manager [None req-d278cc01-d734-4c82-9f3e-6d4395f94cd5 - - - - - -] [instance: 58bac0ef-6888-4869-bd19-5e7aa21353e7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:41:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:00.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:00.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1412: 321 pgs: 321 active+clean; 134 MiB data, 460 MiB used, 21 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 153 op/s
Jan 23 04:41:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:41:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:02.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:41:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:41:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:02.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:41:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1413: 321 pgs: 321 active+clean; 134 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 139 op/s
Jan 23 04:41:03 np0005593232 nova_compute[250269]: 2026-01-23 09:41:03.690 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:41:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Jan 23 04:41:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Jan 23 04:41:04 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Jan 23 04:41:04 np0005593232 nova_compute[250269]: 2026-01-23 09:41:04.287 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:04.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:04 np0005593232 nova_compute[250269]: 2026-01-23 09:41:04.527 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769161249.5259888, cd90446c-4f34-48ec-890c-da3641fbe66f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:41:04 np0005593232 nova_compute[250269]: 2026-01-23 09:41:04.528 250273 INFO nova.compute.manager [-] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:41:04 np0005593232 nova_compute[250269]: 2026-01-23 09:41:04.562 250273 DEBUG nova.compute.manager [None req-2f910373-d1da-43d9-a46a-89889c4a4024 - - - - - -] [instance: cd90446c-4f34-48ec-890c-da3641fbe66f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:41:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:04.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1415: 321 pgs: 321 active+clean; 134 MiB data, 467 MiB used, 21 GiB / 21 GiB avail; 297 KiB/s rd, 1.0 MiB/s wr, 61 op/s
Jan 23 04:41:05 np0005593232 podman[279690]: 2026-01-23 09:41:05.477163105 +0000 UTC m=+0.126301710 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:41:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:06.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:06.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1416: 321 pgs: 321 active+clean; 87 MiB data, 445 MiB used, 21 GiB / 21 GiB avail; 30 KiB/s rd, 2.0 KiB/s wr, 43 op/s
Jan 23 04:41:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:41:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:41:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:41:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:41:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:41:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:41:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:08.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:08.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:08 np0005593232 nova_compute[250269]: 2026-01-23 09:41:08.692 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:41:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2231931901' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:41:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:41:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1417: 321 pgs: 321 active+clean; 57 MiB data, 436 MiB used, 21 GiB / 21 GiB avail; 45 KiB/s rd, 2.8 KiB/s wr, 63 op/s
Jan 23 04:41:09 np0005593232 nova_compute[250269]: 2026-01-23 09:41:09.289 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:09 np0005593232 nova_compute[250269]: 2026-01-23 09:41:09.491 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:41:09 np0005593232 nova_compute[250269]: 2026-01-23 09:41:09.491 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:41:10 np0005593232 nova_compute[250269]: 2026-01-23 09:41:10.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:41:10 np0005593232 nova_compute[250269]: 2026-01-23 09:41:10.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:41:10 np0005593232 nova_compute[250269]: 2026-01-23 09:41:10.294 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:41:10 np0005593232 nova_compute[250269]: 2026-01-23 09:41:10.320 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 04:41:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:10.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:10 np0005593232 nova_compute[250269]: 2026-01-23 09:41:10.492 250273 DEBUG oslo_concurrency.lockutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Acquiring lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:41:10 np0005593232 nova_compute[250269]: 2026-01-23 09:41:10.493 250273 DEBUG oslo_concurrency.lockutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:41:10 np0005593232 nova_compute[250269]: 2026-01-23 09:41:10.522 250273 DEBUG nova.compute.manager [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:41:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:41:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:10.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:41:10 np0005593232 nova_compute[250269]: 2026-01-23 09:41:10.631 250273 DEBUG oslo_concurrency.lockutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:41:10 np0005593232 nova_compute[250269]: 2026-01-23 09:41:10.632 250273 DEBUG oslo_concurrency.lockutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:41:10 np0005593232 nova_compute[250269]: 2026-01-23 09:41:10.639 250273 DEBUG nova.virt.hardware [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:41:10 np0005593232 nova_compute[250269]: 2026-01-23 09:41:10.639 250273 INFO nova.compute.claims [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:41:10 np0005593232 nova_compute[250269]: 2026-01-23 09:41:10.770 250273 DEBUG oslo_concurrency.processutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:41:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:41:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2287290516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:41:11 np0005593232 nova_compute[250269]: 2026-01-23 09:41:11.198 250273 DEBUG oslo_concurrency.processutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:41:11 np0005593232 nova_compute[250269]: 2026-01-23 09:41:11.204 250273 DEBUG nova.compute.provider_tree [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:41:11 np0005593232 nova_compute[250269]: 2026-01-23 09:41:11.224 250273 DEBUG nova.scheduler.client.report [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:41:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1418: 321 pgs: 321 active+clean; 57 MiB data, 436 MiB used, 21 GiB / 21 GiB avail; 45 KiB/s rd, 2.8 KiB/s wr, 63 op/s
Jan 23 04:41:11 np0005593232 nova_compute[250269]: 2026-01-23 09:41:11.256 250273 DEBUG oslo_concurrency.lockutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:41:11 np0005593232 nova_compute[250269]: 2026-01-23 09:41:11.257 250273 DEBUG nova.compute.manager [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 04:41:11 np0005593232 nova_compute[250269]: 2026-01-23 09:41:11.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:41:11 np0005593232 nova_compute[250269]: 2026-01-23 09:41:11.336 250273 DEBUG nova.compute.manager [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 04:41:11 np0005593232 nova_compute[250269]: 2026-01-23 09:41:11.337 250273 DEBUG nova.network.neutron [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:41:11 np0005593232 nova_compute[250269]: 2026-01-23 09:41:11.360 250273 INFO nova.virt.libvirt.driver [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:41:11 np0005593232 nova_compute[250269]: 2026-01-23 09:41:11.386 250273 DEBUG nova.compute.manager [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:41:11 np0005593232 nova_compute[250269]: 2026-01-23 09:41:11.540 250273 DEBUG nova.compute.manager [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:41:11 np0005593232 nova_compute[250269]: 2026-01-23 09:41:11.542 250273 DEBUG nova.virt.libvirt.driver [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:41:11 np0005593232 nova_compute[250269]: 2026-01-23 09:41:11.542 250273 INFO nova.virt.libvirt.driver [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Creating image(s)#033[00m
Jan 23 04:41:11 np0005593232 nova_compute[250269]: 2026-01-23 09:41:11.573 250273 DEBUG nova.storage.rbd_utils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] rbd image 029c50ac-8221-4aa1-ab0f-92693d5d4d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:41:11 np0005593232 nova_compute[250269]: 2026-01-23 09:41:11.602 250273 DEBUG nova.storage.rbd_utils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] rbd image 029c50ac-8221-4aa1-ab0f-92693d5d4d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:41:11 np0005593232 nova_compute[250269]: 2026-01-23 09:41:11.631 250273 DEBUG nova.storage.rbd_utils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] rbd image 029c50ac-8221-4aa1-ab0f-92693d5d4d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:41:11 np0005593232 nova_compute[250269]: 2026-01-23 09:41:11.635 250273 DEBUG oslo_concurrency.processutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:41:11 np0005593232 nova_compute[250269]: 2026-01-23 09:41:11.695 250273 DEBUG oslo_concurrency.processutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:41:11 np0005593232 nova_compute[250269]: 2026-01-23 09:41:11.697 250273 DEBUG oslo_concurrency.lockutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:41:11 np0005593232 nova_compute[250269]: 2026-01-23 09:41:11.697 250273 DEBUG oslo_concurrency.lockutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:41:11 np0005593232 nova_compute[250269]: 2026-01-23 09:41:11.697 250273 DEBUG oslo_concurrency.lockutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:41:11 np0005593232 nova_compute[250269]: 2026-01-23 09:41:11.725 250273 DEBUG nova.storage.rbd_utils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] rbd image 029c50ac-8221-4aa1-ab0f-92693d5d4d44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:41:11 np0005593232 nova_compute[250269]: 2026-01-23 09:41:11.729 250273 DEBUG oslo_concurrency.processutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 029c50ac-8221-4aa1-ab0f-92693d5d4d44_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:41:11 np0005593232 nova_compute[250269]: 2026-01-23 09:41:11.787 250273 DEBUG nova.policy [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '56da68482e3a4fb582dcccad45f8f71b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '05bc71a77710455e8b34ead7fec81a31', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 04:41:12 np0005593232 nova_compute[250269]: 2026-01-23 09:41:12.022 250273 DEBUG oslo_concurrency.processutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 029c50ac-8221-4aa1-ab0f-92693d5d4d44_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:41:12 np0005593232 nova_compute[250269]: 2026-01-23 09:41:12.098 250273 DEBUG nova.storage.rbd_utils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] resizing rbd image 029c50ac-8221-4aa1-ab0f-92693d5d4d44_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 04:41:12 np0005593232 nova_compute[250269]: 2026-01-23 09:41:12.307 250273 DEBUG nova.objects.instance [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Lazy-loading 'migration_context' on Instance uuid 029c50ac-8221-4aa1-ab0f-92693d5d4d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:41:12 np0005593232 nova_compute[250269]: 2026-01-23 09:41:12.328 250273 DEBUG nova.virt.libvirt.driver [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:41:12 np0005593232 nova_compute[250269]: 2026-01-23 09:41:12.329 250273 DEBUG nova.virt.libvirt.driver [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Ensure instance console log exists: /var/lib/nova/instances/029c50ac-8221-4aa1-ab0f-92693d5d4d44/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:41:12 np0005593232 nova_compute[250269]: 2026-01-23 09:41:12.329 250273 DEBUG oslo_concurrency.lockutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:41:12 np0005593232 nova_compute[250269]: 2026-01-23 09:41:12.330 250273 DEBUG oslo_concurrency.lockutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:41:12 np0005593232 nova_compute[250269]: 2026-01-23 09:41:12.331 250273 DEBUG oslo_concurrency.lockutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:41:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:12.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:12.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1419: 321 pgs: 321 active+clean; 81 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 53 KiB/s rd, 1.9 MiB/s wr, 79 op/s
Jan 23 04:41:13 np0005593232 nova_compute[250269]: 2026-01-23 09:41:13.286 250273 DEBUG nova.network.neutron [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Successfully created port: 1b310789-fedc-45bf-b49e-839dcea36fd7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 04:41:13 np0005593232 nova_compute[250269]: 2026-01-23 09:41:13.289 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:41:13 np0005593232 nova_compute[250269]: 2026-01-23 09:41:13.353 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:41:13 np0005593232 nova_compute[250269]: 2026-01-23 09:41:13.742 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:41:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Jan 23 04:41:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Jan 23 04:41:14 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Jan 23 04:41:14 np0005593232 nova_compute[250269]: 2026-01-23 09:41:14.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:41:14 np0005593232 nova_compute[250269]: 2026-01-23 09:41:14.291 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:14 np0005593232 nova_compute[250269]: 2026-01-23 09:41:14.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:41:14 np0005593232 nova_compute[250269]: 2026-01-23 09:41:14.294 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:41:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:41:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:14.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:41:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:14.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:15 np0005593232 nova_compute[250269]: 2026-01-23 09:41:15.187 250273 DEBUG nova.network.neutron [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Successfully updated port: 1b310789-fedc-45bf-b49e-839dcea36fd7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 04:41:15 np0005593232 nova_compute[250269]: 2026-01-23 09:41:15.202 250273 DEBUG oslo_concurrency.lockutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Acquiring lock "refresh_cache-029c50ac-8221-4aa1-ab0f-92693d5d4d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:41:15 np0005593232 nova_compute[250269]: 2026-01-23 09:41:15.203 250273 DEBUG oslo_concurrency.lockutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Acquired lock "refresh_cache-029c50ac-8221-4aa1-ab0f-92693d5d4d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:41:15 np0005593232 nova_compute[250269]: 2026-01-23 09:41:15.203 250273 DEBUG nova.network.neutron [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:41:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1421: 321 pgs: 321 active+clean; 81 MiB data, 442 MiB used, 21 GiB / 21 GiB avail; 53 KiB/s rd, 1.9 MiB/s wr, 79 op/s
Jan 23 04:41:15 np0005593232 nova_compute[250269]: 2026-01-23 09:41:15.381 250273 DEBUG nova.compute.manager [req-de424cf0-f41f-46f3-9c8c-e6cd3421db30 req-a0e9766c-7aee-4812-8e0b-c3f4581dd96a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Received event network-changed-1b310789-fedc-45bf-b49e-839dcea36fd7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:41:15 np0005593232 nova_compute[250269]: 2026-01-23 09:41:15.381 250273 DEBUG nova.compute.manager [req-de424cf0-f41f-46f3-9c8c-e6cd3421db30 req-a0e9766c-7aee-4812-8e0b-c3f4581dd96a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Refreshing instance network info cache due to event network-changed-1b310789-fedc-45bf-b49e-839dcea36fd7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:41:15 np0005593232 nova_compute[250269]: 2026-01-23 09:41:15.382 250273 DEBUG oslo_concurrency.lockutils [req-de424cf0-f41f-46f3-9c8c-e6cd3421db30 req-a0e9766c-7aee-4812-8e0b-c3f4581dd96a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-029c50ac-8221-4aa1-ab0f-92693d5d4d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:41:15 np0005593232 podman[279963]: 2026-01-23 09:41:15.394834762 +0000 UTC m=+0.055144270 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 04:41:15 np0005593232 nova_compute[250269]: 2026-01-23 09:41:15.466 250273 DEBUG nova.network.neutron [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:41:16 np0005593232 nova_compute[250269]: 2026-01-23 09:41:16.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:41:16 np0005593232 nova_compute[250269]: 2026-01-23 09:41:16.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:41:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:16.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:16 np0005593232 nova_compute[250269]: 2026-01-23 09:41:16.532 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:41:16 np0005593232 nova_compute[250269]: 2026-01-23 09:41:16.532 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:41:16 np0005593232 nova_compute[250269]: 2026-01-23 09:41:16.533 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:41:16 np0005593232 nova_compute[250269]: 2026-01-23 09:41:16.533 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:41:16 np0005593232 nova_compute[250269]: 2026-01-23 09:41:16.533 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:41:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:16.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 04:41:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 17K writes, 68K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 17K writes, 5551 syncs, 3.18 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7100 writes, 28K keys, 7100 commit groups, 1.0 writes per commit group, ingest: 28.79 MB, 0.05 MB/s#012Interval WAL: 7100 writes, 2845 syncs, 2.50 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 04:41:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:41:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3139990690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:41:16 np0005593232 nova_compute[250269]: 2026-01-23 09:41:16.996 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.154 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.156 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4639MB free_disk=20.96965789794922GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.156 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.156 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:41:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1422: 321 pgs: 321 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 508 KiB/s rd, 2.1 MiB/s wr, 76 op/s
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.257 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 029c50ac-8221-4aa1-ab0f-92693d5d4d44 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.257 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.257 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.301 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:41:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:41:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1322753566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.740 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.746 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.773 250273 DEBUG nova.network.neutron [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Updating instance_info_cache with network_info: [{"id": "1b310789-fedc-45bf-b49e-839dcea36fd7", "address": "fa:16:3e:8b:cd:5a", "network": {"id": "c2696fd4-5fd7-4934-88ac-40162fad555d", "bridge": "br-int", "label": "tempest-ImagesTestJSON-113670604-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05bc71a77710455e8b34ead7fec81a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b310789-fe", "ovs_interfaceid": "1b310789-fedc-45bf-b49e-839dcea36fd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.783 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.810 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.811 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.812 250273 DEBUG oslo_concurrency.lockutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Releasing lock "refresh_cache-029c50ac-8221-4aa1-ab0f-92693d5d4d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.812 250273 DEBUG nova.compute.manager [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Instance network_info: |[{"id": "1b310789-fedc-45bf-b49e-839dcea36fd7", "address": "fa:16:3e:8b:cd:5a", "network": {"id": "c2696fd4-5fd7-4934-88ac-40162fad555d", "bridge": "br-int", "label": "tempest-ImagesTestJSON-113670604-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05bc71a77710455e8b34ead7fec81a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b310789-fe", "ovs_interfaceid": "1b310789-fedc-45bf-b49e-839dcea36fd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.813 250273 DEBUG oslo_concurrency.lockutils [req-de424cf0-f41f-46f3-9c8c-e6cd3421db30 req-a0e9766c-7aee-4812-8e0b-c3f4581dd96a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-029c50ac-8221-4aa1-ab0f-92693d5d4d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.813 250273 DEBUG nova.network.neutron [req-de424cf0-f41f-46f3-9c8c-e6cd3421db30 req-a0e9766c-7aee-4812-8e0b-c3f4581dd96a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Refreshing network info cache for port 1b310789-fedc-45bf-b49e-839dcea36fd7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.816 250273 DEBUG nova.virt.libvirt.driver [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Start _get_guest_xml network_info=[{"id": "1b310789-fedc-45bf-b49e-839dcea36fd7", "address": "fa:16:3e:8b:cd:5a", "network": {"id": "c2696fd4-5fd7-4934-88ac-40162fad555d", "bridge": "br-int", "label": "tempest-ImagesTestJSON-113670604-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05bc71a77710455e8b34ead7fec81a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b310789-fe", "ovs_interfaceid": "1b310789-fedc-45bf-b49e-839dcea36fd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.821 250273 WARNING nova.virt.libvirt.driver [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.830 250273 DEBUG nova.virt.libvirt.host [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.831 250273 DEBUG nova.virt.libvirt.host [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.839 250273 DEBUG nova.virt.libvirt.host [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.840 250273 DEBUG nova.virt.libvirt.host [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.841 250273 DEBUG nova.virt.libvirt.driver [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.841 250273 DEBUG nova.virt.hardware [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.841 250273 DEBUG nova.virt.hardware [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.842 250273 DEBUG nova.virt.hardware [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.842 250273 DEBUG nova.virt.hardware [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.842 250273 DEBUG nova.virt.hardware [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.842 250273 DEBUG nova.virt.hardware [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.843 250273 DEBUG nova.virt.hardware [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.843 250273 DEBUG nova.virt.hardware [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.843 250273 DEBUG nova.virt.hardware [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.843 250273 DEBUG nova.virt.hardware [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.843 250273 DEBUG nova.virt.hardware [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:41:17 np0005593232 nova_compute[250269]: 2026-01-23 09:41:17.846 250273 DEBUG oslo_concurrency.processutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:41:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:41:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:18.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:41:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:18.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:18 np0005593232 nova_compute[250269]: 2026-01-23 09:41:18.745 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:41:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/940778192' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:41:18 np0005593232 nova_compute[250269]: 2026-01-23 09:41:18.813 250273 DEBUG oslo_concurrency.processutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.967s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:41:18 np0005593232 nova_compute[250269]: 2026-01-23 09:41:18.837 250273 DEBUG nova.storage.rbd_utils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] rbd image 029c50ac-8221-4aa1-ab0f-92693d5d4d44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:41:18 np0005593232 nova_compute[250269]: 2026-01-23 09:41:18.842 250273 DEBUG oslo_concurrency.processutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:41:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:41:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1423: 321 pgs: 321 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 53 op/s
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.295 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:41:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4114468076' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.475 250273 DEBUG oslo_concurrency.processutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.477 250273 DEBUG nova.virt.libvirt.vif [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:41:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1350534516',display_name='tempest-ImagesTestJSON-server-1350534516',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1350534516',id=42,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='05bc71a77710455e8b34ead7fec81a31',ramdisk_id='',reservation_id='r-3w4x2n4s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1507872051',owner_user_name='tempest-ImagesTestJSON-1507872051-project-member'},tags
=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:41:11Z,user_data=None,user_id='56da68482e3a4fb582dcccad45f8f71b',uuid=029c50ac-8221-4aa1-ab0f-92693d5d4d44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b310789-fedc-45bf-b49e-839dcea36fd7", "address": "fa:16:3e:8b:cd:5a", "network": {"id": "c2696fd4-5fd7-4934-88ac-40162fad555d", "bridge": "br-int", "label": "tempest-ImagesTestJSON-113670604-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05bc71a77710455e8b34ead7fec81a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b310789-fe", "ovs_interfaceid": "1b310789-fedc-45bf-b49e-839dcea36fd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.478 250273 DEBUG nova.network.os_vif_util [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Converting VIF {"id": "1b310789-fedc-45bf-b49e-839dcea36fd7", "address": "fa:16:3e:8b:cd:5a", "network": {"id": "c2696fd4-5fd7-4934-88ac-40162fad555d", "bridge": "br-int", "label": "tempest-ImagesTestJSON-113670604-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05bc71a77710455e8b34ead7fec81a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b310789-fe", "ovs_interfaceid": "1b310789-fedc-45bf-b49e-839dcea36fd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.479 250273 DEBUG nova.network.os_vif_util [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:cd:5a,bridge_name='br-int',has_traffic_filtering=True,id=1b310789-fedc-45bf-b49e-839dcea36fd7,network=Network(c2696fd4-5fd7-4934-88ac-40162fad555d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b310789-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.481 250273 DEBUG nova.objects.instance [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Lazy-loading 'pci_devices' on Instance uuid 029c50ac-8221-4aa1-ab0f-92693d5d4d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.501 250273 DEBUG nova.virt.libvirt.driver [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:41:19 np0005593232 nova_compute[250269]:  <uuid>029c50ac-8221-4aa1-ab0f-92693d5d4d44</uuid>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:  <name>instance-0000002a</name>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <nova:name>tempest-ImagesTestJSON-server-1350534516</nova:name>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:41:17</nova:creationTime>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:41:19 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:        <nova:user uuid="56da68482e3a4fb582dcccad45f8f71b">tempest-ImagesTestJSON-1507872051-project-member</nova:user>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:        <nova:project uuid="05bc71a77710455e8b34ead7fec81a31">tempest-ImagesTestJSON-1507872051</nova:project>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:        <nova:port uuid="1b310789-fedc-45bf-b49e-839dcea36fd7">
Jan 23 04:41:19 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <entry name="serial">029c50ac-8221-4aa1-ab0f-92693d5d4d44</entry>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <entry name="uuid">029c50ac-8221-4aa1-ab0f-92693d5d4d44</entry>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/029c50ac-8221-4aa1-ab0f-92693d5d4d44_disk">
Jan 23 04:41:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:41:19 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/029c50ac-8221-4aa1-ab0f-92693d5d4d44_disk.config">
Jan 23 04:41:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:41:19 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:8b:cd:5a"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <target dev="tap1b310789-fe"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/029c50ac-8221-4aa1-ab0f-92693d5d4d44/console.log" append="off"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:41:19 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:41:19 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:41:19 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:41:19 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.502 250273 DEBUG nova.compute.manager [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Preparing to wait for external event network-vif-plugged-1b310789-fedc-45bf-b49e-839dcea36fd7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.503 250273 DEBUG oslo_concurrency.lockutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Acquiring lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.503 250273 DEBUG oslo_concurrency.lockutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.503 250273 DEBUG oslo_concurrency.lockutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.504 250273 DEBUG nova.virt.libvirt.vif [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:41:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1350534516',display_name='tempest-ImagesTestJSON-server-1350534516',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1350534516',id=42,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='05bc71a77710455e8b34ead7fec81a31',ramdisk_id='',reservation_id='r-3w4x2n4s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1507872051',owner_user_name='tempest-ImagesTestJSON-1507872051-project-mem
ber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:41:11Z,user_data=None,user_id='56da68482e3a4fb582dcccad45f8f71b',uuid=029c50ac-8221-4aa1-ab0f-92693d5d4d44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1b310789-fedc-45bf-b49e-839dcea36fd7", "address": "fa:16:3e:8b:cd:5a", "network": {"id": "c2696fd4-5fd7-4934-88ac-40162fad555d", "bridge": "br-int", "label": "tempest-ImagesTestJSON-113670604-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05bc71a77710455e8b34ead7fec81a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b310789-fe", "ovs_interfaceid": "1b310789-fedc-45bf-b49e-839dcea36fd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.504 250273 DEBUG nova.network.os_vif_util [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Converting VIF {"id": "1b310789-fedc-45bf-b49e-839dcea36fd7", "address": "fa:16:3e:8b:cd:5a", "network": {"id": "c2696fd4-5fd7-4934-88ac-40162fad555d", "bridge": "br-int", "label": "tempest-ImagesTestJSON-113670604-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05bc71a77710455e8b34ead7fec81a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b310789-fe", "ovs_interfaceid": "1b310789-fedc-45bf-b49e-839dcea36fd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.504 250273 DEBUG nova.network.os_vif_util [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:cd:5a,bridge_name='br-int',has_traffic_filtering=True,id=1b310789-fedc-45bf-b49e-839dcea36fd7,network=Network(c2696fd4-5fd7-4934-88ac-40162fad555d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b310789-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.505 250273 DEBUG os_vif [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:cd:5a,bridge_name='br-int',has_traffic_filtering=True,id=1b310789-fedc-45bf-b49e-839dcea36fd7,network=Network(c2696fd4-5fd7-4934-88ac-40162fad555d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b310789-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.505 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.506 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.506 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.512 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.512 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b310789-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.513 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1b310789-fe, col_values=(('external_ids', {'iface-id': '1b310789-fedc-45bf-b49e-839dcea36fd7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8b:cd:5a', 'vm-uuid': '029c50ac-8221-4aa1-ab0f-92693d5d4d44'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.515 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:19 np0005593232 NetworkManager[49057]: <info>  [1769161279.5166] manager: (tap1b310789-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.518 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.523 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.525 250273 INFO os_vif [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:cd:5a,bridge_name='br-int',has_traffic_filtering=True,id=1b310789-fedc-45bf-b49e-839dcea36fd7,network=Network(c2696fd4-5fd7-4934-88ac-40162fad555d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b310789-fe')#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.760 250273 DEBUG nova.network.neutron [req-de424cf0-f41f-46f3-9c8c-e6cd3421db30 req-a0e9766c-7aee-4812-8e0b-c3f4581dd96a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Updated VIF entry in instance network info cache for port 1b310789-fedc-45bf-b49e-839dcea36fd7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.761 250273 DEBUG nova.network.neutron [req-de424cf0-f41f-46f3-9c8c-e6cd3421db30 req-a0e9766c-7aee-4812-8e0b-c3f4581dd96a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Updating instance_info_cache with network_info: [{"id": "1b310789-fedc-45bf-b49e-839dcea36fd7", "address": "fa:16:3e:8b:cd:5a", "network": {"id": "c2696fd4-5fd7-4934-88ac-40162fad555d", "bridge": "br-int", "label": "tempest-ImagesTestJSON-113670604-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05bc71a77710455e8b34ead7fec81a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b310789-fe", "ovs_interfaceid": "1b310789-fedc-45bf-b49e-839dcea36fd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.789 250273 DEBUG oslo_concurrency.lockutils [req-de424cf0-f41f-46f3-9c8c-e6cd3421db30 req-a0e9766c-7aee-4812-8e0b-c3f4581dd96a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-029c50ac-8221-4aa1-ab0f-92693d5d4d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.925 250273 DEBUG nova.virt.libvirt.driver [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.926 250273 DEBUG nova.virt.libvirt.driver [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.926 250273 DEBUG nova.virt.libvirt.driver [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] No VIF found with MAC fa:16:3e:8b:cd:5a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.927 250273 INFO nova.virt.libvirt.driver [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Using config drive#033[00m
Jan 23 04:41:19 np0005593232 nova_compute[250269]: 2026-01-23 09:41:19.965 250273 DEBUG nova.storage.rbd_utils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] rbd image 029c50ac-8221-4aa1-ab0f-92693d5d4d44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:41:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:20.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:20 np0005593232 nova_compute[250269]: 2026-01-23 09:41:20.496 250273 INFO nova.virt.libvirt.driver [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Creating config drive at /var/lib/nova/instances/029c50ac-8221-4aa1-ab0f-92693d5d4d44/disk.config#033[00m
Jan 23 04:41:20 np0005593232 nova_compute[250269]: 2026-01-23 09:41:20.501 250273 DEBUG oslo_concurrency.processutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/029c50ac-8221-4aa1-ab0f-92693d5d4d44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk3x1_hm4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:41:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:20.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:20 np0005593232 nova_compute[250269]: 2026-01-23 09:41:20.630 250273 DEBUG oslo_concurrency.processutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/029c50ac-8221-4aa1-ab0f-92693d5d4d44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk3x1_hm4" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:41:20 np0005593232 nova_compute[250269]: 2026-01-23 09:41:20.661 250273 DEBUG nova.storage.rbd_utils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] rbd image 029c50ac-8221-4aa1-ab0f-92693d5d4d44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:41:20 np0005593232 nova_compute[250269]: 2026-01-23 09:41:20.666 250273 DEBUG oslo_concurrency.processutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/029c50ac-8221-4aa1-ab0f-92693d5d4d44/disk.config 029c50ac-8221-4aa1-ab0f-92693d5d4d44_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:41:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:41:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:41:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:41:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1424: 321 pgs: 321 active+clean; 88 MiB data, 446 MiB used, 21 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 53 op/s
Jan 23 04:41:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Jan 23 04:41:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Jan 23 04:41:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:41:21 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Jan 23 04:41:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:41:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:41:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:41:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:41:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:41:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:22.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:41:22 np0005593232 nova_compute[250269]: 2026-01-23 09:41:22.490 250273 DEBUG oslo_concurrency.processutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/029c50ac-8221-4aa1-ab0f-92693d5d4d44/disk.config 029c50ac-8221-4aa1-ab0f-92693d5d4d44_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.824s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:41:22 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7166b887-fecd-43da-9ce0-3d5585c796e3 does not exist
Jan 23 04:41:22 np0005593232 nova_compute[250269]: 2026-01-23 09:41:22.491 250273 INFO nova.virt.libvirt.driver [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Deleting local config drive /var/lib/nova/instances/029c50ac-8221-4aa1-ab0f-92693d5d4d44/disk.config because it was imported into RBD.#033[00m
Jan 23 04:41:22 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ca38263a-9ef7-49cb-af5e-bb0fe631990d does not exist
Jan 23 04:41:22 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d103b0ed-218e-4f5c-8d08-5618f021aa0e does not exist
Jan 23 04:41:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:41:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:41:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:41:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:41:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:41:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:41:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:41:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:41:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:41:22 np0005593232 kernel: tap1b310789-fe: entered promiscuous mode
Jan 23 04:41:22 np0005593232 NetworkManager[49057]: <info>  [1769161282.5802] manager: (tap1b310789-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/62)
Jan 23 04:41:22 np0005593232 ovn_controller[151001]: 2026-01-23T09:41:22Z|00120|binding|INFO|Claiming lport 1b310789-fedc-45bf-b49e-839dcea36fd7 for this chassis.
Jan 23 04:41:22 np0005593232 ovn_controller[151001]: 2026-01-23T09:41:22Z|00121|binding|INFO|1b310789-fedc-45bf-b49e-839dcea36fd7: Claiming fa:16:3e:8b:cd:5a 10.100.0.10
Jan 23 04:41:22 np0005593232 nova_compute[250269]: 2026-01-23 09:41:22.580 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:22 np0005593232 nova_compute[250269]: 2026-01-23 09:41:22.585 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:22.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.593 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:cd:5a 10.100.0.10'], port_security=['fa:16:3e:8b:cd:5a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '029c50ac-8221-4aa1-ab0f-92693d5d4d44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c2696fd4-5fd7-4934-88ac-40162fad555d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '05bc71a77710455e8b34ead7fec81a31', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ab8b868e-d8b1-4e1d-87d5-538f88b95e73', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3459fea4-e2ba-482e-8d51-91ef5b74d71a, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=1b310789-fedc-45bf-b49e-839dcea36fd7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.596 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 1b310789-fedc-45bf-b49e-839dcea36fd7 in datapath c2696fd4-5fd7-4934-88ac-40162fad555d bound to our chassis#033[00m
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.597 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c2696fd4-5fd7-4934-88ac-40162fad555d#033[00m
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.614 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[df3a702e-ab28-4e77-87c7-df313b6b5300]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.615 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc2696fd4-51 in ovnmeta-c2696fd4-5fd7-4934-88ac-40162fad555d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.617 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc2696fd4-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.618 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[fef08e8a-da61-46bf-9103-24fac6c15500]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.619 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[bade9abf-d671-4e83-b74c-8e659d181034]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:41:22 np0005593232 systemd-udevd[280464]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.633 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[b0aeca55-1730-4d50-8ac5-1a9d4c68fec9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:41:22 np0005593232 systemd-machined[215836]: New machine qemu-18-instance-0000002a.
Jan 23 04:41:22 np0005593232 NetworkManager[49057]: <info>  [1769161282.6432] device (tap1b310789-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:41:22 np0005593232 systemd[1]: Started Virtual Machine qemu-18-instance-0000002a.
Jan 23 04:41:22 np0005593232 NetworkManager[49057]: <info>  [1769161282.6442] device (tap1b310789-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.646 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[131badd0-9498-40d5-810c-92c7201d72eb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:41:22 np0005593232 ovn_controller[151001]: 2026-01-23T09:41:22Z|00122|binding|INFO|Setting lport 1b310789-fedc-45bf-b49e-839dcea36fd7 ovn-installed in OVS
Jan 23 04:41:22 np0005593232 ovn_controller[151001]: 2026-01-23T09:41:22Z|00123|binding|INFO|Setting lport 1b310789-fedc-45bf-b49e-839dcea36fd7 up in Southbound
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.673 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[54bb55d1-519f-4634-acb3-a1a1f17eace8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:41:22 np0005593232 nova_compute[250269]: 2026-01-23 09:41:22.711 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:22 np0005593232 NetworkManager[49057]: <info>  [1769161282.7131] manager: (tapc2696fd4-50): new Veth device (/org/freedesktop/NetworkManager/Devices/63)
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.713 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2f5e4c4e-95b3-4e71-ae86-0183a7612c80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.760 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[f83a9ad1-c73e-49be-8bc9-72fc773de9ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.767 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[2e3074a6-f69c-48de-a7a8-aa09fd8d1e4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:41:22 np0005593232 NetworkManager[49057]: <info>  [1769161282.8023] device (tapc2696fd4-50): carrier: link connected
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.812 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[1648cc3d-4a17-42c1-838d-bf92c150c86f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.835 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b12a1f8e-8d89-499d-a4b6-918e98ec4a40]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc2696fd4-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:02:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519594, 'reachable_time': 19580, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280535, 'error': None, 'target': 'ovnmeta-c2696fd4-5fd7-4934-88ac-40162fad555d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.851 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e17458a6-feb3-4f36-99c6-c197127935c1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe27:20d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519594, 'tstamp': 519594}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280551, 'error': None, 'target': 'ovnmeta-c2696fd4-5fd7-4934-88ac-40162fad555d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.868 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[077baa4a-70f9-414f-8854-23182a8e62c6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc2696fd4-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:02:0d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519594, 'reachable_time': 19580, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 280553, 'error': None, 'target': 'ovnmeta-c2696fd4-5fd7-4934-88ac-40162fad555d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.904 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[46ccebda-892d-4fe0-8a3b-b66efaf69bb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.972 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ec852cef-bdf1-41ae-a340-be5c0f083069]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.973 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2696fd4-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.973 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.974 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2696fd4-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:41:22 np0005593232 kernel: tapc2696fd4-50: entered promiscuous mode
Jan 23 04:41:22 np0005593232 nova_compute[250269]: 2026-01-23 09:41:22.975 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:22 np0005593232 NetworkManager[49057]: <info>  [1769161282.9789] manager: (tapc2696fd4-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.979 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc2696fd4-50, col_values=(('external_ids', {'iface-id': '38b24332-af6b-47d2-95fe-400f5feeadcb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:41:22 np0005593232 ovn_controller[151001]: 2026-01-23T09:41:22Z|00124|binding|INFO|Releasing lport 38b24332-af6b-47d2-95fe-400f5feeadcb from this chassis (sb_readonly=0)
Jan 23 04:41:22 np0005593232 nova_compute[250269]: 2026-01-23 09:41:22.980 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:22 np0005593232 nova_compute[250269]: 2026-01-23 09:41:22.994 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.995 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c2696fd4-5fd7-4934-88ac-40162fad555d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c2696fd4-5fd7-4934-88ac-40162fad555d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.996 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[292eaeb4-63c8-4e2a-9fef-a40cf8772349]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.997 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-c2696fd4-5fd7-4934-88ac-40162fad555d
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/c2696fd4-5fd7-4934-88ac-40162fad555d.pid.haproxy
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID c2696fd4-5fd7-4934-88ac-40162fad555d
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:41:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:22.998 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c2696fd4-5fd7-4934-88ac-40162fad555d', 'env', 'PROCESS_TAG=haproxy-c2696fd4-5fd7-4934-88ac-40162fad555d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c2696fd4-5fd7-4934-88ac-40162fad555d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 04:41:23 np0005593232 podman[280602]: 2026-01-23 09:41:23.220845054 +0000 UTC m=+0.050356932 container create 15ccec87ea493dd99dc4d2e7427f03500fa045d70608fd6eb6529655949a1b1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 04:41:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1426: 321 pgs: 321 active+clean; 154 MiB data, 487 MiB used, 21 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.8 MiB/s wr, 78 op/s
Jan 23 04:41:23 np0005593232 systemd[1]: Started libpod-conmon-15ccec87ea493dd99dc4d2e7427f03500fa045d70608fd6eb6529655949a1b1a.scope.
Jan 23 04:41:23 np0005593232 podman[280602]: 2026-01-23 09:41:23.192425315 +0000 UTC m=+0.021937213 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:41:23 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:41:23 np0005593232 podman[280602]: 2026-01-23 09:41:23.319209808 +0000 UTC m=+0.148721696 container init 15ccec87ea493dd99dc4d2e7427f03500fa045d70608fd6eb6529655949a1b1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_swartz, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 23 04:41:23 np0005593232 podman[280602]: 2026-01-23 09:41:23.328988169 +0000 UTC m=+0.158500047 container start 15ccec87ea493dd99dc4d2e7427f03500fa045d70608fd6eb6529655949a1b1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:41:23 np0005593232 podman[280602]: 2026-01-23 09:41:23.332272404 +0000 UTC m=+0.161784282 container attach 15ccec87ea493dd99dc4d2e7427f03500fa045d70608fd6eb6529655949a1b1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:41:23 np0005593232 vigorous_swartz[280628]: 167 167
Jan 23 04:41:23 np0005593232 conmon[280628]: conmon 15ccec87ea493dd99dc4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-15ccec87ea493dd99dc4d2e7427f03500fa045d70608fd6eb6529655949a1b1a.scope/container/memory.events
Jan 23 04:41:23 np0005593232 systemd[1]: libpod-15ccec87ea493dd99dc4d2e7427f03500fa045d70608fd6eb6529655949a1b1a.scope: Deactivated successfully.
Jan 23 04:41:23 np0005593232 podman[280602]: 2026-01-23 09:41:23.337556866 +0000 UTC m=+0.167068744 container died 15ccec87ea493dd99dc4d2e7427f03500fa045d70608fd6eb6529655949a1b1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_swartz, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:41:23 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f02ab7c52917e5a7efab889368ca889afbbed27634364fbe441cacd0f570c547-merged.mount: Deactivated successfully.
Jan 23 04:41:23 np0005593232 podman[280641]: 2026-01-23 09:41:23.3512333 +0000 UTC m=+0.027616396 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:41:24 np0005593232 nova_compute[250269]: 2026-01-23 09:41:24.057 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161284.0569026, 029c50ac-8221-4aa1-ab0f-92693d5d4d44 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:41:24 np0005593232 nova_compute[250269]: 2026-01-23 09:41:24.058 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] VM Started (Lifecycle Event)#033[00m
Jan 23 04:41:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:41:24 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:41:24 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:41:24 np0005593232 podman[280602]: 2026-01-23 09:41:24.065250519 +0000 UTC m=+0.894762407 container remove 15ccec87ea493dd99dc4d2e7427f03500fa045d70608fd6eb6529655949a1b1a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_swartz, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 04:41:24 np0005593232 systemd[1]: libpod-conmon-15ccec87ea493dd99dc4d2e7427f03500fa045d70608fd6eb6529655949a1b1a.scope: Deactivated successfully.
Jan 23 04:41:24 np0005593232 nova_compute[250269]: 2026-01-23 09:41:24.297 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:24 np0005593232 podman[280641]: 2026-01-23 09:41:24.360447842 +0000 UTC m=+1.036830908 container create 05196a139e32720e66bce7894196910678ec26479c0cbcc4eca6ee8ffb3cbf99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c2696fd4-5fd7-4934-88ac-40162fad555d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:41:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:24.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:24 np0005593232 systemd[1]: Started libpod-conmon-05196a139e32720e66bce7894196910678ec26479c0cbcc4eca6ee8ffb3cbf99.scope.
Jan 23 04:41:24 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:41:24 np0005593232 nova_compute[250269]: 2026-01-23 09:41:24.515 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24c6206de337684961239ffa0d72807f58d781367de6969766037688e31b073d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:41:24 np0005593232 podman[280641]: 2026-01-23 09:41:24.530399159 +0000 UTC m=+1.206782265 container init 05196a139e32720e66bce7894196910678ec26479c0cbcc4eca6ee8ffb3cbf99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c2696fd4-5fd7-4934-88ac-40162fad555d, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:41:24 np0005593232 podman[280641]: 2026-01-23 09:41:24.535886597 +0000 UTC m=+1.212269683 container start 05196a139e32720e66bce7894196910678ec26479c0cbcc4eca6ee8ffb3cbf99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c2696fd4-5fd7-4934-88ac-40162fad555d, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 23 04:41:24 np0005593232 neutron-haproxy-ovnmeta-c2696fd4-5fd7-4934-88ac-40162fad555d[280731]: [NOTICE]   (280735) : New worker (280738) forked
Jan 23 04:41:24 np0005593232 neutron-haproxy-ovnmeta-c2696fd4-5fd7-4934-88ac-40162fad555d[280731]: [NOTICE]   (280735) : Loading success.
Jan 23 04:41:24 np0005593232 podman[280716]: 2026-01-23 09:41:24.563672177 +0000 UTC m=+0.373060888 container create 650f1dd14a365b88e8b5619320b7bcca5c2634396ece909d4209b2e7521abe9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mahavira, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 04:41:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:24.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:24 np0005593232 systemd[1]: Started libpod-conmon-650f1dd14a365b88e8b5619320b7bcca5c2634396ece909d4209b2e7521abe9b.scope.
Jan 23 04:41:24 np0005593232 podman[280716]: 2026-01-23 09:41:24.543313361 +0000 UTC m=+0.352702092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:41:24 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:41:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1cbd29e988c186997552ea461aa23fb311afc84f392a09fea7a88584264998c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:41:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1cbd29e988c186997552ea461aa23fb311afc84f392a09fea7a88584264998c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:41:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1cbd29e988c186997552ea461aa23fb311afc84f392a09fea7a88584264998c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:41:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1cbd29e988c186997552ea461aa23fb311afc84f392a09fea7a88584264998c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:41:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1cbd29e988c186997552ea461aa23fb311afc84f392a09fea7a88584264998c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:41:24 np0005593232 podman[280716]: 2026-01-23 09:41:24.659843138 +0000 UTC m=+0.469231869 container init 650f1dd14a365b88e8b5619320b7bcca5c2634396ece909d4209b2e7521abe9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mahavira, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:41:24 np0005593232 podman[280716]: 2026-01-23 09:41:24.672297667 +0000 UTC m=+0.481686378 container start 650f1dd14a365b88e8b5619320b7bcca5c2634396ece909d4209b2e7521abe9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:41:24 np0005593232 podman[280716]: 2026-01-23 09:41:24.676308282 +0000 UTC m=+0.485697013 container attach 650f1dd14a365b88e8b5619320b7bcca5c2634396ece909d4209b2e7521abe9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mahavira, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 23 04:41:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1427: 321 pgs: 321 active+clean; 154 MiB data, 487 MiB used, 21 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.4 MiB/s wr, 71 op/s
Jan 23 04:41:25 np0005593232 optimistic_mahavira[280749]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:41:25 np0005593232 optimistic_mahavira[280749]: --> relative data size: 1.0
Jan 23 04:41:25 np0005593232 optimistic_mahavira[280749]: --> All data devices are unavailable
Jan 23 04:41:25 np0005593232 systemd[1]: libpod-650f1dd14a365b88e8b5619320b7bcca5c2634396ece909d4209b2e7521abe9b.scope: Deactivated successfully.
Jan 23 04:41:25 np0005593232 podman[280716]: 2026-01-23 09:41:25.511192053 +0000 UTC m=+1.320580764 container died 650f1dd14a365b88e8b5619320b7bcca5c2634396ece909d4209b2e7521abe9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mahavira, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:41:25 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f1cbd29e988c186997552ea461aa23fb311afc84f392a09fea7a88584264998c-merged.mount: Deactivated successfully.
Jan 23 04:41:25 np0005593232 podman[280716]: 2026-01-23 09:41:25.579001817 +0000 UTC m=+1.388390528 container remove 650f1dd14a365b88e8b5619320b7bcca5c2634396ece909d4209b2e7521abe9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Jan 23 04:41:25 np0005593232 systemd[1]: libpod-conmon-650f1dd14a365b88e8b5619320b7bcca5c2634396ece909d4209b2e7521abe9b.scope: Deactivated successfully.
Jan 23 04:41:25 np0005593232 nova_compute[250269]: 2026-01-23 09:41:25.808 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:41:25 np0005593232 nova_compute[250269]: 2026-01-23 09:41:25.815 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161284.0579324, 029c50ac-8221-4aa1-ab0f-92693d5d4d44 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:41:25 np0005593232 nova_compute[250269]: 2026-01-23 09:41:25.816 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:41:26 np0005593232 podman[280919]: 2026-01-23 09:41:26.201590312 +0000 UTC m=+0.023704703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:41:26 np0005593232 podman[280919]: 2026-01-23 09:41:26.323645139 +0000 UTC m=+0.145759500 container create 729ce433849ee532ccc7915dbeb7215f95fc1b9081fd79daebdc4d4d016f4836 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:41:26 np0005593232 systemd[1]: Started libpod-conmon-729ce433849ee532ccc7915dbeb7215f95fc1b9081fd79daebdc4d4d016f4836.scope.
Jan 23 04:41:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:26.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:26 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:41:26 np0005593232 podman[280919]: 2026-01-23 09:41:26.469548472 +0000 UTC m=+0.291662833 container init 729ce433849ee532ccc7915dbeb7215f95fc1b9081fd79daebdc4d4d016f4836 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_noyce, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 04:41:26 np0005593232 podman[280919]: 2026-01-23 09:41:26.475364429 +0000 UTC m=+0.297478790 container start 729ce433849ee532ccc7915dbeb7215f95fc1b9081fd79daebdc4d4d016f4836 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_noyce, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 04:41:26 np0005593232 podman[280919]: 2026-01-23 09:41:26.478711786 +0000 UTC m=+0.300826177 container attach 729ce433849ee532ccc7915dbeb7215f95fc1b9081fd79daebdc4d4d016f4836 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_noyce, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 04:41:26 np0005593232 lucid_noyce[280935]: 167 167
Jan 23 04:41:26 np0005593232 systemd[1]: libpod-729ce433849ee532ccc7915dbeb7215f95fc1b9081fd79daebdc4d4d016f4836.scope: Deactivated successfully.
Jan 23 04:41:26 np0005593232 conmon[280935]: conmon 729ce433849ee532ccc7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-729ce433849ee532ccc7915dbeb7215f95fc1b9081fd79daebdc4d4d016f4836.scope/container/memory.events
Jan 23 04:41:26 np0005593232 podman[280919]: 2026-01-23 09:41:26.482416322 +0000 UTC m=+0.304530693 container died 729ce433849ee532ccc7915dbeb7215f95fc1b9081fd79daebdc4d4d016f4836 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 23 04:41:26 np0005593232 systemd[1]: var-lib-containers-storage-overlay-baab4846eb0e91d49bf26be99ec98a422c363ae348d158d0d1e12a417ba49bac-merged.mount: Deactivated successfully.
Jan 23 04:41:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:26.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:26 np0005593232 podman[280919]: 2026-01-23 09:41:26.940057686 +0000 UTC m=+0.762172047 container remove 729ce433849ee532ccc7915dbeb7215f95fc1b9081fd79daebdc4d4d016f4836 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_noyce, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 04:41:26 np0005593232 systemd[1]: libpod-conmon-729ce433849ee532ccc7915dbeb7215f95fc1b9081fd79daebdc4d4d016f4836.scope: Deactivated successfully.
Jan 23 04:41:27 np0005593232 podman[280962]: 2026-01-23 09:41:27.161477405 +0000 UTC m=+0.073252581 container create cac109df5042a9b753df0da61cab788de85e5f77fd577c1583d555c31520bed8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lewin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:41:27 np0005593232 systemd[1]: Started libpod-conmon-cac109df5042a9b753df0da61cab788de85e5f77fd577c1583d555c31520bed8.scope.
Jan 23 04:41:27 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:41:27 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e72643d85e3e3dfa65102d78231ca2fc60e88215f86f63b40c6d86ebab33f46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:41:27 np0005593232 podman[280962]: 2026-01-23 09:41:27.131522812 +0000 UTC m=+0.043298018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:41:27 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e72643d85e3e3dfa65102d78231ca2fc60e88215f86f63b40c6d86ebab33f46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:41:27 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e72643d85e3e3dfa65102d78231ca2fc60e88215f86f63b40c6d86ebab33f46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:41:27 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e72643d85e3e3dfa65102d78231ca2fc60e88215f86f63b40c6d86ebab33f46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:41:27 np0005593232 podman[280962]: 2026-01-23 09:41:27.236393803 +0000 UTC m=+0.148168959 container init cac109df5042a9b753df0da61cab788de85e5f77fd577c1583d555c31520bed8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lewin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 04:41:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1428: 321 pgs: 321 active+clean; 155 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.2 MiB/s wr, 61 op/s
Jan 23 04:41:27 np0005593232 podman[280962]: 2026-01-23 09:41:27.242426797 +0000 UTC m=+0.154201943 container start cac109df5042a9b753df0da61cab788de85e5f77fd577c1583d555c31520bed8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lewin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:41:27 np0005593232 podman[280962]: 2026-01-23 09:41:27.246062632 +0000 UTC m=+0.157837778 container attach cac109df5042a9b753df0da61cab788de85e5f77fd577c1583d555c31520bed8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lewin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 23 04:41:27 np0005593232 ceph-mgr[74726]: [devicehealth INFO root] Check health
Jan 23 04:41:27 np0005593232 nova_compute[250269]: 2026-01-23 09:41:27.761 250273 DEBUG nova.compute.manager [req-43ff74cb-a1f1-498a-9a93-b51e4caa59ec req-cd3ab147-4451-4d01-ad0d-d873eadd77c8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Received event network-vif-plugged-1b310789-fedc-45bf-b49e-839dcea36fd7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:41:27 np0005593232 nova_compute[250269]: 2026-01-23 09:41:27.761 250273 DEBUG oslo_concurrency.lockutils [req-43ff74cb-a1f1-498a-9a93-b51e4caa59ec req-cd3ab147-4451-4d01-ad0d-d873eadd77c8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:41:27 np0005593232 nova_compute[250269]: 2026-01-23 09:41:27.761 250273 DEBUG oslo_concurrency.lockutils [req-43ff74cb-a1f1-498a-9a93-b51e4caa59ec req-cd3ab147-4451-4d01-ad0d-d873eadd77c8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:41:27 np0005593232 nova_compute[250269]: 2026-01-23 09:41:27.762 250273 DEBUG oslo_concurrency.lockutils [req-43ff74cb-a1f1-498a-9a93-b51e4caa59ec req-cd3ab147-4451-4d01-ad0d-d873eadd77c8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:41:27 np0005593232 nova_compute[250269]: 2026-01-23 09:41:27.762 250273 DEBUG nova.compute.manager [req-43ff74cb-a1f1-498a-9a93-b51e4caa59ec req-cd3ab147-4451-4d01-ad0d-d873eadd77c8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Processing event network-vif-plugged-1b310789-fedc-45bf-b49e-839dcea36fd7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 04:41:27 np0005593232 nova_compute[250269]: 2026-01-23 09:41:27.762 250273 DEBUG nova.compute.manager [req-43ff74cb-a1f1-498a-9a93-b51e4caa59ec req-cd3ab147-4451-4d01-ad0d-d873eadd77c8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Received event network-vif-plugged-1b310789-fedc-45bf-b49e-839dcea36fd7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:41:27 np0005593232 nova_compute[250269]: 2026-01-23 09:41:27.762 250273 DEBUG oslo_concurrency.lockutils [req-43ff74cb-a1f1-498a-9a93-b51e4caa59ec req-cd3ab147-4451-4d01-ad0d-d873eadd77c8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:41:27 np0005593232 nova_compute[250269]: 2026-01-23 09:41:27.762 250273 DEBUG oslo_concurrency.lockutils [req-43ff74cb-a1f1-498a-9a93-b51e4caa59ec req-cd3ab147-4451-4d01-ad0d-d873eadd77c8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:41:27 np0005593232 nova_compute[250269]: 2026-01-23 09:41:27.763 250273 DEBUG oslo_concurrency.lockutils [req-43ff74cb-a1f1-498a-9a93-b51e4caa59ec req-cd3ab147-4451-4d01-ad0d-d873eadd77c8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:41:27 np0005593232 nova_compute[250269]: 2026-01-23 09:41:27.763 250273 DEBUG nova.compute.manager [req-43ff74cb-a1f1-498a-9a93-b51e4caa59ec req-cd3ab147-4451-4d01-ad0d-d873eadd77c8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] No waiting events found dispatching network-vif-plugged-1b310789-fedc-45bf-b49e-839dcea36fd7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:41:27 np0005593232 nova_compute[250269]: 2026-01-23 09:41:27.763 250273 WARNING nova.compute.manager [req-43ff74cb-a1f1-498a-9a93-b51e4caa59ec req-cd3ab147-4451-4d01-ad0d-d873eadd77c8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Received unexpected event network-vif-plugged-1b310789-fedc-45bf-b49e-839dcea36fd7 for instance with vm_state building and task_state spawning.#033[00m
Jan 23 04:41:27 np0005593232 nova_compute[250269]: 2026-01-23 09:41:27.764 250273 DEBUG nova.compute.manager [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:41:27 np0005593232 nova_compute[250269]: 2026-01-23 09:41:27.772 250273 DEBUG nova.virt.libvirt.driver [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:41:27 np0005593232 nova_compute[250269]: 2026-01-23 09:41:27.775 250273 INFO nova.virt.libvirt.driver [-] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Instance spawned successfully.#033[00m
Jan 23 04:41:27 np0005593232 nova_compute[250269]: 2026-01-23 09:41:27.776 250273 DEBUG nova.virt.libvirt.driver [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:41:27 np0005593232 nova_compute[250269]: 2026-01-23 09:41:27.837 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:41:27 np0005593232 nova_compute[250269]: 2026-01-23 09:41:27.841 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161287.7738097, 029c50ac-8221-4aa1-ab0f-92693d5d4d44 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:41:27 np0005593232 nova_compute[250269]: 2026-01-23 09:41:27.842 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:41:28 np0005593232 brave_lewin[280979]: {
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:    "0": [
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:        {
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:            "devices": [
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:                "/dev/loop3"
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:            ],
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:            "lv_name": "ceph_lv0",
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:            "lv_size": "7511998464",
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:            "name": "ceph_lv0",
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:            "tags": {
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:                "ceph.cluster_name": "ceph",
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:                "ceph.crush_device_class": "",
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:                "ceph.encrypted": "0",
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:                "ceph.osd_id": "0",
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:                "ceph.type": "block",
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:                "ceph.vdo": "0"
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:            },
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:            "type": "block",
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:            "vg_name": "ceph_vg0"
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:        }
Jan 23 04:41:28 np0005593232 brave_lewin[280979]:    ]
Jan 23 04:41:28 np0005593232 brave_lewin[280979]: }
Jan 23 04:41:28 np0005593232 systemd[1]: libpod-cac109df5042a9b753df0da61cab788de85e5f77fd577c1583d555c31520bed8.scope: Deactivated successfully.
Jan 23 04:41:28 np0005593232 podman[280962]: 2026-01-23 09:41:28.141382553 +0000 UTC m=+1.053157699 container died cac109df5042a9b753df0da61cab788de85e5f77fd577c1583d555c31520bed8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lewin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:41:28 np0005593232 systemd[1]: var-lib-containers-storage-overlay-2e72643d85e3e3dfa65102d78231ca2fc60e88215f86f63b40c6d86ebab33f46-merged.mount: Deactivated successfully.
Jan 23 04:41:28 np0005593232 podman[280962]: 2026-01-23 09:41:28.240423097 +0000 UTC m=+1.152198243 container remove cac109df5042a9b753df0da61cab788de85e5f77fd577c1583d555c31520bed8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 23 04:41:28 np0005593232 systemd[1]: libpod-conmon-cac109df5042a9b753df0da61cab788de85e5f77fd577c1583d555c31520bed8.scope: Deactivated successfully.
Jan 23 04:41:28 np0005593232 nova_compute[250269]: 2026-01-23 09:41:28.265 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:41:28 np0005593232 nova_compute[250269]: 2026-01-23 09:41:28.270 250273 DEBUG nova.virt.libvirt.driver [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:41:28 np0005593232 nova_compute[250269]: 2026-01-23 09:41:28.271 250273 DEBUG nova.virt.libvirt.driver [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:41:28 np0005593232 nova_compute[250269]: 2026-01-23 09:41:28.271 250273 DEBUG nova.virt.libvirt.driver [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:41:28 np0005593232 nova_compute[250269]: 2026-01-23 09:41:28.271 250273 DEBUG nova.virt.libvirt.driver [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:41:28 np0005593232 nova_compute[250269]: 2026-01-23 09:41:28.272 250273 DEBUG nova.virt.libvirt.driver [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:41:28 np0005593232 nova_compute[250269]: 2026-01-23 09:41:28.273 250273 DEBUG nova.virt.libvirt.driver [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:41:28 np0005593232 nova_compute[250269]: 2026-01-23 09:41:28.278 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:41:28 np0005593232 nova_compute[250269]: 2026-01-23 09:41:28.396 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:41:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:41:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:28.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:41:28 np0005593232 nova_compute[250269]: 2026-01-23 09:41:28.432 250273 INFO nova.compute.manager [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Took 16.89 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:41:28 np0005593232 nova_compute[250269]: 2026-01-23 09:41:28.433 250273 DEBUG nova.compute.manager [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:41:28 np0005593232 nova_compute[250269]: 2026-01-23 09:41:28.533 250273 INFO nova.compute.manager [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Took 17.94 seconds to build instance.#033[00m
Jan 23 04:41:28 np0005593232 nova_compute[250269]: 2026-01-23 09:41:28.561 250273 DEBUG oslo_concurrency.lockutils [None req-d1c531be-5ec2-44d6-a1ff-fa0f8cde5358 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:41:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:28.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:28 np0005593232 podman[281141]: 2026-01-23 09:41:28.879979501 +0000 UTC m=+0.040636091 container create 788aa7d32a7f919aec46601b2c829515e73b591cefc4ace72682e99301fe8457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 04:41:28 np0005593232 systemd[1]: Started libpod-conmon-788aa7d32a7f919aec46601b2c829515e73b591cefc4ace72682e99301fe8457.scope.
Jan 23 04:41:28 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:41:28 np0005593232 podman[281141]: 2026-01-23 09:41:28.862427626 +0000 UTC m=+0.023084236 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:41:28 np0005593232 podman[281141]: 2026-01-23 09:41:28.961581022 +0000 UTC m=+0.122237632 container init 788aa7d32a7f919aec46601b2c829515e73b591cefc4ace72682e99301fe8457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Jan 23 04:41:28 np0005593232 podman[281141]: 2026-01-23 09:41:28.969000236 +0000 UTC m=+0.129656826 container start 788aa7d32a7f919aec46601b2c829515e73b591cefc4ace72682e99301fe8457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_hamilton, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Jan 23 04:41:28 np0005593232 serene_hamilton[281158]: 167 167
Jan 23 04:41:28 np0005593232 podman[281141]: 2026-01-23 09:41:28.972795245 +0000 UTC m=+0.133451835 container attach 788aa7d32a7f919aec46601b2c829515e73b591cefc4ace72682e99301fe8457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 04:41:28 np0005593232 systemd[1]: libpod-788aa7d32a7f919aec46601b2c829515e73b591cefc4ace72682e99301fe8457.scope: Deactivated successfully.
Jan 23 04:41:28 np0005593232 podman[281141]: 2026-01-23 09:41:28.974251247 +0000 UTC m=+0.134907857 container died 788aa7d32a7f919aec46601b2c829515e73b591cefc4ace72682e99301fe8457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_hamilton, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 04:41:28 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f0f41c4ad2f6250bcfd9b002226dd5b726cbd972c3775ab554a64ba7a4a978c1-merged.mount: Deactivated successfully.
Jan 23 04:41:29 np0005593232 podman[281141]: 2026-01-23 09:41:29.012928281 +0000 UTC m=+0.173584871 container remove 788aa7d32a7f919aec46601b2c829515e73b591cefc4ace72682e99301fe8457 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 04:41:29 np0005593232 systemd[1]: libpod-conmon-788aa7d32a7f919aec46601b2c829515e73b591cefc4ace72682e99301fe8457.scope: Deactivated successfully.
Jan 23 04:41:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:41:29 np0005593232 podman[281183]: 2026-01-23 09:41:29.169699928 +0000 UTC m=+0.036633117 container create e1eae5247a03f9a6cc22bff1c1544f20d8b6483bef4e61c382d84e3c00f3ed2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ellis, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:41:29 np0005593232 systemd[1]: Started libpod-conmon-e1eae5247a03f9a6cc22bff1c1544f20d8b6483bef4e61c382d84e3c00f3ed2e.scope.
Jan 23 04:41:29 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:41:29 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/411cd5a1604a0771aeb095daed559a49f45c3c070f2b4e85ced24e8f69ef67ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:41:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1429: 321 pgs: 321 active+clean; 155 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 861 KiB/s rd, 4.2 MiB/s wr, 62 op/s
Jan 23 04:41:29 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/411cd5a1604a0771aeb095daed559a49f45c3c070f2b4e85ced24e8f69ef67ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:41:29 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/411cd5a1604a0771aeb095daed559a49f45c3c070f2b4e85ced24e8f69ef67ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:41:29 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/411cd5a1604a0771aeb095daed559a49f45c3c070f2b4e85ced24e8f69ef67ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:41:29 np0005593232 podman[281183]: 2026-01-23 09:41:29.153293155 +0000 UTC m=+0.020226374 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:41:29 np0005593232 podman[281183]: 2026-01-23 09:41:29.256975402 +0000 UTC m=+0.123908611 container init e1eae5247a03f9a6cc22bff1c1544f20d8b6483bef4e61c382d84e3c00f3ed2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:41:29 np0005593232 podman[281183]: 2026-01-23 09:41:29.264150949 +0000 UTC m=+0.131084138 container start e1eae5247a03f9a6cc22bff1c1544f20d8b6483bef4e61c382d84e3c00f3ed2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ellis, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 23 04:41:29 np0005593232 podman[281183]: 2026-01-23 09:41:29.267021591 +0000 UTC m=+0.133954780 container attach e1eae5247a03f9a6cc22bff1c1544f20d8b6483bef4e61c382d84e3c00f3ed2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:41:29 np0005593232 nova_compute[250269]: 2026-01-23 09:41:29.299 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:29 np0005593232 nova_compute[250269]: 2026-01-23 09:41:29.517 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:30 np0005593232 suspicious_ellis[281200]: {
Jan 23 04:41:30 np0005593232 suspicious_ellis[281200]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:41:30 np0005593232 suspicious_ellis[281200]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:41:30 np0005593232 suspicious_ellis[281200]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:41:30 np0005593232 suspicious_ellis[281200]:        "osd_id": 0,
Jan 23 04:41:30 np0005593232 suspicious_ellis[281200]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:41:30 np0005593232 suspicious_ellis[281200]:        "type": "bluestore"
Jan 23 04:41:30 np0005593232 suspicious_ellis[281200]:    }
Jan 23 04:41:30 np0005593232 suspicious_ellis[281200]: }
Jan 23 04:41:30 np0005593232 systemd[1]: libpod-e1eae5247a03f9a6cc22bff1c1544f20d8b6483bef4e61c382d84e3c00f3ed2e.scope: Deactivated successfully.
Jan 23 04:41:30 np0005593232 conmon[281200]: conmon e1eae5247a03f9a6cc22 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e1eae5247a03f9a6cc22bff1c1544f20d8b6483bef4e61c382d84e3c00f3ed2e.scope/container/memory.events
Jan 23 04:41:30 np0005593232 podman[281183]: 2026-01-23 09:41:30.092215184 +0000 UTC m=+0.959148373 container died e1eae5247a03f9a6cc22bff1c1544f20d8b6483bef4e61c382d84e3c00f3ed2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 23 04:41:30 np0005593232 systemd[1]: var-lib-containers-storage-overlay-411cd5a1604a0771aeb095daed559a49f45c3c070f2b4e85ced24e8f69ef67ba-merged.mount: Deactivated successfully.
Jan 23 04:41:30 np0005593232 podman[281183]: 2026-01-23 09:41:30.155026033 +0000 UTC m=+1.021959222 container remove e1eae5247a03f9a6cc22bff1c1544f20d8b6483bef4e61c382d84e3c00f3ed2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_ellis, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:41:30 np0005593232 systemd[1]: libpod-conmon-e1eae5247a03f9a6cc22bff1c1544f20d8b6483bef4e61c382d84e3c00f3ed2e.scope: Deactivated successfully.
Jan 23 04:41:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:41:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:41:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1004270241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:41:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:41:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:41:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:30.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:30 np0005593232 nova_compute[250269]: 2026-01-23 09:41:30.557 250273 DEBUG oslo_concurrency.lockutils [None req-a3c3e76e-b150-469b-9bac-4cb1b28d4886 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Acquiring lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:41:30 np0005593232 nova_compute[250269]: 2026-01-23 09:41:30.558 250273 DEBUG oslo_concurrency.lockutils [None req-a3c3e76e-b150-469b-9bac-4cb1b28d4886 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:41:30 np0005593232 nova_compute[250269]: 2026-01-23 09:41:30.558 250273 DEBUG nova.compute.manager [None req-a3c3e76e-b150-469b-9bac-4cb1b28d4886 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:41:30 np0005593232 nova_compute[250269]: 2026-01-23 09:41:30.562 250273 DEBUG nova.compute.manager [None req-a3c3e76e-b150-469b-9bac-4cb1b28d4886 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Jan 23 04:41:30 np0005593232 nova_compute[250269]: 2026-01-23 09:41:30.563 250273 DEBUG nova.objects.instance [None req-a3c3e76e-b150-469b-9bac-4cb1b28d4886 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Lazy-loading 'flavor' on Instance uuid 029c50ac-8221-4aa1-ab0f-92693d5d4d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:41:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:30.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:30 np0005593232 nova_compute[250269]: 2026-01-23 09:41:30.597 250273 DEBUG nova.virt.libvirt.driver [None req-a3c3e76e-b150-469b-9bac-4cb1b28d4886 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 23 04:41:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:41:30 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev fdac8dd8-5be9-4b5e-864f-b552e8876412 does not exist
Jan 23 04:41:30 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ab41d2a2-ea4c-4ec4-ad84-1f3d53031342 does not exist
Jan 23 04:41:30 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c0b06e98-6f70-4520-94a0-d0792648578c does not exist
Jan 23 04:41:30 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:41:30 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:41:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1430: 321 pgs: 321 active+clean; 155 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 861 KiB/s rd, 4.2 MiB/s wr, 62 op/s
Jan 23 04:41:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:32.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:32.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1431: 321 pgs: 321 active+clean; 155 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 120 op/s
Jan 23 04:41:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:41:34 np0005593232 nova_compute[250269]: 2026-01-23 09:41:34.301 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:34.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:34 np0005593232 nova_compute[250269]: 2026-01-23 09:41:34.521 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:34.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1432: 321 pgs: 321 active+clean; 155 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 73 op/s
Jan 23 04:41:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:36.161 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:41:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:36.163 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:41:36 np0005593232 nova_compute[250269]: 2026-01-23 09:41:36.199 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:36.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:36 np0005593232 podman[281336]: 2026-01-23 09:41:36.468902614 +0000 UTC m=+0.131782028 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 04:41:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:36.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:41:37
Jan 23 04:41:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:41:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:41:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['vms', '.mgr', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'volumes']
Jan 23 04:41:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:41:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:41:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:41:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:41:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:41:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:41:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:41:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1433: 321 pgs: 321 active+clean; 155 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 2.7 MiB/s rd, 25 KiB/s wr, 108 op/s
Jan 23 04:41:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:41:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:41:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:41:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:41:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:41:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:41:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:41:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:41:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:41:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:41:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:38.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:38.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:41:39 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:39.166 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:41:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1434: 321 pgs: 321 active+clean; 155 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 140 op/s
Jan 23 04:41:39 np0005593232 nova_compute[250269]: 2026-01-23 09:41:39.303 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:39 np0005593232 nova_compute[250269]: 2026-01-23 09:41:39.523 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:40.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:40.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:40 np0005593232 nova_compute[250269]: 2026-01-23 09:41:40.643 250273 DEBUG nova.virt.libvirt.driver [None req-a3c3e76e-b150-469b-9bac-4cb1b28d4886 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 23 04:41:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1435: 321 pgs: 321 active+clean; 155 MiB data, 488 MiB used, 21 GiB / 21 GiB avail; 3.8 MiB/s rd, 12 KiB/s wr, 138 op/s
Jan 23 04:41:41 np0005593232 ovn_controller[151001]: 2026-01-23T09:41:41Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8b:cd:5a 10.100.0.10
Jan 23 04:41:41 np0005593232 ovn_controller[151001]: 2026-01-23T09:41:41Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8b:cd:5a 10.100.0.10
Jan 23 04:41:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:42.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:42.596 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:41:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:42.597 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:41:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:42.597 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:41:42 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 04:41:42 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 04:41:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:42.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1436: 321 pgs: 321 active+clean; 222 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.8 MiB/s wr, 217 op/s
Jan 23 04:41:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:41:44 np0005593232 nova_compute[250269]: 2026-01-23 09:41:44.306 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:44.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:44 np0005593232 nova_compute[250269]: 2026-01-23 09:41:44.525 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:41:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:44.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:41:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1437: 321 pgs: 321 active+clean; 222 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.8 MiB/s wr, 153 op/s
Jan 23 04:41:45 np0005593232 ceph-osd[85010]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 23 04:41:46 np0005593232 podman[281372]: 2026-01-23 09:41:46.406128605 +0000 UTC m=+0.062214593 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 23 04:41:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:46.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:41:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:46.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:41:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:41:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:41:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:41:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:41:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004111086392611204 of space, bias 1.0, pg target 1.233325917783361 quantized to 32 (current 32)
Jan 23 04:41:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:41:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 23 04:41:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:41:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:41:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:41:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8535323463381723 quantized to 32 (current 32)
Jan 23 04:41:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:41:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 04:41:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:41:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:41:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:41:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 04:41:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:41:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 04:41:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:41:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:41:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:41:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 04:41:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1438: 321 pgs: 321 active+clean; 231 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.8 MiB/s wr, 167 op/s
Jan 23 04:41:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:48.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:48.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:41:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1439: 321 pgs: 321 active+clean; 239 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.6 MiB/s wr, 159 op/s
Jan 23 04:41:49 np0005593232 nova_compute[250269]: 2026-01-23 09:41:49.307 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:41:49 np0005593232 nova_compute[250269]: 2026-01-23 09:41:49.526 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:41:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:41:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:50.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:41:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:50.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1440: 321 pgs: 321 active+clean; 239 MiB data, 561 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.6 MiB/s wr, 120 op/s
Jan 23 04:41:51 np0005593232 nova_compute[250269]: 2026-01-23 09:41:51.691 250273 DEBUG nova.virt.libvirt.driver [None req-a3c3e76e-b150-469b-9bac-4cb1b28d4886 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 23 04:41:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:52.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:52.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1441: 321 pgs: 321 active+clean; 188 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 6.0 MiB/s wr, 248 op/s
Jan 23 04:41:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:41:54 np0005593232 nova_compute[250269]: 2026-01-23 09:41:54.310 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:41:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:54.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:54 np0005593232 nova_compute[250269]: 2026-01-23 09:41:54.528 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:41:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:54.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:54 np0005593232 nova_compute[250269]: 2026-01-23 09:41:54.759 250273 DEBUG oslo_concurrency.lockutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Acquiring lock "a1208de2-efde-4618-8388-2acfab37582a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:41:54 np0005593232 nova_compute[250269]: 2026-01-23 09:41:54.760 250273 DEBUG oslo_concurrency.lockutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lock "a1208de2-efde-4618-8388-2acfab37582a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:41:54 np0005593232 nova_compute[250269]: 2026-01-23 09:41:54.863 250273 DEBUG nova.compute.manager [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 23 04:41:55 np0005593232 nova_compute[250269]: 2026-01-23 09:41:55.002 250273 DEBUG oslo_concurrency.lockutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:41:55 np0005593232 nova_compute[250269]: 2026-01-23 09:41:55.002 250273 DEBUG oslo_concurrency.lockutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:41:55 np0005593232 nova_compute[250269]: 2026-01-23 09:41:55.009 250273 DEBUG nova.virt.hardware [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 23 04:41:55 np0005593232 nova_compute[250269]: 2026-01-23 09:41:55.010 250273 INFO nova.compute.claims [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Claim successful on node compute-0.ctlplane.example.com
Jan 23 04:41:55 np0005593232 nova_compute[250269]: 2026-01-23 09:41:55.230 250273 DEBUG oslo_concurrency.processutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:41:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1442: 321 pgs: 321 active+clean; 188 MiB data, 538 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 169 op/s
Jan 23 04:41:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:41:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3773042249' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:41:55 np0005593232 nova_compute[250269]: 2026-01-23 09:41:55.706 250273 DEBUG oslo_concurrency.processutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:41:55 np0005593232 nova_compute[250269]: 2026-01-23 09:41:55.713 250273 DEBUG nova.compute.provider_tree [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 04:41:55 np0005593232 nova_compute[250269]: 2026-01-23 09:41:55.734 250273 DEBUG nova.scheduler.client.report [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 04:41:55 np0005593232 nova_compute[250269]: 2026-01-23 09:41:55.775 250273 DEBUG oslo_concurrency.lockutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:41:55 np0005593232 nova_compute[250269]: 2026-01-23 09:41:55.776 250273 DEBUG nova.compute.manager [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 04:41:55 np0005593232 nova_compute[250269]: 2026-01-23 09:41:55.868 250273 DEBUG nova.compute.manager [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 04:41:55 np0005593232 nova_compute[250269]: 2026-01-23 09:41:55.869 250273 DEBUG nova.network.neutron [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 04:41:55 np0005593232 nova_compute[250269]: 2026-01-23 09:41:55.901 250273 INFO nova.virt.libvirt.driver [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 04:41:55 np0005593232 nova_compute[250269]: 2026-01-23 09:41:55.929 250273 DEBUG nova.compute.manager [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 04:41:56 np0005593232 nova_compute[250269]: 2026-01-23 09:41:56.167 250273 DEBUG nova.compute.manager [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 04:41:56 np0005593232 nova_compute[250269]: 2026-01-23 09:41:56.169 250273 DEBUG nova.virt.libvirt.driver [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 04:41:56 np0005593232 nova_compute[250269]: 2026-01-23 09:41:56.170 250273 INFO nova.virt.libvirt.driver [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Creating image(s)
Jan 23 04:41:56 np0005593232 nova_compute[250269]: 2026-01-23 09:41:56.212 250273 DEBUG nova.storage.rbd_utils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] rbd image a1208de2-efde-4618-8388-2acfab37582a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 04:41:56 np0005593232 nova_compute[250269]: 2026-01-23 09:41:56.255 250273 DEBUG nova.storage.rbd_utils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] rbd image a1208de2-efde-4618-8388-2acfab37582a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 04:41:56 np0005593232 nova_compute[250269]: 2026-01-23 09:41:56.297 250273 DEBUG nova.storage.rbd_utils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] rbd image a1208de2-efde-4618-8388-2acfab37582a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 04:41:56 np0005593232 nova_compute[250269]: 2026-01-23 09:41:56.302 250273 DEBUG oslo_concurrency.processutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:41:56 np0005593232 nova_compute[250269]: 2026-01-23 09:41:56.342 250273 DEBUG nova.policy [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '187ce0cedde344a3b09ca4560410580e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4fd9229340ed4bf3a3a72baa6985a3e3', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 23 04:41:56 np0005593232 nova_compute[250269]: 2026-01-23 09:41:56.398 250273 DEBUG oslo_concurrency.processutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:41:56 np0005593232 nova_compute[250269]: 2026-01-23 09:41:56.399 250273 DEBUG oslo_concurrency.lockutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:41:56 np0005593232 nova_compute[250269]: 2026-01-23 09:41:56.399 250273 DEBUG oslo_concurrency.lockutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:41:56 np0005593232 nova_compute[250269]: 2026-01-23 09:41:56.399 250273 DEBUG oslo_concurrency.lockutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:41:56 np0005593232 nova_compute[250269]: 2026-01-23 09:41:56.427 250273 DEBUG nova.storage.rbd_utils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] rbd image a1208de2-efde-4618-8388-2acfab37582a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 04:41:56 np0005593232 nova_compute[250269]: 2026-01-23 09:41:56.432 250273 DEBUG oslo_concurrency.processutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 a1208de2-efde-4618-8388-2acfab37582a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:41:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:56.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:56.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1443: 321 pgs: 321 active+clean; 196 MiB data, 536 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.5 MiB/s wr, 175 op/s
Jan 23 04:41:57 np0005593232 nova_compute[250269]: 2026-01-23 09:41:57.260 250273 DEBUG oslo_concurrency.processutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 a1208de2-efde-4618-8388-2acfab37582a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.828s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:41:57 np0005593232 nova_compute[250269]: 2026-01-23 09:41:57.329 250273 DEBUG nova.storage.rbd_utils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] resizing rbd image a1208de2-efde-4618-8388-2acfab37582a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 23 04:41:57 np0005593232 kernel: tap1b310789-fe (unregistering): left promiscuous mode
Jan 23 04:41:57 np0005593232 NetworkManager[49057]: <info>  [1769161317.4473] device (tap1b310789-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:41:57 np0005593232 ovn_controller[151001]: 2026-01-23T09:41:57Z|00125|binding|INFO|Releasing lport 1b310789-fedc-45bf-b49e-839dcea36fd7 from this chassis (sb_readonly=0)
Jan 23 04:41:57 np0005593232 ovn_controller[151001]: 2026-01-23T09:41:57Z|00126|binding|INFO|Setting lport 1b310789-fedc-45bf-b49e-839dcea36fd7 down in Southbound
Jan 23 04:41:57 np0005593232 ovn_controller[151001]: 2026-01-23T09:41:57Z|00127|binding|INFO|Removing iface tap1b310789-fe ovn-installed in OVS
Jan 23 04:41:57 np0005593232 nova_compute[250269]: 2026-01-23 09:41:57.510 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:41:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:57.516 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8b:cd:5a 10.100.0.10'], port_security=['fa:16:3e:8b:cd:5a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '029c50ac-8221-4aa1-ab0f-92693d5d4d44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c2696fd4-5fd7-4934-88ac-40162fad555d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '05bc71a77710455e8b34ead7fec81a31', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ab8b868e-d8b1-4e1d-87d5-538f88b95e73', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3459fea4-e2ba-482e-8d51-91ef5b74d71a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=1b310789-fedc-45bf-b49e-839dcea36fd7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 04:41:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:57.517 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 1b310789-fedc-45bf-b49e-839dcea36fd7 in datapath c2696fd4-5fd7-4934-88ac-40162fad555d unbound from our chassis
Jan 23 04:41:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:57.519 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c2696fd4-5fd7-4934-88ac-40162fad555d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 23 04:41:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:57.520 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6aad1f67-5ffd-44f4-a3b5-b7ab96648b84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:41:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:57.521 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c2696fd4-5fd7-4934-88ac-40162fad555d namespace which is not needed anymore
Jan 23 04:41:57 np0005593232 nova_compute[250269]: 2026-01-23 09:41:57.533 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:41:57 np0005593232 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Jan 23 04:41:57 np0005593232 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d0000002a.scope: Consumed 14.955s CPU time.
Jan 23 04:41:57 np0005593232 systemd-machined[215836]: Machine qemu-18-instance-0000002a terminated.
Jan 23 04:41:57 np0005593232 neutron-haproxy-ovnmeta-c2696fd4-5fd7-4934-88ac-40162fad555d[280731]: [NOTICE]   (280735) : haproxy version is 2.8.14-c23fe91
Jan 23 04:41:57 np0005593232 neutron-haproxy-ovnmeta-c2696fd4-5fd7-4934-88ac-40162fad555d[280731]: [NOTICE]   (280735) : path to executable is /usr/sbin/haproxy
Jan 23 04:41:57 np0005593232 neutron-haproxy-ovnmeta-c2696fd4-5fd7-4934-88ac-40162fad555d[280731]: [ALERT]    (280735) : Current worker (280738) exited with code 143 (Terminated)
Jan 23 04:41:57 np0005593232 neutron-haproxy-ovnmeta-c2696fd4-5fd7-4934-88ac-40162fad555d[280731]: [WARNING]  (280735) : All workers exited. Exiting... (0)
Jan 23 04:41:57 np0005593232 systemd[1]: libpod-05196a139e32720e66bce7894196910678ec26479c0cbcc4eca6ee8ffb3cbf99.scope: Deactivated successfully.
Jan 23 04:41:57 np0005593232 podman[281643]: 2026-01-23 09:41:57.69905308 +0000 UTC m=+0.096302575 container died 05196a139e32720e66bce7894196910678ec26479c0cbcc4eca6ee8ffb3cbf99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c2696fd4-5fd7-4934-88ac-40162fad555d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 04:41:57 np0005593232 nova_compute[250269]: 2026-01-23 09:41:57.763 250273 INFO nova.virt.libvirt.driver [None req-a3c3e76e-b150-469b-9bac-4cb1b28d4886 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Instance shutdown successfully after 27 seconds.
Jan 23 04:41:57 np0005593232 nova_compute[250269]: 2026-01-23 09:41:57.770 250273 INFO nova.virt.libvirt.driver [-] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Instance destroyed successfully.
Jan 23 04:41:57 np0005593232 nova_compute[250269]: 2026-01-23 09:41:57.770 250273 DEBUG nova.objects.instance [None req-a3c3e76e-b150-469b-9bac-4cb1b28d4886 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Lazy-loading 'numa_topology' on Instance uuid 029c50ac-8221-4aa1-ab0f-92693d5d4d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 04:41:57 np0005593232 nova_compute[250269]: 2026-01-23 09:41:57.796 250273 DEBUG nova.compute.manager [None req-a3c3e76e-b150-469b-9bac-4cb1b28d4886 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 04:41:57 np0005593232 nova_compute[250269]: 2026-01-23 09:41:57.854 250273 DEBUG oslo_concurrency.lockutils [None req-a3c3e76e-b150-469b-9bac-4cb1b28d4886 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 27.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:41:57 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-05196a139e32720e66bce7894196910678ec26479c0cbcc4eca6ee8ffb3cbf99-userdata-shm.mount: Deactivated successfully.
Jan 23 04:41:57 np0005593232 systemd[1]: var-lib-containers-storage-overlay-24c6206de337684961239ffa0d72807f58d781367de6969766037688e31b073d-merged.mount: Deactivated successfully.
Jan 23 04:41:57 np0005593232 podman[281643]: 2026-01-23 09:41:57.941062202 +0000 UTC m=+0.338311697 container cleanup 05196a139e32720e66bce7894196910678ec26479c0cbcc4eca6ee8ffb3cbf99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c2696fd4-5fd7-4934-88ac-40162fad555d, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 23 04:41:57 np0005593232 systemd[1]: libpod-conmon-05196a139e32720e66bce7894196910678ec26479c0cbcc4eca6ee8ffb3cbf99.scope: Deactivated successfully.
Jan 23 04:41:58 np0005593232 podman[281685]: 2026-01-23 09:41:58.09407693 +0000 UTC m=+0.128708528 container remove 05196a139e32720e66bce7894196910678ec26479c0cbcc4eca6ee8ffb3cbf99 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c2696fd4-5fd7-4934-88ac-40162fad555d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 23 04:41:58 np0005593232 nova_compute[250269]: 2026-01-23 09:41:58.095 250273 DEBUG nova.network.neutron [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Successfully created port: 189994c7-5c8d-40fd-888d-a104e39b5aca _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 04:41:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:58.104 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8d01609b-161f-4118-a25c-01e091e840a7]: (4, ('Fri Jan 23 09:41:57 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c2696fd4-5fd7-4934-88ac-40162fad555d (05196a139e32720e66bce7894196910678ec26479c0cbcc4eca6ee8ffb3cbf99)\n05196a139e32720e66bce7894196910678ec26479c0cbcc4eca6ee8ffb3cbf99\nFri Jan 23 09:41:57 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c2696fd4-5fd7-4934-88ac-40162fad555d (05196a139e32720e66bce7894196910678ec26479c0cbcc4eca6ee8ffb3cbf99)\n05196a139e32720e66bce7894196910678ec26479c0cbcc4eca6ee8ffb3cbf99\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:41:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:58.106 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c5f331da-7584-4e5b-b95f-422188825a04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:41:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:58.107 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2696fd4-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:41:58 np0005593232 nova_compute[250269]: 2026-01-23 09:41:58.110 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:58 np0005593232 kernel: tapc2696fd4-50: left promiscuous mode
Jan 23 04:41:58 np0005593232 nova_compute[250269]: 2026-01-23 09:41:58.114 250273 DEBUG nova.compute.manager [req-5ed8c867-658a-4b6b-8c78-ad10486ecbf2 req-09901397-264d-4af3-8ab8-2cfda24118a0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Received event network-vif-unplugged-1b310789-fedc-45bf-b49e-839dcea36fd7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:41:58 np0005593232 nova_compute[250269]: 2026-01-23 09:41:58.115 250273 DEBUG oslo_concurrency.lockutils [req-5ed8c867-658a-4b6b-8c78-ad10486ecbf2 req-09901397-264d-4af3-8ab8-2cfda24118a0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:41:58 np0005593232 nova_compute[250269]: 2026-01-23 09:41:58.115 250273 DEBUG oslo_concurrency.lockutils [req-5ed8c867-658a-4b6b-8c78-ad10486ecbf2 req-09901397-264d-4af3-8ab8-2cfda24118a0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:41:58 np0005593232 nova_compute[250269]: 2026-01-23 09:41:58.115 250273 DEBUG oslo_concurrency.lockutils [req-5ed8c867-658a-4b6b-8c78-ad10486ecbf2 req-09901397-264d-4af3-8ab8-2cfda24118a0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:41:58 np0005593232 nova_compute[250269]: 2026-01-23 09:41:58.116 250273 DEBUG nova.compute.manager [req-5ed8c867-658a-4b6b-8c78-ad10486ecbf2 req-09901397-264d-4af3-8ab8-2cfda24118a0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] No waiting events found dispatching network-vif-unplugged-1b310789-fedc-45bf-b49e-839dcea36fd7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:41:58 np0005593232 nova_compute[250269]: 2026-01-23 09:41:58.116 250273 WARNING nova.compute.manager [req-5ed8c867-658a-4b6b-8c78-ad10486ecbf2 req-09901397-264d-4af3-8ab8-2cfda24118a0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Received unexpected event network-vif-unplugged-1b310789-fedc-45bf-b49e-839dcea36fd7 for instance with vm_state stopped and task_state None.#033[00m
Jan 23 04:41:58 np0005593232 nova_compute[250269]: 2026-01-23 09:41:58.139 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:58 np0005593232 nova_compute[250269]: 2026-01-23 09:41:58.140 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:58.142 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f76242-baef-40ac-801f-bdc3c7993e23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:41:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:58.156 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[536574f0-eaa4-45ef-8ad8-0642119a3c25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:41:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:58.158 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5e81877d-116e-40ec-93f9-a669c934faaf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:41:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:58.173 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ffbd6676-3e27-44af-9f7e-3d7a88e86f85]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519581, 'reachable_time': 30103, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281703, 'error': None, 'target': 'ovnmeta-c2696fd4-5fd7-4934-88ac-40162fad555d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:41:58 np0005593232 systemd[1]: run-netns-ovnmeta\x2dc2696fd4\x2d5fd7\x2d4934\x2d88ac\x2d40162fad555d.mount: Deactivated successfully.
Jan 23 04:41:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:58.178 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c2696fd4-5fd7-4934-88ac-40162fad555d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 04:41:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:41:58.178 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[f509ebbb-3be1-43e2-b95a-6bd0a1674c98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:41:58 np0005593232 nova_compute[250269]: 2026-01-23 09:41:58.216 250273 DEBUG nova.objects.instance [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lazy-loading 'migration_context' on Instance uuid a1208de2-efde-4618-8388-2acfab37582a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:41:58 np0005593232 nova_compute[250269]: 2026-01-23 09:41:58.232 250273 DEBUG nova.virt.libvirt.driver [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:41:58 np0005593232 nova_compute[250269]: 2026-01-23 09:41:58.233 250273 DEBUG nova.virt.libvirt.driver [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Ensure instance console log exists: /var/lib/nova/instances/a1208de2-efde-4618-8388-2acfab37582a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:41:58 np0005593232 nova_compute[250269]: 2026-01-23 09:41:58.233 250273 DEBUG oslo_concurrency.lockutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:41:58 np0005593232 nova_compute[250269]: 2026-01-23 09:41:58.234 250273 DEBUG oslo_concurrency.lockutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:41:58 np0005593232 nova_compute[250269]: 2026-01-23 09:41:58.234 250273 DEBUG oslo_concurrency.lockutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:41:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:41:58.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:41:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:41:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:41:58.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:41:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:41:59 np0005593232 nova_compute[250269]: 2026-01-23 09:41:59.212 250273 DEBUG nova.network.neutron [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Successfully updated port: 189994c7-5c8d-40fd-888d-a104e39b5aca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 04:41:59 np0005593232 nova_compute[250269]: 2026-01-23 09:41:59.234 250273 DEBUG oslo_concurrency.lockutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Acquiring lock "refresh_cache-a1208de2-efde-4618-8388-2acfab37582a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:41:59 np0005593232 nova_compute[250269]: 2026-01-23 09:41:59.235 250273 DEBUG oslo_concurrency.lockutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Acquired lock "refresh_cache-a1208de2-efde-4618-8388-2acfab37582a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:41:59 np0005593232 nova_compute[250269]: 2026-01-23 09:41:59.235 250273 DEBUG nova.network.neutron [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:41:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1444: 321 pgs: 321 active+clean; 214 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.3 MiB/s wr, 183 op/s
Jan 23 04:41:59 np0005593232 nova_compute[250269]: 2026-01-23 09:41:59.311 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:41:59 np0005593232 nova_compute[250269]: 2026-01-23 09:41:59.507 250273 DEBUG nova.network.neutron [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:41:59 np0005593232 nova_compute[250269]: 2026-01-23 09:41:59.531 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:00 np0005593232 nova_compute[250269]: 2026-01-23 09:42:00.281 250273 DEBUG nova.compute.manager [req-3d17407c-9925-4b2c-9962-ca6e51b33bf1 req-92706a03-ca4a-4430-8d4b-f95505b89615 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Received event network-vif-plugged-1b310789-fedc-45bf-b49e-839dcea36fd7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:42:00 np0005593232 nova_compute[250269]: 2026-01-23 09:42:00.282 250273 DEBUG oslo_concurrency.lockutils [req-3d17407c-9925-4b2c-9962-ca6e51b33bf1 req-92706a03-ca4a-4430-8d4b-f95505b89615 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:42:00 np0005593232 nova_compute[250269]: 2026-01-23 09:42:00.282 250273 DEBUG oslo_concurrency.lockutils [req-3d17407c-9925-4b2c-9962-ca6e51b33bf1 req-92706a03-ca4a-4430-8d4b-f95505b89615 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:42:00 np0005593232 nova_compute[250269]: 2026-01-23 09:42:00.282 250273 DEBUG oslo_concurrency.lockutils [req-3d17407c-9925-4b2c-9962-ca6e51b33bf1 req-92706a03-ca4a-4430-8d4b-f95505b89615 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:42:00 np0005593232 nova_compute[250269]: 2026-01-23 09:42:00.282 250273 DEBUG nova.compute.manager [req-3d17407c-9925-4b2c-9962-ca6e51b33bf1 req-92706a03-ca4a-4430-8d4b-f95505b89615 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] No waiting events found dispatching network-vif-plugged-1b310789-fedc-45bf-b49e-839dcea36fd7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:42:00 np0005593232 nova_compute[250269]: 2026-01-23 09:42:00.283 250273 WARNING nova.compute.manager [req-3d17407c-9925-4b2c-9962-ca6e51b33bf1 req-92706a03-ca4a-4430-8d4b-f95505b89615 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Received unexpected event network-vif-plugged-1b310789-fedc-45bf-b49e-839dcea36fd7 for instance with vm_state stopped and task_state None.#033[00m
Jan 23 04:42:00 np0005593232 nova_compute[250269]: 2026-01-23 09:42:00.283 250273 DEBUG nova.compute.manager [req-3d17407c-9925-4b2c-9962-ca6e51b33bf1 req-92706a03-ca4a-4430-8d4b-f95505b89615 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Received event network-changed-189994c7-5c8d-40fd-888d-a104e39b5aca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:42:00 np0005593232 nova_compute[250269]: 2026-01-23 09:42:00.283 250273 DEBUG nova.compute.manager [req-3d17407c-9925-4b2c-9962-ca6e51b33bf1 req-92706a03-ca4a-4430-8d4b-f95505b89615 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Refreshing instance network info cache due to event network-changed-189994c7-5c8d-40fd-888d-a104e39b5aca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:42:00 np0005593232 nova_compute[250269]: 2026-01-23 09:42:00.283 250273 DEBUG oslo_concurrency.lockutils [req-3d17407c-9925-4b2c-9962-ca6e51b33bf1 req-92706a03-ca4a-4430-8d4b-f95505b89615 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-a1208de2-efde-4618-8388-2acfab37582a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:42:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:00.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:42:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:00.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:42:00 np0005593232 nova_compute[250269]: 2026-01-23 09:42:00.794 250273 DEBUG nova.network.neutron [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Updating instance_info_cache with network_info: [{"id": "189994c7-5c8d-40fd-888d-a104e39b5aca", "address": "fa:16:3e:6a:97:e5", "network": {"id": "f19933f5-cfe3-4319-a83b-b72dde692ab6", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1169169895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4fd9229340ed4bf3a3a72baa6985a3e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap189994c7-5c", "ovs_interfaceid": "189994c7-5c8d-40fd-888d-a104e39b5aca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:42:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1445: 321 pgs: 321 active+clean; 214 MiB data, 545 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 155 op/s
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.164 250273 DEBUG oslo_concurrency.lockutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Releasing lock "refresh_cache-a1208de2-efde-4618-8388-2acfab37582a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.164 250273 DEBUG nova.compute.manager [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Instance network_info: |[{"id": "189994c7-5c8d-40fd-888d-a104e39b5aca", "address": "fa:16:3e:6a:97:e5", "network": {"id": "f19933f5-cfe3-4319-a83b-b72dde692ab6", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1169169895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4fd9229340ed4bf3a3a72baa6985a3e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap189994c7-5c", "ovs_interfaceid": "189994c7-5c8d-40fd-888d-a104e39b5aca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.169 250273 DEBUG oslo_concurrency.lockutils [req-3d17407c-9925-4b2c-9962-ca6e51b33bf1 req-92706a03-ca4a-4430-8d4b-f95505b89615 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-a1208de2-efde-4618-8388-2acfab37582a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.169 250273 DEBUG nova.network.neutron [req-3d17407c-9925-4b2c-9962-ca6e51b33bf1 req-92706a03-ca4a-4430-8d4b-f95505b89615 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Refreshing network info cache for port 189994c7-5c8d-40fd-888d-a104e39b5aca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.174 250273 DEBUG nova.virt.libvirt.driver [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Start _get_guest_xml network_info=[{"id": "189994c7-5c8d-40fd-888d-a104e39b5aca", "address": "fa:16:3e:6a:97:e5", "network": {"id": "f19933f5-cfe3-4319-a83b-b72dde692ab6", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1169169895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4fd9229340ed4bf3a3a72baa6985a3e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap189994c7-5c", "ovs_interfaceid": "189994c7-5c8d-40fd-888d-a104e39b5aca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.182 250273 WARNING nova.virt.libvirt.driver [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.191 250273 DEBUG nova.virt.libvirt.host [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.193 250273 DEBUG nova.virt.libvirt.host [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.197 250273 DEBUG nova.virt.libvirt.host [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.198 250273 DEBUG nova.virt.libvirt.host [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.200 250273 DEBUG nova.virt.libvirt.driver [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.200 250273 DEBUG nova.virt.hardware [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.201 250273 DEBUG nova.virt.hardware [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.201 250273 DEBUG nova.virt.hardware [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.201 250273 DEBUG nova.virt.hardware [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.202 250273 DEBUG nova.virt.hardware [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.202 250273 DEBUG nova.virt.hardware [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.202 250273 DEBUG nova.virt.hardware [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.203 250273 DEBUG nova.virt.hardware [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.203 250273 DEBUG nova.virt.hardware [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.203 250273 DEBUG nova.virt.hardware [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.204 250273 DEBUG nova.virt.hardware [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.207 250273 DEBUG oslo_concurrency.processutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.312 250273 DEBUG nova.compute.manager [None req-389f4993-a4bc-434e-8ce6-3b6c6098b5ea 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.372 250273 INFO nova.compute.manager [None req-389f4993-a4bc-434e-8ce6-3b6c6098b5ea 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] instance snapshotting#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.372 250273 WARNING nova.compute.manager [None req-389f4993-a4bc-434e-8ce6-3b6c6098b5ea 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] trying to snapshot a non-running instance: (state: 4 expected: 1)#033[00m
Jan 23 04:42:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:42:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:02.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:42:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:02.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.671 250273 INFO nova.virt.libvirt.driver [None req-389f4993-a4bc-434e-8ce6-3b6c6098b5ea 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Beginning cold snapshot process#033[00m
Jan 23 04:42:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:42:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1464670203' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.709 250273 DEBUG oslo_concurrency.processutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.731 250273 DEBUG nova.storage.rbd_utils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] rbd image a1208de2-efde-4618-8388-2acfab37582a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:42:02 np0005593232 nova_compute[250269]: 2026-01-23 09:42:02.736 250273 DEBUG oslo_concurrency.processutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:42:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:42:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3426636385' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.168 250273 DEBUG nova.virt.libvirt.imagebackend [None req-389f4993-a4bc-434e-8ce6-3b6c6098b5ea 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] No parent info for 84c0ef19-7f67-4bd3-95d8-507c3e0942ed; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.186 250273 DEBUG oslo_concurrency.processutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.188 250273 DEBUG nova.virt.libvirt.vif [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:41:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-1527273986',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-1527273986',id=45,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOW/D7eFeqMjBvekfY9VqlM3EY9Lv7j0wpym0wwbZXZxi5xiYHs3Y+SGaRgTVfBABcO7R/jAYgVwXr4x4dmhbR/VewPXJyWaKlJux19vulauSxlm5JZb+T430JhpaEya2w==',key_name='tempest-keypair-1370438234',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4fd9229340ed4bf3a3a72baa6985a3e3',ramdisk_id='',reservation_id='r-9sg2yp8s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-1520463047',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-1520463047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:41:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='187ce0cedde344a3b09ca4560410580e',uuid=a1208de2-efde-4618-8388-2acfab37582a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "189994c7-5c8d-40fd-888d-a104e39b5aca", "address": "fa:16:3e:6a:97:e5", "network": {"id": "f19933f5-cfe3-4319-a83b-b72dde692ab6", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1169169895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4fd9229340ed4bf3a3a72baa6985a3e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap189994c7-5c", "ovs_interfaceid": "189994c7-5c8d-40fd-888d-a104e39b5aca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.188 250273 DEBUG nova.network.os_vif_util [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Converting VIF {"id": "189994c7-5c8d-40fd-888d-a104e39b5aca", "address": "fa:16:3e:6a:97:e5", "network": {"id": "f19933f5-cfe3-4319-a83b-b72dde692ab6", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1169169895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4fd9229340ed4bf3a3a72baa6985a3e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap189994c7-5c", "ovs_interfaceid": "189994c7-5c8d-40fd-888d-a104e39b5aca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.189 250273 DEBUG nova.network.os_vif_util [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:97:e5,bridge_name='br-int',has_traffic_filtering=True,id=189994c7-5c8d-40fd-888d-a104e39b5aca,network=Network(f19933f5-cfe3-4319-a83b-b72dde692ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap189994c7-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.191 250273 DEBUG nova.objects.instance [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lazy-loading 'pci_devices' on Instance uuid a1208de2-efde-4618-8388-2acfab37582a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.222 250273 DEBUG nova.virt.libvirt.driver [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:42:03 np0005593232 nova_compute[250269]:  <uuid>a1208de2-efde-4618-8388-2acfab37582a</uuid>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:  <name>instance-0000002d</name>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <nova:name>tempest-UpdateMultiattachVolumeNegativeTest-server-1527273986</nova:name>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:42:02</nova:creationTime>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:42:03 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:        <nova:user uuid="187ce0cedde344a3b09ca4560410580e">tempest-UpdateMultiattachVolumeNegativeTest-1520463047-project-member</nova:user>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:        <nova:project uuid="4fd9229340ed4bf3a3a72baa6985a3e3">tempest-UpdateMultiattachVolumeNegativeTest-1520463047</nova:project>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:        <nova:port uuid="189994c7-5c8d-40fd-888d-a104e39b5aca">
Jan 23 04:42:03 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <entry name="serial">a1208de2-efde-4618-8388-2acfab37582a</entry>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <entry name="uuid">a1208de2-efde-4618-8388-2acfab37582a</entry>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/a1208de2-efde-4618-8388-2acfab37582a_disk">
Jan 23 04:42:03 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:42:03 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/a1208de2-efde-4618-8388-2acfab37582a_disk.config">
Jan 23 04:42:03 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:42:03 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:6a:97:e5"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <target dev="tap189994c7-5c"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/a1208de2-efde-4618-8388-2acfab37582a/console.log" append="off"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:42:03 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:42:03 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:42:03 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:42:03 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.223 250273 DEBUG nova.compute.manager [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Preparing to wait for external event network-vif-plugged-189994c7-5c8d-40fd-888d-a104e39b5aca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.224 250273 DEBUG oslo_concurrency.lockutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Acquiring lock "a1208de2-efde-4618-8388-2acfab37582a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.224 250273 DEBUG oslo_concurrency.lockutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lock "a1208de2-efde-4618-8388-2acfab37582a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.224 250273 DEBUG oslo_concurrency.lockutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lock "a1208de2-efde-4618-8388-2acfab37582a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.225 250273 DEBUG nova.virt.libvirt.vif [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:41:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-1527273986',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-1527273986',id=45,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOW/D7eFeqMjBvekfY9VqlM3EY9Lv7j0wpym0wwbZXZxi5xiYHs3Y+SGaRgTVfBABcO7R/jAYgVwXr4x4dmhbR/VewPXJyWaKlJux19vulauSxlm5JZb+T430JhpaEya2w==',key_name='tempest-keypair-1370438234',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4fd9229340ed4bf3a3a72baa6985a3e3',ramdisk_id='',reservation_id='r-9sg2yp8s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-1520463047',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-1520463047-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:41:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='187ce0cedde344a3b09ca4560410580e',uuid=a1208de2-efde-4618-8388-2acfab37582a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "189994c7-5c8d-40fd-888d-a104e39b5aca", "address": "fa:16:3e:6a:97:e5", "network": {"id": "f19933f5-cfe3-4319-a83b-b72dde692ab6", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1169169895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4fd9229340ed4bf3a3a72baa6985a3e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap189994c7-5c", "ovs_interfaceid": "189994c7-5c8d-40fd-888d-a104e39b5aca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.225 250273 DEBUG nova.network.os_vif_util [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Converting VIF {"id": "189994c7-5c8d-40fd-888d-a104e39b5aca", "address": "fa:16:3e:6a:97:e5", "network": {"id": "f19933f5-cfe3-4319-a83b-b72dde692ab6", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1169169895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4fd9229340ed4bf3a3a72baa6985a3e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap189994c7-5c", "ovs_interfaceid": "189994c7-5c8d-40fd-888d-a104e39b5aca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.226 250273 DEBUG nova.network.os_vif_util [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6a:97:e5,bridge_name='br-int',has_traffic_filtering=True,id=189994c7-5c8d-40fd-888d-a104e39b5aca,network=Network(f19933f5-cfe3-4319-a83b-b72dde692ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap189994c7-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.226 250273 DEBUG os_vif [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:97:e5,bridge_name='br-int',has_traffic_filtering=True,id=189994c7-5c8d-40fd-888d-a104e39b5aca,network=Network(f19933f5-cfe3-4319-a83b-b72dde692ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap189994c7-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.227 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.227 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.228 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.230 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.230 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap189994c7-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.231 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap189994c7-5c, col_values=(('external_ids', {'iface-id': '189994c7-5c8d-40fd-888d-a104e39b5aca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6a:97:e5', 'vm-uuid': 'a1208de2-efde-4618-8388-2acfab37582a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.233 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:03 np0005593232 NetworkManager[49057]: <info>  [1769161323.2334] manager: (tap189994c7-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.235 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.240 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.241 250273 INFO os_vif [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6a:97:e5,bridge_name='br-int',has_traffic_filtering=True,id=189994c7-5c8d-40fd-888d-a104e39b5aca,network=Network(f19933f5-cfe3-4319-a83b-b72dde692ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap189994c7-5c')#033[00m
Jan 23 04:42:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1446: 321 pgs: 321 active+clean; 255 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.2 MiB/s wr, 195 op/s
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.315 250273 DEBUG nova.virt.libvirt.driver [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.317 250273 DEBUG nova.virt.libvirt.driver [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.318 250273 DEBUG nova.virt.libvirt.driver [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] No VIF found with MAC fa:16:3e:6a:97:e5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.319 250273 INFO nova.virt.libvirt.driver [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Using config drive#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.355 250273 DEBUG nova.storage.rbd_utils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] rbd image a1208de2-efde-4618-8388-2acfab37582a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.625 250273 DEBUG nova.storage.rbd_utils [None req-389f4993-a4bc-434e-8ce6-3b6c6098b5ea 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] creating snapshot(15233b796bef4c8799780943030b5c88) on rbd image(029c50ac-8221-4aa1-ab0f-92693d5d4d44_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 04:42:03 np0005593232 nova_compute[250269]: 2026-01-23 09:42:03.994 250273 INFO nova.virt.libvirt.driver [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Creating config drive at /var/lib/nova/instances/a1208de2-efde-4618-8388-2acfab37582a/disk.config#033[00m
Jan 23 04:42:04 np0005593232 nova_compute[250269]: 2026-01-23 09:42:04.007 250273 DEBUG oslo_concurrency.processutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a1208de2-efde-4618-8388-2acfab37582a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4ggfwoov execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:42:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:42:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Jan 23 04:42:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Jan 23 04:42:04 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Jan 23 04:42:04 np0005593232 nova_compute[250269]: 2026-01-23 09:42:04.150 250273 DEBUG oslo_concurrency.processutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a1208de2-efde-4618-8388-2acfab37582a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4ggfwoov" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:42:04 np0005593232 nova_compute[250269]: 2026-01-23 09:42:04.194 250273 DEBUG nova.storage.rbd_utils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] rbd image a1208de2-efde-4618-8388-2acfab37582a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:42:04 np0005593232 nova_compute[250269]: 2026-01-23 09:42:04.199 250273 DEBUG oslo_concurrency.processutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a1208de2-efde-4618-8388-2acfab37582a/disk.config a1208de2-efde-4618-8388-2acfab37582a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:42:04 np0005593232 nova_compute[250269]: 2026-01-23 09:42:04.272 250273 DEBUG nova.storage.rbd_utils [None req-389f4993-a4bc-434e-8ce6-3b6c6098b5ea 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] cloning vms/029c50ac-8221-4aa1-ab0f-92693d5d4d44_disk@15233b796bef4c8799780943030b5c88 to images/8acf0d1a-47cb-44fb-a780-075c7b86547c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 23 04:42:04 np0005593232 nova_compute[250269]: 2026-01-23 09:42:04.322 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:04 np0005593232 nova_compute[250269]: 2026-01-23 09:42:04.421 250273 DEBUG oslo_concurrency.processutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a1208de2-efde-4618-8388-2acfab37582a/disk.config a1208de2-efde-4618-8388-2acfab37582a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.222s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:42:04 np0005593232 nova_compute[250269]: 2026-01-23 09:42:04.422 250273 INFO nova.virt.libvirt.driver [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Deleting local config drive /var/lib/nova/instances/a1208de2-efde-4618-8388-2acfab37582a/disk.config because it was imported into RBD.#033[00m
Jan 23 04:42:04 np0005593232 nova_compute[250269]: 2026-01-23 09:42:04.443 250273 DEBUG nova.storage.rbd_utils [None req-389f4993-a4bc-434e-8ce6-3b6c6098b5ea 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] flattening images/8acf0d1a-47cb-44fb-a780-075c7b86547c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 23 04:42:04 np0005593232 kernel: tap189994c7-5c: entered promiscuous mode
Jan 23 04:42:04 np0005593232 NetworkManager[49057]: <info>  [1769161324.4714] manager: (tap189994c7-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Jan 23 04:42:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:04.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:04 np0005593232 ovn_controller[151001]: 2026-01-23T09:42:04Z|00128|binding|INFO|Claiming lport 189994c7-5c8d-40fd-888d-a104e39b5aca for this chassis.
Jan 23 04:42:04 np0005593232 ovn_controller[151001]: 2026-01-23T09:42:04Z|00129|binding|INFO|189994c7-5c8d-40fd-888d-a104e39b5aca: Claiming fa:16:3e:6a:97:e5 10.100.0.8
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.496 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:97:e5 10.100.0.8'], port_security=['fa:16:3e:6a:97:e5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a1208de2-efde-4618-8388-2acfab37582a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f19933f5-cfe3-4319-a83b-b72dde692ab6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4fd9229340ed4bf3a3a72baa6985a3e3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '232ea62b-b441-41b5-8457-7d5744ac9ac2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91ae19d5-b9ed-444d-b1cd-8fb0c58abf8d, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=189994c7-5c8d-40fd-888d-a104e39b5aca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.497 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 189994c7-5c8d-40fd-888d-a104e39b5aca in datapath f19933f5-cfe3-4319-a83b-b72dde692ab6 bound to our chassis#033[00m
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.499 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f19933f5-cfe3-4319-a83b-b72dde692ab6#033[00m
Jan 23 04:42:04 np0005593232 systemd-machined[215836]: New machine qemu-19-instance-0000002d.
Jan 23 04:42:04 np0005593232 systemd-udevd[281969]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:42:04 np0005593232 nova_compute[250269]: 2026-01-23 09:42:04.504 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.509 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[63f246e7-c0af-4749-81e8-5a0fbe47b267]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.510 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf19933f5-c1 in ovnmeta-f19933f5-cfe3-4319-a83b-b72dde692ab6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.512 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf19933f5-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.512 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a8968954-ae95-4dea-b91f-93919d6a4466]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.513 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d87cb466-da4e-4883-abe3-6dbe9cadbeb7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:04 np0005593232 NetworkManager[49057]: <info>  [1769161324.5152] device (tap189994c7-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:42:04 np0005593232 NetworkManager[49057]: <info>  [1769161324.5157] device (tap189994c7-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.529 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[52de4b25-6209-42d0-9a19-0d95b7691478]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:04 np0005593232 systemd[1]: Started Virtual Machine qemu-19-instance-0000002d.
Jan 23 04:42:04 np0005593232 nova_compute[250269]: 2026-01-23 09:42:04.555 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.553 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2b779a82-7a80-4bd6-a2c8-3d597f74b527]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:04 np0005593232 ovn_controller[151001]: 2026-01-23T09:42:04Z|00130|binding|INFO|Setting lport 189994c7-5c8d-40fd-888d-a104e39b5aca ovn-installed in OVS
Jan 23 04:42:04 np0005593232 ovn_controller[151001]: 2026-01-23T09:42:04Z|00131|binding|INFO|Setting lport 189994c7-5c8d-40fd-888d-a104e39b5aca up in Southbound
Jan 23 04:42:04 np0005593232 nova_compute[250269]: 2026-01-23 09:42:04.569 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.587 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[211222f2-d191-4e2b-b8da-68d360731e41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:04 np0005593232 systemd-udevd[281972]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:42:04 np0005593232 NetworkManager[49057]: <info>  [1769161324.5927] manager: (tapf19933f5-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/67)
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.592 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1962482b-248e-49f9-91c3-241a43992c03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.624 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[59855915-025e-4898-a753-cce2357c5088]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.627 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[43752556-7a53-4853-8564-eda532182d1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:42:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:04.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:42:04 np0005593232 NetworkManager[49057]: <info>  [1769161324.6527] device (tapf19933f5-c0): carrier: link connected
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.660 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[49a06fee-b495-4df7-a57c-6ae7cd2910f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.685 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4af79e82-e5b5-40e9-988a-dc172e42e735]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf19933f5-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:fb:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 523779, 'reachable_time': 28864, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282007, 'error': None, 'target': 'ovnmeta-f19933f5-cfe3-4319-a83b-b72dde692ab6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.699 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e5531aed-487b-4c30-8fb8-319fbc743b91]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feec:fb09'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 523779, 'tstamp': 523779}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282008, 'error': None, 'target': 'ovnmeta-f19933f5-cfe3-4319-a83b-b72dde692ab6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.716 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5ff4caa0-0703-484f-b7f8-58f4acfa9da6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf19933f5-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ec:fb:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 523779, 'reachable_time': 28864, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282009, 'error': None, 'target': 'ovnmeta-f19933f5-cfe3-4319-a83b-b72dde692ab6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.747 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[294e7d71-ffc7-4831-97ae-9db2f89b9823]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.801 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[55ec9e9e-aa62-4fb0-9bb5-2e85c2e75311]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.803 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf19933f5-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.803 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.803 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf19933f5-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:42:04 np0005593232 nova_compute[250269]: 2026-01-23 09:42:04.805 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:04 np0005593232 NetworkManager[49057]: <info>  [1769161324.8064] manager: (tapf19933f5-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Jan 23 04:42:04 np0005593232 kernel: tapf19933f5-c0: entered promiscuous mode
Jan 23 04:42:04 np0005593232 nova_compute[250269]: 2026-01-23 09:42:04.810 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.811 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf19933f5-c0, col_values=(('external_ids', {'iface-id': '03e2fba1-8299-41b0-8205-575cd62d3292'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:42:04 np0005593232 nova_compute[250269]: 2026-01-23 09:42:04.812 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:04 np0005593232 ovn_controller[151001]: 2026-01-23T09:42:04Z|00132|binding|INFO|Releasing lport 03e2fba1-8299-41b0-8205-575cd62d3292 from this chassis (sb_readonly=0)
Jan 23 04:42:04 np0005593232 nova_compute[250269]: 2026-01-23 09:42:04.827 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:04 np0005593232 nova_compute[250269]: 2026-01-23 09:42:04.831 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.832 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f19933f5-cfe3-4319-a83b-b72dde692ab6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f19933f5-cfe3-4319-a83b-b72dde692ab6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.833 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[bf7003a3-b074-4d7e-8202-438e94f039ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.834 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-f19933f5-cfe3-4319-a83b-b72dde692ab6
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/f19933f5-cfe3-4319-a83b-b72dde692ab6.pid.haproxy
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID f19933f5-cfe3-4319-a83b-b72dde692ab6
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:42:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:04.834 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f19933f5-cfe3-4319-a83b-b72dde692ab6', 'env', 'PROCESS_TAG=haproxy-f19933f5-cfe3-4319-a83b-b72dde692ab6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f19933f5-cfe3-4319-a83b-b72dde692ab6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 04:42:04 np0005593232 nova_compute[250269]: 2026-01-23 09:42:04.894 250273 DEBUG nova.network.neutron [req-3d17407c-9925-4b2c-9962-ca6e51b33bf1 req-92706a03-ca4a-4430-8d4b-f95505b89615 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Updated VIF entry in instance network info cache for port 189994c7-5c8d-40fd-888d-a104e39b5aca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:42:04 np0005593232 nova_compute[250269]: 2026-01-23 09:42:04.895 250273 DEBUG nova.network.neutron [req-3d17407c-9925-4b2c-9962-ca6e51b33bf1 req-92706a03-ca4a-4430-8d4b-f95505b89615 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Updating instance_info_cache with network_info: [{"id": "189994c7-5c8d-40fd-888d-a104e39b5aca", "address": "fa:16:3e:6a:97:e5", "network": {"id": "f19933f5-cfe3-4319-a83b-b72dde692ab6", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1169169895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4fd9229340ed4bf3a3a72baa6985a3e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap189994c7-5c", "ovs_interfaceid": "189994c7-5c8d-40fd-888d-a104e39b5aca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:42:04 np0005593232 nova_compute[250269]: 2026-01-23 09:42:04.917 250273 DEBUG oslo_concurrency.lockutils [req-3d17407c-9925-4b2c-9962-ca6e51b33bf1 req-92706a03-ca4a-4430-8d4b-f95505b89615 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-a1208de2-efde-4618-8388-2acfab37582a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:42:04 np0005593232 nova_compute[250269]: 2026-01-23 09:42:04.921 250273 DEBUG nova.storage.rbd_utils [None req-389f4993-a4bc-434e-8ce6-3b6c6098b5ea 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] removing snapshot(15233b796bef4c8799780943030b5c88) on rbd image(029c50ac-8221-4aa1-ab0f-92693d5d4d44_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.008 250273 DEBUG nova.compute.manager [req-cef9be58-bad6-4b3b-aa5e-bb1e1bb4c9b8 req-5dee3236-2b94-4e50-b986-f2d87a3b656c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Received event network-vif-plugged-189994c7-5c8d-40fd-888d-a104e39b5aca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.009 250273 DEBUG oslo_concurrency.lockutils [req-cef9be58-bad6-4b3b-aa5e-bb1e1bb4c9b8 req-5dee3236-2b94-4e50-b986-f2d87a3b656c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "a1208de2-efde-4618-8388-2acfab37582a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.009 250273 DEBUG oslo_concurrency.lockutils [req-cef9be58-bad6-4b3b-aa5e-bb1e1bb4c9b8 req-5dee3236-2b94-4e50-b986-f2d87a3b656c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a1208de2-efde-4618-8388-2acfab37582a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.009 250273 DEBUG oslo_concurrency.lockutils [req-cef9be58-bad6-4b3b-aa5e-bb1e1bb4c9b8 req-5dee3236-2b94-4e50-b986-f2d87a3b656c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a1208de2-efde-4618-8388-2acfab37582a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.009 250273 DEBUG nova.compute.manager [req-cef9be58-bad6-4b3b-aa5e-bb1e1bb4c9b8 req-5dee3236-2b94-4e50-b986-f2d87a3b656c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Processing event network-vif-plugged-189994c7-5c8d-40fd-888d-a104e39b5aca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 04:42:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Jan 23 04:42:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Jan 23 04:42:05 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.193 250273 DEBUG nova.storage.rbd_utils [None req-389f4993-a4bc-434e-8ce6-3b6c6098b5ea 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] creating snapshot(snap) on rbd image(8acf0d1a-47cb-44fb-a780-075c7b86547c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 04:42:05 np0005593232 podman[282064]: 2026-01-23 09:42:05.226940945 +0000 UTC m=+0.054406308 container create 14c4caecc03dff537313152ce66c5570b770bcc8c312a717aca8dcf113c4e389 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f19933f5-cfe3-4319-a83b-b72dde692ab6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:42:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1449: 321 pgs: 321 active+clean; 255 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 218 KiB/s rd, 5.2 MiB/s wr, 92 op/s
Jan 23 04:42:05 np0005593232 systemd[1]: Started libpod-conmon-14c4caecc03dff537313152ce66c5570b770bcc8c312a717aca8dcf113c4e389.scope.
Jan 23 04:42:05 np0005593232 podman[282064]: 2026-01-23 09:42:05.199245167 +0000 UTC m=+0.026710550 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:42:05 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:42:05 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53566040ee2110f3625468f7fe5f4d82c4d5c4d0257a7bce0841010e2a7a34c8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:42:05 np0005593232 podman[282064]: 2026-01-23 09:42:05.324666781 +0000 UTC m=+0.152132144 container init 14c4caecc03dff537313152ce66c5570b770bcc8c312a717aca8dcf113c4e389 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f19933f5-cfe3-4319-a83b-b72dde692ab6, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 04:42:05 np0005593232 podman[282064]: 2026-01-23 09:42:05.329944823 +0000 UTC m=+0.157410186 container start 14c4caecc03dff537313152ce66c5570b770bcc8c312a717aca8dcf113c4e389 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f19933f5-cfe3-4319-a83b-b72dde692ab6, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 04:42:05 np0005593232 neutron-haproxy-ovnmeta-f19933f5-cfe3-4319-a83b-b72dde692ab6[282116]: [NOTICE]   (282139) : New worker (282147) forked
Jan 23 04:42:05 np0005593232 neutron-haproxy-ovnmeta-f19933f5-cfe3-4319-a83b-b72dde692ab6[282116]: [NOTICE]   (282139) : Loading success.
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.431 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161325.4312956, a1208de2-efde-4618-8388-2acfab37582a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.432 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a1208de2-efde-4618-8388-2acfab37582a] VM Started (Lifecycle Event)#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.435 250273 DEBUG nova.compute.manager [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.438 250273 DEBUG nova.virt.libvirt.driver [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.443 250273 INFO nova.virt.libvirt.driver [-] [instance: a1208de2-efde-4618-8388-2acfab37582a] Instance spawned successfully.#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.444 250273 DEBUG nova.virt.libvirt.driver [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.466 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a1208de2-efde-4618-8388-2acfab37582a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.474 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a1208de2-efde-4618-8388-2acfab37582a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.477 250273 DEBUG nova.virt.libvirt.driver [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.477 250273 DEBUG nova.virt.libvirt.driver [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.478 250273 DEBUG nova.virt.libvirt.driver [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.478 250273 DEBUG nova.virt.libvirt.driver [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.479 250273 DEBUG nova.virt.libvirt.driver [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.479 250273 DEBUG nova.virt.libvirt.driver [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.511 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a1208de2-efde-4618-8388-2acfab37582a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.512 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161325.4315903, a1208de2-efde-4618-8388-2acfab37582a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.512 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a1208de2-efde-4618-8388-2acfab37582a] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.542 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a1208de2-efde-4618-8388-2acfab37582a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.546 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161325.4377255, a1208de2-efde-4618-8388-2acfab37582a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.546 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a1208de2-efde-4618-8388-2acfab37582a] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.556 250273 INFO nova.compute.manager [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Took 9.39 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.556 250273 DEBUG nova.compute.manager [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.573 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a1208de2-efde-4618-8388-2acfab37582a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.577 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a1208de2-efde-4618-8388-2acfab37582a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.614 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a1208de2-efde-4618-8388-2acfab37582a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.628 250273 INFO nova.compute.manager [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Took 10.67 seconds to build instance.#033[00m
Jan 23 04:42:05 np0005593232 nova_compute[250269]: 2026-01-23 09:42:05.660 250273 DEBUG oslo_concurrency.lockutils [None req-976a843b-d433-4bf6-a3d8-434e92e80fc1 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lock "a1208de2-efde-4618-8388-2acfab37582a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:42:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Jan 23 04:42:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Jan 23 04:42:06 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Jan 23 04:42:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:42:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:06.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:42:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:06.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:42:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:42:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:42:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:42:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:42:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:42:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1451: 321 pgs: 321 active+clean; 325 MiB data, 631 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 11 MiB/s wr, 253 op/s
Jan 23 04:42:07 np0005593232 nova_compute[250269]: 2026-01-23 09:42:07.294 250273 DEBUG nova.compute.manager [req-a0e5845a-ec4f-4859-9e52-6518c8b976de req-ea6b69e5-4c34-4259-a313-b7aed177c2f9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Received event network-vif-plugged-189994c7-5c8d-40fd-888d-a104e39b5aca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:42:07 np0005593232 nova_compute[250269]: 2026-01-23 09:42:07.295 250273 DEBUG oslo_concurrency.lockutils [req-a0e5845a-ec4f-4859-9e52-6518c8b976de req-ea6b69e5-4c34-4259-a313-b7aed177c2f9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "a1208de2-efde-4618-8388-2acfab37582a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:42:07 np0005593232 nova_compute[250269]: 2026-01-23 09:42:07.295 250273 DEBUG oslo_concurrency.lockutils [req-a0e5845a-ec4f-4859-9e52-6518c8b976de req-ea6b69e5-4c34-4259-a313-b7aed177c2f9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a1208de2-efde-4618-8388-2acfab37582a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:42:07 np0005593232 nova_compute[250269]: 2026-01-23 09:42:07.295 250273 DEBUG oslo_concurrency.lockutils [req-a0e5845a-ec4f-4859-9e52-6518c8b976de req-ea6b69e5-4c34-4259-a313-b7aed177c2f9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a1208de2-efde-4618-8388-2acfab37582a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:42:07 np0005593232 nova_compute[250269]: 2026-01-23 09:42:07.296 250273 DEBUG nova.compute.manager [req-a0e5845a-ec4f-4859-9e52-6518c8b976de req-ea6b69e5-4c34-4259-a313-b7aed177c2f9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] No waiting events found dispatching network-vif-plugged-189994c7-5c8d-40fd-888d-a104e39b5aca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:42:07 np0005593232 nova_compute[250269]: 2026-01-23 09:42:07.296 250273 WARNING nova.compute.manager [req-a0e5845a-ec4f-4859-9e52-6518c8b976de req-ea6b69e5-4c34-4259-a313-b7aed177c2f9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Received unexpected event network-vif-plugged-189994c7-5c8d-40fd-888d-a104e39b5aca for instance with vm_state active and task_state None.#033[00m
Jan 23 04:42:07 np0005593232 podman[282158]: 2026-01-23 09:42:07.457185584 +0000 UTC m=+0.113939344 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 23 04:42:08 np0005593232 nova_compute[250269]: 2026-01-23 09:42:08.233 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:08.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:08.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:08 np0005593232 nova_compute[250269]: 2026-01-23 09:42:08.810 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:42:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e184 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:42:09 np0005593232 nova_compute[250269]: 2026-01-23 09:42:09.163 250273 INFO nova.virt.libvirt.driver [None req-389f4993-a4bc-434e-8ce6-3b6c6098b5ea 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Snapshot image upload complete#033[00m
Jan 23 04:42:09 np0005593232 nova_compute[250269]: 2026-01-23 09:42:09.164 250273 INFO nova.compute.manager [None req-389f4993-a4bc-434e-8ce6-3b6c6098b5ea 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Took 6.79 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 23 04:42:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1452: 321 pgs: 321 active+clean; 346 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 8.0 MiB/s wr, 357 op/s
Jan 23 04:42:09 np0005593232 nova_compute[250269]: 2026-01-23 09:42:09.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:42:09 np0005593232 nova_compute[250269]: 2026-01-23 09:42:09.303 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:09 np0005593232 NetworkManager[49057]: <info>  [1769161329.3040] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Jan 23 04:42:09 np0005593232 NetworkManager[49057]: <info>  [1769161329.3047] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Jan 23 04:42:09 np0005593232 nova_compute[250269]: 2026-01-23 09:42:09.526 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:09 np0005593232 nova_compute[250269]: 2026-01-23 09:42:09.533 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:09 np0005593232 ovn_controller[151001]: 2026-01-23T09:42:09Z|00133|binding|INFO|Releasing lport 03e2fba1-8299-41b0-8205-575cd62d3292 from this chassis (sb_readonly=0)
Jan 23 04:42:09 np0005593232 nova_compute[250269]: 2026-01-23 09:42:09.555 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:09 np0005593232 nova_compute[250269]: 2026-01-23 09:42:09.802 250273 DEBUG nova.compute.manager [req-def34a9a-3fa2-4881-9636-255c4863616d req-638e1d71-f06e-4d20-98db-e6212c039966 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Received event network-changed-189994c7-5c8d-40fd-888d-a104e39b5aca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:42:09 np0005593232 nova_compute[250269]: 2026-01-23 09:42:09.803 250273 DEBUG nova.compute.manager [req-def34a9a-3fa2-4881-9636-255c4863616d req-638e1d71-f06e-4d20-98db-e6212c039966 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Refreshing instance network info cache due to event network-changed-189994c7-5c8d-40fd-888d-a104e39b5aca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:42:09 np0005593232 nova_compute[250269]: 2026-01-23 09:42:09.803 250273 DEBUG oslo_concurrency.lockutils [req-def34a9a-3fa2-4881-9636-255c4863616d req-638e1d71-f06e-4d20-98db-e6212c039966 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-a1208de2-efde-4618-8388-2acfab37582a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:42:09 np0005593232 nova_compute[250269]: 2026-01-23 09:42:09.804 250273 DEBUG oslo_concurrency.lockutils [req-def34a9a-3fa2-4881-9636-255c4863616d req-638e1d71-f06e-4d20-98db-e6212c039966 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-a1208de2-efde-4618-8388-2acfab37582a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:42:09 np0005593232 nova_compute[250269]: 2026-01-23 09:42:09.804 250273 DEBUG nova.network.neutron [req-def34a9a-3fa2-4881-9636-255c4863616d req-638e1d71-f06e-4d20-98db-e6212c039966 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Refreshing network info cache for port 189994c7-5c8d-40fd-888d-a104e39b5aca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:42:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:42:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:10.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:42:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:10.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1453: 321 pgs: 321 active+clean; 346 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 9.9 MiB/s rd, 6.7 MiB/s wr, 301 op/s
Jan 23 04:42:11 np0005593232 nova_compute[250269]: 2026-01-23 09:42:11.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:42:12 np0005593232 nova_compute[250269]: 2026-01-23 09:42:12.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:42:12 np0005593232 nova_compute[250269]: 2026-01-23 09:42:12.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:42:12 np0005593232 nova_compute[250269]: 2026-01-23 09:42:12.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:42:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:12.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Jan 23 04:42:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Jan 23 04:42:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:12.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:12 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Jan 23 04:42:12 np0005593232 nova_compute[250269]: 2026-01-23 09:42:12.655 250273 DEBUG nova.network.neutron [req-def34a9a-3fa2-4881-9636-255c4863616d req-638e1d71-f06e-4d20-98db-e6212c039966 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Updated VIF entry in instance network info cache for port 189994c7-5c8d-40fd-888d-a104e39b5aca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:42:12 np0005593232 nova_compute[250269]: 2026-01-23 09:42:12.656 250273 DEBUG nova.network.neutron [req-def34a9a-3fa2-4881-9636-255c4863616d req-638e1d71-f06e-4d20-98db-e6212c039966 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Updating instance_info_cache with network_info: [{"id": "189994c7-5c8d-40fd-888d-a104e39b5aca", "address": "fa:16:3e:6a:97:e5", "network": {"id": "f19933f5-cfe3-4319-a83b-b72dde692ab6", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1169169895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4fd9229340ed4bf3a3a72baa6985a3e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap189994c7-5c", "ovs_interfaceid": "189994c7-5c8d-40fd-888d-a104e39b5aca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:42:12 np0005593232 nova_compute[250269]: 2026-01-23 09:42:12.659 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-029c50ac-8221-4aa1-ab0f-92693d5d4d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:42:12 np0005593232 nova_compute[250269]: 2026-01-23 09:42:12.659 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-029c50ac-8221-4aa1-ab0f-92693d5d4d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:42:12 np0005593232 nova_compute[250269]: 2026-01-23 09:42:12.659 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 04:42:12 np0005593232 nova_compute[250269]: 2026-01-23 09:42:12.659 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 029c50ac-8221-4aa1-ab0f-92693d5d4d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:42:12 np0005593232 nova_compute[250269]: 2026-01-23 09:42:12.685 250273 DEBUG oslo_concurrency.lockutils [req-def34a9a-3fa2-4881-9636-255c4863616d req-638e1d71-f06e-4d20-98db-e6212c039966 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-a1208de2-efde-4618-8388-2acfab37582a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:42:12 np0005593232 nova_compute[250269]: 2026-01-23 09:42:12.763 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769161317.7632973, 029c50ac-8221-4aa1-ab0f-92693d5d4d44 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:42:12 np0005593232 nova_compute[250269]: 2026-01-23 09:42:12.765 250273 INFO nova.compute.manager [-] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:42:12 np0005593232 nova_compute[250269]: 2026-01-23 09:42:12.790 250273 DEBUG nova.compute.manager [None req-641890eb-06d5-4334-bd77-da17adc76961 - - - - - -] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:42:12 np0005593232 nova_compute[250269]: 2026-01-23 09:42:12.796 250273 DEBUG nova.compute.manager [None req-641890eb-06d5-4334-bd77-da17adc76961 - - - - - -] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: None, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:42:13 np0005593232 nova_compute[250269]: 2026-01-23 09:42:13.236 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1455: 321 pgs: 321 active+clean; 346 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 9.1 MiB/s rd, 6.0 MiB/s wr, 304 op/s
Jan 23 04:42:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:42:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Jan 23 04:42:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Jan 23 04:42:14 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Jan 23 04:42:14 np0005593232 nova_compute[250269]: 2026-01-23 09:42:14.444 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Updating instance_info_cache with network_info: [{"id": "1b310789-fedc-45bf-b49e-839dcea36fd7", "address": "fa:16:3e:8b:cd:5a", "network": {"id": "c2696fd4-5fd7-4934-88ac-40162fad555d", "bridge": "br-int", "label": "tempest-ImagesTestJSON-113670604-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05bc71a77710455e8b34ead7fec81a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b310789-fe", "ovs_interfaceid": "1b310789-fedc-45bf-b49e-839dcea36fd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:42:14 np0005593232 nova_compute[250269]: 2026-01-23 09:42:14.479 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-029c50ac-8221-4aa1-ab0f-92693d5d4d44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:42:14 np0005593232 nova_compute[250269]: 2026-01-23 09:42:14.479 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 04:42:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:42:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:14.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:42:14 np0005593232 nova_compute[250269]: 2026-01-23 09:42:14.535 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:42:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:14.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:42:14 np0005593232 nova_compute[250269]: 2026-01-23 09:42:14.845 250273 DEBUG oslo_concurrency.lockutils [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Acquiring lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:42:14 np0005593232 nova_compute[250269]: 2026-01-23 09:42:14.846 250273 DEBUG oslo_concurrency.lockutils [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:42:14 np0005593232 nova_compute[250269]: 2026-01-23 09:42:14.846 250273 DEBUG oslo_concurrency.lockutils [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Acquiring lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:42:14 np0005593232 nova_compute[250269]: 2026-01-23 09:42:14.846 250273 DEBUG oslo_concurrency.lockutils [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:42:14 np0005593232 nova_compute[250269]: 2026-01-23 09:42:14.847 250273 DEBUG oslo_concurrency.lockutils [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:42:14 np0005593232 nova_compute[250269]: 2026-01-23 09:42:14.848 250273 INFO nova.compute.manager [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Terminating instance#033[00m
Jan 23 04:42:14 np0005593232 nova_compute[250269]: 2026-01-23 09:42:14.848 250273 DEBUG nova.compute.manager [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 04:42:14 np0005593232 nova_compute[250269]: 2026-01-23 09:42:14.855 250273 INFO nova.virt.libvirt.driver [-] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Instance destroyed successfully.#033[00m
Jan 23 04:42:14 np0005593232 nova_compute[250269]: 2026-01-23 09:42:14.856 250273 DEBUG nova.objects.instance [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Lazy-loading 'resources' on Instance uuid 029c50ac-8221-4aa1-ab0f-92693d5d4d44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:42:14 np0005593232 nova_compute[250269]: 2026-01-23 09:42:14.885 250273 DEBUG nova.virt.libvirt.vif [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:41:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1350534516',display_name='tempest-ImagesTestJSON-server-1350534516',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1350534516',id=42,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:41:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='05bc71a77710455e8b34ead7fec81a31',ramdisk_id='',reservation_id='r-3w4x2n4s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-1507872051',owner_user_name='tempest-ImagesTestJSON-1507872051-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:42:09Z,user_data=None,user_id='56da68482e3a4fb582dcccad45f8f71b',uuid=029c50ac-8221-4aa1-ab0f-92693d5d4d44,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "1b310789-fedc-45bf-b49e-839dcea36fd7", "address": "fa:16:3e:8b:cd:5a", "network": {"id": "c2696fd4-5fd7-4934-88ac-40162fad555d", "bridge": "br-int", "label": "tempest-ImagesTestJSON-113670604-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05bc71a77710455e8b34ead7fec81a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b310789-fe", "ovs_interfaceid": "1b310789-fedc-45bf-b49e-839dcea36fd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:42:14 np0005593232 nova_compute[250269]: 2026-01-23 09:42:14.886 250273 DEBUG nova.network.os_vif_util [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Converting VIF {"id": "1b310789-fedc-45bf-b49e-839dcea36fd7", "address": "fa:16:3e:8b:cd:5a", "network": {"id": "c2696fd4-5fd7-4934-88ac-40162fad555d", "bridge": "br-int", "label": "tempest-ImagesTestJSON-113670604-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "05bc71a77710455e8b34ead7fec81a31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b310789-fe", "ovs_interfaceid": "1b310789-fedc-45bf-b49e-839dcea36fd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:42:14 np0005593232 nova_compute[250269]: 2026-01-23 09:42:14.886 250273 DEBUG nova.network.os_vif_util [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8b:cd:5a,bridge_name='br-int',has_traffic_filtering=True,id=1b310789-fedc-45bf-b49e-839dcea36fd7,network=Network(c2696fd4-5fd7-4934-88ac-40162fad555d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b310789-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:42:14 np0005593232 nova_compute[250269]: 2026-01-23 09:42:14.887 250273 DEBUG os_vif [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:cd:5a,bridge_name='br-int',has_traffic_filtering=True,id=1b310789-fedc-45bf-b49e-839dcea36fd7,network=Network(c2696fd4-5fd7-4934-88ac-40162fad555d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b310789-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:42:14 np0005593232 nova_compute[250269]: 2026-01-23 09:42:14.889 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:14 np0005593232 nova_compute[250269]: 2026-01-23 09:42:14.889 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b310789-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:42:14 np0005593232 nova_compute[250269]: 2026-01-23 09:42:14.896 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:42:14 np0005593232 nova_compute[250269]: 2026-01-23 09:42:14.899 250273 INFO os_vif [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8b:cd:5a,bridge_name='br-int',has_traffic_filtering=True,id=1b310789-fedc-45bf-b49e-839dcea36fd7,network=Network(c2696fd4-5fd7-4934-88ac-40162fad555d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1b310789-fe')#033[00m
Jan 23 04:42:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1457: 321 pgs: 321 active+clean; 346 MiB data, 642 MiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 1.5 MiB/s wr, 175 op/s
Jan 23 04:42:15 np0005593232 nova_compute[250269]: 2026-01-23 09:42:15.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:42:15 np0005593232 nova_compute[250269]: 2026-01-23 09:42:15.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:42:15 np0005593232 nova_compute[250269]: 2026-01-23 09:42:15.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:42:15 np0005593232 nova_compute[250269]: 2026-01-23 09:42:15.357 250273 INFO nova.virt.libvirt.driver [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Deleting instance files /var/lib/nova/instances/029c50ac-8221-4aa1-ab0f-92693d5d4d44_del#033[00m
Jan 23 04:42:15 np0005593232 nova_compute[250269]: 2026-01-23 09:42:15.358 250273 INFO nova.virt.libvirt.driver [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Deletion of /var/lib/nova/instances/029c50ac-8221-4aa1-ab0f-92693d5d4d44_del complete#033[00m
Jan 23 04:42:15 np0005593232 nova_compute[250269]: 2026-01-23 09:42:15.417 250273 INFO nova.compute.manager [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Took 0.57 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 04:42:15 np0005593232 nova_compute[250269]: 2026-01-23 09:42:15.418 250273 DEBUG oslo.service.loopingcall [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 04:42:15 np0005593232 nova_compute[250269]: 2026-01-23 09:42:15.418 250273 DEBUG nova.compute.manager [-] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 04:42:15 np0005593232 nova_compute[250269]: 2026-01-23 09:42:15.418 250273 DEBUG nova.network.neutron [-] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 04:42:16 np0005593232 nova_compute[250269]: 2026-01-23 09:42:16.042 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:16 np0005593232 nova_compute[250269]: 2026-01-23 09:42:16.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:42:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:16.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:16.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1458: 321 pgs: 321 active+clean; 242 MiB data, 583 MiB used, 20 GiB / 21 GiB avail; 340 KiB/s rd, 21 KiB/s wr, 59 op/s
Jan 23 04:42:17 np0005593232 nova_compute[250269]: 2026-01-23 09:42:17.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:42:17 np0005593232 podman[282260]: 2026-01-23 09:42:17.456949057 +0000 UTC m=+0.096037276 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 04:42:18 np0005593232 nova_compute[250269]: 2026-01-23 09:42:18.110 250273 DEBUG nova.network.neutron [-] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:42:18 np0005593232 nova_compute[250269]: 2026-01-23 09:42:18.139 250273 INFO nova.compute.manager [-] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Took 2.72 seconds to deallocate network for instance.#033[00m
Jan 23 04:42:18 np0005593232 nova_compute[250269]: 2026-01-23 09:42:18.216 250273 DEBUG oslo_concurrency.lockutils [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:42:18 np0005593232 nova_compute[250269]: 2026-01-23 09:42:18.216 250273 DEBUG oslo_concurrency.lockutils [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:42:18 np0005593232 nova_compute[250269]: 2026-01-23 09:42:18.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:42:18 np0005593232 nova_compute[250269]: 2026-01-23 09:42:18.302 250273 DEBUG oslo_concurrency.processutils [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:42:18 np0005593232 nova_compute[250269]: 2026-01-23 09:42:18.349 250273 DEBUG nova.compute.manager [req-ffe7a977-cd92-4332-951c-9e2b55fca4db req-ea2367bd-cb0d-49ac-bd4e-1a710a88f1b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 029c50ac-8221-4aa1-ab0f-92693d5d4d44] Received event network-vif-deleted-1b310789-fedc-45bf-b49e-839dcea36fd7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:42:18 np0005593232 nova_compute[250269]: 2026-01-23 09:42:18.353 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:42:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:18.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:18.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:42:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/165645961' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:42:18 np0005593232 nova_compute[250269]: 2026-01-23 09:42:18.775 250273 DEBUG oslo_concurrency.processutils [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:42:18 np0005593232 nova_compute[250269]: 2026-01-23 09:42:18.785 250273 DEBUG nova.compute.provider_tree [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:42:18 np0005593232 nova_compute[250269]: 2026-01-23 09:42:18.810 250273 DEBUG nova.scheduler.client.report [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:42:18 np0005593232 nova_compute[250269]: 2026-01-23 09:42:18.846 250273 DEBUG oslo_concurrency.lockutils [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:42:18 np0005593232 nova_compute[250269]: 2026-01-23 09:42:18.851 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:42:18 np0005593232 nova_compute[250269]: 2026-01-23 09:42:18.851 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:42:18 np0005593232 nova_compute[250269]: 2026-01-23 09:42:18.852 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:42:18 np0005593232 nova_compute[250269]: 2026-01-23 09:42:18.852 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:42:18 np0005593232 nova_compute[250269]: 2026-01-23 09:42:18.922 250273 INFO nova.scheduler.client.report [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Deleted allocations for instance 029c50ac-8221-4aa1-ab0f-92693d5d4d44#033[00m
Jan 23 04:42:19 np0005593232 nova_compute[250269]: 2026-01-23 09:42:19.051 250273 DEBUG oslo_concurrency.lockutils [None req-adcde6ab-1ecc-410a-b8ef-fd095b4f2efc 56da68482e3a4fb582dcccad45f8f71b 05bc71a77710455e8b34ead7fec81a31 - - default default] Lock "029c50ac-8221-4aa1-ab0f-92693d5d4d44" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:42:19.098129) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161339098209, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1471, "num_deletes": 253, "total_data_size": 2368936, "memory_usage": 2410040, "flush_reason": "Manual Compaction"}
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161339117777, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 1478971, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31403, "largest_seqno": 32873, "table_properties": {"data_size": 1473559, "index_size": 2616, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14502, "raw_average_key_size": 21, "raw_value_size": 1461617, "raw_average_value_size": 2146, "num_data_blocks": 116, "num_entries": 681, "num_filter_entries": 681, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161217, "oldest_key_time": 1769161217, "file_creation_time": 1769161339, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 19708 microseconds, and 9467 cpu microseconds.
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:42:19.117838) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 1478971 bytes OK
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:42:19.117884) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:42:19.119155) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:42:19.119170) EVENT_LOG_v1 {"time_micros": 1769161339119165, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:42:19.119188) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 2362442, prev total WAL file size 2362442, number of live WAL files 2.
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:42:19.119961) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303035' seq:72057594037927935, type:22 .. '6D6772737461740031323536' seq:0, type:0; will stop at (end)
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(1444KB)], [68(10138KB)]
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161339120040, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 11860451, "oldest_snapshot_seqno": -1}
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5788 keys, 8860100 bytes, temperature: kUnknown
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161339228706, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8860100, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8822375, "index_size": 22127, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 148062, "raw_average_key_size": 25, "raw_value_size": 8719369, "raw_average_value_size": 1506, "num_data_blocks": 894, "num_entries": 5788, "num_filter_entries": 5788, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769161339, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:42:19.229573) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8860100 bytes
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:42:19.232783) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 108.6 rd, 81.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.9 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(14.0) write-amplify(6.0) OK, records in: 6255, records dropped: 467 output_compression: NoCompression
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:42:19.232802) EVENT_LOG_v1 {"time_micros": 1769161339232792, "job": 38, "event": "compaction_finished", "compaction_time_micros": 109193, "compaction_time_cpu_micros": 25961, "output_level": 6, "num_output_files": 1, "total_output_size": 8860100, "num_input_records": 6255, "num_output_records": 5788, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161339233393, "job": 38, "event": "table_file_deletion", "file_number": 70}
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161339235207, "job": 38, "event": "table_file_deletion", "file_number": 68}
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:42:19.119817) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:42:19.235333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:42:19.235341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:42:19.235343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:42:19.235346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:42:19.235348) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:42:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1459: 321 pgs: 321 active+clean; 188 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 369 KiB/s rd, 23 KiB/s wr, 101 op/s
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:42:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/769546104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:42:19 np0005593232 nova_compute[250269]: 2026-01-23 09:42:19.357 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:42:19 np0005593232 ovn_controller[151001]: 2026-01-23T09:42:19Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6a:97:e5 10.100.0.8
Jan 23 04:42:19 np0005593232 ovn_controller[151001]: 2026-01-23T09:42:19Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6a:97:e5 10.100.0.8
Jan 23 04:42:19 np0005593232 nova_compute[250269]: 2026-01-23 09:42:19.437 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:42:19 np0005593232 nova_compute[250269]: 2026-01-23 09:42:19.437 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:42:19 np0005593232 nova_compute[250269]: 2026-01-23 09:42:19.538 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:19 np0005593232 nova_compute[250269]: 2026-01-23 09:42:19.591 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:42:19 np0005593232 nova_compute[250269]: 2026-01-23 09:42:19.592 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4350MB free_disk=20.898845672607422GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:42:19 np0005593232 nova_compute[250269]: 2026-01-23 09:42:19.592 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:42:19 np0005593232 nova_compute[250269]: 2026-01-23 09:42:19.592 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:42:19 np0005593232 nova_compute[250269]: 2026-01-23 09:42:19.650 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance a1208de2-efde-4618-8388-2acfab37582a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:42:19 np0005593232 nova_compute[250269]: 2026-01-23 09:42:19.651 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:42:19 np0005593232 nova_compute[250269]: 2026-01-23 09:42:19.651 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:42:19 np0005593232 nova_compute[250269]: 2026-01-23 09:42:19.685 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:42:19 np0005593232 nova_compute[250269]: 2026-01-23 09:42:19.892 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:42:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1998736925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:42:20 np0005593232 nova_compute[250269]: 2026-01-23 09:42:20.167 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:42:20 np0005593232 nova_compute[250269]: 2026-01-23 09:42:20.172 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:42:20 np0005593232 nova_compute[250269]: 2026-01-23 09:42:20.192 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:42:20 np0005593232 nova_compute[250269]: 2026-01-23 09:42:20.219 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:42:20 np0005593232 nova_compute[250269]: 2026-01-23 09:42:20.220 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:42:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:20.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:20.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1460: 321 pgs: 321 active+clean; 188 MiB data, 554 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 2.6 KiB/s wr, 60 op/s
Jan 23 04:42:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:22.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:22.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1461: 321 pgs: 321 active+clean; 265 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 423 KiB/s rd, 4.7 MiB/s wr, 154 op/s
Jan 23 04:42:23 np0005593232 nova_compute[250269]: 2026-01-23 09:42:23.800 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:42:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Jan 23 04:42:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Jan 23 04:42:24 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Jan 23 04:42:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:24.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:24 np0005593232 nova_compute[250269]: 2026-01-23 09:42:24.540 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:24.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:24 np0005593232 nova_compute[250269]: 2026-01-23 09:42:24.894 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1463: 321 pgs: 321 active+clean; 265 MiB data, 581 MiB used, 20 GiB / 21 GiB avail; 423 KiB/s rd, 4.7 MiB/s wr, 154 op/s
Jan 23 04:42:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:26.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:26.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1464: 321 pgs: 321 active+clean; 267 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 437 KiB/s rd, 4.7 MiB/s wr, 144 op/s
Jan 23 04:42:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:42:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/927929359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:42:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:28.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:28.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:42:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1465: 321 pgs: 321 active+clean; 267 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 415 KiB/s rd, 4.7 MiB/s wr, 111 op/s
Jan 23 04:42:29 np0005593232 nova_compute[250269]: 2026-01-23 09:42:29.542 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:29 np0005593232 nova_compute[250269]: 2026-01-23 09:42:29.895 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:30.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:30.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1466: 321 pgs: 321 active+clean; 267 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 415 KiB/s rd, 4.7 MiB/s wr, 111 op/s
Jan 23 04:42:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:42:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:42:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:42:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:42:32 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:42:32 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:42:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:32.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:32.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:42:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:42:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:42:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:42:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:42:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:42:32 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 8d3ea818-1840-44b0-b6cf-fafe8b79c4d4 does not exist
Jan 23 04:42:32 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 674071dd-446a-4603-a1ab-35aedbba7f5e does not exist
Jan 23 04:42:32 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 5ec3fd9a-d736-4a10-919d-8980d5cb5696 does not exist
Jan 23 04:42:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:42:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:42:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:42:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:42:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:42:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:42:33 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:42:33 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:42:33 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:42:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1467: 321 pgs: 321 active+clean; 313 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.2 MiB/s wr, 131 op/s
Jan 23 04:42:33 np0005593232 podman[282626]: 2026-01-23 09:42:33.379736043 +0000 UTC m=+0.047282203 container create 17823a9ad91133233f951d8f6f99f47b1147b6e85923182b6a61802a70ab7245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 04:42:33 np0005593232 systemd[1]: Started libpod-conmon-17823a9ad91133233f951d8f6f99f47b1147b6e85923182b6a61802a70ab7245.scope.
Jan 23 04:42:33 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:42:33 np0005593232 podman[282626]: 2026-01-23 09:42:33.35706575 +0000 UTC m=+0.024611940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:42:33 np0005593232 podman[282626]: 2026-01-23 09:42:33.45739142 +0000 UTC m=+0.124937590 container init 17823a9ad91133233f951d8f6f99f47b1147b6e85923182b6a61802a70ab7245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pike, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 04:42:33 np0005593232 podman[282626]: 2026-01-23 09:42:33.465168914 +0000 UTC m=+0.132715054 container start 17823a9ad91133233f951d8f6f99f47b1147b6e85923182b6a61802a70ab7245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pike, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 23 04:42:33 np0005593232 podman[282626]: 2026-01-23 09:42:33.468216882 +0000 UTC m=+0.135763032 container attach 17823a9ad91133233f951d8f6f99f47b1147b6e85923182b6a61802a70ab7245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:42:33 np0005593232 hungry_pike[282642]: 167 167
Jan 23 04:42:33 np0005593232 systemd[1]: libpod-17823a9ad91133233f951d8f6f99f47b1147b6e85923182b6a61802a70ab7245.scope: Deactivated successfully.
Jan 23 04:42:33 np0005593232 conmon[282642]: conmon 17823a9ad91133233f95 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-17823a9ad91133233f951d8f6f99f47b1147b6e85923182b6a61802a70ab7245.scope/container/memory.events
Jan 23 04:42:33 np0005593232 podman[282626]: 2026-01-23 09:42:33.472148815 +0000 UTC m=+0.139694975 container died 17823a9ad91133233f951d8f6f99f47b1147b6e85923182b6a61802a70ab7245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pike, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:42:33 np0005593232 systemd[1]: var-lib-containers-storage-overlay-132cc8584e0e3dcd7af26852230d1a1ef0f3a97ac5b018ec72cee8da7168a34b-merged.mount: Deactivated successfully.
Jan 23 04:42:33 np0005593232 podman[282626]: 2026-01-23 09:42:33.510542671 +0000 UTC m=+0.178088811 container remove 17823a9ad91133233f951d8f6f99f47b1147b6e85923182b6a61802a70ab7245 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_pike, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Jan 23 04:42:33 np0005593232 systemd[1]: libpod-conmon-17823a9ad91133233f951d8f6f99f47b1147b6e85923182b6a61802a70ab7245.scope: Deactivated successfully.
Jan 23 04:42:33 np0005593232 podman[282665]: 2026-01-23 09:42:33.690723972 +0000 UTC m=+0.041269250 container create b21a9cb0a743d29636ffd9803f42de2587801457b688928c0e8fd50574fa0266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 04:42:33 np0005593232 systemd[1]: Started libpod-conmon-b21a9cb0a743d29636ffd9803f42de2587801457b688928c0e8fd50574fa0266.scope.
Jan 23 04:42:33 np0005593232 podman[282665]: 2026-01-23 09:42:33.673836635 +0000 UTC m=+0.024381933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:42:33 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:42:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e36db405abdc9ad33e7c4035fba6df0ba63a55aa518b89e6a87fa265141fd112/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:42:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e36db405abdc9ad33e7c4035fba6df0ba63a55aa518b89e6a87fa265141fd112/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:42:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e36db405abdc9ad33e7c4035fba6df0ba63a55aa518b89e6a87fa265141fd112/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:42:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e36db405abdc9ad33e7c4035fba6df0ba63a55aa518b89e6a87fa265141fd112/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:42:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e36db405abdc9ad33e7c4035fba6df0ba63a55aa518b89e6a87fa265141fd112/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:42:33 np0005593232 podman[282665]: 2026-01-23 09:42:33.795910302 +0000 UTC m=+0.146455590 container init b21a9cb0a743d29636ffd9803f42de2587801457b688928c0e8fd50574fa0266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:42:33 np0005593232 podman[282665]: 2026-01-23 09:42:33.80452737 +0000 UTC m=+0.155072658 container start b21a9cb0a743d29636ffd9803f42de2587801457b688928c0e8fd50574fa0266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 04:42:33 np0005593232 podman[282665]: 2026-01-23 09:42:33.808215626 +0000 UTC m=+0.158760924 container attach b21a9cb0a743d29636ffd9803f42de2587801457b688928c0e8fd50574fa0266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_newton, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:42:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:42:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:34.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:34 np0005593232 nova_compute[250269]: 2026-01-23 09:42:34.544 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:34 np0005593232 strange_newton[282681]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:42:34 np0005593232 strange_newton[282681]: --> relative data size: 1.0
Jan 23 04:42:34 np0005593232 strange_newton[282681]: --> All data devices are unavailable
Jan 23 04:42:34 np0005593232 systemd[1]: libpod-b21a9cb0a743d29636ffd9803f42de2587801457b688928c0e8fd50574fa0266.scope: Deactivated successfully.
Jan 23 04:42:34 np0005593232 podman[282665]: 2026-01-23 09:42:34.625525161 +0000 UTC m=+0.976070439 container died b21a9cb0a743d29636ffd9803f42de2587801457b688928c0e8fd50574fa0266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:42:34 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e36db405abdc9ad33e7c4035fba6df0ba63a55aa518b89e6a87fa265141fd112-merged.mount: Deactivated successfully.
Jan 23 04:42:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:34.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:34 np0005593232 podman[282665]: 2026-01-23 09:42:34.679349631 +0000 UTC m=+1.029894909 container remove b21a9cb0a743d29636ffd9803f42de2587801457b688928c0e8fd50574fa0266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_newton, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:42:34 np0005593232 systemd[1]: libpod-conmon-b21a9cb0a743d29636ffd9803f42de2587801457b688928c0e8fd50574fa0266.scope: Deactivated successfully.
Jan 23 04:42:34 np0005593232 nova_compute[250269]: 2026-01-23 09:42:34.897 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1468: 321 pgs: 321 active+clean; 313 MiB data, 617 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 1.9 MiB/s wr, 117 op/s
Jan 23 04:42:35 np0005593232 podman[282900]: 2026-01-23 09:42:35.428212674 +0000 UTC m=+0.105467039 container create 628c8fd6eb5863ca0ff3af956d6cc2944db7628c121afb9b4ffa89fb93df0716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:42:35 np0005593232 podman[282900]: 2026-01-23 09:42:35.345192612 +0000 UTC m=+0.022447027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:42:35 np0005593232 systemd[1]: Started libpod-conmon-628c8fd6eb5863ca0ff3af956d6cc2944db7628c121afb9b4ffa89fb93df0716.scope.
Jan 23 04:42:35 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:42:35 np0005593232 podman[282900]: 2026-01-23 09:42:35.71826159 +0000 UTC m=+0.395515975 container init 628c8fd6eb5863ca0ff3af956d6cc2944db7628c121afb9b4ffa89fb93df0716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:42:35 np0005593232 podman[282900]: 2026-01-23 09:42:35.730807401 +0000 UTC m=+0.408061796 container start 628c8fd6eb5863ca0ff3af956d6cc2944db7628c121afb9b4ffa89fb93df0716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mendel, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:42:35 np0005593232 great_mendel[282916]: 167 167
Jan 23 04:42:35 np0005593232 systemd[1]: libpod-628c8fd6eb5863ca0ff3af956d6cc2944db7628c121afb9b4ffa89fb93df0716.scope: Deactivated successfully.
Jan 23 04:42:35 np0005593232 podman[282900]: 2026-01-23 09:42:35.820792594 +0000 UTC m=+0.498046959 container attach 628c8fd6eb5863ca0ff3af956d6cc2944db7628c121afb9b4ffa89fb93df0716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 04:42:35 np0005593232 podman[282900]: 2026-01-23 09:42:35.823474231 +0000 UTC m=+0.500728636 container died 628c8fd6eb5863ca0ff3af956d6cc2944db7628c121afb9b4ffa89fb93df0716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mendel, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:42:36 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ae3937da3f394f5b1f477d3cfd6302da47897f9f0133c03aadca9ffb95fc03d1-merged.mount: Deactivated successfully.
Jan 23 04:42:36 np0005593232 podman[282900]: 2026-01-23 09:42:36.373321941 +0000 UTC m=+1.050576326 container remove 628c8fd6eb5863ca0ff3af956d6cc2944db7628c121afb9b4ffa89fb93df0716 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 04:42:36 np0005593232 systemd[1]: libpod-conmon-628c8fd6eb5863ca0ff3af956d6cc2944db7628c121afb9b4ffa89fb93df0716.scope: Deactivated successfully.
Jan 23 04:42:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Jan 23 04:42:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:36.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:36 np0005593232 podman[282941]: 2026-01-23 09:42:36.570723548 +0000 UTC m=+0.027529364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:42:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:42:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:36.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:42:36 np0005593232 podman[282941]: 2026-01-23 09:42:36.73043879 +0000 UTC m=+0.187244586 container create ae75f39a38261b1e42da2c0438d3b28f0ad0a6e2db739d20368256cb199c65ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jang, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:42:36 np0005593232 systemd[1]: Started libpod-conmon-ae75f39a38261b1e42da2c0438d3b28f0ad0a6e2db739d20368256cb199c65ae.scope.
Jan 23 04:42:36 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:42:36 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70e6ed7e4e831ad8fe3b9a4bad3056bd464254c115cc3c8d5316d0f05237426f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:42:36 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70e6ed7e4e831ad8fe3b9a4bad3056bd464254c115cc3c8d5316d0f05237426f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:42:36 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70e6ed7e4e831ad8fe3b9a4bad3056bd464254c115cc3c8d5316d0f05237426f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:42:36 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70e6ed7e4e831ad8fe3b9a4bad3056bd464254c115cc3c8d5316d0f05237426f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:42:36 np0005593232 podman[282941]: 2026-01-23 09:42:36.849225921 +0000 UTC m=+0.306031747 container init ae75f39a38261b1e42da2c0438d3b28f0ad0a6e2db739d20368256cb199c65ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jang, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:42:36 np0005593232 podman[282941]: 2026-01-23 09:42:36.858464927 +0000 UTC m=+0.315270723 container start ae75f39a38261b1e42da2c0438d3b28f0ad0a6e2db739d20368256cb199c65ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jang, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:42:36 np0005593232 podman[282941]: 2026-01-23 09:42:36.862175624 +0000 UTC m=+0.318981450 container attach ae75f39a38261b1e42da2c0438d3b28f0ad0a6e2db739d20368256cb199c65ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 04:42:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Jan 23 04:42:37 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Jan 23 04:42:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:42:37
Jan 23 04:42:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:42:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:42:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'backups', '.mgr', 'default.rgw.log', 'vms', '.rgw.root', 'default.rgw.meta', 'volumes', 'default.rgw.control']
Jan 23 04:42:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:42:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:42:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:42:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:42:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:42:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:42:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:42:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1470: 321 pgs: 321 active+clean; 275 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 2.2 MiB/s wr, 149 op/s
Jan 23 04:42:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:37.639 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:42:37 np0005593232 nova_compute[250269]: 2026-01-23 09:42:37.639 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:37.643 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:42:37 np0005593232 admiring_jang[282957]: {
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:    "0": [
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:        {
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:            "devices": [
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:                "/dev/loop3"
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:            ],
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:            "lv_name": "ceph_lv0",
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:            "lv_size": "7511998464",
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:            "name": "ceph_lv0",
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:            "tags": {
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:                "ceph.cluster_name": "ceph",
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:                "ceph.crush_device_class": "",
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:                "ceph.encrypted": "0",
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:                "ceph.osd_id": "0",
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:                "ceph.type": "block",
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:                "ceph.vdo": "0"
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:            },
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:            "type": "block",
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:            "vg_name": "ceph_vg0"
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:        }
Jan 23 04:42:37 np0005593232 admiring_jang[282957]:    ]
Jan 23 04:42:37 np0005593232 admiring_jang[282957]: }
Jan 23 04:42:37 np0005593232 systemd[1]: libpod-ae75f39a38261b1e42da2c0438d3b28f0ad0a6e2db739d20368256cb199c65ae.scope: Deactivated successfully.
Jan 23 04:42:37 np0005593232 podman[282967]: 2026-01-23 09:42:37.822052897 +0000 UTC m=+0.036601565 container died ae75f39a38261b1e42da2c0438d3b28f0ad0a6e2db739d20368256cb199c65ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jang, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 04:42:37 np0005593232 systemd[1]: var-lib-containers-storage-overlay-70e6ed7e4e831ad8fe3b9a4bad3056bd464254c115cc3c8d5316d0f05237426f-merged.mount: Deactivated successfully.
Jan 23 04:42:37 np0005593232 podman[282967]: 2026-01-23 09:42:37.991957594 +0000 UTC m=+0.206506172 container remove ae75f39a38261b1e42da2c0438d3b28f0ad0a6e2db739d20368256cb199c65ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_jang, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 04:42:37 np0005593232 systemd[1]: libpod-conmon-ae75f39a38261b1e42da2c0438d3b28f0ad0a6e2db739d20368256cb199c65ae.scope: Deactivated successfully.
Jan 23 04:42:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:42:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:42:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:42:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:42:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:42:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:42:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:42:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:42:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:42:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:42:38 np0005593232 podman[282968]: 2026-01-23 09:42:38.056692542 +0000 UTC m=+0.242864813 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 23 04:42:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Jan 23 04:42:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Jan 23 04:42:38 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Jan 23 04:42:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:42:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:38.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:42:38 np0005593232 podman[283147]: 2026-01-23 09:42:38.615364061 +0000 UTC m=+0.039415676 container create 85912d4e947c16bf33d3e405046a961a9c7329185a284bdc67e21b9a2e542df8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lewin, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 04:42:38 np0005593232 systemd[1]: Started libpod-conmon-85912d4e947c16bf33d3e405046a961a9c7329185a284bdc67e21b9a2e542df8.scope.
Jan 23 04:42:38 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:42:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:38 np0005593232 podman[283147]: 2026-01-23 09:42:38.598697091 +0000 UTC m=+0.022748726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:42:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:42:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:38.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:42:38 np0005593232 podman[283147]: 2026-01-23 09:42:38.700289248 +0000 UTC m=+0.124340903 container init 85912d4e947c16bf33d3e405046a961a9c7329185a284bdc67e21b9a2e542df8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:42:38 np0005593232 podman[283147]: 2026-01-23 09:42:38.707424163 +0000 UTC m=+0.131475798 container start 85912d4e947c16bf33d3e405046a961a9c7329185a284bdc67e21b9a2e542df8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:42:38 np0005593232 agitated_lewin[283164]: 167 167
Jan 23 04:42:38 np0005593232 systemd[1]: libpod-85912d4e947c16bf33d3e405046a961a9c7329185a284bdc67e21b9a2e542df8.scope: Deactivated successfully.
Jan 23 04:42:38 np0005593232 podman[283147]: 2026-01-23 09:42:38.712637803 +0000 UTC m=+0.136689428 container attach 85912d4e947c16bf33d3e405046a961a9c7329185a284bdc67e21b9a2e542df8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lewin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:42:38 np0005593232 podman[283147]: 2026-01-23 09:42:38.713449067 +0000 UTC m=+0.137500702 container died 85912d4e947c16bf33d3e405046a961a9c7329185a284bdc67e21b9a2e542df8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 23 04:42:38 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d2b79239c928e94eaa6851aec6a6b41686b7ed20185b8bd998877ac0f0a34cb8-merged.mount: Deactivated successfully.
Jan 23 04:42:38 np0005593232 podman[283147]: 2026-01-23 09:42:38.764911639 +0000 UTC m=+0.188963254 container remove 85912d4e947c16bf33d3e405046a961a9c7329185a284bdc67e21b9a2e542df8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lewin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Jan 23 04:42:38 np0005593232 systemd[1]: libpod-conmon-85912d4e947c16bf33d3e405046a961a9c7329185a284bdc67e21b9a2e542df8.scope: Deactivated successfully.
Jan 23 04:42:38 np0005593232 podman[283188]: 2026-01-23 09:42:38.937425338 +0000 UTC m=+0.037705497 container create 80c9eb5d74538ca1599c319438ea7470e1577987bd1fff930bf2b28336c2f24a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_snyder, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:42:38 np0005593232 systemd[1]: Started libpod-conmon-80c9eb5d74538ca1599c319438ea7470e1577987bd1fff930bf2b28336c2f24a.scope.
Jan 23 04:42:38 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:42:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa9aff1031163dbb83df3609c03a02734f653853ea6971dcbab78b06192b9193/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:42:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa9aff1031163dbb83df3609c03a02734f653853ea6971dcbab78b06192b9193/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:42:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa9aff1031163dbb83df3609c03a02734f653853ea6971dcbab78b06192b9193/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:42:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa9aff1031163dbb83df3609c03a02734f653853ea6971dcbab78b06192b9193/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:42:39 np0005593232 podman[283188]: 2026-01-23 09:42:39.009251267 +0000 UTC m=+0.109531446 container init 80c9eb5d74538ca1599c319438ea7470e1577987bd1fff930bf2b28336c2f24a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_snyder, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 04:42:39 np0005593232 podman[283188]: 2026-01-23 09:42:38.921300203 +0000 UTC m=+0.021580382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:42:39 np0005593232 podman[283188]: 2026-01-23 09:42:39.021317925 +0000 UTC m=+0.121598114 container start 80c9eb5d74538ca1599c319438ea7470e1577987bd1fff930bf2b28336c2f24a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 23 04:42:39 np0005593232 podman[283188]: 2026-01-23 09:42:39.025639969 +0000 UTC m=+0.125920128 container attach 80c9eb5d74538ca1599c319438ea7470e1577987bd1fff930bf2b28336c2f24a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_snyder, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:42:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:42:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Jan 23 04:42:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1472: 321 pgs: 321 active+clean; 234 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 2.7 MiB/s wr, 248 op/s
Jan 23 04:42:39 np0005593232 nova_compute[250269]: 2026-01-23 09:42:39.547 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Jan 23 04:42:39 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Jan 23 04:42:39 np0005593232 elegant_snyder[283204]: {
Jan 23 04:42:39 np0005593232 elegant_snyder[283204]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:42:39 np0005593232 elegant_snyder[283204]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:42:39 np0005593232 elegant_snyder[283204]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:42:39 np0005593232 elegant_snyder[283204]:        "osd_id": 0,
Jan 23 04:42:39 np0005593232 elegant_snyder[283204]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:42:39 np0005593232 elegant_snyder[283204]:        "type": "bluestore"
Jan 23 04:42:39 np0005593232 elegant_snyder[283204]:    }
Jan 23 04:42:39 np0005593232 elegant_snyder[283204]: }
Jan 23 04:42:39 np0005593232 systemd[1]: libpod-80c9eb5d74538ca1599c319438ea7470e1577987bd1fff930bf2b28336c2f24a.scope: Deactivated successfully.
Jan 23 04:42:39 np0005593232 podman[283188]: 2026-01-23 09:42:39.89069509 +0000 UTC m=+0.990975269 container died 80c9eb5d74538ca1599c319438ea7470e1577987bd1fff930bf2b28336c2f24a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 23 04:42:39 np0005593232 nova_compute[250269]: 2026-01-23 09:42:39.898 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:39 np0005593232 systemd[1]: var-lib-containers-storage-overlay-fa9aff1031163dbb83df3609c03a02734f653853ea6971dcbab78b06192b9193-merged.mount: Deactivated successfully.
Jan 23 04:42:39 np0005593232 podman[283188]: 2026-01-23 09:42:39.940988649 +0000 UTC m=+1.041268828 container remove 80c9eb5d74538ca1599c319438ea7470e1577987bd1fff930bf2b28336c2f24a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_snyder, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 04:42:39 np0005593232 systemd[1]: libpod-conmon-80c9eb5d74538ca1599c319438ea7470e1577987bd1fff930bf2b28336c2f24a.scope: Deactivated successfully.
Jan 23 04:42:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:42:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:42:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:42:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:42:40 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ab2b70d4-dc7c-4208-81f2-65ea992939cf does not exist
Jan 23 04:42:40 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 037f7f92-0f9f-4941-9d06-68dc82523719 does not exist
Jan 23 04:42:40 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1e13c9db-528f-4557-9fc9-ac3dd8115caa does not exist
Jan 23 04:42:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:42:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:42:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:40.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:40.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1474: 321 pgs: 321 active+clean; 234 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 39 KiB/s wr, 127 op/s
Jan 23 04:42:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Jan 23 04:42:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Jan 23 04:42:41 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Jan 23 04:42:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:41.646 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:42:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:42.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:42.597 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:42:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:42.598 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:42:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:42.599 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:42:42 np0005593232 nova_compute[250269]: 2026-01-23 09:42:42.642 250273 DEBUG oslo_concurrency.lockutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Acquiring lock "9b1cd8b8-1ac9-441e-961f-17dc56cb555c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:42:42 np0005593232 nova_compute[250269]: 2026-01-23 09:42:42.643 250273 DEBUG oslo_concurrency.lockutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Lock "9b1cd8b8-1ac9-441e-961f-17dc56cb555c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:42:42 np0005593232 nova_compute[250269]: 2026-01-23 09:42:42.647 250273 DEBUG nova.compute.manager [req-ac83fbfc-b6ff-4640-a2bc-c74a28186963 req-7a1e66de-e227-4841-b4df-b6a72b0aee00 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Received event network-changed-189994c7-5c8d-40fd-888d-a104e39b5aca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:42:42 np0005593232 nova_compute[250269]: 2026-01-23 09:42:42.647 250273 DEBUG nova.compute.manager [req-ac83fbfc-b6ff-4640-a2bc-c74a28186963 req-7a1e66de-e227-4841-b4df-b6a72b0aee00 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Refreshing instance network info cache due to event network-changed-189994c7-5c8d-40fd-888d-a104e39b5aca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:42:42 np0005593232 nova_compute[250269]: 2026-01-23 09:42:42.648 250273 DEBUG oslo_concurrency.lockutils [req-ac83fbfc-b6ff-4640-a2bc-c74a28186963 req-7a1e66de-e227-4841-b4df-b6a72b0aee00 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-a1208de2-efde-4618-8388-2acfab37582a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:42:42 np0005593232 nova_compute[250269]: 2026-01-23 09:42:42.648 250273 DEBUG oslo_concurrency.lockutils [req-ac83fbfc-b6ff-4640-a2bc-c74a28186963 req-7a1e66de-e227-4841-b4df-b6a72b0aee00 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-a1208de2-efde-4618-8388-2acfab37582a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:42:42 np0005593232 nova_compute[250269]: 2026-01-23 09:42:42.649 250273 DEBUG nova.network.neutron [req-ac83fbfc-b6ff-4640-a2bc-c74a28186963 req-7a1e66de-e227-4841-b4df-b6a72b0aee00 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Refreshing network info cache for port 189994c7-5c8d-40fd-888d-a104e39b5aca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:42:42 np0005593232 nova_compute[250269]: 2026-01-23 09:42:42.682 250273 DEBUG nova.compute.manager [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:42:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:42.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:42 np0005593232 nova_compute[250269]: 2026-01-23 09:42:42.780 250273 DEBUG oslo_concurrency.lockutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:42:42 np0005593232 nova_compute[250269]: 2026-01-23 09:42:42.781 250273 DEBUG oslo_concurrency.lockutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:42:42 np0005593232 nova_compute[250269]: 2026-01-23 09:42:42.790 250273 DEBUG nova.virt.hardware [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:42:42 np0005593232 nova_compute[250269]: 2026-01-23 09:42:42.791 250273 INFO nova.compute.claims [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:42:42 np0005593232 nova_compute[250269]: 2026-01-23 09:42:42.949 250273 DEBUG oslo_concurrency.processutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:42:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1476: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 327 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 7.1 MiB/s wr, 406 op/s
Jan 23 04:42:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:42:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/820369719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:42:43 np0005593232 nova_compute[250269]: 2026-01-23 09:42:43.418 250273 DEBUG oslo_concurrency.processutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:42:43 np0005593232 nova_compute[250269]: 2026-01-23 09:42:43.424 250273 DEBUG nova.compute.provider_tree [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:42:43 np0005593232 nova_compute[250269]: 2026-01-23 09:42:43.456 250273 DEBUG nova.scheduler.client.report [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:42:43 np0005593232 nova_compute[250269]: 2026-01-23 09:42:43.502 250273 DEBUG oslo_concurrency.lockutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:42:43 np0005593232 nova_compute[250269]: 2026-01-23 09:42:43.503 250273 DEBUG nova.compute.manager [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 04:42:43 np0005593232 nova_compute[250269]: 2026-01-23 09:42:43.613 250273 DEBUG nova.compute.manager [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 04:42:43 np0005593232 nova_compute[250269]: 2026-01-23 09:42:43.614 250273 DEBUG nova.network.neutron [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:42:43 np0005593232 nova_compute[250269]: 2026-01-23 09:42:43.640 250273 INFO nova.virt.libvirt.driver [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:42:43 np0005593232 nova_compute[250269]: 2026-01-23 09:42:43.662 250273 DEBUG nova.compute.manager [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:42:43 np0005593232 nova_compute[250269]: 2026-01-23 09:42:43.731 250273 INFO nova.virt.block_device [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Booting with volume 113a804f-8abc-47f6-b4ca-e098579996e7 at /dev/vda#033[00m
Jan 23 04:42:43 np0005593232 nova_compute[250269]: 2026-01-23 09:42:43.826 250273 DEBUG nova.policy [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '83b4563d24244490b58764ee8525c26d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2e0529ecec18434aa4fe09ac251ff46f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 04:42:43 np0005593232 nova_compute[250269]: 2026-01-23 09:42:43.914 250273 DEBUG os_brick.utils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 23 04:42:43 np0005593232 nova_compute[250269]: 2026-01-23 09:42:43.918 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:42:43 np0005593232 nova_compute[250269]: 2026-01-23 09:42:43.939 265031 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:42:43 np0005593232 nova_compute[250269]: 2026-01-23 09:42:43.939 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[b234fa70-f366-43a3-a411-453abea204d1]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:43 np0005593232 nova_compute[250269]: 2026-01-23 09:42:43.940 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:42:43 np0005593232 nova_compute[250269]: 2026-01-23 09:42:43.953 265031 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:42:43 np0005593232 nova_compute[250269]: 2026-01-23 09:42:43.953 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[b4893ed1-e8a5-49c0-85fa-cf7e6983bb22]: (4, ('InitiatorName=iqn.1994-05.com.redhat:c6473626b456', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:43 np0005593232 nova_compute[250269]: 2026-01-23 09:42:43.954 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:42:43 np0005593232 nova_compute[250269]: 2026-01-23 09:42:43.968 265031 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:42:43 np0005593232 nova_compute[250269]: 2026-01-23 09:42:43.969 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[618dbd66-b8a3-4732-926a-569f6061d0e1]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:43 np0005593232 nova_compute[250269]: 2026-01-23 09:42:43.970 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[ce31285d-df8d-482a-b613-1cc6b8307663]: (4, 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:43 np0005593232 nova_compute[250269]: 2026-01-23 09:42:43.970 250273 DEBUG oslo_concurrency.processutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:42:44 np0005593232 nova_compute[250269]: 2026-01-23 09:42:44.002 250273 DEBUG oslo_concurrency.processutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:42:44 np0005593232 nova_compute[250269]: 2026-01-23 09:42:44.005 250273 DEBUG os_brick.initiator.connectors.lightos [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 23 04:42:44 np0005593232 nova_compute[250269]: 2026-01-23 09:42:44.005 250273 DEBUG os_brick.initiator.connectors.lightos [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 23 04:42:44 np0005593232 nova_compute[250269]: 2026-01-23 09:42:44.005 250273 DEBUG os_brick.initiator.connectors.lightos [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 23 04:42:44 np0005593232 nova_compute[250269]: 2026-01-23 09:42:44.005 250273 DEBUG os_brick.utils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] <== get_connector_properties: return (90ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:c6473626b456', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 23 04:42:44 np0005593232 nova_compute[250269]: 2026-01-23 09:42:44.006 250273 DEBUG nova.virt.block_device [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Updating existing volume attachment record: 1b16e998-3fac-40f5-b6f7-4f1cd4125dc9 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 23 04:42:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:42:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:44.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:44 np0005593232 nova_compute[250269]: 2026-01-23 09:42:44.549 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:42:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:44.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:42:44 np0005593232 nova_compute[250269]: 2026-01-23 09:42:44.902 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Jan 23 04:42:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1477: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 327 MiB data, 621 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.9 MiB/s wr, 271 op/s
Jan 23 04:42:45 np0005593232 nova_compute[250269]: 2026-01-23 09:42:45.279 250273 DEBUG nova.compute.manager [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:42:45 np0005593232 nova_compute[250269]: 2026-01-23 09:42:45.280 250273 DEBUG nova.virt.libvirt.driver [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:42:45 np0005593232 nova_compute[250269]: 2026-01-23 09:42:45.281 250273 INFO nova.virt.libvirt.driver [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Creating image(s)#033[00m
Jan 23 04:42:45 np0005593232 nova_compute[250269]: 2026-01-23 09:42:45.281 250273 DEBUG nova.virt.libvirt.driver [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 23 04:42:45 np0005593232 nova_compute[250269]: 2026-01-23 09:42:45.281 250273 DEBUG nova.virt.libvirt.driver [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Ensure instance console log exists: /var/lib/nova/instances/9b1cd8b8-1ac9-441e-961f-17dc56cb555c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:42:45 np0005593232 nova_compute[250269]: 2026-01-23 09:42:45.282 250273 DEBUG oslo_concurrency.lockutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:42:45 np0005593232 nova_compute[250269]: 2026-01-23 09:42:45.282 250273 DEBUG oslo_concurrency.lockutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:42:45 np0005593232 nova_compute[250269]: 2026-01-23 09:42:45.282 250273 DEBUG oslo_concurrency.lockutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:42:45 np0005593232 nova_compute[250269]: 2026-01-23 09:42:45.306 250273 DEBUG nova.network.neutron [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Successfully created port: bd920bc4-48e9-4ef3-8874-84bf9c13b048 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 04:42:45 np0005593232 nova_compute[250269]: 2026-01-23 09:42:45.435 250273 DEBUG nova.network.neutron [req-ac83fbfc-b6ff-4640-a2bc-c74a28186963 req-7a1e66de-e227-4841-b4df-b6a72b0aee00 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Updated VIF entry in instance network info cache for port 189994c7-5c8d-40fd-888d-a104e39b5aca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:42:45 np0005593232 nova_compute[250269]: 2026-01-23 09:42:45.435 250273 DEBUG nova.network.neutron [req-ac83fbfc-b6ff-4640-a2bc-c74a28186963 req-7a1e66de-e227-4841-b4df-b6a72b0aee00 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Updating instance_info_cache with network_info: [{"id": "189994c7-5c8d-40fd-888d-a104e39b5aca", "address": "fa:16:3e:6a:97:e5", "network": {"id": "f19933f5-cfe3-4319-a83b-b72dde692ab6", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1169169895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4fd9229340ed4bf3a3a72baa6985a3e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap189994c7-5c", "ovs_interfaceid": "189994c7-5c8d-40fd-888d-a104e39b5aca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:42:45 np0005593232 nova_compute[250269]: 2026-01-23 09:42:45.471 250273 DEBUG oslo_concurrency.lockutils [req-ac83fbfc-b6ff-4640-a2bc-c74a28186963 req-7a1e66de-e227-4841-b4df-b6a72b0aee00 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-a1208de2-efde-4618-8388-2acfab37582a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:42:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Jan 23 04:42:45 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Jan 23 04:42:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:46.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:46 np0005593232 nova_compute[250269]: 2026-01-23 09:42:46.622 250273 DEBUG nova.network.neutron [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Successfully updated port: bd920bc4-48e9-4ef3-8874-84bf9c13b048 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 04:42:46 np0005593232 nova_compute[250269]: 2026-01-23 09:42:46.641 250273 DEBUG oslo_concurrency.lockutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Acquiring lock "refresh_cache-9b1cd8b8-1ac9-441e-961f-17dc56cb555c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:42:46 np0005593232 nova_compute[250269]: 2026-01-23 09:42:46.642 250273 DEBUG oslo_concurrency.lockutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Acquired lock "refresh_cache-9b1cd8b8-1ac9-441e-961f-17dc56cb555c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:42:46 np0005593232 nova_compute[250269]: 2026-01-23 09:42:46.642 250273 DEBUG nova.network.neutron [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:42:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:42:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:42:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:42:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:42:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004163976712265029 of space, bias 1.0, pg target 1.2491930136795086 quantized to 32 (current 32)
Jan 23 04:42:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:42:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009896487646566163 of space, bias 1.0, pg target 0.2959049806323283 quantized to 32 (current 32)
Jan 23 04:42:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:42:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:42:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:42:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0038431814745021405 of space, bias 1.0, pg target 1.14911126087614 quantized to 32 (current 32)
Jan 23 04:42:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:42:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 23 04:42:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:42:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:42:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:42:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 23 04:42:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:42:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 23 04:42:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:42:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:42:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:42:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 23 04:42:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:46.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:46 np0005593232 nova_compute[250269]: 2026-01-23 09:42:46.866 250273 DEBUG nova.network.neutron [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:42:46 np0005593232 nova_compute[250269]: 2026-01-23 09:42:46.898 250273 DEBUG nova.compute.manager [req-665906fc-bbf2-402a-9c59-21a8f5b021b3 req-3c51fc2e-5eac-47a7-99d0-d3a41f948474 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Received event network-changed-bd920bc4-48e9-4ef3-8874-84bf9c13b048 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:42:46 np0005593232 nova_compute[250269]: 2026-01-23 09:42:46.898 250273 DEBUG nova.compute.manager [req-665906fc-bbf2-402a-9c59-21a8f5b021b3 req-3c51fc2e-5eac-47a7-99d0-d3a41f948474 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Refreshing instance network info cache due to event network-changed-bd920bc4-48e9-4ef3-8874-84bf9c13b048. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:42:46 np0005593232 nova_compute[250269]: 2026-01-23 09:42:46.899 250273 DEBUG oslo_concurrency.lockutils [req-665906fc-bbf2-402a-9c59-21a8f5b021b3 req-3c51fc2e-5eac-47a7-99d0-d3a41f948474 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-9b1cd8b8-1ac9-441e-961f-17dc56cb555c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:42:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1479: 321 pgs: 321 active+clean; 263 MiB data, 591 MiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 5.6 MiB/s wr, 306 op/s
Jan 23 04:42:48 np0005593232 podman[283325]: 2026-01-23 09:42:48.394892459 +0000 UTC m=+0.054865711 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 04:42:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:48.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:48.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:42:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Jan 23 04:42:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Jan 23 04:42:49 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Jan 23 04:42:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1481: 321 pgs: 321 active+clean; 226 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 5.4 MiB/s wr, 334 op/s
Jan 23 04:42:49 np0005593232 nova_compute[250269]: 2026-01-23 09:42:49.552 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:49 np0005593232 nova_compute[250269]: 2026-01-23 09:42:49.904 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:50.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:42:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:50.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.878 250273 DEBUG nova.network.neutron [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Updating instance_info_cache with network_info: [{"id": "bd920bc4-48e9-4ef3-8874-84bf9c13b048", "address": "fa:16:3e:e4:0d:d3", "network": {"id": "93c26ebc-7d13-4039-99a4-82ec186ccf44", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-995634687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e0529ecec18434aa4fe09ac251ff46f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd920bc4-48", "ovs_interfaceid": "bd920bc4-48e9-4ef3-8874-84bf9c13b048", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.909 250273 DEBUG oslo_concurrency.lockutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Releasing lock "refresh_cache-9b1cd8b8-1ac9-441e-961f-17dc56cb555c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.909 250273 DEBUG nova.compute.manager [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Instance network_info: |[{"id": "bd920bc4-48e9-4ef3-8874-84bf9c13b048", "address": "fa:16:3e:e4:0d:d3", "network": {"id": "93c26ebc-7d13-4039-99a4-82ec186ccf44", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-995634687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e0529ecec18434aa4fe09ac251ff46f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd920bc4-48", "ovs_interfaceid": "bd920bc4-48e9-4ef3-8874-84bf9c13b048", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.910 250273 DEBUG oslo_concurrency.lockutils [req-665906fc-bbf2-402a-9c59-21a8f5b021b3 req-3c51fc2e-5eac-47a7-99d0-d3a41f948474 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-9b1cd8b8-1ac9-441e-961f-17dc56cb555c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.911 250273 DEBUG nova.network.neutron [req-665906fc-bbf2-402a-9c59-21a8f5b021b3 req-3c51fc2e-5eac-47a7-99d0-d3a41f948474 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Refreshing network info cache for port bd920bc4-48e9-4ef3-8874-84bf9c13b048 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.918 250273 DEBUG nova.virt.libvirt.driver [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Start _get_guest_xml network_info=[{"id": "bd920bc4-48e9-4ef3-8874-84bf9c13b048", "address": "fa:16:3e:e4:0d:d3", "network": {"id": "93c26ebc-7d13-4039-99a4-82ec186ccf44", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-995634687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e0529ecec18434aa4fe09ac251ff46f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd920bc4-48", "ovs_interfaceid": "bd920bc4-48e9-4ef3-8874-84bf9c13b048", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'delete_on_termination': True, 'attachment_id': '1b16e998-3fac-40f5-b6f7-4f1cd4125dc9', 'device_type': 'disk', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-113a804f-8abc-47f6-b4ca-e098579996e7', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '113a804f-8abc-47f6-b4ca-e098579996e7', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '9b1cd8b8-1ac9-441e-961f-17dc56cb555c', 'attached_at': '', 'detached_at': '', 'volume_id': '113a804f-8abc-47f6-b4ca-e098579996e7', 'serial': '113a804f-8abc-47f6-b4ca-e098579996e7'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.925 250273 WARNING nova.virt.libvirt.driver [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.932 250273 DEBUG nova.virt.libvirt.host [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.932 250273 DEBUG nova.virt.libvirt.host [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.937 250273 DEBUG nova.virt.libvirt.host [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.938 250273 DEBUG nova.virt.libvirt.host [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.940 250273 DEBUG nova.virt.libvirt.driver [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.940 250273 DEBUG nova.virt.hardware [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.941 250273 DEBUG nova.virt.hardware [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.942 250273 DEBUG nova.virt.hardware [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.942 250273 DEBUG nova.virt.hardware [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.942 250273 DEBUG nova.virt.hardware [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.943 250273 DEBUG nova.virt.hardware [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.943 250273 DEBUG nova.virt.hardware [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.944 250273 DEBUG nova.virt.hardware [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.944 250273 DEBUG nova.virt.hardware [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.945 250273 DEBUG nova.virt.hardware [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.945 250273 DEBUG nova.virt.hardware [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.989 250273 DEBUG nova.storage.rbd_utils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] rbd image 9b1cd8b8-1ac9-441e-961f-17dc56cb555c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:42:50 np0005593232 nova_compute[250269]: 2026-01-23 09:42:50.996 250273 DEBUG oslo_concurrency.processutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:42:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1482: 321 pgs: 321 active+clean; 226 MiB data, 573 MiB used, 20 GiB / 21 GiB avail; 64 KiB/s rd, 2.9 KiB/s wr, 88 op/s
Jan 23 04:42:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:42:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2381966123' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.480 250273 DEBUG oslo_concurrency.processutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.544 250273 DEBUG nova.virt.libvirt.vif [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:42:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-1533628909',display_name='tempest-ServersTestBootFromVolume-server-1533628909',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-1533628909',id=48,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCDVXhJkhKwKxHS7yJESnEwsJkVx76V/aGPNj/f/JPsIRZQR0K5ekZE6ebqaO6lv+YE26Xjl+L22ijzmBYVJ7WlL1ALNOAo7jhyaTFnaZLWUKxXvM+mGX2rsNAohFRABIA==',key_name='tempest-keypair-2000474017',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2e0529ecec18434aa4fe09ac251ff46f',ramdisk_id='',reservation_id='r-rtsyge8o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServersTestBootFromVolume-1176222732',owner_user_name='tempest-ServersTestBootFromVolume-1176222732-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:42:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='83b4563d24244490b58764ee8525c26d',uuid=9b1cd8b8-1ac9-441e-961f-17dc56cb555c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd920bc4-48e9-4ef3-8874-84bf9c13b048", "address": "fa:16:3e:e4:0d:d3", "network": {"id": "93c26ebc-7d13-4039-99a4-82ec186ccf44", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-995634687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e0529ecec18434aa4fe09ac251ff46f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd920bc4-48", "ovs_interfaceid": "bd920bc4-48e9-4ef3-8874-84bf9c13b048", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.545 250273 DEBUG nova.network.os_vif_util [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Converting VIF {"id": "bd920bc4-48e9-4ef3-8874-84bf9c13b048", "address": "fa:16:3e:e4:0d:d3", "network": {"id": "93c26ebc-7d13-4039-99a4-82ec186ccf44", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-995634687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e0529ecec18434aa4fe09ac251ff46f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd920bc4-48", "ovs_interfaceid": "bd920bc4-48e9-4ef3-8874-84bf9c13b048", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.546 250273 DEBUG nova.network.os_vif_util [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:0d:d3,bridge_name='br-int',has_traffic_filtering=True,id=bd920bc4-48e9-4ef3-8874-84bf9c13b048,network=Network(93c26ebc-7d13-4039-99a4-82ec186ccf44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd920bc4-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.547 250273 DEBUG nova.objects.instance [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b1cd8b8-1ac9-441e-961f-17dc56cb555c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.581 250273 DEBUG nova.virt.libvirt.driver [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:42:51 np0005593232 nova_compute[250269]:  <uuid>9b1cd8b8-1ac9-441e-961f-17dc56cb555c</uuid>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:  <name>instance-00000030</name>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServersTestBootFromVolume-server-1533628909</nova:name>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:42:50</nova:creationTime>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:42:51 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:        <nova:user uuid="83b4563d24244490b58764ee8525c26d">tempest-ServersTestBootFromVolume-1176222732-project-member</nova:user>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:        <nova:project uuid="2e0529ecec18434aa4fe09ac251ff46f">tempest-ServersTestBootFromVolume-1176222732</nova:project>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:        <nova:port uuid="bd920bc4-48e9-4ef3-8874-84bf9c13b048">
Jan 23 04:42:51 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <entry name="serial">9b1cd8b8-1ac9-441e-961f-17dc56cb555c</entry>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <entry name="uuid">9b1cd8b8-1ac9-441e-961f-17dc56cb555c</entry>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/9b1cd8b8-1ac9-441e-961f-17dc56cb555c_disk.config">
Jan 23 04:42:51 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:42:51 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="volumes/volume-113a804f-8abc-47f6-b4ca-e098579996e7">
Jan 23 04:42:51 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:42:51 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <serial>113a804f-8abc-47f6-b4ca-e098579996e7</serial>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:e4:0d:d3"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <target dev="tapbd920bc4-48"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/9b1cd8b8-1ac9-441e-961f-17dc56cb555c/console.log" append="off"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:42:51 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:42:51 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:42:51 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:42:51 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.583 250273 DEBUG nova.compute.manager [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Preparing to wait for external event network-vif-plugged-bd920bc4-48e9-4ef3-8874-84bf9c13b048 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.583 250273 DEBUG oslo_concurrency.lockutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Acquiring lock "9b1cd8b8-1ac9-441e-961f-17dc56cb555c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.584 250273 DEBUG oslo_concurrency.lockutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Lock "9b1cd8b8-1ac9-441e-961f-17dc56cb555c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.584 250273 DEBUG oslo_concurrency.lockutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Lock "9b1cd8b8-1ac9-441e-961f-17dc56cb555c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.585 250273 DEBUG nova.virt.libvirt.vif [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:42:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-1533628909',display_name='tempest-ServersTestBootFromVolume-server-1533628909',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-1533628909',id=48,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCDVXhJkhKwKxHS7yJESnEwsJkVx76V/aGPNj/f/JPsIRZQR0K5ekZE6ebqaO6lv+YE26Xjl+L22ijzmBYVJ7WlL1ALNOAo7jhyaTFnaZLWUKxXvM+mGX2rsNAohFRABIA==',key_name='tempest-keypair-2000474017',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2e0529ecec18434aa4fe09ac251ff46f',ramdisk_id='',reservation_id='r-rtsyge8o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServersTestBootFromVolume-1176222732',owner_user_name='tempest-ServersTestBootFromVolume-1176222732-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:42:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='83b4563d24244490b58764ee8525c26d',uuid=9b1cd8b8-1ac9-441e-961f-17dc56cb555c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bd920bc4-48e9-4ef3-8874-84bf9c13b048", "address": "fa:16:3e:e4:0d:d3", "network": {"id": "93c26ebc-7d13-4039-99a4-82ec186ccf44", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-995634687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e0529ecec18434aa4fe09ac251ff46f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd920bc4-48", "ovs_interfaceid": "bd920bc4-48e9-4ef3-8874-84bf9c13b048", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.585 250273 DEBUG nova.network.os_vif_util [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Converting VIF {"id": "bd920bc4-48e9-4ef3-8874-84bf9c13b048", "address": "fa:16:3e:e4:0d:d3", "network": {"id": "93c26ebc-7d13-4039-99a4-82ec186ccf44", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-995634687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e0529ecec18434aa4fe09ac251ff46f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd920bc4-48", "ovs_interfaceid": "bd920bc4-48e9-4ef3-8874-84bf9c13b048", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.586 250273 DEBUG nova.network.os_vif_util [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:0d:d3,bridge_name='br-int',has_traffic_filtering=True,id=bd920bc4-48e9-4ef3-8874-84bf9c13b048,network=Network(93c26ebc-7d13-4039-99a4-82ec186ccf44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd920bc4-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.586 250273 DEBUG os_vif [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:0d:d3,bridge_name='br-int',has_traffic_filtering=True,id=bd920bc4-48e9-4ef3-8874-84bf9c13b048,network=Network(93c26ebc-7d13-4039-99a4-82ec186ccf44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd920bc4-48') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.587 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.587 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.588 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.592 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.592 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd920bc4-48, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.593 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbd920bc4-48, col_values=(('external_ids', {'iface-id': 'bd920bc4-48e9-4ef3-8874-84bf9c13b048', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:0d:d3', 'vm-uuid': '9b1cd8b8-1ac9-441e-961f-17dc56cb555c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.595 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:51 np0005593232 NetworkManager[49057]: <info>  [1769161371.5961] manager: (tapbd920bc4-48): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.598 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.602 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.603 250273 INFO os_vif [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:0d:d3,bridge_name='br-int',has_traffic_filtering=True,id=bd920bc4-48e9-4ef3-8874-84bf9c13b048,network=Network(93c26ebc-7d13-4039-99a4-82ec186ccf44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd920bc4-48')#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.683 250273 DEBUG nova.virt.libvirt.driver [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.683 250273 DEBUG nova.virt.libvirt.driver [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.684 250273 DEBUG nova.virt.libvirt.driver [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] No VIF found with MAC fa:16:3e:e4:0d:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.684 250273 INFO nova.virt.libvirt.driver [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Using config drive#033[00m
Jan 23 04:42:51 np0005593232 nova_compute[250269]: 2026-01-23 09:42:51.719 250273 DEBUG nova.storage.rbd_utils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] rbd image 9b1cd8b8-1ac9-441e-961f-17dc56cb555c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:42:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:42:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:52.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:42:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:52.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:52 np0005593232 nova_compute[250269]: 2026-01-23 09:42:52.747 250273 INFO nova.virt.libvirt.driver [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Creating config drive at /var/lib/nova/instances/9b1cd8b8-1ac9-441e-961f-17dc56cb555c/disk.config#033[00m
Jan 23 04:42:52 np0005593232 nova_compute[250269]: 2026-01-23 09:42:52.753 250273 DEBUG oslo_concurrency.processutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b1cd8b8-1ac9-441e-961f-17dc56cb555c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj1u8xo5u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:42:52 np0005593232 nova_compute[250269]: 2026-01-23 09:42:52.911 250273 DEBUG oslo_concurrency.processutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b1cd8b8-1ac9-441e-961f-17dc56cb555c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj1u8xo5u" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:42:52 np0005593232 nova_compute[250269]: 2026-01-23 09:42:52.946 250273 DEBUG nova.storage.rbd_utils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] rbd image 9b1cd8b8-1ac9-441e-961f-17dc56cb555c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:42:52 np0005593232 nova_compute[250269]: 2026-01-23 09:42:52.950 250273 DEBUG oslo_concurrency.processutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b1cd8b8-1ac9-441e-961f-17dc56cb555c/disk.config 9b1cd8b8-1ac9-441e-961f-17dc56cb555c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:42:53 np0005593232 nova_compute[250269]: 2026-01-23 09:42:53.153 250273 DEBUG oslo_concurrency.processutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b1cd8b8-1ac9-441e-961f-17dc56cb555c/disk.config 9b1cd8b8-1ac9-441e-961f-17dc56cb555c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:42:53 np0005593232 nova_compute[250269]: 2026-01-23 09:42:53.155 250273 INFO nova.virt.libvirt.driver [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Deleting local config drive /var/lib/nova/instances/9b1cd8b8-1ac9-441e-961f-17dc56cb555c/disk.config because it was imported into RBD.#033[00m
Jan 23 04:42:53 np0005593232 kernel: tapbd920bc4-48: entered promiscuous mode
Jan 23 04:42:53 np0005593232 NetworkManager[49057]: <info>  [1769161373.2174] manager: (tapbd920bc4-48): new Tun device (/org/freedesktop/NetworkManager/Devices/72)
Jan 23 04:42:53 np0005593232 nova_compute[250269]: 2026-01-23 09:42:53.220 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:53 np0005593232 ovn_controller[151001]: 2026-01-23T09:42:53Z|00134|binding|INFO|Claiming lport bd920bc4-48e9-4ef3-8874-84bf9c13b048 for this chassis.
Jan 23 04:42:53 np0005593232 ovn_controller[151001]: 2026-01-23T09:42:53Z|00135|binding|INFO|bd920bc4-48e9-4ef3-8874-84bf9c13b048: Claiming fa:16:3e:e4:0d:d3 10.100.0.7
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.228 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:0d:d3 10.100.0.7'], port_security=['fa:16:3e:e4:0d:d3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9b1cd8b8-1ac9-441e-961f-17dc56cb555c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93c26ebc-7d13-4039-99a4-82ec186ccf44', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2e0529ecec18434aa4fe09ac251ff46f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '738ee158-14dd-4e68-a2b5-ae763d873278', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d97a1a3-b21f-4c43-ab8b-1309b459c3a0, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=bd920bc4-48e9-4ef3-8874-84bf9c13b048) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.231 161902 INFO neutron.agent.ovn.metadata.agent [-] Port bd920bc4-48e9-4ef3-8874-84bf9c13b048 in datapath 93c26ebc-7d13-4039-99a4-82ec186ccf44 bound to our chassis#033[00m
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.235 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93c26ebc-7d13-4039-99a4-82ec186ccf44#033[00m
Jan 23 04:42:53 np0005593232 ovn_controller[151001]: 2026-01-23T09:42:53Z|00136|binding|INFO|Setting lport bd920bc4-48e9-4ef3-8874-84bf9c13b048 ovn-installed in OVS
Jan 23 04:42:53 np0005593232 ovn_controller[151001]: 2026-01-23T09:42:53Z|00137|binding|INFO|Setting lport bd920bc4-48e9-4ef3-8874-84bf9c13b048 up in Southbound
Jan 23 04:42:53 np0005593232 nova_compute[250269]: 2026-01-23 09:42:53.241 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:53 np0005593232 nova_compute[250269]: 2026-01-23 09:42:53.246 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.251 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3fdd5069-5269-43ed-99de-03eb41e62b88]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.252 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap93c26ebc-71 in ovnmeta-93c26ebc-7d13-4039-99a4-82ec186ccf44 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.258 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap93c26ebc-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.259 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b012a723-6a95-40b6-948b-6330cb21b518]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.260 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[49a17157-d7bd-4e8d-8e44-4048f6d5d73c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:53 np0005593232 systemd-udevd[283464]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:42:53 np0005593232 systemd-machined[215836]: New machine qemu-20-instance-00000030.
Jan 23 04:42:53 np0005593232 NetworkManager[49057]: <info>  [1769161373.2745] device (tapbd920bc4-48): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:42:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1483: 321 pgs: 321 active+clean; 230 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 297 KiB/s rd, 2.0 MiB/s wr, 151 op/s
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.273 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[253ca181-e5f9-4b7a-acef-d50880b85b68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:53 np0005593232 NetworkManager[49057]: <info>  [1769161373.2760] device (tapbd920bc4-48): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:42:53 np0005593232 systemd[1]: Started Virtual Machine qemu-20-instance-00000030.
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.297 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2971e100-c662-4d22-819c-73471f9b3aef]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:53 np0005593232 nova_compute[250269]: 2026-01-23 09:42:53.319 250273 DEBUG nova.network.neutron [req-665906fc-bbf2-402a-9c59-21a8f5b021b3 req-3c51fc2e-5eac-47a7-99d0-d3a41f948474 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Updated VIF entry in instance network info cache for port bd920bc4-48e9-4ef3-8874-84bf9c13b048. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:42:53 np0005593232 nova_compute[250269]: 2026-01-23 09:42:53.320 250273 DEBUG nova.network.neutron [req-665906fc-bbf2-402a-9c59-21a8f5b021b3 req-3c51fc2e-5eac-47a7-99d0-d3a41f948474 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Updating instance_info_cache with network_info: [{"id": "bd920bc4-48e9-4ef3-8874-84bf9c13b048", "address": "fa:16:3e:e4:0d:d3", "network": {"id": "93c26ebc-7d13-4039-99a4-82ec186ccf44", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-995634687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e0529ecec18434aa4fe09ac251ff46f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd920bc4-48", "ovs_interfaceid": "bd920bc4-48e9-4ef3-8874-84bf9c13b048", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.324 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[6ef6a9c6-3814-4158-8ff3-7252954b4c3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:53 np0005593232 NetworkManager[49057]: <info>  [1769161373.3311] manager: (tap93c26ebc-70): new Veth device (/org/freedesktop/NetworkManager/Devices/73)
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.331 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[024b2b36-0e75-4c1e-8093-cac6bf405da1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.364 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[3b892c7c-40ba-474e-bd87-12591b74e4eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.367 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[55a0524d-79e8-4434-88aa-9d1522636e41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:53 np0005593232 nova_compute[250269]: 2026-01-23 09:42:53.378 250273 DEBUG oslo_concurrency.lockutils [req-665906fc-bbf2-402a-9c59-21a8f5b021b3 req-3c51fc2e-5eac-47a7-99d0-d3a41f948474 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-9b1cd8b8-1ac9-441e-961f-17dc56cb555c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:42:53 np0005593232 NetworkManager[49057]: <info>  [1769161373.3903] device (tap93c26ebc-70): carrier: link connected
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.395 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[e46199ec-6195-437a-be47-984d7778aea8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.410 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7b6c1bee-4fd3-4a40-9253-ef95f90579f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93c26ebc-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:ad:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528653, 'reachable_time': 30570, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283496, 'error': None, 'target': 'ovnmeta-93c26ebc-7d13-4039-99a4-82ec186ccf44', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.426 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[751006b0-d521-422e-846f-7f9a6256b097]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:adcd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 528653, 'tstamp': 528653}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283497, 'error': None, 'target': 'ovnmeta-93c26ebc-7d13-4039-99a4-82ec186ccf44', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.441 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4e9cb2f9-ad30-40af-98b7-d07047ae375d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93c26ebc-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:ad:cd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528653, 'reachable_time': 30570, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283498, 'error': None, 'target': 'ovnmeta-93c26ebc-7d13-4039-99a4-82ec186ccf44', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.474 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[870087da-52ec-424a-ba48-9e0cd3778275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.549 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5aee0fef-f382-49a9-923b-ae375ce65d2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.551 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93c26ebc-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.551 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.551 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93c26ebc-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:42:53 np0005593232 nova_compute[250269]: 2026-01-23 09:42:53.553 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:53 np0005593232 NetworkManager[49057]: <info>  [1769161373.5542] manager: (tap93c26ebc-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Jan 23 04:42:53 np0005593232 kernel: tap93c26ebc-70: entered promiscuous mode
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.562 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93c26ebc-70, col_values=(('external_ids', {'iface-id': '21956d04-2e25-4eb5-84f0-3ad5a108d97f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:42:53 np0005593232 nova_compute[250269]: 2026-01-23 09:42:53.563 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:53 np0005593232 ovn_controller[151001]: 2026-01-23T09:42:53Z|00138|binding|INFO|Releasing lport 21956d04-2e25-4eb5-84f0-3ad5a108d97f from this chassis (sb_readonly=0)
Jan 23 04:42:53 np0005593232 nova_compute[250269]: 2026-01-23 09:42:53.564 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:53 np0005593232 nova_compute[250269]: 2026-01-23 09:42:53.579 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.581 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/93c26ebc-7d13-4039-99a4-82ec186ccf44.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/93c26ebc-7d13-4039-99a4-82ec186ccf44.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.582 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ca357a72-d82b-467d-9d8f-8bd4371663e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.582 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-93c26ebc-7d13-4039-99a4-82ec186ccf44
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/93c26ebc-7d13-4039-99a4-82ec186ccf44.pid.haproxy
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 93c26ebc-7d13-4039-99a4-82ec186ccf44
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:42:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:42:53.583 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-93c26ebc-7d13-4039-99a4-82ec186ccf44', 'env', 'PROCESS_TAG=haproxy-93c26ebc-7d13-4039-99a4-82ec186ccf44', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/93c26ebc-7d13-4039-99a4-82ec186ccf44.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 04:42:53 np0005593232 ovn_controller[151001]: 2026-01-23T09:42:53Z|00139|binding|INFO|Releasing lport 21956d04-2e25-4eb5-84f0-3ad5a108d97f from this chassis (sb_readonly=0)
Jan 23 04:42:53 np0005593232 ovn_controller[151001]: 2026-01-23T09:42:53Z|00140|binding|INFO|Releasing lport 03e2fba1-8299-41b0-8205-575cd62d3292 from this chassis (sb_readonly=0)
Jan 23 04:42:53 np0005593232 nova_compute[250269]: 2026-01-23 09:42:53.699 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:53 np0005593232 nova_compute[250269]: 2026-01-23 09:42:53.957 250273 DEBUG nova.compute.manager [req-082a3916-ee84-487d-8ca6-1e2714b60deb req-826e16ca-7e16-46db-bd1d-c8966a3866da 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Received event network-vif-plugged-bd920bc4-48e9-4ef3-8874-84bf9c13b048 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:42:53 np0005593232 nova_compute[250269]: 2026-01-23 09:42:53.957 250273 DEBUG oslo_concurrency.lockutils [req-082a3916-ee84-487d-8ca6-1e2714b60deb req-826e16ca-7e16-46db-bd1d-c8966a3866da 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9b1cd8b8-1ac9-441e-961f-17dc56cb555c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:42:53 np0005593232 nova_compute[250269]: 2026-01-23 09:42:53.957 250273 DEBUG oslo_concurrency.lockutils [req-082a3916-ee84-487d-8ca6-1e2714b60deb req-826e16ca-7e16-46db-bd1d-c8966a3866da 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9b1cd8b8-1ac9-441e-961f-17dc56cb555c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:42:53 np0005593232 nova_compute[250269]: 2026-01-23 09:42:53.958 250273 DEBUG oslo_concurrency.lockutils [req-082a3916-ee84-487d-8ca6-1e2714b60deb req-826e16ca-7e16-46db-bd1d-c8966a3866da 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9b1cd8b8-1ac9-441e-961f-17dc56cb555c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:42:53 np0005593232 nova_compute[250269]: 2026-01-23 09:42:53.958 250273 DEBUG nova.compute.manager [req-082a3916-ee84-487d-8ca6-1e2714b60deb req-826e16ca-7e16-46db-bd1d-c8966a3866da 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Processing event network-vif-plugged-bd920bc4-48e9-4ef3-8874-84bf9c13b048 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 04:42:54 np0005593232 podman[283564]: 2026-01-23 09:42:54.01973947 +0000 UTC m=+0.046649575 container create 0d60895af74f8458ff57810e028ff6b05d8f21a12dc355f252dff93696b6777e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93c26ebc-7d13-4039-99a4-82ec186ccf44, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:42:54 np0005593232 systemd[1]: Started libpod-conmon-0d60895af74f8458ff57810e028ff6b05d8f21a12dc355f252dff93696b6777e.scope.
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.075 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161374.0747688, 9b1cd8b8-1ac9-441e-961f-17dc56cb555c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.077 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] VM Started (Lifecycle Event)#033[00m
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.079 250273 DEBUG nova.compute.manager [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.084 250273 DEBUG nova.virt.libvirt.driver [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.087 250273 INFO nova.virt.libvirt.driver [-] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Instance spawned successfully.#033[00m
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.088 250273 DEBUG nova.virt.libvirt.driver [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:42:54 np0005593232 podman[283564]: 2026-01-23 09:42:53.995262575 +0000 UTC m=+0.022172700 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:42:54 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:42:54 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acf391568b7ae678b3d317f5a2d00c9873049e32f969d71dba48018d5ee09cce/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:42:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:42:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.109 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.112 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:42:54 np0005593232 podman[283564]: 2026-01-23 09:42:54.115251461 +0000 UTC m=+0.142161566 container init 0d60895af74f8458ff57810e028ff6b05d8f21a12dc355f252dff93696b6777e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93c26ebc-7d13-4039-99a4-82ec186ccf44, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.125 250273 DEBUG nova.virt.libvirt.driver [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.125 250273 DEBUG nova.virt.libvirt.driver [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:42:54 np0005593232 podman[283564]: 2026-01-23 09:42:54.12630043 +0000 UTC m=+0.153210535 container start 0d60895af74f8458ff57810e028ff6b05d8f21a12dc355f252dff93696b6777e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93c26ebc-7d13-4039-99a4-82ec186ccf44, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.126 250273 DEBUG nova.virt.libvirt.driver [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.126 250273 DEBUG nova.virt.libvirt.driver [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.126 250273 DEBUG nova.virt.libvirt.driver [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.127 250273 DEBUG nova.virt.libvirt.driver [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:42:54 np0005593232 neutron-haproxy-ovnmeta-93c26ebc-7d13-4039-99a4-82ec186ccf44[283587]: [NOTICE]   (283591) : New worker (283593) forked
Jan 23 04:42:54 np0005593232 neutron-haproxy-ovnmeta-93c26ebc-7d13-4039-99a4-82ec186ccf44[283587]: [NOTICE]   (283591) : Loading success.
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.153 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.154 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161374.0758944, 9b1cd8b8-1ac9-441e-961f-17dc56cb555c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.154 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.215 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.219 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161374.0827575, 9b1cd8b8-1ac9-441e-961f-17dc56cb555c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.219 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.228 250273 INFO nova.compute.manager [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Took 8.95 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.228 250273 DEBUG nova.compute.manager [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.239 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.242 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.274 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.300 250273 INFO nova.compute.manager [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Took 11.55 seconds to build instance.#033[00m
Jan 23 04:42:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Jan 23 04:42:54 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.357 250273 DEBUG oslo_concurrency.lockutils [None req-d23b23d3-2236-4c62-854d-5afcaa5938cf 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Lock "9b1cd8b8-1ac9-441e-961f-17dc56cb555c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:42:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:54.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:54 np0005593232 nova_compute[250269]: 2026-01-23 09:42:54.556 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:54.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1485: 321 pgs: 321 active+clean; 230 MiB data, 575 MiB used, 20 GiB / 21 GiB avail; 264 KiB/s rd, 2.0 MiB/s wr, 104 op/s
Jan 23 04:42:56 np0005593232 nova_compute[250269]: 2026-01-23 09:42:56.157 250273 DEBUG nova.compute.manager [req-3a543eec-a79a-4ae4-8684-6e357a7bd282 req-627f33b0-8aa6-48ef-8b93-52f9ef9e64fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Received event network-vif-plugged-bd920bc4-48e9-4ef3-8874-84bf9c13b048 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:42:56 np0005593232 nova_compute[250269]: 2026-01-23 09:42:56.157 250273 DEBUG oslo_concurrency.lockutils [req-3a543eec-a79a-4ae4-8684-6e357a7bd282 req-627f33b0-8aa6-48ef-8b93-52f9ef9e64fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9b1cd8b8-1ac9-441e-961f-17dc56cb555c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:42:56 np0005593232 nova_compute[250269]: 2026-01-23 09:42:56.158 250273 DEBUG oslo_concurrency.lockutils [req-3a543eec-a79a-4ae4-8684-6e357a7bd282 req-627f33b0-8aa6-48ef-8b93-52f9ef9e64fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9b1cd8b8-1ac9-441e-961f-17dc56cb555c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:42:56 np0005593232 nova_compute[250269]: 2026-01-23 09:42:56.158 250273 DEBUG oslo_concurrency.lockutils [req-3a543eec-a79a-4ae4-8684-6e357a7bd282 req-627f33b0-8aa6-48ef-8b93-52f9ef9e64fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9b1cd8b8-1ac9-441e-961f-17dc56cb555c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:42:56 np0005593232 nova_compute[250269]: 2026-01-23 09:42:56.158 250273 DEBUG nova.compute.manager [req-3a543eec-a79a-4ae4-8684-6e357a7bd282 req-627f33b0-8aa6-48ef-8b93-52f9ef9e64fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] No waiting events found dispatching network-vif-plugged-bd920bc4-48e9-4ef3-8874-84bf9c13b048 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:42:56 np0005593232 nova_compute[250269]: 2026-01-23 09:42:56.158 250273 WARNING nova.compute.manager [req-3a543eec-a79a-4ae4-8684-6e357a7bd282 req-627f33b0-8aa6-48ef-8b93-52f9ef9e64fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Received unexpected event network-vif-plugged-bd920bc4-48e9-4ef3-8874-84bf9c13b048 for instance with vm_state active and task_state None.#033[00m
Jan 23 04:42:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:56.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:56 np0005593232 nova_compute[250269]: 2026-01-23 09:42:56.597 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:56.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1486: 321 pgs: 321 active+clean; 262 MiB data, 614 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.2 MiB/s wr, 186 op/s
Jan 23 04:42:58 np0005593232 nova_compute[250269]: 2026-01-23 09:42:58.255 250273 DEBUG nova.compute.manager [req-e024a07c-c095-4e92-a03c-6719a182c155 req-ddf6d444-2dce-408a-8efb-1047aa6a594f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Received event network-changed-bd920bc4-48e9-4ef3-8874-84bf9c13b048 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:42:58 np0005593232 nova_compute[250269]: 2026-01-23 09:42:58.256 250273 DEBUG nova.compute.manager [req-e024a07c-c095-4e92-a03c-6719a182c155 req-ddf6d444-2dce-408a-8efb-1047aa6a594f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Refreshing instance network info cache due to event network-changed-bd920bc4-48e9-4ef3-8874-84bf9c13b048. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:42:58 np0005593232 nova_compute[250269]: 2026-01-23 09:42:58.256 250273 DEBUG oslo_concurrency.lockutils [req-e024a07c-c095-4e92-a03c-6719a182c155 req-ddf6d444-2dce-408a-8efb-1047aa6a594f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-9b1cd8b8-1ac9-441e-961f-17dc56cb555c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:42:58 np0005593232 nova_compute[250269]: 2026-01-23 09:42:58.256 250273 DEBUG oslo_concurrency.lockutils [req-e024a07c-c095-4e92-a03c-6719a182c155 req-ddf6d444-2dce-408a-8efb-1047aa6a594f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-9b1cd8b8-1ac9-441e-961f-17dc56cb555c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:42:58 np0005593232 nova_compute[250269]: 2026-01-23 09:42:58.256 250273 DEBUG nova.network.neutron [req-e024a07c-c095-4e92-a03c-6719a182c155 req-ddf6d444-2dce-408a-8efb-1047aa6a594f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Refreshing network info cache for port bd920bc4-48e9-4ef3-8874-84bf9c13b048 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:42:58 np0005593232 ovn_controller[151001]: 2026-01-23T09:42:58Z|00141|binding|INFO|Releasing lport 21956d04-2e25-4eb5-84f0-3ad5a108d97f from this chassis (sb_readonly=0)
Jan 23 04:42:58 np0005593232 ovn_controller[151001]: 2026-01-23T09:42:58Z|00142|binding|INFO|Releasing lport 03e2fba1-8299-41b0-8205-575cd62d3292 from this chassis (sb_readonly=0)
Jan 23 04:42:58 np0005593232 nova_compute[250269]: 2026-01-23 09:42:58.433 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:42:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:42:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:42:58.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:42:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:42:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:42:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:42:58.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:42:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:42:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1487: 321 pgs: 321 active+clean; 293 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 209 op/s
Jan 23 04:42:59 np0005593232 nova_compute[250269]: 2026-01-23 09:42:59.558 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:00 np0005593232 nova_compute[250269]: 2026-01-23 09:43:00.251 250273 DEBUG nova.network.neutron [req-e024a07c-c095-4e92-a03c-6719a182c155 req-ddf6d444-2dce-408a-8efb-1047aa6a594f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Updated VIF entry in instance network info cache for port bd920bc4-48e9-4ef3-8874-84bf9c13b048. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:43:00 np0005593232 nova_compute[250269]: 2026-01-23 09:43:00.252 250273 DEBUG nova.network.neutron [req-e024a07c-c095-4e92-a03c-6719a182c155 req-ddf6d444-2dce-408a-8efb-1047aa6a594f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Updating instance_info_cache with network_info: [{"id": "bd920bc4-48e9-4ef3-8874-84bf9c13b048", "address": "fa:16:3e:e4:0d:d3", "network": {"id": "93c26ebc-7d13-4039-99a4-82ec186ccf44", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-995634687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e0529ecec18434aa4fe09ac251ff46f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd920bc4-48", "ovs_interfaceid": "bd920bc4-48e9-4ef3-8874-84bf9c13b048", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:43:00 np0005593232 nova_compute[250269]: 2026-01-23 09:43:00.286 250273 DEBUG oslo_concurrency.lockutils [req-e024a07c-c095-4e92-a03c-6719a182c155 req-ddf6d444-2dce-408a-8efb-1047aa6a594f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-9b1cd8b8-1ac9-441e-961f-17dc56cb555c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:43:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:43:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:00.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:43:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:00.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1488: 321 pgs: 321 active+clean; 293 MiB data, 626 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.7 MiB/s wr, 209 op/s
Jan 23 04:43:01 np0005593232 nova_compute[250269]: 2026-01-23 09:43:01.601 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:02.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:02.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1489: 321 pgs: 321 active+clean; 293 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.1 MiB/s wr, 170 op/s
Jan 23 04:43:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:43:04 np0005593232 nova_compute[250269]: 2026-01-23 09:43:04.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:43:04 np0005593232 nova_compute[250269]: 2026-01-23 09:43:04.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 23 04:43:04 np0005593232 nova_compute[250269]: 2026-01-23 09:43:04.562 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:04.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:04.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1490: 321 pgs: 321 active+clean; 293 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 155 op/s
Jan 23 04:43:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Jan 23 04:43:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:06.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:06 np0005593232 nova_compute[250269]: 2026-01-23 09:43:06.604 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Jan 23 04:43:06 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Jan 23 04:43:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:06.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:43:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1492: 321 pgs: 321 active+clean; 293 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 117 op/s
Jan 23 04:43:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:43:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:43:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:43:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:43:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:43:08 np0005593232 podman[283659]: 2026-01-23 09:43:08.484220884 +0000 UTC m=+0.128578375 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 04:43:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:08.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Jan 23 04:43:08 np0005593232 ovn_controller[151001]: 2026-01-23T09:43:08Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e4:0d:d3 10.100.0.7
Jan 23 04:43:08 np0005593232 ovn_controller[151001]: 2026-01-23T09:43:08Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e4:0d:d3 10.100.0.7
Jan 23 04:43:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Jan 23 04:43:08 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Jan 23 04:43:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:43:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:08.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:43:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:43:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1494: 321 pgs: 321 active+clean; 296 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 600 KiB/s wr, 144 op/s
Jan 23 04:43:09 np0005593232 nova_compute[250269]: 2026-01-23 09:43:09.378 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:43:09 np0005593232 nova_compute[250269]: 2026-01-23 09:43:09.564 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Jan 23 04:43:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Jan 23 04:43:09 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Jan 23 04:43:10 np0005593232 nova_compute[250269]: 2026-01-23 09:43:10.288 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:43:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:10.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:43:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:10.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:43:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1496: 321 pgs: 321 active+clean; 296 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 750 KiB/s wr, 173 op/s
Jan 23 04:43:11 np0005593232 nova_compute[250269]: 2026-01-23 09:43:11.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:43:11 np0005593232 nova_compute[250269]: 2026-01-23 09:43:11.311 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:11 np0005593232 nova_compute[250269]: 2026-01-23 09:43:11.608 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:43:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:12.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:43:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:12.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1497: 321 pgs: 321 active+clean; 372 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 7.1 MiB/s wr, 344 op/s
Jan 23 04:43:13 np0005593232 nova_compute[250269]: 2026-01-23 09:43:13.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:43:13 np0005593232 nova_compute[250269]: 2026-01-23 09:43:13.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:43:13 np0005593232 nova_compute[250269]: 2026-01-23 09:43:13.964 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-a1208de2-efde-4618-8388-2acfab37582a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:43:13 np0005593232 nova_compute[250269]: 2026-01-23 09:43:13.964 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-a1208de2-efde-4618-8388-2acfab37582a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:43:13 np0005593232 nova_compute[250269]: 2026-01-23 09:43:13.965 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: a1208de2-efde-4618-8388-2acfab37582a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 04:43:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:43:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Jan 23 04:43:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Jan 23 04:43:14 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Jan 23 04:43:14 np0005593232 nova_compute[250269]: 2026-01-23 09:43:14.564 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:14.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:14 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 23 04:43:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:14.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:14 np0005593232 nova_compute[250269]: 2026-01-23 09:43:14.745 250273 DEBUG oslo_concurrency.lockutils [None req-c8985fc2-0fe8-43e5-96dd-ef83a59bd1cc 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Acquiring lock "a1208de2-efde-4618-8388-2acfab37582a" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:43:14 np0005593232 nova_compute[250269]: 2026-01-23 09:43:14.745 250273 DEBUG oslo_concurrency.lockutils [None req-c8985fc2-0fe8-43e5-96dd-ef83a59bd1cc 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lock "a1208de2-efde-4618-8388-2acfab37582a" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:43:14 np0005593232 nova_compute[250269]: 2026-01-23 09:43:14.765 250273 DEBUG nova.objects.instance [None req-c8985fc2-0fe8-43e5-96dd-ef83a59bd1cc 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lazy-loading 'flavor' on Instance uuid a1208de2-efde-4618-8388-2acfab37582a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:43:14 np0005593232 nova_compute[250269]: 2026-01-23 09:43:14.807 250273 DEBUG oslo_concurrency.lockutils [None req-c8985fc2-0fe8-43e5-96dd-ef83a59bd1cc 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lock "a1208de2-efde-4618-8388-2acfab37582a" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:43:15 np0005593232 nova_compute[250269]: 2026-01-23 09:43:15.131 250273 DEBUG oslo_concurrency.lockutils [None req-c8985fc2-0fe8-43e5-96dd-ef83a59bd1cc 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Acquiring lock "a1208de2-efde-4618-8388-2acfab37582a" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:43:15 np0005593232 nova_compute[250269]: 2026-01-23 09:43:15.132 250273 DEBUG oslo_concurrency.lockutils [None req-c8985fc2-0fe8-43e5-96dd-ef83a59bd1cc 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lock "a1208de2-efde-4618-8388-2acfab37582a" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:43:15 np0005593232 nova_compute[250269]: 2026-01-23 09:43:15.132 250273 INFO nova.compute.manager [None req-c8985fc2-0fe8-43e5-96dd-ef83a59bd1cc 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Attaching volume 99bccbe9-de42-409d-aa8f-e509f6080e7b to /dev/vdb#033[00m
Jan 23 04:43:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1499: 321 pgs: 321 active+clean; 372 MiB data, 673 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 6.5 MiB/s wr, 219 op/s
Jan 23 04:43:15 np0005593232 nova_compute[250269]: 2026-01-23 09:43:15.366 250273 DEBUG os_brick.utils [None req-c8985fc2-0fe8-43e5-96dd-ef83a59bd1cc 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 23 04:43:15 np0005593232 nova_compute[250269]: 2026-01-23 09:43:15.368 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:43:15 np0005593232 nova_compute[250269]: 2026-01-23 09:43:15.379 265031 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:43:15 np0005593232 nova_compute[250269]: 2026-01-23 09:43:15.380 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[2866a9f2-5a2d-40da-b158-db9040430c8d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:43:15 np0005593232 nova_compute[250269]: 2026-01-23 09:43:15.381 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:43:15 np0005593232 nova_compute[250269]: 2026-01-23 09:43:15.398 265031 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:43:15 np0005593232 nova_compute[250269]: 2026-01-23 09:43:15.399 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[64178ebc-f4ad-44ca-b62b-aa5363e76b4c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:c6473626b456', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:43:15 np0005593232 nova_compute[250269]: 2026-01-23 09:43:15.401 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:43:15 np0005593232 nova_compute[250269]: 2026-01-23 09:43:15.417 265031 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:43:15 np0005593232 nova_compute[250269]: 2026-01-23 09:43:15.418 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[3a5c5a8d-13bf-4757-8789-22b5366edfc3]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:43:15 np0005593232 nova_compute[250269]: 2026-01-23 09:43:15.420 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[514e69c5-19e9-4000-8f8d-d2f1e9854ac1]: (4, 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:43:15 np0005593232 nova_compute[250269]: 2026-01-23 09:43:15.420 250273 DEBUG oslo_concurrency.processutils [None req-c8985fc2-0fe8-43e5-96dd-ef83a59bd1cc 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:43:15 np0005593232 nova_compute[250269]: 2026-01-23 09:43:15.462 250273 DEBUG oslo_concurrency.processutils [None req-c8985fc2-0fe8-43e5-96dd-ef83a59bd1cc 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] CMD "nvme version" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:43:15 np0005593232 nova_compute[250269]: 2026-01-23 09:43:15.465 250273 DEBUG os_brick.initiator.connectors.lightos [None req-c8985fc2-0fe8-43e5-96dd-ef83a59bd1cc 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 23 04:43:15 np0005593232 nova_compute[250269]: 2026-01-23 09:43:15.465 250273 DEBUG os_brick.initiator.connectors.lightos [None req-c8985fc2-0fe8-43e5-96dd-ef83a59bd1cc 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 23 04:43:15 np0005593232 nova_compute[250269]: 2026-01-23 09:43:15.465 250273 DEBUG os_brick.initiator.connectors.lightos [None req-c8985fc2-0fe8-43e5-96dd-ef83a59bd1cc 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 23 04:43:15 np0005593232 nova_compute[250269]: 2026-01-23 09:43:15.465 250273 DEBUG os_brick.utils [None req-c8985fc2-0fe8-43e5-96dd-ef83a59bd1cc 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] <== get_connector_properties: return (99ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:c6473626b456', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 23 04:43:15 np0005593232 nova_compute[250269]: 2026-01-23 09:43:15.466 250273 DEBUG nova.virt.block_device [None req-c8985fc2-0fe8-43e5-96dd-ef83a59bd1cc 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Updating existing volume attachment record: 1008aa32-dd0a-418d-a127-644d0bbd6028 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 23 04:43:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:43:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2312847304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:43:16 np0005593232 nova_compute[250269]: 2026-01-23 09:43:16.259 250273 DEBUG nova.objects.instance [None req-c8985fc2-0fe8-43e5-96dd-ef83a59bd1cc 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lazy-loading 'flavor' on Instance uuid a1208de2-efde-4618-8388-2acfab37582a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:43:16 np0005593232 nova_compute[250269]: 2026-01-23 09:43:16.287 250273 DEBUG nova.virt.libvirt.driver [None req-c8985fc2-0fe8-43e5-96dd-ef83a59bd1cc 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Attempting to attach volume 99bccbe9-de42-409d-aa8f-e509f6080e7b with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 23 04:43:16 np0005593232 nova_compute[250269]: 2026-01-23 09:43:16.292 250273 DEBUG nova.virt.libvirt.guest [None req-c8985fc2-0fe8-43e5-96dd-ef83a59bd1cc 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] attach device xml: <disk type="network" device="disk">
Jan 23 04:43:16 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 04:43:16 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-99bccbe9-de42-409d-aa8f-e509f6080e7b">
Jan 23 04:43:16 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 04:43:16 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 04:43:16 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 04:43:16 np0005593232 nova_compute[250269]:  </source>
Jan 23 04:43:16 np0005593232 nova_compute[250269]:  <auth username="openstack">
Jan 23 04:43:16 np0005593232 nova_compute[250269]:    <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:43:16 np0005593232 nova_compute[250269]:  </auth>
Jan 23 04:43:16 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 04:43:16 np0005593232 nova_compute[250269]:  <serial>99bccbe9-de42-409d-aa8f-e509f6080e7b</serial>
Jan 23 04:43:16 np0005593232 nova_compute[250269]:  <shareable/>
Jan 23 04:43:16 np0005593232 nova_compute[250269]: </disk>
Jan 23 04:43:16 np0005593232 nova_compute[250269]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 23 04:43:16 np0005593232 nova_compute[250269]: 2026-01-23 09:43:16.454 250273 DEBUG nova.virt.libvirt.driver [None req-c8985fc2-0fe8-43e5-96dd-ef83a59bd1cc 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:43:16 np0005593232 nova_compute[250269]: 2026-01-23 09:43:16.456 250273 DEBUG nova.virt.libvirt.driver [None req-c8985fc2-0fe8-43e5-96dd-ef83a59bd1cc 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:43:16 np0005593232 nova_compute[250269]: 2026-01-23 09:43:16.457 250273 DEBUG nova.virt.libvirt.driver [None req-c8985fc2-0fe8-43e5-96dd-ef83a59bd1cc 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:43:16 np0005593232 nova_compute[250269]: 2026-01-23 09:43:16.458 250273 DEBUG nova.virt.libvirt.driver [None req-c8985fc2-0fe8-43e5-96dd-ef83a59bd1cc 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] No VIF found with MAC fa:16:3e:6a:97:e5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:43:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:16.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:16 np0005593232 nova_compute[250269]: 2026-01-23 09:43:16.612 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:16 np0005593232 nova_compute[250269]: 2026-01-23 09:43:16.729 250273 DEBUG oslo_concurrency.lockutils [None req-c8985fc2-0fe8-43e5-96dd-ef83a59bd1cc 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lock "a1208de2-efde-4618-8388-2acfab37582a" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:43:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:16.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1500: 321 pgs: 321 active+clean; 391 MiB data, 704 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 7.0 MiB/s wr, 413 op/s
Jan 23 04:43:18 np0005593232 nova_compute[250269]: 2026-01-23 09:43:18.321 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: a1208de2-efde-4618-8388-2acfab37582a] Updating instance_info_cache with network_info: [{"id": "189994c7-5c8d-40fd-888d-a104e39b5aca", "address": "fa:16:3e:6a:97:e5", "network": {"id": "f19933f5-cfe3-4319-a83b-b72dde692ab6", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1169169895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4fd9229340ed4bf3a3a72baa6985a3e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap189994c7-5c", "ovs_interfaceid": "189994c7-5c8d-40fd-888d-a104e39b5aca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:43:18 np0005593232 nova_compute[250269]: 2026-01-23 09:43:18.344 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-a1208de2-efde-4618-8388-2acfab37582a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:43:18 np0005593232 nova_compute[250269]: 2026-01-23 09:43:18.345 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: a1208de2-efde-4618-8388-2acfab37582a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 04:43:18 np0005593232 nova_compute[250269]: 2026-01-23 09:43:18.345 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:43:18 np0005593232 nova_compute[250269]: 2026-01-23 09:43:18.346 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:43:18 np0005593232 nova_compute[250269]: 2026-01-23 09:43:18.346 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:43:18 np0005593232 nova_compute[250269]: 2026-01-23 09:43:18.346 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:43:18 np0005593232 nova_compute[250269]: 2026-01-23 09:43:18.346 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:43:18 np0005593232 nova_compute[250269]: 2026-01-23 09:43:18.391 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:43:18 np0005593232 nova_compute[250269]: 2026-01-23 09:43:18.391 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:43:18 np0005593232 nova_compute[250269]: 2026-01-23 09:43:18.392 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:43:18 np0005593232 nova_compute[250269]: 2026-01-23 09:43:18.392 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:43:18 np0005593232 nova_compute[250269]: 2026-01-23 09:43:18.392 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:43:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:18.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:18.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:43:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1819907217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:43:18 np0005593232 nova_compute[250269]: 2026-01-23 09:43:18.877 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:43:18 np0005593232 podman[283791]: 2026-01-23 09:43:18.988222139 +0000 UTC m=+0.055838640 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 23 04:43:18 np0005593232 nova_compute[250269]: 2026-01-23 09:43:18.991 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:43:18 np0005593232 nova_compute[250269]: 2026-01-23 09:43:18.992 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:43:18 np0005593232 nova_compute[250269]: 2026-01-23 09:43:18.992 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:43:18 np0005593232 nova_compute[250269]: 2026-01-23 09:43:18.996 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:43:18 np0005593232 nova_compute[250269]: 2026-01-23 09:43:18.996 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000030 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:43:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:43:19 np0005593232 nova_compute[250269]: 2026-01-23 09:43:19.191 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:43:19 np0005593232 nova_compute[250269]: 2026-01-23 09:43:19.192 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4251MB free_disk=20.86325454711914GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:43:19 np0005593232 nova_compute[250269]: 2026-01-23 09:43:19.193 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:43:19 np0005593232 nova_compute[250269]: 2026-01-23 09:43:19.193 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:43:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1501: 321 pgs: 321 active+clean; 405 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.2 MiB/s wr, 443 op/s
Jan 23 04:43:19 np0005593232 nova_compute[250269]: 2026-01-23 09:43:19.368 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance a1208de2-efde-4618-8388-2acfab37582a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:43:19 np0005593232 nova_compute[250269]: 2026-01-23 09:43:19.369 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 9b1cd8b8-1ac9-441e-961f-17dc56cb555c actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:43:19 np0005593232 nova_compute[250269]: 2026-01-23 09:43:19.369 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:43:19 np0005593232 nova_compute[250269]: 2026-01-23 09:43:19.369 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:43:19 np0005593232 nova_compute[250269]: 2026-01-23 09:43:19.476 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:19 np0005593232 nova_compute[250269]: 2026-01-23 09:43:19.548 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:43:19 np0005593232 nova_compute[250269]: 2026-01-23 09:43:19.582 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:43:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/200743445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:43:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:43:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2684931260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:43:20 np0005593232 nova_compute[250269]: 2026-01-23 09:43:20.029 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:43:20 np0005593232 nova_compute[250269]: 2026-01-23 09:43:20.039 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:43:20 np0005593232 nova_compute[250269]: 2026-01-23 09:43:20.061 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:43:20 np0005593232 nova_compute[250269]: 2026-01-23 09:43:20.101 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:43:20 np0005593232 nova_compute[250269]: 2026-01-23 09:43:20.102 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:43:20 np0005593232 nova_compute[250269]: 2026-01-23 09:43:20.103 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:43:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:20.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:20.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:21 np0005593232 nova_compute[250269]: 2026-01-23 09:43:21.075 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:43:21 np0005593232 nova_compute[250269]: 2026-01-23 09:43:21.108 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:43:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1502: 321 pgs: 321 active+clean; 405 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 6.8 MiB/s wr, 422 op/s
Jan 23 04:43:21 np0005593232 nova_compute[250269]: 2026-01-23 09:43:21.616 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:43:22.567720) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161402568069, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1017, "num_deletes": 256, "total_data_size": 1372825, "memory_usage": 1392240, "flush_reason": "Manual Compaction"}
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161402593022, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1356601, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32874, "largest_seqno": 33890, "table_properties": {"data_size": 1351584, "index_size": 2477, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11783, "raw_average_key_size": 20, "raw_value_size": 1341192, "raw_average_value_size": 2348, "num_data_blocks": 107, "num_entries": 571, "num_filter_entries": 571, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161340, "oldest_key_time": 1769161340, "file_creation_time": 1769161402, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:43:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 25410 microseconds, and 10841 cpu microseconds.
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:43:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:43:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:22.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:43:22.593284) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1356601 bytes OK
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:43:22.593378) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:43:22.596892) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:43:22.596924) EVENT_LOG_v1 {"time_micros": 1769161402596918, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:43:22.596943) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1367922, prev total WAL file size 1367922, number of live WAL files 2.
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:43:22.598110) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1324KB)], [71(8652KB)]
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161402598311, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 10216701, "oldest_snapshot_seqno": -1}
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5827 keys, 8326751 bytes, temperature: kUnknown
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161402705153, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 8326751, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8289271, "index_size": 21818, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14597, "raw_key_size": 149718, "raw_average_key_size": 25, "raw_value_size": 8185930, "raw_average_value_size": 1404, "num_data_blocks": 874, "num_entries": 5827, "num_filter_entries": 5827, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769161402, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:43:22.705625) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 8326751 bytes
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:43:22.707631) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 95.5 rd, 77.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.4 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(13.7) write-amplify(6.1) OK, records in: 6359, records dropped: 532 output_compression: NoCompression
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:43:22.707667) EVENT_LOG_v1 {"time_micros": 1769161402707655, "job": 40, "event": "compaction_finished", "compaction_time_micros": 106966, "compaction_time_cpu_micros": 42354, "output_level": 6, "num_output_files": 1, "total_output_size": 8326751, "num_input_records": 6359, "num_output_records": 5827, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161402708563, "job": 40, "event": "table_file_deletion", "file_number": 73}
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161402712622, "job": 40, "event": "table_file_deletion", "file_number": 71}
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:43:22.597861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:43:22.712789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:43:22.712800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:43:22.712805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:43:22.712809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:43:22 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:43:22.712813) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:43:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:43:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:22.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:43:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1503: 321 pgs: 321 active+clean; 405 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 544 KiB/s rd, 2.6 MiB/s wr, 317 op/s
Jan 23 04:43:23 np0005593232 nova_compute[250269]: 2026-01-23 09:43:23.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:43:23 np0005593232 nova_compute[250269]: 2026-01-23 09:43:23.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 23 04:43:23 np0005593232 nova_compute[250269]: 2026-01-23 09:43:23.367 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 23 04:43:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:43:24 np0005593232 nova_compute[250269]: 2026-01-23 09:43:24.569 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:24.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:24.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1504: 321 pgs: 321 active+clean; 405 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 487 KiB/s rd, 2.3 MiB/s wr, 285 op/s
Jan 23 04:43:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:43:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:26.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:43:26 np0005593232 nova_compute[250269]: 2026-01-23 09:43:26.620 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:43:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:26.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:43:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1505: 321 pgs: 321 active+clean; 431 MiB data, 729 MiB used, 20 GiB / 21 GiB avail; 464 KiB/s rd, 3.2 MiB/s wr, 280 op/s
Jan 23 04:43:27 np0005593232 nova_compute[250269]: 2026-01-23 09:43:27.907 250273 DEBUG oslo_concurrency.lockutils [None req-aa9d30f3-a209-4482-a581-6aa320252552 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Acquiring lock "a1208de2-efde-4618-8388-2acfab37582a" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:43:27 np0005593232 nova_compute[250269]: 2026-01-23 09:43:27.907 250273 DEBUG oslo_concurrency.lockutils [None req-aa9d30f3-a209-4482-a581-6aa320252552 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lock "a1208de2-efde-4618-8388-2acfab37582a" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:43:27 np0005593232 nova_compute[250269]: 2026-01-23 09:43:27.932 250273 INFO nova.compute.manager [None req-aa9d30f3-a209-4482-a581-6aa320252552 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Detaching volume 99bccbe9-de42-409d-aa8f-e509f6080e7b#033[00m
Jan 23 04:43:28 np0005593232 nova_compute[250269]: 2026-01-23 09:43:28.147 250273 INFO nova.virt.block_device [None req-aa9d30f3-a209-4482-a581-6aa320252552 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Attempting to driver detach volume 99bccbe9-de42-409d-aa8f-e509f6080e7b from mountpoint /dev/vdb#033[00m
Jan 23 04:43:28 np0005593232 nova_compute[250269]: 2026-01-23 09:43:28.155 250273 DEBUG nova.virt.libvirt.driver [None req-aa9d30f3-a209-4482-a581-6aa320252552 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Attempting to detach device vdb from instance a1208de2-efde-4618-8388-2acfab37582a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 23 04:43:28 np0005593232 nova_compute[250269]: 2026-01-23 09:43:28.156 250273 DEBUG nova.virt.libvirt.guest [None req-aa9d30f3-a209-4482-a581-6aa320252552 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] detach device xml: <disk type="network" device="disk">
Jan 23 04:43:28 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 04:43:28 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-99bccbe9-de42-409d-aa8f-e509f6080e7b">
Jan 23 04:43:28 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 04:43:28 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 04:43:28 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 04:43:28 np0005593232 nova_compute[250269]:  </source>
Jan 23 04:43:28 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 04:43:28 np0005593232 nova_compute[250269]:  <serial>99bccbe9-de42-409d-aa8f-e509f6080e7b</serial>
Jan 23 04:43:28 np0005593232 nova_compute[250269]:  <shareable/>
Jan 23 04:43:28 np0005593232 nova_compute[250269]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 23 04:43:28 np0005593232 nova_compute[250269]: </disk>
Jan 23 04:43:28 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 04:43:28 np0005593232 nova_compute[250269]: 2026-01-23 09:43:28.165 250273 INFO nova.virt.libvirt.driver [None req-aa9d30f3-a209-4482-a581-6aa320252552 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Successfully detached device vdb from instance a1208de2-efde-4618-8388-2acfab37582a from the persistent domain config.#033[00m
Jan 23 04:43:28 np0005593232 nova_compute[250269]: 2026-01-23 09:43:28.166 250273 DEBUG nova.virt.libvirt.driver [None req-aa9d30f3-a209-4482-a581-6aa320252552 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance a1208de2-efde-4618-8388-2acfab37582a from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 23 04:43:28 np0005593232 nova_compute[250269]: 2026-01-23 09:43:28.166 250273 DEBUG nova.virt.libvirt.guest [None req-aa9d30f3-a209-4482-a581-6aa320252552 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] detach device xml: <disk type="network" device="disk">
Jan 23 04:43:28 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 04:43:28 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-99bccbe9-de42-409d-aa8f-e509f6080e7b">
Jan 23 04:43:28 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 04:43:28 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 04:43:28 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 04:43:28 np0005593232 nova_compute[250269]:  </source>
Jan 23 04:43:28 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 04:43:28 np0005593232 nova_compute[250269]:  <serial>99bccbe9-de42-409d-aa8f-e509f6080e7b</serial>
Jan 23 04:43:28 np0005593232 nova_compute[250269]:  <shareable/>
Jan 23 04:43:28 np0005593232 nova_compute[250269]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 23 04:43:28 np0005593232 nova_compute[250269]: </disk>
Jan 23 04:43:28 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 04:43:28 np0005593232 nova_compute[250269]: 2026-01-23 09:43:28.220 250273 DEBUG nova.virt.libvirt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Received event <DeviceRemovedEvent: 1769161408.219693, a1208de2-efde-4618-8388-2acfab37582a => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 23 04:43:28 np0005593232 nova_compute[250269]: 2026-01-23 09:43:28.223 250273 DEBUG nova.virt.libvirt.driver [None req-aa9d30f3-a209-4482-a581-6aa320252552 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance a1208de2-efde-4618-8388-2acfab37582a _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 23 04:43:28 np0005593232 nova_compute[250269]: 2026-01-23 09:43:28.225 250273 INFO nova.virt.libvirt.driver [None req-aa9d30f3-a209-4482-a581-6aa320252552 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Successfully detached device vdb from instance a1208de2-efde-4618-8388-2acfab37582a from the live domain config.#033[00m
Jan 23 04:43:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:28.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:28.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:28 np0005593232 nova_compute[250269]: 2026-01-23 09:43:28.750 250273 DEBUG nova.objects.instance [None req-aa9d30f3-a209-4482-a581-6aa320252552 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lazy-loading 'flavor' on Instance uuid a1208de2-efde-4618-8388-2acfab37582a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:43:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:28.767 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:43:28 np0005593232 nova_compute[250269]: 2026-01-23 09:43:28.768 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:28.769 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:43:28 np0005593232 nova_compute[250269]: 2026-01-23 09:43:28.827 250273 DEBUG oslo_concurrency.lockutils [None req-aa9d30f3-a209-4482-a581-6aa320252552 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lock "a1208de2-efde-4618-8388-2acfab37582a" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:43:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:43:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1506: 321 pgs: 321 active+clean; 451 MiB data, 737 MiB used, 20 GiB / 21 GiB avail; 189 KiB/s rd, 2.8 MiB/s wr, 137 op/s
Jan 23 04:43:29 np0005593232 nova_compute[250269]: 2026-01-23 09:43:29.572 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:30.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:30.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1507: 321 pgs: 321 active+clean; 451 MiB data, 737 MiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Jan 23 04:43:31 np0005593232 nova_compute[250269]: 2026-01-23 09:43:31.623 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:31 np0005593232 nova_compute[250269]: 2026-01-23 09:43:31.811 250273 DEBUG oslo_concurrency.lockutils [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Acquiring lock "9b1cd8b8-1ac9-441e-961f-17dc56cb555c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:43:31 np0005593232 nova_compute[250269]: 2026-01-23 09:43:31.811 250273 DEBUG oslo_concurrency.lockutils [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Lock "9b1cd8b8-1ac9-441e-961f-17dc56cb555c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:43:31 np0005593232 nova_compute[250269]: 2026-01-23 09:43:31.812 250273 DEBUG oslo_concurrency.lockutils [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Acquiring lock "9b1cd8b8-1ac9-441e-961f-17dc56cb555c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:43:31 np0005593232 nova_compute[250269]: 2026-01-23 09:43:31.812 250273 DEBUG oslo_concurrency.lockutils [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Lock "9b1cd8b8-1ac9-441e-961f-17dc56cb555c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:43:31 np0005593232 nova_compute[250269]: 2026-01-23 09:43:31.812 250273 DEBUG oslo_concurrency.lockutils [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Lock "9b1cd8b8-1ac9-441e-961f-17dc56cb555c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:43:31 np0005593232 nova_compute[250269]: 2026-01-23 09:43:31.814 250273 INFO nova.compute.manager [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Terminating instance#033[00m
Jan 23 04:43:31 np0005593232 nova_compute[250269]: 2026-01-23 09:43:31.815 250273 DEBUG nova.compute.manager [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 04:43:31 np0005593232 kernel: tapbd920bc4-48 (unregistering): left promiscuous mode
Jan 23 04:43:31 np0005593232 NetworkManager[49057]: <info>  [1769161411.8859] device (tapbd920bc4-48): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:43:31 np0005593232 ovn_controller[151001]: 2026-01-23T09:43:31Z|00143|binding|INFO|Releasing lport bd920bc4-48e9-4ef3-8874-84bf9c13b048 from this chassis (sb_readonly=0)
Jan 23 04:43:31 np0005593232 nova_compute[250269]: 2026-01-23 09:43:31.899 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:31 np0005593232 ovn_controller[151001]: 2026-01-23T09:43:31Z|00144|binding|INFO|Setting lport bd920bc4-48e9-4ef3-8874-84bf9c13b048 down in Southbound
Jan 23 04:43:31 np0005593232 ovn_controller[151001]: 2026-01-23T09:43:31Z|00145|binding|INFO|Removing iface tapbd920bc4-48 ovn-installed in OVS
Jan 23 04:43:31 np0005593232 nova_compute[250269]: 2026-01-23 09:43:31.902 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:31.912 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:0d:d3 10.100.0.7'], port_security=['fa:16:3e:e4:0d:d3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9b1cd8b8-1ac9-441e-961f-17dc56cb555c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93c26ebc-7d13-4039-99a4-82ec186ccf44', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2e0529ecec18434aa4fe09ac251ff46f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '738ee158-14dd-4e68-a2b5-ae763d873278', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.214'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d97a1a3-b21f-4c43-ab8b-1309b459c3a0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=bd920bc4-48e9-4ef3-8874-84bf9c13b048) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:43:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:31.914 161902 INFO neutron.agent.ovn.metadata.agent [-] Port bd920bc4-48e9-4ef3-8874-84bf9c13b048 in datapath 93c26ebc-7d13-4039-99a4-82ec186ccf44 unbound from our chassis#033[00m
Jan 23 04:43:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:31.917 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 93c26ebc-7d13-4039-99a4-82ec186ccf44, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 04:43:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:31.920 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[16b18a44-3a02-49f3-82fd-3deb54452e79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:43:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:31.922 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-93c26ebc-7d13-4039-99a4-82ec186ccf44 namespace which is not needed anymore#033[00m
Jan 23 04:43:31 np0005593232 nova_compute[250269]: 2026-01-23 09:43:31.926 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:31 np0005593232 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000030.scope: Deactivated successfully.
Jan 23 04:43:31 np0005593232 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000030.scope: Consumed 16.836s CPU time.
Jan 23 04:43:31 np0005593232 systemd-machined[215836]: Machine qemu-20-instance-00000030 terminated.
Jan 23 04:43:32 np0005593232 kernel: tapbd920bc4-48: entered promiscuous mode
Jan 23 04:43:32 np0005593232 kernel: tapbd920bc4-48 (unregistering): left promiscuous mode
Jan 23 04:43:32 np0005593232 nova_compute[250269]: 2026-01-23 09:43:32.048 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:32 np0005593232 nova_compute[250269]: 2026-01-23 09:43:32.057 250273 INFO nova.virt.libvirt.driver [-] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Instance destroyed successfully.#033[00m
Jan 23 04:43:32 np0005593232 nova_compute[250269]: 2026-01-23 09:43:32.058 250273 DEBUG nova.objects.instance [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Lazy-loading 'resources' on Instance uuid 9b1cd8b8-1ac9-441e-961f-17dc56cb555c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:43:32 np0005593232 neutron-haproxy-ovnmeta-93c26ebc-7d13-4039-99a4-82ec186ccf44[283587]: [NOTICE]   (283591) : haproxy version is 2.8.14-c23fe91
Jan 23 04:43:32 np0005593232 neutron-haproxy-ovnmeta-93c26ebc-7d13-4039-99a4-82ec186ccf44[283587]: [NOTICE]   (283591) : path to executable is /usr/sbin/haproxy
Jan 23 04:43:32 np0005593232 neutron-haproxy-ovnmeta-93c26ebc-7d13-4039-99a4-82ec186ccf44[283587]: [WARNING]  (283591) : Exiting Master process...
Jan 23 04:43:32 np0005593232 neutron-haproxy-ovnmeta-93c26ebc-7d13-4039-99a4-82ec186ccf44[283587]: [ALERT]    (283591) : Current worker (283593) exited with code 143 (Terminated)
Jan 23 04:43:32 np0005593232 neutron-haproxy-ovnmeta-93c26ebc-7d13-4039-99a4-82ec186ccf44[283587]: [WARNING]  (283591) : All workers exited. Exiting... (0)
Jan 23 04:43:32 np0005593232 systemd[1]: libpod-0d60895af74f8458ff57810e028ff6b05d8f21a12dc355f252dff93696b6777e.scope: Deactivated successfully.
Jan 23 04:43:32 np0005593232 podman[283864]: 2026-01-23 09:43:32.076395564 +0000 UTC m=+0.049957020 container died 0d60895af74f8458ff57810e028ff6b05d8f21a12dc355f252dff93696b6777e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93c26ebc-7d13-4039-99a4-82ec186ccf44, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:43:32 np0005593232 nova_compute[250269]: 2026-01-23 09:43:32.080 250273 DEBUG nova.virt.libvirt.vif [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:42:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestBootFromVolume-server-1533628909',display_name='tempest-ServersTestBootFromVolume-server-1533628909',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestbootfromvolume-server-1533628909',id=48,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCDVXhJkhKwKxHS7yJESnEwsJkVx76V/aGPNj/f/JPsIRZQR0K5ekZE6ebqaO6lv+YE26Xjl+L22ijzmBYVJ7WlL1ALNOAo7jhyaTFnaZLWUKxXvM+mGX2rsNAohFRABIA==',key_name='tempest-keypair-2000474017',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:42:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2e0529ecec18434aa4fe09ac251ff46f',ramdisk_id='',reservation_id='r-rtsyge8o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-ServersTestBootFromVolume-1176222732',owner_user_name='tempest-ServersTestBootFromVolume-1176222732-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:42:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='83b4563d24244490b58764ee8525c26d',uuid=9b1cd8b8-1ac9-441e-961f-17dc56cb555c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bd920bc4-48e9-4ef3-8874-84bf9c13b048", "address": "fa:16:3e:e4:0d:d3", "network": {"id": "93c26ebc-7d13-4039-99a4-82ec186ccf44", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-995634687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e0529ecec18434aa4fe09ac251ff46f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd920bc4-48", "ovs_interfaceid": "bd920bc4-48e9-4ef3-8874-84bf9c13b048", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:43:32 np0005593232 nova_compute[250269]: 2026-01-23 09:43:32.080 250273 DEBUG nova.network.os_vif_util [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Converting VIF {"id": "bd920bc4-48e9-4ef3-8874-84bf9c13b048", "address": "fa:16:3e:e4:0d:d3", "network": {"id": "93c26ebc-7d13-4039-99a4-82ec186ccf44", "bridge": "br-int", "label": "tempest-ServersTestBootFromVolume-995634687-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2e0529ecec18434aa4fe09ac251ff46f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbd920bc4-48", "ovs_interfaceid": "bd920bc4-48e9-4ef3-8874-84bf9c13b048", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:43:32 np0005593232 nova_compute[250269]: 2026-01-23 09:43:32.081 250273 DEBUG nova.network.os_vif_util [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e4:0d:d3,bridge_name='br-int',has_traffic_filtering=True,id=bd920bc4-48e9-4ef3-8874-84bf9c13b048,network=Network(93c26ebc-7d13-4039-99a4-82ec186ccf44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd920bc4-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:43:32 np0005593232 nova_compute[250269]: 2026-01-23 09:43:32.082 250273 DEBUG os_vif [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:0d:d3,bridge_name='br-int',has_traffic_filtering=True,id=bd920bc4-48e9-4ef3-8874-84bf9c13b048,network=Network(93c26ebc-7d13-4039-99a4-82ec186ccf44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd920bc4-48') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:43:32 np0005593232 nova_compute[250269]: 2026-01-23 09:43:32.085 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:32 np0005593232 nova_compute[250269]: 2026-01-23 09:43:32.085 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd920bc4-48, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:43:32 np0005593232 nova_compute[250269]: 2026-01-23 09:43:32.087 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:32 np0005593232 nova_compute[250269]: 2026-01-23 09:43:32.089 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:43:32 np0005593232 nova_compute[250269]: 2026-01-23 09:43:32.095 250273 INFO os_vif [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e4:0d:d3,bridge_name='br-int',has_traffic_filtering=True,id=bd920bc4-48e9-4ef3-8874-84bf9c13b048,network=Network(93c26ebc-7d13-4039-99a4-82ec186ccf44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbd920bc4-48')#033[00m
Jan 23 04:43:32 np0005593232 systemd[1]: var-lib-containers-storage-overlay-acf391568b7ae678b3d317f5a2d00c9873049e32f969d71dba48018d5ee09cce-merged.mount: Deactivated successfully.
Jan 23 04:43:32 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0d60895af74f8458ff57810e028ff6b05d8f21a12dc355f252dff93696b6777e-userdata-shm.mount: Deactivated successfully.
Jan 23 04:43:32 np0005593232 podman[283864]: 2026-01-23 09:43:32.118339683 +0000 UTC m=+0.091901139 container cleanup 0d60895af74f8458ff57810e028ff6b05d8f21a12dc355f252dff93696b6777e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93c26ebc-7d13-4039-99a4-82ec186ccf44, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:43:32 np0005593232 systemd[1]: libpod-conmon-0d60895af74f8458ff57810e028ff6b05d8f21a12dc355f252dff93696b6777e.scope: Deactivated successfully.
Jan 23 04:43:32 np0005593232 podman[283916]: 2026-01-23 09:43:32.182500721 +0000 UTC m=+0.041096725 container remove 0d60895af74f8458ff57810e028ff6b05d8f21a12dc355f252dff93696b6777e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93c26ebc-7d13-4039-99a4-82ec186ccf44, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:43:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:32.187 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d1ca5c-5e62-4379-993f-4adf2d40ad85]: (4, ('Fri Jan 23 09:43:32 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-93c26ebc-7d13-4039-99a4-82ec186ccf44 (0d60895af74f8458ff57810e028ff6b05d8f21a12dc355f252dff93696b6777e)\n0d60895af74f8458ff57810e028ff6b05d8f21a12dc355f252dff93696b6777e\nFri Jan 23 09:43:32 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-93c26ebc-7d13-4039-99a4-82ec186ccf44 (0d60895af74f8458ff57810e028ff6b05d8f21a12dc355f252dff93696b6777e)\n0d60895af74f8458ff57810e028ff6b05d8f21a12dc355f252dff93696b6777e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:43:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:32.189 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[fc90d453-f320-4668-9541-e863b6ba29a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:43:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:32.191 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93c26ebc-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:43:32 np0005593232 nova_compute[250269]: 2026-01-23 09:43:32.192 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:32 np0005593232 kernel: tap93c26ebc-70: left promiscuous mode
Jan 23 04:43:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:32.198 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[552123d5-9691-4d20-9a25-0e5dfb77a7c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:43:32 np0005593232 nova_compute[250269]: 2026-01-23 09:43:32.210 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:32.214 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[124837be-6188-4132-988a-ab85c21f61b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:43:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:32.215 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5c1b98e5-3844-488a-8ce7-e9c4a1e0d35f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:43:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:32.231 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d90d65b2-1a85-40f8-864b-eaf5986abf61]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 528646, 'reachable_time': 40633, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283934, 'error': None, 'target': 'ovnmeta-93c26ebc-7d13-4039-99a4-82ec186ccf44', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:43:32 np0005593232 systemd[1]: run-netns-ovnmeta\x2d93c26ebc\x2d7d13\x2d4039\x2d99a4\x2d82ec186ccf44.mount: Deactivated successfully.
Jan 23 04:43:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:32.235 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-93c26ebc-7d13-4039-99a4-82ec186ccf44 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 04:43:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:32.235 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[7bb9beba-bb81-4bcb-8c64-bff26a93b2a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:43:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:43:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:32.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:43:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:32.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:33 np0005593232 nova_compute[250269]: 2026-01-23 09:43:33.184 250273 INFO nova.virt.libvirt.driver [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Deleting instance files /var/lib/nova/instances/9b1cd8b8-1ac9-441e-961f-17dc56cb555c_del#033[00m
Jan 23 04:43:33 np0005593232 nova_compute[250269]: 2026-01-23 09:43:33.185 250273 INFO nova.virt.libvirt.driver [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Deletion of /var/lib/nova/instances/9b1cd8b8-1ac9-441e-961f-17dc56cb555c_del complete#033[00m
Jan 23 04:43:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1508: 321 pgs: 321 active+clean; 372 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 189 op/s
Jan 23 04:43:33 np0005593232 nova_compute[250269]: 2026-01-23 09:43:33.453 250273 DEBUG nova.compute.manager [req-eb201697-4b5b-47ab-a869-c71beeaf3eeb req-b8b43288-28c4-45d8-92f4-90852edf7e9d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Received event network-vif-unplugged-bd920bc4-48e9-4ef3-8874-84bf9c13b048 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:43:33 np0005593232 nova_compute[250269]: 2026-01-23 09:43:33.454 250273 DEBUG oslo_concurrency.lockutils [req-eb201697-4b5b-47ab-a869-c71beeaf3eeb req-b8b43288-28c4-45d8-92f4-90852edf7e9d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9b1cd8b8-1ac9-441e-961f-17dc56cb555c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:43:33 np0005593232 nova_compute[250269]: 2026-01-23 09:43:33.455 250273 DEBUG oslo_concurrency.lockutils [req-eb201697-4b5b-47ab-a869-c71beeaf3eeb req-b8b43288-28c4-45d8-92f4-90852edf7e9d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9b1cd8b8-1ac9-441e-961f-17dc56cb555c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:43:33 np0005593232 nova_compute[250269]: 2026-01-23 09:43:33.455 250273 DEBUG oslo_concurrency.lockutils [req-eb201697-4b5b-47ab-a869-c71beeaf3eeb req-b8b43288-28c4-45d8-92f4-90852edf7e9d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9b1cd8b8-1ac9-441e-961f-17dc56cb555c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:43:33 np0005593232 nova_compute[250269]: 2026-01-23 09:43:33.455 250273 DEBUG nova.compute.manager [req-eb201697-4b5b-47ab-a869-c71beeaf3eeb req-b8b43288-28c4-45d8-92f4-90852edf7e9d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] No waiting events found dispatching network-vif-unplugged-bd920bc4-48e9-4ef3-8874-84bf9c13b048 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:43:33 np0005593232 nova_compute[250269]: 2026-01-23 09:43:33.455 250273 DEBUG nova.compute.manager [req-eb201697-4b5b-47ab-a869-c71beeaf3eeb req-b8b43288-28c4-45d8-92f4-90852edf7e9d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Received event network-vif-unplugged-bd920bc4-48e9-4ef3-8874-84bf9c13b048 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 04:43:33 np0005593232 nova_compute[250269]: 2026-01-23 09:43:33.543 250273 INFO nova.compute.manager [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Took 1.73 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 04:43:33 np0005593232 nova_compute[250269]: 2026-01-23 09:43:33.544 250273 DEBUG oslo.service.loopingcall [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 04:43:33 np0005593232 nova_compute[250269]: 2026-01-23 09:43:33.544 250273 DEBUG nova.compute.manager [-] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 04:43:33 np0005593232 nova_compute[250269]: 2026-01-23 09:43:33.545 250273 DEBUG nova.network.neutron [-] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 04:43:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:33.771 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:43:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:43:34 np0005593232 nova_compute[250269]: 2026-01-23 09:43:34.575 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:34.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:34.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1509: 321 pgs: 321 active+clean; 372 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Jan 23 04:43:35 np0005593232 nova_compute[250269]: 2026-01-23 09:43:35.520 250273 DEBUG nova.network.neutron [-] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:43:35 np0005593232 nova_compute[250269]: 2026-01-23 09:43:35.545 250273 INFO nova.compute.manager [-] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Took 2.00 seconds to deallocate network for instance.#033[00m
Jan 23 04:43:35 np0005593232 nova_compute[250269]: 2026-01-23 09:43:35.657 250273 DEBUG nova.compute.manager [req-96bf71e5-d9b0-4fab-9a5b-3386cb545b11 req-9177ed59-67fb-4d98-a294-a792c73b7946 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Received event network-vif-plugged-bd920bc4-48e9-4ef3-8874-84bf9c13b048 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:43:35 np0005593232 nova_compute[250269]: 2026-01-23 09:43:35.658 250273 DEBUG oslo_concurrency.lockutils [req-96bf71e5-d9b0-4fab-9a5b-3386cb545b11 req-9177ed59-67fb-4d98-a294-a792c73b7946 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9b1cd8b8-1ac9-441e-961f-17dc56cb555c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:43:35 np0005593232 nova_compute[250269]: 2026-01-23 09:43:35.658 250273 DEBUG oslo_concurrency.lockutils [req-96bf71e5-d9b0-4fab-9a5b-3386cb545b11 req-9177ed59-67fb-4d98-a294-a792c73b7946 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9b1cd8b8-1ac9-441e-961f-17dc56cb555c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:43:35 np0005593232 nova_compute[250269]: 2026-01-23 09:43:35.658 250273 DEBUG oslo_concurrency.lockutils [req-96bf71e5-d9b0-4fab-9a5b-3386cb545b11 req-9177ed59-67fb-4d98-a294-a792c73b7946 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9b1cd8b8-1ac9-441e-961f-17dc56cb555c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:43:35 np0005593232 nova_compute[250269]: 2026-01-23 09:43:35.659 250273 DEBUG nova.compute.manager [req-96bf71e5-d9b0-4fab-9a5b-3386cb545b11 req-9177ed59-67fb-4d98-a294-a792c73b7946 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] No waiting events found dispatching network-vif-plugged-bd920bc4-48e9-4ef3-8874-84bf9c13b048 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:43:35 np0005593232 nova_compute[250269]: 2026-01-23 09:43:35.659 250273 WARNING nova.compute.manager [req-96bf71e5-d9b0-4fab-9a5b-3386cb545b11 req-9177ed59-67fb-4d98-a294-a792c73b7946 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Received unexpected event network-vif-plugged-bd920bc4-48e9-4ef3-8874-84bf9c13b048 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 04:43:35 np0005593232 nova_compute[250269]: 2026-01-23 09:43:35.861 250273 INFO nova.compute.manager [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Took 0.31 seconds to detach 1 volumes for instance.#033[00m
Jan 23 04:43:35 np0005593232 nova_compute[250269]: 2026-01-23 09:43:35.862 250273 DEBUG nova.compute.manager [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Deleting volume: 113a804f-8abc-47f6-b4ca-e098579996e7 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217#033[00m
Jan 23 04:43:35 np0005593232 nova_compute[250269]: 2026-01-23 09:43:35.867 250273 DEBUG nova.compute.manager [req-7710a4b1-10f1-4fe6-a3b9-eeea986d36ae req-36dc77a3-b12b-4746-94cd-582bb2aaeaa7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Received event network-vif-deleted-bd920bc4-48e9-4ef3-8874-84bf9c13b048 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.293 250273 DEBUG oslo_concurrency.lockutils [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Acquiring lock "a1208de2-efde-4618-8388-2acfab37582a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.294 250273 DEBUG oslo_concurrency.lockutils [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lock "a1208de2-efde-4618-8388-2acfab37582a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.295 250273 DEBUG oslo_concurrency.lockutils [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Acquiring lock "a1208de2-efde-4618-8388-2acfab37582a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.295 250273 DEBUG oslo_concurrency.lockutils [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lock "a1208de2-efde-4618-8388-2acfab37582a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.296 250273 DEBUG oslo_concurrency.lockutils [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lock "a1208de2-efde-4618-8388-2acfab37582a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.299 250273 INFO nova.compute.manager [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Terminating instance#033[00m
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.301 250273 DEBUG nova.compute.manager [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.335 250273 DEBUG oslo_concurrency.lockutils [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.336 250273 DEBUG oslo_concurrency.lockutils [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:43:36 np0005593232 kernel: tap189994c7-5c (unregistering): left promiscuous mode
Jan 23 04:43:36 np0005593232 NetworkManager[49057]: <info>  [1769161416.3675] device (tap189994c7-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.377 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:36 np0005593232 ovn_controller[151001]: 2026-01-23T09:43:36Z|00146|binding|INFO|Releasing lport 189994c7-5c8d-40fd-888d-a104e39b5aca from this chassis (sb_readonly=0)
Jan 23 04:43:36 np0005593232 ovn_controller[151001]: 2026-01-23T09:43:36Z|00147|binding|INFO|Setting lport 189994c7-5c8d-40fd-888d-a104e39b5aca down in Southbound
Jan 23 04:43:36 np0005593232 ovn_controller[151001]: 2026-01-23T09:43:36Z|00148|binding|INFO|Removing iface tap189994c7-5c ovn-installed in OVS
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.380 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:36.385 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6a:97:e5 10.100.0.8'], port_security=['fa:16:3e:6a:97:e5 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a1208de2-efde-4618-8388-2acfab37582a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f19933f5-cfe3-4319-a83b-b72dde692ab6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4fd9229340ed4bf3a3a72baa6985a3e3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '232ea62b-b441-41b5-8457-7d5744ac9ac2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=91ae19d5-b9ed-444d-b1cd-8fb0c58abf8d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=189994c7-5c8d-40fd-888d-a104e39b5aca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:43:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:36.386 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 189994c7-5c8d-40fd-888d-a104e39b5aca in datapath f19933f5-cfe3-4319-a83b-b72dde692ab6 unbound from our chassis#033[00m
Jan 23 04:43:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:36.388 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f19933f5-cfe3-4319-a83b-b72dde692ab6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 04:43:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:36.389 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b347e4ee-47a4-41c1-b21b-4341e2b6da0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:43:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:36.389 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f19933f5-cfe3-4319-a83b-b72dde692ab6 namespace which is not needed anymore#033[00m
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.406 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:36 np0005593232 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Jan 23 04:43:36 np0005593232 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d0000002d.scope: Consumed 18.176s CPU time.
Jan 23 04:43:36 np0005593232 systemd-machined[215836]: Machine qemu-19-instance-0000002d terminated.
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.520 250273 DEBUG oslo_concurrency.processutils [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:43:36 np0005593232 neutron-haproxy-ovnmeta-f19933f5-cfe3-4319-a83b-b72dde692ab6[282116]: [NOTICE]   (282139) : haproxy version is 2.8.14-c23fe91
Jan 23 04:43:36 np0005593232 neutron-haproxy-ovnmeta-f19933f5-cfe3-4319-a83b-b72dde692ab6[282116]: [NOTICE]   (282139) : path to executable is /usr/sbin/haproxy
Jan 23 04:43:36 np0005593232 neutron-haproxy-ovnmeta-f19933f5-cfe3-4319-a83b-b72dde692ab6[282116]: [WARNING]  (282139) : Exiting Master process...
Jan 23 04:43:36 np0005593232 neutron-haproxy-ovnmeta-f19933f5-cfe3-4319-a83b-b72dde692ab6[282116]: [ALERT]    (282139) : Current worker (282147) exited with code 143 (Terminated)
Jan 23 04:43:36 np0005593232 neutron-haproxy-ovnmeta-f19933f5-cfe3-4319-a83b-b72dde692ab6[282116]: [WARNING]  (282139) : All workers exited. Exiting... (0)
Jan 23 04:43:36 np0005593232 systemd[1]: libpod-14c4caecc03dff537313152ce66c5570b770bcc8c312a717aca8dcf113c4e389.scope: Deactivated successfully.
Jan 23 04:43:36 np0005593232 podman[284011]: 2026-01-23 09:43:36.54760008 +0000 UTC m=+0.057388934 container died 14c4caecc03dff537313152ce66c5570b770bcc8c312a717aca8dcf113c4e389 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f19933f5-cfe3-4319-a83b-b72dde692ab6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.550 250273 INFO nova.virt.libvirt.driver [-] [instance: a1208de2-efde-4618-8388-2acfab37582a] Instance destroyed successfully.#033[00m
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.551 250273 DEBUG nova.objects.instance [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lazy-loading 'resources' on Instance uuid a1208de2-efde-4618-8388-2acfab37582a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.579 250273 DEBUG nova.virt.libvirt.vif [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:41:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-UpdateMultiattachVolumeNegativeTest-server-1527273986',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-updatemultiattachvolumenegativetest-server-1527273986',id=45,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOW/D7eFeqMjBvekfY9VqlM3EY9Lv7j0wpym0wwbZXZxi5xiYHs3Y+SGaRgTVfBABcO7R/jAYgVwXr4x4dmhbR/VewPXJyWaKlJux19vulauSxlm5JZb+T430JhpaEya2w==',key_name='tempest-keypair-1370438234',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:42:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4fd9229340ed4bf3a3a72baa6985a3e3',ramdisk_id='',reservation_id='r-9sg2yp8s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-UpdateMultiattachVolumeNegativeTest-1520463047',owner_user_name='tempest-UpdateMultiattachVolumeNegativeTest-1520463047-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:42:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='187ce0cedde344a3b09ca4560410580e',uuid=a1208de2-efde-4618-8388-2acfab37582a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "189994c7-5c8d-40fd-888d-a104e39b5aca", "address": "fa:16:3e:6a:97:e5", "network": {"id": "f19933f5-cfe3-4319-a83b-b72dde692ab6", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1169169895-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4fd9229340ed4bf3a3a72baa6985a3e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap189994c7-5c", "ovs_interfaceid": "189994c7-5c8d-40fd-888d-a104e39b5aca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.580 250273 DEBUG nova.network.os_vif_util [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Converting VIF {"id": "189994c7-5c8d-40fd-888d-a104e39b5aca", "address": "fa:16:3e:6a:97:e5", "network": {"id": "f19933f5-cfe3-4319-a83b-b72dde692ab6", "bridge": "br-int", "label": "tempest-UpdateMultiattachVolumeNegativeTest-1169169895-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4fd9229340ed4bf3a3a72baa6985a3e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap189994c7-5c", "ovs_interfaceid": "189994c7-5c8d-40fd-888d-a104e39b5aca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.581 250273 DEBUG nova.network.os_vif_util [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6a:97:e5,bridge_name='br-int',has_traffic_filtering=True,id=189994c7-5c8d-40fd-888d-a104e39b5aca,network=Network(f19933f5-cfe3-4319-a83b-b72dde692ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap189994c7-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.582 250273 DEBUG os_vif [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:97:e5,bridge_name='br-int',has_traffic_filtering=True,id=189994c7-5c8d-40fd-888d-a104e39b5aca,network=Network(f19933f5-cfe3-4319-a83b-b72dde692ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap189994c7-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.583 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.584 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap189994c7-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:43:36 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-14c4caecc03dff537313152ce66c5570b770bcc8c312a717aca8dcf113c4e389-userdata-shm.mount: Deactivated successfully.
Jan 23 04:43:36 np0005593232 systemd[1]: var-lib-containers-storage-overlay-53566040ee2110f3625468f7fe5f4d82c4d5c4d0257a7bce0841010e2a7a34c8-merged.mount: Deactivated successfully.
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.590 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.594 250273 INFO os_vif [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6a:97:e5,bridge_name='br-int',has_traffic_filtering=True,id=189994c7-5c8d-40fd-888d-a104e39b5aca,network=Network(f19933f5-cfe3-4319-a83b-b72dde692ab6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap189994c7-5c')#033[00m
Jan 23 04:43:36 np0005593232 podman[284011]: 2026-01-23 09:43:36.596332084 +0000 UTC m=+0.106120938 container cleanup 14c4caecc03dff537313152ce66c5570b770bcc8c312a717aca8dcf113c4e389 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f19933f5-cfe3-4319-a83b-b72dde692ab6, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 23 04:43:36 np0005593232 systemd[1]: libpod-conmon-14c4caecc03dff537313152ce66c5570b770bcc8c312a717aca8dcf113c4e389.scope: Deactivated successfully.
Jan 23 04:43:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:36.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:36 np0005593232 podman[284059]: 2026-01-23 09:43:36.666740493 +0000 UTC m=+0.048971282 container remove 14c4caecc03dff537313152ce66c5570b770bcc8c312a717aca8dcf113c4e389 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f19933f5-cfe3-4319-a83b-b72dde692ab6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 23 04:43:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:36.673 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8e24eace-903d-4bb6-b4cc-f1a0c62b4be2]: (4, ('Fri Jan 23 09:43:36 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f19933f5-cfe3-4319-a83b-b72dde692ab6 (14c4caecc03dff537313152ce66c5570b770bcc8c312a717aca8dcf113c4e389)\n14c4caecc03dff537313152ce66c5570b770bcc8c312a717aca8dcf113c4e389\nFri Jan 23 09:43:36 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f19933f5-cfe3-4319-a83b-b72dde692ab6 (14c4caecc03dff537313152ce66c5570b770bcc8c312a717aca8dcf113c4e389)\n14c4caecc03dff537313152ce66c5570b770bcc8c312a717aca8dcf113c4e389\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:43:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:36.676 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[86ba4c94-00f5-4161-a1e8-f9a6eddebdaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:43:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:36.678 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf19933f5-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:43:36 np0005593232 kernel: tapf19933f5-c0: left promiscuous mode
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.683 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.695 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:36.697 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a0503be8-db5e-49d3-81e0-5da63a80d3a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:43:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:36.708 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[87584d70-d0bd-4620-a34a-c97367846363]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:43:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:36.710 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e64fb2a7-6444-459b-b648-e44c61b4deb1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:43:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:36.727 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[376a8838-aeb0-4f64-917a-a126a27294b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 523772, 'reachable_time': 43451, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284104, 'error': None, 'target': 'ovnmeta-f19933f5-cfe3-4319-a83b-b72dde692ab6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:43:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:36.729 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f19933f5-cfe3-4319-a83b-b72dde692ab6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 04:43:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:36.729 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[a2bb5a17-eee8-47a2-9bb5-a62948f20759]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:43:36 np0005593232 systemd[1]: run-netns-ovnmeta\x2df19933f5\x2dcfe3\x2d4319\x2da83b\x2db72dde692ab6.mount: Deactivated successfully.
Jan 23 04:43:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:36.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Jan 23 04:43:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Jan 23 04:43:36 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Jan 23 04:43:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:43:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1246843291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.988 250273 DEBUG oslo_concurrency.processutils [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:43:36 np0005593232 nova_compute[250269]: 2026-01-23 09:43:36.995 250273 DEBUG nova.compute.provider_tree [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:43:37 np0005593232 nova_compute[250269]: 2026-01-23 09:43:37.029 250273 DEBUG nova.scheduler.client.report [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:43:37 np0005593232 nova_compute[250269]: 2026-01-23 09:43:37.158 250273 INFO nova.virt.libvirt.driver [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Deleting instance files /var/lib/nova/instances/a1208de2-efde-4618-8388-2acfab37582a_del#033[00m
Jan 23 04:43:37 np0005593232 nova_compute[250269]: 2026-01-23 09:43:37.159 250273 INFO nova.virt.libvirt.driver [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Deletion of /var/lib/nova/instances/a1208de2-efde-4618-8388-2acfab37582a_del complete#033[00m
Jan 23 04:43:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:43:37
Jan 23 04:43:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:43:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:43:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'backups', '.mgr', 'images']
Jan 23 04:43:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:43:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1511: 321 pgs: 321 active+clean; 335 MiB data, 691 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 906 KiB/s wr, 211 op/s
Jan 23 04:43:37 np0005593232 nova_compute[250269]: 2026-01-23 09:43:37.335 250273 DEBUG oslo_concurrency.lockutils [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:43:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:43:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:43:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:43:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:43:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:43:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:43:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:43:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:43:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:43:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:43:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:43:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:43:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:43:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:43:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:43:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:43:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:43:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:38.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:43:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:38.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:43:39 np0005593232 nova_compute[250269]: 2026-01-23 09:43:39.178 250273 INFO nova.scheduler.client.report [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Deleted allocations for instance 9b1cd8b8-1ac9-441e-961f-17dc56cb555c#033[00m
Jan 23 04:43:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1512: 321 pgs: 321 active+clean; 261 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 36 KiB/s wr, 285 op/s
Jan 23 04:43:39 np0005593232 nova_compute[250269]: 2026-01-23 09:43:39.317 250273 INFO nova.compute.manager [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Took 3.02 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 04:43:39 np0005593232 nova_compute[250269]: 2026-01-23 09:43:39.318 250273 DEBUG oslo.service.loopingcall [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 04:43:39 np0005593232 nova_compute[250269]: 2026-01-23 09:43:39.319 250273 DEBUG nova.compute.manager [-] [instance: a1208de2-efde-4618-8388-2acfab37582a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 04:43:39 np0005593232 nova_compute[250269]: 2026-01-23 09:43:39.320 250273 DEBUG nova.network.neutron [-] [instance: a1208de2-efde-4618-8388-2acfab37582a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 04:43:39 np0005593232 podman[284110]: 2026-01-23 09:43:39.445143094 +0000 UTC m=+0.098046896 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 04:43:39 np0005593232 nova_compute[250269]: 2026-01-23 09:43:39.577 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:39 np0005593232 nova_compute[250269]: 2026-01-23 09:43:39.712 250273 DEBUG oslo_concurrency.lockutils [None req-f59c918e-5e34-4444-9799-62d4dace4aae 83b4563d24244490b58764ee8525c26d 2e0529ecec18434aa4fe09ac251ff46f - - default default] Lock "9b1cd8b8-1ac9-441e-961f-17dc56cb555c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:43:39 np0005593232 nova_compute[250269]: 2026-01-23 09:43:39.718 250273 DEBUG nova.compute.manager [req-2856d70d-02d0-4555-9e70-83d1b5a89380 req-cc9a5d66-31dc-4465-afce-55a150d27828 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Received event network-vif-unplugged-189994c7-5c8d-40fd-888d-a104e39b5aca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:43:39 np0005593232 nova_compute[250269]: 2026-01-23 09:43:39.719 250273 DEBUG oslo_concurrency.lockutils [req-2856d70d-02d0-4555-9e70-83d1b5a89380 req-cc9a5d66-31dc-4465-afce-55a150d27828 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "a1208de2-efde-4618-8388-2acfab37582a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:43:39 np0005593232 nova_compute[250269]: 2026-01-23 09:43:39.719 250273 DEBUG oslo_concurrency.lockutils [req-2856d70d-02d0-4555-9e70-83d1b5a89380 req-cc9a5d66-31dc-4465-afce-55a150d27828 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a1208de2-efde-4618-8388-2acfab37582a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:43:39 np0005593232 nova_compute[250269]: 2026-01-23 09:43:39.719 250273 DEBUG oslo_concurrency.lockutils [req-2856d70d-02d0-4555-9e70-83d1b5a89380 req-cc9a5d66-31dc-4465-afce-55a150d27828 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a1208de2-efde-4618-8388-2acfab37582a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:43:39 np0005593232 nova_compute[250269]: 2026-01-23 09:43:39.720 250273 DEBUG nova.compute.manager [req-2856d70d-02d0-4555-9e70-83d1b5a89380 req-cc9a5d66-31dc-4465-afce-55a150d27828 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] No waiting events found dispatching network-vif-unplugged-189994c7-5c8d-40fd-888d-a104e39b5aca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:43:39 np0005593232 nova_compute[250269]: 2026-01-23 09:43:39.720 250273 DEBUG nova.compute.manager [req-2856d70d-02d0-4555-9e70-83d1b5a89380 req-cc9a5d66-31dc-4465-afce-55a150d27828 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Received event network-vif-unplugged-189994c7-5c8d-40fd-888d-a104e39b5aca for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 04:43:39 np0005593232 nova_compute[250269]: 2026-01-23 09:43:39.720 250273 DEBUG nova.compute.manager [req-2856d70d-02d0-4555-9e70-83d1b5a89380 req-cc9a5d66-31dc-4465-afce-55a150d27828 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Received event network-vif-plugged-189994c7-5c8d-40fd-888d-a104e39b5aca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:43:39 np0005593232 nova_compute[250269]: 2026-01-23 09:43:39.720 250273 DEBUG oslo_concurrency.lockutils [req-2856d70d-02d0-4555-9e70-83d1b5a89380 req-cc9a5d66-31dc-4465-afce-55a150d27828 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "a1208de2-efde-4618-8388-2acfab37582a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:43:39 np0005593232 nova_compute[250269]: 2026-01-23 09:43:39.720 250273 DEBUG oslo_concurrency.lockutils [req-2856d70d-02d0-4555-9e70-83d1b5a89380 req-cc9a5d66-31dc-4465-afce-55a150d27828 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a1208de2-efde-4618-8388-2acfab37582a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:43:39 np0005593232 nova_compute[250269]: 2026-01-23 09:43:39.721 250273 DEBUG oslo_concurrency.lockutils [req-2856d70d-02d0-4555-9e70-83d1b5a89380 req-cc9a5d66-31dc-4465-afce-55a150d27828 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a1208de2-efde-4618-8388-2acfab37582a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:43:39 np0005593232 nova_compute[250269]: 2026-01-23 09:43:39.721 250273 DEBUG nova.compute.manager [req-2856d70d-02d0-4555-9e70-83d1b5a89380 req-cc9a5d66-31dc-4465-afce-55a150d27828 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] No waiting events found dispatching network-vif-plugged-189994c7-5c8d-40fd-888d-a104e39b5aca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:43:39 np0005593232 nova_compute[250269]: 2026-01-23 09:43:39.721 250273 WARNING nova.compute.manager [req-2856d70d-02d0-4555-9e70-83d1b5a89380 req-cc9a5d66-31dc-4465-afce-55a150d27828 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Received unexpected event network-vif-plugged-189994c7-5c8d-40fd-888d-a104e39b5aca for instance with vm_state active and task_state deleting.#033[00m
Jan 23 04:43:40 np0005593232 nova_compute[250269]: 2026-01-23 09:43:40.321 250273 DEBUG nova.network.neutron [-] [instance: a1208de2-efde-4618-8388-2acfab37582a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:43:40 np0005593232 nova_compute[250269]: 2026-01-23 09:43:40.353 250273 INFO nova.compute.manager [-] [instance: a1208de2-efde-4618-8388-2acfab37582a] Took 1.03 seconds to deallocate network for instance.#033[00m
Jan 23 04:43:40 np0005593232 nova_compute[250269]: 2026-01-23 09:43:40.425 250273 DEBUG oslo_concurrency.lockutils [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:43:40 np0005593232 nova_compute[250269]: 2026-01-23 09:43:40.426 250273 DEBUG oslo_concurrency.lockutils [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:43:40 np0005593232 nova_compute[250269]: 2026-01-23 09:43:40.460 250273 DEBUG nova.compute.manager [req-c09d1f9d-1817-49ae-a0d2-aeaf938e9bc8 req-377f8ab4-7aed-44a8-a87f-1408ebf8304b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a1208de2-efde-4618-8388-2acfab37582a] Received event network-vif-deleted-189994c7-5c8d-40fd-888d-a104e39b5aca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:43:40 np0005593232 nova_compute[250269]: 2026-01-23 09:43:40.469 250273 DEBUG oslo_concurrency.processutils [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:43:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:40.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:40.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:43:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3683953298' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:43:40 np0005593232 nova_compute[250269]: 2026-01-23 09:43:40.915 250273 DEBUG oslo_concurrency.processutils [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:43:40 np0005593232 nova_compute[250269]: 2026-01-23 09:43:40.922 250273 DEBUG nova.compute.provider_tree [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:43:40 np0005593232 nova_compute[250269]: 2026-01-23 09:43:40.949 250273 DEBUG nova.scheduler.client.report [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:43:40 np0005593232 nova_compute[250269]: 2026-01-23 09:43:40.978 250273 DEBUG oslo_concurrency.lockutils [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:43:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:43:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:43:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:43:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:43:41 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:43:41 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:43:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 23 04:43:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 04:43:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1513: 321 pgs: 321 active+clean; 261 MiB data, 644 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 36 KiB/s wr, 285 op/s
Jan 23 04:43:41 np0005593232 nova_compute[250269]: 2026-01-23 09:43:41.312 250273 INFO nova.scheduler.client.report [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Deleted allocations for instance a1208de2-efde-4618-8388-2acfab37582a#033[00m
Jan 23 04:43:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 23 04:43:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 04:43:41 np0005593232 nova_compute[250269]: 2026-01-23 09:43:41.469 250273 DEBUG oslo_concurrency.lockutils [None req-97af7bd3-d1d1-42b9-9d57-6f3be2d79bfb 187ce0cedde344a3b09ca4560410580e 4fd9229340ed4bf3a3a72baa6985a3e3 - - default default] Lock "a1208de2-efde-4618-8388-2acfab37582a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:43:41 np0005593232 nova_compute[250269]: 2026-01-23 09:43:41.588 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:43:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:43:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:43:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:43:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:43:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:43:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:43:41 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a580bf83-758b-4cf3-b3d8-32c32cd0fa7f does not exist
Jan 23 04:43:41 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a870c10b-5958-446f-9528-f5a169a99ceb does not exist
Jan 23 04:43:41 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7f1fa279-9c54-4fa4-b19b-d2ac92874cb0 does not exist
Jan 23 04:43:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:43:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:43:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:43:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:43:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:43:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:43:42 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 04:43:42 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 04:43:42 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:43:42 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:43:42 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:43:42 np0005593232 podman[284429]: 2026-01-23 09:43:42.285637334 +0000 UTC m=+0.038470430 container create 12454f31c18047d2bbc067282f43df69b609e7b536b2c31761981602b39c8fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_proskuriakova, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 04:43:42 np0005593232 systemd[1]: Started libpod-conmon-12454f31c18047d2bbc067282f43df69b609e7b536b2c31761981602b39c8fcd.scope.
Jan 23 04:43:42 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:43:42 np0005593232 podman[284429]: 2026-01-23 09:43:42.269639643 +0000 UTC m=+0.022472759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:43:42 np0005593232 podman[284429]: 2026-01-23 09:43:42.376178812 +0000 UTC m=+0.129011928 container init 12454f31c18047d2bbc067282f43df69b609e7b536b2c31761981602b39c8fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:43:42 np0005593232 podman[284429]: 2026-01-23 09:43:42.383263926 +0000 UTC m=+0.136097022 container start 12454f31c18047d2bbc067282f43df69b609e7b536b2c31761981602b39c8fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_proskuriakova, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:43:42 np0005593232 podman[284429]: 2026-01-23 09:43:42.386755237 +0000 UTC m=+0.139588363 container attach 12454f31c18047d2bbc067282f43df69b609e7b536b2c31761981602b39c8fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_proskuriakova, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:43:42 np0005593232 crazy_proskuriakova[284445]: 167 167
Jan 23 04:43:42 np0005593232 systemd[1]: libpod-12454f31c18047d2bbc067282f43df69b609e7b536b2c31761981602b39c8fcd.scope: Deactivated successfully.
Jan 23 04:43:42 np0005593232 podman[284429]: 2026-01-23 09:43:42.389527747 +0000 UTC m=+0.142360843 container died 12454f31c18047d2bbc067282f43df69b609e7b536b2c31761981602b39c8fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:43:42 np0005593232 systemd[1]: var-lib-containers-storage-overlay-14937b52d36ee97663e1ecb958e8a75180dd42c1d702ffe08a5a9f16ddb231e6-merged.mount: Deactivated successfully.
Jan 23 04:43:42 np0005593232 podman[284429]: 2026-01-23 09:43:42.427540602 +0000 UTC m=+0.180373698 container remove 12454f31c18047d2bbc067282f43df69b609e7b536b2c31761981602b39c8fcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:43:42 np0005593232 systemd[1]: libpod-conmon-12454f31c18047d2bbc067282f43df69b609e7b536b2c31761981602b39c8fcd.scope: Deactivated successfully.
Jan 23 04:43:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:42.598 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:43:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:42.599 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:43:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:43:42.600 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:43:42 np0005593232 podman[284469]: 2026-01-23 09:43:42.605572621 +0000 UTC m=+0.045223654 container create 9a50b82aaf8862120ad0cd2ff8d0135527155c670da5908afacce3b711457895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_williamson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:43:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:42.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:42 np0005593232 systemd[1]: Started libpod-conmon-9a50b82aaf8862120ad0cd2ff8d0135527155c670da5908afacce3b711457895.scope.
Jan 23 04:43:42 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:43:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d525394f725c0b89407f5508aab4ebbbe983494bb180ae56f0488e39eb5671/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:43:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d525394f725c0b89407f5508aab4ebbbe983494bb180ae56f0488e39eb5671/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:43:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d525394f725c0b89407f5508aab4ebbbe983494bb180ae56f0488e39eb5671/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:43:42 np0005593232 podman[284469]: 2026-01-23 09:43:42.585528793 +0000 UTC m=+0.025179846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:43:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d525394f725c0b89407f5508aab4ebbbe983494bb180ae56f0488e39eb5671/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:43:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d525394f725c0b89407f5508aab4ebbbe983494bb180ae56f0488e39eb5671/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:43:42 np0005593232 podman[284469]: 2026-01-23 09:43:42.69025437 +0000 UTC m=+0.129905413 container init 9a50b82aaf8862120ad0cd2ff8d0135527155c670da5908afacce3b711457895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:43:42 np0005593232 podman[284469]: 2026-01-23 09:43:42.698636082 +0000 UTC m=+0.138287115 container start 9a50b82aaf8862120ad0cd2ff8d0135527155c670da5908afacce3b711457895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_williamson, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:43:42 np0005593232 podman[284469]: 2026-01-23 09:43:42.701912026 +0000 UTC m=+0.141563079 container attach 9a50b82aaf8862120ad0cd2ff8d0135527155c670da5908afacce3b711457895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:43:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:42.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:43:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3200543563' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:43:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1514: 321 pgs: 321 active+clean; 88 MiB data, 530 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 23 KiB/s wr, 219 op/s
Jan 23 04:43:43 np0005593232 happy_williamson[284485]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:43:43 np0005593232 happy_williamson[284485]: --> relative data size: 1.0
Jan 23 04:43:43 np0005593232 happy_williamson[284485]: --> All data devices are unavailable
Jan 23 04:43:43 np0005593232 systemd[1]: libpod-9a50b82aaf8862120ad0cd2ff8d0135527155c670da5908afacce3b711457895.scope: Deactivated successfully.
Jan 23 04:43:43 np0005593232 podman[284469]: 2026-01-23 09:43:43.544012964 +0000 UTC m=+0.983664007 container died 9a50b82aaf8862120ad0cd2ff8d0135527155c670da5908afacce3b711457895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 23 04:43:43 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b5d525394f725c0b89407f5508aab4ebbbe983494bb180ae56f0488e39eb5671-merged.mount: Deactivated successfully.
Jan 23 04:43:43 np0005593232 podman[284469]: 2026-01-23 09:43:43.600098009 +0000 UTC m=+1.039749042 container remove 9a50b82aaf8862120ad0cd2ff8d0135527155c670da5908afacce3b711457895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_williamson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 04:43:43 np0005593232 systemd[1]: libpod-conmon-9a50b82aaf8862120ad0cd2ff8d0135527155c670da5908afacce3b711457895.scope: Deactivated successfully.
Jan 23 04:43:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:43:44 np0005593232 podman[284653]: 2026-01-23 09:43:44.183098524 +0000 UTC m=+0.036955425 container create 3e99d7074ddad08895bb0822e610fbe836e0a3732013edc4b38744a81db29ec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_borg, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:43:44 np0005593232 systemd[1]: Started libpod-conmon-3e99d7074ddad08895bb0822e610fbe836e0a3732013edc4b38744a81db29ec1.scope.
Jan 23 04:43:44 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:43:44 np0005593232 podman[284653]: 2026-01-23 09:43:44.257680693 +0000 UTC m=+0.111537604 container init 3e99d7074ddad08895bb0822e610fbe836e0a3732013edc4b38744a81db29ec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 23 04:43:44 np0005593232 podman[284653]: 2026-01-23 09:43:44.169035669 +0000 UTC m=+0.022892590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:43:44 np0005593232 podman[284653]: 2026-01-23 09:43:44.265940691 +0000 UTC m=+0.119797592 container start 3e99d7074ddad08895bb0822e610fbe836e0a3732013edc4b38744a81db29ec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 04:43:44 np0005593232 podman[284653]: 2026-01-23 09:43:44.268902666 +0000 UTC m=+0.122759577 container attach 3e99d7074ddad08895bb0822e610fbe836e0a3732013edc4b38744a81db29ec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_borg, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:43:44 np0005593232 gracious_borg[284669]: 167 167
Jan 23 04:43:44 np0005593232 systemd[1]: libpod-3e99d7074ddad08895bb0822e610fbe836e0a3732013edc4b38744a81db29ec1.scope: Deactivated successfully.
Jan 23 04:43:44 np0005593232 podman[284653]: 2026-01-23 09:43:44.272589202 +0000 UTC m=+0.126446103 container died 3e99d7074ddad08895bb0822e610fbe836e0a3732013edc4b38744a81db29ec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:43:44 np0005593232 systemd[1]: var-lib-containers-storage-overlay-75329f25a53c6bb0dc72b6f8ea4df609ce04d11bd46a8203015d6ad48d943ff1-merged.mount: Deactivated successfully.
Jan 23 04:43:44 np0005593232 podman[284653]: 2026-01-23 09:43:44.312194533 +0000 UTC m=+0.166051424 container remove 3e99d7074ddad08895bb0822e610fbe836e0a3732013edc4b38744a81db29ec1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_borg, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:43:44 np0005593232 systemd[1]: libpod-conmon-3e99d7074ddad08895bb0822e610fbe836e0a3732013edc4b38744a81db29ec1.scope: Deactivated successfully.
Jan 23 04:43:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 04:43:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1681468430' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 04:43:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 04:43:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1681468430' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 04:43:44 np0005593232 podman[284692]: 2026-01-23 09:43:44.478685929 +0000 UTC m=+0.038770238 container create b5e5c3d9e6e6c853cf936f8840328502cc8ba7347ff5bb9349e0e9cce7e89742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ganguly, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 04:43:44 np0005593232 systemd[1]: Started libpod-conmon-b5e5c3d9e6e6c853cf936f8840328502cc8ba7347ff5bb9349e0e9cce7e89742.scope.
Jan 23 04:43:44 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:43:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8aafa76ba2f93c01a791c28d3414006a21bf6d833209d014792537acb15fc6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:43:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8aafa76ba2f93c01a791c28d3414006a21bf6d833209d014792537acb15fc6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:43:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8aafa76ba2f93c01a791c28d3414006a21bf6d833209d014792537acb15fc6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:43:44 np0005593232 podman[284692]: 2026-01-23 09:43:44.463252515 +0000 UTC m=+0.023336844 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:43:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8aafa76ba2f93c01a791c28d3414006a21bf6d833209d014792537acb15fc6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:43:44 np0005593232 podman[284692]: 2026-01-23 09:43:44.566460528 +0000 UTC m=+0.126544857 container init b5e5c3d9e6e6c853cf936f8840328502cc8ba7347ff5bb9349e0e9cce7e89742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ganguly, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:43:44 np0005593232 podman[284692]: 2026-01-23 09:43:44.576990941 +0000 UTC m=+0.137075250 container start b5e5c3d9e6e6c853cf936f8840328502cc8ba7347ff5bb9349e0e9cce7e89742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:43:44 np0005593232 podman[284692]: 2026-01-23 09:43:44.581716127 +0000 UTC m=+0.141800456 container attach b5e5c3d9e6e6c853cf936f8840328502cc8ba7347ff5bb9349e0e9cce7e89742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ganguly, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:43:44 np0005593232 nova_compute[250269]: 2026-01-23 09:43:44.579 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:44.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000058s ======
Jan 23 04:43:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:44.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Jan 23 04:43:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1515: 321 pgs: 321 active+clean; 88 MiB data, 530 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 23 KiB/s wr, 219 op/s
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]: {
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:    "0": [
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:        {
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:            "devices": [
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:                "/dev/loop3"
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:            ],
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:            "lv_name": "ceph_lv0",
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:            "lv_size": "7511998464",
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:            "name": "ceph_lv0",
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:            "tags": {
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:                "ceph.cluster_name": "ceph",
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:                "ceph.crush_device_class": "",
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:                "ceph.encrypted": "0",
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:                "ceph.osd_id": "0",
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:                "ceph.type": "block",
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:                "ceph.vdo": "0"
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:            },
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:            "type": "block",
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:            "vg_name": "ceph_vg0"
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:        }
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]:    ]
Jan 23 04:43:45 np0005593232 nice_ganguly[284708]: }
Jan 23 04:43:45 np0005593232 systemd[1]: libpod-b5e5c3d9e6e6c853cf936f8840328502cc8ba7347ff5bb9349e0e9cce7e89742.scope: Deactivated successfully.
Jan 23 04:43:45 np0005593232 podman[284692]: 2026-01-23 09:43:45.407409784 +0000 UTC m=+0.967494103 container died b5e5c3d9e6e6c853cf936f8840328502cc8ba7347ff5bb9349e0e9cce7e89742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ganguly, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 04:43:45 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d8aafa76ba2f93c01a791c28d3414006a21bf6d833209d014792537acb15fc6e-merged.mount: Deactivated successfully.
Jan 23 04:43:45 np0005593232 podman[284692]: 2026-01-23 09:43:45.464400066 +0000 UTC m=+1.024484365 container remove b5e5c3d9e6e6c853cf936f8840328502cc8ba7347ff5bb9349e0e9cce7e89742 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_ganguly, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:43:45 np0005593232 systemd[1]: libpod-conmon-b5e5c3d9e6e6c853cf936f8840328502cc8ba7347ff5bb9349e0e9cce7e89742.scope: Deactivated successfully.
Jan 23 04:43:46 np0005593232 podman[284869]: 2026-01-23 09:43:46.093641763 +0000 UTC m=+0.021263403 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:43:46 np0005593232 podman[284869]: 2026-01-23 09:43:46.199956425 +0000 UTC m=+0.127578065 container create ece056505e9003043e6b3bc76a50124d7a0db59232a9d108e1173e984ce57c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:43:46 np0005593232 systemd[1]: Started libpod-conmon-ece056505e9003043e6b3bc76a50124d7a0db59232a9d108e1173e984ce57c6f.scope.
Jan 23 04:43:46 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:43:46 np0005593232 podman[284869]: 2026-01-23 09:43:46.289830575 +0000 UTC m=+0.217452245 container init ece056505e9003043e6b3bc76a50124d7a0db59232a9d108e1173e984ce57c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 04:43:46 np0005593232 podman[284869]: 2026-01-23 09:43:46.297703802 +0000 UTC m=+0.225325442 container start ece056505e9003043e6b3bc76a50124d7a0db59232a9d108e1173e984ce57c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 04:43:46 np0005593232 vigorous_brahmagupta[284885]: 167 167
Jan 23 04:43:46 np0005593232 systemd[1]: libpod-ece056505e9003043e6b3bc76a50124d7a0db59232a9d108e1173e984ce57c6f.scope: Deactivated successfully.
Jan 23 04:43:46 np0005593232 podman[284869]: 2026-01-23 09:43:46.302686146 +0000 UTC m=+0.230307826 container attach ece056505e9003043e6b3bc76a50124d7a0db59232a9d108e1173e984ce57c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_brahmagupta, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:43:46 np0005593232 podman[284869]: 2026-01-23 09:43:46.30317997 +0000 UTC m=+0.230801610 container died ece056505e9003043e6b3bc76a50124d7a0db59232a9d108e1173e984ce57c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_brahmagupta, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 23 04:43:46 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e5ca9ec440f39186c1b8352d2d1899fff2fb59d7698a545d53fb7a7ea51b3e95-merged.mount: Deactivated successfully.
Jan 23 04:43:46 np0005593232 podman[284869]: 2026-01-23 09:43:46.429268512 +0000 UTC m=+0.356890182 container remove ece056505e9003043e6b3bc76a50124d7a0db59232a9d108e1173e984ce57c6f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 04:43:46 np0005593232 systemd[1]: libpod-conmon-ece056505e9003043e6b3bc76a50124d7a0db59232a9d108e1173e984ce57c6f.scope: Deactivated successfully.
Jan 23 04:43:46 np0005593232 podman[284909]: 2026-01-23 09:43:46.634668339 +0000 UTC m=+0.083401053 container create a85d64d8db210a0a7f0c3a08568bc6b4d0ed4f300c75e5162b2550ec7708f625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:43:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:46.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:46 np0005593232 nova_compute[250269]: 2026-01-23 09:43:46.643 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:46 np0005593232 podman[284909]: 2026-01-23 09:43:46.573221079 +0000 UTC m=+0.021953813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:43:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:43:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:43:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:43:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:43:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009958283896333519 of space, bias 1.0, pg target 0.2987485168900056 quantized to 32 (current 32)
Jan 23 04:43:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:43:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Jan 23 04:43:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:43:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:43:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:43:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 04:43:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:43:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:43:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:43:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:43:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:43:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:43:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:43:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:43:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:43:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:43:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:43:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:43:46 np0005593232 systemd[1]: Started libpod-conmon-a85d64d8db210a0a7f0c3a08568bc6b4d0ed4f300c75e5162b2550ec7708f625.scope.
Jan 23 04:43:46 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:43:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7f6e9e35806f1eee37f52551e3ed0458e5b5f8ba7fcbfe4741fbdeec30204f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:43:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7f6e9e35806f1eee37f52551e3ed0458e5b5f8ba7fcbfe4741fbdeec30204f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:43:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7f6e9e35806f1eee37f52551e3ed0458e5b5f8ba7fcbfe4741fbdeec30204f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:43:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7f6e9e35806f1eee37f52551e3ed0458e5b5f8ba7fcbfe4741fbdeec30204f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:43:46 np0005593232 podman[284909]: 2026-01-23 09:43:46.765664363 +0000 UTC m=+0.214397097 container init a85d64d8db210a0a7f0c3a08568bc6b4d0ed4f300c75e5162b2550ec7708f625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:43:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:46.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:46 np0005593232 podman[284909]: 2026-01-23 09:43:46.779843722 +0000 UTC m=+0.228576436 container start a85d64d8db210a0a7f0c3a08568bc6b4d0ed4f300c75e5162b2550ec7708f625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:43:46 np0005593232 podman[284909]: 2026-01-23 09:43:46.819059841 +0000 UTC m=+0.267792585 container attach a85d64d8db210a0a7f0c3a08568bc6b4d0ed4f300c75e5162b2550ec7708f625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 04:43:47 np0005593232 nova_compute[250269]: 2026-01-23 09:43:47.056 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769161412.054371, 9b1cd8b8-1ac9-441e-961f-17dc56cb555c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:43:47 np0005593232 nova_compute[250269]: 2026-01-23 09:43:47.057 250273 INFO nova.compute.manager [-] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:43:47 np0005593232 nova_compute[250269]: 2026-01-23 09:43:47.087 250273 DEBUG nova.compute.manager [None req-3362a98c-6dd0-4fef-a4c4-63b10f70d569 - - - - - -] [instance: 9b1cd8b8-1ac9-441e-961f-17dc56cb555c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:43:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 04:43:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1000093092' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 04:43:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 04:43:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1000093092' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 04:43:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1516: 321 pgs: 321 active+clean; 105 MiB data, 537 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 696 KiB/s wr, 236 op/s
Jan 23 04:43:47 np0005593232 nice_lamport[284926]: {
Jan 23 04:43:47 np0005593232 nice_lamport[284926]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:43:47 np0005593232 nice_lamport[284926]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:43:47 np0005593232 nice_lamport[284926]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:43:47 np0005593232 nice_lamport[284926]:        "osd_id": 0,
Jan 23 04:43:47 np0005593232 nice_lamport[284926]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:43:47 np0005593232 nice_lamport[284926]:        "type": "bluestore"
Jan 23 04:43:47 np0005593232 nice_lamport[284926]:    }
Jan 23 04:43:47 np0005593232 nice_lamport[284926]: }
Jan 23 04:43:47 np0005593232 systemd[1]: libpod-a85d64d8db210a0a7f0c3a08568bc6b4d0ed4f300c75e5162b2550ec7708f625.scope: Deactivated successfully.
Jan 23 04:43:47 np0005593232 podman[284909]: 2026-01-23 09:43:47.625680438 +0000 UTC m=+1.074413152 container died a85d64d8db210a0a7f0c3a08568bc6b4d0ed4f300c75e5162b2550ec7708f625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:43:47 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6c7f6e9e35806f1eee37f52551e3ed0458e5b5f8ba7fcbfe4741fbdeec30204f-merged.mount: Deactivated successfully.
Jan 23 04:43:47 np0005593232 podman[284909]: 2026-01-23 09:43:47.675124672 +0000 UTC m=+1.123857386 container remove a85d64d8db210a0a7f0c3a08568bc6b4d0ed4f300c75e5162b2550ec7708f625 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_lamport, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:43:47 np0005593232 systemd[1]: libpod-conmon-a85d64d8db210a0a7f0c3a08568bc6b4d0ed4f300c75e5162b2550ec7708f625.scope: Deactivated successfully.
Jan 23 04:43:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:43:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:43:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:43:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:43:47 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 4a67ffe6-8a9e-4d00-881b-de072304dafa does not exist
Jan 23 04:43:47 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 3555d993-3b15-4093-afa2-7d64c5e1524c does not exist
Jan 23 04:43:47 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 031080e8-69f3-4397-a7aa-24b0a119d9c9 does not exist
Jan 23 04:43:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:43:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:43:48 np0005593232 nova_compute[250269]: 2026-01-23 09:43:48.413 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:43:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:48.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:43:48 np0005593232 nova_compute[250269]: 2026-01-23 09:43:48.736 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:48.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 04:43:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1498028679' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 04:43:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 04:43:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1498028679' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 04:43:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:43:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Jan 23 04:43:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Jan 23 04:43:49 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Jan 23 04:43:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1518: 321 pgs: 321 active+clean; 134 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Jan 23 04:43:49 np0005593232 podman[285014]: 2026-01-23 09:43:49.404641266 +0000 UTC m=+0.061641977 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 23 04:43:49 np0005593232 nova_compute[250269]: 2026-01-23 09:43:49.581 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:43:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:50.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:43:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:50.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1519: 321 pgs: 321 active+clean; 134 MiB data, 551 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 131 op/s
Jan 23 04:43:51 np0005593232 nova_compute[250269]: 2026-01-23 09:43:51.546 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769161416.5327652, a1208de2-efde-4618-8388-2acfab37582a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:43:51 np0005593232 nova_compute[250269]: 2026-01-23 09:43:51.547 250273 INFO nova.compute.manager [-] [instance: a1208de2-efde-4618-8388-2acfab37582a] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:43:51 np0005593232 nova_compute[250269]: 2026-01-23 09:43:51.649 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:52 np0005593232 nova_compute[250269]: 2026-01-23 09:43:52.494 250273 DEBUG nova.compute.manager [None req-9cda746c-82e5-4149-86a5-fe007fb46a1b - - - - - -] [instance: a1208de2-efde-4618-8388-2acfab37582a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:43:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:52.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:43:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:52.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:43:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1520: 321 pgs: 321 active+clean; 167 MiB data, 576 MiB used, 20 GiB / 21 GiB avail; 433 KiB/s rd, 4.7 MiB/s wr, 133 op/s
Jan 23 04:43:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:43:54 np0005593232 nova_compute[250269]: 2026-01-23 09:43:54.583 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:54.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:54.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1521: 321 pgs: 321 active+clean; 167 MiB data, 576 MiB used, 20 GiB / 21 GiB avail; 433 KiB/s rd, 4.7 MiB/s wr, 133 op/s
Jan 23 04:43:56 np0005593232 nova_compute[250269]: 2026-01-23 09:43:56.653 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:43:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:56.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:43:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:56.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:43:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1522: 321 pgs: 321 active+clean; 167 MiB data, 576 MiB used, 20 GiB / 21 GiB avail; 411 KiB/s rd, 4.0 MiB/s wr, 104 op/s
Jan 23 04:43:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:43:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:43:58.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:43:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:43:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:43:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:43:58.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:43:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:43:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1523: 321 pgs: 321 active+clean; 167 MiB data, 576 MiB used, 20 GiB / 21 GiB avail; 404 KiB/s rd, 2.5 MiB/s wr, 99 op/s
Jan 23 04:43:59 np0005593232 nova_compute[250269]: 2026-01-23 09:43:59.585 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:44:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:00.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:44:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:44:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:00.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:44:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1524: 321 pgs: 321 active+clean; 167 MiB data, 576 MiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 2.1 MiB/s wr, 84 op/s
Jan 23 04:44:01 np0005593232 nova_compute[250269]: 2026-01-23 09:44:01.657 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:02 np0005593232 nova_compute[250269]: 2026-01-23 09:44:02.296 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:44:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:02.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:44:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:02.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:44:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1525: 321 pgs: 321 active+clean; 167 MiB data, 576 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 148 op/s
Jan 23 04:44:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Jan 23 04:44:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Jan 23 04:44:03 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Jan 23 04:44:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:44:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Jan 23 04:44:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Jan 23 04:44:04 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Jan 23 04:44:04 np0005593232 nova_compute[250269]: 2026-01-23 09:44:04.587 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:44:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:04.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:44:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:04.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1528: 321 pgs: 321 active+clean; 167 MiB data, 576 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 20 KiB/s wr, 97 op/s
Jan 23 04:44:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Jan 23 04:44:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Jan 23 04:44:05 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Jan 23 04:44:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Jan 23 04:44:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Jan 23 04:44:06 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Jan 23 04:44:06 np0005593232 nova_compute[250269]: 2026-01-23 09:44:06.660 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:06.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:06.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1531: 321 pgs: 321 active+clean; 181 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 4.8 MiB/s wr, 133 op/s
Jan 23 04:44:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:44:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:44:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:44:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:44:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:44:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:44:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:08.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:08.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:44:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1532: 321 pgs: 321 active+clean; 181 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 156 op/s
Jan 23 04:44:09 np0005593232 nova_compute[250269]: 2026-01-23 09:44:09.590 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:10 np0005593232 podman[285095]: 2026-01-23 09:44:10.439966762 +0000 UTC m=+0.095883124 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 23 04:44:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:44:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:10.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:44:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:10.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1533: 321 pgs: 321 active+clean; 181 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 3.2 MiB/s wr, 136 op/s
Jan 23 04:44:11 np0005593232 nova_compute[250269]: 2026-01-23 09:44:11.315 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:44:11 np0005593232 nova_compute[250269]: 2026-01-23 09:44:11.315 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:44:11 np0005593232 nova_compute[250269]: 2026-01-23 09:44:11.664 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:12 np0005593232 nova_compute[250269]: 2026-01-23 09:44:12.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:44:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:12.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:12.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1534: 321 pgs: 321 active+clean; 121 MiB data, 559 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.7 MiB/s wr, 156 op/s
Jan 23 04:44:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:44:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Jan 23 04:44:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Jan 23 04:44:14 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Jan 23 04:44:14 np0005593232 nova_compute[250269]: 2026-01-23 09:44:14.592 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:14.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:14.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:15 np0005593232 nova_compute[250269]: 2026-01-23 09:44:15.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:44:15 np0005593232 nova_compute[250269]: 2026-01-23 09:44:15.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:44:15 np0005593232 nova_compute[250269]: 2026-01-23 09:44:15.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:44:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1536: 321 pgs: 321 active+clean; 121 MiB data, 559 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Jan 23 04:44:15 np0005593232 nova_compute[250269]: 2026-01-23 09:44:15.700 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 04:44:15 np0005593232 nova_compute[250269]: 2026-01-23 09:44:15.701 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:44:15 np0005593232 nova_compute[250269]: 2026-01-23 09:44:15.701 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:44:16 np0005593232 nova_compute[250269]: 2026-01-23 09:44:16.667 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:16.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:16.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1537: 321 pgs: 321 active+clean; 78 MiB data, 533 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 229 KiB/s wr, 85 op/s
Jan 23 04:44:18 np0005593232 nova_compute[250269]: 2026-01-23 09:44:18.288 250273 DEBUG oslo_concurrency.lockutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Acquiring lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:44:18 np0005593232 nova_compute[250269]: 2026-01-23 09:44:18.289 250273 DEBUG oslo_concurrency.lockutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:44:18 np0005593232 nova_compute[250269]: 2026-01-23 09:44:18.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:44:18 np0005593232 nova_compute[250269]: 2026-01-23 09:44:18.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:44:18 np0005593232 nova_compute[250269]: 2026-01-23 09:44:18.314 250273 DEBUG nova.compute.manager [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:44:18 np0005593232 nova_compute[250269]: 2026-01-23 09:44:18.423 250273 DEBUG oslo_concurrency.lockutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:44:18 np0005593232 nova_compute[250269]: 2026-01-23 09:44:18.424 250273 DEBUG oslo_concurrency.lockutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:44:18 np0005593232 nova_compute[250269]: 2026-01-23 09:44:18.434 250273 DEBUG nova.virt.hardware [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:44:18 np0005593232 nova_compute[250269]: 2026-01-23 09:44:18.435 250273 INFO nova.compute.claims [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:44:18 np0005593232 nova_compute[250269]: 2026-01-23 09:44:18.535 250273 DEBUG nova.scheduler.client.report [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Refreshing inventories for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 23 04:44:18 np0005593232 nova_compute[250269]: 2026-01-23 09:44:18.555 250273 DEBUG nova.scheduler.client.report [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Updating ProviderTree inventory for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 23 04:44:18 np0005593232 nova_compute[250269]: 2026-01-23 09:44:18.555 250273 DEBUG nova.compute.provider_tree [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 04:44:18 np0005593232 nova_compute[250269]: 2026-01-23 09:44:18.582 250273 DEBUG nova.scheduler.client.report [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Refreshing aggregate associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 23 04:44:18 np0005593232 nova_compute[250269]: 2026-01-23 09:44:18.609 250273 DEBUG nova.scheduler.client.report [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Refreshing trait associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 23 04:44:18 np0005593232 nova_compute[250269]: 2026-01-23 09:44:18.684 250273 DEBUG oslo_concurrency.processutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:44:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:18.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:44:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:18.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:44:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:44:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:44:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2267029535' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.172 250273 DEBUG oslo_concurrency.processutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.180 250273 DEBUG nova.compute.provider_tree [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.203 250273 DEBUG nova.scheduler.client.report [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.232 250273 DEBUG oslo_concurrency.lockutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.234 250273 DEBUG nova.compute.manager [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.301 250273 DEBUG nova.compute.manager [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.302 250273 DEBUG nova.network.neutron [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:44:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1538: 321 pgs: 321 active+clean; 41 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 4.0 KiB/s wr, 66 op/s
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.336 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.337 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.337 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.338 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.339 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.376 250273 INFO nova.virt.libvirt.driver [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.404 250273 DEBUG nova.compute.manager [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.512 250273 DEBUG nova.compute.manager [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.515 250273 DEBUG nova.virt.libvirt.driver [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.516 250273 INFO nova.virt.libvirt.driver [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Creating image(s)#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.559 250273 DEBUG nova.storage.rbd_utils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] rbd image 5f9cef8f-a65f-4fa3-aecd-678b25b19227_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.593 250273 DEBUG nova.storage.rbd_utils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] rbd image 5f9cef8f-a65f-4fa3-aecd-678b25b19227_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.631 250273 DEBUG nova.storage.rbd_utils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] rbd image 5f9cef8f-a65f-4fa3-aecd-678b25b19227_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.636 250273 DEBUG oslo_concurrency.processutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.667 250273 DEBUG nova.policy [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '72d5ad5683984a21a60886499308fc93', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3bf3a67d862f46d790d628778b7c14e4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.670 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.718 250273 DEBUG oslo_concurrency.processutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.719 250273 DEBUG oslo_concurrency.lockutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.720 250273 DEBUG oslo_concurrency.lockutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.721 250273 DEBUG oslo_concurrency.lockutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.755 250273 DEBUG nova.storage.rbd_utils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] rbd image 5f9cef8f-a65f-4fa3-aecd-678b25b19227_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.760 250273 DEBUG oslo_concurrency.processutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 5f9cef8f-a65f-4fa3-aecd-678b25b19227_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:44:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:44:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/889821763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:44:19 np0005593232 nova_compute[250269]: 2026-01-23 09:44:19.844 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:44:19 np0005593232 podman[285311]: 2026-01-23 09:44:19.963075829 +0000 UTC m=+0.066048477 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 23 04:44:20 np0005593232 nova_compute[250269]: 2026-01-23 09:44:20.061 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:44:20 np0005593232 nova_compute[250269]: 2026-01-23 09:44:20.063 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4639MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:44:20 np0005593232 nova_compute[250269]: 2026-01-23 09:44:20.063 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:44:20 np0005593232 nova_compute[250269]: 2026-01-23 09:44:20.064 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:44:20 np0005593232 nova_compute[250269]: 2026-01-23 09:44:20.171 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 5f9cef8f-a65f-4fa3-aecd-678b25b19227 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:44:20 np0005593232 nova_compute[250269]: 2026-01-23 09:44:20.172 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:44:20 np0005593232 nova_compute[250269]: 2026-01-23 09:44:20.172 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:44:20 np0005593232 nova_compute[250269]: 2026-01-23 09:44:20.219 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:44:20 np0005593232 nova_compute[250269]: 2026-01-23 09:44:20.263 250273 DEBUG oslo_concurrency.processutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 5f9cef8f-a65f-4fa3-aecd-678b25b19227_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:44:20 np0005593232 nova_compute[250269]: 2026-01-23 09:44:20.344 250273 DEBUG nova.storage.rbd_utils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] resizing rbd image 5f9cef8f-a65f-4fa3-aecd-678b25b19227_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 04:44:20 np0005593232 nova_compute[250269]: 2026-01-23 09:44:20.500 250273 DEBUG nova.objects.instance [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Lazy-loading 'migration_context' on Instance uuid 5f9cef8f-a65f-4fa3-aecd-678b25b19227 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:44:20 np0005593232 nova_compute[250269]: 2026-01-23 09:44:20.524 250273 DEBUG nova.virt.libvirt.driver [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:44:20 np0005593232 nova_compute[250269]: 2026-01-23 09:44:20.525 250273 DEBUG nova.virt.libvirt.driver [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Ensure instance console log exists: /var/lib/nova/instances/5f9cef8f-a65f-4fa3-aecd-678b25b19227/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:44:20 np0005593232 nova_compute[250269]: 2026-01-23 09:44:20.526 250273 DEBUG oslo_concurrency.lockutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:44:20 np0005593232 nova_compute[250269]: 2026-01-23 09:44:20.526 250273 DEBUG oslo_concurrency.lockutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:44:20 np0005593232 nova_compute[250269]: 2026-01-23 09:44:20.526 250273 DEBUG oslo_concurrency.lockutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:44:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:44:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2028373443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:44:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:20 np0005593232 nova_compute[250269]: 2026-01-23 09:44:20.694 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:44:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:20.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:20 np0005593232 nova_compute[250269]: 2026-01-23 09:44:20.703 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:44:20 np0005593232 nova_compute[250269]: 2026-01-23 09:44:20.733 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:44:20 np0005593232 nova_compute[250269]: 2026-01-23 09:44:20.760 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:44:20 np0005593232 nova_compute[250269]: 2026-01-23 09:44:20.761 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:44:20 np0005593232 nova_compute[250269]: 2026-01-23 09:44:20.771 250273 DEBUG nova.network.neutron [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Successfully created port: 97ccc3f3-015a-4a0e-822f-ca2340cde865 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 04:44:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:20.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1539: 321 pgs: 321 active+clean; 41 MiB data, 512 MiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 4.0 KiB/s wr, 66 op/s
Jan 23 04:44:21 np0005593232 nova_compute[250269]: 2026-01-23 09:44:21.671 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:21 np0005593232 nova_compute[250269]: 2026-01-23 09:44:21.761 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:44:22 np0005593232 nova_compute[250269]: 2026-01-23 09:44:22.187 250273 DEBUG nova.network.neutron [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Successfully updated port: 97ccc3f3-015a-4a0e-822f-ca2340cde865 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 04:44:22 np0005593232 nova_compute[250269]: 2026-01-23 09:44:22.214 250273 DEBUG oslo_concurrency.lockutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Acquiring lock "refresh_cache-5f9cef8f-a65f-4fa3-aecd-678b25b19227" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:44:22 np0005593232 nova_compute[250269]: 2026-01-23 09:44:22.214 250273 DEBUG oslo_concurrency.lockutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Acquired lock "refresh_cache-5f9cef8f-a65f-4fa3-aecd-678b25b19227" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:44:22 np0005593232 nova_compute[250269]: 2026-01-23 09:44:22.214 250273 DEBUG nova.network.neutron [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:44:22 np0005593232 nova_compute[250269]: 2026-01-23 09:44:22.374 250273 DEBUG nova.compute.manager [req-8743346e-9bd7-46e1-ba54-e4d501f80722 req-6175f469-7873-427a-a9a5-bda45d48f330 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Received event network-changed-97ccc3f3-015a-4a0e-822f-ca2340cde865 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:44:22 np0005593232 nova_compute[250269]: 2026-01-23 09:44:22.374 250273 DEBUG nova.compute.manager [req-8743346e-9bd7-46e1-ba54-e4d501f80722 req-6175f469-7873-427a-a9a5-bda45d48f330 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Refreshing instance network info cache due to event network-changed-97ccc3f3-015a-4a0e-822f-ca2340cde865. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:44:22 np0005593232 nova_compute[250269]: 2026-01-23 09:44:22.374 250273 DEBUG oslo_concurrency.lockutils [req-8743346e-9bd7-46e1-ba54-e4d501f80722 req-6175f469-7873-427a-a9a5-bda45d48f330 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-5f9cef8f-a65f-4fa3-aecd-678b25b19227" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:44:22 np0005593232 nova_compute[250269]: 2026-01-23 09:44:22.504 250273 DEBUG nova.network.neutron [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:44:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:22.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:22.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1540: 321 pgs: 321 active+clean; 88 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 04:44:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.409 250273 DEBUG nova.network.neutron [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Updating instance_info_cache with network_info: [{"id": "97ccc3f3-015a-4a0e-822f-ca2340cde865", "address": "fa:16:3e:61:c8:0e", "network": {"id": "94477bdc-dd64-4159-a01d-35011d156b67", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1113384252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bf3a67d862f46d790d628778b7c14e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97ccc3f3-01", "ovs_interfaceid": "97ccc3f3-015a-4a0e-822f-ca2340cde865", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.598 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:44:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:24.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.706 250273 DEBUG oslo_concurrency.lockutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Releasing lock "refresh_cache-5f9cef8f-a65f-4fa3-aecd-678b25b19227" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.706 250273 DEBUG nova.compute.manager [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Instance network_info: |[{"id": "97ccc3f3-015a-4a0e-822f-ca2340cde865", "address": "fa:16:3e:61:c8:0e", "network": {"id": "94477bdc-dd64-4159-a01d-35011d156b67", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1113384252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bf3a67d862f46d790d628778b7c14e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97ccc3f3-01", "ovs_interfaceid": "97ccc3f3-015a-4a0e-822f-ca2340cde865", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.707 250273 DEBUG oslo_concurrency.lockutils [req-8743346e-9bd7-46e1-ba54-e4d501f80722 req-6175f469-7873-427a-a9a5-bda45d48f330 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-5f9cef8f-a65f-4fa3-aecd-678b25b19227" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.707 250273 DEBUG nova.network.neutron [req-8743346e-9bd7-46e1-ba54-e4d501f80722 req-6175f469-7873-427a-a9a5-bda45d48f330 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Refreshing network info cache for port 97ccc3f3-015a-4a0e-822f-ca2340cde865 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.711 250273 DEBUG nova.virt.libvirt.driver [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Start _get_guest_xml network_info=[{"id": "97ccc3f3-015a-4a0e-822f-ca2340cde865", "address": "fa:16:3e:61:c8:0e", "network": {"id": "94477bdc-dd64-4159-a01d-35011d156b67", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1113384252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bf3a67d862f46d790d628778b7c14e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97ccc3f3-01", "ovs_interfaceid": "97ccc3f3-015a-4a0e-822f-ca2340cde865", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.716 250273 WARNING nova.virt.libvirt.driver [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.723 250273 DEBUG nova.virt.libvirt.host [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.724 250273 DEBUG nova.virt.libvirt.host [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.730 250273 DEBUG nova.virt.libvirt.host [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.731 250273 DEBUG nova.virt.libvirt.host [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.732 250273 DEBUG nova.virt.libvirt.driver [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.733 250273 DEBUG nova.virt.hardware [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.733 250273 DEBUG nova.virt.hardware [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.734 250273 DEBUG nova.virt.hardware [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.734 250273 DEBUG nova.virt.hardware [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.734 250273 DEBUG nova.virt.hardware [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.734 250273 DEBUG nova.virt.hardware [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.735 250273 DEBUG nova.virt.hardware [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.735 250273 DEBUG nova.virt.hardware [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.735 250273 DEBUG nova.virt.hardware [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.735 250273 DEBUG nova.virt.hardware [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.736 250273 DEBUG nova.virt.hardware [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:44:24 np0005593232 nova_compute[250269]: 2026-01-23 09:44:24.739 250273 DEBUG oslo_concurrency.processutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:44:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:24.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:44:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/97257166' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.174 250273 DEBUG oslo_concurrency.processutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.205 250273 DEBUG nova.storage.rbd_utils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] rbd image 5f9cef8f-a65f-4fa3-aecd-678b25b19227_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.212 250273 DEBUG oslo_concurrency.processutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:44:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1541: 321 pgs: 321 active+clean; 88 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 1.9 MiB/s wr, 58 op/s
Jan 23 04:44:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:44:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3950528519' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.706 250273 DEBUG oslo_concurrency.processutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.708 250273 DEBUG nova.virt.libvirt.vif [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=2.2.2.2,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:44:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='guest-instance-1.domain.com',display_name='guest-instance-1.domain.com',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='guest-instance-1-domain-com',id=53,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEZylpj2pu3/Qzi10T9Kl/K84t8wiGZymM+I+ivo3dS3vuuHnNJLCZs7eg33tWV8W6FQi7p2FDmjp1wDwW6fPuRCdx7V8TgNizJ/a0QZD5q+obdPna0hA+wxJIXktHCAww==',key_name='tempest-keypair-1551683196',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3bf3a67d862f46d790d628778b7c14e4',ramdisk_id='',reservation_id='r-pc9o7y4m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio'
,image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestFqdnHostnames-1923071179',owner_user_name='tempest-ServersTestFqdnHostnames-1923071179-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:44:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='72d5ad5683984a21a60886499308fc93',uuid=5f9cef8f-a65f-4fa3-aecd-678b25b19227,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97ccc3f3-015a-4a0e-822f-ca2340cde865", "address": "fa:16:3e:61:c8:0e", "network": {"id": "94477bdc-dd64-4159-a01d-35011d156b67", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1113384252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bf3a67d862f46d790d628778b7c14e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97ccc3f3-01", "ovs_interfaceid": "97ccc3f3-015a-4a0e-822f-ca2340cde865", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.709 250273 DEBUG nova.network.os_vif_util [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Converting VIF {"id": "97ccc3f3-015a-4a0e-822f-ca2340cde865", "address": "fa:16:3e:61:c8:0e", "network": {"id": "94477bdc-dd64-4159-a01d-35011d156b67", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1113384252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bf3a67d862f46d790d628778b7c14e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97ccc3f3-01", "ovs_interfaceid": "97ccc3f3-015a-4a0e-822f-ca2340cde865", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.710 250273 DEBUG nova.network.os_vif_util [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:c8:0e,bridge_name='br-int',has_traffic_filtering=True,id=97ccc3f3-015a-4a0e-822f-ca2340cde865,network=Network(94477bdc-dd64-4159-a01d-35011d156b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97ccc3f3-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.711 250273 DEBUG nova.objects.instance [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5f9cef8f-a65f-4fa3-aecd-678b25b19227 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.728 250273 DEBUG nova.virt.libvirt.driver [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:44:25 np0005593232 nova_compute[250269]:  <uuid>5f9cef8f-a65f-4fa3-aecd-678b25b19227</uuid>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:  <name>instance-00000035</name>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <nova:name>guest-instance-1.domain.com</nova:name>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:44:24</nova:creationTime>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:44:25 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:        <nova:user uuid="72d5ad5683984a21a60886499308fc93">tempest-ServersTestFqdnHostnames-1923071179-project-member</nova:user>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:        <nova:project uuid="3bf3a67d862f46d790d628778b7c14e4">tempest-ServersTestFqdnHostnames-1923071179</nova:project>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:        <nova:port uuid="97ccc3f3-015a-4a0e-822f-ca2340cde865">
Jan 23 04:44:25 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <entry name="serial">5f9cef8f-a65f-4fa3-aecd-678b25b19227</entry>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <entry name="uuid">5f9cef8f-a65f-4fa3-aecd-678b25b19227</entry>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/5f9cef8f-a65f-4fa3-aecd-678b25b19227_disk">
Jan 23 04:44:25 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:44:25 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/5f9cef8f-a65f-4fa3-aecd-678b25b19227_disk.config">
Jan 23 04:44:25 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:44:25 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:61:c8:0e"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <target dev="tap97ccc3f3-01"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/5f9cef8f-a65f-4fa3-aecd-678b25b19227/console.log" append="off"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:44:25 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:44:25 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:44:25 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:44:25 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.729 250273 DEBUG nova.compute.manager [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Preparing to wait for external event network-vif-plugged-97ccc3f3-015a-4a0e-822f-ca2340cde865 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.730 250273 DEBUG oslo_concurrency.lockutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Acquiring lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.730 250273 DEBUG oslo_concurrency.lockutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.730 250273 DEBUG oslo_concurrency.lockutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.731 250273 DEBUG nova.virt.libvirt.vif [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=2.2.2.2,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:44:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='guest-instance-1.domain.com',display_name='guest-instance-1.domain.com',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='guest-instance-1-domain-com',id=53,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEZylpj2pu3/Qzi10T9Kl/K84t8wiGZymM+I+ivo3dS3vuuHnNJLCZs7eg33tWV8W6FQi7p2FDmjp1wDwW6fPuRCdx7V8TgNizJ/a0QZD5q+obdPna0hA+wxJIXktHCAww==',key_name='tempest-keypair-1551683196',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3bf3a67d862f46d790d628778b7c14e4',ramdisk_id='',reservation_id='r-pc9o7y4m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestFqdnHostnames-1923071179',owner_user_name='tempest-ServersTestFqdnHostnames-1923071179-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:44:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='72d5ad5683984a21a60886499308fc93',uuid=5f9cef8f-a65f-4fa3-aecd-678b25b19227,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "97ccc3f3-015a-4a0e-822f-ca2340cde865", "address": "fa:16:3e:61:c8:0e", "network": {"id": "94477bdc-dd64-4159-a01d-35011d156b67", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1113384252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bf3a67d862f46d790d628778b7c14e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97ccc3f3-01", "ovs_interfaceid": "97ccc3f3-015a-4a0e-822f-ca2340cde865", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.731 250273 DEBUG nova.network.os_vif_util [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Converting VIF {"id": "97ccc3f3-015a-4a0e-822f-ca2340cde865", "address": "fa:16:3e:61:c8:0e", "network": {"id": "94477bdc-dd64-4159-a01d-35011d156b67", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1113384252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bf3a67d862f46d790d628778b7c14e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97ccc3f3-01", "ovs_interfaceid": "97ccc3f3-015a-4a0e-822f-ca2340cde865", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.732 250273 DEBUG nova.network.os_vif_util [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:61:c8:0e,bridge_name='br-int',has_traffic_filtering=True,id=97ccc3f3-015a-4a0e-822f-ca2340cde865,network=Network(94477bdc-dd64-4159-a01d-35011d156b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97ccc3f3-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.732 250273 DEBUG os_vif [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:c8:0e,bridge_name='br-int',has_traffic_filtering=True,id=97ccc3f3-015a-4a0e-822f-ca2340cde865,network=Network(94477bdc-dd64-4159-a01d-35011d156b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97ccc3f3-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.732 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.733 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.733 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.737 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.737 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap97ccc3f3-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.738 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap97ccc3f3-01, col_values=(('external_ids', {'iface-id': '97ccc3f3-015a-4a0e-822f-ca2340cde865', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:61:c8:0e', 'vm-uuid': '5f9cef8f-a65f-4fa3-aecd-678b25b19227'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.739 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:25 np0005593232 NetworkManager[49057]: <info>  [1769161465.7399] manager: (tap97ccc3f3-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.741 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.747 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.747 250273 INFO os_vif [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:61:c8:0e,bridge_name='br-int',has_traffic_filtering=True,id=97ccc3f3-015a-4a0e-822f-ca2340cde865,network=Network(94477bdc-dd64-4159-a01d-35011d156b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97ccc3f3-01')#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.798 250273 DEBUG nova.virt.libvirt.driver [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.798 250273 DEBUG nova.virt.libvirt.driver [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.798 250273 DEBUG nova.virt.libvirt.driver [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] No VIF found with MAC fa:16:3e:61:c8:0e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.799 250273 INFO nova.virt.libvirt.driver [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Using config drive#033[00m
Jan 23 04:44:25 np0005593232 nova_compute[250269]: 2026-01-23 09:44:25.827 250273 DEBUG nova.storage.rbd_utils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] rbd image 5f9cef8f-a65f-4fa3-aecd-678b25b19227_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:44:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:26.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:26 np0005593232 nova_compute[250269]: 2026-01-23 09:44:26.719 250273 INFO nova.virt.libvirt.driver [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Creating config drive at /var/lib/nova/instances/5f9cef8f-a65f-4fa3-aecd-678b25b19227/disk.config#033[00m
Jan 23 04:44:26 np0005593232 nova_compute[250269]: 2026-01-23 09:44:26.727 250273 DEBUG oslo_concurrency.processutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5f9cef8f-a65f-4fa3-aecd-678b25b19227/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_8ndfaga execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:44:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:26.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:26 np0005593232 nova_compute[250269]: 2026-01-23 09:44:26.868 250273 DEBUG oslo_concurrency.processutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5f9cef8f-a65f-4fa3-aecd-678b25b19227/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_8ndfaga" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:44:26 np0005593232 nova_compute[250269]: 2026-01-23 09:44:26.902 250273 DEBUG nova.storage.rbd_utils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] rbd image 5f9cef8f-a65f-4fa3-aecd-678b25b19227_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:44:26 np0005593232 nova_compute[250269]: 2026-01-23 09:44:26.907 250273 DEBUG oslo_concurrency.processutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5f9cef8f-a65f-4fa3-aecd-678b25b19227/disk.config 5f9cef8f-a65f-4fa3-aecd-678b25b19227_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:44:27 np0005593232 nova_compute[250269]: 2026-01-23 09:44:27.165 250273 DEBUG oslo_concurrency.processutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5f9cef8f-a65f-4fa3-aecd-678b25b19227/disk.config 5f9cef8f-a65f-4fa3-aecd-678b25b19227_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:44:27 np0005593232 nova_compute[250269]: 2026-01-23 09:44:27.166 250273 INFO nova.virt.libvirt.driver [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Deleting local config drive /var/lib/nova/instances/5f9cef8f-a65f-4fa3-aecd-678b25b19227/disk.config because it was imported into RBD.#033[00m
Jan 23 04:44:27 np0005593232 kernel: tap97ccc3f3-01: entered promiscuous mode
Jan 23 04:44:27 np0005593232 ovn_controller[151001]: 2026-01-23T09:44:27Z|00149|binding|INFO|Claiming lport 97ccc3f3-015a-4a0e-822f-ca2340cde865 for this chassis.
Jan 23 04:44:27 np0005593232 NetworkManager[49057]: <info>  [1769161467.2387] manager: (tap97ccc3f3-01): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Jan 23 04:44:27 np0005593232 nova_compute[250269]: 2026-01-23 09:44:27.237 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:27 np0005593232 ovn_controller[151001]: 2026-01-23T09:44:27Z|00150|binding|INFO|97ccc3f3-015a-4a0e-822f-ca2340cde865: Claiming fa:16:3e:61:c8:0e 10.100.0.10
Jan 23 04:44:27 np0005593232 nova_compute[250269]: 2026-01-23 09:44:27.242 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:27 np0005593232 nova_compute[250269]: 2026-01-23 09:44:27.246 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:27 np0005593232 systemd-machined[215836]: New machine qemu-21-instance-00000035.
Jan 23 04:44:27 np0005593232 systemd-udevd[285567]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:44:27 np0005593232 systemd[1]: Started Virtual Machine qemu-21-instance-00000035.
Jan 23 04:44:27 np0005593232 NetworkManager[49057]: <info>  [1769161467.3037] device (tap97ccc3f3-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:44:27 np0005593232 NetworkManager[49057]: <info>  [1769161467.3057] device (tap97ccc3f3-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:44:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1542: 321 pgs: 321 active+clean; 88 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 23 04:44:27 np0005593232 nova_compute[250269]: 2026-01-23 09:44:27.317 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:27 np0005593232 ovn_controller[151001]: 2026-01-23T09:44:27Z|00151|binding|INFO|Setting lport 97ccc3f3-015a-4a0e-822f-ca2340cde865 ovn-installed in OVS
Jan 23 04:44:27 np0005593232 nova_compute[250269]: 2026-01-23 09:44:27.321 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:27 np0005593232 ovn_controller[151001]: 2026-01-23T09:44:27Z|00152|binding|INFO|Setting lport 97ccc3f3-015a-4a0e-822f-ca2340cde865 up in Southbound
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.373 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:c8:0e 10.100.0.10'], port_security=['fa:16:3e:61:c8:0e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5f9cef8f-a65f-4fa3-aecd-678b25b19227', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-94477bdc-dd64-4159-a01d-35011d156b67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3bf3a67d862f46d790d628778b7c14e4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fba64cfb-5453-41a2-b3f0-12ac307129da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b50763eb-baef-49a8-bb89-a647ac206344, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=97ccc3f3-015a-4a0e-822f-ca2340cde865) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.374 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 97ccc3f3-015a-4a0e-822f-ca2340cde865 in datapath 94477bdc-dd64-4159-a01d-35011d156b67 bound to our chassis#033[00m
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.375 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 94477bdc-dd64-4159-a01d-35011d156b67#033[00m
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.388 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ba36a8b9-05fc-4b83-8ba7-613ffe182a31]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.389 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap94477bdc-d1 in ovnmeta-94477bdc-dd64-4159-a01d-35011d156b67 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.391 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap94477bdc-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.391 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2d32d114-aa41-484e-9409-176f7a088bf1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.392 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[98a12687-b8f8-4486-91c5-499d8f0c62a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.407 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[91dcce0d-55aa-4b9f-bf12-ffc6bae70bc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.419 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8a22cb1b-76c3-4486-9d42-656364e08a31]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.456 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[800938ab-f38f-4ddd-897e-399fdf20cb4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.466 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4751e558-1302-46ab-8e6f-8b02557496b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:27 np0005593232 NetworkManager[49057]: <info>  [1769161467.4672] manager: (tap94477bdc-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/77)
Jan 23 04:44:27 np0005593232 systemd-udevd[285569]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.496 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[b5efc2be-cae6-4b8f-a6c5-2bb02eeec69d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.499 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[436f41bb-1010-491e-b7c0-62c4507a4a1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:27 np0005593232 NetworkManager[49057]: <info>  [1769161467.5213] device (tap94477bdc-d0): carrier: link connected
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.526 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[9c1dd061-99ea-4176-8308-48ec223c6a75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.540 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[cac53453-3c18-4850-a480-553b04a7c7ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap94477bdc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:a1:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538066, 'reachable_time': 32626, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285602, 'error': None, 'target': 'ovnmeta-94477bdc-dd64-4159-a01d-35011d156b67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.553 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ab817ec4-2130-4815-8ff0-6040e139fa92]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe16:a1e6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 538066, 'tstamp': 538066}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285603, 'error': None, 'target': 'ovnmeta-94477bdc-dd64-4159-a01d-35011d156b67', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.567 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[802d3702-467f-420d-8f0d-597464e34124]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap94477bdc-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:a1:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538066, 'reachable_time': 32626, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285604, 'error': None, 'target': 'ovnmeta-94477bdc-dd64-4159-a01d-35011d156b67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.594 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6209a89e-b7a7-40b0-b331-042ddd30d9df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.656 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[490b1523-f040-4705-b447-daa06bcee9cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.657 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94477bdc-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.658 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.658 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94477bdc-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:44:27 np0005593232 kernel: tap94477bdc-d0: entered promiscuous mode
Jan 23 04:44:27 np0005593232 nova_compute[250269]: 2026-01-23 09:44:27.660 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:27 np0005593232 nova_compute[250269]: 2026-01-23 09:44:27.662 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:27 np0005593232 NetworkManager[49057]: <info>  [1769161467.6625] manager: (tap94477bdc-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.662 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap94477bdc-d0, col_values=(('external_ids', {'iface-id': 'dcb67153-a9a2-4bba-ace4-940503342c00'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:44:27 np0005593232 nova_compute[250269]: 2026-01-23 09:44:27.663 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:27 np0005593232 ovn_controller[151001]: 2026-01-23T09:44:27Z|00153|binding|INFO|Releasing lport dcb67153-a9a2-4bba-ace4-940503342c00 from this chassis (sb_readonly=0)
Jan 23 04:44:27 np0005593232 nova_compute[250269]: 2026-01-23 09:44:27.678 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.679 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/94477bdc-dd64-4159-a01d-35011d156b67.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/94477bdc-dd64-4159-a01d-35011d156b67.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.681 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[32b44aeb-329b-42ed-9786-339c72dfac80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.682 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-94477bdc-dd64-4159-a01d-35011d156b67
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/94477bdc-dd64-4159-a01d-35011d156b67.pid.haproxy
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 94477bdc-dd64-4159-a01d-35011d156b67
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:44:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:27.683 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-94477bdc-dd64-4159-a01d-35011d156b67', 'env', 'PROCESS_TAG=haproxy-94477bdc-dd64-4159-a01d-35011d156b67', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/94477bdc-dd64-4159-a01d-35011d156b67.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 04:44:28 np0005593232 podman[285636]: 2026-01-23 09:44:28.073854696 +0000 UTC m=+0.069498968 container create 03aecfb2d8eb032dd19de9eee75f3259979c751e205e18e0eb42999a1e6ffc5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94477bdc-dd64-4159-a01d-35011d156b67, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 23 04:44:28 np0005593232 nova_compute[250269]: 2026-01-23 09:44:28.081 250273 DEBUG nova.network.neutron [req-8743346e-9bd7-46e1-ba54-e4d501f80722 req-6175f469-7873-427a-a9a5-bda45d48f330 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Updated VIF entry in instance network info cache for port 97ccc3f3-015a-4a0e-822f-ca2340cde865. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:44:28 np0005593232 nova_compute[250269]: 2026-01-23 09:44:28.082 250273 DEBUG nova.network.neutron [req-8743346e-9bd7-46e1-ba54-e4d501f80722 req-6175f469-7873-427a-a9a5-bda45d48f330 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Updating instance_info_cache with network_info: [{"id": "97ccc3f3-015a-4a0e-822f-ca2340cde865", "address": "fa:16:3e:61:c8:0e", "network": {"id": "94477bdc-dd64-4159-a01d-35011d156b67", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1113384252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bf3a67d862f46d790d628778b7c14e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97ccc3f3-01", "ovs_interfaceid": "97ccc3f3-015a-4a0e-822f-ca2340cde865", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:44:28 np0005593232 systemd[1]: Started libpod-conmon-03aecfb2d8eb032dd19de9eee75f3259979c751e205e18e0eb42999a1e6ffc5d.scope.
Jan 23 04:44:28 np0005593232 nova_compute[250269]: 2026-01-23 09:44:28.119 250273 DEBUG oslo_concurrency.lockutils [req-8743346e-9bd7-46e1-ba54-e4d501f80722 req-6175f469-7873-427a-a9a5-bda45d48f330 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-5f9cef8f-a65f-4fa3-aecd-678b25b19227" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:44:28 np0005593232 podman[285636]: 2026-01-23 09:44:28.028564325 +0000 UTC m=+0.024208647 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:44:28 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:44:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1d7e1e03f8d2a93f122eb33b8f329f49666f07999c7aee76cdd2cba511c3223/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:44:28 np0005593232 podman[285636]: 2026-01-23 09:44:28.161452161 +0000 UTC m=+0.157096463 container init 03aecfb2d8eb032dd19de9eee75f3259979c751e205e18e0eb42999a1e6ffc5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94477bdc-dd64-4159-a01d-35011d156b67, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 23 04:44:28 np0005593232 podman[285636]: 2026-01-23 09:44:28.167803496 +0000 UTC m=+0.163447768 container start 03aecfb2d8eb032dd19de9eee75f3259979c751e205e18e0eb42999a1e6ffc5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94477bdc-dd64-4159-a01d-35011d156b67, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 23 04:44:28 np0005593232 neutron-haproxy-ovnmeta-94477bdc-dd64-4159-a01d-35011d156b67[285652]: [NOTICE]   (285656) : New worker (285658) forked
Jan 23 04:44:28 np0005593232 neutron-haproxy-ovnmeta-94477bdc-dd64-4159-a01d-35011d156b67[285652]: [NOTICE]   (285656) : Loading success.
Jan 23 04:44:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:44:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:28.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:44:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:28.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:28 np0005593232 nova_compute[250269]: 2026-01-23 09:44:28.830 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161468.830315, 5f9cef8f-a65f-4fa3-aecd-678b25b19227 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:44:28 np0005593232 nova_compute[250269]: 2026-01-23 09:44:28.831 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] VM Started (Lifecycle Event)#033[00m
Jan 23 04:44:28 np0005593232 nova_compute[250269]: 2026-01-23 09:44:28.889 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:44:28 np0005593232 nova_compute[250269]: 2026-01-23 09:44:28.895 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161468.830417, 5f9cef8f-a65f-4fa3-aecd-678b25b19227 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:44:28 np0005593232 nova_compute[250269]: 2026-01-23 09:44:28.896 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:44:28 np0005593232 nova_compute[250269]: 2026-01-23 09:44:28.941 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:44:28 np0005593232 nova_compute[250269]: 2026-01-23 09:44:28.944 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:44:28 np0005593232 nova_compute[250269]: 2026-01-23 09:44:28.979 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:44:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:44:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1543: 321 pgs: 321 active+clean; 88 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 23 04:44:29 np0005593232 nova_compute[250269]: 2026-01-23 09:44:29.605 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:30.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:30 np0005593232 nova_compute[250269]: 2026-01-23 09:44:30.718 250273 DEBUG nova.compute.manager [req-1fbd7f89-bf18-45ac-a5c1-687f76da64c5 req-0ea1b3b4-23d2-4b25-a544-643ad03c73cd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Received event network-vif-plugged-97ccc3f3-015a-4a0e-822f-ca2340cde865 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:44:30 np0005593232 nova_compute[250269]: 2026-01-23 09:44:30.719 250273 DEBUG oslo_concurrency.lockutils [req-1fbd7f89-bf18-45ac-a5c1-687f76da64c5 req-0ea1b3b4-23d2-4b25-a544-643ad03c73cd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:44:30 np0005593232 nova_compute[250269]: 2026-01-23 09:44:30.720 250273 DEBUG oslo_concurrency.lockutils [req-1fbd7f89-bf18-45ac-a5c1-687f76da64c5 req-0ea1b3b4-23d2-4b25-a544-643ad03c73cd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:44:30 np0005593232 nova_compute[250269]: 2026-01-23 09:44:30.720 250273 DEBUG oslo_concurrency.lockutils [req-1fbd7f89-bf18-45ac-a5c1-687f76da64c5 req-0ea1b3b4-23d2-4b25-a544-643ad03c73cd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:44:30 np0005593232 nova_compute[250269]: 2026-01-23 09:44:30.721 250273 DEBUG nova.compute.manager [req-1fbd7f89-bf18-45ac-a5c1-687f76da64c5 req-0ea1b3b4-23d2-4b25-a544-643ad03c73cd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Processing event network-vif-plugged-97ccc3f3-015a-4a0e-822f-ca2340cde865 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 04:44:30 np0005593232 nova_compute[250269]: 2026-01-23 09:44:30.722 250273 DEBUG nova.compute.manager [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:44:30 np0005593232 nova_compute[250269]: 2026-01-23 09:44:30.727 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161470.7269857, 5f9cef8f-a65f-4fa3-aecd-678b25b19227 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:44:30 np0005593232 nova_compute[250269]: 2026-01-23 09:44:30.727 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:44:30 np0005593232 nova_compute[250269]: 2026-01-23 09:44:30.730 250273 DEBUG nova.virt.libvirt.driver [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:44:30 np0005593232 nova_compute[250269]: 2026-01-23 09:44:30.733 250273 INFO nova.virt.libvirt.driver [-] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Instance spawned successfully.#033[00m
Jan 23 04:44:30 np0005593232 nova_compute[250269]: 2026-01-23 09:44:30.734 250273 DEBUG nova.virt.libvirt.driver [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:44:30 np0005593232 nova_compute[250269]: 2026-01-23 09:44:30.762 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:44:30 np0005593232 nova_compute[250269]: 2026-01-23 09:44:30.767 250273 DEBUG nova.virt.libvirt.driver [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:44:30 np0005593232 nova_compute[250269]: 2026-01-23 09:44:30.768 250273 DEBUG nova.virt.libvirt.driver [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:44:30 np0005593232 nova_compute[250269]: 2026-01-23 09:44:30.769 250273 DEBUG nova.virt.libvirt.driver [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:44:30 np0005593232 nova_compute[250269]: 2026-01-23 09:44:30.769 250273 DEBUG nova.virt.libvirt.driver [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:44:30 np0005593232 nova_compute[250269]: 2026-01-23 09:44:30.770 250273 DEBUG nova.virt.libvirt.driver [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:44:30 np0005593232 nova_compute[250269]: 2026-01-23 09:44:30.770 250273 DEBUG nova.virt.libvirt.driver [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:44:30 np0005593232 nova_compute[250269]: 2026-01-23 09:44:30.776 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:44:30 np0005593232 nova_compute[250269]: 2026-01-23 09:44:30.784 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:30 np0005593232 nova_compute[250269]: 2026-01-23 09:44:30.807 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:44:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:30.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:30 np0005593232 nova_compute[250269]: 2026-01-23 09:44:30.843 250273 INFO nova.compute.manager [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Took 11.33 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:44:30 np0005593232 nova_compute[250269]: 2026-01-23 09:44:30.844 250273 DEBUG nova.compute.manager [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:44:30 np0005593232 nova_compute[250269]: 2026-01-23 09:44:30.947 250273 INFO nova.compute.manager [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Took 12.57 seconds to build instance.#033[00m
Jan 23 04:44:31 np0005593232 nova_compute[250269]: 2026-01-23 09:44:31.009 250273 DEBUG oslo_concurrency.lockutils [None req-0c8599bf-3860-41df-a1c5-442999a35890 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:44:31 np0005593232 nova_compute[250269]: 2026-01-23 09:44:31.146 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:31.146 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:44:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:31.148 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:44:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1544: 321 pgs: 321 active+clean; 88 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 04:44:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:32.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:32.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:32 np0005593232 nova_compute[250269]: 2026-01-23 09:44:32.915 250273 DEBUG nova.compute.manager [req-b52eb001-832d-4469-8d39-c19a544d05f4 req-3e9f40c1-5edf-43c7-aa59-af866f728bb1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Received event network-vif-plugged-97ccc3f3-015a-4a0e-822f-ca2340cde865 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:44:32 np0005593232 nova_compute[250269]: 2026-01-23 09:44:32.916 250273 DEBUG oslo_concurrency.lockutils [req-b52eb001-832d-4469-8d39-c19a544d05f4 req-3e9f40c1-5edf-43c7-aa59-af866f728bb1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:44:32 np0005593232 nova_compute[250269]: 2026-01-23 09:44:32.916 250273 DEBUG oslo_concurrency.lockutils [req-b52eb001-832d-4469-8d39-c19a544d05f4 req-3e9f40c1-5edf-43c7-aa59-af866f728bb1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:44:32 np0005593232 nova_compute[250269]: 2026-01-23 09:44:32.916 250273 DEBUG oslo_concurrency.lockutils [req-b52eb001-832d-4469-8d39-c19a544d05f4 req-3e9f40c1-5edf-43c7-aa59-af866f728bb1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:44:32 np0005593232 nova_compute[250269]: 2026-01-23 09:44:32.916 250273 DEBUG nova.compute.manager [req-b52eb001-832d-4469-8d39-c19a544d05f4 req-3e9f40c1-5edf-43c7-aa59-af866f728bb1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] No waiting events found dispatching network-vif-plugged-97ccc3f3-015a-4a0e-822f-ca2340cde865 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:44:32 np0005593232 nova_compute[250269]: 2026-01-23 09:44:32.916 250273 WARNING nova.compute.manager [req-b52eb001-832d-4469-8d39-c19a544d05f4 req-3e9f40c1-5edf-43c7-aa59-af866f728bb1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Received unexpected event network-vif-plugged-97ccc3f3-015a-4a0e-822f-ca2340cde865 for instance with vm_state active and task_state None.#033[00m
Jan 23 04:44:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1545: 321 pgs: 321 active+clean; 88 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 796 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Jan 23 04:44:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:44:34 np0005593232 nova_compute[250269]: 2026-01-23 09:44:34.611 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:34.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:34.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:34 np0005593232 nova_compute[250269]: 2026-01-23 09:44:34.909 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:34 np0005593232 NetworkManager[49057]: <info>  [1769161474.9097] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Jan 23 04:44:34 np0005593232 NetworkManager[49057]: <info>  [1769161474.9103] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Jan 23 04:44:35 np0005593232 nova_compute[250269]: 2026-01-23 09:44:35.012 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:35 np0005593232 ovn_controller[151001]: 2026-01-23T09:44:35Z|00154|binding|INFO|Releasing lport dcb67153-a9a2-4bba-ace4-940503342c00 from this chassis (sb_readonly=0)
Jan 23 04:44:35 np0005593232 nova_compute[250269]: 2026-01-23 09:44:35.021 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1546: 321 pgs: 321 active+clean; 88 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 779 KiB/s rd, 14 KiB/s wr, 36 op/s
Jan 23 04:44:35 np0005593232 nova_compute[250269]: 2026-01-23 09:44:35.785 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:35 np0005593232 nova_compute[250269]: 2026-01-23 09:44:35.955 250273 DEBUG nova.compute.manager [req-3c53b9ee-dfe0-4ff5-98ae-4e622fc1c50c req-239fec80-a4fe-49a8-a142-838425de1e1d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Received event network-changed-97ccc3f3-015a-4a0e-822f-ca2340cde865 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:44:35 np0005593232 nova_compute[250269]: 2026-01-23 09:44:35.956 250273 DEBUG nova.compute.manager [req-3c53b9ee-dfe0-4ff5-98ae-4e622fc1c50c req-239fec80-a4fe-49a8-a142-838425de1e1d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Refreshing instance network info cache due to event network-changed-97ccc3f3-015a-4a0e-822f-ca2340cde865. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:44:35 np0005593232 nova_compute[250269]: 2026-01-23 09:44:35.956 250273 DEBUG oslo_concurrency.lockutils [req-3c53b9ee-dfe0-4ff5-98ae-4e622fc1c50c req-239fec80-a4fe-49a8-a142-838425de1e1d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-5f9cef8f-a65f-4fa3-aecd-678b25b19227" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:44:35 np0005593232 nova_compute[250269]: 2026-01-23 09:44:35.956 250273 DEBUG oslo_concurrency.lockutils [req-3c53b9ee-dfe0-4ff5-98ae-4e622fc1c50c req-239fec80-a4fe-49a8-a142-838425de1e1d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-5f9cef8f-a65f-4fa3-aecd-678b25b19227" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:44:35 np0005593232 nova_compute[250269]: 2026-01-23 09:44:35.957 250273 DEBUG nova.network.neutron [req-3c53b9ee-dfe0-4ff5-98ae-4e622fc1c50c req-239fec80-a4fe-49a8-a142-838425de1e1d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Refreshing network info cache for port 97ccc3f3-015a-4a0e-822f-ca2340cde865 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:44:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:36.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:44:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:36.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:44:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:44:37
Jan 23 04:44:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:44:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:44:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['volumes', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'images']
Jan 23 04:44:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:44:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1547: 321 pgs: 321 active+clean; 88 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 23 04:44:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:44:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:44:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:44:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:44:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:44:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:44:37 np0005593232 ceph-mgr[74726]: client.0 ms_handle_reset on v2:192.168.122.100:6800/530399322
Jan 23 04:44:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:44:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:44:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:44:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:44:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:44:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:44:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:44:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:44:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:44:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:44:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:38.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:38 np0005593232 nova_compute[250269]: 2026-01-23 09:44:38.786 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:38.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:39 np0005593232 nova_compute[250269]: 2026-01-23 09:44:39.026 250273 DEBUG nova.network.neutron [req-3c53b9ee-dfe0-4ff5-98ae-4e622fc1c50c req-239fec80-a4fe-49a8-a142-838425de1e1d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Updated VIF entry in instance network info cache for port 97ccc3f3-015a-4a0e-822f-ca2340cde865. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:44:39 np0005593232 nova_compute[250269]: 2026-01-23 09:44:39.026 250273 DEBUG nova.network.neutron [req-3c53b9ee-dfe0-4ff5-98ae-4e622fc1c50c req-239fec80-a4fe-49a8-a142-838425de1e1d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Updating instance_info_cache with network_info: [{"id": "97ccc3f3-015a-4a0e-822f-ca2340cde865", "address": "fa:16:3e:61:c8:0e", "network": {"id": "94477bdc-dd64-4159-a01d-35011d156b67", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1113384252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bf3a67d862f46d790d628778b7c14e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97ccc3f3-01", "ovs_interfaceid": "97ccc3f3-015a-4a0e-822f-ca2340cde865", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:44:39 np0005593232 nova_compute[250269]: 2026-01-23 09:44:39.106 250273 DEBUG oslo_concurrency.lockutils [req-3c53b9ee-dfe0-4ff5-98ae-4e622fc1c50c req-239fec80-a4fe-49a8-a142-838425de1e1d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-5f9cef8f-a65f-4fa3-aecd-678b25b19227" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:44:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:44:39 np0005593232 nova_compute[250269]: 2026-01-23 09:44:39.211 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1548: 321 pgs: 321 active+clean; 88 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 23 04:44:39 np0005593232 nova_compute[250269]: 2026-01-23 09:44:39.610 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:40.150 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:44:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:44:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:40.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:44:40 np0005593232 nova_compute[250269]: 2026-01-23 09:44:40.788 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:40.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1549: 321 pgs: 321 active+clean; 88 MiB data, 534 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 72 op/s
Jan 23 04:44:41 np0005593232 podman[285767]: 2026-01-23 09:44:41.494523609 +0000 UTC m=+0.139223062 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 23 04:44:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:42.599 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:44:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:42.599 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:44:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:42.600 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:44:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:42.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:42.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1550: 321 pgs: 321 active+clean; 134 MiB data, 559 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 23 04:44:43 np0005593232 nova_compute[250269]: 2026-01-23 09:44:43.522 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:44:44 np0005593232 nova_compute[250269]: 2026-01-23 09:44:44.614 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:44.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:44:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:44.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:44:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1551: 321 pgs: 321 active+clean; 134 MiB data, 559 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 61 op/s
Jan 23 04:44:45 np0005593232 ovn_controller[151001]: 2026-01-23T09:44:45Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:61:c8:0e 10.100.0.10
Jan 23 04:44:45 np0005593232 ovn_controller[151001]: 2026-01-23T09:44:45Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:61:c8:0e 10.100.0.10
Jan 23 04:44:45 np0005593232 nova_compute[250269]: 2026-01-23 09:44:45.833 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:44:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:44:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:44:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:44:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0019856589079657546 of space, bias 1.0, pg target 0.5956976723897264 quantized to 32 (current 32)
Jan 23 04:44:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:44:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 04:44:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:44:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:44:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:44:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 04:44:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:44:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:44:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:44:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:44:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:44:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:44:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:44:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:44:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:44:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:44:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:44:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:44:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:46.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:46.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1552: 321 pgs: 321 active+clean; 172 MiB data, 587 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 4.1 MiB/s wr, 116 op/s
Jan 23 04:44:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:48.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:48.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 23 04:44:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 04:44:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 04:44:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:44:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:44:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:44:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:44:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:44:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:44:49 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b1305753-d4b3-4cff-8537-2b64e18e34a1 does not exist
Jan 23 04:44:49 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 81592522-25b4-4c6c-995b-950775094ebc does not exist
Jan 23 04:44:49 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev bac5b3c6-f146-446a-a238-9f2641444886 does not exist
Jan 23 04:44:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:44:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:44:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:44:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:44:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:44:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:44:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:44:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1553: 321 pgs: 321 active+clean; 213 MiB data, 606 MiB used, 20 GiB / 21 GiB avail; 407 KiB/s rd, 5.7 MiB/s wr, 121 op/s
Jan 23 04:44:49 np0005593232 podman[286069]: 2026-01-23 09:44:49.578956609 +0000 UTC m=+0.046688153 container create 720ef5210928e6f50e64da2d0d8582e408ddaec2bb75a3ce8b94f1b4737edadd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_volhard, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:44:49 np0005593232 nova_compute[250269]: 2026-01-23 09:44:49.616 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:49 np0005593232 systemd[1]: Started libpod-conmon-720ef5210928e6f50e64da2d0d8582e408ddaec2bb75a3ce8b94f1b4737edadd.scope.
Jan 23 04:44:49 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:44:49 np0005593232 podman[286069]: 2026-01-23 09:44:49.559757569 +0000 UTC m=+0.027489123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:44:49 np0005593232 podman[286069]: 2026-01-23 09:44:49.670419497 +0000 UTC m=+0.138151051 container init 720ef5210928e6f50e64da2d0d8582e408ddaec2bb75a3ce8b94f1b4737edadd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_volhard, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 04:44:49 np0005593232 podman[286069]: 2026-01-23 09:44:49.6777294 +0000 UTC m=+0.145460944 container start 720ef5210928e6f50e64da2d0d8582e408ddaec2bb75a3ce8b94f1b4737edadd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_volhard, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:44:49 np0005593232 podman[286069]: 2026-01-23 09:44:49.681721336 +0000 UTC m=+0.149452890 container attach 720ef5210928e6f50e64da2d0d8582e408ddaec2bb75a3ce8b94f1b4737edadd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 04:44:49 np0005593232 systemd[1]: libpod-720ef5210928e6f50e64da2d0d8582e408ddaec2bb75a3ce8b94f1b4737edadd.scope: Deactivated successfully.
Jan 23 04:44:49 np0005593232 sad_volhard[286085]: 167 167
Jan 23 04:44:49 np0005593232 conmon[286085]: conmon 720ef5210928e6f50e64 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-720ef5210928e6f50e64da2d0d8582e408ddaec2bb75a3ce8b94f1b4737edadd.scope/container/memory.events
Jan 23 04:44:49 np0005593232 podman[286069]: 2026-01-23 09:44:49.686930238 +0000 UTC m=+0.154661782 container died 720ef5210928e6f50e64da2d0d8582e408ddaec2bb75a3ce8b94f1b4737edadd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:44:49 np0005593232 systemd[1]: var-lib-containers-storage-overlay-3a5c6caefb040bc134a7475c52c956a943f7e11a1bae06024c92503e3f7169f1-merged.mount: Deactivated successfully.
Jan 23 04:44:49 np0005593232 podman[286069]: 2026-01-23 09:44:49.730777727 +0000 UTC m=+0.198509261 container remove 720ef5210928e6f50e64da2d0d8582e408ddaec2bb75a3ce8b94f1b4737edadd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_volhard, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 04:44:49 np0005593232 systemd[1]: libpod-conmon-720ef5210928e6f50e64da2d0d8582e408ddaec2bb75a3ce8b94f1b4737edadd.scope: Deactivated successfully.
Jan 23 04:44:49 np0005593232 podman[286110]: 2026-01-23 09:44:49.901930049 +0000 UTC m=+0.050298858 container create b0f0c2ee29f9f832985370c750edd67912b3fa8a84f98006bff1ad3b5034e752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:44:49 np0005593232 systemd[1]: Started libpod-conmon-b0f0c2ee29f9f832985370c750edd67912b3fa8a84f98006bff1ad3b5034e752.scope.
Jan 23 04:44:49 np0005593232 podman[286110]: 2026-01-23 09:44:49.87830732 +0000 UTC m=+0.026676139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:44:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:44:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:44:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:44:49 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:44:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73727430a51ac3a6198c19f5a46ac65e5f637e9f5cbac61e8acb817a1125f2a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:44:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73727430a51ac3a6198c19f5a46ac65e5f637e9f5cbac61e8acb817a1125f2a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:44:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73727430a51ac3a6198c19f5a46ac65e5f637e9f5cbac61e8acb817a1125f2a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:44:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73727430a51ac3a6198c19f5a46ac65e5f637e9f5cbac61e8acb817a1125f2a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:44:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73727430a51ac3a6198c19f5a46ac65e5f637e9f5cbac61e8acb817a1125f2a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:44:50 np0005593232 podman[286110]: 2026-01-23 09:44:50.015059769 +0000 UTC m=+0.163428608 container init b0f0c2ee29f9f832985370c750edd67912b3fa8a84f98006bff1ad3b5034e752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sinoussi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 23 04:44:50 np0005593232 podman[286110]: 2026-01-23 09:44:50.024166265 +0000 UTC m=+0.172535074 container start b0f0c2ee29f9f832985370c750edd67912b3fa8a84f98006bff1ad3b5034e752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sinoussi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 04:44:50 np0005593232 podman[286110]: 2026-01-23 09:44:50.027973156 +0000 UTC m=+0.176341995 container attach b0f0c2ee29f9f832985370c750edd67912b3fa8a84f98006bff1ad3b5034e752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 04:44:50 np0005593232 podman[286129]: 2026-01-23 09:44:50.08777219 +0000 UTC m=+0.067618764 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 04:44:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:50.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:50 np0005593232 goofy_sinoussi[286126]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:44:50 np0005593232 goofy_sinoussi[286126]: --> relative data size: 1.0
Jan 23 04:44:50 np0005593232 goofy_sinoussi[286126]: --> All data devices are unavailable
Jan 23 04:44:50 np0005593232 nova_compute[250269]: 2026-01-23 09:44:50.837 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:50.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:50 np0005593232 systemd[1]: libpod-b0f0c2ee29f9f832985370c750edd67912b3fa8a84f98006bff1ad3b5034e752.scope: Deactivated successfully.
Jan 23 04:44:50 np0005593232 podman[286110]: 2026-01-23 09:44:50.863481335 +0000 UTC m=+1.011850124 container died b0f0c2ee29f9f832985370c750edd67912b3fa8a84f98006bff1ad3b5034e752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 23 04:44:50 np0005593232 systemd[1]: var-lib-containers-storage-overlay-73727430a51ac3a6198c19f5a46ac65e5f637e9f5cbac61e8acb817a1125f2a3-merged.mount: Deactivated successfully.
Jan 23 04:44:50 np0005593232 podman[286110]: 2026-01-23 09:44:50.927682038 +0000 UTC m=+1.076050827 container remove b0f0c2ee29f9f832985370c750edd67912b3fa8a84f98006bff1ad3b5034e752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:44:50 np0005593232 systemd[1]: libpod-conmon-b0f0c2ee29f9f832985370c750edd67912b3fa8a84f98006bff1ad3b5034e752.scope: Deactivated successfully.
Jan 23 04:44:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1554: 321 pgs: 321 active+clean; 213 MiB data, 606 MiB used, 20 GiB / 21 GiB avail; 407 KiB/s rd, 5.7 MiB/s wr, 121 op/s
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.397 250273 DEBUG oslo_concurrency.lockutils [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Acquiring lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.398 250273 DEBUG oslo_concurrency.lockutils [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.398 250273 DEBUG oslo_concurrency.lockutils [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Acquiring lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.398 250273 DEBUG oslo_concurrency.lockutils [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.399 250273 DEBUG oslo_concurrency.lockutils [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.400 250273 INFO nova.compute.manager [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Terminating instance#033[00m
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.401 250273 DEBUG nova.compute.manager [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 04:44:51 np0005593232 kernel: tap97ccc3f3-01 (unregistering): left promiscuous mode
Jan 23 04:44:51 np0005593232 NetworkManager[49057]: <info>  [1769161491.4703] device (tap97ccc3f3-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:44:51 np0005593232 ovn_controller[151001]: 2026-01-23T09:44:51Z|00155|binding|INFO|Releasing lport 97ccc3f3-015a-4a0e-822f-ca2340cde865 from this chassis (sb_readonly=0)
Jan 23 04:44:51 np0005593232 ovn_controller[151001]: 2026-01-23T09:44:51Z|00156|binding|INFO|Setting lport 97ccc3f3-015a-4a0e-822f-ca2340cde865 down in Southbound
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.515 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:51 np0005593232 ovn_controller[151001]: 2026-01-23T09:44:51Z|00157|binding|INFO|Removing iface tap97ccc3f3-01 ovn-installed in OVS
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.518 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.530 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:51.534 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:c8:0e 10.100.0.10'], port_security=['fa:16:3e:61:c8:0e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5f9cef8f-a65f-4fa3-aecd-678b25b19227', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-94477bdc-dd64-4159-a01d-35011d156b67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3bf3a67d862f46d790d628778b7c14e4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fba64cfb-5453-41a2-b3f0-12ac307129da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.220'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b50763eb-baef-49a8-bb89-a647ac206344, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=97ccc3f3-015a-4a0e-822f-ca2340cde865) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:44:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:51.535 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 97ccc3f3-015a-4a0e-822f-ca2340cde865 in datapath 94477bdc-dd64-4159-a01d-35011d156b67 unbound from our chassis#033[00m
Jan 23 04:44:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:51.536 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 94477bdc-dd64-4159-a01d-35011d156b67, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 04:44:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:51.538 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5572957a-4bd3-49d3-9523-ab182bc5532b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:51.539 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-94477bdc-dd64-4159-a01d-35011d156b67 namespace which is not needed anymore#033[00m
Jan 23 04:44:51 np0005593232 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000035.scope: Deactivated successfully.
Jan 23 04:44:51 np0005593232 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000035.scope: Consumed 14.817s CPU time.
Jan 23 04:44:51 np0005593232 systemd-machined[215836]: Machine qemu-21-instance-00000035 terminated.
Jan 23 04:44:51 np0005593232 podman[286322]: 2026-01-23 09:44:51.608763362 +0000 UTC m=+0.051411520 container create 9bff2418634e5e727256361b9a47423545226435033cd61f6b11750d38de52aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pascal, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:44:51 np0005593232 kernel: tap97ccc3f3-01: entered promiscuous mode
Jan 23 04:44:51 np0005593232 NetworkManager[49057]: <info>  [1769161491.6201] manager: (tap97ccc3f3-01): new Tun device (/org/freedesktop/NetworkManager/Devices/81)
Jan 23 04:44:51 np0005593232 kernel: tap97ccc3f3-01 (unregistering): left promiscuous mode
Jan 23 04:44:51 np0005593232 ovn_controller[151001]: 2026-01-23T09:44:51Z|00158|binding|INFO|Claiming lport 97ccc3f3-015a-4a0e-822f-ca2340cde865 for this chassis.
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.622 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:51 np0005593232 ovn_controller[151001]: 2026-01-23T09:44:51Z|00159|binding|INFO|97ccc3f3-015a-4a0e-822f-ca2340cde865: Claiming fa:16:3e:61:c8:0e 10.100.0.10
Jan 23 04:44:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:51.630 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:c8:0e 10.100.0.10'], port_security=['fa:16:3e:61:c8:0e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5f9cef8f-a65f-4fa3-aecd-678b25b19227', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-94477bdc-dd64-4159-a01d-35011d156b67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3bf3a67d862f46d790d628778b7c14e4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fba64cfb-5453-41a2-b3f0-12ac307129da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.220'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b50763eb-baef-49a8-bb89-a647ac206344, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=97ccc3f3-015a-4a0e-822f-ca2340cde865) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.647 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:51 np0005593232 ovn_controller[151001]: 2026-01-23T09:44:51Z|00160|binding|INFO|Setting lport 97ccc3f3-015a-4a0e-822f-ca2340cde865 ovn-installed in OVS
Jan 23 04:44:51 np0005593232 ovn_controller[151001]: 2026-01-23T09:44:51Z|00161|binding|INFO|Setting lport 97ccc3f3-015a-4a0e-822f-ca2340cde865 up in Southbound
Jan 23 04:44:51 np0005593232 ovn_controller[151001]: 2026-01-23T09:44:51Z|00162|binding|INFO|Releasing lport 97ccc3f3-015a-4a0e-822f-ca2340cde865 from this chassis (sb_readonly=1)
Jan 23 04:44:51 np0005593232 ovn_controller[151001]: 2026-01-23T09:44:51Z|00163|if_status|INFO|Not setting lport 97ccc3f3-015a-4a0e-822f-ca2340cde865 down as sb is readonly
Jan 23 04:44:51 np0005593232 ovn_controller[151001]: 2026-01-23T09:44:51Z|00164|binding|INFO|Removing iface tap97ccc3f3-01 ovn-installed in OVS
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.649 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:51 np0005593232 ovn_controller[151001]: 2026-01-23T09:44:51Z|00165|binding|INFO|Releasing lport 97ccc3f3-015a-4a0e-822f-ca2340cde865 from this chassis (sb_readonly=0)
Jan 23 04:44:51 np0005593232 ovn_controller[151001]: 2026-01-23T09:44:51Z|00166|binding|INFO|Setting lport 97ccc3f3-015a-4a0e-822f-ca2340cde865 down in Southbound
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.652 250273 INFO nova.virt.libvirt.driver [-] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Instance destroyed successfully.#033[00m
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.653 250273 DEBUG nova.objects.instance [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Lazy-loading 'resources' on Instance uuid 5f9cef8f-a65f-4fa3-aecd-678b25b19227 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:44:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:51.659 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:61:c8:0e 10.100.0.10'], port_security=['fa:16:3e:61:c8:0e 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5f9cef8f-a65f-4fa3-aecd-678b25b19227', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-94477bdc-dd64-4159-a01d-35011d156b67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3bf3a67d862f46d790d628778b7c14e4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fba64cfb-5453-41a2-b3f0-12ac307129da', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.220'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b50763eb-baef-49a8-bb89-a647ac206344, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=97ccc3f3-015a-4a0e-822f-ca2340cde865) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.662 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:51 np0005593232 systemd[1]: Started libpod-conmon-9bff2418634e5e727256361b9a47423545226435033cd61f6b11750d38de52aa.scope.
Jan 23 04:44:51 np0005593232 podman[286322]: 2026-01-23 09:44:51.588641565 +0000 UTC m=+0.031289753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:44:51 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.689 250273 DEBUG nova.virt.libvirt.vif [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=2.2.2.2,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:44:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='guest-instance-1.domain.com',display_name='guest-instance-1.domain.com',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='guest-instance-1-domain-com',id=53,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEZylpj2pu3/Qzi10T9Kl/K84t8wiGZymM+I+ivo3dS3vuuHnNJLCZs7eg33tWV8W6FQi7p2FDmjp1wDwW6fPuRCdx7V8TgNizJ/a0QZD5q+obdPna0hA+wxJIXktHCAww==',key_name='tempest-keypair-1551683196',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:44:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3bf3a67d862f46d790d628778b7c14e4',ramdisk_id='',reservation_id='r-pc9o7y4m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestFqdnHostnames-1923071179',owner_user_name='tempest-ServersTestFqdnHostnames-1923071179-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:44:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='72d5ad5683984a21a60886499308fc93',uuid=5f9cef8f-a65f-4fa3-aecd-678b25b19227,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "97ccc3f3-015a-4a0e-822f-ca2340cde865", "address": "fa:16:3e:61:c8:0e", "network": {"id": "94477bdc-dd64-4159-a01d-35011d156b67", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1113384252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bf3a67d862f46d790d628778b7c14e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97ccc3f3-01", "ovs_interfaceid": "97ccc3f3-015a-4a0e-822f-ca2340cde865", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.690 250273 DEBUG nova.network.os_vif_util [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Converting VIF {"id": "97ccc3f3-015a-4a0e-822f-ca2340cde865", "address": "fa:16:3e:61:c8:0e", "network": {"id": "94477bdc-dd64-4159-a01d-35011d156b67", "bridge": "br-int", "label": "tempest-ServersTestFqdnHostnames-1113384252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.220", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3bf3a67d862f46d790d628778b7c14e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap97ccc3f3-01", "ovs_interfaceid": "97ccc3f3-015a-4a0e-822f-ca2340cde865", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:44:51 np0005593232 neutron-haproxy-ovnmeta-94477bdc-dd64-4159-a01d-35011d156b67[285652]: [NOTICE]   (285656) : haproxy version is 2.8.14-c23fe91
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.691 250273 DEBUG nova.network.os_vif_util [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:61:c8:0e,bridge_name='br-int',has_traffic_filtering=True,id=97ccc3f3-015a-4a0e-822f-ca2340cde865,network=Network(94477bdc-dd64-4159-a01d-35011d156b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97ccc3f3-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.691 250273 DEBUG os_vif [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:c8:0e,bridge_name='br-int',has_traffic_filtering=True,id=97ccc3f3-015a-4a0e-822f-ca2340cde865,network=Network(94477bdc-dd64-4159-a01d-35011d156b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97ccc3f3-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:44:51 np0005593232 neutron-haproxy-ovnmeta-94477bdc-dd64-4159-a01d-35011d156b67[285652]: [NOTICE]   (285656) : path to executable is /usr/sbin/haproxy
Jan 23 04:44:51 np0005593232 neutron-haproxy-ovnmeta-94477bdc-dd64-4159-a01d-35011d156b67[285652]: [WARNING]  (285656) : Exiting Master process...
Jan 23 04:44:51 np0005593232 neutron-haproxy-ovnmeta-94477bdc-dd64-4159-a01d-35011d156b67[285652]: [ALERT]    (285656) : Current worker (285658) exited with code 143 (Terminated)
Jan 23 04:44:51 np0005593232 neutron-haproxy-ovnmeta-94477bdc-dd64-4159-a01d-35011d156b67[285652]: [WARNING]  (285656) : All workers exited. Exiting... (0)
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.693 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.693 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap97ccc3f3-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:44:51 np0005593232 systemd[1]: libpod-03aecfb2d8eb032dd19de9eee75f3259979c751e205e18e0eb42999a1e6ffc5d.scope: Deactivated successfully.
Jan 23 04:44:51 np0005593232 conmon[285652]: conmon 03aecfb2d8eb032dd19d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-03aecfb2d8eb032dd19de9eee75f3259979c751e205e18e0eb42999a1e6ffc5d.scope/container/memory.events
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.696 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.698 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.702 250273 INFO os_vif [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:61:c8:0e,bridge_name='br-int',has_traffic_filtering=True,id=97ccc3f3-015a-4a0e-822f-ca2340cde865,network=Network(94477bdc-dd64-4159-a01d-35011d156b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap97ccc3f3-01')#033[00m
Jan 23 04:44:51 np0005593232 podman[286322]: 2026-01-23 09:44:51.704771453 +0000 UTC m=+0.147419631 container init 9bff2418634e5e727256361b9a47423545226435033cd61f6b11750d38de52aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pascal, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:44:51 np0005593232 podman[286356]: 2026-01-23 09:44:51.70537005 +0000 UTC m=+0.063048050 container died 03aecfb2d8eb032dd19de9eee75f3259979c751e205e18e0eb42999a1e6ffc5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94477bdc-dd64-4159-a01d-35011d156b67, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:44:51 np0005593232 podman[286322]: 2026-01-23 09:44:51.713906039 +0000 UTC m=+0.156554197 container start 9bff2418634e5e727256361b9a47423545226435033cd61f6b11750d38de52aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:44:51 np0005593232 podman[286322]: 2026-01-23 09:44:51.717918576 +0000 UTC m=+0.160566754 container attach 9bff2418634e5e727256361b9a47423545226435033cd61f6b11750d38de52aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:44:51 np0005593232 elated_pascal[286371]: 167 167
Jan 23 04:44:51 np0005593232 systemd[1]: libpod-9bff2418634e5e727256361b9a47423545226435033cd61f6b11750d38de52aa.scope: Deactivated successfully.
Jan 23 04:44:51 np0005593232 podman[286322]: 2026-01-23 09:44:51.722324674 +0000 UTC m=+0.164972852 container died 9bff2418634e5e727256361b9a47423545226435033cd61f6b11750d38de52aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pascal, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:44:51 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-03aecfb2d8eb032dd19de9eee75f3259979c751e205e18e0eb42999a1e6ffc5d-userdata-shm.mount: Deactivated successfully.
Jan 23 04:44:51 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c1d7e1e03f8d2a93f122eb33b8f329f49666f07999c7aee76cdd2cba511c3223-merged.mount: Deactivated successfully.
Jan 23 04:44:51 np0005593232 podman[286356]: 2026-01-23 09:44:51.744953104 +0000 UTC m=+0.102631104 container cleanup 03aecfb2d8eb032dd19de9eee75f3259979c751e205e18e0eb42999a1e6ffc5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94477bdc-dd64-4159-a01d-35011d156b67, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 04:44:51 np0005593232 systemd[1]: var-lib-containers-storage-overlay-aa37f0bea51b2d40207eeadc2fcee6911baa67ead1f8d3fd0b51c64aedb36d4a-merged.mount: Deactivated successfully.
Jan 23 04:44:51 np0005593232 systemd[1]: libpod-conmon-03aecfb2d8eb032dd19de9eee75f3259979c751e205e18e0eb42999a1e6ffc5d.scope: Deactivated successfully.
Jan 23 04:44:51 np0005593232 podman[286322]: 2026-01-23 09:44:51.769225042 +0000 UTC m=+0.211873200 container remove 9bff2418634e5e727256361b9a47423545226435033cd61f6b11750d38de52aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_pascal, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 23 04:44:51 np0005593232 systemd[1]: libpod-conmon-9bff2418634e5e727256361b9a47423545226435033cd61f6b11750d38de52aa.scope: Deactivated successfully.
Jan 23 04:44:51 np0005593232 podman[286416]: 2026-01-23 09:44:51.807013265 +0000 UTC m=+0.040077950 container remove 03aecfb2d8eb032dd19de9eee75f3259979c751e205e18e0eb42999a1e6ffc5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94477bdc-dd64-4159-a01d-35011d156b67, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:44:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:51.813 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7b3b564f-c423-49e0-889e-40ea3ae40347]: (4, ('Fri Jan 23 09:44:51 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-94477bdc-dd64-4159-a01d-35011d156b67 (03aecfb2d8eb032dd19de9eee75f3259979c751e205e18e0eb42999a1e6ffc5d)\n03aecfb2d8eb032dd19de9eee75f3259979c751e205e18e0eb42999a1e6ffc5d\nFri Jan 23 09:44:51 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-94477bdc-dd64-4159-a01d-35011d156b67 (03aecfb2d8eb032dd19de9eee75f3259979c751e205e18e0eb42999a1e6ffc5d)\n03aecfb2d8eb032dd19de9eee75f3259979c751e205e18e0eb42999a1e6ffc5d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:51.815 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[28214288-ffad-487c-921f-ceb0aa582a9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:51.816 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94477bdc-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.817 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:51 np0005593232 kernel: tap94477bdc-d0: left promiscuous mode
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.831 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:51.835 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[38963b2c-40b8-42fe-aaaf-368a3e1a3da5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:51.853 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0c2836-1de1-43f6-8359-7961d2c889f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:51.854 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a554b581-7292-45d6-8b14-13fff2c0f1b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:51.873 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d23e7ff5-b2eb-4fb6-a93d-97885421715a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 538059, 'reachable_time': 16159, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286435, 'error': None, 'target': 'ovnmeta-94477bdc-dd64-4159-a01d-35011d156b67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:51.877 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-94477bdc-dd64-4159-a01d-35011d156b67 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 04:44:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:51.877 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e87104-56ff-4afb-bb86-8bc3e9953b88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:51.878 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 97ccc3f3-015a-4a0e-822f-ca2340cde865 in datapath 94477bdc-dd64-4159-a01d-35011d156b67 unbound from our chassis#033[00m
Jan 23 04:44:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:51.879 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 94477bdc-dd64-4159-a01d-35011d156b67, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 04:44:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:51.880 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff62bc0-3edf-4b90-90d1-09a5c60ded8e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:51.880 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 97ccc3f3-015a-4a0e-822f-ca2340cde865 in datapath 94477bdc-dd64-4159-a01d-35011d156b67 unbound from our chassis#033[00m
Jan 23 04:44:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:51.881 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 94477bdc-dd64-4159-a01d-35011d156b67, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 04:44:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:44:51.882 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6a936c83-a6ff-4124-b95d-39b62b0e714b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.883 250273 DEBUG nova.compute.manager [req-a7ee83f4-1ff6-4bd1-adb2-b745e644cb66 req-bed16837-5b8a-4b1a-8a77-fdb8a53610d8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Received event network-vif-unplugged-97ccc3f3-015a-4a0e-822f-ca2340cde865 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.883 250273 DEBUG oslo_concurrency.lockutils [req-a7ee83f4-1ff6-4bd1-adb2-b745e644cb66 req-bed16837-5b8a-4b1a-8a77-fdb8a53610d8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.884 250273 DEBUG oslo_concurrency.lockutils [req-a7ee83f4-1ff6-4bd1-adb2-b745e644cb66 req-bed16837-5b8a-4b1a-8a77-fdb8a53610d8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.884 250273 DEBUG oslo_concurrency.lockutils [req-a7ee83f4-1ff6-4bd1-adb2-b745e644cb66 req-bed16837-5b8a-4b1a-8a77-fdb8a53610d8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.884 250273 DEBUG nova.compute.manager [req-a7ee83f4-1ff6-4bd1-adb2-b745e644cb66 req-bed16837-5b8a-4b1a-8a77-fdb8a53610d8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] No waiting events found dispatching network-vif-unplugged-97ccc3f3-015a-4a0e-822f-ca2340cde865 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:44:51 np0005593232 nova_compute[250269]: 2026-01-23 09:44:51.885 250273 DEBUG nova.compute.manager [req-a7ee83f4-1ff6-4bd1-adb2-b745e644cb66 req-bed16837-5b8a-4b1a-8a77-fdb8a53610d8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Received event network-vif-unplugged-97ccc3f3-015a-4a0e-822f-ca2340cde865 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 04:44:51 np0005593232 systemd[1]: run-netns-ovnmeta\x2d94477bdc\x2ddd64\x2d4159\x2da01d\x2d35011d156b67.mount: Deactivated successfully.
Jan 23 04:44:51 np0005593232 podman[286441]: 2026-01-23 09:44:51.941911549 +0000 UTC m=+0.046871978 container create d9deaebbf55d61853d42f8701c444777cc54cd345ae792ceeb30ea3c28c8595c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_allen, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:44:51 np0005593232 systemd[1]: Started libpod-conmon-d9deaebbf55d61853d42f8701c444777cc54cd345ae792ceeb30ea3c28c8595c.scope.
Jan 23 04:44:52 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:44:52 np0005593232 podman[286441]: 2026-01-23 09:44:51.922433891 +0000 UTC m=+0.027394340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:44:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96c66cd0da5eabc7f68d5e355fdb4647cf3cfc5c5fccd94a4288a636138ecdc4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:44:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96c66cd0da5eabc7f68d5e355fdb4647cf3cfc5c5fccd94a4288a636138ecdc4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:44:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96c66cd0da5eabc7f68d5e355fdb4647cf3cfc5c5fccd94a4288a636138ecdc4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:44:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96c66cd0da5eabc7f68d5e355fdb4647cf3cfc5c5fccd94a4288a636138ecdc4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:44:52 np0005593232 podman[286441]: 2026-01-23 09:44:52.031648736 +0000 UTC m=+0.136609185 container init d9deaebbf55d61853d42f8701c444777cc54cd345ae792ceeb30ea3c28c8595c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 04:44:52 np0005593232 podman[286441]: 2026-01-23 09:44:52.039245308 +0000 UTC m=+0.144205737 container start d9deaebbf55d61853d42f8701c444777cc54cd345ae792ceeb30ea3c28c8595c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_allen, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:44:52 np0005593232 podman[286441]: 2026-01-23 09:44:52.043801371 +0000 UTC m=+0.148761820 container attach d9deaebbf55d61853d42f8701c444777cc54cd345ae792ceeb30ea3c28c8595c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 04:44:52 np0005593232 nova_compute[250269]: 2026-01-23 09:44:52.172 250273 INFO nova.virt.libvirt.driver [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Deleting instance files /var/lib/nova/instances/5f9cef8f-a65f-4fa3-aecd-678b25b19227_del#033[00m
Jan 23 04:44:52 np0005593232 nova_compute[250269]: 2026-01-23 09:44:52.174 250273 INFO nova.virt.libvirt.driver [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Deletion of /var/lib/nova/instances/5f9cef8f-a65f-4fa3-aecd-678b25b19227_del complete#033[00m
Jan 23 04:44:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:52.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:52.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]: {
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:    "0": [
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:        {
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:            "devices": [
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:                "/dev/loop3"
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:            ],
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:            "lv_name": "ceph_lv0",
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:            "lv_size": "7511998464",
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:            "name": "ceph_lv0",
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:            "tags": {
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:                "ceph.cluster_name": "ceph",
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:                "ceph.crush_device_class": "",
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:                "ceph.encrypted": "0",
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:                "ceph.osd_id": "0",
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:                "ceph.type": "block",
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:                "ceph.vdo": "0"
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:            },
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:            "type": "block",
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:            "vg_name": "ceph_vg0"
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:        }
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]:    ]
Jan 23 04:44:52 np0005593232 heuristic_allen[286458]: }
Jan 23 04:44:52 np0005593232 systemd[1]: libpod-d9deaebbf55d61853d42f8701c444777cc54cd345ae792ceeb30ea3c28c8595c.scope: Deactivated successfully.
Jan 23 04:44:52 np0005593232 podman[286441]: 2026-01-23 09:44:52.978244426 +0000 UTC m=+1.083204875 container died d9deaebbf55d61853d42f8701c444777cc54cd345ae792ceeb30ea3c28c8595c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_allen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:44:53 np0005593232 systemd[1]: var-lib-containers-storage-overlay-96c66cd0da5eabc7f68d5e355fdb4647cf3cfc5c5fccd94a4288a636138ecdc4-merged.mount: Deactivated successfully.
Jan 23 04:44:53 np0005593232 podman[286441]: 2026-01-23 09:44:53.033333013 +0000 UTC m=+1.138293442 container remove d9deaebbf55d61853d42f8701c444777cc54cd345ae792ceeb30ea3c28c8595c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_allen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:44:53 np0005593232 systemd[1]: libpod-conmon-d9deaebbf55d61853d42f8701c444777cc54cd345ae792ceeb30ea3c28c8595c.scope: Deactivated successfully.
Jan 23 04:44:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1555: 321 pgs: 321 active+clean; 165 MiB data, 606 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.7 MiB/s wr, 207 op/s
Jan 23 04:44:53 np0005593232 nova_compute[250269]: 2026-01-23 09:44:53.509 250273 INFO nova.compute.manager [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Took 2.11 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 04:44:53 np0005593232 nova_compute[250269]: 2026-01-23 09:44:53.510 250273 DEBUG oslo.service.loopingcall [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 04:44:53 np0005593232 nova_compute[250269]: 2026-01-23 09:44:53.510 250273 DEBUG nova.compute.manager [-] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 04:44:53 np0005593232 nova_compute[250269]: 2026-01-23 09:44:53.510 250273 DEBUG nova.network.neutron [-] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 04:44:53 np0005593232 podman[286622]: 2026-01-23 09:44:53.718175498 +0000 UTC m=+0.051086461 container create 4f3aa149d66c2bb044d6cd40f5c679c876e3f1c57f02ea31cc353e08cdca2bfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:44:53 np0005593232 systemd[1]: Started libpod-conmon-4f3aa149d66c2bb044d6cd40f5c679c876e3f1c57f02ea31cc353e08cdca2bfb.scope.
Jan 23 04:44:53 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:44:53 np0005593232 podman[286622]: 2026-01-23 09:44:53.696130665 +0000 UTC m=+0.029041718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:44:53 np0005593232 podman[286622]: 2026-01-23 09:44:53.806331719 +0000 UTC m=+0.139242712 container init 4f3aa149d66c2bb044d6cd40f5c679c876e3f1c57f02ea31cc353e08cdca2bfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:44:53 np0005593232 podman[286622]: 2026-01-23 09:44:53.813448556 +0000 UTC m=+0.146359549 container start 4f3aa149d66c2bb044d6cd40f5c679c876e3f1c57f02ea31cc353e08cdca2bfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 04:44:53 np0005593232 gracious_nash[286638]: 167 167
Jan 23 04:44:53 np0005593232 podman[286622]: 2026-01-23 09:44:53.818432652 +0000 UTC m=+0.151343655 container attach 4f3aa149d66c2bb044d6cd40f5c679c876e3f1c57f02ea31cc353e08cdca2bfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:44:53 np0005593232 systemd[1]: libpod-4f3aa149d66c2bb044d6cd40f5c679c876e3f1c57f02ea31cc353e08cdca2bfb.scope: Deactivated successfully.
Jan 23 04:44:53 np0005593232 podman[286622]: 2026-01-23 09:44:53.819330728 +0000 UTC m=+0.152241701 container died 4f3aa149d66c2bb044d6cd40f5c679c876e3f1c57f02ea31cc353e08cdca2bfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 23 04:44:53 np0005593232 systemd[1]: var-lib-containers-storage-overlay-57ba21d33ec41dd5c63d2cfbcbc25da13fcf186d528b5ca05bd44c5b0d1dfcdc-merged.mount: Deactivated successfully.
Jan 23 04:44:53 np0005593232 podman[286622]: 2026-01-23 09:44:53.863315201 +0000 UTC m=+0.196226174 container remove 4f3aa149d66c2bb044d6cd40f5c679c876e3f1c57f02ea31cc353e08cdca2bfb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:44:53 np0005593232 systemd[1]: libpod-conmon-4f3aa149d66c2bb044d6cd40f5c679c876e3f1c57f02ea31cc353e08cdca2bfb.scope: Deactivated successfully.
Jan 23 04:44:54 np0005593232 podman[286662]: 2026-01-23 09:44:54.055680761 +0000 UTC m=+0.067863290 container create 8023b6e7d031eb4e31b6168e179808b7dffcc378e93d0db76ebac46e218b5c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jemison, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:44:54 np0005593232 systemd[1]: Started libpod-conmon-8023b6e7d031eb4e31b6168e179808b7dffcc378e93d0db76ebac46e218b5c61.scope.
Jan 23 04:44:54 np0005593232 podman[286662]: 2026-01-23 09:44:54.02785374 +0000 UTC m=+0.040036329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:44:54 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:44:54 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1b0641498f82f2f8f5e55de7f0ac2aa48fe38d21e86789988375443d3a9bed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:44:54 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1b0641498f82f2f8f5e55de7f0ac2aa48fe38d21e86789988375443d3a9bed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:44:54 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1b0641498f82f2f8f5e55de7f0ac2aa48fe38d21e86789988375443d3a9bed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:44:54 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1b0641498f82f2f8f5e55de7f0ac2aa48fe38d21e86789988375443d3a9bed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:44:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:44:54 np0005593232 podman[286662]: 2026-01-23 09:44:54.160846589 +0000 UTC m=+0.173029128 container init 8023b6e7d031eb4e31b6168e179808b7dffcc378e93d0db76ebac46e218b5c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jemison, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 04:44:54 np0005593232 podman[286662]: 2026-01-23 09:44:54.17769756 +0000 UTC m=+0.189880099 container start 8023b6e7d031eb4e31b6168e179808b7dffcc378e93d0db76ebac46e218b5c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jemison, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:44:54 np0005593232 podman[286662]: 2026-01-23 09:44:54.18148238 +0000 UTC m=+0.193664879 container attach 8023b6e7d031eb4e31b6168e179808b7dffcc378e93d0db76ebac46e218b5c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jemison, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:44:54 np0005593232 nova_compute[250269]: 2026-01-23 09:44:54.439 250273 DEBUG nova.compute.manager [req-fcd9af95-0451-4030-88e7-fa452d78ca9f req-8fcfcb75-871c-4275-9aa8-4f9a149cf8ab 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Received event network-vif-plugged-97ccc3f3-015a-4a0e-822f-ca2340cde865 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:44:54 np0005593232 nova_compute[250269]: 2026-01-23 09:44:54.440 250273 DEBUG oslo_concurrency.lockutils [req-fcd9af95-0451-4030-88e7-fa452d78ca9f req-8fcfcb75-871c-4275-9aa8-4f9a149cf8ab 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:44:54 np0005593232 nova_compute[250269]: 2026-01-23 09:44:54.441 250273 DEBUG oslo_concurrency.lockutils [req-fcd9af95-0451-4030-88e7-fa452d78ca9f req-8fcfcb75-871c-4275-9aa8-4f9a149cf8ab 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:44:54 np0005593232 nova_compute[250269]: 2026-01-23 09:44:54.441 250273 DEBUG oslo_concurrency.lockutils [req-fcd9af95-0451-4030-88e7-fa452d78ca9f req-8fcfcb75-871c-4275-9aa8-4f9a149cf8ab 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:44:54 np0005593232 nova_compute[250269]: 2026-01-23 09:44:54.441 250273 DEBUG nova.compute.manager [req-fcd9af95-0451-4030-88e7-fa452d78ca9f req-8fcfcb75-871c-4275-9aa8-4f9a149cf8ab 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] No waiting events found dispatching network-vif-plugged-97ccc3f3-015a-4a0e-822f-ca2340cde865 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:44:54 np0005593232 nova_compute[250269]: 2026-01-23 09:44:54.441 250273 WARNING nova.compute.manager [req-fcd9af95-0451-4030-88e7-fa452d78ca9f req-8fcfcb75-871c-4275-9aa8-4f9a149cf8ab 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Received unexpected event network-vif-plugged-97ccc3f3-015a-4a0e-822f-ca2340cde865 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 04:44:54 np0005593232 nova_compute[250269]: 2026-01-23 09:44:54.442 250273 DEBUG nova.compute.manager [req-fcd9af95-0451-4030-88e7-fa452d78ca9f req-8fcfcb75-871c-4275-9aa8-4f9a149cf8ab 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Received event network-vif-plugged-97ccc3f3-015a-4a0e-822f-ca2340cde865 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:44:54 np0005593232 nova_compute[250269]: 2026-01-23 09:44:54.442 250273 DEBUG oslo_concurrency.lockutils [req-fcd9af95-0451-4030-88e7-fa452d78ca9f req-8fcfcb75-871c-4275-9aa8-4f9a149cf8ab 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:44:54 np0005593232 nova_compute[250269]: 2026-01-23 09:44:54.443 250273 DEBUG oslo_concurrency.lockutils [req-fcd9af95-0451-4030-88e7-fa452d78ca9f req-8fcfcb75-871c-4275-9aa8-4f9a149cf8ab 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:44:54 np0005593232 nova_compute[250269]: 2026-01-23 09:44:54.443 250273 DEBUG oslo_concurrency.lockutils [req-fcd9af95-0451-4030-88e7-fa452d78ca9f req-8fcfcb75-871c-4275-9aa8-4f9a149cf8ab 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:44:54 np0005593232 nova_compute[250269]: 2026-01-23 09:44:54.443 250273 DEBUG nova.compute.manager [req-fcd9af95-0451-4030-88e7-fa452d78ca9f req-8fcfcb75-871c-4275-9aa8-4f9a149cf8ab 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] No waiting events found dispatching network-vif-plugged-97ccc3f3-015a-4a0e-822f-ca2340cde865 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:44:54 np0005593232 nova_compute[250269]: 2026-01-23 09:44:54.444 250273 WARNING nova.compute.manager [req-fcd9af95-0451-4030-88e7-fa452d78ca9f req-8fcfcb75-871c-4275-9aa8-4f9a149cf8ab 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Received unexpected event network-vif-plugged-97ccc3f3-015a-4a0e-822f-ca2340cde865 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 04:44:54 np0005593232 nova_compute[250269]: 2026-01-23 09:44:54.617 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:54.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:54.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:55 np0005593232 hardcore_jemison[286678]: {
Jan 23 04:44:55 np0005593232 hardcore_jemison[286678]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:44:55 np0005593232 hardcore_jemison[286678]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:44:55 np0005593232 hardcore_jemison[286678]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:44:55 np0005593232 hardcore_jemison[286678]:        "osd_id": 0,
Jan 23 04:44:55 np0005593232 hardcore_jemison[286678]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:44:55 np0005593232 hardcore_jemison[286678]:        "type": "bluestore"
Jan 23 04:44:55 np0005593232 hardcore_jemison[286678]:    }
Jan 23 04:44:55 np0005593232 hardcore_jemison[286678]: }
Jan 23 04:44:55 np0005593232 systemd[1]: libpod-8023b6e7d031eb4e31b6168e179808b7dffcc378e93d0db76ebac46e218b5c61.scope: Deactivated successfully.
Jan 23 04:44:55 np0005593232 conmon[286678]: conmon 8023b6e7d031eb4e31b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8023b6e7d031eb4e31b6168e179808b7dffcc378e93d0db76ebac46e218b5c61.scope/container/memory.events
Jan 23 04:44:55 np0005593232 podman[286700]: 2026-01-23 09:44:55.150096821 +0000 UTC m=+0.023693652 container died 8023b6e7d031eb4e31b6168e179808b7dffcc378e93d0db76ebac46e218b5c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jemison, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 04:44:55 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4d1b0641498f82f2f8f5e55de7f0ac2aa48fe38d21e86789988375443d3a9bed-merged.mount: Deactivated successfully.
Jan 23 04:44:55 np0005593232 podman[286700]: 2026-01-23 09:44:55.211296786 +0000 UTC m=+0.084893577 container remove 8023b6e7d031eb4e31b6168e179808b7dffcc378e93d0db76ebac46e218b5c61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 23 04:44:55 np0005593232 systemd[1]: libpod-conmon-8023b6e7d031eb4e31b6168e179808b7dffcc378e93d0db76ebac46e218b5c61.scope: Deactivated successfully.
Jan 23 04:44:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:44:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:44:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:44:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:44:55 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 28528021-c9ff-4e32-aba1-0ff84f6e1ac3 does not exist
Jan 23 04:44:55 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a3450490-749f-46ca-8902-83829f328597 does not exist
Jan 23 04:44:55 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev dd4ec849-90e0-44be-a0d2-a57c0bfa9c3a does not exist
Jan 23 04:44:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1556: 321 pgs: 321 active+clean; 165 MiB data, 606 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 183 op/s
Jan 23 04:44:55 np0005593232 nova_compute[250269]: 2026-01-23 09:44:55.802 250273 DEBUG nova.network.neutron [-] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:44:55 np0005593232 nova_compute[250269]: 2026-01-23 09:44:55.858 250273 INFO nova.compute.manager [-] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Took 2.35 seconds to deallocate network for instance.#033[00m
Jan 23 04:44:55 np0005593232 nova_compute[250269]: 2026-01-23 09:44:55.927 250273 DEBUG nova.compute.manager [req-512f48f9-c129-4336-8743-26bfc116c11f req-e7844cd0-15a2-4281-82c4-d08fc0e2d402 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Received event network-vif-deleted-97ccc3f3-015a-4a0e-822f-ca2340cde865 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:44:55 np0005593232 nova_compute[250269]: 2026-01-23 09:44:55.930 250273 DEBUG oslo_concurrency.lockutils [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:44:55 np0005593232 nova_compute[250269]: 2026-01-23 09:44:55.931 250273 DEBUG oslo_concurrency.lockutils [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:44:56 np0005593232 nova_compute[250269]: 2026-01-23 09:44:56.021 250273 DEBUG oslo_concurrency.processutils [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:44:56 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:44:56 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:44:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:44:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/246987693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:44:56 np0005593232 nova_compute[250269]: 2026-01-23 09:44:56.478 250273 DEBUG oslo_concurrency.processutils [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:44:56 np0005593232 nova_compute[250269]: 2026-01-23 09:44:56.488 250273 DEBUG nova.compute.provider_tree [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:44:56 np0005593232 nova_compute[250269]: 2026-01-23 09:44:56.697 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:44:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:56.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:56.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1557: 321 pgs: 321 active+clean; 134 MiB data, 578 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 201 op/s
Jan 23 04:44:58 np0005593232 nova_compute[250269]: 2026-01-23 09:44:58.654 250273 DEBUG nova.scheduler.client.report [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:44:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:44:58.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:58 np0005593232 nova_compute[250269]: 2026-01-23 09:44:58.782 250273 DEBUG nova.compute.manager [req-bea2a68a-ccb4-47d3-8e75-034c95f3882d req-a0e676bd-8e96-4883-b202-cbdfff90ff5c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Received event network-vif-plugged-97ccc3f3-015a-4a0e-822f-ca2340cde865 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:44:58 np0005593232 nova_compute[250269]: 2026-01-23 09:44:58.782 250273 DEBUG oslo_concurrency.lockutils [req-bea2a68a-ccb4-47d3-8e75-034c95f3882d req-a0e676bd-8e96-4883-b202-cbdfff90ff5c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:44:58 np0005593232 nova_compute[250269]: 2026-01-23 09:44:58.783 250273 DEBUG oslo_concurrency.lockutils [req-bea2a68a-ccb4-47d3-8e75-034c95f3882d req-a0e676bd-8e96-4883-b202-cbdfff90ff5c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:44:58 np0005593232 nova_compute[250269]: 2026-01-23 09:44:58.783 250273 DEBUG oslo_concurrency.lockutils [req-bea2a68a-ccb4-47d3-8e75-034c95f3882d req-a0e676bd-8e96-4883-b202-cbdfff90ff5c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:44:58 np0005593232 nova_compute[250269]: 2026-01-23 09:44:58.783 250273 DEBUG nova.compute.manager [req-bea2a68a-ccb4-47d3-8e75-034c95f3882d req-a0e676bd-8e96-4883-b202-cbdfff90ff5c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] No waiting events found dispatching network-vif-plugged-97ccc3f3-015a-4a0e-822f-ca2340cde865 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:44:58 np0005593232 nova_compute[250269]: 2026-01-23 09:44:58.783 250273 WARNING nova.compute.manager [req-bea2a68a-ccb4-47d3-8e75-034c95f3882d req-a0e676bd-8e96-4883-b202-cbdfff90ff5c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Received unexpected event network-vif-plugged-97ccc3f3-015a-4a0e-822f-ca2340cde865 for instance with vm_state deleted and task_state None.#033[00m
Jan 23 04:44:58 np0005593232 nova_compute[250269]: 2026-01-23 09:44:58.784 250273 DEBUG nova.compute.manager [req-bea2a68a-ccb4-47d3-8e75-034c95f3882d req-a0e676bd-8e96-4883-b202-cbdfff90ff5c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Received event network-vif-unplugged-97ccc3f3-015a-4a0e-822f-ca2340cde865 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:44:58 np0005593232 nova_compute[250269]: 2026-01-23 09:44:58.784 250273 DEBUG oslo_concurrency.lockutils [req-bea2a68a-ccb4-47d3-8e75-034c95f3882d req-a0e676bd-8e96-4883-b202-cbdfff90ff5c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:44:58 np0005593232 nova_compute[250269]: 2026-01-23 09:44:58.784 250273 DEBUG oslo_concurrency.lockutils [req-bea2a68a-ccb4-47d3-8e75-034c95f3882d req-a0e676bd-8e96-4883-b202-cbdfff90ff5c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:44:58 np0005593232 nova_compute[250269]: 2026-01-23 09:44:58.784 250273 DEBUG oslo_concurrency.lockutils [req-bea2a68a-ccb4-47d3-8e75-034c95f3882d req-a0e676bd-8e96-4883-b202-cbdfff90ff5c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:44:58 np0005593232 nova_compute[250269]: 2026-01-23 09:44:58.784 250273 DEBUG nova.compute.manager [req-bea2a68a-ccb4-47d3-8e75-034c95f3882d req-a0e676bd-8e96-4883-b202-cbdfff90ff5c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] No waiting events found dispatching network-vif-unplugged-97ccc3f3-015a-4a0e-822f-ca2340cde865 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:44:58 np0005593232 nova_compute[250269]: 2026-01-23 09:44:58.785 250273 WARNING nova.compute.manager [req-bea2a68a-ccb4-47d3-8e75-034c95f3882d req-a0e676bd-8e96-4883-b202-cbdfff90ff5c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Received unexpected event network-vif-unplugged-97ccc3f3-015a-4a0e-822f-ca2340cde865 for instance with vm_state deleted and task_state None.#033[00m
Jan 23 04:44:58 np0005593232 nova_compute[250269]: 2026-01-23 09:44:58.785 250273 DEBUG nova.compute.manager [req-bea2a68a-ccb4-47d3-8e75-034c95f3882d req-a0e676bd-8e96-4883-b202-cbdfff90ff5c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Received event network-vif-plugged-97ccc3f3-015a-4a0e-822f-ca2340cde865 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:44:58 np0005593232 nova_compute[250269]: 2026-01-23 09:44:58.785 250273 DEBUG oslo_concurrency.lockutils [req-bea2a68a-ccb4-47d3-8e75-034c95f3882d req-a0e676bd-8e96-4883-b202-cbdfff90ff5c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:44:58 np0005593232 nova_compute[250269]: 2026-01-23 09:44:58.785 250273 DEBUG oslo_concurrency.lockutils [req-bea2a68a-ccb4-47d3-8e75-034c95f3882d req-a0e676bd-8e96-4883-b202-cbdfff90ff5c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:44:58 np0005593232 nova_compute[250269]: 2026-01-23 09:44:58.786 250273 DEBUG oslo_concurrency.lockutils [req-bea2a68a-ccb4-47d3-8e75-034c95f3882d req-a0e676bd-8e96-4883-b202-cbdfff90ff5c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:44:58 np0005593232 nova_compute[250269]: 2026-01-23 09:44:58.786 250273 DEBUG nova.compute.manager [req-bea2a68a-ccb4-47d3-8e75-034c95f3882d req-a0e676bd-8e96-4883-b202-cbdfff90ff5c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] No waiting events found dispatching network-vif-plugged-97ccc3f3-015a-4a0e-822f-ca2340cde865 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:44:58 np0005593232 nova_compute[250269]: 2026-01-23 09:44:58.786 250273 WARNING nova.compute.manager [req-bea2a68a-ccb4-47d3-8e75-034c95f3882d req-a0e676bd-8e96-4883-b202-cbdfff90ff5c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Received unexpected event network-vif-plugged-97ccc3f3-015a-4a0e-822f-ca2340cde865 for instance with vm_state deleted and task_state None.#033[00m
Jan 23 04:44:58 np0005593232 nova_compute[250269]: 2026-01-23 09:44:58.828 250273 DEBUG oslo_concurrency.lockutils [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:44:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:44:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:44:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:44:58.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:44:59 np0005593232 nova_compute[250269]: 2026-01-23 09:44:59.042 250273 INFO nova.scheduler.client.report [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Deleted allocations for instance 5f9cef8f-a65f-4fa3-aecd-678b25b19227#033[00m
Jan 23 04:44:59 np0005593232 nova_compute[250269]: 2026-01-23 09:44:59.125 250273 DEBUG oslo_concurrency.lockutils [None req-c8ebbc36-34d4-4154-b0e3-5856f0e10e24 72d5ad5683984a21a60886499308fc93 3bf3a67d862f46d790d628778b7c14e4 - - default default] Lock "5f9cef8f-a65f-4fa3-aecd-678b25b19227" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:44:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:44:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1558: 321 pgs: 321 active+clean; 134 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 155 op/s
Jan 23 04:44:59 np0005593232 nova_compute[250269]: 2026-01-23 09:44:59.619 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:00.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:00.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1559: 321 pgs: 321 active+clean; 134 MiB data, 564 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 26 KiB/s wr, 113 op/s
Jan 23 04:45:01 np0005593232 nova_compute[250269]: 2026-01-23 09:45:01.702 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:45:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:02.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:45:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:02.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1560: 321 pgs: 321 active+clean; 162 MiB data, 587 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 206 op/s
Jan 23 04:45:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:45:04 np0005593232 nova_compute[250269]: 2026-01-23 09:45:04.620 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:04.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:04.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1561: 321 pgs: 321 active+clean; 162 MiB data, 587 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Jan 23 04:45:06 np0005593232 nova_compute[250269]: 2026-01-23 09:45:06.482 250273 DEBUG oslo_concurrency.lockutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Acquiring lock "0d38eba2-7627-421b-b5da-45eb53aca667" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:45:06 np0005593232 nova_compute[250269]: 2026-01-23 09:45:06.483 250273 DEBUG oslo_concurrency.lockutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Lock "0d38eba2-7627-421b-b5da-45eb53aca667" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:45:06 np0005593232 nova_compute[250269]: 2026-01-23 09:45:06.524 250273 DEBUG nova.compute.manager [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:45:06 np0005593232 nova_compute[250269]: 2026-01-23 09:45:06.649 250273 DEBUG oslo_concurrency.lockutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:45:06 np0005593232 nova_compute[250269]: 2026-01-23 09:45:06.650 250273 DEBUG oslo_concurrency.lockutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:45:06 np0005593232 nova_compute[250269]: 2026-01-23 09:45:06.650 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769161491.6488965, 5f9cef8f-a65f-4fa3-aecd-678b25b19227 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:45:06 np0005593232 nova_compute[250269]: 2026-01-23 09:45:06.651 250273 INFO nova.compute.manager [-] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:45:06 np0005593232 nova_compute[250269]: 2026-01-23 09:45:06.663 250273 DEBUG nova.virt.hardware [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:45:06 np0005593232 nova_compute[250269]: 2026-01-23 09:45:06.664 250273 INFO nova.compute.claims [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:45:06 np0005593232 nova_compute[250269]: 2026-01-23 09:45:06.699 250273 DEBUG nova.compute.manager [None req-4207ac28-f993-4827-82b6-745fac09d0ec - - - - - -] [instance: 5f9cef8f-a65f-4fa3-aecd-678b25b19227] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:45:06 np0005593232 nova_compute[250269]: 2026-01-23 09:45:06.705 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:45:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:06.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:45:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:06.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:06 np0005593232 nova_compute[250269]: 2026-01-23 09:45:06.879 250273 DEBUG oslo_concurrency.processutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:45:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:45:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/806783839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:45:07 np0005593232 nova_compute[250269]: 2026-01-23 09:45:07.322 250273 DEBUG oslo_concurrency.processutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:45:07 np0005593232 nova_compute[250269]: 2026-01-23 09:45:07.329 250273 DEBUG nova.compute.provider_tree [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:45:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1562: 321 pgs: 321 active+clean; 167 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 153 op/s
Jan 23 04:45:07 np0005593232 nova_compute[250269]: 2026-01-23 09:45:07.356 250273 DEBUG nova.scheduler.client.report [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:45:07 np0005593232 nova_compute[250269]: 2026-01-23 09:45:07.389 250273 DEBUG oslo_concurrency.lockutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:45:07 np0005593232 nova_compute[250269]: 2026-01-23 09:45:07.391 250273 DEBUG nova.compute.manager [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 04:45:07 np0005593232 nova_compute[250269]: 2026-01-23 09:45:07.470 250273 DEBUG nova.compute.manager [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 04:45:07 np0005593232 nova_compute[250269]: 2026-01-23 09:45:07.471 250273 DEBUG nova.network.neutron [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:45:07 np0005593232 nova_compute[250269]: 2026-01-23 09:45:07.505 250273 INFO nova.virt.libvirt.driver [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:45:07 np0005593232 nova_compute[250269]: 2026-01-23 09:45:07.558 250273 DEBUG nova.compute.manager [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:45:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:45:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:45:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:45:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:45:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:45:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:45:07 np0005593232 nova_compute[250269]: 2026-01-23 09:45:07.702 250273 DEBUG nova.compute.manager [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:45:07 np0005593232 nova_compute[250269]: 2026-01-23 09:45:07.705 250273 DEBUG nova.virt.libvirt.driver [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:45:07 np0005593232 nova_compute[250269]: 2026-01-23 09:45:07.706 250273 INFO nova.virt.libvirt.driver [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Creating image(s)#033[00m
Jan 23 04:45:07 np0005593232 nova_compute[250269]: 2026-01-23 09:45:07.745 250273 DEBUG nova.storage.rbd_utils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] rbd image 0d38eba2-7627-421b-b5da-45eb53aca667_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:45:07 np0005593232 nova_compute[250269]: 2026-01-23 09:45:07.790 250273 DEBUG nova.storage.rbd_utils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] rbd image 0d38eba2-7627-421b-b5da-45eb53aca667_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:45:07 np0005593232 nova_compute[250269]: 2026-01-23 09:45:07.827 250273 DEBUG nova.storage.rbd_utils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] rbd image 0d38eba2-7627-421b-b5da-45eb53aca667_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:45:07 np0005593232 nova_compute[250269]: 2026-01-23 09:45:07.832 250273 DEBUG oslo_concurrency.processutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:45:07 np0005593232 nova_compute[250269]: 2026-01-23 09:45:07.934 250273 DEBUG oslo_concurrency.processutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:45:07 np0005593232 nova_compute[250269]: 2026-01-23 09:45:07.936 250273 DEBUG oslo_concurrency.lockutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:45:07 np0005593232 nova_compute[250269]: 2026-01-23 09:45:07.938 250273 DEBUG oslo_concurrency.lockutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:45:07 np0005593232 nova_compute[250269]: 2026-01-23 09:45:07.938 250273 DEBUG oslo_concurrency.lockutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:45:07 np0005593232 nova_compute[250269]: 2026-01-23 09:45:07.977 250273 DEBUG nova.storage.rbd_utils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] rbd image 0d38eba2-7627-421b-b5da-45eb53aca667_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:45:07 np0005593232 nova_compute[250269]: 2026-01-23 09:45:07.983 250273 DEBUG oslo_concurrency.processutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 0d38eba2-7627-421b-b5da-45eb53aca667_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:45:08 np0005593232 nova_compute[250269]: 2026-01-23 09:45:08.043 250273 DEBUG nova.policy [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1270518e615c4c63a54865bfe906ce5d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c05913e2e5c046bf92e6c8c855833959', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 04:45:08 np0005593232 nova_compute[250269]: 2026-01-23 09:45:08.280 250273 DEBUG oslo_concurrency.processutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 0d38eba2-7627-421b-b5da-45eb53aca667_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:45:08 np0005593232 nova_compute[250269]: 2026-01-23 09:45:08.365 250273 DEBUG nova.storage.rbd_utils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] resizing rbd image 0d38eba2-7627-421b-b5da-45eb53aca667_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 04:45:08 np0005593232 nova_compute[250269]: 2026-01-23 09:45:08.479 250273 DEBUG nova.objects.instance [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Lazy-loading 'migration_context' on Instance uuid 0d38eba2-7627-421b-b5da-45eb53aca667 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:45:08 np0005593232 nova_compute[250269]: 2026-01-23 09:45:08.514 250273 DEBUG nova.virt.libvirt.driver [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:45:08 np0005593232 nova_compute[250269]: 2026-01-23 09:45:08.515 250273 DEBUG nova.virt.libvirt.driver [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Ensure instance console log exists: /var/lib/nova/instances/0d38eba2-7627-421b-b5da-45eb53aca667/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:45:08 np0005593232 nova_compute[250269]: 2026-01-23 09:45:08.516 250273 DEBUG oslo_concurrency.lockutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:45:08 np0005593232 nova_compute[250269]: 2026-01-23 09:45:08.516 250273 DEBUG oslo_concurrency.lockutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:45:08 np0005593232 nova_compute[250269]: 2026-01-23 09:45:08.516 250273 DEBUG oslo_concurrency.lockutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:45:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:45:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:08.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:45:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:08.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:45:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1563: 321 pgs: 321 active+clean; 167 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Jan 23 04:45:09 np0005593232 nova_compute[250269]: 2026-01-23 09:45:09.622 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:09 np0005593232 nova_compute[250269]: 2026-01-23 09:45:09.757 250273 DEBUG nova.network.neutron [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Successfully created port: 78ad8ff7-ce83-49f8-b130-d439e99c2ba3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 04:45:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:10.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:10.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:11 np0005593232 nova_compute[250269]: 2026-01-23 09:45:11.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:45:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1564: 321 pgs: 321 active+clean; 167 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Jan 23 04:45:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Jan 23 04:45:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Jan 23 04:45:11 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Jan 23 04:45:11 np0005593232 nova_compute[250269]: 2026-01-23 09:45:11.708 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:12 np0005593232 nova_compute[250269]: 2026-01-23 09:45:12.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:45:12 np0005593232 podman[287034]: 2026-01-23 09:45:12.453829561 +0000 UTC m=+0.107164326 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 23 04:45:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Jan 23 04:45:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Jan 23 04:45:12 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Jan 23 04:45:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.003000088s ======
Jan 23 04:45:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:12.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000088s
Jan 23 04:45:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:12.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:13 np0005593232 nova_compute[250269]: 2026-01-23 09:45:13.037 250273 DEBUG nova.network.neutron [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Successfully updated port: 78ad8ff7-ce83-49f8-b130-d439e99c2ba3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 04:45:13 np0005593232 nova_compute[250269]: 2026-01-23 09:45:13.067 250273 DEBUG oslo_concurrency.lockutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Acquiring lock "refresh_cache-0d38eba2-7627-421b-b5da-45eb53aca667" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:45:13 np0005593232 nova_compute[250269]: 2026-01-23 09:45:13.067 250273 DEBUG oslo_concurrency.lockutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Acquired lock "refresh_cache-0d38eba2-7627-421b-b5da-45eb53aca667" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:45:13 np0005593232 nova_compute[250269]: 2026-01-23 09:45:13.068 250273 DEBUG nova.network.neutron [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:45:13 np0005593232 nova_compute[250269]: 2026-01-23 09:45:13.121 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:13 np0005593232 nova_compute[250269]: 2026-01-23 09:45:13.296 250273 DEBUG nova.compute.manager [req-43d6a590-68f8-4508-92c0-4f073ee71be7 req-fa3b362f-82e8-4e92-80fa-7b7caeb1a889 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Received event network-changed-78ad8ff7-ce83-49f8-b130-d439e99c2ba3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:45:13 np0005593232 nova_compute[250269]: 2026-01-23 09:45:13.297 250273 DEBUG nova.compute.manager [req-43d6a590-68f8-4508-92c0-4f073ee71be7 req-fa3b362f-82e8-4e92-80fa-7b7caeb1a889 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Refreshing instance network info cache due to event network-changed-78ad8ff7-ce83-49f8-b130-d439e99c2ba3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:45:13 np0005593232 nova_compute[250269]: 2026-01-23 09:45:13.297 250273 DEBUG oslo_concurrency.lockutils [req-43d6a590-68f8-4508-92c0-4f073ee71be7 req-fa3b362f-82e8-4e92-80fa-7b7caeb1a889 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-0d38eba2-7627-421b-b5da-45eb53aca667" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:45:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1567: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 239 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.7 MiB/s wr, 170 op/s
Jan 23 04:45:13 np0005593232 nova_compute[250269]: 2026-01-23 09:45:13.345 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Jan 23 04:45:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Jan 23 04:45:13 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Jan 23 04:45:13 np0005593232 nova_compute[250269]: 2026-01-23 09:45:13.972 250273 DEBUG nova.network.neutron [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:45:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:45:14 np0005593232 nova_compute[250269]: 2026-01-23 09:45:14.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:45:14 np0005593232 nova_compute[250269]: 2026-01-23 09:45:14.624 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:14.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:14.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:45:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1569: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 239 MiB data, 610 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.8 MiB/s wr, 161 op/s
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.619 250273 DEBUG nova.network.neutron [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Updating instance_info_cache with network_info: [{"id": "78ad8ff7-ce83-49f8-b130-d439e99c2ba3", "address": "fa:16:3e:a5:9c:88", "network": {"id": "e98627d8-446e-4b60-8051-8e37123acd76", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1442445063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c05913e2e5c046bf92e6c8c855833959", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78ad8ff7-ce", "ovs_interfaceid": "78ad8ff7-ce83-49f8-b130-d439e99c2ba3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.657 250273 DEBUG oslo_concurrency.lockutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Releasing lock "refresh_cache-0d38eba2-7627-421b-b5da-45eb53aca667" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.657 250273 DEBUG nova.compute.manager [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Instance network_info: |[{"id": "78ad8ff7-ce83-49f8-b130-d439e99c2ba3", "address": "fa:16:3e:a5:9c:88", "network": {"id": "e98627d8-446e-4b60-8051-8e37123acd76", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1442445063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c05913e2e5c046bf92e6c8c855833959", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78ad8ff7-ce", "ovs_interfaceid": "78ad8ff7-ce83-49f8-b130-d439e99c2ba3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.658 250273 DEBUG oslo_concurrency.lockutils [req-43d6a590-68f8-4508-92c0-4f073ee71be7 req-fa3b362f-82e8-4e92-80fa-7b7caeb1a889 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-0d38eba2-7627-421b-b5da-45eb53aca667" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.659 250273 DEBUG nova.network.neutron [req-43d6a590-68f8-4508-92c0-4f073ee71be7 req-fa3b362f-82e8-4e92-80fa-7b7caeb1a889 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Refreshing network info cache for port 78ad8ff7-ce83-49f8-b130-d439e99c2ba3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.664 250273 DEBUG nova.virt.libvirt.driver [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Start _get_guest_xml network_info=[{"id": "78ad8ff7-ce83-49f8-b130-d439e99c2ba3", "address": "fa:16:3e:a5:9c:88", "network": {"id": "e98627d8-446e-4b60-8051-8e37123acd76", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1442445063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c05913e2e5c046bf92e6c8c855833959", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78ad8ff7-ce", "ovs_interfaceid": "78ad8ff7-ce83-49f8-b130-d439e99c2ba3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.671 250273 WARNING nova.virt.libvirt.driver [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.677 250273 DEBUG nova.virt.libvirt.host [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.678 250273 DEBUG nova.virt.libvirt.host [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.688 250273 DEBUG nova.virt.libvirt.host [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.689 250273 DEBUG nova.virt.libvirt.host [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.692 250273 DEBUG nova.virt.libvirt.driver [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.692 250273 DEBUG nova.virt.hardware [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.693 250273 DEBUG nova.virt.hardware [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.694 250273 DEBUG nova.virt.hardware [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.694 250273 DEBUG nova.virt.hardware [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.694 250273 DEBUG nova.virt.hardware [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.695 250273 DEBUG nova.virt.hardware [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.695 250273 DEBUG nova.virt.hardware [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.696 250273 DEBUG nova.virt.hardware [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.697 250273 DEBUG nova.virt.hardware [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.697 250273 DEBUG nova.virt.hardware [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.697 250273 DEBUG nova.virt.hardware [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:45:15 np0005593232 nova_compute[250269]: 2026-01-23 09:45:15.701 250273 DEBUG oslo_concurrency.processutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:45:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:45:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3263015375' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.197 250273 DEBUG oslo_concurrency.processutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.226 250273 DEBUG nova.storage.rbd_utils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] rbd image 0d38eba2-7627-421b-b5da-45eb53aca667_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.234 250273 DEBUG oslo_concurrency.processutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.331 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.332 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 04:45:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:45:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2852036845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.664 250273 DEBUG oslo_concurrency.processutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.666 250273 DEBUG nova.virt.libvirt.vif [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:45:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-197087860',display_name='tempest-FloatingIPsAssociationTestJSON-server-197087860',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-197087860',id=56,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c05913e2e5c046bf92e6c8c855833959',ramdisk_id='',reservation_id='r-avbwgdwh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1969273951',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1969273951-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:45:07Z,user_data=None,user_id='1270518e615c4c63a54865bfe906ce5d',uuid=0d38eba2-7627-421b-b5da-45eb53aca667,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "78ad8ff7-ce83-49f8-b130-d439e99c2ba3", "address": "fa:16:3e:a5:9c:88", "network": {"id": "e98627d8-446e-4b60-8051-8e37123acd76", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1442445063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c05913e2e5c046bf92e6c8c855833959", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78ad8ff7-ce", "ovs_interfaceid": "78ad8ff7-ce83-49f8-b130-d439e99c2ba3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.666 250273 DEBUG nova.network.os_vif_util [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Converting VIF {"id": "78ad8ff7-ce83-49f8-b130-d439e99c2ba3", "address": "fa:16:3e:a5:9c:88", "network": {"id": "e98627d8-446e-4b60-8051-8e37123acd76", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1442445063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c05913e2e5c046bf92e6c8c855833959", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78ad8ff7-ce", "ovs_interfaceid": "78ad8ff7-ce83-49f8-b130-d439e99c2ba3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.667 250273 DEBUG nova.network.os_vif_util [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:9c:88,bridge_name='br-int',has_traffic_filtering=True,id=78ad8ff7-ce83-49f8-b130-d439e99c2ba3,network=Network(e98627d8-446e-4b60-8051-8e37123acd76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78ad8ff7-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.669 250273 DEBUG nova.objects.instance [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0d38eba2-7627-421b-b5da-45eb53aca667 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.692 250273 DEBUG nova.virt.libvirt.driver [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:45:16 np0005593232 nova_compute[250269]:  <uuid>0d38eba2-7627-421b-b5da-45eb53aca667</uuid>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:  <name>instance-00000038</name>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-197087860</nova:name>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:45:15</nova:creationTime>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:45:16 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:        <nova:user uuid="1270518e615c4c63a54865bfe906ce5d">tempest-FloatingIPsAssociationTestJSON-1969273951-project-member</nova:user>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:        <nova:project uuid="c05913e2e5c046bf92e6c8c855833959">tempest-FloatingIPsAssociationTestJSON-1969273951</nova:project>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:        <nova:port uuid="78ad8ff7-ce83-49f8-b130-d439e99c2ba3">
Jan 23 04:45:16 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <entry name="serial">0d38eba2-7627-421b-b5da-45eb53aca667</entry>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <entry name="uuid">0d38eba2-7627-421b-b5da-45eb53aca667</entry>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/0d38eba2-7627-421b-b5da-45eb53aca667_disk">
Jan 23 04:45:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:45:16 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/0d38eba2-7627-421b-b5da-45eb53aca667_disk.config">
Jan 23 04:45:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:45:16 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:a5:9c:88"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <target dev="tap78ad8ff7-ce"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/0d38eba2-7627-421b-b5da-45eb53aca667/console.log" append="off"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:45:16 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:45:16 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:45:16 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:45:16 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.694 250273 DEBUG nova.compute.manager [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Preparing to wait for external event network-vif-plugged-78ad8ff7-ce83-49f8-b130-d439e99c2ba3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.694 250273 DEBUG oslo_concurrency.lockutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Acquiring lock "0d38eba2-7627-421b-b5da-45eb53aca667-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.695 250273 DEBUG oslo_concurrency.lockutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Lock "0d38eba2-7627-421b-b5da-45eb53aca667-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.695 250273 DEBUG oslo_concurrency.lockutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Lock "0d38eba2-7627-421b-b5da-45eb53aca667-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.695 250273 DEBUG nova.virt.libvirt.vif [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:45:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-197087860',display_name='tempest-FloatingIPsAssociationTestJSON-server-197087860',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-197087860',id=56,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c05913e2e5c046bf92e6c8c855833959',ramdisk_id='',reservation_id='r-avbwgdwh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1969273951',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1969273951-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:45:07Z,user_data=None,user_id='1270518e615c4c63a54865bfe906ce5d',uuid=0d38eba2-7627-421b-b5da-45eb53aca667,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "78ad8ff7-ce83-49f8-b130-d439e99c2ba3", "address": "fa:16:3e:a5:9c:88", "network": {"id": "e98627d8-446e-4b60-8051-8e37123acd76", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1442445063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c05913e2e5c046bf92e6c8c855833959", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78ad8ff7-ce", "ovs_interfaceid": "78ad8ff7-ce83-49f8-b130-d439e99c2ba3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.696 250273 DEBUG nova.network.os_vif_util [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Converting VIF {"id": "78ad8ff7-ce83-49f8-b130-d439e99c2ba3", "address": "fa:16:3e:a5:9c:88", "network": {"id": "e98627d8-446e-4b60-8051-8e37123acd76", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1442445063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c05913e2e5c046bf92e6c8c855833959", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78ad8ff7-ce", "ovs_interfaceid": "78ad8ff7-ce83-49f8-b130-d439e99c2ba3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.696 250273 DEBUG nova.network.os_vif_util [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:9c:88,bridge_name='br-int',has_traffic_filtering=True,id=78ad8ff7-ce83-49f8-b130-d439e99c2ba3,network=Network(e98627d8-446e-4b60-8051-8e37123acd76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78ad8ff7-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.697 250273 DEBUG os_vif [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:9c:88,bridge_name='br-int',has_traffic_filtering=True,id=78ad8ff7-ce83-49f8-b130-d439e99c2ba3,network=Network(e98627d8-446e-4b60-8051-8e37123acd76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78ad8ff7-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.697 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.698 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.698 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.702 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.702 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap78ad8ff7-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.702 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap78ad8ff7-ce, col_values=(('external_ids', {'iface-id': '78ad8ff7-ce83-49f8-b130-d439e99c2ba3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:9c:88', 'vm-uuid': '0d38eba2-7627-421b-b5da-45eb53aca667'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.703 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:16 np0005593232 NetworkManager[49057]: <info>  [1769161516.7048] manager: (tap78ad8ff7-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.707 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.711 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.712 250273 INFO os_vif [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:9c:88,bridge_name='br-int',has_traffic_filtering=True,id=78ad8ff7-ce83-49f8-b130-d439e99c2ba3,network=Network(e98627d8-446e-4b60-8051-8e37123acd76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78ad8ff7-ce')#033[00m
Jan 23 04:45:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:16.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.783 250273 DEBUG nova.virt.libvirt.driver [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.784 250273 DEBUG nova.virt.libvirt.driver [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.784 250273 DEBUG nova.virt.libvirt.driver [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] No VIF found with MAC fa:16:3e:a5:9c:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.785 250273 INFO nova.virt.libvirt.driver [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Using config drive#033[00m
Jan 23 04:45:16 np0005593232 nova_compute[250269]: 2026-01-23 09:45:16.818 250273 DEBUG nova.storage.rbd_utils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] rbd image 0d38eba2-7627-421b-b5da-45eb53aca667_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:45:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:45:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:16.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:45:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:45:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/835657075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:45:17 np0005593232 nova_compute[250269]: 2026-01-23 09:45:17.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:45:17 np0005593232 nova_compute[250269]: 2026-01-23 09:45:17.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:45:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1570: 321 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 310 active+clean; 280 MiB data, 647 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 10 MiB/s wr, 245 op/s
Jan 23 04:45:17 np0005593232 nova_compute[250269]: 2026-01-23 09:45:17.393 250273 INFO nova.virt.libvirt.driver [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Creating config drive at /var/lib/nova/instances/0d38eba2-7627-421b-b5da-45eb53aca667/disk.config#033[00m
Jan 23 04:45:17 np0005593232 nova_compute[250269]: 2026-01-23 09:45:17.400 250273 DEBUG oslo_concurrency.processutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0d38eba2-7627-421b-b5da-45eb53aca667/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4vwole1v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:45:17 np0005593232 nova_compute[250269]: 2026-01-23 09:45:17.541 250273 DEBUG oslo_concurrency.processutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0d38eba2-7627-421b-b5da-45eb53aca667/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4vwole1v" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:45:17 np0005593232 nova_compute[250269]: 2026-01-23 09:45:17.572 250273 DEBUG nova.storage.rbd_utils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] rbd image 0d38eba2-7627-421b-b5da-45eb53aca667_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:45:17 np0005593232 nova_compute[250269]: 2026-01-23 09:45:17.575 250273 DEBUG oslo_concurrency.processutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0d38eba2-7627-421b-b5da-45eb53aca667/disk.config 0d38eba2-7627-421b-b5da-45eb53aca667_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:45:17 np0005593232 nova_compute[250269]: 2026-01-23 09:45:17.779 250273 DEBUG oslo_concurrency.processutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0d38eba2-7627-421b-b5da-45eb53aca667/disk.config 0d38eba2-7627-421b-b5da-45eb53aca667_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:45:17 np0005593232 nova_compute[250269]: 2026-01-23 09:45:17.779 250273 INFO nova.virt.libvirt.driver [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Deleting local config drive /var/lib/nova/instances/0d38eba2-7627-421b-b5da-45eb53aca667/disk.config because it was imported into RBD.#033[00m
Jan 23 04:45:17 np0005593232 kernel: tap78ad8ff7-ce: entered promiscuous mode
Jan 23 04:45:17 np0005593232 NetworkManager[49057]: <info>  [1769161517.8310] manager: (tap78ad8ff7-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/83)
Jan 23 04:45:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:45:17Z|00167|binding|INFO|Claiming lport 78ad8ff7-ce83-49f8-b130-d439e99c2ba3 for this chassis.
Jan 23 04:45:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:45:17Z|00168|binding|INFO|78ad8ff7-ce83-49f8-b130-d439e99c2ba3: Claiming fa:16:3e:a5:9c:88 10.100.0.8
Jan 23 04:45:17 np0005593232 nova_compute[250269]: 2026-01-23 09:45:17.831 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:17.861 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:9c:88 10.100.0.8'], port_security=['fa:16:3e:a5:9c:88 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0d38eba2-7627-421b-b5da-45eb53aca667', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e98627d8-446e-4b60-8051-8e37123acd76', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c05913e2e5c046bf92e6c8c855833959', 'neutron:revision_number': '2', 'neutron:security_group_ids': '151c9987-f620-4133-b1a2-fc22782378bf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8711ac9b-6a17-48e3-a5cd-eace160ad350, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=78ad8ff7-ce83-49f8-b130-d439e99c2ba3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:45:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:17.862 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 78ad8ff7-ce83-49f8-b130-d439e99c2ba3 in datapath e98627d8-446e-4b60-8051-8e37123acd76 bound to our chassis#033[00m
Jan 23 04:45:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:17.863 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e98627d8-446e-4b60-8051-8e37123acd76#033[00m
Jan 23 04:45:17 np0005593232 systemd-machined[215836]: New machine qemu-22-instance-00000038.
Jan 23 04:45:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:17.875 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[be71863e-cf80-4870-a320-8084443b8dc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:45:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:17.876 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape98627d8-41 in ovnmeta-e98627d8-446e-4b60-8051-8e37123acd76 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:45:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:17.879 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape98627d8-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:45:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:17.879 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c931e3a7-86f2-48b7-b4bf-76b18a3ac7b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:45:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:17.879 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8df4e8f7-e36b-4421-8da9-6ae51f9bee4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:45:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:17.898 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[fded3255-2fb9-4de2-ac3a-c6f662885cef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:45:17 np0005593232 nova_compute[250269]: 2026-01-23 09:45:17.907 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:45:17Z|00169|binding|INFO|Setting lport 78ad8ff7-ce83-49f8-b130-d439e99c2ba3 ovn-installed in OVS
Jan 23 04:45:17 np0005593232 systemd[1]: Started Virtual Machine qemu-22-instance-00000038.
Jan 23 04:45:17 np0005593232 ovn_controller[151001]: 2026-01-23T09:45:17Z|00170|binding|INFO|Setting lport 78ad8ff7-ce83-49f8-b130-d439e99c2ba3 up in Southbound
Jan 23 04:45:17 np0005593232 nova_compute[250269]: 2026-01-23 09:45:17.913 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:17 np0005593232 systemd-udevd[287254]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:45:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:17.922 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[fdfc1b34-c30d-4c73-bf5a-2288fdf1b153]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:45:17 np0005593232 NetworkManager[49057]: <info>  [1769161517.9373] device (tap78ad8ff7-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:45:17 np0005593232 NetworkManager[49057]: <info>  [1769161517.9386] device (tap78ad8ff7-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:45:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:17.956 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[5c1cb02a-a053-4aa3-88c4-82a83f407b28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:45:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:17.960 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c0a11e8f-a0dc-417f-ba4b-69cb10ec2a38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:45:17 np0005593232 NetworkManager[49057]: <info>  [1769161517.9620] manager: (tape98627d8-40): new Veth device (/org/freedesktop/NetworkManager/Devices/84)
Jan 23 04:45:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:17.991 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[eb6e1ed1-120d-4189-a0e3-c118357be88f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:45:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:17.995 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[575ffa2d-1857-44cb-b650-af3e3bdc8565]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:45:18 np0005593232 NetworkManager[49057]: <info>  [1769161518.0134] device (tape98627d8-40): carrier: link connected
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:18.017 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[0cc984d7-843e-4ce6-a07c-314e6c76b1f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:18.030 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a36021c0-dad0-4879-8a14-11767c19b15f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape98627d8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:9a:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543115, 'reachable_time': 35552, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287284, 'error': None, 'target': 'ovnmeta-e98627d8-446e-4b60-8051-8e37123acd76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:18.041 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[397ac772-2fe6-4c53-8817-44c07f7f1769]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:9a62'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543115, 'tstamp': 543115}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287285, 'error': None, 'target': 'ovnmeta-e98627d8-446e-4b60-8051-8e37123acd76', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:18.053 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f07e0d4d-5e75-4a27-b712-4cf0165a99b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape98627d8-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:9a:62'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543115, 'reachable_time': 35552, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287286, 'error': None, 'target': 'ovnmeta-e98627d8-446e-4b60-8051-8e37123acd76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:18.074 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[43e6486b-a3dd-46f4-b868-b5f16c76504f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:18.116 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[292a0bad-57e3-4945-8fa6-a542d18556bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:18.121 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape98627d8-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:18.121 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:18.122 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape98627d8-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:45:18 np0005593232 NetworkManager[49057]: <info>  [1769161518.1244] manager: (tape98627d8-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Jan 23 04:45:18 np0005593232 kernel: tape98627d8-40: entered promiscuous mode
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.123 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:18.127 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape98627d8-40, col_values=(('external_ids', {'iface-id': 'cc7bf29e-8ca5-4432-828c-b26d34e969d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.128 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:18 np0005593232 ovn_controller[151001]: 2026-01-23T09:45:18Z|00171|binding|INFO|Releasing lport cc7bf29e-8ca5-4432-828c-b26d34e969d3 from this chassis (sb_readonly=0)
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.144 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:18.145 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e98627d8-446e-4b60-8051-8e37123acd76.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e98627d8-446e-4b60-8051-8e37123acd76.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:18.145 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e2e0f514-64a6-4db9-91d6-4e1c5bc60481]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:18.146 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-e98627d8-446e-4b60-8051-8e37123acd76
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/e98627d8-446e-4b60-8051-8e37123acd76.pid.haproxy
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID e98627d8-446e-4b60-8051-8e37123acd76
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:45:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:18.147 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e98627d8-446e-4b60-8051-8e37123acd76', 'env', 'PROCESS_TAG=haproxy-e98627d8-446e-4b60-8051-8e37123acd76', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e98627d8-446e-4b60-8051-8e37123acd76.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.357 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161518.3562245, 0d38eba2-7627-421b-b5da-45eb53aca667 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.357 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] VM Started (Lifecycle Event)#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.393 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.398 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161518.3564854, 0d38eba2-7627-421b-b5da-45eb53aca667 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.399 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.434 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.437 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.462 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.490 250273 DEBUG nova.compute.manager [req-45d9e5bb-1ae3-4fd5-8dfb-12d3d896fda0 req-45d08842-cad6-48ee-8afb-88a2c253ff73 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Received event network-vif-plugged-78ad8ff7-ce83-49f8-b130-d439e99c2ba3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.490 250273 DEBUG oslo_concurrency.lockutils [req-45d9e5bb-1ae3-4fd5-8dfb-12d3d896fda0 req-45d08842-cad6-48ee-8afb-88a2c253ff73 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "0d38eba2-7627-421b-b5da-45eb53aca667-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.490 250273 DEBUG oslo_concurrency.lockutils [req-45d9e5bb-1ae3-4fd5-8dfb-12d3d896fda0 req-45d08842-cad6-48ee-8afb-88a2c253ff73 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "0d38eba2-7627-421b-b5da-45eb53aca667-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.491 250273 DEBUG oslo_concurrency.lockutils [req-45d9e5bb-1ae3-4fd5-8dfb-12d3d896fda0 req-45d08842-cad6-48ee-8afb-88a2c253ff73 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "0d38eba2-7627-421b-b5da-45eb53aca667-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.491 250273 DEBUG nova.compute.manager [req-45d9e5bb-1ae3-4fd5-8dfb-12d3d896fda0 req-45d08842-cad6-48ee-8afb-88a2c253ff73 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Processing event network-vif-plugged-78ad8ff7-ce83-49f8-b130-d439e99c2ba3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.491 250273 DEBUG nova.compute.manager [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.495 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161518.494892, 0d38eba2-7627-421b-b5da-45eb53aca667 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.495 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.497 250273 DEBUG nova.virt.libvirt.driver [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.499 250273 INFO nova.virt.libvirt.driver [-] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Instance spawned successfully.#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.499 250273 DEBUG nova.virt.libvirt.driver [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:45:18 np0005593232 podman[287358]: 2026-01-23 09:45:18.520320964 +0000 UTC m=+0.054576733 container create 8defa694aaceb7e1cba35648b58c673e2bb2a9191313cfe71a972beb41f8468a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e98627d8-446e-4b60-8051-8e37123acd76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.524 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.530 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.533 250273 DEBUG nova.virt.libvirt.driver [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.533 250273 DEBUG nova.virt.libvirt.driver [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.533 250273 DEBUG nova.virt.libvirt.driver [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.534 250273 DEBUG nova.virt.libvirt.driver [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.534 250273 DEBUG nova.virt.libvirt.driver [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.534 250273 DEBUG nova.virt.libvirt.driver [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:45:18 np0005593232 systemd[1]: Started libpod-conmon-8defa694aaceb7e1cba35648b58c673e2bb2a9191313cfe71a972beb41f8468a.scope.
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.567 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:45:18 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:45:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/625e70fc0d18585587081b0471695b151e06d47843da8cfaff20cfab9a9766d1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:45:18 np0005593232 podman[287358]: 2026-01-23 09:45:18.493656936 +0000 UTC m=+0.027912725 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:45:18 np0005593232 podman[287358]: 2026-01-23 09:45:18.595504787 +0000 UTC m=+0.129760576 container init 8defa694aaceb7e1cba35648b58c673e2bb2a9191313cfe71a972beb41f8468a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e98627d8-446e-4b60-8051-8e37123acd76, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 23 04:45:18 np0005593232 podman[287358]: 2026-01-23 09:45:18.602503561 +0000 UTC m=+0.136759330 container start 8defa694aaceb7e1cba35648b58c673e2bb2a9191313cfe71a972beb41f8468a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e98627d8-446e-4b60-8051-8e37123acd76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:45:18 np0005593232 neutron-haproxy-ovnmeta-e98627d8-446e-4b60-8051-8e37123acd76[287374]: [NOTICE]   (287378) : New worker (287380) forked
Jan 23 04:45:18 np0005593232 neutron-haproxy-ovnmeta-e98627d8-446e-4b60-8051-8e37123acd76[287374]: [NOTICE]   (287378) : Loading success.
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.662 250273 DEBUG nova.network.neutron [req-43d6a590-68f8-4508-92c0-4f073ee71be7 req-fa3b362f-82e8-4e92-80fa-7b7caeb1a889 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Updated VIF entry in instance network info cache for port 78ad8ff7-ce83-49f8-b130-d439e99c2ba3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.662 250273 DEBUG nova.network.neutron [req-43d6a590-68f8-4508-92c0-4f073ee71be7 req-fa3b362f-82e8-4e92-80fa-7b7caeb1a889 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Updating instance_info_cache with network_info: [{"id": "78ad8ff7-ce83-49f8-b130-d439e99c2ba3", "address": "fa:16:3e:a5:9c:88", "network": {"id": "e98627d8-446e-4b60-8051-8e37123acd76", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1442445063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c05913e2e5c046bf92e6c8c855833959", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78ad8ff7-ce", "ovs_interfaceid": "78ad8ff7-ce83-49f8-b130-d439e99c2ba3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:45:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:18.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:45:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:18.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.911 250273 DEBUG oslo_concurrency.lockutils [req-43d6a590-68f8-4508-92c0-4f073ee71be7 req-fa3b362f-82e8-4e92-80fa-7b7caeb1a889 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-0d38eba2-7627-421b-b5da-45eb53aca667" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.958 250273 INFO nova.compute.manager [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Took 11.25 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:45:18 np0005593232 nova_compute[250269]: 2026-01-23 09:45:18.959 250273 DEBUG nova.compute.manager [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:45:19 np0005593232 nova_compute[250269]: 2026-01-23 09:45:19.075 250273 INFO nova.compute.manager [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Took 12.48 seconds to build instance.#033[00m
Jan 23 04:45:19 np0005593232 nova_compute[250269]: 2026-01-23 09:45:19.107 250273 DEBUG oslo_concurrency.lockutils [None req-80da3c87-7b3f-44cb-908f-2962629c8822 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Lock "0d38eba2-7627-421b-b5da-45eb53aca667" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:45:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:45:19 np0005593232 nova_compute[250269]: 2026-01-23 09:45:19.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:45:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1571: 321 pgs: 321 active+clean; 292 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 7.8 MiB/s wr, 208 op/s
Jan 23 04:45:19 np0005593232 nova_compute[250269]: 2026-01-23 09:45:19.630 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Jan 23 04:45:20 np0005593232 podman[287390]: 2026-01-23 09:45:20.41172572 +0000 UTC m=+0.066437548 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 04:45:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Jan 23 04:45:20 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Jan 23 04:45:20 np0005593232 nova_compute[250269]: 2026-01-23 09:45:20.615 250273 DEBUG nova.compute.manager [req-6255d100-c77e-48d8-8383-1983f373e381 req-22518af4-72e6-4b53-a7b6-513b3672b886 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Received event network-vif-plugged-78ad8ff7-ce83-49f8-b130-d439e99c2ba3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:45:20 np0005593232 nova_compute[250269]: 2026-01-23 09:45:20.615 250273 DEBUG oslo_concurrency.lockutils [req-6255d100-c77e-48d8-8383-1983f373e381 req-22518af4-72e6-4b53-a7b6-513b3672b886 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "0d38eba2-7627-421b-b5da-45eb53aca667-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:45:20 np0005593232 nova_compute[250269]: 2026-01-23 09:45:20.615 250273 DEBUG oslo_concurrency.lockutils [req-6255d100-c77e-48d8-8383-1983f373e381 req-22518af4-72e6-4b53-a7b6-513b3672b886 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "0d38eba2-7627-421b-b5da-45eb53aca667-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:45:20 np0005593232 nova_compute[250269]: 2026-01-23 09:45:20.615 250273 DEBUG oslo_concurrency.lockutils [req-6255d100-c77e-48d8-8383-1983f373e381 req-22518af4-72e6-4b53-a7b6-513b3672b886 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "0d38eba2-7627-421b-b5da-45eb53aca667-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:45:20 np0005593232 nova_compute[250269]: 2026-01-23 09:45:20.616 250273 DEBUG nova.compute.manager [req-6255d100-c77e-48d8-8383-1983f373e381 req-22518af4-72e6-4b53-a7b6-513b3672b886 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] No waiting events found dispatching network-vif-plugged-78ad8ff7-ce83-49f8-b130-d439e99c2ba3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:45:20 np0005593232 nova_compute[250269]: 2026-01-23 09:45:20.616 250273 WARNING nova.compute.manager [req-6255d100-c77e-48d8-8383-1983f373e381 req-22518af4-72e6-4b53-a7b6-513b3672b886 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Received unexpected event network-vif-plugged-78ad8ff7-ce83-49f8-b130-d439e99c2ba3 for instance with vm_state active and task_state None.#033[00m
Jan 23 04:45:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:20.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:20.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:21 np0005593232 nova_compute[250269]: 2026-01-23 09:45:21.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:45:21 np0005593232 nova_compute[250269]: 2026-01-23 09:45:21.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:45:21 np0005593232 nova_compute[250269]: 2026-01-23 09:45:21.323 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:45:21 np0005593232 nova_compute[250269]: 2026-01-23 09:45:21.323 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:45:21 np0005593232 nova_compute[250269]: 2026-01-23 09:45:21.323 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:45:21 np0005593232 nova_compute[250269]: 2026-01-23 09:45:21.324 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:45:21 np0005593232 nova_compute[250269]: 2026-01-23 09:45:21.324 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:45:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1573: 321 pgs: 321 active+clean; 292 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.9 MiB/s wr, 117 op/s
Jan 23 04:45:21 np0005593232 nova_compute[250269]: 2026-01-23 09:45:21.706 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:21 np0005593232 nova_compute[250269]: 2026-01-23 09:45:21.768 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:45:21 np0005593232 nova_compute[250269]: 2026-01-23 09:45:21.892 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000038 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:45:21 np0005593232 nova_compute[250269]: 2026-01-23 09:45:21.892 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000038 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:45:22 np0005593232 nova_compute[250269]: 2026-01-23 09:45:22.043 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:45:22 np0005593232 nova_compute[250269]: 2026-01-23 09:45:22.044 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4442MB free_disk=20.876697540283203GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:45:22 np0005593232 nova_compute[250269]: 2026-01-23 09:45:22.044 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:45:22 np0005593232 nova_compute[250269]: 2026-01-23 09:45:22.044 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:45:22 np0005593232 nova_compute[250269]: 2026-01-23 09:45:22.263 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 0d38eba2-7627-421b-b5da-45eb53aca667 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:45:22 np0005593232 nova_compute[250269]: 2026-01-23 09:45:22.264 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:45:22 np0005593232 nova_compute[250269]: 2026-01-23 09:45:22.264 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:45:22 np0005593232 nova_compute[250269]: 2026-01-23 09:45:22.386 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:45:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:22.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:45:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2562926624' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:45:22 np0005593232 nova_compute[250269]: 2026-01-23 09:45:22.832 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:45:22 np0005593232 nova_compute[250269]: 2026-01-23 09:45:22.840 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:45:22 np0005593232 nova_compute[250269]: 2026-01-23 09:45:22.881 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:45:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:22.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:22 np0005593232 nova_compute[250269]: 2026-01-23 09:45:22.940 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:45:22 np0005593232 nova_compute[250269]: 2026-01-23 09:45:22.941 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.896s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:45:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1574: 321 pgs: 321 active+clean; 246 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 4.1 MiB/s wr, 215 op/s
Jan 23 04:45:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:45:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Jan 23 04:45:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Jan 23 04:45:24 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Jan 23 04:45:24 np0005593232 nova_compute[250269]: 2026-01-23 09:45:24.632 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:24.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:45:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:24.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:45:25 np0005593232 NetworkManager[49057]: <info>  [1769161525.0121] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Jan 23 04:45:25 np0005593232 NetworkManager[49057]: <info>  [1769161525.0133] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Jan 23 04:45:25 np0005593232 nova_compute[250269]: 2026-01-23 09:45:25.019 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:25 np0005593232 nova_compute[250269]: 2026-01-23 09:45:25.138 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:25 np0005593232 ovn_controller[151001]: 2026-01-23T09:45:25Z|00172|binding|INFO|Releasing lport cc7bf29e-8ca5-4432-828c-b26d34e969d3 from this chassis (sb_readonly=0)
Jan 23 04:45:25 np0005593232 nova_compute[250269]: 2026-01-23 09:45:25.155 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1576: 321 pgs: 321 active+clean; 246 MiB data, 619 MiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.1 MiB/s wr, 198 op/s
Jan 23 04:45:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Jan 23 04:45:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Jan 23 04:45:25 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Jan 23 04:45:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Jan 23 04:45:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Jan 23 04:45:26 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Jan 23 04:45:26 np0005593232 nova_compute[250269]: 2026-01-23 09:45:26.709 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:26.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:26.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1579: 321 pgs: 321 active+clean; 318 MiB data, 680 MiB used, 20 GiB / 21 GiB avail; 8.6 MiB/s rd, 7.3 MiB/s wr, 247 op/s
Jan 23 04:45:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:28.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:28.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:45:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1580: 321 pgs: 321 active+clean; 325 MiB data, 683 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.7 MiB/s wr, 147 op/s
Jan 23 04:45:29 np0005593232 nova_compute[250269]: 2026-01-23 09:45:29.634 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:45:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:30.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:45:30 np0005593232 nova_compute[250269]: 2026-01-23 09:45:30.831 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:30 np0005593232 ovn_controller[151001]: 2026-01-23T09:45:30Z|00173|binding|INFO|Releasing lport cc7bf29e-8ca5-4432-828c-b26d34e969d3 from this chassis (sb_readonly=0)
Jan 23 04:45:30 np0005593232 nova_compute[250269]: 2026-01-23 09:45:30.869 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:30.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1581: 321 pgs: 321 active+clean; 325 MiB data, 683 MiB used, 20 GiB / 21 GiB avail; 6.5 MiB/s rd, 6.5 MiB/s wr, 123 op/s
Jan 23 04:45:31 np0005593232 nova_compute[250269]: 2026-01-23 09:45:31.713 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:31 np0005593232 ovn_controller[151001]: 2026-01-23T09:45:31Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a5:9c:88 10.100.0.8
Jan 23 04:45:31 np0005593232 ovn_controller[151001]: 2026-01-23T09:45:31Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a5:9c:88 10.100.0.8
Jan 23 04:45:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:45:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:32.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:45:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:32.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1582: 321 pgs: 321 active+clean; 355 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 9.0 MiB/s wr, 218 op/s
Jan 23 04:45:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:45:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Jan 23 04:45:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Jan 23 04:45:34 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Jan 23 04:45:34 np0005593232 nova_compute[250269]: 2026-01-23 09:45:34.679 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:34.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:45:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:34.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:45:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1584: 321 pgs: 321 active+clean; 355 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.5 MiB/s wr, 173 op/s
Jan 23 04:45:36 np0005593232 nova_compute[250269]: 2026-01-23 09:45:36.456 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:36.461 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:45:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:36.463 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:45:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Jan 23 04:45:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Jan 23 04:45:36 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Jan 23 04:45:36 np0005593232 nova_compute[250269]: 2026-01-23 09:45:36.715 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:45:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:36.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:45:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:45:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:36.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:45:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:45:37
Jan 23 04:45:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:45:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:45:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', '.mgr', 'backups', 'default.rgw.control', 'volumes', 'vms', 'images']
Jan 23 04:45:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:45:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1586: 321 pgs: 321 active+clean; 358 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 510 KiB/s rd, 3.2 MiB/s wr, 126 op/s
Jan 23 04:45:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:45:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:45:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:45:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:45:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:45:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:45:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:45:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:45:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:45:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:45:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:45:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:45:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:45:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:45:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:45:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:45:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:38.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:38.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:39 np0005593232 nova_compute[250269]: 2026-01-23 09:45:39.000 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:45:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1587: 321 pgs: 321 active+clean; 351 MiB data, 723 MiB used, 20 GiB / 21 GiB avail; 514 KiB/s rd, 3.2 MiB/s wr, 133 op/s
Jan 23 04:45:39 np0005593232 nova_compute[250269]: 2026-01-23 09:45:39.683 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:45:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:40.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:45:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:45:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:40.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:45:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1588: 321 pgs: 321 active+clean; 351 MiB data, 723 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 29 KiB/s wr, 25 op/s
Jan 23 04:45:41 np0005593232 nova_compute[250269]: 2026-01-23 09:45:41.718 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:41 np0005593232 nova_compute[250269]: 2026-01-23 09:45:41.860 250273 DEBUG nova.compute.manager [req-e17aa3ae-5ef9-4625-8c12-c9d2cc8c8e8f req-7e216ca2-ae79-43bf-b600-e64a8c7520b4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Received event network-changed-78ad8ff7-ce83-49f8-b130-d439e99c2ba3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:45:41 np0005593232 nova_compute[250269]: 2026-01-23 09:45:41.861 250273 DEBUG nova.compute.manager [req-e17aa3ae-5ef9-4625-8c12-c9d2cc8c8e8f req-7e216ca2-ae79-43bf-b600-e64a8c7520b4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Refreshing instance network info cache due to event network-changed-78ad8ff7-ce83-49f8-b130-d439e99c2ba3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:45:41 np0005593232 nova_compute[250269]: 2026-01-23 09:45:41.861 250273 DEBUG oslo_concurrency.lockutils [req-e17aa3ae-5ef9-4625-8c12-c9d2cc8c8e8f req-7e216ca2-ae79-43bf-b600-e64a8c7520b4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-0d38eba2-7627-421b-b5da-45eb53aca667" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:45:41 np0005593232 nova_compute[250269]: 2026-01-23 09:45:41.861 250273 DEBUG oslo_concurrency.lockutils [req-e17aa3ae-5ef9-4625-8c12-c9d2cc8c8e8f req-7e216ca2-ae79-43bf-b600-e64a8c7520b4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-0d38eba2-7627-421b-b5da-45eb53aca667" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:45:41 np0005593232 nova_compute[250269]: 2026-01-23 09:45:41.861 250273 DEBUG nova.network.neutron [req-e17aa3ae-5ef9-4625-8c12-c9d2cc8c8e8f req-7e216ca2-ae79-43bf-b600-e64a8c7520b4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Refreshing network info cache for port 78ad8ff7-ce83-49f8-b130-d439e99c2ba3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:45:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:42.465 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:45:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:42.599 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:45:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:42.600 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:45:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:42.601 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:45:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:42.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:42.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1589: 321 pgs: 321 active+clean; 200 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 27 KiB/s wr, 74 op/s
Jan 23 04:45:43 np0005593232 podman[287518]: 2026-01-23 09:45:43.440246554 +0000 UTC m=+0.094431726 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 23 04:45:44 np0005593232 nova_compute[250269]: 2026-01-23 09:45:44.049 250273 DEBUG nova.network.neutron [req-e17aa3ae-5ef9-4625-8c12-c9d2cc8c8e8f req-7e216ca2-ae79-43bf-b600-e64a8c7520b4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Updated VIF entry in instance network info cache for port 78ad8ff7-ce83-49f8-b130-d439e99c2ba3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:45:44 np0005593232 nova_compute[250269]: 2026-01-23 09:45:44.050 250273 DEBUG nova.network.neutron [req-e17aa3ae-5ef9-4625-8c12-c9d2cc8c8e8f req-7e216ca2-ae79-43bf-b600-e64a8c7520b4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Updating instance_info_cache with network_info: [{"id": "78ad8ff7-ce83-49f8-b130-d439e99c2ba3", "address": "fa:16:3e:a5:9c:88", "network": {"id": "e98627d8-446e-4b60-8051-8e37123acd76", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1442445063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c05913e2e5c046bf92e6c8c855833959", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78ad8ff7-ce", "ovs_interfaceid": "78ad8ff7-ce83-49f8-b130-d439e99c2ba3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:45:44 np0005593232 nova_compute[250269]: 2026-01-23 09:45:44.104 250273 DEBUG oslo_concurrency.lockutils [req-e17aa3ae-5ef9-4625-8c12-c9d2cc8c8e8f req-7e216ca2-ae79-43bf-b600-e64a8c7520b4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-0d38eba2-7627-421b-b5da-45eb53aca667" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:45:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:45:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Jan 23 04:45:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Jan 23 04:45:44 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Jan 23 04:45:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 04:45:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3017684075' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 04:45:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 04:45:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3017684075' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 04:45:44 np0005593232 nova_compute[250269]: 2026-01-23 09:45:44.685 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:45:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:44.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:45:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:44.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1591: 321 pgs: 321 active+clean; 200 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 22 KiB/s wr, 63 op/s
Jan 23 04:45:46 np0005593232 nova_compute[250269]: 2026-01-23 09:45:46.722 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:45:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:45:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:45:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:45:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004334461660152615 of space, bias 1.0, pg target 1.3003384980457846 quantized to 32 (current 32)
Jan 23 04:45:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:45:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 23 04:45:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:45:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:45:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:45:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 23 04:45:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:45:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 04:45:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:45:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:45:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:45:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 04:45:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:45:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 04:45:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:45:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:45:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:45:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 04:45:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:46.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:46.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1592: 321 pgs: 321 active+clean; 200 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 8.3 KiB/s wr, 53 op/s
Jan 23 04:45:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:48.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:48.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:45:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1593: 321 pgs: 321 active+clean; 200 MiB data, 633 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 6.2 KiB/s wr, 47 op/s
Jan 23 04:45:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Jan 23 04:45:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Jan 23 04:45:49 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Jan 23 04:45:49 np0005593232 nova_compute[250269]: 2026-01-23 09:45:49.686 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:50.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:50.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1595: 321 pgs: 321 active+clean; 200 MiB data, 633 MiB used, 20 GiB / 21 GiB avail; 5.5 KiB/s wr, 0 op/s
Jan 23 04:45:51 np0005593232 podman[287549]: 2026-01-23 09:45:51.437866251 +0000 UTC m=+0.079547502 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 04:45:51 np0005593232 ovn_controller[151001]: 2026-01-23T09:45:51Z|00174|binding|INFO|Releasing lport cc7bf29e-8ca5-4432-828c-b26d34e969d3 from this chassis (sb_readonly=0)
Jan 23 04:45:51 np0005593232 nova_compute[250269]: 2026-01-23 09:45:51.495 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:51 np0005593232 ovn_controller[151001]: 2026-01-23T09:45:51Z|00175|binding|INFO|Releasing lport cc7bf29e-8ca5-4432-828c-b26d34e969d3 from this chassis (sb_readonly=0)
Jan 23 04:45:51 np0005593232 nova_compute[250269]: 2026-01-23 09:45:51.623 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:51 np0005593232 nova_compute[250269]: 2026-01-23 09:45:51.724 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:51 np0005593232 nova_compute[250269]: 2026-01-23 09:45:51.774 250273 DEBUG nova.compute.manager [req-81e1b361-db2e-4062-ab6f-ec471c929493 req-9ecb0997-05ac-48ae-90ed-de277dc69489 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Received event network-changed-78ad8ff7-ce83-49f8-b130-d439e99c2ba3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:45:51 np0005593232 nova_compute[250269]: 2026-01-23 09:45:51.775 250273 DEBUG nova.compute.manager [req-81e1b361-db2e-4062-ab6f-ec471c929493 req-9ecb0997-05ac-48ae-90ed-de277dc69489 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Refreshing instance network info cache due to event network-changed-78ad8ff7-ce83-49f8-b130-d439e99c2ba3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:45:51 np0005593232 nova_compute[250269]: 2026-01-23 09:45:51.775 250273 DEBUG oslo_concurrency.lockutils [req-81e1b361-db2e-4062-ab6f-ec471c929493 req-9ecb0997-05ac-48ae-90ed-de277dc69489 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-0d38eba2-7627-421b-b5da-45eb53aca667" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:45:51 np0005593232 nova_compute[250269]: 2026-01-23 09:45:51.775 250273 DEBUG oslo_concurrency.lockutils [req-81e1b361-db2e-4062-ab6f-ec471c929493 req-9ecb0997-05ac-48ae-90ed-de277dc69489 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-0d38eba2-7627-421b-b5da-45eb53aca667" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:45:51 np0005593232 nova_compute[250269]: 2026-01-23 09:45:51.775 250273 DEBUG nova.network.neutron [req-81e1b361-db2e-4062-ab6f-ec471c929493 req-9ecb0997-05ac-48ae-90ed-de277dc69489 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Refreshing network info cache for port 78ad8ff7-ce83-49f8-b130-d439e99c2ba3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:45:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:45:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:52.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:45:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:52.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1596: 321 pgs: 321 active+clean; 219 MiB data, 633 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 757 KiB/s wr, 48 op/s
Jan 23 04:45:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Jan 23 04:45:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Jan 23 04:45:53 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Jan 23 04:45:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:45:54 np0005593232 nova_compute[250269]: 2026-01-23 09:45:54.687 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:45:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:54.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:45:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:54.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:55 np0005593232 nova_compute[250269]: 2026-01-23 09:45:55.313 250273 DEBUG nova.network.neutron [req-81e1b361-db2e-4062-ab6f-ec471c929493 req-9ecb0997-05ac-48ae-90ed-de277dc69489 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Updated VIF entry in instance network info cache for port 78ad8ff7-ce83-49f8-b130-d439e99c2ba3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:45:55 np0005593232 nova_compute[250269]: 2026-01-23 09:45:55.314 250273 DEBUG nova.network.neutron [req-81e1b361-db2e-4062-ab6f-ec471c929493 req-9ecb0997-05ac-48ae-90ed-de277dc69489 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Updating instance_info_cache with network_info: [{"id": "78ad8ff7-ce83-49f8-b130-d439e99c2ba3", "address": "fa:16:3e:a5:9c:88", "network": {"id": "e98627d8-446e-4b60-8051-8e37123acd76", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1442445063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c05913e2e5c046bf92e6c8c855833959", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78ad8ff7-ce", "ovs_interfaceid": "78ad8ff7-ce83-49f8-b130-d439e99c2ba3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:45:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1598: 321 pgs: 321 active+clean; 219 MiB data, 633 MiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 862 KiB/s wr, 55 op/s
Jan 23 04:45:56 np0005593232 nova_compute[250269]: 2026-01-23 09:45:56.590 250273 DEBUG oslo_concurrency.lockutils [req-81e1b361-db2e-4062-ab6f-ec471c929493 req-9ecb0997-05ac-48ae-90ed-de277dc69489 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-0d38eba2-7627-421b-b5da-45eb53aca667" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:45:56 np0005593232 nova_compute[250269]: 2026-01-23 09:45:56.727 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:56.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:56.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.236 250273 DEBUG oslo_concurrency.lockutils [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Acquiring lock "0d38eba2-7627-421b-b5da-45eb53aca667" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.237 250273 DEBUG oslo_concurrency.lockutils [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Lock "0d38eba2-7627-421b-b5da-45eb53aca667" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.238 250273 DEBUG oslo_concurrency.lockutils [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Acquiring lock "0d38eba2-7627-421b-b5da-45eb53aca667-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.238 250273 DEBUG oslo_concurrency.lockutils [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Lock "0d38eba2-7627-421b-b5da-45eb53aca667-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.238 250273 DEBUG oslo_concurrency.lockutils [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Lock "0d38eba2-7627-421b-b5da-45eb53aca667-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.240 250273 INFO nova.compute.manager [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Terminating instance#033[00m
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.240 250273 DEBUG nova.compute.manager [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 04:45:57 np0005593232 kernel: tap78ad8ff7-ce (unregistering): left promiscuous mode
Jan 23 04:45:57 np0005593232 NetworkManager[49057]: <info>  [1769161557.3007] device (tap78ad8ff7-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.307 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:57 np0005593232 ovn_controller[151001]: 2026-01-23T09:45:57Z|00176|binding|INFO|Releasing lport 78ad8ff7-ce83-49f8-b130-d439e99c2ba3 from this chassis (sb_readonly=0)
Jan 23 04:45:57 np0005593232 ovn_controller[151001]: 2026-01-23T09:45:57Z|00177|binding|INFO|Setting lport 78ad8ff7-ce83-49f8-b130-d439e99c2ba3 down in Southbound
Jan 23 04:45:57 np0005593232 ovn_controller[151001]: 2026-01-23T09:45:57Z|00178|binding|INFO|Removing iface tap78ad8ff7-ce ovn-installed in OVS
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.310 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:57.317 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:9c:88 10.100.0.8'], port_security=['fa:16:3e:a5:9c:88 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0d38eba2-7627-421b-b5da-45eb53aca667', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e98627d8-446e-4b60-8051-8e37123acd76', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c05913e2e5c046bf92e6c8c855833959', 'neutron:revision_number': '4', 'neutron:security_group_ids': '151c9987-f620-4133-b1a2-fc22782378bf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8711ac9b-6a17-48e3-a5cd-eace160ad350, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=78ad8ff7-ce83-49f8-b130-d439e99c2ba3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:45:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:57.318 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 78ad8ff7-ce83-49f8-b130-d439e99c2ba3 in datapath e98627d8-446e-4b60-8051-8e37123acd76 unbound from our chassis#033[00m
Jan 23 04:45:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:57.319 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e98627d8-446e-4b60-8051-8e37123acd76, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 04:45:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:57.321 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[874b6d73-23c5-4fb0-99db-4d8839afd2d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:45:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:57.322 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e98627d8-446e-4b60-8051-8e37123acd76 namespace which is not needed anymore#033[00m
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.329 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1599: 321 pgs: 321 active+clean; 246 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 2.7 MiB/s wr, 79 op/s
Jan 23 04:45:57 np0005593232 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000038.scope: Deactivated successfully.
Jan 23 04:45:57 np0005593232 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000038.scope: Consumed 14.376s CPU time.
Jan 23 04:45:57 np0005593232 systemd-machined[215836]: Machine qemu-22-instance-00000038 terminated.
Jan 23 04:45:57 np0005593232 neutron-haproxy-ovnmeta-e98627d8-446e-4b60-8051-8e37123acd76[287374]: [NOTICE]   (287378) : haproxy version is 2.8.14-c23fe91
Jan 23 04:45:57 np0005593232 neutron-haproxy-ovnmeta-e98627d8-446e-4b60-8051-8e37123acd76[287374]: [NOTICE]   (287378) : path to executable is /usr/sbin/haproxy
Jan 23 04:45:57 np0005593232 neutron-haproxy-ovnmeta-e98627d8-446e-4b60-8051-8e37123acd76[287374]: [WARNING]  (287378) : Exiting Master process...
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.465 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:57 np0005593232 neutron-haproxy-ovnmeta-e98627d8-446e-4b60-8051-8e37123acd76[287374]: [ALERT]    (287378) : Current worker (287380) exited with code 143 (Terminated)
Jan 23 04:45:57 np0005593232 neutron-haproxy-ovnmeta-e98627d8-446e-4b60-8051-8e37123acd76[287374]: [WARNING]  (287378) : All workers exited. Exiting... (0)
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.469 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:57 np0005593232 systemd[1]: libpod-8defa694aaceb7e1cba35648b58c673e2bb2a9191313cfe71a972beb41f8468a.scope: Deactivated successfully.
Jan 23 04:45:57 np0005593232 podman[287766]: 2026-01-23 09:45:57.47374604 +0000 UTC m=+0.059061624 container died 8defa694aaceb7e1cba35648b58c673e2bb2a9191313cfe71a972beb41f8468a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e98627d8-446e-4b60-8051-8e37123acd76, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.486 250273 INFO nova.virt.libvirt.driver [-] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Instance destroyed successfully.#033[00m
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.487 250273 DEBUG nova.objects.instance [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Lazy-loading 'resources' on Instance uuid 0d38eba2-7627-421b-b5da-45eb53aca667 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:45:57 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8defa694aaceb7e1cba35648b58c673e2bb2a9191313cfe71a972beb41f8468a-userdata-shm.mount: Deactivated successfully.
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.515 250273 DEBUG nova.virt.libvirt.vif [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:45:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-197087860',display_name='tempest-FloatingIPsAssociationTestJSON-server-197087860',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-197087860',id=56,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:45:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c05913e2e5c046bf92e6c8c855833959',ramdisk_id='',reservation_id='r-avbwgdwh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1969273951',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1969273951-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:45:19Z,user_data=None,user_id='1270518e615c4c63a54865bfe906ce5d',uuid=0d38eba2-7627-421b-b5da-45eb53aca667,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "78ad8ff7-ce83-49f8-b130-d439e99c2ba3", "address": "fa:16:3e:a5:9c:88", "network": {"id": "e98627d8-446e-4b60-8051-8e37123acd76", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1442445063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c05913e2e5c046bf92e6c8c855833959", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78ad8ff7-ce", "ovs_interfaceid": "78ad8ff7-ce83-49f8-b130-d439e99c2ba3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.517 250273 DEBUG nova.network.os_vif_util [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Converting VIF {"id": "78ad8ff7-ce83-49f8-b130-d439e99c2ba3", "address": "fa:16:3e:a5:9c:88", "network": {"id": "e98627d8-446e-4b60-8051-8e37123acd76", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-1442445063-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c05913e2e5c046bf92e6c8c855833959", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78ad8ff7-ce", "ovs_interfaceid": "78ad8ff7-ce83-49f8-b130-d439e99c2ba3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:45:57 np0005593232 systemd[1]: var-lib-containers-storage-overlay-625e70fc0d18585587081b0471695b151e06d47843da8cfaff20cfab9a9766d1-merged.mount: Deactivated successfully.
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.517 250273 DEBUG nova.network.os_vif_util [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:9c:88,bridge_name='br-int',has_traffic_filtering=True,id=78ad8ff7-ce83-49f8-b130-d439e99c2ba3,network=Network(e98627d8-446e-4b60-8051-8e37123acd76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78ad8ff7-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.518 250273 DEBUG os_vif [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:9c:88,bridge_name='br-int',has_traffic_filtering=True,id=78ad8ff7-ce83-49f8-b130-d439e99c2ba3,network=Network(e98627d8-446e-4b60-8051-8e37123acd76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78ad8ff7-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.520 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.520 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap78ad8ff7-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.522 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.523 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.526 250273 INFO os_vif [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:9c:88,bridge_name='br-int',has_traffic_filtering=True,id=78ad8ff7-ce83-49f8-b130-d439e99c2ba3,network=Network(e98627d8-446e-4b60-8051-8e37123acd76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78ad8ff7-ce')#033[00m
Jan 23 04:45:57 np0005593232 podman[287766]: 2026-01-23 09:45:57.533227264 +0000 UTC m=+0.118542848 container cleanup 8defa694aaceb7e1cba35648b58c673e2bb2a9191313cfe71a972beb41f8468a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e98627d8-446e-4b60-8051-8e37123acd76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 04:45:57 np0005593232 systemd[1]: libpod-conmon-8defa694aaceb7e1cba35648b58c673e2bb2a9191313cfe71a972beb41f8468a.scope: Deactivated successfully.
Jan 23 04:45:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:45:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:45:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:45:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:45:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:45:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:45:57 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1f01a7b7-7684-4932-a9db-3743f1e56c0e does not exist
Jan 23 04:45:57 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ef3793c7-4bf7-4aac-8ff1-344a9b293780 does not exist
Jan 23 04:45:57 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 23bdbce4-840c-429f-87cc-2348140f982b does not exist
Jan 23 04:45:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:45:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:45:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:45:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:45:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:45:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:45:57 np0005593232 podman[287833]: 2026-01-23 09:45:57.611219259 +0000 UTC m=+0.047678001 container remove 8defa694aaceb7e1cba35648b58c673e2bb2a9191313cfe71a972beb41f8468a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e98627d8-446e-4b60-8051-8e37123acd76, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 23 04:45:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:57.617 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb1a373-e4d3-46d8-bfa3-a8d00cec29f4]: (4, ('Fri Jan 23 09:45:57 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e98627d8-446e-4b60-8051-8e37123acd76 (8defa694aaceb7e1cba35648b58c673e2bb2a9191313cfe71a972beb41f8468a)\n8defa694aaceb7e1cba35648b58c673e2bb2a9191313cfe71a972beb41f8468a\nFri Jan 23 09:45:57 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e98627d8-446e-4b60-8051-8e37123acd76 (8defa694aaceb7e1cba35648b58c673e2bb2a9191313cfe71a972beb41f8468a)\n8defa694aaceb7e1cba35648b58c673e2bb2a9191313cfe71a972beb41f8468a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:45:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:57.619 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b42e90e8-4cf5-41e2-8abd-fc279dbc372f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:45:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:57.620 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape98627d8-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:45:57 np0005593232 kernel: tape98627d8-40: left promiscuous mode
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.664 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:57.665 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9e69e82c-468c-4658-98e2-a3657a819d05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.676 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:45:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:57.682 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d31522ae-54ee-4e89-999a-4b05bc6603ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:45:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:57.684 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4616e0ec-6db0-4c08-8604-38cb44a6bcb0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:45:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:57.701 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e8b99f84-7747-4f44-b82b-8a8f701bd253]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543109, 'reachable_time': 17427, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287892, 'error': None, 'target': 'ovnmeta-e98627d8-446e-4b60-8051-8e37123acd76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:45:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:57.704 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e98627d8-446e-4b60-8051-8e37123acd76 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 04:45:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:45:57.705 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[190d95d1-f1ea-40c5-9758-7e97919d0521]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:45:57 np0005593232 systemd[1]: run-netns-ovnmeta\x2de98627d8\x2d446e\x2d4b60\x2d8051\x2d8e37123acd76.mount: Deactivated successfully.
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.787 250273 DEBUG nova.compute.manager [req-8f60f5e6-41c2-42ab-b826-b6470cb9cf73 req-1cf0d580-f392-4be5-a692-f03ce58c56c5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Received event network-vif-unplugged-78ad8ff7-ce83-49f8-b130-d439e99c2ba3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.788 250273 DEBUG oslo_concurrency.lockutils [req-8f60f5e6-41c2-42ab-b826-b6470cb9cf73 req-1cf0d580-f392-4be5-a692-f03ce58c56c5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "0d38eba2-7627-421b-b5da-45eb53aca667-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.789 250273 DEBUG oslo_concurrency.lockutils [req-8f60f5e6-41c2-42ab-b826-b6470cb9cf73 req-1cf0d580-f392-4be5-a692-f03ce58c56c5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "0d38eba2-7627-421b-b5da-45eb53aca667-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.789 250273 DEBUG oslo_concurrency.lockutils [req-8f60f5e6-41c2-42ab-b826-b6470cb9cf73 req-1cf0d580-f392-4be5-a692-f03ce58c56c5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "0d38eba2-7627-421b-b5da-45eb53aca667-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.789 250273 DEBUG nova.compute.manager [req-8f60f5e6-41c2-42ab-b826-b6470cb9cf73 req-1cf0d580-f392-4be5-a692-f03ce58c56c5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] No waiting events found dispatching network-vif-unplugged-78ad8ff7-ce83-49f8-b130-d439e99c2ba3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:45:57 np0005593232 nova_compute[250269]: 2026-01-23 09:45:57.790 250273 DEBUG nova.compute.manager [req-8f60f5e6-41c2-42ab-b826-b6470cb9cf73 req-1cf0d580-f392-4be5-a692-f03ce58c56c5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Received event network-vif-unplugged-78ad8ff7-ce83-49f8-b130-d439e99c2ba3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 04:45:57 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:45:57 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:45:57 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:45:58 np0005593232 podman[287992]: 2026-01-23 09:45:58.181361509 +0000 UTC m=+0.036370522 container create daa0574a156d64db4a5dc66e73d3382a14eedf5b01e69caef62031d6c16b4517 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_brattain, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 04:45:58 np0005593232 systemd[1]: Started libpod-conmon-daa0574a156d64db4a5dc66e73d3382a14eedf5b01e69caef62031d6c16b4517.scope.
Jan 23 04:45:58 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:45:58 np0005593232 podman[287992]: 2026-01-23 09:45:58.165763724 +0000 UTC m=+0.020772757 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:45:58 np0005593232 podman[287992]: 2026-01-23 09:45:58.263285278 +0000 UTC m=+0.118294301 container init daa0574a156d64db4a5dc66e73d3382a14eedf5b01e69caef62031d6c16b4517 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_brattain, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:45:58 np0005593232 podman[287992]: 2026-01-23 09:45:58.2695266 +0000 UTC m=+0.124535613 container start daa0574a156d64db4a5dc66e73d3382a14eedf5b01e69caef62031d6c16b4517 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:45:58 np0005593232 podman[287992]: 2026-01-23 09:45:58.273080894 +0000 UTC m=+0.128089907 container attach daa0574a156d64db4a5dc66e73d3382a14eedf5b01e69caef62031d6c16b4517 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Jan 23 04:45:58 np0005593232 gallant_brattain[288009]: 167 167
Jan 23 04:45:58 np0005593232 systemd[1]: libpod-daa0574a156d64db4a5dc66e73d3382a14eedf5b01e69caef62031d6c16b4517.scope: Deactivated successfully.
Jan 23 04:45:58 np0005593232 podman[288014]: 2026-01-23 09:45:58.326212314 +0000 UTC m=+0.025692251 container died daa0574a156d64db4a5dc66e73d3382a14eedf5b01e69caef62031d6c16b4517 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_brattain, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:45:58 np0005593232 systemd[1]: var-lib-containers-storage-overlay-0d659fe196d5cf1d5c40f34261ecab2b65d9d5ccf06b4476a7742fe951e57ea8-merged.mount: Deactivated successfully.
Jan 23 04:45:58 np0005593232 podman[288014]: 2026-01-23 09:45:58.362630056 +0000 UTC m=+0.062109963 container remove daa0574a156d64db4a5dc66e73d3382a14eedf5b01e69caef62031d6c16b4517 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:45:58 np0005593232 systemd[1]: libpod-conmon-daa0574a156d64db4a5dc66e73d3382a14eedf5b01e69caef62031d6c16b4517.scope: Deactivated successfully.
Jan 23 04:45:58 np0005593232 nova_compute[250269]: 2026-01-23 09:45:58.376 250273 INFO nova.virt.libvirt.driver [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Deleting instance files /var/lib/nova/instances/0d38eba2-7627-421b-b5da-45eb53aca667_del
Jan 23 04:45:58 np0005593232 nova_compute[250269]: 2026-01-23 09:45:58.377 250273 INFO nova.virt.libvirt.driver [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Deletion of /var/lib/nova/instances/0d38eba2-7627-421b-b5da-45eb53aca667_del complete
Jan 23 04:45:58 np0005593232 podman[288036]: 2026-01-23 09:45:58.529977337 +0000 UTC m=+0.038793482 container create 6f57dd5307f6a117f284fc4f0e0a788712ee6fd67652f6b35e91504220bd79d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 23 04:45:58 np0005593232 systemd[1]: Started libpod-conmon-6f57dd5307f6a117f284fc4f0e0a788712ee6fd67652f6b35e91504220bd79d4.scope.
Jan 23 04:45:58 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:45:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6357a978a4e23f02ac1ff851a50980b9cdd03efa715f8651a45ae9383dcf4641/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:45:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6357a978a4e23f02ac1ff851a50980b9cdd03efa715f8651a45ae9383dcf4641/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:45:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6357a978a4e23f02ac1ff851a50980b9cdd03efa715f8651a45ae9383dcf4641/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:45:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6357a978a4e23f02ac1ff851a50980b9cdd03efa715f8651a45ae9383dcf4641/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:45:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6357a978a4e23f02ac1ff851a50980b9cdd03efa715f8651a45ae9383dcf4641/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:45:58 np0005593232 podman[288036]: 2026-01-23 09:45:58.604712047 +0000 UTC m=+0.113528212 container init 6f57dd5307f6a117f284fc4f0e0a788712ee6fd67652f6b35e91504220bd79d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lumiere, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:45:58 np0005593232 podman[288036]: 2026-01-23 09:45:58.514070233 +0000 UTC m=+0.022886388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:45:58 np0005593232 podman[288036]: 2026-01-23 09:45:58.613283217 +0000 UTC m=+0.122099362 container start 6f57dd5307f6a117f284fc4f0e0a788712ee6fd67652f6b35e91504220bd79d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lumiere, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 04:45:58 np0005593232 podman[288036]: 2026-01-23 09:45:58.617348615 +0000 UTC m=+0.126164770 container attach 6f57dd5307f6a117f284fc4f0e0a788712ee6fd67652f6b35e91504220bd79d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 23 04:45:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Jan 23 04:45:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Jan 23 04:45:58 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Jan 23 04:45:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:45:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:45:58.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:45:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:45:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:45:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:45:58.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:45:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:45:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1601: 321 pgs: 321 active+clean; 246 MiB data, 655 MiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 2.7 MiB/s wr, 81 op/s
Jan 23 04:45:59 np0005593232 nostalgic_lumiere[288053]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:45:59 np0005593232 nostalgic_lumiere[288053]: --> relative data size: 1.0
Jan 23 04:45:59 np0005593232 nostalgic_lumiere[288053]: --> All data devices are unavailable
Jan 23 04:45:59 np0005593232 systemd[1]: libpod-6f57dd5307f6a117f284fc4f0e0a788712ee6fd67652f6b35e91504220bd79d4.scope: Deactivated successfully.
Jan 23 04:45:59 np0005593232 podman[288036]: 2026-01-23 09:45:59.451508445 +0000 UTC m=+0.960324590 container died 6f57dd5307f6a117f284fc4f0e0a788712ee6fd67652f6b35e91504220bd79d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:45:59 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6357a978a4e23f02ac1ff851a50980b9cdd03efa715f8651a45ae9383dcf4641-merged.mount: Deactivated successfully.
Jan 23 04:45:59 np0005593232 podman[288036]: 2026-01-23 09:45:59.506594301 +0000 UTC m=+1.015410446 container remove 6f57dd5307f6a117f284fc4f0e0a788712ee6fd67652f6b35e91504220bd79d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lumiere, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 23 04:45:59 np0005593232 systemd[1]: libpod-conmon-6f57dd5307f6a117f284fc4f0e0a788712ee6fd67652f6b35e91504220bd79d4.scope: Deactivated successfully.
Jan 23 04:45:59 np0005593232 nova_compute[250269]: 2026-01-23 09:45:59.689 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:46:00 np0005593232 podman[288220]: 2026-01-23 09:46:00.089579926 +0000 UTC m=+0.054823860 container create 2087532d70aec9dcfd7b6ca200e187299c705d130b4846273ab94c4abc70a744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_keller, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 04:46:00 np0005593232 systemd[1]: Started libpod-conmon-2087532d70aec9dcfd7b6ca200e187299c705d130b4846273ab94c4abc70a744.scope.
Jan 23 04:46:00 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:46:00 np0005593232 podman[288220]: 2026-01-23 09:46:00.058804328 +0000 UTC m=+0.024048282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:46:00 np0005593232 podman[288220]: 2026-01-23 09:46:00.165002796 +0000 UTC m=+0.130246750 container init 2087532d70aec9dcfd7b6ca200e187299c705d130b4846273ab94c4abc70a744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:46:00 np0005593232 podman[288220]: 2026-01-23 09:46:00.171306579 +0000 UTC m=+0.136550513 container start 2087532d70aec9dcfd7b6ca200e187299c705d130b4846273ab94c4abc70a744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 04:46:00 np0005593232 podman[288220]: 2026-01-23 09:46:00.174557394 +0000 UTC m=+0.139801348 container attach 2087532d70aec9dcfd7b6ca200e187299c705d130b4846273ab94c4abc70a744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 23 04:46:00 np0005593232 sharp_keller[288236]: 167 167
Jan 23 04:46:00 np0005593232 systemd[1]: libpod-2087532d70aec9dcfd7b6ca200e187299c705d130b4846273ab94c4abc70a744.scope: Deactivated successfully.
Jan 23 04:46:00 np0005593232 podman[288220]: 2026-01-23 09:46:00.176594964 +0000 UTC m=+0.141838898 container died 2087532d70aec9dcfd7b6ca200e187299c705d130b4846273ab94c4abc70a744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:46:00 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f36b484bf222f3e9849f7273d555053b8c3045f9945029a391240a0b1eec66eb-merged.mount: Deactivated successfully.
Jan 23 04:46:00 np0005593232 podman[288220]: 2026-01-23 09:46:00.219893567 +0000 UTC m=+0.185137501 container remove 2087532d70aec9dcfd7b6ca200e187299c705d130b4846273ab94c4abc70a744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:46:00 np0005593232 systemd[1]: libpod-conmon-2087532d70aec9dcfd7b6ca200e187299c705d130b4846273ab94c4abc70a744.scope: Deactivated successfully.
Jan 23 04:46:00 np0005593232 podman[288261]: 2026-01-23 09:46:00.379369278 +0000 UTC m=+0.039304557 container create adb37981dc9c472fc6ecbf48c5ffb3a73696d276c0db6f5cc7bc69b386fb1cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 04:46:00 np0005593232 systemd[1]: Started libpod-conmon-adb37981dc9c472fc6ecbf48c5ffb3a73696d276c0db6f5cc7bc69b386fb1cb7.scope.
Jan 23 04:46:00 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:46:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/370620b2a5ac608fd1a97e23406a5ce1dc46f2b93d4ee3eb4c07ce1991273602/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:46:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/370620b2a5ac608fd1a97e23406a5ce1dc46f2b93d4ee3eb4c07ce1991273602/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:46:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/370620b2a5ac608fd1a97e23406a5ce1dc46f2b93d4ee3eb4c07ce1991273602/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:46:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/370620b2a5ac608fd1a97e23406a5ce1dc46f2b93d4ee3eb4c07ce1991273602/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:46:00 np0005593232 podman[288261]: 2026-01-23 09:46:00.36298683 +0000 UTC m=+0.022922129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:46:00 np0005593232 podman[288261]: 2026-01-23 09:46:00.464695207 +0000 UTC m=+0.124630506 container init adb37981dc9c472fc6ecbf48c5ffb3a73696d276c0db6f5cc7bc69b386fb1cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Jan 23 04:46:00 np0005593232 podman[288261]: 2026-01-23 09:46:00.471068073 +0000 UTC m=+0.131003352 container start adb37981dc9c472fc6ecbf48c5ffb3a73696d276c0db6f5cc7bc69b386fb1cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_snyder, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:46:00 np0005593232 podman[288261]: 2026-01-23 09:46:00.473761921 +0000 UTC m=+0.133697200 container attach adb37981dc9c472fc6ecbf48c5ffb3a73696d276c0db6f5cc7bc69b386fb1cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_snyder, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:46:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:46:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:00.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:46:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:00.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]: {
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:    "0": [
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:        {
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:            "devices": [
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:                "/dev/loop3"
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:            ],
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:            "lv_name": "ceph_lv0",
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:            "lv_size": "7511998464",
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:            "name": "ceph_lv0",
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:            "tags": {
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:                "ceph.cluster_name": "ceph",
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:                "ceph.crush_device_class": "",
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:                "ceph.encrypted": "0",
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:                "ceph.osd_id": "0",
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:                "ceph.type": "block",
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:                "ceph.vdo": "0"
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:            },
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:            "type": "block",
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:            "vg_name": "ceph_vg0"
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:        }
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]:    ]
Jan 23 04:46:01 np0005593232 goofy_snyder[288277]: }
Jan 23 04:46:01 np0005593232 systemd[1]: libpod-adb37981dc9c472fc6ecbf48c5ffb3a73696d276c0db6f5cc7bc69b386fb1cb7.scope: Deactivated successfully.
Jan 23 04:46:01 np0005593232 podman[288261]: 2026-01-23 09:46:01.250092245 +0000 UTC m=+0.910027524 container died adb37981dc9c472fc6ecbf48c5ffb3a73696d276c0db6f5cc7bc69b386fb1cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:46:01 np0005593232 systemd[1]: var-lib-containers-storage-overlay-370620b2a5ac608fd1a97e23406a5ce1dc46f2b93d4ee3eb4c07ce1991273602-merged.mount: Deactivated successfully.
Jan 23 04:46:01 np0005593232 nova_compute[250269]: 2026-01-23 09:46:01.294 250273 INFO nova.compute.manager [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Took 4.05 seconds to destroy the instance on the hypervisor.
Jan 23 04:46:01 np0005593232 nova_compute[250269]: 2026-01-23 09:46:01.295 250273 DEBUG oslo.service.loopingcall [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 23 04:46:01 np0005593232 nova_compute[250269]: 2026-01-23 09:46:01.295 250273 DEBUG nova.compute.manager [-] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 23 04:46:01 np0005593232 nova_compute[250269]: 2026-01-23 09:46:01.295 250273 DEBUG nova.network.neutron [-] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 23 04:46:01 np0005593232 podman[288261]: 2026-01-23 09:46:01.304264135 +0000 UTC m=+0.964199424 container remove adb37981dc9c472fc6ecbf48c5ffb3a73696d276c0db6f5cc7bc69b386fb1cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 04:46:01 np0005593232 systemd[1]: libpod-conmon-adb37981dc9c472fc6ecbf48c5ffb3a73696d276c0db6f5cc7bc69b386fb1cb7.scope: Deactivated successfully.
Jan 23 04:46:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1602: 321 pgs: 321 active+clean; 246 MiB data, 655 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Jan 23 04:46:01 np0005593232 podman[288441]: 2026-01-23 09:46:01.86534746 +0000 UTC m=+0.037579437 container create 1274c5c117e27bd7017d4c705373ae8622b423de0c1f6a047880c8019d513867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_haibt, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:46:01 np0005593232 systemd[1]: Started libpod-conmon-1274c5c117e27bd7017d4c705373ae8622b423de0c1f6a047880c8019d513867.scope.
Jan 23 04:46:01 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:46:01 np0005593232 podman[288441]: 2026-01-23 09:46:01.935199777 +0000 UTC m=+0.107431754 container init 1274c5c117e27bd7017d4c705373ae8622b423de0c1f6a047880c8019d513867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:46:01 np0005593232 podman[288441]: 2026-01-23 09:46:01.943479109 +0000 UTC m=+0.115711086 container start 1274c5c117e27bd7017d4c705373ae8622b423de0c1f6a047880c8019d513867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 04:46:01 np0005593232 podman[288441]: 2026-01-23 09:46:01.849007433 +0000 UTC m=+0.021239420 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:46:01 np0005593232 trusting_haibt[288457]: 167 167
Jan 23 04:46:01 np0005593232 podman[288441]: 2026-01-23 09:46:01.947261559 +0000 UTC m=+0.119493566 container attach 1274c5c117e27bd7017d4c705373ae8622b423de0c1f6a047880c8019d513867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:46:01 np0005593232 systemd[1]: libpod-1274c5c117e27bd7017d4c705373ae8622b423de0c1f6a047880c8019d513867.scope: Deactivated successfully.
Jan 23 04:46:01 np0005593232 podman[288441]: 2026-01-23 09:46:01.948550877 +0000 UTC m=+0.120782874 container died 1274c5c117e27bd7017d4c705373ae8622b423de0c1f6a047880c8019d513867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_haibt, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 23 04:46:01 np0005593232 systemd[1]: var-lib-containers-storage-overlay-14ef75e16c994660c70279687045a5482cf11c802350c8a60e0dafe20aa49215-merged.mount: Deactivated successfully.
Jan 23 04:46:01 np0005593232 podman[288441]: 2026-01-23 09:46:01.989767819 +0000 UTC m=+0.161999806 container remove 1274c5c117e27bd7017d4c705373ae8622b423de0c1f6a047880c8019d513867 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_haibt, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 23 04:46:02 np0005593232 systemd[1]: libpod-conmon-1274c5c117e27bd7017d4c705373ae8622b423de0c1f6a047880c8019d513867.scope: Deactivated successfully.
Jan 23 04:46:02 np0005593232 podman[288478]: 2026-01-23 09:46:02.158264673 +0000 UTC m=+0.044728965 container create 12af248ff0c6006188f8f6ad633392ebf4db08d8668022e7dfaec162ad35c811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_zhukovsky, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:46:02 np0005593232 systemd[1]: Started libpod-conmon-12af248ff0c6006188f8f6ad633392ebf4db08d8668022e7dfaec162ad35c811.scope.
Jan 23 04:46:02 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:46:02 np0005593232 podman[288478]: 2026-01-23 09:46:02.137359054 +0000 UTC m=+0.023823376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:46:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552edce8af6605cfb49732c212d8d364e833a6719367272af4d8f015f5082a8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:46:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552edce8af6605cfb49732c212d8d364e833a6719367272af4d8f015f5082a8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:46:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552edce8af6605cfb49732c212d8d364e833a6719367272af4d8f015f5082a8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:46:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552edce8af6605cfb49732c212d8d364e833a6719367272af4d8f015f5082a8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:46:02 np0005593232 podman[288478]: 2026-01-23 09:46:02.24490086 +0000 UTC m=+0.131365152 container init 12af248ff0c6006188f8f6ad633392ebf4db08d8668022e7dfaec162ad35c811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 23 04:46:02 np0005593232 podman[288478]: 2026-01-23 09:46:02.252891143 +0000 UTC m=+0.139355445 container start 12af248ff0c6006188f8f6ad633392ebf4db08d8668022e7dfaec162ad35c811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_zhukovsky, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:46:02 np0005593232 podman[288478]: 2026-01-23 09:46:02.257022924 +0000 UTC m=+0.143487206 container attach 12af248ff0c6006188f8f6ad633392ebf4db08d8668022e7dfaec162ad35c811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_zhukovsky, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 04:46:02 np0005593232 nova_compute[250269]: 2026-01-23 09:46:02.470 250273 DEBUG nova.compute.manager [req-54205960-e31f-4137-85af-f1164f54ff9f req-60384d3f-a379-4dd0-ad07-2858ea4090ea 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Received event network-vif-plugged-78ad8ff7-ce83-49f8-b130-d439e99c2ba3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:46:02 np0005593232 nova_compute[250269]: 2026-01-23 09:46:02.471 250273 DEBUG oslo_concurrency.lockutils [req-54205960-e31f-4137-85af-f1164f54ff9f req-60384d3f-a379-4dd0-ad07-2858ea4090ea 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "0d38eba2-7627-421b-b5da-45eb53aca667-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:46:02 np0005593232 nova_compute[250269]: 2026-01-23 09:46:02.472 250273 DEBUG oslo_concurrency.lockutils [req-54205960-e31f-4137-85af-f1164f54ff9f req-60384d3f-a379-4dd0-ad07-2858ea4090ea 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "0d38eba2-7627-421b-b5da-45eb53aca667-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:46:02 np0005593232 nova_compute[250269]: 2026-01-23 09:46:02.472 250273 DEBUG oslo_concurrency.lockutils [req-54205960-e31f-4137-85af-f1164f54ff9f req-60384d3f-a379-4dd0-ad07-2858ea4090ea 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "0d38eba2-7627-421b-b5da-45eb53aca667-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:46:02 np0005593232 nova_compute[250269]: 2026-01-23 09:46:02.473 250273 DEBUG nova.compute.manager [req-54205960-e31f-4137-85af-f1164f54ff9f req-60384d3f-a379-4dd0-ad07-2858ea4090ea 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] No waiting events found dispatching network-vif-plugged-78ad8ff7-ce83-49f8-b130-d439e99c2ba3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:46:02 np0005593232 nova_compute[250269]: 2026-01-23 09:46:02.473 250273 WARNING nova.compute.manager [req-54205960-e31f-4137-85af-f1164f54ff9f req-60384d3f-a379-4dd0-ad07-2858ea4090ea 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Received unexpected event network-vif-plugged-78ad8ff7-ce83-49f8-b130-d439e99c2ba3 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 04:46:02 np0005593232 nova_compute[250269]: 2026-01-23 09:46:02.523 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:46:02 np0005593232 nova_compute[250269]: 2026-01-23 09:46:02.724 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:46:02 np0005593232 NetworkManager[49057]: <info>  [1769161562.7284] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Jan 23 04:46:02 np0005593232 NetworkManager[49057]: <info>  [1769161562.7302] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Jan 23 04:46:02 np0005593232 nova_compute[250269]: 2026-01-23 09:46:02.789 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:46:02 np0005593232 nova_compute[250269]: 2026-01-23 09:46:02.796 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:46:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:02.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:02 np0005593232 nova_compute[250269]: 2026-01-23 09:46:02.925 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:46:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:02.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:03 np0005593232 angry_zhukovsky[288494]: {
Jan 23 04:46:03 np0005593232 angry_zhukovsky[288494]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:46:03 np0005593232 angry_zhukovsky[288494]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:46:03 np0005593232 angry_zhukovsky[288494]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:46:03 np0005593232 angry_zhukovsky[288494]:        "osd_id": 0,
Jan 23 04:46:03 np0005593232 angry_zhukovsky[288494]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:46:03 np0005593232 angry_zhukovsky[288494]:        "type": "bluestore"
Jan 23 04:46:03 np0005593232 angry_zhukovsky[288494]:    }
Jan 23 04:46:03 np0005593232 angry_zhukovsky[288494]: }
Jan 23 04:46:03 np0005593232 systemd[1]: libpod-12af248ff0c6006188f8f6ad633392ebf4db08d8668022e7dfaec162ad35c811.scope: Deactivated successfully.
Jan 23 04:46:03 np0005593232 podman[288517]: 2026-01-23 09:46:03.163016628 +0000 UTC m=+0.021735725 container died 12af248ff0c6006188f8f6ad633392ebf4db08d8668022e7dfaec162ad35c811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:46:03 np0005593232 systemd[1]: var-lib-containers-storage-overlay-552edce8af6605cfb49732c212d8d364e833a6719367272af4d8f015f5082a8c-merged.mount: Deactivated successfully.
Jan 23 04:46:03 np0005593232 podman[288517]: 2026-01-23 09:46:03.219742353 +0000 UTC m=+0.078461450 container remove 12af248ff0c6006188f8f6ad633392ebf4db08d8668022e7dfaec162ad35c811 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_zhukovsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 04:46:03 np0005593232 systemd[1]: libpod-conmon-12af248ff0c6006188f8f6ad633392ebf4db08d8668022e7dfaec162ad35c811.scope: Deactivated successfully.
Jan 23 04:46:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:46:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:46:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:46:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:46:03 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 791f55b3-1ac9-4e8d-b9ca-201cbb558f71 does not exist
Jan 23 04:46:03 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 31c58711-b277-47c4-a5ef-94a8faad60db does not exist
Jan 23 04:46:03 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 46f9b8ca-8fbf-4370-8d9e-9530c13ba95b does not exist
Jan 23 04:46:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1603: 321 pgs: 321 active+clean; 167 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 1.5 MiB/s wr, 70 op/s
Jan 23 04:46:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:46:04 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:46:04 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:46:04 np0005593232 nova_compute[250269]: 2026-01-23 09:46:04.690 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:46:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:04.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:04.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1604: 321 pgs: 321 active+clean; 167 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 1.5 MiB/s wr, 68 op/s
Jan 23 04:46:06 np0005593232 nova_compute[250269]: 2026-01-23 09:46:06.225 250273 DEBUG nova.network.neutron [-] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:46:06 np0005593232 nova_compute[250269]: 2026-01-23 09:46:06.277 250273 INFO nova.compute.manager [-] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Took 4.98 seconds to deallocate network for instance.#033[00m
Jan 23 04:46:06 np0005593232 nova_compute[250269]: 2026-01-23 09:46:06.363 250273 DEBUG oslo_concurrency.lockutils [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:46:06 np0005593232 nova_compute[250269]: 2026-01-23 09:46:06.364 250273 DEBUG oslo_concurrency.lockutils [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:46:06 np0005593232 nova_compute[250269]: 2026-01-23 09:46:06.452 250273 DEBUG oslo_concurrency.processutils [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:46:06 np0005593232 nova_compute[250269]: 2026-01-23 09:46:06.709 250273 DEBUG nova.compute.manager [req-2b3a3073-9b78-41c1-b557-0e621bd1d5aa req-abe21f31-0086-432b-9d14-9ae001cb8b25 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Received event network-vif-deleted-78ad8ff7-ce83-49f8-b130-d439e99c2ba3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:46:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:06.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:46:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2992329831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:46:06 np0005593232 nova_compute[250269]: 2026-01-23 09:46:06.932 250273 DEBUG oslo_concurrency.processutils [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:46:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:46:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:06.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:46:06 np0005593232 nova_compute[250269]: 2026-01-23 09:46:06.939 250273 DEBUG nova.compute.provider_tree [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:46:06 np0005593232 nova_compute[250269]: 2026-01-23 09:46:06.957 250273 DEBUG nova.scheduler.client.report [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:46:06 np0005593232 nova_compute[250269]: 2026-01-23 09:46:06.987 250273 DEBUG oslo_concurrency.lockutils [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:46:07 np0005593232 nova_compute[250269]: 2026-01-23 09:46:07.108 250273 INFO nova.scheduler.client.report [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Deleted allocations for instance 0d38eba2-7627-421b-b5da-45eb53aca667#033[00m
Jan 23 04:46:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1605: 321 pgs: 321 active+clean; 167 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.0 KiB/s wr, 48 op/s
Jan 23 04:46:07 np0005593232 nova_compute[250269]: 2026-01-23 09:46:07.525 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:46:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:46:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:46:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:46:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:46:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:46:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:46:08 np0005593232 nova_compute[250269]: 2026-01-23 09:46:08.123 250273 DEBUG oslo_concurrency.lockutils [None req-638beb1d-5a59-40ca-9b96-fe2bef39dcc2 1270518e615c4c63a54865bfe906ce5d c05913e2e5c046bf92e6c8c855833959 - - default default] Lock "0d38eba2-7627-421b-b5da-45eb53aca667" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:46:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:08.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:08.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:46:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1606: 321 pgs: 321 active+clean; 167 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 2.8 KiB/s wr, 44 op/s
Jan 23 04:46:09 np0005593232 nova_compute[250269]: 2026-01-23 09:46:09.692 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:46:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:46:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:10.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:46:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:10.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1607: 321 pgs: 321 active+clean; 167 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 2.5 KiB/s wr, 39 op/s
Jan 23 04:46:12 np0005593232 nova_compute[250269]: 2026-01-23 09:46:12.482 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769161557.480393, 0d38eba2-7627-421b-b5da-45eb53aca667 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:46:12 np0005593232 nova_compute[250269]: 2026-01-23 09:46:12.482 250273 INFO nova.compute.manager [-] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:46:12 np0005593232 nova_compute[250269]: 2026-01-23 09:46:12.527 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:46:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:46:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:12.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:46:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:12.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1608: 321 pgs: 321 active+clean; 167 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 2.5 KiB/s wr, 39 op/s
Jan 23 04:46:13 np0005593232 nova_compute[250269]: 2026-01-23 09:46:13.940 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:46:13 np0005593232 nova_compute[250269]: 2026-01-23 09:46:13.941 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:14.309528) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161574309644, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 2044, "num_deletes": 263, "total_data_size": 3397319, "memory_usage": 3459408, "flush_reason": "Manual Compaction"}
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161574379644, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 3336685, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33891, "largest_seqno": 35934, "table_properties": {"data_size": 3327268, "index_size": 5911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19761, "raw_average_key_size": 20, "raw_value_size": 3308268, "raw_average_value_size": 3446, "num_data_blocks": 256, "num_entries": 960, "num_filter_entries": 960, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161403, "oldest_key_time": 1769161403, "file_creation_time": 1769161574, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 70181 microseconds, and 10289 cpu microseconds.
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:46:14 np0005593232 nova_compute[250269]: 2026-01-23 09:46:14.404 250273 DEBUG nova.compute.manager [None req-768e3b4f-b3a6-4092-91aa-776a3c21412a - - - - - -] [instance: 0d38eba2-7627-421b-b5da-45eb53aca667] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:14.379719) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 3336685 bytes OK
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:14.379748) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:14.418116) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:14.418209) EVENT_LOG_v1 {"time_micros": 1769161574418189, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:14.418254) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 3388664, prev total WAL file size 3388664, number of live WAL files 2.
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:14.420428) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303033' seq:72057594037927935, type:22 .. '6C6F676D0031323535' seq:0, type:0; will stop at (end)
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(3258KB)], [74(8131KB)]
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161574420551, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 11663436, "oldest_snapshot_seqno": -1}
Jan 23 04:46:14 np0005593232 podman[288610]: 2026-01-23 09:46:14.486795009 +0000 UTC m=+0.140779177 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6247 keys, 11511062 bytes, temperature: kUnknown
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161574593774, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 11511062, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11467198, "index_size": 27114, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 159661, "raw_average_key_size": 25, "raw_value_size": 11353015, "raw_average_value_size": 1817, "num_data_blocks": 1096, "num_entries": 6247, "num_filter_entries": 6247, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769161574, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:14.594099) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 11511062 bytes
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:14.666585) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 67.3 rd, 66.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.9 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(6.9) write-amplify(3.4) OK, records in: 6787, records dropped: 540 output_compression: NoCompression
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:14.666625) EVENT_LOG_v1 {"time_micros": 1769161574666611, "job": 42, "event": "compaction_finished", "compaction_time_micros": 173349, "compaction_time_cpu_micros": 27459, "output_level": 6, "num_output_files": 1, "total_output_size": 11511062, "num_input_records": 6787, "num_output_records": 6247, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161574667374, "job": 42, "event": "table_file_deletion", "file_number": 76}
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161574668799, "job": 42, "event": "table_file_deletion", "file_number": 74}
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:14.420254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:14.668991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:14.668998) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:14.669000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:14.669002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:46:14 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:14.669005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:46:14 np0005593232 nova_compute[250269]: 2026-01-23 09:46:14.693 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:46:14 np0005593232 nova_compute[250269]: 2026-01-23 09:46:14.788 250273 DEBUG oslo_concurrency.lockutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Acquiring lock "37d4741e-a7c3-49a4-ad2e-c54789bff05f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:46:14 np0005593232 nova_compute[250269]: 2026-01-23 09:46:14.789 250273 DEBUG oslo_concurrency.lockutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Lock "37d4741e-a7c3-49a4-ad2e-c54789bff05f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:46:14 np0005593232 nova_compute[250269]: 2026-01-23 09:46:14.855 250273 DEBUG nova.compute.manager [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:46:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:14.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:14.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:15 np0005593232 nova_compute[250269]: 2026-01-23 09:46:15.008 250273 DEBUG oslo_concurrency.lockutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:46:15 np0005593232 nova_compute[250269]: 2026-01-23 09:46:15.009 250273 DEBUG oslo_concurrency.lockutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:46:15 np0005593232 nova_compute[250269]: 2026-01-23 09:46:15.017 250273 DEBUG nova.virt.hardware [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:46:15 np0005593232 nova_compute[250269]: 2026-01-23 09:46:15.018 250273 INFO nova.compute.claims [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:46:15 np0005593232 nova_compute[250269]: 2026-01-23 09:46:15.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:46:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1609: 321 pgs: 321 active+clean; 167 MiB data, 608 MiB used, 20 GiB / 21 GiB avail
Jan 23 04:46:16 np0005593232 nova_compute[250269]: 2026-01-23 09:46:16.208 250273 DEBUG oslo_concurrency.processutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:46:16 np0005593232 nova_compute[250269]: 2026-01-23 09:46:16.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:46:16 np0005593232 nova_compute[250269]: 2026-01-23 09:46:16.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:46:16 np0005593232 nova_compute[250269]: 2026-01-23 09:46:16.294 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:46:16 np0005593232 nova_compute[250269]: 2026-01-23 09:46:16.401 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 23 04:46:16 np0005593232 nova_compute[250269]: 2026-01-23 09:46:16.402 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 04:46:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:46:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/838780434' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:46:16 np0005593232 nova_compute[250269]: 2026-01-23 09:46:16.686 250273 DEBUG oslo_concurrency.processutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:46:16 np0005593232 nova_compute[250269]: 2026-01-23 09:46:16.692 250273 DEBUG nova.compute.provider_tree [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:46:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:16.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:16.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:17 np0005593232 nova_compute[250269]: 2026-01-23 09:46:17.197 250273 DEBUG nova.scheduler.client.report [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:46:17 np0005593232 nova_compute[250269]: 2026-01-23 09:46:17.236 250273 DEBUG oslo_concurrency.lockutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:46:17 np0005593232 nova_compute[250269]: 2026-01-23 09:46:17.237 250273 DEBUG nova.compute.manager [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 04:46:17 np0005593232 nova_compute[250269]: 2026-01-23 09:46:17.312 250273 DEBUG nova.compute.manager [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 04:46:17 np0005593232 nova_compute[250269]: 2026-01-23 09:46:17.313 250273 DEBUG nova.network.neutron [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:46:17 np0005593232 nova_compute[250269]: 2026-01-23 09:46:17.358 250273 INFO nova.virt.libvirt.driver [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:46:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1610: 321 pgs: 321 active+clean; 167 MiB data, 608 MiB used, 20 GiB / 21 GiB avail
Jan 23 04:46:17 np0005593232 nova_compute[250269]: 2026-01-23 09:46:17.409 250273 DEBUG nova.compute.manager [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:46:17 np0005593232 nova_compute[250269]: 2026-01-23 09:46:17.528 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:46:17 np0005593232 nova_compute[250269]: 2026-01-23 09:46:17.669 250273 DEBUG nova.compute.manager [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:46:17 np0005593232 nova_compute[250269]: 2026-01-23 09:46:17.670 250273 DEBUG nova.virt.libvirt.driver [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:46:17 np0005593232 nova_compute[250269]: 2026-01-23 09:46:17.671 250273 INFO nova.virt.libvirt.driver [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Creating image(s)#033[00m
Jan 23 04:46:17 np0005593232 nova_compute[250269]: 2026-01-23 09:46:17.698 250273 DEBUG nova.storage.rbd_utils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] rbd image 37d4741e-a7c3-49a4-ad2e-c54789bff05f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:46:17 np0005593232 nova_compute[250269]: 2026-01-23 09:46:17.874 250273 DEBUG nova.storage.rbd_utils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] rbd image 37d4741e-a7c3-49a4-ad2e-c54789bff05f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:46:17 np0005593232 nova_compute[250269]: 2026-01-23 09:46:17.911 250273 DEBUG nova.storage.rbd_utils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] rbd image 37d4741e-a7c3-49a4-ad2e-c54789bff05f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:46:17 np0005593232 nova_compute[250269]: 2026-01-23 09:46:17.917 250273 DEBUG oslo_concurrency.processutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:46:18 np0005593232 nova_compute[250269]: 2026-01-23 09:46:18.000 250273 DEBUG oslo_concurrency.processutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:46:18 np0005593232 nova_compute[250269]: 2026-01-23 09:46:18.001 250273 DEBUG oslo_concurrency.lockutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:46:18 np0005593232 nova_compute[250269]: 2026-01-23 09:46:18.002 250273 DEBUG oslo_concurrency.lockutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:46:18 np0005593232 nova_compute[250269]: 2026-01-23 09:46:18.003 250273 DEBUG oslo_concurrency.lockutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:46:18 np0005593232 nova_compute[250269]: 2026-01-23 09:46:18.037 250273 DEBUG nova.storage.rbd_utils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] rbd image 37d4741e-a7c3-49a4-ad2e-c54789bff05f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:46:18 np0005593232 nova_compute[250269]: 2026-01-23 09:46:18.041 250273 DEBUG oslo_concurrency.processutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 37d4741e-a7c3-49a4-ad2e-c54789bff05f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:46:18 np0005593232 nova_compute[250269]: 2026-01-23 09:46:18.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:46:18 np0005593232 nova_compute[250269]: 2026-01-23 09:46:18.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:46:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:18.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:18.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:46:19 np0005593232 nova_compute[250269]: 2026-01-23 09:46:19.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:46:19 np0005593232 nova_compute[250269]: 2026-01-23 09:46:19.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:46:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1611: 321 pgs: 321 active+clean; 167 MiB data, 608 MiB used, 20 GiB / 21 GiB avail
Jan 23 04:46:19 np0005593232 nova_compute[250269]: 2026-01-23 09:46:19.615 250273 DEBUG oslo_concurrency.processutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 37d4741e-a7c3-49a4-ad2e-c54789bff05f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:46:19 np0005593232 nova_compute[250269]: 2026-01-23 09:46:19.680 250273 DEBUG nova.storage.rbd_utils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] resizing rbd image 37d4741e-a7c3-49a4-ad2e-c54789bff05f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 04:46:19 np0005593232 nova_compute[250269]: 2026-01-23 09:46:19.804 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:46:19 np0005593232 nova_compute[250269]: 2026-01-23 09:46:19.898 250273 DEBUG nova.objects.instance [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Lazy-loading 'migration_context' on Instance uuid 37d4741e-a7c3-49a4-ad2e-c54789bff05f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:46:20 np0005593232 nova_compute[250269]: 2026-01-23 09:46:20.307 250273 DEBUG nova.network.neutron [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 23 04:46:20 np0005593232 nova_compute[250269]: 2026-01-23 09:46:20.308 250273 DEBUG nova.compute.manager [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:46:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:20.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:20.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.127 250273 DEBUG nova.virt.libvirt.driver [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.127 250273 DEBUG nova.virt.libvirt.driver [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Ensure instance console log exists: /var/lib/nova/instances/37d4741e-a7c3-49a4-ad2e-c54789bff05f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.128 250273 DEBUG oslo_concurrency.lockutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.128 250273 DEBUG oslo_concurrency.lockutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.128 250273 DEBUG oslo_concurrency.lockutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.130 250273 DEBUG nova.virt.libvirt.driver [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.134 250273 WARNING nova.virt.libvirt.driver [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.144 250273 DEBUG nova.virt.libvirt.host [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.144 250273 DEBUG nova.virt.libvirt.host [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.157 250273 DEBUG nova.virt.libvirt.host [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.158 250273 DEBUG nova.virt.libvirt.host [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.159 250273 DEBUG nova.virt.libvirt.driver [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.159 250273 DEBUG nova.virt.hardware [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.160 250273 DEBUG nova.virt.hardware [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.160 250273 DEBUG nova.virt.hardware [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.160 250273 DEBUG nova.virt.hardware [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.160 250273 DEBUG nova.virt.hardware [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.160 250273 DEBUG nova.virt.hardware [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.161 250273 DEBUG nova.virt.hardware [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.161 250273 DEBUG nova.virt.hardware [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.161 250273 DEBUG nova.virt.hardware [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.161 250273 DEBUG nova.virt.hardware [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.162 250273 DEBUG nova.virt.hardware [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.165 250273 DEBUG oslo_concurrency.processutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:46:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1612: 321 pgs: 321 active+clean; 167 MiB data, 608 MiB used, 20 GiB / 21 GiB avail
Jan 23 04:46:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:46:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2419122426' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.623 250273 DEBUG oslo_concurrency.processutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.652 250273 DEBUG nova.storage.rbd_utils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] rbd image 37d4741e-a7c3-49a4-ad2e-c54789bff05f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:46:21 np0005593232 nova_compute[250269]: 2026-01-23 09:46:21.655 250273 DEBUG oslo_concurrency.processutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:46:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:46:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1473140393' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:46:22 np0005593232 nova_compute[250269]: 2026-01-23 09:46:22.068 250273 DEBUG oslo_concurrency.processutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:46:22 np0005593232 nova_compute[250269]: 2026-01-23 09:46:22.072 250273 DEBUG nova.objects.instance [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Lazy-loading 'pci_devices' on Instance uuid 37d4741e-a7c3-49a4-ad2e-c54789bff05f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:46:22 np0005593232 podman[288939]: 2026-01-23 09:46:22.393840886 +0000 UTC m=+0.049719052 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 23 04:46:22 np0005593232 nova_compute[250269]: 2026-01-23 09:46:22.530 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:46:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:22.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:46:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:22.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:46:23 np0005593232 nova_compute[250269]: 2026-01-23 09:46:23.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:46:23 np0005593232 nova_compute[250269]: 2026-01-23 09:46:23.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:46:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1613: 321 pgs: 321 active+clean; 213 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 613 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 23 04:46:24 np0005593232 nova_compute[250269]: 2026-01-23 09:46:24.246 250273 DEBUG nova.virt.libvirt.driver [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:46:24 np0005593232 nova_compute[250269]:  <uuid>37d4741e-a7c3-49a4-ad2e-c54789bff05f</uuid>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:  <name>instance-0000003a</name>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <nova:name>tempest-ListImageFiltersTestJSON-server-1327103880</nova:name>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:46:21</nova:creationTime>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:46:24 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:        <nova:user uuid="8d1d7c58442749759ba7dc3a19799796">tempest-ListImageFiltersTestJSON-1689583115-project-member</nova:user>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:        <nova:project uuid="5d69aaa276f94de98e4011fa17428b40">tempest-ListImageFiltersTestJSON-1689583115</nova:project>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <nova:ports/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <entry name="serial">37d4741e-a7c3-49a4-ad2e-c54789bff05f</entry>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <entry name="uuid">37d4741e-a7c3-49a4-ad2e-c54789bff05f</entry>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/37d4741e-a7c3-49a4-ad2e-c54789bff05f_disk">
Jan 23 04:46:24 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:46:24 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/37d4741e-a7c3-49a4-ad2e-c54789bff05f_disk.config">
Jan 23 04:46:24 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:46:24 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/37d4741e-a7c3-49a4-ad2e-c54789bff05f/console.log" append="off"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:46:24 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:46:24 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:46:24 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:46:24 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:46:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:46:24 np0005593232 nova_compute[250269]: 2026-01-23 09:46:24.277 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:46:24 np0005593232 nova_compute[250269]: 2026-01-23 09:46:24.278 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:46:24 np0005593232 nova_compute[250269]: 2026-01-23 09:46:24.278 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:46:24 np0005593232 nova_compute[250269]: 2026-01-23 09:46:24.278 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:46:24 np0005593232 nova_compute[250269]: 2026-01-23 09:46:24.278 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:46:24 np0005593232 nova_compute[250269]: 2026-01-23 09:46:24.697 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:46:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:46:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3645100256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:46:24 np0005593232 nova_compute[250269]: 2026-01-23 09:46:24.742 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:46:24 np0005593232 nova_compute[250269]: 2026-01-23 09:46:24.820 250273 DEBUG nova.virt.libvirt.driver [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:46:24 np0005593232 nova_compute[250269]: 2026-01-23 09:46:24.821 250273 DEBUG nova.virt.libvirt.driver [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:46:24 np0005593232 nova_compute[250269]: 2026-01-23 09:46:24.822 250273 INFO nova.virt.libvirt.driver [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Using config drive#033[00m
Jan 23 04:46:24 np0005593232 nova_compute[250269]: 2026-01-23 09:46:24.853 250273 DEBUG nova.storage.rbd_utils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] rbd image 37d4741e-a7c3-49a4-ad2e-c54789bff05f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:46:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:46:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:24.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:46:24 np0005593232 nova_compute[250269]: 2026-01-23 09:46:24.910 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:46:24 np0005593232 nova_compute[250269]: 2026-01-23 09:46:24.911 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:46:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:24.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:25 np0005593232 nova_compute[250269]: 2026-01-23 09:46:25.038 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:46:25 np0005593232 nova_compute[250269]: 2026-01-23 09:46:25.039 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4556MB free_disk=20.901107788085938GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:46:25 np0005593232 nova_compute[250269]: 2026-01-23 09:46:25.039 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:46:25 np0005593232 nova_compute[250269]: 2026-01-23 09:46:25.040 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:46:25 np0005593232 nova_compute[250269]: 2026-01-23 09:46:25.290 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 37d4741e-a7c3-49a4-ad2e-c54789bff05f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:46:25 np0005593232 nova_compute[250269]: 2026-01-23 09:46:25.290 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:46:25 np0005593232 nova_compute[250269]: 2026-01-23 09:46:25.291 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:46:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1614: 321 pgs: 321 active+clean; 213 MiB data, 629 MiB used, 20 GiB / 21 GiB avail; 613 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 23 04:46:25 np0005593232 nova_compute[250269]: 2026-01-23 09:46:25.429 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:46:25 np0005593232 nova_compute[250269]: 2026-01-23 09:46:25.510 250273 INFO nova.virt.libvirt.driver [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Creating config drive at /var/lib/nova/instances/37d4741e-a7c3-49a4-ad2e-c54789bff05f/disk.config#033[00m
Jan 23 04:46:25 np0005593232 nova_compute[250269]: 2026-01-23 09:46:25.516 250273 DEBUG oslo_concurrency.processutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/37d4741e-a7c3-49a4-ad2e-c54789bff05f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5v1bm1uq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:46:25 np0005593232 nova_compute[250269]: 2026-01-23 09:46:25.646 250273 DEBUG oslo_concurrency.processutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/37d4741e-a7c3-49a4-ad2e-c54789bff05f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5v1bm1uq" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:46:25 np0005593232 nova_compute[250269]: 2026-01-23 09:46:25.675 250273 DEBUG nova.storage.rbd_utils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] rbd image 37d4741e-a7c3-49a4-ad2e-c54789bff05f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:46:25 np0005593232 nova_compute[250269]: 2026-01-23 09:46:25.679 250273 DEBUG oslo_concurrency.processutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/37d4741e-a7c3-49a4-ad2e-c54789bff05f/disk.config 37d4741e-a7c3-49a4-ad2e-c54789bff05f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:46:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:46:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/680761777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:46:25 np0005593232 nova_compute[250269]: 2026-01-23 09:46:25.924 250273 DEBUG oslo_concurrency.processutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/37d4741e-a7c3-49a4-ad2e-c54789bff05f/disk.config 37d4741e-a7c3-49a4-ad2e-c54789bff05f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.245s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:46:25 np0005593232 nova_compute[250269]: 2026-01-23 09:46:25.925 250273 INFO nova.virt.libvirt.driver [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Deleting local config drive /var/lib/nova/instances/37d4741e-a7c3-49a4-ad2e-c54789bff05f/disk.config because it was imported into RBD.#033[00m
Jan 23 04:46:25 np0005593232 nova_compute[250269]: 2026-01-23 09:46:25.926 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:46:25 np0005593232 nova_compute[250269]: 2026-01-23 09:46:25.931 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:46:25 np0005593232 nova_compute[250269]: 2026-01-23 09:46:25.959 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:46:25 np0005593232 nova_compute[250269]: 2026-01-23 09:46:25.990 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:46:25 np0005593232 nova_compute[250269]: 2026-01-23 09:46:25.991 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:46:26 np0005593232 systemd-machined[215836]: New machine qemu-23-instance-0000003a.
Jan 23 04:46:26 np0005593232 systemd[1]: Started Virtual Machine qemu-23-instance-0000003a.
Jan 23 04:46:26 np0005593232 nova_compute[250269]: 2026-01-23 09:46:26.611 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161586.6107814, 37d4741e-a7c3-49a4-ad2e-c54789bff05f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:46:26 np0005593232 nova_compute[250269]: 2026-01-23 09:46:26.612 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:46:26 np0005593232 nova_compute[250269]: 2026-01-23 09:46:26.614 250273 DEBUG nova.compute.manager [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:46:26 np0005593232 nova_compute[250269]: 2026-01-23 09:46:26.614 250273 DEBUG nova.virt.libvirt.driver [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:46:26 np0005593232 nova_compute[250269]: 2026-01-23 09:46:26.617 250273 INFO nova.virt.libvirt.driver [-] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Instance spawned successfully.#033[00m
Jan 23 04:46:26 np0005593232 nova_compute[250269]: 2026-01-23 09:46:26.618 250273 DEBUG nova.virt.libvirt.driver [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:46:26 np0005593232 nova_compute[250269]: 2026-01-23 09:46:26.724 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:46:26 np0005593232 nova_compute[250269]: 2026-01-23 09:46:26.733 250273 DEBUG nova.virt.libvirt.driver [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:46:26 np0005593232 nova_compute[250269]: 2026-01-23 09:46:26.734 250273 DEBUG nova.virt.libvirt.driver [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:46:26 np0005593232 nova_compute[250269]: 2026-01-23 09:46:26.735 250273 DEBUG nova.virt.libvirt.driver [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:46:26 np0005593232 nova_compute[250269]: 2026-01-23 09:46:26.736 250273 DEBUG nova.virt.libvirt.driver [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:46:26 np0005593232 nova_compute[250269]: 2026-01-23 09:46:26.736 250273 DEBUG nova.virt.libvirt.driver [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:46:26 np0005593232 nova_compute[250269]: 2026-01-23 09:46:26.737 250273 DEBUG nova.virt.libvirt.driver [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:46:26 np0005593232 nova_compute[250269]: 2026-01-23 09:46:26.745 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:46:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:46:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:26.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:46:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:46:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:26.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:46:26 np0005593232 nova_compute[250269]: 2026-01-23 09:46:26.965 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 04:46:26 np0005593232 nova_compute[250269]: 2026-01-23 09:46:26.967 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161586.6117315, 37d4741e-a7c3-49a4-ad2e-c54789bff05f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 04:46:26 np0005593232 nova_compute[250269]: 2026-01-23 09:46:26.968 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] VM Started (Lifecycle Event)
Jan 23 04:46:27 np0005593232 nova_compute[250269]: 2026-01-23 09:46:27.046 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 04:46:27 np0005593232 nova_compute[250269]: 2026-01-23 09:46:27.049 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 04:46:27 np0005593232 nova_compute[250269]: 2026-01-23 09:46:27.053 250273 INFO nova.compute.manager [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Took 9.38 seconds to spawn the instance on the hypervisor.
Jan 23 04:46:27 np0005593232 nova_compute[250269]: 2026-01-23 09:46:27.053 250273 DEBUG nova.compute.manager [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 04:46:27 np0005593232 nova_compute[250269]: 2026-01-23 09:46:27.154 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 04:46:27 np0005593232 nova_compute[250269]: 2026-01-23 09:46:27.276 250273 INFO nova.compute.manager [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Took 12.33 seconds to build instance.
Jan 23 04:46:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1615: 321 pgs: 321 active+clean; 235 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 117 op/s
Jan 23 04:46:27 np0005593232 nova_compute[250269]: 2026-01-23 09:46:27.403 250273 DEBUG oslo_concurrency.lockutils [None req-7d558624-f1e3-4bbe-bdc3-47cb811d9df9 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Lock "37d4741e-a7c3-49a4-ad2e-c54789bff05f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:46:27 np0005593232 nova_compute[250269]: 2026-01-23 09:46:27.533 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:46:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:28.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:46:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:28.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:46:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:46:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1616: 321 pgs: 321 active+clean; 260 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 155 op/s
Jan 23 04:46:29 np0005593232 nova_compute[250269]: 2026-01-23 09:46:29.698 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:46:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:30.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:30.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1617: 321 pgs: 321 active+clean; 260 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.6 MiB/s wr, 190 op/s
Jan 23 04:46:32 np0005593232 nova_compute[250269]: 2026-01-23 09:46:32.535 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:32.611006) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161592611045, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 421, "num_deletes": 251, "total_data_size": 318214, "memory_usage": 326168, "flush_reason": "Manual Compaction"}
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161592614654, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 315025, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35935, "largest_seqno": 36355, "table_properties": {"data_size": 312605, "index_size": 520, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6165, "raw_average_key_size": 18, "raw_value_size": 307724, "raw_average_value_size": 943, "num_data_blocks": 23, "num_entries": 326, "num_filter_entries": 326, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161574, "oldest_key_time": 1769161574, "file_creation_time": 1769161592, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 3679 microseconds, and 1299 cpu microseconds.
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:32.614685) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 315025 bytes OK
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:32.614697) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:32.617210) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:32.617231) EVENT_LOG_v1 {"time_micros": 1769161592617218, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:32.617246) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 315592, prev total WAL file size 315592, number of live WAL files 2.
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:32.617739) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(307KB)], [77(10MB)]
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161592617807, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 11826087, "oldest_snapshot_seqno": -1}
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6063 keys, 10005785 bytes, temperature: kUnknown
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161592752368, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 10005785, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9964473, "index_size": 25074, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15173, "raw_key_size": 156572, "raw_average_key_size": 25, "raw_value_size": 9854730, "raw_average_value_size": 1625, "num_data_blocks": 1003, "num_entries": 6063, "num_filter_entries": 6063, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769161592, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:32.752761) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 10005785 bytes
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:32.755941) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 87.8 rd, 74.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.0 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(69.3) write-amplify(31.8) OK, records in: 6573, records dropped: 510 output_compression: NoCompression
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:32.755976) EVENT_LOG_v1 {"time_micros": 1769161592755961, "job": 44, "event": "compaction_finished", "compaction_time_micros": 134714, "compaction_time_cpu_micros": 42918, "output_level": 6, "num_output_files": 1, "total_output_size": 10005785, "num_input_records": 6573, "num_output_records": 6063, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161592756497, "job": 44, "event": "table_file_deletion", "file_number": 79}
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161592758948, "job": 44, "event": "table_file_deletion", "file_number": 77}
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:32.617643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:32.759136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:32.759143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:32.759146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:32.759147) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:46:32 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:46:32.759150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:46:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:32.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:46:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:32.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:46:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1618: 321 pgs: 321 active+clean; 260 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.6 MiB/s wr, 275 op/s
Jan 23 04:46:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:46:34 np0005593232 nova_compute[250269]: 2026-01-23 09:46:34.693 250273 DEBUG nova.compute.manager [None req-9330f475-4c69-4f53-8520-d005ad68412c 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 04:46:34 np0005593232 nova_compute[250269]: 2026-01-23 09:46:34.699 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:46:34 np0005593232 nova_compute[250269]: 2026-01-23 09:46:34.761 250273 INFO nova.compute.manager [None req-9330f475-4c69-4f53-8520-d005ad68412c 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] instance snapshotting
Jan 23 04:46:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:34.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:46:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:34.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:46:35 np0005593232 nova_compute[250269]: 2026-01-23 09:46:35.352 250273 INFO nova.virt.libvirt.driver [None req-9330f475-4c69-4f53-8520-d005ad68412c 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Beginning live snapshot process
Jan 23 04:46:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1619: 321 pgs: 321 active+clean; 260 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Jan 23 04:46:35 np0005593232 nova_compute[250269]: 2026-01-23 09:46:35.969 250273 DEBUG nova.virt.libvirt.imagebackend [None req-9330f475-4c69-4f53-8520-d005ad68412c 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] No parent info for 84c0ef19-7f67-4bd3-95d8-507c3e0942ed; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 23 04:46:36 np0005593232 nova_compute[250269]: 2026-01-23 09:46:36.546 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:46:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:46:36.546 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 04:46:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:46:36.549 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 04:46:36 np0005593232 nova_compute[250269]: 2026-01-23 09:46:36.565 250273 DEBUG nova.storage.rbd_utils [None req-9330f475-4c69-4f53-8520-d005ad68412c 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] creating snapshot(2796fc68f7144d54aef81ba5828a91cd) on rbd image(37d4741e-a7c3-49a4-ad2e-c54789bff05f_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 23 04:46:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Jan 23 04:46:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Jan 23 04:46:36 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Jan 23 04:46:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:36.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:36 np0005593232 nova_compute[250269]: 2026-01-23 09:46:36.908 250273 DEBUG nova.storage.rbd_utils [None req-9330f475-4c69-4f53-8520-d005ad68412c 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] cloning vms/37d4741e-a7c3-49a4-ad2e-c54789bff05f_disk@2796fc68f7144d54aef81ba5828a91cd to images/a6dd6735-2531-476a-84da-d320f253f8a3 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 23 04:46:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:36.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:37 np0005593232 nova_compute[250269]: 2026-01-23 09:46:37.084 250273 DEBUG nova.storage.rbd_utils [None req-9330f475-4c69-4f53-8520-d005ad68412c 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] flattening images/a6dd6735-2531-476a-84da-d320f253f8a3 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 23 04:46:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:46:37
Jan 23 04:46:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:46:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:46:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'images', '.mgr', 'default.rgw.log', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'volumes']
Jan 23 04:46:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:46:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1621: 321 pgs: 321 active+clean; 277 MiB data, 681 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.3 MiB/s wr, 241 op/s
Jan 23 04:46:37 np0005593232 nova_compute[250269]: 2026-01-23 09:46:37.406 250273 DEBUG nova.storage.rbd_utils [None req-9330f475-4c69-4f53-8520-d005ad68412c 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] removing snapshot(2796fc68f7144d54aef81ba5828a91cd) on rbd image(37d4741e-a7c3-49a4-ad2e-c54789bff05f_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 23 04:46:37 np0005593232 nova_compute[250269]: 2026-01-23 09:46:37.536 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:46:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:46:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:46:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:46:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:46:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:46:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:46:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Jan 23 04:46:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Jan 23 04:46:37 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Jan 23 04:46:37 np0005593232 nova_compute[250269]: 2026-01-23 09:46:37.911 250273 DEBUG nova.storage.rbd_utils [None req-9330f475-4c69-4f53-8520-d005ad68412c 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] creating snapshot(snap) on rbd image(a6dd6735-2531-476a-84da-d320f253f8a3) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 23 04:46:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:46:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:46:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:46:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:46:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:46:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:46:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:46:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:46:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:46:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:46:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:46:38.552 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 04:46:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Jan 23 04:46:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Jan 23 04:46:38 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Jan 23 04:46:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:38.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:38.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:46:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1624: 321 pgs: 321 active+clean; 308 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.5 MiB/s wr, 194 op/s
Jan 23 04:46:39 np0005593232 nova_compute[250269]: 2026-01-23 09:46:39.701 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:46:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:40.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:40.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1625: 321 pgs: 321 active+clean; 322 MiB data, 701 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 7.6 MiB/s wr, 328 op/s
Jan 23 04:46:42 np0005593232 nova_compute[250269]: 2026-01-23 09:46:42.462 250273 INFO nova.virt.libvirt.driver [None req-9330f475-4c69-4f53-8520-d005ad68412c 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Snapshot image upload complete#033[00m
Jan 23 04:46:42 np0005593232 nova_compute[250269]: 2026-01-23 09:46:42.462 250273 INFO nova.compute.manager [None req-9330f475-4c69-4f53-8520-d005ad68412c 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Took 7.70 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 23 04:46:42 np0005593232 nova_compute[250269]: 2026-01-23 09:46:42.538 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:46:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:46:42.601 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:46:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:46:42.601 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:46:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:46:42.602 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:46:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:42.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:42.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1626: 321 pgs: 321 active+clean; 306 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 12 MiB/s wr, 361 op/s
Jan 23 04:46:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:46:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Jan 23 04:46:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Jan 23 04:46:44 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Jan 23 04:46:44 np0005593232 nova_compute[250269]: 2026-01-23 09:46:44.703 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:46:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:46:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:44.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:46:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:44.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1628: 321 pgs: 321 active+clean; 306 MiB data, 718 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 7.6 MiB/s wr, 210 op/s
Jan 23 04:46:45 np0005593232 podman[289322]: 2026-01-23 09:46:45.444191814 +0000 UTC m=+0.101081489 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:46:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:46:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:46:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:46:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:46:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006122736076214406 of space, bias 1.0, pg target 1.8368208228643217 quantized to 32 (current 32)
Jan 23 04:46:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:46:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 23 04:46:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:46:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:46:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:46:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002894245533221664 of space, bias 1.0, pg target 0.8653794144332776 quantized to 32 (current 32)
Jan 23 04:46:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:46:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 04:46:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:46:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:46:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:46:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 04:46:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:46:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 04:46:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:46:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:46:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:46:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 04:46:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:46.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:46.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1629: 321 pgs: 321 active+clean; 320 MiB data, 726 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 7.6 MiB/s wr, 252 op/s
Jan 23 04:46:47 np0005593232 nova_compute[250269]: 2026-01-23 09:46:47.540 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:46:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Jan 23 04:46:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Jan 23 04:46:48 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Jan 23 04:46:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:48.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:48.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e223 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:46:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1631: 321 pgs: 321 active+clean; 326 MiB data, 736 MiB used, 20 GiB / 21 GiB avail; 967 KiB/s rd, 6.5 MiB/s wr, 190 op/s
Jan 23 04:46:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Jan 23 04:46:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Jan 23 04:46:49 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Jan 23 04:46:49 np0005593232 nova_compute[250269]: 2026-01-23 09:46:49.705 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:46:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Jan 23 04:46:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Jan 23 04:46:50 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Jan 23 04:46:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:46:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:50.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:46:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:46:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:50.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:46:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1634: 321 pgs: 321 active+clean; 363 MiB data, 754 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.4 MiB/s wr, 186 op/s
Jan 23 04:46:52 np0005593232 nova_compute[250269]: 2026-01-23 09:46:52.542 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:46:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:46:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:52.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:46:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:52.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1635: 321 pgs: 321 active+clean; 412 MiB data, 785 MiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 255 op/s
Jan 23 04:46:53 np0005593232 podman[289353]: 2026-01-23 09:46:53.424811196 +0000 UTC m=+0.069452297 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Jan 23 04:46:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:46:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Jan 23 04:46:54 np0005593232 nova_compute[250269]: 2026-01-23 09:46:54.707 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:46:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:54.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:46:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:54.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:46:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Jan 23 04:46:55 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Jan 23 04:46:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1637: 321 pgs: 321 active+clean; 412 MiB data, 785 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 11 MiB/s wr, 226 op/s
Jan 23 04:46:56 np0005593232 nova_compute[250269]: 2026-01-23 09:46:56.719 250273 DEBUG nova.compute.manager [None req-150bdda3-e14a-41a5-9b9a-1037ffe9394c 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:46:56 np0005593232 nova_compute[250269]: 2026-01-23 09:46:56.799 250273 INFO nova.compute.manager [None req-150bdda3-e14a-41a5-9b9a-1037ffe9394c 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] instance snapshotting#033[00m
Jan 23 04:46:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:46:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:56.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:46:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:56.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1638: 321 pgs: 321 active+clean; 372 MiB data, 761 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 8.8 MiB/s wr, 209 op/s
Jan 23 04:46:57 np0005593232 nova_compute[250269]: 2026-01-23 09:46:57.445 250273 INFO nova.virt.libvirt.driver [None req-150bdda3-e14a-41a5-9b9a-1037ffe9394c 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Beginning live snapshot process#033[00m
Jan 23 04:46:57 np0005593232 nova_compute[250269]: 2026-01-23 09:46:57.544 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:46:57 np0005593232 nova_compute[250269]: 2026-01-23 09:46:57.671 250273 DEBUG nova.virt.libvirt.imagebackend [None req-150bdda3-e14a-41a5-9b9a-1037ffe9394c 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] No parent info for 84c0ef19-7f67-4bd3-95d8-507c3e0942ed; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 23 04:46:58 np0005593232 nova_compute[250269]: 2026-01-23 09:46:58.271 250273 DEBUG nova.storage.rbd_utils [None req-150bdda3-e14a-41a5-9b9a-1037ffe9394c 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] creating snapshot(cc31018fdb06489cb45341424c74aead) on rbd image(37d4741e-a7c3-49a4-ad2e-c54789bff05f_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 04:46:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Jan 23 04:46:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Jan 23 04:46:58 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Jan 23 04:46:58 np0005593232 nova_compute[250269]: 2026-01-23 09:46:58.552 250273 DEBUG nova.storage.rbd_utils [None req-150bdda3-e14a-41a5-9b9a-1037ffe9394c 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] cloning vms/37d4741e-a7c3-49a4-ad2e-c54789bff05f_disk@cc31018fdb06489cb45341424c74aead to images/8f15c480-cf5d-4aa4-933d-ac81e0da506c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 23 04:46:58 np0005593232 nova_compute[250269]: 2026-01-23 09:46:58.672 250273 DEBUG nova.storage.rbd_utils [None req-150bdda3-e14a-41a5-9b9a-1037ffe9394c 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] flattening images/8f15c480-cf5d-4aa4-933d-ac81e0da506c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 23 04:46:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:46:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:46:58.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:46:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:46:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:46:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:46:58.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:46:59 np0005593232 nova_compute[250269]: 2026-01-23 09:46:59.209 250273 DEBUG nova.storage.rbd_utils [None req-150bdda3-e14a-41a5-9b9a-1037ffe9394c 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] removing snapshot(cc31018fdb06489cb45341424c74aead) on rbd image(37d4741e-a7c3-49a4-ad2e-c54789bff05f_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 23 04:46:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:46:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1640: 321 pgs: 321 active+clean; 372 MiB data, 758 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 6.2 MiB/s wr, 157 op/s
Jan 23 04:46:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Jan 23 04:46:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Jan 23 04:46:59 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Jan 23 04:46:59 np0005593232 nova_compute[250269]: 2026-01-23 09:46:59.541 250273 DEBUG nova.storage.rbd_utils [None req-150bdda3-e14a-41a5-9b9a-1037ffe9394c 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] creating snapshot(snap) on rbd image(8f15c480-cf5d-4aa4-933d-ac81e0da506c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 04:46:59 np0005593232 nova_compute[250269]: 2026-01-23 09:46:59.710 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:47:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Jan 23 04:47:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Jan 23 04:47:00 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Jan 23 04:47:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:47:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:00.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:47:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:00.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1643: 321 pgs: 321 active+clean; 401 MiB data, 777 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.3 MiB/s wr, 108 op/s
Jan 23 04:47:02 np0005593232 nova_compute[250269]: 2026-01-23 09:47:02.595 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:47:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:47:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:02.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:47:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:02.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1644: 321 pgs: 321 active+clean; 451 MiB data, 808 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 183 op/s
Jan 23 04:47:03 np0005593232 nova_compute[250269]: 2026-01-23 09:47:03.403 250273 INFO nova.virt.libvirt.driver [None req-150bdda3-e14a-41a5-9b9a-1037ffe9394c 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Snapshot image upload complete#033[00m
Jan 23 04:47:03 np0005593232 nova_compute[250269]: 2026-01-23 09:47:03.404 250273 INFO nova.compute.manager [None req-150bdda3-e14a-41a5-9b9a-1037ffe9394c 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Took 6.60 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 23 04:47:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:47:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Jan 23 04:47:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Jan 23 04:47:04 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Jan 23 04:47:04 np0005593232 nova_compute[250269]: 2026-01-23 09:47:04.712 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:47:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:04.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:04.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:47:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:47:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:47:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:47:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1646: 321 pgs: 321 active+clean; 451 MiB data, 808 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 179 op/s
Jan 23 04:47:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:47:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:47:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:47:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:47:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:47:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:47:06 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev fb936b05-b094-43a7-b3c1-68dcef420b24 does not exist
Jan 23 04:47:06 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 64ce3142-8894-4ad7-ba02-8d2545b8c695 does not exist
Jan 23 04:47:06 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ea5df5ac-d9eb-4bae-bb64-a1e301ab2a45 does not exist
Jan 23 04:47:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:47:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:47:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:47:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:47:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:47:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:47:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:47:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:47:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:47:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:47:06 np0005593232 podman[289839]: 2026-01-23 09:47:06.78211791 +0000 UTC m=+0.045993203 container create 2cfc45714bf75b8fc6faa2315f19406c7bb4b9d8c43918232762b7f9397fde50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:47:06 np0005593232 systemd[1]: Started libpod-conmon-2cfc45714bf75b8fc6faa2315f19406c7bb4b9d8c43918232762b7f9397fde50.scope.
Jan 23 04:47:06 np0005593232 podman[289839]: 2026-01-23 09:47:06.756773071 +0000 UTC m=+0.020648354 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:47:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:47:06 np0005593232 podman[289839]: 2026-01-23 09:47:06.881521409 +0000 UTC m=+0.145396682 container init 2cfc45714bf75b8fc6faa2315f19406c7bb4b9d8c43918232762b7f9397fde50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 04:47:06 np0005593232 podman[289839]: 2026-01-23 09:47:06.891287824 +0000 UTC m=+0.155163087 container start 2cfc45714bf75b8fc6faa2315f19406c7bb4b9d8c43918232762b7f9397fde50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_noyce, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:47:06 np0005593232 mystifying_noyce[289855]: 167 167
Jan 23 04:47:06 np0005593232 podman[289839]: 2026-01-23 09:47:06.896387483 +0000 UTC m=+0.160262766 container attach 2cfc45714bf75b8fc6faa2315f19406c7bb4b9d8c43918232762b7f9397fde50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_noyce, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:47:06 np0005593232 systemd[1]: libpod-2cfc45714bf75b8fc6faa2315f19406c7bb4b9d8c43918232762b7f9397fde50.scope: Deactivated successfully.
Jan 23 04:47:06 np0005593232 podman[289839]: 2026-01-23 09:47:06.897680141 +0000 UTC m=+0.161555414 container died 2cfc45714bf75b8fc6faa2315f19406c7bb4b9d8c43918232762b7f9397fde50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_noyce, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:47:06 np0005593232 systemd[1]: var-lib-containers-storage-overlay-497d71192700df7c60d1fcb0ff03aa312b61cf4fa3b1b1240520ae2cbf9adddc-merged.mount: Deactivated successfully.
Jan 23 04:47:06 np0005593232 podman[289839]: 2026-01-23 09:47:06.939103789 +0000 UTC m=+0.202979052 container remove 2cfc45714bf75b8fc6faa2315f19406c7bb4b9d8c43918232762b7f9397fde50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 23 04:47:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:06.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:06 np0005593232 systemd[1]: libpod-conmon-2cfc45714bf75b8fc6faa2315f19406c7bb4b9d8c43918232762b7f9397fde50.scope: Deactivated successfully.
Jan 23 04:47:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:07.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:07 np0005593232 podman[289880]: 2026-01-23 09:47:07.109714015 +0000 UTC m=+0.045795217 container create b39ebad4a20a4c256a192414ae78604bc91ed51e5ab2f4e8e0566a8c19491466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:47:07 np0005593232 systemd[1]: Started libpod-conmon-b39ebad4a20a4c256a192414ae78604bc91ed51e5ab2f4e8e0566a8c19491466.scope.
Jan 23 04:47:07 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:47:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20523b4e49ec00bfe078526a6619b02a7cac26702f534e4ca146435713b9697c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:47:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20523b4e49ec00bfe078526a6619b02a7cac26702f534e4ca146435713b9697c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:47:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20523b4e49ec00bfe078526a6619b02a7cac26702f534e4ca146435713b9697c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:47:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20523b4e49ec00bfe078526a6619b02a7cac26702f534e4ca146435713b9697c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:47:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20523b4e49ec00bfe078526a6619b02a7cac26702f534e4ca146435713b9697c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:47:07 np0005593232 podman[289880]: 2026-01-23 09:47:07.091601737 +0000 UTC m=+0.027682959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:47:07 np0005593232 podman[289880]: 2026-01-23 09:47:07.187270757 +0000 UTC m=+0.123351979 container init b39ebad4a20a4c256a192414ae78604bc91ed51e5ab2f4e8e0566a8c19491466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:47:07 np0005593232 podman[289880]: 2026-01-23 09:47:07.19558408 +0000 UTC m=+0.131665282 container start b39ebad4a20a4c256a192414ae78604bc91ed51e5ab2f4e8e0566a8c19491466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_rhodes, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:47:07 np0005593232 podman[289880]: 2026-01-23 09:47:07.19869253 +0000 UTC m=+0.134773732 container attach b39ebad4a20a4c256a192414ae78604bc91ed51e5ab2f4e8e0566a8c19491466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_rhodes, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 04:47:07 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:47:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1647: 321 pgs: 321 active+clean; 451 MiB data, 808 MiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 5.9 MiB/s wr, 205 op/s
Jan 23 04:47:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:47:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:47:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:47:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:47:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:47:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:47:07 np0005593232 nova_compute[250269]: 2026-01-23 09:47:07.598 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:47:07 np0005593232 nervous_rhodes[289896]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:47:07 np0005593232 nervous_rhodes[289896]: --> relative data size: 1.0
Jan 23 04:47:07 np0005593232 nervous_rhodes[289896]: --> All data devices are unavailable
Jan 23 04:47:07 np0005593232 systemd[1]: libpod-b39ebad4a20a4c256a192414ae78604bc91ed51e5ab2f4e8e0566a8c19491466.scope: Deactivated successfully.
Jan 23 04:47:07 np0005593232 podman[289880]: 2026-01-23 09:47:07.991542775 +0000 UTC m=+0.927624017 container died b39ebad4a20a4c256a192414ae78604bc91ed51e5ab2f4e8e0566a8c19491466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 04:47:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay-20523b4e49ec00bfe078526a6619b02a7cac26702f534e4ca146435713b9697c-merged.mount: Deactivated successfully.
Jan 23 04:47:08 np0005593232 podman[289880]: 2026-01-23 09:47:08.066258054 +0000 UTC m=+1.002339256 container remove b39ebad4a20a4c256a192414ae78604bc91ed51e5ab2f4e8e0566a8c19491466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_rhodes, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:47:08 np0005593232 systemd[1]: libpod-conmon-b39ebad4a20a4c256a192414ae78604bc91ed51e5ab2f4e8e0566a8c19491466.scope: Deactivated successfully.
Jan 23 04:47:08 np0005593232 podman[290063]: 2026-01-23 09:47:08.640255756 +0000 UTC m=+0.036107094 container create 878ecde9257874a49630242ce9c705d1377727abc60344055de4d33e1d66d014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 04:47:08 np0005593232 systemd[1]: Started libpod-conmon-878ecde9257874a49630242ce9c705d1377727abc60344055de4d33e1d66d014.scope.
Jan 23 04:47:08 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:47:08 np0005593232 podman[290063]: 2026-01-23 09:47:08.626109193 +0000 UTC m=+0.021960551 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:47:08 np0005593232 podman[290063]: 2026-01-23 09:47:08.72506772 +0000 UTC m=+0.120919068 container init 878ecde9257874a49630242ce9c705d1377727abc60344055de4d33e1d66d014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poincare, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:47:08 np0005593232 podman[290063]: 2026-01-23 09:47:08.734482544 +0000 UTC m=+0.130333882 container start 878ecde9257874a49630242ce9c705d1377727abc60344055de4d33e1d66d014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poincare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 04:47:08 np0005593232 podman[290063]: 2026-01-23 09:47:08.737699308 +0000 UTC m=+0.133550666 container attach 878ecde9257874a49630242ce9c705d1377727abc60344055de4d33e1d66d014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poincare, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:47:08 np0005593232 hardcore_poincare[290079]: 167 167
Jan 23 04:47:08 np0005593232 systemd[1]: libpod-878ecde9257874a49630242ce9c705d1377727abc60344055de4d33e1d66d014.scope: Deactivated successfully.
Jan 23 04:47:08 np0005593232 podman[290063]: 2026-01-23 09:47:08.740432258 +0000 UTC m=+0.136283616 container died 878ecde9257874a49630242ce9c705d1377727abc60344055de4d33e1d66d014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 23 04:47:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7b53b7432f21bc1d298da41e64a417d79a91002c15fee1ac75c99dd058930892-merged.mount: Deactivated successfully.
Jan 23 04:47:08 np0005593232 podman[290063]: 2026-01-23 09:47:08.793100154 +0000 UTC m=+0.188951492 container remove 878ecde9257874a49630242ce9c705d1377727abc60344055de4d33e1d66d014 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 23 04:47:08 np0005593232 systemd[1]: libpod-conmon-878ecde9257874a49630242ce9c705d1377727abc60344055de4d33e1d66d014.scope: Deactivated successfully.
Jan 23 04:47:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:08.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:08 np0005593232 podman[290103]: 2026-01-23 09:47:08.95649586 +0000 UTC m=+0.043367076 container create cb33e91c9b060d2c024ad91f3817411686626200040c8ae471a5691e28b7adbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:47:09 np0005593232 systemd[1]: Started libpod-conmon-cb33e91c9b060d2c024ad91f3817411686626200040c8ae471a5691e28b7adbb.scope.
Jan 23 04:47:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 23 04:47:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:09.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 23 04:47:09 np0005593232 podman[290103]: 2026-01-23 09:47:08.935381194 +0000 UTC m=+0.022252440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:47:09 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:47:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f38f8ee5f3ae7915cd8e46c662b71e7e63864d0baafdb881ccc2c69e10228ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:47:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f38f8ee5f3ae7915cd8e46c662b71e7e63864d0baafdb881ccc2c69e10228ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:47:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f38f8ee5f3ae7915cd8e46c662b71e7e63864d0baafdb881ccc2c69e10228ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:47:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f38f8ee5f3ae7915cd8e46c662b71e7e63864d0baafdb881ccc2c69e10228ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:47:09 np0005593232 podman[290103]: 2026-01-23 09:47:09.052578352 +0000 UTC m=+0.139449588 container init cb33e91c9b060d2c024ad91f3817411686626200040c8ae471a5691e28b7adbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Jan 23 04:47:09 np0005593232 podman[290103]: 2026-01-23 09:47:09.059528585 +0000 UTC m=+0.146399801 container start cb33e91c9b060d2c024ad91f3817411686626200040c8ae471a5691e28b7adbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 23 04:47:09 np0005593232 podman[290103]: 2026-01-23 09:47:09.064252503 +0000 UTC m=+0.151123749 container attach cb33e91c9b060d2c024ad91f3817411686626200040c8ae471a5691e28b7adbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 04:47:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:47:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1648: 321 pgs: 321 active+clean; 451 MiB data, 808 MiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 5.3 MiB/s wr, 215 op/s
Jan 23 04:47:09 np0005593232 nova_compute[250269]: 2026-01-23 09:47:09.714 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:47:09 np0005593232 serene_gates[290120]: {
Jan 23 04:47:09 np0005593232 serene_gates[290120]:    "0": [
Jan 23 04:47:09 np0005593232 serene_gates[290120]:        {
Jan 23 04:47:09 np0005593232 serene_gates[290120]:            "devices": [
Jan 23 04:47:09 np0005593232 serene_gates[290120]:                "/dev/loop3"
Jan 23 04:47:09 np0005593232 serene_gates[290120]:            ],
Jan 23 04:47:09 np0005593232 serene_gates[290120]:            "lv_name": "ceph_lv0",
Jan 23 04:47:09 np0005593232 serene_gates[290120]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:47:09 np0005593232 serene_gates[290120]:            "lv_size": "7511998464",
Jan 23 04:47:09 np0005593232 serene_gates[290120]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:47:09 np0005593232 serene_gates[290120]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:47:09 np0005593232 serene_gates[290120]:            "name": "ceph_lv0",
Jan 23 04:47:09 np0005593232 serene_gates[290120]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:47:09 np0005593232 serene_gates[290120]:            "tags": {
Jan 23 04:47:09 np0005593232 serene_gates[290120]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:47:09 np0005593232 serene_gates[290120]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:47:09 np0005593232 serene_gates[290120]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:47:09 np0005593232 serene_gates[290120]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:47:09 np0005593232 serene_gates[290120]:                "ceph.cluster_name": "ceph",
Jan 23 04:47:09 np0005593232 serene_gates[290120]:                "ceph.crush_device_class": "",
Jan 23 04:47:09 np0005593232 serene_gates[290120]:                "ceph.encrypted": "0",
Jan 23 04:47:09 np0005593232 serene_gates[290120]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:47:09 np0005593232 serene_gates[290120]:                "ceph.osd_id": "0",
Jan 23 04:47:09 np0005593232 serene_gates[290120]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:47:09 np0005593232 serene_gates[290120]:                "ceph.type": "block",
Jan 23 04:47:09 np0005593232 serene_gates[290120]:                "ceph.vdo": "0"
Jan 23 04:47:09 np0005593232 serene_gates[290120]:            },
Jan 23 04:47:09 np0005593232 serene_gates[290120]:            "type": "block",
Jan 23 04:47:09 np0005593232 serene_gates[290120]:            "vg_name": "ceph_vg0"
Jan 23 04:47:09 np0005593232 serene_gates[290120]:        }
Jan 23 04:47:09 np0005593232 serene_gates[290120]:    ]
Jan 23 04:47:09 np0005593232 serene_gates[290120]: }
Jan 23 04:47:09 np0005593232 systemd[1]: libpod-cb33e91c9b060d2c024ad91f3817411686626200040c8ae471a5691e28b7adbb.scope: Deactivated successfully.
Jan 23 04:47:09 np0005593232 podman[290103]: 2026-01-23 09:47:09.814632539 +0000 UTC m=+0.901503775 container died cb33e91c9b060d2c024ad91f3817411686626200040c8ae471a5691e28b7adbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 23 04:47:09 np0005593232 systemd[1]: var-lib-containers-storage-overlay-8f38f8ee5f3ae7915cd8e46c662b71e7e63864d0baafdb881ccc2c69e10228ac-merged.mount: Deactivated successfully.
Jan 23 04:47:09 np0005593232 podman[290103]: 2026-01-23 09:47:09.869374945 +0000 UTC m=+0.956246161 container remove cb33e91c9b060d2c024ad91f3817411686626200040c8ae471a5691e28b7adbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_gates, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:47:09 np0005593232 systemd[1]: libpod-conmon-cb33e91c9b060d2c024ad91f3817411686626200040c8ae471a5691e28b7adbb.scope: Deactivated successfully.
Jan 23 04:47:10 np0005593232 podman[290282]: 2026-01-23 09:47:10.471900239 +0000 UTC m=+0.037297698 container create 9ba30ec1a88fbeaffc77e26ccef09cab00688d604bcb1f85a6c103f76656be0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bhaskara, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:47:10 np0005593232 systemd[1]: Started libpod-conmon-9ba30ec1a88fbeaffc77e26ccef09cab00688d604bcb1f85a6c103f76656be0c.scope.
Jan 23 04:47:10 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:47:10 np0005593232 podman[290282]: 2026-01-23 09:47:10.456957444 +0000 UTC m=+0.022354933 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:47:10 np0005593232 podman[290282]: 2026-01-23 09:47:10.556145087 +0000 UTC m=+0.121542566 container init 9ba30ec1a88fbeaffc77e26ccef09cab00688d604bcb1f85a6c103f76656be0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bhaskara, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:47:10 np0005593232 podman[290282]: 2026-01-23 09:47:10.563046298 +0000 UTC m=+0.128443747 container start 9ba30ec1a88fbeaffc77e26ccef09cab00688d604bcb1f85a6c103f76656be0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bhaskara, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:47:10 np0005593232 podman[290282]: 2026-01-23 09:47:10.56653824 +0000 UTC m=+0.131935719 container attach 9ba30ec1a88fbeaffc77e26ccef09cab00688d604bcb1f85a6c103f76656be0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bhaskara, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 04:47:10 np0005593232 frosty_bhaskara[290299]: 167 167
Jan 23 04:47:10 np0005593232 systemd[1]: libpod-9ba30ec1a88fbeaffc77e26ccef09cab00688d604bcb1f85a6c103f76656be0c.scope: Deactivated successfully.
Jan 23 04:47:10 np0005593232 podman[290282]: 2026-01-23 09:47:10.567270331 +0000 UTC m=+0.132667790 container died 9ba30ec1a88fbeaffc77e26ccef09cab00688d604bcb1f85a6c103f76656be0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Jan 23 04:47:10 np0005593232 systemd[1]: var-lib-containers-storage-overlay-27f79550c81105463c4259078cc47b816c67d446910b4cbeba513e071ba7b746-merged.mount: Deactivated successfully.
Jan 23 04:47:10 np0005593232 podman[290282]: 2026-01-23 09:47:10.604043304 +0000 UTC m=+0.169440763 container remove 9ba30ec1a88fbeaffc77e26ccef09cab00688d604bcb1f85a6c103f76656be0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:47:10 np0005593232 systemd[1]: libpod-conmon-9ba30ec1a88fbeaffc77e26ccef09cab00688d604bcb1f85a6c103f76656be0c.scope: Deactivated successfully.
Jan 23 04:47:10 np0005593232 podman[290323]: 2026-01-23 09:47:10.751703511 +0000 UTC m=+0.036845776 container create 4a9723ebaf453351390026a2698ed8458bd452b7ec061ec30bdff155f9a8f2f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 04:47:10 np0005593232 systemd[1]: Started libpod-conmon-4a9723ebaf453351390026a2698ed8458bd452b7ec061ec30bdff155f9a8f2f2.scope.
Jan 23 04:47:10 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:47:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7547f9abe4e49c0c11fbf25652fab1420d54f68d402e82f0f7efbb3c398c19b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:47:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7547f9abe4e49c0c11fbf25652fab1420d54f68d402e82f0f7efbb3c398c19b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:47:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7547f9abe4e49c0c11fbf25652fab1420d54f68d402e82f0f7efbb3c398c19b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:47:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7547f9abe4e49c0c11fbf25652fab1420d54f68d402e82f0f7efbb3c398c19b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:47:10 np0005593232 podman[290323]: 2026-01-23 09:47:10.826646827 +0000 UTC m=+0.111789112 container init 4a9723ebaf453351390026a2698ed8458bd452b7ec061ec30bdff155f9a8f2f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:47:10 np0005593232 podman[290323]: 2026-01-23 09:47:10.736164688 +0000 UTC m=+0.021306973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:47:10 np0005593232 podman[290323]: 2026-01-23 09:47:10.833570098 +0000 UTC m=+0.118712363 container start 4a9723ebaf453351390026a2698ed8458bd452b7ec061ec30bdff155f9a8f2f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 04:47:10 np0005593232 podman[290323]: 2026-01-23 09:47:10.837331448 +0000 UTC m=+0.122473733 container attach 4a9723ebaf453351390026a2698ed8458bd452b7ec061ec30bdff155f9a8f2f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 04:47:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:10.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:11.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1649: 321 pgs: 321 active+clean; 451 MiB data, 808 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 2.7 MiB/s wr, 153 op/s
Jan 23 04:47:11 np0005593232 quizzical_banach[290339]: {
Jan 23 04:47:11 np0005593232 quizzical_banach[290339]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:47:11 np0005593232 quizzical_banach[290339]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:47:11 np0005593232 quizzical_banach[290339]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:47:11 np0005593232 quizzical_banach[290339]:        "osd_id": 0,
Jan 23 04:47:11 np0005593232 quizzical_banach[290339]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:47:11 np0005593232 quizzical_banach[290339]:        "type": "bluestore"
Jan 23 04:47:11 np0005593232 quizzical_banach[290339]:    }
Jan 23 04:47:11 np0005593232 quizzical_banach[290339]: }
Jan 23 04:47:11 np0005593232 systemd[1]: libpod-4a9723ebaf453351390026a2698ed8458bd452b7ec061ec30bdff155f9a8f2f2.scope: Deactivated successfully.
Jan 23 04:47:11 np0005593232 conmon[290339]: conmon 4a9723ebaf4533513900 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4a9723ebaf453351390026a2698ed8458bd452b7ec061ec30bdff155f9a8f2f2.scope/container/memory.events
Jan 23 04:47:11 np0005593232 podman[290323]: 2026-01-23 09:47:11.74974718 +0000 UTC m=+1.034889465 container died 4a9723ebaf453351390026a2698ed8458bd452b7ec061ec30bdff155f9a8f2f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 04:47:11 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a7547f9abe4e49c0c11fbf25652fab1420d54f68d402e82f0f7efbb3c398c19b-merged.mount: Deactivated successfully.
Jan 23 04:47:11 np0005593232 podman[290323]: 2026-01-23 09:47:11.814326254 +0000 UTC m=+1.099468519 container remove 4a9723ebaf453351390026a2698ed8458bd452b7ec061ec30bdff155f9a8f2f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 23 04:47:11 np0005593232 systemd[1]: libpod-conmon-4a9723ebaf453351390026a2698ed8458bd452b7ec061ec30bdff155f9a8f2f2.scope: Deactivated successfully.
Jan 23 04:47:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:47:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:47:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:47:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:47:11 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 021e48c7-cce0-458c-abe3-e0f7bca6c83c does not exist
Jan 23 04:47:11 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c7200a23-0133-4ad4-94c9-5772d10c21e8 does not exist
Jan 23 04:47:11 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 40c61f4d-778c-4690-8410-a76db41b6280 does not exist
Jan 23 04:47:12 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:47:12 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:47:12 np0005593232 nova_compute[250269]: 2026-01-23 09:47:12.601 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:47:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:47:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:12.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:47:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:13.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1650: 321 pgs: 321 active+clean; 451 MiB data, 808 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 12 KiB/s wr, 85 op/s
Jan 23 04:47:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:47:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Jan 23 04:47:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Jan 23 04:47:14 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Jan 23 04:47:14 np0005593232 nova_compute[250269]: 2026-01-23 09:47:14.718 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:47:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:14.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:47:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:15.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:47:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1652: 321 pgs: 321 active+clean; 451 MiB data, 808 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 12 KiB/s wr, 85 op/s
Jan 23 04:47:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Jan 23 04:47:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Jan 23 04:47:15 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Jan 23 04:47:15 np0005593232 nova_compute[250269]: 2026-01-23 09:47:15.986 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:47:15 np0005593232 nova_compute[250269]: 2026-01-23 09:47:15.986 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:47:16 np0005593232 podman[290424]: 2026-01-23 09:47:16.469946085 +0000 UTC m=+0.123530084 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:47:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Jan 23 04:47:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Jan 23 04:47:16 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Jan 23 04:47:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:16.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:17.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:17 np0005593232 nova_compute[250269]: 2026-01-23 09:47:17.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:47:17 np0005593232 nova_compute[250269]: 2026-01-23 09:47:17.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:47:17 np0005593232 nova_compute[250269]: 2026-01-23 09:47:17.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:47:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1655: 321 pgs: 9 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 310 active+clean; 447 MiB data, 828 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.1 MiB/s wr, 141 op/s
Jan 23 04:47:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Jan 23 04:47:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Jan 23 04:47:17 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Jan 23 04:47:17 np0005593232 nova_compute[250269]: 2026-01-23 09:47:17.603 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:47:17 np0005593232 nova_compute[250269]: 2026-01-23 09:47:17.782 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-37d4741e-a7c3-49a4-ad2e-c54789bff05f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:47:17 np0005593232 nova_compute[250269]: 2026-01-23 09:47:17.782 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-37d4741e-a7c3-49a4-ad2e-c54789bff05f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:47:17 np0005593232 nova_compute[250269]: 2026-01-23 09:47:17.783 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 04:47:17 np0005593232 nova_compute[250269]: 2026-01-23 09:47:17.783 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 37d4741e-a7c3-49a4-ad2e-c54789bff05f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:47:18 np0005593232 nova_compute[250269]: 2026-01-23 09:47:18.108 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:47:18 np0005593232 nova_compute[250269]: 2026-01-23 09:47:18.756 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:47:18 np0005593232 nova_compute[250269]: 2026-01-23 09:47:18.773 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-37d4741e-a7c3-49a4-ad2e-c54789bff05f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:47:18 np0005593232 nova_compute[250269]: 2026-01-23 09:47:18.773 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 04:47:18 np0005593232 nova_compute[250269]: 2026-01-23 09:47:18.774 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:47:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:18.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:19.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:19 np0005593232 nova_compute[250269]: 2026-01-23 09:47:19.290 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:47:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:47:19 np0005593232 nova_compute[250269]: 2026-01-23 09:47:19.316 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:47:19 np0005593232 nova_compute[250269]: 2026-01-23 09:47:19.317 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:47:19 np0005593232 nova_compute[250269]: 2026-01-23 09:47:19.317 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:47:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1657: 321 pgs: 16 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 301 active+clean; 450 MiB data, 828 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 9.5 MiB/s wr, 391 op/s
Jan 23 04:47:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Jan 23 04:47:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Jan 23 04:47:19 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Jan 23 04:47:19 np0005593232 nova_compute[250269]: 2026-01-23 09:47:19.719 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:47:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:20.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:47:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:21.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:47:21 np0005593232 nova_compute[250269]: 2026-01-23 09:47:21.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:47:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1659: 321 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 312 active+clean; 372 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 8.0 MiB/s wr, 388 op/s
Jan 23 04:47:22 np0005593232 nova_compute[250269]: 2026-01-23 09:47:22.605 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:47:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:47:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/265410423' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:47:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:22.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:23.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1660: 321 pgs: 321 active+clean; 158 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 4.7 MiB/s wr, 413 op/s
Jan 23 04:47:24 np0005593232 nova_compute[250269]: 2026-01-23 09:47:24.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:47:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:47:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Jan 23 04:47:24 np0005593232 nova_compute[250269]: 2026-01-23 09:47:24.317 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:47:24 np0005593232 nova_compute[250269]: 2026-01-23 09:47:24.318 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:47:24 np0005593232 nova_compute[250269]: 2026-01-23 09:47:24.318 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:47:24 np0005593232 nova_compute[250269]: 2026-01-23 09:47:24.318 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:47:24 np0005593232 nova_compute[250269]: 2026-01-23 09:47:24.318 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:47:24 np0005593232 podman[290504]: 2026-01-23 09:47:24.400509646 +0000 UTC m=+0.053225036 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 04:47:24 np0005593232 nova_compute[250269]: 2026-01-23 09:47:24.555 250273 DEBUG oslo_concurrency.lockutils [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Acquiring lock "37d4741e-a7c3-49a4-ad2e-c54789bff05f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:47:24 np0005593232 nova_compute[250269]: 2026-01-23 09:47:24.556 250273 DEBUG oslo_concurrency.lockutils [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Lock "37d4741e-a7c3-49a4-ad2e-c54789bff05f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:47:24 np0005593232 nova_compute[250269]: 2026-01-23 09:47:24.557 250273 DEBUG oslo_concurrency.lockutils [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Acquiring lock "37d4741e-a7c3-49a4-ad2e-c54789bff05f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:47:24 np0005593232 nova_compute[250269]: 2026-01-23 09:47:24.557 250273 DEBUG oslo_concurrency.lockutils [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Lock "37d4741e-a7c3-49a4-ad2e-c54789bff05f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:47:24 np0005593232 nova_compute[250269]: 2026-01-23 09:47:24.557 250273 DEBUG oslo_concurrency.lockutils [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Lock "37d4741e-a7c3-49a4-ad2e-c54789bff05f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:47:24 np0005593232 nova_compute[250269]: 2026-01-23 09:47:24.559 250273 INFO nova.compute.manager [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Terminating instance#033[00m
Jan 23 04:47:24 np0005593232 nova_compute[250269]: 2026-01-23 09:47:24.560 250273 DEBUG oslo_concurrency.lockutils [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Acquiring lock "refresh_cache-37d4741e-a7c3-49a4-ad2e-c54789bff05f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:47:24 np0005593232 nova_compute[250269]: 2026-01-23 09:47:24.560 250273 DEBUG oslo_concurrency.lockutils [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Acquired lock "refresh_cache-37d4741e-a7c3-49a4-ad2e-c54789bff05f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:47:24 np0005593232 nova_compute[250269]: 2026-01-23 09:47:24.560 250273 DEBUG nova.network.neutron [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:47:24 np0005593232 nova_compute[250269]: 2026-01-23 09:47:24.721 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:47:24 np0005593232 nova_compute[250269]: 2026-01-23 09:47:24.778 250273 DEBUG nova.network.neutron [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:47:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Jan 23 04:47:24 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Jan 23 04:47:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:24.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:25.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:25 np0005593232 nova_compute[250269]: 2026-01-23 09:47:25.098 250273 DEBUG nova.network.neutron [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:47:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:47:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3873565128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:47:25 np0005593232 nova_compute[250269]: 2026-01-23 09:47:25.120 250273 DEBUG oslo_concurrency.lockutils [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Releasing lock "refresh_cache-37d4741e-a7c3-49a4-ad2e-c54789bff05f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:47:25 np0005593232 nova_compute[250269]: 2026-01-23 09:47:25.121 250273 DEBUG nova.compute.manager [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 04:47:25 np0005593232 nova_compute[250269]: 2026-01-23 09:47:25.137 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.819s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:47:25 np0005593232 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Jan 23 04:47:25 np0005593232 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000003a.scope: Consumed 15.369s CPU time.
Jan 23 04:47:25 np0005593232 systemd-machined[215836]: Machine qemu-23-instance-0000003a terminated.
Jan 23 04:47:25 np0005593232 nova_compute[250269]: 2026-01-23 09:47:25.353 250273 INFO nova.virt.libvirt.driver [-] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Instance destroyed successfully.#033[00m
Jan 23 04:47:25 np0005593232 nova_compute[250269]: 2026-01-23 09:47:25.354 250273 DEBUG nova.objects.instance [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Lazy-loading 'resources' on Instance uuid 37d4741e-a7c3-49a4-ad2e-c54789bff05f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:47:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1662: 321 pgs: 321 active+clean; 158 MiB data, 686 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 306 op/s
Jan 23 04:47:25 np0005593232 nova_compute[250269]: 2026-01-23 09:47:25.438 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:47:25 np0005593232 nova_compute[250269]: 2026-01-23 09:47:25.438 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:47:25 np0005593232 nova_compute[250269]: 2026-01-23 09:47:25.616 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:47:25 np0005593232 nova_compute[250269]: 2026-01-23 09:47:25.618 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4575MB free_disk=20.922874450683594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:47:25 np0005593232 nova_compute[250269]: 2026-01-23 09:47:25.618 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:47:25 np0005593232 nova_compute[250269]: 2026-01-23 09:47:25.618 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:47:25 np0005593232 nova_compute[250269]: 2026-01-23 09:47:25.730 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 37d4741e-a7c3-49a4-ad2e-c54789bff05f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 04:47:25 np0005593232 nova_compute[250269]: 2026-01-23 09:47:25.731 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 04:47:25 np0005593232 nova_compute[250269]: 2026-01-23 09:47:25.731 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 04:47:25 np0005593232 nova_compute[250269]: 2026-01-23 09:47:25.776 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:47:25 np0005593232 nova_compute[250269]: 2026-01-23 09:47:25.881 250273 INFO nova.virt.libvirt.driver [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Deleting instance files /var/lib/nova/instances/37d4741e-a7c3-49a4-ad2e-c54789bff05f_del
Jan 23 04:47:25 np0005593232 nova_compute[250269]: 2026-01-23 09:47:25.882 250273 INFO nova.virt.libvirt.driver [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Deletion of /var/lib/nova/instances/37d4741e-a7c3-49a4-ad2e-c54789bff05f_del complete
Jan 23 04:47:25 np0005593232 nova_compute[250269]: 2026-01-23 09:47:25.947 250273 INFO nova.compute.manager [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Took 0.83 seconds to destroy the instance on the hypervisor.
Jan 23 04:47:25 np0005593232 nova_compute[250269]: 2026-01-23 09:47:25.949 250273 DEBUG oslo.service.loopingcall [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 23 04:47:25 np0005593232 nova_compute[250269]: 2026-01-23 09:47:25.950 250273 DEBUG nova.compute.manager [-] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 23 04:47:25 np0005593232 nova_compute[250269]: 2026-01-23 09:47:25.950 250273 DEBUG nova.network.neutron [-] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 23 04:47:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:47:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3033669675' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:47:26 np0005593232 nova_compute[250269]: 2026-01-23 09:47:26.247 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:47:26 np0005593232 nova_compute[250269]: 2026-01-23 09:47:26.253 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 04:47:26 np0005593232 nova_compute[250269]: 2026-01-23 09:47:26.277 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 04:47:26 np0005593232 nova_compute[250269]: 2026-01-23 09:47:26.281 250273 DEBUG nova.network.neutron [-] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 04:47:26 np0005593232 nova_compute[250269]: 2026-01-23 09:47:26.308 250273 DEBUG nova.network.neutron [-] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 04:47:26 np0005593232 nova_compute[250269]: 2026-01-23 09:47:26.312 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 04:47:26 np0005593232 nova_compute[250269]: 2026-01-23 09:47:26.313 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:47:26 np0005593232 nova_compute[250269]: 2026-01-23 09:47:26.531 250273 INFO nova.compute.manager [-] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Took 0.58 seconds to deallocate network for instance.
Jan 23 04:47:26 np0005593232 nova_compute[250269]: 2026-01-23 09:47:26.581 250273 DEBUG oslo_concurrency.lockutils [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:47:26 np0005593232 nova_compute[250269]: 2026-01-23 09:47:26.582 250273 DEBUG oslo_concurrency.lockutils [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:47:26 np0005593232 nova_compute[250269]: 2026-01-23 09:47:26.634 250273 DEBUG oslo_concurrency.processutils [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:47:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:26.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:47:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:27.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:47:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:47:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2468765719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:47:27 np0005593232 nova_compute[250269]: 2026-01-23 09:47:27.069 250273 DEBUG oslo_concurrency.processutils [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:47:27 np0005593232 nova_compute[250269]: 2026-01-23 09:47:27.075 250273 DEBUG nova.compute.provider_tree [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 04:47:27 np0005593232 nova_compute[250269]: 2026-01-23 09:47:27.099 250273 DEBUG nova.scheduler.client.report [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 04:47:27 np0005593232 nova_compute[250269]: 2026-01-23 09:47:27.123 250273 DEBUG oslo_concurrency.lockutils [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:47:27 np0005593232 nova_compute[250269]: 2026-01-23 09:47:27.152 250273 INFO nova.scheduler.client.report [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Deleted allocations for instance 37d4741e-a7c3-49a4-ad2e-c54789bff05f
Jan 23 04:47:27 np0005593232 nova_compute[250269]: 2026-01-23 09:47:27.221 250273 DEBUG oslo_concurrency.lockutils [None req-d1151b18-2db4-4205-805a-37c0a7bb1197 8d1d7c58442749759ba7dc3a19799796 5d69aaa276f94de98e4011fa17428b40 - - default default] Lock "37d4741e-a7c3-49a4-ad2e-c54789bff05f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:47:27 np0005593232 nova_compute[250269]: 2026-01-23 09:47:27.313 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 04:47:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1663: 321 pgs: 321 active+clean; 81 MiB data, 648 MiB used, 20 GiB / 21 GiB avail; 153 KiB/s rd, 57 KiB/s wr, 204 op/s
Jan 23 04:47:27 np0005593232 nova_compute[250269]: 2026-01-23 09:47:27.607 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:47:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Jan 23 04:47:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Jan 23 04:47:27 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Jan 23 04:47:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Jan 23 04:47:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:28.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:29.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1665: 321 pgs: 321 active+clean; 41 MiB data, 594 MiB used, 20 GiB / 21 GiB avail; 119 KiB/s rd, 7.6 KiB/s wr, 174 op/s
Jan 23 04:47:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Jan 23 04:47:29 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Jan 23 04:47:29 np0005593232 nova_compute[250269]: 2026-01-23 09:47:29.723 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:47:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:47:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:30.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:47:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:47:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:31.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:47:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1667: 321 pgs: 321 active+clean; 41 MiB data, 584 MiB used, 20 GiB / 21 GiB avail; 96 KiB/s rd, 7.7 KiB/s wr, 140 op/s
Jan 23 04:47:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Jan 23 04:47:32 np0005593232 nova_compute[250269]: 2026-01-23 09:47:32.609 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:47:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:47:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:32.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:47:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:47:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:33.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:47:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Jan 23 04:47:33 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Jan 23 04:47:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1669: 321 pgs: 321 active+clean; 41 MiB data, 584 MiB used, 20 GiB / 21 GiB avail; 77 KiB/s rd, 4.8 KiB/s wr, 106 op/s
Jan 23 04:47:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:47:34 np0005593232 nova_compute[250269]: 2026-01-23 09:47:34.725 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:47:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:47:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:34.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:47:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:47:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:35.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:47:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1670: 321 pgs: 321 active+clean; 41 MiB data, 584 MiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 3.9 KiB/s wr, 68 op/s
Jan 23 04:47:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:37.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:37.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:47:37
Jan 23 04:47:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:47:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:47:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'default.rgw.control', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'vms', 'images']
Jan 23 04:47:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:47:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1671: 321 pgs: 321 active+clean; 53 MiB data, 568 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 939 KiB/s wr, 93 op/s
Jan 23 04:47:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:47:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:47:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:47:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:47:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:47:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:47:37 np0005593232 nova_compute[250269]: 2026-01-23 09:47:37.611 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:47:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:47:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:47:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:47:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:47:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:47:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:47:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:47:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:47:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:47:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:47:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:39.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:39.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:47:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Jan 23 04:47:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Jan 23 04:47:39 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Jan 23 04:47:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1673: 321 pgs: 321 active+clean; 71 MiB data, 569 MiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 2.2 MiB/s wr, 42 op/s
Jan 23 04:47:39 np0005593232 nova_compute[250269]: 2026-01-23 09:47:39.727 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:47:40 np0005593232 nova_compute[250269]: 2026-01-23 09:47:40.352 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769161645.3510551, 37d4741e-a7c3-49a4-ad2e-c54789bff05f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 04:47:40 np0005593232 nova_compute[250269]: 2026-01-23 09:47:40.353 250273 INFO nova.compute.manager [-] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] VM Stopped (Lifecycle Event)
Jan 23 04:47:40 np0005593232 nova_compute[250269]: 2026-01-23 09:47:40.507 250273 DEBUG nova.compute.manager [None req-600d774a-75d5-477f-8053-01343b0b57db - - - - - -] [instance: 37d4741e-a7c3-49a4-ad2e-c54789bff05f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 04:47:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:47:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:41.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:47:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:41.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1674: 321 pgs: 321 active+clean; 78 MiB data, 570 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 2.3 MiB/s wr, 56 op/s
Jan 23 04:47:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:47:42.602 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:47:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:47:42.603 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:47:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:47:42.603 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:47:42 np0005593232 nova_compute[250269]: 2026-01-23 09:47:42.613 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:47:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:47:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:43.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:47:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:43.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1675: 321 pgs: 321 active+clean; 88 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Jan 23 04:47:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:47:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Jan 23 04:47:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Jan 23 04:47:44 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Jan 23 04:47:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 04:47:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1464957199' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 04:47:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 04:47:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1464957199' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 04:47:44 np0005593232 nova_compute[250269]: 2026-01-23 09:47:44.728 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:47:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:45.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:45.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1677: 321 pgs: 321 active+clean; 88 MiB data, 589 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.7 MiB/s wr, 41 op/s
Jan 23 04:47:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:47:45.563 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 04:47:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:47:45.564 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 04:47:45 np0005593232 nova_compute[250269]: 2026-01-23 09:47:45.603 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:47:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:47:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:47:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:47:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:47:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Jan 23 04:47:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:47:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 04:47:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:47:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:47:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:47:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 04:47:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:47:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:47:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:47:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:47:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:47:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:47:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:47:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:47:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:47:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:47:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:47:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:47:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:47.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:47.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1678: 321 pgs: 321 active+clean; 92 MiB data, 590 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 541 KiB/s wr, 39 op/s
Jan 23 04:47:47 np0005593232 podman[290676]: 2026-01-23 09:47:47.463840704 +0000 UTC m=+0.121604781 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 23 04:47:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:47:47.566 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:47:47 np0005593232 nova_compute[250269]: 2026-01-23 09:47:47.614 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:47:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:49.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:49.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:47:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1679: 321 pgs: 321 active+clean; 109 MiB data, 593 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 751 KiB/s wr, 45 op/s
Jan 23 04:47:49 np0005593232 nova_compute[250269]: 2026-01-23 09:47:49.729 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:47:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:51.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:51.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1680: 321 pgs: 321 active+clean; 127 MiB data, 605 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.9 MiB/s wr, 40 op/s
Jan 23 04:47:51 np0005593232 ovn_controller[151001]: 2026-01-23T09:47:51Z|00179|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 23 04:47:52 np0005593232 nova_compute[250269]: 2026-01-23 09:47:52.664 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:47:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:53.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:47:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:53.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:47:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1681: 321 pgs: 321 active+clean; 134 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Jan 23 04:47:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:47:54 np0005593232 nova_compute[250269]: 2026-01-23 09:47:54.731 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:47:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:55.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:55.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Jan 23 04:47:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Jan 23 04:47:55 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Jan 23 04:47:55 np0005593232 podman[290706]: 2026-01-23 09:47:55.389777225 +0000 UTC m=+0.052034922 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 04:47:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1683: 321 pgs: 321 active+clean; 134 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 113 op/s
Jan 23 04:47:55 np0005593232 nova_compute[250269]: 2026-01-23 09:47:55.544 250273 DEBUG oslo_concurrency.lockutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Acquiring lock "9fd1f64a-4b5b-4638-9411-04027230851c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:47:55 np0005593232 nova_compute[250269]: 2026-01-23 09:47:55.544 250273 DEBUG oslo_concurrency.lockutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:47:55 np0005593232 nova_compute[250269]: 2026-01-23 09:47:55.569 250273 DEBUG nova.compute.manager [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:47:55 np0005593232 nova_compute[250269]: 2026-01-23 09:47:55.653 250273 DEBUG oslo_concurrency.lockutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:47:55 np0005593232 nova_compute[250269]: 2026-01-23 09:47:55.654 250273 DEBUG oslo_concurrency.lockutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:47:55 np0005593232 nova_compute[250269]: 2026-01-23 09:47:55.662 250273 DEBUG nova.virt.hardware [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:47:55 np0005593232 nova_compute[250269]: 2026-01-23 09:47:55.663 250273 INFO nova.compute.claims [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:47:55 np0005593232 nova_compute[250269]: 2026-01-23 09:47:55.796 250273 DEBUG oslo_concurrency.processutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:47:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:47:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3273666755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:47:56 np0005593232 nova_compute[250269]: 2026-01-23 09:47:56.267 250273 DEBUG oslo_concurrency.processutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:47:56 np0005593232 nova_compute[250269]: 2026-01-23 09:47:56.274 250273 DEBUG nova.compute.provider_tree [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:47:56 np0005593232 nova_compute[250269]: 2026-01-23 09:47:56.295 250273 DEBUG nova.scheduler.client.report [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:47:56 np0005593232 nova_compute[250269]: 2026-01-23 09:47:56.328 250273 DEBUG oslo_concurrency.lockutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:47:56 np0005593232 nova_compute[250269]: 2026-01-23 09:47:56.329 250273 DEBUG nova.compute.manager [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 04:47:56 np0005593232 nova_compute[250269]: 2026-01-23 09:47:56.398 250273 DEBUG nova.compute.manager [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 04:47:56 np0005593232 nova_compute[250269]: 2026-01-23 09:47:56.399 250273 DEBUG nova.network.neutron [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:47:56 np0005593232 nova_compute[250269]: 2026-01-23 09:47:56.432 250273 INFO nova.virt.libvirt.driver [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:47:56 np0005593232 nova_compute[250269]: 2026-01-23 09:47:56.468 250273 DEBUG nova.compute.manager [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:47:56 np0005593232 nova_compute[250269]: 2026-01-23 09:47:56.577 250273 DEBUG nova.compute.manager [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:47:56 np0005593232 nova_compute[250269]: 2026-01-23 09:47:56.579 250273 DEBUG nova.virt.libvirt.driver [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:47:56 np0005593232 nova_compute[250269]: 2026-01-23 09:47:56.579 250273 INFO nova.virt.libvirt.driver [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Creating image(s)#033[00m
Jan 23 04:47:56 np0005593232 nova_compute[250269]: 2026-01-23 09:47:56.606 250273 DEBUG nova.storage.rbd_utils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] rbd image 9fd1f64a-4b5b-4638-9411-04027230851c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:47:56 np0005593232 nova_compute[250269]: 2026-01-23 09:47:56.634 250273 DEBUG nova.storage.rbd_utils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] rbd image 9fd1f64a-4b5b-4638-9411-04027230851c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:47:56 np0005593232 nova_compute[250269]: 2026-01-23 09:47:56.669 250273 DEBUG nova.storage.rbd_utils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] rbd image 9fd1f64a-4b5b-4638-9411-04027230851c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:47:56 np0005593232 nova_compute[250269]: 2026-01-23 09:47:56.673 250273 DEBUG oslo_concurrency.processutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:47:56 np0005593232 nova_compute[250269]: 2026-01-23 09:47:56.741 250273 DEBUG oslo_concurrency.processutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:47:56 np0005593232 nova_compute[250269]: 2026-01-23 09:47:56.742 250273 DEBUG oslo_concurrency.lockutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:47:56 np0005593232 nova_compute[250269]: 2026-01-23 09:47:56.743 250273 DEBUG oslo_concurrency.lockutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:47:56 np0005593232 nova_compute[250269]: 2026-01-23 09:47:56.743 250273 DEBUG oslo_concurrency.lockutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:47:56 np0005593232 nova_compute[250269]: 2026-01-23 09:47:56.772 250273 DEBUG nova.storage.rbd_utils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] rbd image 9fd1f64a-4b5b-4638-9411-04027230851c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:47:56 np0005593232 nova_compute[250269]: 2026-01-23 09:47:56.776 250273 DEBUG oslo_concurrency.processutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 9fd1f64a-4b5b-4638-9411-04027230851c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:47:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:57.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:57.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Jan 23 04:47:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1684: 321 pgs: 321 active+clean; 149 MiB data, 611 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.7 MiB/s wr, 137 op/s
Jan 23 04:47:57 np0005593232 nova_compute[250269]: 2026-01-23 09:47:57.455 250273 DEBUG nova.policy [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d814ef2afe04103bb6aa24724d61b11', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '15c7fbc4d6794364830639a1fee9ecf0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 04:47:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Jan 23 04:47:57 np0005593232 nova_compute[250269]: 2026-01-23 09:47:57.666 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:47:57 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Jan 23 04:47:58 np0005593232 nova_compute[250269]: 2026-01-23 09:47:58.028 250273 DEBUG oslo_concurrency.processutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 9fd1f64a-4b5b-4638-9411-04027230851c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:47:58 np0005593232 nova_compute[250269]: 2026-01-23 09:47:58.114 250273 DEBUG nova.storage.rbd_utils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] resizing rbd image 9fd1f64a-4b5b-4638-9411-04027230851c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 04:47:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:47:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:47:59.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:47:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:47:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:47:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:47:59.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:47:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Jan 23 04:47:59 np0005593232 nova_compute[250269]: 2026-01-23 09:47:59.256 250273 DEBUG nova.objects.instance [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lazy-loading 'migration_context' on Instance uuid 9fd1f64a-4b5b-4638-9411-04027230851c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:47:59 np0005593232 nova_compute[250269]: 2026-01-23 09:47:59.274 250273 DEBUG nova.virt.libvirt.driver [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:47:59 np0005593232 nova_compute[250269]: 2026-01-23 09:47:59.274 250273 DEBUG nova.virt.libvirt.driver [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Ensure instance console log exists: /var/lib/nova/instances/9fd1f64a-4b5b-4638-9411-04027230851c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:47:59 np0005593232 nova_compute[250269]: 2026-01-23 09:47:59.275 250273 DEBUG oslo_concurrency.lockutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:47:59 np0005593232 nova_compute[250269]: 2026-01-23 09:47:59.275 250273 DEBUG oslo_concurrency.lockutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:47:59 np0005593232 nova_compute[250269]: 2026-01-23 09:47:59.275 250273 DEBUG oslo_concurrency.lockutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:47:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1686: 321 pgs: 321 active+clean; 207 MiB data, 627 MiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 5.0 MiB/s wr, 261 op/s
Jan 23 04:47:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Jan 23 04:47:59 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Jan 23 04:47:59 np0005593232 nova_compute[250269]: 2026-01-23 09:47:59.732 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:00 np0005593232 nova_compute[250269]: 2026-01-23 09:48:00.196 250273 DEBUG nova.network.neutron [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Successfully created port: 91e73992-335d-44cf-a5ca-3882d9ba3477 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 04:48:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:01.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:01.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1688: 321 pgs: 321 active+clean; 219 MiB data, 634 MiB used, 20 GiB / 21 GiB avail; 7.5 MiB/s rd, 5.7 MiB/s wr, 321 op/s
Jan 23 04:48:01 np0005593232 nova_compute[250269]: 2026-01-23 09:48:01.499 250273 DEBUG nova.network.neutron [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Successfully updated port: 91e73992-335d-44cf-a5ca-3882d9ba3477 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 04:48:01 np0005593232 nova_compute[250269]: 2026-01-23 09:48:01.515 250273 DEBUG oslo_concurrency.lockutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Acquiring lock "refresh_cache-9fd1f64a-4b5b-4638-9411-04027230851c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:48:01 np0005593232 nova_compute[250269]: 2026-01-23 09:48:01.515 250273 DEBUG oslo_concurrency.lockutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Acquired lock "refresh_cache-9fd1f64a-4b5b-4638-9411-04027230851c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:48:01 np0005593232 nova_compute[250269]: 2026-01-23 09:48:01.515 250273 DEBUG nova.network.neutron [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:48:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Jan 23 04:48:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Jan 23 04:48:01 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Jan 23 04:48:01 np0005593232 nova_compute[250269]: 2026-01-23 09:48:01.613 250273 DEBUG nova.compute.manager [req-d6717d01-030e-430e-9627-629f324ae500 req-d73d7472-8411-4f82-bbe1-a5ba8c56e23e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Received event network-changed-91e73992-335d-44cf-a5ca-3882d9ba3477 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:48:01 np0005593232 nova_compute[250269]: 2026-01-23 09:48:01.613 250273 DEBUG nova.compute.manager [req-d6717d01-030e-430e-9627-629f324ae500 req-d73d7472-8411-4f82-bbe1-a5ba8c56e23e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Refreshing instance network info cache due to event network-changed-91e73992-335d-44cf-a5ca-3882d9ba3477. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:48:01 np0005593232 nova_compute[250269]: 2026-01-23 09:48:01.613 250273 DEBUG oslo_concurrency.lockutils [req-d6717d01-030e-430e-9627-629f324ae500 req-d73d7472-8411-4f82-bbe1-a5ba8c56e23e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-9fd1f64a-4b5b-4638-9411-04027230851c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:48:01 np0005593232 nova_compute[250269]: 2026-01-23 09:48:01.846 250273 DEBUG nova.network.neutron [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.668 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.898 250273 DEBUG nova.network.neutron [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Updating instance_info_cache with network_info: [{"id": "91e73992-335d-44cf-a5ca-3882d9ba3477", "address": "fa:16:3e:cd:7d:6d", "network": {"id": "3f2ef857-ecb1-4eae-8bff-88d44b044dff", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1595805221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e73992-33", "ovs_interfaceid": "91e73992-335d-44cf-a5ca-3882d9ba3477", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.925 250273 DEBUG oslo_concurrency.lockutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Releasing lock "refresh_cache-9fd1f64a-4b5b-4638-9411-04027230851c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.926 250273 DEBUG nova.compute.manager [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Instance network_info: |[{"id": "91e73992-335d-44cf-a5ca-3882d9ba3477", "address": "fa:16:3e:cd:7d:6d", "network": {"id": "3f2ef857-ecb1-4eae-8bff-88d44b044dff", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1595805221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e73992-33", "ovs_interfaceid": "91e73992-335d-44cf-a5ca-3882d9ba3477", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.927 250273 DEBUG oslo_concurrency.lockutils [req-d6717d01-030e-430e-9627-629f324ae500 req-d73d7472-8411-4f82-bbe1-a5ba8c56e23e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-9fd1f64a-4b5b-4638-9411-04027230851c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.928 250273 DEBUG nova.network.neutron [req-d6717d01-030e-430e-9627-629f324ae500 req-d73d7472-8411-4f82-bbe1-a5ba8c56e23e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Refreshing network info cache for port 91e73992-335d-44cf-a5ca-3882d9ba3477 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.933 250273 DEBUG nova.virt.libvirt.driver [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Start _get_guest_xml network_info=[{"id": "91e73992-335d-44cf-a5ca-3882d9ba3477", "address": "fa:16:3e:cd:7d:6d", "network": {"id": "3f2ef857-ecb1-4eae-8bff-88d44b044dff", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1595805221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e73992-33", "ovs_interfaceid": "91e73992-335d-44cf-a5ca-3882d9ba3477", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.937 250273 WARNING nova.virt.libvirt.driver [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.945 250273 DEBUG nova.virt.libvirt.host [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.945 250273 DEBUG nova.virt.libvirt.host [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.954 250273 DEBUG nova.virt.libvirt.host [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.955 250273 DEBUG nova.virt.libvirt.host [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.956 250273 DEBUG nova.virt.libvirt.driver [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.957 250273 DEBUG nova.virt.hardware [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.957 250273 DEBUG nova.virt.hardware [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.957 250273 DEBUG nova.virt.hardware [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.958 250273 DEBUG nova.virt.hardware [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.958 250273 DEBUG nova.virt.hardware [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.958 250273 DEBUG nova.virt.hardware [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.959 250273 DEBUG nova.virt.hardware [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.959 250273 DEBUG nova.virt.hardware [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.959 250273 DEBUG nova.virt.hardware [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.960 250273 DEBUG nova.virt.hardware [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.960 250273 DEBUG nova.virt.hardware [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:48:02 np0005593232 nova_compute[250269]: 2026-01-23 09:48:02.964 250273 DEBUG oslo_concurrency.processutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:48:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:03.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:48:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:03.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:48:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1690: 321 pgs: 321 active+clean; 227 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 6.1 MiB/s wr, 299 op/s
Jan 23 04:48:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:48:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3445891282' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:48:03 np0005593232 nova_compute[250269]: 2026-01-23 09:48:03.447 250273 DEBUG oslo_concurrency.processutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:48:03 np0005593232 nova_compute[250269]: 2026-01-23 09:48:03.479 250273 DEBUG nova.storage.rbd_utils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] rbd image 9fd1f64a-4b5b-4638-9411-04027230851c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:48:03 np0005593232 nova_compute[250269]: 2026-01-23 09:48:03.488 250273 DEBUG oslo_concurrency.processutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:48:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:48:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3426647513' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.145 250273 DEBUG oslo_concurrency.processutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.147 250273 DEBUG nova.virt.libvirt.vif [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1637579954',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1637579954',id=63,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJXr2k8IY5inNTCZwPVk5EfXAYp/4zzrsuhLp4zz6M69G4n9zDvSId1yAkQJtnYhKGxrxUt2whN+RRd76STJFNsQDbsDJL6/27IM6kTM9k4+zPmNQXcs5cl0x6R7jt2AVA==',key_name='tempest-keypair-382580899',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='15c7fbc4d6794364830639a1fee9ecf0',ramdisk_id='',reservation_id='r-r4kdwucs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio
',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedAttachmentsTest-107101210',owner_user_name='tempest-TaggedAttachmentsTest-107101210-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:47:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8d814ef2afe04103bb6aa24724d61b11',uuid=9fd1f64a-4b5b-4638-9411-04027230851c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91e73992-335d-44cf-a5ca-3882d9ba3477", "address": "fa:16:3e:cd:7d:6d", "network": {"id": "3f2ef857-ecb1-4eae-8bff-88d44b044dff", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1595805221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e73992-33", "ovs_interfaceid": "91e73992-335d-44cf-a5ca-3882d9ba3477", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.147 250273 DEBUG nova.network.os_vif_util [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Converting VIF {"id": "91e73992-335d-44cf-a5ca-3882d9ba3477", "address": "fa:16:3e:cd:7d:6d", "network": {"id": "3f2ef857-ecb1-4eae-8bff-88d44b044dff", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1595805221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e73992-33", "ovs_interfaceid": "91e73992-335d-44cf-a5ca-3882d9ba3477", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.148 250273 DEBUG nova.network.os_vif_util [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:7d:6d,bridge_name='br-int',has_traffic_filtering=True,id=91e73992-335d-44cf-a5ca-3882d9ba3477,network=Network(3f2ef857-ecb1-4eae-8bff-88d44b044dff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e73992-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.149 250273 DEBUG nova.objects.instance [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9fd1f64a-4b5b-4638-9411-04027230851c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.175 250273 DEBUG nova.virt.libvirt.driver [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:48:04 np0005593232 nova_compute[250269]:  <uuid>9fd1f64a-4b5b-4638-9411-04027230851c</uuid>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:  <name>instance-0000003f</name>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <nova:name>tempest-device-tagging-server-1637579954</nova:name>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:48:02</nova:creationTime>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:48:04 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:        <nova:user uuid="8d814ef2afe04103bb6aa24724d61b11">tempest-TaggedAttachmentsTest-107101210-project-member</nova:user>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:        <nova:project uuid="15c7fbc4d6794364830639a1fee9ecf0">tempest-TaggedAttachmentsTest-107101210</nova:project>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:        <nova:port uuid="91e73992-335d-44cf-a5ca-3882d9ba3477">
Jan 23 04:48:04 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <entry name="serial">9fd1f64a-4b5b-4638-9411-04027230851c</entry>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <entry name="uuid">9fd1f64a-4b5b-4638-9411-04027230851c</entry>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/9fd1f64a-4b5b-4638-9411-04027230851c_disk">
Jan 23 04:48:04 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:48:04 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/9fd1f64a-4b5b-4638-9411-04027230851c_disk.config">
Jan 23 04:48:04 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:48:04 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:cd:7d:6d"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <target dev="tap91e73992-33"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/9fd1f64a-4b5b-4638-9411-04027230851c/console.log" append="off"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:48:04 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:48:04 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:48:04 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:48:04 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.176 250273 DEBUG nova.compute.manager [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Preparing to wait for external event network-vif-plugged-91e73992-335d-44cf-a5ca-3882d9ba3477 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.176 250273 DEBUG oslo_concurrency.lockutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Acquiring lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.176 250273 DEBUG oslo_concurrency.lockutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.177 250273 DEBUG oslo_concurrency.lockutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.177 250273 DEBUG nova.virt.libvirt.vif [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1637579954',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1637579954',id=63,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJXr2k8IY5inNTCZwPVk5EfXAYp/4zzrsuhLp4zz6M69G4n9zDvSId1yAkQJtnYhKGxrxUt2whN+RRd76STJFNsQDbsDJL6/27IM6kTM9k4+zPmNQXcs5cl0x6R7jt2AVA==',key_name='tempest-keypair-382580899',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='15c7fbc4d6794364830639a1fee9ecf0',ramdisk_id='',reservation_id='r-r4kdwucs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_mod
el='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TaggedAttachmentsTest-107101210',owner_user_name='tempest-TaggedAttachmentsTest-107101210-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:47:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8d814ef2afe04103bb6aa24724d61b11',uuid=9fd1f64a-4b5b-4638-9411-04027230851c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91e73992-335d-44cf-a5ca-3882d9ba3477", "address": "fa:16:3e:cd:7d:6d", "network": {"id": "3f2ef857-ecb1-4eae-8bff-88d44b044dff", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1595805221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e73992-33", "ovs_interfaceid": "91e73992-335d-44cf-a5ca-3882d9ba3477", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.178 250273 DEBUG nova.network.os_vif_util [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Converting VIF {"id": "91e73992-335d-44cf-a5ca-3882d9ba3477", "address": "fa:16:3e:cd:7d:6d", "network": {"id": "3f2ef857-ecb1-4eae-8bff-88d44b044dff", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1595805221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e73992-33", "ovs_interfaceid": "91e73992-335d-44cf-a5ca-3882d9ba3477", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.178 250273 DEBUG nova.network.os_vif_util [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:7d:6d,bridge_name='br-int',has_traffic_filtering=True,id=91e73992-335d-44cf-a5ca-3882d9ba3477,network=Network(3f2ef857-ecb1-4eae-8bff-88d44b044dff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e73992-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.179 250273 DEBUG os_vif [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:7d:6d,bridge_name='br-int',has_traffic_filtering=True,id=91e73992-335d-44cf-a5ca-3882d9ba3477,network=Network(3f2ef857-ecb1-4eae-8bff-88d44b044dff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e73992-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.180 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.180 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.181 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.187 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.187 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91e73992-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.187 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap91e73992-33, col_values=(('external_ids', {'iface-id': '91e73992-335d-44cf-a5ca-3882d9ba3477', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:7d:6d', 'vm-uuid': '9fd1f64a-4b5b-4638-9411-04027230851c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:48:04 np0005593232 NetworkManager[49057]: <info>  [1769161684.1905] manager: (tap91e73992-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.189 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.191 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.197 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.198 250273 INFO os_vif [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:7d:6d,bridge_name='br-int',has_traffic_filtering=True,id=91e73992-335d-44cf-a5ca-3882d9ba3477,network=Network(3f2ef857-ecb1-4eae-8bff-88d44b044dff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e73992-33')#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.273 250273 DEBUG nova.virt.libvirt.driver [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.274 250273 DEBUG nova.virt.libvirt.driver [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.274 250273 DEBUG nova.virt.libvirt.driver [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] No VIF found with MAC fa:16:3e:cd:7d:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.275 250273 INFO nova.virt.libvirt.driver [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Using config drive#033[00m
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.303 250273 DEBUG nova.storage.rbd_utils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] rbd image 9fd1f64a-4b5b-4638-9411-04027230851c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:48:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:48:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Jan 23 04:48:04 np0005593232 nova_compute[250269]: 2026-01-23 09:48:04.734 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Jan 23 04:48:04 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Jan 23 04:48:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:48:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:05.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:48:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:48:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:05.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:48:05 np0005593232 nova_compute[250269]: 2026-01-23 09:48:05.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:48:05 np0005593232 nova_compute[250269]: 2026-01-23 09:48:05.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 23 04:48:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1692: 321 pgs: 321 active+clean; 227 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.4 MiB/s wr, 138 op/s
Jan 23 04:48:05 np0005593232 nova_compute[250269]: 2026-01-23 09:48:05.433 250273 INFO nova.virt.libvirt.driver [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Creating config drive at /var/lib/nova/instances/9fd1f64a-4b5b-4638-9411-04027230851c/disk.config#033[00m
Jan 23 04:48:05 np0005593232 nova_compute[250269]: 2026-01-23 09:48:05.443 250273 DEBUG oslo_concurrency.processutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9fd1f64a-4b5b-4638-9411-04027230851c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkoghs_sp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:48:05 np0005593232 nova_compute[250269]: 2026-01-23 09:48:05.558 250273 DEBUG nova.network.neutron [req-d6717d01-030e-430e-9627-629f324ae500 req-d73d7472-8411-4f82-bbe1-a5ba8c56e23e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Updated VIF entry in instance network info cache for port 91e73992-335d-44cf-a5ca-3882d9ba3477. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:48:05 np0005593232 nova_compute[250269]: 2026-01-23 09:48:05.559 250273 DEBUG nova.network.neutron [req-d6717d01-030e-430e-9627-629f324ae500 req-d73d7472-8411-4f82-bbe1-a5ba8c56e23e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Updating instance_info_cache with network_info: [{"id": "91e73992-335d-44cf-a5ca-3882d9ba3477", "address": "fa:16:3e:cd:7d:6d", "network": {"id": "3f2ef857-ecb1-4eae-8bff-88d44b044dff", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1595805221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e73992-33", "ovs_interfaceid": "91e73992-335d-44cf-a5ca-3882d9ba3477", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:48:05 np0005593232 nova_compute[250269]: 2026-01-23 09:48:05.578 250273 DEBUG oslo_concurrency.processutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9fd1f64a-4b5b-4638-9411-04027230851c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkoghs_sp" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:48:05 np0005593232 nova_compute[250269]: 2026-01-23 09:48:05.637 250273 DEBUG nova.storage.rbd_utils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] rbd image 9fd1f64a-4b5b-4638-9411-04027230851c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:48:05 np0005593232 nova_compute[250269]: 2026-01-23 09:48:05.641 250273 DEBUG oslo_concurrency.processutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9fd1f64a-4b5b-4638-9411-04027230851c/disk.config 9fd1f64a-4b5b-4638-9411-04027230851c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:48:05 np0005593232 nova_compute[250269]: 2026-01-23 09:48:05.668 250273 DEBUG oslo_concurrency.lockutils [req-d6717d01-030e-430e-9627-629f324ae500 req-d73d7472-8411-4f82-bbe1-a5ba8c56e23e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-9fd1f64a-4b5b-4638-9411-04027230851c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:48:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:07.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:07.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1693: 321 pgs: 321 active+clean; 216 MiB data, 652 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.6 MiB/s wr, 143 op/s
Jan 23 04:48:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:48:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:48:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:48:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:48:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:48:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:48:07 np0005593232 nova_compute[250269]: 2026-01-23 09:48:07.860 250273 DEBUG oslo_concurrency.processutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9fd1f64a-4b5b-4638-9411-04027230851c/disk.config 9fd1f64a-4b5b-4638-9411-04027230851c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:48:07 np0005593232 nova_compute[250269]: 2026-01-23 09:48:07.861 250273 INFO nova.virt.libvirt.driver [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Deleting local config drive /var/lib/nova/instances/9fd1f64a-4b5b-4638-9411-04027230851c/disk.config because it was imported into RBD.#033[00m
Jan 23 04:48:07 np0005593232 kernel: tap91e73992-33: entered promiscuous mode
Jan 23 04:48:07 np0005593232 NetworkManager[49057]: <info>  [1769161687.9314] manager: (tap91e73992-33): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Jan 23 04:48:07 np0005593232 ovn_controller[151001]: 2026-01-23T09:48:07Z|00180|binding|INFO|Claiming lport 91e73992-335d-44cf-a5ca-3882d9ba3477 for this chassis.
Jan 23 04:48:07 np0005593232 ovn_controller[151001]: 2026-01-23T09:48:07Z|00181|binding|INFO|91e73992-335d-44cf-a5ca-3882d9ba3477: Claiming fa:16:3e:cd:7d:6d 10.100.0.6
Jan 23 04:48:07 np0005593232 nova_compute[250269]: 2026-01-23 09:48:07.964 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:07 np0005593232 nova_compute[250269]: 2026-01-23 09:48:07.971 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:07.984 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:7d:6d 10.100.0.6'], port_security=['fa:16:3e:cd:7d:6d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9fd1f64a-4b5b-4638-9411-04027230851c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f2ef857-ecb1-4eae-8bff-88d44b044dff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '15c7fbc4d6794364830639a1fee9ecf0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67a66804-b1f0-4687-a341-8c6bda3ca76c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70bd5cd0-0ee6-4cf8-9db7-3e75e486209b, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=91e73992-335d-44cf-a5ca-3882d9ba3477) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:48:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:07.987 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 91e73992-335d-44cf-a5ca-3882d9ba3477 in datapath 3f2ef857-ecb1-4eae-8bff-88d44b044dff bound to our chassis#033[00m
Jan 23 04:48:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:07.990 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3f2ef857-ecb1-4eae-8bff-88d44b044dff#033[00m
Jan 23 04:48:08 np0005593232 systemd-machined[215836]: New machine qemu-24-instance-0000003f.
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.023 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c2cfe895-92fd-44f8-bd50-751c03ee6973]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.024 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3f2ef857-e1 in ovnmeta-3f2ef857-ecb1-4eae-8bff-88d44b044dff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.028 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3f2ef857-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.028 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[03deab33-357c-4447-99c2-91deabf777c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.029 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[36955bc6-30b6-4a6f-8dc3-f216728d3aec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:08 np0005593232 ovn_controller[151001]: 2026-01-23T09:48:08Z|00182|binding|INFO|Setting lport 91e73992-335d-44cf-a5ca-3882d9ba3477 ovn-installed in OVS
Jan 23 04:48:08 np0005593232 ovn_controller[151001]: 2026-01-23T09:48:08Z|00183|binding|INFO|Setting lport 91e73992-335d-44cf-a5ca-3882d9ba3477 up in Southbound
Jan 23 04:48:08 np0005593232 nova_compute[250269]: 2026-01-23 09:48:08.054 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:08 np0005593232 systemd[1]: Started Virtual Machine qemu-24-instance-0000003f.
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.062 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[afdac901-41e0-45d4-8ffa-61fe4256f3d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:08 np0005593232 systemd-udevd[291109]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.081 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ad02d7c1-4505-479d-87ec-183e7297a43d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:08 np0005593232 NetworkManager[49057]: <info>  [1769161688.1021] device (tap91e73992-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:48:08 np0005593232 NetworkManager[49057]: <info>  [1769161688.1028] device (tap91e73992-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.119 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[a218f204-09c3-4986-b129-8e078287a66d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.126 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[675e3ed7-9f87-47dc-80c2-45e560482012]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:08 np0005593232 NetworkManager[49057]: <info>  [1769161688.1274] manager: (tap3f2ef857-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/92)
Jan 23 04:48:08 np0005593232 systemd-udevd[291116]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.160 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[27e7cb1e-393f-485d-a5cf-912a49794086]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.163 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[a8fcfb93-a698-4830-8a23-b4ac174d3c40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:08 np0005593232 NetworkManager[49057]: <info>  [1769161688.1842] device (tap3f2ef857-e0): carrier: link connected
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.191 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[f34d9312-4700-446a-9ecc-ad1d69f9742a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.209 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[88c9d98f-a148-402d-b153-4773b42ed591]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f2ef857-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:26:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560132, 'reachable_time': 26957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291139, 'error': None, 'target': 'ovnmeta-3f2ef857-ecb1-4eae-8bff-88d44b044dff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.225 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[eb625447-74e9-4611-8ddd-2a8dbf329e0d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feae:26d6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 560132, 'tstamp': 560132}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291140, 'error': None, 'target': 'ovnmeta-3f2ef857-ecb1-4eae-8bff-88d44b044dff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.242 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0b1176f6-2cd6-4f17-8d14-e97dfe4f0228]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f2ef857-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ae:26:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560132, 'reachable_time': 26957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291141, 'error': None, 'target': 'ovnmeta-3f2ef857-ecb1-4eae-8bff-88d44b044dff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.280 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[cee7b8f7-1196-4650-bcdb-c80cede383d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.336 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[918aa321-ffef-45eb-8214-a7014b9480e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.337 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f2ef857-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.338 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.338 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3f2ef857-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:48:08 np0005593232 nova_compute[250269]: 2026-01-23 09:48:08.341 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:08 np0005593232 NetworkManager[49057]: <info>  [1769161688.3415] manager: (tap3f2ef857-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Jan 23 04:48:08 np0005593232 kernel: tap3f2ef857-e0: entered promiscuous mode
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.345 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3f2ef857-e0, col_values=(('external_ids', {'iface-id': '5ab572f5-a09b-44ef-93ec-cc372fcc8fe5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:48:08 np0005593232 nova_compute[250269]: 2026-01-23 09:48:08.346 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:08 np0005593232 ovn_controller[151001]: 2026-01-23T09:48:08Z|00184|binding|INFO|Releasing lport 5ab572f5-a09b-44ef-93ec-cc372fcc8fe5 from this chassis (sb_readonly=0)
Jan 23 04:48:08 np0005593232 nova_compute[250269]: 2026-01-23 09:48:08.361 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.362 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3f2ef857-ecb1-4eae-8bff-88d44b044dff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3f2ef857-ecb1-4eae-8bff-88d44b044dff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.363 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[25fee21d-d71e-4206-b8f0-6f86ee7a01af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.364 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-3f2ef857-ecb1-4eae-8bff-88d44b044dff
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/3f2ef857-ecb1-4eae-8bff-88d44b044dff.pid.haproxy
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 3f2ef857-ecb1-4eae-8bff-88d44b044dff
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:48:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:08.365 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3f2ef857-ecb1-4eae-8bff-88d44b044dff', 'env', 'PROCESS_TAG=haproxy-3f2ef857-ecb1-4eae-8bff-88d44b044dff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3f2ef857-ecb1-4eae-8bff-88d44b044dff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 04:48:08 np0005593232 nova_compute[250269]: 2026-01-23 09:48:08.759 250273 DEBUG nova.compute.manager [req-384dbe02-6531-48f2-8aee-69bb2323ca64 req-ef5d4f71-2ffe-4d95-b827-45c219ace6b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Received event network-vif-plugged-91e73992-335d-44cf-a5ca-3882d9ba3477 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:48:08 np0005593232 nova_compute[250269]: 2026-01-23 09:48:08.760 250273 DEBUG oslo_concurrency.lockutils [req-384dbe02-6531-48f2-8aee-69bb2323ca64 req-ef5d4f71-2ffe-4d95-b827-45c219ace6b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:48:08 np0005593232 nova_compute[250269]: 2026-01-23 09:48:08.760 250273 DEBUG oslo_concurrency.lockutils [req-384dbe02-6531-48f2-8aee-69bb2323ca64 req-ef5d4f71-2ffe-4d95-b827-45c219ace6b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:48:08 np0005593232 nova_compute[250269]: 2026-01-23 09:48:08.761 250273 DEBUG oslo_concurrency.lockutils [req-384dbe02-6531-48f2-8aee-69bb2323ca64 req-ef5d4f71-2ffe-4d95-b827-45c219ace6b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:48:08 np0005593232 nova_compute[250269]: 2026-01-23 09:48:08.761 250273 DEBUG nova.compute.manager [req-384dbe02-6531-48f2-8aee-69bb2323ca64 req-ef5d4f71-2ffe-4d95-b827-45c219ace6b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Processing event network-vif-plugged-91e73992-335d-44cf-a5ca-3882d9ba3477 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 04:48:08 np0005593232 podman[291198]: 2026-01-23 09:48:08.726463949 +0000 UTC m=+0.024671983 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.048 250273 DEBUG nova.compute.manager [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.048 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161689.048278, 9fd1f64a-4b5b-4638-9411-04027230851c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.049 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] VM Started (Lifecycle Event)#033[00m
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.054 250273 DEBUG nova.virt.libvirt.driver [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:48:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:09.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.057 250273 INFO nova.virt.libvirt.driver [-] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Instance spawned successfully.#033[00m
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.057 250273 DEBUG nova.virt.libvirt.driver [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:48:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:09.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.083 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.091 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.097 250273 DEBUG nova.virt.libvirt.driver [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.097 250273 DEBUG nova.virt.libvirt.driver [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.098 250273 DEBUG nova.virt.libvirt.driver [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.098 250273 DEBUG nova.virt.libvirt.driver [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.099 250273 DEBUG nova.virt.libvirt.driver [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.099 250273 DEBUG nova.virt.libvirt.driver [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.143 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.144 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161689.0483615, 9fd1f64a-4b5b-4638-9411-04027230851c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.144 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.184 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:48:09 np0005593232 podman[291198]: 2026-01-23 09:48:09.186325595 +0000 UTC m=+0.484533589 container create 1e0ae9fe2132bd1592e8ae80464d430e679cd76da5d5704a2fff3c88047572d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f2ef857-ecb1-4eae-8bff-88d44b044dff, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.190 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.195 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161689.0529242, 9fd1f64a-4b5b-4638-9411-04027230851c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.196 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.204 250273 INFO nova.compute.manager [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Took 12.63 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.205 250273 DEBUG nova.compute.manager [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.216 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.221 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.254 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.295 250273 INFO nova.compute.manager [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Took 13.67 seconds to build instance.#033[00m
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.319 250273 DEBUG oslo_concurrency.lockutils [None req-e11a5d5f-235c-4c1e-aa7c-a04c36c766b8 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:48:09 np0005593232 systemd[1]: Started libpod-conmon-1e0ae9fe2132bd1592e8ae80464d430e679cd76da5d5704a2fff3c88047572d6.scope.
Jan 23 04:48:09 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:48:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f0f604ceb020d296811b8538aa4263ec9796635323de41753e1492c355c8776/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:48:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1694: 321 pgs: 321 active+clean; 206 MiB data, 661 MiB used, 20 GiB / 21 GiB avail; 459 KiB/s rd, 3.9 MiB/s wr, 113 op/s
Jan 23 04:48:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:48:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Jan 23 04:48:09 np0005593232 podman[291198]: 2026-01-23 09:48:09.617143825 +0000 UTC m=+0.915351839 container init 1e0ae9fe2132bd1592e8ae80464d430e679cd76da5d5704a2fff3c88047572d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f2ef857-ecb1-4eae-8bff-88d44b044dff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 04:48:09 np0005593232 podman[291198]: 2026-01-23 09:48:09.62294245 +0000 UTC m=+0.921150444 container start 1e0ae9fe2132bd1592e8ae80464d430e679cd76da5d5704a2fff3c88047572d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f2ef857-ecb1-4eae-8bff-88d44b044dff, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 23 04:48:09 np0005593232 neutron-haproxy-ovnmeta-3f2ef857-ecb1-4eae-8bff-88d44b044dff[291231]: [NOTICE]   (291235) : New worker (291237) forked
Jan 23 04:48:09 np0005593232 neutron-haproxy-ovnmeta-3f2ef857-ecb1-4eae-8bff-88d44b044dff[291231]: [NOTICE]   (291235) : Loading success.
Jan 23 04:48:09 np0005593232 nova_compute[250269]: 2026-01-23 09:48:09.736 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Jan 23 04:48:09 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Jan 23 04:48:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:48:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:11.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:48:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:11.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1696: 321 pgs: 321 active+clean; 197 MiB data, 664 MiB used, 20 GiB / 21 GiB avail; 853 KiB/s rd, 3.4 MiB/s wr, 148 op/s
Jan 23 04:48:11 np0005593232 nova_compute[250269]: 2026-01-23 09:48:11.598 250273 DEBUG nova.compute.manager [req-8d066887-e1f3-4c43-9a13-697327df681c req-ba2dfedb-59c1-4abd-91ea-3186d739c7b9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Received event network-vif-plugged-91e73992-335d-44cf-a5ca-3882d9ba3477 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:48:11 np0005593232 nova_compute[250269]: 2026-01-23 09:48:11.598 250273 DEBUG oslo_concurrency.lockutils [req-8d066887-e1f3-4c43-9a13-697327df681c req-ba2dfedb-59c1-4abd-91ea-3186d739c7b9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:48:11 np0005593232 nova_compute[250269]: 2026-01-23 09:48:11.598 250273 DEBUG oslo_concurrency.lockutils [req-8d066887-e1f3-4c43-9a13-697327df681c req-ba2dfedb-59c1-4abd-91ea-3186d739c7b9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:48:11 np0005593232 nova_compute[250269]: 2026-01-23 09:48:11.599 250273 DEBUG oslo_concurrency.lockutils [req-8d066887-e1f3-4c43-9a13-697327df681c req-ba2dfedb-59c1-4abd-91ea-3186d739c7b9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:48:11 np0005593232 nova_compute[250269]: 2026-01-23 09:48:11.599 250273 DEBUG nova.compute.manager [req-8d066887-e1f3-4c43-9a13-697327df681c req-ba2dfedb-59c1-4abd-91ea-3186d739c7b9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] No waiting events found dispatching network-vif-plugged-91e73992-335d-44cf-a5ca-3882d9ba3477 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:48:11 np0005593232 nova_compute[250269]: 2026-01-23 09:48:11.599 250273 WARNING nova.compute.manager [req-8d066887-e1f3-4c43-9a13-697327df681c req-ba2dfedb-59c1-4abd-91ea-3186d739c7b9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Received unexpected event network-vif-plugged-91e73992-335d-44cf-a5ca-3882d9ba3477 for instance with vm_state active and task_state None.#033[00m
Jan 23 04:48:11 np0005593232 nova_compute[250269]: 2026-01-23 09:48:11.867 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:11 np0005593232 NetworkManager[49057]: <info>  [1769161691.8680] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/94)
Jan 23 04:48:11 np0005593232 NetworkManager[49057]: <info>  [1769161691.8688] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Jan 23 04:48:12 np0005593232 nova_compute[250269]: 2026-01-23 09:48:12.092 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:12 np0005593232 ovn_controller[151001]: 2026-01-23T09:48:12Z|00185|binding|INFO|Releasing lport 5ab572f5-a09b-44ef-93ec-cc372fcc8fe5 from this chassis (sb_readonly=0)
Jan 23 04:48:12 np0005593232 nova_compute[250269]: 2026-01-23 09:48:12.126 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:13.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:13.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1697: 321 pgs: 321 active+clean; 156 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.9 MiB/s wr, 293 op/s
Jan 23 04:48:13 np0005593232 nova_compute[250269]: 2026-01-23 09:48:13.689 250273 DEBUG nova.compute.manager [req-30ef45eb-6e37-477d-8f15-0b38c609aaed req-e00e8695-ad8f-40cf-b13e-107f6c91e980 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Received event network-changed-91e73992-335d-44cf-a5ca-3882d9ba3477 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:48:13 np0005593232 nova_compute[250269]: 2026-01-23 09:48:13.691 250273 DEBUG nova.compute.manager [req-30ef45eb-6e37-477d-8f15-0b38c609aaed req-e00e8695-ad8f-40cf-b13e-107f6c91e980 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Refreshing instance network info cache due to event network-changed-91e73992-335d-44cf-a5ca-3882d9ba3477. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:48:13 np0005593232 nova_compute[250269]: 2026-01-23 09:48:13.691 250273 DEBUG oslo_concurrency.lockutils [req-30ef45eb-6e37-477d-8f15-0b38c609aaed req-e00e8695-ad8f-40cf-b13e-107f6c91e980 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-9fd1f64a-4b5b-4638-9411-04027230851c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:48:13 np0005593232 nova_compute[250269]: 2026-01-23 09:48:13.692 250273 DEBUG oslo_concurrency.lockutils [req-30ef45eb-6e37-477d-8f15-0b38c609aaed req-e00e8695-ad8f-40cf-b13e-107f6c91e980 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-9fd1f64a-4b5b-4638-9411-04027230851c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:48:13 np0005593232 nova_compute[250269]: 2026-01-23 09:48:13.692 250273 DEBUG nova.network.neutron [req-30ef45eb-6e37-477d-8f15-0b38c609aaed req-e00e8695-ad8f-40cf-b13e-107f6c91e980 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Refreshing network info cache for port 91e73992-335d-44cf-a5ca-3882d9ba3477 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:48:14 np0005593232 podman[291411]: 2026-01-23 09:48:14.021639946 +0000 UTC m=+0.807969004 container exec 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 04:48:14 np0005593232 podman[291411]: 2026-01-23 09:48:14.145211612 +0000 UTC m=+0.931540580 container exec_died 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:48:14 np0005593232 nova_compute[250269]: 2026-01-23 09:48:14.192 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:14 np0005593232 nova_compute[250269]: 2026-01-23 09:48:14.303 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:48:14 np0005593232 nova_compute[250269]: 2026-01-23 09:48:14.303 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:48:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:48:14 np0005593232 nova_compute[250269]: 2026-01-23 09:48:14.739 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:48:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:15.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:48:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:15.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:15 np0005593232 nova_compute[250269]: 2026-01-23 09:48:15.398 250273 DEBUG nova.network.neutron [req-30ef45eb-6e37-477d-8f15-0b38c609aaed req-e00e8695-ad8f-40cf-b13e-107f6c91e980 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Updated VIF entry in instance network info cache for port 91e73992-335d-44cf-a5ca-3882d9ba3477. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:48:15 np0005593232 nova_compute[250269]: 2026-01-23 09:48:15.399 250273 DEBUG nova.network.neutron [req-30ef45eb-6e37-477d-8f15-0b38c609aaed req-e00e8695-ad8f-40cf-b13e-107f6c91e980 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Updating instance_info_cache with network_info: [{"id": "91e73992-335d-44cf-a5ca-3882d9ba3477", "address": "fa:16:3e:cd:7d:6d", "network": {"id": "3f2ef857-ecb1-4eae-8bff-88d44b044dff", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1595805221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e73992-33", "ovs_interfaceid": "91e73992-335d-44cf-a5ca-3882d9ba3477", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:48:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1698: 321 pgs: 321 active+clean; 156 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 5.0 MiB/s wr, 247 op/s
Jan 23 04:48:15 np0005593232 nova_compute[250269]: 2026-01-23 09:48:15.435 250273 DEBUG oslo_concurrency.lockutils [req-30ef45eb-6e37-477d-8f15-0b38c609aaed req-e00e8695-ad8f-40cf-b13e-107f6c91e980 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-9fd1f64a-4b5b-4638-9411-04027230851c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:48:16 np0005593232 ovn_controller[151001]: 2026-01-23T09:48:16Z|00186|binding|INFO|Releasing lport 5ab572f5-a09b-44ef-93ec-cc372fcc8fe5 from this chassis (sb_readonly=0)
Jan 23 04:48:16 np0005593232 nova_compute[250269]: 2026-01-23 09:48:16.137 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:16 np0005593232 podman[291564]: 2026-01-23 09:48:16.764196271 +0000 UTC m=+0.976673205 container exec b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 04:48:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:48:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:17.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:48:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019586f0 =====
Jan 23 04:48:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019586f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:17 np0005593232 radosgw[94687]: beast: 0x7f0a019586f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:17.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:17 np0005593232 podman[291564]: 2026-01-23 09:48:17.119162922 +0000 UTC m=+1.331639856 container exec_died b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 04:48:17 np0005593232 nova_compute[250269]: 2026-01-23 09:48:17.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:48:17 np0005593232 nova_compute[250269]: 2026-01-23 09:48:17.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:48:17 np0005593232 nova_compute[250269]: 2026-01-23 09:48:17.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:48:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1699: 321 pgs: 321 active+clean; 160 MiB data, 651 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.5 MiB/s wr, 229 op/s
Jan 23 04:48:18 np0005593232 podman[291621]: 2026-01-23 09:48:18.255190571 +0000 UTC m=+0.283220731 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible)
Jan 23 04:48:18 np0005593232 podman[291706]: 2026-01-23 09:48:18.393629261 +0000 UTC m=+0.092151304 container exec 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, vendor=Red Hat, Inc., version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, distribution-scope=public, release=1793, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, io.openshift.expose-services=, com.redhat.component=keepalived-container)
Jan 23 04:48:18 np0005593232 nova_compute[250269]: 2026-01-23 09:48:18.430 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-9fd1f64a-4b5b-4638-9411-04027230851c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:48:18 np0005593232 nova_compute[250269]: 2026-01-23 09:48:18.431 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-9fd1f64a-4b5b-4638-9411-04027230851c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:48:18 np0005593232 nova_compute[250269]: 2026-01-23 09:48:18.431 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 04:48:18 np0005593232 nova_compute[250269]: 2026-01-23 09:48:18.431 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9fd1f64a-4b5b-4638-9411-04027230851c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:48:18 np0005593232 podman[291706]: 2026-01-23 09:48:18.69052465 +0000 UTC m=+0.389046693 container exec_died 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, distribution-scope=public, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9)
Jan 23 04:48:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:48:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:19.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019586f0 =====
Jan 23 04:48:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019586f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:19 np0005593232 radosgw[94687]: beast: 0x7f0a019586f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:19.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:19 np0005593232 nova_compute[250269]: 2026-01-23 09:48:19.195 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:48:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:48:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1700: 321 pgs: 321 active+clean; 167 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 193 op/s
Jan 23 04:48:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:48:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:48:19 np0005593232 nova_compute[250269]: 2026-01-23 09:48:19.741 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:48:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:48:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:48:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:48:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:48:20 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:48:20 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:48:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:48:20 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 0f7ca169-ac6f-4173-be17-64963ee1b792 does not exist
Jan 23 04:48:20 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 4d99fdad-424d-4b8b-91de-df2fbdf3ae05 does not exist
Jan 23 04:48:20 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev f24e55a1-3aab-484a-95bf-97e08fb8a29d does not exist
Jan 23 04:48:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:48:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:48:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:48:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:48:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:48:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:48:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019586f0 =====
Jan 23 04:48:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.003000086s ======
Jan 23 04:48:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019586f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:48:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:21.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000086s
Jan 23 04:48:21 np0005593232 radosgw[94687]: beast: 0x7f0a019586f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:21.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:48:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1701: 321 pgs: 321 active+clean; 167 MiB data, 640 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.3 MiB/s wr, 168 op/s
Jan 23 04:48:21 np0005593232 podman[292012]: 2026-01-23 09:48:21.51340662 +0000 UTC m=+0.019474935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:48:21 np0005593232 podman[292012]: 2026-01-23 09:48:21.878033696 +0000 UTC m=+0.384101981 container create 056c366b66d3d5d418b0bbe5c208bdc44835e0848ab18d80171d0e0c4dbfa091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_galileo, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:48:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:48:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:48:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:48:22 np0005593232 systemd[1]: Started libpod-conmon-056c366b66d3d5d418b0bbe5c208bdc44835e0848ab18d80171d0e0c4dbfa091.scope.
Jan 23 04:48:22 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:48:22 np0005593232 podman[292012]: 2026-01-23 09:48:22.107501036 +0000 UTC m=+0.613569351 container init 056c366b66d3d5d418b0bbe5c208bdc44835e0848ab18d80171d0e0c4dbfa091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_galileo, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 04:48:22 np0005593232 podman[292012]: 2026-01-23 09:48:22.117066259 +0000 UTC m=+0.623134554 container start 056c366b66d3d5d418b0bbe5c208bdc44835e0848ab18d80171d0e0c4dbfa091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 04:48:22 np0005593232 thirsty_galileo[292028]: 167 167
Jan 23 04:48:22 np0005593232 systemd[1]: libpod-056c366b66d3d5d418b0bbe5c208bdc44835e0848ab18d80171d0e0c4dbfa091.scope: Deactivated successfully.
Jan 23 04:48:22 np0005593232 conmon[292028]: conmon 056c366b66d3d5d418b0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-056c366b66d3d5d418b0bbe5c208bdc44835e0848ab18d80171d0e0c4dbfa091.scope/container/memory.events
Jan 23 04:48:22 np0005593232 podman[292012]: 2026-01-23 09:48:22.30228656 +0000 UTC m=+0.808354905 container attach 056c366b66d3d5d418b0bbe5c208bdc44835e0848ab18d80171d0e0c4dbfa091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 04:48:22 np0005593232 podman[292012]: 2026-01-23 09:48:22.303830373 +0000 UTC m=+0.809898668 container died 056c366b66d3d5d418b0bbe5c208bdc44835e0848ab18d80171d0e0c4dbfa091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_galileo, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:48:22 np0005593232 nova_compute[250269]: 2026-01-23 09:48:22.430 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Updating instance_info_cache with network_info: [{"id": "91e73992-335d-44cf-a5ca-3882d9ba3477", "address": "fa:16:3e:cd:7d:6d", "network": {"id": "3f2ef857-ecb1-4eae-8bff-88d44b044dff", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1595805221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e73992-33", "ovs_interfaceid": "91e73992-335d-44cf-a5ca-3882d9ba3477", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:48:22 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f57910f6464d4ed5c9f437dad841bcafa9fc42a37e4a0800037b7917c892583a-merged.mount: Deactivated successfully.
Jan 23 04:48:22 np0005593232 nova_compute[250269]: 2026-01-23 09:48:22.452 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-9fd1f64a-4b5b-4638-9411-04027230851c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:48:22 np0005593232 nova_compute[250269]: 2026-01-23 09:48:22.453 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 04:48:22 np0005593232 nova_compute[250269]: 2026-01-23 09:48:22.454 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:48:22 np0005593232 nova_compute[250269]: 2026-01-23 09:48:22.454 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:48:22 np0005593232 nova_compute[250269]: 2026-01-23 09:48:22.454 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:48:22 np0005593232 nova_compute[250269]: 2026-01-23 09:48:22.455 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:48:22 np0005593232 nova_compute[250269]: 2026-01-23 09:48:22.456 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:48:22 np0005593232 nova_compute[250269]: 2026-01-23 09:48:22.457 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:48:22 np0005593232 podman[292012]: 2026-01-23 09:48:22.470850167 +0000 UTC m=+0.976918462 container remove 056c366b66d3d5d418b0bbe5c208bdc44835e0848ab18d80171d0e0c4dbfa091 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:48:22 np0005593232 systemd[1]: libpod-conmon-056c366b66d3d5d418b0bbe5c208bdc44835e0848ab18d80171d0e0c4dbfa091.scope: Deactivated successfully.
Jan 23 04:48:22 np0005593232 podman[292052]: 2026-01-23 09:48:22.65480505 +0000 UTC m=+0.024797166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:48:22 np0005593232 podman[292052]: 2026-01-23 09:48:22.853513205 +0000 UTC m=+0.223505291 container create 50ed0a4641760760a3642fc91460142a33ad8becc55d432360fe9029aa497f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_khorana, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:48:22 np0005593232 systemd[1]: Started libpod-conmon-50ed0a4641760760a3642fc91460142a33ad8becc55d432360fe9029aa497f5f.scope.
Jan 23 04:48:22 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:48:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1669d706a0225822878d0a4c64173d2a313b2ee94791bb186c6fe1332f212ff6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:48:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1669d706a0225822878d0a4c64173d2a313b2ee94791bb186c6fe1332f212ff6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:48:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1669d706a0225822878d0a4c64173d2a313b2ee94791bb186c6fe1332f212ff6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:48:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1669d706a0225822878d0a4c64173d2a313b2ee94791bb186c6fe1332f212ff6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:48:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1669d706a0225822878d0a4c64173d2a313b2ee94791bb186c6fe1332f212ff6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:48:23 np0005593232 podman[292052]: 2026-01-23 09:48:23.00056294 +0000 UTC m=+0.370555036 container init 50ed0a4641760760a3642fc91460142a33ad8becc55d432360fe9029aa497f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_khorana, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:48:23 np0005593232 podman[292052]: 2026-01-23 09:48:23.01217862 +0000 UTC m=+0.382170706 container start 50ed0a4641760760a3642fc91460142a33ad8becc55d432360fe9029aa497f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:48:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:48:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:23.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:48:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:48:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:23.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:48:23 np0005593232 podman[292052]: 2026-01-23 09:48:23.113556355 +0000 UTC m=+0.483548461 container attach 50ed0a4641760760a3642fc91460142a33ad8becc55d432360fe9029aa497f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_khorana, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 04:48:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1702: 321 pgs: 321 active+clean; 181 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.3 MiB/s wr, 149 op/s
Jan 23 04:48:23 np0005593232 musing_khorana[292069]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:48:23 np0005593232 musing_khorana[292069]: --> relative data size: 1.0
Jan 23 04:48:23 np0005593232 musing_khorana[292069]: --> All data devices are unavailable
Jan 23 04:48:23 np0005593232 systemd[1]: libpod-50ed0a4641760760a3642fc91460142a33ad8becc55d432360fe9029aa497f5f.scope: Deactivated successfully.
Jan 23 04:48:23 np0005593232 podman[292052]: 2026-01-23 09:48:23.916038682 +0000 UTC m=+1.286030768 container died 50ed0a4641760760a3642fc91460142a33ad8becc55d432360fe9029aa497f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:48:24 np0005593232 nova_compute[250269]: 2026-01-23 09:48:24.198 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:24 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1669d706a0225822878d0a4c64173d2a313b2ee94791bb186c6fe1332f212ff6-merged.mount: Deactivated successfully.
Jan 23 04:48:24 np0005593232 nova_compute[250269]: 2026-01-23 09:48:24.307 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:48:24 np0005593232 nova_compute[250269]: 2026-01-23 09:48:24.343 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:48:24 np0005593232 nova_compute[250269]: 2026-01-23 09:48:24.343 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:48:24 np0005593232 nova_compute[250269]: 2026-01-23 09:48:24.344 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:48:24 np0005593232 nova_compute[250269]: 2026-01-23 09:48:24.344 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:48:24 np0005593232 nova_compute[250269]: 2026-01-23 09:48:24.344 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:48:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:48:24 np0005593232 podman[292052]: 2026-01-23 09:48:24.733432902 +0000 UTC m=+2.103424988 container remove 50ed0a4641760760a3642fc91460142a33ad8becc55d432360fe9029aa497f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_khorana, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:48:24 np0005593232 nova_compute[250269]: 2026-01-23 09:48:24.744 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:48:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1750853601' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:48:24 np0005593232 systemd[1]: libpod-conmon-50ed0a4641760760a3642fc91460142a33ad8becc55d432360fe9029aa497f5f.scope: Deactivated successfully.
Jan 23 04:48:24 np0005593232 nova_compute[250269]: 2026-01-23 09:48:24.814 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:48:24 np0005593232 ovn_controller[151001]: 2026-01-23T09:48:24Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cd:7d:6d 10.100.0.6
Jan 23 04:48:24 np0005593232 ovn_controller[151001]: 2026-01-23T09:48:24Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cd:7d:6d 10.100.0.6
Jan 23 04:48:24 np0005593232 nova_compute[250269]: 2026-01-23 09:48:24.890 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:48:24 np0005593232 nova_compute[250269]: 2026-01-23 09:48:24.891 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000003f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:48:25 np0005593232 nova_compute[250269]: 2026-01-23 09:48:25.050 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:48:25 np0005593232 nova_compute[250269]: 2026-01-23 09:48:25.051 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4359MB free_disk=20.905975341796875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:48:25 np0005593232 nova_compute[250269]: 2026-01-23 09:48:25.051 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:48:25 np0005593232 nova_compute[250269]: 2026-01-23 09:48:25.052 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:48:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:48:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:25.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:48:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:48:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:25.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:48:25 np0005593232 nova_compute[250269]: 2026-01-23 09:48:25.294 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 9fd1f64a-4b5b-4638-9411-04027230851c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:48:25 np0005593232 nova_compute[250269]: 2026-01-23 09:48:25.295 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:48:25 np0005593232 nova_compute[250269]: 2026-01-23 09:48:25.295 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:48:25 np0005593232 podman[292261]: 2026-01-23 09:48:25.393147386 +0000 UTC m=+0.052686020 container create 3a38c6bdecd2464cb9279a7e015779a72f1ab9d0cb9e61d68e9c833844ff9bbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:48:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1703: 321 pgs: 321 active+clean; 181 MiB data, 653 MiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 1.4 MiB/s wr, 41 op/s
Jan 23 04:48:25 np0005593232 podman[292261]: 2026-01-23 09:48:25.362327489 +0000 UTC m=+0.021866153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:48:25 np0005593232 nova_compute[250269]: 2026-01-23 09:48:25.501 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:48:25 np0005593232 systemd[1]: Started libpod-conmon-3a38c6bdecd2464cb9279a7e015779a72f1ab9d0cb9e61d68e9c833844ff9bbb.scope.
Jan 23 04:48:25 np0005593232 podman[292275]: 2026-01-23 09:48:25.571136101 +0000 UTC m=+0.141853598 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 04:48:25 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:48:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:48:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/793081179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:48:25 np0005593232 nova_compute[250269]: 2026-01-23 09:48:25.959 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:48:25 np0005593232 nova_compute[250269]: 2026-01-23 09:48:25.966 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:48:25 np0005593232 nova_compute[250269]: 2026-01-23 09:48:25.985 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:48:26 np0005593232 podman[292261]: 2026-01-23 09:48:26.027939191 +0000 UTC m=+0.687477855 container init 3a38c6bdecd2464cb9279a7e015779a72f1ab9d0cb9e61d68e9c833844ff9bbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:48:26 np0005593232 nova_compute[250269]: 2026-01-23 09:48:26.033 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:48:26 np0005593232 nova_compute[250269]: 2026-01-23 09:48:26.033 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:48:26 np0005593232 podman[292261]: 2026-01-23 09:48:26.035682061 +0000 UTC m=+0.695220695 container start 3a38c6bdecd2464cb9279a7e015779a72f1ab9d0cb9e61d68e9c833844ff9bbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_payne, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 04:48:26 np0005593232 gracious_payne[292293]: 167 167
Jan 23 04:48:26 np0005593232 systemd[1]: libpod-3a38c6bdecd2464cb9279a7e015779a72f1ab9d0cb9e61d68e9c833844ff9bbb.scope: Deactivated successfully.
Jan 23 04:48:26 np0005593232 podman[292261]: 2026-01-23 09:48:26.147573154 +0000 UTC m=+0.807111818 container attach 3a38c6bdecd2464cb9279a7e015779a72f1ab9d0cb9e61d68e9c833844ff9bbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:48:26 np0005593232 podman[292261]: 2026-01-23 09:48:26.148181681 +0000 UTC m=+0.807720315 container died 3a38c6bdecd2464cb9279a7e015779a72f1ab9d0cb9e61d68e9c833844ff9bbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_payne, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:48:26 np0005593232 systemd[1]: var-lib-containers-storage-overlay-041c3a0cbce44080ef1b2e0e6d970f48bccff964052fb5d0edeaec2f7e23138a-merged.mount: Deactivated successfully.
Jan 23 04:48:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:27.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:27.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:27 np0005593232 podman[292261]: 2026-01-23 09:48:27.119250605 +0000 UTC m=+1.778789229 container remove 3a38c6bdecd2464cb9279a7e015779a72f1ab9d0cb9e61d68e9c833844ff9bbb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 23 04:48:27 np0005593232 systemd[1]: libpod-conmon-3a38c6bdecd2464cb9279a7e015779a72f1ab9d0cb9e61d68e9c833844ff9bbb.scope: Deactivated successfully.
Jan 23 04:48:27 np0005593232 podman[292346]: 2026-01-23 09:48:27.343233549 +0000 UTC m=+0.053941936 container create 3f4cbaed9ecc68430d7e3b2bb61cc93db053de886da61947eb1378b3600fa519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 23 04:48:27 np0005593232 systemd[1]: Started libpod-conmon-3f4cbaed9ecc68430d7e3b2bb61cc93db053de886da61947eb1378b3600fa519.scope.
Jan 23 04:48:27 np0005593232 podman[292346]: 2026-01-23 09:48:27.322127809 +0000 UTC m=+0.032836206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:48:27 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:48:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1704: 321 pgs: 321 active+clean; 194 MiB data, 661 MiB used, 20 GiB / 21 GiB avail; 107 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 23 04:48:27 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fc0e2a6d06036a60567dc4c62d481c916921509977f933b9b688a5977a4fb62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:48:27 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fc0e2a6d06036a60567dc4c62d481c916921509977f933b9b688a5977a4fb62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:48:27 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fc0e2a6d06036a60567dc4c62d481c916921509977f933b9b688a5977a4fb62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:48:27 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fc0e2a6d06036a60567dc4c62d481c916921509977f933b9b688a5977a4fb62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:48:27 np0005593232 podman[292346]: 2026-01-23 09:48:27.458633013 +0000 UTC m=+0.169341410 container init 3f4cbaed9ecc68430d7e3b2bb61cc93db053de886da61947eb1378b3600fa519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_banach, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 23 04:48:27 np0005593232 podman[292346]: 2026-01-23 09:48:27.470605534 +0000 UTC m=+0.181313911 container start 3f4cbaed9ecc68430d7e3b2bb61cc93db053de886da61947eb1378b3600fa519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 23 04:48:27 np0005593232 podman[292346]: 2026-01-23 09:48:27.473826886 +0000 UTC m=+0.184535263 container attach 3f4cbaed9ecc68430d7e3b2bb61cc93db053de886da61947eb1378b3600fa519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 04:48:27 np0005593232 nova_compute[250269]: 2026-01-23 09:48:27.595 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:28 np0005593232 hungry_banach[292363]: {
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:    "0": [
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:        {
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:            "devices": [
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:                "/dev/loop3"
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:            ],
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:            "lv_name": "ceph_lv0",
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:            "lv_size": "7511998464",
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:            "name": "ceph_lv0",
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:            "tags": {
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:                "ceph.cluster_name": "ceph",
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:                "ceph.crush_device_class": "",
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:                "ceph.encrypted": "0",
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:                "ceph.osd_id": "0",
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:                "ceph.type": "block",
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:                "ceph.vdo": "0"
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:            },
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:            "type": "block",
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:            "vg_name": "ceph_vg0"
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:        }
Jan 23 04:48:28 np0005593232 hungry_banach[292363]:    ]
Jan 23 04:48:28 np0005593232 hungry_banach[292363]: }
Jan 23 04:48:28 np0005593232 systemd[1]: libpod-3f4cbaed9ecc68430d7e3b2bb61cc93db053de886da61947eb1378b3600fa519.scope: Deactivated successfully.
Jan 23 04:48:28 np0005593232 podman[292346]: 2026-01-23 09:48:28.302686243 +0000 UTC m=+1.013394660 container died 3f4cbaed9ecc68430d7e3b2bb61cc93db053de886da61947eb1378b3600fa519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_banach, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:48:28 np0005593232 systemd[1]: var-lib-containers-storage-overlay-8fc0e2a6d06036a60567dc4c62d481c916921509977f933b9b688a5977a4fb62-merged.mount: Deactivated successfully.
Jan 23 04:48:28 np0005593232 podman[292346]: 2026-01-23 09:48:28.36407857 +0000 UTC m=+1.074786947 container remove 3f4cbaed9ecc68430d7e3b2bb61cc93db053de886da61947eb1378b3600fa519 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 04:48:28 np0005593232 systemd[1]: libpod-conmon-3f4cbaed9ecc68430d7e3b2bb61cc93db053de886da61947eb1378b3600fa519.scope: Deactivated successfully.
Jan 23 04:48:28 np0005593232 podman[292524]: 2026-01-23 09:48:28.956752476 +0000 UTC m=+0.050076296 container create e34e88ffc14100e79cd5d3e4b2bfe05c8e5c0e27ca1b3a2d63e1269b1f07cc98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_khorana, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 04:48:28 np0005593232 systemd[1]: Started libpod-conmon-e34e88ffc14100e79cd5d3e4b2bfe05c8e5c0e27ca1b3a2d63e1269b1f07cc98.scope.
Jan 23 04:48:29 np0005593232 nova_compute[250269]: 2026-01-23 09:48:29.019 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:48:29 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:48:29 np0005593232 podman[292524]: 2026-01-23 09:48:28.939199437 +0000 UTC m=+0.032523267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:48:29 np0005593232 podman[292524]: 2026-01-23 09:48:29.042825836 +0000 UTC m=+0.136149656 container init e34e88ffc14100e79cd5d3e4b2bfe05c8e5c0e27ca1b3a2d63e1269b1f07cc98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:48:29 np0005593232 podman[292524]: 2026-01-23 09:48:29.051442661 +0000 UTC m=+0.144766461 container start e34e88ffc14100e79cd5d3e4b2bfe05c8e5c0e27ca1b3a2d63e1269b1f07cc98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:48:29 np0005593232 priceless_khorana[292540]: 167 167
Jan 23 04:48:29 np0005593232 systemd[1]: libpod-e34e88ffc14100e79cd5d3e4b2bfe05c8e5c0e27ca1b3a2d63e1269b1f07cc98.scope: Deactivated successfully.
Jan 23 04:48:29 np0005593232 podman[292524]: 2026-01-23 09:48:29.057242446 +0000 UTC m=+0.150566286 container attach e34e88ffc14100e79cd5d3e4b2bfe05c8e5c0e27ca1b3a2d63e1269b1f07cc98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 23 04:48:29 np0005593232 podman[292524]: 2026-01-23 09:48:29.058356778 +0000 UTC m=+0.151680578 container died e34e88ffc14100e79cd5d3e4b2bfe05c8e5c0e27ca1b3a2d63e1269b1f07cc98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_khorana, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 23 04:48:29 np0005593232 systemd[1]: var-lib-containers-storage-overlay-08318dd7fc767e233ddb116d117ae1728070f3a70f46245f92e3050efefcc86f-merged.mount: Deactivated successfully.
Jan 23 04:48:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:48:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:29.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:48:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:29.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:29 np0005593232 podman[292524]: 2026-01-23 09:48:29.103457431 +0000 UTC m=+0.196781231 container remove e34e88ffc14100e79cd5d3e4b2bfe05c8e5c0e27ca1b3a2d63e1269b1f07cc98 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:48:29 np0005593232 systemd[1]: libpod-conmon-e34e88ffc14100e79cd5d3e4b2bfe05c8e5c0e27ca1b3a2d63e1269b1f07cc98.scope: Deactivated successfully.
Jan 23 04:48:29 np0005593232 nova_compute[250269]: 2026-01-23 09:48:29.201 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:29 np0005593232 podman[292563]: 2026-01-23 09:48:29.304473732 +0000 UTC m=+0.048622055 container create da8192db2f2d9d01499202f3e432981119876f5d200de3c521c16a594df9ba0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cannon, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 04:48:29 np0005593232 systemd[1]: Started libpod-conmon-da8192db2f2d9d01499202f3e432981119876f5d200de3c521c16a594df9ba0e.scope.
Jan 23 04:48:29 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:48:29 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f5abd318a6fc23786bb22089b634ed624e3813da65483f4749fa1e1addafd0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:48:29 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f5abd318a6fc23786bb22089b634ed624e3813da65483f4749fa1e1addafd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:48:29 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f5abd318a6fc23786bb22089b634ed624e3813da65483f4749fa1e1addafd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:48:29 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55f5abd318a6fc23786bb22089b634ed624e3813da65483f4749fa1e1addafd0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:48:29 np0005593232 podman[292563]: 2026-01-23 09:48:29.283436103 +0000 UTC m=+0.027584476 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:48:29 np0005593232 podman[292563]: 2026-01-23 09:48:29.390941932 +0000 UTC m=+0.135090275 container init da8192db2f2d9d01499202f3e432981119876f5d200de3c521c16a594df9ba0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cannon, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:48:29 np0005593232 podman[292563]: 2026-01-23 09:48:29.398472967 +0000 UTC m=+0.142621300 container start da8192db2f2d9d01499202f3e432981119876f5d200de3c521c16a594df9ba0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cannon, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 23 04:48:29 np0005593232 podman[292563]: 2026-01-23 09:48:29.403079188 +0000 UTC m=+0.147227531 container attach da8192db2f2d9d01499202f3e432981119876f5d200de3c521c16a594df9ba0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 04:48:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1705: 321 pgs: 321 active+clean; 241 MiB data, 684 MiB used, 20 GiB / 21 GiB avail; 343 KiB/s rd, 3.8 MiB/s wr, 101 op/s
Jan 23 04:48:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:48:29 np0005593232 nova_compute[250269]: 2026-01-23 09:48:29.746 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:30 np0005593232 elastic_cannon[292579]: {
Jan 23 04:48:30 np0005593232 elastic_cannon[292579]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:48:30 np0005593232 elastic_cannon[292579]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:48:30 np0005593232 elastic_cannon[292579]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:48:30 np0005593232 elastic_cannon[292579]:        "osd_id": 0,
Jan 23 04:48:30 np0005593232 elastic_cannon[292579]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:48:30 np0005593232 elastic_cannon[292579]:        "type": "bluestore"
Jan 23 04:48:30 np0005593232 elastic_cannon[292579]:    }
Jan 23 04:48:30 np0005593232 elastic_cannon[292579]: }
Jan 23 04:48:30 np0005593232 systemd[1]: libpod-da8192db2f2d9d01499202f3e432981119876f5d200de3c521c16a594df9ba0e.scope: Deactivated successfully.
Jan 23 04:48:30 np0005593232 podman[292563]: 2026-01-23 09:48:30.265106688 +0000 UTC m=+1.009255011 container died da8192db2f2d9d01499202f3e432981119876f5d200de3c521c16a594df9ba0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 04:48:30 np0005593232 systemd[1]: var-lib-containers-storage-overlay-55f5abd318a6fc23786bb22089b634ed624e3813da65483f4749fa1e1addafd0-merged.mount: Deactivated successfully.
Jan 23 04:48:30 np0005593232 podman[292563]: 2026-01-23 09:48:30.871958398 +0000 UTC m=+1.616106741 container remove da8192db2f2d9d01499202f3e432981119876f5d200de3c521c16a594df9ba0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_cannon, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 04:48:30 np0005593232 systemd[1]: libpod-conmon-da8192db2f2d9d01499202f3e432981119876f5d200de3c521c16a594df9ba0e.scope: Deactivated successfully.
Jan 23 04:48:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:48:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:48:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:48:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:48:30 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7b32de04-ac15-4da6-b34d-589fb12339d0 does not exist
Jan 23 04:48:30 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 955539fa-183b-4873-a3d5-ad7420a84bf4 does not exist
Jan 23 04:48:30 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 30b35bc2-8bfa-4ba7-bc3c-beddd234375b does not exist
Jan 23 04:48:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:48:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:31.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:48:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:31.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1706: 321 pgs: 321 active+clean; 247 MiB data, 687 MiB used, 20 GiB / 21 GiB avail; 328 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Jan 23 04:48:31 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:48:31 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:48:32 np0005593232 nova_compute[250269]: 2026-01-23 09:48:32.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:48:32 np0005593232 nova_compute[250269]: 2026-01-23 09:48:32.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 23 04:48:32 np0005593232 nova_compute[250269]: 2026-01-23 09:48:32.311 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 23 04:48:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:48:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:33.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:48:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:33.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1707: 321 pgs: 321 active+clean; 247 MiB data, 687 MiB used, 20 GiB / 21 GiB avail; 466 KiB/s rd, 3.9 MiB/s wr, 105 op/s
Jan 23 04:48:34 np0005593232 nova_compute[250269]: 2026-01-23 09:48:34.203 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:48:34 np0005593232 nova_compute[250269]: 2026-01-23 09:48:34.749 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:35.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:35.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1708: 321 pgs: 321 active+clean; 247 MiB data, 687 MiB used, 20 GiB / 21 GiB avail; 443 KiB/s rd, 2.6 MiB/s wr, 88 op/s
Jan 23 04:48:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:37.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:48:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:37.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:48:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:48:37
Jan 23 04:48:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:48:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:48:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'volumes', 'images', 'default.rgw.meta', 'default.rgw.log', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms']
Jan 23 04:48:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:48:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1709: 321 pgs: 321 active+clean; 247 MiB data, 687 MiB used, 20 GiB / 21 GiB avail; 716 KiB/s rd, 2.6 MiB/s wr, 98 op/s
Jan 23 04:48:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:48:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:48:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:48:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:48:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:48:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:48:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:48:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:48:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:48:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:48:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:48:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:48:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:48:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:48:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:48:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:48:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:39.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:48:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:39.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:48:39 np0005593232 nova_compute[250269]: 2026-01-23 09:48:39.206 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1710: 321 pgs: 321 active+clean; 227 MiB data, 687 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 141 op/s
Jan 23 04:48:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:48:39 np0005593232 nova_compute[250269]: 2026-01-23 09:48:39.751 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:40 np0005593232 nova_compute[250269]: 2026-01-23 09:48:40.032 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:48:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:41.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:48:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:41.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1711: 321 pgs: 321 active+clean; 223 MiB data, 685 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 222 KiB/s wr, 91 op/s
Jan 23 04:48:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:42.602 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:48:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:42.603 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:48:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:42.605 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:48:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:43.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:48:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:43.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:48:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1712: 321 pgs: 321 active+clean; 200 MiB data, 666 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 100 op/s
Jan 23 04:48:44 np0005593232 nova_compute[250269]: 2026-01-23 09:48:44.208 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:48:44 np0005593232 nova_compute[250269]: 2026-01-23 09:48:44.754 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:45.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:45.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1713: 321 pgs: 321 active+clean; 200 MiB data, 666 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 15 KiB/s wr, 86 op/s
Jan 23 04:48:45 np0005593232 nova_compute[250269]: 2026-01-23 09:48:45.680 250273 DEBUG oslo_concurrency.lockutils [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Acquiring lock "interface-9fd1f64a-4b5b-4638-9411-04027230851c-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:48:45 np0005593232 nova_compute[250269]: 2026-01-23 09:48:45.681 250273 DEBUG oslo_concurrency.lockutils [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "interface-9fd1f64a-4b5b-4638-9411-04027230851c-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:48:45 np0005593232 nova_compute[250269]: 2026-01-23 09:48:45.681 250273 DEBUG nova.objects.instance [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lazy-loading 'flavor' on Instance uuid 9fd1f64a-4b5b-4638-9411-04027230851c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:48:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:45.870 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:48:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:45.871 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:48:45 np0005593232 nova_compute[250269]: 2026-01-23 09:48:45.878 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:48:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:48:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:48:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:48:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004343185836590359 of space, bias 1.0, pg target 1.3029557509771077 quantized to 32 (current 32)
Jan 23 04:48:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:48:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Jan 23 04:48:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:48:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:48:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:48:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 23 04:48:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:48:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 04:48:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:48:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:48:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:48:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 04:48:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:48:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 04:48:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:48:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:48:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:48:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 04:48:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:47.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:47.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1714: 321 pgs: 321 active+clean; 200 MiB data, 666 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 16 KiB/s wr, 86 op/s
Jan 23 04:48:47 np0005593232 nova_compute[250269]: 2026-01-23 09:48:47.571 250273 DEBUG nova.objects.instance [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9fd1f64a-4b5b-4638-9411-04027230851c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:48:47 np0005593232 nova_compute[250269]: 2026-01-23 09:48:47.598 250273 DEBUG nova.network.neutron [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:48:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 04:48:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 8544 writes, 37K keys, 8544 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 8544 writes, 8544 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1724 writes, 7794 keys, 1724 commit groups, 1.0 writes per commit group, ingest: 11.36 MB, 0.02 MB/s#012Interval WAL: 1724 writes, 1724 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     47.1      1.02              0.18        22    0.046       0      0       0.0       0.0#012  L6      1/0    9.54 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.9     93.5     77.4      2.39              0.60        21    0.114    111K    12K       0.0       0.0#012 Sum      1/0    9.54 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.9     65.6     68.4      3.40              0.78        43    0.079    111K    12K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.9     69.9     71.2      0.79              0.20        10    0.079     32K   3085       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     93.5     77.4      2.39              0.60        21    0.114    111K    12K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     47.2      1.01              0.18        21    0.048       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.047, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.23 GB write, 0.08 MB/s write, 0.22 GB read, 0.07 MB/s read, 3.4 seconds#012Interval compaction: 0.06 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a844231f0#2 capacity: 304.00 MB usage: 25.15 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000436 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1468,24.28 MB,7.98622%) FilterBlock(44,318.67 KB,0.102369%) IndexBlock(44,574.75 KB,0.184631%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 23 04:48:48 np0005593232 nova_compute[250269]: 2026-01-23 09:48:48.128 250273 DEBUG nova.policy [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8d814ef2afe04103bb6aa24724d61b11', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '15c7fbc4d6794364830639a1fee9ecf0', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 04:48:48 np0005593232 podman[292723]: 2026-01-23 09:48:48.433841094 +0000 UTC m=+0.090808508 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:48:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:49.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:49.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:49 np0005593232 nova_compute[250269]: 2026-01-23 09:48:49.210 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1715: 321 pgs: 321 active+clean; 200 MiB data, 666 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 6.2 KiB/s wr, 76 op/s
Jan 23 04:48:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:48:49 np0005593232 nova_compute[250269]: 2026-01-23 09:48:49.756 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:49 np0005593232 nova_compute[250269]: 2026-01-23 09:48:49.963 250273 DEBUG nova.network.neutron [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Successfully created port: dd3423ed-bdeb-4860-86ec-694be592c799 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 04:48:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:48:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:51.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:48:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:51.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1716: 321 pgs: 321 active+clean; 191 MiB data, 657 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 568 KiB/s wr, 28 op/s
Jan 23 04:48:52 np0005593232 ovn_controller[151001]: 2026-01-23T09:48:52Z|00187|binding|INFO|Releasing lport 5ab572f5-a09b-44ef-93ec-cc372fcc8fe5 from this chassis (sb_readonly=0)
Jan 23 04:48:52 np0005593232 nova_compute[250269]: 2026-01-23 09:48:52.062 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:52 np0005593232 nova_compute[250269]: 2026-01-23 09:48:52.402 250273 DEBUG nova.network.neutron [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Successfully updated port: dd3423ed-bdeb-4860-86ec-694be592c799 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 04:48:52 np0005593232 nova_compute[250269]: 2026-01-23 09:48:52.445 250273 DEBUG oslo_concurrency.lockutils [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Acquiring lock "refresh_cache-9fd1f64a-4b5b-4638-9411-04027230851c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:48:52 np0005593232 nova_compute[250269]: 2026-01-23 09:48:52.446 250273 DEBUG oslo_concurrency.lockutils [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Acquired lock "refresh_cache-9fd1f64a-4b5b-4638-9411-04027230851c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:48:52 np0005593232 nova_compute[250269]: 2026-01-23 09:48:52.446 250273 DEBUG nova.network.neutron [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:48:52 np0005593232 nova_compute[250269]: 2026-01-23 09:48:52.538 250273 DEBUG nova.compute.manager [req-d9815eec-e80b-4a84-bd5e-34f8a2f1a550 req-85865bdc-fb65-4b27-94b0-049384b4efc5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Received event network-changed-dd3423ed-bdeb-4860-86ec-694be592c799 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:48:52 np0005593232 nova_compute[250269]: 2026-01-23 09:48:52.538 250273 DEBUG nova.compute.manager [req-d9815eec-e80b-4a84-bd5e-34f8a2f1a550 req-85865bdc-fb65-4b27-94b0-049384b4efc5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Refreshing instance network info cache due to event network-changed-dd3423ed-bdeb-4860-86ec-694be592c799. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:48:52 np0005593232 nova_compute[250269]: 2026-01-23 09:48:52.539 250273 DEBUG oslo_concurrency.lockutils [req-d9815eec-e80b-4a84-bd5e-34f8a2f1a550 req-85865bdc-fb65-4b27-94b0-049384b4efc5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-9fd1f64a-4b5b-4638-9411-04027230851c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:48:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:48:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:53.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:48:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:53.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1717: 321 pgs: 321 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Jan 23 04:48:54 np0005593232 nova_compute[250269]: 2026-01-23 09:48:54.212 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:48:54 np0005593232 nova_compute[250269]: 2026-01-23 09:48:54.758 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:55.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:55.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.358 250273 DEBUG nova.network.neutron [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Updating instance_info_cache with network_info: [{"id": "91e73992-335d-44cf-a5ca-3882d9ba3477", "address": "fa:16:3e:cd:7d:6d", "network": {"id": "3f2ef857-ecb1-4eae-8bff-88d44b044dff", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1595805221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e73992-33", "ovs_interfaceid": "91e73992-335d-44cf-a5ca-3882d9ba3477", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dd3423ed-bdeb-4860-86ec-694be592c799", "address": "fa:16:3e:ee:ae:52", "network": {"id": "4268ec60-e2aa-4e94-8ccb-604b1c0b73dc", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-2064626369", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.196", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd3423ed-bd", "ovs_interfaceid": "dd3423ed-bdeb-4860-86ec-694be592c799", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.382 250273 DEBUG oslo_concurrency.lockutils [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Releasing lock "refresh_cache-9fd1f64a-4b5b-4638-9411-04027230851c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.383 250273 DEBUG oslo_concurrency.lockutils [req-d9815eec-e80b-4a84-bd5e-34f8a2f1a550 req-85865bdc-fb65-4b27-94b0-049384b4efc5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-9fd1f64a-4b5b-4638-9411-04027230851c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.384 250273 DEBUG nova.network.neutron [req-d9815eec-e80b-4a84-bd5e-34f8a2f1a550 req-85865bdc-fb65-4b27-94b0-049384b4efc5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Refreshing network info cache for port dd3423ed-bdeb-4860-86ec-694be592c799 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.386 250273 DEBUG nova.virt.libvirt.vif [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1637579954',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1637579954',id=63,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJXr2k8IY5inNTCZwPVk5EfXAYp/4zzrsuhLp4zz6M69G4n9zDvSId1yAkQJtnYhKGxrxUt2whN+RRd76STJFNsQDbsDJL6/27IM6kTM9k4+zPmNQXcs5cl0x6R7jt2AVA==',key_name='tempest-keypair-382580899',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:48:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='15c7fbc4d6794364830639a1fee9ecf0',ramdisk_id='',reservation_id='r-r4kdwucs',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',
image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-107101210',owner_user_name='tempest-TaggedAttachmentsTest-107101210-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:48:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8d814ef2afe04103bb6aa24724d61b11',uuid=9fd1f64a-4b5b-4638-9411-04027230851c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dd3423ed-bdeb-4860-86ec-694be592c799", "address": "fa:16:3e:ee:ae:52", "network": {"id": "4268ec60-e2aa-4e94-8ccb-604b1c0b73dc", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-2064626369", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.196", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd3423ed-bd", "ovs_interfaceid": "dd3423ed-bdeb-4860-86ec-694be592c799", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.387 250273 DEBUG nova.network.os_vif_util [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Converting VIF {"id": "dd3423ed-bdeb-4860-86ec-694be592c799", "address": "fa:16:3e:ee:ae:52", "network": {"id": "4268ec60-e2aa-4e94-8ccb-604b1c0b73dc", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-2064626369", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.196", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd3423ed-bd", "ovs_interfaceid": "dd3423ed-bdeb-4860-86ec-694be592c799", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.387 250273 DEBUG nova.network.os_vif_util [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:ae:52,bridge_name='br-int',has_traffic_filtering=True,id=dd3423ed-bdeb-4860-86ec-694be592c799,network=Network(4268ec60-e2aa-4e94-8ccb-604b1c0b73dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd3423ed-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.388 250273 DEBUG os_vif [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:ae:52,bridge_name='br-int',has_traffic_filtering=True,id=dd3423ed-bdeb-4860-86ec-694be592c799,network=Network(4268ec60-e2aa-4e94-8ccb-604b1c0b73dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd3423ed-bd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.388 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.388 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.389 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.392 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.393 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdd3423ed-bd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.393 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdd3423ed-bd, col_values=(('external_ids', {'iface-id': 'dd3423ed-bdeb-4860-86ec-694be592c799', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:ae:52', 'vm-uuid': '9fd1f64a-4b5b-4638-9411-04027230851c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:48:55 np0005593232 NetworkManager[49057]: <info>  [1769161735.4370] manager: (tapdd3423ed-bd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.440 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.442 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.444 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.444 250273 INFO os_vif [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:ae:52,bridge_name='br-int',has_traffic_filtering=True,id=dd3423ed-bdeb-4860-86ec-694be592c799,network=Network(4268ec60-e2aa-4e94-8ccb-604b1c0b73dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd3423ed-bd')#033[00m
Jan 23 04:48:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1718: 321 pgs: 321 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.445 250273 DEBUG nova.virt.libvirt.vif [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1637579954',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1637579954',id=63,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJXr2k8IY5inNTCZwPVk5EfXAYp/4zzrsuhLp4zz6M69G4n9zDvSId1yAkQJtnYhKGxrxUt2whN+RRd76STJFNsQDbsDJL6/27IM6kTM9k4+zPmNQXcs5cl0x6R7jt2AVA==',key_name='tempest-keypair-382580899',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:48:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='15c7fbc4d6794364830639a1fee9ecf0',ramdisk_id='',reservation_id='r-r4kdwucs',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',
image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-107101210',owner_user_name='tempest-TaggedAttachmentsTest-107101210-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:48:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8d814ef2afe04103bb6aa24724d61b11',uuid=9fd1f64a-4b5b-4638-9411-04027230851c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dd3423ed-bdeb-4860-86ec-694be592c799", "address": "fa:16:3e:ee:ae:52", "network": {"id": "4268ec60-e2aa-4e94-8ccb-604b1c0b73dc", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-2064626369", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.196", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd3423ed-bd", "ovs_interfaceid": "dd3423ed-bdeb-4860-86ec-694be592c799", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.445 250273 DEBUG nova.network.os_vif_util [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Converting VIF {"id": "dd3423ed-bdeb-4860-86ec-694be592c799", "address": "fa:16:3e:ee:ae:52", "network": {"id": "4268ec60-e2aa-4e94-8ccb-604b1c0b73dc", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-2064626369", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.196", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd3423ed-bd", "ovs_interfaceid": "dd3423ed-bdeb-4860-86ec-694be592c799", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.446 250273 DEBUG nova.network.os_vif_util [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:ae:52,bridge_name='br-int',has_traffic_filtering=True,id=dd3423ed-bdeb-4860-86ec-694be592c799,network=Network(4268ec60-e2aa-4e94-8ccb-604b1c0b73dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd3423ed-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.449 250273 DEBUG nova.virt.libvirt.guest [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] attach device xml: <interface type="ethernet">
Jan 23 04:48:55 np0005593232 nova_compute[250269]:  <mac address="fa:16:3e:ee:ae:52"/>
Jan 23 04:48:55 np0005593232 nova_compute[250269]:  <model type="virtio"/>
Jan 23 04:48:55 np0005593232 nova_compute[250269]:  <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:48:55 np0005593232 nova_compute[250269]:  <mtu size="1442"/>
Jan 23 04:48:55 np0005593232 nova_compute[250269]:  <target dev="tapdd3423ed-bd"/>
Jan 23 04:48:55 np0005593232 nova_compute[250269]: </interface>
Jan 23 04:48:55 np0005593232 nova_compute[250269]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 23 04:48:55 np0005593232 kernel: tapdd3423ed-bd: entered promiscuous mode
Jan 23 04:48:55 np0005593232 NetworkManager[49057]: <info>  [1769161735.4651] manager: (tapdd3423ed-bd): new Tun device (/org/freedesktop/NetworkManager/Devices/97)
Jan 23 04:48:55 np0005593232 ovn_controller[151001]: 2026-01-23T09:48:55Z|00188|binding|INFO|Claiming lport dd3423ed-bdeb-4860-86ec-694be592c799 for this chassis.
Jan 23 04:48:55 np0005593232 ovn_controller[151001]: 2026-01-23T09:48:55Z|00189|binding|INFO|dd3423ed-bdeb-4860-86ec-694be592c799: Claiming fa:16:3e:ee:ae:52 10.10.10.196
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.464 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.469 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.477 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:ae:52 10.10.10.196'], port_security=['fa:16:3e:ee:ae:52 10.10.10.196'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.10.10.196/24', 'neutron:device_id': '9fd1f64a-4b5b-4638-9411-04027230851c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '15c7fbc4d6794364830639a1fee9ecf0', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a3098648-5285-4194-8c21-8a68f4de4a75', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e0bd3e8-1715-40d2-9b26-9d78a1f41e00, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=dd3423ed-bdeb-4860-86ec-694be592c799) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.479 161902 INFO neutron.agent.ovn.metadata.agent [-] Port dd3423ed-bdeb-4860-86ec-694be592c799 in datapath 4268ec60-e2aa-4e94-8ccb-604b1c0b73dc bound to our chassis#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.480 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4268ec60-e2aa-4e94-8ccb-604b1c0b73dc#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.498 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[07d9cfb9-0dc2-4b71-b728-d93495d2166f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.500 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4268ec60-e1 in ovnmeta-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.503 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4268ec60-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.503 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f9a884de-2869-40ac-8a22-15d28f5c1ea9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.504 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[38a0d09f-d9a4-4c4b-af2d-efb7787fd3ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:55 np0005593232 systemd-udevd[292760]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:48:55 np0005593232 ovn_controller[151001]: 2026-01-23T09:48:55Z|00190|binding|INFO|Setting lport dd3423ed-bdeb-4860-86ec-694be592c799 ovn-installed in OVS
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.517 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[5ebc4a7b-5ff5-4661-a5b8-636b93a28376]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:55 np0005593232 ovn_controller[151001]: 2026-01-23T09:48:55Z|00191|binding|INFO|Setting lport dd3423ed-bdeb-4860-86ec-694be592c799 up in Southbound
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.519 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.522 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:55 np0005593232 NetworkManager[49057]: <info>  [1769161735.5304] device (tapdd3423ed-bd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:48:55 np0005593232 NetworkManager[49057]: <info>  [1769161735.5309] device (tapdd3423ed-bd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.534 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[81e01daf-36d9-4e54-a8a5-2e6257c3a8c9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.568 250273 DEBUG nova.virt.libvirt.driver [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.568 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[6416a803-5fd5-4f4c-a0ad-16e2681397c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.569 250273 DEBUG nova.virt.libvirt.driver [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.570 250273 DEBUG nova.virt.libvirt.driver [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] No VIF found with MAC fa:16:3e:cd:7d:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.575 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8fbb57fe-06b5-414c-bca0-4e3f0324c09e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:55 np0005593232 systemd-udevd[292764]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:48:55 np0005593232 NetworkManager[49057]: <info>  [1769161735.5765] manager: (tap4268ec60-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/98)
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.597 250273 DEBUG nova.virt.libvirt.guest [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:48:55 np0005593232 nova_compute[250269]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:48:55 np0005593232 nova_compute[250269]:  <nova:name>tempest-device-tagging-server-1637579954</nova:name>
Jan 23 04:48:55 np0005593232 nova_compute[250269]:  <nova:creationTime>2026-01-23 09:48:55</nova:creationTime>
Jan 23 04:48:55 np0005593232 nova_compute[250269]:  <nova:flavor name="m1.nano">
Jan 23 04:48:55 np0005593232 nova_compute[250269]:    <nova:memory>128</nova:memory>
Jan 23 04:48:55 np0005593232 nova_compute[250269]:    <nova:disk>1</nova:disk>
Jan 23 04:48:55 np0005593232 nova_compute[250269]:    <nova:swap>0</nova:swap>
Jan 23 04:48:55 np0005593232 nova_compute[250269]:    <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:48:55 np0005593232 nova_compute[250269]:    <nova:vcpus>1</nova:vcpus>
Jan 23 04:48:55 np0005593232 nova_compute[250269]:  </nova:flavor>
Jan 23 04:48:55 np0005593232 nova_compute[250269]:  <nova:owner>
Jan 23 04:48:55 np0005593232 nova_compute[250269]:    <nova:user uuid="8d814ef2afe04103bb6aa24724d61b11">tempest-TaggedAttachmentsTest-107101210-project-member</nova:user>
Jan 23 04:48:55 np0005593232 nova_compute[250269]:    <nova:project uuid="15c7fbc4d6794364830639a1fee9ecf0">tempest-TaggedAttachmentsTest-107101210</nova:project>
Jan 23 04:48:55 np0005593232 nova_compute[250269]:  </nova:owner>
Jan 23 04:48:55 np0005593232 nova_compute[250269]:  <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:48:55 np0005593232 nova_compute[250269]:  <nova:ports>
Jan 23 04:48:55 np0005593232 nova_compute[250269]:    <nova:port uuid="91e73992-335d-44cf-a5ca-3882d9ba3477">
Jan 23 04:48:55 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 23 04:48:55 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 04:48:55 np0005593232 nova_compute[250269]:    <nova:port uuid="dd3423ed-bdeb-4860-86ec-694be592c799">
Jan 23 04:48:55 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.10.10.196" ipVersion="4"/>
Jan 23 04:48:55 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 04:48:55 np0005593232 nova_compute[250269]:  </nova:ports>
Jan 23 04:48:55 np0005593232 nova_compute[250269]: </nova:instance>
Jan 23 04:48:55 np0005593232 nova_compute[250269]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.609 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[da1a7ca1-cc96-48a4-8adc-f3dd33940496]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.612 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[f545bb59-9a06-4732-8856-b41ad7d13c10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.633 250273 DEBUG oslo_concurrency.lockutils [None req-d549af69-422b-4831-a8af-baf86152f468 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "interface-9fd1f64a-4b5b-4638-9411-04027230851c-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 9.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:48:55 np0005593232 NetworkManager[49057]: <info>  [1769161735.6370] device (tap4268ec60-e0): carrier: link connected
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.645 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[30c4391f-797e-4b4e-a81e-70467fc4b673]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.661 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0f08f18d-c8b3-4225-8cba-3a6d1986c648]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4268ec60-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:b5:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564877, 'reachable_time': 19365, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292801, 'error': None, 'target': 'ovnmeta-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:55 np0005593232 podman[292780]: 2026-01-23 09:48:55.671765466 +0000 UTC m=+0.057500725 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.676 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e0185c-6684-434b-a75a-78c05395ba36]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1b:b574'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 564877, 'tstamp': 564877}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292806, 'error': None, 'target': 'ovnmeta-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.690 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a810567b-9eb6-4d68-9956-80dddc3dcb08]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4268ec60-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1b:b5:74'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564877, 'reachable_time': 19365, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292807, 'error': None, 'target': 'ovnmeta-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.713 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e5526d9b-7e41-4629-98d2-d762e6365669]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.758 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[52ff9101-40f2-4e17-8b6b-47eb643c208b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.759 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4268ec60-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.760 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.760 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4268ec60-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.762 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:55 np0005593232 kernel: tap4268ec60-e0: entered promiscuous mode
Jan 23 04:48:55 np0005593232 NetworkManager[49057]: <info>  [1769161735.7642] manager: (tap4268ec60-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/99)
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.764 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.765 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4268ec60-e0, col_values=(('external_ids', {'iface-id': '42f3dd17-1594-4beb-999d-7d9f312f29fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.766 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:55 np0005593232 ovn_controller[151001]: 2026-01-23T09:48:55Z|00192|binding|INFO|Releasing lport 42f3dd17-1594-4beb-999d-7d9f312f29fe from this chassis (sb_readonly=0)
Jan 23 04:48:55 np0005593232 nova_compute[250269]: 2026-01-23 09:48:55.781 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.781 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4268ec60-e2aa-4e94-8ccb-604b1c0b73dc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4268ec60-e2aa-4e94-8ccb-604b1c0b73dc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.782 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d42abab1-c39f-41f5-8074-4c90ca43a37f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.782 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/4268ec60-e2aa-4e94-8ccb-604b1c0b73dc.pid.haproxy
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 4268ec60-e2aa-4e94-8ccb-604b1c0b73dc
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.783 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc', 'env', 'PROCESS_TAG=haproxy-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4268ec60-e2aa-4e94-8ccb-604b1c0b73dc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 04:48:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:48:55.875 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:48:56 np0005593232 podman[292839]: 2026-01-23 09:48:56.116146583 +0000 UTC m=+0.024081699 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:48:56 np0005593232 podman[292839]: 2026-01-23 09:48:56.218399678 +0000 UTC m=+0.126334784 container create 1ab8563a1a0023075db73acc3da083c2ffa2293809ed28a798dc44558152fec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 23 04:48:56 np0005593232 systemd[1]: Started libpod-conmon-1ab8563a1a0023075db73acc3da083c2ffa2293809ed28a798dc44558152fec7.scope.
Jan 23 04:48:56 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:48:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6839d2d3993975d65794efe288844de66bd5b60879c0a9b886ad36bd75bf4ef6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:48:56 np0005593232 podman[292839]: 2026-01-23 09:48:56.314368812 +0000 UTC m=+0.222303928 container init 1ab8563a1a0023075db73acc3da083c2ffa2293809ed28a798dc44558152fec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 23 04:48:56 np0005593232 podman[292839]: 2026-01-23 09:48:56.320543329 +0000 UTC m=+0.228478425 container start 1ab8563a1a0023075db73acc3da083c2ffa2293809ed28a798dc44558152fec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 23 04:48:56 np0005593232 neutron-haproxy-ovnmeta-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc[292854]: [NOTICE]   (292858) : New worker (292860) forked
Jan 23 04:48:56 np0005593232 neutron-haproxy-ovnmeta-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc[292854]: [NOTICE]   (292858) : Loading success.
Jan 23 04:48:56 np0005593232 nova_compute[250269]: 2026-01-23 09:48:56.691 250273 DEBUG oslo_concurrency.lockutils [None req-2c45d4fa-519b-4e90-ae3f-70707313d5a0 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Acquiring lock "9fd1f64a-4b5b-4638-9411-04027230851c" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:48:56 np0005593232 nova_compute[250269]: 2026-01-23 09:48:56.692 250273 DEBUG oslo_concurrency.lockutils [None req-2c45d4fa-519b-4e90-ae3f-70707313d5a0 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:48:56 np0005593232 nova_compute[250269]: 2026-01-23 09:48:56.729 250273 DEBUG nova.objects.instance [None req-2c45d4fa-519b-4e90-ae3f-70707313d5a0 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lazy-loading 'flavor' on Instance uuid 9fd1f64a-4b5b-4638-9411-04027230851c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:48:56 np0005593232 nova_compute[250269]: 2026-01-23 09:48:56.792 250273 DEBUG oslo_concurrency.lockutils [None req-2c45d4fa-519b-4e90-ae3f-70707313d5a0 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:48:56 np0005593232 nova_compute[250269]: 2026-01-23 09:48:56.867 250273 DEBUG nova.compute.manager [req-1660aec8-5389-4664-9aaa-c68702517e9a req-3f457c56-0a12-41cc-84b1-b03b0cbd8442 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Received event network-vif-plugged-dd3423ed-bdeb-4860-86ec-694be592c799 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:48:56 np0005593232 nova_compute[250269]: 2026-01-23 09:48:56.868 250273 DEBUG oslo_concurrency.lockutils [req-1660aec8-5389-4664-9aaa-c68702517e9a req-3f457c56-0a12-41cc-84b1-b03b0cbd8442 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:48:56 np0005593232 nova_compute[250269]: 2026-01-23 09:48:56.868 250273 DEBUG oslo_concurrency.lockutils [req-1660aec8-5389-4664-9aaa-c68702517e9a req-3f457c56-0a12-41cc-84b1-b03b0cbd8442 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:48:56 np0005593232 nova_compute[250269]: 2026-01-23 09:48:56.868 250273 DEBUG oslo_concurrency.lockutils [req-1660aec8-5389-4664-9aaa-c68702517e9a req-3f457c56-0a12-41cc-84b1-b03b0cbd8442 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:48:56 np0005593232 nova_compute[250269]: 2026-01-23 09:48:56.868 250273 DEBUG nova.compute.manager [req-1660aec8-5389-4664-9aaa-c68702517e9a req-3f457c56-0a12-41cc-84b1-b03b0cbd8442 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] No waiting events found dispatching network-vif-plugged-dd3423ed-bdeb-4860-86ec-694be592c799 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:48:56 np0005593232 nova_compute[250269]: 2026-01-23 09:48:56.868 250273 WARNING nova.compute.manager [req-1660aec8-5389-4664-9aaa-c68702517e9a req-3f457c56-0a12-41cc-84b1-b03b0cbd8442 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Received unexpected event network-vif-plugged-dd3423ed-bdeb-4860-86ec-694be592c799 for instance with vm_state active and task_state None.#033[00m
Jan 23 04:48:56 np0005593232 nova_compute[250269]: 2026-01-23 09:48:56.869 250273 DEBUG nova.compute.manager [req-1660aec8-5389-4664-9aaa-c68702517e9a req-3f457c56-0a12-41cc-84b1-b03b0cbd8442 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Received event network-vif-plugged-dd3423ed-bdeb-4860-86ec-694be592c799 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:48:56 np0005593232 nova_compute[250269]: 2026-01-23 09:48:56.869 250273 DEBUG oslo_concurrency.lockutils [req-1660aec8-5389-4664-9aaa-c68702517e9a req-3f457c56-0a12-41cc-84b1-b03b0cbd8442 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:48:56 np0005593232 nova_compute[250269]: 2026-01-23 09:48:56.869 250273 DEBUG oslo_concurrency.lockutils [req-1660aec8-5389-4664-9aaa-c68702517e9a req-3f457c56-0a12-41cc-84b1-b03b0cbd8442 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:48:56 np0005593232 nova_compute[250269]: 2026-01-23 09:48:56.869 250273 DEBUG oslo_concurrency.lockutils [req-1660aec8-5389-4664-9aaa-c68702517e9a req-3f457c56-0a12-41cc-84b1-b03b0cbd8442 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:48:56 np0005593232 nova_compute[250269]: 2026-01-23 09:48:56.869 250273 DEBUG nova.compute.manager [req-1660aec8-5389-4664-9aaa-c68702517e9a req-3f457c56-0a12-41cc-84b1-b03b0cbd8442 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] No waiting events found dispatching network-vif-plugged-dd3423ed-bdeb-4860-86ec-694be592c799 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:48:56 np0005593232 nova_compute[250269]: 2026-01-23 09:48:56.870 250273 WARNING nova.compute.manager [req-1660aec8-5389-4664-9aaa-c68702517e9a req-3f457c56-0a12-41cc-84b1-b03b0cbd8442 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Received unexpected event network-vif-plugged-dd3423ed-bdeb-4860-86ec-694be592c799 for instance with vm_state active and task_state None.#033[00m
Jan 23 04:48:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:48:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:57.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:48:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:57.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:57 np0005593232 nova_compute[250269]: 2026-01-23 09:48:57.209 250273 DEBUG oslo_concurrency.lockutils [None req-2c45d4fa-519b-4e90-ae3f-70707313d5a0 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Acquiring lock "9fd1f64a-4b5b-4638-9411-04027230851c" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:48:57 np0005593232 nova_compute[250269]: 2026-01-23 09:48:57.209 250273 DEBUG oslo_concurrency.lockutils [None req-2c45d4fa-519b-4e90-ae3f-70707313d5a0 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:48:57 np0005593232 nova_compute[250269]: 2026-01-23 09:48:57.210 250273 INFO nova.compute.manager [None req-2c45d4fa-519b-4e90-ae3f-70707313d5a0 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Attaching volume 2a5f9051-d3ab-46f9-b4f7-644aa73bccd4 to /dev/vdb#033[00m
Jan 23 04:48:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1719: 321 pgs: 321 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 23 04:48:57 np0005593232 nova_compute[250269]: 2026-01-23 09:48:57.595 250273 DEBUG os_brick.utils [None req-2c45d4fa-519b-4e90-ae3f-70707313d5a0 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 23 04:48:57 np0005593232 nova_compute[250269]: 2026-01-23 09:48:57.598 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:48:57 np0005593232 nova_compute[250269]: 2026-01-23 09:48:57.612 265031 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:48:57 np0005593232 nova_compute[250269]: 2026-01-23 09:48:57.612 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[49a9aa91-0cf7-49f7-8920-973497fdd0a3]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:57 np0005593232 nova_compute[250269]: 2026-01-23 09:48:57.613 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:48:57 np0005593232 nova_compute[250269]: 2026-01-23 09:48:57.622 265031 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:48:57 np0005593232 nova_compute[250269]: 2026-01-23 09:48:57.622 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[854a4a28-4e57-4c2e-adf4-4e5afd6c4601]: (4, ('InitiatorName=iqn.1994-05.com.redhat:c6473626b456', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:57 np0005593232 nova_compute[250269]: 2026-01-23 09:48:57.623 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:48:57 np0005593232 nova_compute[250269]: 2026-01-23 09:48:57.632 265031 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:48:57 np0005593232 nova_compute[250269]: 2026-01-23 09:48:57.632 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[1990fd95-f4f5-4448-8af8-ae458648502a]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:57 np0005593232 nova_compute[250269]: 2026-01-23 09:48:57.634 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[123c3343-af8e-463e-acc9-e3bdf0ab956c]: (4, 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:48:57 np0005593232 nova_compute[250269]: 2026-01-23 09:48:57.634 250273 DEBUG oslo_concurrency.processutils [None req-2c45d4fa-519b-4e90-ae3f-70707313d5a0 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:48:57 np0005593232 nova_compute[250269]: 2026-01-23 09:48:57.665 250273 DEBUG oslo_concurrency.processutils [None req-2c45d4fa-519b-4e90-ae3f-70707313d5a0 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:48:57 np0005593232 nova_compute[250269]: 2026-01-23 09:48:57.667 250273 DEBUG os_brick.initiator.connectors.lightos [None req-2c45d4fa-519b-4e90-ae3f-70707313d5a0 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 23 04:48:57 np0005593232 nova_compute[250269]: 2026-01-23 09:48:57.668 250273 DEBUG os_brick.initiator.connectors.lightos [None req-2c45d4fa-519b-4e90-ae3f-70707313d5a0 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 23 04:48:57 np0005593232 nova_compute[250269]: 2026-01-23 09:48:57.668 250273 DEBUG os_brick.initiator.connectors.lightos [None req-2c45d4fa-519b-4e90-ae3f-70707313d5a0 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 23 04:48:57 np0005593232 nova_compute[250269]: 2026-01-23 09:48:57.668 250273 DEBUG os_brick.utils [None req-2c45d4fa-519b-4e90-ae3f-70707313d5a0 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] <== get_connector_properties: return (71ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:c6473626b456', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 23 04:48:57 np0005593232 nova_compute[250269]: 2026-01-23 09:48:57.668 250273 DEBUG nova.virt.block_device [None req-2c45d4fa-519b-4e90-ae3f-70707313d5a0 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Updating existing volume attachment record: ff522077-4963-4ea6-b2b7-f4bf6bc6f3ba _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 23 04:48:57 np0005593232 nova_compute[250269]: 2026-01-23 09:48:57.774 250273 DEBUG nova.network.neutron [req-d9815eec-e80b-4a84-bd5e-34f8a2f1a550 req-85865bdc-fb65-4b27-94b0-049384b4efc5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Updated VIF entry in instance network info cache for port dd3423ed-bdeb-4860-86ec-694be592c799. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:48:57 np0005593232 nova_compute[250269]: 2026-01-23 09:48:57.775 250273 DEBUG nova.network.neutron [req-d9815eec-e80b-4a84-bd5e-34f8a2f1a550 req-85865bdc-fb65-4b27-94b0-049384b4efc5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Updating instance_info_cache with network_info: [{"id": "91e73992-335d-44cf-a5ca-3882d9ba3477", "address": "fa:16:3e:cd:7d:6d", "network": {"id": "3f2ef857-ecb1-4eae-8bff-88d44b044dff", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1595805221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e73992-33", "ovs_interfaceid": "91e73992-335d-44cf-a5ca-3882d9ba3477", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dd3423ed-bdeb-4860-86ec-694be592c799", "address": "fa:16:3e:ee:ae:52", "network": {"id": "4268ec60-e2aa-4e94-8ccb-604b1c0b73dc", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-2064626369", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.196", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd3423ed-bd", "ovs_interfaceid": "dd3423ed-bdeb-4860-86ec-694be592c799", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:48:57 np0005593232 nova_compute[250269]: 2026-01-23 09:48:57.797 250273 DEBUG oslo_concurrency.lockutils [req-d9815eec-e80b-4a84-bd5e-34f8a2f1a550 req-85865bdc-fb65-4b27-94b0-049384b4efc5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-9fd1f64a-4b5b-4638-9411-04027230851c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:48:58 np0005593232 ovn_controller[151001]: 2026-01-23T09:48:58Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ee:ae:52 10.10.10.196
Jan 23 04:48:58 np0005593232 ovn_controller[151001]: 2026-01-23T09:48:58Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ee:ae:52 10.10.10.196
Jan 23 04:48:58 np0005593232 nova_compute[250269]: 2026-01-23 09:48:58.591 250273 DEBUG nova.objects.instance [None req-2c45d4fa-519b-4e90-ae3f-70707313d5a0 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lazy-loading 'flavor' on Instance uuid 9fd1f64a-4b5b-4638-9411-04027230851c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:48:58 np0005593232 nova_compute[250269]: 2026-01-23 09:48:58.618 250273 DEBUG nova.virt.libvirt.driver [None req-2c45d4fa-519b-4e90-ae3f-70707313d5a0 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Attempting to attach volume 2a5f9051-d3ab-46f9-b4f7-644aa73bccd4 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 23 04:48:58 np0005593232 nova_compute[250269]: 2026-01-23 09:48:58.621 250273 DEBUG nova.virt.libvirt.guest [None req-2c45d4fa-519b-4e90-ae3f-70707313d5a0 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] attach device xml: <disk type="network" device="disk">
Jan 23 04:48:58 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 04:48:58 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-2a5f9051-d3ab-46f9-b4f7-644aa73bccd4">
Jan 23 04:48:58 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 04:48:58 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 04:48:58 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 04:48:58 np0005593232 nova_compute[250269]:  </source>
Jan 23 04:48:58 np0005593232 nova_compute[250269]:  <auth username="openstack">
Jan 23 04:48:58 np0005593232 nova_compute[250269]:    <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:48:58 np0005593232 nova_compute[250269]:  </auth>
Jan 23 04:48:58 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 04:48:58 np0005593232 nova_compute[250269]:  <serial>2a5f9051-d3ab-46f9-b4f7-644aa73bccd4</serial>
Jan 23 04:48:58 np0005593232 nova_compute[250269]: </disk>
Jan 23 04:48:58 np0005593232 nova_compute[250269]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 23 04:48:58 np0005593232 nova_compute[250269]: 2026-01-23 09:48:58.748 250273 DEBUG nova.virt.libvirt.driver [None req-2c45d4fa-519b-4e90-ae3f-70707313d5a0 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:48:58 np0005593232 nova_compute[250269]: 2026-01-23 09:48:58.749 250273 DEBUG nova.virt.libvirt.driver [None req-2c45d4fa-519b-4e90-ae3f-70707313d5a0 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:48:58 np0005593232 nova_compute[250269]: 2026-01-23 09:48:58.749 250273 DEBUG nova.virt.libvirt.driver [None req-2c45d4fa-519b-4e90-ae3f-70707313d5a0 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] No VIF found with MAC fa:16:3e:cd:7d:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:48:58 np0005593232 nova_compute[250269]: 2026-01-23 09:48:58.993 250273 DEBUG oslo_concurrency.lockutils [None req-2c45d4fa-519b-4e90-ae3f-70707313d5a0 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:48:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:48:59.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:48:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:48:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:48:59.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:48:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1720: 321 pgs: 321 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 23 04:48:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:48:59 np0005593232 nova_compute[250269]: 2026-01-23 09:48:59.873 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:00 np0005593232 nova_compute[250269]: 2026-01-23 09:49:00.435 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:00 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:00.497 162432 DEBUG eventlet.wsgi.server [-] (162432) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Jan 23 04:49:00 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:00.499 162432 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /openstack/latest/meta_data.json HTTP/1.0#015
Jan 23 04:49:00 np0005593232 ovn_metadata_agent[161895]: Accept: */*#015
Jan 23 04:49:00 np0005593232 ovn_metadata_agent[161895]: Connection: close#015
Jan 23 04:49:00 np0005593232 ovn_metadata_agent[161895]: Content-Type: text/plain#015
Jan 23 04:49:00 np0005593232 ovn_metadata_agent[161895]: Host: 169.254.169.254#015
Jan 23 04:49:00 np0005593232 ovn_metadata_agent[161895]: User-Agent: curl/7.84.0#015
Jan 23 04:49:00 np0005593232 ovn_metadata_agent[161895]: X-Forwarded-For: 10.100.0.6#015
Jan 23 04:49:00 np0005593232 ovn_metadata_agent[161895]: X-Ovn-Network-Id: 3f2ef857-ecb1-4eae-8bff-88d44b044dff __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Jan 23 04:49:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:49:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:01.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:49:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:01.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1721: 321 pgs: 321 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 551 KiB/s rd, 1.8 MiB/s wr, 74 op/s
Jan 23 04:49:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:01.906 162432 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Jan 23 04:49:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:01.906 162432 INFO eventlet.wsgi.server [-] 10.100.0.6,<local> "GET /openstack/latest/meta_data.json HTTP/1.1" status: 200  len: 1916 time: 1.4074368#033[00m
Jan 23 04:49:01 np0005593232 haproxy-metadata-proxy-3f2ef857-ecb1-4eae-8bff-88d44b044dff[291237]: 10.100.0.6:50260 [23/Jan/2026:09:49:00.495] listener listener/metadata 0/0/0/1411/1411 200 1900 - - ---- 1/1/0/0/0 0/0 "GET /openstack/latest/meta_data.json HTTP/1.1"
Jan 23 04:49:02 np0005593232 nova_compute[250269]: 2026-01-23 09:49:02.608 250273 DEBUG oslo_concurrency.lockutils [None req-24a216ee-cdb3-4d9b-bba3-0e4dc7d0c330 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Acquiring lock "9fd1f64a-4b5b-4638-9411-04027230851c" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:49:02 np0005593232 nova_compute[250269]: 2026-01-23 09:49:02.609 250273 DEBUG oslo_concurrency.lockutils [None req-24a216ee-cdb3-4d9b-bba3-0e4dc7d0c330 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:49:02 np0005593232 nova_compute[250269]: 2026-01-23 09:49:02.630 250273 INFO nova.compute.manager [None req-24a216ee-cdb3-4d9b-bba3-0e4dc7d0c330 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Detaching volume 2a5f9051-d3ab-46f9-b4f7-644aa73bccd4#033[00m
Jan 23 04:49:02 np0005593232 nova_compute[250269]: 2026-01-23 09:49:02.824 250273 INFO nova.virt.block_device [None req-24a216ee-cdb3-4d9b-bba3-0e4dc7d0c330 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Attempting to driver detach volume 2a5f9051-d3ab-46f9-b4f7-644aa73bccd4 from mountpoint /dev/vdb#033[00m
Jan 23 04:49:02 np0005593232 nova_compute[250269]: 2026-01-23 09:49:02.834 250273 DEBUG nova.virt.libvirt.driver [None req-24a216ee-cdb3-4d9b-bba3-0e4dc7d0c330 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Attempting to detach device vdb from instance 9fd1f64a-4b5b-4638-9411-04027230851c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 23 04:49:02 np0005593232 nova_compute[250269]: 2026-01-23 09:49:02.834 250273 DEBUG nova.virt.libvirt.guest [None req-24a216ee-cdb3-4d9b-bba3-0e4dc7d0c330 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] detach device xml: <disk type="network" device="disk">
Jan 23 04:49:02 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 04:49:02 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-2a5f9051-d3ab-46f9-b4f7-644aa73bccd4">
Jan 23 04:49:02 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 04:49:02 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 04:49:02 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 04:49:02 np0005593232 nova_compute[250269]:  </source>
Jan 23 04:49:02 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 04:49:02 np0005593232 nova_compute[250269]:  <serial>2a5f9051-d3ab-46f9-b4f7-644aa73bccd4</serial>
Jan 23 04:49:02 np0005593232 nova_compute[250269]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Jan 23 04:49:02 np0005593232 nova_compute[250269]: </disk>
Jan 23 04:49:02 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 04:49:02 np0005593232 nova_compute[250269]: 2026-01-23 09:49:02.851 250273 INFO nova.virt.libvirt.driver [None req-24a216ee-cdb3-4d9b-bba3-0e4dc7d0c330 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Successfully detached device vdb from instance 9fd1f64a-4b5b-4638-9411-04027230851c from the persistent domain config.#033[00m
Jan 23 04:49:02 np0005593232 nova_compute[250269]: 2026-01-23 09:49:02.851 250273 DEBUG nova.virt.libvirt.driver [None req-24a216ee-cdb3-4d9b-bba3-0e4dc7d0c330 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 9fd1f64a-4b5b-4638-9411-04027230851c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 23 04:49:02 np0005593232 nova_compute[250269]: 2026-01-23 09:49:02.852 250273 DEBUG nova.virt.libvirt.guest [None req-24a216ee-cdb3-4d9b-bba3-0e4dc7d0c330 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] detach device xml: <disk type="network" device="disk">
Jan 23 04:49:02 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 04:49:02 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-2a5f9051-d3ab-46f9-b4f7-644aa73bccd4">
Jan 23 04:49:02 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 04:49:02 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 04:49:02 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 04:49:02 np0005593232 nova_compute[250269]:  </source>
Jan 23 04:49:02 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 04:49:02 np0005593232 nova_compute[250269]:  <serial>2a5f9051-d3ab-46f9-b4f7-644aa73bccd4</serial>
Jan 23 04:49:02 np0005593232 nova_compute[250269]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Jan 23 04:49:02 np0005593232 nova_compute[250269]: </disk>
Jan 23 04:49:02 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 04:49:02 np0005593232 nova_compute[250269]: 2026-01-23 09:49:02.930 250273 DEBUG nova.virt.libvirt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Received event <DeviceRemovedEvent: 1769161742.929977, 9fd1f64a-4b5b-4638-9411-04027230851c => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 23 04:49:02 np0005593232 nova_compute[250269]: 2026-01-23 09:49:02.932 250273 DEBUG nova.virt.libvirt.driver [None req-24a216ee-cdb3-4d9b-bba3-0e4dc7d0c330 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 9fd1f64a-4b5b-4638-9411-04027230851c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 23 04:49:02 np0005593232 nova_compute[250269]: 2026-01-23 09:49:02.935 250273 INFO nova.virt.libvirt.driver [None req-24a216ee-cdb3-4d9b-bba3-0e4dc7d0c330 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Successfully detached device vdb from instance 9fd1f64a-4b5b-4638-9411-04027230851c from the live domain config.#033[00m
Jan 23 04:49:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:03.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:49:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:03.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:49:03 np0005593232 nova_compute[250269]: 2026-01-23 09:49:03.177 250273 DEBUG nova.objects.instance [None req-24a216ee-cdb3-4d9b-bba3-0e4dc7d0c330 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lazy-loading 'flavor' on Instance uuid 9fd1f64a-4b5b-4638-9411-04027230851c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:49:03 np0005593232 nova_compute[250269]: 2026-01-23 09:49:03.257 250273 DEBUG oslo_concurrency.lockutils [None req-24a216ee-cdb3-4d9b-bba3-0e4dc7d0c330 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:49:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1722: 321 pgs: 321 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 118 op/s
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.142 250273 DEBUG oslo_concurrency.lockutils [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Acquiring lock "interface-9fd1f64a-4b5b-4638-9411-04027230851c-dd3423ed-bdeb-4860-86ec-694be592c799" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.142 250273 DEBUG oslo_concurrency.lockutils [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "interface-9fd1f64a-4b5b-4638-9411-04027230851c-dd3423ed-bdeb-4860-86ec-694be592c799" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.162 250273 DEBUG nova.objects.instance [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lazy-loading 'flavor' on Instance uuid 9fd1f64a-4b5b-4638-9411-04027230851c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.185 250273 DEBUG nova.virt.libvirt.vif [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=InstanceDeviceMetadata,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1637579954',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1637579954',id=63,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJXr2k8IY5inNTCZwPVk5EfXAYp/4zzrsuhLp4zz6M69G4n9zDvSId1yAkQJtnYhKGxrxUt2whN+RRd76STJFNsQDbsDJL6/27IM6kTM9k4+zPmNQXcs5cl0x6R7jt2AVA==',key_name='tempest-keypair-382580899',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:48:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='15c7fbc4d6794364830639a1fee9ecf0',ramdisk_id='',reservation_id='r-r4kdwucs',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-107101210',owner_user_name='tempest-TaggedAttachmentsTest-107101210-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:48:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8d814ef2afe04103bb6aa24724d61b11',uuid=9fd1f64a-4b5b-4638-9411-04027230851c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dd3423ed-bdeb-4860-86ec-694be592c799", "address": "fa:16:3e:ee:ae:52", "network": {"id": "4268ec60-e2aa-4e94-8ccb-604b1c0b73dc", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-2064626369", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": 
{"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.196", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd3423ed-bd", "ovs_interfaceid": "dd3423ed-bdeb-4860-86ec-694be592c799", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.185 250273 DEBUG nova.network.os_vif_util [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Converting VIF {"id": "dd3423ed-bdeb-4860-86ec-694be592c799", "address": "fa:16:3e:ee:ae:52", "network": {"id": "4268ec60-e2aa-4e94-8ccb-604b1c0b73dc", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-2064626369", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.196", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd3423ed-bd", "ovs_interfaceid": "dd3423ed-bdeb-4860-86ec-694be592c799", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.186 250273 DEBUG nova.network.os_vif_util [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:ae:52,bridge_name='br-int',has_traffic_filtering=True,id=dd3423ed-bdeb-4860-86ec-694be592c799,network=Network(4268ec60-e2aa-4e94-8ccb-604b1c0b73dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd3423ed-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.190 250273 DEBUG nova.virt.libvirt.guest [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ee:ae:52"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdd3423ed-bd"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.193 250273 DEBUG nova.virt.libvirt.guest [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ee:ae:52"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdd3423ed-bd"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.196 250273 DEBUG nova.virt.libvirt.driver [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Attempting to detach device tapdd3423ed-bd from instance 9fd1f64a-4b5b-4638-9411-04027230851c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.197 250273 DEBUG nova.virt.libvirt.guest [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] detach device xml: <interface type="ethernet">
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <mac address="fa:16:3e:ee:ae:52"/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <model type="virtio"/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <mtu size="1442"/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <target dev="tapdd3423ed-bd"/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]: </interface>
Jan 23 04:49:04 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.211 250273 DEBUG nova.virt.libvirt.guest [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ee:ae:52"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdd3423ed-bd"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.214 250273 DEBUG nova.virt.libvirt.guest [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ee:ae:52"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdd3423ed-bd"/></interface>not found in domain: <domain type='kvm' id='24'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <name>instance-0000003f</name>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <uuid>9fd1f64a-4b5b-4638-9411-04027230851c</uuid>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <nova:name>tempest-device-tagging-server-1637579954</nova:name>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <nova:creationTime>2026-01-23 09:48:55</nova:creationTime>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <nova:flavor name="m1.nano">
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:memory>128</nova:memory>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:disk>1</nova:disk>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:swap>0</nova:swap>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:vcpus>1</nova:vcpus>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </nova:flavor>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <nova:owner>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:user uuid="8d814ef2afe04103bb6aa24724d61b11">tempest-TaggedAttachmentsTest-107101210-project-member</nova:user>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:project uuid="15c7fbc4d6794364830639a1fee9ecf0">tempest-TaggedAttachmentsTest-107101210</nova:project>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </nova:owner>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <nova:ports>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:port uuid="91e73992-335d-44cf-a5ca-3882d9ba3477">
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:port uuid="dd3423ed-bdeb-4860-86ec-694be592c799">
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.10.10.196" ipVersion="4"/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </nova:ports>
Jan 23 04:49:04 np0005593232 nova_compute[250269]: </nova:instance>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <memory unit='KiB'>131072</memory>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <vcpu placement='static'>1</vcpu>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <resource>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <partition>/machine</partition>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </resource>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <sysinfo type='smbios'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <entry name='manufacturer'>RDO</entry>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <entry name='product'>OpenStack Compute</entry>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <entry name='serial'>9fd1f64a-4b5b-4638-9411-04027230851c</entry>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <entry name='uuid'>9fd1f64a-4b5b-4638-9411-04027230851c</entry>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <entry name='family'>Virtual Machine</entry>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <boot dev='hd'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <smbios mode='sysinfo'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <vmcoreinfo state='on'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <cpu mode='custom' match='exact' check='full'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <model fallback='forbid'>Nehalem</model>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <feature policy='require' name='x2apic'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <feature policy='require' name='hypervisor'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <feature policy='require' name='vme'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <clock offset='utc'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <timer name='pit' tickpolicy='delay'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <timer name='hpet' present='no'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <on_poweroff>destroy</on_poweroff>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <on_reboot>restart</on_reboot>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <on_crash>destroy</on_crash>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <disk type='network' device='disk'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <driver name='qemu' type='raw' cache='none'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <auth username='openstack'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:        <secret type='ceph' uuid='e1533653-0a5a-584c-b34b-8689f0d32e77'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <source protocol='rbd' name='vms/9fd1f64a-4b5b-4638-9411-04027230851c_disk' index='2'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:        <host name='192.168.122.100' port='6789'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:        <host name='192.168.122.102' port='6789'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:        <host name='192.168.122.101' port='6789'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target dev='vda' bus='virtio'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='virtio-disk0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <disk type='network' device='cdrom'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <driver name='qemu' type='raw' cache='none'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <auth username='openstack'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:        <secret type='ceph' uuid='e1533653-0a5a-584c-b34b-8689f0d32e77'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <source protocol='rbd' name='vms/9fd1f64a-4b5b-4638-9411-04027230851c_disk.config' index='1'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:        <host name='192.168.122.100' port='6789'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:        <host name='192.168.122.102' port='6789'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:        <host name='192.168.122.101' port='6789'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target dev='sda' bus='sata'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <readonly/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='sata0-0-0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='0' model='pcie-root'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pcie.0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='1' port='0x10'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.1'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='2' port='0x11'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.2'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='3' port='0x12'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.3'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='4' port='0x13'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.4'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='5' port='0x14'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.5'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='6' port='0x15'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.6'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='7' port='0x16'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.7'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='8' port='0x17'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.8'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='9' port='0x18'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.9'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='10' port='0x19'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.10'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='11' port='0x1a'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.11'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='12' port='0x1b'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.12'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='13' port='0x1c'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.13'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='14' port='0x1d'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.14'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='15' port='0x1e'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.15'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='16' port='0x1f'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.16'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='17' port='0x20'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.17'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='18' port='0x21'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.18'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='19' port='0x22'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.19'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='20' port='0x23'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.20'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='21' port='0x24'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.21'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='22' port='0x25'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.22'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='23' port='0x26'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.23'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='24' port='0x27'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.24'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='25' port='0x28'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.25'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-pci-bridge'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.26'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='usb'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='sata' index='0'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='ide'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <interface type='ethernet'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <mac address='fa:16:3e:cd:7d:6d'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target dev='tap91e73992-33'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model type='virtio'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <driver name='vhost' rx_queue_size='512'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <mtu size='1442'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='net0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <interface type='ethernet'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <mac address='fa:16:3e:ee:ae:52'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target dev='tapdd3423ed-bd'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model type='virtio'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <driver name='vhost' rx_queue_size='512'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <mtu size='1442'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='net1'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <serial type='pty'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <source path='/dev/pts/0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <log file='/var/lib/nova/instances/9fd1f64a-4b5b-4638-9411-04027230851c/console.log' append='off'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target type='isa-serial' port='0'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:        <model name='isa-serial'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      </target>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='serial0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <console type='pty' tty='/dev/pts/0'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <source path='/dev/pts/0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <log file='/var/lib/nova/instances/9fd1f64a-4b5b-4638-9411-04027230851c/console.log' append='off'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target type='serial' port='0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='serial0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </console>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <input type='tablet' bus='usb'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='input0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='usb' bus='0' port='1'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </input>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <input type='mouse' bus='ps2'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='input1'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </input>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <input type='keyboard' bus='ps2'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='input2'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </input>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <listen type='address' address='::0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </graphics>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <audio id='1' type='none'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model type='virtio' heads='1' primary='yes'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='video0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <watchdog model='itco' action='reset'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='watchdog0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </watchdog>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <memballoon model='virtio'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <stats period='10'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='balloon0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <rng model='virtio'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <backend model='random'>/dev/urandom</backend>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='rng0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <label>system_u:system_r:svirt_t:s0:c733,c976</label>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c733,c976</imagelabel>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </seclabel>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <label>+107:+107</label>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <imagelabel>+107:+107</imagelabel>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </seclabel>
Jan 23 04:49:04 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:49:04 np0005593232 nova_compute[250269]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.216 250273 INFO nova.virt.libvirt.driver [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Successfully detached device tapdd3423ed-bd from instance 9fd1f64a-4b5b-4638-9411-04027230851c from the persistent domain config.#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.216 250273 DEBUG nova.virt.libvirt.driver [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] (1/8): Attempting to detach device tapdd3423ed-bd with device alias net1 from instance 9fd1f64a-4b5b-4638-9411-04027230851c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.216 250273 DEBUG nova.virt.libvirt.guest [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] detach device xml: <interface type="ethernet">
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <mac address="fa:16:3e:ee:ae:52"/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <model type="virtio"/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <mtu size="1442"/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <target dev="tapdd3423ed-bd"/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]: </interface>
Jan 23 04:49:04 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 04:49:04 np0005593232 kernel: tapdd3423ed-bd (unregistering): left promiscuous mode
Jan 23 04:49:04 np0005593232 NetworkManager[49057]: <info>  [1769161744.3147] device (tapdd3423ed-bd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.323 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:04 np0005593232 ovn_controller[151001]: 2026-01-23T09:49:04Z|00193|binding|INFO|Releasing lport dd3423ed-bdeb-4860-86ec-694be592c799 from this chassis (sb_readonly=0)
Jan 23 04:49:04 np0005593232 ovn_controller[151001]: 2026-01-23T09:49:04Z|00194|binding|INFO|Setting lport dd3423ed-bdeb-4860-86ec-694be592c799 down in Southbound
Jan 23 04:49:04 np0005593232 ovn_controller[151001]: 2026-01-23T09:49:04Z|00195|binding|INFO|Removing iface tapdd3423ed-bd ovn-installed in OVS
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.328 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:04.333 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:ae:52 10.10.10.196'], port_security=['fa:16:3e:ee:ae:52 10.10.10.196'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.10.10.196/24', 'neutron:device_id': '9fd1f64a-4b5b-4638-9411-04027230851c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '15c7fbc4d6794364830639a1fee9ecf0', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a3098648-5285-4194-8c21-8a68f4de4a75', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1e0bd3e8-1715-40d2-9b26-9d78a1f41e00, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=dd3423ed-bdeb-4860-86ec-694be592c799) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:49:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:04.334 161902 INFO neutron.agent.ovn.metadata.agent [-] Port dd3423ed-bdeb-4860-86ec-694be592c799 in datapath 4268ec60-e2aa-4e94-8ccb-604b1c0b73dc unbound from our chassis#033[00m
Jan 23 04:49:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:04.335 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4268ec60-e2aa-4e94-8ccb-604b1c0b73dc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 04:49:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:04.336 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5d1aa810-8198-470a-a367-38ac66d7b70f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:04.337 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc namespace which is not needed anymore#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.341 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.344 250273 DEBUG nova.virt.libvirt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Received event <DeviceRemovedEvent: 1769161744.3439443, 9fd1f64a-4b5b-4638-9411-04027230851c => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.346 250273 DEBUG nova.virt.libvirt.driver [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Start waiting for the detach event from libvirt for device tapdd3423ed-bd with device alias net1 for instance 9fd1f64a-4b5b-4638-9411-04027230851c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.346 250273 DEBUG nova.virt.libvirt.guest [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ee:ae:52"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdd3423ed-bd"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.349 250273 DEBUG nova.virt.libvirt.guest [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ee:ae:52"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdd3423ed-bd"/></interface>not found in domain: <domain type='kvm' id='24'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <name>instance-0000003f</name>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <uuid>9fd1f64a-4b5b-4638-9411-04027230851c</uuid>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <nova:name>tempest-device-tagging-server-1637579954</nova:name>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <nova:creationTime>2026-01-23 09:48:55</nova:creationTime>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <nova:flavor name="m1.nano">
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:memory>128</nova:memory>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:disk>1</nova:disk>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:swap>0</nova:swap>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:vcpus>1</nova:vcpus>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </nova:flavor>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <nova:owner>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:user uuid="8d814ef2afe04103bb6aa24724d61b11">tempest-TaggedAttachmentsTest-107101210-project-member</nova:user>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:project uuid="15c7fbc4d6794364830639a1fee9ecf0">tempest-TaggedAttachmentsTest-107101210</nova:project>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </nova:owner>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <nova:ports>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:port uuid="91e73992-335d-44cf-a5ca-3882d9ba3477">
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:port uuid="dd3423ed-bdeb-4860-86ec-694be592c799">
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.10.10.196" ipVersion="4"/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </nova:ports>
Jan 23 04:49:04 np0005593232 nova_compute[250269]: </nova:instance>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <memory unit='KiB'>131072</memory>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <vcpu placement='static'>1</vcpu>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <resource>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <partition>/machine</partition>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </resource>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <sysinfo type='smbios'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <entry name='manufacturer'>RDO</entry>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <entry name='product'>OpenStack Compute</entry>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <entry name='serial'>9fd1f64a-4b5b-4638-9411-04027230851c</entry>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <entry name='uuid'>9fd1f64a-4b5b-4638-9411-04027230851c</entry>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <entry name='family'>Virtual Machine</entry>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <boot dev='hd'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <smbios mode='sysinfo'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <vmcoreinfo state='on'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <cpu mode='custom' match='exact' check='full'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <model fallback='forbid'>Nehalem</model>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <feature policy='require' name='x2apic'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <feature policy='require' name='hypervisor'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <feature policy='require' name='vme'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <clock offset='utc'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <timer name='pit' tickpolicy='delay'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <timer name='hpet' present='no'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <on_poweroff>destroy</on_poweroff>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <on_reboot>restart</on_reboot>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <on_crash>destroy</on_crash>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <disk type='network' device='disk'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <driver name='qemu' type='raw' cache='none'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <auth username='openstack'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:        <secret type='ceph' uuid='e1533653-0a5a-584c-b34b-8689f0d32e77'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <source protocol='rbd' name='vms/9fd1f64a-4b5b-4638-9411-04027230851c_disk' index='2'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:        <host name='192.168.122.100' port='6789'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:        <host name='192.168.122.102' port='6789'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:        <host name='192.168.122.101' port='6789'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target dev='vda' bus='virtio'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='virtio-disk0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <disk type='network' device='cdrom'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <driver name='qemu' type='raw' cache='none'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <auth username='openstack'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:        <secret type='ceph' uuid='e1533653-0a5a-584c-b34b-8689f0d32e77'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <source protocol='rbd' name='vms/9fd1f64a-4b5b-4638-9411-04027230851c_disk.config' index='1'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:        <host name='192.168.122.100' port='6789'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:        <host name='192.168.122.102' port='6789'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:        <host name='192.168.122.101' port='6789'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target dev='sda' bus='sata'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <readonly/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='sata0-0-0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='0' model='pcie-root'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pcie.0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='1' port='0x10'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.1'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='2' port='0x11'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.2'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='3' port='0x12'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.3'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='4' port='0x13'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.4'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='5' port='0x14'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.5'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='6' port='0x15'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.6'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='7' port='0x16'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.7'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='8' port='0x17'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.8'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='9' port='0x18'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.9'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='10' port='0x19'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.10'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='11' port='0x1a'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.11'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='12' port='0x1b'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.12'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='13' port='0x1c'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.13'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='14' port='0x1d'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.14'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='15' port='0x1e'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.15'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='16' port='0x1f'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.16'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='17' port='0x20'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.17'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='18' port='0x21'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.18'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='19' port='0x22'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.19'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='20' port='0x23'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.20'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='21' port='0x24'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.21'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='22' port='0x25'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.22'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='23' port='0x26'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.23'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='24' port='0x27'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.24'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target chassis='25' port='0x28'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.25'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model name='pcie-pci-bridge'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='pci.26'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='usb'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <controller type='sata' index='0'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='ide'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <interface type='ethernet'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <mac address='fa:16:3e:cd:7d:6d'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target dev='tap91e73992-33'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model type='virtio'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <driver name='vhost' rx_queue_size='512'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <mtu size='1442'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='net0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <serial type='pty'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <source path='/dev/pts/0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <log file='/var/lib/nova/instances/9fd1f64a-4b5b-4638-9411-04027230851c/console.log' append='off'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target type='isa-serial' port='0'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:        <model name='isa-serial'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      </target>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='serial0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <console type='pty' tty='/dev/pts/0'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <source path='/dev/pts/0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <log file='/var/lib/nova/instances/9fd1f64a-4b5b-4638-9411-04027230851c/console.log' append='off'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <target type='serial' port='0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='serial0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </console>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <input type='tablet' bus='usb'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='input0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='usb' bus='0' port='1'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </input>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <input type='mouse' bus='ps2'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='input1'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </input>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <input type='keyboard' bus='ps2'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='input2'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </input>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <listen type='address' address='::0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </graphics>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <audio id='1' type='none'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <model type='virtio' heads='1' primary='yes'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='video0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <watchdog model='itco' action='reset'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='watchdog0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </watchdog>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <memballoon model='virtio'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <stats period='10'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='balloon0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <rng model='virtio'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <backend model='random'>/dev/urandom</backend>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <alias name='rng0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <label>system_u:system_r:svirt_t:s0:c733,c976</label>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c733,c976</imagelabel>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </seclabel>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <label>+107:+107</label>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <imagelabel>+107:+107</imagelabel>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </seclabel>
Jan 23 04:49:04 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:49:04 np0005593232 nova_compute[250269]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.349 250273 INFO nova.virt.libvirt.driver [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Successfully detached device tapdd3423ed-bd from instance 9fd1f64a-4b5b-4638-9411-04027230851c from the live domain config.#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.350 250273 DEBUG nova.virt.libvirt.vif [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=InstanceDeviceMetadata,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1637579954',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1637579954',id=63,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJXr2k8IY5inNTCZwPVk5EfXAYp/4zzrsuhLp4zz6M69G4n9zDvSId1yAkQJtnYhKGxrxUt2whN+RRd76STJFNsQDbsDJL6/27IM6kTM9k4+zPmNQXcs5cl0x6R7jt2AVA==',key_name='tempest-keypair-382580899',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:48:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='15c7fbc4d6794364830639a1fee9ecf0',ramdisk_id='',reservation_id='r-r4kdwucs',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-107101210',owner_user_name='tempest-TaggedAttachmentsTest-107101210-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:48:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8d814ef2afe04103bb6aa24724d61b11',uuid=9fd1f64a-4b5b-4638-9411-04027230851c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dd3423ed-bdeb-4860-86ec-694be592c799", "address": "fa:16:3e:ee:ae:52", "network": {"id": "4268ec60-e2aa-4e94-8ccb-604b1c0b73dc", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-2064626369", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": 
{"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.196", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd3423ed-bd", "ovs_interfaceid": "dd3423ed-bdeb-4860-86ec-694be592c799", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.350 250273 DEBUG nova.network.os_vif_util [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Converting VIF {"id": "dd3423ed-bdeb-4860-86ec-694be592c799", "address": "fa:16:3e:ee:ae:52", "network": {"id": "4268ec60-e2aa-4e94-8ccb-604b1c0b73dc", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-2064626369", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.196", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd3423ed-bd", "ovs_interfaceid": "dd3423ed-bdeb-4860-86ec-694be592c799", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.351 250273 DEBUG nova.network.os_vif_util [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:ae:52,bridge_name='br-int',has_traffic_filtering=True,id=dd3423ed-bdeb-4860-86ec-694be592c799,network=Network(4268ec60-e2aa-4e94-8ccb-604b1c0b73dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd3423ed-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.352 250273 DEBUG os_vif [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:ae:52,bridge_name='br-int',has_traffic_filtering=True,id=dd3423ed-bdeb-4860-86ec-694be592c799,network=Network(4268ec60-e2aa-4e94-8ccb-604b1c0b73dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd3423ed-bd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.353 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.353 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd3423ed-bd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.355 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.357 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.359 250273 INFO os_vif [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:ae:52,bridge_name='br-int',has_traffic_filtering=True,id=dd3423ed-bdeb-4860-86ec-694be592c799,network=Network(4268ec60-e2aa-4e94-8ccb-604b1c0b73dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd3423ed-bd')#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.360 250273 DEBUG nova.virt.libvirt.guest [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <nova:name>tempest-device-tagging-server-1637579954</nova:name>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <nova:creationTime>2026-01-23 09:49:04</nova:creationTime>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <nova:flavor name="m1.nano">
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:memory>128</nova:memory>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:disk>1</nova:disk>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:swap>0</nova:swap>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:vcpus>1</nova:vcpus>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </nova:flavor>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <nova:owner>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:user uuid="8d814ef2afe04103bb6aa24724d61b11">tempest-TaggedAttachmentsTest-107101210-project-member</nova:user>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:project uuid="15c7fbc4d6794364830639a1fee9ecf0">tempest-TaggedAttachmentsTest-107101210</nova:project>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </nova:owner>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  <nova:ports>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    <nova:port uuid="91e73992-335d-44cf-a5ca-3882d9ba3477">
Jan 23 04:49:04 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 04:49:04 np0005593232 nova_compute[250269]:  </nova:ports>
Jan 23 04:49:04 np0005593232 nova_compute[250269]: </nova:instance>
Jan 23 04:49:04 np0005593232 nova_compute[250269]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 23 04:49:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.876 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.917 250273 DEBUG nova.compute.manager [req-771af44e-e2a8-4a2d-af1d-4085e84532ff req-17d72718-278b-49e8-8019-2e1af4ce25a5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Received event network-vif-unplugged-dd3423ed-bdeb-4860-86ec-694be592c799 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.918 250273 DEBUG oslo_concurrency.lockutils [req-771af44e-e2a8-4a2d-af1d-4085e84532ff req-17d72718-278b-49e8-8019-2e1af4ce25a5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.918 250273 DEBUG oslo_concurrency.lockutils [req-771af44e-e2a8-4a2d-af1d-4085e84532ff req-17d72718-278b-49e8-8019-2e1af4ce25a5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.918 250273 DEBUG oslo_concurrency.lockutils [req-771af44e-e2a8-4a2d-af1d-4085e84532ff req-17d72718-278b-49e8-8019-2e1af4ce25a5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.919 250273 DEBUG nova.compute.manager [req-771af44e-e2a8-4a2d-af1d-4085e84532ff req-17d72718-278b-49e8-8019-2e1af4ce25a5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] No waiting events found dispatching network-vif-unplugged-dd3423ed-bdeb-4860-86ec-694be592c799 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:49:04 np0005593232 nova_compute[250269]: 2026-01-23 09:49:04.919 250273 WARNING nova.compute.manager [req-771af44e-e2a8-4a2d-af1d-4085e84532ff req-17d72718-278b-49e8-8019-2e1af4ce25a5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Received unexpected event network-vif-unplugged-dd3423ed-bdeb-4860-86ec-694be592c799 for instance with vm_state active and task_state None.#033[00m
Jan 23 04:49:05 np0005593232 neutron-haproxy-ovnmeta-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc[292854]: [NOTICE]   (292858) : haproxy version is 2.8.14-c23fe91
Jan 23 04:49:05 np0005593232 neutron-haproxy-ovnmeta-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc[292854]: [NOTICE]   (292858) : path to executable is /usr/sbin/haproxy
Jan 23 04:49:05 np0005593232 neutron-haproxy-ovnmeta-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc[292854]: [WARNING]  (292858) : Exiting Master process...
Jan 23 04:49:05 np0005593232 neutron-haproxy-ovnmeta-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc[292854]: [ALERT]    (292858) : Current worker (292860) exited with code 143 (Terminated)
Jan 23 04:49:05 np0005593232 neutron-haproxy-ovnmeta-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc[292854]: [WARNING]  (292858) : All workers exited. Exiting... (0)
Jan 23 04:49:05 np0005593232 systemd[1]: libpod-1ab8563a1a0023075db73acc3da083c2ffa2293809ed28a798dc44558152fec7.scope: Deactivated successfully.
Jan 23 04:49:05 np0005593232 podman[292976]: 2026-01-23 09:49:05.016028991 +0000 UTC m=+0.591953378 container died 1ab8563a1a0023075db73acc3da083c2ffa2293809ed28a798dc44558152fec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 04:49:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:05.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:49:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:05.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:49:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1723: 321 pgs: 321 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 76 op/s
Jan 23 04:49:05 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1ab8563a1a0023075db73acc3da083c2ffa2293809ed28a798dc44558152fec7-userdata-shm.mount: Deactivated successfully.
Jan 23 04:49:05 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6839d2d3993975d65794efe288844de66bd5b60879c0a9b886ad36bd75bf4ef6-merged.mount: Deactivated successfully.
Jan 23 04:49:05 np0005593232 podman[292976]: 2026-01-23 09:49:05.834652617 +0000 UTC m=+1.410577004 container cleanup 1ab8563a1a0023075db73acc3da083c2ffa2293809ed28a798dc44558152fec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 04:49:05 np0005593232 systemd[1]: libpod-conmon-1ab8563a1a0023075db73acc3da083c2ffa2293809ed28a798dc44558152fec7.scope: Deactivated successfully.
Jan 23 04:49:05 np0005593232 nova_compute[250269]: 2026-01-23 09:49:05.897 250273 DEBUG oslo_concurrency.lockutils [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Acquiring lock "refresh_cache-9fd1f64a-4b5b-4638-9411-04027230851c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:49:05 np0005593232 nova_compute[250269]: 2026-01-23 09:49:05.897 250273 DEBUG oslo_concurrency.lockutils [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Acquired lock "refresh_cache-9fd1f64a-4b5b-4638-9411-04027230851c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:49:05 np0005593232 nova_compute[250269]: 2026-01-23 09:49:05.898 250273 DEBUG nova.network.neutron [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:49:06 np0005593232 podman[293008]: 2026-01-23 09:49:06.252071236 +0000 UTC m=+0.391612322 container remove 1ab8563a1a0023075db73acc3da083c2ffa2293809ed28a798dc44558152fec7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 23 04:49:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:06.259 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8a6ca455-055f-4653-87ec-242194de2003]: (4, ('Fri Jan 23 09:49:04 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc (1ab8563a1a0023075db73acc3da083c2ffa2293809ed28a798dc44558152fec7)\n1ab8563a1a0023075db73acc3da083c2ffa2293809ed28a798dc44558152fec7\nFri Jan 23 09:49:05 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc (1ab8563a1a0023075db73acc3da083c2ffa2293809ed28a798dc44558152fec7)\n1ab8563a1a0023075db73acc3da083c2ffa2293809ed28a798dc44558152fec7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:06.262 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2bba0e41-69c2-4576-8e2d-3e6a169566a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:06.263 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4268ec60-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:49:06 np0005593232 kernel: tap4268ec60-e0: left promiscuous mode
Jan 23 04:49:06 np0005593232 nova_compute[250269]: 2026-01-23 09:49:06.266 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:06 np0005593232 nova_compute[250269]: 2026-01-23 09:49:06.283 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:06.287 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[27e6de65-13b0-4f35-8cd7-9745eb44c2c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:06.303 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a4ab5768-e048-45f0-8ba0-092ebc1da8d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:06.305 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5c65023a-e23c-4fa6-9587-920f052079b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:06.326 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[733ff652-69f6-49bf-bb03-c5b7dd2781af]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564870, 'reachable_time': 26247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293024, 'error': None, 'target': 'ovnmeta-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:06.329 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4268ec60-e2aa-4e94-8ccb-604b1c0b73dc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 04:49:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:06.329 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[705c9129-5080-4103-8da3-46a0f0a984c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:06 np0005593232 systemd[1]: run-netns-ovnmeta\x2d4268ec60\x2de2aa\x2d4e94\x2d8ccb\x2d604b1c0b73dc.mount: Deactivated successfully.
Jan 23 04:49:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:49:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:07.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.138 250273 DEBUG nova.compute.manager [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Received event network-vif-plugged-dd3423ed-bdeb-4860-86ec-694be592c799 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.139 250273 DEBUG oslo_concurrency.lockutils [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.139 250273 DEBUG oslo_concurrency.lockutils [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.140 250273 DEBUG oslo_concurrency.lockutils [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.140 250273 DEBUG nova.compute.manager [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] No waiting events found dispatching network-vif-plugged-dd3423ed-bdeb-4860-86ec-694be592c799 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.140 250273 WARNING nova.compute.manager [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Received unexpected event network-vif-plugged-dd3423ed-bdeb-4860-86ec-694be592c799 for instance with vm_state active and task_state None.#033[00m
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.140 250273 DEBUG nova.compute.manager [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Received event network-vif-deleted-dd3423ed-bdeb-4860-86ec-694be592c799 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.141 250273 INFO nova.compute.manager [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Neutron deleted interface dd3423ed-bdeb-4860-86ec-694be592c799; detaching it from the instance and deleting it from the info cache#033[00m
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.141 250273 DEBUG nova.network.neutron [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Updating instance_info_cache with network_info: [{"id": "91e73992-335d-44cf-a5ca-3882d9ba3477", "address": "fa:16:3e:cd:7d:6d", "network": {"id": "3f2ef857-ecb1-4eae-8bff-88d44b044dff", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1595805221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e73992-33", "ovs_interfaceid": "91e73992-335d-44cf-a5ca-3882d9ba3477", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:49:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:07.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.174 250273 DEBUG nova.objects.instance [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lazy-loading 'system_metadata' on Instance uuid 9fd1f64a-4b5b-4638-9411-04027230851c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.201 250273 DEBUG nova.objects.instance [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lazy-loading 'flavor' on Instance uuid 9fd1f64a-4b5b-4638-9411-04027230851c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.223 250273 DEBUG nova.virt.libvirt.vif [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1637579954',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1637579954',id=63,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJXr2k8IY5inNTCZwPVk5EfXAYp/4zzrsuhLp4zz6M69G4n9zDvSId1yAkQJtnYhKGxrxUt2whN+RRd76STJFNsQDbsDJL6/27IM6kTM9k4+zPmNQXcs5cl0x6R7jt2AVA==',key_name='tempest-keypair-382580899',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:48:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='15c7fbc4d6794364830639a1fee9ecf0',ramdisk_id='',reservation_id='r-r4kdwucs',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-107101210',owner_user_name='tempest-TaggedAttachmentsTest-107101210-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:48:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8d814ef2afe04103bb6aa24724d61b11',uuid=9fd1f64a-4b5b-4638-9411-04027230851c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dd3423ed-bdeb-4860-86ec-694be592c799", "address": "fa:16:3e:ee:ae:52", "network": {"id": "4268ec60-e2aa-4e94-8ccb-604b1c0b73dc", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-2064626369", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": 
"10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.196", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd3423ed-bd", "ovs_interfaceid": "dd3423ed-bdeb-4860-86ec-694be592c799", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.223 250273 DEBUG nova.network.os_vif_util [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Converting VIF {"id": "dd3423ed-bdeb-4860-86ec-694be592c799", "address": "fa:16:3e:ee:ae:52", "network": {"id": "4268ec60-e2aa-4e94-8ccb-604b1c0b73dc", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-2064626369", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.196", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd3423ed-bd", "ovs_interfaceid": "dd3423ed-bdeb-4860-86ec-694be592c799", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.224 250273 DEBUG nova.network.os_vif_util [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:ae:52,bridge_name='br-int',has_traffic_filtering=True,id=dd3423ed-bdeb-4860-86ec-694be592c799,network=Network(4268ec60-e2aa-4e94-8ccb-604b1c0b73dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd3423ed-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.228 250273 DEBUG nova.virt.libvirt.guest [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ee:ae:52"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdd3423ed-bd"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.232 250273 DEBUG nova.virt.libvirt.guest [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ee:ae:52"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdd3423ed-bd"/></interface>not found in domain: <domain type='kvm' id='24'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <name>instance-0000003f</name>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <uuid>9fd1f64a-4b5b-4638-9411-04027230851c</uuid>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <nova:name>tempest-device-tagging-server-1637579954</nova:name>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <nova:creationTime>2026-01-23 09:49:04</nova:creationTime>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <nova:flavor name="m1.nano">
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:memory>128</nova:memory>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:disk>1</nova:disk>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:swap>0</nova:swap>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:vcpus>1</nova:vcpus>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </nova:flavor>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <nova:owner>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:user uuid="8d814ef2afe04103bb6aa24724d61b11">tempest-TaggedAttachmentsTest-107101210-project-member</nova:user>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:project uuid="15c7fbc4d6794364830639a1fee9ecf0">tempest-TaggedAttachmentsTest-107101210</nova:project>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </nova:owner>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <nova:ports>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:port uuid="91e73992-335d-44cf-a5ca-3882d9ba3477">
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </nova:ports>
Jan 23 04:49:07 np0005593232 nova_compute[250269]: </nova:instance>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <memory unit='KiB'>131072</memory>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <vcpu placement='static'>1</vcpu>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <resource>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <partition>/machine</partition>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </resource>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <sysinfo type='smbios'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <entry name='manufacturer'>RDO</entry>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <entry name='product'>OpenStack Compute</entry>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <entry name='serial'>9fd1f64a-4b5b-4638-9411-04027230851c</entry>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <entry name='uuid'>9fd1f64a-4b5b-4638-9411-04027230851c</entry>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <entry name='family'>Virtual Machine</entry>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <boot dev='hd'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <smbios mode='sysinfo'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <vmcoreinfo state='on'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <cpu mode='custom' match='exact' check='full'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <model fallback='forbid'>Nehalem</model>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <feature policy='require' name='x2apic'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <feature policy='require' name='hypervisor'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <feature policy='require' name='vme'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <clock offset='utc'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <timer name='pit' tickpolicy='delay'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <timer name='hpet' present='no'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <on_poweroff>destroy</on_poweroff>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <on_reboot>restart</on_reboot>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <on_crash>destroy</on_crash>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <disk type='network' device='disk'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <driver name='qemu' type='raw' cache='none'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <auth username='openstack'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:        <secret type='ceph' uuid='e1533653-0a5a-584c-b34b-8689f0d32e77'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <source protocol='rbd' name='vms/9fd1f64a-4b5b-4638-9411-04027230851c_disk' index='2'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:        <host name='192.168.122.100' port='6789'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:        <host name='192.168.122.102' port='6789'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:        <host name='192.168.122.101' port='6789'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target dev='vda' bus='virtio'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='virtio-disk0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <disk type='network' device='cdrom'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <driver name='qemu' type='raw' cache='none'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <auth username='openstack'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:        <secret type='ceph' uuid='e1533653-0a5a-584c-b34b-8689f0d32e77'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <source protocol='rbd' name='vms/9fd1f64a-4b5b-4638-9411-04027230851c_disk.config' index='1'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:        <host name='192.168.122.100' port='6789'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:        <host name='192.168.122.102' port='6789'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:        <host name='192.168.122.101' port='6789'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target dev='sda' bus='sata'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <readonly/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='sata0-0-0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='0' model='pcie-root'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pcie.0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='1' port='0x10'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.1'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='2' port='0x11'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.2'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='3' port='0x12'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.3'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='4' port='0x13'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.4'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='5' port='0x14'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.5'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='6' port='0x15'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.6'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='7' port='0x16'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.7'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='8' port='0x17'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.8'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='9' port='0x18'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.9'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='10' port='0x19'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.10'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='11' port='0x1a'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.11'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='12' port='0x1b'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.12'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='13' port='0x1c'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.13'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='14' port='0x1d'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.14'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='15' port='0x1e'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.15'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='16' port='0x1f'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.16'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='17' port='0x20'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.17'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='18' port='0x21'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.18'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='19' port='0x22'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.19'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='20' port='0x23'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.20'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='21' port='0x24'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.21'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='22' port='0x25'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.22'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='23' port='0x26'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.23'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='24' port='0x27'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.24'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='25' port='0x28'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.25'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-pci-bridge'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.26'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='usb'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='sata' index='0'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='ide'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <interface type='ethernet'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <mac address='fa:16:3e:cd:7d:6d'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target dev='tap91e73992-33'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model type='virtio'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <driver name='vhost' rx_queue_size='512'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <mtu size='1442'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='net0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <serial type='pty'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <source path='/dev/pts/0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <log file='/var/lib/nova/instances/9fd1f64a-4b5b-4638-9411-04027230851c/console.log' append='off'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target type='isa-serial' port='0'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:        <model name='isa-serial'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      </target>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='serial0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <console type='pty' tty='/dev/pts/0'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <source path='/dev/pts/0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <log file='/var/lib/nova/instances/9fd1f64a-4b5b-4638-9411-04027230851c/console.log' append='off'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target type='serial' port='0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='serial0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </console>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <input type='tablet' bus='usb'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='input0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='usb' bus='0' port='1'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </input>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <input type='mouse' bus='ps2'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='input1'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </input>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <input type='keyboard' bus='ps2'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='input2'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </input>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <listen type='address' address='::0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </graphics>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <audio id='1' type='none'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model type='virtio' heads='1' primary='yes'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='video0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <watchdog model='itco' action='reset'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='watchdog0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </watchdog>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <memballoon model='virtio'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <stats period='10'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='balloon0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <rng model='virtio'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <backend model='random'>/dev/urandom</backend>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='rng0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <label>system_u:system_r:svirt_t:s0:c733,c976</label>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c733,c976</imagelabel>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </seclabel>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <label>+107:+107</label>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <imagelabel>+107:+107</imagelabel>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </seclabel>
Jan 23 04:49:07 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:49:07 np0005593232 nova_compute[250269]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.233 250273 DEBUG nova.virt.libvirt.guest [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:ee:ae:52"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdd3423ed-bd"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.237 250273 DEBUG nova.virt.libvirt.guest [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:ee:ae:52"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapdd3423ed-bd"/></interface>not found in domain: <domain type='kvm' id='24'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <name>instance-0000003f</name>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <uuid>9fd1f64a-4b5b-4638-9411-04027230851c</uuid>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <nova:name>tempest-device-tagging-server-1637579954</nova:name>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <nova:creationTime>2026-01-23 09:49:04</nova:creationTime>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <nova:flavor name="m1.nano">
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:memory>128</nova:memory>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:disk>1</nova:disk>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:swap>0</nova:swap>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:vcpus>1</nova:vcpus>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </nova:flavor>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <nova:owner>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:user uuid="8d814ef2afe04103bb6aa24724d61b11">tempest-TaggedAttachmentsTest-107101210-project-member</nova:user>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:project uuid="15c7fbc4d6794364830639a1fee9ecf0">tempest-TaggedAttachmentsTest-107101210</nova:project>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </nova:owner>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <nova:ports>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:port uuid="91e73992-335d-44cf-a5ca-3882d9ba3477">
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </nova:ports>
Jan 23 04:49:07 np0005593232 nova_compute[250269]: </nova:instance>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <memory unit='KiB'>131072</memory>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <vcpu placement='static'>1</vcpu>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <resource>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <partition>/machine</partition>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </resource>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <sysinfo type='smbios'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <entry name='manufacturer'>RDO</entry>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <entry name='product'>OpenStack Compute</entry>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <entry name='serial'>9fd1f64a-4b5b-4638-9411-04027230851c</entry>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <entry name='uuid'>9fd1f64a-4b5b-4638-9411-04027230851c</entry>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <entry name='family'>Virtual Machine</entry>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <boot dev='hd'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <smbios mode='sysinfo'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <vmcoreinfo state='on'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <cpu mode='custom' match='exact' check='full'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <model fallback='forbid'>Nehalem</model>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <feature policy='require' name='x2apic'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <feature policy='require' name='hypervisor'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <feature policy='require' name='vme'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <clock offset='utc'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <timer name='pit' tickpolicy='delay'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <timer name='hpet' present='no'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <on_poweroff>destroy</on_poweroff>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <on_reboot>restart</on_reboot>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <on_crash>destroy</on_crash>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <disk type='network' device='disk'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <driver name='qemu' type='raw' cache='none'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <auth username='openstack'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:        <secret type='ceph' uuid='e1533653-0a5a-584c-b34b-8689f0d32e77'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <source protocol='rbd' name='vms/9fd1f64a-4b5b-4638-9411-04027230851c_disk' index='2'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:        <host name='192.168.122.100' port='6789'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:        <host name='192.168.122.102' port='6789'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:        <host name='192.168.122.101' port='6789'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target dev='vda' bus='virtio'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='virtio-disk0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <disk type='network' device='cdrom'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <driver name='qemu' type='raw' cache='none'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <auth username='openstack'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:        <secret type='ceph' uuid='e1533653-0a5a-584c-b34b-8689f0d32e77'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <source protocol='rbd' name='vms/9fd1f64a-4b5b-4638-9411-04027230851c_disk.config' index='1'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:        <host name='192.168.122.100' port='6789'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:        <host name='192.168.122.102' port='6789'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:        <host name='192.168.122.101' port='6789'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target dev='sda' bus='sata'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <readonly/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='sata0-0-0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='0' model='pcie-root'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pcie.0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='1' port='0x10'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.1'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='2' port='0x11'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.2'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='3' port='0x12'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.3'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='4' port='0x13'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.4'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='5' port='0x14'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.5'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='6' port='0x15'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.6'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='7' port='0x16'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.7'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='8' port='0x17'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.8'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='9' port='0x18'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.9'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='10' port='0x19'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.10'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='11' port='0x1a'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.11'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='12' port='0x1b'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.12'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='13' port='0x1c'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.13'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='14' port='0x1d'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.14'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='15' port='0x1e'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.15'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='16' port='0x1f'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.16'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='17' port='0x20'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.17'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='18' port='0x21'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.18'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='19' port='0x22'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.19'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='20' port='0x23'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.20'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='21' port='0x24'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.21'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='22' port='0x25'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.22'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='23' port='0x26'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.23'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='24' port='0x27'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.24'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target chassis='25' port='0x28'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.25'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model name='pcie-pci-bridge'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='pci.26'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='usb'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <controller type='sata' index='0'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='ide'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <interface type='ethernet'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <mac address='fa:16:3e:cd:7d:6d'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target dev='tap91e73992-33'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model type='virtio'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <driver name='vhost' rx_queue_size='512'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <mtu size='1442'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='net0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <serial type='pty'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <source path='/dev/pts/0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <log file='/var/lib/nova/instances/9fd1f64a-4b5b-4638-9411-04027230851c/console.log' append='off'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target type='isa-serial' port='0'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:        <model name='isa-serial'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      </target>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='serial0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <console type='pty' tty='/dev/pts/0'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <source path='/dev/pts/0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <log file='/var/lib/nova/instances/9fd1f64a-4b5b-4638-9411-04027230851c/console.log' append='off'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <target type='serial' port='0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='serial0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </console>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <input type='tablet' bus='usb'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='input0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='usb' bus='0' port='1'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </input>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <input type='mouse' bus='ps2'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='input1'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </input>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <input type='keyboard' bus='ps2'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='input2'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </input>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <listen type='address' address='::0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </graphics>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <audio id='1' type='none'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <model type='virtio' heads='1' primary='yes'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='video0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <watchdog model='itco' action='reset'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='watchdog0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </watchdog>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <memballoon model='virtio'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <stats period='10'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='balloon0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <rng model='virtio'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <backend model='random'>/dev/urandom</backend>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <alias name='rng0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <label>system_u:system_r:svirt_t:s0:c733,c976</label>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c733,c976</imagelabel>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </seclabel>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <label>+107:+107</label>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <imagelabel>+107:+107</imagelabel>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </seclabel>
Jan 23 04:49:07 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:49:07 np0005593232 nova_compute[250269]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.239 250273 WARNING nova.virt.libvirt.driver [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Detaching interface fa:16:3e:ee:ae:52 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapdd3423ed-bd' not found.#033[00m
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.241 250273 DEBUG nova.virt.libvirt.vif [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1637579954',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1637579954',id=63,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJXr2k8IY5inNTCZwPVk5EfXAYp/4zzrsuhLp4zz6M69G4n9zDvSId1yAkQJtnYhKGxrxUt2whN+RRd76STJFNsQDbsDJL6/27IM6kTM9k4+zPmNQXcs5cl0x6R7jt2AVA==',key_name='tempest-keypair-382580899',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:48:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='15c7fbc4d6794364830639a1fee9ecf0',ramdisk_id='',reservation_id='r-r4kdwucs',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-107101210',owner_user_name='tempest-TaggedAttachmentsTest-107101210-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:48:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8d814ef2afe04103bb6aa24724d61b11',uuid=9fd1f64a-4b5b-4638-9411-04027230851c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dd3423ed-bdeb-4860-86ec-694be592c799", "address": "fa:16:3e:ee:ae:52", "network": {"id": "4268ec60-e2aa-4e94-8ccb-604b1c0b73dc", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-2064626369", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": 
"10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.196", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd3423ed-bd", "ovs_interfaceid": "dd3423ed-bdeb-4860-86ec-694be592c799", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.241 250273 DEBUG nova.network.os_vif_util [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Converting VIF {"id": "dd3423ed-bdeb-4860-86ec-694be592c799", "address": "fa:16:3e:ee:ae:52", "network": {"id": "4268ec60-e2aa-4e94-8ccb-604b1c0b73dc", "bridge": "br-int", "label": "tempest-tagged-attachments-test-net-2064626369", "subnets": [{"cidr": "10.10.10.0/24", "dns": [], "gateway": {"address": "10.10.10.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.10.10.196", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdd3423ed-bd", "ovs_interfaceid": "dd3423ed-bdeb-4860-86ec-694be592c799", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.242 250273 DEBUG nova.network.os_vif_util [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:ae:52,bridge_name='br-int',has_traffic_filtering=True,id=dd3423ed-bdeb-4860-86ec-694be592c799,network=Network(4268ec60-e2aa-4e94-8ccb-604b1c0b73dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd3423ed-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.243 250273 DEBUG os_vif [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:ae:52,bridge_name='br-int',has_traffic_filtering=True,id=dd3423ed-bdeb-4860-86ec-694be592c799,network=Network(4268ec60-e2aa-4e94-8ccb-604b1c0b73dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd3423ed-bd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.245 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.246 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdd3423ed-bd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.246 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.250 250273 INFO os_vif [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:ae:52,bridge_name='br-int',has_traffic_filtering=True,id=dd3423ed-bdeb-4860-86ec-694be592c799,network=Network(4268ec60-e2aa-4e94-8ccb-604b1c0b73dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdd3423ed-bd')#033[00m
Jan 23 04:49:07 np0005593232 nova_compute[250269]: 2026-01-23 09:49:07.251 250273 DEBUG nova.virt.libvirt.guest [req-15eb2d9c-6bb2-4799-9f0f-75fbbd54ce1c req-2c53ec6f-8ac4-4db6-8458-1868a61262df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <nova:name>tempest-device-tagging-server-1637579954</nova:name>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <nova:creationTime>2026-01-23 09:49:07</nova:creationTime>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <nova:flavor name="m1.nano">
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:memory>128</nova:memory>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:disk>1</nova:disk>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:swap>0</nova:swap>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:vcpus>1</nova:vcpus>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </nova:flavor>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <nova:owner>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:user uuid="8d814ef2afe04103bb6aa24724d61b11">tempest-TaggedAttachmentsTest-107101210-project-member</nova:user>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:project uuid="15c7fbc4d6794364830639a1fee9ecf0">tempest-TaggedAttachmentsTest-107101210</nova:project>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </nova:owner>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  <nova:ports>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    <nova:port uuid="91e73992-335d-44cf-a5ca-3882d9ba3477">
Jan 23 04:49:07 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 04:49:07 np0005593232 nova_compute[250269]:  </nova:ports>
Jan 23 04:49:07 np0005593232 nova_compute[250269]: </nova:instance>
Jan 23 04:49:07 np0005593232 nova_compute[250269]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 23 04:49:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1724: 321 pgs: 321 active+clean; 176 MiB data, 628 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 387 KiB/s wr, 90 op/s
Jan 23 04:49:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:49:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:49:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:49:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:49:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:49:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:49:08 np0005593232 nova_compute[250269]: 2026-01-23 09:49:08.557 250273 INFO nova.network.neutron [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Port dd3423ed-bdeb-4860-86ec-694be592c799 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Jan 23 04:49:08 np0005593232 nova_compute[250269]: 2026-01-23 09:49:08.557 250273 DEBUG nova.network.neutron [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Updating instance_info_cache with network_info: [{"id": "91e73992-335d-44cf-a5ca-3882d9ba3477", "address": "fa:16:3e:cd:7d:6d", "network": {"id": "3f2ef857-ecb1-4eae-8bff-88d44b044dff", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1595805221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e73992-33", "ovs_interfaceid": "91e73992-335d-44cf-a5ca-3882d9ba3477", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:49:08 np0005593232 nova_compute[250269]: 2026-01-23 09:49:08.713 250273 DEBUG oslo_concurrency.lockutils [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Releasing lock "refresh_cache-9fd1f64a-4b5b-4638-9411-04027230851c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:49:08 np0005593232 nova_compute[250269]: 2026-01-23 09:49:08.740 250273 DEBUG oslo_concurrency.lockutils [None req-09dcd5b9-5222-45bc-a6ee-4c8f6a5c61a7 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "interface-9fd1f64a-4b5b-4638-9411-04027230851c-dd3423ed-bdeb-4860-86ec-694be592c799" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 4.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:49:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:09.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:09.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:09 np0005593232 nova_compute[250269]: 2026-01-23 09:49:09.391 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1725: 321 pgs: 321 active+clean; 213 MiB data, 645 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Jan 23 04:49:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:49:09 np0005593232 nova_compute[250269]: 2026-01-23 09:49:09.877 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:09 np0005593232 nova_compute[250269]: 2026-01-23 09:49:09.959 250273 DEBUG oslo_concurrency.lockutils [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Acquiring lock "9fd1f64a-4b5b-4638-9411-04027230851c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:49:09 np0005593232 nova_compute[250269]: 2026-01-23 09:49:09.960 250273 DEBUG oslo_concurrency.lockutils [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:49:09 np0005593232 nova_compute[250269]: 2026-01-23 09:49:09.960 250273 DEBUG oslo_concurrency.lockutils [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Acquiring lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:49:09 np0005593232 nova_compute[250269]: 2026-01-23 09:49:09.960 250273 DEBUG oslo_concurrency.lockutils [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:49:09 np0005593232 nova_compute[250269]: 2026-01-23 09:49:09.960 250273 DEBUG oslo_concurrency.lockutils [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:49:09 np0005593232 nova_compute[250269]: 2026-01-23 09:49:09.962 250273 INFO nova.compute.manager [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Terminating instance#033[00m
Jan 23 04:49:09 np0005593232 nova_compute[250269]: 2026-01-23 09:49:09.963 250273 DEBUG nova.compute.manager [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 04:49:10 np0005593232 kernel: tap91e73992-33 (unregistering): left promiscuous mode
Jan 23 04:49:10 np0005593232 NetworkManager[49057]: <info>  [1769161750.0339] device (tap91e73992-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:49:10 np0005593232 nova_compute[250269]: 2026-01-23 09:49:10.042 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:10 np0005593232 ovn_controller[151001]: 2026-01-23T09:49:10Z|00196|binding|INFO|Releasing lport 91e73992-335d-44cf-a5ca-3882d9ba3477 from this chassis (sb_readonly=0)
Jan 23 04:49:10 np0005593232 ovn_controller[151001]: 2026-01-23T09:49:10Z|00197|binding|INFO|Setting lport 91e73992-335d-44cf-a5ca-3882d9ba3477 down in Southbound
Jan 23 04:49:10 np0005593232 ovn_controller[151001]: 2026-01-23T09:49:10Z|00198|binding|INFO|Removing iface tap91e73992-33 ovn-installed in OVS
Jan 23 04:49:10 np0005593232 nova_compute[250269]: 2026-01-23 09:49:10.044 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:10.053 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:7d:6d 10.100.0.6'], port_security=['fa:16:3e:cd:7d:6d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '9fd1f64a-4b5b-4638-9411-04027230851c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f2ef857-ecb1-4eae-8bff-88d44b044dff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '15c7fbc4d6794364830639a1fee9ecf0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '67a66804-b1f0-4687-a341-8c6bda3ca76c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.195'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70bd5cd0-0ee6-4cf8-9db7-3e75e486209b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=91e73992-335d-44cf-a5ca-3882d9ba3477) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:49:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:10.055 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 91e73992-335d-44cf-a5ca-3882d9ba3477 in datapath 3f2ef857-ecb1-4eae-8bff-88d44b044dff unbound from our chassis#033[00m
Jan 23 04:49:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:10.058 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3f2ef857-ecb1-4eae-8bff-88d44b044dff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 04:49:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:10.062 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6460f8c7-b99b-4b1c-b336-0c1141b1fc0d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:10.063 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3f2ef857-ecb1-4eae-8bff-88d44b044dff namespace which is not needed anymore#033[00m
Jan 23 04:49:10 np0005593232 nova_compute[250269]: 2026-01-23 09:49:10.071 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:10 np0005593232 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Jan 23 04:49:10 np0005593232 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d0000003f.scope: Consumed 16.564s CPU time.
Jan 23 04:49:10 np0005593232 systemd-machined[215836]: Machine qemu-24-instance-0000003f terminated.
Jan 23 04:49:10 np0005593232 nova_compute[250269]: 2026-01-23 09:49:10.245 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:10 np0005593232 neutron-haproxy-ovnmeta-3f2ef857-ecb1-4eae-8bff-88d44b044dff[291231]: [NOTICE]   (291235) : haproxy version is 2.8.14-c23fe91
Jan 23 04:49:10 np0005593232 neutron-haproxy-ovnmeta-3f2ef857-ecb1-4eae-8bff-88d44b044dff[291231]: [NOTICE]   (291235) : path to executable is /usr/sbin/haproxy
Jan 23 04:49:10 np0005593232 neutron-haproxy-ovnmeta-3f2ef857-ecb1-4eae-8bff-88d44b044dff[291231]: [WARNING]  (291235) : Exiting Master process...
Jan 23 04:49:10 np0005593232 neutron-haproxy-ovnmeta-3f2ef857-ecb1-4eae-8bff-88d44b044dff[291231]: [ALERT]    (291235) : Current worker (291237) exited with code 143 (Terminated)
Jan 23 04:49:10 np0005593232 neutron-haproxy-ovnmeta-3f2ef857-ecb1-4eae-8bff-88d44b044dff[291231]: [WARNING]  (291235) : All workers exited. Exiting... (0)
Jan 23 04:49:10 np0005593232 systemd[1]: libpod-1e0ae9fe2132bd1592e8ae80464d430e679cd76da5d5704a2fff3c88047572d6.scope: Deactivated successfully.
Jan 23 04:49:10 np0005593232 nova_compute[250269]: 2026-01-23 09:49:10.256 250273 INFO nova.virt.libvirt.driver [-] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Instance destroyed successfully.#033[00m
Jan 23 04:49:10 np0005593232 nova_compute[250269]: 2026-01-23 09:49:10.257 250273 DEBUG nova.objects.instance [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lazy-loading 'resources' on Instance uuid 9fd1f64a-4b5b-4638-9411-04027230851c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:49:10 np0005593232 podman[293051]: 2026-01-23 09:49:10.259396568 +0000 UTC m=+0.109503963 container died 1e0ae9fe2132bd1592e8ae80464d430e679cd76da5d5704a2fff3c88047572d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f2ef857-ecb1-4eae-8bff-88d44b044dff, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 04:49:10 np0005593232 nova_compute[250269]: 2026-01-23 09:49:10.276 250273 DEBUG nova.virt.libvirt.vif [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:47:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-device-tagging-server-1637579954',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-device-tagging-server-1637579954',id=63,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJXr2k8IY5inNTCZwPVk5EfXAYp/4zzrsuhLp4zz6M69G4n9zDvSId1yAkQJtnYhKGxrxUt2whN+RRd76STJFNsQDbsDJL6/27IM6kTM9k4+zPmNQXcs5cl0x6R7jt2AVA==',key_name='tempest-keypair-382580899',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:48:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='15c7fbc4d6794364830639a1fee9ecf0',ramdisk_id='',reservation_id='r-r4kdwucs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input
_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TaggedAttachmentsTest-107101210',owner_user_name='tempest-TaggedAttachmentsTest-107101210-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:48:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8d814ef2afe04103bb6aa24724d61b11',uuid=9fd1f64a-4b5b-4638-9411-04027230851c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "91e73992-335d-44cf-a5ca-3882d9ba3477", "address": "fa:16:3e:cd:7d:6d", "network": {"id": "3f2ef857-ecb1-4eae-8bff-88d44b044dff", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1595805221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e73992-33", "ovs_interfaceid": "91e73992-335d-44cf-a5ca-3882d9ba3477", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:49:10 np0005593232 nova_compute[250269]: 2026-01-23 09:49:10.276 250273 DEBUG nova.network.os_vif_util [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Converting VIF {"id": "91e73992-335d-44cf-a5ca-3882d9ba3477", "address": "fa:16:3e:cd:7d:6d", "network": {"id": "3f2ef857-ecb1-4eae-8bff-88d44b044dff", "bridge": "br-int", "label": "tempest-TaggedAttachmentsTest-1595805221-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15c7fbc4d6794364830639a1fee9ecf0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e73992-33", "ovs_interfaceid": "91e73992-335d-44cf-a5ca-3882d9ba3477", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:49:10 np0005593232 nova_compute[250269]: 2026-01-23 09:49:10.277 250273 DEBUG nova.network.os_vif_util [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cd:7d:6d,bridge_name='br-int',has_traffic_filtering=True,id=91e73992-335d-44cf-a5ca-3882d9ba3477,network=Network(3f2ef857-ecb1-4eae-8bff-88d44b044dff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e73992-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:49:10 np0005593232 nova_compute[250269]: 2026-01-23 09:49:10.277 250273 DEBUG os_vif [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cd:7d:6d,bridge_name='br-int',has_traffic_filtering=True,id=91e73992-335d-44cf-a5ca-3882d9ba3477,network=Network(3f2ef857-ecb1-4eae-8bff-88d44b044dff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e73992-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:49:10 np0005593232 nova_compute[250269]: 2026-01-23 09:49:10.279 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:10 np0005593232 nova_compute[250269]: 2026-01-23 09:49:10.279 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91e73992-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:49:10 np0005593232 nova_compute[250269]: 2026-01-23 09:49:10.281 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:10 np0005593232 nova_compute[250269]: 2026-01-23 09:49:10.282 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:10 np0005593232 nova_compute[250269]: 2026-01-23 09:49:10.285 250273 INFO os_vif [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cd:7d:6d,bridge_name='br-int',has_traffic_filtering=True,id=91e73992-335d-44cf-a5ca-3882d9ba3477,network=Network(3f2ef857-ecb1-4eae-8bff-88d44b044dff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e73992-33')#033[00m
Jan 23 04:49:10 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1e0ae9fe2132bd1592e8ae80464d430e679cd76da5d5704a2fff3c88047572d6-userdata-shm.mount: Deactivated successfully.
Jan 23 04:49:10 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6f0f604ceb020d296811b8538aa4263ec9796635323de41753e1492c355c8776-merged.mount: Deactivated successfully.
Jan 23 04:49:10 np0005593232 podman[293051]: 2026-01-23 09:49:10.310238962 +0000 UTC m=+0.160346337 container cleanup 1e0ae9fe2132bd1592e8ae80464d430e679cd76da5d5704a2fff3c88047572d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f2ef857-ecb1-4eae-8bff-88d44b044dff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 23 04:49:10 np0005593232 systemd[1]: libpod-conmon-1e0ae9fe2132bd1592e8ae80464d430e679cd76da5d5704a2fff3c88047572d6.scope: Deactivated successfully.
Jan 23 04:49:10 np0005593232 podman[293106]: 2026-01-23 09:49:10.376778204 +0000 UTC m=+0.046168051 container remove 1e0ae9fe2132bd1592e8ae80464d430e679cd76da5d5704a2fff3c88047572d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f2ef857-ecb1-4eae-8bff-88d44b044dff, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 23 04:49:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:10.384 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[67095c10-6de9-4325-aab6-e40cbe08ee19]: (4, ('Fri Jan 23 09:49:10 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3f2ef857-ecb1-4eae-8bff-88d44b044dff (1e0ae9fe2132bd1592e8ae80464d430e679cd76da5d5704a2fff3c88047572d6)\n1e0ae9fe2132bd1592e8ae80464d430e679cd76da5d5704a2fff3c88047572d6\nFri Jan 23 09:49:10 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3f2ef857-ecb1-4eae-8bff-88d44b044dff (1e0ae9fe2132bd1592e8ae80464d430e679cd76da5d5704a2fff3c88047572d6)\n1e0ae9fe2132bd1592e8ae80464d430e679cd76da5d5704a2fff3c88047572d6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:10.387 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e00b14e9-ff88-4f89-86f7-c8f1bcf246eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:10.388 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f2ef857-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:49:10 np0005593232 nova_compute[250269]: 2026-01-23 09:49:10.389 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:10 np0005593232 kernel: tap3f2ef857-e0: left promiscuous mode
Jan 23 04:49:10 np0005593232 nova_compute[250269]: 2026-01-23 09:49:10.404 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:10 np0005593232 nova_compute[250269]: 2026-01-23 09:49:10.405 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:10.408 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[80cd4508-8cfd-4aba-8441-0e7ef1a15df1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:10.423 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[229c0cec-d68e-4f1a-bde8-172d9d80a29a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:10.424 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8d24f179-0177-4c18-b7b9-e3b492ff61ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:10.440 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2c47a6fd-6b3a-4644-9514-7e63f0a2bb75]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 560125, 'reachable_time': 18455, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293124, 'error': None, 'target': 'ovnmeta-3f2ef857-ecb1-4eae-8bff-88d44b044dff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:10 np0005593232 systemd[1]: run-netns-ovnmeta\x2d3f2ef857\x2decb1\x2d4eae\x2d8bff\x2d88d44b044dff.mount: Deactivated successfully.
Jan 23 04:49:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:10.444 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3f2ef857-ecb1-4eae-8bff-88d44b044dff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 04:49:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:10.444 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[7369a350-9cfa-4b1b-8609-11f24415b8e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:10 np0005593232 nova_compute[250269]: 2026-01-23 09:49:10.455 250273 DEBUG nova.compute.manager [req-7d684f31-dbec-4afd-b3ef-03df1b279b97 req-33128fe0-a1de-4711-b280-db9eeef1f697 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Received event network-vif-unplugged-91e73992-335d-44cf-a5ca-3882d9ba3477 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:49:10 np0005593232 nova_compute[250269]: 2026-01-23 09:49:10.456 250273 DEBUG oslo_concurrency.lockutils [req-7d684f31-dbec-4afd-b3ef-03df1b279b97 req-33128fe0-a1de-4711-b280-db9eeef1f697 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:49:10 np0005593232 nova_compute[250269]: 2026-01-23 09:49:10.456 250273 DEBUG oslo_concurrency.lockutils [req-7d684f31-dbec-4afd-b3ef-03df1b279b97 req-33128fe0-a1de-4711-b280-db9eeef1f697 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:49:10 np0005593232 nova_compute[250269]: 2026-01-23 09:49:10.456 250273 DEBUG oslo_concurrency.lockutils [req-7d684f31-dbec-4afd-b3ef-03df1b279b97 req-33128fe0-a1de-4711-b280-db9eeef1f697 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:49:10 np0005593232 nova_compute[250269]: 2026-01-23 09:49:10.457 250273 DEBUG nova.compute.manager [req-7d684f31-dbec-4afd-b3ef-03df1b279b97 req-33128fe0-a1de-4711-b280-db9eeef1f697 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] No waiting events found dispatching network-vif-unplugged-91e73992-335d-44cf-a5ca-3882d9ba3477 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:49:10 np0005593232 nova_compute[250269]: 2026-01-23 09:49:10.457 250273 DEBUG nova.compute.manager [req-7d684f31-dbec-4afd-b3ef-03df1b279b97 req-33128fe0-a1de-4711-b280-db9eeef1f697 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Received event network-vif-unplugged-91e73992-335d-44cf-a5ca-3882d9ba3477 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 04:49:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:11.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:11.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:11 np0005593232 nova_compute[250269]: 2026-01-23 09:49:11.202 250273 INFO nova.virt.libvirt.driver [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Deleting instance files /var/lib/nova/instances/9fd1f64a-4b5b-4638-9411-04027230851c_del#033[00m
Jan 23 04:49:11 np0005593232 nova_compute[250269]: 2026-01-23 09:49:11.203 250273 INFO nova.virt.libvirt.driver [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Deletion of /var/lib/nova/instances/9fd1f64a-4b5b-4638-9411-04027230851c_del complete#033[00m
Jan 23 04:49:11 np0005593232 nova_compute[250269]: 2026-01-23 09:49:11.253 250273 INFO nova.compute.manager [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Took 1.29 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 04:49:11 np0005593232 nova_compute[250269]: 2026-01-23 09:49:11.254 250273 DEBUG oslo.service.loopingcall [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 04:49:11 np0005593232 nova_compute[250269]: 2026-01-23 09:49:11.254 250273 DEBUG nova.compute.manager [-] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 04:49:11 np0005593232 nova_compute[250269]: 2026-01-23 09:49:11.254 250273 DEBUG nova.network.neutron [-] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 04:49:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1726: 321 pgs: 321 active+clean; 188 MiB data, 636 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Jan 23 04:49:12 np0005593232 nova_compute[250269]: 2026-01-23 09:49:12.710 250273 DEBUG nova.compute.manager [req-7ebed210-f4e2-40c7-bd50-78d147fd7f3f req-64e93c68-f64d-418e-b0e7-7fc48aa12edc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Received event network-vif-plugged-91e73992-335d-44cf-a5ca-3882d9ba3477 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:49:12 np0005593232 nova_compute[250269]: 2026-01-23 09:49:12.710 250273 DEBUG oslo_concurrency.lockutils [req-7ebed210-f4e2-40c7-bd50-78d147fd7f3f req-64e93c68-f64d-418e-b0e7-7fc48aa12edc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:49:12 np0005593232 nova_compute[250269]: 2026-01-23 09:49:12.710 250273 DEBUG oslo_concurrency.lockutils [req-7ebed210-f4e2-40c7-bd50-78d147fd7f3f req-64e93c68-f64d-418e-b0e7-7fc48aa12edc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:49:12 np0005593232 nova_compute[250269]: 2026-01-23 09:49:12.711 250273 DEBUG oslo_concurrency.lockutils [req-7ebed210-f4e2-40c7-bd50-78d147fd7f3f req-64e93c68-f64d-418e-b0e7-7fc48aa12edc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:49:12 np0005593232 nova_compute[250269]: 2026-01-23 09:49:12.711 250273 DEBUG nova.compute.manager [req-7ebed210-f4e2-40c7-bd50-78d147fd7f3f req-64e93c68-f64d-418e-b0e7-7fc48aa12edc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] No waiting events found dispatching network-vif-plugged-91e73992-335d-44cf-a5ca-3882d9ba3477 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:49:12 np0005593232 nova_compute[250269]: 2026-01-23 09:49:12.711 250273 WARNING nova.compute.manager [req-7ebed210-f4e2-40c7-bd50-78d147fd7f3f req-64e93c68-f64d-418e-b0e7-7fc48aa12edc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Received unexpected event network-vif-plugged-91e73992-335d-44cf-a5ca-3882d9ba3477 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 04:49:13 np0005593232 nova_compute[250269]: 2026-01-23 09:49:13.051 250273 DEBUG nova.network.neutron [-] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:49:13 np0005593232 nova_compute[250269]: 2026-01-23 09:49:13.078 250273 INFO nova.compute.manager [-] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Took 1.82 seconds to deallocate network for instance.#033[00m
Jan 23 04:49:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:13.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:13 np0005593232 nova_compute[250269]: 2026-01-23 09:49:13.141 250273 DEBUG oslo_concurrency.lockutils [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:49:13 np0005593232 nova_compute[250269]: 2026-01-23 09:49:13.142 250273 DEBUG oslo_concurrency.lockutils [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:49:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:13.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:13 np0005593232 nova_compute[250269]: 2026-01-23 09:49:13.201 250273 DEBUG oslo_concurrency.processutils [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:49:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1727: 321 pgs: 321 active+clean; 148 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.1 MiB/s wr, 143 op/s
Jan 23 04:49:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:49:13 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/247405713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:49:13 np0005593232 nova_compute[250269]: 2026-01-23 09:49:13.666 250273 DEBUG oslo_concurrency.processutils [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:49:13 np0005593232 nova_compute[250269]: 2026-01-23 09:49:13.673 250273 DEBUG nova.compute.provider_tree [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:49:13 np0005593232 nova_compute[250269]: 2026-01-23 09:49:13.696 250273 DEBUG nova.scheduler.client.report [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:49:13 np0005593232 nova_compute[250269]: 2026-01-23 09:49:13.735 250273 DEBUG oslo_concurrency.lockutils [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:49:13 np0005593232 nova_compute[250269]: 2026-01-23 09:49:13.771 250273 INFO nova.scheduler.client.report [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Deleted allocations for instance 9fd1f64a-4b5b-4638-9411-04027230851c#033[00m
Jan 23 04:49:13 np0005593232 nova_compute[250269]: 2026-01-23 09:49:13.861 250273 DEBUG oslo_concurrency.lockutils [None req-9ae6588f-b980-4f0b-9949-e5f1b24fafbd 8d814ef2afe04103bb6aa24724d61b11 15c7fbc4d6794364830639a1fee9ecf0 - - default default] Lock "9fd1f64a-4b5b-4638-9411-04027230851c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:49:14 np0005593232 nova_compute[250269]: 2026-01-23 09:49:14.190 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:49:14 np0005593232 nova_compute[250269]: 2026-01-23 09:49:14.879 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:15.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:15.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:15 np0005593232 nova_compute[250269]: 2026-01-23 09:49:15.281 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:15 np0005593232 nova_compute[250269]: 2026-01-23 09:49:15.305 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:49:15 np0005593232 nova_compute[250269]: 2026-01-23 09:49:15.369 250273 DEBUG nova.compute.manager [req-45a75a56-9a09-4eda-b791-714ad574cf9d req-38fdca62-cc4d-44b3-a3bd-629cbf21aee8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Received event network-vif-deleted-91e73992-335d-44cf-a5ca-3882d9ba3477 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:49:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1728: 321 pgs: 321 active+clean; 148 MiB data, 615 MiB used, 20 GiB / 21 GiB avail; 128 KiB/s rd, 3.1 MiB/s wr, 86 op/s
Jan 23 04:49:16 np0005593232 nova_compute[250269]: 2026-01-23 09:49:16.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:49:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:17.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:49:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:17.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:49:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1729: 321 pgs: 321 active+clean; 163 MiB data, 623 MiB used, 20 GiB / 21 GiB avail; 809 KiB/s rd, 3.8 MiB/s wr, 132 op/s
Jan 23 04:49:18 np0005593232 nova_compute[250269]: 2026-01-23 09:49:18.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:49:18 np0005593232 podman[293202]: 2026-01-23 09:49:18.573027451 +0000 UTC m=+0.081340547 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 23 04:49:18 np0005593232 nova_compute[250269]: 2026-01-23 09:49:18.886 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:19.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:19 np0005593232 nova_compute[250269]: 2026-01-23 09:49:19.163 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:19.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:19 np0005593232 nova_compute[250269]: 2026-01-23 09:49:19.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:49:19 np0005593232 nova_compute[250269]: 2026-01-23 09:49:19.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:49:19 np0005593232 nova_compute[250269]: 2026-01-23 09:49:19.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:49:19 np0005593232 nova_compute[250269]: 2026-01-23 09:49:19.334 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 04:49:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1730: 321 pgs: 321 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 179 op/s
Jan 23 04:49:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:49:19 np0005593232 nova_compute[250269]: 2026-01-23 09:49:19.881 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:20 np0005593232 nova_compute[250269]: 2026-01-23 09:49:20.325 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:21.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:21.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:21 np0005593232 nova_compute[250269]: 2026-01-23 09:49:21.329 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:49:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1731: 321 pgs: 321 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 179 op/s
Jan 23 04:49:22 np0005593232 nova_compute[250269]: 2026-01-23 09:49:22.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:49:22 np0005593232 nova_compute[250269]: 2026-01-23 09:49:22.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:49:22 np0005593232 nova_compute[250269]: 2026-01-23 09:49:22.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:49:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:23.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:23.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:23 np0005593232 nova_compute[250269]: 2026-01-23 09:49:23.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:49:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1732: 321 pgs: 321 active+clean; 117 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 188 op/s
Jan 23 04:49:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:49:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/143202670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:49:24 np0005593232 nova_compute[250269]: 2026-01-23 09:49:24.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:49:24 np0005593232 nova_compute[250269]: 2026-01-23 09:49:24.315 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:49:24 np0005593232 nova_compute[250269]: 2026-01-23 09:49:24.316 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:49:24 np0005593232 nova_compute[250269]: 2026-01-23 09:49:24.316 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:49:24 np0005593232 nova_compute[250269]: 2026-01-23 09:49:24.316 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:49:24 np0005593232 nova_compute[250269]: 2026-01-23 09:49:24.316 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:49:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:49:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:49:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/652974489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:49:24 np0005593232 nova_compute[250269]: 2026-01-23 09:49:24.761 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:49:24 np0005593232 nova_compute[250269]: 2026-01-23 09:49:24.882 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:24 np0005593232 nova_compute[250269]: 2026-01-23 09:49:24.930 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:49:24 np0005593232 nova_compute[250269]: 2026-01-23 09:49:24.931 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4591MB free_disk=20.95269775390625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:49:24 np0005593232 nova_compute[250269]: 2026-01-23 09:49:24.931 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:49:24 np0005593232 nova_compute[250269]: 2026-01-23 09:49:24.932 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:49:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:25.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:25.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:25 np0005593232 nova_compute[250269]: 2026-01-23 09:49:25.255 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769161750.254081, 9fd1f64a-4b5b-4638-9411-04027230851c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:49:25 np0005593232 nova_compute[250269]: 2026-01-23 09:49:25.256 250273 INFO nova.compute.manager [-] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:49:25 np0005593232 nova_compute[250269]: 2026-01-23 09:49:25.327 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:25 np0005593232 nova_compute[250269]: 2026-01-23 09:49:25.335 250273 DEBUG nova.compute.manager [None req-ab414e9c-7995-4103-9ec3-5933d9e149c1 - - - - - -] [instance: 9fd1f64a-4b5b-4638-9411-04027230851c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:49:25 np0005593232 nova_compute[250269]: 2026-01-23 09:49:25.431 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:49:25 np0005593232 nova_compute[250269]: 2026-01-23 09:49:25.432 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:49:25 np0005593232 nova_compute[250269]: 2026-01-23 09:49:25.452 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing inventories for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 23 04:49:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1733: 321 pgs: 321 active+clean; 117 MiB data, 595 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 831 KiB/s wr, 146 op/s
Jan 23 04:49:25 np0005593232 nova_compute[250269]: 2026-01-23 09:49:25.484 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating ProviderTree inventory for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 23 04:49:25 np0005593232 nova_compute[250269]: 2026-01-23 09:49:25.485 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 04:49:25 np0005593232 nova_compute[250269]: 2026-01-23 09:49:25.504 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing aggregate associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 23 04:49:25 np0005593232 nova_compute[250269]: 2026-01-23 09:49:25.535 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing trait associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 23 04:49:25 np0005593232 nova_compute[250269]: 2026-01-23 09:49:25.555 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:49:25 np0005593232 podman[293278]: 2026-01-23 09:49:25.894763362 +0000 UTC m=+0.048462407 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:49:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:49:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1630665419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:49:26 np0005593232 nova_compute[250269]: 2026-01-23 09:49:26.033 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:49:26 np0005593232 nova_compute[250269]: 2026-01-23 09:49:26.039 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:49:26 np0005593232 nova_compute[250269]: 2026-01-23 09:49:26.061 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:49:26 np0005593232 nova_compute[250269]: 2026-01-23 09:49:26.090 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:49:26 np0005593232 nova_compute[250269]: 2026-01-23 09:49:26.091 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:49:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:27.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:27.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1734: 321 pgs: 321 active+clean; 101 MiB data, 588 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.6 MiB/s wr, 172 op/s
Jan 23 04:49:29 np0005593232 nova_compute[250269]: 2026-01-23 09:49:29.091 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:49:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:29.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:29.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1735: 321 pgs: 321 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 167 op/s
Jan 23 04:49:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:49:29 np0005593232 nova_compute[250269]: 2026-01-23 09:49:29.884 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:30 np0005593232 nova_compute[250269]: 2026-01-23 09:49:30.329 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:31.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:31.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1736: 321 pgs: 321 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 106 op/s
Jan 23 04:49:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:49:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:49:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:49:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:49:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:49:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:49:32 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 216f7330-1f4a-4c9c-a026-33aaba2ed56a does not exist
Jan 23 04:49:32 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 62b83ca5-0943-4554-ab82-c6c55fac06e4 does not exist
Jan 23 04:49:32 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 57f7fc11-35e7-4afc-9dc7-336592b0fcc4 does not exist
Jan 23 04:49:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:49:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:49:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:49:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:49:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:49:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:49:33 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:49:33 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:49:33 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:49:33 np0005593232 podman[293572]: 2026-01-23 09:49:33.054161969 +0000 UTC m=+0.020532238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:49:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:33.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:33.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:33 np0005593232 nova_compute[250269]: 2026-01-23 09:49:33.294 250273 DEBUG oslo_concurrency.lockutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Acquiring lock "1f303dea-3b7e-419e-b31c-b04209c0cd89" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:49:33 np0005593232 nova_compute[250269]: 2026-01-23 09:49:33.296 250273 DEBUG oslo_concurrency.lockutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "1f303dea-3b7e-419e-b31c-b04209c0cd89" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:49:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1737: 321 pgs: 321 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 2.2 MiB/s wr, 93 op/s
Jan 23 04:49:33 np0005593232 podman[293572]: 2026-01-23 09:49:33.613154073 +0000 UTC m=+0.579524342 container create 2293107af8e91b71a72b472d83e1bba913499ca589aa6af1756c3688d9de2c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:49:33 np0005593232 systemd[1]: Started libpod-conmon-2293107af8e91b71a72b472d83e1bba913499ca589aa6af1756c3688d9de2c9d.scope.
Jan 23 04:49:33 np0005593232 nova_compute[250269]: 2026-01-23 09:49:33.821 250273 DEBUG nova.compute.manager [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:49:33 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:49:33 np0005593232 podman[293572]: 2026-01-23 09:49:33.953193148 +0000 UTC m=+0.919563477 container init 2293107af8e91b71a72b472d83e1bba913499ca589aa6af1756c3688d9de2c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:49:33 np0005593232 podman[293572]: 2026-01-23 09:49:33.96167564 +0000 UTC m=+0.928045909 container start 2293107af8e91b71a72b472d83e1bba913499ca589aa6af1756c3688d9de2c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_northcutt, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 04:49:33 np0005593232 podman[293572]: 2026-01-23 09:49:33.965200291 +0000 UTC m=+0.931570570 container attach 2293107af8e91b71a72b472d83e1bba913499ca589aa6af1756c3688d9de2c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 04:49:33 np0005593232 cranky_northcutt[293589]: 167 167
Jan 23 04:49:33 np0005593232 systemd[1]: libpod-2293107af8e91b71a72b472d83e1bba913499ca589aa6af1756c3688d9de2c9d.scope: Deactivated successfully.
Jan 23 04:49:33 np0005593232 podman[293572]: 2026-01-23 09:49:33.970818482 +0000 UTC m=+0.937188791 container died 2293107af8e91b71a72b472d83e1bba913499ca589aa6af1756c3688d9de2c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_northcutt, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 04:49:34 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e87cef61fc0b8295f9e32fb9aed3302c96621b8dab7d7f1a0e45aa444bf70d9f-merged.mount: Deactivated successfully.
Jan 23 04:49:34 np0005593232 podman[293572]: 2026-01-23 09:49:34.017939939 +0000 UTC m=+0.984310208 container remove 2293107af8e91b71a72b472d83e1bba913499ca589aa6af1756c3688d9de2c9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:49:34 np0005593232 systemd[1]: libpod-conmon-2293107af8e91b71a72b472d83e1bba913499ca589aa6af1756c3688d9de2c9d.scope: Deactivated successfully.
Jan 23 04:49:34 np0005593232 podman[293611]: 2026-01-23 09:49:34.218659208 +0000 UTC m=+0.062251141 container create e71dfa9be4cb6ea56318bfc31e4bb20cda3ca90a0ca404d7dd4a7ece5e68a1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 04:49:34 np0005593232 systemd[1]: Started libpod-conmon-e71dfa9be4cb6ea56318bfc31e4bb20cda3ca90a0ca404d7dd4a7ece5e68a1b1.scope.
Jan 23 04:49:34 np0005593232 podman[293611]: 2026-01-23 09:49:34.193645813 +0000 UTC m=+0.037237856 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:49:34 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:49:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2b0edfae9e2f9a0d9f951cf295b8802e1d4a3f4e98ff813e18388da4ae161df/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:49:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2b0edfae9e2f9a0d9f951cf295b8802e1d4a3f4e98ff813e18388da4ae161df/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:49:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2b0edfae9e2f9a0d9f951cf295b8802e1d4a3f4e98ff813e18388da4ae161df/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:49:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2b0edfae9e2f9a0d9f951cf295b8802e1d4a3f4e98ff813e18388da4ae161df/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:49:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2b0edfae9e2f9a0d9f951cf295b8802e1d4a3f4e98ff813e18388da4ae161df/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:49:34 np0005593232 podman[293611]: 2026-01-23 09:49:34.32467812 +0000 UTC m=+0.168270163 container init e71dfa9be4cb6ea56318bfc31e4bb20cda3ca90a0ca404d7dd4a7ece5e68a1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_archimedes, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 23 04:49:34 np0005593232 podman[293611]: 2026-01-23 09:49:34.3411188 +0000 UTC m=+0.184710763 container start e71dfa9be4cb6ea56318bfc31e4bb20cda3ca90a0ca404d7dd4a7ece5e68a1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_archimedes, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 23 04:49:34 np0005593232 podman[293611]: 2026-01-23 09:49:34.345586498 +0000 UTC m=+0.189178441 container attach e71dfa9be4cb6ea56318bfc31e4bb20cda3ca90a0ca404d7dd4a7ece5e68a1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_archimedes, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:49:34 np0005593232 nova_compute[250269]: 2026-01-23 09:49:34.358 250273 DEBUG oslo_concurrency.lockutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:49:34 np0005593232 nova_compute[250269]: 2026-01-23 09:49:34.359 250273 DEBUG oslo_concurrency.lockutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:49:34 np0005593232 nova_compute[250269]: 2026-01-23 09:49:34.367 250273 DEBUG nova.virt.hardware [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:49:34 np0005593232 nova_compute[250269]: 2026-01-23 09:49:34.367 250273 INFO nova.compute.claims [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:49:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:49:34 np0005593232 nova_compute[250269]: 2026-01-23 09:49:34.888 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:35 np0005593232 inspiring_archimedes[293628]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:49:35 np0005593232 inspiring_archimedes[293628]: --> relative data size: 1.0
Jan 23 04:49:35 np0005593232 inspiring_archimedes[293628]: --> All data devices are unavailable
Jan 23 04:49:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:49:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:35.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:49:35 np0005593232 systemd[1]: libpod-e71dfa9be4cb6ea56318bfc31e4bb20cda3ca90a0ca404d7dd4a7ece5e68a1b1.scope: Deactivated successfully.
Jan 23 04:49:35 np0005593232 podman[293611]: 2026-01-23 09:49:35.183183479 +0000 UTC m=+1.026775422 container died e71dfa9be4cb6ea56318bfc31e4bb20cda3ca90a0ca404d7dd4a7ece5e68a1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_archimedes, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 04:49:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:35.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:35 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a2b0edfae9e2f9a0d9f951cf295b8802e1d4a3f4e98ff813e18388da4ae161df-merged.mount: Deactivated successfully.
Jan 23 04:49:35 np0005593232 podman[293611]: 2026-01-23 09:49:35.236549475 +0000 UTC m=+1.080141418 container remove e71dfa9be4cb6ea56318bfc31e4bb20cda3ca90a0ca404d7dd4a7ece5e68a1b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 04:49:35 np0005593232 systemd[1]: libpod-conmon-e71dfa9be4cb6ea56318bfc31e4bb20cda3ca90a0ca404d7dd4a7ece5e68a1b1.scope: Deactivated successfully.
Jan 23 04:49:35 np0005593232 nova_compute[250269]: 2026-01-23 09:49:35.331 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1738: 321 pgs: 321 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 23 04:49:35 np0005593232 nova_compute[250269]: 2026-01-23 09:49:35.613 250273 DEBUG oslo_concurrency.processutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:49:35 np0005593232 podman[293815]: 2026-01-23 09:49:35.850412079 +0000 UTC m=+0.041279161 container create abd34be9964ad5ac1e36542e83ec8db1d3be0915aec718f7ee726b15b146b4bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lovelace, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:49:35 np0005593232 systemd[1]: Started libpod-conmon-abd34be9964ad5ac1e36542e83ec8db1d3be0915aec718f7ee726b15b146b4bc.scope.
Jan 23 04:49:35 np0005593232 podman[293815]: 2026-01-23 09:49:35.83259088 +0000 UTC m=+0.023457972 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:49:35 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:49:35 np0005593232 podman[293815]: 2026-01-23 09:49:35.949523704 +0000 UTC m=+0.140390796 container init abd34be9964ad5ac1e36542e83ec8db1d3be0915aec718f7ee726b15b146b4bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 04:49:35 np0005593232 podman[293815]: 2026-01-23 09:49:35.957812851 +0000 UTC m=+0.148679943 container start abd34be9964ad5ac1e36542e83ec8db1d3be0915aec718f7ee726b15b146b4bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lovelace, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 04:49:35 np0005593232 podman[293815]: 2026-01-23 09:49:35.961824765 +0000 UTC m=+0.152691847 container attach abd34be9964ad5ac1e36542e83ec8db1d3be0915aec718f7ee726b15b146b4bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lovelace, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:49:35 np0005593232 zen_lovelace[293831]: 167 167
Jan 23 04:49:35 np0005593232 systemd[1]: libpod-abd34be9964ad5ac1e36542e83ec8db1d3be0915aec718f7ee726b15b146b4bc.scope: Deactivated successfully.
Jan 23 04:49:36 np0005593232 podman[293836]: 2026-01-23 09:49:36.004557657 +0000 UTC m=+0.024896863 container died abd34be9964ad5ac1e36542e83ec8db1d3be0915aec718f7ee726b15b146b4bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 04:49:36 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f7e257a2e2d07a4c2f7b7d8b2bb9328684796563b2182383f1201640388df181-merged.mount: Deactivated successfully.
Jan 23 04:49:36 np0005593232 podman[293836]: 2026-01-23 09:49:36.044820749 +0000 UTC m=+0.065159905 container remove abd34be9964ad5ac1e36542e83ec8db1d3be0915aec718f7ee726b15b146b4bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 23 04:49:36 np0005593232 systemd[1]: libpod-conmon-abd34be9964ad5ac1e36542e83ec8db1d3be0915aec718f7ee726b15b146b4bc.scope: Deactivated successfully.
Jan 23 04:49:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:49:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/880770036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:49:36 np0005593232 nova_compute[250269]: 2026-01-23 09:49:36.073 250273 DEBUG oslo_concurrency.processutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:49:36 np0005593232 nova_compute[250269]: 2026-01-23 09:49:36.083 250273 DEBUG nova.compute.provider_tree [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:49:36 np0005593232 nova_compute[250269]: 2026-01-23 09:49:36.109 250273 DEBUG nova.scheduler.client.report [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:49:36 np0005593232 podman[293859]: 2026-01-23 09:49:36.224337442 +0000 UTC m=+0.043039242 container create 43a6c4f31687e255073ff95f89a8d7ffca1d04e33aae15bfc17cb267f8a2d273 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_johnson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:49:36 np0005593232 nova_compute[250269]: 2026-01-23 09:49:36.260 250273 DEBUG oslo_concurrency.lockutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:49:36 np0005593232 nova_compute[250269]: 2026-01-23 09:49:36.261 250273 DEBUG nova.compute.manager [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 04:49:36 np0005593232 systemd[1]: Started libpod-conmon-43a6c4f31687e255073ff95f89a8d7ffca1d04e33aae15bfc17cb267f8a2d273.scope.
Jan 23 04:49:36 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:49:36 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82418870c34387f6ec7af47eeded5dee117a69da0e40a42605cfa8cbdf3bc24c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:49:36 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82418870c34387f6ec7af47eeded5dee117a69da0e40a42605cfa8cbdf3bc24c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:49:36 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82418870c34387f6ec7af47eeded5dee117a69da0e40a42605cfa8cbdf3bc24c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:49:36 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82418870c34387f6ec7af47eeded5dee117a69da0e40a42605cfa8cbdf3bc24c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:49:36 np0005593232 podman[293859]: 2026-01-23 09:49:36.20641914 +0000 UTC m=+0.025120960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:49:36 np0005593232 podman[293859]: 2026-01-23 09:49:36.307711376 +0000 UTC m=+0.126413206 container init 43a6c4f31687e255073ff95f89a8d7ffca1d04e33aae15bfc17cb267f8a2d273 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_johnson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:49:36 np0005593232 podman[293859]: 2026-01-23 09:49:36.313891193 +0000 UTC m=+0.132592993 container start 43a6c4f31687e255073ff95f89a8d7ffca1d04e33aae15bfc17cb267f8a2d273 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_johnson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:49:36 np0005593232 podman[293859]: 2026-01-23 09:49:36.317196618 +0000 UTC m=+0.135898418 container attach 43a6c4f31687e255073ff95f89a8d7ffca1d04e33aae15bfc17cb267f8a2d273 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 04:49:36 np0005593232 nova_compute[250269]: 2026-01-23 09:49:36.562 250273 DEBUG nova.compute.manager [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 04:49:36 np0005593232 nova_compute[250269]: 2026-01-23 09:49:36.563 250273 DEBUG nova.network.neutron [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:49:36 np0005593232 nova_compute[250269]: 2026-01-23 09:49:36.605 250273 INFO nova.virt.libvirt.driver [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:49:36 np0005593232 nova_compute[250269]: 2026-01-23 09:49:36.646 250273 DEBUG nova.compute.manager [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:49:36 np0005593232 nova_compute[250269]: 2026-01-23 09:49:36.886 250273 DEBUG nova.compute.manager [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:49:36 np0005593232 nova_compute[250269]: 2026-01-23 09:49:36.888 250273 DEBUG nova.virt.libvirt.driver [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:49:36 np0005593232 nova_compute[250269]: 2026-01-23 09:49:36.888 250273 INFO nova.virt.libvirt.driver [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Creating image(s)#033[00m
Jan 23 04:49:36 np0005593232 nova_compute[250269]: 2026-01-23 09:49:36.926 250273 DEBUG nova.storage.rbd_utils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] rbd image 1f303dea-3b7e-419e-b31c-b04209c0cd89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:49:36 np0005593232 nova_compute[250269]: 2026-01-23 09:49:36.970 250273 DEBUG nova.storage.rbd_utils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] rbd image 1f303dea-3b7e-419e-b31c-b04209c0cd89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:49:37 np0005593232 nova_compute[250269]: 2026-01-23 09:49:37.004 250273 DEBUG nova.storage.rbd_utils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] rbd image 1f303dea-3b7e-419e-b31c-b04209c0cd89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:49:37 np0005593232 nova_compute[250269]: 2026-01-23 09:49:37.009 250273 DEBUG oslo_concurrency.processutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]: {
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:    "0": [
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:        {
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:            "devices": [
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:                "/dev/loop3"
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:            ],
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:            "lv_name": "ceph_lv0",
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:            "lv_size": "7511998464",
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:            "name": "ceph_lv0",
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:            "tags": {
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:                "ceph.cluster_name": "ceph",
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:                "ceph.crush_device_class": "",
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:                "ceph.encrypted": "0",
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:                "ceph.osd_id": "0",
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:                "ceph.type": "block",
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:                "ceph.vdo": "0"
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:            },
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:            "type": "block",
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:            "vg_name": "ceph_vg0"
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:        }
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]:    ]
Jan 23 04:49:37 np0005593232 dreamy_johnson[293876]: }
Jan 23 04:49:37 np0005593232 nova_compute[250269]: 2026-01-23 09:49:37.087 250273 DEBUG oslo_concurrency.processutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:49:37 np0005593232 nova_compute[250269]: 2026-01-23 09:49:37.088 250273 DEBUG oslo_concurrency.lockutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:49:37 np0005593232 nova_compute[250269]: 2026-01-23 09:49:37.089 250273 DEBUG oslo_concurrency.lockutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:49:37 np0005593232 nova_compute[250269]: 2026-01-23 09:49:37.089 250273 DEBUG oslo_concurrency.lockutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:49:37 np0005593232 systemd[1]: libpod-43a6c4f31687e255073ff95f89a8d7ffca1d04e33aae15bfc17cb267f8a2d273.scope: Deactivated successfully.
Jan 23 04:49:37 np0005593232 podman[293859]: 2026-01-23 09:49:37.105187831 +0000 UTC m=+0.923889641 container died 43a6c4f31687e255073ff95f89a8d7ffca1d04e33aae15bfc17cb267f8a2d273 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Jan 23 04:49:37 np0005593232 nova_compute[250269]: 2026-01-23 09:49:37.122 250273 DEBUG nova.storage.rbd_utils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] rbd image 1f303dea-3b7e-419e-b31c-b04209c0cd89_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:49:37 np0005593232 nova_compute[250269]: 2026-01-23 09:49:37.128 250273 DEBUG oslo_concurrency.processutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 1f303dea-3b7e-419e-b31c-b04209c0cd89_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:49:37 np0005593232 systemd[1]: var-lib-containers-storage-overlay-82418870c34387f6ec7af47eeded5dee117a69da0e40a42605cfa8cbdf3bc24c-merged.mount: Deactivated successfully.
Jan 23 04:49:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:37.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:37 np0005593232 podman[293859]: 2026-01-23 09:49:37.17020061 +0000 UTC m=+0.988902430 container remove 43a6c4f31687e255073ff95f89a8d7ffca1d04e33aae15bfc17cb267f8a2d273 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:49:37 np0005593232 systemd[1]: libpod-conmon-43a6c4f31687e255073ff95f89a8d7ffca1d04e33aae15bfc17cb267f8a2d273.scope: Deactivated successfully.
Jan 23 04:49:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:49:37
Jan 23 04:49:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:49:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:49:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'backups', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'images', 'default.rgw.control', 'vms']
Jan 23 04:49:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:49:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:37.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1739: 321 pgs: 321 active+clean; 121 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 23 04:49:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:49:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:49:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:49:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:49:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:49:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:49:37 np0005593232 nova_compute[250269]: 2026-01-23 09:49:37.600 250273 DEBUG nova.policy [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4cb83b8ddd0644f898d4be1f7de0b930', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b8b9b5c378f24327912b08252b3c9636', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 04:49:37 np0005593232 nova_compute[250269]: 2026-01-23 09:49:37.742 250273 DEBUG oslo_concurrency.processutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 1f303dea-3b7e-419e-b31c-b04209c0cd89_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:49:37 np0005593232 nova_compute[250269]: 2026-01-23 09:49:37.812 250273 DEBUG nova.storage.rbd_utils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] resizing rbd image 1f303dea-3b7e-419e-b31c-b04209c0cd89_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 04:49:37 np0005593232 podman[294171]: 2026-01-23 09:49:37.854073295 +0000 UTC m=+0.039135800 container create d311115454e70bfcaeb0ecd987601b23bc304caf235a52398d22f8dc6af20074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 23 04:49:37 np0005593232 systemd[1]: Started libpod-conmon-d311115454e70bfcaeb0ecd987601b23bc304caf235a52398d22f8dc6af20074.scope.
Jan 23 04:49:37 np0005593232 nova_compute[250269]: 2026-01-23 09:49:37.918 250273 DEBUG nova.objects.instance [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lazy-loading 'migration_context' on Instance uuid 1f303dea-3b7e-419e-b31c-b04209c0cd89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:49:37 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:49:37 np0005593232 podman[294171]: 2026-01-23 09:49:37.837350737 +0000 UTC m=+0.022413262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:49:37 np0005593232 podman[294171]: 2026-01-23 09:49:37.935536184 +0000 UTC m=+0.120598729 container init d311115454e70bfcaeb0ecd987601b23bc304caf235a52398d22f8dc6af20074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:49:37 np0005593232 podman[294171]: 2026-01-23 09:49:37.943357908 +0000 UTC m=+0.128420413 container start d311115454e70bfcaeb0ecd987601b23bc304caf235a52398d22f8dc6af20074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:49:37 np0005593232 romantic_cori[294213]: 167 167
Jan 23 04:49:37 np0005593232 podman[294171]: 2026-01-23 09:49:37.947798985 +0000 UTC m=+0.132861490 container attach d311115454e70bfcaeb0ecd987601b23bc304caf235a52398d22f8dc6af20074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:49:37 np0005593232 nova_compute[250269]: 2026-01-23 09:49:37.947 250273 DEBUG nova.virt.libvirt.driver [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:49:37 np0005593232 nova_compute[250269]: 2026-01-23 09:49:37.948 250273 DEBUG nova.virt.libvirt.driver [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Ensure instance console log exists: /var/lib/nova/instances/1f303dea-3b7e-419e-b31c-b04209c0cd89/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:49:37 np0005593232 systemd[1]: libpod-d311115454e70bfcaeb0ecd987601b23bc304caf235a52398d22f8dc6af20074.scope: Deactivated successfully.
Jan 23 04:49:37 np0005593232 podman[294171]: 2026-01-23 09:49:37.948991469 +0000 UTC m=+0.134053974 container died d311115454e70bfcaeb0ecd987601b23bc304caf235a52398d22f8dc6af20074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_cori, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:49:37 np0005593232 nova_compute[250269]: 2026-01-23 09:49:37.948 250273 DEBUG oslo_concurrency.lockutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:49:37 np0005593232 nova_compute[250269]: 2026-01-23 09:49:37.949 250273 DEBUG oslo_concurrency.lockutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:49:37 np0005593232 nova_compute[250269]: 2026-01-23 09:49:37.950 250273 DEBUG oslo_concurrency.lockutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:49:37 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b8454dc9fa6a43b5891550514874b277178a94e4693d1112332b3bc84253de87-merged.mount: Deactivated successfully.
Jan 23 04:49:37 np0005593232 podman[294171]: 2026-01-23 09:49:37.987434318 +0000 UTC m=+0.172496823 container remove d311115454e70bfcaeb0ecd987601b23bc304caf235a52398d22f8dc6af20074 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_cori, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 23 04:49:38 np0005593232 systemd[1]: libpod-conmon-d311115454e70bfcaeb0ecd987601b23bc304caf235a52398d22f8dc6af20074.scope: Deactivated successfully.
Jan 23 04:49:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:49:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:49:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:49:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:49:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:49:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:49:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:49:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:49:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:49:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:49:38 np0005593232 podman[294247]: 2026-01-23 09:49:38.137063897 +0000 UTC m=+0.039657495 container create ba436d2e913540d867eea2539c3015b83a5da94a4720c5888c7ee2f45be77073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ishizaka, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:49:38 np0005593232 systemd[1]: Started libpod-conmon-ba436d2e913540d867eea2539c3015b83a5da94a4720c5888c7ee2f45be77073.scope.
Jan 23 04:49:38 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:49:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96debd518b3e1066a611b2310685a0ad82e120b6a5efb2e98d44927f36be0b6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:49:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96debd518b3e1066a611b2310685a0ad82e120b6a5efb2e98d44927f36be0b6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:49:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96debd518b3e1066a611b2310685a0ad82e120b6a5efb2e98d44927f36be0b6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:49:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96debd518b3e1066a611b2310685a0ad82e120b6a5efb2e98d44927f36be0b6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:49:38 np0005593232 podman[294247]: 2026-01-23 09:49:38.211013402 +0000 UTC m=+0.113606990 container init ba436d2e913540d867eea2539c3015b83a5da94a4720c5888c7ee2f45be77073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ishizaka, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 04:49:38 np0005593232 podman[294247]: 2026-01-23 09:49:38.119862745 +0000 UTC m=+0.022456353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:49:38 np0005593232 podman[294247]: 2026-01-23 09:49:38.21758644 +0000 UTC m=+0.120180028 container start ba436d2e913540d867eea2539c3015b83a5da94a4720c5888c7ee2f45be77073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:49:38 np0005593232 podman[294247]: 2026-01-23 09:49:38.221920134 +0000 UTC m=+0.124513752 container attach ba436d2e913540d867eea2539c3015b83a5da94a4720c5888c7ee2f45be77073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ishizaka, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 23 04:49:39 np0005593232 ecstatic_ishizaka[294263]: {
Jan 23 04:49:39 np0005593232 ecstatic_ishizaka[294263]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:49:39 np0005593232 ecstatic_ishizaka[294263]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:49:39 np0005593232 ecstatic_ishizaka[294263]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:49:39 np0005593232 ecstatic_ishizaka[294263]:        "osd_id": 0,
Jan 23 04:49:39 np0005593232 ecstatic_ishizaka[294263]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:49:39 np0005593232 ecstatic_ishizaka[294263]:        "type": "bluestore"
Jan 23 04:49:39 np0005593232 ecstatic_ishizaka[294263]:    }
Jan 23 04:49:39 np0005593232 ecstatic_ishizaka[294263]: }
Jan 23 04:49:39 np0005593232 systemd[1]: libpod-ba436d2e913540d867eea2539c3015b83a5da94a4720c5888c7ee2f45be77073.scope: Deactivated successfully.
Jan 23 04:49:39 np0005593232 podman[294247]: 2026-01-23 09:49:39.093048424 +0000 UTC m=+0.995642042 container died ba436d2e913540d867eea2539c3015b83a5da94a4720c5888c7ee2f45be77073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ishizaka, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:49:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:39.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:39 np0005593232 systemd[1]: var-lib-containers-storage-overlay-96debd518b3e1066a611b2310685a0ad82e120b6a5efb2e98d44927f36be0b6a-merged.mount: Deactivated successfully.
Jan 23 04:49:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:39.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:39 np0005593232 podman[294247]: 2026-01-23 09:49:39.220849309 +0000 UTC m=+1.123442897 container remove ba436d2e913540d867eea2539c3015b83a5da94a4720c5888c7ee2f45be77073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_ishizaka, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:49:39 np0005593232 systemd[1]: libpod-conmon-ba436d2e913540d867eea2539c3015b83a5da94a4720c5888c7ee2f45be77073.scope: Deactivated successfully.
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:49:39.363345) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161779363523, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 2269, "num_deletes": 263, "total_data_size": 3800908, "memory_usage": 3849728, "flush_reason": "Manual Compaction"}
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161779419623, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 3733236, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36356, "largest_seqno": 38624, "table_properties": {"data_size": 3722897, "index_size": 6641, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21975, "raw_average_key_size": 21, "raw_value_size": 3702066, "raw_average_value_size": 3549, "num_data_blocks": 286, "num_entries": 1043, "num_filter_entries": 1043, "num_deletions": 263, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161593, "oldest_key_time": 1769161593, "file_creation_time": 1769161779, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 56272 microseconds, and 11813 cpu microseconds.
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:49:39.419679) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 3733236 bytes OK
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:49:39.419710) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:49:39.422029) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:49:39.422048) EVENT_LOG_v1 {"time_micros": 1769161779422044, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:49:39.422066) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 3791350, prev total WAL file size 3811949, number of live WAL files 2.
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:49:39.423485) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(3645KB)], [80(9771KB)]
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161779423593, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 13739021, "oldest_snapshot_seqno": -1}
Jan 23 04:49:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1740: 321 pgs: 321 active+clean; 160 MiB data, 603 MiB used, 20 GiB / 21 GiB avail; 242 KiB/s rd, 2.7 MiB/s wr, 47 op/s
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6571 keys, 11839637 bytes, temperature: kUnknown
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161779630760, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 11839637, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11793055, "index_size": 29062, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16453, "raw_key_size": 168211, "raw_average_key_size": 25, "raw_value_size": 11672551, "raw_average_value_size": 1776, "num_data_blocks": 1165, "num_entries": 6571, "num_filter_entries": 6571, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769161779, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:49:39.631382) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 11839637 bytes
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:49:39.635937) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 66.3 rd, 57.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.5 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 7106, records dropped: 535 output_compression: NoCompression
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:49:39.635957) EVENT_LOG_v1 {"time_micros": 1769161779635948, "job": 46, "event": "compaction_finished", "compaction_time_micros": 207289, "compaction_time_cpu_micros": 39995, "output_level": 6, "num_output_files": 1, "total_output_size": 11839637, "num_input_records": 7106, "num_output_records": 6571, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:49:39.423285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:49:39.636551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:49:39.636560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:49:39.636562) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:49:39.636565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:49:39.636567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161779637329, "job": 0, "event": "table_file_deletion", "file_number": 82}
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161779639233, "job": 0, "event": "table_file_deletion", "file_number": 80}
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:49:39 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ddb9e427-9604-4ed8-8911-4252ec27db51 does not exist
Jan 23 04:49:39 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7627fd26-a23f-45c5-a142-cb5402d1f7ed does not exist
Jan 23 04:49:39 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev da85de60-4a8b-47ce-8615-d98569ed18cc does not exist
Jan 23 04:49:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:49:39 np0005593232 nova_compute[250269]: 2026-01-23 09:49:39.889 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:49:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:49:40 np0005593232 nova_compute[250269]: 2026-01-23 09:49:40.334 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:49:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:41.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:49:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:41.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1741: 321 pgs: 321 active+clean; 167 MiB data, 608 MiB used, 20 GiB / 21 GiB avail; 9.7 KiB/s rd, 1.8 MiB/s wr, 17 op/s
Jan 23 04:49:41 np0005593232 nova_compute[250269]: 2026-01-23 09:49:41.826 250273 DEBUG nova.network.neutron [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Successfully created port: a51c9fca-a888-4bb3-9111-6da4f08c6f6a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 04:49:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:42.603 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:49:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:42.604 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:49:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:42.604 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:49:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:43.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:43.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1742: 321 pgs: 321 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 04:49:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 04:49:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1169005992' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 04:49:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 04:49:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1169005992' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 04:49:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:49:44 np0005593232 nova_compute[250269]: 2026-01-23 09:49:44.892 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:45.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:45.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:45 np0005593232 nova_compute[250269]: 2026-01-23 09:49:45.336 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1743: 321 pgs: 321 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 04:49:46 np0005593232 nova_compute[250269]: 2026-01-23 09:49:46.503 250273 DEBUG nova.network.neutron [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Successfully updated port: a51c9fca-a888-4bb3-9111-6da4f08c6f6a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 04:49:46 np0005593232 nova_compute[250269]: 2026-01-23 09:49:46.559 250273 DEBUG oslo_concurrency.lockutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Acquiring lock "refresh_cache-1f303dea-3b7e-419e-b31c-b04209c0cd89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:49:46 np0005593232 nova_compute[250269]: 2026-01-23 09:49:46.560 250273 DEBUG oslo_concurrency.lockutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Acquired lock "refresh_cache-1f303dea-3b7e-419e-b31c-b04209c0cd89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:49:46 np0005593232 nova_compute[250269]: 2026-01-23 09:49:46.560 250273 DEBUG nova.network.neutron [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:49:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:49:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:49:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:49:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:49:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031605146682486505 of space, bias 1.0, pg target 0.9481544004745952 quantized to 32 (current 32)
Jan 23 04:49:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:49:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 04:49:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:49:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:49:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:49:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 04:49:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:49:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:49:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:49:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:49:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:49:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:49:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:49:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:49:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:49:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:49:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:49:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:49:46 np0005593232 nova_compute[250269]: 2026-01-23 09:49:46.859 250273 DEBUG nova.compute.manager [req-c6e0ec4c-ac21-464c-bb3e-0924e7e8c6e0 req-eae3a746-8bea-48fb-8235-16d17b997fad 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Received event network-changed-a51c9fca-a888-4bb3-9111-6da4f08c6f6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:49:46 np0005593232 nova_compute[250269]: 2026-01-23 09:49:46.859 250273 DEBUG nova.compute.manager [req-c6e0ec4c-ac21-464c-bb3e-0924e7e8c6e0 req-eae3a746-8bea-48fb-8235-16d17b997fad 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Refreshing instance network info cache due to event network-changed-a51c9fca-a888-4bb3-9111-6da4f08c6f6a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:49:46 np0005593232 nova_compute[250269]: 2026-01-23 09:49:46.860 250273 DEBUG oslo_concurrency.lockutils [req-c6e0ec4c-ac21-464c-bb3e-0924e7e8c6e0 req-eae3a746-8bea-48fb-8235-16d17b997fad 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-1f303dea-3b7e-419e-b31c-b04209c0cd89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:49:47 np0005593232 nova_compute[250269]: 2026-01-23 09:49:47.155 250273 DEBUG nova.network.neutron [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:49:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:47.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:47.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1744: 321 pgs: 321 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.149 250273 DEBUG nova.network.neutron [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Updating instance_info_cache with network_info: [{"id": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "address": "fa:16:3e:e0:9b:bd", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa51c9fca-a8", "ovs_interfaceid": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:49:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:49.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.186 250273 DEBUG oslo_concurrency.lockutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Releasing lock "refresh_cache-1f303dea-3b7e-419e-b31c-b04209c0cd89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.187 250273 DEBUG nova.compute.manager [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Instance network_info: |[{"id": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "address": "fa:16:3e:e0:9b:bd", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa51c9fca-a8", "ovs_interfaceid": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.188 250273 DEBUG oslo_concurrency.lockutils [req-c6e0ec4c-ac21-464c-bb3e-0924e7e8c6e0 req-eae3a746-8bea-48fb-8235-16d17b997fad 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-1f303dea-3b7e-419e-b31c-b04209c0cd89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.192 250273 DEBUG nova.network.neutron [req-c6e0ec4c-ac21-464c-bb3e-0924e7e8c6e0 req-eae3a746-8bea-48fb-8235-16d17b997fad 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Refreshing network info cache for port a51c9fca-a888-4bb3-9111-6da4f08c6f6a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.197 250273 DEBUG nova.virt.libvirt.driver [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Start _get_guest_xml network_info=[{"id": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "address": "fa:16:3e:e0:9b:bd", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa51c9fca-a8", "ovs_interfaceid": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.204 250273 WARNING nova.virt.libvirt.driver [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.217 250273 DEBUG nova.virt.libvirt.host [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.217 250273 DEBUG nova.virt.libvirt.host [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.223 250273 DEBUG nova.virt.libvirt.host [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.223 250273 DEBUG nova.virt.libvirt.host [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.225 250273 DEBUG nova.virt.libvirt.driver [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.225 250273 DEBUG nova.virt.hardware [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.225 250273 DEBUG nova.virt.hardware [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.226 250273 DEBUG nova.virt.hardware [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.226 250273 DEBUG nova.virt.hardware [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.226 250273 DEBUG nova.virt.hardware [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:49:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:49:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:49.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.227 250273 DEBUG nova.virt.hardware [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.227 250273 DEBUG nova.virt.hardware [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.227 250273 DEBUG nova.virt.hardware [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.227 250273 DEBUG nova.virt.hardware [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.228 250273 DEBUG nova.virt.hardware [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.228 250273 DEBUG nova.virt.hardware [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.231 250273 DEBUG oslo_concurrency.processutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:49:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1745: 321 pgs: 321 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 04:49:49 np0005593232 podman[294405]: 2026-01-23 09:49:49.483368322 +0000 UTC m=+0.134616851 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 04:49:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:49:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3472119871' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:49:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.699 250273 DEBUG oslo_concurrency.processutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.727 250273 DEBUG nova.storage.rbd_utils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] rbd image 1f303dea-3b7e-419e-b31c-b04209c0cd89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.731 250273 DEBUG oslo_concurrency.processutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:49:49 np0005593232 nova_compute[250269]: 2026-01-23 09:49:49.895 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:49:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1863711080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.216 250273 DEBUG oslo_concurrency.processutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.217 250273 DEBUG nova.virt.libvirt.vif [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:49:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1362504631',display_name='tempest-SecurityGroupsTestJSON-server-1362504631',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1362504631',id=67,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b8b9b5c378f24327912b08252b3c9636',ramdisk_id='',reservation_id='r-g2qanm6o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-432973814',owner_user_name='tempest-SecurityGroupsTestJSON-432973814-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:49:36Z,user_data=None,user_id='4cb83b8ddd0644f898d4be1f7de0b930',uuid=1f303dea-3b7e-419e-b31c-b04209c0cd89,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "address": "fa:16:3e:e0:9b:bd", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa51c9fca-a8", "ovs_interfaceid": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.218 250273 DEBUG nova.network.os_vif_util [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Converting VIF {"id": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "address": "fa:16:3e:e0:9b:bd", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa51c9fca-a8", "ovs_interfaceid": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.219 250273 DEBUG nova.network.os_vif_util [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:9b:bd,bridge_name='br-int',has_traffic_filtering=True,id=a51c9fca-a888-4bb3-9111-6da4f08c6f6a,network=Network(79c61601-48fe-4c3b-aac8-5ed602fc7629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa51c9fca-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.220 250273 DEBUG nova.objects.instance [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1f303dea-3b7e-419e-b31c-b04209c0cd89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.271 250273 DEBUG nova.virt.libvirt.driver [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:49:50 np0005593232 nova_compute[250269]:  <uuid>1f303dea-3b7e-419e-b31c-b04209c0cd89</uuid>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:  <name>instance-00000043</name>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <nova:name>tempest-SecurityGroupsTestJSON-server-1362504631</nova:name>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:49:49</nova:creationTime>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:49:50 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:        <nova:user uuid="4cb83b8ddd0644f898d4be1f7de0b930">tempest-SecurityGroupsTestJSON-432973814-project-member</nova:user>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:        <nova:project uuid="b8b9b5c378f24327912b08252b3c9636">tempest-SecurityGroupsTestJSON-432973814</nova:project>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:        <nova:port uuid="a51c9fca-a888-4bb3-9111-6da4f08c6f6a">
Jan 23 04:49:50 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <entry name="serial">1f303dea-3b7e-419e-b31c-b04209c0cd89</entry>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <entry name="uuid">1f303dea-3b7e-419e-b31c-b04209c0cd89</entry>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/1f303dea-3b7e-419e-b31c-b04209c0cd89_disk">
Jan 23 04:49:50 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:49:50 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/1f303dea-3b7e-419e-b31c-b04209c0cd89_disk.config">
Jan 23 04:49:50 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:49:50 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:e0:9b:bd"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <target dev="tapa51c9fca-a8"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/1f303dea-3b7e-419e-b31c-b04209c0cd89/console.log" append="off"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:49:50 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:49:50 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:49:50 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:49:50 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.273 250273 DEBUG nova.compute.manager [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Preparing to wait for external event network-vif-plugged-a51c9fca-a888-4bb3-9111-6da4f08c6f6a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.273 250273 DEBUG oslo_concurrency.lockutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Acquiring lock "1f303dea-3b7e-419e-b31c-b04209c0cd89-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.273 250273 DEBUG oslo_concurrency.lockutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "1f303dea-3b7e-419e-b31c-b04209c0cd89-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.274 250273 DEBUG oslo_concurrency.lockutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "1f303dea-3b7e-419e-b31c-b04209c0cd89-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.274 250273 DEBUG nova.virt.libvirt.vif [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:49:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1362504631',display_name='tempest-SecurityGroupsTestJSON-server-1362504631',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1362504631',id=67,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b8b9b5c378f24327912b08252b3c9636',ramdisk_id='',reservation_id='r-g2qanm6o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-432973814',owner_user_name='tempest-Securi
tyGroupsTestJSON-432973814-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:49:36Z,user_data=None,user_id='4cb83b8ddd0644f898d4be1f7de0b930',uuid=1f303dea-3b7e-419e-b31c-b04209c0cd89,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "address": "fa:16:3e:e0:9b:bd", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa51c9fca-a8", "ovs_interfaceid": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.275 250273 DEBUG nova.network.os_vif_util [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Converting VIF {"id": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "address": "fa:16:3e:e0:9b:bd", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa51c9fca-a8", "ovs_interfaceid": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.276 250273 DEBUG nova.network.os_vif_util [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:9b:bd,bridge_name='br-int',has_traffic_filtering=True,id=a51c9fca-a888-4bb3-9111-6da4f08c6f6a,network=Network(79c61601-48fe-4c3b-aac8-5ed602fc7629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa51c9fca-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.276 250273 DEBUG os_vif [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:9b:bd,bridge_name='br-int',has_traffic_filtering=True,id=a51c9fca-a888-4bb3-9111-6da4f08c6f6a,network=Network(79c61601-48fe-4c3b-aac8-5ed602fc7629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa51c9fca-a8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.277 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.277 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.277 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.282 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.282 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa51c9fca-a8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.283 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa51c9fca-a8, col_values=(('external_ids', {'iface-id': 'a51c9fca-a888-4bb3-9111-6da4f08c6f6a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e0:9b:bd', 'vm-uuid': '1f303dea-3b7e-419e-b31c-b04209c0cd89'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.284 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:50 np0005593232 NetworkManager[49057]: <info>  [1769161790.2857] manager: (tapa51c9fca-a8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.286 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.292 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.294 250273 INFO os_vif [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:9b:bd,bridge_name='br-int',has_traffic_filtering=True,id=a51c9fca-a888-4bb3-9111-6da4f08c6f6a,network=Network(79c61601-48fe-4c3b-aac8-5ed602fc7629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa51c9fca-a8')#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.385 250273 DEBUG nova.virt.libvirt.driver [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.386 250273 DEBUG nova.virt.libvirt.driver [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.386 250273 DEBUG nova.virt.libvirt.driver [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] No VIF found with MAC fa:16:3e:e0:9b:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.387 250273 INFO nova.virt.libvirt.driver [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Using config drive#033[00m
Jan 23 04:49:50 np0005593232 nova_compute[250269]: 2026-01-23 09:49:50.421 250273 DEBUG nova.storage.rbd_utils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] rbd image 1f303dea-3b7e-419e-b31c-b04209c0cd89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:49:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:51.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:51.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:51 np0005593232 nova_compute[250269]: 2026-01-23 09:49:51.401 250273 INFO nova.virt.libvirt.driver [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Creating config drive at /var/lib/nova/instances/1f303dea-3b7e-419e-b31c-b04209c0cd89/disk.config#033[00m
Jan 23 04:49:51 np0005593232 nova_compute[250269]: 2026-01-23 09:49:51.409 250273 DEBUG oslo_concurrency.processutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1f303dea-3b7e-419e-b31c-b04209c0cd89/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8ujwh2lh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:49:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1746: 321 pgs: 321 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 436 KiB/s wr, 21 op/s
Jan 23 04:49:51 np0005593232 nova_compute[250269]: 2026-01-23 09:49:51.541 250273 DEBUG oslo_concurrency.processutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1f303dea-3b7e-419e-b31c-b04209c0cd89/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8ujwh2lh" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:49:51 np0005593232 nova_compute[250269]: 2026-01-23 09:49:51.571 250273 DEBUG nova.storage.rbd_utils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] rbd image 1f303dea-3b7e-419e-b31c-b04209c0cd89_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:49:51 np0005593232 nova_compute[250269]: 2026-01-23 09:49:51.575 250273 DEBUG oslo_concurrency.processutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1f303dea-3b7e-419e-b31c-b04209c0cd89/disk.config 1f303dea-3b7e-419e-b31c-b04209c0cd89_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:49:51 np0005593232 nova_compute[250269]: 2026-01-23 09:49:51.914 250273 DEBUG oslo_concurrency.processutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1f303dea-3b7e-419e-b31c-b04209c0cd89/disk.config 1f303dea-3b7e-419e-b31c-b04209c0cd89_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:49:51 np0005593232 nova_compute[250269]: 2026-01-23 09:49:51.915 250273 INFO nova.virt.libvirt.driver [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Deleting local config drive /var/lib/nova/instances/1f303dea-3b7e-419e-b31c-b04209c0cd89/disk.config because it was imported into RBD.#033[00m
Jan 23 04:49:51 np0005593232 kernel: tapa51c9fca-a8: entered promiscuous mode
Jan 23 04:49:51 np0005593232 NetworkManager[49057]: <info>  [1769161791.9931] manager: (tapa51c9fca-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/101)
Jan 23 04:49:51 np0005593232 nova_compute[250269]: 2026-01-23 09:49:51.992 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:51 np0005593232 ovn_controller[151001]: 2026-01-23T09:49:51Z|00199|binding|INFO|Claiming lport a51c9fca-a888-4bb3-9111-6da4f08c6f6a for this chassis.
Jan 23 04:49:51 np0005593232 ovn_controller[151001]: 2026-01-23T09:49:51Z|00200|binding|INFO|a51c9fca-a888-4bb3-9111-6da4f08c6f6a: Claiming fa:16:3e:e0:9b:bd 10.100.0.3
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.001 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.014 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:9b:bd 10.100.0.3'], port_security=['fa:16:3e:e0:9b:bd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1f303dea-3b7e-419e-b31c-b04209c0cd89', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79c61601-48fe-4c3b-aac8-5ed602fc7629', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b8b9b5c378f24327912b08252b3c9636', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ba9579fc-88ad-41cb-b94e-0302604f4fcc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90d3d7bd-b3df-4803-a42b-299b97e45f23, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=a51c9fca-a888-4bb3-9111-6da4f08c6f6a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.016 161902 INFO neutron.agent.ovn.metadata.agent [-] Port a51c9fca-a888-4bb3-9111-6da4f08c6f6a in datapath 79c61601-48fe-4c3b-aac8-5ed602fc7629 bound to our chassis#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.018 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79c61601-48fe-4c3b-aac8-5ed602fc7629#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.031 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ba8437a7-482d-4746-9bab-6e88b35d4584]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.032 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap79c61601-41 in ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:49:52 np0005593232 systemd-machined[215836]: New machine qemu-25-instance-00000043.
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.034 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap79c61601-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.034 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7a5c61dc-d60d-4401-9f1c-cdaf38b770b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.035 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a379258b-8bfc-4dd7-ad43-13a3520c836d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.048 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[9e5a9744-4545-40dd-b6da-ea3c9c496d5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:52 np0005593232 systemd[1]: Started Virtual Machine qemu-25-instance-00000043.
Jan 23 04:49:52 np0005593232 ovn_controller[151001]: 2026-01-23T09:49:52Z|00201|binding|INFO|Setting lport a51c9fca-a888-4bb3-9111-6da4f08c6f6a ovn-installed in OVS
Jan 23 04:49:52 np0005593232 ovn_controller[151001]: 2026-01-23T09:49:52Z|00202|binding|INFO|Setting lport a51c9fca-a888-4bb3-9111-6da4f08c6f6a up in Southbound
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.108 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2498b2db-44c7-4008-8136-a580d33001b6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.111 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:52 np0005593232 systemd-udevd[294567]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:49:52 np0005593232 NetworkManager[49057]: <info>  [1769161792.1312] device (tapa51c9fca-a8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:49:52 np0005593232 NetworkManager[49057]: <info>  [1769161792.1321] device (tapa51c9fca-a8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.139 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[07961a97-ee3b-40dd-8c93-260910359059]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:52 np0005593232 systemd-udevd[294571]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:49:52 np0005593232 NetworkManager[49057]: <info>  [1769161792.1457] manager: (tap79c61601-40): new Veth device (/org/freedesktop/NetworkManager/Devices/102)
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.145 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba0c1f9-7539-49cb-9489-15f34a4cf8df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.180 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[5134194a-d3c9-4177-984a-98cde5cb86df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.184 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[ad54e910-62be-49fc-b303-36db46eee2a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:52 np0005593232 NetworkManager[49057]: <info>  [1769161792.2103] device (tap79c61601-40): carrier: link connected
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.216 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[c5b1867a-bd48-4fe3-a7ca-80d9249a7b76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.232 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8020e6a2-bfc6-4601-9dc9-82a04cdf965c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79c61601-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:1e:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570535, 'reachable_time': 30068, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294597, 'error': None, 'target': 'ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.248 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4cc483ce-9609-4a97-b8ee-16ff93b8c937]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feab:1eaa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570535, 'tstamp': 570535}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294598, 'error': None, 'target': 'ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.263 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ac05d0a1-3426-4381-ab77-5fa157f9d3d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79c61601-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:1e:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570535, 'reachable_time': 30068, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294599, 'error': None, 'target': 'ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.296 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[95e928f7-3277-4653-9898-4231f25d97ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.360 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5d87d9da-ec29-4d29-921b-fef78009291b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.362 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79c61601-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.362 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.362 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79c61601-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.364 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:52 np0005593232 NetworkManager[49057]: <info>  [1769161792.3653] manager: (tap79c61601-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Jan 23 04:49:52 np0005593232 kernel: tap79c61601-40: entered promiscuous mode
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.366 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.368 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79c61601-40, col_values=(('external_ids', {'iface-id': '3b1c09cf-09e8-4a4c-8010-b5b376d985cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.369 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:52 np0005593232 ovn_controller[151001]: 2026-01-23T09:49:52Z|00203|binding|INFO|Releasing lport 3b1c09cf-09e8-4a4c-8010-b5b376d985cb from this chassis (sb_readonly=0)
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.370 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.371 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/79c61601-48fe-4c3b-aac8-5ed602fc7629.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/79c61601-48fe-4c3b-aac8-5ed602fc7629.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.372 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1b78d3cd-e364-4a95-a1dc-bc6422dce051]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.373 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-79c61601-48fe-4c3b-aac8-5ed602fc7629
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/79c61601-48fe-4c3b-aac8-5ed602fc7629.pid.haproxy
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.373 250273 DEBUG nova.network.neutron [req-c6e0ec4c-ac21-464c-bb3e-0924e7e8c6e0 req-eae3a746-8bea-48fb-8235-16d17b997fad 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Updated VIF entry in instance network info cache for port a51c9fca-a888-4bb3-9111-6da4f08c6f6a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 79c61601-48fe-4c3b-aac8-5ed602fc7629
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.373 250273 DEBUG nova.network.neutron [req-c6e0ec4c-ac21-464c-bb3e-0924e7e8c6e0 req-eae3a746-8bea-48fb-8235-16d17b997fad 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Updating instance_info_cache with network_info: [{"id": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "address": "fa:16:3e:e0:9b:bd", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa51c9fca-a8", "ovs_interfaceid": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.374 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629', 'env', 'PROCESS_TAG=haproxy-79c61601-48fe-4c3b-aac8-5ed602fc7629', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/79c61601-48fe-4c3b-aac8-5ed602fc7629.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.383 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.424 250273 DEBUG oslo_concurrency.lockutils [req-c6e0ec4c-ac21-464c-bb3e-0924e7e8c6e0 req-eae3a746-8bea-48fb-8235-16d17b997fad 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-1f303dea-3b7e-419e-b31c-b04209c0cd89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.445 250273 DEBUG nova.compute.manager [req-4c2bdbd6-7a68-4afc-9f63-0b15ec8693cb req-c73dd6d9-b90e-419f-a9cd-c475e6a7da62 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Received event network-vif-plugged-a51c9fca-a888-4bb3-9111-6da4f08c6f6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.446 250273 DEBUG oslo_concurrency.lockutils [req-4c2bdbd6-7a68-4afc-9f63-0b15ec8693cb req-c73dd6d9-b90e-419f-a9cd-c475e6a7da62 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1f303dea-3b7e-419e-b31c-b04209c0cd89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.447 250273 DEBUG oslo_concurrency.lockutils [req-4c2bdbd6-7a68-4afc-9f63-0b15ec8693cb req-c73dd6d9-b90e-419f-a9cd-c475e6a7da62 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1f303dea-3b7e-419e-b31c-b04209c0cd89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.447 250273 DEBUG oslo_concurrency.lockutils [req-4c2bdbd6-7a68-4afc-9f63-0b15ec8693cb req-c73dd6d9-b90e-419f-a9cd-c475e6a7da62 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1f303dea-3b7e-419e-b31c-b04209c0cd89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.447 250273 DEBUG nova.compute.manager [req-4c2bdbd6-7a68-4afc-9f63-0b15ec8693cb req-c73dd6d9-b90e-419f-a9cd-c475e6a7da62 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Processing event network-vif-plugged-a51c9fca-a888-4bb3-9111-6da4f08c6f6a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 04:49:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:52.610 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.610 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.635 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161792.6350412, 1f303dea-3b7e-419e-b31c-b04209c0cd89 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.636 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] VM Started (Lifecycle Event)#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.638 250273 DEBUG nova.compute.manager [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.641 250273 DEBUG nova.virt.libvirt.driver [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.649 250273 INFO nova.virt.libvirt.driver [-] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Instance spawned successfully.#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.649 250273 DEBUG nova.virt.libvirt.driver [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.663 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.668 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.671 250273 DEBUG nova.virt.libvirt.driver [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.672 250273 DEBUG nova.virt.libvirt.driver [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.672 250273 DEBUG nova.virt.libvirt.driver [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.673 250273 DEBUG nova.virt.libvirt.driver [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.673 250273 DEBUG nova.virt.libvirt.driver [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.674 250273 DEBUG nova.virt.libvirt.driver [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.708 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.709 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161792.6382473, 1f303dea-3b7e-419e-b31c-b04209c0cd89 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.709 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.739 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.744 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161792.641173, 1f303dea-3b7e-419e-b31c-b04209c0cd89 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.745 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.772 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.777 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.783 250273 INFO nova.compute.manager [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Took 15.90 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.784 250273 DEBUG nova.compute.manager [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:49:52 np0005593232 nova_compute[250269]: 2026-01-23 09:49:52.816 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:49:52 np0005593232 podman[294673]: 2026-01-23 09:49:52.743049444 +0000 UTC m=+0.025918052 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:49:53 np0005593232 nova_compute[250269]: 2026-01-23 09:49:53.143 250273 INFO nova.compute.manager [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Took 18.83 seconds to build instance.#033[00m
Jan 23 04:49:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:53.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:53 np0005593232 nova_compute[250269]: 2026-01-23 09:49:53.210 250273 DEBUG oslo_concurrency.lockutils [None req-20e667c7-a418-44bc-8971-843102f0453e 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "1f303dea-3b7e-419e-b31c-b04209c0cd89" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:49:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:53.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1747: 321 pgs: 321 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 13 KiB/s wr, 15 op/s
Jan 23 04:49:54 np0005593232 podman[294673]: 2026-01-23 09:49:54.484293647 +0000 UTC m=+1.767162235 container create e87526dc56e79cf8811f67344d793cba02a906310ab79b7d2d8e68a6fd5040ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:49:54 np0005593232 systemd[1]: Started libpod-conmon-e87526dc56e79cf8811f67344d793cba02a906310ab79b7d2d8e68a6fd5040ff.scope.
Jan 23 04:49:54 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:49:54 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/091f55f55b0514e0fbd150edd040621f43d1d5ed13961ec7a4f01035448fbb61/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:49:54 np0005593232 podman[294673]: 2026-01-23 09:49:54.563528722 +0000 UTC m=+1.846397300 container init e87526dc56e79cf8811f67344d793cba02a906310ab79b7d2d8e68a6fd5040ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 23 04:49:54 np0005593232 podman[294673]: 2026-01-23 09:49:54.570912754 +0000 UTC m=+1.853781342 container start e87526dc56e79cf8811f67344d793cba02a906310ab79b7d2d8e68a6fd5040ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 23 04:49:54 np0005593232 neutron-haproxy-ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629[294689]: [NOTICE]   (294693) : New worker (294695) forked
Jan 23 04:49:54 np0005593232 neutron-haproxy-ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629[294689]: [NOTICE]   (294693) : Loading success.
Jan 23 04:49:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:49:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:54.696 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:49:54 np0005593232 nova_compute[250269]: 2026-01-23 09:49:54.896 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:54 np0005593232 nova_compute[250269]: 2026-01-23 09:49:54.978 250273 DEBUG nova.compute.manager [req-15307d05-8a6b-4aae-b228-1a30024cccec req-12da6679-5fa3-4dc2-8594-3b486136e343 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Received event network-vif-plugged-a51c9fca-a888-4bb3-9111-6da4f08c6f6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:49:54 np0005593232 nova_compute[250269]: 2026-01-23 09:49:54.979 250273 DEBUG oslo_concurrency.lockutils [req-15307d05-8a6b-4aae-b228-1a30024cccec req-12da6679-5fa3-4dc2-8594-3b486136e343 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1f303dea-3b7e-419e-b31c-b04209c0cd89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:49:54 np0005593232 nova_compute[250269]: 2026-01-23 09:49:54.980 250273 DEBUG oslo_concurrency.lockutils [req-15307d05-8a6b-4aae-b228-1a30024cccec req-12da6679-5fa3-4dc2-8594-3b486136e343 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1f303dea-3b7e-419e-b31c-b04209c0cd89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:49:54 np0005593232 nova_compute[250269]: 2026-01-23 09:49:54.980 250273 DEBUG oslo_concurrency.lockutils [req-15307d05-8a6b-4aae-b228-1a30024cccec req-12da6679-5fa3-4dc2-8594-3b486136e343 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1f303dea-3b7e-419e-b31c-b04209c0cd89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:49:54 np0005593232 nova_compute[250269]: 2026-01-23 09:49:54.981 250273 DEBUG nova.compute.manager [req-15307d05-8a6b-4aae-b228-1a30024cccec req-12da6679-5fa3-4dc2-8594-3b486136e343 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] No waiting events found dispatching network-vif-plugged-a51c9fca-a888-4bb3-9111-6da4f08c6f6a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:49:54 np0005593232 nova_compute[250269]: 2026-01-23 09:49:54.981 250273 WARNING nova.compute.manager [req-15307d05-8a6b-4aae-b228-1a30024cccec req-12da6679-5fa3-4dc2-8594-3b486136e343 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Received unexpected event network-vif-plugged-a51c9fca-a888-4bb3-9111-6da4f08c6f6a for instance with vm_state active and task_state None.#033[00m
Jan 23 04:49:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:49:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:55.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:49:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:55.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:55 np0005593232 nova_compute[250269]: 2026-01-23 09:49:55.284 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:49:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:49:55.698 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:49:56 np0005593232 podman[294705]: 2026-01-23 09:49:56.394399347 +0000 UTC m=+0.057599948 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:49:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1748: 321 pgs: 321 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 422 KiB/s rd, 12 KiB/s wr, 22 op/s
Jan 23 04:49:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:57.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:57.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1749: 321 pgs: 321 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 13 KiB/s wr, 68 op/s
Jan 23 04:49:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:49:59.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:49:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:49:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:49:59.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:49:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:49:59 np0005593232 nova_compute[250269]: 2026-01-23 09:49:59.741 250273 DEBUG nova.compute.manager [req-ac77a27f-95c3-444f-aac4-df8319793846 req-1e363337-58d7-4af6-9a63-d5a24ca23a57 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Received event network-changed-a51c9fca-a888-4bb3-9111-6da4f08c6f6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:49:59 np0005593232 nova_compute[250269]: 2026-01-23 09:49:59.741 250273 DEBUG nova.compute.manager [req-ac77a27f-95c3-444f-aac4-df8319793846 req-1e363337-58d7-4af6-9a63-d5a24ca23a57 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Refreshing instance network info cache due to event network-changed-a51c9fca-a888-4bb3-9111-6da4f08c6f6a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:49:59 np0005593232 nova_compute[250269]: 2026-01-23 09:49:59.741 250273 DEBUG oslo_concurrency.lockutils [req-ac77a27f-95c3-444f-aac4-df8319793846 req-1e363337-58d7-4af6-9a63-d5a24ca23a57 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-1f303dea-3b7e-419e-b31c-b04209c0cd89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:49:59 np0005593232 nova_compute[250269]: 2026-01-23 09:49:59.742 250273 DEBUG oslo_concurrency.lockutils [req-ac77a27f-95c3-444f-aac4-df8319793846 req-1e363337-58d7-4af6-9a63-d5a24ca23a57 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-1f303dea-3b7e-419e-b31c-b04209c0cd89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:49:59 np0005593232 nova_compute[250269]: 2026-01-23 09:49:59.742 250273 DEBUG nova.network.neutron [req-ac77a27f-95c3-444f-aac4-df8319793846 req-1e363337-58d7-4af6-9a63-d5a24ca23a57 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Refreshing network info cache for port a51c9fca-a888-4bb3-9111-6da4f08c6f6a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:49:59 np0005593232 nova_compute[250269]: 2026-01-23 09:49:59.898 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:00 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 23 04:50:00 np0005593232 nova_compute[250269]: 2026-01-23 09:50:00.324 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1750: 321 pgs: 321 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 13 KiB/s wr, 68 op/s
Jan 23 04:50:00 np0005593232 ceph-mon[74423]: overall HEALTH_OK
Jan 23 04:50:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:01.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:01.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:01 np0005593232 nova_compute[250269]: 2026-01-23 09:50:01.900 250273 DEBUG nova.network.neutron [req-ac77a27f-95c3-444f-aac4-df8319793846 req-1e363337-58d7-4af6-9a63-d5a24ca23a57 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Updated VIF entry in instance network info cache for port a51c9fca-a888-4bb3-9111-6da4f08c6f6a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:50:01 np0005593232 nova_compute[250269]: 2026-01-23 09:50:01.901 250273 DEBUG nova.network.neutron [req-ac77a27f-95c3-444f-aac4-df8319793846 req-1e363337-58d7-4af6-9a63-d5a24ca23a57 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Updating instance_info_cache with network_info: [{"id": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "address": "fa:16:3e:e0:9b:bd", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa51c9fca-a8", "ovs_interfaceid": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:50:01 np0005593232 nova_compute[250269]: 2026-01-23 09:50:01.927 250273 DEBUG oslo_concurrency.lockutils [req-ac77a27f-95c3-444f-aac4-df8319793846 req-1e363337-58d7-4af6-9a63-d5a24ca23a57 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-1f303dea-3b7e-419e-b31c-b04209c0cd89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:50:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1751: 321 pgs: 321 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 14 KiB/s wr, 68 op/s
Jan 23 04:50:02 np0005593232 nova_compute[250269]: 2026-01-23 09:50:02.499 250273 DEBUG nova.compute.manager [req-99af0993-1721-4b40-b66d-8960a64d9869 req-5f38b55e-2501-467a-a26a-7faaed2eb4b1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Received event network-changed-a51c9fca-a888-4bb3-9111-6da4f08c6f6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:50:02 np0005593232 nova_compute[250269]: 2026-01-23 09:50:02.500 250273 DEBUG nova.compute.manager [req-99af0993-1721-4b40-b66d-8960a64d9869 req-5f38b55e-2501-467a-a26a-7faaed2eb4b1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Refreshing instance network info cache due to event network-changed-a51c9fca-a888-4bb3-9111-6da4f08c6f6a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:50:02 np0005593232 nova_compute[250269]: 2026-01-23 09:50:02.500 250273 DEBUG oslo_concurrency.lockutils [req-99af0993-1721-4b40-b66d-8960a64d9869 req-5f38b55e-2501-467a-a26a-7faaed2eb4b1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-1f303dea-3b7e-419e-b31c-b04209c0cd89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:50:02 np0005593232 nova_compute[250269]: 2026-01-23 09:50:02.500 250273 DEBUG oslo_concurrency.lockutils [req-99af0993-1721-4b40-b66d-8960a64d9869 req-5f38b55e-2501-467a-a26a-7faaed2eb4b1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-1f303dea-3b7e-419e-b31c-b04209c0cd89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:50:02 np0005593232 nova_compute[250269]: 2026-01-23 09:50:02.501 250273 DEBUG nova.network.neutron [req-99af0993-1721-4b40-b66d-8960a64d9869 req-5f38b55e-2501-467a-a26a-7faaed2eb4b1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Refreshing network info cache for port a51c9fca-a888-4bb3-9111-6da4f08c6f6a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:50:03 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0[81752]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 23 04:50:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:50:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:03.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:50:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:03.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1752: 321 pgs: 321 active+clean; 167 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 15 KiB/s wr, 68 op/s
Jan 23 04:50:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:50:04 np0005593232 nova_compute[250269]: 2026-01-23 09:50:04.900 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:05.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:05 np0005593232 nova_compute[250269]: 2026-01-23 09:50:05.189 250273 DEBUG nova.network.neutron [req-99af0993-1721-4b40-b66d-8960a64d9869 req-5f38b55e-2501-467a-a26a-7faaed2eb4b1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Updated VIF entry in instance network info cache for port a51c9fca-a888-4bb3-9111-6da4f08c6f6a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:50:05 np0005593232 nova_compute[250269]: 2026-01-23 09:50:05.190 250273 DEBUG nova.network.neutron [req-99af0993-1721-4b40-b66d-8960a64d9869 req-5f38b55e-2501-467a-a26a-7faaed2eb4b1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Updating instance_info_cache with network_info: [{"id": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "address": "fa:16:3e:e0:9b:bd", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa51c9fca-a8", "ovs_interfaceid": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:50:05 np0005593232 nova_compute[250269]: 2026-01-23 09:50:05.208 250273 DEBUG oslo_concurrency.lockutils [req-99af0993-1721-4b40-b66d-8960a64d9869 req-5f38b55e-2501-467a-a26a-7faaed2eb4b1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-1f303dea-3b7e-419e-b31c-b04209c0cd89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:50:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:05.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:05 np0005593232 nova_compute[250269]: 2026-01-23 09:50:05.326 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1753: 321 pgs: 321 active+clean; 170 MiB data, 624 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 235 KiB/s wr, 69 op/s
Jan 23 04:50:07 np0005593232 ovn_controller[151001]: 2026-01-23T09:50:07Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e0:9b:bd 10.100.0.3
Jan 23 04:50:07 np0005593232 ovn_controller[151001]: 2026-01-23T09:50:07Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e0:9b:bd 10.100.0.3
Jan 23 04:50:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:07.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:50:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:07.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:50:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:50:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:50:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:50:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:50:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:50:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:50:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1754: 321 pgs: 321 active+clean; 186 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.6 MiB/s wr, 90 op/s
Jan 23 04:50:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:50:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:09.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:50:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:09.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:50:09 np0005593232 nova_compute[250269]: 2026-01-23 09:50:09.902 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:10 np0005593232 nova_compute[250269]: 2026-01-23 09:50:10.328 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1755: 321 pgs: 321 active+clean; 186 MiB data, 658 MiB used, 20 GiB / 21 GiB avail; 198 KiB/s rd, 1.6 MiB/s wr, 40 op/s
Jan 23 04:50:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:11.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:50:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:11.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:50:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1756: 321 pgs: 321 active+clean; 197 MiB data, 666 MiB used, 20 GiB / 21 GiB avail; 219 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Jan 23 04:50:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:13.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:13.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1757: 321 pgs: 321 active+clean; 246 MiB data, 688 MiB used, 20 GiB / 21 GiB avail; 285 KiB/s rd, 3.9 MiB/s wr, 74 op/s
Jan 23 04:50:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:50:14 np0005593232 nova_compute[250269]: 2026-01-23 09:50:14.905 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:50:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:15.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:50:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:50:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:15.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:50:15 np0005593232 nova_compute[250269]: 2026-01-23 09:50:15.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:50:15 np0005593232 nova_compute[250269]: 2026-01-23 09:50:15.330 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1758: 321 pgs: 321 active+clean; 246 MiB data, 688 MiB used, 20 GiB / 21 GiB avail; 293 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Jan 23 04:50:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:17.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:17.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:18 np0005593232 nova_compute[250269]: 2026-01-23 09:50:18.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:50:18 np0005593232 nova_compute[250269]: 2026-01-23 09:50:18.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:50:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1759: 321 pgs: 321 active+clean; 246 MiB data, 688 MiB used, 20 GiB / 21 GiB avail; 292 KiB/s rd, 3.7 MiB/s wr, 86 op/s
Jan 23 04:50:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:19.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:19.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:50:19 np0005593232 nova_compute[250269]: 2026-01-23 09:50:19.907 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:20 np0005593232 nova_compute[250269]: 2026-01-23 09:50:20.376 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:20 np0005593232 podman[294835]: 2026-01-23 09:50:20.481754036 +0000 UTC m=+0.117008957 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:50:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1760: 321 pgs: 321 active+clean; 246 MiB data, 688 MiB used, 20 GiB / 21 GiB avail; 94 KiB/s rd, 2.3 MiB/s wr, 48 op/s
Jan 23 04:50:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:21.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:21.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:21 np0005593232 nova_compute[250269]: 2026-01-23 09:50:21.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:50:21 np0005593232 nova_compute[250269]: 2026-01-23 09:50:21.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:50:21 np0005593232 nova_compute[250269]: 2026-01-23 09:50:21.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:50:21 np0005593232 nova_compute[250269]: 2026-01-23 09:50:21.646 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-1f303dea-3b7e-419e-b31c-b04209c0cd89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:50:21 np0005593232 nova_compute[250269]: 2026-01-23 09:50:21.647 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-1f303dea-3b7e-419e-b31c-b04209c0cd89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:50:21 np0005593232 nova_compute[250269]: 2026-01-23 09:50:21.647 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 04:50:21 np0005593232 nova_compute[250269]: 2026-01-23 09:50:21.647 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1f303dea-3b7e-419e-b31c-b04209c0cd89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:50:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1761: 321 pgs: 321 active+clean; 251 MiB data, 692 MiB used, 20 GiB / 21 GiB avail; 102 KiB/s rd, 2.6 MiB/s wr, 60 op/s
Jan 23 04:50:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:23.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:23.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1762: 321 pgs: 321 active+clean; 293 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 97 KiB/s rd, 3.7 MiB/s wr, 73 op/s
Jan 23 04:50:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:50:24 np0005593232 nova_compute[250269]: 2026-01-23 09:50:24.714 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Updating instance_info_cache with network_info: [{"id": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "address": "fa:16:3e:e0:9b:bd", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa51c9fca-a8", "ovs_interfaceid": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:50:24 np0005593232 nova_compute[250269]: 2026-01-23 09:50:24.751 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-1f303dea-3b7e-419e-b31c-b04209c0cd89" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:50:24 np0005593232 nova_compute[250269]: 2026-01-23 09:50:24.751 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 04:50:24 np0005593232 nova_compute[250269]: 2026-01-23 09:50:24.751 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:50:24 np0005593232 nova_compute[250269]: 2026-01-23 09:50:24.752 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:50:24 np0005593232 nova_compute[250269]: 2026-01-23 09:50:24.752 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:50:24 np0005593232 nova_compute[250269]: 2026-01-23 09:50:24.909 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:50:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:25.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:50:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:25.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:25 np0005593232 nova_compute[250269]: 2026-01-23 09:50:25.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:50:25 np0005593232 nova_compute[250269]: 2026-01-23 09:50:25.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:50:25 np0005593232 nova_compute[250269]: 2026-01-23 09:50:25.378 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:25 np0005593232 nova_compute[250269]: 2026-01-23 09:50:25.985 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:50:25 np0005593232 nova_compute[250269]: 2026-01-23 09:50:25.986 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:50:25 np0005593232 nova_compute[250269]: 2026-01-23 09:50:25.986 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:50:25 np0005593232 nova_compute[250269]: 2026-01-23 09:50:25.986 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:50:25 np0005593232 nova_compute[250269]: 2026-01-23 09:50:25.986 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:50:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:50:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3464425231' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:50:26 np0005593232 nova_compute[250269]: 2026-01-23 09:50:26.439 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:50:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1763: 321 pgs: 321 active+clean; 293 MiB data, 710 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Jan 23 04:50:26 np0005593232 podman[294888]: 2026-01-23 09:50:26.527414715 +0000 UTC m=+0.050257498 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 04:50:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:27.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:27.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1764: 321 pgs: 321 active+clean; 246 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Jan 23 04:50:28 np0005593232 nova_compute[250269]: 2026-01-23 09:50:28.660 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000043 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:50:28 np0005593232 nova_compute[250269]: 2026-01-23 09:50:28.661 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000043 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:50:28 np0005593232 nova_compute[250269]: 2026-01-23 09:50:28.814 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:50:28 np0005593232 nova_compute[250269]: 2026-01-23 09:50:28.816 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4364MB free_disk=20.8555908203125GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:50:28 np0005593232 nova_compute[250269]: 2026-01-23 09:50:28.816 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:50:28 np0005593232 nova_compute[250269]: 2026-01-23 09:50:28.816 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:50:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:29.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:29.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:50:29 np0005593232 nova_compute[250269]: 2026-01-23 09:50:29.912 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:30 np0005593232 nova_compute[250269]: 2026-01-23 09:50:30.380 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1765: 321 pgs: 321 active+clean; 246 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 122 op/s
Jan 23 04:50:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:31.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:31.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:32 np0005593232 nova_compute[250269]: 2026-01-23 09:50:32.488 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 1f303dea-3b7e-419e-b31c-b04209c0cd89 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:50:32 np0005593232 nova_compute[250269]: 2026-01-23 09:50:32.488 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:50:32 np0005593232 nova_compute[250269]: 2026-01-23 09:50:32.488 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:50:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1766: 321 pgs: 321 active+clean; 246 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Jan 23 04:50:32 np0005593232 nova_compute[250269]: 2026-01-23 09:50:32.580 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:50:32 np0005593232 nova_compute[250269]: 2026-01-23 09:50:32.762 250273 DEBUG oslo_concurrency.lockutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Acquiring lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:50:32 np0005593232 nova_compute[250269]: 2026-01-23 09:50:32.763 250273 DEBUG oslo_concurrency.lockutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:50:32 np0005593232 nova_compute[250269]: 2026-01-23 09:50:32.786 250273 DEBUG nova.compute.manager [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:50:32 np0005593232 nova_compute[250269]: 2026-01-23 09:50:32.940 250273 DEBUG oslo_concurrency.lockutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:50:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:50:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1286841287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:50:33 np0005593232 nova_compute[250269]: 2026-01-23 09:50:33.101 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:50:33 np0005593232 nova_compute[250269]: 2026-01-23 09:50:33.108 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 04:50:33 np0005593232 nova_compute[250269]: 2026-01-23 09:50:33.127 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 04:50:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:33.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:33.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:33 np0005593232 nova_compute[250269]: 2026-01-23 09:50:33.457 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 04:50:33 np0005593232 nova_compute[250269]: 2026-01-23 09:50:33.457 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:50:33 np0005593232 nova_compute[250269]: 2026-01-23 09:50:33.458 250273 DEBUG oslo_concurrency.lockutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:50:33 np0005593232 nova_compute[250269]: 2026-01-23 09:50:33.464 250273 DEBUG nova.virt.hardware [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 23 04:50:33 np0005593232 nova_compute[250269]: 2026-01-23 09:50:33.464 250273 INFO nova.compute.claims [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Claim successful on node compute-0.ctlplane.example.com
Jan 23 04:50:33 np0005593232 nova_compute[250269]: 2026-01-23 09:50:33.811 250273 DEBUG oslo_concurrency.processutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:50:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:50:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3264976161' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:50:34 np0005593232 nova_compute[250269]: 2026-01-23 09:50:34.246 250273 DEBUG oslo_concurrency.processutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:50:34 np0005593232 nova_compute[250269]: 2026-01-23 09:50:34.252 250273 DEBUG nova.compute.provider_tree [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 04:50:34 np0005593232 nova_compute[250269]: 2026-01-23 09:50:34.269 250273 DEBUG nova.scheduler.client.report [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 04:50:34 np0005593232 nova_compute[250269]: 2026-01-23 09:50:34.318 250273 DEBUG oslo_concurrency.lockutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:50:34 np0005593232 nova_compute[250269]: 2026-01-23 09:50:34.318 250273 DEBUG nova.compute.manager [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 04:50:34 np0005593232 nova_compute[250269]: 2026-01-23 09:50:34.384 250273 DEBUG nova.compute.manager [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 04:50:34 np0005593232 nova_compute[250269]: 2026-01-23 09:50:34.384 250273 DEBUG nova.network.neutron [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 04:50:34 np0005593232 nova_compute[250269]: 2026-01-23 09:50:34.406 250273 INFO nova.virt.libvirt.driver [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 04:50:34 np0005593232 nova_compute[250269]: 2026-01-23 09:50:34.431 250273 DEBUG nova.compute.manager [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 04:50:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1767: 321 pgs: 321 active+clean; 246 MiB data, 693 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 115 op/s
Jan 23 04:50:34 np0005593232 nova_compute[250269]: 2026-01-23 09:50:34.573 250273 DEBUG nova.compute.manager [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 04:50:34 np0005593232 nova_compute[250269]: 2026-01-23 09:50:34.575 250273 DEBUG nova.virt.libvirt.driver [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 04:50:34 np0005593232 nova_compute[250269]: 2026-01-23 09:50:34.575 250273 INFO nova.virt.libvirt.driver [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Creating image(s)
Jan 23 04:50:34 np0005593232 nova_compute[250269]: 2026-01-23 09:50:34.605 250273 DEBUG nova.storage.rbd_utils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] rbd image bc34e547-193c-4d2f-83ca-79f1ddcc0613_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 04:50:34 np0005593232 nova_compute[250269]: 2026-01-23 09:50:34.633 250273 DEBUG nova.storage.rbd_utils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] rbd image bc34e547-193c-4d2f-83ca-79f1ddcc0613_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 04:50:34 np0005593232 nova_compute[250269]: 2026-01-23 09:50:34.664 250273 DEBUG nova.storage.rbd_utils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] rbd image bc34e547-193c-4d2f-83ca-79f1ddcc0613_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 04:50:34 np0005593232 nova_compute[250269]: 2026-01-23 09:50:34.669 250273 DEBUG oslo_concurrency.processutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:50:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:50:34 np0005593232 nova_compute[250269]: 2026-01-23 09:50:34.705 250273 DEBUG nova.policy [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4cb83b8ddd0644f898d4be1f7de0b930', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b8b9b5c378f24327912b08252b3c9636', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 23 04:50:34 np0005593232 nova_compute[250269]: 2026-01-23 09:50:34.736 250273 DEBUG oslo_concurrency.processutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:50:34 np0005593232 nova_compute[250269]: 2026-01-23 09:50:34.737 250273 DEBUG oslo_concurrency.lockutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:50:34 np0005593232 nova_compute[250269]: 2026-01-23 09:50:34.737 250273 DEBUG oslo_concurrency.lockutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:50:34 np0005593232 nova_compute[250269]: 2026-01-23 09:50:34.738 250273 DEBUG oslo_concurrency.lockutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:50:34 np0005593232 nova_compute[250269]: 2026-01-23 09:50:34.765 250273 DEBUG nova.storage.rbd_utils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] rbd image bc34e547-193c-4d2f-83ca-79f1ddcc0613_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 04:50:34 np0005593232 nova_compute[250269]: 2026-01-23 09:50:34.770 250273 DEBUG oslo_concurrency.processutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 bc34e547-193c-4d2f-83ca-79f1ddcc0613_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:50:34 np0005593232 nova_compute[250269]: 2026-01-23 09:50:34.913 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:50:35 np0005593232 nova_compute[250269]: 2026-01-23 09:50:35.103 250273 DEBUG oslo_concurrency.processutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 bc34e547-193c-4d2f-83ca-79f1ddcc0613_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 04:50:35 np0005593232 nova_compute[250269]: 2026-01-23 09:50:35.196 250273 DEBUG nova.storage.rbd_utils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] resizing rbd image bc34e547-193c-4d2f-83ca-79f1ddcc0613_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 23 04:50:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:35.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:35.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:35 np0005593232 nova_compute[250269]: 2026-01-23 09:50:35.366 250273 DEBUG nova.objects.instance [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lazy-loading 'migration_context' on Instance uuid bc34e547-193c-4d2f-83ca-79f1ddcc0613 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 04:50:35 np0005593232 nova_compute[250269]: 2026-01-23 09:50:35.381 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:50:35 np0005593232 nova_compute[250269]: 2026-01-23 09:50:35.389 250273 DEBUG nova.virt.libvirt.driver [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 04:50:35 np0005593232 nova_compute[250269]: 2026-01-23 09:50:35.390 250273 DEBUG nova.virt.libvirt.driver [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Ensure instance console log exists: /var/lib/nova/instances/bc34e547-193c-4d2f-83ca-79f1ddcc0613/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 04:50:35 np0005593232 nova_compute[250269]: 2026-01-23 09:50:35.390 250273 DEBUG oslo_concurrency.lockutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:50:35 np0005593232 nova_compute[250269]: 2026-01-23 09:50:35.391 250273 DEBUG oslo_concurrency.lockutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:50:35 np0005593232 nova_compute[250269]: 2026-01-23 09:50:35.391 250273 DEBUG oslo_concurrency.lockutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:50:35 np0005593232 nova_compute[250269]: 2026-01-23 09:50:35.458 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 04:50:35 np0005593232 nova_compute[250269]: 2026-01-23 09:50:35.734 250273 DEBUG nova.network.neutron [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Successfully created port: 36cd571d-7acb-4373-8e01-cd61dc0c3735 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 23 04:50:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1768: 321 pgs: 321 active+clean; 265 MiB data, 705 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 992 KiB/s wr, 106 op/s
Jan 23 04:50:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:50:37
Jan 23 04:50:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:50:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:50:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', '.rgw.root', 'backups']
Jan 23 04:50:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:50:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:37.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:50:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:37.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:50:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:50:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:50:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:50:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:50:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:50:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:50:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:50:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:50:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:50:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:50:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:50:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:50:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:50:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:50:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:50:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:50:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1769: 321 pgs: 321 active+clean; 236 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 193 op/s
Jan 23 04:50:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:39.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:39.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:50:39 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Jan 23 04:50:39 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:50:39.758793) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:50:39 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Jan 23 04:50:39 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161839758934, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 739, "num_deletes": 250, "total_data_size": 1045171, "memory_usage": 1059496, "flush_reason": "Manual Compaction"}
Jan 23 04:50:39 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Jan 23 04:50:39 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161839766170, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 668299, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38625, "largest_seqno": 39363, "table_properties": {"data_size": 665073, "index_size": 1070, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8668, "raw_average_key_size": 20, "raw_value_size": 658303, "raw_average_value_size": 1567, "num_data_blocks": 49, "num_entries": 420, "num_filter_entries": 420, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161779, "oldest_key_time": 1769161779, "file_creation_time": 1769161839, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:50:39 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 7411 microseconds, and 3188 cpu microseconds.
Jan 23 04:50:39 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:50:39 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:50:39.766214) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 668299 bytes OK
Jan 23 04:50:39 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:50:39.766234) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Jan 23 04:50:39 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:50:39.768842) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Jan 23 04:50:39 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:50:39.768896) EVENT_LOG_v1 {"time_micros": 1769161839768883, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 04:50:39 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:50:39.768914) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 04:50:39 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1041428, prev total WAL file size 1041428, number of live WAL files 2.
Jan 23 04:50:39 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:50:39 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:50:39.769600) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323535' seq:72057594037927935, type:22 .. '6D6772737461740031353036' seq:0, type:0; will stop at (end)
Jan 23 04:50:39 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 04:50:39 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(652KB)], [83(11MB)]
Jan 23 04:50:39 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161839769751, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 12507936, "oldest_snapshot_seqno": -1}
Jan 23 04:50:39 np0005593232 nova_compute[250269]: 2026-01-23 09:50:39.928 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:50:40 np0005593232 nova_compute[250269]: 2026-01-23 09:50:40.019 250273 DEBUG nova.network.neutron [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Successfully updated port: 36cd571d-7acb-4373-8e01-cd61dc0c3735 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 23 04:50:40 np0005593232 nova_compute[250269]: 2026-01-23 09:50:40.048 250273 DEBUG oslo_concurrency.lockutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Acquiring lock "refresh_cache-bc34e547-193c-4d2f-83ca-79f1ddcc0613" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 04:50:40 np0005593232 nova_compute[250269]: 2026-01-23 09:50:40.048 250273 DEBUG oslo_concurrency.lockutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Acquired lock "refresh_cache-bc34e547-193c-4d2f-83ca-79f1ddcc0613" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 04:50:40 np0005593232 nova_compute[250269]: 2026-01-23 09:50:40.048 250273 DEBUG nova.network.neutron [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 04:50:40 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6505 keys, 9021149 bytes, temperature: kUnknown
Jan 23 04:50:40 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161840062154, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 9021149, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8979049, "index_size": 24710, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 167065, "raw_average_key_size": 25, "raw_value_size": 8863754, "raw_average_value_size": 1362, "num_data_blocks": 985, "num_entries": 6505, "num_filter_entries": 6505, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769161839, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:50:40 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:50:40 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:50:40.062418) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 9021149 bytes
Jan 23 04:50:40 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:50:40.065916) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 42.8 rd, 30.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 11.3 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(32.2) write-amplify(13.5) OK, records in: 6991, records dropped: 486 output_compression: NoCompression
Jan 23 04:50:40 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:50:40.065936) EVENT_LOG_v1 {"time_micros": 1769161840065928, "job": 48, "event": "compaction_finished", "compaction_time_micros": 292487, "compaction_time_cpu_micros": 27013, "output_level": 6, "num_output_files": 1, "total_output_size": 9021149, "num_input_records": 6991, "num_output_records": 6505, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:50:40 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:50:40 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161840066166, "job": 48, "event": "table_file_deletion", "file_number": 85}
Jan 23 04:50:40 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:50:40 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161840068551, "job": 48, "event": "table_file_deletion", "file_number": 83}
Jan 23 04:50:40 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:50:39.769414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:50:40 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:50:40.068611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:50:40 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:50:40.068615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:50:40 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:50:40.068617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:50:40 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:50:40.068619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:50:40 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:50:40.068621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:50:40 np0005593232 nova_compute[250269]: 2026-01-23 09:50:40.256 250273 DEBUG nova.network.neutron [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:50:40 np0005593232 nova_compute[250269]: 2026-01-23 09:50:40.383 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1770: 321 pgs: 321 active+clean; 236 MiB data, 694 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.091 250273 DEBUG nova.compute.manager [req-865cae18-51ff-48c8-83e1-db07a02f3bcd req-2482e798-65b4-4a2f-a0c1-433bda3de136 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Received event network-changed-36cd571d-7acb-4373-8e01-cd61dc0c3735 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.092 250273 DEBUG nova.compute.manager [req-865cae18-51ff-48c8-83e1-db07a02f3bcd req-2482e798-65b4-4a2f-a0c1-433bda3de136 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Refreshing instance network info cache due to event network-changed-36cd571d-7acb-4373-8e01-cd61dc0c3735. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.092 250273 DEBUG oslo_concurrency.lockutils [req-865cae18-51ff-48c8-83e1-db07a02f3bcd req-2482e798-65b4-4a2f-a0c1-433bda3de136 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-bc34e547-193c-4d2f-83ca-79f1ddcc0613" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:50:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:50:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:41.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:50:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:41.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.897 250273 DEBUG nova.network.neutron [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Updating instance_info_cache with network_info: [{"id": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "address": "fa:16:3e:9a:2d:73", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36cd571d-7a", "ovs_interfaceid": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.932 250273 DEBUG oslo_concurrency.lockutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Releasing lock "refresh_cache-bc34e547-193c-4d2f-83ca-79f1ddcc0613" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.933 250273 DEBUG nova.compute.manager [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Instance network_info: |[{"id": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "address": "fa:16:3e:9a:2d:73", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36cd571d-7a", "ovs_interfaceid": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.933 250273 DEBUG oslo_concurrency.lockutils [req-865cae18-51ff-48c8-83e1-db07a02f3bcd req-2482e798-65b4-4a2f-a0c1-433bda3de136 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-bc34e547-193c-4d2f-83ca-79f1ddcc0613" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.933 250273 DEBUG nova.network.neutron [req-865cae18-51ff-48c8-83e1-db07a02f3bcd req-2482e798-65b4-4a2f-a0c1-433bda3de136 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Refreshing network info cache for port 36cd571d-7acb-4373-8e01-cd61dc0c3735 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.937 250273 DEBUG nova.virt.libvirt.driver [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Start _get_guest_xml network_info=[{"id": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "address": "fa:16:3e:9a:2d:73", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36cd571d-7a", "ovs_interfaceid": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.941 250273 WARNING nova.virt.libvirt.driver [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.950 250273 DEBUG nova.virt.libvirt.host [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.951 250273 DEBUG nova.virt.libvirt.host [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.957 250273 DEBUG nova.virt.libvirt.host [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.958 250273 DEBUG nova.virt.libvirt.host [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.959 250273 DEBUG nova.virt.libvirt.driver [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.959 250273 DEBUG nova.virt.hardware [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.960 250273 DEBUG nova.virt.hardware [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.960 250273 DEBUG nova.virt.hardware [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.960 250273 DEBUG nova.virt.hardware [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.960 250273 DEBUG nova.virt.hardware [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.961 250273 DEBUG nova.virt.hardware [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.961 250273 DEBUG nova.virt.hardware [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.961 250273 DEBUG nova.virt.hardware [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.961 250273 DEBUG nova.virt.hardware [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.961 250273 DEBUG nova.virt.hardware [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.962 250273 DEBUG nova.virt.hardware [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:50:41 np0005593232 nova_compute[250269]: 2026-01-23 09:50:41.965 250273 DEBUG oslo_concurrency.processutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:50:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:50:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:50:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:50:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:50:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:50:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/317307379' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.402 250273 DEBUG oslo_concurrency.processutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.432 250273 DEBUG nova.storage.rbd_utils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] rbd image bc34e547-193c-4d2f-83ca-79f1ddcc0613_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.436 250273 DEBUG oslo_concurrency.processutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:50:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1771: 321 pgs: 321 active+clean; 213 MiB data, 688 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Jan 23 04:50:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:42.603 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:50:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:42.604 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:50:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:42.605 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:50:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:50:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/201725708' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.901 250273 DEBUG oslo_concurrency.processutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.904 250273 DEBUG nova.virt.libvirt.vif [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:50:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1935390298',display_name='tempest-SecurityGroupsTestJSON-server-1935390298',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1935390298',id=70,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b8b9b5c378f24327912b08252b3c9636',ramdisk_id='',reservation_id='r-xc3tix1h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-432973814',owner_user_name='tempest-SecurityGroupsTestJSON-432973814-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:50:34Z,user_data=None,user_id='4cb83b8ddd0644f898d4be1f7de0b930',uuid=bc34e547-193c-4d2f-83ca-79f1ddcc0613,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "address": "fa:16:3e:9a:2d:73", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36cd571d-7a", "ovs_interfaceid": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.904 250273 DEBUG nova.network.os_vif_util [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Converting VIF {"id": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "address": "fa:16:3e:9a:2d:73", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36cd571d-7a", "ovs_interfaceid": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.906 250273 DEBUG nova.network.os_vif_util [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:2d:73,bridge_name='br-int',has_traffic_filtering=True,id=36cd571d-7acb-4373-8e01-cd61dc0c3735,network=Network(79c61601-48fe-4c3b-aac8-5ed602fc7629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36cd571d-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.907 250273 DEBUG nova.objects.instance [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lazy-loading 'pci_devices' on Instance uuid bc34e547-193c-4d2f-83ca-79f1ddcc0613 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:50:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:50:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:50:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:50:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:50:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.932 250273 DEBUG nova.virt.libvirt.driver [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:50:42 np0005593232 nova_compute[250269]:  <uuid>bc34e547-193c-4d2f-83ca-79f1ddcc0613</uuid>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:  <name>instance-00000046</name>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <nova:name>tempest-SecurityGroupsTestJSON-server-1935390298</nova:name>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:50:41</nova:creationTime>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:50:42 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:        <nova:user uuid="4cb83b8ddd0644f898d4be1f7de0b930">tempest-SecurityGroupsTestJSON-432973814-project-member</nova:user>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:        <nova:project uuid="b8b9b5c378f24327912b08252b3c9636">tempest-SecurityGroupsTestJSON-432973814</nova:project>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:        <nova:port uuid="36cd571d-7acb-4373-8e01-cd61dc0c3735">
Jan 23 04:50:42 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <entry name="serial">bc34e547-193c-4d2f-83ca-79f1ddcc0613</entry>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <entry name="uuid">bc34e547-193c-4d2f-83ca-79f1ddcc0613</entry>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/bc34e547-193c-4d2f-83ca-79f1ddcc0613_disk">
Jan 23 04:50:42 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:50:42 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/bc34e547-193c-4d2f-83ca-79f1ddcc0613_disk.config">
Jan 23 04:50:42 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:50:42 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:9a:2d:73"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <target dev="tap36cd571d-7a"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/bc34e547-193c-4d2f-83ca-79f1ddcc0613/console.log" append="off"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:50:42 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:50:42 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:50:42 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:50:42 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.934 250273 DEBUG nova.compute.manager [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Preparing to wait for external event network-vif-plugged-36cd571d-7acb-4373-8e01-cd61dc0c3735 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.935 250273 DEBUG oslo_concurrency.lockutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Acquiring lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.935 250273 DEBUG oslo_concurrency.lockutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.936 250273 DEBUG oslo_concurrency.lockutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.936 250273 DEBUG nova.virt.libvirt.vif [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:50:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1935390298',display_name='tempest-SecurityGroupsTestJSON-server-1935390298',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1935390298',id=70,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b8b9b5c378f24327912b08252b3c9636',ramdisk_id='',reservation_id='r-xc3tix1h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-432973814',owner_user_name='tempest-SecurityGroupsTestJSON-432973814-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:50:34Z,user_data=None,user_id='4cb83b8ddd0644f898d4be1f7de0b930',uuid=bc34e547-193c-4d2f-83ca-79f1ddcc0613,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "address": "fa:16:3e:9a:2d:73", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36cd571d-7a", "ovs_interfaceid": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.937 250273 DEBUG nova.network.os_vif_util [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Converting VIF {"id": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "address": "fa:16:3e:9a:2d:73", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36cd571d-7a", "ovs_interfaceid": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.938 250273 DEBUG nova.network.os_vif_util [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9a:2d:73,bridge_name='br-int',has_traffic_filtering=True,id=36cd571d-7acb-4373-8e01-cd61dc0c3735,network=Network(79c61601-48fe-4c3b-aac8-5ed602fc7629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36cd571d-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.938 250273 DEBUG os_vif [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:2d:73,bridge_name='br-int',has_traffic_filtering=True,id=36cd571d-7acb-4373-8e01-cd61dc0c3735,network=Network(79c61601-48fe-4c3b-aac8-5ed602fc7629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36cd571d-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.939 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.940 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.941 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.944 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.944 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap36cd571d-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.945 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap36cd571d-7a, col_values=(('external_ids', {'iface-id': '36cd571d-7acb-4373-8e01-cd61dc0c3735', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9a:2d:73', 'vm-uuid': 'bc34e547-193c-4d2f-83ca-79f1ddcc0613'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.946 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:42 np0005593232 NetworkManager[49057]: <info>  [1769161842.9475] manager: (tap36cd571d-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.950 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.955 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:42 np0005593232 nova_compute[250269]: 2026-01-23 09:50:42.957 250273 INFO os_vif [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9a:2d:73,bridge_name='br-int',has_traffic_filtering=True,id=36cd571d-7acb-4373-8e01-cd61dc0c3735,network=Network(79c61601-48fe-4c3b-aac8-5ed602fc7629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36cd571d-7a')#033[00m
Jan 23 04:50:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:50:43 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 60d5f659-593c-4788-a8e3-8b02734602f8 does not exist
Jan 23 04:50:43 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev f996247e-880a-4515-a879-1e8292dd8164 does not exist
Jan 23 04:50:43 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 021ea82b-aa4e-44ad-8d9f-3739c973171f does not exist
Jan 23 04:50:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:50:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:50:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:50:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:50:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:50:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:50:43 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:50:43 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:50:43 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:50:43 np0005593232 nova_compute[250269]: 2026-01-23 09:50:43.225 250273 DEBUG nova.virt.libvirt.driver [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:50:43 np0005593232 nova_compute[250269]: 2026-01-23 09:50:43.226 250273 DEBUG nova.virt.libvirt.driver [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:50:43 np0005593232 nova_compute[250269]: 2026-01-23 09:50:43.226 250273 DEBUG nova.virt.libvirt.driver [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] No VIF found with MAC fa:16:3e:9a:2d:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:50:43 np0005593232 nova_compute[250269]: 2026-01-23 09:50:43.226 250273 INFO nova.virt.libvirt.driver [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Using config drive#033[00m
Jan 23 04:50:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:43.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:43 np0005593232 nova_compute[250269]: 2026-01-23 09:50:43.259 250273 DEBUG nova.storage.rbd_utils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] rbd image bc34e547-193c-4d2f-83ca-79f1ddcc0613_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:50:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:43.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:43 np0005593232 podman[295534]: 2026-01-23 09:50:43.746137865 +0000 UTC m=+0.022065452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:50:43 np0005593232 podman[295534]: 2026-01-23 09:50:43.99597953 +0000 UTC m=+0.271907087 container create f8156a8d60ba9bf08a384d79bd92d6a7a04bef44c02ea0c55a7dd70eaf5d3a82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 23 04:50:44 np0005593232 nova_compute[250269]: 2026-01-23 09:50:44.003 250273 INFO nova.virt.libvirt.driver [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Creating config drive at /var/lib/nova/instances/bc34e547-193c-4d2f-83ca-79f1ddcc0613/disk.config#033[00m
Jan 23 04:50:44 np0005593232 nova_compute[250269]: 2026-01-23 09:50:44.009 250273 DEBUG oslo_concurrency.processutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bc34e547-193c-4d2f-83ca-79f1ddcc0613/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2du9y8rn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:50:44 np0005593232 nova_compute[250269]: 2026-01-23 09:50:44.142 250273 DEBUG oslo_concurrency.processutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bc34e547-193c-4d2f-83ca-79f1ddcc0613/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2du9y8rn" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:50:44 np0005593232 nova_compute[250269]: 2026-01-23 09:50:44.173 250273 DEBUG nova.storage.rbd_utils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] rbd image bc34e547-193c-4d2f-83ca-79f1ddcc0613_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:50:44 np0005593232 nova_compute[250269]: 2026-01-23 09:50:44.179 250273 DEBUG oslo_concurrency.processutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bc34e547-193c-4d2f-83ca-79f1ddcc0613/disk.config bc34e547-193c-4d2f-83ca-79f1ddcc0613_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:50:44 np0005593232 systemd[1]: Started libpod-conmon-f8156a8d60ba9bf08a384d79bd92d6a7a04bef44c02ea0c55a7dd70eaf5d3a82.scope.
Jan 23 04:50:44 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:50:44 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:50:44 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:50:44 np0005593232 podman[295534]: 2026-01-23 09:50:44.42892896 +0000 UTC m=+0.704856547 container init f8156a8d60ba9bf08a384d79bd92d6a7a04bef44c02ea0c55a7dd70eaf5d3a82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:50:44 np0005593232 podman[295534]: 2026-01-23 09:50:44.438283048 +0000 UTC m=+0.714210625 container start f8156a8d60ba9bf08a384d79bd92d6a7a04bef44c02ea0c55a7dd70eaf5d3a82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 23 04:50:44 np0005593232 podman[295534]: 2026-01-23 09:50:44.442009794 +0000 UTC m=+0.717937381 container attach f8156a8d60ba9bf08a384d79bd92d6a7a04bef44c02ea0c55a7dd70eaf5d3a82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:50:44 np0005593232 modest_lumiere[295587]: 167 167
Jan 23 04:50:44 np0005593232 systemd[1]: libpod-f8156a8d60ba9bf08a384d79bd92d6a7a04bef44c02ea0c55a7dd70eaf5d3a82.scope: Deactivated successfully.
Jan 23 04:50:44 np0005593232 podman[295534]: 2026-01-23 09:50:44.445296888 +0000 UTC m=+0.721224455 container died f8156a8d60ba9bf08a384d79bd92d6a7a04bef44c02ea0c55a7dd70eaf5d3a82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 04:50:44 np0005593232 nova_compute[250269]: 2026-01-23 09:50:44.453 250273 DEBUG nova.network.neutron [req-865cae18-51ff-48c8-83e1-db07a02f3bcd req-2482e798-65b4-4a2f-a0c1-433bda3de136 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Updated VIF entry in instance network info cache for port 36cd571d-7acb-4373-8e01-cd61dc0c3735. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:50:44 np0005593232 nova_compute[250269]: 2026-01-23 09:50:44.454 250273 DEBUG nova.network.neutron [req-865cae18-51ff-48c8-83e1-db07a02f3bcd req-2482e798-65b4-4a2f-a0c1-433bda3de136 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Updating instance_info_cache with network_info: [{"id": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "address": "fa:16:3e:9a:2d:73", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36cd571d-7a", "ovs_interfaceid": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:50:44 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b9b35179108211719d7c9541a2ef97616615b17570c7f81adf991152734dcaf0-merged.mount: Deactivated successfully.
Jan 23 04:50:44 np0005593232 nova_compute[250269]: 2026-01-23 09:50:44.485 250273 DEBUG oslo_concurrency.lockutils [req-865cae18-51ff-48c8-83e1-db07a02f3bcd req-2482e798-65b4-4a2f-a0c1-433bda3de136 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-bc34e547-193c-4d2f-83ca-79f1ddcc0613" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:50:44 np0005593232 podman[295534]: 2026-01-23 09:50:44.495741971 +0000 UTC m=+0.771669538 container remove f8156a8d60ba9bf08a384d79bd92d6a7a04bef44c02ea0c55a7dd70eaf5d3a82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:50:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1772: 321 pgs: 321 active+clean; 260 MiB data, 695 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 133 op/s
Jan 23 04:50:44 np0005593232 systemd[1]: libpod-conmon-f8156a8d60ba9bf08a384d79bd92d6a7a04bef44c02ea0c55a7dd70eaf5d3a82.scope: Deactivated successfully.
Jan 23 04:50:44 np0005593232 nova_compute[250269]: 2026-01-23 09:50:44.557 250273 DEBUG oslo_concurrency.processutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bc34e547-193c-4d2f-83ca-79f1ddcc0613/disk.config bc34e547-193c-4d2f-83ca-79f1ddcc0613_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.378s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:50:44 np0005593232 nova_compute[250269]: 2026-01-23 09:50:44.558 250273 INFO nova.virt.libvirt.driver [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Deleting local config drive /var/lib/nova/instances/bc34e547-193c-4d2f-83ca-79f1ddcc0613/disk.config because it was imported into RBD.#033[00m
Jan 23 04:50:44 np0005593232 kernel: tap36cd571d-7a: entered promiscuous mode
Jan 23 04:50:44 np0005593232 NetworkManager[49057]: <info>  [1769161844.6146] manager: (tap36cd571d-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/105)
Jan 23 04:50:44 np0005593232 ovn_controller[151001]: 2026-01-23T09:50:44Z|00204|binding|INFO|Claiming lport 36cd571d-7acb-4373-8e01-cd61dc0c3735 for this chassis.
Jan 23 04:50:44 np0005593232 ovn_controller[151001]: 2026-01-23T09:50:44Z|00205|binding|INFO|36cd571d-7acb-4373-8e01-cd61dc0c3735: Claiming fa:16:3e:9a:2d:73 10.100.0.7
Jan 23 04:50:44 np0005593232 nova_compute[250269]: 2026-01-23 09:50:44.671 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:44.679 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:2d:73 10.100.0.7'], port_security=['fa:16:3e:9a:2d:73 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bc34e547-193c-4d2f-83ca-79f1ddcc0613', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79c61601-48fe-4c3b-aac8-5ed602fc7629', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b8b9b5c378f24327912b08252b3c9636', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ba9579fc-88ad-41cb-b94e-0302604f4fcc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90d3d7bd-b3df-4803-a42b-299b97e45f23, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=36cd571d-7acb-4373-8e01-cd61dc0c3735) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:50:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:44.680 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 36cd571d-7acb-4373-8e01-cd61dc0c3735 in datapath 79c61601-48fe-4c3b-aac8-5ed602fc7629 bound to our chassis#033[00m
Jan 23 04:50:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:44.682 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79c61601-48fe-4c3b-aac8-5ed602fc7629#033[00m
Jan 23 04:50:44 np0005593232 ovn_controller[151001]: 2026-01-23T09:50:44Z|00206|binding|INFO|Setting lport 36cd571d-7acb-4373-8e01-cd61dc0c3735 ovn-installed in OVS
Jan 23 04:50:44 np0005593232 ovn_controller[151001]: 2026-01-23T09:50:44Z|00207|binding|INFO|Setting lport 36cd571d-7acb-4373-8e01-cd61dc0c3735 up in Southbound
Jan 23 04:50:44 np0005593232 nova_compute[250269]: 2026-01-23 09:50:44.691 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:44 np0005593232 systemd-udevd[295640]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:50:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:44.700 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6a23f128-c1a4-4443-84a9-4bffc0b4a000]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:50:44 np0005593232 systemd-machined[215836]: New machine qemu-26-instance-00000046.
Jan 23 04:50:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:50:44 np0005593232 systemd[1]: Started Virtual Machine qemu-26-instance-00000046.
Jan 23 04:50:44 np0005593232 NetworkManager[49057]: <info>  [1769161844.7156] device (tap36cd571d-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:50:44 np0005593232 NetworkManager[49057]: <info>  [1769161844.7163] device (tap36cd571d-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:50:44 np0005593232 podman[295622]: 2026-01-23 09:50:44.711789149 +0000 UTC m=+0.073787731 container create 6fba12929249de5f44aa77f4937cea12c0be8574991740bbb3e8d1a2801b3de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 04:50:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:44.733 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[2cb85afb-0da7-4d94-9a03-daed8d68cf2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:50:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:44.737 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[a091f366-df28-4df6-a9ac-e7805f448fe7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:50:44 np0005593232 systemd[1]: Started libpod-conmon-6fba12929249de5f44aa77f4937cea12c0be8574991740bbb3e8d1a2801b3de3.scope.
Jan 23 04:50:44 np0005593232 podman[295622]: 2026-01-23 09:50:44.67442316 +0000 UTC m=+0.036421762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:50:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:44.764 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[b76925d3-643e-4564-98f9-a7528d2e8983]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:50:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:44.781 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[40717521-d198-485c-bf61-5b5e9c7810aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79c61601-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:1e:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570535, 'reachable_time': 27629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295657, 'error': None, 'target': 'ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:50:44 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:50:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed5a80b237b62b1798a4354b86141258425f93fc42c35b28638d7fa4795fe7e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:50:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed5a80b237b62b1798a4354b86141258425f93fc42c35b28638d7fa4795fe7e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:50:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed5a80b237b62b1798a4354b86141258425f93fc42c35b28638d7fa4795fe7e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:50:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed5a80b237b62b1798a4354b86141258425f93fc42c35b28638d7fa4795fe7e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:50:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed5a80b237b62b1798a4354b86141258425f93fc42c35b28638d7fa4795fe7e7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:50:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:44.801 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f79eae43-cee3-4ed5-86af-522bdedd772e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap79c61601-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570546, 'tstamp': 570546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295658, 'error': None, 'target': 'ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap79c61601-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570549, 'tstamp': 570549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295658, 'error': None, 'target': 'ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:50:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:44.804 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79c61601-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:50:44 np0005593232 podman[295622]: 2026-01-23 09:50:44.807885197 +0000 UTC m=+0.169883789 container init 6fba12929249de5f44aa77f4937cea12c0be8574991740bbb3e8d1a2801b3de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:50:44 np0005593232 nova_compute[250269]: 2026-01-23 09:50:44.807 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:44.810 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79c61601-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:50:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:44.811 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:50:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:44.811 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79c61601-40, col_values=(('external_ids', {'iface-id': '3b1c09cf-09e8-4a4c-8010-b5b376d985cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:50:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:44.811 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:50:44 np0005593232 podman[295622]: 2026-01-23 09:50:44.816521404 +0000 UTC m=+0.178519976 container start 6fba12929249de5f44aa77f4937cea12c0be8574991740bbb3e8d1a2801b3de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 04:50:44 np0005593232 podman[295622]: 2026-01-23 09:50:44.819931371 +0000 UTC m=+0.181929943 container attach 6fba12929249de5f44aa77f4937cea12c0be8574991740bbb3e8d1a2801b3de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 04:50:44 np0005593232 nova_compute[250269]: 2026-01-23 09:50:44.917 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.125 250273 DEBUG nova.compute.manager [req-edd9cfd5-0f9f-4ec2-a203-1269276593d3 req-2e90ec08-6eaa-4420-b524-21a74e086570 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Received event network-vif-plugged-36cd571d-7acb-4373-8e01-cd61dc0c3735 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.126 250273 DEBUG oslo_concurrency.lockutils [req-edd9cfd5-0f9f-4ec2-a203-1269276593d3 req-2e90ec08-6eaa-4420-b524-21a74e086570 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.126 250273 DEBUG oslo_concurrency.lockutils [req-edd9cfd5-0f9f-4ec2-a203-1269276593d3 req-2e90ec08-6eaa-4420-b524-21a74e086570 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.127 250273 DEBUG oslo_concurrency.lockutils [req-edd9cfd5-0f9f-4ec2-a203-1269276593d3 req-2e90ec08-6eaa-4420-b524-21a74e086570 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.127 250273 DEBUG nova.compute.manager [req-edd9cfd5-0f9f-4ec2-a203-1269276593d3 req-2e90ec08-6eaa-4420-b524-21a74e086570 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Processing event network-vif-plugged-36cd571d-7acb-4373-8e01-cd61dc0c3735 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 04:50:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:50:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:45.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.273 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161845.2725005, bc34e547-193c-4d2f-83ca-79f1ddcc0613 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.273 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] VM Started (Lifecycle Event)#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.276 250273 DEBUG nova.compute.manager [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.280 250273 DEBUG nova.virt.libvirt.driver [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.283 250273 INFO nova.virt.libvirt.driver [-] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Instance spawned successfully.#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.283 250273 DEBUG nova.virt.libvirt.driver [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.311 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.317 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.320 250273 DEBUG nova.virt.libvirt.driver [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.321 250273 DEBUG nova.virt.libvirt.driver [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.321 250273 DEBUG nova.virt.libvirt.driver [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.322 250273 DEBUG nova.virt.libvirt.driver [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.322 250273 DEBUG nova.virt.libvirt.driver [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.323 250273 DEBUG nova.virt.libvirt.driver [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:50:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:45.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.368 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.369 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161845.2726164, bc34e547-193c-4d2f-83ca-79f1ddcc0613 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.369 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.400 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.407 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161845.279305, bc34e547-193c-4d2f-83ca-79f1ddcc0613 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.407 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.430 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.434 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.440 250273 INFO nova.compute.manager [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Took 10.87 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.440 250273 DEBUG nova.compute.manager [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.455 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.524 250273 INFO nova.compute.manager [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Took 12.61 seconds to build instance.#033[00m
Jan 23 04:50:45 np0005593232 nova_compute[250269]: 2026-01-23 09:50:45.548 250273 DEBUG oslo_concurrency.lockutils [None req-3b2c5be5-80ed-48cb-a56d-dbea0b27cf29 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:50:45 np0005593232 gifted_proskuriakova[295650]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:50:45 np0005593232 gifted_proskuriakova[295650]: --> relative data size: 1.0
Jan 23 04:50:45 np0005593232 gifted_proskuriakova[295650]: --> All data devices are unavailable
Jan 23 04:50:45 np0005593232 systemd[1]: libpod-6fba12929249de5f44aa77f4937cea12c0be8574991740bbb3e8d1a2801b3de3.scope: Deactivated successfully.
Jan 23 04:50:45 np0005593232 conmon[295650]: conmon 6fba12929249de5f44aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6fba12929249de5f44aa77f4937cea12c0be8574991740bbb3e8d1a2801b3de3.scope/container/memory.events
Jan 23 04:50:45 np0005593232 podman[295622]: 2026-01-23 09:50:45.643820591 +0000 UTC m=+1.005819173 container died 6fba12929249de5f44aa77f4937cea12c0be8574991740bbb3e8d1a2801b3de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 04:50:45 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ed5a80b237b62b1798a4354b86141258425f93fc42c35b28638d7fa4795fe7e7-merged.mount: Deactivated successfully.
Jan 23 04:50:45 np0005593232 podman[295622]: 2026-01-23 09:50:45.696512057 +0000 UTC m=+1.058510629 container remove 6fba12929249de5f44aa77f4937cea12c0be8574991740bbb3e8d1a2801b3de3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 23 04:50:45 np0005593232 systemd[1]: libpod-conmon-6fba12929249de5f44aa77f4937cea12c0be8574991740bbb3e8d1a2801b3de3.scope: Deactivated successfully.
Jan 23 04:50:46 np0005593232 podman[295869]: 2026-01-23 09:50:46.319488381 +0000 UTC m=+0.047894900 container create 9b594bed8ae44f686d0a22ef153972df22e8981c87dbab4668cde6330fbe4ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kepler, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:50:46 np0005593232 systemd[1]: Started libpod-conmon-9b594bed8ae44f686d0a22ef153972df22e8981c87dbab4668cde6330fbe4ced.scope.
Jan 23 04:50:46 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:50:46 np0005593232 podman[295869]: 2026-01-23 09:50:46.297443671 +0000 UTC m=+0.025850220 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:50:46 np0005593232 podman[295869]: 2026-01-23 09:50:46.397401599 +0000 UTC m=+0.125808138 container init 9b594bed8ae44f686d0a22ef153972df22e8981c87dbab4668cde6330fbe4ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 04:50:46 np0005593232 podman[295869]: 2026-01-23 09:50:46.403230776 +0000 UTC m=+0.131637285 container start 9b594bed8ae44f686d0a22ef153972df22e8981c87dbab4668cde6330fbe4ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 04:50:46 np0005593232 stoic_kepler[295885]: 167 167
Jan 23 04:50:46 np0005593232 podman[295869]: 2026-01-23 09:50:46.409464554 +0000 UTC m=+0.137871093 container attach 9b594bed8ae44f686d0a22ef153972df22e8981c87dbab4668cde6330fbe4ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:50:46 np0005593232 systemd[1]: libpod-9b594bed8ae44f686d0a22ef153972df22e8981c87dbab4668cde6330fbe4ced.scope: Deactivated successfully.
Jan 23 04:50:46 np0005593232 podman[295869]: 2026-01-23 09:50:46.411574975 +0000 UTC m=+0.139981494 container died 9b594bed8ae44f686d0a22ef153972df22e8981c87dbab4668cde6330fbe4ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 04:50:46 np0005593232 systemd[1]: var-lib-containers-storage-overlay-601dcddb4a0bdadebcfb6cda7deedba8f1f00c5892cda6d1ab4dd4466fd6c2eb-merged.mount: Deactivated successfully.
Jan 23 04:50:46 np0005593232 podman[295869]: 2026-01-23 09:50:46.453115702 +0000 UTC m=+0.181522221 container remove 9b594bed8ae44f686d0a22ef153972df22e8981c87dbab4668cde6330fbe4ced (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 04:50:46 np0005593232 systemd[1]: libpod-conmon-9b594bed8ae44f686d0a22ef153972df22e8981c87dbab4668cde6330fbe4ced.scope: Deactivated successfully.
Jan 23 04:50:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1773: 321 pgs: 321 active+clean; 260 MiB data, 688 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.6 MiB/s wr, 162 op/s
Jan 23 04:50:46 np0005593232 podman[295910]: 2026-01-23 09:50:46.620267832 +0000 UTC m=+0.042443784 container create 627b0fb347a6f8928e35aabdde03a37c2d1bf5581e3f7c98fc6fcf7d405e8495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 04:50:46 np0005593232 systemd[1]: Started libpod-conmon-627b0fb347a6f8928e35aabdde03a37c2d1bf5581e3f7c98fc6fcf7d405e8495.scope.
Jan 23 04:50:46 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:50:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7d2a01e78be58aa894dc295d5c8b6bc7187637f09049718623f0f8084e491d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:50:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7d2a01e78be58aa894dc295d5c8b6bc7187637f09049718623f0f8084e491d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:50:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7d2a01e78be58aa894dc295d5c8b6bc7187637f09049718623f0f8084e491d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:50:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7d2a01e78be58aa894dc295d5c8b6bc7187637f09049718623f0f8084e491d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:50:46 np0005593232 podman[295910]: 2026-01-23 09:50:46.602606787 +0000 UTC m=+0.024782779 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:50:46 np0005593232 podman[295910]: 2026-01-23 09:50:46.710082901 +0000 UTC m=+0.132258893 container init 627b0fb347a6f8928e35aabdde03a37c2d1bf5581e3f7c98fc6fcf7d405e8495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 04:50:46 np0005593232 podman[295910]: 2026-01-23 09:50:46.717619686 +0000 UTC m=+0.139795648 container start 627b0fb347a6f8928e35aabdde03a37c2d1bf5581e3f7c98fc6fcf7d405e8495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elion, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 04:50:46 np0005593232 podman[295910]: 2026-01-23 09:50:46.720820548 +0000 UTC m=+0.142996530 container attach 627b0fb347a6f8928e35aabdde03a37c2d1bf5581e3f7c98fc6fcf7d405e8495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elion, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 04:50:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:50:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:50:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:50:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:50:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0051412662269681745 of space, bias 1.0, pg target 1.5423798680904524 quantized to 32 (current 32)
Jan 23 04:50:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:50:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 23 04:50:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:50:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:50:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:50:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 23 04:50:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:50:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 04:50:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:50:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:50:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:50:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 04:50:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:50:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 04:50:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:50:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:50:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:50:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 04:50:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:47.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:47 np0005593232 nova_compute[250269]: 2026-01-23 09:50:47.305 250273 DEBUG nova.compute.manager [req-6748f794-e33d-4ea5-a1f4-b23598de3ac9 req-5d7f20c3-6aa8-4e4b-9981-d3c7592ddf0c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Received event network-vif-plugged-36cd571d-7acb-4373-8e01-cd61dc0c3735 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:50:47 np0005593232 nova_compute[250269]: 2026-01-23 09:50:47.307 250273 DEBUG oslo_concurrency.lockutils [req-6748f794-e33d-4ea5-a1f4-b23598de3ac9 req-5d7f20c3-6aa8-4e4b-9981-d3c7592ddf0c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:50:47 np0005593232 nova_compute[250269]: 2026-01-23 09:50:47.308 250273 DEBUG oslo_concurrency.lockutils [req-6748f794-e33d-4ea5-a1f4-b23598de3ac9 req-5d7f20c3-6aa8-4e4b-9981-d3c7592ddf0c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:50:47 np0005593232 nova_compute[250269]: 2026-01-23 09:50:47.308 250273 DEBUG oslo_concurrency.lockutils [req-6748f794-e33d-4ea5-a1f4-b23598de3ac9 req-5d7f20c3-6aa8-4e4b-9981-d3c7592ddf0c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:50:47 np0005593232 nova_compute[250269]: 2026-01-23 09:50:47.308 250273 DEBUG nova.compute.manager [req-6748f794-e33d-4ea5-a1f4-b23598de3ac9 req-5d7f20c3-6aa8-4e4b-9981-d3c7592ddf0c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] No waiting events found dispatching network-vif-plugged-36cd571d-7acb-4373-8e01-cd61dc0c3735 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:50:47 np0005593232 nova_compute[250269]: 2026-01-23 09:50:47.309 250273 WARNING nova.compute.manager [req-6748f794-e33d-4ea5-a1f4-b23598de3ac9 req-5d7f20c3-6aa8-4e4b-9981-d3c7592ddf0c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Received unexpected event network-vif-plugged-36cd571d-7acb-4373-8e01-cd61dc0c3735 for instance with vm_state active and task_state None.#033[00m
Jan 23 04:50:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:47.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:47 np0005593232 tender_elion[295926]: {
Jan 23 04:50:47 np0005593232 tender_elion[295926]:    "0": [
Jan 23 04:50:47 np0005593232 tender_elion[295926]:        {
Jan 23 04:50:47 np0005593232 tender_elion[295926]:            "devices": [
Jan 23 04:50:47 np0005593232 tender_elion[295926]:                "/dev/loop3"
Jan 23 04:50:47 np0005593232 tender_elion[295926]:            ],
Jan 23 04:50:47 np0005593232 tender_elion[295926]:            "lv_name": "ceph_lv0",
Jan 23 04:50:47 np0005593232 tender_elion[295926]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:50:47 np0005593232 tender_elion[295926]:            "lv_size": "7511998464",
Jan 23 04:50:47 np0005593232 tender_elion[295926]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:50:47 np0005593232 tender_elion[295926]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:50:47 np0005593232 tender_elion[295926]:            "name": "ceph_lv0",
Jan 23 04:50:47 np0005593232 tender_elion[295926]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:50:47 np0005593232 tender_elion[295926]:            "tags": {
Jan 23 04:50:47 np0005593232 tender_elion[295926]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:50:47 np0005593232 tender_elion[295926]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:50:47 np0005593232 tender_elion[295926]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:50:47 np0005593232 tender_elion[295926]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:50:47 np0005593232 tender_elion[295926]:                "ceph.cluster_name": "ceph",
Jan 23 04:50:47 np0005593232 tender_elion[295926]:                "ceph.crush_device_class": "",
Jan 23 04:50:47 np0005593232 tender_elion[295926]:                "ceph.encrypted": "0",
Jan 23 04:50:47 np0005593232 tender_elion[295926]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:50:47 np0005593232 tender_elion[295926]:                "ceph.osd_id": "0",
Jan 23 04:50:47 np0005593232 tender_elion[295926]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:50:47 np0005593232 tender_elion[295926]:                "ceph.type": "block",
Jan 23 04:50:47 np0005593232 tender_elion[295926]:                "ceph.vdo": "0"
Jan 23 04:50:47 np0005593232 tender_elion[295926]:            },
Jan 23 04:50:47 np0005593232 tender_elion[295926]:            "type": "block",
Jan 23 04:50:47 np0005593232 tender_elion[295926]:            "vg_name": "ceph_vg0"
Jan 23 04:50:47 np0005593232 tender_elion[295926]:        }
Jan 23 04:50:47 np0005593232 tender_elion[295926]:    ]
Jan 23 04:50:47 np0005593232 tender_elion[295926]: }
Jan 23 04:50:47 np0005593232 systemd[1]: libpod-627b0fb347a6f8928e35aabdde03a37c2d1bf5581e3f7c98fc6fcf7d405e8495.scope: Deactivated successfully.
Jan 23 04:50:47 np0005593232 podman[295910]: 2026-01-23 09:50:47.518077006 +0000 UTC m=+0.940252978 container died 627b0fb347a6f8928e35aabdde03a37c2d1bf5581e3f7c98fc6fcf7d405e8495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elion, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:50:47 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a7d2a01e78be58aa894dc295d5c8b6bc7187637f09049718623f0f8084e491d0-merged.mount: Deactivated successfully.
Jan 23 04:50:47 np0005593232 podman[295910]: 2026-01-23 09:50:47.605240999 +0000 UTC m=+1.027416961 container remove 627b0fb347a6f8928e35aabdde03a37c2d1bf5581e3f7c98fc6fcf7d405e8495 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:50:47 np0005593232 systemd[1]: libpod-conmon-627b0fb347a6f8928e35aabdde03a37c2d1bf5581e3f7c98fc6fcf7d405e8495.scope: Deactivated successfully.
Jan 23 04:50:47 np0005593232 nova_compute[250269]: 2026-01-23 09:50:47.947 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:48 np0005593232 podman[296088]: 2026-01-23 09:50:48.174355893 +0000 UTC m=+0.033706455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:50:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:50:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1028398102' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:50:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1774: 321 pgs: 321 active+clean; 260 MiB data, 688 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.6 MiB/s wr, 187 op/s
Jan 23 04:50:48 np0005593232 podman[296088]: 2026-01-23 09:50:48.626562264 +0000 UTC m=+0.485912806 container create 8d048b635ac9b261a1bcff41024783d5cb6c48953b843d2de8015828e2ba0ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_jones, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Jan 23 04:50:49 np0005593232 nova_compute[250269]: 2026-01-23 09:50:48.998 250273 DEBUG nova.compute.manager [req-cdb4dfea-5298-43e2-8306-8c325e2ce891 req-a845b21c-7034-44c5-8677-94a09124d29c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Received event network-changed-36cd571d-7acb-4373-8e01-cd61dc0c3735 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:50:49 np0005593232 nova_compute[250269]: 2026-01-23 09:50:48.999 250273 DEBUG nova.compute.manager [req-cdb4dfea-5298-43e2-8306-8c325e2ce891 req-a845b21c-7034-44c5-8677-94a09124d29c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Refreshing instance network info cache due to event network-changed-36cd571d-7acb-4373-8e01-cd61dc0c3735. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:50:49 np0005593232 nova_compute[250269]: 2026-01-23 09:50:48.999 250273 DEBUG oslo_concurrency.lockutils [req-cdb4dfea-5298-43e2-8306-8c325e2ce891 req-a845b21c-7034-44c5-8677-94a09124d29c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-bc34e547-193c-4d2f-83ca-79f1ddcc0613" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:50:49 np0005593232 nova_compute[250269]: 2026-01-23 09:50:48.999 250273 DEBUG oslo_concurrency.lockutils [req-cdb4dfea-5298-43e2-8306-8c325e2ce891 req-a845b21c-7034-44c5-8677-94a09124d29c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-bc34e547-193c-4d2f-83ca-79f1ddcc0613" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:50:49 np0005593232 nova_compute[250269]: 2026-01-23 09:50:48.999 250273 DEBUG nova.network.neutron [req-cdb4dfea-5298-43e2-8306-8c325e2ce891 req-a845b21c-7034-44c5-8677-94a09124d29c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Refreshing network info cache for port 36cd571d-7acb-4373-8e01-cd61dc0c3735 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:50:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:50:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:49.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:50:49 np0005593232 systemd[1]: Started libpod-conmon-8d048b635ac9b261a1bcff41024783d5cb6c48953b843d2de8015828e2ba0ecf.scope.
Jan 23 04:50:49 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:50:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:50:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:49.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:50:49 np0005593232 podman[296088]: 2026-01-23 09:50:49.5730981 +0000 UTC m=+1.432448642 container init 8d048b635ac9b261a1bcff41024783d5cb6c48953b843d2de8015828e2ba0ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:50:49 np0005593232 podman[296088]: 2026-01-23 09:50:49.582268952 +0000 UTC m=+1.441619494 container start 8d048b635ac9b261a1bcff41024783d5cb6c48953b843d2de8015828e2ba0ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:50:49 np0005593232 loving_jones[296104]: 167 167
Jan 23 04:50:49 np0005593232 systemd[1]: libpod-8d048b635ac9b261a1bcff41024783d5cb6c48953b843d2de8015828e2ba0ecf.scope: Deactivated successfully.
Jan 23 04:50:49 np0005593232 podman[296088]: 2026-01-23 09:50:49.835692069 +0000 UTC m=+1.695042611 container attach 8d048b635ac9b261a1bcff41024783d5cb6c48953b843d2de8015828e2ba0ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_jones, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 04:50:49 np0005593232 podman[296088]: 2026-01-23 09:50:49.836191674 +0000 UTC m=+1.695542216 container died 8d048b635ac9b261a1bcff41024783d5cb6c48953b843d2de8015828e2ba0ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_jones, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:50:49 np0005593232 nova_compute[250269]: 2026-01-23 09:50:49.924 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:49 np0005593232 nova_compute[250269]: 2026-01-23 09:50:49.934 250273 DEBUG oslo_concurrency.lockutils [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Acquiring lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:50:49 np0005593232 nova_compute[250269]: 2026-01-23 09:50:49.935 250273 DEBUG oslo_concurrency.lockutils [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:50:49 np0005593232 nova_compute[250269]: 2026-01-23 09:50:49.935 250273 INFO nova.compute.manager [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Rebooting instance#033[00m
Jan 23 04:50:49 np0005593232 nova_compute[250269]: 2026-01-23 09:50:49.952 250273 DEBUG oslo_concurrency.lockutils [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Acquiring lock "refresh_cache-bc34e547-193c-4d2f-83ca-79f1ddcc0613" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:50:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:50:50 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c530685f12241d9961cbf884801c34a43283c6d96775ffc0fa1f54532e404cb3-merged.mount: Deactivated successfully.
Jan 23 04:50:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1775: 321 pgs: 321 active+clean; 260 MiB data, 688 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Jan 23 04:50:50 np0005593232 podman[296088]: 2026-01-23 09:50:50.508613522 +0000 UTC m=+2.367964064 container remove 8d048b635ac9b261a1bcff41024783d5cb6c48953b843d2de8015828e2ba0ecf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:50:50 np0005593232 systemd[1]: libpod-conmon-8d048b635ac9b261a1bcff41024783d5cb6c48953b843d2de8015828e2ba0ecf.scope: Deactivated successfully.
Jan 23 04:50:50 np0005593232 podman[296126]: 2026-01-23 09:50:50.674180417 +0000 UTC m=+0.083830389 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 23 04:50:50 np0005593232 podman[296152]: 2026-01-23 09:50:50.672424176 +0000 UTC m=+0.020213589 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:50:50 np0005593232 podman[296152]: 2026-01-23 09:50:50.815482237 +0000 UTC m=+0.163271630 container create 7dd704627af0f4990c87cdc65c64ed7757626ad727c40a49f5ba2433b329316a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mayer, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 23 04:50:50 np0005593232 systemd[1]: Started libpod-conmon-7dd704627af0f4990c87cdc65c64ed7757626ad727c40a49f5ba2433b329316a.scope.
Jan 23 04:50:50 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:50:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62f98565772c79cc277d96c30c9e0e12780f099126abb07c49101d437dea53a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:50:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62f98565772c79cc277d96c30c9e0e12780f099126abb07c49101d437dea53a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:50:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62f98565772c79cc277d96c30c9e0e12780f099126abb07c49101d437dea53a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:50:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f62f98565772c79cc277d96c30c9e0e12780f099126abb07c49101d437dea53a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:50:51 np0005593232 podman[296152]: 2026-01-23 09:50:51.188419942 +0000 UTC m=+0.536209365 container init 7dd704627af0f4990c87cdc65c64ed7757626ad727c40a49f5ba2433b329316a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mayer, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 04:50:51 np0005593232 podman[296152]: 2026-01-23 09:50:51.195924556 +0000 UTC m=+0.543713949 container start 7dd704627af0f4990c87cdc65c64ed7757626ad727c40a49f5ba2433b329316a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mayer, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 04:50:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:51.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:51.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:51 np0005593232 podman[296152]: 2026-01-23 09:50:51.362281363 +0000 UTC m=+0.710070786 container attach 7dd704627af0f4990c87cdc65c64ed7757626ad727c40a49f5ba2433b329316a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mayer, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 04:50:51 np0005593232 nova_compute[250269]: 2026-01-23 09:50:51.693 250273 DEBUG nova.network.neutron [req-cdb4dfea-5298-43e2-8306-8c325e2ce891 req-a845b21c-7034-44c5-8677-94a09124d29c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Updated VIF entry in instance network info cache for port 36cd571d-7acb-4373-8e01-cd61dc0c3735. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:50:51 np0005593232 nova_compute[250269]: 2026-01-23 09:50:51.695 250273 DEBUG nova.network.neutron [req-cdb4dfea-5298-43e2-8306-8c325e2ce891 req-a845b21c-7034-44c5-8677-94a09124d29c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Updating instance_info_cache with network_info: [{"id": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "address": "fa:16:3e:9a:2d:73", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36cd571d-7a", "ovs_interfaceid": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:50:51 np0005593232 nova_compute[250269]: 2026-01-23 09:50:51.721 250273 DEBUG oslo_concurrency.lockutils [req-cdb4dfea-5298-43e2-8306-8c325e2ce891 req-a845b21c-7034-44c5-8677-94a09124d29c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-bc34e547-193c-4d2f-83ca-79f1ddcc0613" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:50:51 np0005593232 nova_compute[250269]: 2026-01-23 09:50:51.722 250273 DEBUG oslo_concurrency.lockutils [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Acquired lock "refresh_cache-bc34e547-193c-4d2f-83ca-79f1ddcc0613" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:50:51 np0005593232 nova_compute[250269]: 2026-01-23 09:50:51.722 250273 DEBUG nova.network.neutron [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:50:52 np0005593232 flamboyant_mayer[296175]: {
Jan 23 04:50:52 np0005593232 flamboyant_mayer[296175]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:50:52 np0005593232 flamboyant_mayer[296175]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:50:52 np0005593232 flamboyant_mayer[296175]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:50:52 np0005593232 flamboyant_mayer[296175]:        "osd_id": 0,
Jan 23 04:50:52 np0005593232 flamboyant_mayer[296175]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:50:52 np0005593232 flamboyant_mayer[296175]:        "type": "bluestore"
Jan 23 04:50:52 np0005593232 flamboyant_mayer[296175]:    }
Jan 23 04:50:52 np0005593232 flamboyant_mayer[296175]: }
Jan 23 04:50:52 np0005593232 systemd[1]: libpod-7dd704627af0f4990c87cdc65c64ed7757626ad727c40a49f5ba2433b329316a.scope: Deactivated successfully.
Jan 23 04:50:52 np0005593232 podman[296152]: 2026-01-23 09:50:52.120835915 +0000 UTC m=+1.468625328 container died 7dd704627af0f4990c87cdc65c64ed7757626ad727c40a49f5ba2433b329316a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mayer, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 04:50:52 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f62f98565772c79cc277d96c30c9e0e12780f099126abb07c49101d437dea53a-merged.mount: Deactivated successfully.
Jan 23 04:50:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1776: 321 pgs: 321 active+clean; 261 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 136 op/s
Jan 23 04:50:52 np0005593232 podman[296152]: 2026-01-23 09:50:52.718728412 +0000 UTC m=+2.066517805 container remove 7dd704627af0f4990c87cdc65c64ed7757626ad727c40a49f5ba2433b329316a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:50:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:50:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:50:52 np0005593232 systemd[1]: libpod-conmon-7dd704627af0f4990c87cdc65c64ed7757626ad727c40a49f5ba2433b329316a.scope: Deactivated successfully.
Jan 23 04:50:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:50:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:50:52 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 8a758788-c8e4-42ba-b5b9-6aaa6e8d6e76 does not exist
Jan 23 04:50:52 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d42100b2-e709-4e12-89f0-3d22056dab69 does not exist
Jan 23 04:50:52 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 9874aa1a-198f-4a9c-a448-45537e04b6cd does not exist
Jan 23 04:50:52 np0005593232 nova_compute[250269]: 2026-01-23 09:50:52.957 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:50:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:53.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:50:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:50:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:53.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:50:53 np0005593232 nova_compute[250269]: 2026-01-23 09:50:53.740 250273 DEBUG nova.network.neutron [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Updating instance_info_cache with network_info: [{"id": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "address": "fa:16:3e:9a:2d:73", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36cd571d-7a", "ovs_interfaceid": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:50:53 np0005593232 nova_compute[250269]: 2026-01-23 09:50:53.771 250273 DEBUG oslo_concurrency.lockutils [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Releasing lock "refresh_cache-bc34e547-193c-4d2f-83ca-79f1ddcc0613" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:50:53 np0005593232 nova_compute[250269]: 2026-01-23 09:50:53.772 250273 DEBUG nova.compute.manager [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:50:53 np0005593232 kernel: tap36cd571d-7a (unregistering): left promiscuous mode
Jan 23 04:50:53 np0005593232 NetworkManager[49057]: <info>  [1769161853.9196] device (tap36cd571d-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:50:53 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:50:53 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:50:53 np0005593232 ovn_controller[151001]: 2026-01-23T09:50:53Z|00208|binding|INFO|Releasing lport 36cd571d-7acb-4373-8e01-cd61dc0c3735 from this chassis (sb_readonly=0)
Jan 23 04:50:53 np0005593232 nova_compute[250269]: 2026-01-23 09:50:53.930 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:53 np0005593232 ovn_controller[151001]: 2026-01-23T09:50:53Z|00209|binding|INFO|Setting lport 36cd571d-7acb-4373-8e01-cd61dc0c3735 down in Southbound
Jan 23 04:50:53 np0005593232 ovn_controller[151001]: 2026-01-23T09:50:53Z|00210|binding|INFO|Removing iface tap36cd571d-7a ovn-installed in OVS
Jan 23 04:50:53 np0005593232 nova_compute[250269]: 2026-01-23 09:50:53.935 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:53.940 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:2d:73 10.100.0.7'], port_security=['fa:16:3e:9a:2d:73 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bc34e547-193c-4d2f-83ca-79f1ddcc0613', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79c61601-48fe-4c3b-aac8-5ed602fc7629', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b8b9b5c378f24327912b08252b3c9636', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'ba9579fc-88ad-41cb-b94e-0302604f4fcc e4a4dbcd-7281-4e7f-8cb5-f1256115af4e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90d3d7bd-b3df-4803-a42b-299b97e45f23, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=36cd571d-7acb-4373-8e01-cd61dc0c3735) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:50:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:53.941 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 36cd571d-7acb-4373-8e01-cd61dc0c3735 in datapath 79c61601-48fe-4c3b-aac8-5ed602fc7629 unbound from our chassis#033[00m
Jan 23 04:50:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:53.942 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79c61601-48fe-4c3b-aac8-5ed602fc7629#033[00m
Jan 23 04:50:53 np0005593232 nova_compute[250269]: 2026-01-23 09:50:53.947 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:53.958 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2c0bed5a-422c-47ac-b219-1fdbdb4e3ac5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:50:53 np0005593232 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000046.scope: Deactivated successfully.
Jan 23 04:50:53 np0005593232 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000046.scope: Consumed 9.209s CPU time.
Jan 23 04:50:53 np0005593232 systemd-machined[215836]: Machine qemu-26-instance-00000046 terminated.
Jan 23 04:50:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:53.992 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[b2779d5b-a9a3-4436-bbec-c5f58978befa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:50:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:53.996 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[3aefc620-cfc2-48f3-b8cc-97d9460b57ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:50:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:54.023 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[3be16fc7-9ffa-464d-9679-d9114a875211]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:50:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:54.043 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff75760-0503-4eb8-ad15-e57d365d80df]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79c61601-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:1e:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570535, 'reachable_time': 27629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296274, 'error': None, 'target': 'ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:50:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:54.059 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[48b35846-432e-47ee-ab57-919c1caa4552]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap79c61601-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570546, 'tstamp': 570546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296275, 'error': None, 'target': 'ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap79c61601-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570549, 'tstamp': 570549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296275, 'error': None, 'target': 'ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:50:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:54.061 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79c61601-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.062 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.067 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:54.067 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79c61601-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:50:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:54.068 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:50:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:54.068 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79c61601-40, col_values=(('external_ids', {'iface-id': '3b1c09cf-09e8-4a4c-8010-b5b376d985cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:50:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:54.069 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.128 250273 INFO nova.virt.libvirt.driver [-] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Instance destroyed successfully.#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.129 250273 DEBUG nova.objects.instance [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lazy-loading 'resources' on Instance uuid bc34e547-193c-4d2f-83ca-79f1ddcc0613 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.150 250273 DEBUG nova.virt.libvirt.vif [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:50:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1935390298',display_name='tempest-SecurityGroupsTestJSON-server-1935390298',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1935390298',id=70,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:50:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b8b9b5c378f24327912b08252b3c9636',ramdisk_id='',reservation_id='r-xc3tix1h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-432973814',owner_user_name='tempest-SecurityGroupsTestJSON-432973814-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:50:53Z,user_data=None,user_id='4cb83b8ddd0644f898d4be1f7de0b930',uuid=bc34e547-193c-4d2f-83ca-79f1ddcc0613,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "address": "fa:16:3e:9a:2d:73", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36cd571d-7a", "ovs_interfaceid": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.150 250273 DEBUG nova.network.os_vif_util [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Converting VIF {"id": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "address": "fa:16:3e:9a:2d:73", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36cd571d-7a", "ovs_interfaceid": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.151 250273 DEBUG nova.network.os_vif_util [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:2d:73,bridge_name='br-int',has_traffic_filtering=True,id=36cd571d-7acb-4373-8e01-cd61dc0c3735,network=Network(79c61601-48fe-4c3b-aac8-5ed602fc7629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36cd571d-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.151 250273 DEBUG os_vif [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:2d:73,bridge_name='br-int',has_traffic_filtering=True,id=36cd571d-7acb-4373-8e01-cd61dc0c3735,network=Network(79c61601-48fe-4c3b-aac8-5ed602fc7629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36cd571d-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.153 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.153 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36cd571d-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.156 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.158 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.158 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.161 250273 INFO os_vif [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:2d:73,bridge_name='br-int',has_traffic_filtering=True,id=36cd571d-7acb-4373-8e01-cd61dc0c3735,network=Network(79c61601-48fe-4c3b-aac8-5ed602fc7629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36cd571d-7a')#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.168 250273 DEBUG nova.virt.libvirt.driver [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Start _get_guest_xml network_info=[{"id": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "address": "fa:16:3e:9a:2d:73", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36cd571d-7a", "ovs_interfaceid": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.173 250273 WARNING nova.virt.libvirt.driver [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.180 250273 DEBUG nova.virt.libvirt.host [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.181 250273 DEBUG nova.virt.libvirt.host [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.184 250273 DEBUG nova.virt.libvirt.host [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.185 250273 DEBUG nova.virt.libvirt.host [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.186 250273 DEBUG nova.virt.libvirt.driver [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.186 250273 DEBUG nova.virt.hardware [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.186 250273 DEBUG nova.virt.hardware [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.187 250273 DEBUG nova.virt.hardware [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.187 250273 DEBUG nova.virt.hardware [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.187 250273 DEBUG nova.virt.hardware [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.188 250273 DEBUG nova.virt.hardware [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.188 250273 DEBUG nova.virt.hardware [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.188 250273 DEBUG nova.virt.hardware [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.188 250273 DEBUG nova.virt.hardware [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.189 250273 DEBUG nova.virt.hardware [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.189 250273 DEBUG nova.virt.hardware [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.189 250273 DEBUG nova.objects.instance [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lazy-loading 'vcpu_model' on Instance uuid bc34e547-193c-4d2f-83ca-79f1ddcc0613 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.210 250273 DEBUG oslo_concurrency.processutils [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.267 250273 DEBUG nova.compute.manager [req-45e6e301-c769-4da6-94de-651647b107f2 req-e4cf2d4c-634c-4a04-84b8-60a698ad2b30 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Received event network-vif-unplugged-36cd571d-7acb-4373-8e01-cd61dc0c3735 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.268 250273 DEBUG oslo_concurrency.lockutils [req-45e6e301-c769-4da6-94de-651647b107f2 req-e4cf2d4c-634c-4a04-84b8-60a698ad2b30 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.269 250273 DEBUG oslo_concurrency.lockutils [req-45e6e301-c769-4da6-94de-651647b107f2 req-e4cf2d4c-634c-4a04-84b8-60a698ad2b30 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.269 250273 DEBUG oslo_concurrency.lockutils [req-45e6e301-c769-4da6-94de-651647b107f2 req-e4cf2d4c-634c-4a04-84b8-60a698ad2b30 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.269 250273 DEBUG nova.compute.manager [req-45e6e301-c769-4da6-94de-651647b107f2 req-e4cf2d4c-634c-4a04-84b8-60a698ad2b30 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] No waiting events found dispatching network-vif-unplugged-36cd571d-7acb-4373-8e01-cd61dc0c3735 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.270 250273 WARNING nova.compute.manager [req-45e6e301-c769-4da6-94de-651647b107f2 req-e4cf2d4c-634c-4a04-84b8-60a698ad2b30 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Received unexpected event network-vif-unplugged-36cd571d-7acb-4373-8e01-cd61dc0c3735 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Jan 23 04:50:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:54.292 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.294 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:54.295 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:50:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1777: 321 pgs: 321 active+clean; 286 MiB data, 748 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 168 op/s
Jan 23 04:50:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:50:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/648159280' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.662 250273 DEBUG oslo_concurrency.processutils [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.711 250273 DEBUG oslo_concurrency.processutils [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:50:54 np0005593232 nova_compute[250269]: 2026-01-23 09:50:54.922 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.161 250273 DEBUG oslo_concurrency.processutils [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.163 250273 DEBUG nova.virt.libvirt.vif [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:50:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1935390298',display_name='tempest-SecurityGroupsTestJSON-server-1935390298',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1935390298',id=70,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:50:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b8b9b5c378f24327912b08252b3c9636',ramdisk_id='',reservation_id='r-xc3tix1h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-432973814',owner_user_name='tempest-SecurityGroupsTestJSON-432973814-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:50:53Z,user_data=None,user_id='4cb83b8ddd0644f898d4be1f7de0b930',uuid=bc34e547-193c-4d2f-83ca-79f1ddcc0613,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "address": "fa:16:3e:9a:2d:73", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36cd571d-7a", "ovs_interfaceid": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.163 250273 DEBUG nova.network.os_vif_util [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Converting VIF {"id": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "address": "fa:16:3e:9a:2d:73", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36cd571d-7a", "ovs_interfaceid": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.164 250273 DEBUG nova.network.os_vif_util [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:2d:73,bridge_name='br-int',has_traffic_filtering=True,id=36cd571d-7acb-4373-8e01-cd61dc0c3735,network=Network(79c61601-48fe-4c3b-aac8-5ed602fc7629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36cd571d-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.166 250273 DEBUG nova.objects.instance [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lazy-loading 'pci_devices' on Instance uuid bc34e547-193c-4d2f-83ca-79f1ddcc0613 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.189 250273 DEBUG nova.virt.libvirt.driver [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:50:55 np0005593232 nova_compute[250269]:  <uuid>bc34e547-193c-4d2f-83ca-79f1ddcc0613</uuid>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:  <name>instance-00000046</name>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <nova:name>tempest-SecurityGroupsTestJSON-server-1935390298</nova:name>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:50:54</nova:creationTime>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:50:55 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:        <nova:user uuid="4cb83b8ddd0644f898d4be1f7de0b930">tempest-SecurityGroupsTestJSON-432973814-project-member</nova:user>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:        <nova:project uuid="b8b9b5c378f24327912b08252b3c9636">tempest-SecurityGroupsTestJSON-432973814</nova:project>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:        <nova:port uuid="36cd571d-7acb-4373-8e01-cd61dc0c3735">
Jan 23 04:50:55 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <entry name="serial">bc34e547-193c-4d2f-83ca-79f1ddcc0613</entry>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <entry name="uuid">bc34e547-193c-4d2f-83ca-79f1ddcc0613</entry>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/bc34e547-193c-4d2f-83ca-79f1ddcc0613_disk">
Jan 23 04:50:55 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:50:55 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/bc34e547-193c-4d2f-83ca-79f1ddcc0613_disk.config">
Jan 23 04:50:55 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:50:55 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:9a:2d:73"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <target dev="tap36cd571d-7a"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/bc34e547-193c-4d2f-83ca-79f1ddcc0613/console.log" append="off"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <input type="keyboard" bus="usb"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:50:55 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:50:55 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:50:55 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:50:55 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.192 250273 DEBUG nova.virt.libvirt.driver [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.192 250273 DEBUG nova.virt.libvirt.driver [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] skipping disk for instance-00000046 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.193 250273 DEBUG nova.virt.libvirt.vif [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:50:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1935390298',display_name='tempest-SecurityGroupsTestJSON-server-1935390298',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1935390298',id=70,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:50:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='b8b9b5c378f24327912b08252b3c9636',ramdisk_id='',reservation_id='r-xc3tix1h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='v
irtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-432973814',owner_user_name='tempest-SecurityGroupsTestJSON-432973814-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:50:53Z,user_data=None,user_id='4cb83b8ddd0644f898d4be1f7de0b930',uuid=bc34e547-193c-4d2f-83ca-79f1ddcc0613,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "address": "fa:16:3e:9a:2d:73", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36cd571d-7a", "ovs_interfaceid": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.193 250273 DEBUG nova.network.os_vif_util [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Converting VIF {"id": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "address": "fa:16:3e:9a:2d:73", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36cd571d-7a", "ovs_interfaceid": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.194 250273 DEBUG nova.network.os_vif_util [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:2d:73,bridge_name='br-int',has_traffic_filtering=True,id=36cd571d-7acb-4373-8e01-cd61dc0c3735,network=Network(79c61601-48fe-4c3b-aac8-5ed602fc7629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36cd571d-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.194 250273 DEBUG os_vif [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:2d:73,bridge_name='br-int',has_traffic_filtering=True,id=36cd571d-7acb-4373-8e01-cd61dc0c3735,network=Network(79c61601-48fe-4c3b-aac8-5ed602fc7629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36cd571d-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.195 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.195 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.196 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.198 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.199 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap36cd571d-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.199 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap36cd571d-7a, col_values=(('external_ids', {'iface-id': '36cd571d-7acb-4373-8e01-cd61dc0c3735', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9a:2d:73', 'vm-uuid': 'bc34e547-193c-4d2f-83ca-79f1ddcc0613'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.201 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:55 np0005593232 NetworkManager[49057]: <info>  [1769161855.2020] manager: (tap36cd571d-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.206 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.208 250273 INFO os_vif [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:2d:73,bridge_name='br-int',has_traffic_filtering=True,id=36cd571d-7acb-4373-8e01-cd61dc0c3735,network=Network(79c61601-48fe-4c3b-aac8-5ed602fc7629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36cd571d-7a')#033[00m
Jan 23 04:50:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:50:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:55.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:50:55 np0005593232 kernel: tap36cd571d-7a: entered promiscuous mode
Jan 23 04:50:55 np0005593232 NetworkManager[49057]: <info>  [1769161855.2760] manager: (tap36cd571d-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/107)
Jan 23 04:50:55 np0005593232 systemd-udevd[296266]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:50:55 np0005593232 ovn_controller[151001]: 2026-01-23T09:50:55Z|00211|binding|INFO|Claiming lport 36cd571d-7acb-4373-8e01-cd61dc0c3735 for this chassis.
Jan 23 04:50:55 np0005593232 ovn_controller[151001]: 2026-01-23T09:50:55Z|00212|binding|INFO|36cd571d-7acb-4373-8e01-cd61dc0c3735: Claiming fa:16:3e:9a:2d:73 10.100.0.7
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.278 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:55.285 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:2d:73 10.100.0.7'], port_security=['fa:16:3e:9a:2d:73 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bc34e547-193c-4d2f-83ca-79f1ddcc0613', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79c61601-48fe-4c3b-aac8-5ed602fc7629', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b8b9b5c378f24327912b08252b3c9636', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'ba9579fc-88ad-41cb-b94e-0302604f4fcc e4a4dbcd-7281-4e7f-8cb5-f1256115af4e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90d3d7bd-b3df-4803-a42b-299b97e45f23, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=36cd571d-7acb-4373-8e01-cd61dc0c3735) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:50:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:55.286 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 36cd571d-7acb-4373-8e01-cd61dc0c3735 in datapath 79c61601-48fe-4c3b-aac8-5ed602fc7629 bound to our chassis#033[00m
Jan 23 04:50:55 np0005593232 NetworkManager[49057]: <info>  [1769161855.2879] device (tap36cd571d-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:50:55 np0005593232 NetworkManager[49057]: <info>  [1769161855.2886] device (tap36cd571d-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:50:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:55.289 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79c61601-48fe-4c3b-aac8-5ed602fc7629#033[00m
Jan 23 04:50:55 np0005593232 ovn_controller[151001]: 2026-01-23T09:50:55Z|00213|binding|INFO|Setting lport 36cd571d-7acb-4373-8e01-cd61dc0c3735 ovn-installed in OVS
Jan 23 04:50:55 np0005593232 ovn_controller[151001]: 2026-01-23T09:50:55Z|00214|binding|INFO|Setting lport 36cd571d-7acb-4373-8e01-cd61dc0c3735 up in Southbound
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.296 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.298 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:55.297 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:50:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:55.304 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[48cbf51a-78b0-4dde-bac3-56c025244355]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:50:55 np0005593232 systemd-machined[215836]: New machine qemu-27-instance-00000046.
Jan 23 04:50:55 np0005593232 systemd[1]: Started Virtual Machine qemu-27-instance-00000046.
Jan 23 04:50:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:55.336 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[fc188ec4-5373-4005-9577-b49bdaf90218]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:50:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:55.339 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[4fde4d04-e013-4a91-a419-3d07bab95e90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:50:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:55.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:55.369 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[36ce5453-10a7-4a3c-a831-8f7c011905b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:50:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:55.384 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e037f20f-3bf0-485c-86b9-97a9840ebb04]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79c61601-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:1e:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570535, 'reachable_time': 27629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296377, 'error': None, 'target': 'ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:50:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:55.398 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c65671f5-1b67-4c75-8548-98a12050a80e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap79c61601-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570546, 'tstamp': 570546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296378, 'error': None, 'target': 'ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap79c61601-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570549, 'tstamp': 570549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296378, 'error': None, 'target': 'ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:50:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:55.399 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79c61601-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.400 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.401 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:55.403 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79c61601-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:50:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:55.403 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:50:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:55.404 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79c61601-40, col_values=(('external_ids', {'iface-id': '3b1c09cf-09e8-4a4c-8010-b5b376d985cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:50:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:55.404 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.857 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Removed pending event for bc34e547-193c-4d2f-83ca-79f1ddcc0613 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.857 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161855.8567407, bc34e547-193c-4d2f-83ca-79f1ddcc0613 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.858 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.860 250273 DEBUG nova.compute.manager [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.863 250273 INFO nova.virt.libvirt.driver [-] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Instance rebooted successfully.#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.863 250273 DEBUG nova.compute.manager [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.996 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:50:55 np0005593232 nova_compute[250269]: 2026-01-23 09:50:55.999 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:50:56 np0005593232 nova_compute[250269]: 2026-01-23 09:50:56.032 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Jan 23 04:50:56 np0005593232 nova_compute[250269]: 2026-01-23 09:50:56.033 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161855.8575747, bc34e547-193c-4d2f-83ca-79f1ddcc0613 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:50:56 np0005593232 nova_compute[250269]: 2026-01-23 09:50:56.033 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] VM Started (Lifecycle Event)#033[00m
Jan 23 04:50:56 np0005593232 nova_compute[250269]: 2026-01-23 09:50:56.056 250273 DEBUG oslo_concurrency.lockutils [None req-7d51968d-1ec6-41f4-8843-f0ebf0cdd86a 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 6.121s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:50:56 np0005593232 nova_compute[250269]: 2026-01-23 09:50:56.061 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:50:56 np0005593232 nova_compute[250269]: 2026-01-23 09:50:56.065 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:50:56 np0005593232 nova_compute[250269]: 2026-01-23 09:50:56.380 250273 DEBUG nova.compute.manager [req-f0833459-47cc-4f3f-a48a-0f9849aef8a9 req-234a4afd-c766-4546-ad0e-1b342ad491b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Received event network-vif-plugged-36cd571d-7acb-4373-8e01-cd61dc0c3735 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:50:56 np0005593232 nova_compute[250269]: 2026-01-23 09:50:56.381 250273 DEBUG oslo_concurrency.lockutils [req-f0833459-47cc-4f3f-a48a-0f9849aef8a9 req-234a4afd-c766-4546-ad0e-1b342ad491b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:50:56 np0005593232 nova_compute[250269]: 2026-01-23 09:50:56.381 250273 DEBUG oslo_concurrency.lockutils [req-f0833459-47cc-4f3f-a48a-0f9849aef8a9 req-234a4afd-c766-4546-ad0e-1b342ad491b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:50:56 np0005593232 nova_compute[250269]: 2026-01-23 09:50:56.381 250273 DEBUG oslo_concurrency.lockutils [req-f0833459-47cc-4f3f-a48a-0f9849aef8a9 req-234a4afd-c766-4546-ad0e-1b342ad491b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:50:56 np0005593232 nova_compute[250269]: 2026-01-23 09:50:56.381 250273 DEBUG nova.compute.manager [req-f0833459-47cc-4f3f-a48a-0f9849aef8a9 req-234a4afd-c766-4546-ad0e-1b342ad491b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] No waiting events found dispatching network-vif-plugged-36cd571d-7acb-4373-8e01-cd61dc0c3735 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:50:56 np0005593232 nova_compute[250269]: 2026-01-23 09:50:56.382 250273 WARNING nova.compute.manager [req-f0833459-47cc-4f3f-a48a-0f9849aef8a9 req-234a4afd-c766-4546-ad0e-1b342ad491b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Received unexpected event network-vif-plugged-36cd571d-7acb-4373-8e01-cd61dc0c3735 for instance with vm_state active and task_state None.#033[00m
Jan 23 04:50:56 np0005593232 nova_compute[250269]: 2026-01-23 09:50:56.382 250273 DEBUG nova.compute.manager [req-f0833459-47cc-4f3f-a48a-0f9849aef8a9 req-234a4afd-c766-4546-ad0e-1b342ad491b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Received event network-vif-plugged-36cd571d-7acb-4373-8e01-cd61dc0c3735 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:50:56 np0005593232 nova_compute[250269]: 2026-01-23 09:50:56.382 250273 DEBUG oslo_concurrency.lockutils [req-f0833459-47cc-4f3f-a48a-0f9849aef8a9 req-234a4afd-c766-4546-ad0e-1b342ad491b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:50:56 np0005593232 nova_compute[250269]: 2026-01-23 09:50:56.383 250273 DEBUG oslo_concurrency.lockutils [req-f0833459-47cc-4f3f-a48a-0f9849aef8a9 req-234a4afd-c766-4546-ad0e-1b342ad491b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:50:56 np0005593232 nova_compute[250269]: 2026-01-23 09:50:56.383 250273 DEBUG oslo_concurrency.lockutils [req-f0833459-47cc-4f3f-a48a-0f9849aef8a9 req-234a4afd-c766-4546-ad0e-1b342ad491b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:50:56 np0005593232 nova_compute[250269]: 2026-01-23 09:50:56.383 250273 DEBUG nova.compute.manager [req-f0833459-47cc-4f3f-a48a-0f9849aef8a9 req-234a4afd-c766-4546-ad0e-1b342ad491b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] No waiting events found dispatching network-vif-plugged-36cd571d-7acb-4373-8e01-cd61dc0c3735 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:50:56 np0005593232 nova_compute[250269]: 2026-01-23 09:50:56.384 250273 WARNING nova.compute.manager [req-f0833459-47cc-4f3f-a48a-0f9849aef8a9 req-234a4afd-c766-4546-ad0e-1b342ad491b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Received unexpected event network-vif-plugged-36cd571d-7acb-4373-8e01-cd61dc0c3735 for instance with vm_state active and task_state None.#033[00m
Jan 23 04:50:56 np0005593232 nova_compute[250269]: 2026-01-23 09:50:56.384 250273 DEBUG nova.compute.manager [req-f0833459-47cc-4f3f-a48a-0f9849aef8a9 req-234a4afd-c766-4546-ad0e-1b342ad491b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Received event network-vif-plugged-36cd571d-7acb-4373-8e01-cd61dc0c3735 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:50:56 np0005593232 nova_compute[250269]: 2026-01-23 09:50:56.384 250273 DEBUG oslo_concurrency.lockutils [req-f0833459-47cc-4f3f-a48a-0f9849aef8a9 req-234a4afd-c766-4546-ad0e-1b342ad491b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:50:56 np0005593232 nova_compute[250269]: 2026-01-23 09:50:56.385 250273 DEBUG oslo_concurrency.lockutils [req-f0833459-47cc-4f3f-a48a-0f9849aef8a9 req-234a4afd-c766-4546-ad0e-1b342ad491b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:50:56 np0005593232 nova_compute[250269]: 2026-01-23 09:50:56.385 250273 DEBUG oslo_concurrency.lockutils [req-f0833459-47cc-4f3f-a48a-0f9849aef8a9 req-234a4afd-c766-4546-ad0e-1b342ad491b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:50:56 np0005593232 nova_compute[250269]: 2026-01-23 09:50:56.385 250273 DEBUG nova.compute.manager [req-f0833459-47cc-4f3f-a48a-0f9849aef8a9 req-234a4afd-c766-4546-ad0e-1b342ad491b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] No waiting events found dispatching network-vif-plugged-36cd571d-7acb-4373-8e01-cd61dc0c3735 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:50:56 np0005593232 nova_compute[250269]: 2026-01-23 09:50:56.385 250273 WARNING nova.compute.manager [req-f0833459-47cc-4f3f-a48a-0f9849aef8a9 req-234a4afd-c766-4546-ad0e-1b342ad491b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Received unexpected event network-vif-plugged-36cd571d-7acb-4373-8e01-cd61dc0c3735 for instance with vm_state active and task_state None.#033[00m
Jan 23 04:50:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1778: 321 pgs: 321 active+clean; 292 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 183 op/s
Jan 23 04:50:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:50:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:57.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:50:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:57.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:57 np0005593232 podman[296422]: 2026-01-23 09:50:57.423672132 +0000 UTC m=+0.082605203 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 23 04:50:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1779: 321 pgs: 321 active+clean; 268 MiB data, 741 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.0 MiB/s wr, 291 op/s
Jan 23 04:50:58 np0005593232 nova_compute[250269]: 2026-01-23 09:50:58.785 250273 DEBUG nova.compute.manager [req-e6ecd5d5-8662-4464-ae6a-c618aed9e442 req-e3a3dfc0-81a9-4982-9cb3-933a2a2ed3d3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Received event network-changed-36cd571d-7acb-4373-8e01-cd61dc0c3735 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:50:58 np0005593232 nova_compute[250269]: 2026-01-23 09:50:58.787 250273 DEBUG nova.compute.manager [req-e6ecd5d5-8662-4464-ae6a-c618aed9e442 req-e3a3dfc0-81a9-4982-9cb3-933a2a2ed3d3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Refreshing instance network info cache due to event network-changed-36cd571d-7acb-4373-8e01-cd61dc0c3735. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:50:58 np0005593232 nova_compute[250269]: 2026-01-23 09:50:58.787 250273 DEBUG oslo_concurrency.lockutils [req-e6ecd5d5-8662-4464-ae6a-c618aed9e442 req-e3a3dfc0-81a9-4982-9cb3-933a2a2ed3d3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-bc34e547-193c-4d2f-83ca-79f1ddcc0613" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:50:58 np0005593232 nova_compute[250269]: 2026-01-23 09:50:58.788 250273 DEBUG oslo_concurrency.lockutils [req-e6ecd5d5-8662-4464-ae6a-c618aed9e442 req-e3a3dfc0-81a9-4982-9cb3-933a2a2ed3d3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-bc34e547-193c-4d2f-83ca-79f1ddcc0613" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:50:58 np0005593232 nova_compute[250269]: 2026-01-23 09:50:58.788 250273 DEBUG nova.network.neutron [req-e6ecd5d5-8662-4464-ae6a-c618aed9e442 req-e3a3dfc0-81a9-4982-9cb3-933a2a2ed3d3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Refreshing network info cache for port 36cd571d-7acb-4373-8e01-cd61dc0c3735 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:50:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:50:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:50:59.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:50:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:50:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:50:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:50:59.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:50:59 np0005593232 nova_compute[250269]: 2026-01-23 09:50:59.465 250273 DEBUG oslo_concurrency.lockutils [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Acquiring lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:50:59 np0005593232 nova_compute[250269]: 2026-01-23 09:50:59.466 250273 DEBUG oslo_concurrency.lockutils [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:50:59 np0005593232 nova_compute[250269]: 2026-01-23 09:50:59.466 250273 DEBUG oslo_concurrency.lockutils [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Acquiring lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:50:59 np0005593232 nova_compute[250269]: 2026-01-23 09:50:59.467 250273 DEBUG oslo_concurrency.lockutils [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:50:59 np0005593232 nova_compute[250269]: 2026-01-23 09:50:59.467 250273 DEBUG oslo_concurrency.lockutils [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:50:59 np0005593232 nova_compute[250269]: 2026-01-23 09:50:59.468 250273 INFO nova.compute.manager [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Terminating instance#033[00m
Jan 23 04:50:59 np0005593232 nova_compute[250269]: 2026-01-23 09:50:59.469 250273 DEBUG nova.compute.manager [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 04:50:59 np0005593232 kernel: tap36cd571d-7a (unregistering): left promiscuous mode
Jan 23 04:50:59 np0005593232 NetworkManager[49057]: <info>  [1769161859.5067] device (tap36cd571d-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:50:59 np0005593232 ovn_controller[151001]: 2026-01-23T09:50:59Z|00215|binding|INFO|Releasing lport 36cd571d-7acb-4373-8e01-cd61dc0c3735 from this chassis (sb_readonly=0)
Jan 23 04:50:59 np0005593232 ovn_controller[151001]: 2026-01-23T09:50:59Z|00216|binding|INFO|Setting lport 36cd571d-7acb-4373-8e01-cd61dc0c3735 down in Southbound
Jan 23 04:50:59 np0005593232 ovn_controller[151001]: 2026-01-23T09:50:59Z|00217|binding|INFO|Removing iface tap36cd571d-7a ovn-installed in OVS
Jan 23 04:50:59 np0005593232 nova_compute[250269]: 2026-01-23 09:50:59.514 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:59 np0005593232 nova_compute[250269]: 2026-01-23 09:50:59.517 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:59.523 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9a:2d:73 10.100.0.7'], port_security=['fa:16:3e:9a:2d:73 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'bc34e547-193c-4d2f-83ca-79f1ddcc0613', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79c61601-48fe-4c3b-aac8-5ed602fc7629', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b8b9b5c378f24327912b08252b3c9636', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'ba9579fc-88ad-41cb-b94e-0302604f4fcc e4a4dbcd-7281-4e7f-8cb5-f1256115af4e f0e9bcef-7d10-47df-8fd7-6d37f89c8dc0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90d3d7bd-b3df-4803-a42b-299b97e45f23, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=36cd571d-7acb-4373-8e01-cd61dc0c3735) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:50:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:59.525 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 36cd571d-7acb-4373-8e01-cd61dc0c3735 in datapath 79c61601-48fe-4c3b-aac8-5ed602fc7629 unbound from our chassis#033[00m
Jan 23 04:50:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:59.528 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79c61601-48fe-4c3b-aac8-5ed602fc7629#033[00m
Jan 23 04:50:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:59.546 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b5902980-4b0d-4977-b7b9-85bfdbbe193b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:50:59 np0005593232 nova_compute[250269]: 2026-01-23 09:50:59.576 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:59 np0005593232 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000046.scope: Deactivated successfully.
Jan 23 04:50:59 np0005593232 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000046.scope: Consumed 4.379s CPU time.
Jan 23 04:50:59 np0005593232 systemd-machined[215836]: Machine qemu-27-instance-00000046 terminated.
Jan 23 04:50:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:59.606 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[330db320-7612-4ea7-b3f7-23d94a66efde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:50:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:59.608 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[c38aecf9-5846-4c6d-9e40-06e4a65fe74a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:50:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:59.642 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[24841465-c0b4-419b-bdbe-88f85b028034]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:50:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:59.661 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ae4d7a-c334-4170-b111-da9f46bb7052]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79c61601-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ab:1e:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570535, 'reachable_time': 27629, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296504, 'error': None, 'target': 'ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:50:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:59.681 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7debbaa5-7ee8-4083-92fa-af94144913b3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap79c61601-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570546, 'tstamp': 570546}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296505, 'error': None, 'target': 'ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap79c61601-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 570549, 'tstamp': 570549}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296505, 'error': None, 'target': 'ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:50:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:59.683 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79c61601-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:50:59 np0005593232 nova_compute[250269]: 2026-01-23 09:50:59.684 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:59 np0005593232 nova_compute[250269]: 2026-01-23 09:50:59.689 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:59.689 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79c61601-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:50:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:59.690 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:50:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:59.691 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79c61601-40, col_values=(('external_ids', {'iface-id': '3b1c09cf-09e8-4a4c-8010-b5b376d985cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:50:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:50:59.691 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:50:59 np0005593232 nova_compute[250269]: 2026-01-23 09:50:59.714 250273 INFO nova.virt.libvirt.driver [-] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Instance destroyed successfully.#033[00m
Jan 23 04:50:59 np0005593232 nova_compute[250269]: 2026-01-23 09:50:59.714 250273 DEBUG nova.objects.instance [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lazy-loading 'resources' on Instance uuid bc34e547-193c-4d2f-83ca-79f1ddcc0613 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:50:59 np0005593232 nova_compute[250269]: 2026-01-23 09:50:59.757 250273 DEBUG nova.virt.libvirt.vif [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:50:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1935390298',display_name='tempest-SecurityGroupsTestJSON-server-1935390298',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1935390298',id=70,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:50:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b8b9b5c378f24327912b08252b3c9636',ramdisk_id='',reservation_id='r-xc3tix1h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-432973814',owner_user_name='tempest-SecurityGroupsTestJSON-432973814-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:50:56Z,user_data=None,user_id='4cb83b8ddd0644f898d4be1f7de0b930',uuid=bc34e547-193c-4d2f-83ca-79f1ddcc0613,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "address": "fa:16:3e:9a:2d:73", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36cd571d-7a", "ovs_interfaceid": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:50:59 np0005593232 nova_compute[250269]: 2026-01-23 09:50:59.758 250273 DEBUG nova.network.os_vif_util [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Converting VIF {"id": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "address": "fa:16:3e:9a:2d:73", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36cd571d-7a", "ovs_interfaceid": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:50:59 np0005593232 nova_compute[250269]: 2026-01-23 09:50:59.760 250273 DEBUG nova.network.os_vif_util [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9a:2d:73,bridge_name='br-int',has_traffic_filtering=True,id=36cd571d-7acb-4373-8e01-cd61dc0c3735,network=Network(79c61601-48fe-4c3b-aac8-5ed602fc7629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36cd571d-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:50:59 np0005593232 nova_compute[250269]: 2026-01-23 09:50:59.761 250273 DEBUG os_vif [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:2d:73,bridge_name='br-int',has_traffic_filtering=True,id=36cd571d-7acb-4373-8e01-cd61dc0c3735,network=Network(79c61601-48fe-4c3b-aac8-5ed602fc7629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36cd571d-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:50:59 np0005593232 nova_compute[250269]: 2026-01-23 09:50:59.764 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:59 np0005593232 nova_compute[250269]: 2026-01-23 09:50:59.764 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36cd571d-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:50:59 np0005593232 nova_compute[250269]: 2026-01-23 09:50:59.767 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:59 np0005593232 nova_compute[250269]: 2026-01-23 09:50:59.769 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:50:59 np0005593232 nova_compute[250269]: 2026-01-23 09:50:59.771 250273 INFO os_vif [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9a:2d:73,bridge_name='br-int',has_traffic_filtering=True,id=36cd571d-7acb-4373-8e01-cd61dc0c3735,network=Network(79c61601-48fe-4c3b-aac8-5ed602fc7629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36cd571d-7a')#033[00m
Jan 23 04:50:59 np0005593232 nova_compute[250269]: 2026-01-23 09:50:59.923 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:50:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:51:00 np0005593232 nova_compute[250269]: 2026-01-23 09:51:00.229 250273 DEBUG nova.compute.manager [req-71acaeeb-9524-4ec0-aa92-e4181355fb3e req-245fb36b-137e-42a8-b46f-2db94af96fab 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Received event network-vif-unplugged-36cd571d-7acb-4373-8e01-cd61dc0c3735 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:51:00 np0005593232 nova_compute[250269]: 2026-01-23 09:51:00.229 250273 DEBUG oslo_concurrency.lockutils [req-71acaeeb-9524-4ec0-aa92-e4181355fb3e req-245fb36b-137e-42a8-b46f-2db94af96fab 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:51:00 np0005593232 nova_compute[250269]: 2026-01-23 09:51:00.229 250273 DEBUG oslo_concurrency.lockutils [req-71acaeeb-9524-4ec0-aa92-e4181355fb3e req-245fb36b-137e-42a8-b46f-2db94af96fab 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:51:00 np0005593232 nova_compute[250269]: 2026-01-23 09:51:00.230 250273 DEBUG oslo_concurrency.lockutils [req-71acaeeb-9524-4ec0-aa92-e4181355fb3e req-245fb36b-137e-42a8-b46f-2db94af96fab 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:51:00 np0005593232 nova_compute[250269]: 2026-01-23 09:51:00.230 250273 DEBUG nova.compute.manager [req-71acaeeb-9524-4ec0-aa92-e4181355fb3e req-245fb36b-137e-42a8-b46f-2db94af96fab 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] No waiting events found dispatching network-vif-unplugged-36cd571d-7acb-4373-8e01-cd61dc0c3735 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:51:00 np0005593232 nova_compute[250269]: 2026-01-23 09:51:00.230 250273 DEBUG nova.compute.manager [req-71acaeeb-9524-4ec0-aa92-e4181355fb3e req-245fb36b-137e-42a8-b46f-2db94af96fab 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Received event network-vif-unplugged-36cd571d-7acb-4373-8e01-cd61dc0c3735 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 04:51:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1780: 321 pgs: 321 active+clean; 268 MiB data, 741 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.9 MiB/s wr, 250 op/s
Jan 23 04:51:00 np0005593232 nova_compute[250269]: 2026-01-23 09:51:00.609 250273 DEBUG nova.network.neutron [req-e6ecd5d5-8662-4464-ae6a-c618aed9e442 req-e3a3dfc0-81a9-4982-9cb3-933a2a2ed3d3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Updated VIF entry in instance network info cache for port 36cd571d-7acb-4373-8e01-cd61dc0c3735. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:51:00 np0005593232 nova_compute[250269]: 2026-01-23 09:51:00.610 250273 DEBUG nova.network.neutron [req-e6ecd5d5-8662-4464-ae6a-c618aed9e442 req-e3a3dfc0-81a9-4982-9cb3-933a2a2ed3d3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Updating instance_info_cache with network_info: [{"id": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "address": "fa:16:3e:9a:2d:73", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36cd571d-7a", "ovs_interfaceid": "36cd571d-7acb-4373-8e01-cd61dc0c3735", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:51:00 np0005593232 nova_compute[250269]: 2026-01-23 09:51:00.724 250273 DEBUG oslo_concurrency.lockutils [req-e6ecd5d5-8662-4464-ae6a-c618aed9e442 req-e3a3dfc0-81a9-4982-9cb3-933a2a2ed3d3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-bc34e547-193c-4d2f-83ca-79f1ddcc0613" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:51:00 np0005593232 nova_compute[250269]: 2026-01-23 09:51:00.836 250273 INFO nova.virt.libvirt.driver [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Deleting instance files /var/lib/nova/instances/bc34e547-193c-4d2f-83ca-79f1ddcc0613_del#033[00m
Jan 23 04:51:00 np0005593232 nova_compute[250269]: 2026-01-23 09:51:00.837 250273 INFO nova.virt.libvirt.driver [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Deletion of /var/lib/nova/instances/bc34e547-193c-4d2f-83ca-79f1ddcc0613_del complete#033[00m
Jan 23 04:51:01 np0005593232 nova_compute[250269]: 2026-01-23 09:51:01.037 250273 INFO nova.compute.manager [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Took 1.57 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 04:51:01 np0005593232 nova_compute[250269]: 2026-01-23 09:51:01.038 250273 DEBUG oslo.service.loopingcall [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 04:51:01 np0005593232 nova_compute[250269]: 2026-01-23 09:51:01.038 250273 DEBUG nova.compute.manager [-] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 04:51:01 np0005593232 nova_compute[250269]: 2026-01-23 09:51:01.038 250273 DEBUG nova.network.neutron [-] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 04:51:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:51:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:01.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:51:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:01.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:02 np0005593232 nova_compute[250269]: 2026-01-23 09:51:02.443 250273 DEBUG nova.compute.manager [req-d71890ea-3d02-4a4c-8ac9-cd023481c8bc req-c6f3b387-975a-490f-abaa-61a68637c6ac 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Received event network-vif-plugged-36cd571d-7acb-4373-8e01-cd61dc0c3735 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:51:02 np0005593232 nova_compute[250269]: 2026-01-23 09:51:02.443 250273 DEBUG oslo_concurrency.lockutils [req-d71890ea-3d02-4a4c-8ac9-cd023481c8bc req-c6f3b387-975a-490f-abaa-61a68637c6ac 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:51:02 np0005593232 nova_compute[250269]: 2026-01-23 09:51:02.443 250273 DEBUG oslo_concurrency.lockutils [req-d71890ea-3d02-4a4c-8ac9-cd023481c8bc req-c6f3b387-975a-490f-abaa-61a68637c6ac 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:51:02 np0005593232 nova_compute[250269]: 2026-01-23 09:51:02.444 250273 DEBUG oslo_concurrency.lockutils [req-d71890ea-3d02-4a4c-8ac9-cd023481c8bc req-c6f3b387-975a-490f-abaa-61a68637c6ac 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:51:02 np0005593232 nova_compute[250269]: 2026-01-23 09:51:02.444 250273 DEBUG nova.compute.manager [req-d71890ea-3d02-4a4c-8ac9-cd023481c8bc req-c6f3b387-975a-490f-abaa-61a68637c6ac 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] No waiting events found dispatching network-vif-plugged-36cd571d-7acb-4373-8e01-cd61dc0c3735 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:51:02 np0005593232 nova_compute[250269]: 2026-01-23 09:51:02.444 250273 WARNING nova.compute.manager [req-d71890ea-3d02-4a4c-8ac9-cd023481c8bc req-c6f3b387-975a-490f-abaa-61a68637c6ac 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Received unexpected event network-vif-plugged-36cd571d-7acb-4373-8e01-cd61dc0c3735 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 04:51:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1781: 321 pgs: 321 active+clean; 249 MiB data, 744 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 4.3 MiB/s wr, 327 op/s
Jan 23 04:51:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:03.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:03.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:03 np0005593232 nova_compute[250269]: 2026-01-23 09:51:03.731 250273 DEBUG nova.network.neutron [-] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:51:03 np0005593232 nova_compute[250269]: 2026-01-23 09:51:03.762 250273 INFO nova.compute.manager [-] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Took 2.72 seconds to deallocate network for instance.#033[00m
Jan 23 04:51:03 np0005593232 nova_compute[250269]: 2026-01-23 09:51:03.828 250273 DEBUG oslo_concurrency.lockutils [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:51:03 np0005593232 nova_compute[250269]: 2026-01-23 09:51:03.829 250273 DEBUG oslo_concurrency.lockutils [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:51:03 np0005593232 nova_compute[250269]: 2026-01-23 09:51:03.927 250273 DEBUG oslo_concurrency.processutils [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:51:03 np0005593232 nova_compute[250269]: 2026-01-23 09:51:03.955 250273 DEBUG nova.compute.manager [req-8a8fbfd5-3c3e-4929-ae83-482d8e91fcfe req-9a9c681a-d16c-44cb-84db-e480d7bc38dc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Received event network-vif-deleted-36cd571d-7acb-4373-8e01-cd61dc0c3735 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:51:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:51:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2594052383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:51:04 np0005593232 nova_compute[250269]: 2026-01-23 09:51:04.365 250273 DEBUG oslo_concurrency.processutils [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:51:04 np0005593232 nova_compute[250269]: 2026-01-23 09:51:04.371 250273 DEBUG nova.compute.provider_tree [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:51:04 np0005593232 nova_compute[250269]: 2026-01-23 09:51:04.390 250273 DEBUG nova.scheduler.client.report [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:51:04 np0005593232 nova_compute[250269]: 2026-01-23 09:51:04.413 250273 DEBUG oslo_concurrency.lockutils [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:51:04 np0005593232 nova_compute[250269]: 2026-01-23 09:51:04.486 250273 INFO nova.scheduler.client.report [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Deleted allocations for instance bc34e547-193c-4d2f-83ca-79f1ddcc0613#033[00m
Jan 23 04:51:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1782: 321 pgs: 321 active+clean; 214 MiB data, 722 MiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 5.7 MiB/s wr, 455 op/s
Jan 23 04:51:04 np0005593232 nova_compute[250269]: 2026-01-23 09:51:04.724 250273 DEBUG oslo_concurrency.lockutils [None req-14c13d15-6bf5-46b0-9b2e-d1504b7d730c 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "bc34e547-193c-4d2f-83ca-79f1ddcc0613" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.258s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:51:04 np0005593232 nova_compute[250269]: 2026-01-23 09:51:04.767 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:04 np0005593232 nova_compute[250269]: 2026-01-23 09:51:04.926 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:51:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:51:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:05.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:51:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:05.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1783: 321 pgs: 321 active+clean; 214 MiB data, 711 MiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 3.7 MiB/s wr, 417 op/s
Jan 23 04:51:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:51:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:07.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:51:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:07.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:51:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:51:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:51:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:51:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:51:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:51:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1784: 321 pgs: 321 active+clean; 175 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 3.4 MiB/s wr, 435 op/s
Jan 23 04:51:08 np0005593232 nova_compute[250269]: 2026-01-23 09:51:08.912 250273 DEBUG oslo_concurrency.lockutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Acquiring lock "bc80c9f1-76f1-4875-895d-9e80312eb293" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:51:08 np0005593232 nova_compute[250269]: 2026-01-23 09:51:08.912 250273 DEBUG oslo_concurrency.lockutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lock "bc80c9f1-76f1-4875-895d-9e80312eb293" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:51:08 np0005593232 nova_compute[250269]: 2026-01-23 09:51:08.950 250273 DEBUG nova.compute.manager [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:51:09 np0005593232 nova_compute[250269]: 2026-01-23 09:51:09.076 250273 DEBUG oslo_concurrency.lockutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:51:09 np0005593232 nova_compute[250269]: 2026-01-23 09:51:09.077 250273 DEBUG oslo_concurrency.lockutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:51:09 np0005593232 nova_compute[250269]: 2026-01-23 09:51:09.082 250273 DEBUG nova.virt.hardware [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:51:09 np0005593232 nova_compute[250269]: 2026-01-23 09:51:09.083 250273 INFO nova.compute.claims [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:51:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:51:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:09.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:51:09 np0005593232 nova_compute[250269]: 2026-01-23 09:51:09.331 250273 DEBUG oslo_concurrency.processutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:51:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:09.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:51:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2879816239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:51:09 np0005593232 nova_compute[250269]: 2026-01-23 09:51:09.771 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:09 np0005593232 nova_compute[250269]: 2026-01-23 09:51:09.781 250273 DEBUG oslo_concurrency.processutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:51:09 np0005593232 nova_compute[250269]: 2026-01-23 09:51:09.788 250273 DEBUG nova.compute.provider_tree [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:51:09 np0005593232 nova_compute[250269]: 2026-01-23 09:51:09.816 250273 DEBUG nova.scheduler.client.report [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:51:09 np0005593232 nova_compute[250269]: 2026-01-23 09:51:09.880 250273 DEBUG oslo_concurrency.lockutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:51:09 np0005593232 nova_compute[250269]: 2026-01-23 09:51:09.882 250273 DEBUG nova.compute.manager [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 04:51:09 np0005593232 nova_compute[250269]: 2026-01-23 09:51:09.928 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:09 np0005593232 nova_compute[250269]: 2026-01-23 09:51:09.954 250273 DEBUG nova.compute.manager [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 04:51:09 np0005593232 nova_compute[250269]: 2026-01-23 09:51:09.955 250273 DEBUG nova.network.neutron [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:51:09 np0005593232 nova_compute[250269]: 2026-01-23 09:51:09.980 250273 INFO nova.virt.libvirt.driver [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:51:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:51:10 np0005593232 nova_compute[250269]: 2026-01-23 09:51:10.008 250273 DEBUG nova.compute.manager [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:51:10 np0005593232 nova_compute[250269]: 2026-01-23 09:51:10.148 250273 DEBUG nova.compute.manager [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:51:10 np0005593232 nova_compute[250269]: 2026-01-23 09:51:10.150 250273 DEBUG nova.virt.libvirt.driver [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:51:10 np0005593232 nova_compute[250269]: 2026-01-23 09:51:10.151 250273 INFO nova.virt.libvirt.driver [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Creating image(s)#033[00m
Jan 23 04:51:10 np0005593232 nova_compute[250269]: 2026-01-23 09:51:10.187 250273 DEBUG nova.storage.rbd_utils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] rbd image bc80c9f1-76f1-4875-895d-9e80312eb293_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:51:10 np0005593232 nova_compute[250269]: 2026-01-23 09:51:10.214 250273 DEBUG nova.storage.rbd_utils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] rbd image bc80c9f1-76f1-4875-895d-9e80312eb293_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:51:10 np0005593232 nova_compute[250269]: 2026-01-23 09:51:10.244 250273 DEBUG nova.storage.rbd_utils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] rbd image bc80c9f1-76f1-4875-895d-9e80312eb293_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:51:10 np0005593232 nova_compute[250269]: 2026-01-23 09:51:10.247 250273 DEBUG oslo_concurrency.processutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:51:10 np0005593232 nova_compute[250269]: 2026-01-23 09:51:10.275 250273 DEBUG nova.policy [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '77cda1e9a0404425a06c34637e696603', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '390d19f683334995a5268cf9b4d5e464', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 04:51:10 np0005593232 nova_compute[250269]: 2026-01-23 09:51:10.326 250273 DEBUG oslo_concurrency.processutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:51:10 np0005593232 nova_compute[250269]: 2026-01-23 09:51:10.327 250273 DEBUG oslo_concurrency.lockutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:51:10 np0005593232 nova_compute[250269]: 2026-01-23 09:51:10.328 250273 DEBUG oslo_concurrency.lockutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:51:10 np0005593232 nova_compute[250269]: 2026-01-23 09:51:10.328 250273 DEBUG oslo_concurrency.lockutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:51:10 np0005593232 nova_compute[250269]: 2026-01-23 09:51:10.354 250273 DEBUG nova.storage.rbd_utils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] rbd image bc80c9f1-76f1-4875-895d-9e80312eb293_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:51:10 np0005593232 nova_compute[250269]: 2026-01-23 09:51:10.358 250273 DEBUG oslo_concurrency.processutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 bc80c9f1-76f1-4875-895d-9e80312eb293_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:51:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1785: 321 pgs: 321 active+clean; 175 MiB data, 689 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 1.8 MiB/s wr, 299 op/s
Jan 23 04:51:10 np0005593232 nova_compute[250269]: 2026-01-23 09:51:10.631 250273 DEBUG oslo_concurrency.processutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 bc80c9f1-76f1-4875-895d-9e80312eb293_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:51:10 np0005593232 nova_compute[250269]: 2026-01-23 09:51:10.729 250273 DEBUG nova.storage.rbd_utils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] resizing rbd image bc80c9f1-76f1-4875-895d-9e80312eb293_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 04:51:10 np0005593232 nova_compute[250269]: 2026-01-23 09:51:10.852 250273 DEBUG nova.objects.instance [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lazy-loading 'migration_context' on Instance uuid bc80c9f1-76f1-4875-895d-9e80312eb293 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:51:11 np0005593232 nova_compute[250269]: 2026-01-23 09:51:11.095 250273 DEBUG nova.virt.libvirt.driver [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:51:11 np0005593232 nova_compute[250269]: 2026-01-23 09:51:11.095 250273 DEBUG nova.virt.libvirt.driver [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Ensure instance console log exists: /var/lib/nova/instances/bc80c9f1-76f1-4875-895d-9e80312eb293/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:51:11 np0005593232 nova_compute[250269]: 2026-01-23 09:51:11.096 250273 DEBUG oslo_concurrency.lockutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:51:11 np0005593232 nova_compute[250269]: 2026-01-23 09:51:11.096 250273 DEBUG oslo_concurrency.lockutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:51:11 np0005593232 nova_compute[250269]: 2026-01-23 09:51:11.096 250273 DEBUG oslo_concurrency.lockutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:51:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 04:51:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:11.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 04:51:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:11.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:12 np0005593232 nova_compute[250269]: 2026-01-23 09:51:12.485 250273 DEBUG nova.network.neutron [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Successfully created port: 30248fe9-6da5-4acd-b60f-7c588745f8f3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 04:51:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1786: 321 pgs: 321 active+clean; 182 MiB data, 690 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 2.3 MiB/s wr, 303 op/s
Jan 23 04:51:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:13.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:13.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:13 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.361 250273 DEBUG oslo_concurrency.lockutils [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Acquiring lock "1f303dea-3b7e-419e-b31c-b04209c0cd89" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.362 250273 DEBUG oslo_concurrency.lockutils [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "1f303dea-3b7e-419e-b31c-b04209c0cd89" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.362 250273 DEBUG oslo_concurrency.lockutils [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Acquiring lock "1f303dea-3b7e-419e-b31c-b04209c0cd89-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.362 250273 DEBUG oslo_concurrency.lockutils [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "1f303dea-3b7e-419e-b31c-b04209c0cd89-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.362 250273 DEBUG oslo_concurrency.lockutils [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "1f303dea-3b7e-419e-b31c-b04209c0cd89-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.364 250273 INFO nova.compute.manager [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Terminating instance#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.365 250273 DEBUG nova.compute.manager [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 04:51:14 np0005593232 kernel: tapa51c9fca-a8 (unregistering): left promiscuous mode
Jan 23 04:51:14 np0005593232 NetworkManager[49057]: <info>  [1769161874.4244] device (tapa51c9fca-a8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:51:14 np0005593232 ovn_controller[151001]: 2026-01-23T09:51:14Z|00218|binding|INFO|Releasing lport a51c9fca-a888-4bb3-9111-6da4f08c6f6a from this chassis (sb_readonly=0)
Jan 23 04:51:14 np0005593232 ovn_controller[151001]: 2026-01-23T09:51:14Z|00219|binding|INFO|Setting lport a51c9fca-a888-4bb3-9111-6da4f08c6f6a down in Southbound
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.432 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:14 np0005593232 ovn_controller[151001]: 2026-01-23T09:51:14Z|00220|binding|INFO|Removing iface tapa51c9fca-a8 ovn-installed in OVS
Jan 23 04:51:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:14.441 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:9b:bd 10.100.0.3'], port_security=['fa:16:3e:e0:9b:bd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '1f303dea-3b7e-419e-b31c-b04209c0cd89', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79c61601-48fe-4c3b-aac8-5ed602fc7629', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b8b9b5c378f24327912b08252b3c9636', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'ba9579fc-88ad-41cb-b94e-0302604f4fcc cda57b83-a7da-410e-b22a-8efc5a7dd98c d1f2c08e-1419-4134-8ab2-1ddce98a3536', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90d3d7bd-b3df-4803-a42b-299b97e45f23, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=a51c9fca-a888-4bb3-9111-6da4f08c6f6a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:51:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:14.442 161902 INFO neutron.agent.ovn.metadata.agent [-] Port a51c9fca-a888-4bb3-9111-6da4f08c6f6a in datapath 79c61601-48fe-4c3b-aac8-5ed602fc7629 unbound from our chassis#033[00m
Jan 23 04:51:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:14.443 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 79c61601-48fe-4c3b-aac8-5ed602fc7629, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 04:51:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:14.444 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[13f401da-6bdf-4336-a178-6f315beb1fc6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:51:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:14.444 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629 namespace which is not needed anymore#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.453 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:14 np0005593232 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000043.scope: Deactivated successfully.
Jan 23 04:51:14 np0005593232 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000043.scope: Consumed 15.964s CPU time.
Jan 23 04:51:14 np0005593232 systemd-machined[215836]: Machine qemu-25-instance-00000043 terminated.
Jan 23 04:51:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1787: 321 pgs: 321 active+clean; 223 MiB data, 716 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.4 MiB/s wr, 289 op/s
Jan 23 04:51:14 np0005593232 neutron-haproxy-ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629[294689]: [NOTICE]   (294693) : haproxy version is 2.8.14-c23fe91
Jan 23 04:51:14 np0005593232 neutron-haproxy-ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629[294689]: [NOTICE]   (294693) : path to executable is /usr/sbin/haproxy
Jan 23 04:51:14 np0005593232 neutron-haproxy-ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629[294689]: [WARNING]  (294693) : Exiting Master process...
Jan 23 04:51:14 np0005593232 neutron-haproxy-ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629[294689]: [WARNING]  (294693) : Exiting Master process...
Jan 23 04:51:14 np0005593232 neutron-haproxy-ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629[294689]: [ALERT]    (294693) : Current worker (294695) exited with code 143 (Terminated)
Jan 23 04:51:14 np0005593232 neutron-haproxy-ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629[294689]: [WARNING]  (294693) : All workers exited. Exiting... (0)
Jan 23 04:51:14 np0005593232 systemd[1]: libpod-e87526dc56e79cf8811f67344d793cba02a906310ab79b7d2d8e68a6fd5040ff.scope: Deactivated successfully.
Jan 23 04:51:14 np0005593232 podman[296777]: 2026-01-23 09:51:14.574014737 +0000 UTC m=+0.044293887 container died e87526dc56e79cf8811f67344d793cba02a906310ab79b7d2d8e68a6fd5040ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.585 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.591 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.601 250273 INFO nova.virt.libvirt.driver [-] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Instance destroyed successfully.#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.602 250273 DEBUG nova.objects.instance [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lazy-loading 'resources' on Instance uuid 1f303dea-3b7e-419e-b31c-b04209c0cd89 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.613 250273 DEBUG nova.network.neutron [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Successfully updated port: 30248fe9-6da5-4acd-b60f-7c588745f8f3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 04:51:14 np0005593232 systemd[1]: var-lib-containers-storage-overlay-091f55f55b0514e0fbd150edd040621f43d1d5ed13961ec7a4f01035448fbb61-merged.mount: Deactivated successfully.
Jan 23 04:51:14 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e87526dc56e79cf8811f67344d793cba02a906310ab79b7d2d8e68a6fd5040ff-userdata-shm.mount: Deactivated successfully.
Jan 23 04:51:14 np0005593232 podman[296777]: 2026-01-23 09:51:14.629170295 +0000 UTC m=+0.099449445 container cleanup e87526dc56e79cf8811f67344d793cba02a906310ab79b7d2d8e68a6fd5040ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 04:51:14 np0005593232 systemd[1]: libpod-conmon-e87526dc56e79cf8811f67344d793cba02a906310ab79b7d2d8e68a6fd5040ff.scope: Deactivated successfully.
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.650 250273 DEBUG nova.virt.libvirt.vif [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:49:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1362504631',display_name='tempest-SecurityGroupsTestJSON-server-1362504631',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1362504631',id=67,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:49:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b8b9b5c378f24327912b08252b3c9636',ramdisk_id='',reservation_id='r-g2qanm6o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-432973814',owner_user_name='tempest-SecurityGroupsTestJSON-432973814-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:49:52Z,user_data=None,user_id='4cb83b8ddd0644f898d4be1f7de0b930',uuid=1f303dea-3b7e-419e-b31c-b04209c0cd89,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "address": "fa:16:3e:e0:9b:bd", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa51c9fca-a8", "ovs_interfaceid": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.651 250273 DEBUG nova.network.os_vif_util [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Converting VIF {"id": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "address": "fa:16:3e:e0:9b:bd", "network": {"id": "79c61601-48fe-4c3b-aac8-5ed602fc7629", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-527034102-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b8b9b5c378f24327912b08252b3c9636", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa51c9fca-a8", "ovs_interfaceid": "a51c9fca-a888-4bb3-9111-6da4f08c6f6a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.652 250273 DEBUG nova.network.os_vif_util [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e0:9b:bd,bridge_name='br-int',has_traffic_filtering=True,id=a51c9fca-a888-4bb3-9111-6da4f08c6f6a,network=Network(79c61601-48fe-4c3b-aac8-5ed602fc7629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa51c9fca-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.653 250273 DEBUG os_vif [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e0:9b:bd,bridge_name='br-int',has_traffic_filtering=True,id=a51c9fca-a888-4bb3-9111-6da4f08c6f6a,network=Network(79c61601-48fe-4c3b-aac8-5ed602fc7629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa51c9fca-a8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.656 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.657 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa51c9fca-a8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.659 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.660 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.663 250273 INFO os_vif [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e0:9b:bd,bridge_name='br-int',has_traffic_filtering=True,id=a51c9fca-a888-4bb3-9111-6da4f08c6f6a,network=Network(79c61601-48fe-4c3b-aac8-5ed602fc7629),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa51c9fca-a8')#033[00m
Jan 23 04:51:14 np0005593232 podman[296815]: 2026-01-23 09:51:14.701316418 +0000 UTC m=+0.050292799 container remove e87526dc56e79cf8811f67344d793cba02a906310ab79b7d2d8e68a6fd5040ff (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 23 04:51:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:14.707 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[69b6da3d-cc1e-48b1-8d6f-d97a5ea5ad34]: (4, ('Fri Jan 23 09:51:14 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629 (e87526dc56e79cf8811f67344d793cba02a906310ab79b7d2d8e68a6fd5040ff)\ne87526dc56e79cf8811f67344d793cba02a906310ab79b7d2d8e68a6fd5040ff\nFri Jan 23 09:51:14 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629 (e87526dc56e79cf8811f67344d793cba02a906310ab79b7d2d8e68a6fd5040ff)\ne87526dc56e79cf8811f67344d793cba02a906310ab79b7d2d8e68a6fd5040ff\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.711 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769161859.709987, bc34e547-193c-4d2f-83ca-79f1ddcc0613 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.711 250273 INFO nova.compute.manager [-] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:51:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:14.710 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[18f1c7ee-099a-4521-acc2-f95552c3ba76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:51:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:14.716 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79c61601-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.719 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:14 np0005593232 kernel: tap79c61601-40: left promiscuous mode
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.735 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:14.738 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ed4604-0a51-4f42-81a8-83d18ac64b5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.742 250273 DEBUG nova.compute.manager [None req-630ee090-614a-45a4-ae3d-c1960fd97339 - - - - - -] [instance: bc34e547-193c-4d2f-83ca-79f1ddcc0613] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:51:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:14.755 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3dbcb624-7646-4f35-94d3-197ceeed4e46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:51:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:14.757 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[69a74c52-84f7-48ad-873a-d8ba3309ac2f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:51:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:14.773 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[eeafb9a2-b847-49a7-ac98-2d170816898f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 570527, 'reachable_time': 20598, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296848, 'error': None, 'target': 'ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:51:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:14.775 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-79c61601-48fe-4c3b-aac8-5ed602fc7629 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 04:51:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:14.775 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[6bdf2fbd-1697-4376-9452-cfe3bd0900f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:51:14 np0005593232 systemd[1]: run-netns-ovnmeta\x2d79c61601\x2d48fe\x2d4c3b\x2daac8\x2d5ed602fc7629.mount: Deactivated successfully.
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.901 250273 DEBUG oslo_concurrency.lockutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Acquiring lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.903 250273 DEBUG oslo_concurrency.lockutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Acquired lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.904 250273 DEBUG nova.network.neutron [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:51:14 np0005593232 nova_compute[250269]: 2026-01-23 09:51:14.930 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:51:15 np0005593232 nova_compute[250269]: 2026-01-23 09:51:15.086 250273 INFO nova.virt.libvirt.driver [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Deleting instance files /var/lib/nova/instances/1f303dea-3b7e-419e-b31c-b04209c0cd89_del#033[00m
Jan 23 04:51:15 np0005593232 nova_compute[250269]: 2026-01-23 09:51:15.087 250273 INFO nova.virt.libvirt.driver [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Deletion of /var/lib/nova/instances/1f303dea-3b7e-419e-b31c-b04209c0cd89_del complete#033[00m
Jan 23 04:51:15 np0005593232 nova_compute[250269]: 2026-01-23 09:51:15.143 250273 DEBUG nova.network.neutron [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:51:15 np0005593232 nova_compute[250269]: 2026-01-23 09:51:15.189 250273 INFO nova.compute.manager [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 04:51:15 np0005593232 nova_compute[250269]: 2026-01-23 09:51:15.191 250273 DEBUG oslo.service.loopingcall [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 04:51:15 np0005593232 nova_compute[250269]: 2026-01-23 09:51:15.191 250273 DEBUG nova.compute.manager [-] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 04:51:15 np0005593232 nova_compute[250269]: 2026-01-23 09:51:15.192 250273 DEBUG nova.network.neutron [-] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 04:51:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:51:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:15.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:51:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:15.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:15 np0005593232 nova_compute[250269]: 2026-01-23 09:51:15.420 250273 DEBUG nova.compute.manager [req-b3461747-93a6-466a-b0ea-ea60b033c8db req-8846f169-f91d-421e-898c-5a3ae3511ed0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Received event network-changed-30248fe9-6da5-4acd-b60f-7c588745f8f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:51:15 np0005593232 nova_compute[250269]: 2026-01-23 09:51:15.420 250273 DEBUG nova.compute.manager [req-b3461747-93a6-466a-b0ea-ea60b033c8db req-8846f169-f91d-421e-898c-5a3ae3511ed0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Refreshing instance network info cache due to event network-changed-30248fe9-6da5-4acd-b60f-7c588745f8f3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:51:15 np0005593232 nova_compute[250269]: 2026-01-23 09:51:15.421 250273 DEBUG oslo_concurrency.lockutils [req-b3461747-93a6-466a-b0ea-ea60b033c8db req-8846f169-f91d-421e-898c-5a3ae3511ed0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:51:15 np0005593232 nova_compute[250269]: 2026-01-23 09:51:15.458 250273 DEBUG nova.compute.manager [req-cad95148-126d-468d-bfe0-4956a6ab1f78 req-4bd807ee-be07-4be1-9ce8-b5a1bc11ab31 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Received event network-vif-unplugged-a51c9fca-a888-4bb3-9111-6da4f08c6f6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:51:15 np0005593232 nova_compute[250269]: 2026-01-23 09:51:15.459 250273 DEBUG oslo_concurrency.lockutils [req-cad95148-126d-468d-bfe0-4956a6ab1f78 req-4bd807ee-be07-4be1-9ce8-b5a1bc11ab31 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1f303dea-3b7e-419e-b31c-b04209c0cd89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:51:15 np0005593232 nova_compute[250269]: 2026-01-23 09:51:15.460 250273 DEBUG oslo_concurrency.lockutils [req-cad95148-126d-468d-bfe0-4956a6ab1f78 req-4bd807ee-be07-4be1-9ce8-b5a1bc11ab31 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1f303dea-3b7e-419e-b31c-b04209c0cd89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:51:15 np0005593232 nova_compute[250269]: 2026-01-23 09:51:15.460 250273 DEBUG oslo_concurrency.lockutils [req-cad95148-126d-468d-bfe0-4956a6ab1f78 req-4bd807ee-be07-4be1-9ce8-b5a1bc11ab31 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1f303dea-3b7e-419e-b31c-b04209c0cd89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:51:15 np0005593232 nova_compute[250269]: 2026-01-23 09:51:15.461 250273 DEBUG nova.compute.manager [req-cad95148-126d-468d-bfe0-4956a6ab1f78 req-4bd807ee-be07-4be1-9ce8-b5a1bc11ab31 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] No waiting events found dispatching network-vif-unplugged-a51c9fca-a888-4bb3-9111-6da4f08c6f6a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:51:15 np0005593232 nova_compute[250269]: 2026-01-23 09:51:15.461 250273 DEBUG nova.compute.manager [req-cad95148-126d-468d-bfe0-4956a6ab1f78 req-4bd807ee-be07-4be1-9ce8-b5a1bc11ab31 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Received event network-vif-unplugged-a51c9fca-a888-4bb3-9111-6da4f08c6f6a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 04:51:16 np0005593232 nova_compute[250269]: 2026-01-23 09:51:16.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:51:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1788: 321 pgs: 321 active+clean; 234 MiB data, 722 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.5 MiB/s wr, 171 op/s
Jan 23 04:51:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 04:51:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 25K writes, 96K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s#012Cumulative WAL: 25K writes, 8548 syncs, 2.95 writes per sync, written: 0.08 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7539 writes, 28K keys, 7539 commit groups, 1.0 writes per commit group, ingest: 27.08 MB, 0.05 MB/s#012Interval WAL: 7539 writes, 2997 syncs, 2.52 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 04:51:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:17.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:17.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:17 np0005593232 nova_compute[250269]: 2026-01-23 09:51:17.973 250273 DEBUG nova.network.neutron [-] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:51:17 np0005593232 nova_compute[250269]: 2026-01-23 09:51:17.999 250273 INFO nova.compute.manager [-] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Took 2.81 seconds to deallocate network for instance.#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.102 250273 DEBUG oslo_concurrency.lockutils [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.102 250273 DEBUG oslo_concurrency.lockutils [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.204 250273 DEBUG nova.compute.manager [req-868ccef3-9480-49c9-bdac-429a111c2317 req-4e9ba365-97e8-47dd-ab6f-5e64ccaade20 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Received event network-vif-deleted-a51c9fca-a888-4bb3-9111-6da4f08c6f6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.242 250273 DEBUG oslo_concurrency.processutils [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:51:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1789: 321 pgs: 321 active+clean; 234 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 6.4 MiB/s wr, 219 op/s
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.584 250273 DEBUG nova.network.neutron [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Updating instance_info_cache with network_info: [{"id": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "address": "fa:16:3e:d5:db:7e", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30248fe9-6d", "ovs_interfaceid": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.617 250273 DEBUG oslo_concurrency.lockutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Releasing lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.618 250273 DEBUG nova.compute.manager [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Instance network_info: |[{"id": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "address": "fa:16:3e:d5:db:7e", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30248fe9-6d", "ovs_interfaceid": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.619 250273 DEBUG oslo_concurrency.lockutils [req-b3461747-93a6-466a-b0ea-ea60b033c8db req-8846f169-f91d-421e-898c-5a3ae3511ed0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.621 250273 DEBUG nova.network.neutron [req-b3461747-93a6-466a-b0ea-ea60b033c8db req-8846f169-f91d-421e-898c-5a3ae3511ed0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Refreshing network info cache for port 30248fe9-6da5-4acd-b60f-7c588745f8f3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.625 250273 DEBUG nova.virt.libvirt.driver [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Start _get_guest_xml network_info=[{"id": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "address": "fa:16:3e:d5:db:7e", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30248fe9-6d", "ovs_interfaceid": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.630 250273 WARNING nova.virt.libvirt.driver [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.637 250273 DEBUG nova.virt.libvirt.host [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.638 250273 DEBUG nova.virt.libvirt.host [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.641 250273 DEBUG nova.virt.libvirt.host [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.642 250273 DEBUG nova.virt.libvirt.host [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.644 250273 DEBUG nova.virt.libvirt.driver [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.644 250273 DEBUG nova.virt.hardware [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.645 250273 DEBUG nova.virt.hardware [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.645 250273 DEBUG nova.virt.hardware [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.645 250273 DEBUG nova.virt.hardware [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.646 250273 DEBUG nova.virt.hardware [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.646 250273 DEBUG nova.virt.hardware [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.646 250273 DEBUG nova.virt.hardware [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.647 250273 DEBUG nova.virt.hardware [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.647 250273 DEBUG nova.virt.hardware [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.647 250273 DEBUG nova.virt.hardware [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.647 250273 DEBUG nova.virt.hardware [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:51:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:51:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2226181881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.652 250273 DEBUG oslo_concurrency.processutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.696 250273 DEBUG nova.compute.manager [req-b0e3dfd7-3d08-458a-bc59-b95dfc7f5326 req-ba8422d6-35d5-471e-9f4f-64c3357d5db5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Received event network-vif-plugged-a51c9fca-a888-4bb3-9111-6da4f08c6f6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.697 250273 DEBUG oslo_concurrency.lockutils [req-b0e3dfd7-3d08-458a-bc59-b95dfc7f5326 req-ba8422d6-35d5-471e-9f4f-64c3357d5db5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1f303dea-3b7e-419e-b31c-b04209c0cd89-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.697 250273 DEBUG oslo_concurrency.lockutils [req-b0e3dfd7-3d08-458a-bc59-b95dfc7f5326 req-ba8422d6-35d5-471e-9f4f-64c3357d5db5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1f303dea-3b7e-419e-b31c-b04209c0cd89-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.697 250273 DEBUG oslo_concurrency.lockutils [req-b0e3dfd7-3d08-458a-bc59-b95dfc7f5326 req-ba8422d6-35d5-471e-9f4f-64c3357d5db5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1f303dea-3b7e-419e-b31c-b04209c0cd89-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.698 250273 DEBUG nova.compute.manager [req-b0e3dfd7-3d08-458a-bc59-b95dfc7f5326 req-ba8422d6-35d5-471e-9f4f-64c3357d5db5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] No waiting events found dispatching network-vif-plugged-a51c9fca-a888-4bb3-9111-6da4f08c6f6a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.698 250273 WARNING nova.compute.manager [req-b0e3dfd7-3d08-458a-bc59-b95dfc7f5326 req-ba8422d6-35d5-471e-9f4f-64c3357d5db5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Received unexpected event network-vif-plugged-a51c9fca-a888-4bb3-9111-6da4f08c6f6a for instance with vm_state deleted and task_state None.#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.702 250273 DEBUG oslo_concurrency.processutils [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.710 250273 DEBUG nova.compute.provider_tree [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.733 250273 DEBUG nova.scheduler.client.report [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.757 250273 DEBUG oslo_concurrency.lockutils [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.795 250273 INFO nova.scheduler.client.report [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Deleted allocations for instance 1f303dea-3b7e-419e-b31c-b04209c0cd89#033[00m
Jan 23 04:51:18 np0005593232 nova_compute[250269]: 2026-01-23 09:51:18.919 250273 DEBUG oslo_concurrency.lockutils [None req-527bc92e-1bd9-4be6-9290-b9167817eee2 4cb83b8ddd0644f898d4be1f7de0b930 b8b9b5c378f24327912b08252b3c9636 - - default default] Lock "1f303dea-3b7e-419e-b31c-b04209c0cd89" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:51:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:51:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2073513103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.113 250273 DEBUG oslo_concurrency.processutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.137 250273 DEBUG nova.storage.rbd_utils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] rbd image bc80c9f1-76f1-4875-895d-9e80312eb293_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.141 250273 DEBUG oslo_concurrency.processutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:51:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:51:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:19.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:51:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:19.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:51:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1707003301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.589 250273 DEBUG oslo_concurrency.processutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.593 250273 DEBUG nova.virt.libvirt.vif [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:51:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-511471311',display_name='tempest-tempest.common.compute-instance-511471311',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-511471311',id=73,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISF6L8g87ZfxLrm8Wwm+gzemsck5aetIhd8gCsjpNrTc2Fv/no3h23xzReyi9tgvOePkWLat/BN4ukRmY5i9SKOoCvqi25H2ncCjSqcqS+cT6X1PkedlTAGxBrEwc2adg==',key_name='tempest-keypair-1775870371',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='390d19f683334995a5268cf9b4d5e464',ramdisk_id='',reservation_id='r-h371e2sw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-746967993',owner_user_name='tempest-AttachInterfacesTestJSON-746967993-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:51:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='77cda1e9a0404425a06c34637e696603',uuid=bc80c9f1-76f1-4875-895d-9e80312eb293,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "address": "fa:16:3e:d5:db:7e", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30248fe9-6d", "ovs_interfaceid": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.593 250273 DEBUG nova.network.os_vif_util [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Converting VIF {"id": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "address": "fa:16:3e:d5:db:7e", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30248fe9-6d", "ovs_interfaceid": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.596 250273 DEBUG nova.network.os_vif_util [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:db:7e,bridge_name='br-int',has_traffic_filtering=True,id=30248fe9-6da5-4acd-b60f-7c588745f8f3,network=Network(7808328e-22f9-46df-ac06-f8c3d6ad10c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30248fe9-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.598 250273 DEBUG nova.objects.instance [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lazy-loading 'pci_devices' on Instance uuid bc80c9f1-76f1-4875-895d-9e80312eb293 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.621 250273 DEBUG nova.virt.libvirt.driver [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:51:19 np0005593232 nova_compute[250269]:  <uuid>bc80c9f1-76f1-4875-895d-9e80312eb293</uuid>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:  <name>instance-00000049</name>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <nova:name>tempest-tempest.common.compute-instance-511471311</nova:name>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:51:18</nova:creationTime>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:51:19 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:        <nova:user uuid="77cda1e9a0404425a06c34637e696603">tempest-AttachInterfacesTestJSON-746967993-project-member</nova:user>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:        <nova:project uuid="390d19f683334995a5268cf9b4d5e464">tempest-AttachInterfacesTestJSON-746967993</nova:project>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:        <nova:port uuid="30248fe9-6da5-4acd-b60f-7c588745f8f3">
Jan 23 04:51:19 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <entry name="serial">bc80c9f1-76f1-4875-895d-9e80312eb293</entry>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <entry name="uuid">bc80c9f1-76f1-4875-895d-9e80312eb293</entry>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/bc80c9f1-76f1-4875-895d-9e80312eb293_disk">
Jan 23 04:51:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:51:19 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/bc80c9f1-76f1-4875-895d-9e80312eb293_disk.config">
Jan 23 04:51:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:51:19 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:d5:db:7e"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <target dev="tap30248fe9-6d"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/bc80c9f1-76f1-4875-895d-9e80312eb293/console.log" append="off"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:51:19 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:51:19 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:51:19 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:51:19 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.623 250273 DEBUG nova.compute.manager [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Preparing to wait for external event network-vif-plugged-30248fe9-6da5-4acd-b60f-7c588745f8f3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.624 250273 DEBUG oslo_concurrency.lockutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Acquiring lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.624 250273 DEBUG oslo_concurrency.lockutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.624 250273 DEBUG oslo_concurrency.lockutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.625 250273 DEBUG nova.virt.libvirt.vif [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:51:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-511471311',display_name='tempest-tempest.common.compute-instance-511471311',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-511471311',id=73,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISF6L8g87ZfxLrm8Wwm+gzemsck5aetIhd8gCsjpNrTc2Fv/no3h23xzReyi9tgvOePkWLat/BN4ukRmY5i9SKOoCvqi25H2ncCjSqcqS+cT6X1PkedlTAGxBrEwc2adg==',key_name='tempest-keypair-1775870371',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='390d19f683334995a5268cf9b4d5e464',ramdisk_id='',reservation_id='r-h371e2sw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-746967993',owner_user_name='tempest-AttachInterfacesTestJSON-746967993-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:51:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='77cda1e9a0404425a06c34637e696603',uuid=bc80c9f1-76f1-4875-895d-9e80312eb293,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "address": "fa:16:3e:d5:db:7e", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30248fe9-6d", "ovs_interfaceid": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.625 250273 DEBUG nova.network.os_vif_util [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Converting VIF {"id": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "address": "fa:16:3e:d5:db:7e", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30248fe9-6d", "ovs_interfaceid": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.626 250273 DEBUG nova.network.os_vif_util [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:db:7e,bridge_name='br-int',has_traffic_filtering=True,id=30248fe9-6da5-4acd-b60f-7c588745f8f3,network=Network(7808328e-22f9-46df-ac06-f8c3d6ad10c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30248fe9-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.626 250273 DEBUG os_vif [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:db:7e,bridge_name='br-int',has_traffic_filtering=True,id=30248fe9-6da5-4acd-b60f-7c588745f8f3,network=Network(7808328e-22f9-46df-ac06-f8c3d6ad10c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30248fe9-6d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.627 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.627 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.627 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.631 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.631 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30248fe9-6d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.632 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap30248fe9-6d, col_values=(('external_ids', {'iface-id': '30248fe9-6da5-4acd-b60f-7c588745f8f3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:db:7e', 'vm-uuid': 'bc80c9f1-76f1-4875-895d-9e80312eb293'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.633 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:19 np0005593232 NetworkManager[49057]: <info>  [1769161879.6345] manager: (tap30248fe9-6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.636 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.642 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.643 250273 INFO os_vif [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:db:7e,bridge_name='br-int',has_traffic_filtering=True,id=30248fe9-6da5-4acd-b60f-7c588745f8f3,network=Network(7808328e-22f9-46df-ac06-f8c3d6ad10c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30248fe9-6d')#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.744 250273 DEBUG nova.virt.libvirt.driver [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.744 250273 DEBUG nova.virt.libvirt.driver [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.745 250273 DEBUG nova.virt.libvirt.driver [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] No VIF found with MAC fa:16:3e:d5:db:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.745 250273 INFO nova.virt.libvirt.driver [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Using config drive#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.769 250273 DEBUG nova.storage.rbd_utils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] rbd image bc80c9f1-76f1-4875-895d-9e80312eb293_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:51:19 np0005593232 nova_compute[250269]: 2026-01-23 09:51:19.933 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:51:20 np0005593232 nova_compute[250269]: 2026-01-23 09:51:20.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:51:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1790: 321 pgs: 321 active+clean; 234 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 400 KiB/s rd, 6.4 MiB/s wr, 174 op/s
Jan 23 04:51:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:21.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:21 np0005593232 nova_compute[250269]: 2026-01-23 09:51:21.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:51:21 np0005593232 nova_compute[250269]: 2026-01-23 09:51:21.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:51:21 np0005593232 nova_compute[250269]: 2026-01-23 09:51:21.294 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:51:21 np0005593232 nova_compute[250269]: 2026-01-23 09:51:21.331 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 23 04:51:21 np0005593232 nova_compute[250269]: 2026-01-23 09:51:21.332 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 04:51:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:21.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:21 np0005593232 podman[297009]: 2026-01-23 09:51:21.450457933 +0000 UTC m=+0.111288463 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 04:51:21 np0005593232 nova_compute[250269]: 2026-01-23 09:51:21.729 250273 INFO nova.virt.libvirt.driver [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Creating config drive at /var/lib/nova/instances/bc80c9f1-76f1-4875-895d-9e80312eb293/disk.config#033[00m
Jan 23 04:51:21 np0005593232 nova_compute[250269]: 2026-01-23 09:51:21.736 250273 DEBUG oslo_concurrency.processutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bc80c9f1-76f1-4875-895d-9e80312eb293/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvu8w5ry1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:51:21 np0005593232 nova_compute[250269]: 2026-01-23 09:51:21.870 250273 DEBUG oslo_concurrency.processutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bc80c9f1-76f1-4875-895d-9e80312eb293/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvu8w5ry1" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:51:21 np0005593232 nova_compute[250269]: 2026-01-23 09:51:21.907 250273 DEBUG nova.storage.rbd_utils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] rbd image bc80c9f1-76f1-4875-895d-9e80312eb293_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:51:21 np0005593232 nova_compute[250269]: 2026-01-23 09:51:21.912 250273 DEBUG oslo_concurrency.processutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bc80c9f1-76f1-4875-895d-9e80312eb293/disk.config bc80c9f1-76f1-4875-895d-9e80312eb293_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:51:22 np0005593232 nova_compute[250269]: 2026-01-23 09:51:22.082 250273 DEBUG oslo_concurrency.processutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bc80c9f1-76f1-4875-895d-9e80312eb293/disk.config bc80c9f1-76f1-4875-895d-9e80312eb293_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:51:22 np0005593232 nova_compute[250269]: 2026-01-23 09:51:22.084 250273 INFO nova.virt.libvirt.driver [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Deleting local config drive /var/lib/nova/instances/bc80c9f1-76f1-4875-895d-9e80312eb293/disk.config because it was imported into RBD.#033[00m
Jan 23 04:51:22 np0005593232 kernel: tap30248fe9-6d: entered promiscuous mode
Jan 23 04:51:22 np0005593232 NetworkManager[49057]: <info>  [1769161882.1477] manager: (tap30248fe9-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/109)
Jan 23 04:51:22 np0005593232 ovn_controller[151001]: 2026-01-23T09:51:22Z|00221|binding|INFO|Claiming lport 30248fe9-6da5-4acd-b60f-7c588745f8f3 for this chassis.
Jan 23 04:51:22 np0005593232 ovn_controller[151001]: 2026-01-23T09:51:22Z|00222|binding|INFO|30248fe9-6da5-4acd-b60f-7c588745f8f3: Claiming fa:16:3e:d5:db:7e 10.100.0.9
Jan 23 04:51:22 np0005593232 nova_compute[250269]: 2026-01-23 09:51:22.148 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:22 np0005593232 nova_compute[250269]: 2026-01-23 09:51:22.152 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:22 np0005593232 nova_compute[250269]: 2026-01-23 09:51:22.155 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:22 np0005593232 nova_compute[250269]: 2026-01-23 09:51:22.164 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:22 np0005593232 NetworkManager[49057]: <info>  [1769161882.1666] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Jan 23 04:51:22 np0005593232 NetworkManager[49057]: <info>  [1769161882.1673] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Jan 23 04:51:22 np0005593232 systemd-udevd[297089]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:51:22 np0005593232 systemd-machined[215836]: New machine qemu-28-instance-00000049.
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.185 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:db:7e 10.100.0.9'], port_security=['fa:16:3e:d5:db:7e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'bc80c9f1-76f1-4875-895d-9e80312eb293', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7808328e-22f9-46df-ac06-f8c3d6ad10c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '390d19f683334995a5268cf9b4d5e464', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0547d145-6526-47bb-a492-48772f700715', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=396f5815-d5dc-4484-bb15-e71911e6f8a2, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=30248fe9-6da5-4acd-b60f-7c588745f8f3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.186 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 30248fe9-6da5-4acd-b60f-7c588745f8f3 in datapath 7808328e-22f9-46df-ac06-f8c3d6ad10c4 bound to our chassis#033[00m
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.188 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7808328e-22f9-46df-ac06-f8c3d6ad10c4#033[00m
Jan 23 04:51:22 np0005593232 NetworkManager[49057]: <info>  [1769161882.1938] device (tap30248fe9-6d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:51:22 np0005593232 NetworkManager[49057]: <info>  [1769161882.1944] device (tap30248fe9-6d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:51:22 np0005593232 systemd[1]: Started Virtual Machine qemu-28-instance-00000049.
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.200 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[253b01bd-e698-4e7f-a326-6671cdfd5675]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.201 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7808328e-21 in ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.203 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7808328e-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.203 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[221144db-6efb-4d6f-86ed-1ba92a044fc0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.204 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2d79d389-cbe6-4426-8ce0-f5c666ab2295]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.214 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[341735bc-d87c-4a3f-b733-e449128b63b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.237 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1a8186b8-f7d4-449d-b10d-d1352caf8667]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.265 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f2b656-9213-49e7-9ec4-813d5987444a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:51:22 np0005593232 NetworkManager[49057]: <info>  [1769161882.2727] manager: (tap7808328e-20): new Veth device (/org/freedesktop/NetworkManager/Devices/112)
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.271 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2141eaa6-bf8b-4de0-932f-7e5456b49be1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.300 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[fa454629-c1e3-45c6-9413-5d87b5c37373]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.302 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[ce7040cf-5096-4181-9e79-8e83c42fc0f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:51:22 np0005593232 NetworkManager[49057]: <info>  [1769161882.3241] device (tap7808328e-20): carrier: link connected
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.329 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[1f7012c5-2c2a-43b9-9047-5e3f010c5bbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.347 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c7524030-556f-4fae-bf72-cd5e81a26be4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7808328e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:22:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579546, 'reachable_time': 39774, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297122, 'error': None, 'target': 'ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:51:22 np0005593232 nova_compute[250269]: 2026-01-23 09:51:22.351 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.363 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[dcff7006-23ce-437f-b86f-7e5da83e3507]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febb:22ae'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 579546, 'tstamp': 579546}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297123, 'error': None, 'target': 'ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:51:22 np0005593232 nova_compute[250269]: 2026-01-23 09:51:22.368 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:22 np0005593232 ovn_controller[151001]: 2026-01-23T09:51:22Z|00223|binding|INFO|Setting lport 30248fe9-6da5-4acd-b60f-7c588745f8f3 ovn-installed in OVS
Jan 23 04:51:22 np0005593232 ovn_controller[151001]: 2026-01-23T09:51:22Z|00224|binding|INFO|Setting lport 30248fe9-6da5-4acd-b60f-7c588745f8f3 up in Southbound
Jan 23 04:51:22 np0005593232 nova_compute[250269]: 2026-01-23 09:51:22.380 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.380 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ceec3dcc-1b3f-4395-b4a1-11185ee0d84a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7808328e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:22:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579546, 'reachable_time': 39774, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297124, 'error': None, 'target': 'ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.410 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0cb2b23e-5a1a-4de1-b9fd-b18f2f85c73c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.461 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ea93e431-c5fe-4ae9-9ca4-b740bc7d8d7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.462 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7808328e-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.462 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.463 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7808328e-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:51:22 np0005593232 NetworkManager[49057]: <info>  [1769161882.4653] manager: (tap7808328e-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/113)
Jan 23 04:51:22 np0005593232 kernel: tap7808328e-20: entered promiscuous mode
Jan 23 04:51:22 np0005593232 nova_compute[250269]: 2026-01-23 09:51:22.466 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.468 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7808328e-20, col_values=(('external_ids', {'iface-id': 'db11772c-e758-43ff-997c-e8c835433e90'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:51:22 np0005593232 ovn_controller[151001]: 2026-01-23T09:51:22Z|00225|binding|INFO|Releasing lport db11772c-e758-43ff-997c-e8c835433e90 from this chassis (sb_readonly=0)
Jan 23 04:51:22 np0005593232 nova_compute[250269]: 2026-01-23 09:51:22.469 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:22 np0005593232 nova_compute[250269]: 2026-01-23 09:51:22.483 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.485 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7808328e-22f9-46df-ac06-f8c3d6ad10c4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7808328e-22f9-46df-ac06-f8c3d6ad10c4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.486 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d2c75d-03ae-4024-9ed8-cc2d4b43ba02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.486 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-7808328e-22f9-46df-ac06-f8c3d6ad10c4
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/7808328e-22f9-46df-ac06-f8c3d6ad10c4.pid.haproxy
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 7808328e-22f9-46df-ac06-f8c3d6ad10c4
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:51:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:22.488 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4', 'env', 'PROCESS_TAG=haproxy-7808328e-22f9-46df-ac06-f8c3d6ad10c4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7808328e-22f9-46df-ac06-f8c3d6ad10c4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 04:51:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1791: 321 pgs: 321 active+clean; 260 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 401 KiB/s rd, 7.5 MiB/s wr, 177 op/s
Jan 23 04:51:22 np0005593232 nova_compute[250269]: 2026-01-23 09:51:22.737 250273 DEBUG nova.compute.manager [req-b20f1c46-6a5f-4e06-89ea-fefccd0b9399 req-ec9ead62-19ed-4b2f-beca-75ff48373cf3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Received event network-vif-plugged-30248fe9-6da5-4acd-b60f-7c588745f8f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:51:22 np0005593232 nova_compute[250269]: 2026-01-23 09:51:22.738 250273 DEBUG oslo_concurrency.lockutils [req-b20f1c46-6a5f-4e06-89ea-fefccd0b9399 req-ec9ead62-19ed-4b2f-beca-75ff48373cf3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:51:22 np0005593232 nova_compute[250269]: 2026-01-23 09:51:22.743 250273 DEBUG oslo_concurrency.lockutils [req-b20f1c46-6a5f-4e06-89ea-fefccd0b9399 req-ec9ead62-19ed-4b2f-beca-75ff48373cf3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:51:22 np0005593232 nova_compute[250269]: 2026-01-23 09:51:22.743 250273 DEBUG oslo_concurrency.lockutils [req-b20f1c46-6a5f-4e06-89ea-fefccd0b9399 req-ec9ead62-19ed-4b2f-beca-75ff48373cf3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:51:22 np0005593232 nova_compute[250269]: 2026-01-23 09:51:22.743 250273 DEBUG nova.compute.manager [req-b20f1c46-6a5f-4e06-89ea-fefccd0b9399 req-ec9ead62-19ed-4b2f-beca-75ff48373cf3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Processing event network-vif-plugged-30248fe9-6da5-4acd-b60f-7c588745f8f3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 04:51:22 np0005593232 podman[297155]: 2026-01-23 09:51:22.891516731 +0000 UTC m=+0.050074073 container create e417cd1eb59f9139290bd86e4f48aff148900f298585ccdf421b19f3df47163c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:51:22 np0005593232 systemd[1]: Started libpod-conmon-e417cd1eb59f9139290bd86e4f48aff148900f298585ccdf421b19f3df47163c.scope.
Jan 23 04:51:22 np0005593232 nova_compute[250269]: 2026-01-23 09:51:22.930 250273 DEBUG nova.network.neutron [req-b3461747-93a6-466a-b0ea-ea60b033c8db req-8846f169-f91d-421e-898c-5a3ae3511ed0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Updated VIF entry in instance network info cache for port 30248fe9-6da5-4acd-b60f-7c588745f8f3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:51:22 np0005593232 nova_compute[250269]: 2026-01-23 09:51:22.931 250273 DEBUG nova.network.neutron [req-b3461747-93a6-466a-b0ea-ea60b033c8db req-8846f169-f91d-421e-898c-5a3ae3511ed0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Updating instance_info_cache with network_info: [{"id": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "address": "fa:16:3e:d5:db:7e", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30248fe9-6d", "ovs_interfaceid": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:51:22 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:51:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c24e6065da71dea0ac0848426c5c9e7169c59c90e4b8f7aa87931bf44e2b70be/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:51:22 np0005593232 podman[297155]: 2026-01-23 09:51:22.865810606 +0000 UTC m=+0.024367968 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:51:22 np0005593232 nova_compute[250269]: 2026-01-23 09:51:22.962 250273 DEBUG oslo_concurrency.lockutils [req-b3461747-93a6-466a-b0ea-ea60b033c8db req-8846f169-f91d-421e-898c-5a3ae3511ed0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:51:22 np0005593232 podman[297155]: 2026-01-23 09:51:22.974668749 +0000 UTC m=+0.133226091 container init e417cd1eb59f9139290bd86e4f48aff148900f298585ccdf421b19f3df47163c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 23 04:51:22 np0005593232 podman[297155]: 2026-01-23 09:51:22.980930388 +0000 UTC m=+0.139487730 container start e417cd1eb59f9139290bd86e4f48aff148900f298585ccdf421b19f3df47163c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:51:23 np0005593232 neutron-haproxy-ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4[297170]: [NOTICE]   (297174) : New worker (297176) forked
Jan 23 04:51:23 np0005593232 neutron-haproxy-ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4[297170]: [NOTICE]   (297174) : Loading success.
Jan 23 04:51:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:23.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:23 np0005593232 nova_compute[250269]: 2026-01-23 09:51:23.295 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161883.2950048, bc80c9f1-76f1-4875-895d-9e80312eb293 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:51:23 np0005593232 nova_compute[250269]: 2026-01-23 09:51:23.295 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] VM Started (Lifecycle Event)#033[00m
Jan 23 04:51:23 np0005593232 nova_compute[250269]: 2026-01-23 09:51:23.297 250273 DEBUG nova.compute.manager [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:51:23 np0005593232 nova_compute[250269]: 2026-01-23 09:51:23.300 250273 DEBUG nova.virt.libvirt.driver [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:51:23 np0005593232 nova_compute[250269]: 2026-01-23 09:51:23.303 250273 INFO nova.virt.libvirt.driver [-] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Instance spawned successfully.#033[00m
Jan 23 04:51:23 np0005593232 nova_compute[250269]: 2026-01-23 09:51:23.303 250273 DEBUG nova.virt.libvirt.driver [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:51:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:23.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:23 np0005593232 nova_compute[250269]: 2026-01-23 09:51:23.577 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:51:23 np0005593232 nova_compute[250269]: 2026-01-23 09:51:23.579 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:51:23 np0005593232 nova_compute[250269]: 2026-01-23 09:51:23.742 250273 DEBUG nova.virt.libvirt.driver [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:51:23 np0005593232 nova_compute[250269]: 2026-01-23 09:51:23.743 250273 DEBUG nova.virt.libvirt.driver [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:51:23 np0005593232 nova_compute[250269]: 2026-01-23 09:51:23.743 250273 DEBUG nova.virt.libvirt.driver [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:51:23 np0005593232 nova_compute[250269]: 2026-01-23 09:51:23.744 250273 DEBUG nova.virt.libvirt.driver [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:51:23 np0005593232 nova_compute[250269]: 2026-01-23 09:51:23.744 250273 DEBUG nova.virt.libvirt.driver [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:51:23 np0005593232 nova_compute[250269]: 2026-01-23 09:51:23.744 250273 DEBUG nova.virt.libvirt.driver [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:51:23 np0005593232 nova_compute[250269]: 2026-01-23 09:51:23.943 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:51:23 np0005593232 nova_compute[250269]: 2026-01-23 09:51:23.945 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161883.2972832, bc80c9f1-76f1-4875-895d-9e80312eb293 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:51:23 np0005593232 nova_compute[250269]: 2026-01-23 09:51:23.946 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:51:23 np0005593232 nova_compute[250269]: 2026-01-23 09:51:23.984 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:51:23 np0005593232 nova_compute[250269]: 2026-01-23 09:51:23.988 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161883.2999759, bc80c9f1-76f1-4875-895d-9e80312eb293 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:51:23 np0005593232 nova_compute[250269]: 2026-01-23 09:51:23.988 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:51:24 np0005593232 nova_compute[250269]: 2026-01-23 09:51:24.015 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:51:24 np0005593232 nova_compute[250269]: 2026-01-23 09:51:24.018 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:51:24 np0005593232 nova_compute[250269]: 2026-01-23 09:51:24.028 250273 INFO nova.compute.manager [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Took 13.88 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:51:24 np0005593232 nova_compute[250269]: 2026-01-23 09:51:24.028 250273 DEBUG nova.compute.manager [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:51:24 np0005593232 nova_compute[250269]: 2026-01-23 09:51:24.044 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:51:24 np0005593232 nova_compute[250269]: 2026-01-23 09:51:24.110 250273 INFO nova.compute.manager [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Took 15.06 seconds to build instance.#033[00m
Jan 23 04:51:24 np0005593232 nova_compute[250269]: 2026-01-23 09:51:24.128 250273 DEBUG oslo_concurrency.lockutils [None req-b4f262ff-98b8-479b-9487-ac5063d12792 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lock "bc80c9f1-76f1-4875-895d-9e80312eb293" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:51:24 np0005593232 nova_compute[250269]: 2026-01-23 09:51:24.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:51:24 np0005593232 nova_compute[250269]: 2026-01-23 09:51:24.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:51:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1792: 321 pgs: 321 active+clean; 260 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 401 KiB/s rd, 7.0 MiB/s wr, 179 op/s
Jan 23 04:51:24 np0005593232 nova_compute[250269]: 2026-01-23 09:51:24.635 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:24 np0005593232 nova_compute[250269]: 2026-01-23 09:51:24.925 250273 DEBUG nova.compute.manager [req-4f97dc68-cbce-4bee-9231-a87abd86296d req-877f2e47-2549-4702-b830-fafcf47f0a2c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Received event network-vif-plugged-30248fe9-6da5-4acd-b60f-7c588745f8f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:51:24 np0005593232 nova_compute[250269]: 2026-01-23 09:51:24.926 250273 DEBUG oslo_concurrency.lockutils [req-4f97dc68-cbce-4bee-9231-a87abd86296d req-877f2e47-2549-4702-b830-fafcf47f0a2c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:51:24 np0005593232 nova_compute[250269]: 2026-01-23 09:51:24.926 250273 DEBUG oslo_concurrency.lockutils [req-4f97dc68-cbce-4bee-9231-a87abd86296d req-877f2e47-2549-4702-b830-fafcf47f0a2c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:51:24 np0005593232 nova_compute[250269]: 2026-01-23 09:51:24.926 250273 DEBUG oslo_concurrency.lockutils [req-4f97dc68-cbce-4bee-9231-a87abd86296d req-877f2e47-2549-4702-b830-fafcf47f0a2c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:51:24 np0005593232 nova_compute[250269]: 2026-01-23 09:51:24.926 250273 DEBUG nova.compute.manager [req-4f97dc68-cbce-4bee-9231-a87abd86296d req-877f2e47-2549-4702-b830-fafcf47f0a2c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] No waiting events found dispatching network-vif-plugged-30248fe9-6da5-4acd-b60f-7c588745f8f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:51:24 np0005593232 nova_compute[250269]: 2026-01-23 09:51:24.926 250273 WARNING nova.compute.manager [req-4f97dc68-cbce-4bee-9231-a87abd86296d req-877f2e47-2549-4702-b830-fafcf47f0a2c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Received unexpected event network-vif-plugged-30248fe9-6da5-4acd-b60f-7c588745f8f3 for instance with vm_state active and task_state None.#033[00m
Jan 23 04:51:24 np0005593232 nova_compute[250269]: 2026-01-23 09:51:24.935 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:51:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:25.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:25 np0005593232 nova_compute[250269]: 2026-01-23 09:51:25.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:51:25 np0005593232 nova_compute[250269]: 2026-01-23 09:51:25.331 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:51:25 np0005593232 nova_compute[250269]: 2026-01-23 09:51:25.331 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:51:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:25.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:26 np0005593232 nova_compute[250269]: 2026-01-23 09:51:26.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:51:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1793: 321 pgs: 321 active+clean; 260 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.5 MiB/s wr, 163 op/s
Jan 23 04:51:26 np0005593232 nova_compute[250269]: 2026-01-23 09:51:26.560 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:51:26 np0005593232 nova_compute[250269]: 2026-01-23 09:51:26.561 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:51:26 np0005593232 nova_compute[250269]: 2026-01-23 09:51:26.561 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:51:26 np0005593232 nova_compute[250269]: 2026-01-23 09:51:26.562 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:51:26 np0005593232 nova_compute[250269]: 2026-01-23 09:51:26.562 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:51:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:51:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1632689523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:51:27 np0005593232 nova_compute[250269]: 2026-01-23 09:51:27.005 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:51:27 np0005593232 nova_compute[250269]: 2026-01-23 09:51:27.094 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000049 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:51:27 np0005593232 nova_compute[250269]: 2026-01-23 09:51:27.095 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000049 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:51:27 np0005593232 nova_compute[250269]: 2026-01-23 09:51:27.242 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:51:27 np0005593232 nova_compute[250269]: 2026-01-23 09:51:27.243 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4397MB free_disk=20.880149841308594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:51:27 np0005593232 nova_compute[250269]: 2026-01-23 09:51:27.243 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:51:27 np0005593232 nova_compute[250269]: 2026-01-23 09:51:27.244 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:51:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:27.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:51:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:27.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:51:27 np0005593232 ceph-mgr[74726]: [devicehealth INFO root] Check health
Jan 23 04:51:27 np0005593232 nova_compute[250269]: 2026-01-23 09:51:27.967 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance bc80c9f1-76f1-4875-895d-9e80312eb293 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:51:27 np0005593232 nova_compute[250269]: 2026-01-23 09:51:27.968 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:51:27 np0005593232 nova_compute[250269]: 2026-01-23 09:51:27.968 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:51:28 np0005593232 nova_compute[250269]: 2026-01-23 09:51:28.033 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:51:28 np0005593232 podman[297273]: 2026-01-23 09:51:28.389998734 +0000 UTC m=+0.051958037 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 04:51:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:51:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1647250811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:51:28 np0005593232 nova_compute[250269]: 2026-01-23 09:51:28.483 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:51:28 np0005593232 nova_compute[250269]: 2026-01-23 09:51:28.489 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:51:28 np0005593232 nova_compute[250269]: 2026-01-23 09:51:28.508 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:51:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1794: 321 pgs: 321 active+clean; 260 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.0 MiB/s wr, 210 op/s
Jan 23 04:51:28 np0005593232 nova_compute[250269]: 2026-01-23 09:51:28.534 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:51:28 np0005593232 nova_compute[250269]: 2026-01-23 09:51:28.535 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:51:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:29.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:29.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:29 np0005593232 nova_compute[250269]: 2026-01-23 09:51:29.597 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769161874.5962057, 1f303dea-3b7e-419e-b31c-b04209c0cd89 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:51:29 np0005593232 nova_compute[250269]: 2026-01-23 09:51:29.597 250273 INFO nova.compute.manager [-] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:51:29 np0005593232 nova_compute[250269]: 2026-01-23 09:51:29.624 250273 DEBUG nova.compute.manager [None req-7260cc93-c7c0-43b9-94d6-7fe8922cd132 - - - - - -] [instance: 1f303dea-3b7e-419e-b31c-b04209c0cd89] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:51:29 np0005593232 nova_compute[250269]: 2026-01-23 09:51:29.639 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:29 np0005593232 nova_compute[250269]: 2026-01-23 09:51:29.937 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:51:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1795: 321 pgs: 321 active+clean; 260 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.1 MiB/s wr, 150 op/s
Jan 23 04:51:30 np0005593232 nova_compute[250269]: 2026-01-23 09:51:30.536 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:51:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:51:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:31.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:51:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:31.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1796: 321 pgs: 321 active+clean; 260 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.1 MiB/s wr, 150 op/s
Jan 23 04:51:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:33.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:51:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:33.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:51:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1797: 321 pgs: 321 active+clean; 260 MiB data, 746 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 53 KiB/s wr, 148 op/s
Jan 23 04:51:34 np0005593232 nova_compute[250269]: 2026-01-23 09:51:34.643 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:34 np0005593232 nova_compute[250269]: 2026-01-23 09:51:34.954 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:51:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:51:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:35.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:51:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:35.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:35 np0005593232 nova_compute[250269]: 2026-01-23 09:51:35.413 250273 DEBUG nova.compute.manager [req-d6c3ef5f-a402-452d-bf37-19c4294e3218 req-e9db028d-b3c9-41b4-b8fe-edad2fa9a7fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Received event network-changed-30248fe9-6da5-4acd-b60f-7c588745f8f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:51:35 np0005593232 nova_compute[250269]: 2026-01-23 09:51:35.414 250273 DEBUG nova.compute.manager [req-d6c3ef5f-a402-452d-bf37-19c4294e3218 req-e9db028d-b3c9-41b4-b8fe-edad2fa9a7fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Refreshing instance network info cache due to event network-changed-30248fe9-6da5-4acd-b60f-7c588745f8f3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:51:35 np0005593232 nova_compute[250269]: 2026-01-23 09:51:35.414 250273 DEBUG oslo_concurrency.lockutils [req-d6c3ef5f-a402-452d-bf37-19c4294e3218 req-e9db028d-b3c9-41b4-b8fe-edad2fa9a7fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:51:35 np0005593232 nova_compute[250269]: 2026-01-23 09:51:35.414 250273 DEBUG oslo_concurrency.lockutils [req-d6c3ef5f-a402-452d-bf37-19c4294e3218 req-e9db028d-b3c9-41b4-b8fe-edad2fa9a7fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:51:35 np0005593232 nova_compute[250269]: 2026-01-23 09:51:35.414 250273 DEBUG nova.network.neutron [req-d6c3ef5f-a402-452d-bf37-19c4294e3218 req-e9db028d-b3c9-41b4-b8fe-edad2fa9a7fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Refreshing network info cache for port 30248fe9-6da5-4acd-b60f-7c588745f8f3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:51:35 np0005593232 ovn_controller[151001]: 2026-01-23T09:51:35Z|00226|binding|INFO|Releasing lport db11772c-e758-43ff-997c-e8c835433e90 from this chassis (sb_readonly=0)
Jan 23 04:51:36 np0005593232 nova_compute[250269]: 2026-01-23 09:51:36.081 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1798: 321 pgs: 321 active+clean; 282 MiB data, 765 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 1.7 MiB/s wr, 218 op/s
Jan 23 04:51:36 np0005593232 ovn_controller[151001]: 2026-01-23T09:51:36Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d5:db:7e 10.100.0.9
Jan 23 04:51:36 np0005593232 ovn_controller[151001]: 2026-01-23T09:51:36Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d5:db:7e 10.100.0.9
Jan 23 04:51:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:51:37
Jan 23 04:51:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:51:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:51:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'default.rgw.control', '.mgr', 'volumes', 'vms', 'cephfs.cephfs.meta', 'backups']
Jan 23 04:51:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:51:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:37.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:51:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:37.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:51:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:51:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:51:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:51:37 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 04:51:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:51:37 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 04:51:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:51:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:51:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:51:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:51:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:51:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:51:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:51:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:51:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:51:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:51:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:51:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:51:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1799: 321 pgs: 321 active+clean; 297 MiB data, 789 MiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 4.2 MiB/s wr, 305 op/s
Jan 23 04:51:38 np0005593232 nova_compute[250269]: 2026-01-23 09:51:38.547 250273 DEBUG nova.network.neutron [req-d6c3ef5f-a402-452d-bf37-19c4294e3218 req-e9db028d-b3c9-41b4-b8fe-edad2fa9a7fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Updated VIF entry in instance network info cache for port 30248fe9-6da5-4acd-b60f-7c588745f8f3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:51:38 np0005593232 nova_compute[250269]: 2026-01-23 09:51:38.548 250273 DEBUG nova.network.neutron [req-d6c3ef5f-a402-452d-bf37-19c4294e3218 req-e9db028d-b3c9-41b4-b8fe-edad2fa9a7fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Updating instance_info_cache with network_info: [{"id": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "address": "fa:16:3e:d5:db:7e", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30248fe9-6d", "ovs_interfaceid": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:51:38 np0005593232 nova_compute[250269]: 2026-01-23 09:51:38.572 250273 DEBUG oslo_concurrency.lockutils [req-d6c3ef5f-a402-452d-bf37-19c4294e3218 req-e9db028d-b3c9-41b4-b8fe-edad2fa9a7fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:51:38 np0005593232 nova_compute[250269]: 2026-01-23 09:51:38.624 250273 DEBUG nova.compute.manager [req-63701f98-59a6-4ffb-8379-c08803082bc7 req-9a65c139-cbe4-4d7a-a257-7f0870c7953b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Received event network-changed-30248fe9-6da5-4acd-b60f-7c588745f8f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:51:38 np0005593232 nova_compute[250269]: 2026-01-23 09:51:38.625 250273 DEBUG nova.compute.manager [req-63701f98-59a6-4ffb-8379-c08803082bc7 req-9a65c139-cbe4-4d7a-a257-7f0870c7953b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Refreshing instance network info cache due to event network-changed-30248fe9-6da5-4acd-b60f-7c588745f8f3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:51:38 np0005593232 nova_compute[250269]: 2026-01-23 09:51:38.625 250273 DEBUG oslo_concurrency.lockutils [req-63701f98-59a6-4ffb-8379-c08803082bc7 req-9a65c139-cbe4-4d7a-a257-7f0870c7953b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:51:38 np0005593232 nova_compute[250269]: 2026-01-23 09:51:38.625 250273 DEBUG oslo_concurrency.lockutils [req-63701f98-59a6-4ffb-8379-c08803082bc7 req-9a65c139-cbe4-4d7a-a257-7f0870c7953b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:51:38 np0005593232 nova_compute[250269]: 2026-01-23 09:51:38.625 250273 DEBUG nova.network.neutron [req-63701f98-59a6-4ffb-8379-c08803082bc7 req-9a65c139-cbe4-4d7a-a257-7f0870c7953b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Refreshing network info cache for port 30248fe9-6da5-4acd-b60f-7c588745f8f3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:51:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:39.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:39.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:39 np0005593232 nova_compute[250269]: 2026-01-23 09:51:39.655 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:39 np0005593232 nova_compute[250269]: 2026-01-23 09:51:39.956 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:51:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1800: 321 pgs: 321 active+clean; 297 MiB data, 789 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.2 MiB/s wr, 210 op/s
Jan 23 04:51:41 np0005593232 nova_compute[250269]: 2026-01-23 09:51:41.000 250273 DEBUG nova.network.neutron [req-63701f98-59a6-4ffb-8379-c08803082bc7 req-9a65c139-cbe4-4d7a-a257-7f0870c7953b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Updated VIF entry in instance network info cache for port 30248fe9-6da5-4acd-b60f-7c588745f8f3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:51:41 np0005593232 nova_compute[250269]: 2026-01-23 09:51:41.001 250273 DEBUG nova.network.neutron [req-63701f98-59a6-4ffb-8379-c08803082bc7 req-9a65c139-cbe4-4d7a-a257-7f0870c7953b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Updating instance_info_cache with network_info: [{"id": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "address": "fa:16:3e:d5:db:7e", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30248fe9-6d", "ovs_interfaceid": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:51:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:41.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:41.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:42 np0005593232 nova_compute[250269]: 2026-01-23 09:51:42.102 250273 DEBUG oslo_concurrency.lockutils [req-63701f98-59a6-4ffb-8379-c08803082bc7 req-9a65c139-cbe4-4d7a-a257-7f0870c7953b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:51:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1801: 321 pgs: 321 active+clean; 279 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 228 op/s
Jan 23 04:51:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:42.604 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:51:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:42.605 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:51:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:42.606 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:51:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:51:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:43.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:51:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:43.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1802: 321 pgs: 321 active+clean; 279 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 228 op/s
Jan 23 04:51:44 np0005593232 nova_compute[250269]: 2026-01-23 09:51:44.659 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:44 np0005593232 nova_compute[250269]: 2026-01-23 09:51:44.958 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:51:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:45.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:45.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1803: 321 pgs: 321 active+clean; 279 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 228 op/s
Jan 23 04:51:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:51:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:51:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:51:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:51:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006509507898287735 of space, bias 1.0, pg target 1.9528523694863207 quantized to 32 (current 32)
Jan 23 04:51:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:51:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Jan 23 04:51:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:51:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:51:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:51:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 23 04:51:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:51:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 04:51:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:51:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:51:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:51:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 04:51:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:51:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 04:51:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:51:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:51:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:51:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 04:51:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:51:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:47.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:51:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:47.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1804: 321 pgs: 321 active+clean; 279 MiB data, 775 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 4.6 MiB/s wr, 194 op/s
Jan 23 04:51:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:49.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:49.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:49 np0005593232 nova_compute[250269]: 2026-01-23 09:51:49.662 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:49 np0005593232 nova_compute[250269]: 2026-01-23 09:51:49.991 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:51:50 np0005593232 nova_compute[250269]: 2026-01-23 09:51:50.101 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1805: 321 pgs: 321 active+clean; 279 MiB data, 775 MiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 23 04:51:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:51:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:51.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:51:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:51.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:52 np0005593232 podman[297354]: 2026-01-23 09:51:52.477504612 +0000 UTC m=+0.134072655 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:51:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1806: 321 pgs: 321 active+clean; 283 MiB data, 770 MiB used, 20 GiB / 21 GiB avail; 73 KiB/s rd, 3.0 MiB/s wr, 100 op/s
Jan 23 04:51:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:51:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:53.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:51:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:53.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:51:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:51:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:51:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:51:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1807: 321 pgs: 321 active+clean; 293 MiB data, 788 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.6 MiB/s wr, 146 op/s
Jan 23 04:51:54 np0005593232 nova_compute[250269]: 2026-01-23 09:51:54.666 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:51:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:51:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:51:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:51:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:51:54 np0005593232 nova_compute[250269]: 2026-01-23 09:51:54.993 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:51:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:51:55 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 62258d0b-d296-4471-8d2f-d4d32d3e0f02 does not exist
Jan 23 04:51:55 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 42332b97-a483-4b01-b540-7d271fbf2cde does not exist
Jan 23 04:51:55 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 94c08877-ebf6-4c93-9cdf-5af5132a0f96 does not exist
Jan 23 04:51:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:51:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:51:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:51:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:51:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:51:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:51:55 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:51:55 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:51:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:55 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:51:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:55.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:55.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:55 np0005593232 podman[297774]: 2026-01-23 09:51:55.759537343 +0000 UTC m=+0.024046559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:51:55 np0005593232 podman[297774]: 2026-01-23 09:51:55.86923429 +0000 UTC m=+0.133743466 container create 5fbc97d08c07d7f2d1c40df503e30273173b4f15563dc64927c689656588eb11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lichterman, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 04:51:55 np0005593232 systemd[1]: Started libpod-conmon-5fbc97d08c07d7f2d1c40df503e30273173b4f15563dc64927c689656588eb11.scope.
Jan 23 04:51:55 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:51:55 np0005593232 podman[297774]: 2026-01-23 09:51:55.966929883 +0000 UTC m=+0.231439089 container init 5fbc97d08c07d7f2d1c40df503e30273173b4f15563dc64927c689656588eb11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:51:55 np0005593232 podman[297774]: 2026-01-23 09:51:55.973238074 +0000 UTC m=+0.237747260 container start 5fbc97d08c07d7f2d1c40df503e30273173b4f15563dc64927c689656588eb11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lichterman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 23 04:51:55 np0005593232 podman[297774]: 2026-01-23 09:51:55.976548858 +0000 UTC m=+0.241058104 container attach 5fbc97d08c07d7f2d1c40df503e30273173b4f15563dc64927c689656588eb11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lichterman, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:51:55 np0005593232 goofy_lichterman[297791]: 167 167
Jan 23 04:51:55 np0005593232 systemd[1]: libpod-5fbc97d08c07d7f2d1c40df503e30273173b4f15563dc64927c689656588eb11.scope: Deactivated successfully.
Jan 23 04:51:55 np0005593232 conmon[297791]: conmon 5fbc97d08c07d7f2d1c4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5fbc97d08c07d7f2d1c40df503e30273173b4f15563dc64927c689656588eb11.scope/container/memory.events
Jan 23 04:51:55 np0005593232 podman[297774]: 2026-01-23 09:51:55.980880872 +0000 UTC m=+0.245390088 container died 5fbc97d08c07d7f2d1c40df503e30273173b4f15563dc64927c689656588eb11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lichterman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Jan 23 04:51:56 np0005593232 systemd[1]: var-lib-containers-storage-overlay-89e219085c714e7ec0e708b82e32f0cefe3ee179d5f8a2d433fc5e90f8901fcf-merged.mount: Deactivated successfully.
Jan 23 04:51:56 np0005593232 podman[297774]: 2026-01-23 09:51:56.019987381 +0000 UTC m=+0.284496557 container remove 5fbc97d08c07d7f2d1c40df503e30273173b4f15563dc64927c689656588eb11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:51:56 np0005593232 systemd[1]: libpod-conmon-5fbc97d08c07d7f2d1c40df503e30273173b4f15563dc64927c689656588eb11.scope: Deactivated successfully.
Jan 23 04:51:56 np0005593232 podman[297815]: 2026-01-23 09:51:56.201149601 +0000 UTC m=+0.039720977 container create 25884c4d1f09d4fbb80388903590e97d31762679765e8c6bf22aeb5cb785939b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:51:56 np0005593232 systemd[1]: Started libpod-conmon-25884c4d1f09d4fbb80388903590e97d31762679765e8c6bf22aeb5cb785939b.scope.
Jan 23 04:51:56 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:51:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9628bb1ea57bfc6aed065ffd29c4087b3776e601a3b86d3623e3e008c43c3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:51:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9628bb1ea57bfc6aed065ffd29c4087b3776e601a3b86d3623e3e008c43c3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:51:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9628bb1ea57bfc6aed065ffd29c4087b3776e601a3b86d3623e3e008c43c3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:51:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9628bb1ea57bfc6aed065ffd29c4087b3776e601a3b86d3623e3e008c43c3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:51:56 np0005593232 podman[297815]: 2026-01-23 09:51:56.184994659 +0000 UTC m=+0.023566055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:51:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b9628bb1ea57bfc6aed065ffd29c4087b3776e601a3b86d3623e3e008c43c3e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:51:56 np0005593232 podman[297815]: 2026-01-23 09:51:56.290790274 +0000 UTC m=+0.129361680 container init 25884c4d1f09d4fbb80388903590e97d31762679765e8c6bf22aeb5cb785939b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kare, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:51:56 np0005593232 podman[297815]: 2026-01-23 09:51:56.299674548 +0000 UTC m=+0.138245924 container start 25884c4d1f09d4fbb80388903590e97d31762679765e8c6bf22aeb5cb785939b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 04:51:56 np0005593232 podman[297815]: 2026-01-23 09:51:56.303344743 +0000 UTC m=+0.141916119 container attach 25884c4d1f09d4fbb80388903590e97d31762679765e8c6bf22aeb5cb785939b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:51:56 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:51:56 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:51:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1808: 321 pgs: 321 active+clean; 278 MiB data, 788 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 171 op/s
Jan 23 04:51:57 np0005593232 relaxed_kare[297831]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:51:57 np0005593232 relaxed_kare[297831]: --> relative data size: 1.0
Jan 23 04:51:57 np0005593232 relaxed_kare[297831]: --> All data devices are unavailable
Jan 23 04:51:57 np0005593232 systemd[1]: libpod-25884c4d1f09d4fbb80388903590e97d31762679765e8c6bf22aeb5cb785939b.scope: Deactivated successfully.
Jan 23 04:51:57 np0005593232 conmon[297831]: conmon 25884c4d1f09d4fbb803 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-25884c4d1f09d4fbb80388903590e97d31762679765e8c6bf22aeb5cb785939b.scope/container/memory.events
Jan 23 04:51:57 np0005593232 podman[297815]: 2026-01-23 09:51:57.174191236 +0000 UTC m=+1.012762632 container died 25884c4d1f09d4fbb80388903590e97d31762679765e8c6bf22aeb5cb785939b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 04:51:57 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5b9628bb1ea57bfc6aed065ffd29c4087b3776e601a3b86d3623e3e008c43c3e-merged.mount: Deactivated successfully.
Jan 23 04:51:57 np0005593232 podman[297815]: 2026-01-23 09:51:57.238180596 +0000 UTC m=+1.076751972 container remove 25884c4d1f09d4fbb80388903590e97d31762679765e8c6bf22aeb5cb785939b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_kare, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:51:57 np0005593232 systemd[1]: libpod-conmon-25884c4d1f09d4fbb80388903590e97d31762679765e8c6bf22aeb5cb785939b.scope: Deactivated successfully.
Jan 23 04:51:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:57.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:57.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:57 np0005593232 podman[298000]: 2026-01-23 09:51:57.843083583 +0000 UTC m=+0.040301444 container create 589999e2da3c38121352d8c388b736f4c277469e50a632eadfd4d8025de05003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_khorana, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:51:57 np0005593232 systemd[1]: Started libpod-conmon-589999e2da3c38121352d8c388b736f4c277469e50a632eadfd4d8025de05003.scope.
Jan 23 04:51:57 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:51:57 np0005593232 podman[298000]: 2026-01-23 09:51:57.916053659 +0000 UTC m=+0.113271560 container init 589999e2da3c38121352d8c388b736f4c277469e50a632eadfd4d8025de05003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_khorana, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:51:57 np0005593232 podman[298000]: 2026-01-23 09:51:57.825556401 +0000 UTC m=+0.022774282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:51:57 np0005593232 podman[298000]: 2026-01-23 09:51:57.924445469 +0000 UTC m=+0.121663320 container start 589999e2da3c38121352d8c388b736f4c277469e50a632eadfd4d8025de05003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_khorana, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:51:57 np0005593232 podman[298000]: 2026-01-23 09:51:57.928091394 +0000 UTC m=+0.125309275 container attach 589999e2da3c38121352d8c388b736f4c277469e50a632eadfd4d8025de05003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_khorana, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 04:51:57 np0005593232 naughty_khorana[298016]: 167 167
Jan 23 04:51:57 np0005593232 systemd[1]: libpod-589999e2da3c38121352d8c388b736f4c277469e50a632eadfd4d8025de05003.scope: Deactivated successfully.
Jan 23 04:51:57 np0005593232 podman[298000]: 2026-01-23 09:51:57.929482843 +0000 UTC m=+0.126700704 container died 589999e2da3c38121352d8c388b736f4c277469e50a632eadfd4d8025de05003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:51:57 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a5e93e938d4c94cab0c30cf2718a1078d15fb8bb01aca9afcc3f2e614fa99400-merged.mount: Deactivated successfully.
Jan 23 04:51:57 np0005593232 podman[298000]: 2026-01-23 09:51:57.966590264 +0000 UTC m=+0.163808125 container remove 589999e2da3c38121352d8c388b736f4c277469e50a632eadfd4d8025de05003 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_khorana, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:51:57 np0005593232 systemd[1]: libpod-conmon-589999e2da3c38121352d8c388b736f4c277469e50a632eadfd4d8025de05003.scope: Deactivated successfully.
Jan 23 04:51:58 np0005593232 podman[298039]: 2026-01-23 09:51:58.140658102 +0000 UTC m=+0.037350309 container create 846c1808dc50c3573c934169596715d4009cae98f5ea69d65efb18aa2fb4dd5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 04:51:58 np0005593232 systemd[1]: Started libpod-conmon-846c1808dc50c3573c934169596715d4009cae98f5ea69d65efb18aa2fb4dd5c.scope.
Jan 23 04:51:58 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:51:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c427f2c9e27141ea40c1ca78a3ad550f797c2cd1287ff572947e93ec2a94e2e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:51:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c427f2c9e27141ea40c1ca78a3ad550f797c2cd1287ff572947e93ec2a94e2e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:51:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c427f2c9e27141ea40c1ca78a3ad550f797c2cd1287ff572947e93ec2a94e2e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:51:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c427f2c9e27141ea40c1ca78a3ad550f797c2cd1287ff572947e93ec2a94e2e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:51:58 np0005593232 podman[298039]: 2026-01-23 09:51:58.208662097 +0000 UTC m=+0.105354304 container init 846c1808dc50c3573c934169596715d4009cae98f5ea69d65efb18aa2fb4dd5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ritchie, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 04:51:58 np0005593232 podman[298039]: 2026-01-23 09:51:58.215054089 +0000 UTC m=+0.111746296 container start 846c1808dc50c3573c934169596715d4009cae98f5ea69d65efb18aa2fb4dd5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ritchie, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 04:51:58 np0005593232 podman[298039]: 2026-01-23 09:51:58.12413829 +0000 UTC m=+0.020830517 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:51:58 np0005593232 podman[298039]: 2026-01-23 09:51:58.218189319 +0000 UTC m=+0.114881526 container attach 846c1808dc50c3573c934169596715d4009cae98f5ea69d65efb18aa2fb4dd5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:51:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1809: 321 pgs: 321 active+clean; 247 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 195 op/s
Jan 23 04:51:58 np0005593232 nova_compute[250269]: 2026-01-23 09:51:58.774 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:58.776 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:51:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:51:58.777 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]: {
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:    "0": [
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:        {
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:            "devices": [
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:                "/dev/loop3"
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:            ],
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:            "lv_name": "ceph_lv0",
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:            "lv_size": "7511998464",
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:            "name": "ceph_lv0",
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:            "tags": {
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:                "ceph.cluster_name": "ceph",
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:                "ceph.crush_device_class": "",
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:                "ceph.encrypted": "0",
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:                "ceph.osd_id": "0",
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:                "ceph.type": "block",
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:                "ceph.vdo": "0"
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:            },
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:            "type": "block",
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:            "vg_name": "ceph_vg0"
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:        }
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]:    ]
Jan 23 04:51:59 np0005593232 confident_ritchie[298055]: }
Jan 23 04:51:59 np0005593232 systemd[1]: libpod-846c1808dc50c3573c934169596715d4009cae98f5ea69d65efb18aa2fb4dd5c.scope: Deactivated successfully.
Jan 23 04:51:59 np0005593232 podman[298065]: 2026-01-23 09:51:59.099634285 +0000 UTC m=+0.026777577 container died 846c1808dc50c3573c934169596715d4009cae98f5ea69d65efb18aa2fb4dd5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ritchie, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 04:51:59 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c427f2c9e27141ea40c1ca78a3ad550f797c2cd1287ff572947e93ec2a94e2e8-merged.mount: Deactivated successfully.
Jan 23 04:51:59 np0005593232 podman[298065]: 2026-01-23 09:51:59.174678831 +0000 UTC m=+0.101822123 container remove 846c1808dc50c3573c934169596715d4009cae98f5ea69d65efb18aa2fb4dd5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 04:51:59 np0005593232 systemd[1]: libpod-conmon-846c1808dc50c3573c934169596715d4009cae98f5ea69d65efb18aa2fb4dd5c.scope: Deactivated successfully.
Jan 23 04:51:59 np0005593232 podman[298064]: 2026-01-23 09:51:59.196471394 +0000 UTC m=+0.100103334 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 23 04:51:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:51:59.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:51:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:51:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:51:59.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:51:59 np0005593232 nova_compute[250269]: 2026-01-23 09:51:59.668 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:51:59 np0005593232 podman[298287]: 2026-01-23 09:51:59.805258843 +0000 UTC m=+0.048043245 container create e3504dd23d2b6525026ff6550db8f70fbb642c486e183a55364abafae3cf9665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 04:51:59 np0005593232 systemd[1]: Started libpod-conmon-e3504dd23d2b6525026ff6550db8f70fbb642c486e183a55364abafae3cf9665.scope.
Jan 23 04:51:59 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:51:59 np0005593232 podman[298287]: 2026-01-23 09:51:59.872778854 +0000 UTC m=+0.115563276 container init e3504dd23d2b6525026ff6550db8f70fbb642c486e183a55364abafae3cf9665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:51:59 np0005593232 podman[298287]: 2026-01-23 09:51:59.780523745 +0000 UTC m=+0.023308237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:51:59 np0005593232 podman[298287]: 2026-01-23 09:51:59.881012429 +0000 UTC m=+0.123796831 container start e3504dd23d2b6525026ff6550db8f70fbb642c486e183a55364abafae3cf9665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:51:59 np0005593232 podman[298287]: 2026-01-23 09:51:59.884999313 +0000 UTC m=+0.127783715 container attach e3504dd23d2b6525026ff6550db8f70fbb642c486e183a55364abafae3cf9665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wu, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:51:59 np0005593232 jovial_wu[298304]: 167 167
Jan 23 04:51:59 np0005593232 systemd[1]: libpod-e3504dd23d2b6525026ff6550db8f70fbb642c486e183a55364abafae3cf9665.scope: Deactivated successfully.
Jan 23 04:51:59 np0005593232 podman[298287]: 2026-01-23 09:51:59.887475734 +0000 UTC m=+0.130260136 container died e3504dd23d2b6525026ff6550db8f70fbb642c486e183a55364abafae3cf9665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wu, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 04:51:59 np0005593232 systemd[1]: var-lib-containers-storage-overlay-36af0552b4913daa7f8b5906b88d2b63d028c5390704768b492b9bd08e3df87a-merged.mount: Deactivated successfully.
Jan 23 04:51:59 np0005593232 podman[298287]: 2026-01-23 09:51:59.93175128 +0000 UTC m=+0.174535682 container remove e3504dd23d2b6525026ff6550db8f70fbb642c486e183a55364abafae3cf9665 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:51:59 np0005593232 systemd[1]: libpod-conmon-e3504dd23d2b6525026ff6550db8f70fbb642c486e183a55364abafae3cf9665.scope: Deactivated successfully.
Jan 23 04:51:59 np0005593232 nova_compute[250269]: 2026-01-23 09:51:59.994 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:52:00 np0005593232 podman[298328]: 2026-01-23 09:52:00.099636091 +0000 UTC m=+0.037448442 container create 091159274eae4bf23fa0c06cea3c71252ed2a3f033165efbc0f953ad015b0b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 04:52:00 np0005593232 systemd[1]: Started libpod-conmon-091159274eae4bf23fa0c06cea3c71252ed2a3f033165efbc0f953ad015b0b17.scope.
Jan 23 04:52:00 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:52:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/126bb473f14a54e8fc163780ab8a46ff18944bbaf2f7de26664502a9abb0aa80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:52:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/126bb473f14a54e8fc163780ab8a46ff18944bbaf2f7de26664502a9abb0aa80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:52:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/126bb473f14a54e8fc163780ab8a46ff18944bbaf2f7de26664502a9abb0aa80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:52:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/126bb473f14a54e8fc163780ab8a46ff18944bbaf2f7de26664502a9abb0aa80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:52:00 np0005593232 podman[298328]: 2026-01-23 09:52:00.180456462 +0000 UTC m=+0.118268843 container init 091159274eae4bf23fa0c06cea3c71252ed2a3f033165efbc0f953ad015b0b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 04:52:00 np0005593232 podman[298328]: 2026-01-23 09:52:00.083918031 +0000 UTC m=+0.021730402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:52:00 np0005593232 podman[298328]: 2026-01-23 09:52:00.187263536 +0000 UTC m=+0.125075887 container start 091159274eae4bf23fa0c06cea3c71252ed2a3f033165efbc0f953ad015b0b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dubinsky, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 04:52:00 np0005593232 podman[298328]: 2026-01-23 09:52:00.190087347 +0000 UTC m=+0.127899708 container attach 091159274eae4bf23fa0c06cea3c71252ed2a3f033165efbc0f953ad015b0b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Jan 23 04:52:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1810: 321 pgs: 321 active+clean; 247 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 153 op/s
Jan 23 04:52:00 np0005593232 nova_compute[250269]: 2026-01-23 09:52:00.729 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:00 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:00.780 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:52:01 np0005593232 mystifying_dubinsky[298345]: {
Jan 23 04:52:01 np0005593232 mystifying_dubinsky[298345]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:52:01 np0005593232 mystifying_dubinsky[298345]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:52:01 np0005593232 mystifying_dubinsky[298345]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:52:01 np0005593232 mystifying_dubinsky[298345]:        "osd_id": 0,
Jan 23 04:52:01 np0005593232 mystifying_dubinsky[298345]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:52:01 np0005593232 mystifying_dubinsky[298345]:        "type": "bluestore"
Jan 23 04:52:01 np0005593232 mystifying_dubinsky[298345]:    }
Jan 23 04:52:01 np0005593232 mystifying_dubinsky[298345]: }
Jan 23 04:52:01 np0005593232 systemd[1]: libpod-091159274eae4bf23fa0c06cea3c71252ed2a3f033165efbc0f953ad015b0b17.scope: Deactivated successfully.
Jan 23 04:52:01 np0005593232 podman[298328]: 2026-01-23 09:52:01.084202404 +0000 UTC m=+1.022014765 container died 091159274eae4bf23fa0c06cea3c71252ed2a3f033165efbc0f953ad015b0b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:52:01 np0005593232 systemd[1]: var-lib-containers-storage-overlay-126bb473f14a54e8fc163780ab8a46ff18944bbaf2f7de26664502a9abb0aa80-merged.mount: Deactivated successfully.
Jan 23 04:52:01 np0005593232 podman[298328]: 2026-01-23 09:52:01.142296726 +0000 UTC m=+1.080109077 container remove 091159274eae4bf23fa0c06cea3c71252ed2a3f033165efbc0f953ad015b0b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 04:52:01 np0005593232 systemd[1]: libpod-conmon-091159274eae4bf23fa0c06cea3c71252ed2a3f033165efbc0f953ad015b0b17.scope: Deactivated successfully.
Jan 23 04:52:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:52:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:52:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:52:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:52:01 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 953eeb66-c7f5-411e-a639-6a892d2594f7 does not exist
Jan 23 04:52:01 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d0ea3e1d-ac85-4dc3-894b-0b172a697685 does not exist
Jan 23 04:52:01 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 726a9db6-3a98-49d7-acea-c59af506f149 does not exist
Jan 23 04:52:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:01.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:01.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:01 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:52:01 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:52:02 np0005593232 nova_compute[250269]: 2026-01-23 09:52:02.382 250273 DEBUG oslo_concurrency.lockutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquiring lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:52:02 np0005593232 nova_compute[250269]: 2026-01-23 09:52:02.384 250273 DEBUG oslo_concurrency.lockutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:52:02 np0005593232 nova_compute[250269]: 2026-01-23 09:52:02.457 250273 DEBUG nova.compute.manager [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:52:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1811: 321 pgs: 321 active+clean; 247 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.6 MiB/s wr, 175 op/s
Jan 23 04:52:02 np0005593232 nova_compute[250269]: 2026-01-23 09:52:02.687 250273 DEBUG oslo_concurrency.lockutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:52:02 np0005593232 nova_compute[250269]: 2026-01-23 09:52:02.688 250273 DEBUG oslo_concurrency.lockutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:52:02 np0005593232 nova_compute[250269]: 2026-01-23 09:52:02.700 250273 DEBUG nova.virt.hardware [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:52:02 np0005593232 nova_compute[250269]: 2026-01-23 09:52:02.701 250273 INFO nova.compute.claims [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:52:02 np0005593232 nova_compute[250269]: 2026-01-23 09:52:02.871 250273 DEBUG oslo_concurrency.processutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:52:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:52:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/773083208' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.316 250273 DEBUG oslo_concurrency.processutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.323 250273 DEBUG nova.compute.provider_tree [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:52:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:03.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.375 250273 DEBUG nova.scheduler.client.report [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.405 250273 DEBUG oslo_concurrency.lockutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.406 250273 DEBUG nova.compute.manager [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 04:52:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:03.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.563 250273 DEBUG nova.compute.manager [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.564 250273 DEBUG nova.network.neutron [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.604 250273 INFO nova.virt.libvirt.driver [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.625 250273 DEBUG nova.compute.manager [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.686 250273 DEBUG nova.compute.manager [req-5cc393d4-3d13-410a-a3e5-62b1c930e191 req-5428b171-8b16-4de7-90ae-38ab73c765ff 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Received event network-changed-30248fe9-6da5-4acd-b60f-7c588745f8f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.686 250273 DEBUG nova.compute.manager [req-5cc393d4-3d13-410a-a3e5-62b1c930e191 req-5428b171-8b16-4de7-90ae-38ab73c765ff 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Refreshing instance network info cache due to event network-changed-30248fe9-6da5-4acd-b60f-7c588745f8f3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.687 250273 DEBUG oslo_concurrency.lockutils [req-5cc393d4-3d13-410a-a3e5-62b1c930e191 req-5428b171-8b16-4de7-90ae-38ab73c765ff 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.687 250273 DEBUG oslo_concurrency.lockutils [req-5cc393d4-3d13-410a-a3e5-62b1c930e191 req-5428b171-8b16-4de7-90ae-38ab73c765ff 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.688 250273 DEBUG nova.network.neutron [req-5cc393d4-3d13-410a-a3e5-62b1c930e191 req-5428b171-8b16-4de7-90ae-38ab73c765ff 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Refreshing network info cache for port 30248fe9-6da5-4acd-b60f-7c588745f8f3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.760 250273 DEBUG nova.compute.manager [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.761 250273 DEBUG nova.virt.libvirt.driver [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.762 250273 INFO nova.virt.libvirt.driver [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Creating image(s)#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.791 250273 DEBUG nova.storage.rbd_utils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] rbd image a3c08e79-4f2b-42f2-bcac-21cbcfbc5247_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.823 250273 DEBUG nova.storage.rbd_utils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] rbd image a3c08e79-4f2b-42f2-bcac-21cbcfbc5247_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.852 250273 DEBUG nova.storage.rbd_utils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] rbd image a3c08e79-4f2b-42f2-bcac-21cbcfbc5247_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.857 250273 DEBUG oslo_concurrency.processutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.917 250273 DEBUG nova.policy [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0cfac2191989448ead77e75ca3910ac4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '86d938c8e2bb41a79012befd500d1088', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.925 250273 DEBUG oslo_concurrency.processutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.926 250273 DEBUG oslo_concurrency.lockutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.927 250273 DEBUG oslo_concurrency.lockutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.928 250273 DEBUG oslo_concurrency.lockutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.953 250273 DEBUG nova.storage.rbd_utils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] rbd image a3c08e79-4f2b-42f2-bcac-21cbcfbc5247_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:52:03 np0005593232 nova_compute[250269]: 2026-01-23 09:52:03.957 250273 DEBUG oslo_concurrency.processutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 a3c08e79-4f2b-42f2-bcac-21cbcfbc5247_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:52:04 np0005593232 nova_compute[250269]: 2026-01-23 09:52:04.269 250273 DEBUG oslo_concurrency.processutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 a3c08e79-4f2b-42f2-bcac-21cbcfbc5247_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:52:04 np0005593232 nova_compute[250269]: 2026-01-23 09:52:04.332 250273 DEBUG nova.storage.rbd_utils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] resizing rbd image a3c08e79-4f2b-42f2-bcac-21cbcfbc5247_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 04:52:04 np0005593232 nova_compute[250269]: 2026-01-23 09:52:04.432 250273 DEBUG nova.objects.instance [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lazy-loading 'migration_context' on Instance uuid a3c08e79-4f2b-42f2-bcac-21cbcfbc5247 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:52:04 np0005593232 nova_compute[250269]: 2026-01-23 09:52:04.467 250273 DEBUG nova.virt.libvirt.driver [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:52:04 np0005593232 nova_compute[250269]: 2026-01-23 09:52:04.468 250273 DEBUG nova.virt.libvirt.driver [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Ensure instance console log exists: /var/lib/nova/instances/a3c08e79-4f2b-42f2-bcac-21cbcfbc5247/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:52:04 np0005593232 nova_compute[250269]: 2026-01-23 09:52:04.468 250273 DEBUG oslo_concurrency.lockutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:52:04 np0005593232 nova_compute[250269]: 2026-01-23 09:52:04.468 250273 DEBUG oslo_concurrency.lockutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:52:04 np0005593232 nova_compute[250269]: 2026-01-23 09:52:04.469 250273 DEBUG oslo_concurrency.lockutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:52:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1812: 321 pgs: 321 active+clean; 247 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 659 KiB/s wr, 178 op/s
Jan 23 04:52:04 np0005593232 nova_compute[250269]: 2026-01-23 09:52:04.671 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:04 np0005593232 nova_compute[250269]: 2026-01-23 09:52:04.996 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:52:05 np0005593232 nova_compute[250269]: 2026-01-23 09:52:05.053 250273 DEBUG nova.network.neutron [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Successfully created port: d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 04:52:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:05.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:05.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1813: 321 pgs: 321 active+clean; 262 MiB data, 777 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 888 KiB/s wr, 117 op/s
Jan 23 04:52:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:52:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:07.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:52:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:07.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:52:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:52:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:52:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:52:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:52:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:52:08 np0005593232 nova_compute[250269]: 2026-01-23 09:52:08.393 250273 DEBUG nova.network.neutron [req-5cc393d4-3d13-410a-a3e5-62b1c930e191 req-5428b171-8b16-4de7-90ae-38ab73c765ff 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Updated VIF entry in instance network info cache for port 30248fe9-6da5-4acd-b60f-7c588745f8f3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:52:08 np0005593232 nova_compute[250269]: 2026-01-23 09:52:08.394 250273 DEBUG nova.network.neutron [req-5cc393d4-3d13-410a-a3e5-62b1c930e191 req-5428b171-8b16-4de7-90ae-38ab73c765ff 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Updating instance_info_cache with network_info: [{"id": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "address": "fa:16:3e:d5:db:7e", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30248fe9-6d", "ovs_interfaceid": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:52:08 np0005593232 nova_compute[250269]: 2026-01-23 09:52:08.534 250273 DEBUG oslo_concurrency.lockutils [req-5cc393d4-3d13-410a-a3e5-62b1c930e191 req-5428b171-8b16-4de7-90ae-38ab73c765ff 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:52:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1814: 321 pgs: 321 active+clean; 293 MiB data, 788 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Jan 23 04:52:09 np0005593232 nova_compute[250269]: 2026-01-23 09:52:09.267 250273 DEBUG nova.network.neutron [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Successfully updated port: d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 04:52:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:52:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:09.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:52:09 np0005593232 nova_compute[250269]: 2026-01-23 09:52:09.345 250273 DEBUG oslo_concurrency.lockutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquiring lock "refresh_cache-a3c08e79-4f2b-42f2-bcac-21cbcfbc5247" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:52:09 np0005593232 nova_compute[250269]: 2026-01-23 09:52:09.345 250273 DEBUG oslo_concurrency.lockutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquired lock "refresh_cache-a3c08e79-4f2b-42f2-bcac-21cbcfbc5247" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:52:09 np0005593232 nova_compute[250269]: 2026-01-23 09:52:09.345 250273 DEBUG nova.network.neutron [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:52:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:09.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:09 np0005593232 nova_compute[250269]: 2026-01-23 09:52:09.674 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:09 np0005593232 nova_compute[250269]: 2026-01-23 09:52:09.843 250273 DEBUG nova.compute.manager [req-ea4f40eb-0e4c-404e-9fa7-a3328e1edf88 req-25eb2ce3-cf5f-482f-b2a8-efd92a877637 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Received event network-changed-d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:52:09 np0005593232 nova_compute[250269]: 2026-01-23 09:52:09.843 250273 DEBUG nova.compute.manager [req-ea4f40eb-0e4c-404e-9fa7-a3328e1edf88 req-25eb2ce3-cf5f-482f-b2a8-efd92a877637 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Refreshing instance network info cache due to event network-changed-d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:52:09 np0005593232 nova_compute[250269]: 2026-01-23 09:52:09.843 250273 DEBUG oslo_concurrency.lockutils [req-ea4f40eb-0e4c-404e-9fa7-a3328e1edf88 req-25eb2ce3-cf5f-482f-b2a8-efd92a877637 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-a3c08e79-4f2b-42f2-bcac-21cbcfbc5247" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:52:09 np0005593232 nova_compute[250269]: 2026-01-23 09:52:09.998 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:52:10 np0005593232 nova_compute[250269]: 2026-01-23 09:52:10.156 250273 DEBUG nova.network.neutron [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:52:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1815: 321 pgs: 321 active+clean; 293 MiB data, 788 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Jan 23 04:52:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:52:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:11.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:52:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:11.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:11 np0005593232 nova_compute[250269]: 2026-01-23 09:52:11.780 250273 DEBUG oslo_concurrency.lockutils [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Acquiring lock "interface-bc80c9f1-76f1-4875-895d-9e80312eb293-af80eab2-c3b9-439d-baae-ee0d90b6cdda" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:52:11 np0005593232 nova_compute[250269]: 2026-01-23 09:52:11.780 250273 DEBUG oslo_concurrency.lockutils [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lock "interface-bc80c9f1-76f1-4875-895d-9e80312eb293-af80eab2-c3b9-439d-baae-ee0d90b6cdda" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:52:11 np0005593232 nova_compute[250269]: 2026-01-23 09:52:11.780 250273 DEBUG nova.objects.instance [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lazy-loading 'flavor' on Instance uuid bc80c9f1-76f1-4875-895d-9e80312eb293 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:52:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1816: 321 pgs: 321 active+clean; 297 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 97 op/s
Jan 23 04:52:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:13.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:13.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1817: 321 pgs: 321 active+clean; 324 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 129 op/s
Jan 23 04:52:14 np0005593232 nova_compute[250269]: 2026-01-23 09:52:14.677 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:14 np0005593232 nova_compute[250269]: 2026-01-23 09:52:14.961 250273 DEBUG nova.network.neutron [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Updating instance_info_cache with network_info: [{"id": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "address": "fa:16:3e:49:23:e7", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7d188d3-bf", "ovs_interfaceid": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.000 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.147 250273 DEBUG oslo_concurrency.lockutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Releasing lock "refresh_cache-a3c08e79-4f2b-42f2-bcac-21cbcfbc5247" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.147 250273 DEBUG nova.compute.manager [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Instance network_info: |[{"id": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "address": "fa:16:3e:49:23:e7", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7d188d3-bf", "ovs_interfaceid": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.148 250273 DEBUG oslo_concurrency.lockutils [req-ea4f40eb-0e4c-404e-9fa7-a3328e1edf88 req-25eb2ce3-cf5f-482f-b2a8-efd92a877637 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-a3c08e79-4f2b-42f2-bcac-21cbcfbc5247" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.148 250273 DEBUG nova.network.neutron [req-ea4f40eb-0e4c-404e-9fa7-a3328e1edf88 req-25eb2ce3-cf5f-482f-b2a8-efd92a877637 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Refreshing network info cache for port d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.150 250273 DEBUG nova.virt.libvirt.driver [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Start _get_guest_xml network_info=[{"id": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "address": "fa:16:3e:49:23:e7", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7d188d3-bf", "ovs_interfaceid": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.155 250273 WARNING nova.virt.libvirt.driver [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.187 250273 DEBUG nova.virt.libvirt.host [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.188 250273 DEBUG nova.virt.libvirt.host [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.193 250273 DEBUG nova.virt.libvirt.host [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.194 250273 DEBUG nova.virt.libvirt.host [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.195 250273 DEBUG nova.virt.libvirt.driver [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.195 250273 DEBUG nova.virt.hardware [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.195 250273 DEBUG nova.virt.hardware [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.195 250273 DEBUG nova.virt.hardware [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.196 250273 DEBUG nova.virt.hardware [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.196 250273 DEBUG nova.virt.hardware [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.196 250273 DEBUG nova.virt.hardware [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.196 250273 DEBUG nova.virt.hardware [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.196 250273 DEBUG nova.virt.hardware [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.196 250273 DEBUG nova.virt.hardware [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.197 250273 DEBUG nova.virt.hardware [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.197 250273 DEBUG nova.virt.hardware [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.199 250273 DEBUG oslo_concurrency.processutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:52:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:52:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:15.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:52:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:15.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:52:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2380759015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.714 250273 DEBUG oslo_concurrency.processutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.744 250273 DEBUG nova.storage.rbd_utils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] rbd image a3c08e79-4f2b-42f2-bcac-21cbcfbc5247_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:52:15 np0005593232 nova_compute[250269]: 2026-01-23 09:52:15.748 250273 DEBUG oslo_concurrency.processutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:52:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:52:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/182689859' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.218 250273 DEBUG oslo_concurrency.processutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.220 250273 DEBUG nova.virt.libvirt.vif [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:52:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-231426966',display_name='tempest-ServerDiskConfigTestJSON-server-231426966',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-231426966',id=77,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='86d938c8e2bb41a79012befd500d1088',ramdisk_id='',reservation_id='r-w5bvxhsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-211417238',owner_user_name='tempest-ServerDiskConfigTestJSON-211417238-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:52:03Z,user_data=None,user_id='0cfac2191989448ead77e75ca3910ac4',uuid=a3c08e79-4f2b-42f2-bcac-21cbcfbc5247,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "address": "fa:16:3e:49:23:e7", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7d188d3-bf", "ovs_interfaceid": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.221 250273 DEBUG nova.network.os_vif_util [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Converting VIF {"id": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "address": "fa:16:3e:49:23:e7", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7d188d3-bf", "ovs_interfaceid": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.222 250273 DEBUG nova.network.os_vif_util [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:23:e7,bridge_name='br-int',has_traffic_filtering=True,id=d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728,network=Network(6d2cdc4c-47a0-475b-8e71-39465d365de3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7d188d3-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.223 250273 DEBUG nova.objects.instance [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lazy-loading 'pci_devices' on Instance uuid a3c08e79-4f2b-42f2-bcac-21cbcfbc5247 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.287 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.334 250273 DEBUG nova.virt.libvirt.driver [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:52:16 np0005593232 nova_compute[250269]:  <uuid>a3c08e79-4f2b-42f2-bcac-21cbcfbc5247</uuid>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:  <name>instance-0000004d</name>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-231426966</nova:name>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:52:15</nova:creationTime>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:52:16 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:        <nova:user uuid="0cfac2191989448ead77e75ca3910ac4">tempest-ServerDiskConfigTestJSON-211417238-project-member</nova:user>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:        <nova:project uuid="86d938c8e2bb41a79012befd500d1088">tempest-ServerDiskConfigTestJSON-211417238</nova:project>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:        <nova:port uuid="d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728">
Jan 23 04:52:16 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <entry name="serial">a3c08e79-4f2b-42f2-bcac-21cbcfbc5247</entry>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <entry name="uuid">a3c08e79-4f2b-42f2-bcac-21cbcfbc5247</entry>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/a3c08e79-4f2b-42f2-bcac-21cbcfbc5247_disk">
Jan 23 04:52:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:52:16 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/a3c08e79-4f2b-42f2-bcac-21cbcfbc5247_disk.config">
Jan 23 04:52:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:52:16 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:49:23:e7"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <target dev="tapd7d188d3-bf"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/a3c08e79-4f2b-42f2-bcac-21cbcfbc5247/console.log" append="off"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:52:16 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:52:16 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:52:16 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:52:16 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.335 250273 DEBUG nova.compute.manager [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Preparing to wait for external event network-vif-plugged-d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.335 250273 DEBUG oslo_concurrency.lockutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquiring lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.335 250273 DEBUG oslo_concurrency.lockutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.335 250273 DEBUG oslo_concurrency.lockutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.336 250273 DEBUG nova.virt.libvirt.vif [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:52:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-231426966',display_name='tempest-ServerDiskConfigTestJSON-server-231426966',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-231426966',id=77,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='86d938c8e2bb41a79012befd500d1088',ramdisk_id='',reservation_id='r-w5bvxhsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-211417238',owner_user_name='tempest-ServerDiskConfigTestJSON-211417238-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:52:03Z,user_data=None,user_id='0cfac2191989448ead77e75ca3910ac4',uuid=a3c08e79-4f2b-42f2-bcac-21cbcfbc5247,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "address": "fa:16:3e:49:23:e7", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7d188d3-bf", "ovs_interfaceid": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.336 250273 DEBUG nova.network.os_vif_util [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Converting VIF {"id": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "address": "fa:16:3e:49:23:e7", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7d188d3-bf", "ovs_interfaceid": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.337 250273 DEBUG nova.network.os_vif_util [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:23:e7,bridge_name='br-int',has_traffic_filtering=True,id=d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728,network=Network(6d2cdc4c-47a0-475b-8e71-39465d365de3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7d188d3-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.337 250273 DEBUG os_vif [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:23:e7,bridge_name='br-int',has_traffic_filtering=True,id=d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728,network=Network(6d2cdc4c-47a0-475b-8e71-39465d365de3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7d188d3-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.337 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.338 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.338 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.341 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.341 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7d188d3-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.342 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd7d188d3-bf, col_values=(('external_ids', {'iface-id': 'd7d188d3-bf5e-4df5-9ce1-6ba00d3f6728', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:23:e7', 'vm-uuid': 'a3c08e79-4f2b-42f2-bcac-21cbcfbc5247'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.343 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:52:16 np0005593232 NetworkManager[49057]: <info>  [1769161936.3441] manager: (tapd7d188d3-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.346 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.349 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.350 250273 INFO os_vif [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:23:e7,bridge_name='br-int',has_traffic_filtering=True,id=d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728,network=Network(6d2cdc4c-47a0-475b-8e71-39465d365de3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7d188d3-bf')
Jan 23 04:52:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1818: 321 pgs: 321 active+clean; 326 MiB data, 813 MiB used, 20 GiB / 21 GiB avail; 304 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.911 250273 DEBUG nova.virt.libvirt.driver [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.912 250273 DEBUG nova.virt.libvirt.driver [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.912 250273 DEBUG nova.virt.libvirt.driver [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] No VIF found with MAC fa:16:3e:49:23:e7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.913 250273 INFO nova.virt.libvirt.driver [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Using config drive
Jan 23 04:52:16 np0005593232 nova_compute[250269]: 2026-01-23 09:52:16.971 250273 DEBUG nova.storage.rbd_utils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] rbd image a3c08e79-4f2b-42f2-bcac-21cbcfbc5247_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 04:52:17 np0005593232 nova_compute[250269]: 2026-01-23 09:52:17.224 250273 DEBUG nova.objects.instance [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lazy-loading 'pci_requests' on Instance uuid bc80c9f1-76f1-4875-895d-9e80312eb293 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 04:52:17 np0005593232 nova_compute[250269]: 2026-01-23 09:52:17.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 04:52:17 np0005593232 nova_compute[250269]: 2026-01-23 09:52:17.303 250273 DEBUG nova.network.neutron [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 04:52:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:17.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:52:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:17.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:52:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1819: 321 pgs: 321 active+clean; 326 MiB data, 814 MiB used, 20 GiB / 21 GiB avail; 306 KiB/s rd, 3.1 MiB/s wr, 89 op/s
Jan 23 04:52:19 np0005593232 nova_compute[250269]: 2026-01-23 09:52:19.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 04:52:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:52:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:19.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:52:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:19.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:19 np0005593232 nova_compute[250269]: 2026-01-23 09:52:19.532 250273 DEBUG nova.policy [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '77cda1e9a0404425a06c34637e696603', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '390d19f683334995a5268cf9b4d5e464', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 23 04:52:19 np0005593232 nova_compute[250269]: 2026-01-23 09:52:19.889 250273 INFO nova.virt.libvirt.driver [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Creating config drive at /var/lib/nova/instances/a3c08e79-4f2b-42f2-bcac-21cbcfbc5247/disk.config
Jan 23 04:52:19 np0005593232 nova_compute[250269]: 2026-01-23 09:52:19.894 250273 DEBUG oslo_concurrency.processutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a3c08e79-4f2b-42f2-bcac-21cbcfbc5247/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0ifax6lh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:52:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Jan 23 04:52:20 np0005593232 nova_compute[250269]: 2026-01-23 09:52:20.002 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:20 np0005593232 nova_compute[250269]: 2026-01-23 09:52:20.029 250273 DEBUG oslo_concurrency.processutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a3c08e79-4f2b-42f2-bcac-21cbcfbc5247/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0ifax6lh" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:52:20 np0005593232 nova_compute[250269]: 2026-01-23 09:52:20.058 250273 DEBUG nova.storage.rbd_utils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] rbd image a3c08e79-4f2b-42f2-bcac-21cbcfbc5247_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:52:20 np0005593232 nova_compute[250269]: 2026-01-23 09:52:20.062 250273 DEBUG oslo_concurrency.processutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a3c08e79-4f2b-42f2-bcac-21cbcfbc5247/disk.config a3c08e79-4f2b-42f2-bcac-21cbcfbc5247_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:52:20 np0005593232 nova_compute[250269]: 2026-01-23 09:52:20.238 250273 DEBUG nova.network.neutron [req-ea4f40eb-0e4c-404e-9fa7-a3328e1edf88 req-25eb2ce3-cf5f-482f-b2a8-efd92a877637 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Updated VIF entry in instance network info cache for port d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:52:20 np0005593232 nova_compute[250269]: 2026-01-23 09:52:20.239 250273 DEBUG nova.network.neutron [req-ea4f40eb-0e4c-404e-9fa7-a3328e1edf88 req-25eb2ce3-cf5f-482f-b2a8-efd92a877637 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Updating instance_info_cache with network_info: [{"id": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "address": "fa:16:3e:49:23:e7", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7d188d3-bf", "ovs_interfaceid": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:52:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Jan 23 04:52:20 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Jan 23 04:52:20 np0005593232 nova_compute[250269]: 2026-01-23 09:52:20.282 250273 DEBUG oslo_concurrency.lockutils [req-ea4f40eb-0e4c-404e-9fa7-a3328e1edf88 req-25eb2ce3-cf5f-482f-b2a8-efd92a877637 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-a3c08e79-4f2b-42f2-bcac-21cbcfbc5247" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:52:20 np0005593232 nova_compute[250269]: 2026-01-23 09:52:20.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:52:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1821: 321 pgs: 321 active+clean; 326 MiB data, 814 MiB used, 20 GiB / 21 GiB avail; 347 KiB/s rd, 2.6 MiB/s wr, 76 op/s
Jan 23 04:52:20 np0005593232 nova_compute[250269]: 2026-01-23 09:52:20.571 250273 DEBUG oslo_concurrency.processutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a3c08e79-4f2b-42f2-bcac-21cbcfbc5247/disk.config a3c08e79-4f2b-42f2-bcac-21cbcfbc5247_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:52:20 np0005593232 nova_compute[250269]: 2026-01-23 09:52:20.572 250273 INFO nova.virt.libvirt.driver [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Deleting local config drive /var/lib/nova/instances/a3c08e79-4f2b-42f2-bcac-21cbcfbc5247/disk.config because it was imported into RBD.#033[00m
Jan 23 04:52:20 np0005593232 kernel: tapd7d188d3-bf: entered promiscuous mode
Jan 23 04:52:20 np0005593232 NetworkManager[49057]: <info>  [1769161940.6254] manager: (tapd7d188d3-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/115)
Jan 23 04:52:20 np0005593232 ovn_controller[151001]: 2026-01-23T09:52:20Z|00227|binding|INFO|Claiming lport d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 for this chassis.
Jan 23 04:52:20 np0005593232 ovn_controller[151001]: 2026-01-23T09:52:20Z|00228|binding|INFO|d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728: Claiming fa:16:3e:49:23:e7 10.100.0.11
Jan 23 04:52:20 np0005593232 nova_compute[250269]: 2026-01-23 09:52:20.627 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.644 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:23:e7 10.100.0.11'], port_security=['fa:16:3e:49:23:e7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a3c08e79-4f2b-42f2-bcac-21cbcfbc5247', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d2cdc4c-47a0-475b-8e71-39465d365de3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '86d938c8e2bb41a79012befd500d1088', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7a7b70d2-dc13-4ace-b4e0-b2bcfa748347', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99c61616-3f86-4228-bb78-0dc84e2b2157, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.645 161902 INFO neutron.agent.ovn.metadata.agent [-] Port d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 in datapath 6d2cdc4c-47a0-475b-8e71-39465d365de3 bound to our chassis#033[00m
Jan 23 04:52:20 np0005593232 ovn_controller[151001]: 2026-01-23T09:52:20Z|00229|binding|INFO|Setting lport d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 ovn-installed in OVS
Jan 23 04:52:20 np0005593232 ovn_controller[151001]: 2026-01-23T09:52:20Z|00230|binding|INFO|Setting lport d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 up in Southbound
Jan 23 04:52:20 np0005593232 nova_compute[250269]: 2026-01-23 09:52:20.647 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.647 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d2cdc4c-47a0-475b-8e71-39465d365de3#033[00m
Jan 23 04:52:20 np0005593232 nova_compute[250269]: 2026-01-23 09:52:20.652 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.657 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[514a4be3-dc0c-4fee-bf29-7555fffb2947]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.658 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6d2cdc4c-41 in ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.661 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6d2cdc4c-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.661 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7643ad54-e1f7-40d8-b7e4-4d5b736dfdcd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.662 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3945a051-db90-4998-aa91-d9b3db611d80]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:20 np0005593232 systemd-machined[215836]: New machine qemu-29-instance-0000004d.
Jan 23 04:52:20 np0005593232 systemd-udevd[298815]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:52:20 np0005593232 systemd[1]: Started Virtual Machine qemu-29-instance-0000004d.
Jan 23 04:52:20 np0005593232 NetworkManager[49057]: <info>  [1769161940.6768] device (tapd7d188d3-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:52:20 np0005593232 NetworkManager[49057]: <info>  [1769161940.6773] device (tapd7d188d3-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.676 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[44fc2e88-7ded-4c01-84f9-92c6bcbb57f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.699 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b4904f96-bdf3-41f0-a453-0efee5ce97c7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.735 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[21a7dab4-5551-4b46-95ac-7f0d674b50bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.741 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1139396c-680d-46a3-a651-ad4e51298fbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:20 np0005593232 NetworkManager[49057]: <info>  [1769161940.7422] manager: (tap6d2cdc4c-40): new Veth device (/org/freedesktop/NetworkManager/Devices/116)
Jan 23 04:52:20 np0005593232 systemd-udevd[298818]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.780 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[d721b613-e177-40ba-8d21-e65f77c4e177]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.784 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[295e9d4d-88ea-4a55-b05a-4037104b9750]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:20 np0005593232 NetworkManager[49057]: <info>  [1769161940.8081] device (tap6d2cdc4c-40): carrier: link connected
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.814 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[9a9ec3af-e608-4f1e-942a-869f7f071596]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.831 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[337ef6a6-e9a6-4e30-b62f-e13a1f424a4a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d2cdc4c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:5a:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585395, 'reachable_time': 36256, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298847, 'error': None, 'target': 'ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.848 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[69105cc2-7159-4fd8-874f-19cfa422057a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:5a26'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 585395, 'tstamp': 585395}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298848, 'error': None, 'target': 'ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.865 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7dfbe75a-2696-4360-9325-8b0c7ff3d30d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d2cdc4c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:5a:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585395, 'reachable_time': 36256, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298849, 'error': None, 'target': 'ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.894 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e1e9f6-817e-48d4-9672-d58dd68070a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.951 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab29732-6f2b-415f-830b-3fec7d21edf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.953 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d2cdc4c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.953 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.953 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d2cdc4c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:52:20 np0005593232 nova_compute[250269]: 2026-01-23 09:52:20.955 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:20 np0005593232 NetworkManager[49057]: <info>  [1769161940.9558] manager: (tap6d2cdc4c-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/117)
Jan 23 04:52:20 np0005593232 kernel: tap6d2cdc4c-40: entered promiscuous mode
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.958 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d2cdc4c-40, col_values=(('external_ids', {'iface-id': '04f6c0b6-99ee-4958-bc01-68fa310042f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:52:20 np0005593232 ovn_controller[151001]: 2026-01-23T09:52:20Z|00231|binding|INFO|Releasing lport 04f6c0b6-99ee-4958-bc01-68fa310042f0 from this chassis (sb_readonly=0)
Jan 23 04:52:20 np0005593232 nova_compute[250269]: 2026-01-23 09:52:20.959 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:20 np0005593232 nova_compute[250269]: 2026-01-23 09:52:20.960 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.960 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6d2cdc4c-47a0-475b-8e71-39465d365de3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6d2cdc4c-47a0-475b-8e71-39465d365de3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.961 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3db07dca-f775-416a-a08b-be726b775699]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.963 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-6d2cdc4c-47a0-475b-8e71-39465d365de3
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/6d2cdc4c-47a0-475b-8e71-39465d365de3.pid.haproxy
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 6d2cdc4c-47a0-475b-8e71-39465d365de3
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:52:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:20.963 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3', 'env', 'PROCESS_TAG=haproxy-6d2cdc4c-47a0-475b-8e71-39465d365de3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6d2cdc4c-47a0-475b-8e71-39465d365de3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 04:52:20 np0005593232 nova_compute[250269]: 2026-01-23 09:52:20.977 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:21 np0005593232 nova_compute[250269]: 2026-01-23 09:52:21.103 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161941.1027622, a3c08e79-4f2b-42f2-bcac-21cbcfbc5247 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:52:21 np0005593232 nova_compute[250269]: 2026-01-23 09:52:21.103 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] VM Started (Lifecycle Event)#033[00m
Jan 23 04:52:21 np0005593232 nova_compute[250269]: 2026-01-23 09:52:21.135 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:52:21 np0005593232 nova_compute[250269]: 2026-01-23 09:52:21.138 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161941.1029358, a3c08e79-4f2b-42f2-bcac-21cbcfbc5247 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:52:21 np0005593232 nova_compute[250269]: 2026-01-23 09:52:21.138 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:52:21 np0005593232 nova_compute[250269]: 2026-01-23 09:52:21.176 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:52:21 np0005593232 nova_compute[250269]: 2026-01-23 09:52:21.179 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:52:21 np0005593232 nova_compute[250269]: 2026-01-23 09:52:21.217 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:52:21 np0005593232 nova_compute[250269]: 2026-01-23 09:52:21.344 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:21.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:21 np0005593232 podman[298923]: 2026-01-23 09:52:21.341436501 +0000 UTC m=+0.024728958 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:52:21 np0005593232 podman[298923]: 2026-01-23 09:52:21.482574477 +0000 UTC m=+0.165866914 container create 388a2214c63efc4faef2e4e242c62a53b00e932d5b1157f6338c7bea379b2cfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:52:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:21.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:21 np0005593232 systemd[1]: Started libpod-conmon-388a2214c63efc4faef2e4e242c62a53b00e932d5b1157f6338c7bea379b2cfd.scope.
Jan 23 04:52:21 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:52:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2d944c520d4e3d5e47dc5994f3238ec70834c547033f5ba9fdba26448e3ca6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:52:21 np0005593232 podman[298923]: 2026-01-23 09:52:21.584155562 +0000 UTC m=+0.267448029 container init 388a2214c63efc4faef2e4e242c62a53b00e932d5b1157f6338c7bea379b2cfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 23 04:52:21 np0005593232 podman[298923]: 2026-01-23 09:52:21.590355249 +0000 UTC m=+0.273647686 container start 388a2214c63efc4faef2e4e242c62a53b00e932d5b1157f6338c7bea379b2cfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 04:52:21 np0005593232 neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3[298938]: [NOTICE]   (298942) : New worker (298944) forked
Jan 23 04:52:21 np0005593232 neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3[298938]: [NOTICE]   (298942) : Loading success.
Jan 23 04:52:22 np0005593232 nova_compute[250269]: 2026-01-23 09:52:22.153 250273 DEBUG nova.network.neutron [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Successfully updated port: af80eab2-c3b9-439d-baae-ee0d90b6cdda _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 04:52:22 np0005593232 nova_compute[250269]: 2026-01-23 09:52:22.197 250273 DEBUG oslo_concurrency.lockutils [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Acquiring lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:52:22 np0005593232 nova_compute[250269]: 2026-01-23 09:52:22.197 250273 DEBUG oslo_concurrency.lockutils [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Acquired lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:52:22 np0005593232 nova_compute[250269]: 2026-01-23 09:52:22.197 250273 DEBUG nova.network.neutron [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:52:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1822: 321 pgs: 321 active+clean; 326 MiB data, 814 MiB used, 20 GiB / 21 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 77 op/s
Jan 23 04:52:22 np0005593232 nova_compute[250269]: 2026-01-23 09:52:22.633 250273 DEBUG nova.compute.manager [req-33e6e347-80b5-4352-a4ad-a12034fd41b6 req-9265324c-4074-48bb-9de0-063cc7da5f66 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Received event network-changed-af80eab2-c3b9-439d-baae-ee0d90b6cdda external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:52:22 np0005593232 nova_compute[250269]: 2026-01-23 09:52:22.634 250273 DEBUG nova.compute.manager [req-33e6e347-80b5-4352-a4ad-a12034fd41b6 req-9265324c-4074-48bb-9de0-063cc7da5f66 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Refreshing instance network info cache due to event network-changed-af80eab2-c3b9-439d-baae-ee0d90b6cdda. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:52:22 np0005593232 nova_compute[250269]: 2026-01-23 09:52:22.634 250273 DEBUG oslo_concurrency.lockutils [req-33e6e347-80b5-4352-a4ad-a12034fd41b6 req-9265324c-4074-48bb-9de0-063cc7da5f66 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:52:22 np0005593232 nova_compute[250269]: 2026-01-23 09:52:22.713 250273 WARNING nova.network.neutron [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] 7808328e-22f9-46df-ac06-f8c3d6ad10c4 already exists in list: networks containing: ['7808328e-22f9-46df-ac06-f8c3d6ad10c4']. ignoring it#033[00m
Jan 23 04:52:23 np0005593232 nova_compute[250269]: 2026-01-23 09:52:23.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:52:23 np0005593232 nova_compute[250269]: 2026-01-23 09:52:23.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:52:23 np0005593232 nova_compute[250269]: 2026-01-23 09:52:23.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:52:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:23.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:23 np0005593232 podman[298954]: 2026-01-23 09:52:23.42075096 +0000 UTC m=+0.080679998 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 23 04:52:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:23.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:23 np0005593232 nova_compute[250269]: 2026-01-23 09:52:23.687 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 23 04:52:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Jan 23 04:52:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Jan 23 04:52:24 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.156 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.157 250273 DEBUG nova.compute.manager [req-b878993e-7ea1-4460-97d4-ae4c679a1207 req-74cd2e6d-ffff-46fb-9759-0c305a32fc85 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Received event network-vif-plugged-d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.158 250273 DEBUG oslo_concurrency.lockutils [req-b878993e-7ea1-4460-97d4-ae4c679a1207 req-74cd2e6d-ffff-46fb-9759-0c305a32fc85 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.158 250273 DEBUG oslo_concurrency.lockutils [req-b878993e-7ea1-4460-97d4-ae4c679a1207 req-74cd2e6d-ffff-46fb-9759-0c305a32fc85 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.158 250273 DEBUG oslo_concurrency.lockutils [req-b878993e-7ea1-4460-97d4-ae4c679a1207 req-74cd2e6d-ffff-46fb-9759-0c305a32fc85 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.159 250273 DEBUG nova.compute.manager [req-b878993e-7ea1-4460-97d4-ae4c679a1207 req-74cd2e6d-ffff-46fb-9759-0c305a32fc85 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Processing event network-vif-plugged-d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.159 250273 DEBUG nova.compute.manager [req-b878993e-7ea1-4460-97d4-ae4c679a1207 req-74cd2e6d-ffff-46fb-9759-0c305a32fc85 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Received event network-vif-plugged-d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.159 250273 DEBUG oslo_concurrency.lockutils [req-b878993e-7ea1-4460-97d4-ae4c679a1207 req-74cd2e6d-ffff-46fb-9759-0c305a32fc85 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.159 250273 DEBUG oslo_concurrency.lockutils [req-b878993e-7ea1-4460-97d4-ae4c679a1207 req-74cd2e6d-ffff-46fb-9759-0c305a32fc85 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.159 250273 DEBUG oslo_concurrency.lockutils [req-b878993e-7ea1-4460-97d4-ae4c679a1207 req-74cd2e6d-ffff-46fb-9759-0c305a32fc85 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.160 250273 DEBUG nova.compute.manager [req-b878993e-7ea1-4460-97d4-ae4c679a1207 req-74cd2e6d-ffff-46fb-9759-0c305a32fc85 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] No waiting events found dispatching network-vif-plugged-d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.160 250273 WARNING nova.compute.manager [req-b878993e-7ea1-4460-97d4-ae4c679a1207 req-74cd2e6d-ffff-46fb-9759-0c305a32fc85 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Received unexpected event network-vif-plugged-d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 for instance with vm_state building and task_state spawning.#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.160 250273 DEBUG nova.compute.manager [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.164 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769161944.1642442, a3c08e79-4f2b-42f2-bcac-21cbcfbc5247 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.164 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.166 250273 DEBUG nova.virt.libvirt.driver [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.170 250273 INFO nova.virt.libvirt.driver [-] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Instance spawned successfully.#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.171 250273 DEBUG nova.virt.libvirt.driver [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.198 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.201 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.258 250273 DEBUG nova.virt.libvirt.driver [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.258 250273 DEBUG nova.virt.libvirt.driver [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.258 250273 DEBUG nova.virt.libvirt.driver [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.259 250273 DEBUG nova.virt.libvirt.driver [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.259 250273 DEBUG nova.virt.libvirt.driver [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.259 250273 DEBUG nova.virt.libvirt.driver [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.276 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.474 250273 INFO nova.compute.manager [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Took 20.71 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.474 250273 DEBUG nova.compute.manager [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:52:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1824: 321 pgs: 321 active+clean; 366 MiB data, 840 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.3 MiB/s wr, 80 op/s
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.572 250273 INFO nova.compute.manager [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Took 21.92 seconds to build instance.#033[00m
Jan 23 04:52:24 np0005593232 nova_compute[250269]: 2026-01-23 09:52:24.609 250273 DEBUG oslo_concurrency.lockutils [None req-ea33f00d-d376-4c88-9b01-8687b0d5750c 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:52:25 np0005593232 nova_compute[250269]: 2026-01-23 09:52:25.004 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:52:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Jan 23 04:52:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:25.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Jan 23 04:52:25 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Jan 23 04:52:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:52:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:25.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:52:26 np0005593232 nova_compute[250269]: 2026-01-23 09:52:26.346 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1826: 321 pgs: 321 active+clean; 405 MiB data, 860 MiB used, 20 GiB / 21 GiB avail; 9.0 MiB/s rd, 7.4 MiB/s wr, 201 op/s
Jan 23 04:52:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:27.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:27.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1827: 321 pgs: 321 active+clean; 405 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 8.7 MiB/s rd, 5.8 MiB/s wr, 225 op/s
Jan 23 04:52:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:29.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:29 np0005593232 podman[298983]: 2026-01-23 09:52:29.467885232 +0000 UTC m=+0.082581171 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:52:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:29.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:30 np0005593232 nova_compute[250269]: 2026-01-23 09:52:30.006 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:52:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Jan 23 04:52:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Jan 23 04:52:30 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Jan 23 04:52:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1829: 321 pgs: 321 active+clean; 405 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 3.1 MiB/s wr, 185 op/s
Jan 23 04:52:31 np0005593232 nova_compute[250269]: 2026-01-23 09:52:31.348 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:31.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:31.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.066 250273 DEBUG nova.network.neutron [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Updating instance_info_cache with network_info: [{"id": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "address": "fa:16:3e:d5:db:7e", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30248fe9-6d", "ovs_interfaceid": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "address": "fa:16:3e:7d:38:92", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf80eab2-c3", "ovs_interfaceid": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.322 250273 DEBUG oslo_concurrency.lockutils [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Releasing lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.323 250273 DEBUG oslo_concurrency.lockutils [req-33e6e347-80b5-4352-a4ad-a12034fd41b6 req-9265324c-4074-48bb-9de0-063cc7da5f66 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.324 250273 DEBUG nova.network.neutron [req-33e6e347-80b5-4352-a4ad-a12034fd41b6 req-9265324c-4074-48bb-9de0-063cc7da5f66 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Refreshing network info cache for port af80eab2-c3b9-439d-baae-ee0d90b6cdda _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.327 250273 DEBUG nova.virt.libvirt.vif [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:51:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-511471311',display_name='tempest-tempest.common.compute-instance-511471311',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-511471311',id=73,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISF6L8g87ZfxLrm8Wwm+gzemsck5aetIhd8gCsjpNrTc2Fv/no3h23xzReyi9tgvOePkWLat/BN4ukRmY5i9SKOoCvqi25H2ncCjSqcqS+cT6X1PkedlTAGxBrEwc2adg==',key_name='tempest-keypair-1775870371',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:51:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='390d19f683334995a5268cf9b4d5e464',ramdisk_id='',reservation_id='r-h371e2sw',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-746967993',owner_user_name='tempest-AttachInterfacesTestJSON-746967993-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:51:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='77cda1e9a0404425a06c34637e696603',uuid=bc80c9f1-76f1-4875-895d-9e80312eb293,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "address": "fa:16:3e:7d:38:92", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf80eab2-c3", "ovs_interfaceid": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.327 250273 DEBUG nova.network.os_vif_util [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Converting VIF {"id": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "address": "fa:16:3e:7d:38:92", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf80eab2-c3", "ovs_interfaceid": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.328 250273 DEBUG nova.network.os_vif_util [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:38:92,bridge_name='br-int',has_traffic_filtering=True,id=af80eab2-c3b9-439d-baae-ee0d90b6cdda,network=Network(7808328e-22f9-46df-ac06-f8c3d6ad10c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapaf80eab2-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.329 250273 DEBUG os_vif [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:38:92,bridge_name='br-int',has_traffic_filtering=True,id=af80eab2-c3b9-439d-baae-ee0d90b6cdda,network=Network(7808328e-22f9-46df-ac06-f8c3d6ad10c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapaf80eab2-c3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.330 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.330 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.331 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.334 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.334 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaf80eab2-c3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.335 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaf80eab2-c3, col_values=(('external_ids', {'iface-id': 'af80eab2-c3b9-439d-baae-ee0d90b6cdda', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:38:92', 'vm-uuid': 'bc80c9f1-76f1-4875-895d-9e80312eb293'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.384 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:32 np0005593232 NetworkManager[49057]: <info>  [1769161952.3850] manager: (tapaf80eab2-c3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/118)
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.387 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.391 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.392 250273 INFO os_vif [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:38:92,bridge_name='br-int',has_traffic_filtering=True,id=af80eab2-c3b9-439d-baae-ee0d90b6cdda,network=Network(7808328e-22f9-46df-ac06-f8c3d6ad10c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapaf80eab2-c3')#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.393 250273 DEBUG nova.virt.libvirt.vif [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:51:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-511471311',display_name='tempest-tempest.common.compute-instance-511471311',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-511471311',id=73,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISF6L8g87ZfxLrm8Wwm+gzemsck5aetIhd8gCsjpNrTc2Fv/no3h23xzReyi9tgvOePkWLat/BN4ukRmY5i9SKOoCvqi25H2ncCjSqcqS+cT6X1PkedlTAGxBrEwc2adg==',key_name='tempest-keypair-1775870371',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:51:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='390d19f683334995a5268cf9b4d5e464',ramdisk_id='',reservation_id='r-h371e2sw',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-746967993',owner_user_name='tempest-AttachInterfacesTestJSON-746967993-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:51:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='77cda1e9a0404425a06c34637e696603',uuid=bc80c9f1-76f1-4875-895d-9e80312eb293,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "address": "fa:16:3e:7d:38:92", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf80eab2-c3", "ovs_interfaceid": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.393 250273 DEBUG nova.network.os_vif_util [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Converting VIF {"id": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "address": "fa:16:3e:7d:38:92", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf80eab2-c3", "ovs_interfaceid": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.394 250273 DEBUG nova.network.os_vif_util [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:38:92,bridge_name='br-int',has_traffic_filtering=True,id=af80eab2-c3b9-439d-baae-ee0d90b6cdda,network=Network(7808328e-22f9-46df-ac06-f8c3d6ad10c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapaf80eab2-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.397 250273 DEBUG nova.virt.libvirt.guest [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] attach device xml: <interface type="ethernet">
Jan 23 04:52:32 np0005593232 nova_compute[250269]:  <mac address="fa:16:3e:7d:38:92"/>
Jan 23 04:52:32 np0005593232 nova_compute[250269]:  <model type="virtio"/>
Jan 23 04:52:32 np0005593232 nova_compute[250269]:  <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:52:32 np0005593232 nova_compute[250269]:  <mtu size="1442"/>
Jan 23 04:52:32 np0005593232 nova_compute[250269]:  <target dev="tapaf80eab2-c3"/>
Jan 23 04:52:32 np0005593232 nova_compute[250269]: </interface>
Jan 23 04:52:32 np0005593232 nova_compute[250269]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 23 04:52:32 np0005593232 kernel: tapaf80eab2-c3: entered promiscuous mode
Jan 23 04:52:32 np0005593232 NetworkManager[49057]: <info>  [1769161952.4129] manager: (tapaf80eab2-c3): new Tun device (/org/freedesktop/NetworkManager/Devices/119)
Jan 23 04:52:32 np0005593232 ovn_controller[151001]: 2026-01-23T09:52:32Z|00232|binding|INFO|Claiming lport af80eab2-c3b9-439d-baae-ee0d90b6cdda for this chassis.
Jan 23 04:52:32 np0005593232 ovn_controller[151001]: 2026-01-23T09:52:32Z|00233|binding|INFO|af80eab2-c3b9-439d-baae-ee0d90b6cdda: Claiming fa:16:3e:7d:38:92 10.100.0.12
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.421 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:32.436 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:38:92 10.100.0.12'], port_security=['fa:16:3e:7d:38:92 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-2098344084', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'bc80c9f1-76f1-4875-895d-9e80312eb293', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7808328e-22f9-46df-ac06-f8c3d6ad10c4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-2098344084', 'neutron:project_id': '390d19f683334995a5268cf9b4d5e464', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'e14d2748-8402-4583-8740-ef7703629f43', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=396f5815-d5dc-4484-bb15-e71911e6f8a2, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=af80eab2-c3b9-439d-baae-ee0d90b6cdda) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:52:32 np0005593232 ovn_controller[151001]: 2026-01-23T09:52:32Z|00234|binding|INFO|Setting lport af80eab2-c3b9-439d-baae-ee0d90b6cdda ovn-installed in OVS
Jan 23 04:52:32 np0005593232 ovn_controller[151001]: 2026-01-23T09:52:32Z|00235|binding|INFO|Setting lport af80eab2-c3b9-439d-baae-ee0d90b6cdda up in Southbound
Jan 23 04:52:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:32.437 161902 INFO neutron.agent.ovn.metadata.agent [-] Port af80eab2-c3b9-439d-baae-ee0d90b6cdda in datapath 7808328e-22f9-46df-ac06-f8c3d6ad10c4 bound to our chassis#033[00m
Jan 23 04:52:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:32.439 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7808328e-22f9-46df-ac06-f8c3d6ad10c4#033[00m
Jan 23 04:52:32 np0005593232 systemd-udevd[299011]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.440 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:32.455 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4ab551-14f6-430f-939e-5577b12e6aa7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:32 np0005593232 NetworkManager[49057]: <info>  [1769161952.4583] device (tapaf80eab2-c3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:52:32 np0005593232 NetworkManager[49057]: <info>  [1769161952.4599] device (tapaf80eab2-c3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:52:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:32.493 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e786d5-63d2-4991-8410-86b24099e297]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:32.498 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[0a16b99b-c7d8-47bc-a568-d9d917af72f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:32.541 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[15b4795d-343a-4281-85a2-c82a936dcceb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1830: 321 pgs: 321 active+clean; 405 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 2.5 MiB/s wr, 151 op/s
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.556 250273 DEBUG nova.virt.libvirt.driver [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.557 250273 DEBUG nova.virt.libvirt.driver [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.557 250273 DEBUG nova.virt.libvirt.driver [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] No VIF found with MAC fa:16:3e:d5:db:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.557 250273 DEBUG nova.virt.libvirt.driver [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] No VIF found with MAC fa:16:3e:7d:38:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:52:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:32.561 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[56d1bf15-c460-4d20-9acc-e0cb534f4398]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7808328e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:22:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579546, 'reachable_time': 39774, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299019, 'error': None, 'target': 'ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:32.580 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[22808a82-ff9e-4077-9347-05a43d6247bb]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7808328e-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 579557, 'tstamp': 579557}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299020, 'error': None, 'target': 'ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7808328e-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 579559, 'tstamp': 579559}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299020, 'error': None, 'target': 'ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:32.583 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7808328e-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.585 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.586 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:32.587 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7808328e-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:52:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:32.587 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:52:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:32.588 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7808328e-20, col_values=(('external_ids', {'iface-id': 'db11772c-e758-43ff-997c-e8c835433e90'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:52:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:32.588 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.592 250273 DEBUG nova.virt.libvirt.guest [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:52:32 np0005593232 nova_compute[250269]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:52:32 np0005593232 nova_compute[250269]:  <nova:name>tempest-tempest.common.compute-instance-511471311</nova:name>
Jan 23 04:52:32 np0005593232 nova_compute[250269]:  <nova:creationTime>2026-01-23 09:52:32</nova:creationTime>
Jan 23 04:52:32 np0005593232 nova_compute[250269]:  <nova:flavor name="m1.nano">
Jan 23 04:52:32 np0005593232 nova_compute[250269]:    <nova:memory>128</nova:memory>
Jan 23 04:52:32 np0005593232 nova_compute[250269]:    <nova:disk>1</nova:disk>
Jan 23 04:52:32 np0005593232 nova_compute[250269]:    <nova:swap>0</nova:swap>
Jan 23 04:52:32 np0005593232 nova_compute[250269]:    <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:52:32 np0005593232 nova_compute[250269]:    <nova:vcpus>1</nova:vcpus>
Jan 23 04:52:32 np0005593232 nova_compute[250269]:  </nova:flavor>
Jan 23 04:52:32 np0005593232 nova_compute[250269]:  <nova:owner>
Jan 23 04:52:32 np0005593232 nova_compute[250269]:    <nova:user uuid="77cda1e9a0404425a06c34637e696603">tempest-AttachInterfacesTestJSON-746967993-project-member</nova:user>
Jan 23 04:52:32 np0005593232 nova_compute[250269]:    <nova:project uuid="390d19f683334995a5268cf9b4d5e464">tempest-AttachInterfacesTestJSON-746967993</nova:project>
Jan 23 04:52:32 np0005593232 nova_compute[250269]:  </nova:owner>
Jan 23 04:52:32 np0005593232 nova_compute[250269]:  <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:52:32 np0005593232 nova_compute[250269]:  <nova:ports>
Jan 23 04:52:32 np0005593232 nova_compute[250269]:    <nova:port uuid="30248fe9-6da5-4acd-b60f-7c588745f8f3">
Jan 23 04:52:32 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 23 04:52:32 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 04:52:32 np0005593232 nova_compute[250269]:    <nova:port uuid="af80eab2-c3b9-439d-baae-ee0d90b6cdda">
Jan 23 04:52:32 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 23 04:52:32 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 04:52:32 np0005593232 nova_compute[250269]:  </nova:ports>
Jan 23 04:52:32 np0005593232 nova_compute[250269]: </nova:instance>
Jan 23 04:52:32 np0005593232 nova_compute[250269]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.648 250273 DEBUG oslo_concurrency.lockutils [None req-1060f59d-92cb-491c-a950-8106a9ed0160 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lock "interface-bc80c9f1-76f1-4875-895d-9e80312eb293-af80eab2-c3b9-439d-baae-ee0d90b6cdda" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 20.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.921 250273 DEBUG nova.compute.manager [req-abe40e82-2d88-4d24-bb22-5cae8a842686 req-f55ccfd7-9879-4476-9ee5-781dcdb302d2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Received event network-vif-plugged-af80eab2-c3b9-439d-baae-ee0d90b6cdda external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.922 250273 DEBUG oslo_concurrency.lockutils [req-abe40e82-2d88-4d24-bb22-5cae8a842686 req-f55ccfd7-9879-4476-9ee5-781dcdb302d2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.922 250273 DEBUG oslo_concurrency.lockutils [req-abe40e82-2d88-4d24-bb22-5cae8a842686 req-f55ccfd7-9879-4476-9ee5-781dcdb302d2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.922 250273 DEBUG oslo_concurrency.lockutils [req-abe40e82-2d88-4d24-bb22-5cae8a842686 req-f55ccfd7-9879-4476-9ee5-781dcdb302d2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.923 250273 DEBUG nova.compute.manager [req-abe40e82-2d88-4d24-bb22-5cae8a842686 req-f55ccfd7-9879-4476-9ee5-781dcdb302d2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] No waiting events found dispatching network-vif-plugged-af80eab2-c3b9-439d-baae-ee0d90b6cdda pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:52:32 np0005593232 nova_compute[250269]: 2026-01-23 09:52:32.923 250273 WARNING nova.compute.manager [req-abe40e82-2d88-4d24-bb22-5cae8a842686 req-f55ccfd7-9879-4476-9ee5-781dcdb302d2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Received unexpected event network-vif-plugged-af80eab2-c3b9-439d-baae-ee0d90b6cdda for instance with vm_state active and task_state None.#033[00m
Jan 23 04:52:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:52:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:33.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:52:33 np0005593232 nova_compute[250269]: 2026-01-23 09:52:33.505 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:33.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:34 np0005593232 ovn_controller[151001]: 2026-01-23T09:52:34Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7d:38:92 10.100.0.12
Jan 23 04:52:34 np0005593232 ovn_controller[151001]: 2026-01-23T09:52:34Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7d:38:92 10.100.0.12
Jan 23 04:52:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1831: 321 pgs: 321 active+clean; 405 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.2 MiB/s wr, 144 op/s
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.010 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.110 250273 DEBUG nova.compute.manager [req-751865e7-0257-4a10-8fd2-2f9e464b9611 req-7c42cbbf-6058-40b8-8a08-64a6599605a1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Received event network-vif-plugged-af80eab2-c3b9-439d-baae-ee0d90b6cdda external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.110 250273 DEBUG oslo_concurrency.lockutils [req-751865e7-0257-4a10-8fd2-2f9e464b9611 req-7c42cbbf-6058-40b8-8a08-64a6599605a1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.111 250273 DEBUG oslo_concurrency.lockutils [req-751865e7-0257-4a10-8fd2-2f9e464b9611 req-7c42cbbf-6058-40b8-8a08-64a6599605a1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.111 250273 DEBUG oslo_concurrency.lockutils [req-751865e7-0257-4a10-8fd2-2f9e464b9611 req-7c42cbbf-6058-40b8-8a08-64a6599605a1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.112 250273 DEBUG nova.compute.manager [req-751865e7-0257-4a10-8fd2-2f9e464b9611 req-7c42cbbf-6058-40b8-8a08-64a6599605a1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] No waiting events found dispatching network-vif-plugged-af80eab2-c3b9-439d-baae-ee0d90b6cdda pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.112 250273 WARNING nova.compute.manager [req-751865e7-0257-4a10-8fd2-2f9e464b9611 req-7c42cbbf-6058-40b8-8a08-64a6599605a1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Received unexpected event network-vif-plugged-af80eab2-c3b9-439d-baae-ee0d90b6cdda for instance with vm_state active and task_state None.#033[00m
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:52:35.220405) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161955220635, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1379, "num_deletes": 256, "total_data_size": 2216763, "memory_usage": 2263200, "flush_reason": "Manual Compaction"}
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Jan 23 04:52:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:35.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161955426754, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 2171668, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39364, "largest_seqno": 40742, "table_properties": {"data_size": 2165256, "index_size": 3547, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14311, "raw_average_key_size": 20, "raw_value_size": 2151977, "raw_average_value_size": 3022, "num_data_blocks": 156, "num_entries": 712, "num_filter_entries": 712, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161840, "oldest_key_time": 1769161840, "file_creation_time": 1769161955, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 206735 microseconds, and 9322 cpu microseconds.
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:52:35.427140) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 2171668 bytes OK
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:52:35.427469) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:52:35.451719) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:52:35.451811) EVENT_LOG_v1 {"time_micros": 1769161955451791, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:52:35.451854) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2210574, prev total WAL file size 2210574, number of live WAL files 2.
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:52:35.453713) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323534' seq:72057594037927935, type:22 .. '6C6F676D0031353035' seq:0, type:0; will stop at (end)
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2120KB)], [86(8809KB)]
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161955453797, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11192817, "oldest_snapshot_seqno": -1}
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.475 250273 DEBUG oslo_concurrency.lockutils [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Acquiring lock "interface-bc80c9f1-76f1-4875-895d-9e80312eb293-af80eab2-c3b9-439d-baae-ee0d90b6cdda" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.476 250273 DEBUG oslo_concurrency.lockutils [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lock "interface-bc80c9f1-76f1-4875-895d-9e80312eb293-af80eab2-c3b9-439d-baae-ee0d90b6cdda" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:52:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:52:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:35.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.540 250273 DEBUG nova.objects.instance [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lazy-loading 'flavor' on Instance uuid bc80c9f1-76f1-4875-895d-9e80312eb293 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.585 250273 DEBUG nova.virt.libvirt.vif [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:51:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-511471311',display_name='tempest-tempest.common.compute-instance-511471311',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-511471311',id=73,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISF6L8g87ZfxLrm8Wwm+gzemsck5aetIhd8gCsjpNrTc2Fv/no3h23xzReyi9tgvOePkWLat/BN4ukRmY5i9SKOoCvqi25H2ncCjSqcqS+cT6X1PkedlTAGxBrEwc2adg==',key_name='tempest-keypair-1775870371',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:51:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='390d19f683334995a5268cf9b4d5e464',ramdisk_id='',reservation_id='r-h371e2sw',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-746967993',owner_user_name='tempest-AttachInterfacesTestJSON-746967993-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:51:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='77cda1e9a0404425a06c34637e696603',uuid=bc80c9f1-76f1-4875-895d-9e80312eb293,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "address": "fa:16:3e:7d:38:92", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf80eab2-c3", "ovs_interfaceid": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.586 250273 DEBUG nova.network.os_vif_util [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Converting VIF {"id": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "address": "fa:16:3e:7d:38:92", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf80eab2-c3", "ovs_interfaceid": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.586 250273 DEBUG nova.network.os_vif_util [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:38:92,bridge_name='br-int',has_traffic_filtering=True,id=af80eab2-c3b9-439d-baae-ee0d90b6cdda,network=Network(7808328e-22f9-46df-ac06-f8c3d6ad10c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapaf80eab2-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.589 250273 DEBUG nova.virt.libvirt.guest [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7d:38:92"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapaf80eab2-c3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.591 250273 DEBUG nova.virt.libvirt.guest [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7d:38:92"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapaf80eab2-c3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.594 250273 DEBUG nova.virt.libvirt.driver [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Attempting to detach device tapaf80eab2-c3 from instance bc80c9f1-76f1-4875-895d-9e80312eb293 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.594 250273 DEBUG nova.virt.libvirt.guest [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] detach device xml: <interface type="ethernet">
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <mac address="fa:16:3e:7d:38:92"/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <model type="virtio"/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <mtu size="1442"/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <target dev="tapaf80eab2-c3"/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]: </interface>
Jan 23 04:52:35 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6686 keys, 11052197 bytes, temperature: kUnknown
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161955692065, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 11052197, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11006633, "index_size": 27724, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 171921, "raw_average_key_size": 25, "raw_value_size": 10886051, "raw_average_value_size": 1628, "num_data_blocks": 1110, "num_entries": 6686, "num_filter_entries": 6686, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769161955, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:52:35.692574) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 11052197 bytes
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.694 250273 DEBUG nova.virt.libvirt.guest [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7d:38:92"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapaf80eab2-c3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:52:35.694979) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 46.9 rd, 46.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 8.6 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(10.2) write-amplify(5.1) OK, records in: 7217, records dropped: 531 output_compression: NoCompression
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:52:35.695028) EVENT_LOG_v1 {"time_micros": 1769161955695010, "job": 50, "event": "compaction_finished", "compaction_time_micros": 238553, "compaction_time_cpu_micros": 24200, "output_level": 6, "num_output_files": 1, "total_output_size": 11052197, "num_input_records": 7217, "num_output_records": 6686, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161955695883, "job": 50, "event": "table_file_deletion", "file_number": 88}
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161955698611, "job": 50, "event": "table_file_deletion", "file_number": 86}
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:52:35.453569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:52:35.698744) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:52:35.698758) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:52:35.698760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:52:35.698762) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:52:35 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:52:35.698764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.698 250273 DEBUG nova.virt.libvirt.guest [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:7d:38:92"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapaf80eab2-c3"/></interface>not found in domain: <domain type='kvm' id='28'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <name>instance-00000049</name>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <uuid>bc80c9f1-76f1-4875-895d-9e80312eb293</uuid>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <nova:name>tempest-tempest.common.compute-instance-511471311</nova:name>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <nova:creationTime>2026-01-23 09:52:32</nova:creationTime>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <nova:flavor name="m1.nano">
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:memory>128</nova:memory>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:disk>1</nova:disk>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:swap>0</nova:swap>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:vcpus>1</nova:vcpus>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </nova:flavor>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <nova:owner>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:user uuid="77cda1e9a0404425a06c34637e696603">tempest-AttachInterfacesTestJSON-746967993-project-member</nova:user>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:project uuid="390d19f683334995a5268cf9b4d5e464">tempest-AttachInterfacesTestJSON-746967993</nova:project>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </nova:owner>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <nova:ports>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:port uuid="30248fe9-6da5-4acd-b60f-7c588745f8f3">
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:port uuid="af80eab2-c3b9-439d-baae-ee0d90b6cdda">
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </nova:ports>
Jan 23 04:52:35 np0005593232 nova_compute[250269]: </nova:instance>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <memory unit='KiB'>131072</memory>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <vcpu placement='static'>1</vcpu>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <resource>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <partition>/machine</partition>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </resource>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <sysinfo type='smbios'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <entry name='manufacturer'>RDO</entry>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <entry name='product'>OpenStack Compute</entry>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <entry name='serial'>bc80c9f1-76f1-4875-895d-9e80312eb293</entry>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <entry name='uuid'>bc80c9f1-76f1-4875-895d-9e80312eb293</entry>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <entry name='family'>Virtual Machine</entry>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <boot dev='hd'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <smbios mode='sysinfo'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <vmcoreinfo state='on'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <cpu mode='custom' match='exact' check='full'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <model fallback='forbid'>Nehalem</model>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <feature policy='require' name='x2apic'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <feature policy='require' name='hypervisor'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <feature policy='require' name='vme'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <clock offset='utc'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <timer name='pit' tickpolicy='delay'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <timer name='hpet' present='no'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <on_poweroff>destroy</on_poweroff>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <on_reboot>restart</on_reboot>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <on_crash>destroy</on_crash>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <disk type='network' device='disk'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <driver name='qemu' type='raw' cache='none'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <auth username='openstack'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:        <secret type='ceph' uuid='e1533653-0a5a-584c-b34b-8689f0d32e77'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <source protocol='rbd' name='vms/bc80c9f1-76f1-4875-895d-9e80312eb293_disk' index='2'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:        <host name='192.168.122.100' port='6789'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:        <host name='192.168.122.102' port='6789'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:        <host name='192.168.122.101' port='6789'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target dev='vda' bus='virtio'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='virtio-disk0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <disk type='network' device='cdrom'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <driver name='qemu' type='raw' cache='none'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <auth username='openstack'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:        <secret type='ceph' uuid='e1533653-0a5a-584c-b34b-8689f0d32e77'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <source protocol='rbd' name='vms/bc80c9f1-76f1-4875-895d-9e80312eb293_disk.config' index='1'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:        <host name='192.168.122.100' port='6789'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:        <host name='192.168.122.102' port='6789'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:        <host name='192.168.122.101' port='6789'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target dev='sda' bus='sata'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <readonly/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='sata0-0-0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='0' model='pcie-root'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pcie.0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='1' port='0x10'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.1'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='2' port='0x11'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.2'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='3' port='0x12'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.3'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='4' port='0x13'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.4'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='5' port='0x14'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.5'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='6' port='0x15'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.6'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='7' port='0x16'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.7'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='8' port='0x17'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.8'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='9' port='0x18'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.9'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='10' port='0x19'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.10'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='11' port='0x1a'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.11'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='12' port='0x1b'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.12'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='13' port='0x1c'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.13'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='14' port='0x1d'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.14'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='15' port='0x1e'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.15'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='16' port='0x1f'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.16'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='17' port='0x20'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.17'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='18' port='0x21'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.18'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='19' port='0x22'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.19'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='20' port='0x23'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.20'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='21' port='0x24'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.21'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='22' port='0x25'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.22'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='23' port='0x26'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.23'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='24' port='0x27'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.24'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='25' port='0x28'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.25'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-pci-bridge'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.26'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='usb'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='sata' index='0'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='ide'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <interface type='ethernet'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <mac address='fa:16:3e:d5:db:7e'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target dev='tap30248fe9-6d'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model type='virtio'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <driver name='vhost' rx_queue_size='512'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <mtu size='1442'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='net0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <interface type='ethernet'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <mac address='fa:16:3e:7d:38:92'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target dev='tapaf80eab2-c3'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model type='virtio'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <driver name='vhost' rx_queue_size='512'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <mtu size='1442'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='net1'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <serial type='pty'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <source path='/dev/pts/0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <log file='/var/lib/nova/instances/bc80c9f1-76f1-4875-895d-9e80312eb293/console.log' append='off'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target type='isa-serial' port='0'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:        <model name='isa-serial'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      </target>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='serial0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <console type='pty' tty='/dev/pts/0'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <source path='/dev/pts/0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <log file='/var/lib/nova/instances/bc80c9f1-76f1-4875-895d-9e80312eb293/console.log' append='off'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target type='serial' port='0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='serial0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </console>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <input type='tablet' bus='usb'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='input0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='usb' bus='0' port='1'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </input>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <input type='mouse' bus='ps2'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='input1'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </input>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <input type='keyboard' bus='ps2'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='input2'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </input>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <listen type='address' address='::0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </graphics>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <audio id='1' type='none'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model type='virtio' heads='1' primary='yes'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='video0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <watchdog model='itco' action='reset'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='watchdog0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </watchdog>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <memballoon model='virtio'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <stats period='10'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='balloon0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <rng model='virtio'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <backend model='random'>/dev/urandom</backend>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='rng0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <label>system_u:system_r:svirt_t:s0:c32,c35</label>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c32,c35</imagelabel>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </seclabel>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <label>+107:+107</label>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <imagelabel>+107:+107</imagelabel>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </seclabel>
Jan 23 04:52:35 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:52:35 np0005593232 nova_compute[250269]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.701 250273 INFO nova.virt.libvirt.driver [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Successfully detached device tapaf80eab2-c3 from instance bc80c9f1-76f1-4875-895d-9e80312eb293 from the persistent domain config.#033[00m
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.701 250273 DEBUG nova.virt.libvirt.driver [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] (1/8): Attempting to detach device tapaf80eab2-c3 with device alias net1 from instance bc80c9f1-76f1-4875-895d-9e80312eb293 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.701 250273 DEBUG nova.virt.libvirt.guest [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] detach device xml: <interface type="ethernet">
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <mac address="fa:16:3e:7d:38:92"/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <model type="virtio"/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <mtu size="1442"/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <target dev="tapaf80eab2-c3"/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]: </interface>
Jan 23 04:52:35 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 04:52:35 np0005593232 kernel: tapaf80eab2-c3 (unregistering): left promiscuous mode
Jan 23 04:52:35 np0005593232 NetworkManager[49057]: <info>  [1769161955.8096] device (tapaf80eab2-c3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.818 250273 DEBUG nova.virt.libvirt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Received event <DeviceRemovedEvent: 1769161955.81783, bc80c9f1-76f1-4875-895d-9e80312eb293 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.819 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.820 250273 DEBUG nova.virt.libvirt.driver [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Start waiting for the detach event from libvirt for device tapaf80eab2-c3 with device alias net1 for instance bc80c9f1-76f1-4875-895d-9e80312eb293 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.821 250273 DEBUG nova.virt.libvirt.guest [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7d:38:92"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapaf80eab2-c3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.821 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:35 np0005593232 ovn_controller[151001]: 2026-01-23T09:52:35Z|00236|binding|INFO|Releasing lport af80eab2-c3b9-439d-baae-ee0d90b6cdda from this chassis (sb_readonly=0)
Jan 23 04:52:35 np0005593232 ovn_controller[151001]: 2026-01-23T09:52:35Z|00237|binding|INFO|Setting lport af80eab2-c3b9-439d-baae-ee0d90b6cdda down in Southbound
Jan 23 04:52:35 np0005593232 ovn_controller[151001]: 2026-01-23T09:52:35Z|00238|binding|INFO|Removing iface tapaf80eab2-c3 ovn-installed in OVS
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.825 250273 DEBUG nova.virt.libvirt.guest [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:7d:38:92"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapaf80eab2-c3"/></interface>not found in domain: <domain type='kvm' id='28'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <name>instance-00000049</name>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <uuid>bc80c9f1-76f1-4875-895d-9e80312eb293</uuid>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <nova:name>tempest-tempest.common.compute-instance-511471311</nova:name>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <nova:creationTime>2026-01-23 09:52:32</nova:creationTime>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <nova:flavor name="m1.nano">
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:memory>128</nova:memory>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:disk>1</nova:disk>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:swap>0</nova:swap>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:vcpus>1</nova:vcpus>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </nova:flavor>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <nova:owner>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:user uuid="77cda1e9a0404425a06c34637e696603">tempest-AttachInterfacesTestJSON-746967993-project-member</nova:user>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:project uuid="390d19f683334995a5268cf9b4d5e464">tempest-AttachInterfacesTestJSON-746967993</nova:project>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </nova:owner>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <nova:ports>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:port uuid="30248fe9-6da5-4acd-b60f-7c588745f8f3">
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:port uuid="af80eab2-c3b9-439d-baae-ee0d90b6cdda">
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </nova:ports>
Jan 23 04:52:35 np0005593232 nova_compute[250269]: </nova:instance>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <memory unit='KiB'>131072</memory>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <vcpu placement='static'>1</vcpu>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <resource>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <partition>/machine</partition>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </resource>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <sysinfo type='smbios'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <entry name='manufacturer'>RDO</entry>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <entry name='product'>OpenStack Compute</entry>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <entry name='serial'>bc80c9f1-76f1-4875-895d-9e80312eb293</entry>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <entry name='uuid'>bc80c9f1-76f1-4875-895d-9e80312eb293</entry>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <entry name='family'>Virtual Machine</entry>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <boot dev='hd'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <smbios mode='sysinfo'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <vmcoreinfo state='on'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <cpu mode='custom' match='exact' check='full'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <model fallback='forbid'>Nehalem</model>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <feature policy='require' name='x2apic'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <feature policy='require' name='hypervisor'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <feature policy='require' name='vme'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <clock offset='utc'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <timer name='pit' tickpolicy='delay'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <timer name='hpet' present='no'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <on_poweroff>destroy</on_poweroff>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <on_reboot>restart</on_reboot>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <on_crash>destroy</on_crash>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <disk type='network' device='disk'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <driver name='qemu' type='raw' cache='none'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <auth username='openstack'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:        <secret type='ceph' uuid='e1533653-0a5a-584c-b34b-8689f0d32e77'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <source protocol='rbd' name='vms/bc80c9f1-76f1-4875-895d-9e80312eb293_disk' index='2'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:        <host name='192.168.122.100' port='6789'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:        <host name='192.168.122.102' port='6789'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:        <host name='192.168.122.101' port='6789'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target dev='vda' bus='virtio'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='virtio-disk0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <disk type='network' device='cdrom'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <driver name='qemu' type='raw' cache='none'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <auth username='openstack'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:        <secret type='ceph' uuid='e1533653-0a5a-584c-b34b-8689f0d32e77'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <source protocol='rbd' name='vms/bc80c9f1-76f1-4875-895d-9e80312eb293_disk.config' index='1'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:        <host name='192.168.122.100' port='6789'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:        <host name='192.168.122.102' port='6789'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:        <host name='192.168.122.101' port='6789'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target dev='sda' bus='sata'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <readonly/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='sata0-0-0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='0' model='pcie-root'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pcie.0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='1' port='0x10'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.1'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='2' port='0x11'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.2'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='3' port='0x12'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.3'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='4' port='0x13'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.4'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='5' port='0x14'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.5'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='6' port='0x15'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.6'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='7' port='0x16'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.7'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='8' port='0x17'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.8'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='9' port='0x18'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.9'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='10' port='0x19'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.10'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='11' port='0x1a'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.11'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='12' port='0x1b'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.12'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='13' port='0x1c'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.13'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='14' port='0x1d'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.14'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='15' port='0x1e'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.15'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='16' port='0x1f'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.16'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='17' port='0x20'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.17'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='18' port='0x21'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.18'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='19' port='0x22'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.19'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='20' port='0x23'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.20'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='21' port='0x24'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.21'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='22' port='0x25'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.22'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='23' port='0x26'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.23'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='24' port='0x27'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.24'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target chassis='25' port='0x28'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.25'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model name='pcie-pci-bridge'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='pci.26'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='usb'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <controller type='sata' index='0'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='ide'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </controller>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <interface type='ethernet'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <mac address='fa:16:3e:d5:db:7e'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target dev='tap30248fe9-6d'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model type='virtio'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <driver name='vhost' rx_queue_size='512'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <mtu size='1442'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='net0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <serial type='pty'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <source path='/dev/pts/0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <log file='/var/lib/nova/instances/bc80c9f1-76f1-4875-895d-9e80312eb293/console.log' append='off'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target type='isa-serial' port='0'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:        <model name='isa-serial'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      </target>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='serial0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <console type='pty' tty='/dev/pts/0'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <source path='/dev/pts/0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <log file='/var/lib/nova/instances/bc80c9f1-76f1-4875-895d-9e80312eb293/console.log' append='off'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <target type='serial' port='0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='serial0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </console>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <input type='tablet' bus='usb'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='input0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='usb' bus='0' port='1'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </input>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <input type='mouse' bus='ps2'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='input1'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </input>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <input type='keyboard' bus='ps2'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='input2'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </input>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <listen type='address' address='::0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </graphics>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <audio id='1' type='none'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <model type='virtio' heads='1' primary='yes'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='video0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <watchdog model='itco' action='reset'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='watchdog0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </watchdog>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <memballoon model='virtio'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <stats period='10'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='balloon0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <rng model='virtio'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <backend model='random'>/dev/urandom</backend>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <alias name='rng0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <label>system_u:system_r:svirt_t:s0:c32,c35</label>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c32,c35</imagelabel>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </seclabel>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <label>+107:+107</label>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <imagelabel>+107:+107</imagelabel>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </seclabel>
Jan 23 04:52:35 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:52:35 np0005593232 nova_compute[250269]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 23 04:52:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:35.830 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:38:92 10.100.0.12'], port_security=['fa:16:3e:7d:38:92 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-2098344084', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'bc80c9f1-76f1-4875-895d-9e80312eb293', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7808328e-22f9-46df-ac06-f8c3d6ad10c4', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-2098344084', 'neutron:project_id': '390d19f683334995a5268cf9b4d5e464', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'e14d2748-8402-4583-8740-ef7703629f43', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=396f5815-d5dc-4484-bb15-e71911e6f8a2, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=af80eab2-c3b9-439d-baae-ee0d90b6cdda) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.827 250273 INFO nova.virt.libvirt.driver [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Successfully detached device tapaf80eab2-c3 from instance bc80c9f1-76f1-4875-895d-9e80312eb293 from the live domain config.
Jan 23 04:52:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:35.832 161902 INFO neutron.agent.ovn.metadata.agent [-] Port af80eab2-c3b9-439d-baae-ee0d90b6cdda in datapath 7808328e-22f9-46df-ac06-f8c3d6ad10c4 unbound from our chassis
Jan 23 04:52:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:35.833 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7808328e-22f9-46df-ac06-f8c3d6ad10c4
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.828 250273 DEBUG nova.virt.libvirt.vif [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:51:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-511471311',display_name='tempest-tempest.common.compute-instance-511471311',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-511471311',id=73,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISF6L8g87ZfxLrm8Wwm+gzemsck5aetIhd8gCsjpNrTc2Fv/no3h23xzReyi9tgvOePkWLat/BN4ukRmY5i9SKOoCvqi25H2ncCjSqcqS+cT6X1PkedlTAGxBrEwc2adg==',key_name='tempest-keypair-1775870371',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:51:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='390d19f683334995a5268cf9b4d5e464',ramdisk_id='',reservation_id='r-h371e2sw',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-746967993',owner_user_name='tempest-AttachInterfacesTestJSON-746967993-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:51:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='77cda1e9a0404425a06c34637e696603',uuid=bc80c9f1-76f1-4875-895d-9e80312eb293,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "address": "fa:16:3e:7d:38:92", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf80eab2-c3", "ovs_interfaceid": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.834 250273 DEBUG nova.network.os_vif_util [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Converting VIF {"id": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "address": "fa:16:3e:7d:38:92", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf80eab2-c3", "ovs_interfaceid": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.835 250273 DEBUG nova.network.os_vif_util [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:38:92,bridge_name='br-int',has_traffic_filtering=True,id=af80eab2-c3b9-439d-baae-ee0d90b6cdda,network=Network(7808328e-22f9-46df-ac06-f8c3d6ad10c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapaf80eab2-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.835 250273 DEBUG os_vif [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:38:92,bridge_name='br-int',has_traffic_filtering=True,id=af80eab2-c3b9-439d-baae-ee0d90b6cdda,network=Network(7808328e-22f9-46df-ac06-f8c3d6ad10c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapaf80eab2-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.837 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.837 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaf80eab2-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.840 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.842 250273 INFO os_vif [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:38:92,bridge_name='br-int',has_traffic_filtering=True,id=af80eab2-c3b9-439d-baae-ee0d90b6cdda,network=Network(7808328e-22f9-46df-ac06-f8c3d6ad10c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapaf80eab2-c3')
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.843 250273 DEBUG nova.virt.libvirt.guest [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <nova:name>tempest-tempest.common.compute-instance-511471311</nova:name>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <nova:creationTime>2026-01-23 09:52:35</nova:creationTime>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <nova:flavor name="m1.nano">
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:memory>128</nova:memory>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:disk>1</nova:disk>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:swap>0</nova:swap>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:vcpus>1</nova:vcpus>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </nova:flavor>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <nova:owner>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:user uuid="77cda1e9a0404425a06c34637e696603">tempest-AttachInterfacesTestJSON-746967993-project-member</nova:user>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:project uuid="390d19f683334995a5268cf9b4d5e464">tempest-AttachInterfacesTestJSON-746967993</nova:project>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </nova:owner>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  <nova:ports>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    <nova:port uuid="30248fe9-6da5-4acd-b60f-7c588745f8f3">
Jan 23 04:52:35 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 04:52:35 np0005593232 nova_compute[250269]:  </nova:ports>
Jan 23 04:52:35 np0005593232 nova_compute[250269]: </nova:instance>
Jan 23 04:52:35 np0005593232 nova_compute[250269]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 23 04:52:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:35.860 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[130fb2b1-d525-4711-862a-a98510a9f509]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:52:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:35.895 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[ba71d3eb-873e-4267-9a49-6d074aa94bb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:52:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:35.898 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[55e69699-92c1-4be2-a931-26d0b044a181]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:52:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:35.929 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[f41d0870-d9bd-48ef-9431-3867f6ee3b3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:52:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:35.946 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[eff50821-f480-4bbc-9828-7cef27bde74c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7808328e-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:22:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579546, 'reachable_time': 39774, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299034, 'error': None, 'target': 'ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:35.963 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[efdf94a8-42e4-4124-8f1c-338cb0132628]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7808328e-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 579557, 'tstamp': 579557}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299035, 'error': None, 'target': 'ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7808328e-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 579559, 'tstamp': 579559}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299035, 'error': None, 'target': 'ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:35.965 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7808328e-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:52:35 np0005593232 nova_compute[250269]: 2026-01-23 09:52:35.967 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:35.969 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7808328e-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:52:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:35.969 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:52:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:35.969 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7808328e-20, col_values=(('external_ids', {'iface-id': 'db11772c-e758-43ff-997c-e8c835433e90'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:52:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:35.970 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:52:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1832: 321 pgs: 321 active+clean; 405 MiB data, 864 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 409 B/s wr, 65 op/s
Jan 23 04:52:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:52:37
Jan 23 04:52:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:52:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:52:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['.mgr', 'volumes', 'backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'images']
Jan 23 04:52:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:52:37 np0005593232 nova_compute[250269]: 2026-01-23 09:52:37.336 250273 DEBUG nova.network.neutron [req-33e6e347-80b5-4352-a4ad-a12034fd41b6 req-9265324c-4074-48bb-9de0-063cc7da5f66 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Updated VIF entry in instance network info cache for port af80eab2-c3b9-439d-baae-ee0d90b6cdda. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:52:37 np0005593232 nova_compute[250269]: 2026-01-23 09:52:37.336 250273 DEBUG nova.network.neutron [req-33e6e347-80b5-4352-a4ad-a12034fd41b6 req-9265324c-4074-48bb-9de0-063cc7da5f66 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Updating instance_info_cache with network_info: [{"id": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "address": "fa:16:3e:d5:db:7e", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30248fe9-6d", "ovs_interfaceid": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "address": "fa:16:3e:7d:38:92", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf80eab2-c3", "ovs_interfaceid": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:52:37 np0005593232 nova_compute[250269]: 2026-01-23 09:52:37.370 250273 DEBUG oslo_concurrency.lockutils [req-33e6e347-80b5-4352-a4ad-a12034fd41b6 req-9265324c-4074-48bb-9de0-063cc7da5f66 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:52:37 np0005593232 nova_compute[250269]: 2026-01-23 09:52:37.370 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:52:37 np0005593232 nova_compute[250269]: 2026-01-23 09:52:37.371 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 04:52:37 np0005593232 nova_compute[250269]: 2026-01-23 09:52:37.371 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid bc80c9f1-76f1-4875-895d-9e80312eb293 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:52:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:37.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:37 np0005593232 nova_compute[250269]: 2026-01-23 09:52:37.474 250273 DEBUG nova.compute.manager [req-5f5ac4e1-4879-48f1-98c9-0e3b344cc7f5 req-f9578af7-5c49-493f-85d8-054450a1e014 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Received event network-vif-unplugged-af80eab2-c3b9-439d-baae-ee0d90b6cdda external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:52:37 np0005593232 nova_compute[250269]: 2026-01-23 09:52:37.475 250273 DEBUG oslo_concurrency.lockutils [req-5f5ac4e1-4879-48f1-98c9-0e3b344cc7f5 req-f9578af7-5c49-493f-85d8-054450a1e014 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:52:37 np0005593232 nova_compute[250269]: 2026-01-23 09:52:37.475 250273 DEBUG oslo_concurrency.lockutils [req-5f5ac4e1-4879-48f1-98c9-0e3b344cc7f5 req-f9578af7-5c49-493f-85d8-054450a1e014 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:52:37 np0005593232 nova_compute[250269]: 2026-01-23 09:52:37.475 250273 DEBUG oslo_concurrency.lockutils [req-5f5ac4e1-4879-48f1-98c9-0e3b344cc7f5 req-f9578af7-5c49-493f-85d8-054450a1e014 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:52:37 np0005593232 nova_compute[250269]: 2026-01-23 09:52:37.476 250273 DEBUG nova.compute.manager [req-5f5ac4e1-4879-48f1-98c9-0e3b344cc7f5 req-f9578af7-5c49-493f-85d8-054450a1e014 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] No waiting events found dispatching network-vif-unplugged-af80eab2-c3b9-439d-baae-ee0d90b6cdda pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:52:37 np0005593232 nova_compute[250269]: 2026-01-23 09:52:37.476 250273 WARNING nova.compute.manager [req-5f5ac4e1-4879-48f1-98c9-0e3b344cc7f5 req-f9578af7-5c49-493f-85d8-054450a1e014 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Received unexpected event network-vif-unplugged-af80eab2-c3b9-439d-baae-ee0d90b6cdda for instance with vm_state active and task_state None.#033[00m
Jan 23 04:52:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:52:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:37.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:52:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:52:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:52:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:52:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:52:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:52:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:52:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:52:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:52:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:52:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:52:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:52:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:52:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:52:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:52:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:52:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:52:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1833: 321 pgs: 321 active+clean; 362 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 1.7 MiB/s wr, 48 op/s
Jan 23 04:52:38 np0005593232 ovn_controller[151001]: 2026-01-23T09:52:38Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:49:23:e7 10.100.0.11
Jan 23 04:52:38 np0005593232 ovn_controller[151001]: 2026-01-23T09:52:38Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:49:23:e7 10.100.0.11
Jan 23 04:52:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:39.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:39.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:39 np0005593232 nova_compute[250269]: 2026-01-23 09:52:39.689 250273 DEBUG nova.compute.manager [req-e508d2ce-4d30-49dd-befc-b6f5e4aeaf7c req-a36006df-8b7b-4f89-82d4-3ba1b3bca5f0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Received event network-vif-plugged-af80eab2-c3b9-439d-baae-ee0d90b6cdda external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:52:39 np0005593232 nova_compute[250269]: 2026-01-23 09:52:39.690 250273 DEBUG oslo_concurrency.lockutils [req-e508d2ce-4d30-49dd-befc-b6f5e4aeaf7c req-a36006df-8b7b-4f89-82d4-3ba1b3bca5f0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:52:39 np0005593232 nova_compute[250269]: 2026-01-23 09:52:39.690 250273 DEBUG oslo_concurrency.lockutils [req-e508d2ce-4d30-49dd-befc-b6f5e4aeaf7c req-a36006df-8b7b-4f89-82d4-3ba1b3bca5f0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:52:39 np0005593232 nova_compute[250269]: 2026-01-23 09:52:39.690 250273 DEBUG oslo_concurrency.lockutils [req-e508d2ce-4d30-49dd-befc-b6f5e4aeaf7c req-a36006df-8b7b-4f89-82d4-3ba1b3bca5f0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:52:39 np0005593232 nova_compute[250269]: 2026-01-23 09:52:39.690 250273 DEBUG nova.compute.manager [req-e508d2ce-4d30-49dd-befc-b6f5e4aeaf7c req-a36006df-8b7b-4f89-82d4-3ba1b3bca5f0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] No waiting events found dispatching network-vif-plugged-af80eab2-c3b9-439d-baae-ee0d90b6cdda pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:52:39 np0005593232 nova_compute[250269]: 2026-01-23 09:52:39.691 250273 WARNING nova.compute.manager [req-e508d2ce-4d30-49dd-befc-b6f5e4aeaf7c req-a36006df-8b7b-4f89-82d4-3ba1b3bca5f0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Received unexpected event network-vif-plugged-af80eab2-c3b9-439d-baae-ee0d90b6cdda for instance with vm_state active and task_state None.#033[00m
Jan 23 04:52:40 np0005593232 nova_compute[250269]: 2026-01-23 09:52:40.013 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:52:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1834: 321 pgs: 321 active+clean; 362 MiB data, 857 MiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 1.6 MiB/s wr, 47 op/s
Jan 23 04:52:40 np0005593232 nova_compute[250269]: 2026-01-23 09:52:40.840 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:40 np0005593232 nova_compute[250269]: 2026-01-23 09:52:40.874 250273 DEBUG oslo_concurrency.lockutils [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Acquiring lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:52:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:41.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:41.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.172 250273 DEBUG oslo_concurrency.lockutils [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Acquiring lock "bc80c9f1-76f1-4875-895d-9e80312eb293" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.173 250273 DEBUG oslo_concurrency.lockutils [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lock "bc80c9f1-76f1-4875-895d-9e80312eb293" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.173 250273 DEBUG oslo_concurrency.lockutils [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Acquiring lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.173 250273 DEBUG oslo_concurrency.lockutils [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.174 250273 DEBUG oslo_concurrency.lockutils [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.175 250273 INFO nova.compute.manager [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Terminating instance#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.175 250273 DEBUG nova.compute.manager [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 04:52:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Jan 23 04:52:42 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Jan 23 04:52:42 np0005593232 kernel: tap30248fe9-6d (unregistering): left promiscuous mode
Jan 23 04:52:42 np0005593232 NetworkManager[49057]: <info>  [1769161962.4607] device (tap30248fe9-6d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:52:42 np0005593232 ovn_controller[151001]: 2026-01-23T09:52:42Z|00239|binding|INFO|Releasing lport 30248fe9-6da5-4acd-b60f-7c588745f8f3 from this chassis (sb_readonly=0)
Jan 23 04:52:42 np0005593232 ovn_controller[151001]: 2026-01-23T09:52:42Z|00240|binding|INFO|Setting lport 30248fe9-6da5-4acd-b60f-7c588745f8f3 down in Southbound
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.518 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:42 np0005593232 ovn_controller[151001]: 2026-01-23T09:52:42Z|00241|binding|INFO|Removing iface tap30248fe9-6d ovn-installed in OVS
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.522 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:42.533 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:db:7e 10.100.0.9'], port_security=['fa:16:3e:d5:db:7e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'bc80c9f1-76f1-4875-895d-9e80312eb293', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7808328e-22f9-46df-ac06-f8c3d6ad10c4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '390d19f683334995a5268cf9b4d5e464', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0547d145-6526-47bb-a492-48772f700715', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.235'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=396f5815-d5dc-4484-bb15-e71911e6f8a2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=30248fe9-6da5-4acd-b60f-7c588745f8f3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:52:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:42.534 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 30248fe9-6da5-4acd-b60f-7c588745f8f3 in datapath 7808328e-22f9-46df-ac06-f8c3d6ad10c4 unbound from our chassis#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.536 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:42.537 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7808328e-22f9-46df-ac06-f8c3d6ad10c4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 04:52:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:42.538 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[77bcdd42-9c14-43c4-ba3e-4f4e17ed80e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:42.539 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4 namespace which is not needed anymore#033[00m
Jan 23 04:52:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1836: 321 pgs: 321 active+clean; 351 MiB data, 876 MiB used, 20 GiB / 21 GiB avail; 215 KiB/s rd, 2.5 MiB/s wr, 72 op/s
Jan 23 04:52:42 np0005593232 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000049.scope: Deactivated successfully.
Jan 23 04:52:42 np0005593232 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000049.scope: Consumed 17.013s CPU time.
Jan 23 04:52:42 np0005593232 systemd-machined[215836]: Machine qemu-28-instance-00000049 terminated.
Jan 23 04:52:42 np0005593232 NetworkManager[49057]: <info>  [1769161962.5917] manager: (tap30248fe9-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/120)
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.593 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.600 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:42.606 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:52:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:42.606 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.606 250273 INFO nova.virt.libvirt.driver [-] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Instance destroyed successfully.#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.607 250273 DEBUG nova.objects.instance [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lazy-loading 'resources' on Instance uuid bc80c9f1-76f1-4875-895d-9e80312eb293 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:52:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:42.608 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.630 250273 DEBUG nova.virt.libvirt.vif [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:51:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-511471311',display_name='tempest-tempest.common.compute-instance-511471311',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-511471311',id=73,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISF6L8g87ZfxLrm8Wwm+gzemsck5aetIhd8gCsjpNrTc2Fv/no3h23xzReyi9tgvOePkWLat/BN4ukRmY5i9SKOoCvqi25H2ncCjSqcqS+cT6X1PkedlTAGxBrEwc2adg==',key_name='tempest-keypair-1775870371',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:51:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='390d19f683334995a5268cf9b4d5e464',ramdisk_id='',reservation_id='r-h371e2sw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-746967993',owner_user_name='tempest-AttachInterfacesTestJSON-746967993-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:51:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='77cda1e9a0404425a06c34637e696603',uuid=bc80c9f1-76f1-4875-895d-9e80312eb293,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "address": "fa:16:3e:d5:db:7e", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30248fe9-6d", "ovs_interfaceid": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.631 250273 DEBUG nova.network.os_vif_util [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Converting VIF {"id": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "address": "fa:16:3e:d5:db:7e", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30248fe9-6d", "ovs_interfaceid": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.632 250273 DEBUG nova.network.os_vif_util [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d5:db:7e,bridge_name='br-int',has_traffic_filtering=True,id=30248fe9-6da5-4acd-b60f-7c588745f8f3,network=Network(7808328e-22f9-46df-ac06-f8c3d6ad10c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30248fe9-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.632 250273 DEBUG os_vif [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:db:7e,bridge_name='br-int',has_traffic_filtering=True,id=30248fe9-6da5-4acd-b60f-7c588745f8f3,network=Network(7808328e-22f9-46df-ac06-f8c3d6ad10c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30248fe9-6d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.634 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.635 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30248fe9-6d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.636 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.638 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.641 250273 INFO os_vif [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:db:7e,bridge_name='br-int',has_traffic_filtering=True,id=30248fe9-6da5-4acd-b60f-7c588745f8f3,network=Network(7808328e-22f9-46df-ac06-f8c3d6ad10c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30248fe9-6d')#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.642 250273 DEBUG nova.virt.libvirt.vif [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:51:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-511471311',display_name='tempest-tempest.common.compute-instance-511471311',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-511471311',id=73,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISF6L8g87ZfxLrm8Wwm+gzemsck5aetIhd8gCsjpNrTc2Fv/no3h23xzReyi9tgvOePkWLat/BN4ukRmY5i9SKOoCvqi25H2ncCjSqcqS+cT6X1PkedlTAGxBrEwc2adg==',key_name='tempest-keypair-1775870371',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:51:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='390d19f683334995a5268cf9b4d5e464',ramdisk_id='',reservation_id='r-h371e2sw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-746967993',owner_user_name='tempest-AttachInterfacesTestJSON-746967993-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:51:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='77cda1e9a0404425a06c34637e696603',uuid=bc80c9f1-76f1-4875-895d-9e80312eb293,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "address": "fa:16:3e:7d:38:92", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf80eab2-c3", "ovs_interfaceid": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.642 250273 DEBUG nova.network.os_vif_util [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Converting VIF {"id": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "address": "fa:16:3e:7d:38:92", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf80eab2-c3", "ovs_interfaceid": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.643 250273 DEBUG nova.network.os_vif_util [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:38:92,bridge_name='br-int',has_traffic_filtering=True,id=af80eab2-c3b9-439d-baae-ee0d90b6cdda,network=Network(7808328e-22f9-46df-ac06-f8c3d6ad10c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapaf80eab2-c3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.644 250273 DEBUG os_vif [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:38:92,bridge_name='br-int',has_traffic_filtering=True,id=af80eab2-c3b9-439d-baae-ee0d90b6cdda,network=Network(7808328e-22f9-46df-ac06-f8c3d6ad10c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapaf80eab2-c3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.645 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.645 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaf80eab2-c3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.646 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.648 250273 INFO os_vif [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:38:92,bridge_name='br-int',has_traffic_filtering=True,id=af80eab2-c3b9-439d-baae-ee0d90b6cdda,network=Network(7808328e-22f9-46df-ac06-f8c3d6ad10c4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapaf80eab2-c3')#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.963 250273 DEBUG nova.compute.manager [req-94e033b2-0ae6-4f16-bce9-7537d3fbc206 req-ab6b3785-6e5d-4b16-a886-7955f1880017 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Received event network-vif-unplugged-30248fe9-6da5-4acd-b60f-7c588745f8f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.963 250273 DEBUG oslo_concurrency.lockutils [req-94e033b2-0ae6-4f16-bce9-7537d3fbc206 req-ab6b3785-6e5d-4b16-a886-7955f1880017 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.964 250273 DEBUG oslo_concurrency.lockutils [req-94e033b2-0ae6-4f16-bce9-7537d3fbc206 req-ab6b3785-6e5d-4b16-a886-7955f1880017 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.964 250273 DEBUG oslo_concurrency.lockutils [req-94e033b2-0ae6-4f16-bce9-7537d3fbc206 req-ab6b3785-6e5d-4b16-a886-7955f1880017 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.964 250273 DEBUG nova.compute.manager [req-94e033b2-0ae6-4f16-bce9-7537d3fbc206 req-ab6b3785-6e5d-4b16-a886-7955f1880017 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] No waiting events found dispatching network-vif-unplugged-30248fe9-6da5-4acd-b60f-7c588745f8f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:52:42 np0005593232 nova_compute[250269]: 2026-01-23 09:52:42.964 250273 DEBUG nova.compute.manager [req-94e033b2-0ae6-4f16-bce9-7537d3fbc206 req-ab6b3785-6e5d-4b16-a886-7955f1880017 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Received event network-vif-unplugged-30248fe9-6da5-4acd-b60f-7c588745f8f3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 04:52:42 np0005593232 neutron-haproxy-ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4[297170]: [NOTICE]   (297174) : haproxy version is 2.8.14-c23fe91
Jan 23 04:52:42 np0005593232 neutron-haproxy-ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4[297170]: [NOTICE]   (297174) : path to executable is /usr/sbin/haproxy
Jan 23 04:52:42 np0005593232 neutron-haproxy-ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4[297170]: [WARNING]  (297174) : Exiting Master process...
Jan 23 04:52:42 np0005593232 neutron-haproxy-ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4[297170]: [WARNING]  (297174) : Exiting Master process...
Jan 23 04:52:42 np0005593232 neutron-haproxy-ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4[297170]: [ALERT]    (297174) : Current worker (297176) exited with code 143 (Terminated)
Jan 23 04:52:42 np0005593232 neutron-haproxy-ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4[297170]: [WARNING]  (297174) : All workers exited. Exiting... (0)
Jan 23 04:52:42 np0005593232 systemd[1]: libpod-e417cd1eb59f9139290bd86e4f48aff148900f298585ccdf421b19f3df47163c.scope: Deactivated successfully.
Jan 23 04:52:42 np0005593232 podman[299120]: 2026-01-23 09:52:42.995364819 +0000 UTC m=+0.353616993 container died e417cd1eb59f9139290bd86e4f48aff148900f298585ccdf421b19f3df47163c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true)
Jan 23 04:52:43 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e417cd1eb59f9139290bd86e4f48aff148900f298585ccdf421b19f3df47163c-userdata-shm.mount: Deactivated successfully.
Jan 23 04:52:43 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c24e6065da71dea0ac0848426c5c9e7169c59c90e4b8f7aa87931bf44e2b70be-merged.mount: Deactivated successfully.
Jan 23 04:52:43 np0005593232 podman[299120]: 2026-01-23 09:52:43.040267033 +0000 UTC m=+0.398519177 container cleanup e417cd1eb59f9139290bd86e4f48aff148900f298585ccdf421b19f3df47163c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:52:43 np0005593232 systemd[1]: libpod-conmon-e417cd1eb59f9139290bd86e4f48aff148900f298585ccdf421b19f3df47163c.scope: Deactivated successfully.
Jan 23 04:52:43 np0005593232 podman[299165]: 2026-01-23 09:52:43.141068436 +0000 UTC m=+0.079405292 container remove e417cd1eb59f9139290bd86e4f48aff148900f298585ccdf421b19f3df47163c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:52:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:43.147 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[241ce9e1-11ca-498c-8287-9f7596503a33]: (4, ('Fri Jan 23 09:52:42 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4 (e417cd1eb59f9139290bd86e4f48aff148900f298585ccdf421b19f3df47163c)\ne417cd1eb59f9139290bd86e4f48aff148900f298585ccdf421b19f3df47163c\nFri Jan 23 09:52:43 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4 (e417cd1eb59f9139290bd86e4f48aff148900f298585ccdf421b19f3df47163c)\ne417cd1eb59f9139290bd86e4f48aff148900f298585ccdf421b19f3df47163c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:43.149 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[08e6b475-9e8a-4e49-8048-9862a3d59e44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:43.150 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7808328e-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:52:43 np0005593232 nova_compute[250269]: 2026-01-23 09:52:43.151 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:43 np0005593232 kernel: tap7808328e-20: left promiscuous mode
Jan 23 04:52:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:43.158 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8860f1f5-91b4-44d9-8e4a-71743dfad80a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:43 np0005593232 nova_compute[250269]: 2026-01-23 09:52:43.173 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:43.175 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a266c108-f4ef-4bb6-97db-dd46e70d5d09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:43.177 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[63c9e79b-5a85-48f8-a057-9ddd3579e99b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:43.197 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2e367358-7f5a-452e-affa-b194b91f4f72]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 579540, 'reachable_time': 37185, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299180, 'error': None, 'target': 'ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:43 np0005593232 systemd[1]: run-netns-ovnmeta\x2d7808328e\x2d22f9\x2d46df\x2dac06\x2df8c3d6ad10c4.mount: Deactivated successfully.
Jan 23 04:52:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:43.201 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7808328e-22f9-46df-ac06-f8c3d6ad10c4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 04:52:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:43.202 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ba29a3-00ad-420b-b860-24cf173cb063]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:52:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:43.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:52:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:43.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:43 np0005593232 nova_compute[250269]: 2026-01-23 09:52:43.642 250273 INFO nova.virt.libvirt.driver [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Deleting instance files /var/lib/nova/instances/bc80c9f1-76f1-4875-895d-9e80312eb293_del#033[00m
Jan 23 04:52:43 np0005593232 nova_compute[250269]: 2026-01-23 09:52:43.644 250273 INFO nova.virt.libvirt.driver [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Deletion of /var/lib/nova/instances/bc80c9f1-76f1-4875-895d-9e80312eb293_del complete#033[00m
Jan 23 04:52:44 np0005593232 nova_compute[250269]: 2026-01-23 09:52:44.168 250273 INFO nova.compute.manager [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Took 1.99 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 04:52:44 np0005593232 nova_compute[250269]: 2026-01-23 09:52:44.168 250273 DEBUG oslo.service.loopingcall [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 04:52:44 np0005593232 nova_compute[250269]: 2026-01-23 09:52:44.169 250273 DEBUG nova.compute.manager [-] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 04:52:44 np0005593232 nova_compute[250269]: 2026-01-23 09:52:44.169 250273 DEBUG nova.network.neutron [-] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 04:52:44 np0005593232 nova_compute[250269]: 2026-01-23 09:52:44.251 250273 DEBUG oslo_concurrency.lockutils [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquiring lock "refresh_cache-a3c08e79-4f2b-42f2-bcac-21cbcfbc5247" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:52:44 np0005593232 nova_compute[250269]: 2026-01-23 09:52:44.252 250273 DEBUG oslo_concurrency.lockutils [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquired lock "refresh_cache-a3c08e79-4f2b-42f2-bcac-21cbcfbc5247" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:52:44 np0005593232 nova_compute[250269]: 2026-01-23 09:52:44.252 250273 DEBUG nova.network.neutron [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:52:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1837: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 358 MiB data, 866 MiB used, 20 GiB / 21 GiB avail; 418 KiB/s rd, 2.6 MiB/s wr, 120 op/s
Jan 23 04:52:45 np0005593232 nova_compute[250269]: 2026-01-23 09:52:45.049 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:52:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:45.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:45 np0005593232 nova_compute[250269]: 2026-01-23 09:52:45.504 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Updating instance_info_cache with network_info: [{"id": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "address": "fa:16:3e:d5:db:7e", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30248fe9-6d", "ovs_interfaceid": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "address": "fa:16:3e:7d:38:92", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaf80eab2-c3", "ovs_interfaceid": "af80eab2-c3b9-439d-baae-ee0d90b6cdda", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:52:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:52:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:45.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:52:45 np0005593232 nova_compute[250269]: 2026-01-23 09:52:45.550 250273 DEBUG nova.compute.manager [req-b87b0b80-115a-4d2f-84d5-ba60f656a932 req-464a6f39-1510-45af-9ff8-ad3a62b1c265 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Received event network-vif-plugged-30248fe9-6da5-4acd-b60f-7c588745f8f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:52:45 np0005593232 nova_compute[250269]: 2026-01-23 09:52:45.550 250273 DEBUG oslo_concurrency.lockutils [req-b87b0b80-115a-4d2f-84d5-ba60f656a932 req-464a6f39-1510-45af-9ff8-ad3a62b1c265 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:52:45 np0005593232 nova_compute[250269]: 2026-01-23 09:52:45.551 250273 DEBUG oslo_concurrency.lockutils [req-b87b0b80-115a-4d2f-84d5-ba60f656a932 req-464a6f39-1510-45af-9ff8-ad3a62b1c265 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:52:45 np0005593232 nova_compute[250269]: 2026-01-23 09:52:45.551 250273 DEBUG oslo_concurrency.lockutils [req-b87b0b80-115a-4d2f-84d5-ba60f656a932 req-464a6f39-1510-45af-9ff8-ad3a62b1c265 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "bc80c9f1-76f1-4875-895d-9e80312eb293-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:52:45 np0005593232 nova_compute[250269]: 2026-01-23 09:52:45.551 250273 DEBUG nova.compute.manager [req-b87b0b80-115a-4d2f-84d5-ba60f656a932 req-464a6f39-1510-45af-9ff8-ad3a62b1c265 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] No waiting events found dispatching network-vif-plugged-30248fe9-6da5-4acd-b60f-7c588745f8f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:52:45 np0005593232 nova_compute[250269]: 2026-01-23 09:52:45.551 250273 WARNING nova.compute.manager [req-b87b0b80-115a-4d2f-84d5-ba60f656a932 req-464a6f39-1510-45af-9ff8-ad3a62b1c265 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Received unexpected event network-vif-plugged-30248fe9-6da5-4acd-b60f-7c588745f8f3 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 04:52:45 np0005593232 nova_compute[250269]: 2026-01-23 09:52:45.654 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:52:45 np0005593232 nova_compute[250269]: 2026-01-23 09:52:45.655 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 04:52:45 np0005593232 nova_compute[250269]: 2026-01-23 09:52:45.655 250273 DEBUG oslo_concurrency.lockutils [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Acquired lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:52:45 np0005593232 nova_compute[250269]: 2026-01-23 09:52:45.656 250273 DEBUG nova.network.neutron [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:52:45 np0005593232 nova_compute[250269]: 2026-01-23 09:52:45.657 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:52:45 np0005593232 nova_compute[250269]: 2026-01-23 09:52:45.658 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:52:45 np0005593232 nova_compute[250269]: 2026-01-23 09:52:45.658 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:52:45 np0005593232 nova_compute[250269]: 2026-01-23 09:52:45.659 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:52:45 np0005593232 nova_compute[250269]: 2026-01-23 09:52:45.659 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:52:45 np0005593232 nova_compute[250269]: 2026-01-23 09:52:45.660 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:52:45 np0005593232 nova_compute[250269]: 2026-01-23 09:52:45.863 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:52:45 np0005593232 nova_compute[250269]: 2026-01-23 09:52:45.863 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:52:45 np0005593232 nova_compute[250269]: 2026-01-23 09:52:45.864 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:52:45 np0005593232 nova_compute[250269]: 2026-01-23 09:52:45.864 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:52:45 np0005593232 nova_compute[250269]: 2026-01-23 09:52:45.864 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:52:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:52:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1530105371' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:52:46 np0005593232 nova_compute[250269]: 2026-01-23 09:52:46.349 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:52:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1838: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 284 MiB data, 835 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 144 op/s
Jan 23 04:52:46 np0005593232 nova_compute[250269]: 2026-01-23 09:52:46.617 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:52:46 np0005593232 nova_compute[250269]: 2026-01-23 09:52:46.617 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:52:46 np0005593232 nova_compute[250269]: 2026-01-23 09:52:46.804 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:52:46 np0005593232 nova_compute[250269]: 2026-01-23 09:52:46.805 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4379MB free_disk=20.851722717285156GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:52:46 np0005593232 nova_compute[250269]: 2026-01-23 09:52:46.806 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:52:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:52:46 np0005593232 nova_compute[250269]: 2026-01-23 09:52:46.806 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:52:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:52:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:52:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:52:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005697614228550158 of space, bias 1.0, pg target 1.7092842685650473 quantized to 32 (current 32)
Jan 23 04:52:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:52:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Jan 23 04:52:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:52:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:52:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:52:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0031256179624976734 of space, bias 1.0, pg target 0.9345597707868043 quantized to 32 (current 32)
Jan 23 04:52:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:52:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 04:52:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:52:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:52:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:52:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 04:52:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:52:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 04:52:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:52:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:52:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:52:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 04:52:46 np0005593232 nova_compute[250269]: 2026-01-23 09:52:46.893 250273 INFO nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Updating resource usage from migration 33811d13-3bff-46fc-9c7b-2a2a36548dcf#033[00m
Jan 23 04:52:46 np0005593232 nova_compute[250269]: 2026-01-23 09:52:46.945 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance bc80c9f1-76f1-4875-895d-9e80312eb293 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:52:46 np0005593232 nova_compute[250269]: 2026-01-23 09:52:46.945 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Migration 33811d13-3bff-46fc-9c7b-2a2a36548dcf is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 23 04:52:46 np0005593232 nova_compute[250269]: 2026-01-23 09:52:46.946 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:52:46 np0005593232 nova_compute[250269]: 2026-01-23 09:52:46.946 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:52:47 np0005593232 nova_compute[250269]: 2026-01-23 09:52:47.051 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:52:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:47.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:52:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2855211444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:52:47 np0005593232 nova_compute[250269]: 2026-01-23 09:52:47.507 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:52:47 np0005593232 nova_compute[250269]: 2026-01-23 09:52:47.513 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:52:47 np0005593232 nova_compute[250269]: 2026-01-23 09:52:47.534 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:52:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:47.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:47 np0005593232 nova_compute[250269]: 2026-01-23 09:52:47.586 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:52:47 np0005593232 nova_compute[250269]: 2026-01-23 09:52:47.587 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:52:47 np0005593232 nova_compute[250269]: 2026-01-23 09:52:47.639 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:47 np0005593232 nova_compute[250269]: 2026-01-23 09:52:47.756 250273 DEBUG nova.network.neutron [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Updating instance_info_cache with network_info: [{"id": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "address": "fa:16:3e:49:23:e7", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7d188d3-bf", "ovs_interfaceid": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:52:47 np0005593232 nova_compute[250269]: 2026-01-23 09:52:47.806 250273 DEBUG oslo_concurrency.lockutils [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Releasing lock "refresh_cache-a3c08e79-4f2b-42f2-bcac-21cbcfbc5247" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:52:47 np0005593232 nova_compute[250269]: 2026-01-23 09:52:47.957 250273 DEBUG nova.virt.libvirt.driver [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Jan 23 04:52:47 np0005593232 nova_compute[250269]: 2026-01-23 09:52:47.958 250273 DEBUG nova.virt.libvirt.volume.remotefs [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Creating file /var/lib/nova/instances/a3c08e79-4f2b-42f2-bcac-21cbcfbc5247/d16fb8016dd448c794882625a675f86a.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Jan 23 04:52:47 np0005593232 nova_compute[250269]: 2026-01-23 09:52:47.958 250273 DEBUG oslo_concurrency.processutils [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/a3c08e79-4f2b-42f2-bcac-21cbcfbc5247/d16fb8016dd448c794882625a675f86a.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:52:48 np0005593232 nova_compute[250269]: 2026-01-23 09:52:48.420 250273 DEBUG oslo_concurrency.processutils [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/a3c08e79-4f2b-42f2-bcac-21cbcfbc5247/d16fb8016dd448c794882625a675f86a.tmp" returned: 1 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:52:48 np0005593232 nova_compute[250269]: 2026-01-23 09:52:48.421 250273 DEBUG oslo_concurrency.processutils [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/a3c08e79-4f2b-42f2-bcac-21cbcfbc5247/d16fb8016dd448c794882625a675f86a.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 23 04:52:48 np0005593232 nova_compute[250269]: 2026-01-23 09:52:48.421 250273 DEBUG nova.virt.libvirt.volume.remotefs [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Creating directory /var/lib/nova/instances/a3c08e79-4f2b-42f2-bcac-21cbcfbc5247 on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Jan 23 04:52:48 np0005593232 nova_compute[250269]: 2026-01-23 09:52:48.422 250273 DEBUG oslo_concurrency.processutils [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/a3c08e79-4f2b-42f2-bcac-21cbcfbc5247 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:52:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1839: 321 pgs: 321 active+clean; 200 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 922 KiB/s wr, 146 op/s
Jan 23 04:52:48 np0005593232 nova_compute[250269]: 2026-01-23 09:52:48.619 250273 DEBUG oslo_concurrency.processutils [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/a3c08e79-4f2b-42f2-bcac-21cbcfbc5247" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:52:48 np0005593232 nova_compute[250269]: 2026-01-23 09:52:48.623 250273 DEBUG nova.virt.libvirt.driver [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 23 04:52:49 np0005593232 nova_compute[250269]: 2026-01-23 09:52:49.386 250273 DEBUG nova.network.neutron [-] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:52:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:49.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:49 np0005593232 nova_compute[250269]: 2026-01-23 09:52:49.490 250273 INFO nova.compute.manager [-] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Took 5.32 seconds to deallocate network for instance.#033[00m
Jan 23 04:52:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:49.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:49 np0005593232 nova_compute[250269]: 2026-01-23 09:52:49.567 250273 DEBUG nova.compute.manager [req-76ab0878-dd33-4ccb-b3b3-9900df70664a req-8e21d70c-7fb6-419b-b639-7259f53f215b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Received event network-vif-deleted-30248fe9-6da5-4acd-b60f-7c588745f8f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:52:49 np0005593232 nova_compute[250269]: 2026-01-23 09:52:49.682 250273 DEBUG oslo_concurrency.lockutils [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:52:49 np0005593232 nova_compute[250269]: 2026-01-23 09:52:49.683 250273 DEBUG oslo_concurrency.lockutils [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:52:49 np0005593232 nova_compute[250269]: 2026-01-23 09:52:49.805 250273 DEBUG oslo_concurrency.processutils [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:52:50 np0005593232 nova_compute[250269]: 2026-01-23 09:52:50.031 250273 INFO nova.network.neutron [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Port af80eab2-c3b9-439d-baae-ee0d90b6cdda from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Jan 23 04:52:50 np0005593232 nova_compute[250269]: 2026-01-23 09:52:50.032 250273 DEBUG nova.network.neutron [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Updating instance_info_cache with network_info: [{"id": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "address": "fa:16:3e:d5:db:7e", "network": {"id": "7808328e-22f9-46df-ac06-f8c3d6ad10c4", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-1070463615-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "390d19f683334995a5268cf9b4d5e464", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30248fe9-6d", "ovs_interfaceid": "30248fe9-6da5-4acd-b60f-7c588745f8f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:52:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:52:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Jan 23 04:52:50 np0005593232 nova_compute[250269]: 2026-01-23 09:52:50.062 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:50 np0005593232 nova_compute[250269]: 2026-01-23 09:52:50.085 250273 DEBUG oslo_concurrency.lockutils [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Releasing lock "refresh_cache-bc80c9f1-76f1-4875-895d-9e80312eb293" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:52:50 np0005593232 nova_compute[250269]: 2026-01-23 09:52:50.156 250273 DEBUG oslo_concurrency.lockutils [None req-8983ea92-c074-49b9-a0ad-362c66f12107 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lock "interface-bc80c9f1-76f1-4875-895d-9e80312eb293-af80eab2-c3b9-439d-baae-ee0d90b6cdda" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 14.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:52:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:52:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3914686311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:52:50 np0005593232 nova_compute[250269]: 2026-01-23 09:52:50.258 250273 DEBUG oslo_concurrency.processutils [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:52:50 np0005593232 nova_compute[250269]: 2026-01-23 09:52:50.264 250273 DEBUG nova.compute.provider_tree [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:52:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Jan 23 04:52:50 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Jan 23 04:52:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1841: 321 pgs: 321 active+clean; 200 MiB data, 784 MiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 160 KiB/s wr, 148 op/s
Jan 23 04:52:50 np0005593232 nova_compute[250269]: 2026-01-23 09:52:50.647 250273 DEBUG nova.scheduler.client.report [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:52:50 np0005593232 nova_compute[250269]: 2026-01-23 09:52:50.746 250273 DEBUG oslo_concurrency.lockutils [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:52:50 np0005593232 nova_compute[250269]: 2026-01-23 09:52:50.828 250273 INFO nova.scheduler.client.report [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Deleted allocations for instance bc80c9f1-76f1-4875-895d-9e80312eb293#033[00m
Jan 23 04:52:50 np0005593232 nova_compute[250269]: 2026-01-23 09:52:50.982 250273 DEBUG oslo_concurrency.lockutils [None req-ac278d0e-0bf1-48fa-bde6-c859714502dc 77cda1e9a0404425a06c34637e696603 390d19f683334995a5268cf9b4d5e464 - - default default] Lock "bc80c9f1-76f1-4875-895d-9e80312eb293" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:52:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:52:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:51.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:52:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:51.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:51 np0005593232 nova_compute[250269]: 2026-01-23 09:52:51.640 250273 INFO nova.virt.libvirt.driver [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Instance shutdown successfully after 3 seconds.#033[00m
Jan 23 04:52:52 np0005593232 kernel: tapd7d188d3-bf (unregistering): left promiscuous mode
Jan 23 04:52:52 np0005593232 NetworkManager[49057]: <info>  [1769161972.3143] device (tapd7d188d3-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:52:52 np0005593232 ovn_controller[151001]: 2026-01-23T09:52:52Z|00242|binding|INFO|Releasing lport d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 from this chassis (sb_readonly=0)
Jan 23 04:52:52 np0005593232 ovn_controller[151001]: 2026-01-23T09:52:52Z|00243|binding|INFO|Setting lport d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 down in Southbound
Jan 23 04:52:52 np0005593232 nova_compute[250269]: 2026-01-23 09:52:52.394 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:52 np0005593232 ovn_controller[151001]: 2026-01-23T09:52:52Z|00244|binding|INFO|Removing iface tapd7d188d3-bf ovn-installed in OVS
Jan 23 04:52:52 np0005593232 nova_compute[250269]: 2026-01-23 09:52:52.396 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:52.402 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:23:e7 10.100.0.11'], port_security=['fa:16:3e:49:23:e7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a3c08e79-4f2b-42f2-bcac-21cbcfbc5247', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d2cdc4c-47a0-475b-8e71-39465d365de3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '86d938c8e2bb41a79012befd500d1088', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7a7b70d2-dc13-4ace-b4e0-b2bcfa748347', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99c61616-3f86-4228-bb78-0dc84e2b2157, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:52:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:52.403 161902 INFO neutron.agent.ovn.metadata.agent [-] Port d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 in datapath 6d2cdc4c-47a0-475b-8e71-39465d365de3 unbound from our chassis#033[00m
Jan 23 04:52:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:52.405 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d2cdc4c-47a0-475b-8e71-39465d365de3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 04:52:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:52.406 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[26a3a6dd-866e-4f37-aa84-06a3cf262d10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:52.406 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3 namespace which is not needed anymore#033[00m
Jan 23 04:52:52 np0005593232 nova_compute[250269]: 2026-01-23 09:52:52.415 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:52 np0005593232 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Jan 23 04:52:52 np0005593232 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000004d.scope: Consumed 15.374s CPU time.
Jan 23 04:52:52 np0005593232 systemd-machined[215836]: Machine qemu-29-instance-0000004d terminated.
Jan 23 04:52:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1842: 321 pgs: 321 active+clean; 200 MiB data, 771 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 131 KiB/s wr, 121 op/s
Jan 23 04:52:52 np0005593232 nova_compute[250269]: 2026-01-23 09:52:52.642 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:52 np0005593232 nova_compute[250269]: 2026-01-23 09:52:52.674 250273 INFO nova.virt.libvirt.driver [-] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Instance destroyed successfully.#033[00m
Jan 23 04:52:52 np0005593232 nova_compute[250269]: 2026-01-23 09:52:52.675 250273 DEBUG nova.virt.libvirt.vif [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:52:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-231426966',display_name='tempest-ServerDiskConfigTestJSON-server-231426966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-231426966',id=77,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:52:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='86d938c8e2bb41a79012befd500d1088',ramdisk_id='',reservation_id='r-w5bvxhsu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-211417238',owner_user_name='tempest-ServerDiskConfigTestJSON-211417238-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:52:42Z,user_data=None,user_id='0cfac2191989448ead77e75ca3910ac4',uuid=a3c08e79-4f2b-42f2-bcac-21cbcfbc5247,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "address": "fa:16:3e:49:23:e7", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "vif_mac": "fa:16:3e:49:23:e7"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7d188d3-bf", "ovs_interfaceid": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:52:52 np0005593232 nova_compute[250269]: 2026-01-23 09:52:52.675 250273 DEBUG nova.network.os_vif_util [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Converting VIF {"id": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "address": "fa:16:3e:49:23:e7", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "vif_mac": "fa:16:3e:49:23:e7"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7d188d3-bf", "ovs_interfaceid": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:52:52 np0005593232 nova_compute[250269]: 2026-01-23 09:52:52.676 250273 DEBUG nova.network.os_vif_util [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:23:e7,bridge_name='br-int',has_traffic_filtering=True,id=d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728,network=Network(6d2cdc4c-47a0-475b-8e71-39465d365de3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7d188d3-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:52:52 np0005593232 nova_compute[250269]: 2026-01-23 09:52:52.676 250273 DEBUG os_vif [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:23:e7,bridge_name='br-int',has_traffic_filtering=True,id=d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728,network=Network(6d2cdc4c-47a0-475b-8e71-39465d365de3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7d188d3-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:52:52 np0005593232 nova_compute[250269]: 2026-01-23 09:52:52.678 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:52 np0005593232 nova_compute[250269]: 2026-01-23 09:52:52.678 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7d188d3-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:52:52 np0005593232 nova_compute[250269]: 2026-01-23 09:52:52.680 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:52 np0005593232 nova_compute[250269]: 2026-01-23 09:52:52.681 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:52 np0005593232 nova_compute[250269]: 2026-01-23 09:52:52.684 250273 INFO os_vif [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:23:e7,bridge_name='br-int',has_traffic_filtering=True,id=d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728,network=Network(6d2cdc4c-47a0-475b-8e71-39465d365de3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7d188d3-bf')#033[00m
Jan 23 04:52:52 np0005593232 nova_compute[250269]: 2026-01-23 09:52:52.688 250273 DEBUG nova.virt.libvirt.driver [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:52:52 np0005593232 nova_compute[250269]: 2026-01-23 09:52:52.688 250273 DEBUG nova.virt.libvirt.driver [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:52:52 np0005593232 neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3[298938]: [NOTICE]   (298942) : haproxy version is 2.8.14-c23fe91
Jan 23 04:52:52 np0005593232 neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3[298938]: [NOTICE]   (298942) : path to executable is /usr/sbin/haproxy
Jan 23 04:52:52 np0005593232 neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3[298938]: [WARNING]  (298942) : Exiting Master process...
Jan 23 04:52:52 np0005593232 neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3[298938]: [ALERT]    (298942) : Current worker (298944) exited with code 143 (Terminated)
Jan 23 04:52:52 np0005593232 neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3[298938]: [WARNING]  (298942) : All workers exited. Exiting... (0)
Jan 23 04:52:52 np0005593232 systemd[1]: libpod-388a2214c63efc4faef2e4e242c62a53b00e932d5b1157f6338c7bea379b2cfd.scope: Deactivated successfully.
Jan 23 04:52:52 np0005593232 podman[299280]: 2026-01-23 09:52:52.801824962 +0000 UTC m=+0.319460521 container died 388a2214c63efc4faef2e4e242c62a53b00e932d5b1157f6338c7bea379b2cfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 23 04:52:52 np0005593232 nova_compute[250269]: 2026-01-23 09:52:52.863 250273 DEBUG nova.compute.manager [req-53c8aa00-6ad4-4959-89e2-9a099a8e368d req-d150c9ae-7788-4e08-bd35-5bcbe7ee2398 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Received event network-vif-unplugged-d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:52:52 np0005593232 nova_compute[250269]: 2026-01-23 09:52:52.864 250273 DEBUG oslo_concurrency.lockutils [req-53c8aa00-6ad4-4959-89e2-9a099a8e368d req-d150c9ae-7788-4e08-bd35-5bcbe7ee2398 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:52:52 np0005593232 nova_compute[250269]: 2026-01-23 09:52:52.864 250273 DEBUG oslo_concurrency.lockutils [req-53c8aa00-6ad4-4959-89e2-9a099a8e368d req-d150c9ae-7788-4e08-bd35-5bcbe7ee2398 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:52:52 np0005593232 nova_compute[250269]: 2026-01-23 09:52:52.864 250273 DEBUG oslo_concurrency.lockutils [req-53c8aa00-6ad4-4959-89e2-9a099a8e368d req-d150c9ae-7788-4e08-bd35-5bcbe7ee2398 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:52:52 np0005593232 nova_compute[250269]: 2026-01-23 09:52:52.864 250273 DEBUG nova.compute.manager [req-53c8aa00-6ad4-4959-89e2-9a099a8e368d req-d150c9ae-7788-4e08-bd35-5bcbe7ee2398 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] No waiting events found dispatching network-vif-unplugged-d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:52:52 np0005593232 nova_compute[250269]: 2026-01-23 09:52:52.865 250273 WARNING nova.compute.manager [req-53c8aa00-6ad4-4959-89e2-9a099a8e368d req-d150c9ae-7788-4e08-bd35-5bcbe7ee2398 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Received unexpected event network-vif-unplugged-d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 for instance with vm_state active and task_state resize_migrating.#033[00m
Jan 23 04:52:52 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-388a2214c63efc4faef2e4e242c62a53b00e932d5b1157f6338c7bea379b2cfd-userdata-shm.mount: Deactivated successfully.
Jan 23 04:52:52 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6d2d944c520d4e3d5e47dc5994f3238ec70834c547033f5ba9fdba26448e3ca6-merged.mount: Deactivated successfully.
Jan 23 04:52:53 np0005593232 nova_compute[250269]: 2026-01-23 09:52:53.016 250273 DEBUG neutronclient.v2_0.client [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Jan 23 04:52:53 np0005593232 nova_compute[250269]: 2026-01-23 09:52:53.351 250273 DEBUG oslo_concurrency.lockutils [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquiring lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:52:53 np0005593232 nova_compute[250269]: 2026-01-23 09:52:53.351 250273 DEBUG oslo_concurrency.lockutils [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:52:53 np0005593232 nova_compute[250269]: 2026-01-23 09:52:53.351 250273 DEBUG oslo_concurrency.lockutils [None req-b638ed8e-9e60-4a7f-9c7c-bc8fb72a8e6e 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:52:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:53.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:53.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:53 np0005593232 podman[299280]: 2026-01-23 09:52:53.842356528 +0000 UTC m=+1.359992087 container cleanup 388a2214c63efc4faef2e4e242c62a53b00e932d5b1157f6338c7bea379b2cfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 04:52:53 np0005593232 systemd[1]: libpod-conmon-388a2214c63efc4faef2e4e242c62a53b00e932d5b1157f6338c7bea379b2cfd.scope: Deactivated successfully.
Jan 23 04:52:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1843: 321 pgs: 321 active+clean; 200 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 36 KiB/s wr, 77 op/s
Jan 23 04:52:54 np0005593232 podman[299324]: 2026-01-23 09:52:54.79289609 +0000 UTC m=+0.930269109 container remove 388a2214c63efc4faef2e4e242c62a53b00e932d5b1157f6338c7bea379b2cfd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 04:52:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:54.798 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[281b932b-e4f7-492e-b969-23d7ddd43ccd]: (4, ('Fri Jan 23 09:52:52 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3 (388a2214c63efc4faef2e4e242c62a53b00e932d5b1157f6338c7bea379b2cfd)\n388a2214c63efc4faef2e4e242c62a53b00e932d5b1157f6338c7bea379b2cfd\nFri Jan 23 09:52:53 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3 (388a2214c63efc4faef2e4e242c62a53b00e932d5b1157f6338c7bea379b2cfd)\n388a2214c63efc4faef2e4e242c62a53b00e932d5b1157f6338c7bea379b2cfd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:54.801 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[615e7ecd-12c5-4465-988c-ca84ec3d4155]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:54.801 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d2cdc4c-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:52:54 np0005593232 nova_compute[250269]: 2026-01-23 09:52:54.844 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:54 np0005593232 kernel: tap6d2cdc4c-40: left promiscuous mode
Jan 23 04:52:54 np0005593232 nova_compute[250269]: 2026-01-23 09:52:54.860 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:54.863 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c9041ea7-6f79-4114-936d-3c894cad34b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:54.880 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[bc9eca3f-c095-4a17-b57a-646fea3f9085]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:54.881 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a60db8d0-8e81-49e9-9d23-2ca8d6df45d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:54.896 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[16860bd5-ca73-434a-b1c2-229f6a8f7059]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 585387, 'reachable_time': 20895, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299365, 'error': None, 'target': 'ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:54.898 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 04:52:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:52:54.898 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[d9ac0707-4854-4fda-92b4-316383567f16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:52:54 np0005593232 systemd[1]: run-netns-ovnmeta\x2d6d2cdc4c\x2d47a0\x2d475b\x2d8e71\x2d39465d365de3.mount: Deactivated successfully.
Jan 23 04:52:54 np0005593232 podman[299325]: 2026-01-23 09:52:54.901042949 +0000 UTC m=+1.020911851 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 04:52:55 np0005593232 nova_compute[250269]: 2026-01-23 09:52:55.064 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:52:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:55.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:55 np0005593232 nova_compute[250269]: 2026-01-23 09:52:55.488 250273 DEBUG nova.compute.manager [req-93d7a026-3cff-484c-af18-70deb8b07989 req-389fd91c-09b1-47c9-a2b6-ea2a344adc62 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Received event network-vif-plugged-d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:52:55 np0005593232 nova_compute[250269]: 2026-01-23 09:52:55.488 250273 DEBUG oslo_concurrency.lockutils [req-93d7a026-3cff-484c-af18-70deb8b07989 req-389fd91c-09b1-47c9-a2b6-ea2a344adc62 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:52:55 np0005593232 nova_compute[250269]: 2026-01-23 09:52:55.489 250273 DEBUG oslo_concurrency.lockutils [req-93d7a026-3cff-484c-af18-70deb8b07989 req-389fd91c-09b1-47c9-a2b6-ea2a344adc62 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:52:55 np0005593232 nova_compute[250269]: 2026-01-23 09:52:55.489 250273 DEBUG oslo_concurrency.lockutils [req-93d7a026-3cff-484c-af18-70deb8b07989 req-389fd91c-09b1-47c9-a2b6-ea2a344adc62 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:52:55 np0005593232 nova_compute[250269]: 2026-01-23 09:52:55.489 250273 DEBUG nova.compute.manager [req-93d7a026-3cff-484c-af18-70deb8b07989 req-389fd91c-09b1-47c9-a2b6-ea2a344adc62 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] No waiting events found dispatching network-vif-plugged-d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:52:55 np0005593232 nova_compute[250269]: 2026-01-23 09:52:55.490 250273 WARNING nova.compute.manager [req-93d7a026-3cff-484c-af18-70deb8b07989 req-389fd91c-09b1-47c9-a2b6-ea2a344adc62 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Received unexpected event network-vif-plugged-d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 23 04:52:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:52:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:55.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:52:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1844: 321 pgs: 321 active+clean; 204 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 345 KiB/s wr, 68 op/s
Jan 23 04:52:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:57.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:57.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:57 np0005593232 nova_compute[250269]: 2026-01-23 09:52:57.604 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769161962.6036806, bc80c9f1-76f1-4875-895d-9e80312eb293 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:52:57 np0005593232 nova_compute[250269]: 2026-01-23 09:52:57.605 250273 INFO nova.compute.manager [-] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:52:57 np0005593232 nova_compute[250269]: 2026-01-23 09:52:57.628 250273 DEBUG nova.compute.manager [None req-7c323f10-5ac9-49f9-8c49-341029d56fa6 - - - - - -] [instance: bc80c9f1-76f1-4875-895d-9e80312eb293] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:52:57 np0005593232 nova_compute[250269]: 2026-01-23 09:52:57.681 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:52:57 np0005593232 nova_compute[250269]: 2026-01-23 09:52:57.735 250273 DEBUG nova.compute.manager [req-12f00ac8-dee6-4038-96ba-af1b6ddee68a req-950e0e59-085b-42d6-b3ba-0004af9f16ed 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Received event network-changed-d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:52:57 np0005593232 nova_compute[250269]: 2026-01-23 09:52:57.735 250273 DEBUG nova.compute.manager [req-12f00ac8-dee6-4038-96ba-af1b6ddee68a req-950e0e59-085b-42d6-b3ba-0004af9f16ed 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Refreshing instance network info cache due to event network-changed-d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:52:57 np0005593232 nova_compute[250269]: 2026-01-23 09:52:57.736 250273 DEBUG oslo_concurrency.lockutils [req-12f00ac8-dee6-4038-96ba-af1b6ddee68a req-950e0e59-085b-42d6-b3ba-0004af9f16ed 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-a3c08e79-4f2b-42f2-bcac-21cbcfbc5247" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:52:57 np0005593232 nova_compute[250269]: 2026-01-23 09:52:57.736 250273 DEBUG oslo_concurrency.lockutils [req-12f00ac8-dee6-4038-96ba-af1b6ddee68a req-950e0e59-085b-42d6-b3ba-0004af9f16ed 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-a3c08e79-4f2b-42f2-bcac-21cbcfbc5247" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:52:57 np0005593232 nova_compute[250269]: 2026-01-23 09:52:57.736 250273 DEBUG nova.network.neutron [req-12f00ac8-dee6-4038-96ba-af1b6ddee68a req-950e0e59-085b-42d6-b3ba-0004af9f16ed 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Refreshing network info cache for port d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:52:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1845: 321 pgs: 321 active+clean; 180 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 1.7 MiB/s wr, 74 op/s
Jan 23 04:52:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:52:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:52:59.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:52:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:52:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:52:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:52:59.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:52:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Jan 23 04:52:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Jan 23 04:52:59 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Jan 23 04:53:00 np0005593232 nova_compute[250269]: 2026-01-23 09:53:00.065 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:00 np0005593232 podman[299394]: 2026-01-23 09:53:00.101717996 +0000 UTC m=+0.053311904 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:53:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:53:00 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:00.223 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:53:00 np0005593232 nova_compute[250269]: 2026-01-23 09:53:00.223 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:00 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:00.224 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:53:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1847: 321 pgs: 321 active+clean; 180 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 1.7 MiB/s wr, 74 op/s
Jan 23 04:53:01 np0005593232 nova_compute[250269]: 2026-01-23 09:53:01.187 250273 DEBUG nova.network.neutron [req-12f00ac8-dee6-4038-96ba-af1b6ddee68a req-950e0e59-085b-42d6-b3ba-0004af9f16ed 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Updated VIF entry in instance network info cache for port d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 04:53:01 np0005593232 nova_compute[250269]: 2026-01-23 09:53:01.187 250273 DEBUG nova.network.neutron [req-12f00ac8-dee6-4038-96ba-af1b6ddee68a req-950e0e59-085b-42d6-b3ba-0004af9f16ed 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Updating instance_info_cache with network_info: [{"id": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "address": "fa:16:3e:49:23:e7", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7d188d3-bf", "ovs_interfaceid": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 04:53:01 np0005593232 nova_compute[250269]: 2026-01-23 09:53:01.214 250273 DEBUG oslo_concurrency.lockutils [req-12f00ac8-dee6-4038-96ba-af1b6ddee68a req-950e0e59-085b-42d6-b3ba-0004af9f16ed 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-a3c08e79-4f2b-42f2-bcac-21cbcfbc5247" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 04:53:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:01.225 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 04:53:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:01.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:01.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:01 np0005593232 nova_compute[250269]: 2026-01-23 09:53:01.858 250273 DEBUG nova.compute.manager [req-5fc97b3c-954d-4d79-9f71-7a065ede6b91 req-51d72b32-fd44-44e9-852c-b2b66f7a4518 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Received event network-vif-plugged-d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 04:53:01 np0005593232 nova_compute[250269]: 2026-01-23 09:53:01.859 250273 DEBUG oslo_concurrency.lockutils [req-5fc97b3c-954d-4d79-9f71-7a065ede6b91 req-51d72b32-fd44-44e9-852c-b2b66f7a4518 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:53:01 np0005593232 nova_compute[250269]: 2026-01-23 09:53:01.859 250273 DEBUG oslo_concurrency.lockutils [req-5fc97b3c-954d-4d79-9f71-7a065ede6b91 req-51d72b32-fd44-44e9-852c-b2b66f7a4518 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:53:01 np0005593232 nova_compute[250269]: 2026-01-23 09:53:01.859 250273 DEBUG oslo_concurrency.lockutils [req-5fc97b3c-954d-4d79-9f71-7a065ede6b91 req-51d72b32-fd44-44e9-852c-b2b66f7a4518 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:53:01 np0005593232 nova_compute[250269]: 2026-01-23 09:53:01.860 250273 DEBUG nova.compute.manager [req-5fc97b3c-954d-4d79-9f71-7a065ede6b91 req-51d72b32-fd44-44e9-852c-b2b66f7a4518 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] No waiting events found dispatching network-vif-plugged-d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 04:53:01 np0005593232 nova_compute[250269]: 2026-01-23 09:53:01.860 250273 WARNING nova.compute.manager [req-5fc97b3c-954d-4d79-9f71-7a065ede6b91 req-51d72b32-fd44-44e9-852c-b2b66f7a4518 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Received unexpected event network-vif-plugged-d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 for instance with vm_state active and task_state resize_finish.
Jan 23 04:53:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:53:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:53:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:53:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:53:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1848: 321 pgs: 321 active+clean; 190 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 61 KiB/s rd, 2.9 MiB/s wr, 98 op/s
Jan 23 04:53:02 np0005593232 nova_compute[250269]: 2026-01-23 09:53:02.681 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:53:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:53:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:53:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:53:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:53:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:53:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:53:03 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 98ea1675-5285-43da-84f6-d839f97f7f20 does not exist
Jan 23 04:53:03 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev e727df05-7280-49b3-b9c0-3d9d05d834f0 does not exist
Jan 23 04:53:03 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 6a186c53-bb1b-4389-b1a4-a072f783a304 does not exist
Jan 23 04:53:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:53:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:03.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:53:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:53:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:53:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:53:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:53:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:53:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:53:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:03.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:53:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:53:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:53:03 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Jan 23 04:53:03 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:53:03.830408) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:53:03 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Jan 23 04:53:03 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161983830565, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 542, "num_deletes": 252, "total_data_size": 567495, "memory_usage": 578728, "flush_reason": "Manual Compaction"}
Jan 23 04:53:03 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Jan 23 04:53:03 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161983988584, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 561940, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40743, "largest_seqno": 41284, "table_properties": {"data_size": 558918, "index_size": 994, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7458, "raw_average_key_size": 19, "raw_value_size": 552672, "raw_average_value_size": 1458, "num_data_blocks": 43, "num_entries": 379, "num_filter_entries": 379, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161956, "oldest_key_time": 1769161956, "file_creation_time": 1769161983, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:53:03 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 158186 microseconds, and 3004 cpu microseconds.
Jan 23 04:53:03 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:53:04 np0005593232 podman[299710]: 2026-01-23 09:53:03.996953065 +0000 UTC m=+0.021333765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:53:04 np0005593232 nova_compute[250269]: 2026-01-23 09:53:04.113 250273 DEBUG nova.compute.manager [req-a3cc047b-8146-473e-a28a-eecdd47b5dc4 req-ef7b250c-7ec8-4bef-a406-2c319943f2d9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Received event network-vif-plugged-d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 04:53:04 np0005593232 nova_compute[250269]: 2026-01-23 09:53:04.114 250273 DEBUG oslo_concurrency.lockutils [req-a3cc047b-8146-473e-a28a-eecdd47b5dc4 req-ef7b250c-7ec8-4bef-a406-2c319943f2d9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:53:04 np0005593232 nova_compute[250269]: 2026-01-23 09:53:04.115 250273 DEBUG oslo_concurrency.lockutils [req-a3cc047b-8146-473e-a28a-eecdd47b5dc4 req-ef7b250c-7ec8-4bef-a406-2c319943f2d9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:53:04 np0005593232 nova_compute[250269]: 2026-01-23 09:53:04.116 250273 DEBUG oslo_concurrency.lockutils [req-a3cc047b-8146-473e-a28a-eecdd47b5dc4 req-ef7b250c-7ec8-4bef-a406-2c319943f2d9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:53:04 np0005593232 nova_compute[250269]: 2026-01-23 09:53:04.116 250273 DEBUG nova.compute.manager [req-a3cc047b-8146-473e-a28a-eecdd47b5dc4 req-ef7b250c-7ec8-4bef-a406-2c319943f2d9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] No waiting events found dispatching network-vif-plugged-d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 04:53:04 np0005593232 nova_compute[250269]: 2026-01-23 09:53:04.117 250273 WARNING nova.compute.manager [req-a3cc047b-8146-473e-a28a-eecdd47b5dc4 req-ef7b250c-7ec8-4bef-a406-2c319943f2d9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Received unexpected event network-vif-plugged-d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 for instance with vm_state resized and task_state None.
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:53:03.988630) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 561940 bytes OK
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:53:03.988655) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:53:04.129292) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:53:04.129438) EVENT_LOG_v1 {"time_micros": 1769161984129335, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:53:04.129465) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 564409, prev total WAL file size 564409, number of live WAL files 2.
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:53:04.130402) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(548KB)], [89(10MB)]
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161984130518, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 11614137, "oldest_snapshot_seqno": -1}
Jan 23 04:53:04 np0005593232 podman[299710]: 2026-01-23 09:53:04.254709788 +0000 UTC m=+0.279090468 container create 852adfa7221d1ba4c2d26af07e96e27816a63c49ef9d3feca862b2ca890943ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6545 keys, 9739184 bytes, temperature: kUnknown
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161984472820, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 9739184, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9695760, "index_size": 25968, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 169739, "raw_average_key_size": 25, "raw_value_size": 9578652, "raw_average_value_size": 1463, "num_data_blocks": 1028, "num_entries": 6545, "num_filter_entries": 6545, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769161984, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:53:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1849: 321 pgs: 321 active+clean; 213 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 79 KiB/s rd, 4.3 MiB/s wr, 119 op/s
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:53:04.473178) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 9739184 bytes
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:53:04.562617) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 33.9 rd, 28.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.5 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(38.0) write-amplify(17.3) OK, records in: 7065, records dropped: 520 output_compression: NoCompression
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:53:04.562649) EVENT_LOG_v1 {"time_micros": 1769161984562636, "job": 52, "event": "compaction_finished", "compaction_time_micros": 342443, "compaction_time_cpu_micros": 38479, "output_level": 6, "num_output_files": 1, "total_output_size": 9739184, "num_input_records": 7065, "num_output_records": 6545, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161984562981, "job": 52, "event": "table_file_deletion", "file_number": 91}
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769161984564588, "job": 52, "event": "table_file_deletion", "file_number": 89}
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:53:04.130274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:53:04.564644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:53:04.564649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:53:04.564652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:53:04.564654) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:53:04.564656) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:53:04 np0005593232 systemd[1]: Started libpod-conmon-852adfa7221d1ba4c2d26af07e96e27816a63c49ef9d3feca862b2ca890943ab.scope.
Jan 23 04:53:04 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:53:04 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:53:04 np0005593232 podman[299710]: 2026-01-23 09:53:04.692617414 +0000 UTC m=+0.716998114 container init 852adfa7221d1ba4c2d26af07e96e27816a63c49ef9d3feca862b2ca890943ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_bhabha, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 04:53:04 np0005593232 podman[299710]: 2026-01-23 09:53:04.700194005 +0000 UTC m=+0.724574685 container start 852adfa7221d1ba4c2d26af07e96e27816a63c49ef9d3feca862b2ca890943ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_bhabha, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 04:53:04 np0005593232 jovial_bhabha[299726]: 167 167
Jan 23 04:53:04 np0005593232 systemd[1]: libpod-852adfa7221d1ba4c2d26af07e96e27816a63c49ef9d3feca862b2ca890943ab.scope: Deactivated successfully.
Jan 23 04:53:04 np0005593232 podman[299710]: 2026-01-23 09:53:04.706944133 +0000 UTC m=+0.731324813 container attach 852adfa7221d1ba4c2d26af07e96e27816a63c49ef9d3feca862b2ca890943ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_bhabha, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:53:04 np0005593232 podman[299710]: 2026-01-23 09:53:04.707661083 +0000 UTC m=+0.732041763 container died 852adfa7221d1ba4c2d26af07e96e27816a63c49ef9d3feca862b2ca890943ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_bhabha, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 04:53:04 np0005593232 nova_compute[250269]: 2026-01-23 09:53:04.733 250273 DEBUG oslo_concurrency.lockutils [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquiring lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:53:04 np0005593232 nova_compute[250269]: 2026-01-23 09:53:04.734 250273 DEBUG oslo_concurrency.lockutils [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:53:04 np0005593232 nova_compute[250269]: 2026-01-23 09:53:04.734 250273 DEBUG nova.compute.manager [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Going to confirm migration 11 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Jan 23 04:53:04 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6c0c769cdbf11d9674a14b7c0e9bea9d6b25844d105b695146b76b3803d07001-merged.mount: Deactivated successfully.
Jan 23 04:53:04 np0005593232 podman[299710]: 2026-01-23 09:53:04.760545554 +0000 UTC m=+0.784926234 container remove 852adfa7221d1ba4c2d26af07e96e27816a63c49ef9d3feca862b2ca890943ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:53:04 np0005593232 systemd[1]: libpod-conmon-852adfa7221d1ba4c2d26af07e96e27816a63c49ef9d3feca862b2ca890943ab.scope: Deactivated successfully.
Jan 23 04:53:04 np0005593232 podman[299749]: 2026-01-23 09:53:04.926573965 +0000 UTC m=+0.045832977 container create c6b067548afa08762c73aa744122641b0a89cfdeaab32cae4e03b4079ae5efc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 04:53:04 np0005593232 systemd[1]: Started libpod-conmon-c6b067548afa08762c73aa744122641b0a89cfdeaab32cae4e03b4079ae5efc1.scope.
Jan 23 04:53:04 np0005593232 podman[299749]: 2026-01-23 09:53:04.905842968 +0000 UTC m=+0.025102000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:53:05 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:53:05 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5929d5eb415816f97658447825741d7e10986122d7b9cc92e78849d893d59bcc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:53:05 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5929d5eb415816f97658447825741d7e10986122d7b9cc92e78849d893d59bcc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:53:05 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5929d5eb415816f97658447825741d7e10986122d7b9cc92e78849d893d59bcc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:53:05 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5929d5eb415816f97658447825741d7e10986122d7b9cc92e78849d893d59bcc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:53:05 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5929d5eb415816f97658447825741d7e10986122d7b9cc92e78849d893d59bcc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:53:05 np0005593232 podman[299749]: 2026-01-23 09:53:05.031443303 +0000 UTC m=+0.150702345 container init c6b067548afa08762c73aa744122641b0a89cfdeaab32cae4e03b4079ae5efc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_fermat, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 04:53:05 np0005593232 podman[299749]: 2026-01-23 09:53:05.040432083 +0000 UTC m=+0.159691095 container start c6b067548afa08762c73aa744122641b0a89cfdeaab32cae4e03b4079ae5efc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:53:05 np0005593232 podman[299749]: 2026-01-23 09:53:05.044369793 +0000 UTC m=+0.163628805 container attach c6b067548afa08762c73aa744122641b0a89cfdeaab32cae4e03b4079ae5efc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_fermat, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 04:53:05 np0005593232 nova_compute[250269]: 2026-01-23 09:53:05.067 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:53:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:53:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:05.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:05.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:05 np0005593232 vigilant_fermat[299765]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:53:05 np0005593232 vigilant_fermat[299765]: --> relative data size: 1.0
Jan 23 04:53:05 np0005593232 vigilant_fermat[299765]: --> All data devices are unavailable
Jan 23 04:53:05 np0005593232 systemd[1]: libpod-c6b067548afa08762c73aa744122641b0a89cfdeaab32cae4e03b4079ae5efc1.scope: Deactivated successfully.
Jan 23 04:53:05 np0005593232 podman[299781]: 2026-01-23 09:53:05.928864556 +0000 UTC m=+0.029008338 container died c6b067548afa08762c73aa744122641b0a89cfdeaab32cae4e03b4079ae5efc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_fermat, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 23 04:53:05 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5929d5eb415816f97658447825741d7e10986122d7b9cc92e78849d893d59bcc-merged.mount: Deactivated successfully.
Jan 23 04:53:05 np0005593232 podman[299781]: 2026-01-23 09:53:05.987773685 +0000 UTC m=+0.087917457 container remove c6b067548afa08762c73aa744122641b0a89cfdeaab32cae4e03b4079ae5efc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_fermat, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 04:53:05 np0005593232 systemd[1]: libpod-conmon-c6b067548afa08762c73aa744122641b0a89cfdeaab32cae4e03b4079ae5efc1.scope: Deactivated successfully.
Jan 23 04:53:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1850: 321 pgs: 321 active+clean; 213 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 4.0 MiB/s wr, 141 op/s
Jan 23 04:53:06 np0005593232 podman[299939]: 2026-01-23 09:53:06.639912273 +0000 UTC m=+0.042338659 container create 3065a653fb47d90b8fb7a3a8eed02b44bacb6da12f22be155e150d0254baf2b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hofstadter, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 04:53:06 np0005593232 systemd[1]: Started libpod-conmon-3065a653fb47d90b8fb7a3a8eed02b44bacb6da12f22be155e150d0254baf2b2.scope.
Jan 23 04:53:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:53:06 np0005593232 podman[299939]: 2026-01-23 09:53:06.620494523 +0000 UTC m=+0.022920929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:53:06 np0005593232 podman[299939]: 2026-01-23 09:53:06.717270446 +0000 UTC m=+0.119696842 container init 3065a653fb47d90b8fb7a3a8eed02b44bacb6da12f22be155e150d0254baf2b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:53:06 np0005593232 podman[299939]: 2026-01-23 09:53:06.722757869 +0000 UTC m=+0.125184255 container start 3065a653fb47d90b8fb7a3a8eed02b44bacb6da12f22be155e150d0254baf2b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 04:53:06 np0005593232 podman[299939]: 2026-01-23 09:53:06.726172554 +0000 UTC m=+0.128598940 container attach 3065a653fb47d90b8fb7a3a8eed02b44bacb6da12f22be155e150d0254baf2b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hofstadter, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Jan 23 04:53:06 np0005593232 nifty_hofstadter[299955]: 167 167
Jan 23 04:53:06 np0005593232 systemd[1]: libpod-3065a653fb47d90b8fb7a3a8eed02b44bacb6da12f22be155e150d0254baf2b2.scope: Deactivated successfully.
Jan 23 04:53:06 np0005593232 podman[299939]: 2026-01-23 09:53:06.728286423 +0000 UTC m=+0.130712809 container died 3065a653fb47d90b8fb7a3a8eed02b44bacb6da12f22be155e150d0254baf2b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hofstadter, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:53:06 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7624c9957ab40bfb1850f4aa10e2803fd567e6c995afb9f84e4553a7d80a6be6-merged.mount: Deactivated successfully.
Jan 23 04:53:06 np0005593232 podman[299939]: 2026-01-23 09:53:06.766770284 +0000 UTC m=+0.169196680 container remove 3065a653fb47d90b8fb7a3a8eed02b44bacb6da12f22be155e150d0254baf2b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:53:06 np0005593232 systemd[1]: libpod-conmon-3065a653fb47d90b8fb7a3a8eed02b44bacb6da12f22be155e150d0254baf2b2.scope: Deactivated successfully.
Jan 23 04:53:06 np0005593232 podman[299978]: 2026-01-23 09:53:06.916382617 +0000 UTC m=+0.038224085 container create 0b682ff13d3e4f17cd6be288113e0771e1cdd255bb0b7375ea2e82c436fa5cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goodall, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 23 04:53:06 np0005593232 systemd[1]: Started libpod-conmon-0b682ff13d3e4f17cd6be288113e0771e1cdd255bb0b7375ea2e82c436fa5cef.scope.
Jan 23 04:53:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:53:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eb86c889b22288742bbe42c1f5f9666829eb8db9822a90dcd35638d48bdba03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:53:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eb86c889b22288742bbe42c1f5f9666829eb8db9822a90dcd35638d48bdba03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:53:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eb86c889b22288742bbe42c1f5f9666829eb8db9822a90dcd35638d48bdba03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:53:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1eb86c889b22288742bbe42c1f5f9666829eb8db9822a90dcd35638d48bdba03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:53:06 np0005593232 podman[299978]: 2026-01-23 09:53:06.901186214 +0000 UTC m=+0.023027712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:53:07 np0005593232 podman[299978]: 2026-01-23 09:53:07.001596559 +0000 UTC m=+0.123438047 container init 0b682ff13d3e4f17cd6be288113e0771e1cdd255bb0b7375ea2e82c436fa5cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 04:53:07 np0005593232 podman[299978]: 2026-01-23 09:53:07.010320141 +0000 UTC m=+0.132161609 container start 0b682ff13d3e4f17cd6be288113e0771e1cdd255bb0b7375ea2e82c436fa5cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goodall, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:53:07 np0005593232 podman[299978]: 2026-01-23 09:53:07.013611473 +0000 UTC m=+0.135452961 container attach 0b682ff13d3e4f17cd6be288113e0771e1cdd255bb0b7375ea2e82c436fa5cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:53:07 np0005593232 nova_compute[250269]: 2026-01-23 09:53:07.053 250273 DEBUG neutronclient.v2_0.client [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 23 04:53:07 np0005593232 nova_compute[250269]: 2026-01-23 09:53:07.056 250273 DEBUG oslo_concurrency.lockutils [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquiring lock "refresh_cache-a3c08e79-4f2b-42f2-bcac-21cbcfbc5247" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 04:53:07 np0005593232 nova_compute[250269]: 2026-01-23 09:53:07.056 250273 DEBUG oslo_concurrency.lockutils [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquired lock "refresh_cache-a3c08e79-4f2b-42f2-bcac-21cbcfbc5247" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 04:53:07 np0005593232 nova_compute[250269]: 2026-01-23 09:53:07.056 250273 DEBUG nova.network.neutron [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 04:53:07 np0005593232 nova_compute[250269]: 2026-01-23 09:53:07.057 250273 DEBUG nova.objects.instance [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lazy-loading 'info_cache' on Instance uuid a3c08e79-4f2b-42f2-bcac-21cbcfbc5247 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 04:53:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:53:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:07.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:53:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:07.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:53:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:53:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:53:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:53:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:53:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:53:07 np0005593232 nova_compute[250269]: 2026-01-23 09:53:07.673 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769161972.6726031, a3c08e79-4f2b-42f2-bcac-21cbcfbc5247 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 04:53:07 np0005593232 nova_compute[250269]: 2026-01-23 09:53:07.674 250273 INFO nova.compute.manager [-] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] VM Stopped (Lifecycle Event)
Jan 23 04:53:07 np0005593232 nova_compute[250269]: 2026-01-23 09:53:07.683 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]: {
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:    "0": [
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:        {
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:            "devices": [
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:                "/dev/loop3"
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:            ],
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:            "lv_name": "ceph_lv0",
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:            "lv_size": "7511998464",
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:            "name": "ceph_lv0",
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:            "tags": {
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:                "ceph.cluster_name": "ceph",
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:                "ceph.crush_device_class": "",
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:                "ceph.encrypted": "0",
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:                "ceph.osd_id": "0",
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:                "ceph.type": "block",
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:                "ceph.vdo": "0"
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:            },
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:            "type": "block",
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:            "vg_name": "ceph_vg0"
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:        }
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]:    ]
Jan 23 04:53:07 np0005593232 zealous_goodall[299995]: }
Jan 23 04:53:07 np0005593232 systemd[1]: libpod-0b682ff13d3e4f17cd6be288113e0771e1cdd255bb0b7375ea2e82c436fa5cef.scope: Deactivated successfully.
Jan 23 04:53:07 np0005593232 podman[299978]: 2026-01-23 09:53:07.819993103 +0000 UTC m=+0.941834571 container died 0b682ff13d3e4f17cd6be288113e0771e1cdd255bb0b7375ea2e82c436fa5cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goodall, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:53:07 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1eb86c889b22288742bbe42c1f5f9666829eb8db9822a90dcd35638d48bdba03-merged.mount: Deactivated successfully.
Jan 23 04:53:07 np0005593232 podman[299978]: 2026-01-23 09:53:07.87592138 +0000 UTC m=+0.997762848 container remove 0b682ff13d3e4f17cd6be288113e0771e1cdd255bb0b7375ea2e82c436fa5cef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 04:53:07 np0005593232 systemd[1]: libpod-conmon-0b682ff13d3e4f17cd6be288113e0771e1cdd255bb0b7375ea2e82c436fa5cef.scope: Deactivated successfully.
Jan 23 04:53:08 np0005593232 nova_compute[250269]: 2026-01-23 09:53:08.427 250273 DEBUG nova.compute.manager [None req-a43bcb89-1608-4c71-8eb9-08f267ca5244 - - - - - -] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 04:53:08 np0005593232 nova_compute[250269]: 2026-01-23 09:53:08.433 250273 DEBUG nova.compute.manager [None req-a43bcb89-1608-4c71-8eb9-08f267ca5244 - - - - - -] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: resized, current task_state: None, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 04:53:08 np0005593232 nova_compute[250269]: 2026-01-23 09:53:08.481 250273 INFO nova.compute.manager [None req-a43bcb89-1608-4c71-8eb9-08f267ca5244 - - - - - -] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com
Jan 23 04:53:08 np0005593232 podman[300159]: 2026-01-23 09:53:08.493928318 +0000 UTC m=+0.036856827 container create fbb868c76ca80e89398e9027ab1c4989930420ad43d428906deaed99493654e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_carver, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:53:08 np0005593232 systemd[1]: Started libpod-conmon-fbb868c76ca80e89398e9027ab1c4989930420ad43d428906deaed99493654e9.scope.
Jan 23 04:53:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1851: 321 pgs: 321 active+clean; 213 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.6 MiB/s wr, 146 op/s
Jan 23 04:53:08 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:53:08 np0005593232 podman[300159]: 2026-01-23 09:53:08.477774868 +0000 UTC m=+0.020703407 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:53:08 np0005593232 podman[300159]: 2026-01-23 09:53:08.577210535 +0000 UTC m=+0.120139044 container init fbb868c76ca80e89398e9027ab1c4989930420ad43d428906deaed99493654e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Jan 23 04:53:08 np0005593232 podman[300159]: 2026-01-23 09:53:08.58348548 +0000 UTC m=+0.126413989 container start fbb868c76ca80e89398e9027ab1c4989930420ad43d428906deaed99493654e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_carver, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:53:08 np0005593232 podman[300159]: 2026-01-23 09:53:08.587016428 +0000 UTC m=+0.129944967 container attach fbb868c76ca80e89398e9027ab1c4989930420ad43d428906deaed99493654e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:53:08 np0005593232 hungry_carver[300175]: 167 167
Jan 23 04:53:08 np0005593232 systemd[1]: libpod-fbb868c76ca80e89398e9027ab1c4989930420ad43d428906deaed99493654e9.scope: Deactivated successfully.
Jan 23 04:53:08 np0005593232 conmon[300175]: conmon fbb868c76ca80e89398e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fbb868c76ca80e89398e9027ab1c4989930420ad43d428906deaed99493654e9.scope/container/memory.events
Jan 23 04:53:08 np0005593232 podman[300159]: 2026-01-23 09:53:08.590173946 +0000 UTC m=+0.133102455 container died fbb868c76ca80e89398e9027ab1c4989930420ad43d428906deaed99493654e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:53:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay-71e4c57ec48557a2586fc109c08a9ccc5b67dfae49cc5e698036baf3f4587204-merged.mount: Deactivated successfully.
Jan 23 04:53:08 np0005593232 podman[300159]: 2026-01-23 09:53:08.62840932 +0000 UTC m=+0.171337829 container remove fbb868c76ca80e89398e9027ab1c4989930420ad43d428906deaed99493654e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_carver, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 04:53:08 np0005593232 systemd[1]: libpod-conmon-fbb868c76ca80e89398e9027ab1c4989930420ad43d428906deaed99493654e9.scope: Deactivated successfully.
Jan 23 04:53:08 np0005593232 podman[300199]: 2026-01-23 09:53:08.770468093 +0000 UTC m=+0.022431925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:53:09 np0005593232 podman[300199]: 2026-01-23 09:53:09.125329508 +0000 UTC m=+0.377293320 container create e7232890c38b97e5b6635f76f2dbd3266e4c94f849529cefbaa00752bb28000b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 04:53:09 np0005593232 systemd[1]: Started libpod-conmon-e7232890c38b97e5b6635f76f2dbd3266e4c94f849529cefbaa00752bb28000b.scope.
Jan 23 04:53:09 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:53:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bbc12ee44d3c0cf033e32c4ccda15a96a3482d82ef50f0dfb73cddb64728df9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:53:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bbc12ee44d3c0cf033e32c4ccda15a96a3482d82ef50f0dfb73cddb64728df9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:53:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bbc12ee44d3c0cf033e32c4ccda15a96a3482d82ef50f0dfb73cddb64728df9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:53:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8bbc12ee44d3c0cf033e32c4ccda15a96a3482d82ef50f0dfb73cddb64728df9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:53:09 np0005593232 podman[300199]: 2026-01-23 09:53:09.216502845 +0000 UTC m=+0.468466677 container init e7232890c38b97e5b6635f76f2dbd3266e4c94f849529cefbaa00752bb28000b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_darwin, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 04:53:09 np0005593232 podman[300199]: 2026-01-23 09:53:09.225991999 +0000 UTC m=+0.477955811 container start e7232890c38b97e5b6635f76f2dbd3266e4c94f849529cefbaa00752bb28000b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:53:09 np0005593232 podman[300199]: 2026-01-23 09:53:09.229913948 +0000 UTC m=+0.481877770 container attach e7232890c38b97e5b6635f76f2dbd3266e4c94f849529cefbaa00752bb28000b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 23 04:53:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:53:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:09.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:53:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:09.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:10 np0005593232 serene_darwin[300217]: {
Jan 23 04:53:10 np0005593232 serene_darwin[300217]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:53:10 np0005593232 serene_darwin[300217]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:53:10 np0005593232 serene_darwin[300217]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:53:10 np0005593232 serene_darwin[300217]:        "osd_id": 0,
Jan 23 04:53:10 np0005593232 serene_darwin[300217]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:53:10 np0005593232 serene_darwin[300217]:        "type": "bluestore"
Jan 23 04:53:10 np0005593232 serene_darwin[300217]:    }
Jan 23 04:53:10 np0005593232 serene_darwin[300217]: }
Jan 23 04:53:10 np0005593232 nova_compute[250269]: 2026-01-23 09:53:10.069 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:53:10 np0005593232 systemd[1]: libpod-e7232890c38b97e5b6635f76f2dbd3266e4c94f849529cefbaa00752bb28000b.scope: Deactivated successfully.
Jan 23 04:53:10 np0005593232 podman[300199]: 2026-01-23 09:53:10.097641896 +0000 UTC m=+1.349605728 container died e7232890c38b97e5b6635f76f2dbd3266e4c94f849529cefbaa00752bb28000b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_darwin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:53:10 np0005593232 systemd[1]: var-lib-containers-storage-overlay-8bbc12ee44d3c0cf033e32c4ccda15a96a3482d82ef50f0dfb73cddb64728df9-merged.mount: Deactivated successfully.
Jan 23 04:53:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:53:10 np0005593232 podman[300199]: 2026-01-23 09:53:10.326189876 +0000 UTC m=+1.578153688 container remove e7232890c38b97e5b6635f76f2dbd3266e4c94f849529cefbaa00752bb28000b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_darwin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 04:53:10 np0005593232 systemd[1]: libpod-conmon-e7232890c38b97e5b6635f76f2dbd3266e4c94f849529cefbaa00752bb28000b.scope: Deactivated successfully.
Jan 23 04:53:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:53:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1852: 321 pgs: 321 active+clean; 213 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.4 MiB/s wr, 135 op/s
Jan 23 04:53:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:53:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:53:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:53:10 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ebed3d08-39d4-4e38-990c-b2ac9ed6bfef does not exist
Jan 23 04:53:10 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1564f9ab-5de6-4724-95e7-91b6782f8a36 does not exist
Jan 23 04:53:10 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 71bcede5-c0f3-43d3-bf4c-3aeccaf8656e does not exist
Jan 23 04:53:11 np0005593232 nova_compute[250269]: 2026-01-23 09:53:11.113 250273 DEBUG nova.network.neutron [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: a3c08e79-4f2b-42f2-bcac-21cbcfbc5247] Updating instance_info_cache with network_info: [{"id": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "address": "fa:16:3e:49:23:e7", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7d188d3-bf", "ovs_interfaceid": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 04:53:11 np0005593232 nova_compute[250269]: 2026-01-23 09:53:11.350 250273 DEBUG oslo_concurrency.lockutils [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Releasing lock "refresh_cache-a3c08e79-4f2b-42f2-bcac-21cbcfbc5247" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 04:53:11 np0005593232 nova_compute[250269]: 2026-01-23 09:53:11.351 250273 DEBUG nova.objects.instance [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lazy-loading 'migration_context' on Instance uuid a3c08e79-4f2b-42f2-bcac-21cbcfbc5247 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 04:53:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:53:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:11.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:53:11 np0005593232 nova_compute[250269]: 2026-01-23 09:53:11.470 250273 DEBUG nova.storage.rbd_utils [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] removing snapshot(nova-resize) on rbd image(a3c08e79-4f2b-42f2-bcac-21cbcfbc5247_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 23 04:53:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:11.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Jan 23 04:53:11 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:53:11 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:53:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Jan 23 04:53:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1853: 321 pgs: 321 active+clean; 213 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Jan 23 04:53:12 np0005593232 nova_compute[250269]: 2026-01-23 09:53:12.684 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:53:12 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Jan 23 04:53:13 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 23 04:53:13 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 23 04:53:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:13.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:13 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 23 04:53:13 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 23 04:53:13 np0005593232 nova_compute[250269]: 2026-01-23 09:53:13.475 250273 DEBUG nova.virt.libvirt.vif [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:52:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-231426966',display_name='tempest-ServerDiskConfigTestJSON-server-231426966',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-231426966',id=77,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:53:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='86d938c8e2bb41a79012befd500d1088',ramdisk_id='',reservation_id='r-w5bvxhsu',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-211417238',owner_user_name='tempest-ServerDiskConfigTestJSON-211417238-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:53:02Z,user_data=None,user_id='0cfac2191989448ead77e75ca3910ac4',uuid=a3c08e79-4f2b-42f2-bcac-21cbcfbc5247,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "address": "fa:16:3e:49:23:e7", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7d188d3-bf", "ovs_interfaceid": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 23 04:53:13 np0005593232 nova_compute[250269]: 2026-01-23 09:53:13.476 250273 DEBUG nova.network.os_vif_util [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Converting VIF {"id": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "address": "fa:16:3e:49:23:e7", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7d188d3-bf", "ovs_interfaceid": "d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 04:53:13 np0005593232 nova_compute[250269]: 2026-01-23 09:53:13.476 250273 DEBUG nova.network.os_vif_util [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:49:23:e7,bridge_name='br-int',has_traffic_filtering=True,id=d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728,network=Network(6d2cdc4c-47a0-475b-8e71-39465d365de3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7d188d3-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 04:53:13 np0005593232 nova_compute[250269]: 2026-01-23 09:53:13.477 250273 DEBUG os_vif [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:23:e7,bridge_name='br-int',has_traffic_filtering=True,id=d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728,network=Network(6d2cdc4c-47a0-475b-8e71-39465d365de3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7d188d3-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 23 04:53:13 np0005593232 nova_compute[250269]: 2026-01-23 09:53:13.478 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:53:13 np0005593232 nova_compute[250269]: 2026-01-23 09:53:13.478 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7d188d3-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 04:53:13 np0005593232 nova_compute[250269]: 2026-01-23 09:53:13.479 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 04:53:13 np0005593232 nova_compute[250269]: 2026-01-23 09:53:13.480 250273 INFO os_vif [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:49:23:e7,bridge_name='br-int',has_traffic_filtering=True,id=d7d188d3-bf5e-4df5-9ce1-6ba00d3f6728,network=Network(6d2cdc4c-47a0-475b-8e71-39465d365de3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7d188d3-bf')
Jan 23 04:53:13 np0005593232 nova_compute[250269]: 2026-01-23 09:53:13.481 250273 DEBUG oslo_concurrency.lockutils [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:53:13 np0005593232 nova_compute[250269]: 2026-01-23 09:53:13.481 250273 DEBUG oslo_concurrency.lockutils [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:53:13 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 23 04:53:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:13.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:13 np0005593232 nova_compute[250269]: 2026-01-23 09:53:13.702 250273 DEBUG oslo_concurrency.processutils [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 04:53:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:53:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1464700750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:53:14 np0005593232 nova_compute[250269]: 2026-01-23 09:53:14.144 250273 DEBUG oslo_concurrency.processutils [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:53:14 np0005593232 nova_compute[250269]: 2026-01-23 09:53:14.151 250273 DEBUG nova.compute.provider_tree [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:53:14 np0005593232 nova_compute[250269]: 2026-01-23 09:53:14.191 250273 DEBUG nova.scheduler.client.report [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:53:14 np0005593232 nova_compute[250269]: 2026-01-23 09:53:14.262 250273 DEBUG oslo_concurrency.lockutils [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:53:14 np0005593232 nova_compute[250269]: 2026-01-23 09:53:14.539 250273 INFO nova.scheduler.client.report [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Deleted allocation for migration 33811d13-3bff-46fc-9c7b-2a2a36548dcf#033[00m
Jan 23 04:53:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1855: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 213 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 16 KiB/s wr, 181 op/s
Jan 23 04:53:14 np0005593232 nova_compute[250269]: 2026-01-23 09:53:14.752 250273 DEBUG oslo_concurrency.lockutils [None req-ecc3da77-d036-4743-8b5d-de9dbf93553f 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "a3c08e79-4f2b-42f2-bcac-21cbcfbc5247" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 10.018s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:53:15 np0005593232 nova_compute[250269]: 2026-01-23 09:53:15.072 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:53:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:15.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:15.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1856: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 213 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 16 KiB/s wr, 178 op/s
Jan 23 04:53:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:17.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:17.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:17 np0005593232 nova_compute[250269]: 2026-01-23 09:53:17.685 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:18 np0005593232 nova_compute[250269]: 2026-01-23 09:53:18.002 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:18 np0005593232 nova_compute[250269]: 2026-01-23 09:53:18.287 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:18 np0005593232 nova_compute[250269]: 2026-01-23 09:53:18.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:53:18 np0005593232 nova_compute[250269]: 2026-01-23 09:53:18.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 23 04:53:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1857: 321 pgs: 321 active+clean; 213 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 29 KiB/s wr, 252 op/s
Jan 23 04:53:19 np0005593232 nova_compute[250269]: 2026-01-23 09:53:19.407 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:53:19 np0005593232 nova_compute[250269]: 2026-01-23 09:53:19.407 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:53:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:19.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:19.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:20 np0005593232 nova_compute[250269]: 2026-01-23 09:53:20.073 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:53:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Jan 23 04:53:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1858: 321 pgs: 321 active+clean; 213 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 29 KiB/s wr, 252 op/s
Jan 23 04:53:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Jan 23 04:53:20 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Jan 23 04:53:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:21.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:21.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:22 np0005593232 nova_compute[250269]: 2026-01-23 09:53:22.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:53:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1860: 321 pgs: 321 active+clean; 213 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 14 KiB/s wr, 221 op/s
Jan 23 04:53:22 np0005593232 nova_compute[250269]: 2026-01-23 09:53:22.687 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:23 np0005593232 nova_compute[250269]: 2026-01-23 09:53:23.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:53:23 np0005593232 nova_compute[250269]: 2026-01-23 09:53:23.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:53:23 np0005593232 nova_compute[250269]: 2026-01-23 09:53:23.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:53:23 np0005593232 nova_compute[250269]: 2026-01-23 09:53:23.379 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 04:53:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:23.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:53:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:23.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:53:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1861: 321 pgs: 321 active+clean; 224 MiB data, 780 MiB used, 20 GiB / 21 GiB avail; 744 KiB/s rd, 1.4 MiB/s wr, 212 op/s
Jan 23 04:53:25 np0005593232 nova_compute[250269]: 2026-01-23 09:53:25.075 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:25 np0005593232 nova_compute[250269]: 2026-01-23 09:53:25.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:53:25 np0005593232 nova_compute[250269]: 2026-01-23 09:53:25.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:53:25 np0005593232 podman[300423]: 2026-01-23 09:53:25.429009783 +0000 UTC m=+0.086654413 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 23 04:53:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:53:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:25.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:53:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:53:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:25.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:53:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:53:26 np0005593232 nova_compute[250269]: 2026-01-23 09:53:26.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:53:26 np0005593232 nova_compute[250269]: 2026-01-23 09:53:26.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:53:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1862: 321 pgs: 321 active+clean; 240 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 628 KiB/s rd, 2.4 MiB/s wr, 210 op/s
Jan 23 04:53:27 np0005593232 nova_compute[250269]: 2026-01-23 09:53:27.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:53:27 np0005593232 nova_compute[250269]: 2026-01-23 09:53:27.336 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:53:27 np0005593232 nova_compute[250269]: 2026-01-23 09:53:27.336 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:53:27 np0005593232 nova_compute[250269]: 2026-01-23 09:53:27.337 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:53:27 np0005593232 nova_compute[250269]: 2026-01-23 09:53:27.337 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:53:27 np0005593232 nova_compute[250269]: 2026-01-23 09:53:27.337 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:53:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:27.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:53:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:27.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:53:27 np0005593232 nova_compute[250269]: 2026-01-23 09:53:27.688 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:53:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3074563579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:53:27 np0005593232 nova_compute[250269]: 2026-01-23 09:53:27.769 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:53:27 np0005593232 nova_compute[250269]: 2026-01-23 09:53:27.908 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:53:27 np0005593232 nova_compute[250269]: 2026-01-23 09:53:27.910 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4555MB free_disk=20.898292541503906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:53:27 np0005593232 nova_compute[250269]: 2026-01-23 09:53:27.910 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:53:27 np0005593232 nova_compute[250269]: 2026-01-23 09:53:27.910 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:53:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1863: 321 pgs: 321 active+clean; 191 MiB data, 788 MiB used, 20 GiB / 21 GiB avail; 429 KiB/s rd, 2.6 MiB/s wr, 171 op/s
Jan 23 04:53:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:29.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:53:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:29.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:53:29 np0005593232 nova_compute[250269]: 2026-01-23 09:53:29.642 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:53:29 np0005593232 nova_compute[250269]: 2026-01-23 09:53:29.642 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:53:29 np0005593232 nova_compute[250269]: 2026-01-23 09:53:29.783 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:53:30 np0005593232 nova_compute[250269]: 2026-01-23 09:53:30.077 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:53:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1347753620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:53:30 np0005593232 nova_compute[250269]: 2026-01-23 09:53:30.262 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:53:30 np0005593232 nova_compute[250269]: 2026-01-23 09:53:30.270 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:53:30 np0005593232 nova_compute[250269]: 2026-01-23 09:53:30.308 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:53:30 np0005593232 nova_compute[250269]: 2026-01-23 09:53:30.361 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:53:30 np0005593232 nova_compute[250269]: 2026-01-23 09:53:30.361 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.451s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:53:30 np0005593232 nova_compute[250269]: 2026-01-23 09:53:30.362 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:53:30 np0005593232 podman[300499]: 2026-01-23 09:53:30.384142264 +0000 UTC m=+0.047533474 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 23 04:53:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1864: 321 pgs: 321 active+clean; 191 MiB data, 788 MiB used, 20 GiB / 21 GiB avail; 429 KiB/s rd, 2.6 MiB/s wr, 171 op/s
Jan 23 04:53:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:53:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:53:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:31.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:53:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:31.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1865: 321 pgs: 321 active+clean; 167 MiB data, 777 MiB used, 20 GiB / 21 GiB avail; 404 KiB/s rd, 2.2 MiB/s wr, 175 op/s
Jan 23 04:53:32 np0005593232 nova_compute[250269]: 2026-01-23 09:53:32.690 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:33 np0005593232 nova_compute[250269]: 2026-01-23 09:53:33.380 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:53:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:33.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:33.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:33 np0005593232 nova_compute[250269]: 2026-01-23 09:53:33.650 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:53:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1866: 321 pgs: 321 active+clean; 167 MiB data, 757 MiB used, 20 GiB / 21 GiB avail; 384 KiB/s rd, 2.2 MiB/s wr, 159 op/s
Jan 23 04:53:35 np0005593232 nova_compute[250269]: 2026-01-23 09:53:35.079 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:35 np0005593232 nova_compute[250269]: 2026-01-23 09:53:35.095 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:53:35 np0005593232 nova_compute[250269]: 2026-01-23 09:53:35.398 250273 DEBUG oslo_concurrency.lockutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquiring lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:53:35 np0005593232 nova_compute[250269]: 2026-01-23 09:53:35.399 250273 DEBUG oslo_concurrency.lockutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:53:35 np0005593232 nova_compute[250269]: 2026-01-23 09:53:35.436 250273 DEBUG nova.compute.manager [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:53:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000056s ======
Jan 23 04:53:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:35.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Jan 23 04:53:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:35.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:53:35 np0005593232 nova_compute[250269]: 2026-01-23 09:53:35.757 250273 DEBUG oslo_concurrency.lockutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:53:35 np0005593232 nova_compute[250269]: 2026-01-23 09:53:35.757 250273 DEBUG oslo_concurrency.lockutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:53:35 np0005593232 nova_compute[250269]: 2026-01-23 09:53:35.771 250273 DEBUG nova.virt.hardware [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:53:35 np0005593232 nova_compute[250269]: 2026-01-23 09:53:35.772 250273 INFO nova.compute.claims [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:53:36 np0005593232 nova_compute[250269]: 2026-01-23 09:53:36.141 250273 DEBUG oslo_concurrency.processutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:53:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:53:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/942114752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:53:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1867: 321 pgs: 321 active+clean; 167 MiB data, 757 MiB used, 20 GiB / 21 GiB avail; 354 KiB/s rd, 983 KiB/s wr, 127 op/s
Jan 23 04:53:36 np0005593232 nova_compute[250269]: 2026-01-23 09:53:36.585 250273 DEBUG oslo_concurrency.processutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:53:36 np0005593232 nova_compute[250269]: 2026-01-23 09:53:36.591 250273 DEBUG nova.compute.provider_tree [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:53:36 np0005593232 nova_compute[250269]: 2026-01-23 09:53:36.643 250273 DEBUG nova.scheduler.client.report [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:53:36 np0005593232 nova_compute[250269]: 2026-01-23 09:53:36.703 250273 DEBUG oslo_concurrency.lockutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:53:36 np0005593232 nova_compute[250269]: 2026-01-23 09:53:36.704 250273 DEBUG nova.compute.manager [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 04:53:36 np0005593232 nova_compute[250269]: 2026-01-23 09:53:36.927 250273 DEBUG nova.compute.manager [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 04:53:36 np0005593232 nova_compute[250269]: 2026-01-23 09:53:36.928 250273 DEBUG nova.network.neutron [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:53:37 np0005593232 nova_compute[250269]: 2026-01-23 09:53:37.022 250273 INFO nova.virt.libvirt.driver [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:53:37 np0005593232 nova_compute[250269]: 2026-01-23 09:53:37.059 250273 DEBUG nova.compute.manager [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:53:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:53:37
Jan 23 04:53:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:53:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:53:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'vms']
Jan 23 04:53:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:53:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:37.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:53:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:53:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:53:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:53:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:53:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:53:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:37.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:37 np0005593232 nova_compute[250269]: 2026-01-23 09:53:37.694 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:37 np0005593232 nova_compute[250269]: 2026-01-23 09:53:37.703 250273 DEBUG nova.compute.manager [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:53:37 np0005593232 nova_compute[250269]: 2026-01-23 09:53:37.704 250273 DEBUG nova.virt.libvirt.driver [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:53:37 np0005593232 nova_compute[250269]: 2026-01-23 09:53:37.705 250273 INFO nova.virt.libvirt.driver [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Creating image(s)#033[00m
Jan 23 04:53:37 np0005593232 nova_compute[250269]: 2026-01-23 09:53:37.730 250273 DEBUG nova.storage.rbd_utils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] rbd image fffef24b-bb5b-41c6-a049-c1c4ba8f02fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:53:37 np0005593232 nova_compute[250269]: 2026-01-23 09:53:37.756 250273 DEBUG nova.storage.rbd_utils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] rbd image fffef24b-bb5b-41c6-a049-c1c4ba8f02fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:53:37 np0005593232 nova_compute[250269]: 2026-01-23 09:53:37.780 250273 DEBUG nova.storage.rbd_utils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] rbd image fffef24b-bb5b-41c6-a049-c1c4ba8f02fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:53:37 np0005593232 nova_compute[250269]: 2026-01-23 09:53:37.783 250273 DEBUG oslo_concurrency.processutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:53:37 np0005593232 nova_compute[250269]: 2026-01-23 09:53:37.845 250273 DEBUG oslo_concurrency.processutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:53:37 np0005593232 nova_compute[250269]: 2026-01-23 09:53:37.846 250273 DEBUG oslo_concurrency.lockutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:53:37 np0005593232 nova_compute[250269]: 2026-01-23 09:53:37.847 250273 DEBUG oslo_concurrency.lockutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:53:37 np0005593232 nova_compute[250269]: 2026-01-23 09:53:37.847 250273 DEBUG oslo_concurrency.lockutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:53:37 np0005593232 nova_compute[250269]: 2026-01-23 09:53:37.881 250273 DEBUG nova.storage.rbd_utils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] rbd image fffef24b-bb5b-41c6-a049-c1c4ba8f02fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:53:37 np0005593232 nova_compute[250269]: 2026-01-23 09:53:37.885 250273 DEBUG oslo_concurrency.processutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 fffef24b-bb5b-41c6-a049-c1c4ba8f02fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:53:38 np0005593232 nova_compute[250269]: 2026-01-23 09:53:38.008 250273 DEBUG nova.policy [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0cfac2191989448ead77e75ca3910ac4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '86d938c8e2bb41a79012befd500d1088', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 04:53:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:53:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:53:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:53:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:53:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:53:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:53:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:53:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:53:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:53:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:53:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1868: 321 pgs: 321 active+clean; 167 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 277 KiB/s rd, 163 KiB/s wr, 103 op/s
Jan 23 04:53:38 np0005593232 nova_compute[250269]: 2026-01-23 09:53:38.702 250273 DEBUG oslo_concurrency.processutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 fffef24b-bb5b-41c6-a049-c1c4ba8f02fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.817s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:53:38 np0005593232 nova_compute[250269]: 2026-01-23 09:53:38.790 250273 DEBUG nova.storage.rbd_utils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] resizing rbd image fffef24b-bb5b-41c6-a049-c1c4ba8f02fb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 04:53:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:39.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:39.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:39 np0005593232 nova_compute[250269]: 2026-01-23 09:53:39.728 250273 DEBUG nova.objects.instance [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lazy-loading 'migration_context' on Instance uuid fffef24b-bb5b-41c6-a049-c1c4ba8f02fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:53:39 np0005593232 nova_compute[250269]: 2026-01-23 09:53:39.731 250273 DEBUG nova.network.neutron [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Successfully created port: 10fe80c9-2f99-4371-a60e-b8b226c250aa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 04:53:39 np0005593232 nova_compute[250269]: 2026-01-23 09:53:39.765 250273 DEBUG nova.virt.libvirt.driver [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:53:39 np0005593232 nova_compute[250269]: 2026-01-23 09:53:39.765 250273 DEBUG nova.virt.libvirt.driver [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Ensure instance console log exists: /var/lib/nova/instances/fffef24b-bb5b-41c6-a049-c1c4ba8f02fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:53:39 np0005593232 nova_compute[250269]: 2026-01-23 09:53:39.766 250273 DEBUG oslo_concurrency.lockutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:53:39 np0005593232 nova_compute[250269]: 2026-01-23 09:53:39.766 250273 DEBUG oslo_concurrency.lockutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:53:39 np0005593232 nova_compute[250269]: 2026-01-23 09:53:39.767 250273 DEBUG oslo_concurrency.lockutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:53:40 np0005593232 nova_compute[250269]: 2026-01-23 09:53:40.081 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1869: 321 pgs: 321 active+clean; 167 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 33 KiB/s wr, 33 op/s
Jan 23 04:53:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:53:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:41.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:41.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1870: 321 pgs: 321 active+clean; 152 MiB data, 741 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 843 KiB/s wr, 40 op/s
Jan 23 04:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:42.606 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:42.607 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:42.607 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:53:42 np0005593232 nova_compute[250269]: 2026-01-23 09:53:42.699 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:43 np0005593232 nova_compute[250269]: 2026-01-23 09:53:43.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:53:43 np0005593232 nova_compute[250269]: 2026-01-23 09:53:43.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 23 04:53:43 np0005593232 nova_compute[250269]: 2026-01-23 09:53:43.335 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 23 04:53:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:53:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:43.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:53:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:43.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:44 np0005593232 nova_compute[250269]: 2026-01-23 09:53:44.188 250273 DEBUG nova.network.neutron [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Successfully updated port: 10fe80c9-2f99-4371-a60e-b8b226c250aa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 04:53:44 np0005593232 nova_compute[250269]: 2026-01-23 09:53:44.249 250273 DEBUG oslo_concurrency.lockutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquiring lock "refresh_cache-fffef24b-bb5b-41c6-a049-c1c4ba8f02fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:53:44 np0005593232 nova_compute[250269]: 2026-01-23 09:53:44.249 250273 DEBUG oslo_concurrency.lockutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquired lock "refresh_cache-fffef24b-bb5b-41c6-a049-c1c4ba8f02fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:53:44 np0005593232 nova_compute[250269]: 2026-01-23 09:53:44.249 250273 DEBUG nova.network.neutron [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:53:44 np0005593232 nova_compute[250269]: 2026-01-23 09:53:44.488 250273 DEBUG nova.compute.manager [req-a5b01e55-e3dc-45b3-958f-922f92bf44e3 req-a296bf30-c8b7-4046-b27e-701777d8b3b7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Received event network-changed-10fe80c9-2f99-4371-a60e-b8b226c250aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:53:44 np0005593232 nova_compute[250269]: 2026-01-23 09:53:44.489 250273 DEBUG nova.compute.manager [req-a5b01e55-e3dc-45b3-958f-922f92bf44e3 req-a296bf30-c8b7-4046-b27e-701777d8b3b7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Refreshing instance network info cache due to event network-changed-10fe80c9-2f99-4371-a60e-b8b226c250aa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:53:44 np0005593232 nova_compute[250269]: 2026-01-23 09:53:44.489 250273 DEBUG oslo_concurrency.lockutils [req-a5b01e55-e3dc-45b3-958f-922f92bf44e3 req-a296bf30-c8b7-4046-b27e-701777d8b3b7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-fffef24b-bb5b-41c6-a049-c1c4ba8f02fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:53:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1871: 321 pgs: 321 active+clean; 134 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Jan 23 04:53:44 np0005593232 nova_compute[250269]: 2026-01-23 09:53:44.789 250273 DEBUG nova.network.neutron [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:53:45 np0005593232 nova_compute[250269]: 2026-01-23 09:53:45.083 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:45.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:45.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:53:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1872: 321 pgs: 321 active+clean; 134 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 23 04:53:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:53:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:53:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:53:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:53:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000988921749953471 of space, bias 1.0, pg target 0.29667652498604136 quantized to 32 (current 32)
Jan 23 04:53:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:53:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009900122720081892 of space, bias 1.0, pg target 0.29700368160245677 quantized to 32 (current 32)
Jan 23 04:53:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:53:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:53:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:53:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 04:53:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:53:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:53:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:53:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:53:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:53:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:53:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:53:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:53:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:53:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:53:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:53:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:53:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:47.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:47.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:47 np0005593232 nova_compute[250269]: 2026-01-23 09:53:47.702 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.329 250273 DEBUG nova.network.neutron [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Updating instance_info_cache with network_info: [{"id": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "address": "fa:16:3e:45:b2:d4", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10fe80c9-2f", "ovs_interfaceid": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.414 250273 DEBUG oslo_concurrency.lockutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Releasing lock "refresh_cache-fffef24b-bb5b-41c6-a049-c1c4ba8f02fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.415 250273 DEBUG nova.compute.manager [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Instance network_info: |[{"id": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "address": "fa:16:3e:45:b2:d4", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10fe80c9-2f", "ovs_interfaceid": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.415 250273 DEBUG oslo_concurrency.lockutils [req-a5b01e55-e3dc-45b3-958f-922f92bf44e3 req-a296bf30-c8b7-4046-b27e-701777d8b3b7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-fffef24b-bb5b-41c6-a049-c1c4ba8f02fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.415 250273 DEBUG nova.network.neutron [req-a5b01e55-e3dc-45b3-958f-922f92bf44e3 req-a296bf30-c8b7-4046-b27e-701777d8b3b7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Refreshing network info cache for port 10fe80c9-2f99-4371-a60e-b8b226c250aa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.418 250273 DEBUG nova.virt.libvirt.driver [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Start _get_guest_xml network_info=[{"id": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "address": "fa:16:3e:45:b2:d4", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10fe80c9-2f", "ovs_interfaceid": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.421 250273 WARNING nova.virt.libvirt.driver [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.435 250273 DEBUG nova.virt.libvirt.host [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.435 250273 DEBUG nova.virt.libvirt.host [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.475 250273 DEBUG nova.virt.libvirt.host [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.476 250273 DEBUG nova.virt.libvirt.host [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.478 250273 DEBUG nova.virt.libvirt.driver [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.478 250273 DEBUG nova.virt.hardware [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.478 250273 DEBUG nova.virt.hardware [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.479 250273 DEBUG nova.virt.hardware [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.479 250273 DEBUG nova.virt.hardware [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.479 250273 DEBUG nova.virt.hardware [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.479 250273 DEBUG nova.virt.hardware [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.480 250273 DEBUG nova.virt.hardware [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.480 250273 DEBUG nova.virt.hardware [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.480 250273 DEBUG nova.virt.hardware [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.481 250273 DEBUG nova.virt.hardware [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.481 250273 DEBUG nova.virt.hardware [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.484 250273 DEBUG oslo_concurrency.processutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:53:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1873: 321 pgs: 321 active+clean; 134 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 23 04:53:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:53:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3256762605' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.945 250273 DEBUG oslo_concurrency.processutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.972 250273 DEBUG nova.storage.rbd_utils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] rbd image fffef24b-bb5b-41c6-a049-c1c4ba8f02fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:53:48 np0005593232 nova_compute[250269]: 2026-01-23 09:53:48.977 250273 DEBUG oslo_concurrency.processutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:53:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:53:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3451045703' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.424 250273 DEBUG oslo_concurrency.processutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.426 250273 DEBUG nova.virt.libvirt.vif [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:53:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1974132906',display_name='tempest-ServerDiskConfigTestJSON-server-1974132906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1974132906',id=79,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='86d938c8e2bb41a79012befd500d1088',ramdisk_id='',reservation_id='r-09i9yxy4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-211417238',owner_user_name='tempest-ServerDiskConfigTestJSON-211417238-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:53:37Z,user_data=None,user_id='0cfac2191989448ead77e75ca3910ac4',uuid=fffef24b-bb5b-41c6-a049-c1c4ba8f02fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "address": "fa:16:3e:45:b2:d4", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10fe80c9-2f", "ovs_interfaceid": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.426 250273 DEBUG nova.network.os_vif_util [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Converting VIF {"id": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "address": "fa:16:3e:45:b2:d4", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10fe80c9-2f", "ovs_interfaceid": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.427 250273 DEBUG nova.network.os_vif_util [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:b2:d4,bridge_name='br-int',has_traffic_filtering=True,id=10fe80c9-2f99-4371-a60e-b8b226c250aa,network=Network(6d2cdc4c-47a0-475b-8e71-39465d365de3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10fe80c9-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.428 250273 DEBUG nova.objects.instance [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lazy-loading 'pci_devices' on Instance uuid fffef24b-bb5b-41c6-a049-c1c4ba8f02fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.447 250273 DEBUG nova.virt.libvirt.driver [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:53:49 np0005593232 nova_compute[250269]:  <uuid>fffef24b-bb5b-41c6-a049-c1c4ba8f02fb</uuid>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:  <name>instance-0000004f</name>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1974132906</nova:name>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:53:48</nova:creationTime>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:53:49 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:        <nova:user uuid="0cfac2191989448ead77e75ca3910ac4">tempest-ServerDiskConfigTestJSON-211417238-project-member</nova:user>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:        <nova:project uuid="86d938c8e2bb41a79012befd500d1088">tempest-ServerDiskConfigTestJSON-211417238</nova:project>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:        <nova:port uuid="10fe80c9-2f99-4371-a60e-b8b226c250aa">
Jan 23 04:53:49 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <entry name="serial">fffef24b-bb5b-41c6-a049-c1c4ba8f02fb</entry>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <entry name="uuid">fffef24b-bb5b-41c6-a049-c1c4ba8f02fb</entry>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/fffef24b-bb5b-41c6-a049-c1c4ba8f02fb_disk">
Jan 23 04:53:49 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:53:49 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/fffef24b-bb5b-41c6-a049-c1c4ba8f02fb_disk.config">
Jan 23 04:53:49 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:53:49 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:45:b2:d4"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <target dev="tap10fe80c9-2f"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/fffef24b-bb5b-41c6-a049-c1c4ba8f02fb/console.log" append="off"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:53:49 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:53:49 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:53:49 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:53:49 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.449 250273 DEBUG nova.compute.manager [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Preparing to wait for external event network-vif-plugged-10fe80c9-2f99-4371-a60e-b8b226c250aa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.449 250273 DEBUG oslo_concurrency.lockutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquiring lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.449 250273 DEBUG oslo_concurrency.lockutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.450 250273 DEBUG oslo_concurrency.lockutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.450 250273 DEBUG nova.virt.libvirt.vif [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:53:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1974132906',display_name='tempest-ServerDiskConfigTestJSON-server-1974132906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1974132906',id=79,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='86d938c8e2bb41a79012befd500d1088',ramdisk_id='',reservation_id='r-09i9yxy4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-211417238',owner_user_name='tempest-ServerDiskConfigTestJSON-211417238-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:53:37Z,user_data=None,user_id='0cfac2191989448ead77e75ca3910ac4',uuid=fffef24b-bb5b-41c6-a049-c1c4ba8f02fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "address": "fa:16:3e:45:b2:d4", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10fe80c9-2f", "ovs_interfaceid": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.451 250273 DEBUG nova.network.os_vif_util [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Converting VIF {"id": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "address": "fa:16:3e:45:b2:d4", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10fe80c9-2f", "ovs_interfaceid": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.451 250273 DEBUG nova.network.os_vif_util [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:b2:d4,bridge_name='br-int',has_traffic_filtering=True,id=10fe80c9-2f99-4371-a60e-b8b226c250aa,network=Network(6d2cdc4c-47a0-475b-8e71-39465d365de3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10fe80c9-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.452 250273 DEBUG os_vif [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:b2:d4,bridge_name='br-int',has_traffic_filtering=True,id=10fe80c9-2f99-4371-a60e-b8b226c250aa,network=Network(6d2cdc4c-47a0-475b-8e71-39465d365de3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10fe80c9-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.452 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.453 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.453 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.456 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.457 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap10fe80c9-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.458 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap10fe80c9-2f, col_values=(('external_ids', {'iface-id': '10fe80c9-2f99-4371-a60e-b8b226c250aa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:b2:d4', 'vm-uuid': 'fffef24b-bb5b-41c6-a049-c1c4ba8f02fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:53:49 np0005593232 NetworkManager[49057]: <info>  [1769162029.4603] manager: (tap10fe80c9-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/121)
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.459 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.462 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.470 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.471 250273 INFO os_vif [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:b2:d4,bridge_name='br-int',has_traffic_filtering=True,id=10fe80c9-2f99-4371-a60e-b8b226c250aa,network=Network(6d2cdc4c-47a0-475b-8e71-39465d365de3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10fe80c9-2f')#033[00m
Jan 23 04:53:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:53:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:49.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.556 250273 DEBUG nova.virt.libvirt.driver [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.557 250273 DEBUG nova.virt.libvirt.driver [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.557 250273 DEBUG nova.virt.libvirt.driver [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] No VIF found with MAC fa:16:3e:45:b2:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.558 250273 INFO nova.virt.libvirt.driver [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Using config drive#033[00m
Jan 23 04:53:49 np0005593232 nova_compute[250269]: 2026-01-23 09:53:49.588 250273 DEBUG nova.storage.rbd_utils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] rbd image fffef24b-bb5b-41c6-a049-c1c4ba8f02fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:53:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:53:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:49.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:53:50 np0005593232 nova_compute[250269]: 2026-01-23 09:53:50.085 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1874: 321 pgs: 321 active+clean; 134 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 23 04:53:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:53:50 np0005593232 nova_compute[250269]: 2026-01-23 09:53:50.966 250273 INFO nova.virt.libvirt.driver [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Creating config drive at /var/lib/nova/instances/fffef24b-bb5b-41c6-a049-c1c4ba8f02fb/disk.config#033[00m
Jan 23 04:53:50 np0005593232 nova_compute[250269]: 2026-01-23 09:53:50.971 250273 DEBUG oslo_concurrency.processutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fffef24b-bb5b-41c6-a049-c1c4ba8f02fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2585ofcn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:53:51 np0005593232 nova_compute[250269]: 2026-01-23 09:53:51.108 250273 DEBUG oslo_concurrency.processutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fffef24b-bb5b-41c6-a049-c1c4ba8f02fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2585ofcn" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:53:51 np0005593232 nova_compute[250269]: 2026-01-23 09:53:51.186 250273 DEBUG nova.storage.rbd_utils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] rbd image fffef24b-bb5b-41c6-a049-c1c4ba8f02fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:53:51 np0005593232 nova_compute[250269]: 2026-01-23 09:53:51.189 250273 DEBUG oslo_concurrency.processutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fffef24b-bb5b-41c6-a049-c1c4ba8f02fb/disk.config fffef24b-bb5b-41c6-a049-c1c4ba8f02fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:53:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:53:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:51.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:53:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:51.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.173 250273 DEBUG oslo_concurrency.processutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fffef24b-bb5b-41c6-a049-c1c4ba8f02fb/disk.config fffef24b-bb5b-41c6-a049-c1c4ba8f02fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.984s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.175 250273 INFO nova.virt.libvirt.driver [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Deleting local config drive /var/lib/nova/instances/fffef24b-bb5b-41c6-a049-c1c4ba8f02fb/disk.config because it was imported into RBD.#033[00m
Jan 23 04:53:52 np0005593232 kernel: tap10fe80c9-2f: entered promiscuous mode
Jan 23 04:53:52 np0005593232 NetworkManager[49057]: <info>  [1769162032.2426] manager: (tap10fe80c9-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/122)
Jan 23 04:53:52 np0005593232 systemd-udevd[300900]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:53:52 np0005593232 ovn_controller[151001]: 2026-01-23T09:53:52Z|00245|binding|INFO|Claiming lport 10fe80c9-2f99-4371-a60e-b8b226c250aa for this chassis.
Jan 23 04:53:52 np0005593232 ovn_controller[151001]: 2026-01-23T09:53:52Z|00246|binding|INFO|10fe80c9-2f99-4371-a60e-b8b226c250aa: Claiming fa:16:3e:45:b2:d4 10.100.0.12
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.297 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.301 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:52 np0005593232 NetworkManager[49057]: <info>  [1769162032.3102] device (tap10fe80c9-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:53:52 np0005593232 NetworkManager[49057]: <info>  [1769162032.3116] device (tap10fe80c9-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:53:52 np0005593232 systemd-machined[215836]: New machine qemu-30-instance-0000004f.
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.341 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:b2:d4 10.100.0.12'], port_security=['fa:16:3e:45:b2:d4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'fffef24b-bb5b-41c6-a049-c1c4ba8f02fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d2cdc4c-47a0-475b-8e71-39465d365de3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '86d938c8e2bb41a79012befd500d1088', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7a7b70d2-dc13-4ace-b4e0-b2bcfa748347', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99c61616-3f86-4228-bb78-0dc84e2b2157, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=10fe80c9-2f99-4371-a60e-b8b226c250aa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.342 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 10fe80c9-2f99-4371-a60e-b8b226c250aa in datapath 6d2cdc4c-47a0-475b-8e71-39465d365de3 bound to our chassis#033[00m
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.343 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d2cdc4c-47a0-475b-8e71-39465d365de3#033[00m
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.356 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[23527f09-0533-4d21-807b-33ff9cd61d7d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.356 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6d2cdc4c-41 in ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.358 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6d2cdc4c-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.358 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a034da-fdfc-4ece-9b93-58a39ebdb84e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.359 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5f10a416-7caa-46e6-8407-87397316e840]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:53:52 np0005593232 ovn_controller[151001]: 2026-01-23T09:53:52Z|00247|binding|INFO|Setting lport 10fe80c9-2f99-4371-a60e-b8b226c250aa ovn-installed in OVS
Jan 23 04:53:52 np0005593232 ovn_controller[151001]: 2026-01-23T09:53:52Z|00248|binding|INFO|Setting lport 10fe80c9-2f99-4371-a60e-b8b226c250aa up in Southbound
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.366 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:52 np0005593232 systemd[1]: Started Virtual Machine qemu-30-instance-0000004f.
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.372 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[a68e5604-fcee-4ca8-9619-7121d0541dda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.386 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d6cb11f4-644e-4405-978e-305417047221]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.417 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[b333ad30-9f06-4896-ae2c-8c5e0e15709d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.421 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[712f83f0-f647-4774-bf45-8fd45b04e14d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:53:52 np0005593232 NetworkManager[49057]: <info>  [1769162032.4222] manager: (tap6d2cdc4c-40): new Veth device (/org/freedesktop/NetworkManager/Devices/123)
Jan 23 04:53:52 np0005593232 systemd-udevd[300904]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.452 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[6eaa9c09-8922-45ca-bbc2-b23e37debcc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.456 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[69c4372c-4643-482d-b37f-2e00d0c0b9cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:53:52 np0005593232 NetworkManager[49057]: <info>  [1769162032.4794] device (tap6d2cdc4c-40): carrier: link connected
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.484 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[174c82f5-977b-4425-87f4-5f9a7235045f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.499 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[540cc476-779a-472b-bb19-51aec8467ede]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d2cdc4c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:5a:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594562, 'reachable_time': 20154, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300936, 'error': None, 'target': 'ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.514 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[50994cec-3ae9-4aa9-8c22-b14e92fda647]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:5a26'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 594562, 'tstamp': 594562}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300937, 'error': None, 'target': 'ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.530 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7f725abd-adc4-4af3-9626-a73933969950]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d2cdc4c-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:5a:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 72], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594562, 'reachable_time': 20154, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300938, 'error': None, 'target': 'ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.563 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[66828995-fc16-4fd1-9721-30ce62ab5bfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:53:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1875: 321 pgs: 321 active+clean; 134 MiB data, 724 MiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.630 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9e8da13c-f4c3-4110-a80e-cea535c2cea6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.631 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d2cdc4c-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.632 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.632 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d2cdc4c-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.634 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:52 np0005593232 kernel: tap6d2cdc4c-40: entered promiscuous mode
Jan 23 04:53:52 np0005593232 NetworkManager[49057]: <info>  [1769162032.6354] manager: (tap6d2cdc4c-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/124)
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.637 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d2cdc4c-40, col_values=(('external_ids', {'iface-id': '04f6c0b6-99ee-4958-bc01-68fa310042f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.638 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:52 np0005593232 ovn_controller[151001]: 2026-01-23T09:53:52Z|00249|binding|INFO|Releasing lport 04f6c0b6-99ee-4958-bc01-68fa310042f0 from this chassis (sb_readonly=0)
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.654 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.655 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6d2cdc4c-47a0-475b-8e71-39465d365de3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6d2cdc4c-47a0-475b-8e71-39465d365de3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.656 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[cc473738-e5f8-4b6d-a2e5-33aa6f275a9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.657 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-6d2cdc4c-47a0-475b-8e71-39465d365de3
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/6d2cdc4c-47a0-475b-8e71-39465d365de3.pid.haproxy
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 6d2cdc4c-47a0-475b-8e71-39465d365de3
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:53:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:52.657 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3', 'env', 'PROCESS_TAG=haproxy-6d2cdc4c-47a0-475b-8e71-39465d365de3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6d2cdc4c-47a0-475b-8e71-39465d365de3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.896 250273 DEBUG nova.network.neutron [req-a5b01e55-e3dc-45b3-958f-922f92bf44e3 req-a296bf30-c8b7-4046-b27e-701777d8b3b7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Updated VIF entry in instance network info cache for port 10fe80c9-2f99-4371-a60e-b8b226c250aa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.897 250273 DEBUG nova.network.neutron [req-a5b01e55-e3dc-45b3-958f-922f92bf44e3 req-a296bf30-c8b7-4046-b27e-701777d8b3b7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Updating instance_info_cache with network_info: [{"id": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "address": "fa:16:3e:45:b2:d4", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10fe80c9-2f", "ovs_interfaceid": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.899 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162032.8946786, fffef24b-bb5b-41c6-a049-c1c4ba8f02fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.899 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] VM Started (Lifecycle Event)#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.939 250273 DEBUG oslo_concurrency.lockutils [req-a5b01e55-e3dc-45b3-958f-922f92bf44e3 req-a296bf30-c8b7-4046-b27e-701777d8b3b7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-fffef24b-bb5b-41c6-a049-c1c4ba8f02fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.941 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.945 250273 DEBUG nova.compute.manager [req-58e116a9-c678-40bb-99d8-f12331aff08b req-9e0c2b1a-6fa7-4e4a-94ba-8957a55a7ea4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Received event network-vif-plugged-10fe80c9-2f99-4371-a60e-b8b226c250aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.945 250273 DEBUG oslo_concurrency.lockutils [req-58e116a9-c678-40bb-99d8-f12331aff08b req-9e0c2b1a-6fa7-4e4a-94ba-8957a55a7ea4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.946 250273 DEBUG oslo_concurrency.lockutils [req-58e116a9-c678-40bb-99d8-f12331aff08b req-9e0c2b1a-6fa7-4e4a-94ba-8957a55a7ea4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.946 250273 DEBUG oslo_concurrency.lockutils [req-58e116a9-c678-40bb-99d8-f12331aff08b req-9e0c2b1a-6fa7-4e4a-94ba-8957a55a7ea4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.946 250273 DEBUG nova.compute.manager [req-58e116a9-c678-40bb-99d8-f12331aff08b req-9e0c2b1a-6fa7-4e4a-94ba-8957a55a7ea4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Processing event network-vif-plugged-10fe80c9-2f99-4371-a60e-b8b226c250aa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.947 250273 DEBUG nova.compute.manager [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.949 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162032.8949497, fffef24b-bb5b-41c6-a049-c1c4ba8f02fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.950 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.952 250273 DEBUG nova.virt.libvirt.driver [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.956 250273 INFO nova.virt.libvirt.driver [-] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Instance spawned successfully.#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.957 250273 DEBUG nova.virt.libvirt.driver [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.980 250273 DEBUG nova.virt.libvirt.driver [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.981 250273 DEBUG nova.virt.libvirt.driver [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.981 250273 DEBUG nova.virt.libvirt.driver [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.982 250273 DEBUG nova.virt.libvirt.driver [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.982 250273 DEBUG nova.virt.libvirt.driver [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.983 250273 DEBUG nova.virt.libvirt.driver [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.989 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.992 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162032.9506576, fffef24b-bb5b-41c6-a049-c1c4ba8f02fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:53:52 np0005593232 nova_compute[250269]: 2026-01-23 09:53:52.992 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:53:53 np0005593232 nova_compute[250269]: 2026-01-23 09:53:53.038 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:53:53 np0005593232 nova_compute[250269]: 2026-01-23 09:53:53.043 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:53:53 np0005593232 podman[301011]: 2026-01-23 09:53:53.06123963 +0000 UTC m=+0.088248967 container create fd88602175ca360e1655391cb443ab7acd12e89837a8ecad758896db3a17d4be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Jan 23 04:53:53 np0005593232 podman[301011]: 2026-01-23 09:53:52.997136996 +0000 UTC m=+0.024146353 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:53:53 np0005593232 nova_compute[250269]: 2026-01-23 09:53:53.095 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:53:53 np0005593232 nova_compute[250269]: 2026-01-23 09:53:53.177 250273 INFO nova.compute.manager [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Took 15.47 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:53:53 np0005593232 nova_compute[250269]: 2026-01-23 09:53:53.177 250273 DEBUG nova.compute.manager [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:53:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:53.214 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:53:53 np0005593232 nova_compute[250269]: 2026-01-23 09:53:53.214 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:53 np0005593232 systemd[1]: Started libpod-conmon-fd88602175ca360e1655391cb443ab7acd12e89837a8ecad758896db3a17d4be.scope.
Jan 23 04:53:53 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:53:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17decb5d3bc96ff0ec2f54e374bf3c654fbc8dec6be2b248b61bc2ccf796b298/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:53:53 np0005593232 podman[301011]: 2026-01-23 09:53:53.366774312 +0000 UTC m=+0.393783659 container init fd88602175ca360e1655391cb443ab7acd12e89837a8ecad758896db3a17d4be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:53:53 np0005593232 podman[301011]: 2026-01-23 09:53:53.376364459 +0000 UTC m=+0.403373796 container start fd88602175ca360e1655391cb443ab7acd12e89837a8ecad758896db3a17d4be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:53:53 np0005593232 neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3[301026]: [NOTICE]   (301030) : New worker (301032) forked
Jan 23 04:53:53 np0005593232 neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3[301026]: [NOTICE]   (301030) : Loading success.
Jan 23 04:53:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:53.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:53:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:53.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:53:53 np0005593232 nova_compute[250269]: 2026-01-23 09:53:53.768 250273 INFO nova.compute.manager [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Took 18.18 seconds to build instance.#033[00m
Jan 23 04:53:53 np0005593232 nova_compute[250269]: 2026-01-23 09:53:53.790 250273 DEBUG oslo_concurrency.lockutils [None req-85cb5053-33a4-49fc-b4b5-b7a0e9b979de 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:53:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:53:53.938 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:53:54 np0005593232 nova_compute[250269]: 2026-01-23 09:53:54.461 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1876: 321 pgs: 321 active+clean; 134 MiB data, 721 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 1018 KiB/s wr, 51 op/s
Jan 23 04:53:55 np0005593232 nova_compute[250269]: 2026-01-23 09:53:55.087 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:55 np0005593232 nova_compute[250269]: 2026-01-23 09:53:55.098 250273 DEBUG nova.compute.manager [req-d942bd02-0773-4560-bb7a-3fcebedb8bb4 req-9776a7ed-8306-4610-8798-9fc65a2887ed 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Received event network-vif-plugged-10fe80c9-2f99-4371-a60e-b8b226c250aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:53:55 np0005593232 nova_compute[250269]: 2026-01-23 09:53:55.099 250273 DEBUG oslo_concurrency.lockutils [req-d942bd02-0773-4560-bb7a-3fcebedb8bb4 req-9776a7ed-8306-4610-8798-9fc65a2887ed 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:53:55 np0005593232 nova_compute[250269]: 2026-01-23 09:53:55.099 250273 DEBUG oslo_concurrency.lockutils [req-d942bd02-0773-4560-bb7a-3fcebedb8bb4 req-9776a7ed-8306-4610-8798-9fc65a2887ed 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:53:55 np0005593232 nova_compute[250269]: 2026-01-23 09:53:55.099 250273 DEBUG oslo_concurrency.lockutils [req-d942bd02-0773-4560-bb7a-3fcebedb8bb4 req-9776a7ed-8306-4610-8798-9fc65a2887ed 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:53:55 np0005593232 nova_compute[250269]: 2026-01-23 09:53:55.099 250273 DEBUG nova.compute.manager [req-d942bd02-0773-4560-bb7a-3fcebedb8bb4 req-9776a7ed-8306-4610-8798-9fc65a2887ed 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] No waiting events found dispatching network-vif-plugged-10fe80c9-2f99-4371-a60e-b8b226c250aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:53:55 np0005593232 nova_compute[250269]: 2026-01-23 09:53:55.100 250273 WARNING nova.compute.manager [req-d942bd02-0773-4560-bb7a-3fcebedb8bb4 req-9776a7ed-8306-4610-8798-9fc65a2887ed 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Received unexpected event network-vif-plugged-10fe80c9-2f99-4371-a60e-b8b226c250aa for instance with vm_state active and task_state None.#033[00m
Jan 23 04:53:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:53:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:55.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:53:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:55.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:53:55 np0005593232 podman[301043]: 2026-01-23 09:53:55.940963747 +0000 UTC m=+0.098131152 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller)
Jan 23 04:53:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1877: 321 pgs: 321 active+clean; 155 MiB data, 732 MiB used, 20 GiB / 21 GiB avail; 857 KiB/s rd, 945 KiB/s wr, 38 op/s
Jan 23 04:53:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:57.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:57.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1878: 321 pgs: 321 active+clean; 180 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 23 04:53:59 np0005593232 nova_compute[250269]: 2026-01-23 09:53:59.465 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:53:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:53:59.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:53:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:53:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:53:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:53:59.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:00 np0005593232 nova_compute[250269]: 2026-01-23 09:54:00.089 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:00 np0005593232 podman[301096]: 2026-01-23 09:54:00.55867488 +0000 UTC m=+0.058903610 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 04:54:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1879: 321 pgs: 321 active+clean; 180 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 23 04:54:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:54:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:01.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:01.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1880: 321 pgs: 321 active+clean; 180 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 23 04:54:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:03.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:03.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:03.941 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:54:04 np0005593232 nova_compute[250269]: 2026-01-23 09:54:04.305 250273 DEBUG oslo_concurrency.lockutils [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquiring lock "refresh_cache-fffef24b-bb5b-41c6-a049-c1c4ba8f02fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:54:04 np0005593232 nova_compute[250269]: 2026-01-23 09:54:04.305 250273 DEBUG oslo_concurrency.lockutils [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquired lock "refresh_cache-fffef24b-bb5b-41c6-a049-c1c4ba8f02fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:54:04 np0005593232 nova_compute[250269]: 2026-01-23 09:54:04.306 250273 DEBUG nova.network.neutron [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:54:04 np0005593232 nova_compute[250269]: 2026-01-23 09:54:04.469 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1881: 321 pgs: 321 active+clean; 180 MiB data, 742 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 23 04:54:05 np0005593232 nova_compute[250269]: 2026-01-23 09:54:05.093 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:54:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:05.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:54:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:05.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:54:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1882: 321 pgs: 321 active+clean; 188 MiB data, 750 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 102 op/s
Jan 23 04:54:07 np0005593232 ovn_controller[151001]: 2026-01-23T09:54:07Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:45:b2:d4 10.100.0.12
Jan 23 04:54:07 np0005593232 ovn_controller[151001]: 2026-01-23T09:54:07Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:45:b2:d4 10.100.0.12
Jan 23 04:54:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:54:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:07.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:54:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:54:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:54:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:54:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:54:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:54:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:54:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:07.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1883: 321 pgs: 321 active+clean; 202 MiB data, 766 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.9 MiB/s wr, 105 op/s
Jan 23 04:54:09 np0005593232 nova_compute[250269]: 2026-01-23 09:54:09.474 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:54:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:09.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:54:09 np0005593232 nova_compute[250269]: 2026-01-23 09:54:09.672 250273 DEBUG nova.network.neutron [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Updating instance_info_cache with network_info: [{"id": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "address": "fa:16:3e:45:b2:d4", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10fe80c9-2f", "ovs_interfaceid": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:54:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:09.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:09 np0005593232 nova_compute[250269]: 2026-01-23 09:54:09.713 250273 DEBUG oslo_concurrency.lockutils [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Releasing lock "refresh_cache-fffef24b-bb5b-41c6-a049-c1c4ba8f02fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:54:09 np0005593232 nova_compute[250269]: 2026-01-23 09:54:09.847 250273 DEBUG nova.virt.libvirt.driver [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Jan 23 04:54:09 np0005593232 nova_compute[250269]: 2026-01-23 09:54:09.848 250273 DEBUG nova.virt.libvirt.volume.remotefs [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Creating file /var/lib/nova/instances/fffef24b-bb5b-41c6-a049-c1c4ba8f02fb/6833f9b7a6484816aa45450a2f18250c.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Jan 23 04:54:09 np0005593232 nova_compute[250269]: 2026-01-23 09:54:09.848 250273 DEBUG oslo_concurrency.processutils [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/fffef24b-bb5b-41c6-a049-c1c4ba8f02fb/6833f9b7a6484816aa45450a2f18250c.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:54:10 np0005593232 nova_compute[250269]: 2026-01-23 09:54:10.095 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:10 np0005593232 nova_compute[250269]: 2026-01-23 09:54:10.342 250273 DEBUG oslo_concurrency.processutils [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/fffef24b-bb5b-41c6-a049-c1c4ba8f02fb/6833f9b7a6484816aa45450a2f18250c.tmp" returned: 1 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:54:10 np0005593232 nova_compute[250269]: 2026-01-23 09:54:10.343 250273 DEBUG oslo_concurrency.processutils [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/fffef24b-bb5b-41c6-a049-c1c4ba8f02fb/6833f9b7a6484816aa45450a2f18250c.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 23 04:54:10 np0005593232 nova_compute[250269]: 2026-01-23 09:54:10.344 250273 DEBUG nova.virt.libvirt.volume.remotefs [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Creating directory /var/lib/nova/instances/fffef24b-bb5b-41c6-a049-c1c4ba8f02fb on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Jan 23 04:54:10 np0005593232 nova_compute[250269]: 2026-01-23 09:54:10.345 250273 DEBUG oslo_concurrency.processutils [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/fffef24b-bb5b-41c6-a049-c1c4ba8f02fb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:54:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1884: 321 pgs: 321 active+clean; 202 MiB data, 766 MiB used, 20 GiB / 21 GiB avail; 169 KiB/s rd, 2.1 MiB/s wr, 43 op/s
Jan 23 04:54:10 np0005593232 nova_compute[250269]: 2026-01-23 09:54:10.631 250273 DEBUG oslo_concurrency.processutils [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/fffef24b-bb5b-41c6-a049-c1c4ba8f02fb" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:54:10 np0005593232 nova_compute[250269]: 2026-01-23 09:54:10.639 250273 DEBUG nova.virt.libvirt.driver [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 23 04:54:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:54:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:11.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 04:54:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:11.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:54:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 04:54:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 23 04:54:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 04:54:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:54:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 23 04:54:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 04:54:12 np0005593232 nova_compute[250269]: 2026-01-23 09:54:12.258 250273 DEBUG oslo_concurrency.lockutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Acquiring lock "b3b41870-7e94-44fd-84bb-575dbf15c745" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:54:12 np0005593232 nova_compute[250269]: 2026-01-23 09:54:12.258 250273 DEBUG oslo_concurrency.lockutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Lock "b3b41870-7e94-44fd-84bb-575dbf15c745" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:54:12 np0005593232 nova_compute[250269]: 2026-01-23 09:54:12.301 250273 DEBUG nova.compute.manager [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:54:12 np0005593232 nova_compute[250269]: 2026-01-23 09:54:12.480 250273 DEBUG oslo_concurrency.lockutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:54:12 np0005593232 nova_compute[250269]: 2026-01-23 09:54:12.481 250273 DEBUG oslo_concurrency.lockutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:54:12 np0005593232 nova_compute[250269]: 2026-01-23 09:54:12.488 250273 DEBUG nova.virt.hardware [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:54:12 np0005593232 nova_compute[250269]: 2026-01-23 09:54:12.489 250273 INFO nova.compute.claims [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:54:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1885: 321 pgs: 321 active+clean; 209 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 83 op/s
Jan 23 04:54:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:54:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:54:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:54:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:54:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:54:12 np0005593232 nova_compute[250269]: 2026-01-23 09:54:12.698 250273 DEBUG oslo_concurrency.processutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:54:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:54:12 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 06101616-adf7-4328-82a2-dc7aa1d1a211 does not exist
Jan 23 04:54:12 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 8d87a23c-d6db-45c3-84e6-2eea67004e84 does not exist
Jan 23 04:54:12 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c18d8ae1-afa5-4a29-9a6d-9e3d360ab62f does not exist
Jan 23 04:54:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:54:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:54:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:54:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:54:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:54:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:54:12 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:54:12 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 04:54:12 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:54:12 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 04:54:12 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.303 250273 DEBUG oslo_concurrency.processutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.310 250273 DEBUG nova.compute.provider_tree [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.398 250273 DEBUG nova.scheduler.client.report [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:54:13 np0005593232 podman[301441]: 2026-01-23 09:54:13.41315205 +0000 UTC m=+0.084762800 container create b906bbd7a68ab1efdceea58a3c9fd9b93320ef5b98044089fa77e57e24875f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 04:54:13 np0005593232 podman[301441]: 2026-01-23 09:54:13.35423834 +0000 UTC m=+0.025849140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:54:13 np0005593232 systemd[1]: Started libpod-conmon-b906bbd7a68ab1efdceea58a3c9fd9b93320ef5b98044089fa77e57e24875f05.scope.
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.499 250273 DEBUG oslo_concurrency.lockutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.018s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.500 250273 DEBUG nova.compute.manager [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 04:54:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:13.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:13 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:54:13 np0005593232 podman[301441]: 2026-01-23 09:54:13.570054376 +0000 UTC m=+0.241665166 container init b906bbd7a68ab1efdceea58a3c9fd9b93320ef5b98044089fa77e57e24875f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:54:13 np0005593232 podman[301441]: 2026-01-23 09:54:13.577193194 +0000 UTC m=+0.248803944 container start b906bbd7a68ab1efdceea58a3c9fd9b93320ef5b98044089fa77e57e24875f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:54:13 np0005593232 blissful_archimedes[301458]: 167 167
Jan 23 04:54:13 np0005593232 systemd[1]: libpod-b906bbd7a68ab1efdceea58a3c9fd9b93320ef5b98044089fa77e57e24875f05.scope: Deactivated successfully.
Jan 23 04:54:13 np0005593232 conmon[301458]: conmon b906bbd7a68ab1efdcee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b906bbd7a68ab1efdceea58a3c9fd9b93320ef5b98044089fa77e57e24875f05.scope/container/memory.events
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.615 250273 DEBUG nova.compute.manager [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.617 250273 DEBUG nova.network.neutron [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.655 250273 INFO nova.virt.libvirt.driver [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Instance shutdown successfully after 3 seconds.#033[00m
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.664 250273 INFO nova.virt.libvirt.driver [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:54:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:13.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:13 np0005593232 podman[301441]: 2026-01-23 09:54:13.690112597 +0000 UTC m=+0.361723347 container attach b906bbd7a68ab1efdceea58a3c9fd9b93320ef5b98044089fa77e57e24875f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_archimedes, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 23 04:54:13 np0005593232 podman[301441]: 2026-01-23 09:54:13.691008402 +0000 UTC m=+0.362619192 container died b906bbd7a68ab1efdceea58a3c9fd9b93320ef5b98044089fa77e57e24875f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:54:13 np0005593232 kernel: tap10fe80c9-2f (unregistering): left promiscuous mode
Jan 23 04:54:13 np0005593232 NetworkManager[49057]: <info>  [1769162053.7196] device (tap10fe80c9-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:54:13 np0005593232 ovn_controller[151001]: 2026-01-23T09:54:13Z|00250|binding|INFO|Releasing lport 10fe80c9-2f99-4371-a60e-b8b226c250aa from this chassis (sb_readonly=0)
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.730 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:13 np0005593232 ovn_controller[151001]: 2026-01-23T09:54:13Z|00251|binding|INFO|Setting lport 10fe80c9-2f99-4371-a60e-b8b226c250aa down in Southbound
Jan 23 04:54:13 np0005593232 ovn_controller[151001]: 2026-01-23T09:54:13Z|00252|binding|INFO|Removing iface tap10fe80c9-2f ovn-installed in OVS
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.737 250273 DEBUG nova.compute.manager [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:54:13 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:13.740 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:b2:d4 10.100.0.12'], port_security=['fa:16:3e:45:b2:d4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'fffef24b-bb5b-41c6-a049-c1c4ba8f02fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d2cdc4c-47a0-475b-8e71-39465d365de3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '86d938c8e2bb41a79012befd500d1088', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7a7b70d2-dc13-4ace-b4e0-b2bcfa748347', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=99c61616-3f86-4228-bb78-0dc84e2b2157, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=10fe80c9-2f99-4371-a60e-b8b226c250aa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:54:13 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:13.742 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 10fe80c9-2f99-4371-a60e-b8b226c250aa in datapath 6d2cdc4c-47a0-475b-8e71-39465d365de3 unbound from our chassis#033[00m
Jan 23 04:54:13 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:13.744 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d2cdc4c-47a0-475b-8e71-39465d365de3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 04:54:13 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:13.748 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b98f5b61-d361-4d0d-8702-45575fd78cab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:54:13 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:13.750 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3 namespace which is not needed anymore#033[00m
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.768 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:13 np0005593232 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Jan 23 04:54:13 np0005593232 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000004f.scope: Consumed 14.267s CPU time.
Jan 23 04:54:13 np0005593232 systemd-machined[215836]: Machine qemu-30-instance-0000004f terminated.
Jan 23 04:54:13 np0005593232 systemd[1]: var-lib-containers-storage-overlay-61e505dfaa661055714a1317186f2b0da0cbe706cc93dfa2ee71546d78420424-merged.mount: Deactivated successfully.
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.889 250273 INFO nova.virt.libvirt.driver [-] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Instance destroyed successfully.#033[00m
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.890 250273 DEBUG nova.virt.libvirt.vif [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:53:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1974132906',display_name='tempest-ServerDiskConfigTestJSON-server-1974132906',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1974132906',id=79,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:53:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='86d938c8e2bb41a79012befd500d1088',ramdisk_id='',reservation_id='r-09i9yxy4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-211417238',owner_user_name='tempest-ServerDiskConfigTestJSON-211417238-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:54:02Z,user_data=None,user_id='0cfac2191989448ead77e75ca3910ac4',uuid=fffef24b-bb5b-41c6-a049-c1c4ba8f02fb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "address": "fa:16:3e:45:b2:d4", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "vif_mac": "fa:16:3e:45:b2:d4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10fe80c9-2f", "ovs_interfaceid": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.890 250273 DEBUG nova.network.os_vif_util [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Converting VIF {"id": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "address": "fa:16:3e:45:b2:d4", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "vif_mac": "fa:16:3e:45:b2:d4"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10fe80c9-2f", "ovs_interfaceid": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.891 250273 DEBUG nova.network.os_vif_util [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:b2:d4,bridge_name='br-int',has_traffic_filtering=True,id=10fe80c9-2f99-4371-a60e-b8b226c250aa,network=Network(6d2cdc4c-47a0-475b-8e71-39465d365de3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10fe80c9-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.891 250273 DEBUG os_vif [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:b2:d4,bridge_name='br-int',has_traffic_filtering=True,id=10fe80c9-2f99-4371-a60e-b8b226c250aa,network=Network(6d2cdc4c-47a0-475b-8e71-39465d365de3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10fe80c9-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.893 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.894 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10fe80c9-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.895 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.897 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.899 250273 INFO os_vif [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:b2:d4,bridge_name='br-int',has_traffic_filtering=True,id=10fe80c9-2f99-4371-a60e-b8b226c250aa,network=Network(6d2cdc4c-47a0-475b-8e71-39465d365de3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10fe80c9-2f')#033[00m
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.904 250273 DEBUG nova.virt.libvirt.driver [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.905 250273 DEBUG nova.virt.libvirt.driver [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.940 250273 DEBUG nova.compute.manager [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.941 250273 DEBUG nova.virt.libvirt.driver [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.941 250273 INFO nova.virt.libvirt.driver [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Creating image(s)#033[00m
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.969 250273 DEBUG nova.storage.rbd_utils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] rbd image b3b41870-7e94-44fd-84bb-575dbf15c745_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:54:13 np0005593232 nova_compute[250269]: 2026-01-23 09:54:13.997 250273 DEBUG nova.storage.rbd_utils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] rbd image b3b41870-7e94-44fd-84bb-575dbf15c745_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:54:14 np0005593232 nova_compute[250269]: 2026-01-23 09:54:14.156 250273 DEBUG nova.storage.rbd_utils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] rbd image b3b41870-7e94-44fd-84bb-575dbf15c745_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:54:14 np0005593232 nova_compute[250269]: 2026-01-23 09:54:14.160 250273 DEBUG oslo_concurrency.processutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:54:14 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:54:14 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:54:14 np0005593232 nova_compute[250269]: 2026-01-23 09:54:14.194 250273 DEBUG nova.policy [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '040257fcfb8e485989e95807791e25f6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'bf8efe4dc7e34393b5cd5a5ef2735ecf', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 04:54:14 np0005593232 podman[301441]: 2026-01-23 09:54:14.199443581 +0000 UTC m=+0.871054331 container remove b906bbd7a68ab1efdceea58a3c9fd9b93320ef5b98044089fa77e57e24875f05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 04:54:14 np0005593232 systemd[1]: libpod-conmon-b906bbd7a68ab1efdceea58a3c9fd9b93320ef5b98044089fa77e57e24875f05.scope: Deactivated successfully.
Jan 23 04:54:14 np0005593232 nova_compute[250269]: 2026-01-23 09:54:14.242 250273 DEBUG oslo_concurrency.processutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:54:14 np0005593232 nova_compute[250269]: 2026-01-23 09:54:14.243 250273 DEBUG oslo_concurrency.lockutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:54:14 np0005593232 nova_compute[250269]: 2026-01-23 09:54:14.243 250273 DEBUG oslo_concurrency.lockutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:54:14 np0005593232 nova_compute[250269]: 2026-01-23 09:54:14.244 250273 DEBUG oslo_concurrency.lockutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:54:14 np0005593232 nova_compute[250269]: 2026-01-23 09:54:14.271 250273 DEBUG nova.storage.rbd_utils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] rbd image b3b41870-7e94-44fd-84bb-575dbf15c745_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:54:14 np0005593232 nova_compute[250269]: 2026-01-23 09:54:14.275 250273 DEBUG oslo_concurrency.processutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 b3b41870-7e94-44fd-84bb-575dbf15c745_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:54:14 np0005593232 nova_compute[250269]: 2026-01-23 09:54:14.364 250273 DEBUG neutronclient.v2_0.client [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 10fe80c9-2f99-4371-a60e-b8b226c250aa for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Jan 23 04:54:14 np0005593232 neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3[301026]: [NOTICE]   (301030) : haproxy version is 2.8.14-c23fe91
Jan 23 04:54:14 np0005593232 neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3[301026]: [NOTICE]   (301030) : path to executable is /usr/sbin/haproxy
Jan 23 04:54:14 np0005593232 neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3[301026]: [WARNING]  (301030) : Exiting Master process...
Jan 23 04:54:14 np0005593232 neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3[301026]: [ALERT]    (301030) : Current worker (301032) exited with code 143 (Terminated)
Jan 23 04:54:14 np0005593232 neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3[301026]: [WARNING]  (301030) : All workers exited. Exiting... (0)
Jan 23 04:54:14 np0005593232 systemd[1]: libpod-fd88602175ca360e1655391cb443ab7acd12e89837a8ecad758896db3a17d4be.scope: Deactivated successfully.
Jan 23 04:54:14 np0005593232 podman[301589]: 2026-01-23 09:54:14.393981864 +0000 UTC m=+0.079656887 container died fd88602175ca360e1655391cb443ab7acd12e89837a8ecad758896db3a17d4be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:54:14 np0005593232 podman[301594]: 2026-01-23 09:54:14.390969751 +0000 UTC m=+0.064852176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:54:14 np0005593232 nova_compute[250269]: 2026-01-23 09:54:14.587 250273 DEBUG oslo_concurrency.lockutils [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquiring lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:54:14 np0005593232 nova_compute[250269]: 2026-01-23 09:54:14.588 250273 DEBUG oslo_concurrency.lockutils [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:54:14 np0005593232 nova_compute[250269]: 2026-01-23 09:54:14.588 250273 DEBUG oslo_concurrency.lockutils [None req-0e3d80b6-54ee-4047-ae8f-401c4c3f54e7 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:54:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1886: 321 pgs: 321 active+clean; 213 MiB data, 768 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Jan 23 04:54:14 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fd88602175ca360e1655391cb443ab7acd12e89837a8ecad758896db3a17d4be-userdata-shm.mount: Deactivated successfully.
Jan 23 04:54:14 np0005593232 systemd[1]: var-lib-containers-storage-overlay-17decb5d3bc96ff0ec2f54e374bf3c654fbc8dec6be2b248b61bc2ccf796b298-merged.mount: Deactivated successfully.
Jan 23 04:54:14 np0005593232 podman[301594]: 2026-01-23 09:54:14.811571945 +0000 UTC m=+0.485454350 container create 969ad1a38cd2381bdf295c6c3d3a034353bce2e71ffc6cde65136904fc4524a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:54:14 np0005593232 systemd[1]: Started libpod-conmon-969ad1a38cd2381bdf295c6c3d3a034353bce2e71ffc6cde65136904fc4524a3.scope.
Jan 23 04:54:14 np0005593232 nova_compute[250269]: 2026-01-23 09:54:14.901 250273 DEBUG oslo_concurrency.processutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 b3b41870-7e94-44fd-84bb-575dbf15c745_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:54:14 np0005593232 podman[301589]: 2026-01-23 09:54:14.904748848 +0000 UTC m=+0.590423871 container cleanup fd88602175ca360e1655391cb443ab7acd12e89837a8ecad758896db3a17d4be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 23 04:54:14 np0005593232 systemd[1]: libpod-conmon-fd88602175ca360e1655391cb443ab7acd12e89837a8ecad758896db3a17d4be.scope: Deactivated successfully.
Jan 23 04:54:14 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:54:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a21e88ddcc3e5dbbe00ff5a1c8588f0e2971914f453c6745380cab5c19053737/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:54:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a21e88ddcc3e5dbbe00ff5a1c8588f0e2971914f453c6745380cab5c19053737/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:54:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a21e88ddcc3e5dbbe00ff5a1c8588f0e2971914f453c6745380cab5c19053737/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:54:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a21e88ddcc3e5dbbe00ff5a1c8588f0e2971914f453c6745380cab5c19053737/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:54:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a21e88ddcc3e5dbbe00ff5a1c8588f0e2971914f453c6745380cab5c19053737/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:54:14 np0005593232 podman[301594]: 2026-01-23 09:54:14.947992952 +0000 UTC m=+0.621875387 container init 969ad1a38cd2381bdf295c6c3d3a034353bce2e71ffc6cde65136904fc4524a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 23 04:54:14 np0005593232 podman[301594]: 2026-01-23 09:54:14.956773626 +0000 UTC m=+0.630656031 container start 969ad1a38cd2381bdf295c6c3d3a034353bce2e71ffc6cde65136904fc4524a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hoover, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 04:54:14 np0005593232 nova_compute[250269]: 2026-01-23 09:54:14.996 250273 DEBUG nova.storage.rbd_utils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] resizing rbd image b3b41870-7e94-44fd-84bb-575dbf15c745_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 04:54:15 np0005593232 podman[301594]: 2026-01-23 09:54:15.033305346 +0000 UTC m=+0.707187751 container attach 969ad1a38cd2381bdf295c6c3d3a034353bce2e71ffc6cde65136904fc4524a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hoover, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:54:15 np0005593232 nova_compute[250269]: 2026-01-23 09:54:15.095 250273 DEBUG nova.objects.instance [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Lazy-loading 'migration_context' on Instance uuid b3b41870-7e94-44fd-84bb-575dbf15c745 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:54:15 np0005593232 nova_compute[250269]: 2026-01-23 09:54:15.097 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:15 np0005593232 podman[301663]: 2026-01-23 09:54:15.122408265 +0000 UTC m=+0.191637704 container remove fd88602175ca360e1655391cb443ab7acd12e89837a8ecad758896db3a17d4be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:54:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:15.128 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6ace6efb-6b0d-4385-ae04-7341a6dd38a0]: (4, ('Fri Jan 23 09:54:14 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3 (fd88602175ca360e1655391cb443ab7acd12e89837a8ecad758896db3a17d4be)\nfd88602175ca360e1655391cb443ab7acd12e89837a8ecad758896db3a17d4be\nFri Jan 23 09:54:14 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3 (fd88602175ca360e1655391cb443ab7acd12e89837a8ecad758896db3a17d4be)\nfd88602175ca360e1655391cb443ab7acd12e89837a8ecad758896db3a17d4be\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:54:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:15.130 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2f85a396-b6bc-4d97-beae-5abee72c2e13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:54:15 np0005593232 nova_compute[250269]: 2026-01-23 09:54:15.131 250273 DEBUG nova.virt.libvirt.driver [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:54:15 np0005593232 nova_compute[250269]: 2026-01-23 09:54:15.131 250273 DEBUG nova.virt.libvirt.driver [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Ensure instance console log exists: /var/lib/nova/instances/b3b41870-7e94-44fd-84bb-575dbf15c745/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:54:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:15.131 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d2cdc4c-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:54:15 np0005593232 nova_compute[250269]: 2026-01-23 09:54:15.131 250273 DEBUG oslo_concurrency.lockutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:54:15 np0005593232 nova_compute[250269]: 2026-01-23 09:54:15.132 250273 DEBUG oslo_concurrency.lockutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:54:15 np0005593232 nova_compute[250269]: 2026-01-23 09:54:15.132 250273 DEBUG oslo_concurrency.lockutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:54:15 np0005593232 kernel: tap6d2cdc4c-40: left promiscuous mode
Jan 23 04:54:15 np0005593232 nova_compute[250269]: 2026-01-23 09:54:15.133 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:15 np0005593232 nova_compute[250269]: 2026-01-23 09:54:15.135 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:15.138 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[40d67a72-ec09-4e3f-8436-cf7aa2f06b4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:54:15 np0005593232 nova_compute[250269]: 2026-01-23 09:54:15.155 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:15.156 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[fab119cb-1a8e-4dfb-82ed-9c58658677a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:54:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:15.157 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d824052f-a0af-4250-b5ac-7d211883c360]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:54:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:15.175 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1a29f581-3b75-4629-96c4-5f20cc2f9240]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 594555, 'reachable_time': 15919, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301747, 'error': None, 'target': 'ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:54:15 np0005593232 systemd[1]: run-netns-ovnmeta\x2d6d2cdc4c\x2d47a0\x2d475b\x2d8e71\x2d39465d365de3.mount: Deactivated successfully.
Jan 23 04:54:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:15.181 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6d2cdc4c-47a0-475b-8e71-39465d365de3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 04:54:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:15.181 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[0b2626b4-77f4-441c-a218-9d1ba1e97802]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:54:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:15.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:15.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:15 np0005593232 hopeful_hoover[301655]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:54:15 np0005593232 hopeful_hoover[301655]: --> relative data size: 1.0
Jan 23 04:54:15 np0005593232 hopeful_hoover[301655]: --> All data devices are unavailable
Jan 23 04:54:15 np0005593232 systemd[1]: libpod-969ad1a38cd2381bdf295c6c3d3a034353bce2e71ffc6cde65136904fc4524a3.scope: Deactivated successfully.
Jan 23 04:54:15 np0005593232 podman[301594]: 2026-01-23 09:54:15.82474844 +0000 UTC m=+1.498630865 container died 969ad1a38cd2381bdf295c6c3d3a034353bce2e71ffc6cde65136904fc4524a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:54:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:54:16 np0005593232 nova_compute[250269]: 2026-01-23 09:54:16.267 250273 DEBUG nova.network.neutron [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Successfully created port: 3262bd82-5538-4d9c-af99-430bed9a5120 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 04:54:16 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a21e88ddcc3e5dbbe00ff5a1c8588f0e2971914f453c6745380cab5c19053737-merged.mount: Deactivated successfully.
Jan 23 04:54:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1887: 321 pgs: 321 active+clean; 227 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 144 op/s
Jan 23 04:54:16 np0005593232 podman[301594]: 2026-01-23 09:54:16.956453713 +0000 UTC m=+2.630336118 container remove 969ad1a38cd2381bdf295c6c3d3a034353bce2e71ffc6cde65136904fc4524a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_hoover, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:54:16 np0005593232 systemd[1]: libpod-conmon-969ad1a38cd2381bdf295c6c3d3a034353bce2e71ffc6cde65136904fc4524a3.scope: Deactivated successfully.
Jan 23 04:54:17 np0005593232 nova_compute[250269]: 2026-01-23 09:54:17.498 250273 DEBUG nova.compute.manager [req-6aad24c0-e8bc-4a76-a464-b847fa3592f8 req-682054d2-1375-44a3-9df9-32a1b3a1c035 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Received event network-vif-unplugged-10fe80c9-2f99-4371-a60e-b8b226c250aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:54:17 np0005593232 nova_compute[250269]: 2026-01-23 09:54:17.500 250273 DEBUG oslo_concurrency.lockutils [req-6aad24c0-e8bc-4a76-a464-b847fa3592f8 req-682054d2-1375-44a3-9df9-32a1b3a1c035 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:54:17 np0005593232 nova_compute[250269]: 2026-01-23 09:54:17.500 250273 DEBUG oslo_concurrency.lockutils [req-6aad24c0-e8bc-4a76-a464-b847fa3592f8 req-682054d2-1375-44a3-9df9-32a1b3a1c035 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:54:17 np0005593232 nova_compute[250269]: 2026-01-23 09:54:17.501 250273 DEBUG oslo_concurrency.lockutils [req-6aad24c0-e8bc-4a76-a464-b847fa3592f8 req-682054d2-1375-44a3-9df9-32a1b3a1c035 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:54:17 np0005593232 nova_compute[250269]: 2026-01-23 09:54:17.501 250273 DEBUG nova.compute.manager [req-6aad24c0-e8bc-4a76-a464-b847fa3592f8 req-682054d2-1375-44a3-9df9-32a1b3a1c035 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] No waiting events found dispatching network-vif-unplugged-10fe80c9-2f99-4371-a60e-b8b226c250aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:54:17 np0005593232 nova_compute[250269]: 2026-01-23 09:54:17.502 250273 WARNING nova.compute.manager [req-6aad24c0-e8bc-4a76-a464-b847fa3592f8 req-682054d2-1375-44a3-9df9-32a1b3a1c035 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Received unexpected event network-vif-unplugged-10fe80c9-2f99-4371-a60e-b8b226c250aa for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 23 04:54:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:54:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:17.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:54:17 np0005593232 podman[301910]: 2026-01-23 09:54:17.535200909 +0000 UTC m=+0.020747058 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:54:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:17.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:17 np0005593232 podman[301910]: 2026-01-23 09:54:17.754753609 +0000 UTC m=+0.240299728 container create 757213837f73b19c278dabba5514c53a83773a07bdca0bc2ae612b4ada5355c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 23 04:54:17 np0005593232 systemd[1]: Started libpod-conmon-757213837f73b19c278dabba5514c53a83773a07bdca0bc2ae612b4ada5355c4.scope.
Jan 23 04:54:17 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:54:18 np0005593232 podman[301910]: 2026-01-23 09:54:18.047510796 +0000 UTC m=+0.533056935 container init 757213837f73b19c278dabba5514c53a83773a07bdca0bc2ae612b4ada5355c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wilbur, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 04:54:18 np0005593232 podman[301910]: 2026-01-23 09:54:18.056684321 +0000 UTC m=+0.542230440 container start 757213837f73b19c278dabba5514c53a83773a07bdca0bc2ae612b4ada5355c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 04:54:18 np0005593232 hungry_wilbur[301927]: 167 167
Jan 23 04:54:18 np0005593232 systemd[1]: libpod-757213837f73b19c278dabba5514c53a83773a07bdca0bc2ae612b4ada5355c4.scope: Deactivated successfully.
Jan 23 04:54:18 np0005593232 podman[301910]: 2026-01-23 09:54:18.097919219 +0000 UTC m=+0.583465338 container attach 757213837f73b19c278dabba5514c53a83773a07bdca0bc2ae612b4ada5355c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wilbur, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:54:18 np0005593232 podman[301910]: 2026-01-23 09:54:18.09833452 +0000 UTC m=+0.583880659 container died 757213837f73b19c278dabba5514c53a83773a07bdca0bc2ae612b4ada5355c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 23 04:54:18 np0005593232 systemd[1]: var-lib-containers-storage-overlay-982184b60180909ef92e55a96856dccc9b930cd44766259dcd9fa123e700a59e-merged.mount: Deactivated successfully.
Jan 23 04:54:18 np0005593232 podman[301910]: 2026-01-23 09:54:18.332107316 +0000 UTC m=+0.817653445 container remove 757213837f73b19c278dabba5514c53a83773a07bdca0bc2ae612b4ada5355c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 04:54:18 np0005593232 systemd[1]: libpod-conmon-757213837f73b19c278dabba5514c53a83773a07bdca0bc2ae612b4ada5355c4.scope: Deactivated successfully.
Jan 23 04:54:18 np0005593232 podman[301954]: 2026-01-23 09:54:18.486956815 +0000 UTC m=+0.024685758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:54:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1888: 321 pgs: 321 active+clean; 260 MiB data, 789 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.2 MiB/s wr, 162 op/s
Jan 23 04:54:18 np0005593232 podman[301954]: 2026-01-23 09:54:18.722387197 +0000 UTC m=+0.260116120 container create ddfc782ada4fcc691d82bb19448e698cc55a043130fe4f0ed13e3cfb84a3c85e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shamir, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:54:18 np0005593232 systemd[1]: Started libpod-conmon-ddfc782ada4fcc691d82bb19448e698cc55a043130fe4f0ed13e3cfb84a3c85e.scope.
Jan 23 04:54:18 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:54:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8424d5cd5a9f8527f1f1194998243af99228470b80c692edd3568bdad41a834/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:54:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8424d5cd5a9f8527f1f1194998243af99228470b80c692edd3568bdad41a834/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:54:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8424d5cd5a9f8527f1f1194998243af99228470b80c692edd3568bdad41a834/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:54:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8424d5cd5a9f8527f1f1194998243af99228470b80c692edd3568bdad41a834/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:54:18 np0005593232 podman[301954]: 2026-01-23 09:54:18.880233769 +0000 UTC m=+0.417962722 container init ddfc782ada4fcc691d82bb19448e698cc55a043130fe4f0ed13e3cfb84a3c85e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shamir, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:54:18 np0005593232 podman[301954]: 2026-01-23 09:54:18.888442928 +0000 UTC m=+0.426171851 container start ddfc782ada4fcc691d82bb19448e698cc55a043130fe4f0ed13e3cfb84a3c85e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shamir, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 04:54:18 np0005593232 nova_compute[250269]: 2026-01-23 09:54:18.897 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:18 np0005593232 podman[301954]: 2026-01-23 09:54:18.935497857 +0000 UTC m=+0.473226780 container attach ddfc782ada4fcc691d82bb19448e698cc55a043130fe4f0ed13e3cfb84a3c85e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shamir, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 04:54:19 np0005593232 nova_compute[250269]: 2026-01-23 09:54:19.335 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:54:19 np0005593232 nova_compute[250269]: 2026-01-23 09:54:19.468 250273 DEBUG nova.network.neutron [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Successfully updated port: 3262bd82-5538-4d9c-af99-430bed9a5120 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 04:54:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:54:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:19.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:54:19 np0005593232 nova_compute[250269]: 2026-01-23 09:54:19.515 250273 DEBUG oslo_concurrency.lockutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Acquiring lock "refresh_cache-b3b41870-7e94-44fd-84bb-575dbf15c745" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:54:19 np0005593232 nova_compute[250269]: 2026-01-23 09:54:19.515 250273 DEBUG oslo_concurrency.lockutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Acquired lock "refresh_cache-b3b41870-7e94-44fd-84bb-575dbf15c745" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:54:19 np0005593232 nova_compute[250269]: 2026-01-23 09:54:19.515 250273 DEBUG nova.network.neutron [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:54:19 np0005593232 nice_shamir[301970]: {
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:    "0": [
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:        {
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:            "devices": [
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:                "/dev/loop3"
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:            ],
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:            "lv_name": "ceph_lv0",
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:            "lv_size": "7511998464",
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:            "name": "ceph_lv0",
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:            "tags": {
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:                "ceph.cluster_name": "ceph",
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:                "ceph.crush_device_class": "",
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:                "ceph.encrypted": "0",
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:                "ceph.osd_id": "0",
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:                "ceph.type": "block",
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:                "ceph.vdo": "0"
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:            },
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:            "type": "block",
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:            "vg_name": "ceph_vg0"
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:        }
Jan 23 04:54:19 np0005593232 nice_shamir[301970]:    ]
Jan 23 04:54:19 np0005593232 nice_shamir[301970]: }
Jan 23 04:54:19 np0005593232 systemd[1]: libpod-ddfc782ada4fcc691d82bb19448e698cc55a043130fe4f0ed13e3cfb84a3c85e.scope: Deactivated successfully.
Jan 23 04:54:19 np0005593232 podman[301954]: 2026-01-23 09:54:19.678539295 +0000 UTC m=+1.216268248 container died ddfc782ada4fcc691d82bb19448e698cc55a043130fe4f0ed13e3cfb84a3c85e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shamir, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:54:19 np0005593232 nova_compute[250269]: 2026-01-23 09:54:19.692 250273 DEBUG nova.compute.manager [req-b16a9e95-5e4e-40e0-8209-893eae362738 req-5765c90d-f68a-43d7-bb05-588bea02a63e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Received event network-vif-plugged-10fe80c9-2f99-4371-a60e-b8b226c250aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:54:19 np0005593232 nova_compute[250269]: 2026-01-23 09:54:19.694 250273 DEBUG oslo_concurrency.lockutils [req-b16a9e95-5e4e-40e0-8209-893eae362738 req-5765c90d-f68a-43d7-bb05-588bea02a63e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:54:19 np0005593232 nova_compute[250269]: 2026-01-23 09:54:19.694 250273 DEBUG oslo_concurrency.lockutils [req-b16a9e95-5e4e-40e0-8209-893eae362738 req-5765c90d-f68a-43d7-bb05-588bea02a63e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:54:19 np0005593232 nova_compute[250269]: 2026-01-23 09:54:19.694 250273 DEBUG oslo_concurrency.lockutils [req-b16a9e95-5e4e-40e0-8209-893eae362738 req-5765c90d-f68a-43d7-bb05-588bea02a63e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:54:19 np0005593232 nova_compute[250269]: 2026-01-23 09:54:19.695 250273 DEBUG nova.compute.manager [req-b16a9e95-5e4e-40e0-8209-893eae362738 req-5765c90d-f68a-43d7-bb05-588bea02a63e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] No waiting events found dispatching network-vif-plugged-10fe80c9-2f99-4371-a60e-b8b226c250aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:54:19 np0005593232 nova_compute[250269]: 2026-01-23 09:54:19.695 250273 WARNING nova.compute.manager [req-b16a9e95-5e4e-40e0-8209-893eae362738 req-5765c90d-f68a-43d7-bb05-588bea02a63e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Received unexpected event network-vif-plugged-10fe80c9-2f99-4371-a60e-b8b226c250aa for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 23 04:54:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:19.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:19 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b8424d5cd5a9f8527f1f1194998243af99228470b80c692edd3568bdad41a834-merged.mount: Deactivated successfully.
Jan 23 04:54:20 np0005593232 nova_compute[250269]: 2026-01-23 09:54:20.000 250273 DEBUG nova.compute.manager [req-957ee8f8-80a5-49f5-b408-7932dd4f6cc7 req-d144c76f-ec01-4e94-a12a-7995606662a0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Received event network-changed-3262bd82-5538-4d9c-af99-430bed9a5120 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:54:20 np0005593232 nova_compute[250269]: 2026-01-23 09:54:20.002 250273 DEBUG nova.compute.manager [req-957ee8f8-80a5-49f5-b408-7932dd4f6cc7 req-d144c76f-ec01-4e94-a12a-7995606662a0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Refreshing instance network info cache due to event network-changed-3262bd82-5538-4d9c-af99-430bed9a5120. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:54:20 np0005593232 nova_compute[250269]: 2026-01-23 09:54:20.002 250273 DEBUG oslo_concurrency.lockutils [req-957ee8f8-80a5-49f5-b408-7932dd4f6cc7 req-d144c76f-ec01-4e94-a12a-7995606662a0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-b3b41870-7e94-44fd-84bb-575dbf15c745" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:54:20 np0005593232 podman[301954]: 2026-01-23 09:54:20.081881219 +0000 UTC m=+1.619610142 container remove ddfc782ada4fcc691d82bb19448e698cc55a043130fe4f0ed13e3cfb84a3c85e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_shamir, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 04:54:20 np0005593232 systemd[1]: libpod-conmon-ddfc782ada4fcc691d82bb19448e698cc55a043130fe4f0ed13e3cfb84a3c85e.scope: Deactivated successfully.
Jan 23 04:54:20 np0005593232 nova_compute[250269]: 2026-01-23 09:54:20.099 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:20 np0005593232 nova_compute[250269]: 2026-01-23 09:54:20.115 250273 DEBUG nova.network.neutron [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:54:20 np0005593232 nova_compute[250269]: 2026-01-23 09:54:20.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:54:20 np0005593232 nova_compute[250269]: 2026-01-23 09:54:20.388 250273 DEBUG nova.compute.manager [req-beb77c0d-7fe0-4f20-9ad1-c7c5d45bc557 req-77cdfd7f-dd57-4059-84e2-b19eac6aedc3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Received event network-changed-10fe80c9-2f99-4371-a60e-b8b226c250aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:54:20 np0005593232 nova_compute[250269]: 2026-01-23 09:54:20.388 250273 DEBUG nova.compute.manager [req-beb77c0d-7fe0-4f20-9ad1-c7c5d45bc557 req-77cdfd7f-dd57-4059-84e2-b19eac6aedc3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Refreshing instance network info cache due to event network-changed-10fe80c9-2f99-4371-a60e-b8b226c250aa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:54:20 np0005593232 nova_compute[250269]: 2026-01-23 09:54:20.388 250273 DEBUG oslo_concurrency.lockutils [req-beb77c0d-7fe0-4f20-9ad1-c7c5d45bc557 req-77cdfd7f-dd57-4059-84e2-b19eac6aedc3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-fffef24b-bb5b-41c6-a049-c1c4ba8f02fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:54:20 np0005593232 nova_compute[250269]: 2026-01-23 09:54:20.389 250273 DEBUG oslo_concurrency.lockutils [req-beb77c0d-7fe0-4f20-9ad1-c7c5d45bc557 req-77cdfd7f-dd57-4059-84e2-b19eac6aedc3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-fffef24b-bb5b-41c6-a049-c1c4ba8f02fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:54:20 np0005593232 nova_compute[250269]: 2026-01-23 09:54:20.389 250273 DEBUG nova.network.neutron [req-beb77c0d-7fe0-4f20-9ad1-c7c5d45bc557 req-77cdfd7f-dd57-4059-84e2-b19eac6aedc3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Refreshing network info cache for port 10fe80c9-2f99-4371-a60e-b8b226c250aa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:54:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1889: 321 pgs: 321 active+clean; 260 MiB data, 789 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 124 op/s
Jan 23 04:54:20 np0005593232 podman[302157]: 2026-01-23 09:54:20.654825802 +0000 UTC m=+0.030212131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:54:20 np0005593232 podman[302157]: 2026-01-23 09:54:20.799363765 +0000 UTC m=+0.174750044 container create aaf679237f0405b3834391ecbf559fa8da446a67926a156c274e2c1ec45f1b82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:54:20 np0005593232 systemd[1]: Started libpod-conmon-aaf679237f0405b3834391ecbf559fa8da446a67926a156c274e2c1ec45f1b82.scope.
Jan 23 04:54:20 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:54:20 np0005593232 podman[302157]: 2026-01-23 09:54:20.968677676 +0000 UTC m=+0.344063975 container init aaf679237f0405b3834391ecbf559fa8da446a67926a156c274e2c1ec45f1b82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 23 04:54:20 np0005593232 podman[302157]: 2026-01-23 09:54:20.979802886 +0000 UTC m=+0.355189165 container start aaf679237f0405b3834391ecbf559fa8da446a67926a156c274e2c1ec45f1b82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 23 04:54:20 np0005593232 podman[302157]: 2026-01-23 09:54:20.985050542 +0000 UTC m=+0.360436921 container attach aaf679237f0405b3834391ecbf559fa8da446a67926a156c274e2c1ec45f1b82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 23 04:54:20 np0005593232 brave_yonath[302201]: 167 167
Jan 23 04:54:20 np0005593232 systemd[1]: libpod-aaf679237f0405b3834391ecbf559fa8da446a67926a156c274e2c1ec45f1b82.scope: Deactivated successfully.
Jan 23 04:54:20 np0005593232 podman[302157]: 2026-01-23 09:54:20.989610779 +0000 UTC m=+0.364997098 container died aaf679237f0405b3834391ecbf559fa8da446a67926a156c274e2c1ec45f1b82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 04:54:21 np0005593232 systemd[1]: var-lib-containers-storage-overlay-67167a9bccb6f4eebc073f495099654b643ba15d110bddde92aa363f2c9a8b9b-merged.mount: Deactivated successfully.
Jan 23 04:54:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:54:21 np0005593232 podman[302157]: 2026-01-23 09:54:21.090432845 +0000 UTC m=+0.465819134 container remove aaf679237f0405b3834391ecbf559fa8da446a67926a156c274e2c1ec45f1b82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 04:54:21 np0005593232 systemd[1]: libpod-conmon-aaf679237f0405b3834391ecbf559fa8da446a67926a156c274e2c1ec45f1b82.scope: Deactivated successfully.
Jan 23 04:54:21 np0005593232 podman[302227]: 2026-01-23 09:54:21.249493001 +0000 UTC m=+0.046125474 container create 66b97a6c9f283730ea81cb395471b590e0570359e7af6e72b9292f7d453cb2b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_margulis, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:54:21 np0005593232 podman[302227]: 2026-01-23 09:54:21.226556813 +0000 UTC m=+0.023189346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:54:21 np0005593232 systemd[1]: Started libpod-conmon-66b97a6c9f283730ea81cb395471b590e0570359e7af6e72b9292f7d453cb2b5.scope.
Jan 23 04:54:21 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:54:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d4001f4b2de93ee0ed7b1afe55b574446d1c533663baddf91a08d1f5d3204ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:54:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d4001f4b2de93ee0ed7b1afe55b574446d1c533663baddf91a08d1f5d3204ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:54:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d4001f4b2de93ee0ed7b1afe55b574446d1c533663baddf91a08d1f5d3204ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:54:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d4001f4b2de93ee0ed7b1afe55b574446d1c533663baddf91a08d1f5d3204ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:54:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:21.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:21 np0005593232 podman[302227]: 2026-01-23 09:54:21.568768386 +0000 UTC m=+0.365400889 container init 66b97a6c9f283730ea81cb395471b590e0570359e7af6e72b9292f7d453cb2b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_margulis, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 04:54:21 np0005593232 podman[302227]: 2026-01-23 09:54:21.575662678 +0000 UTC m=+0.372295151 container start 66b97a6c9f283730ea81cb395471b590e0570359e7af6e72b9292f7d453cb2b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:54:21 np0005593232 podman[302227]: 2026-01-23 09:54:21.656466837 +0000 UTC m=+0.453099330 container attach 66b97a6c9f283730ea81cb395471b590e0570359e7af6e72b9292f7d453cb2b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_margulis, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:54:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:54:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:21.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:54:22 np0005593232 nova_compute[250269]: 2026-01-23 09:54:22.220 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:54:22 np0005593232 youthful_margulis[302245]: {
Jan 23 04:54:22 np0005593232 youthful_margulis[302245]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:54:22 np0005593232 youthful_margulis[302245]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:54:22 np0005593232 youthful_margulis[302245]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:54:22 np0005593232 youthful_margulis[302245]:        "osd_id": 0,
Jan 23 04:54:22 np0005593232 youthful_margulis[302245]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:54:22 np0005593232 youthful_margulis[302245]:        "type": "bluestore"
Jan 23 04:54:22 np0005593232 youthful_margulis[302245]:    }
Jan 23 04:54:22 np0005593232 youthful_margulis[302245]: }
Jan 23 04:54:22 np0005593232 systemd[1]: libpod-66b97a6c9f283730ea81cb395471b590e0570359e7af6e72b9292f7d453cb2b5.scope: Deactivated successfully.
Jan 23 04:54:22 np0005593232 podman[302227]: 2026-01-23 09:54:22.415476859 +0000 UTC m=+1.212109332 container died 66b97a6c9f283730ea81cb395471b590e0570359e7af6e72b9292f7d453cb2b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 23 04:54:22 np0005593232 nova_compute[250269]: 2026-01-23 09:54:22.445 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Triggering sync for uuid b3b41870-7e94-44fd-84bb-575dbf15c745 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 23 04:54:22 np0005593232 nova_compute[250269]: 2026-01-23 09:54:22.446 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "b3b41870-7e94-44fd-84bb-575dbf15c745" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:54:22 np0005593232 nova_compute[250269]: 2026-01-23 09:54:22.518 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:54:22 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6d4001f4b2de93ee0ed7b1afe55b574446d1c533663baddf91a08d1f5d3204ca-merged.mount: Deactivated successfully.
Jan 23 04:54:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1890: 321 pgs: 321 active+clean; 239 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 127 op/s
Jan 23 04:54:22 np0005593232 podman[302227]: 2026-01-23 09:54:22.613920741 +0000 UTC m=+1.410553234 container remove 66b97a6c9f283730ea81cb395471b590e0570359e7af6e72b9292f7d453cb2b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_margulis, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 04:54:22 np0005593232 systemd[1]: libpod-conmon-66b97a6c9f283730ea81cb395471b590e0570359e7af6e72b9292f7d453cb2b5.scope: Deactivated successfully.
Jan 23 04:54:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:54:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:54:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:54:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:54:22 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev dd452ff0-f2c8-421d-bc3b-b12b390c7b75 does not exist
Jan 23 04:54:22 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 43c08d12-858f-4626-bba8-9e8f7d588aff does not exist
Jan 23 04:54:22 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 2ee15a77-fffb-4515-9525-e397b0451b5e does not exist
Jan 23 04:54:23 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:54:23 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:54:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:23.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:23.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:23 np0005593232 nova_compute[250269]: 2026-01-23 09:54:23.896 250273 DEBUG nova.network.neutron [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Updating instance_info_cache with network_info: [{"id": "3262bd82-5538-4d9c-af99-430bed9a5120", "address": "fa:16:3e:70:57:58", "network": {"id": "5f61b5ab-0868-4f2c-b962-bdb7ecf89061", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1157777070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bf8efe4dc7e34393b5cd5a5ef2735ecf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3262bd82-55", "ovs_interfaceid": "3262bd82-5538-4d9c-af99-430bed9a5120", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:54:23 np0005593232 nova_compute[250269]: 2026-01-23 09:54:23.900 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:24 np0005593232 nova_compute[250269]: 2026-01-23 09:54:24.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:54:24 np0005593232 nova_compute[250269]: 2026-01-23 09:54:24.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:54:24 np0005593232 nova_compute[250269]: 2026-01-23 09:54:24.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:54:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1891: 321 pgs: 321 active+clean; 213 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.9 MiB/s wr, 111 op/s
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.101 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:25.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.671 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.671 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.700 250273 DEBUG oslo_concurrency.lockutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Releasing lock "refresh_cache-b3b41870-7e94-44fd-84bb-575dbf15c745" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.701 250273 DEBUG nova.compute.manager [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Instance network_info: |[{"id": "3262bd82-5538-4d9c-af99-430bed9a5120", "address": "fa:16:3e:70:57:58", "network": {"id": "5f61b5ab-0868-4f2c-b962-bdb7ecf89061", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1157777070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bf8efe4dc7e34393b5cd5a5ef2735ecf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3262bd82-55", "ovs_interfaceid": "3262bd82-5538-4d9c-af99-430bed9a5120", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.701 250273 DEBUG oslo_concurrency.lockutils [req-957ee8f8-80a5-49f5-b408-7932dd4f6cc7 req-d144c76f-ec01-4e94-a12a-7995606662a0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-b3b41870-7e94-44fd-84bb-575dbf15c745" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.702 250273 DEBUG nova.network.neutron [req-957ee8f8-80a5-49f5-b408-7932dd4f6cc7 req-d144c76f-ec01-4e94-a12a-7995606662a0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Refreshing network info cache for port 3262bd82-5538-4d9c-af99-430bed9a5120 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.704 250273 DEBUG nova.virt.libvirt.driver [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Start _get_guest_xml network_info=[{"id": "3262bd82-5538-4d9c-af99-430bed9a5120", "address": "fa:16:3e:70:57:58", "network": {"id": "5f61b5ab-0868-4f2c-b962-bdb7ecf89061", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1157777070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bf8efe4dc7e34393b5cd5a5ef2735ecf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3262bd82-55", "ovs_interfaceid": "3262bd82-5538-4d9c-af99-430bed9a5120", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:54:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:54:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:25.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.709 250273 WARNING nova.virt.libvirt.driver [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.717 250273 DEBUG nova.virt.libvirt.host [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.717 250273 DEBUG nova.virt.libvirt.host [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.726 250273 DEBUG nova.virt.libvirt.host [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.727 250273 DEBUG nova.virt.libvirt.host [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.728 250273 DEBUG nova.virt.libvirt.driver [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.728 250273 DEBUG nova.virt.hardware [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.729 250273 DEBUG nova.virt.hardware [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.729 250273 DEBUG nova.virt.hardware [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.729 250273 DEBUG nova.virt.hardware [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.729 250273 DEBUG nova.virt.hardware [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.730 250273 DEBUG nova.virt.hardware [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.730 250273 DEBUG nova.virt.hardware [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.730 250273 DEBUG nova.virt.hardware [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.730 250273 DEBUG nova.virt.hardware [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.730 250273 DEBUG nova.virt.hardware [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.731 250273 DEBUG nova.virt.hardware [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:54:25 np0005593232 nova_compute[250269]: 2026-01-23 09:54:25.734 250273 DEBUG oslo_concurrency.processutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:54:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:54:26 np0005593232 nova_compute[250269]: 2026-01-23 09:54:26.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:54:26 np0005593232 nova_compute[250269]: 2026-01-23 09:54:26.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:54:26 np0005593232 nova_compute[250269]: 2026-01-23 09:54:26.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:54:26 np0005593232 podman[302350]: 2026-01-23 09:54:26.445708142 +0000 UTC m=+0.098509462 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 23 04:54:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:54:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3623775476' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:54:26 np0005593232 nova_compute[250269]: 2026-01-23 09:54:26.485 250273 DEBUG oslo_concurrency.processutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.751s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:54:26 np0005593232 nova_compute[250269]: 2026-01-23 09:54:26.511 250273 DEBUG nova.storage.rbd_utils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] rbd image b3b41870-7e94-44fd-84bb-575dbf15c745_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:54:26 np0005593232 nova_compute[250269]: 2026-01-23 09:54:26.516 250273 DEBUG oslo_concurrency.processutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:54:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1892: 321 pgs: 321 active+clean; 213 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 185 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Jan 23 04:54:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:54:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/566506408' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:54:26 np0005593232 nova_compute[250269]: 2026-01-23 09:54:26.957 250273 DEBUG oslo_concurrency.processutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:54:26 np0005593232 nova_compute[250269]: 2026-01-23 09:54:26.959 250273 DEBUG nova.virt.libvirt.vif [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:54:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1595370930',display_name='tempest-AttachInterfacesUnderV243Test-server-1595370930',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1595370930',id=82,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNyf5h128AUMpfjBt1qf29lypfdoHeGHB/3CwSYlps3D0DZlGiUWRB29lfHF8y4riYrxOp4GC+XlBNfb99CaksvAr8c1p4UMqTuQ6hcTzvjiih7FGAE7LGbuNDDdzZODNA==',key_name='tempest-keypair-2005272547',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bf8efe4dc7e34393b5cd5a5ef2735ecf',ramdisk_id='',reservation_id='r-z491gojt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1038514400',owner_user_name='tempest-AttachInterfacesUnderV243Test-1038514400-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:54:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='040257fcfb8e485989e95807791e25f6',uuid=b3b41870-7e94-44fd-84bb-575dbf15c745,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3262bd82-5538-4d9c-af99-430bed9a5120", "address": "fa:16:3e:70:57:58", "network": {"id": "5f61b5ab-0868-4f2c-b962-bdb7ecf89061", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1157777070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bf8efe4dc7e34393b5cd5a5ef2735ecf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3262bd82-55", "ovs_interfaceid": "3262bd82-5538-4d9c-af99-430bed9a5120", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:54:26 np0005593232 nova_compute[250269]: 2026-01-23 09:54:26.960 250273 DEBUG nova.network.os_vif_util [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Converting VIF {"id": "3262bd82-5538-4d9c-af99-430bed9a5120", "address": "fa:16:3e:70:57:58", "network": {"id": "5f61b5ab-0868-4f2c-b962-bdb7ecf89061", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1157777070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bf8efe4dc7e34393b5cd5a5ef2735ecf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3262bd82-55", "ovs_interfaceid": "3262bd82-5538-4d9c-af99-430bed9a5120", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:54:26 np0005593232 nova_compute[250269]: 2026-01-23 09:54:26.961 250273 DEBUG nova.network.os_vif_util [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:57:58,bridge_name='br-int',has_traffic_filtering=True,id=3262bd82-5538-4d9c-af99-430bed9a5120,network=Network(5f61b5ab-0868-4f2c-b962-bdb7ecf89061),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3262bd82-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:54:26 np0005593232 nova_compute[250269]: 2026-01-23 09:54:26.962 250273 DEBUG nova.objects.instance [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Lazy-loading 'pci_devices' on Instance uuid b3b41870-7e94-44fd-84bb-575dbf15c745 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:54:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:27.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.673 250273 DEBUG nova.virt.libvirt.driver [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:54:27 np0005593232 nova_compute[250269]:  <uuid>b3b41870-7e94-44fd-84bb-575dbf15c745</uuid>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:  <name>instance-00000052</name>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-1595370930</nova:name>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:54:25</nova:creationTime>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:54:27 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:        <nova:user uuid="040257fcfb8e485989e95807791e25f6">tempest-AttachInterfacesUnderV243Test-1038514400-project-member</nova:user>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:        <nova:project uuid="bf8efe4dc7e34393b5cd5a5ef2735ecf">tempest-AttachInterfacesUnderV243Test-1038514400</nova:project>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:        <nova:port uuid="3262bd82-5538-4d9c-af99-430bed9a5120">
Jan 23 04:54:27 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <entry name="serial">b3b41870-7e94-44fd-84bb-575dbf15c745</entry>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <entry name="uuid">b3b41870-7e94-44fd-84bb-575dbf15c745</entry>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/b3b41870-7e94-44fd-84bb-575dbf15c745_disk">
Jan 23 04:54:27 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:54:27 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/b3b41870-7e94-44fd-84bb-575dbf15c745_disk.config">
Jan 23 04:54:27 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:54:27 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:70:57:58"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <target dev="tap3262bd82-55"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/b3b41870-7e94-44fd-84bb-575dbf15c745/console.log" append="off"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:54:27 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:54:27 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:54:27 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:54:27 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.674 250273 DEBUG nova.compute.manager [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Preparing to wait for external event network-vif-plugged-3262bd82-5538-4d9c-af99-430bed9a5120 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.675 250273 DEBUG oslo_concurrency.lockutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Acquiring lock "b3b41870-7e94-44fd-84bb-575dbf15c745-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.675 250273 DEBUG oslo_concurrency.lockutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Lock "b3b41870-7e94-44fd-84bb-575dbf15c745-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.675 250273 DEBUG oslo_concurrency.lockutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Lock "b3b41870-7e94-44fd-84bb-575dbf15c745-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.676 250273 DEBUG nova.virt.libvirt.vif [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:54:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1595370930',display_name='tempest-AttachInterfacesUnderV243Test-server-1595370930',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1595370930',id=82,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNyf5h128AUMpfjBt1qf29lypfdoHeGHB/3CwSYlps3D0DZlGiUWRB29lfHF8y4riYrxOp4GC+XlBNfb99CaksvAr8c1p4UMqTuQ6hcTzvjiih7FGAE7LGbuNDDdzZODNA==',key_name='tempest-keypair-2005272547',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='bf8efe4dc7e34393b5cd5a5ef2735ecf',ramdisk_id='',reservation_id='r-z491gojt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1038514400',owner_user_name='tempest-AttachInterfacesUnderV243Test-1038514400-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:54:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='040257fcfb8e485989e95807791e25f6',uuid=b3b41870-7e94-44fd-84bb-575dbf15c745,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3262bd82-5538-4d9c-af99-430bed9a5120", "address": "fa:16:3e:70:57:58", "network": {"id": "5f61b5ab-0868-4f2c-b962-bdb7ecf89061", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1157777070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bf8efe4dc7e34393b5cd5a5ef2735ecf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3262bd82-55", "ovs_interfaceid": "3262bd82-5538-4d9c-af99-430bed9a5120", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.676 250273 DEBUG nova.network.os_vif_util [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Converting VIF {"id": "3262bd82-5538-4d9c-af99-430bed9a5120", "address": "fa:16:3e:70:57:58", "network": {"id": "5f61b5ab-0868-4f2c-b962-bdb7ecf89061", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1157777070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bf8efe4dc7e34393b5cd5a5ef2735ecf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3262bd82-55", "ovs_interfaceid": "3262bd82-5538-4d9c-af99-430bed9a5120", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.677 250273 DEBUG nova.network.os_vif_util [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:57:58,bridge_name='br-int',has_traffic_filtering=True,id=3262bd82-5538-4d9c-af99-430bed9a5120,network=Network(5f61b5ab-0868-4f2c-b962-bdb7ecf89061),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3262bd82-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.677 250273 DEBUG os_vif [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:57:58,bridge_name='br-int',has_traffic_filtering=True,id=3262bd82-5538-4d9c-af99-430bed9a5120,network=Network(5f61b5ab-0868-4f2c-b962-bdb7ecf89061),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3262bd82-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.678 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.678 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.679 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.682 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.682 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3262bd82-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.683 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3262bd82-55, col_values=(('external_ids', {'iface-id': '3262bd82-5538-4d9c-af99-430bed9a5120', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:57:58', 'vm-uuid': 'b3b41870-7e94-44fd-84bb-575dbf15c745'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:54:27 np0005593232 NetworkManager[49057]: <info>  [1769162067.6854] manager: (tap3262bd82-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/125)
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.684 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.687 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.691 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.693 250273 INFO os_vif [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:57:58,bridge_name='br-int',has_traffic_filtering=True,id=3262bd82-5538-4d9c-af99-430bed9a5120,network=Network(5f61b5ab-0868-4f2c-b962-bdb7ecf89061),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3262bd82-55')#033[00m
Jan 23 04:54:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:54:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:27.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.845 250273 DEBUG nova.virt.libvirt.driver [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.845 250273 DEBUG nova.virt.libvirt.driver [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.845 250273 DEBUG nova.virt.libvirt.driver [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] No VIF found with MAC fa:16:3e:70:57:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.846 250273 INFO nova.virt.libvirt.driver [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Using config drive#033[00m
Jan 23 04:54:27 np0005593232 nova_compute[250269]: 2026-01-23 09:54:27.870 250273 DEBUG nova.storage.rbd_utils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] rbd image b3b41870-7e94-44fd-84bb-575dbf15c745_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:54:28 np0005593232 nova_compute[250269]: 2026-01-23 09:54:28.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:54:28 np0005593232 nova_compute[250269]: 2026-01-23 09:54:28.400 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:54:28 np0005593232 nova_compute[250269]: 2026-01-23 09:54:28.400 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:54:28 np0005593232 nova_compute[250269]: 2026-01-23 09:54:28.400 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:54:28 np0005593232 nova_compute[250269]: 2026-01-23 09:54:28.401 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:54:28 np0005593232 nova_compute[250269]: 2026-01-23 09:54:28.401 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:54:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1893: 321 pgs: 321 active+clean; 213 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 1.4 MiB/s wr, 50 op/s
Jan 23 04:54:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:54:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3203047670' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:54:28 np0005593232 nova_compute[250269]: 2026-01-23 09:54:28.855 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:54:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Jan 23 04:54:28 np0005593232 nova_compute[250269]: 2026-01-23 09:54:28.888 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769162053.8876574, fffef24b-bb5b-41c6-a049-c1c4ba8f02fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:54:28 np0005593232 nova_compute[250269]: 2026-01-23 09:54:28.889 250273 INFO nova.compute.manager [-] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:54:28 np0005593232 nova_compute[250269]: 2026-01-23 09:54:28.950 250273 DEBUG nova.compute.manager [None req-1ca9accb-892d-4d9a-86b8-df6fa860c1d7 - - - - - -] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:54:28 np0005593232 nova_compute[250269]: 2026-01-23 09:54:28.953 250273 DEBUG nova.compute.manager [None req-1ca9accb-892d-4d9a-86b8-df6fa860c1d7 - - - - - -] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:54:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.003 250273 INFO nova.compute.manager [None req-1ca9accb-892d-4d9a-86b8-df6fa860c1d7 - - - - - -] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com#033[00m
Jan 23 04:54:29 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.010 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.011 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000052 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.015 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.015 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000004f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.142 250273 INFO nova.virt.libvirt.driver [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Creating config drive at /var/lib/nova/instances/b3b41870-7e94-44fd-84bb-575dbf15c745/disk.config#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.147 250273 DEBUG oslo_concurrency.processutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b3b41870-7e94-44fd-84bb-575dbf15c745/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9es44nkb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.202 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.204 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4520MB free_disk=20.922027587890625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.205 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.205 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.275 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Migration for instance fffef24b-bb5b-41c6-a049-c1c4ba8f02fb refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.281 250273 DEBUG oslo_concurrency.processutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b3b41870-7e94-44fd-84bb-575dbf15c745/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9es44nkb" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.316 250273 DEBUG nova.storage.rbd_utils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] rbd image b3b41870-7e94-44fd-84bb-575dbf15c745_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.320 250273 DEBUG oslo_concurrency.processutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b3b41870-7e94-44fd-84bb-575dbf15c745/disk.config b3b41870-7e94-44fd-84bb-575dbf15c745_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.348 250273 INFO nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Updating resource usage from migration 9a6f94e7-d356-40f3-896e-69daee93b7ad#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.349 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Starting to track outgoing migration 9a6f94e7-d356-40f3-896e-69daee93b7ad with flavor 68d42077-c749-4366-ba3e-07758debb02d _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.395 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Migration 9a6f94e7-d356-40f3-896e-69daee93b7ad is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.396 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance b3b41870-7e94-44fd-84bb-575dbf15c745 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.396 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.396 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.426 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing inventories for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.469 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating ProviderTree inventory for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.469 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.483 250273 DEBUG nova.network.neutron [req-beb77c0d-7fe0-4f20-9ad1-c7c5d45bc557 req-77cdfd7f-dd57-4059-84e2-b19eac6aedc3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Updated VIF entry in instance network info cache for port 10fe80c9-2f99-4371-a60e-b8b226c250aa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.484 250273 DEBUG nova.network.neutron [req-beb77c0d-7fe0-4f20-9ad1-c7c5d45bc557 req-77cdfd7f-dd57-4059-84e2-b19eac6aedc3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Updating instance_info_cache with network_info: [{"id": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "address": "fa:16:3e:45:b2:d4", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10fe80c9-2f", "ovs_interfaceid": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.521 250273 DEBUG oslo_concurrency.lockutils [req-beb77c0d-7fe0-4f20-9ad1-c7c5d45bc557 req-77cdfd7f-dd57-4059-84e2-b19eac6aedc3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-fffef24b-bb5b-41c6-a049-c1c4ba8f02fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:54:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:29.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.526 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing aggregate associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.557 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing trait associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 23 04:54:29 np0005593232 nova_compute[250269]: 2026-01-23 09:54:29.675 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:54:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:29.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:54:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/629855953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:54:30 np0005593232 nova_compute[250269]: 2026-01-23 09:54:30.103 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:30 np0005593232 nova_compute[250269]: 2026-01-23 09:54:30.121 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:54:30 np0005593232 nova_compute[250269]: 2026-01-23 09:54:30.126 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:54:30 np0005593232 nova_compute[250269]: 2026-01-23 09:54:30.169 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:54:30 np0005593232 nova_compute[250269]: 2026-01-23 09:54:30.180 250273 DEBUG oslo_concurrency.processutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b3b41870-7e94-44fd-84bb-575dbf15c745/disk.config b3b41870-7e94-44fd-84bb-575dbf15c745_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.860s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:54:30 np0005593232 nova_compute[250269]: 2026-01-23 09:54:30.181 250273 INFO nova.virt.libvirt.driver [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Deleting local config drive /var/lib/nova/instances/b3b41870-7e94-44fd-84bb-575dbf15c745/disk.config because it was imported into RBD.#033[00m
Jan 23 04:54:30 np0005593232 nova_compute[250269]: 2026-01-23 09:54:30.186 250273 DEBUG nova.network.neutron [req-957ee8f8-80a5-49f5-b408-7932dd4f6cc7 req-d144c76f-ec01-4e94-a12a-7995606662a0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Updated VIF entry in instance network info cache for port 3262bd82-5538-4d9c-af99-430bed9a5120. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:54:30 np0005593232 nova_compute[250269]: 2026-01-23 09:54:30.187 250273 DEBUG nova.network.neutron [req-957ee8f8-80a5-49f5-b408-7932dd4f6cc7 req-d144c76f-ec01-4e94-a12a-7995606662a0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Updating instance_info_cache with network_info: [{"id": "3262bd82-5538-4d9c-af99-430bed9a5120", "address": "fa:16:3e:70:57:58", "network": {"id": "5f61b5ab-0868-4f2c-b962-bdb7ecf89061", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1157777070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bf8efe4dc7e34393b5cd5a5ef2735ecf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3262bd82-55", "ovs_interfaceid": "3262bd82-5538-4d9c-af99-430bed9a5120", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:54:30 np0005593232 kernel: tap3262bd82-55: entered promiscuous mode
Jan 23 04:54:30 np0005593232 NetworkManager[49057]: <info>  [1769162070.2370] manager: (tap3262bd82-55): new Tun device (/org/freedesktop/NetworkManager/Devices/126)
Jan 23 04:54:30 np0005593232 ovn_controller[151001]: 2026-01-23T09:54:30Z|00253|binding|INFO|Claiming lport 3262bd82-5538-4d9c-af99-430bed9a5120 for this chassis.
Jan 23 04:54:30 np0005593232 nova_compute[250269]: 2026-01-23 09:54:30.237 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:30 np0005593232 ovn_controller[151001]: 2026-01-23T09:54:30Z|00254|binding|INFO|3262bd82-5538-4d9c-af99-430bed9a5120: Claiming fa:16:3e:70:57:58 10.100.0.3
Jan 23 04:54:30 np0005593232 nova_compute[250269]: 2026-01-23 09:54:30.241 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:30 np0005593232 systemd-udevd[302539]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:54:30 np0005593232 systemd-machined[215836]: New machine qemu-31-instance-00000052.
Jan 23 04:54:30 np0005593232 NetworkManager[49057]: <info>  [1769162070.2857] device (tap3262bd82-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:54:30 np0005593232 NetworkManager[49057]: <info>  [1769162070.2868] device (tap3262bd82-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:54:30 np0005593232 systemd[1]: Started Virtual Machine qemu-31-instance-00000052.
Jan 23 04:54:30 np0005593232 nova_compute[250269]: 2026-01-23 09:54:30.312 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:30 np0005593232 ovn_controller[151001]: 2026-01-23T09:54:30Z|00255|binding|INFO|Setting lport 3262bd82-5538-4d9c-af99-430bed9a5120 ovn-installed in OVS
Jan 23 04:54:30 np0005593232 nova_compute[250269]: 2026-01-23 09:54:30.320 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1895: 321 pgs: 321 active+clean; 213 MiB data, 767 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 31 op/s
Jan 23 04:54:30 np0005593232 nova_compute[250269]: 2026-01-23 09:54:30.857 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162070.857371, b3b41870-7e94-44fd-84bb-575dbf15c745 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:54:30 np0005593232 nova_compute[250269]: 2026-01-23 09:54:30.859 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] VM Started (Lifecycle Event)#033[00m
Jan 23 04:54:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:54:31 np0005593232 ovn_controller[151001]: 2026-01-23T09:54:31Z|00256|binding|INFO|Setting lport 3262bd82-5538-4d9c-af99-430bed9a5120 up in Southbound
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.059 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:57:58 10.100.0.3'], port_security=['fa:16:3e:70:57:58 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b3b41870-7e94-44fd-84bb-575dbf15c745', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f61b5ab-0868-4f2c-b962-bdb7ecf89061', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bf8efe4dc7e34393b5cd5a5ef2735ecf', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb2fef88-afa1-4a2c-98c2-a88fbf18d3ca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5ef82d7-36aa-4d49-89a7-35476c106254, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=3262bd82-5538-4d9c-af99-430bed9a5120) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.060 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 3262bd82-5538-4d9c-af99-430bed9a5120 in datapath 5f61b5ab-0868-4f2c-b962-bdb7ecf89061 bound to our chassis#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.061 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5f61b5ab-0868-4f2c-b962-bdb7ecf89061#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.074 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c2723f4e-f65a-46a5-a5c7-1eb5e6f75c0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.075 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5f61b5ab-01 in ovnmeta-5f61b5ab-0868-4f2c-b962-bdb7ecf89061 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.077 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5f61b5ab-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.077 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[65ac1090-bd7a-4c42-873d-40adedf634eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.078 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3285bab0-1ce4-4790-9397-5bb00eb11d78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.091 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[24597762-43b0-47de-96f7-4cb1e10345e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.103 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5eea91c4-5c64-42bb-acbe-08898f8797d4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.134 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[61528c80-b733-4682-8a59-3aa63d0606aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:54:31 np0005593232 systemd-udevd[302542]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:54:31 np0005593232 NetworkManager[49057]: <info>  [1769162071.1413] manager: (tap5f61b5ab-00): new Veth device (/org/freedesktop/NetworkManager/Devices/127)
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.141 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f06cef3f-7ef6-44a3-9cf6-d0facdb8d1a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.174 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[e9fd5c70-e12b-4006-96e3-d4d961ca50cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.177 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[6340a94a-ccb5-4571-96e5-f8fdbd64d075]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:54:31 np0005593232 podman[302593]: 2026-01-23 09:54:31.185599394 +0000 UTC m=+0.065681578 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 23 04:54:31 np0005593232 NetworkManager[49057]: <info>  [1769162071.1992] device (tap5f61b5ab-00): carrier: link connected
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.203 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[dc67d21e-ae43-4ec1-acb2-2e5fb328b420]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:54:31 np0005593232 nova_compute[250269]: 2026-01-23 09:54:31.218 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.218 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e85be2a4-c59c-443f-868a-d6aad9714cce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5f61b5ab-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:45:12:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598434, 'reachable_time': 44021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302634, 'error': None, 'target': 'ovnmeta-5f61b5ab-0868-4f2c-b962-bdb7ecf89061', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:54:31 np0005593232 nova_compute[250269]: 2026-01-23 09:54:31.223 250273 DEBUG oslo_concurrency.lockutils [req-957ee8f8-80a5-49f5-b408-7932dd4f6cc7 req-d144c76f-ec01-4e94-a12a-7995606662a0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-b3b41870-7e94-44fd-84bb-575dbf15c745" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:54:31 np0005593232 nova_compute[250269]: 2026-01-23 09:54:31.225 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:54:31 np0005593232 nova_compute[250269]: 2026-01-23 09:54:31.225 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:54:31 np0005593232 nova_compute[250269]: 2026-01-23 09:54:31.226 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162070.8584218, b3b41870-7e94-44fd-84bb-575dbf15c745 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:54:31 np0005593232 nova_compute[250269]: 2026-01-23 09:54:31.226 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.236 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[48fa8b54-3eff-4901-b764-a0e2e0cc5c73]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe45:127a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 598434, 'tstamp': 598434}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302635, 'error': None, 'target': 'ovnmeta-5f61b5ab-0868-4f2c-b962-bdb7ecf89061', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:54:31 np0005593232 nova_compute[250269]: 2026-01-23 09:54:31.250 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:54:31 np0005593232 nova_compute[250269]: 2026-01-23 09:54:31.253 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.252 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6f043971-b21f-46ef-ae6a-bc297649418a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5f61b5ab-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:45:12:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598434, 'reachable_time': 44021, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302636, 'error': None, 'target': 'ovnmeta-5f61b5ab-0868-4f2c-b962-bdb7ecf89061', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:54:31 np0005593232 nova_compute[250269]: 2026-01-23 09:54:31.286 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.288 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[850f72b4-5755-4a90-be03-71b6d23d0780]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.337 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f7e7a53d-66d4-48e6-bb17-5ca4cd50b3e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.339 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5f61b5ab-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.339 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.340 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5f61b5ab-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:54:31 np0005593232 nova_compute[250269]: 2026-01-23 09:54:31.342 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:31 np0005593232 NetworkManager[49057]: <info>  [1769162071.3426] manager: (tap5f61b5ab-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Jan 23 04:54:31 np0005593232 kernel: tap5f61b5ab-00: entered promiscuous mode
Jan 23 04:54:31 np0005593232 nova_compute[250269]: 2026-01-23 09:54:31.345 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.345 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5f61b5ab-00, col_values=(('external_ids', {'iface-id': '88e564fc-0530-4eb4-b73d-ff707c593fe1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:54:31 np0005593232 ovn_controller[151001]: 2026-01-23T09:54:31Z|00257|binding|INFO|Releasing lport 88e564fc-0530-4eb4-b73d-ff707c593fe1 from this chassis (sb_readonly=0)
Jan 23 04:54:31 np0005593232 nova_compute[250269]: 2026-01-23 09:54:31.360 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.360 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5f61b5ab-0868-4f2c-b962-bdb7ecf89061.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5f61b5ab-0868-4f2c-b962-bdb7ecf89061.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.361 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[590c002a-a17e-4f8e-a4b2-305cbf800912]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.363 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-5f61b5ab-0868-4f2c-b962-bdb7ecf89061
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/5f61b5ab-0868-4f2c-b962-bdb7ecf89061.pid.haproxy
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 5f61b5ab-0868-4f2c-b962-bdb7ecf89061
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:54:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:31.363 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5f61b5ab-0868-4f2c-b962-bdb7ecf89061', 'env', 'PROCESS_TAG=haproxy-5f61b5ab-0868-4f2c-b962-bdb7ecf89061', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5f61b5ab-0868-4f2c-b962-bdb7ecf89061.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 04:54:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:54:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:31.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:54:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:31.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:31 np0005593232 podman[302669]: 2026-01-23 09:54:31.723899554 +0000 UTC m=+0.051614517 container create b2afd8d1efa4168ee4e05592c1df88a6d9aae2181c7a57b5d38cefa459472b4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5f61b5ab-0868-4f2c-b962-bdb7ecf89061, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 04:54:31 np0005593232 systemd[1]: Started libpod-conmon-b2afd8d1efa4168ee4e05592c1df88a6d9aae2181c7a57b5d38cefa459472b4e.scope.
Jan 23 04:54:31 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:54:31 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55791aa0474064d9b0a8eaa4201ba7e15cd57ce30c9ed6e4c845b41d759f09ce/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:54:31 np0005593232 podman[302669]: 2026-01-23 09:54:31.695804682 +0000 UTC m=+0.023519665 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:54:31 np0005593232 podman[302669]: 2026-01-23 09:54:31.800699091 +0000 UTC m=+0.128414084 container init b2afd8d1efa4168ee4e05592c1df88a6d9aae2181c7a57b5d38cefa459472b4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5f61b5ab-0868-4f2c-b962-bdb7ecf89061, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 04:54:31 np0005593232 podman[302669]: 2026-01-23 09:54:31.806499833 +0000 UTC m=+0.134214796 container start b2afd8d1efa4168ee4e05592c1df88a6d9aae2181c7a57b5d38cefa459472b4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5f61b5ab-0868-4f2c-b962-bdb7ecf89061, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 23 04:54:31 np0005593232 neutron-haproxy-ovnmeta-5f61b5ab-0868-4f2c-b962-bdb7ecf89061[302684]: [NOTICE]   (302689) : New worker (302691) forked
Jan 23 04:54:31 np0005593232 neutron-haproxy-ovnmeta-5f61b5ab-0868-4f2c-b962-bdb7ecf89061[302684]: [NOTICE]   (302689) : Loading success.
Jan 23 04:54:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1896: 321 pgs: 321 active+clean; 213 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 18 KiB/s wr, 37 op/s
Jan 23 04:54:32 np0005593232 nova_compute[250269]: 2026-01-23 09:54:32.685 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:33 np0005593232 nova_compute[250269]: 2026-01-23 09:54:33.449 250273 DEBUG nova.compute.manager [req-e854574f-6a79-43c9-b5dd-5c6fe9d2e868 req-e0ec29a7-5a2f-43e1-9a64-d7af874824e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Received event network-vif-plugged-10fe80c9-2f99-4371-a60e-b8b226c250aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:54:33 np0005593232 nova_compute[250269]: 2026-01-23 09:54:33.449 250273 DEBUG oslo_concurrency.lockutils [req-e854574f-6a79-43c9-b5dd-5c6fe9d2e868 req-e0ec29a7-5a2f-43e1-9a64-d7af874824e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:54:33 np0005593232 nova_compute[250269]: 2026-01-23 09:54:33.449 250273 DEBUG oslo_concurrency.lockutils [req-e854574f-6a79-43c9-b5dd-5c6fe9d2e868 req-e0ec29a7-5a2f-43e1-9a64-d7af874824e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:54:33 np0005593232 nova_compute[250269]: 2026-01-23 09:54:33.450 250273 DEBUG oslo_concurrency.lockutils [req-e854574f-6a79-43c9-b5dd-5c6fe9d2e868 req-e0ec29a7-5a2f-43e1-9a64-d7af874824e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:54:33 np0005593232 nova_compute[250269]: 2026-01-23 09:54:33.450 250273 DEBUG nova.compute.manager [req-e854574f-6a79-43c9-b5dd-5c6fe9d2e868 req-e0ec29a7-5a2f-43e1-9a64-d7af874824e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] No waiting events found dispatching network-vif-plugged-10fe80c9-2f99-4371-a60e-b8b226c250aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:54:33 np0005593232 nova_compute[250269]: 2026-01-23 09:54:33.450 250273 WARNING nova.compute.manager [req-e854574f-6a79-43c9-b5dd-5c6fe9d2e868 req-e0ec29a7-5a2f-43e1-9a64-d7af874824e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Received unexpected event network-vif-plugged-10fe80c9-2f99-4371-a60e-b8b226c250aa for instance with vm_state resized and task_state None.#033[00m
Jan 23 04:54:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:33.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:54:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:33.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:54:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1897: 321 pgs: 321 active+clean; 213 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 131 KiB/s rd, 18 KiB/s wr, 38 op/s
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.139 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.225 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.493 250273 DEBUG nova.compute.manager [req-485f450f-6021-4ac4-997d-cc411b51cf80 req-d6e0b932-49bc-4177-ae34-8b3a67b21aaa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Received event network-vif-plugged-3262bd82-5538-4d9c-af99-430bed9a5120 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.493 250273 DEBUG oslo_concurrency.lockutils [req-485f450f-6021-4ac4-997d-cc411b51cf80 req-d6e0b932-49bc-4177-ae34-8b3a67b21aaa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "b3b41870-7e94-44fd-84bb-575dbf15c745-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.494 250273 DEBUG oslo_concurrency.lockutils [req-485f450f-6021-4ac4-997d-cc411b51cf80 req-d6e0b932-49bc-4177-ae34-8b3a67b21aaa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b3b41870-7e94-44fd-84bb-575dbf15c745-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.494 250273 DEBUG oslo_concurrency.lockutils [req-485f450f-6021-4ac4-997d-cc411b51cf80 req-d6e0b932-49bc-4177-ae34-8b3a67b21aaa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b3b41870-7e94-44fd-84bb-575dbf15c745-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.494 250273 DEBUG nova.compute.manager [req-485f450f-6021-4ac4-997d-cc411b51cf80 req-d6e0b932-49bc-4177-ae34-8b3a67b21aaa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Processing event network-vif-plugged-3262bd82-5538-4d9c-af99-430bed9a5120 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.494 250273 DEBUG nova.compute.manager [req-485f450f-6021-4ac4-997d-cc411b51cf80 req-d6e0b932-49bc-4177-ae34-8b3a67b21aaa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Received event network-vif-plugged-3262bd82-5538-4d9c-af99-430bed9a5120 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.495 250273 DEBUG oslo_concurrency.lockutils [req-485f450f-6021-4ac4-997d-cc411b51cf80 req-d6e0b932-49bc-4177-ae34-8b3a67b21aaa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "b3b41870-7e94-44fd-84bb-575dbf15c745-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.495 250273 DEBUG oslo_concurrency.lockutils [req-485f450f-6021-4ac4-997d-cc411b51cf80 req-d6e0b932-49bc-4177-ae34-8b3a67b21aaa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b3b41870-7e94-44fd-84bb-575dbf15c745-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.495 250273 DEBUG oslo_concurrency.lockutils [req-485f450f-6021-4ac4-997d-cc411b51cf80 req-d6e0b932-49bc-4177-ae34-8b3a67b21aaa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b3b41870-7e94-44fd-84bb-575dbf15c745-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.495 250273 DEBUG nova.compute.manager [req-485f450f-6021-4ac4-997d-cc411b51cf80 req-d6e0b932-49bc-4177-ae34-8b3a67b21aaa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] No waiting events found dispatching network-vif-plugged-3262bd82-5538-4d9c-af99-430bed9a5120 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.495 250273 WARNING nova.compute.manager [req-485f450f-6021-4ac4-997d-cc411b51cf80 req-d6e0b932-49bc-4177-ae34-8b3a67b21aaa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Received unexpected event network-vif-plugged-3262bd82-5538-4d9c-af99-430bed9a5120 for instance with vm_state building and task_state spawning.#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.496 250273 DEBUG nova.compute.manager [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.500 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162075.5004346, b3b41870-7e94-44fd-84bb-575dbf15c745 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.501 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.503 250273 DEBUG nova.virt.libvirt.driver [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.506 250273 INFO nova.virt.libvirt.driver [-] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Instance spawned successfully.#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.507 250273 DEBUG nova.virt.libvirt.driver [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:54:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:54:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:35.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.544 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.548 250273 DEBUG nova.virt.libvirt.driver [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.548 250273 DEBUG nova.virt.libvirt.driver [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.549 250273 DEBUG nova.virt.libvirt.driver [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.549 250273 DEBUG nova.virt.libvirt.driver [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.549 250273 DEBUG nova.virt.libvirt.driver [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.550 250273 DEBUG nova.virt.libvirt.driver [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.553 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.596 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.647 250273 INFO nova.compute.manager [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Took 21.71 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.648 250273 DEBUG nova.compute.manager [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.651 250273 DEBUG nova.compute.manager [req-197e86f0-d823-4e36-98de-1bc05f257296 req-d5b4edd7-4c72-4e5e-8d1a-59530d5bf903 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Received event network-vif-plugged-10fe80c9-2f99-4371-a60e-b8b226c250aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.651 250273 DEBUG oslo_concurrency.lockutils [req-197e86f0-d823-4e36-98de-1bc05f257296 req-d5b4edd7-4c72-4e5e-8d1a-59530d5bf903 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.651 250273 DEBUG oslo_concurrency.lockutils [req-197e86f0-d823-4e36-98de-1bc05f257296 req-d5b4edd7-4c72-4e5e-8d1a-59530d5bf903 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.652 250273 DEBUG oslo_concurrency.lockutils [req-197e86f0-d823-4e36-98de-1bc05f257296 req-d5b4edd7-4c72-4e5e-8d1a-59530d5bf903 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.652 250273 DEBUG nova.compute.manager [req-197e86f0-d823-4e36-98de-1bc05f257296 req-d5b4edd7-4c72-4e5e-8d1a-59530d5bf903 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] No waiting events found dispatching network-vif-plugged-10fe80c9-2f99-4371-a60e-b8b226c250aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.652 250273 WARNING nova.compute.manager [req-197e86f0-d823-4e36-98de-1bc05f257296 req-d5b4edd7-4c72-4e5e-8d1a-59530d5bf903 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Received unexpected event network-vif-plugged-10fe80c9-2f99-4371-a60e-b8b226c250aa for instance with vm_state resized and task_state None.#033[00m
Jan 23 04:54:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:54:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:35.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.766 250273 INFO nova.compute.manager [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Took 23.35 seconds to build instance.#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.789 250273 DEBUG oslo_concurrency.lockutils [None req-f8834f3a-cb5b-4764-9001-9d671c95c652 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Lock "b3b41870-7e94-44fd-84bb-575dbf15c745" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.790 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "b3b41870-7e94-44fd-84bb-575dbf15c745" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 13.343s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.790 250273 INFO nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.790 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "b3b41870-7e94-44fd-84bb-575dbf15c745" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.873 250273 DEBUG oslo_concurrency.lockutils [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquiring lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.873 250273 DEBUG oslo_concurrency.lockutils [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:54:35 np0005593232 nova_compute[250269]: 2026-01-23 09:54:35.874 250273 DEBUG nova.compute.manager [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Going to confirm migration 12 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Jan 23 04:54:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:54:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1898: 321 pgs: 321 active+clean; 213 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 18 KiB/s wr, 79 op/s
Jan 23 04:54:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:54:37
Jan 23 04:54:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:54:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:54:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'images', 'default.rgw.control', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'vms']
Jan 23 04:54:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:54:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:37.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:54:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:54:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:54:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:54:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:54:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:54:37 np0005593232 nova_compute[250269]: 2026-01-23 09:54:37.690 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:54:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:37.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:54:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:54:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:54:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:54:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:54:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:54:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:54:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:54:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:54:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:54:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:54:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1899: 321 pgs: 321 active+clean; 256 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 196 op/s
Jan 23 04:54:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:39.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:39 np0005593232 nova_compute[250269]: 2026-01-23 09:54:39.564 250273 DEBUG neutronclient.v2_0.client [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 10fe80c9-2f99-4371-a60e-b8b226c250aa for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Jan 23 04:54:39 np0005593232 nova_compute[250269]: 2026-01-23 09:54:39.565 250273 DEBUG oslo_concurrency.lockutils [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquiring lock "refresh_cache-fffef24b-bb5b-41c6-a049-c1c4ba8f02fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:54:39 np0005593232 nova_compute[250269]: 2026-01-23 09:54:39.566 250273 DEBUG oslo_concurrency.lockutils [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquired lock "refresh_cache-fffef24b-bb5b-41c6-a049-c1c4ba8f02fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:54:39 np0005593232 nova_compute[250269]: 2026-01-23 09:54:39.566 250273 DEBUG nova.network.neutron [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:54:39 np0005593232 nova_compute[250269]: 2026-01-23 09:54:39.566 250273 DEBUG nova.objects.instance [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lazy-loading 'info_cache' on Instance uuid fffef24b-bb5b-41c6-a049-c1c4ba8f02fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:54:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:54:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:39.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:54:40 np0005593232 nova_compute[250269]: 2026-01-23 09:54:40.142 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1900: 321 pgs: 321 active+clean; 256 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Jan 23 04:54:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:54:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:54:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:41.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:54:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:41.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:42 np0005593232 NetworkManager[49057]: <info>  [1769162082.1494] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Jan 23 04:54:42 np0005593232 NetworkManager[49057]: <info>  [1769162082.1501] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/130)
Jan 23 04:54:42 np0005593232 nova_compute[250269]: 2026-01-23 09:54:42.148 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:42 np0005593232 nova_compute[250269]: 2026-01-23 09:54:42.320 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:42 np0005593232 ovn_controller[151001]: 2026-01-23T09:54:42Z|00258|binding|INFO|Releasing lport 88e564fc-0530-4eb4-b73d-ff707c593fe1 from this chassis (sb_readonly=0)
Jan 23 04:54:42 np0005593232 nova_compute[250269]: 2026-01-23 09:54:42.342 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1901: 321 pgs: 321 active+clean; 260 MiB data, 783 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 187 op/s
Jan 23 04:54:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:42.608 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:54:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:42.609 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:54:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:42.609 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:54:42 np0005593232 nova_compute[250269]: 2026-01-23 09:54:42.646 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:42.648 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:54:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:42.649 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:54:42 np0005593232 nova_compute[250269]: 2026-01-23 09:54:42.692 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:43 np0005593232 nova_compute[250269]: 2026-01-23 09:54:43.061 250273 DEBUG nova.compute.manager [req-f35fb353-4097-43f7-b088-a30d6f5f8deb req-26f79a83-ea14-496d-973d-01a8c7b38970 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Received event network-changed-3262bd82-5538-4d9c-af99-430bed9a5120 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:54:43 np0005593232 nova_compute[250269]: 2026-01-23 09:54:43.062 250273 DEBUG nova.compute.manager [req-f35fb353-4097-43f7-b088-a30d6f5f8deb req-26f79a83-ea14-496d-973d-01a8c7b38970 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Refreshing instance network info cache due to event network-changed-3262bd82-5538-4d9c-af99-430bed9a5120. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:54:43 np0005593232 nova_compute[250269]: 2026-01-23 09:54:43.062 250273 DEBUG oslo_concurrency.lockutils [req-f35fb353-4097-43f7-b088-a30d6f5f8deb req-26f79a83-ea14-496d-973d-01a8c7b38970 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-b3b41870-7e94-44fd-84bb-575dbf15c745" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:54:43 np0005593232 nova_compute[250269]: 2026-01-23 09:54:43.062 250273 DEBUG oslo_concurrency.lockutils [req-f35fb353-4097-43f7-b088-a30d6f5f8deb req-26f79a83-ea14-496d-973d-01a8c7b38970 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-b3b41870-7e94-44fd-84bb-575dbf15c745" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:54:43 np0005593232 nova_compute[250269]: 2026-01-23 09:54:43.062 250273 DEBUG nova.network.neutron [req-f35fb353-4097-43f7-b088-a30d6f5f8deb req-26f79a83-ea14-496d-973d-01a8c7b38970 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Refreshing network info cache for port 3262bd82-5538-4d9c-af99-430bed9a5120 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:54:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:54:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:43.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:54:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:54:43.651 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:54:43 np0005593232 nova_compute[250269]: 2026-01-23 09:54:43.717 250273 DEBUG nova.network.neutron [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] [instance: fffef24b-bb5b-41c6-a049-c1c4ba8f02fb] Updating instance_info_cache with network_info: [{"id": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "address": "fa:16:3e:45:b2:d4", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10fe80c9-2f", "ovs_interfaceid": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:54:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:43.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:43 np0005593232 nova_compute[250269]: 2026-01-23 09:54:43.744 250273 DEBUG oslo_concurrency.lockutils [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Releasing lock "refresh_cache-fffef24b-bb5b-41c6-a049-c1c4ba8f02fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:54:43 np0005593232 nova_compute[250269]: 2026-01-23 09:54:43.745 250273 DEBUG nova.objects.instance [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lazy-loading 'migration_context' on Instance uuid fffef24b-bb5b-41c6-a049-c1c4ba8f02fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:54:43 np0005593232 nova_compute[250269]: 2026-01-23 09:54:43.855 250273 DEBUG nova.storage.rbd_utils [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] removing snapshot(nova-resize) on rbd image(fffef24b-bb5b-41c6-a049-c1c4ba8f02fb_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 23 04:54:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1902: 321 pgs: 321 active+clean; 260 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 179 op/s
Jan 23 04:54:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Jan 23 04:54:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Jan 23 04:54:44 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Jan 23 04:54:44 np0005593232 nova_compute[250269]: 2026-01-23 09:54:44.803 250273 DEBUG nova.virt.libvirt.vif [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:53:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1974132906',display_name='tempest-ServerDiskConfigTestJSON-server-1974132906',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1974132906',id=79,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:54:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='86d938c8e2bb41a79012befd500d1088',ramdisk_id='',reservation_id='r-09i9yxy4',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerDiskConfigTestJSON-211417238',owner_user_name='tempest-ServerDiskConfigTestJSON-211417238-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:54:32Z,user_data=None,user_id='0cfac2191989448ead77e75ca3910ac4',uuid=fffef24b-bb5b-41c6-a049-c1c4ba8f02fb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "address": "fa:16:3e:45:b2:d4", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10fe80c9-2f", "ovs_interfaceid": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:54:44 np0005593232 nova_compute[250269]: 2026-01-23 09:54:44.804 250273 DEBUG nova.network.os_vif_util [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Converting VIF {"id": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "address": "fa:16:3e:45:b2:d4", "network": {"id": "6d2cdc4c-47a0-475b-8e71-39465d365de3", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-1859353210-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "86d938c8e2bb41a79012befd500d1088", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap10fe80c9-2f", "ovs_interfaceid": "10fe80c9-2f99-4371-a60e-b8b226c250aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:54:44 np0005593232 nova_compute[250269]: 2026-01-23 09:54:44.805 250273 DEBUG nova.network.os_vif_util [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:b2:d4,bridge_name='br-int',has_traffic_filtering=True,id=10fe80c9-2f99-4371-a60e-b8b226c250aa,network=Network(6d2cdc4c-47a0-475b-8e71-39465d365de3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10fe80c9-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:54:44 np0005593232 nova_compute[250269]: 2026-01-23 09:54:44.805 250273 DEBUG os_vif [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:b2:d4,bridge_name='br-int',has_traffic_filtering=True,id=10fe80c9-2f99-4371-a60e-b8b226c250aa,network=Network(6d2cdc4c-47a0-475b-8e71-39465d365de3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10fe80c9-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:54:44 np0005593232 nova_compute[250269]: 2026-01-23 09:54:44.807 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:44 np0005593232 nova_compute[250269]: 2026-01-23 09:54:44.807 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap10fe80c9-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:54:44 np0005593232 nova_compute[250269]: 2026-01-23 09:54:44.808 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:54:44 np0005593232 nova_compute[250269]: 2026-01-23 09:54:44.810 250273 INFO os_vif [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:b2:d4,bridge_name='br-int',has_traffic_filtering=True,id=10fe80c9-2f99-4371-a60e-b8b226c250aa,network=Network(6d2cdc4c-47a0-475b-8e71-39465d365de3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap10fe80c9-2f')#033[00m
Jan 23 04:54:44 np0005593232 nova_compute[250269]: 2026-01-23 09:54:44.811 250273 DEBUG oslo_concurrency.lockutils [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:54:44 np0005593232 nova_compute[250269]: 2026-01-23 09:54:44.811 250273 DEBUG oslo_concurrency.lockutils [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:54:45 np0005593232 nova_compute[250269]: 2026-01-23 09:54:45.001 250273 DEBUG oslo_concurrency.processutils [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:54:45 np0005593232 nova_compute[250269]: 2026-01-23 09:54:45.184 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:45 np0005593232 nova_compute[250269]: 2026-01-23 09:54:45.455 250273 DEBUG oslo_concurrency.processutils [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:54:45 np0005593232 nova_compute[250269]: 2026-01-23 09:54:45.462 250273 DEBUG nova.compute.provider_tree [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:54:45 np0005593232 nova_compute[250269]: 2026-01-23 09:54:45.499 250273 DEBUG nova.scheduler.client.report [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:54:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:45.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:45 np0005593232 nova_compute[250269]: 2026-01-23 09:54:45.582 250273 DEBUG oslo_concurrency.lockutils [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:54:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:45.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:45 np0005593232 nova_compute[250269]: 2026-01-23 09:54:45.751 250273 INFO nova.scheduler.client.report [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Deleted allocation for migration 9a6f94e7-d356-40f3-896e-69daee93b7ad#033[00m
Jan 23 04:54:45 np0005593232 nova_compute[250269]: 2026-01-23 09:54:45.826 250273 DEBUG oslo_concurrency.lockutils [None req-f8a20b2a-b2f9-4314-9f38-7ad89b93d412 0cfac2191989448ead77e75ca3910ac4 86d938c8e2bb41a79012befd500d1088 - - default default] Lock "fffef24b-bb5b-41c6-a049-c1c4ba8f02fb" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 9.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:54:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:54:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1904: 321 pgs: 321 active+clean; 260 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 163 op/s
Jan 23 04:54:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:54:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:54:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:54:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:54:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004153434999069421 of space, bias 1.0, pg target 1.2460304997208262 quantized to 32 (current 32)
Jan 23 04:54:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:54:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009900122720081892 of space, bias 1.0, pg target 0.29601366933044854 quantized to 32 (current 32)
Jan 23 04:54:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:54:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:54:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:54:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 23 04:54:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:54:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 04:54:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:54:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:54:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:54:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 04:54:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:54:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 04:54:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:54:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:54:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:54:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 04:54:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:54:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:47.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:54:47 np0005593232 nova_compute[250269]: 2026-01-23 09:54:47.694 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:54:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:47.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:54:47 np0005593232 nova_compute[250269]: 2026-01-23 09:54:47.872 250273 DEBUG nova.network.neutron [req-f35fb353-4097-43f7-b088-a30d6f5f8deb req-26f79a83-ea14-496d-973d-01a8c7b38970 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Updated VIF entry in instance network info cache for port 3262bd82-5538-4d9c-af99-430bed9a5120. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:54:47 np0005593232 nova_compute[250269]: 2026-01-23 09:54:47.873 250273 DEBUG nova.network.neutron [req-f35fb353-4097-43f7-b088-a30d6f5f8deb req-26f79a83-ea14-496d-973d-01a8c7b38970 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Updating instance_info_cache with network_info: [{"id": "3262bd82-5538-4d9c-af99-430bed9a5120", "address": "fa:16:3e:70:57:58", "network": {"id": "5f61b5ab-0868-4f2c-b962-bdb7ecf89061", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1157777070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bf8efe4dc7e34393b5cd5a5ef2735ecf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3262bd82-55", "ovs_interfaceid": "3262bd82-5538-4d9c-af99-430bed9a5120", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:54:47 np0005593232 nova_compute[250269]: 2026-01-23 09:54:47.940 250273 DEBUG oslo_concurrency.lockutils [req-f35fb353-4097-43f7-b088-a30d6f5f8deb req-26f79a83-ea14-496d-973d-01a8c7b38970 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-b3b41870-7e94-44fd-84bb-575dbf15c745" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:54:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1905: 321 pgs: 321 active+clean; 273 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.7 MiB/s wr, 112 op/s
Jan 23 04:54:48 np0005593232 ovn_controller[151001]: 2026-01-23T09:54:48Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:70:57:58 10.100.0.3
Jan 23 04:54:48 np0005593232 ovn_controller[151001]: 2026-01-23T09:54:48Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:57:58 10.100.0.3
Jan 23 04:54:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:49.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:49.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:50 np0005593232 nova_compute[250269]: 2026-01-23 09:54:50.185 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1906: 321 pgs: 321 active+clean; 273 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 1.7 MiB/s wr, 112 op/s
Jan 23 04:54:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:54:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Jan 23 04:54:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Jan 23 04:54:51 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Jan 23 04:54:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:51.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:51.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1908: 321 pgs: 321 active+clean; 283 MiB data, 801 MiB used, 20 GiB / 21 GiB avail; 916 KiB/s rd, 3.0 MiB/s wr, 137 op/s
Jan 23 04:54:52 np0005593232 nova_compute[250269]: 2026-01-23 09:54:52.779 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:54:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:53.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:54:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:53.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1909: 321 pgs: 321 active+clean; 244 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.6 MiB/s wr, 250 op/s
Jan 23 04:54:55 np0005593232 nova_compute[250269]: 2026-01-23 09:54:55.188 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:54:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:55.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:54:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:55.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:54:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1910: 321 pgs: 321 active+clean; 213 MiB data, 782 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.6 MiB/s wr, 243 op/s
Jan 23 04:54:57 np0005593232 podman[302821]: 2026-01-23 09:54:57.42636263 +0000 UTC m=+0.079844712 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 04:54:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:57.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:57.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:57 np0005593232 nova_compute[250269]: 2026-01-23 09:54:57.785 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:54:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1911: 321 pgs: 321 active+clean; 213 MiB data, 776 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 998 KiB/s wr, 178 op/s
Jan 23 04:54:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:54:59.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:54:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:54:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:54:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:54:59.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:00 np0005593232 nova_compute[250269]: 2026-01-23 09:55:00.190 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1912: 321 pgs: 321 active+clean; 213 MiB data, 776 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 998 KiB/s wr, 178 op/s
Jan 23 04:55:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:55:01 np0005593232 podman[302899]: 2026-01-23 09:55:01.421734504 +0000 UTC m=+0.076386846 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 23 04:55:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:55:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:01.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:55:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:55:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:01.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:55:02 np0005593232 nova_compute[250269]: 2026-01-23 09:55:02.364 250273 DEBUG nova.objects.instance [None req-4665919c-8915-4c64-a5ad-fd5b01c262ff 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Lazy-loading 'flavor' on Instance uuid b3b41870-7e94-44fd-84bb-575dbf15c745 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:55:02 np0005593232 nova_compute[250269]: 2026-01-23 09:55:02.399 250273 DEBUG oslo_concurrency.lockutils [None req-4665919c-8915-4c64-a5ad-fd5b01c262ff 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Acquiring lock "refresh_cache-b3b41870-7e94-44fd-84bb-575dbf15c745" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:55:02 np0005593232 nova_compute[250269]: 2026-01-23 09:55:02.400 250273 DEBUG oslo_concurrency.lockutils [None req-4665919c-8915-4c64-a5ad-fd5b01c262ff 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Acquired lock "refresh_cache-b3b41870-7e94-44fd-84bb-575dbf15c745" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:55:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1913: 321 pgs: 321 active+clean; 260 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.7 MiB/s wr, 189 op/s
Jan 23 04:55:02 np0005593232 nova_compute[250269]: 2026-01-23 09:55:02.788 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:03.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:55:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:03.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:55:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1914: 321 pgs: 321 active+clean; 260 MiB data, 793 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 162 op/s
Jan 23 04:55:05 np0005593232 nova_compute[250269]: 2026-01-23 09:55:05.195 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:55:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:05.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:55:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:05.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:55:06 np0005593232 nova_compute[250269]: 2026-01-23 09:55:06.089 250273 DEBUG nova.network.neutron [None req-4665919c-8915-4c64-a5ad-fd5b01c262ff 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:55:06 np0005593232 nova_compute[250269]: 2026-01-23 09:55:06.305 250273 DEBUG nova.compute.manager [req-9d4783d2-0cfc-4f51-b396-57bdc719e242 req-3a84d2cc-35fd-464b-81d6-7bb3be790246 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Received event network-changed-3262bd82-5538-4d9c-af99-430bed9a5120 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:55:06 np0005593232 nova_compute[250269]: 2026-01-23 09:55:06.305 250273 DEBUG nova.compute.manager [req-9d4783d2-0cfc-4f51-b396-57bdc719e242 req-3a84d2cc-35fd-464b-81d6-7bb3be790246 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Refreshing instance network info cache due to event network-changed-3262bd82-5538-4d9c-af99-430bed9a5120. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:55:06 np0005593232 nova_compute[250269]: 2026-01-23 09:55:06.306 250273 DEBUG oslo_concurrency.lockutils [req-9d4783d2-0cfc-4f51-b396-57bdc719e242 req-3a84d2cc-35fd-464b-81d6-7bb3be790246 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-b3b41870-7e94-44fd-84bb-575dbf15c745" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:55:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1915: 321 pgs: 321 active+clean; 269 MiB data, 800 MiB used, 20 GiB / 21 GiB avail; 114 KiB/s rd, 2.4 MiB/s wr, 60 op/s
Jan 23 04:55:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:55:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:07.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:55:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:55:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:55:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:55:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:55:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:55:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:55:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:07.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:07 np0005593232 nova_compute[250269]: 2026-01-23 09:55:07.789 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1916: 321 pgs: 321 active+clean; 293 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 353 KiB/s rd, 3.9 MiB/s wr, 105 op/s
Jan 23 04:55:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:55:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:09.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:55:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:09.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:09 np0005593232 nova_compute[250269]: 2026-01-23 09:55:09.872 250273 DEBUG nova.network.neutron [None req-4665919c-8915-4c64-a5ad-fd5b01c262ff 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Updating instance_info_cache with network_info: [{"id": "3262bd82-5538-4d9c-af99-430bed9a5120", "address": "fa:16:3e:70:57:58", "network": {"id": "5f61b5ab-0868-4f2c-b962-bdb7ecf89061", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1157777070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bf8efe4dc7e34393b5cd5a5ef2735ecf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3262bd82-55", "ovs_interfaceid": "3262bd82-5538-4d9c-af99-430bed9a5120", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:55:09 np0005593232 nova_compute[250269]: 2026-01-23 09:55:09.928 250273 DEBUG oslo_concurrency.lockutils [None req-4665919c-8915-4c64-a5ad-fd5b01c262ff 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Releasing lock "refresh_cache-b3b41870-7e94-44fd-84bb-575dbf15c745" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:55:09 np0005593232 nova_compute[250269]: 2026-01-23 09:55:09.929 250273 DEBUG nova.compute.manager [None req-4665919c-8915-4c64-a5ad-fd5b01c262ff 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Jan 23 04:55:09 np0005593232 nova_compute[250269]: 2026-01-23 09:55:09.929 250273 DEBUG nova.compute.manager [None req-4665919c-8915-4c64-a5ad-fd5b01c262ff 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] network_info to inject: |[{"id": "3262bd82-5538-4d9c-af99-430bed9a5120", "address": "fa:16:3e:70:57:58", "network": {"id": "5f61b5ab-0868-4f2c-b962-bdb7ecf89061", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1157777070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bf8efe4dc7e34393b5cd5a5ef2735ecf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3262bd82-55", "ovs_interfaceid": "3262bd82-5538-4d9c-af99-430bed9a5120", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Jan 23 04:55:09 np0005593232 nova_compute[250269]: 2026-01-23 09:55:09.932 250273 DEBUG oslo_concurrency.lockutils [req-9d4783d2-0cfc-4f51-b396-57bdc719e242 req-3a84d2cc-35fd-464b-81d6-7bb3be790246 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-b3b41870-7e94-44fd-84bb-575dbf15c745" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:55:09 np0005593232 nova_compute[250269]: 2026-01-23 09:55:09.932 250273 DEBUG nova.network.neutron [req-9d4783d2-0cfc-4f51-b396-57bdc719e242 req-3a84d2cc-35fd-464b-81d6-7bb3be790246 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Refreshing network info cache for port 3262bd82-5538-4d9c-af99-430bed9a5120 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:55:10 np0005593232 nova_compute[250269]: 2026-01-23 09:55:10.197 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1917: 321 pgs: 321 active+clean; 293 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 353 KiB/s rd, 3.9 MiB/s wr, 105 op/s
Jan 23 04:55:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:55:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:11.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:55:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:11.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:55:12 np0005593232 nova_compute[250269]: 2026-01-23 09:55:12.085 250273 DEBUG nova.objects.instance [None req-6a75b0a6-005f-47f5-ad61-27a018c94708 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Lazy-loading 'flavor' on Instance uuid b3b41870-7e94-44fd-84bb-575dbf15c745 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:55:12 np0005593232 nova_compute[250269]: 2026-01-23 09:55:12.122 250273 DEBUG oslo_concurrency.lockutils [None req-6a75b0a6-005f-47f5-ad61-27a018c94708 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Acquiring lock "refresh_cache-b3b41870-7e94-44fd-84bb-575dbf15c745" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:55:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1918: 321 pgs: 321 active+clean; 293 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 362 KiB/s rd, 4.0 MiB/s wr, 117 op/s
Jan 23 04:55:12 np0005593232 nova_compute[250269]: 2026-01-23 09:55:12.791 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:12 np0005593232 nova_compute[250269]: 2026-01-23 09:55:12.882 250273 DEBUG nova.network.neutron [req-9d4783d2-0cfc-4f51-b396-57bdc719e242 req-3a84d2cc-35fd-464b-81d6-7bb3be790246 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Updated VIF entry in instance network info cache for port 3262bd82-5538-4d9c-af99-430bed9a5120. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:55:12 np0005593232 nova_compute[250269]: 2026-01-23 09:55:12.883 250273 DEBUG nova.network.neutron [req-9d4783d2-0cfc-4f51-b396-57bdc719e242 req-3a84d2cc-35fd-464b-81d6-7bb3be790246 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Updating instance_info_cache with network_info: [{"id": "3262bd82-5538-4d9c-af99-430bed9a5120", "address": "fa:16:3e:70:57:58", "network": {"id": "5f61b5ab-0868-4f2c-b962-bdb7ecf89061", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1157777070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bf8efe4dc7e34393b5cd5a5ef2735ecf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3262bd82-55", "ovs_interfaceid": "3262bd82-5538-4d9c-af99-430bed9a5120", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:55:12 np0005593232 nova_compute[250269]: 2026-01-23 09:55:12.925 250273 DEBUG oslo_concurrency.lockutils [req-9d4783d2-0cfc-4f51-b396-57bdc719e242 req-3a84d2cc-35fd-464b-81d6-7bb3be790246 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-b3b41870-7e94-44fd-84bb-575dbf15c745" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:55:12 np0005593232 nova_compute[250269]: 2026-01-23 09:55:12.926 250273 DEBUG oslo_concurrency.lockutils [None req-6a75b0a6-005f-47f5-ad61-27a018c94708 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Acquired lock "refresh_cache-b3b41870-7e94-44fd-84bb-575dbf15c745" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:55:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:13.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:55:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:13.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:55:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1919: 321 pgs: 321 active+clean; 293 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 320 KiB/s rd, 2.2 MiB/s wr, 84 op/s
Jan 23 04:55:15 np0005593232 nova_compute[250269]: 2026-01-23 09:55:15.130 250273 DEBUG nova.network.neutron [None req-6a75b0a6-005f-47f5-ad61-27a018c94708 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:55:15 np0005593232 nova_compute[250269]: 2026-01-23 09:55:15.198 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:15.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:15.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:55:16 np0005593232 nova_compute[250269]: 2026-01-23 09:55:16.470 250273 DEBUG nova.compute.manager [req-fe2ebcf3-e496-4114-89d2-220c2ceb49ae req-4a0e0c0f-f654-4986-a16d-04b58e91a89d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Received event network-changed-3262bd82-5538-4d9c-af99-430bed9a5120 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:55:16 np0005593232 nova_compute[250269]: 2026-01-23 09:55:16.471 250273 DEBUG nova.compute.manager [req-fe2ebcf3-e496-4114-89d2-220c2ceb49ae req-4a0e0c0f-f654-4986-a16d-04b58e91a89d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Refreshing instance network info cache due to event network-changed-3262bd82-5538-4d9c-af99-430bed9a5120. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:55:16 np0005593232 nova_compute[250269]: 2026-01-23 09:55:16.471 250273 DEBUG oslo_concurrency.lockutils [req-fe2ebcf3-e496-4114-89d2-220c2ceb49ae req-4a0e0c0f-f654-4986-a16d-04b58e91a89d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-b3b41870-7e94-44fd-84bb-575dbf15c745" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:55:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Jan 23 04:55:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Jan 23 04:55:16 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Jan 23 04:55:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1921: 321 pgs: 321 active+clean; 293 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.9 MiB/s wr, 119 op/s
Jan 23 04:55:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:17.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:17.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:17 np0005593232 nova_compute[250269]: 2026-01-23 09:55:17.793 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1922: 321 pgs: 321 active+clean; 293 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 33 KiB/s wr, 130 op/s
Jan 23 04:55:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:19.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:19.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:20 np0005593232 nova_compute[250269]: 2026-01-23 09:55:20.200 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:20 np0005593232 nova_compute[250269]: 2026-01-23 09:55:20.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:55:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1923: 321 pgs: 321 active+clean; 293 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 33 KiB/s wr, 130 op/s
Jan 23 04:55:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:55:21 np0005593232 nova_compute[250269]: 2026-01-23 09:55:21.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:55:21 np0005593232 nova_compute[250269]: 2026-01-23 09:55:21.303 250273 DEBUG nova.network.neutron [None req-6a75b0a6-005f-47f5-ad61-27a018c94708 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Updating instance_info_cache with network_info: [{"id": "3262bd82-5538-4d9c-af99-430bed9a5120", "address": "fa:16:3e:70:57:58", "network": {"id": "5f61b5ab-0868-4f2c-b962-bdb7ecf89061", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1157777070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bf8efe4dc7e34393b5cd5a5ef2735ecf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3262bd82-55", "ovs_interfaceid": "3262bd82-5538-4d9c-af99-430bed9a5120", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:55:21 np0005593232 nova_compute[250269]: 2026-01-23 09:55:21.381 250273 DEBUG oslo_concurrency.lockutils [None req-6a75b0a6-005f-47f5-ad61-27a018c94708 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Releasing lock "refresh_cache-b3b41870-7e94-44fd-84bb-575dbf15c745" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:55:21 np0005593232 nova_compute[250269]: 2026-01-23 09:55:21.382 250273 DEBUG nova.compute.manager [None req-6a75b0a6-005f-47f5-ad61-27a018c94708 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Jan 23 04:55:21 np0005593232 nova_compute[250269]: 2026-01-23 09:55:21.382 250273 DEBUG nova.compute.manager [None req-6a75b0a6-005f-47f5-ad61-27a018c94708 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] network_info to inject: |[{"id": "3262bd82-5538-4d9c-af99-430bed9a5120", "address": "fa:16:3e:70:57:58", "network": {"id": "5f61b5ab-0868-4f2c-b962-bdb7ecf89061", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1157777070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bf8efe4dc7e34393b5cd5a5ef2735ecf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3262bd82-55", "ovs_interfaceid": "3262bd82-5538-4d9c-af99-430bed9a5120", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Jan 23 04:55:21 np0005593232 nova_compute[250269]: 2026-01-23 09:55:21.384 250273 DEBUG oslo_concurrency.lockutils [req-fe2ebcf3-e496-4114-89d2-220c2ceb49ae req-4a0e0c0f-f654-4986-a16d-04b58e91a89d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-b3b41870-7e94-44fd-84bb-575dbf15c745" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:55:21 np0005593232 nova_compute[250269]: 2026-01-23 09:55:21.385 250273 DEBUG nova.network.neutron [req-fe2ebcf3-e496-4114-89d2-220c2ceb49ae req-4a0e0c0f-f654-4986-a16d-04b58e91a89d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Refreshing network info cache for port 3262bd82-5538-4d9c-af99-430bed9a5120 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:55:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:55:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:21.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:55:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:21.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1924: 321 pgs: 321 active+clean; 293 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 3.3 KiB/s wr, 258 op/s
Jan 23 04:55:22 np0005593232 nova_compute[250269]: 2026-01-23 09:55:22.794 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:22 np0005593232 nova_compute[250269]: 2026-01-23 09:55:22.810 250273 DEBUG oslo_concurrency.lockutils [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Acquiring lock "b3b41870-7e94-44fd-84bb-575dbf15c745" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:55:22 np0005593232 nova_compute[250269]: 2026-01-23 09:55:22.810 250273 DEBUG oslo_concurrency.lockutils [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Lock "b3b41870-7e94-44fd-84bb-575dbf15c745" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:55:22 np0005593232 nova_compute[250269]: 2026-01-23 09:55:22.810 250273 DEBUG oslo_concurrency.lockutils [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Acquiring lock "b3b41870-7e94-44fd-84bb-575dbf15c745-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:55:22 np0005593232 nova_compute[250269]: 2026-01-23 09:55:22.811 250273 DEBUG oslo_concurrency.lockutils [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Lock "b3b41870-7e94-44fd-84bb-575dbf15c745-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:55:22 np0005593232 nova_compute[250269]: 2026-01-23 09:55:22.811 250273 DEBUG oslo_concurrency.lockutils [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Lock "b3b41870-7e94-44fd-84bb-575dbf15c745-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:55:22 np0005593232 nova_compute[250269]: 2026-01-23 09:55:22.812 250273 INFO nova.compute.manager [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Terminating instance#033[00m
Jan 23 04:55:22 np0005593232 nova_compute[250269]: 2026-01-23 09:55:22.813 250273 DEBUG nova.compute.manager [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 04:55:23 np0005593232 kernel: tap3262bd82-55 (unregistering): left promiscuous mode
Jan 23 04:55:23 np0005593232 NetworkManager[49057]: <info>  [1769162123.0065] device (tap3262bd82-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:55:23 np0005593232 nova_compute[250269]: 2026-01-23 09:55:23.018 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:23 np0005593232 nova_compute[250269]: 2026-01-23 09:55:23.020 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:23 np0005593232 ovn_controller[151001]: 2026-01-23T09:55:23Z|00259|binding|INFO|Releasing lport 3262bd82-5538-4d9c-af99-430bed9a5120 from this chassis (sb_readonly=0)
Jan 23 04:55:23 np0005593232 ovn_controller[151001]: 2026-01-23T09:55:23Z|00260|binding|INFO|Setting lport 3262bd82-5538-4d9c-af99-430bed9a5120 down in Southbound
Jan 23 04:55:23 np0005593232 ovn_controller[151001]: 2026-01-23T09:55:23Z|00261|binding|INFO|Removing iface tap3262bd82-55 ovn-installed in OVS
Jan 23 04:55:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:55:23.028 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:57:58 10.100.0.3'], port_security=['fa:16:3e:70:57:58 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b3b41870-7e94-44fd-84bb-575dbf15c745', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5f61b5ab-0868-4f2c-b962-bdb7ecf89061', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bf8efe4dc7e34393b5cd5a5ef2735ecf', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'fb2fef88-afa1-4a2c-98c2-a88fbf18d3ca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c5ef82d7-36aa-4d49-89a7-35476c106254, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=3262bd82-5538-4d9c-af99-430bed9a5120) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:55:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:55:23.029 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 3262bd82-5538-4d9c-af99-430bed9a5120 in datapath 5f61b5ab-0868-4f2c-b962-bdb7ecf89061 unbound from our chassis#033[00m
Jan 23 04:55:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:55:23.031 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5f61b5ab-0868-4f2c-b962-bdb7ecf89061, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 04:55:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:55:23.033 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[efee5de0-d9ed-47aa-93f8-42a8d47f7741]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:55:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:55:23.034 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5f61b5ab-0868-4f2c-b962-bdb7ecf89061 namespace which is not needed anymore#033[00m
Jan 23 04:55:23 np0005593232 nova_compute[250269]: 2026-01-23 09:55:23.047 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:23 np0005593232 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000052.scope: Deactivated successfully.
Jan 23 04:55:23 np0005593232 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d00000052.scope: Consumed 14.841s CPU time.
Jan 23 04:55:23 np0005593232 systemd-machined[215836]: Machine qemu-31-instance-00000052 terminated.
Jan 23 04:55:23 np0005593232 neutron-haproxy-ovnmeta-5f61b5ab-0868-4f2c-b962-bdb7ecf89061[302684]: [NOTICE]   (302689) : haproxy version is 2.8.14-c23fe91
Jan 23 04:55:23 np0005593232 neutron-haproxy-ovnmeta-5f61b5ab-0868-4f2c-b962-bdb7ecf89061[302684]: [NOTICE]   (302689) : path to executable is /usr/sbin/haproxy
Jan 23 04:55:23 np0005593232 neutron-haproxy-ovnmeta-5f61b5ab-0868-4f2c-b962-bdb7ecf89061[302684]: [WARNING]  (302689) : Exiting Master process...
Jan 23 04:55:23 np0005593232 neutron-haproxy-ovnmeta-5f61b5ab-0868-4f2c-b962-bdb7ecf89061[302684]: [ALERT]    (302689) : Current worker (302691) exited with code 143 (Terminated)
Jan 23 04:55:23 np0005593232 neutron-haproxy-ovnmeta-5f61b5ab-0868-4f2c-b962-bdb7ecf89061[302684]: [WARNING]  (302689) : All workers exited. Exiting... (0)
Jan 23 04:55:23 np0005593232 systemd[1]: libpod-b2afd8d1efa4168ee4e05592c1df88a6d9aae2181c7a57b5d38cefa459472b4e.scope: Deactivated successfully.
Jan 23 04:55:23 np0005593232 podman[303003]: 2026-01-23 09:55:23.191661752 +0000 UTC m=+0.064225608 container died b2afd8d1efa4168ee4e05592c1df88a6d9aae2181c7a57b5d38cefa459472b4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5f61b5ab-0868-4f2c-b962-bdb7ecf89061, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:55:23 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b2afd8d1efa4168ee4e05592c1df88a6d9aae2181c7a57b5d38cefa459472b4e-userdata-shm.mount: Deactivated successfully.
Jan 23 04:55:23 np0005593232 systemd[1]: var-lib-containers-storage-overlay-55791aa0474064d9b0a8eaa4201ba7e15cd57ce30c9ed6e4c845b41d759f09ce-merged.mount: Deactivated successfully.
Jan 23 04:55:23 np0005593232 podman[303003]: 2026-01-23 09:55:23.241489209 +0000 UTC m=+0.114053055 container cleanup b2afd8d1efa4168ee4e05592c1df88a6d9aae2181c7a57b5d38cefa459472b4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5f61b5ab-0868-4f2c-b962-bdb7ecf89061, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 04:55:23 np0005593232 systemd[1]: libpod-conmon-b2afd8d1efa4168ee4e05592c1df88a6d9aae2181c7a57b5d38cefa459472b4e.scope: Deactivated successfully.
Jan 23 04:55:23 np0005593232 nova_compute[250269]: 2026-01-23 09:55:23.252 250273 INFO nova.virt.libvirt.driver [-] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Instance destroyed successfully.#033[00m
Jan 23 04:55:23 np0005593232 nova_compute[250269]: 2026-01-23 09:55:23.254 250273 DEBUG nova.objects.instance [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Lazy-loading 'resources' on Instance uuid b3b41870-7e94-44fd-84bb-575dbf15c745 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:55:23 np0005593232 podman[303044]: 2026-01-23 09:55:23.316899787 +0000 UTC m=+0.045558628 container remove b2afd8d1efa4168ee4e05592c1df88a6d9aae2181c7a57b5d38cefa459472b4e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5f61b5ab-0868-4f2c-b962-bdb7ecf89061, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:55:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:55:23.322 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c1bdc373-6707-4ecd-b4ce-1fad83b1fa69]: (4, ('Fri Jan 23 09:55:23 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5f61b5ab-0868-4f2c-b962-bdb7ecf89061 (b2afd8d1efa4168ee4e05592c1df88a6d9aae2181c7a57b5d38cefa459472b4e)\nb2afd8d1efa4168ee4e05592c1df88a6d9aae2181c7a57b5d38cefa459472b4e\nFri Jan 23 09:55:23 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5f61b5ab-0868-4f2c-b962-bdb7ecf89061 (b2afd8d1efa4168ee4e05592c1df88a6d9aae2181c7a57b5d38cefa459472b4e)\nb2afd8d1efa4168ee4e05592c1df88a6d9aae2181c7a57b5d38cefa459472b4e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:55:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:55:23.324 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9db22c39-d30c-4e38-af32-6758fa27ca02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:55:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:55:23.325 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5f61b5ab-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:55:23 np0005593232 nova_compute[250269]: 2026-01-23 09:55:23.327 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:23 np0005593232 kernel: tap5f61b5ab-00: left promiscuous mode
Jan 23 04:55:23 np0005593232 nova_compute[250269]: 2026-01-23 09:55:23.344 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:55:23.348 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[adf861a1-868a-409b-b12c-6ab0199a454a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:55:23 np0005593232 nova_compute[250269]: 2026-01-23 09:55:23.361 250273 DEBUG nova.virt.libvirt.vif [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:54:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1595370930',display_name='tempest-AttachInterfacesUnderV243Test-server-1595370930',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1595370930',id=82,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNyf5h128AUMpfjBt1qf29lypfdoHeGHB/3CwSYlps3D0DZlGiUWRB29lfHF8y4riYrxOp4GC+XlBNfb99CaksvAr8c1p4UMqTuQ6hcTzvjiih7FGAE7LGbuNDDdzZODNA==',key_name='tempest-keypair-2005272547',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:54:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='bf8efe4dc7e34393b5cd5a5ef2735ecf',ramdisk_id='',reservation_id='r-z491gojt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1038514400',owner_user_name='tempest-AttachInterfacesUnderV243Test-1038514400-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:55:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='040257fcfb8e485989e95807791e25f6',uuid=b3b41870-7e94-44fd-84bb-575dbf15c745,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3262bd82-5538-4d9c-af99-430bed9a5120", "address": "fa:16:3e:70:57:58", "network": {"id": "5f61b5ab-0868-4f2c-b962-bdb7ecf89061", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1157777070-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bf8efe4dc7e34393b5cd5a5ef2735ecf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3262bd82-55", "ovs_interfaceid": "3262bd82-5538-4d9c-af99-430bed9a5120", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:55:23 np0005593232 nova_compute[250269]: 2026-01-23 09:55:23.361 250273 DEBUG nova.network.os_vif_util [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Converting VIF {"id": "3262bd82-5538-4d9c-af99-430bed9a5120", "address": "fa:16:3e:70:57:58", "network": {"id": "5f61b5ab-0868-4f2c-b962-bdb7ecf89061", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1157777070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bf8efe4dc7e34393b5cd5a5ef2735ecf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3262bd82-55", "ovs_interfaceid": "3262bd82-5538-4d9c-af99-430bed9a5120", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:55:23 np0005593232 nova_compute[250269]: 2026-01-23 09:55:23.362 250273 DEBUG nova.network.os_vif_util [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:57:58,bridge_name='br-int',has_traffic_filtering=True,id=3262bd82-5538-4d9c-af99-430bed9a5120,network=Network(5f61b5ab-0868-4f2c-b962-bdb7ecf89061),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3262bd82-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:55:23 np0005593232 nova_compute[250269]: 2026-01-23 09:55:23.363 250273 DEBUG os_vif [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:57:58,bridge_name='br-int',has_traffic_filtering=True,id=3262bd82-5538-4d9c-af99-430bed9a5120,network=Network(5f61b5ab-0868-4f2c-b962-bdb7ecf89061),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3262bd82-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:55:23 np0005593232 nova_compute[250269]: 2026-01-23 09:55:23.364 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:23 np0005593232 nova_compute[250269]: 2026-01-23 09:55:23.364 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3262bd82-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:55:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:55:23.365 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8b896e55-7484-42ce-af52-c7e85c449486]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:55:23 np0005593232 nova_compute[250269]: 2026-01-23 09:55:23.366 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:23 np0005593232 nova_compute[250269]: 2026-01-23 09:55:23.367 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:55:23.367 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b53165-2025-400f-9ad0-684dd574258c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:55:23 np0005593232 nova_compute[250269]: 2026-01-23 09:55:23.371 250273 INFO os_vif [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:57:58,bridge_name='br-int',has_traffic_filtering=True,id=3262bd82-5538-4d9c-af99-430bed9a5120,network=Network(5f61b5ab-0868-4f2c-b962-bdb7ecf89061),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3262bd82-55')#033[00m
Jan 23 04:55:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:55:23.383 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1ae25206-1cfa-4614-8cc5-3a472e63dfaa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598427, 'reachable_time': 42814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303101, 'error': None, 'target': 'ovnmeta-5f61b5ab-0868-4f2c-b962-bdb7ecf89061', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:55:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:55:23.387 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5f61b5ab-0868-4f2c-b962-bdb7ecf89061 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 04:55:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:55:23.387 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[a2883335-d0b0-4efe-87bc-e63abdccff4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:55:23 np0005593232 systemd[1]: run-netns-ovnmeta\x2d5f61b5ab\x2d0868\x2d4f2c\x2db962\x2dbdb7ecf89061.mount: Deactivated successfully.
Jan 23 04:55:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:23.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:23.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:23 np0005593232 nova_compute[250269]: 2026-01-23 09:55:23.828 250273 INFO nova.virt.libvirt.driver [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Deleting instance files /var/lib/nova/instances/b3b41870-7e94-44fd-84bb-575dbf15c745_del#033[00m
Jan 23 04:55:23 np0005593232 nova_compute[250269]: 2026-01-23 09:55:23.829 250273 INFO nova.virt.libvirt.driver [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Deletion of /var/lib/nova/instances/b3b41870-7e94-44fd-84bb-575dbf15c745_del complete#033[00m
Jan 23 04:55:23 np0005593232 nova_compute[250269]: 2026-01-23 09:55:23.936 250273 INFO nova.compute.manager [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Took 1.12 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 04:55:23 np0005593232 nova_compute[250269]: 2026-01-23 09:55:23.936 250273 DEBUG oslo.service.loopingcall [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 04:55:23 np0005593232 nova_compute[250269]: 2026-01-23 09:55:23.937 250273 DEBUG nova.compute.manager [-] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 04:55:23 np0005593232 nova_compute[250269]: 2026-01-23 09:55:23.937 250273 DEBUG nova.network.neutron [-] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 04:55:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 23 04:55:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 04:55:24 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 04:55:24 np0005593232 nova_compute[250269]: 2026-01-23 09:55:24.164 250273 DEBUG nova.compute.manager [req-cc4a6026-120c-497c-b445-acd69f5e13b4 req-456e4e7d-0333-432e-ae22-758d8f107710 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Received event network-vif-unplugged-3262bd82-5538-4d9c-af99-430bed9a5120 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:55:24 np0005593232 nova_compute[250269]: 2026-01-23 09:55:24.165 250273 DEBUG oslo_concurrency.lockutils [req-cc4a6026-120c-497c-b445-acd69f5e13b4 req-456e4e7d-0333-432e-ae22-758d8f107710 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "b3b41870-7e94-44fd-84bb-575dbf15c745-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:55:24 np0005593232 nova_compute[250269]: 2026-01-23 09:55:24.165 250273 DEBUG oslo_concurrency.lockutils [req-cc4a6026-120c-497c-b445-acd69f5e13b4 req-456e4e7d-0333-432e-ae22-758d8f107710 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b3b41870-7e94-44fd-84bb-575dbf15c745-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:55:24 np0005593232 nova_compute[250269]: 2026-01-23 09:55:24.165 250273 DEBUG oslo_concurrency.lockutils [req-cc4a6026-120c-497c-b445-acd69f5e13b4 req-456e4e7d-0333-432e-ae22-758d8f107710 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b3b41870-7e94-44fd-84bb-575dbf15c745-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:55:24 np0005593232 nova_compute[250269]: 2026-01-23 09:55:24.165 250273 DEBUG nova.compute.manager [req-cc4a6026-120c-497c-b445-acd69f5e13b4 req-456e4e7d-0333-432e-ae22-758d8f107710 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] No waiting events found dispatching network-vif-unplugged-3262bd82-5538-4d9c-af99-430bed9a5120 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:55:24 np0005593232 nova_compute[250269]: 2026-01-23 09:55:24.165 250273 DEBUG nova.compute.manager [req-cc4a6026-120c-497c-b445-acd69f5e13b4 req-456e4e7d-0333-432e-ae22-758d8f107710 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Received event network-vif-unplugged-3262bd82-5538-4d9c-af99-430bed9a5120 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 04:55:24 np0005593232 nova_compute[250269]: 2026-01-23 09:55:24.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:55:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:55:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:55:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:55:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:55:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:55:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:55:24 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 38dc0161-34b8-4167-8815-2a9e27251e4d does not exist
Jan 23 04:55:24 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 769bc7e3-d77f-480b-83c4-31a07432ea99 does not exist
Jan 23 04:55:24 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev f8c00e15-9e30-4a9b-b72f-deb411ce3ca8 does not exist
Jan 23 04:55:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:55:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:55:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:55:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:55:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:55:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:55:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1925: 321 pgs: 321 active+clean; 293 MiB data, 836 MiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 3.3 KiB/s wr, 258 op/s
Jan 23 04:55:25 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:55:25 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:55:25 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:55:25 np0005593232 podman[303357]: 2026-01-23 09:55:25.133979914 +0000 UTC m=+0.035585422 container create 123238ae2742e6bba433d900125225d19d1eccacdf8fbd296a72df234c62d0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 23 04:55:25 np0005593232 systemd[1]: Started libpod-conmon-123238ae2742e6bba433d900125225d19d1eccacdf8fbd296a72df234c62d0ed.scope.
Jan 23 04:55:25 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:55:25 np0005593232 nova_compute[250269]: 2026-01-23 09:55:25.202 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:25 np0005593232 podman[303357]: 2026-01-23 09:55:25.118177834 +0000 UTC m=+0.019783362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:55:25 np0005593232 podman[303357]: 2026-01-23 09:55:25.215604595 +0000 UTC m=+0.117210133 container init 123238ae2742e6bba433d900125225d19d1eccacdf8fbd296a72df234c62d0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 04:55:25 np0005593232 podman[303357]: 2026-01-23 09:55:25.221598642 +0000 UTC m=+0.123204150 container start 123238ae2742e6bba433d900125225d19d1eccacdf8fbd296a72df234c62d0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_noether, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:55:25 np0005593232 podman[303357]: 2026-01-23 09:55:25.224783451 +0000 UTC m=+0.126388979 container attach 123238ae2742e6bba433d900125225d19d1eccacdf8fbd296a72df234c62d0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:55:25 np0005593232 hopeful_noether[303374]: 167 167
Jan 23 04:55:25 np0005593232 systemd[1]: libpod-123238ae2742e6bba433d900125225d19d1eccacdf8fbd296a72df234c62d0ed.scope: Deactivated successfully.
Jan 23 04:55:25 np0005593232 podman[303357]: 2026-01-23 09:55:25.228239707 +0000 UTC m=+0.129845235 container died 123238ae2742e6bba433d900125225d19d1eccacdf8fbd296a72df234c62d0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_noether, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 04:55:25 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ef619ab392012d018789bb16660f3636197ac7691d185520e1becd71d8715cc7-merged.mount: Deactivated successfully.
Jan 23 04:55:25 np0005593232 podman[303357]: 2026-01-23 09:55:25.270096052 +0000 UTC m=+0.171701560 container remove 123238ae2742e6bba433d900125225d19d1eccacdf8fbd296a72df234c62d0ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_noether, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:55:25 np0005593232 systemd[1]: libpod-conmon-123238ae2742e6bba433d900125225d19d1eccacdf8fbd296a72df234c62d0ed.scope: Deactivated successfully.
Jan 23 04:55:25 np0005593232 nova_compute[250269]: 2026-01-23 09:55:25.417 250273 DEBUG nova.network.neutron [-] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:55:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:55:25.418 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:55:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:55:25.419 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:55:25 np0005593232 nova_compute[250269]: 2026-01-23 09:55:25.420 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:25 np0005593232 podman[303397]: 2026-01-23 09:55:25.450993826 +0000 UTC m=+0.050435184 container create e40015ae3709607b32be2e643f8e7a9595be097c4769b6e4d31033c793ae2636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 04:55:25 np0005593232 nova_compute[250269]: 2026-01-23 09:55:25.478 250273 INFO nova.compute.manager [-] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Took 1.54 seconds to deallocate network for instance.#033[00m
Jan 23 04:55:25 np0005593232 systemd[1]: Started libpod-conmon-e40015ae3709607b32be2e643f8e7a9595be097c4769b6e4d31033c793ae2636.scope.
Jan 23 04:55:25 np0005593232 podman[303397]: 2026-01-23 09:55:25.429651262 +0000 UTC m=+0.029092640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:55:25 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:55:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/024c5281d00a2f19bc565975ba7ed71f1f80e9682b63504676c1fbacc94cae84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:55:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/024c5281d00a2f19bc565975ba7ed71f1f80e9682b63504676c1fbacc94cae84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:55:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/024c5281d00a2f19bc565975ba7ed71f1f80e9682b63504676c1fbacc94cae84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:55:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/024c5281d00a2f19bc565975ba7ed71f1f80e9682b63504676c1fbacc94cae84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:55:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/024c5281d00a2f19bc565975ba7ed71f1f80e9682b63504676c1fbacc94cae84/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:55:25 np0005593232 podman[303397]: 2026-01-23 09:55:25.552522921 +0000 UTC m=+0.151964289 container init e40015ae3709607b32be2e643f8e7a9595be097c4769b6e4d31033c793ae2636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_leavitt, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:55:25 np0005593232 podman[303397]: 2026-01-23 09:55:25.560064611 +0000 UTC m=+0.159505959 container start e40015ae3709607b32be2e643f8e7a9595be097c4769b6e4d31033c793ae2636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_leavitt, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:55:25 np0005593232 podman[303397]: 2026-01-23 09:55:25.563569469 +0000 UTC m=+0.163010827 container attach e40015ae3709607b32be2e643f8e7a9595be097c4769b6e4d31033c793ae2636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_leavitt, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 04:55:25 np0005593232 nova_compute[250269]: 2026-01-23 09:55:25.564 250273 DEBUG oslo_concurrency.lockutils [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:55:25 np0005593232 nova_compute[250269]: 2026-01-23 09:55:25.565 250273 DEBUG oslo_concurrency.lockutils [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:55:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:25.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:25 np0005593232 nova_compute[250269]: 2026-01-23 09:55:25.648 250273 DEBUG oslo_concurrency.processutils [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:55:25 np0005593232 nova_compute[250269]: 2026-01-23 09:55:25.702 250273 DEBUG nova.compute.manager [req-2920f474-c720-4ae6-a20f-3b3b7719ff46 req-050f5373-eef6-405c-a6a1-9fe8e590df37 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Received event network-vif-deleted-3262bd82-5538-4d9c-af99-430bed9a5120 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:55:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:55:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:25.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:55:25 np0005593232 nova_compute[250269]: 2026-01-23 09:55:25.831 250273 DEBUG nova.network.neutron [req-fe2ebcf3-e496-4114-89d2-220c2ceb49ae req-4a0e0c0f-f654-4986-a16d-04b58e91a89d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Updated VIF entry in instance network info cache for port 3262bd82-5538-4d9c-af99-430bed9a5120. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:55:25 np0005593232 nova_compute[250269]: 2026-01-23 09:55:25.832 250273 DEBUG nova.network.neutron [req-fe2ebcf3-e496-4114-89d2-220c2ceb49ae req-4a0e0c0f-f654-4986-a16d-04b58e91a89d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Updating instance_info_cache with network_info: [{"id": "3262bd82-5538-4d9c-af99-430bed9a5120", "address": "fa:16:3e:70:57:58", "network": {"id": "5f61b5ab-0868-4f2c-b962-bdb7ecf89061", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1157777070-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "bf8efe4dc7e34393b5cd5a5ef2735ecf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3262bd82-55", "ovs_interfaceid": "3262bd82-5538-4d9c-af99-430bed9a5120", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:55:25 np0005593232 nova_compute[250269]: 2026-01-23 09:55:25.871 250273 DEBUG oslo_concurrency.lockutils [req-fe2ebcf3-e496-4114-89d2-220c2ceb49ae req-4a0e0c0f-f654-4986-a16d-04b58e91a89d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-b3b41870-7e94-44fd-84bb-575dbf15c745" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:55:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e260 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:55:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:55:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/706396818' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:55:26 np0005593232 nova_compute[250269]: 2026-01-23 09:55:26.159 250273 DEBUG oslo_concurrency.processutils [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:55:26 np0005593232 nova_compute[250269]: 2026-01-23 09:55:26.166 250273 DEBUG nova.compute.provider_tree [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:55:26 np0005593232 nova_compute[250269]: 2026-01-23 09:55:26.204 250273 DEBUG nova.scheduler.client.report [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:55:26 np0005593232 nova_compute[250269]: 2026-01-23 09:55:26.231 250273 DEBUG oslo_concurrency.lockutils [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:55:26 np0005593232 nova_compute[250269]: 2026-01-23 09:55:26.260 250273 INFO nova.scheduler.client.report [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Deleted allocations for instance b3b41870-7e94-44fd-84bb-575dbf15c745#033[00m
Jan 23 04:55:26 np0005593232 nova_compute[250269]: 2026-01-23 09:55:26.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:55:26 np0005593232 nova_compute[250269]: 2026-01-23 09:55:26.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:55:26 np0005593232 nova_compute[250269]: 2026-01-23 09:55:26.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:55:26 np0005593232 nova_compute[250269]: 2026-01-23 09:55:26.385 250273 DEBUG nova.compute.manager [req-256f2922-1567-4819-bed2-3b337a3cfe8b req-3b726290-0698-45eb-a032-27d569fe960b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Received event network-vif-plugged-3262bd82-5538-4d9c-af99-430bed9a5120 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:55:26 np0005593232 nova_compute[250269]: 2026-01-23 09:55:26.385 250273 DEBUG oslo_concurrency.lockutils [req-256f2922-1567-4819-bed2-3b337a3cfe8b req-3b726290-0698-45eb-a032-27d569fe960b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "b3b41870-7e94-44fd-84bb-575dbf15c745-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:55:26 np0005593232 nova_compute[250269]: 2026-01-23 09:55:26.386 250273 DEBUG oslo_concurrency.lockutils [req-256f2922-1567-4819-bed2-3b337a3cfe8b req-3b726290-0698-45eb-a032-27d569fe960b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b3b41870-7e94-44fd-84bb-575dbf15c745-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:55:26 np0005593232 nova_compute[250269]: 2026-01-23 09:55:26.386 250273 DEBUG oslo_concurrency.lockutils [req-256f2922-1567-4819-bed2-3b337a3cfe8b req-3b726290-0698-45eb-a032-27d569fe960b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b3b41870-7e94-44fd-84bb-575dbf15c745-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:55:26 np0005593232 nova_compute[250269]: 2026-01-23 09:55:26.386 250273 DEBUG nova.compute.manager [req-256f2922-1567-4819-bed2-3b337a3cfe8b req-3b726290-0698-45eb-a032-27d569fe960b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] No waiting events found dispatching network-vif-plugged-3262bd82-5538-4d9c-af99-430bed9a5120 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:55:26 np0005593232 nova_compute[250269]: 2026-01-23 09:55:26.387 250273 WARNING nova.compute.manager [req-256f2922-1567-4819-bed2-3b337a3cfe8b req-3b726290-0698-45eb-a032-27d569fe960b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Received unexpected event network-vif-plugged-3262bd82-5538-4d9c-af99-430bed9a5120 for instance with vm_state deleted and task_state None.#033[00m
Jan 23 04:55:26 np0005593232 nova_compute[250269]: 2026-01-23 09:55:26.395 250273 DEBUG oslo_concurrency.lockutils [None req-4a305b3f-0419-4986-bc4f-dfa1cf5b8f9e 040257fcfb8e485989e95807791e25f6 bf8efe4dc7e34393b5cd5a5ef2735ecf - - default default] Lock "b3b41870-7e94-44fd-84bb-575dbf15c745" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:55:26 np0005593232 nova_compute[250269]: 2026-01-23 09:55:26.409 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 04:55:26 np0005593232 nova_compute[250269]: 2026-01-23 09:55:26.411 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:55:26 np0005593232 gifted_leavitt[303414]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:55:26 np0005593232 gifted_leavitt[303414]: --> relative data size: 1.0
Jan 23 04:55:26 np0005593232 gifted_leavitt[303414]: --> All data devices are unavailable
Jan 23 04:55:26 np0005593232 systemd[1]: libpod-e40015ae3709607b32be2e643f8e7a9595be097c4769b6e4d31033c793ae2636.scope: Deactivated successfully.
Jan 23 04:55:26 np0005593232 podman[303452]: 2026-01-23 09:55:26.513642418 +0000 UTC m=+0.024954895 container died e40015ae3709607b32be2e643f8e7a9595be097c4769b6e4d31033c793ae2636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:55:26 np0005593232 systemd[1]: var-lib-containers-storage-overlay-024c5281d00a2f19bc565975ba7ed71f1f80e9682b63504676c1fbacc94cae84-merged.mount: Deactivated successfully.
Jan 23 04:55:26 np0005593232 podman[303452]: 2026-01-23 09:55:26.786770609 +0000 UTC m=+0.298083066 container remove e40015ae3709607b32be2e643f8e7a9595be097c4769b6e4d31033c793ae2636 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_leavitt, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:55:26 np0005593232 systemd[1]: libpod-conmon-e40015ae3709607b32be2e643f8e7a9595be097c4769b6e4d31033c793ae2636.scope: Deactivated successfully.
Jan 23 04:55:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1926: 321 pgs: 321 active+clean; 269 MiB data, 823 MiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 3.9 KiB/s wr, 236 op/s
Jan 23 04:55:27 np0005593232 nova_compute[250269]: 2026-01-23 09:55:27.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:55:27 np0005593232 nova_compute[250269]: 2026-01-23 09:55:27.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:55:27 np0005593232 podman[303606]: 2026-01-23 09:55:27.415170536 +0000 UTC m=+0.091833066 container create 98f726c96ce777bde232cb9df215dfd2895c1b4586e5bc4c581bfb97a1ef002e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leavitt, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 23 04:55:27 np0005593232 podman[303606]: 2026-01-23 09:55:27.343105281 +0000 UTC m=+0.019767821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:55:27 np0005593232 systemd[1]: Started libpod-conmon-98f726c96ce777bde232cb9df215dfd2895c1b4586e5bc4c581bfb97a1ef002e.scope.
Jan 23 04:55:27 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:55:27 np0005593232 podman[303606]: 2026-01-23 09:55:27.513093821 +0000 UTC m=+0.189756371 container init 98f726c96ce777bde232cb9df215dfd2895c1b4586e5bc4c581bfb97a1ef002e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:55:27 np0005593232 podman[303606]: 2026-01-23 09:55:27.522055011 +0000 UTC m=+0.198717531 container start 98f726c96ce777bde232cb9df215dfd2895c1b4586e5bc4c581bfb97a1ef002e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 23 04:55:27 np0005593232 silly_leavitt[303622]: 167 167
Jan 23 04:55:27 np0005593232 podman[303606]: 2026-01-23 09:55:27.526781652 +0000 UTC m=+0.203444202 container attach 98f726c96ce777bde232cb9df215dfd2895c1b4586e5bc4c581bfb97a1ef002e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:55:27 np0005593232 systemd[1]: libpod-98f726c96ce777bde232cb9df215dfd2895c1b4586e5bc4c581bfb97a1ef002e.scope: Deactivated successfully.
Jan 23 04:55:27 np0005593232 podman[303606]: 2026-01-23 09:55:27.527920934 +0000 UTC m=+0.204583464 container died 98f726c96ce777bde232cb9df215dfd2895c1b4586e5bc4c581bfb97a1ef002e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:55:27 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ab98735622bd7e813e563202a6b12336705a58fce25bf62616b8771158d6bcd8-merged.mount: Deactivated successfully.
Jan 23 04:55:27 np0005593232 podman[303606]: 2026-01-23 09:55:27.5741298 +0000 UTC m=+0.250792320 container remove 98f726c96ce777bde232cb9df215dfd2895c1b4586e5bc4c581bfb97a1ef002e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_leavitt, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 04:55:27 np0005593232 systemd[1]: libpod-conmon-98f726c96ce777bde232cb9df215dfd2895c1b4586e5bc4c581bfb97a1ef002e.scope: Deactivated successfully.
Jan 23 04:55:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:55:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:27.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:55:27 np0005593232 podman[303624]: 2026-01-23 09:55:27.619627296 +0000 UTC m=+0.141024075 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 23 04:55:27 np0005593232 podman[303670]: 2026-01-23 09:55:27.743806611 +0000 UTC m=+0.042371870 container create c0e5a4dd9623bc8478d6ee095b4b8d869864a1d1643a259b759417112372757b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_gates, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:55:27 np0005593232 systemd[1]: Started libpod-conmon-c0e5a4dd9623bc8478d6ee095b4b8d869864a1d1643a259b759417112372757b.scope.
Jan 23 04:55:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:27.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:27 np0005593232 podman[303670]: 2026-01-23 09:55:27.723114166 +0000 UTC m=+0.021679455 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:55:27 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:55:27 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133a7188d09a7b676d6a5664c81c19b058ce025b732c965d73e402ec53b0918e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:55:27 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133a7188d09a7b676d6a5664c81c19b058ce025b732c965d73e402ec53b0918e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:55:27 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133a7188d09a7b676d6a5664c81c19b058ce025b732c965d73e402ec53b0918e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:55:27 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133a7188d09a7b676d6a5664c81c19b058ce025b732c965d73e402ec53b0918e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:55:27 np0005593232 podman[303670]: 2026-01-23 09:55:27.841223172 +0000 UTC m=+0.139788461 container init c0e5a4dd9623bc8478d6ee095b4b8d869864a1d1643a259b759417112372757b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_gates, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:55:27 np0005593232 podman[303670]: 2026-01-23 09:55:27.847486487 +0000 UTC m=+0.146051746 container start c0e5a4dd9623bc8478d6ee095b4b8d869864a1d1643a259b759417112372757b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 04:55:27 np0005593232 podman[303670]: 2026-01-23 09:55:27.852526587 +0000 UTC m=+0.151091846 container attach c0e5a4dd9623bc8478d6ee095b4b8d869864a1d1643a259b759417112372757b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_gates, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 04:55:28 np0005593232 nova_compute[250269]: 2026-01-23 09:55:28.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:55:28 np0005593232 nova_compute[250269]: 2026-01-23 09:55:28.367 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Jan 23 04:55:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Jan 23 04:55:28 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Jan 23 04:55:28 np0005593232 interesting_gates[303686]: {
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:    "0": [
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:        {
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:            "devices": [
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:                "/dev/loop3"
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:            ],
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:            "lv_name": "ceph_lv0",
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:            "lv_size": "7511998464",
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:            "name": "ceph_lv0",
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:            "tags": {
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:                "ceph.cluster_name": "ceph",
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:                "ceph.crush_device_class": "",
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:                "ceph.encrypted": "0",
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:                "ceph.osd_id": "0",
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:                "ceph.type": "block",
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:                "ceph.vdo": "0"
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:            },
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:            "type": "block",
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:            "vg_name": "ceph_vg0"
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:        }
Jan 23 04:55:28 np0005593232 interesting_gates[303686]:    ]
Jan 23 04:55:28 np0005593232 interesting_gates[303686]: }
Jan 23 04:55:28 np0005593232 systemd[1]: libpod-c0e5a4dd9623bc8478d6ee095b4b8d869864a1d1643a259b759417112372757b.scope: Deactivated successfully.
Jan 23 04:55:28 np0005593232 podman[303696]: 2026-01-23 09:55:28.78193066 +0000 UTC m=+0.031887748 container died c0e5a4dd9623bc8478d6ee095b4b8d869864a1d1643a259b759417112372757b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_gates, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:55:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1928: 321 pgs: 321 active+clean; 214 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 4.7 KiB/s wr, 184 op/s
Jan 23 04:55:29 np0005593232 systemd[1]: var-lib-containers-storage-overlay-133a7188d09a7b676d6a5664c81c19b058ce025b732c965d73e402ec53b0918e-merged.mount: Deactivated successfully.
Jan 23 04:55:29 np0005593232 podman[303696]: 2026-01-23 09:55:29.422944378 +0000 UTC m=+0.672901446 container remove c0e5a4dd9623bc8478d6ee095b4b8d869864a1d1643a259b759417112372757b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_gates, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 04:55:29 np0005593232 systemd[1]: libpod-conmon-c0e5a4dd9623bc8478d6ee095b4b8d869864a1d1643a259b759417112372757b.scope: Deactivated successfully.
Jan 23 04:55:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:29.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:55:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:29.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:55:30 np0005593232 podman[303850]: 2026-01-23 09:55:30.063015131 +0000 UTC m=+0.026325414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:55:30 np0005593232 nova_compute[250269]: 2026-01-23 09:55:30.205 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:30 np0005593232 nova_compute[250269]: 2026-01-23 09:55:30.289 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:55:30 np0005593232 nova_compute[250269]: 2026-01-23 09:55:30.336 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:55:30 np0005593232 podman[303850]: 2026-01-23 09:55:30.399264148 +0000 UTC m=+0.362574401 container create 665b91633e6a843b4970f288245da34ae87b58d0103585164619822361b0ad75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_elion, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 04:55:30 np0005593232 systemd[1]: Started libpod-conmon-665b91633e6a843b4970f288245da34ae87b58d0103585164619822361b0ad75.scope.
Jan 23 04:55:30 np0005593232 nova_compute[250269]: 2026-01-23 09:55:30.454 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:55:30 np0005593232 nova_compute[250269]: 2026-01-23 09:55:30.455 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:55:30 np0005593232 nova_compute[250269]: 2026-01-23 09:55:30.455 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:55:30 np0005593232 nova_compute[250269]: 2026-01-23 09:55:30.455 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:55:30 np0005593232 nova_compute[250269]: 2026-01-23 09:55:30.456 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:55:30 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:55:30 np0005593232 podman[303850]: 2026-01-23 09:55:30.584792421 +0000 UTC m=+0.548102694 container init 665b91633e6a843b4970f288245da34ae87b58d0103585164619822361b0ad75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:55:30 np0005593232 podman[303850]: 2026-01-23 09:55:30.592213168 +0000 UTC m=+0.555523421 container start 665b91633e6a843b4970f288245da34ae87b58d0103585164619822361b0ad75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:55:30 np0005593232 xenodochial_elion[303867]: 167 167
Jan 23 04:55:30 np0005593232 systemd[1]: libpod-665b91633e6a843b4970f288245da34ae87b58d0103585164619822361b0ad75.scope: Deactivated successfully.
Jan 23 04:55:30 np0005593232 podman[303850]: 2026-01-23 09:55:30.664724496 +0000 UTC m=+0.628034769 container attach 665b91633e6a843b4970f288245da34ae87b58d0103585164619822361b0ad75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_elion, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 04:55:30 np0005593232 podman[303850]: 2026-01-23 09:55:30.665383054 +0000 UTC m=+0.628693307 container died 665b91633e6a843b4970f288245da34ae87b58d0103585164619822361b0ad75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_elion, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:55:30 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b68c1bfeaf099d858eb12bb76718a26222f4ef812b0a3cdaa8b9e0214891c5c7-merged.mount: Deactivated successfully.
Jan 23 04:55:30 np0005593232 podman[303850]: 2026-01-23 09:55:30.713827192 +0000 UTC m=+0.677137445 container remove 665b91633e6a843b4970f288245da34ae87b58d0103585164619822361b0ad75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_elion, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:55:30 np0005593232 systemd[1]: libpod-conmon-665b91633e6a843b4970f288245da34ae87b58d0103585164619822361b0ad75.scope: Deactivated successfully.
Jan 23 04:55:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1929: 321 pgs: 321 active+clean; 214 MiB data, 790 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 4.7 KiB/s wr, 184 op/s
Jan 23 04:55:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:55:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2549922614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:55:30 np0005593232 podman[303914]: 2026-01-23 09:55:30.908759487 +0000 UTC m=+0.054365494 container create 99790ca13fa6349f7ba3e5c90910d36e113ba3b6bc2ef69b20e84844d8fd5e70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 04:55:30 np0005593232 nova_compute[250269]: 2026-01-23 09:55:30.930 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:55:30 np0005593232 systemd[1]: Started libpod-conmon-99790ca13fa6349f7ba3e5c90910d36e113ba3b6bc2ef69b20e84844d8fd5e70.scope.
Jan 23 04:55:30 np0005593232 podman[303914]: 2026-01-23 09:55:30.891007903 +0000 UTC m=+0.036613910 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:55:30 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:55:30 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4079ce73a55a659bb96adc56ed455e3f195b3f2b9f6d7491a08fe3db33ff9e2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:55:30 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4079ce73a55a659bb96adc56ed455e3f195b3f2b9f6d7491a08fe3db33ff9e2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:55:30 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4079ce73a55a659bb96adc56ed455e3f195b3f2b9f6d7491a08fe3db33ff9e2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:55:30 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4079ce73a55a659bb96adc56ed455e3f195b3f2b9f6d7491a08fe3db33ff9e2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:55:31 np0005593232 podman[303914]: 2026-01-23 09:55:31.002745772 +0000 UTC m=+0.148351799 container init 99790ca13fa6349f7ba3e5c90910d36e113ba3b6bc2ef69b20e84844d8fd5e70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ritchie, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:55:31 np0005593232 podman[303914]: 2026-01-23 09:55:31.010897279 +0000 UTC m=+0.156503286 container start 99790ca13fa6349f7ba3e5c90910d36e113ba3b6bc2ef69b20e84844d8fd5e70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ritchie, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:55:31 np0005593232 podman[303914]: 2026-01-23 09:55:31.015340963 +0000 UTC m=+0.160946990 container attach 99790ca13fa6349f7ba3e5c90910d36e113ba3b6bc2ef69b20e84844d8fd5e70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ritchie, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 04:55:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:55:31 np0005593232 nova_compute[250269]: 2026-01-23 09:55:31.132 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:55:31 np0005593232 nova_compute[250269]: 2026-01-23 09:55:31.133 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4502MB free_disk=20.921703338623047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:55:31 np0005593232 nova_compute[250269]: 2026-01-23 09:55:31.134 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:55:31 np0005593232 nova_compute[250269]: 2026-01-23 09:55:31.134 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:55:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:55:31.422 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:55:31 np0005593232 nova_compute[250269]: 2026-01-23 09:55:31.455 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:55:31 np0005593232 nova_compute[250269]: 2026-01-23 09:55:31.455 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:55:31 np0005593232 nova_compute[250269]: 2026-01-23 09:55:31.506 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:55:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:31.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:31.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:31 np0005593232 sweet_ritchie[303932]: {
Jan 23 04:55:31 np0005593232 sweet_ritchie[303932]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:55:31 np0005593232 sweet_ritchie[303932]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:55:31 np0005593232 sweet_ritchie[303932]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:55:31 np0005593232 sweet_ritchie[303932]:        "osd_id": 0,
Jan 23 04:55:31 np0005593232 sweet_ritchie[303932]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:55:31 np0005593232 sweet_ritchie[303932]:        "type": "bluestore"
Jan 23 04:55:31 np0005593232 sweet_ritchie[303932]:    }
Jan 23 04:55:31 np0005593232 sweet_ritchie[303932]: }
Jan 23 04:55:31 np0005593232 systemd[1]: libpod-99790ca13fa6349f7ba3e5c90910d36e113ba3b6bc2ef69b20e84844d8fd5e70.scope: Deactivated successfully.
Jan 23 04:55:31 np0005593232 podman[303914]: 2026-01-23 09:55:31.952167363 +0000 UTC m=+1.097773370 container died 99790ca13fa6349f7ba3e5c90910d36e113ba3b6bc2ef69b20e84844d8fd5e70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ritchie, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:55:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:55:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3830682939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:55:31 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4079ce73a55a659bb96adc56ed455e3f195b3f2b9f6d7491a08fe3db33ff9e2f-merged.mount: Deactivated successfully.
Jan 23 04:55:32 np0005593232 nova_compute[250269]: 2026-01-23 09:55:32.009 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:55:32 np0005593232 podman[303914]: 2026-01-23 09:55:32.015777682 +0000 UTC m=+1.161383689 container remove 99790ca13fa6349f7ba3e5c90910d36e113ba3b6bc2ef69b20e84844d8fd5e70 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_ritchie, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:55:32 np0005593232 nova_compute[250269]: 2026-01-23 09:55:32.017 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:55:32 np0005593232 systemd[1]: libpod-conmon-99790ca13fa6349f7ba3e5c90910d36e113ba3b6bc2ef69b20e84844d8fd5e70.scope: Deactivated successfully.
Jan 23 04:55:32 np0005593232 nova_compute[250269]: 2026-01-23 09:55:32.041 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:55:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:55:32 np0005593232 podman[303976]: 2026-01-23 09:55:32.055986351 +0000 UTC m=+0.075645775 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 23 04:55:32 np0005593232 nova_compute[250269]: 2026-01-23 09:55:32.075 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:55:32 np0005593232 nova_compute[250269]: 2026-01-23 09:55:32.076 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:55:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:55:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:55:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:55:32 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev f333649a-810d-4d30-9c25-39dba319719c does not exist
Jan 23 04:55:32 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ad47a939-0b7e-493b-8fb0-41598a093702 does not exist
Jan 23 04:55:32 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 35d0df41-49ba-4b5a-b802-6e944bc270d0 does not exist
Jan 23 04:55:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1930: 321 pgs: 321 active+clean; 119 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 543 KiB/s rd, 2.6 MiB/s wr, 180 op/s
Jan 23 04:55:32 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:55:32 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:55:33 np0005593232 nova_compute[250269]: 2026-01-23 09:55:33.369 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:33.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:33.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1931: 321 pgs: 321 active+clean; 119 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 543 KiB/s rd, 2.6 MiB/s wr, 180 op/s
Jan 23 04:55:35 np0005593232 nova_compute[250269]: 2026-01-23 09:55:35.033 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:55:35 np0005593232 nova_compute[250269]: 2026-01-23 09:55:35.207 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:35.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:55:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:35.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:55:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:55:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Jan 23 04:55:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Jan 23 04:55:36 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Jan 23 04:55:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1933: 321 pgs: 321 active+clean; 121 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 650 KiB/s rd, 3.1 MiB/s wr, 178 op/s
Jan 23 04:55:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:55:37
Jan 23 04:55:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:55:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:55:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'default.rgw.control', 'backups', 'volumes', '.mgr', 'images', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta']
Jan 23 04:55:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:55:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:55:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:55:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:55:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:55:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:55:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:55:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:37.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:37.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:37 np0005593232 nova_compute[250269]: 2026-01-23 09:55:37.904 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:38 np0005593232 nova_compute[250269]: 2026-01-23 09:55:38.132 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:55:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:55:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:55:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:55:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:55:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:55:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:55:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:55:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:55:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:55:38 np0005593232 nova_compute[250269]: 2026-01-23 09:55:38.249 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769162123.2487042, b3b41870-7e94-44fd-84bb-575dbf15c745 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:55:38 np0005593232 nova_compute[250269]: 2026-01-23 09:55:38.250 250273 INFO nova.compute.manager [-] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:55:38 np0005593232 nova_compute[250269]: 2026-01-23 09:55:38.303 250273 DEBUG nova.compute.manager [None req-eddcc9de-f7ad-4372-91a0-705a5ae7e733 - - - - - -] [instance: b3b41870-7e94-44fd-84bb-575dbf15c745] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:55:38 np0005593232 nova_compute[250269]: 2026-01-23 09:55:38.371 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1934: 321 pgs: 321 active+clean; 121 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 538 KiB/s rd, 2.6 MiB/s wr, 152 op/s
Jan 23 04:55:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:55:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:39.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:55:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:39.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:40 np0005593232 nova_compute[250269]: 2026-01-23 09:55:40.209 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1935: 321 pgs: 321 active+clean; 121 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 538 KiB/s rd, 2.6 MiB/s wr, 152 op/s
Jan 23 04:55:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:55:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:55:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:41.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:55:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:41.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:55:42.609 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:55:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:55:42.609 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:55:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:55:42.609 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:55:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1936: 321 pgs: 321 active+clean; 121 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 22 KiB/s wr, 24 op/s
Jan 23 04:55:43 np0005593232 nova_compute[250269]: 2026-01-23 09:55:43.372 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:55:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:43.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:55:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:43.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1937: 321 pgs: 321 active+clean; 121 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 22 KiB/s wr, 24 op/s
Jan 23 04:55:45 np0005593232 nova_compute[250269]: 2026-01-23 09:55:45.211 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:55:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:45.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:55:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:55:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:45.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:55:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:55:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1938: 321 pgs: 321 active+clean; 121 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 18 KiB/s wr, 19 op/s
Jan 23 04:55:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:55:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:55:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:55:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:55:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.0001635783082077052 quantized to 32 (current 32)
Jan 23 04:55:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:55:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002166503815373162 of space, bias 1.0, pg target 0.6499511446119486 quantized to 32 (current 32)
Jan 23 04:55:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:55:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:55:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:55:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 04:55:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:55:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:55:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:55:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:55:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:55:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:55:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:55:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:55:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:55:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:55:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:55:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:55:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 04:55:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:47.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 04:55:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:47.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:48 np0005593232 nova_compute[250269]: 2026-01-23 09:55:48.375 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1939: 321 pgs: 321 active+clean; 121 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 20 KiB/s wr, 29 op/s
Jan 23 04:55:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:55:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:49.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:55:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:49.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:50 np0005593232 nova_compute[250269]: 2026-01-23 09:55:50.212 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1940: 321 pgs: 321 active+clean; 121 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 20 KiB/s wr, 24 op/s
Jan 23 04:55:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:55:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:55:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:51.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:55:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:51.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1941: 321 pgs: 321 active+clean; 121 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 20 KiB/s wr, 24 op/s
Jan 23 04:55:53 np0005593232 nova_compute[250269]: 2026-01-23 09:55:53.376 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:55:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:53.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:55:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:53.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1942: 321 pgs: 321 active+clean; 121 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 4.9 KiB/s wr, 14 op/s
Jan 23 04:55:55 np0005593232 nova_compute[250269]: 2026-01-23 09:55:55.339 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:55:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:55.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:55:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:55.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:55:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1943: 321 pgs: 321 active+clean; 121 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 4.9 KiB/s wr, 14 op/s
Jan 23 04:55:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:57.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:57.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:58 np0005593232 nova_compute[250269]: 2026-01-23 09:55:58.378 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:55:58 np0005593232 podman[304122]: 2026-01-23 09:55:58.457078516 +0000 UTC m=+0.117102380 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 
9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 04:55:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1944: 321 pgs: 321 active+clean; 121 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 4.7 KiB/s wr, 12 op/s
Jan 23 04:55:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:55:59.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:55:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:55:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:55:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:55:59.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:00 np0005593232 nova_compute[250269]: 2026-01-23 09:56:00.341 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1945: 321 pgs: 321 active+clean; 121 MiB data, 751 MiB used, 20 GiB / 21 GiB avail
Jan 23 04:56:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:56:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:56:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:01.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:56:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:01.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:02 np0005593232 podman[304200]: 2026-01-23 09:56:02.389728497 +0000 UTC m=+0.049976412 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 23 04:56:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1946: 321 pgs: 321 active+clean; 121 MiB data, 751 MiB used, 20 GiB / 21 GiB avail
Jan 23 04:56:03 np0005593232 nova_compute[250269]: 2026-01-23 09:56:03.379 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:03.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:03.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1947: 321 pgs: 321 active+clean; 121 MiB data, 751 MiB used, 20 GiB / 21 GiB avail
Jan 23 04:56:05 np0005593232 nova_compute[250269]: 2026-01-23 09:56:05.343 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:05.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:05.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:56:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1948: 321 pgs: 321 active+clean; 121 MiB data, 751 MiB used, 20 GiB / 21 GiB avail
Jan 23 04:56:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:56:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:56:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:56:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:56:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:56:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:56:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:07.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:56:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:07.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:56:08 np0005593232 nova_compute[250269]: 2026-01-23 09:56:08.380 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1949: 321 pgs: 321 active+clean; 121 MiB data, 751 MiB used, 20 GiB / 21 GiB avail
Jan 23 04:56:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:56:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:09.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:56:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:09.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:10 np0005593232 nova_compute[250269]: 2026-01-23 09:56:10.344 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1950: 321 pgs: 321 active+clean; 121 MiB data, 751 MiB used, 20 GiB / 21 GiB avail
Jan 23 04:56:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:56:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:11.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:11.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:12 np0005593232 nova_compute[250269]: 2026-01-23 09:56:12.080 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:12.079 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:56:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:12.081 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:56:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1951: 321 pgs: 321 active+clean; 121 MiB data, 751 MiB used, 20 GiB / 21 GiB avail
Jan 23 04:56:13 np0005593232 nova_compute[250269]: 2026-01-23 09:56:13.382 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:13.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:13.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1952: 321 pgs: 321 active+clean; 121 MiB data, 751 MiB used, 20 GiB / 21 GiB avail
Jan 23 04:56:15 np0005593232 nova_compute[250269]: 2026-01-23 09:56:15.375 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:15.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:56:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:15.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:56:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:56:16 np0005593232 nova_compute[250269]: 2026-01-23 09:56:16.167 250273 DEBUG oslo_concurrency.lockutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Acquiring lock "7da901b2-a103-4bdd-aebe-54f024af47bf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:56:16 np0005593232 nova_compute[250269]: 2026-01-23 09:56:16.167 250273 DEBUG oslo_concurrency.lockutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:56:16 np0005593232 nova_compute[250269]: 2026-01-23 09:56:16.198 250273 DEBUG nova.compute.manager [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:56:16 np0005593232 nova_compute[250269]: 2026-01-23 09:56:16.330 250273 DEBUG oslo_concurrency.lockutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:56:16 np0005593232 nova_compute[250269]: 2026-01-23 09:56:16.331 250273 DEBUG oslo_concurrency.lockutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:56:16 np0005593232 nova_compute[250269]: 2026-01-23 09:56:16.342 250273 DEBUG nova.virt.hardware [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:56:16 np0005593232 nova_compute[250269]: 2026-01-23 09:56:16.344 250273 INFO nova.compute.claims [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:56:16 np0005593232 nova_compute[250269]: 2026-01-23 09:56:16.553 250273 DEBUG oslo_concurrency.processutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:56:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1953: 321 pgs: 321 active+clean; 121 MiB data, 751 MiB used, 20 GiB / 21 GiB avail
Jan 23 04:56:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:56:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2980961041' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:56:16 np0005593232 nova_compute[250269]: 2026-01-23 09:56:16.989 250273 DEBUG oslo_concurrency.processutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:56:16 np0005593232 nova_compute[250269]: 2026-01-23 09:56:16.997 250273 DEBUG nova.compute.provider_tree [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:56:17 np0005593232 nova_compute[250269]: 2026-01-23 09:56:17.027 250273 DEBUG nova.scheduler.client.report [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:56:17 np0005593232 nova_compute[250269]: 2026-01-23 09:56:17.064 250273 DEBUG oslo_concurrency.lockutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:56:17 np0005593232 nova_compute[250269]: 2026-01-23 09:56:17.066 250273 DEBUG nova.compute.manager [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 04:56:17 np0005593232 nova_compute[250269]: 2026-01-23 09:56:17.172 250273 DEBUG nova.compute.manager [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 04:56:17 np0005593232 nova_compute[250269]: 2026-01-23 09:56:17.173 250273 DEBUG nova.network.neutron [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:56:17 np0005593232 nova_compute[250269]: 2026-01-23 09:56:17.209 250273 INFO nova.virt.libvirt.driver [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:56:17 np0005593232 nova_compute[250269]: 2026-01-23 09:56:17.239 250273 DEBUG nova.compute.manager [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:56:17 np0005593232 nova_compute[250269]: 2026-01-23 09:56:17.456 250273 DEBUG nova.compute.manager [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:56:17 np0005593232 nova_compute[250269]: 2026-01-23 09:56:17.458 250273 DEBUG nova.virt.libvirt.driver [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:56:17 np0005593232 nova_compute[250269]: 2026-01-23 09:56:17.459 250273 INFO nova.virt.libvirt.driver [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Creating image(s)#033[00m
Jan 23 04:56:17 np0005593232 nova_compute[250269]: 2026-01-23 09:56:17.491 250273 DEBUG nova.storage.rbd_utils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] rbd image 7da901b2-a103-4bdd-aebe-54f024af47bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:56:17 np0005593232 nova_compute[250269]: 2026-01-23 09:56:17.519 250273 DEBUG nova.storage.rbd_utils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] rbd image 7da901b2-a103-4bdd-aebe-54f024af47bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:56:17 np0005593232 nova_compute[250269]: 2026-01-23 09:56:17.550 250273 DEBUG nova.storage.rbd_utils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] rbd image 7da901b2-a103-4bdd-aebe-54f024af47bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:56:17 np0005593232 nova_compute[250269]: 2026-01-23 09:56:17.554 250273 DEBUG oslo_concurrency.processutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:56:17 np0005593232 nova_compute[250269]: 2026-01-23 09:56:17.626 250273 DEBUG oslo_concurrency.processutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:56:17 np0005593232 nova_compute[250269]: 2026-01-23 09:56:17.628 250273 DEBUG oslo_concurrency.lockutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:56:17 np0005593232 nova_compute[250269]: 2026-01-23 09:56:17.629 250273 DEBUG oslo_concurrency.lockutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:56:17 np0005593232 nova_compute[250269]: 2026-01-23 09:56:17.629 250273 DEBUG oslo_concurrency.lockutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:56:17 np0005593232 nova_compute[250269]: 2026-01-23 09:56:17.659 250273 DEBUG nova.storage.rbd_utils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] rbd image 7da901b2-a103-4bdd-aebe-54f024af47bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:56:17 np0005593232 nova_compute[250269]: 2026-01-23 09:56:17.662 250273 DEBUG oslo_concurrency.processutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 7da901b2-a103-4bdd-aebe-54f024af47bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:56:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:17.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:17 np0005593232 nova_compute[250269]: 2026-01-23 09:56:17.851 250273 DEBUG nova.policy [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0c18a146d425428f8ba82d37fcdb9c02', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '75a27ed7c12e4bfba34376ef35a14d04', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 04:56:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:17.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:18 np0005593232 nova_compute[250269]: 2026-01-23 09:56:18.190 250273 DEBUG oslo_concurrency.processutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 7da901b2-a103-4bdd-aebe-54f024af47bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:56:18 np0005593232 nova_compute[250269]: 2026-01-23 09:56:18.267 250273 DEBUG nova.storage.rbd_utils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] resizing rbd image 7da901b2-a103-4bdd-aebe-54f024af47bf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 04:56:18 np0005593232 nova_compute[250269]: 2026-01-23 09:56:18.407 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:18 np0005593232 nova_compute[250269]: 2026-01-23 09:56:18.412 250273 DEBUG nova.objects.instance [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Lazy-loading 'migration_context' on Instance uuid 7da901b2-a103-4bdd-aebe-54f024af47bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:56:18 np0005593232 nova_compute[250269]: 2026-01-23 09:56:18.439 250273 DEBUG nova.virt.libvirt.driver [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:56:18 np0005593232 nova_compute[250269]: 2026-01-23 09:56:18.440 250273 DEBUG nova.virt.libvirt.driver [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Ensure instance console log exists: /var/lib/nova/instances/7da901b2-a103-4bdd-aebe-54f024af47bf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:56:18 np0005593232 nova_compute[250269]: 2026-01-23 09:56:18.440 250273 DEBUG oslo_concurrency.lockutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:56:18 np0005593232 nova_compute[250269]: 2026-01-23 09:56:18.440 250273 DEBUG oslo_concurrency.lockutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:56:18 np0005593232 nova_compute[250269]: 2026-01-23 09:56:18.441 250273 DEBUG oslo_concurrency.lockutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:56:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1954: 321 pgs: 321 active+clean; 121 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 2.1 KiB/s rd, 341 B/s wr, 3 op/s
Jan 23 04:56:19 np0005593232 nova_compute[250269]: 2026-01-23 09:56:19.265 250273 DEBUG nova.network.neutron [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Successfully created port: f799aed2-670b-4bb6-8d95-cbc3b8ed9861 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 04:56:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:19.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:56:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:19.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:56:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:20.083 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:56:20 np0005593232 nova_compute[250269]: 2026-01-23 09:56:20.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:56:20 np0005593232 nova_compute[250269]: 2026-01-23 09:56:20.377 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1955: 321 pgs: 321 active+clean; 121 MiB data, 751 MiB used, 20 GiB / 21 GiB avail; 2.1 KiB/s rd, 341 B/s wr, 3 op/s
Jan 23 04:56:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:56:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:21.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:21.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:22 np0005593232 nova_compute[250269]: 2026-01-23 09:56:22.276 250273 DEBUG nova.network.neutron [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Successfully updated port: f799aed2-670b-4bb6-8d95-cbc3b8ed9861 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 04:56:22 np0005593232 nova_compute[250269]: 2026-01-23 09:56:22.304 250273 DEBUG oslo_concurrency.lockutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Acquiring lock "refresh_cache-7da901b2-a103-4bdd-aebe-54f024af47bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:56:22 np0005593232 nova_compute[250269]: 2026-01-23 09:56:22.305 250273 DEBUG oslo_concurrency.lockutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Acquired lock "refresh_cache-7da901b2-a103-4bdd-aebe-54f024af47bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:56:22 np0005593232 nova_compute[250269]: 2026-01-23 09:56:22.305 250273 DEBUG nova.network.neutron [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:56:22 np0005593232 nova_compute[250269]: 2026-01-23 09:56:22.460 250273 DEBUG nova.compute.manager [req-1f05f244-e622-470d-ac5e-dcdbac8d6a4b req-dcaf27b6-14dc-43ab-bc51-fcef32038692 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Received event network-changed-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:56:22 np0005593232 nova_compute[250269]: 2026-01-23 09:56:22.461 250273 DEBUG nova.compute.manager [req-1f05f244-e622-470d-ac5e-dcdbac8d6a4b req-dcaf27b6-14dc-43ab-bc51-fcef32038692 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Refreshing instance network info cache due to event network-changed-f799aed2-670b-4bb6-8d95-cbc3b8ed9861. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:56:22 np0005593232 nova_compute[250269]: 2026-01-23 09:56:22.461 250273 DEBUG oslo_concurrency.lockutils [req-1f05f244-e622-470d-ac5e-dcdbac8d6a4b req-dcaf27b6-14dc-43ab-bc51-fcef32038692 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-7da901b2-a103-4bdd-aebe-54f024af47bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:56:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1956: 321 pgs: 321 active+clean; 114 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Jan 23 04:56:22 np0005593232 nova_compute[250269]: 2026-01-23 09:56:22.846 250273 DEBUG nova.network.neutron [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:56:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 04:56:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3962859247' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 04:56:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 04:56:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3962859247' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 04:56:23 np0005593232 nova_compute[250269]: 2026-01-23 09:56:23.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:56:23 np0005593232 nova_compute[250269]: 2026-01-23 09:56:23.408 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:56:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:23.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:56:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:23.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.096 250273 DEBUG nova.network.neutron [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Updating instance_info_cache with network_info: [{"id": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "address": "fa:16:3e:e5:10:cf", "network": {"id": "a0d4c037-3e0a-4dd6-8e59-7c34955a40c5", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1317479630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75a27ed7c12e4bfba34376ef35a14d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf799aed2-67", "ovs_interfaceid": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.131 250273 DEBUG oslo_concurrency.lockutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Releasing lock "refresh_cache-7da901b2-a103-4bdd-aebe-54f024af47bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.132 250273 DEBUG nova.compute.manager [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Instance network_info: |[{"id": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "address": "fa:16:3e:e5:10:cf", "network": {"id": "a0d4c037-3e0a-4dd6-8e59-7c34955a40c5", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1317479630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75a27ed7c12e4bfba34376ef35a14d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf799aed2-67", "ovs_interfaceid": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.132 250273 DEBUG oslo_concurrency.lockutils [req-1f05f244-e622-470d-ac5e-dcdbac8d6a4b req-dcaf27b6-14dc-43ab-bc51-fcef32038692 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-7da901b2-a103-4bdd-aebe-54f024af47bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.133 250273 DEBUG nova.network.neutron [req-1f05f244-e622-470d-ac5e-dcdbac8d6a4b req-dcaf27b6-14dc-43ab-bc51-fcef32038692 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Refreshing network info cache for port f799aed2-670b-4bb6-8d95-cbc3b8ed9861 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.136 250273 DEBUG nova.virt.libvirt.driver [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Start _get_guest_xml network_info=[{"id": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "address": "fa:16:3e:e5:10:cf", "network": {"id": "a0d4c037-3e0a-4dd6-8e59-7c34955a40c5", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1317479630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75a27ed7c12e4bfba34376ef35a14d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf799aed2-67", "ovs_interfaceid": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.141 250273 WARNING nova.virt.libvirt.driver [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.146 250273 DEBUG nova.virt.libvirt.host [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.147 250273 DEBUG nova.virt.libvirt.host [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.151 250273 DEBUG nova.virt.libvirt.host [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.152 250273 DEBUG nova.virt.libvirt.host [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.153 250273 DEBUG nova.virt.libvirt.driver [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.154 250273 DEBUG nova.virt.hardware [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.154 250273 DEBUG nova.virt.hardware [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.154 250273 DEBUG nova.virt.hardware [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.155 250273 DEBUG nova.virt.hardware [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.155 250273 DEBUG nova.virt.hardware [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.155 250273 DEBUG nova.virt.hardware [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.156 250273 DEBUG nova.virt.hardware [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.156 250273 DEBUG nova.virt.hardware [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.156 250273 DEBUG nova.virt.hardware [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.156 250273 DEBUG nova.virt.hardware [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.157 250273 DEBUG nova.virt.hardware [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.161 250273 DEBUG oslo_concurrency.processutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:56:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:56:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1383816889' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.590 250273 DEBUG oslo_concurrency.processutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.624 250273 DEBUG nova.storage.rbd_utils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] rbd image 7da901b2-a103-4bdd-aebe-54f024af47bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:56:24 np0005593232 nova_compute[250269]: 2026-01-23 09:56:24.629 250273 DEBUG oslo_concurrency.processutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:56:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1957: 321 pgs: 321 active+clean; 114 MiB data, 772 MiB used, 20 GiB / 21 GiB avail; 46 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Jan 23 04:56:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:56:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2804573301' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.112 250273 DEBUG oslo_concurrency.processutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.114 250273 DEBUG nova.virt.libvirt.vif [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:56:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-363490500',display_name='tempest-InstanceActionsTestJSON-server-363490500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-363490500',id=85,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75a27ed7c12e4bfba34376ef35a14d04',ramdisk_id='',reservation_id='r-npola8zv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-487384975',owner_user_name='tempest-InstanceActions
TestJSON-487384975-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:56:17Z,user_data=None,user_id='0c18a146d425428f8ba82d37fcdb9c02',uuid=7da901b2-a103-4bdd-aebe-54f024af47bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "address": "fa:16:3e:e5:10:cf", "network": {"id": "a0d4c037-3e0a-4dd6-8e59-7c34955a40c5", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1317479630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75a27ed7c12e4bfba34376ef35a14d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf799aed2-67", "ovs_interfaceid": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.114 250273 DEBUG nova.network.os_vif_util [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Converting VIF {"id": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "address": "fa:16:3e:e5:10:cf", "network": {"id": "a0d4c037-3e0a-4dd6-8e59-7c34955a40c5", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1317479630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75a27ed7c12e4bfba34376ef35a14d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf799aed2-67", "ovs_interfaceid": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.115 250273 DEBUG nova.network.os_vif_util [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:10:cf,bridge_name='br-int',has_traffic_filtering=True,id=f799aed2-670b-4bb6-8d95-cbc3b8ed9861,network=Network(a0d4c037-3e0a-4dd6-8e59-7c34955a40c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf799aed2-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.117 250273 DEBUG nova.objects.instance [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7da901b2-a103-4bdd-aebe-54f024af47bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.149 250273 DEBUG nova.virt.libvirt.driver [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:56:25 np0005593232 nova_compute[250269]:  <uuid>7da901b2-a103-4bdd-aebe-54f024af47bf</uuid>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:  <name>instance-00000055</name>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <nova:name>tempest-InstanceActionsTestJSON-server-363490500</nova:name>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:56:24</nova:creationTime>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:56:25 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:        <nova:user uuid="0c18a146d425428f8ba82d37fcdb9c02">tempest-InstanceActionsTestJSON-487384975-project-member</nova:user>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:        <nova:project uuid="75a27ed7c12e4bfba34376ef35a14d04">tempest-InstanceActionsTestJSON-487384975</nova:project>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:        <nova:port uuid="f799aed2-670b-4bb6-8d95-cbc3b8ed9861">
Jan 23 04:56:25 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <entry name="serial">7da901b2-a103-4bdd-aebe-54f024af47bf</entry>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <entry name="uuid">7da901b2-a103-4bdd-aebe-54f024af47bf</entry>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/7da901b2-a103-4bdd-aebe-54f024af47bf_disk">
Jan 23 04:56:25 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:56:25 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/7da901b2-a103-4bdd-aebe-54f024af47bf_disk.config">
Jan 23 04:56:25 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:56:25 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:e5:10:cf"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <target dev="tapf799aed2-67"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/7da901b2-a103-4bdd-aebe-54f024af47bf/console.log" append="off"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:56:25 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:56:25 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:56:25 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:56:25 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.150 250273 DEBUG nova.compute.manager [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Preparing to wait for external event network-vif-plugged-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.151 250273 DEBUG oslo_concurrency.lockutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Acquiring lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.151 250273 DEBUG oslo_concurrency.lockutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.151 250273 DEBUG oslo_concurrency.lockutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.152 250273 DEBUG nova.virt.libvirt.vif [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:56:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-363490500',display_name='tempest-InstanceActionsTestJSON-server-363490500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-363490500',id=85,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75a27ed7c12e4bfba34376ef35a14d04',ramdisk_id='',reservation_id='r-npola8zv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-487384975',owner_user_name='tempest-Insta
nceActionsTestJSON-487384975-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:56:17Z,user_data=None,user_id='0c18a146d425428f8ba82d37fcdb9c02',uuid=7da901b2-a103-4bdd-aebe-54f024af47bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "address": "fa:16:3e:e5:10:cf", "network": {"id": "a0d4c037-3e0a-4dd6-8e59-7c34955a40c5", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1317479630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75a27ed7c12e4bfba34376ef35a14d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf799aed2-67", "ovs_interfaceid": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.153 250273 DEBUG nova.network.os_vif_util [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Converting VIF {"id": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "address": "fa:16:3e:e5:10:cf", "network": {"id": "a0d4c037-3e0a-4dd6-8e59-7c34955a40c5", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1317479630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75a27ed7c12e4bfba34376ef35a14d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf799aed2-67", "ovs_interfaceid": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.153 250273 DEBUG nova.network.os_vif_util [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:10:cf,bridge_name='br-int',has_traffic_filtering=True,id=f799aed2-670b-4bb6-8d95-cbc3b8ed9861,network=Network(a0d4c037-3e0a-4dd6-8e59-7c34955a40c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf799aed2-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.154 250273 DEBUG os_vif [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:10:cf,bridge_name='br-int',has_traffic_filtering=True,id=f799aed2-670b-4bb6-8d95-cbc3b8ed9861,network=Network(a0d4c037-3e0a-4dd6-8e59-7c34955a40c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf799aed2-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.155 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.155 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.156 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.160 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.160 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf799aed2-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.161 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf799aed2-67, col_values=(('external_ids', {'iface-id': 'f799aed2-670b-4bb6-8d95-cbc3b8ed9861', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:10:cf', 'vm-uuid': '7da901b2-a103-4bdd-aebe-54f024af47bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.162 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:25 np0005593232 NetworkManager[49057]: <info>  [1769162185.1636] manager: (tapf799aed2-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/131)
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.165 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.169 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.170 250273 INFO os_vif [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:10:cf,bridge_name='br-int',has_traffic_filtering=True,id=f799aed2-670b-4bb6-8d95-cbc3b8ed9861,network=Network(a0d4c037-3e0a-4dd6-8e59-7c34955a40c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf799aed2-67')#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.290 250273 DEBUG nova.virt.libvirt.driver [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.291 250273 DEBUG nova.virt.libvirt.driver [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.291 250273 DEBUG nova.virt.libvirt.driver [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] No VIF found with MAC fa:16:3e:e5:10:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.291 250273 INFO nova.virt.libvirt.driver [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Using config drive#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.318 250273 DEBUG nova.storage.rbd_utils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] rbd image 7da901b2-a103-4bdd-aebe-54f024af47bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:56:25 np0005593232 nova_compute[250269]: 2026-01-23 09:56:25.382 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:25.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:56:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:25.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:56:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:56:26 np0005593232 nova_compute[250269]: 2026-01-23 09:56:26.164 250273 INFO nova.virt.libvirt.driver [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Creating config drive at /var/lib/nova/instances/7da901b2-a103-4bdd-aebe-54f024af47bf/disk.config#033[00m
Jan 23 04:56:26 np0005593232 nova_compute[250269]: 2026-01-23 09:56:26.169 250273 DEBUG oslo_concurrency.processutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7da901b2-a103-4bdd-aebe-54f024af47bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxlkjy8wt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:56:26 np0005593232 nova_compute[250269]: 2026-01-23 09:56:26.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:56:26 np0005593232 nova_compute[250269]: 2026-01-23 09:56:26.300 250273 DEBUG oslo_concurrency.processutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7da901b2-a103-4bdd-aebe-54f024af47bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxlkjy8wt" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:56:26 np0005593232 nova_compute[250269]: 2026-01-23 09:56:26.334 250273 DEBUG nova.storage.rbd_utils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] rbd image 7da901b2-a103-4bdd-aebe-54f024af47bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:56:26 np0005593232 nova_compute[250269]: 2026-01-23 09:56:26.338 250273 DEBUG oslo_concurrency.processutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7da901b2-a103-4bdd-aebe-54f024af47bf/disk.config 7da901b2-a103-4bdd-aebe-54f024af47bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:56:26 np0005593232 nova_compute[250269]: 2026-01-23 09:56:26.367 250273 DEBUG nova.network.neutron [req-1f05f244-e622-470d-ac5e-dcdbac8d6a4b req-dcaf27b6-14dc-43ab-bc51-fcef32038692 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Updated VIF entry in instance network info cache for port f799aed2-670b-4bb6-8d95-cbc3b8ed9861. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:56:26 np0005593232 nova_compute[250269]: 2026-01-23 09:56:26.368 250273 DEBUG nova.network.neutron [req-1f05f244-e622-470d-ac5e-dcdbac8d6a4b req-dcaf27b6-14dc-43ab-bc51-fcef32038692 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Updating instance_info_cache with network_info: [{"id": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "address": "fa:16:3e:e5:10:cf", "network": {"id": "a0d4c037-3e0a-4dd6-8e59-7c34955a40c5", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1317479630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75a27ed7c12e4bfba34376ef35a14d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf799aed2-67", "ovs_interfaceid": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:56:26 np0005593232 nova_compute[250269]: 2026-01-23 09:56:26.462 250273 DEBUG oslo_concurrency.lockutils [req-1f05f244-e622-470d-ac5e-dcdbac8d6a4b req-dcaf27b6-14dc-43ab-bc51-fcef32038692 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-7da901b2-a103-4bdd-aebe-54f024af47bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:56:26 np0005593232 nova_compute[250269]: 2026-01-23 09:56:26.509 250273 DEBUG oslo_concurrency.processutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7da901b2-a103-4bdd-aebe-54f024af47bf/disk.config 7da901b2-a103-4bdd-aebe-54f024af47bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:56:26 np0005593232 nova_compute[250269]: 2026-01-23 09:56:26.510 250273 INFO nova.virt.libvirt.driver [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Deleting local config drive /var/lib/nova/instances/7da901b2-a103-4bdd-aebe-54f024af47bf/disk.config because it was imported into RBD.#033[00m
Jan 23 04:56:26 np0005593232 kernel: tapf799aed2-67: entered promiscuous mode
Jan 23 04:56:26 np0005593232 NetworkManager[49057]: <info>  [1769162186.5673] manager: (tapf799aed2-67): new Tun device (/org/freedesktop/NetworkManager/Devices/132)
Jan 23 04:56:26 np0005593232 ovn_controller[151001]: 2026-01-23T09:56:26Z|00262|binding|INFO|Claiming lport f799aed2-670b-4bb6-8d95-cbc3b8ed9861 for this chassis.
Jan 23 04:56:26 np0005593232 ovn_controller[151001]: 2026-01-23T09:56:26Z|00263|binding|INFO|f799aed2-670b-4bb6-8d95-cbc3b8ed9861: Claiming fa:16:3e:e5:10:cf 10.100.0.4
Jan 23 04:56:26 np0005593232 nova_compute[250269]: 2026-01-23 09:56:26.568 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:26 np0005593232 systemd-udevd[304605]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:56:26 np0005593232 systemd-machined[215836]: New machine qemu-32-instance-00000055.
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.646 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:10:cf 10.100.0.4'], port_security=['fa:16:3e:e5:10:cf 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7da901b2-a103-4bdd-aebe-54f024af47bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75a27ed7c12e4bfba34376ef35a14d04', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f5dde0fe-1955-4c3c-a689-2b3b780903e4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2de10fee-58f4-4406-945d-ce76b4916b26, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=f799aed2-670b-4bb6-8d95-cbc3b8ed9861) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.647 161902 INFO neutron.agent.ovn.metadata.agent [-] Port f799aed2-670b-4bb6-8d95-cbc3b8ed9861 in datapath a0d4c037-3e0a-4dd6-8e59-7c34955a40c5 bound to our chassis#033[00m
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.649 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0d4c037-3e0a-4dd6-8e59-7c34955a40c5#033[00m
Jan 23 04:56:26 np0005593232 NetworkManager[49057]: <info>  [1769162186.6573] device (tapf799aed2-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:56:26 np0005593232 NetworkManager[49057]: <info>  [1769162186.6581] device (tapf799aed2-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.661 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[435f2c79-d127-4b41-8071-05d6642d5b0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.662 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa0d4c037-31 in ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.663 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa0d4c037-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.663 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a6053981-1c12-47fd-909c-cd271b65bc51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.664 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[154c3ad5-2437-443a-8297-eccfc6917299]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:26 np0005593232 systemd[1]: Started Virtual Machine qemu-32-instance-00000055.
Jan 23 04:56:26 np0005593232 nova_compute[250269]: 2026-01-23 09:56:26.679 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:26 np0005593232 ovn_controller[151001]: 2026-01-23T09:56:26Z|00264|binding|INFO|Setting lport f799aed2-670b-4bb6-8d95-cbc3b8ed9861 ovn-installed in OVS
Jan 23 04:56:26 np0005593232 ovn_controller[151001]: 2026-01-23T09:56:26Z|00265|binding|INFO|Setting lport f799aed2-670b-4bb6-8d95-cbc3b8ed9861 up in Southbound
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.682 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[b8fcc644-6842-47c7-9096-346f51971b45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:26 np0005593232 nova_compute[250269]: 2026-01-23 09:56:26.684 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.697 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[20fb9879-5c4b-43c4-ad9a-a7150c410c3c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.727 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[e727e1da-16e9-40c0-bb76-912b622c3caa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:26 np0005593232 NetworkManager[49057]: <info>  [1769162186.7336] manager: (tapa0d4c037-30): new Veth device (/org/freedesktop/NetworkManager/Devices/133)
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.732 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[bcd68914-28b5-4f90-b879-1b9f7651db5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.764 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[7ddbe04f-92a1-4f63-96a9-e34d198ea9b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.767 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d5d754-892a-4fe4-b03a-eff8983f4e4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:26 np0005593232 NetworkManager[49057]: <info>  [1769162186.7892] device (tapa0d4c037-30): carrier: link connected
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.795 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[4446b6bd-6cdf-4d64-96d9-9d0ac8bb11d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.812 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[cb7b5daa-b9a1-4dcc-a851-79f9285c8249]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d4c037-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:8f:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609993, 'reachable_time': 15615, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304638, 'error': None, 'target': 'ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1958: 321 pgs: 321 active+clean; 88 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.829 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae46e40-c6b8-4912-b688-f7ad34bb43cb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea3:8f67'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 609993, 'tstamp': 609993}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304639, 'error': None, 'target': 'ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.843 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e823634d-014f-4c35-b745-e85e1286907d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d4c037-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:8f:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 78], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609993, 'reachable_time': 15615, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304640, 'error': None, 'target': 'ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.872 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a14e3b47-4501-4ab2-97b7-1676adc457f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.932 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b7d23c2e-5de0-4ce2-a2f2-992c05404054]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.934 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d4c037-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.934 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.935 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0d4c037-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:56:26 np0005593232 NetworkManager[49057]: <info>  [1769162186.9371] manager: (tapa0d4c037-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Jan 23 04:56:26 np0005593232 kernel: tapa0d4c037-30: entered promiscuous mode
Jan 23 04:56:26 np0005593232 nova_compute[250269]: 2026-01-23 09:56:26.938 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.944 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0d4c037-30, col_values=(('external_ids', {'iface-id': '2ec70d18-d892-413f-bf34-d9e626ddc8bc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:56:26 np0005593232 ovn_controller[151001]: 2026-01-23T09:56:26Z|00266|binding|INFO|Releasing lport 2ec70d18-d892-413f-bf34-d9e626ddc8bc from this chassis (sb_readonly=0)
Jan 23 04:56:26 np0005593232 nova_compute[250269]: 2026-01-23 09:56:26.945 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.948 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a0d4c037-3e0a-4dd6-8e59-7c34955a40c5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a0d4c037-3e0a-4dd6-8e59-7c34955a40c5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.949 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3f7495bc-b1f5-42ac-a419-0ad179de8b48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.950 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/a0d4c037-3e0a-4dd6-8e59-7c34955a40c5.pid.haproxy
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID a0d4c037-3e0a-4dd6-8e59-7c34955a40c5
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:56:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:26.950 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5', 'env', 'PROCESS_TAG=haproxy-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a0d4c037-3e0a-4dd6-8e59-7c34955a40c5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 04:56:26 np0005593232 nova_compute[250269]: 2026-01-23 09:56:26.959 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:27 np0005593232 nova_compute[250269]: 2026-01-23 09:56:27.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:56:27 np0005593232 nova_compute[250269]: 2026-01-23 09:56:27.294 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:56:27 np0005593232 nova_compute[250269]: 2026-01-23 09:56:27.294 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:56:27 np0005593232 podman[304672]: 2026-01-23 09:56:27.332396566 +0000 UTC m=+0.069154438 container create e224d8d9ce3fe71335bc84e11d5572528a91e039fe36698639b00d58659d2c9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 23 04:56:27 np0005593232 systemd[1]: Started libpod-conmon-e224d8d9ce3fe71335bc84e11d5572528a91e039fe36698639b00d58659d2c9f.scope.
Jan 23 04:56:27 np0005593232 podman[304672]: 2026-01-23 09:56:27.284689099 +0000 UTC m=+0.021447001 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:56:27 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:56:27 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3deb6090ab8631190286363a3de047171f5acd1ceb1a5af5ad50b6993e4dee/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:56:27 np0005593232 podman[304672]: 2026-01-23 09:56:27.409597021 +0000 UTC m=+0.146354893 container init e224d8d9ce3fe71335bc84e11d5572528a91e039fe36698639b00d58659d2c9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:56:27 np0005593232 podman[304672]: 2026-01-23 09:56:27.416334973 +0000 UTC m=+0.153092845 container start e224d8d9ce3fe71335bc84e11d5572528a91e039fe36698639b00d58659d2c9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 23 04:56:27 np0005593232 neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5[304688]: [NOTICE]   (304692) : New worker (304694) forked
Jan 23 04:56:27 np0005593232 neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5[304688]: [NOTICE]   (304692) : Loading success.
Jan 23 04:56:27 np0005593232 nova_compute[250269]: 2026-01-23 09:56:27.669 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 23 04:56:27 np0005593232 nova_compute[250269]: 2026-01-23 09:56:27.670 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 04:56:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:27.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:27.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:28 np0005593232 nova_compute[250269]: 2026-01-23 09:56:28.225 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162188.2249641, 7da901b2-a103-4bdd-aebe-54f024af47bf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:56:28 np0005593232 nova_compute[250269]: 2026-01-23 09:56:28.226 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] VM Started (Lifecycle Event)#033[00m
Jan 23 04:56:28 np0005593232 nova_compute[250269]: 2026-01-23 09:56:28.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:56:28 np0005593232 nova_compute[250269]: 2026-01-23 09:56:28.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:56:28 np0005593232 nova_compute[250269]: 2026-01-23 09:56:28.384 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:56:28 np0005593232 nova_compute[250269]: 2026-01-23 09:56:28.389 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162188.225098, 7da901b2-a103-4bdd-aebe-54f024af47bf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:56:28 np0005593232 nova_compute[250269]: 2026-01-23 09:56:28.389 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:56:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1959: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Jan 23 04:56:28 np0005593232 nova_compute[250269]: 2026-01-23 09:56:28.963 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:56:28 np0005593232 nova_compute[250269]: 2026-01-23 09:56:28.966 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:56:29 np0005593232 nova_compute[250269]: 2026-01-23 09:56:29.437 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:56:29 np0005593232 podman[304746]: 2026-01-23 09:56:29.464285881 +0000 UTC m=+0.128242719 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 04:56:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:29.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:29.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:29 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Jan 23 04:56:29 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:56:29.903614) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:56:29 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Jan 23 04:56:29 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162189903720, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2160, "num_deletes": 254, "total_data_size": 3775109, "memory_usage": 3835504, "flush_reason": "Manual Compaction"}
Jan 23 04:56:29 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Jan 23 04:56:29 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162189960753, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 3712948, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41285, "largest_seqno": 43444, "table_properties": {"data_size": 3703079, "index_size": 6235, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20882, "raw_average_key_size": 20, "raw_value_size": 3683228, "raw_average_value_size": 3653, "num_data_blocks": 271, "num_entries": 1008, "num_filter_entries": 1008, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769161984, "oldest_key_time": 1769161984, "file_creation_time": 1769162189, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:56:29 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 57334 microseconds, and 8771 cpu microseconds.
Jan 23 04:56:29 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:56:29 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:56:29.960943) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 3712948 bytes OK
Jan 23 04:56:29 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:56:29.960995) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Jan 23 04:56:29 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:56:29.962746) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Jan 23 04:56:29 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:56:29.962761) EVENT_LOG_v1 {"time_micros": 1769162189962756, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 04:56:29 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:56:29.962778) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 04:56:29 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 3766095, prev total WAL file size 3766095, number of live WAL files 2.
Jan 23 04:56:29 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:56:29 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:56:29.964020) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Jan 23 04:56:29 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 04:56:29 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(3625KB)], [92(9510KB)]
Jan 23 04:56:29 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162189964083, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 13452132, "oldest_snapshot_seqno": -1}
Jan 23 04:56:30 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 7028 keys, 11522180 bytes, temperature: kUnknown
Jan 23 04:56:30 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162190071553, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 11522180, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11474115, "index_size": 29393, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17605, "raw_key_size": 180755, "raw_average_key_size": 25, "raw_value_size": 11347237, "raw_average_value_size": 1614, "num_data_blocks": 1169, "num_entries": 7028, "num_filter_entries": 7028, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769162189, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:56:30 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:56:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:56:30.071846) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 11522180 bytes
Jan 23 04:56:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:56:30.074690) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.0 rd, 107.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 9.3 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 7553, records dropped: 525 output_compression: NoCompression
Jan 23 04:56:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:56:30.074735) EVENT_LOG_v1 {"time_micros": 1769162190074707, "job": 54, "event": "compaction_finished", "compaction_time_micros": 107609, "compaction_time_cpu_micros": 31549, "output_level": 6, "num_output_files": 1, "total_output_size": 11522180, "num_input_records": 7553, "num_output_records": 7028, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:56:30 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:56:30 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162190075626, "job": 54, "event": "table_file_deletion", "file_number": 94}
Jan 23 04:56:30 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:56:30 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162190077349, "job": 54, "event": "table_file_deletion", "file_number": 92}
Jan 23 04:56:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:56:29.963932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:56:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:56:30.077410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:56:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:56:30.077414) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:56:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:56:30.077415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:56:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:56:30.077417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:56:30 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:56:30.077418) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.163 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.383 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.424 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.425 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.425 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.425 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.425 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.600 250273 DEBUG nova.compute.manager [req-618ac4e9-ca42-4ca7-8a23-8ece0493b22b req-903d3684-ded2-486c-9ea1-5348941860bb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Received event network-vif-plugged-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.602 250273 DEBUG oslo_concurrency.lockutils [req-618ac4e9-ca42-4ca7-8a23-8ece0493b22b req-903d3684-ded2-486c-9ea1-5348941860bb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.602 250273 DEBUG oslo_concurrency.lockutils [req-618ac4e9-ca42-4ca7-8a23-8ece0493b22b req-903d3684-ded2-486c-9ea1-5348941860bb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.603 250273 DEBUG oslo_concurrency.lockutils [req-618ac4e9-ca42-4ca7-8a23-8ece0493b22b req-903d3684-ded2-486c-9ea1-5348941860bb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.603 250273 DEBUG nova.compute.manager [req-618ac4e9-ca42-4ca7-8a23-8ece0493b22b req-903d3684-ded2-486c-9ea1-5348941860bb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Processing event network-vif-plugged-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.604 250273 DEBUG nova.compute.manager [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.608 250273 DEBUG nova.virt.libvirt.driver [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.612 250273 INFO nova.virt.libvirt.driver [-] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Instance spawned successfully.#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.612 250273 DEBUG nova.virt.libvirt.driver [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.615 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162190.614842, 7da901b2-a103-4bdd-aebe-54f024af47bf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.615 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.656 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.663 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.667 250273 DEBUG nova.virt.libvirt.driver [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.668 250273 DEBUG nova.virt.libvirt.driver [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.668 250273 DEBUG nova.virt.libvirt.driver [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.669 250273 DEBUG nova.virt.libvirt.driver [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.669 250273 DEBUG nova.virt.libvirt.driver [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.670 250273 DEBUG nova.virt.libvirt.driver [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.701 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.759 250273 INFO nova.compute.manager [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Took 13.30 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.760 250273 DEBUG nova.compute.manager [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:56:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1960: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 49 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.840 250273 INFO nova.compute.manager [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Took 14.57 seconds to build instance.#033[00m
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.879 250273 DEBUG oslo_concurrency.lockutils [None req-5a39823e-927e-4d37-9b27-c84f54426777 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:56:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:56:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/253468711' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:56:30 np0005593232 nova_compute[250269]: 2026-01-23 09:56:30.930 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:56:31 np0005593232 nova_compute[250269]: 2026-01-23 09:56:31.025 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:56:31 np0005593232 nova_compute[250269]: 2026-01-23 09:56:31.025 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:56:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:56:31 np0005593232 nova_compute[250269]: 2026-01-23 09:56:31.187 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:56:31 np0005593232 nova_compute[250269]: 2026-01-23 09:56:31.188 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4420MB free_disk=20.96752166748047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:56:31 np0005593232 nova_compute[250269]: 2026-01-23 09:56:31.188 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:56:31 np0005593232 nova_compute[250269]: 2026-01-23 09:56:31.189 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:56:31 np0005593232 nova_compute[250269]: 2026-01-23 09:56:31.310 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 7da901b2-a103-4bdd-aebe-54f024af47bf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:56:31 np0005593232 nova_compute[250269]: 2026-01-23 09:56:31.310 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:56:31 np0005593232 nova_compute[250269]: 2026-01-23 09:56:31.311 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:56:31 np0005593232 nova_compute[250269]: 2026-01-23 09:56:31.388 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:56:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:31.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:56:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2116480676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:56:31 np0005593232 nova_compute[250269]: 2026-01-23 09:56:31.858 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:56:31 np0005593232 nova_compute[250269]: 2026-01-23 09:56:31.864 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:56:31 np0005593232 nova_compute[250269]: 2026-01-23 09:56:31.886 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:56:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:56:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:31.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:56:31 np0005593232 nova_compute[250269]: 2026-01-23 09:56:31.913 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:56:31 np0005593232 nova_compute[250269]: 2026-01-23 09:56:31.913 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:56:32 np0005593232 podman[304843]: 2026-01-23 09:56:32.702152373 +0000 UTC m=+0.096523716 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 23 04:56:32 np0005593232 nova_compute[250269]: 2026-01-23 09:56:32.717 250273 DEBUG nova.compute.manager [req-ff5650b6-7c21-4028-98fc-1eedf6f4067d req-82c0d742-129b-4fde-9e76-65a6f177c45c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Received event network-vif-plugged-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:56:32 np0005593232 nova_compute[250269]: 2026-01-23 09:56:32.718 250273 DEBUG oslo_concurrency.lockutils [req-ff5650b6-7c21-4028-98fc-1eedf6f4067d req-82c0d742-129b-4fde-9e76-65a6f177c45c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:56:32 np0005593232 nova_compute[250269]: 2026-01-23 09:56:32.718 250273 DEBUG oslo_concurrency.lockutils [req-ff5650b6-7c21-4028-98fc-1eedf6f4067d req-82c0d742-129b-4fde-9e76-65a6f177c45c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:56:32 np0005593232 nova_compute[250269]: 2026-01-23 09:56:32.718 250273 DEBUG oslo_concurrency.lockutils [req-ff5650b6-7c21-4028-98fc-1eedf6f4067d req-82c0d742-129b-4fde-9e76-65a6f177c45c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:56:32 np0005593232 nova_compute[250269]: 2026-01-23 09:56:32.719 250273 DEBUG nova.compute.manager [req-ff5650b6-7c21-4028-98fc-1eedf6f4067d req-82c0d742-129b-4fde-9e76-65a6f177c45c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] No waiting events found dispatching network-vif-plugged-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:56:32 np0005593232 nova_compute[250269]: 2026-01-23 09:56:32.719 250273 WARNING nova.compute.manager [req-ff5650b6-7c21-4028-98fc-1eedf6f4067d req-82c0d742-129b-4fde-9e76-65a6f177c45c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Received unexpected event network-vif-plugged-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 for instance with vm_state active and task_state None.#033[00m
Jan 23 04:56:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1961: 321 pgs: 321 active+clean; 134 MiB data, 765 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 163 op/s
Jan 23 04:56:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:56:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:56:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:56:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:56:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:56:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:56:33 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev e03db703-75d2-43e2-9d16-5b03b513d11a does not exist
Jan 23 04:56:33 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 98cf9872-4fda-4c95-ada3-5ec5c54b65d0 does not exist
Jan 23 04:56:33 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 79c38963-02d3-4849-9daa-81ba56049296 does not exist
Jan 23 04:56:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:56:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:56:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:56:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:56:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:56:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:56:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:33.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:33 np0005593232 nova_compute[250269]: 2026-01-23 09:56:33.705 250273 DEBUG oslo_concurrency.lockutils [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Acquiring lock "7da901b2-a103-4bdd-aebe-54f024af47bf" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:56:33 np0005593232 nova_compute[250269]: 2026-01-23 09:56:33.707 250273 DEBUG oslo_concurrency.lockutils [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:56:33 np0005593232 nova_compute[250269]: 2026-01-23 09:56:33.707 250273 INFO nova.compute.manager [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Rebooting instance#033[00m
Jan 23 04:56:33 np0005593232 nova_compute[250269]: 2026-01-23 09:56:33.739 250273 DEBUG oslo_concurrency.lockutils [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Acquiring lock "refresh_cache-7da901b2-a103-4bdd-aebe-54f024af47bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:56:33 np0005593232 nova_compute[250269]: 2026-01-23 09:56:33.739 250273 DEBUG oslo_concurrency.lockutils [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Acquired lock "refresh_cache-7da901b2-a103-4bdd-aebe-54f024af47bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:56:33 np0005593232 nova_compute[250269]: 2026-01-23 09:56:33.740 250273 DEBUG nova.network.neutron [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:56:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:33.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:34 np0005593232 podman[305109]: 2026-01-23 09:56:33.959123183 +0000 UTC m=+0.021048629 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:56:34 np0005593232 podman[305109]: 2026-01-23 09:56:34.075686249 +0000 UTC m=+0.137611675 container create 244be35ece451a5b836484914a058cb6ca40fd934ad4ef7749b5359a5d1ae8c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shamir, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:56:34 np0005593232 systemd[1]: Started libpod-conmon-244be35ece451a5b836484914a058cb6ca40fd934ad4ef7749b5359a5d1ae8c7.scope.
Jan 23 04:56:34 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:56:34 np0005593232 podman[305109]: 2026-01-23 09:56:34.268000859 +0000 UTC m=+0.329926305 container init 244be35ece451a5b836484914a058cb6ca40fd934ad4ef7749b5359a5d1ae8c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 23 04:56:34 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:56:34 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:56:34 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:56:34 np0005593232 podman[305109]: 2026-01-23 09:56:34.278164918 +0000 UTC m=+0.340090354 container start 244be35ece451a5b836484914a058cb6ca40fd934ad4ef7749b5359a5d1ae8c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 04:56:34 np0005593232 angry_shamir[305125]: 167 167
Jan 23 04:56:34 np0005593232 systemd[1]: libpod-244be35ece451a5b836484914a058cb6ca40fd934ad4ef7749b5359a5d1ae8c7.scope: Deactivated successfully.
Jan 23 04:56:34 np0005593232 podman[305109]: 2026-01-23 09:56:34.292488575 +0000 UTC m=+0.354414031 container attach 244be35ece451a5b836484914a058cb6ca40fd934ad4ef7749b5359a5d1ae8c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shamir, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 04:56:34 np0005593232 podman[305109]: 2026-01-23 09:56:34.293466863 +0000 UTC m=+0.355392279 container died 244be35ece451a5b836484914a058cb6ca40fd934ad4ef7749b5359a5d1ae8c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 04:56:34 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d152448763f123e35a672b15821720a52206a1aec4ce9ed3f251d3e418169757-merged.mount: Deactivated successfully.
Jan 23 04:56:34 np0005593232 podman[305109]: 2026-01-23 09:56:34.385199932 +0000 UTC m=+0.447125348 container remove 244be35ece451a5b836484914a058cb6ca40fd934ad4ef7749b5359a5d1ae8c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 04:56:34 np0005593232 systemd[1]: libpod-conmon-244be35ece451a5b836484914a058cb6ca40fd934ad4ef7749b5359a5d1ae8c7.scope: Deactivated successfully.
Jan 23 04:56:34 np0005593232 podman[305148]: 2026-01-23 09:56:34.558516971 +0000 UTC m=+0.055191550 container create e541e29ec462c0af7035fcdeda438ea34bbe88fda103813b8a7549e674c7b89b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mendel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:56:34 np0005593232 systemd[1]: Started libpod-conmon-e541e29ec462c0af7035fcdeda438ea34bbe88fda103813b8a7549e674c7b89b.scope.
Jan 23 04:56:34 np0005593232 podman[305148]: 2026-01-23 09:56:34.529695972 +0000 UTC m=+0.026370571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:56:34 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:56:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5804e59589f4304b5f4c4403903d04d212e912204a6b24cf3f7fa1c4fcf51f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:56:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5804e59589f4304b5f4c4403903d04d212e912204a6b24cf3f7fa1c4fcf51f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:56:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5804e59589f4304b5f4c4403903d04d212e912204a6b24cf3f7fa1c4fcf51f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:56:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5804e59589f4304b5f4c4403903d04d212e912204a6b24cf3f7fa1c4fcf51f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:56:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5804e59589f4304b5f4c4403903d04d212e912204a6b24cf3f7fa1c4fcf51f8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:56:34 np0005593232 podman[305148]: 2026-01-23 09:56:34.669330363 +0000 UTC m=+0.166004962 container init e541e29ec462c0af7035fcdeda438ea34bbe88fda103813b8a7549e674c7b89b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:56:34 np0005593232 podman[305148]: 2026-01-23 09:56:34.677109514 +0000 UTC m=+0.173784093 container start e541e29ec462c0af7035fcdeda438ea34bbe88fda103813b8a7549e674c7b89b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mendel, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 04:56:34 np0005593232 podman[305148]: 2026-01-23 09:56:34.682206659 +0000 UTC m=+0.178881238 container attach e541e29ec462c0af7035fcdeda438ea34bbe88fda103813b8a7549e674c7b89b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mendel, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:56:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1962: 321 pgs: 321 active+clean; 134 MiB data, 765 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 23 04:56:35 np0005593232 nova_compute[250269]: 2026-01-23 09:56:35.166 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:35 np0005593232 nova_compute[250269]: 2026-01-23 09:56:35.385 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:35 np0005593232 optimistic_mendel[305165]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:56:35 np0005593232 optimistic_mendel[305165]: --> relative data size: 1.0
Jan 23 04:56:35 np0005593232 optimistic_mendel[305165]: --> All data devices are unavailable
Jan 23 04:56:35 np0005593232 systemd[1]: libpod-e541e29ec462c0af7035fcdeda438ea34bbe88fda103813b8a7549e674c7b89b.scope: Deactivated successfully.
Jan 23 04:56:35 np0005593232 conmon[305165]: conmon e541e29ec462c0af7035 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e541e29ec462c0af7035fcdeda438ea34bbe88fda103813b8a7549e674c7b89b.scope/container/memory.events
Jan 23 04:56:35 np0005593232 podman[305148]: 2026-01-23 09:56:35.479701432 +0000 UTC m=+0.976376011 container died e541e29ec462c0af7035fcdeda438ea34bbe88fda103813b8a7549e674c7b89b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:56:35 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c5804e59589f4304b5f4c4403903d04d212e912204a6b24cf3f7fa1c4fcf51f8-merged.mount: Deactivated successfully.
Jan 23 04:56:35 np0005593232 podman[305148]: 2026-01-23 09:56:35.614902107 +0000 UTC m=+1.111576686 container remove e541e29ec462c0af7035fcdeda438ea34bbe88fda103813b8a7549e674c7b89b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_mendel, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:56:35 np0005593232 systemd[1]: libpod-conmon-e541e29ec462c0af7035fcdeda438ea34bbe88fda103813b8a7549e674c7b89b.scope: Deactivated successfully.
Jan 23 04:56:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:35.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:35.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:35 np0005593232 nova_compute[250269]: 2026-01-23 09:56:35.914 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:56:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:56:36 np0005593232 podman[305335]: 2026-01-23 09:56:36.29885745 +0000 UTC m=+0.081348784 container create 1eccddcb7820a50fb5e833c18cf4e114624d137920bb0f4962a702c994981212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_burnell, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 04:56:36 np0005593232 systemd[1]: Started libpod-conmon-1eccddcb7820a50fb5e833c18cf4e114624d137920bb0f4962a702c994981212.scope.
Jan 23 04:56:36 np0005593232 podman[305335]: 2026-01-23 09:56:36.254624932 +0000 UTC m=+0.037116326 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:56:36 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:56:36 np0005593232 podman[305335]: 2026-01-23 09:56:36.4613251 +0000 UTC m=+0.243816454 container init 1eccddcb7820a50fb5e833c18cf4e114624d137920bb0f4962a702c994981212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_burnell, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:56:36 np0005593232 podman[305335]: 2026-01-23 09:56:36.469150433 +0000 UTC m=+0.251641767 container start 1eccddcb7820a50fb5e833c18cf4e114624d137920bb0f4962a702c994981212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 04:56:36 np0005593232 suspicious_burnell[305351]: 167 167
Jan 23 04:56:36 np0005593232 systemd[1]: libpod-1eccddcb7820a50fb5e833c18cf4e114624d137920bb0f4962a702c994981212.scope: Deactivated successfully.
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.480 250273 DEBUG nova.network.neutron [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Updating instance_info_cache with network_info: [{"id": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "address": "fa:16:3e:e5:10:cf", "network": {"id": "a0d4c037-3e0a-4dd6-8e59-7c34955a40c5", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1317479630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75a27ed7c12e4bfba34376ef35a14d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf799aed2-67", "ovs_interfaceid": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:56:36 np0005593232 podman[305335]: 2026-01-23 09:56:36.501061301 +0000 UTC m=+0.283552635 container attach 1eccddcb7820a50fb5e833c18cf4e114624d137920bb0f4962a702c994981212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_burnell, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 04:56:36 np0005593232 podman[305335]: 2026-01-23 09:56:36.502573184 +0000 UTC m=+0.285064518 container died 1eccddcb7820a50fb5e833c18cf4e114624d137920bb0f4962a702c994981212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.510 250273 DEBUG oslo_concurrency.lockutils [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Releasing lock "refresh_cache-7da901b2-a103-4bdd-aebe-54f024af47bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.512 250273 DEBUG nova.compute.manager [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:56:36 np0005593232 kernel: tapf799aed2-67 (unregistering): left promiscuous mode
Jan 23 04:56:36 np0005593232 NetworkManager[49057]: <info>  [1769162196.6861] device (tapf799aed2-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:56:36 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1c7058a0baae630f4a709b9f493c5e3b07ac2b60e6aa07149ad673c976493c55-merged.mount: Deactivated successfully.
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.693 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:36 np0005593232 ovn_controller[151001]: 2026-01-23T09:56:36Z|00267|binding|INFO|Releasing lport f799aed2-670b-4bb6-8d95-cbc3b8ed9861 from this chassis (sb_readonly=0)
Jan 23 04:56:36 np0005593232 ovn_controller[151001]: 2026-01-23T09:56:36Z|00268|binding|INFO|Setting lport f799aed2-670b-4bb6-8d95-cbc3b8ed9861 down in Southbound
Jan 23 04:56:36 np0005593232 ovn_controller[151001]: 2026-01-23T09:56:36Z|00269|binding|INFO|Removing iface tapf799aed2-67 ovn-installed in OVS
Jan 23 04:56:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:36.702 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:10:cf 10.100.0.4'], port_security=['fa:16:3e:e5:10:cf 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7da901b2-a103-4bdd-aebe-54f024af47bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75a27ed7c12e4bfba34376ef35a14d04', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f5dde0fe-1955-4c3c-a689-2b3b780903e4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2de10fee-58f4-4406-945d-ce76b4916b26, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=f799aed2-670b-4bb6-8d95-cbc3b8ed9861) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:56:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:36.703 161902 INFO neutron.agent.ovn.metadata.agent [-] Port f799aed2-670b-4bb6-8d95-cbc3b8ed9861 in datapath a0d4c037-3e0a-4dd6-8e59-7c34955a40c5 unbound from our chassis#033[00m
Jan 23 04:56:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:36.705 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0d4c037-3e0a-4dd6-8e59-7c34955a40c5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 04:56:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:36.706 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[75d4062e-5b45-418c-b1f0-e38b245d3b27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:36.707 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5 namespace which is not needed anymore#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.714 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:36 np0005593232 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d00000055.scope: Deactivated successfully.
Jan 23 04:56:36 np0005593232 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d00000055.scope: Consumed 7.752s CPU time.
Jan 23 04:56:36 np0005593232 systemd-machined[215836]: Machine qemu-32-instance-00000055 terminated.
Jan 23 04:56:36 np0005593232 podman[305335]: 2026-01-23 09:56:36.751801202 +0000 UTC m=+0.534292526 container remove 1eccddcb7820a50fb5e833c18cf4e114624d137920bb0f4962a702c994981212 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:56:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1963: 321 pgs: 321 active+clean; 134 MiB data, 765 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Jan 23 04:56:36 np0005593232 neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5[304688]: [NOTICE]   (304692) : haproxy version is 2.8.14-c23fe91
Jan 23 04:56:36 np0005593232 neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5[304688]: [NOTICE]   (304692) : path to executable is /usr/sbin/haproxy
Jan 23 04:56:36 np0005593232 neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5[304688]: [WARNING]  (304692) : Exiting Master process...
Jan 23 04:56:36 np0005593232 neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5[304688]: [WARNING]  (304692) : Exiting Master process...
Jan 23 04:56:36 np0005593232 neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5[304688]: [ALERT]    (304692) : Current worker (304694) exited with code 143 (Terminated)
Jan 23 04:56:36 np0005593232 neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5[304688]: [WARNING]  (304692) : All workers exited. Exiting... (0)
Jan 23 04:56:36 np0005593232 systemd[1]: libpod-e224d8d9ce3fe71335bc84e11d5572528a91e039fe36698639b00d58659d2c9f.scope: Deactivated successfully.
Jan 23 04:56:36 np0005593232 podman[305395]: 2026-01-23 09:56:36.861379909 +0000 UTC m=+0.057088695 container died e224d8d9ce3fe71335bc84e11d5572528a91e039fe36698639b00d58659d2c9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 04:56:36 np0005593232 kernel: tapf799aed2-67: entered promiscuous mode
Jan 23 04:56:36 np0005593232 kernel: tapf799aed2-67 (unregistering): left promiscuous mode
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.869 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.881 250273 INFO nova.virt.libvirt.driver [-] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Instance destroyed successfully.#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.882 250273 DEBUG nova.objects.instance [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Lazy-loading 'resources' on Instance uuid 7da901b2-a103-4bdd-aebe-54f024af47bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.899 250273 DEBUG nova.virt.libvirt.vif [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:56:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-363490500',display_name='tempest-InstanceActionsTestJSON-server-363490500',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-363490500',id=85,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:56:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='75a27ed7c12e4bfba34376ef35a14d04',ramdisk_id='',reservation_id='r-npola8zv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-487384975',owner_user_name='tempest-InstanceActionsTestJSON-487384975-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:56:36Z,user_data=None,user_id='0c18a146d425428f8ba82d37fcdb9c02',uuid=7da901b2-a103-4bdd-aebe-54f024af47bf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "address": "fa:16:3e:e5:10:cf", "network": {"id": "a0d4c037-3e0a-4dd6-8e59-7c34955a40c5", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1317479630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75a27ed7c12e4bfba34376ef35a14d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf799aed2-67", "ovs_interfaceid": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.899 250273 DEBUG nova.network.os_vif_util [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Converting VIF {"id": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "address": "fa:16:3e:e5:10:cf", "network": {"id": "a0d4c037-3e0a-4dd6-8e59-7c34955a40c5", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1317479630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75a27ed7c12e4bfba34376ef35a14d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf799aed2-67", "ovs_interfaceid": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.900 250273 DEBUG nova.network.os_vif_util [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e5:10:cf,bridge_name='br-int',has_traffic_filtering=True,id=f799aed2-670b-4bb6-8d95-cbc3b8ed9861,network=Network(a0d4c037-3e0a-4dd6-8e59-7c34955a40c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf799aed2-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.900 250273 DEBUG os_vif [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:10:cf,bridge_name='br-int',has_traffic_filtering=True,id=f799aed2-670b-4bb6-8d95-cbc3b8ed9861,network=Network(a0d4c037-3e0a-4dd6-8e59-7c34955a40c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf799aed2-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.901 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.902 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf799aed2-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.903 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.904 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.906 250273 INFO os_vif [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:10:cf,bridge_name='br-int',has_traffic_filtering=True,id=f799aed2-670b-4bb6-8d95-cbc3b8ed9861,network=Network(a0d4c037-3e0a-4dd6-8e59-7c34955a40c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf799aed2-67')#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.914 250273 DEBUG nova.virt.libvirt.driver [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Start _get_guest_xml network_info=[{"id": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "address": "fa:16:3e:e5:10:cf", "network": {"id": "a0d4c037-3e0a-4dd6-8e59-7c34955a40c5", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1317479630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75a27ed7c12e4bfba34376ef35a14d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf799aed2-67", "ovs_interfaceid": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.918 250273 WARNING nova.virt.libvirt.driver [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.922 250273 DEBUG nova.virt.libvirt.host [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.923 250273 DEBUG nova.virt.libvirt.host [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.926 250273 DEBUG nova.virt.libvirt.host [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.926 250273 DEBUG nova.virt.libvirt.host [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.928 250273 DEBUG nova.virt.libvirt.driver [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.928 250273 DEBUG nova.virt.hardware [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.928 250273 DEBUG nova.virt.hardware [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.929 250273 DEBUG nova.virt.hardware [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.929 250273 DEBUG nova.virt.hardware [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.929 250273 DEBUG nova.virt.hardware [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.930 250273 DEBUG nova.virt.hardware [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.930 250273 DEBUG nova.virt.hardware [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.930 250273 DEBUG nova.virt.hardware [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.931 250273 DEBUG nova.virt.hardware [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.931 250273 DEBUG nova.virt.hardware [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.931 250273 DEBUG nova.virt.hardware [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.931 250273 DEBUG nova.objects.instance [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7da901b2-a103-4bdd-aebe-54f024af47bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:56:36 np0005593232 nova_compute[250269]: 2026-01-23 09:56:36.958 250273 DEBUG oslo_concurrency.processutils [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:56:37 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e224d8d9ce3fe71335bc84e11d5572528a91e039fe36698639b00d58659d2c9f-userdata-shm.mount: Deactivated successfully.
Jan 23 04:56:37 np0005593232 systemd[1]: var-lib-containers-storage-overlay-cf3deb6090ab8631190286363a3de047171f5acd1ceb1a5af5ad50b6993e4dee-merged.mount: Deactivated successfully.
Jan 23 04:56:37 np0005593232 podman[305395]: 2026-01-23 09:56:37.051423714 +0000 UTC m=+0.247132480 container cleanup e224d8d9ce3fe71335bc84e11d5572528a91e039fe36698639b00d58659d2c9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 23 04:56:37 np0005593232 systemd[1]: libpod-conmon-e224d8d9ce3fe71335bc84e11d5572528a91e039fe36698639b00d58659d2c9f.scope: Deactivated successfully.
Jan 23 04:56:37 np0005593232 podman[305418]: 2026-01-23 09:56:37.034506223 +0000 UTC m=+0.153180178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:56:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:56:37
Jan 23 04:56:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:56:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:56:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', '.mgr', '.rgw.root', 'images', 'default.rgw.control', 'default.rgw.log', 'volumes', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta']
Jan 23 04:56:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:56:37 np0005593232 podman[305418]: 2026-01-23 09:56:37.23904028 +0000 UTC m=+0.357714215 container create 4c0957a164d3def9becb1b80d3bfb8e7c42aa60134053132bc6cf6031f842206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 04:56:37 np0005593232 systemd[1]: Started libpod-conmon-4c0957a164d3def9becb1b80d3bfb8e7c42aa60134053132bc6cf6031f842206.scope.
Jan 23 04:56:37 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:56:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cac9e4f679536cb37d65b7e2761c728e26a4f3652b764312df18c1de6c12e66d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:56:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cac9e4f679536cb37d65b7e2761c728e26a4f3652b764312df18c1de6c12e66d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:56:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cac9e4f679536cb37d65b7e2761c728e26a4f3652b764312df18c1de6c12e66d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:56:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cac9e4f679536cb37d65b7e2761c728e26a4f3652b764312df18c1de6c12e66d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:56:37 np0005593232 podman[305454]: 2026-01-23 09:56:37.332968652 +0000 UTC m=+0.258452402 container remove e224d8d9ce3fe71335bc84e11d5572528a91e039fe36698639b00d58659d2c9f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 23 04:56:37 np0005593232 podman[305418]: 2026-01-23 09:56:37.342829582 +0000 UTC m=+0.461503537 container init 4c0957a164d3def9becb1b80d3bfb8e7c42aa60134053132bc6cf6031f842206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cray, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:56:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:37.340 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[72ba11fc-0179-4068-88f9-43f25ad7832e]: (4, ('Fri Jan 23 09:56:36 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5 (e224d8d9ce3fe71335bc84e11d5572528a91e039fe36698639b00d58659d2c9f)\ne224d8d9ce3fe71335bc84e11d5572528a91e039fe36698639b00d58659d2c9f\nFri Jan 23 09:56:37 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5 (e224d8d9ce3fe71335bc84e11d5572528a91e039fe36698639b00d58659d2c9f)\ne224d8d9ce3fe71335bc84e11d5572528a91e039fe36698639b00d58659d2c9f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:37.343 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[15e58f35-2e4f-489d-8799-9152221c06e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:37.344 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d4c037-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:56:37 np0005593232 nova_compute[250269]: 2026-01-23 09:56:37.348 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:37 np0005593232 kernel: tapa0d4c037-30: left promiscuous mode
Jan 23 04:56:37 np0005593232 podman[305418]: 2026-01-23 09:56:37.351627243 +0000 UTC m=+0.470301168 container start 4c0957a164d3def9becb1b80d3bfb8e7c42aa60134053132bc6cf6031f842206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cray, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 04:56:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:37.353 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a10df094-2c39-4589-a563-7e057a3ff985]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:37 np0005593232 podman[305418]: 2026-01-23 09:56:37.356256034 +0000 UTC m=+0.474929969 container attach 4c0957a164d3def9becb1b80d3bfb8e7c42aa60134053132bc6cf6031f842206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cray, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:56:37 np0005593232 nova_compute[250269]: 2026-01-23 09:56:37.372 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:37.373 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ce7d8085-5740-4617-b882-683cbfcaef4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:37.378 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a3193086-49cc-4424-bc35-ee24902ad54d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:56:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3604488103' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:56:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:37.393 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[def645af-837e-4f99-99a4-6d5cd4acd1fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 609986, 'reachable_time': 43656, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305495, 'error': None, 'target': 'ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:37 np0005593232 systemd[1]: run-netns-ovnmeta\x2da0d4c037\x2d3e0a\x2d4dd6\x2d8e59\x2d7c34955a40c5.mount: Deactivated successfully.
Jan 23 04:56:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:37.400 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 04:56:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:37.400 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[815fd2a0-7fcf-40ba-960d-cc0f6a716902]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:37 np0005593232 nova_compute[250269]: 2026-01-23 09:56:37.402 250273 DEBUG oslo_concurrency.processutils [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:56:37 np0005593232 systemd[1]: libpod-conmon-1eccddcb7820a50fb5e833c18cf4e114624d137920bb0f4962a702c994981212.scope: Deactivated successfully.
Jan 23 04:56:37 np0005593232 nova_compute[250269]: 2026-01-23 09:56:37.442 250273 DEBUG oslo_concurrency.processutils [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:56:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:56:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:56:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:56:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:56:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:56:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:56:37 np0005593232 nova_compute[250269]: 2026-01-23 09:56:37.670 250273 DEBUG nova.compute.manager [req-d49df736-cffa-469a-a461-535ed6da900c req-65101cf0-0d36-4ef2-8c64-57be1e2ae69f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Received event network-vif-unplugged-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:56:37 np0005593232 nova_compute[250269]: 2026-01-23 09:56:37.671 250273 DEBUG oslo_concurrency.lockutils [req-d49df736-cffa-469a-a461-535ed6da900c req-65101cf0-0d36-4ef2-8c64-57be1e2ae69f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:56:37 np0005593232 nova_compute[250269]: 2026-01-23 09:56:37.671 250273 DEBUG oslo_concurrency.lockutils [req-d49df736-cffa-469a-a461-535ed6da900c req-65101cf0-0d36-4ef2-8c64-57be1e2ae69f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:56:37 np0005593232 nova_compute[250269]: 2026-01-23 09:56:37.671 250273 DEBUG oslo_concurrency.lockutils [req-d49df736-cffa-469a-a461-535ed6da900c req-65101cf0-0d36-4ef2-8c64-57be1e2ae69f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:56:37 np0005593232 nova_compute[250269]: 2026-01-23 09:56:37.672 250273 DEBUG nova.compute.manager [req-d49df736-cffa-469a-a461-535ed6da900c req-65101cf0-0d36-4ef2-8c64-57be1e2ae69f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] No waiting events found dispatching network-vif-unplugged-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:56:37 np0005593232 nova_compute[250269]: 2026-01-23 09:56:37.672 250273 WARNING nova.compute.manager [req-d49df736-cffa-469a-a461-535ed6da900c req-65101cf0-0d36-4ef2-8c64-57be1e2ae69f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Received unexpected event network-vif-unplugged-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Jan 23 04:56:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:37.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:56:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2818037112' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:56:37 np0005593232 nova_compute[250269]: 2026-01-23 09:56:37.891 250273 DEBUG oslo_concurrency.processutils [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:56:37 np0005593232 nova_compute[250269]: 2026-01-23 09:56:37.893 250273 DEBUG nova.virt.libvirt.vif [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:56:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-363490500',display_name='tempest-InstanceActionsTestJSON-server-363490500',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-363490500',id=85,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:56:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='75a27ed7c12e4bfba34376ef35a14d04',ramdisk_id='',reservation_id='r-npola8zv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-487384975',owner_user_name='tempest-InstanceActionsTestJSON-487384975-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:56:36Z,user_data=None,user_id='0c18a146d425428f8ba82d37fcdb9c02',uuid=7da901b2-a103-4bdd-aebe-54f024af47bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "address": "fa:16:3e:e5:10:cf", "network": {"id": "a0d4c037-3e0a-4dd6-8e59-7c34955a40c5", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1317479630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75a27ed7c12e4bfba34376ef35a14d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf799aed2-67", "ovs_interfaceid": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:56:37 np0005593232 nova_compute[250269]: 2026-01-23 09:56:37.893 250273 DEBUG nova.network.os_vif_util [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Converting VIF {"id": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "address": "fa:16:3e:e5:10:cf", "network": {"id": "a0d4c037-3e0a-4dd6-8e59-7c34955a40c5", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1317479630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75a27ed7c12e4bfba34376ef35a14d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf799aed2-67", "ovs_interfaceid": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:56:37 np0005593232 nova_compute[250269]: 2026-01-23 09:56:37.894 250273 DEBUG nova.network.os_vif_util [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e5:10:cf,bridge_name='br-int',has_traffic_filtering=True,id=f799aed2-670b-4bb6-8d95-cbc3b8ed9861,network=Network(a0d4c037-3e0a-4dd6-8e59-7c34955a40c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf799aed2-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:56:37 np0005593232 nova_compute[250269]: 2026-01-23 09:56:37.895 250273 DEBUG nova.objects.instance [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7da901b2-a103-4bdd-aebe-54f024af47bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:56:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:56:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:37.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.017 250273 DEBUG nova.virt.libvirt.driver [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:56:38 np0005593232 nova_compute[250269]:  <uuid>7da901b2-a103-4bdd-aebe-54f024af47bf</uuid>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:  <name>instance-00000055</name>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <nova:name>tempest-InstanceActionsTestJSON-server-363490500</nova:name>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:56:36</nova:creationTime>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:56:38 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:        <nova:user uuid="0c18a146d425428f8ba82d37fcdb9c02">tempest-InstanceActionsTestJSON-487384975-project-member</nova:user>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:        <nova:project uuid="75a27ed7c12e4bfba34376ef35a14d04">tempest-InstanceActionsTestJSON-487384975</nova:project>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:        <nova:port uuid="f799aed2-670b-4bb6-8d95-cbc3b8ed9861">
Jan 23 04:56:38 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <entry name="serial">7da901b2-a103-4bdd-aebe-54f024af47bf</entry>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <entry name="uuid">7da901b2-a103-4bdd-aebe-54f024af47bf</entry>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/7da901b2-a103-4bdd-aebe-54f024af47bf_disk">
Jan 23 04:56:38 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:56:38 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/7da901b2-a103-4bdd-aebe-54f024af47bf_disk.config">
Jan 23 04:56:38 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:56:38 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:e5:10:cf"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <target dev="tapf799aed2-67"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/7da901b2-a103-4bdd-aebe-54f024af47bf/console.log" append="off"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <input type="keyboard" bus="usb"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:56:38 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:56:38 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:56:38 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:56:38 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.019 250273 DEBUG nova.virt.libvirt.driver [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.020 250273 DEBUG nova.virt.libvirt.driver [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.021 250273 DEBUG nova.virt.libvirt.vif [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:56:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-363490500',display_name='tempest-InstanceActionsTestJSON-server-363490500',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-363490500',id=85,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:56:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='75a27ed7c12e4bfba34376ef35a14d04',ramdisk_id='',reservation_id='r-npola8zv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-487384975',owner_user_name='tempest-InstanceActionsTestJSON-487384975-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:56:36Z,user_data=None,user_id='0c18a146d425428f8ba82d37fcdb9c02',uuid=7da901b2-a103-4bdd-aebe-54f024af47bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "address": "fa:16:3e:e5:10:cf", "network": {"id": "a0d4c037-3e0a-4dd6-8e59-7c34955a40c5", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1317479630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75a27ed7c12e4bfba34376ef35a14d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf799aed2-67", "ovs_interfaceid": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.021 250273 DEBUG nova.network.os_vif_util [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Converting VIF {"id": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "address": "fa:16:3e:e5:10:cf", "network": {"id": "a0d4c037-3e0a-4dd6-8e59-7c34955a40c5", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1317479630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75a27ed7c12e4bfba34376ef35a14d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf799aed2-67", "ovs_interfaceid": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.022 250273 DEBUG nova.network.os_vif_util [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e5:10:cf,bridge_name='br-int',has_traffic_filtering=True,id=f799aed2-670b-4bb6-8d95-cbc3b8ed9861,network=Network(a0d4c037-3e0a-4dd6-8e59-7c34955a40c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf799aed2-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.022 250273 DEBUG os_vif [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:10:cf,bridge_name='br-int',has_traffic_filtering=True,id=f799aed2-670b-4bb6-8d95-cbc3b8ed9861,network=Network(a0d4c037-3e0a-4dd6-8e59-7c34955a40c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf799aed2-67') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.023 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.024 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.024 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.027 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.027 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf799aed2-67, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.027 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf799aed2-67, col_values=(('external_ids', {'iface-id': 'f799aed2-670b-4bb6-8d95-cbc3b8ed9861', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:10:cf', 'vm-uuid': '7da901b2-a103-4bdd-aebe-54f024af47bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.029 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:38 np0005593232 NetworkManager[49057]: <info>  [1769162198.0302] manager: (tapf799aed2-67): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/135)
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.031 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.036 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.037 250273 INFO os_vif [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:10:cf,bridge_name='br-int',has_traffic_filtering=True,id=f799aed2-670b-4bb6-8d95-cbc3b8ed9861,network=Network(a0d4c037-3e0a-4dd6-8e59-7c34955a40c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf799aed2-67')#033[00m
Jan 23 04:56:38 np0005593232 kernel: tapf799aed2-67: entered promiscuous mode
Jan 23 04:56:38 np0005593232 NetworkManager[49057]: <info>  [1769162198.1108] manager: (tapf799aed2-67): new Tun device (/org/freedesktop/NetworkManager/Devices/136)
Jan 23 04:56:38 np0005593232 systemd-udevd[305373]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:56:38 np0005593232 ovn_controller[151001]: 2026-01-23T09:56:38Z|00270|binding|INFO|Claiming lport f799aed2-670b-4bb6-8d95-cbc3b8ed9861 for this chassis.
Jan 23 04:56:38 np0005593232 ovn_controller[151001]: 2026-01-23T09:56:38Z|00271|binding|INFO|f799aed2-670b-4bb6-8d95-cbc3b8ed9861: Claiming fa:16:3e:e5:10:cf 10.100.0.4
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.112 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.122 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:10:cf 10.100.0.4'], port_security=['fa:16:3e:e5:10:cf 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7da901b2-a103-4bdd-aebe-54f024af47bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75a27ed7c12e4bfba34376ef35a14d04', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'f5dde0fe-1955-4c3c-a689-2b3b780903e4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2de10fee-58f4-4406-945d-ce76b4916b26, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=f799aed2-670b-4bb6-8d95-cbc3b8ed9861) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.124 161902 INFO neutron.agent.ovn.metadata.agent [-] Port f799aed2-670b-4bb6-8d95-cbc3b8ed9861 in datapath a0d4c037-3e0a-4dd6-8e59-7c34955a40c5 bound to our chassis#033[00m
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.125 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a0d4c037-3e0a-4dd6-8e59-7c34955a40c5#033[00m
Jan 23 04:56:38 np0005593232 NetworkManager[49057]: <info>  [1769162198.1269] device (tapf799aed2-67): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:56:38 np0005593232 NetworkManager[49057]: <info>  [1769162198.1280] device (tapf799aed2-67): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:56:38 np0005593232 ovn_controller[151001]: 2026-01-23T09:56:38Z|00272|binding|INFO|Setting lport f799aed2-670b-4bb6-8d95-cbc3b8ed9861 ovn-installed in OVS
Jan 23 04:56:38 np0005593232 ovn_controller[151001]: 2026-01-23T09:56:38Z|00273|binding|INFO|Setting lport f799aed2-670b-4bb6-8d95-cbc3b8ed9861 up in Southbound
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.132 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.134 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.140 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4861e290-0317-4b75-bddb-b497d84d668f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.141 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa0d4c037-31 in ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.143 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa0d4c037-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.143 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[fa03ab08-2597-42f8-a4ae-b84d53d80651]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.144 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[dbdb90a5-c4fc-420c-bb0f-3b1313bffdcb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:56:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:56:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.155 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[db5cd9f0-74a1-4879-8300-5be0b90130dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:38 np0005593232 systemd-machined[215836]: New machine qemu-33-instance-00000055.
Jan 23 04:56:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:56:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:56:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:56:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:56:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:56:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:56:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:56:38 np0005593232 systemd[1]: Started Virtual Machine qemu-33-instance-00000055.
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.178 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[829236d1-dc65-4ee5-80fe-2aee9303b6c5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]: {
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:    "0": [
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:        {
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:            "devices": [
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:                "/dev/loop3"
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:            ],
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:            "lv_name": "ceph_lv0",
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:            "lv_size": "7511998464",
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:            "name": "ceph_lv0",
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:            "tags": {
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:                "ceph.cluster_name": "ceph",
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:                "ceph.crush_device_class": "",
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:                "ceph.encrypted": "0",
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:                "ceph.osd_id": "0",
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:                "ceph.type": "block",
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:                "ceph.vdo": "0"
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:            },
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:            "type": "block",
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:            "vg_name": "ceph_vg0"
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:        }
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]:    ]
Jan 23 04:56:38 np0005593232 xenodochial_cray[305486]: }
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.207 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[70bd441b-4fe4-4849-be8a-6262ace2ed59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:38 np0005593232 NetworkManager[49057]: <info>  [1769162198.2143] manager: (tapa0d4c037-30): new Veth device (/org/freedesktop/NetworkManager/Devices/137)
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.213 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[dcd25bdd-80d0-4f0b-a1e4-6ce90956ca8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:38 np0005593232 podman[305418]: 2026-01-23 09:56:38.241121342 +0000 UTC m=+1.359795277 container died 4c0957a164d3def9becb1b80d3bfb8e7c42aa60134053132bc6cf6031f842206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 04:56:38 np0005593232 systemd[1]: libpod-4c0957a164d3def9becb1b80d3bfb8e7c42aa60134053132bc6cf6031f842206.scope: Deactivated successfully.
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.242 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[aaed075a-b5df-4d49-b1d9-2334808ceff6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.246 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[e02fdcea-dbb1-4441-9ba5-9a0e4f00ebfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:38 np0005593232 systemd[1]: var-lib-containers-storage-overlay-cac9e4f679536cb37d65b7e2761c728e26a4f3652b764312df18c1de6c12e66d-merged.mount: Deactivated successfully.
Jan 23 04:56:38 np0005593232 NetworkManager[49057]: <info>  [1769162198.2755] device (tapa0d4c037-30): carrier: link connected
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.282 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[06cc6950-2cef-4719-a342-53e0a9668997]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:38 np0005593232 podman[305418]: 2026-01-23 09:56:38.301051506 +0000 UTC m=+1.419725441 container remove 4c0957a164d3def9becb1b80d3bfb8e7c42aa60134053132bc6cf6031f842206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.306 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[74f5af4c-6161-4ae2-8c75-d4dd30f33d52]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d4c037-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:8f:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611141, 'reachable_time': 43087, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305599, 'error': None, 'target': 'ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:38 np0005593232 systemd[1]: libpod-conmon-4c0957a164d3def9becb1b80d3bfb8e7c42aa60134053132bc6cf6031f842206.scope: Deactivated successfully.
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.324 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[144d4707-d9dd-4bfa-9b9e-01b4147b1e09]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea3:8f67'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 611141, 'tstamp': 611141}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305600, 'error': None, 'target': 'ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.341 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9eb6dd60-68c8-4077-822f-81f1e580b9cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa0d4c037-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a3:8f:67'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 81], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611141, 'reachable_time': 43087, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305601, 'error': None, 'target': 'ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.371 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4fe0554c-0aa0-4dc2-98fd-1059745f5778]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.426 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d40d28e6-3c39-4f56-a15a-10f3372affd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.428 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d4c037-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.428 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.428 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0d4c037-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:56:38 np0005593232 kernel: tapa0d4c037-30: entered promiscuous mode
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.429 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:38 np0005593232 NetworkManager[49057]: <info>  [1769162198.4305] manager: (tapa0d4c037-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/138)
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.433 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.434 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa0d4c037-30, col_values=(('external_ids', {'iface-id': '2ec70d18-d892-413f-bf34-d9e626ddc8bc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.435 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.436 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.437 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a0d4c037-3e0a-4dd6-8e59-7c34955a40c5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a0d4c037-3e0a-4dd6-8e59-7c34955a40c5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.438 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9e21fae5-d6bc-4ace-9efa-df10911c6de7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:38 np0005593232 ovn_controller[151001]: 2026-01-23T09:56:38Z|00274|binding|INFO|Releasing lport 2ec70d18-d892-413f-bf34-d9e626ddc8bc from this chassis (sb_readonly=0)
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.438 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/a0d4c037-3e0a-4dd6-8e59-7c34955a40c5.pid.haproxy
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID a0d4c037-3e0a-4dd6-8e59-7c34955a40c5
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:56:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:38.439 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5', 'env', 'PROCESS_TAG=haproxy-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a0d4c037-3e0a-4dd6-8e59-7c34955a40c5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.454 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.781 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Removed pending event for 7da901b2-a103-4bdd-aebe-54f024af47bf due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.782 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162198.7810082, 7da901b2-a103-4bdd-aebe-54f024af47bf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.783 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.785 250273 DEBUG nova.compute.manager [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.789 250273 INFO nova.virt.libvirt.driver [-] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Instance rebooted successfully.#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.790 250273 DEBUG nova.compute.manager [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:56:38 np0005593232 podman[305784]: 2026-01-23 09:56:38.810736243 +0000 UTC m=+0.058702861 container create 0cb46d555b6b09cbb0b695b4d2d76f95f34c6daa7534311c81bf55b1b83ec7a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:56:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1964: 321 pgs: 321 active+clean; 134 MiB data, 765 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.850 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.854 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:56:38 np0005593232 systemd[1]: Started libpod-conmon-0cb46d555b6b09cbb0b695b4d2d76f95f34c6daa7534311c81bf55b1b83ec7a9.scope.
Jan 23 04:56:38 np0005593232 podman[305784]: 2026-01-23 09:56:38.779502625 +0000 UTC m=+0.027469263 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:56:38 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:56:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c3191da74bfa0a1a9302e7de9294310f4601985b1a731382950580feec6867d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.894 250273 DEBUG oslo_concurrency.lockutils [None req-db6fb512-a245-425a-a083-4a895222921a 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.896 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.897 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162198.7818449, 7da901b2-a103-4bdd-aebe-54f024af47bf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.897 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] VM Started (Lifecycle Event)#033[00m
Jan 23 04:56:38 np0005593232 podman[305784]: 2026-01-23 09:56:38.905618722 +0000 UTC m=+0.153585360 container init 0cb46d555b6b09cbb0b695b4d2d76f95f34c6daa7534311c81bf55b1b83ec7a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 23 04:56:38 np0005593232 podman[305784]: 2026-01-23 09:56:38.910755168 +0000 UTC m=+0.158721786 container start 0cb46d555b6b09cbb0b695b4d2d76f95f34c6daa7534311c81bf55b1b83ec7a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.924 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 04:56:38 np0005593232 nova_compute[250269]: 2026-01-23 09:56:38.935 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 04:56:38 np0005593232 neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5[305824]: [NOTICE]   (305828) : New worker (305831) forked
Jan 23 04:56:38 np0005593232 neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5[305824]: [NOTICE]   (305828) : Loading success.
Jan 23 04:56:38 np0005593232 podman[305830]: 2026-01-23 09:56:38.986037479 +0000 UTC m=+0.039875065 container create 1c12871aa470767ad7ea0add65ebf980cb3b0e4a445aa9093706d8239509f010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:56:39 np0005593232 systemd[1]: Started libpod-conmon-1c12871aa470767ad7ea0add65ebf980cb3b0e4a445aa9093706d8239509f010.scope.
Jan 23 04:56:39 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:56:39 np0005593232 podman[305830]: 2026-01-23 09:56:38.970322892 +0000 UTC m=+0.024160508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:56:39 np0005593232 podman[305830]: 2026-01-23 09:56:39.068252887 +0000 UTC m=+0.122090483 container init 1c12871aa470767ad7ea0add65ebf980cb3b0e4a445aa9093706d8239509f010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 23 04:56:39 np0005593232 podman[305830]: 2026-01-23 09:56:39.078954432 +0000 UTC m=+0.132792038 container start 1c12871aa470767ad7ea0add65ebf980cb3b0e4a445aa9093706d8239509f010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 04:56:39 np0005593232 podman[305830]: 2026-01-23 09:56:39.082793641 +0000 UTC m=+0.136631227 container attach 1c12871aa470767ad7ea0add65ebf980cb3b0e4a445aa9093706d8239509f010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 04:56:39 np0005593232 wonderful_goldstine[305856]: 167 167
Jan 23 04:56:39 np0005593232 systemd[1]: libpod-1c12871aa470767ad7ea0add65ebf980cb3b0e4a445aa9093706d8239509f010.scope: Deactivated successfully.
Jan 23 04:56:39 np0005593232 conmon[305856]: conmon 1c12871aa470767ad7ea <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1c12871aa470767ad7ea0add65ebf980cb3b0e4a445aa9093706d8239509f010.scope/container/memory.events
Jan 23 04:56:39 np0005593232 podman[305830]: 2026-01-23 09:56:39.086525217 +0000 UTC m=+0.140362793 container died 1c12871aa470767ad7ea0add65ebf980cb3b0e4a445aa9093706d8239509f010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_goldstine, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 04:56:39 np0005593232 podman[305830]: 2026-01-23 09:56:39.121662667 +0000 UTC m=+0.175500253 container remove 1c12871aa470767ad7ea0add65ebf980cb3b0e4a445aa9093706d8239509f010 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_goldstine, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:56:39 np0005593232 systemd[1]: libpod-conmon-1c12871aa470767ad7ea0add65ebf980cb3b0e4a445aa9093706d8239509f010.scope: Deactivated successfully.
Jan 23 04:56:39 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ac61a419175dce57dbf0aee5c931da9b2a4e1a972bb033280e78d458ec8c563f-merged.mount: Deactivated successfully.
Jan 23 04:56:39 np0005593232 podman[305880]: 2026-01-23 09:56:39.305467804 +0000 UTC m=+0.044099225 container create 3d6cb56e062d894762a43db085ae2a6607aad6de1ca095b8a30e3cc595928e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:56:39 np0005593232 systemd[1]: Started libpod-conmon-3d6cb56e062d894762a43db085ae2a6607aad6de1ca095b8a30e3cc595928e96.scope.
Jan 23 04:56:39 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:56:39 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2c5784c8a38a40d90055e96f1a4979d509fe3d559bc4a23278a66cf42e0333/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:56:39 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2c5784c8a38a40d90055e96f1a4979d509fe3d559bc4a23278a66cf42e0333/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:56:39 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2c5784c8a38a40d90055e96f1a4979d509fe3d559bc4a23278a66cf42e0333/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:56:39 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c2c5784c8a38a40d90055e96f1a4979d509fe3d559bc4a23278a66cf42e0333/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:56:39 np0005593232 podman[305880]: 2026-01-23 09:56:39.286568747 +0000 UTC m=+0.025200198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:56:39 np0005593232 podman[305880]: 2026-01-23 09:56:39.394027493 +0000 UTC m=+0.132658944 container init 3d6cb56e062d894762a43db085ae2a6607aad6de1ca095b8a30e3cc595928e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_raman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 23 04:56:39 np0005593232 podman[305880]: 2026-01-23 09:56:39.400642201 +0000 UTC m=+0.139273632 container start 3d6cb56e062d894762a43db085ae2a6607aad6de1ca095b8a30e3cc595928e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_raman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:56:39 np0005593232 podman[305880]: 2026-01-23 09:56:39.404480341 +0000 UTC m=+0.143111772 container attach 3d6cb56e062d894762a43db085ae2a6607aad6de1ca095b8a30e3cc595928e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_raman, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:56:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:39.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:39 np0005593232 nova_compute[250269]: 2026-01-23 09:56:39.794 250273 DEBUG nova.compute.manager [req-dc866444-e01e-4c4a-b415-f029075b3747 req-551b425e-e431-40e4-af74-f47ecccc8999 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Received event network-vif-plugged-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 04:56:39 np0005593232 nova_compute[250269]: 2026-01-23 09:56:39.795 250273 DEBUG oslo_concurrency.lockutils [req-dc866444-e01e-4c4a-b415-f029075b3747 req-551b425e-e431-40e4-af74-f47ecccc8999 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:56:39 np0005593232 nova_compute[250269]: 2026-01-23 09:56:39.795 250273 DEBUG oslo_concurrency.lockutils [req-dc866444-e01e-4c4a-b415-f029075b3747 req-551b425e-e431-40e4-af74-f47ecccc8999 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:56:39 np0005593232 nova_compute[250269]: 2026-01-23 09:56:39.796 250273 DEBUG oslo_concurrency.lockutils [req-dc866444-e01e-4c4a-b415-f029075b3747 req-551b425e-e431-40e4-af74-f47ecccc8999 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:56:39 np0005593232 nova_compute[250269]: 2026-01-23 09:56:39.796 250273 DEBUG nova.compute.manager [req-dc866444-e01e-4c4a-b415-f029075b3747 req-551b425e-e431-40e4-af74-f47ecccc8999 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] No waiting events found dispatching network-vif-plugged-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 04:56:39 np0005593232 nova_compute[250269]: 2026-01-23 09:56:39.796 250273 WARNING nova.compute.manager [req-dc866444-e01e-4c4a-b415-f029075b3747 req-551b425e-e431-40e4-af74-f47ecccc8999 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Received unexpected event network-vif-plugged-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 for instance with vm_state active and task_state None.
Jan 23 04:56:39 np0005593232 nova_compute[250269]: 2026-01-23 09:56:39.796 250273 DEBUG nova.compute.manager [req-dc866444-e01e-4c4a-b415-f029075b3747 req-551b425e-e431-40e4-af74-f47ecccc8999 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Received event network-vif-plugged-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 04:56:39 np0005593232 nova_compute[250269]: 2026-01-23 09:56:39.796 250273 DEBUG oslo_concurrency.lockutils [req-dc866444-e01e-4c4a-b415-f029075b3747 req-551b425e-e431-40e4-af74-f47ecccc8999 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:56:39 np0005593232 nova_compute[250269]: 2026-01-23 09:56:39.797 250273 DEBUG oslo_concurrency.lockutils [req-dc866444-e01e-4c4a-b415-f029075b3747 req-551b425e-e431-40e4-af74-f47ecccc8999 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:56:39 np0005593232 nova_compute[250269]: 2026-01-23 09:56:39.797 250273 DEBUG oslo_concurrency.lockutils [req-dc866444-e01e-4c4a-b415-f029075b3747 req-551b425e-e431-40e4-af74-f47ecccc8999 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:56:39 np0005593232 nova_compute[250269]: 2026-01-23 09:56:39.797 250273 DEBUG nova.compute.manager [req-dc866444-e01e-4c4a-b415-f029075b3747 req-551b425e-e431-40e4-af74-f47ecccc8999 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] No waiting events found dispatching network-vif-plugged-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 04:56:39 np0005593232 nova_compute[250269]: 2026-01-23 09:56:39.797 250273 WARNING nova.compute.manager [req-dc866444-e01e-4c4a-b415-f029075b3747 req-551b425e-e431-40e4-af74-f47ecccc8999 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Received unexpected event network-vif-plugged-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 for instance with vm_state active and task_state None.
Jan 23 04:56:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:39.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:40 np0005593232 ecstatic_raman[305896]: {
Jan 23 04:56:40 np0005593232 ecstatic_raman[305896]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:56:40 np0005593232 ecstatic_raman[305896]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:56:40 np0005593232 ecstatic_raman[305896]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:56:40 np0005593232 ecstatic_raman[305896]:        "osd_id": 0,
Jan 23 04:56:40 np0005593232 ecstatic_raman[305896]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:56:40 np0005593232 ecstatic_raman[305896]:        "type": "bluestore"
Jan 23 04:56:40 np0005593232 ecstatic_raman[305896]:    }
Jan 23 04:56:40 np0005593232 ecstatic_raman[305896]: }
Jan 23 04:56:40 np0005593232 systemd[1]: libpod-3d6cb56e062d894762a43db085ae2a6607aad6de1ca095b8a30e3cc595928e96.scope: Deactivated successfully.
Jan 23 04:56:40 np0005593232 podman[305880]: 2026-01-23 09:56:40.276382378 +0000 UTC m=+1.015013839 container died 3d6cb56e062d894762a43db085ae2a6607aad6de1ca095b8a30e3cc595928e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_raman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 04:56:40 np0005593232 systemd[1]: var-lib-containers-storage-overlay-3c2c5784c8a38a40d90055e96f1a4979d509fe3d559bc4a23278a66cf42e0333-merged.mount: Deactivated successfully.
Jan 23 04:56:40 np0005593232 podman[305880]: 2026-01-23 09:56:40.328529002 +0000 UTC m=+1.067160433 container remove 3d6cb56e062d894762a43db085ae2a6607aad6de1ca095b8a30e3cc595928e96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 04:56:40 np0005593232 systemd[1]: libpod-conmon-3d6cb56e062d894762a43db085ae2a6607aad6de1ca095b8a30e3cc595928e96.scope: Deactivated successfully.
Jan 23 04:56:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:56:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:56:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:56:40 np0005593232 nova_compute[250269]: 2026-01-23 09:56:40.386 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:56:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:56:40 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b8fad912-b046-49ae-8288-6d32344981ff does not exist
Jan 23 04:56:40 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 4a4c6519-4ab3-4814-a87c-9376f01a3649 does not exist
Jan 23 04:56:40 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 492903d5-3499-4d27-a3d3-b34a5a40ecab does not exist
Jan 23 04:56:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1965: 321 pgs: 321 active+clean; 134 MiB data, 765 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Jan 23 04:56:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:56:41 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:56:41 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:56:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:41.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:41 np0005593232 nova_compute[250269]: 2026-01-23 09:56:41.856 250273 DEBUG oslo_concurrency.lockutils [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Acquiring lock "7da901b2-a103-4bdd-aebe-54f024af47bf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:56:41 np0005593232 nova_compute[250269]: 2026-01-23 09:56:41.857 250273 DEBUG oslo_concurrency.lockutils [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:56:41 np0005593232 nova_compute[250269]: 2026-01-23 09:56:41.858 250273 DEBUG oslo_concurrency.lockutils [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Acquiring lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:56:41 np0005593232 nova_compute[250269]: 2026-01-23 09:56:41.858 250273 DEBUG oslo_concurrency.lockutils [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 04:56:41 np0005593232 nova_compute[250269]: 2026-01-23 09:56:41.858 250273 DEBUG oslo_concurrency.lockutils [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 04:56:41 np0005593232 nova_compute[250269]: 2026-01-23 09:56:41.859 250273 INFO nova.compute.manager [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Terminating instance
Jan 23 04:56:41 np0005593232 nova_compute[250269]: 2026-01-23 09:56:41.860 250273 DEBUG nova.compute.manager [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 23 04:56:41 np0005593232 kernel: tapf799aed2-67 (unregistering): left promiscuous mode
Jan 23 04:56:41 np0005593232 NetworkManager[49057]: <info>  [1769162201.9007] device (tapf799aed2-67): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:56:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:56:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:41.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:56:41 np0005593232 ovn_controller[151001]: 2026-01-23T09:56:41Z|00275|binding|INFO|Releasing lport f799aed2-670b-4bb6-8d95-cbc3b8ed9861 from this chassis (sb_readonly=0)
Jan 23 04:56:41 np0005593232 nova_compute[250269]: 2026-01-23 09:56:41.916 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:56:41 np0005593232 ovn_controller[151001]: 2026-01-23T09:56:41Z|00276|binding|INFO|Setting lport f799aed2-670b-4bb6-8d95-cbc3b8ed9861 down in Southbound
Jan 23 04:56:41 np0005593232 ovn_controller[151001]: 2026-01-23T09:56:41Z|00277|binding|INFO|Removing iface tapf799aed2-67 ovn-installed in OVS
Jan 23 04:56:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:41.929 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:10:cf 10.100.0.4'], port_security=['fa:16:3e:e5:10:cf 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7da901b2-a103-4bdd-aebe-54f024af47bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75a27ed7c12e4bfba34376ef35a14d04', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'f5dde0fe-1955-4c3c-a689-2b3b780903e4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2de10fee-58f4-4406-945d-ce76b4916b26, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=f799aed2-670b-4bb6-8d95-cbc3b8ed9861) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 04:56:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:41.931 161902 INFO neutron.agent.ovn.metadata.agent [-] Port f799aed2-670b-4bb6-8d95-cbc3b8ed9861 in datapath a0d4c037-3e0a-4dd6-8e59-7c34955a40c5 unbound from our chassis
Jan 23 04:56:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:41.932 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a0d4c037-3e0a-4dd6-8e59-7c34955a40c5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 23 04:56:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:41.933 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[52a83801-49c1-4827-a51a-6a3e3bbcc0a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 04:56:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:41.934 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5 namespace which is not needed anymore
Jan 23 04:56:41 np0005593232 nova_compute[250269]: 2026-01-23 09:56:41.938 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:56:41 np0005593232 nova_compute[250269]: 2026-01-23 09:56:41.953 250273 DEBUG nova.compute.manager [req-76993366-9f83-4a36-9b7c-fbac0990882c req-f5f8ddcd-a594-4bf3-b092-65ef56383e0c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Received event network-vif-plugged-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 04:56:41 np0005593232 nova_compute[250269]: 2026-01-23 09:56:41.953 250273 DEBUG oslo_concurrency.lockutils [req-76993366-9f83-4a36-9b7c-fbac0990882c req-f5f8ddcd-a594-4bf3-b092-65ef56383e0c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 04:56:41 np0005593232 nova_compute[250269]: 2026-01-23 09:56:41.953 250273 DEBUG oslo_concurrency.lockutils [req-76993366-9f83-4a36-9b7c-fbac0990882c req-f5f8ddcd-a594-4bf3-b092-65ef56383e0c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:56:41 np0005593232 nova_compute[250269]: 2026-01-23 09:56:41.954 250273 DEBUG oslo_concurrency.lockutils [req-76993366-9f83-4a36-9b7c-fbac0990882c req-f5f8ddcd-a594-4bf3-b092-65ef56383e0c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:56:41 np0005593232 nova_compute[250269]: 2026-01-23 09:56:41.954 250273 DEBUG nova.compute.manager [req-76993366-9f83-4a36-9b7c-fbac0990882c req-f5f8ddcd-a594-4bf3-b092-65ef56383e0c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] No waiting events found dispatching network-vif-plugged-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:56:41 np0005593232 nova_compute[250269]: 2026-01-23 09:56:41.954 250273 WARNING nova.compute.manager [req-76993366-9f83-4a36-9b7c-fbac0990882c req-f5f8ddcd-a594-4bf3-b092-65ef56383e0c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Received unexpected event network-vif-plugged-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 04:56:41 np0005593232 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000055.scope: Deactivated successfully.
Jan 23 04:56:41 np0005593232 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d00000055.scope: Consumed 3.830s CPU time.
Jan 23 04:56:41 np0005593232 systemd-machined[215836]: Machine qemu-33-instance-00000055 terminated.
Jan 23 04:56:42 np0005593232 neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5[305824]: [NOTICE]   (305828) : haproxy version is 2.8.14-c23fe91
Jan 23 04:56:42 np0005593232 neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5[305824]: [NOTICE]   (305828) : path to executable is /usr/sbin/haproxy
Jan 23 04:56:42 np0005593232 neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5[305824]: [WARNING]  (305828) : Exiting Master process...
Jan 23 04:56:42 np0005593232 neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5[305824]: [ALERT]    (305828) : Current worker (305831) exited with code 143 (Terminated)
Jan 23 04:56:42 np0005593232 neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5[305824]: [WARNING]  (305828) : All workers exited. Exiting... (0)
Jan 23 04:56:42 np0005593232 systemd[1]: libpod-0cb46d555b6b09cbb0b695b4d2d76f95f34c6daa7534311c81bf55b1b83ec7a9.scope: Deactivated successfully.
Jan 23 04:56:42 np0005593232 podman[306056]: 2026-01-23 09:56:42.066554525 +0000 UTC m=+0.044982030 container died 0cb46d555b6b09cbb0b695b4d2d76f95f34c6daa7534311c81bf55b1b83ec7a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 23 04:56:42 np0005593232 nova_compute[250269]: 2026-01-23 09:56:42.082 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:42 np0005593232 nova_compute[250269]: 2026-01-23 09:56:42.086 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:42 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0cb46d555b6b09cbb0b695b4d2d76f95f34c6daa7534311c81bf55b1b83ec7a9-userdata-shm.mount: Deactivated successfully.
Jan 23 04:56:42 np0005593232 systemd[1]: var-lib-containers-storage-overlay-2c3191da74bfa0a1a9302e7de9294310f4601985b1a731382950580feec6867d-merged.mount: Deactivated successfully.
Jan 23 04:56:42 np0005593232 nova_compute[250269]: 2026-01-23 09:56:42.101 250273 INFO nova.virt.libvirt.driver [-] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Instance destroyed successfully.#033[00m
Jan 23 04:56:42 np0005593232 nova_compute[250269]: 2026-01-23 09:56:42.102 250273 DEBUG nova.objects.instance [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Lazy-loading 'resources' on Instance uuid 7da901b2-a103-4bdd-aebe-54f024af47bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:56:42 np0005593232 podman[306056]: 2026-01-23 09:56:42.108191529 +0000 UTC m=+0.086619014 container cleanup 0cb46d555b6b09cbb0b695b4d2d76f95f34c6daa7534311c81bf55b1b83ec7a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 23 04:56:42 np0005593232 systemd[1]: libpod-conmon-0cb46d555b6b09cbb0b695b4d2d76f95f34c6daa7534311c81bf55b1b83ec7a9.scope: Deactivated successfully.
Jan 23 04:56:42 np0005593232 nova_compute[250269]: 2026-01-23 09:56:42.130 250273 DEBUG nova.virt.libvirt.vif [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:56:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-363490500',display_name='tempest-InstanceActionsTestJSON-server-363490500',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-363490500',id=85,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:56:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='75a27ed7c12e4bfba34376ef35a14d04',ramdisk_id='',reservation_id='r-npola8zv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-487384975',owner_user_name='tempest-InstanceActionsTestJSON-487384975-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:56:38Z,user_data=None,user_id='0c18a146d425428f8ba82d37fcdb9c02',uuid=7da901b2-a103-4bdd-aebe-54f024af47bf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "address": "fa:16:3e:e5:10:cf", "network": {"id": "a0d4c037-3e0a-4dd6-8e59-7c34955a40c5", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1317479630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75a27ed7c12e4bfba34376ef35a14d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf799aed2-67", "ovs_interfaceid": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:56:42 np0005593232 nova_compute[250269]: 2026-01-23 09:56:42.131 250273 DEBUG nova.network.os_vif_util [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Converting VIF {"id": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "address": "fa:16:3e:e5:10:cf", "network": {"id": "a0d4c037-3e0a-4dd6-8e59-7c34955a40c5", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-1317479630-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75a27ed7c12e4bfba34376ef35a14d04", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf799aed2-67", "ovs_interfaceid": "f799aed2-670b-4bb6-8d95-cbc3b8ed9861", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:56:42 np0005593232 nova_compute[250269]: 2026-01-23 09:56:42.132 250273 DEBUG nova.network.os_vif_util [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e5:10:cf,bridge_name='br-int',has_traffic_filtering=True,id=f799aed2-670b-4bb6-8d95-cbc3b8ed9861,network=Network(a0d4c037-3e0a-4dd6-8e59-7c34955a40c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf799aed2-67') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:56:42 np0005593232 nova_compute[250269]: 2026-01-23 09:56:42.133 250273 DEBUG os_vif [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:10:cf,bridge_name='br-int',has_traffic_filtering=True,id=f799aed2-670b-4bb6-8d95-cbc3b8ed9861,network=Network(a0d4c037-3e0a-4dd6-8e59-7c34955a40c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf799aed2-67') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:56:42 np0005593232 nova_compute[250269]: 2026-01-23 09:56:42.135 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:42 np0005593232 nova_compute[250269]: 2026-01-23 09:56:42.135 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf799aed2-67, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:56:42 np0005593232 nova_compute[250269]: 2026-01-23 09:56:42.139 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:42 np0005593232 nova_compute[250269]: 2026-01-23 09:56:42.141 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:56:42 np0005593232 nova_compute[250269]: 2026-01-23 09:56:42.143 250273 INFO os_vif [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:10:cf,bridge_name='br-int',has_traffic_filtering=True,id=f799aed2-670b-4bb6-8d95-cbc3b8ed9861,network=Network(a0d4c037-3e0a-4dd6-8e59-7c34955a40c5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf799aed2-67')#033[00m
Jan 23 04:56:42 np0005593232 podman[306091]: 2026-01-23 09:56:42.183351447 +0000 UTC m=+0.050531528 container remove 0cb46d555b6b09cbb0b695b4d2d76f95f34c6daa7534311c81bf55b1b83ec7a9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 04:56:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:42.189 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[be90d39e-642b-4ff9-a87c-7833ef5e0929]: (4, ('Fri Jan 23 09:56:42 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5 (0cb46d555b6b09cbb0b695b4d2d76f95f34c6daa7534311c81bf55b1b83ec7a9)\n0cb46d555b6b09cbb0b695b4d2d76f95f34c6daa7534311c81bf55b1b83ec7a9\nFri Jan 23 09:56:42 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5 (0cb46d555b6b09cbb0b695b4d2d76f95f34c6daa7534311c81bf55b1b83ec7a9)\n0cb46d555b6b09cbb0b695b4d2d76f95f34c6daa7534311c81bf55b1b83ec7a9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:42.191 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a0045f6c-e8db-4e59-bfac-6f1a635bdaf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:42.192 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0d4c037-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:56:42 np0005593232 nova_compute[250269]: 2026-01-23 09:56:42.194 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:42 np0005593232 kernel: tapa0d4c037-30: left promiscuous mode
Jan 23 04:56:42 np0005593232 nova_compute[250269]: 2026-01-23 09:56:42.207 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:42.210 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[99b2f9c7-47c9-4362-9f9c-de5247d7a206]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:42.225 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4667aa59-6b46-41c9-a07f-3592dfa5d56d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:42.226 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f81fa7d3-693e-4313-9385-312ca778fe66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:42.241 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6ac53d07-b4c2-42cb-b6bf-b9d2bedb3868]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 611134, 'reachable_time': 34288, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306121, 'error': None, 'target': 'ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:42.243 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a0d4c037-3e0a-4dd6-8e59-7c34955a40c5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 04:56:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:42.243 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[6822a57d-9250-4408-8beb-906017c6979f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:56:42 np0005593232 systemd[1]: run-netns-ovnmeta\x2da0d4c037\x2d3e0a\x2d4dd6\x2d8e59\x2d7c34955a40c5.mount: Deactivated successfully.
Jan 23 04:56:42 np0005593232 nova_compute[250269]: 2026-01-23 09:56:42.552 250273 INFO nova.virt.libvirt.driver [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Deleting instance files /var/lib/nova/instances/7da901b2-a103-4bdd-aebe-54f024af47bf_del#033[00m
Jan 23 04:56:42 np0005593232 nova_compute[250269]: 2026-01-23 09:56:42.553 250273 INFO nova.virt.libvirt.driver [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Deletion of /var/lib/nova/instances/7da901b2-a103-4bdd-aebe-54f024af47bf_del complete#033[00m
Jan 23 04:56:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:42.610 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:56:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:42.610 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:56:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:56:42.611 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:56:42 np0005593232 nova_compute[250269]: 2026-01-23 09:56:42.661 250273 INFO nova.compute.manager [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 04:56:42 np0005593232 nova_compute[250269]: 2026-01-23 09:56:42.662 250273 DEBUG oslo.service.loopingcall [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 04:56:42 np0005593232 nova_compute[250269]: 2026-01-23 09:56:42.663 250273 DEBUG nova.compute.manager [-] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 04:56:42 np0005593232 nova_compute[250269]: 2026-01-23 09:56:42.663 250273 DEBUG nova.network.neutron [-] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 04:56:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1966: 321 pgs: 321 active+clean; 96 MiB data, 765 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 265 op/s
Jan 23 04:56:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:43.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:43.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:44 np0005593232 nova_compute[250269]: 2026-01-23 09:56:44.247 250273 DEBUG nova.compute.manager [req-b30dc886-dc96-4c72-b709-ecc8558938d0 req-b31028b0-b716-427f-b524-39a9dbd1cb21 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Received event network-vif-unplugged-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:56:44 np0005593232 nova_compute[250269]: 2026-01-23 09:56:44.247 250273 DEBUG oslo_concurrency.lockutils [req-b30dc886-dc96-4c72-b709-ecc8558938d0 req-b31028b0-b716-427f-b524-39a9dbd1cb21 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:56:44 np0005593232 nova_compute[250269]: 2026-01-23 09:56:44.247 250273 DEBUG oslo_concurrency.lockutils [req-b30dc886-dc96-4c72-b709-ecc8558938d0 req-b31028b0-b716-427f-b524-39a9dbd1cb21 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:56:44 np0005593232 nova_compute[250269]: 2026-01-23 09:56:44.248 250273 DEBUG oslo_concurrency.lockutils [req-b30dc886-dc96-4c72-b709-ecc8558938d0 req-b31028b0-b716-427f-b524-39a9dbd1cb21 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:56:44 np0005593232 nova_compute[250269]: 2026-01-23 09:56:44.248 250273 DEBUG nova.compute.manager [req-b30dc886-dc96-4c72-b709-ecc8558938d0 req-b31028b0-b716-427f-b524-39a9dbd1cb21 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] No waiting events found dispatching network-vif-unplugged-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:56:44 np0005593232 nova_compute[250269]: 2026-01-23 09:56:44.248 250273 DEBUG nova.compute.manager [req-b30dc886-dc96-4c72-b709-ecc8558938d0 req-b31028b0-b716-427f-b524-39a9dbd1cb21 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Received event network-vif-unplugged-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 04:56:44 np0005593232 nova_compute[250269]: 2026-01-23 09:56:44.248 250273 DEBUG nova.compute.manager [req-b30dc886-dc96-4c72-b709-ecc8558938d0 req-b31028b0-b716-427f-b524-39a9dbd1cb21 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Received event network-vif-plugged-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:56:44 np0005593232 nova_compute[250269]: 2026-01-23 09:56:44.248 250273 DEBUG oslo_concurrency.lockutils [req-b30dc886-dc96-4c72-b709-ecc8558938d0 req-b31028b0-b716-427f-b524-39a9dbd1cb21 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:56:44 np0005593232 nova_compute[250269]: 2026-01-23 09:56:44.249 250273 DEBUG oslo_concurrency.lockutils [req-b30dc886-dc96-4c72-b709-ecc8558938d0 req-b31028b0-b716-427f-b524-39a9dbd1cb21 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:56:44 np0005593232 nova_compute[250269]: 2026-01-23 09:56:44.249 250273 DEBUG oslo_concurrency.lockutils [req-b30dc886-dc96-4c72-b709-ecc8558938d0 req-b31028b0-b716-427f-b524-39a9dbd1cb21 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:56:44 np0005593232 nova_compute[250269]: 2026-01-23 09:56:44.249 250273 DEBUG nova.compute.manager [req-b30dc886-dc96-4c72-b709-ecc8558938d0 req-b31028b0-b716-427f-b524-39a9dbd1cb21 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] No waiting events found dispatching network-vif-plugged-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:56:44 np0005593232 nova_compute[250269]: 2026-01-23 09:56:44.249 250273 WARNING nova.compute.manager [req-b30dc886-dc96-4c72-b709-ecc8558938d0 req-b31028b0-b716-427f-b524-39a9dbd1cb21 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Received unexpected event network-vif-plugged-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 04:56:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 04:56:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2800745796' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 04:56:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 04:56:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2800745796' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 04:56:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1967: 321 pgs: 321 active+clean; 96 MiB data, 765 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 15 KiB/s wr, 173 op/s
Jan 23 04:56:45 np0005593232 nova_compute[250269]: 2026-01-23 09:56:45.388 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:45 np0005593232 nova_compute[250269]: 2026-01-23 09:56:45.535 250273 DEBUG nova.network.neutron [-] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:56:45 np0005593232 nova_compute[250269]: 2026-01-23 09:56:45.576 250273 INFO nova.compute.manager [-] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Took 2.91 seconds to deallocate network for instance.#033[00m
Jan 23 04:56:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:45.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:45 np0005593232 nova_compute[250269]: 2026-01-23 09:56:45.708 250273 DEBUG oslo_concurrency.lockutils [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:56:45 np0005593232 nova_compute[250269]: 2026-01-23 09:56:45.708 250273 DEBUG oslo_concurrency.lockutils [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:56:45 np0005593232 nova_compute[250269]: 2026-01-23 09:56:45.794 250273 DEBUG oslo_concurrency.processutils [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:56:45 np0005593232 nova_compute[250269]: 2026-01-23 09:56:45.821 250273 DEBUG nova.compute.manager [req-3589cbb0-883a-4065-9490-7679a730b660 req-887c89f6-a0ae-4f25-bfa9-fcca7416f5bf 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Received event network-vif-deleted-f799aed2-670b-4bb6-8d95-cbc3b8ed9861 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:56:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:45.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:56:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:56:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4163048306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:56:46 np0005593232 nova_compute[250269]: 2026-01-23 09:56:46.226 250273 DEBUG oslo_concurrency.processutils [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:56:46 np0005593232 nova_compute[250269]: 2026-01-23 09:56:46.233 250273 DEBUG nova.compute.provider_tree [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:56:46 np0005593232 nova_compute[250269]: 2026-01-23 09:56:46.285 250273 DEBUG nova.scheduler.client.report [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:56:46 np0005593232 nova_compute[250269]: 2026-01-23 09:56:46.349 250273 DEBUG oslo_concurrency.lockutils [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:56:46 np0005593232 nova_compute[250269]: 2026-01-23 09:56:46.427 250273 INFO nova.scheduler.client.report [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Deleted allocations for instance 7da901b2-a103-4bdd-aebe-54f024af47bf#033[00m
Jan 23 04:56:46 np0005593232 nova_compute[250269]: 2026-01-23 09:56:46.555 250273 DEBUG oslo_concurrency.lockutils [None req-6f91a2e5-c4d0-410b-b21b-0a18d9fe6466 0c18a146d425428f8ba82d37fcdb9c02 75a27ed7c12e4bfba34376ef35a14d04 - - default default] Lock "7da901b2-a103-4bdd-aebe-54f024af47bf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:56:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1968: 321 pgs: 321 active+clean; 88 MiB data, 764 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 16 KiB/s wr, 182 op/s
Jan 23 04:56:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:56:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:56:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:56:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:56:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00099691891168807 of space, bias 1.0, pg target 0.299075673506421 quantized to 32 (current 32)
Jan 23 04:56:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:56:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 04:56:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:56:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:56:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:56:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 04:56:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:56:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:56:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:56:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:56:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:56:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:56:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:56:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:56:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:56:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:56:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:56:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:56:47 np0005593232 nova_compute[250269]: 2026-01-23 09:56:47.159 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:56:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:47.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:56:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:47.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1969: 321 pgs: 321 active+clean; 88 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 16 KiB/s wr, 178 op/s
Jan 23 04:56:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:49.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:49.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:50 np0005593232 nova_compute[250269]: 2026-01-23 09:56:50.391 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1970: 321 pgs: 321 active+clean; 88 MiB data, 747 MiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 16 KiB/s wr, 176 op/s
Jan 23 04:56:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:56:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:56:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:51.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:56:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:51.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:52 np0005593232 nova_compute[250269]: 2026-01-23 09:56:52.162 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1971: 321 pgs: 321 active+clean; 136 MiB data, 764 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 3.3 MiB/s wr, 260 op/s
Jan 23 04:56:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:53.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:53.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1972: 321 pgs: 321 active+clean; 136 MiB data, 764 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 94 op/s
Jan 23 04:56:55 np0005593232 nova_compute[250269]: 2026-01-23 09:56:55.393 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:55.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:55.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:56:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1973: 321 pgs: 321 active+clean; 161 MiB data, 777 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 98 op/s
Jan 23 04:56:57 np0005593232 nova_compute[250269]: 2026-01-23 09:56:57.099 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769162202.0971665, 7da901b2-a103-4bdd-aebe-54f024af47bf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:56:57 np0005593232 nova_compute[250269]: 2026-01-23 09:56:57.099 250273 INFO nova.compute.manager [-] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:56:57 np0005593232 nova_compute[250269]: 2026-01-23 09:56:57.167 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:57.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:56:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:57.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:56:58 np0005593232 nova_compute[250269]: 2026-01-23 09:56:58.168 250273 DEBUG nova.compute.manager [None req-3cd1b7a4-4f99-451e-ae26-a5b6a6014981 - - - - - -] [instance: 7da901b2-a103-4bdd-aebe-54f024af47bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:56:58 np0005593232 nova_compute[250269]: 2026-01-23 09:56:58.819 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:56:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1974: 321 pgs: 321 active+clean; 167 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 100 op/s
Jan 23 04:56:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:56:59.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:56:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:56:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:56:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:56:59.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:00 np0005593232 nova_compute[250269]: 2026-01-23 09:57:00.396 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:00 np0005593232 podman[306155]: 2026-01-23 09:57:00.439331975 +0000 UTC m=+0.090758762 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 23 04:57:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1975: 321 pgs: 321 active+clean; 167 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 350 KiB/s rd, 3.9 MiB/s wr, 99 op/s
Jan 23 04:57:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:57:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:01.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:01.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:02 np0005593232 nova_compute[250269]: 2026-01-23 09:57:02.171 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1976: 321 pgs: 321 active+clean; 167 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 351 KiB/s rd, 3.9 MiB/s wr, 101 op/s
Jan 23 04:57:03 np0005593232 podman[306235]: 2026-01-23 09:57:03.384187374 +0000 UTC m=+0.048106280 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 23 04:57:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:03.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:03.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1977: 321 pgs: 321 active+clean; 167 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 690 KiB/s wr, 16 op/s
Jan 23 04:57:05 np0005593232 nova_compute[250269]: 2026-01-23 09:57:05.397 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:05.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:05.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:57:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1978: 321 pgs: 321 active+clean; 167 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 690 KiB/s wr, 16 op/s
Jan 23 04:57:07 np0005593232 nova_compute[250269]: 2026-01-23 09:57:07.173 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:57:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:57:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:57:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:57:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:57:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:57:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:07.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:07.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:08.803 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:57:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:08.804 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:57:08 np0005593232 nova_compute[250269]: 2026-01-23 09:57:08.804 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1979: 321 pgs: 321 active+clean; 167 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 31 KiB/s wr, 12 op/s
Jan 23 04:57:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:09.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:09.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:10 np0005593232 nova_compute[250269]: 2026-01-23 09:57:10.399 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:10.807 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:57:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1980: 321 pgs: 321 active+clean; 167 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 1.1 KiB/s rd, 15 KiB/s wr, 1 op/s
Jan 23 04:57:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:57:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:11.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:11.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:12 np0005593232 nova_compute[250269]: 2026-01-23 09:57:12.176 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1981: 321 pgs: 321 active+clean; 88 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 16 KiB/s wr, 29 op/s
Jan 23 04:57:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:13.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:13.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1982: 321 pgs: 321 active+clean; 88 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.8 KiB/s wr, 27 op/s
Jan 23 04:57:15 np0005593232 nova_compute[250269]: 2026-01-23 09:57:15.401 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:57:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:15.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:57:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:15.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:57:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1983: 321 pgs: 321 active+clean; 88 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.8 KiB/s wr, 27 op/s
Jan 23 04:57:17 np0005593232 nova_compute[250269]: 2026-01-23 09:57:17.179 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:57:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:17.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:57:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:17.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1984: 321 pgs: 321 active+clean; 88 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.8 KiB/s wr, 27 op/s
Jan 23 04:57:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:19.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:19.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:20 np0005593232 nova_compute[250269]: 2026-01-23 09:57:20.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:57:20 np0005593232 nova_compute[250269]: 2026-01-23 09:57:20.403 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1985: 321 pgs: 321 active+clean; 88 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 04:57:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:57:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:57:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:21.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:57:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:21.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:22 np0005593232 nova_compute[250269]: 2026-01-23 09:57:22.182 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1986: 321 pgs: 321 active+clean; 88 MiB data, 735 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 04:57:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:57:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:23.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:57:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:57:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:23.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:57:24 np0005593232 nova_compute[250269]: 2026-01-23 09:57:24.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:57:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1987: 321 pgs: 321 active+clean; 88 MiB data, 735 MiB used, 20 GiB / 21 GiB avail
Jan 23 04:57:25 np0005593232 nova_compute[250269]: 2026-01-23 09:57:25.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:57:25 np0005593232 nova_compute[250269]: 2026-01-23 09:57:25.405 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:25.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:25.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:57:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1988: 321 pgs: 321 active+clean; 88 MiB data, 735 MiB used, 20 GiB / 21 GiB avail
Jan 23 04:57:27 np0005593232 nova_compute[250269]: 2026-01-23 09:57:27.186 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:27 np0005593232 nova_compute[250269]: 2026-01-23 09:57:27.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:57:27 np0005593232 nova_compute[250269]: 2026-01-23 09:57:27.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:57:27 np0005593232 nova_compute[250269]: 2026-01-23 09:57:27.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:57:27 np0005593232 nova_compute[250269]: 2026-01-23 09:57:27.325 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 04:57:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:57:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:27.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:57:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:57:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:27.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:57:28 np0005593232 nova_compute[250269]: 2026-01-23 09:57:28.290 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:57:28 np0005593232 nova_compute[250269]: 2026-01-23 09:57:28.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:57:28 np0005593232 nova_compute[250269]: 2026-01-23 09:57:28.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:57:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1989: 321 pgs: 321 active+clean; 88 MiB data, 735 MiB used, 20 GiB / 21 GiB avail
Jan 23 04:57:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:29.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:29.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:30 np0005593232 nova_compute[250269]: 2026-01-23 09:57:30.407 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1990: 321 pgs: 321 active+clean; 88 MiB data, 735 MiB used, 20 GiB / 21 GiB avail
Jan 23 04:57:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:57:31 np0005593232 podman[306318]: 2026-01-23 09:57:31.483805821 +0000 UTC m=+0.123661819 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 04:57:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:57:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:31.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:57:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:57:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:31.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:57:32 np0005593232 nova_compute[250269]: 2026-01-23 09:57:32.232 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:32 np0005593232 nova_compute[250269]: 2026-01-23 09:57:32.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:57:32 np0005593232 nova_compute[250269]: 2026-01-23 09:57:32.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:57:32 np0005593232 nova_compute[250269]: 2026-01-23 09:57:32.361 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:57:32 np0005593232 nova_compute[250269]: 2026-01-23 09:57:32.362 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:57:32 np0005593232 nova_compute[250269]: 2026-01-23 09:57:32.363 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:57:32 np0005593232 nova_compute[250269]: 2026-01-23 09:57:32.363 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:57:32 np0005593232 nova_compute[250269]: 2026-01-23 09:57:32.364 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:57:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:57:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/218973362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:57:32 np0005593232 nova_compute[250269]: 2026-01-23 09:57:32.831 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:57:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1991: 321 pgs: 321 active+clean; 88 MiB data, 735 MiB used, 20 GiB / 21 GiB avail
Jan 23 04:57:33 np0005593232 nova_compute[250269]: 2026-01-23 09:57:33.040 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:57:33 np0005593232 nova_compute[250269]: 2026-01-23 09:57:33.042 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4556MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:57:33 np0005593232 nova_compute[250269]: 2026-01-23 09:57:33.042 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:57:33 np0005593232 nova_compute[250269]: 2026-01-23 09:57:33.042 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:57:33 np0005593232 nova_compute[250269]: 2026-01-23 09:57:33.150 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:57:33 np0005593232 nova_compute[250269]: 2026-01-23 09:57:33.151 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:57:33 np0005593232 nova_compute[250269]: 2026-01-23 09:57:33.218 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:57:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:57:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1633467360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:57:33 np0005593232 nova_compute[250269]: 2026-01-23 09:57:33.670 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:57:33 np0005593232 nova_compute[250269]: 2026-01-23 09:57:33.676 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:57:33 np0005593232 nova_compute[250269]: 2026-01-23 09:57:33.704 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:57:33 np0005593232 nova_compute[250269]: 2026-01-23 09:57:33.732 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:57:33 np0005593232 nova_compute[250269]: 2026-01-23 09:57:33.733 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:57:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:57:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:33.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:57:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:33.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:34 np0005593232 podman[306393]: 2026-01-23 09:57:34.390851393 +0000 UTC m=+0.050199399 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 23 04:57:34 np0005593232 nova_compute[250269]: 2026-01-23 09:57:34.728 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:57:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1992: 321 pgs: 321 active+clean; 88 MiB data, 735 MiB used, 20 GiB / 21 GiB avail
Jan 23 04:57:35 np0005593232 nova_compute[250269]: 2026-01-23 09:57:35.409 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:35.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:35.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:57:36 np0005593232 nova_compute[250269]: 2026-01-23 09:57:36.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:57:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1993: 321 pgs: 321 active+clean; 103 MiB data, 745 MiB used, 20 GiB / 21 GiB avail; 597 B/s rd, 782 KiB/s wr, 1 op/s
Jan 23 04:57:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:57:37
Jan 23 04:57:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:57:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:57:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['vms', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.log', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'volumes', '.rgw.root', 'cephfs.cephfs.meta']
Jan 23 04:57:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:57:37 np0005593232 nova_compute[250269]: 2026-01-23 09:57:37.235 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:57:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:57:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:57:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:57:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:57:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:57:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:37.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:37.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:57:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:57:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:57:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:57:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:57:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:57:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:57:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:57:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:57:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:57:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1994: 321 pgs: 321 active+clean; 134 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 04:57:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:57:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:39.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:57:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:57:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:39.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:57:40 np0005593232 nova_compute[250269]: 2026-01-23 09:57:40.410 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1995: 321 pgs: 321 active+clean; 134 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 04:57:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:57:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:57:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:41.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:57:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:57:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:41.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:57:42 np0005593232 nova_compute[250269]: 2026-01-23 09:57:42.239 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 04:57:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:57:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 04:57:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:57:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:42.610 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:57:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:42.611 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:57:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:42.611 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:57:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1996: 321 pgs: 321 active+clean; 134 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 04:57:43 np0005593232 nova_compute[250269]: 2026-01-23 09:57:43.030 250273 DEBUG oslo_concurrency.lockutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Acquiring lock "eb090825-fac8-4425-a930-ef8258edf056" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:57:43 np0005593232 nova_compute[250269]: 2026-01-23 09:57:43.031 250273 DEBUG oslo_concurrency.lockutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Lock "eb090825-fac8-4425-a930-ef8258edf056" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:57:43 np0005593232 nova_compute[250269]: 2026-01-23 09:57:43.057 250273 DEBUG nova.compute.manager [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:57:43 np0005593232 nova_compute[250269]: 2026-01-23 09:57:43.181 250273 DEBUG oslo_concurrency.lockutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:57:43 np0005593232 nova_compute[250269]: 2026-01-23 09:57:43.181 250273 DEBUG oslo_concurrency.lockutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:57:43 np0005593232 nova_compute[250269]: 2026-01-23 09:57:43.193 250273 DEBUG nova.virt.hardware [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:57:43 np0005593232 nova_compute[250269]: 2026-01-23 09:57:43.194 250273 INFO nova.compute.claims [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:57:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:57:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:57:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:57:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:57:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:57:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:57:43 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7548512a-f147-41da-8ad1-11d3e5533b94 does not exist
Jan 23 04:57:43 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 4e9d4b18-aca1-48b6-a74d-9eae6b9b5f28 does not exist
Jan 23 04:57:43 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev be232229-9694-4fd6-b46b-b88f01a935e0 does not exist
Jan 23 04:57:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:57:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:57:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:57:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:57:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:57:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:57:43 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:57:43 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:57:43 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:57:43 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:57:43 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:57:43 np0005593232 nova_compute[250269]: 2026-01-23 09:57:43.434 250273 DEBUG oslo_concurrency.processutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:57:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:43.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:43 np0005593232 podman[306758]: 2026-01-23 09:57:43.821215263 +0000 UTC m=+0.041430299 container create 7c236d1193925f412f13158c893cdb57f2d1e699bc0bc444fa9e7f1d32a5a3f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:57:43 np0005593232 systemd[1]: Started libpod-conmon-7c236d1193925f412f13158c893cdb57f2d1e699bc0bc444fa9e7f1d32a5a3f5.scope.
Jan 23 04:57:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:57:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1272106989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:57:43 np0005593232 nova_compute[250269]: 2026-01-23 09:57:43.890 250273 DEBUG oslo_concurrency.processutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:57:43 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:57:43 np0005593232 podman[306758]: 2026-01-23 09:57:43.800527355 +0000 UTC m=+0.020742411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:57:43 np0005593232 nova_compute[250269]: 2026-01-23 09:57:43.899 250273 DEBUG nova.compute.provider_tree [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:57:43 np0005593232 podman[306758]: 2026-01-23 09:57:43.908351282 +0000 UTC m=+0.128566338 container init 7c236d1193925f412f13158c893cdb57f2d1e699bc0bc444fa9e7f1d32a5a3f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 04:57:43 np0005593232 podman[306758]: 2026-01-23 09:57:43.915701161 +0000 UTC m=+0.135916217 container start 7c236d1193925f412f13158c893cdb57f2d1e699bc0bc444fa9e7f1d32a5a3f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hertz, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:57:43 np0005593232 angry_hertz[306775]: 167 167
Jan 23 04:57:43 np0005593232 systemd[1]: libpod-7c236d1193925f412f13158c893cdb57f2d1e699bc0bc444fa9e7f1d32a5a3f5.scope: Deactivated successfully.
Jan 23 04:57:43 np0005593232 conmon[306775]: conmon 7c236d1193925f412f13 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7c236d1193925f412f13158c893cdb57f2d1e699bc0bc444fa9e7f1d32a5a3f5.scope/container/memory.events
Jan 23 04:57:43 np0005593232 nova_compute[250269]: 2026-01-23 09:57:43.921 250273 DEBUG nova.scheduler.client.report [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:57:43 np0005593232 podman[306758]: 2026-01-23 09:57:43.922182655 +0000 UTC m=+0.142397701 container attach 7c236d1193925f412f13158c893cdb57f2d1e699bc0bc444fa9e7f1d32a5a3f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 04:57:43 np0005593232 podman[306758]: 2026-01-23 09:57:43.923334928 +0000 UTC m=+0.143549954 container died 7c236d1193925f412f13158c893cdb57f2d1e699bc0bc444fa9e7f1d32a5a3f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 04:57:43 np0005593232 systemd[1]: var-lib-containers-storage-overlay-fe3a53c12669fc5efe8ceb21598c233387e2974d271c7504106ad371e864946b-merged.mount: Deactivated successfully.
Jan 23 04:57:43 np0005593232 podman[306758]: 2026-01-23 09:57:43.969967074 +0000 UTC m=+0.190182110 container remove 7c236d1193925f412f13158c893cdb57f2d1e699bc0bc444fa9e7f1d32a5a3f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:57:43 np0005593232 nova_compute[250269]: 2026-01-23 09:57:43.969 250273 DEBUG oslo_concurrency.lockutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:57:43 np0005593232 nova_compute[250269]: 2026-01-23 09:57:43.971 250273 DEBUG nova.compute.manager [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 04:57:43 np0005593232 systemd[1]: libpod-conmon-7c236d1193925f412f13158c893cdb57f2d1e699bc0bc444fa9e7f1d32a5a3f5.scope: Deactivated successfully.
Jan 23 04:57:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:43.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:44 np0005593232 podman[306802]: 2026-01-23 09:57:44.127191896 +0000 UTC m=+0.040465132 container create 8c8f618e67153560f86bb53eca12f10482b75ec6d21fb6d89096bc9fd1eb1f9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_colden, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 04:57:44 np0005593232 systemd[1]: Started libpod-conmon-8c8f618e67153560f86bb53eca12f10482b75ec6d21fb6d89096bc9fd1eb1f9b.scope.
Jan 23 04:57:44 np0005593232 nova_compute[250269]: 2026-01-23 09:57:44.182 250273 DEBUG nova.compute.manager [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 04:57:44 np0005593232 nova_compute[250269]: 2026-01-23 09:57:44.182 250273 DEBUG nova.network.neutron [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:57:44 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:57:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/149cfa82dfa1fb5f36d1d98a01ebfe23deedd66d23f58b66d7b485b9cdfde14f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:57:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/149cfa82dfa1fb5f36d1d98a01ebfe23deedd66d23f58b66d7b485b9cdfde14f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:57:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/149cfa82dfa1fb5f36d1d98a01ebfe23deedd66d23f58b66d7b485b9cdfde14f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:57:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/149cfa82dfa1fb5f36d1d98a01ebfe23deedd66d23f58b66d7b485b9cdfde14f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:57:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/149cfa82dfa1fb5f36d1d98a01ebfe23deedd66d23f58b66d7b485b9cdfde14f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:57:44 np0005593232 podman[306802]: 2026-01-23 09:57:44.203938689 +0000 UTC m=+0.117211965 container init 8c8f618e67153560f86bb53eca12f10482b75ec6d21fb6d89096bc9fd1eb1f9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Jan 23 04:57:44 np0005593232 podman[306802]: 2026-01-23 09:57:44.108290758 +0000 UTC m=+0.021564024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:57:44 np0005593232 nova_compute[250269]: 2026-01-23 09:57:44.211 250273 INFO nova.virt.libvirt.driver [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:57:44 np0005593232 podman[306802]: 2026-01-23 09:57:44.212437661 +0000 UTC m=+0.125710897 container start 8c8f618e67153560f86bb53eca12f10482b75ec6d21fb6d89096bc9fd1eb1f9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 04:57:44 np0005593232 podman[306802]: 2026-01-23 09:57:44.215753255 +0000 UTC m=+0.129026491 container attach 8c8f618e67153560f86bb53eca12f10482b75ec6d21fb6d89096bc9fd1eb1f9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_colden, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:57:44 np0005593232 nova_compute[250269]: 2026-01-23 09:57:44.247 250273 DEBUG nova.compute.manager [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:57:44 np0005593232 nova_compute[250269]: 2026-01-23 09:57:44.400 250273 DEBUG nova.compute.manager [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:57:44 np0005593232 nova_compute[250269]: 2026-01-23 09:57:44.402 250273 DEBUG nova.virt.libvirt.driver [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:57:44 np0005593232 nova_compute[250269]: 2026-01-23 09:57:44.403 250273 INFO nova.virt.libvirt.driver [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Creating image(s)#033[00m
Jan 23 04:57:44 np0005593232 nova_compute[250269]: 2026-01-23 09:57:44.438 250273 DEBUG nova.storage.rbd_utils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] rbd image eb090825-fac8-4425-a930-ef8258edf056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:57:44 np0005593232 nova_compute[250269]: 2026-01-23 09:57:44.473 250273 DEBUG nova.storage.rbd_utils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] rbd image eb090825-fac8-4425-a930-ef8258edf056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:57:44 np0005593232 nova_compute[250269]: 2026-01-23 09:57:44.508 250273 DEBUG nova.storage.rbd_utils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] rbd image eb090825-fac8-4425-a930-ef8258edf056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:57:44 np0005593232 nova_compute[250269]: 2026-01-23 09:57:44.513 250273 DEBUG oslo_concurrency.processutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:57:44 np0005593232 nova_compute[250269]: 2026-01-23 09:57:44.598 250273 DEBUG oslo_concurrency.processutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:57:44 np0005593232 nova_compute[250269]: 2026-01-23 09:57:44.600 250273 DEBUG oslo_concurrency.lockutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:57:44 np0005593232 nova_compute[250269]: 2026-01-23 09:57:44.601 250273 DEBUG oslo_concurrency.lockutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:57:44 np0005593232 nova_compute[250269]: 2026-01-23 09:57:44.602 250273 DEBUG oslo_concurrency.lockutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:57:44 np0005593232 nova_compute[250269]: 2026-01-23 09:57:44.637 250273 DEBUG nova.storage.rbd_utils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] rbd image eb090825-fac8-4425-a930-ef8258edf056_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:57:44 np0005593232 nova_compute[250269]: 2026-01-23 09:57:44.642 250273 DEBUG oslo_concurrency.processutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 eb090825-fac8-4425-a930-ef8258edf056_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:57:44 np0005593232 nova_compute[250269]: 2026-01-23 09:57:44.700 250273 DEBUG nova.policy [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2ba9c41274b44742bd852accd5d8cbe1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a591b57bf4334172b3794e4e70d83016', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 04:57:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1997: 321 pgs: 321 active+clean; 134 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 04:57:44 np0005593232 nova_compute[250269]: 2026-01-23 09:57:44.973 250273 DEBUG oslo_concurrency.processutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 eb090825-fac8-4425-a930-ef8258edf056_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.331s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:57:45 np0005593232 wizardly_colden[306819]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:57:45 np0005593232 wizardly_colden[306819]: --> relative data size: 1.0
Jan 23 04:57:45 np0005593232 wizardly_colden[306819]: --> All data devices are unavailable
Jan 23 04:57:45 np0005593232 nova_compute[250269]: 2026-01-23 09:57:45.042 250273 DEBUG nova.storage.rbd_utils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] resizing rbd image eb090825-fac8-4425-a930-ef8258edf056_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 04:57:45 np0005593232 systemd[1]: libpod-8c8f618e67153560f86bb53eca12f10482b75ec6d21fb6d89096bc9fd1eb1f9b.scope: Deactivated successfully.
Jan 23 04:57:45 np0005593232 podman[306802]: 2026-01-23 09:57:45.063608719 +0000 UTC m=+0.976881965 container died 8c8f618e67153560f86bb53eca12f10482b75ec6d21fb6d89096bc9fd1eb1f9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_colden, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 04:57:45 np0005593232 systemd[1]: var-lib-containers-storage-overlay-149cfa82dfa1fb5f36d1d98a01ebfe23deedd66d23f58b66d7b485b9cdfde14f-merged.mount: Deactivated successfully.
Jan 23 04:57:45 np0005593232 podman[306802]: 2026-01-23 09:57:45.111737417 +0000 UTC m=+1.025010653 container remove 8c8f618e67153560f86bb53eca12f10482b75ec6d21fb6d89096bc9fd1eb1f9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_colden, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 23 04:57:45 np0005593232 systemd[1]: libpod-conmon-8c8f618e67153560f86bb53eca12f10482b75ec6d21fb6d89096bc9fd1eb1f9b.scope: Deactivated successfully.
Jan 23 04:57:45 np0005593232 nova_compute[250269]: 2026-01-23 09:57:45.150 250273 DEBUG nova.objects.instance [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Lazy-loading 'migration_context' on Instance uuid eb090825-fac8-4425-a930-ef8258edf056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:57:45 np0005593232 nova_compute[250269]: 2026-01-23 09:57:45.188 250273 DEBUG nova.virt.libvirt.driver [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:57:45 np0005593232 nova_compute[250269]: 2026-01-23 09:57:45.189 250273 DEBUG nova.virt.libvirt.driver [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Ensure instance console log exists: /var/lib/nova/instances/eb090825-fac8-4425-a930-ef8258edf056/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:57:45 np0005593232 nova_compute[250269]: 2026-01-23 09:57:45.190 250273 DEBUG oslo_concurrency.lockutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:57:45 np0005593232 nova_compute[250269]: 2026-01-23 09:57:45.190 250273 DEBUG oslo_concurrency.lockutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:57:45 np0005593232 nova_compute[250269]: 2026-01-23 09:57:45.191 250273 DEBUG oslo_concurrency.lockutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:57:45 np0005593232 nova_compute[250269]: 2026-01-23 09:57:45.413 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:45 np0005593232 podman[307153]: 2026-01-23 09:57:45.725505574 +0000 UTC m=+0.059015979 container create 01ce15a733d0b9c5eaec3b0ce66dc4391ba72418832d713267abdf8910e84518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mclean, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 04:57:45 np0005593232 systemd[1]: Started libpod-conmon-01ce15a733d0b9c5eaec3b0ce66dc4391ba72418832d713267abdf8910e84518.scope.
Jan 23 04:57:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:57:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:45.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:57:45 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:57:45 np0005593232 podman[307153]: 2026-01-23 09:57:45.708666495 +0000 UTC m=+0.042176920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:57:45 np0005593232 podman[307153]: 2026-01-23 09:57:45.810654356 +0000 UTC m=+0.144164841 container init 01ce15a733d0b9c5eaec3b0ce66dc4391ba72418832d713267abdf8910e84518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mclean, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 23 04:57:45 np0005593232 podman[307153]: 2026-01-23 09:57:45.817700196 +0000 UTC m=+0.151210601 container start 01ce15a733d0b9c5eaec3b0ce66dc4391ba72418832d713267abdf8910e84518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mclean, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:57:45 np0005593232 vigorous_mclean[307169]: 167 167
Jan 23 04:57:45 np0005593232 systemd[1]: libpod-01ce15a733d0b9c5eaec3b0ce66dc4391ba72418832d713267abdf8910e84518.scope: Deactivated successfully.
Jan 23 04:57:45 np0005593232 podman[307153]: 2026-01-23 09:57:45.821225966 +0000 UTC m=+0.154736451 container attach 01ce15a733d0b9c5eaec3b0ce66dc4391ba72418832d713267abdf8910e84518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:57:45 np0005593232 podman[307153]: 2026-01-23 09:57:45.823644635 +0000 UTC m=+0.157155080 container died 01ce15a733d0b9c5eaec3b0ce66dc4391ba72418832d713267abdf8910e84518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:57:45 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7b63822dbbf32f3adb4c6f9604243ada7407a9ff385800d7deaaf6f2f3da5568-merged.mount: Deactivated successfully.
Jan 23 04:57:45 np0005593232 podman[307153]: 2026-01-23 09:57:45.870465467 +0000 UTC m=+0.203975872 container remove 01ce15a733d0b9c5eaec3b0ce66dc4391ba72418832d713267abdf8910e84518 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:57:45 np0005593232 systemd[1]: libpod-conmon-01ce15a733d0b9c5eaec3b0ce66dc4391ba72418832d713267abdf8910e84518.scope: Deactivated successfully.
Jan 23 04:57:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:46.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:46.095 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:57:46 np0005593232 nova_compute[250269]: 2026-01-23 09:57:46.096 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:46.096 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:57:46 np0005593232 podman[307194]: 2026-01-23 09:57:46.099555253 +0000 UTC m=+0.060472931 container create 241f27af268e7e6147277b67fc7b1b5fd7bfd0de0dd088f0f2d820cefbfded6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 04:57:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:57:46 np0005593232 systemd[1]: Started libpod-conmon-241f27af268e7e6147277b67fc7b1b5fd7bfd0de0dd088f0f2d820cefbfded6c.scope.
Jan 23 04:57:46 np0005593232 podman[307194]: 2026-01-23 09:57:46.0797639 +0000 UTC m=+0.040681608 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:57:46 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:57:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2ed7433598ff34209a27b80e2b9cc73ba69f8e2c3e0edb7d55cd160219ba61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:57:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2ed7433598ff34209a27b80e2b9cc73ba69f8e2c3e0edb7d55cd160219ba61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:57:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2ed7433598ff34209a27b80e2b9cc73ba69f8e2c3e0edb7d55cd160219ba61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:57:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2ed7433598ff34209a27b80e2b9cc73ba69f8e2c3e0edb7d55cd160219ba61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:57:46 np0005593232 podman[307194]: 2026-01-23 09:57:46.194079822 +0000 UTC m=+0.154997520 container init 241f27af268e7e6147277b67fc7b1b5fd7bfd0de0dd088f0f2d820cefbfded6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:57:46 np0005593232 podman[307194]: 2026-01-23 09:57:46.19966847 +0000 UTC m=+0.160586148 container start 241f27af268e7e6147277b67fc7b1b5fd7bfd0de0dd088f0f2d820cefbfded6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 04:57:46 np0005593232 podman[307194]: 2026-01-23 09:57:46.203413037 +0000 UTC m=+0.164330715 container attach 241f27af268e7e6147277b67fc7b1b5fd7bfd0de0dd088f0f2d820cefbfded6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:57:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1998: 321 pgs: 321 active+clean; 146 MiB data, 757 MiB used, 20 GiB / 21 GiB avail; 887 KiB/s rd, 2.2 MiB/s wr, 60 op/s
Jan 23 04:57:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:57:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:57:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:57:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:57:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0012117517564675228 of space, bias 1.0, pg target 0.36352552694025686 quantized to 32 (current 32)
Jan 23 04:57:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:57:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009896487646566163 of space, bias 1.0, pg target 0.2968946293969849 quantized to 32 (current 32)
Jan 23 04:57:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:57:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:57:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:57:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 04:57:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:57:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:57:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:57:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:57:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:57:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:57:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:57:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:57:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:57:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:57:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:57:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:57:46 np0005593232 nova_compute[250269]: 2026-01-23 09:57:46.902 250273 DEBUG nova.network.neutron [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Successfully created port: 52545d4d-b803-4245-9bb8-963733848c65 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 23 04:57:46 np0005593232 elastic_elion[307210]: {
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:    "0": [
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:        {
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:            "devices": [
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:                "/dev/loop3"
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:            ],
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:            "lv_name": "ceph_lv0",
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:            "lv_size": "7511998464",
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:            "name": "ceph_lv0",
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:            "tags": {
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:                "ceph.cluster_name": "ceph",
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:                "ceph.crush_device_class": "",
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:                "ceph.encrypted": "0",
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:                "ceph.osd_id": "0",
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:                "ceph.type": "block",
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:                "ceph.vdo": "0"
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:            },
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:            "type": "block",
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:            "vg_name": "ceph_vg0"
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:        }
Jan 23 04:57:46 np0005593232 elastic_elion[307210]:    ]
Jan 23 04:57:46 np0005593232 elastic_elion[307210]: }
Jan 23 04:57:46 np0005593232 systemd[1]: libpod-241f27af268e7e6147277b67fc7b1b5fd7bfd0de0dd088f0f2d820cefbfded6c.scope: Deactivated successfully.
Jan 23 04:57:46 np0005593232 podman[307194]: 2026-01-23 09:57:46.982185867 +0000 UTC m=+0.943103545 container died 241f27af268e7e6147277b67fc7b1b5fd7bfd0de0dd088f0f2d820cefbfded6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 04:57:47 np0005593232 systemd[1]: var-lib-containers-storage-overlay-3f2ed7433598ff34209a27b80e2b9cc73ba69f8e2c3e0edb7d55cd160219ba61-merged.mount: Deactivated successfully.
Jan 23 04:57:47 np0005593232 podman[307194]: 2026-01-23 09:57:47.034825544 +0000 UTC m=+0.995743222 container remove 241f27af268e7e6147277b67fc7b1b5fd7bfd0de0dd088f0f2d820cefbfded6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:57:47 np0005593232 systemd[1]: libpod-conmon-241f27af268e7e6147277b67fc7b1b5fd7bfd0de0dd088f0f2d820cefbfded6c.scope: Deactivated successfully.
Jan 23 04:57:47 np0005593232 nova_compute[250269]: 2026-01-23 09:57:47.241 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 04:57:47 np0005593232 ovn_controller[151001]: 2026-01-23T09:57:47Z|00278|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 23 04:57:47 np0005593232 podman[307371]: 2026-01-23 09:57:47.662521947 +0000 UTC m=+0.043910250 container create 000f0b4a7d9cae6badc73e2b26fcfa5fe4dace78dc2dceb97f11dc9393f69dae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rubin, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 23 04:57:47 np0005593232 systemd[1]: Started libpod-conmon-000f0b4a7d9cae6badc73e2b26fcfa5fe4dace78dc2dceb97f11dc9393f69dae.scope.
Jan 23 04:57:47 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:57:47 np0005593232 podman[307371]: 2026-01-23 09:57:47.640291725 +0000 UTC m=+0.021680048 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:57:47 np0005593232 podman[307371]: 2026-01-23 09:57:47.75298395 +0000 UTC m=+0.134372273 container init 000f0b4a7d9cae6badc73e2b26fcfa5fe4dace78dc2dceb97f11dc9393f69dae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rubin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 23 04:57:47 np0005593232 podman[307371]: 2026-01-23 09:57:47.764880538 +0000 UTC m=+0.146268841 container start 000f0b4a7d9cae6badc73e2b26fcfa5fe4dace78dc2dceb97f11dc9393f69dae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rubin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:57:47 np0005593232 podman[307371]: 2026-01-23 09:57:47.768058619 +0000 UTC m=+0.149446952 container attach 000f0b4a7d9cae6badc73e2b26fcfa5fe4dace78dc2dceb97f11dc9393f69dae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rubin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 04:57:47 np0005593232 sweet_rubin[307387]: 167 167
Jan 23 04:57:47 np0005593232 systemd[1]: libpod-000f0b4a7d9cae6badc73e2b26fcfa5fe4dace78dc2dceb97f11dc9393f69dae.scope: Deactivated successfully.
Jan 23 04:57:47 np0005593232 podman[307371]: 2026-01-23 09:57:47.771534248 +0000 UTC m=+0.152922551 container died 000f0b4a7d9cae6badc73e2b26fcfa5fe4dace78dc2dceb97f11dc9393f69dae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rubin, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:57:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:47.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:47 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4295602a4a3be76c225dd13138094a4e2b1e0d2df0872c28c745650eb4ca2bb2-merged.mount: Deactivated successfully.
Jan 23 04:57:47 np0005593232 podman[307371]: 2026-01-23 09:57:47.81695501 +0000 UTC m=+0.198343323 container remove 000f0b4a7d9cae6badc73e2b26fcfa5fe4dace78dc2dceb97f11dc9393f69dae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 23 04:57:47 np0005593232 systemd[1]: libpod-conmon-000f0b4a7d9cae6badc73e2b26fcfa5fe4dace78dc2dceb97f11dc9393f69dae.scope: Deactivated successfully.
Jan 23 04:57:47 np0005593232 podman[307411]: 2026-01-23 09:57:47.989516878 +0000 UTC m=+0.039720021 container create f015cc9c1273675f9c50ce66a7d472eb43e9777b45d41eda58d8f7105a62a75a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:57:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:48.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:48 np0005593232 systemd[1]: Started libpod-conmon-f015cc9c1273675f9c50ce66a7d472eb43e9777b45d41eda58d8f7105a62a75a.scope.
Jan 23 04:57:48 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:57:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b16ff0a992260f4db681737a9f83a0693b16e86ad0c88eb69d39a985f42cfc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:57:48 np0005593232 podman[307411]: 2026-01-23 09:57:47.971248678 +0000 UTC m=+0.021451831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:57:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b16ff0a992260f4db681737a9f83a0693b16e86ad0c88eb69d39a985f42cfc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:57:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b16ff0a992260f4db681737a9f83a0693b16e86ad0c88eb69d39a985f42cfc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:57:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b16ff0a992260f4db681737a9f83a0693b16e86ad0c88eb69d39a985f42cfc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:57:48 np0005593232 podman[307411]: 2026-01-23 09:57:48.08420262 +0000 UTC m=+0.134405873 container init f015cc9c1273675f9c50ce66a7d472eb43e9777b45d41eda58d8f7105a62a75a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_poitras, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:57:48 np0005593232 podman[307411]: 2026-01-23 09:57:48.092925558 +0000 UTC m=+0.143128691 container start f015cc9c1273675f9c50ce66a7d472eb43e9777b45d41eda58d8f7105a62a75a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_poitras, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:57:48 np0005593232 podman[307411]: 2026-01-23 09:57:48.096093398 +0000 UTC m=+0.146296631 container attach f015cc9c1273675f9c50ce66a7d472eb43e9777b45d41eda58d8f7105a62a75a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 04:57:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v1999: 321 pgs: 321 active+clean; 149 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 146 op/s
Jan 23 04:57:48 np0005593232 nervous_poitras[307427]: {
Jan 23 04:57:48 np0005593232 nervous_poitras[307427]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:57:48 np0005593232 nervous_poitras[307427]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:57:48 np0005593232 nervous_poitras[307427]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:57:48 np0005593232 nervous_poitras[307427]:        "osd_id": 0,
Jan 23 04:57:48 np0005593232 nervous_poitras[307427]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:57:48 np0005593232 nervous_poitras[307427]:        "type": "bluestore"
Jan 23 04:57:48 np0005593232 nervous_poitras[307427]:    }
Jan 23 04:57:48 np0005593232 nervous_poitras[307427]: }
Jan 23 04:57:48 np0005593232 systemd[1]: libpod-f015cc9c1273675f9c50ce66a7d472eb43e9777b45d41eda58d8f7105a62a75a.scope: Deactivated successfully.
Jan 23 04:57:48 np0005593232 podman[307411]: 2026-01-23 09:57:48.95824021 +0000 UTC m=+1.008443363 container died f015cc9c1273675f9c50ce66a7d472eb43e9777b45d41eda58d8f7105a62a75a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_poitras, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:57:48 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f8b16ff0a992260f4db681737a9f83a0693b16e86ad0c88eb69d39a985f42cfc-merged.mount: Deactivated successfully.
Jan 23 04:57:49 np0005593232 podman[307411]: 2026-01-23 09:57:49.024839214 +0000 UTC m=+1.075042357 container remove f015cc9c1273675f9c50ce66a7d472eb43e9777b45d41eda58d8f7105a62a75a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 04:57:49 np0005593232 systemd[1]: libpod-conmon-f015cc9c1273675f9c50ce66a7d472eb43e9777b45d41eda58d8f7105a62a75a.scope: Deactivated successfully.
Jan 23 04:57:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:57:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:57:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:57:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:57:49 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 4af0369b-dac8-47f5-bcd1-ffe1b1459656 does not exist
Jan 23 04:57:49 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7479409c-c177-44ae-9463-60e41939b560 does not exist
Jan 23 04:57:49 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d026db86-4d79-46d1-800e-33168097f511 does not exist
Jan 23 04:57:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:49.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:50.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:50 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:57:50 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:57:50 np0005593232 nova_compute[250269]: 2026-01-23 09:57:50.414 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:50 np0005593232 nova_compute[250269]: 2026-01-23 09:57:50.741 250273 DEBUG nova.network.neutron [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Successfully updated port: 52545d4d-b803-4245-9bb8-963733848c65 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 04:57:50 np0005593232 nova_compute[250269]: 2026-01-23 09:57:50.806 250273 DEBUG oslo_concurrency.lockutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Acquiring lock "refresh_cache-eb090825-fac8-4425-a930-ef8258edf056" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:57:50 np0005593232 nova_compute[250269]: 2026-01-23 09:57:50.806 250273 DEBUG oslo_concurrency.lockutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Acquired lock "refresh_cache-eb090825-fac8-4425-a930-ef8258edf056" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:57:50 np0005593232 nova_compute[250269]: 2026-01-23 09:57:50.807 250273 DEBUG nova.network.neutron [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:57:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2000: 321 pgs: 321 active+clean; 149 MiB data, 762 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 121 op/s
Jan 23 04:57:51 np0005593232 nova_compute[250269]: 2026-01-23 09:57:51.058 250273 DEBUG nova.compute.manager [req-387fa756-070f-4671-9601-d3978640c31b req-b6e3970c-5ae6-4246-b9ec-39cdbf2d4a1a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Received event network-changed-52545d4d-b803-4245-9bb8-963733848c65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:57:51 np0005593232 nova_compute[250269]: 2026-01-23 09:57:51.058 250273 DEBUG nova.compute.manager [req-387fa756-070f-4671-9601-d3978640c31b req-b6e3970c-5ae6-4246-b9ec-39cdbf2d4a1a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Refreshing instance network info cache due to event network-changed-52545d4d-b803-4245-9bb8-963733848c65. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:57:51 np0005593232 nova_compute[250269]: 2026-01-23 09:57:51.058 250273 DEBUG oslo_concurrency.lockutils [req-387fa756-070f-4671-9601-d3978640c31b req-b6e3970c-5ae6-4246-b9ec-39cdbf2d4a1a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-eb090825-fac8-4425-a930-ef8258edf056" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:57:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:57:51 np0005593232 nova_compute[250269]: 2026-01-23 09:57:51.354 250273 DEBUG nova.network.neutron [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:57:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:51.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:52.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:52 np0005593232 nova_compute[250269]: 2026-01-23 09:57:52.244 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2001: 321 pgs: 321 active+clean; 134 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Jan 23 04:57:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:53.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:53 np0005593232 nova_compute[250269]: 2026-01-23 09:57:53.926 250273 DEBUG nova.network.neutron [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Updating instance_info_cache with network_info: [{"id": "52545d4d-b803-4245-9bb8-963733848c65", "address": "fa:16:3e:5d:6c:2d", "network": {"id": "339226f8-9e77-474d-b9e0-019c81e7dc5f", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1887237548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a591b57bf4334172b3794e4e70d83016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52545d4d-b8", "ovs_interfaceid": "52545d4d-b803-4245-9bb8-963733848c65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:57:53 np0005593232 nova_compute[250269]: 2026-01-23 09:57:53.964 250273 DEBUG oslo_concurrency.lockutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Releasing lock "refresh_cache-eb090825-fac8-4425-a930-ef8258edf056" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:57:53 np0005593232 nova_compute[250269]: 2026-01-23 09:57:53.965 250273 DEBUG nova.compute.manager [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Instance network_info: |[{"id": "52545d4d-b803-4245-9bb8-963733848c65", "address": "fa:16:3e:5d:6c:2d", "network": {"id": "339226f8-9e77-474d-b9e0-019c81e7dc5f", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1887237548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a591b57bf4334172b3794e4e70d83016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52545d4d-b8", "ovs_interfaceid": "52545d4d-b803-4245-9bb8-963733848c65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:57:53 np0005593232 nova_compute[250269]: 2026-01-23 09:57:53.965 250273 DEBUG oslo_concurrency.lockutils [req-387fa756-070f-4671-9601-d3978640c31b req-b6e3970c-5ae6-4246-b9ec-39cdbf2d4a1a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-eb090825-fac8-4425-a930-ef8258edf056" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:57:53 np0005593232 nova_compute[250269]: 2026-01-23 09:57:53.966 250273 DEBUG nova.network.neutron [req-387fa756-070f-4671-9601-d3978640c31b req-b6e3970c-5ae6-4246-b9ec-39cdbf2d4a1a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Refreshing network info cache for port 52545d4d-b803-4245-9bb8-963733848c65 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:57:53 np0005593232 nova_compute[250269]: 2026-01-23 09:57:53.968 250273 DEBUG nova.virt.libvirt.driver [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Start _get_guest_xml network_info=[{"id": "52545d4d-b803-4245-9bb8-963733848c65", "address": "fa:16:3e:5d:6c:2d", "network": {"id": "339226f8-9e77-474d-b9e0-019c81e7dc5f", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1887237548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a591b57bf4334172b3794e4e70d83016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52545d4d-b8", "ovs_interfaceid": "52545d4d-b803-4245-9bb8-963733848c65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:57:53 np0005593232 nova_compute[250269]: 2026-01-23 09:57:53.972 250273 WARNING nova.virt.libvirt.driver [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:57:53 np0005593232 nova_compute[250269]: 2026-01-23 09:57:53.979 250273 DEBUG nova.virt.libvirt.host [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:57:53 np0005593232 nova_compute[250269]: 2026-01-23 09:57:53.981 250273 DEBUG nova.virt.libvirt.host [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:57:53 np0005593232 nova_compute[250269]: 2026-01-23 09:57:53.985 250273 DEBUG nova.virt.libvirt.host [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:57:53 np0005593232 nova_compute[250269]: 2026-01-23 09:57:53.985 250273 DEBUG nova.virt.libvirt.host [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:57:53 np0005593232 nova_compute[250269]: 2026-01-23 09:57:53.986 250273 DEBUG nova.virt.libvirt.driver [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:57:53 np0005593232 nova_compute[250269]: 2026-01-23 09:57:53.986 250273 DEBUG nova.virt.hardware [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:57:53 np0005593232 nova_compute[250269]: 2026-01-23 09:57:53.987 250273 DEBUG nova.virt.hardware [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:57:53 np0005593232 nova_compute[250269]: 2026-01-23 09:57:53.987 250273 DEBUG nova.virt.hardware [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:57:53 np0005593232 nova_compute[250269]: 2026-01-23 09:57:53.987 250273 DEBUG nova.virt.hardware [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:57:53 np0005593232 nova_compute[250269]: 2026-01-23 09:57:53.987 250273 DEBUG nova.virt.hardware [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:57:53 np0005593232 nova_compute[250269]: 2026-01-23 09:57:53.988 250273 DEBUG nova.virt.hardware [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:57:53 np0005593232 nova_compute[250269]: 2026-01-23 09:57:53.988 250273 DEBUG nova.virt.hardware [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:57:53 np0005593232 nova_compute[250269]: 2026-01-23 09:57:53.988 250273 DEBUG nova.virt.hardware [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:57:53 np0005593232 nova_compute[250269]: 2026-01-23 09:57:53.988 250273 DEBUG nova.virt.hardware [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:57:53 np0005593232 nova_compute[250269]: 2026-01-23 09:57:53.989 250273 DEBUG nova.virt.hardware [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:57:53 np0005593232 nova_compute[250269]: 2026-01-23 09:57:53.989 250273 DEBUG nova.virt.hardware [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:57:53 np0005593232 nova_compute[250269]: 2026-01-23 09:57:53.991 250273 DEBUG oslo_concurrency.processutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:57:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:54.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:57:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3893006682' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.430 250273 DEBUG oslo_concurrency.processutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.460 250273 DEBUG nova.storage.rbd_utils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] rbd image eb090825-fac8-4425-a930-ef8258edf056_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.465 250273 DEBUG oslo_concurrency.processutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:57:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2002: 321 pgs: 321 active+clean; 134 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Jan 23 04:57:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:57:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2550311669' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.934 250273 DEBUG oslo_concurrency.processutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.937 250273 DEBUG nova.virt.libvirt.vif [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:57:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1174123994',display_name='tempest-InstanceActionsNegativeTestJSON-server-1174123994',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1174123994',id=88,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a591b57bf4334172b3794e4e70d83016',ramdisk_id='',reservation_id='r-a370fdxf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-2106267144',owner_user_name='tempest-InstanceActionsNegativeTestJSON-2106267144-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:57:44Z,user_data=None,user_id='2ba9c41274b44742bd852accd5d8cbe1',uuid=eb090825-fac8-4425-a930-ef8258edf056,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52545d4d-b803-4245-9bb8-963733848c65", "address": "fa:16:3e:5d:6c:2d", "network": {"id": "339226f8-9e77-474d-b9e0-019c81e7dc5f", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1887237548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a591b57bf4334172b3794e4e70d83016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52545d4d-b8", "ovs_interfaceid": "52545d4d-b803-4245-9bb8-963733848c65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.938 250273 DEBUG nova.network.os_vif_util [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Converting VIF {"id": "52545d4d-b803-4245-9bb8-963733848c65", "address": "fa:16:3e:5d:6c:2d", "network": {"id": "339226f8-9e77-474d-b9e0-019c81e7dc5f", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1887237548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a591b57bf4334172b3794e4e70d83016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52545d4d-b8", "ovs_interfaceid": "52545d4d-b803-4245-9bb8-963733848c65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.940 250273 DEBUG nova.network.os_vif_util [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:6c:2d,bridge_name='br-int',has_traffic_filtering=True,id=52545d4d-b803-4245-9bb8-963733848c65,network=Network(339226f8-9e77-474d-b9e0-019c81e7dc5f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52545d4d-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.943 250273 DEBUG nova.objects.instance [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Lazy-loading 'pci_devices' on Instance uuid eb090825-fac8-4425-a930-ef8258edf056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.968 250273 DEBUG nova.virt.libvirt.driver [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:57:54 np0005593232 nova_compute[250269]:  <uuid>eb090825-fac8-4425-a930-ef8258edf056</uuid>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:  <name>instance-00000058</name>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <nova:name>tempest-InstanceActionsNegativeTestJSON-server-1174123994</nova:name>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:57:53</nova:creationTime>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:57:54 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:        <nova:user uuid="2ba9c41274b44742bd852accd5d8cbe1">tempest-InstanceActionsNegativeTestJSON-2106267144-project-member</nova:user>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:        <nova:project uuid="a591b57bf4334172b3794e4e70d83016">tempest-InstanceActionsNegativeTestJSON-2106267144</nova:project>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:        <nova:port uuid="52545d4d-b803-4245-9bb8-963733848c65">
Jan 23 04:57:54 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <entry name="serial">eb090825-fac8-4425-a930-ef8258edf056</entry>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <entry name="uuid">eb090825-fac8-4425-a930-ef8258edf056</entry>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/eb090825-fac8-4425-a930-ef8258edf056_disk">
Jan 23 04:57:54 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:57:54 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/eb090825-fac8-4425-a930-ef8258edf056_disk.config">
Jan 23 04:57:54 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:57:54 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:5d:6c:2d"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <target dev="tap52545d4d-b8"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/eb090825-fac8-4425-a930-ef8258edf056/console.log" append="off"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:57:54 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:57:54 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:57:54 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:57:54 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.971 250273 DEBUG nova.compute.manager [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Preparing to wait for external event network-vif-plugged-52545d4d-b803-4245-9bb8-963733848c65 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.972 250273 DEBUG oslo_concurrency.lockutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Acquiring lock "eb090825-fac8-4425-a930-ef8258edf056-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.972 250273 DEBUG oslo_concurrency.lockutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Lock "eb090825-fac8-4425-a930-ef8258edf056-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.972 250273 DEBUG oslo_concurrency.lockutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Lock "eb090825-fac8-4425-a930-ef8258edf056-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.973 250273 DEBUG nova.virt.libvirt.vif [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:57:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1174123994',display_name='tempest-InstanceActionsNegativeTestJSON-server-1174123994',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1174123994',id=88,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a591b57bf4334172b3794e4e70d83016',ramdisk_id='',reservation_id='r-a370fdxf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-2106267144',owner_user_name='tempest-InstanceActionsNegativeTestJSON-2106267144-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:57:44Z,user_data=None,user_id='2ba9c41274b44742bd852accd5d8cbe1',uuid=eb090825-fac8-4425-a930-ef8258edf056,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52545d4d-b803-4245-9bb8-963733848c65", "address": "fa:16:3e:5d:6c:2d", "network": {"id": "339226f8-9e77-474d-b9e0-019c81e7dc5f", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1887237548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a591b57bf4334172b3794e4e70d83016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52545d4d-b8", "ovs_interfaceid": "52545d4d-b803-4245-9bb8-963733848c65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.973 250273 DEBUG nova.network.os_vif_util [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Converting VIF {"id": "52545d4d-b803-4245-9bb8-963733848c65", "address": "fa:16:3e:5d:6c:2d", "network": {"id": "339226f8-9e77-474d-b9e0-019c81e7dc5f", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1887237548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a591b57bf4334172b3794e4e70d83016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52545d4d-b8", "ovs_interfaceid": "52545d4d-b803-4245-9bb8-963733848c65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.974 250273 DEBUG nova.network.os_vif_util [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:6c:2d,bridge_name='br-int',has_traffic_filtering=True,id=52545d4d-b803-4245-9bb8-963733848c65,network=Network(339226f8-9e77-474d-b9e0-019c81e7dc5f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52545d4d-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.974 250273 DEBUG os_vif [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:6c:2d,bridge_name='br-int',has_traffic_filtering=True,id=52545d4d-b803-4245-9bb8-963733848c65,network=Network(339226f8-9e77-474d-b9e0-019c81e7dc5f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52545d4d-b8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.975 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.975 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.976 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.983 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.983 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52545d4d-b8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.983 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52545d4d-b8, col_values=(('external_ids', {'iface-id': '52545d4d-b803-4245-9bb8-963733848c65', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:6c:2d', 'vm-uuid': 'eb090825-fac8-4425-a930-ef8258edf056'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:57:54 np0005593232 NetworkManager[49057]: <info>  [1769162274.9864] manager: (tap52545d4d-b8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/139)
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.987 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.991 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:54 np0005593232 nova_compute[250269]: 2026-01-23 09:57:54.992 250273 INFO os_vif [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:6c:2d,bridge_name='br-int',has_traffic_filtering=True,id=52545d4d-b803-4245-9bb8-963733848c65,network=Network(339226f8-9e77-474d-b9e0-019c81e7dc5f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52545d4d-b8')#033[00m
Jan 23 04:57:55 np0005593232 nova_compute[250269]: 2026-01-23 09:57:55.062 250273 DEBUG nova.virt.libvirt.driver [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:57:55 np0005593232 nova_compute[250269]: 2026-01-23 09:57:55.062 250273 DEBUG nova.virt.libvirt.driver [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:57:55 np0005593232 nova_compute[250269]: 2026-01-23 09:57:55.063 250273 DEBUG nova.virt.libvirt.driver [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] No VIF found with MAC fa:16:3e:5d:6c:2d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:57:55 np0005593232 nova_compute[250269]: 2026-01-23 09:57:55.063 250273 INFO nova.virt.libvirt.driver [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Using config drive#033[00m
Jan 23 04:57:55 np0005593232 nova_compute[250269]: 2026-01-23 09:57:55.102 250273 DEBUG nova.storage.rbd_utils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] rbd image eb090825-fac8-4425-a930-ef8258edf056_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:57:55 np0005593232 nova_compute[250269]: 2026-01-23 09:57:55.415 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:57:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:55.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:57:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:56.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:56.099 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:57:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:57:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2003: 321 pgs: 321 active+clean; 134 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Jan 23 04:57:56 np0005593232 nova_compute[250269]: 2026-01-23 09:57:56.898 250273 INFO nova.virt.libvirt.driver [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Creating config drive at /var/lib/nova/instances/eb090825-fac8-4425-a930-ef8258edf056/disk.config#033[00m
Jan 23 04:57:56 np0005593232 nova_compute[250269]: 2026-01-23 09:57:56.907 250273 DEBUG oslo_concurrency.processutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eb090825-fac8-4425-a930-ef8258edf056/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3kp90fwi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:57:57 np0005593232 nova_compute[250269]: 2026-01-23 09:57:57.048 250273 DEBUG oslo_concurrency.processutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eb090825-fac8-4425-a930-ef8258edf056/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3kp90fwi" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:57:57 np0005593232 nova_compute[250269]: 2026-01-23 09:57:57.122 250273 DEBUG nova.storage.rbd_utils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] rbd image eb090825-fac8-4425-a930-ef8258edf056_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:57:57 np0005593232 nova_compute[250269]: 2026-01-23 09:57:57.125 250273 DEBUG oslo_concurrency.processutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eb090825-fac8-4425-a930-ef8258edf056/disk.config eb090825-fac8-4425-a930-ef8258edf056_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:57:57 np0005593232 nova_compute[250269]: 2026-01-23 09:57:57.295 250273 DEBUG oslo_concurrency.processutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eb090825-fac8-4425-a930-ef8258edf056/disk.config eb090825-fac8-4425-a930-ef8258edf056_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:57:57 np0005593232 nova_compute[250269]: 2026-01-23 09:57:57.296 250273 INFO nova.virt.libvirt.driver [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Deleting local config drive /var/lib/nova/instances/eb090825-fac8-4425-a930-ef8258edf056/disk.config because it was imported into RBD.#033[00m
Jan 23 04:57:57 np0005593232 kernel: tap52545d4d-b8: entered promiscuous mode
Jan 23 04:57:57 np0005593232 NetworkManager[49057]: <info>  [1769162277.3479] manager: (tap52545d4d-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/140)
Jan 23 04:57:57 np0005593232 ovn_controller[151001]: 2026-01-23T09:57:57Z|00279|binding|INFO|Claiming lport 52545d4d-b803-4245-9bb8-963733848c65 for this chassis.
Jan 23 04:57:57 np0005593232 ovn_controller[151001]: 2026-01-23T09:57:57Z|00280|binding|INFO|52545d4d-b803-4245-9bb8-963733848c65: Claiming fa:16:3e:5d:6c:2d 10.100.0.5
Jan 23 04:57:57 np0005593232 nova_compute[250269]: 2026-01-23 09:57:57.349 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:57 np0005593232 nova_compute[250269]: 2026-01-23 09:57:57.354 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:57 np0005593232 nova_compute[250269]: 2026-01-23 09:57:57.356 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.371 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:6c:2d 10.100.0.5'], port_security=['fa:16:3e:5d:6c:2d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'eb090825-fac8-4425-a930-ef8258edf056', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-339226f8-9e77-474d-b9e0-019c81e7dc5f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a591b57bf4334172b3794e4e70d83016', 'neutron:revision_number': '2', 'neutron:security_group_ids': '635f1145-d02c-4890-b694-bda390cb2303', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec0a0da7-fd71-4380-9beb-f27e9afa31f7, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=52545d4d-b803-4245-9bb8-963733848c65) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.373 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 52545d4d-b803-4245-9bb8-963733848c65 in datapath 339226f8-9e77-474d-b9e0-019c81e7dc5f bound to our chassis#033[00m
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.374 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 339226f8-9e77-474d-b9e0-019c81e7dc5f#033[00m
Jan 23 04:57:57 np0005593232 systemd-udevd[307645]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:57:57 np0005593232 NetworkManager[49057]: <info>  [1769162277.3895] device (tap52545d4d-b8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:57:57 np0005593232 NetworkManager[49057]: <info>  [1769162277.3912] device (tap52545d4d-b8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.389 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[867b51dc-9ade-4d24-96e5-ce2b59d93e87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.391 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap339226f8-91 in ovnmeta-339226f8-9e77-474d-b9e0-019c81e7dc5f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.394 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap339226f8-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.394 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ffd20f74-20b4-4e34-bd02-62708f004e16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.396 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6af89191-bf14-4b3a-9ea2-4c3a14a353fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:57:57 np0005593232 systemd-machined[215836]: New machine qemu-34-instance-00000058.
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.411 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[0b07307a-c67c-4089-ba78-967288d29b3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:57:57 np0005593232 nova_compute[250269]: 2026-01-23 09:57:57.420 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:57 np0005593232 systemd[1]: Started Virtual Machine qemu-34-instance-00000058.
Jan 23 04:57:57 np0005593232 ovn_controller[151001]: 2026-01-23T09:57:57Z|00281|binding|INFO|Setting lport 52545d4d-b803-4245-9bb8-963733848c65 ovn-installed in OVS
Jan 23 04:57:57 np0005593232 ovn_controller[151001]: 2026-01-23T09:57:57Z|00282|binding|INFO|Setting lport 52545d4d-b803-4245-9bb8-963733848c65 up in Southbound
Jan 23 04:57:57 np0005593232 nova_compute[250269]: 2026-01-23 09:57:57.424 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.428 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[244518cc-2671-4448-b75d-d83c4d62931c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.460 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[5c5116c2-6064-4830-8ca7-50314a359a56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.466 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f6f3d5-9c7e-4784-9b16-0fe377d425f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:57:57 np0005593232 systemd-udevd[307648]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:57:57 np0005593232 NetworkManager[49057]: <info>  [1769162277.4684] manager: (tap339226f8-90): new Veth device (/org/freedesktop/NetworkManager/Devices/141)
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.503 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[f5fb1dcd-da50-4b9a-ae6d-0dd72d711158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.506 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[ecabcd16-44fc-4f17-ac9d-923799d06263]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:57:57 np0005593232 NetworkManager[49057]: <info>  [1769162277.5277] device (tap339226f8-90): carrier: link connected
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.533 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[72c74cad-5467-48d3-95dd-5c65496c4209]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.550 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d83e8ffd-ea60-4356-96a4-e6e36e1b5240]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap339226f8-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:20:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619067, 'reachable_time': 38972, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307681, 'error': None, 'target': 'ovnmeta-339226f8-9e77-474d-b9e0-019c81e7dc5f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.567 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2a2d2fa3-9919-4f87-b41e-6c9111e21fd0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe51:2043'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 619067, 'tstamp': 619067}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307682, 'error': None, 'target': 'ovnmeta-339226f8-9e77-474d-b9e0-019c81e7dc5f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.584 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[dea12486-e0bd-47b6-bf89-b6485e234c48]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap339226f8-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:51:20:43'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 84], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619067, 'reachable_time': 38972, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307683, 'error': None, 'target': 'ovnmeta-339226f8-9e77-474d-b9e0-019c81e7dc5f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.612 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4cc9bd94-2e74-4d9d-88c6-b9eff2a067a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.673 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[eeb9f8d0-7139-4518-a53a-70c4cb6f6c96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.675 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap339226f8-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.675 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.676 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap339226f8-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:57:57 np0005593232 NetworkManager[49057]: <info>  [1769162277.6785] manager: (tap339226f8-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Jan 23 04:57:57 np0005593232 nova_compute[250269]: 2026-01-23 09:57:57.678 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:57 np0005593232 kernel: tap339226f8-90: entered promiscuous mode
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.682 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap339226f8-90, col_values=(('external_ids', {'iface-id': '50568d44-7041-48a5-830c-ce1ed8735fd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:57:57 np0005593232 ovn_controller[151001]: 2026-01-23T09:57:57Z|00283|binding|INFO|Releasing lport 50568d44-7041-48a5-830c-ce1ed8735fd4 from this chassis (sb_readonly=0)
Jan 23 04:57:57 np0005593232 nova_compute[250269]: 2026-01-23 09:57:57.683 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:57 np0005593232 nova_compute[250269]: 2026-01-23 09:57:57.695 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.696 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/339226f8-9e77-474d-b9e0-019c81e7dc5f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/339226f8-9e77-474d-b9e0-019c81e7dc5f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.697 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[81261f16-ca8c-4619-9f21-9c21232816f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.698 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-339226f8-9e77-474d-b9e0-019c81e7dc5f
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/339226f8-9e77-474d-b9e0-019c81e7dc5f.pid.haproxy
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 339226f8-9e77-474d-b9e0-019c81e7dc5f
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:57:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:57:57.699 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-339226f8-9e77-474d-b9e0-019c81e7dc5f', 'env', 'PROCESS_TAG=haproxy-339226f8-9e77-474d-b9e0-019c81e7dc5f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/339226f8-9e77-474d-b9e0-019c81e7dc5f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 04:57:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:57.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:57 np0005593232 nova_compute[250269]: 2026-01-23 09:57:57.962 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162277.9615686, eb090825-fac8-4425-a930-ef8258edf056 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:57:57 np0005593232 nova_compute[250269]: 2026-01-23 09:57:57.964 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: eb090825-fac8-4425-a930-ef8258edf056] VM Started (Lifecycle Event)#033[00m
Jan 23 04:57:57 np0005593232 nova_compute[250269]: 2026-01-23 09:57:57.970 250273 DEBUG nova.compute.manager [req-5e75602b-7554-4cf6-9f38-c29646d104cd req-a5fe127b-281a-487d-b9a9-2ec96ff34530 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Received event network-vif-plugged-52545d4d-b803-4245-9bb8-963733848c65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:57:57 np0005593232 nova_compute[250269]: 2026-01-23 09:57:57.970 250273 DEBUG oslo_concurrency.lockutils [req-5e75602b-7554-4cf6-9f38-c29646d104cd req-a5fe127b-281a-487d-b9a9-2ec96ff34530 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "eb090825-fac8-4425-a930-ef8258edf056-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:57:57 np0005593232 nova_compute[250269]: 2026-01-23 09:57:57.970 250273 DEBUG oslo_concurrency.lockutils [req-5e75602b-7554-4cf6-9f38-c29646d104cd req-a5fe127b-281a-487d-b9a9-2ec96ff34530 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "eb090825-fac8-4425-a930-ef8258edf056-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:57:57 np0005593232 nova_compute[250269]: 2026-01-23 09:57:57.971 250273 DEBUG oslo_concurrency.lockutils [req-5e75602b-7554-4cf6-9f38-c29646d104cd req-a5fe127b-281a-487d-b9a9-2ec96ff34530 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "eb090825-fac8-4425-a930-ef8258edf056-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:57:57 np0005593232 nova_compute[250269]: 2026-01-23 09:57:57.971 250273 DEBUG nova.compute.manager [req-5e75602b-7554-4cf6-9f38-c29646d104cd req-a5fe127b-281a-487d-b9a9-2ec96ff34530 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Processing event network-vif-plugged-52545d4d-b803-4245-9bb8-963733848c65 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 04:57:57 np0005593232 nova_compute[250269]: 2026-01-23 09:57:57.972 250273 DEBUG nova.compute.manager [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:57:57 np0005593232 nova_compute[250269]: 2026-01-23 09:57:57.975 250273 DEBUG nova.virt.libvirt.driver [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:57:57 np0005593232 nova_compute[250269]: 2026-01-23 09:57:57.978 250273 INFO nova.virt.libvirt.driver [-] [instance: eb090825-fac8-4425-a930-ef8258edf056] Instance spawned successfully.#033[00m
Jan 23 04:57:57 np0005593232 nova_compute[250269]: 2026-01-23 09:57:57.979 250273 DEBUG nova.virt.libvirt.driver [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:57:58 np0005593232 nova_compute[250269]: 2026-01-23 09:57:58.005 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: eb090825-fac8-4425-a930-ef8258edf056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:57:58 np0005593232 nova_compute[250269]: 2026-01-23 09:57:58.012 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: eb090825-fac8-4425-a930-ef8258edf056] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:57:58 np0005593232 nova_compute[250269]: 2026-01-23 09:57:58.016 250273 DEBUG nova.virt.libvirt.driver [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:57:58 np0005593232 nova_compute[250269]: 2026-01-23 09:57:58.017 250273 DEBUG nova.virt.libvirt.driver [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:57:58 np0005593232 nova_compute[250269]: 2026-01-23 09:57:58.017 250273 DEBUG nova.virt.libvirt.driver [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:57:58 np0005593232 nova_compute[250269]: 2026-01-23 09:57:58.017 250273 DEBUG nova.virt.libvirt.driver [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:57:58 np0005593232 nova_compute[250269]: 2026-01-23 09:57:58.018 250273 DEBUG nova.virt.libvirt.driver [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:57:58 np0005593232 nova_compute[250269]: 2026-01-23 09:57:58.018 250273 DEBUG nova.virt.libvirt.driver [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:57:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:57:58.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:58 np0005593232 nova_compute[250269]: 2026-01-23 09:57:58.050 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: eb090825-fac8-4425-a930-ef8258edf056] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:57:58 np0005593232 nova_compute[250269]: 2026-01-23 09:57:58.050 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162277.9620874, eb090825-fac8-4425-a930-ef8258edf056 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:57:58 np0005593232 nova_compute[250269]: 2026-01-23 09:57:58.051 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: eb090825-fac8-4425-a930-ef8258edf056] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:57:58 np0005593232 nova_compute[250269]: 2026-01-23 09:57:58.096 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: eb090825-fac8-4425-a930-ef8258edf056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:57:58 np0005593232 nova_compute[250269]: 2026-01-23 09:57:58.101 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162277.9745297, eb090825-fac8-4425-a930-ef8258edf056 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:57:58 np0005593232 nova_compute[250269]: 2026-01-23 09:57:58.101 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: eb090825-fac8-4425-a930-ef8258edf056] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:57:58 np0005593232 podman[307755]: 2026-01-23 09:57:58.118314113 +0000 UTC m=+0.087802689 container create 0de165220264ffc2898d8e58875ff4ea4db6bbc8e34ec60cfbcdda57eddd6d50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-339226f8-9e77-474d-b9e0-019c81e7dc5f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 23 04:57:58 np0005593232 nova_compute[250269]: 2026-01-23 09:57:58.139 250273 INFO nova.compute.manager [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Took 13.74 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:57:58 np0005593232 nova_compute[250269]: 2026-01-23 09:57:58.139 250273 DEBUG nova.compute.manager [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:57:58 np0005593232 nova_compute[250269]: 2026-01-23 09:57:58.140 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: eb090825-fac8-4425-a930-ef8258edf056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:57:58 np0005593232 nova_compute[250269]: 2026-01-23 09:57:58.149 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: eb090825-fac8-4425-a930-ef8258edf056] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:57:58 np0005593232 podman[307755]: 2026-01-23 09:57:58.055566858 +0000 UTC m=+0.025055464 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:57:58 np0005593232 systemd[1]: Started libpod-conmon-0de165220264ffc2898d8e58875ff4ea4db6bbc8e34ec60cfbcdda57eddd6d50.scope.
Jan 23 04:57:58 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:57:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94ecbfd0c93aab324bc3bc4e21a86ad0e29ba7b29464fd61551548f4597d9201/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:57:58 np0005593232 podman[307755]: 2026-01-23 09:57:58.207840999 +0000 UTC m=+0.177329605 container init 0de165220264ffc2898d8e58875ff4ea4db6bbc8e34ec60cfbcdda57eddd6d50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-339226f8-9e77-474d-b9e0-019c81e7dc5f, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 04:57:58 np0005593232 podman[307755]: 2026-01-23 09:57:58.213654224 +0000 UTC m=+0.183142800 container start 0de165220264ffc2898d8e58875ff4ea4db6bbc8e34ec60cfbcdda57eddd6d50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-339226f8-9e77-474d-b9e0-019c81e7dc5f, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 23 04:57:58 np0005593232 nova_compute[250269]: 2026-01-23 09:57:58.218 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: eb090825-fac8-4425-a930-ef8258edf056] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:57:58 np0005593232 nova_compute[250269]: 2026-01-23 09:57:58.236 250273 INFO nova.compute.manager [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Took 15.12 seconds to build instance.#033[00m
Jan 23 04:57:58 np0005593232 neutron-haproxy-ovnmeta-339226f8-9e77-474d-b9e0-019c81e7dc5f[307768]: [NOTICE]   (307772) : New worker (307774) forked
Jan 23 04:57:58 np0005593232 neutron-haproxy-ovnmeta-339226f8-9e77-474d-b9e0-019c81e7dc5f[307768]: [NOTICE]   (307772) : Loading success.
Jan 23 04:57:58 np0005593232 nova_compute[250269]: 2026-01-23 09:57:58.265 250273 DEBUG oslo_concurrency.lockutils [None req-5edd1c85-78eb-43da-b38e-c1833d530b25 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Lock "eb090825-fac8-4425-a930-ef8258edf056" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:57:58 np0005593232 nova_compute[250269]: 2026-01-23 09:57:58.770 250273 DEBUG nova.network.neutron [req-387fa756-070f-4671-9601-d3978640c31b req-b6e3970c-5ae6-4246-b9ec-39cdbf2d4a1a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Updated VIF entry in instance network info cache for port 52545d4d-b803-4245-9bb8-963733848c65. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:57:58 np0005593232 nova_compute[250269]: 2026-01-23 09:57:58.771 250273 DEBUG nova.network.neutron [req-387fa756-070f-4671-9601-d3978640c31b req-b6e3970c-5ae6-4246-b9ec-39cdbf2d4a1a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Updating instance_info_cache with network_info: [{"id": "52545d4d-b803-4245-9bb8-963733848c65", "address": "fa:16:3e:5d:6c:2d", "network": {"id": "339226f8-9e77-474d-b9e0-019c81e7dc5f", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1887237548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a591b57bf4334172b3794e4e70d83016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52545d4d-b8", "ovs_interfaceid": "52545d4d-b803-4245-9bb8-963733848c65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:57:58 np0005593232 nova_compute[250269]: 2026-01-23 09:57:58.804 250273 DEBUG oslo_concurrency.lockutils [req-387fa756-070f-4671-9601-d3978640c31b req-b6e3970c-5ae6-4246-b9ec-39cdbf2d4a1a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-eb090825-fac8-4425-a930-ef8258edf056" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:57:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2004: 321 pgs: 321 active+clean; 134 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.4 MiB/s wr, 96 op/s
Jan 23 04:57:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:57:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:57:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:57:59.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:57:59 np0005593232 nova_compute[250269]: 2026-01-23 09:57:59.986 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:00.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:00 np0005593232 nova_compute[250269]: 2026-01-23 09:58:00.241 250273 DEBUG nova.compute.manager [req-903789f7-f294-40d6-871b-ef7ba163394d req-fda0d959-e04d-4db1-b9b2-a161882125ec 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Received event network-vif-plugged-52545d4d-b803-4245-9bb8-963733848c65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:58:00 np0005593232 nova_compute[250269]: 2026-01-23 09:58:00.242 250273 DEBUG oslo_concurrency.lockutils [req-903789f7-f294-40d6-871b-ef7ba163394d req-fda0d959-e04d-4db1-b9b2-a161882125ec 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "eb090825-fac8-4425-a930-ef8258edf056-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:58:00 np0005593232 nova_compute[250269]: 2026-01-23 09:58:00.243 250273 DEBUG oslo_concurrency.lockutils [req-903789f7-f294-40d6-871b-ef7ba163394d req-fda0d959-e04d-4db1-b9b2-a161882125ec 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "eb090825-fac8-4425-a930-ef8258edf056-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:58:00 np0005593232 nova_compute[250269]: 2026-01-23 09:58:00.243 250273 DEBUG oslo_concurrency.lockutils [req-903789f7-f294-40d6-871b-ef7ba163394d req-fda0d959-e04d-4db1-b9b2-a161882125ec 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "eb090825-fac8-4425-a930-ef8258edf056-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:58:00 np0005593232 nova_compute[250269]: 2026-01-23 09:58:00.243 250273 DEBUG nova.compute.manager [req-903789f7-f294-40d6-871b-ef7ba163394d req-fda0d959-e04d-4db1-b9b2-a161882125ec 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] No waiting events found dispatching network-vif-plugged-52545d4d-b803-4245-9bb8-963733848c65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:58:00 np0005593232 nova_compute[250269]: 2026-01-23 09:58:00.243 250273 WARNING nova.compute.manager [req-903789f7-f294-40d6-871b-ef7ba163394d req-fda0d959-e04d-4db1-b9b2-a161882125ec 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Received unexpected event network-vif-plugged-52545d4d-b803-4245-9bb8-963733848c65 for instance with vm_state active and task_state None.#033[00m
Jan 23 04:58:00 np0005593232 nova_compute[250269]: 2026-01-23 09:58:00.417 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2005: 321 pgs: 321 active+clean; 134 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 5.4 KiB/s rd, 1023 B/s wr, 8 op/s
Jan 23 04:58:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:58:01 np0005593232 nova_compute[250269]: 2026-01-23 09:58:01.484 250273 DEBUG oslo_concurrency.lockutils [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Acquiring lock "eb090825-fac8-4425-a930-ef8258edf056" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:58:01 np0005593232 nova_compute[250269]: 2026-01-23 09:58:01.486 250273 DEBUG oslo_concurrency.lockutils [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Lock "eb090825-fac8-4425-a930-ef8258edf056" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:58:01 np0005593232 nova_compute[250269]: 2026-01-23 09:58:01.486 250273 DEBUG oslo_concurrency.lockutils [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Acquiring lock "eb090825-fac8-4425-a930-ef8258edf056-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:58:01 np0005593232 nova_compute[250269]: 2026-01-23 09:58:01.487 250273 DEBUG oslo_concurrency.lockutils [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Lock "eb090825-fac8-4425-a930-ef8258edf056-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:58:01 np0005593232 nova_compute[250269]: 2026-01-23 09:58:01.488 250273 DEBUG oslo_concurrency.lockutils [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Lock "eb090825-fac8-4425-a930-ef8258edf056-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:58:01 np0005593232 nova_compute[250269]: 2026-01-23 09:58:01.491 250273 INFO nova.compute.manager [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Terminating instance#033[00m
Jan 23 04:58:01 np0005593232 nova_compute[250269]: 2026-01-23 09:58:01.492 250273 DEBUG nova.compute.manager [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 04:58:01 np0005593232 kernel: tap52545d4d-b8 (unregistering): left promiscuous mode
Jan 23 04:58:01 np0005593232 NetworkManager[49057]: <info>  [1769162281.5336] device (tap52545d4d-b8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:58:01 np0005593232 nova_compute[250269]: 2026-01-23 09:58:01.540 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:01 np0005593232 ovn_controller[151001]: 2026-01-23T09:58:01Z|00284|binding|INFO|Releasing lport 52545d4d-b803-4245-9bb8-963733848c65 from this chassis (sb_readonly=0)
Jan 23 04:58:01 np0005593232 ovn_controller[151001]: 2026-01-23T09:58:01Z|00285|binding|INFO|Setting lport 52545d4d-b803-4245-9bb8-963733848c65 down in Southbound
Jan 23 04:58:01 np0005593232 ovn_controller[151001]: 2026-01-23T09:58:01Z|00286|binding|INFO|Removing iface tap52545d4d-b8 ovn-installed in OVS
Jan 23 04:58:01 np0005593232 nova_compute[250269]: 2026-01-23 09:58:01.542 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:01.551 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:6c:2d 10.100.0.5'], port_security=['fa:16:3e:5d:6c:2d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'eb090825-fac8-4425-a930-ef8258edf056', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-339226f8-9e77-474d-b9e0-019c81e7dc5f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a591b57bf4334172b3794e4e70d83016', 'neutron:revision_number': '4', 'neutron:security_group_ids': '635f1145-d02c-4890-b694-bda390cb2303', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec0a0da7-fd71-4380-9beb-f27e9afa31f7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=52545d4d-b803-4245-9bb8-963733848c65) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:58:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:01.552 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 52545d4d-b803-4245-9bb8-963733848c65 in datapath 339226f8-9e77-474d-b9e0-019c81e7dc5f unbound from our chassis#033[00m
Jan 23 04:58:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:01.553 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 339226f8-9e77-474d-b9e0-019c81e7dc5f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 04:58:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:01.555 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a77a6462-87cc-4aaf-8375-90ab46af4835]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:01.556 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-339226f8-9e77-474d-b9e0-019c81e7dc5f namespace which is not needed anymore#033[00m
Jan 23 04:58:01 np0005593232 nova_compute[250269]: 2026-01-23 09:58:01.563 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:01 np0005593232 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000058.scope: Deactivated successfully.
Jan 23 04:58:01 np0005593232 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d00000058.scope: Consumed 4.140s CPU time.
Jan 23 04:58:01 np0005593232 systemd-machined[215836]: Machine qemu-34-instance-00000058 terminated.
Jan 23 04:58:01 np0005593232 podman[307786]: 2026-01-23 09:58:01.657501874 +0000 UTC m=+0.091141133 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 04:58:01 np0005593232 neutron-haproxy-ovnmeta-339226f8-9e77-474d-b9e0-019c81e7dc5f[307768]: [NOTICE]   (307772) : haproxy version is 2.8.14-c23fe91
Jan 23 04:58:01 np0005593232 neutron-haproxy-ovnmeta-339226f8-9e77-474d-b9e0-019c81e7dc5f[307768]: [NOTICE]   (307772) : path to executable is /usr/sbin/haproxy
Jan 23 04:58:01 np0005593232 neutron-haproxy-ovnmeta-339226f8-9e77-474d-b9e0-019c81e7dc5f[307768]: [WARNING]  (307772) : Exiting Master process...
Jan 23 04:58:01 np0005593232 neutron-haproxy-ovnmeta-339226f8-9e77-474d-b9e0-019c81e7dc5f[307768]: [ALERT]    (307772) : Current worker (307774) exited with code 143 (Terminated)
Jan 23 04:58:01 np0005593232 neutron-haproxy-ovnmeta-339226f8-9e77-474d-b9e0-019c81e7dc5f[307768]: [WARNING]  (307772) : All workers exited. Exiting... (0)
Jan 23 04:58:01 np0005593232 systemd[1]: libpod-0de165220264ffc2898d8e58875ff4ea4db6bbc8e34ec60cfbcdda57eddd6d50.scope: Deactivated successfully.
Jan 23 04:58:01 np0005593232 podman[307828]: 2026-01-23 09:58:01.693696184 +0000 UTC m=+0.042866110 container died 0de165220264ffc2898d8e58875ff4ea4db6bbc8e34ec60cfbcdda57eddd6d50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-339226f8-9e77-474d-b9e0-019c81e7dc5f, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:58:01 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0de165220264ffc2898d8e58875ff4ea4db6bbc8e34ec60cfbcdda57eddd6d50-userdata-shm.mount: Deactivated successfully.
Jan 23 04:58:01 np0005593232 systemd[1]: var-lib-containers-storage-overlay-94ecbfd0c93aab324bc3bc4e21a86ad0e29ba7b29464fd61551548f4597d9201-merged.mount: Deactivated successfully.
Jan 23 04:58:01 np0005593232 nova_compute[250269]: 2026-01-23 09:58:01.737 250273 INFO nova.virt.libvirt.driver [-] [instance: eb090825-fac8-4425-a930-ef8258edf056] Instance destroyed successfully.#033[00m
Jan 23 04:58:01 np0005593232 nova_compute[250269]: 2026-01-23 09:58:01.737 250273 DEBUG nova.objects.instance [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Lazy-loading 'resources' on Instance uuid eb090825-fac8-4425-a930-ef8258edf056 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:58:01 np0005593232 podman[307828]: 2026-01-23 09:58:01.746667371 +0000 UTC m=+0.095837297 container cleanup 0de165220264ffc2898d8e58875ff4ea4db6bbc8e34ec60cfbcdda57eddd6d50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-339226f8-9e77-474d-b9e0-019c81e7dc5f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 23 04:58:01 np0005593232 nova_compute[250269]: 2026-01-23 09:58:01.755 250273 DEBUG nova.virt.libvirt.vif [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:57:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1174123994',display_name='tempest-InstanceActionsNegativeTestJSON-server-1174123994',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1174123994',id=88,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:57:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a591b57bf4334172b3794e4e70d83016',ramdisk_id='',reservation_id='r-a370fdxf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-2106267144',owner_user_name='tempest-InstanceActionsNegativeTestJSON-2106267144-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:57:58Z,user_data=None,user_id='2ba9c41274b44742bd852accd5d8cbe1',uuid=eb090825-fac8-4425-a930-ef8258edf056,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52545d4d-b803-4245-9bb8-963733848c65", "address": "fa:16:3e:5d:6c:2d", "network": {"id": "339226f8-9e77-474d-b9e0-019c81e7dc5f", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1887237548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a591b57bf4334172b3794e4e70d83016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52545d4d-b8", "ovs_interfaceid": "52545d4d-b803-4245-9bb8-963733848c65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:58:01 np0005593232 nova_compute[250269]: 2026-01-23 09:58:01.756 250273 DEBUG nova.network.os_vif_util [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Converting VIF {"id": "52545d4d-b803-4245-9bb8-963733848c65", "address": "fa:16:3e:5d:6c:2d", "network": {"id": "339226f8-9e77-474d-b9e0-019c81e7dc5f", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1887237548-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a591b57bf4334172b3794e4e70d83016", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52545d4d-b8", "ovs_interfaceid": "52545d4d-b803-4245-9bb8-963733848c65", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:58:01 np0005593232 nova_compute[250269]: 2026-01-23 09:58:01.757 250273 DEBUG nova.network.os_vif_util [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:6c:2d,bridge_name='br-int',has_traffic_filtering=True,id=52545d4d-b803-4245-9bb8-963733848c65,network=Network(339226f8-9e77-474d-b9e0-019c81e7dc5f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52545d4d-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:58:01 np0005593232 systemd[1]: libpod-conmon-0de165220264ffc2898d8e58875ff4ea4db6bbc8e34ec60cfbcdda57eddd6d50.scope: Deactivated successfully.
Jan 23 04:58:01 np0005593232 nova_compute[250269]: 2026-01-23 09:58:01.757 250273 DEBUG os_vif [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:6c:2d,bridge_name='br-int',has_traffic_filtering=True,id=52545d4d-b803-4245-9bb8-963733848c65,network=Network(339226f8-9e77-474d-b9e0-019c81e7dc5f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52545d4d-b8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:58:01 np0005593232 nova_compute[250269]: 2026-01-23 09:58:01.762 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:01 np0005593232 nova_compute[250269]: 2026-01-23 09:58:01.763 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52545d4d-b8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:58:01 np0005593232 nova_compute[250269]: 2026-01-23 09:58:01.766 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:01 np0005593232 nova_compute[250269]: 2026-01-23 09:58:01.769 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:58:01 np0005593232 nova_compute[250269]: 2026-01-23 09:58:01.771 250273 INFO os_vif [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:6c:2d,bridge_name='br-int',has_traffic_filtering=True,id=52545d4d-b803-4245-9bb8-963733848c65,network=Network(339226f8-9e77-474d-b9e0-019c81e7dc5f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52545d4d-b8')#033[00m
Jan 23 04:58:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:01.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:01 np0005593232 podman[307873]: 2026-01-23 09:58:01.822179758 +0000 UTC m=+0.047487061 container remove 0de165220264ffc2898d8e58875ff4ea4db6bbc8e34ec60cfbcdda57eddd6d50 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-339226f8-9e77-474d-b9e0-019c81e7dc5f, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 23 04:58:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:01.828 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[afb19338-0b59-40de-8067-195da84a81f9]: (4, ('Fri Jan 23 09:58:01 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-339226f8-9e77-474d-b9e0-019c81e7dc5f (0de165220264ffc2898d8e58875ff4ea4db6bbc8e34ec60cfbcdda57eddd6d50)\n0de165220264ffc2898d8e58875ff4ea4db6bbc8e34ec60cfbcdda57eddd6d50\nFri Jan 23 09:58:01 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-339226f8-9e77-474d-b9e0-019c81e7dc5f (0de165220264ffc2898d8e58875ff4ea4db6bbc8e34ec60cfbcdda57eddd6d50)\n0de165220264ffc2898d8e58875ff4ea4db6bbc8e34ec60cfbcdda57eddd6d50\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:01.829 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e2b0fd-04ba-451c-82a2-cd9dc1ead4f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:01.830 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap339226f8-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:58:01 np0005593232 nova_compute[250269]: 2026-01-23 09:58:01.832 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:01 np0005593232 kernel: tap339226f8-90: left promiscuous mode
Jan 23 04:58:01 np0005593232 nova_compute[250269]: 2026-01-23 09:58:01.846 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:01.849 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[848dfb09-497b-484e-a99e-46ab711f9b3f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:01.874 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f0adc5f2-4092-41e9-b88d-1fc69ba74e7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:01.876 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ed3cf4fa-7f25-4021-923d-2ab8281f19d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:01.891 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b9406715-604a-45b8-b362-bb164ee6c86b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 619059, 'reachable_time': 34506, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307905, 'error': None, 'target': 'ovnmeta-339226f8-9e77-474d-b9e0-019c81e7dc5f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:01 np0005593232 systemd[1]: run-netns-ovnmeta\x2d339226f8\x2d9e77\x2d474d\x2db9e0\x2d019c81e7dc5f.mount: Deactivated successfully.
Jan 23 04:58:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:01.894 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-339226f8-9e77-474d-b9e0-019c81e7dc5f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 04:58:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:01.895 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[947a9737-9d84-48fd-b0d4-bb0013f44a73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:02.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:02 np0005593232 nova_compute[250269]: 2026-01-23 09:58:02.172 250273 INFO nova.virt.libvirt.driver [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Deleting instance files /var/lib/nova/instances/eb090825-fac8-4425-a930-ef8258edf056_del#033[00m
Jan 23 04:58:02 np0005593232 nova_compute[250269]: 2026-01-23 09:58:02.173 250273 INFO nova.virt.libvirt.driver [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Deletion of /var/lib/nova/instances/eb090825-fac8-4425-a930-ef8258edf056_del complete#033[00m
Jan 23 04:58:02 np0005593232 nova_compute[250269]: 2026-01-23 09:58:02.245 250273 INFO nova.compute.manager [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 04:58:02 np0005593232 nova_compute[250269]: 2026-01-23 09:58:02.245 250273 DEBUG oslo.service.loopingcall [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 04:58:02 np0005593232 nova_compute[250269]: 2026-01-23 09:58:02.246 250273 DEBUG nova.compute.manager [-] [instance: eb090825-fac8-4425-a930-ef8258edf056] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 04:58:02 np0005593232 nova_compute[250269]: 2026-01-23 09:58:02.246 250273 DEBUG nova.network.neutron [-] [instance: eb090825-fac8-4425-a930-ef8258edf056] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 04:58:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:58:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3082832833' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:58:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2006: 321 pgs: 321 active+clean; 100 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 99 op/s
Jan 23 04:58:03 np0005593232 nova_compute[250269]: 2026-01-23 09:58:03.359 250273 DEBUG nova.compute.manager [req-7efac5cd-f3a4-4700-9bcd-80e40a839e72 req-34ac4291-c453-46cc-b95f-633611e46fbc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Received event network-vif-unplugged-52545d4d-b803-4245-9bb8-963733848c65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:58:03 np0005593232 nova_compute[250269]: 2026-01-23 09:58:03.360 250273 DEBUG oslo_concurrency.lockutils [req-7efac5cd-f3a4-4700-9bcd-80e40a839e72 req-34ac4291-c453-46cc-b95f-633611e46fbc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "eb090825-fac8-4425-a930-ef8258edf056-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:58:03 np0005593232 nova_compute[250269]: 2026-01-23 09:58:03.361 250273 DEBUG oslo_concurrency.lockutils [req-7efac5cd-f3a4-4700-9bcd-80e40a839e72 req-34ac4291-c453-46cc-b95f-633611e46fbc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "eb090825-fac8-4425-a930-ef8258edf056-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:58:03 np0005593232 nova_compute[250269]: 2026-01-23 09:58:03.361 250273 DEBUG oslo_concurrency.lockutils [req-7efac5cd-f3a4-4700-9bcd-80e40a839e72 req-34ac4291-c453-46cc-b95f-633611e46fbc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "eb090825-fac8-4425-a930-ef8258edf056-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:58:03 np0005593232 nova_compute[250269]: 2026-01-23 09:58:03.361 250273 DEBUG nova.compute.manager [req-7efac5cd-f3a4-4700-9bcd-80e40a839e72 req-34ac4291-c453-46cc-b95f-633611e46fbc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] No waiting events found dispatching network-vif-unplugged-52545d4d-b803-4245-9bb8-963733848c65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:58:03 np0005593232 nova_compute[250269]: 2026-01-23 09:58:03.362 250273 DEBUG nova.compute.manager [req-7efac5cd-f3a4-4700-9bcd-80e40a839e72 req-34ac4291-c453-46cc-b95f-633611e46fbc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Received event network-vif-unplugged-52545d4d-b803-4245-9bb8-963733848c65 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 04:58:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:58:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:03.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:58:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:04.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:04 np0005593232 nova_compute[250269]: 2026-01-23 09:58:04.126 250273 DEBUG nova.network.neutron [-] [instance: eb090825-fac8-4425-a930-ef8258edf056] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:58:04 np0005593232 nova_compute[250269]: 2026-01-23 09:58:04.166 250273 INFO nova.compute.manager [-] [instance: eb090825-fac8-4425-a930-ef8258edf056] Took 1.92 seconds to deallocate network for instance.#033[00m
Jan 23 04:58:04 np0005593232 nova_compute[250269]: 2026-01-23 09:58:04.244 250273 DEBUG oslo_concurrency.lockutils [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:58:04 np0005593232 nova_compute[250269]: 2026-01-23 09:58:04.245 250273 DEBUG oslo_concurrency.lockutils [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:58:04 np0005593232 nova_compute[250269]: 2026-01-23 09:58:04.357 250273 DEBUG oslo_concurrency.processutils [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:58:04 np0005593232 nova_compute[250269]: 2026-01-23 09:58:04.413 250273 DEBUG nova.compute.manager [req-d58a23f0-7074-4a71-978f-d00a4f28719a req-db3f0f69-c25b-481c-8627-7cea2bb2fb85 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Received event network-vif-deleted-52545d4d-b803-4245-9bb8-963733848c65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:58:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:58:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3318746290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:58:04 np0005593232 nova_compute[250269]: 2026-01-23 09:58:04.780 250273 DEBUG oslo_concurrency.processutils [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:58:04 np0005593232 nova_compute[250269]: 2026-01-23 09:58:04.787 250273 DEBUG nova.compute.provider_tree [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:58:04 np0005593232 nova_compute[250269]: 2026-01-23 09:58:04.815 250273 DEBUG nova.scheduler.client.report [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:58:04 np0005593232 nova_compute[250269]: 2026-01-23 09:58:04.856 250273 DEBUG oslo_concurrency.lockutils [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:58:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2007: 321 pgs: 321 active+clean; 100 MiB data, 738 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 94 op/s
Jan 23 04:58:04 np0005593232 nova_compute[250269]: 2026-01-23 09:58:04.902 250273 INFO nova.scheduler.client.report [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Deleted allocations for instance eb090825-fac8-4425-a930-ef8258edf056#033[00m
Jan 23 04:58:04 np0005593232 nova_compute[250269]: 2026-01-23 09:58:04.991 250273 DEBUG oslo_concurrency.lockutils [None req-c1e3bd16-c525-4fb1-a686-44c491c8a720 2ba9c41274b44742bd852accd5d8cbe1 a591b57bf4334172b3794e4e70d83016 - - default default] Lock "eb090825-fac8-4425-a930-ef8258edf056" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:58:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:58:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/856851660' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:58:05 np0005593232 podman[307980]: 2026-01-23 09:58:05.409837348 +0000 UTC m=+0.060136371 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:58:05 np0005593232 nova_compute[250269]: 2026-01-23 09:58:05.419 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:05 np0005593232 nova_compute[250269]: 2026-01-23 09:58:05.696 250273 DEBUG nova.compute.manager [req-7a9ab830-3d77-4be1-9d31-075ea2fcf784 req-66bc29f0-8f5a-4926-9835-afc736adc1e3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Received event network-vif-plugged-52545d4d-b803-4245-9bb8-963733848c65 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:58:05 np0005593232 nova_compute[250269]: 2026-01-23 09:58:05.696 250273 DEBUG oslo_concurrency.lockutils [req-7a9ab830-3d77-4be1-9d31-075ea2fcf784 req-66bc29f0-8f5a-4926-9835-afc736adc1e3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "eb090825-fac8-4425-a930-ef8258edf056-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:58:05 np0005593232 nova_compute[250269]: 2026-01-23 09:58:05.696 250273 DEBUG oslo_concurrency.lockutils [req-7a9ab830-3d77-4be1-9d31-075ea2fcf784 req-66bc29f0-8f5a-4926-9835-afc736adc1e3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "eb090825-fac8-4425-a930-ef8258edf056-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:58:05 np0005593232 nova_compute[250269]: 2026-01-23 09:58:05.697 250273 DEBUG oslo_concurrency.lockutils [req-7a9ab830-3d77-4be1-9d31-075ea2fcf784 req-66bc29f0-8f5a-4926-9835-afc736adc1e3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "eb090825-fac8-4425-a930-ef8258edf056-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:58:05 np0005593232 nova_compute[250269]: 2026-01-23 09:58:05.697 250273 DEBUG nova.compute.manager [req-7a9ab830-3d77-4be1-9d31-075ea2fcf784 req-66bc29f0-8f5a-4926-9835-afc736adc1e3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] No waiting events found dispatching network-vif-plugged-52545d4d-b803-4245-9bb8-963733848c65 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:58:05 np0005593232 nova_compute[250269]: 2026-01-23 09:58:05.697 250273 WARNING nova.compute.manager [req-7a9ab830-3d77-4be1-9d31-075ea2fcf784 req-66bc29f0-8f5a-4926-9835-afc736adc1e3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: eb090825-fac8-4425-a930-ef8258edf056] Received unexpected event network-vif-plugged-52545d4d-b803-4245-9bb8-963733848c65 for instance with vm_state deleted and task_state None.#033[00m
Jan 23 04:58:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:58:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:05.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:58:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:06.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:58:06 np0005593232 nova_compute[250269]: 2026-01-23 09:58:06.766 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2008: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 23 04:58:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:58:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:58:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:58:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:58:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:58:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:58:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:07.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:08.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2009: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 23 04:58:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:09.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:58:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:10.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:58:10 np0005593232 nova_compute[250269]: 2026-01-23 09:58:10.421 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2010: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 97 op/s
Jan 23 04:58:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:58:11 np0005593232 nova_compute[250269]: 2026-01-23 09:58:11.768 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:11.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:12.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:12 np0005593232 nova_compute[250269]: 2026-01-23 09:58:12.429 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2011: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 97 op/s
Jan 23 04:58:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:13.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:14.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2012: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 4.1 KiB/s rd, 852 B/s wr, 6 op/s
Jan 23 04:58:15 np0005593232 nova_compute[250269]: 2026-01-23 09:58:15.423 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:15.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:16.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:58:16 np0005593232 nova_compute[250269]: 2026-01-23 09:58:16.735 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769162281.7342315, eb090825-fac8-4425-a930-ef8258edf056 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:58:16 np0005593232 nova_compute[250269]: 2026-01-23 09:58:16.736 250273 INFO nova.compute.manager [-] [instance: eb090825-fac8-4425-a930-ef8258edf056] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:58:16 np0005593232 nova_compute[250269]: 2026-01-23 09:58:16.771 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:16 np0005593232 nova_compute[250269]: 2026-01-23 09:58:16.824 250273 DEBUG nova.compute.manager [None req-41be1b73-d6f7-4358-850f-307977f476cc - - - - - -] [instance: eb090825-fac8-4425-a930-ef8258edf056] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:58:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2013: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 4.1 KiB/s rd, 853 B/s wr, 6 op/s
Jan 23 04:58:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:17.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:18.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2014: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 23 04:58:19 np0005593232 nova_compute[250269]: 2026-01-23 09:58:19.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:58:19 np0005593232 nova_compute[250269]: 2026-01-23 09:58:19.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 23 04:58:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:58:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:19.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:58:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:20.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:20 np0005593232 nova_compute[250269]: 2026-01-23 09:58:20.425 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2015: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 23 04:58:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:58:21 np0005593232 nova_compute[250269]: 2026-01-23 09:58:21.320 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:58:21 np0005593232 nova_compute[250269]: 2026-01-23 09:58:21.774 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:58:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:21.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:58:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:22.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2016: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 20 GiB / 21 GiB avail; 767 B/s rd, 0 op/s
Jan 23 04:58:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:58:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:23.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:58:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:24.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2017: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 20 GiB / 21 GiB avail
Jan 23 04:58:25 np0005593232 nova_compute[250269]: 2026-01-23 09:58:25.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:58:25 np0005593232 nova_compute[250269]: 2026-01-23 09:58:25.427 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:25.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:26.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:58:26 np0005593232 nova_compute[250269]: 2026-01-23 09:58:26.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:58:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:26.476 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:58:26 np0005593232 nova_compute[250269]: 2026-01-23 09:58:26.476 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:26.477 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:58:26 np0005593232 nova_compute[250269]: 2026-01-23 09:58:26.776 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2018: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 20 GiB / 21 GiB avail
Jan 23 04:58:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:27.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:28.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2019: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 20 GiB / 21 GiB avail
Jan 23 04:58:29 np0005593232 nova_compute[250269]: 2026-01-23 09:58:29.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:58:29 np0005593232 nova_compute[250269]: 2026-01-23 09:58:29.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:58:29 np0005593232 nova_compute[250269]: 2026-01-23 09:58:29.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:58:29 np0005593232 nova_compute[250269]: 2026-01-23 09:58:29.323 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 04:58:29 np0005593232 nova_compute[250269]: 2026-01-23 09:58:29.323 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:58:29 np0005593232 nova_compute[250269]: 2026-01-23 09:58:29.323 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:58:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:29.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:58:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:30.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:58:30 np0005593232 nova_compute[250269]: 2026-01-23 09:58:30.316 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:58:30 np0005593232 nova_compute[250269]: 2026-01-23 09:58:30.316 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:58:30 np0005593232 nova_compute[250269]: 2026-01-23 09:58:30.429 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2020: 321 pgs: 321 active+clean; 88 MiB data, 733 MiB used, 20 GiB / 21 GiB avail
Jan 23 04:58:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:58:31 np0005593232 nova_compute[250269]: 2026-01-23 09:58:31.779 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:58:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:31.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:58:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:58:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:32.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:58:32 np0005593232 nova_compute[250269]: 2026-01-23 09:58:32.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:58:32 np0005593232 nova_compute[250269]: 2026-01-23 09:58:32.344 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:58:32 np0005593232 nova_compute[250269]: 2026-01-23 09:58:32.344 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:58:32 np0005593232 nova_compute[250269]: 2026-01-23 09:58:32.344 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:58:32 np0005593232 nova_compute[250269]: 2026-01-23 09:58:32.345 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:58:32 np0005593232 nova_compute[250269]: 2026-01-23 09:58:32.345 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:58:32 np0005593232 podman[308065]: 2026-01-23 09:58:32.457662703 +0000 UTC m=+0.116966068 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 23 04:58:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:58:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3415534115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:58:32 np0005593232 nova_compute[250269]: 2026-01-23 09:58:32.798 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:58:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2021: 321 pgs: 321 active+clean; 134 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 04:58:32 np0005593232 nova_compute[250269]: 2026-01-23 09:58:32.992 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:58:32 np0005593232 nova_compute[250269]: 2026-01-23 09:58:32.994 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4514MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:58:32 np0005593232 nova_compute[250269]: 2026-01-23 09:58:32.994 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:58:32 np0005593232 nova_compute[250269]: 2026-01-23 09:58:32.995 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:58:33 np0005593232 nova_compute[250269]: 2026-01-23 09:58:33.325 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:58:33 np0005593232 nova_compute[250269]: 2026-01-23 09:58:33.326 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:58:33 np0005593232 nova_compute[250269]: 2026-01-23 09:58:33.529 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:58:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:33.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:58:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/768812906' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:58:33 np0005593232 nova_compute[250269]: 2026-01-23 09:58:33.976 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:58:33 np0005593232 nova_compute[250269]: 2026-01-23 09:58:33.983 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:58:34 np0005593232 nova_compute[250269]: 2026-01-23 09:58:34.013 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:58:34 np0005593232 nova_compute[250269]: 2026-01-23 09:58:34.067 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:58:34 np0005593232 nova_compute[250269]: 2026-01-23 09:58:34.067 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:58:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:34.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2022: 321 pgs: 321 active+clean; 134 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 04:58:35 np0005593232 nova_compute[250269]: 2026-01-23 09:58:35.431 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:35.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:36 np0005593232 nova_compute[250269]: 2026-01-23 09:58:36.068 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:58:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:36.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:58:36 np0005593232 podman[308136]: 2026-01-23 09:58:36.391677164 +0000 UTC m=+0.051979219 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 23 04:58:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:36.480 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:58:36 np0005593232 nova_compute[250269]: 2026-01-23 09:58:36.782 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2023: 321 pgs: 321 active+clean; 134 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 04:58:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:58:37
Jan 23 04:58:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:58:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:58:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'backups', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes', 'default.rgw.log', 'vms']
Jan 23 04:58:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:58:37 np0005593232 nova_compute[250269]: 2026-01-23 09:58:37.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:58:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:58:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:58:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:58:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:58:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:58:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:58:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:58:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:37.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:58:37 np0005593232 nova_compute[250269]: 2026-01-23 09:58:37.912 250273 DEBUG oslo_concurrency.lockutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Acquiring lock "9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:58:37 np0005593232 nova_compute[250269]: 2026-01-23 09:58:37.913 250273 DEBUG oslo_concurrency.lockutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Lock "9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:58:37 np0005593232 nova_compute[250269]: 2026-01-23 09:58:37.959 250273 DEBUG nova.compute.manager [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:58:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:38.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:38 np0005593232 nova_compute[250269]: 2026-01-23 09:58:38.104 250273 DEBUG oslo_concurrency.lockutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:58:38 np0005593232 nova_compute[250269]: 2026-01-23 09:58:38.105 250273 DEBUG oslo_concurrency.lockutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:58:38 np0005593232 nova_compute[250269]: 2026-01-23 09:58:38.111 250273 DEBUG nova.virt.hardware [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:58:38 np0005593232 nova_compute[250269]: 2026-01-23 09:58:38.112 250273 INFO nova.compute.claims [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:58:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:58:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:58:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:58:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:58:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:58:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:58:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:58:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:58:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:58:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:58:38 np0005593232 nova_compute[250269]: 2026-01-23 09:58:38.398 250273 DEBUG oslo_concurrency.processutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:58:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:58:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1490829864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:58:38 np0005593232 nova_compute[250269]: 2026-01-23 09:58:38.867 250273 DEBUG oslo_concurrency.processutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:58:38 np0005593232 nova_compute[250269]: 2026-01-23 09:58:38.872 250273 DEBUG nova.compute.provider_tree [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:58:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2024: 321 pgs: 321 active+clean; 134 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 04:58:38 np0005593232 nova_compute[250269]: 2026-01-23 09:58:38.901 250273 DEBUG nova.scheduler.client.report [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:58:38 np0005593232 nova_compute[250269]: 2026-01-23 09:58:38.952 250273 DEBUG oslo_concurrency.lockutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:58:38 np0005593232 nova_compute[250269]: 2026-01-23 09:58:38.952 250273 DEBUG nova.compute.manager [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 04:58:39 np0005593232 nova_compute[250269]: 2026-01-23 09:58:39.027 250273 DEBUG nova.compute.manager [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 04:58:39 np0005593232 nova_compute[250269]: 2026-01-23 09:58:39.027 250273 DEBUG nova.network.neutron [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:58:39 np0005593232 nova_compute[250269]: 2026-01-23 09:58:39.067 250273 INFO nova.virt.libvirt.driver [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:58:39 np0005593232 nova_compute[250269]: 2026-01-23 09:58:39.099 250273 DEBUG nova.compute.manager [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:58:39 np0005593232 nova_compute[250269]: 2026-01-23 09:58:39.515 250273 DEBUG nova.compute.manager [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:58:39 np0005593232 nova_compute[250269]: 2026-01-23 09:58:39.516 250273 DEBUG nova.virt.libvirt.driver [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:58:39 np0005593232 nova_compute[250269]: 2026-01-23 09:58:39.516 250273 INFO nova.virt.libvirt.driver [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Creating image(s)#033[00m
Jan 23 04:58:39 np0005593232 nova_compute[250269]: 2026-01-23 09:58:39.542 250273 DEBUG nova.storage.rbd_utils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] rbd image 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:58:39 np0005593232 nova_compute[250269]: 2026-01-23 09:58:39.570 250273 DEBUG nova.storage.rbd_utils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] rbd image 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:58:39 np0005593232 nova_compute[250269]: 2026-01-23 09:58:39.596 250273 DEBUG nova.storage.rbd_utils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] rbd image 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:58:39 np0005593232 nova_compute[250269]: 2026-01-23 09:58:39.599 250273 DEBUG oslo_concurrency.processutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:58:39 np0005593232 nova_compute[250269]: 2026-01-23 09:58:39.662 250273 DEBUG oslo_concurrency.processutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:58:39 np0005593232 nova_compute[250269]: 2026-01-23 09:58:39.663 250273 DEBUG oslo_concurrency.lockutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:58:39 np0005593232 nova_compute[250269]: 2026-01-23 09:58:39.664 250273 DEBUG oslo_concurrency.lockutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:58:39 np0005593232 nova_compute[250269]: 2026-01-23 09:58:39.664 250273 DEBUG oslo_concurrency.lockutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:58:39 np0005593232 nova_compute[250269]: 2026-01-23 09:58:39.693 250273 DEBUG nova.storage.rbd_utils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] rbd image 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:58:39 np0005593232 nova_compute[250269]: 2026-01-23 09:58:39.697 250273 DEBUG oslo_concurrency.processutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:58:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:39.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:39 np0005593232 nova_compute[250269]: 2026-01-23 09:58:39.952 250273 DEBUG nova.policy [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd83df80213fd40f99fdc68c146fe9a2a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c288779980de4f03be20b7eed343b775', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 04:58:40 np0005593232 nova_compute[250269]: 2026-01-23 09:58:40.025 250273 DEBUG oslo_concurrency.processutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.327s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:58:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:40.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:40 np0005593232 nova_compute[250269]: 2026-01-23 09:58:40.101 250273 DEBUG nova.storage.rbd_utils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] resizing rbd image 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 04:58:40 np0005593232 nova_compute[250269]: 2026-01-23 09:58:40.220 250273 DEBUG nova.objects.instance [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:58:40 np0005593232 nova_compute[250269]: 2026-01-23 09:58:40.257 250273 DEBUG nova.virt.libvirt.driver [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:58:40 np0005593232 nova_compute[250269]: 2026-01-23 09:58:40.258 250273 DEBUG nova.virt.libvirt.driver [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Ensure instance console log exists: /var/lib/nova/instances/9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:58:40 np0005593232 nova_compute[250269]: 2026-01-23 09:58:40.258 250273 DEBUG oslo_concurrency.lockutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:58:40 np0005593232 nova_compute[250269]: 2026-01-23 09:58:40.259 250273 DEBUG oslo_concurrency.lockutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:58:40 np0005593232 nova_compute[250269]: 2026-01-23 09:58:40.259 250273 DEBUG oslo_concurrency.lockutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:58:40 np0005593232 nova_compute[250269]: 2026-01-23 09:58:40.433 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2025: 321 pgs: 321 active+clean; 134 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 04:58:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:58:41 np0005593232 nova_compute[250269]: 2026-01-23 09:58:41.488 250273 DEBUG nova.network.neutron [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Successfully created port: 369958b7-bcfd-4341-8b47-7283c5cfffd9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 04:58:41 np0005593232 nova_compute[250269]: 2026-01-23 09:58:41.785 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:41.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:42.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:42.611 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:58:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:42.611 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:58:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:42.611 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:58:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2026: 321 pgs: 321 active+clean; 226 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 5.3 MiB/s wr, 81 op/s
Jan 23 04:58:43 np0005593232 nova_compute[250269]: 2026-01-23 09:58:43.821 250273 DEBUG nova.network.neutron [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Successfully updated port: 369958b7-bcfd-4341-8b47-7283c5cfffd9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 04:58:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:43.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:43 np0005593232 nova_compute[250269]: 2026-01-23 09:58:43.864 250273 DEBUG oslo_concurrency.lockutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Acquiring lock "refresh_cache-9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:58:43 np0005593232 nova_compute[250269]: 2026-01-23 09:58:43.865 250273 DEBUG oslo_concurrency.lockutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Acquired lock "refresh_cache-9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:58:43 np0005593232 nova_compute[250269]: 2026-01-23 09:58:43.865 250273 DEBUG nova.network.neutron [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:58:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:58:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:44.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:58:44 np0005593232 nova_compute[250269]: 2026-01-23 09:58:44.282 250273 DEBUG nova.network.neutron [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:58:44 np0005593232 nova_compute[250269]: 2026-01-23 09:58:44.807 250273 DEBUG nova.compute.manager [req-1b39a50d-655c-4584-8040-d63029c13f06 req-9dab6f19-7ce7-4b9f-985c-5e3fd825ef99 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Received event network-changed-369958b7-bcfd-4341-8b47-7283c5cfffd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:58:44 np0005593232 nova_compute[250269]: 2026-01-23 09:58:44.808 250273 DEBUG nova.compute.manager [req-1b39a50d-655c-4584-8040-d63029c13f06 req-9dab6f19-7ce7-4b9f-985c-5e3fd825ef99 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Refreshing instance network info cache due to event network-changed-369958b7-bcfd-4341-8b47-7283c5cfffd9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:58:44 np0005593232 nova_compute[250269]: 2026-01-23 09:58:44.808 250273 DEBUG oslo_concurrency.lockutils [req-1b39a50d-655c-4584-8040-d63029c13f06 req-9dab6f19-7ce7-4b9f-985c-5e3fd825ef99 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:58:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2027: 321 pgs: 321 active+clean; 226 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 23 04:58:45 np0005593232 nova_compute[250269]: 2026-01-23 09:58:45.506 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:58:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:45.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:58:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:46.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:58:46 np0005593232 nova_compute[250269]: 2026-01-23 09:58:46.787 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:58:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:58:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:58:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:58:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029664017425088405 of space, bias 1.0, pg target 0.8899205227526521 quantized to 32 (current 32)
Jan 23 04:58:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:58:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009896487646566163 of space, bias 1.0, pg target 0.2968946293969849 quantized to 32 (current 32)
Jan 23 04:58:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:58:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:58:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:58:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 04:58:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:58:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:58:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:58:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:58:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:58:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:58:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:58:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:58:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:58:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:58:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:58:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:58:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2028: 321 pgs: 321 active+clean; 226 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 23 04:58:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 04:58:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 10K writes, 44K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1551 writes, 6795 keys, 1551 commit groups, 1.0 writes per commit group, ingest: 10.39 MB, 0.02 MB/s#012Interval WAL: 1551 writes, 1551 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     38.8      1.50              0.21        27    0.056       0      0       0.0       0.0#012  L6      1/0   10.99 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.0     79.1     65.9      3.58              0.76        26    0.138    147K    14K       0.0       0.0#012 Sum      1/0   10.99 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.0     55.7     57.8      5.08              0.97        53    0.096    147K    14K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.9     35.6     36.5      1.67              0.20        10    0.167     35K   2597       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0     79.1     65.9      3.58              0.76        26    0.138    147K    14K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     38.8      1.50              0.21        26    0.058       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.0 total, 600.0 interval#012Flush(GB): cumulative 0.057, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.29 GB write, 0.08 MB/s write, 0.28 GB read, 0.08 MB/s read, 5.1 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 1.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a844231f0#2 capacity: 304.00 MB usage: 32.83 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000271 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1921,31.69 MB,10.4229%) FilterBlock(54,424.61 KB,0.136401%) IndexBlock(54,742.66 KB,0.238569%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 23 04:58:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:47.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:58:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:48.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:58:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2029: 321 pgs: 321 active+clean; 226 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 23 04:58:48 np0005593232 nova_compute[250269]: 2026-01-23 09:58:48.919 250273 DEBUG nova.network.neutron [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Updating instance_info_cache with network_info: [{"id": "369958b7-bcfd-4341-8b47-7283c5cfffd9", "address": "fa:16:3e:da:b3:ab", "network": {"id": "6c5732b3-3484-43db-a231-53d04de40d61", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-989500160-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c288779980de4f03be20b7eed343b775", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap369958b7-bc", "ovs_interfaceid": "369958b7-bcfd-4341-8b47-7283c5cfffd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:58:48 np0005593232 nova_compute[250269]: 2026-01-23 09:58:48.978 250273 DEBUG oslo_concurrency.lockutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Releasing lock "refresh_cache-9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:58:48 np0005593232 nova_compute[250269]: 2026-01-23 09:58:48.978 250273 DEBUG nova.compute.manager [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Instance network_info: |[{"id": "369958b7-bcfd-4341-8b47-7283c5cfffd9", "address": "fa:16:3e:da:b3:ab", "network": {"id": "6c5732b3-3484-43db-a231-53d04de40d61", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-989500160-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c288779980de4f03be20b7eed343b775", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap369958b7-bc", "ovs_interfaceid": "369958b7-bcfd-4341-8b47-7283c5cfffd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:58:48 np0005593232 nova_compute[250269]: 2026-01-23 09:58:48.979 250273 DEBUG oslo_concurrency.lockutils [req-1b39a50d-655c-4584-8040-d63029c13f06 req-9dab6f19-7ce7-4b9f-985c-5e3fd825ef99 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:58:48 np0005593232 nova_compute[250269]: 2026-01-23 09:58:48.979 250273 DEBUG nova.network.neutron [req-1b39a50d-655c-4584-8040-d63029c13f06 req-9dab6f19-7ce7-4b9f-985c-5e3fd825ef99 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Refreshing network info cache for port 369958b7-bcfd-4341-8b47-7283c5cfffd9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:58:48 np0005593232 nova_compute[250269]: 2026-01-23 09:58:48.982 250273 DEBUG nova.virt.libvirt.driver [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Start _get_guest_xml network_info=[{"id": "369958b7-bcfd-4341-8b47-7283c5cfffd9", "address": "fa:16:3e:da:b3:ab", "network": {"id": "6c5732b3-3484-43db-a231-53d04de40d61", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-989500160-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c288779980de4f03be20b7eed343b775", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap369958b7-bc", "ovs_interfaceid": "369958b7-bcfd-4341-8b47-7283c5cfffd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:58:48 np0005593232 nova_compute[250269]: 2026-01-23 09:58:48.988 250273 WARNING nova.virt.libvirt.driver [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.003 250273 DEBUG nova.virt.libvirt.host [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.004 250273 DEBUG nova.virt.libvirt.host [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.008 250273 DEBUG nova.virt.libvirt.host [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.009 250273 DEBUG nova.virt.libvirt.host [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.011 250273 DEBUG nova.virt.libvirt.driver [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.012 250273 DEBUG nova.virt.hardware [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.013 250273 DEBUG nova.virt.hardware [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.013 250273 DEBUG nova.virt.hardware [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.014 250273 DEBUG nova.virt.hardware [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.014 250273 DEBUG nova.virt.hardware [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.015 250273 DEBUG nova.virt.hardware [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.015 250273 DEBUG nova.virt.hardware [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.016 250273 DEBUG nova.virt.hardware [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.017 250273 DEBUG nova.virt.hardware [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.017 250273 DEBUG nova.virt.hardware [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.018 250273 DEBUG nova.virt.hardware [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.023 250273 DEBUG oslo_concurrency.processutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:58:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:58:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/507530720' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.467 250273 DEBUG oslo_concurrency.processutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.500 250273 DEBUG nova.storage.rbd_utils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] rbd image 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.504 250273 DEBUG oslo_concurrency.processutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:58:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:58:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:49.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:58:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:58:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1470281102' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.976 250273 DEBUG oslo_concurrency.processutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.978 250273 DEBUG nova.virt.libvirt.vif [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:58:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-123958626',display_name='tempest-tempest.common.compute-instance-123958626-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-123958626-1',id=91,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c288779980de4f03be20b7eed343b775',ramdisk_id='',reservation_id='r-4ujtl3ky',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-351408189',owner_user_name='tempest-MultipleC
reateTestJSON-351408189-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:58:39Z,user_data=None,user_id='d83df80213fd40f99fdc68c146fe9a2a',uuid=9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "369958b7-bcfd-4341-8b47-7283c5cfffd9", "address": "fa:16:3e:da:b3:ab", "network": {"id": "6c5732b3-3484-43db-a231-53d04de40d61", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-989500160-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c288779980de4f03be20b7eed343b775", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap369958b7-bc", "ovs_interfaceid": "369958b7-bcfd-4341-8b47-7283c5cfffd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.979 250273 DEBUG nova.network.os_vif_util [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Converting VIF {"id": "369958b7-bcfd-4341-8b47-7283c5cfffd9", "address": "fa:16:3e:da:b3:ab", "network": {"id": "6c5732b3-3484-43db-a231-53d04de40d61", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-989500160-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c288779980de4f03be20b7eed343b775", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap369958b7-bc", "ovs_interfaceid": "369958b7-bcfd-4341-8b47-7283c5cfffd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.980 250273 DEBUG nova.network.os_vif_util [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:b3:ab,bridge_name='br-int',has_traffic_filtering=True,id=369958b7-bcfd-4341-8b47-7283c5cfffd9,network=Network(6c5732b3-3484-43db-a231-53d04de40d61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap369958b7-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:58:49 np0005593232 nova_compute[250269]: 2026-01-23 09:58:49.981 250273 DEBUG nova.objects.instance [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:58:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:58:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:50.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.090 250273 DEBUG nova.virt.libvirt.driver [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:58:50 np0005593232 nova_compute[250269]:  <uuid>9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44</uuid>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:  <name>instance-0000005b</name>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <nova:name>tempest-tempest.common.compute-instance-123958626-1</nova:name>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:58:48</nova:creationTime>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:58:50 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:        <nova:user uuid="d83df80213fd40f99fdc68c146fe9a2a">tempest-MultipleCreateTestJSON-351408189-project-member</nova:user>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:        <nova:project uuid="c288779980de4f03be20b7eed343b775">tempest-MultipleCreateTestJSON-351408189</nova:project>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:        <nova:port uuid="369958b7-bcfd-4341-8b47-7283c5cfffd9">
Jan 23 04:58:50 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <entry name="serial">9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44</entry>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <entry name="uuid">9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44</entry>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44_disk">
Jan 23 04:58:50 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:58:50 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44_disk.config">
Jan 23 04:58:50 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:58:50 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:da:b3:ab"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <target dev="tap369958b7-bc"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44/console.log" append="off"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:58:50 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:58:50 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:58:50 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:58:50 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.091 250273 DEBUG nova.compute.manager [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Preparing to wait for external event network-vif-plugged-369958b7-bcfd-4341-8b47-7283c5cfffd9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.092 250273 DEBUG oslo_concurrency.lockutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Acquiring lock "9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.092 250273 DEBUG oslo_concurrency.lockutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Lock "9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.092 250273 DEBUG oslo_concurrency.lockutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Lock "9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.093 250273 DEBUG nova.virt.libvirt.vif [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:58:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-123958626',display_name='tempest-tempest.common.compute-instance-123958626-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-123958626-1',id=91,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c288779980de4f03be20b7eed343b775',ramdisk_id='',reservation_id='r-4ujtl3ky',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-351408189',owner_user_name='tempest
-MultipleCreateTestJSON-351408189-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:58:39Z,user_data=None,user_id='d83df80213fd40f99fdc68c146fe9a2a',uuid=9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "369958b7-bcfd-4341-8b47-7283c5cfffd9", "address": "fa:16:3e:da:b3:ab", "network": {"id": "6c5732b3-3484-43db-a231-53d04de40d61", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-989500160-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c288779980de4f03be20b7eed343b775", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap369958b7-bc", "ovs_interfaceid": "369958b7-bcfd-4341-8b47-7283c5cfffd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.093 250273 DEBUG nova.network.os_vif_util [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Converting VIF {"id": "369958b7-bcfd-4341-8b47-7283c5cfffd9", "address": "fa:16:3e:da:b3:ab", "network": {"id": "6c5732b3-3484-43db-a231-53d04de40d61", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-989500160-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c288779980de4f03be20b7eed343b775", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap369958b7-bc", "ovs_interfaceid": "369958b7-bcfd-4341-8b47-7283c5cfffd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.094 250273 DEBUG nova.network.os_vif_util [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:b3:ab,bridge_name='br-int',has_traffic_filtering=True,id=369958b7-bcfd-4341-8b47-7283c5cfffd9,network=Network(6c5732b3-3484-43db-a231-53d04de40d61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap369958b7-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.094 250273 DEBUG os_vif [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:b3:ab,bridge_name='br-int',has_traffic_filtering=True,id=369958b7-bcfd-4341-8b47-7283c5cfffd9,network=Network(6c5732b3-3484-43db-a231-53d04de40d61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap369958b7-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.094 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.095 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.095 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.098 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.098 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap369958b7-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.098 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap369958b7-bc, col_values=(('external_ids', {'iface-id': '369958b7-bcfd-4341-8b47-7283c5cfffd9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:b3:ab', 'vm-uuid': '9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.100 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:50 np0005593232 NetworkManager[49057]: <info>  [1769162330.1011] manager: (tap369958b7-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/143)
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.102 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.107 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.109 250273 INFO os_vif [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:b3:ab,bridge_name='br-int',has_traffic_filtering=True,id=369958b7-bcfd-4341-8b47-7283c5cfffd9,network=Network(6c5732b3-3484-43db-a231-53d04de40d61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap369958b7-bc')#033[00m
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.358 250273 DEBUG nova.virt.libvirt.driver [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.359 250273 DEBUG nova.virt.libvirt.driver [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.360 250273 DEBUG nova.virt.libvirt.driver [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] No VIF found with MAC fa:16:3e:da:b3:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.360 250273 INFO nova.virt.libvirt.driver [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Using config drive#033[00m
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.386 250273 DEBUG nova.storage.rbd_utils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] rbd image 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:58:50 np0005593232 podman[308652]: 2026-01-23 09:58:50.457022 +0000 UTC m=+0.056518349 container exec 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 04:58:50 np0005593232 nova_compute[250269]: 2026-01-23 09:58:50.508 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:50 np0005593232 podman[308652]: 2026-01-23 09:58:50.560221385 +0000 UTC m=+0.159717714 container exec_died 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:58:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2030: 321 pgs: 321 active+clean; 226 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 23 04:58:51 np0005593232 podman[308804]: 2026-01-23 09:58:51.142714463 +0000 UTC m=+0.052121634 container exec b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 04:58:51 np0005593232 podman[308804]: 2026-01-23 09:58:51.159386007 +0000 UTC m=+0.068793168 container exec_died b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 04:58:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:58:51 np0005593232 nova_compute[250269]: 2026-01-23 09:58:51.217 250273 INFO nova.virt.libvirt.driver [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Creating config drive at /var/lib/nova/instances/9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44/disk.config#033[00m
Jan 23 04:58:51 np0005593232 nova_compute[250269]: 2026-01-23 09:58:51.222 250273 DEBUG oslo_concurrency.processutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjgw503hv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:58:51 np0005593232 nova_compute[250269]: 2026-01-23 09:58:51.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:58:51 np0005593232 nova_compute[250269]: 2026-01-23 09:58:51.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 23 04:58:51 np0005593232 nova_compute[250269]: 2026-01-23 09:58:51.338 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 23 04:58:51 np0005593232 nova_compute[250269]: 2026-01-23 09:58:51.353 250273 DEBUG oslo_concurrency.processutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjgw503hv" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:58:51 np0005593232 podman[308865]: 2026-01-23 09:58:51.363225115 +0000 UTC m=+0.060417040 container exec 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, version=2.2.4, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2)
Jan 23 04:58:51 np0005593232 podman[308865]: 2026-01-23 09:58:51.376472511 +0000 UTC m=+0.073664446 container exec_died 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, vcs-type=git, version=2.2.4, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., release=1793, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 04:58:51 np0005593232 nova_compute[250269]: 2026-01-23 09:58:51.390 250273 DEBUG nova.storage.rbd_utils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] rbd image 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:58:51 np0005593232 nova_compute[250269]: 2026-01-23 09:58:51.400 250273 DEBUG oslo_concurrency.processutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44/disk.config 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:58:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:58:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:58:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:58:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:58:51 np0005593232 nova_compute[250269]: 2026-01-23 09:58:51.700 250273 DEBUG oslo_concurrency.processutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44/disk.config 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:58:51 np0005593232 nova_compute[250269]: 2026-01-23 09:58:51.701 250273 INFO nova.virt.libvirt.driver [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Deleting local config drive /var/lib/nova/instances/9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44/disk.config because it was imported into RBD.#033[00m
Jan 23 04:58:51 np0005593232 kernel: tap369958b7-bc: entered promiscuous mode
Jan 23 04:58:51 np0005593232 NetworkManager[49057]: <info>  [1769162331.7571] manager: (tap369958b7-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/144)
Jan 23 04:58:51 np0005593232 ovn_controller[151001]: 2026-01-23T09:58:51Z|00287|binding|INFO|Claiming lport 369958b7-bcfd-4341-8b47-7283c5cfffd9 for this chassis.
Jan 23 04:58:51 np0005593232 nova_compute[250269]: 2026-01-23 09:58:51.757 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:51 np0005593232 ovn_controller[151001]: 2026-01-23T09:58:51Z|00288|binding|INFO|369958b7-bcfd-4341-8b47-7283c5cfffd9: Claiming fa:16:3e:da:b3:ab 10.100.0.4
Jan 23 04:58:51 np0005593232 systemd-machined[215836]: New machine qemu-35-instance-0000005b.
Jan 23 04:58:51 np0005593232 nova_compute[250269]: 2026-01-23 09:58:51.822 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:51 np0005593232 systemd[1]: Started Virtual Machine qemu-35-instance-0000005b.
Jan 23 04:58:51 np0005593232 ovn_controller[151001]: 2026-01-23T09:58:51Z|00289|binding|INFO|Setting lport 369958b7-bcfd-4341-8b47-7283c5cfffd9 ovn-installed in OVS
Jan 23 04:58:51 np0005593232 nova_compute[250269]: 2026-01-23 09:58:51.833 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:51 np0005593232 systemd-udevd[309046]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:58:51 np0005593232 NetworkManager[49057]: <info>  [1769162331.8462] device (tap369958b7-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:58:51 np0005593232 NetworkManager[49057]: <info>  [1769162331.8473] device (tap369958b7-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:58:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:58:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:51.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:58:51 np0005593232 ovn_controller[151001]: 2026-01-23T09:58:51Z|00290|binding|INFO|Setting lport 369958b7-bcfd-4341-8b47-7283c5cfffd9 up in Southbound
Jan 23 04:58:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:51.885 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:b3:ab 10.100.0.4'], port_security=['fa:16:3e:da:b3:ab 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c5732b3-3484-43db-a231-53d04de40d61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c288779980de4f03be20b7eed343b775', 'neutron:revision_number': '2', 'neutron:security_group_ids': '288ecf98-3e6e-478c-8e27-86a4106b4ef8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2529943-1c00-4757-827e-798919a83756, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=369958b7-bcfd-4341-8b47-7283c5cfffd9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:58:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:51.886 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 369958b7-bcfd-4341-8b47-7283c5cfffd9 in datapath 6c5732b3-3484-43db-a231-53d04de40d61 bound to our chassis#033[00m
Jan 23 04:58:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:51.888 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6c5732b3-3484-43db-a231-53d04de40d61#033[00m
Jan 23 04:58:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:51.899 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9f9d2418-79a4-43e6-92ae-3b63ee33321f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:51.900 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6c5732b3-31 in ovnmeta-6c5732b3-3484-43db-a231-53d04de40d61 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:58:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:51.902 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6c5732b3-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:58:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:51.903 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4277550c-2e7f-4465-a65a-e7bf8757d48c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:51.904 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6569df35-0f34-44b8-ae5a-6bc7c45cc29c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:51.916 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[9f020960-c092-4cc7-ab37-44e0476cea22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:51.941 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ed7a33cd-97dc-4627-a314-d9f5e0acc923]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:51.981 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[6ec2b8df-ea74-45f7-8bab-5e5fc251216c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:51.987 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[73c3760c-8b2a-4515-bf50-2249f2109fa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:51 np0005593232 systemd-udevd[309049]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:58:51 np0005593232 NetworkManager[49057]: <info>  [1769162331.9891] manager: (tap6c5732b3-30): new Veth device (/org/freedesktop/NetworkManager/Devices/145)
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:52.030 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[9f391ee0-492e-47e0-966e-f4b9e76b0431]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:52.034 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[83d78848-7728-4b07-8e76-40ccd08b283f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:52 np0005593232 NetworkManager[49057]: <info>  [1769162332.0609] device (tap6c5732b3-30): carrier: link connected
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:52.067 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[845e6925-0b24-4c0c-b852-2e01029d22fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:52.083 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[22d22642-f26f-40e3-91bb-97c0380afc34]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c5732b3-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:04:ad:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624520, 'reachable_time': 32732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309092, 'error': None, 'target': 'ovnmeta-6c5732b3-3484-43db-a231-53d04de40d61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:52.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:52.099 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a795a731-3200-411e-92f0-4553bc0d13d6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe04:adb9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624520, 'tstamp': 624520}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309093, 'error': None, 'target': 'ovnmeta-6c5732b3-3484-43db-a231-53d04de40d61', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:52.116 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[58d8d67b-7dc8-4ca9-b84b-98e74e74714d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6c5732b3-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:04:ad:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 87], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624520, 'reachable_time': 32732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309094, 'error': None, 'target': 'ovnmeta-6c5732b3-3484-43db-a231-53d04de40d61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:52.153 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ae012e21-acc0-4b80-96af-76220aa5f058]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:52.215 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8ee36fc8-aeb3-4064-a59a-f95b13b09995]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:52.217 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c5732b3-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:52.218 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:52.219 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c5732b3-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:58:52 np0005593232 nova_compute[250269]: 2026-01-23 09:58:52.273 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:52 np0005593232 NetworkManager[49057]: <info>  [1769162332.2740] manager: (tap6c5732b3-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Jan 23 04:58:52 np0005593232 kernel: tap6c5732b3-30: entered promiscuous mode
Jan 23 04:58:52 np0005593232 nova_compute[250269]: 2026-01-23 09:58:52.276 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:52.276 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6c5732b3-30, col_values=(('external_ids', {'iface-id': '4f372140-9451-4bb5-99b3-fc5570b8346b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:58:52 np0005593232 nova_compute[250269]: 2026-01-23 09:58:52.277 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:52 np0005593232 ovn_controller[151001]: 2026-01-23T09:58:52Z|00291|binding|INFO|Releasing lport 4f372140-9451-4bb5-99b3-fc5570b8346b from this chassis (sb_readonly=0)
Jan 23 04:58:52 np0005593232 nova_compute[250269]: 2026-01-23 09:58:52.293 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:52.294 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6c5732b3-3484-43db-a231-53d04de40d61.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6c5732b3-3484-43db-a231-53d04de40d61.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:52.296 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ee61b133-6fce-455e-984c-8bd3bd0ad5e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:52.298 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-6c5732b3-3484-43db-a231-53d04de40d61
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/6c5732b3-3484-43db-a231-53d04de40d61.pid.haproxy
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 6c5732b3-3484-43db-a231-53d04de40d61
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:58:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:52.299 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6c5732b3-3484-43db-a231-53d04de40d61', 'env', 'PROCESS_TAG=haproxy-6c5732b3-3484-43db-a231-53d04de40d61', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6c5732b3-3484-43db-a231-53d04de40d61.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 04:58:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:58:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:58:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 04:58:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:58:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 04:58:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:58:52 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 428ea120-2213-4848-8079-26a696c66a11 does not exist
Jan 23 04:58:52 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d4b67b21-9309-4009-8e61-dcbe477f6847 does not exist
Jan 23 04:58:52 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev cbd27790-08f3-4a9c-8545-aea609ef3089 does not exist
Jan 23 04:58:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 04:58:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 04:58:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 04:58:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:58:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 04:58:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 04:58:52 np0005593232 nova_compute[250269]: 2026-01-23 09:58:52.521 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162332.521033, 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:58:52 np0005593232 nova_compute[250269]: 2026-01-23 09:58:52.522 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] VM Started (Lifecycle Event)#033[00m
Jan 23 04:58:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:58:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:58:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 04:58:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:58:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 04:58:52 np0005593232 nova_compute[250269]: 2026-01-23 09:58:52.562 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:58:52 np0005593232 nova_compute[250269]: 2026-01-23 09:58:52.566 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162332.522179, 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:58:52 np0005593232 nova_compute[250269]: 2026-01-23 09:58:52.566 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:58:52 np0005593232 nova_compute[250269]: 2026-01-23 09:58:52.615 250273 DEBUG nova.network.neutron [req-1b39a50d-655c-4584-8040-d63029c13f06 req-9dab6f19-7ce7-4b9f-985c-5e3fd825ef99 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Updated VIF entry in instance network info cache for port 369958b7-bcfd-4341-8b47-7283c5cfffd9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:58:52 np0005593232 nova_compute[250269]: 2026-01-23 09:58:52.616 250273 DEBUG nova.network.neutron [req-1b39a50d-655c-4584-8040-d63029c13f06 req-9dab6f19-7ce7-4b9f-985c-5e3fd825ef99 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Updating instance_info_cache with network_info: [{"id": "369958b7-bcfd-4341-8b47-7283c5cfffd9", "address": "fa:16:3e:da:b3:ab", "network": {"id": "6c5732b3-3484-43db-a231-53d04de40d61", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-989500160-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c288779980de4f03be20b7eed343b775", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap369958b7-bc", "ovs_interfaceid": "369958b7-bcfd-4341-8b47-7283c5cfffd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:58:52 np0005593232 nova_compute[250269]: 2026-01-23 09:58:52.675 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:58:52 np0005593232 podman[309246]: 2026-01-23 09:58:52.675478847 +0000 UTC m=+0.050506397 container create a7ec8f17d8486dc71a85d73b17c771344366483b3775006d685bd71512322a76 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c5732b3-3484-43db-a231-53d04de40d61, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 04:58:52 np0005593232 nova_compute[250269]: 2026-01-23 09:58:52.676 250273 DEBUG oslo_concurrency.lockutils [req-1b39a50d-655c-4584-8040-d63029c13f06 req-9dab6f19-7ce7-4b9f-985c-5e3fd825ef99 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:58:52 np0005593232 nova_compute[250269]: 2026-01-23 09:58:52.680 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:58:52 np0005593232 nova_compute[250269]: 2026-01-23 09:58:52.716 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:58:52 np0005593232 systemd[1]: Started libpod-conmon-a7ec8f17d8486dc71a85d73b17c771344366483b3775006d685bd71512322a76.scope.
Jan 23 04:58:52 np0005593232 podman[309246]: 2026-01-23 09:58:52.650390854 +0000 UTC m=+0.025418434 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:58:52 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:58:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8b18948b3770d991732096597e5f686ccb14aaf751d0a2aeb731fd3876afa0e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:58:52 np0005593232 podman[309246]: 2026-01-23 09:58:52.7712167 +0000 UTC m=+0.146244250 container init a7ec8f17d8486dc71a85d73b17c771344366483b3775006d685bd71512322a76 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c5732b3-3484-43db-a231-53d04de40d61, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 23 04:58:52 np0005593232 podman[309246]: 2026-01-23 09:58:52.777745296 +0000 UTC m=+0.152772846 container start a7ec8f17d8486dc71a85d73b17c771344366483b3775006d685bd71512322a76 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c5732b3-3484-43db-a231-53d04de40d61, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 23 04:58:52 np0005593232 neutron-haproxy-ovnmeta-6c5732b3-3484-43db-a231-53d04de40d61[309298]: [NOTICE]   (309303) : New worker (309305) forked
Jan 23 04:58:52 np0005593232 neutron-haproxy-ovnmeta-6c5732b3-3484-43db-a231-53d04de40d61[309298]: [NOTICE]   (309303) : Loading success.
Jan 23 04:58:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2031: 321 pgs: 321 active+clean; 227 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 3.6 MiB/s wr, 62 op/s
Jan 23 04:58:53 np0005593232 podman[309352]: 2026-01-23 09:58:53.06725155 +0000 UTC m=+0.051391743 container create bc258e1a51b5cc4b57029ad47e207330857f8a3a8158d2da6122bc04e1a3b068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_curie, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 04:58:53 np0005593232 systemd[1]: Started libpod-conmon-bc258e1a51b5cc4b57029ad47e207330857f8a3a8158d2da6122bc04e1a3b068.scope.
Jan 23 04:58:53 np0005593232 podman[309352]: 2026-01-23 09:58:53.044812072 +0000 UTC m=+0.028952325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:58:53 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:58:53 np0005593232 podman[309352]: 2026-01-23 09:58:53.157058344 +0000 UTC m=+0.141198557 container init bc258e1a51b5cc4b57029ad47e207330857f8a3a8158d2da6122bc04e1a3b068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:58:53 np0005593232 podman[309352]: 2026-01-23 09:58:53.166477922 +0000 UTC m=+0.150618115 container start bc258e1a51b5cc4b57029ad47e207330857f8a3a8158d2da6122bc04e1a3b068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 04:58:53 np0005593232 podman[309352]: 2026-01-23 09:58:53.170474256 +0000 UTC m=+0.154614469 container attach bc258e1a51b5cc4b57029ad47e207330857f8a3a8158d2da6122bc04e1a3b068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_curie, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:58:53 np0005593232 tender_curie[309369]: 167 167
Jan 23 04:58:53 np0005593232 systemd[1]: libpod-bc258e1a51b5cc4b57029ad47e207330857f8a3a8158d2da6122bc04e1a3b068.scope: Deactivated successfully.
Jan 23 04:58:53 np0005593232 podman[309352]: 2026-01-23 09:58:53.173941284 +0000 UTC m=+0.158081477 container died bc258e1a51b5cc4b57029ad47e207330857f8a3a8158d2da6122bc04e1a3b068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:58:53 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a78012d163ab78f6201f51a72bceed79f6cc1e1e69f455b4e04458e62eb72654-merged.mount: Deactivated successfully.
Jan 23 04:58:53 np0005593232 podman[309352]: 2026-01-23 09:58:53.220631132 +0000 UTC m=+0.204771325 container remove bc258e1a51b5cc4b57029ad47e207330857f8a3a8158d2da6122bc04e1a3b068 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_curie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 23 04:58:53 np0005593232 systemd[1]: libpod-conmon-bc258e1a51b5cc4b57029ad47e207330857f8a3a8158d2da6122bc04e1a3b068.scope: Deactivated successfully.
Jan 23 04:58:53 np0005593232 podman[309393]: 2026-01-23 09:58:53.449187793 +0000 UTC m=+0.044677331 container create 262c8a4d3328476cb30076dd94a8c31bf6beddad261db4fb19f988a988c2c29a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:58:53 np0005593232 systemd[1]: Started libpod-conmon-262c8a4d3328476cb30076dd94a8c31bf6beddad261db4fb19f988a988c2c29a.scope.
Jan 23 04:58:53 np0005593232 podman[309393]: 2026-01-23 09:58:53.429638887 +0000 UTC m=+0.025128445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:58:53 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:58:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad9fccd2e073fc0b15f597cdfdf050264ec76fecf93e1260e435f6a3463b73e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:58:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad9fccd2e073fc0b15f597cdfdf050264ec76fecf93e1260e435f6a3463b73e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:58:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad9fccd2e073fc0b15f597cdfdf050264ec76fecf93e1260e435f6a3463b73e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:58:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad9fccd2e073fc0b15f597cdfdf050264ec76fecf93e1260e435f6a3463b73e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:58:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad9fccd2e073fc0b15f597cdfdf050264ec76fecf93e1260e435f6a3463b73e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 04:58:53 np0005593232 podman[309393]: 2026-01-23 09:58:53.550630578 +0000 UTC m=+0.146120136 container init 262c8a4d3328476cb30076dd94a8c31bf6beddad261db4fb19f988a988c2c29a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mayer, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Jan 23 04:58:53 np0005593232 podman[309393]: 2026-01-23 09:58:53.560114298 +0000 UTC m=+0.155603876 container start 262c8a4d3328476cb30076dd94a8c31bf6beddad261db4fb19f988a988c2c29a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mayer, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:58:53 np0005593232 podman[309393]: 2026-01-23 09:58:53.565214623 +0000 UTC m=+0.160704201 container attach 262c8a4d3328476cb30076dd94a8c31bf6beddad261db4fb19f988a988c2c29a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mayer, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 04:58:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:58:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:53.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:58:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:58:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:54.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:58:54 np0005593232 musing_mayer[309410]: --> passed data devices: 0 physical, 1 LVM
Jan 23 04:58:54 np0005593232 musing_mayer[309410]: --> relative data size: 1.0
Jan 23 04:58:54 np0005593232 musing_mayer[309410]: --> All data devices are unavailable
Jan 23 04:58:54 np0005593232 systemd[1]: libpod-262c8a4d3328476cb30076dd94a8c31bf6beddad261db4fb19f988a988c2c29a.scope: Deactivated successfully.
Jan 23 04:58:54 np0005593232 podman[309393]: 2026-01-23 09:58:54.414510519 +0000 UTC m=+1.010000057 container died 262c8a4d3328476cb30076dd94a8c31bf6beddad261db4fb19f988a988c2c29a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:58:54 np0005593232 systemd[1]: var-lib-containers-storage-overlay-dad9fccd2e073fc0b15f597cdfdf050264ec76fecf93e1260e435f6a3463b73e-merged.mount: Deactivated successfully.
Jan 23 04:58:54 np0005593232 podman[309393]: 2026-01-23 09:58:54.464248734 +0000 UTC m=+1.059738272 container remove 262c8a4d3328476cb30076dd94a8c31bf6beddad261db4fb19f988a988c2c29a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_mayer, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 04:58:54 np0005593232 systemd[1]: libpod-conmon-262c8a4d3328476cb30076dd94a8c31bf6beddad261db4fb19f988a988c2c29a.scope: Deactivated successfully.
Jan 23 04:58:54 np0005593232 nova_compute[250269]: 2026-01-23 09:58:54.523 250273 DEBUG nova.compute.manager [req-e3514af1-81cc-454a-8845-928e2c5343e8 req-67a11e73-c591-4c83-ab67-956b91a91904 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Received event network-vif-plugged-369958b7-bcfd-4341-8b47-7283c5cfffd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:58:54 np0005593232 nova_compute[250269]: 2026-01-23 09:58:54.524 250273 DEBUG oslo_concurrency.lockutils [req-e3514af1-81cc-454a-8845-928e2c5343e8 req-67a11e73-c591-4c83-ab67-956b91a91904 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:58:54 np0005593232 nova_compute[250269]: 2026-01-23 09:58:54.524 250273 DEBUG oslo_concurrency.lockutils [req-e3514af1-81cc-454a-8845-928e2c5343e8 req-67a11e73-c591-4c83-ab67-956b91a91904 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:58:54 np0005593232 nova_compute[250269]: 2026-01-23 09:58:54.524 250273 DEBUG oslo_concurrency.lockutils [req-e3514af1-81cc-454a-8845-928e2c5343e8 req-67a11e73-c591-4c83-ab67-956b91a91904 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:58:54 np0005593232 nova_compute[250269]: 2026-01-23 09:58:54.524 250273 DEBUG nova.compute.manager [req-e3514af1-81cc-454a-8845-928e2c5343e8 req-67a11e73-c591-4c83-ab67-956b91a91904 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Processing event network-vif-plugged-369958b7-bcfd-4341-8b47-7283c5cfffd9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 04:58:54 np0005593232 nova_compute[250269]: 2026-01-23 09:58:54.526 250273 DEBUG nova.compute.manager [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:58:54 np0005593232 nova_compute[250269]: 2026-01-23 09:58:54.531 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162334.5310054, 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:58:54 np0005593232 nova_compute[250269]: 2026-01-23 09:58:54.531 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:58:54 np0005593232 nova_compute[250269]: 2026-01-23 09:58:54.533 250273 DEBUG nova.virt.libvirt.driver [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:58:54 np0005593232 nova_compute[250269]: 2026-01-23 09:58:54.538 250273 INFO nova.virt.libvirt.driver [-] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Instance spawned successfully.#033[00m
Jan 23 04:58:54 np0005593232 nova_compute[250269]: 2026-01-23 09:58:54.539 250273 DEBUG nova.virt.libvirt.driver [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:58:54 np0005593232 nova_compute[250269]: 2026-01-23 09:58:54.578 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:58:54 np0005593232 nova_compute[250269]: 2026-01-23 09:58:54.585 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:58:54 np0005593232 nova_compute[250269]: 2026-01-23 09:58:54.589 250273 DEBUG nova.virt.libvirt.driver [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:58:54 np0005593232 nova_compute[250269]: 2026-01-23 09:58:54.590 250273 DEBUG nova.virt.libvirt.driver [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:58:54 np0005593232 nova_compute[250269]: 2026-01-23 09:58:54.591 250273 DEBUG nova.virt.libvirt.driver [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:58:54 np0005593232 nova_compute[250269]: 2026-01-23 09:58:54.591 250273 DEBUG nova.virt.libvirt.driver [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:58:54 np0005593232 nova_compute[250269]: 2026-01-23 09:58:54.592 250273 DEBUG nova.virt.libvirt.driver [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:58:54 np0005593232 nova_compute[250269]: 2026-01-23 09:58:54.592 250273 DEBUG nova.virt.libvirt.driver [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:58:54 np0005593232 nova_compute[250269]: 2026-01-23 09:58:54.631 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:58:54 np0005593232 nova_compute[250269]: 2026-01-23 09:58:54.708 250273 INFO nova.compute.manager [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Took 15.19 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:58:54 np0005593232 nova_compute[250269]: 2026-01-23 09:58:54.709 250273 DEBUG nova.compute.manager [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:58:54 np0005593232 nova_compute[250269]: 2026-01-23 09:58:54.824 250273 INFO nova.compute.manager [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Took 16.77 seconds to build instance.#033[00m
Jan 23 04:58:54 np0005593232 nova_compute[250269]: 2026-01-23 09:58:54.862 250273 DEBUG oslo_concurrency.lockutils [None req-a86f2d7d-549a-46be-8d76-9e5f7e2dc573 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Lock "9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:58:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2032: 321 pgs: 321 active+clean; 227 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 6.7 KiB/s rd, 12 KiB/s wr, 8 op/s
Jan 23 04:58:55 np0005593232 podman[309578]: 2026-01-23 09:58:55.050524889 +0000 UTC m=+0.045311010 container create 037fa1ed0944d6e1dc014de978e9ab48f797fd48eb483e9e5a5f79d824a95ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:58:55 np0005593232 systemd[1]: Started libpod-conmon-037fa1ed0944d6e1dc014de978e9ab48f797fd48eb483e9e5a5f79d824a95ea6.scope.
Jan 23 04:58:55 np0005593232 nova_compute[250269]: 2026-01-23 09:58:55.103 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:55 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:58:55 np0005593232 podman[309578]: 2026-01-23 09:58:55.030647764 +0000 UTC m=+0.025433935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:58:55 np0005593232 podman[309578]: 2026-01-23 09:58:55.130635198 +0000 UTC m=+0.125421319 container init 037fa1ed0944d6e1dc014de978e9ab48f797fd48eb483e9e5a5f79d824a95ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 23 04:58:55 np0005593232 podman[309578]: 2026-01-23 09:58:55.138651476 +0000 UTC m=+0.133437597 container start 037fa1ed0944d6e1dc014de978e9ab48f797fd48eb483e9e5a5f79d824a95ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_rubin, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:58:55 np0005593232 reverent_rubin[309594]: 167 167
Jan 23 04:58:55 np0005593232 podman[309578]: 2026-01-23 09:58:55.144707008 +0000 UTC m=+0.139493159 container attach 037fa1ed0944d6e1dc014de978e9ab48f797fd48eb483e9e5a5f79d824a95ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 23 04:58:55 np0005593232 systemd[1]: libpod-037fa1ed0944d6e1dc014de978e9ab48f797fd48eb483e9e5a5f79d824a95ea6.scope: Deactivated successfully.
Jan 23 04:58:55 np0005593232 podman[309578]: 2026-01-23 09:58:55.145656395 +0000 UTC m=+0.140442516 container died 037fa1ed0944d6e1dc014de978e9ab48f797fd48eb483e9e5a5f79d824a95ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_rubin, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 04:58:55 np0005593232 systemd[1]: var-lib-containers-storage-overlay-83fda80d22b46eee3e204438908c738b588739e1549747aff7b4c67834ab04b4-merged.mount: Deactivated successfully.
Jan 23 04:58:55 np0005593232 podman[309578]: 2026-01-23 09:58:55.18660097 +0000 UTC m=+0.181387091 container remove 037fa1ed0944d6e1dc014de978e9ab48f797fd48eb483e9e5a5f79d824a95ea6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_rubin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Jan 23 04:58:55 np0005593232 systemd[1]: libpod-conmon-037fa1ed0944d6e1dc014de978e9ab48f797fd48eb483e9e5a5f79d824a95ea6.scope: Deactivated successfully.
Jan 23 04:58:55 np0005593232 podman[309616]: 2026-01-23 09:58:55.357324305 +0000 UTC m=+0.040075581 container create 7f8ce98312f36fe4cfda147a66cfd89ea88a99a9f6765fb022a336476a763fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 23 04:58:55 np0005593232 systemd[1]: Started libpod-conmon-7f8ce98312f36fe4cfda147a66cfd89ea88a99a9f6765fb022a336476a763fde.scope.
Jan 23 04:58:55 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:58:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b0c9363fa16daf3eaa915cda35f8050d0a528f0e872bc004776fa23ba3e5062/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:58:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b0c9363fa16daf3eaa915cda35f8050d0a528f0e872bc004776fa23ba3e5062/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:58:55 np0005593232 podman[309616]: 2026-01-23 09:58:55.338832519 +0000 UTC m=+0.021583825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:58:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b0c9363fa16daf3eaa915cda35f8050d0a528f0e872bc004776fa23ba3e5062/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:58:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b0c9363fa16daf3eaa915cda35f8050d0a528f0e872bc004776fa23ba3e5062/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:58:55 np0005593232 podman[309616]: 2026-01-23 09:58:55.446764408 +0000 UTC m=+0.129515744 container init 7f8ce98312f36fe4cfda147a66cfd89ea88a99a9f6765fb022a336476a763fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_blackwell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 23 04:58:55 np0005593232 podman[309616]: 2026-01-23 09:58:55.454757086 +0000 UTC m=+0.137508362 container start 7f8ce98312f36fe4cfda147a66cfd89ea88a99a9f6765fb022a336476a763fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 23 04:58:55 np0005593232 podman[309616]: 2026-01-23 09:58:55.457804242 +0000 UTC m=+0.140555538 container attach 7f8ce98312f36fe4cfda147a66cfd89ea88a99a9f6765fb022a336476a763fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_blackwell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 04:58:55 np0005593232 nova_compute[250269]: 2026-01-23 09:58:55.510 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:55.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:56.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]: {
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:    "0": [
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:        {
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:            "devices": [
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:                "/dev/loop3"
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:            ],
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:            "lv_name": "ceph_lv0",
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:            "lv_size": "7511998464",
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:            "name": "ceph_lv0",
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:            "tags": {
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:                "ceph.cephx_lockbox_secret": "",
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:                "ceph.cluster_name": "ceph",
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:                "ceph.crush_device_class": "",
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:                "ceph.encrypted": "0",
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:                "ceph.osd_id": "0",
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:                "ceph.type": "block",
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:                "ceph.vdo": "0"
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:            },
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:            "type": "block",
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:            "vg_name": "ceph_vg0"
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:        }
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]:    ]
Jan 23 04:58:56 np0005593232 modest_blackwell[309633]: }
Jan 23 04:58:56 np0005593232 systemd[1]: libpod-7f8ce98312f36fe4cfda147a66cfd89ea88a99a9f6765fb022a336476a763fde.scope: Deactivated successfully.
Jan 23 04:58:56 np0005593232 podman[309616]: 2026-01-23 09:58:56.343654727 +0000 UTC m=+1.026406013 container died 7f8ce98312f36fe4cfda147a66cfd89ea88a99a9f6765fb022a336476a763fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 04:58:56 np0005593232 nova_compute[250269]: 2026-01-23 09:58:56.783 250273 DEBUG nova.compute.manager [req-b824fe50-4704-435c-9c09-fb4b76ca4923 req-7eb3fede-d44d-4f26-91b3-2769477abe29 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Received event network-vif-plugged-369958b7-bcfd-4341-8b47-7283c5cfffd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:58:56 np0005593232 nova_compute[250269]: 2026-01-23 09:58:56.785 250273 DEBUG oslo_concurrency.lockutils [req-b824fe50-4704-435c-9c09-fb4b76ca4923 req-7eb3fede-d44d-4f26-91b3-2769477abe29 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:58:56 np0005593232 nova_compute[250269]: 2026-01-23 09:58:56.785 250273 DEBUG oslo_concurrency.lockutils [req-b824fe50-4704-435c-9c09-fb4b76ca4923 req-7eb3fede-d44d-4f26-91b3-2769477abe29 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:58:56 np0005593232 nova_compute[250269]: 2026-01-23 09:58:56.785 250273 DEBUG oslo_concurrency.lockutils [req-b824fe50-4704-435c-9c09-fb4b76ca4923 req-7eb3fede-d44d-4f26-91b3-2769477abe29 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:58:56 np0005593232 nova_compute[250269]: 2026-01-23 09:58:56.786 250273 DEBUG nova.compute.manager [req-b824fe50-4704-435c-9c09-fb4b76ca4923 req-7eb3fede-d44d-4f26-91b3-2769477abe29 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] No waiting events found dispatching network-vif-plugged-369958b7-bcfd-4341-8b47-7283c5cfffd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:58:56 np0005593232 nova_compute[250269]: 2026-01-23 09:58:56.786 250273 WARNING nova.compute.manager [req-b824fe50-4704-435c-9c09-fb4b76ca4923 req-7eb3fede-d44d-4f26-91b3-2769477abe29 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Received unexpected event network-vif-plugged-369958b7-bcfd-4341-8b47-7283c5cfffd9 for instance with vm_state active and task_state None.#033[00m
Jan 23 04:58:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2033: 321 pgs: 321 active+clean; 227 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 808 KiB/s rd, 24 KiB/s wr, 35 op/s
Jan 23 04:58:57 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1b0c9363fa16daf3eaa915cda35f8050d0a528f0e872bc004776fa23ba3e5062-merged.mount: Deactivated successfully.
Jan 23 04:58:57 np0005593232 podman[309616]: 2026-01-23 09:58:57.801714688 +0000 UTC m=+2.484465964 container remove 7f8ce98312f36fe4cfda147a66cfd89ea88a99a9f6765fb022a336476a763fde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 04:58:57 np0005593232 systemd[1]: libpod-conmon-7f8ce98312f36fe4cfda147a66cfd89ea88a99a9f6765fb022a336476a763fde.scope: Deactivated successfully.
Jan 23 04:58:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:57.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:58:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:58:58.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:58:58 np0005593232 podman[309793]: 2026-01-23 09:58:58.444603533 +0000 UTC m=+0.041238034 container create d9044aa897f4c4e0e90e947a76bd160fe4200105275d5be2dac2d987cd9b7a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_payne, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:58:58 np0005593232 systemd[1]: Started libpod-conmon-d9044aa897f4c4e0e90e947a76bd160fe4200105275d5be2dac2d987cd9b7a16.scope.
Jan 23 04:58:58 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:58:58 np0005593232 podman[309793]: 2026-01-23 09:58:58.425462009 +0000 UTC m=+0.022096540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:58:58 np0005593232 podman[309793]: 2026-01-23 09:58:58.526805401 +0000 UTC m=+0.123439922 container init d9044aa897f4c4e0e90e947a76bd160fe4200105275d5be2dac2d987cd9b7a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_payne, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 04:58:58 np0005593232 podman[309793]: 2026-01-23 09:58:58.534601163 +0000 UTC m=+0.131235664 container start d9044aa897f4c4e0e90e947a76bd160fe4200105275d5be2dac2d987cd9b7a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_payne, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 04:58:58 np0005593232 podman[309793]: 2026-01-23 09:58:58.538406381 +0000 UTC m=+0.135040902 container attach d9044aa897f4c4e0e90e947a76bd160fe4200105275d5be2dac2d987cd9b7a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_payne, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Jan 23 04:58:58 np0005593232 compassionate_payne[309810]: 167 167
Jan 23 04:58:58 np0005593232 systemd[1]: libpod-d9044aa897f4c4e0e90e947a76bd160fe4200105275d5be2dac2d987cd9b7a16.scope: Deactivated successfully.
Jan 23 04:58:58 np0005593232 podman[309793]: 2026-01-23 09:58:58.542028734 +0000 UTC m=+0.138663315 container died d9044aa897f4c4e0e90e947a76bd160fe4200105275d5be2dac2d987cd9b7a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_payne, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 04:58:58 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d71a89514a8057000e644323a9953a33683c782c781d6537bdf2bc9345ce77a9-merged.mount: Deactivated successfully.
Jan 23 04:58:58 np0005593232 podman[309793]: 2026-01-23 09:58:58.581286611 +0000 UTC m=+0.177921112 container remove d9044aa897f4c4e0e90e947a76bd160fe4200105275d5be2dac2d987cd9b7a16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_payne, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:58:58 np0005593232 systemd[1]: libpod-conmon-d9044aa897f4c4e0e90e947a76bd160fe4200105275d5be2dac2d987cd9b7a16.scope: Deactivated successfully.
Jan 23 04:58:58 np0005593232 podman[309835]: 2026-01-23 09:58:58.761152397 +0000 UTC m=+0.043546130 container create 5f4af0421d33639f39126f97a6b63ca3bc73d2ed2c7074c5ca56cdb7e8f8a541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cori, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 23 04:58:58 np0005593232 systemd[1]: Started libpod-conmon-5f4af0421d33639f39126f97a6b63ca3bc73d2ed2c7074c5ca56cdb7e8f8a541.scope.
Jan 23 04:58:58 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:58:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a2a72a5f4ea529f129affbf227ca088a9d608f080ffe2a55bfe83656cede1f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 04:58:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a2a72a5f4ea529f129affbf227ca088a9d608f080ffe2a55bfe83656cede1f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 04:58:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a2a72a5f4ea529f129affbf227ca088a9d608f080ffe2a55bfe83656cede1f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 04:58:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a2a72a5f4ea529f129affbf227ca088a9d608f080ffe2a55bfe83656cede1f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 04:58:58 np0005593232 podman[309835]: 2026-01-23 09:58:58.74121408 +0000 UTC m=+0.023607833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 04:58:58 np0005593232 podman[309835]: 2026-01-23 09:58:58.847660367 +0000 UTC m=+0.130054120 container init 5f4af0421d33639f39126f97a6b63ca3bc73d2ed2c7074c5ca56cdb7e8f8a541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cori, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 04:58:58 np0005593232 podman[309835]: 2026-01-23 09:58:58.854543983 +0000 UTC m=+0.136937716 container start 5f4af0421d33639f39126f97a6b63ca3bc73d2ed2c7074c5ca56cdb7e8f8a541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cori, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 23 04:58:58 np0005593232 podman[309835]: 2026-01-23 09:58:58.857733264 +0000 UTC m=+0.140127017 container attach 5f4af0421d33639f39126f97a6b63ca3bc73d2ed2c7074c5ca56cdb7e8f8a541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 04:58:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2034: 321 pgs: 321 active+clean; 227 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 25 KiB/s wr, 132 op/s
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.208 250273 DEBUG oslo_concurrency.lockutils [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Acquiring lock "9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.210 250273 DEBUG oslo_concurrency.lockutils [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Lock "9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.211 250273 DEBUG oslo_concurrency.lockutils [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Acquiring lock "9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.211 250273 DEBUG oslo_concurrency.lockutils [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Lock "9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.211 250273 DEBUG oslo_concurrency.lockutils [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Lock "9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.213 250273 INFO nova.compute.manager [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Terminating instance#033[00m
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.214 250273 DEBUG nova.compute.manager [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 04:58:59 np0005593232 kernel: tap369958b7-bc (unregistering): left promiscuous mode
Jan 23 04:58:59 np0005593232 NetworkManager[49057]: <info>  [1769162339.2591] device (tap369958b7-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 04:58:59 np0005593232 ovn_controller[151001]: 2026-01-23T09:58:59Z|00292|binding|INFO|Releasing lport 369958b7-bcfd-4341-8b47-7283c5cfffd9 from this chassis (sb_readonly=0)
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.272 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:59 np0005593232 ovn_controller[151001]: 2026-01-23T09:58:59Z|00293|binding|INFO|Setting lport 369958b7-bcfd-4341-8b47-7283c5cfffd9 down in Southbound
Jan 23 04:58:59 np0005593232 ovn_controller[151001]: 2026-01-23T09:58:59Z|00294|binding|INFO|Removing iface tap369958b7-bc ovn-installed in OVS
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.274 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.292 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:59.295 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:b3:ab 10.100.0.4'], port_security=['fa:16:3e:da:b3:ab 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6c5732b3-3484-43db-a231-53d04de40d61', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c288779980de4f03be20b7eed343b775', 'neutron:revision_number': '4', 'neutron:security_group_ids': '288ecf98-3e6e-478c-8e27-86a4106b4ef8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c2529943-1c00-4757-827e-798919a83756, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=369958b7-bcfd-4341-8b47-7283c5cfffd9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:58:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:59.297 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 369958b7-bcfd-4341-8b47-7283c5cfffd9 in datapath 6c5732b3-3484-43db-a231-53d04de40d61 unbound from our chassis#033[00m
Jan 23 04:58:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:59.299 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6c5732b3-3484-43db-a231-53d04de40d61, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 04:58:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:59.300 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[241a06b6-8086-4f48-85f3-cbdf4360754e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:59.301 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6c5732b3-3484-43db-a231-53d04de40d61 namespace which is not needed anymore#033[00m
Jan 23 04:58:59 np0005593232 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Jan 23 04:58:59 np0005593232 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000005b.scope: Consumed 5.442s CPU time.
Jan 23 04:58:59 np0005593232 systemd-machined[215836]: Machine qemu-35-instance-0000005b terminated.
Jan 23 04:58:59 np0005593232 neutron-haproxy-ovnmeta-6c5732b3-3484-43db-a231-53d04de40d61[309298]: [NOTICE]   (309303) : haproxy version is 2.8.14-c23fe91
Jan 23 04:58:59 np0005593232 neutron-haproxy-ovnmeta-6c5732b3-3484-43db-a231-53d04de40d61[309298]: [NOTICE]   (309303) : path to executable is /usr/sbin/haproxy
Jan 23 04:58:59 np0005593232 neutron-haproxy-ovnmeta-6c5732b3-3484-43db-a231-53d04de40d61[309298]: [ALERT]    (309303) : Current worker (309305) exited with code 143 (Terminated)
Jan 23 04:58:59 np0005593232 neutron-haproxy-ovnmeta-6c5732b3-3484-43db-a231-53d04de40d61[309298]: [WARNING]  (309303) : All workers exited. Exiting... (0)
Jan 23 04:58:59 np0005593232 systemd[1]: libpod-a7ec8f17d8486dc71a85d73b17c771344366483b3775006d685bd71512322a76.scope: Deactivated successfully.
Jan 23 04:58:59 np0005593232 podman[309882]: 2026-01-23 09:58:59.437203735 +0000 UTC m=+0.046628577 container died a7ec8f17d8486dc71a85d73b17c771344366483b3775006d685bd71512322a76 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c5732b3-3484-43db-a231-53d04de40d61, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.437 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.442 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.452 250273 INFO nova.virt.libvirt.driver [-] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Instance destroyed successfully.#033[00m
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.452 250273 DEBUG nova.objects.instance [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Lazy-loading 'resources' on Instance uuid 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:58:59 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a7ec8f17d8486dc71a85d73b17c771344366483b3775006d685bd71512322a76-userdata-shm.mount: Deactivated successfully.
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.470 250273 DEBUG nova.virt.libvirt.vif [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:58:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-123958626',display_name='tempest-tempest.common.compute-instance-123958626-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-123958626-1',id=91,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:58:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c288779980de4f03be20b7eed343b775',ramdisk_id='',reservation_id='r-4ujtl3ky',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-351408189',owner_user_name='tempest-MultipleCreateTestJSON-351408189-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T09:58:54Z,user_data=None,user_id='d83df80213fd40f99fdc68c146fe9a2a',uuid=9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "369958b7-bcfd-4341-8b47-7283c5cfffd9", "address": "fa:16:3e:da:b3:ab", "network": {"id": "6c5732b3-3484-43db-a231-53d04de40d61", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-989500160-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c288779980de4f03be20b7eed343b775", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap369958b7-bc", "ovs_interfaceid": "369958b7-bcfd-4341-8b47-7283c5cfffd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.471 250273 DEBUG nova.network.os_vif_util [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Converting VIF {"id": "369958b7-bcfd-4341-8b47-7283c5cfffd9", "address": "fa:16:3e:da:b3:ab", "network": {"id": "6c5732b3-3484-43db-a231-53d04de40d61", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-989500160-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c288779980de4f03be20b7eed343b775", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap369958b7-bc", "ovs_interfaceid": "369958b7-bcfd-4341-8b47-7283c5cfffd9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:58:59 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f8b18948b3770d991732096597e5f686ccb14aaf751d0a2aeb731fd3876afa0e-merged.mount: Deactivated successfully.
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.472 250273 DEBUG nova.network.os_vif_util [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:b3:ab,bridge_name='br-int',has_traffic_filtering=True,id=369958b7-bcfd-4341-8b47-7283c5cfffd9,network=Network(6c5732b3-3484-43db-a231-53d04de40d61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap369958b7-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.472 250273 DEBUG os_vif [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:b3:ab,bridge_name='br-int',has_traffic_filtering=True,id=369958b7-bcfd-4341-8b47-7283c5cfffd9,network=Network(6c5732b3-3484-43db-a231-53d04de40d61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap369958b7-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.474 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.475 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap369958b7-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.478 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.482 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.485 250273 INFO os_vif [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:b3:ab,bridge_name='br-int',has_traffic_filtering=True,id=369958b7-bcfd-4341-8b47-7283c5cfffd9,network=Network(6c5732b3-3484-43db-a231-53d04de40d61),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap369958b7-bc')#033[00m
Jan 23 04:58:59 np0005593232 podman[309882]: 2026-01-23 09:58:59.5111911 +0000 UTC m=+0.120615942 container cleanup a7ec8f17d8486dc71a85d73b17c771344366483b3775006d685bd71512322a76 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c5732b3-3484-43db-a231-53d04de40d61, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 04:58:59 np0005593232 systemd[1]: libpod-conmon-a7ec8f17d8486dc71a85d73b17c771344366483b3775006d685bd71512322a76.scope: Deactivated successfully.
Jan 23 04:58:59 np0005593232 podman[309937]: 2026-01-23 09:58:59.569747105 +0000 UTC m=+0.038010132 container remove a7ec8f17d8486dc71a85d73b17c771344366483b3775006d685bd71512322a76 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6c5732b3-3484-43db-a231-53d04de40d61, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 04:58:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:59.575 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[22c5ea22-f966-4509-ba18-d93ffb9ad5e4]: (4, ('Fri Jan 23 09:58:59 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6c5732b3-3484-43db-a231-53d04de40d61 (a7ec8f17d8486dc71a85d73b17c771344366483b3775006d685bd71512322a76)\na7ec8f17d8486dc71a85d73b17c771344366483b3775006d685bd71512322a76\nFri Jan 23 09:58:59 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6c5732b3-3484-43db-a231-53d04de40d61 (a7ec8f17d8486dc71a85d73b17c771344366483b3775006d685bd71512322a76)\na7ec8f17d8486dc71a85d73b17c771344366483b3775006d685bd71512322a76\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:59.576 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7b316a0a-a4ea-4aec-a46c-53299517b591]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:59.577 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c5732b3-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:58:59 np0005593232 kernel: tap6c5732b3-30: left promiscuous mode
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.581 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.595 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:58:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:59.598 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6055ef09-f974-4211-a517-af5eeaa3dae3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:59.621 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0f5c6db7-73b0-4a98-a064-ae22b48c7c43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:59.622 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[fd709d69-2219-47d3-9947-8aaecdb0a69d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:59.637 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1fd90de2-f751-45d2-86b4-5f6d15d71696]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624511, 'reachable_time': 35514, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309959, 'error': None, 'target': 'ovnmeta-6c5732b3-3484-43db-a231-53d04de40d61', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:59 np0005593232 systemd[1]: run-netns-ovnmeta\x2d6c5732b3\x2d3484\x2d43db\x2da231\x2d53d04de40d61.mount: Deactivated successfully.
Jan 23 04:58:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:59.640 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6c5732b3-3484-43db-a231-53d04de40d61 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 04:58:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:58:59.641 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[6b27b970-6b4d-4b7a-b7a5-daee9e5e7064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:58:59 np0005593232 goofy_cori[309852]: {
Jan 23 04:58:59 np0005593232 goofy_cori[309852]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 04:58:59 np0005593232 goofy_cori[309852]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 04:58:59 np0005593232 goofy_cori[309852]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 04:58:59 np0005593232 goofy_cori[309852]:        "osd_id": 0,
Jan 23 04:58:59 np0005593232 goofy_cori[309852]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 04:58:59 np0005593232 goofy_cori[309852]:        "type": "bluestore"
Jan 23 04:58:59 np0005593232 goofy_cori[309852]:    }
Jan 23 04:58:59 np0005593232 goofy_cori[309852]: }
Jan 23 04:58:59 np0005593232 systemd[1]: libpod-5f4af0421d33639f39126f97a6b63ca3bc73d2ed2c7074c5ca56cdb7e8f8a541.scope: Deactivated successfully.
Jan 23 04:58:59 np0005593232 podman[309971]: 2026-01-23 09:58:59.805793048 +0000 UTC m=+0.028036299 container died 5f4af0421d33639f39126f97a6b63ca3bc73d2ed2c7074c5ca56cdb7e8f8a541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 04:58:59 np0005593232 systemd[1]: var-lib-containers-storage-overlay-8a2a72a5f4ea529f129affbf227ca088a9d608f080ffe2a55bfe83656cede1f7-merged.mount: Deactivated successfully.
Jan 23 04:58:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:58:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:58:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:58:59.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:58:59 np0005593232 podman[309971]: 2026-01-23 09:58:59.866276058 +0000 UTC m=+0.088519289 container remove 5f4af0421d33639f39126f97a6b63ca3bc73d2ed2c7074c5ca56cdb7e8f8a541 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cori, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 04:58:59 np0005593232 systemd[1]: libpod-conmon-5f4af0421d33639f39126f97a6b63ca3bc73d2ed2c7074c5ca56cdb7e8f8a541.scope: Deactivated successfully.
Jan 23 04:58:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.982 250273 DEBUG nova.compute.manager [req-eb727e8c-4710-4d03-bdc9-ade98efe2f55 req-22fec8cf-8040-45f2-baf5-ad4ec96f72a7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Received event network-vif-unplugged-369958b7-bcfd-4341-8b47-7283c5cfffd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.982 250273 DEBUG oslo_concurrency.lockutils [req-eb727e8c-4710-4d03-bdc9-ade98efe2f55 req-22fec8cf-8040-45f2-baf5-ad4ec96f72a7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.983 250273 DEBUG oslo_concurrency.lockutils [req-eb727e8c-4710-4d03-bdc9-ade98efe2f55 req-22fec8cf-8040-45f2-baf5-ad4ec96f72a7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.983 250273 DEBUG oslo_concurrency.lockutils [req-eb727e8c-4710-4d03-bdc9-ade98efe2f55 req-22fec8cf-8040-45f2-baf5-ad4ec96f72a7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.983 250273 DEBUG nova.compute.manager [req-eb727e8c-4710-4d03-bdc9-ade98efe2f55 req-22fec8cf-8040-45f2-baf5-ad4ec96f72a7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] No waiting events found dispatching network-vif-unplugged-369958b7-bcfd-4341-8b47-7283c5cfffd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:58:59 np0005593232 nova_compute[250269]: 2026-01-23 09:58:59.984 250273 DEBUG nova.compute.manager [req-eb727e8c-4710-4d03-bdc9-ade98efe2f55 req-22fec8cf-8040-45f2-baf5-ad4ec96f72a7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Received event network-vif-unplugged-369958b7-bcfd-4341-8b47-7283c5cfffd9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 04:59:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:59:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 04:59:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:59:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:00.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:59:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:59:00 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 9958241a-7191-4acb-99b5-523d8de698ee does not exist
Jan 23 04:59:00 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ccd6e6de-634c-4143-bc8d-d9635a3107aa does not exist
Jan 23 04:59:00 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a4458691-14da-4d6a-844a-e8b30e3c3618 does not exist
Jan 23 04:59:00 np0005593232 nova_compute[250269]: 2026-01-23 09:59:00.512 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2035: 321 pgs: 321 active+clean; 227 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 25 KiB/s wr, 132 op/s
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.243293) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162341243388, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 1672, "num_deletes": 256, "total_data_size": 2771097, "memory_usage": 2810896, "flush_reason": "Manual Compaction"}
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162341266137, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 2708623, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43445, "largest_seqno": 45116, "table_properties": {"data_size": 2701150, "index_size": 4352, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16110, "raw_average_key_size": 19, "raw_value_size": 2685897, "raw_average_value_size": 3320, "num_data_blocks": 191, "num_entries": 809, "num_filter_entries": 809, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769162190, "oldest_key_time": 1769162190, "file_creation_time": 1769162341, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 22879 microseconds, and 7638 cpu microseconds.
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.266177) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 2708623 bytes OK
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.266208) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.269009) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.269069) EVENT_LOG_v1 {"time_micros": 1769162341269056, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.269098) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 2763949, prev total WAL file size 2776318, number of live WAL files 2.
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.270123) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353034' seq:72057594037927935, type:22 .. '6C6F676D0031373536' seq:0, type:0; will stop at (end)
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(2645KB)], [95(10MB)]
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162341270190, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 14230803, "oldest_snapshot_seqno": -1}
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 7310 keys, 14086188 bytes, temperature: kUnknown
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162341422793, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 14086188, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14033657, "index_size": 33155, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18309, "raw_key_size": 187578, "raw_average_key_size": 25, "raw_value_size": 13899407, "raw_average_value_size": 1901, "num_data_blocks": 1328, "num_entries": 7310, "num_filter_entries": 7310, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769162341, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.423278) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 14086188 bytes
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.424971) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 93.1 rd, 92.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 11.0 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(10.5) write-amplify(5.2) OK, records in: 7837, records dropped: 527 output_compression: NoCompression
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.425013) EVENT_LOG_v1 {"time_micros": 1769162341424983, "job": 56, "event": "compaction_finished", "compaction_time_micros": 152887, "compaction_time_cpu_micros": 38365, "output_level": 6, "num_output_files": 1, "total_output_size": 14086188, "num_input_records": 7837, "num_output_records": 7310, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162341425832, "job": 56, "event": "table_file_deletion", "file_number": 97}
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162341427746, "job": 56, "event": "table_file_deletion", "file_number": 95}
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.270051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.427968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.427977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.427978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.427980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.427981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.428611) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162341428656, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 256, "num_deletes": 251, "total_data_size": 14388, "memory_usage": 20136, "flush_reason": "Manual Compaction"}
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162341430791, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 13893, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45117, "largest_seqno": 45372, "table_properties": {"data_size": 12141, "index_size": 49, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 5144, "raw_average_key_size": 20, "raw_value_size": 8731, "raw_average_value_size": 34, "num_data_blocks": 2, "num_entries": 256, "num_filter_entries": 256, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769162341, "oldest_key_time": 1769162341, "file_creation_time": 1769162341, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 2224 microseconds, and 842 cpu microseconds.
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.430838) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 13893 bytes OK
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.430856) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.432514) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.432531) EVENT_LOG_v1 {"time_micros": 1769162341432526, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.432546) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 12369, prev total WAL file size 12369, number of live WAL files 2.
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.432925) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353035' seq:72057594037927935, type:22 .. '6D6772737461740031373537' seq:0, type:0; will stop at (end)
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(13KB)], [98(13MB)]
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162341432963, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 14100081, "oldest_snapshot_seqno": -1}
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 7060 keys, 10259114 bytes, temperature: kUnknown
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162341546380, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 10259114, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10213198, "index_size": 27180, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17669, "raw_key_size": 182576, "raw_average_key_size": 25, "raw_value_size": 10088200, "raw_average_value_size": 1428, "num_data_blocks": 1078, "num_entries": 7060, "num_filter_entries": 7060, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769162341, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.546633) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 10259114 bytes
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.548098) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.2 rd, 90.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 13.4 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(1753.3) write-amplify(738.4) OK, records in: 7566, records dropped: 506 output_compression: NoCompression
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.548120) EVENT_LOG_v1 {"time_micros": 1769162341548110, "job": 58, "event": "compaction_finished", "compaction_time_micros": 113491, "compaction_time_cpu_micros": 37300, "output_level": 6, "num_output_files": 1, "total_output_size": 10259114, "num_input_records": 7566, "num_output_records": 7060, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162341548243, "job": 58, "event": "table_file_deletion", "file_number": 100}
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162341550890, "job": 58, "event": "table_file_deletion", "file_number": 98}
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.432829) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.550944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.550948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.550949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.550951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:59:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:01.550953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:59:01 np0005593232 nova_compute[250269]: 2026-01-23 09:59:01.760 250273 INFO nova.virt.libvirt.driver [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Deleting instance files /var/lib/nova/instances/9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44_del#033[00m
Jan 23 04:59:01 np0005593232 nova_compute[250269]: 2026-01-23 09:59:01.761 250273 INFO nova.virt.libvirt.driver [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Deletion of /var/lib/nova/instances/9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44_del complete#033[00m
Jan 23 04:59:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:59:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:01.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:59:01 np0005593232 nova_compute[250269]: 2026-01-23 09:59:01.900 250273 INFO nova.compute.manager [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Took 2.69 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 04:59:01 np0005593232 nova_compute[250269]: 2026-01-23 09:59:01.901 250273 DEBUG oslo.service.loopingcall [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 04:59:01 np0005593232 nova_compute[250269]: 2026-01-23 09:59:01.901 250273 DEBUG nova.compute.manager [-] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 04:59:01 np0005593232 nova_compute[250269]: 2026-01-23 09:59:01.902 250273 DEBUG nova.network.neutron [-] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 04:59:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:02.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:02 np0005593232 nova_compute[250269]: 2026-01-23 09:59:02.609 250273 DEBUG nova.compute.manager [req-423ba7b3-67f3-480f-86e1-f6c28e081bda req-d954bf59-160f-4b6e-bb8e-4cb5cb92a5de 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Received event network-vif-plugged-369958b7-bcfd-4341-8b47-7283c5cfffd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:59:02 np0005593232 nova_compute[250269]: 2026-01-23 09:59:02.610 250273 DEBUG oslo_concurrency.lockutils [req-423ba7b3-67f3-480f-86e1-f6c28e081bda req-d954bf59-160f-4b6e-bb8e-4cb5cb92a5de 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:59:02 np0005593232 nova_compute[250269]: 2026-01-23 09:59:02.611 250273 DEBUG oslo_concurrency.lockutils [req-423ba7b3-67f3-480f-86e1-f6c28e081bda req-d954bf59-160f-4b6e-bb8e-4cb5cb92a5de 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:59:02 np0005593232 nova_compute[250269]: 2026-01-23 09:59:02.611 250273 DEBUG oslo_concurrency.lockutils [req-423ba7b3-67f3-480f-86e1-f6c28e081bda req-d954bf59-160f-4b6e-bb8e-4cb5cb92a5de 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:59:02 np0005593232 nova_compute[250269]: 2026-01-23 09:59:02.612 250273 DEBUG nova.compute.manager [req-423ba7b3-67f3-480f-86e1-f6c28e081bda req-d954bf59-160f-4b6e-bb8e-4cb5cb92a5de 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] No waiting events found dispatching network-vif-plugged-369958b7-bcfd-4341-8b47-7283c5cfffd9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:59:02 np0005593232 nova_compute[250269]: 2026-01-23 09:59:02.612 250273 WARNING nova.compute.manager [req-423ba7b3-67f3-480f-86e1-f6c28e081bda req-d954bf59-160f-4b6e-bb8e-4cb5cb92a5de 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Received unexpected event network-vif-plugged-369958b7-bcfd-4341-8b47-7283c5cfffd9 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 04:59:02 np0005593232 podman[310064]: 2026-01-23 09:59:02.705520003 +0000 UTC m=+0.109237208 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:59:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2036: 321 pgs: 321 active+clean; 134 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 39 KiB/s wr, 207 op/s
Jan 23 04:59:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:03.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:59:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:04.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:59:04 np0005593232 nova_compute[250269]: 2026-01-23 09:59:04.480 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:04 np0005593232 nova_compute[250269]: 2026-01-23 09:59:04.672 250273 DEBUG nova.network.neutron [-] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:59:04 np0005593232 nova_compute[250269]: 2026-01-23 09:59:04.693 250273 INFO nova.compute.manager [-] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Took 2.79 seconds to deallocate network for instance.#033[00m
Jan 23 04:59:04 np0005593232 nova_compute[250269]: 2026-01-23 09:59:04.765 250273 DEBUG oslo_concurrency.lockutils [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:59:04 np0005593232 nova_compute[250269]: 2026-01-23 09:59:04.765 250273 DEBUG oslo_concurrency.lockutils [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:59:04 np0005593232 nova_compute[250269]: 2026-01-23 09:59:04.840 250273 DEBUG nova.compute.manager [req-376c009a-fd25-426a-85e8-b7e81ad8ade4 req-4806dcca-a90d-4ace-81cc-a92b527d4ec8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Received event network-vif-deleted-369958b7-bcfd-4341-8b47-7283c5cfffd9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:59:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2037: 321 pgs: 321 active+clean; 134 MiB data, 753 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 27 KiB/s wr, 198 op/s
Jan 23 04:59:04 np0005593232 nova_compute[250269]: 2026-01-23 09:59:04.952 250273 DEBUG oslo_concurrency.processutils [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:59:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:59:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1237228802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:59:05 np0005593232 nova_compute[250269]: 2026-01-23 09:59:05.481 250273 DEBUG oslo_concurrency.processutils [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:59:05 np0005593232 nova_compute[250269]: 2026-01-23 09:59:05.490 250273 DEBUG nova.compute.provider_tree [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:59:05 np0005593232 nova_compute[250269]: 2026-01-23 09:59:05.514 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:05 np0005593232 nova_compute[250269]: 2026-01-23 09:59:05.521 250273 DEBUG nova.scheduler.client.report [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:59:05 np0005593232 nova_compute[250269]: 2026-01-23 09:59:05.566 250273 DEBUG oslo_concurrency.lockutils [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:59:05 np0005593232 nova_compute[250269]: 2026-01-23 09:59:05.604 250273 INFO nova.scheduler.client.report [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Deleted allocations for instance 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44#033[00m
Jan 23 04:59:05 np0005593232 nova_compute[250269]: 2026-01-23 09:59:05.760 250273 DEBUG oslo_concurrency.lockutils [None req-d3d260c0-a86d-4d30-9de0-20efe8fe0d82 d83df80213fd40f99fdc68c146fe9a2a c288779980de4f03be20b7eed343b775 - - default default] Lock "9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:59:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:59:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:05.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:59:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:06.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:59:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2038: 321 pgs: 321 active+clean; 134 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 27 KiB/s wr, 210 op/s
Jan 23 04:59:07 np0005593232 podman[310140]: 2026-01-23 09:59:07.40697944 +0000 UTC m=+0.058973058 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Jan 23 04:59:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:59:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:59:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:59:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:59:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:59:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:59:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:07.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:59:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:08.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:59:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2039: 321 pgs: 321 active+clean; 134 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 15 KiB/s wr, 237 op/s
Jan 23 04:59:09 np0005593232 nova_compute[250269]: 2026-01-23 09:59:09.483 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:09.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:10 np0005593232 nova_compute[250269]: 2026-01-23 09:59:10.093 250273 DEBUG oslo_concurrency.lockutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:59:10 np0005593232 nova_compute[250269]: 2026-01-23 09:59:10.094 250273 DEBUG oslo_concurrency.lockutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:59:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:59:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:10.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:59:10 np0005593232 nova_compute[250269]: 2026-01-23 09:59:10.119 250273 DEBUG nova.compute.manager [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 04:59:10 np0005593232 nova_compute[250269]: 2026-01-23 09:59:10.252 250273 DEBUG oslo_concurrency.lockutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:59:10 np0005593232 nova_compute[250269]: 2026-01-23 09:59:10.254 250273 DEBUG oslo_concurrency.lockutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:59:10 np0005593232 nova_compute[250269]: 2026-01-23 09:59:10.269 250273 DEBUG nova.virt.hardware [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 04:59:10 np0005593232 nova_compute[250269]: 2026-01-23 09:59:10.269 250273 INFO nova.compute.claims [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 04:59:10 np0005593232 nova_compute[250269]: 2026-01-23 09:59:10.462 250273 DEBUG oslo_concurrency.processutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:59:10 np0005593232 nova_compute[250269]: 2026-01-23 09:59:10.516 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:59:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2425019332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:59:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2040: 321 pgs: 321 active+clean; 134 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 15 KiB/s wr, 140 op/s
Jan 23 04:59:10 np0005593232 nova_compute[250269]: 2026-01-23 09:59:10.902 250273 DEBUG oslo_concurrency.processutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:59:10 np0005593232 nova_compute[250269]: 2026-01-23 09:59:10.908 250273 DEBUG nova.compute.provider_tree [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:59:10 np0005593232 nova_compute[250269]: 2026-01-23 09:59:10.937 250273 DEBUG nova.scheduler.client.report [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:59:10 np0005593232 nova_compute[250269]: 2026-01-23 09:59:10.983 250273 DEBUG oslo_concurrency.lockutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:59:10 np0005593232 nova_compute[250269]: 2026-01-23 09:59:10.984 250273 DEBUG nova.compute.manager [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 04:59:11 np0005593232 nova_compute[250269]: 2026-01-23 09:59:11.102 250273 DEBUG nova.compute.manager [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 04:59:11 np0005593232 nova_compute[250269]: 2026-01-23 09:59:11.103 250273 DEBUG nova.network.neutron [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 04:59:11 np0005593232 nova_compute[250269]: 2026-01-23 09:59:11.183 250273 INFO nova.virt.libvirt.driver [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 04:59:11 np0005593232 nova_compute[250269]: 2026-01-23 09:59:11.237 250273 DEBUG nova.compute.manager [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 04:59:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:59:11 np0005593232 nova_compute[250269]: 2026-01-23 09:59:11.398 250273 DEBUG nova.compute.manager [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 04:59:11 np0005593232 nova_compute[250269]: 2026-01-23 09:59:11.400 250273 DEBUG nova.virt.libvirt.driver [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 04:59:11 np0005593232 nova_compute[250269]: 2026-01-23 09:59:11.400 250273 INFO nova.virt.libvirt.driver [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Creating image(s)#033[00m
Jan 23 04:59:11 np0005593232 nova_compute[250269]: 2026-01-23 09:59:11.436 250273 DEBUG nova.storage.rbd_utils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] rbd image 1bdbf4d2-447b-47d0-8b3f-878ee65905a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:59:11 np0005593232 nova_compute[250269]: 2026-01-23 09:59:11.476 250273 DEBUG nova.storage.rbd_utils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] rbd image 1bdbf4d2-447b-47d0-8b3f-878ee65905a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:59:11 np0005593232 nova_compute[250269]: 2026-01-23 09:59:11.502 250273 DEBUG nova.storage.rbd_utils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] rbd image 1bdbf4d2-447b-47d0-8b3f-878ee65905a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:59:11 np0005593232 nova_compute[250269]: 2026-01-23 09:59:11.505 250273 DEBUG oslo_concurrency.processutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:59:11 np0005593232 nova_compute[250269]: 2026-01-23 09:59:11.541 250273 DEBUG nova.policy [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9d4a5c201efa4992a9ef57d8abdc1675', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '74c5c1d0762242f29a5d26033efd9f6d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 04:59:11 np0005593232 nova_compute[250269]: 2026-01-23 09:59:11.569 250273 DEBUG oslo_concurrency.processutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:59:11 np0005593232 nova_compute[250269]: 2026-01-23 09:59:11.570 250273 DEBUG oslo_concurrency.lockutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:59:11 np0005593232 nova_compute[250269]: 2026-01-23 09:59:11.571 250273 DEBUG oslo_concurrency.lockutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:59:11 np0005593232 nova_compute[250269]: 2026-01-23 09:59:11.571 250273 DEBUG oslo_concurrency.lockutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:59:11 np0005593232 nova_compute[250269]: 2026-01-23 09:59:11.595 250273 DEBUG nova.storage.rbd_utils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] rbd image 1bdbf4d2-447b-47d0-8b3f-878ee65905a7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:59:11 np0005593232 nova_compute[250269]: 2026-01-23 09:59:11.599 250273 DEBUG oslo_concurrency.processutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 1bdbf4d2-447b-47d0-8b3f-878ee65905a7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:59:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:59:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:11.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:59:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:59:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:12.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:59:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2041: 321 pgs: 321 active+clean; 129 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.5 MiB/s wr, 182 op/s
Jan 23 04:59:13 np0005593232 nova_compute[250269]: 2026-01-23 09:59:13.232 250273 DEBUG oslo_concurrency.processutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 1bdbf4d2-447b-47d0-8b3f-878ee65905a7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:59:13 np0005593232 nova_compute[250269]: 2026-01-23 09:59:13.310 250273 DEBUG nova.storage.rbd_utils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] resizing rbd image 1bdbf4d2-447b-47d0-8b3f-878ee65905a7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 04:59:13 np0005593232 nova_compute[250269]: 2026-01-23 09:59:13.458 250273 DEBUG nova.objects.instance [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lazy-loading 'migration_context' on Instance uuid 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:59:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:59:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:13.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:59:13 np0005593232 nova_compute[250269]: 2026-01-23 09:59:13.926 250273 DEBUG nova.virt.libvirt.driver [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 04:59:13 np0005593232 nova_compute[250269]: 2026-01-23 09:59:13.926 250273 DEBUG nova.virt.libvirt.driver [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Ensure instance console log exists: /var/lib/nova/instances/1bdbf4d2-447b-47d0-8b3f-878ee65905a7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 04:59:13 np0005593232 nova_compute[250269]: 2026-01-23 09:59:13.927 250273 DEBUG oslo_concurrency.lockutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:59:13 np0005593232 nova_compute[250269]: 2026-01-23 09:59:13.927 250273 DEBUG oslo_concurrency.lockutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:59:13 np0005593232 nova_compute[250269]: 2026-01-23 09:59:13.927 250273 DEBUG oslo_concurrency.lockutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:59:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:14.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:14 np0005593232 nova_compute[250269]: 2026-01-23 09:59:14.449 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769162339.448345, 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:59:14 np0005593232 nova_compute[250269]: 2026-01-23 09:59:14.450 250273 INFO nova.compute.manager [-] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] VM Stopped (Lifecycle Event)#033[00m
Jan 23 04:59:14 np0005593232 nova_compute[250269]: 2026-01-23 09:59:14.486 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:14 np0005593232 nova_compute[250269]: 2026-01-23 09:59:14.506 250273 DEBUG nova.compute.manager [None req-aa71636f-39fa-4d34-af68-269930a5091d - - - - - -] [instance: 9b2a6de6-1eeb-436a-8e0b-ea2236d2fc44] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:59:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2042: 321 pgs: 321 active+clean; 129 MiB data, 749 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 108 op/s
Jan 23 04:59:15 np0005593232 nova_compute[250269]: 2026-01-23 09:59:15.518 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:59:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:15.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:59:15 np0005593232 nova_compute[250269]: 2026-01-23 09:59:15.922 250273 DEBUG nova.network.neutron [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Successfully created port: 35f84523-a0b5-4102-ba04-cc5da6075d54 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 04:59:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:16.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:59:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2043: 321 pgs: 321 active+clean; 134 MiB data, 752 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 112 op/s
Jan 23 04:59:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:17.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:18.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2044: 321 pgs: 321 active+clean; 216 MiB data, 789 MiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.8 MiB/s wr, 143 op/s
Jan 23 04:59:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:19.423 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:59:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:19.424 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 04:59:19 np0005593232 nova_compute[250269]: 2026-01-23 09:59:19.423 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:19 np0005593232 nova_compute[250269]: 2026-01-23 09:59:19.489 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:19.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:59:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:20.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:59:20 np0005593232 nova_compute[250269]: 2026-01-23 09:59:20.519 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2045: 321 pgs: 321 active+clean; 216 MiB data, 789 MiB used, 20 GiB / 21 GiB avail; 56 KiB/s rd, 4.8 MiB/s wr, 88 op/s
Jan 23 04:59:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:59:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:21.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:22.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:22 np0005593232 nova_compute[250269]: 2026-01-23 09:59:22.340 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:59:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2046: 321 pgs: 321 active+clean; 227 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 81 KiB/s rd, 5.3 MiB/s wr, 122 op/s
Jan 23 04:59:23 np0005593232 nova_compute[250269]: 2026-01-23 09:59:23.761 250273 DEBUG nova.network.neutron [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Successfully updated port: 35f84523-a0b5-4102-ba04-cc5da6075d54 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 04:59:23 np0005593232 nova_compute[250269]: 2026-01-23 09:59:23.829 250273 DEBUG oslo_concurrency.lockutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Acquiring lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:59:23 np0005593232 nova_compute[250269]: 2026-01-23 09:59:23.829 250273 DEBUG oslo_concurrency.lockutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Acquired lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:59:23 np0005593232 nova_compute[250269]: 2026-01-23 09:59:23.830 250273 DEBUG nova.network.neutron [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 04:59:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:59:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:23.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:59:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:24.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:24 np0005593232 nova_compute[250269]: 2026-01-23 09:59:24.492 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2047: 321 pgs: 321 active+clean; 227 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 3.8 MiB/s wr, 80 op/s
Jan 23 04:59:25 np0005593232 nova_compute[250269]: 2026-01-23 09:59:25.521 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:25 np0005593232 nova_compute[250269]: 2026-01-23 09:59:25.824 250273 DEBUG nova.compute.manager [req-cb16cdda-156d-4e40-8a0d-6a759a1a4fd0 req-6a28ac05-e14c-4fe1-a09f-c25b831af853 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received event network-changed-35f84523-a0b5-4102-ba04-cc5da6075d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:59:25 np0005593232 nova_compute[250269]: 2026-01-23 09:59:25.824 250273 DEBUG nova.compute.manager [req-cb16cdda-156d-4e40-8a0d-6a759a1a4fd0 req-6a28ac05-e14c-4fe1-a09f-c25b831af853 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Refreshing instance network info cache due to event network-changed-35f84523-a0b5-4102-ba04-cc5da6075d54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:59:25 np0005593232 nova_compute[250269]: 2026-01-23 09:59:25.824 250273 DEBUG oslo_concurrency.lockutils [req-cb16cdda-156d-4e40-8a0d-6a759a1a4fd0 req-6a28ac05-e14c-4fe1-a09f-c25b831af853 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:59:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:59:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:25.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:59:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:26.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:59:26 np0005593232 nova_compute[250269]: 2026-01-23 09:59:26.482 250273 DEBUG nova.network.neutron [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 04:59:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2048: 321 pgs: 321 active+clean; 227 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 3.8 MiB/s wr, 80 op/s
Jan 23 04:59:27 np0005593232 nova_compute[250269]: 2026-01-23 09:59:27.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:59:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:59:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:27.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:59:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:59:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:28.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:59:28 np0005593232 nova_compute[250269]: 2026-01-23 09:59:28.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:59:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2049: 321 pgs: 321 active+clean; 227 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 380 KiB/s rd, 3.6 MiB/s wr, 87 op/s
Jan 23 04:59:29 np0005593232 nova_compute[250269]: 2026-01-23 09:59:29.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:59:29 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:29.426 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:59:29 np0005593232 nova_compute[250269]: 2026-01-23 09:59:29.495 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:29.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:30.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.163 250273 DEBUG nova.network.neutron [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Updating instance_info_cache with network_info: [{"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.219 250273 DEBUG oslo_concurrency.lockutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Releasing lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.220 250273 DEBUG nova.compute.manager [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Instance network_info: |[{"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.221 250273 DEBUG oslo_concurrency.lockutils [req-cb16cdda-156d-4e40-8a0d-6a759a1a4fd0 req-6a28ac05-e14c-4fe1-a09f-c25b831af853 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.221 250273 DEBUG nova.network.neutron [req-cb16cdda-156d-4e40-8a0d-6a759a1a4fd0 req-6a28ac05-e14c-4fe1-a09f-c25b831af853 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Refreshing network info cache for port 35f84523-a0b5-4102-ba04-cc5da6075d54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.225 250273 DEBUG nova.virt.libvirt.driver [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Start _get_guest_xml network_info=[{"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.232 250273 WARNING nova.virt.libvirt.driver [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.245 250273 DEBUG nova.virt.libvirt.host [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.246 250273 DEBUG nova.virt.libvirt.host [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.257 250273 DEBUG nova.virt.libvirt.host [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.257 250273 DEBUG nova.virt.libvirt.host [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.259 250273 DEBUG nova.virt.libvirt.driver [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.260 250273 DEBUG nova.virt.hardware [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.261 250273 DEBUG nova.virt.hardware [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.261 250273 DEBUG nova.virt.hardware [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.261 250273 DEBUG nova.virt.hardware [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.262 250273 DEBUG nova.virt.hardware [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.262 250273 DEBUG nova.virt.hardware [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.262 250273 DEBUG nova.virt.hardware [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.263 250273 DEBUG nova.virt.hardware [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.263 250273 DEBUG nova.virt.hardware [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.263 250273 DEBUG nova.virt.hardware [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.264 250273 DEBUG nova.virt.hardware [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.266 250273 DEBUG oslo_concurrency.processutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.523 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:59:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3141524211' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.692 250273 DEBUG oslo_concurrency.processutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.722 250273 DEBUG nova.storage.rbd_utils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] rbd image 1bdbf4d2-447b-47d0-8b3f-878ee65905a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:59:30 np0005593232 nova_compute[250269]: 2026-01-23 09:59:30.728 250273 DEBUG oslo_concurrency.processutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:59:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2050: 321 pgs: 321 active+clean; 227 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 353 KiB/s rd, 518 KiB/s wr, 44 op/s
Jan 23 04:59:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 04:59:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3641479076' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.157 250273 DEBUG oslo_concurrency.processutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.159 250273 DEBUG nova.virt.libvirt.vif [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:59:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1908507141',display_name='tempest-ServerActionsTestJSON-server-1908507141',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1908507141',id=93,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0AiEKt9gHrsbueqjCG64VrzhP898xYsJXOd2/6uW3CZrw7c/2vnYXFOKeIp4qvJ25g/gz5/w2irrKH3R3Pyr6HiyEmMxGMtHTZ1L/l92xM4YiKXMLNL4VsFVwX3d+71g==',key_name='tempest-keypair-1055968095',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='74c5c1d0762242f29a5d26033efd9f6d',ramdisk_id='',reservation_id='r-ii3s65d1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1619235720',owner_user_name='tempest-ServerActionsTestJSON-1619235720-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:59:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9d4a5c201efa4992a9ef57d8abdc1675',uuid=1bdbf4d2-447b-47d0-8b3f-878ee65905a7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.160 250273 DEBUG nova.network.os_vif_util [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converting VIF {"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.161 250273 DEBUG nova.network.os_vif_util [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.162 250273 DEBUG nova.objects.instance [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lazy-loading 'pci_devices' on Instance uuid 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 04:59:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.394 250273 DEBUG nova.virt.libvirt.driver [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] End _get_guest_xml xml=<domain type="kvm">
Jan 23 04:59:31 np0005593232 nova_compute[250269]:  <uuid>1bdbf4d2-447b-47d0-8b3f-878ee65905a7</uuid>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:  <name>instance-0000005d</name>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerActionsTestJSON-server-1908507141</nova:name>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 09:59:30</nova:creationTime>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 04:59:31 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:        <nova:user uuid="9d4a5c201efa4992a9ef57d8abdc1675">tempest-ServerActionsTestJSON-1619235720-project-member</nova:user>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:        <nova:project uuid="74c5c1d0762242f29a5d26033efd9f6d">tempest-ServerActionsTestJSON-1619235720</nova:project>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:        <nova:port uuid="35f84523-a0b5-4102-ba04-cc5da6075d54">
Jan 23 04:59:31 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <system>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <entry name="serial">1bdbf4d2-447b-47d0-8b3f-878ee65905a7</entry>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <entry name="uuid">1bdbf4d2-447b-47d0-8b3f-878ee65905a7</entry>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    </system>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:  <os>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:  </os>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:  <features>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:  </features>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:  </clock>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:  <devices>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/1bdbf4d2-447b-47d0-8b3f-878ee65905a7_disk">
Jan 23 04:59:31 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:59:31 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/1bdbf4d2-447b-47d0-8b3f-878ee65905a7_disk.config">
Jan 23 04:59:31 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      </source>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 04:59:31 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      </auth>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    </disk>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:b8:d6:dc"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <target dev="tap35f84523-a0"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    </interface>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/1bdbf4d2-447b-47d0-8b3f-878ee65905a7/console.log" append="off"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    </serial>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <video>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    </video>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    </rng>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 04:59:31 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 04:59:31 np0005593232 nova_compute[250269]:  </devices>
Jan 23 04:59:31 np0005593232 nova_compute[250269]: </domain>
Jan 23 04:59:31 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.395 250273 DEBUG nova.compute.manager [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Preparing to wait for external event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.396 250273 DEBUG oslo_concurrency.lockutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.396 250273 DEBUG oslo_concurrency.lockutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.396 250273 DEBUG oslo_concurrency.lockutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.397 250273 DEBUG nova.virt.libvirt.vif [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T09:59:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1908507141',display_name='tempest-ServerActionsTestJSON-server-1908507141',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1908507141',id=93,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0AiEKt9gHrsbueqjCG64VrzhP898xYsJXOd2/6uW3CZrw7c/2vnYXFOKeIp4qvJ25g/gz5/w2irrKH3R3Pyr6HiyEmMxGMtHTZ1L/l92xM4YiKXMLNL4VsFVwX3d+71g==',key_name='tempest-keypair-1055968095',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='74c5c1d0762242f29a5d26033efd9f6d',ramdisk_id='',reservation_id='r-ii3s65d1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1619235720',owner_user_name='tempest-ServerActionsTestJSON-1619235720-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T09:59:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9d4a5c201efa4992a9ef57d8abdc1675',uuid=1bdbf4d2-447b-47d0-8b3f-878ee65905a7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.397 250273 DEBUG nova.network.os_vif_util [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converting VIF {"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.398 250273 DEBUG nova.network.os_vif_util [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.398 250273 DEBUG os_vif [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.399 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.400 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.400 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.401 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.401 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.404 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.404 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35f84523-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.404 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap35f84523-a0, col_values=(('external_ids', {'iface-id': '35f84523-a0b5-4102-ba04-cc5da6075d54', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:d6:dc', 'vm-uuid': '1bdbf4d2-447b-47d0-8b3f-878ee65905a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.406 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:31 np0005593232 NetworkManager[49057]: <info>  [1769162371.4074] manager: (tap35f84523-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.409 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.412 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.413 250273 INFO os_vif [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0')#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.502 250273 DEBUG nova.virt.libvirt.driver [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.503 250273 DEBUG nova.virt.libvirt.driver [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.503 250273 DEBUG nova.virt.libvirt.driver [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] No VIF found with MAC fa:16:3e:b8:d6:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.503 250273 INFO nova.virt.libvirt.driver [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Using config drive#033[00m
Jan 23 04:59:31 np0005593232 nova_compute[250269]: 2026-01-23 09:59:31.526 250273 DEBUG nova.storage.rbd_utils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] rbd image 1bdbf4d2-447b-47d0-8b3f-878ee65905a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:59:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:31.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:59:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:32.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:59:32 np0005593232 nova_compute[250269]: 2026-01-23 09:59:32.320 250273 INFO nova.virt.libvirt.driver [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Creating config drive at /var/lib/nova/instances/1bdbf4d2-447b-47d0-8b3f-878ee65905a7/disk.config#033[00m
Jan 23 04:59:32 np0005593232 nova_compute[250269]: 2026-01-23 09:59:32.328 250273 DEBUG oslo_concurrency.processutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1bdbf4d2-447b-47d0-8b3f-878ee65905a7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy81ut4n8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:59:32 np0005593232 nova_compute[250269]: 2026-01-23 09:59:32.460 250273 DEBUG oslo_concurrency.processutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1bdbf4d2-447b-47d0-8b3f-878ee65905a7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy81ut4n8" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:59:32 np0005593232 nova_compute[250269]: 2026-01-23 09:59:32.499 250273 DEBUG nova.storage.rbd_utils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] rbd image 1bdbf4d2-447b-47d0-8b3f-878ee65905a7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 04:59:32 np0005593232 nova_compute[250269]: 2026-01-23 09:59:32.502 250273 DEBUG oslo_concurrency.processutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1bdbf4d2-447b-47d0-8b3f-878ee65905a7/disk.config 1bdbf4d2-447b-47d0-8b3f-878ee65905a7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:59:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2051: 321 pgs: 321 active+clean; 227 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 519 KiB/s wr, 106 op/s
Jan 23 04:59:33 np0005593232 podman[310538]: 2026-01-23 09:59:33.457906322 +0000 UTC m=+0.105392389 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 04:59:33 np0005593232 nova_compute[250269]: 2026-01-23 09:59:33.810 250273 DEBUG oslo_concurrency.processutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1bdbf4d2-447b-47d0-8b3f-878ee65905a7/disk.config 1bdbf4d2-447b-47d0-8b3f-878ee65905a7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:59:33 np0005593232 nova_compute[250269]: 2026-01-23 09:59:33.811 250273 INFO nova.virt.libvirt.driver [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Deleting local config drive /var/lib/nova/instances/1bdbf4d2-447b-47d0-8b3f-878ee65905a7/disk.config because it was imported into RBD.#033[00m
Jan 23 04:59:33 np0005593232 kernel: tap35f84523-a0: entered promiscuous mode
Jan 23 04:59:33 np0005593232 ovn_controller[151001]: 2026-01-23T09:59:33Z|00295|binding|INFO|Claiming lport 35f84523-a0b5-4102-ba04-cc5da6075d54 for this chassis.
Jan 23 04:59:33 np0005593232 ovn_controller[151001]: 2026-01-23T09:59:33Z|00296|binding|INFO|35f84523-a0b5-4102-ba04-cc5da6075d54: Claiming fa:16:3e:b8:d6:dc 10.100.0.5
Jan 23 04:59:33 np0005593232 nova_compute[250269]: 2026-01-23 09:59:33.878 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:33 np0005593232 NetworkManager[49057]: <info>  [1769162373.8795] manager: (tap35f84523-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/148)
Jan 23 04:59:33 np0005593232 nova_compute[250269]: 2026-01-23 09:59:33.886 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:33.895 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:d6:dc 10.100.0.5'], port_security=['fa:16:3e:b8:d6:dc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1bdbf4d2-447b-47d0-8b3f-878ee65905a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c5c1d0762242f29a5d26033efd9f6d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '53abfec9-e9a4-4b72-b0e0-38bea0069f7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26eced02-0507-4a33-9943-52faf3fc8cd2, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=35f84523-a0b5-4102-ba04-cc5da6075d54) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 04:59:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:33.897 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 35f84523-a0b5-4102-ba04-cc5da6075d54 in datapath ee03d7c9-e107-41bf-95cc-5508578ad66c bound to our chassis#033[00m
Jan 23 04:59:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:33.899 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee03d7c9-e107-41bf-95cc-5508578ad66c#033[00m
Jan 23 04:59:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:59:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:33.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:59:33 np0005593232 systemd-udevd[310578]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:59:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:33.915 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[42a1ee6f-3d95-46d9-9271-304fb9152ad5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:59:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:33.917 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapee03d7c9-e1 in ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 04:59:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:33.919 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapee03d7c9-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 04:59:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:33.920 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d3885e9d-d001-4635-bf25-fe9033778aaa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:59:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:33.921 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[13091805-4b54-4f3c-96fb-296af2f0ba7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:59:33 np0005593232 systemd-machined[215836]: New machine qemu-36-instance-0000005d.
Jan 23 04:59:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:33.933 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[ead7803b-2125-4d6a-a10e-a7fef9ea99e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:59:33 np0005593232 NetworkManager[49057]: <info>  [1769162373.9352] device (tap35f84523-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 04:59:33 np0005593232 NetworkManager[49057]: <info>  [1769162373.9361] device (tap35f84523-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 04:59:33 np0005593232 systemd[1]: Started Virtual Machine qemu-36-instance-0000005d.
Jan 23 04:59:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:33.959 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a582271c-65e2-4e06-bd29-f568b1cf6f2a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:59:33 np0005593232 nova_compute[250269]: 2026-01-23 09:59:33.985 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:33 np0005593232 ovn_controller[151001]: 2026-01-23T09:59:33Z|00297|binding|INFO|Setting lport 35f84523-a0b5-4102-ba04-cc5da6075d54 ovn-installed in OVS
Jan 23 04:59:33 np0005593232 ovn_controller[151001]: 2026-01-23T09:59:33Z|00298|binding|INFO|Setting lport 35f84523-a0b5-4102-ba04-cc5da6075d54 up in Southbound
Jan 23 04:59:33 np0005593232 nova_compute[250269]: 2026-01-23 09:59:33.989 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:33.998 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[62edbd96-8218-485d-a9c5-bfadeb192c6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:34.003 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f269a581-4697-4767-adf7-10f8607832d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:59:34 np0005593232 systemd-udevd[310582]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 04:59:34 np0005593232 NetworkManager[49057]: <info>  [1769162374.0071] manager: (tapee03d7c9-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/149)
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:34.039 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[5ed54adb-8679-4be6-a477-b900f0f573e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:34.043 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[527198ca-10b3-405f-a4b3-d5d36ea268b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:59:34 np0005593232 NetworkManager[49057]: <info>  [1769162374.0679] device (tapee03d7c9-e0): carrier: link connected
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:34.075 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[8d52538b-7aed-468b-9f70-94d2c10b8379]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:34.091 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7f7c5d59-eb2a-4015-a13a-c970fe60c984]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee03d7c9-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:65:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 90], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628721, 'reachable_time': 34481, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310611, 'error': None, 'target': 'ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:34.109 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[49a9dd75-860e-40f5-a079-f7230bb4a602]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecd:6530'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 628721, 'tstamp': 628721}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310612, 'error': None, 'target': 'ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:34.132 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[05f3a4ab-4364-4b8c-9f36-e86e4c4cfe66]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee03d7c9-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:65:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 90], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628721, 'reachable_time': 34481, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310613, 'error': None, 'target': 'ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:59:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:59:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:34.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:34.160 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f9faea56-2b34-4463-ae67-2519d7ef1444]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:34.220 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1e6c3898-2304-4cae-9320-c95c91d924db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:34.222 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee03d7c9-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:34.222 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:34.222 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee03d7c9-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:59:34 np0005593232 NetworkManager[49057]: <info>  [1769162374.2653] manager: (tapee03d7c9-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/150)
Jan 23 04:59:34 np0005593232 kernel: tapee03d7c9-e0: entered promiscuous mode
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.264 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.267 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:34.268 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee03d7c9-e0, col_values=(('external_ids', {'iface-id': '702d4523-a665-42f5-9a36-57d187c0698a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.269 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:34 np0005593232 ovn_controller[151001]: 2026-01-23T09:59:34Z|00299|binding|INFO|Releasing lport 702d4523-a665-42f5-9a36-57d187c0698a from this chassis (sb_readonly=0)
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.283 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:34.284 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ee03d7c9-e107-41bf-95cc-5508578ad66c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ee03d7c9-e107-41bf-95cc-5508578ad66c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:34.285 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[fde3d53d-0f41-4b41-b1e4-e023512ad936]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:34.286 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-ee03d7c9-e107-41bf-95cc-5508578ad66c
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/ee03d7c9-e107-41bf-95cc-5508578ad66c.pid.haproxy
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID ee03d7c9-e107-41bf-95cc-5508578ad66c
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 04:59:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:34.286 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'env', 'PROCESS_TAG=haproxy-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ee03d7c9-e107-41bf-95cc-5508578ad66c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.359 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.359 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.359 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.360 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.360 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.548 250273 DEBUG nova.compute.manager [req-612425e2-6ba8-4f68-acfc-af64ccf92070 req-d0d2754a-4801-4aad-bb2e-b294b3dd59e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.549 250273 DEBUG oslo_concurrency.lockutils [req-612425e2-6ba8-4f68-acfc-af64ccf92070 req-d0d2754a-4801-4aad-bb2e-b294b3dd59e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.549 250273 DEBUG oslo_concurrency.lockutils [req-612425e2-6ba8-4f68-acfc-af64ccf92070 req-d0d2754a-4801-4aad-bb2e-b294b3dd59e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.549 250273 DEBUG oslo_concurrency.lockutils [req-612425e2-6ba8-4f68-acfc-af64ccf92070 req-d0d2754a-4801-4aad-bb2e-b294b3dd59e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.549 250273 DEBUG nova.compute.manager [req-612425e2-6ba8-4f68-acfc-af64ccf92070 req-d0d2754a-4801-4aad-bb2e-b294b3dd59e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Processing event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.574 250273 DEBUG nova.network.neutron [req-cb16cdda-156d-4e40-8a0d-6a759a1a4fd0 req-6a28ac05-e14c-4fe1-a09f-c25b831af853 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Updated VIF entry in instance network info cache for port 35f84523-a0b5-4102-ba04-cc5da6075d54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.575 250273 DEBUG nova.network.neutron [req-cb16cdda-156d-4e40-8a0d-6a759a1a4fd0 req-6a28ac05-e14c-4fe1-a09f-c25b831af853 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Updating instance_info_cache with network_info: [{"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.605 250273 DEBUG oslo_concurrency.lockutils [req-cb16cdda-156d-4e40-8a0d-6a759a1a4fd0 req-6a28ac05-e14c-4fe1-a09f-c25b831af853 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:59:34 np0005593232 podman[310701]: 2026-01-23 09:59:34.674249937 +0000 UTC m=+0.049446307 container create 07e5bbf0f6fd60ae59cccd3cb04b49b9771f8e30569370c28c0dbb54bbb1d51f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 23 04:59:34 np0005593232 systemd[1]: Started libpod-conmon-07e5bbf0f6fd60ae59cccd3cb04b49b9771f8e30569370c28c0dbb54bbb1d51f.scope.
Jan 23 04:59:34 np0005593232 podman[310701]: 2026-01-23 09:59:34.648362741 +0000 UTC m=+0.023559121 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 04:59:34 np0005593232 systemd[1]: Started libcrun container.
Jan 23 04:59:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a182760b561881a27387fe07beeed352cb3e32d90b51efab40d174a488f9967/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 04:59:34 np0005593232 podman[310701]: 2026-01-23 09:59:34.766614074 +0000 UTC m=+0.141810484 container init 07e5bbf0f6fd60ae59cccd3cb04b49b9771f8e30569370c28c0dbb54bbb1d51f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 23 04:59:34 np0005593232 podman[310701]: 2026-01-23 09:59:34.772165282 +0000 UTC m=+0.147361662 container start 07e5bbf0f6fd60ae59cccd3cb04b49b9771f8e30569370c28c0dbb54bbb1d51f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 23 04:59:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:59:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2838544387' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:59:34 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[310717]: [NOTICE]   (310721) : New worker (310725) forked
Jan 23 04:59:34 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[310717]: [NOTICE]   (310721) : Loading success.
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.814 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:59:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2052: 321 pgs: 321 active+clean; 227 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 511 B/s wr, 73 op/s
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.937 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.938 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.942 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162374.9422822, 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.943 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] VM Started (Lifecycle Event)#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.944 250273 DEBUG nova.compute.manager [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.949 250273 DEBUG nova.virt.libvirt.driver [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.953 250273 INFO nova.virt.libvirt.driver [-] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Instance spawned successfully.#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.953 250273 DEBUG nova.virt.libvirt.driver [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.983 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.990 250273 DEBUG nova.virt.libvirt.driver [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.991 250273 DEBUG nova.virt.libvirt.driver [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.991 250273 DEBUG nova.virt.libvirt.driver [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.992 250273 DEBUG nova.virt.libvirt.driver [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.992 250273 DEBUG nova.virt.libvirt.driver [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.993 250273 DEBUG nova.virt.libvirt.driver [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 04:59:34 np0005593232 nova_compute[250269]: 2026-01-23 09:59:34.996 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.035 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.035 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162374.942414, 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.035 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] VM Paused (Lifecycle Event)#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.068 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.072 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162374.9481766, 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.072 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] VM Resumed (Lifecycle Event)#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.094 250273 INFO nova.compute.manager [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Took 23.70 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.094 250273 DEBUG nova.compute.manager [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.105 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.109 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.135 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.136 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4386MB free_disk=20.92583465576172GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.136 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.136 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.206 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.240 250273 INFO nova.compute.manager [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Took 25.02 seconds to build instance.#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.307 250273 DEBUG oslo_concurrency.lockutils [None req-26108c20-f3fc-45c8-81a4-1cbcd56f0b5b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.365 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.365 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.366 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.411 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing inventories for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.520 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating ProviderTree inventory for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.520 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.524 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.558 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing aggregate associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.629 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing trait associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 23 04:59:35 np0005593232 nova_compute[250269]: 2026-01-23 09:59:35.723 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 04:59:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:59:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:35.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:59:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:59:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:36.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:59:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 04:59:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3614108392' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 04:59:36 np0005593232 nova_compute[250269]: 2026-01-23 09:59:36.197 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 04:59:36 np0005593232 nova_compute[250269]: 2026-01-23 09:59:36.202 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 04:59:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:59:36 np0005593232 nova_compute[250269]: 2026-01-23 09:59:36.407 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:36 np0005593232 nova_compute[250269]: 2026-01-23 09:59:36.543 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 04:59:36 np0005593232 nova_compute[250269]: 2026-01-23 09:59:36.616 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 04:59:36 np0005593232 nova_compute[250269]: 2026-01-23 09:59:36.617 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.480s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:59:36 np0005593232 nova_compute[250269]: 2026-01-23 09:59:36.836 250273 DEBUG nova.compute.manager [req-145a1fc4-c72a-4298-936c-be8d6b39eb92 req-3ccd077d-3040-47a9-b51e-43634d85cbb4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:59:36 np0005593232 nova_compute[250269]: 2026-01-23 09:59:36.836 250273 DEBUG oslo_concurrency.lockutils [req-145a1fc4-c72a-4298-936c-be8d6b39eb92 req-3ccd077d-3040-47a9-b51e-43634d85cbb4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:59:36 np0005593232 nova_compute[250269]: 2026-01-23 09:59:36.837 250273 DEBUG oslo_concurrency.lockutils [req-145a1fc4-c72a-4298-936c-be8d6b39eb92 req-3ccd077d-3040-47a9-b51e-43634d85cbb4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:59:36 np0005593232 nova_compute[250269]: 2026-01-23 09:59:36.837 250273 DEBUG oslo_concurrency.lockutils [req-145a1fc4-c72a-4298-936c-be8d6b39eb92 req-3ccd077d-3040-47a9-b51e-43634d85cbb4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:59:36 np0005593232 nova_compute[250269]: 2026-01-23 09:59:36.837 250273 DEBUG nova.compute.manager [req-145a1fc4-c72a-4298-936c-be8d6b39eb92 req-3ccd077d-3040-47a9-b51e-43634d85cbb4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] No waiting events found dispatching network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 04:59:36 np0005593232 nova_compute[250269]: 2026-01-23 09:59:36.837 250273 WARNING nova.compute.manager [req-145a1fc4-c72a-4298-936c-be8d6b39eb92 req-3ccd077d-3040-47a9-b51e-43634d85cbb4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received unexpected event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 for instance with vm_state active and task_state None.#033[00m
Jan 23 04:59:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2053: 321 pgs: 321 active+clean; 227 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 25 KiB/s wr, 105 op/s
Jan 23 04:59:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_09:59:37
Jan 23 04:59:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 04:59:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 04:59:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'images', 'backups', 'default.rgw.control', 'volumes']
Jan 23 04:59:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 04:59:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:59:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:59:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:59:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:59:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 04:59:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 04:59:37 np0005593232 nova_compute[250269]: 2026-01-23 09:59:37.611 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:59:37 np0005593232 nova_compute[250269]: 2026-01-23 09:59:37.682 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:59:37 np0005593232 nova_compute[250269]: 2026-01-23 09:59:37.682 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 04:59:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:37.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:37 np0005593232 ceph-mgr[74726]: client.0 ms_handle_reset on v2:192.168.122.100:6800/530399322
Jan 23 04:59:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:38.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 04:59:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:59:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 04:59:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 04:59:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:59:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 04:59:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:59:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 04:59:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:59:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 04:59:38 np0005593232 podman[310765]: 2026-01-23 09:59:38.405853872 +0000 UTC m=+0.055418218 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 23 04:59:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2054: 321 pgs: 321 active+clean; 227 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 39 KiB/s wr, 216 op/s
Jan 23 04:59:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 04:59:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:39.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 04:59:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:40.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:40 np0005593232 nova_compute[250269]: 2026-01-23 09:59:40.570 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2055: 321 pgs: 321 active+clean; 227 MiB data, 795 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 39 KiB/s wr, 206 op/s
Jan 23 04:59:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:59:41 np0005593232 nova_compute[250269]: 2026-01-23 09:59:41.409 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:41.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:59:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:42.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:59:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:42.611 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 04:59:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:42.612 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 04:59:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 09:59:42.613 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 04:59:42 np0005593232 NetworkManager[49057]: <info>  [1769162382.8671] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Jan 23 04:59:42 np0005593232 NetworkManager[49057]: <info>  [1769162382.8679] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Jan 23 04:59:42 np0005593232 nova_compute[250269]: 2026-01-23 09:59:42.866 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2056: 321 pgs: 321 active+clean; 247 MiB data, 809 MiB used, 20 GiB / 21 GiB avail; 7.6 MiB/s rd, 2.0 MiB/s wr, 310 op/s
Jan 23 04:59:43 np0005593232 nova_compute[250269]: 2026-01-23 09:59:43.055 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:43 np0005593232 ovn_controller[151001]: 2026-01-23T09:59:43Z|00300|binding|INFO|Releasing lport 702d4523-a665-42f5-9a36-57d187c0698a from this chassis (sb_readonly=0)
Jan 23 04:59:43 np0005593232 nova_compute[250269]: 2026-01-23 09:59:43.079 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:43.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:59:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:44.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:59:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 04:59:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4265103605' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 04:59:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 04:59:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4265103605' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 04:59:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2057: 321 pgs: 321 active+clean; 247 MiB data, 809 MiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 2.0 MiB/s wr, 248 op/s
Jan 23 04:59:44 np0005593232 nova_compute[250269]: 2026-01-23 09:59:44.992 250273 DEBUG nova.compute.manager [req-5e75cf4a-09a8-4057-ab23-2e54b52bf987 req-e9bcc577-51c5-43ce-9e24-74261ce5c6c7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received event network-changed-35f84523-a0b5-4102-ba04-cc5da6075d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 04:59:44 np0005593232 nova_compute[250269]: 2026-01-23 09:59:44.993 250273 DEBUG nova.compute.manager [req-5e75cf4a-09a8-4057-ab23-2e54b52bf987 req-e9bcc577-51c5-43ce-9e24-74261ce5c6c7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Refreshing instance network info cache due to event network-changed-35f84523-a0b5-4102-ba04-cc5da6075d54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 04:59:44 np0005593232 nova_compute[250269]: 2026-01-23 09:59:44.993 250273 DEBUG oslo_concurrency.lockutils [req-5e75cf4a-09a8-4057-ab23-2e54b52bf987 req-e9bcc577-51c5-43ce-9e24-74261ce5c6c7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 04:59:44 np0005593232 nova_compute[250269]: 2026-01-23 09:59:44.993 250273 DEBUG oslo_concurrency.lockutils [req-5e75cf4a-09a8-4057-ab23-2e54b52bf987 req-e9bcc577-51c5-43ce-9e24-74261ce5c6c7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 04:59:44 np0005593232 nova_compute[250269]: 2026-01-23 09:59:44.993 250273 DEBUG nova.network.neutron [req-5e75cf4a-09a8-4057-ab23-2e54b52bf987 req-e9bcc577-51c5-43ce-9e24-74261ce5c6c7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Refreshing network info cache for port 35f84523-a0b5-4102-ba04-cc5da6075d54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 04:59:45 np0005593232 nova_compute[250269]: 2026-01-23 09:59:45.572 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:45.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:46.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:59:46 np0005593232 nova_compute[250269]: 2026-01-23 09:59:46.412 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 04:59:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:59:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 04:59:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:59:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002996754606365159 of space, bias 1.0, pg target 0.8990263819095476 quantized to 32 (current 32)
Jan 23 04:59:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:59:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002084351153917737 of space, bias 1.0, pg target 0.6253053461753211 quantized to 32 (current 32)
Jan 23 04:59:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:59:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:59:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:59:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 04:59:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:59:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 04:59:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:59:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:59:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:59:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 04:59:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:59:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 04:59:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:59:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 04:59:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 04:59:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 04:59:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2058: 321 pgs: 321 active+clean; 254 MiB data, 820 MiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.2 MiB/s wr, 282 op/s
Jan 23 04:59:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:47.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:48.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2059: 321 pgs: 321 active+clean; 227 MiB data, 810 MiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 3.4 MiB/s wr, 283 op/s
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:49.001630) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162389001731, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 666, "num_deletes": 251, "total_data_size": 812312, "memory_usage": 825992, "flush_reason": "Manual Compaction"}
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162389009761, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 803810, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45373, "largest_seqno": 46038, "table_properties": {"data_size": 800371, "index_size": 1283, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8049, "raw_average_key_size": 19, "raw_value_size": 793505, "raw_average_value_size": 1902, "num_data_blocks": 57, "num_entries": 417, "num_filter_entries": 417, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769162342, "oldest_key_time": 1769162342, "file_creation_time": 1769162389, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 8155 microseconds, and 3651 cpu microseconds.
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:49.009801) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 803810 bytes OK
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:49.009819) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:49.013144) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:49.013175) EVENT_LOG_v1 {"time_micros": 1769162389013158, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:49.013195) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 808827, prev total WAL file size 808827, number of live WAL files 2.
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:49.013909) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(784KB)], [101(10018KB)]
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162389013986, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 11062924, "oldest_snapshot_seqno": -1}
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6967 keys, 9139345 bytes, temperature: kUnknown
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162389080301, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 9139345, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9094966, "index_size": 25835, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17477, "raw_key_size": 181371, "raw_average_key_size": 26, "raw_value_size": 8972485, "raw_average_value_size": 1287, "num_data_blocks": 1014, "num_entries": 6967, "num_filter_entries": 6967, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769162389, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:49.080752) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9139345 bytes
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:49.084814) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.4 rd, 137.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.8 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(25.1) write-amplify(11.4) OK, records in: 7477, records dropped: 510 output_compression: NoCompression
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:49.084902) EVENT_LOG_v1 {"time_micros": 1769162389084837, "job": 60, "event": "compaction_finished", "compaction_time_micros": 66482, "compaction_time_cpu_micros": 22736, "output_level": 6, "num_output_files": 1, "total_output_size": 9139345, "num_input_records": 7477, "num_output_records": 6967, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162389085225, "job": 60, "event": "table_file_deletion", "file_number": 103}
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162389087569, "job": 60, "event": "table_file_deletion", "file_number": 101}
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:49.013746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:49.087622) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:49.087628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:49.087630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:49.087632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:59:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-09:59:49.087633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 04:59:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:49.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:50.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:50 np0005593232 nova_compute[250269]: 2026-01-23 09:59:50.598 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:50 np0005593232 ovn_controller[151001]: 2026-01-23T09:59:50Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b8:d6:dc 10.100.0.5
Jan 23 04:59:50 np0005593232 ovn_controller[151001]: 2026-01-23T09:59:50Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:d6:dc 10.100.0.5
Jan 23 04:59:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2060: 321 pgs: 321 active+clean; 227 MiB data, 810 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.4 MiB/s wr, 173 op/s
Jan 23 04:59:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:59:51 np0005593232 nova_compute[250269]: 2026-01-23 09:59:51.414 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:51 np0005593232 nova_compute[250269]: 2026-01-23 09:59:51.696 250273 DEBUG nova.network.neutron [req-5e75cf4a-09a8-4057-ab23-2e54b52bf987 req-e9bcc577-51c5-43ce-9e24-74261ce5c6c7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Updated VIF entry in instance network info cache for port 35f84523-a0b5-4102-ba04-cc5da6075d54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 04:59:51 np0005593232 nova_compute[250269]: 2026-01-23 09:59:51.696 250273 DEBUG nova.network.neutron [req-5e75cf4a-09a8-4057-ab23-2e54b52bf987 req-e9bcc577-51c5-43ce-9e24-74261ce5c6c7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Updating instance_info_cache with network_info: [{"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 04:59:51 np0005593232 nova_compute[250269]: 2026-01-23 09:59:51.742 250273 DEBUG oslo_concurrency.lockutils [req-5e75cf4a-09a8-4057-ab23-2e54b52bf987 req-e9bcc577-51c5-43ce-9e24-74261ce5c6c7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 04:59:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:51.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:59:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:52.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:59:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2061: 321 pgs: 321 active+clean; 200 MiB data, 812 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 249 op/s
Jan 23 04:59:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 04:59:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:53.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 04:59:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:54.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2062: 321 pgs: 321 active+clean; 200 MiB data, 812 MiB used, 20 GiB / 21 GiB avail; 446 KiB/s rd, 2.4 MiB/s wr, 145 op/s
Jan 23 04:59:55 np0005593232 nova_compute[250269]: 2026-01-23 09:59:55.601 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:55.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:56.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 04:59:56 np0005593232 nova_compute[250269]: 2026-01-23 09:59:56.431 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 04:59:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2063: 321 pgs: 321 active+clean; 200 MiB data, 812 MiB used, 20 GiB / 21 GiB avail; 446 KiB/s rd, 2.4 MiB/s wr, 145 op/s
Jan 23 04:59:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:57.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:09:59:58.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 04:59:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2064: 321 pgs: 321 active+clean; 200 MiB data, 809 MiB used, 20 GiB / 21 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 111 op/s
Jan 23 04:59:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 04:59:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 04:59:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:09:59:59.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:00 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 23 05:00:00 np0005593232 ceph-mon[74423]: overall HEALTH_OK
Jan 23 05:00:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:00:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:00.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:00:00 np0005593232 nova_compute[250269]: 2026-01-23 10:00:00.602 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:00 np0005593232 nova_compute[250269]: 2026-01-23 10:00:00.710 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:00 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:00.710 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:00:00 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:00.711 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:00:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2065: 321 pgs: 321 active+clean; 200 MiB data, 809 MiB used, 20 GiB / 21 GiB avail; 248 KiB/s rd, 954 KiB/s wr, 77 op/s
Jan 23 05:00:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:00:01 np0005593232 nova_compute[250269]: 2026-01-23 10:00:01.433 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:00:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:00:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:00:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:00:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:00:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:00:01 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 29878483-3012-4265-9744-77ef157900f5 does not exist
Jan 23 05:00:01 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 9c4ffa4c-439b-4d56-8338-7950b424538f does not exist
Jan 23 05:00:01 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev abc19865-ceef-435f-985b-e60a53acea3d does not exist
Jan 23 05:00:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:00:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:00:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:00:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:00:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:00:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:00:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:01.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:00:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:02.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:00:02 np0005593232 podman[311118]: 2026-01-23 10:00:02.336258721 +0000 UTC m=+0.036069447 container create ffb918abc34d3990e5512711f3cc571a88178f28987d5d3f6721fc5e3cb729db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 05:00:02 np0005593232 systemd[1]: Started libpod-conmon-ffb918abc34d3990e5512711f3cc571a88178f28987d5d3f6721fc5e3cb729db.scope.
Jan 23 05:00:02 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:00:02 np0005593232 podman[311118]: 2026-01-23 10:00:02.411573573 +0000 UTC m=+0.111384319 container init ffb918abc34d3990e5512711f3cc571a88178f28987d5d3f6721fc5e3cb729db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 05:00:02 np0005593232 podman[311118]: 2026-01-23 10:00:02.320131992 +0000 UTC m=+0.019942738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:00:02 np0005593232 podman[311118]: 2026-01-23 10:00:02.41991665 +0000 UTC m=+0.119727376 container start ffb918abc34d3990e5512711f3cc571a88178f28987d5d3f6721fc5e3cb729db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_banzai, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 05:00:02 np0005593232 podman[311118]: 2026-01-23 10:00:02.423201343 +0000 UTC m=+0.123012089 container attach ffb918abc34d3990e5512711f3cc571a88178f28987d5d3f6721fc5e3cb729db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:00:02 np0005593232 systemd[1]: libpod-ffb918abc34d3990e5512711f3cc571a88178f28987d5d3f6721fc5e3cb729db.scope: Deactivated successfully.
Jan 23 05:00:02 np0005593232 gracious_banzai[311135]: 167 167
Jan 23 05:00:02 np0005593232 conmon[311135]: conmon ffb918abc34d3990e551 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ffb918abc34d3990e5512711f3cc571a88178f28987d5d3f6721fc5e3cb729db.scope/container/memory.events
Jan 23 05:00:02 np0005593232 podman[311118]: 2026-01-23 10:00:02.427854156 +0000 UTC m=+0.127664872 container died ffb918abc34d3990e5512711f3cc571a88178f28987d5d3f6721fc5e3cb729db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_banzai, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Jan 23 05:00:02 np0005593232 systemd[1]: var-lib-containers-storage-overlay-86475ad39eb03e59b5db0f7f781de3f5af7f237930fb870268a77e1295f6af0c-merged.mount: Deactivated successfully.
Jan 23 05:00:02 np0005593232 podman[311118]: 2026-01-23 10:00:02.465728623 +0000 UTC m=+0.165539349 container remove ffb918abc34d3990e5512711f3cc571a88178f28987d5d3f6721fc5e3cb729db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_banzai, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 05:00:02 np0005593232 systemd[1]: libpod-conmon-ffb918abc34d3990e5512711f3cc571a88178f28987d5d3f6721fc5e3cb729db.scope: Deactivated successfully.
Jan 23 05:00:02 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:00:02 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:00:02 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:00:02 np0005593232 podman[311158]: 2026-01-23 10:00:02.633461164 +0000 UTC m=+0.045065563 container create f3374a66db33830cbbd50b5801f70899031875a0e2ce3a49f421e37f60544fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:00:02 np0005593232 systemd[1]: Started libpod-conmon-f3374a66db33830cbbd50b5801f70899031875a0e2ce3a49f421e37f60544fa9.scope.
Jan 23 05:00:02 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:00:02 np0005593232 podman[311158]: 2026-01-23 10:00:02.612621451 +0000 UTC m=+0.024225840 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:00:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a261b8e2a4998cd1f1a19ba80e5fa3e89d2c8cb34b062857ddcf9f0f46f7da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:00:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a261b8e2a4998cd1f1a19ba80e5fa3e89d2c8cb34b062857ddcf9f0f46f7da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:00:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a261b8e2a4998cd1f1a19ba80e5fa3e89d2c8cb34b062857ddcf9f0f46f7da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:00:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a261b8e2a4998cd1f1a19ba80e5fa3e89d2c8cb34b062857ddcf9f0f46f7da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:00:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08a261b8e2a4998cd1f1a19ba80e5fa3e89d2c8cb34b062857ddcf9f0f46f7da/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:00:02 np0005593232 podman[311158]: 2026-01-23 10:00:02.726518481 +0000 UTC m=+0.138122940 container init f3374a66db33830cbbd50b5801f70899031875a0e2ce3a49f421e37f60544fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tu, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:00:02 np0005593232 podman[311158]: 2026-01-23 10:00:02.737027129 +0000 UTC m=+0.148631508 container start f3374a66db33830cbbd50b5801f70899031875a0e2ce3a49f421e37f60544fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tu, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:00:02 np0005593232 podman[311158]: 2026-01-23 10:00:02.741003843 +0000 UTC m=+0.152608262 container attach f3374a66db33830cbbd50b5801f70899031875a0e2ce3a49f421e37f60544fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tu, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:00:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2066: 321 pgs: 321 active+clean; 238 MiB data, 821 MiB used, 20 GiB / 21 GiB avail; 264 KiB/s rd, 2.4 MiB/s wr, 87 op/s
Jan 23 05:00:03 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0[81752]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 23 05:00:03 np0005593232 determined_tu[311174]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:00:03 np0005593232 determined_tu[311174]: --> relative data size: 1.0
Jan 23 05:00:03 np0005593232 determined_tu[311174]: --> All data devices are unavailable
Jan 23 05:00:03 np0005593232 systemd[1]: libpod-f3374a66db33830cbbd50b5801f70899031875a0e2ce3a49f421e37f60544fa9.scope: Deactivated successfully.
Jan 23 05:00:03 np0005593232 podman[311158]: 2026-01-23 10:00:03.576220418 +0000 UTC m=+0.987824787 container died f3374a66db33830cbbd50b5801f70899031875a0e2ce3a49f421e37f60544fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:00:03 np0005593232 systemd[1]: var-lib-containers-storage-overlay-08a261b8e2a4998cd1f1a19ba80e5fa3e89d2c8cb34b062857ddcf9f0f46f7da-merged.mount: Deactivated successfully.
Jan 23 05:00:03 np0005593232 podman[311158]: 2026-01-23 10:00:03.636443441 +0000 UTC m=+1.048047800 container remove f3374a66db33830cbbd50b5801f70899031875a0e2ce3a49f421e37f60544fa9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 05:00:03 np0005593232 systemd[1]: libpod-conmon-f3374a66db33830cbbd50b5801f70899031875a0e2ce3a49f421e37f60544fa9.scope: Deactivated successfully.
Jan 23 05:00:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:03.713 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:00:03 np0005593232 podman[311241]: 2026-01-23 10:00:03.714660086 +0000 UTC m=+0.105723388 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 23 05:00:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:00:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:03.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:00:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:04.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:04 np0005593232 podman[311415]: 2026-01-23 10:00:04.204155447 +0000 UTC m=+0.039082593 container create fc08ed7e37d60052301edcf6ee571ea5a0009ca9e46dab665f5ca1f606cf39ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_kalam, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 05:00:04 np0005593232 systemd[1]: Started libpod-conmon-fc08ed7e37d60052301edcf6ee571ea5a0009ca9e46dab665f5ca1f606cf39ae.scope.
Jan 23 05:00:04 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:00:04 np0005593232 podman[311415]: 2026-01-23 10:00:04.190095737 +0000 UTC m=+0.025022903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:00:04 np0005593232 podman[311415]: 2026-01-23 10:00:04.350300634 +0000 UTC m=+0.185227800 container init fc08ed7e37d60052301edcf6ee571ea5a0009ca9e46dab665f5ca1f606cf39ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:00:04 np0005593232 podman[311415]: 2026-01-23 10:00:04.357061146 +0000 UTC m=+0.191988292 container start fc08ed7e37d60052301edcf6ee571ea5a0009ca9e46dab665f5ca1f606cf39ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 05:00:04 np0005593232 podman[311415]: 2026-01-23 10:00:04.359989559 +0000 UTC m=+0.194916725 container attach fc08ed7e37d60052301edcf6ee571ea5a0009ca9e46dab665f5ca1f606cf39ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 05:00:04 np0005593232 reverent_kalam[311432]: 167 167
Jan 23 05:00:04 np0005593232 systemd[1]: libpod-fc08ed7e37d60052301edcf6ee571ea5a0009ca9e46dab665f5ca1f606cf39ae.scope: Deactivated successfully.
Jan 23 05:00:04 np0005593232 podman[311415]: 2026-01-23 10:00:04.363078987 +0000 UTC m=+0.198006133 container died fc08ed7e37d60052301edcf6ee571ea5a0009ca9e46dab665f5ca1f606cf39ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_kalam, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:00:04 np0005593232 systemd[1]: var-lib-containers-storage-overlay-094ff398add71366ca87fa058dfb0fe2d56ea00d0711c4f77c81370d462b7bf7-merged.mount: Deactivated successfully.
Jan 23 05:00:04 np0005593232 podman[311415]: 2026-01-23 10:00:04.405673669 +0000 UTC m=+0.240600815 container remove fc08ed7e37d60052301edcf6ee571ea5a0009ca9e46dab665f5ca1f606cf39ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Jan 23 05:00:04 np0005593232 systemd[1]: libpod-conmon-fc08ed7e37d60052301edcf6ee571ea5a0009ca9e46dab665f5ca1f606cf39ae.scope: Deactivated successfully.
Jan 23 05:00:04 np0005593232 podman[311454]: 2026-01-23 10:00:04.599131781 +0000 UTC m=+0.047139042 container create 07a46acba2dfe473dbe48e72c29ec156fbabcae610f1c8b8b14ee2a4cd2ae6a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:00:04 np0005593232 systemd[1]: Started libpod-conmon-07a46acba2dfe473dbe48e72c29ec156fbabcae610f1c8b8b14ee2a4cd2ae6a0.scope.
Jan 23 05:00:04 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:00:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c391c54fdc7eb7c712444e9da06af33622bb32bfff9e2c82862e8e5be3eb2739/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:00:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c391c54fdc7eb7c712444e9da06af33622bb32bfff9e2c82862e8e5be3eb2739/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:00:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c391c54fdc7eb7c712444e9da06af33622bb32bfff9e2c82862e8e5be3eb2739/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:00:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c391c54fdc7eb7c712444e9da06af33622bb32bfff9e2c82862e8e5be3eb2739/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:00:04 np0005593232 podman[311454]: 2026-01-23 10:00:04.671906911 +0000 UTC m=+0.119914192 container init 07a46acba2dfe473dbe48e72c29ec156fbabcae610f1c8b8b14ee2a4cd2ae6a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 05:00:04 np0005593232 podman[311454]: 2026-01-23 10:00:04.581404667 +0000 UTC m=+0.029411948 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:00:04 np0005593232 podman[311454]: 2026-01-23 10:00:04.678619982 +0000 UTC m=+0.126627233 container start 07a46acba2dfe473dbe48e72c29ec156fbabcae610f1c8b8b14ee2a4cd2ae6a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 05:00:04 np0005593232 podman[311454]: 2026-01-23 10:00:04.681812613 +0000 UTC m=+0.129819904 container attach 07a46acba2dfe473dbe48e72c29ec156fbabcae610f1c8b8b14ee2a4cd2ae6a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 05:00:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2067: 321 pgs: 321 active+clean; 238 MiB data, 821 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.5 MiB/s wr, 10 op/s
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]: {
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:    "0": [
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:        {
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:            "devices": [
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:                "/dev/loop3"
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:            ],
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:            "lv_name": "ceph_lv0",
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:            "lv_size": "7511998464",
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:            "name": "ceph_lv0",
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:            "tags": {
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:                "ceph.cluster_name": "ceph",
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:                "ceph.crush_device_class": "",
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:                "ceph.encrypted": "0",
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:                "ceph.osd_id": "0",
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:                "ceph.type": "block",
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:                "ceph.vdo": "0"
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:            },
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:            "type": "block",
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:            "vg_name": "ceph_vg0"
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:        }
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]:    ]
Jan 23 05:00:05 np0005593232 jolly_dhawan[311471]: }
Jan 23 05:00:05 np0005593232 systemd[1]: libpod-07a46acba2dfe473dbe48e72c29ec156fbabcae610f1c8b8b14ee2a4cd2ae6a0.scope: Deactivated successfully.
Jan 23 05:00:05 np0005593232 podman[311454]: 2026-01-23 10:00:05.454196381 +0000 UTC m=+0.902203642 container died 07a46acba2dfe473dbe48e72c29ec156fbabcae610f1c8b8b14ee2a4cd2ae6a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:00:05 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c391c54fdc7eb7c712444e9da06af33622bb32bfff9e2c82862e8e5be3eb2739-merged.mount: Deactivated successfully.
Jan 23 05:00:05 np0005593232 podman[311454]: 2026-01-23 10:00:05.505173291 +0000 UTC m=+0.953180552 container remove 07a46acba2dfe473dbe48e72c29ec156fbabcae610f1c8b8b14ee2a4cd2ae6a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dhawan, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 23 05:00:05 np0005593232 systemd[1]: libpod-conmon-07a46acba2dfe473dbe48e72c29ec156fbabcae610f1c8b8b14ee2a4cd2ae6a0.scope: Deactivated successfully.
Jan 23 05:00:05 np0005593232 nova_compute[250269]: 2026-01-23 10:00:05.606 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:05.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:06 np0005593232 podman[311634]: 2026-01-23 10:00:06.148595721 +0000 UTC m=+0.038551127 container create ddd2fe3b954e07487ec6d80a07b36e88e7677fc9f80dfee0c2bbe52541f5fc88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 05:00:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:06.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:06 np0005593232 systemd[1]: Started libpod-conmon-ddd2fe3b954e07487ec6d80a07b36e88e7677fc9f80dfee0c2bbe52541f5fc88.scope.
Jan 23 05:00:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:00:06 np0005593232 podman[311634]: 2026-01-23 10:00:06.131629919 +0000 UTC m=+0.021585335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:00:06 np0005593232 podman[311634]: 2026-01-23 10:00:06.239715573 +0000 UTC m=+0.129670999 container init ddd2fe3b954e07487ec6d80a07b36e88e7677fc9f80dfee0c2bbe52541f5fc88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 05:00:06 np0005593232 podman[311634]: 2026-01-23 10:00:06.246136855 +0000 UTC m=+0.136092261 container start ddd2fe3b954e07487ec6d80a07b36e88e7677fc9f80dfee0c2bbe52541f5fc88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:00:06 np0005593232 podman[311634]: 2026-01-23 10:00:06.251163278 +0000 UTC m=+0.141118704 container attach ddd2fe3b954e07487ec6d80a07b36e88e7677fc9f80dfee0c2bbe52541f5fc88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ardinghelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 23 05:00:06 np0005593232 practical_ardinghelli[311651]: 167 167
Jan 23 05:00:06 np0005593232 systemd[1]: libpod-ddd2fe3b954e07487ec6d80a07b36e88e7677fc9f80dfee0c2bbe52541f5fc88.scope: Deactivated successfully.
Jan 23 05:00:06 np0005593232 podman[311634]: 2026-01-23 10:00:06.252932479 +0000 UTC m=+0.142887885 container died ddd2fe3b954e07487ec6d80a07b36e88e7677fc9f80dfee0c2bbe52541f5fc88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ardinghelli, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 05:00:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:00:06 np0005593232 systemd[1]: var-lib-containers-storage-overlay-85bd8b3d8c61b6e7b2fd497ccde987f6afc52c712fb2591b14efa27d17c8cfca-merged.mount: Deactivated successfully.
Jan 23 05:00:06 np0005593232 podman[311634]: 2026-01-23 10:00:06.287921104 +0000 UTC m=+0.177876510 container remove ddd2fe3b954e07487ec6d80a07b36e88e7677fc9f80dfee0c2bbe52541f5fc88 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 05:00:06 np0005593232 systemd[1]: libpod-conmon-ddd2fe3b954e07487ec6d80a07b36e88e7677fc9f80dfee0c2bbe52541f5fc88.scope: Deactivated successfully.
Jan 23 05:00:06 np0005593232 nova_compute[250269]: 2026-01-23 10:00:06.435 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:06 np0005593232 podman[311676]: 2026-01-23 10:00:06.456952942 +0000 UTC m=+0.040121773 container create 21e0cd58cbcfd24c1c655b5746121e481fadfbaa7bf227adb6ab43b8472d5323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:00:06 np0005593232 systemd[1]: Started libpod-conmon-21e0cd58cbcfd24c1c655b5746121e481fadfbaa7bf227adb6ab43b8472d5323.scope.
Jan 23 05:00:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:00:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78bca3e23f8d5490ecaed800c970fbca221f14fe652bd8f1aa18908626df7d39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:00:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78bca3e23f8d5490ecaed800c970fbca221f14fe652bd8f1aa18908626df7d39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:00:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78bca3e23f8d5490ecaed800c970fbca221f14fe652bd8f1aa18908626df7d39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:00:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78bca3e23f8d5490ecaed800c970fbca221f14fe652bd8f1aa18908626df7d39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:00:06 np0005593232 podman[311676]: 2026-01-23 10:00:06.530464882 +0000 UTC m=+0.113633733 container init 21e0cd58cbcfd24c1c655b5746121e481fadfbaa7bf227adb6ab43b8472d5323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:00:06 np0005593232 podman[311676]: 2026-01-23 10:00:06.439849525 +0000 UTC m=+0.023018376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:00:06 np0005593232 podman[311676]: 2026-01-23 10:00:06.538169171 +0000 UTC m=+0.121338012 container start 21e0cd58cbcfd24c1c655b5746121e481fadfbaa7bf227adb6ab43b8472d5323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 23 05:00:06 np0005593232 podman[311676]: 2026-01-23 10:00:06.544556413 +0000 UTC m=+0.127725344 container attach 21e0cd58cbcfd24c1c655b5746121e481fadfbaa7bf227adb6ab43b8472d5323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:00:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2068: 321 pgs: 321 active+clean; 246 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 23 05:00:07 np0005593232 nervous_robinson[311692]: {
Jan 23 05:00:07 np0005593232 nervous_robinson[311692]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:00:07 np0005593232 nervous_robinson[311692]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:00:07 np0005593232 nervous_robinson[311692]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:00:07 np0005593232 nervous_robinson[311692]:        "osd_id": 0,
Jan 23 05:00:07 np0005593232 nervous_robinson[311692]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:00:07 np0005593232 nervous_robinson[311692]:        "type": "bluestore"
Jan 23 05:00:07 np0005593232 nervous_robinson[311692]:    }
Jan 23 05:00:07 np0005593232 nervous_robinson[311692]: }
Jan 23 05:00:07 np0005593232 systemd[1]: libpod-21e0cd58cbcfd24c1c655b5746121e481fadfbaa7bf227adb6ab43b8472d5323.scope: Deactivated successfully.
Jan 23 05:00:07 np0005593232 podman[311676]: 2026-01-23 10:00:07.503599781 +0000 UTC m=+1.086768612 container died 21e0cd58cbcfd24c1c655b5746121e481fadfbaa7bf227adb6ab43b8472d5323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 23 05:00:07 np0005593232 systemd[1]: var-lib-containers-storage-overlay-78bca3e23f8d5490ecaed800c970fbca221f14fe652bd8f1aa18908626df7d39-merged.mount: Deactivated successfully.
Jan 23 05:00:07 np0005593232 podman[311676]: 2026-01-23 10:00:07.586374035 +0000 UTC m=+1.169542866 container remove 21e0cd58cbcfd24c1c655b5746121e481fadfbaa7bf227adb6ab43b8472d5323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_robinson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:00:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:00:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:00:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:00:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:00:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:00:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:00:07 np0005593232 systemd[1]: libpod-conmon-21e0cd58cbcfd24c1c655b5746121e481fadfbaa7bf227adb6ab43b8472d5323.scope: Deactivated successfully.
Jan 23 05:00:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:00:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:00:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:00:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:00:07 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev eed7e207-ab20-4b02-9d22-d1a461b224c0 does not exist
Jan 23 05:00:07 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 277a4dff-9cc9-4191-8fdf-0b7b5d95589c does not exist
Jan 23 05:00:07 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b7aaea31-bce1-430f-9c04-f4a479797e53 does not exist
Jan 23 05:00:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:07.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:07 np0005593232 nova_compute[250269]: 2026-01-23 10:00:07.947 250273 DEBUG oslo_concurrency.lockutils [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:00:07 np0005593232 nova_compute[250269]: 2026-01-23 10:00:07.947 250273 DEBUG oslo_concurrency.lockutils [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:00:07 np0005593232 nova_compute[250269]: 2026-01-23 10:00:07.948 250273 INFO nova.compute.manager [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Rebooting instance#033[00m
Jan 23 05:00:07 np0005593232 nova_compute[250269]: 2026-01-23 10:00:07.976 250273 DEBUG oslo_concurrency.lockutils [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Acquiring lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:00:07 np0005593232 nova_compute[250269]: 2026-01-23 10:00:07.977 250273 DEBUG oslo_concurrency.lockutils [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Acquired lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:00:07 np0005593232 nova_compute[250269]: 2026-01-23 10:00:07.977 250273 DEBUG nova.network.neutron [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:00:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:08.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:08 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:00:08 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:00:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2069: 321 pgs: 321 active+clean; 246 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 23 05:00:09 np0005593232 podman[311779]: 2026-01-23 10:00:09.402774026 +0000 UTC m=+0.060177903 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:00:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:09.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:10.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:10 np0005593232 nova_compute[250269]: 2026-01-23 10:00:10.607 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2070: 321 pgs: 321 active+clean; 246 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 23 05:00:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:00:11 np0005593232 nova_compute[250269]: 2026-01-23 10:00:11.438 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:00:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:11.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:00:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:12.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:12 np0005593232 nova_compute[250269]: 2026-01-23 10:00:12.728 250273 DEBUG nova.network.neutron [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Updating instance_info_cache with network_info: [{"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:00:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2071: 321 pgs: 321 active+clean; 246 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 23 05:00:13 np0005593232 nova_compute[250269]: 2026-01-23 10:00:13.124 250273 DEBUG oslo_concurrency.lockutils [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Releasing lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:00:13 np0005593232 nova_compute[250269]: 2026-01-23 10:00:13.126 250273 DEBUG nova.compute.manager [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:00:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:13.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:14 np0005593232 kernel: tap35f84523-a0 (unregistering): left promiscuous mode
Jan 23 05:00:14 np0005593232 NetworkManager[49057]: <info>  [1769162414.0709] device (tap35f84523-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.083 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:14 np0005593232 ovn_controller[151001]: 2026-01-23T10:00:14Z|00301|binding|INFO|Releasing lport 35f84523-a0b5-4102-ba04-cc5da6075d54 from this chassis (sb_readonly=0)
Jan 23 05:00:14 np0005593232 ovn_controller[151001]: 2026-01-23T10:00:14Z|00302|binding|INFO|Setting lport 35f84523-a0b5-4102-ba04-cc5da6075d54 down in Southbound
Jan 23 05:00:14 np0005593232 ovn_controller[151001]: 2026-01-23T10:00:14Z|00303|binding|INFO|Removing iface tap35f84523-a0 ovn-installed in OVS
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.086 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.103 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:14 np0005593232 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Jan 23 05:00:14 np0005593232 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d0000005d.scope: Consumed 14.794s CPU time.
Jan 23 05:00:14 np0005593232 systemd-machined[215836]: Machine qemu-36-instance-0000005d terminated.
Jan 23 05:00:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:14.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:14.273 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:d6:dc 10.100.0.5'], port_security=['fa:16:3e:b8:d6:dc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1bdbf4d2-447b-47d0-8b3f-878ee65905a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c5c1d0762242f29a5d26033efd9f6d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '53abfec9-e9a4-4b72-b0e0-38bea0069f7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26eced02-0507-4a33-9943-52faf3fc8cd2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=35f84523-a0b5-4102-ba04-cc5da6075d54) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:00:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:14.275 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 35f84523-a0b5-4102-ba04-cc5da6075d54 in datapath ee03d7c9-e107-41bf-95cc-5508578ad66c unbound from our chassis#033[00m
Jan 23 05:00:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:14.276 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ee03d7c9-e107-41bf-95cc-5508578ad66c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:00:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:14.278 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[39b7b61c-d8da-4738-98d7-f0f604e50c5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:14.278 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c namespace which is not needed anymore#033[00m
Jan 23 05:00:14 np0005593232 NetworkManager[49057]: <info>  [1769162414.3369] manager: (tap35f84523-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/153)
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.338 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.344 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.355 250273 INFO nova.virt.libvirt.driver [-] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Instance destroyed successfully.#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.356 250273 DEBUG nova.objects.instance [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lazy-loading 'resources' on Instance uuid 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:00:14 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[310717]: [NOTICE]   (310721) : haproxy version is 2.8.14-c23fe91
Jan 23 05:00:14 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[310717]: [NOTICE]   (310721) : path to executable is /usr/sbin/haproxy
Jan 23 05:00:14 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[310717]: [WARNING]  (310721) : Exiting Master process...
Jan 23 05:00:14 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[310717]: [ALERT]    (310721) : Current worker (310725) exited with code 143 (Terminated)
Jan 23 05:00:14 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[310717]: [WARNING]  (310721) : All workers exited. Exiting... (0)
Jan 23 05:00:14 np0005593232 systemd[1]: libpod-07e5bbf0f6fd60ae59cccd3cb04b49b9771f8e30569370c28c0dbb54bbb1d51f.scope: Deactivated successfully.
Jan 23 05:00:14 np0005593232 podman[311833]: 2026-01-23 10:00:14.416854318 +0000 UTC m=+0.043608671 container died 07e5bbf0f6fd60ae59cccd3cb04b49b9771f8e30569370c28c0dbb54bbb1d51f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 05:00:14 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-07e5bbf0f6fd60ae59cccd3cb04b49b9771f8e30569370c28c0dbb54bbb1d51f-userdata-shm.mount: Deactivated successfully.
Jan 23 05:00:14 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4a182760b561881a27387fe07beeed352cb3e32d90b51efab40d174a488f9967-merged.mount: Deactivated successfully.
Jan 23 05:00:14 np0005593232 podman[311833]: 2026-01-23 10:00:14.449587879 +0000 UTC m=+0.076342232 container cleanup 07e5bbf0f6fd60ae59cccd3cb04b49b9771f8e30569370c28c0dbb54bbb1d51f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 05:00:14 np0005593232 systemd[1]: libpod-conmon-07e5bbf0f6fd60ae59cccd3cb04b49b9771f8e30569370c28c0dbb54bbb1d51f.scope: Deactivated successfully.
Jan 23 05:00:14 np0005593232 podman[311861]: 2026-01-23 10:00:14.51323895 +0000 UTC m=+0.043860209 container remove 07e5bbf0f6fd60ae59cccd3cb04b49b9771f8e30569370c28c0dbb54bbb1d51f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 23 05:00:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:14.519 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[116fa8ce-2295-4c76-adbc-1e21a3826c76]: (4, ('Fri Jan 23 10:00:14 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c (07e5bbf0f6fd60ae59cccd3cb04b49b9771f8e30569370c28c0dbb54bbb1d51f)\n07e5bbf0f6fd60ae59cccd3cb04b49b9771f8e30569370c28c0dbb54bbb1d51f\nFri Jan 23 10:00:14 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c (07e5bbf0f6fd60ae59cccd3cb04b49b9771f8e30569370c28c0dbb54bbb1d51f)\n07e5bbf0f6fd60ae59cccd3cb04b49b9771f8e30569370c28c0dbb54bbb1d51f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:14.521 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[019147f4-dbb8-4c71-bc30-2b1cbbcda3a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:14.522 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee03d7c9-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.524 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:14 np0005593232 kernel: tapee03d7c9-e0: left promiscuous mode
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.540 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:14.544 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[09b4fa92-cb01-4b5f-8df6-b6392d802737]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:14.556 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3dc027a4-9af2-47f8-be62-03d8a8ce041c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:14.558 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2cd02c29-93e2-499d-9655-866da68319b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:14.575 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0c52b77e-7229-447a-8ec2-519cc80746f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 628713, 'reachable_time': 41198, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311880, 'error': None, 'target': 'ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:14.577 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:00:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:14.577 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[53920d6d-fb29-45c5-b4a0-905f994e34ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:14 np0005593232 systemd[1]: run-netns-ovnmeta\x2dee03d7c9\x2de107\x2d41bf\x2d95cc\x2d5508578ad66c.mount: Deactivated successfully.
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.625 250273 DEBUG nova.virt.libvirt.vif [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:59:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1908507141',display_name='tempest-ServerActionsTestJSON-server-1908507141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1908507141',id=93,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0AiEKt9gHrsbueqjCG64VrzhP898xYsJXOd2/6uW3CZrw7c/2vnYXFOKeIp4qvJ25g/gz5/w2irrKH3R3Pyr6HiyEmMxGMtHTZ1L/l92xM4YiKXMLNL4VsFVwX3d+71g==',key_name='tempest-keypair-1055968095',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:59:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='74c5c1d0762242f29a5d26033efd9f6d',ramdisk_id='',reservation_id='r-ii3s65d1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1619235720',owner_user_name='tempest-ServerActionsTestJSON-1619235720-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:00:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9d4a5c201efa4992a9ef57d8abdc1675',uuid=1bdbf4d2-447b-47d0-8b3f-878ee65905a7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.626 250273 DEBUG nova.network.os_vif_util [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converting VIF {"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.626 250273 DEBUG nova.network.os_vif_util [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.627 250273 DEBUG os_vif [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.628 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.628 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35f84523-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.629 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.631 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.635 250273 INFO os_vif [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0')#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.642 250273 DEBUG nova.virt.libvirt.driver [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Start _get_guest_xml network_info=[{"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.646 250273 WARNING nova.virt.libvirt.driver [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.655 250273 DEBUG nova.virt.libvirt.host [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.655 250273 DEBUG nova.virt.libvirt.host [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.689 250273 DEBUG nova.virt.libvirt.host [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.690 250273 DEBUG nova.virt.libvirt.host [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.692 250273 DEBUG nova.virt.libvirt.driver [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.692 250273 DEBUG nova.virt.hardware [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.693 250273 DEBUG nova.virt.hardware [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.693 250273 DEBUG nova.virt.hardware [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.693 250273 DEBUG nova.virt.hardware [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.694 250273 DEBUG nova.virt.hardware [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.694 250273 DEBUG nova.virt.hardware [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.694 250273 DEBUG nova.virt.hardware [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.695 250273 DEBUG nova.virt.hardware [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.695 250273 DEBUG nova.virt.hardware [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.695 250273 DEBUG nova.virt.hardware [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.696 250273 DEBUG nova.virt.hardware [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.696 250273 DEBUG nova.objects.instance [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:00:14 np0005593232 nova_compute[250269]: 2026-01-23 10:00:14.824 250273 DEBUG oslo_concurrency.processutils [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:00:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2072: 321 pgs: 321 active+clean; 246 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 296 KiB/s wr, 32 op/s
Jan 23 05:00:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:00:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1531769357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:00:15 np0005593232 nova_compute[250269]: 2026-01-23 10:00:15.301 250273 DEBUG oslo_concurrency.processutils [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:00:15 np0005593232 nova_compute[250269]: 2026-01-23 10:00:15.353 250273 DEBUG oslo_concurrency.processutils [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:00:15 np0005593232 nova_compute[250269]: 2026-01-23 10:00:15.610 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:15 np0005593232 nova_compute[250269]: 2026-01-23 10:00:15.737 250273 DEBUG nova.compute.manager [req-ca9d4b5b-ae9f-43be-8e52-de48ce37c6db req-0e140226-ac08-4e74-a3e1-6818e1fbf611 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received event network-vif-unplugged-35f84523-a0b5-4102-ba04-cc5da6075d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:00:15 np0005593232 nova_compute[250269]: 2026-01-23 10:00:15.738 250273 DEBUG oslo_concurrency.lockutils [req-ca9d4b5b-ae9f-43be-8e52-de48ce37c6db req-0e140226-ac08-4e74-a3e1-6818e1fbf611 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:00:15 np0005593232 nova_compute[250269]: 2026-01-23 10:00:15.738 250273 DEBUG oslo_concurrency.lockutils [req-ca9d4b5b-ae9f-43be-8e52-de48ce37c6db req-0e140226-ac08-4e74-a3e1-6818e1fbf611 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:00:15 np0005593232 nova_compute[250269]: 2026-01-23 10:00:15.738 250273 DEBUG oslo_concurrency.lockutils [req-ca9d4b5b-ae9f-43be-8e52-de48ce37c6db req-0e140226-ac08-4e74-a3e1-6818e1fbf611 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:00:15 np0005593232 nova_compute[250269]: 2026-01-23 10:00:15.739 250273 DEBUG nova.compute.manager [req-ca9d4b5b-ae9f-43be-8e52-de48ce37c6db req-0e140226-ac08-4e74-a3e1-6818e1fbf611 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] No waiting events found dispatching network-vif-unplugged-35f84523-a0b5-4102-ba04-cc5da6075d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:00:15 np0005593232 nova_compute[250269]: 2026-01-23 10:00:15.739 250273 WARNING nova.compute.manager [req-ca9d4b5b-ae9f-43be-8e52-de48ce37c6db req-0e140226-ac08-4e74-a3e1-6818e1fbf611 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received unexpected event network-vif-unplugged-35f84523-a0b5-4102-ba04-cc5da6075d54 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Jan 23 05:00:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:00:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3982295067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:00:15 np0005593232 nova_compute[250269]: 2026-01-23 10:00:15.844 250273 DEBUG oslo_concurrency.processutils [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:00:15 np0005593232 nova_compute[250269]: 2026-01-23 10:00:15.845 250273 DEBUG nova.virt.libvirt.vif [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:59:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1908507141',display_name='tempest-ServerActionsTestJSON-server-1908507141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1908507141',id=93,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0AiEKt9gHrsbueqjCG64VrzhP898xYsJXOd2/6uW3CZrw7c/2vnYXFOKeIp4qvJ25g/gz5/w2irrKH3R3Pyr6HiyEmMxGMtHTZ1L/l92xM4YiKXMLNL4VsFVwX3d+71g==',key_name='tempest-keypair-1055968095',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:59:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='74c5c1d0762242f29a5d26033efd9f6d',ramdisk_id='',reservation_id='r-ii3s65d1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1619235720',owner_user_name='tempest-ServerActionsTestJSON-1619235720-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:00:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9d4a5c201efa4992a9ef57d8abdc1675',uuid=1bdbf4d2-447b-47d0-8b3f-878ee65905a7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:00:15 np0005593232 nova_compute[250269]: 2026-01-23 10:00:15.846 250273 DEBUG nova.network.os_vif_util [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converting VIF {"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:00:15 np0005593232 nova_compute[250269]: 2026-01-23 10:00:15.847 250273 DEBUG nova.network.os_vif_util [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:00:15 np0005593232 nova_compute[250269]: 2026-01-23 10:00:15.848 250273 DEBUG nova.objects.instance [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lazy-loading 'pci_devices' on Instance uuid 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:00:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:15.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:16.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.221 250273 DEBUG nova.virt.libvirt.driver [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:00:16 np0005593232 nova_compute[250269]:  <uuid>1bdbf4d2-447b-47d0-8b3f-878ee65905a7</uuid>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:  <name>instance-0000005d</name>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerActionsTestJSON-server-1908507141</nova:name>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:00:14</nova:creationTime>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:00:16 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:        <nova:user uuid="9d4a5c201efa4992a9ef57d8abdc1675">tempest-ServerActionsTestJSON-1619235720-project-member</nova:user>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:        <nova:project uuid="74c5c1d0762242f29a5d26033efd9f6d">tempest-ServerActionsTestJSON-1619235720</nova:project>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:        <nova:port uuid="35f84523-a0b5-4102-ba04-cc5da6075d54">
Jan 23 05:00:16 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <entry name="serial">1bdbf4d2-447b-47d0-8b3f-878ee65905a7</entry>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <entry name="uuid">1bdbf4d2-447b-47d0-8b3f-878ee65905a7</entry>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/1bdbf4d2-447b-47d0-8b3f-878ee65905a7_disk">
Jan 23 05:00:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:00:16 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/1bdbf4d2-447b-47d0-8b3f-878ee65905a7_disk.config">
Jan 23 05:00:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:00:16 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:b8:d6:dc"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <target dev="tap35f84523-a0"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/1bdbf4d2-447b-47d0-8b3f-878ee65905a7/console.log" append="off"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <input type="keyboard" bus="usb"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:00:16 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:00:16 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:00:16 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:00:16 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.223 250273 DEBUG nova.virt.libvirt.driver [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.224 250273 DEBUG nova.virt.libvirt.driver [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.225 250273 DEBUG nova.virt.libvirt.vif [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:59:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1908507141',display_name='tempest-ServerActionsTestJSON-server-1908507141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1908507141',id=93,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0AiEKt9gHrsbueqjCG64VrzhP898xYsJXOd2/6uW3CZrw7c/2vnYXFOKeIp4qvJ25g/gz5/w2irrKH3R3Pyr6HiyEmMxGMtHTZ1L/l92xM4YiKXMLNL4VsFVwX3d+71g==',key_name='tempest-keypair-1055968095',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:59:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='74c5c1d0762242f29a5d26033efd9f6d',ramdisk_id='',reservation_id='r-ii3s65d1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1619235720',owner_user_name='tempest-ServerActionsTestJSON-1619235720-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:00:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9d4a5c201efa4992a9ef57d8abdc1675',uuid=1bdbf4d2-447b-47d0-8b3f-878ee65905a7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.225 250273 DEBUG nova.network.os_vif_util [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converting VIF {"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.225 250273 DEBUG nova.network.os_vif_util [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.226 250273 DEBUG os_vif [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.226 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.227 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.227 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.229 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.230 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35f84523-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.230 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap35f84523-a0, col_values=(('external_ids', {'iface-id': '35f84523-a0b5-4102-ba04-cc5da6075d54', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:d6:dc', 'vm-uuid': '1bdbf4d2-447b-47d0-8b3f-878ee65905a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:00:16 np0005593232 NetworkManager[49057]: <info>  [1769162416.2329] manager: (tap35f84523-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/154)
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.232 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.235 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.239 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.240 250273 INFO os_vif [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0')#033[00m
Jan 23 05:00:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:00:16 np0005593232 kernel: tap35f84523-a0: entered promiscuous mode
Jan 23 05:00:16 np0005593232 NetworkManager[49057]: <info>  [1769162416.3126] manager: (tap35f84523-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/155)
Jan 23 05:00:16 np0005593232 ovn_controller[151001]: 2026-01-23T10:00:16Z|00304|binding|INFO|Claiming lport 35f84523-a0b5-4102-ba04-cc5da6075d54 for this chassis.
Jan 23 05:00:16 np0005593232 ovn_controller[151001]: 2026-01-23T10:00:16Z|00305|binding|INFO|35f84523-a0b5-4102-ba04-cc5da6075d54: Claiming fa:16:3e:b8:d6:dc 10.100.0.5
Jan 23 05:00:16 np0005593232 systemd-udevd[311803]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.314 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:16 np0005593232 NetworkManager[49057]: <info>  [1769162416.3254] device (tap35f84523-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:00:16 np0005593232 NetworkManager[49057]: <info>  [1769162416.3264] device (tap35f84523-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:00:16 np0005593232 ovn_controller[151001]: 2026-01-23T10:00:16Z|00306|binding|INFO|Setting lport 35f84523-a0b5-4102-ba04-cc5da6075d54 ovn-installed in OVS
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.329 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.332 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:16 np0005593232 systemd-machined[215836]: New machine qemu-37-instance-0000005d.
Jan 23 05:00:16 np0005593232 systemd[1]: Started Virtual Machine qemu-37-instance-0000005d.
Jan 23 05:00:16 np0005593232 ovn_controller[151001]: 2026-01-23T10:00:16Z|00307|binding|INFO|Setting lport 35f84523-a0b5-4102-ba04-cc5da6075d54 up in Southbound
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.543 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:d6:dc 10.100.0.5'], port_security=['fa:16:3e:b8:d6:dc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1bdbf4d2-447b-47d0-8b3f-878ee65905a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c5c1d0762242f29a5d26033efd9f6d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '53abfec9-e9a4-4b72-b0e0-38bea0069f7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26eced02-0507-4a33-9943-52faf3fc8cd2, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=35f84523-a0b5-4102-ba04-cc5da6075d54) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.544 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 35f84523-a0b5-4102-ba04-cc5da6075d54 in datapath ee03d7c9-e107-41bf-95cc-5508578ad66c bound to our chassis#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.546 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee03d7c9-e107-41bf-95cc-5508578ad66c#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.560 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[fc099763-5cdc-4a8a-a1c0-4e7491cba7e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.562 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapee03d7c9-e1 in ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.563 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapee03d7c9-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.564 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8fd15476-4650-4887-aa0c-05cae2559fdb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.565 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c267c601-90e7-4341-adb8-f3f686f231d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.578 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[41b66c74-edf5-4cde-ad6c-335b0a6a0026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.589 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[50ee392e-3c1e-41d7-bd11-9413d776a320]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.632 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[9ce52642-826d-4ed2-a9f4-2d051efb6b61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:16 np0005593232 NetworkManager[49057]: <info>  [1769162416.6383] manager: (tapee03d7c9-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/156)
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.637 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b4187158-c72f-472f-8eaa-2a1d70a07d7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.681 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b8ec25-9772-4fa1-bf82-5d0dd437e357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.684 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[13bfc0cf-95bf-4403-9571-28152d3106d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:16 np0005593232 NetworkManager[49057]: <info>  [1769162416.7157] device (tapee03d7c9-e0): carrier: link connected
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.727 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[e3411ea6-7ab3-44de-8870-9d69b904ccdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.745 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[53af821d-65fa-4923-9ae1-7995455f10db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee03d7c9-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:65:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632985, 'reachable_time': 23418, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311990, 'error': None, 'target': 'ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.763 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[fe5f16dd-f7ca-44ed-89e7-9380256c298e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecd:6530'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632985, 'tstamp': 632985}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311991, 'error': None, 'target': 'ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.781 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[025e5e98-127e-4756-9f8d-5852b28645fb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee03d7c9-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:65:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 93], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632985, 'reachable_time': 23418, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312007, 'error': None, 'target': 'ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.821 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[edb35157-e42c-4e50-9076-5ea4b987e46a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.883 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a600643f-515f-4df9-80d4-608cbcab28ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.884 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee03d7c9-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.885 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.885 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee03d7c9-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.887 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:16 np0005593232 kernel: tapee03d7c9-e0: entered promiscuous mode
Jan 23 05:00:16 np0005593232 NetworkManager[49057]: <info>  [1769162416.8879] manager: (tapee03d7c9-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.888 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.891 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee03d7c9-e0, col_values=(('external_ids', {'iface-id': '702d4523-a665-42f5-9a36-57d187c0698a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.892 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:16 np0005593232 ovn_controller[151001]: 2026-01-23T10:00:16Z|00308|binding|INFO|Releasing lport 702d4523-a665-42f5-9a36-57d187c0698a from this chassis (sb_readonly=0)
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.893 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.894 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ee03d7c9-e107-41bf-95cc-5508578ad66c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ee03d7c9-e107-41bf-95cc-5508578ad66c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.894 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec18dc8-a55c-4ce1-b249-497705d6040a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.895 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-ee03d7c9-e107-41bf-95cc-5508578ad66c
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/ee03d7c9-e107-41bf-95cc-5508578ad66c.pid.haproxy
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID ee03d7c9-e107-41bf-95cc-5508578ad66c
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:00:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:16.896 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'env', 'PROCESS_TAG=haproxy-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ee03d7c9-e107-41bf-95cc-5508578ad66c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:00:16 np0005593232 nova_compute[250269]: 2026-01-23 10:00:16.905 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2073: 321 pgs: 321 active+clean; 246 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 296 KiB/s wr, 32 op/s
Jan 23 05:00:17 np0005593232 nova_compute[250269]: 2026-01-23 10:00:17.063 250273 DEBUG nova.compute.manager [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:00:17 np0005593232 nova_compute[250269]: 2026-01-23 10:00:17.064 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Removed pending event for 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 23 05:00:17 np0005593232 nova_compute[250269]: 2026-01-23 10:00:17.064 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162417.0628123, 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:00:17 np0005593232 nova_compute[250269]: 2026-01-23 10:00:17.064 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:00:17 np0005593232 nova_compute[250269]: 2026-01-23 10:00:17.069 250273 INFO nova.virt.libvirt.driver [-] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Instance rebooted successfully.#033[00m
Jan 23 05:00:17 np0005593232 nova_compute[250269]: 2026-01-23 10:00:17.069 250273 DEBUG nova.compute.manager [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:00:17 np0005593232 nova_compute[250269]: 2026-01-23 10:00:17.128 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:00:17 np0005593232 nova_compute[250269]: 2026-01-23 10:00:17.134 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:00:17 np0005593232 podman[312066]: 2026-01-23 10:00:17.27042055 +0000 UTC m=+0.063124887 container create e1594e2b0bbded75e8f4e1d82b88c4352a7b9ab4f1128e1c48a781f7347e1831 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:00:17 np0005593232 systemd[1]: Started libpod-conmon-e1594e2b0bbded75e8f4e1d82b88c4352a7b9ab4f1128e1c48a781f7347e1831.scope.
Jan 23 05:00:17 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:00:17 np0005593232 podman[312066]: 2026-01-23 10:00:17.237112322 +0000 UTC m=+0.029816689 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:00:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b9e66b227c14cebe09f2e939f12384ddd8f9ba019b19b78652c5195811fa566/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:00:17 np0005593232 podman[312066]: 2026-01-23 10:00:17.349129848 +0000 UTC m=+0.141834195 container init e1594e2b0bbded75e8f4e1d82b88c4352a7b9ab4f1128e1c48a781f7347e1831 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 05:00:17 np0005593232 podman[312066]: 2026-01-23 10:00:17.354782169 +0000 UTC m=+0.147486506 container start e1594e2b0bbded75e8f4e1d82b88c4352a7b9ab4f1128e1c48a781f7347e1831 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:00:17 np0005593232 nova_compute[250269]: 2026-01-23 10:00:17.387 250273 DEBUG oslo_concurrency.lockutils [None req-edc38619-4ac7-465d-82e1-c5897f685eb5 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 9.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:00:17 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[312082]: [NOTICE]   (312086) : New worker (312088) forked
Jan 23 05:00:17 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[312082]: [NOTICE]   (312086) : Loading success.
Jan 23 05:00:17 np0005593232 nova_compute[250269]: 2026-01-23 10:00:17.416 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162417.0638347, 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:00:17 np0005593232 nova_compute[250269]: 2026-01-23 10:00:17.416 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] VM Started (Lifecycle Event)#033[00m
Jan 23 05:00:17 np0005593232 nova_compute[250269]: 2026-01-23 10:00:17.445 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:00:17 np0005593232 nova_compute[250269]: 2026-01-23 10:00:17.450 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:00:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:17.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:00:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:18.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:00:18 np0005593232 nova_compute[250269]: 2026-01-23 10:00:18.233 250273 DEBUG nova.compute.manager [req-429b61a8-6c75-410d-8fc4-6c14a7a3016c req-ecc86f92-ac27-482b-a256-167ce246bc8a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:00:18 np0005593232 nova_compute[250269]: 2026-01-23 10:00:18.234 250273 DEBUG oslo_concurrency.lockutils [req-429b61a8-6c75-410d-8fc4-6c14a7a3016c req-ecc86f92-ac27-482b-a256-167ce246bc8a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:00:18 np0005593232 nova_compute[250269]: 2026-01-23 10:00:18.234 250273 DEBUG oslo_concurrency.lockutils [req-429b61a8-6c75-410d-8fc4-6c14a7a3016c req-ecc86f92-ac27-482b-a256-167ce246bc8a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:00:18 np0005593232 nova_compute[250269]: 2026-01-23 10:00:18.235 250273 DEBUG oslo_concurrency.lockutils [req-429b61a8-6c75-410d-8fc4-6c14a7a3016c req-ecc86f92-ac27-482b-a256-167ce246bc8a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:00:18 np0005593232 nova_compute[250269]: 2026-01-23 10:00:18.235 250273 DEBUG nova.compute.manager [req-429b61a8-6c75-410d-8fc4-6c14a7a3016c req-ecc86f92-ac27-482b-a256-167ce246bc8a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] No waiting events found dispatching network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:00:18 np0005593232 nova_compute[250269]: 2026-01-23 10:00:18.235 250273 WARNING nova.compute.manager [req-429b61a8-6c75-410d-8fc4-6c14a7a3016c req-ecc86f92-ac27-482b-a256-167ce246bc8a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received unexpected event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:00:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2074: 321 pgs: 321 active+clean; 246 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 453 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Jan 23 05:00:19 np0005593232 ovn_controller[151001]: 2026-01-23T10:00:19Z|00309|binding|INFO|Releasing lport 702d4523-a665-42f5-9a36-57d187c0698a from this chassis (sb_readonly=0)
Jan 23 05:00:19 np0005593232 nova_compute[250269]: 2026-01-23 10:00:19.648 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:19.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:20.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:20 np0005593232 nova_compute[250269]: 2026-01-23 10:00:20.213 250273 INFO nova.compute.manager [None req-e261484f-3237-4704-b72f-39b0bdd04b1c 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Get console output#033[00m
Jan 23 05:00:20 np0005593232 nova_compute[250269]: 2026-01-23 10:00:20.218 250273 INFO oslo.privsep.daemon [None req-e261484f-3237-4704-b72f-39b0bdd04b1c 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp6f14ukyu/privsep.sock']#033[00m
Jan 23 05:00:20 np0005593232 nova_compute[250269]: 2026-01-23 10:00:20.612 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2075: 321 pgs: 321 active+clean; 246 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 453 KiB/s rd, 1.1 KiB/s wr, 23 op/s
Jan 23 05:00:21 np0005593232 nova_compute[250269]: 2026-01-23 10:00:21.063 250273 INFO oslo.privsep.daemon [None req-e261484f-3237-4704-b72f-39b0bdd04b1c 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Spawned new privsep daemon via rootwrap#033[00m
Jan 23 05:00:21 np0005593232 nova_compute[250269]: 2026-01-23 10:00:20.923 312104 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 23 05:00:21 np0005593232 nova_compute[250269]: 2026-01-23 10:00:20.928 312104 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 23 05:00:21 np0005593232 nova_compute[250269]: 2026-01-23 10:00:20.930 312104 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Jan 23 05:00:21 np0005593232 nova_compute[250269]: 2026-01-23 10:00:20.930 312104 INFO oslo.privsep.daemon [-] privsep daemon running as pid 312104#033[00m
Jan 23 05:00:21 np0005593232 nova_compute[250269]: 2026-01-23 10:00:21.233 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:00:21 np0005593232 nova_compute[250269]: 2026-01-23 10:00:21.413 250273 DEBUG nova.compute.manager [req-a22de214-da7f-41fe-be14-d29cb69d27f9 req-b7ad29c6-c67b-46f9-8a90-7ce9252a4921 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:00:21 np0005593232 nova_compute[250269]: 2026-01-23 10:00:21.414 250273 DEBUG oslo_concurrency.lockutils [req-a22de214-da7f-41fe-be14-d29cb69d27f9 req-b7ad29c6-c67b-46f9-8a90-7ce9252a4921 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:00:21 np0005593232 nova_compute[250269]: 2026-01-23 10:00:21.415 250273 DEBUG oslo_concurrency.lockutils [req-a22de214-da7f-41fe-be14-d29cb69d27f9 req-b7ad29c6-c67b-46f9-8a90-7ce9252a4921 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:00:21 np0005593232 nova_compute[250269]: 2026-01-23 10:00:21.415 250273 DEBUG oslo_concurrency.lockutils [req-a22de214-da7f-41fe-be14-d29cb69d27f9 req-b7ad29c6-c67b-46f9-8a90-7ce9252a4921 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:00:21 np0005593232 nova_compute[250269]: 2026-01-23 10:00:21.416 250273 DEBUG nova.compute.manager [req-a22de214-da7f-41fe-be14-d29cb69d27f9 req-b7ad29c6-c67b-46f9-8a90-7ce9252a4921 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] No waiting events found dispatching network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:00:21 np0005593232 nova_compute[250269]: 2026-01-23 10:00:21.417 250273 WARNING nova.compute.manager [req-a22de214-da7f-41fe-be14-d29cb69d27f9 req-b7ad29c6-c67b-46f9-8a90-7ce9252a4921 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received unexpected event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:00:21 np0005593232 nova_compute[250269]: 2026-01-23 10:00:21.417 250273 DEBUG nova.compute.manager [req-a22de214-da7f-41fe-be14-d29cb69d27f9 req-b7ad29c6-c67b-46f9-8a90-7ce9252a4921 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:00:21 np0005593232 nova_compute[250269]: 2026-01-23 10:00:21.418 250273 DEBUG oslo_concurrency.lockutils [req-a22de214-da7f-41fe-be14-d29cb69d27f9 req-b7ad29c6-c67b-46f9-8a90-7ce9252a4921 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:00:21 np0005593232 nova_compute[250269]: 2026-01-23 10:00:21.418 250273 DEBUG oslo_concurrency.lockutils [req-a22de214-da7f-41fe-be14-d29cb69d27f9 req-b7ad29c6-c67b-46f9-8a90-7ce9252a4921 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:00:21 np0005593232 nova_compute[250269]: 2026-01-23 10:00:21.418 250273 DEBUG oslo_concurrency.lockutils [req-a22de214-da7f-41fe-be14-d29cb69d27f9 req-b7ad29c6-c67b-46f9-8a90-7ce9252a4921 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:00:21 np0005593232 nova_compute[250269]: 2026-01-23 10:00:21.418 250273 DEBUG nova.compute.manager [req-a22de214-da7f-41fe-be14-d29cb69d27f9 req-b7ad29c6-c67b-46f9-8a90-7ce9252a4921 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] No waiting events found dispatching network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:00:21 np0005593232 nova_compute[250269]: 2026-01-23 10:00:21.419 250273 WARNING nova.compute.manager [req-a22de214-da7f-41fe-be14-d29cb69d27f9 req-b7ad29c6-c67b-46f9-8a90-7ce9252a4921 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received unexpected event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:00:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:21.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:22.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2076: 321 pgs: 321 active+clean; 246 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 KiB/s wr, 71 op/s
Jan 23 05:00:23 np0005593232 nova_compute[250269]: 2026-01-23 10:00:23.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:00:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:23.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:24.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2077: 321 pgs: 321 active+clean; 246 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 71 op/s
Jan 23 05:00:25 np0005593232 nova_compute[250269]: 2026-01-23 10:00:25.615 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:25.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:00:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:26.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:00:26 np0005593232 nova_compute[250269]: 2026-01-23 10:00:26.235 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:00:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2078: 321 pgs: 321 active+clean; 246 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 71 op/s
Jan 23 05:00:27 np0005593232 nova_compute[250269]: 2026-01-23 10:00:27.288 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:00:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:00:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:27.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:00:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:00:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:28.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:00:28 np0005593232 nova_compute[250269]: 2026-01-23 10:00:28.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:00:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2079: 321 pgs: 321 active+clean; 246 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 71 op/s
Jan 23 05:00:29 np0005593232 nova_compute[250269]: 2026-01-23 10:00:29.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:00:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:00:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:29.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:00:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:30.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:30 np0005593232 ovn_controller[151001]: 2026-01-23T10:00:30Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:d6:dc 10.100.0.5
Jan 23 05:00:30 np0005593232 nova_compute[250269]: 2026-01-23 10:00:30.617 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2080: 321 pgs: 321 active+clean; 246 MiB data, 824 MiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 48 op/s
Jan 23 05:00:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:00:31 np0005593232 nova_compute[250269]: 2026-01-23 10:00:31.277 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:31 np0005593232 nova_compute[250269]: 2026-01-23 10:00:31.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:00:31 np0005593232 nova_compute[250269]: 2026-01-23 10:00:31.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:00:31 np0005593232 nova_compute[250269]: 2026-01-23 10:00:31.506 250273 DEBUG oslo_concurrency.lockutils [None req-920dd7da-1edc-4529-acd4-219c2732eb50 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:00:31 np0005593232 nova_compute[250269]: 2026-01-23 10:00:31.507 250273 DEBUG oslo_concurrency.lockutils [None req-920dd7da-1edc-4529-acd4-219c2732eb50 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:00:31 np0005593232 nova_compute[250269]: 2026-01-23 10:00:31.508 250273 DEBUG nova.compute.manager [None req-920dd7da-1edc-4529-acd4-219c2732eb50 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:00:31 np0005593232 nova_compute[250269]: 2026-01-23 10:00:31.512 250273 DEBUG nova.compute.manager [None req-920dd7da-1edc-4529-acd4-219c2732eb50 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Jan 23 05:00:31 np0005593232 nova_compute[250269]: 2026-01-23 10:00:31.514 250273 DEBUG nova.objects.instance [None req-920dd7da-1edc-4529-acd4-219c2732eb50 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lazy-loading 'flavor' on Instance uuid 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:00:31 np0005593232 nova_compute[250269]: 2026-01-23 10:00:31.580 250273 DEBUG nova.virt.libvirt.driver [None req-920dd7da-1edc-4529-acd4-219c2732eb50 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 23 05:00:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:31.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:32.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:32 np0005593232 nova_compute[250269]: 2026-01-23 10:00:32.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:00:32 np0005593232 nova_compute[250269]: 2026-01-23 10:00:32.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:00:32 np0005593232 nova_compute[250269]: 2026-01-23 10:00:32.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:00:32 np0005593232 nova_compute[250269]: 2026-01-23 10:00:32.331 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:00:32 np0005593232 nova_compute[250269]: 2026-01-23 10:00:32.332 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:00:32 np0005593232 nova_compute[250269]: 2026-01-23 10:00:32.332 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:00:32 np0005593232 nova_compute[250269]: 2026-01-23 10:00:32.333 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:00:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2081: 321 pgs: 321 active+clean; 209 MiB data, 821 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 41 KiB/s wr, 177 op/s
Jan 23 05:00:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:33.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:34.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:34 np0005593232 kernel: tap35f84523-a0 (unregistering): left promiscuous mode
Jan 23 05:00:34 np0005593232 NetworkManager[49057]: <info>  [1769162434.2713] device (tap35f84523-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:00:34 np0005593232 nova_compute[250269]: 2026-01-23 10:00:34.288 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:34 np0005593232 ovn_controller[151001]: 2026-01-23T10:00:34Z|00310|binding|INFO|Releasing lport 35f84523-a0b5-4102-ba04-cc5da6075d54 from this chassis (sb_readonly=0)
Jan 23 05:00:34 np0005593232 ovn_controller[151001]: 2026-01-23T10:00:34Z|00311|binding|INFO|Setting lport 35f84523-a0b5-4102-ba04-cc5da6075d54 down in Southbound
Jan 23 05:00:34 np0005593232 ovn_controller[151001]: 2026-01-23T10:00:34Z|00312|binding|INFO|Removing iface tap35f84523-a0 ovn-installed in OVS
Jan 23 05:00:34 np0005593232 nova_compute[250269]: 2026-01-23 10:00:34.292 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:34.299 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:d6:dc 10.100.0.5'], port_security=['fa:16:3e:b8:d6:dc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1bdbf4d2-447b-47d0-8b3f-878ee65905a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c5c1d0762242f29a5d26033efd9f6d', 'neutron:revision_number': '6', 'neutron:security_group_ids': '53abfec9-e9a4-4b72-b0e0-38bea0069f7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26eced02-0507-4a33-9943-52faf3fc8cd2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=35f84523-a0b5-4102-ba04-cc5da6075d54) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:00:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:34.301 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 35f84523-a0b5-4102-ba04-cc5da6075d54 in datapath ee03d7c9-e107-41bf-95cc-5508578ad66c unbound from our chassis#033[00m
Jan 23 05:00:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:34.303 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ee03d7c9-e107-41bf-95cc-5508578ad66c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:00:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:34.305 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[02cd6af9-b114-4a08-bac4-3b759cd9bebe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:34.305 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c namespace which is not needed anymore#033[00m
Jan 23 05:00:34 np0005593232 nova_compute[250269]: 2026-01-23 10:00:34.317 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:34 np0005593232 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Jan 23 05:00:34 np0005593232 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d0000005d.scope: Consumed 14.053s CPU time.
Jan 23 05:00:34 np0005593232 systemd-machined[215836]: Machine qemu-37-instance-0000005d terminated.
Jan 23 05:00:34 np0005593232 podman[312163]: 2026-01-23 10:00:34.398014665 +0000 UTC m=+0.090709781 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 23 05:00:34 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[312082]: [NOTICE]   (312086) : haproxy version is 2.8.14-c23fe91
Jan 23 05:00:34 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[312082]: [NOTICE]   (312086) : path to executable is /usr/sbin/haproxy
Jan 23 05:00:34 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[312082]: [WARNING]  (312086) : Exiting Master process...
Jan 23 05:00:34 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[312082]: [ALERT]    (312086) : Current worker (312088) exited with code 143 (Terminated)
Jan 23 05:00:34 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[312082]: [WARNING]  (312086) : All workers exited. Exiting... (0)
Jan 23 05:00:34 np0005593232 systemd[1]: libpod-e1594e2b0bbded75e8f4e1d82b88c4352a7b9ab4f1128e1c48a781f7347e1831.scope: Deactivated successfully.
Jan 23 05:00:34 np0005593232 podman[312209]: 2026-01-23 10:00:34.431636841 +0000 UTC m=+0.045091794 container died e1594e2b0bbded75e8f4e1d82b88c4352a7b9ab4f1128e1c48a781f7347e1831 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 23 05:00:34 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e1594e2b0bbded75e8f4e1d82b88c4352a7b9ab4f1128e1c48a781f7347e1831-userdata-shm.mount: Deactivated successfully.
Jan 23 05:00:34 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1b9e66b227c14cebe09f2e939f12384ddd8f9ba019b19b78652c5195811fa566-merged.mount: Deactivated successfully.
Jan 23 05:00:34 np0005593232 podman[312209]: 2026-01-23 10:00:34.481612352 +0000 UTC m=+0.095067285 container cleanup e1594e2b0bbded75e8f4e1d82b88c4352a7b9ab4f1128e1c48a781f7347e1831 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 23 05:00:34 np0005593232 systemd[1]: libpod-conmon-e1594e2b0bbded75e8f4e1d82b88c4352a7b9ab4f1128e1c48a781f7347e1831.scope: Deactivated successfully.
Jan 23 05:00:34 np0005593232 nova_compute[250269]: 2026-01-23 10:00:34.501 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:34 np0005593232 nova_compute[250269]: 2026-01-23 10:00:34.507 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:34 np0005593232 podman[312243]: 2026-01-23 10:00:34.549436771 +0000 UTC m=+0.045736411 container remove e1594e2b0bbded75e8f4e1d82b88c4352a7b9ab4f1128e1c48a781f7347e1831 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 23 05:00:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:34.556 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4d7bc4b2-cc4f-4b9e-a92c-bb28c704755c]: (4, ('Fri Jan 23 10:00:34 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c (e1594e2b0bbded75e8f4e1d82b88c4352a7b9ab4f1128e1c48a781f7347e1831)\ne1594e2b0bbded75e8f4e1d82b88c4352a7b9ab4f1128e1c48a781f7347e1831\nFri Jan 23 10:00:34 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c (e1594e2b0bbded75e8f4e1d82b88c4352a7b9ab4f1128e1c48a781f7347e1831)\ne1594e2b0bbded75e8f4e1d82b88c4352a7b9ab4f1128e1c48a781f7347e1831\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:34.558 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b8152e60-cdd0-4118-a05c-0f339054e4b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:34.559 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee03d7c9-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:00:34 np0005593232 nova_compute[250269]: 2026-01-23 10:00:34.560 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:34 np0005593232 kernel: tapee03d7c9-e0: left promiscuous mode
Jan 23 05:00:34 np0005593232 nova_compute[250269]: 2026-01-23 10:00:34.578 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:34.583 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ba971b2d-ef47-4f1c-873b-4128cc4e25e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:34 np0005593232 nova_compute[250269]: 2026-01-23 10:00:34.597 250273 INFO nova.virt.libvirt.driver [None req-920dd7da-1edc-4529-acd4-219c2732eb50 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Instance shutdown successfully after 3 seconds.#033[00m
Jan 23 05:00:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:34.596 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7caab0f1-28b9-4b29-b7bf-c972725f4e39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:34.599 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f66b0e50-5f6b-4d59-bf1f-173d9a10ad13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:34 np0005593232 nova_compute[250269]: 2026-01-23 10:00:34.604 250273 INFO nova.virt.libvirt.driver [-] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Instance destroyed successfully.#033[00m
Jan 23 05:00:34 np0005593232 nova_compute[250269]: 2026-01-23 10:00:34.605 250273 DEBUG nova.objects.instance [None req-920dd7da-1edc-4529-acd4-219c2732eb50 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lazy-loading 'numa_topology' on Instance uuid 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:00:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:34.619 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5296d120-6068-4103-916e-ff6ba20eaa5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632976, 'reachable_time': 33425, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312269, 'error': None, 'target': 'ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:34 np0005593232 systemd[1]: run-netns-ovnmeta\x2dee03d7c9\x2de107\x2d41bf\x2d95cc\x2d5508578ad66c.mount: Deactivated successfully.
Jan 23 05:00:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:34.626 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:00:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:34.626 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[a5f781e1-5307-4570-8d7d-f1cee2611f90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:34 np0005593232 nova_compute[250269]: 2026-01-23 10:00:34.640 250273 DEBUG nova.compute.manager [None req-920dd7da-1edc-4529-acd4-219c2732eb50 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:00:34 np0005593232 nova_compute[250269]: 2026-01-23 10:00:34.727 250273 DEBUG oslo_concurrency.lockutils [None req-920dd7da-1edc-4529-acd4-219c2732eb50 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:00:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2082: 321 pgs: 321 active+clean; 209 MiB data, 821 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 41 KiB/s wr, 128 op/s
Jan 23 05:00:34 np0005593232 nova_compute[250269]: 2026-01-23 10:00:34.961 250273 DEBUG nova.compute.manager [req-fe15dd73-c547-4329-aee1-fafecaba0c44 req-a8b03ede-db23-41b2-9bb2-87372d13bf44 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received event network-vif-unplugged-35f84523-a0b5-4102-ba04-cc5da6075d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:00:34 np0005593232 nova_compute[250269]: 2026-01-23 10:00:34.961 250273 DEBUG oslo_concurrency.lockutils [req-fe15dd73-c547-4329-aee1-fafecaba0c44 req-a8b03ede-db23-41b2-9bb2-87372d13bf44 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:00:34 np0005593232 nova_compute[250269]: 2026-01-23 10:00:34.962 250273 DEBUG oslo_concurrency.lockutils [req-fe15dd73-c547-4329-aee1-fafecaba0c44 req-a8b03ede-db23-41b2-9bb2-87372d13bf44 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:00:34 np0005593232 nova_compute[250269]: 2026-01-23 10:00:34.962 250273 DEBUG oslo_concurrency.lockutils [req-fe15dd73-c547-4329-aee1-fafecaba0c44 req-a8b03ede-db23-41b2-9bb2-87372d13bf44 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:00:34 np0005593232 nova_compute[250269]: 2026-01-23 10:00:34.963 250273 DEBUG nova.compute.manager [req-fe15dd73-c547-4329-aee1-fafecaba0c44 req-a8b03ede-db23-41b2-9bb2-87372d13bf44 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] No waiting events found dispatching network-vif-unplugged-35f84523-a0b5-4102-ba04-cc5da6075d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:00:34 np0005593232 nova_compute[250269]: 2026-01-23 10:00:34.963 250273 WARNING nova.compute.manager [req-fe15dd73-c547-4329-aee1-fafecaba0c44 req-a8b03ede-db23-41b2-9bb2-87372d13bf44 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received unexpected event network-vif-unplugged-35f84523-a0b5-4102-ba04-cc5da6075d54 for instance with vm_state stopped and task_state None.#033[00m
Jan 23 05:00:35 np0005593232 nova_compute[250269]: 2026-01-23 10:00:35.620 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:00:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:35.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:00:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:36.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:00:36 np0005593232 nova_compute[250269]: 2026-01-23 10:00:36.279 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2083: 321 pgs: 321 active+clean; 202 MiB data, 818 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 49 KiB/s wr, 151 op/s
Jan 23 05:00:37 np0005593232 nova_compute[250269]: 2026-01-23 10:00:37.116 250273 DEBUG nova.compute.manager [req-3114ba58-f4df-468c-8df5-d437c1010d11 req-77c8d580-7a74-483b-955a-72a3053d7469 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:00:37 np0005593232 nova_compute[250269]: 2026-01-23 10:00:37.116 250273 DEBUG oslo_concurrency.lockutils [req-3114ba58-f4df-468c-8df5-d437c1010d11 req-77c8d580-7a74-483b-955a-72a3053d7469 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:00:37 np0005593232 nova_compute[250269]: 2026-01-23 10:00:37.117 250273 DEBUG oslo_concurrency.lockutils [req-3114ba58-f4df-468c-8df5-d437c1010d11 req-77c8d580-7a74-483b-955a-72a3053d7469 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:00:37 np0005593232 nova_compute[250269]: 2026-01-23 10:00:37.117 250273 DEBUG oslo_concurrency.lockutils [req-3114ba58-f4df-468c-8df5-d437c1010d11 req-77c8d580-7a74-483b-955a-72a3053d7469 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:00:37 np0005593232 nova_compute[250269]: 2026-01-23 10:00:37.117 250273 DEBUG nova.compute.manager [req-3114ba58-f4df-468c-8df5-d437c1010d11 req-77c8d580-7a74-483b-955a-72a3053d7469 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] No waiting events found dispatching network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:00:37 np0005593232 nova_compute[250269]: 2026-01-23 10:00:37.117 250273 WARNING nova.compute.manager [req-3114ba58-f4df-468c-8df5-d437c1010d11 req-77c8d580-7a74-483b-955a-72a3053d7469 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received unexpected event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 for instance with vm_state stopped and task_state None.#033[00m
Jan 23 05:00:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:00:37
Jan 23 05:00:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:00:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:00:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'volumes', 'vms', 'images', 'default.rgw.meta', 'cephfs.cephfs.data']
Jan 23 05:00:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:00:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:00:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:00:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:00:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:00:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:00:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:00:37 np0005593232 nova_compute[250269]: 2026-01-23 10:00:37.880 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Updating instance_info_cache with network_info: [{"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:00:37 np0005593232 nova_compute[250269]: 2026-01-23 10:00:37.913 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:00:37 np0005593232 nova_compute[250269]: 2026-01-23 10:00:37.913 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:00:37 np0005593232 nova_compute[250269]: 2026-01-23 10:00:37.913 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:00:37 np0005593232 nova_compute[250269]: 2026-01-23 10:00:37.914 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:00:37 np0005593232 nova_compute[250269]: 2026-01-23 10:00:37.944 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:00:37 np0005593232 nova_compute[250269]: 2026-01-23 10:00:37.944 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:00:37 np0005593232 nova_compute[250269]: 2026-01-23 10:00:37.944 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:00:37 np0005593232 nova_compute[250269]: 2026-01-23 10:00:37.944 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:00:37 np0005593232 nova_compute[250269]: 2026-01-23 10:00:37.945 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:00:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:37.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:38 np0005593232 nova_compute[250269]: 2026-01-23 10:00:38.140 250273 DEBUG nova.objects.instance [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lazy-loading 'flavor' on Instance uuid 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:00:38 np0005593232 nova_compute[250269]: 2026-01-23 10:00:38.174 250273 DEBUG oslo_concurrency.lockutils [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Acquiring lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:00:38 np0005593232 nova_compute[250269]: 2026-01-23 10:00:38.174 250273 DEBUG oslo_concurrency.lockutils [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Acquired lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:00:38 np0005593232 nova_compute[250269]: 2026-01-23 10:00:38.175 250273 DEBUG nova.network.neutron [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:00:38 np0005593232 nova_compute[250269]: 2026-01-23 10:00:38.175 250273 DEBUG nova.objects.instance [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lazy-loading 'info_cache' on Instance uuid 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:00:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:00:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:00:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:00:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:00:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:00:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:00:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:00:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:00:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:00:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:00:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:38.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:00:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3714898217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:00:38 np0005593232 nova_compute[250269]: 2026-01-23 10:00:38.402 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:00:38 np0005593232 nova_compute[250269]: 2026-01-23 10:00:38.511 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:00:38 np0005593232 nova_compute[250269]: 2026-01-23 10:00:38.512 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:00:38 np0005593232 nova_compute[250269]: 2026-01-23 10:00:38.672 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:00:38 np0005593232 nova_compute[250269]: 2026-01-23 10:00:38.673 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4498MB free_disk=20.94247817993164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:00:38 np0005593232 nova_compute[250269]: 2026-01-23 10:00:38.674 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:00:38 np0005593232 nova_compute[250269]: 2026-01-23 10:00:38.674 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:00:38 np0005593232 nova_compute[250269]: 2026-01-23 10:00:38.804 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:00:38 np0005593232 nova_compute[250269]: 2026-01-23 10:00:38.804 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:00:38 np0005593232 nova_compute[250269]: 2026-01-23 10:00:38.805 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:00:38 np0005593232 nova_compute[250269]: 2026-01-23 10:00:38.858 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:00:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2084: 321 pgs: 321 active+clean; 202 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 49 KiB/s wr, 151 op/s
Jan 23 05:00:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:00:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2069558639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:00:39 np0005593232 nova_compute[250269]: 2026-01-23 10:00:39.336 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:00:39 np0005593232 nova_compute[250269]: 2026-01-23 10:00:39.343 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:00:39 np0005593232 nova_compute[250269]: 2026-01-23 10:00:39.372 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:00:39 np0005593232 nova_compute[250269]: 2026-01-23 10:00:39.406 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:00:39 np0005593232 nova_compute[250269]: 2026-01-23 10:00:39.407 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:00:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:39.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:40.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:40 np0005593232 podman[312318]: 2026-01-23 10:00:40.42163767 +0000 UTC m=+0.080827300 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 23 05:00:40 np0005593232 nova_compute[250269]: 2026-01-23 10:00:40.621 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:40 np0005593232 nova_compute[250269]: 2026-01-23 10:00:40.786 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:00:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2085: 321 pgs: 321 active+clean; 202 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 49 KiB/s wr, 151 op/s
Jan 23 05:00:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:00:41 np0005593232 nova_compute[250269]: 2026-01-23 10:00:41.281 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:41.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:42.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:42 np0005593232 nova_compute[250269]: 2026-01-23 10:00:42.553 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:42.612 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:00:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:42.613 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:00:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:42.613 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:00:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2086: 321 pgs: 321 active+clean; 202 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 49 KiB/s wr, 151 op/s
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.609 250273 DEBUG nova.network.neutron [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Updating instance_info_cache with network_info: [{"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.654 250273 DEBUG oslo_concurrency.lockutils [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Releasing lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.704 250273 INFO nova.virt.libvirt.driver [-] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Instance destroyed successfully.#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.704 250273 DEBUG nova.objects.instance [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lazy-loading 'numa_topology' on Instance uuid 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.728 250273 DEBUG nova.objects.instance [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lazy-loading 'resources' on Instance uuid 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.757 250273 DEBUG nova.virt.libvirt.vif [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:59:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1908507141',display_name='tempest-ServerActionsTestJSON-server-1908507141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1908507141',id=93,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0AiEKt9gHrsbueqjCG64VrzhP898xYsJXOd2/6uW3CZrw7c/2vnYXFOKeIp4qvJ25g/gz5/w2irrKH3R3Pyr6HiyEmMxGMtHTZ1L/l92xM4YiKXMLNL4VsFVwX3d+71g==',key_name='tempest-keypair-1055968095',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:59:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='74c5c1d0762242f29a5d26033efd9f6d',ramdisk_id='',reservation_id='r-ii3s65d1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1619235720',owner_user_name='tempest-ServerActionsTestJSON-1619235720-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:00:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9d4a5c201efa4992a9ef57d8abdc1675',uuid=1bdbf4d2-447b-47d0-8b3f-878ee65905a7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.757 250273 DEBUG nova.network.os_vif_util [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converting VIF {"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.758 250273 DEBUG nova.network.os_vif_util [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.758 250273 DEBUG os_vif [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.760 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.761 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35f84523-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.762 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.764 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.766 250273 INFO os_vif [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0')#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.772 250273 DEBUG nova.virt.libvirt.driver [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Start _get_guest_xml network_info=[{"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.776 250273 WARNING nova.virt.libvirt.driver [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.781 250273 DEBUG nova.virt.libvirt.host [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.781 250273 DEBUG nova.virt.libvirt.host [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.784 250273 DEBUG nova.virt.libvirt.host [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.785 250273 DEBUG nova.virt.libvirt.host [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.786 250273 DEBUG nova.virt.libvirt.driver [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.786 250273 DEBUG nova.virt.hardware [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.787 250273 DEBUG nova.virt.hardware [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.787 250273 DEBUG nova.virt.hardware [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.787 250273 DEBUG nova.virt.hardware [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.787 250273 DEBUG nova.virt.hardware [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.788 250273 DEBUG nova.virt.hardware [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.788 250273 DEBUG nova.virt.hardware [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.788 250273 DEBUG nova.virt.hardware [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.788 250273 DEBUG nova.virt.hardware [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.788 250273 DEBUG nova.virt.hardware [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.789 250273 DEBUG nova.virt.hardware [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.789 250273 DEBUG nova.objects.instance [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:00:43 np0005593232 nova_compute[250269]: 2026-01-23 10:00:43.826 250273 DEBUG oslo_concurrency.processutils [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:00:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:43.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:44.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:00:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1419660965' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:00:44 np0005593232 nova_compute[250269]: 2026-01-23 10:00:44.276 250273 DEBUG oslo_concurrency.processutils [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:00:44 np0005593232 nova_compute[250269]: 2026-01-23 10:00:44.324 250273 DEBUG oslo_concurrency.processutils [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:00:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:00:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/467068968' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:00:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:00:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/467068968' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:00:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:00:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1614728920' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:00:44 np0005593232 nova_compute[250269]: 2026-01-23 10:00:44.884 250273 DEBUG oslo_concurrency.processutils [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:00:44 np0005593232 nova_compute[250269]: 2026-01-23 10:00:44.886 250273 DEBUG nova.virt.libvirt.vif [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:59:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1908507141',display_name='tempest-ServerActionsTestJSON-server-1908507141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1908507141',id=93,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0AiEKt9gHrsbueqjCG64VrzhP898xYsJXOd2/6uW3CZrw7c/2vnYXFOKeIp4qvJ25g/gz5/w2irrKH3R3Pyr6HiyEmMxGMtHTZ1L/l92xM4YiKXMLNL4VsFVwX3d+71g==',key_name='tempest-keypair-1055968095',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:59:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='74c5c1d0762242f29a5d26033efd9f6d',ramdisk_id='',reservation_id='r-ii3s65d1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1619235720',owner_user_name='tempest-ServerActionsTestJSON-1619235720-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:00:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9d4a5c201efa4992a9ef57d8abdc1675',uuid=1bdbf4d2-447b-47d0-8b3f-878ee65905a7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:00:44 np0005593232 nova_compute[250269]: 2026-01-23 10:00:44.887 250273 DEBUG nova.network.os_vif_util [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converting VIF {"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:00:44 np0005593232 nova_compute[250269]: 2026-01-23 10:00:44.888 250273 DEBUG nova.network.os_vif_util [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:00:44 np0005593232 nova_compute[250269]: 2026-01-23 10:00:44.890 250273 DEBUG nova.objects.instance [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lazy-loading 'pci_devices' on Instance uuid 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:00:44 np0005593232 nova_compute[250269]: 2026-01-23 10:00:44.924 250273 DEBUG nova.virt.libvirt.driver [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:00:44 np0005593232 nova_compute[250269]:  <uuid>1bdbf4d2-447b-47d0-8b3f-878ee65905a7</uuid>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:  <name>instance-0000005d</name>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerActionsTestJSON-server-1908507141</nova:name>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:00:43</nova:creationTime>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:00:44 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:        <nova:user uuid="9d4a5c201efa4992a9ef57d8abdc1675">tempest-ServerActionsTestJSON-1619235720-project-member</nova:user>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:        <nova:project uuid="74c5c1d0762242f29a5d26033efd9f6d">tempest-ServerActionsTestJSON-1619235720</nova:project>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:        <nova:port uuid="35f84523-a0b5-4102-ba04-cc5da6075d54">
Jan 23 05:00:44 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <entry name="serial">1bdbf4d2-447b-47d0-8b3f-878ee65905a7</entry>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <entry name="uuid">1bdbf4d2-447b-47d0-8b3f-878ee65905a7</entry>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/1bdbf4d2-447b-47d0-8b3f-878ee65905a7_disk">
Jan 23 05:00:44 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:00:44 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/1bdbf4d2-447b-47d0-8b3f-878ee65905a7_disk.config">
Jan 23 05:00:44 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:00:44 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:b8:d6:dc"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <target dev="tap35f84523-a0"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/1bdbf4d2-447b-47d0-8b3f-878ee65905a7/console.log" append="off"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <input type="keyboard" bus="usb"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:00:44 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:00:44 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:00:44 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:00:44 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:00:44 np0005593232 nova_compute[250269]: 2026-01-23 10:00:44.926 250273 DEBUG nova.virt.libvirt.driver [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:00:44 np0005593232 nova_compute[250269]: 2026-01-23 10:00:44.927 250273 DEBUG nova.virt.libvirt.driver [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:00:44 np0005593232 nova_compute[250269]: 2026-01-23 10:00:44.928 250273 DEBUG nova.virt.libvirt.vif [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:59:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1908507141',display_name='tempest-ServerActionsTestJSON-server-1908507141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1908507141',id=93,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0AiEKt9gHrsbueqjCG64VrzhP898xYsJXOd2/6uW3CZrw7c/2vnYXFOKeIp4qvJ25g/gz5/w2irrKH3R3Pyr6HiyEmMxGMtHTZ1L/l92xM4YiKXMLNL4VsFVwX3d+71g==',key_name='tempest-keypair-1055968095',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:59:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='74c5c1d0762242f29a5d26033efd9f6d',ramdisk_id='',reservation_id='r-ii3s65d1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1619235720',owner_user_name='tempest-ServerActionsTestJSON-1619235720-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:00:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9d4a5c201efa4992a9ef57d8abdc1675',uuid=1bdbf4d2-447b-47d0-8b3f-878ee65905a7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:00:44 np0005593232 nova_compute[250269]: 2026-01-23 10:00:44.928 250273 DEBUG nova.network.os_vif_util [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converting VIF {"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:00:44 np0005593232 nova_compute[250269]: 2026-01-23 10:00:44.929 250273 DEBUG nova.network.os_vif_util [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:00:44 np0005593232 nova_compute[250269]: 2026-01-23 10:00:44.930 250273 DEBUG os_vif [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:00:44 np0005593232 nova_compute[250269]: 2026-01-23 10:00:44.930 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:44 np0005593232 nova_compute[250269]: 2026-01-23 10:00:44.931 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:00:44 np0005593232 nova_compute[250269]: 2026-01-23 10:00:44.931 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:00:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2087: 321 pgs: 321 active+clean; 202 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 8.5 KiB/s wr, 23 op/s
Jan 23 05:00:44 np0005593232 nova_compute[250269]: 2026-01-23 10:00:44.933 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:44 np0005593232 nova_compute[250269]: 2026-01-23 10:00:44.934 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35f84523-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:00:44 np0005593232 nova_compute[250269]: 2026-01-23 10:00:44.934 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap35f84523-a0, col_values=(('external_ids', {'iface-id': '35f84523-a0b5-4102-ba04-cc5da6075d54', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:d6:dc', 'vm-uuid': '1bdbf4d2-447b-47d0-8b3f-878ee65905a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:00:44 np0005593232 nova_compute[250269]: 2026-01-23 10:00:44.935 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:44 np0005593232 NetworkManager[49057]: <info>  [1769162444.9370] manager: (tap35f84523-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/158)
Jan 23 05:00:44 np0005593232 nova_compute[250269]: 2026-01-23 10:00:44.939 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:00:44 np0005593232 nova_compute[250269]: 2026-01-23 10:00:44.942 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:44 np0005593232 nova_compute[250269]: 2026-01-23 10:00:44.943 250273 INFO os_vif [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0')#033[00m
Jan 23 05:00:45 np0005593232 kernel: tap35f84523-a0: entered promiscuous mode
Jan 23 05:00:45 np0005593232 NetworkManager[49057]: <info>  [1769162445.2497] manager: (tap35f84523-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/159)
Jan 23 05:00:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:00:45Z|00313|binding|INFO|Claiming lport 35f84523-a0b5-4102-ba04-cc5da6075d54 for this chassis.
Jan 23 05:00:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:00:45Z|00314|binding|INFO|35f84523-a0b5-4102-ba04-cc5da6075d54: Claiming fa:16:3e:b8:d6:dc 10.100.0.5
Jan 23 05:00:45 np0005593232 nova_compute[250269]: 2026-01-23 10:00:45.283 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:45 np0005593232 nova_compute[250269]: 2026-01-23 10:00:45.285 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.292 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:d6:dc 10.100.0.5'], port_security=['fa:16:3e:b8:d6:dc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1bdbf4d2-447b-47d0-8b3f-878ee65905a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c5c1d0762242f29a5d26033efd9f6d', 'neutron:revision_number': '7', 'neutron:security_group_ids': '53abfec9-e9a4-4b72-b0e0-38bea0069f7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26eced02-0507-4a33-9943-52faf3fc8cd2, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=35f84523-a0b5-4102-ba04-cc5da6075d54) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.294 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 35f84523-a0b5-4102-ba04-cc5da6075d54 in datapath ee03d7c9-e107-41bf-95cc-5508578ad66c bound to our chassis#033[00m
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.295 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee03d7c9-e107-41bf-95cc-5508578ad66c#033[00m
Jan 23 05:00:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:00:45Z|00315|binding|INFO|Setting lport 35f84523-a0b5-4102-ba04-cc5da6075d54 ovn-installed in OVS
Jan 23 05:00:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:00:45Z|00316|binding|INFO|Setting lport 35f84523-a0b5-4102-ba04-cc5da6075d54 up in Southbound
Jan 23 05:00:45 np0005593232 nova_compute[250269]: 2026-01-23 10:00:45.299 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:45 np0005593232 nova_compute[250269]: 2026-01-23 10:00:45.304 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.306 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d2cc26a8-0420-4c18-a909-236c3b5e3f7d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.307 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapee03d7c9-e1 in ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.310 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapee03d7c9-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.310 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[cb46959c-6eb4-445d-89c8-c2367c3d1b90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.311 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8c82a998-a439-4ec5-a568-11a9dab2a52b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:45 np0005593232 systemd-machined[215836]: New machine qemu-38-instance-0000005d.
Jan 23 05:00:45 np0005593232 systemd-udevd[312465]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:00:45 np0005593232 systemd[1]: Started Virtual Machine qemu-38-instance-0000005d.
Jan 23 05:00:45 np0005593232 NetworkManager[49057]: <info>  [1769162445.3277] device (tap35f84523-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.326 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[ddd7029e-6254-48e5-b247-150239b29f3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:45 np0005593232 NetworkManager[49057]: <info>  [1769162445.3284] device (tap35f84523-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.339 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[916c3f0e-00e5-4339-9fd1-bfb1e9bd4083]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.377 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[d269a963-e90e-4232-8bc6-0a7ad50ac63d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.382 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d95d241f-72e4-4595-ab6f-ae02a4e1f771]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:45 np0005593232 NetworkManager[49057]: <info>  [1769162445.3837] manager: (tapee03d7c9-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/160)
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.418 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[15d672b7-052b-4dd4-9dc5-999f05a55f52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.421 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[f417237d-3a95-4582-a57c-11ffa08939c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:45 np0005593232 NetworkManager[49057]: <info>  [1769162445.4484] device (tapee03d7c9-e0): carrier: link connected
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.455 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[85597a10-89d0-411d-b3d6-ca04b087ef21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.474 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5a3bc981-9f44-479b-abda-ce11332630bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee03d7c9-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:65:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 635859, 'reachable_time': 25114, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312497, 'error': None, 'target': 'ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.488 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0e382d7b-d641-4bf5-ba87-83eaf2e1a23f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecd:6530'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 635859, 'tstamp': 635859}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312498, 'error': None, 'target': 'ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.507 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[df5391e0-1c11-43ed-8d65-78e72569fae6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee03d7c9-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:65:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 96], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 635859, 'reachable_time': 25114, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312499, 'error': None, 'target': 'ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.541 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7a4a3199-e8dc-4c0c-9519-054506fd728e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.602 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9d80c564-0151-49c6-b1f6-8375484a11dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.604 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee03d7c9-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.605 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.605 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee03d7c9-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:00:45 np0005593232 NetworkManager[49057]: <info>  [1769162445.6084] manager: (tapee03d7c9-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/161)
Jan 23 05:00:45 np0005593232 kernel: tapee03d7c9-e0: entered promiscuous mode
Jan 23 05:00:45 np0005593232 nova_compute[250269]: 2026-01-23 10:00:45.608 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.611 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee03d7c9-e0, col_values=(('external_ids', {'iface-id': '702d4523-a665-42f5-9a36-57d187c0698a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:00:45 np0005593232 nova_compute[250269]: 2026-01-23 10:00:45.612 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:00:45Z|00317|binding|INFO|Releasing lport 702d4523-a665-42f5-9a36-57d187c0698a from this chassis (sb_readonly=0)
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.614 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ee03d7c9-e107-41bf-95cc-5508578ad66c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ee03d7c9-e107-41bf-95cc-5508578ad66c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.615 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[025652db-549d-493a-b09d-98d1ef23afd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.616 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-ee03d7c9-e107-41bf-95cc-5508578ad66c
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/ee03d7c9-e107-41bf-95cc-5508578ad66c.pid.haproxy
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID ee03d7c9-e107-41bf-95cc-5508578ad66c
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:00:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:00:45.617 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'env', 'PROCESS_TAG=haproxy-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ee03d7c9-e107-41bf-95cc-5508578ad66c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:00:45 np0005593232 nova_compute[250269]: 2026-01-23 10:00:45.625 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:45.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:46 np0005593232 podman[312568]: 2026-01-23 10:00:45.95397811 +0000 UTC m=+0.020164105 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:00:46 np0005593232 nova_compute[250269]: 2026-01-23 10:00:46.050 250273 DEBUG nova.compute.manager [req-c0364a2a-fb46-41e7-8855-3a0e66ed26a2 req-a3465487-d255-4d57-80f8-c2dcf2d5e669 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:00:46 np0005593232 nova_compute[250269]: 2026-01-23 10:00:46.051 250273 DEBUG oslo_concurrency.lockutils [req-c0364a2a-fb46-41e7-8855-3a0e66ed26a2 req-a3465487-d255-4d57-80f8-c2dcf2d5e669 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:00:46 np0005593232 nova_compute[250269]: 2026-01-23 10:00:46.052 250273 DEBUG oslo_concurrency.lockutils [req-c0364a2a-fb46-41e7-8855-3a0e66ed26a2 req-a3465487-d255-4d57-80f8-c2dcf2d5e669 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:00:46 np0005593232 nova_compute[250269]: 2026-01-23 10:00:46.052 250273 DEBUG oslo_concurrency.lockutils [req-c0364a2a-fb46-41e7-8855-3a0e66ed26a2 req-a3465487-d255-4d57-80f8-c2dcf2d5e669 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:00:46 np0005593232 nova_compute[250269]: 2026-01-23 10:00:46.052 250273 DEBUG nova.compute.manager [req-c0364a2a-fb46-41e7-8855-3a0e66ed26a2 req-a3465487-d255-4d57-80f8-c2dcf2d5e669 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] No waiting events found dispatching network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:00:46 np0005593232 nova_compute[250269]: 2026-01-23 10:00:46.052 250273 WARNING nova.compute.manager [req-c0364a2a-fb46-41e7-8855-3a0e66ed26a2 req-a3465487-d255-4d57-80f8-c2dcf2d5e669 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received unexpected event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 for instance with vm_state stopped and task_state powering-on.#033[00m
Jan 23 05:00:46 np0005593232 nova_compute[250269]: 2026-01-23 10:00:46.055 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Removed pending event for 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 23 05:00:46 np0005593232 nova_compute[250269]: 2026-01-23 10:00:46.056 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162446.0548172, 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:00:46 np0005593232 nova_compute[250269]: 2026-01-23 10:00:46.056 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:00:46 np0005593232 nova_compute[250269]: 2026-01-23 10:00:46.059 250273 DEBUG nova.compute.manager [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:00:46 np0005593232 nova_compute[250269]: 2026-01-23 10:00:46.062 250273 INFO nova.virt.libvirt.driver [-] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Instance rebooted successfully.#033[00m
Jan 23 05:00:46 np0005593232 nova_compute[250269]: 2026-01-23 10:00:46.063 250273 DEBUG nova.compute.manager [None req-2e52c354-e0fa-4c72-abbc-e3d02aa00f31 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:00:46 np0005593232 nova_compute[250269]: 2026-01-23 10:00:46.116 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:00:46 np0005593232 nova_compute[250269]: 2026-01-23 10:00:46.120 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:00:46 np0005593232 nova_compute[250269]: 2026-01-23 10:00:46.201 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Jan 23 05:00:46 np0005593232 nova_compute[250269]: 2026-01-23 10:00:46.202 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162446.0549955, 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:00:46 np0005593232 nova_compute[250269]: 2026-01-23 10:00:46.202 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] VM Started (Lifecycle Event)#033[00m
Jan 23 05:00:46 np0005593232 nova_compute[250269]: 2026-01-23 10:00:46.247 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:00:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:00:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:46.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:00:46 np0005593232 nova_compute[250269]: 2026-01-23 10:00:46.254 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:00:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:00:46 np0005593232 podman[312568]: 2026-01-23 10:00:46.687781961 +0000 UTC m=+0.753967936 container create 828e10654f856db651944ff332237e7bcbfd481f1f948ecab16dd543071d49b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 23 05:00:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:00:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:00:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:00:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:00:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002175591499162479 of space, bias 1.0, pg target 0.6526774497487436 quantized to 32 (current 32)
Jan 23 05:00:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:00:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002168684859482598 of space, bias 1.0, pg target 0.6506054578447794 quantized to 32 (current 32)
Jan 23 05:00:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:00:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:00:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:00:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 05:00:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:00:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 05:00:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:00:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:00:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:00:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 05:00:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:00:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 05:00:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:00:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:00:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:00:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 05:00:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2088: 321 pgs: 321 active+clean; 202 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 8.5 KiB/s wr, 23 op/s
Jan 23 05:00:47 np0005593232 systemd[1]: Started libpod-conmon-828e10654f856db651944ff332237e7bcbfd481f1f948ecab16dd543071d49b9.scope.
Jan 23 05:00:47 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:00:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d45bb9947914e7115fd389f7acab88d2cc5507872efe5cec29541ffa552309/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:00:47 np0005593232 podman[312568]: 2026-01-23 10:00:47.323707447 +0000 UTC m=+1.389893432 container init 828e10654f856db651944ff332237e7bcbfd481f1f948ecab16dd543071d49b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 05:00:47 np0005593232 podman[312568]: 2026-01-23 10:00:47.328929356 +0000 UTC m=+1.395115331 container start 828e10654f856db651944ff332237e7bcbfd481f1f948ecab16dd543071d49b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 05:00:47 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[312590]: [NOTICE]   (312594) : New worker (312596) forked
Jan 23 05:00:47 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[312590]: [NOTICE]   (312594) : Loading success.
Jan 23 05:00:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:00:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:47.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:00:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:48.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2089: 321 pgs: 321 active+clean; 202 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 85 B/s wr, 66 op/s
Jan 23 05:00:49 np0005593232 nova_compute[250269]: 2026-01-23 10:00:49.938 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:49.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:50.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:50 np0005593232 nova_compute[250269]: 2026-01-23 10:00:50.628 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2090: 321 pgs: 321 active+clean; 202 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 85 B/s wr, 65 op/s
Jan 23 05:00:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:00:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:51.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:00:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:52.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:00:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2091: 321 pgs: 321 active+clean; 202 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 83 op/s
Jan 23 05:00:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:00:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:53.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:00:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:54.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2092: 321 pgs: 321 active+clean; 202 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 83 op/s
Jan 23 05:00:54 np0005593232 nova_compute[250269]: 2026-01-23 10:00:54.944 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:55 np0005593232 nova_compute[250269]: 2026-01-23 10:00:55.267 250273 DEBUG nova.compute.manager [req-c5a550d0-e5ba-4f5e-a508-9108757be8ff req-bc8ac4a0-62c8-4adc-ace0-7313dd863d4c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:00:55 np0005593232 nova_compute[250269]: 2026-01-23 10:00:55.268 250273 DEBUG oslo_concurrency.lockutils [req-c5a550d0-e5ba-4f5e-a508-9108757be8ff req-bc8ac4a0-62c8-4adc-ace0-7313dd863d4c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:00:55 np0005593232 nova_compute[250269]: 2026-01-23 10:00:55.268 250273 DEBUG oslo_concurrency.lockutils [req-c5a550d0-e5ba-4f5e-a508-9108757be8ff req-bc8ac4a0-62c8-4adc-ace0-7313dd863d4c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:00:55 np0005593232 nova_compute[250269]: 2026-01-23 10:00:55.268 250273 DEBUG oslo_concurrency.lockutils [req-c5a550d0-e5ba-4f5e-a508-9108757be8ff req-bc8ac4a0-62c8-4adc-ace0-7313dd863d4c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:00:55 np0005593232 nova_compute[250269]: 2026-01-23 10:00:55.269 250273 DEBUG nova.compute.manager [req-c5a550d0-e5ba-4f5e-a508-9108757be8ff req-bc8ac4a0-62c8-4adc-ace0-7313dd863d4c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] No waiting events found dispatching network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:00:55 np0005593232 nova_compute[250269]: 2026-01-23 10:00:55.269 250273 WARNING nova.compute.manager [req-c5a550d0-e5ba-4f5e-a508-9108757be8ff req-bc8ac4a0-62c8-4adc-ace0-7313dd863d4c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received unexpected event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:00:55 np0005593232 nova_compute[250269]: 2026-01-23 10:00:55.629 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:00:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:55.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:00:56 np0005593232 nova_compute[250269]: 2026-01-23 10:00:56.022 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:00:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:56.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:00:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:00:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2093: 321 pgs: 321 active+clean; 202 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 83 op/s
Jan 23 05:00:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:57.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:00:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:00:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:00:58.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:00:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2094: 321 pgs: 321 active+clean; 202 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 85 op/s
Jan 23 05:00:59 np0005593232 ovn_controller[151001]: 2026-01-23T10:00:59Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:d6:dc 10.100.0.5
Jan 23 05:00:59 np0005593232 nova_compute[250269]: 2026-01-23 10:00:59.982 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:00:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:00:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:00:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:00:59.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:00.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:00 np0005593232 nova_compute[250269]: 2026-01-23 10:01:00.631 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:00 np0005593232 nova_compute[250269]: 2026-01-23 10:01:00.935 250273 INFO nova.compute.manager [None req-4cd1e695-aed4-458f-b878-edf396814c8e 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Pausing#033[00m
Jan 23 05:01:00 np0005593232 nova_compute[250269]: 2026-01-23 10:01:00.936 250273 DEBUG nova.objects.instance [None req-4cd1e695-aed4-458f-b878-edf396814c8e 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lazy-loading 'flavor' on Instance uuid 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:01:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2095: 321 pgs: 321 active+clean; 202 MiB data, 803 MiB used, 20 GiB / 21 GiB avail; 367 KiB/s rd, 255 B/s wr, 19 op/s
Jan 23 05:01:01 np0005593232 nova_compute[250269]: 2026-01-23 10:01:01.110 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162461.1100483, 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:01:01 np0005593232 nova_compute[250269]: 2026-01-23 10:01:01.110 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:01:01 np0005593232 nova_compute[250269]: 2026-01-23 10:01:01.113 250273 DEBUG nova.compute.manager [None req-4cd1e695-aed4-458f-b878-edf396814c8e 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:01:01 np0005593232 nova_compute[250269]: 2026-01-23 10:01:01.168 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:01:01 np0005593232 nova_compute[250269]: 2026-01-23 10:01:01.173 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:01:01 np0005593232 nova_compute[250269]: 2026-01-23 10:01:01.226 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Jan 23 05:01:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:01:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:01:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:01.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:01:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:02.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2096: 321 pgs: 321 active+clean; 123 MiB data, 777 MiB used, 20 GiB / 21 GiB avail; 917 KiB/s rd, 11 KiB/s wr, 93 op/s
Jan 23 05:01:03 np0005593232 nova_compute[250269]: 2026-01-23 10:01:03.901 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:03.901 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:01:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:03.902 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:01:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:01:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:03.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:01:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:04.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:04 np0005593232 nova_compute[250269]: 2026-01-23 10:01:04.676 250273 INFO nova.compute.manager [None req-4db0f220-d0b7-4507-b2d1-78b79ecd287b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Unpausing#033[00m
Jan 23 05:01:04 np0005593232 nova_compute[250269]: 2026-01-23 10:01:04.677 250273 DEBUG nova.objects.instance [None req-4db0f220-d0b7-4507-b2d1-78b79ecd287b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lazy-loading 'flavor' on Instance uuid 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:01:04 np0005593232 nova_compute[250269]: 2026-01-23 10:01:04.751 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162464.7514865, 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:01:04 np0005593232 nova_compute[250269]: 2026-01-23 10:01:04.752 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:01:04 np0005593232 virtqemud[249592]: argument unsupported: QEMU guest agent is not configured
Jan 23 05:01:04 np0005593232 nova_compute[250269]: 2026-01-23 10:01:04.757 250273 DEBUG nova.virt.libvirt.guest [None req-4db0f220-d0b7-4507-b2d1-78b79ecd287b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 23 05:01:04 np0005593232 nova_compute[250269]: 2026-01-23 10:01:04.757 250273 DEBUG nova.compute.manager [None req-4db0f220-d0b7-4507-b2d1-78b79ecd287b 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:01:04 np0005593232 nova_compute[250269]: 2026-01-23 10:01:04.789 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:01:04 np0005593232 nova_compute[250269]: 2026-01-23 10:01:04.792 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:01:04 np0005593232 nova_compute[250269]: 2026-01-23 10:01:04.828 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Jan 23 05:01:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2097: 321 pgs: 321 active+clean; 123 MiB data, 777 MiB used, 20 GiB / 21 GiB avail; 554 KiB/s rd, 11 KiB/s wr, 75 op/s
Jan 23 05:01:04 np0005593232 nova_compute[250269]: 2026-01-23 10:01:04.985 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:05 np0005593232 podman[312675]: 2026-01-23 10:01:05.451108988 +0000 UTC m=+0.100844019 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 23 05:01:05 np0005593232 nova_compute[250269]: 2026-01-23 10:01:05.634 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:05.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:06.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:01:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2098: 321 pgs: 321 active+clean; 123 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 554 KiB/s rd, 11 KiB/s wr, 76 op/s
Jan 23 05:01:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:01:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:01:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:01:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:01:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:01:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:01:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:07.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:08.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2099: 321 pgs: 321 active+clean; 123 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 554 KiB/s rd, 21 KiB/s wr, 76 op/s
Jan 23 05:01:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:01:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 05:01:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:01:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:01:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:01:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 05:01:09 np0005593232 nova_compute[250269]: 2026-01-23 10:01:09.988 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:01:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:10.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:01:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:01:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:01:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 05:01:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:01:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 05:01:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:01:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:01:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:10.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:01:10 np0005593232 nova_compute[250269]: 2026-01-23 10:01:10.636 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:10 np0005593232 ovn_controller[151001]: 2026-01-23T10:01:10Z|00318|binding|INFO|Releasing lport 702d4523-a665-42f5-9a36-57d187c0698a from this chassis (sb_readonly=0)
Jan 23 05:01:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:01:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:01:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:01:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:01:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:01:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:01:10 np0005593232 nova_compute[250269]: 2026-01-23 10:01:10.720 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:01:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:01:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:01:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:01:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:01:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:01:10 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 19e736cd-7961-41d4-8fc7-b6394e5d035c does not exist
Jan 23 05:01:10 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 828a0df5-02d4-47ce-9aeb-a3538d602d7d does not exist
Jan 23 05:01:10 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev f0567402-ed57-4df1-8a23-470186071d7d does not exist
Jan 23 05:01:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:01:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:01:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:01:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:01:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:01:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:01:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2100: 321 pgs: 321 active+clean; 123 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 551 KiB/s rd, 21 KiB/s wr, 75 op/s
Jan 23 05:01:11 np0005593232 podman[312977]: 2026-01-23 10:01:11.020884364 +0000 UTC m=+0.052235056 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 05:01:11 np0005593232 podman[313114]: 2026-01-23 10:01:11.47780129 +0000 UTC m=+0.050888988 container create bda2f185b715b49ab4098e1a74fda292c027840aa418e6cdfd6b460d61234e87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bassi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:01:11 np0005593232 systemd[1]: Started libpod-conmon-bda2f185b715b49ab4098e1a74fda292c027840aa418e6cdfd6b460d61234e87.scope.
Jan 23 05:01:11 np0005593232 podman[313114]: 2026-01-23 10:01:11.454549389 +0000 UTC m=+0.027637117 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:01:11 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:01:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:01:11 np0005593232 podman[313114]: 2026-01-23 10:01:11.571715381 +0000 UTC m=+0.144803099 container init bda2f185b715b49ab4098e1a74fda292c027840aa418e6cdfd6b460d61234e87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bassi, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:01:11 np0005593232 podman[313114]: 2026-01-23 10:01:11.579621806 +0000 UTC m=+0.152709504 container start bda2f185b715b49ab4098e1a74fda292c027840aa418e6cdfd6b460d61234e87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bassi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 05:01:11 np0005593232 podman[313114]: 2026-01-23 10:01:11.583111435 +0000 UTC m=+0.156199133 container attach bda2f185b715b49ab4098e1a74fda292c027840aa418e6cdfd6b460d61234e87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bassi, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:01:11 np0005593232 cranky_bassi[313131]: 167 167
Jan 23 05:01:11 np0005593232 systemd[1]: libpod-bda2f185b715b49ab4098e1a74fda292c027840aa418e6cdfd6b460d61234e87.scope: Deactivated successfully.
Jan 23 05:01:11 np0005593232 conmon[313131]: conmon bda2f185b715b49ab409 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bda2f185b715b49ab4098e1a74fda292c027840aa418e6cdfd6b460d61234e87.scope/container/memory.events
Jan 23 05:01:11 np0005593232 podman[313114]: 2026-01-23 10:01:11.588155199 +0000 UTC m=+0.161242897 container died bda2f185b715b49ab4098e1a74fda292c027840aa418e6cdfd6b460d61234e87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bassi, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:01:11 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1e593627fd3db936c869d12eb9b97da91aa4836b0ca6e94ca5bc9851c9818bf2-merged.mount: Deactivated successfully.
Jan 23 05:01:11 np0005593232 podman[313114]: 2026-01-23 10:01:11.632213572 +0000 UTC m=+0.205301270 container remove bda2f185b715b49ab4098e1a74fda292c027840aa418e6cdfd6b460d61234e87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_bassi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:01:11 np0005593232 systemd[1]: libpod-conmon-bda2f185b715b49ab4098e1a74fda292c027840aa418e6cdfd6b460d61234e87.scope: Deactivated successfully.
Jan 23 05:01:11 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:01:11 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:01:11 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:01:11 np0005593232 podman[313154]: 2026-01-23 10:01:11.793966753 +0000 UTC m=+0.030147299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:01:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:11.905 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:01:11 np0005593232 podman[313154]: 2026-01-23 10:01:11.94434608 +0000 UTC m=+0.180526606 container create 0850c548b37145a97002ac6ef702bc6a6530195b853f16ea75c333121e6c71cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:01:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:12.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:12 np0005593232 systemd[1]: Started libpod-conmon-0850c548b37145a97002ac6ef702bc6a6530195b853f16ea75c333121e6c71cc.scope.
Jan 23 05:01:12 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:01:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd893c3249c801ce75af7c858584080b91fb0300a22ee244ec1177ea019499a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:01:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd893c3249c801ce75af7c858584080b91fb0300a22ee244ec1177ea019499a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:01:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd893c3249c801ce75af7c858584080b91fb0300a22ee244ec1177ea019499a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:01:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd893c3249c801ce75af7c858584080b91fb0300a22ee244ec1177ea019499a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:01:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd893c3249c801ce75af7c858584080b91fb0300a22ee244ec1177ea019499a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:01:12 np0005593232 podman[313154]: 2026-01-23 10:01:12.292708647 +0000 UTC m=+0.528889183 container init 0850c548b37145a97002ac6ef702bc6a6530195b853f16ea75c333121e6c71cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 05:01:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:12.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:12 np0005593232 podman[313154]: 2026-01-23 10:01:12.3005578 +0000 UTC m=+0.536738326 container start 0850c548b37145a97002ac6ef702bc6a6530195b853f16ea75c333121e6c71cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lamarr, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:01:12 np0005593232 podman[313154]: 2026-01-23 10:01:12.307827567 +0000 UTC m=+0.544008123 container attach 0850c548b37145a97002ac6ef702bc6a6530195b853f16ea75c333121e6c71cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lamarr, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:01:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2101: 321 pgs: 321 active+clean; 123 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 551 KiB/s rd, 21 KiB/s wr, 75 op/s
Jan 23 05:01:13 np0005593232 ovn_controller[151001]: 2026-01-23T10:01:13Z|00319|binding|INFO|Releasing lport 702d4523-a665-42f5-9a36-57d187c0698a from this chassis (sb_readonly=0)
Jan 23 05:01:13 np0005593232 stupefied_lamarr[313172]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:01:13 np0005593232 stupefied_lamarr[313172]: --> relative data size: 1.0
Jan 23 05:01:13 np0005593232 stupefied_lamarr[313172]: --> All data devices are unavailable
Jan 23 05:01:13 np0005593232 nova_compute[250269]: 2026-01-23 10:01:13.178 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:13 np0005593232 systemd[1]: libpod-0850c548b37145a97002ac6ef702bc6a6530195b853f16ea75c333121e6c71cc.scope: Deactivated successfully.
Jan 23 05:01:13 np0005593232 podman[313154]: 2026-01-23 10:01:13.190123012 +0000 UTC m=+1.426303538 container died 0850c548b37145a97002ac6ef702bc6a6530195b853f16ea75c333121e6c71cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 23 05:01:13 np0005593232 systemd[1]: var-lib-containers-storage-overlay-2fd893c3249c801ce75af7c858584080b91fb0300a22ee244ec1177ea019499a-merged.mount: Deactivated successfully.
Jan 23 05:01:13 np0005593232 podman[313154]: 2026-01-23 10:01:13.25334267 +0000 UTC m=+1.489523196 container remove 0850c548b37145a97002ac6ef702bc6a6530195b853f16ea75c333121e6c71cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lamarr, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 23 05:01:13 np0005593232 systemd[1]: libpod-conmon-0850c548b37145a97002ac6ef702bc6a6530195b853f16ea75c333121e6c71cc.scope: Deactivated successfully.
Jan 23 05:01:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:01:13 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3256569519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:01:13 np0005593232 podman[313338]: 2026-01-23 10:01:13.918321964 +0000 UTC m=+0.046474473 container create 188384d221cfbe956950415502b3a5a2b09cad52633c9d67dd527f7c18f485e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wilbur, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Jan 23 05:01:13 np0005593232 systemd[1]: Started libpod-conmon-188384d221cfbe956950415502b3a5a2b09cad52633c9d67dd527f7c18f485e2.scope.
Jan 23 05:01:13 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:01:13 np0005593232 podman[313338]: 2026-01-23 10:01:13.897751518 +0000 UTC m=+0.025904057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:01:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:14.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:14 np0005593232 podman[313338]: 2026-01-23 10:01:14.069789852 +0000 UTC m=+0.197942391 container init 188384d221cfbe956950415502b3a5a2b09cad52633c9d67dd527f7c18f485e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wilbur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 05:01:14 np0005593232 podman[313338]: 2026-01-23 10:01:14.082175744 +0000 UTC m=+0.210328253 container start 188384d221cfbe956950415502b3a5a2b09cad52633c9d67dd527f7c18f485e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wilbur, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 05:01:14 np0005593232 frosty_wilbur[313354]: 167 167
Jan 23 05:01:14 np0005593232 systemd[1]: libpod-188384d221cfbe956950415502b3a5a2b09cad52633c9d67dd527f7c18f485e2.scope: Deactivated successfully.
Jan 23 05:01:14 np0005593232 podman[313338]: 2026-01-23 10:01:14.088504404 +0000 UTC m=+0.216656913 container attach 188384d221cfbe956950415502b3a5a2b09cad52633c9d67dd527f7c18f485e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wilbur, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 23 05:01:14 np0005593232 podman[313338]: 2026-01-23 10:01:14.089719618 +0000 UTC m=+0.217872127 container died 188384d221cfbe956950415502b3a5a2b09cad52633c9d67dd527f7c18f485e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wilbur, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 05:01:14 np0005593232 systemd[1]: var-lib-containers-storage-overlay-88f1cc8ffaa1d0aa996e64505db3761df0125e1769755caed1acf94214300cd7-merged.mount: Deactivated successfully.
Jan 23 05:01:14 np0005593232 podman[313338]: 2026-01-23 10:01:14.170106595 +0000 UTC m=+0.298259114 container remove 188384d221cfbe956950415502b3a5a2b09cad52633c9d67dd527f7c18f485e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_wilbur, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 05:01:14 np0005593232 systemd[1]: libpod-conmon-188384d221cfbe956950415502b3a5a2b09cad52633c9d67dd527f7c18f485e2.scope: Deactivated successfully.
Jan 23 05:01:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:14.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:14 np0005593232 podman[313377]: 2026-01-23 10:01:14.354300794 +0000 UTC m=+0.042295164 container create 66517a790925f2536e06aaac30447192753423832aed7eda8ee0a876d51d862b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 05:01:14 np0005593232 systemd[1]: Started libpod-conmon-66517a790925f2536e06aaac30447192753423832aed7eda8ee0a876d51d862b.scope.
Jan 23 05:01:14 np0005593232 podman[313377]: 2026-01-23 10:01:14.335020175 +0000 UTC m=+0.023014585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:01:14 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:01:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f493c26b10ff407486db9d9598f713dfc56c6e248baf690cef99e290d1b914ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:01:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f493c26b10ff407486db9d9598f713dfc56c6e248baf690cef99e290d1b914ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:01:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f493c26b10ff407486db9d9598f713dfc56c6e248baf690cef99e290d1b914ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:01:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f493c26b10ff407486db9d9598f713dfc56c6e248baf690cef99e290d1b914ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:01:14 np0005593232 podman[313377]: 2026-01-23 10:01:14.454165584 +0000 UTC m=+0.142160004 container init 66517a790925f2536e06aaac30447192753423832aed7eda8ee0a876d51d862b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 23 05:01:14 np0005593232 podman[313377]: 2026-01-23 10:01:14.461783401 +0000 UTC m=+0.149777771 container start 66517a790925f2536e06aaac30447192753423832aed7eda8ee0a876d51d862b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_grothendieck, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 05:01:14 np0005593232 podman[313377]: 2026-01-23 10:01:14.465424964 +0000 UTC m=+0.153419334 container attach 66517a790925f2536e06aaac30447192753423832aed7eda8ee0a876d51d862b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_grothendieck, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 23 05:01:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2102: 321 pgs: 321 active+clean; 123 MiB data, 760 MiB used, 20 GiB / 21 GiB avail; 85 B/s rd, 10 KiB/s wr, 0 op/s
Jan 23 05:01:15 np0005593232 nova_compute[250269]: 2026-01-23 10:01:15.046 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]: {
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:    "0": [
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:        {
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:            "devices": [
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:                "/dev/loop3"
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:            ],
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:            "lv_name": "ceph_lv0",
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:            "lv_size": "7511998464",
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:            "name": "ceph_lv0",
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:            "tags": {
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:                "ceph.cluster_name": "ceph",
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:                "ceph.crush_device_class": "",
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:                "ceph.encrypted": "0",
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:                "ceph.osd_id": "0",
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:                "ceph.type": "block",
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:                "ceph.vdo": "0"
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:            },
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:            "type": "block",
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:            "vg_name": "ceph_vg0"
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:        }
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]:    ]
Jan 23 05:01:15 np0005593232 intelligent_grothendieck[313394]: }
Jan 23 05:01:15 np0005593232 systemd[1]: libpod-66517a790925f2536e06aaac30447192753423832aed7eda8ee0a876d51d862b.scope: Deactivated successfully.
Jan 23 05:01:15 np0005593232 podman[313377]: 2026-01-23 10:01:15.35761673 +0000 UTC m=+1.045611100 container died 66517a790925f2536e06aaac30447192753423832aed7eda8ee0a876d51d862b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_grothendieck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 05:01:15 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f493c26b10ff407486db9d9598f713dfc56c6e248baf690cef99e290d1b914ed-merged.mount: Deactivated successfully.
Jan 23 05:01:15 np0005593232 podman[313377]: 2026-01-23 10:01:15.420823248 +0000 UTC m=+1.108817618 container remove 66517a790925f2536e06aaac30447192753423832aed7eda8ee0a876d51d862b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_grothendieck, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 05:01:15 np0005593232 systemd[1]: libpod-conmon-66517a790925f2536e06aaac30447192753423832aed7eda8ee0a876d51d862b.scope: Deactivated successfully.
Jan 23 05:01:15 np0005593232 nova_compute[250269]: 2026-01-23 10:01:15.638 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:01:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:16.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:01:16 np0005593232 podman[313556]: 2026-01-23 10:01:15.987648249 +0000 UTC m=+0.020992857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:01:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:16.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:16 np0005593232 podman[313556]: 2026-01-23 10:01:16.311107899 +0000 UTC m=+0.344452487 container create d8478b3838d1ffa114f17d87f3fdecdcb838018d5f022d3e8f468ef90603c2b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:01:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:01:16 np0005593232 systemd[1]: Started libpod-conmon-d8478b3838d1ffa114f17d87f3fdecdcb838018d5f022d3e8f468ef90603c2b8.scope.
Jan 23 05:01:16 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 05:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 29K writes, 112K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s#012Cumulative WAL: 29K writes, 10K syncs, 2.88 writes per sync, written: 0.10 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4536 writes, 16K keys, 4536 commit groups, 1.0 writes per commit group, ingest: 16.92 MB, 0.03 MB/s#012Interval WAL: 4536 writes, 1784 syncs, 2.54 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 05:01:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2103: 321 pgs: 321 active+clean; 158 MiB data, 779 MiB used, 20 GiB / 21 GiB avail; 7.1 KiB/s rd, 1.6 MiB/s wr, 13 op/s
Jan 23 05:01:16 np0005593232 podman[313556]: 2026-01-23 10:01:16.985094159 +0000 UTC m=+1.018438767 container init d8478b3838d1ffa114f17d87f3fdecdcb838018d5f022d3e8f468ef90603c2b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 23 05:01:16 np0005593232 podman[313556]: 2026-01-23 10:01:16.992468759 +0000 UTC m=+1.025813347 container start d8478b3838d1ffa114f17d87f3fdecdcb838018d5f022d3e8f468ef90603c2b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:01:16 np0005593232 xenodochial_shtern[313573]: 167 167
Jan 23 05:01:16 np0005593232 systemd[1]: libpod-d8478b3838d1ffa114f17d87f3fdecdcb838018d5f022d3e8f468ef90603c2b8.scope: Deactivated successfully.
Jan 23 05:01:17 np0005593232 podman[313556]: 2026-01-23 10:01:17.22983331 +0000 UTC m=+1.263177898 container attach d8478b3838d1ffa114f17d87f3fdecdcb838018d5f022d3e8f468ef90603c2b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 05:01:17 np0005593232 podman[313556]: 2026-01-23 10:01:17.230616592 +0000 UTC m=+1.263961180 container died d8478b3838d1ffa114f17d87f3fdecdcb838018d5f022d3e8f468ef90603c2b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Jan 23 05:01:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay-bf181dc11967ff823ba77e32992e23690067a860db7cc7f9e6ba6996ca526cc3-merged.mount: Deactivated successfully.
Jan 23 05:01:17 np0005593232 podman[313556]: 2026-01-23 10:01:17.720157566 +0000 UTC m=+1.753502154 container remove d8478b3838d1ffa114f17d87f3fdecdcb838018d5f022d3e8f468ef90603c2b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_shtern, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 05:01:17 np0005593232 systemd[1]: libpod-conmon-d8478b3838d1ffa114f17d87f3fdecdcb838018d5f022d3e8f468ef90603c2b8.scope: Deactivated successfully.
Jan 23 05:01:17 np0005593232 podman[313598]: 2026-01-23 10:01:17.908578945 +0000 UTC m=+0.054756298 container create f9755118a20dfeb43223dc2ef7897b83880cf2750ff17efc78a2877c0ff2a877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 05:01:17 np0005593232 systemd[1]: Started libpod-conmon-f9755118a20dfeb43223dc2ef7897b83880cf2750ff17efc78a2877c0ff2a877.scope.
Jan 23 05:01:17 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:01:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b362aae89b840f7f7bda8e5df553e12d773e286bc99a2388026a020750a936e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:01:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b362aae89b840f7f7bda8e5df553e12d773e286bc99a2388026a020750a936e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:01:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b362aae89b840f7f7bda8e5df553e12d773e286bc99a2388026a020750a936e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:01:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b362aae89b840f7f7bda8e5df553e12d773e286bc99a2388026a020750a936e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:01:17 np0005593232 podman[313598]: 2026-01-23 10:01:17.881419773 +0000 UTC m=+0.027597156 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:01:17 np0005593232 podman[313598]: 2026-01-23 10:01:17.985470152 +0000 UTC m=+0.131647535 container init f9755118a20dfeb43223dc2ef7897b83880cf2750ff17efc78a2877c0ff2a877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hofstadter, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:01:17 np0005593232 podman[313598]: 2026-01-23 10:01:17.992264555 +0000 UTC m=+0.138441908 container start f9755118a20dfeb43223dc2ef7897b83880cf2750ff17efc78a2877c0ff2a877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hofstadter, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:01:17 np0005593232 podman[313598]: 2026-01-23 10:01:17.99770667 +0000 UTC m=+0.143884023 container attach f9755118a20dfeb43223dc2ef7897b83880cf2750ff17efc78a2877c0ff2a877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hofstadter, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 23 05:01:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:18.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:18.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2104: 321 pgs: 321 active+clean; 169 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:01:18 np0005593232 romantic_hofstadter[313614]: {
Jan 23 05:01:18 np0005593232 romantic_hofstadter[313614]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:01:18 np0005593232 romantic_hofstadter[313614]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:01:18 np0005593232 romantic_hofstadter[313614]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:01:18 np0005593232 romantic_hofstadter[313614]:        "osd_id": 0,
Jan 23 05:01:18 np0005593232 romantic_hofstadter[313614]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:01:18 np0005593232 romantic_hofstadter[313614]:        "type": "bluestore"
Jan 23 05:01:18 np0005593232 romantic_hofstadter[313614]:    }
Jan 23 05:01:18 np0005593232 romantic_hofstadter[313614]: }
Jan 23 05:01:19 np0005593232 systemd[1]: libpod-f9755118a20dfeb43223dc2ef7897b83880cf2750ff17efc78a2877c0ff2a877.scope: Deactivated successfully.
Jan 23 05:01:19 np0005593232 podman[313598]: 2026-01-23 10:01:19.000674337 +0000 UTC m=+1.146851690 container died f9755118a20dfeb43223dc2ef7897b83880cf2750ff17efc78a2877c0ff2a877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hofstadter, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:01:19 np0005593232 systemd[1]: libpod-f9755118a20dfeb43223dc2ef7897b83880cf2750ff17efc78a2877c0ff2a877.scope: Consumed 1.001s CPU time.
Jan 23 05:01:19 np0005593232 systemd[1]: var-lib-containers-storage-overlay-0b362aae89b840f7f7bda8e5df553e12d773e286bc99a2388026a020750a936e-merged.mount: Deactivated successfully.
Jan 23 05:01:19 np0005593232 podman[313598]: 2026-01-23 10:01:19.337909078 +0000 UTC m=+1.484086421 container remove f9755118a20dfeb43223dc2ef7897b83880cf2750ff17efc78a2877c0ff2a877 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:01:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:01:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:01:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:01:19 np0005593232 systemd[1]: libpod-conmon-f9755118a20dfeb43223dc2ef7897b83880cf2750ff17efc78a2877c0ff2a877.scope: Deactivated successfully.
Jan 23 05:01:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:01:19 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 404d451d-9df3-4ea4-9c3d-8d459aee7e7e does not exist
Jan 23 05:01:19 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1744e0d5-ac93-4c59-8d20-5889b0881ff8 does not exist
Jan 23 05:01:19 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev e16da978-f019-4a02-b8d5-071d71bbe871 does not exist
Jan 23 05:01:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:20.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:20 np0005593232 nova_compute[250269]: 2026-01-23 10:01:20.051 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:20.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:20 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:01:20 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:01:20 np0005593232 nova_compute[250269]: 2026-01-23 10:01:20.641 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2105: 321 pgs: 321 active+clean; 169 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:01:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:01:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:22.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:22.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:22 np0005593232 nova_compute[250269]: 2026-01-23 10:01:22.497 250273 DEBUG oslo_concurrency.lockutils [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:01:22 np0005593232 nova_compute[250269]: 2026-01-23 10:01:22.498 250273 DEBUG oslo_concurrency.lockutils [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:01:22 np0005593232 nova_compute[250269]: 2026-01-23 10:01:22.498 250273 INFO nova.compute.manager [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Rebooting instance#033[00m
Jan 23 05:01:22 np0005593232 nova_compute[250269]: 2026-01-23 10:01:22.545 250273 DEBUG oslo_concurrency.lockutils [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Acquiring lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:01:22 np0005593232 nova_compute[250269]: 2026-01-23 10:01:22.546 250273 DEBUG oslo_concurrency.lockutils [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Acquired lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:01:22 np0005593232 nova_compute[250269]: 2026-01-23 10:01:22.546 250273 DEBUG nova.network.neutron [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:01:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2106: 321 pgs: 321 active+clean; 169 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 23 05:01:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:24.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:24.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:24 np0005593232 ovn_controller[151001]: 2026-01-23T10:01:24Z|00320|binding|INFO|Releasing lport 702d4523-a665-42f5-9a36-57d187c0698a from this chassis (sb_readonly=0)
Jan 23 05:01:24 np0005593232 nova_compute[250269]: 2026-01-23 10:01:24.498 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2107: 321 pgs: 321 active+clean; 169 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 23 05:01:25 np0005593232 nova_compute[250269]: 2026-01-23 10:01:25.088 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:25 np0005593232 nova_compute[250269]: 2026-01-23 10:01:25.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:01:25 np0005593232 nova_compute[250269]: 2026-01-23 10:01:25.643 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:01:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:26.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:01:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:26.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:01:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2108: 321 pgs: 321 active+clean; 169 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 23 05:01:27 np0005593232 ceph-mgr[74726]: [devicehealth INFO root] Check health
Jan 23 05:01:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:28.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:28 np0005593232 nova_compute[250269]: 2026-01-23 10:01:28.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:01:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:28.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2109: 321 pgs: 321 active+clean; 169 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 177 KiB/s wr, 16 op/s
Jan 23 05:01:29 np0005593232 nova_compute[250269]: 2026-01-23 10:01:29.086 250273 DEBUG nova.network.neutron [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Updating instance_info_cache with network_info: [{"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:01:29 np0005593232 nova_compute[250269]: 2026-01-23 10:01:29.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:01:29 np0005593232 nova_compute[250269]: 2026-01-23 10:01:29.825 250273 DEBUG oslo_concurrency.lockutils [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Releasing lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:01:29 np0005593232 nova_compute[250269]: 2026-01-23 10:01:29.827 250273 DEBUG nova.compute.manager [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:01:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:30.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:30 np0005593232 nova_compute[250269]: 2026-01-23 10:01:30.091 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:30 np0005593232 nova_compute[250269]: 2026-01-23 10:01:30.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:01:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:01:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:30.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:01:30 np0005593232 nova_compute[250269]: 2026-01-23 10:01:30.645 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:30 np0005593232 kernel: tap35f84523-a0 (unregistering): left promiscuous mode
Jan 23 05:01:30 np0005593232 NetworkManager[49057]: <info>  [1769162490.6559] device (tap35f84523-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:01:30 np0005593232 ovn_controller[151001]: 2026-01-23T10:01:30Z|00321|binding|INFO|Releasing lport 35f84523-a0b5-4102-ba04-cc5da6075d54 from this chassis (sb_readonly=0)
Jan 23 05:01:30 np0005593232 nova_compute[250269]: 2026-01-23 10:01:30.665 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:30 np0005593232 ovn_controller[151001]: 2026-01-23T10:01:30Z|00322|binding|INFO|Setting lport 35f84523-a0b5-4102-ba04-cc5da6075d54 down in Southbound
Jan 23 05:01:30 np0005593232 ovn_controller[151001]: 2026-01-23T10:01:30Z|00323|binding|INFO|Removing iface tap35f84523-a0 ovn-installed in OVS
Jan 23 05:01:30 np0005593232 nova_compute[250269]: 2026-01-23 10:01:30.668 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:30 np0005593232 nova_compute[250269]: 2026-01-23 10:01:30.693 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:30 np0005593232 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Jan 23 05:01:30 np0005593232 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d0000005d.scope: Consumed 15.459s CPU time.
Jan 23 05:01:30 np0005593232 systemd-machined[215836]: Machine qemu-38-instance-0000005d terminated.
Jan 23 05:01:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:30.862 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:d6:dc 10.100.0.5'], port_security=['fa:16:3e:b8:d6:dc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1bdbf4d2-447b-47d0-8b3f-878ee65905a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c5c1d0762242f29a5d26033efd9f6d', 'neutron:revision_number': '8', 'neutron:security_group_ids': '53abfec9-e9a4-4b72-b0e0-38bea0069f7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.175', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26eced02-0507-4a33-9943-52faf3fc8cd2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=35f84523-a0b5-4102-ba04-cc5da6075d54) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:01:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:30.863 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 35f84523-a0b5-4102-ba04-cc5da6075d54 in datapath ee03d7c9-e107-41bf-95cc-5508578ad66c unbound from our chassis#033[00m
Jan 23 05:01:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:30.864 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ee03d7c9-e107-41bf-95cc-5508578ad66c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:01:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:30.867 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[49e66ade-9d68-4350-9f84-6c8fe2b660eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:01:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:30.868 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c namespace which is not needed anymore#033[00m
Jan 23 05:01:30 np0005593232 nova_compute[250269]: 2026-01-23 10:01:30.918 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:30 np0005593232 nova_compute[250269]: 2026-01-23 10:01:30.924 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:30 np0005593232 nova_compute[250269]: 2026-01-23 10:01:30.937 250273 INFO nova.virt.libvirt.driver [-] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Instance destroyed successfully.#033[00m
Jan 23 05:01:30 np0005593232 nova_compute[250269]: 2026-01-23 10:01:30.937 250273 DEBUG nova.objects.instance [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lazy-loading 'resources' on Instance uuid 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:01:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2110: 321 pgs: 321 active+clean; 169 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 13 KiB/s wr, 2 op/s
Jan 23 05:01:31 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[312590]: [NOTICE]   (312594) : haproxy version is 2.8.14-c23fe91
Jan 23 05:01:31 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[312590]: [NOTICE]   (312594) : path to executable is /usr/sbin/haproxy
Jan 23 05:01:31 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[312590]: [WARNING]  (312594) : Exiting Master process...
Jan 23 05:01:31 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[312590]: [WARNING]  (312594) : Exiting Master process...
Jan 23 05:01:31 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[312590]: [ALERT]    (312594) : Current worker (312596) exited with code 143 (Terminated)
Jan 23 05:01:31 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[312590]: [WARNING]  (312594) : All workers exited. Exiting... (0)
Jan 23 05:01:31 np0005593232 systemd[1]: libpod-828e10654f856db651944ff332237e7bcbfd481f1f948ecab16dd543071d49b9.scope: Deactivated successfully.
Jan 23 05:01:31 np0005593232 podman[313791]: 2026-01-23 10:01:31.014757938 +0000 UTC m=+0.047302815 container died 828e10654f856db651944ff332237e7bcbfd481f1f948ecab16dd543071d49b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 23 05:01:31 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-828e10654f856db651944ff332237e7bcbfd481f1f948ecab16dd543071d49b9-userdata-shm.mount: Deactivated successfully.
Jan 23 05:01:31 np0005593232 systemd[1]: var-lib-containers-storage-overlay-31d45bb9947914e7115fd389f7acab88d2cc5507872efe5cec29541ffa552309-merged.mount: Deactivated successfully.
Jan 23 05:01:31 np0005593232 podman[313791]: 2026-01-23 10:01:31.118446625 +0000 UTC m=+0.150991512 container cleanup 828e10654f856db651944ff332237e7bcbfd481f1f948ecab16dd543071d49b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 23 05:01:31 np0005593232 systemd[1]: libpod-conmon-828e10654f856db651944ff332237e7bcbfd481f1f948ecab16dd543071d49b9.scope: Deactivated successfully.
Jan 23 05:01:31 np0005593232 podman[313821]: 2026-01-23 10:01:31.278556304 +0000 UTC m=+0.135753858 container remove 828e10654f856db651944ff332237e7bcbfd481f1f948ecab16dd543071d49b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 05:01:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:31.284 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0b24de3b-8d9b-49c6-9252-e871a67c7c2a]: (4, ('Fri Jan 23 10:01:30 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c (828e10654f856db651944ff332237e7bcbfd481f1f948ecab16dd543071d49b9)\n828e10654f856db651944ff332237e7bcbfd481f1f948ecab16dd543071d49b9\nFri Jan 23 10:01:31 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c (828e10654f856db651944ff332237e7bcbfd481f1f948ecab16dd543071d49b9)\n828e10654f856db651944ff332237e7bcbfd481f1f948ecab16dd543071d49b9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:01:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:31.286 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[35d1155f-efb6-4460-8db4-91c5f756241f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:01:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:31.288 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee03d7c9-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.290 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:31 np0005593232 kernel: tapee03d7c9-e0: left promiscuous mode
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.308 250273 DEBUG nova.virt.libvirt.vif [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:59:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1908507141',display_name='tempest-ServerActionsTestJSON-server-1908507141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1908507141',id=93,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0AiEKt9gHrsbueqjCG64VrzhP898xYsJXOd2/6uW3CZrw7c/2vnYXFOKeIp4qvJ25g/gz5/w2irrKH3R3Pyr6HiyEmMxGMtHTZ1L/l92xM4YiKXMLNL4VsFVwX3d+71g==',key_name='tempest-keypair-1055968095',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:59:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='74c5c1d0762242f29a5d26033efd9f6d',ramdisk_id='',reservation_id='r-ii3s65d1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1619235720',owner_user_name='tempest-ServerActionsTestJSON-1619235720-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:01:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9d4a5c201efa4992a9ef57d8abdc1675',uuid=1bdbf4d2-447b-47d0-8b3f-878ee65905a7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.308 250273 DEBUG nova.network.os_vif_util [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converting VIF {"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.309 250273 DEBUG nova.network.os_vif_util [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.310 250273 DEBUG os_vif [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:01:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:31.311 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[59ab5003-6a07-4810-9cae-d61cfa684f1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.312 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.313 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35f84523-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.314 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.314 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.317 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.320 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.326 250273 INFO os_vif [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0')#033[00m
Jan 23 05:01:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:31.327 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[988cba51-7238-4294-a0c6-5d173aac2fd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:01:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:31.328 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9728e0a3-dd97-4626-9c98-c7f24483557e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.332 250273 DEBUG nova.virt.libvirt.driver [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Start _get_guest_xml network_info=[{"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.336 250273 WARNING nova.virt.libvirt.driver [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:01:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:31.346 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[672c34a0-c405-40a0-8faf-79c47e059564]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 635851, 'reachable_time': 42193, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313838, 'error': None, 'target': 'ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:01:31 np0005593232 systemd[1]: run-netns-ovnmeta\x2dee03d7c9\x2de107\x2d41bf\x2d95cc\x2d5508578ad66c.mount: Deactivated successfully.
Jan 23 05:01:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:31.351 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:01:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:31.351 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[42e81b39-a629-448c-be13-4f1b63a7353a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.353 250273 DEBUG nova.virt.libvirt.host [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.354 250273 DEBUG nova.virt.libvirt.host [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.391 250273 DEBUG nova.virt.libvirt.host [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.392 250273 DEBUG nova.virt.libvirt.host [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.393 250273 DEBUG nova.virt.libvirt.driver [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.394 250273 DEBUG nova.virt.hardware [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.394 250273 DEBUG nova.virt.hardware [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.395 250273 DEBUG nova.virt.hardware [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.395 250273 DEBUG nova.virt.hardware [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.395 250273 DEBUG nova.virt.hardware [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.395 250273 DEBUG nova.virt.hardware [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.396 250273 DEBUG nova.virt.hardware [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.396 250273 DEBUG nova.virt.hardware [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.396 250273 DEBUG nova.virt.hardware [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.396 250273 DEBUG nova.virt.hardware [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.397 250273 DEBUG nova.virt.hardware [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.397 250273 DEBUG nova.objects.instance [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.513 250273 DEBUG oslo_concurrency.processutils [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:01:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:01:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:01:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3032094244' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:01:31 np0005593232 nova_compute[250269]: 2026-01-23 10:01:31.967 250273 DEBUG oslo_concurrency.processutils [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.021 250273 DEBUG oslo_concurrency.processutils [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:01:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:32.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.094 250273 DEBUG nova.compute.manager [req-abac10ed-9ef3-4f2f-b2f4-65c181182427 req-fe731146-b305-4888-9548-ceb5e3ba748b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received event network-vif-unplugged-35f84523-a0b5-4102-ba04-cc5da6075d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.095 250273 DEBUG oslo_concurrency.lockutils [req-abac10ed-9ef3-4f2f-b2f4-65c181182427 req-fe731146-b305-4888-9548-ceb5e3ba748b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.096 250273 DEBUG oslo_concurrency.lockutils [req-abac10ed-9ef3-4f2f-b2f4-65c181182427 req-fe731146-b305-4888-9548-ceb5e3ba748b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.096 250273 DEBUG oslo_concurrency.lockutils [req-abac10ed-9ef3-4f2f-b2f4-65c181182427 req-fe731146-b305-4888-9548-ceb5e3ba748b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.096 250273 DEBUG nova.compute.manager [req-abac10ed-9ef3-4f2f-b2f4-65c181182427 req-fe731146-b305-4888-9548-ceb5e3ba748b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] No waiting events found dispatching network-vif-unplugged-35f84523-a0b5-4102-ba04-cc5da6075d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.096 250273 WARNING nova.compute.manager [req-abac10ed-9ef3-4f2f-b2f4-65c181182427 req-fe731146-b305-4888-9548-ceb5e3ba748b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received unexpected event network-vif-unplugged-35f84523-a0b5-4102-ba04-cc5da6075d54 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:01:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:32.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:01:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2280801756' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.452 250273 DEBUG oslo_concurrency.processutils [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.453 250273 DEBUG nova.virt.libvirt.vif [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:59:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1908507141',display_name='tempest-ServerActionsTestJSON-server-1908507141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1908507141',id=93,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0AiEKt9gHrsbueqjCG64VrzhP898xYsJXOd2/6uW3CZrw7c/2vnYXFOKeIp4qvJ25g/gz5/w2irrKH3R3Pyr6HiyEmMxGMtHTZ1L/l92xM4YiKXMLNL4VsFVwX3d+71g==',key_name='tempest-keypair-1055968095',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:59:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='74c5c1d0762242f29a5d26033efd9f6d',ramdisk_id='',reservation_id='r-ii3s65d1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1619235720',owner_user_name='tempest-ServerActionsTestJSON-1619235720-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:01:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9d4a5c201efa4992a9ef57d8abdc1675',uuid=1bdbf4d2-447b-47d0-8b3f-878ee65905a7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.454 250273 DEBUG nova.network.os_vif_util [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converting VIF {"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.455 250273 DEBUG nova.network.os_vif_util [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.457 250273 DEBUG nova.objects.instance [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lazy-loading 'pci_devices' on Instance uuid 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.750 250273 DEBUG nova.virt.libvirt.driver [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:01:32 np0005593232 nova_compute[250269]:  <uuid>1bdbf4d2-447b-47d0-8b3f-878ee65905a7</uuid>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:  <name>instance-0000005d</name>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerActionsTestJSON-server-1908507141</nova:name>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:01:31</nova:creationTime>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:01:32 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:        <nova:user uuid="9d4a5c201efa4992a9ef57d8abdc1675">tempest-ServerActionsTestJSON-1619235720-project-member</nova:user>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:        <nova:project uuid="74c5c1d0762242f29a5d26033efd9f6d">tempest-ServerActionsTestJSON-1619235720</nova:project>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:        <nova:port uuid="35f84523-a0b5-4102-ba04-cc5da6075d54">
Jan 23 05:01:32 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <entry name="serial">1bdbf4d2-447b-47d0-8b3f-878ee65905a7</entry>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <entry name="uuid">1bdbf4d2-447b-47d0-8b3f-878ee65905a7</entry>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/1bdbf4d2-447b-47d0-8b3f-878ee65905a7_disk">
Jan 23 05:01:32 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:01:32 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/1bdbf4d2-447b-47d0-8b3f-878ee65905a7_disk.config">
Jan 23 05:01:32 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:01:32 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:b8:d6:dc"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <target dev="tap35f84523-a0"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/1bdbf4d2-447b-47d0-8b3f-878ee65905a7/console.log" append="off"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <input type="keyboard" bus="usb"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:01:32 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:01:32 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:01:32 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:01:32 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.753 250273 DEBUG nova.virt.libvirt.driver [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.753 250273 DEBUG nova.virt.libvirt.driver [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.754 250273 DEBUG nova.virt.libvirt.vif [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:59:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1908507141',display_name='tempest-ServerActionsTestJSON-server-1908507141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1908507141',id=93,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0AiEKt9gHrsbueqjCG64VrzhP898xYsJXOd2/6uW3CZrw7c/2vnYXFOKeIp4qvJ25g/gz5/w2irrKH3R3Pyr6HiyEmMxGMtHTZ1L/l92xM4YiKXMLNL4VsFVwX3d+71g==',key_name='tempest-keypair-1055968095',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:59:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='74c5c1d0762242f29a5d26033efd9f6d',ramdisk_id='',reservation_id='r-ii3s65d1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1619235720',owner_user_name='tempest-ServerActionsTestJSON-1619235720-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:01:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9d4a5c201efa4992a9ef57d8abdc1675',uuid=1bdbf4d2-447b-47d0-8b3f-878ee65905a7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.755 250273 DEBUG nova.network.os_vif_util [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converting VIF {"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.755 250273 DEBUG nova.network.os_vif_util [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.756 250273 DEBUG os_vif [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.757 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.757 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.758 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.761 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.761 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35f84523-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.762 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap35f84523-a0, col_values=(('external_ids', {'iface-id': '35f84523-a0b5-4102-ba04-cc5da6075d54', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:d6:dc', 'vm-uuid': '1bdbf4d2-447b-47d0-8b3f-878ee65905a7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.763 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:32 np0005593232 NetworkManager[49057]: <info>  [1769162492.7650] manager: (tap35f84523-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/162)
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.766 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.767 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.768 250273 INFO os_vif [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0')#033[00m
Jan 23 05:01:32 np0005593232 systemd-udevd[313760]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:01:32 np0005593232 kernel: tap35f84523-a0: entered promiscuous mode
Jan 23 05:01:32 np0005593232 NetworkManager[49057]: <info>  [1769162492.8821] manager: (tap35f84523-a0): new Tun device (/org/freedesktop/NetworkManager/Devices/163)
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.882 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:32 np0005593232 ovn_controller[151001]: 2026-01-23T10:01:32Z|00324|binding|INFO|Claiming lport 35f84523-a0b5-4102-ba04-cc5da6075d54 for this chassis.
Jan 23 05:01:32 np0005593232 ovn_controller[151001]: 2026-01-23T10:01:32Z|00325|binding|INFO|35f84523-a0b5-4102-ba04-cc5da6075d54: Claiming fa:16:3e:b8:d6:dc 10.100.0.5
Jan 23 05:01:32 np0005593232 NetworkManager[49057]: <info>  [1769162492.8926] device (tap35f84523-a0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:01:32 np0005593232 NetworkManager[49057]: <info>  [1769162492.8934] device (tap35f84523-a0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:01:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:32.897 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:d6:dc 10.100.0.5'], port_security=['fa:16:3e:b8:d6:dc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1bdbf4d2-447b-47d0-8b3f-878ee65905a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c5c1d0762242f29a5d26033efd9f6d', 'neutron:revision_number': '9', 'neutron:security_group_ids': '53abfec9-e9a4-4b72-b0e0-38bea0069f7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26eced02-0507-4a33-9943-52faf3fc8cd2, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=35f84523-a0b5-4102-ba04-cc5da6075d54) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:01:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:32.898 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 35f84523-a0b5-4102-ba04-cc5da6075d54 in datapath ee03d7c9-e107-41bf-95cc-5508578ad66c bound to our chassis#033[00m
Jan 23 05:01:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:32.899 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ee03d7c9-e107-41bf-95cc-5508578ad66c#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.900 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:32 np0005593232 ovn_controller[151001]: 2026-01-23T10:01:32Z|00326|binding|INFO|Setting lport 35f84523-a0b5-4102-ba04-cc5da6075d54 ovn-installed in OVS
Jan 23 05:01:32 np0005593232 ovn_controller[151001]: 2026-01-23T10:01:32Z|00327|binding|INFO|Setting lport 35f84523-a0b5-4102-ba04-cc5da6075d54 up in Southbound
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.903 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:32 np0005593232 nova_compute[250269]: 2026-01-23 10:01:32.906 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:32.910 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6b8505cc-410a-4563-b159-80fc1d6994e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:01:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:32.910 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapee03d7c9-e1 in ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:01:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:32.912 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapee03d7c9-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:01:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:32.912 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[77c22073-54bb-4595-b3dd-ff11423a1a2b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:01:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:32.913 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[94a5f2e6-c856-4513-b968-729262856a32]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:01:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:32.924 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[c80dc1ed-cb9c-497e-9981-7c0cc73c6b90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:01:32 np0005593232 systemd-machined[215836]: New machine qemu-39-instance-0000005d.
Jan 23 05:01:32 np0005593232 systemd[1]: Started Virtual Machine qemu-39-instance-0000005d.
Jan 23 05:01:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:32.947 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f2b6a3ff-1db6-4de9-bad2-1ca5781cf548]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:01:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2111: 321 pgs: 321 active+clean; 169 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 8.8 KiB/s rd, 13 KiB/s wr, 11 op/s
Jan 23 05:01:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:32.978 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[029564ef-d49a-4ecf-889d-ec89ffd65629]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:01:32 np0005593232 NetworkManager[49057]: <info>  [1769162492.9845] manager: (tapee03d7c9-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/164)
Jan 23 05:01:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:32.984 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[822319ef-0497-4449-b0f2-0259968eb173]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:33.012 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[dd285e02-c7c4-4372-9747-804811e94b7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:33.015 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[806ead45-9555-4b49-82cc-d187848bc9f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:01:33 np0005593232 NetworkManager[49057]: <info>  [1769162493.0346] device (tapee03d7c9-e0): carrier: link connected
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:33.039 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d3861a-6b70-4b91-b63c-3fbfb71c1e17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:33.054 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4075fe81-03d1-47d9-8afb-95d8aa46ceab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee03d7c9-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:65:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 99], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640617, 'reachable_time': 41421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313947, 'error': None, 'target': 'ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:33.067 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9cc63e56-f9f7-446e-966d-e83793a9d3e7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecd:6530'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 640617, 'tstamp': 640617}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313948, 'error': None, 'target': 'ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:33.086 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[bfbe0f0e-1c5c-47b8-890b-ccfaf9677761]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapee03d7c9-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cd:65:30'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 99], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640617, 'reachable_time': 41421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313949, 'error': None, 'target': 'ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:33.118 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8d861162-e26d-4bfb-8598-c44a7e6c0d5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:33.183 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e265a5e1-31ae-4718-9452-72d3d35c69a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:33.185 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee03d7c9-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:33.185 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:33.186 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapee03d7c9-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:01:33 np0005593232 NetworkManager[49057]: <info>  [1769162493.1883] manager: (tapee03d7c9-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/165)
Jan 23 05:01:33 np0005593232 nova_compute[250269]: 2026-01-23 10:01:33.187 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:33 np0005593232 kernel: tapee03d7c9-e0: entered promiscuous mode
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:33.196 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapee03d7c9-e0, col_values=(('external_ids', {'iface-id': '702d4523-a665-42f5-9a36-57d187c0698a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:01:33 np0005593232 nova_compute[250269]: 2026-01-23 10:01:33.194 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:33 np0005593232 ovn_controller[151001]: 2026-01-23T10:01:33Z|00328|binding|INFO|Releasing lport 702d4523-a665-42f5-9a36-57d187c0698a from this chassis (sb_readonly=0)
Jan 23 05:01:33 np0005593232 nova_compute[250269]: 2026-01-23 10:01:33.197 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:33.229 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ee03d7c9-e107-41bf-95cc-5508578ad66c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ee03d7c9-e107-41bf-95cc-5508578ad66c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:33.231 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[65ce6c33-f977-4f02-a8d8-06aaba8a3bae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:33.232 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-ee03d7c9-e107-41bf-95cc-5508578ad66c
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/ee03d7c9-e107-41bf-95cc-5508578ad66c.pid.haproxy
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID ee03d7c9-e107-41bf-95cc-5508578ad66c
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:01:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:33.235 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'env', 'PROCESS_TAG=haproxy-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ee03d7c9-e107-41bf-95cc-5508578ad66c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:01:33 np0005593232 nova_compute[250269]: 2026-01-23 10:01:33.235 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:33 np0005593232 nova_compute[250269]: 2026-01-23 10:01:33.489 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Removed pending event for 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 23 05:01:33 np0005593232 nova_compute[250269]: 2026-01-23 10:01:33.489 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162493.487988, 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:01:33 np0005593232 nova_compute[250269]: 2026-01-23 10:01:33.490 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:01:33 np0005593232 nova_compute[250269]: 2026-01-23 10:01:33.492 250273 DEBUG nova.compute.manager [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:01:33 np0005593232 nova_compute[250269]: 2026-01-23 10:01:33.496 250273 INFO nova.virt.libvirt.driver [-] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Instance rebooted successfully.#033[00m
Jan 23 05:01:33 np0005593232 nova_compute[250269]: 2026-01-23 10:01:33.497 250273 DEBUG nova.compute.manager [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:01:33 np0005593232 podman[314023]: 2026-01-23 10:01:33.633040668 +0000 UTC m=+0.051564927 container create a6af6f138373a8d405eff1ea618cf1d9a5983f442a1c5874bf353859d1c0609b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 23 05:01:33 np0005593232 systemd[1]: Started libpod-conmon-a6af6f138373a8d405eff1ea618cf1d9a5983f442a1c5874bf353859d1c0609b.scope.
Jan 23 05:01:33 np0005593232 podman[314023]: 2026-01-23 10:01:33.60320373 +0000 UTC m=+0.021728019 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:01:33 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:01:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b5a662caca3c5bbfe48d83d303b6beed6f58ce93e5aa508589747e4528e32a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:01:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:01:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:34.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:01:34 np0005593232 podman[314023]: 2026-01-23 10:01:34.069799348 +0000 UTC m=+0.488323637 container init a6af6f138373a8d405eff1ea618cf1d9a5983f442a1c5874bf353859d1c0609b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 23 05:01:34 np0005593232 podman[314023]: 2026-01-23 10:01:34.075961043 +0000 UTC m=+0.494485302 container start a6af6f138373a8d405eff1ea618cf1d9a5983f442a1c5874bf353859d1c0609b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:01:34 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[314039]: [NOTICE]   (314044) : New worker (314046) forked
Jan 23 05:01:34 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[314039]: [NOTICE]   (314044) : Loading success.
Jan 23 05:01:34 np0005593232 nova_compute[250269]: 2026-01-23 10:01:34.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:01:34 np0005593232 nova_compute[250269]: 2026-01-23 10:01:34.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:01:34 np0005593232 nova_compute[250269]: 2026-01-23 10:01:34.294 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:01:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:34.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:34 np0005593232 nova_compute[250269]: 2026-01-23 10:01:34.337 250273 DEBUG nova.compute.manager [req-7b1dad1d-2a19-4751-969d-760cfe380017 req-35dd86b0-f16a-4c5e-869c-d5fbe83dad4a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:01:34 np0005593232 nova_compute[250269]: 2026-01-23 10:01:34.338 250273 DEBUG oslo_concurrency.lockutils [req-7b1dad1d-2a19-4751-969d-760cfe380017 req-35dd86b0-f16a-4c5e-869c-d5fbe83dad4a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:01:34 np0005593232 nova_compute[250269]: 2026-01-23 10:01:34.338 250273 DEBUG oslo_concurrency.lockutils [req-7b1dad1d-2a19-4751-969d-760cfe380017 req-35dd86b0-f16a-4c5e-869c-d5fbe83dad4a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:01:34 np0005593232 nova_compute[250269]: 2026-01-23 10:01:34.338 250273 DEBUG oslo_concurrency.lockutils [req-7b1dad1d-2a19-4751-969d-760cfe380017 req-35dd86b0-f16a-4c5e-869c-d5fbe83dad4a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:01:34 np0005593232 nova_compute[250269]: 2026-01-23 10:01:34.339 250273 DEBUG nova.compute.manager [req-7b1dad1d-2a19-4751-969d-760cfe380017 req-35dd86b0-f16a-4c5e-869c-d5fbe83dad4a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] No waiting events found dispatching network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:01:34 np0005593232 nova_compute[250269]: 2026-01-23 10:01:34.339 250273 WARNING nova.compute.manager [req-7b1dad1d-2a19-4751-969d-760cfe380017 req-35dd86b0-f16a-4c5e-869c-d5fbe83dad4a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received unexpected event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Jan 23 05:01:34 np0005593232 nova_compute[250269]: 2026-01-23 10:01:34.339 250273 DEBUG nova.compute.manager [req-7b1dad1d-2a19-4751-969d-760cfe380017 req-35dd86b0-f16a-4c5e-869c-d5fbe83dad4a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:01:34 np0005593232 nova_compute[250269]: 2026-01-23 10:01:34.339 250273 DEBUG oslo_concurrency.lockutils [req-7b1dad1d-2a19-4751-969d-760cfe380017 req-35dd86b0-f16a-4c5e-869c-d5fbe83dad4a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:01:34 np0005593232 nova_compute[250269]: 2026-01-23 10:01:34.340 250273 DEBUG oslo_concurrency.lockutils [req-7b1dad1d-2a19-4751-969d-760cfe380017 req-35dd86b0-f16a-4c5e-869c-d5fbe83dad4a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:01:34 np0005593232 nova_compute[250269]: 2026-01-23 10:01:34.340 250273 DEBUG oslo_concurrency.lockutils [req-7b1dad1d-2a19-4751-969d-760cfe380017 req-35dd86b0-f16a-4c5e-869c-d5fbe83dad4a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:01:34 np0005593232 nova_compute[250269]: 2026-01-23 10:01:34.340 250273 DEBUG nova.compute.manager [req-7b1dad1d-2a19-4751-969d-760cfe380017 req-35dd86b0-f16a-4c5e-869c-d5fbe83dad4a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] No waiting events found dispatching network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:01:34 np0005593232 nova_compute[250269]: 2026-01-23 10:01:34.340 250273 WARNING nova.compute.manager [req-7b1dad1d-2a19-4751-969d-760cfe380017 req-35dd86b0-f16a-4c5e-869c-d5fbe83dad4a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received unexpected event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Jan 23 05:01:34 np0005593232 nova_compute[250269]: 2026-01-23 10:01:34.342 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:01:34 np0005593232 nova_compute[250269]: 2026-01-23 10:01:34.347 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:01:34 np0005593232 nova_compute[250269]: 2026-01-23 10:01:34.531 250273 DEBUG oslo_concurrency.lockutils [None req-bc79e924-d9d2-4e3a-ad1e-c60ec5c66251 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 12.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:01:34 np0005593232 nova_compute[250269]: 2026-01-23 10:01:34.579 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162493.4881556, 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:01:34 np0005593232 nova_compute[250269]: 2026-01-23 10:01:34.580 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] VM Started (Lifecycle Event)#033[00m
Jan 23 05:01:34 np0005593232 nova_compute[250269]: 2026-01-23 10:01:34.642 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:01:34 np0005593232 nova_compute[250269]: 2026-01-23 10:01:34.646 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:01:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2112: 321 pgs: 321 active+clean; 169 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 6.5 KiB/s rd, 5.7 KiB/s wr, 9 op/s
Jan 23 05:01:35 np0005593232 nova_compute[250269]: 2026-01-23 10:01:35.039 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:01:35 np0005593232 nova_compute[250269]: 2026-01-23 10:01:35.040 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:01:35 np0005593232 nova_compute[250269]: 2026-01-23 10:01:35.040 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:01:35 np0005593232 nova_compute[250269]: 2026-01-23 10:01:35.041 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:01:35 np0005593232 nova_compute[250269]: 2026-01-23 10:01:35.648 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:36.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:01:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:36.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:01:36 np0005593232 podman[314056]: 2026-01-23 10:01:36.432779864 +0000 UTC m=+0.084081930 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 05:01:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:01:36 np0005593232 nova_compute[250269]: 2026-01-23 10:01:36.716 250273 DEBUG nova.compute.manager [req-7d02d005-8a65-4030-b74e-00e798d4c5bc req-a195bc08-c91b-4461-86b7-136dfdc64d40 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:01:36 np0005593232 nova_compute[250269]: 2026-01-23 10:01:36.718 250273 DEBUG oslo_concurrency.lockutils [req-7d02d005-8a65-4030-b74e-00e798d4c5bc req-a195bc08-c91b-4461-86b7-136dfdc64d40 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:01:36 np0005593232 nova_compute[250269]: 2026-01-23 10:01:36.718 250273 DEBUG oslo_concurrency.lockutils [req-7d02d005-8a65-4030-b74e-00e798d4c5bc req-a195bc08-c91b-4461-86b7-136dfdc64d40 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:01:36 np0005593232 nova_compute[250269]: 2026-01-23 10:01:36.718 250273 DEBUG oslo_concurrency.lockutils [req-7d02d005-8a65-4030-b74e-00e798d4c5bc req-a195bc08-c91b-4461-86b7-136dfdc64d40 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:01:36 np0005593232 nova_compute[250269]: 2026-01-23 10:01:36.719 250273 DEBUG nova.compute.manager [req-7d02d005-8a65-4030-b74e-00e798d4c5bc req-a195bc08-c91b-4461-86b7-136dfdc64d40 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] No waiting events found dispatching network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:01:36 np0005593232 nova_compute[250269]: 2026-01-23 10:01:36.719 250273 WARNING nova.compute.manager [req-7d02d005-8a65-4030-b74e-00e798d4c5bc req-a195bc08-c91b-4461-86b7-136dfdc64d40 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received unexpected event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:01:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2113: 321 pgs: 321 active+clean; 169 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 20 KiB/s wr, 98 op/s
Jan 23 05:01:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:01:37
Jan 23 05:01:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:01:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:01:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['vms', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'backups', '.mgr', 'cephfs.cephfs.data', 'images']
Jan 23 05:01:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:01:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:01:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:01:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:01:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:01:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:01:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:01:37 np0005593232 nova_compute[250269]: 2026-01-23 10:01:37.765 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.003000085s ======
Jan 23 05:01:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:38.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000085s
Jan 23 05:01:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:01:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:01:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:01:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:01:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:01:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:01:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:01:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:01:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:01:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:01:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:38.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2114: 321 pgs: 321 active+clean; 169 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 15 KiB/s wr, 145 op/s
Jan 23 05:01:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:40.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:40.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:40 np0005593232 nova_compute[250269]: 2026-01-23 10:01:40.650 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2115: 321 pgs: 321 active+clean; 169 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 14 KiB/s wr, 145 op/s
Jan 23 05:01:41 np0005593232 podman[314085]: 2026-01-23 10:01:41.398755463 +0000 UTC m=+0.047788049 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 05:01:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:01:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:42.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:42.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:42.613 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:01:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:42.614 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:01:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:01:42.614 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:01:42 np0005593232 nova_compute[250269]: 2026-01-23 10:01:42.769 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2116: 321 pgs: 321 active+clean; 169 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 14 KiB/s wr, 145 op/s
Jan 23 05:01:43 np0005593232 nova_compute[250269]: 2026-01-23 10:01:43.288 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Updating instance_info_cache with network_info: [{"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:01:43 np0005593232 nova_compute[250269]: 2026-01-23 10:01:43.639 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:01:43 np0005593232 nova_compute[250269]: 2026-01-23 10:01:43.639 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:01:43 np0005593232 nova_compute[250269]: 2026-01-23 10:01:43.640 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:01:43 np0005593232 nova_compute[250269]: 2026-01-23 10:01:43.640 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:01:43 np0005593232 nova_compute[250269]: 2026-01-23 10:01:43.640 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:01:43 np0005593232 nova_compute[250269]: 2026-01-23 10:01:43.930 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:01:43 np0005593232 nova_compute[250269]: 2026-01-23 10:01:43.930 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:01:43 np0005593232 nova_compute[250269]: 2026-01-23 10:01:43.931 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:01:43 np0005593232 nova_compute[250269]: 2026-01-23 10:01:43.931 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:01:43 np0005593232 nova_compute[250269]: 2026-01-23 10:01:43.932 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:01:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:44.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:44.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:01:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/654963539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:01:44 np0005593232 nova_compute[250269]: 2026-01-23 10:01:44.383 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:01:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:01:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3796410807' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:01:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:01:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3796410807' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:01:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2117: 321 pgs: 321 active+clean; 169 MiB data, 778 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 14 KiB/s wr, 136 op/s
Jan 23 05:01:45 np0005593232 nova_compute[250269]: 2026-01-23 10:01:45.185 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:01:45 np0005593232 nova_compute[250269]: 2026-01-23 10:01:45.186 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:01:45 np0005593232 nova_compute[250269]: 2026-01-23 10:01:45.433 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:01:45 np0005593232 nova_compute[250269]: 2026-01-23 10:01:45.435 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4335MB free_disk=20.921676635742188GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:01:45 np0005593232 nova_compute[250269]: 2026-01-23 10:01:45.436 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:01:45 np0005593232 nova_compute[250269]: 2026-01-23 10:01:45.437 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:01:45 np0005593232 nova_compute[250269]: 2026-01-23 10:01:45.595 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:01:45 np0005593232 nova_compute[250269]: 2026-01-23 10:01:45.652 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:45 np0005593232 nova_compute[250269]: 2026-01-23 10:01:45.694 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance a50f4fa0-c4bd-41c7-be13-6afd972661b6 has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692#033[00m
Jan 23 05:01:45 np0005593232 nova_compute[250269]: 2026-01-23 10:01:45.695 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:01:45 np0005593232 nova_compute[250269]: 2026-01-23 10:01:45.695 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:01:45 np0005593232 nova_compute[250269]: 2026-01-23 10:01:45.795 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:01:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:46.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:01:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/560724757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:01:46 np0005593232 nova_compute[250269]: 2026-01-23 10:01:46.269 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:01:46 np0005593232 nova_compute[250269]: 2026-01-23 10:01:46.275 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:01:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:46.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:46 np0005593232 nova_compute[250269]: 2026-01-23 10:01:46.468 250273 DEBUG oslo_concurrency.lockutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Acquiring lock "a50f4fa0-c4bd-41c7-be13-6afd972661b6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:01:46 np0005593232 nova_compute[250269]: 2026-01-23 10:01:46.469 250273 DEBUG oslo_concurrency.lockutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Lock "a50f4fa0-c4bd-41c7-be13-6afd972661b6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:01:46 np0005593232 nova_compute[250269]: 2026-01-23 10:01:46.471 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:01:46 np0005593232 nova_compute[250269]: 2026-01-23 10:01:46.555 250273 DEBUG nova.compute.manager [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:01:46 np0005593232 nova_compute[250269]: 2026-01-23 10:01:46.560 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:01:46 np0005593232 nova_compute[250269]: 2026-01-23 10:01:46.560 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:01:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:01:46 np0005593232 nova_compute[250269]: 2026-01-23 10:01:46.890 250273 DEBUG oslo_concurrency.lockutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:01:46 np0005593232 nova_compute[250269]: 2026-01-23 10:01:46.891 250273 DEBUG oslo_concurrency.lockutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:01:46 np0005593232 nova_compute[250269]: 2026-01-23 10:01:46.901 250273 DEBUG nova.virt.hardware [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:01:46 np0005593232 nova_compute[250269]: 2026-01-23 10:01:46.901 250273 INFO nova.compute.claims [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:01:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:01:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:01:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:01:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:01:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003173419179229481 of space, bias 1.0, pg target 0.9520257537688442 quantized to 32 (current 32)
Jan 23 05:01:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:01:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 05:01:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:01:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:01:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:01:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 05:01:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:01:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 05:01:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:01:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:01:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:01:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 05:01:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:01:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 05:01:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:01:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:01:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:01:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 05:01:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2118: 321 pgs: 321 active+clean; 208 MiB data, 827 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.7 MiB/s wr, 199 op/s
Jan 23 05:01:47 np0005593232 nova_compute[250269]: 2026-01-23 10:01:47.519 250273 DEBUG oslo_concurrency.processutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:01:47 np0005593232 ovn_controller[151001]: 2026-01-23T10:01:47Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:d6:dc 10.100.0.5
Jan 23 05:01:47 np0005593232 nova_compute[250269]: 2026-01-23 10:01:47.772 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:01:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4143500824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:01:47 np0005593232 nova_compute[250269]: 2026-01-23 10:01:47.976 250273 DEBUG oslo_concurrency.processutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:01:47 np0005593232 nova_compute[250269]: 2026-01-23 10:01:47.982 250273 DEBUG nova.compute.provider_tree [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:01:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:01:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:48.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:01:48 np0005593232 nova_compute[250269]: 2026-01-23 10:01:48.195 250273 DEBUG nova.scheduler.client.report [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:01:48 np0005593232 nova_compute[250269]: 2026-01-23 10:01:48.243 250273 DEBUG oslo_concurrency.lockutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.352s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:01:48 np0005593232 nova_compute[250269]: 2026-01-23 10:01:48.244 250273 DEBUG nova.compute.manager [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:01:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:48.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:48 np0005593232 nova_compute[250269]: 2026-01-23 10:01:48.609 250273 DEBUG nova.compute.manager [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:01:48 np0005593232 nova_compute[250269]: 2026-01-23 10:01:48.610 250273 DEBUG nova.network.neutron [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:01:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2119: 321 pgs: 321 active+clean; 246 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 171 op/s
Jan 23 05:01:49 np0005593232 nova_compute[250269]: 2026-01-23 10:01:49.471 250273 INFO nova.virt.libvirt.driver [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:01:49 np0005593232 nova_compute[250269]: 2026-01-23 10:01:49.511 250273 DEBUG nova.compute.manager [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:01:49 np0005593232 nova_compute[250269]: 2026-01-23 10:01:49.686 250273 DEBUG nova.compute.manager [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 05:01:49 np0005593232 nova_compute[250269]: 2026-01-23 10:01:49.691 250273 DEBUG nova.virt.libvirt.driver [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:01:49 np0005593232 nova_compute[250269]: 2026-01-23 10:01:49.692 250273 INFO nova.virt.libvirt.driver [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Creating image(s)#033[00m
Jan 23 05:01:49 np0005593232 nova_compute[250269]: 2026-01-23 10:01:49.779 250273 DEBUG nova.storage.rbd_utils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] rbd image a50f4fa0-c4bd-41c7-be13-6afd972661b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:01:49 np0005593232 nova_compute[250269]: 2026-01-23 10:01:49.809 250273 DEBUG nova.storage.rbd_utils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] rbd image a50f4fa0-c4bd-41c7-be13-6afd972661b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:01:49 np0005593232 nova_compute[250269]: 2026-01-23 10:01:49.857 250273 DEBUG nova.storage.rbd_utils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] rbd image a50f4fa0-c4bd-41c7-be13-6afd972661b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:01:49 np0005593232 nova_compute[250269]: 2026-01-23 10:01:49.863 250273 DEBUG oslo_concurrency.lockutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Acquiring lock "8edc4c18d7d1964a485fb1b305c460bdc5a45b20" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:01:49 np0005593232 nova_compute[250269]: 2026-01-23 10:01:49.864 250273 DEBUG oslo_concurrency.lockutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Lock "8edc4c18d7d1964a485fb1b305c460bdc5a45b20" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:01:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:50.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:50 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 05:01:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:50.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:50 np0005593232 nova_compute[250269]: 2026-01-23 10:01:50.475 250273 DEBUG nova.virt.libvirt.imagebackend [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Image locations are: [{'url': 'rbd://e1533653-0a5a-584c-b34b-8689f0d32e77/images/ae1f9e37-418c-462f-81d1-3599a6d89de9/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e1533653-0a5a-584c-b34b-8689f0d32e77/images/ae1f9e37-418c-462f-81d1-3599a6d89de9/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 23 05:01:50 np0005593232 nova_compute[250269]: 2026-01-23 10:01:50.554 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:01:50 np0005593232 nova_compute[250269]: 2026-01-23 10:01:50.606 250273 DEBUG nova.policy [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c09e682996b940dc97c866f9e4f1e74e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0f5ca0233c1a490aa2d596b88a0ec503', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:01:50 np0005593232 nova_compute[250269]: 2026-01-23 10:01:50.654 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2120: 321 pgs: 321 active+clean; 246 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 831 KiB/s rd, 3.9 MiB/s wr, 123 op/s
Jan 23 05:01:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:01:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:52.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:52.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:52 np0005593232 nova_compute[250269]: 2026-01-23 10:01:52.657 250273 DEBUG nova.network.neutron [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Successfully created port: d72fb045-89f6-4d64-beed-2672b5b1e254 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:01:52 np0005593232 nova_compute[250269]: 2026-01-23 10:01:52.774 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:52 np0005593232 nova_compute[250269]: 2026-01-23 10:01:52.809 250273 DEBUG oslo_concurrency.processutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8edc4c18d7d1964a485fb1b305c460bdc5a45b20.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:01:52 np0005593232 nova_compute[250269]: 2026-01-23 10:01:52.880 250273 DEBUG oslo_concurrency.processutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8edc4c18d7d1964a485fb1b305c460bdc5a45b20.part --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:01:52 np0005593232 nova_compute[250269]: 2026-01-23 10:01:52.882 250273 DEBUG nova.virt.images [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] ae1f9e37-418c-462f-81d1-3599a6d89de9 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Jan 23 05:01:52 np0005593232 nova_compute[250269]: 2026-01-23 10:01:52.883 250273 DEBUG nova.privsep.utils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Jan 23 05:01:52 np0005593232 nova_compute[250269]: 2026-01-23 10:01:52.883 250273 DEBUG oslo_concurrency.processutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/8edc4c18d7d1964a485fb1b305c460bdc5a45b20.part /var/lib/nova/instances/_base/8edc4c18d7d1964a485fb1b305c460bdc5a45b20.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:01:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2121: 321 pgs: 321 active+clean; 248 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 140 op/s
Jan 23 05:01:53 np0005593232 nova_compute[250269]: 2026-01-23 10:01:53.193 250273 DEBUG oslo_concurrency.processutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/8edc4c18d7d1964a485fb1b305c460bdc5a45b20.part /var/lib/nova/instances/_base/8edc4c18d7d1964a485fb1b305c460bdc5a45b20.converted" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:01:53 np0005593232 nova_compute[250269]: 2026-01-23 10:01:53.203 250273 DEBUG oslo_concurrency.processutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8edc4c18d7d1964a485fb1b305c460bdc5a45b20.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:01:53 np0005593232 nova_compute[250269]: 2026-01-23 10:01:53.270 250273 DEBUG oslo_concurrency.processutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8edc4c18d7d1964a485fb1b305c460bdc5a45b20.converted --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:01:53 np0005593232 nova_compute[250269]: 2026-01-23 10:01:53.272 250273 DEBUG oslo_concurrency.lockutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Lock "8edc4c18d7d1964a485fb1b305c460bdc5a45b20" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.408s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:01:53 np0005593232 nova_compute[250269]: 2026-01-23 10:01:53.297 250273 DEBUG nova.storage.rbd_utils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] rbd image a50f4fa0-c4bd-41c7-be13-6afd972661b6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:01:53 np0005593232 nova_compute[250269]: 2026-01-23 10:01:53.300 250273 DEBUG oslo_concurrency.processutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/8edc4c18d7d1964a485fb1b305c460bdc5a45b20 a50f4fa0-c4bd-41c7-be13-6afd972661b6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:01:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:54.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:54 np0005593232 nova_compute[250269]: 2026-01-23 10:01:54.090 250273 DEBUG oslo_concurrency.processutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/8edc4c18d7d1964a485fb1b305c460bdc5a45b20 a50f4fa0-c4bd-41c7-be13-6afd972661b6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.789s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:01:54 np0005593232 nova_compute[250269]: 2026-01-23 10:01:54.174 250273 DEBUG nova.storage.rbd_utils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] resizing rbd image a50f4fa0-c4bd-41c7-be13-6afd972661b6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 05:01:54 np0005593232 nova_compute[250269]: 2026-01-23 10:01:54.296 250273 DEBUG nova.objects.instance [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Lazy-loading 'migration_context' on Instance uuid a50f4fa0-c4bd-41c7-be13-6afd972661b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:01:54 np0005593232 nova_compute[250269]: 2026-01-23 10:01:54.334 250273 DEBUG nova.virt.libvirt.driver [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 05:01:54 np0005593232 nova_compute[250269]: 2026-01-23 10:01:54.334 250273 DEBUG nova.virt.libvirt.driver [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Ensure instance console log exists: /var/lib/nova/instances/a50f4fa0-c4bd-41c7-be13-6afd972661b6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:01:54 np0005593232 nova_compute[250269]: 2026-01-23 10:01:54.335 250273 DEBUG oslo_concurrency.lockutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:01:54 np0005593232 nova_compute[250269]: 2026-01-23 10:01:54.335 250273 DEBUG oslo_concurrency.lockutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:01:54 np0005593232 nova_compute[250269]: 2026-01-23 10:01:54.336 250273 DEBUG oslo_concurrency.lockutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:01:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:01:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:54.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:01:54 np0005593232 nova_compute[250269]: 2026-01-23 10:01:54.886 250273 DEBUG nova.network.neutron [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Successfully updated port: d72fb045-89f6-4d64-beed-2672b5b1e254 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:01:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2122: 321 pgs: 321 active+clean; 248 MiB data, 846 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 140 op/s
Jan 23 05:01:55 np0005593232 nova_compute[250269]: 2026-01-23 10:01:55.054 250273 DEBUG oslo_concurrency.lockutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Acquiring lock "refresh_cache-a50f4fa0-c4bd-41c7-be13-6afd972661b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:01:55 np0005593232 nova_compute[250269]: 2026-01-23 10:01:55.055 250273 DEBUG oslo_concurrency.lockutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Acquired lock "refresh_cache-a50f4fa0-c4bd-41c7-be13-6afd972661b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:01:55 np0005593232 nova_compute[250269]: 2026-01-23 10:01:55.055 250273 DEBUG nova.network.neutron [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:01:55 np0005593232 nova_compute[250269]: 2026-01-23 10:01:55.337 250273 DEBUG nova.compute.manager [req-35826c32-c238-46b0-ae7b-42bae118ce04 req-470e53b9-45e2-4282-b254-48fc40f8f0e8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Received event network-changed-d72fb045-89f6-4d64-beed-2672b5b1e254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:01:55 np0005593232 nova_compute[250269]: 2026-01-23 10:01:55.338 250273 DEBUG nova.compute.manager [req-35826c32-c238-46b0-ae7b-42bae118ce04 req-470e53b9-45e2-4282-b254-48fc40f8f0e8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Refreshing instance network info cache due to event network-changed-d72fb045-89f6-4d64-beed-2672b5b1e254. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:01:55 np0005593232 nova_compute[250269]: 2026-01-23 10:01:55.338 250273 DEBUG oslo_concurrency.lockutils [req-35826c32-c238-46b0-ae7b-42bae118ce04 req-470e53b9-45e2-4282-b254-48fc40f8f0e8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-a50f4fa0-c4bd-41c7-be13-6afd972661b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:01:55 np0005593232 nova_compute[250269]: 2026-01-23 10:01:55.657 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:01:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:56.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:01:56 np0005593232 nova_compute[250269]: 2026-01-23 10:01:56.145 250273 DEBUG nova.network.neutron [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:01:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:56.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:01:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2123: 321 pgs: 321 active+clean; 308 MiB data, 874 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.3 MiB/s wr, 156 op/s
Jan 23 05:01:57 np0005593232 nova_compute[250269]: 2026-01-23 10:01:57.778 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:01:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:01:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:01:58.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:01:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:01:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:01:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:01:58.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:01:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2124: 321 pgs: 321 active+clean; 341 MiB data, 889 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.8 MiB/s wr, 132 op/s
Jan 23 05:02:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:00.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.159 250273 DEBUG nova.network.neutron [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Updating instance_info_cache with network_info: [{"id": "d72fb045-89f6-4d64-beed-2672b5b1e254", "address": "fa:16:3e:94:6d:17", "network": {"id": "969bd83a-7542-46e3-90f0-1a81f26ba6b8", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1578393838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f5ca0233c1a490aa2d596b88a0ec503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd72fb045-89", "ovs_interfaceid": "d72fb045-89f6-4d64-beed-2672b5b1e254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.227 250273 DEBUG oslo_concurrency.lockutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Releasing lock "refresh_cache-a50f4fa0-c4bd-41c7-be13-6afd972661b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.227 250273 DEBUG nova.compute.manager [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Instance network_info: |[{"id": "d72fb045-89f6-4d64-beed-2672b5b1e254", "address": "fa:16:3e:94:6d:17", "network": {"id": "969bd83a-7542-46e3-90f0-1a81f26ba6b8", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1578393838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f5ca0233c1a490aa2d596b88a0ec503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd72fb045-89", "ovs_interfaceid": "d72fb045-89f6-4d64-beed-2672b5b1e254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.228 250273 DEBUG oslo_concurrency.lockutils [req-35826c32-c238-46b0-ae7b-42bae118ce04 req-470e53b9-45e2-4282-b254-48fc40f8f0e8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-a50f4fa0-c4bd-41c7-be13-6afd972661b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.228 250273 DEBUG nova.network.neutron [req-35826c32-c238-46b0-ae7b-42bae118ce04 req-470e53b9-45e2-4282-b254-48fc40f8f0e8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Refreshing network info cache for port d72fb045-89f6-4d64-beed-2672b5b1e254 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.230 250273 DEBUG nova.virt.libvirt.driver [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Start _get_guest_xml network_info=[{"id": "d72fb045-89f6-4d64-beed-2672b5b1e254", "address": "fa:16:3e:94:6d:17", "network": {"id": "969bd83a-7542-46e3-90f0-1a81f26ba6b8", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1578393838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f5ca0233c1a490aa2d596b88a0ec503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd72fb045-89", "ovs_interfaceid": "d72fb045-89f6-4d64-beed-2672b5b1e254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:31Z,direct_url=<?>,disk_format='qcow2',id=ae1f9e37-418c-462f-81d1-3599a6d89de9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': 'ae1f9e37-418c-462f-81d1-3599a6d89de9'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.234 250273 WARNING nova.virt.libvirt.driver [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.239 250273 DEBUG nova.virt.libvirt.host [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.240 250273 DEBUG nova.virt.libvirt.host [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.243 250273 DEBUG nova.virt.libvirt.host [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.243 250273 DEBUG nova.virt.libvirt.host [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.245 250273 DEBUG nova.virt.libvirt.driver [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.245 250273 DEBUG nova.virt.hardware [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:31Z,direct_url=<?>,disk_format='qcow2',id=ae1f9e37-418c-462f-81d1-3599a6d89de9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.245 250273 DEBUG nova.virt.hardware [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.246 250273 DEBUG nova.virt.hardware [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.246 250273 DEBUG nova.virt.hardware [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.246 250273 DEBUG nova.virt.hardware [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.246 250273 DEBUG nova.virt.hardware [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.247 250273 DEBUG nova.virt.hardware [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.247 250273 DEBUG nova.virt.hardware [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.247 250273 DEBUG nova.virt.hardware [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.247 250273 DEBUG nova.virt.hardware [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.247 250273 DEBUG nova.virt.hardware [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.250 250273 DEBUG oslo_concurrency.processutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:02:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:02:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:00.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.388 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.683 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:02:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2895136239' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.734 250273 DEBUG oslo_concurrency.processutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.757 250273 DEBUG nova.storage.rbd_utils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] rbd image a50f4fa0-c4bd-41c7-be13-6afd972661b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:02:00 np0005593232 nova_compute[250269]: 2026-01-23 10:02:00.760 250273 DEBUG oslo_concurrency.processutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:02:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2125: 321 pgs: 321 active+clean; 341 MiB data, 889 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 71 op/s
Jan 23 05:02:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:02:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/92112581' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.231 250273 DEBUG oslo_concurrency.processutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.233 250273 DEBUG nova.virt.libvirt.vif [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:01:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1689385279',display_name='tempest-ListServerFiltersTestJSON-instance-1689385279',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1689385279',id=99,image_ref='ae1f9e37-418c-462f-81d1-3599a6d89de9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0f5ca0233c1a490aa2d596b88a0ec503',ramdisk_id='',reservation_id='r-nt00hew7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ae1f9e37-418c-462f-81d1-3599a6d89de9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1524131674',owner_user_name='tempest-ListServerFiltersTestJSON-1524131674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:01:49Z,user_data=None,user_id='c09e682996b940dc97c866f9e4f1e74e',uuid=a50f4fa0-c4bd-41c7-be13-6afd972661b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d72fb045-89f6-4d64-beed-2672b5b1e254", "address": "fa:16:3e:94:6d:17", "network": {"id": "969bd83a-7542-46e3-90f0-1a81f26ba6b8", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1578393838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f5ca0233c1a490aa2d596b88a0ec503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd72fb045-89", "ovs_interfaceid": "d72fb045-89f6-4d64-beed-2672b5b1e254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.233 250273 DEBUG nova.network.os_vif_util [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Converting VIF {"id": "d72fb045-89f6-4d64-beed-2672b5b1e254", "address": "fa:16:3e:94:6d:17", "network": {"id": "969bd83a-7542-46e3-90f0-1a81f26ba6b8", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1578393838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f5ca0233c1a490aa2d596b88a0ec503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd72fb045-89", "ovs_interfaceid": "d72fb045-89f6-4d64-beed-2672b5b1e254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.234 250273 DEBUG nova.network.os_vif_util [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:6d:17,bridge_name='br-int',has_traffic_filtering=True,id=d72fb045-89f6-4d64-beed-2672b5b1e254,network=Network(969bd83a-7542-46e3-90f0-1a81f26ba6b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd72fb045-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.235 250273 DEBUG nova.objects.instance [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Lazy-loading 'pci_devices' on Instance uuid a50f4fa0-c4bd-41c7-be13-6afd972661b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.253 250273 DEBUG nova.virt.libvirt.driver [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:02:01 np0005593232 nova_compute[250269]:  <uuid>a50f4fa0-c4bd-41c7-be13-6afd972661b6</uuid>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:  <name>instance-00000063</name>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-1689385279</nova:name>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:02:00</nova:creationTime>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:02:01 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:        <nova:user uuid="c09e682996b940dc97c866f9e4f1e74e">tempest-ListServerFiltersTestJSON-1524131674-project-member</nova:user>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:        <nova:project uuid="0f5ca0233c1a490aa2d596b88a0ec503">tempest-ListServerFiltersTestJSON-1524131674</nova:project>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="ae1f9e37-418c-462f-81d1-3599a6d89de9"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:        <nova:port uuid="d72fb045-89f6-4d64-beed-2672b5b1e254">
Jan 23 05:02:01 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <entry name="serial">a50f4fa0-c4bd-41c7-be13-6afd972661b6</entry>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <entry name="uuid">a50f4fa0-c4bd-41c7-be13-6afd972661b6</entry>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/a50f4fa0-c4bd-41c7-be13-6afd972661b6_disk">
Jan 23 05:02:01 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:02:01 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/a50f4fa0-c4bd-41c7-be13-6afd972661b6_disk.config">
Jan 23 05:02:01 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:02:01 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:94:6d:17"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <target dev="tapd72fb045-89"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/a50f4fa0-c4bd-41c7-be13-6afd972661b6/console.log" append="off"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:02:01 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:02:01 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:02:01 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:02:01 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.255 250273 DEBUG nova.compute.manager [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Preparing to wait for external event network-vif-plugged-d72fb045-89f6-4d64-beed-2672b5b1e254 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.256 250273 DEBUG oslo_concurrency.lockutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Acquiring lock "a50f4fa0-c4bd-41c7-be13-6afd972661b6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.256 250273 DEBUG oslo_concurrency.lockutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Lock "a50f4fa0-c4bd-41c7-be13-6afd972661b6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.256 250273 DEBUG oslo_concurrency.lockutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Lock "a50f4fa0-c4bd-41c7-be13-6afd972661b6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.257 250273 DEBUG nova.virt.libvirt.vif [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:01:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1689385279',display_name='tempest-ListServerFiltersTestJSON-instance-1689385279',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1689385279',id=99,image_ref='ae1f9e37-418c-462f-81d1-3599a6d89de9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0f5ca0233c1a490aa2d596b88a0ec503',ramdisk_id='',reservation_id='r-nt00hew7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ae1f9e37-418c-462f-81d1-3599a6d89de9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-1524131674',owner_user_name='tempest-ListServerFiltersTestJSON-1524131674-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:01:49Z,user_data=None,user_id='c09e682996b940dc97c866f9e4f1e74e',uuid=a50f4fa0-c4bd-41c7-be13-6afd972661b6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d72fb045-89f6-4d64-beed-2672b5b1e254", "address": "fa:16:3e:94:6d:17", "network": {"id": "969bd83a-7542-46e3-90f0-1a81f26ba6b8", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1578393838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f5ca0233c1a490aa2d596b88a0ec503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd72fb045-89", "ovs_interfaceid": "d72fb045-89f6-4d64-beed-2672b5b1e254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.258 250273 DEBUG nova.network.os_vif_util [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Converting VIF {"id": "d72fb045-89f6-4d64-beed-2672b5b1e254", "address": "fa:16:3e:94:6d:17", "network": {"id": "969bd83a-7542-46e3-90f0-1a81f26ba6b8", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1578393838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f5ca0233c1a490aa2d596b88a0ec503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd72fb045-89", "ovs_interfaceid": "d72fb045-89f6-4d64-beed-2672b5b1e254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.259 250273 DEBUG nova.network.os_vif_util [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:6d:17,bridge_name='br-int',has_traffic_filtering=True,id=d72fb045-89f6-4d64-beed-2672b5b1e254,network=Network(969bd83a-7542-46e3-90f0-1a81f26ba6b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd72fb045-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.259 250273 DEBUG os_vif [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:6d:17,bridge_name='br-int',has_traffic_filtering=True,id=d72fb045-89f6-4d64-beed-2672b5b1e254,network=Network(969bd83a-7542-46e3-90f0-1a81f26ba6b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd72fb045-89') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.260 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.261 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.261 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.265 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.265 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd72fb045-89, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.266 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd72fb045-89, col_values=(('external_ids', {'iface-id': 'd72fb045-89f6-4d64-beed-2672b5b1e254', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:94:6d:17', 'vm-uuid': 'a50f4fa0-c4bd-41c7-be13-6afd972661b6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.268 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:01 np0005593232 NetworkManager[49057]: <info>  [1769162521.2699] manager: (tapd72fb045-89): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/166)
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.270 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.277 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.278 250273 INFO os_vif [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:6d:17,bridge_name='br-int',has_traffic_filtering=True,id=d72fb045-89f6-4d64-beed-2672b5b1e254,network=Network(969bd83a-7542-46e3-90f0-1a81f26ba6b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd72fb045-89')#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.355 250273 DEBUG nova.virt.libvirt.driver [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.356 250273 DEBUG nova.virt.libvirt.driver [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.356 250273 DEBUG nova.virt.libvirt.driver [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] No VIF found with MAC fa:16:3e:94:6d:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.356 250273 INFO nova.virt.libvirt.driver [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Using config drive#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.378 250273 DEBUG nova.storage.rbd_utils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] rbd image a50f4fa0-c4bd-41c7-be13-6afd972661b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:02:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:02:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:01.803 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:02:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:01.805 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:02:01 np0005593232 nova_compute[250269]: 2026-01-23 10:02:01.808 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:02:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:02.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:02:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:02.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:02 np0005593232 nova_compute[250269]: 2026-01-23 10:02:02.492 250273 INFO nova.virt.libvirt.driver [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Creating config drive at /var/lib/nova/instances/a50f4fa0-c4bd-41c7-be13-6afd972661b6/disk.config#033[00m
Jan 23 05:02:02 np0005593232 nova_compute[250269]: 2026-01-23 10:02:02.498 250273 DEBUG oslo_concurrency.processutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a50f4fa0-c4bd-41c7-be13-6afd972661b6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnthbzdw2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:02:02 np0005593232 nova_compute[250269]: 2026-01-23 10:02:02.632 250273 DEBUG oslo_concurrency.processutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a50f4fa0-c4bd-41c7-be13-6afd972661b6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnthbzdw2" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:02:02 np0005593232 nova_compute[250269]: 2026-01-23 10:02:02.659 250273 DEBUG nova.storage.rbd_utils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] rbd image a50f4fa0-c4bd-41c7-be13-6afd972661b6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:02:02 np0005593232 nova_compute[250269]: 2026-01-23 10:02:02.664 250273 DEBUG oslo_concurrency.processutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a50f4fa0-c4bd-41c7-be13-6afd972661b6/disk.config a50f4fa0-c4bd-41c7-be13-6afd972661b6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:02:02 np0005593232 nova_compute[250269]: 2026-01-23 10:02:02.841 250273 DEBUG oslo_concurrency.processutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a50f4fa0-c4bd-41c7-be13-6afd972661b6/disk.config a50f4fa0-c4bd-41c7-be13-6afd972661b6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:02:02 np0005593232 nova_compute[250269]: 2026-01-23 10:02:02.842 250273 INFO nova.virt.libvirt.driver [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Deleting local config drive /var/lib/nova/instances/a50f4fa0-c4bd-41c7-be13-6afd972661b6/disk.config because it was imported into RBD.#033[00m
Jan 23 05:02:02 np0005593232 kernel: tapd72fb045-89: entered promiscuous mode
Jan 23 05:02:02 np0005593232 NetworkManager[49057]: <info>  [1769162522.8924] manager: (tapd72fb045-89): new Tun device (/org/freedesktop/NetworkManager/Devices/167)
Jan 23 05:02:02 np0005593232 ovn_controller[151001]: 2026-01-23T10:02:02Z|00329|binding|INFO|Claiming lport d72fb045-89f6-4d64-beed-2672b5b1e254 for this chassis.
Jan 23 05:02:02 np0005593232 ovn_controller[151001]: 2026-01-23T10:02:02Z|00330|binding|INFO|d72fb045-89f6-4d64-beed-2672b5b1e254: Claiming fa:16:3e:94:6d:17 10.100.0.5
Jan 23 05:02:02 np0005593232 nova_compute[250269]: 2026-01-23 10:02:02.940 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:02.955 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:6d:17 10.100.0.5'], port_security=['fa:16:3e:94:6d:17 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a50f4fa0-c4bd-41c7-be13-6afd972661b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-969bd83a-7542-46e3-90f0-1a81f26ba6b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0f5ca0233c1a490aa2d596b88a0ec503', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ad8a7362-692a-4044-8393-1c10014f8bab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83406af9-ea42-4cda-96ee-b8c04ab0651a, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=d72fb045-89f6-4d64-beed-2672b5b1e254) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:02:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:02.956 161902 INFO neutron.agent.ovn.metadata.agent [-] Port d72fb045-89f6-4d64-beed-2672b5b1e254 in datapath 969bd83a-7542-46e3-90f0-1a81f26ba6b8 bound to our chassis#033[00m
Jan 23 05:02:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:02.958 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 969bd83a-7542-46e3-90f0-1a81f26ba6b8#033[00m
Jan 23 05:02:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2126: 321 pgs: 321 active+clean; 341 MiB data, 889 MiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Jan 23 05:02:02 np0005593232 systemd-udevd[314546]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:02:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:02.971 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0f0c61bf-de42-4b96-ae63-7b5b7624f2c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:02.972 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap969bd83a-71 in ovnmeta-969bd83a-7542-46e3-90f0-1a81f26ba6b8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:02:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:02.974 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap969bd83a-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:02:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:02.974 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9645f0f6-38e5-447b-8ac3-5826115d7ef1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:02.975 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1fc0db-b124-464e-ba68-768bebb02f7d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:02 np0005593232 systemd-machined[215836]: New machine qemu-40-instance-00000063.
Jan 23 05:02:02 np0005593232 nova_compute[250269]: 2026-01-23 10:02:02.983 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:02 np0005593232 ovn_controller[151001]: 2026-01-23T10:02:02Z|00331|binding|INFO|Setting lport d72fb045-89f6-4d64-beed-2672b5b1e254 ovn-installed in OVS
Jan 23 05:02:02 np0005593232 ovn_controller[151001]: 2026-01-23T10:02:02Z|00332|binding|INFO|Setting lport d72fb045-89f6-4d64-beed-2672b5b1e254 up in Southbound
Jan 23 05:02:02 np0005593232 NetworkManager[49057]: <info>  [1769162522.9873] device (tapd72fb045-89): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:02:02 np0005593232 nova_compute[250269]: 2026-01-23 10:02:02.988 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:02 np0005593232 NetworkManager[49057]: <info>  [1769162522.9884] device (tapd72fb045-89): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:02:02 np0005593232 systemd[1]: Started Virtual Machine qemu-40-instance-00000063.
Jan 23 05:02:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:02.989 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[673dcfe5-6fa3-4aeb-a659-b032698b6c24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:03.006 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[cc10b485-aa03-42f5-a027-83840ba33a12]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:03.037 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[c58caf90-1121-467f-85fd-7e9b0272c8ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:03.044 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[fc1ce325-93c1-4c6e-829b-2bc2551bbc3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:03 np0005593232 NetworkManager[49057]: <info>  [1769162523.0450] manager: (tap969bd83a-70): new Veth device (/org/freedesktop/NetworkManager/Devices/168)
Jan 23 05:02:03 np0005593232 systemd-udevd[314550]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:03.083 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[a5ddcd31-c5cd-4049-b812-0a0be304f532]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:03.087 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[80588002-d4ca-4961-9552-a6d2d84a79b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:03 np0005593232 NetworkManager[49057]: <info>  [1769162523.1100] device (tap969bd83a-70): carrier: link connected
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:03.163 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[0ce18986-61ae-4a24-a5ca-7029d0bb72cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:03.179 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ba0bc1e0-9cc0-42ba-9892-f4ca035eb30c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap969bd83a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:fe:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 101], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643625, 'reachable_time': 24944, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314579, 'error': None, 'target': 'ovnmeta-969bd83a-7542-46e3-90f0-1a81f26ba6b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:03.196 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[dfce5308-39f5-4ea1-94c5-3e2bc358bd5a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe29:fef5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 643625, 'tstamp': 643625}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314580, 'error': None, 'target': 'ovnmeta-969bd83a-7542-46e3-90f0-1a81f26ba6b8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:03.211 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[07194b88-792e-4d2d-9f36-41a23fb2f98c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap969bd83a-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:29:fe:f5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 101], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643625, 'reachable_time': 24944, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314581, 'error': None, 'target': 'ovnmeta-969bd83a-7542-46e3-90f0-1a81f26ba6b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:03.244 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[22291f4a-80b3-47d3-a7a7-7dd3682bab19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:03.307 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8bab084f-7424-4df0-87f9-96c522f67ef6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:03.309 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap969bd83a-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:03.309 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:03.309 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap969bd83a-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:02:03 np0005593232 nova_compute[250269]: 2026-01-23 10:02:03.311 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:03 np0005593232 NetworkManager[49057]: <info>  [1769162523.3120] manager: (tap969bd83a-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/169)
Jan 23 05:02:03 np0005593232 kernel: tap969bd83a-70: entered promiscuous mode
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:03.314 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap969bd83a-70, col_values=(('external_ids', {'iface-id': '9ee89271-3ee7-4672-8800-56bb900c4dd0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:02:03 np0005593232 ovn_controller[151001]: 2026-01-23T10:02:03Z|00333|binding|INFO|Releasing lport 9ee89271-3ee7-4672-8800-56bb900c4dd0 from this chassis (sb_readonly=0)
Jan 23 05:02:03 np0005593232 nova_compute[250269]: 2026-01-23 10:02:03.316 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:03 np0005593232 nova_compute[250269]: 2026-01-23 10:02:03.332 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:03.333 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/969bd83a-7542-46e3-90f0-1a81f26ba6b8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/969bd83a-7542-46e3-90f0-1a81f26ba6b8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:03.334 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[41832d4a-c082-496d-925a-0c9fcd9dacce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:03.335 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-969bd83a-7542-46e3-90f0-1a81f26ba6b8
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/969bd83a-7542-46e3-90f0-1a81f26ba6b8.pid.haproxy
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 969bd83a-7542-46e3-90f0-1a81f26ba6b8
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:02:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:03.336 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-969bd83a-7542-46e3-90f0-1a81f26ba6b8', 'env', 'PROCESS_TAG=haproxy-969bd83a-7542-46e3-90f0-1a81f26ba6b8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/969bd83a-7542-46e3-90f0-1a81f26ba6b8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:02:03 np0005593232 nova_compute[250269]: 2026-01-23 10:02:03.479 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162523.4787874, a50f4fa0-c4bd-41c7-be13-6afd972661b6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:02:03 np0005593232 nova_compute[250269]: 2026-01-23 10:02:03.480 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] VM Started (Lifecycle Event)#033[00m
Jan 23 05:02:03 np0005593232 nova_compute[250269]: 2026-01-23 10:02:03.531 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:02:03 np0005593232 nova_compute[250269]: 2026-01-23 10:02:03.536 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162523.4815228, a50f4fa0-c4bd-41c7-be13-6afd972661b6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:02:03 np0005593232 nova_compute[250269]: 2026-01-23 10:02:03.536 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:02:03 np0005593232 nova_compute[250269]: 2026-01-23 10:02:03.622 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:02:03 np0005593232 nova_compute[250269]: 2026-01-23 10:02:03.625 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:02:03 np0005593232 nova_compute[250269]: 2026-01-23 10:02:03.664 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:02:03 np0005593232 podman[314655]: 2026-01-23 10:02:03.718101349 +0000 UTC m=+0.048700405 container create b77622b8b80eaf2ca08503ba6572deda2be6acc1bb8bf70349590a6a6d1c0004 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-969bd83a-7542-46e3-90f0-1a81f26ba6b8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 23 05:02:03 np0005593232 systemd[1]: Started libpod-conmon-b77622b8b80eaf2ca08503ba6572deda2be6acc1bb8bf70349590a6a6d1c0004.scope.
Jan 23 05:02:03 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:02:03 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46b5a3e0537163772e8b8fba92a087b85c482a7d8ba77dc573abfcdfa7482d23/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:02:03 np0005593232 podman[314655]: 2026-01-23 10:02:03.691283587 +0000 UTC m=+0.021882673 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:02:03 np0005593232 podman[314655]: 2026-01-23 10:02:03.862052339 +0000 UTC m=+0.192651405 container init b77622b8b80eaf2ca08503ba6572deda2be6acc1bb8bf70349590a6a6d1c0004 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-969bd83a-7542-46e3-90f0-1a81f26ba6b8, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:02:03 np0005593232 podman[314655]: 2026-01-23 10:02:03.868578995 +0000 UTC m=+0.199178061 container start b77622b8b80eaf2ca08503ba6572deda2be6acc1bb8bf70349590a6a6d1c0004 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-969bd83a-7542-46e3-90f0-1a81f26ba6b8, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 23 05:02:03 np0005593232 neutron-haproxy-ovnmeta-969bd83a-7542-46e3-90f0-1a81f26ba6b8[314670]: [NOTICE]   (314689) : New worker (314701) forked
Jan 23 05:02:03 np0005593232 neutron-haproxy-ovnmeta-969bd83a-7542-46e3-90f0-1a81f26ba6b8[314670]: [NOTICE]   (314689) : Loading success.
Jan 23 05:02:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:04.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:04.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.609 250273 DEBUG nova.compute.manager [req-84429651-59ad-47ff-a876-14397456f237 req-a0e58d06-b8ab-4a9e-9ed9-0eb77e264a8c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Received event network-vif-plugged-d72fb045-89f6-4d64-beed-2672b5b1e254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.610 250273 DEBUG oslo_concurrency.lockutils [req-84429651-59ad-47ff-a876-14397456f237 req-a0e58d06-b8ab-4a9e-9ed9-0eb77e264a8c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "a50f4fa0-c4bd-41c7-be13-6afd972661b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.610 250273 DEBUG oslo_concurrency.lockutils [req-84429651-59ad-47ff-a876-14397456f237 req-a0e58d06-b8ab-4a9e-9ed9-0eb77e264a8c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a50f4fa0-c4bd-41c7-be13-6afd972661b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.610 250273 DEBUG oslo_concurrency.lockutils [req-84429651-59ad-47ff-a876-14397456f237 req-a0e58d06-b8ab-4a9e-9ed9-0eb77e264a8c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a50f4fa0-c4bd-41c7-be13-6afd972661b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.610 250273 DEBUG nova.compute.manager [req-84429651-59ad-47ff-a876-14397456f237 req-a0e58d06-b8ab-4a9e-9ed9-0eb77e264a8c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Processing event network-vif-plugged-d72fb045-89f6-4d64-beed-2672b5b1e254 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.611 250273 DEBUG nova.compute.manager [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.616 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162524.6162462, a50f4fa0-c4bd-41c7-be13-6afd972661b6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.617 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.618 250273 DEBUG nova.virt.libvirt.driver [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.621 250273 INFO nova.virt.libvirt.driver [-] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Instance spawned successfully.#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.621 250273 DEBUG nova.virt.libvirt.driver [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.680 250273 DEBUG nova.virt.libvirt.driver [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.681 250273 DEBUG nova.virt.libvirt.driver [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.681 250273 DEBUG nova.virt.libvirt.driver [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.682 250273 DEBUG nova.virt.libvirt.driver [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.682 250273 DEBUG nova.virt.libvirt.driver [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.682 250273 DEBUG nova.virt.libvirt.driver [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.719 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.722 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.729 250273 DEBUG nova.network.neutron [req-35826c32-c238-46b0-ae7b-42bae118ce04 req-470e53b9-45e2-4282-b254-48fc40f8f0e8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Updated VIF entry in instance network info cache for port d72fb045-89f6-4d64-beed-2672b5b1e254. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.730 250273 DEBUG nova.network.neutron [req-35826c32-c238-46b0-ae7b-42bae118ce04 req-470e53b9-45e2-4282-b254-48fc40f8f0e8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Updating instance_info_cache with network_info: [{"id": "d72fb045-89f6-4d64-beed-2672b5b1e254", "address": "fa:16:3e:94:6d:17", "network": {"id": "969bd83a-7542-46e3-90f0-1a81f26ba6b8", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1578393838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f5ca0233c1a490aa2d596b88a0ec503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd72fb045-89", "ovs_interfaceid": "d72fb045-89f6-4d64-beed-2672b5b1e254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.770 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.775 250273 DEBUG oslo_concurrency.lockutils [req-35826c32-c238-46b0-ae7b-42bae118ce04 req-470e53b9-45e2-4282-b254-48fc40f8f0e8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-a50f4fa0-c4bd-41c7-be13-6afd972661b6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.797 250273 INFO nova.compute.manager [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Took 15.11 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.798 250273 DEBUG nova.compute.manager [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.931 250273 INFO nova.compute.manager [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Took 18.08 seconds to build instance.#033[00m
Jan 23 05:02:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2127: 321 pgs: 321 active+clean; 341 MiB data, 889 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 111 op/s
Jan 23 05:02:04 np0005593232 nova_compute[250269]: 2026-01-23 10:02:04.998 250273 DEBUG oslo_concurrency.lockutils [None req-9e52b40a-45f9-4396-8f8d-5236eecd26a4 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Lock "a50f4fa0-c4bd-41c7-be13-6afd972661b6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:02:05 np0005593232 nova_compute[250269]: 2026-01-23 10:02:05.685 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:02:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:06.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:02:06 np0005593232 nova_compute[250269]: 2026-01-23 10:02:06.268 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:06.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:02:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2128: 321 pgs: 321 active+clean; 341 MiB data, 889 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.6 MiB/s wr, 167 op/s
Jan 23 05:02:07 np0005593232 podman[314738]: 2026-01-23 10:02:07.432575678 +0000 UTC m=+0.086589842 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Jan 23 05:02:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:02:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:02:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:02:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:02:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:02:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:02:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:08.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:02:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:08.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:02:08 np0005593232 nova_compute[250269]: 2026-01-23 10:02:08.692 250273 DEBUG nova.compute.manager [req-7a9d6e52-26bf-493f-b5cf-f464f3497717 req-88a75993-9b20-4d0e-a723-cb7a0db8f9db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Received event network-vif-plugged-d72fb045-89f6-4d64-beed-2672b5b1e254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:02:08 np0005593232 nova_compute[250269]: 2026-01-23 10:02:08.693 250273 DEBUG oslo_concurrency.lockutils [req-7a9d6e52-26bf-493f-b5cf-f464f3497717 req-88a75993-9b20-4d0e-a723-cb7a0db8f9db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "a50f4fa0-c4bd-41c7-be13-6afd972661b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:02:08 np0005593232 nova_compute[250269]: 2026-01-23 10:02:08.693 250273 DEBUG oslo_concurrency.lockutils [req-7a9d6e52-26bf-493f-b5cf-f464f3497717 req-88a75993-9b20-4d0e-a723-cb7a0db8f9db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a50f4fa0-c4bd-41c7-be13-6afd972661b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:02:08 np0005593232 nova_compute[250269]: 2026-01-23 10:02:08.694 250273 DEBUG oslo_concurrency.lockutils [req-7a9d6e52-26bf-493f-b5cf-f464f3497717 req-88a75993-9b20-4d0e-a723-cb7a0db8f9db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a50f4fa0-c4bd-41c7-be13-6afd972661b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:02:08 np0005593232 nova_compute[250269]: 2026-01-23 10:02:08.694 250273 DEBUG nova.compute.manager [req-7a9d6e52-26bf-493f-b5cf-f464f3497717 req-88a75993-9b20-4d0e-a723-cb7a0db8f9db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] No waiting events found dispatching network-vif-plugged-d72fb045-89f6-4d64-beed-2672b5b1e254 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:02:08 np0005593232 nova_compute[250269]: 2026-01-23 10:02:08.694 250273 WARNING nova.compute.manager [req-7a9d6e52-26bf-493f-b5cf-f464f3497717 req-88a75993-9b20-4d0e-a723-cb7a0db8f9db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Received unexpected event network-vif-plugged-d72fb045-89f6-4d64-beed-2672b5b1e254 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:02:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2129: 321 pgs: 321 active+clean; 341 MiB data, 889 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 188 op/s
Jan 23 05:02:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:09.806 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:02:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:10.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:02:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:10.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:02:10 np0005593232 nova_compute[250269]: 2026-01-23 10:02:10.687 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2130: 321 pgs: 321 active+clean; 341 MiB data, 889 MiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 30 KiB/s wr, 148 op/s
Jan 23 05:02:11 np0005593232 nova_compute[250269]: 2026-01-23 10:02:11.270 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:02:11 np0005593232 nova_compute[250269]: 2026-01-23 10:02:11.884 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:12.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:02:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:12.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:02:12 np0005593232 podman[314768]: 2026-01-23 10:02:12.409752986 +0000 UTC m=+0.069275469 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 23 05:02:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2131: 321 pgs: 321 active+clean; 343 MiB data, 892 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 545 KiB/s wr, 171 op/s
Jan 23 05:02:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:02:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:14.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:02:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:14.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2132: 321 pgs: 321 active+clean; 343 MiB data, 892 MiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 517 KiB/s wr, 115 op/s
Jan 23 05:02:15 np0005593232 nova_compute[250269]: 2026-01-23 10:02:15.688 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:16.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:16 np0005593232 nova_compute[250269]: 2026-01-23 10:02:16.301 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:02:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:16.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:02:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:02:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2133: 321 pgs: 321 active+clean; 416 MiB data, 931 MiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 3.6 MiB/s wr, 217 op/s
Jan 23 05:02:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:02:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:18.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:02:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:18.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2134: 321 pgs: 321 active+clean; 479 MiB data, 976 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 7.1 MiB/s wr, 259 op/s
Jan 23 05:02:19 np0005593232 ovn_controller[151001]: 2026-01-23T10:02:19Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:94:6d:17 10.100.0.5
Jan 23 05:02:19 np0005593232 ovn_controller[151001]: 2026-01-23T10:02:19Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:94:6d:17 10.100.0.5
Jan 23 05:02:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:20.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:02:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:02:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:02:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:02:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:20.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:20 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:02:20 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:02:20 np0005593232 nova_compute[250269]: 2026-01-23 10:02:20.513 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Acquiring lock "799fa81c-31fa-4706-ba6f-5918fddd4caa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:02:20 np0005593232 nova_compute[250269]: 2026-01-23 10:02:20.515 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "799fa81c-31fa-4706-ba6f-5918fddd4caa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:02:20 np0005593232 nova_compute[250269]: 2026-01-23 10:02:20.552 250273 DEBUG nova.compute.manager [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:02:20 np0005593232 nova_compute[250269]: 2026-01-23 10:02:20.690 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:20 np0005593232 nova_compute[250269]: 2026-01-23 10:02:20.718 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Acquiring lock "6679cffe-216f-4ac9-87ef-45526b43ad12" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:02:20 np0005593232 nova_compute[250269]: 2026-01-23 10:02:20.719 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "6679cffe-216f-4ac9-87ef-45526b43ad12" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:02:20 np0005593232 nova_compute[250269]: 2026-01-23 10:02:20.746 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:02:20 np0005593232 nova_compute[250269]: 2026-01-23 10:02:20.747 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:02:20 np0005593232 nova_compute[250269]: 2026-01-23 10:02:20.755 250273 DEBUG nova.virt.hardware [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:02:20 np0005593232 nova_compute[250269]: 2026-01-23 10:02:20.755 250273 INFO nova.compute.claims [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:02:20 np0005593232 nova_compute[250269]: 2026-01-23 10:02:20.761 250273 DEBUG nova.compute.manager [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:02:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2135: 321 pgs: 321 active+clean; 479 MiB data, 976 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 7.1 MiB/s wr, 223 op/s
Jan 23 05:02:20 np0005593232 nova_compute[250269]: 2026-01-23 10:02:20.997 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:02:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 05:02:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:02:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 05:02:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:02:21 np0005593232 nova_compute[250269]: 2026-01-23 10:02:21.303 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:21 np0005593232 nova_compute[250269]: 2026-01-23 10:02:21.352 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:02:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:02:21 np0005593232 podman[315200]: 2026-01-23 10:02:21.645776282 +0000 UTC m=+0.040985236 container create 8eb1c617bc5ecfe5b86284391a8b5b2e97f0180eb2d37a9e050ea79e2ffaa144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_swirles, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:02:21 np0005593232 systemd[1]: Started libpod-conmon-8eb1c617bc5ecfe5b86284391a8b5b2e97f0180eb2d37a9e050ea79e2ffaa144.scope.
Jan 23 05:02:21 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:02:21 np0005593232 podman[315200]: 2026-01-23 10:02:21.625931238 +0000 UTC m=+0.021140212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:02:21 np0005593232 podman[315200]: 2026-01-23 10:02:21.750366604 +0000 UTC m=+0.145575608 container init 8eb1c617bc5ecfe5b86284391a8b5b2e97f0180eb2d37a9e050ea79e2ffaa144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_swirles, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:02:21 np0005593232 podman[315200]: 2026-01-23 10:02:21.758210737 +0000 UTC m=+0.153419691 container start 8eb1c617bc5ecfe5b86284391a8b5b2e97f0180eb2d37a9e050ea79e2ffaa144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_swirles, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 05:02:21 np0005593232 podman[315200]: 2026-01-23 10:02:21.76291627 +0000 UTC m=+0.158125244 container attach 8eb1c617bc5ecfe5b86284391a8b5b2e97f0180eb2d37a9e050ea79e2ffaa144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:02:21 np0005593232 epic_swirles[315216]: 167 167
Jan 23 05:02:21 np0005593232 systemd[1]: libpod-8eb1c617bc5ecfe5b86284391a8b5b2e97f0180eb2d37a9e050ea79e2ffaa144.scope: Deactivated successfully.
Jan 23 05:02:21 np0005593232 conmon[315216]: conmon 8eb1c617bc5ecfe5b862 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8eb1c617bc5ecfe5b86284391a8b5b2e97f0180eb2d37a9e050ea79e2ffaa144.scope/container/memory.events
Jan 23 05:02:21 np0005593232 podman[315200]: 2026-01-23 10:02:21.767329816 +0000 UTC m=+0.162538780 container died 8eb1c617bc5ecfe5b86284391a8b5b2e97f0180eb2d37a9e050ea79e2ffaa144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_swirles, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 05:02:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:02:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2096474095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:02:21 np0005593232 systemd[1]: var-lib-containers-storage-overlay-10e90bb82bda5fecdcc83192edfbadeba28614bdeb31d0c44b79df5dd06bb686-merged.mount: Deactivated successfully.
Jan 23 05:02:21 np0005593232 nova_compute[250269]: 2026-01-23 10:02:21.811 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:02:21 np0005593232 podman[315200]: 2026-01-23 10:02:21.814323261 +0000 UTC m=+0.209532215 container remove 8eb1c617bc5ecfe5b86284391a8b5b2e97f0180eb2d37a9e050ea79e2ffaa144 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:02:21 np0005593232 nova_compute[250269]: 2026-01-23 10:02:21.819 250273 DEBUG nova.compute.provider_tree [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:02:21 np0005593232 systemd[1]: libpod-conmon-8eb1c617bc5ecfe5b86284391a8b5b2e97f0180eb2d37a9e050ea79e2ffaa144.scope: Deactivated successfully.
Jan 23 05:02:21 np0005593232 nova_compute[250269]: 2026-01-23 10:02:21.887 250273 DEBUG nova.scheduler.client.report [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:02:21 np0005593232 nova_compute[250269]: 2026-01-23 10:02:21.934 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:02:21 np0005593232 nova_compute[250269]: 2026-01-23 10:02:21.935 250273 DEBUG nova.compute.manager [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:02:21 np0005593232 nova_compute[250269]: 2026-01-23 10:02:21.937 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:02:21 np0005593232 nova_compute[250269]: 2026-01-23 10:02:21.951 250273 DEBUG nova.virt.hardware [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:02:21 np0005593232 nova_compute[250269]: 2026-01-23 10:02:21.951 250273 INFO nova.compute.claims [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:02:22 np0005593232 podman[315242]: 2026-01-23 10:02:22.002489178 +0000 UTC m=+0.048497889 container create 43bf027fad3adca9885327510edd8cc14ac725e7dc8a04273fefe225457b5fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_germain, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:02:22 np0005593232 systemd[1]: Started libpod-conmon-43bf027fad3adca9885327510edd8cc14ac725e7dc8a04273fefe225457b5fb7.scope.
Jan 23 05:02:22 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:02:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f898782c9c05f688148194b26578fb641034224c72e0ab930a94a52f55936f25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:02:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f898782c9c05f688148194b26578fb641034224c72e0ab930a94a52f55936f25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:02:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f898782c9c05f688148194b26578fb641034224c72e0ab930a94a52f55936f25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:02:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f898782c9c05f688148194b26578fb641034224c72e0ab930a94a52f55936f25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:02:22 np0005593232 podman[315242]: 2026-01-23 10:02:21.980959346 +0000 UTC m=+0.026968087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:02:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:22.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:22 np0005593232 podman[315242]: 2026-01-23 10:02:22.087050151 +0000 UTC m=+0.133058862 container init 43bf027fad3adca9885327510edd8cc14ac725e7dc8a04273fefe225457b5fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_germain, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 05:02:22 np0005593232 podman[315242]: 2026-01-23 10:02:22.092922468 +0000 UTC m=+0.138931179 container start 43bf027fad3adca9885327510edd8cc14ac725e7dc8a04273fefe225457b5fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_germain, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 05:02:22 np0005593232 podman[315242]: 2026-01-23 10:02:22.096656874 +0000 UTC m=+0.142665585 container attach 43bf027fad3adca9885327510edd8cc14ac725e7dc8a04273fefe225457b5fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_germain, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 05:02:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:02:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:02:22 np0005593232 nova_compute[250269]: 2026-01-23 10:02:22.106 250273 DEBUG nova.compute.manager [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 05:02:22 np0005593232 nova_compute[250269]: 2026-01-23 10:02:22.106 250273 DEBUG nova.network.neutron [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 05:02:22 np0005593232 nova_compute[250269]: 2026-01-23 10:02:22.203 250273 INFO nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 05:02:22 np0005593232 nova_compute[250269]: 2026-01-23 10:02:22.239 250273 DEBUG nova.compute.manager [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 05:02:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:22.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
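The `beast:` lines from radosgw above follow a stable access-log shape (client, user, timestamp, request, status, bytes). A minimal sketch of pulling fields out of such a line with a regular expression; the pattern is an assumption fitted to the lines shown here, not radosgw's documented format:

```python
import re

# Fields as they appear in the beast access-log lines above.
BEAST = re.compile(
    r'beast: \S+: (?P<addr>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)')

line = ('beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous '
        '[23/Jan/2026:10:02:22.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')
m = BEAST.search(line)
```

Useful for quickly tallying status codes or client addresses when grepping these journals.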
Jan 23 05:02:22 np0005593232 nova_compute[250269]: 2026-01-23 10:02:22.471 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:02:22 np0005593232 nova_compute[250269]: 2026-01-23 10:02:22.534 250273 DEBUG nova.compute.manager [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 05:02:22 np0005593232 nova_compute[250269]: 2026-01-23 10:02:22.536 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 05:02:22 np0005593232 nova_compute[250269]: 2026-01-23 10:02:22.536 250273 INFO nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Creating image(s)
Jan 23 05:02:22 np0005593232 nova_compute[250269]: 2026-01-23 10:02:22.562 250273 DEBUG nova.storage.rbd_utils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] rbd image 799fa81c-31fa-4706-ba6f-5918fddd4caa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:02:22 np0005593232 nova_compute[250269]: 2026-01-23 10:02:22.590 250273 DEBUG nova.storage.rbd_utils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] rbd image 799fa81c-31fa-4706-ba6f-5918fddd4caa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:02:22 np0005593232 nova_compute[250269]: 2026-01-23 10:02:22.630 250273 DEBUG nova.storage.rbd_utils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] rbd image 799fa81c-31fa-4706-ba6f-5918fddd4caa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:02:22 np0005593232 nova_compute[250269]: 2026-01-23 10:02:22.636 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:02:22 np0005593232 nova_compute[250269]: 2026-01-23 10:02:22.705 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:02:22 np0005593232 nova_compute[250269]: 2026-01-23 10:02:22.706 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:02:22 np0005593232 nova_compute[250269]: 2026-01-23 10:02:22.707 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:02:22 np0005593232 nova_compute[250269]: 2026-01-23 10:02:22.708 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:02:22 np0005593232 nova_compute[250269]: 2026-01-23 10:02:22.739 250273 DEBUG nova.storage.rbd_utils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] rbd image 799fa81c-31fa-4706-ba6f-5918fddd4caa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:02:22 np0005593232 nova_compute[250269]: 2026-01-23 10:02:22.743 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 799fa81c-31fa-4706-ba6f-5918fddd4caa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:02:22 np0005593232 nova_compute[250269]: 2026-01-23 10:02:22.770 250273 DEBUG nova.policy [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9127d08a3bf5404e8cb8c84ed7152834', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '449f402258804f41b10f91a13da1176d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 23 05:02:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:02:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3340273406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:02:22 np0005593232 nova_compute[250269]: 2026-01-23 10:02:22.955 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
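Nova runs `ceph df --format=json` here to learn the cluster's capacity for the RBD image backend. A sketch of extracting free space from that output; the `"stats"`/`"total_avail_bytes"` keys match the Reef-era JSON schema and should be treated as an assumption on other releases:

```python
import json

def avail_bytes(ceph_df_output: str) -> int:
    # `ceph df --format=json` puts cluster-wide totals under "stats";
    # pool-level figures live under a separate "pools" list.
    return json.loads(ceph_df_output)["stats"]["total_avail_bytes"]

# Sample shaped like the pgmap line above (~20 GiB free of 21 GiB).
sample = json.dumps({"stats": {
    "total_bytes": 21 * 2**30,
    "total_used_raw_bytes": 986 * 2**20,
    "total_avail_bytes": 20 * 2**30,
}})
free = avail_bytes(sample)
```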
Jan 23 05:02:22 np0005593232 nova_compute[250269]: 2026-01-23 10:02:22.965 250273 DEBUG nova.compute.provider_tree [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 05:02:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2136: 321 pgs: 321 active+clean; 500 MiB data, 986 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 7.8 MiB/s wr, 254 op/s
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.001 250273 DEBUG nova.scheduler.client.report [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
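The "Inventory has not changed" message means Nova compared its current view of the provider's per-resource-class records against what it would report, and skipped the PUT to placement because they matched. A simplified stand-in for that comparison (the real provider-tree code tracks generations and more, but the equality check itself is a dict comparison):

```python
def inventory_changed(current: dict, new: dict) -> bool:
    # Only push inventory to placement when the records actually differ;
    # plain dict equality covers totals, reservations, and ratios alike.
    return current != new

# One resource-class record, copied from the VCPU inventory logged above.
vcpu = {"total": 8, "reserved": 0, "min_unit": 1, "max_unit": 8,
        "step_size": 1, "allocation_ratio": 4.0}
```

Skipping unchanged updates keeps the periodic resource audit from hammering the placement API.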
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.029 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.030 250273 DEBUG nova.compute.manager [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.122 250273 DEBUG nova.compute.manager [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.123 250273 DEBUG nova.network.neutron [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.155 250273 INFO nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.191 250273 DEBUG nova.compute.manager [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 05:02:23 np0005593232 lucid_germain[315259]: [
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:    {
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:        "available": false,
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:        "ceph_device": false,
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:        "lsm_data": {},
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:        "lvs": [],
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:        "path": "/dev/sr0",
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:        "rejected_reasons": [
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "Insufficient space (<5GB)",
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "Has a FileSystem"
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:        ],
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:        "sys_api": {
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "actuators": null,
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "device_nodes": "sr0",
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "devname": "sr0",
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "human_readable_size": "482.00 KB",
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "id_bus": "ata",
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "model": "QEMU DVD-ROM",
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "nr_requests": "2",
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "parent": "/dev/sr0",
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "partitions": {},
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "path": "/dev/sr0",
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "removable": "1",
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "rev": "2.5+",
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "ro": "0",
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "rotational": "1",
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "sas_address": "",
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "sas_device_handle": "",
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "scheduler_mode": "mq-deadline",
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "sectors": 0,
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "sectorsize": "2048",
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "size": 493568.0,
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "support_discard": "2048",
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "type": "disk",
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:            "vendor": "QEMU"
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:        }
Jan 23 05:02:23 np0005593232 lucid_germain[315259]:    }
Jan 23 05:02:23 np0005593232 lucid_germain[315259]: ]
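The JSON array printed by the short-lived `lucid_germain` container is a `ceph-volume inventory`-style device report, which cephadm stores via the `config-key set …devices.0` commands below. A sketch of filtering such a payload down to deployable devices; only the `"available"` and `"path"` keys are relied on, and `/dev/vdb` in the sample is a hypothetical usable disk added for illustration:

```python
import json

def usable_paths(inventory_json: str) -> list:
    # Devices with "available": true are candidates for OSD deployment;
    # everything else carries its rejected_reasons, as /dev/sr0 does above.
    return [d["path"] for d in json.loads(inventory_json) if d.get("available")]

sample = json.dumps([
    {"path": "/dev/sr0", "available": False,
     "rejected_reasons": ["Insufficient space (<5GB)", "Has a FileSystem"]},
    {"path": "/dev/vdb", "available": True, "rejected_reasons": []},  # hypothetical
])
```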
Jan 23 05:02:23 np0005593232 systemd[1]: libpod-43bf027fad3adca9885327510edd8cc14ac725e7dc8a04273fefe225457b5fb7.scope: Deactivated successfully.
Jan 23 05:02:23 np0005593232 systemd[1]: libpod-43bf027fad3adca9885327510edd8cc14ac725e7dc8a04273fefe225457b5fb7.scope: Consumed 1.198s CPU time.
Jan 23 05:02:23 np0005593232 podman[315242]: 2026-01-23 10:02:23.315730694 +0000 UTC m=+1.361739405 container died 43bf027fad3adca9885327510edd8cc14ac725e7dc8a04273fefe225457b5fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:02:23 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f898782c9c05f688148194b26578fb641034224c72e0ab930a94a52f55936f25-merged.mount: Deactivated successfully.
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.383 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 799fa81c-31fa-4706-ba6f-5918fddd4caa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:02:23 np0005593232 podman[315242]: 2026-01-23 10:02:23.384037215 +0000 UTC m=+1.430045926 container remove 43bf027fad3adca9885327510edd8cc14ac725e7dc8a04273fefe225457b5fb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 05:02:23 np0005593232 systemd[1]: libpod-conmon-43bf027fad3adca9885327510edd8cc14ac725e7dc8a04273fefe225457b5fb7.scope: Deactivated successfully.
Jan 23 05:02:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:02:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:02:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:02:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.485 250273 DEBUG nova.storage.rbd_utils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] resizing rbd image 799fa81c-31fa-4706-ba6f-5918fddd4caa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
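The root-disk flow for the RBD backend is visible above: import the cached base image into the `vms` pool, then resize the resulting RBD image to the flavor's disk size. A sketch that rebuilds the logged `rbd import` argv; the helper name is illustrative, but the flags mirror the command oslo_concurrency ran:

```python
def rbd_import_cmd(pool: str, base: str, image: str,
                   client_id: str = "openstack",
                   conf: str = "/etc/ceph/ceph.conf") -> list:
    # Same shape as the subprocess logged by nova_compute; the resize to
    # the flavor size happens afterwards via a separate librbd call.
    return ["rbd", "import", "--pool", pool, base, image,
            "--image-format=2", "--id", client_id, "--conf", conf]

cmd = rbd_import_cmd(
    "vms",
    "/var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2",
    "799fa81c-31fa-4706-ba6f-5918fddd4caa_disk",
)
```

Passing the argv as a list (rather than a shell string) avoids quoting issues if this were handed to `subprocess.run`.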
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.521 250273 DEBUG nova.compute.manager [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.523 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.524 250273 INFO nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Creating image(s)
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.554 250273 DEBUG nova.storage.rbd_utils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] rbd image 6679cffe-216f-4ac9-87ef-45526b43ad12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.588 250273 DEBUG nova.storage.rbd_utils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] rbd image 6679cffe-216f-4ac9-87ef-45526b43ad12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.625 250273 DEBUG nova.storage.rbd_utils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] rbd image 6679cffe-216f-4ac9-87ef-45526b43ad12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.630 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:02:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 05:02:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:02:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 05:02:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:02:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:02:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:02:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:02:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:02:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:02:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.808 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.810 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:02:23 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c9417da7-8ccf-4928-975c-50289a158f46 does not exist
Jan 23 05:02:23 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a998c199-2cc6-4732-8ecf-133d47c09313 does not exist
Jan 23 05:02:23 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 328e0a55-f139-4a48-98b5-5e1aa7224753 does not exist
Jan 23 05:02:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:02:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.811 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.812 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:02:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:02:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:02:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:02:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.840 250273 DEBUG nova.storage.rbd_utils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] rbd image 6679cffe-216f-4ac9-87ef-45526b43ad12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.845 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 6679cffe-216f-4ac9-87ef-45526b43ad12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.875 250273 DEBUG nova.objects.instance [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lazy-loading 'migration_context' on Instance uuid 799fa81c-31fa-4706-ba6f-5918fddd4caa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.919 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.920 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Ensure instance console log exists: /var/lib/nova/instances/799fa81c-31fa-4706-ba6f-5918fddd4caa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.921 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.921 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:02:23 np0005593232 nova_compute[250269]: 2026-01-23 10:02:23.922 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:02:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:24.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:24 np0005593232 nova_compute[250269]: 2026-01-23 10:02:24.144 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 6679cffe-216f-4ac9-87ef-45526b43ad12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:02:24 np0005593232 nova_compute[250269]: 2026-01-23 10:02:24.207 250273 DEBUG nova.storage.rbd_utils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] resizing rbd image 6679cffe-216f-4ac9-87ef-45526b43ad12_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 23 05:02:24 np0005593232 nova_compute[250269]: 2026-01-23 10:02:24.323 250273 DEBUG nova.objects.instance [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lazy-loading 'migration_context' on Instance uuid 6679cffe-216f-4ac9-87ef-45526b43ad12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 05:02:24 np0005593232 nova_compute[250269]: 2026-01-23 10:02:24.352 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 05:02:24 np0005593232 nova_compute[250269]: 2026-01-23 10:02:24.353 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Ensure instance console log exists: /var/lib/nova/instances/6679cffe-216f-4ac9-87ef-45526b43ad12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 05:02:24 np0005593232 nova_compute[250269]: 2026-01-23 10:02:24.353 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:02:24 np0005593232 nova_compute[250269]: 2026-01-23 10:02:24.354 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:02:24 np0005593232 nova_compute[250269]: 2026-01-23 10:02:24.354 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:02:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:24.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:24 np0005593232 podman[316990]: 2026-01-23 10:02:24.422794501 +0000 UTC m=+0.039527814 container create de04352c0ebeb342a763fb4bf95afa754014037f2664a829a30e954c3c8c8c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:02:24 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:02:24 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:02:24 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:02:24 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:02:24 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:02:24 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:02:24 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:02:24 np0005593232 systemd[1]: Started libpod-conmon-de04352c0ebeb342a763fb4bf95afa754014037f2664a829a30e954c3c8c8c31.scope.
Jan 23 05:02:24 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:02:24 np0005593232 podman[316990]: 2026-01-23 10:02:24.406474347 +0000 UTC m=+0.023207680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:02:24 np0005593232 podman[316990]: 2026-01-23 10:02:24.528500355 +0000 UTC m=+0.145233698 container init de04352c0ebeb342a763fb4bf95afa754014037f2664a829a30e954c3c8c8c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_driscoll, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 23 05:02:24 np0005593232 podman[316990]: 2026-01-23 10:02:24.53606356 +0000 UTC m=+0.152796873 container start de04352c0ebeb342a763fb4bf95afa754014037f2664a829a30e954c3c8c8c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_driscoll, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 23 05:02:24 np0005593232 youthful_driscoll[317007]: 167 167
Jan 23 05:02:24 np0005593232 systemd[1]: libpod-de04352c0ebeb342a763fb4bf95afa754014037f2664a829a30e954c3c8c8c31.scope: Deactivated successfully.
Jan 23 05:02:24 np0005593232 podman[316990]: 2026-01-23 10:02:24.547124674 +0000 UTC m=+0.163858017 container attach de04352c0ebeb342a763fb4bf95afa754014037f2664a829a30e954c3c8c8c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:02:24 np0005593232 podman[316990]: 2026-01-23 10:02:24.548468262 +0000 UTC m=+0.165201585 container died de04352c0ebeb342a763fb4bf95afa754014037f2664a829a30e954c3c8c8c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:02:24 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Jan 23 05:02:24 np0005593232 systemd[1]: var-lib-containers-storage-overlay-584eebe98b6fe4d03a314abf225e3af4a31bed538f09f470c5c8eb755e3740ff-merged.mount: Deactivated successfully.
Jan 23 05:02:24 np0005593232 podman[316990]: 2026-01-23 10:02:24.637497872 +0000 UTC m=+0.254231185 container remove de04352c0ebeb342a763fb4bf95afa754014037f2664a829a30e954c3c8c8c31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:02:24 np0005593232 systemd[1]: libpod-conmon-de04352c0ebeb342a763fb4bf95afa754014037f2664a829a30e954c3c8c8c31.scope: Deactivated successfully.
Jan 23 05:02:24 np0005593232 podman[317032]: 2026-01-23 10:02:24.819581546 +0000 UTC m=+0.039315978 container create 739c73e7e84cb35831197b419f73cd7ab37b2b7e7d97fcc726b84adb1052833d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 23 05:02:24 np0005593232 systemd[1]: Started libpod-conmon-739c73e7e84cb35831197b419f73cd7ab37b2b7e7d97fcc726b84adb1052833d.scope.
Jan 23 05:02:24 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:02:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce81ca6e30d022d4297bd5459e949421b933aaad4f02393251381707aa3600a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:02:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce81ca6e30d022d4297bd5459e949421b933aaad4f02393251381707aa3600a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:02:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce81ca6e30d022d4297bd5459e949421b933aaad4f02393251381707aa3600a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:02:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce81ca6e30d022d4297bd5459e949421b933aaad4f02393251381707aa3600a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:02:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce81ca6e30d022d4297bd5459e949421b933aaad4f02393251381707aa3600a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:02:24 np0005593232 podman[317032]: 2026-01-23 10:02:24.803699335 +0000 UTC m=+0.023433787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:02:24 np0005593232 podman[317032]: 2026-01-23 10:02:24.905682853 +0000 UTC m=+0.125417315 container init 739c73e7e84cb35831197b419f73cd7ab37b2b7e7d97fcc726b84adb1052833d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 05:02:24 np0005593232 podman[317032]: 2026-01-23 10:02:24.912141856 +0000 UTC m=+0.131876288 container start 739c73e7e84cb35831197b419f73cd7ab37b2b7e7d97fcc726b84adb1052833d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_villani, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:02:24 np0005593232 podman[317032]: 2026-01-23 10:02:24.916063438 +0000 UTC m=+0.135797910 container attach 739c73e7e84cb35831197b419f73cd7ab37b2b7e7d97fcc726b84adb1052833d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 05:02:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2137: 321 pgs: 321 active+clean; 500 MiB data, 986 MiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 7.3 MiB/s wr, 231 op/s
Jan 23 05:02:25 np0005593232 nova_compute[250269]: 2026-01-23 10:02:25.049 250273 DEBUG nova.policy [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9127d08a3bf5404e8cb8c84ed7152834', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '449f402258804f41b10f91a13da1176d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 23 05:02:25 np0005593232 nova_compute[250269]: 2026-01-23 10:02:25.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:02:25 np0005593232 nova_compute[250269]: 2026-01-23 10:02:25.694 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:02:25 np0005593232 vigorous_villani[317049]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:02:25 np0005593232 vigorous_villani[317049]: --> relative data size: 1.0
Jan 23 05:02:25 np0005593232 vigorous_villani[317049]: --> All data devices are unavailable
Jan 23 05:02:25 np0005593232 systemd[1]: libpod-739c73e7e84cb35831197b419f73cd7ab37b2b7e7d97fcc726b84adb1052833d.scope: Deactivated successfully.
Jan 23 05:02:25 np0005593232 podman[317064]: 2026-01-23 10:02:25.836854402 +0000 UTC m=+0.024618990 container died 739c73e7e84cb35831197b419f73cd7ab37b2b7e7d97fcc726b84adb1052833d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:02:25 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ce81ca6e30d022d4297bd5459e949421b933aaad4f02393251381707aa3600a3-merged.mount: Deactivated successfully.
Jan 23 05:02:25 np0005593232 podman[317064]: 2026-01-23 10:02:25.886487963 +0000 UTC m=+0.074252521 container remove 739c73e7e84cb35831197b419f73cd7ab37b2b7e7d97fcc726b84adb1052833d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:02:25 np0005593232 systemd[1]: libpod-conmon-739c73e7e84cb35831197b419f73cd7ab37b2b7e7d97fcc726b84adb1052833d.scope: Deactivated successfully.
Jan 23 05:02:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:26.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:26 np0005593232 nova_compute[250269]: 2026-01-23 10:02:26.305 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:02:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:26.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:26 np0005593232 podman[317220]: 2026-01-23 10:02:26.473229476 +0000 UTC m=+0.042198231 container create 1225d1ddee0aa56ca4c858111cc360f068dccdf7fd78296ac63f02e558fc0124 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_noether, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:02:26 np0005593232 systemd[1]: Started libpod-conmon-1225d1ddee0aa56ca4c858111cc360f068dccdf7fd78296ac63f02e558fc0124.scope.
Jan 23 05:02:26 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:02:26 np0005593232 podman[317220]: 2026-01-23 10:02:26.542492664 +0000 UTC m=+0.111461439 container init 1225d1ddee0aa56ca4c858111cc360f068dccdf7fd78296ac63f02e558fc0124 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_noether, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:02:26 np0005593232 podman[317220]: 2026-01-23 10:02:26.453817644 +0000 UTC m=+0.022786459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:02:26 np0005593232 podman[317220]: 2026-01-23 10:02:26.548376451 +0000 UTC m=+0.117345216 container start 1225d1ddee0aa56ca4c858111cc360f068dccdf7fd78296ac63f02e558fc0124 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_noether, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:02:26 np0005593232 great_noether[317236]: 167 167
Jan 23 05:02:26 np0005593232 podman[317220]: 2026-01-23 10:02:26.55220155 +0000 UTC m=+0.121170345 container attach 1225d1ddee0aa56ca4c858111cc360f068dccdf7fd78296ac63f02e558fc0124 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:02:26 np0005593232 systemd[1]: libpod-1225d1ddee0aa56ca4c858111cc360f068dccdf7fd78296ac63f02e558fc0124.scope: Deactivated successfully.
Jan 23 05:02:26 np0005593232 podman[317220]: 2026-01-23 10:02:26.553509577 +0000 UTC m=+0.122478362 container died 1225d1ddee0aa56ca4c858111cc360f068dccdf7fd78296ac63f02e558fc0124 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_noether, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:02:26 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4c93d12172fc794c06c80c69ac4d73bce37c2409f930e396ebafc1c31cbe05df-merged.mount: Deactivated successfully.
Jan 23 05:02:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:02:26 np0005593232 podman[317220]: 2026-01-23 10:02:26.595365286 +0000 UTC m=+0.164334051 container remove 1225d1ddee0aa56ca4c858111cc360f068dccdf7fd78296ac63f02e558fc0124 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_noether, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Jan 23 05:02:26 np0005593232 systemd[1]: libpod-conmon-1225d1ddee0aa56ca4c858111cc360f068dccdf7fd78296ac63f02e558fc0124.scope: Deactivated successfully.
Jan 23 05:02:26 np0005593232 podman[317259]: 2026-01-23 10:02:26.780458917 +0000 UTC m=+0.040436080 container create 0fb1d63ae82a84c52abb05839c6b3897a95e1bda74dfe081a709edb06817d424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hawking, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 05:02:26 np0005593232 systemd[1]: Started libpod-conmon-0fb1d63ae82a84c52abb05839c6b3897a95e1bda74dfe081a709edb06817d424.scope.
Jan 23 05:02:26 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:02:26 np0005593232 podman[317259]: 2026-01-23 10:02:26.763518365 +0000 UTC m=+0.023495548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:02:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/793ac80e0674c8cf1d9ea3c6f64798e4899e070edcfd71d46b968c3e15ae790d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:02:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/793ac80e0674c8cf1d9ea3c6f64798e4899e070edcfd71d46b968c3e15ae790d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:02:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/793ac80e0674c8cf1d9ea3c6f64798e4899e070edcfd71d46b968c3e15ae790d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:02:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/793ac80e0674c8cf1d9ea3c6f64798e4899e070edcfd71d46b968c3e15ae790d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:02:26 np0005593232 podman[317259]: 2026-01-23 10:02:26.87243551 +0000 UTC m=+0.132412673 container init 0fb1d63ae82a84c52abb05839c6b3897a95e1bda74dfe081a709edb06817d424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 05:02:26 np0005593232 podman[317259]: 2026-01-23 10:02:26.878282396 +0000 UTC m=+0.138259559 container start 0fb1d63ae82a84c52abb05839c6b3897a95e1bda74dfe081a709edb06817d424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hawking, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:02:26 np0005593232 podman[317259]: 2026-01-23 10:02:26.88122291 +0000 UTC m=+0.141200103 container attach 0fb1d63ae82a84c52abb05839c6b3897a95e1bda74dfe081a709edb06817d424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hawking, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:02:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2138: 321 pgs: 321 active+clean; 607 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 12 MiB/s wr, 328 op/s
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]: {
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:    "0": [
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:        {
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:            "devices": [
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:                "/dev/loop3"
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:            ],
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:            "lv_name": "ceph_lv0",
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:            "lv_size": "7511998464",
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:            "name": "ceph_lv0",
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:            "tags": {
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:                "ceph.cluster_name": "ceph",
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:                "ceph.crush_device_class": "",
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:                "ceph.encrypted": "0",
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:                "ceph.osd_id": "0",
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:                "ceph.type": "block",
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:                "ceph.vdo": "0"
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:            },
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:            "type": "block",
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:            "vg_name": "ceph_vg0"
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:        }
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]:    ]
Jan 23 05:02:27 np0005593232 hungry_hawking[317276]: }
Jan 23 05:02:27 np0005593232 systemd[1]: libpod-0fb1d63ae82a84c52abb05839c6b3897a95e1bda74dfe081a709edb06817d424.scope: Deactivated successfully.
Jan 23 05:02:27 np0005593232 podman[317286]: 2026-01-23 10:02:27.720996831 +0000 UTC m=+0.022105659 container died 0fb1d63ae82a84c52abb05839c6b3897a95e1bda74dfe081a709edb06817d424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hawking, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:02:27 np0005593232 systemd[1]: var-lib-containers-storage-overlay-793ac80e0674c8cf1d9ea3c6f64798e4899e070edcfd71d46b968c3e15ae790d-merged.mount: Deactivated successfully.
Jan 23 05:02:27 np0005593232 podman[317286]: 2026-01-23 10:02:27.766823193 +0000 UTC m=+0.067931981 container remove 0fb1d63ae82a84c52abb05839c6b3897a95e1bda74dfe081a709edb06817d424 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:02:27 np0005593232 systemd[1]: libpod-conmon-0fb1d63ae82a84c52abb05839c6b3897a95e1bda74dfe081a709edb06817d424.scope: Deactivated successfully.
Jan 23 05:02:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:28.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:28 np0005593232 nova_compute[250269]: 2026-01-23 10:02:28.327 250273 DEBUG nova.network.neutron [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Successfully created port: f80bf815-6340-4f49-a6aa-1d0aaf916228 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:02:28 np0005593232 podman[317443]: 2026-01-23 10:02:28.354444741 +0000 UTC m=+0.042376725 container create 83f974e5497cbbc0821df93bbb98420eb4e846dbda3a732eaf606e6acc768b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_germain, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:02:28 np0005593232 systemd[1]: Started libpod-conmon-83f974e5497cbbc0821df93bbb98420eb4e846dbda3a732eaf606e6acc768b62.scope.
Jan 23 05:02:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:28.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:28 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:02:28 np0005593232 podman[317443]: 2026-01-23 10:02:28.335991487 +0000 UTC m=+0.023923491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:02:28 np0005593232 podman[317443]: 2026-01-23 10:02:28.436792701 +0000 UTC m=+0.124724705 container init 83f974e5497cbbc0821df93bbb98420eb4e846dbda3a732eaf606e6acc768b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:02:28 np0005593232 podman[317443]: 2026-01-23 10:02:28.442934465 +0000 UTC m=+0.130866479 container start 83f974e5497cbbc0821df93bbb98420eb4e846dbda3a732eaf606e6acc768b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 05:02:28 np0005593232 podman[317443]: 2026-01-23 10:02:28.446311111 +0000 UTC m=+0.134243165 container attach 83f974e5497cbbc0821df93bbb98420eb4e846dbda3a732eaf606e6acc768b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 05:02:28 np0005593232 distracted_germain[317460]: 167 167
Jan 23 05:02:28 np0005593232 systemd[1]: libpod-83f974e5497cbbc0821df93bbb98420eb4e846dbda3a732eaf606e6acc768b62.scope: Deactivated successfully.
Jan 23 05:02:28 np0005593232 podman[317443]: 2026-01-23 10:02:28.447410313 +0000 UTC m=+0.135342317 container died 83f974e5497cbbc0821df93bbb98420eb4e846dbda3a732eaf606e6acc768b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_germain, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:02:28 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5dee6e32e602487495fe6c4e1bc61a4f5900799ef5e4d4002ef3ce52c87a3f34-merged.mount: Deactivated successfully.
Jan 23 05:02:28 np0005593232 podman[317443]: 2026-01-23 10:02:28.485147255 +0000 UTC m=+0.173079239 container remove 83f974e5497cbbc0821df93bbb98420eb4e846dbda3a732eaf606e6acc768b62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_germain, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:02:28 np0005593232 systemd[1]: libpod-conmon-83f974e5497cbbc0821df93bbb98420eb4e846dbda3a732eaf606e6acc768b62.scope: Deactivated successfully.
Jan 23 05:02:28 np0005593232 podman[317484]: 2026-01-23 10:02:28.658843231 +0000 UTC m=+0.039775531 container create 78352e167af423f021b75113f380cd2ffa3b333345311d69981b0f66a83df813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bardeen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 23 05:02:28 np0005593232 systemd[1]: Started libpod-conmon-78352e167af423f021b75113f380cd2ffa3b333345311d69981b0f66a83df813.scope.
Jan 23 05:02:28 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:02:28 np0005593232 podman[317484]: 2026-01-23 10:02:28.64296917 +0000 UTC m=+0.023901490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:02:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7190d6f6b5382d6c311f8471addc9f6d5d104630e84b8c66fc92ba5819f40c17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:02:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7190d6f6b5382d6c311f8471addc9f6d5d104630e84b8c66fc92ba5819f40c17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:02:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7190d6f6b5382d6c311f8471addc9f6d5d104630e84b8c66fc92ba5819f40c17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:02:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7190d6f6b5382d6c311f8471addc9f6d5d104630e84b8c66fc92ba5819f40c17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:02:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2139: 321 pgs: 321 active+clean; 671 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 12 MiB/s wr, 276 op/s
Jan 23 05:02:29 np0005593232 podman[317484]: 2026-01-23 10:02:29.116465514 +0000 UTC m=+0.497397834 container init 78352e167af423f021b75113f380cd2ffa3b333345311d69981b0f66a83df813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bardeen, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 23 05:02:29 np0005593232 podman[317484]: 2026-01-23 10:02:29.123061172 +0000 UTC m=+0.503993472 container start 78352e167af423f021b75113f380cd2ffa3b333345311d69981b0f66a83df813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:02:29 np0005593232 podman[317484]: 2026-01-23 10:02:29.126679384 +0000 UTC m=+0.507611704 container attach 78352e167af423f021b75113f380cd2ffa3b333345311d69981b0f66a83df813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bardeen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 23 05:02:29 np0005593232 nova_compute[250269]: 2026-01-23 10:02:29.288 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:02:29 np0005593232 affectionate_bardeen[317500]: {
Jan 23 05:02:29 np0005593232 affectionate_bardeen[317500]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:02:29 np0005593232 affectionate_bardeen[317500]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:02:29 np0005593232 affectionate_bardeen[317500]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:02:29 np0005593232 affectionate_bardeen[317500]:        "osd_id": 0,
Jan 23 05:02:29 np0005593232 affectionate_bardeen[317500]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:02:29 np0005593232 affectionate_bardeen[317500]:        "type": "bluestore"
Jan 23 05:02:29 np0005593232 affectionate_bardeen[317500]:    }
Jan 23 05:02:29 np0005593232 affectionate_bardeen[317500]: }
Jan 23 05:02:30 np0005593232 systemd[1]: libpod-78352e167af423f021b75113f380cd2ffa3b333345311d69981b0f66a83df813.scope: Deactivated successfully.
Jan 23 05:02:30 np0005593232 podman[317522]: 2026-01-23 10:02:30.054081647 +0000 UTC m=+0.021875142 container died 78352e167af423f021b75113f380cd2ffa3b333345311d69981b0f66a83df813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bardeen, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 05:02:30 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7190d6f6b5382d6c311f8471addc9f6d5d104630e84b8c66fc92ba5819f40c17-merged.mount: Deactivated successfully.
Jan 23 05:02:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:30.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:30 np0005593232 podman[317522]: 2026-01-23 10:02:30.104680765 +0000 UTC m=+0.072474240 container remove 78352e167af423f021b75113f380cd2ffa3b333345311d69981b0f66a83df813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:02:30 np0005593232 systemd[1]: libpod-conmon-78352e167af423f021b75113f380cd2ffa3b333345311d69981b0f66a83df813.scope: Deactivated successfully.
Jan 23 05:02:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:02:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:02:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:02:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:02:30 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 10846009-9bbb-4386-b9e8-9885adc21f44 does not exist
Jan 23 05:02:30 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 0d4415d1-de0f-42cf-841f-f056029f7802 does not exist
Jan 23 05:02:30 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c47690fa-38fb-4ef4-8d77-9adb59b3d56b does not exist
Jan 23 05:02:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:02:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:30.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:02:30 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:02:30 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:02:30 np0005593232 nova_compute[250269]: 2026-01-23 10:02:30.697 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:30 np0005593232 nova_compute[250269]: 2026-01-23 10:02:30.882 250273 DEBUG nova.network.neutron [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Successfully created port: d80acda3-66d1-4992-889f-b668c7ef1094 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:02:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2140: 321 pgs: 321 active+clean; 671 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 365 KiB/s rd, 8.1 MiB/s wr, 178 op/s
Jan 23 05:02:31 np0005593232 nova_compute[250269]: 2026-01-23 10:02:31.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:02:31 np0005593232 nova_compute[250269]: 2026-01-23 10:02:31.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:02:31 np0005593232 nova_compute[250269]: 2026-01-23 10:02:31.307 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:02:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:32.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:32.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2141: 321 pgs: 321 active+clean; 672 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 8.2 MiB/s wr, 219 op/s
Jan 23 05:02:33 np0005593232 nova_compute[250269]: 2026-01-23 10:02:33.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:02:33 np0005593232 nova_compute[250269]: 2026-01-23 10:02:33.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:02:33 np0005593232 nova_compute[250269]: 2026-01-23 10:02:33.310 250273 DEBUG nova.network.neutron [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Successfully updated port: f80bf815-6340-4f49-a6aa-1d0aaf916228 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:02:33 np0005593232 nova_compute[250269]: 2026-01-23 10:02:33.599 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Acquiring lock "refresh_cache-799fa81c-31fa-4706-ba6f-5918fddd4caa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:02:33 np0005593232 nova_compute[250269]: 2026-01-23 10:02:33.599 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Acquired lock "refresh_cache-799fa81c-31fa-4706-ba6f-5918fddd4caa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:02:33 np0005593232 nova_compute[250269]: 2026-01-23 10:02:33.599 250273 DEBUG nova.network.neutron [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:02:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:34.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:34 np0005593232 nova_compute[250269]: 2026-01-23 10:02:34.292 250273 DEBUG nova.network.neutron [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:02:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:34.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2142: 321 pgs: 321 active+clean; 672 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1003 KiB/s rd, 7.5 MiB/s wr, 187 op/s
Jan 23 05:02:35 np0005593232 nova_compute[250269]: 2026-01-23 10:02:35.602 250273 DEBUG nova.compute.manager [req-6a181dc4-8a5c-4ad0-b286-e81b19abbb83 req-8487b1e9-18c8-4bf4-a4e0-37067c0dec58 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Received event network-changed-f80bf815-6340-4f49-a6aa-1d0aaf916228 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:02:35 np0005593232 nova_compute[250269]: 2026-01-23 10:02:35.603 250273 DEBUG nova.compute.manager [req-6a181dc4-8a5c-4ad0-b286-e81b19abbb83 req-8487b1e9-18c8-4bf4-a4e0-37067c0dec58 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Refreshing instance network info cache due to event network-changed-f80bf815-6340-4f49-a6aa-1d0aaf916228. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:02:35 np0005593232 nova_compute[250269]: 2026-01-23 10:02:35.603 250273 DEBUG oslo_concurrency.lockutils [req-6a181dc4-8a5c-4ad0-b286-e81b19abbb83 req-8487b1e9-18c8-4bf4-a4e0-37067c0dec58 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-799fa81c-31fa-4706-ba6f-5918fddd4caa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:02:35 np0005593232 nova_compute[250269]: 2026-01-23 10:02:35.641 250273 DEBUG nova.network.neutron [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Successfully updated port: d80acda3-66d1-4992-889f-b668c7ef1094 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:02:35 np0005593232 nova_compute[250269]: 2026-01-23 10:02:35.673 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Acquiring lock "refresh_cache-6679cffe-216f-4ac9-87ef-45526b43ad12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:02:35 np0005593232 nova_compute[250269]: 2026-01-23 10:02:35.673 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Acquired lock "refresh_cache-6679cffe-216f-4ac9-87ef-45526b43ad12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:02:35 np0005593232 nova_compute[250269]: 2026-01-23 10:02:35.674 250273 DEBUG nova.network.neutron [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:02:35 np0005593232 nova_compute[250269]: 2026-01-23 10:02:35.700 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:36.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:36 np0005593232 nova_compute[250269]: 2026-01-23 10:02:36.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:02:36 np0005593232 nova_compute[250269]: 2026-01-23 10:02:36.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:02:36 np0005593232 nova_compute[250269]: 2026-01-23 10:02:36.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:02:36 np0005593232 nova_compute[250269]: 2026-01-23 10:02:36.295 250273 DEBUG nova.network.neutron [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:02:36 np0005593232 nova_compute[250269]: 2026-01-23 10:02:36.309 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:36 np0005593232 nova_compute[250269]: 2026-01-23 10:02:36.387 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 23 05:02:36 np0005593232 nova_compute[250269]: 2026-01-23 10:02:36.387 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 23 05:02:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:36.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:02:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2143: 321 pgs: 321 active+clean; 672 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 7.5 MiB/s wr, 248 op/s
Jan 23 05:02:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:02:37
Jan 23 05:02:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:02:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:02:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'default.rgw.meta', 'backups', 'default.rgw.log', 'images', '.rgw.root', '.mgr']
Jan 23 05:02:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:02:37 np0005593232 nova_compute[250269]: 2026-01-23 10:02:37.264 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:02:37 np0005593232 nova_compute[250269]: 2026-01-23 10:02:37.264 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:02:37 np0005593232 nova_compute[250269]: 2026-01-23 10:02:37.264 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:02:37 np0005593232 nova_compute[250269]: 2026-01-23 10:02:37.265 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:02:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:02:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:02:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:02:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:02:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:02:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:02:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:02:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:38.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:02:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:02:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:02:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:02:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:02:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:02:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:02:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:02:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:02:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:02:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:02:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:38.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:38 np0005593232 podman[317591]: 2026-01-23 10:02:38.46902489 +0000 UTC m=+0.121795551 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 23 05:02:38 np0005593232 nova_compute[250269]: 2026-01-23 10:02:38.504 250273 DEBUG nova.network.neutron [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Updating instance_info_cache with network_info: [{"id": "f80bf815-6340-4f49-a6aa-1d0aaf916228", "address": "fa:16:3e:cc:51:99", "network": {"id": "5d65fb2c-55c9-4b50-aff7-9502add4a8c8", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1244961874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "449f402258804f41b10f91a13da1176d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf80bf815-63", "ovs_interfaceid": "f80bf815-6340-4f49-a6aa-1d0aaf916228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:02:38 np0005593232 nova_compute[250269]: 2026-01-23 10:02:38.920 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Releasing lock "refresh_cache-799fa81c-31fa-4706-ba6f-5918fddd4caa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:02:38 np0005593232 nova_compute[250269]: 2026-01-23 10:02:38.920 250273 DEBUG nova.compute.manager [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Instance network_info: |[{"id": "f80bf815-6340-4f49-a6aa-1d0aaf916228", "address": "fa:16:3e:cc:51:99", "network": {"id": "5d65fb2c-55c9-4b50-aff7-9502add4a8c8", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1244961874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "449f402258804f41b10f91a13da1176d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf80bf815-63", "ovs_interfaceid": "f80bf815-6340-4f49-a6aa-1d0aaf916228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:02:38 np0005593232 nova_compute[250269]: 2026-01-23 10:02:38.921 250273 DEBUG oslo_concurrency.lockutils [req-6a181dc4-8a5c-4ad0-b286-e81b19abbb83 req-8487b1e9-18c8-4bf4-a4e0-37067c0dec58 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-799fa81c-31fa-4706-ba6f-5918fddd4caa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:02:38 np0005593232 nova_compute[250269]: 2026-01-23 10:02:38.921 250273 DEBUG nova.network.neutron [req-6a181dc4-8a5c-4ad0-b286-e81b19abbb83 req-8487b1e9-18c8-4bf4-a4e0-37067c0dec58 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Refreshing network info cache for port f80bf815-6340-4f49-a6aa-1d0aaf916228 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:02:38 np0005593232 nova_compute[250269]: 2026-01-23 10:02:38.927 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Start _get_guest_xml network_info=[{"id": "f80bf815-6340-4f49-a6aa-1d0aaf916228", "address": "fa:16:3e:cc:51:99", "network": {"id": "5d65fb2c-55c9-4b50-aff7-9502add4a8c8", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1244961874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "449f402258804f41b10f91a13da1176d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf80bf815-63", "ovs_interfaceid": "f80bf815-6340-4f49-a6aa-1d0aaf916228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:02:38 np0005593232 nova_compute[250269]: 2026-01-23 10:02:38.932 250273 WARNING nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:02:38 np0005593232 nova_compute[250269]: 2026-01-23 10:02:38.943 250273 DEBUG nova.virt.libvirt.host [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:02:38 np0005593232 nova_compute[250269]: 2026-01-23 10:02:38.944 250273 DEBUG nova.virt.libvirt.host [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:02:38 np0005593232 nova_compute[250269]: 2026-01-23 10:02:38.955 250273 DEBUG nova.virt.libvirt.host [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:02:38 np0005593232 nova_compute[250269]: 2026-01-23 10:02:38.955 250273 DEBUG nova.virt.libvirt.host [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:02:38 np0005593232 nova_compute[250269]: 2026-01-23 10:02:38.957 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:02:38 np0005593232 nova_compute[250269]: 2026-01-23 10:02:38.957 250273 DEBUG nova.virt.hardware [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:02:38 np0005593232 nova_compute[250269]: 2026-01-23 10:02:38.957 250273 DEBUG nova.virt.hardware [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:02:38 np0005593232 nova_compute[250269]: 2026-01-23 10:02:38.957 250273 DEBUG nova.virt.hardware [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:02:38 np0005593232 nova_compute[250269]: 2026-01-23 10:02:38.957 250273 DEBUG nova.virt.hardware [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:02:38 np0005593232 nova_compute[250269]: 2026-01-23 10:02:38.958 250273 DEBUG nova.virt.hardware [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:02:38 np0005593232 nova_compute[250269]: 2026-01-23 10:02:38.958 250273 DEBUG nova.virt.hardware [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:02:38 np0005593232 nova_compute[250269]: 2026-01-23 10:02:38.958 250273 DEBUG nova.virt.hardware [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:02:38 np0005593232 nova_compute[250269]: 2026-01-23 10:02:38.958 250273 DEBUG nova.virt.hardware [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:02:38 np0005593232 nova_compute[250269]: 2026-01-23 10:02:38.958 250273 DEBUG nova.virt.hardware [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:02:38 np0005593232 nova_compute[250269]: 2026-01-23 10:02:38.958 250273 DEBUG nova.virt.hardware [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:02:38 np0005593232 nova_compute[250269]: 2026-01-23 10:02:38.959 250273 DEBUG nova.virt.hardware [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:02:38 np0005593232 nova_compute[250269]: 2026-01-23 10:02:38.961 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:02:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2144: 321 pgs: 321 active+clean; 672 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 192 op/s
Jan 23 05:02:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:02:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1211218540' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:02:39 np0005593232 nova_compute[250269]: 2026-01-23 10:02:39.414 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:02:39 np0005593232 nova_compute[250269]: 2026-01-23 10:02:39.440 250273 DEBUG nova.storage.rbd_utils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] rbd image 799fa81c-31fa-4706-ba6f-5918fddd4caa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:02:39 np0005593232 nova_compute[250269]: 2026-01-23 10:02:39.447 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:02:39 np0005593232 nova_compute[250269]: 2026-01-23 10:02:39.823 250273 DEBUG nova.compute.manager [req-eb500b75-d0cc-4a8e-8583-95bae7cf2806 req-672a03ec-2e17-4017-9e8b-c1b5d8600e91 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Received event network-changed-d80acda3-66d1-4992-889f-b668c7ef1094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:02:39 np0005593232 nova_compute[250269]: 2026-01-23 10:02:39.824 250273 DEBUG nova.compute.manager [req-eb500b75-d0cc-4a8e-8583-95bae7cf2806 req-672a03ec-2e17-4017-9e8b-c1b5d8600e91 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Refreshing instance network info cache due to event network-changed-d80acda3-66d1-4992-889f-b668c7ef1094. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:02:39 np0005593232 nova_compute[250269]: 2026-01-23 10:02:39.824 250273 DEBUG oslo_concurrency.lockutils [req-eb500b75-d0cc-4a8e-8583-95bae7cf2806 req-672a03ec-2e17-4017-9e8b-c1b5d8600e91 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-6679cffe-216f-4ac9-87ef-45526b43ad12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:02:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:02:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3706328021' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:02:39 np0005593232 nova_compute[250269]: 2026-01-23 10:02:39.907 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:02:39 np0005593232 nova_compute[250269]: 2026-01-23 10:02:39.908 250273 DEBUG nova.virt.libvirt.vif [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:02:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1630527270',display_name='tempest-ListServersNegativeTestJSON-server-1630527270-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1630527270-1',id=103,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='449f402258804f41b10f91a13da1176d',ramdisk_id='',reservation_id='r-2yft4oee',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-2075106169',owner_user_name='tempest-ListServersNegativeTestJSON-2075106169-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:02:22Z,user_data=None,user_id='9127d08a3bf5404e8cb8c84ed7152834',uuid=799fa81c-31fa-4706-ba6f-5918fddd4caa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f80bf815-6340-4f49-a6aa-1d0aaf916228", "address": "fa:16:3e:cc:51:99", "network": {"id": "5d65fb2c-55c9-4b50-aff7-9502add4a8c8", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1244961874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "449f402258804f41b10f91a13da1176d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf80bf815-63", "ovs_interfaceid": "f80bf815-6340-4f49-a6aa-1d0aaf916228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:02:39 np0005593232 nova_compute[250269]: 2026-01-23 10:02:39.908 250273 DEBUG nova.network.os_vif_util [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Converting VIF {"id": "f80bf815-6340-4f49-a6aa-1d0aaf916228", "address": "fa:16:3e:cc:51:99", "network": {"id": "5d65fb2c-55c9-4b50-aff7-9502add4a8c8", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1244961874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "449f402258804f41b10f91a13da1176d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf80bf815-63", "ovs_interfaceid": "f80bf815-6340-4f49-a6aa-1d0aaf916228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:02:39 np0005593232 nova_compute[250269]: 2026-01-23 10:02:39.909 250273 DEBUG nova.network.os_vif_util [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:51:99,bridge_name='br-int',has_traffic_filtering=True,id=f80bf815-6340-4f49-a6aa-1d0aaf916228,network=Network(5d65fb2c-55c9-4b50-aff7-9502add4a8c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf80bf815-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:02:39 np0005593232 nova_compute[250269]: 2026-01-23 10:02:39.911 250273 DEBUG nova.objects.instance [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lazy-loading 'pci_devices' on Instance uuid 799fa81c-31fa-4706-ba6f-5918fddd4caa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:02:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:40.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.156 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:02:40 np0005593232 nova_compute[250269]:  <uuid>799fa81c-31fa-4706-ba6f-5918fddd4caa</uuid>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:  <name>instance-00000067</name>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <nova:name>tempest-ListServersNegativeTestJSON-server-1630527270-1</nova:name>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:02:38</nova:creationTime>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:02:40 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:        <nova:user uuid="9127d08a3bf5404e8cb8c84ed7152834">tempest-ListServersNegativeTestJSON-2075106169-project-member</nova:user>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:        <nova:project uuid="449f402258804f41b10f91a13da1176d">tempest-ListServersNegativeTestJSON-2075106169</nova:project>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:        <nova:port uuid="f80bf815-6340-4f49-a6aa-1d0aaf916228">
Jan 23 05:02:40 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <entry name="serial">799fa81c-31fa-4706-ba6f-5918fddd4caa</entry>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <entry name="uuid">799fa81c-31fa-4706-ba6f-5918fddd4caa</entry>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/799fa81c-31fa-4706-ba6f-5918fddd4caa_disk">
Jan 23 05:02:40 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:02:40 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/799fa81c-31fa-4706-ba6f-5918fddd4caa_disk.config">
Jan 23 05:02:40 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:02:40 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:cc:51:99"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <target dev="tapf80bf815-63"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/799fa81c-31fa-4706-ba6f-5918fddd4caa/console.log" append="off"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:02:40 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:02:40 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:02:40 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:02:40 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.157 250273 DEBUG nova.compute.manager [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Preparing to wait for external event network-vif-plugged-f80bf815-6340-4f49-a6aa-1d0aaf916228 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.157 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Acquiring lock "799fa81c-31fa-4706-ba6f-5918fddd4caa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.158 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "799fa81c-31fa-4706-ba6f-5918fddd4caa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.158 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "799fa81c-31fa-4706-ba6f-5918fddd4caa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.159 250273 DEBUG nova.virt.libvirt.vif [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:02:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1630527270',display_name='tempest-ListServersNegativeTestJSON-server-1630527270-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1630527270-1',id=103,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='449f402258804f41b10f91a13da1176d',ramdisk_id='',reservation_id='r-2yft4oee',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-2075106169',owner_user_name='tempest-ListServersNegativeTestJSON-2075106169-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:02:22Z,user_data=None,user_id='9127d08a3bf5404e8cb8c84ed7152834',uuid=799fa81c-31fa-4706-ba6f-5918fddd4caa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f80bf815-6340-4f49-a6aa-1d0aaf916228", "address": "fa:16:3e:cc:51:99", "network": {"id": "5d65fb2c-55c9-4b50-aff7-9502add4a8c8", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1244961874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "449f402258804f41b10f91a13da1176d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf80bf815-63", "ovs_interfaceid": "f80bf815-6340-4f49-a6aa-1d0aaf916228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.159 250273 DEBUG nova.network.os_vif_util [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Converting VIF {"id": "f80bf815-6340-4f49-a6aa-1d0aaf916228", "address": "fa:16:3e:cc:51:99", "network": {"id": "5d65fb2c-55c9-4b50-aff7-9502add4a8c8", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1244961874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "449f402258804f41b10f91a13da1176d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf80bf815-63", "ovs_interfaceid": "f80bf815-6340-4f49-a6aa-1d0aaf916228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.160 250273 DEBUG nova.network.os_vif_util [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:51:99,bridge_name='br-int',has_traffic_filtering=True,id=f80bf815-6340-4f49-a6aa-1d0aaf916228,network=Network(5d65fb2c-55c9-4b50-aff7-9502add4a8c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf80bf815-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.160 250273 DEBUG os_vif [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:51:99,bridge_name='br-int',has_traffic_filtering=True,id=f80bf815-6340-4f49-a6aa-1d0aaf916228,network=Network(5d65fb2c-55c9-4b50-aff7-9502add4a8c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf80bf815-63') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.161 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.161 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.162 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.166 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.167 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf80bf815-63, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.167 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf80bf815-63, col_values=(('external_ids', {'iface-id': 'f80bf815-6340-4f49-a6aa-1d0aaf916228', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:51:99', 'vm-uuid': '799fa81c-31fa-4706-ba6f-5918fddd4caa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.214 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:40 np0005593232 NetworkManager[49057]: <info>  [1769162560.2153] manager: (tapf80bf815-63): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/170)
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.217 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.221 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.222 250273 INFO os_vif [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:51:99,bridge_name='br-int',has_traffic_filtering=True,id=f80bf815-6340-4f49-a6aa-1d0aaf916228,network=Network(5d65fb2c-55c9-4b50-aff7-9502add4a8c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf80bf815-63')#033[00m
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.368 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.369 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.370 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] No VIF found with MAC fa:16:3e:cc:51:99, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.371 250273 INFO nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Using config drive#033[00m
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.407 250273 DEBUG nova.storage.rbd_utils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] rbd image 799fa81c-31fa-4706-ba6f-5918fddd4caa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:02:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:40.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.451 250273 DEBUG nova.network.neutron [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Updating instance_info_cache with network_info: [{"id": "d80acda3-66d1-4992-889f-b668c7ef1094", "address": "fa:16:3e:c0:8b:df", "network": {"id": "5d65fb2c-55c9-4b50-aff7-9502add4a8c8", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1244961874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "449f402258804f41b10f91a13da1176d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd80acda3-66", "ovs_interfaceid": "d80acda3-66d1-4992-889f-b668c7ef1094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.539 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Updating instance_info_cache with network_info: [{"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:02:40 np0005593232 nova_compute[250269]: 2026-01-23 10:02:40.701 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2145: 321 pgs: 321 active+clean; 672 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 49 KiB/s wr, 142 op/s
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.192 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.192 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.192 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.223 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Releasing lock "refresh_cache-6679cffe-216f-4ac9-87ef-45526b43ad12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.224 250273 DEBUG nova.compute.manager [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Instance network_info: |[{"id": "d80acda3-66d1-4992-889f-b668c7ef1094", "address": "fa:16:3e:c0:8b:df", "network": {"id": "5d65fb2c-55c9-4b50-aff7-9502add4a8c8", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1244961874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "449f402258804f41b10f91a13da1176d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd80acda3-66", "ovs_interfaceid": "d80acda3-66d1-4992-889f-b668c7ef1094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.224 250273 DEBUG oslo_concurrency.lockutils [req-eb500b75-d0cc-4a8e-8583-95bae7cf2806 req-672a03ec-2e17-4017-9e8b-c1b5d8600e91 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-6679cffe-216f-4ac9-87ef-45526b43ad12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.224 250273 DEBUG nova.network.neutron [req-eb500b75-d0cc-4a8e-8583-95bae7cf2806 req-672a03ec-2e17-4017-9e8b-c1b5d8600e91 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Refreshing network info cache for port d80acda3-66d1-4992-889f-b668c7ef1094 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.228 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Start _get_guest_xml network_info=[{"id": "d80acda3-66d1-4992-889f-b668c7ef1094", "address": "fa:16:3e:c0:8b:df", "network": {"id": "5d65fb2c-55c9-4b50-aff7-9502add4a8c8", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1244961874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "449f402258804f41b10f91a13da1176d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd80acda3-66", "ovs_interfaceid": "d80acda3-66d1-4992-889f-b668c7ef1094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.233 250273 WARNING nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.236 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.237 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.238 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.238 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.238 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.291 250273 DEBUG nova.virt.libvirt.host [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.292 250273 DEBUG nova.virt.libvirt.host [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.300 250273 DEBUG nova.virt.libvirt.host [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.301 250273 DEBUG nova.virt.libvirt.host [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.303 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.303 250273 DEBUG nova.virt.hardware [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.304 250273 DEBUG nova.virt.hardware [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.304 250273 DEBUG nova.virt.hardware [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.305 250273 DEBUG nova.virt.hardware [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.305 250273 DEBUG nova.virt.hardware [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.305 250273 DEBUG nova.virt.hardware [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.305 250273 DEBUG nova.virt.hardware [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.305 250273 DEBUG nova.virt.hardware [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.306 250273 DEBUG nova.virt.hardware [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.306 250273 DEBUG nova.virt.hardware [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.306 250273 DEBUG nova.virt.hardware [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.310 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:02:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:02:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:02:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/921077733' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.691 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:02:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:02:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2553289435' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.768 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.799 250273 DEBUG nova.storage.rbd_utils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] rbd image 6679cffe-216f-4ac9-87ef-45526b43ad12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.803 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.932 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.933 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.937 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.938 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000067 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.941 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:02:41 np0005593232 nova_compute[250269]: 2026-01-23 10:02:41.941 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:02:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:02:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:42.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.110 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.111 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4032MB free_disk=20.65644073486328GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
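The resource-tracker record above embeds the host's PCI inventory as a JSON list. A minimal stdlib sketch (not part of Nova; the two entries are copied from the log line above, the full list is longer but every entry has the same shape) showing how such a dump can be summarized:

```python
import json
from collections import Counter

# Two entries copied from the _report_hypervisor_resource_view log line;
# every device in the real list carries the same keys.
pci_devices_json = '''[
  {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0",
   "product_id": "1237", "vendor_id": "8086", "numa_node": null,
   "label": "label_8086_1237", "dev_type": "type-PCI"},
  {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0",
   "product_id": "1000", "vendor_id": "1af4", "numa_node": null,
   "label": "label_1af4_1000", "dev_type": "type-PCI"}
]'''

devices = json.loads(pci_devices_json)
# Count devices per PCI vendor ID (8086 = Intel, 1af4 = virtio/Red Hat).
by_vendor = Counter(d["vendor_id"] for d in devices)
```

All `numa_node` values in the log are `null`, which matches the earlier WARNING that `socket` PCI NUMA affinity is unsupported on this guest topology.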
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.111 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.111 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:02:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:02:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1318308815' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.245 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.246 250273 DEBUG nova.virt.libvirt.vif [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:02:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1630527270',display_name='tempest-ListServersNegativeTestJSON-server-1630527270-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1630527270-3',id=105,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='449f402258804f41b10f91a13da1176d',ramdisk_id='',reservation_id='r-2yft4oee',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-2075106169',owner_user_name
='tempest-ListServersNegativeTestJSON-2075106169-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:02:23Z,user_data=None,user_id='9127d08a3bf5404e8cb8c84ed7152834',uuid=6679cffe-216f-4ac9-87ef-45526b43ad12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d80acda3-66d1-4992-889f-b668c7ef1094", "address": "fa:16:3e:c0:8b:df", "network": {"id": "5d65fb2c-55c9-4b50-aff7-9502add4a8c8", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1244961874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "449f402258804f41b10f91a13da1176d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd80acda3-66", "ovs_interfaceid": "d80acda3-66d1-4992-889f-b668c7ef1094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.246 250273 DEBUG nova.network.os_vif_util [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Converting VIF {"id": "d80acda3-66d1-4992-889f-b668c7ef1094", "address": "fa:16:3e:c0:8b:df", "network": {"id": "5d65fb2c-55c9-4b50-aff7-9502add4a8c8", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1244961874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "449f402258804f41b10f91a13da1176d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd80acda3-66", "ovs_interfaceid": "d80acda3-66d1-4992-889f-b668c7ef1094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.247 250273 DEBUG nova.network.os_vif_util [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:8b:df,bridge_name='br-int',has_traffic_filtering=True,id=d80acda3-66d1-4992-889f-b668c7ef1094,network=Network(5d65fb2c-55c9-4b50-aff7-9502add4a8c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd80acda3-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.248 250273 DEBUG nova.objects.instance [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lazy-loading 'pci_devices' on Instance uuid 6679cffe-216f-4ac9-87ef-45526b43ad12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:02:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:02:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4180091172' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.362 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:02:42 np0005593232 nova_compute[250269]:  <uuid>6679cffe-216f-4ac9-87ef-45526b43ad12</uuid>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:  <name>instance-00000069</name>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <nova:name>tempest-ListServersNegativeTestJSON-server-1630527270-3</nova:name>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:02:41</nova:creationTime>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:02:42 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:        <nova:user uuid="9127d08a3bf5404e8cb8c84ed7152834">tempest-ListServersNegativeTestJSON-2075106169-project-member</nova:user>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:        <nova:project uuid="449f402258804f41b10f91a13da1176d">tempest-ListServersNegativeTestJSON-2075106169</nova:project>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:        <nova:port uuid="d80acda3-66d1-4992-889f-b668c7ef1094">
Jan 23 05:02:42 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <entry name="serial">6679cffe-216f-4ac9-87ef-45526b43ad12</entry>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <entry name="uuid">6679cffe-216f-4ac9-87ef-45526b43ad12</entry>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/6679cffe-216f-4ac9-87ef-45526b43ad12_disk">
Jan 23 05:02:42 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:02:42 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/6679cffe-216f-4ac9-87ef-45526b43ad12_disk.config">
Jan 23 05:02:42 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:02:42 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:c0:8b:df"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <target dev="tapd80acda3-66"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/6679cffe-216f-4ac9-87ef-45526b43ad12/console.log" append="off"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:02:42 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:02:42 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:02:42 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:02:42 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
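The `_get_guest_xml` dump above is a plain libvirt domain definition, so it can be inspected programmatically. A minimal stdlib sketch (an illustration, not Nova code; the fragment is a trimmed copy of the `<disk>` element from the XML above, with the same RBD pool name and monitor addresses):

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the root-disk element from the logged guest XML.
domain_xml = """
<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/6679cffe-216f-4ac9-87ef-45526b43ad12_disk">
        <host name="192.168.122.100" port="6789"/>
        <host name="192.168.122.102" port="6789"/>
        <host name="192.168.122.101" port="6789"/>
      </source>
      <target dev="vda" bus="virtio"/>
    </disk>
  </devices>
</domain>
"""

root = ET.fromstring(domain_xml)
for disk in root.iter("disk"):
    src = disk.find("source")
    # Ceph monitor endpoints the guest disk is served from.
    mons = [h.get("name") for h in src.findall("host")]
```

The three `<host>` entries correspond to the Ceph monitors that the `ceph mon dump` calls earlier in the log were querying.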
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.363 250273 DEBUG nova.compute.manager [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Preparing to wait for external event network-vif-plugged-d80acda3-66d1-4992-889f-b668c7ef1094 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.363 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Acquiring lock "6679cffe-216f-4ac9-87ef-45526b43ad12-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.364 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "6679cffe-216f-4ac9-87ef-45526b43ad12-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.364 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "6679cffe-216f-4ac9-87ef-45526b43ad12-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.365 250273 DEBUG nova.virt.libvirt.vif [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:02:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1630527270',display_name='tempest-ListServersNegativeTestJSON-server-1630527270-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1630527270-3',id=105,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='449f402258804f41b10f91a13da1176d',ramdisk_id='',reservation_id='r-2yft4oee',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-2075106169',owner
_user_name='tempest-ListServersNegativeTestJSON-2075106169-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:02:23Z,user_data=None,user_id='9127d08a3bf5404e8cb8c84ed7152834',uuid=6679cffe-216f-4ac9-87ef-45526b43ad12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d80acda3-66d1-4992-889f-b668c7ef1094", "address": "fa:16:3e:c0:8b:df", "network": {"id": "5d65fb2c-55c9-4b50-aff7-9502add4a8c8", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1244961874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "449f402258804f41b10f91a13da1176d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd80acda3-66", "ovs_interfaceid": "d80acda3-66d1-4992-889f-b668c7ef1094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.365 250273 DEBUG nova.network.os_vif_util [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Converting VIF {"id": "d80acda3-66d1-4992-889f-b668c7ef1094", "address": "fa:16:3e:c0:8b:df", "network": {"id": "5d65fb2c-55c9-4b50-aff7-9502add4a8c8", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1244961874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "449f402258804f41b10f91a13da1176d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd80acda3-66", "ovs_interfaceid": "d80acda3-66d1-4992-889f-b668c7ef1094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.366 250273 DEBUG nova.network.os_vif_util [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:8b:df,bridge_name='br-int',has_traffic_filtering=True,id=d80acda3-66d1-4992-889f-b668c7ef1094,network=Network(5d65fb2c-55c9-4b50-aff7-9502add4a8c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd80acda3-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.366 250273 DEBUG os_vif [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:8b:df,bridge_name='br-int',has_traffic_filtering=True,id=d80acda3-66d1-4992-889f-b668c7ef1094,network=Network(5d65fb2c-55c9-4b50-aff7-9502add4a8c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd80acda3-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.367 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.367 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.367 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.370 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.370 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd80acda3-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.371 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd80acda3-66, col_values=(('external_ids', {'iface-id': 'd80acda3-66d1-4992-889f-b668c7ef1094', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c0:8b:df', 'vm-uuid': '6679cffe-216f-4ac9-87ef-45526b43ad12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.372 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:42 np0005593232 NetworkManager[49057]: <info>  [1769162562.3733] manager: (tapd80acda3-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/171)
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.376 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.379 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.380 250273 INFO os_vif [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:8b:df,bridge_name='br-int',has_traffic_filtering=True,id=d80acda3-66d1-4992-889f-b668c7ef1094,network=Network(5d65fb2c-55c9-4b50-aff7-9502add4a8c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd80acda3-66')#033[00m
Jan 23 05:02:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:42.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.526 250273 INFO nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Creating config drive at /var/lib/nova/instances/799fa81c-31fa-4706-ba6f-5918fddd4caa/disk.config#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.532 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/799fa81c-31fa-4706-ba6f-5918fddd4caa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmw8amerr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:02:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:42.614 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:02:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:42.615 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:02:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:42.616 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.623 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.624 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.624 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] No VIF found with MAC fa:16:3e:c0:8b:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.624 250273 INFO nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Using config drive#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.688 250273 DEBUG nova.storage.rbd_utils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] rbd image 6679cffe-216f-4ac9-87ef-45526b43ad12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.693 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/799fa81c-31fa-4706-ba6f-5918fddd4caa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmw8amerr" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.694 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.695 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance a50f4fa0-c4bd-41c7-be13-6afd972661b6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.695 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 799fa81c-31fa-4706-ba6f-5918fddd4caa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.695 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 6679cffe-216f-4ac9-87ef-45526b43ad12 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.695 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.695 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.723 250273 DEBUG nova.storage.rbd_utils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] rbd image 799fa81c-31fa-4706-ba6f-5918fddd4caa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.735 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/799fa81c-31fa-4706-ba6f-5918fddd4caa/disk.config 799fa81c-31fa-4706-ba6f-5918fddd4caa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:02:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2146: 321 pgs: 321 active+clean; 672 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 53 KiB/s wr, 142 op/s
Jan 23 05:02:42 np0005593232 nova_compute[250269]: 2026-01-23 10:02:42.989 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:02:43 np0005593232 podman[317864]: 2026-01-23 10:02:43.432243942 +0000 UTC m=+0.081171538 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 23 05:02:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:02:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/747564563' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:02:43 np0005593232 nova_compute[250269]: 2026-01-23 10:02:43.468 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:02:43 np0005593232 nova_compute[250269]: 2026-01-23 10:02:43.475 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:02:43 np0005593232 nova_compute[250269]: 2026-01-23 10:02:43.591 250273 DEBUG nova.network.neutron [req-6a181dc4-8a5c-4ad0-b286-e81b19abbb83 req-8487b1e9-18c8-4bf4-a4e0-37067c0dec58 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Updated VIF entry in instance network info cache for port f80bf815-6340-4f49-a6aa-1d0aaf916228. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:02:43 np0005593232 nova_compute[250269]: 2026-01-23 10:02:43.591 250273 DEBUG nova.network.neutron [req-6a181dc4-8a5c-4ad0-b286-e81b19abbb83 req-8487b1e9-18c8-4bf4-a4e0-37067c0dec58 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Updating instance_info_cache with network_info: [{"id": "f80bf815-6340-4f49-a6aa-1d0aaf916228", "address": "fa:16:3e:cc:51:99", "network": {"id": "5d65fb2c-55c9-4b50-aff7-9502add4a8c8", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1244961874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "449f402258804f41b10f91a13da1176d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf80bf815-63", "ovs_interfaceid": "f80bf815-6340-4f49-a6aa-1d0aaf916228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:02:43 np0005593232 nova_compute[250269]: 2026-01-23 10:02:43.677 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/799fa81c-31fa-4706-ba6f-5918fddd4caa/disk.config 799fa81c-31fa-4706-ba6f-5918fddd4caa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.942s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:02:43 np0005593232 nova_compute[250269]: 2026-01-23 10:02:43.677 250273 INFO nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Deleting local config drive /var/lib/nova/instances/799fa81c-31fa-4706-ba6f-5918fddd4caa/disk.config because it was imported into RBD.#033[00m
Jan 23 05:02:43 np0005593232 NetworkManager[49057]: <info>  [1769162563.7244] manager: (tapf80bf815-63): new Tun device (/org/freedesktop/NetworkManager/Devices/172)
Jan 23 05:02:43 np0005593232 kernel: tapf80bf815-63: entered promiscuous mode
Jan 23 05:02:43 np0005593232 ovn_controller[151001]: 2026-01-23T10:02:43Z|00334|binding|INFO|Claiming lport f80bf815-6340-4f49-a6aa-1d0aaf916228 for this chassis.
Jan 23 05:02:43 np0005593232 ovn_controller[151001]: 2026-01-23T10:02:43Z|00335|binding|INFO|f80bf815-6340-4f49-a6aa-1d0aaf916228: Claiming fa:16:3e:cc:51:99 10.100.0.13
Jan 23 05:02:43 np0005593232 nova_compute[250269]: 2026-01-23 10:02:43.728 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:43 np0005593232 ovn_controller[151001]: 2026-01-23T10:02:43Z|00336|binding|INFO|Setting lport f80bf815-6340-4f49-a6aa-1d0aaf916228 ovn-installed in OVS
Jan 23 05:02:43 np0005593232 nova_compute[250269]: 2026-01-23 10:02:43.749 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:43 np0005593232 systemd-udevd[317902]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:02:43 np0005593232 nova_compute[250269]: 2026-01-23 10:02:43.757 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:43 np0005593232 systemd-machined[215836]: New machine qemu-41-instance-00000067.
Jan 23 05:02:43 np0005593232 NetworkManager[49057]: <info>  [1769162563.7677] device (tapf80bf815-63): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:02:43 np0005593232 NetworkManager[49057]: <info>  [1769162563.7683] device (tapf80bf815-63): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:02:43 np0005593232 systemd[1]: Started Virtual Machine qemu-41-instance-00000067.
Jan 23 05:02:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:44.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:44 np0005593232 nova_compute[250269]: 2026-01-23 10:02:44.110 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162564.1092603, 799fa81c-31fa-4706-ba6f-5918fddd4caa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:02:44 np0005593232 nova_compute[250269]: 2026-01-23 10:02:44.111 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] VM Started (Lifecycle Event)#033[00m
Jan 23 05:02:44 np0005593232 nova_compute[250269]: 2026-01-23 10:02:44.282 250273 INFO nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Creating config drive at /var/lib/nova/instances/6679cffe-216f-4ac9-87ef-45526b43ad12/disk.config#033[00m
Jan 23 05:02:44 np0005593232 nova_compute[250269]: 2026-01-23 10:02:44.288 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6679cffe-216f-4ac9-87ef-45526b43ad12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpchtzmuy9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:02:44 np0005593232 nova_compute[250269]: 2026-01-23 10:02:44.429 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6679cffe-216f-4ac9-87ef-45526b43ad12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpchtzmuy9" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:02:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:02:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:44.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:02:44 np0005593232 nova_compute[250269]: 2026-01-23 10:02:44.464 250273 DEBUG nova.storage.rbd_utils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] rbd image 6679cffe-216f-4ac9-87ef-45526b43ad12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:02:44 np0005593232 nova_compute[250269]: 2026-01-23 10:02:44.468 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6679cffe-216f-4ac9-87ef-45526b43ad12/disk.config 6679cffe-216f-4ac9-87ef-45526b43ad12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:02:44 np0005593232 nova_compute[250269]: 2026-01-23 10:02:44.523 250273 DEBUG oslo_concurrency.lockutils [req-6a181dc4-8a5c-4ad0-b286-e81b19abbb83 req-8487b1e9-18c8-4bf4-a4e0-37067c0dec58 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-799fa81c-31fa-4706-ba6f-5918fddd4caa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:02:44 np0005593232 nova_compute[250269]: 2026-01-23 10:02:44.525 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:02:44 np0005593232 ovn_controller[151001]: 2026-01-23T10:02:44Z|00337|binding|INFO|Setting lport f80bf815-6340-4f49-a6aa-1d0aaf916228 up in Southbound
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.546 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:51:99 10.100.0.13'], port_security=['fa:16:3e:cc:51:99 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '799fa81c-31fa-4706-ba6f-5918fddd4caa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d65fb2c-55c9-4b50-aff7-9502add4a8c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '449f402258804f41b10f91a13da1176d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '07d82268-5230-4e65-b516-4629d542718c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c66279a-df2c-4cb2-9aaa-3c8f92544e07, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=f80bf815-6340-4f49-a6aa-1d0aaf916228) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.548 161902 INFO neutron.agent.ovn.metadata.agent [-] Port f80bf815-6340-4f49-a6aa-1d0aaf916228 in datapath 5d65fb2c-55c9-4b50-aff7-9502add4a8c8 bound to our chassis#033[00m
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.549 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d65fb2c-55c9-4b50-aff7-9502add4a8c8#033[00m
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.566 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[17616b6e-2823-4be6-a3ab-59bed1c19c10]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.567 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5d65fb2c-51 in ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.569 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5d65fb2c-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.569 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c0a095de-5f85-4c5a-bd28-01d884775234]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.570 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[49200f8c-ce98-4c3b-836d-4593548208eb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.582 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[b0792eed-ec9a-4acf-9577-3f92183e853d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.597 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b8040f96-57ad-4fa6-891f-38a92c43c40d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.634 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[35143247-53d4-4592-9a77-a916c4e4189c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.642 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a19fe887-9a74-44e3-a93f-8afe0a4de3f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:44 np0005593232 NetworkManager[49057]: <info>  [1769162564.6431] manager: (tap5d65fb2c-50): new Veth device (/org/freedesktop/NetworkManager/Devices/173)
Jan 23 05:02:44 np0005593232 systemd-udevd[317907]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:02:44 np0005593232 nova_compute[250269]: 2026-01-23 10:02:44.658 250273 DEBUG oslo_concurrency.processutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6679cffe-216f-4ac9-87ef-45526b43ad12/disk.config 6679cffe-216f-4ac9-87ef-45526b43ad12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:02:44 np0005593232 nova_compute[250269]: 2026-01-23 10:02:44.662 250273 INFO nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Deleting local config drive /var/lib/nova/instances/6679cffe-216f-4ac9-87ef-45526b43ad12/disk.config because it was imported into RBD.#033[00m
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.679 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8226e4-bb98-497c-9dae-283e2500a8bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.683 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[ebc070b2-5b28-4e85-a14d-cda90092f37e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:44 np0005593232 NetworkManager[49057]: <info>  [1769162564.7110] device (tap5d65fb2c-50): carrier: link connected
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.717 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[abb64024-ab09-4992-97a6-8478d345d0c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:44 np0005593232 NetworkManager[49057]: <info>  [1769162564.7210] manager: (tapd80acda3-66): new Tun device (/org/freedesktop/NetworkManager/Devices/174)
Jan 23 05:02:44 np0005593232 systemd-udevd[318058]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:02:44 np0005593232 kernel: tapd80acda3-66: entered promiscuous mode
Jan 23 05:02:44 np0005593232 nova_compute[250269]: 2026-01-23 10:02:44.728 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:44 np0005593232 ovn_controller[151001]: 2026-01-23T10:02:44Z|00338|if_status|INFO|Not updating pb chassis for d80acda3-66d1-4992-889f-b668c7ef1094 now as sb is readonly
Jan 23 05:02:44 np0005593232 NetworkManager[49057]: <info>  [1769162564.7368] device (tapd80acda3-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:02:44 np0005593232 NetworkManager[49057]: <info>  [1769162564.7375] device (tapd80acda3-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.740 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b0eabcb8-efa0-4522-b532-f193efd56885]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d65fb2c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:89:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647785, 'reachable_time': 34327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318087, 'error': None, 'target': 'ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:44 np0005593232 nova_compute[250269]: 2026-01-23 10:02:44.743 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:44 np0005593232 nova_compute[250269]: 2026-01-23 10:02:44.749 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:44 np0005593232 systemd-machined[215836]: New machine qemu-42-instance-00000069.
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.755 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3f9e517e-b574-4508-95bd-ab35d2b440f0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febb:8995'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647785, 'tstamp': 647785}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318091, 'error': None, 'target': 'ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:44 np0005593232 systemd[1]: Started Virtual Machine qemu-42-instance-00000069.
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.772 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[43a8a2ca-439e-4f45-90a1-73ff953053dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d65fb2c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:89:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647785, 'reachable_time': 34327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318092, 'error': None, 'target': 'ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.814 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b8c88daf-0837-4077-9e34-b9500bb549b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.878 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[10822659-b1a2-41cd-9bc8-cfb98ac80907]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.881 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d65fb2c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.881 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.882 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d65fb2c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:02:44 np0005593232 nova_compute[250269]: 2026-01-23 10:02:44.884 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:44 np0005593232 kernel: tap5d65fb2c-50: entered promiscuous mode
Jan 23 05:02:44 np0005593232 NetworkManager[49057]: <info>  [1769162564.8848] manager: (tap5d65fb2c-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Jan 23 05:02:44 np0005593232 nova_compute[250269]: 2026-01-23 10:02:44.886 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.888 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d65fb2c-50, col_values=(('external_ids', {'iface-id': 'd4b096a2-d3ab-4a1a-b849-ee44a5218036'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:02:44 np0005593232 nova_compute[250269]: 2026-01-23 10:02:44.889 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:44 np0005593232 ovn_controller[151001]: 2026-01-23T10:02:44Z|00339|binding|INFO|Releasing lport d4b096a2-d3ab-4a1a-b849-ee44a5218036 from this chassis (sb_readonly=1)
Jan 23 05:02:44 np0005593232 nova_compute[250269]: 2026-01-23 10:02:44.905 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.907 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5d65fb2c-55c9-4b50-aff7-9502add4a8c8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5d65fb2c-55c9-4b50-aff7-9502add4a8c8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.909 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6075bea5-59a9-4002-8d96-fa1dadf810e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.910 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-5d65fb2c-55c9-4b50-aff7-9502add4a8c8
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/5d65fb2c-55c9-4b50-aff7-9502add4a8c8.pid.haproxy
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 5d65fb2c-55c9-4b50-aff7-9502add4a8c8
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:02:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:44.911 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8', 'env', 'PROCESS_TAG=haproxy-5d65fb2c-55c9-4b50-aff7-9502add4a8c8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5d65fb2c-55c9-4b50-aff7-9502add4a8c8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:02:44 np0005593232 nova_compute[250269]: 2026-01-23 10:02:44.967 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:02:44 np0005593232 nova_compute[250269]: 2026-01-23 10:02:44.978 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162564.1095932, 799fa81c-31fa-4706-ba6f-5918fddd4caa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:02:44 np0005593232 nova_compute[250269]: 2026-01-23 10:02:44.979 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:02:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2147: 321 pgs: 321 active+clean; 672 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.7 KiB/s wr, 101 op/s
Jan 23 05:02:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:02:45Z|00340|binding|INFO|Claiming lport d80acda3-66d1-4992-889f-b668c7ef1094 for this chassis.
Jan 23 05:02:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:02:45Z|00341|binding|INFO|d80acda3-66d1-4992-889f-b668c7ef1094: Claiming fa:16:3e:c0:8b:df 10.100.0.6
Jan 23 05:02:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:02:45Z|00342|binding|INFO|Setting lport d80acda3-66d1-4992-889f-b668c7ef1094 ovn-installed in OVS
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.124 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.182 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.187 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.203 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:02:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:02:45Z|00343|binding|INFO|Setting lport d80acda3-66d1-4992-889f-b668c7ef1094 up in Southbound
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.203 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:02:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:45.203 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:8b:df 10.100.0.6'], port_security=['fa:16:3e:c0:8b:df 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6679cffe-216f-4ac9-87ef-45526b43ad12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d65fb2c-55c9-4b50-aff7-9502add4a8c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '449f402258804f41b10f91a13da1176d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '07d82268-5230-4e65-b516-4629d542718c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c66279a-df2c-4cb2-9aaa-3c8f92544e07, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=d80acda3-66d1-4992-889f-b668c7ef1094) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.249 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.303 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.303 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:02:45 np0005593232 podman[318165]: 2026-01-23 10:02:45.312042386 +0000 UTC m=+0.060190311 container create 2d405e10791f2425e6c4583cd153dec63e478dc37daf5a9e7b220815ee9f6e04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:02:45 np0005593232 systemd[1]: Started libpod-conmon-2d405e10791f2425e6c4583cd153dec63e478dc37daf5a9e7b220815ee9f6e04.scope.
Jan 23 05:02:45 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:02:45 np0005593232 podman[318165]: 2026-01-23 10:02:45.28016636 +0000 UTC m=+0.028314315 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:02:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/336d0461e6e8be3868767d3675c81cde1d615fc156ed1951233da19fffc53976/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.389 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162565.388752, 6679cffe-216f-4ac9-87ef-45526b43ad12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.389 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] VM Started (Lifecycle Event)#033[00m
Jan 23 05:02:45 np0005593232 podman[318165]: 2026-01-23 10:02:45.400672404 +0000 UTC m=+0.148820359 container init 2d405e10791f2425e6c4583cd153dec63e478dc37daf5a9e7b220815ee9f6e04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 23 05:02:45 np0005593232 podman[318165]: 2026-01-23 10:02:45.406405467 +0000 UTC m=+0.154553392 container start 2d405e10791f2425e6c4583cd153dec63e478dc37daf5a9e7b220815ee9f6e04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 05:02:45 np0005593232 neutron-haproxy-ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8[318187]: [NOTICE]   (318191) : New worker (318193) forked
Jan 23 05:02:45 np0005593232 neutron-haproxy-ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8[318187]: [NOTICE]   (318191) : Loading success.
Jan 23 05:02:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:45.464 161902 INFO neutron.agent.ovn.metadata.agent [-] Port d80acda3-66d1-4992-889f-b668c7ef1094 in datapath 5d65fb2c-55c9-4b50-aff7-9502add4a8c8 unbound from our chassis#033[00m
Jan 23 05:02:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:45.466 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d65fb2c-55c9-4b50-aff7-9502add4a8c8#033[00m
Jan 23 05:02:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:45.479 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9494abaa-b286-4d5f-847d-58af4c03bcba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:45.503 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[b2982152-ee2c-4d1f-9a74-2ca4337a976e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:45.506 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[8450e53c-5523-4e6b-82c7-a5100b0acb06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.529 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.533 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162565.3890188, 6679cffe-216f-4ac9-87ef-45526b43ad12 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.533 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:02:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:45.535 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[c6c1df54-9814-4bc3-8507-74fa7a0dc9c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:45.553 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0d19d77a-babf-4cc2-8fa2-a35668d6ce0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d65fb2c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:89:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 4, 'rx_bytes': 176, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 4, 'rx_bytes': 176, 'tx_bytes': 264, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647785, 'reachable_time': 34327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318207, 'error': None, 'target': 'ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:45.568 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[81af2264-82b8-4c89-923f-1de156a46f05]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5d65fb2c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647799, 'tstamp': 647799}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318208, 'error': None, 'target': 'ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5d65fb2c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647801, 'tstamp': 647801}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318208, 'error': None, 'target': 'ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:45.570 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d65fb2c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.572 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.573 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:45.573 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d65fb2c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:02:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:45.573 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:02:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:45.573 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d65fb2c-50, col_values=(('external_ids', {'iface-id': 'd4b096a2-d3ab-4a1a-b849-ee44a5218036'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:02:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:45.574 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.579 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.581 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.627 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.741 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.874 250273 DEBUG nova.compute.manager [req-b04c6617-25ff-4bf4-a321-e141747f027e req-0a07f6fa-5a98-45ce-9500-2cfb71d9d903 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Received event network-vif-plugged-f80bf815-6340-4f49-a6aa-1d0aaf916228 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.874 250273 DEBUG oslo_concurrency.lockutils [req-b04c6617-25ff-4bf4-a321-e141747f027e req-0a07f6fa-5a98-45ce-9500-2cfb71d9d903 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "799fa81c-31fa-4706-ba6f-5918fddd4caa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.875 250273 DEBUG oslo_concurrency.lockutils [req-b04c6617-25ff-4bf4-a321-e141747f027e req-0a07f6fa-5a98-45ce-9500-2cfb71d9d903 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "799fa81c-31fa-4706-ba6f-5918fddd4caa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.875 250273 DEBUG oslo_concurrency.lockutils [req-b04c6617-25ff-4bf4-a321-e141747f027e req-0a07f6fa-5a98-45ce-9500-2cfb71d9d903 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "799fa81c-31fa-4706-ba6f-5918fddd4caa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.876 250273 DEBUG nova.compute.manager [req-b04c6617-25ff-4bf4-a321-e141747f027e req-0a07f6fa-5a98-45ce-9500-2cfb71d9d903 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Processing event network-vif-plugged-f80bf815-6340-4f49-a6aa-1d0aaf916228 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.877 250273 DEBUG nova.compute.manager [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.881 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162565.8814695, 799fa81c-31fa-4706-ba6f-5918fddd4caa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.882 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.884 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.888 250273 INFO nova.virt.libvirt.driver [-] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Instance spawned successfully.#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.889 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.936 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.938 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.945 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.946 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.946 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.946 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.947 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:02:45 np0005593232 nova_compute[250269]: 2026-01-23 10:02:45.947 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:02:46 np0005593232 nova_compute[250269]: 2026-01-23 10:02:46.021 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:02:46 np0005593232 nova_compute[250269]: 2026-01-23 10:02:46.105 250273 INFO nova.compute.manager [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Took 23.57 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:02:46 np0005593232 nova_compute[250269]: 2026-01-23 10:02:46.106 250273 DEBUG nova.compute.manager [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:02:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:46.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:46 np0005593232 nova_compute[250269]: 2026-01-23 10:02:46.340 250273 INFO nova.compute.manager [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Took 25.66 seconds to build instance.#033[00m
Jan 23 05:02:46 np0005593232 nova_compute[250269]: 2026-01-23 10:02:46.411 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "799fa81c-31fa-4706-ba6f-5918fddd4caa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.896s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:02:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:02:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:46.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:02:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:02:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:02:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:02:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:02:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:02:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.01581275154708729 of space, bias 1.0, pg target 4.7438254641261866 quantized to 32 (current 32)
Jan 23 05:02:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:02:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.3799088032756375e-05 quantized to 32 (current 32)
Jan 23 05:02:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:02:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:02:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:02:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.563330250790992 quantized to 32 (current 32)
Jan 23 05:02:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:02:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 32)
Jan 23 05:02:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:02:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:02:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:02:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Jan 23 05:02:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:02:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 23 05:02:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:02:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:02:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:02:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 23 05:02:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2148: 321 pgs: 321 active+clean; 702 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 2.5 MiB/s wr, 217 op/s
Jan 23 05:02:47 np0005593232 nova_compute[250269]: 2026-01-23 10:02:47.374 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:47 np0005593232 nova_compute[250269]: 2026-01-23 10:02:47.800 250273 DEBUG nova.network.neutron [req-eb500b75-d0cc-4a8e-8583-95bae7cf2806 req-672a03ec-2e17-4017-9e8b-c1b5d8600e91 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Updated VIF entry in instance network info cache for port d80acda3-66d1-4992-889f-b668c7ef1094. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:02:47 np0005593232 nova_compute[250269]: 2026-01-23 10:02:47.801 250273 DEBUG nova.network.neutron [req-eb500b75-d0cc-4a8e-8583-95bae7cf2806 req-672a03ec-2e17-4017-9e8b-c1b5d8600e91 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Updating instance_info_cache with network_info: [{"id": "d80acda3-66d1-4992-889f-b668c7ef1094", "address": "fa:16:3e:c0:8b:df", "network": {"id": "5d65fb2c-55c9-4b50-aff7-9502add4a8c8", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1244961874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "449f402258804f41b10f91a13da1176d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd80acda3-66", "ovs_interfaceid": "d80acda3-66d1-4992-889f-b668c7ef1094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:02:47 np0005593232 nova_compute[250269]: 2026-01-23 10:02:47.940 250273 DEBUG oslo_concurrency.lockutils [req-eb500b75-d0cc-4a8e-8583-95bae7cf2806 req-672a03ec-2e17-4017-9e8b-c1b5d8600e91 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-6679cffe-216f-4ac9-87ef-45526b43ad12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:02:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:02:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:48.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.122 250273 DEBUG nova.compute.manager [req-d4ad1445-d5c6-4cb9-9d6c-6676ea409d88 req-d64c0a19-8295-4380-bdac-6b5ccfb9f896 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Received event network-vif-plugged-f80bf815-6340-4f49-a6aa-1d0aaf916228 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.123 250273 DEBUG oslo_concurrency.lockutils [req-d4ad1445-d5c6-4cb9-9d6c-6676ea409d88 req-d64c0a19-8295-4380-bdac-6b5ccfb9f896 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "799fa81c-31fa-4706-ba6f-5918fddd4caa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.123 250273 DEBUG oslo_concurrency.lockutils [req-d4ad1445-d5c6-4cb9-9d6c-6676ea409d88 req-d64c0a19-8295-4380-bdac-6b5ccfb9f896 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "799fa81c-31fa-4706-ba6f-5918fddd4caa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.123 250273 DEBUG oslo_concurrency.lockutils [req-d4ad1445-d5c6-4cb9-9d6c-6676ea409d88 req-d64c0a19-8295-4380-bdac-6b5ccfb9f896 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "799fa81c-31fa-4706-ba6f-5918fddd4caa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.124 250273 DEBUG nova.compute.manager [req-d4ad1445-d5c6-4cb9-9d6c-6676ea409d88 req-d64c0a19-8295-4380-bdac-6b5ccfb9f896 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] No waiting events found dispatching network-vif-plugged-f80bf815-6340-4f49-a6aa-1d0aaf916228 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.124 250273 WARNING nova.compute.manager [req-d4ad1445-d5c6-4cb9-9d6c-6676ea409d88 req-d64c0a19-8295-4380-bdac-6b5ccfb9f896 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Received unexpected event network-vif-plugged-f80bf815-6340-4f49-a6aa-1d0aaf916228 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.124 250273 DEBUG nova.compute.manager [req-d4ad1445-d5c6-4cb9-9d6c-6676ea409d88 req-d64c0a19-8295-4380-bdac-6b5ccfb9f896 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Received event network-vif-plugged-d80acda3-66d1-4992-889f-b668c7ef1094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.124 250273 DEBUG oslo_concurrency.lockutils [req-d4ad1445-d5c6-4cb9-9d6c-6676ea409d88 req-d64c0a19-8295-4380-bdac-6b5ccfb9f896 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "6679cffe-216f-4ac9-87ef-45526b43ad12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.124 250273 DEBUG oslo_concurrency.lockutils [req-d4ad1445-d5c6-4cb9-9d6c-6676ea409d88 req-d64c0a19-8295-4380-bdac-6b5ccfb9f896 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6679cffe-216f-4ac9-87ef-45526b43ad12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.125 250273 DEBUG oslo_concurrency.lockutils [req-d4ad1445-d5c6-4cb9-9d6c-6676ea409d88 req-d64c0a19-8295-4380-bdac-6b5ccfb9f896 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6679cffe-216f-4ac9-87ef-45526b43ad12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.125 250273 DEBUG nova.compute.manager [req-d4ad1445-d5c6-4cb9-9d6c-6676ea409d88 req-d64c0a19-8295-4380-bdac-6b5ccfb9f896 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Processing event network-vif-plugged-d80acda3-66d1-4992-889f-b668c7ef1094 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.126 250273 DEBUG nova.compute.manager [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.128 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162568.12876, 6679cffe-216f-4ac9-87ef-45526b43ad12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.129 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] VM Resumed (Lifecycle Event)
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.139 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.141 250273 INFO nova.virt.libvirt.driver [-] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Instance spawned successfully.
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.141 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.239 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.241 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.262 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.262 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.263 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.263 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.264 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.264 250273 DEBUG nova.virt.libvirt.driver [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.376 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 05:02:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:48.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.469 250273 INFO nova.compute.manager [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Took 24.95 seconds to spawn the instance on the hypervisor.
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.470 250273 DEBUG nova.compute.manager [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.590 250273 INFO nova.compute.manager [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Took 27.63 seconds to build instance.
Jan 23 05:02:48 np0005593232 nova_compute[250269]: 2026-01-23 10:02:48.619 250273 DEBUG oslo_concurrency.lockutils [None req-9660114f-14e4-4f33-88bf-1d67e2d45046 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "6679cffe-216f-4ac9-87ef-45526b43ad12" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 27.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:02:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2149: 321 pgs: 321 active+clean; 731 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.3 MiB/s wr, 221 op/s
Jan 23 05:02:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:50.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:50 np0005593232 nova_compute[250269]: 2026-01-23 10:02:50.433 250273 DEBUG nova.compute.manager [req-1259b03b-7bbe-4679-ab66-096cf5acccc9 req-2cc08814-383c-46d9-812c-76c3d5e53f34 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Received event network-vif-plugged-d80acda3-66d1-4992-889f-b668c7ef1094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:02:50 np0005593232 nova_compute[250269]: 2026-01-23 10:02:50.433 250273 DEBUG oslo_concurrency.lockutils [req-1259b03b-7bbe-4679-ab66-096cf5acccc9 req-2cc08814-383c-46d9-812c-76c3d5e53f34 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "6679cffe-216f-4ac9-87ef-45526b43ad12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:02:50 np0005593232 nova_compute[250269]: 2026-01-23 10:02:50.433 250273 DEBUG oslo_concurrency.lockutils [req-1259b03b-7bbe-4679-ab66-096cf5acccc9 req-2cc08814-383c-46d9-812c-76c3d5e53f34 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6679cffe-216f-4ac9-87ef-45526b43ad12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:02:50 np0005593232 nova_compute[250269]: 2026-01-23 10:02:50.434 250273 DEBUG oslo_concurrency.lockutils [req-1259b03b-7bbe-4679-ab66-096cf5acccc9 req-2cc08814-383c-46d9-812c-76c3d5e53f34 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6679cffe-216f-4ac9-87ef-45526b43ad12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:02:50 np0005593232 nova_compute[250269]: 2026-01-23 10:02:50.434 250273 DEBUG nova.compute.manager [req-1259b03b-7bbe-4679-ab66-096cf5acccc9 req-2cc08814-383c-46d9-812c-76c3d5e53f34 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] No waiting events found dispatching network-vif-plugged-d80acda3-66d1-4992-889f-b668c7ef1094 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 05:02:50 np0005593232 nova_compute[250269]: 2026-01-23 10:02:50.434 250273 WARNING nova.compute.manager [req-1259b03b-7bbe-4679-ab66-096cf5acccc9 req-2cc08814-383c-46d9-812c-76c3d5e53f34 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Received unexpected event network-vif-plugged-d80acda3-66d1-4992-889f-b668c7ef1094 for instance with vm_state active and task_state None.
Jan 23 05:02:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:50.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:50 np0005593232 nova_compute[250269]: 2026-01-23 10:02:50.743 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:02:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2150: 321 pgs: 321 active+clean; 731 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.3 MiB/s wr, 180 op/s
Jan 23 05:02:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:02:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:52.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:52 np0005593232 nova_compute[250269]: 2026-01-23 10:02:52.376 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:02:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:52.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2151: 321 pgs: 321 active+clean; 738 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 4.3 MiB/s wr, 417 op/s
Jan 23 05:02:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:54.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:54.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2152: 321 pgs: 321 active+clean; 738 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 4.3 MiB/s wr, 417 op/s
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.292 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.293 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.294 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.294 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.294 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.295 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.334 250273 DEBUG nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.372 250273 DEBUG nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.372 250273 DEBUG nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Image id 84c0ef19-7f67-4bd3-95d8-507c3e0942ed yields fingerprint a6f655456a04e1d13ef2e44ed4544c38917863a2 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.373 250273 INFO nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] image 84c0ef19-7f67-4bd3-95d8-507c3e0942ed at (/var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2): checking
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.373 250273 DEBUG nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] image 84c0ef19-7f67-4bd3-95d8-507c3e0942ed at (/var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.376 250273 DEBUG nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.376 250273 DEBUG nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Image id ae1f9e37-418c-462f-81d1-3599a6d89de9 yields fingerprint 8edc4c18d7d1964a485fb1b305c460bdc5a45b20 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.377 250273 INFO nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] image ae1f9e37-418c-462f-81d1-3599a6d89de9 at (/var/lib/nova/instances/_base/8edc4c18d7d1964a485fb1b305c460bdc5a45b20): checking
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.377 250273 DEBUG nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] image ae1f9e37-418c-462f-81d1-3599a6d89de9 at (/var/lib/nova/instances/_base/8edc4c18d7d1964a485fb1b305c460bdc5a45b20): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.379 250273 DEBUG nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.379 250273 DEBUG nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] a50f4fa0-c4bd-41c7-be13-6afd972661b6 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.380 250273 DEBUG nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] 799fa81c-31fa-4706-ba6f-5918fddd4caa is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.380 250273 DEBUG nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] 6679cffe-216f-4ac9-87ef-45526b43ad12 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.380 250273 INFO nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Active base files: /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 /var/lib/nova/instances/_base/8edc4c18d7d1964a485fb1b305c460bdc5a45b20
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.381 250273 DEBUG nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.381 250273 DEBUG nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.381 250273 DEBUG nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.381 250273 INFO nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Jan 23 05:02:55 np0005593232 nova_compute[250269]: 2026-01-23 10:02:55.745 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:02:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:56.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:56 np0005593232 nova_compute[250269]: 2026-01-23 10:02:56.392 250273 DEBUG oslo_concurrency.lockutils [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Acquiring lock "799fa81c-31fa-4706-ba6f-5918fddd4caa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:02:56 np0005593232 nova_compute[250269]: 2026-01-23 10:02:56.393 250273 DEBUG oslo_concurrency.lockutils [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "799fa81c-31fa-4706-ba6f-5918fddd4caa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:02:56 np0005593232 nova_compute[250269]: 2026-01-23 10:02:56.393 250273 DEBUG oslo_concurrency.lockutils [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Acquiring lock "799fa81c-31fa-4706-ba6f-5918fddd4caa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:02:56 np0005593232 nova_compute[250269]: 2026-01-23 10:02:56.393 250273 DEBUG oslo_concurrency.lockutils [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "799fa81c-31fa-4706-ba6f-5918fddd4caa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:02:56 np0005593232 nova_compute[250269]: 2026-01-23 10:02:56.393 250273 DEBUG oslo_concurrency.lockutils [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "799fa81c-31fa-4706-ba6f-5918fddd4caa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:02:56 np0005593232 nova_compute[250269]: 2026-01-23 10:02:56.395 250273 INFO nova.compute.manager [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Terminating instance
Jan 23 05:02:56 np0005593232 nova_compute[250269]: 2026-01-23 10:02:56.396 250273 DEBUG nova.compute.manager [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 23 05:02:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:56.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:02:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2153: 321 pgs: 321 active+clean; 687 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 4.3 MiB/s wr, 435 op/s
Jan 23 05:02:57 np0005593232 nova_compute[250269]: 2026-01-23 10:02:57.379 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:02:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:02:58.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:02:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:02:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:02:58.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:02:58 np0005593232 kernel: tapf80bf815-63 (unregistering): left promiscuous mode
Jan 23 05:02:58 np0005593232 NetworkManager[49057]: <info>  [1769162578.5571] device (tapf80bf815-63): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:02:58 np0005593232 ovn_controller[151001]: 2026-01-23T10:02:58Z|00344|binding|INFO|Releasing lport f80bf815-6340-4f49-a6aa-1d0aaf916228 from this chassis (sb_readonly=0)
Jan 23 05:02:58 np0005593232 ovn_controller[151001]: 2026-01-23T10:02:58Z|00345|binding|INFO|Setting lport f80bf815-6340-4f49-a6aa-1d0aaf916228 down in Southbound
Jan 23 05:02:58 np0005593232 ovn_controller[151001]: 2026-01-23T10:02:58Z|00346|binding|INFO|Removing iface tapf80bf815-63 ovn-installed in OVS
Jan 23 05:02:58 np0005593232 nova_compute[250269]: 2026-01-23 10:02:58.573 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:02:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:58.580 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:51:99 10.100.0.13'], port_security=['fa:16:3e:cc:51:99 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '799fa81c-31fa-4706-ba6f-5918fddd4caa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d65fb2c-55c9-4b50-aff7-9502add4a8c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '449f402258804f41b10f91a13da1176d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '07d82268-5230-4e65-b516-4629d542718c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c66279a-df2c-4cb2-9aaa-3c8f92544e07, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=f80bf815-6340-4f49-a6aa-1d0aaf916228) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 05:02:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:58.582 161902 INFO neutron.agent.ovn.metadata.agent [-] Port f80bf815-6340-4f49-a6aa-1d0aaf916228 in datapath 5d65fb2c-55c9-4b50-aff7-9502add4a8c8 unbound from our chassis
Jan 23 05:02:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:58.583 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5d65fb2c-55c9-4b50-aff7-9502add4a8c8
Jan 23 05:02:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:58.599 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[64061220-6612-4d4d-a0c2-d83472351baa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:02:58 np0005593232 nova_compute[250269]: 2026-01-23 10:02:58.599 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:02:58 np0005593232 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000067.scope: Deactivated successfully.
Jan 23 05:02:58 np0005593232 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000067.scope: Consumed 11.569s CPU time.
Jan 23 05:02:58 np0005593232 systemd-machined[215836]: Machine qemu-41-instance-00000067 terminated.
Jan 23 05:02:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:58.626 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[1f357be0-3c06-4131-b114-7e13d8d1d5ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:02:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:58.630 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[046a6e59-8f61-413e-895d-c2be5e3045b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:02:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:58.661 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[56e22e5f-c165-47eb-9a64-54a208885f01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:02:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:58.678 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[291064a7-dd68-4276-9818-194b980c7bbd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5d65fb2c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bb:89:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 103], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647785, 'reachable_time': 34327, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318229, 'error': None, 'target': 'ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:02:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:58.694 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d42fc47c-db7a-4dc3-a11c-001c79a15172]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5d65fb2c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647799, 'tstamp': 647799}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318230, 'error': None, 'target': 'ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5d65fb2c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647801, 'tstamp': 647801}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318230, 'error': None, 'target': 'ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:02:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:58.696 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d65fb2c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:02:58 np0005593232 nova_compute[250269]: 2026-01-23 10:02:58.698 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:58 np0005593232 nova_compute[250269]: 2026-01-23 10:02:58.702 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:58.702 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5d65fb2c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:02:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:58.703 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:02:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:58.703 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5d65fb2c-50, col_values=(('external_ids', {'iface-id': 'd4b096a2-d3ab-4a1a-b849-ee44a5218036'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:02:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:02:58.704 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:02:58 np0005593232 nova_compute[250269]: 2026-01-23 10:02:58.840 250273 INFO nova.virt.libvirt.driver [-] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Instance destroyed successfully.#033[00m
Jan 23 05:02:58 np0005593232 nova_compute[250269]: 2026-01-23 10:02:58.841 250273 DEBUG nova.objects.instance [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lazy-loading 'resources' on Instance uuid 799fa81c-31fa-4706-ba6f-5918fddd4caa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:02:58 np0005593232 nova_compute[250269]: 2026-01-23 10:02:58.881 250273 DEBUG nova.virt.libvirt.vif [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:02:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1630527270',display_name='tempest-ListServersNegativeTestJSON-server-1630527270-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1630527270-1',id=103,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:02:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='449f402258804f41b10f91a13da1176d',ramdisk_id='',reservation_id='r-2yft4oee',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vi
f_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-2075106169',owner_user_name='tempest-ListServersNegativeTestJSON-2075106169-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:02:46Z,user_data=None,user_id='9127d08a3bf5404e8cb8c84ed7152834',uuid=799fa81c-31fa-4706-ba6f-5918fddd4caa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f80bf815-6340-4f49-a6aa-1d0aaf916228", "address": "fa:16:3e:cc:51:99", "network": {"id": "5d65fb2c-55c9-4b50-aff7-9502add4a8c8", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1244961874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "449f402258804f41b10f91a13da1176d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf80bf815-63", "ovs_interfaceid": "f80bf815-6340-4f49-a6aa-1d0aaf916228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:02:58 np0005593232 nova_compute[250269]: 2026-01-23 10:02:58.882 250273 DEBUG nova.network.os_vif_util [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Converting VIF {"id": "f80bf815-6340-4f49-a6aa-1d0aaf916228", "address": "fa:16:3e:cc:51:99", "network": {"id": "5d65fb2c-55c9-4b50-aff7-9502add4a8c8", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1244961874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "449f402258804f41b10f91a13da1176d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf80bf815-63", "ovs_interfaceid": "f80bf815-6340-4f49-a6aa-1d0aaf916228", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:02:58 np0005593232 nova_compute[250269]: 2026-01-23 10:02:58.883 250273 DEBUG nova.network.os_vif_util [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:51:99,bridge_name='br-int',has_traffic_filtering=True,id=f80bf815-6340-4f49-a6aa-1d0aaf916228,network=Network(5d65fb2c-55c9-4b50-aff7-9502add4a8c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf80bf815-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:02:58 np0005593232 nova_compute[250269]: 2026-01-23 10:02:58.883 250273 DEBUG os_vif [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:51:99,bridge_name='br-int',has_traffic_filtering=True,id=f80bf815-6340-4f49-a6aa-1d0aaf916228,network=Network(5d65fb2c-55c9-4b50-aff7-9502add4a8c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf80bf815-63') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:02:58 np0005593232 nova_compute[250269]: 2026-01-23 10:02:58.885 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:58 np0005593232 nova_compute[250269]: 2026-01-23 10:02:58.885 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf80bf815-63, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:02:58 np0005593232 nova_compute[250269]: 2026-01-23 10:02:58.887 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:02:58 np0005593232 nova_compute[250269]: 2026-01-23 10:02:58.890 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:02:58 np0005593232 nova_compute[250269]: 2026-01-23 10:02:58.892 250273 INFO os_vif [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:51:99,bridge_name='br-int',has_traffic_filtering=True,id=f80bf815-6340-4f49-a6aa-1d0aaf916228,network=Network(5d65fb2c-55c9-4b50-aff7-9502add4a8c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf80bf815-63')#033[00m
Jan 23 05:02:58 np0005593232 nova_compute[250269]: 2026-01-23 10:02:58.922 250273 DEBUG nova.compute.manager [req-3b928419-a26c-4cb3-9f26-3c845969d19b req-e9fb0c12-bca0-419c-a8b4-e63caf194918 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Received event network-vif-unplugged-f80bf815-6340-4f49-a6aa-1d0aaf916228 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:02:58 np0005593232 nova_compute[250269]: 2026-01-23 10:02:58.923 250273 DEBUG oslo_concurrency.lockutils [req-3b928419-a26c-4cb3-9f26-3c845969d19b req-e9fb0c12-bca0-419c-a8b4-e63caf194918 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "799fa81c-31fa-4706-ba6f-5918fddd4caa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:02:58 np0005593232 nova_compute[250269]: 2026-01-23 10:02:58.923 250273 DEBUG oslo_concurrency.lockutils [req-3b928419-a26c-4cb3-9f26-3c845969d19b req-e9fb0c12-bca0-419c-a8b4-e63caf194918 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "799fa81c-31fa-4706-ba6f-5918fddd4caa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:02:58 np0005593232 nova_compute[250269]: 2026-01-23 10:02:58.924 250273 DEBUG oslo_concurrency.lockutils [req-3b928419-a26c-4cb3-9f26-3c845969d19b req-e9fb0c12-bca0-419c-a8b4-e63caf194918 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "799fa81c-31fa-4706-ba6f-5918fddd4caa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:02:58 np0005593232 nova_compute[250269]: 2026-01-23 10:02:58.924 250273 DEBUG nova.compute.manager [req-3b928419-a26c-4cb3-9f26-3c845969d19b req-e9fb0c12-bca0-419c-a8b4-e63caf194918 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] No waiting events found dispatching network-vif-unplugged-f80bf815-6340-4f49-a6aa-1d0aaf916228 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:02:58 np0005593232 nova_compute[250269]: 2026-01-23 10:02:58.924 250273 DEBUG nova.compute.manager [req-3b928419-a26c-4cb3-9f26-3c845969d19b req-e9fb0c12-bca0-419c-a8b4-e63caf194918 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Received event network-vif-unplugged-f80bf815-6340-4f49-a6aa-1d0aaf916228 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:02:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2154: 321 pgs: 321 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 1.9 MiB/s wr, 362 op/s
Jan 23 05:02:59 np0005593232 nova_compute[250269]: 2026-01-23 10:02:59.342 250273 INFO nova.virt.libvirt.driver [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Deleting instance files /var/lib/nova/instances/799fa81c-31fa-4706-ba6f-5918fddd4caa_del#033[00m
Jan 23 05:02:59 np0005593232 nova_compute[250269]: 2026-01-23 10:02:59.344 250273 INFO nova.virt.libvirt.driver [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Deletion of /var/lib/nova/instances/799fa81c-31fa-4706-ba6f-5918fddd4caa_del complete#033[00m
Jan 23 05:02:59 np0005593232 nova_compute[250269]: 2026-01-23 10:02:59.435 250273 INFO nova.compute.manager [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Took 3.04 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:02:59 np0005593232 nova_compute[250269]: 2026-01-23 10:02:59.436 250273 DEBUG oslo.service.loopingcall [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:02:59 np0005593232 nova_compute[250269]: 2026-01-23 10:02:59.436 250273 DEBUG nova.compute.manager [-] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:02:59 np0005593232 nova_compute[250269]: 2026-01-23 10:02:59.436 250273 DEBUG nova.network.neutron [-] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:03:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:00.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:00.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:00 np0005593232 nova_compute[250269]: 2026-01-23 10:03:00.747 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2155: 321 pgs: 321 active+clean; 579 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 75 KiB/s wr, 299 op/s
Jan 23 05:03:01 np0005593232 nova_compute[250269]: 2026-01-23 10:03:01.187 250273 DEBUG nova.compute.manager [req-1d33f426-c4b7-43d1-a38b-33a67058ffc5 req-254e080e-2302-430e-9105-3e8759a15445 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Received event network-vif-plugged-f80bf815-6340-4f49-a6aa-1d0aaf916228 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:03:01 np0005593232 nova_compute[250269]: 2026-01-23 10:03:01.187 250273 DEBUG oslo_concurrency.lockutils [req-1d33f426-c4b7-43d1-a38b-33a67058ffc5 req-254e080e-2302-430e-9105-3e8759a15445 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "799fa81c-31fa-4706-ba6f-5918fddd4caa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:03:01 np0005593232 nova_compute[250269]: 2026-01-23 10:03:01.188 250273 DEBUG oslo_concurrency.lockutils [req-1d33f426-c4b7-43d1-a38b-33a67058ffc5 req-254e080e-2302-430e-9105-3e8759a15445 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "799fa81c-31fa-4706-ba6f-5918fddd4caa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:03:01 np0005593232 nova_compute[250269]: 2026-01-23 10:03:01.188 250273 DEBUG oslo_concurrency.lockutils [req-1d33f426-c4b7-43d1-a38b-33a67058ffc5 req-254e080e-2302-430e-9105-3e8759a15445 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "799fa81c-31fa-4706-ba6f-5918fddd4caa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:01 np0005593232 nova_compute[250269]: 2026-01-23 10:03:01.188 250273 DEBUG nova.compute.manager [req-1d33f426-c4b7-43d1-a38b-33a67058ffc5 req-254e080e-2302-430e-9105-3e8759a15445 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] No waiting events found dispatching network-vif-plugged-f80bf815-6340-4f49-a6aa-1d0aaf916228 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:03:01 np0005593232 nova_compute[250269]: 2026-01-23 10:03:01.189 250273 WARNING nova.compute.manager [req-1d33f426-c4b7-43d1-a38b-33a67058ffc5 req-254e080e-2302-430e-9105-3e8759a15445 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Received unexpected event network-vif-plugged-f80bf815-6340-4f49-a6aa-1d0aaf916228 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:03:01 np0005593232 nova_compute[250269]: 2026-01-23 10:03:01.417 250273 DEBUG nova.network.neutron [-] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:03:01 np0005593232 nova_compute[250269]: 2026-01-23 10:03:01.458 250273 INFO nova.compute.manager [-] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Took 2.02 seconds to deallocate network for instance.#033[00m
Jan 23 05:03:01 np0005593232 nova_compute[250269]: 2026-01-23 10:03:01.541 250273 DEBUG oslo_concurrency.lockutils [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:03:01 np0005593232 nova_compute[250269]: 2026-01-23 10:03:01.541 250273 DEBUG oslo_concurrency.lockutils [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:03:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:03:01 np0005593232 nova_compute[250269]: 2026-01-23 10:03:01.774 250273 DEBUG oslo_concurrency.processutils [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:03:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:02.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:03:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3478839507' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:03:02 np0005593232 nova_compute[250269]: 2026-01-23 10:03:02.255 250273 DEBUG oslo_concurrency.processutils [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:03:02 np0005593232 nova_compute[250269]: 2026-01-23 10:03:02.261 250273 DEBUG nova.compute.provider_tree [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:03:02 np0005593232 nova_compute[250269]: 2026-01-23 10:03:02.305 250273 DEBUG nova.scheduler.client.report [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:03:02 np0005593232 nova_compute[250269]: 2026-01-23 10:03:02.347 250273 DEBUG oslo_concurrency.lockutils [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:02 np0005593232 ovn_controller[151001]: 2026-01-23T10:03:02Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c0:8b:df 10.100.0.6
Jan 23 05:03:02 np0005593232 ovn_controller[151001]: 2026-01-23T10:03:02Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c0:8b:df 10.100.0.6
Jan 23 05:03:02 np0005593232 nova_compute[250269]: 2026-01-23 10:03:02.385 250273 INFO nova.scheduler.client.report [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Deleted allocations for instance 799fa81c-31fa-4706-ba6f-5918fddd4caa#033[00m
Jan 23 05:03:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:02.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:02 np0005593232 nova_compute[250269]: 2026-01-23 10:03:02.582 250273 DEBUG oslo_concurrency.lockutils [None req-f878c7bc-d002-4e7c-a716-0155c5ef159c 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "799fa81c-31fa-4706-ba6f-5918fddd4caa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.189s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2156: 321 pgs: 321 active+clean; 672 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 7.2 MiB/s wr, 498 op/s
Jan 23 05:03:03 np0005593232 nova_compute[250269]: 2026-01-23 10:03:03.366 250273 DEBUG nova.compute.manager [req-77fbbc57-af84-4aba-9f0a-f5184e0a6c20 req-f5290a99-c5ac-4f1c-8f8c-6bd58166627c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Received event network-vif-deleted-f80bf815-6340-4f49-a6aa-1d0aaf916228 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:03:03 np0005593232 nova_compute[250269]: 2026-01-23 10:03:03.887 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:04.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:04.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2157: 321 pgs: 321 active+clean; 672 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 7.1 MiB/s wr, 260 op/s
Jan 23 05:03:05 np0005593232 nova_compute[250269]: 2026-01-23 10:03:05.749 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:05 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Jan 23 05:03:05 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:03:05.766030) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:03:05 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Jan 23 05:03:05 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162585766162, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2116, "num_deletes": 251, "total_data_size": 3753745, "memory_usage": 3827360, "flush_reason": "Manual Compaction"}
Jan 23 05:03:05 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Jan 23 05:03:05 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162585906310, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 3639383, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46039, "largest_seqno": 48154, "table_properties": {"data_size": 3629919, "index_size": 5894, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20166, "raw_average_key_size": 20, "raw_value_size": 3610814, "raw_average_value_size": 3665, "num_data_blocks": 257, "num_entries": 985, "num_filter_entries": 985, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769162390, "oldest_key_time": 1769162390, "file_creation_time": 1769162585, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:03:05 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 140317 microseconds, and 9826 cpu microseconds.
Jan 23 05:03:05 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:03:05 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:03:05.906386) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 3639383 bytes OK
Jan 23 05:03:05 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:03:05.906417) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Jan 23 05:03:05 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:03:05.993049) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Jan 23 05:03:05 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:03:05.993099) EVENT_LOG_v1 {"time_micros": 1769162585993090, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:03:05 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:03:05.993130) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:03:05 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3744946, prev total WAL file size 3744946, number of live WAL files 2.
Jan 23 05:03:05 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:03:05 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:03:05.994926) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Jan 23 05:03:05 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:03:05 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(3554KB)], [104(8925KB)]
Jan 23 05:03:05 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162585995054, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 12778728, "oldest_snapshot_seqno": -1}
Jan 23 05:03:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:06.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:06 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 7433 keys, 10853683 bytes, temperature: kUnknown
Jan 23 05:03:06 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162586177569, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 10853683, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10805001, "index_size": 28986, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18629, "raw_key_size": 191859, "raw_average_key_size": 25, "raw_value_size": 10673232, "raw_average_value_size": 1435, "num_data_blocks": 1144, "num_entries": 7433, "num_filter_entries": 7433, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769162585, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:03:06 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:03:06 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:03:06.177896) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 10853683 bytes
Jan 23 05:03:06 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:03:06.182350) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 70.0 rd, 59.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 8.7 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(6.5) write-amplify(3.0) OK, records in: 7952, records dropped: 519 output_compression: NoCompression
Jan 23 05:03:06 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:03:06.182384) EVENT_LOG_v1 {"time_micros": 1769162586182363, "job": 62, "event": "compaction_finished", "compaction_time_micros": 182590, "compaction_time_cpu_micros": 27076, "output_level": 6, "num_output_files": 1, "total_output_size": 10853683, "num_input_records": 7952, "num_output_records": 7433, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:03:06 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:03:06 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162586183218, "job": 62, "event": "table_file_deletion", "file_number": 106}
Jan 23 05:03:06 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:03:06 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162586184735, "job": 62, "event": "table_file_deletion", "file_number": 104}
Jan 23 05:03:06 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:03:05.994549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:03:06 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:03:06.184857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:03:06 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:03:06.184863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:03:06 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:03:06.184907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:03:06 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:03:06.184909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:03:06 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:03:06.184910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:03:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:03:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:06.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:03:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:03:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:06.984 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:03:06 np0005593232 nova_compute[250269]: 2026-01-23 10:03:06.985 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:06.986 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:03:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2158: 321 pgs: 321 active+clean; 653 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 7.8 MiB/s wr, 329 op/s
Jan 23 05:03:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:03:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:03:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:03:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:03:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:03:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:03:07 np0005593232 nova_compute[250269]: 2026-01-23 10:03:07.834 250273 DEBUG oslo_concurrency.lockutils [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Acquiring lock "6679cffe-216f-4ac9-87ef-45526b43ad12" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:03:07 np0005593232 nova_compute[250269]: 2026-01-23 10:03:07.835 250273 DEBUG oslo_concurrency.lockutils [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "6679cffe-216f-4ac9-87ef-45526b43ad12" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:03:07 np0005593232 nova_compute[250269]: 2026-01-23 10:03:07.836 250273 DEBUG oslo_concurrency.lockutils [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Acquiring lock "6679cffe-216f-4ac9-87ef-45526b43ad12-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:03:07 np0005593232 nova_compute[250269]: 2026-01-23 10:03:07.836 250273 DEBUG oslo_concurrency.lockutils [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "6679cffe-216f-4ac9-87ef-45526b43ad12-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:03:07 np0005593232 nova_compute[250269]: 2026-01-23 10:03:07.836 250273 DEBUG oslo_concurrency.lockutils [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "6679cffe-216f-4ac9-87ef-45526b43ad12-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:07 np0005593232 nova_compute[250269]: 2026-01-23 10:03:07.838 250273 INFO nova.compute.manager [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Terminating instance#033[00m
Jan 23 05:03:07 np0005593232 nova_compute[250269]: 2026-01-23 10:03:07.839 250273 DEBUG nova.compute.manager [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:03:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:07.988 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:03:08 np0005593232 kernel: tapd80acda3-66 (unregistering): left promiscuous mode
Jan 23 05:03:08 np0005593232 NetworkManager[49057]: <info>  [1769162588.0175] device (tapd80acda3-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.036 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:08 np0005593232 ovn_controller[151001]: 2026-01-23T10:03:08Z|00347|binding|INFO|Releasing lport d80acda3-66d1-4992-889f-b668c7ef1094 from this chassis (sb_readonly=0)
Jan 23 05:03:08 np0005593232 ovn_controller[151001]: 2026-01-23T10:03:08Z|00348|binding|INFO|Setting lport d80acda3-66d1-4992-889f-b668c7ef1094 down in Southbound
Jan 23 05:03:08 np0005593232 ovn_controller[151001]: 2026-01-23T10:03:08Z|00349|binding|INFO|Removing iface tapd80acda3-66 ovn-installed in OVS
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.038 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.060 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:08.081 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:8b:df 10.100.0.6'], port_security=['fa:16:3e:c0:8b:df 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '6679cffe-216f-4ac9-87ef-45526b43ad12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5d65fb2c-55c9-4b50-aff7-9502add4a8c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '449f402258804f41b10f91a13da1176d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '07d82268-5230-4e65-b516-4629d542718c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0c66279a-df2c-4cb2-9aaa-3c8f92544e07, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=d80acda3-66d1-4992-889f-b668c7ef1094) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:03:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:08.082 161902 INFO neutron.agent.ovn.metadata.agent [-] Port d80acda3-66d1-4992-889f-b668c7ef1094 in datapath 5d65fb2c-55c9-4b50-aff7-9502add4a8c8 unbound from our chassis#033[00m
Jan 23 05:03:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:08.084 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5d65fb2c-55c9-4b50-aff7-9502add4a8c8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:03:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:08.085 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f10b862b-eef4-4d4c-b96a-4707955d0e9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:08.085 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8 namespace which is not needed anymore#033[00m
Jan 23 05:03:08 np0005593232 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000069.scope: Deactivated successfully.
Jan 23 05:03:08 np0005593232 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000069.scope: Consumed 15.284s CPU time.
Jan 23 05:03:08 np0005593232 systemd-machined[215836]: Machine qemu-42-instance-00000069 terminated.
Jan 23 05:03:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:08.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:08 np0005593232 neutron-haproxy-ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8[318187]: [NOTICE]   (318191) : haproxy version is 2.8.14-c23fe91
Jan 23 05:03:08 np0005593232 neutron-haproxy-ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8[318187]: [NOTICE]   (318191) : path to executable is /usr/sbin/haproxy
Jan 23 05:03:08 np0005593232 neutron-haproxy-ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8[318187]: [WARNING]  (318191) : Exiting Master process...
Jan 23 05:03:08 np0005593232 neutron-haproxy-ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8[318187]: [ALERT]    (318191) : Current worker (318193) exited with code 143 (Terminated)
Jan 23 05:03:08 np0005593232 neutron-haproxy-ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8[318187]: [WARNING]  (318191) : All workers exited. Exiting... (0)
Jan 23 05:03:08 np0005593232 systemd[1]: libpod-2d405e10791f2425e6c4583cd153dec63e478dc37daf5a9e7b220815ee9f6e04.scope: Deactivated successfully.
Jan 23 05:03:08 np0005593232 podman[318363]: 2026-01-23 10:03:08.245810432 +0000 UTC m=+0.047119220 container died 2d405e10791f2425e6c4583cd153dec63e478dc37daf5a9e7b220815ee9f6e04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.259 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.265 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.271 250273 INFO nova.virt.libvirt.driver [-] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Instance destroyed successfully.#033[00m
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.271 250273 DEBUG nova.objects.instance [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lazy-loading 'resources' on Instance uuid 6679cffe-216f-4ac9-87ef-45526b43ad12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:03:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay-336d0461e6e8be3868767d3675c81cde1d615fc156ed1951233da19fffc53976-merged.mount: Deactivated successfully.
Jan 23 05:03:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2d405e10791f2425e6c4583cd153dec63e478dc37daf5a9e7b220815ee9f6e04-userdata-shm.mount: Deactivated successfully.
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.291 250273 DEBUG nova.virt.libvirt.vif [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:02:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1630527270',display_name='tempest-ListServersNegativeTestJSON-server-1630527270-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1630527270-3',id=105,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2026-01-23T10:02:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='449f402258804f41b10f91a13da1176d',ramdisk_id='',reservation_id='r-2yft4oee',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-2075106169',owner_user_name='tempest-ListServersNegativeTestJSON-2075106169-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:02:48Z,user_data=None,user_id='9127d08a3bf5404e8cb8c84ed7152834',uuid=6679cffe-216f-4ac9-87ef-45526b43ad12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d80acda3-66d1-4992-889f-b668c7ef1094", "address": "fa:16:3e:c0:8b:df", "network": {"id": "5d65fb2c-55c9-4b50-aff7-9502add4a8c8", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1244961874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "449f402258804f41b10f91a13da1176d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd80acda3-66", "ovs_interfaceid": "d80acda3-66d1-4992-889f-b668c7ef1094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.292 250273 DEBUG nova.network.os_vif_util [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Converting VIF {"id": "d80acda3-66d1-4992-889f-b668c7ef1094", "address": "fa:16:3e:c0:8b:df", "network": {"id": "5d65fb2c-55c9-4b50-aff7-9502add4a8c8", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-1244961874-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "449f402258804f41b10f91a13da1176d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd80acda3-66", "ovs_interfaceid": "d80acda3-66d1-4992-889f-b668c7ef1094", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.293 250273 DEBUG nova.network.os_vif_util [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:8b:df,bridge_name='br-int',has_traffic_filtering=True,id=d80acda3-66d1-4992-889f-b668c7ef1094,network=Network(5d65fb2c-55c9-4b50-aff7-9502add4a8c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd80acda3-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:03:08 np0005593232 podman[318363]: 2026-01-23 10:03:08.2939739 +0000 UTC m=+0.095282678 container cleanup 2d405e10791f2425e6c4583cd153dec63e478dc37daf5a9e7b220815ee9f6e04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.294 250273 DEBUG os_vif [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:8b:df,bridge_name='br-int',has_traffic_filtering=True,id=d80acda3-66d1-4992-889f-b668c7ef1094,network=Network(5d65fb2c-55c9-4b50-aff7-9502add4a8c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd80acda3-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.295 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.295 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd80acda3-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.297 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.299 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.302 250273 INFO os_vif [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:8b:df,bridge_name='br-int',has_traffic_filtering=True,id=d80acda3-66d1-4992-889f-b668c7ef1094,network=Network(5d65fb2c-55c9-4b50-aff7-9502add4a8c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd80acda3-66')#033[00m
Jan 23 05:03:08 np0005593232 systemd[1]: libpod-conmon-2d405e10791f2425e6c4583cd153dec63e478dc37daf5a9e7b220815ee9f6e04.scope: Deactivated successfully.
Jan 23 05:03:08 np0005593232 podman[318401]: 2026-01-23 10:03:08.369753874 +0000 UTC m=+0.048294294 container remove 2d405e10791f2425e6c4583cd153dec63e478dc37daf5a9e7b220815ee9f6e04 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 23 05:03:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:08.375 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[29d1291c-bd3b-46a7-b645-ea35bb353de1]: (4, ('Fri Jan 23 10:03:08 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8 (2d405e10791f2425e6c4583cd153dec63e478dc37daf5a9e7b220815ee9f6e04)\n2d405e10791f2425e6c4583cd153dec63e478dc37daf5a9e7b220815ee9f6e04\nFri Jan 23 10:03:08 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8 (2d405e10791f2425e6c4583cd153dec63e478dc37daf5a9e7b220815ee9f6e04)\n2d405e10791f2425e6c4583cd153dec63e478dc37daf5a9e7b220815ee9f6e04\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:08.377 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[61bd35ce-7aeb-4186-ba4a-ac484bb420d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:08.378 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5d65fb2c-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.379 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:08 np0005593232 kernel: tap5d65fb2c-50: left promiscuous mode
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.394 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:08.400 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[de85fe80-5364-4fa2-bef7-5d11891aa979]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:08.417 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4eef0f3a-40f5-488c-beb4-100f45168244]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:08.418 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[bc17a618-8bfd-451d-bfc2-37e3c98194f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:08.434 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0fcfb0a9-7d3f-46cf-bf05-f994a122316e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647776, 'reachable_time': 41785, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318434, 'error': None, 'target': 'ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:08 np0005593232 systemd[1]: run-netns-ovnmeta\x2d5d65fb2c\x2d55c9\x2d4b50\x2daff7\x2d9502add4a8c8.mount: Deactivated successfully.
Jan 23 05:03:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:08.438 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5d65fb2c-55c9-4b50-aff7-9502add4a8c8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:03:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:08.438 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[49bc70d2-0d36-4f13-812d-1ce22fb30ea1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.452 250273 DEBUG nova.compute.manager [req-b832d286-2b9d-4f8f-ac2c-6c5b920d7ffd req-8d78dba9-115e-4383-8e2f-751be32751ee 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Received event network-vif-unplugged-d80acda3-66d1-4992-889f-b668c7ef1094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.452 250273 DEBUG oslo_concurrency.lockutils [req-b832d286-2b9d-4f8f-ac2c-6c5b920d7ffd req-8d78dba9-115e-4383-8e2f-751be32751ee 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "6679cffe-216f-4ac9-87ef-45526b43ad12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.453 250273 DEBUG oslo_concurrency.lockutils [req-b832d286-2b9d-4f8f-ac2c-6c5b920d7ffd req-8d78dba9-115e-4383-8e2f-751be32751ee 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6679cffe-216f-4ac9-87ef-45526b43ad12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.453 250273 DEBUG oslo_concurrency.lockutils [req-b832d286-2b9d-4f8f-ac2c-6c5b920d7ffd req-8d78dba9-115e-4383-8e2f-751be32751ee 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6679cffe-216f-4ac9-87ef-45526b43ad12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.453 250273 DEBUG nova.compute.manager [req-b832d286-2b9d-4f8f-ac2c-6c5b920d7ffd req-8d78dba9-115e-4383-8e2f-751be32751ee 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] No waiting events found dispatching network-vif-unplugged-d80acda3-66d1-4992-889f-b668c7ef1094 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.454 250273 DEBUG nova.compute.manager [req-b832d286-2b9d-4f8f-ac2c-6c5b920d7ffd req-8d78dba9-115e-4383-8e2f-751be32751ee 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Received event network-vif-unplugged-d80acda3-66d1-4992-889f-b668c7ef1094 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:03:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:08.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.726 250273 INFO nova.virt.libvirt.driver [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Deleting instance files /var/lib/nova/instances/6679cffe-216f-4ac9-87ef-45526b43ad12_del#033[00m
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.727 250273 INFO nova.virt.libvirt.driver [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Deletion of /var/lib/nova/instances/6679cffe-216f-4ac9-87ef-45526b43ad12_del complete#033[00m
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.819 250273 INFO nova.compute.manager [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Took 0.98 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.820 250273 DEBUG oslo.service.loopingcall [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.820 250273 DEBUG nova.compute.manager [-] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:03:08 np0005593232 nova_compute[250269]: 2026-01-23 10:03:08.821 250273 DEBUG nova.network.neutron [-] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:03:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2159: 321 pgs: 321 active+clean; 599 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 7.8 MiB/s wr, 413 op/s
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.277 250273 DEBUG oslo_concurrency.lockutils [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Acquiring lock "a50f4fa0-c4bd-41c7-be13-6afd972661b6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.278 250273 DEBUG oslo_concurrency.lockutils [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Lock "a50f4fa0-c4bd-41c7-be13-6afd972661b6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.278 250273 DEBUG oslo_concurrency.lockutils [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Acquiring lock "a50f4fa0-c4bd-41c7-be13-6afd972661b6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.278 250273 DEBUG oslo_concurrency.lockutils [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Lock "a50f4fa0-c4bd-41c7-be13-6afd972661b6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.278 250273 DEBUG oslo_concurrency.lockutils [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Lock "a50f4fa0-c4bd-41c7-be13-6afd972661b6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.279 250273 INFO nova.compute.manager [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Terminating instance#033[00m
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.280 250273 DEBUG nova.compute.manager [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:03:09 np0005593232 kernel: tapd72fb045-89 (unregistering): left promiscuous mode
Jan 23 05:03:09 np0005593232 NetworkManager[49057]: <info>  [1769162589.3359] device (tapd72fb045-89): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:03:09 np0005593232 ovn_controller[151001]: 2026-01-23T10:03:09Z|00350|binding|INFO|Releasing lport d72fb045-89f6-4d64-beed-2672b5b1e254 from this chassis (sb_readonly=0)
Jan 23 05:03:09 np0005593232 ovn_controller[151001]: 2026-01-23T10:03:09Z|00351|binding|INFO|Setting lport d72fb045-89f6-4d64-beed-2672b5b1e254 down in Southbound
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.343 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:09 np0005593232 ovn_controller[151001]: 2026-01-23T10:03:09Z|00352|binding|INFO|Removing iface tapd72fb045-89 ovn-installed in OVS
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.346 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:09.351 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:6d:17 10.100.0.5'], port_security=['fa:16:3e:94:6d:17 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a50f4fa0-c4bd-41c7-be13-6afd972661b6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-969bd83a-7542-46e3-90f0-1a81f26ba6b8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0f5ca0233c1a490aa2d596b88a0ec503', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ad8a7362-692a-4044-8393-1c10014f8bab', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=83406af9-ea42-4cda-96ee-b8c04ab0651a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=d72fb045-89f6-4d64-beed-2672b5b1e254) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:03:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:09.352 161902 INFO neutron.agent.ovn.metadata.agent [-] Port d72fb045-89f6-4d64-beed-2672b5b1e254 in datapath 969bd83a-7542-46e3-90f0-1a81f26ba6b8 unbound from our chassis#033[00m
Jan 23 05:03:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:09.354 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 969bd83a-7542-46e3-90f0-1a81f26ba6b8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:03:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:09.355 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[677c6243-5474-44d4-b2f4-3c11c4296c7b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:09.356 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-969bd83a-7542-46e3-90f0-1a81f26ba6b8 namespace which is not needed anymore#033[00m
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.362 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:09 np0005593232 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000063.scope: Deactivated successfully.
Jan 23 05:03:09 np0005593232 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000063.scope: Consumed 16.080s CPU time.
Jan 23 05:03:09 np0005593232 systemd-machined[215836]: Machine qemu-40-instance-00000063 terminated.
Jan 23 05:03:09 np0005593232 podman[318436]: 2026-01-23 10:03:09.426721699 +0000 UTC m=+0.081902849 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
org.label-schema.build-date=20251202, tcib_managed=true)
Jan 23 05:03:09 np0005593232 neutron-haproxy-ovnmeta-969bd83a-7542-46e3-90f0-1a81f26ba6b8[314670]: [NOTICE]   (314689) : haproxy version is 2.8.14-c23fe91
Jan 23 05:03:09 np0005593232 neutron-haproxy-ovnmeta-969bd83a-7542-46e3-90f0-1a81f26ba6b8[314670]: [NOTICE]   (314689) : path to executable is /usr/sbin/haproxy
Jan 23 05:03:09 np0005593232 neutron-haproxy-ovnmeta-969bd83a-7542-46e3-90f0-1a81f26ba6b8[314670]: [WARNING]  (314689) : Exiting Master process...
Jan 23 05:03:09 np0005593232 neutron-haproxy-ovnmeta-969bd83a-7542-46e3-90f0-1a81f26ba6b8[314670]: [ALERT]    (314689) : Current worker (314701) exited with code 143 (Terminated)
Jan 23 05:03:09 np0005593232 neutron-haproxy-ovnmeta-969bd83a-7542-46e3-90f0-1a81f26ba6b8[314670]: [WARNING]  (314689) : All workers exited. Exiting... (0)
Jan 23 05:03:09 np0005593232 systemd[1]: libpod-b77622b8b80eaf2ca08503ba6572deda2be6acc1bb8bf70349590a6a6d1c0004.scope: Deactivated successfully.
Jan 23 05:03:09 np0005593232 podman[318480]: 2026-01-23 10:03:09.481724852 +0000 UTC m=+0.041361187 container died b77622b8b80eaf2ca08503ba6572deda2be6acc1bb8bf70349590a6a6d1c0004 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-969bd83a-7542-46e3-90f0-1a81f26ba6b8, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:03:09 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b77622b8b80eaf2ca08503ba6572deda2be6acc1bb8bf70349590a6a6d1c0004-userdata-shm.mount: Deactivated successfully.
Jan 23 05:03:09 np0005593232 systemd[1]: var-lib-containers-storage-overlay-46b5a3e0537163772e8b8fba92a087b85c482a7d8ba77dc573abfcdfa7482d23-merged.mount: Deactivated successfully.
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.521 250273 INFO nova.virt.libvirt.driver [-] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Instance destroyed successfully.#033[00m
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.522 250273 DEBUG nova.objects.instance [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Lazy-loading 'resources' on Instance uuid a50f4fa0-c4bd-41c7-be13-6afd972661b6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:03:09 np0005593232 podman[318480]: 2026-01-23 10:03:09.528222013 +0000 UTC m=+0.087858348 container cleanup b77622b8b80eaf2ca08503ba6572deda2be6acc1bb8bf70349590a6a6d1c0004 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-969bd83a-7542-46e3-90f0-1a81f26ba6b8, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:03:09 np0005593232 systemd[1]: libpod-conmon-b77622b8b80eaf2ca08503ba6572deda2be6acc1bb8bf70349590a6a6d1c0004.scope: Deactivated successfully.
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.544 250273 DEBUG nova.virt.libvirt.vif [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:01:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1689385279',display_name='tempest-ListServerFiltersTestJSON-instance-1689385279',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1689385279',id=99,image_ref='ae1f9e37-418c-462f-81d1-3599a6d89de9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:02:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0f5ca0233c1a490aa2d596b88a0ec503',ramdisk_id='',reservation_id='r-nt00hew7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ae1f9e37-418c-462f-81d1-3599a6d89de9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mod
el='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-1524131674',owner_user_name='tempest-ListServerFiltersTestJSON-1524131674-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:02:04Z,user_data=None,user_id='c09e682996b940dc97c866f9e4f1e74e',uuid=a50f4fa0-c4bd-41c7-be13-6afd972661b6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d72fb045-89f6-4d64-beed-2672b5b1e254", "address": "fa:16:3e:94:6d:17", "network": {"id": "969bd83a-7542-46e3-90f0-1a81f26ba6b8", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1578393838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f5ca0233c1a490aa2d596b88a0ec503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd72fb045-89", "ovs_interfaceid": "d72fb045-89f6-4d64-beed-2672b5b1e254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.545 250273 DEBUG nova.network.os_vif_util [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Converting VIF {"id": "d72fb045-89f6-4d64-beed-2672b5b1e254", "address": "fa:16:3e:94:6d:17", "network": {"id": "969bd83a-7542-46e3-90f0-1a81f26ba6b8", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-1578393838-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0f5ca0233c1a490aa2d596b88a0ec503", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd72fb045-89", "ovs_interfaceid": "d72fb045-89f6-4d64-beed-2672b5b1e254", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.546 250273 DEBUG nova.network.os_vif_util [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:6d:17,bridge_name='br-int',has_traffic_filtering=True,id=d72fb045-89f6-4d64-beed-2672b5b1e254,network=Network(969bd83a-7542-46e3-90f0-1a81f26ba6b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd72fb045-89') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.546 250273 DEBUG os_vif [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:6d:17,bridge_name='br-int',has_traffic_filtering=True,id=d72fb045-89f6-4d64-beed-2672b5b1e254,network=Network(969bd83a-7542-46e3-90f0-1a81f26ba6b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd72fb045-89') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.548 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.548 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd72fb045-89, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.550 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.551 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.554 250273 INFO os_vif [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:6d:17,bridge_name='br-int',has_traffic_filtering=True,id=d72fb045-89f6-4d64-beed-2672b5b1e254,network=Network(969bd83a-7542-46e3-90f0-1a81f26ba6b8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd72fb045-89')#033[00m
Jan 23 05:03:09 np0005593232 podman[318516]: 2026-01-23 10:03:09.593293942 +0000 UTC m=+0.043986941 container remove b77622b8b80eaf2ca08503ba6572deda2be6acc1bb8bf70349590a6a6d1c0004 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-969bd83a-7542-46e3-90f0-1a81f26ba6b8, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:03:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:09.611 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e1b855-ddd1-46dc-a2a6-5f05a8a835fd]: (4, ('Fri Jan 23 10:03:09 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-969bd83a-7542-46e3-90f0-1a81f26ba6b8 (b77622b8b80eaf2ca08503ba6572deda2be6acc1bb8bf70349590a6a6d1c0004)\nb77622b8b80eaf2ca08503ba6572deda2be6acc1bb8bf70349590a6a6d1c0004\nFri Jan 23 10:03:09 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-969bd83a-7542-46e3-90f0-1a81f26ba6b8 (b77622b8b80eaf2ca08503ba6572deda2be6acc1bb8bf70349590a6a6d1c0004)\nb77622b8b80eaf2ca08503ba6572deda2be6acc1bb8bf70349590a6a6d1c0004\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:09.613 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[377350cc-9c82-41a1-8dde-5496fb3fa7cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:09.614 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap969bd83a-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:03:09 np0005593232 kernel: tap969bd83a-70: left promiscuous mode
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.617 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.631 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:09.634 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[92e451d0-472e-4580-992e-06301087a3b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:09.657 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a81d000a-752e-4324-98ed-586b7b464242]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:09.658 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c2c34367-273f-4a10-9e35-1d440b75f200]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:09.674 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6191a9f3-d415-44dd-8138-2ce01b9547b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 643617, 'reachable_time': 44260, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318547, 'error': None, 'target': 'ovnmeta-969bd83a-7542-46e3-90f0-1a81f26ba6b8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:09 np0005593232 systemd[1]: run-netns-ovnmeta\x2d969bd83a\x2d7542\x2d46e3\x2d90f0\x2d1a81f26ba6b8.mount: Deactivated successfully.
Jan 23 05:03:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:09.678 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-969bd83a-7542-46e3-90f0-1a81f26ba6b8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:03:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:09.678 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[2ce1ed06-5de2-4825-bfec-1de14167defa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.711 250273 DEBUG nova.network.neutron [-] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.733 250273 INFO nova.compute.manager [-] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Took 0.91 seconds to deallocate network for instance.#033[00m
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.790 250273 DEBUG nova.compute.manager [req-04f33bda-5e31-4eea-8a98-5d8564b49018 req-6df9d708-6a9b-45c1-8684-84837020bd1c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Received event network-vif-deleted-d80acda3-66d1-4992-889f-b668c7ef1094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.810 250273 DEBUG oslo_concurrency.lockutils [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.811 250273 DEBUG oslo_concurrency.lockutils [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:03:09 np0005593232 nova_compute[250269]: 2026-01-23 10:03:09.908 250273 DEBUG oslo_concurrency.processutils [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.009 250273 INFO nova.virt.libvirt.driver [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Deleting instance files /var/lib/nova/instances/a50f4fa0-c4bd-41c7-be13-6afd972661b6_del#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.010 250273 INFO nova.virt.libvirt.driver [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Deletion of /var/lib/nova/instances/a50f4fa0-c4bd-41c7-be13-6afd972661b6_del complete#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.072 250273 INFO nova.compute.manager [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Took 0.79 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.073 250273 DEBUG oslo.service.loopingcall [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.073 250273 DEBUG nova.compute.manager [-] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.073 250273 DEBUG nova.network.neutron [-] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:03:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:03:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:10.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:03:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:03:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2614838429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.346 250273 DEBUG oslo_concurrency.processutils [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.354 250273 DEBUG nova.compute.provider_tree [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.386 250273 DEBUG nova.scheduler.client.report [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.414 250273 DEBUG oslo_concurrency.lockutils [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.454 250273 INFO nova.scheduler.client.report [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Deleted allocations for instance 6679cffe-216f-4ac9-87ef-45526b43ad12#033[00m
Jan 23 05:03:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:10.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.584 250273 DEBUG nova.compute.manager [req-f974095c-0453-40a9-97d9-d93fd49971b8 req-977eda81-ebb8-4a87-9465-cc6eac3fc975 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Received event network-vif-plugged-d80acda3-66d1-4992-889f-b668c7ef1094 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.585 250273 DEBUG oslo_concurrency.lockutils [req-f974095c-0453-40a9-97d9-d93fd49971b8 req-977eda81-ebb8-4a87-9465-cc6eac3fc975 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "6679cffe-216f-4ac9-87ef-45526b43ad12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.585 250273 DEBUG oslo_concurrency.lockutils [req-f974095c-0453-40a9-97d9-d93fd49971b8 req-977eda81-ebb8-4a87-9465-cc6eac3fc975 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6679cffe-216f-4ac9-87ef-45526b43ad12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.585 250273 DEBUG oslo_concurrency.lockutils [req-f974095c-0453-40a9-97d9-d93fd49971b8 req-977eda81-ebb8-4a87-9465-cc6eac3fc975 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6679cffe-216f-4ac9-87ef-45526b43ad12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.586 250273 DEBUG nova.compute.manager [req-f974095c-0453-40a9-97d9-d93fd49971b8 req-977eda81-ebb8-4a87-9465-cc6eac3fc975 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] No waiting events found dispatching network-vif-plugged-d80acda3-66d1-4992-889f-b668c7ef1094 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.586 250273 WARNING nova.compute.manager [req-f974095c-0453-40a9-97d9-d93fd49971b8 req-977eda81-ebb8-4a87-9465-cc6eac3fc975 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Received unexpected event network-vif-plugged-d80acda3-66d1-4992-889f-b668c7ef1094 for instance with vm_state deleted and task_state None.#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.752 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.886 250273 DEBUG oslo_concurrency.lockutils [None req-73985b92-bf77-4346-a645-6a381da0f9a1 9127d08a3bf5404e8cb8c84ed7152834 449f402258804f41b10f91a13da1176d - - default default] Lock "6679cffe-216f-4ac9-87ef-45526b43ad12" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.948 250273 DEBUG nova.compute.manager [req-7f4813d2-6e0f-4f37-93ff-9770175bd666 req-52b7b84b-d5d3-4e2d-88a4-a7fb2fdede50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Received event network-vif-unplugged-d72fb045-89f6-4d64-beed-2672b5b1e254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.949 250273 DEBUG oslo_concurrency.lockutils [req-7f4813d2-6e0f-4f37-93ff-9770175bd666 req-52b7b84b-d5d3-4e2d-88a4-a7fb2fdede50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "a50f4fa0-c4bd-41c7-be13-6afd972661b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.949 250273 DEBUG oslo_concurrency.lockutils [req-7f4813d2-6e0f-4f37-93ff-9770175bd666 req-52b7b84b-d5d3-4e2d-88a4-a7fb2fdede50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a50f4fa0-c4bd-41c7-be13-6afd972661b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.949 250273 DEBUG oslo_concurrency.lockutils [req-7f4813d2-6e0f-4f37-93ff-9770175bd666 req-52b7b84b-d5d3-4e2d-88a4-a7fb2fdede50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a50f4fa0-c4bd-41c7-be13-6afd972661b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.949 250273 DEBUG nova.compute.manager [req-7f4813d2-6e0f-4f37-93ff-9770175bd666 req-52b7b84b-d5d3-4e2d-88a4-a7fb2fdede50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] No waiting events found dispatching network-vif-unplugged-d72fb045-89f6-4d64-beed-2672b5b1e254 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.949 250273 DEBUG nova.compute.manager [req-7f4813d2-6e0f-4f37-93ff-9770175bd666 req-52b7b84b-d5d3-4e2d-88a4-a7fb2fdede50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Received event network-vif-unplugged-d72fb045-89f6-4d64-beed-2672b5b1e254 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.950 250273 DEBUG nova.compute.manager [req-7f4813d2-6e0f-4f37-93ff-9770175bd666 req-52b7b84b-d5d3-4e2d-88a4-a7fb2fdede50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Received event network-vif-plugged-d72fb045-89f6-4d64-beed-2672b5b1e254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.950 250273 DEBUG oslo_concurrency.lockutils [req-7f4813d2-6e0f-4f37-93ff-9770175bd666 req-52b7b84b-d5d3-4e2d-88a4-a7fb2fdede50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "a50f4fa0-c4bd-41c7-be13-6afd972661b6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.950 250273 DEBUG oslo_concurrency.lockutils [req-7f4813d2-6e0f-4f37-93ff-9770175bd666 req-52b7b84b-d5d3-4e2d-88a4-a7fb2fdede50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a50f4fa0-c4bd-41c7-be13-6afd972661b6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.951 250273 DEBUG oslo_concurrency.lockutils [req-7f4813d2-6e0f-4f37-93ff-9770175bd666 req-52b7b84b-d5d3-4e2d-88a4-a7fb2fdede50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a50f4fa0-c4bd-41c7-be13-6afd972661b6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.951 250273 DEBUG nova.compute.manager [req-7f4813d2-6e0f-4f37-93ff-9770175bd666 req-52b7b84b-d5d3-4e2d-88a4-a7fb2fdede50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] No waiting events found dispatching network-vif-plugged-d72fb045-89f6-4d64-beed-2672b5b1e254 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:03:10 np0005593232 nova_compute[250269]: 2026-01-23 10:03:10.951 250273 WARNING nova.compute.manager [req-7f4813d2-6e0f-4f37-93ff-9770175bd666 req-52b7b84b-d5d3-4e2d-88a4-a7fb2fdede50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Received unexpected event network-vif-plugged-d72fb045-89f6-4d64-beed-2672b5b1e254 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:03:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2160: 321 pgs: 321 active+clean; 599 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 7.8 MiB/s wr, 370 op/s
Jan 23 05:03:11 np0005593232 nova_compute[250269]: 2026-01-23 10:03:11.433 250273 DEBUG nova.network.neutron [-] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:03:11 np0005593232 nova_compute[250269]: 2026-01-23 10:03:11.463 250273 INFO nova.compute.manager [-] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Took 1.39 seconds to deallocate network for instance.#033[00m
Jan 23 05:03:11 np0005593232 nova_compute[250269]: 2026-01-23 10:03:11.516 250273 DEBUG oslo_concurrency.lockutils [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:03:11 np0005593232 nova_compute[250269]: 2026-01-23 10:03:11.517 250273 DEBUG oslo_concurrency.lockutils [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:03:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:03:11 np0005593232 nova_compute[250269]: 2026-01-23 10:03:11.603 250273 DEBUG oslo_concurrency.processutils [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:03:11 np0005593232 nova_compute[250269]: 2026-01-23 10:03:11.892 250273 DEBUG nova.compute.manager [req-aa3ae095-8a5b-4f87-aaf3-98071c19c56d req-d85bc705-bfaa-4a85-b82a-2fcf7cb17978 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Received event network-vif-deleted-d72fb045-89f6-4d64-beed-2672b5b1e254 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:03:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:03:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4120257233' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:03:12 np0005593232 nova_compute[250269]: 2026-01-23 10:03:12.075 250273 DEBUG oslo_concurrency.processutils [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:03:12 np0005593232 nova_compute[250269]: 2026-01-23 10:03:12.084 250273 DEBUG nova.compute.provider_tree [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:03:12 np0005593232 nova_compute[250269]: 2026-01-23 10:03:12.126 250273 DEBUG nova.scheduler.client.report [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:03:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:12.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:12 np0005593232 nova_compute[250269]: 2026-01-23 10:03:12.181 250273 DEBUG oslo_concurrency.lockutils [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:12 np0005593232 nova_compute[250269]: 2026-01-23 10:03:12.228 250273 INFO nova.scheduler.client.report [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Deleted allocations for instance a50f4fa0-c4bd-41c7-be13-6afd972661b6#033[00m
Jan 23 05:03:12 np0005593232 nova_compute[250269]: 2026-01-23 10:03:12.350 250273 DEBUG oslo_concurrency.lockutils [None req-9b96052c-ff8b-4b32-83e7-9ce30df436a1 c09e682996b940dc97c866f9e4f1e74e 0f5ca0233c1a490aa2d596b88a0ec503 - - default default] Lock "a50f4fa0-c4bd-41c7-be13-6afd972661b6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:03:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:12.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:03:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2161: 321 pgs: 321 active+clean; 283 MiB data, 965 MiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.9 MiB/s wr, 538 op/s
Jan 23 05:03:13 np0005593232 nova_compute[250269]: 2026-01-23 10:03:13.837 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769162578.8357968, 799fa81c-31fa-4706-ba6f-5918fddd4caa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:03:13 np0005593232 nova_compute[250269]: 2026-01-23 10:03:13.838 250273 INFO nova.compute.manager [-] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:03:13 np0005593232 nova_compute[250269]: 2026-01-23 10:03:13.933 250273 DEBUG nova.compute.manager [None req-08fc44a1-2d27-4067-8ddc-6d4a0cc2a204 - - - - - -] [instance: 799fa81c-31fa-4706-ba6f-5918fddd4caa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:03:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:14.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:14 np0005593232 podman[318596]: 2026-01-23 10:03:14.411425211 +0000 UTC m=+0.066050808 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:03:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:14.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:14 np0005593232 nova_compute[250269]: 2026-01-23 10:03:14.552 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2162: 321 pgs: 321 active+clean; 283 MiB data, 965 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 771 KiB/s wr, 339 op/s
Jan 23 05:03:15 np0005593232 ovn_controller[151001]: 2026-01-23T10:03:15Z|00353|binding|INFO|Releasing lport 702d4523-a665-42f5-9a36-57d187c0698a from this chassis (sb_readonly=0)
Jan 23 05:03:15 np0005593232 nova_compute[250269]: 2026-01-23 10:03:15.279 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:15 np0005593232 nova_compute[250269]: 2026-01-23 10:03:15.756 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:16.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:16.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:03:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2163: 321 pgs: 321 active+clean; 269 MiB data, 956 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 772 KiB/s wr, 413 op/s
Jan 23 05:03:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:03:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:18.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:03:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:18.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2164: 321 pgs: 321 active+clean; 202 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 72 KiB/s wr, 475 op/s
Jan 23 05:03:19 np0005593232 nova_compute[250269]: 2026-01-23 10:03:19.555 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:20.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:03:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:20.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:03:20 np0005593232 nova_compute[250269]: 2026-01-23 10:03:20.759 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2165: 321 pgs: 321 active+clean; 202 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 238 KiB/s rd, 17 KiB/s wr, 372 op/s
Jan 23 05:03:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:03:22 np0005593232 nova_compute[250269]: 2026-01-23 10:03:22.124 250273 DEBUG oslo_concurrency.lockutils [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Acquiring lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:03:22 np0005593232 nova_compute[250269]: 2026-01-23 10:03:22.125 250273 DEBUG oslo_concurrency.lockutils [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Acquired lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:03:22 np0005593232 nova_compute[250269]: 2026-01-23 10:03:22.125 250273 DEBUG nova.network.neutron [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:03:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 05:03:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:22.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 05:03:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:22.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2166: 321 pgs: 321 active+clean; 202 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 239 KiB/s rd, 17 KiB/s wr, 373 op/s
Jan 23 05:03:23 np0005593232 nova_compute[250269]: 2026-01-23 10:03:23.269 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769162588.2682843, 6679cffe-216f-4ac9-87ef-45526b43ad12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:03:23 np0005593232 nova_compute[250269]: 2026-01-23 10:03:23.270 250273 INFO nova.compute.manager [-] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:03:23 np0005593232 nova_compute[250269]: 2026-01-23 10:03:23.300 250273 DEBUG nova.compute.manager [None req-25f8f9b2-55db-435d-b285-cff9044405c9 - - - - - -] [instance: 6679cffe-216f-4ac9-87ef-45526b43ad12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:03:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:03:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:24.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:03:24 np0005593232 nova_compute[250269]: 2026-01-23 10:03:24.314 250273 DEBUG nova.network.neutron [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Updating instance_info_cache with network_info: [{"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:03:24 np0005593232 nova_compute[250269]: 2026-01-23 10:03:24.337 250273 DEBUG oslo_concurrency.lockutils [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Releasing lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:03:24 np0005593232 nova_compute[250269]: 2026-01-23 10:03:24.466 250273 DEBUG nova.virt.libvirt.driver [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Jan 23 05:03:24 np0005593232 nova_compute[250269]: 2026-01-23 10:03:24.469 250273 DEBUG nova.virt.libvirt.volume.remotefs [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Creating file /var/lib/nova/instances/1bdbf4d2-447b-47d0-8b3f-878ee65905a7/df20420b24e445019780718dc480cc46.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Jan 23 05:03:24 np0005593232 nova_compute[250269]: 2026-01-23 10:03:24.469 250273 DEBUG oslo_concurrency.processutils [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/1bdbf4d2-447b-47d0-8b3f-878ee65905a7/df20420b24e445019780718dc480cc46.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:03:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:03:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:24.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:03:24 np0005593232 nova_compute[250269]: 2026-01-23 10:03:24.517 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769162589.5169828, a50f4fa0-c4bd-41c7-be13-6afd972661b6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:03:24 np0005593232 nova_compute[250269]: 2026-01-23 10:03:24.518 250273 INFO nova.compute.manager [-] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:03:24 np0005593232 nova_compute[250269]: 2026-01-23 10:03:24.541 250273 DEBUG nova.compute.manager [None req-f5b54729-0635-4bce-b51a-f90a8fd3977f - - - - - -] [instance: a50f4fa0-c4bd-41c7-be13-6afd972661b6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:03:24 np0005593232 nova_compute[250269]: 2026-01-23 10:03:24.557 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:24 np0005593232 nova_compute[250269]: 2026-01-23 10:03:24.894 250273 DEBUG oslo_concurrency.processutils [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/1bdbf4d2-447b-47d0-8b3f-878ee65905a7/df20420b24e445019780718dc480cc46.tmp" returned: 1 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:03:24 np0005593232 nova_compute[250269]: 2026-01-23 10:03:24.895 250273 DEBUG oslo_concurrency.processutils [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/1bdbf4d2-447b-47d0-8b3f-878ee65905a7/df20420b24e445019780718dc480cc46.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 23 05:03:24 np0005593232 nova_compute[250269]: 2026-01-23 10:03:24.895 250273 DEBUG nova.virt.libvirt.volume.remotefs [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Creating directory /var/lib/nova/instances/1bdbf4d2-447b-47d0-8b3f-878ee65905a7 on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Jan 23 05:03:24 np0005593232 nova_compute[250269]: 2026-01-23 10:03:24.896 250273 DEBUG oslo_concurrency.processutils [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/1bdbf4d2-447b-47d0-8b3f-878ee65905a7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:03:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2167: 321 pgs: 321 active+clean; 202 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 125 KiB/s rd, 1.2 KiB/s wr, 204 op/s
Jan 23 05:03:25 np0005593232 nova_compute[250269]: 2026-01-23 10:03:25.091 250273 DEBUG oslo_concurrency.processutils [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/1bdbf4d2-447b-47d0-8b3f-878ee65905a7" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:03:25 np0005593232 nova_compute[250269]: 2026-01-23 10:03:25.097 250273 DEBUG nova.virt.libvirt.driver [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 23 05:03:25 np0005593232 nova_compute[250269]: 2026-01-23 10:03:25.761 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:03:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:26.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:03:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:03:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:26.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:03:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:03:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2168: 321 pgs: 321 active+clean; 202 MiB data, 928 MiB used, 20 GiB / 21 GiB avail; 126 KiB/s rd, 3.5 KiB/s wr, 205 op/s
Jan 23 05:03:27 np0005593232 kernel: tap35f84523-a0 (unregistering): left promiscuous mode
Jan 23 05:03:27 np0005593232 nova_compute[250269]: 2026-01-23 10:03:27.382 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:03:27 np0005593232 NetworkManager[49057]: <info>  [1769162607.3871] device (tap35f84523-a0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:03:27 np0005593232 ovn_controller[151001]: 2026-01-23T10:03:27Z|00354|binding|INFO|Releasing lport 35f84523-a0b5-4102-ba04-cc5da6075d54 from this chassis (sb_readonly=0)
Jan 23 05:03:27 np0005593232 ovn_controller[151001]: 2026-01-23T10:03:27Z|00355|binding|INFO|Setting lport 35f84523-a0b5-4102-ba04-cc5da6075d54 down in Southbound
Jan 23 05:03:27 np0005593232 ovn_controller[151001]: 2026-01-23T10:03:27Z|00356|binding|INFO|Removing iface tap35f84523-a0 ovn-installed in OVS
Jan 23 05:03:27 np0005593232 nova_compute[250269]: 2026-01-23 10:03:27.405 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:27.413 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:d6:dc 10.100.0.5'], port_security=['fa:16:3e:b8:d6:dc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1bdbf4d2-447b-47d0-8b3f-878ee65905a7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c5c1d0762242f29a5d26033efd9f6d', 'neutron:revision_number': '10', 'neutron:security_group_ids': '53abfec9-e9a4-4b72-b0e0-38bea0069f7b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.175', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=26eced02-0507-4a33-9943-52faf3fc8cd2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=35f84523-a0b5-4102-ba04-cc5da6075d54) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:03:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:27.414 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 35f84523-a0b5-4102-ba04-cc5da6075d54 in datapath ee03d7c9-e107-41bf-95cc-5508578ad66c unbound from our chassis#033[00m
Jan 23 05:03:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:27.416 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ee03d7c9-e107-41bf-95cc-5508578ad66c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:03:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:27.417 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8a95e4-b576-4af1-aa79-67e517a0a41b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:27.418 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c namespace which is not needed anymore#033[00m
Jan 23 05:03:27 np0005593232 nova_compute[250269]: 2026-01-23 10:03:27.423 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:27 np0005593232 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Jan 23 05:03:27 np0005593232 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d0000005d.scope: Consumed 18.301s CPU time.
Jan 23 05:03:27 np0005593232 systemd-machined[215836]: Machine qemu-39-instance-0000005d terminated.
Jan 23 05:03:27 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[314039]: [NOTICE]   (314044) : haproxy version is 2.8.14-c23fe91
Jan 23 05:03:27 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[314039]: [NOTICE]   (314044) : path to executable is /usr/sbin/haproxy
Jan 23 05:03:27 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[314039]: [WARNING]  (314044) : Exiting Master process...
Jan 23 05:03:27 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[314039]: [ALERT]    (314044) : Current worker (314046) exited with code 143 (Terminated)
Jan 23 05:03:27 np0005593232 neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c[314039]: [WARNING]  (314044) : All workers exited. Exiting... (0)
Jan 23 05:03:27 np0005593232 systemd[1]: libpod-a6af6f138373a8d405eff1ea618cf1d9a5983f442a1c5874bf353859d1c0609b.scope: Deactivated successfully.
Jan 23 05:03:27 np0005593232 podman[318701]: 2026-01-23 10:03:27.558137933 +0000 UTC m=+0.047099540 container died a6af6f138373a8d405eff1ea618cf1d9a5983f442a1c5874bf353859d1c0609b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:03:27 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a6af6f138373a8d405eff1ea618cf1d9a5983f442a1c5874bf353859d1c0609b-userdata-shm.mount: Deactivated successfully.
Jan 23 05:03:27 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e8b5a662caca3c5bbfe48d83d303b6beed6f58ce93e5aa508589747e4528e32a-merged.mount: Deactivated successfully.
Jan 23 05:03:27 np0005593232 podman[318701]: 2026-01-23 10:03:27.648389957 +0000 UTC m=+0.137351564 container cleanup a6af6f138373a8d405eff1ea618cf1d9a5983f442a1c5874bf353859d1c0609b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 05:03:27 np0005593232 systemd[1]: libpod-conmon-a6af6f138373a8d405eff1ea618cf1d9a5983f442a1c5874bf353859d1c0609b.scope: Deactivated successfully.
Jan 23 05:03:27 np0005593232 nova_compute[250269]: 2026-01-23 10:03:27.852 250273 DEBUG nova.compute.manager [req-735a8fb7-99cc-4b80-adca-87fa6d702271 req-280c2595-4d5e-4102-8570-1d0a90a1a49e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received event network-vif-unplugged-35f84523-a0b5-4102-ba04-cc5da6075d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:03:27 np0005593232 nova_compute[250269]: 2026-01-23 10:03:27.853 250273 DEBUG oslo_concurrency.lockutils [req-735a8fb7-99cc-4b80-adca-87fa6d702271 req-280c2595-4d5e-4102-8570-1d0a90a1a49e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:03:27 np0005593232 nova_compute[250269]: 2026-01-23 10:03:27.853 250273 DEBUG oslo_concurrency.lockutils [req-735a8fb7-99cc-4b80-adca-87fa6d702271 req-280c2595-4d5e-4102-8570-1d0a90a1a49e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:03:27 np0005593232 nova_compute[250269]: 2026-01-23 10:03:27.854 250273 DEBUG oslo_concurrency.lockutils [req-735a8fb7-99cc-4b80-adca-87fa6d702271 req-280c2595-4d5e-4102-8570-1d0a90a1a49e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:27 np0005593232 nova_compute[250269]: 2026-01-23 10:03:27.854 250273 DEBUG nova.compute.manager [req-735a8fb7-99cc-4b80-adca-87fa6d702271 req-280c2595-4d5e-4102-8570-1d0a90a1a49e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] No waiting events found dispatching network-vif-unplugged-35f84523-a0b5-4102-ba04-cc5da6075d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:03:27 np0005593232 nova_compute[250269]: 2026-01-23 10:03:27.854 250273 WARNING nova.compute.manager [req-735a8fb7-99cc-4b80-adca-87fa6d702271 req-280c2595-4d5e-4102-8570-1d0a90a1a49e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received unexpected event network-vif-unplugged-35f84523-a0b5-4102-ba04-cc5da6075d54 for instance with vm_state active and task_state resize_migrating.#033[00m
Jan 23 05:03:28 np0005593232 nova_compute[250269]: 2026-01-23 10:03:28.116 250273 INFO nova.virt.libvirt.driver [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Instance shutdown successfully after 3 seconds.#033[00m
Jan 23 05:03:28 np0005593232 nova_compute[250269]: 2026-01-23 10:03:28.121 250273 INFO nova.virt.libvirt.driver [-] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Instance destroyed successfully.#033[00m
Jan 23 05:03:28 np0005593232 nova_compute[250269]: 2026-01-23 10:03:28.122 250273 DEBUG nova.virt.libvirt.vif [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:59:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1908507141',display_name='tempest-ServerActionsTestJSON-server-1908507141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1908507141',id=93,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0AiEKt9gHrsbueqjCG64VrzhP898xYsJXOd2/6uW3CZrw7c/2vnYXFOKeIp4qvJ25g/gz5/w2irrKH3R3Pyr6HiyEmMxGMtHTZ1L/l92xM4YiKXMLNL4VsFVwX3d+71g==',key_name='tempest-keypair-1055968095',keypairs=<?>,launch_index=0,launched_at=2026-01-23T09:59:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='74c5c1d0762242f29a5d26033efd9f6d',ramdisk_id='',reservation_id='r-ii3s65d1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1619235720',owner_user_name='tempest-ServerActionsTestJSON-1619235720-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:03:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9d4a5c201efa4992a9ef57d8abdc1675',uuid=1bdbf4d2-447b-47d0-8b3f-878ee65905a7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": 
"tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-267124880-network", "vif_mac": "fa:16:3e:b8:d6:dc"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:03:28 np0005593232 nova_compute[250269]: 2026-01-23 10:03:28.122 250273 DEBUG nova.network.os_vif_util [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converting VIF {"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestJSON-267124880-network", "vif_mac": "fa:16:3e:b8:d6:dc"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:03:28 np0005593232 nova_compute[250269]: 2026-01-23 10:03:28.123 250273 DEBUG nova.network.os_vif_util [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:03:28 np0005593232 nova_compute[250269]: 2026-01-23 10:03:28.124 250273 DEBUG os_vif [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:03:28 np0005593232 nova_compute[250269]: 2026-01-23 10:03:28.128 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:28 np0005593232 nova_compute[250269]: 2026-01-23 10:03:28.128 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35f84523-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:03:28 np0005593232 nova_compute[250269]: 2026-01-23 10:03:28.130 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:28 np0005593232 nova_compute[250269]: 2026-01-23 10:03:28.131 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:28 np0005593232 nova_compute[250269]: 2026-01-23 10:03:28.134 250273 INFO os_vif [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0')#033[00m
Jan 23 05:03:28 np0005593232 nova_compute[250269]: 2026-01-23 10:03:28.138 250273 DEBUG nova.virt.libvirt.driver [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:03:28 np0005593232 nova_compute[250269]: 2026-01-23 10:03:28.138 250273 DEBUG nova.virt.libvirt.driver [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:03:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:28.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:28.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:28 np0005593232 podman[318738]: 2026-01-23 10:03:28.862110974 +0000 UTC m=+1.188818960 container remove a6af6f138373a8d405eff1ea618cf1d9a5983f442a1c5874bf353859d1c0609b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:03:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:28.870 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2f9de638-f85e-4138-81d5-afe79df34b40]: (4, ('Fri Jan 23 10:03:27 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c (a6af6f138373a8d405eff1ea618cf1d9a5983f442a1c5874bf353859d1c0609b)\na6af6f138373a8d405eff1ea618cf1d9a5983f442a1c5874bf353859d1c0609b\nFri Jan 23 10:03:27 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c (a6af6f138373a8d405eff1ea618cf1d9a5983f442a1c5874bf353859d1c0609b)\na6af6f138373a8d405eff1ea618cf1d9a5983f442a1c5874bf353859d1c0609b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:28.873 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e4fe0127-406d-49b0-afa4-d9def4f62f05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:28.874 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapee03d7c9-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:03:28 np0005593232 nova_compute[250269]: 2026-01-23 10:03:28.876 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:28 np0005593232 kernel: tapee03d7c9-e0: left promiscuous mode
Jan 23 05:03:28 np0005593232 nova_compute[250269]: 2026-01-23 10:03:28.893 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:28.896 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[260d6076-d062-414b-b6b2-68357520d1e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:28 np0005593232 nova_compute[250269]: 2026-01-23 10:03:28.911 250273 DEBUG neutronclient.v2_0.client [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 35f84523-a0b5-4102-ba04-cc5da6075d54 for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Jan 23 05:03:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:28.918 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5c826092-5ac7-4b84-87a6-9f605f32bd7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:28.919 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[68ae17e2-ff74-4242-8c6f-3dd49428251c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:28.936 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[85d5af07-6e0d-459a-bb1b-946b607e3bfb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 640611, 'reachable_time': 28165, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318758, 'error': None, 'target': 'ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:28 np0005593232 systemd[1]: run-netns-ovnmeta\x2dee03d7c9\x2de107\x2d41bf\x2d95cc\x2d5508578ad66c.mount: Deactivated successfully.
Jan 23 05:03:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:28.939 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ee03d7c9-e107-41bf-95cc-5508578ad66c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:03:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:28.939 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[38396fae-61fe-4de1-8c7f-259444bdee9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2169: 321 pgs: 321 active+clean; 202 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 6.7 KiB/s wr, 133 op/s
Jan 23 05:03:29 np0005593232 nova_compute[250269]: 2026-01-23 10:03:29.033 250273 DEBUG oslo_concurrency.lockutils [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:03:29 np0005593232 nova_compute[250269]: 2026-01-23 10:03:29.033 250273 DEBUG oslo_concurrency.lockutils [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:03:29 np0005593232 nova_compute[250269]: 2026-01-23 10:03:29.033 250273 DEBUG oslo_concurrency.lockutils [None req-1fddec7d-2ba4-43c7-8f78-574f9a080d64 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:29 np0005593232 nova_compute[250269]: 2026-01-23 10:03:29.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:03:30 np0005593232 nova_compute[250269]: 2026-01-23 10:03:30.038 250273 DEBUG nova.compute.manager [req-dd01ca6a-b817-4cbb-9ae4-afaf80dad27f req-8e6cfb1d-091b-4f55-88a4-c96bb8c0325f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:03:30 np0005593232 nova_compute[250269]: 2026-01-23 10:03:30.039 250273 DEBUG oslo_concurrency.lockutils [req-dd01ca6a-b817-4cbb-9ae4-afaf80dad27f req-8e6cfb1d-091b-4f55-88a4-c96bb8c0325f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:03:30 np0005593232 nova_compute[250269]: 2026-01-23 10:03:30.039 250273 DEBUG oslo_concurrency.lockutils [req-dd01ca6a-b817-4cbb-9ae4-afaf80dad27f req-8e6cfb1d-091b-4f55-88a4-c96bb8c0325f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:03:30 np0005593232 nova_compute[250269]: 2026-01-23 10:03:30.039 250273 DEBUG oslo_concurrency.lockutils [req-dd01ca6a-b817-4cbb-9ae4-afaf80dad27f req-8e6cfb1d-091b-4f55-88a4-c96bb8c0325f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:03:30 np0005593232 nova_compute[250269]: 2026-01-23 10:03:30.040 250273 DEBUG nova.compute.manager [req-dd01ca6a-b817-4cbb-9ae4-afaf80dad27f req-8e6cfb1d-091b-4f55-88a4-c96bb8c0325f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] No waiting events found dispatching network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 05:03:30 np0005593232 nova_compute[250269]: 2026-01-23 10:03:30.040 250273 WARNING nova.compute.manager [req-dd01ca6a-b817-4cbb-9ae4-afaf80dad27f req-8e6cfb1d-091b-4f55-88a4-c96bb8c0325f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received unexpected event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 for instance with vm_state active and task_state resize_migrated.
Jan 23 05:03:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:30.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:30 np0005593232 nova_compute[250269]: 2026-01-23 10:03:30.414 250273 DEBUG oslo_concurrency.lockutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:03:30 np0005593232 nova_compute[250269]: 2026-01-23 10:03:30.415 250273 DEBUG oslo_concurrency.lockutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:03:30 np0005593232 nova_compute[250269]: 2026-01-23 10:03:30.438 250273 DEBUG nova.compute.manager [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 23 05:03:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:03:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:30.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:03:30 np0005593232 nova_compute[250269]: 2026-01-23 10:03:30.550 250273 DEBUG oslo_concurrency.lockutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:03:30 np0005593232 nova_compute[250269]: 2026-01-23 10:03:30.551 250273 DEBUG oslo_concurrency.lockutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:03:30 np0005593232 nova_compute[250269]: 2026-01-23 10:03:30.559 250273 DEBUG nova.virt.hardware [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 23 05:03:30 np0005593232 nova_compute[250269]: 2026-01-23 10:03:30.559 250273 INFO nova.compute.claims [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Claim successful on node compute-0.ctlplane.example.com
Jan 23 05:03:30 np0005593232 nova_compute[250269]: 2026-01-23 10:03:30.621 250273 DEBUG nova.compute.manager [req-036dd269-5d50-4301-93ca-855fa8f851db req-0f8b00cb-d70f-4e7f-9913-502ec932c8a4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received event network-changed-35f84523-a0b5-4102-ba04-cc5da6075d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:03:30 np0005593232 nova_compute[250269]: 2026-01-23 10:03:30.621 250273 DEBUG nova.compute.manager [req-036dd269-5d50-4301-93ca-855fa8f851db req-0f8b00cb-d70f-4e7f-9913-502ec932c8a4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Refreshing instance network info cache due to event network-changed-35f84523-a0b5-4102-ba04-cc5da6075d54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 05:03:30 np0005593232 nova_compute[250269]: 2026-01-23 10:03:30.621 250273 DEBUG oslo_concurrency.lockutils [req-036dd269-5d50-4301-93ca-855fa8f851db req-0f8b00cb-d70f-4e7f-9913-502ec932c8a4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 05:03:30 np0005593232 nova_compute[250269]: 2026-01-23 10:03:30.622 250273 DEBUG oslo_concurrency.lockutils [req-036dd269-5d50-4301-93ca-855fa8f851db req-0f8b00cb-d70f-4e7f-9913-502ec932c8a4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 05:03:30 np0005593232 nova_compute[250269]: 2026-01-23 10:03:30.622 250273 DEBUG nova.network.neutron [req-036dd269-5d50-4301-93ca-855fa8f851db req-0f8b00cb-d70f-4e7f-9913-502ec932c8a4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Refreshing network info cache for port 35f84523-a0b5-4102-ba04-cc5da6075d54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 05:03:30 np0005593232 nova_compute[250269]: 2026-01-23 10:03:30.763 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:03:30 np0005593232 nova_compute[250269]: 2026-01-23 10:03:30.768 250273 DEBUG oslo_concurrency.processutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:03:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2170: 321 pgs: 321 active+clean; 202 MiB data, 925 MiB used, 20 GiB / 21 GiB avail; 3.4 KiB/s rd, 6.4 KiB/s wr, 2 op/s
Jan 23 05:03:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:03:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1745102224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:03:31 np0005593232 nova_compute[250269]: 2026-01-23 10:03:31.228 250273 DEBUG oslo_concurrency.processutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:03:31 np0005593232 nova_compute[250269]: 2026-01-23 10:03:31.235 250273 DEBUG nova.compute.provider_tree [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 05:03:31 np0005593232 nova_compute[250269]: 2026-01-23 10:03:31.260 250273 DEBUG nova.scheduler.client.report [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 05:03:31 np0005593232 nova_compute[250269]: 2026-01-23 10:03:31.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:03:31 np0005593232 nova_compute[250269]: 2026-01-23 10:03:31.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 23 05:03:31 np0005593232 nova_compute[250269]: 2026-01-23 10:03:31.312 250273 DEBUG oslo_concurrency.lockutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:03:31 np0005593232 nova_compute[250269]: 2026-01-23 10:03:31.313 250273 DEBUG nova.compute.manager [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 05:03:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 05:03:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:03:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 05:03:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:03:31 np0005593232 nova_compute[250269]: 2026-01-23 10:03:31.514 250273 DEBUG nova.compute.manager [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 05:03:31 np0005593232 nova_compute[250269]: 2026-01-23 10:03:31.515 250273 DEBUG nova.network.neutron [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 05:03:31 np0005593232 nova_compute[250269]: 2026-01-23 10:03:31.541 250273 INFO nova.virt.libvirt.driver [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 05:03:31 np0005593232 nova_compute[250269]: 2026-01-23 10:03:31.569 250273 DEBUG nova.compute.manager [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 05:03:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:03:31 np0005593232 nova_compute[250269]: 2026-01-23 10:03:31.744 250273 DEBUG nova.compute.manager [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 05:03:31 np0005593232 nova_compute[250269]: 2026-01-23 10:03:31.745 250273 DEBUG nova.virt.libvirt.driver [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 05:03:31 np0005593232 nova_compute[250269]: 2026-01-23 10:03:31.745 250273 INFO nova.virt.libvirt.driver [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Creating image(s)
Jan 23 05:03:31 np0005593232 nova_compute[250269]: 2026-01-23 10:03:31.829 250273 DEBUG nova.storage.rbd_utils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:03:31 np0005593232 nova_compute[250269]: 2026-01-23 10:03:31.866 250273 DEBUG nova.storage.rbd_utils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:03:31 np0005593232 nova_compute[250269]: 2026-01-23 10:03:31.902 250273 DEBUG nova.storage.rbd_utils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:03:31 np0005593232 nova_compute[250269]: 2026-01-23 10:03:31.909 250273 DEBUG oslo_concurrency.processutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:03:31 np0005593232 nova_compute[250269]: 2026-01-23 10:03:31.956 250273 DEBUG nova.policy [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '29710db389c842df836944048225740f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8c16cd713fa74a88b43e4edf01c273bd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 23 05:03:31 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:03:31 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:03:31 np0005593232 nova_compute[250269]: 2026-01-23 10:03:31.990 250273 DEBUG oslo_concurrency.processutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:03:31 np0005593232 nova_compute[250269]: 2026-01-23 10:03:31.991 250273 DEBUG oslo_concurrency.lockutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:03:31 np0005593232 nova_compute[250269]: 2026-01-23 10:03:31.992 250273 DEBUG oslo_concurrency.lockutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:03:31 np0005593232 nova_compute[250269]: 2026-01-23 10:03:31.992 250273 DEBUG oslo_concurrency.lockutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:03:32 np0005593232 nova_compute[250269]: 2026-01-23 10:03:32.022 250273 DEBUG nova.storage.rbd_utils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:03:32 np0005593232 nova_compute[250269]: 2026-01-23 10:03:32.026 250273 DEBUG oslo_concurrency.processutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:03:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:03:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:32.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:03:32 np0005593232 nova_compute[250269]: 2026-01-23 10:03:32.316 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:03:32 np0005593232 nova_compute[250269]: 2026-01-23 10:03:32.327 250273 DEBUG oslo_concurrency.processutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:03:32 np0005593232 nova_compute[250269]: 2026-01-23 10:03:32.418 250273 DEBUG nova.storage.rbd_utils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] resizing rbd image 40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 23 05:03:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:32.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:32 np0005593232 nova_compute[250269]: 2026-01-23 10:03:32.545 250273 DEBUG nova.objects.instance [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lazy-loading 'migration_context' on Instance uuid 40ae15fe-e324-4fc4-b6ee-df051fcbea8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 05:03:32 np0005593232 nova_compute[250269]: 2026-01-23 10:03:32.587 250273 DEBUG nova.virt.libvirt.driver [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 05:03:32 np0005593232 nova_compute[250269]: 2026-01-23 10:03:32.588 250273 DEBUG nova.virt.libvirt.driver [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Ensure instance console log exists: /var/lib/nova/instances/40ae15fe-e324-4fc4-b6ee-df051fcbea8f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 05:03:32 np0005593232 nova_compute[250269]: 2026-01-23 10:03:32.588 250273 DEBUG oslo_concurrency.lockutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:03:32 np0005593232 nova_compute[250269]: 2026-01-23 10:03:32.589 250273 DEBUG oslo_concurrency.lockutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:03:32 np0005593232 nova_compute[250269]: 2026-01-23 10:03:32.589 250273 DEBUG oslo_concurrency.lockutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:03:32 np0005593232 nova_compute[250269]: 2026-01-23 10:03:32.952 250273 DEBUG nova.network.neutron [req-036dd269-5d50-4301-93ca-855fa8f851db req-0f8b00cb-d70f-4e7f-9913-502ec932c8a4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Updated VIF entry in instance network info cache for port 35f84523-a0b5-4102-ba04-cc5da6075d54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 05:03:32 np0005593232 nova_compute[250269]: 2026-01-23 10:03:32.953 250273 DEBUG nova.network.neutron [req-036dd269-5d50-4301-93ca-855fa8f851db req-0f8b00cb-d70f-4e7f-9913-502ec932c8a4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Updating instance_info_cache with network_info: [{"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 05:03:32 np0005593232 nova_compute[250269]: 2026-01-23 10:03:32.972 250273 DEBUG oslo_concurrency.lockutils [req-036dd269-5d50-4301-93ca-855fa8f851db req-0f8b00cb-d70f-4e7f-9913-502ec932c8a4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 05:03:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Jan 23 05:03:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2171: 321 pgs: 321 active+clean; 271 MiB data, 946 MiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 3.0 MiB/s wr, 46 op/s
Jan 23 05:03:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Jan 23 05:03:33 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Jan 23 05:03:33 np0005593232 nova_compute[250269]: 2026-01-23 10:03:33.133 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:03:33 np0005593232 nova_compute[250269]: 2026-01-23 10:03:33.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:03:33 np0005593232 nova_compute[250269]: 2026-01-23 10:03:33.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:03:33 np0005593232 nova_compute[250269]: 2026-01-23 10:03:33.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 05:03:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 05:03:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:03:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 05:03:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:03:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:03:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:03:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:03:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:03:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:03:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:03:33 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 5a1b5a70-499c-4b03-b713-7c687d02333d does not exist
Jan 23 05:03:33 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 19e2b8f1-901c-45c5-82d9-7b5fd9428b6a does not exist
Jan 23 05:03:33 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev aac87217-2f53-4071-a814-b5f5d72187b7 does not exist
Jan 23 05:03:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:03:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:03:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:03:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:03:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:03:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:03:33 np0005593232 nova_compute[250269]: 2026-01-23 10:03:33.717 250273 DEBUG nova.network.neutron [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Successfully created port: 72849734-6d62-43db-b9e3-9c5f39b22d9d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:03:34 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:03:34 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:03:34 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:03:34 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:03:34 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:03:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:34.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:34 np0005593232 podman[319222]: 2026-01-23 10:03:34.298219893 +0000 UTC m=+0.049600291 container create d0f7e1266dcdd029dadc860bee00004e207f0d6a69a4b82bdf496155e67cef72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 05:03:34 np0005593232 systemd[1]: Started libpod-conmon-d0f7e1266dcdd029dadc860bee00004e207f0d6a69a4b82bdf496155e67cef72.scope.
Jan 23 05:03:34 np0005593232 podman[319222]: 2026-01-23 10:03:34.278008208 +0000 UTC m=+0.029388626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:03:34 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:03:34 np0005593232 podman[319222]: 2026-01-23 10:03:34.393107429 +0000 UTC m=+0.144487847 container init d0f7e1266dcdd029dadc860bee00004e207f0d6a69a4b82bdf496155e67cef72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_buck, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:03:34 np0005593232 podman[319222]: 2026-01-23 10:03:34.403226447 +0000 UTC m=+0.154606855 container start d0f7e1266dcdd029dadc860bee00004e207f0d6a69a4b82bdf496155e67cef72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:03:34 np0005593232 laughing_buck[319238]: 167 167
Jan 23 05:03:34 np0005593232 systemd[1]: libpod-d0f7e1266dcdd029dadc860bee00004e207f0d6a69a4b82bdf496155e67cef72.scope: Deactivated successfully.
Jan 23 05:03:34 np0005593232 conmon[319238]: conmon d0f7e1266dcdd029dadc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d0f7e1266dcdd029dadc860bee00004e207f0d6a69a4b82bdf496155e67cef72.scope/container/memory.events
Jan 23 05:03:34 np0005593232 podman[319222]: 2026-01-23 10:03:34.422279628 +0000 UTC m=+0.173660086 container attach d0f7e1266dcdd029dadc860bee00004e207f0d6a69a4b82bdf496155e67cef72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_buck, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 05:03:34 np0005593232 podman[319222]: 2026-01-23 10:03:34.423650627 +0000 UTC m=+0.175031035 container died d0f7e1266dcdd029dadc860bee00004e207f0d6a69a4b82bdf496155e67cef72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_buck, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:03:34 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5197ede55345ae03e01bd643b6dfa3db81e91e1c0c8b3e3d964dead3d3915b13-merged.mount: Deactivated successfully.
Jan 23 05:03:34 np0005593232 podman[319222]: 2026-01-23 10:03:34.467718319 +0000 UTC m=+0.219098717 container remove d0f7e1266dcdd029dadc860bee00004e207f0d6a69a4b82bdf496155e67cef72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Jan 23 05:03:34 np0005593232 systemd[1]: libpod-conmon-d0f7e1266dcdd029dadc860bee00004e207f0d6a69a4b82bdf496155e67cef72.scope: Deactivated successfully.
Jan 23 05:03:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:34.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:34 np0005593232 podman[319263]: 2026-01-23 10:03:34.644607526 +0000 UTC m=+0.046901794 container create 5768c9784a037ac2a91355ec92a969f7be57742950cd90afa6bc026a97b42dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lichterman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 23 05:03:34 np0005593232 systemd[1]: Started libpod-conmon-5768c9784a037ac2a91355ec92a969f7be57742950cd90afa6bc026a97b42dd9.scope.
Jan 23 05:03:34 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:03:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846e9b384f7430559af11ea8418efcc2dce2e9ff9c86fe5d2a483f822773505b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:03:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846e9b384f7430559af11ea8418efcc2dce2e9ff9c86fe5d2a483f822773505b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:03:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846e9b384f7430559af11ea8418efcc2dce2e9ff9c86fe5d2a483f822773505b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:03:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846e9b384f7430559af11ea8418efcc2dce2e9ff9c86fe5d2a483f822773505b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:03:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846e9b384f7430559af11ea8418efcc2dce2e9ff9c86fe5d2a483f822773505b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:03:34 np0005593232 podman[319263]: 2026-01-23 10:03:34.628334783 +0000 UTC m=+0.030629071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:03:34 np0005593232 podman[319263]: 2026-01-23 10:03:34.732357069 +0000 UTC m=+0.134651357 container init 5768c9784a037ac2a91355ec92a969f7be57742950cd90afa6bc026a97b42dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:03:34 np0005593232 podman[319263]: 2026-01-23 10:03:34.741620532 +0000 UTC m=+0.143914800 container start 5768c9784a037ac2a91355ec92a969f7be57742950cd90afa6bc026a97b42dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lichterman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:03:34 np0005593232 podman[319263]: 2026-01-23 10:03:34.744828403 +0000 UTC m=+0.147122671 container attach 5768c9784a037ac2a91355ec92a969f7be57742950cd90afa6bc026a97b42dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lichterman, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:03:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2173: 321 pgs: 321 active+clean; 271 MiB data, 946 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 3.5 MiB/s wr, 55 op/s
Jan 23 05:03:35 np0005593232 nova_compute[250269]: 2026-01-23 10:03:35.486 250273 DEBUG nova.compute.manager [req-2a66a981-26ed-4db3-9cae-3e594e263d43 req-b88c73c9-aec8-49a4-b162-52abc89f903d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:03:35 np0005593232 nova_compute[250269]: 2026-01-23 10:03:35.489 250273 DEBUG oslo_concurrency.lockutils [req-2a66a981-26ed-4db3-9cae-3e594e263d43 req-b88c73c9-aec8-49a4-b162-52abc89f903d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:03:35 np0005593232 nova_compute[250269]: 2026-01-23 10:03:35.490 250273 DEBUG oslo_concurrency.lockutils [req-2a66a981-26ed-4db3-9cae-3e594e263d43 req-b88c73c9-aec8-49a4-b162-52abc89f903d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:03:35 np0005593232 nova_compute[250269]: 2026-01-23 10:03:35.490 250273 DEBUG oslo_concurrency.lockutils [req-2a66a981-26ed-4db3-9cae-3e594e263d43 req-b88c73c9-aec8-49a4-b162-52abc89f903d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:35 np0005593232 nova_compute[250269]: 2026-01-23 10:03:35.490 250273 DEBUG nova.compute.manager [req-2a66a981-26ed-4db3-9cae-3e594e263d43 req-b88c73c9-aec8-49a4-b162-52abc89f903d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] No waiting events found dispatching network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:03:35 np0005593232 nova_compute[250269]: 2026-01-23 10:03:35.491 250273 WARNING nova.compute.manager [req-2a66a981-26ed-4db3-9cae-3e594e263d43 req-b88c73c9-aec8-49a4-b162-52abc89f903d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received unexpected event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 for instance with vm_state resized and task_state None.#033[00m
Jan 23 05:03:35 np0005593232 xenodochial_lichterman[319279]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:03:35 np0005593232 xenodochial_lichterman[319279]: --> relative data size: 1.0
Jan 23 05:03:35 np0005593232 xenodochial_lichterman[319279]: --> All data devices are unavailable
Jan 23 05:03:35 np0005593232 systemd[1]: libpod-5768c9784a037ac2a91355ec92a969f7be57742950cd90afa6bc026a97b42dd9.scope: Deactivated successfully.
Jan 23 05:03:35 np0005593232 podman[319263]: 2026-01-23 10:03:35.569783744 +0000 UTC m=+0.972078012 container died 5768c9784a037ac2a91355ec92a969f7be57742950cd90afa6bc026a97b42dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lichterman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 05:03:35 np0005593232 systemd[1]: var-lib-containers-storage-overlay-846e9b384f7430559af11ea8418efcc2dce2e9ff9c86fe5d2a483f822773505b-merged.mount: Deactivated successfully.
Jan 23 05:03:35 np0005593232 podman[319263]: 2026-01-23 10:03:35.629694987 +0000 UTC m=+1.031989255 container remove 5768c9784a037ac2a91355ec92a969f7be57742950cd90afa6bc026a97b42dd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lichterman, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:03:35 np0005593232 systemd[1]: libpod-conmon-5768c9784a037ac2a91355ec92a969f7be57742950cd90afa6bc026a97b42dd9.scope: Deactivated successfully.
Jan 23 05:03:35 np0005593232 nova_compute[250269]: 2026-01-23 10:03:35.766 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:36.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:36 np0005593232 podman[319449]: 2026-01-23 10:03:36.241936234 +0000 UTC m=+0.043179618 container create e7aeb4ffbf84be7131663528fb58c27fe2131b553a66efcc193fd3c863b872ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_sanderson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:03:36 np0005593232 systemd[1]: Started libpod-conmon-e7aeb4ffbf84be7131663528fb58c27fe2131b553a66efcc193fd3c863b872ba.scope.
Jan 23 05:03:36 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:03:36 np0005593232 podman[319449]: 2026-01-23 10:03:36.22279054 +0000 UTC m=+0.024034004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:03:36 np0005593232 podman[319449]: 2026-01-23 10:03:36.322278047 +0000 UTC m=+0.123521481 container init e7aeb4ffbf84be7131663528fb58c27fe2131b553a66efcc193fd3c863b872ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 05:03:36 np0005593232 podman[319449]: 2026-01-23 10:03:36.32803105 +0000 UTC m=+0.129274434 container start e7aeb4ffbf84be7131663528fb58c27fe2131b553a66efcc193fd3c863b872ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_sanderson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:03:36 np0005593232 podman[319449]: 2026-01-23 10:03:36.331173019 +0000 UTC m=+0.132416433 container attach e7aeb4ffbf84be7131663528fb58c27fe2131b553a66efcc193fd3c863b872ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_sanderson, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:03:36 np0005593232 eager_sanderson[319465]: 167 167
Jan 23 05:03:36 np0005593232 systemd[1]: libpod-e7aeb4ffbf84be7131663528fb58c27fe2131b553a66efcc193fd3c863b872ba.scope: Deactivated successfully.
Jan 23 05:03:36 np0005593232 podman[319449]: 2026-01-23 10:03:36.334674769 +0000 UTC m=+0.135918183 container died e7aeb4ffbf84be7131663528fb58c27fe2131b553a66efcc193fd3c863b872ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_sanderson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 23 05:03:36 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e0c6474e92058186e6f3ba559ddbb239eb62437d1b22f62dc56b71db0379c5df-merged.mount: Deactivated successfully.
Jan 23 05:03:36 np0005593232 podman[319449]: 2026-01-23 10:03:36.372291158 +0000 UTC m=+0.173534532 container remove e7aeb4ffbf84be7131663528fb58c27fe2131b553a66efcc193fd3c863b872ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:03:36 np0005593232 systemd[1]: libpod-conmon-e7aeb4ffbf84be7131663528fb58c27fe2131b553a66efcc193fd3c863b872ba.scope: Deactivated successfully.
Jan 23 05:03:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:36.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:36 np0005593232 podman[319488]: 2026-01-23 10:03:36.522966059 +0000 UTC m=+0.039560675 container create 7a1c9c74a80e700e4fd117e527e21139f48650e1cab46abfce15b230b2e8b37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kirch, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:03:36 np0005593232 systemd[1]: Started libpod-conmon-7a1c9c74a80e700e4fd117e527e21139f48650e1cab46abfce15b230b2e8b37b.scope.
Jan 23 05:03:36 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:03:36 np0005593232 nova_compute[250269]: 2026-01-23 10:03:36.578 250273 DEBUG nova.network.neutron [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Successfully updated port: 72849734-6d62-43db-b9e3-9c5f39b22d9d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:03:36 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/658e6abb053907d6e57bda74232e15afeac391ea6df12f06805e06fc63e04998/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:03:36 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/658e6abb053907d6e57bda74232e15afeac391ea6df12f06805e06fc63e04998/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:03:36 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/658e6abb053907d6e57bda74232e15afeac391ea6df12f06805e06fc63e04998/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:03:36 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/658e6abb053907d6e57bda74232e15afeac391ea6df12f06805e06fc63e04998/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:03:36 np0005593232 podman[319488]: 2026-01-23 10:03:36.594157532 +0000 UTC m=+0.110752148 container init 7a1c9c74a80e700e4fd117e527e21139f48650e1cab46abfce15b230b2e8b37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Jan 23 05:03:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:03:36 np0005593232 podman[319488]: 2026-01-23 10:03:36.504904156 +0000 UTC m=+0.021498792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:03:36 np0005593232 podman[319488]: 2026-01-23 10:03:36.604496256 +0000 UTC m=+0.121090872 container start 7a1c9c74a80e700e4fd117e527e21139f48650e1cab46abfce15b230b2e8b37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kirch, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 05:03:36 np0005593232 podman[319488]: 2026-01-23 10:03:36.607681396 +0000 UTC m=+0.124276042 container attach 7a1c9c74a80e700e4fd117e527e21139f48650e1cab46abfce15b230b2e8b37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kirch, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:03:36 np0005593232 nova_compute[250269]: 2026-01-23 10:03:36.621 250273 DEBUG oslo_concurrency.lockutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "refresh_cache-40ae15fe-e324-4fc4-b6ee-df051fcbea8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 05:03:36 np0005593232 nova_compute[250269]: 2026-01-23 10:03:36.621 250273 DEBUG oslo_concurrency.lockutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquired lock "refresh_cache-40ae15fe-e324-4fc4-b6ee-df051fcbea8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 05:03:36 np0005593232 nova_compute[250269]: 2026-01-23 10:03:36.621 250273 DEBUG nova.network.neutron [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 05:03:36 np0005593232 nova_compute[250269]: 2026-01-23 10:03:36.693 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:03:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2174: 321 pgs: 321 active+clean; 285 MiB data, 952 MiB used, 20 GiB / 21 GiB avail; 628 KiB/s rd, 4.1 MiB/s wr, 77 op/s
Jan 23 05:03:37 np0005593232 nova_compute[250269]: 2026-01-23 10:03:37.116 250273 DEBUG nova.network.neutron [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 05:03:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:03:37
Jan 23 05:03:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:03:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:03:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'volumes', '.mgr']
Jan 23 05:03:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]: {
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:    "0": [
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:        {
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:            "devices": [
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:                "/dev/loop3"
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:            ],
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:            "lv_name": "ceph_lv0",
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:            "lv_size": "7511998464",
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:            "name": "ceph_lv0",
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:            "tags": {
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:                "ceph.cluster_name": "ceph",
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:                "ceph.crush_device_class": "",
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:                "ceph.encrypted": "0",
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:                "ceph.osd_id": "0",
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:                "ceph.type": "block",
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:                "ceph.vdo": "0"
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:            },
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:            "type": "block",
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:            "vg_name": "ceph_vg0"
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:        }
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]:    ]
Jan 23 05:03:37 np0005593232 affectionate_kirch[319504]: }
Jan 23 05:03:37 np0005593232 systemd[1]: libpod-7a1c9c74a80e700e4fd117e527e21139f48650e1cab46abfce15b230b2e8b37b.scope: Deactivated successfully.
Jan 23 05:03:37 np0005593232 conmon[319504]: conmon 7a1c9c74a80e700e4fd1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7a1c9c74a80e700e4fd117e527e21139f48650e1cab46abfce15b230b2e8b37b.scope/container/memory.events
Jan 23 05:03:37 np0005593232 podman[319488]: 2026-01-23 10:03:37.443149137 +0000 UTC m=+0.959743763 container died 7a1c9c74a80e700e4fd117e527e21139f48650e1cab46abfce15b230b2e8b37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kirch, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 05:03:37 np0005593232 systemd[1]: var-lib-containers-storage-overlay-658e6abb053907d6e57bda74232e15afeac391ea6df12f06805e06fc63e04998-merged.mount: Deactivated successfully.
Jan 23 05:03:37 np0005593232 podman[319488]: 2026-01-23 10:03:37.503415689 +0000 UTC m=+1.020010305 container remove 7a1c9c74a80e700e4fd117e527e21139f48650e1cab46abfce15b230b2e8b37b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:03:37 np0005593232 systemd[1]: libpod-conmon-7a1c9c74a80e700e4fd117e527e21139f48650e1cab46abfce15b230b2e8b37b.scope: Deactivated successfully.
Jan 23 05:03:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:03:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:03:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:03:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:03:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:03:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:03:37 np0005593232 nova_compute[250269]: 2026-01-23 10:03:37.654 250273 DEBUG nova.compute.manager [req-95c440bd-f2bd-4a19-bedf-809e6261ad0d req-5469e3ab-aa08-4b18-b080-3e2ea02779db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Received event network-changed-72849734-6d62-43db-b9e3-9c5f39b22d9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:03:37 np0005593232 nova_compute[250269]: 2026-01-23 10:03:37.655 250273 DEBUG nova.compute.manager [req-95c440bd-f2bd-4a19-bedf-809e6261ad0d req-5469e3ab-aa08-4b18-b080-3e2ea02779db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Refreshing instance network info cache due to event network-changed-72849734-6d62-43db-b9e3-9c5f39b22d9d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 05:03:37 np0005593232 nova_compute[250269]: 2026-01-23 10:03:37.655 250273 DEBUG oslo_concurrency.lockutils [req-95c440bd-f2bd-4a19-bedf-809e6261ad0d req-5469e3ab-aa08-4b18-b080-3e2ea02779db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-40ae15fe-e324-4fc4-b6ee-df051fcbea8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 05:03:38 np0005593232 podman[319667]: 2026-01-23 10:03:38.116265255 +0000 UTC m=+0.042689034 container create 7fd9d68749a19286ddb26aaa5244f2c7fd3f9ea9c25ac49b2859c3d699cb662d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lamport, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.136 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:03:38 np0005593232 systemd[1]: Started libpod-conmon-7fd9d68749a19286ddb26aaa5244f2c7fd3f9ea9c25ac49b2859c3d699cb662d.scope.
Jan 23 05:03:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:38.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:38 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:03:38 np0005593232 podman[319667]: 2026-01-23 10:03:38.098967063 +0000 UTC m=+0.025390842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:03:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:03:38 np0005593232 podman[319667]: 2026-01-23 10:03:38.201568979 +0000 UTC m=+0.127992788 container init 7fd9d68749a19286ddb26aaa5244f2c7fd3f9ea9c25ac49b2859c3d699cb662d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lamport, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 05:03:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:03:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:03:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:03:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:03:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:03:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:03:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:03:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:03:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:03:38 np0005593232 podman[319667]: 2026-01-23 10:03:38.2097141 +0000 UTC m=+0.136137879 container start 7fd9d68749a19286ddb26aaa5244f2c7fd3f9ea9c25ac49b2859c3d699cb662d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 05:03:38 np0005593232 sweet_lamport[319683]: 167 167
Jan 23 05:03:38 np0005593232 systemd[1]: libpod-7fd9d68749a19286ddb26aaa5244f2c7fd3f9ea9c25ac49b2859c3d699cb662d.scope: Deactivated successfully.
Jan 23 05:03:38 np0005593232 podman[319667]: 2026-01-23 10:03:38.215081213 +0000 UTC m=+0.141504992 container attach 7fd9d68749a19286ddb26aaa5244f2c7fd3f9ea9c25ac49b2859c3d699cb662d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lamport, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 23 05:03:38 np0005593232 podman[319667]: 2026-01-23 10:03:38.215756382 +0000 UTC m=+0.142180161 container died 7fd9d68749a19286ddb26aaa5244f2c7fd3f9ea9c25ac49b2859c3d699cb662d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lamport, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 23 05:03:38 np0005593232 systemd[1]: var-lib-containers-storage-overlay-89426b393820d94fb1935b2b7d8607325ff253b1fb406568fe53ef8f5b21b63a-merged.mount: Deactivated successfully.
Jan 23 05:03:38 np0005593232 podman[319667]: 2026-01-23 10:03:38.248238475 +0000 UTC m=+0.174662244 container remove 7fd9d68749a19286ddb26aaa5244f2c7fd3f9ea9c25ac49b2859c3d699cb662d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lamport, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 23 05:03:38 np0005593232 systemd[1]: libpod-conmon-7fd9d68749a19286ddb26aaa5244f2c7fd3f9ea9c25ac49b2859c3d699cb662d.scope: Deactivated successfully.
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.317 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.317 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.340 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 05:03:38 np0005593232 podman[319705]: 2026-01-23 10:03:38.404066513 +0000 UTC m=+0.039931646 container create e48570fa0860ad58c35c8bb5d7ae8b8e024e9d431937f4825e804c4c026897e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:03:38 np0005593232 systemd[1]: Started libpod-conmon-e48570fa0860ad58c35c8bb5d7ae8b8e024e9d431937f4825e804c4c026897e4.scope.
Jan 23 05:03:38 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:03:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b122a0d3844f65d16c0091640f2b7bde08863a4c6fd7c75428bb575edb8964b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:03:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b122a0d3844f65d16c0091640f2b7bde08863a4c6fd7c75428bb575edb8964b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:03:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b122a0d3844f65d16c0091640f2b7bde08863a4c6fd7c75428bb575edb8964b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:03:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b122a0d3844f65d16c0091640f2b7bde08863a4c6fd7c75428bb575edb8964b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:03:38 np0005593232 podman[319705]: 2026-01-23 10:03:38.479426494 +0000 UTC m=+0.115291627 container init e48570fa0860ad58c35c8bb5d7ae8b8e024e9d431937f4825e804c4c026897e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 05:03:38 np0005593232 podman[319705]: 2026-01-23 10:03:38.385812594 +0000 UTC m=+0.021677747 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:03:38 np0005593232 podman[319705]: 2026-01-23 10:03:38.486581787 +0000 UTC m=+0.122446920 container start e48570fa0860ad58c35c8bb5d7ae8b8e024e9d431937f4825e804c4c026897e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 05:03:38 np0005593232 podman[319705]: 2026-01-23 10:03:38.491144447 +0000 UTC m=+0.127009600 container attach e48570fa0860ad58c35c8bb5d7ae8b8e024e9d431937f4825e804c4c026897e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 05:03:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:03:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:38.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.515 250273 DEBUG oslo_concurrency.lockutils [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.516 250273 DEBUG oslo_concurrency.lockutils [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.517 250273 DEBUG nova.compute.manager [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Going to confirm migration 14 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.818 250273 DEBUG nova.network.neutron [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Updating instance_info_cache with network_info: [{"id": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "address": "fa:16:3e:18:77:55", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72849734-6d", "ovs_interfaceid": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.865 250273 DEBUG oslo_concurrency.lockutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Releasing lock "refresh_cache-40ae15fe-e324-4fc4-b6ee-df051fcbea8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.866 250273 DEBUG nova.compute.manager [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Instance network_info: |[{"id": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "address": "fa:16:3e:18:77:55", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72849734-6d", "ovs_interfaceid": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.866 250273 DEBUG oslo_concurrency.lockutils [req-95c440bd-f2bd-4a19-bedf-809e6261ad0d req-5469e3ab-aa08-4b18-b080-3e2ea02779db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-40ae15fe-e324-4fc4-b6ee-df051fcbea8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.866 250273 DEBUG nova.network.neutron [req-95c440bd-f2bd-4a19-bedf-809e6261ad0d req-5469e3ab-aa08-4b18-b080-3e2ea02779db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Refreshing network info cache for port 72849734-6d62-43db-b9e3-9c5f39b22d9d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.870 250273 DEBUG nova.virt.libvirt.driver [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Start _get_guest_xml network_info=[{"id": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "address": "fa:16:3e:18:77:55", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72849734-6d", "ovs_interfaceid": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.875 250273 WARNING nova.virt.libvirt.driver [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.882 250273 DEBUG nova.virt.libvirt.host [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.883 250273 DEBUG nova.virt.libvirt.host [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.890 250273 DEBUG nova.virt.libvirt.host [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.891 250273 DEBUG nova.virt.libvirt.host [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.893 250273 DEBUG nova.virt.libvirt.driver [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.893 250273 DEBUG nova.virt.hardware [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.894 250273 DEBUG nova.virt.hardware [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.894 250273 DEBUG nova.virt.hardware [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.894 250273 DEBUG nova.virt.hardware [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.894 250273 DEBUG nova.virt.hardware [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.895 250273 DEBUG nova.virt.hardware [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.895 250273 DEBUG nova.virt.hardware [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.895 250273 DEBUG nova.virt.hardware [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.895 250273 DEBUG nova.virt.hardware [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.896 250273 DEBUG nova.virt.hardware [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.896 250273 DEBUG nova.virt.hardware [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:03:38 np0005593232 nova_compute[250269]: 2026-01-23 10:03:38.899 250273 DEBUG oslo_concurrency.processutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:03:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2175: 321 pgs: 321 active+clean; 295 MiB data, 968 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.3 MiB/s wr, 257 op/s
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.005 250273 DEBUG neutronclient.v2_0.client [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 35f84523-a0b5-4102-ba04-cc5da6075d54 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.006 250273 DEBUG oslo_concurrency.lockutils [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Acquiring lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.006 250273 DEBUG oslo_concurrency.lockutils [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Acquired lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.006 250273 DEBUG nova.network.neutron [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.006 250273 DEBUG nova.objects.instance [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lazy-loading 'info_cache' on Instance uuid 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:03:39 np0005593232 elastic_elion[319722]: {
Jan 23 05:03:39 np0005593232 elastic_elion[319722]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:03:39 np0005593232 elastic_elion[319722]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:03:39 np0005593232 elastic_elion[319722]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:03:39 np0005593232 elastic_elion[319722]:        "osd_id": 0,
Jan 23 05:03:39 np0005593232 elastic_elion[319722]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:03:39 np0005593232 elastic_elion[319722]:        "type": "bluestore"
Jan 23 05:03:39 np0005593232 elastic_elion[319722]:    }
Jan 23 05:03:39 np0005593232 elastic_elion[319722]: }
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.327 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.328 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.328 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.328 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.329 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:03:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:03:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4204134198' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:03:39 np0005593232 systemd[1]: libpod-e48570fa0860ad58c35c8bb5d7ae8b8e024e9d431937f4825e804c4c026897e4.scope: Deactivated successfully.
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.359 250273 DEBUG oslo_concurrency.processutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:03:39 np0005593232 podman[319764]: 2026-01-23 10:03:39.371148882 +0000 UTC m=+0.022480920 container died e48570fa0860ad58c35c8bb5d7ae8b8e024e9d431937f4825e804c4c026897e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:03:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:03:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3274790656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.389 250273 DEBUG nova.storage.rbd_utils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.393 250273 DEBUG oslo_concurrency.processutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:03:39 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b122a0d3844f65d16c0091640f2b7bde08863a4c6fd7c75428bb575edb8964b5-merged.mount: Deactivated successfully.
Jan 23 05:03:39 np0005593232 podman[319764]: 2026-01-23 10:03:39.426504275 +0000 UTC m=+0.077836313 container remove e48570fa0860ad58c35c8bb5d7ae8b8e024e9d431937f4825e804c4c026897e4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 23 05:03:39 np0005593232 systemd[1]: libpod-conmon-e48570fa0860ad58c35c8bb5d7ae8b8e024e9d431937f4825e804c4c026897e4.scope: Deactivated successfully.
Jan 23 05:03:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:03:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:03:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:03:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:03:39 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 8d482194-2869-46f0-99c0-091468e83fb1 does not exist
Jan 23 05:03:39 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 0596ce98-2c13-4505-b3cb-75c37b196bb0 does not exist
Jan 23 05:03:39 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7bcc271f-c4dc-4cff-9c26-969f7802a96b does not exist
Jan 23 05:03:39 np0005593232 podman[319799]: 2026-01-23 10:03:39.56285738 +0000 UTC m=+0.089013471 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:03:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:03:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1419169749' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.814 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:03:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:03:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/324625156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.861 250273 DEBUG oslo_concurrency.processutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.862 250273 DEBUG nova.virt.libvirt.vif [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:03:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1019564397',display_name='tempest-tempest.common.compute-instance-1019564397',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1019564397',id=107,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHGL26NAvGQ3xkW0HCIL1RZNLJx0MIysq/lI68J78z++HdYLLG/Y8vj8YSH+2c/ADo8uw7KAcJiRiM8sc9yPRucXw5x0Nng8MP2LtCKXtQpovhYl24CK3lPO8zqGdIODdQ==',key_name='tempest-keypair-549993098',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8c16cd713fa74a88b43e4edf01c273bd',ramdisk_id='',reservation_id='r-kexumavt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-882763067',owner_user_name='tempest-ServerActionsTestOtherA-882763067-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:03:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='29710db389c842df836944048225740f',uuid=40ae15fe-e324-4fc4-b6ee-df051fcbea8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "address": "fa:16:3e:18:77:55", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72849734-6d", "ovs_interfaceid": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.863 250273 DEBUG nova.network.os_vif_util [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converting VIF {"id": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "address": "fa:16:3e:18:77:55", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72849734-6d", "ovs_interfaceid": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.864 250273 DEBUG nova.network.os_vif_util [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:77:55,bridge_name='br-int',has_traffic_filtering=True,id=72849734-6d62-43db-b9e3-9c5f39b22d9d,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72849734-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.865 250273 DEBUG nova.objects.instance [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lazy-loading 'pci_devices' on Instance uuid 40ae15fe-e324-4fc4-b6ee-df051fcbea8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.885 250273 DEBUG nova.virt.libvirt.driver [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:03:39 np0005593232 nova_compute[250269]:  <uuid>40ae15fe-e324-4fc4-b6ee-df051fcbea8f</uuid>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:  <name>instance-0000006b</name>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <nova:name>tempest-tempest.common.compute-instance-1019564397</nova:name>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:03:38</nova:creationTime>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:03:39 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:        <nova:user uuid="29710db389c842df836944048225740f">tempest-ServerActionsTestOtherA-882763067-project-member</nova:user>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:        <nova:project uuid="8c16cd713fa74a88b43e4edf01c273bd">tempest-ServerActionsTestOtherA-882763067</nova:project>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:        <nova:port uuid="72849734-6d62-43db-b9e3-9c5f39b22d9d">
Jan 23 05:03:39 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <entry name="serial">40ae15fe-e324-4fc4-b6ee-df051fcbea8f</entry>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <entry name="uuid">40ae15fe-e324-4fc4-b6ee-df051fcbea8f</entry>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk">
Jan 23 05:03:39 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:03:39 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk.config">
Jan 23 05:03:39 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:03:39 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:18:77:55"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <target dev="tap72849734-6d"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/40ae15fe-e324-4fc4-b6ee-df051fcbea8f/console.log" append="off"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:03:39 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:03:39 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:03:39 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:03:39 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.886 250273 DEBUG nova.compute.manager [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Preparing to wait for external event network-vif-plugged-72849734-6d62-43db-b9e3-9c5f39b22d9d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.886 250273 DEBUG oslo_concurrency.lockutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.887 250273 DEBUG oslo_concurrency.lockutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.887 250273 DEBUG oslo_concurrency.lockutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.888 250273 DEBUG nova.virt.libvirt.vif [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:03:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1019564397',display_name='tempest-tempest.common.compute-instance-1019564397',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1019564397',id=107,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHGL26NAvGQ3xkW0HCIL1RZNLJx0MIysq/lI68J78z++HdYLLG/Y8vj8YSH+2c/ADo8uw7KAcJiRiM8sc9yPRucXw5x0Nng8MP2LtCKXtQpovhYl24CK3lPO8zqGdIODdQ==',key_name='tempest-keypair-549993098',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8c16cd713fa74a88b43e4edf01c273bd',ramdisk_id='',reservation_id='r-kexumavt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-882763067',owner_user_name='tempest-ServerActionsTestOtherA-882763067-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:03:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='29710db389c842df836944048225740f',uuid=40ae15fe-e324-4fc4-b6ee-df051fcbea8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "address": "fa:16:3e:18:77:55", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72849734-6d", "ovs_interfaceid": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.888 250273 DEBUG nova.network.os_vif_util [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converting VIF {"id": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "address": "fa:16:3e:18:77:55", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72849734-6d", "ovs_interfaceid": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.889 250273 DEBUG nova.network.os_vif_util [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:18:77:55,bridge_name='br-int',has_traffic_filtering=True,id=72849734-6d62-43db-b9e3-9c5f39b22d9d,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72849734-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.889 250273 DEBUG os_vif [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:77:55,bridge_name='br-int',has_traffic_filtering=True,id=72849734-6d62-43db-b9e3-9c5f39b22d9d,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72849734-6d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.890 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.890 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.891 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.894 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.894 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72849734-6d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.894 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap72849734-6d, col_values=(('external_ids', {'iface-id': '72849734-6d62-43db-b9e3-9c5f39b22d9d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:18:77:55', 'vm-uuid': '40ae15fe-e324-4fc4-b6ee-df051fcbea8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.896 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:03:39 np0005593232 NetworkManager[49057]: <info>  [1769162619.8972] manager: (tap72849734-6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/176)
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.899 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.904 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.905 250273 INFO os_vif [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:18:77:55,bridge_name='br-int',has_traffic_filtering=True,id=72849734-6d62-43db-b9e3-9c5f39b22d9d,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72849734-6d')
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.909 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.910 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.964 250273 DEBUG nova.virt.libvirt.driver [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.964 250273 DEBUG nova.virt.libvirt.driver [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.964 250273 DEBUG nova.virt.libvirt.driver [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] No VIF found with MAC fa:16:3e:18:77:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.965 250273 INFO nova.virt.libvirt.driver [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Using config drive
Jan 23 05:03:39 np0005593232 nova_compute[250269]: 2026-01-23 10:03:39.986 250273 DEBUG nova.storage.rbd_utils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:03:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:03:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:03:40 np0005593232 nova_compute[250269]: 2026-01-23 10:03:40.100 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 05:03:40 np0005593232 nova_compute[250269]: 2026-01-23 10:03:40.102 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4394MB free_disk=20.855335235595703GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:03:40 np0005593232 nova_compute[250269]: 2026-01-23 10:03:40.102 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:03:40 np0005593232 nova_compute[250269]: 2026-01-23 10:03:40.102 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:03:40 np0005593232 nova_compute[250269]: 2026-01-23 10:03:40.159 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Migration for instance 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Jan 23 05:03:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:40.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:40 np0005593232 nova_compute[250269]: 2026-01-23 10:03:40.192 250273 INFO nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Updating resource usage from migration 568d5ba9-cc8d-4d7d-83d3-6c9eb255b306
Jan 23 05:03:40 np0005593232 nova_compute[250269]: 2026-01-23 10:03:40.192 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Starting to track outgoing migration 568d5ba9-cc8d-4d7d-83d3-6c9eb255b306 with flavor 68d42077-c749-4366-ba3e-07758debb02d _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444
Jan 23 05:03:40 np0005593232 nova_compute[250269]: 2026-01-23 10:03:40.462 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Migration 568d5ba9-cc8d-4d7d-83d3-6c9eb255b306 is active on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Jan 23 05:03:40 np0005593232 nova_compute[250269]: 2026-01-23 10:03:40.463 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 40ae15fe-e324-4fc4-b6ee-df051fcbea8f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 05:03:40 np0005593232 nova_compute[250269]: 2026-01-23 10:03:40.463 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 05:03:40 np0005593232 nova_compute[250269]: 2026-01-23 10:03:40.463 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 05:03:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:40.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:40 np0005593232 nova_compute[250269]: 2026-01-23 10:03:40.622 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:03:40 np0005593232 nova_compute[250269]: 2026-01-23 10:03:40.822 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:40 np0005593232 nova_compute[250269]: 2026-01-23 10:03:40.904 250273 DEBUG nova.network.neutron [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Updating instance_info_cache with network_info: [{"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:03:40 np0005593232 nova_compute[250269]: 2026-01-23 10:03:40.927 250273 DEBUG oslo_concurrency.lockutils [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Releasing lock "refresh_cache-1bdbf4d2-447b-47d0-8b3f-878ee65905a7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:03:40 np0005593232 nova_compute[250269]: 2026-01-23 10:03:40.928 250273 DEBUG nova.objects.instance [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lazy-loading 'migration_context' on Instance uuid 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:03:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2176: 321 pgs: 321 active+clean; 295 MiB data, 968 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.3 MiB/s wr, 257 op/s
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.010 250273 DEBUG nova.storage.rbd_utils [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] removing snapshot(nova-resize) on rbd image(1bdbf4d2-447b-47d0-8b3f-878ee65905a7_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 23 05:03:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Jan 23 05:03:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Jan 23 05:03:41 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Jan 23 05:03:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:03:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/640797016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.144 250273 DEBUG nova.virt.libvirt.vif [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T09:59:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1908507141',display_name='tempest-ServerActionsTestJSON-server-1908507141',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1908507141',id=93,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB0AiEKt9gHrsbueqjCG64VrzhP898xYsJXOd2/6uW3CZrw7c/2vnYXFOKeIp4qvJ25g/gz5/w2irrKH3R3Pyr6HiyEmMxGMtHTZ1L/l92xM4YiKXMLNL4VsFVwX3d+71g==',key_name='tempest-keypair-1055968095',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:03:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='74c5c1d0762242f29a5d26033efd9f6d',ramdisk_id='',reservation_id='r-ii3s65d1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-1619235720',owner_user_name='tempest-ServerActionsTestJSON-1619235720-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:03:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9d4a5c201efa4992a9ef57d8abdc1675',uuid=1bdbf4d2-447b-47d0-8b3f-878ee65905a7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.144 250273 DEBUG nova.network.os_vif_util [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converting VIF {"id": "35f84523-a0b5-4102-ba04-cc5da6075d54", "address": "fa:16:3e:b8:d6:dc", "network": {"id": "ee03d7c9-e107-41bf-95cc-5508578ad66c", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-267124880-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c5c1d0762242f29a5d26033efd9f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap35f84523-a0", "ovs_interfaceid": "35f84523-a0b5-4102-ba04-cc5da6075d54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.145 250273 DEBUG nova.network.os_vif_util [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.145 250273 DEBUG os_vif [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.147 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.147 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35f84523-a0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.147 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.149 250273 INFO os_vif [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:d6:dc,bridge_name='br-int',has_traffic_filtering=True,id=35f84523-a0b5-4102-ba04-cc5da6075d54,network=Network(ee03d7c9-e107-41bf-95cc-5508578ad66c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap35f84523-a0')#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.149 250273 DEBUG oslo_concurrency.lockutils [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.151 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.156 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.173 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.209 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.210 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.210 250273 DEBUG oslo_concurrency.lockutils [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.212 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.302 250273 DEBUG oslo_concurrency.processutils [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.421 250273 INFO nova.virt.libvirt.driver [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Creating config drive at /var/lib/nova/instances/40ae15fe-e324-4fc4-b6ee-df051fcbea8f/disk.config#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.427 250273 DEBUG oslo_concurrency.processutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/40ae15fe-e324-4fc4-b6ee-df051fcbea8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx9vixtjo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.559 250273 DEBUG oslo_concurrency.processutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/40ae15fe-e324-4fc4-b6ee-df051fcbea8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx9vixtjo" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.593 250273 DEBUG nova.storage.rbd_utils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.596 250273 DEBUG oslo_concurrency.processutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/40ae15fe-e324-4fc4-b6ee-df051fcbea8f/disk.config 40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:03:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.759 250273 DEBUG oslo_concurrency.processutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/40ae15fe-e324-4fc4-b6ee-df051fcbea8f/disk.config 40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.760 250273 INFO nova.virt.libvirt.driver [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Deleting local config drive /var/lib/nova/instances/40ae15fe-e324-4fc4-b6ee-df051fcbea8f/disk.config because it was imported into RBD.#033[00m
Jan 23 05:03:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:03:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/344694697' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.807 250273 DEBUG oslo_concurrency.processutils [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.813 250273 DEBUG nova.compute.provider_tree [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:03:41 np0005593232 kernel: tap72849734-6d: entered promiscuous mode
Jan 23 05:03:41 np0005593232 NetworkManager[49057]: <info>  [1769162621.8201] manager: (tap72849734-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/177)
Jan 23 05:03:41 np0005593232 ovn_controller[151001]: 2026-01-23T10:03:41Z|00357|binding|INFO|Claiming lport 72849734-6d62-43db-b9e3-9c5f39b22d9d for this chassis.
Jan 23 05:03:41 np0005593232 ovn_controller[151001]: 2026-01-23T10:03:41Z|00358|binding|INFO|72849734-6d62-43db-b9e3-9c5f39b22d9d: Claiming fa:16:3e:18:77:55 10.100.0.11
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.821 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:41.830 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:77:55 10.100.0.11'], port_security=['fa:16:3e:18:77:55 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '40ae15fe-e324-4fc4-b6ee-df051fcbea8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c16cd713fa74a88b43e4edf01c273bd', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a3146bb5-8649-4287-af90-398646a0838c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c3aed5f-30b8-4c57-808e-87764ab67fc8, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=72849734-6d62-43db-b9e3-9c5f39b22d9d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:03:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:41.832 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 72849734-6d62-43db-b9e3-9c5f39b22d9d in datapath 8575e824-4be0-4206-873e-2f9a3d1ded0b bound to our chassis#033[00m
Jan 23 05:03:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:41.833 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8575e824-4be0-4206-873e-2f9a3d1ded0b#033[00m
Jan 23 05:03:41 np0005593232 ovn_controller[151001]: 2026-01-23T10:03:41Z|00359|binding|INFO|Setting lport 72849734-6d62-43db-b9e3-9c5f39b22d9d ovn-installed in OVS
Jan 23 05:03:41 np0005593232 ovn_controller[151001]: 2026-01-23T10:03:41Z|00360|binding|INFO|Setting lport 72849734-6d62-43db-b9e3-9c5f39b22d9d up in Southbound
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.838 250273 DEBUG nova.scheduler.client.report [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.843 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:41.848 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e4a43d5d-c055-4ca0-bd6f-dedd801e7bb1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:41.849 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8575e824-41 in ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:03:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:41.850 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8575e824-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:03:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:41.851 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[798d8ac3-a7d9-4ce7-9c09-435ce44efed1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:41.851 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f3eaead8-99da-4473-aa49-e96bd29b2fd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:41 np0005593232 systemd-udevd[320073]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:03:41 np0005593232 systemd-machined[215836]: New machine qemu-43-instance-0000006b.
Jan 23 05:03:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:41.864 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[592d91cf-56bf-4a6e-ad10-7b4cd1c12c4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:41 np0005593232 systemd[1]: Started Virtual Machine qemu-43-instance-0000006b.
Jan 23 05:03:41 np0005593232 NetworkManager[49057]: <info>  [1769162621.8735] device (tap72849734-6d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:03:41 np0005593232 NetworkManager[49057]: <info>  [1769162621.8746] device (tap72849734-6d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:03:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:41.890 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[11912c38-e0f0-4232-846b-9368f3f197aa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:41 np0005593232 nova_compute[250269]: 2026-01-23 10:03:41.899 250273 DEBUG oslo_concurrency.lockutils [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:41.919 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[7fc1b59d-4402-4dc3-a029-83e38e133052]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:41.926 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[23144768-c843-452e-af62-83bac319c2ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:41 np0005593232 NetworkManager[49057]: <info>  [1769162621.9281] manager: (tap8575e824-40): new Veth device (/org/freedesktop/NetworkManager/Devices/178)
Jan 23 05:03:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:41.954 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[be26bed5-a627-4806-a74b-60b5ed6825ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:41.958 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[e1110d83-52c4-4f90-bbd3-fdffe1eb5156]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:41 np0005593232 NetworkManager[49057]: <info>  [1769162621.9766] device (tap8575e824-40): carrier: link connected
Jan 23 05:03:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:41.984 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[3fe3b7ab-ff2d-4cbc-889d-02c06eaa832f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:42.002 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[559b5401-5327-497a-9ad9-d7bf6a81d0c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8575e824-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:16:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653511, 'reachable_time': 37836, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320105, 'error': None, 'target': 'ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:42.019 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[07e9649b-b95c-46b0-99a3-89bdea51bdac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:16ca'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 653511, 'tstamp': 653511}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320106, 'error': None, 'target': 'ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:42.041 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ce1d47bb-90ce-427d-8714-7713885f2d10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8575e824-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:16:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 110], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653511, 'reachable_time': 37836, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320107, 'error': None, 'target': 'ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.043 250273 INFO nova.scheduler.client.report [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Deleted allocation for migration 568d5ba9-cc8d-4d7d-83d3-6c9eb255b306#033[00m
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:42.070 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b6d396e0-f65b-4b99-9fc7-1d817eb24668]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.119 250273 DEBUG oslo_concurrency.lockutils [None req-89a23700-31e2-4e1f-bbd1-ef72a305beb0 9d4a5c201efa4992a9ef57d8abdc1675 74c5c1d0762242f29a5d26033efd9f6d - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 3.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:42.132 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a82bd70e-8e08-41ee-87ae-f7d777d896c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:42.133 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8575e824-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:42.133 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:42.134 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8575e824-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.136 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:42 np0005593232 NetworkManager[49057]: <info>  [1769162622.1370] manager: (tap8575e824-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/179)
Jan 23 05:03:42 np0005593232 kernel: tap8575e824-40: entered promiscuous mode
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.138 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:42.141 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8575e824-40, col_values=(('external_ids', {'iface-id': 'f7023d86-3158-4cc4-b690-f57bb76e92b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.143 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:42 np0005593232 ovn_controller[151001]: 2026-01-23T10:03:42Z|00361|binding|INFO|Releasing lport f7023d86-3158-4cc4-b690-f57bb76e92b5 from this chassis (sb_readonly=0)
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:42.145 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8575e824-4be0-4206-873e-2f9a3d1ded0b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8575e824-4be0-4206-873e-2f9a3d1ded0b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:42.146 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[97ce52ab-bb98-4bd2-8993-90193cfaa878]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:42.147 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-8575e824-4be0-4206-873e-2f9a3d1ded0b
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/8575e824-4be0-4206-873e-2f9a3d1ded0b.pid.haproxy
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 8575e824-4be0-4206-873e-2f9a3d1ded0b
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:42.148 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'env', 'PROCESS_TAG=haproxy-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8575e824-4be0-4206-873e-2f9a3d1ded0b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.158 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:42.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.234 250273 DEBUG nova.network.neutron [req-95c440bd-f2bd-4a19-bedf-809e6261ad0d req-5469e3ab-aa08-4b18-b080-3e2ea02779db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Updated VIF entry in instance network info cache for port 72849734-6d62-43db-b9e3-9c5f39b22d9d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.234 250273 DEBUG nova.network.neutron [req-95c440bd-f2bd-4a19-bedf-809e6261ad0d req-5469e3ab-aa08-4b18-b080-3e2ea02779db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Updating instance_info_cache with network_info: [{"id": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "address": "fa:16:3e:18:77:55", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72849734-6d", "ovs_interfaceid": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.259 250273 DEBUG oslo_concurrency.lockutils [req-95c440bd-f2bd-4a19-bedf-809e6261ad0d req-5469e3ab-aa08-4b18-b080-3e2ea02779db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-40ae15fe-e324-4fc4-b6ee-df051fcbea8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.260 250273 DEBUG nova.compute.manager [req-95c440bd-f2bd-4a19-bedf-809e6261ad0d req-5469e3ab-aa08-4b18-b080-3e2ea02779db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.260 250273 DEBUG oslo_concurrency.lockutils [req-95c440bd-f2bd-4a19-bedf-809e6261ad0d req-5469e3ab-aa08-4b18-b080-3e2ea02779db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.261 250273 DEBUG oslo_concurrency.lockutils [req-95c440bd-f2bd-4a19-bedf-809e6261ad0d req-5469e3ab-aa08-4b18-b080-3e2ea02779db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.261 250273 DEBUG oslo_concurrency.lockutils [req-95c440bd-f2bd-4a19-bedf-809e6261ad0d req-5469e3ab-aa08-4b18-b080-3e2ea02779db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1bdbf4d2-447b-47d0-8b3f-878ee65905a7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.261 250273 DEBUG nova.compute.manager [req-95c440bd-f2bd-4a19-bedf-809e6261ad0d req-5469e3ab-aa08-4b18-b080-3e2ea02779db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] No waiting events found dispatching network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.262 250273 WARNING nova.compute.manager [req-95c440bd-f2bd-4a19-bedf-809e6261ad0d req-5469e3ab-aa08-4b18-b080-3e2ea02779db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Received unexpected event network-vif-plugged-35f84523-a0b5-4102-ba04-cc5da6075d54 for instance with vm_state resized and task_state None.#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.267 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162622.2676077, 40ae15fe-e324-4fc4-b6ee-df051fcbea8f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.268 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] VM Started (Lifecycle Event)#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.293 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.297 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162622.2677383, 40ae15fe-e324-4fc4-b6ee-df051fcbea8f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.298 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.319 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.322 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.356 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.370 250273 DEBUG nova.compute.manager [req-5ce1c3c6-7d6e-4201-9eb1-78e33398d462 req-5efab4bf-aba8-4e88-af23-ef6c3ff0fda6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Received event network-vif-plugged-72849734-6d62-43db-b9e3-9c5f39b22d9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.370 250273 DEBUG oslo_concurrency.lockutils [req-5ce1c3c6-7d6e-4201-9eb1-78e33398d462 req-5efab4bf-aba8-4e88-af23-ef6c3ff0fda6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.371 250273 DEBUG oslo_concurrency.lockutils [req-5ce1c3c6-7d6e-4201-9eb1-78e33398d462 req-5efab4bf-aba8-4e88-af23-ef6c3ff0fda6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.371 250273 DEBUG oslo_concurrency.lockutils [req-5ce1c3c6-7d6e-4201-9eb1-78e33398d462 req-5efab4bf-aba8-4e88-af23-ef6c3ff0fda6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.371 250273 DEBUG nova.compute.manager [req-5ce1c3c6-7d6e-4201-9eb1-78e33398d462 req-5efab4bf-aba8-4e88-af23-ef6c3ff0fda6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Processing event network-vif-plugged-72849734-6d62-43db-b9e3-9c5f39b22d9d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.372 250273 DEBUG nova.compute.manager [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.375 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162622.3749368, 40ae15fe-e324-4fc4-b6ee-df051fcbea8f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.375 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.380 250273 DEBUG nova.virt.libvirt.driver [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.383 250273 INFO nova.virt.libvirt.driver [-] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Instance spawned successfully.#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.384 250273 DEBUG nova.virt.libvirt.driver [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.408 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.410 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.427 250273 DEBUG nova.virt.libvirt.driver [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.428 250273 DEBUG nova.virt.libvirt.driver [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.429 250273 DEBUG nova.virt.libvirt.driver [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.429 250273 DEBUG nova.virt.libvirt.driver [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.429 250273 DEBUG nova.virt.libvirt.driver [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.430 250273 DEBUG nova.virt.libvirt.driver [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.434 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.498 250273 INFO nova.compute.manager [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Took 10.75 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.499 250273 DEBUG nova.compute.manager [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:03:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:42.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:42 np0005593232 podman[320181]: 2026-01-23 10:03:42.558893563 +0000 UTC m=+0.064949496 container create c8043afad1b62d8d5be4d78b8732f808639279bc22f2ad4ca50f2a6aeb4cb167 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.587 250273 INFO nova.compute.manager [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Took 12.07 seconds to build instance.#033[00m
Jan 23 05:03:42 np0005593232 systemd[1]: Started libpod-conmon-c8043afad1b62d8d5be4d78b8732f808639279bc22f2ad4ca50f2a6aeb4cb167.scope.
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.615 250273 DEBUG oslo_concurrency.lockutils [None req-15a8acd9-97b3-4337-a5a7-b711b70623c2 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:42.615 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:42.616 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:03:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:03:42.616 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:42 np0005593232 podman[320181]: 2026-01-23 10:03:42.525728681 +0000 UTC m=+0.031784644 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.629 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769162607.6284168, 1bdbf4d2-447b-47d0-8b3f-878ee65905a7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:03:42 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.630 250273 INFO nova.compute.manager [-] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:03:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efec68e94cfd8105cedffda1e1132a6c02116b128e31c0f3868ac9954976a1a2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:03:42 np0005593232 podman[320181]: 2026-01-23 10:03:42.655703554 +0000 UTC m=+0.161759507 container init c8043afad1b62d8d5be4d78b8732f808639279bc22f2ad4ca50f2a6aeb4cb167 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:03:42 np0005593232 nova_compute[250269]: 2026-01-23 10:03:42.655 250273 DEBUG nova.compute.manager [None req-02f931dc-5138-48de-8ac4-c93c908c74c1 - - - - - -] [instance: 1bdbf4d2-447b-47d0-8b3f-878ee65905a7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:03:42 np0005593232 podman[320181]: 2026-01-23 10:03:42.662273051 +0000 UTC m=+0.168328984 container start c8043afad1b62d8d5be4d78b8732f808639279bc22f2ad4ca50f2a6aeb4cb167 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:03:42 np0005593232 neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b[320196]: [NOTICE]   (320200) : New worker (320202) forked
Jan 23 05:03:42 np0005593232 neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b[320196]: [NOTICE]   (320200) : Loading success.
Jan 23 05:03:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2178: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 248 MiB data, 947 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 752 KiB/s wr, 257 op/s
Jan 23 05:03:43 np0005593232 nova_compute[250269]: 2026-01-23 10:03:43.232 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:03:43 np0005593232 nova_compute[250269]: 2026-01-23 10:03:43.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:03:43 np0005593232 nova_compute[250269]: 2026-01-23 10:03:43.545 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:44.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:44.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:44 np0005593232 nova_compute[250269]: 2026-01-23 10:03:44.534 250273 DEBUG nova.compute.manager [req-58db8afe-2c8e-48a0-b358-981600c1bdb9 req-026b0a1f-e03c-4819-abd6-e424d4573f89 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Received event network-vif-plugged-72849734-6d62-43db-b9e3-9c5f39b22d9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:03:44 np0005593232 nova_compute[250269]: 2026-01-23 10:03:44.534 250273 DEBUG oslo_concurrency.lockutils [req-58db8afe-2c8e-48a0-b358-981600c1bdb9 req-026b0a1f-e03c-4819-abd6-e424d4573f89 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:03:44 np0005593232 nova_compute[250269]: 2026-01-23 10:03:44.534 250273 DEBUG oslo_concurrency.lockutils [req-58db8afe-2c8e-48a0-b358-981600c1bdb9 req-026b0a1f-e03c-4819-abd6-e424d4573f89 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:03:44 np0005593232 nova_compute[250269]: 2026-01-23 10:03:44.534 250273 DEBUG oslo_concurrency.lockutils [req-58db8afe-2c8e-48a0-b358-981600c1bdb9 req-026b0a1f-e03c-4819-abd6-e424d4573f89 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:03:44 np0005593232 nova_compute[250269]: 2026-01-23 10:03:44.535 250273 DEBUG nova.compute.manager [req-58db8afe-2c8e-48a0-b358-981600c1bdb9 req-026b0a1f-e03c-4819-abd6-e424d4573f89 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] No waiting events found dispatching network-vif-plugged-72849734-6d62-43db-b9e3-9c5f39b22d9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:03:44 np0005593232 nova_compute[250269]: 2026-01-23 10:03:44.535 250273 WARNING nova.compute.manager [req-58db8afe-2c8e-48a0-b358-981600c1bdb9 req-026b0a1f-e03c-4819-abd6-e424d4573f89 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Received unexpected event network-vif-plugged-72849734-6d62-43db-b9e3-9c5f39b22d9d for instance with vm_state active and task_state None.#033[00m
Jan 23 05:03:44 np0005593232 podman[320236]: 2026-01-23 10:03:44.744396886 +0000 UTC m=+0.053390718 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:03:44 np0005593232 nova_compute[250269]: 2026-01-23 10:03:44.898 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2179: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 248 MiB data, 947 MiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 750 KiB/s wr, 256 op/s
Jan 23 05:03:45 np0005593232 nova_compute[250269]: 2026-01-23 10:03:45.820 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:03:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:46.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:03:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:46.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:03:46 np0005593232 nova_compute[250269]: 2026-01-23 10:03:46.621 250273 DEBUG nova.compute.manager [req-437650dc-c6f0-4e6e-96b4-aef4abc71e6e req-10768c7e-26ad-46c4-959c-f3104a9a02e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Received event network-changed-72849734-6d62-43db-b9e3-9c5f39b22d9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:03:46 np0005593232 nova_compute[250269]: 2026-01-23 10:03:46.621 250273 DEBUG nova.compute.manager [req-437650dc-c6f0-4e6e-96b4-aef4abc71e6e req-10768c7e-26ad-46c4-959c-f3104a9a02e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Refreshing instance network info cache due to event network-changed-72849734-6d62-43db-b9e3-9c5f39b22d9d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:03:46 np0005593232 nova_compute[250269]: 2026-01-23 10:03:46.622 250273 DEBUG oslo_concurrency.lockutils [req-437650dc-c6f0-4e6e-96b4-aef4abc71e6e req-10768c7e-26ad-46c4-959c-f3104a9a02e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-40ae15fe-e324-4fc4-b6ee-df051fcbea8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:03:46 np0005593232 nova_compute[250269]: 2026-01-23 10:03:46.622 250273 DEBUG oslo_concurrency.lockutils [req-437650dc-c6f0-4e6e-96b4-aef4abc71e6e req-10768c7e-26ad-46c4-959c-f3104a9a02e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-40ae15fe-e324-4fc4-b6ee-df051fcbea8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:03:46 np0005593232 nova_compute[250269]: 2026-01-23 10:03:46.622 250273 DEBUG nova.network.neutron [req-437650dc-c6f0-4e6e-96b4-aef4abc71e6e req-10768c7e-26ad-46c4-959c-f3104a9a02e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Refreshing network info cache for port 72849734-6d62-43db-b9e3-9c5f39b22d9d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:03:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:03:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:03:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:03:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:03:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005338650718872138 of space, bias 1.0, pg target 1.6015952156616415 quantized to 32 (current 32)
Jan 23 05:03:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:03:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Jan 23 05:03:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:03:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:03:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:03:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 23 05:03:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:03:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 05:03:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:03:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:03:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:03:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 05:03:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:03:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 05:03:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:03:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:03:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:03:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 05:03:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2180: 321 pgs: 321 active+clean; 235 MiB data, 939 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 247 KiB/s wr, 256 op/s
Jan 23 05:03:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:48.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2181: 321 pgs: 321 active+clean; 167 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 21 KiB/s wr, 163 op/s
Jan 23 05:03:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:49.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:49 np0005593232 nova_compute[250269]: 2026-01-23 10:03:49.366 250273 DEBUG nova.network.neutron [req-437650dc-c6f0-4e6e-96b4-aef4abc71e6e req-10768c7e-26ad-46c4-959c-f3104a9a02e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Updated VIF entry in instance network info cache for port 72849734-6d62-43db-b9e3-9c5f39b22d9d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:03:49 np0005593232 nova_compute[250269]: 2026-01-23 10:03:49.367 250273 DEBUG nova.network.neutron [req-437650dc-c6f0-4e6e-96b4-aef4abc71e6e req-10768c7e-26ad-46c4-959c-f3104a9a02e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Updating instance_info_cache with network_info: [{"id": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "address": "fa:16:3e:18:77:55", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72849734-6d", "ovs_interfaceid": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:03:49 np0005593232 nova_compute[250269]: 2026-01-23 10:03:49.387 250273 DEBUG oslo_concurrency.lockutils [req-437650dc-c6f0-4e6e-96b4-aef4abc71e6e req-10768c7e-26ad-46c4-959c-f3104a9a02e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-40ae15fe-e324-4fc4-b6ee-df051fcbea8f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:03:49 np0005593232 nova_compute[250269]: 2026-01-23 10:03:49.902 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:50.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:50 np0005593232 nova_compute[250269]: 2026-01-23 10:03:50.465 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:50 np0005593232 nova_compute[250269]: 2026-01-23 10:03:50.823 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2182: 321 pgs: 321 active+clean; 167 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 21 KiB/s wr, 163 op/s
Jan 23 05:03:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:03:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:51.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:03:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:03:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Jan 23 05:03:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Jan 23 05:03:51 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Jan 23 05:03:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:03:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:52.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:03:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2184: 321 pgs: 321 active+clean; 213 MiB data, 921 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 143 op/s
Jan 23 05:03:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:53.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:54.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:54 np0005593232 nova_compute[250269]: 2026-01-23 10:03:54.906 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2185: 321 pgs: 321 active+clean; 213 MiB data, 921 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 143 op/s
Jan 23 05:03:55 np0005593232 nova_compute[250269]: 2026-01-23 10:03:55.027 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:55.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:55 np0005593232 nova_compute[250269]: 2026-01-23 10:03:55.824 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:03:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:03:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:56.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:03:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:03:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2186: 321 pgs: 321 active+clean; 230 MiB data, 935 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 140 op/s
Jan 23 05:03:57 np0005593232 ovn_controller[151001]: 2026-01-23T10:03:57Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:18:77:55 10.100.0.11
Jan 23 05:03:57 np0005593232 ovn_controller[151001]: 2026-01-23T10:03:57Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:18:77:55 10.100.0.11
Jan 23 05:03:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:57.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:03:58.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2187: 321 pgs: 321 active+clean; 288 MiB data, 967 MiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 6.8 MiB/s wr, 130 op/s
Jan 23 05:03:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:03:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:03:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:03:59.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:03:59 np0005593232 nova_compute[250269]: 2026-01-23 10:03:59.909 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:00.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:00 np0005593232 nova_compute[250269]: 2026-01-23 10:04:00.826 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2188: 321 pgs: 321 active+clean; 288 MiB data, 967 MiB used, 20 GiB / 21 GiB avail; 329 KiB/s rd, 6.8 MiB/s wr, 130 op/s
Jan 23 05:04:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:01.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:01 np0005593232 nova_compute[250269]: 2026-01-23 10:04:01.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:04:01 np0005593232 nova_compute[250269]: 2026-01-23 10:04:01.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 23 05:04:01 np0005593232 nova_compute[250269]: 2026-01-23 10:04:01.313 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 23 05:04:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:04:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:02.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2189: 321 pgs: 321 active+clean; 293 MiB data, 968 MiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.8 MiB/s wr, 206 op/s
Jan 23 05:04:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:03.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:03 np0005593232 nova_compute[250269]: 2026-01-23 10:04:03.269 250273 DEBUG oslo_concurrency.lockutils [None req-615a4403-0be6-4f79-8449-01bb776fa283 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:04:03 np0005593232 nova_compute[250269]: 2026-01-23 10:04:03.269 250273 DEBUG oslo_concurrency.lockutils [None req-615a4403-0be6-4f79-8449-01bb776fa283 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:04:03 np0005593232 nova_compute[250269]: 2026-01-23 10:04:03.288 250273 DEBUG nova.objects.instance [None req-615a4403-0be6-4f79-8449-01bb776fa283 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lazy-loading 'flavor' on Instance uuid 40ae15fe-e324-4fc4-b6ee-df051fcbea8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:04:03 np0005593232 nova_compute[250269]: 2026-01-23 10:04:03.659 250273 DEBUG oslo_concurrency.lockutils [None req-615a4403-0be6-4f79-8449-01bb776fa283 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.390s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:04:04 np0005593232 nova_compute[250269]: 2026-01-23 10:04:04.088 250273 DEBUG oslo_concurrency.lockutils [None req-615a4403-0be6-4f79-8449-01bb776fa283 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:04:04 np0005593232 nova_compute[250269]: 2026-01-23 10:04:04.089 250273 DEBUG oslo_concurrency.lockutils [None req-615a4403-0be6-4f79-8449-01bb776fa283 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:04:04 np0005593232 nova_compute[250269]: 2026-01-23 10:04:04.089 250273 INFO nova.compute.manager [None req-615a4403-0be6-4f79-8449-01bb776fa283 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Attaching volume bcbb40c5-2718-4ca8-8327-19725d9ddd5d to /dev/vdb#033[00m
Jan 23 05:04:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:04.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:04 np0005593232 nova_compute[250269]: 2026-01-23 10:04:04.328 250273 DEBUG os_brick.utils [None req-615a4403-0be6-4f79-8449-01bb776fa283 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 23 05:04:04 np0005593232 nova_compute[250269]: 2026-01-23 10:04:04.331 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:04:04 np0005593232 nova_compute[250269]: 2026-01-23 10:04:04.346 265031 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:04:04 np0005593232 nova_compute[250269]: 2026-01-23 10:04:04.346 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[b0b907f4-ed97-483c-8c7e-f2986109a294]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:04 np0005593232 nova_compute[250269]: 2026-01-23 10:04:04.348 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:04:04 np0005593232 nova_compute[250269]: 2026-01-23 10:04:04.364 265031 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:04:04 np0005593232 nova_compute[250269]: 2026-01-23 10:04:04.365 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[64dfe44f-4d07-4214-b836-3e92cc453f81]: (4, ('InitiatorName=iqn.1994-05.com.redhat:c6473626b456', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:04 np0005593232 nova_compute[250269]: 2026-01-23 10:04:04.367 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:04:04 np0005593232 nova_compute[250269]: 2026-01-23 10:04:04.376 265031 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:04:04 np0005593232 nova_compute[250269]: 2026-01-23 10:04:04.376 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[50fb6b39-c046-4dc4-aaaf-b20336a8ddf7]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:04 np0005593232 nova_compute[250269]: 2026-01-23 10:04:04.377 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[f53e21b7-4ee9-43a2-af53-447a21ab497a]: (4, 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:04 np0005593232 nova_compute[250269]: 2026-01-23 10:04:04.378 250273 DEBUG oslo_concurrency.processutils [None req-615a4403-0be6-4f79-8449-01bb776fa283 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:04:04 np0005593232 nova_compute[250269]: 2026-01-23 10:04:04.408 250273 DEBUG oslo_concurrency.processutils [None req-615a4403-0be6-4f79-8449-01bb776fa283 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:04:04 np0005593232 nova_compute[250269]: 2026-01-23 10:04:04.412 250273 DEBUG os_brick.initiator.connectors.lightos [None req-615a4403-0be6-4f79-8449-01bb776fa283 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 23 05:04:04 np0005593232 nova_compute[250269]: 2026-01-23 10:04:04.412 250273 DEBUG os_brick.initiator.connectors.lightos [None req-615a4403-0be6-4f79-8449-01bb776fa283 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 23 05:04:04 np0005593232 nova_compute[250269]: 2026-01-23 10:04:04.412 250273 DEBUG os_brick.initiator.connectors.lightos [None req-615a4403-0be6-4f79-8449-01bb776fa283 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 23 05:04:04 np0005593232 nova_compute[250269]: 2026-01-23 10:04:04.413 250273 DEBUG os_brick.utils [None req-615a4403-0be6-4f79-8449-01bb776fa283 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] <== get_connector_properties: return (84ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:c6473626b456', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 23 05:04:04 np0005593232 nova_compute[250269]: 2026-01-23 10:04:04.413 250273 DEBUG nova.virt.block_device [None req-615a4403-0be6-4f79-8449-01bb776fa283 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Updating existing volume attachment record: 5460fba2-e37d-4d8c-838a-3cf4add6ce02 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 23 05:04:04 np0005593232 nova_compute[250269]: 2026-01-23 10:04:04.912 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2190: 321 pgs: 321 active+clean; 293 MiB data, 968 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 169 op/s
Jan 23 05:04:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:05.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:05 np0005593232 nova_compute[250269]: 2026-01-23 10:04:05.441 250273 DEBUG nova.objects.instance [None req-615a4403-0be6-4f79-8449-01bb776fa283 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lazy-loading 'flavor' on Instance uuid 40ae15fe-e324-4fc4-b6ee-df051fcbea8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:04:05 np0005593232 nova_compute[250269]: 2026-01-23 10:04:05.476 250273 DEBUG nova.virt.libvirt.driver [None req-615a4403-0be6-4f79-8449-01bb776fa283 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Attempting to attach volume bcbb40c5-2718-4ca8-8327-19725d9ddd5d with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 23 05:04:05 np0005593232 nova_compute[250269]: 2026-01-23 10:04:05.480 250273 DEBUG nova.virt.libvirt.guest [None req-615a4403-0be6-4f79-8449-01bb776fa283 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] attach device xml: <disk type="network" device="disk">
Jan 23 05:04:05 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:04:05 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-bcbb40c5-2718-4ca8-8327-19725d9ddd5d">
Jan 23 05:04:05 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:04:05 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:04:05 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:04:05 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:04:05 np0005593232 nova_compute[250269]:  <auth username="openstack">
Jan 23 05:04:05 np0005593232 nova_compute[250269]:    <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:04:05 np0005593232 nova_compute[250269]:  </auth>
Jan 23 05:04:05 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 05:04:05 np0005593232 nova_compute[250269]:  <serial>bcbb40c5-2718-4ca8-8327-19725d9ddd5d</serial>
Jan 23 05:04:05 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:04:05 np0005593232 nova_compute[250269]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 23 05:04:05 np0005593232 nova_compute[250269]: 2026-01-23 10:04:05.626 250273 DEBUG nova.virt.libvirt.driver [None req-615a4403-0be6-4f79-8449-01bb776fa283 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:04:05 np0005593232 nova_compute[250269]: 2026-01-23 10:04:05.626 250273 DEBUG nova.virt.libvirt.driver [None req-615a4403-0be6-4f79-8449-01bb776fa283 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:04:05 np0005593232 nova_compute[250269]: 2026-01-23 10:04:05.626 250273 DEBUG nova.virt.libvirt.driver [None req-615a4403-0be6-4f79-8449-01bb776fa283 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:04:05 np0005593232 nova_compute[250269]: 2026-01-23 10:04:05.626 250273 DEBUG nova.virt.libvirt.driver [None req-615a4403-0be6-4f79-8449-01bb776fa283 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] No VIF found with MAC fa:16:3e:18:77:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:04:05 np0005593232 nova_compute[250269]: 2026-01-23 10:04:05.829 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:05 np0005593232 nova_compute[250269]: 2026-01-23 10:04:05.940 250273 DEBUG oslo_concurrency.lockutils [None req-615a4403-0be6-4f79-8449-01bb776fa283 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:04:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:06.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:04:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2191: 321 pgs: 321 active+clean; 300 MiB data, 970 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.1 MiB/s wr, 203 op/s
Jan 23 05:04:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:07.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:04:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:04:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:04:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:04:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:04:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:04:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:08.157 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:04:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:08.158 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:04:08 np0005593232 nova_compute[250269]: 2026-01-23 10:04:08.158 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:08.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:08 np0005593232 nova_compute[250269]: 2026-01-23 10:04:08.651 250273 INFO nova.compute.manager [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Rebuilding instance#033[00m
Jan 23 05:04:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2192: 321 pgs: 321 active+clean; 293 MiB data, 970 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.5 MiB/s wr, 212 op/s
Jan 23 05:04:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:09.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:09 np0005593232 nova_compute[250269]: 2026-01-23 10:04:09.297 250273 DEBUG nova.objects.instance [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lazy-loading 'trusted_certs' on Instance uuid 40ae15fe-e324-4fc4-b6ee-df051fcbea8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:04:09 np0005593232 nova_compute[250269]: 2026-01-23 10:04:09.320 250273 DEBUG nova.compute.manager [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:04:09 np0005593232 nova_compute[250269]: 2026-01-23 10:04:09.380 250273 DEBUG nova.objects.instance [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lazy-loading 'pci_requests' on Instance uuid 40ae15fe-e324-4fc4-b6ee-df051fcbea8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:04:09 np0005593232 nova_compute[250269]: 2026-01-23 10:04:09.393 250273 DEBUG nova.objects.instance [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lazy-loading 'pci_devices' on Instance uuid 40ae15fe-e324-4fc4-b6ee-df051fcbea8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:04:09 np0005593232 nova_compute[250269]: 2026-01-23 10:04:09.408 250273 DEBUG nova.objects.instance [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lazy-loading 'resources' on Instance uuid 40ae15fe-e324-4fc4-b6ee-df051fcbea8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:04:09 np0005593232 nova_compute[250269]: 2026-01-23 10:04:09.421 250273 DEBUG nova.objects.instance [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lazy-loading 'migration_context' on Instance uuid 40ae15fe-e324-4fc4-b6ee-df051fcbea8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:04:09 np0005593232 nova_compute[250269]: 2026-01-23 10:04:09.434 250273 DEBUG nova.objects.instance [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 23 05:04:09 np0005593232 nova_compute[250269]: 2026-01-23 10:04:09.438 250273 DEBUG nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 23 05:04:09 np0005593232 nova_compute[250269]: 2026-01-23 10:04:09.915 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:04:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:10.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:04:10 np0005593232 podman[320371]: 2026-01-23 10:04:10.431942859 +0000 UTC m=+0.084466731 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 23 05:04:10 np0005593232 nova_compute[250269]: 2026-01-23 10:04:10.829 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2193: 321 pgs: 321 active+clean; 293 MiB data, 970 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 147 op/s
Jan 23 05:04:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:04:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:11.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:04:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:04:11 np0005593232 kernel: tap72849734-6d (unregistering): left promiscuous mode
Jan 23 05:04:11 np0005593232 NetworkManager[49057]: <info>  [1769162651.7611] device (tap72849734-6d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:04:11 np0005593232 nova_compute[250269]: 2026-01-23 10:04:11.770 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:11 np0005593232 ovn_controller[151001]: 2026-01-23T10:04:11Z|00362|binding|INFO|Releasing lport 72849734-6d62-43db-b9e3-9c5f39b22d9d from this chassis (sb_readonly=0)
Jan 23 05:04:11 np0005593232 ovn_controller[151001]: 2026-01-23T10:04:11Z|00363|binding|INFO|Setting lport 72849734-6d62-43db-b9e3-9c5f39b22d9d down in Southbound
Jan 23 05:04:11 np0005593232 ovn_controller[151001]: 2026-01-23T10:04:11Z|00364|binding|INFO|Removing iface tap72849734-6d ovn-installed in OVS
Jan 23 05:04:11 np0005593232 nova_compute[250269]: 2026-01-23 10:04:11.775 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:11.782 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:77:55 10.100.0.11'], port_security=['fa:16:3e:18:77:55 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '40ae15fe-e324-4fc4-b6ee-df051fcbea8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c16cd713fa74a88b43e4edf01c273bd', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a3146bb5-8649-4287-af90-398646a0838c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.237'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c3aed5f-30b8-4c57-808e-87764ab67fc8, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=72849734-6d62-43db-b9e3-9c5f39b22d9d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:04:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:11.785 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 72849734-6d62-43db-b9e3-9c5f39b22d9d in datapath 8575e824-4be0-4206-873e-2f9a3d1ded0b unbound from our chassis#033[00m
Jan 23 05:04:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:11.787 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8575e824-4be0-4206-873e-2f9a3d1ded0b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:04:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:11.789 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5cbb4b5b-6649-43cc-b320-f3b34066d1b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:11.790 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b namespace which is not needed anymore#033[00m
Jan 23 05:04:11 np0005593232 nova_compute[250269]: 2026-01-23 10:04:11.792 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:11 np0005593232 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Jan 23 05:04:11 np0005593232 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d0000006b.scope: Consumed 14.347s CPU time.
Jan 23 05:04:11 np0005593232 systemd-machined[215836]: Machine qemu-43-instance-0000006b terminated.
Jan 23 05:04:11 np0005593232 neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b[320196]: [NOTICE]   (320200) : haproxy version is 2.8.14-c23fe91
Jan 23 05:04:11 np0005593232 neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b[320196]: [NOTICE]   (320200) : path to executable is /usr/sbin/haproxy
Jan 23 05:04:11 np0005593232 neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b[320196]: [WARNING]  (320200) : Exiting Master process...
Jan 23 05:04:11 np0005593232 neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b[320196]: [ALERT]    (320200) : Current worker (320202) exited with code 143 (Terminated)
Jan 23 05:04:11 np0005593232 neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b[320196]: [WARNING]  (320200) : All workers exited. Exiting... (0)
Jan 23 05:04:11 np0005593232 systemd[1]: libpod-c8043afad1b62d8d5be4d78b8732f808639279bc22f2ad4ca50f2a6aeb4cb167.scope: Deactivated successfully.
Jan 23 05:04:11 np0005593232 podman[320423]: 2026-01-23 10:04:11.922848442 +0000 UTC m=+0.044280029 container died c8043afad1b62d8d5be4d78b8732f808639279bc22f2ad4ca50f2a6aeb4cb167 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:04:11 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c8043afad1b62d8d5be4d78b8732f808639279bc22f2ad4ca50f2a6aeb4cb167-userdata-shm.mount: Deactivated successfully.
Jan 23 05:04:11 np0005593232 systemd[1]: var-lib-containers-storage-overlay-efec68e94cfd8105cedffda1e1132a6c02116b128e31c0f3868ac9954976a1a2-merged.mount: Deactivated successfully.
Jan 23 05:04:11 np0005593232 podman[320423]: 2026-01-23 10:04:11.967276485 +0000 UTC m=+0.088708082 container cleanup c8043afad1b62d8d5be4d78b8732f808639279bc22f2ad4ca50f2a6aeb4cb167 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 05:04:11 np0005593232 systemd[1]: libpod-conmon-c8043afad1b62d8d5be4d78b8732f808639279bc22f2ad4ca50f2a6aeb4cb167.scope: Deactivated successfully.
Jan 23 05:04:12 np0005593232 podman[320450]: 2026-01-23 10:04:12.04098482 +0000 UTC m=+0.052283167 container remove c8043afad1b62d8d5be4d78b8732f808639279bc22f2ad4ca50f2a6aeb4cb167 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 05:04:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:12.048 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[cb370f76-6aea-4823-b34b-d4caa0079e8b]: (4, ('Fri Jan 23 10:04:11 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b (c8043afad1b62d8d5be4d78b8732f808639279bc22f2ad4ca50f2a6aeb4cb167)\nc8043afad1b62d8d5be4d78b8732f808639279bc22f2ad4ca50f2a6aeb4cb167\nFri Jan 23 10:04:11 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b (c8043afad1b62d8d5be4d78b8732f808639279bc22f2ad4ca50f2a6aeb4cb167)\nc8043afad1b62d8d5be4d78b8732f808639279bc22f2ad4ca50f2a6aeb4cb167\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:12.050 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ed2c787a-47a7-4570-9070-5c4020a30729]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:12.051 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8575e824-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:04:12 np0005593232 nova_compute[250269]: 2026-01-23 10:04:12.053 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:12 np0005593232 kernel: tap8575e824-40: left promiscuous mode
Jan 23 05:04:12 np0005593232 nova_compute[250269]: 2026-01-23 10:04:12.071 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:12.076 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a1c0f4da-19d5-4e08-8f5b-11683b638e2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:12.097 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9ee8a3ab-0ee2-4c0c-a9b2-bedefce560f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:12.098 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[85c084c6-b7fe-4c28-a04f-7e07e60843be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:12.114 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1c489a09-8b3f-4cc6-931d-5bd296e2a815]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 653505, 'reachable_time': 39325, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320480, 'error': None, 'target': 'ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:12.117 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:04:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:12.118 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[b58cb02a-4c0a-46c2-8a74-a698779c3209]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:12 np0005593232 systemd[1]: run-netns-ovnmeta\x2d8575e824\x2d4be0\x2d4206\x2d873e\x2d2f9a3d1ded0b.mount: Deactivated successfully.
Jan 23 05:04:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:12.160 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:04:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:12.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:12 np0005593232 nova_compute[250269]: 2026-01-23 10:04:12.453 250273 INFO nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Instance shutdown successfully after 3 seconds.#033[00m
Jan 23 05:04:12 np0005593232 nova_compute[250269]: 2026-01-23 10:04:12.458 250273 INFO nova.virt.libvirt.driver [-] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Instance destroyed successfully.#033[00m
Jan 23 05:04:12 np0005593232 nova_compute[250269]: 2026-01-23 10:04:12.739 250273 INFO nova.compute.manager [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Detaching volume bcbb40c5-2718-4ca8-8327-19725d9ddd5d#033[00m
Jan 23 05:04:12 np0005593232 nova_compute[250269]: 2026-01-23 10:04:12.864 250273 DEBUG nova.compute.manager [req-7b9f1159-7eab-44f8-8322-a55582f4f6c1 req-c13ec1b5-e19c-47bb-8fe1-bd7738b17016 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Received event network-vif-unplugged-72849734-6d62-43db-b9e3-9c5f39b22d9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:04:12 np0005593232 nova_compute[250269]: 2026-01-23 10:04:12.865 250273 DEBUG oslo_concurrency.lockutils [req-7b9f1159-7eab-44f8-8322-a55582f4f6c1 req-c13ec1b5-e19c-47bb-8fe1-bd7738b17016 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:04:12 np0005593232 nova_compute[250269]: 2026-01-23 10:04:12.865 250273 DEBUG oslo_concurrency.lockutils [req-7b9f1159-7eab-44f8-8322-a55582f4f6c1 req-c13ec1b5-e19c-47bb-8fe1-bd7738b17016 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:04:12 np0005593232 nova_compute[250269]: 2026-01-23 10:04:12.866 250273 DEBUG oslo_concurrency.lockutils [req-7b9f1159-7eab-44f8-8322-a55582f4f6c1 req-c13ec1b5-e19c-47bb-8fe1-bd7738b17016 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:04:12 np0005593232 nova_compute[250269]: 2026-01-23 10:04:12.866 250273 DEBUG nova.compute.manager [req-7b9f1159-7eab-44f8-8322-a55582f4f6c1 req-c13ec1b5-e19c-47bb-8fe1-bd7738b17016 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] No waiting events found dispatching network-vif-unplugged-72849734-6d62-43db-b9e3-9c5f39b22d9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:04:12 np0005593232 nova_compute[250269]: 2026-01-23 10:04:12.866 250273 WARNING nova.compute.manager [req-7b9f1159-7eab-44f8-8322-a55582f4f6c1 req-c13ec1b5-e19c-47bb-8fe1-bd7738b17016 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Received unexpected event network-vif-unplugged-72849734-6d62-43db-b9e3-9c5f39b22d9d for instance with vm_state active and task_state rebuilding.#033[00m
Jan 23 05:04:12 np0005593232 nova_compute[250269]: 2026-01-23 10:04:12.866 250273 DEBUG nova.compute.manager [req-7b9f1159-7eab-44f8-8322-a55582f4f6c1 req-c13ec1b5-e19c-47bb-8fe1-bd7738b17016 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Received event network-vif-plugged-72849734-6d62-43db-b9e3-9c5f39b22d9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:04:12 np0005593232 nova_compute[250269]: 2026-01-23 10:04:12.867 250273 DEBUG oslo_concurrency.lockutils [req-7b9f1159-7eab-44f8-8322-a55582f4f6c1 req-c13ec1b5-e19c-47bb-8fe1-bd7738b17016 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:04:12 np0005593232 nova_compute[250269]: 2026-01-23 10:04:12.867 250273 DEBUG oslo_concurrency.lockutils [req-7b9f1159-7eab-44f8-8322-a55582f4f6c1 req-c13ec1b5-e19c-47bb-8fe1-bd7738b17016 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:04:12 np0005593232 nova_compute[250269]: 2026-01-23 10:04:12.867 250273 DEBUG oslo_concurrency.lockutils [req-7b9f1159-7eab-44f8-8322-a55582f4f6c1 req-c13ec1b5-e19c-47bb-8fe1-bd7738b17016 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:04:12 np0005593232 nova_compute[250269]: 2026-01-23 10:04:12.867 250273 DEBUG nova.compute.manager [req-7b9f1159-7eab-44f8-8322-a55582f4f6c1 req-c13ec1b5-e19c-47bb-8fe1-bd7738b17016 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] No waiting events found dispatching network-vif-plugged-72849734-6d62-43db-b9e3-9c5f39b22d9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:04:12 np0005593232 nova_compute[250269]: 2026-01-23 10:04:12.868 250273 WARNING nova.compute.manager [req-7b9f1159-7eab-44f8-8322-a55582f4f6c1 req-c13ec1b5-e19c-47bb-8fe1-bd7738b17016 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Received unexpected event network-vif-plugged-72849734-6d62-43db-b9e3-9c5f39b22d9d for instance with vm_state active and task_state rebuilding.#033[00m
Jan 23 05:04:12 np0005593232 nova_compute[250269]: 2026-01-23 10:04:12.967 250273 INFO nova.virt.block_device [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Attempting to driver detach volume bcbb40c5-2718-4ca8-8327-19725d9ddd5d from mountpoint /dev/vdb#033[00m
Jan 23 05:04:12 np0005593232 nova_compute[250269]: 2026-01-23 10:04:12.974 250273 DEBUG nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Attempting to detach device vdb from instance 40ae15fe-e324-4fc4-b6ee-df051fcbea8f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 23 05:04:12 np0005593232 nova_compute[250269]: 2026-01-23 10:04:12.975 250273 DEBUG nova.virt.libvirt.guest [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] detach device xml: <disk type="network" device="disk">
Jan 23 05:04:12 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:04:12 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-bcbb40c5-2718-4ca8-8327-19725d9ddd5d">
Jan 23 05:04:12 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:04:12 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:04:12 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:04:12 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:04:12 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 05:04:12 np0005593232 nova_compute[250269]:  <serial>bcbb40c5-2718-4ca8-8327-19725d9ddd5d</serial>
Jan 23 05:04:12 np0005593232 nova_compute[250269]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 23 05:04:12 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:04:12 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 05:04:12 np0005593232 nova_compute[250269]: 2026-01-23 10:04:12.988 250273 INFO nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Successfully detached device vdb from instance 40ae15fe-e324-4fc4-b6ee-df051fcbea8f from the persistent domain config.#033[00m
Jan 23 05:04:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2194: 321 pgs: 321 active+clean; 293 MiB data, 968 MiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 288 op/s
Jan 23 05:04:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:13.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:13 np0005593232 nova_compute[250269]: 2026-01-23 10:04:13.241 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:13 np0005593232 nova_compute[250269]: 2026-01-23 10:04:13.352 250273 INFO nova.virt.libvirt.driver [-] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Instance destroyed successfully.#033[00m
Jan 23 05:04:13 np0005593232 nova_compute[250269]: 2026-01-23 10:04:13.353 250273 DEBUG nova.virt.libvirt.vif [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:03:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1019564397',display_name='tempest-ServerActionsTestOtherA-server-1776476949',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1019564397',id=107,image_ref='ae1f9e37-418c-462f-81d1-3599a6d89de9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHGL26NAvGQ3xkW0HCIL1RZNLJx0MIysq/lI68J78z++HdYLLG/Y8vj8YSH+2c/ADo8uw7KAcJiRiM8sc9yPRucXw5x0Nng8MP2LtCKXtQpovhYl24CK3lPO8zqGdIODdQ==',key_name='tempest-keypair-549993098',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:03:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8c16cd713fa74a88b43e4edf01c273bd',ramdisk_id='',reservation_id='r-kexumavt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ae1f9e37-418c-462f-81d1-3599a6d89de9',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-882763067',owner_user_name='tempest-ServerActionsTestOtherA-882763067-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:04:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='29710db389c842df836944048225740f',uuid=40ae15fe-e324-4fc4-b6ee-df051fcbea8f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "address": "fa:16:3e:18:77:55", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72849734-6d", "ovs_interfaceid": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:04:13 np0005593232 nova_compute[250269]: 2026-01-23 10:04:13.353 250273 DEBUG nova.network.os_vif_util [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converting VIF {"id": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "address": "fa:16:3e:18:77:55", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72849734-6d", "ovs_interfaceid": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:04:13 np0005593232 nova_compute[250269]: 2026-01-23 10:04:13.354 250273 DEBUG nova.network.os_vif_util [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:77:55,bridge_name='br-int',has_traffic_filtering=True,id=72849734-6d62-43db-b9e3-9c5f39b22d9d,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72849734-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:04:13 np0005593232 nova_compute[250269]: 2026-01-23 10:04:13.354 250273 DEBUG os_vif [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:77:55,bridge_name='br-int',has_traffic_filtering=True,id=72849734-6d62-43db-b9e3-9c5f39b22d9d,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72849734-6d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:04:13 np0005593232 nova_compute[250269]: 2026-01-23 10:04:13.357 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:13 np0005593232 nova_compute[250269]: 2026-01-23 10:04:13.357 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72849734-6d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:04:13 np0005593232 nova_compute[250269]: 2026-01-23 10:04:13.360 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:04:13 np0005593232 nova_compute[250269]: 2026-01-23 10:04:13.478 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:13 np0005593232 nova_compute[250269]: 2026-01-23 10:04:13.482 250273 INFO os_vif [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:77:55,bridge_name='br-int',has_traffic_filtering=True,id=72849734-6d62-43db-b9e3-9c5f39b22d9d,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72849734-6d')#033[00m
Jan 23 05:04:13 np0005593232 nova_compute[250269]: 2026-01-23 10:04:13.505 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:14.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2195: 321 pgs: 321 active+clean; 293 MiB data, 968 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 200 op/s
Jan 23 05:04:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:15.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:15 np0005593232 podman[320503]: 2026-01-23 10:04:15.395081788 +0000 UTC m=+0.057963918 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 23 05:04:15 np0005593232 nova_compute[250269]: 2026-01-23 10:04:15.557 250273 INFO nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Deleting instance files /var/lib/nova/instances/40ae15fe-e324-4fc4-b6ee-df051fcbea8f_del#033[00m
Jan 23 05:04:15 np0005593232 nova_compute[250269]: 2026-01-23 10:04:15.558 250273 INFO nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Deletion of /var/lib/nova/instances/40ae15fe-e324-4fc4-b6ee-df051fcbea8f_del complete#033[00m
Jan 23 05:04:15 np0005593232 nova_compute[250269]: 2026-01-23 10:04:15.689 250273 INFO nova.virt.block_device [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Booting with volume bcbb40c5-2718-4ca8-8327-19725d9ddd5d at /dev/vdb#033[00m
Jan 23 05:04:15 np0005593232 nova_compute[250269]: 2026-01-23 10:04:15.833 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:15 np0005593232 nova_compute[250269]: 2026-01-23 10:04:15.923 250273 DEBUG os_brick.utils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 23 05:04:15 np0005593232 nova_compute[250269]: 2026-01-23 10:04:15.924 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:04:15 np0005593232 nova_compute[250269]: 2026-01-23 10:04:15.937 265031 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:04:15 np0005593232 nova_compute[250269]: 2026-01-23 10:04:15.937 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[62693659-acc2-42b2-8c77-f23bbc25e1a8]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:15 np0005593232 nova_compute[250269]: 2026-01-23 10:04:15.939 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:04:15 np0005593232 nova_compute[250269]: 2026-01-23 10:04:15.948 265031 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:04:15 np0005593232 nova_compute[250269]: 2026-01-23 10:04:15.948 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[1df6561d-04e7-4b6c-b399-bd681de22677]: (4, ('InitiatorName=iqn.1994-05.com.redhat:c6473626b456', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:15 np0005593232 nova_compute[250269]: 2026-01-23 10:04:15.950 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:04:15 np0005593232 nova_compute[250269]: 2026-01-23 10:04:15.958 265031 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:04:15 np0005593232 nova_compute[250269]: 2026-01-23 10:04:15.958 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[cd836b51-98f9-406f-a3d9-230fe44150cd]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:15 np0005593232 nova_compute[250269]: 2026-01-23 10:04:15.960 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[0d1f62af-05d5-4f9d-b8b3-17fd032ad863]: (4, 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:15 np0005593232 nova_compute[250269]: 2026-01-23 10:04:15.960 250273 DEBUG oslo_concurrency.processutils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:04:15 np0005593232 nova_compute[250269]: 2026-01-23 10:04:15.988 250273 DEBUG oslo_concurrency.processutils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:04:15 np0005593232 nova_compute[250269]: 2026-01-23 10:04:15.991 250273 DEBUG os_brick.initiator.connectors.lightos [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 23 05:04:15 np0005593232 nova_compute[250269]: 2026-01-23 10:04:15.992 250273 DEBUG os_brick.initiator.connectors.lightos [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 23 05:04:15 np0005593232 nova_compute[250269]: 2026-01-23 10:04:15.992 250273 DEBUG os_brick.initiator.connectors.lightos [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 23 05:04:15 np0005593232 nova_compute[250269]: 2026-01-23 10:04:15.993 250273 DEBUG os_brick.utils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] <== get_connector_properties: return (68ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:c6473626b456', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 23 05:04:15 np0005593232 nova_compute[250269]: 2026-01-23 10:04:15.993 250273 DEBUG nova.virt.block_device [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Updating existing volume attachment record: 71bbc3e7-40bf-4373-9460-ce9cc1fb072a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 23 05:04:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:16.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:04:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2196: 321 pgs: 321 active+clean; 255 MiB data, 955 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 217 op/s
Jan 23 05:04:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:17.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:17 np0005593232 nova_compute[250269]: 2026-01-23 10:04:17.351 250273 DEBUG nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:04:17 np0005593232 nova_compute[250269]: 2026-01-23 10:04:17.352 250273 INFO nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Creating image(s)#033[00m
Jan 23 05:04:17 np0005593232 nova_compute[250269]: 2026-01-23 10:04:17.379 250273 DEBUG nova.storage.rbd_utils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:04:17 np0005593232 nova_compute[250269]: 2026-01-23 10:04:17.404 250273 DEBUG nova.storage.rbd_utils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:04:17 np0005593232 nova_compute[250269]: 2026-01-23 10:04:17.427 250273 DEBUG nova.storage.rbd_utils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:04:17 np0005593232 nova_compute[250269]: 2026-01-23 10:04:17.430 250273 DEBUG oslo_concurrency.processutils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8edc4c18d7d1964a485fb1b305c460bdc5a45b20 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:04:17 np0005593232 nova_compute[250269]: 2026-01-23 10:04:17.496 250273 DEBUG oslo_concurrency.processutils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8edc4c18d7d1964a485fb1b305c460bdc5a45b20 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:04:17 np0005593232 nova_compute[250269]: 2026-01-23 10:04:17.497 250273 DEBUG oslo_concurrency.lockutils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "8edc4c18d7d1964a485fb1b305c460bdc5a45b20" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:04:17 np0005593232 nova_compute[250269]: 2026-01-23 10:04:17.498 250273 DEBUG oslo_concurrency.lockutils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "8edc4c18d7d1964a485fb1b305c460bdc5a45b20" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:04:17 np0005593232 nova_compute[250269]: 2026-01-23 10:04:17.498 250273 DEBUG oslo_concurrency.lockutils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "8edc4c18d7d1964a485fb1b305c460bdc5a45b20" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:04:17 np0005593232 nova_compute[250269]: 2026-01-23 10:04:17.524 250273 DEBUG nova.storage.rbd_utils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:04:17 np0005593232 nova_compute[250269]: 2026-01-23 10:04:17.528 250273 DEBUG oslo_concurrency.processutils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/8edc4c18d7d1964a485fb1b305c460bdc5a45b20 40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:04:17 np0005593232 nova_compute[250269]: 2026-01-23 10:04:17.804 250273 DEBUG oslo_concurrency.processutils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/8edc4c18d7d1964a485fb1b305c460bdc5a45b20 40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:04:17 np0005593232 nova_compute[250269]: 2026-01-23 10:04:17.879 250273 DEBUG nova.storage.rbd_utils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] resizing rbd image 40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 05:04:17 np0005593232 nova_compute[250269]: 2026-01-23 10:04:17.981 250273 DEBUG nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 05:04:17 np0005593232 nova_compute[250269]: 2026-01-23 10:04:17.982 250273 DEBUG nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Ensure instance console log exists: /var/lib/nova/instances/40ae15fe-e324-4fc4-b6ee-df051fcbea8f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:04:17 np0005593232 nova_compute[250269]: 2026-01-23 10:04:17.982 250273 DEBUG oslo_concurrency.lockutils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:04:17 np0005593232 nova_compute[250269]: 2026-01-23 10:04:17.982 250273 DEBUG oslo_concurrency.lockutils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:04:17 np0005593232 nova_compute[250269]: 2026-01-23 10:04:17.983 250273 DEBUG oslo_concurrency.lockutils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:04:17 np0005593232 nova_compute[250269]: 2026-01-23 10:04:17.985 250273 DEBUG nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Start _get_guest_xml network_info=[{"id": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "address": "fa:16:3e:18:77:55", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72849734-6d", "ovs_interfaceid": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:31Z,direct_url=<?>,disk_format='qcow2',id=ae1f9e37-418c-462f-81d1-3599a6d89de9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vdb', 'disk_bus': 'virtio', 'delete_on_termination': False, 'attachment_id': '71bbc3e7-40bf-4373-9460-ce9cc1fb072a', 'device_type': 'disk', 'boot_index': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-bcbb40c5-2718-4ca8-8327-19725d9ddd5d', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'bcbb40c5-2718-4ca8-8327-19725d9ddd5d', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '40ae15fe-e324-4fc4-b6ee-df051fcbea8f', 'attached_at': '', 'detached_at': '', 'volume_id': 'bcbb40c5-2718-4ca8-8327-19725d9ddd5d', 'serial': 'bcbb40c5-2718-4ca8-8327-19725d9ddd5d'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:04:17 np0005593232 nova_compute[250269]: 2026-01-23 10:04:17.989 250273 WARNING nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Jan 23 05:04:17 np0005593232 nova_compute[250269]: 2026-01-23 10:04:17.996 250273 DEBUG nova.virt.libvirt.host [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:04:17 np0005593232 nova_compute[250269]: 2026-01-23 10:04:17.996 250273 DEBUG nova.virt.libvirt.host [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:04:18 np0005593232 nova_compute[250269]: 2026-01-23 10:04:18.000 250273 DEBUG nova.virt.libvirt.host [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:04:18 np0005593232 nova_compute[250269]: 2026-01-23 10:04:18.001 250273 DEBUG nova.virt.libvirt.host [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:04:18 np0005593232 nova_compute[250269]: 2026-01-23 10:04:18.002 250273 DEBUG nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:04:18 np0005593232 nova_compute[250269]: 2026-01-23 10:04:18.003 250273 DEBUG nova.virt.hardware [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:31Z,direct_url=<?>,disk_format='qcow2',id=ae1f9e37-418c-462f-81d1-3599a6d89de9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:04:18 np0005593232 nova_compute[250269]: 2026-01-23 10:04:18.003 250273 DEBUG nova.virt.hardware [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:04:18 np0005593232 nova_compute[250269]: 2026-01-23 10:04:18.003 250273 DEBUG nova.virt.hardware [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:04:18 np0005593232 nova_compute[250269]: 2026-01-23 10:04:18.004 250273 DEBUG nova.virt.hardware [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:04:18 np0005593232 nova_compute[250269]: 2026-01-23 10:04:18.004 250273 DEBUG nova.virt.hardware [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:04:18 np0005593232 nova_compute[250269]: 2026-01-23 10:04:18.004 250273 DEBUG nova.virt.hardware [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:04:18 np0005593232 nova_compute[250269]: 2026-01-23 10:04:18.004 250273 DEBUG nova.virt.hardware [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:04:18 np0005593232 nova_compute[250269]: 2026-01-23 10:04:18.004 250273 DEBUG nova.virt.hardware [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:04:18 np0005593232 nova_compute[250269]: 2026-01-23 10:04:18.005 250273 DEBUG nova.virt.hardware [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:04:18 np0005593232 nova_compute[250269]: 2026-01-23 10:04:18.005 250273 DEBUG nova.virt.hardware [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:04:18 np0005593232 nova_compute[250269]: 2026-01-23 10:04:18.005 250273 DEBUG nova.virt.hardware [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:04:18 np0005593232 nova_compute[250269]: 2026-01-23 10:04:18.005 250273 DEBUG nova.objects.instance [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lazy-loading 'vcpu_model' on Instance uuid 40ae15fe-e324-4fc4-b6ee-df051fcbea8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:04:18 np0005593232 nova_compute[250269]: 2026-01-23 10:04:18.070 250273 DEBUG oslo_concurrency.processutils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:04:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:04:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:18.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:04:18 np0005593232 nova_compute[250269]: 2026-01-23 10:04:18.364 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:04:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2598089355' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:04:18 np0005593232 nova_compute[250269]: 2026-01-23 10:04:18.521 250273 DEBUG oslo_concurrency.processutils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:04:18 np0005593232 nova_compute[250269]: 2026-01-23 10:04:18.558 250273 DEBUG nova.storage.rbd_utils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:04:18 np0005593232 nova_compute[250269]: 2026-01-23 10:04:18.564 250273 DEBUG oslo_concurrency.processutils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:04:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:04:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1314274719' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:04:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2197: 321 pgs: 321 active+clean; 167 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 220 op/s
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.037 250273 DEBUG oslo_concurrency.processutils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.072 250273 DEBUG nova.virt.libvirt.vif [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-23T10:03:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1019564397',display_name='tempest-ServerActionsTestOtherA-server-1776476949',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1019564397',id=107,image_ref='ae1f9e37-418c-462f-81d1-3599a6d89de9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHGL26NAvGQ3xkW0HCIL1RZNLJx0MIysq/lI68J78z++HdYLLG/Y8vj8YSH+2c/ADo8uw7KAcJiRiM8sc9yPRucXw5x0Nng8MP2LtCKXtQpovhYl24CK3lPO8zqGdIODdQ==',key_name='tempest-keypair-549993098',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:03:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8c16cd713fa74a88b43e4edf01c273bd',ramdisk_id='',reservation_id='r-kexumavt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='ae1f9e37-418c-462f-81d1-3599a6d89de9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-882763067',owner_user_name='tempest-ServerActionsTestOtherA-882763067-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:04:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='29710db389c842df836944048225740f',uuid=40ae15fe-e324-4fc4-b6ee-df051fcbea8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "address": "fa:16:3e:18:77:55", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72849734-6d", "ovs_interfaceid": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.073 250273 DEBUG nova.network.os_vif_util [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converting VIF {"id": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "address": "fa:16:3e:18:77:55", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72849734-6d", "ovs_interfaceid": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.074 250273 DEBUG nova.network.os_vif_util [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:77:55,bridge_name='br-int',has_traffic_filtering=True,id=72849734-6d62-43db-b9e3-9c5f39b22d9d,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72849734-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.077 250273 DEBUG nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:04:19 np0005593232 nova_compute[250269]:  <uuid>40ae15fe-e324-4fc4-b6ee-df051fcbea8f</uuid>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:  <name>instance-0000006b</name>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerActionsTestOtherA-server-1776476949</nova:name>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:04:17</nova:creationTime>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:04:19 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:        <nova:user uuid="29710db389c842df836944048225740f">tempest-ServerActionsTestOtherA-882763067-project-member</nova:user>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:        <nova:project uuid="8c16cd713fa74a88b43e4edf01c273bd">tempest-ServerActionsTestOtherA-882763067</nova:project>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="ae1f9e37-418c-462f-81d1-3599a6d89de9"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:        <nova:port uuid="72849734-6d62-43db-b9e3-9c5f39b22d9d">
Jan 23 05:04:19 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <entry name="serial">40ae15fe-e324-4fc4-b6ee-df051fcbea8f</entry>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <entry name="uuid">40ae15fe-e324-4fc4-b6ee-df051fcbea8f</entry>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk">
Jan 23 05:04:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:04:19 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk.config">
Jan 23 05:04:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:04:19 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="volumes/volume-bcbb40c5-2718-4ca8-8327-19725d9ddd5d">
Jan 23 05:04:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:04:19 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <target dev="vdb" bus="virtio"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <serial>bcbb40c5-2718-4ca8-8327-19725d9ddd5d</serial>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:18:77:55"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <target dev="tap72849734-6d"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/40ae15fe-e324-4fc4-b6ee-df051fcbea8f/console.log" append="off"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:04:19 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:04:19 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:04:19 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:04:19 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.078 250273 DEBUG nova.virt.libvirt.vif [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-23T10:03:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1019564397',display_name='tempest-ServerActionsTestOtherA-server-1776476949',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1019564397',id=107,image_ref='ae1f9e37-418c-462f-81d1-3599a6d89de9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHGL26NAvGQ3xkW0HCIL1RZNLJx0MIysq/lI68J78z++HdYLLG/Y8vj8YSH+2c/ADo8uw7KAcJiRiM8sc9yPRucXw5x0Nng8MP2LtCKXtQpovhYl24CK3lPO8zqGdIODdQ==',key_name='tempest-keypair-549993098',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:03:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8c16cd713fa74a88b43e4edf01c273bd',ramdisk_id='',reservation_id='r-kexumavt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='ae1f9e37-418c-462f-81d1-3599a6d89de9',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-882763067',owner_user_name='tempest-ServerActionsTestOtherA-882763067-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:04:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='29710db389c842df836944048225740f',uuid=40ae15fe-e324-4fc4-b6ee-df051fcbea8f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "address": "fa:16:3e:18:77:55", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72849734-6d", "ovs_interfaceid": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.078 250273 DEBUG nova.network.os_vif_util [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converting VIF {"id": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "address": "fa:16:3e:18:77:55", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72849734-6d", "ovs_interfaceid": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.079 250273 DEBUG nova.network.os_vif_util [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:77:55,bridge_name='br-int',has_traffic_filtering=True,id=72849734-6d62-43db-b9e3-9c5f39b22d9d,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72849734-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.080 250273 DEBUG os_vif [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:77:55,bridge_name='br-int',has_traffic_filtering=True,id=72849734-6d62-43db-b9e3-9c5f39b22d9d,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72849734-6d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.081 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.082 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.082 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.084 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.085 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72849734-6d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.085 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap72849734-6d, col_values=(('external_ids', {'iface-id': '72849734-6d62-43db-b9e3-9c5f39b22d9d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:18:77:55', 'vm-uuid': '40ae15fe-e324-4fc4-b6ee-df051fcbea8f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.087 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:19 np0005593232 NetworkManager[49057]: <info>  [1769162659.0879] manager: (tap72849734-6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/180)
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.090 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.092 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.094 250273 INFO os_vif [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:77:55,bridge_name='br-int',has_traffic_filtering=True,id=72849734-6d62-43db-b9e3-9c5f39b22d9d,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72849734-6d')#033[00m
Jan 23 05:04:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:19.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.146 250273 DEBUG nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.147 250273 DEBUG nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.147 250273 DEBUG nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.147 250273 DEBUG nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] No VIF found with MAC fa:16:3e:18:77:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.148 250273 INFO nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Using config drive#033[00m
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.179 250273 DEBUG nova.storage.rbd_utils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.201 250273 DEBUG nova.objects.instance [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lazy-loading 'ec2_ids' on Instance uuid 40ae15fe-e324-4fc4-b6ee-df051fcbea8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:04:19 np0005593232 nova_compute[250269]: 2026-01-23 10:04:19.233 250273 DEBUG nova.objects.instance [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lazy-loading 'keypairs' on Instance uuid 40ae15fe-e324-4fc4-b6ee-df051fcbea8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:04:20 np0005593232 nova_compute[250269]: 2026-01-23 10:04:20.194 250273 INFO nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Creating config drive at /var/lib/nova/instances/40ae15fe-e324-4fc4-b6ee-df051fcbea8f/disk.config#033[00m
Jan 23 05:04:20 np0005593232 nova_compute[250269]: 2026-01-23 10:04:20.202 250273 DEBUG oslo_concurrency.processutils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/40ae15fe-e324-4fc4-b6ee-df051fcbea8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvpd70h_h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:04:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:20.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:20 np0005593232 nova_compute[250269]: 2026-01-23 10:04:20.334 250273 DEBUG oslo_concurrency.processutils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/40ae15fe-e324-4fc4-b6ee-df051fcbea8f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvpd70h_h" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:04:20 np0005593232 nova_compute[250269]: 2026-01-23 10:04:20.362 250273 DEBUG nova.storage.rbd_utils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:04:20 np0005593232 nova_compute[250269]: 2026-01-23 10:04:20.366 250273 DEBUG oslo_concurrency.processutils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/40ae15fe-e324-4fc4-b6ee-df051fcbea8f/disk.config 40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:04:20 np0005593232 nova_compute[250269]: 2026-01-23 10:04:20.540 250273 DEBUG oslo_concurrency.processutils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/40ae15fe-e324-4fc4-b6ee-df051fcbea8f/disk.config 40ae15fe-e324-4fc4-b6ee-df051fcbea8f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:04:20 np0005593232 nova_compute[250269]: 2026-01-23 10:04:20.543 250273 INFO nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Deleting local config drive /var/lib/nova/instances/40ae15fe-e324-4fc4-b6ee-df051fcbea8f/disk.config because it was imported into RBD.#033[00m
Jan 23 05:04:20 np0005593232 kernel: tap72849734-6d: entered promiscuous mode
Jan 23 05:04:20 np0005593232 NetworkManager[49057]: <info>  [1769162660.6015] manager: (tap72849734-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/181)
Jan 23 05:04:20 np0005593232 ovn_controller[151001]: 2026-01-23T10:04:20Z|00365|binding|INFO|Claiming lport 72849734-6d62-43db-b9e3-9c5f39b22d9d for this chassis.
Jan 23 05:04:20 np0005593232 ovn_controller[151001]: 2026-01-23T10:04:20Z|00366|binding|INFO|72849734-6d62-43db-b9e3-9c5f39b22d9d: Claiming fa:16:3e:18:77:55 10.100.0.11
Jan 23 05:04:20 np0005593232 nova_compute[250269]: 2026-01-23 10:04:20.601 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:20 np0005593232 nova_compute[250269]: 2026-01-23 10:04:20.606 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:20 np0005593232 nova_compute[250269]: 2026-01-23 10:04:20.615 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:20 np0005593232 NetworkManager[49057]: <info>  [1769162660.6208] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/182)
Jan 23 05:04:20 np0005593232 nova_compute[250269]: 2026-01-23 10:04:20.620 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:20 np0005593232 NetworkManager[49057]: <info>  [1769162660.6217] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/183)
Jan 23 05:04:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:20.626 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:77:55 10.100.0.11'], port_security=['fa:16:3e:18:77:55 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '40ae15fe-e324-4fc4-b6ee-df051fcbea8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c16cd713fa74a88b43e4edf01c273bd', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a3146bb5-8649-4287-af90-398646a0838c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.237'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c3aed5f-30b8-4c57-808e-87764ab67fc8, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=72849734-6d62-43db-b9e3-9c5f39b22d9d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:04:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:20.627 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 72849734-6d62-43db-b9e3-9c5f39b22d9d in datapath 8575e824-4be0-4206-873e-2f9a3d1ded0b bound to our chassis#033[00m
Jan 23 05:04:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:20.628 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8575e824-4be0-4206-873e-2f9a3d1ded0b#033[00m
Jan 23 05:04:20 np0005593232 systemd-udevd[320834]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:04:20 np0005593232 systemd-machined[215836]: New machine qemu-44-instance-0000006b.
Jan 23 05:04:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:20.642 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[617edac8-47f5-4ae8-90bb-2204eb9a2ff2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:20.643 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8575e824-41 in ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:04:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:20.644 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8575e824-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:04:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:20.644 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a6873e4c-b7ff-460c-a6e2-f911cda78bb4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:20 np0005593232 NetworkManager[49057]: <info>  [1769162660.6455] device (tap72849734-6d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:04:20 np0005593232 NetworkManager[49057]: <info>  [1769162660.6461] device (tap72849734-6d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:04:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:20.645 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[34a7b4b3-82e5-47bd-b68b-95b93dc145bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:20.660 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[ae1fe709-1378-4e4e-8126-cf2e5ec83bd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:20 np0005593232 systemd[1]: Started Virtual Machine qemu-44-instance-0000006b.
Jan 23 05:04:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:20.683 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[00d91361-2539-450e-a650-b583fa5392e8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:20.714 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[e54f7596-1021-47c4-97a6-51028ad75f1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:20.722 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d958e733-efe6-487a-b22d-9bb84414464b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:20 np0005593232 systemd-udevd[320838]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:04:20 np0005593232 NetworkManager[49057]: <info>  [1769162660.7236] manager: (tap8575e824-40): new Veth device (/org/freedesktop/NetworkManager/Devices/184)
Jan 23 05:04:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:20.767 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[27af5fd4-7cf7-4006-bebb-7de16e16d435]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:20.771 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[92ae6efd-650a-45e5-b409-c2af179d9f10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:20 np0005593232 NetworkManager[49057]: <info>  [1769162660.7929] device (tap8575e824-40): carrier: link connected
Jan 23 05:04:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:20.798 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[0e27ca00-91eb-4836-8599-37972e92e6f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:20.822 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3da162b3-0151-4ed4-b000-662ac32f0c08]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8575e824-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:16:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 113], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657393, 'reachable_time': 26361, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320868, 'error': None, 'target': 'ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:20.842 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d037041d-e7ad-4fa1-a3fd-476b9c90ff88]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:16ca'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 657393, 'tstamp': 657393}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320869, 'error': None, 'target': 'ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:20 np0005593232 ovn_controller[151001]: 2026-01-23T10:04:20Z|00367|binding|INFO|Setting lport 72849734-6d62-43db-b9e3-9c5f39b22d9d up in Southbound
Jan 23 05:04:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:20.863 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0103e021-5e5b-419e-920d-05459f073ecf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8575e824-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:16:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 113], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657393, 'reachable_time': 26361, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 320870, 'error': None, 'target': 'ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:20.892 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5cb8bcad-4d68-4e43-914f-9eed451777d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:20.950 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6683856b-fb6a-48df-a89a-641eb527f387]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:20.951 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8575e824-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:04:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:20.951 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:04:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:20.952 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8575e824-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:04:20 np0005593232 nova_compute[250269]: 2026-01-23 10:04:20.960 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:20 np0005593232 NetworkManager[49057]: <info>  [1769162660.9615] manager: (tap8575e824-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/185)
Jan 23 05:04:20 np0005593232 kernel: tap8575e824-40: entered promiscuous mode
Jan 23 05:04:20 np0005593232 nova_compute[250269]: 2026-01-23 10:04:20.964 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:20.969 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8575e824-40, col_values=(('external_ids', {'iface-id': 'f7023d86-3158-4cc4-b690-f57bb76e92b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:04:20 np0005593232 ovn_controller[151001]: 2026-01-23T10:04:20Z|00368|binding|INFO|Setting lport 72849734-6d62-43db-b9e3-9c5f39b22d9d ovn-installed in OVS
Jan 23 05:04:20 np0005593232 nova_compute[250269]: 2026-01-23 10:04:20.977 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:20 np0005593232 ovn_controller[151001]: 2026-01-23T10:04:20Z|00369|binding|INFO|Releasing lport f7023d86-3158-4cc4-b690-f57bb76e92b5 from this chassis (sb_readonly=0)
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.002 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:21.003 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8575e824-4be0-4206-873e-2f9a3d1ded0b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8575e824-4be0-4206-873e-2f9a3d1ded0b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.003 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:21.005 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[17d7e23c-04ba-42d3-bf9d-f6d054feb2d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:21.005 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-8575e824-4be0-4206-873e-2f9a3d1ded0b
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/8575e824-4be0-4206-873e-2f9a3d1ded0b.pid.haproxy
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 8575e824-4be0-4206-873e-2f9a3d1ded0b
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:04:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:21.006 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'env', 'PROCESS_TAG=haproxy-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8575e824-4be0-4206-873e-2f9a3d1ded0b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.016 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2198: 321 pgs: 321 active+clean; 167 MiB data, 900 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 32 KiB/s wr, 194 op/s
Jan 23 05:04:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:21.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.183 250273 DEBUG nova.compute.manager [req-32a15cf7-2d78-4d3d-8519-0bf9418724b7 req-c4662ada-95c5-48ce-b29e-ee886540aed6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Received event network-vif-plugged-72849734-6d62-43db-b9e3-9c5f39b22d9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.184 250273 DEBUG oslo_concurrency.lockutils [req-32a15cf7-2d78-4d3d-8519-0bf9418724b7 req-c4662ada-95c5-48ce-b29e-ee886540aed6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.184 250273 DEBUG oslo_concurrency.lockutils [req-32a15cf7-2d78-4d3d-8519-0bf9418724b7 req-c4662ada-95c5-48ce-b29e-ee886540aed6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.185 250273 DEBUG oslo_concurrency.lockutils [req-32a15cf7-2d78-4d3d-8519-0bf9418724b7 req-c4662ada-95c5-48ce-b29e-ee886540aed6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.185 250273 DEBUG nova.compute.manager [req-32a15cf7-2d78-4d3d-8519-0bf9418724b7 req-c4662ada-95c5-48ce-b29e-ee886540aed6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] No waiting events found dispatching network-vif-plugged-72849734-6d62-43db-b9e3-9c5f39b22d9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.185 250273 WARNING nova.compute.manager [req-32a15cf7-2d78-4d3d-8519-0bf9418724b7 req-c4662ada-95c5-48ce-b29e-ee886540aed6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Received unexpected event network-vif-plugged-72849734-6d62-43db-b9e3-9c5f39b22d9d for instance with vm_state active and task_state rebuild_spawning.#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.256 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Removed pending event for 40ae15fe-e324-4fc4-b6ee-df051fcbea8f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.257 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162661.2560222, 40ae15fe-e324-4fc4-b6ee-df051fcbea8f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.258 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.261 250273 DEBUG nova.compute.manager [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.263 250273 DEBUG nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.268 250273 INFO nova.virt.libvirt.driver [-] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Instance spawned successfully.#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.269 250273 DEBUG nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.292 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.302 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.307 250273 DEBUG nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.308 250273 DEBUG nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.309 250273 DEBUG nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.310 250273 DEBUG nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.311 250273 DEBUG nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.312 250273 DEBUG nova.virt.libvirt.driver [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.357 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.358 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162661.2573466, 40ae15fe-e324-4fc4-b6ee-df051fcbea8f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.358 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] VM Started (Lifecycle Event)#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.396 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.401 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:04:21 np0005593232 podman[320960]: 2026-01-23 10:04:21.403595522 +0000 UTC m=+0.062272210 container create aae206c64faa73d14eed52472a5ef93a14d2559908747b1acfce5f3a31e2617e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.416 250273 DEBUG nova.compute.manager [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.429 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 23 05:04:21 np0005593232 systemd[1]: Started libpod-conmon-aae206c64faa73d14eed52472a5ef93a14d2559908747b1acfce5f3a31e2617e.scope.
Jan 23 05:04:21 np0005593232 podman[320960]: 2026-01-23 10:04:21.371087868 +0000 UTC m=+0.029764576 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:04:21 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:04:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b22531e1f80b7315a641f54985aa7202432cd248b2e4165b5f603c846ed93e8c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.488 250273 DEBUG oslo_concurrency.lockutils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.488 250273 DEBUG oslo_concurrency.lockutils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.489 250273 DEBUG nova.objects.instance [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 23 05:04:21 np0005593232 podman[320960]: 2026-01-23 10:04:21.491667955 +0000 UTC m=+0.150344643 container init aae206c64faa73d14eed52472a5ef93a14d2559908747b1acfce5f3a31e2617e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 05:04:21 np0005593232 podman[320960]: 2026-01-23 10:04:21.497043117 +0000 UTC m=+0.155719805 container start aae206c64faa73d14eed52472a5ef93a14d2559908747b1acfce5f3a31e2617e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:04:21 np0005593232 neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b[320975]: [NOTICE]   (320979) : New worker (320981) forked
Jan 23 05:04:21 np0005593232 neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b[320975]: [NOTICE]   (320979) : Loading success.
Jan 23 05:04:21 np0005593232 nova_compute[250269]: 2026-01-23 10:04:21.557 250273 DEBUG oslo_concurrency.lockutils [None req-c06a066c-b90e-4979-840b-eef6b083443f 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:04:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:04:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:22.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2199: 321 pgs: 321 active+clean; 247 MiB data, 946 MiB used, 20 GiB / 21 GiB avail; 4.9 MiB/s rd, 3.9 MiB/s wr, 323 op/s
Jan 23 05:04:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:23.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:23 np0005593232 nova_compute[250269]: 2026-01-23 10:04:23.332 250273 DEBUG nova.compute.manager [req-d0476879-3112-4d47-ae20-15c1fad4c3ec req-9c4171d1-4dbd-468a-bae7-9b7db9410beb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Received event network-vif-plugged-72849734-6d62-43db-b9e3-9c5f39b22d9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:04:23 np0005593232 nova_compute[250269]: 2026-01-23 10:04:23.332 250273 DEBUG oslo_concurrency.lockutils [req-d0476879-3112-4d47-ae20-15c1fad4c3ec req-9c4171d1-4dbd-468a-bae7-9b7db9410beb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:04:23 np0005593232 nova_compute[250269]: 2026-01-23 10:04:23.333 250273 DEBUG oslo_concurrency.lockutils [req-d0476879-3112-4d47-ae20-15c1fad4c3ec req-9c4171d1-4dbd-468a-bae7-9b7db9410beb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:04:23 np0005593232 nova_compute[250269]: 2026-01-23 10:04:23.333 250273 DEBUG oslo_concurrency.lockutils [req-d0476879-3112-4d47-ae20-15c1fad4c3ec req-9c4171d1-4dbd-468a-bae7-9b7db9410beb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:04:23 np0005593232 nova_compute[250269]: 2026-01-23 10:04:23.333 250273 DEBUG nova.compute.manager [req-d0476879-3112-4d47-ae20-15c1fad4c3ec req-9c4171d1-4dbd-468a-bae7-9b7db9410beb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] No waiting events found dispatching network-vif-plugged-72849734-6d62-43db-b9e3-9c5f39b22d9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:04:23 np0005593232 nova_compute[250269]: 2026-01-23 10:04:23.334 250273 WARNING nova.compute.manager [req-d0476879-3112-4d47-ae20-15c1fad4c3ec req-9c4171d1-4dbd-468a-bae7-9b7db9410beb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Received unexpected event network-vif-plugged-72849734-6d62-43db-b9e3-9c5f39b22d9d for instance with vm_state active and task_state None.#033[00m
Jan 23 05:04:23 np0005593232 ovn_controller[151001]: 2026-01-23T10:04:23Z|00370|binding|INFO|Releasing lport f7023d86-3158-4cc4-b690-f57bb76e92b5 from this chassis (sb_readonly=0)
Jan 23 05:04:23 np0005593232 nova_compute[250269]: 2026-01-23 10:04:23.405 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:23 np0005593232 nova_compute[250269]: 2026-01-23 10:04:23.967 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:24 np0005593232 nova_compute[250269]: 2026-01-23 10:04:24.087 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:24.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:24 np0005593232 nova_compute[250269]: 2026-01-23 10:04:24.241 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:04:24 np0005593232 nova_compute[250269]: 2026-01-23 10:04:24.270 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Triggering sync for uuid 40ae15fe-e324-4fc4-b6ee-df051fcbea8f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 23 05:04:24 np0005593232 nova_compute[250269]: 2026-01-23 10:04:24.271 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:04:24 np0005593232 nova_compute[250269]: 2026-01-23 10:04:24.271 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:04:24 np0005593232 nova_compute[250269]: 2026-01-23 10:04:24.316 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:04:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2200: 321 pgs: 321 active+clean; 247 MiB data, 946 MiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 181 op/s
Jan 23 05:04:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:25.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:25 np0005593232 nova_compute[250269]: 2026-01-23 10:04:25.119 250273 DEBUG oslo_concurrency.lockutils [None req-94f69dc8-4545-4ea7-9ef2-3467daa848d6 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:04:25 np0005593232 nova_compute[250269]: 2026-01-23 10:04:25.121 250273 DEBUG oslo_concurrency.lockutils [None req-94f69dc8-4545-4ea7-9ef2-3467daa848d6 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:04:25 np0005593232 nova_compute[250269]: 2026-01-23 10:04:25.138 250273 INFO nova.compute.manager [None req-94f69dc8-4545-4ea7-9ef2-3467daa848d6 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Detaching volume bcbb40c5-2718-4ca8-8327-19725d9ddd5d#033[00m
Jan 23 05:04:25 np0005593232 nova_compute[250269]: 2026-01-23 10:04:25.386 250273 INFO nova.virt.block_device [None req-94f69dc8-4545-4ea7-9ef2-3467daa848d6 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Attempting to driver detach volume bcbb40c5-2718-4ca8-8327-19725d9ddd5d from mountpoint /dev/vdb#033[00m
Jan 23 05:04:25 np0005593232 nova_compute[250269]: 2026-01-23 10:04:25.396 250273 DEBUG nova.virt.libvirt.driver [None req-94f69dc8-4545-4ea7-9ef2-3467daa848d6 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Attempting to detach device vdb from instance 40ae15fe-e324-4fc4-b6ee-df051fcbea8f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 23 05:04:25 np0005593232 nova_compute[250269]: 2026-01-23 10:04:25.397 250273 DEBUG nova.virt.libvirt.guest [None req-94f69dc8-4545-4ea7-9ef2-3467daa848d6 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] detach device xml: <disk type="network" device="disk">
Jan 23 05:04:25 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:04:25 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-bcbb40c5-2718-4ca8-8327-19725d9ddd5d">
Jan 23 05:04:25 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:04:25 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:04:25 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:04:25 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:04:25 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 05:04:25 np0005593232 nova_compute[250269]:  <serial>bcbb40c5-2718-4ca8-8327-19725d9ddd5d</serial>
Jan 23 05:04:25 np0005593232 nova_compute[250269]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 23 05:04:25 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:04:25 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 05:04:25 np0005593232 nova_compute[250269]: 2026-01-23 10:04:25.485 250273 INFO nova.virt.libvirt.driver [None req-94f69dc8-4545-4ea7-9ef2-3467daa848d6 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Successfully detached device vdb from instance 40ae15fe-e324-4fc4-b6ee-df051fcbea8f from the persistent domain config.#033[00m
Jan 23 05:04:25 np0005593232 nova_compute[250269]: 2026-01-23 10:04:25.486 250273 DEBUG nova.virt.libvirt.driver [None req-94f69dc8-4545-4ea7-9ef2-3467daa848d6 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 40ae15fe-e324-4fc4-b6ee-df051fcbea8f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 23 05:04:25 np0005593232 nova_compute[250269]: 2026-01-23 10:04:25.487 250273 DEBUG nova.virt.libvirt.guest [None req-94f69dc8-4545-4ea7-9ef2-3467daa848d6 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] detach device xml: <disk type="network" device="disk">
Jan 23 05:04:25 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:04:25 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-bcbb40c5-2718-4ca8-8327-19725d9ddd5d">
Jan 23 05:04:25 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:04:25 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:04:25 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:04:25 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:04:25 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 05:04:25 np0005593232 nova_compute[250269]:  <serial>bcbb40c5-2718-4ca8-8327-19725d9ddd5d</serial>
Jan 23 05:04:25 np0005593232 nova_compute[250269]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 23 05:04:25 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:04:25 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 05:04:25 np0005593232 nova_compute[250269]: 2026-01-23 10:04:25.967 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:26.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:04:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2201: 321 pgs: 321 active+clean; 247 MiB data, 946 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 212 op/s
Jan 23 05:04:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:27.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:27 np0005593232 nova_compute[250269]: 2026-01-23 10:04:27.322 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:04:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:28.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:28 np0005593232 nova_compute[250269]: 2026-01-23 10:04:28.462 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:28 np0005593232 nova_compute[250269]: 2026-01-23 10:04:28.511 250273 DEBUG nova.virt.libvirt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Received event <DeviceRemovedEvent: 1769162668.510797, 40ae15fe-e324-4fc4-b6ee-df051fcbea8f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 23 05:04:28 np0005593232 nova_compute[250269]: 2026-01-23 10:04:28.513 250273 DEBUG nova.virt.libvirt.driver [None req-94f69dc8-4545-4ea7-9ef2-3467daa848d6 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 40ae15fe-e324-4fc4-b6ee-df051fcbea8f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 23 05:04:28 np0005593232 nova_compute[250269]: 2026-01-23 10:04:28.515 250273 INFO nova.virt.libvirt.driver [None req-94f69dc8-4545-4ea7-9ef2-3467daa848d6 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Successfully detached device vdb from instance 40ae15fe-e324-4fc4-b6ee-df051fcbea8f from the live domain config.#033[00m
Jan 23 05:04:28 np0005593232 nova_compute[250269]: 2026-01-23 10:04:28.982 250273 DEBUG nova.objects.instance [None req-94f69dc8-4545-4ea7-9ef2-3467daa848d6 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lazy-loading 'flavor' on Instance uuid 40ae15fe-e324-4fc4-b6ee-df051fcbea8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:04:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2202: 321 pgs: 321 active+clean; 247 MiB data, 946 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 205 op/s
Jan 23 05:04:29 np0005593232 nova_compute[250269]: 2026-01-23 10:04:29.038 250273 DEBUG oslo_concurrency.lockutils [None req-94f69dc8-4545-4ea7-9ef2-3467daa848d6 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 3.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:04:29 np0005593232 nova_compute[250269]: 2026-01-23 10:04:29.091 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:29.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:29 np0005593232 nova_compute[250269]: 2026-01-23 10:04:29.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:04:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:04:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:30.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:04:30 np0005593232 nova_compute[250269]: 2026-01-23 10:04:30.715 250273 DEBUG oslo_concurrency.lockutils [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:04:30 np0005593232 nova_compute[250269]: 2026-01-23 10:04:30.716 250273 DEBUG oslo_concurrency.lockutils [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:04:30 np0005593232 nova_compute[250269]: 2026-01-23 10:04:30.717 250273 DEBUG oslo_concurrency.lockutils [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:04:30 np0005593232 nova_compute[250269]: 2026-01-23 10:04:30.717 250273 DEBUG oslo_concurrency.lockutils [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:04:30 np0005593232 nova_compute[250269]: 2026-01-23 10:04:30.717 250273 DEBUG oslo_concurrency.lockutils [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:04:30 np0005593232 nova_compute[250269]: 2026-01-23 10:04:30.718 250273 INFO nova.compute.manager [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Terminating instance#033[00m
Jan 23 05:04:30 np0005593232 nova_compute[250269]: 2026-01-23 10:04:30.720 250273 DEBUG nova.compute.manager [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:04:30 np0005593232 kernel: tap72849734-6d (unregistering): left promiscuous mode
Jan 23 05:04:30 np0005593232 NetworkManager[49057]: <info>  [1769162670.7589] device (tap72849734-6d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:04:30 np0005593232 ovn_controller[151001]: 2026-01-23T10:04:30Z|00371|binding|INFO|Releasing lport 72849734-6d62-43db-b9e3-9c5f39b22d9d from this chassis (sb_readonly=0)
Jan 23 05:04:30 np0005593232 ovn_controller[151001]: 2026-01-23T10:04:30Z|00372|binding|INFO|Setting lport 72849734-6d62-43db-b9e3-9c5f39b22d9d down in Southbound
Jan 23 05:04:30 np0005593232 ovn_controller[151001]: 2026-01-23T10:04:30Z|00373|binding|INFO|Removing iface tap72849734-6d ovn-installed in OVS
Jan 23 05:04:30 np0005593232 nova_compute[250269]: 2026-01-23 10:04:30.769 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:30 np0005593232 nova_compute[250269]: 2026-01-23 10:04:30.771 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:30.779 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:18:77:55 10.100.0.11'], port_security=['fa:16:3e:18:77:55 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '40ae15fe-e324-4fc4-b6ee-df051fcbea8f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c16cd713fa74a88b43e4edf01c273bd', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a3146bb5-8649-4287-af90-398646a0838c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.237', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c3aed5f-30b8-4c57-808e-87764ab67fc8, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=72849734-6d62-43db-b9e3-9c5f39b22d9d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:04:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:30.781 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 72849734-6d62-43db-b9e3-9c5f39b22d9d in datapath 8575e824-4be0-4206-873e-2f9a3d1ded0b unbound from our chassis#033[00m
Jan 23 05:04:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:30.782 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8575e824-4be0-4206-873e-2f9a3d1ded0b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:04:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:30.783 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[417a6ab2-3b4d-46e4-a6b7-40aa0a3ef0e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:30.783 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b namespace which is not needed anymore#033[00m
Jan 23 05:04:30 np0005593232 nova_compute[250269]: 2026-01-23 10:04:30.791 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:30 np0005593232 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Jan 23 05:04:30 np0005593232 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d0000006b.scope: Consumed 10.132s CPU time.
Jan 23 05:04:30 np0005593232 systemd-machined[215836]: Machine qemu-44-instance-0000006b terminated.
Jan 23 05:04:30 np0005593232 neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b[320975]: [NOTICE]   (320979) : haproxy version is 2.8.14-c23fe91
Jan 23 05:04:30 np0005593232 neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b[320975]: [NOTICE]   (320979) : path to executable is /usr/sbin/haproxy
Jan 23 05:04:30 np0005593232 neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b[320975]: [WARNING]  (320979) : Exiting Master process...
Jan 23 05:04:30 np0005593232 neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b[320975]: [ALERT]    (320979) : Current worker (320981) exited with code 143 (Terminated)
Jan 23 05:04:30 np0005593232 neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b[320975]: [WARNING]  (320979) : All workers exited. Exiting... (0)
Jan 23 05:04:30 np0005593232 systemd[1]: libpod-aae206c64faa73d14eed52472a5ef93a14d2559908747b1acfce5f3a31e2617e.scope: Deactivated successfully.
Jan 23 05:04:30 np0005593232 podman[321072]: 2026-01-23 10:04:30.919197641 +0000 UTC m=+0.043522498 container died aae206c64faa73d14eed52472a5ef93a14d2559908747b1acfce5f3a31e2617e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 05:04:30 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aae206c64faa73d14eed52472a5ef93a14d2559908747b1acfce5f3a31e2617e-userdata-shm.mount: Deactivated successfully.
Jan 23 05:04:30 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b22531e1f80b7315a641f54985aa7202432cd248b2e4165b5f603c846ed93e8c-merged.mount: Deactivated successfully.
Jan 23 05:04:30 np0005593232 nova_compute[250269]: 2026-01-23 10:04:30.953 250273 INFO nova.virt.libvirt.driver [-] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Instance destroyed successfully.#033[00m
Jan 23 05:04:30 np0005593232 nova_compute[250269]: 2026-01-23 10:04:30.954 250273 DEBUG nova.objects.instance [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lazy-loading 'resources' on Instance uuid 40ae15fe-e324-4fc4-b6ee-df051fcbea8f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:04:30 np0005593232 podman[321072]: 2026-01-23 10:04:30.955727389 +0000 UTC m=+0.080052226 container cleanup aae206c64faa73d14eed52472a5ef93a14d2559908747b1acfce5f3a31e2617e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:04:30 np0005593232 systemd[1]: libpod-conmon-aae206c64faa73d14eed52472a5ef93a14d2559908747b1acfce5f3a31e2617e.scope: Deactivated successfully.
Jan 23 05:04:30 np0005593232 nova_compute[250269]: 2026-01-23 10:04:30.970 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:30 np0005593232 nova_compute[250269]: 2026-01-23 10:04:30.989 250273 DEBUG nova.virt.libvirt.vif [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-23T10:03:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1019564397',display_name='tempest-ServerActionsTestOtherA-server-1776476949',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1019564397',id=107,image_ref='ae1f9e37-418c-462f-81d1-3599a6d89de9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHGL26NAvGQ3xkW0HCIL1RZNLJx0MIysq/lI68J78z++HdYLLG/Y8vj8YSH+2c/ADo8uw7KAcJiRiM8sc9yPRucXw5x0Nng8MP2LtCKXtQpovhYl24CK3lPO8zqGdIODdQ==',key_name='tempest-keypair-549993098',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:04:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8c16cd713fa74a88b43e4edf01c273bd',ramdisk_id='',reservation_id='r-kexumavt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='ae1f9e37-418c-462f-81d1-3599a6d89de9',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-882763067',owner_user_name='tempest-ServerActionsTestOtherA-882763067-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:04:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='29710db389c842df836944048225740f',uuid=40ae15fe-e324-4fc4-b6ee-df051fcbea8f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "address": "fa:16:3e:18:77:55", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": 
[{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72849734-6d", "ovs_interfaceid": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:04:30 np0005593232 nova_compute[250269]: 2026-01-23 10:04:30.989 250273 DEBUG nova.network.os_vif_util [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converting VIF {"id": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "address": "fa:16:3e:18:77:55", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72849734-6d", "ovs_interfaceid": "72849734-6d62-43db-b9e3-9c5f39b22d9d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:04:30 np0005593232 nova_compute[250269]: 2026-01-23 10:04:30.990 250273 DEBUG nova.network.os_vif_util [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:18:77:55,bridge_name='br-int',has_traffic_filtering=True,id=72849734-6d62-43db-b9e3-9c5f39b22d9d,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72849734-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:04:30 np0005593232 nova_compute[250269]: 2026-01-23 10:04:30.991 250273 DEBUG os_vif [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:77:55,bridge_name='br-int',has_traffic_filtering=True,id=72849734-6d62-43db-b9e3-9c5f39b22d9d,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72849734-6d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:04:30 np0005593232 nova_compute[250269]: 2026-01-23 10:04:30.992 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:30 np0005593232 nova_compute[250269]: 2026-01-23 10:04:30.993 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72849734-6d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:04:30 np0005593232 nova_compute[250269]: 2026-01-23 10:04:30.995 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:30 np0005593232 nova_compute[250269]: 2026-01-23 10:04:30.997 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:31 np0005593232 nova_compute[250269]: 2026-01-23 10:04:30.999 250273 INFO os_vif [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:18:77:55,bridge_name='br-int',has_traffic_filtering=True,id=72849734-6d62-43db-b9e3-9c5f39b22d9d,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72849734-6d')#033[00m
Jan 23 05:04:31 np0005593232 podman[321110]: 2026-01-23 10:04:31.023294129 +0000 UTC m=+0.046453131 container remove aae206c64faa73d14eed52472a5ef93a14d2559908747b1acfce5f3a31e2617e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:04:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2203: 321 pgs: 321 active+clean; 247 MiB data, 946 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 168 op/s
Jan 23 05:04:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:31.028 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[62d9b39b-a574-4f01-9735-2af177aca24b]: (4, ('Fri Jan 23 10:04:30 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b (aae206c64faa73d14eed52472a5ef93a14d2559908747b1acfce5f3a31e2617e)\naae206c64faa73d14eed52472a5ef93a14d2559908747b1acfce5f3a31e2617e\nFri Jan 23 10:04:30 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b (aae206c64faa73d14eed52472a5ef93a14d2559908747b1acfce5f3a31e2617e)\naae206c64faa73d14eed52472a5ef93a14d2559908747b1acfce5f3a31e2617e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:31.030 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8948bed4-8f85-4605-b3b6-b55f39fb7e09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:31.031 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8575e824-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:04:31 np0005593232 nova_compute[250269]: 2026-01-23 10:04:31.033 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:31 np0005593232 kernel: tap8575e824-40: left promiscuous mode
Jan 23 05:04:31 np0005593232 nova_compute[250269]: 2026-01-23 10:04:31.047 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:31.050 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d9adc75c-b615-4f44-b5f2-ba95f26daaa6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:31.064 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1e4becf4-f740-420a-bac5-19ca1bd1107e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:31.065 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[06f37b45-4e8b-46ea-8bdd-b23c60a380f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:31.080 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7fdc1db8-02f4-41c1-bf72-1bf3af789b0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 657385, 'reachable_time': 40062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321143, 'error': None, 'target': 'ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:31.083 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:04:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:31.084 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[cc293743-ec8e-4e17-a0b9-7f0d8017fe91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:04:31 np0005593232 systemd[1]: run-netns-ovnmeta\x2d8575e824\x2d4be0\x2d4206\x2d873e\x2d2f9a3d1ded0b.mount: Deactivated successfully.
Jan 23 05:04:31 np0005593232 nova_compute[250269]: 2026-01-23 10:04:31.092 250273 DEBUG nova.compute.manager [req-14d88cc8-7aa2-4d8d-a4ad-ac9ec3611acd req-a8d337c8-00e8-4fbc-81e5-47c10d2ba27a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Received event network-vif-unplugged-72849734-6d62-43db-b9e3-9c5f39b22d9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:04:31 np0005593232 nova_compute[250269]: 2026-01-23 10:04:31.092 250273 DEBUG oslo_concurrency.lockutils [req-14d88cc8-7aa2-4d8d-a4ad-ac9ec3611acd req-a8d337c8-00e8-4fbc-81e5-47c10d2ba27a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:04:31 np0005593232 nova_compute[250269]: 2026-01-23 10:04:31.093 250273 DEBUG oslo_concurrency.lockutils [req-14d88cc8-7aa2-4d8d-a4ad-ac9ec3611acd req-a8d337c8-00e8-4fbc-81e5-47c10d2ba27a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:04:31 np0005593232 nova_compute[250269]: 2026-01-23 10:04:31.093 250273 DEBUG oslo_concurrency.lockutils [req-14d88cc8-7aa2-4d8d-a4ad-ac9ec3611acd req-a8d337c8-00e8-4fbc-81e5-47c10d2ba27a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:04:31 np0005593232 nova_compute[250269]: 2026-01-23 10:04:31.093 250273 DEBUG nova.compute.manager [req-14d88cc8-7aa2-4d8d-a4ad-ac9ec3611acd req-a8d337c8-00e8-4fbc-81e5-47c10d2ba27a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] No waiting events found dispatching network-vif-unplugged-72849734-6d62-43db-b9e3-9c5f39b22d9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:04:31 np0005593232 nova_compute[250269]: 2026-01-23 10:04:31.094 250273 DEBUG nova.compute.manager [req-14d88cc8-7aa2-4d8d-a4ad-ac9ec3611acd req-a8d337c8-00e8-4fbc-81e5-47c10d2ba27a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Received event network-vif-unplugged-72849734-6d62-43db-b9e3-9c5f39b22d9d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:04:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:04:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:31.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:04:31 np0005593232 nova_compute[250269]: 2026-01-23 10:04:31.374 250273 INFO nova.virt.libvirt.driver [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Deleting instance files /var/lib/nova/instances/40ae15fe-e324-4fc4-b6ee-df051fcbea8f_del#033[00m
Jan 23 05:04:31 np0005593232 nova_compute[250269]: 2026-01-23 10:04:31.375 250273 INFO nova.virt.libvirt.driver [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Deletion of /var/lib/nova/instances/40ae15fe-e324-4fc4-b6ee-df051fcbea8f_del complete#033[00m
Jan 23 05:04:31 np0005593232 nova_compute[250269]: 2026-01-23 10:04:31.448 250273 INFO nova.compute.manager [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:04:31 np0005593232 nova_compute[250269]: 2026-01-23 10:04:31.449 250273 DEBUG oslo.service.loopingcall [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:04:31 np0005593232 nova_compute[250269]: 2026-01-23 10:04:31.450 250273 DEBUG nova.compute.manager [-] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:04:31 np0005593232 nova_compute[250269]: 2026-01-23 10:04:31.450 250273 DEBUG nova.network.neutron [-] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:04:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:04:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:32.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2204: 321 pgs: 321 active+clean; 200 MiB data, 935 MiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 195 op/s
Jan 23 05:04:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:04:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:33.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:04:33 np0005593232 nova_compute[250269]: 2026-01-23 10:04:33.260 250273 DEBUG nova.compute.manager [req-e93343a7-85bf-40ff-ab8a-9c461b6fea11 req-d0a3afb1-a0f4-4763-b502-3488b6c327e8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Received event network-vif-plugged-72849734-6d62-43db-b9e3-9c5f39b22d9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:04:33 np0005593232 nova_compute[250269]: 2026-01-23 10:04:33.260 250273 DEBUG oslo_concurrency.lockutils [req-e93343a7-85bf-40ff-ab8a-9c461b6fea11 req-d0a3afb1-a0f4-4763-b502-3488b6c327e8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:04:33 np0005593232 nova_compute[250269]: 2026-01-23 10:04:33.261 250273 DEBUG oslo_concurrency.lockutils [req-e93343a7-85bf-40ff-ab8a-9c461b6fea11 req-d0a3afb1-a0f4-4763-b502-3488b6c327e8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:04:33 np0005593232 nova_compute[250269]: 2026-01-23 10:04:33.261 250273 DEBUG oslo_concurrency.lockutils [req-e93343a7-85bf-40ff-ab8a-9c461b6fea11 req-d0a3afb1-a0f4-4763-b502-3488b6c327e8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:04:33 np0005593232 nova_compute[250269]: 2026-01-23 10:04:33.262 250273 DEBUG nova.compute.manager [req-e93343a7-85bf-40ff-ab8a-9c461b6fea11 req-d0a3afb1-a0f4-4763-b502-3488b6c327e8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] No waiting events found dispatching network-vif-plugged-72849734-6d62-43db-b9e3-9c5f39b22d9d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:04:33 np0005593232 nova_compute[250269]: 2026-01-23 10:04:33.262 250273 WARNING nova.compute.manager [req-e93343a7-85bf-40ff-ab8a-9c461b6fea11 req-d0a3afb1-a0f4-4763-b502-3488b6c327e8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Received unexpected event network-vif-plugged-72849734-6d62-43db-b9e3-9c5f39b22d9d for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:04:33 np0005593232 nova_compute[250269]: 2026-01-23 10:04:33.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:04:33 np0005593232 nova_compute[250269]: 2026-01-23 10:04:33.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:04:33 np0005593232 nova_compute[250269]: 2026-01-23 10:04:33.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:04:33 np0005593232 nova_compute[250269]: 2026-01-23 10:04:33.359 250273 DEBUG nova.network.neutron [-] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:04:33 np0005593232 nova_compute[250269]: 2026-01-23 10:04:33.456 250273 INFO nova.compute.manager [-] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Took 2.01 seconds to deallocate network for instance.#033[00m
Jan 23 05:04:34 np0005593232 nova_compute[250269]: 2026-01-23 10:04:34.160 250273 DEBUG oslo_concurrency.lockutils [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:04:34 np0005593232 nova_compute[250269]: 2026-01-23 10:04:34.160 250273 DEBUG oslo_concurrency.lockutils [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:04:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:34.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:34 np0005593232 nova_compute[250269]: 2026-01-23 10:04:34.229 250273 DEBUG oslo_concurrency.processutils [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:04:34 np0005593232 nova_compute[250269]: 2026-01-23 10:04:34.305 250273 DEBUG nova.compute.manager [req-73dd0f39-7fb2-413a-bbcd-db0105c4e452 req-09fa4ec1-6fc2-465a-a6d2-d731d62005b2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Received event network-vif-deleted-72849734-6d62-43db-b9e3-9c5f39b22d9d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:04:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:04:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2536670825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:04:34 np0005593232 nova_compute[250269]: 2026-01-23 10:04:34.676 250273 DEBUG oslo_concurrency.processutils [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:04:34 np0005593232 nova_compute[250269]: 2026-01-23 10:04:34.682 250273 DEBUG nova.compute.provider_tree [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:04:34 np0005593232 nova_compute[250269]: 2026-01-23 10:04:34.730 250273 DEBUG nova.scheduler.client.report [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:04:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2205: 321 pgs: 321 active+clean; 200 MiB data, 935 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 15 KiB/s wr, 67 op/s
Jan 23 05:04:35 np0005593232 nova_compute[250269]: 2026-01-23 10:04:35.051 250273 DEBUG oslo_concurrency.lockutils [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:04:35 np0005593232 nova_compute[250269]: 2026-01-23 10:04:35.113 250273 INFO nova.scheduler.client.report [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Deleted allocations for instance 40ae15fe-e324-4fc4-b6ee-df051fcbea8f#033[00m
Jan 23 05:04:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:35.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:35 np0005593232 nova_compute[250269]: 2026-01-23 10:04:35.285 250273 DEBUG oslo_concurrency.lockutils [None req-3993dd0b-5885-4853-b0d1-a6c044c215ba 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "40ae15fe-e324-4fc4-b6ee-df051fcbea8f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:04:35 np0005593232 nova_compute[250269]: 2026-01-23 10:04:35.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:04:35 np0005593232 nova_compute[250269]: 2026-01-23 10:04:35.972 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:35 np0005593232 nova_compute[250269]: 2026-01-23 10:04:35.995 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:36.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:04:36 np0005593232 nova_compute[250269]: 2026-01-23 10:04:36.944 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2206: 321 pgs: 321 active+clean; 200 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 21 KiB/s wr, 68 op/s
Jan 23 05:04:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:37.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:04:37
Jan 23 05:04:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:04:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:04:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'volumes', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'images', 'default.rgw.log', 'default.rgw.control', '.mgr', '.rgw.root']
Jan 23 05:04:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:04:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:04:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:04:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:04:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:04:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:04:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:04:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:04:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:04:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:04:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:04:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:04:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:04:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:04:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:04:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:04:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:04:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:38.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:38 np0005593232 nova_compute[250269]: 2026-01-23 10:04:38.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:04:38 np0005593232 nova_compute[250269]: 2026-01-23 10:04:38.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:04:38 np0005593232 nova_compute[250269]: 2026-01-23 10:04:38.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:04:38 np0005593232 nova_compute[250269]: 2026-01-23 10:04:38.323 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:04:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2207: 321 pgs: 321 active+clean; 226 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 1.0 MiB/s wr, 52 op/s
Jan 23 05:04:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:39.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:40.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 05:04:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:04:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 05:04:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:04:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 23 05:04:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 05:04:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 23 05:04:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 05:04:40 np0005593232 nova_compute[250269]: 2026-01-23 10:04:40.974 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:40 np0005593232 nova_compute[250269]: 2026-01-23 10:04:40.997 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2208: 321 pgs: 321 active+clean; 226 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.0 MiB/s wr, 42 op/s
Jan 23 05:04:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:41.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:04:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:04:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:04:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:04:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:04:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:04:41 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev f0344e36-063b-4095-b6aa-8e58900c8144 does not exist
Jan 23 05:04:41 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 9437a753-d7b6-4310-9dfe-eddc38c5306f does not exist
Jan 23 05:04:41 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 857dda95-5e09-4fe8-a5a7-5c341ef5da18 does not exist
Jan 23 05:04:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:04:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:04:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:04:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:04:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:04:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:04:41 np0005593232 nova_compute[250269]: 2026-01-23 10:04:41.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:04:41 np0005593232 nova_compute[250269]: 2026-01-23 10:04:41.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:04:41 np0005593232 podman[321328]: 2026-01-23 10:04:41.364938589 +0000 UTC m=+0.091682936 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 05:04:41 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:04:41 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:04:41 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 05:04:41 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 05:04:41 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:04:41 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:04:41 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:04:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:04:41 np0005593232 nova_compute[250269]: 2026-01-23 10:04:41.734 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:04:41 np0005593232 nova_compute[250269]: 2026-01-23 10:04:41.735 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:04:41 np0005593232 nova_compute[250269]: 2026-01-23 10:04:41.735 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:04:41 np0005593232 nova_compute[250269]: 2026-01-23 10:04:41.735 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:04:41 np0005593232 nova_compute[250269]: 2026-01-23 10:04:41.736 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:04:41 np0005593232 podman[321469]: 2026-01-23 10:04:41.781088745 +0000 UTC m=+0.041797539 container create 47f57164124cb35d58e2192dadeb43008146519e3aee48012631bd8b4510f553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 05:04:41 np0005593232 systemd[1]: Started libpod-conmon-47f57164124cb35d58e2192dadeb43008146519e3aee48012631bd8b4510f553.scope.
Jan 23 05:04:41 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:04:41 np0005593232 podman[321469]: 2026-01-23 10:04:41.763168355 +0000 UTC m=+0.023877169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:04:41 np0005593232 podman[321469]: 2026-01-23 10:04:41.876354612 +0000 UTC m=+0.137063436 container init 47f57164124cb35d58e2192dadeb43008146519e3aee48012631bd8b4510f553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_carson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:04:41 np0005593232 podman[321469]: 2026-01-23 10:04:41.887184089 +0000 UTC m=+0.147892883 container start 47f57164124cb35d58e2192dadeb43008146519e3aee48012631bd8b4510f553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_carson, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:04:41 np0005593232 podman[321469]: 2026-01-23 10:04:41.891890903 +0000 UTC m=+0.152599717 container attach 47f57164124cb35d58e2192dadeb43008146519e3aee48012631bd8b4510f553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:04:41 np0005593232 silly_carson[321486]: 167 167
Jan 23 05:04:41 np0005593232 systemd[1]: libpod-47f57164124cb35d58e2192dadeb43008146519e3aee48012631bd8b4510f553.scope: Deactivated successfully.
Jan 23 05:04:41 np0005593232 conmon[321486]: conmon 47f57164124cb35d58e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-47f57164124cb35d58e2192dadeb43008146519e3aee48012631bd8b4510f553.scope/container/memory.events
Jan 23 05:04:41 np0005593232 podman[321469]: 2026-01-23 10:04:41.899022636 +0000 UTC m=+0.159731430 container died 47f57164124cb35d58e2192dadeb43008146519e3aee48012631bd8b4510f553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_carson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:04:41 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7515d826c5347d38139681c3c244a73be355670f0ceed6a659217414bcc530a3-merged.mount: Deactivated successfully.
Jan 23 05:04:41 np0005593232 podman[321469]: 2026-01-23 10:04:41.978630428 +0000 UTC m=+0.239339222 container remove 47f57164124cb35d58e2192dadeb43008146519e3aee48012631bd8b4510f553 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_carson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:04:41 np0005593232 systemd[1]: libpod-conmon-47f57164124cb35d58e2192dadeb43008146519e3aee48012631bd8b4510f553.scope: Deactivated successfully.
Jan 23 05:04:42 np0005593232 podman[321533]: 2026-01-23 10:04:42.147492786 +0000 UTC m=+0.039709279 container create 755d0835c6bb468567829f2017c3c9483c25de2c60d546ea630543b71b471057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_burnell, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:04:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:04:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3208012876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:04:42 np0005593232 systemd[1]: Started libpod-conmon-755d0835c6bb468567829f2017c3c9483c25de2c60d546ea630543b71b471057.scope.
Jan 23 05:04:42 np0005593232 nova_compute[250269]: 2026-01-23 10:04:42.196 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:04:42 np0005593232 podman[321533]: 2026-01-23 10:04:42.129906176 +0000 UTC m=+0.022122699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:04:42 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:04:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:42.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470abe1934a0a3e340895c68815fc0ee914d832c77de8c9650170b684b3c2b9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:04:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470abe1934a0a3e340895c68815fc0ee914d832c77de8c9650170b684b3c2b9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:04:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470abe1934a0a3e340895c68815fc0ee914d832c77de8c9650170b684b3c2b9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:04:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470abe1934a0a3e340895c68815fc0ee914d832c77de8c9650170b684b3c2b9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:04:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/470abe1934a0a3e340895c68815fc0ee914d832c77de8c9650170b684b3c2b9a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:04:42 np0005593232 podman[321533]: 2026-01-23 10:04:42.250483993 +0000 UTC m=+0.142700506 container init 755d0835c6bb468567829f2017c3c9483c25de2c60d546ea630543b71b471057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:04:42 np0005593232 podman[321533]: 2026-01-23 10:04:42.261389763 +0000 UTC m=+0.153606256 container start 755d0835c6bb468567829f2017c3c9483c25de2c60d546ea630543b71b471057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_burnell, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:04:42 np0005593232 podman[321533]: 2026-01-23 10:04:42.26620991 +0000 UTC m=+0.158426403 container attach 755d0835c6bb468567829f2017c3c9483c25de2c60d546ea630543b71b471057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_burnell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 05:04:42 np0005593232 nova_compute[250269]: 2026-01-23 10:04:42.406 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:04:42 np0005593232 nova_compute[250269]: 2026-01-23 10:04:42.408 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4392MB free_disk=20.88501739501953GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:04:42 np0005593232 nova_compute[250269]: 2026-01-23 10:04:42.408 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:04:42 np0005593232 nova_compute[250269]: 2026-01-23 10:04:42.409 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:04:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:42.616 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:04:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:42.617 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:04:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:04:42.617 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:04:42 np0005593232 nova_compute[250269]: 2026-01-23 10:04:42.777 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:04:42 np0005593232 nova_compute[250269]: 2026-01-23 10:04:42.778 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:04:42 np0005593232 nova_compute[250269]: 2026-01-23 10:04:42.817 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing inventories for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 23 05:04:42 np0005593232 nova_compute[250269]: 2026-01-23 10:04:42.865 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating ProviderTree inventory for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 23 05:04:42 np0005593232 nova_compute[250269]: 2026-01-23 10:04:42.866 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 05:04:42 np0005593232 nova_compute[250269]: 2026-01-23 10:04:42.887 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing aggregate associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 23 05:04:42 np0005593232 nova_compute[250269]: 2026-01-23 10:04:42.920 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing trait associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 23 05:04:42 np0005593232 nova_compute[250269]: 2026-01-23 10:04:42.947 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:04:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2209: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 23 05:04:43 np0005593232 affectionate_burnell[321552]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:04:43 np0005593232 affectionate_burnell[321552]: --> relative data size: 1.0
Jan 23 05:04:43 np0005593232 affectionate_burnell[321552]: --> All data devices are unavailable
Jan 23 05:04:43 np0005593232 systemd[1]: libpod-755d0835c6bb468567829f2017c3c9483c25de2c60d546ea630543b71b471057.scope: Deactivated successfully.
Jan 23 05:04:43 np0005593232 conmon[321552]: conmon 755d0835c6bb46856782 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-755d0835c6bb468567829f2017c3c9483c25de2c60d546ea630543b71b471057.scope/container/memory.events
Jan 23 05:04:43 np0005593232 podman[321533]: 2026-01-23 10:04:43.130575011 +0000 UTC m=+1.022791514 container died 755d0835c6bb468567829f2017c3c9483c25de2c60d546ea630543b71b471057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_burnell, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 23 05:04:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:04:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:43.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:04:43 np0005593232 systemd[1]: var-lib-containers-storage-overlay-470abe1934a0a3e340895c68815fc0ee914d832c77de8c9650170b684b3c2b9a-merged.mount: Deactivated successfully.
Jan 23 05:04:43 np0005593232 podman[321533]: 2026-01-23 10:04:43.186211012 +0000 UTC m=+1.078427505 container remove 755d0835c6bb468567829f2017c3c9483c25de2c60d546ea630543b71b471057 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_burnell, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Jan 23 05:04:43 np0005593232 systemd[1]: libpod-conmon-755d0835c6bb468567829f2017c3c9483c25de2c60d546ea630543b71b471057.scope: Deactivated successfully.
Jan 23 05:04:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:04:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/555328057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:04:43 np0005593232 nova_compute[250269]: 2026-01-23 10:04:43.398 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:04:43 np0005593232 nova_compute[250269]: 2026-01-23 10:04:43.408 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:04:43 np0005593232 nova_compute[250269]: 2026-01-23 10:04:43.462 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:04:43 np0005593232 nova_compute[250269]: 2026-01-23 10:04:43.507 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:04:43 np0005593232 nova_compute[250269]: 2026-01-23 10:04:43.507 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:04:43 np0005593232 podman[321742]: 2026-01-23 10:04:43.859774531 +0000 UTC m=+0.037674592 container create 5eba95c0b2a55fb64d707bc200dbe4e21a984b79be62acd45794bfbc181015fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_visvesvaraya, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 05:04:43 np0005593232 systemd[1]: Started libpod-conmon-5eba95c0b2a55fb64d707bc200dbe4e21a984b79be62acd45794bfbc181015fb.scope.
Jan 23 05:04:43 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:04:43 np0005593232 podman[321742]: 2026-01-23 10:04:43.930944013 +0000 UTC m=+0.108844084 container init 5eba95c0b2a55fb64d707bc200dbe4e21a984b79be62acd45794bfbc181015fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_visvesvaraya, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 05:04:43 np0005593232 podman[321742]: 2026-01-23 10:04:43.937185681 +0000 UTC m=+0.115085732 container start 5eba95c0b2a55fb64d707bc200dbe4e21a984b79be62acd45794bfbc181015fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:04:43 np0005593232 podman[321742]: 2026-01-23 10:04:43.843259542 +0000 UTC m=+0.021159633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:04:43 np0005593232 podman[321742]: 2026-01-23 10:04:43.94033419 +0000 UTC m=+0.118234271 container attach 5eba95c0b2a55fb64d707bc200dbe4e21a984b79be62acd45794bfbc181015fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:04:43 np0005593232 quirky_visvesvaraya[321759]: 167 167
Jan 23 05:04:43 np0005593232 systemd[1]: libpod-5eba95c0b2a55fb64d707bc200dbe4e21a984b79be62acd45794bfbc181015fb.scope: Deactivated successfully.
Jan 23 05:04:43 np0005593232 podman[321742]: 2026-01-23 10:04:43.94280506 +0000 UTC m=+0.120705121 container died 5eba95c0b2a55fb64d707bc200dbe4e21a984b79be62acd45794bfbc181015fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 23 05:04:43 np0005593232 systemd[1]: var-lib-containers-storage-overlay-eef6b4aae36069e3cc7620d50060a5fe72705b0704446040b552f04db2ec5f74-merged.mount: Deactivated successfully.
Jan 23 05:04:43 np0005593232 podman[321742]: 2026-01-23 10:04:43.975320954 +0000 UTC m=+0.153221015 container remove 5eba95c0b2a55fb64d707bc200dbe4e21a984b79be62acd45794bfbc181015fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 05:04:43 np0005593232 systemd[1]: libpod-conmon-5eba95c0b2a55fb64d707bc200dbe4e21a984b79be62acd45794bfbc181015fb.scope: Deactivated successfully.
Jan 23 05:04:44 np0005593232 podman[321783]: 2026-01-23 10:04:44.120659184 +0000 UTC m=+0.038341700 container create 0e7087f232e464b506fcec41aa268686b6176396ddd210d179f1a1a6be6aefd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_visvesvaraya, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:04:44 np0005593232 systemd[1]: Started libpod-conmon-0e7087f232e464b506fcec41aa268686b6176396ddd210d179f1a1a6be6aefd2.scope.
Jan 23 05:04:44 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:04:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b0ffe6e923471553a76ddcda995fc0252d2c336d9485a7802d4e20cf64cbbf3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:04:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b0ffe6e923471553a76ddcda995fc0252d2c336d9485a7802d4e20cf64cbbf3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:04:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b0ffe6e923471553a76ddcda995fc0252d2c336d9485a7802d4e20cf64cbbf3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:04:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b0ffe6e923471553a76ddcda995fc0252d2c336d9485a7802d4e20cf64cbbf3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:04:44 np0005593232 podman[321783]: 2026-01-23 10:04:44.104508105 +0000 UTC m=+0.022190631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:04:44 np0005593232 podman[321783]: 2026-01-23 10:04:44.198778434 +0000 UTC m=+0.116460950 container init 0e7087f232e464b506fcec41aa268686b6176396ddd210d179f1a1a6be6aefd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_visvesvaraya, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 05:04:44 np0005593232 podman[321783]: 2026-01-23 10:04:44.208731897 +0000 UTC m=+0.126414403 container start 0e7087f232e464b506fcec41aa268686b6176396ddd210d179f1a1a6be6aefd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:04:44 np0005593232 podman[321783]: 2026-01-23 10:04:44.212405601 +0000 UTC m=+0.130088167 container attach 0e7087f232e464b506fcec41aa268686b6176396ddd210d179f1a1a6be6aefd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_visvesvaraya, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:04:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:44.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:44 np0005593232 nova_compute[250269]: 2026-01-23 10:04:44.452 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]: {
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:    "0": [
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:        {
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:            "devices": [
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:                "/dev/loop3"
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:            ],
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:            "lv_name": "ceph_lv0",
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:            "lv_size": "7511998464",
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:            "name": "ceph_lv0",
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:            "tags": {
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:                "ceph.cluster_name": "ceph",
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:                "ceph.crush_device_class": "",
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:                "ceph.encrypted": "0",
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:                "ceph.osd_id": "0",
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:                "ceph.type": "block",
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:                "ceph.vdo": "0"
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:            },
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:            "type": "block",
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:            "vg_name": "ceph_vg0"
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:        }
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]:    ]
Jan 23 05:04:44 np0005593232 goofy_visvesvaraya[321799]: }
Jan 23 05:04:45 np0005593232 systemd[1]: libpod-0e7087f232e464b506fcec41aa268686b6176396ddd210d179f1a1a6be6aefd2.scope: Deactivated successfully.
Jan 23 05:04:45 np0005593232 podman[321783]: 2026-01-23 10:04:45.002648366 +0000 UTC m=+0.920330892 container died 0e7087f232e464b506fcec41aa268686b6176396ddd210d179f1a1a6be6aefd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_visvesvaraya, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 05:04:45 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1b0ffe6e923471553a76ddcda995fc0252d2c336d9485a7802d4e20cf64cbbf3-merged.mount: Deactivated successfully.
Jan 23 05:04:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2210: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Jan 23 05:04:45 np0005593232 podman[321783]: 2026-01-23 10:04:45.055780816 +0000 UTC m=+0.973463322 container remove 0e7087f232e464b506fcec41aa268686b6176396ddd210d179f1a1a6be6aefd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_visvesvaraya, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:04:45 np0005593232 systemd[1]: libpod-conmon-0e7087f232e464b506fcec41aa268686b6176396ddd210d179f1a1a6be6aefd2.scope: Deactivated successfully.
Jan 23 05:04:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:45.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:45 np0005593232 nova_compute[250269]: 2026-01-23 10:04:45.507 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:04:45 np0005593232 podman[322010]: 2026-01-23 10:04:45.594254057 +0000 UTC m=+0.037551658 container create 6249761707a67f81c5c355690169377522490b0d616a16e4553510ab43faf3ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_roentgen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:04:45 np0005593232 systemd[1]: Started libpod-conmon-6249761707a67f81c5c355690169377522490b0d616a16e4553510ab43faf3ef.scope.
Jan 23 05:04:45 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:04:45 np0005593232 podman[322010]: 2026-01-23 10:04:45.666589823 +0000 UTC m=+0.109887454 container init 6249761707a67f81c5c355690169377522490b0d616a16e4553510ab43faf3ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 05:04:45 np0005593232 podman[322010]: 2026-01-23 10:04:45.577186172 +0000 UTC m=+0.020483793 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:04:45 np0005593232 podman[322010]: 2026-01-23 10:04:45.675207297 +0000 UTC m=+0.118504918 container start 6249761707a67f81c5c355690169377522490b0d616a16e4553510ab43faf3ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_roentgen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 05:04:45 np0005593232 podman[322010]: 2026-01-23 10:04:45.679340125 +0000 UTC m=+0.122637736 container attach 6249761707a67f81c5c355690169377522490b0d616a16e4553510ab43faf3ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:04:45 np0005593232 nervous_roentgen[322027]: 167 167
Jan 23 05:04:45 np0005593232 systemd[1]: libpod-6249761707a67f81c5c355690169377522490b0d616a16e4553510ab43faf3ef.scope: Deactivated successfully.
Jan 23 05:04:45 np0005593232 conmon[322027]: conmon 6249761707a67f81c5c3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6249761707a67f81c5c355690169377522490b0d616a16e4553510ab43faf3ef.scope/container/memory.events
Jan 23 05:04:45 np0005593232 podman[322010]: 2026-01-23 10:04:45.683073161 +0000 UTC m=+0.126370772 container died 6249761707a67f81c5c355690169377522490b0d616a16e4553510ab43faf3ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_roentgen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:04:45 np0005593232 podman[322024]: 2026-01-23 10:04:45.69534788 +0000 UTC m=+0.063687561 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true)
Jan 23 05:04:45 np0005593232 systemd[1]: var-lib-containers-storage-overlay-17cd9568302a28d0e85344a429b968929225a9e4f5173d553833a2ec35152a4c-merged.mount: Deactivated successfully.
Jan 23 05:04:45 np0005593232 podman[322010]: 2026-01-23 10:04:45.717388686 +0000 UTC m=+0.160686287 container remove 6249761707a67f81c5c355690169377522490b0d616a16e4553510ab43faf3ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 23 05:04:45 np0005593232 systemd[1]: libpod-conmon-6249761707a67f81c5c355690169377522490b0d616a16e4553510ab43faf3ef.scope: Deactivated successfully.
Jan 23 05:04:45 np0005593232 podman[322069]: 2026-01-23 10:04:45.879520583 +0000 UTC m=+0.041160081 container create 7d25e557027fb92866bf402cb8bda99082ff217ea6f18f78644aac137a8b0f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_gauss, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:04:45 np0005593232 systemd[1]: Started libpod-conmon-7d25e557027fb92866bf402cb8bda99082ff217ea6f18f78644aac137a8b0f10.scope.
Jan 23 05:04:45 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:04:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fce9f6a322defdc2aa86948c69fdba8cd1aaa5dc0e65ad29a23b10cc5648682e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:04:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fce9f6a322defdc2aa86948c69fdba8cd1aaa5dc0e65ad29a23b10cc5648682e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:04:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fce9f6a322defdc2aa86948c69fdba8cd1aaa5dc0e65ad29a23b10cc5648682e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:04:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fce9f6a322defdc2aa86948c69fdba8cd1aaa5dc0e65ad29a23b10cc5648682e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:04:45 np0005593232 nova_compute[250269]: 2026-01-23 10:04:45.952 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769162670.95118, 40ae15fe-e324-4fc4-b6ee-df051fcbea8f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:04:45 np0005593232 nova_compute[250269]: 2026-01-23 10:04:45.953 250273 INFO nova.compute.manager [-] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:04:45 np0005593232 podman[322069]: 2026-01-23 10:04:45.862321894 +0000 UTC m=+0.023961402 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:04:45 np0005593232 podman[322069]: 2026-01-23 10:04:45.962251784 +0000 UTC m=+0.123891292 container init 7d25e557027fb92866bf402cb8bda99082ff217ea6f18f78644aac137a8b0f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_gauss, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:04:45 np0005593232 podman[322069]: 2026-01-23 10:04:45.968758329 +0000 UTC m=+0.130397827 container start 7d25e557027fb92866bf402cb8bda99082ff217ea6f18f78644aac137a8b0f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_gauss, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:04:45 np0005593232 podman[322069]: 2026-01-23 10:04:45.972443273 +0000 UTC m=+0.134082781 container attach 7d25e557027fb92866bf402cb8bda99082ff217ea6f18f78644aac137a8b0f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 05:04:45 np0005593232 nova_compute[250269]: 2026-01-23 10:04:45.976 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:45 np0005593232 nova_compute[250269]: 2026-01-23 10:04:45.998 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:46.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:46 np0005593232 nova_compute[250269]: 2026-01-23 10:04:46.460 250273 DEBUG nova.compute.manager [None req-f586fd9e-9225-45f9-987c-59345e49e249 - - - - - -] [instance: 40ae15fe-e324-4fc4-b6ee-df051fcbea8f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:04:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:04:46 np0005593232 cool_gauss[322086]: {
Jan 23 05:04:46 np0005593232 cool_gauss[322086]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:04:46 np0005593232 cool_gauss[322086]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:04:46 np0005593232 cool_gauss[322086]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:04:46 np0005593232 cool_gauss[322086]:        "osd_id": 0,
Jan 23 05:04:46 np0005593232 cool_gauss[322086]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:04:46 np0005593232 cool_gauss[322086]:        "type": "bluestore"
Jan 23 05:04:46 np0005593232 cool_gauss[322086]:    }
Jan 23 05:04:46 np0005593232 cool_gauss[322086]: }
Jan 23 05:04:46 np0005593232 systemd[1]: libpod-7d25e557027fb92866bf402cb8bda99082ff217ea6f18f78644aac137a8b0f10.scope: Deactivated successfully.
Jan 23 05:04:46 np0005593232 podman[322069]: 2026-01-23 10:04:46.793636067 +0000 UTC m=+0.955275575 container died 7d25e557027fb92866bf402cb8bda99082ff217ea6f18f78644aac137a8b0f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:04:46 np0005593232 systemd[1]: var-lib-containers-storage-overlay-fce9f6a322defdc2aa86948c69fdba8cd1aaa5dc0e65ad29a23b10cc5648682e-merged.mount: Deactivated successfully.
Jan 23 05:04:46 np0005593232 podman[322069]: 2026-01-23 10:04:46.854419204 +0000 UTC m=+1.016058702 container remove 7d25e557027fb92866bf402cb8bda99082ff217ea6f18f78644aac137a8b0f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 05:04:46 np0005593232 systemd[1]: libpod-conmon-7d25e557027fb92866bf402cb8bda99082ff217ea6f18f78644aac137a8b0f10.scope: Deactivated successfully.
Jan 23 05:04:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:04:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:04:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:04:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 203515b4-790a-4b33-ab4d-7e5f174f6ec9 does not exist
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b4213df5-7914-4b11-9b08-ac968723907f does not exist
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 6388a94a-5e8e-465b-ac50-c6d0e4a17931 does not exist
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0053357426600595574 of space, bias 1.0, pg target 1.6007227980178673 quantized to 32 (current 32)
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:04:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 05:04:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2211: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Jan 23 05:04:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:47.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:04:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:04:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:48.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2212: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 23 05:04:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:49.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Jan 23 05:04:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Jan 23 05:04:50 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Jan 23 05:04:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:50.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:50 np0005593232 nova_compute[250269]: 2026-01-23 10:04:50.978 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:51 np0005593232 nova_compute[250269]: 2026-01-23 10:04:51.000 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:04:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2214: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 915 KiB/s wr, 18 op/s
Jan 23 05:04:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:51.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:04:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:52.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2215: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 409 B/s wr, 26 op/s
Jan 23 05:04:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:53.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:53 np0005593232 nova_compute[250269]: 2026-01-23 10:04:53.852 250273 DEBUG oslo_concurrency.lockutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Acquiring lock "80088a61-e582-44f8-92e2-9d41a6b4a398" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:04:53 np0005593232 nova_compute[250269]: 2026-01-23 10:04:53.853 250273 DEBUG oslo_concurrency.lockutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Lock "80088a61-e582-44f8-92e2-9d41a6b4a398" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:04:53 np0005593232 nova_compute[250269]: 2026-01-23 10:04:53.894 250273 DEBUG oslo_concurrency.lockutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "34c9552e-fca1-4094-96d1-eb627cda17ab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:04:53 np0005593232 nova_compute[250269]: 2026-01-23 10:04:53.894 250273 DEBUG oslo_concurrency.lockutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "34c9552e-fca1-4094-96d1-eb627cda17ab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:04:53 np0005593232 nova_compute[250269]: 2026-01-23 10:04:53.898 250273 DEBUG nova.compute.manager [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:04:53 np0005593232 nova_compute[250269]: 2026-01-23 10:04:53.931 250273 DEBUG nova.compute.manager [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:04:54 np0005593232 nova_compute[250269]: 2026-01-23 10:04:54.013 250273 DEBUG oslo_concurrency.lockutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:04:54 np0005593232 nova_compute[250269]: 2026-01-23 10:04:54.014 250273 DEBUG oslo_concurrency.lockutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:04:54 np0005593232 nova_compute[250269]: 2026-01-23 10:04:54.023 250273 DEBUG nova.virt.hardware [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:04:54 np0005593232 nova_compute[250269]: 2026-01-23 10:04:54.023 250273 INFO nova.compute.claims [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:04:54 np0005593232 nova_compute[250269]: 2026-01-23 10:04:54.028 250273 DEBUG oslo_concurrency.lockutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:04:54 np0005593232 nova_compute[250269]: 2026-01-23 10:04:54.196 250273 DEBUG oslo_concurrency.processutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:04:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:54.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:04:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1094155306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:04:54 np0005593232 nova_compute[250269]: 2026-01-23 10:04:54.806 250273 DEBUG oslo_concurrency.processutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:04:54 np0005593232 nova_compute[250269]: 2026-01-23 10:04:54.817 250273 DEBUG nova.compute.provider_tree [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 05:04:54 np0005593232 nova_compute[250269]: 2026-01-23 10:04:54.861 250273 DEBUG nova.scheduler.client.report [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 05:04:54 np0005593232 nova_compute[250269]: 2026-01-23 10:04:54.912 250273 DEBUG oslo_concurrency.lockutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:04:54 np0005593232 nova_compute[250269]: 2026-01-23 10:04:54.913 250273 DEBUG nova.compute.manager [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 05:04:54 np0005593232 nova_compute[250269]: 2026-01-23 10:04:54.916 250273 DEBUG oslo_concurrency.lockutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:04:54 np0005593232 nova_compute[250269]: 2026-01-23 10:04:54.923 250273 DEBUG nova.virt.hardware [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 23 05:04:54 np0005593232 nova_compute[250269]: 2026-01-23 10:04:54.923 250273 INFO nova.compute.claims [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Claim successful on node compute-0.ctlplane.example.com
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.014 250273 DEBUG nova.compute.manager [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.015 250273 DEBUG nova.network.neutron [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 05:04:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2216: 321 pgs: 321 active+clean; 246 MiB data, 951 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 409 B/s wr, 26 op/s
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.041 250273 INFO nova.virt.libvirt.driver [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.065 250273 DEBUG nova.compute.manager [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.119 250273 DEBUG oslo_concurrency.processutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:04:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:04:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:55.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.205 250273 DEBUG nova.compute.manager [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.207 250273 DEBUG nova.virt.libvirt.driver [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.208 250273 INFO nova.virt.libvirt.driver [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Creating image(s)
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.234 250273 DEBUG nova.storage.rbd_utils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] rbd image 80088a61-e582-44f8-92e2-9d41a6b4a398_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.264 250273 DEBUG nova.storage.rbd_utils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] rbd image 80088a61-e582-44f8-92e2-9d41a6b4a398_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.296 250273 DEBUG nova.storage.rbd_utils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] rbd image 80088a61-e582-44f8-92e2-9d41a6b4a398_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.301 250273 DEBUG oslo_concurrency.processutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.331 250273 DEBUG nova.policy [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7fc54d7ba4d1442d956e0d30350325a2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ddad636b2dd940bf9c024a1ac19616e1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.373 250273 DEBUG oslo_concurrency.processutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.374 250273 DEBUG oslo_concurrency.lockutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.375 250273 DEBUG oslo_concurrency.lockutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.375 250273 DEBUG oslo_concurrency.lockutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.402 250273 DEBUG nova.storage.rbd_utils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] rbd image 80088a61-e582-44f8-92e2-9d41a6b4a398_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.406 250273 DEBUG oslo_concurrency.processutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 80088a61-e582-44f8-92e2-9d41a6b4a398_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:04:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:04:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1460967640' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.586 250273 DEBUG oslo_concurrency.processutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.592 250273 DEBUG nova.compute.provider_tree [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.614 250273 DEBUG nova.scheduler.client.report [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.642 250273 DEBUG oslo_concurrency.lockutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.643 250273 DEBUG nova.compute.manager [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.705 250273 DEBUG nova.compute.manager [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.706 250273 DEBUG nova.network.neutron [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.733 250273 INFO nova.virt.libvirt.driver [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.784 250273 DEBUG nova.compute.manager [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.924 250273 DEBUG nova.compute.manager [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.925 250273 DEBUG nova.virt.libvirt.driver [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 05:04:55 np0005593232 nova_compute[250269]: 2026-01-23 10:04:55.925 250273 INFO nova.virt.libvirt.driver [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Creating image(s)
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.043 250273 DEBUG nova.storage.rbd_utils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 34c9552e-fca1-4094-96d1-eb627cda17ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.072 250273 DEBUG nova.storage.rbd_utils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 34c9552e-fca1-4094-96d1-eb627cda17ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.100 250273 DEBUG nova.storage.rbd_utils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 34c9552e-fca1-4094-96d1-eb627cda17ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.103 250273 DEBUG oslo_concurrency.processutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.140 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.189 250273 DEBUG oslo_concurrency.processutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.191 250273 DEBUG oslo_concurrency.lockutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.192 250273 DEBUG oslo_concurrency.lockutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.192 250273 DEBUG oslo_concurrency.lockutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.217 250273 DEBUG nova.storage.rbd_utils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 34c9552e-fca1-4094-96d1-eb627cda17ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.220 250273 DEBUG oslo_concurrency.processutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 34c9552e-fca1-4094-96d1-eb627cda17ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:04:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:56.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.268 250273 DEBUG oslo_concurrency.processutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 80088a61-e582-44f8-92e2-9d41a6b4a398_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.862s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.354 250273 DEBUG nova.policy [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '29710db389c842df836944048225740f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8c16cd713fa74a88b43e4edf01c273bd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.365 250273 DEBUG nova.storage.rbd_utils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] resizing rbd image 80088a61-e582-44f8-92e2-9d41a6b4a398_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.553 250273 DEBUG nova.network.neutron [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Successfully created port: 6d187dfa-30bc-47b4-ae98-8a892252b22d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.556 250273 DEBUG oslo_concurrency.processutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 34c9552e-fca1-4094-96d1-eb627cda17ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.596 250273 DEBUG nova.objects.instance [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Lazy-loading 'migration_context' on Instance uuid 80088a61-e582-44f8-92e2-9d41a6b4a398 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 05:04:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.642 250273 DEBUG nova.virt.libvirt.driver [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.643 250273 DEBUG nova.virt.libvirt.driver [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Ensure instance console log exists: /var/lib/nova/instances/80088a61-e582-44f8-92e2-9d41a6b4a398/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.643 250273 DEBUG oslo_concurrency.lockutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.644 250273 DEBUG oslo_concurrency.lockutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.644 250273 DEBUG oslo_concurrency.lockutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.650 250273 DEBUG nova.storage.rbd_utils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] resizing rbd image 34c9552e-fca1-4094-96d1-eb627cda17ab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.752 250273 DEBUG nova.objects.instance [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lazy-loading 'migration_context' on Instance uuid 34c9552e-fca1-4094-96d1-eb627cda17ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.940 250273 DEBUG nova.virt.libvirt.driver [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.940 250273 DEBUG nova.virt.libvirt.driver [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Ensure instance console log exists: /var/lib/nova/instances/34c9552e-fca1-4094-96d1-eb627cda17ab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.941 250273 DEBUG oslo_concurrency.lockutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.941 250273 DEBUG oslo_concurrency.lockutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:04:56 np0005593232 nova_compute[250269]: 2026-01-23 10:04:56.941 250273 DEBUG oslo_concurrency.lockutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:04:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2217: 321 pgs: 321 active+clean; 266 MiB data, 951 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.1 MiB/s wr, 94 op/s
Jan 23 05:04:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:04:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:57.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:04:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:04:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:04:58.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:04:58 np0005593232 nova_compute[250269]: 2026-01-23 10:04:58.939 250273 DEBUG nova.network.neutron [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Successfully created port: 800b7313-6d9b-4fd7-9175-cdecef348ba1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:04:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2218: 321 pgs: 321 active+clean; 330 MiB data, 978 MiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.9 MiB/s wr, 228 op/s
Jan 23 05:04:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:04:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:04:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:04:59.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:05:00 np0005593232 nova_compute[250269]: 2026-01-23 10:05:00.010 250273 DEBUG nova.network.neutron [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Successfully updated port: 6d187dfa-30bc-47b4-ae98-8a892252b22d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:05:00 np0005593232 nova_compute[250269]: 2026-01-23 10:05:00.035 250273 DEBUG oslo_concurrency.lockutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Acquiring lock "refresh_cache-80088a61-e582-44f8-92e2-9d41a6b4a398" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:05:00 np0005593232 nova_compute[250269]: 2026-01-23 10:05:00.035 250273 DEBUG oslo_concurrency.lockutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Acquired lock "refresh_cache-80088a61-e582-44f8-92e2-9d41a6b4a398" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:05:00 np0005593232 nova_compute[250269]: 2026-01-23 10:05:00.036 250273 DEBUG nova.network.neutron [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:05:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:00.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:00 np0005593232 nova_compute[250269]: 2026-01-23 10:05:00.294 250273 DEBUG nova.network.neutron [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:05:00 np0005593232 nova_compute[250269]: 2026-01-23 10:05:00.984 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2219: 321 pgs: 321 active+clean; 330 MiB data, 978 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.5 MiB/s wr, 208 op/s
Jan 23 05:05:01 np0005593232 nova_compute[250269]: 2026-01-23 10:05:01.079 250273 DEBUG nova.network.neutron [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Successfully updated port: 800b7313-6d9b-4fd7-9175-cdecef348ba1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:05:01 np0005593232 nova_compute[250269]: 2026-01-23 10:05:01.105 250273 DEBUG oslo_concurrency.lockutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "refresh_cache-34c9552e-fca1-4094-96d1-eb627cda17ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:05:01 np0005593232 nova_compute[250269]: 2026-01-23 10:05:01.106 250273 DEBUG oslo_concurrency.lockutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquired lock "refresh_cache-34c9552e-fca1-4094-96d1-eb627cda17ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:05:01 np0005593232 nova_compute[250269]: 2026-01-23 10:05:01.106 250273 DEBUG nova.network.neutron [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:05:01 np0005593232 nova_compute[250269]: 2026-01-23 10:05:01.156 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:05:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:01.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:05:01 np0005593232 nova_compute[250269]: 2026-01-23 10:05:01.310 250273 DEBUG nova.compute.manager [req-01d72504-e038-4473-9659-65d5a6459dd0 req-8d96cb6e-a34a-40c8-a3b1-aced264f7e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Received event network-changed-6d187dfa-30bc-47b4-ae98-8a892252b22d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:05:01 np0005593232 nova_compute[250269]: 2026-01-23 10:05:01.311 250273 DEBUG nova.compute.manager [req-01d72504-e038-4473-9659-65d5a6459dd0 req-8d96cb6e-a34a-40c8-a3b1-aced264f7e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Refreshing instance network info cache due to event network-changed-6d187dfa-30bc-47b4-ae98-8a892252b22d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:05:01 np0005593232 nova_compute[250269]: 2026-01-23 10:05:01.311 250273 DEBUG oslo_concurrency.lockutils [req-01d72504-e038-4473-9659-65d5a6459dd0 req-8d96cb6e-a34a-40c8-a3b1-aced264f7e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-80088a61-e582-44f8-92e2-9d41a6b4a398" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:05:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.003 250273 DEBUG nova.network.neutron [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:05:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:05:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:02.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.383 250273 DEBUG nova.network.neutron [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Updating instance_info_cache with network_info: [{"id": "6d187dfa-30bc-47b4-ae98-8a892252b22d", "address": "fa:16:3e:c1:98:8f", "network": {"id": "52380549-883f-4172-950c-7fefbf741501", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-355830193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddad636b2dd940bf9c024a1ac19616e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d187dfa-30", "ovs_interfaceid": "6d187dfa-30bc-47b4-ae98-8a892252b22d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.415 250273 DEBUG oslo_concurrency.lockutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Releasing lock "refresh_cache-80088a61-e582-44f8-92e2-9d41a6b4a398" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.416 250273 DEBUG nova.compute.manager [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Instance network_info: |[{"id": "6d187dfa-30bc-47b4-ae98-8a892252b22d", "address": "fa:16:3e:c1:98:8f", "network": {"id": "52380549-883f-4172-950c-7fefbf741501", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-355830193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddad636b2dd940bf9c024a1ac19616e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d187dfa-30", "ovs_interfaceid": "6d187dfa-30bc-47b4-ae98-8a892252b22d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.416 250273 DEBUG oslo_concurrency.lockutils [req-01d72504-e038-4473-9659-65d5a6459dd0 req-8d96cb6e-a34a-40c8-a3b1-aced264f7e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-80088a61-e582-44f8-92e2-9d41a6b4a398" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.417 250273 DEBUG nova.network.neutron [req-01d72504-e038-4473-9659-65d5a6459dd0 req-8d96cb6e-a34a-40c8-a3b1-aced264f7e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Refreshing network info cache for port 6d187dfa-30bc-47b4-ae98-8a892252b22d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.420 250273 DEBUG nova.virt.libvirt.driver [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Start _get_guest_xml network_info=[{"id": "6d187dfa-30bc-47b4-ae98-8a892252b22d", "address": "fa:16:3e:c1:98:8f", "network": {"id": "52380549-883f-4172-950c-7fefbf741501", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-355830193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddad636b2dd940bf9c024a1ac19616e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d187dfa-30", "ovs_interfaceid": "6d187dfa-30bc-47b4-ae98-8a892252b22d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.424 250273 WARNING nova.virt.libvirt.driver [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.427 250273 DEBUG nova.virt.libvirt.host [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.428 250273 DEBUG nova.virt.libvirt.host [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.430 250273 DEBUG nova.virt.libvirt.host [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.431 250273 DEBUG nova.virt.libvirt.host [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.432 250273 DEBUG nova.virt.libvirt.driver [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.432 250273 DEBUG nova.virt.hardware [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.433 250273 DEBUG nova.virt.hardware [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.433 250273 DEBUG nova.virt.hardware [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.433 250273 DEBUG nova.virt.hardware [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.433 250273 DEBUG nova.virt.hardware [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.434 250273 DEBUG nova.virt.hardware [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.434 250273 DEBUG nova.virt.hardware [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.434 250273 DEBUG nova.virt.hardware [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.434 250273 DEBUG nova.virt.hardware [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.435 250273 DEBUG nova.virt.hardware [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.435 250273 DEBUG nova.virt.hardware [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.438 250273 DEBUG oslo_concurrency.processutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:05:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3734533407' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.907 250273 DEBUG oslo_concurrency.processutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.936 250273 DEBUG nova.storage.rbd_utils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] rbd image 80088a61-e582-44f8-92e2-9d41a6b4a398_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:05:02 np0005593232 nova_compute[250269]: 2026-01-23 10:05:02.940 250273 DEBUG oslo_concurrency.processutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2220: 321 pgs: 321 active+clean; 339 MiB data, 993 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 214 op/s
Jan 23 05:05:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:03.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:05:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/320134451' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.386 250273 DEBUG oslo_concurrency.processutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.388 250273 DEBUG nova.virt.libvirt.vif [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:04:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-843835794',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-843835794',id=113,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddad636b2dd940bf9c024a1ac19616e1',ramdisk_id='',reservation_id='r-jcxi180w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-448539251',owner_user_name='tempest-ServerTagsTestJSON-448539251-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:04:55Z,user_data=None,user_id='7fc54d7ba4d1442d956e0d30350325a2',uuid=80088a61-e582-44f8-92e2-9d41a6b4a398,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6d187dfa-30bc-47b4-ae98-8a892252b22d", "address": "fa:16:3e:c1:98:8f", "network": {"id": "52380549-883f-4172-950c-7fefbf741501", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-355830193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddad636b2dd940bf9c024a1ac19616e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d187dfa-30", "ovs_interfaceid": "6d187dfa-30bc-47b4-ae98-8a892252b22d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.388 250273 DEBUG nova.network.os_vif_util [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Converting VIF {"id": "6d187dfa-30bc-47b4-ae98-8a892252b22d", "address": "fa:16:3e:c1:98:8f", "network": {"id": "52380549-883f-4172-950c-7fefbf741501", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-355830193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddad636b2dd940bf9c024a1ac19616e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d187dfa-30", "ovs_interfaceid": "6d187dfa-30bc-47b4-ae98-8a892252b22d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.389 250273 DEBUG nova.network.os_vif_util [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:98:8f,bridge_name='br-int',has_traffic_filtering=True,id=6d187dfa-30bc-47b4-ae98-8a892252b22d,network=Network(52380549-883f-4172-950c-7fefbf741501),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d187dfa-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.390 250273 DEBUG nova.objects.instance [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 80088a61-e582-44f8-92e2-9d41a6b4a398 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.582 250273 DEBUG nova.virt.libvirt.driver [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:05:03 np0005593232 nova_compute[250269]:  <uuid>80088a61-e582-44f8-92e2-9d41a6b4a398</uuid>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:  <name>instance-00000071</name>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerTagsTestJSON-server-843835794</nova:name>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:05:02</nova:creationTime>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:05:03 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:        <nova:user uuid="7fc54d7ba4d1442d956e0d30350325a2">tempest-ServerTagsTestJSON-448539251-project-member</nova:user>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:        <nova:project uuid="ddad636b2dd940bf9c024a1ac19616e1">tempest-ServerTagsTestJSON-448539251</nova:project>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:        <nova:port uuid="6d187dfa-30bc-47b4-ae98-8a892252b22d">
Jan 23 05:05:03 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <entry name="serial">80088a61-e582-44f8-92e2-9d41a6b4a398</entry>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <entry name="uuid">80088a61-e582-44f8-92e2-9d41a6b4a398</entry>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/80088a61-e582-44f8-92e2-9d41a6b4a398_disk">
Jan 23 05:05:03 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:05:03 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/80088a61-e582-44f8-92e2-9d41a6b4a398_disk.config">
Jan 23 05:05:03 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:05:03 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:c1:98:8f"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <target dev="tap6d187dfa-30"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/80088a61-e582-44f8-92e2-9d41a6b4a398/console.log" append="off"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:05:03 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:05:03 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:05:03 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:05:03 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.584 250273 DEBUG nova.compute.manager [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Preparing to wait for external event network-vif-plugged-6d187dfa-30bc-47b4-ae98-8a892252b22d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.585 250273 DEBUG oslo_concurrency.lockutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Acquiring lock "80088a61-e582-44f8-92e2-9d41a6b4a398-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.586 250273 DEBUG oslo_concurrency.lockutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Lock "80088a61-e582-44f8-92e2-9d41a6b4a398-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.586 250273 DEBUG oslo_concurrency.lockutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Lock "80088a61-e582-44f8-92e2-9d41a6b4a398-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.587 250273 DEBUG nova.virt.libvirt.vif [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:04:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-843835794',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-843835794',id=113,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddad636b2dd940bf9c024a1ac19616e1',ramdisk_id='',reservation_id='r-jcxi180w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-448539251',owner_user_name='tempest-ServerTagsTestJSON-448539251-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:04:55Z,user_data=None,user_id='7fc54d7ba4d1442d956e0d30350325a2',uuid=80088a61-e582-44f8-92e2-9d41a6b4a398,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6d187dfa-30bc-47b4-ae98-8a892252b22d", "address": "fa:16:3e:c1:98:8f", "network": {"id": "52380549-883f-4172-950c-7fefbf741501", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-355830193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddad636b2dd940bf9c024a1ac19616e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d187dfa-30", "ovs_interfaceid": "6d187dfa-30bc-47b4-ae98-8a892252b22d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.587 250273 DEBUG nova.network.os_vif_util [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Converting VIF {"id": "6d187dfa-30bc-47b4-ae98-8a892252b22d", "address": "fa:16:3e:c1:98:8f", "network": {"id": "52380549-883f-4172-950c-7fefbf741501", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-355830193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddad636b2dd940bf9c024a1ac19616e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d187dfa-30", "ovs_interfaceid": "6d187dfa-30bc-47b4-ae98-8a892252b22d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.587 250273 DEBUG nova.network.os_vif_util [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:98:8f,bridge_name='br-int',has_traffic_filtering=True,id=6d187dfa-30bc-47b4-ae98-8a892252b22d,network=Network(52380549-883f-4172-950c-7fefbf741501),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d187dfa-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.588 250273 DEBUG os_vif [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:98:8f,bridge_name='br-int',has_traffic_filtering=True,id=6d187dfa-30bc-47b4-ae98-8a892252b22d,network=Network(52380549-883f-4172-950c-7fefbf741501),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d187dfa-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.588 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.589 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.589 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.593 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.594 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d187dfa-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.594 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6d187dfa-30, col_values=(('external_ids', {'iface-id': '6d187dfa-30bc-47b4-ae98-8a892252b22d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:98:8f', 'vm-uuid': '80088a61-e582-44f8-92e2-9d41a6b4a398'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:05:03 np0005593232 NetworkManager[49057]: <info>  [1769162703.5970] manager: (tap6d187dfa-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.599 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.603 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.604 250273 INFO os_vif [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:98:8f,bridge_name='br-int',has_traffic_filtering=True,id=6d187dfa-30bc-47b4-ae98-8a892252b22d,network=Network(52380549-883f-4172-950c-7fefbf741501),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d187dfa-30')#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.732 250273 DEBUG nova.virt.libvirt.driver [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.733 250273 DEBUG nova.virt.libvirt.driver [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.733 250273 DEBUG nova.virt.libvirt.driver [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] No VIF found with MAC fa:16:3e:c1:98:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.734 250273 INFO nova.virt.libvirt.driver [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Using config drive#033[00m
Jan 23 05:05:03 np0005593232 nova_compute[250269]: 2026-01-23 10:05:03.759 250273 DEBUG nova.storage.rbd_utils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] rbd image 80088a61-e582-44f8-92e2-9d41a6b4a398_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:05:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:05:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:04.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.286 250273 DEBUG nova.network.neutron [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Updating instance_info_cache with network_info: [{"id": "800b7313-6d9b-4fd7-9175-cdecef348ba1", "address": "fa:16:3e:eb:13:67", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap800b7313-6d", "ovs_interfaceid": "800b7313-6d9b-4fd7-9175-cdecef348ba1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.315 250273 DEBUG oslo_concurrency.lockutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Releasing lock "refresh_cache-34c9552e-fca1-4094-96d1-eb627cda17ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.316 250273 DEBUG nova.compute.manager [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Instance network_info: |[{"id": "800b7313-6d9b-4fd7-9175-cdecef348ba1", "address": "fa:16:3e:eb:13:67", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap800b7313-6d", "ovs_interfaceid": "800b7313-6d9b-4fd7-9175-cdecef348ba1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.318 250273 DEBUG nova.virt.libvirt.driver [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Start _get_guest_xml network_info=[{"id": "800b7313-6d9b-4fd7-9175-cdecef348ba1", "address": "fa:16:3e:eb:13:67", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap800b7313-6d", "ovs_interfaceid": "800b7313-6d9b-4fd7-9175-cdecef348ba1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.322 250273 WARNING nova.virt.libvirt.driver [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.332 250273 DEBUG nova.virt.libvirt.host [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.333 250273 DEBUG nova.virt.libvirt.host [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.339 250273 DEBUG nova.virt.libvirt.host [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.340 250273 DEBUG nova.virt.libvirt.host [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.341 250273 DEBUG nova.virt.libvirt.driver [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.341 250273 DEBUG nova.virt.hardware [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.342 250273 DEBUG nova.virt.hardware [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.342 250273 DEBUG nova.virt.hardware [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.342 250273 DEBUG nova.virt.hardware [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.342 250273 DEBUG nova.virt.hardware [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.343 250273 DEBUG nova.virt.hardware [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.343 250273 DEBUG nova.virt.hardware [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.343 250273 DEBUG nova.virt.hardware [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.343 250273 DEBUG nova.virt.hardware [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.344 250273 DEBUG nova.virt.hardware [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.344 250273 DEBUG nova.virt.hardware [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.346 250273 DEBUG oslo_concurrency.processutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:05:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2372878538' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.793 250273 DEBUG oslo_concurrency.processutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.820 250273 DEBUG nova.storage.rbd_utils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 34c9552e-fca1-4094-96d1-eb627cda17ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:05:04 np0005593232 nova_compute[250269]: 2026-01-23 10:05:04.824 250273 DEBUG oslo_concurrency.processutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2221: 321 pgs: 321 active+clean; 339 MiB data, 993 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 192 op/s
Jan 23 05:05:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:05.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:05:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3965184376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.277 250273 INFO nova.virt.libvirt.driver [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Creating config drive at /var/lib/nova/instances/80088a61-e582-44f8-92e2-9d41a6b4a398/disk.config#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.282 250273 DEBUG oslo_concurrency.processutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/80088a61-e582-44f8-92e2-9d41a6b4a398/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8i1nvned execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.310 250273 DEBUG oslo_concurrency.processutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.312 250273 DEBUG nova.virt.libvirt.vif [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:04:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1576726897',display_name='tempest-ServerActionsTestOtherA-server-1576726897',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1576726897',id=112,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8c16cd713fa74a88b43e4edf01c273bd',ramdisk_id='',reservation_id='r-bcswvvws',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-882763067',owner_user_name='tempest-ServerActionsTestOtherA-882763067-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:04:55Z,user_data=None,user_id='29710db389c842df836944048225740f',uuid=34c9552e-fca1-4094-96d1-eb627cda17ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "800b7313-6d9b-4fd7-9175-cdecef348ba1", "address": "fa:16:3e:eb:13:67", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap800b7313-6d", "ovs_interfaceid": "800b7313-6d9b-4fd7-9175-cdecef348ba1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.313 250273 DEBUG nova.network.os_vif_util [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converting VIF {"id": "800b7313-6d9b-4fd7-9175-cdecef348ba1", "address": "fa:16:3e:eb:13:67", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap800b7313-6d", "ovs_interfaceid": "800b7313-6d9b-4fd7-9175-cdecef348ba1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.314 250273 DEBUG nova.network.os_vif_util [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:13:67,bridge_name='br-int',has_traffic_filtering=True,id=800b7313-6d9b-4fd7-9175-cdecef348ba1,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap800b7313-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.315 250273 DEBUG nova.objects.instance [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lazy-loading 'pci_devices' on Instance uuid 34c9552e-fca1-4094-96d1-eb627cda17ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.407 250273 DEBUG nova.virt.libvirt.driver [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:05:05 np0005593232 nova_compute[250269]:  <uuid>34c9552e-fca1-4094-96d1-eb627cda17ab</uuid>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:  <name>instance-00000070</name>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerActionsTestOtherA-server-1576726897</nova:name>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:05:04</nova:creationTime>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:05:05 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:        <nova:user uuid="29710db389c842df836944048225740f">tempest-ServerActionsTestOtherA-882763067-project-member</nova:user>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:        <nova:project uuid="8c16cd713fa74a88b43e4edf01c273bd">tempest-ServerActionsTestOtherA-882763067</nova:project>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:        <nova:port uuid="800b7313-6d9b-4fd7-9175-cdecef348ba1">
Jan 23 05:05:05 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <entry name="serial">34c9552e-fca1-4094-96d1-eb627cda17ab</entry>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <entry name="uuid">34c9552e-fca1-4094-96d1-eb627cda17ab</entry>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/34c9552e-fca1-4094-96d1-eb627cda17ab_disk">
Jan 23 05:05:05 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:05:05 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/34c9552e-fca1-4094-96d1-eb627cda17ab_disk.config">
Jan 23 05:05:05 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:05:05 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:eb:13:67"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <target dev="tap800b7313-6d"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/34c9552e-fca1-4094-96d1-eb627cda17ab/console.log" append="off"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:05:05 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:05:05 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:05:05 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:05:05 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.408 250273 DEBUG nova.compute.manager [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Preparing to wait for external event network-vif-plugged-800b7313-6d9b-4fd7-9175-cdecef348ba1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.408 250273 DEBUG oslo_concurrency.lockutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "34c9552e-fca1-4094-96d1-eb627cda17ab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.409 250273 DEBUG oslo_concurrency.lockutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "34c9552e-fca1-4094-96d1-eb627cda17ab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.409 250273 DEBUG oslo_concurrency.lockutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "34c9552e-fca1-4094-96d1-eb627cda17ab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.409 250273 DEBUG nova.virt.libvirt.vif [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:04:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1576726897',display_name='tempest-ServerActionsTestOtherA-server-1576726897',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1576726897',id=112,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8c16cd713fa74a88b43e4edf01c273bd',ramdisk_id='',reservation_id='r-bcswvvws',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-882763067',owner_user_name='tempest-ServerActionsTestOtherA-882763067-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:04:55Z,user_data=None,user_id='29710db389c842df836944048225740f',uuid=34c9552e-fca1-4094-96d1-eb627cda17ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "800b7313-6d9b-4fd7-9175-cdecef348ba1", "address": "fa:16:3e:eb:13:67", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap800b7313-6d", "ovs_interfaceid": "800b7313-6d9b-4fd7-9175-cdecef348ba1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.410 250273 DEBUG nova.network.os_vif_util [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converting VIF {"id": "800b7313-6d9b-4fd7-9175-cdecef348ba1", "address": "fa:16:3e:eb:13:67", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap800b7313-6d", "ovs_interfaceid": "800b7313-6d9b-4fd7-9175-cdecef348ba1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.410 250273 DEBUG nova.network.os_vif_util [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:eb:13:67,bridge_name='br-int',has_traffic_filtering=True,id=800b7313-6d9b-4fd7-9175-cdecef348ba1,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap800b7313-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.410 250273 DEBUG os_vif [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:13:67,bridge_name='br-int',has_traffic_filtering=True,id=800b7313-6d9b-4fd7-9175-cdecef348ba1,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap800b7313-6d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.411 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.412 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.412 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.415 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.415 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap800b7313-6d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.416 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap800b7313-6d, col_values=(('external_ids', {'iface-id': '800b7313-6d9b-4fd7-9175-cdecef348ba1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:eb:13:67', 'vm-uuid': '34c9552e-fca1-4094-96d1-eb627cda17ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:05:05 np0005593232 NetworkManager[49057]: <info>  [1769162705.4182] manager: (tap800b7313-6d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/187)
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.420 250273 DEBUG oslo_concurrency.processutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/80088a61-e582-44f8-92e2-9d41a6b4a398/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8i1nvned" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.445 250273 DEBUG nova.storage.rbd_utils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] rbd image 80088a61-e582-44f8-92e2-9d41a6b4a398_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.449 250273 DEBUG oslo_concurrency.processutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/80088a61-e582-44f8-92e2-9d41a6b4a398/disk.config 80088a61-e582-44f8-92e2-9d41a6b4a398_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.472 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.475 250273 INFO os_vif [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:eb:13:67,bridge_name='br-int',has_traffic_filtering=True,id=800b7313-6d9b-4fd7-9175-cdecef348ba1,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap800b7313-6d')#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.498 250273 DEBUG nova.network.neutron [req-01d72504-e038-4473-9659-65d5a6459dd0 req-8d96cb6e-a34a-40c8-a3b1-aced264f7e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Updated VIF entry in instance network info cache for port 6d187dfa-30bc-47b4-ae98-8a892252b22d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.498 250273 DEBUG nova.network.neutron [req-01d72504-e038-4473-9659-65d5a6459dd0 req-8d96cb6e-a34a-40c8-a3b1-aced264f7e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Updating instance_info_cache with network_info: [{"id": "6d187dfa-30bc-47b4-ae98-8a892252b22d", "address": "fa:16:3e:c1:98:8f", "network": {"id": "52380549-883f-4172-950c-7fefbf741501", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-355830193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddad636b2dd940bf9c024a1ac19616e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d187dfa-30", "ovs_interfaceid": "6d187dfa-30bc-47b4-ae98-8a892252b22d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.556 250273 DEBUG oslo_concurrency.lockutils [req-01d72504-e038-4473-9659-65d5a6459dd0 req-8d96cb6e-a34a-40c8-a3b1-aced264f7e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-80088a61-e582-44f8-92e2-9d41a6b4a398" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.557 250273 DEBUG nova.compute.manager [req-01d72504-e038-4473-9659-65d5a6459dd0 req-8d96cb6e-a34a-40c8-a3b1-aced264f7e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Received event network-changed-800b7313-6d9b-4fd7-9175-cdecef348ba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.557 250273 DEBUG nova.compute.manager [req-01d72504-e038-4473-9659-65d5a6459dd0 req-8d96cb6e-a34a-40c8-a3b1-aced264f7e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Refreshing instance network info cache due to event network-changed-800b7313-6d9b-4fd7-9175-cdecef348ba1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.558 250273 DEBUG oslo_concurrency.lockutils [req-01d72504-e038-4473-9659-65d5a6459dd0 req-8d96cb6e-a34a-40c8-a3b1-aced264f7e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-34c9552e-fca1-4094-96d1-eb627cda17ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.558 250273 DEBUG oslo_concurrency.lockutils [req-01d72504-e038-4473-9659-65d5a6459dd0 req-8d96cb6e-a34a-40c8-a3b1-aced264f7e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-34c9552e-fca1-4094-96d1-eb627cda17ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.558 250273 DEBUG nova.network.neutron [req-01d72504-e038-4473-9659-65d5a6459dd0 req-8d96cb6e-a34a-40c8-a3b1-aced264f7e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Refreshing network info cache for port 800b7313-6d9b-4fd7-9175-cdecef348ba1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.765 250273 DEBUG nova.virt.libvirt.driver [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.766 250273 DEBUG nova.virt.libvirt.driver [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.766 250273 DEBUG nova.virt.libvirt.driver [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] No VIF found with MAC fa:16:3e:eb:13:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.766 250273 INFO nova.virt.libvirt.driver [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Using config drive#033[00m
Jan 23 05:05:05 np0005593232 nova_compute[250269]: 2026-01-23 10:05:05.791 250273 DEBUG nova.storage.rbd_utils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 34c9552e-fca1-4094-96d1-eb627cda17ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:05:06 np0005593232 nova_compute[250269]: 2026-01-23 10:05:06.031 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:06.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:06 np0005593232 nova_compute[250269]: 2026-01-23 10:05:06.278 250273 DEBUG oslo_concurrency.processutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/80088a61-e582-44f8-92e2-9d41a6b4a398/disk.config 80088a61-e582-44f8-92e2-9d41a6b4a398_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.830s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:06 np0005593232 nova_compute[250269]: 2026-01-23 10:05:06.279 250273 INFO nova.virt.libvirt.driver [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Deleting local config drive /var/lib/nova/instances/80088a61-e582-44f8-92e2-9d41a6b4a398/disk.config because it was imported into RBD.#033[00m
Jan 23 05:05:06 np0005593232 kernel: tap6d187dfa-30: entered promiscuous mode
Jan 23 05:05:06 np0005593232 NetworkManager[49057]: <info>  [1769162706.3339] manager: (tap6d187dfa-30): new Tun device (/org/freedesktop/NetworkManager/Devices/188)
Jan 23 05:05:06 np0005593232 ovn_controller[151001]: 2026-01-23T10:05:06Z|00374|binding|INFO|Claiming lport 6d187dfa-30bc-47b4-ae98-8a892252b22d for this chassis.
Jan 23 05:05:06 np0005593232 ovn_controller[151001]: 2026-01-23T10:05:06Z|00375|binding|INFO|6d187dfa-30bc-47b4-ae98-8a892252b22d: Claiming fa:16:3e:c1:98:8f 10.100.0.7
Jan 23 05:05:06 np0005593232 nova_compute[250269]: 2026-01-23 10:05:06.338 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:06 np0005593232 ovn_controller[151001]: 2026-01-23T10:05:06Z|00376|binding|INFO|Setting lport 6d187dfa-30bc-47b4-ae98-8a892252b22d ovn-installed in OVS
Jan 23 05:05:06 np0005593232 nova_compute[250269]: 2026-01-23 10:05:06.358 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:06 np0005593232 systemd-machined[215836]: New machine qemu-45-instance-00000071.
Jan 23 05:05:06 np0005593232 nova_compute[250269]: 2026-01-23 10:05:06.365 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:06 np0005593232 systemd-udevd[322825]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:05:06 np0005593232 systemd[1]: Started Virtual Machine qemu-45-instance-00000071.
Jan 23 05:05:06 np0005593232 NetworkManager[49057]: <info>  [1769162706.3845] device (tap6d187dfa-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:05:06 np0005593232 NetworkManager[49057]: <info>  [1769162706.3859] device (tap6d187dfa-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:05:06 np0005593232 ovn_controller[151001]: 2026-01-23T10:05:06Z|00377|binding|INFO|Setting lport 6d187dfa-30bc-47b4-ae98-8a892252b22d up in Southbound
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.400 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:98:8f 10.100.0.7'], port_security=['fa:16:3e:c1:98:8f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '80088a61-e582-44f8-92e2-9d41a6b4a398', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52380549-883f-4172-950c-7fefbf741501', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddad636b2dd940bf9c024a1ac19616e1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e94d9501-6e08-459f-af0d-365420b85d02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2ade1a77-53d2-4fea-9b09-f18cd4772087, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=6d187dfa-30bc-47b4-ae98-8a892252b22d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.401 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 6d187dfa-30bc-47b4-ae98-8a892252b22d in datapath 52380549-883f-4172-950c-7fefbf741501 bound to our chassis#033[00m
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.403 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52380549-883f-4172-950c-7fefbf741501#033[00m
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.415 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e45d2cf1-5ef9-475b-b1b9-bfa96fde0236]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.416 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap52380549-81 in ovnmeta-52380549-883f-4172-950c-7fefbf741501 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.417 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap52380549-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.417 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[72a2cc75-d90c-4e46-ae0d-235752375996]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.418 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f6cf9a7a-c7c4-400f-ab84-88e204577de0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.431 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[12c344e1-68fb-482c-96fb-e856b7f42400]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.445 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[14f93c99-e6bc-4766-b8b4-d4de9bc8a696]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.477 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[7b7a8864-270d-4bc1-bdf6-d4c59dedaafd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.482 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[204c3794-498e-41e6-b01d-1ed03327a80b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:06 np0005593232 NetworkManager[49057]: <info>  [1769162706.4837] manager: (tap52380549-80): new Veth device (/org/freedesktop/NetworkManager/Devices/189)
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.516 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[a88ad6aa-ffce-4797-bb0d-308eb0e80d54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.519 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[30d4b07c-7ec3-4aa8-b6e3-a3a32d0df43a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:06 np0005593232 NetworkManager[49057]: <info>  [1769162706.5416] device (tap52380549-80): carrier: link connected
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.548 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[d349813b-573c-44c4-8a0e-568e799aff8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.567 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d68bdb5b-adb5-4710-9e70-bb6ba5da2776]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52380549-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:01:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 116], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661968, 'reachable_time': 37937, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322862, 'error': None, 'target': 'ovnmeta-52380549-883f-4172-950c-7fefbf741501', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.584 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c47bbf1d-58e4-4e97-98a3-e4ece4775ce0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedb:180'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 661968, 'tstamp': 661968}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322863, 'error': None, 'target': 'ovnmeta-52380549-883f-4172-950c-7fefbf741501', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.604 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1c2d5787-cc8b-49c5-9166-807da15e6ad0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52380549-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:01:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 116], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661968, 'reachable_time': 37937, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322864, 'error': None, 'target': 'ovnmeta-52380549-883f-4172-950c-7fefbf741501', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.633 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[46af1421-9367-49c3-a05f-3e235e529294]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.691 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7b0e2549-03bf-423a-9cb7-8bcaceb9f550]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.693 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52380549-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.693 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.693 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52380549-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:05:06 np0005593232 nova_compute[250269]: 2026-01-23 10:05:06.695 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:06 np0005593232 NetworkManager[49057]: <info>  [1769162706.6960] manager: (tap52380549-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Jan 23 05:05:06 np0005593232 kernel: tap52380549-80: entered promiscuous mode
Jan 23 05:05:06 np0005593232 nova_compute[250269]: 2026-01-23 10:05:06.701 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.702 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52380549-80, col_values=(('external_ids', {'iface-id': '6a768f9b-b72a-4ed8-9452-da20542ef36a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:05:06 np0005593232 nova_compute[250269]: 2026-01-23 10:05:06.703 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:06 np0005593232 ovn_controller[151001]: 2026-01-23T10:05:06Z|00378|binding|INFO|Releasing lport 6a768f9b-b72a-4ed8-9452-da20542ef36a from this chassis (sb_readonly=0)
Jan 23 05:05:06 np0005593232 nova_compute[250269]: 2026-01-23 10:05:06.723 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:06 np0005593232 nova_compute[250269]: 2026-01-23 10:05:06.728 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.729 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/52380549-883f-4172-950c-7fefbf741501.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/52380549-883f-4172-950c-7fefbf741501.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.730 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[372fc1d6-d112-44ef-9cab-cd31f6e7a09d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.730 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-52380549-883f-4172-950c-7fefbf741501
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/52380549-883f-4172-950c-7fefbf741501.pid.haproxy
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 52380549-883f-4172-950c-7fefbf741501
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:05:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:06.731 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-52380549-883f-4172-950c-7fefbf741501', 'env', 'PROCESS_TAG=haproxy-52380549-883f-4172-950c-7fefbf741501', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/52380549-883f-4172-950c-7fefbf741501.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:05:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Jan 23 05:05:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Jan 23 05:05:07 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Jan 23 05:05:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2223: 321 pgs: 321 active+clean; 339 MiB data, 993 MiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 3.1 MiB/s wr, 164 op/s
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.064 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162707.0637841, 80088a61-e582-44f8-92e2-9d41a6b4a398 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.065 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] VM Started (Lifecycle Event)#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.088 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.093 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162707.0649257, 80088a61-e582-44f8-92e2-9d41a6b4a398 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.093 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.129 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.133 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.166 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:05:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:07.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:07 np0005593232 podman[322943]: 2026-01-23 10:05:07.114851973 +0000 UTC m=+0.022176721 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:05:07 np0005593232 podman[322943]: 2026-01-23 10:05:07.280707636 +0000 UTC m=+0.188032374 container create c21803a7e6a94afebbb5f4392ae98d410ffddd49b8d678ef30e5aeee0d24f2f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52380549-883f-4172-950c-7fefbf741501, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.294 250273 INFO nova.virt.libvirt.driver [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Creating config drive at /var/lib/nova/instances/34c9552e-fca1-4094-96d1-eb627cda17ab/disk.config#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.299 250273 DEBUG oslo_concurrency.processutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/34c9552e-fca1-4094-96d1-eb627cda17ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpurtxjse4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:07 np0005593232 systemd[1]: Started libpod-conmon-c21803a7e6a94afebbb5f4392ae98d410ffddd49b8d678ef30e5aeee0d24f2f9.scope.
Jan 23 05:05:07 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:05:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef32f558e16139cb9d1b327fe9da5e84565d0028af59a62090136c8c0581edb7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:05:07 np0005593232 podman[322943]: 2026-01-23 10:05:07.42020996 +0000 UTC m=+0.327534708 container init c21803a7e6a94afebbb5f4392ae98d410ffddd49b8d678ef30e5aeee0d24f2f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52380549-883f-4172-950c-7fefbf741501, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 05:05:07 np0005593232 podman[322943]: 2026-01-23 10:05:07.428027152 +0000 UTC m=+0.335351880 container start c21803a7e6a94afebbb5f4392ae98d410ffddd49b8d678ef30e5aeee0d24f2f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52380549-883f-4172-950c-7fefbf741501, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.434 250273 DEBUG oslo_concurrency.processutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/34c9552e-fca1-4094-96d1-eb627cda17ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpurtxjse4" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:07 np0005593232 neutron-haproxy-ovnmeta-52380549-883f-4172-950c-7fefbf741501[322961]: [NOTICE]   (322965) : New worker (322981) forked
Jan 23 05:05:07 np0005593232 neutron-haproxy-ovnmeta-52380549-883f-4172-950c-7fefbf741501[322961]: [NOTICE]   (322965) : Loading success.
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.468 250273 DEBUG nova.storage.rbd_utils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 34c9552e-fca1-4094-96d1-eb627cda17ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.472 250273 DEBUG oslo_concurrency.processutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/34c9552e-fca1-4094-96d1-eb627cda17ab/disk.config 34c9552e-fca1-4094-96d1-eb627cda17ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.507 250273 DEBUG nova.compute.manager [req-f4ad0a48-2a09-49e5-abed-766141f948fd req-136aeefb-9d98-45f5-bf23-e6fc25148af5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Received event network-vif-plugged-6d187dfa-30bc-47b4-ae98-8a892252b22d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.508 250273 DEBUG oslo_concurrency.lockutils [req-f4ad0a48-2a09-49e5-abed-766141f948fd req-136aeefb-9d98-45f5-bf23-e6fc25148af5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "80088a61-e582-44f8-92e2-9d41a6b4a398-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.508 250273 DEBUG oslo_concurrency.lockutils [req-f4ad0a48-2a09-49e5-abed-766141f948fd req-136aeefb-9d98-45f5-bf23-e6fc25148af5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "80088a61-e582-44f8-92e2-9d41a6b4a398-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.508 250273 DEBUG oslo_concurrency.lockutils [req-f4ad0a48-2a09-49e5-abed-766141f948fd req-136aeefb-9d98-45f5-bf23-e6fc25148af5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "80088a61-e582-44f8-92e2-9d41a6b4a398-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.508 250273 DEBUG nova.compute.manager [req-f4ad0a48-2a09-49e5-abed-766141f948fd req-136aeefb-9d98-45f5-bf23-e6fc25148af5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Processing event network-vif-plugged-6d187dfa-30bc-47b4-ae98-8a892252b22d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.509 250273 DEBUG nova.compute.manager [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.513 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162707.5133965, 80088a61-e582-44f8-92e2-9d41a6b4a398 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.514 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.517 250273 DEBUG nova.virt.libvirt.driver [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.521 250273 INFO nova.virt.libvirt.driver [-] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Instance spawned successfully.#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.522 250273 DEBUG nova.virt.libvirt.driver [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.561 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.564 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:05:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:05:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:05:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:05:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:05:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:05:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.627 250273 DEBUG nova.virt.libvirt.driver [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.630 250273 DEBUG nova.virt.libvirt.driver [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.631 250273 DEBUG nova.virt.libvirt.driver [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.631 250273 DEBUG nova.virt.libvirt.driver [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.632 250273 DEBUG nova.virt.libvirt.driver [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.632 250273 DEBUG nova.virt.libvirt.driver [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.635 250273 DEBUG oslo_concurrency.processutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/34c9552e-fca1-4094-96d1-eb627cda17ab/disk.config 34c9552e-fca1-4094-96d1-eb627cda17ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.636 250273 INFO nova.virt.libvirt.driver [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Deleting local config drive /var/lib/nova/instances/34c9552e-fca1-4094-96d1-eb627cda17ab/disk.config because it was imported into RBD.#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.638 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:05:07 np0005593232 NetworkManager[49057]: <info>  [1769162707.6875] manager: (tap800b7313-6d): new Tun device (/org/freedesktop/NetworkManager/Devices/191)
Jan 23 05:05:07 np0005593232 kernel: tap800b7313-6d: entered promiscuous mode
Jan 23 05:05:07 np0005593232 ovn_controller[151001]: 2026-01-23T10:05:07Z|00379|binding|INFO|Claiming lport 800b7313-6d9b-4fd7-9175-cdecef348ba1 for this chassis.
Jan 23 05:05:07 np0005593232 ovn_controller[151001]: 2026-01-23T10:05:07Z|00380|binding|INFO|800b7313-6d9b-4fd7-9175-cdecef348ba1: Claiming fa:16:3e:eb:13:67 10.100.0.12
Jan 23 05:05:07 np0005593232 systemd-udevd[322854]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.691 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:07.697 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:13:67 10.100.0.12'], port_security=['fa:16:3e:eb:13:67 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '34c9552e-fca1-4094-96d1-eb627cda17ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c16cd713fa74a88b43e4edf01c273bd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1b3b2b26-a9c9-438c-b14e-9fddf18d8ea5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c3aed5f-30b8-4c57-808e-87764ab67fc8, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=800b7313-6d9b-4fd7-9175-cdecef348ba1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:05:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:07.699 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 800b7313-6d9b-4fd7-9175-cdecef348ba1 in datapath 8575e824-4be0-4206-873e-2f9a3d1ded0b bound to our chassis#033[00m
Jan 23 05:05:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:07.700 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8575e824-4be0-4206-873e-2f9a3d1ded0b#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.700 250273 INFO nova.compute.manager [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Took 12.49 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.700 250273 DEBUG nova.compute.manager [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:05:07 np0005593232 NetworkManager[49057]: <info>  [1769162707.7052] device (tap800b7313-6d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:05:07 np0005593232 NetworkManager[49057]: <info>  [1769162707.7064] device (tap800b7313-6d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:05:07 np0005593232 ovn_controller[151001]: 2026-01-23T10:05:07Z|00381|binding|INFO|Setting lport 800b7313-6d9b-4fd7-9175-cdecef348ba1 ovn-installed in OVS
Jan 23 05:05:07 np0005593232 ovn_controller[151001]: 2026-01-23T10:05:07Z|00382|binding|INFO|Setting lport 800b7313-6d9b-4fd7-9175-cdecef348ba1 up in Southbound
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.716 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:07.715 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e137884c-d879-4ce6-b5d1-763e2b133b5b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.718 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:07.720 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8575e824-41 in ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:05:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:07.725 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8575e824-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:05:07 np0005593232 systemd-machined[215836]: New machine qemu-46-instance-00000070.
Jan 23 05:05:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:07.725 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d16ed162-d78e-463c-b8b2-d9e79b84283b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:07.733 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[fd9c7ce5-e8b6-4748-95e8-03d40edeb7ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:07 np0005593232 systemd[1]: Started Virtual Machine qemu-46-instance-00000070.
Jan 23 05:05:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:07.745 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[0ba5c053-f579-497b-aa0e-66e0a961bcf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:07.761 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d41aa8fe-7013-4f67-9695-4a835bfd28ed]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.792 250273 INFO nova.compute.manager [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Took 13.82 seconds to build instance.#033[00m
Jan 23 05:05:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:07.792 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[c480c5ca-391d-4033-b9b4-c1cc42cfd235]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:07 np0005593232 NetworkManager[49057]: <info>  [1769162707.7994] manager: (tap8575e824-40): new Veth device (/org/freedesktop/NetworkManager/Devices/192)
Jan 23 05:05:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:07.798 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d04fe96c-9fa3-40f3-a6fe-25d27f8c0e6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:07 np0005593232 nova_compute[250269]: 2026-01-23 10:05:07.814 250273 DEBUG oslo_concurrency.lockutils [None req-ec287b72-6b85-43b3-bb38-3bd5ab515b80 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Lock "80088a61-e582-44f8-92e2-9d41a6b4a398" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.962s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:05:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:07.830 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e2925f-f9c3-4ce8-8c30-7bb4d35318e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:07.833 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[0218a5f1-70e8-4ab5-863a-91eb1d884f13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:07 np0005593232 NetworkManager[49057]: <info>  [1769162707.8588] device (tap8575e824-40): carrier: link connected
Jan 23 05:05:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:07.864 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[e50b8f0e-5247-4744-ba6f-d96378e3bfb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:07.881 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[778ceb8f-e459-478e-b92b-e84b13273ba7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8575e824-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:16:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662100, 'reachable_time': 18560, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323044, 'error': None, 'target': 'ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:07.897 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[90b9aab2-9993-4e0d-9bf2-756f0d26ed0e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:16ca'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662100, 'tstamp': 662100}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323045, 'error': None, 'target': 'ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:07.918 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc5e5a7-9aa2-41fb-a5cb-f519abc2e982]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8575e824-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:16:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662100, 'reachable_time': 18560, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323046, 'error': None, 'target': 'ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:07.952 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[645b7006-ba2a-49e7-82f9-08f118b4c897]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:08.013 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8b704337-a8e1-40bb-80bf-3e50606a5646]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:08.015 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8575e824-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:08.015 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:08.016 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8575e824-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:05:08 np0005593232 NetworkManager[49057]: <info>  [1769162708.0191] manager: (tap8575e824-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/193)
Jan 23 05:05:08 np0005593232 kernel: tap8575e824-40: entered promiscuous mode
Jan 23 05:05:08 np0005593232 nova_compute[250269]: 2026-01-23 10:05:08.020 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:08.022 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8575e824-40, col_values=(('external_ids', {'iface-id': 'f7023d86-3158-4cc4-b690-f57bb76e92b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:05:08 np0005593232 ovn_controller[151001]: 2026-01-23T10:05:08Z|00383|binding|INFO|Releasing lport f7023d86-3158-4cc4-b690-f57bb76e92b5 from this chassis (sb_readonly=0)
Jan 23 05:05:08 np0005593232 nova_compute[250269]: 2026-01-23 10:05:08.024 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:08 np0005593232 nova_compute[250269]: 2026-01-23 10:05:08.025 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:08.026 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8575e824-4be0-4206-873e-2f9a3d1ded0b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8575e824-4be0-4206-873e-2f9a3d1ded0b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:08.027 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6b960720-a842-4ee6-9f41-b765ad780739]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:08.028 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-8575e824-4be0-4206-873e-2f9a3d1ded0b
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/8575e824-4be0-4206-873e-2f9a3d1ded0b.pid.haproxy
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 8575e824-4be0-4206-873e-2f9a3d1ded0b
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:05:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:08.028 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'env', 'PROCESS_TAG=haproxy-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8575e824-4be0-4206-873e-2f9a3d1ded0b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:05:08 np0005593232 nova_compute[250269]: 2026-01-23 10:05:08.042 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:05:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:08.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:05:08 np0005593232 podman[323078]: 2026-01-23 10:05:08.483923645 +0000 UTC m=+0.118357724 container create 09863a87bd8a2a15540611efe2547262b2e35d6c07815dc21e07c3904af6cde5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 23 05:05:08 np0005593232 podman[323078]: 2026-01-23 10:05:08.390956883 +0000 UTC m=+0.025390972 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:05:08 np0005593232 systemd[1]: Started libpod-conmon-09863a87bd8a2a15540611efe2547262b2e35d6c07815dc21e07c3904af6cde5.scope.
Jan 23 05:05:08 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:05:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca82ce983b391702a97a93312e0ac7f12a12df67597cde3640bf1eed58e2efdf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:05:08 np0005593232 podman[323078]: 2026-01-23 10:05:08.616010778 +0000 UTC m=+0.250444847 container init 09863a87bd8a2a15540611efe2547262b2e35d6c07815dc21e07c3904af6cde5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 23 05:05:08 np0005593232 podman[323078]: 2026-01-23 10:05:08.624468639 +0000 UTC m=+0.258902708 container start 09863a87bd8a2a15540611efe2547262b2e35d6c07815dc21e07c3904af6cde5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 23 05:05:08 np0005593232 neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b[323093]: [NOTICE]   (323097) : New worker (323099) forked
Jan 23 05:05:08 np0005593232 neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b[323093]: [NOTICE]   (323097) : Loading success.
Jan 23 05:05:08 np0005593232 nova_compute[250269]: 2026-01-23 10:05:08.693 250273 DEBUG nova.compute.manager [req-25f58f9e-bfda-4331-9cbb-19d7f09534c1 req-52e7f575-4944-4ceb-a375-20f93b60069a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Received event network-vif-plugged-800b7313-6d9b-4fd7-9175-cdecef348ba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:05:08 np0005593232 nova_compute[250269]: 2026-01-23 10:05:08.693 250273 DEBUG oslo_concurrency.lockutils [req-25f58f9e-bfda-4331-9cbb-19d7f09534c1 req-52e7f575-4944-4ceb-a375-20f93b60069a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "34c9552e-fca1-4094-96d1-eb627cda17ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:05:08 np0005593232 nova_compute[250269]: 2026-01-23 10:05:08.694 250273 DEBUG oslo_concurrency.lockutils [req-25f58f9e-bfda-4331-9cbb-19d7f09534c1 req-52e7f575-4944-4ceb-a375-20f93b60069a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "34c9552e-fca1-4094-96d1-eb627cda17ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:05:08 np0005593232 nova_compute[250269]: 2026-01-23 10:05:08.694 250273 DEBUG oslo_concurrency.lockutils [req-25f58f9e-bfda-4331-9cbb-19d7f09534c1 req-52e7f575-4944-4ceb-a375-20f93b60069a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "34c9552e-fca1-4094-96d1-eb627cda17ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:05:08 np0005593232 nova_compute[250269]: 2026-01-23 10:05:08.694 250273 DEBUG nova.compute.manager [req-25f58f9e-bfda-4331-9cbb-19d7f09534c1 req-52e7f575-4944-4ceb-a375-20f93b60069a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Processing event network-vif-plugged-800b7313-6d9b-4fd7-9175-cdecef348ba1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:05:08 np0005593232 nova_compute[250269]: 2026-01-23 10:05:08.929 250273 DEBUG nova.compute.manager [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:05:08 np0005593232 nova_compute[250269]: 2026-01-23 10:05:08.930 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162708.9303503, 34c9552e-fca1-4094-96d1-eb627cda17ab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:05:08 np0005593232 nova_compute[250269]: 2026-01-23 10:05:08.930 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] VM Started (Lifecycle Event)#033[00m
Jan 23 05:05:08 np0005593232 nova_compute[250269]: 2026-01-23 10:05:08.937 250273 DEBUG nova.virt.libvirt.driver [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:05:08 np0005593232 nova_compute[250269]: 2026-01-23 10:05:08.942 250273 INFO nova.virt.libvirt.driver [-] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Instance spawned successfully.#033[00m
Jan 23 05:05:08 np0005593232 nova_compute[250269]: 2026-01-23 10:05:08.943 250273 DEBUG nova.virt.libvirt.driver [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:05:08 np0005593232 nova_compute[250269]: 2026-01-23 10:05:08.965 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:05:08 np0005593232 nova_compute[250269]: 2026-01-23 10:05:08.970 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:05:08 np0005593232 nova_compute[250269]: 2026-01-23 10:05:08.974 250273 DEBUG nova.virt.libvirt.driver [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:05:08 np0005593232 nova_compute[250269]: 2026-01-23 10:05:08.974 250273 DEBUG nova.virt.libvirt.driver [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:05:08 np0005593232 nova_compute[250269]: 2026-01-23 10:05:08.975 250273 DEBUG nova.virt.libvirt.driver [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:05:08 np0005593232 nova_compute[250269]: 2026-01-23 10:05:08.975 250273 DEBUG nova.virt.libvirt.driver [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:05:08 np0005593232 nova_compute[250269]: 2026-01-23 10:05:08.975 250273 DEBUG nova.virt.libvirt.driver [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:05:08 np0005593232 nova_compute[250269]: 2026-01-23 10:05:08.976 250273 DEBUG nova.virt.libvirt.driver [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:05:09 np0005593232 nova_compute[250269]: 2026-01-23 10:05:09.009 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:05:09 np0005593232 nova_compute[250269]: 2026-01-23 10:05:09.010 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162708.9341707, 34c9552e-fca1-4094-96d1-eb627cda17ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:05:09 np0005593232 nova_compute[250269]: 2026-01-23 10:05:09.010 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:05:09 np0005593232 nova_compute[250269]: 2026-01-23 10:05:09.040 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:05:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2224: 321 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 311 active+clean; 339 MiB data, 993 MiB used, 20 GiB / 21 GiB avail; 636 KiB/s rd, 423 KiB/s wr, 52 op/s
Jan 23 05:05:09 np0005593232 nova_compute[250269]: 2026-01-23 10:05:09.046 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162708.935315, 34c9552e-fca1-4094-96d1-eb627cda17ab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:05:09 np0005593232 nova_compute[250269]: 2026-01-23 10:05:09.046 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:05:09 np0005593232 nova_compute[250269]: 2026-01-23 10:05:09.066 250273 INFO nova.compute.manager [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Took 13.14 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:05:09 np0005593232 nova_compute[250269]: 2026-01-23 10:05:09.067 250273 DEBUG nova.compute.manager [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:05:09 np0005593232 nova_compute[250269]: 2026-01-23 10:05:09.107 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:05:09 np0005593232 nova_compute[250269]: 2026-01-23 10:05:09.111 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:05:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:09.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:09 np0005593232 nova_compute[250269]: 2026-01-23 10:05:09.260 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:05:09 np0005593232 nova_compute[250269]: 2026-01-23 10:05:09.280 250273 INFO nova.compute.manager [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Took 15.28 seconds to build instance.#033[00m
Jan 23 05:05:09 np0005593232 nova_compute[250269]: 2026-01-23 10:05:09.351 250273 DEBUG nova.network.neutron [req-01d72504-e038-4473-9659-65d5a6459dd0 req-8d96cb6e-a34a-40c8-a3b1-aced264f7e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Updated VIF entry in instance network info cache for port 800b7313-6d9b-4fd7-9175-cdecef348ba1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:05:09 np0005593232 nova_compute[250269]: 2026-01-23 10:05:09.352 250273 DEBUG nova.network.neutron [req-01d72504-e038-4473-9659-65d5a6459dd0 req-8d96cb6e-a34a-40c8-a3b1-aced264f7e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Updating instance_info_cache with network_info: [{"id": "800b7313-6d9b-4fd7-9175-cdecef348ba1", "address": "fa:16:3e:eb:13:67", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap800b7313-6d", "ovs_interfaceid": "800b7313-6d9b-4fd7-9175-cdecef348ba1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:05:09 np0005593232 nova_compute[250269]: 2026-01-23 10:05:09.398 250273 DEBUG oslo_concurrency.lockutils [req-01d72504-e038-4473-9659-65d5a6459dd0 req-8d96cb6e-a34a-40c8-a3b1-aced264f7e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-34c9552e-fca1-4094-96d1-eb627cda17ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:05:09 np0005593232 nova_compute[250269]: 2026-01-23 10:05:09.400 250273 DEBUG oslo_concurrency.lockutils [None req-63c749e0-247d-4790-bbd0-8dd0ee600c13 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "34c9552e-fca1-4094-96d1-eb627cda17ab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.506s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:05:09 np0005593232 nova_compute[250269]: 2026-01-23 10:05:09.610 250273 DEBUG nova.compute.manager [req-151cce29-f516-4403-9be3-370193bf5c8d req-7622d421-404a-4386-a771-ab6fe2d8425a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Received event network-vif-plugged-6d187dfa-30bc-47b4-ae98-8a892252b22d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:05:09 np0005593232 nova_compute[250269]: 2026-01-23 10:05:09.611 250273 DEBUG oslo_concurrency.lockutils [req-151cce29-f516-4403-9be3-370193bf5c8d req-7622d421-404a-4386-a771-ab6fe2d8425a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "80088a61-e582-44f8-92e2-9d41a6b4a398-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:05:09 np0005593232 nova_compute[250269]: 2026-01-23 10:05:09.611 250273 DEBUG oslo_concurrency.lockutils [req-151cce29-f516-4403-9be3-370193bf5c8d req-7622d421-404a-4386-a771-ab6fe2d8425a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "80088a61-e582-44f8-92e2-9d41a6b4a398-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:05:09 np0005593232 nova_compute[250269]: 2026-01-23 10:05:09.611 250273 DEBUG oslo_concurrency.lockutils [req-151cce29-f516-4403-9be3-370193bf5c8d req-7622d421-404a-4386-a771-ab6fe2d8425a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "80088a61-e582-44f8-92e2-9d41a6b4a398-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:05:09 np0005593232 nova_compute[250269]: 2026-01-23 10:05:09.611 250273 DEBUG nova.compute.manager [req-151cce29-f516-4403-9be3-370193bf5c8d req-7622d421-404a-4386-a771-ab6fe2d8425a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] No waiting events found dispatching network-vif-plugged-6d187dfa-30bc-47b4-ae98-8a892252b22d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:05:09 np0005593232 nova_compute[250269]: 2026-01-23 10:05:09.611 250273 WARNING nova.compute.manager [req-151cce29-f516-4403-9be3-370193bf5c8d req-7622d421-404a-4386-a771-ab6fe2d8425a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Received unexpected event network-vif-plugged-6d187dfa-30bc-47b4-ae98-8a892252b22d for instance with vm_state active and task_state None.#033[00m
Jan 23 05:05:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:05:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:10.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:05:10 np0005593232 nova_compute[250269]: 2026-01-23 10:05:10.452 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:10 np0005593232 nova_compute[250269]: 2026-01-23 10:05:10.812 250273 DEBUG nova.compute.manager [req-c021b746-9824-458c-a4fb-adefd33590f5 req-49d27f59-ee61-4131-96b4-ca15620a346e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Received event network-vif-plugged-800b7313-6d9b-4fd7-9175-cdecef348ba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:05:10 np0005593232 nova_compute[250269]: 2026-01-23 10:05:10.812 250273 DEBUG oslo_concurrency.lockutils [req-c021b746-9824-458c-a4fb-adefd33590f5 req-49d27f59-ee61-4131-96b4-ca15620a346e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "34c9552e-fca1-4094-96d1-eb627cda17ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:05:10 np0005593232 nova_compute[250269]: 2026-01-23 10:05:10.812 250273 DEBUG oslo_concurrency.lockutils [req-c021b746-9824-458c-a4fb-adefd33590f5 req-49d27f59-ee61-4131-96b4-ca15620a346e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "34c9552e-fca1-4094-96d1-eb627cda17ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:05:10 np0005593232 nova_compute[250269]: 2026-01-23 10:05:10.813 250273 DEBUG oslo_concurrency.lockutils [req-c021b746-9824-458c-a4fb-adefd33590f5 req-49d27f59-ee61-4131-96b4-ca15620a346e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "34c9552e-fca1-4094-96d1-eb627cda17ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:05:10 np0005593232 nova_compute[250269]: 2026-01-23 10:05:10.813 250273 DEBUG nova.compute.manager [req-c021b746-9824-458c-a4fb-adefd33590f5 req-49d27f59-ee61-4131-96b4-ca15620a346e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] No waiting events found dispatching network-vif-plugged-800b7313-6d9b-4fd7-9175-cdecef348ba1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:05:10 np0005593232 nova_compute[250269]: 2026-01-23 10:05:10.813 250273 WARNING nova.compute.manager [req-c021b746-9824-458c-a4fb-adefd33590f5 req-49d27f59-ee61-4131-96b4-ca15620a346e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Received unexpected event network-vif-plugged-800b7313-6d9b-4fd7-9175-cdecef348ba1 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:05:11 np0005593232 nova_compute[250269]: 2026-01-23 10:05:11.033 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2225: 321 pgs: 6 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 311 active+clean; 339 MiB data, 993 MiB used, 20 GiB / 21 GiB avail; 636 KiB/s rd, 423 KiB/s wr, 52 op/s
Jan 23 05:05:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:11.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:05:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:05:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:12.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:05:12 np0005593232 podman[323152]: 2026-01-23 10:05:12.434838512 +0000 UTC m=+0.091005387 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 05:05:12 np0005593232 nova_compute[250269]: 2026-01-23 10:05:12.452 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:12.452 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:05:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:12.453 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:05:12 np0005593232 nova_compute[250269]: 2026-01-23 10:05:12.612 250273 DEBUG nova.compute.manager [req-a139dc0a-f943-4453-9524-e9918f37485d req-852e8b7f-f4bd-42cc-8b54-081735cea3d0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Received event network-changed-800b7313-6d9b-4fd7-9175-cdecef348ba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:05:12 np0005593232 nova_compute[250269]: 2026-01-23 10:05:12.613 250273 DEBUG nova.compute.manager [req-a139dc0a-f943-4453-9524-e9918f37485d req-852e8b7f-f4bd-42cc-8b54-081735cea3d0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Refreshing instance network info cache due to event network-changed-800b7313-6d9b-4fd7-9175-cdecef348ba1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:05:12 np0005593232 nova_compute[250269]: 2026-01-23 10:05:12.613 250273 DEBUG oslo_concurrency.lockutils [req-a139dc0a-f943-4453-9524-e9918f37485d req-852e8b7f-f4bd-42cc-8b54-081735cea3d0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-34c9552e-fca1-4094-96d1-eb627cda17ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:05:12 np0005593232 nova_compute[250269]: 2026-01-23 10:05:12.614 250273 DEBUG oslo_concurrency.lockutils [req-a139dc0a-f943-4453-9524-e9918f37485d req-852e8b7f-f4bd-42cc-8b54-081735cea3d0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-34c9552e-fca1-4094-96d1-eb627cda17ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:05:12 np0005593232 nova_compute[250269]: 2026-01-23 10:05:12.614 250273 DEBUG nova.network.neutron [req-a139dc0a-f943-4453-9524-e9918f37485d req-852e8b7f-f4bd-42cc-8b54-081735cea3d0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Refreshing network info cache for port 800b7313-6d9b-4fd7-9175-cdecef348ba1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:05:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2226: 321 pgs: 321 active+clean; 372 MiB data, 1023 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 2.6 MiB/s wr, 355 op/s
Jan 23 05:05:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:13.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:05:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:14.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:05:14 np0005593232 nova_compute[250269]: 2026-01-23 10:05:14.603 250273 DEBUG nova.network.neutron [req-a139dc0a-f943-4453-9524-e9918f37485d req-852e8b7f-f4bd-42cc-8b54-081735cea3d0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Updated VIF entry in instance network info cache for port 800b7313-6d9b-4fd7-9175-cdecef348ba1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:05:14 np0005593232 nova_compute[250269]: 2026-01-23 10:05:14.604 250273 DEBUG nova.network.neutron [req-a139dc0a-f943-4453-9524-e9918f37485d req-852e8b7f-f4bd-42cc-8b54-081735cea3d0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Updating instance_info_cache with network_info: [{"id": "800b7313-6d9b-4fd7-9175-cdecef348ba1", "address": "fa:16:3e:eb:13:67", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap800b7313-6d", "ovs_interfaceid": "800b7313-6d9b-4fd7-9175-cdecef348ba1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:05:14 np0005593232 nova_compute[250269]: 2026-01-23 10:05:14.636 250273 DEBUG oslo_concurrency.lockutils [req-a139dc0a-f943-4453-9524-e9918f37485d req-852e8b7f-f4bd-42cc-8b54-081735cea3d0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-34c9552e-fca1-4094-96d1-eb627cda17ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:05:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2227: 321 pgs: 321 active+clean; 372 MiB data, 1023 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 2.6 MiB/s wr, 355 op/s
Jan 23 05:05:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:05:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:15.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.456 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.515 250273 DEBUG oslo_concurrency.lockutils [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Acquiring lock "80088a61-e582-44f8-92e2-9d41a6b4a398" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.516 250273 DEBUG oslo_concurrency.lockutils [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Lock "80088a61-e582-44f8-92e2-9d41a6b4a398" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.516 250273 DEBUG oslo_concurrency.lockutils [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Acquiring lock "80088a61-e582-44f8-92e2-9d41a6b4a398-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.517 250273 DEBUG oslo_concurrency.lockutils [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Lock "80088a61-e582-44f8-92e2-9d41a6b4a398-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.517 250273 DEBUG oslo_concurrency.lockutils [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Lock "80088a61-e582-44f8-92e2-9d41a6b4a398-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.518 250273 INFO nova.compute.manager [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Terminating instance#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.519 250273 DEBUG nova.compute.manager [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:05:15 np0005593232 kernel: tap6d187dfa-30 (unregistering): left promiscuous mode
Jan 23 05:05:15 np0005593232 NetworkManager[49057]: <info>  [1769162715.5612] device (tap6d187dfa-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:05:15 np0005593232 ovn_controller[151001]: 2026-01-23T10:05:15Z|00384|binding|INFO|Releasing lport 6d187dfa-30bc-47b4-ae98-8a892252b22d from this chassis (sb_readonly=0)
Jan 23 05:05:15 np0005593232 ovn_controller[151001]: 2026-01-23T10:05:15Z|00385|binding|INFO|Setting lport 6d187dfa-30bc-47b4-ae98-8a892252b22d down in Southbound
Jan 23 05:05:15 np0005593232 ovn_controller[151001]: 2026-01-23T10:05:15Z|00386|binding|INFO|Removing iface tap6d187dfa-30 ovn-installed in OVS
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.571 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.572 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:15.579 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:98:8f 10.100.0.7'], port_security=['fa:16:3e:c1:98:8f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '80088a61-e582-44f8-92e2-9d41a6b4a398', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52380549-883f-4172-950c-7fefbf741501', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddad636b2dd940bf9c024a1ac19616e1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e94d9501-6e08-459f-af0d-365420b85d02', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2ade1a77-53d2-4fea-9b09-f18cd4772087, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=6d187dfa-30bc-47b4-ae98-8a892252b22d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:05:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:15.580 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 6d187dfa-30bc-47b4-ae98-8a892252b22d in datapath 52380549-883f-4172-950c-7fefbf741501 unbound from our chassis#033[00m
Jan 23 05:05:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:15.582 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 52380549-883f-4172-950c-7fefbf741501, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:05:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:15.582 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[bf006ddc-6a8b-441d-ad08-08ee03def7bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:15.583 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-52380549-883f-4172-950c-7fefbf741501 namespace which is not needed anymore#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.592 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:15 np0005593232 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000071.scope: Deactivated successfully.
Jan 23 05:05:15 np0005593232 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000071.scope: Consumed 8.835s CPU time.
Jan 23 05:05:15 np0005593232 systemd-machined[215836]: Machine qemu-45-instance-00000071 terminated.
Jan 23 05:05:15 np0005593232 neutron-haproxy-ovnmeta-52380549-883f-4172-950c-7fefbf741501[322961]: [NOTICE]   (322965) : haproxy version is 2.8.14-c23fe91
Jan 23 05:05:15 np0005593232 neutron-haproxy-ovnmeta-52380549-883f-4172-950c-7fefbf741501[322961]: [NOTICE]   (322965) : path to executable is /usr/sbin/haproxy
Jan 23 05:05:15 np0005593232 neutron-haproxy-ovnmeta-52380549-883f-4172-950c-7fefbf741501[322961]: [WARNING]  (322965) : Exiting Master process...
Jan 23 05:05:15 np0005593232 neutron-haproxy-ovnmeta-52380549-883f-4172-950c-7fefbf741501[322961]: [ALERT]    (322965) : Current worker (322981) exited with code 143 (Terminated)
Jan 23 05:05:15 np0005593232 neutron-haproxy-ovnmeta-52380549-883f-4172-950c-7fefbf741501[322961]: [WARNING]  (322965) : All workers exited. Exiting... (0)
Jan 23 05:05:15 np0005593232 systemd[1]: libpod-c21803a7e6a94afebbb5f4392ae98d410ffddd49b8d678ef30e5aeee0d24f2f9.scope: Deactivated successfully.
Jan 23 05:05:15 np0005593232 podman[323204]: 2026-01-23 10:05:15.714467254 +0000 UTC m=+0.046612486 container died c21803a7e6a94afebbb5f4392ae98d410ffddd49b8d678ef30e5aeee0d24f2f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52380549-883f-4172-950c-7fefbf741501, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.738 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:15 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c21803a7e6a94afebbb5f4392ae98d410ffddd49b8d678ef30e5aeee0d24f2f9-userdata-shm.mount: Deactivated successfully.
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.743 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:15 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ef32f558e16139cb9d1b327fe9da5e84565d0028af59a62090136c8c0581edb7-merged.mount: Deactivated successfully.
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.753 250273 INFO nova.virt.libvirt.driver [-] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Instance destroyed successfully.#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.754 250273 DEBUG nova.objects.instance [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Lazy-loading 'resources' on Instance uuid 80088a61-e582-44f8-92e2-9d41a6b4a398 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:05:15 np0005593232 podman[323204]: 2026-01-23 10:05:15.765214826 +0000 UTC m=+0.097360058 container cleanup c21803a7e6a94afebbb5f4392ae98d410ffddd49b8d678ef30e5aeee0d24f2f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52380549-883f-4172-950c-7fefbf741501, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.769 250273 DEBUG nova.virt.libvirt.vif [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:04:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-843835794',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-843835794',id=113,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:05:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ddad636b2dd940bf9c024a1ac19616e1',ramdisk_id='',reservation_id='r-jcxi180w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerTagsTestJSON-448539251',owner_user_name='tempest-ServerTagsTestJSON-448539251-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:05:07Z,user_data=None,user_id='7fc54d7ba4d1442d956e0d30350325a2',uuid=80088a61-e582-44f8-92e2-9d41a6b4a398,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6d187dfa-30bc-47b4-ae98-8a892252b22d", "address": "fa:16:3e:c1:98:8f", "network": {"id": "52380549-883f-4172-950c-7fefbf741501", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-355830193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddad636b2dd940bf9c024a1ac19616e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d187dfa-30", "ovs_interfaceid": "6d187dfa-30bc-47b4-ae98-8a892252b22d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.770 250273 DEBUG nova.network.os_vif_util [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Converting VIF {"id": "6d187dfa-30bc-47b4-ae98-8a892252b22d", "address": "fa:16:3e:c1:98:8f", "network": {"id": "52380549-883f-4172-950c-7fefbf741501", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-355830193-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddad636b2dd940bf9c024a1ac19616e1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d187dfa-30", "ovs_interfaceid": "6d187dfa-30bc-47b4-ae98-8a892252b22d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.770 250273 DEBUG nova.network.os_vif_util [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:98:8f,bridge_name='br-int',has_traffic_filtering=True,id=6d187dfa-30bc-47b4-ae98-8a892252b22d,network=Network(52380549-883f-4172-950c-7fefbf741501),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d187dfa-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.771 250273 DEBUG os_vif [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:98:8f,bridge_name='br-int',has_traffic_filtering=True,id=6d187dfa-30bc-47b4-ae98-8a892252b22d,network=Network(52380549-883f-4172-950c-7fefbf741501),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d187dfa-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.773 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.774 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d187dfa-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.775 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:15 np0005593232 systemd[1]: libpod-conmon-c21803a7e6a94afebbb5f4392ae98d410ffddd49b8d678ef30e5aeee0d24f2f9.scope: Deactivated successfully.
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.778 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.781 250273 INFO os_vif [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:98:8f,bridge_name='br-int',has_traffic_filtering=True,id=6d187dfa-30bc-47b4-ae98-8a892252b22d,network=Network(52380549-883f-4172-950c-7fefbf741501),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d187dfa-30')#033[00m
Jan 23 05:05:15 np0005593232 podman[323221]: 2026-01-23 10:05:15.807701793 +0000 UTC m=+0.073001535 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 23 05:05:15 np0005593232 podman[323253]: 2026-01-23 10:05:15.838791626 +0000 UTC m=+0.045445582 container remove c21803a7e6a94afebbb5f4392ae98d410ffddd49b8d678ef30e5aeee0d24f2f9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52380549-883f-4172-950c-7fefbf741501, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:05:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:15.846 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[699a01bc-8a7b-46dd-9b67-6c190053a300]: (4, ('Fri Jan 23 10:05:15 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-52380549-883f-4172-950c-7fefbf741501 (c21803a7e6a94afebbb5f4392ae98d410ffddd49b8d678ef30e5aeee0d24f2f9)\nc21803a7e6a94afebbb5f4392ae98d410ffddd49b8d678ef30e5aeee0d24f2f9\nFri Jan 23 10:05:15 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-52380549-883f-4172-950c-7fefbf741501 (c21803a7e6a94afebbb5f4392ae98d410ffddd49b8d678ef30e5aeee0d24f2f9)\nc21803a7e6a94afebbb5f4392ae98d410ffddd49b8d678ef30e5aeee0d24f2f9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:15.847 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1fc436fc-4a7b-4181-b578-1a6153e13c41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:15.848 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52380549-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.850 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:15 np0005593232 kernel: tap52380549-80: left promiscuous mode
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.868 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.873 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:15.875 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ca3c6b84-3c09-40d0-97da-ab637a1820a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:15.891 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2b3ec28d-8908-4632-9a82-58985563a479]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:15.892 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b22435ba-7721-4a7d-9351-f3cc5b7fb9c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:15.912 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e80d0fc2-37db-4e00-a682-dc47e0e9078a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 661961, 'reachable_time': 31906, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323292, 'error': None, 'target': 'ovnmeta-52380549-883f-4172-950c-7fefbf741501', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:15 np0005593232 systemd[1]: run-netns-ovnmeta\x2d52380549\x2d883f\x2d4172\x2d950c\x2d7fefbf741501.mount: Deactivated successfully.
Jan 23 05:05:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:15.914 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-52380549-883f-4172-950c-7fefbf741501 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:05:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:15.915 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[13188850-eaa4-4f13-88c4-28580648f23c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.938 250273 DEBUG nova.compute.manager [req-b90ba548-e115-424b-94cc-310af65f6e9a req-44689644-355c-49ce-b88b-d0d8b248b7b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Received event network-vif-unplugged-6d187dfa-30bc-47b4-ae98-8a892252b22d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.938 250273 DEBUG oslo_concurrency.lockutils [req-b90ba548-e115-424b-94cc-310af65f6e9a req-44689644-355c-49ce-b88b-d0d8b248b7b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "80088a61-e582-44f8-92e2-9d41a6b4a398-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.939 250273 DEBUG oslo_concurrency.lockutils [req-b90ba548-e115-424b-94cc-310af65f6e9a req-44689644-355c-49ce-b88b-d0d8b248b7b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "80088a61-e582-44f8-92e2-9d41a6b4a398-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.939 250273 DEBUG oslo_concurrency.lockutils [req-b90ba548-e115-424b-94cc-310af65f6e9a req-44689644-355c-49ce-b88b-d0d8b248b7b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "80088a61-e582-44f8-92e2-9d41a6b4a398-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.939 250273 DEBUG nova.compute.manager [req-b90ba548-e115-424b-94cc-310af65f6e9a req-44689644-355c-49ce-b88b-d0d8b248b7b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] No waiting events found dispatching network-vif-unplugged-6d187dfa-30bc-47b4-ae98-8a892252b22d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:05:15 np0005593232 nova_compute[250269]: 2026-01-23 10:05:15.939 250273 DEBUG nova.compute.manager [req-b90ba548-e115-424b-94cc-310af65f6e9a req-44689644-355c-49ce-b88b-d0d8b248b7b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Received event network-vif-unplugged-6d187dfa-30bc-47b4-ae98-8a892252b22d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:05:16 np0005593232 nova_compute[250269]: 2026-01-23 10:05:16.035 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:05:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:16.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:05:16 np0005593232 nova_compute[250269]: 2026-01-23 10:05:16.321 250273 INFO nova.virt.libvirt.driver [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Deleting instance files /var/lib/nova/instances/80088a61-e582-44f8-92e2-9d41a6b4a398_del#033[00m
Jan 23 05:05:16 np0005593232 nova_compute[250269]: 2026-01-23 10:05:16.322 250273 INFO nova.virt.libvirt.driver [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Deletion of /var/lib/nova/instances/80088a61-e582-44f8-92e2-9d41a6b4a398_del complete#033[00m
Jan 23 05:05:16 np0005593232 nova_compute[250269]: 2026-01-23 10:05:16.383 250273 INFO nova.compute.manager [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:05:16 np0005593232 nova_compute[250269]: 2026-01-23 10:05:16.384 250273 DEBUG oslo.service.loopingcall [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:05:16 np0005593232 nova_compute[250269]: 2026-01-23 10:05:16.384 250273 DEBUG nova.compute.manager [-] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:05:16 np0005593232 nova_compute[250269]: 2026-01-23 10:05:16.385 250273 DEBUG nova.network.neutron [-] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:05:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:05:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Jan 23 05:05:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Jan 23 05:05:16 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Jan 23 05:05:16 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Jan 23 05:05:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:05:16.651991) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:05:16 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Jan 23 05:05:16 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162716652084, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1583, "num_deletes": 258, "total_data_size": 2556552, "memory_usage": 2600416, "flush_reason": "Manual Compaction"}
Jan 23 05:05:16 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Jan 23 05:05:16 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162716678311, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 2506998, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48155, "largest_seqno": 49737, "table_properties": {"data_size": 2499629, "index_size": 4248, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16261, "raw_average_key_size": 20, "raw_value_size": 2484539, "raw_average_value_size": 3113, "num_data_blocks": 184, "num_entries": 798, "num_filter_entries": 798, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769162586, "oldest_key_time": 1769162586, "file_creation_time": 1769162716, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:05:16 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 26375 microseconds, and 6805 cpu microseconds.
Jan 23 05:05:16 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:05:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:05:16.678356) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 2506998 bytes OK
Jan 23 05:05:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:05:16.678391) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Jan 23 05:05:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:05:16.680220) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Jan 23 05:05:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:05:16.680276) EVENT_LOG_v1 {"time_micros": 1769162716680263, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:05:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:05:16.680304) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:05:16 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 2549593, prev total WAL file size 2549593, number of live WAL files 2.
Jan 23 05:05:16 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:05:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:05:16.681355) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373535' seq:72057594037927935, type:22 .. '6C6F676D0032303037' seq:0, type:0; will stop at (end)
Jan 23 05:05:16 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:05:16 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(2448KB)], [107(10MB)]
Jan 23 05:05:16 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162716681433, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 13360681, "oldest_snapshot_seqno": -1}
Jan 23 05:05:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2229: 321 pgs: 321 active+clean; 342 MiB data, 995 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 2.8 MiB/s wr, 391 op/s
Jan 23 05:05:17 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 7695 keys, 13208512 bytes, temperature: kUnknown
Jan 23 05:05:17 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162717091793, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 13208512, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13155555, "index_size": 32621, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19269, "raw_key_size": 198476, "raw_average_key_size": 25, "raw_value_size": 13016808, "raw_average_value_size": 1691, "num_data_blocks": 1296, "num_entries": 7695, "num_filter_entries": 7695, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769162716, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:05:17 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:05:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:17.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:17 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:05:17.092381) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 13208512 bytes
Jan 23 05:05:17 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:05:17.271767) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 32.5 rd, 32.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 10.4 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(10.6) write-amplify(5.3) OK, records in: 8231, records dropped: 536 output_compression: NoCompression
Jan 23 05:05:17 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:05:17.271817) EVENT_LOG_v1 {"time_micros": 1769162717271794, "job": 64, "event": "compaction_finished", "compaction_time_micros": 410574, "compaction_time_cpu_micros": 33072, "output_level": 6, "num_output_files": 1, "total_output_size": 13208512, "num_input_records": 8231, "num_output_records": 7695, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:05:17 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:05:17 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162717272501, "job": 64, "event": "table_file_deletion", "file_number": 109}
Jan 23 05:05:17 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:05:17 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162717274147, "job": 64, "event": "table_file_deletion", "file_number": 107}
Jan 23 05:05:17 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:05:16.681229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:05:17 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:05:17.274259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:05:17 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:05:17.274266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:05:17 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:05:17.274269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:05:17 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:05:17.274270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:05:17 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:05:17.274273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:05:18 np0005593232 nova_compute[250269]: 2026-01-23 10:05:18.055 250273 DEBUG nova.compute.manager [req-8d8a7429-a468-4f26-86e6-4be0a8e47450 req-798f3d7c-9832-4384-bc46-f587c2b64657 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Received event network-vif-plugged-6d187dfa-30bc-47b4-ae98-8a892252b22d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:05:18 np0005593232 nova_compute[250269]: 2026-01-23 10:05:18.056 250273 DEBUG oslo_concurrency.lockutils [req-8d8a7429-a468-4f26-86e6-4be0a8e47450 req-798f3d7c-9832-4384-bc46-f587c2b64657 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "80088a61-e582-44f8-92e2-9d41a6b4a398-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:05:18 np0005593232 nova_compute[250269]: 2026-01-23 10:05:18.056 250273 DEBUG oslo_concurrency.lockutils [req-8d8a7429-a468-4f26-86e6-4be0a8e47450 req-798f3d7c-9832-4384-bc46-f587c2b64657 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "80088a61-e582-44f8-92e2-9d41a6b4a398-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:05:18 np0005593232 nova_compute[250269]: 2026-01-23 10:05:18.056 250273 DEBUG oslo_concurrency.lockutils [req-8d8a7429-a468-4f26-86e6-4be0a8e47450 req-798f3d7c-9832-4384-bc46-f587c2b64657 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "80088a61-e582-44f8-92e2-9d41a6b4a398-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:05:18 np0005593232 nova_compute[250269]: 2026-01-23 10:05:18.056 250273 DEBUG nova.compute.manager [req-8d8a7429-a468-4f26-86e6-4be0a8e47450 req-798f3d7c-9832-4384-bc46-f587c2b64657 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] No waiting events found dispatching network-vif-plugged-6d187dfa-30bc-47b4-ae98-8a892252b22d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:05:18 np0005593232 nova_compute[250269]: 2026-01-23 10:05:18.056 250273 WARNING nova.compute.manager [req-8d8a7429-a468-4f26-86e6-4be0a8e47450 req-798f3d7c-9832-4384-bc46-f587c2b64657 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Received unexpected event network-vif-plugged-6d187dfa-30bc-47b4-ae98-8a892252b22d for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:05:18 np0005593232 nova_compute[250269]: 2026-01-23 10:05:18.262 250273 DEBUG nova.network.neutron [-] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:05:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:05:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:18.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:05:18 np0005593232 nova_compute[250269]: 2026-01-23 10:05:18.411 250273 INFO nova.compute.manager [-] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Took 2.03 seconds to deallocate network for instance.#033[00m
Jan 23 05:05:18 np0005593232 nova_compute[250269]: 2026-01-23 10:05:18.465 250273 DEBUG oslo_concurrency.lockutils [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:05:18 np0005593232 nova_compute[250269]: 2026-01-23 10:05:18.465 250273 DEBUG oslo_concurrency.lockutils [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:05:18 np0005593232 nova_compute[250269]: 2026-01-23 10:05:18.570 250273 DEBUG oslo_concurrency.processutils [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2230: 321 pgs: 321 active+clean; 285 MiB data, 971 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 4.3 MiB/s wr, 418 op/s
Jan 23 05:05:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:05:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/469673078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:05:19 np0005593232 nova_compute[250269]: 2026-01-23 10:05:19.071 250273 DEBUG oslo_concurrency.processutils [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:19 np0005593232 nova_compute[250269]: 2026-01-23 10:05:19.078 250273 DEBUG nova.compute.provider_tree [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:05:19 np0005593232 nova_compute[250269]: 2026-01-23 10:05:19.103 250273 DEBUG nova.scheduler.client.report [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:05:19 np0005593232 nova_compute[250269]: 2026-01-23 10:05:19.128 250273 DEBUG oslo_concurrency.lockutils [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:05:19 np0005593232 nova_compute[250269]: 2026-01-23 10:05:19.167 250273 INFO nova.scheduler.client.report [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Deleted allocations for instance 80088a61-e582-44f8-92e2-9d41a6b4a398#033[00m
Jan 23 05:05:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:19.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:19 np0005593232 nova_compute[250269]: 2026-01-23 10:05:19.256 250273 DEBUG oslo_concurrency.lockutils [None req-ba18bb45-1f76-439f-b882-67cbaa59cfc0 7fc54d7ba4d1442d956e0d30350325a2 ddad636b2dd940bf9c024a1ac19616e1 - - default default] Lock "80088a61-e582-44f8-92e2-9d41a6b4a398" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:05:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:19.454 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:05:20 np0005593232 nova_compute[250269]: 2026-01-23 10:05:20.106 250273 DEBUG nova.compute.manager [req-df87e497-c63e-483f-81d4-6d958f420ba7 req-d54a81be-10d5-42a0-92be-44dc08ebf410 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Received event network-vif-deleted-6d187dfa-30bc-47b4-ae98-8a892252b22d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:05:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:20.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:20 np0005593232 nova_compute[250269]: 2026-01-23 10:05:20.778 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2231: 321 pgs: 321 active+clean; 285 MiB data, 971 MiB used, 20 GiB / 21 GiB avail; 7.4 MiB/s rd, 4.3 MiB/s wr, 417 op/s
Jan 23 05:05:21 np0005593232 nova_compute[250269]: 2026-01-23 10:05:21.089 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:21.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:05:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:22.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2232: 321 pgs: 321 active+clean; 324 MiB data, 984 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.6 MiB/s wr, 240 op/s
Jan 23 05:05:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:23.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:24.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:24 np0005593232 ovn_controller[151001]: 2026-01-23T10:05:24Z|00387|binding|INFO|Releasing lport f7023d86-3158-4cc4-b690-f57bb76e92b5 from this chassis (sb_readonly=0)
Jan 23 05:05:24 np0005593232 nova_compute[250269]: 2026-01-23 10:05:24.478 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2233: 321 pgs: 321 active+clean; 324 MiB data, 984 MiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.6 MiB/s wr, 240 op/s
Jan 23 05:05:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:25.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:25 np0005593232 nova_compute[250269]: 2026-01-23 10:05:25.809 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:26 np0005593232 nova_compute[250269]: 2026-01-23 10:05:26.091 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:26.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:05:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2234: 321 pgs: 321 active+clean; 345 MiB data, 995 MiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 5.2 MiB/s wr, 242 op/s
Jan 23 05:05:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:05:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:27.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:05:27 np0005593232 nova_compute[250269]: 2026-01-23 10:05:27.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:05:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:05:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:28.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:05:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2235: 321 pgs: 321 active+clean; 387 MiB data, 1011 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 6.0 MiB/s wr, 245 op/s
Jan 23 05:05:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:05:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:29.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:05:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:30.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:30 np0005593232 nova_compute[250269]: 2026-01-23 10:05:30.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:05:30 np0005593232 nova_compute[250269]: 2026-01-23 10:05:30.311 250273 DEBUG oslo_concurrency.lockutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Acquiring lock "24f5e3af-2c13-43f5-a624-dc229b9023e3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:05:30 np0005593232 nova_compute[250269]: 2026-01-23 10:05:30.312 250273 DEBUG oslo_concurrency.lockutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lock "24f5e3af-2c13-43f5-a624-dc229b9023e3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:05:30 np0005593232 nova_compute[250269]: 2026-01-23 10:05:30.342 250273 DEBUG nova.compute.manager [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 23 05:05:30 np0005593232 nova_compute[250269]: 2026-01-23 10:05:30.484 250273 DEBUG oslo_concurrency.lockutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:05:30 np0005593232 nova_compute[250269]: 2026-01-23 10:05:30.484 250273 DEBUG oslo_concurrency.lockutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:05:30 np0005593232 nova_compute[250269]: 2026-01-23 10:05:30.493 250273 DEBUG nova.virt.hardware [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 23 05:05:30 np0005593232 nova_compute[250269]: 2026-01-23 10:05:30.494 250273 INFO nova.compute.claims [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Claim successful on node compute-0.ctlplane.example.com
Jan 23 05:05:30 np0005593232 nova_compute[250269]: 2026-01-23 10:05:30.755 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769162715.7507832, 80088a61-e582-44f8-92e2-9d41a6b4a398 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 05:05:30 np0005593232 nova_compute[250269]: 2026-01-23 10:05:30.755 250273 INFO nova.compute.manager [-] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] VM Stopped (Lifecycle Event)
Jan 23 05:05:30 np0005593232 nova_compute[250269]: 2026-01-23 10:05:30.760 250273 DEBUG oslo_concurrency.lockutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "483afeac-561b-48ff-89d6-d02d1b615fc9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:05:30 np0005593232 nova_compute[250269]: 2026-01-23 10:05:30.760 250273 DEBUG oslo_concurrency.lockutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "483afeac-561b-48ff-89d6-d02d1b615fc9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:05:30 np0005593232 nova_compute[250269]: 2026-01-23 10:05:30.807 250273 DEBUG nova.compute.manager [None req-f71debf9-c1f5-4217-aded-a6b2f24b2513 - - - - - -] [instance: 80088a61-e582-44f8-92e2-9d41a6b4a398] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:05:30 np0005593232 nova_compute[250269]: 2026-01-23 10:05:30.808 250273 DEBUG nova.compute.manager [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 23 05:05:30 np0005593232 nova_compute[250269]: 2026-01-23 10:05:30.819 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:05:30 np0005593232 nova_compute[250269]: 2026-01-23 10:05:30.887 250273 DEBUG oslo_concurrency.processutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:05:30 np0005593232 nova_compute[250269]: 2026-01-23 10:05:30.916 250273 DEBUG oslo_concurrency.lockutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:05:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2236: 321 pgs: 321 active+clean; 387 MiB data, 1011 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 4.7 MiB/s wr, 204 op/s
Jan 23 05:05:31 np0005593232 nova_compute[250269]: 2026-01-23 10:05:31.093 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:05:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:05:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:31.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:05:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:05:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/14338178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:05:31 np0005593232 nova_compute[250269]: 2026-01-23 10:05:31.325 250273 DEBUG oslo_concurrency.processutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:05:31 np0005593232 nova_compute[250269]: 2026-01-23 10:05:31.331 250273 DEBUG nova.compute.provider_tree [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 05:05:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:05:31 np0005593232 nova_compute[250269]: 2026-01-23 10:05:31.646 250273 DEBUG nova.scheduler.client.report [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 05:05:31 np0005593232 nova_compute[250269]: 2026-01-23 10:05:31.675 250273 DEBUG oslo_concurrency.lockutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:05:31 np0005593232 nova_compute[250269]: 2026-01-23 10:05:31.676 250273 DEBUG nova.compute.manager [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 05:05:31 np0005593232 nova_compute[250269]: 2026-01-23 10:05:31.678 250273 DEBUG oslo_concurrency.lockutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:05:31 np0005593232 nova_compute[250269]: 2026-01-23 10:05:31.685 250273 DEBUG nova.virt.hardware [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 23 05:05:31 np0005593232 nova_compute[250269]: 2026-01-23 10:05:31.686 250273 INFO nova.compute.claims [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Claim successful on node compute-0.ctlplane.example.com
Jan 23 05:05:31 np0005593232 nova_compute[250269]: 2026-01-23 10:05:31.755 250273 DEBUG nova.compute.manager [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 23 05:05:31 np0005593232 nova_compute[250269]: 2026-01-23 10:05:31.790 250273 INFO nova.virt.libvirt.driver [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 05:05:31 np0005593232 nova_compute[250269]: 2026-01-23 10:05:31.810 250273 DEBUG nova.compute.manager [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 05:05:31 np0005593232 nova_compute[250269]: 2026-01-23 10:05:31.874 250273 DEBUG oslo_concurrency.processutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:05:31 np0005593232 nova_compute[250269]: 2026-01-23 10:05:31.915 250273 DEBUG nova.compute.manager [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 05:05:31 np0005593232 nova_compute[250269]: 2026-01-23 10:05:31.917 250273 DEBUG nova.virt.libvirt.driver [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 05:05:31 np0005593232 nova_compute[250269]: 2026-01-23 10:05:31.917 250273 INFO nova.virt.libvirt.driver [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Creating image(s)
Jan 23 05:05:31 np0005593232 nova_compute[250269]: 2026-01-23 10:05:31.943 250273 DEBUG nova.storage.rbd_utils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] rbd image 24f5e3af-2c13-43f5-a624-dc229b9023e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:05:31 np0005593232 nova_compute[250269]: 2026-01-23 10:05:31.979 250273 DEBUG nova.storage.rbd_utils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] rbd image 24f5e3af-2c13-43f5-a624-dc229b9023e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.033 250273 DEBUG nova.storage.rbd_utils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] rbd image 24f5e3af-2c13-43f5-a624-dc229b9023e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.037 250273 DEBUG oslo_concurrency.processutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.114 250273 DEBUG oslo_concurrency.processutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.115 250273 DEBUG oslo_concurrency.lockutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.116 250273 DEBUG oslo_concurrency.lockutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.116 250273 DEBUG oslo_concurrency.lockutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.145 250273 DEBUG nova.storage.rbd_utils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] rbd image 24f5e3af-2c13-43f5-a624-dc229b9023e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.148 250273 DEBUG oslo_concurrency.processutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 24f5e3af-2c13-43f5-a624-dc229b9023e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:05:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:05:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:32.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.320 250273 DEBUG oslo_concurrency.processutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.330 250273 DEBUG nova.compute.provider_tree [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.361 250273 DEBUG nova.scheduler.client.report [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.404 250273 DEBUG oslo_concurrency.lockutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.405 250273 DEBUG nova.compute.manager [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.418 250273 DEBUG oslo_concurrency.processutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 24f5e3af-2c13-43f5-a624-dc229b9023e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.505 250273 DEBUG nova.compute.manager [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.506 250273 DEBUG nova.network.neutron [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.515 250273 DEBUG nova.storage.rbd_utils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] resizing rbd image 24f5e3af-2c13-43f5-a624-dc229b9023e3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.560 250273 INFO nova.virt.libvirt.driver [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.634 250273 DEBUG nova.compute.manager [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.642 250273 DEBUG nova.objects.instance [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lazy-loading 'migration_context' on Instance uuid 24f5e3af-2c13-43f5-a624-dc229b9023e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.663 250273 DEBUG nova.virt.libvirt.driver [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.663 250273 DEBUG nova.virt.libvirt.driver [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Ensure instance console log exists: /var/lib/nova/instances/24f5e3af-2c13-43f5-a624-dc229b9023e3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.664 250273 DEBUG oslo_concurrency.lockutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.664 250273 DEBUG oslo_concurrency.lockutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.664 250273 DEBUG oslo_concurrency.lockutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.666 250273 DEBUG nova.virt.libvirt.driver [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.670 250273 WARNING nova.virt.libvirt.driver [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.674 250273 DEBUG nova.virt.libvirt.host [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.675 250273 DEBUG nova.virt.libvirt.host [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.678 250273 DEBUG nova.virt.libvirt.host [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.678 250273 DEBUG nova.virt.libvirt.host [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.680 250273 DEBUG nova.virt.libvirt.driver [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.680 250273 DEBUG nova.virt.hardware [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.680 250273 DEBUG nova.virt.hardware [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.680 250273 DEBUG nova.virt.hardware [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.680 250273 DEBUG nova.virt.hardware [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.681 250273 DEBUG nova.virt.hardware [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.681 250273 DEBUG nova.virt.hardware [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.681 250273 DEBUG nova.virt.hardware [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.681 250273 DEBUG nova.virt.hardware [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.681 250273 DEBUG nova.virt.hardware [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.681 250273 DEBUG nova.virt.hardware [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.682 250273 DEBUG nova.virt.hardware [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.684 250273 DEBUG oslo_concurrency.processutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.724 250273 INFO nova.virt.block_device [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Booting with volume 1337690c-8061-4b7e-bb70-8cbfeecc77ac at /dev/vda#033[00m
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.865 250273 DEBUG nova.policy [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '29710db389c842df836944048225740f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8c16cd713fa74a88b43e4edf01c273bd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.961 250273 DEBUG os_brick.utils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.964 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.976 265031 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.976 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[1ab4ba23-0d7f-466a-ae38-febf6ee3c3e7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.977 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.988 265031 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.988 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[3a0b4df5-2812-462e-a486-75df7dc849c9]: (4, ('InitiatorName=iqn.1994-05.com.redhat:c6473626b456', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.989 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.998 265031 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:32 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.998 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[40a6d55e-b0bc-4729-8483-40759b67e9eb]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:33 np0005593232 nova_compute[250269]: 2026-01-23 10:05:32.999 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[f81bfe70-4e99-42c4-ab44-a1a680ab262c]: (4, 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:33 np0005593232 nova_compute[250269]: 2026-01-23 10:05:33.000 250273 DEBUG oslo_concurrency.processutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:33 np0005593232 nova_compute[250269]: 2026-01-23 10:05:33.032 250273 DEBUG oslo_concurrency.processutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:33 np0005593232 nova_compute[250269]: 2026-01-23 10:05:33.035 250273 DEBUG os_brick.initiator.connectors.lightos [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 23 05:05:33 np0005593232 nova_compute[250269]: 2026-01-23 10:05:33.035 250273 DEBUG os_brick.initiator.connectors.lightos [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 23 05:05:33 np0005593232 nova_compute[250269]: 2026-01-23 10:05:33.035 250273 DEBUG os_brick.initiator.connectors.lightos [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 23 05:05:33 np0005593232 nova_compute[250269]: 2026-01-23 10:05:33.036 250273 DEBUG os_brick.utils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] <== get_connector_properties: return (73ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:c6473626b456', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 23 05:05:33 np0005593232 nova_compute[250269]: 2026-01-23 10:05:33.036 250273 DEBUG nova.virt.block_device [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Updating existing volume attachment record: c9f69558-2b13-4139-a49d-3adea9b4e794 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 23 05:05:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2237: 321 pgs: 321 active+clean; 441 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 6.9 MiB/s wr, 226 op/s
Jan 23 05:05:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:05:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1248606922' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:05:33 np0005593232 nova_compute[250269]: 2026-01-23 10:05:33.157 250273 DEBUG oslo_concurrency.processutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:33 np0005593232 nova_compute[250269]: 2026-01-23 10:05:33.183 250273 DEBUG nova.storage.rbd_utils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] rbd image 24f5e3af-2c13-43f5-a624-dc229b9023e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:05:33 np0005593232 nova_compute[250269]: 2026-01-23 10:05:33.187 250273 DEBUG oslo_concurrency.processutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:33.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:05:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2238748726' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:05:33 np0005593232 nova_compute[250269]: 2026-01-23 10:05:33.662 250273 DEBUG oslo_concurrency.processutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:33 np0005593232 nova_compute[250269]: 2026-01-23 10:05:33.664 250273 DEBUG nova.objects.instance [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lazy-loading 'pci_devices' on Instance uuid 24f5e3af-2c13-43f5-a624-dc229b9023e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:05:33 np0005593232 nova_compute[250269]: 2026-01-23 10:05:33.680 250273 DEBUG nova.virt.libvirt.driver [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:05:33 np0005593232 nova_compute[250269]:  <uuid>24f5e3af-2c13-43f5-a624-dc229b9023e3</uuid>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:  <name>instance-00000073</name>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerShowV254Test-server-1403815402</nova:name>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:05:32</nova:creationTime>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:05:33 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:        <nova:user uuid="1a65ac354e8a4e2b965a34382f0645d2">tempest-ServerShowV254Test-38819028-project-member</nova:user>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:        <nova:project uuid="f418c2e5b22b4eeb9d97365ab71edb23">tempest-ServerShowV254Test-38819028</nova:project>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <nova:ports/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <entry name="serial">24f5e3af-2c13-43f5-a624-dc229b9023e3</entry>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <entry name="uuid">24f5e3af-2c13-43f5-a624-dc229b9023e3</entry>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/24f5e3af-2c13-43f5-a624-dc229b9023e3_disk">
Jan 23 05:05:33 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:05:33 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/24f5e3af-2c13-43f5-a624-dc229b9023e3_disk.config">
Jan 23 05:05:33 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:05:33 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/24f5e3af-2c13-43f5-a624-dc229b9023e3/console.log" append="off"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:05:33 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:05:33 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:05:33 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:05:33 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:05:33 np0005593232 nova_compute[250269]: 2026-01-23 10:05:33.743 250273 DEBUG nova.virt.libvirt.driver [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 05:05:33 np0005593232 nova_compute[250269]: 2026-01-23 10:05:33.744 250273 DEBUG nova.virt.libvirt.driver [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 05:05:33 np0005593232 nova_compute[250269]: 2026-01-23 10:05:33.744 250273 INFO nova.virt.libvirt.driver [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Using config drive
Jan 23 05:05:33 np0005593232 nova_compute[250269]: 2026-01-23 10:05:33.768 250273 DEBUG nova.storage.rbd_utils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] rbd image 24f5e3af-2c13-43f5-a624-dc229b9023e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:05:34 np0005593232 nova_compute[250269]: 2026-01-23 10:05:34.243 250273 DEBUG nova.compute.manager [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 05:05:34 np0005593232 nova_compute[250269]: 2026-01-23 10:05:34.244 250273 DEBUG nova.virt.libvirt.driver [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 05:05:34 np0005593232 nova_compute[250269]: 2026-01-23 10:05:34.244 250273 INFO nova.virt.libvirt.driver [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Creating image(s)
Jan 23 05:05:34 np0005593232 nova_compute[250269]: 2026-01-23 10:05:34.245 250273 DEBUG nova.virt.libvirt.driver [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Jan 23 05:05:34 np0005593232 nova_compute[250269]: 2026-01-23 10:05:34.245 250273 DEBUG nova.virt.libvirt.driver [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Ensure instance console log exists: /var/lib/nova/instances/483afeac-561b-48ff-89d6-d02d1b615fc9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 05:05:34 np0005593232 nova_compute[250269]: 2026-01-23 10:05:34.246 250273 DEBUG oslo_concurrency.lockutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:05:34 np0005593232 nova_compute[250269]: 2026-01-23 10:05:34.246 250273 DEBUG oslo_concurrency.lockutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:05:34 np0005593232 nova_compute[250269]: 2026-01-23 10:05:34.246 250273 DEBUG oslo_concurrency.lockutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:05:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:05:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:34.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:05:34 np0005593232 nova_compute[250269]: 2026-01-23 10:05:34.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:05:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2238: 321 pgs: 321 active+clean; 441 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 293 KiB/s rd, 4.5 MiB/s wr, 97 op/s
Jan 23 05:05:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:35.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:35 np0005593232 nova_compute[250269]: 2026-01-23 10:05:35.246 250273 INFO nova.virt.libvirt.driver [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Creating config drive at /var/lib/nova/instances/24f5e3af-2c13-43f5-a624-dc229b9023e3/disk.config
Jan 23 05:05:35 np0005593232 nova_compute[250269]: 2026-01-23 10:05:35.251 250273 DEBUG oslo_concurrency.processutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/24f5e3af-2c13-43f5-a624-dc229b9023e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx_2imt_v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:05:35 np0005593232 nova_compute[250269]: 2026-01-23 10:05:35.275 250273 DEBUG nova.network.neutron [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Successfully created port: f35157ad-0f62-41af-962e-a3afcd66400e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 23 05:05:35 np0005593232 nova_compute[250269]: 2026-01-23 10:05:35.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:05:35 np0005593232 nova_compute[250269]: 2026-01-23 10:05:35.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:05:35 np0005593232 nova_compute[250269]: 2026-01-23 10:05:35.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 05:05:35 np0005593232 nova_compute[250269]: 2026-01-23 10:05:35.382 250273 DEBUG oslo_concurrency.processutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/24f5e3af-2c13-43f5-a624-dc229b9023e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx_2imt_v" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:05:35 np0005593232 nova_compute[250269]: 2026-01-23 10:05:35.411 250273 DEBUG nova.storage.rbd_utils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] rbd image 24f5e3af-2c13-43f5-a624-dc229b9023e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:05:35 np0005593232 nova_compute[250269]: 2026-01-23 10:05:35.415 250273 DEBUG oslo_concurrency.processutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/24f5e3af-2c13-43f5-a624-dc229b9023e3/disk.config 24f5e3af-2c13-43f5-a624-dc229b9023e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:05:35 np0005593232 nova_compute[250269]: 2026-01-23 10:05:35.597 250273 DEBUG oslo_concurrency.processutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/24f5e3af-2c13-43f5-a624-dc229b9023e3/disk.config 24f5e3af-2c13-43f5-a624-dc229b9023e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:05:35 np0005593232 nova_compute[250269]: 2026-01-23 10:05:35.599 250273 INFO nova.virt.libvirt.driver [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Deleting local config drive /var/lib/nova/instances/24f5e3af-2c13-43f5-a624-dc229b9023e3/disk.config because it was imported into RBD.
Jan 23 05:05:35 np0005593232 systemd-machined[215836]: New machine qemu-47-instance-00000073.
Jan 23 05:05:35 np0005593232 systemd[1]: Started Virtual Machine qemu-47-instance-00000073.
Jan 23 05:05:35 np0005593232 nova_compute[250269]: 2026-01-23 10:05:35.823 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:05:36 np0005593232 nova_compute[250269]: 2026-01-23 10:05:36.095 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:05:36 np0005593232 nova_compute[250269]: 2026-01-23 10:05:36.183 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162736.1830242, 24f5e3af-2c13-43f5-a624-dc229b9023e3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 05:05:36 np0005593232 nova_compute[250269]: 2026-01-23 10:05:36.184 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] VM Resumed (Lifecycle Event)
Jan 23 05:05:36 np0005593232 nova_compute[250269]: 2026-01-23 10:05:36.188 250273 DEBUG nova.compute.manager [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 23 05:05:36 np0005593232 nova_compute[250269]: 2026-01-23 10:05:36.189 250273 DEBUG nova.virt.libvirt.driver [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 23 05:05:36 np0005593232 nova_compute[250269]: 2026-01-23 10:05:36.193 250273 INFO nova.virt.libvirt.driver [-] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Instance spawned successfully.
Jan 23 05:05:36 np0005593232 nova_compute[250269]: 2026-01-23 10:05:36.193 250273 DEBUG nova.virt.libvirt.driver [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 23 05:05:36 np0005593232 nova_compute[250269]: 2026-01-23 10:05:36.268 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:05:36 np0005593232 nova_compute[250269]: 2026-01-23 10:05:36.273 250273 DEBUG nova.virt.libvirt.driver [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:05:36 np0005593232 nova_compute[250269]: 2026-01-23 10:05:36.273 250273 DEBUG nova.virt.libvirt.driver [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:05:36 np0005593232 nova_compute[250269]: 2026-01-23 10:05:36.273 250273 DEBUG nova.virt.libvirt.driver [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:05:36 np0005593232 nova_compute[250269]: 2026-01-23 10:05:36.274 250273 DEBUG nova.virt.libvirt.driver [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:05:36 np0005593232 nova_compute[250269]: 2026-01-23 10:05:36.274 250273 DEBUG nova.virt.libvirt.driver [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:05:36 np0005593232 nova_compute[250269]: 2026-01-23 10:05:36.275 250273 DEBUG nova.virt.libvirt.driver [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:05:36 np0005593232 nova_compute[250269]: 2026-01-23 10:05:36.279 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 05:05:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:36.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:36 np0005593232 nova_compute[250269]: 2026-01-23 10:05:36.547 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 05:05:36 np0005593232 nova_compute[250269]: 2026-01-23 10:05:36.547 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162736.1833603, 24f5e3af-2c13-43f5-a624-dc229b9023e3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 05:05:36 np0005593232 nova_compute[250269]: 2026-01-23 10:05:36.547 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] VM Started (Lifecycle Event)
Jan 23 05:05:36 np0005593232 nova_compute[250269]: 2026-01-23 10:05:36.597 250273 INFO nova.compute.manager [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Took 4.68 seconds to spawn the instance on the hypervisor.
Jan 23 05:05:36 np0005593232 nova_compute[250269]: 2026-01-23 10:05:36.598 250273 DEBUG nova.compute.manager [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:05:36 np0005593232 nova_compute[250269]: 2026-01-23 10:05:36.606 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:05:36 np0005593232 nova_compute[250269]: 2026-01-23 10:05:36.609 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 05:05:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:05:36 np0005593232 nova_compute[250269]: 2026-01-23 10:05:36.652 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 05:05:36 np0005593232 nova_compute[250269]: 2026-01-23 10:05:36.868 250273 INFO nova.compute.manager [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Took 6.42 seconds to build instance.
Jan 23 05:05:37 np0005593232 nova_compute[250269]: 2026-01-23 10:05:37.068 250273 DEBUG oslo_concurrency.lockutils [None req-5881d18b-8bfa-4ca8-acd4-8f053487c07b 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lock "24f5e3af-2c13-43f5-a624-dc229b9023e3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:05:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2239: 321 pgs: 321 active+clean; 453 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.9 MiB/s wr, 186 op/s
Jan 23 05:05:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:05:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:37.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:05:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:05:37
Jan 23 05:05:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:05:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:05:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'images', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', '.mgr', 'volumes', 'vms', 'cephfs.cephfs.data']
Jan 23 05:05:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:05:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:05:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:05:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:05:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:05:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:05:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:05:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:05:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:05:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:05:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:05:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:05:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:05:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:05:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:05:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:05:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:05:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:38.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:38 np0005593232 nova_compute[250269]: 2026-01-23 10:05:38.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:05:38 np0005593232 nova_compute[250269]: 2026-01-23 10:05:38.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 05:05:38 np0005593232 nova_compute[250269]: 2026-01-23 10:05:38.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 05:05:38 np0005593232 nova_compute[250269]: 2026-01-23 10:05:38.335 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 23 05:05:38 np0005593232 nova_compute[250269]: 2026-01-23 10:05:38.424 250273 DEBUG nova.network.neutron [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Successfully updated port: f35157ad-0f62-41af-962e-a3afcd66400e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 23 05:05:38 np0005593232 nova_compute[250269]: 2026-01-23 10:05:38.450 250273 DEBUG oslo_concurrency.lockutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "refresh_cache-483afeac-561b-48ff-89d6-d02d1b615fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 05:05:38 np0005593232 nova_compute[250269]: 2026-01-23 10:05:38.451 250273 DEBUG oslo_concurrency.lockutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquired lock "refresh_cache-483afeac-561b-48ff-89d6-d02d1b615fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 05:05:38 np0005593232 nova_compute[250269]: 2026-01-23 10:05:38.451 250273 DEBUG nova.network.neutron [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 05:05:38 np0005593232 nova_compute[250269]: 2026-01-23 10:05:38.584 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-34c9552e-fca1-4094-96d1-eb627cda17ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 05:05:38 np0005593232 nova_compute[250269]: 2026-01-23 10:05:38.585 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-34c9552e-fca1-4094-96d1-eb627cda17ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 05:05:38 np0005593232 nova_compute[250269]: 2026-01-23 10:05:38.585 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 05:05:38 np0005593232 nova_compute[250269]: 2026-01-23 10:05:38.585 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 34c9552e-fca1-4094-96d1-eb627cda17ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 05:05:38 np0005593232 nova_compute[250269]: 2026-01-23 10:05:38.601 250273 DEBUG nova.compute.manager [req-e7ee9ca5-34ad-46eb-b504-b32376be7466 req-cc66d98d-1acd-403c-9b0f-5b7f2a33a0b6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Received event network-changed-f35157ad-0f62-41af-962e-a3afcd66400e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:05:38 np0005593232 nova_compute[250269]: 2026-01-23 10:05:38.601 250273 DEBUG nova.compute.manager [req-e7ee9ca5-34ad-46eb-b504-b32376be7466 req-cc66d98d-1acd-403c-9b0f-5b7f2a33a0b6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Refreshing instance network info cache due to event network-changed-f35157ad-0f62-41af-962e-a3afcd66400e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 05:05:38 np0005593232 nova_compute[250269]: 2026-01-23 10:05:38.601 250273 DEBUG oslo_concurrency.lockutils [req-e7ee9ca5-34ad-46eb-b504-b32376be7466 req-cc66d98d-1acd-403c-9b0f-5b7f2a33a0b6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-483afeac-561b-48ff-89d6-d02d1b615fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 05:05:38 np0005593232 nova_compute[250269]: 2026-01-23 10:05:38.704 250273 DEBUG nova.network.neutron [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 05:05:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2240: 321 pgs: 321 active+clean; 465 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.6 MiB/s wr, 255 op/s
Jan 23 05:05:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:39.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.602 250273 INFO nova.compute.manager [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Rebuilding instance
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.793 250273 DEBUG nova.network.neutron [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Updating instance_info_cache with network_info: [{"id": "f35157ad-0f62-41af-962e-a3afcd66400e", "address": "fa:16:3e:15:67:98", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35157ad-0f", "ovs_interfaceid": "f35157ad-0f62-41af-962e-a3afcd66400e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.831 250273 DEBUG oslo_concurrency.lockutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Releasing lock "refresh_cache-483afeac-561b-48ff-89d6-d02d1b615fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.831 250273 DEBUG nova.compute.manager [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Instance network_info: |[{"id": "f35157ad-0f62-41af-962e-a3afcd66400e", "address": "fa:16:3e:15:67:98", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35157ad-0f", "ovs_interfaceid": "f35157ad-0f62-41af-962e-a3afcd66400e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.832 250273 DEBUG oslo_concurrency.lockutils [req-e7ee9ca5-34ad-46eb-b504-b32376be7466 req-cc66d98d-1acd-403c-9b0f-5b7f2a33a0b6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-483afeac-561b-48ff-89d6-d02d1b615fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.832 250273 DEBUG nova.network.neutron [req-e7ee9ca5-34ad-46eb-b504-b32376be7466 req-cc66d98d-1acd-403c-9b0f-5b7f2a33a0b6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Refreshing network info cache for port f35157ad-0f62-41af-962e-a3afcd66400e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.836 250273 DEBUG nova.virt.libvirt.driver [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Start _get_guest_xml network_info=[{"id": "f35157ad-0f62-41af-962e-a3afcd66400e", "address": "fa:16:3e:15:67:98", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35157ad-0f", "ovs_interfaceid": "f35157ad-0f62-41af-962e-a3afcd66400e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'delete_on_termination': True, 'attachment_id': 'c9f69558-2b13-4139-a49d-3adea9b4e794', 'device_type': 'disk', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1337690c-8061-4b7e-bb70-8cbfeecc77ac', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1337690c-8061-4b7e-bb70-8cbfeecc77ac', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '483afeac-561b-48ff-89d6-d02d1b615fc9', 'attached_at': '', 'detached_at': '', 'volume_id': '1337690c-8061-4b7e-bb70-8cbfeecc77ac', 'serial': '1337690c-8061-4b7e-bb70-8cbfeecc77ac'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.840 250273 WARNING nova.virt.libvirt.driver [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.846 250273 DEBUG nova.virt.libvirt.host [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.847 250273 DEBUG nova.virt.libvirt.host [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.851 250273 DEBUG nova.virt.libvirt.host [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.851 250273 DEBUG nova.virt.libvirt.host [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.852 250273 DEBUG nova.virt.libvirt.driver [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.852 250273 DEBUG nova.virt.hardware [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.853 250273 DEBUG nova.virt.hardware [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.853 250273 DEBUG nova.virt.hardware [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.854 250273 DEBUG nova.virt.hardware [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.854 250273 DEBUG nova.virt.hardware [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.854 250273 DEBUG nova.virt.hardware [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.854 250273 DEBUG nova.virt.hardware [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.855 250273 DEBUG nova.virt.hardware [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.855 250273 DEBUG nova.virt.hardware [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.855 250273 DEBUG nova.virt.hardware [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.856 250273 DEBUG nova.virt.hardware [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.886 250273 DEBUG nova.storage.rbd_utils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 483afeac-561b-48ff-89d6-d02d1b615fc9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.891 250273 DEBUG oslo_concurrency.processutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.973 250273 DEBUG nova.objects.instance [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 24f5e3af-2c13-43f5-a624-dc229b9023e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:05:39 np0005593232 nova_compute[250269]: 2026-01-23 10:05:39.991 250273 DEBUG nova.compute.manager [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.047 250273 DEBUG nova.objects.instance [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lazy-loading 'pci_requests' on Instance uuid 24f5e3af-2c13-43f5-a624-dc229b9023e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.059 250273 DEBUG nova.objects.instance [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lazy-loading 'pci_devices' on Instance uuid 24f5e3af-2c13-43f5-a624-dc229b9023e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.073 250273 DEBUG nova.objects.instance [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lazy-loading 'resources' on Instance uuid 24f5e3af-2c13-43f5-a624-dc229b9023e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.087 250273 DEBUG nova.objects.instance [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lazy-loading 'migration_context' on Instance uuid 24f5e3af-2c13-43f5-a624-dc229b9023e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.101 250273 DEBUG nova.objects.instance [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.119 250273 DEBUG nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 23 05:05:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:40.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:05:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/647923012' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.355 250273 DEBUG oslo_concurrency.processutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.382 250273 DEBUG nova.virt.libvirt.vif [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:05:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1175667419',display_name='tempest-ServerActionsTestOtherA-server-1175667419',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1175667419',id=116,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPDPMbCnqcp11s7OR05vsDdiZlZSU5ZbBJSLaqQpawTODCANj+91AmOb6Hdh0FgzlQPvmSu+VYXOLfZik0SA3L4m61/nruOol9dJ9Mz34f8cV2NJKksVR2Ar2t+W5r4M6w==',key_name='tempest-keypair-2078677939',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8c16cd713fa74a88b43e4edf01c273bd',ramdisk_id='',reservation_id='r-z67ffbwd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-882763067',owner_user_name='tempest-ServerActionsTestOtherA-882763067-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:05:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='29710db389c842df836944048225740f',uuid=483afeac-561b-48ff-89d6-d02d1b615fc9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f35157ad-0f62-41af-962e-a3afcd66400e", "address": "fa:16:3e:15:67:98", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35157ad-0f", "ovs_interfaceid": "f35157ad-0f62-41af-962e-a3afcd66400e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.383 250273 DEBUG nova.network.os_vif_util [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converting VIF {"id": "f35157ad-0f62-41af-962e-a3afcd66400e", "address": "fa:16:3e:15:67:98", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35157ad-0f", "ovs_interfaceid": "f35157ad-0f62-41af-962e-a3afcd66400e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.384 250273 DEBUG nova.network.os_vif_util [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:67:98,bridge_name='br-int',has_traffic_filtering=True,id=f35157ad-0f62-41af-962e-a3afcd66400e,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf35157ad-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.386 250273 DEBUG nova.objects.instance [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lazy-loading 'pci_devices' on Instance uuid 483afeac-561b-48ff-89d6-d02d1b615fc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.409 250273 DEBUG nova.virt.libvirt.driver [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:05:40 np0005593232 nova_compute[250269]:  <uuid>483afeac-561b-48ff-89d6-d02d1b615fc9</uuid>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:  <name>instance-00000074</name>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerActionsTestOtherA-server-1175667419</nova:name>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:05:39</nova:creationTime>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:05:40 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:        <nova:user uuid="29710db389c842df836944048225740f">tempest-ServerActionsTestOtherA-882763067-project-member</nova:user>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:        <nova:project uuid="8c16cd713fa74a88b43e4edf01c273bd">tempest-ServerActionsTestOtherA-882763067</nova:project>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:        <nova:port uuid="f35157ad-0f62-41af-962e-a3afcd66400e">
Jan 23 05:05:40 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <entry name="serial">483afeac-561b-48ff-89d6-d02d1b615fc9</entry>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <entry name="uuid">483afeac-561b-48ff-89d6-d02d1b615fc9</entry>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/483afeac-561b-48ff-89d6-d02d1b615fc9_disk.config">
Jan 23 05:05:40 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:05:40 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="volumes/volume-1337690c-8061-4b7e-bb70-8cbfeecc77ac">
Jan 23 05:05:40 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:05:40 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <serial>1337690c-8061-4b7e-bb70-8cbfeecc77ac</serial>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:15:67:98"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <target dev="tapf35157ad-0f"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/483afeac-561b-48ff-89d6-d02d1b615fc9/console.log" append="off"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:05:40 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:05:40 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:05:40 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:05:40 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.416 250273 DEBUG nova.compute.manager [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Preparing to wait for external event network-vif-plugged-f35157ad-0f62-41af-962e-a3afcd66400e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.417 250273 DEBUG oslo_concurrency.lockutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "483afeac-561b-48ff-89d6-d02d1b615fc9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.417 250273 DEBUG oslo_concurrency.lockutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "483afeac-561b-48ff-89d6-d02d1b615fc9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.417 250273 DEBUG oslo_concurrency.lockutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "483afeac-561b-48ff-89d6-d02d1b615fc9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.418 250273 DEBUG nova.virt.libvirt.vif [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:05:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1175667419',display_name='tempest-ServerActionsTestOtherA-server-1175667419',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1175667419',id=116,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPDPMbCnqcp11s7OR05vsDdiZlZSU5ZbBJSLaqQpawTODCANj+91AmOb6Hdh0FgzlQPvmSu+VYXOLfZik0SA3L4m61/nruOol9dJ9Mz34f8cV2NJKksVR2Ar2t+W5r4M6w==',key_name='tempest-keypair-2078677939',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8c16cd713fa74a88b43e4edf01c273bd',ramdisk_id='',reservation_id='r-z67ffbwd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-882763067',owner_user_name='tempest-ServerActionsTestOtherA-882763067-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:05:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='29710db389c842df836944048225740f',uuid=483afeac-561b-48ff-89d6-d02d1b615fc9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f35157ad-0f62-41af-962e-a3afcd66400e", "address": "fa:16:3e:15:67:98", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35157ad-0f", "ovs_interfaceid": "f35157ad-0f62-41af-962e-a3afcd66400e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.419 250273 DEBUG nova.network.os_vif_util [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converting VIF {"id": "f35157ad-0f62-41af-962e-a3afcd66400e", "address": "fa:16:3e:15:67:98", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35157ad-0f", "ovs_interfaceid": "f35157ad-0f62-41af-962e-a3afcd66400e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.419 250273 DEBUG nova.network.os_vif_util [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:67:98,bridge_name='br-int',has_traffic_filtering=True,id=f35157ad-0f62-41af-962e-a3afcd66400e,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf35157ad-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.420 250273 DEBUG os_vif [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:67:98,bridge_name='br-int',has_traffic_filtering=True,id=f35157ad-0f62-41af-962e-a3afcd66400e,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf35157ad-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.421 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.422 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.423 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.428 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.429 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf35157ad-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.430 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf35157ad-0f, col_values=(('external_ids', {'iface-id': 'f35157ad-0f62-41af-962e-a3afcd66400e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:67:98', 'vm-uuid': '483afeac-561b-48ff-89d6-d02d1b615fc9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:05:40 np0005593232 NetworkManager[49057]: <info>  [1769162740.4682] manager: (tapf35157ad-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/194)
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.467 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.472 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.474 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.475 250273 INFO os_vif [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:67:98,bridge_name='br-int',has_traffic_filtering=True,id=f35157ad-0f62-41af-962e-a3afcd66400e,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf35157ad-0f')#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.587 250273 DEBUG nova.virt.libvirt.driver [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.588 250273 DEBUG nova.virt.libvirt.driver [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.588 250273 DEBUG nova.virt.libvirt.driver [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] No VIF found with MAC fa:16:3e:15:67:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.589 250273 INFO nova.virt.libvirt.driver [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Using config drive#033[00m
Jan 23 05:05:40 np0005593232 nova_compute[250269]: 2026-01-23 10:05:40.622 250273 DEBUG nova.storage.rbd_utils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 483afeac-561b-48ff-89d6-d02d1b615fc9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:05:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2241: 321 pgs: 321 active+clean; 465 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.1 MiB/s wr, 220 op/s
Jan 23 05:05:41 np0005593232 nova_compute[250269]: 2026-01-23 10:05:41.097 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:41.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:41 np0005593232 nova_compute[250269]: 2026-01-23 10:05:41.396 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Updating instance_info_cache with network_info: [{"id": "800b7313-6d9b-4fd7-9175-cdecef348ba1", "address": "fa:16:3e:eb:13:67", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap800b7313-6d", "ovs_interfaceid": "800b7313-6d9b-4fd7-9175-cdecef348ba1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:05:41 np0005593232 nova_compute[250269]: 2026-01-23 10:05:41.413 250273 INFO nova.virt.libvirt.driver [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Creating config drive at /var/lib/nova/instances/483afeac-561b-48ff-89d6-d02d1b615fc9/disk.config#033[00m
Jan 23 05:05:41 np0005593232 nova_compute[250269]: 2026-01-23 10:05:41.422 250273 DEBUG oslo_concurrency.processutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/483afeac-561b-48ff-89d6-d02d1b615fc9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpic5ldcag execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:41 np0005593232 nova_compute[250269]: 2026-01-23 10:05:41.453 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-34c9552e-fca1-4094-96d1-eb627cda17ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:05:41 np0005593232 nova_compute[250269]: 2026-01-23 10:05:41.454 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:05:41 np0005593232 nova_compute[250269]: 2026-01-23 10:05:41.455 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:05:41 np0005593232 nova_compute[250269]: 2026-01-23 10:05:41.486 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:05:41 np0005593232 nova_compute[250269]: 2026-01-23 10:05:41.487 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:05:41 np0005593232 nova_compute[250269]: 2026-01-23 10:05:41.487 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:05:41 np0005593232 nova_compute[250269]: 2026-01-23 10:05:41.488 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:05:41 np0005593232 nova_compute[250269]: 2026-01-23 10:05:41.488 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:41 np0005593232 nova_compute[250269]: 2026-01-23 10:05:41.560 250273 DEBUG oslo_concurrency.processutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/483afeac-561b-48ff-89d6-d02d1b615fc9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpic5ldcag" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:41 np0005593232 nova_compute[250269]: 2026-01-23 10:05:41.593 250273 DEBUG nova.storage.rbd_utils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 483afeac-561b-48ff-89d6-d02d1b615fc9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:05:41 np0005593232 nova_compute[250269]: 2026-01-23 10:05:41.597 250273 DEBUG oslo_concurrency.processutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/483afeac-561b-48ff-89d6-d02d1b615fc9/disk.config 483afeac-561b-48ff-89d6-d02d1b615fc9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:05:41 np0005593232 nova_compute[250269]: 2026-01-23 10:05:41.771 250273 DEBUG oslo_concurrency.processutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/483afeac-561b-48ff-89d6-d02d1b615fc9/disk.config 483afeac-561b-48ff-89d6-d02d1b615fc9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:41 np0005593232 nova_compute[250269]: 2026-01-23 10:05:41.772 250273 INFO nova.virt.libvirt.driver [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Deleting local config drive /var/lib/nova/instances/483afeac-561b-48ff-89d6-d02d1b615fc9/disk.config because it was imported into RBD.#033[00m
Jan 23 05:05:41 np0005593232 NetworkManager[49057]: <info>  [1769162741.8175] manager: (tapf35157ad-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/195)
Jan 23 05:05:41 np0005593232 kernel: tapf35157ad-0f: entered promiscuous mode
Jan 23 05:05:41 np0005593232 ovn_controller[151001]: 2026-01-23T10:05:41Z|00388|binding|INFO|Claiming lport f35157ad-0f62-41af-962e-a3afcd66400e for this chassis.
Jan 23 05:05:41 np0005593232 ovn_controller[151001]: 2026-01-23T10:05:41Z|00389|binding|INFO|f35157ad-0f62-41af-962e-a3afcd66400e: Claiming fa:16:3e:15:67:98 10.100.0.5
Jan 23 05:05:41 np0005593232 nova_compute[250269]: 2026-01-23 10:05:41.857 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:41 np0005593232 systemd-udevd[323906]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:05:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:41.865 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:67:98 10.100.0.5'], port_security=['fa:16:3e:15:67:98 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '483afeac-561b-48ff-89d6-d02d1b615fc9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c16cd713fa74a88b43e4edf01c273bd', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b9910180-8b38-41b2-8cb3-4e4af7eb2c2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c3aed5f-30b8-4c57-808e-87764ab67fc8, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=f35157ad-0f62-41af-962e-a3afcd66400e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:05:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:41.868 161902 INFO neutron.agent.ovn.metadata.agent [-] Port f35157ad-0f62-41af-962e-a3afcd66400e in datapath 8575e824-4be0-4206-873e-2f9a3d1ded0b bound to our chassis#033[00m
Jan 23 05:05:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:41.869 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8575e824-4be0-4206-873e-2f9a3d1ded0b#033[00m
Jan 23 05:05:41 np0005593232 NetworkManager[49057]: <info>  [1769162741.8793] device (tapf35157ad-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:05:41 np0005593232 ovn_controller[151001]: 2026-01-23T10:05:41Z|00390|binding|INFO|Setting lport f35157ad-0f62-41af-962e-a3afcd66400e ovn-installed in OVS
Jan 23 05:05:41 np0005593232 ovn_controller[151001]: 2026-01-23T10:05:41Z|00391|binding|INFO|Setting lport f35157ad-0f62-41af-962e-a3afcd66400e up in Southbound
Jan 23 05:05:41 np0005593232 NetworkManager[49057]: <info>  [1769162741.8807] device (tapf35157ad-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:05:41 np0005593232 nova_compute[250269]: 2026-01-23 10:05:41.881 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:41.895 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b27e3be8-a014-4956-886c-f734efea5455]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:41 np0005593232 systemd-machined[215836]: New machine qemu-48-instance-00000074.
Jan 23 05:05:41 np0005593232 systemd[1]: Started Virtual Machine qemu-48-instance-00000074.
Jan 23 05:05:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:41.930 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[ff6e5e03-a83c-4dcf-a091-0846a9a9b4c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:41.934 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[78c9383b-8a30-4ea6-b232-3c5db92f6c10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:05:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1290044975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:05:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:41.964 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[1a8faa3e-dd37-4423-a932-3f21fa14e03b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:41 np0005593232 nova_compute[250269]: 2026-01-23 10:05:41.983 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:41.993 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f125d68b-fd17-4967-a4bd-14920df25677]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8575e824-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:16:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662100, 'reachable_time': 18560, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323924, 'error': None, 'target': 'ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:42.011 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3237775c-9ea0-40d2-8648-422f0775ee38]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8575e824-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662112, 'tstamp': 662112}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323926, 'error': None, 'target': 'ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8575e824-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662115, 'tstamp': 662115}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323926, 'error': None, 'target': 'ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:05:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:42.013 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8575e824-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.014 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.015 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:42.016 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8575e824-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:05:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:42.016 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:05:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:42.016 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8575e824-40, col_values=(('external_ids', {'iface-id': 'f7023d86-3158-4cc4-b690-f57bb76e92b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:05:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:42.016 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.019 250273 DEBUG nova.network.neutron [req-e7ee9ca5-34ad-46eb-b504-b32376be7466 req-cc66d98d-1acd-403c-9b0f-5b7f2a33a0b6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Updated VIF entry in instance network info cache for port f35157ad-0f62-41af-962e-a3afcd66400e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.019 250273 DEBUG nova.network.neutron [req-e7ee9ca5-34ad-46eb-b504-b32376be7466 req-cc66d98d-1acd-403c-9b0f-5b7f2a33a0b6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Updating instance_info_cache with network_info: [{"id": "f35157ad-0f62-41af-962e-a3afcd66400e", "address": "fa:16:3e:15:67:98", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35157ad-0f", "ovs_interfaceid": "f35157ad-0f62-41af-962e-a3afcd66400e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.050 250273 DEBUG oslo_concurrency.lockutils [req-e7ee9ca5-34ad-46eb-b504-b32376be7466 req-cc66d98d-1acd-403c-9b0f-5b7f2a33a0b6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-483afeac-561b-48ff-89d6-d02d1b615fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.080 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.081 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.086 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.086 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.092 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.093 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.267 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.268 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3973MB free_disk=20.788982391357422GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.269 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.269 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.285 250273 DEBUG nova.compute.manager [req-f02a56bd-b80b-4c91-bad3-c61e5628c7bb req-90f7ee71-2f7c-400f-878b-c9c49175c558 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Received event network-vif-plugged-f35157ad-0f62-41af-962e-a3afcd66400e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.286 250273 DEBUG oslo_concurrency.lockutils [req-f02a56bd-b80b-4c91-bad3-c61e5628c7bb req-90f7ee71-2f7c-400f-878b-c9c49175c558 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "483afeac-561b-48ff-89d6-d02d1b615fc9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.286 250273 DEBUG oslo_concurrency.lockutils [req-f02a56bd-b80b-4c91-bad3-c61e5628c7bb req-90f7ee71-2f7c-400f-878b-c9c49175c558 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "483afeac-561b-48ff-89d6-d02d1b615fc9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.286 250273 DEBUG oslo_concurrency.lockutils [req-f02a56bd-b80b-4c91-bad3-c61e5628c7bb req-90f7ee71-2f7c-400f-878b-c9c49175c558 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "483afeac-561b-48ff-89d6-d02d1b615fc9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.286 250273 DEBUG nova.compute.manager [req-f02a56bd-b80b-4c91-bad3-c61e5628c7bb req-90f7ee71-2f7c-400f-878b-c9c49175c558 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Processing event network-vif-plugged-f35157ad-0f62-41af-962e-a3afcd66400e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:05:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:42.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.383 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 34c9552e-fca1-4094-96d1-eb627cda17ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.384 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 24f5e3af-2c13-43f5-a624-dc229b9023e3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.384 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 483afeac-561b-48ff-89d6-d02d1b615fc9 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.385 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.385 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.484 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:42.617 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:05:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:42.617 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:05:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:05:42.618 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.650 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162742.6502717, 483afeac-561b-48ff-89d6-d02d1b615fc9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.651 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] VM Started (Lifecycle Event)#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.655 250273 DEBUG nova.compute.manager [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.658 250273 DEBUG nova.virt.libvirt.driver [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.666 250273 INFO nova.virt.libvirt.driver [-] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Instance spawned successfully.#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.666 250273 DEBUG nova.virt.libvirt.driver [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.671 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.694 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.702 250273 DEBUG nova.virt.libvirt.driver [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.703 250273 DEBUG nova.virt.libvirt.driver [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.704 250273 DEBUG nova.virt.libvirt.driver [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.706 250273 DEBUG nova.virt.libvirt.driver [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.706 250273 DEBUG nova.virt.libvirt.driver [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.707 250273 DEBUG nova.virt.libvirt.driver [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.735 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.736 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162742.6514814, 483afeac-561b-48ff-89d6-d02d1b615fc9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.737 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.783 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.786 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162742.657508, 483afeac-561b-48ff-89d6-d02d1b615fc9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.787 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.795 250273 INFO nova.compute.manager [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Took 8.55 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.796 250273 DEBUG nova.compute.manager [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.810 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.812 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.840 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.877 250273 INFO nova.compute.manager [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Took 12.01 seconds to build instance.#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.898 250273 DEBUG oslo_concurrency.lockutils [None req-035c8dcc-7c19-41d4-b04d-63da196ccf84 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "483afeac-561b-48ff-89d6-d02d1b615fc9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:05:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:05:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3483296782' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.941 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.946 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.962 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.984 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:05:42 np0005593232 nova_compute[250269]: 2026-01-23 10:05:42.984 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.715s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:05:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2242: 321 pgs: 321 active+clean; 467 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.2 MiB/s wr, 247 op/s
Jan 23 05:05:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:43.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:43 np0005593232 podman[323991]: 2026-01-23 10:05:43.442165744 +0000 UTC m=+0.096782801 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:05:43 np0005593232 nova_compute[250269]: 2026-01-23 10:05:43.821 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:05:43 np0005593232 nova_compute[250269]: 2026-01-23 10:05:43.847 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:05:43 np0005593232 nova_compute[250269]: 2026-01-23 10:05:43.848 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:05:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:05:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:44.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:05:44 np0005593232 nova_compute[250269]: 2026-01-23 10:05:44.452 250273 DEBUG nova.compute.manager [req-cf09c993-1ddf-4d35-bca3-22a72b99337d req-57cf733d-dc13-4d5b-8552-792fbbf4f9b7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Received event network-vif-plugged-f35157ad-0f62-41af-962e-a3afcd66400e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:05:44 np0005593232 nova_compute[250269]: 2026-01-23 10:05:44.452 250273 DEBUG oslo_concurrency.lockutils [req-cf09c993-1ddf-4d35-bca3-22a72b99337d req-57cf733d-dc13-4d5b-8552-792fbbf4f9b7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "483afeac-561b-48ff-89d6-d02d1b615fc9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:05:44 np0005593232 nova_compute[250269]: 2026-01-23 10:05:44.452 250273 DEBUG oslo_concurrency.lockutils [req-cf09c993-1ddf-4d35-bca3-22a72b99337d req-57cf733d-dc13-4d5b-8552-792fbbf4f9b7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "483afeac-561b-48ff-89d6-d02d1b615fc9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:05:44 np0005593232 nova_compute[250269]: 2026-01-23 10:05:44.453 250273 DEBUG oslo_concurrency.lockutils [req-cf09c993-1ddf-4d35-bca3-22a72b99337d req-57cf733d-dc13-4d5b-8552-792fbbf4f9b7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "483afeac-561b-48ff-89d6-d02d1b615fc9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:05:44 np0005593232 nova_compute[250269]: 2026-01-23 10:05:44.453 250273 DEBUG nova.compute.manager [req-cf09c993-1ddf-4d35-bca3-22a72b99337d req-57cf733d-dc13-4d5b-8552-792fbbf4f9b7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] No waiting events found dispatching network-vif-plugged-f35157ad-0f62-41af-962e-a3afcd66400e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:05:44 np0005593232 nova_compute[250269]: 2026-01-23 10:05:44.453 250273 WARNING nova.compute.manager [req-cf09c993-1ddf-4d35-bca3-22a72b99337d req-57cf733d-dc13-4d5b-8552-792fbbf4f9b7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Received unexpected event network-vif-plugged-f35157ad-0f62-41af-962e-a3afcd66400e for instance with vm_state active and task_state None.#033[00m
Jan 23 05:05:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2243: 321 pgs: 321 active+clean; 467 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 1001 KiB/s wr, 225 op/s
Jan 23 05:05:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:45.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:45 np0005593232 nova_compute[250269]: 2026-01-23 10:05:45.469 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:46 np0005593232 nova_compute[250269]: 2026-01-23 10:05:46.150 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:46.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:46 np0005593232 podman[324070]: 2026-01-23 10:05:46.405146338 +0000 UTC m=+0.061712145 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:05:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009497174820863577 of space, bias 1.0, pg target 2.8491524462590734 quantized to 32 (current 32)
Jan 23 05:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0009892852573050437 of space, bias 1.0, pg target 0.294807006676903 quantized to 32 (current 32)
Jan 23 05:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 23 05:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 23 05:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 23 05:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 23 05:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:05:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 23 05:05:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2244: 321 pgs: 321 active+clean; 477 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 1.7 MiB/s wr, 298 op/s
Jan 23 05:05:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:47.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 23 05:05:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 05:05:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:05:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:05:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:05:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:05:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:05:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:05:48 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 196e065f-f901-4f98-bc79-4eae99d80c82 does not exist
Jan 23 05:05:48 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 659845d6-d2bd-4dcf-b074-47ac72745d19 does not exist
Jan 23 05:05:48 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 6a57a39d-5146-45bd-8aea-dc2f86188a3a does not exist
Jan 23 05:05:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:05:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:05:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:05:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:05:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:05:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:05:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:48.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 05:05:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:05:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:05:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:05:48 np0005593232 nova_compute[250269]: 2026-01-23 10:05:48.579 250273 DEBUG nova.compute.manager [req-41fc6c96-2770-4763-a069-4fbb59e82b77 req-6b8336cd-6179-456a-8d80-ec89e00609a9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Received event network-changed-f35157ad-0f62-41af-962e-a3afcd66400e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:05:48 np0005593232 nova_compute[250269]: 2026-01-23 10:05:48.579 250273 DEBUG nova.compute.manager [req-41fc6c96-2770-4763-a069-4fbb59e82b77 req-6b8336cd-6179-456a-8d80-ec89e00609a9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Refreshing instance network info cache due to event network-changed-f35157ad-0f62-41af-962e-a3afcd66400e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:05:48 np0005593232 nova_compute[250269]: 2026-01-23 10:05:48.579 250273 DEBUG oslo_concurrency.lockutils [req-41fc6c96-2770-4763-a069-4fbb59e82b77 req-6b8336cd-6179-456a-8d80-ec89e00609a9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-483afeac-561b-48ff-89d6-d02d1b615fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:05:48 np0005593232 nova_compute[250269]: 2026-01-23 10:05:48.580 250273 DEBUG oslo_concurrency.lockutils [req-41fc6c96-2770-4763-a069-4fbb59e82b77 req-6b8336cd-6179-456a-8d80-ec89e00609a9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-483afeac-561b-48ff-89d6-d02d1b615fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:05:48 np0005593232 nova_compute[250269]: 2026-01-23 10:05:48.580 250273 DEBUG nova.network.neutron [req-41fc6c96-2770-4763-a069-4fbb59e82b77 req-6b8336cd-6179-456a-8d80-ec89e00609a9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Refreshing network info cache for port f35157ad-0f62-41af-962e-a3afcd66400e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:05:48 np0005593232 podman[324358]: 2026-01-23 10:05:48.777134938 +0000 UTC m=+0.039371779 container create 9078488d1a67331fc48524222db269095b1de025c8b2668fec207e0b2a6724ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_knuth, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 05:05:48 np0005593232 systemd[1]: Started libpod-conmon-9078488d1a67331fc48524222db269095b1de025c8b2668fec207e0b2a6724ce.scope.
Jan 23 05:05:48 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:05:48 np0005593232 podman[324358]: 2026-01-23 10:05:48.760831475 +0000 UTC m=+0.023068336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:05:48 np0005593232 podman[324358]: 2026-01-23 10:05:48.858177451 +0000 UTC m=+0.120414332 container init 9078488d1a67331fc48524222db269095b1de025c8b2668fec207e0b2a6724ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 05:05:48 np0005593232 podman[324358]: 2026-01-23 10:05:48.867963899 +0000 UTC m=+0.130200740 container start 9078488d1a67331fc48524222db269095b1de025c8b2668fec207e0b2a6724ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_knuth, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 05:05:48 np0005593232 podman[324358]: 2026-01-23 10:05:48.871015616 +0000 UTC m=+0.133252507 container attach 9078488d1a67331fc48524222db269095b1de025c8b2668fec207e0b2a6724ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_knuth, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 23 05:05:48 np0005593232 romantic_knuth[324375]: 167 167
Jan 23 05:05:48 np0005593232 systemd[1]: libpod-9078488d1a67331fc48524222db269095b1de025c8b2668fec207e0b2a6724ce.scope: Deactivated successfully.
Jan 23 05:05:48 np0005593232 podman[324358]: 2026-01-23 10:05:48.87645016 +0000 UTC m=+0.138686991 container died 9078488d1a67331fc48524222db269095b1de025c8b2668fec207e0b2a6724ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:05:48 np0005593232 systemd[1]: var-lib-containers-storage-overlay-eba469561673d2abd5c08c9bf8abe6183a09ed3f3335b48a81ce7ddfc7feee67-merged.mount: Deactivated successfully.
Jan 23 05:05:48 np0005593232 podman[324358]: 2026-01-23 10:05:48.915817239 +0000 UTC m=+0.178054080 container remove 9078488d1a67331fc48524222db269095b1de025c8b2668fec207e0b2a6724ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:05:48 np0005593232 systemd[1]: libpod-conmon-9078488d1a67331fc48524222db269095b1de025c8b2668fec207e0b2a6724ce.scope: Deactivated successfully.
Jan 23 05:05:49 np0005593232 podman[324400]: 2026-01-23 10:05:49.086300033 +0000 UTC m=+0.043020073 container create 10b69e400cc9fc01bb8d965e0384e6742460c787dc666e15eacb4ffdff8e8c39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keldysh, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:05:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2245: 321 pgs: 321 active+clean; 461 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.4 MiB/s rd, 2.7 MiB/s wr, 276 op/s
Jan 23 05:05:49 np0005593232 systemd[1]: Started libpod-conmon-10b69e400cc9fc01bb8d965e0384e6742460c787dc666e15eacb4ffdff8e8c39.scope.
Jan 23 05:05:49 np0005593232 podman[324400]: 2026-01-23 10:05:49.066042778 +0000 UTC m=+0.022762848 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:05:49 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:05:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ebb61136f0fa9fde08b7ea0f6756564af6f8cecc8d050da0d9766de4f218ad7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:05:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ebb61136f0fa9fde08b7ea0f6756564af6f8cecc8d050da0d9766de4f218ad7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:05:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ebb61136f0fa9fde08b7ea0f6756564af6f8cecc8d050da0d9766de4f218ad7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:05:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ebb61136f0fa9fde08b7ea0f6756564af6f8cecc8d050da0d9766de4f218ad7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:05:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ebb61136f0fa9fde08b7ea0f6756564af6f8cecc8d050da0d9766de4f218ad7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:05:49 np0005593232 podman[324400]: 2026-01-23 10:05:49.202732072 +0000 UTC m=+0.159452152 container init 10b69e400cc9fc01bb8d965e0384e6742460c787dc666e15eacb4ffdff8e8c39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 05:05:49 np0005593232 podman[324400]: 2026-01-23 10:05:49.21076739 +0000 UTC m=+0.167487430 container start 10b69e400cc9fc01bb8d965e0384e6742460c787dc666e15eacb4ffdff8e8c39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keldysh, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 05:05:49 np0005593232 podman[324400]: 2026-01-23 10:05:49.214485166 +0000 UTC m=+0.171205236 container attach 10b69e400cc9fc01bb8d965e0384e6742460c787dc666e15eacb4ffdff8e8c39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keldysh, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:05:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:05:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:49.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:05:50 np0005593232 nova_compute[250269]: 2026-01-23 10:05:50.171 250273 DEBUG nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 23 05:05:50 np0005593232 jolly_keldysh[324414]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:05:50 np0005593232 jolly_keldysh[324414]: --> relative data size: 1.0
Jan 23 05:05:50 np0005593232 jolly_keldysh[324414]: --> All data devices are unavailable
Jan 23 05:05:50 np0005593232 systemd[1]: libpod-10b69e400cc9fc01bb8d965e0384e6742460c787dc666e15eacb4ffdff8e8c39.scope: Deactivated successfully.
Jan 23 05:05:50 np0005593232 podman[324400]: 2026-01-23 10:05:50.244136954 +0000 UTC m=+1.200857024 container died 10b69e400cc9fc01bb8d965e0384e6742460c787dc666e15eacb4ffdff8e8c39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:05:50 np0005593232 systemd[1]: var-lib-containers-storage-overlay-9ebb61136f0fa9fde08b7ea0f6756564af6f8cecc8d050da0d9766de4f218ad7-merged.mount: Deactivated successfully.
Jan 23 05:05:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:05:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:50.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:05:50 np0005593232 podman[324400]: 2026-01-23 10:05:50.307299209 +0000 UTC m=+1.264019249 container remove 10b69e400cc9fc01bb8d965e0384e6742460c787dc666e15eacb4ffdff8e8c39 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:05:50 np0005593232 systemd[1]: libpod-conmon-10b69e400cc9fc01bb8d965e0384e6742460c787dc666e15eacb4ffdff8e8c39.scope: Deactivated successfully.
Jan 23 05:05:50 np0005593232 nova_compute[250269]: 2026-01-23 10:05:50.473 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:50 np0005593232 nova_compute[250269]: 2026-01-23 10:05:50.490 250273 DEBUG nova.network.neutron [req-41fc6c96-2770-4763-a069-4fbb59e82b77 req-6b8336cd-6179-456a-8d80-ec89e00609a9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Updated VIF entry in instance network info cache for port f35157ad-0f62-41af-962e-a3afcd66400e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:05:50 np0005593232 nova_compute[250269]: 2026-01-23 10:05:50.490 250273 DEBUG nova.network.neutron [req-41fc6c96-2770-4763-a069-4fbb59e82b77 req-6b8336cd-6179-456a-8d80-ec89e00609a9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Updating instance_info_cache with network_info: [{"id": "f35157ad-0f62-41af-962e-a3afcd66400e", "address": "fa:16:3e:15:67:98", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35157ad-0f", "ovs_interfaceid": "f35157ad-0f62-41af-962e-a3afcd66400e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:05:50 np0005593232 nova_compute[250269]: 2026-01-23 10:05:50.525 250273 DEBUG oslo_concurrency.lockutils [req-41fc6c96-2770-4763-a069-4fbb59e82b77 req-6b8336cd-6179-456a-8d80-ec89e00609a9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-483afeac-561b-48ff-89d6-d02d1b615fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:05:50 np0005593232 nova_compute[250269]: 2026-01-23 10:05:50.798 250273 DEBUG oslo_concurrency.lockutils [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "refresh_cache-483afeac-561b-48ff-89d6-d02d1b615fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:05:50 np0005593232 nova_compute[250269]: 2026-01-23 10:05:50.799 250273 DEBUG oslo_concurrency.lockutils [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquired lock "refresh_cache-483afeac-561b-48ff-89d6-d02d1b615fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:05:50 np0005593232 nova_compute[250269]: 2026-01-23 10:05:50.799 250273 DEBUG nova.network.neutron [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:05:50 np0005593232 podman[324581]: 2026-01-23 10:05:50.941578752 +0000 UTC m=+0.037153156 container create bbd0d859e8ba4386336d17b78b340494b0135338c480576ff202b3fb20504262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:05:50 np0005593232 systemd[1]: Started libpod-conmon-bbd0d859e8ba4386336d17b78b340494b0135338c480576ff202b3fb20504262.scope.
Jan 23 05:05:50 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:05:51 np0005593232 podman[324581]: 2026-01-23 10:05:51.012214359 +0000 UTC m=+0.107788784 container init bbd0d859e8ba4386336d17b78b340494b0135338c480576ff202b3fb20504262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 23 05:05:51 np0005593232 podman[324581]: 2026-01-23 10:05:51.018409826 +0000 UTC m=+0.113984230 container start bbd0d859e8ba4386336d17b78b340494b0135338c480576ff202b3fb20504262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 05:05:51 np0005593232 podman[324581]: 2026-01-23 10:05:50.925559867 +0000 UTC m=+0.021134291 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:05:51 np0005593232 podman[324581]: 2026-01-23 10:05:51.02313885 +0000 UTC m=+0.118727904 container attach bbd0d859e8ba4386336d17b78b340494b0135338c480576ff202b3fb20504262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_leakey, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:05:51 np0005593232 mystifying_leakey[324598]: 167 167
Jan 23 05:05:51 np0005593232 systemd[1]: libpod-bbd0d859e8ba4386336d17b78b340494b0135338c480576ff202b3fb20504262.scope: Deactivated successfully.
Jan 23 05:05:51 np0005593232 podman[324581]: 2026-01-23 10:05:51.026352671 +0000 UTC m=+0.121927075 container died bbd0d859e8ba4386336d17b78b340494b0135338c480576ff202b3fb20504262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:05:51 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7ead6993e284453e02732d33c2c15e4285588770de5c47a189757a32fd0c30a5-merged.mount: Deactivated successfully.
Jan 23 05:05:51 np0005593232 podman[324581]: 2026-01-23 10:05:51.063063404 +0000 UTC m=+0.158637808 container remove bbd0d859e8ba4386336d17b78b340494b0135338c480576ff202b3fb20504262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 05:05:51 np0005593232 systemd[1]: libpod-conmon-bbd0d859e8ba4386336d17b78b340494b0135338c480576ff202b3fb20504262.scope: Deactivated successfully.
Jan 23 05:05:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2246: 321 pgs: 321 active+clean; 461 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 166 op/s
Jan 23 05:05:51 np0005593232 nova_compute[250269]: 2026-01-23 10:05:51.152 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:51 np0005593232 podman[324621]: 2026-01-23 10:05:51.242885633 +0000 UTC m=+0.045926455 container create c2e6aecb9e2abdac30b4653d2f43c38ce98043f0b56c77792c94450738919fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:05:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:51.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:51 np0005593232 systemd[1]: Started libpod-conmon-c2e6aecb9e2abdac30b4653d2f43c38ce98043f0b56c77792c94450738919fff.scope.
Jan 23 05:05:51 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:05:51 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26a31fe79fd7f656a47e286652c08d8f2dbcd0486ecafedb3bf191578c669f9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:05:51 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26a31fe79fd7f656a47e286652c08d8f2dbcd0486ecafedb3bf191578c669f9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:05:51 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26a31fe79fd7f656a47e286652c08d8f2dbcd0486ecafedb3bf191578c669f9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:05:51 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26a31fe79fd7f656a47e286652c08d8f2dbcd0486ecafedb3bf191578c669f9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:05:51 np0005593232 podman[324621]: 2026-01-23 10:05:51.308976261 +0000 UTC m=+0.112017103 container init c2e6aecb9e2abdac30b4653d2f43c38ce98043f0b56c77792c94450738919fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:05:51 np0005593232 podman[324621]: 2026-01-23 10:05:51.21708777 +0000 UTC m=+0.020128612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:05:51 np0005593232 podman[324621]: 2026-01-23 10:05:51.316033792 +0000 UTC m=+0.119074624 container start c2e6aecb9e2abdac30b4653d2f43c38ce98043f0b56c77792c94450738919fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keller, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 05:05:51 np0005593232 podman[324621]: 2026-01-23 10:05:51.319340466 +0000 UTC m=+0.122381298 container attach c2e6aecb9e2abdac30b4653d2f43c38ce98043f0b56c77792c94450738919fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keller, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:05:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]: {
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:    "0": [
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:        {
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:            "devices": [
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:                "/dev/loop3"
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:            ],
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:            "lv_name": "ceph_lv0",
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:            "lv_size": "7511998464",
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:            "name": "ceph_lv0",
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:            "tags": {
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:                "ceph.cluster_name": "ceph",
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:                "ceph.crush_device_class": "",
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:                "ceph.encrypted": "0",
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:                "ceph.osd_id": "0",
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:                "ceph.type": "block",
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:                "ceph.vdo": "0"
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:            },
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:            "type": "block",
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:            "vg_name": "ceph_vg0"
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:        }
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]:    ]
Jan 23 05:05:52 np0005593232 sleepy_keller[324638]: }
Jan 23 05:05:52 np0005593232 systemd[1]: libpod-c2e6aecb9e2abdac30b4653d2f43c38ce98043f0b56c77792c94450738919fff.scope: Deactivated successfully.
Jan 23 05:05:52 np0005593232 podman[324621]: 2026-01-23 10:05:52.206585437 +0000 UTC m=+1.009626269 container died c2e6aecb9e2abdac30b4653d2f43c38ce98043f0b56c77792c94450738919fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keller, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 05:05:52 np0005593232 systemd[1]: var-lib-containers-storage-overlay-26a31fe79fd7f656a47e286652c08d8f2dbcd0486ecafedb3bf191578c669f9e-merged.mount: Deactivated successfully.
Jan 23 05:05:52 np0005593232 podman[324621]: 2026-01-23 10:05:52.261076986 +0000 UTC m=+1.064117808 container remove c2e6aecb9e2abdac30b4653d2f43c38ce98043f0b56c77792c94450738919fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_keller, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:05:52 np0005593232 systemd[1]: libpod-conmon-c2e6aecb9e2abdac30b4653d2f43c38ce98043f0b56c77792c94450738919fff.scope: Deactivated successfully.
Jan 23 05:05:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:05:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:52.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:05:52 np0005593232 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000073.scope: Deactivated successfully.
Jan 23 05:05:52 np0005593232 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d00000073.scope: Consumed 13.225s CPU time.
Jan 23 05:05:52 np0005593232 systemd-machined[215836]: Machine qemu-47-instance-00000073 terminated.
Jan 23 05:05:52 np0005593232 podman[324803]: 2026-01-23 10:05:52.881050163 +0000 UTC m=+0.039735910 container create 57359b8b456fe677837bf30beccb5734fdd8aec47ca2a8ab190f877d1fec98f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_johnson, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:05:52 np0005593232 systemd[1]: Started libpod-conmon-57359b8b456fe677837bf30beccb5734fdd8aec47ca2a8ab190f877d1fec98f5.scope.
Jan 23 05:05:52 np0005593232 podman[324803]: 2026-01-23 10:05:52.863935536 +0000 UTC m=+0.022621303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:05:52 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:05:52 np0005593232 podman[324803]: 2026-01-23 10:05:52.975093175 +0000 UTC m=+0.133778932 container init 57359b8b456fe677837bf30beccb5734fdd8aec47ca2a8ab190f877d1fec98f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_johnson, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:05:52 np0005593232 podman[324803]: 2026-01-23 10:05:52.982161786 +0000 UTC m=+0.140847523 container start 57359b8b456fe677837bf30beccb5734fdd8aec47ca2a8ab190f877d1fec98f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_johnson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:05:52 np0005593232 nifty_johnson[324819]: 167 167
Jan 23 05:05:52 np0005593232 podman[324803]: 2026-01-23 10:05:52.986025606 +0000 UTC m=+0.144711383 container attach 57359b8b456fe677837bf30beccb5734fdd8aec47ca2a8ab190f877d1fec98f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 23 05:05:52 np0005593232 systemd[1]: libpod-57359b8b456fe677837bf30beccb5734fdd8aec47ca2a8ab190f877d1fec98f5.scope: Deactivated successfully.
Jan 23 05:05:52 np0005593232 conmon[324819]: conmon 57359b8b456fe677837b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-57359b8b456fe677837bf30beccb5734fdd8aec47ca2a8ab190f877d1fec98f5.scope/container/memory.events
Jan 23 05:05:52 np0005593232 podman[324803]: 2026-01-23 10:05:52.98934587 +0000 UTC m=+0.148031617 container died 57359b8b456fe677837bf30beccb5734fdd8aec47ca2a8ab190f877d1fec98f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 05:05:52 np0005593232 nova_compute[250269]: 2026-01-23 10:05:52.987 250273 DEBUG nova.network.neutron [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Updating instance_info_cache with network_info: [{"id": "f35157ad-0f62-41af-962e-a3afcd66400e", "address": "fa:16:3e:15:67:98", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35157ad-0f", "ovs_interfaceid": "f35157ad-0f62-41af-962e-a3afcd66400e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:05:53 np0005593232 nova_compute[250269]: 2026-01-23 10:05:53.009 250273 DEBUG oslo_concurrency.lockutils [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Releasing lock "refresh_cache-483afeac-561b-48ff-89d6-d02d1b615fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:05:53 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c2cbf3d6d07f3deb019548337113570aa6feafff84e54ee2dfa1ec23f02596cb-merged.mount: Deactivated successfully.
Jan 23 05:05:53 np0005593232 podman[324803]: 2026-01-23 10:05:53.034626917 +0000 UTC m=+0.193312674 container remove 57359b8b456fe677837bf30beccb5734fdd8aec47ca2a8ab190f877d1fec98f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:05:53 np0005593232 systemd[1]: libpod-conmon-57359b8b456fe677837bf30beccb5734fdd8aec47ca2a8ab190f877d1fec98f5.scope: Deactivated successfully.
Jan 23 05:05:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2247: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 4.3 MiB/s wr, 275 op/s
Jan 23 05:05:53 np0005593232 nova_compute[250269]: 2026-01-23 10:05:53.125 250273 DEBUG nova.virt.libvirt.driver [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Jan 23 05:05:53 np0005593232 nova_compute[250269]: 2026-01-23 10:05:53.126 250273 DEBUG nova.virt.libvirt.volume.remotefs [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Creating file /var/lib/nova/instances/483afeac-561b-48ff-89d6-d02d1b615fc9/bcaf39bd737b4a0ea92f0d2028f4ebbb.tmp on remote host 192.168.122.102 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Jan 23 05:05:53 np0005593232 nova_compute[250269]: 2026-01-23 10:05:53.126 250273 DEBUG oslo_concurrency.processutils [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/483afeac-561b-48ff-89d6-d02d1b615fc9/bcaf39bd737b4a0ea92f0d2028f4ebbb.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:53 np0005593232 nova_compute[250269]: 2026-01-23 10:05:53.192 250273 INFO nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Instance shutdown successfully after 13 seconds.#033[00m
Jan 23 05:05:53 np0005593232 nova_compute[250269]: 2026-01-23 10:05:53.207 250273 INFO nova.virt.libvirt.driver [-] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Instance destroyed successfully.#033[00m
Jan 23 05:05:53 np0005593232 nova_compute[250269]: 2026-01-23 10:05:53.213 250273 INFO nova.virt.libvirt.driver [-] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Instance destroyed successfully.#033[00m
Jan 23 05:05:53 np0005593232 podman[324844]: 2026-01-23 10:05:53.217860203 +0000 UTC m=+0.047862731 container create 238a3a11bb9f404699ef8c9dd8adf6e633f5e878097b2b19f05f80592aec8e5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 23 05:05:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:53.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:53 np0005593232 podman[324844]: 2026-01-23 10:05:53.197493205 +0000 UTC m=+0.027495753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:05:53 np0005593232 systemd[1]: Started libpod-conmon-238a3a11bb9f404699ef8c9dd8adf6e633f5e878097b2b19f05f80592aec8e5e.scope.
Jan 23 05:05:53 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:05:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947b65cbbe6de17721cff5153a331f4e03f3f7162639835d58d3fb4b62e32cfb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:05:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947b65cbbe6de17721cff5153a331f4e03f3f7162639835d58d3fb4b62e32cfb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:05:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947b65cbbe6de17721cff5153a331f4e03f3f7162639835d58d3fb4b62e32cfb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:05:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/947b65cbbe6de17721cff5153a331f4e03f3f7162639835d58d3fb4b62e32cfb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:05:53 np0005593232 podman[324844]: 2026-01-23 10:05:53.345590543 +0000 UTC m=+0.175593091 container init 238a3a11bb9f404699ef8c9dd8adf6e633f5e878097b2b19f05f80592aec8e5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:05:53 np0005593232 podman[324844]: 2026-01-23 10:05:53.352328374 +0000 UTC m=+0.182330902 container start 238a3a11bb9f404699ef8c9dd8adf6e633f5e878097b2b19f05f80592aec8e5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_chebyshev, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Jan 23 05:05:53 np0005593232 podman[324844]: 2026-01-23 10:05:53.356186274 +0000 UTC m=+0.186188802 container attach 238a3a11bb9f404699ef8c9dd8adf6e633f5e878097b2b19f05f80592aec8e5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:05:53 np0005593232 nova_compute[250269]: 2026-01-23 10:05:53.586 250273 DEBUG oslo_concurrency.processutils [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/483afeac-561b-48ff-89d6-d02d1b615fc9/bcaf39bd737b4a0ea92f0d2028f4ebbb.tmp" returned: 1 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:53 np0005593232 nova_compute[250269]: 2026-01-23 10:05:53.588 250273 DEBUG oslo_concurrency.processutils [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] 'ssh -o BatchMode=yes 192.168.122.102 touch /var/lib/nova/instances/483afeac-561b-48ff-89d6-d02d1b615fc9/bcaf39bd737b4a0ea92f0d2028f4ebbb.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 23 05:05:53 np0005593232 nova_compute[250269]: 2026-01-23 10:05:53.588 250273 DEBUG nova.virt.libvirt.volume.remotefs [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Creating directory /var/lib/nova/instances/483afeac-561b-48ff-89d6-d02d1b615fc9 on remote host 192.168.122.102 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Jan 23 05:05:53 np0005593232 nova_compute[250269]: 2026-01-23 10:05:53.589 250273 DEBUG oslo_concurrency.processutils [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/483afeac-561b-48ff-89d6-d02d1b615fc9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:53 np0005593232 nova_compute[250269]: 2026-01-23 10:05:53.798 250273 DEBUG oslo_concurrency.processutils [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "ssh -o BatchMode=yes 192.168.122.102 mkdir -p /var/lib/nova/instances/483afeac-561b-48ff-89d6-d02d1b615fc9" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:53 np0005593232 nova_compute[250269]: 2026-01-23 10:05:53.803 250273 DEBUG nova.virt.libvirt.driver [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 23 05:05:53 np0005593232 nova_compute[250269]: 2026-01-23 10:05:53.813 250273 INFO nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Deleting instance files /var/lib/nova/instances/24f5e3af-2c13-43f5-a624-dc229b9023e3_del#033[00m
Jan 23 05:05:53 np0005593232 nova_compute[250269]: 2026-01-23 10:05:53.815 250273 INFO nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Deletion of /var/lib/nova/instances/24f5e3af-2c13-43f5-a624-dc229b9023e3_del complete#033[00m
Jan 23 05:05:53 np0005593232 nova_compute[250269]: 2026-01-23 10:05:53.985 250273 DEBUG nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:05:53 np0005593232 nova_compute[250269]: 2026-01-23 10:05:53.987 250273 INFO nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Creating image(s)#033[00m
Jan 23 05:05:54 np0005593232 nova_compute[250269]: 2026-01-23 10:05:54.025 250273 DEBUG nova.storage.rbd_utils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] rbd image 24f5e3af-2c13-43f5-a624-dc229b9023e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:05:54 np0005593232 nova_compute[250269]: 2026-01-23 10:05:54.062 250273 DEBUG nova.storage.rbd_utils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] rbd image 24f5e3af-2c13-43f5-a624-dc229b9023e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:05:54 np0005593232 nova_compute[250269]: 2026-01-23 10:05:54.093 250273 DEBUG nova.storage.rbd_utils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] rbd image 24f5e3af-2c13-43f5-a624-dc229b9023e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:05:54 np0005593232 nova_compute[250269]: 2026-01-23 10:05:54.101 250273 DEBUG oslo_concurrency.processutils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8edc4c18d7d1964a485fb1b305c460bdc5a45b20 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:54 np0005593232 nova_compute[250269]: 2026-01-23 10:05:54.173 250273 DEBUG oslo_concurrency.processutils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/8edc4c18d7d1964a485fb1b305c460bdc5a45b20 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:54 np0005593232 nova_compute[250269]: 2026-01-23 10:05:54.174 250273 DEBUG oslo_concurrency.lockutils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Acquiring lock "8edc4c18d7d1964a485fb1b305c460bdc5a45b20" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:05:54 np0005593232 nova_compute[250269]: 2026-01-23 10:05:54.175 250273 DEBUG oslo_concurrency.lockutils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lock "8edc4c18d7d1964a485fb1b305c460bdc5a45b20" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:05:54 np0005593232 nova_compute[250269]: 2026-01-23 10:05:54.175 250273 DEBUG oslo_concurrency.lockutils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lock "8edc4c18d7d1964a485fb1b305c460bdc5a45b20" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:05:54 np0005593232 nova_compute[250269]: 2026-01-23 10:05:54.245 250273 DEBUG nova.storage.rbd_utils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] rbd image 24f5e3af-2c13-43f5-a624-dc229b9023e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:05:54 np0005593232 nova_compute[250269]: 2026-01-23 10:05:54.250 250273 DEBUG oslo_concurrency.processutils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/8edc4c18d7d1964a485fb1b305c460bdc5a45b20 24f5e3af-2c13-43f5-a624-dc229b9023e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:54.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2248: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 248 op/s
Jan 23 05:05:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:05:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:55.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:05:55 np0005593232 awesome_chebyshev[324875]: {
Jan 23 05:05:55 np0005593232 awesome_chebyshev[324875]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:05:55 np0005593232 awesome_chebyshev[324875]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:05:55 np0005593232 awesome_chebyshev[324875]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:05:55 np0005593232 awesome_chebyshev[324875]:        "osd_id": 0,
Jan 23 05:05:55 np0005593232 awesome_chebyshev[324875]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:05:55 np0005593232 awesome_chebyshev[324875]:        "type": "bluestore"
Jan 23 05:05:55 np0005593232 awesome_chebyshev[324875]:    }
Jan 23 05:05:55 np0005593232 awesome_chebyshev[324875]: }
Jan 23 05:05:55 np0005593232 systemd[1]: libpod-238a3a11bb9f404699ef8c9dd8adf6e633f5e878097b2b19f05f80592aec8e5e.scope: Deactivated successfully.
Jan 23 05:05:55 np0005593232 systemd[1]: libpod-238a3a11bb9f404699ef8c9dd8adf6e633f5e878097b2b19f05f80592aec8e5e.scope: Consumed 1.983s CPU time.
Jan 23 05:05:55 np0005593232 podman[324844]: 2026-01-23 10:05:55.338633606 +0000 UTC m=+2.168636174 container died 238a3a11bb9f404699ef8c9dd8adf6e633f5e878097b2b19f05f80592aec8e5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_chebyshev, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:05:55 np0005593232 nova_compute[250269]: 2026-01-23 10:05:55.397 250273 DEBUG oslo_concurrency.processutils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/8edc4c18d7d1964a485fb1b305c460bdc5a45b20 24f5e3af-2c13-43f5-a624-dc229b9023e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:55 np0005593232 systemd[1]: var-lib-containers-storage-overlay-947b65cbbe6de17721cff5153a331f4e03f3f7162639835d58d3fb4b62e32cfb-merged.mount: Deactivated successfully.
Jan 23 05:05:55 np0005593232 nova_compute[250269]: 2026-01-23 10:05:55.524 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:55 np0005593232 nova_compute[250269]: 2026-01-23 10:05:55.529 250273 DEBUG nova.storage.rbd_utils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] resizing rbd image 24f5e3af-2c13-43f5-a624-dc229b9023e3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 05:05:55 np0005593232 podman[324844]: 2026-01-23 10:05:55.549167158 +0000 UTC m=+2.379169686 container remove 238a3a11bb9f404699ef8c9dd8adf6e633f5e878097b2b19f05f80592aec8e5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 05:05:55 np0005593232 systemd[1]: libpod-conmon-238a3a11bb9f404699ef8c9dd8adf6e633f5e878097b2b19f05f80592aec8e5e.scope: Deactivated successfully.
Jan 23 05:05:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:05:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:05:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:05:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:05:55 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 53dca4cd-2d00-477f-9cbe-e1f2325dd28d does not exist
Jan 23 05:05:55 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 39892029-92b5-4387-a6e2-9b41433f1f7a does not exist
Jan 23 05:05:55 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev f2e0f6b1-256e-462f-8eec-22b036c10870 does not exist
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.050 250273 DEBUG nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.051 250273 DEBUG nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Ensure instance console log exists: /var/lib/nova/instances/24f5e3af-2c13-43f5-a624-dc229b9023e3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.052 250273 DEBUG oslo_concurrency.lockutils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.053 250273 DEBUG oslo_concurrency.lockutils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.053 250273 DEBUG oslo_concurrency.lockutils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.055 250273 DEBUG nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:31Z,direct_url=<?>,disk_format='qcow2',id=ae1f9e37-418c-462f-81d1-3599a6d89de9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.059 250273 WARNING nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.103 250273 DEBUG nova.virt.libvirt.host [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.104 250273 DEBUG nova.virt.libvirt.host [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.132 250273 DEBUG nova.virt.libvirt.host [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.133 250273 DEBUG nova.virt.libvirt.host [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.136 250273 DEBUG nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.136 250273 DEBUG nova.virt.hardware [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:31Z,direct_url=<?>,disk_format='qcow2',id=ae1f9e37-418c-462f-81d1-3599a6d89de9,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.138 250273 DEBUG nova.virt.hardware [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.138 250273 DEBUG nova.virt.hardware [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.139 250273 DEBUG nova.virt.hardware [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.139 250273 DEBUG nova.virt.hardware [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.140 250273 DEBUG nova.virt.hardware [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.140 250273 DEBUG nova.virt.hardware [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.141 250273 DEBUG nova.virt.hardware [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.141 250273 DEBUG nova.virt.hardware [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.142 250273 DEBUG nova.virt.hardware [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.142 250273 DEBUG nova.virt.hardware [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.143 250273 DEBUG nova.objects.instance [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 24f5e3af-2c13-43f5-a624-dc229b9023e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.155 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.208 250273 DEBUG oslo_concurrency.processutils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:05:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:56.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:05:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:05:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:05:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3033898096' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.675 250273 DEBUG oslo_concurrency.processutils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.715 250273 DEBUG nova.storage.rbd_utils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] rbd image 24f5e3af-2c13-43f5-a624-dc229b9023e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:05:56 np0005593232 nova_compute[250269]: 2026-01-23 10:05:56.722 250273 DEBUG oslo_concurrency.processutils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:56 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:05:56 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:05:56 np0005593232 ovn_controller[151001]: 2026-01-23T10:05:56Z|00392|binding|INFO|Releasing lport f7023d86-3158-4cc4-b690-f57bb76e92b5 from this chassis (sb_readonly=0)
Jan 23 05:05:57 np0005593232 ovn_controller[151001]: 2026-01-23T10:05:57Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:15:67:98 10.100.0.5
Jan 23 05:05:57 np0005593232 ovn_controller[151001]: 2026-01-23T10:05:57Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:15:67:98 10.100.0.5
Jan 23 05:05:57 np0005593232 nova_compute[250269]: 2026-01-23 10:05:57.033 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:05:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2249: 321 pgs: 321 active+clean; 420 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.3 MiB/s wr, 291 op/s
Jan 23 05:05:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:05:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3335554791' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:05:57 np0005593232 nova_compute[250269]: 2026-01-23 10:05:57.165 250273 DEBUG oslo_concurrency.processutils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:57 np0005593232 nova_compute[250269]: 2026-01-23 10:05:57.168 250273 DEBUG nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:05:57 np0005593232 nova_compute[250269]:  <uuid>24f5e3af-2c13-43f5-a624-dc229b9023e3</uuid>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:  <name>instance-00000073</name>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerShowV254Test-server-1403815402</nova:name>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:05:56</nova:creationTime>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:05:57 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:        <nova:user uuid="1a65ac354e8a4e2b965a34382f0645d2">tempest-ServerShowV254Test-38819028-project-member</nova:user>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:        <nova:project uuid="f418c2e5b22b4eeb9d97365ab71edb23">tempest-ServerShowV254Test-38819028</nova:project>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="ae1f9e37-418c-462f-81d1-3599a6d89de9"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <nova:ports/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <entry name="serial">24f5e3af-2c13-43f5-a624-dc229b9023e3</entry>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <entry name="uuid">24f5e3af-2c13-43f5-a624-dc229b9023e3</entry>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/24f5e3af-2c13-43f5-a624-dc229b9023e3_disk">
Jan 23 05:05:57 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:05:57 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/24f5e3af-2c13-43f5-a624-dc229b9023e3_disk.config">
Jan 23 05:05:57 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:05:57 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/24f5e3af-2c13-43f5-a624-dc229b9023e3/console.log" append="off"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:05:57 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:05:57 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:05:57 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:05:57 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:05:57 np0005593232 nova_compute[250269]: 2026-01-23 10:05:57.230 250273 DEBUG nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:05:57 np0005593232 nova_compute[250269]: 2026-01-23 10:05:57.230 250273 DEBUG nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:05:57 np0005593232 nova_compute[250269]: 2026-01-23 10:05:57.231 250273 INFO nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Using config drive#033[00m
Jan 23 05:05:57 np0005593232 nova_compute[250269]: 2026-01-23 10:05:57.259 250273 DEBUG nova.storage.rbd_utils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] rbd image 24f5e3af-2c13-43f5-a624-dc229b9023e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:05:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:57.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:57 np0005593232 nova_compute[250269]: 2026-01-23 10:05:57.286 250273 DEBUG nova.objects.instance [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 24f5e3af-2c13-43f5-a624-dc229b9023e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:05:57 np0005593232 nova_compute[250269]: 2026-01-23 10:05:57.624 250273 INFO nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Creating config drive at /var/lib/nova/instances/24f5e3af-2c13-43f5-a624-dc229b9023e3/disk.config#033[00m
Jan 23 05:05:57 np0005593232 nova_compute[250269]: 2026-01-23 10:05:57.628 250273 DEBUG oslo_concurrency.processutils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/24f5e3af-2c13-43f5-a624-dc229b9023e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa6qtjcql execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:57 np0005593232 nova_compute[250269]: 2026-01-23 10:05:57.761 250273 DEBUG oslo_concurrency.processutils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/24f5e3af-2c13-43f5-a624-dc229b9023e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa6qtjcql" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:57 np0005593232 nova_compute[250269]: 2026-01-23 10:05:57.797 250273 DEBUG nova.storage.rbd_utils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] rbd image 24f5e3af-2c13-43f5-a624-dc229b9023e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:05:57 np0005593232 nova_compute[250269]: 2026-01-23 10:05:57.801 250273 DEBUG oslo_concurrency.processutils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/24f5e3af-2c13-43f5-a624-dc229b9023e3/disk.config 24f5e3af-2c13-43f5-a624-dc229b9023e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:05:58 np0005593232 nova_compute[250269]: 2026-01-23 10:05:58.012 250273 DEBUG oslo_concurrency.processutils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/24f5e3af-2c13-43f5-a624-dc229b9023e3/disk.config 24f5e3af-2c13-43f5-a624-dc229b9023e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:05:58 np0005593232 nova_compute[250269]: 2026-01-23 10:05:58.013 250273 INFO nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Deleting local config drive /var/lib/nova/instances/24f5e3af-2c13-43f5-a624-dc229b9023e3/disk.config because it was imported into RBD.#033[00m
Jan 23 05:05:58 np0005593232 systemd-machined[215836]: New machine qemu-49-instance-00000073.
Jan 23 05:05:58 np0005593232 systemd[1]: Started Virtual Machine qemu-49-instance-00000073.
Jan 23 05:05:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:05:58.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2250: 321 pgs: 321 active+clean; 393 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 7.4 MiB/s wr, 275 op/s
Jan 23 05:05:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:05:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:05:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:05:59.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.351 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Removed pending event for 24f5e3af-2c13-43f5-a624-dc229b9023e3 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.352 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162759.351084, 24f5e3af-2c13-43f5-a624-dc229b9023e3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.352 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.356 250273 DEBUG nova.compute.manager [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.357 250273 DEBUG nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.363 250273 INFO nova.virt.libvirt.driver [-] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Instance spawned successfully.#033[00m
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.363 250273 DEBUG nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.400 250273 DEBUG nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.400 250273 DEBUG nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.401 250273 DEBUG nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.401 250273 DEBUG nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.402 250273 DEBUG nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.402 250273 DEBUG nova.virt.libvirt.driver [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.435 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.438 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.468 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.469 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162759.3547294, 24f5e3af-2c13-43f5-a624-dc229b9023e3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.469 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] VM Started (Lifecycle Event)#033[00m
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.504 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.509 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.562 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.587 250273 DEBUG nova.compute.manager [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.815 250273 DEBUG oslo_concurrency.lockutils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.816 250273 DEBUG oslo_concurrency.lockutils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:05:59 np0005593232 nova_compute[250269]: 2026-01-23 10:05:59.816 250273 DEBUG nova.objects.instance [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 23 05:06:00 np0005593232 nova_compute[250269]: 2026-01-23 10:06:00.074 250273 DEBUG oslo_concurrency.lockutils [None req-abc8dfd0-bd39-491d-8204-4adf4dba2f4a 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.258s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:06:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:06:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:00.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:06:00 np0005593232 nova_compute[250269]: 2026-01-23 10:06:00.528 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2251: 321 pgs: 321 active+clean; 393 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 636 KiB/s rd, 6.0 MiB/s wr, 208 op/s
Jan 23 05:06:01 np0005593232 nova_compute[250269]: 2026-01-23 10:06:01.214 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:06:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:01.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:06:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:06:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:06:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:02.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:06:02 np0005593232 nova_compute[250269]: 2026-01-23 10:06:02.479 250273 DEBUG oslo_concurrency.lockutils [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Acquiring lock "24f5e3af-2c13-43f5-a624-dc229b9023e3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:06:02 np0005593232 nova_compute[250269]: 2026-01-23 10:06:02.481 250273 DEBUG oslo_concurrency.lockutils [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lock "24f5e3af-2c13-43f5-a624-dc229b9023e3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:06:02 np0005593232 nova_compute[250269]: 2026-01-23 10:06:02.481 250273 DEBUG oslo_concurrency.lockutils [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Acquiring lock "24f5e3af-2c13-43f5-a624-dc229b9023e3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:06:02 np0005593232 nova_compute[250269]: 2026-01-23 10:06:02.481 250273 DEBUG oslo_concurrency.lockutils [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lock "24f5e3af-2c13-43f5-a624-dc229b9023e3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:06:02 np0005593232 nova_compute[250269]: 2026-01-23 10:06:02.481 250273 DEBUG oslo_concurrency.lockutils [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lock "24f5e3af-2c13-43f5-a624-dc229b9023e3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:06:02 np0005593232 nova_compute[250269]: 2026-01-23 10:06:02.482 250273 INFO nova.compute.manager [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Terminating instance#033[00m
Jan 23 05:06:02 np0005593232 nova_compute[250269]: 2026-01-23 10:06:02.483 250273 DEBUG oslo_concurrency.lockutils [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Acquiring lock "refresh_cache-24f5e3af-2c13-43f5-a624-dc229b9023e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:06:02 np0005593232 nova_compute[250269]: 2026-01-23 10:06:02.483 250273 DEBUG oslo_concurrency.lockutils [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Acquired lock "refresh_cache-24f5e3af-2c13-43f5-a624-dc229b9023e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:06:02 np0005593232 nova_compute[250269]: 2026-01-23 10:06:02.484 250273 DEBUG nova.network.neutron [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:06:02 np0005593232 nova_compute[250269]: 2026-01-23 10:06:02.979 250273 DEBUG nova.network.neutron [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:06:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2252: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 305 op/s
Jan 23 05:06:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:03.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:03 np0005593232 nova_compute[250269]: 2026-01-23 10:06:03.980 250273 DEBUG nova.network.neutron [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:06:04 np0005593232 nova_compute[250269]: 2026-01-23 10:06:04.084 250273 DEBUG oslo_concurrency.lockutils [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Releasing lock "refresh_cache-24f5e3af-2c13-43f5-a624-dc229b9023e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:06:04 np0005593232 nova_compute[250269]: 2026-01-23 10:06:04.085 250273 DEBUG nova.compute.manager [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:06:04 np0005593232 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000073.scope: Deactivated successfully.
Jan 23 05:06:04 np0005593232 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d00000073.scope: Consumed 5.939s CPU time.
Jan 23 05:06:04 np0005593232 systemd-machined[215836]: Machine qemu-49-instance-00000073 terminated.
Jan 23 05:06:04 np0005593232 nova_compute[250269]: 2026-01-23 10:06:04.255 250273 DEBUG nova.virt.libvirt.driver [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 23 05:06:04 np0005593232 nova_compute[250269]: 2026-01-23 10:06:04.305 250273 INFO nova.virt.libvirt.driver [-] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Instance destroyed successfully.#033[00m
Jan 23 05:06:04 np0005593232 nova_compute[250269]: 2026-01-23 10:06:04.306 250273 DEBUG nova.objects.instance [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lazy-loading 'resources' on Instance uuid 24f5e3af-2c13-43f5-a624-dc229b9023e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:06:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:06:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:04.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:06:04 np0005593232 nova_compute[250269]: 2026-01-23 10:06:04.757 250273 INFO nova.virt.libvirt.driver [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Deleting instance files /var/lib/nova/instances/24f5e3af-2c13-43f5-a624-dc229b9023e3_del#033[00m
Jan 23 05:06:04 np0005593232 nova_compute[250269]: 2026-01-23 10:06:04.757 250273 INFO nova.virt.libvirt.driver [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Deletion of /var/lib/nova/instances/24f5e3af-2c13-43f5-a624-dc229b9023e3_del complete#033[00m
Jan 23 05:06:05 np0005593232 nova_compute[250269]: 2026-01-23 10:06:05.028 250273 INFO nova.compute.manager [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Took 0.94 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:06:05 np0005593232 nova_compute[250269]: 2026-01-23 10:06:05.028 250273 DEBUG oslo.service.loopingcall [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:06:05 np0005593232 nova_compute[250269]: 2026-01-23 10:06:05.029 250273 DEBUG nova.compute.manager [-] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:06:05 np0005593232 nova_compute[250269]: 2026-01-23 10:06:05.029 250273 DEBUG nova.network.neutron [-] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:06:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2253: 321 pgs: 321 active+clean; 405 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 196 op/s
Jan 23 05:06:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:06:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:05.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:06:05 np0005593232 nova_compute[250269]: 2026-01-23 10:06:05.533 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:06 np0005593232 nova_compute[250269]: 2026-01-23 10:06:06.218 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:06:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:06.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:06:06 np0005593232 nova_compute[250269]: 2026-01-23 10:06:06.378 250273 DEBUG nova.network.neutron [-] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:06:06 np0005593232 nova_compute[250269]: 2026-01-23 10:06:06.452 250273 DEBUG nova.network.neutron [-] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:06:06 np0005593232 kernel: tapf35157ad-0f (unregistering): left promiscuous mode
Jan 23 05:06:06 np0005593232 NetworkManager[49057]: <info>  [1769162766.5142] device (tapf35157ad-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:06:06 np0005593232 nova_compute[250269]: 2026-01-23 10:06:06.522 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:06 np0005593232 ovn_controller[151001]: 2026-01-23T10:06:06Z|00393|binding|INFO|Releasing lport f35157ad-0f62-41af-962e-a3afcd66400e from this chassis (sb_readonly=0)
Jan 23 05:06:06 np0005593232 ovn_controller[151001]: 2026-01-23T10:06:06Z|00394|binding|INFO|Setting lport f35157ad-0f62-41af-962e-a3afcd66400e down in Southbound
Jan 23 05:06:06 np0005593232 ovn_controller[151001]: 2026-01-23T10:06:06Z|00395|binding|INFO|Removing iface tapf35157ad-0f ovn-installed in OVS
Jan 23 05:06:06 np0005593232 nova_compute[250269]: 2026-01-23 10:06:06.524 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:06 np0005593232 nova_compute[250269]: 2026-01-23 10:06:06.532 250273 INFO nova.compute.manager [-] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Took 1.50 seconds to deallocate network for instance.#033[00m
Jan 23 05:06:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:06:06.532 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:67:98 10.100.0.5'], port_security=['fa:16:3e:15:67:98 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '483afeac-561b-48ff-89d6-d02d1b615fc9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c16cd713fa74a88b43e4edf01c273bd', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b9910180-8b38-41b2-8cb3-4e4af7eb2c2b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c3aed5f-30b8-4c57-808e-87764ab67fc8, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=f35157ad-0f62-41af-962e-a3afcd66400e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:06:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:06:06.533 161902 INFO neutron.agent.ovn.metadata.agent [-] Port f35157ad-0f62-41af-962e-a3afcd66400e in datapath 8575e824-4be0-4206-873e-2f9a3d1ded0b unbound from our chassis#033[00m
Jan 23 05:06:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:06:06.534 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8575e824-4be0-4206-873e-2f9a3d1ded0b#033[00m
Jan 23 05:06:06 np0005593232 nova_compute[250269]: 2026-01-23 10:06:06.547 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:06:06.554 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[11737197-feb8-4151-b89b-ca07018f7ae4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:06:06 np0005593232 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000074.scope: Deactivated successfully.
Jan 23 05:06:06 np0005593232 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000074.scope: Consumed 15.052s CPU time.
Jan 23 05:06:06 np0005593232 systemd-machined[215836]: Machine qemu-48-instance-00000074 terminated.
Jan 23 05:06:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:06:06.589 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb57263-cb98-4daf-a3c8-b255a5cbeef0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:06:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:06:06.593 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[413f1f8c-119e-4330-8586-d62970bba81d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:06:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:06:06.623 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[07a3eca1-12bc-4e2c-b751-1cabc6769740]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:06:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:06:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:06:06.648 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7072ac1e-a57a-4f4a-8685-70fa80e6a9db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8575e824-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:16:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 118], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662100, 'reachable_time': 19533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325396, 'error': None, 'target': 'ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:06:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:06:06.665 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[28266cb8-d146-4970-8b18-b02b00b0ce27]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8575e824-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662112, 'tstamp': 662112}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325397, 'error': None, 'target': 'ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8575e824-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 662115, 'tstamp': 662115}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325397, 'error': None, 'target': 'ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:06:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:06:06.667 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8575e824-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:06:06 np0005593232 nova_compute[250269]: 2026-01-23 10:06:06.669 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:06 np0005593232 nova_compute[250269]: 2026-01-23 10:06:06.674 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:06:06.674 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8575e824-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:06:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:06:06.675 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:06:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:06:06.675 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8575e824-40, col_values=(('external_ids', {'iface-id': 'f7023d86-3158-4cc4-b690-f57bb76e92b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:06:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:06:06.675 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:06:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2254: 321 pgs: 321 active+clean; 387 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 208 op/s
Jan 23 05:06:07 np0005593232 nova_compute[250269]: 2026-01-23 10:06:07.268 250273 INFO nova.virt.libvirt.driver [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Instance shutdown successfully after 13 seconds.#033[00m
Jan 23 05:06:07 np0005593232 nova_compute[250269]: 2026-01-23 10:06:07.275 250273 DEBUG oslo_concurrency.lockutils [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:06:07 np0005593232 nova_compute[250269]: 2026-01-23 10:06:07.276 250273 DEBUG oslo_concurrency.lockutils [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:06:07 np0005593232 nova_compute[250269]: 2026-01-23 10:06:07.277 250273 INFO nova.virt.libvirt.driver [-] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Instance destroyed successfully.#033[00m
Jan 23 05:06:07 np0005593232 nova_compute[250269]: 2026-01-23 10:06:07.278 250273 DEBUG nova.virt.libvirt.vif [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:05:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1175667419',display_name='tempest-ServerActionsTestOtherA-server-1175667419',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1175667419',id=116,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPDPMbCnqcp11s7OR05vsDdiZlZSU5ZbBJSLaqQpawTODCANj+91AmOb6Hdh0FgzlQPvmSu+VYXOLfZik0SA3L4m61/nruOol9dJ9Mz34f8cV2NJKksVR2Ar2t+W5r4M6w==',key_name='tempest-keypair-2078677939',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:05:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8c16cd713fa74a88b43e4edf01c273bd',ramdisk_id='',reservation_id='r-z67ffbwd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-882763067',owner_user_name='tempest-ServerActionsTestOtherA-882763067-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:05:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='29710db389c842df836944048225740f',uuid=483afeac-561b-48ff-89d6-d02d1b615fc9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f35157ad-0f62-41af-962e-a3afcd66400e", "address": "fa:16:3e:15:67:98", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-2130726771-network", "vif_mac": "fa:16:3e:15:67:98"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35157ad-0f", "ovs_interfaceid": "f35157ad-0f62-41af-962e-a3afcd66400e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:06:07 np0005593232 nova_compute[250269]: 2026-01-23 10:06:07.278 250273 DEBUG nova.network.os_vif_util [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converting VIF {"id": "f35157ad-0f62-41af-962e-a3afcd66400e", "address": "fa:16:3e:15:67:98", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherA-2130726771-network", "vif_mac": "fa:16:3e:15:67:98"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35157ad-0f", "ovs_interfaceid": "f35157ad-0f62-41af-962e-a3afcd66400e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:06:07 np0005593232 nova_compute[250269]: 2026-01-23 10:06:07.279 250273 DEBUG nova.network.os_vif_util [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:15:67:98,bridge_name='br-int',has_traffic_filtering=True,id=f35157ad-0f62-41af-962e-a3afcd66400e,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf35157ad-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:06:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:07 np0005593232 nova_compute[250269]: 2026-01-23 10:06:07.279 250273 DEBUG os_vif [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:67:98,bridge_name='br-int',has_traffic_filtering=True,id=f35157ad-0f62-41af-962e-a3afcd66400e,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf35157ad-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:06:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:07.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:07 np0005593232 nova_compute[250269]: 2026-01-23 10:06:07.281 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:07 np0005593232 nova_compute[250269]: 2026-01-23 10:06:07.282 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf35157ad-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:06:07 np0005593232 nova_compute[250269]: 2026-01-23 10:06:07.311 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:07 np0005593232 nova_compute[250269]: 2026-01-23 10:06:07.312 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:07 np0005593232 nova_compute[250269]: 2026-01-23 10:06:07.316 250273 INFO os_vif [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:67:98,bridge_name='br-int',has_traffic_filtering=True,id=f35157ad-0f62-41af-962e-a3afcd66400e,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf35157ad-0f')#033[00m
Jan 23 05:06:07 np0005593232 nova_compute[250269]: 2026-01-23 10:06:07.321 250273 DEBUG nova.virt.libvirt.driver [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:06:07 np0005593232 nova_compute[250269]: 2026-01-23 10:06:07.321 250273 DEBUG nova.virt.libvirt.driver [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:06:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:06:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:06:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:06:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:06:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:06:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:06:07 np0005593232 nova_compute[250269]: 2026-01-23 10:06:07.767 250273 DEBUG oslo_concurrency.processutils [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:06:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:06:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4200625506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:06:08 np0005593232 nova_compute[250269]: 2026-01-23 10:06:08.197 250273 DEBUG oslo_concurrency.processutils [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:06:08 np0005593232 nova_compute[250269]: 2026-01-23 10:06:08.205 250273 DEBUG nova.compute.provider_tree [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:06:08 np0005593232 nova_compute[250269]: 2026-01-23 10:06:08.231 250273 DEBUG nova.scheduler.client.report [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:06:08 np0005593232 nova_compute[250269]: 2026-01-23 10:06:08.275 250273 DEBUG oslo_concurrency.lockutils [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.999s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:06:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:08.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:08 np0005593232 nova_compute[250269]: 2026-01-23 10:06:08.681 250273 INFO nova.scheduler.client.report [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Deleted allocations for instance 24f5e3af-2c13-43f5-a624-dc229b9023e3#033[00m
Jan 23 05:06:08 np0005593232 nova_compute[250269]: 2026-01-23 10:06:08.827 250273 DEBUG nova.compute.manager [req-61da747e-304e-4733-8868-9cf5dca7e300 req-4c452f90-2fc5-4ca0-b9c9-5171e6c5e3db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Received event network-vif-unplugged-f35157ad-0f62-41af-962e-a3afcd66400e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:06:08 np0005593232 nova_compute[250269]: 2026-01-23 10:06:08.827 250273 DEBUG oslo_concurrency.lockutils [req-61da747e-304e-4733-8868-9cf5dca7e300 req-4c452f90-2fc5-4ca0-b9c9-5171e6c5e3db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "483afeac-561b-48ff-89d6-d02d1b615fc9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:06:08 np0005593232 nova_compute[250269]: 2026-01-23 10:06:08.827 250273 DEBUG oslo_concurrency.lockutils [req-61da747e-304e-4733-8868-9cf5dca7e300 req-4c452f90-2fc5-4ca0-b9c9-5171e6c5e3db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "483afeac-561b-48ff-89d6-d02d1b615fc9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:06:08 np0005593232 nova_compute[250269]: 2026-01-23 10:06:08.828 250273 DEBUG oslo_concurrency.lockutils [req-61da747e-304e-4733-8868-9cf5dca7e300 req-4c452f90-2fc5-4ca0-b9c9-5171e6c5e3db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "483afeac-561b-48ff-89d6-d02d1b615fc9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:06:08 np0005593232 nova_compute[250269]: 2026-01-23 10:06:08.828 250273 DEBUG nova.compute.manager [req-61da747e-304e-4733-8868-9cf5dca7e300 req-4c452f90-2fc5-4ca0-b9c9-5171e6c5e3db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] No waiting events found dispatching network-vif-unplugged-f35157ad-0f62-41af-962e-a3afcd66400e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:06:08 np0005593232 nova_compute[250269]: 2026-01-23 10:06:08.828 250273 WARNING nova.compute.manager [req-61da747e-304e-4733-8868-9cf5dca7e300 req-4c452f90-2fc5-4ca0-b9c9-5171e6c5e3db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Received unexpected event network-vif-unplugged-f35157ad-0f62-41af-962e-a3afcd66400e for instance with vm_state active and task_state resize_migrating.#033[00m
Jan 23 05:06:09 np0005593232 nova_compute[250269]: 2026-01-23 10:06:09.010 250273 DEBUG oslo_concurrency.lockutils [None req-bf0d4709-7161-48bc-9e37-1258fea00f04 1a65ac354e8a4e2b965a34382f0645d2 f418c2e5b22b4eeb9d97365ab71edb23 - - default default] Lock "24f5e3af-2c13-43f5-a624-dc229b9023e3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:06:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2255: 321 pgs: 321 active+clean; 359 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 184 op/s
Jan 23 05:06:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:09.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:06:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:10.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:06:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2256: 321 pgs: 321 active+clean; 359 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 136 KiB/s wr, 128 op/s
Jan 23 05:06:11 np0005593232 nova_compute[250269]: 2026-01-23 10:06:11.220 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:11.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:06:12 np0005593232 nova_compute[250269]: 2026-01-23 10:06:12.311 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:12.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:12 np0005593232 nova_compute[250269]: 2026-01-23 10:06:12.347 250273 DEBUG nova.compute.manager [req-8b786cf4-ff56-4032-9152-efdd85e93c14 req-f1b79472-5cb5-4207-8da8-cbdb56d67fc7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Received event network-vif-plugged-f35157ad-0f62-41af-962e-a3afcd66400e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:06:12 np0005593232 nova_compute[250269]: 2026-01-23 10:06:12.347 250273 DEBUG oslo_concurrency.lockutils [req-8b786cf4-ff56-4032-9152-efdd85e93c14 req-f1b79472-5cb5-4207-8da8-cbdb56d67fc7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "483afeac-561b-48ff-89d6-d02d1b615fc9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:06:12 np0005593232 nova_compute[250269]: 2026-01-23 10:06:12.348 250273 DEBUG oslo_concurrency.lockutils [req-8b786cf4-ff56-4032-9152-efdd85e93c14 req-f1b79472-5cb5-4207-8da8-cbdb56d67fc7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "483afeac-561b-48ff-89d6-d02d1b615fc9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:06:12 np0005593232 nova_compute[250269]: 2026-01-23 10:06:12.348 250273 DEBUG oslo_concurrency.lockutils [req-8b786cf4-ff56-4032-9152-efdd85e93c14 req-f1b79472-5cb5-4207-8da8-cbdb56d67fc7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "483afeac-561b-48ff-89d6-d02d1b615fc9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:06:12 np0005593232 nova_compute[250269]: 2026-01-23 10:06:12.348 250273 DEBUG nova.compute.manager [req-8b786cf4-ff56-4032-9152-efdd85e93c14 req-f1b79472-5cb5-4207-8da8-cbdb56d67fc7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] No waiting events found dispatching network-vif-plugged-f35157ad-0f62-41af-962e-a3afcd66400e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:06:12 np0005593232 nova_compute[250269]: 2026-01-23 10:06:12.348 250273 WARNING nova.compute.manager [req-8b786cf4-ff56-4032-9152-efdd85e93c14 req-f1b79472-5cb5-4207-8da8-cbdb56d67fc7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Received unexpected event network-vif-plugged-f35157ad-0f62-41af-962e-a3afcd66400e for instance with vm_state active and task_state resize_migrating.#033[00m
Jan 23 05:06:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2257: 321 pgs: 321 active+clean; 359 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 136 KiB/s wr, 128 op/s
Jan 23 05:06:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:13.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:14.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:14 np0005593232 podman[325436]: 2026-01-23 10:06:14.477802081 +0000 UTC m=+0.127232966 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 23 05:06:15 np0005593232 nova_compute[250269]: 2026-01-23 10:06:15.024 250273 DEBUG neutronclient.v2_0.client [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port f35157ad-0f62-41af-962e-a3afcd66400e for host compute-2.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Jan 23 05:06:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2258: 321 pgs: 321 active+clean; 359 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 30 KiB/s wr, 30 op/s
Jan 23 05:06:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:15.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:15 np0005593232 nova_compute[250269]: 2026-01-23 10:06:15.441 250273 DEBUG oslo_concurrency.lockutils [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "483afeac-561b-48ff-89d6-d02d1b615fc9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:06:15 np0005593232 nova_compute[250269]: 2026-01-23 10:06:15.441 250273 DEBUG oslo_concurrency.lockutils [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "483afeac-561b-48ff-89d6-d02d1b615fc9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:06:15 np0005593232 nova_compute[250269]: 2026-01-23 10:06:15.441 250273 DEBUG oslo_concurrency.lockutils [None req-a83e9424-9f46-4ec6-baee-dff8a11a74ed 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "483afeac-561b-48ff-89d6-d02d1b615fc9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:06:16 np0005593232 nova_compute[250269]: 2026-01-23 10:06:16.222 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:16.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:06:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2259: 321 pgs: 321 active+clean; 359 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 30 KiB/s wr, 32 op/s
Jan 23 05:06:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:17.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:17 np0005593232 nova_compute[250269]: 2026-01-23 10:06:17.313 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:17 np0005593232 podman[325463]: 2026-01-23 10:06:17.38676886 +0000 UTC m=+0.048606202 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 23 05:06:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:06:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:18.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:06:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2260: 321 pgs: 321 active+clean; 359 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 10 KiB/s wr, 21 op/s
Jan 23 05:06:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:19.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:19 np0005593232 nova_compute[250269]: 2026-01-23 10:06:19.305 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769162764.3038478, 24f5e3af-2c13-43f5-a624-dc229b9023e3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:06:19 np0005593232 nova_compute[250269]: 2026-01-23 10:06:19.305 250273 INFO nova.compute.manager [-] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:06:19 np0005593232 nova_compute[250269]: 2026-01-23 10:06:19.416 250273 DEBUG nova.compute.manager [None req-0c5b86a6-f309-4663-950a-d0064bc53489 - - - - - -] [instance: 24f5e3af-2c13-43f5-a624-dc229b9023e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:06:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:06:19.452 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:06:19 np0005593232 nova_compute[250269]: 2026-01-23 10:06:19.453 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:06:19.453 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:06:19 np0005593232 nova_compute[250269]: 2026-01-23 10:06:19.798 250273 DEBUG nova.compute.manager [req-9be21528-3741-4bcd-8c70-cef1125531f1 req-acc1caeb-d312-45d5-a999-980f8c89c69f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Received event network-changed-f35157ad-0f62-41af-962e-a3afcd66400e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:06:19 np0005593232 nova_compute[250269]: 2026-01-23 10:06:19.799 250273 DEBUG nova.compute.manager [req-9be21528-3741-4bcd-8c70-cef1125531f1 req-acc1caeb-d312-45d5-a999-980f8c89c69f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Refreshing instance network info cache due to event network-changed-f35157ad-0f62-41af-962e-a3afcd66400e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:06:19 np0005593232 nova_compute[250269]: 2026-01-23 10:06:19.799 250273 DEBUG oslo_concurrency.lockutils [req-9be21528-3741-4bcd-8c70-cef1125531f1 req-acc1caeb-d312-45d5-a999-980f8c89c69f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-483afeac-561b-48ff-89d6-d02d1b615fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:06:19 np0005593232 nova_compute[250269]: 2026-01-23 10:06:19.799 250273 DEBUG oslo_concurrency.lockutils [req-9be21528-3741-4bcd-8c70-cef1125531f1 req-acc1caeb-d312-45d5-a999-980f8c89c69f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-483afeac-561b-48ff-89d6-d02d1b615fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:06:19 np0005593232 nova_compute[250269]: 2026-01-23 10:06:19.799 250273 DEBUG nova.network.neutron [req-9be21528-3741-4bcd-8c70-cef1125531f1 req-acc1caeb-d312-45d5-a999-980f8c89c69f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Refreshing network info cache for port f35157ad-0f62-41af-962e-a3afcd66400e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:06:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:20.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:06:20.431996) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162780432267, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 828, "num_deletes": 251, "total_data_size": 1133300, "memory_usage": 1150736, "flush_reason": "Manual Compaction"}
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162780450590, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 1121837, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49738, "largest_seqno": 50565, "table_properties": {"data_size": 1117637, "index_size": 1852, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9647, "raw_average_key_size": 19, "raw_value_size": 1109191, "raw_average_value_size": 2277, "num_data_blocks": 81, "num_entries": 487, "num_filter_entries": 487, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769162717, "oldest_key_time": 1769162717, "file_creation_time": 1769162780, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 18722 microseconds, and 10251 cpu microseconds.
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:06:20.450751) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 1121837 bytes OK
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:06:20.450864) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:06:20.454186) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:06:20.454233) EVENT_LOG_v1 {"time_micros": 1769162780454222, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:06:20.454260) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 1129197, prev total WAL file size 1129197, number of live WAL files 2.
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:06:20.455161) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(1095KB)], [110(12MB)]
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162780455329, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 14330349, "oldest_snapshot_seqno": -1}
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 7668 keys, 12511568 bytes, temperature: kUnknown
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162780559832, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 12511568, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12459372, "index_size": 31924, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19205, "raw_key_size": 198685, "raw_average_key_size": 25, "raw_value_size": 12321590, "raw_average_value_size": 1606, "num_data_blocks": 1261, "num_entries": 7668, "num_filter_entries": 7668, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769162780, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:06:20.560156) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 12511568 bytes
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:06:20.563924) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 137.0 rd, 119.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 12.6 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(23.9) write-amplify(11.2) OK, records in: 8182, records dropped: 514 output_compression: NoCompression
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:06:20.563962) EVENT_LOG_v1 {"time_micros": 1769162780563938, "job": 66, "event": "compaction_finished", "compaction_time_micros": 104630, "compaction_time_cpu_micros": 41374, "output_level": 6, "num_output_files": 1, "total_output_size": 12511568, "num_input_records": 8182, "num_output_records": 7668, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162780564305, "job": 66, "event": "table_file_deletion", "file_number": 112}
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162780566912, "job": 66, "event": "table_file_deletion", "file_number": 110}
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:06:20.454945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:06:20.566994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:06:20.566999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:06:20.567002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:06:20.567003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:06:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:06:20.567005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:06:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2261: 321 pgs: 321 active+clean; 359 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 3 op/s
Jan 23 05:06:21 np0005593232 nova_compute[250269]: 2026-01-23 10:06:21.225 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:06:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:21.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:06:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:06:21 np0005593232 nova_compute[250269]: 2026-01-23 10:06:21.777 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769162766.7759922, 483afeac-561b-48ff-89d6-d02d1b615fc9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:06:21 np0005593232 nova_compute[250269]: 2026-01-23 10:06:21.778 250273 INFO nova.compute.manager [-] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:06:22 np0005593232 nova_compute[250269]: 2026-01-23 10:06:22.316 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:22.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2262: 321 pgs: 321 active+clean; 359 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 71 op/s
Jan 23 05:06:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:06:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:23.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:06:23 np0005593232 nova_compute[250269]: 2026-01-23 10:06:23.417 250273 DEBUG nova.compute.manager [None req-44559805-d98c-4780-9564-d1f8c013106d - - - - - -] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:06:23 np0005593232 nova_compute[250269]: 2026-01-23 10:06:23.422 250273 DEBUG nova.compute.manager [None req-44559805-d98c-4780-9564-d1f8c013106d - - - - - -] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: resize_migrated, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:06:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:06:23.456 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:06:23 np0005593232 nova_compute[250269]: 2026-01-23 10:06:23.593 250273 INFO nova.compute.manager [None req-44559805-d98c-4780-9564-d1f8c013106d - - - - - -] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] During the sync_power process the instance has moved from host compute-2.ctlplane.example.com to host compute-0.ctlplane.example.com#033[00m
Jan 23 05:06:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:06:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:24.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:06:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2263: 321 pgs: 321 active+clean; 359 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 71 op/s
Jan 23 05:06:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:06:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:25.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:06:26 np0005593232 nova_compute[250269]: 2026-01-23 10:06:26.226 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:26.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:06:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2264: 321 pgs: 321 active+clean; 359 MiB data, 1015 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 71 op/s
Jan 23 05:06:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:27.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:27 np0005593232 nova_compute[250269]: 2026-01-23 10:06:27.318 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:28.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:28 np0005593232 ovn_controller[151001]: 2026-01-23T10:06:28Z|00396|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Jan 23 05:06:28 np0005593232 nova_compute[250269]: 2026-01-23 10:06:28.720 250273 DEBUG nova.network.neutron [req-9be21528-3741-4bcd-8c70-cef1125531f1 req-acc1caeb-d312-45d5-a999-980f8c89c69f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Updated VIF entry in instance network info cache for port f35157ad-0f62-41af-962e-a3afcd66400e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:06:28 np0005593232 nova_compute[250269]: 2026-01-23 10:06:28.720 250273 DEBUG nova.network.neutron [req-9be21528-3741-4bcd-8c70-cef1125531f1 req-acc1caeb-d312-45d5-a999-980f8c89c69f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Updating instance_info_cache with network_info: [{"id": "f35157ad-0f62-41af-962e-a3afcd66400e", "address": "fa:16:3e:15:67:98", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35157ad-0f", "ovs_interfaceid": "f35157ad-0f62-41af-962e-a3afcd66400e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:06:28 np0005593232 nova_compute[250269]: 2026-01-23 10:06:28.765 250273 DEBUG oslo_concurrency.lockutils [req-9be21528-3741-4bcd-8c70-cef1125531f1 req-acc1caeb-d312-45d5-a999-980f8c89c69f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-483afeac-561b-48ff-89d6-d02d1b615fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:06:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2265: 321 pgs: 321 active+clean; 385 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 948 KiB/s wr, 85 op/s
Jan 23 05:06:29 np0005593232 nova_compute[250269]: 2026-01-23 10:06:29.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:06:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:29.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:30 np0005593232 systemd[1]: Starting dnf makecache...
Jan 23 05:06:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:06:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:30.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:06:30 np0005593232 dnf[325541]: Metadata cache refreshed recently.
Jan 23 05:06:30 np0005593232 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 23 05:06:30 np0005593232 systemd[1]: Finished dnf makecache.
Jan 23 05:06:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2266: 321 pgs: 321 active+clean; 385 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 948 KiB/s wr, 84 op/s
Jan 23 05:06:31 np0005593232 nova_compute[250269]: 2026-01-23 10:06:31.228 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:06:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:31.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:06:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:06:32 np0005593232 nova_compute[250269]: 2026-01-23 10:06:32.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:06:32 np0005593232 nova_compute[250269]: 2026-01-23 10:06:32.321 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:06:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:32.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:06:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2267: 321 pgs: 321 active+clean; 452 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 216 op/s
Jan 23 05:06:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:33.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:34.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2268: 321 pgs: 321 active+clean; 452 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 149 op/s
Jan 23 05:06:35 np0005593232 nova_compute[250269]: 2026-01-23 10:06:35.138 250273 DEBUG nova.compute.manager [req-337f2ec7-f944-4dc9-b7fb-017dc608ac6d req-3e3ee73f-2662-409f-8103-80829efe1a04 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Received event network-vif-plugged-f35157ad-0f62-41af-962e-a3afcd66400e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:06:35 np0005593232 nova_compute[250269]: 2026-01-23 10:06:35.138 250273 DEBUG oslo_concurrency.lockutils [req-337f2ec7-f944-4dc9-b7fb-017dc608ac6d req-3e3ee73f-2662-409f-8103-80829efe1a04 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "483afeac-561b-48ff-89d6-d02d1b615fc9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:06:35 np0005593232 nova_compute[250269]: 2026-01-23 10:06:35.139 250273 DEBUG oslo_concurrency.lockutils [req-337f2ec7-f944-4dc9-b7fb-017dc608ac6d req-3e3ee73f-2662-409f-8103-80829efe1a04 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "483afeac-561b-48ff-89d6-d02d1b615fc9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:06:35 np0005593232 nova_compute[250269]: 2026-01-23 10:06:35.139 250273 DEBUG oslo_concurrency.lockutils [req-337f2ec7-f944-4dc9-b7fb-017dc608ac6d req-3e3ee73f-2662-409f-8103-80829efe1a04 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "483afeac-561b-48ff-89d6-d02d1b615fc9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:06:35 np0005593232 nova_compute[250269]: 2026-01-23 10:06:35.139 250273 DEBUG nova.compute.manager [req-337f2ec7-f944-4dc9-b7fb-017dc608ac6d req-3e3ee73f-2662-409f-8103-80829efe1a04 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] No waiting events found dispatching network-vif-plugged-f35157ad-0f62-41af-962e-a3afcd66400e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:06:35 np0005593232 nova_compute[250269]: 2026-01-23 10:06:35.140 250273 WARNING nova.compute.manager [req-337f2ec7-f944-4dc9-b7fb-017dc608ac6d req-3e3ee73f-2662-409f-8103-80829efe1a04 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Received unexpected event network-vif-plugged-f35157ad-0f62-41af-962e-a3afcd66400e for instance with vm_state active and task_state resize_finish.#033[00m
Jan 23 05:06:35 np0005593232 nova_compute[250269]: 2026-01-23 10:06:35.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:06:35 np0005593232 nova_compute[250269]: 2026-01-23 10:06:35.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:06:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:35.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:36 np0005593232 nova_compute[250269]: 2026-01-23 10:06:36.232 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:36.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:06:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2269: 321 pgs: 321 active+clean; 453 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 217 op/s
Jan 23 05:06:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:06:37
Jan 23 05:06:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:06:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:06:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'images', 'default.rgw.control', 'volumes', 'default.rgw.meta', 'vms', '.mgr']
Jan 23 05:06:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:06:37 np0005593232 nova_compute[250269]: 2026-01-23 10:06:37.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:06:37 np0005593232 nova_compute[250269]: 2026-01-23 10:06:37.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:06:37 np0005593232 nova_compute[250269]: 2026-01-23 10:06:37.323 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:06:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:37.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:06:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:06:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:06:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:06:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:06:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:06:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:06:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:06:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:06:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:06:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:06:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:06:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:06:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:06:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:06:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:06:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:06:38 np0005593232 nova_compute[250269]: 2026-01-23 10:06:38.344 250273 DEBUG nova.compute.manager [req-e1a1057e-e9ad-4b83-bd68-3e5ae3f10407 req-dee2f310-3fa7-426f-878b-7e76075f008f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Received event network-vif-plugged-f35157ad-0f62-41af-962e-a3afcd66400e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:06:38 np0005593232 nova_compute[250269]: 2026-01-23 10:06:38.344 250273 DEBUG oslo_concurrency.lockutils [req-e1a1057e-e9ad-4b83-bd68-3e5ae3f10407 req-dee2f310-3fa7-426f-878b-7e76075f008f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "483afeac-561b-48ff-89d6-d02d1b615fc9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:06:38 np0005593232 nova_compute[250269]: 2026-01-23 10:06:38.345 250273 DEBUG oslo_concurrency.lockutils [req-e1a1057e-e9ad-4b83-bd68-3e5ae3f10407 req-dee2f310-3fa7-426f-878b-7e76075f008f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "483afeac-561b-48ff-89d6-d02d1b615fc9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:06:38 np0005593232 nova_compute[250269]: 2026-01-23 10:06:38.345 250273 DEBUG oslo_concurrency.lockutils [req-e1a1057e-e9ad-4b83-bd68-3e5ae3f10407 req-dee2f310-3fa7-426f-878b-7e76075f008f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "483afeac-561b-48ff-89d6-d02d1b615fc9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:06:38 np0005593232 nova_compute[250269]: 2026-01-23 10:06:38.345 250273 DEBUG nova.compute.manager [req-e1a1057e-e9ad-4b83-bd68-3e5ae3f10407 req-dee2f310-3fa7-426f-878b-7e76075f008f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] No waiting events found dispatching network-vif-plugged-f35157ad-0f62-41af-962e-a3afcd66400e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:06:38 np0005593232 nova_compute[250269]: 2026-01-23 10:06:38.346 250273 WARNING nova.compute.manager [req-e1a1057e-e9ad-4b83-bd68-3e5ae3f10407 req-dee2f310-3fa7-426f-878b-7e76075f008f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Received unexpected event network-vif-plugged-f35157ad-0f62-41af-962e-a3afcd66400e for instance with vm_state resized and task_state None.#033[00m
Jan 23 05:06:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:06:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:38.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:06:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2270: 321 pgs: 321 active+clean; 454 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.6 MiB/s wr, 272 op/s
Jan 23 05:06:39 np0005593232 nova_compute[250269]: 2026-01-23 10:06:39.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:06:39 np0005593232 nova_compute[250269]: 2026-01-23 10:06:39.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:06:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:39.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:39 np0005593232 nova_compute[250269]: 2026-01-23 10:06:39.336 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:06:40 np0005593232 nova_compute[250269]: 2026-01-23 10:06:40.158 250273 DEBUG oslo_concurrency.lockutils [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "483afeac-561b-48ff-89d6-d02d1b615fc9" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:06:40 np0005593232 nova_compute[250269]: 2026-01-23 10:06:40.159 250273 DEBUG oslo_concurrency.lockutils [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "483afeac-561b-48ff-89d6-d02d1b615fc9" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:06:40 np0005593232 nova_compute[250269]: 2026-01-23 10:06:40.159 250273 DEBUG nova.compute.manager [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Going to confirm migration 16 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Jan 23 05:06:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:06:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:40.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:06:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2271: 321 pgs: 321 active+clean; 454 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 2.7 MiB/s wr, 256 op/s
Jan 23 05:06:41 np0005593232 nova_compute[250269]: 2026-01-23 10:06:41.234 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:41.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:41 np0005593232 nova_compute[250269]: 2026-01-23 10:06:41.410 250273 DEBUG neutronclient.v2_0.client [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port f35157ad-0f62-41af-962e-a3afcd66400e for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Jan 23 05:06:41 np0005593232 nova_compute[250269]: 2026-01-23 10:06:41.411 250273 DEBUG oslo_concurrency.lockutils [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "refresh_cache-483afeac-561b-48ff-89d6-d02d1b615fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:06:41 np0005593232 nova_compute[250269]: 2026-01-23 10:06:41.411 250273 DEBUG oslo_concurrency.lockutils [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquired lock "refresh_cache-483afeac-561b-48ff-89d6-d02d1b615fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:06:41 np0005593232 nova_compute[250269]: 2026-01-23 10:06:41.411 250273 DEBUG nova.network.neutron [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:06:41 np0005593232 nova_compute[250269]: 2026-01-23 10:06:41.411 250273 DEBUG nova.objects.instance [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lazy-loading 'info_cache' on Instance uuid 483afeac-561b-48ff-89d6-d02d1b615fc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:06:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:06:42 np0005593232 nova_compute[250269]: 2026-01-23 10:06:42.358 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:42.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:06:42.617 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:06:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:06:42.618 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:06:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:06:42.618 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:06:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2272: 321 pgs: 321 active+clean; 454 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 2.7 MiB/s wr, 307 op/s
Jan 23 05:06:43 np0005593232 nova_compute[250269]: 2026-01-23 10:06:43.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:06:43 np0005593232 nova_compute[250269]: 2026-01-23 10:06:43.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:06:43 np0005593232 nova_compute[250269]: 2026-01-23 10:06:43.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:06:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:43.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:43 np0005593232 nova_compute[250269]: 2026-01-23 10:06:43.456 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:06:43 np0005593232 nova_compute[250269]: 2026-01-23 10:06:43.456 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:06:43 np0005593232 nova_compute[250269]: 2026-01-23 10:06:43.456 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:06:43 np0005593232 nova_compute[250269]: 2026-01-23 10:06:43.457 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:06:43 np0005593232 nova_compute[250269]: 2026-01-23 10:06:43.457 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:06:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:06:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/791901340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:06:43 np0005593232 nova_compute[250269]: 2026-01-23 10:06:43.932 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:06:44 np0005593232 nova_compute[250269]: 2026-01-23 10:06:44.092 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:06:44 np0005593232 nova_compute[250269]: 2026-01-23 10:06:44.093 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:06:44 np0005593232 nova_compute[250269]: 2026-01-23 10:06:44.097 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:06:44 np0005593232 nova_compute[250269]: 2026-01-23 10:06:44.097 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:06:44 np0005593232 nova_compute[250269]: 2026-01-23 10:06:44.263 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:06:44 np0005593232 nova_compute[250269]: 2026-01-23 10:06:44.264 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4204MB free_disk=20.809558868408203GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:06:44 np0005593232 nova_compute[250269]: 2026-01-23 10:06:44.265 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:06:44 np0005593232 nova_compute[250269]: 2026-01-23 10:06:44.266 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:06:44 np0005593232 nova_compute[250269]: 2026-01-23 10:06:44.338 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Migration for instance 483afeac-561b-48ff-89d6-d02d1b615fc9 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 23 05:06:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:06:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:44.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:06:44 np0005593232 nova_compute[250269]: 2026-01-23 10:06:44.384 250273 INFO nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Updating resource usage from migration df920c0b-dafc-41b8-b8ba-e843582c7bd4#033[00m
Jan 23 05:06:44 np0005593232 nova_compute[250269]: 2026-01-23 10:06:44.385 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Starting to track outgoing migration df920c0b-dafc-41b8-b8ba-e843582c7bd4 with flavor 68d42077-c749-4366-ba3e-07758debb02d _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1444#033[00m
Jan 23 05:06:44 np0005593232 nova_compute[250269]: 2026-01-23 10:06:44.441 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 34c9552e-fca1-4094-96d1-eb627cda17ab actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:06:44 np0005593232 nova_compute[250269]: 2026-01-23 10:06:44.442 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Migration df920c0b-dafc-41b8-b8ba-e843582c7bd4 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 23 05:06:44 np0005593232 nova_compute[250269]: 2026-01-23 10:06:44.442 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:06:44 np0005593232 nova_compute[250269]: 2026-01-23 10:06:44.442 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:06:44 np0005593232 nova_compute[250269]: 2026-01-23 10:06:44.675 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:06:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:06:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2843168584' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:06:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2273: 321 pgs: 321 active+clean; 454 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 24 KiB/s wr, 175 op/s
Jan 23 05:06:45 np0005593232 nova_compute[250269]: 2026-01-23 10:06:45.133 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:06:45 np0005593232 nova_compute[250269]: 2026-01-23 10:06:45.140 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:06:45 np0005593232 nova_compute[250269]: 2026-01-23 10:06:45.171 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:06:45 np0005593232 nova_compute[250269]: 2026-01-23 10:06:45.199 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:06:45 np0005593232 nova_compute[250269]: 2026-01-23 10:06:45.199 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:06:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:45.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:45 np0005593232 podman[325594]: 2026-01-23 10:06:45.434725843 +0000 UTC m=+0.092870280 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 23 05:06:45 np0005593232 nova_compute[250269]: 2026-01-23 10:06:45.653 250273 DEBUG nova.network.neutron [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 483afeac-561b-48ff-89d6-d02d1b615fc9] Updating instance_info_cache with network_info: [{"id": "f35157ad-0f62-41af-962e-a3afcd66400e", "address": "fa:16:3e:15:67:98", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35157ad-0f", "ovs_interfaceid": "f35157ad-0f62-41af-962e-a3afcd66400e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:06:45 np0005593232 nova_compute[250269]: 2026-01-23 10:06:45.691 250273 DEBUG oslo_concurrency.lockutils [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Releasing lock "refresh_cache-483afeac-561b-48ff-89d6-d02d1b615fc9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:06:45 np0005593232 nova_compute[250269]: 2026-01-23 10:06:45.691 250273 DEBUG nova.objects.instance [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lazy-loading 'migration_context' on Instance uuid 483afeac-561b-48ff-89d6-d02d1b615fc9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:06:45 np0005593232 nova_compute[250269]: 2026-01-23 10:06:45.740 250273 DEBUG nova.storage.rbd_utils [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] rbd image 483afeac-561b-48ff-89d6-d02d1b615fc9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:06:45 np0005593232 nova_compute[250269]: 2026-01-23 10:06:45.748 250273 DEBUG nova.virt.libvirt.vif [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:05:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1175667419',display_name='tempest-ServerActionsTestOtherA-server-1175667419',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1175667419',id=116,image_ref='',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPDPMbCnqcp11s7OR05vsDdiZlZSU5ZbBJSLaqQpawTODCANj+91AmOb6Hdh0FgzlQPvmSu+VYXOLfZik0SA3L4m61/nruOol9dJ9Mz34f8cV2NJKksVR2Ar2t+W5r4M6w==',key_name='tempest-keypair-2078677939',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:06:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8c16cd713fa74a88b43e4edf01c273bd',ramdisk_id='',reservation_id='r-z67ffbwd',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherA-882763067',owner_user_name='tempest-ServerActionsTestOtherA-882763067-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:06:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='29710db389c842df836944048225740f',uuid=483afeac-561b-48ff-89d6-d02d1b615fc9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "f35157ad-0f62-41af-962e-a3afcd66400e", "address": "fa:16:3e:15:67:98", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", 
"type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35157ad-0f", "ovs_interfaceid": "f35157ad-0f62-41af-962e-a3afcd66400e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:06:45 np0005593232 nova_compute[250269]: 2026-01-23 10:06:45.748 250273 DEBUG nova.network.os_vif_util [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converting VIF {"id": "f35157ad-0f62-41af-962e-a3afcd66400e", "address": "fa:16:3e:15:67:98", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf35157ad-0f", "ovs_interfaceid": "f35157ad-0f62-41af-962e-a3afcd66400e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:06:45 np0005593232 nova_compute[250269]: 2026-01-23 10:06:45.749 250273 DEBUG nova.network.os_vif_util [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:15:67:98,bridge_name='br-int',has_traffic_filtering=True,id=f35157ad-0f62-41af-962e-a3afcd66400e,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf35157ad-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:06:45 np0005593232 nova_compute[250269]: 2026-01-23 10:06:45.749 250273 DEBUG os_vif [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:67:98,bridge_name='br-int',has_traffic_filtering=True,id=f35157ad-0f62-41af-962e-a3afcd66400e,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf35157ad-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:06:45 np0005593232 nova_compute[250269]: 2026-01-23 10:06:45.751 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:45 np0005593232 nova_compute[250269]: 2026-01-23 10:06:45.752 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf35157ad-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:06:45 np0005593232 nova_compute[250269]: 2026-01-23 10:06:45.752 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:06:45 np0005593232 nova_compute[250269]: 2026-01-23 10:06:45.754 250273 INFO os_vif [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:67:98,bridge_name='br-int',has_traffic_filtering=True,id=f35157ad-0f62-41af-962e-a3afcd66400e,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf35157ad-0f')#033[00m
Jan 23 05:06:45 np0005593232 nova_compute[250269]: 2026-01-23 10:06:45.755 250273 DEBUG oslo_concurrency.lockutils [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:06:45 np0005593232 nova_compute[250269]: 2026-01-23 10:06:45.755 250273 DEBUG oslo_concurrency.lockutils [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:06:45 np0005593232 nova_compute[250269]: 2026-01-23 10:06:45.890 250273 DEBUG oslo_concurrency.processutils [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:06:46 np0005593232 nova_compute[250269]: 2026-01-23 10:06:46.237 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:06:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:06:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3500712266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:06:46 np0005593232 nova_compute[250269]: 2026-01-23 10:06:46.344 250273 DEBUG oslo_concurrency.processutils [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:06:46 np0005593232 nova_compute[250269]: 2026-01-23 10:06:46.352 250273 DEBUG nova.compute.provider_tree [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:06:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:46.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:46 np0005593232 nova_compute[250269]: 2026-01-23 10:06:46.376 250273 DEBUG nova.scheduler.client.report [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:06:46 np0005593232 nova_compute[250269]: 2026-01-23 10:06:46.471 250273 DEBUG oslo_concurrency.lockutils [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:06:46 np0005593232 nova_compute[250269]: 2026-01-23 10:06:46.652 250273 INFO nova.scheduler.client.report [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Deleted allocation for migration df920c0b-dafc-41b8-b8ba-e843582c7bd4#033[00m
Jan 23 05:06:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:06:46 np0005593232 nova_compute[250269]: 2026-01-23 10:06:46.796 250273 DEBUG oslo_concurrency.lockutils [None req-70a268f8-e1e0-4688-8838-2a51af630ed1 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "483afeac-561b-48ff-89d6-d02d1b615fc9" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 6.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008515341464265773 of space, bias 1.0, pg target 2.5546024392797317 quantized to 32 (current 32)
Jan 23 05:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002165958554345803 of space, bias 1.0, pg target 0.6454556491950494 quantized to 32 (current 32)
Jan 23 05:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 23 05:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 23 05:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 23 05:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 23 05:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:06:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 23 05:06:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2274: 321 pgs: 321 active+clean; 473 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 1.0 MiB/s wr, 199 op/s
Jan 23 05:06:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:06:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:47.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:06:47 np0005593232 nova_compute[250269]: 2026-01-23 10:06:47.360 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:06:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:48.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:48 np0005593232 podman[325714]: 2026-01-23 10:06:48.407845596 +0000 UTC m=+0.061432286 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 23 05:06:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2275: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.2 MiB/s wr, 179 op/s
Jan 23 05:06:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:49.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:06:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:50.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:06:50 np0005593232 nova_compute[250269]: 2026-01-23 10:06:50.832 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:06:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2276: 321 pgs: 321 active+clean; 486 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Jan 23 05:06:51 np0005593232 nova_compute[250269]: 2026-01-23 10:06:51.239 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:06:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:51.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:06:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:52.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:52 np0005593232 nova_compute[250269]: 2026-01-23 10:06:52.407 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:06:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2277: 321 pgs: 321 active+clean; 519 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 4.3 MiB/s wr, 229 op/s
Jan 23 05:06:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:06:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:53.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:06:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:54.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2278: 321 pgs: 321 active+clean; 519 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.3 MiB/s wr, 178 op/s
Jan 23 05:06:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:55.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:56 np0005593232 nova_compute[250269]: 2026-01-23 10:06:56.242 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:06:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:56.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:06:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2279: 321 pgs: 321 active+clean; 474 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.3 MiB/s wr, 212 op/s
Jan 23 05:06:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:06:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:06:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:06:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:06:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:06:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:57.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:57 np0005593232 nova_compute[250269]: 2026-01-23 10:06:57.410 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:06:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:06:57 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 3fa908c3-ca28-4d0c-97d5-24f875b6346b does not exist
Jan 23 05:06:57 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 3d3f09f2-d1c0-4d97-a434-937debe395ff does not exist
Jan 23 05:06:57 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ddc5f91f-b77e-47e7-8372-f7b1883b475d does not exist
Jan 23 05:06:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:06:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:06:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:06:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:06:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:06:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:06:57 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:06:58 np0005593232 podman[326009]: 2026-01-23 10:06:58.089332751 +0000 UTC m=+0.037385364 container create 2f96e512a7be233b31ea94024e2739b12a575fb2ef463df913a75b56dfdd2539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hertz, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:06:58 np0005593232 systemd[1]: Started libpod-conmon-2f96e512a7be233b31ea94024e2739b12a575fb2ef463df913a75b56dfdd2539.scope.
Jan 23 05:06:58 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:06:58 np0005593232 podman[326009]: 2026-01-23 10:06:58.160103812 +0000 UTC m=+0.108156515 container init 2f96e512a7be233b31ea94024e2739b12a575fb2ef463df913a75b56dfdd2539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hertz, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:06:58 np0005593232 podman[326009]: 2026-01-23 10:06:58.166296368 +0000 UTC m=+0.114349001 container start 2f96e512a7be233b31ea94024e2739b12a575fb2ef463df913a75b56dfdd2539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hertz, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 05:06:58 np0005593232 podman[326009]: 2026-01-23 10:06:58.073214213 +0000 UTC m=+0.021266846 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:06:58 np0005593232 podman[326009]: 2026-01-23 10:06:58.170904819 +0000 UTC m=+0.118957432 container attach 2f96e512a7be233b31ea94024e2739b12a575fb2ef463df913a75b56dfdd2539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hertz, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:06:58 np0005593232 mystifying_hertz[326025]: 167 167
Jan 23 05:06:58 np0005593232 systemd[1]: libpod-2f96e512a7be233b31ea94024e2739b12a575fb2ef463df913a75b56dfdd2539.scope: Deactivated successfully.
Jan 23 05:06:58 np0005593232 podman[326009]: 2026-01-23 10:06:58.172171475 +0000 UTC m=+0.120224108 container died 2f96e512a7be233b31ea94024e2739b12a575fb2ef463df913a75b56dfdd2539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hertz, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 05:06:58 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b4a038eb028bb6d7d3af16774fe1931101eac32025d7deebc9f8728f925163db-merged.mount: Deactivated successfully.
Jan 23 05:06:58 np0005593232 podman[326009]: 2026-01-23 10:06:58.208749334 +0000 UTC m=+0.156801957 container remove 2f96e512a7be233b31ea94024e2739b12a575fb2ef463df913a75b56dfdd2539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hertz, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 05:06:58 np0005593232 systemd[1]: libpod-conmon-2f96e512a7be233b31ea94024e2739b12a575fb2ef463df913a75b56dfdd2539.scope: Deactivated successfully.
Jan 23 05:06:58 np0005593232 podman[326050]: 2026-01-23 10:06:58.374607837 +0000 UTC m=+0.038215787 container create ff37b83be6cec070af872727da730ea72e8294443cbb8bd82cc507bb6c40a628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:06:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:06:58.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:58 np0005593232 systemd[1]: Started libpod-conmon-ff37b83be6cec070af872727da730ea72e8294443cbb8bd82cc507bb6c40a628.scope.
Jan 23 05:06:58 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:06:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e19bf741212826f777054b3c66fddad784d403f2e6c7f58ca55fb05de9f80a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:06:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e19bf741212826f777054b3c66fddad784d403f2e6c7f58ca55fb05de9f80a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:06:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e19bf741212826f777054b3c66fddad784d403f2e6c7f58ca55fb05de9f80a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:06:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e19bf741212826f777054b3c66fddad784d403f2e6c7f58ca55fb05de9f80a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:06:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e19bf741212826f777054b3c66fddad784d403f2e6c7f58ca55fb05de9f80a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:06:58 np0005593232 podman[326050]: 2026-01-23 10:06:58.433949833 +0000 UTC m=+0.097557803 container init ff37b83be6cec070af872727da730ea72e8294443cbb8bd82cc507bb6c40a628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jones, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:06:58 np0005593232 podman[326050]: 2026-01-23 10:06:58.441982201 +0000 UTC m=+0.105590151 container start ff37b83be6cec070af872727da730ea72e8294443cbb8bd82cc507bb6c40a628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 05:06:58 np0005593232 podman[326050]: 2026-01-23 10:06:58.445473561 +0000 UTC m=+0.109081531 container attach ff37b83be6cec070af872727da730ea72e8294443cbb8bd82cc507bb6c40a628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jones, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 05:06:58 np0005593232 podman[326050]: 2026-01-23 10:06:58.358672994 +0000 UTC m=+0.022280954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:06:58 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:06:58 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:06:59 np0005593232 nova_compute[250269]: 2026-01-23 10:06:59.035 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:06:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2280: 321 pgs: 321 active+clean; 387 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.2 MiB/s wr, 238 op/s
Jan 23 05:06:59 np0005593232 romantic_jones[326066]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:06:59 np0005593232 romantic_jones[326066]: --> relative data size: 1.0
Jan 23 05:06:59 np0005593232 romantic_jones[326066]: --> All data devices are unavailable
Jan 23 05:06:59 np0005593232 systemd[1]: libpod-ff37b83be6cec070af872727da730ea72e8294443cbb8bd82cc507bb6c40a628.scope: Deactivated successfully.
Jan 23 05:06:59 np0005593232 conmon[326066]: conmon ff37b83be6cec070af87 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff37b83be6cec070af872727da730ea72e8294443cbb8bd82cc507bb6c40a628.scope/container/memory.events
Jan 23 05:06:59 np0005593232 podman[326050]: 2026-01-23 10:06:59.26544387 +0000 UTC m=+0.929051820 container died ff37b83be6cec070af872727da730ea72e8294443cbb8bd82cc507bb6c40a628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jones, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:06:59 np0005593232 systemd[1]: var-lib-containers-storage-overlay-22e19bf741212826f777054b3c66fddad784d403f2e6c7f58ca55fb05de9f80a-merged.mount: Deactivated successfully.
Jan 23 05:06:59 np0005593232 podman[326050]: 2026-01-23 10:06:59.316039697 +0000 UTC m=+0.979647647 container remove ff37b83be6cec070af872727da730ea72e8294443cbb8bd82cc507bb6c40a628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 05:06:59 np0005593232 systemd[1]: libpod-conmon-ff37b83be6cec070af872727da730ea72e8294443cbb8bd82cc507bb6c40a628.scope: Deactivated successfully.
Jan 23 05:06:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:06:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:06:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:06:59.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:06:59 np0005593232 podman[326238]: 2026-01-23 10:06:59.915589084 +0000 UTC m=+0.042896890 container create 3ae2bfe10021211ce598bb3d82edaa0be006494c3579d7c5945c7eb7457cca03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_diffie, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 05:06:59 np0005593232 systemd[1]: Started libpod-conmon-3ae2bfe10021211ce598bb3d82edaa0be006494c3579d7c5945c7eb7457cca03.scope.
Jan 23 05:06:59 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:06:59 np0005593232 podman[326238]: 2026-01-23 10:06:59.898132698 +0000 UTC m=+0.025440524 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:07:00 np0005593232 podman[326238]: 2026-01-23 10:07:00.005734705 +0000 UTC m=+0.133042521 container init 3ae2bfe10021211ce598bb3d82edaa0be006494c3579d7c5945c7eb7457cca03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_diffie, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:07:00 np0005593232 podman[326238]: 2026-01-23 10:07:00.013673501 +0000 UTC m=+0.140981307 container start 3ae2bfe10021211ce598bb3d82edaa0be006494c3579d7c5945c7eb7457cca03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 05:07:00 np0005593232 podman[326238]: 2026-01-23 10:07:00.01680598 +0000 UTC m=+0.144113786 container attach 3ae2bfe10021211ce598bb3d82edaa0be006494c3579d7c5945c7eb7457cca03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_diffie, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 05:07:00 np0005593232 zealous_diffie[326255]: 167 167
Jan 23 05:07:00 np0005593232 systemd[1]: libpod-3ae2bfe10021211ce598bb3d82edaa0be006494c3579d7c5945c7eb7457cca03.scope: Deactivated successfully.
Jan 23 05:07:00 np0005593232 podman[326238]: 2026-01-23 10:07:00.021478883 +0000 UTC m=+0.148786689 container died 3ae2bfe10021211ce598bb3d82edaa0be006494c3579d7c5945c7eb7457cca03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:07:00 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b40ff03be45ec61d7606519a87eb53c5dcc722bc054a19eeb2bc5bbc84bb160d-merged.mount: Deactivated successfully.
Jan 23 05:07:00 np0005593232 podman[326238]: 2026-01-23 10:07:00.059575615 +0000 UTC m=+0.186883421 container remove 3ae2bfe10021211ce598bb3d82edaa0be006494c3579d7c5945c7eb7457cca03 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_diffie, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 05:07:00 np0005593232 systemd[1]: libpod-conmon-3ae2bfe10021211ce598bb3d82edaa0be006494c3579d7c5945c7eb7457cca03.scope: Deactivated successfully.
Jan 23 05:07:00 np0005593232 podman[326280]: 2026-01-23 10:07:00.247573377 +0000 UTC m=+0.053536572 container create fdf0e25ff0694e51b2dace346c0779af172b294ca272e9dfaae4e11f36892a59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 05:07:00 np0005593232 systemd[1]: Started libpod-conmon-fdf0e25ff0694e51b2dace346c0779af172b294ca272e9dfaae4e11f36892a59.scope.
Jan 23 05:07:00 np0005593232 podman[326280]: 2026-01-23 10:07:00.22936389 +0000 UTC m=+0.035327115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:07:00 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:07:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2771308692ecfbe4c9edc9594391faa18a91bee732d6259397151d67ced231f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:07:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2771308692ecfbe4c9edc9594391faa18a91bee732d6259397151d67ced231f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:07:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2771308692ecfbe4c9edc9594391faa18a91bee732d6259397151d67ced231f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:07:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2771308692ecfbe4c9edc9594391faa18a91bee732d6259397151d67ced231f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:07:00 np0005593232 podman[326280]: 2026-01-23 10:07:00.357283515 +0000 UTC m=+0.163246800 container init fdf0e25ff0694e51b2dace346c0779af172b294ca272e9dfaae4e11f36892a59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cerf, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:07:00 np0005593232 podman[326280]: 2026-01-23 10:07:00.364260503 +0000 UTC m=+0.170223728 container start fdf0e25ff0694e51b2dace346c0779af172b294ca272e9dfaae4e11f36892a59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cerf, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 05:07:00 np0005593232 podman[326280]: 2026-01-23 10:07:00.368158264 +0000 UTC m=+0.174121499 container attach fdf0e25ff0694e51b2dace346c0779af172b294ca272e9dfaae4e11f36892a59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cerf, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 23 05:07:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:07:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:00.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:07:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2281: 321 pgs: 321 active+clean; 387 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 839 KiB/s rd, 3.0 MiB/s wr, 190 op/s
Jan 23 05:07:01 np0005593232 magical_cerf[326296]: {
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:    "0": [
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:        {
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:            "devices": [
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:                "/dev/loop3"
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:            ],
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:            "lv_name": "ceph_lv0",
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:            "lv_size": "7511998464",
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:            "name": "ceph_lv0",
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:            "tags": {
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:                "ceph.cluster_name": "ceph",
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:                "ceph.crush_device_class": "",
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:                "ceph.encrypted": "0",
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:                "ceph.osd_id": "0",
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:                "ceph.type": "block",
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:                "ceph.vdo": "0"
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:            },
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:            "type": "block",
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:            "vg_name": "ceph_vg0"
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:        }
Jan 23 05:07:01 np0005593232 magical_cerf[326296]:    ]
Jan 23 05:07:01 np0005593232 magical_cerf[326296]: }
Jan 23 05:07:01 np0005593232 systemd[1]: libpod-fdf0e25ff0694e51b2dace346c0779af172b294ca272e9dfaae4e11f36892a59.scope: Deactivated successfully.
Jan 23 05:07:01 np0005593232 podman[326280]: 2026-01-23 10:07:01.188385621 +0000 UTC m=+0.994348826 container died fdf0e25ff0694e51b2dace346c0779af172b294ca272e9dfaae4e11f36892a59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:07:01 np0005593232 systemd[1]: var-lib-containers-storage-overlay-2771308692ecfbe4c9edc9594391faa18a91bee732d6259397151d67ced231f5-merged.mount: Deactivated successfully.
Jan 23 05:07:01 np0005593232 nova_compute[250269]: 2026-01-23 10:07:01.243 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:01 np0005593232 podman[326280]: 2026-01-23 10:07:01.273260463 +0000 UTC m=+1.079223658 container remove fdf0e25ff0694e51b2dace346c0779af172b294ca272e9dfaae4e11f36892a59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_cerf, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:07:01 np0005593232 systemd[1]: libpod-conmon-fdf0e25ff0694e51b2dace346c0779af172b294ca272e9dfaae4e11f36892a59.scope: Deactivated successfully.
Jan 23 05:07:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:01.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:07:01 np0005593232 podman[326459]: 2026-01-23 10:07:01.88338584 +0000 UTC m=+0.040105791 container create d8bd91f1b4a3305881da20babcb73c11ea15fdd8b5c25571182feeadf740a2ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ramanujan, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 05:07:01 np0005593232 systemd[1]: Started libpod-conmon-d8bd91f1b4a3305881da20babcb73c11ea15fdd8b5c25571182feeadf740a2ed.scope.
Jan 23 05:07:01 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:07:01 np0005593232 podman[326459]: 2026-01-23 10:07:01.866392957 +0000 UTC m=+0.023112928 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:07:02 np0005593232 podman[326459]: 2026-01-23 10:07:02.014674631 +0000 UTC m=+0.171394592 container init d8bd91f1b4a3305881da20babcb73c11ea15fdd8b5c25571182feeadf740a2ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:07:02 np0005593232 podman[326459]: 2026-01-23 10:07:02.027154495 +0000 UTC m=+0.183874446 container start d8bd91f1b4a3305881da20babcb73c11ea15fdd8b5c25571182feeadf740a2ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ramanujan, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 05:07:02 np0005593232 stupefied_ramanujan[326476]: 167 167
Jan 23 05:07:02 np0005593232 systemd[1]: libpod-d8bd91f1b4a3305881da20babcb73c11ea15fdd8b5c25571182feeadf740a2ed.scope: Deactivated successfully.
Jan 23 05:07:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:07:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4198788587' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:07:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:07:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4198788587' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:07:02 np0005593232 podman[326459]: 2026-01-23 10:07:02.21417804 +0000 UTC m=+0.370898021 container attach d8bd91f1b4a3305881da20babcb73c11ea15fdd8b5c25571182feeadf740a2ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ramanujan, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:07:02 np0005593232 podman[326459]: 2026-01-23 10:07:02.215472006 +0000 UTC m=+0.372191967 container died d8bd91f1b4a3305881da20babcb73c11ea15fdd8b5c25571182feeadf740a2ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ramanujan, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 23 05:07:02 np0005593232 systemd[1]: var-lib-containers-storage-overlay-8a8562942752ba548ee247d48c5748253898d5ce338e670bfec547948cc5c01b-merged.mount: Deactivated successfully.
Jan 23 05:07:02 np0005593232 podman[326459]: 2026-01-23 10:07:02.313249805 +0000 UTC m=+0.469969756 container remove d8bd91f1b4a3305881da20babcb73c11ea15fdd8b5c25571182feeadf740a2ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:07:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:07:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:02.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:07:02 np0005593232 systemd[1]: libpod-conmon-d8bd91f1b4a3305881da20babcb73c11ea15fdd8b5c25571182feeadf740a2ed.scope: Deactivated successfully.
Jan 23 05:07:02 np0005593232 nova_compute[250269]: 2026-01-23 10:07:02.412 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:02 np0005593232 podman[326498]: 2026-01-23 10:07:02.573526801 +0000 UTC m=+0.089604497 container create 1601de77462beced37458094fac8972937c653f21ddc8da0c56d444a20c3ae9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 05:07:02 np0005593232 podman[326498]: 2026-01-23 10:07:02.514799752 +0000 UTC m=+0.030877478 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:07:02 np0005593232 systemd[1]: Started libpod-conmon-1601de77462beced37458094fac8972937c653f21ddc8da0c56d444a20c3ae9e.scope.
Jan 23 05:07:02 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:07:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eea58abb46ba099d5769ce2390e9e163b8f383d6585b2254d9bf9cb3e9244a01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:07:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eea58abb46ba099d5769ce2390e9e163b8f383d6585b2254d9bf9cb3e9244a01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:07:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eea58abb46ba099d5769ce2390e9e163b8f383d6585b2254d9bf9cb3e9244a01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:07:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eea58abb46ba099d5769ce2390e9e163b8f383d6585b2254d9bf9cb3e9244a01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:07:02 np0005593232 podman[326498]: 2026-01-23 10:07:02.81531129 +0000 UTC m=+0.331389016 container init 1601de77462beced37458094fac8972937c653f21ddc8da0c56d444a20c3ae9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:07:02 np0005593232 podman[326498]: 2026-01-23 10:07:02.822776373 +0000 UTC m=+0.338854089 container start 1601de77462beced37458094fac8972937c653f21ddc8da0c56d444a20c3ae9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_beaver, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:07:02 np0005593232 podman[326498]: 2026-01-23 10:07:02.946099547 +0000 UTC m=+0.462177243 container attach 1601de77462beced37458094fac8972937c653f21ddc8da0c56d444a20c3ae9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:07:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2282: 321 pgs: 321 active+clean; 377 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.0 MiB/s wr, 284 op/s
Jan 23 05:07:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:03.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:03 np0005593232 nova_compute[250269]: 2026-01-23 10:07:03.381 250273 DEBUG oslo_concurrency.lockutils [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "34c9552e-fca1-4094-96d1-eb627cda17ab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:07:03 np0005593232 nova_compute[250269]: 2026-01-23 10:07:03.382 250273 DEBUG oslo_concurrency.lockutils [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "34c9552e-fca1-4094-96d1-eb627cda17ab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:07:03 np0005593232 nova_compute[250269]: 2026-01-23 10:07:03.382 250273 DEBUG oslo_concurrency.lockutils [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "34c9552e-fca1-4094-96d1-eb627cda17ab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:07:03 np0005593232 nova_compute[250269]: 2026-01-23 10:07:03.382 250273 DEBUG oslo_concurrency.lockutils [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "34c9552e-fca1-4094-96d1-eb627cda17ab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:07:03 np0005593232 nova_compute[250269]: 2026-01-23 10:07:03.383 250273 DEBUG oslo_concurrency.lockutils [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "34c9552e-fca1-4094-96d1-eb627cda17ab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:07:03 np0005593232 nova_compute[250269]: 2026-01-23 10:07:03.384 250273 INFO nova.compute.manager [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Terminating instance#033[00m
Jan 23 05:07:03 np0005593232 nova_compute[250269]: 2026-01-23 10:07:03.385 250273 DEBUG nova.compute.manager [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:07:03 np0005593232 kernel: tap800b7313-6d (unregistering): left promiscuous mode
Jan 23 05:07:03 np0005593232 NetworkManager[49057]: <info>  [1769162823.4474] device (tap800b7313-6d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:07:03 np0005593232 ovn_controller[151001]: 2026-01-23T10:07:03Z|00397|binding|INFO|Releasing lport 800b7313-6d9b-4fd7-9175-cdecef348ba1 from this chassis (sb_readonly=0)
Jan 23 05:07:03 np0005593232 nova_compute[250269]: 2026-01-23 10:07:03.524 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:03 np0005593232 ovn_controller[151001]: 2026-01-23T10:07:03Z|00398|binding|INFO|Setting lport 800b7313-6d9b-4fd7-9175-cdecef348ba1 down in Southbound
Jan 23 05:07:03 np0005593232 ovn_controller[151001]: 2026-01-23T10:07:03Z|00399|binding|INFO|Removing iface tap800b7313-6d ovn-installed in OVS
Jan 23 05:07:03 np0005593232 nova_compute[250269]: 2026-01-23 10:07:03.527 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:03.538 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:13:67 10.100.0.12'], port_security=['fa:16:3e:eb:13:67 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '34c9552e-fca1-4094-96d1-eb627cda17ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8c16cd713fa74a88b43e4edf01c273bd', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6c3aed5f-30b8-4c57-808e-87764ab67fc8, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=800b7313-6d9b-4fd7-9175-cdecef348ba1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:07:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:03.540 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 800b7313-6d9b-4fd7-9175-cdecef348ba1 in datapath 8575e824-4be0-4206-873e-2f9a3d1ded0b unbound from our chassis#033[00m
Jan 23 05:07:03 np0005593232 nova_compute[250269]: 2026-01-23 10:07:03.541 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:03.541 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8575e824-4be0-4206-873e-2f9a3d1ded0b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:07:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:03.543 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8080052b-ab84-4cdf-906e-8149a03b1cd5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:03.543 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b namespace which is not needed anymore#033[00m
Jan 23 05:07:03 np0005593232 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000070.scope: Deactivated successfully.
Jan 23 05:07:03 np0005593232 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000070.scope: Consumed 18.754s CPU time.
Jan 23 05:07:03 np0005593232 systemd-machined[215836]: Machine qemu-46-instance-00000070 terminated.
Jan 23 05:07:03 np0005593232 nova_compute[250269]: 2026-01-23 10:07:03.627 250273 INFO nova.virt.libvirt.driver [-] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Instance destroyed successfully.#033[00m
Jan 23 05:07:03 np0005593232 nova_compute[250269]: 2026-01-23 10:07:03.628 250273 DEBUG nova.objects.instance [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lazy-loading 'resources' on Instance uuid 34c9552e-fca1-4094-96d1-eb627cda17ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:07:03 np0005593232 nova_compute[250269]: 2026-01-23 10:07:03.653 250273 DEBUG nova.virt.libvirt.vif [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:04:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1576726897',display_name='tempest-ServerActionsTestOtherA-server-1576726897',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1576726897',id=112,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:05:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8c16cd713fa74a88b43e4edf01c273bd',ramdisk_id='',reservation_id='r-bcswvvws',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-882763067',owner_user_name='tempest-ServerActionsTestOtherA-882763067-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:05:09Z,user_data=None,user_id='29710db389c842df836944048225740f',uuid=34c9552e-fca1-4094-96d1-eb627cda17ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "800b7313-6d9b-4fd7-9175-cdecef348ba1", "address": "fa:16:3e:eb:13:67", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap800b7313-6d", "ovs_interfaceid": "800b7313-6d9b-4fd7-9175-cdecef348ba1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:07:03 np0005593232 nova_compute[250269]: 2026-01-23 10:07:03.654 250273 DEBUG nova.network.os_vif_util [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converting VIF {"id": "800b7313-6d9b-4fd7-9175-cdecef348ba1", "address": "fa:16:3e:eb:13:67", "network": {"id": "8575e824-4be0-4206-873e-2f9a3d1ded0b", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-2130726771-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8c16cd713fa74a88b43e4edf01c273bd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap800b7313-6d", "ovs_interfaceid": "800b7313-6d9b-4fd7-9175-cdecef348ba1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:07:03 np0005593232 nova_compute[250269]: 2026-01-23 10:07:03.655 250273 DEBUG nova.network.os_vif_util [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:eb:13:67,bridge_name='br-int',has_traffic_filtering=True,id=800b7313-6d9b-4fd7-9175-cdecef348ba1,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap800b7313-6d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:07:03 np0005593232 nova_compute[250269]: 2026-01-23 10:07:03.656 250273 DEBUG os_vif [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:13:67,bridge_name='br-int',has_traffic_filtering=True,id=800b7313-6d9b-4fd7-9175-cdecef348ba1,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap800b7313-6d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:07:03 np0005593232 nova_compute[250269]: 2026-01-23 10:07:03.658 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:03 np0005593232 nova_compute[250269]: 2026-01-23 10:07:03.658 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap800b7313-6d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:07:03 np0005593232 nova_compute[250269]: 2026-01-23 10:07:03.660 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:03 np0005593232 nova_compute[250269]: 2026-01-23 10:07:03.662 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:03 np0005593232 nova_compute[250269]: 2026-01-23 10:07:03.665 250273 INFO os_vif [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:eb:13:67,bridge_name='br-int',has_traffic_filtering=True,id=800b7313-6d9b-4fd7-9175-cdecef348ba1,network=Network(8575e824-4be0-4206-873e-2f9a3d1ded0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap800b7313-6d')#033[00m
Jan 23 05:07:03 np0005593232 vigorous_beaver[326514]: {
Jan 23 05:07:03 np0005593232 vigorous_beaver[326514]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:07:03 np0005593232 vigorous_beaver[326514]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:07:03 np0005593232 vigorous_beaver[326514]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:07:03 np0005593232 vigorous_beaver[326514]:        "osd_id": 0,
Jan 23 05:07:03 np0005593232 vigorous_beaver[326514]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:07:03 np0005593232 vigorous_beaver[326514]:        "type": "bluestore"
Jan 23 05:07:03 np0005593232 vigorous_beaver[326514]:    }
Jan 23 05:07:03 np0005593232 vigorous_beaver[326514]: }
Jan 23 05:07:03 np0005593232 systemd[1]: libpod-1601de77462beced37458094fac8972937c653f21ddc8da0c56d444a20c3ae9e.scope: Deactivated successfully.
Jan 23 05:07:03 np0005593232 neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b[323093]: [NOTICE]   (323097) : haproxy version is 2.8.14-c23fe91
Jan 23 05:07:03 np0005593232 neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b[323093]: [NOTICE]   (323097) : path to executable is /usr/sbin/haproxy
Jan 23 05:07:03 np0005593232 neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b[323093]: [WARNING]  (323097) : Exiting Master process...
Jan 23 05:07:03 np0005593232 neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b[323093]: [WARNING]  (323097) : Exiting Master process...
Jan 23 05:07:03 np0005593232 neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b[323093]: [ALERT]    (323097) : Current worker (323099) exited with code 143 (Terminated)
Jan 23 05:07:03 np0005593232 neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b[323093]: [WARNING]  (323097) : All workers exited. Exiting... (0)
Jan 23 05:07:03 np0005593232 systemd[1]: libpod-09863a87bd8a2a15540611efe2547262b2e35d6c07815dc21e07c3904af6cde5.scope: Deactivated successfully.
Jan 23 05:07:03 np0005593232 podman[326498]: 2026-01-23 10:07:03.81511552 +0000 UTC m=+1.331193216 container died 1601de77462beced37458094fac8972937c653f21ddc8da0c56d444a20c3ae9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_beaver, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:07:04 np0005593232 systemd[1]: var-lib-containers-storage-overlay-eea58abb46ba099d5769ce2390e9e163b8f383d6585b2254d9bf9cb3e9244a01-merged.mount: Deactivated successfully.
Jan 23 05:07:04 np0005593232 podman[326596]: 2026-01-23 10:07:04.306050961 +0000 UTC m=+0.551478292 container remove 1601de77462beced37458094fac8972937c653f21ddc8da0c56d444a20c3ae9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:07:04 np0005593232 systemd[1]: libpod-conmon-1601de77462beced37458094fac8972937c653f21ddc8da0c56d444a20c3ae9e.scope: Deactivated successfully.
Jan 23 05:07:04 np0005593232 podman[326561]: 2026-01-23 10:07:04.347391805 +0000 UTC m=+0.696665677 container died 09863a87bd8a2a15540611efe2547262b2e35d6c07815dc21e07c3904af6cde5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 05:07:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:07:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:07:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:07:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:07:04 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 2c9a7458-eae4-443c-b237-3b8e9b827e24 does not exist
Jan 23 05:07:04 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 23ac57b0-ae7b-4301-9408-74c17ad90d19 does not exist
Jan 23 05:07:04 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 406ae9f4-285f-41de-ab0c-5aa5af5f630c does not exist
Jan 23 05:07:04 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-09863a87bd8a2a15540611efe2547262b2e35d6c07815dc21e07c3904af6cde5-userdata-shm.mount: Deactivated successfully.
Jan 23 05:07:04 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ca82ce983b391702a97a93312e0ac7f12a12df67597cde3640bf1eed58e2efdf-merged.mount: Deactivated successfully.
Jan 23 05:07:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:04.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:04 np0005593232 nova_compute[250269]: 2026-01-23 10:07:04.384 250273 DEBUG nova.compute.manager [req-02f06e0e-3183-4cbf-a1ed-efe705e95edc req-4ffe25a2-f6a5-4412-a9fa-d0bcc69427b7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Received event network-vif-unplugged-800b7313-6d9b-4fd7-9175-cdecef348ba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:07:04 np0005593232 podman[326561]: 2026-01-23 10:07:04.385086987 +0000 UTC m=+0.734360859 container cleanup 09863a87bd8a2a15540611efe2547262b2e35d6c07815dc21e07c3904af6cde5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 23 05:07:04 np0005593232 nova_compute[250269]: 2026-01-23 10:07:04.385 250273 DEBUG oslo_concurrency.lockutils [req-02f06e0e-3183-4cbf-a1ed-efe705e95edc req-4ffe25a2-f6a5-4412-a9fa-d0bcc69427b7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "34c9552e-fca1-4094-96d1-eb627cda17ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:07:04 np0005593232 nova_compute[250269]: 2026-01-23 10:07:04.385 250273 DEBUG oslo_concurrency.lockutils [req-02f06e0e-3183-4cbf-a1ed-efe705e95edc req-4ffe25a2-f6a5-4412-a9fa-d0bcc69427b7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "34c9552e-fca1-4094-96d1-eb627cda17ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:07:04 np0005593232 nova_compute[250269]: 2026-01-23 10:07:04.386 250273 DEBUG oslo_concurrency.lockutils [req-02f06e0e-3183-4cbf-a1ed-efe705e95edc req-4ffe25a2-f6a5-4412-a9fa-d0bcc69427b7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "34c9552e-fca1-4094-96d1-eb627cda17ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:07:04 np0005593232 nova_compute[250269]: 2026-01-23 10:07:04.386 250273 DEBUG nova.compute.manager [req-02f06e0e-3183-4cbf-a1ed-efe705e95edc req-4ffe25a2-f6a5-4412-a9fa-d0bcc69427b7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] No waiting events found dispatching network-vif-unplugged-800b7313-6d9b-4fd7-9175-cdecef348ba1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:07:04 np0005593232 nova_compute[250269]: 2026-01-23 10:07:04.386 250273 DEBUG nova.compute.manager [req-02f06e0e-3183-4cbf-a1ed-efe705e95edc req-4ffe25a2-f6a5-4412-a9fa-d0bcc69427b7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Received event network-vif-unplugged-800b7313-6d9b-4fd7-9175-cdecef348ba1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:07:04 np0005593232 systemd[1]: libpod-conmon-09863a87bd8a2a15540611efe2547262b2e35d6c07815dc21e07c3904af6cde5.scope: Deactivated successfully.
Jan 23 05:07:04 np0005593232 podman[326640]: 2026-01-23 10:07:04.469101214 +0000 UTC m=+0.056584129 container remove 09863a87bd8a2a15540611efe2547262b2e35d6c07815dc21e07c3904af6cde5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 05:07:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:04.476 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1e2fc2a7-f240-412b-a10e-37902c01b23b]: (4, ('Fri Jan 23 10:07:03 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b (09863a87bd8a2a15540611efe2547262b2e35d6c07815dc21e07c3904af6cde5)\n09863a87bd8a2a15540611efe2547262b2e35d6c07815dc21e07c3904af6cde5\nFri Jan 23 10:07:04 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b (09863a87bd8a2a15540611efe2547262b2e35d6c07815dc21e07c3904af6cde5)\n09863a87bd8a2a15540611efe2547262b2e35d6c07815dc21e07c3904af6cde5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:04.478 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c90f1692-48e7-4766-a3e0-973edda70bd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:04.479 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8575e824-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:07:04 np0005593232 nova_compute[250269]: 2026-01-23 10:07:04.481 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:04 np0005593232 kernel: tap8575e824-40: left promiscuous mode
Jan 23 05:07:04 np0005593232 nova_compute[250269]: 2026-01-23 10:07:04.496 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:04.499 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a980bb06-7fab-4f87-acbb-e60dab365049]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:04.516 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[bbec7819-c384-48a0-9341-415ab4c44382]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:04.517 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3f0efece-ff5c-46b9-b043-ee8e7d4b1e76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:04 np0005593232 nova_compute[250269]: 2026-01-23 10:07:04.534 250273 INFO nova.virt.libvirt.driver [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Deleting instance files /var/lib/nova/instances/34c9552e-fca1-4094-96d1-eb627cda17ab_del#033[00m
Jan 23 05:07:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:04.534 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b53ef9-bef5-4b1c-a761-3cc951dc4a51]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 662092, 'reachable_time': 32759, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326695, 'error': None, 'target': 'ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:04 np0005593232 nova_compute[250269]: 2026-01-23 10:07:04.535 250273 INFO nova.virt.libvirt.driver [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Deletion of /var/lib/nova/instances/34c9552e-fca1-4094-96d1-eb627cda17ab_del complete#033[00m
Jan 23 05:07:04 np0005593232 systemd[1]: run-netns-ovnmeta\x2d8575e824\x2d4be0\x2d4206\x2d873e\x2d2f9a3d1ded0b.mount: Deactivated successfully.
Jan 23 05:07:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:04.538 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8575e824-4be0-4206-873e-2f9a3d1ded0b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:07:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:04.539 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[dec82cf9-6a8f-4bac-a250-fd298a351bbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:04 np0005593232 nova_compute[250269]: 2026-01-23 10:07:04.608 250273 INFO nova.compute.manager [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Took 1.22 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:07:04 np0005593232 nova_compute[250269]: 2026-01-23 10:07:04.608 250273 DEBUG oslo.service.loopingcall [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:07:04 np0005593232 nova_compute[250269]: 2026-01-23 10:07:04.609 250273 DEBUG nova.compute.manager [-] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:07:04 np0005593232 nova_compute[250269]: 2026-01-23 10:07:04.609 250273 DEBUG nova.network.neutron [-] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:07:04 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:07:04 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:07:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2283: 321 pgs: 321 active+clean; 377 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 177 op/s
Jan 23 05:07:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:05.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:05 np0005593232 nova_compute[250269]: 2026-01-23 10:07:05.967 250273 DEBUG nova.network.neutron [-] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:07:06 np0005593232 nova_compute[250269]: 2026-01-23 10:07:06.014 250273 INFO nova.compute.manager [-] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Took 1.41 seconds to deallocate network for instance.#033[00m
Jan 23 05:07:06 np0005593232 nova_compute[250269]: 2026-01-23 10:07:06.111 250273 DEBUG oslo_concurrency.lockutils [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:07:06 np0005593232 nova_compute[250269]: 2026-01-23 10:07:06.112 250273 DEBUG oslo_concurrency.lockutils [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:07:06 np0005593232 nova_compute[250269]: 2026-01-23 10:07:06.195 250273 DEBUG oslo_concurrency.processutils [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:07:06 np0005593232 nova_compute[250269]: 2026-01-23 10:07:06.246 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:06.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:06 np0005593232 nova_compute[250269]: 2026-01-23 10:07:06.564 250273 DEBUG nova.compute.manager [req-b912144e-0867-4050-b11b-45f3165060a5 req-812714e0-e0c2-4d8c-9718-ad8593843cf6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Received event network-vif-plugged-800b7313-6d9b-4fd7-9175-cdecef348ba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:07:06 np0005593232 nova_compute[250269]: 2026-01-23 10:07:06.564 250273 DEBUG oslo_concurrency.lockutils [req-b912144e-0867-4050-b11b-45f3165060a5 req-812714e0-e0c2-4d8c-9718-ad8593843cf6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "34c9552e-fca1-4094-96d1-eb627cda17ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:07:06 np0005593232 nova_compute[250269]: 2026-01-23 10:07:06.565 250273 DEBUG oslo_concurrency.lockutils [req-b912144e-0867-4050-b11b-45f3165060a5 req-812714e0-e0c2-4d8c-9718-ad8593843cf6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "34c9552e-fca1-4094-96d1-eb627cda17ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:07:06 np0005593232 nova_compute[250269]: 2026-01-23 10:07:06.565 250273 DEBUG oslo_concurrency.lockutils [req-b912144e-0867-4050-b11b-45f3165060a5 req-812714e0-e0c2-4d8c-9718-ad8593843cf6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "34c9552e-fca1-4094-96d1-eb627cda17ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:07:06 np0005593232 nova_compute[250269]: 2026-01-23 10:07:06.566 250273 DEBUG nova.compute.manager [req-b912144e-0867-4050-b11b-45f3165060a5 req-812714e0-e0c2-4d8c-9718-ad8593843cf6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] No waiting events found dispatching network-vif-plugged-800b7313-6d9b-4fd7-9175-cdecef348ba1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:07:06 np0005593232 nova_compute[250269]: 2026-01-23 10:07:06.566 250273 WARNING nova.compute.manager [req-b912144e-0867-4050-b11b-45f3165060a5 req-812714e0-e0c2-4d8c-9718-ad8593843cf6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Received unexpected event network-vif-plugged-800b7313-6d9b-4fd7-9175-cdecef348ba1 for instance with vm_state deleted and task_state None.#033[00m
Jan 23 05:07:06 np0005593232 nova_compute[250269]: 2026-01-23 10:07:06.566 250273 DEBUG nova.compute.manager [req-b912144e-0867-4050-b11b-45f3165060a5 req-812714e0-e0c2-4d8c-9718-ad8593843cf6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Received event network-vif-deleted-800b7313-6d9b-4fd7-9175-cdecef348ba1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:07:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:07:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2631228532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:07:06 np0005593232 nova_compute[250269]: 2026-01-23 10:07:06.662 250273 DEBUG oslo_concurrency.processutils [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:07:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:07:06 np0005593232 nova_compute[250269]: 2026-01-23 10:07:06.669 250273 DEBUG nova.compute.provider_tree [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:07:06 np0005593232 nova_compute[250269]: 2026-01-23 10:07:06.703 250273 DEBUG nova.scheduler.client.report [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:07:06 np0005593232 nova_compute[250269]: 2026-01-23 10:07:06.749 250273 DEBUG oslo_concurrency.lockutils [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:07:06 np0005593232 nova_compute[250269]: 2026-01-23 10:07:06.789 250273 INFO nova.scheduler.client.report [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Deleted allocations for instance 34c9552e-fca1-4094-96d1-eb627cda17ab#033[00m
Jan 23 05:07:06 np0005593232 nova_compute[250269]: 2026-01-23 10:07:06.916 250273 DEBUG oslo_concurrency.lockutils [None req-a32501ac-b661-4466-9e8c-a7bd51915b1a 29710db389c842df836944048225740f 8c16cd713fa74a88b43e4edf01c273bd - - default default] Lock "34c9552e-fca1-4094-96d1-eb627cda17ab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:07:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2284: 321 pgs: 321 active+clean; 294 MiB data, 988 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 192 op/s
Jan 23 05:07:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:07.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:07:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:07:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:07:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:07:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:07:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:07:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:08.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:08 np0005593232 nova_compute[250269]: 2026-01-23 10:07:08.663 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:07:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2222470178' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:07:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:07:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2222470178' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:07:09 np0005593232 nova_compute[250269]: 2026-01-23 10:07:09.087 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2285: 321 pgs: 321 active+clean; 200 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 201 op/s
Jan 23 05:07:09 np0005593232 nova_compute[250269]: 2026-01-23 10:07:09.263 250273 DEBUG oslo_concurrency.lockutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquiring lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:07:09 np0005593232 nova_compute[250269]: 2026-01-23 10:07:09.263 250273 DEBUG oslo_concurrency.lockutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:07:09 np0005593232 nova_compute[250269]: 2026-01-23 10:07:09.294 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:09 np0005593232 nova_compute[250269]: 2026-01-23 10:07:09.300 250273 DEBUG nova.compute.manager [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:07:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:09.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:09 np0005593232 nova_compute[250269]: 2026-01-23 10:07:09.457 250273 DEBUG oslo_concurrency.lockutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:07:09 np0005593232 nova_compute[250269]: 2026-01-23 10:07:09.457 250273 DEBUG oslo_concurrency.lockutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:07:09 np0005593232 nova_compute[250269]: 2026-01-23 10:07:09.468 250273 DEBUG nova.virt.hardware [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:07:09 np0005593232 nova_compute[250269]: 2026-01-23 10:07:09.468 250273 INFO nova.compute.claims [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:07:09 np0005593232 nova_compute[250269]: 2026-01-23 10:07:09.738 250273 DEBUG oslo_concurrency.processutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:07:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:07:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2086786194' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:07:10 np0005593232 nova_compute[250269]: 2026-01-23 10:07:10.226 250273 DEBUG oslo_concurrency.processutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:07:10 np0005593232 nova_compute[250269]: 2026-01-23 10:07:10.231 250273 DEBUG nova.compute.provider_tree [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:07:10 np0005593232 nova_compute[250269]: 2026-01-23 10:07:10.256 250273 DEBUG nova.scheduler.client.report [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:07:10 np0005593232 nova_compute[250269]: 2026-01-23 10:07:10.293 250273 DEBUG oslo_concurrency.lockutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:07:10 np0005593232 nova_compute[250269]: 2026-01-23 10:07:10.294 250273 DEBUG nova.compute.manager [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:07:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:10.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:10 np0005593232 nova_compute[250269]: 2026-01-23 10:07:10.411 250273 DEBUG nova.compute.manager [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:07:10 np0005593232 nova_compute[250269]: 2026-01-23 10:07:10.411 250273 DEBUG nova.network.neutron [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:07:10 np0005593232 nova_compute[250269]: 2026-01-23 10:07:10.452 250273 INFO nova.virt.libvirt.driver [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 05:07:10 np0005593232 nova_compute[250269]: 2026-01-23 10:07:10.474 250273 DEBUG nova.compute.manager [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 05:07:10 np0005593232 nova_compute[250269]: 2026-01-23 10:07:10.590 250273 DEBUG nova.compute.manager [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 05:07:10 np0005593232 nova_compute[250269]: 2026-01-23 10:07:10.591 250273 DEBUG nova.virt.libvirt.driver [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 05:07:10 np0005593232 nova_compute[250269]: 2026-01-23 10:07:10.592 250273 INFO nova.virt.libvirt.driver [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Creating image(s)
Jan 23 05:07:10 np0005593232 nova_compute[250269]: 2026-01-23 10:07:10.621 250273 DEBUG nova.storage.rbd_utils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] rbd image 6e5889e0-b5b3-442c-b7dd-0434b1da7c96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:07:10 np0005593232 nova_compute[250269]: 2026-01-23 10:07:10.654 250273 DEBUG nova.storage.rbd_utils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] rbd image 6e5889e0-b5b3-442c-b7dd-0434b1da7c96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:07:10 np0005593232 nova_compute[250269]: 2026-01-23 10:07:10.684 250273 DEBUG nova.storage.rbd_utils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] rbd image 6e5889e0-b5b3-442c-b7dd-0434b1da7c96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:07:10 np0005593232 nova_compute[250269]: 2026-01-23 10:07:10.689 250273 DEBUG oslo_concurrency.processutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:07:10 np0005593232 nova_compute[250269]: 2026-01-23 10:07:10.763 250273 DEBUG oslo_concurrency.processutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:07:10 np0005593232 nova_compute[250269]: 2026-01-23 10:07:10.764 250273 DEBUG oslo_concurrency.lockutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:07:10 np0005593232 nova_compute[250269]: 2026-01-23 10:07:10.765 250273 DEBUG oslo_concurrency.lockutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:07:10 np0005593232 nova_compute[250269]: 2026-01-23 10:07:10.765 250273 DEBUG oslo_concurrency.lockutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:07:10 np0005593232 nova_compute[250269]: 2026-01-23 10:07:10.786 250273 DEBUG nova.storage.rbd_utils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] rbd image 6e5889e0-b5b3-442c-b7dd-0434b1da7c96_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:07:10 np0005593232 nova_compute[250269]: 2026-01-23 10:07:10.791 250273 DEBUG oslo_concurrency.processutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 6e5889e0-b5b3-442c-b7dd-0434b1da7c96_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:07:10 np0005593232 nova_compute[250269]: 2026-01-23 10:07:10.955 250273 DEBUG nova.policy [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c99d09acd2e849a69846a6ccda1e0bc7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '924f976bcbb74ec195730b68eebe1f2a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 23 05:07:11 np0005593232 nova_compute[250269]: 2026-01-23 10:07:11.097 250273 DEBUG oslo_concurrency.processutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 6e5889e0-b5b3-442c-b7dd-0434b1da7c96_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:07:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2286: 321 pgs: 321 active+clean; 200 MiB data, 940 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 958 KiB/s wr, 151 op/s
Jan 23 05:07:11 np0005593232 nova_compute[250269]: 2026-01-23 10:07:11.208 250273 DEBUG nova.storage.rbd_utils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] resizing rbd image 6e5889e0-b5b3-442c-b7dd-0434b1da7c96_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 23 05:07:11 np0005593232 nova_compute[250269]: 2026-01-23 10:07:11.247 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:07:11 np0005593232 nova_compute[250269]: 2026-01-23 10:07:11.311 250273 DEBUG nova.objects.instance [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lazy-loading 'migration_context' on Instance uuid 6e5889e0-b5b3-442c-b7dd-0434b1da7c96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 05:07:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:11.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:11 np0005593232 nova_compute[250269]: 2026-01-23 10:07:11.507 250273 DEBUG nova.virt.libvirt.driver [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 05:07:11 np0005593232 nova_compute[250269]: 2026-01-23 10:07:11.508 250273 DEBUG nova.virt.libvirt.driver [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Ensure instance console log exists: /var/lib/nova/instances/6e5889e0-b5b3-442c-b7dd-0434b1da7c96/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 05:07:11 np0005593232 nova_compute[250269]: 2026-01-23 10:07:11.508 250273 DEBUG oslo_concurrency.lockutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:07:11 np0005593232 nova_compute[250269]: 2026-01-23 10:07:11.508 250273 DEBUG oslo_concurrency.lockutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:07:11 np0005593232 nova_compute[250269]: 2026-01-23 10:07:11.509 250273 DEBUG oslo_concurrency.lockutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:07:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:07:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:12.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2287: 321 pgs: 321 active+clean; 127 MiB data, 934 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 238 op/s
Jan 23 05:07:13 np0005593232 nova_compute[250269]: 2026-01-23 10:07:13.381 250273 DEBUG nova.network.neutron [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Successfully created port: 06d14633-c41f-49f8-a5ac-9c37e350d1ae _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 23 05:07:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:13.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:13 np0005593232 nova_compute[250269]: 2026-01-23 10:07:13.667 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:07:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:07:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:14.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:07:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2288: 321 pgs: 321 active+clean; 127 MiB data, 934 MiB used, 20 GiB / 21 GiB avail; 316 KiB/s rd, 1.8 MiB/s wr, 145 op/s
Jan 23 05:07:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:07:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:15.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:07:15 np0005593232 nova_compute[250269]: 2026-01-23 10:07:15.767 250273 DEBUG nova.network.neutron [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Successfully updated port: 06d14633-c41f-49f8-a5ac-9c37e350d1ae _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 23 05:07:15 np0005593232 nova_compute[250269]: 2026-01-23 10:07:15.797 250273 DEBUG oslo_concurrency.lockutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquiring lock "refresh_cache-6e5889e0-b5b3-442c-b7dd-0434b1da7c96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 05:07:15 np0005593232 nova_compute[250269]: 2026-01-23 10:07:15.797 250273 DEBUG oslo_concurrency.lockutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquired lock "refresh_cache-6e5889e0-b5b3-442c-b7dd-0434b1da7c96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 05:07:15 np0005593232 nova_compute[250269]: 2026-01-23 10:07:15.797 250273 DEBUG nova.network.neutron [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 05:07:16 np0005593232 nova_compute[250269]: 2026-01-23 10:07:16.096 250273 DEBUG nova.compute.manager [req-daf327eb-1536-4c28-821f-3fb718a6c3ec req-422a30bd-d84b-44fc-8a3f-69ee5494aafe 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Received event network-changed-06d14633-c41f-49f8-a5ac-9c37e350d1ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:07:16 np0005593232 nova_compute[250269]: 2026-01-23 10:07:16.096 250273 DEBUG nova.compute.manager [req-daf327eb-1536-4c28-821f-3fb718a6c3ec req-422a30bd-d84b-44fc-8a3f-69ee5494aafe 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Refreshing instance network info cache due to event network-changed-06d14633-c41f-49f8-a5ac-9c37e350d1ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 05:07:16 np0005593232 nova_compute[250269]: 2026-01-23 10:07:16.096 250273 DEBUG oslo_concurrency.lockutils [req-daf327eb-1536-4c28-821f-3fb718a6c3ec req-422a30bd-d84b-44fc-8a3f-69ee5494aafe 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-6e5889e0-b5b3-442c-b7dd-0434b1da7c96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 05:07:16 np0005593232 nova_compute[250269]: 2026-01-23 10:07:16.249 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:07:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:07:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:16.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:07:16 np0005593232 podman[326964]: 2026-01-23 10:07:16.440412893 +0000 UTC m=+0.086209031 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 23 05:07:16 np0005593232 nova_compute[250269]: 2026-01-23 10:07:16.609 250273 DEBUG nova.network.neutron [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 05:07:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:07:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2289: 321 pgs: 321 active+clean; 88 MiB data, 897 MiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 1.8 MiB/s wr, 159 op/s
Jan 23 05:07:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:17.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:18.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.621 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769162823.6193516, 34c9552e-fca1-4094-96d1-eb627cda17ab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.621 250273 INFO nova.compute.manager [-] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] VM Stopped (Lifecycle Event)
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.665 250273 DEBUG nova.compute.manager [None req-fe243c70-224b-4332-a1a4-7cacc2da9ee0 - - - - - -] [instance: 34c9552e-fca1-4094-96d1-eb627cda17ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.671 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.735 250273 DEBUG nova.network.neutron [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Updating instance_info_cache with network_info: [{"id": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "address": "fa:16:3e:53:dd:ea", "network": {"id": "93735878-f62d-4a5f-96df-bf97f85d787a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-259751152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "924f976bcbb74ec195730b68eebe1f2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d14633-c4", "ovs_interfaceid": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.771 250273 DEBUG oslo_concurrency.lockutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Releasing lock "refresh_cache-6e5889e0-b5b3-442c-b7dd-0434b1da7c96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.772 250273 DEBUG nova.compute.manager [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Instance network_info: |[{"id": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "address": "fa:16:3e:53:dd:ea", "network": {"id": "93735878-f62d-4a5f-96df-bf97f85d787a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-259751152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "924f976bcbb74ec195730b68eebe1f2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d14633-c4", "ovs_interfaceid": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.772 250273 DEBUG oslo_concurrency.lockutils [req-daf327eb-1536-4c28-821f-3fb718a6c3ec req-422a30bd-d84b-44fc-8a3f-69ee5494aafe 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-6e5889e0-b5b3-442c-b7dd-0434b1da7c96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.772 250273 DEBUG nova.network.neutron [req-daf327eb-1536-4c28-821f-3fb718a6c3ec req-422a30bd-d84b-44fc-8a3f-69ee5494aafe 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Refreshing network info cache for port 06d14633-c41f-49f8-a5ac-9c37e350d1ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.774 250273 DEBUG nova.virt.libvirt.driver [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Start _get_guest_xml network_info=[{"id": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "address": "fa:16:3e:53:dd:ea", "network": {"id": "93735878-f62d-4a5f-96df-bf97f85d787a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-259751152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "924f976bcbb74ec195730b68eebe1f2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d14633-c4", "ovs_interfaceid": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.778 250273 WARNING nova.virt.libvirt.driver [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.785 250273 DEBUG nova.virt.libvirt.host [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.785 250273 DEBUG nova.virt.libvirt.host [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.791 250273 DEBUG nova.virt.libvirt.host [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.792 250273 DEBUG nova.virt.libvirt.host [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.793 250273 DEBUG nova.virt.libvirt.driver [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.793 250273 DEBUG nova.virt.hardware [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.793 250273 DEBUG nova.virt.hardware [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.793 250273 DEBUG nova.virt.hardware [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.794 250273 DEBUG nova.virt.hardware [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.794 250273 DEBUG nova.virt.hardware [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.794 250273 DEBUG nova.virt.hardware [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.794 250273 DEBUG nova.virt.hardware [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.794 250273 DEBUG nova.virt.hardware [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.795 250273 DEBUG nova.virt.hardware [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.795 250273 DEBUG nova.virt.hardware [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.795 250273 DEBUG nova.virt.hardware [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:07:18 np0005593232 nova_compute[250269]: 2026-01-23 10:07:18.797 250273 DEBUG oslo_concurrency.processutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:07:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2290: 321 pgs: 321 active+clean; 88 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 99 KiB/s rd, 1.8 MiB/s wr, 145 op/s
Jan 23 05:07:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:07:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3904143204' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.246 250273 DEBUG oslo_concurrency.processutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.275 250273 DEBUG nova.storage.rbd_utils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] rbd image 6e5889e0-b5b3-442c-b7dd-0434b1da7c96_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.279 250273 DEBUG oslo_concurrency.processutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:07:19 np0005593232 podman[327034]: 2026-01-23 10:07:19.391819668 +0000 UTC m=+0.053058859 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 05:07:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:07:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:19.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:07:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:07:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3361328408' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.719 250273 DEBUG oslo_concurrency.processutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.721 250273 DEBUG nova.virt.libvirt.vif [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:07:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1100313845',display_name='tempest-AttachVolumeNegativeTest-server-1100313845',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1100313845',id=119,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBODZCFafhZGTU8x6piM66+lVcw3K+KK8nP6yLx1MHDrMWs+9viWA6aoR0PFHQNfHFDvg2apwI3HRmvR0Oj6kIYMWeOEyUtGbuCEFROlPgdLi2uevA8Ne8YkCFZBSHl/ptQ==',key_name='tempest-keypair-1293560348',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='924f976bcbb74ec195730b68eebe1f2a',ramdisk_id='',reservation_id='r-1xj02ksr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1470050886',owner_user_name='tempest-AttachVolumeNegativeTest-1470050886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:07:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c99d09acd2e849a69846a6ccda1e0bc7',uuid=6e5889e0-b5b3-442c-b7dd-0434b1da7c96,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "address": "fa:16:3e:53:dd:ea", "network": {"id": "93735878-f62d-4a5f-96df-bf97f85d787a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-259751152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "924f976bcbb74ec195730b68eebe1f2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d14633-c4", "ovs_interfaceid": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.722 250273 DEBUG nova.network.os_vif_util [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Converting VIF {"id": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "address": "fa:16:3e:53:dd:ea", "network": {"id": "93735878-f62d-4a5f-96df-bf97f85d787a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-259751152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "924f976bcbb74ec195730b68eebe1f2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d14633-c4", "ovs_interfaceid": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.723 250273 DEBUG nova.network.os_vif_util [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:dd:ea,bridge_name='br-int',has_traffic_filtering=True,id=06d14633-c41f-49f8-a5ac-9c37e350d1ae,network=Network(93735878-f62d-4a5f-96df-bf97f85d787a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06d14633-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.724 250273 DEBUG nova.objects.instance [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lazy-loading 'pci_devices' on Instance uuid 6e5889e0-b5b3-442c-b7dd-0434b1da7c96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.819 250273 DEBUG nova.virt.libvirt.driver [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:07:19 np0005593232 nova_compute[250269]:  <uuid>6e5889e0-b5b3-442c-b7dd-0434b1da7c96</uuid>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:  <name>instance-00000077</name>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <nova:name>tempest-AttachVolumeNegativeTest-server-1100313845</nova:name>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:07:18</nova:creationTime>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:07:19 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:        <nova:user uuid="c99d09acd2e849a69846a6ccda1e0bc7">tempest-AttachVolumeNegativeTest-1470050886-project-member</nova:user>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:        <nova:project uuid="924f976bcbb74ec195730b68eebe1f2a">tempest-AttachVolumeNegativeTest-1470050886</nova:project>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:        <nova:port uuid="06d14633-c41f-49f8-a5ac-9c37e350d1ae">
Jan 23 05:07:19 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <entry name="serial">6e5889e0-b5b3-442c-b7dd-0434b1da7c96</entry>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <entry name="uuid">6e5889e0-b5b3-442c-b7dd-0434b1da7c96</entry>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/6e5889e0-b5b3-442c-b7dd-0434b1da7c96_disk">
Jan 23 05:07:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:07:19 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/6e5889e0-b5b3-442c-b7dd-0434b1da7c96_disk.config">
Jan 23 05:07:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:07:19 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:53:dd:ea"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <target dev="tap06d14633-c4"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/6e5889e0-b5b3-442c-b7dd-0434b1da7c96/console.log" append="off"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:07:19 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:07:19 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:07:19 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:07:19 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.821 250273 DEBUG nova.compute.manager [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Preparing to wait for external event network-vif-plugged-06d14633-c41f-49f8-a5ac-9c37e350d1ae prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.822 250273 DEBUG oslo_concurrency.lockutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquiring lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.822 250273 DEBUG oslo_concurrency.lockutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.823 250273 DEBUG oslo_concurrency.lockutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.823 250273 DEBUG nova.virt.libvirt.vif [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:07:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1100313845',display_name='tempest-AttachVolumeNegativeTest-server-1100313845',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1100313845',id=119,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBODZCFafhZGTU8x6piM66+lVcw3K+KK8nP6yLx1MHDrMWs+9viWA6aoR0PFHQNfHFDvg2apwI3HRmvR0Oj6kIYMWeOEyUtGbuCEFROlPgdLi2uevA8Ne8YkCFZBSHl/ptQ==',key_name='tempest-keypair-1293560348',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='924f976bcbb74ec195730b68eebe1f2a',ramdisk_id='',reservation_id='r-1xj02ksr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1470050886',owner_user_name='tempest-AttachVolumeNegativeTest-1470050886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:07:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c99d09acd2e849a69846a6ccda1e0bc7',uuid=6e5889e0-b5b3-442c-b7dd-0434b1da7c96,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "address": "fa:16:3e:53:dd:ea", "network": {"id": "93735878-f62d-4a5f-96df-bf97f85d787a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-259751152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "924f976bcbb74ec195730b68eebe1f2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d14633-c4", "ovs_interfaceid": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.824 250273 DEBUG nova.network.os_vif_util [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Converting VIF {"id": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "address": "fa:16:3e:53:dd:ea", "network": {"id": "93735878-f62d-4a5f-96df-bf97f85d787a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-259751152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "924f976bcbb74ec195730b68eebe1f2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d14633-c4", "ovs_interfaceid": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.825 250273 DEBUG nova.network.os_vif_util [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:dd:ea,bridge_name='br-int',has_traffic_filtering=True,id=06d14633-c41f-49f8-a5ac-9c37e350d1ae,network=Network(93735878-f62d-4a5f-96df-bf97f85d787a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06d14633-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.825 250273 DEBUG os_vif [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:dd:ea,bridge_name='br-int',has_traffic_filtering=True,id=06d14633-c41f-49f8-a5ac-9c37e350d1ae,network=Network(93735878-f62d-4a5f-96df-bf97f85d787a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06d14633-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.826 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.826 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.827 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.829 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.830 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06d14633-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.830 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap06d14633-c4, col_values=(('external_ids', {'iface-id': '06d14633-c41f-49f8-a5ac-9c37e350d1ae', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:dd:ea', 'vm-uuid': '6e5889e0-b5b3-442c-b7dd-0434b1da7c96'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.832 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:19 np0005593232 NetworkManager[49057]: <info>  [1769162839.8327] manager: (tap06d14633-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/196)
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.835 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.840 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.841 250273 INFO os_vif [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:dd:ea,bridge_name='br-int',has_traffic_filtering=True,id=06d14633-c41f-49f8-a5ac-9c37e350d1ae,network=Network(93735878-f62d-4a5f-96df-bf97f85d787a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06d14633-c4')#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.936 250273 DEBUG nova.virt.libvirt.driver [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.937 250273 DEBUG nova.virt.libvirt.driver [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.937 250273 DEBUG nova.virt.libvirt.driver [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] No VIF found with MAC fa:16:3e:53:dd:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.937 250273 INFO nova.virt.libvirt.driver [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Using config drive#033[00m
Jan 23 05:07:19 np0005593232 nova_compute[250269]: 2026-01-23 10:07:19.962 250273 DEBUG nova.storage.rbd_utils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] rbd image 6e5889e0-b5b3-442c-b7dd-0434b1da7c96_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:07:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:07:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:20.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:07:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2291: 321 pgs: 321 active+clean; 88 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 69 KiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 23 05:07:21 np0005593232 nova_compute[250269]: 2026-01-23 10:07:21.251 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:07:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:21.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:07:21.679671) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162841679716, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 820, "num_deletes": 250, "total_data_size": 1109399, "memory_usage": 1134680, "flush_reason": "Manual Compaction"}
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162841686731, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 715406, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50566, "largest_seqno": 51385, "table_properties": {"data_size": 711904, "index_size": 1218, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9673, "raw_average_key_size": 20, "raw_value_size": 704393, "raw_average_value_size": 1524, "num_data_blocks": 53, "num_entries": 462, "num_filter_entries": 462, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769162781, "oldest_key_time": 1769162781, "file_creation_time": 1769162841, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 7124 microseconds, and 2851 cpu microseconds.
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:07:21.686792) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 715406 bytes OK
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:07:21.686814) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:07:21.688352) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:07:21.688383) EVENT_LOG_v1 {"time_micros": 1769162841688365, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:07:21.688404) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 1105305, prev total WAL file size 1105305, number of live WAL files 2.
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:07:21.688980) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373536' seq:72057594037927935, type:22 .. '6D6772737461740032303037' seq:0, type:0; will stop at (end)
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(698KB)], [113(11MB)]
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162841689004, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 13226974, "oldest_snapshot_seqno": -1}
Jan 23 05:07:21 np0005593232 nova_compute[250269]: 2026-01-23 10:07:21.750 250273 INFO nova.virt.libvirt.driver [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Creating config drive at /var/lib/nova/instances/6e5889e0-b5b3-442c-b7dd-0434b1da7c96/disk.config#033[00m
Jan 23 05:07:21 np0005593232 nova_compute[250269]: 2026-01-23 10:07:21.756 250273 DEBUG oslo_concurrency.processutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6e5889e0-b5b3-442c-b7dd-0434b1da7c96/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvby3lsea execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 7642 keys, 9760340 bytes, temperature: kUnknown
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162841796599, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 9760340, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9712398, "index_size": 27734, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19141, "raw_key_size": 198430, "raw_average_key_size": 25, "raw_value_size": 9579313, "raw_average_value_size": 1253, "num_data_blocks": 1087, "num_entries": 7642, "num_filter_entries": 7642, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769162841, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:07:21.797034) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 9760340 bytes
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:07:21.858487) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.8 rd, 90.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 11.9 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(32.1) write-amplify(13.6) OK, records in: 8130, records dropped: 488 output_compression: NoCompression
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:07:21.858584) EVENT_LOG_v1 {"time_micros": 1769162841858549, "job": 68, "event": "compaction_finished", "compaction_time_micros": 107728, "compaction_time_cpu_micros": 23369, "output_level": 6, "num_output_files": 1, "total_output_size": 9760340, "num_input_records": 8130, "num_output_records": 7642, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162841859374, "job": 68, "event": "table_file_deletion", "file_number": 115}
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162841865303, "job": 68, "event": "table_file_deletion", "file_number": 113}
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:07:21.688944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:07:21.865386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:07:21.865393) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:07:21.865395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:07:21.865397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:07:21 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:07:21.865398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:07:21 np0005593232 nova_compute[250269]: 2026-01-23 10:07:21.898 250273 DEBUG oslo_concurrency.processutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6e5889e0-b5b3-442c-b7dd-0434b1da7c96/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvby3lsea" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:07:21 np0005593232 nova_compute[250269]: 2026-01-23 10:07:21.931 250273 DEBUG nova.storage.rbd_utils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] rbd image 6e5889e0-b5b3-442c-b7dd-0434b1da7c96_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:07:21 np0005593232 nova_compute[250269]: 2026-01-23 10:07:21.935 250273 DEBUG oslo_concurrency.processutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6e5889e0-b5b3-442c-b7dd-0434b1da7c96/disk.config 6e5889e0-b5b3-442c-b7dd-0434b1da7c96_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:07:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:07:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:22.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:07:22 np0005593232 nova_compute[250269]: 2026-01-23 10:07:22.953 250273 DEBUG nova.network.neutron [req-daf327eb-1536-4c28-821f-3fb718a6c3ec req-422a30bd-d84b-44fc-8a3f-69ee5494aafe 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Updated VIF entry in instance network info cache for port 06d14633-c41f-49f8-a5ac-9c37e350d1ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:07:22 np0005593232 nova_compute[250269]: 2026-01-23 10:07:22.954 250273 DEBUG nova.network.neutron [req-daf327eb-1536-4c28-821f-3fb718a6c3ec req-422a30bd-d84b-44fc-8a3f-69ee5494aafe 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Updating instance_info_cache with network_info: [{"id": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "address": "fa:16:3e:53:dd:ea", "network": {"id": "93735878-f62d-4a5f-96df-bf97f85d787a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-259751152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "924f976bcbb74ec195730b68eebe1f2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d14633-c4", "ovs_interfaceid": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:07:22 np0005593232 nova_compute[250269]: 2026-01-23 10:07:22.995 250273 DEBUG oslo_concurrency.lockutils [req-daf327eb-1536-4c28-821f-3fb718a6c3ec req-422a30bd-d84b-44fc-8a3f-69ee5494aafe 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-6e5889e0-b5b3-442c-b7dd-0434b1da7c96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:07:23 np0005593232 nova_compute[250269]: 2026-01-23 10:07:23.069 250273 DEBUG oslo_concurrency.processutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6e5889e0-b5b3-442c-b7dd-0434b1da7c96/disk.config 6e5889e0-b5b3-442c-b7dd-0434b1da7c96_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:07:23 np0005593232 nova_compute[250269]: 2026-01-23 10:07:23.069 250273 INFO nova.virt.libvirt.driver [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Deleting local config drive /var/lib/nova/instances/6e5889e0-b5b3-442c-b7dd-0434b1da7c96/disk.config because it was imported into RBD.#033[00m
Jan 23 05:07:23 np0005593232 kernel: tap06d14633-c4: entered promiscuous mode
Jan 23 05:07:23 np0005593232 NetworkManager[49057]: <info>  [1769162843.1246] manager: (tap06d14633-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/197)
Jan 23 05:07:23 np0005593232 nova_compute[250269]: 2026-01-23 10:07:23.125 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:23 np0005593232 ovn_controller[151001]: 2026-01-23T10:07:23Z|00400|binding|INFO|Claiming lport 06d14633-c41f-49f8-a5ac-9c37e350d1ae for this chassis.
Jan 23 05:07:23 np0005593232 ovn_controller[151001]: 2026-01-23T10:07:23Z|00401|binding|INFO|06d14633-c41f-49f8-a5ac-9c37e350d1ae: Claiming fa:16:3e:53:dd:ea 10.100.0.5
Jan 23 05:07:23 np0005593232 nova_compute[250269]: 2026-01-23 10:07:23.130 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2292: 321 pgs: 321 active+clean; 88 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 69 KiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.150 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:dd:ea 10.100.0.5'], port_security=['fa:16:3e:53:dd:ea 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '6e5889e0-b5b3-442c-b7dd-0434b1da7c96', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93735878-f62d-4a5f-96df-bf97f85d787a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '924f976bcbb74ec195730b68eebe1f2a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '30915781-ba6b-4325-9c75-4d72f58120e7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1f72e5c-e22f-424b-b6ed-0c502ff13aa3, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=06d14633-c41f-49f8-a5ac-9c37e350d1ae) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.152 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 06d14633-c41f-49f8-a5ac-9c37e350d1ae in datapath 93735878-f62d-4a5f-96df-bf97f85d787a bound to our chassis#033[00m
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.154 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93735878-f62d-4a5f-96df-bf97f85d787a#033[00m
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.165 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[89ac5aae-b634-405c-b406-b394be4dc1c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.166 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap93735878-f1 in ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:07:23 np0005593232 systemd-udevd[327150]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.168 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap93735878-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.168 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a89fcc3f-78a6-482e-9e21-0a039f02fba7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.168 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6569d4b6-ab2e-4ba5-9cfe-9a50943d3a22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:23 np0005593232 systemd-machined[215836]: New machine qemu-50-instance-00000077.
Jan 23 05:07:23 np0005593232 NetworkManager[49057]: <info>  [1769162843.1867] device (tap06d14633-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:07:23 np0005593232 NetworkManager[49057]: <info>  [1769162843.1877] device (tap06d14633-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.188 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[5735cc99-c7b8-4f97-9433-92187dc13bfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:23 np0005593232 systemd[1]: Started Virtual Machine qemu-50-instance-00000077.
Jan 23 05:07:23 np0005593232 nova_compute[250269]: 2026-01-23 10:07:23.205 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:23 np0005593232 ovn_controller[151001]: 2026-01-23T10:07:23Z|00402|binding|INFO|Setting lport 06d14633-c41f-49f8-a5ac-9c37e350d1ae ovn-installed in OVS
Jan 23 05:07:23 np0005593232 ovn_controller[151001]: 2026-01-23T10:07:23Z|00403|binding|INFO|Setting lport 06d14633-c41f-49f8-a5ac-9c37e350d1ae up in Southbound
Jan 23 05:07:23 np0005593232 nova_compute[250269]: 2026-01-23 10:07:23.211 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.214 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[bcede76e-f95c-4249-8415-4db9e405b1ea]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.248 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[d53e318e-a214-4570-8bbd-d66d81d16872]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.254 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[933d9cac-efaf-47db-b99c-b6bb0b95be43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:23 np0005593232 NetworkManager[49057]: <info>  [1769162843.2566] manager: (tap93735878-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/198)
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.289 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[68949365-04f6-4dc7-9e70-fec294de6a2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.293 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[8d798d74-0c7d-40b3-b0e8-6fb2ab9b57c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:23 np0005593232 NetworkManager[49057]: <info>  [1769162843.3231] device (tap93735878-f0): carrier: link connected
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.329 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[001103a6-b9b5-497e-b17c-538c0aacd49b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.349 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[690a25a8-e60f-44fa-a884-85b0ad859503]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93735878-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c0:41:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 124], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675646, 'reachable_time': 44856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327182, 'error': None, 'target': 'ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.366 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4a053af6-435d-42be-abe4-209b4c557e1d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec0:41c8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 675646, 'tstamp': 675646}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327183, 'error': None, 'target': 'ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.387 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3b24daa3-4cfb-4739-8dda-3a332c36335a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93735878-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c0:41:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 124], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675646, 'reachable_time': 44856, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327184, 'error': None, 'target': 'ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:23.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.435 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[19b230d3-6a73-46ce-958d-fbcbe8d0aa5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.560 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a795e6c8-a5e1-4153-802b-ac59c786ee4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.562 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93735878-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.562 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.563 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93735878-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:07:23 np0005593232 NetworkManager[49057]: <info>  [1769162843.5655] manager: (tap93735878-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/199)
Jan 23 05:07:23 np0005593232 nova_compute[250269]: 2026-01-23 10:07:23.565 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:23 np0005593232 kernel: tap93735878-f0: entered promiscuous mode
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.567 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93735878-f0, col_values=(('external_ids', {'iface-id': 'c75eef02-aabe-4477-9239-97f7fb86cd02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:07:23 np0005593232 ovn_controller[151001]: 2026-01-23T10:07:23Z|00404|binding|INFO|Releasing lport c75eef02-aabe-4477-9239-97f7fb86cd02 from this chassis (sb_readonly=0)
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.570 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/93735878-f62d-4a5f-96df-bf97f85d787a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/93735878-f62d-4a5f-96df-bf97f85d787a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:07:23 np0005593232 nova_compute[250269]: 2026-01-23 10:07:23.571 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.572 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9b07ef86-e054-4d99-b284-1edc053d59ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.573 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-93735878-f62d-4a5f-96df-bf97f85d787a
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/93735878-f62d-4a5f-96df-bf97f85d787a.pid.haproxy
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 93735878-f62d-4a5f-96df-bf97f85d787a
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:07:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:23.576 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a', 'env', 'PROCESS_TAG=haproxy-93735878-f62d-4a5f-96df-bf97f85d787a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/93735878-f62d-4a5f-96df-bf97f85d787a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:07:23 np0005593232 nova_compute[250269]: 2026-01-23 10:07:23.585 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:23 np0005593232 podman[327231]: 2026-01-23 10:07:23.949380473 +0000 UTC m=+0.048732865 container create 4b79f8651e62e0e3723f85e1f308dccf0e4cebccb9d7c61728fb1b203cebaa6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 05:07:23 np0005593232 systemd[1]: Started libpod-conmon-4b79f8651e62e0e3723f85e1f308dccf0e4cebccb9d7c61728fb1b203cebaa6e.scope.
Jan 23 05:07:23 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:07:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877c98b958e76ad23ec03ce0660b6af6f29b18f60c401e3b3c3ae5dcc2c1616f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:07:24 np0005593232 podman[327231]: 2026-01-23 10:07:24.016632194 +0000 UTC m=+0.115984586 container init 4b79f8651e62e0e3723f85e1f308dccf0e4cebccb9d7c61728fb1b203cebaa6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 05:07:24 np0005593232 podman[327231]: 2026-01-23 10:07:23.924842366 +0000 UTC m=+0.024194778 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:07:24 np0005593232 podman[327231]: 2026-01-23 10:07:24.023001105 +0000 UTC m=+0.122353497 container start 4b79f8651e62e0e3723f85e1f308dccf0e4cebccb9d7c61728fb1b203cebaa6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 23 05:07:24 np0005593232 neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a[327271]: [NOTICE]   (327278) : New worker (327281) forked
Jan 23 05:07:24 np0005593232 neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a[327271]: [NOTICE]   (327278) : Loading success.
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.084 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162844.0833783, 6e5889e0-b5b3-442c-b7dd-0434b1da7c96 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.084 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] VM Started (Lifecycle Event)#033[00m
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.116 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.120 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162844.0835848, 6e5889e0-b5b3-442c-b7dd-0434b1da7c96 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.120 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.143 250273 DEBUG nova.compute.manager [req-5bd6054e-79cc-4fd1-a2c2-ea0c646cb7ed req-0930d41c-9eb4-416d-bbc3-a950c0c88799 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Received event network-vif-plugged-06d14633-c41f-49f8-a5ac-9c37e350d1ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.143 250273 DEBUG oslo_concurrency.lockutils [req-5bd6054e-79cc-4fd1-a2c2-ea0c646cb7ed req-0930d41c-9eb4-416d-bbc3-a950c0c88799 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.144 250273 DEBUG oslo_concurrency.lockutils [req-5bd6054e-79cc-4fd1-a2c2-ea0c646cb7ed req-0930d41c-9eb4-416d-bbc3-a950c0c88799 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.144 250273 DEBUG oslo_concurrency.lockutils [req-5bd6054e-79cc-4fd1-a2c2-ea0c646cb7ed req-0930d41c-9eb4-416d-bbc3-a950c0c88799 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.144 250273 DEBUG nova.compute.manager [req-5bd6054e-79cc-4fd1-a2c2-ea0c646cb7ed req-0930d41c-9eb4-416d-bbc3-a950c0c88799 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Processing event network-vif-plugged-06d14633-c41f-49f8-a5ac-9c37e350d1ae _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.144 250273 DEBUG nova.compute.manager [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.148 250273 DEBUG nova.virt.libvirt.driver [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.151 250273 INFO nova.virt.libvirt.driver [-] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Instance spawned successfully.#033[00m
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.151 250273 DEBUG nova.virt.libvirt.driver [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.167 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.173 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162844.1482394, 6e5889e0-b5b3-442c-b7dd-0434b1da7c96 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.174 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] VM Resumed (Lifecycle Event)
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.176 250273 DEBUG nova.virt.libvirt.driver [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.176 250273 DEBUG nova.virt.libvirt.driver [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.177 250273 DEBUG nova.virt.libvirt.driver [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.177 250273 DEBUG nova.virt.libvirt.driver [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.177 250273 DEBUG nova.virt.libvirt.driver [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.178 250273 DEBUG nova.virt.libvirt.driver [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.212 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.216 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.264 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.293 250273 INFO nova.compute.manager [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Took 13.70 seconds to spawn the instance on the hypervisor.
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.294 250273 DEBUG nova.compute.manager [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:07:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:07:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:24.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.596 250273 INFO nova.compute.manager [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Took 15.19 seconds to build instance.
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.680 250273 DEBUG oslo_concurrency.lockutils [None req-b39cb4a9-9295-4edd-8e55-7ba66c56533d c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.416s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:07:24 np0005593232 nova_compute[250269]: 2026-01-23 10:07:24.880 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:07:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2293: 321 pgs: 321 active+clean; 88 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 10 KiB/s rd, 1.1 KiB/s wr, 14 op/s
Jan 23 05:07:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:25.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:26 np0005593232 nova_compute[250269]: 2026-01-23 10:07:26.299 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:07:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:26.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:26.490 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 05:07:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:26.491 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 05:07:26 np0005593232 nova_compute[250269]: 2026-01-23 10:07:26.491 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:07:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:07:27 np0005593232 nova_compute[250269]: 2026-01-23 10:07:27.093 250273 DEBUG nova.compute.manager [req-3ab56c5e-52e0-42e6-9620-289dbbafa42b req-4d7bad73-bfbc-49da-abf4-0196b5c24b15 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Received event network-vif-plugged-06d14633-c41f-49f8-a5ac-9c37e350d1ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:07:27 np0005593232 nova_compute[250269]: 2026-01-23 10:07:27.094 250273 DEBUG oslo_concurrency.lockutils [req-3ab56c5e-52e0-42e6-9620-289dbbafa42b req-4d7bad73-bfbc-49da-abf4-0196b5c24b15 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:07:27 np0005593232 nova_compute[250269]: 2026-01-23 10:07:27.094 250273 DEBUG oslo_concurrency.lockutils [req-3ab56c5e-52e0-42e6-9620-289dbbafa42b req-4d7bad73-bfbc-49da-abf4-0196b5c24b15 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:07:27 np0005593232 nova_compute[250269]: 2026-01-23 10:07:27.094 250273 DEBUG oslo_concurrency.lockutils [req-3ab56c5e-52e0-42e6-9620-289dbbafa42b req-4d7bad73-bfbc-49da-abf4-0196b5c24b15 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:07:27 np0005593232 nova_compute[250269]: 2026-01-23 10:07:27.094 250273 DEBUG nova.compute.manager [req-3ab56c5e-52e0-42e6-9620-289dbbafa42b req-4d7bad73-bfbc-49da-abf4-0196b5c24b15 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] No waiting events found dispatching network-vif-plugged-06d14633-c41f-49f8-a5ac-9c37e350d1ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 05:07:27 np0005593232 nova_compute[250269]: 2026-01-23 10:07:27.095 250273 WARNING nova.compute.manager [req-3ab56c5e-52e0-42e6-9620-289dbbafa42b req-4d7bad73-bfbc-49da-abf4-0196b5c24b15 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Received unexpected event network-vif-plugged-06d14633-c41f-49f8-a5ac-9c37e350d1ae for instance with vm_state active and task_state None.
Jan 23 05:07:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2294: 321 pgs: 321 active+clean; 88 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 956 KiB/s rd, 1.2 KiB/s wr, 51 op/s
Jan 23 05:07:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:27.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:28.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:28.493 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 05:07:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2295: 321 pgs: 321 active+clean; 88 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 23 05:07:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:07:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:29.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:07:29 np0005593232 nova_compute[250269]: 2026-01-23 10:07:29.884 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:07:30 np0005593232 NetworkManager[49057]: <info>  [1769162850.2469] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/200)
Jan 23 05:07:30 np0005593232 nova_compute[250269]: 2026-01-23 10:07:30.246 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:07:30 np0005593232 NetworkManager[49057]: <info>  [1769162850.2491] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/201)
Jan 23 05:07:30 np0005593232 nova_compute[250269]: 2026-01-23 10:07:30.329 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:07:30 np0005593232 ovn_controller[151001]: 2026-01-23T10:07:30Z|00405|binding|INFO|Releasing lport c75eef02-aabe-4477-9239-97f7fb86cd02 from this chassis (sb_readonly=0)
Jan 23 05:07:30 np0005593232 nova_compute[250269]: 2026-01-23 10:07:30.336 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:07:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:07:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:30.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:07:30 np0005593232 nova_compute[250269]: 2026-01-23 10:07:30.428 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:07:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2296: 321 pgs: 321 active+clean; 88 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 23 05:07:31 np0005593232 nova_compute[250269]: 2026-01-23 10:07:31.300 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:07:31 np0005593232 nova_compute[250269]: 2026-01-23 10:07:31.363 250273 DEBUG nova.compute.manager [req-56ed6a34-1d28-4167-9190-6928086a6573 req-a52e457b-82c5-4aba-a7b5-e288aa4d42bd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Received event network-changed-06d14633-c41f-49f8-a5ac-9c37e350d1ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:07:31 np0005593232 nova_compute[250269]: 2026-01-23 10:07:31.364 250273 DEBUG nova.compute.manager [req-56ed6a34-1d28-4167-9190-6928086a6573 req-a52e457b-82c5-4aba-a7b5-e288aa4d42bd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Refreshing instance network info cache due to event network-changed-06d14633-c41f-49f8-a5ac-9c37e350d1ae. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 05:07:31 np0005593232 nova_compute[250269]: 2026-01-23 10:07:31.364 250273 DEBUG oslo_concurrency.lockutils [req-56ed6a34-1d28-4167-9190-6928086a6573 req-a52e457b-82c5-4aba-a7b5-e288aa4d42bd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-6e5889e0-b5b3-442c-b7dd-0434b1da7c96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 05:07:31 np0005593232 nova_compute[250269]: 2026-01-23 10:07:31.364 250273 DEBUG oslo_concurrency.lockutils [req-56ed6a34-1d28-4167-9190-6928086a6573 req-a52e457b-82c5-4aba-a7b5-e288aa4d42bd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-6e5889e0-b5b3-442c-b7dd-0434b1da7c96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 05:07:31 np0005593232 nova_compute[250269]: 2026-01-23 10:07:31.365 250273 DEBUG nova.network.neutron [req-56ed6a34-1d28-4167-9190-6928086a6573 req-a52e457b-82c5-4aba-a7b5-e288aa4d42bd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Refreshing network info cache for port 06d14633-c41f-49f8-a5ac-9c37e350d1ae _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 05:07:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:07:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:31.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:07:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:07:31 np0005593232 nova_compute[250269]: 2026-01-23 10:07:31.853 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:07:32 np0005593232 nova_compute[250269]: 2026-01-23 10:07:32.200 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:07:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:07:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:32.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:07:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2297: 321 pgs: 321 active+clean; 88 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 23 05:07:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:33.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:34 np0005593232 nova_compute[250269]: 2026-01-23 10:07:34.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:07:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:34.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:34 np0005593232 nova_compute[250269]: 2026-01-23 10:07:34.886 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:07:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2298: 321 pgs: 321 active+clean; 88 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 23 05:07:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:35.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:35 np0005593232 nova_compute[250269]: 2026-01-23 10:07:35.630 250273 DEBUG nova.network.neutron [req-56ed6a34-1d28-4167-9190-6928086a6573 req-a52e457b-82c5-4aba-a7b5-e288aa4d42bd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Updated VIF entry in instance network info cache for port 06d14633-c41f-49f8-a5ac-9c37e350d1ae. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 05:07:35 np0005593232 nova_compute[250269]: 2026-01-23 10:07:35.630 250273 DEBUG nova.network.neutron [req-56ed6a34-1d28-4167-9190-6928086a6573 req-a52e457b-82c5-4aba-a7b5-e288aa4d42bd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Updating instance_info_cache with network_info: [{"id": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "address": "fa:16:3e:53:dd:ea", "network": {"id": "93735878-f62d-4a5f-96df-bf97f85d787a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-259751152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "924f976bcbb74ec195730b68eebe1f2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d14633-c4", "ovs_interfaceid": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 05:07:35 np0005593232 nova_compute[250269]: 2026-01-23 10:07:35.665 250273 DEBUG oslo_concurrency.lockutils [req-56ed6a34-1d28-4167-9190-6928086a6573 req-a52e457b-82c5-4aba-a7b5-e288aa4d42bd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-6e5889e0-b5b3-442c-b7dd-0434b1da7c96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 05:07:36 np0005593232 nova_compute[250269]: 2026-01-23 10:07:36.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:07:36 np0005593232 nova_compute[250269]: 2026-01-23 10:07:36.335 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:07:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:36.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:07:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2299: 321 pgs: 321 active+clean; 92 MiB data, 872 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 389 KiB/s wr, 75 op/s
Jan 23 05:07:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:07:37
Jan 23 05:07:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:07:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:07:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['backups', 'images', 'volumes', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', '.mgr']
Jan 23 05:07:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:07:37 np0005593232 nova_compute[250269]: 2026-01-23 10:07:37.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:07:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:07:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:37.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:07:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:07:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:07:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:07:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:07:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:07:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:07:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:07:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:07:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:07:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:07:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:07:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:07:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:07:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:07:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:07:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:07:38 np0005593232 nova_compute[250269]: 2026-01-23 10:07:38.327 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:07:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:07:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:38.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:07:38 np0005593232 ovn_controller[151001]: 2026-01-23T10:07:38Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:53:dd:ea 10.100.0.5
Jan 23 05:07:38 np0005593232 ovn_controller[151001]: 2026-01-23T10:07:38Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:dd:ea 10.100.0.5
Jan 23 05:07:38 np0005593232 nova_compute[250269]: 2026-01-23 10:07:38.975 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:07:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2300: 321 pgs: 321 active+clean; 114 MiB data, 892 MiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 96 op/s
Jan 23 05:07:39 np0005593232 nova_compute[250269]: 2026-01-23 10:07:39.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:07:39 np0005593232 nova_compute[250269]: 2026-01-23 10:07:39.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 05:07:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:39.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:39 np0005593232 nova_compute[250269]: 2026-01-23 10:07:39.890 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:07:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:40.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2301: 321 pgs: 321 active+clean; 114 MiB data, 892 MiB used, 20 GiB / 21 GiB avail; 318 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 23 05:07:41 np0005593232 nova_compute[250269]: 2026-01-23 10:07:41.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:07:41 np0005593232 nova_compute[250269]: 2026-01-23 10:07:41.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 05:07:41 np0005593232 nova_compute[250269]: 2026-01-23 10:07:41.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 05:07:41 np0005593232 nova_compute[250269]: 2026-01-23 10:07:41.337 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:07:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:41.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:07:41 np0005593232 nova_compute[250269]: 2026-01-23 10:07:41.696 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-6e5889e0-b5b3-442c-b7dd-0434b1da7c96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:07:41 np0005593232 nova_compute[250269]: 2026-01-23 10:07:41.696 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-6e5889e0-b5b3-442c-b7dd-0434b1da7c96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:07:41 np0005593232 nova_compute[250269]: 2026-01-23 10:07:41.696 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:07:41 np0005593232 nova_compute[250269]: 2026-01-23 10:07:41.696 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6e5889e0-b5b3-442c-b7dd-0434b1da7c96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:07:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:42.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:42.619 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:07:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:42.620 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:07:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:07:42.620 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:07:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2302: 321 pgs: 321 active+clean; 167 MiB data, 921 MiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Jan 23 05:07:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:43.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:07:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:44.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:07:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:07:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3466365027' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:07:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:07:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3466365027' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:07:44 np0005593232 nova_compute[250269]: 2026-01-23 10:07:44.893 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2303: 321 pgs: 321 active+clean; 167 MiB data, 921 MiB used, 20 GiB / 21 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Jan 23 05:07:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:07:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:45.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:07:45 np0005593232 nova_compute[250269]: 2026-01-23 10:07:45.752 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Updating instance_info_cache with network_info: [{"id": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "address": "fa:16:3e:53:dd:ea", "network": {"id": "93735878-f62d-4a5f-96df-bf97f85d787a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-259751152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "924f976bcbb74ec195730b68eebe1f2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d14633-c4", "ovs_interfaceid": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:07:45 np0005593232 nova_compute[250269]: 2026-01-23 10:07:45.774 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-6e5889e0-b5b3-442c-b7dd-0434b1da7c96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:07:45 np0005593232 nova_compute[250269]: 2026-01-23 10:07:45.775 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:07:45 np0005593232 nova_compute[250269]: 2026-01-23 10:07:45.775 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:07:45 np0005593232 nova_compute[250269]: 2026-01-23 10:07:45.776 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:07:45 np0005593232 nova_compute[250269]: 2026-01-23 10:07:45.776 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:07:45 np0005593232 nova_compute[250269]: 2026-01-23 10:07:45.826 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:07:45 np0005593232 nova_compute[250269]: 2026-01-23 10:07:45.827 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:07:45 np0005593232 nova_compute[250269]: 2026-01-23 10:07:45.827 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:07:45 np0005593232 nova_compute[250269]: 2026-01-23 10:07:45.827 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:07:45 np0005593232 nova_compute[250269]: 2026-01-23 10:07:45.828 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:07:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:07:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/963404562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:07:46 np0005593232 nova_compute[250269]: 2026-01-23 10:07:46.339 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:46 np0005593232 nova_compute[250269]: 2026-01-23 10:07:46.349 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:07:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:07:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:46.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:07:46 np0005593232 nova_compute[250269]: 2026-01-23 10:07:46.452 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:07:46 np0005593232 nova_compute[250269]: 2026-01-23 10:07:46.453 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:07:46 np0005593232 podman[327426]: 2026-01-23 10:07:46.63279441 +0000 UTC m=+0.093654982 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:07:46 np0005593232 nova_compute[250269]: 2026-01-23 10:07:46.646 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:07:46 np0005593232 nova_compute[250269]: 2026-01-23 10:07:46.647 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4216MB free_disk=20.922107696533203GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:07:46 np0005593232 nova_compute[250269]: 2026-01-23 10:07:46.648 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:07:46 np0005593232 nova_compute[250269]: 2026-01-23 10:07:46.648 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:07:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:07:46 np0005593232 nova_compute[250269]: 2026-01-23 10:07:46.783 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 6e5889e0-b5b3-442c-b7dd-0434b1da7c96 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:07:46 np0005593232 nova_compute[250269]: 2026-01-23 10:07:46.784 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:07:46 np0005593232 nova_compute[250269]: 2026-01-23 10:07:46.784 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:07:46 np0005593232 nova_compute[250269]: 2026-01-23 10:07:46.948 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031528810138656245 of space, bias 1.0, pg target 0.9458643041596874 quantized to 32 (current 32)
Jan 23 05:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 05:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 05:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 05:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 05:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 05:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:07:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 05:07:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2304: 321 pgs: 321 active+clean; 167 MiB data, 923 MiB used, 20 GiB / 21 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Jan 23 05:07:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:07:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1446084275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:07:47 np0005593232 nova_compute[250269]: 2026-01-23 10:07:47.437 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:07:47 np0005593232 nova_compute[250269]: 2026-01-23 10:07:47.442 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:07:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:47.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:07:47 np0005593232 nova_compute[250269]: 2026-01-23 10:07:47.447 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:07:47 np0005593232 nova_compute[250269]: 2026-01-23 10:07:47.470 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:07:47 np0005593232 nova_compute[250269]: 2026-01-23 10:07:47.509 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:07:47 np0005593232 nova_compute[250269]: 2026-01-23 10:07:47.509 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:07:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:48.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2305: 321 pgs: 321 active+clean; 191 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 349 KiB/s rd, 4.8 MiB/s wr, 100 op/s
Jan 23 05:07:49 np0005593232 nova_compute[250269]: 2026-01-23 10:07:49.195 250273 DEBUG oslo_concurrency.lockutils [None req-63af1f34-53ed-4ec0-abcd-0c41903bce40 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquiring lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:07:49 np0005593232 nova_compute[250269]: 2026-01-23 10:07:49.195 250273 DEBUG oslo_concurrency.lockutils [None req-63af1f34-53ed-4ec0-abcd-0c41903bce40 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:07:49 np0005593232 nova_compute[250269]: 2026-01-23 10:07:49.233 250273 DEBUG nova.objects.instance [None req-63af1f34-53ed-4ec0-abcd-0c41903bce40 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lazy-loading 'flavor' on Instance uuid 6e5889e0-b5b3-442c-b7dd-0434b1da7c96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:07:49 np0005593232 nova_compute[250269]: 2026-01-23 10:07:49.282 250273 DEBUG oslo_concurrency.lockutils [None req-63af1f34-53ed-4ec0-abcd-0c41903bce40 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.087s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:07:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:49.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:49 np0005593232 nova_compute[250269]: 2026-01-23 10:07:49.619 250273 DEBUG oslo_concurrency.lockutils [None req-63af1f34-53ed-4ec0-abcd-0c41903bce40 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquiring lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:07:49 np0005593232 nova_compute[250269]: 2026-01-23 10:07:49.620 250273 DEBUG oslo_concurrency.lockutils [None req-63af1f34-53ed-4ec0-abcd-0c41903bce40 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:07:49 np0005593232 nova_compute[250269]: 2026-01-23 10:07:49.620 250273 INFO nova.compute.manager [None req-63af1f34-53ed-4ec0-abcd-0c41903bce40 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Attaching volume 509947fc-76b8-4398-98d1-07609c1b5d35 to /dev/vdb#033[00m
Jan 23 05:07:49 np0005593232 nova_compute[250269]: 2026-01-23 10:07:49.896 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:50 np0005593232 nova_compute[250269]: 2026-01-23 10:07:50.003 250273 DEBUG os_brick.utils [None req-63af1f34-53ed-4ec0-abcd-0c41903bce40 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 23 05:07:50 np0005593232 nova_compute[250269]: 2026-01-23 10:07:50.006 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:07:50 np0005593232 nova_compute[250269]: 2026-01-23 10:07:50.066 265031 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:07:50 np0005593232 nova_compute[250269]: 2026-01-23 10:07:50.066 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[53973597-2cbb-459b-836b-5ca0eaa292c7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:50 np0005593232 nova_compute[250269]: 2026-01-23 10:07:50.069 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:07:50 np0005593232 nova_compute[250269]: 2026-01-23 10:07:50.078 265031 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:07:50 np0005593232 nova_compute[250269]: 2026-01-23 10:07:50.079 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[4306e2c6-5c6a-4cef-99b7-10835fea6450]: (4, ('InitiatorName=iqn.1994-05.com.redhat:c6473626b456', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:50 np0005593232 nova_compute[250269]: 2026-01-23 10:07:50.081 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:07:50 np0005593232 nova_compute[250269]: 2026-01-23 10:07:50.096 265031 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:07:50 np0005593232 nova_compute[250269]: 2026-01-23 10:07:50.097 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[b82bf887-a5a7-45f2-83c0-70be399294bb]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:50 np0005593232 nova_compute[250269]: 2026-01-23 10:07:50.098 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[4719dbe2-2128-4049-86cf-b140ba341c38]: (4, 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:07:50 np0005593232 nova_compute[250269]: 2026-01-23 10:07:50.099 250273 DEBUG oslo_concurrency.processutils [None req-63af1f34-53ed-4ec0-abcd-0c41903bce40 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:07:50 np0005593232 nova_compute[250269]: 2026-01-23 10:07:50.130 250273 DEBUG oslo_concurrency.processutils [None req-63af1f34-53ed-4ec0-abcd-0c41903bce40 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:07:50 np0005593232 nova_compute[250269]: 2026-01-23 10:07:50.133 250273 DEBUG os_brick.initiator.connectors.lightos [None req-63af1f34-53ed-4ec0-abcd-0c41903bce40 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 23 05:07:50 np0005593232 nova_compute[250269]: 2026-01-23 10:07:50.133 250273 DEBUG os_brick.initiator.connectors.lightos [None req-63af1f34-53ed-4ec0-abcd-0c41903bce40 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 23 05:07:50 np0005593232 nova_compute[250269]: 2026-01-23 10:07:50.133 250273 DEBUG os_brick.initiator.connectors.lightos [None req-63af1f34-53ed-4ec0-abcd-0c41903bce40 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 23 05:07:50 np0005593232 nova_compute[250269]: 2026-01-23 10:07:50.134 250273 DEBUG os_brick.utils [None req-63af1f34-53ed-4ec0-abcd-0c41903bce40 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] <== get_connector_properties: return (129ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:c6473626b456', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 23 05:07:50 np0005593232 nova_compute[250269]: 2026-01-23 10:07:50.134 250273 DEBUG nova.virt.block_device [None req-63af1f34-53ed-4ec0-abcd-0c41903bce40 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Updating existing volume attachment record: 37519fcc-265f-48cd-92f5-8f40812f1365 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 23 05:07:50 np0005593232 podman[327485]: 2026-01-23 10:07:50.424102482 +0000 UTC m=+0.067302103 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true)
Jan 23 05:07:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:07:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:50.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:07:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2306: 321 pgs: 321 active+clean; 191 MiB data, 929 MiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.0 MiB/s wr, 43 op/s
Jan 23 05:07:51 np0005593232 nova_compute[250269]: 2026-01-23 10:07:51.342 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:51.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:51 np0005593232 nova_compute[250269]: 2026-01-23 10:07:51.505 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:07:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:07:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:07:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/717426992' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:07:52 np0005593232 nova_compute[250269]: 2026-01-23 10:07:52.083 250273 DEBUG nova.objects.instance [None req-63af1f34-53ed-4ec0-abcd-0c41903bce40 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lazy-loading 'flavor' on Instance uuid 6e5889e0-b5b3-442c-b7dd-0434b1da7c96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:07:52 np0005593232 nova_compute[250269]: 2026-01-23 10:07:52.129 250273 DEBUG nova.virt.libvirt.driver [None req-63af1f34-53ed-4ec0-abcd-0c41903bce40 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Attempting to attach volume 509947fc-76b8-4398-98d1-07609c1b5d35 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 23 05:07:52 np0005593232 nova_compute[250269]: 2026-01-23 10:07:52.132 250273 DEBUG nova.virt.libvirt.guest [None req-63af1f34-53ed-4ec0-abcd-0c41903bce40 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] attach device xml: <disk type="network" device="disk">
Jan 23 05:07:52 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:07:52 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-509947fc-76b8-4398-98d1-07609c1b5d35">
Jan 23 05:07:52 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:07:52 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:07:52 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:07:52 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:07:52 np0005593232 nova_compute[250269]:  <auth username="openstack">
Jan 23 05:07:52 np0005593232 nova_compute[250269]:    <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:07:52 np0005593232 nova_compute[250269]:  </auth>
Jan 23 05:07:52 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 05:07:52 np0005593232 nova_compute[250269]:  <serial>509947fc-76b8-4398-98d1-07609c1b5d35</serial>
Jan 23 05:07:52 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:07:52 np0005593232 nova_compute[250269]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 23 05:07:52 np0005593232 nova_compute[250269]: 2026-01-23 10:07:52.293 250273 DEBUG nova.virt.libvirt.driver [None req-63af1f34-53ed-4ec0-abcd-0c41903bce40 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:07:52 np0005593232 nova_compute[250269]: 2026-01-23 10:07:52.294 250273 DEBUG nova.virt.libvirt.driver [None req-63af1f34-53ed-4ec0-abcd-0c41903bce40 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:07:52 np0005593232 nova_compute[250269]: 2026-01-23 10:07:52.294 250273 DEBUG nova.virt.libvirt.driver [None req-63af1f34-53ed-4ec0-abcd-0c41903bce40 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:07:52 np0005593232 nova_compute[250269]: 2026-01-23 10:07:52.294 250273 DEBUG nova.virt.libvirt.driver [None req-63af1f34-53ed-4ec0-abcd-0c41903bce40 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] No VIF found with MAC fa:16:3e:53:dd:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:07:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:07:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:52.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:07:52 np0005593232 nova_compute[250269]: 2026-01-23 10:07:52.600 250273 DEBUG oslo_concurrency.lockutils [None req-63af1f34-53ed-4ec0-abcd-0c41903bce40 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:07:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2307: 321 pgs: 321 active+clean; 213 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 210 KiB/s rd, 3.6 MiB/s wr, 77 op/s
Jan 23 05:07:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:53.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:07:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:54.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:07:54 np0005593232 nova_compute[250269]: 2026-01-23 10:07:54.899 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2308: 321 pgs: 321 active+clean; 213 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 186 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Jan 23 05:07:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:07:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:55.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:07:56 np0005593232 nova_compute[250269]: 2026-01-23 10:07:56.343 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:07:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:07:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:56.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:07:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:07:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2309: 321 pgs: 321 active+clean; 210 MiB data, 942 MiB used, 20 GiB / 21 GiB avail; 386 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Jan 23 05:07:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:07:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:57.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:07:58 np0005593232 nova_compute[250269]: 2026-01-23 10:07:58.380 250273 DEBUG oslo_concurrency.lockutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:07:58 np0005593232 nova_compute[250269]: 2026-01-23 10:07:58.380 250273 DEBUG oslo_concurrency.lockutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:07:58 np0005593232 nova_compute[250269]: 2026-01-23 10:07:58.431 250273 DEBUG nova.compute.manager [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:07:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:07:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:07:58.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:07:58 np0005593232 nova_compute[250269]: 2026-01-23 10:07:58.574 250273 DEBUG oslo_concurrency.lockutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:07:58 np0005593232 nova_compute[250269]: 2026-01-23 10:07:58.575 250273 DEBUG oslo_concurrency.lockutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:07:58 np0005593232 nova_compute[250269]: 2026-01-23 10:07:58.581 250273 DEBUG nova.virt.hardware [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:07:58 np0005593232 nova_compute[250269]: 2026-01-23 10:07:58.581 250273 INFO nova.compute.claims [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:07:58 np0005593232 nova_compute[250269]: 2026-01-23 10:07:58.836 250273 DEBUG oslo_concurrency.processutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:07:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2310: 321 pgs: 321 active+clean; 167 MiB data, 923 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Jan 23 05:07:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:07:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2761444140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:07:59 np0005593232 nova_compute[250269]: 2026-01-23 10:07:59.260 250273 DEBUG oslo_concurrency.processutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:07:59 np0005593232 nova_compute[250269]: 2026-01-23 10:07:59.266 250273 DEBUG nova.compute.provider_tree [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:07:59 np0005593232 nova_compute[250269]: 2026-01-23 10:07:59.291 250273 DEBUG nova.scheduler.client.report [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:07:59 np0005593232 nova_compute[250269]: 2026-01-23 10:07:59.334 250273 DEBUG oslo_concurrency.lockutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:07:59 np0005593232 nova_compute[250269]: 2026-01-23 10:07:59.335 250273 DEBUG nova.compute.manager [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:07:59 np0005593232 nova_compute[250269]: 2026-01-23 10:07:59.415 250273 DEBUG nova.compute.manager [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:07:59 np0005593232 nova_compute[250269]: 2026-01-23 10:07:59.416 250273 DEBUG nova.network.neutron [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:07:59 np0005593232 nova_compute[250269]: 2026-01-23 10:07:59.441 250273 INFO nova.virt.libvirt.driver [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:07:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:07:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:07:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:07:59.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:07:59 np0005593232 nova_compute[250269]: 2026-01-23 10:07:59.470 250273 DEBUG nova.compute.manager [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:07:59 np0005593232 nova_compute[250269]: 2026-01-23 10:07:59.631 250273 DEBUG nova.compute.manager [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 05:07:59 np0005593232 nova_compute[250269]: 2026-01-23 10:07:59.633 250273 DEBUG nova.virt.libvirt.driver [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:07:59 np0005593232 nova_compute[250269]: 2026-01-23 10:07:59.634 250273 INFO nova.virt.libvirt.driver [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Creating image(s)#033[00m
Jan 23 05:07:59 np0005593232 nova_compute[250269]: 2026-01-23 10:07:59.671 250273 DEBUG nova.storage.rbd_utils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] rbd image 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:07:59 np0005593232 nova_compute[250269]: 2026-01-23 10:07:59.702 250273 DEBUG nova.storage.rbd_utils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] rbd image 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:07:59 np0005593232 nova_compute[250269]: 2026-01-23 10:07:59.728 250273 DEBUG nova.storage.rbd_utils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] rbd image 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:07:59 np0005593232 nova_compute[250269]: 2026-01-23 10:07:59.731 250273 DEBUG oslo_concurrency.processutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:07:59 np0005593232 nova_compute[250269]: 2026-01-23 10:07:59.788 250273 DEBUG nova.policy [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aca3cab576d641d3b89e7dddf155d467', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9dd869ce76e44fc8a82b8bbee1654d33', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:07:59 np0005593232 nova_compute[250269]: 2026-01-23 10:07:59.814 250273 DEBUG oslo_concurrency.processutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:07:59 np0005593232 nova_compute[250269]: 2026-01-23 10:07:59.815 250273 DEBUG oslo_concurrency.lockutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:07:59 np0005593232 nova_compute[250269]: 2026-01-23 10:07:59.816 250273 DEBUG oslo_concurrency.lockutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:07:59 np0005593232 nova_compute[250269]: 2026-01-23 10:07:59.816 250273 DEBUG oslo_concurrency.lockutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:07:59 np0005593232 nova_compute[250269]: 2026-01-23 10:07:59.841 250273 DEBUG nova.storage.rbd_utils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] rbd image 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:07:59 np0005593232 nova_compute[250269]: 2026-01-23 10:07:59.845 250273 DEBUG oslo_concurrency.processutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:07:59 np0005593232 nova_compute[250269]: 2026-01-23 10:07:59.901 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:00 np0005593232 nova_compute[250269]: 2026-01-23 10:08:00.196 250273 DEBUG oslo_concurrency.processutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:08:00 np0005593232 nova_compute[250269]: 2026-01-23 10:08:00.265 250273 DEBUG nova.storage.rbd_utils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] resizing rbd image 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 05:08:00 np0005593232 nova_compute[250269]: 2026-01-23 10:08:00.379 250273 DEBUG nova.objects.instance [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'migration_context' on Instance uuid 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:08:00 np0005593232 nova_compute[250269]: 2026-01-23 10:08:00.395 250273 DEBUG nova.virt.libvirt.driver [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 05:08:00 np0005593232 nova_compute[250269]: 2026-01-23 10:08:00.395 250273 DEBUG nova.virt.libvirt.driver [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Ensure instance console log exists: /var/lib/nova/instances/81a8be01-ddd9-4fd2-91a1-886e7f47bfa3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:08:00 np0005593232 nova_compute[250269]: 2026-01-23 10:08:00.396 250273 DEBUG oslo_concurrency.lockutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:08:00 np0005593232 nova_compute[250269]: 2026-01-23 10:08:00.396 250273 DEBUG oslo_concurrency.lockutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:08:00 np0005593232 nova_compute[250269]: 2026-01-23 10:08:00.396 250273 DEBUG oslo_concurrency.lockutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:08:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:00.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2311: 321 pgs: 321 active+clean; 167 MiB data, 923 MiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 590 KiB/s wr, 119 op/s
Jan 23 05:08:01 np0005593232 nova_compute[250269]: 2026-01-23 10:08:01.345 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:08:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:01.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:08:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:08:02 np0005593232 nova_compute[250269]: 2026-01-23 10:08:02.074 250273 DEBUG nova.network.neutron [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Successfully created port: 8ad4c021-5d44-41aa-adad-f593da5206c1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:08:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:02.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:02 np0005593232 ovn_controller[151001]: 2026-01-23T10:08:02Z|00406|binding|INFO|Releasing lport c75eef02-aabe-4477-9239-97f7fb86cd02 from this chassis (sb_readonly=0)
Jan 23 05:08:02 np0005593232 nova_compute[250269]: 2026-01-23 10:08:02.940 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:03 np0005593232 nova_compute[250269]: 2026-01-23 10:08:03.110 250273 DEBUG nova.network.neutron [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Successfully updated port: 8ad4c021-5d44-41aa-adad-f593da5206c1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:08:03 np0005593232 nova_compute[250269]: 2026-01-23 10:08:03.127 250273 DEBUG oslo_concurrency.lockutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "refresh_cache-81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:08:03 np0005593232 nova_compute[250269]: 2026-01-23 10:08:03.128 250273 DEBUG oslo_concurrency.lockutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquired lock "refresh_cache-81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:08:03 np0005593232 nova_compute[250269]: 2026-01-23 10:08:03.128 250273 DEBUG nova.network.neutron [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:08:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2312: 321 pgs: 321 active+clean; 213 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 219 op/s
Jan 23 05:08:03 np0005593232 nova_compute[250269]: 2026-01-23 10:08:03.258 250273 DEBUG nova.compute.manager [req-e51cdf84-e551-4f89-b5d3-d2439cbefdf6 req-be7c4ba7-2e0e-40d4-acec-5a0aec4f8524 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Received event network-changed-8ad4c021-5d44-41aa-adad-f593da5206c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:08:03 np0005593232 nova_compute[250269]: 2026-01-23 10:08:03.259 250273 DEBUG nova.compute.manager [req-e51cdf84-e551-4f89-b5d3-d2439cbefdf6 req-be7c4ba7-2e0e-40d4-acec-5a0aec4f8524 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Refreshing instance network info cache due to event network-changed-8ad4c021-5d44-41aa-adad-f593da5206c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:08:03 np0005593232 nova_compute[250269]: 2026-01-23 10:08:03.259 250273 DEBUG oslo_concurrency.lockutils [req-e51cdf84-e551-4f89-b5d3-d2439cbefdf6 req-be7c4ba7-2e0e-40d4-acec-5a0aec4f8524 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:08:03 np0005593232 nova_compute[250269]: 2026-01-23 10:08:03.435 250273 DEBUG nova.network.neutron [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:08:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:03.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:04.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.878 250273 DEBUG nova.network.neutron [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Updating instance_info_cache with network_info: [{"id": "8ad4c021-5d44-41aa-adad-f593da5206c1", "address": "fa:16:3e:46:50:89", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ad4c021-5d", "ovs_interfaceid": "8ad4c021-5d44-41aa-adad-f593da5206c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.900 250273 DEBUG oslo_concurrency.lockutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Releasing lock "refresh_cache-81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.900 250273 DEBUG nova.compute.manager [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Instance network_info: |[{"id": "8ad4c021-5d44-41aa-adad-f593da5206c1", "address": "fa:16:3e:46:50:89", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ad4c021-5d", "ovs_interfaceid": "8ad4c021-5d44-41aa-adad-f593da5206c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.901 250273 DEBUG oslo_concurrency.lockutils [req-e51cdf84-e551-4f89-b5d3-d2439cbefdf6 req-be7c4ba7-2e0e-40d4-acec-5a0aec4f8524 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.901 250273 DEBUG nova.network.neutron [req-e51cdf84-e551-4f89-b5d3-d2439cbefdf6 req-be7c4ba7-2e0e-40d4-acec-5a0aec4f8524 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Refreshing network info cache for port 8ad4c021-5d44-41aa-adad-f593da5206c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.905 250273 DEBUG nova.virt.libvirt.driver [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Start _get_guest_xml network_info=[{"id": "8ad4c021-5d44-41aa-adad-f593da5206c1", "address": "fa:16:3e:46:50:89", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ad4c021-5d", "ovs_interfaceid": "8ad4c021-5d44-41aa-adad-f593da5206c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.906 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.911 250273 WARNING nova.virt.libvirt.driver [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.919 250273 DEBUG nova.virt.libvirt.host [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.919 250273 DEBUG nova.virt.libvirt.host [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.930 250273 DEBUG nova.virt.libvirt.host [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.931 250273 DEBUG nova.virt.libvirt.host [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.932 250273 DEBUG nova.virt.libvirt.driver [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.933 250273 DEBUG nova.virt.hardware [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.933 250273 DEBUG nova.virt.hardware [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.933 250273 DEBUG nova.virt.hardware [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.933 250273 DEBUG nova.virt.hardware [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.934 250273 DEBUG nova.virt.hardware [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.934 250273 DEBUG nova.virt.hardware [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.934 250273 DEBUG nova.virt.hardware [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.934 250273 DEBUG nova.virt.hardware [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.935 250273 DEBUG nova.virt.hardware [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.935 250273 DEBUG nova.virt.hardware [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.935 250273 DEBUG nova.virt.hardware [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:08:04 np0005593232 nova_compute[250269]: 2026-01-23 10:08:04.939 250273 DEBUG oslo_concurrency.processutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:08:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2313: 321 pgs: 321 active+clean; 213 MiB data, 944 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.8 MiB/s wr, 185 op/s
Jan 23 05:08:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:08:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2762199042' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.398 250273 DEBUG oslo_concurrency.processutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.423 250273 DEBUG nova.storage.rbd_utils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] rbd image 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.430 250273 DEBUG oslo_concurrency.processutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:08:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:05.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:08:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2116842760' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.905 250273 DEBUG oslo_concurrency.processutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.907 250273 DEBUG nova.virt.libvirt.vif [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:07:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-8628670',display_name='tempest-ServerActionsTestOtherB-server-8628670',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-8628670',id=122,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPpuWItOSZUstL5LlOZAhtyKqrmFs0bJ/+DBMLk1rKDBu2SnttdOypH9Db6AMV4nGhLXOyr97hIMUaALurv7OcM9NkoB1CxFMDb3d0IWPDnRphumt71Jz0jUP0kiZtXBTQ==',key_name='tempest-keypair-1844396132',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9dd869ce76e44fc8a82b8bbee1654d33',ramdisk_id='',reservation_id='r-05boc59s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1052932467',owner_user_name='tempest-ServerActionsTestOtherB-1052932467-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:07:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aca3cab576d641d3b89e7dddf155d467',uuid=81a8be01-ddd9-4fd2-91a1-886e7f47bfa3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8ad4c021-5d44-41aa-adad-f593da5206c1", "address": "fa:16:3e:46:50:89", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ad4c021-5d", "ovs_interfaceid": "8ad4c021-5d44-41aa-adad-f593da5206c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.908 250273 DEBUG nova.network.os_vif_util [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converting VIF {"id": "8ad4c021-5d44-41aa-adad-f593da5206c1", "address": "fa:16:3e:46:50:89", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ad4c021-5d", "ovs_interfaceid": "8ad4c021-5d44-41aa-adad-f593da5206c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.909 250273 DEBUG nova.network.os_vif_util [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:50:89,bridge_name='br-int',has_traffic_filtering=True,id=8ad4c021-5d44-41aa-adad-f593da5206c1,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ad4c021-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.910 250273 DEBUG nova.objects.instance [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'pci_devices' on Instance uuid 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.926 250273 DEBUG nova.virt.libvirt.driver [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:08:05 np0005593232 nova_compute[250269]:  <uuid>81a8be01-ddd9-4fd2-91a1-886e7f47bfa3</uuid>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:  <name>instance-0000007a</name>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerActionsTestOtherB-server-8628670</nova:name>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:08:04</nova:creationTime>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:08:05 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:        <nova:user uuid="aca3cab576d641d3b89e7dddf155d467">tempest-ServerActionsTestOtherB-1052932467-project-member</nova:user>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:        <nova:project uuid="9dd869ce76e44fc8a82b8bbee1654d33">tempest-ServerActionsTestOtherB-1052932467</nova:project>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:        <nova:port uuid="8ad4c021-5d44-41aa-adad-f593da5206c1">
Jan 23 05:08:05 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <entry name="serial">81a8be01-ddd9-4fd2-91a1-886e7f47bfa3</entry>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <entry name="uuid">81a8be01-ddd9-4fd2-91a1-886e7f47bfa3</entry>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/81a8be01-ddd9-4fd2-91a1-886e7f47bfa3_disk">
Jan 23 05:08:05 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:08:05 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/81a8be01-ddd9-4fd2-91a1-886e7f47bfa3_disk.config">
Jan 23 05:08:05 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:08:05 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:46:50:89"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <target dev="tap8ad4c021-5d"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/81a8be01-ddd9-4fd2-91a1-886e7f47bfa3/console.log" append="off"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:08:05 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:08:05 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:08:05 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:08:05 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.926 250273 DEBUG nova.compute.manager [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Preparing to wait for external event network-vif-plugged-8ad4c021-5d44-41aa-adad-f593da5206c1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.926 250273 DEBUG oslo_concurrency.lockutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.927 250273 DEBUG oslo_concurrency.lockutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.927 250273 DEBUG oslo_concurrency.lockutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.928 250273 DEBUG nova.virt.libvirt.vif [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:07:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-8628670',display_name='tempest-ServerActionsTestOtherB-server-8628670',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-8628670',id=122,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPpuWItOSZUstL5LlOZAhtyKqrmFs0bJ/+DBMLk1rKDBu2SnttdOypH9Db6AMV4nGhLXOyr97hIMUaALurv7OcM9NkoB1CxFMDb3d0IWPDnRphumt71Jz0jUP0kiZtXBTQ==',key_name='tempest-keypair-1844396132',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9dd869ce76e44fc8a82b8bbee1654d33',ramdisk_id='',reservation_id='r-05boc59s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1052932467',owner_user_name='tempest-ServerActionsTestOtherB-1052932467-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:07:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aca3cab576d641d3b89e7dddf155d467',uuid=81a8be01-ddd9-4fd2-91a1-886e7f47bfa3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8ad4c021-5d44-41aa-adad-f593da5206c1", "address": "fa:16:3e:46:50:89", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ad4c021-5d", "ovs_interfaceid": "8ad4c021-5d44-41aa-adad-f593da5206c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.928 250273 DEBUG nova.network.os_vif_util [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converting VIF {"id": "8ad4c021-5d44-41aa-adad-f593da5206c1", "address": "fa:16:3e:46:50:89", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ad4c021-5d", "ovs_interfaceid": "8ad4c021-5d44-41aa-adad-f593da5206c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.928 250273 DEBUG nova.network.os_vif_util [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:50:89,bridge_name='br-int',has_traffic_filtering=True,id=8ad4c021-5d44-41aa-adad-f593da5206c1,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ad4c021-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.929 250273 DEBUG os_vif [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:50:89,bridge_name='br-int',has_traffic_filtering=True,id=8ad4c021-5d44-41aa-adad-f593da5206c1,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ad4c021-5d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.929 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.930 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.930 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.933 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.933 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ad4c021-5d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.934 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8ad4c021-5d, col_values=(('external_ids', {'iface-id': '8ad4c021-5d44-41aa-adad-f593da5206c1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:50:89', 'vm-uuid': '81a8be01-ddd9-4fd2-91a1-886e7f47bfa3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:08:05 np0005593232 NetworkManager[49057]: <info>  [1769162885.9364] manager: (tap8ad4c021-5d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/202)
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.938 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.942 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:05 np0005593232 nova_compute[250269]: 2026-01-23 10:08:05.944 250273 INFO os_vif [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:50:89,bridge_name='br-int',has_traffic_filtering=True,id=8ad4c021-5d44-41aa-adad-f593da5206c1,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ad4c021-5d')#033[00m
Jan 23 05:08:06 np0005593232 nova_compute[250269]: 2026-01-23 10:08:06.002 250273 DEBUG nova.virt.libvirt.driver [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:08:06 np0005593232 nova_compute[250269]: 2026-01-23 10:08:06.002 250273 DEBUG nova.virt.libvirt.driver [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:08:06 np0005593232 nova_compute[250269]: 2026-01-23 10:08:06.002 250273 DEBUG nova.virt.libvirt.driver [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] No VIF found with MAC fa:16:3e:46:50:89, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:08:06 np0005593232 nova_compute[250269]: 2026-01-23 10:08:06.002 250273 INFO nova.virt.libvirt.driver [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Using config drive#033[00m
Jan 23 05:08:06 np0005593232 nova_compute[250269]: 2026-01-23 10:08:06.028 250273 DEBUG nova.storage.rbd_utils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] rbd image 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:08:06 np0005593232 nova_compute[250269]: 2026-01-23 10:08:06.348 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 05:08:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:08:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 05:08:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:08:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:08:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:06.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:08:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:08:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2314: 321 pgs: 321 active+clean; 228 MiB data, 948 MiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 198 op/s
Jan 23 05:08:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:08:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:08:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:08:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:08:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:08:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:08:07 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev bee50374-13f1-4190-bff8-9445778838b6 does not exist
Jan 23 05:08:07 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 18a6abc5-5709-4f6a-997f-a714b5f784e1 does not exist
Jan 23 05:08:07 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 2af633a9-8cd0-4319-997c-9586de8f98d6 does not exist
Jan 23 05:08:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:08:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:08:07 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:08:07 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:08:07 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:08:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:08:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:08:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:08:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:08:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:07.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:08:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:08:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:08:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:08:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:08:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:08:07 np0005593232 nova_compute[250269]: 2026-01-23 10:08:07.681 250273 INFO nova.virt.libvirt.driver [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Creating config drive at /var/lib/nova/instances/81a8be01-ddd9-4fd2-91a1-886e7f47bfa3/disk.config#033[00m
Jan 23 05:08:07 np0005593232 nova_compute[250269]: 2026-01-23 10:08:07.689 250273 DEBUG oslo_concurrency.processutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/81a8be01-ddd9-4fd2-91a1-886e7f47bfa3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbp3alc_4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:08:07 np0005593232 nova_compute[250269]: 2026-01-23 10:08:07.832 250273 DEBUG oslo_concurrency.processutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/81a8be01-ddd9-4fd2-91a1-886e7f47bfa3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbp3alc_4" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:08:07 np0005593232 nova_compute[250269]: 2026-01-23 10:08:07.869 250273 DEBUG nova.storage.rbd_utils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] rbd image 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:08:07 np0005593232 nova_compute[250269]: 2026-01-23 10:08:07.874 250273 DEBUG oslo_concurrency.processutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/81a8be01-ddd9-4fd2-91a1-886e7f47bfa3/disk.config 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:08:07 np0005593232 podman[328147]: 2026-01-23 10:08:07.939996692 +0000 UTC m=+0.046465511 container create 14838bdd0c028862f7b3b5b63c617080eab3036d35f88c1828afac186e89aa41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sinoussi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 05:08:07 np0005593232 systemd[1]: Started libpod-conmon-14838bdd0c028862f7b3b5b63c617080eab3036d35f88c1828afac186e89aa41.scope.
Jan 23 05:08:07 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:08:08 np0005593232 podman[328147]: 2026-01-23 10:08:08.010510176 +0000 UTC m=+0.116979005 container init 14838bdd0c028862f7b3b5b63c617080eab3036d35f88c1828afac186e89aa41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 05:08:08 np0005593232 podman[328147]: 2026-01-23 10:08:07.920949861 +0000 UTC m=+0.027418720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:08:08 np0005593232 podman[328147]: 2026-01-23 10:08:08.018676348 +0000 UTC m=+0.125145167 container start 14838bdd0c028862f7b3b5b63c617080eab3036d35f88c1828afac186e89aa41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sinoussi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:08:08 np0005593232 podman[328147]: 2026-01-23 10:08:08.02259601 +0000 UTC m=+0.129064909 container attach 14838bdd0c028862f7b3b5b63c617080eab3036d35f88c1828afac186e89aa41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 05:08:08 np0005593232 serene_sinoussi[328179]: 167 167
Jan 23 05:08:08 np0005593232 systemd[1]: libpod-14838bdd0c028862f7b3b5b63c617080eab3036d35f88c1828afac186e89aa41.scope: Deactivated successfully.
Jan 23 05:08:08 np0005593232 podman[328147]: 2026-01-23 10:08:08.024804492 +0000 UTC m=+0.131273301 container died 14838bdd0c028862f7b3b5b63c617080eab3036d35f88c1828afac186e89aa41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sinoussi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:08:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ace1da1f65edd1a385ca1a2538bc65ec3baa794c2d2ce53e760ada78692436b2-merged.mount: Deactivated successfully.
Jan 23 05:08:08 np0005593232 nova_compute[250269]: 2026-01-23 10:08:08.068 250273 DEBUG oslo_concurrency.processutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/81a8be01-ddd9-4fd2-91a1-886e7f47bfa3/disk.config 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:08:08 np0005593232 nova_compute[250269]: 2026-01-23 10:08:08.070 250273 INFO nova.virt.libvirt.driver [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Deleting local config drive /var/lib/nova/instances/81a8be01-ddd9-4fd2-91a1-886e7f47bfa3/disk.config because it was imported into RBD.#033[00m
Jan 23 05:08:08 np0005593232 podman[328147]: 2026-01-23 10:08:08.072906139 +0000 UTC m=+0.179374958 container remove 14838bdd0c028862f7b3b5b63c617080eab3036d35f88c1828afac186e89aa41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 23 05:08:08 np0005593232 systemd[1]: libpod-conmon-14838bdd0c028862f7b3b5b63c617080eab3036d35f88c1828afac186e89aa41.scope: Deactivated successfully.
Jan 23 05:08:08 np0005593232 kernel: tap8ad4c021-5d: entered promiscuous mode
Jan 23 05:08:08 np0005593232 NetworkManager[49057]: <info>  [1769162888.1182] manager: (tap8ad4c021-5d): new Tun device (/org/freedesktop/NetworkManager/Devices/203)
Jan 23 05:08:08 np0005593232 nova_compute[250269]: 2026-01-23 10:08:08.119 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:08 np0005593232 ovn_controller[151001]: 2026-01-23T10:08:08Z|00407|binding|INFO|Claiming lport 8ad4c021-5d44-41aa-adad-f593da5206c1 for this chassis.
Jan 23 05:08:08 np0005593232 ovn_controller[151001]: 2026-01-23T10:08:08Z|00408|binding|INFO|8ad4c021-5d44-41aa-adad-f593da5206c1: Claiming fa:16:3e:46:50:89 10.100.0.7
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.127 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:50:89 10.100.0.7'], port_security=['fa:16:3e:46:50:89 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '81a8be01-ddd9-4fd2-91a1-886e7f47bfa3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8d9599b4-8855-4310-af02-cdd058438f7d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9dd869ce76e44fc8a82b8bbee1654d33', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf3e0bf9-33c6-483b-a880-c8297a0be71f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=875f4baa-cb85-49ca-8f02-78715d351fdb, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=8ad4c021-5d44-41aa-adad-f593da5206c1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.129 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 8ad4c021-5d44-41aa-adad-f593da5206c1 in datapath 8d9599b4-8855-4310-af02-cdd058438f7d bound to our chassis#033[00m
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.131 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8d9599b4-8855-4310-af02-cdd058438f7d#033[00m
Jan 23 05:08:08 np0005593232 ovn_controller[151001]: 2026-01-23T10:08:08Z|00409|binding|INFO|Setting lport 8ad4c021-5d44-41aa-adad-f593da5206c1 ovn-installed in OVS
Jan 23 05:08:08 np0005593232 ovn_controller[151001]: 2026-01-23T10:08:08Z|00410|binding|INFO|Setting lport 8ad4c021-5d44-41aa-adad-f593da5206c1 up in Southbound
Jan 23 05:08:08 np0005593232 nova_compute[250269]: 2026-01-23 10:08:08.138 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:08 np0005593232 nova_compute[250269]: 2026-01-23 10:08:08.143 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.144 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[85352a0b-e515-491e-9111-1bdc6e78fa7b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.145 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8d9599b4-81 in ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.148 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8d9599b4-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.148 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5a1a1282-359f-417c-a601-ef2e9783f182]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.149 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[88b9b98c-e349-4563-a24c-b801093e278e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:08:08 np0005593232 systemd-udevd[328214]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.163 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[ae7fb5b1-15c0-468d-baaf-4c4fdc7f1068]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:08:08 np0005593232 systemd-machined[215836]: New machine qemu-51-instance-0000007a.
Jan 23 05:08:08 np0005593232 NetworkManager[49057]: <info>  [1769162888.1699] device (tap8ad4c021-5d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:08:08 np0005593232 NetworkManager[49057]: <info>  [1769162888.1705] device (tap8ad4c021-5d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.179 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a3986bd5-228c-4b08-8063-a63f5889e927]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:08:08 np0005593232 systemd[1]: Started Virtual Machine qemu-51-instance-0000007a.
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.211 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[cab76ef0-ebcf-40c7-96da-2f2f9f7ec7ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:08:08 np0005593232 NetworkManager[49057]: <info>  [1769162888.2178] manager: (tap8d9599b4-80): new Veth device (/org/freedesktop/NetworkManager/Devices/204)
Jan 23 05:08:08 np0005593232 systemd-udevd[328217]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.220 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[669bfdca-ab02-4a99-a899-72ae81af22ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.253 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[5278e051-8966-47b3-84b0-c0e7b85de5ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.257 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed5b778-4f82-4306-ae84-14a494854367]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:08:08 np0005593232 podman[328227]: 2026-01-23 10:08:08.264990207 +0000 UTC m=+0.047214692 container create bc6d14cd9b4a5ae7263a0d72b9ba301d1748ff481b0644faf5a78d469bc52256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_thompson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 05:08:08 np0005593232 NetworkManager[49057]: <info>  [1769162888.2823] device (tap8d9599b4-80): carrier: link connected
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.288 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[afc36f59-6640-4b6e-acac-f717d322012a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:08:08 np0005593232 systemd[1]: Started libpod-conmon-bc6d14cd9b4a5ae7263a0d72b9ba301d1748ff481b0644faf5a78d469bc52256.scope.
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.305 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3318de99-91f3-44a6-bbfe-239f914d462c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8d9599b4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:a1:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680142, 'reachable_time': 36224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 328267, 'error': None, 'target': 'ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:08:08 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:08:08 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.322 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2768c83a-d39d-4322-b3a3-588f92770d3e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:a12b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 680142, 'tstamp': 680142}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 328270, 'error': None, 'target': 'ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:08:08 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:08:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d196fbc18dd3c36eca2aa72903adf2b6d1421fae39d92f33b06fd6843caa67a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:08:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d196fbc18dd3c36eca2aa72903adf2b6d1421fae39d92f33b06fd6843caa67a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:08:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d196fbc18dd3c36eca2aa72903adf2b6d1421fae39d92f33b06fd6843caa67a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:08:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d196fbc18dd3c36eca2aa72903adf2b6d1421fae39d92f33b06fd6843caa67a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:08:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d196fbc18dd3c36eca2aa72903adf2b6d1421fae39d92f33b06fd6843caa67a6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.338 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[eb8dd2f6-63b2-4648-adfb-90e4c2ae6ff0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8d9599b4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:a1:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680142, 'reachable_time': 36224, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 328272, 'error': None, 'target': 'ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:08:08 np0005593232 podman[328227]: 2026-01-23 10:08:08.247072308 +0000 UTC m=+0.029296813 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:08:08 np0005593232 podman[328227]: 2026-01-23 10:08:08.34918211 +0000 UTC m=+0.131406595 container init bc6d14cd9b4a5ae7263a0d72b9ba301d1748ff481b0644faf5a78d469bc52256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_thompson, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 05:08:08 np0005593232 podman[328227]: 2026-01-23 10:08:08.35692274 +0000 UTC m=+0.139147215 container start bc6d14cd9b4a5ae7263a0d72b9ba301d1748ff481b0644faf5a78d469bc52256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_thompson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:08:08 np0005593232 podman[328227]: 2026-01-23 10:08:08.360292605 +0000 UTC m=+0.142517330 container attach bc6d14cd9b4a5ae7263a0d72b9ba301d1748ff481b0644faf5a78d469bc52256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.376 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2143d214-2086-49d4-b906-beb42c0fe6e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.431 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c1144f8a-89f9-4e1a-873c-897e5b834b92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.433 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d9599b4-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.433 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.433 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d9599b4-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:08:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:08.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:08 np0005593232 kernel: tap8d9599b4-80: entered promiscuous mode
Jan 23 05:08:08 np0005593232 nova_compute[250269]: 2026-01-23 10:08:08.467 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:08 np0005593232 NetworkManager[49057]: <info>  [1769162888.4745] manager: (tap8d9599b4-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Jan 23 05:08:08 np0005593232 nova_compute[250269]: 2026-01-23 10:08:08.475 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.475 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8d9599b4-80, col_values=(('external_ids', {'iface-id': 'b57bd565-3bb1-4ecc-8df0-a7c439ac84a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:08:08 np0005593232 nova_compute[250269]: 2026-01-23 10:08:08.476 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:08 np0005593232 ovn_controller[151001]: 2026-01-23T10:08:08Z|00411|binding|INFO|Releasing lport b57bd565-3bb1-4ecc-8df0-a7c439ac84a6 from this chassis (sb_readonly=0)
Jan 23 05:08:08 np0005593232 nova_compute[250269]: 2026-01-23 10:08:08.478 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.479 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8d9599b4-8855-4310-af02-cdd058438f7d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8d9599b4-8855-4310-af02-cdd058438f7d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.480 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[474f21d7-4182-43bf-9df1-3ee970cb2142]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.481 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-8d9599b4-8855-4310-af02-cdd058438f7d
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/8d9599b4-8855-4310-af02-cdd058438f7d.pid.haproxy
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 8d9599b4-8855-4310-af02-cdd058438f7d
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:08:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:08.481 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d', 'env', 'PROCESS_TAG=haproxy-8d9599b4-8855-4310-af02-cdd058438f7d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8d9599b4-8855-4310-af02-cdd058438f7d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:08:08 np0005593232 nova_compute[250269]: 2026-01-23 10:08:08.494 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:08 np0005593232 nova_compute[250269]: 2026-01-23 10:08:08.644 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162888.643674, 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:08:08 np0005593232 nova_compute[250269]: 2026-01-23 10:08:08.645 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] VM Started (Lifecycle Event)#033[00m
Jan 23 05:08:08 np0005593232 nova_compute[250269]: 2026-01-23 10:08:08.679 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:08:08 np0005593232 nova_compute[250269]: 2026-01-23 10:08:08.686 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162888.6439705, 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:08:08 np0005593232 nova_compute[250269]: 2026-01-23 10:08:08.687 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:08:08 np0005593232 nova_compute[250269]: 2026-01-23 10:08:08.710 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:08:08 np0005593232 nova_compute[250269]: 2026-01-23 10:08:08.714 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:08:08 np0005593232 nova_compute[250269]: 2026-01-23 10:08:08.757 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:08:08 np0005593232 podman[328348]: 2026-01-23 10:08:08.874443655 +0000 UTC m=+0.056077424 container create 035cb19ba3e9da2312f7f323a158ed1cf4cc61cfa4cc5d8ccace6355b57d3a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 23 05:08:08 np0005593232 systemd[1]: Started libpod-conmon-035cb19ba3e9da2312f7f323a158ed1cf4cc61cfa4cc5d8ccace6355b57d3a95.scope.
Jan 23 05:08:08 np0005593232 podman[328348]: 2026-01-23 10:08:08.838449193 +0000 UTC m=+0.020082972 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:08:08 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:08:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c78cccda20f28183b304bde3bec28fe642b369594233a83435c4e458bcf50642/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:08:08 np0005593232 podman[328348]: 2026-01-23 10:08:08.966772639 +0000 UTC m=+0.148406408 container init 035cb19ba3e9da2312f7f323a158ed1cf4cc61cfa4cc5d8ccace6355b57d3a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 23 05:08:08 np0005593232 podman[328348]: 2026-01-23 10:08:08.971851813 +0000 UTC m=+0.153485572 container start 035cb19ba3e9da2312f7f323a158ed1cf4cc61cfa4cc5d8ccace6355b57d3a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 05:08:08 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[328363]: [NOTICE]   (328367) : New worker (328369) forked
Jan 23 05:08:08 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[328363]: [NOTICE]   (328367) : Loading success.
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.156 250273 DEBUG nova.network.neutron [req-e51cdf84-e551-4f89-b5d3-d2439cbefdf6 req-be7c4ba7-2e0e-40d4-acec-5a0aec4f8524 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Updated VIF entry in instance network info cache for port 8ad4c021-5d44-41aa-adad-f593da5206c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.158 250273 DEBUG nova.network.neutron [req-e51cdf84-e551-4f89-b5d3-d2439cbefdf6 req-be7c4ba7-2e0e-40d4-acec-5a0aec4f8524 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Updating instance_info_cache with network_info: [{"id": "8ad4c021-5d44-41aa-adad-f593da5206c1", "address": "fa:16:3e:46:50:89", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ad4c021-5d", "ovs_interfaceid": "8ad4c021-5d44-41aa-adad-f593da5206c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:08:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2315: 321 pgs: 321 active+clean; 277 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.9 MiB/s wr, 203 op/s
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.184 250273 DEBUG nova.compute.manager [req-ae9eeb4d-5e11-4c3a-a244-5863179df0dc req-980ed607-0204-414c-834f-8de174765490 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Received event network-vif-plugged-8ad4c021-5d44-41aa-adad-f593da5206c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.185 250273 DEBUG oslo_concurrency.lockutils [req-ae9eeb4d-5e11-4c3a-a244-5863179df0dc req-980ed607-0204-414c-834f-8de174765490 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.185 250273 DEBUG oslo_concurrency.lockutils [req-ae9eeb4d-5e11-4c3a-a244-5863179df0dc req-980ed607-0204-414c-834f-8de174765490 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.185 250273 DEBUG oslo_concurrency.lockutils [req-ae9eeb4d-5e11-4c3a-a244-5863179df0dc req-980ed607-0204-414c-834f-8de174765490 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.185 250273 DEBUG nova.compute.manager [req-ae9eeb4d-5e11-4c3a-a244-5863179df0dc req-980ed607-0204-414c-834f-8de174765490 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Processing event network-vif-plugged-8ad4c021-5d44-41aa-adad-f593da5206c1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.186 250273 DEBUG nova.compute.manager [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.189 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162889.189485, 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.190 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.192 250273 DEBUG oslo_concurrency.lockutils [req-e51cdf84-e551-4f89-b5d3-d2439cbefdf6 req-be7c4ba7-2e0e-40d4-acec-5a0aec4f8524 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.193 250273 DEBUG nova.virt.libvirt.driver [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.197 250273 INFO nova.virt.libvirt.driver [-] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Instance spawned successfully.#033[00m
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.198 250273 DEBUG nova.virt.libvirt.driver [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.226 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.230 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:08:09 np0005593232 recursing_thompson[328268]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:08:09 np0005593232 recursing_thompson[328268]: --> relative data size: 1.0
Jan 23 05:08:09 np0005593232 recursing_thompson[328268]: --> All data devices are unavailable
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.240 250273 DEBUG nova.virt.libvirt.driver [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.240 250273 DEBUG nova.virt.libvirt.driver [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.241 250273 DEBUG nova.virt.libvirt.driver [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.241 250273 DEBUG nova.virt.libvirt.driver [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.241 250273 DEBUG nova.virt.libvirt.driver [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.242 250273 DEBUG nova.virt.libvirt.driver [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:08:09 np0005593232 systemd[1]: libpod-bc6d14cd9b4a5ae7263a0d72b9ba301d1748ff481b0644faf5a78d469bc52256.scope: Deactivated successfully.
Jan 23 05:08:09 np0005593232 conmon[328268]: conmon bc6d14cd9b4a5ae7263a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc6d14cd9b4a5ae7263a0d72b9ba301d1748ff481b0644faf5a78d469bc52256.scope/container/memory.events
Jan 23 05:08:09 np0005593232 podman[328227]: 2026-01-23 10:08:09.268492823 +0000 UTC m=+1.050717318 container died bc6d14cd9b4a5ae7263a0d72b9ba301d1748ff481b0644faf5a78d469bc52256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_thompson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.272 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:08:09 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d196fbc18dd3c36eca2aa72903adf2b6d1421fae39d92f33b06fd6843caa67a6-merged.mount: Deactivated successfully.
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.317 250273 INFO nova.compute.manager [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Took 9.68 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.317 250273 DEBUG nova.compute.manager [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:08:09 np0005593232 podman[328227]: 2026-01-23 10:08:09.327154719 +0000 UTC m=+1.109379194 container remove bc6d14cd9b4a5ae7263a0d72b9ba301d1748ff481b0644faf5a78d469bc52256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:08:09 np0005593232 systemd[1]: libpod-conmon-bc6d14cd9b4a5ae7263a0d72b9ba301d1748ff481b0644faf5a78d469bc52256.scope: Deactivated successfully.
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.450 250273 INFO nova.compute.manager [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Took 10.91 seconds to build instance.#033[00m
Jan 23 05:08:09 np0005593232 nova_compute[250269]: 2026-01-23 10:08:09.480 250273 DEBUG oslo_concurrency.lockutils [None req-76083263-1ad2-4b49-9a37-b2d1a85c1174 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:08:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:09.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:09 np0005593232 podman[328543]: 2026-01-23 10:08:09.961073703 +0000 UTC m=+0.050388913 container create fb1a79961d75a60de141cfdb52b57a5aefbddc425fefe490d5797122d7460e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:08:10 np0005593232 systemd[1]: Started libpod-conmon-fb1a79961d75a60de141cfdb52b57a5aefbddc425fefe490d5797122d7460e42.scope.
Jan 23 05:08:10 np0005593232 podman[328543]: 2026-01-23 10:08:09.94058268 +0000 UTC m=+0.029897920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:08:10 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:08:10 np0005593232 podman[328543]: 2026-01-23 10:08:10.062149545 +0000 UTC m=+0.151464765 container init fb1a79961d75a60de141cfdb52b57a5aefbddc425fefe490d5797122d7460e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:08:10 np0005593232 podman[328543]: 2026-01-23 10:08:10.069605637 +0000 UTC m=+0.158920837 container start fb1a79961d75a60de141cfdb52b57a5aefbddc425fefe490d5797122d7460e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 05:08:10 np0005593232 podman[328543]: 2026-01-23 10:08:10.072428377 +0000 UTC m=+0.161743577 container attach fb1a79961d75a60de141cfdb52b57a5aefbddc425fefe490d5797122d7460e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 05:08:10 np0005593232 hopeful_joliot[328559]: 167 167
Jan 23 05:08:10 np0005593232 systemd[1]: libpod-fb1a79961d75a60de141cfdb52b57a5aefbddc425fefe490d5797122d7460e42.scope: Deactivated successfully.
Jan 23 05:08:10 np0005593232 podman[328543]: 2026-01-23 10:08:10.075326949 +0000 UTC m=+0.164642139 container died fb1a79961d75a60de141cfdb52b57a5aefbddc425fefe490d5797122d7460e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_joliot, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 05:08:10 np0005593232 systemd[1]: var-lib-containers-storage-overlay-762f6a12a827e5756324724b9ba731358a4de2b80b1d73e9fa3d567cabb30eac-merged.mount: Deactivated successfully.
Jan 23 05:08:10 np0005593232 podman[328543]: 2026-01-23 10:08:10.11513402 +0000 UTC m=+0.204449210 container remove fb1a79961d75a60de141cfdb52b57a5aefbddc425fefe490d5797122d7460e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_joliot, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:08:10 np0005593232 systemd[1]: libpod-conmon-fb1a79961d75a60de141cfdb52b57a5aefbddc425fefe490d5797122d7460e42.scope: Deactivated successfully.
Jan 23 05:08:10 np0005593232 podman[328583]: 2026-01-23 10:08:10.295084694 +0000 UTC m=+0.040627886 container create 5ecac752da4492934c19500a68c14636c4fec320666fd85ea3282dfc23a8f41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_morse, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 23 05:08:10 np0005593232 systemd[1]: Started libpod-conmon-5ecac752da4492934c19500a68c14636c4fec320666fd85ea3282dfc23a8f41b.scope.
Jan 23 05:08:10 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:08:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3994cb96f58fab636b7c3559377ebdcab7cce3d0aab4f7ab4166f0681948f67d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:08:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3994cb96f58fab636b7c3559377ebdcab7cce3d0aab4f7ab4166f0681948f67d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:08:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3994cb96f58fab636b7c3559377ebdcab7cce3d0aab4f7ab4166f0681948f67d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:08:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3994cb96f58fab636b7c3559377ebdcab7cce3d0aab4f7ab4166f0681948f67d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:08:10 np0005593232 podman[328583]: 2026-01-23 10:08:10.278601445 +0000 UTC m=+0.024144647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:08:10 np0005593232 podman[328583]: 2026-01-23 10:08:10.381273763 +0000 UTC m=+0.126816955 container init 5ecac752da4492934c19500a68c14636c4fec320666fd85ea3282dfc23a8f41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_morse, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 05:08:10 np0005593232 podman[328583]: 2026-01-23 10:08:10.387691215 +0000 UTC m=+0.133234407 container start 5ecac752da4492934c19500a68c14636c4fec320666fd85ea3282dfc23a8f41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_morse, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 05:08:10 np0005593232 podman[328583]: 2026-01-23 10:08:10.392562984 +0000 UTC m=+0.138106196 container attach 5ecac752da4492934c19500a68c14636c4fec320666fd85ea3282dfc23a8f41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_morse, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 05:08:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:10.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:10 np0005593232 nova_compute[250269]: 2026-01-23 10:08:10.937 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2316: 321 pgs: 321 active+clean; 277 MiB data, 966 MiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 141 op/s
Jan 23 05:08:11 np0005593232 priceless_morse[328600]: {
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:    "0": [
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:        {
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:            "devices": [
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:                "/dev/loop3"
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:            ],
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:            "lv_name": "ceph_lv0",
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:            "lv_size": "7511998464",
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:            "name": "ceph_lv0",
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:            "tags": {
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:                "ceph.cluster_name": "ceph",
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:                "ceph.crush_device_class": "",
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:                "ceph.encrypted": "0",
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:                "ceph.osd_id": "0",
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:                "ceph.type": "block",
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:                "ceph.vdo": "0"
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:            },
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:            "type": "block",
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:            "vg_name": "ceph_vg0"
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:        }
Jan 23 05:08:11 np0005593232 priceless_morse[328600]:    ]
Jan 23 05:08:11 np0005593232 priceless_morse[328600]: }
Jan 23 05:08:11 np0005593232 systemd[1]: libpod-5ecac752da4492934c19500a68c14636c4fec320666fd85ea3282dfc23a8f41b.scope: Deactivated successfully.
Jan 23 05:08:11 np0005593232 podman[328583]: 2026-01-23 10:08:11.203415134 +0000 UTC m=+0.948958346 container died 5ecac752da4492934c19500a68c14636c4fec320666fd85ea3282dfc23a8f41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_morse, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:08:11 np0005593232 systemd[1]: var-lib-containers-storage-overlay-3994cb96f58fab636b7c3559377ebdcab7cce3d0aab4f7ab4166f0681948f67d-merged.mount: Deactivated successfully.
Jan 23 05:08:11 np0005593232 podman[328583]: 2026-01-23 10:08:11.264278453 +0000 UTC m=+1.009821645 container remove 5ecac752da4492934c19500a68c14636c4fec320666fd85ea3282dfc23a8f41b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_morse, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:08:11 np0005593232 nova_compute[250269]: 2026-01-23 10:08:11.286 250273 DEBUG nova.compute.manager [req-06975c92-e569-424a-8efb-8ae6bb7dcc77 req-4a0c6cba-8f10-4031-8446-0240caf6cd7d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Received event network-vif-plugged-8ad4c021-5d44-41aa-adad-f593da5206c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:08:11 np0005593232 nova_compute[250269]: 2026-01-23 10:08:11.288 250273 DEBUG oslo_concurrency.lockutils [req-06975c92-e569-424a-8efb-8ae6bb7dcc77 req-4a0c6cba-8f10-4031-8446-0240caf6cd7d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:08:11 np0005593232 nova_compute[250269]: 2026-01-23 10:08:11.289 250273 DEBUG oslo_concurrency.lockutils [req-06975c92-e569-424a-8efb-8ae6bb7dcc77 req-4a0c6cba-8f10-4031-8446-0240caf6cd7d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:08:11 np0005593232 nova_compute[250269]: 2026-01-23 10:08:11.289 250273 DEBUG oslo_concurrency.lockutils [req-06975c92-e569-424a-8efb-8ae6bb7dcc77 req-4a0c6cba-8f10-4031-8446-0240caf6cd7d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:08:11 np0005593232 nova_compute[250269]: 2026-01-23 10:08:11.289 250273 DEBUG nova.compute.manager [req-06975c92-e569-424a-8efb-8ae6bb7dcc77 req-4a0c6cba-8f10-4031-8446-0240caf6cd7d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] No waiting events found dispatching network-vif-plugged-8ad4c021-5d44-41aa-adad-f593da5206c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:08:11 np0005593232 nova_compute[250269]: 2026-01-23 10:08:11.289 250273 WARNING nova.compute.manager [req-06975c92-e569-424a-8efb-8ae6bb7dcc77 req-4a0c6cba-8f10-4031-8446-0240caf6cd7d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Received unexpected event network-vif-plugged-8ad4c021-5d44-41aa-adad-f593da5206c1 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:08:11 np0005593232 systemd[1]: libpod-conmon-5ecac752da4492934c19500a68c14636c4fec320666fd85ea3282dfc23a8f41b.scope: Deactivated successfully.
Jan 23 05:08:11 np0005593232 nova_compute[250269]: 2026-01-23 10:08:11.349 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:11.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:08:11 np0005593232 podman[328761]: 2026-01-23 10:08:11.879034712 +0000 UTC m=+0.039354880 container create 34c4a498ac946ff3b4a7f937a097465012537a99e4c25458d8508d1b5d1fccf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_meitner, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 05:08:11 np0005593232 systemd[1]: Started libpod-conmon-34c4a498ac946ff3b4a7f937a097465012537a99e4c25458d8508d1b5d1fccf3.scope.
Jan 23 05:08:11 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:08:11 np0005593232 podman[328761]: 2026-01-23 10:08:11.948573038 +0000 UTC m=+0.108893346 container init 34c4a498ac946ff3b4a7f937a097465012537a99e4c25458d8508d1b5d1fccf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:08:11 np0005593232 podman[328761]: 2026-01-23 10:08:11.955057822 +0000 UTC m=+0.115377980 container start 34c4a498ac946ff3b4a7f937a097465012537a99e4c25458d8508d1b5d1fccf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_meitner, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:08:11 np0005593232 podman[328761]: 2026-01-23 10:08:11.85996231 +0000 UTC m=+0.020282488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:08:11 np0005593232 podman[328761]: 2026-01-23 10:08:11.958699195 +0000 UTC m=+0.119019383 container attach 34c4a498ac946ff3b4a7f937a097465012537a99e4c25458d8508d1b5d1fccf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_meitner, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 05:08:11 np0005593232 systemd[1]: libpod-34c4a498ac946ff3b4a7f937a097465012537a99e4c25458d8508d1b5d1fccf3.scope: Deactivated successfully.
Jan 23 05:08:11 np0005593232 practical_meitner[328778]: 167 167
Jan 23 05:08:11 np0005593232 conmon[328778]: conmon 34c4a498ac946ff3b4a7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-34c4a498ac946ff3b4a7f937a097465012537a99e4c25458d8508d1b5d1fccf3.scope/container/memory.events
Jan 23 05:08:11 np0005593232 podman[328761]: 2026-01-23 10:08:11.961926697 +0000 UTC m=+0.122246855 container died 34c4a498ac946ff3b4a7f937a097465012537a99e4c25458d8508d1b5d1fccf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 23 05:08:11 np0005593232 systemd[1]: var-lib-containers-storage-overlay-79a4b1ab2adcd07b7b184b4ba03c40d7dd542c900fc9f8be080b09d6ea4d94a5-merged.mount: Deactivated successfully.
Jan 23 05:08:12 np0005593232 podman[328761]: 2026-01-23 10:08:12.001068429 +0000 UTC m=+0.161388587 container remove 34c4a498ac946ff3b4a7f937a097465012537a99e4c25458d8508d1b5d1fccf3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_meitner, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 05:08:12 np0005593232 systemd[1]: libpod-conmon-34c4a498ac946ff3b4a7f937a097465012537a99e4c25458d8508d1b5d1fccf3.scope: Deactivated successfully.
Jan 23 05:08:12 np0005593232 podman[328801]: 2026-01-23 10:08:12.181333712 +0000 UTC m=+0.038658730 container create 535b753d9e01bcfc1a4338b8e469d37f0684dfaf385dbf1d2f7e6854a669fa6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_northcutt, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Jan 23 05:08:12 np0005593232 systemd[1]: Started libpod-conmon-535b753d9e01bcfc1a4338b8e469d37f0684dfaf385dbf1d2f7e6854a669fa6a.scope.
Jan 23 05:08:12 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:08:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d791dd47ed83069212c04fbd7a4893f967d23dae3f79e99eb2c4dfc34e43ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:08:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d791dd47ed83069212c04fbd7a4893f967d23dae3f79e99eb2c4dfc34e43ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:08:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d791dd47ed83069212c04fbd7a4893f967d23dae3f79e99eb2c4dfc34e43ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:08:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d791dd47ed83069212c04fbd7a4893f967d23dae3f79e99eb2c4dfc34e43ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:08:12 np0005593232 podman[328801]: 2026-01-23 10:08:12.165732308 +0000 UTC m=+0.023057336 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:08:12 np0005593232 podman[328801]: 2026-01-23 10:08:12.277662939 +0000 UTC m=+0.134987957 container init 535b753d9e01bcfc1a4338b8e469d37f0684dfaf385dbf1d2f7e6854a669fa6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_northcutt, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Jan 23 05:08:12 np0005593232 podman[328801]: 2026-01-23 10:08:12.284476383 +0000 UTC m=+0.141801401 container start 535b753d9e01bcfc1a4338b8e469d37f0684dfaf385dbf1d2f7e6854a669fa6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_northcutt, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 05:08:12 np0005593232 podman[328801]: 2026-01-23 10:08:12.289387012 +0000 UTC m=+0.146712050 container attach 535b753d9e01bcfc1a4338b8e469d37f0684dfaf385dbf1d2f7e6854a669fa6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_northcutt, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 05:08:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:12.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:13 np0005593232 confident_northcutt[328819]: {
Jan 23 05:08:13 np0005593232 confident_northcutt[328819]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:08:13 np0005593232 confident_northcutt[328819]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:08:13 np0005593232 confident_northcutt[328819]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:08:13 np0005593232 confident_northcutt[328819]:        "osd_id": 0,
Jan 23 05:08:13 np0005593232 confident_northcutt[328819]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:08:13 np0005593232 confident_northcutt[328819]:        "type": "bluestore"
Jan 23 05:08:13 np0005593232 confident_northcutt[328819]:    }
Jan 23 05:08:13 np0005593232 confident_northcutt[328819]: }
Jan 23 05:08:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2317: 321 pgs: 321 active+clean; 328 MiB data, 1011 MiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 7.4 MiB/s wr, 257 op/s
Jan 23 05:08:13 np0005593232 systemd[1]: libpod-535b753d9e01bcfc1a4338b8e469d37f0684dfaf385dbf1d2f7e6854a669fa6a.scope: Deactivated successfully.
Jan 23 05:08:13 np0005593232 podman[328801]: 2026-01-23 10:08:13.168201584 +0000 UTC m=+1.025526602 container died 535b753d9e01bcfc1a4338b8e469d37f0684dfaf385dbf1d2f7e6854a669fa6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_northcutt, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Jan 23 05:08:13 np0005593232 systemd[1]: var-lib-containers-storage-overlay-55d791dd47ed83069212c04fbd7a4893f967d23dae3f79e99eb2c4dfc34e43ac-merged.mount: Deactivated successfully.
Jan 23 05:08:13 np0005593232 podman[328801]: 2026-01-23 10:08:13.221259532 +0000 UTC m=+1.078584560 container remove 535b753d9e01bcfc1a4338b8e469d37f0684dfaf385dbf1d2f7e6854a669fa6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_northcutt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Jan 23 05:08:13 np0005593232 systemd[1]: libpod-conmon-535b753d9e01bcfc1a4338b8e469d37f0684dfaf385dbf1d2f7e6854a669fa6a.scope: Deactivated successfully.
Jan 23 05:08:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:08:13 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:08:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:08:13 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:08:13 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 74411ffa-79fa-44ff-a6e7-9b6935af059a does not exist
Jan 23 05:08:13 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ae2c63ce-7b28-46e6-a9c5-7e37d765305a does not exist
Jan 23 05:08:13 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ac5cdcd6-44d6-4b19-ad16-2a0c463d215a does not exist
Jan 23 05:08:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:13.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:13 np0005593232 nova_compute[250269]: 2026-01-23 10:08:13.490 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:13 np0005593232 nova_compute[250269]: 2026-01-23 10:08:13.645 250273 DEBUG nova.compute.manager [req-87f2c096-0005-4bb6-9dd9-c9346f69e56e req-a9648e4f-2193-472a-b127-77b877c1c67c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Received event network-changed-8ad4c021-5d44-41aa-adad-f593da5206c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:08:13 np0005593232 nova_compute[250269]: 2026-01-23 10:08:13.646 250273 DEBUG nova.compute.manager [req-87f2c096-0005-4bb6-9dd9-c9346f69e56e req-a9648e4f-2193-472a-b127-77b877c1c67c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Refreshing instance network info cache due to event network-changed-8ad4c021-5d44-41aa-adad-f593da5206c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:08:13 np0005593232 nova_compute[250269]: 2026-01-23 10:08:13.647 250273 DEBUG oslo_concurrency.lockutils [req-87f2c096-0005-4bb6-9dd9-c9346f69e56e req-a9648e4f-2193-472a-b127-77b877c1c67c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:08:13 np0005593232 nova_compute[250269]: 2026-01-23 10:08:13.647 250273 DEBUG oslo_concurrency.lockutils [req-87f2c096-0005-4bb6-9dd9-c9346f69e56e req-a9648e4f-2193-472a-b127-77b877c1c67c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:08:13 np0005593232 nova_compute[250269]: 2026-01-23 10:08:13.647 250273 DEBUG nova.network.neutron [req-87f2c096-0005-4bb6-9dd9-c9346f69e56e req-a9648e4f-2193-472a-b127-77b877c1c67c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Refreshing network info cache for port 8ad4c021-5d44-41aa-adad-f593da5206c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:08:14 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:08:14 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:08:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:08:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:14.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:08:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2318: 321 pgs: 321 active+clean; 328 MiB data, 1011 MiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.6 MiB/s wr, 158 op/s
Jan 23 05:08:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:08:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:15.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:08:15 np0005593232 nova_compute[250269]: 2026-01-23 10:08:15.991 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:16 np0005593232 nova_compute[250269]: 2026-01-23 10:08:16.351 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:16.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:08:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2319: 321 pgs: 321 active+clean; 336 MiB data, 1012 MiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 196 op/s
Jan 23 05:08:17 np0005593232 podman[328904]: 2026-01-23 10:08:17.429974414 +0000 UTC m=+0.089403621 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:08:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:17.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:18.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:18 np0005593232 nova_compute[250269]: 2026-01-23 10:08:18.931 250273 DEBUG nova.network.neutron [req-87f2c096-0005-4bb6-9dd9-c9346f69e56e req-a9648e4f-2193-472a-b127-77b877c1c67c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Updated VIF entry in instance network info cache for port 8ad4c021-5d44-41aa-adad-f593da5206c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:08:18 np0005593232 nova_compute[250269]: 2026-01-23 10:08:18.932 250273 DEBUG nova.network.neutron [req-87f2c096-0005-4bb6-9dd9-c9346f69e56e req-a9648e4f-2193-472a-b127-77b877c1c67c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Updating instance_info_cache with network_info: [{"id": "8ad4c021-5d44-41aa-adad-f593da5206c1", "address": "fa:16:3e:46:50:89", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ad4c021-5d", "ovs_interfaceid": "8ad4c021-5d44-41aa-adad-f593da5206c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:08:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2320: 321 pgs: 321 active+clean; 339 MiB data, 1012 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 5.4 MiB/s wr, 262 op/s
Jan 23 05:08:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:19.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:20.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:21 np0005593232 nova_compute[250269]: 2026-01-23 10:08:21.030 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:21 np0005593232 nova_compute[250269]: 2026-01-23 10:08:21.031 250273 DEBUG oslo_concurrency.lockutils [req-87f2c096-0005-4bb6-9dd9-c9346f69e56e req-a9648e4f-2193-472a-b127-77b877c1c67c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:08:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2321: 321 pgs: 321 active+clean; 339 MiB data, 1012 MiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.6 MiB/s wr, 232 op/s
Jan 23 05:08:21 np0005593232 nova_compute[250269]: 2026-01-23 10:08:21.355 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:21 np0005593232 podman[328934]: 2026-01-23 10:08:21.395763673 +0000 UTC m=+0.053110991 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 05:08:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:21.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:08:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:22.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:23 np0005593232 ovn_controller[151001]: 2026-01-23T10:08:23Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:46:50:89 10.100.0.7
Jan 23 05:08:23 np0005593232 ovn_controller[151001]: 2026-01-23T10:08:23Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:46:50:89 10.100.0.7
Jan 23 05:08:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2322: 321 pgs: 321 active+clean; 357 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 5.2 MiB/s wr, 324 op/s
Jan 23 05:08:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:23.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:24.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2323: 321 pgs: 321 active+clean; 357 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.7 MiB/s wr, 208 op/s
Jan 23 05:08:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:25.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:26 np0005593232 nova_compute[250269]: 2026-01-23 10:08:26.064 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:26 np0005593232 nova_compute[250269]: 2026-01-23 10:08:26.359 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:08:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:26.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:08:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:08:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2324: 321 pgs: 321 active+clean; 372 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.9 MiB/s wr, 227 op/s
Jan 23 05:08:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:27.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:27 np0005593232 nova_compute[250269]: 2026-01-23 10:08:27.943 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:08:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:28.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:08:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2325: 321 pgs: 321 active+clean; 398 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.2 MiB/s wr, 251 op/s
Jan 23 05:08:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:08:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:29.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:08:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:30.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:31 np0005593232 nova_compute[250269]: 2026-01-23 10:08:31.068 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2326: 321 pgs: 321 active+clean; 398 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.2 MiB/s wr, 173 op/s
Jan 23 05:08:31 np0005593232 nova_compute[250269]: 2026-01-23 10:08:31.362 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:31.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:08:32 np0005593232 nova_compute[250269]: 2026-01-23 10:08:32.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:08:32 np0005593232 nova_compute[250269]: 2026-01-23 10:08:32.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:08:32 np0005593232 nova_compute[250269]: 2026-01-23 10:08:32.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 23 05:08:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:08:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:32.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:08:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2327: 321 pgs: 321 active+clean; 359 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 221 op/s
Jan 23 05:08:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:33.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:08:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:34.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:08:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2328: 321 pgs: 321 active+clean; 359 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 523 KiB/s rd, 2.7 MiB/s wr, 130 op/s
Jan 23 05:08:35 np0005593232 nova_compute[250269]: 2026-01-23 10:08:35.393 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:08:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:35.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:36 np0005593232 nova_compute[250269]: 2026-01-23 10:08:36.117 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:36 np0005593232 nova_compute[250269]: 2026-01-23 10:08:36.363 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:36.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:08:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2329: 321 pgs: 321 active+clean; 359 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 523 KiB/s rd, 2.7 MiB/s wr, 130 op/s
Jan 23 05:08:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:08:37
Jan 23 05:08:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:08:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:08:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['vms', 'backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'volumes']
Jan 23 05:08:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:08:37 np0005593232 nova_compute[250269]: 2026-01-23 10:08:37.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:08:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:37.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:08:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:08:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:08:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:08:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:08:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:08:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:08:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:08:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:08:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:08:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:08:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:08:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:08:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:08:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:08:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:08:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:38.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:39 np0005593232 nova_compute[250269]: 2026-01-23 10:08:39.118 250273 DEBUG nova.compute.manager [None req-972d74b2-4c61-44de-a9d5-d66282dbfb6c aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:08:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2330: 321 pgs: 321 active+clean; 359 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 425 KiB/s rd, 1.5 MiB/s wr, 111 op/s
Jan 23 05:08:39 np0005593232 nova_compute[250269]: 2026-01-23 10:08:39.189 250273 INFO nova.compute.manager [None req-972d74b2-4c61-44de-a9d5-d66282dbfb6c aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] instance snapshotting#033[00m
Jan 23 05:08:39 np0005593232 nova_compute[250269]: 2026-01-23 10:08:39.191 250273 DEBUG nova.objects.instance [None req-972d74b2-4c61-44de-a9d5-d66282dbfb6c aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'flavor' on Instance uuid 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:08:39 np0005593232 nova_compute[250269]: 2026-01-23 10:08:39.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:08:39 np0005593232 nova_compute[250269]: 2026-01-23 10:08:39.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:08:39 np0005593232 nova_compute[250269]: 2026-01-23 10:08:39.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:08:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:39.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:39 np0005593232 nova_compute[250269]: 2026-01-23 10:08:39.718 250273 INFO nova.virt.libvirt.driver [None req-972d74b2-4c61-44de-a9d5-d66282dbfb6c aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Beginning live snapshot process#033[00m
Jan 23 05:08:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:08:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3004133419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:08:39 np0005593232 nova_compute[250269]: 2026-01-23 10:08:39.895 250273 DEBUG nova.virt.libvirt.imagebackend [None req-972d74b2-4c61-44de-a9d5-d66282dbfb6c aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] No parent info for 84c0ef19-7f67-4bd3-95d8-507c3e0942ed; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 23 05:08:40 np0005593232 nova_compute[250269]: 2026-01-23 10:08:40.193 250273 DEBUG nova.storage.rbd_utils [None req-972d74b2-4c61-44de-a9d5-d66282dbfb6c aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] creating snapshot(bdaceac74e9c470493dccfe4fe000d58) on rbd image(81a8be01-ddd9-4fd2-91a1-886e7f47bfa3_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 05:08:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:08:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:40.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:08:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Jan 23 05:08:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Jan 23 05:08:40 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Jan 23 05:08:40 np0005593232 nova_compute[250269]: 2026-01-23 10:08:40.634 250273 DEBUG nova.storage.rbd_utils [None req-972d74b2-4c61-44de-a9d5-d66282dbfb6c aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] cloning vms/81a8be01-ddd9-4fd2-91a1-886e7f47bfa3_disk@bdaceac74e9c470493dccfe4fe000d58 to images/c895afcd-4576-403b-b7a1-afa282118258 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 23 05:08:40 np0005593232 nova_compute[250269]: 2026-01-23 10:08:40.772 250273 DEBUG nova.storage.rbd_utils [None req-972d74b2-4c61-44de-a9d5-d66282dbfb6c aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] flattening images/c895afcd-4576-403b-b7a1-afa282118258 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 23 05:08:41 np0005593232 nova_compute[250269]: 2026-01-23 10:08:41.158 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:41 np0005593232 nova_compute[250269]: 2026-01-23 10:08:41.171 250273 DEBUG nova.storage.rbd_utils [None req-972d74b2-4c61-44de-a9d5-d66282dbfb6c aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] removing snapshot(bdaceac74e9c470493dccfe4fe000d58) on rbd image(81a8be01-ddd9-4fd2-91a1-886e7f47bfa3_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 23 05:08:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2332: 321 pgs: 321 active+clean; 359 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 123 KiB/s rd, 144 KiB/s wr, 58 op/s
Jan 23 05:08:41 np0005593232 nova_compute[250269]: 2026-01-23 10:08:41.365 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:08:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:41.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:08:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Jan 23 05:08:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Jan 23 05:08:41 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Jan 23 05:08:41 np0005593232 nova_compute[250269]: 2026-01-23 10:08:41.627 250273 DEBUG nova.storage.rbd_utils [None req-972d74b2-4c61-44de-a9d5-d66282dbfb6c aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] creating snapshot(snap) on rbd image(c895afcd-4576-403b-b7a1-afa282118258) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 05:08:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:08:42 np0005593232 nova_compute[250269]: 2026-01-23 10:08:42.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:08:42 np0005593232 nova_compute[250269]: 2026-01-23 10:08:42.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:08:42 np0005593232 nova_compute[250269]: 2026-01-23 10:08:42.294 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:08:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:08:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:42.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:08:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Jan 23 05:08:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Jan 23 05:08:42 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Jan 23 05:08:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:42.620 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:08:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:42.622 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:08:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:42.623 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:08:42 np0005593232 nova_compute[250269]: 2026-01-23 10:08:42.655 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-6e5889e0-b5b3-442c-b7dd-0434b1da7c96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:08:42 np0005593232 nova_compute[250269]: 2026-01-23 10:08:42.655 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-6e5889e0-b5b3-442c-b7dd-0434b1da7c96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:08:42 np0005593232 nova_compute[250269]: 2026-01-23 10:08:42.656 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:08:42 np0005593232 nova_compute[250269]: 2026-01-23 10:08:42.656 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6e5889e0-b5b3-442c-b7dd-0434b1da7c96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:08:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:08:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2607303914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:08:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2335: 321 pgs: 321 active+clean; 359 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 207 op/s
Jan 23 05:08:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:43.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:44 np0005593232 nova_compute[250269]: 2026-01-23 10:08:44.392 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:44.392 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:08:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:44.395 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:08:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:44.397 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:08:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:08:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:44.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:08:44 np0005593232 ovn_controller[151001]: 2026-01-23T10:08:44Z|00412|binding|INFO|Releasing lport b57bd565-3bb1-4ecc-8df0-a7c439ac84a6 from this chassis (sb_readonly=0)
Jan 23 05:08:44 np0005593232 ovn_controller[151001]: 2026-01-23T10:08:44Z|00413|binding|INFO|Releasing lport c75eef02-aabe-4477-9239-97f7fb86cd02 from this chassis (sb_readonly=0)
Jan 23 05:08:44 np0005593232 nova_compute[250269]: 2026-01-23 10:08:44.817 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2336: 321 pgs: 321 active+clean; 359 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 206 op/s
Jan 23 05:08:45 np0005593232 nova_compute[250269]: 2026-01-23 10:08:45.315 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Updating instance_info_cache with network_info: [{"id": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "address": "fa:16:3e:53:dd:ea", "network": {"id": "93735878-f62d-4a5f-96df-bf97f85d787a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-259751152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "924f976bcbb74ec195730b68eebe1f2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d14633-c4", "ovs_interfaceid": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:08:45 np0005593232 nova_compute[250269]: 2026-01-23 10:08:45.375 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-6e5889e0-b5b3-442c-b7dd-0434b1da7c96" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:08:45 np0005593232 nova_compute[250269]: 2026-01-23 10:08:45.375 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:08:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:45.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:46 np0005593232 nova_compute[250269]: 2026-01-23 10:08:46.095 250273 INFO nova.virt.libvirt.driver [None req-972d74b2-4c61-44de-a9d5-d66282dbfb6c aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Snapshot image upload complete#033[00m
Jan 23 05:08:46 np0005593232 nova_compute[250269]: 2026-01-23 10:08:46.096 250273 INFO nova.compute.manager [None req-972d74b2-4c61-44de-a9d5-d66282dbfb6c aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Took 6.86 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 23 05:08:46 np0005593232 nova_compute[250269]: 2026-01-23 10:08:46.162 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:46 np0005593232 nova_compute[250269]: 2026-01-23 10:08:46.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:08:46 np0005593232 nova_compute[250269]: 2026-01-23 10:08:46.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:08:46 np0005593232 nova_compute[250269]: 2026-01-23 10:08:46.323 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:08:46 np0005593232 nova_compute[250269]: 2026-01-23 10:08:46.323 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:08:46 np0005593232 nova_compute[250269]: 2026-01-23 10:08:46.323 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:08:46 np0005593232 nova_compute[250269]: 2026-01-23 10:08:46.324 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:08:46 np0005593232 nova_compute[250269]: 2026-01-23 10:08:46.324 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:08:46 np0005593232 nova_compute[250269]: 2026-01-23 10:08:46.368 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:46.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:08:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:08:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/568618751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:08:46 np0005593232 nova_compute[250269]: 2026-01-23 10:08:46.820 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:08:46 np0005593232 nova_compute[250269]: 2026-01-23 10:08:46.956 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:08:46 np0005593232 nova_compute[250269]: 2026-01-23 10:08:46.957 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:08:46 np0005593232 nova_compute[250269]: 2026-01-23 10:08:46.958 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000077 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:08:46 np0005593232 nova_compute[250269]: 2026-01-23 10:08:46.963 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:08:46 np0005593232 nova_compute[250269]: 2026-01-23 10:08:46.963 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:08:46 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:08:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:08:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:08:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:08:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006513142971803462 of space, bias 1.0, pg target 1.9539428915410386 quantized to 32 (current 32)
Jan 23 05:08:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:08:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Jan 23 05:08:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:08:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:08:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:08:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004066738495719337 of space, bias 1.0, pg target 1.215954810220082 quantized to 32 (current 32)
Jan 23 05:08:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:08:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 23 05:08:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:08:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:08:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:08:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 23 05:08:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:08:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 23 05:08:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:08:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:08:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:08:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 23 05:08:47 np0005593232 nova_compute[250269]: 2026-01-23 10:08:47.050 250273 DEBUG nova.compute.manager [None req-972d74b2-4c61-44de-a9d5-d66282dbfb6c aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Jan 23 05:08:47 np0005593232 nova_compute[250269]: 2026-01-23 10:08:47.157 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:08:47 np0005593232 nova_compute[250269]: 2026-01-23 10:08:47.159 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3982MB free_disk=20.851581573486328GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:08:47 np0005593232 nova_compute[250269]: 2026-01-23 10:08:47.159 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:08:47 np0005593232 nova_compute[250269]: 2026-01-23 10:08:47.160 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:08:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2337: 321 pgs: 321 active+clean; 359 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 7.1 MiB/s wr, 193 op/s
Jan 23 05:08:47 np0005593232 nova_compute[250269]: 2026-01-23 10:08:47.441 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 6e5889e0-b5b3-442c-b7dd-0434b1da7c96 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:08:47 np0005593232 nova_compute[250269]: 2026-01-23 10:08:47.441 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:08:47 np0005593232 nova_compute[250269]: 2026-01-23 10:08:47.442 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:08:47 np0005593232 nova_compute[250269]: 2026-01-23 10:08:47.442 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:08:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:47.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 05:08:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.0 total, 600.0 interval#012Cumulative writes: 11K writes, 52K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1597 writes, 7491 keys, 1597 commit groups, 1.0 writes per commit group, ingest: 10.71 MB, 0.02 MB/s#012Interval WAL: 1597 writes, 1597 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     40.0      1.73              0.25        34    0.051       0      0       0.0       0.0#012  L6      1/0    9.31 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.5     78.8     66.1      4.71              0.98        33    0.143    202K    18K       0.0       0.0#012 Sum      1/0    9.31 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.5     57.7     59.1      6.44              1.24        67    0.096    202K    18K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.9     65.1     63.8      1.36              0.27        14    0.097     55K   3600       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0     78.8     66.1      4.71              0.98        33    0.143    202K    18K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     40.1      1.72              0.25        33    0.052       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.0 total, 600.0 interval#012Flush(GB): cumulative 0.068, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.37 GB write, 0.09 MB/s write, 0.36 GB read, 0.09 MB/s read, 6.4 seconds#012Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 1.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a844231f0#2 capacity: 304.00 MB usage: 40.89 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000381 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2379,39.36 MB,12.946%) FilterBlock(68,576.48 KB,0.185188%) IndexBlock(68,989.44 KB,0.317845%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 23 05:08:47 np0005593232 nova_compute[250269]: 2026-01-23 10:08:47.909 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:08:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:08:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3377184346' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:08:48 np0005593232 nova_compute[250269]: 2026-01-23 10:08:48.397 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:08:48 np0005593232 nova_compute[250269]: 2026-01-23 10:08:48.406 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:08:48 np0005593232 nova_compute[250269]: 2026-01-23 10:08:48.448 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:08:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:08:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:48.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:08:48 np0005593232 podman[329251]: 2026-01-23 10:08:48.488197546 +0000 UTC m=+0.140835763 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 05:08:48 np0005593232 nova_compute[250269]: 2026-01-23 10:08:48.489 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:08:48 np0005593232 nova_compute[250269]: 2026-01-23 10:08:48.490 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.330s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:08:48 np0005593232 nova_compute[250269]: 2026-01-23 10:08:48.567 250273 DEBUG nova.compute.manager [None req-f093e56a-ebe3-4f74-a2b1-18e384789223 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:08:48 np0005593232 nova_compute[250269]: 2026-01-23 10:08:48.624 250273 INFO nova.compute.manager [None req-f093e56a-ebe3-4f74-a2b1-18e384789223 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] instance snapshotting#033[00m
Jan 23 05:08:48 np0005593232 nova_compute[250269]: 2026-01-23 10:08:48.626 250273 DEBUG nova.objects.instance [None req-f093e56a-ebe3-4f74-a2b1-18e384789223 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'flavor' on Instance uuid 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:08:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2338: 321 pgs: 321 active+clean; 359 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 178 op/s
Jan 23 05:08:49 np0005593232 nova_compute[250269]: 2026-01-23 10:08:49.365 250273 INFO nova.virt.libvirt.driver [None req-f093e56a-ebe3-4f74-a2b1-18e384789223 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Beginning live snapshot process#033[00m
Jan 23 05:08:49 np0005593232 nova_compute[250269]: 2026-01-23 10:08:49.539 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:08:49 np0005593232 nova_compute[250269]: 2026-01-23 10:08:49.545 250273 DEBUG nova.virt.libvirt.imagebackend [None req-f093e56a-ebe3-4f74-a2b1-18e384789223 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] No parent info for 84c0ef19-7f67-4bd3-95d8-507c3e0942ed; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 23 05:08:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:08:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:49.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:08:50 np0005593232 nova_compute[250269]: 2026-01-23 10:08:50.133 250273 DEBUG nova.storage.rbd_utils [None req-f093e56a-ebe3-4f74-a2b1-18e384789223 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] creating snapshot(b466bb749a3b49d8a9617512061460cc) on rbd image(81a8be01-ddd9-4fd2-91a1-886e7f47bfa3_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 05:08:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Jan 23 05:08:50 np0005593232 nova_compute[250269]: 2026-01-23 10:08:50.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:08:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Jan 23 05:08:50 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Jan 23 05:08:50 np0005593232 nova_compute[250269]: 2026-01-23 10:08:50.427 250273 DEBUG oslo_concurrency.lockutils [None req-ccb3250d-257c-4ee8-8405-140134885d24 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquiring lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:08:50 np0005593232 nova_compute[250269]: 2026-01-23 10:08:50.427 250273 DEBUG oslo_concurrency.lockutils [None req-ccb3250d-257c-4ee8-8405-140134885d24 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:08:50 np0005593232 nova_compute[250269]: 2026-01-23 10:08:50.442 250273 INFO nova.compute.manager [None req-ccb3250d-257c-4ee8-8405-140134885d24 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Detaching volume 509947fc-76b8-4398-98d1-07609c1b5d35#033[00m
Jan 23 05:08:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:08:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:50.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:08:50 np0005593232 nova_compute[250269]: 2026-01-23 10:08:50.596 250273 DEBUG nova.storage.rbd_utils [None req-f093e56a-ebe3-4f74-a2b1-18e384789223 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] cloning vms/81a8be01-ddd9-4fd2-91a1-886e7f47bfa3_disk@b466bb749a3b49d8a9617512061460cc to images/1751577e-a4ac-4cf5-912a-f31cdff15a23 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 23 05:08:50 np0005593232 nova_compute[250269]: 2026-01-23 10:08:50.698 250273 INFO nova.virt.block_device [None req-ccb3250d-257c-4ee8-8405-140134885d24 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Attempting to driver detach volume 509947fc-76b8-4398-98d1-07609c1b5d35 from mountpoint /dev/vdb#033[00m
Jan 23 05:08:50 np0005593232 nova_compute[250269]: 2026-01-23 10:08:50.707 250273 DEBUG nova.virt.libvirt.driver [None req-ccb3250d-257c-4ee8-8405-140134885d24 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Attempting to detach device vdb from instance 6e5889e0-b5b3-442c-b7dd-0434b1da7c96 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 23 05:08:50 np0005593232 nova_compute[250269]: 2026-01-23 10:08:50.708 250273 DEBUG nova.virt.libvirt.guest [None req-ccb3250d-257c-4ee8-8405-140134885d24 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] detach device xml: <disk type="network" device="disk">
Jan 23 05:08:50 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:08:50 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-509947fc-76b8-4398-98d1-07609c1b5d35">
Jan 23 05:08:50 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:08:50 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:08:50 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:08:50 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:08:50 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 05:08:50 np0005593232 nova_compute[250269]:  <serial>509947fc-76b8-4398-98d1-07609c1b5d35</serial>
Jan 23 05:08:50 np0005593232 nova_compute[250269]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 23 05:08:50 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:08:50 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 05:08:50 np0005593232 nova_compute[250269]: 2026-01-23 10:08:50.717 250273 INFO nova.virt.libvirt.driver [None req-ccb3250d-257c-4ee8-8405-140134885d24 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Successfully detached device vdb from instance 6e5889e0-b5b3-442c-b7dd-0434b1da7c96 from the persistent domain config.#033[00m
Jan 23 05:08:50 np0005593232 nova_compute[250269]: 2026-01-23 10:08:50.718 250273 DEBUG nova.virt.libvirt.driver [None req-ccb3250d-257c-4ee8-8405-140134885d24 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 6e5889e0-b5b3-442c-b7dd-0434b1da7c96 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 23 05:08:50 np0005593232 nova_compute[250269]: 2026-01-23 10:08:50.719 250273 DEBUG nova.virt.libvirt.guest [None req-ccb3250d-257c-4ee8-8405-140134885d24 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] detach device xml: <disk type="network" device="disk">
Jan 23 05:08:50 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:08:50 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-509947fc-76b8-4398-98d1-07609c1b5d35">
Jan 23 05:08:50 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:08:50 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:08:50 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:08:50 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:08:50 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 05:08:50 np0005593232 nova_compute[250269]:  <serial>509947fc-76b8-4398-98d1-07609c1b5d35</serial>
Jan 23 05:08:50 np0005593232 nova_compute[250269]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 23 05:08:50 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:08:50 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 05:08:50 np0005593232 nova_compute[250269]: 2026-01-23 10:08:50.841 250273 DEBUG nova.storage.rbd_utils [None req-f093e56a-ebe3-4f74-a2b1-18e384789223 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] flattening images/1751577e-a4ac-4cf5-912a-f31cdff15a23 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 23 05:08:50 np0005593232 nova_compute[250269]: 2026-01-23 10:08:50.905 250273 DEBUG nova.virt.libvirt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Received event <DeviceRemovedEvent: 1769162930.839664, 6e5889e0-b5b3-442c-b7dd-0434b1da7c96 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 23 05:08:50 np0005593232 nova_compute[250269]: 2026-01-23 10:08:50.906 250273 DEBUG nova.virt.libvirt.driver [None req-ccb3250d-257c-4ee8-8405-140134885d24 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 6e5889e0-b5b3-442c-b7dd-0434b1da7c96 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 23 05:08:50 np0005593232 nova_compute[250269]: 2026-01-23 10:08:50.909 250273 INFO nova.virt.libvirt.driver [None req-ccb3250d-257c-4ee8-8405-140134885d24 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Successfully detached device vdb from instance 6e5889e0-b5b3-442c-b7dd-0434b1da7c96 from the live domain config.#033[00m
Jan 23 05:08:51 np0005593232 nova_compute[250269]: 2026-01-23 10:08:51.166 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2340: 321 pgs: 321 active+clean; 359 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 4.3 KiB/s wr, 21 op/s
Jan 23 05:08:51 np0005593232 nova_compute[250269]: 2026-01-23 10:08:51.218 250273 DEBUG nova.objects.instance [None req-ccb3250d-257c-4ee8-8405-140134885d24 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lazy-loading 'flavor' on Instance uuid 6e5889e0-b5b3-442c-b7dd-0434b1da7c96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:08:51 np0005593232 nova_compute[250269]: 2026-01-23 10:08:51.291 250273 DEBUG nova.storage.rbd_utils [None req-f093e56a-ebe3-4f74-a2b1-18e384789223 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] removing snapshot(b466bb749a3b49d8a9617512061460cc) on rbd image(81a8be01-ddd9-4fd2-91a1-886e7f47bfa3_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 23 05:08:51 np0005593232 nova_compute[250269]: 2026-01-23 10:08:51.322 250273 DEBUG oslo_concurrency.lockutils [None req-ccb3250d-257c-4ee8-8405-140134885d24 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:08:51 np0005593232 nova_compute[250269]: 2026-01-23 10:08:51.370 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Jan 23 05:08:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Jan 23 05:08:51 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Jan 23 05:08:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:51.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:51 np0005593232 nova_compute[250269]: 2026-01-23 10:08:51.589 250273 DEBUG nova.storage.rbd_utils [None req-f093e56a-ebe3-4f74-a2b1-18e384789223 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] creating snapshot(snap) on rbd image(1751577e-a4ac-4cf5-912a-f31cdff15a23) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 05:08:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:08:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Jan 23 05:08:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Jan 23 05:08:51 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.335 250273 DEBUG oslo_concurrency.lockutils [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquiring lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.336 250273 DEBUG oslo_concurrency.lockutils [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.336 250273 DEBUG oslo_concurrency.lockutils [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquiring lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.336 250273 DEBUG oslo_concurrency.lockutils [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.337 250273 DEBUG oslo_concurrency.lockutils [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.338 250273 INFO nova.compute.manager [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Terminating instance#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.340 250273 DEBUG nova.compute.manager [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:08:52 np0005593232 podman[329425]: 2026-01-23 10:08:52.397336197 +0000 UTC m=+0.052646217 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 05:08:52 np0005593232 kernel: tap06d14633-c4 (unregistering): left promiscuous mode
Jan 23 05:08:52 np0005593232 NetworkManager[49057]: <info>  [1769162932.4283] device (tap06d14633-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:08:52 np0005593232 ovn_controller[151001]: 2026-01-23T10:08:52Z|00414|binding|INFO|Releasing lport 06d14633-c41f-49f8-a5ac-9c37e350d1ae from this chassis (sb_readonly=0)
Jan 23 05:08:52 np0005593232 ovn_controller[151001]: 2026-01-23T10:08:52Z|00415|binding|INFO|Setting lport 06d14633-c41f-49f8-a5ac-9c37e350d1ae down in Southbound
Jan 23 05:08:52 np0005593232 ovn_controller[151001]: 2026-01-23T10:08:52Z|00416|binding|INFO|Removing iface tap06d14633-c4 ovn-installed in OVS
Jan 23 05:08:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:52.453 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:dd:ea 10.100.0.5'], port_security=['fa:16:3e:53:dd:ea 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '6e5889e0-b5b3-442c-b7dd-0434b1da7c96', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93735878-f62d-4a5f-96df-bf97f85d787a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '924f976bcbb74ec195730b68eebe1f2a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '30915781-ba6b-4325-9c75-4d72f58120e7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.227'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1f72e5c-e22f-424b-b6ed-0c502ff13aa3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=06d14633-c41f-49f8-a5ac-9c37e350d1ae) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:08:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:52.454 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 06d14633-c41f-49f8-a5ac-9c37e350d1ae in datapath 93735878-f62d-4a5f-96df-bf97f85d787a unbound from our chassis#033[00m
Jan 23 05:08:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:52.456 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 93735878-f62d-4a5f-96df-bf97f85d787a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:08:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:52.458 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e08b66d7-fe38-402a-b8f3-c78525f149ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:08:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:52.458 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a namespace which is not needed anymore#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.461 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.470 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:52.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:52 np0005593232 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000077.scope: Deactivated successfully.
Jan 23 05:08:52 np0005593232 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d00000077.scope: Consumed 17.176s CPU time.
Jan 23 05:08:52 np0005593232 systemd-machined[215836]: Machine qemu-50-instance-00000077 terminated.
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.562 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.568 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.581 250273 INFO nova.virt.libvirt.driver [-] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Instance destroyed successfully.#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.582 250273 DEBUG nova.objects.instance [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lazy-loading 'resources' on Instance uuid 6e5889e0-b5b3-442c-b7dd-0434b1da7c96 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.600 250273 DEBUG nova.virt.libvirt.vif [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:07:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1100313845',display_name='tempest-AttachVolumeNegativeTest-server-1100313845',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1100313845',id=119,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBODZCFafhZGTU8x6piM66+lVcw3K+KK8nP6yLx1MHDrMWs+9viWA6aoR0PFHQNfHFDvg2apwI3HRmvR0Oj6kIYMWeOEyUtGbuCEFROlPgdLi2uevA8Ne8YkCFZBSHl/ptQ==',key_name='tempest-keypair-1293560348',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:07:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='924f976bcbb74ec195730b68eebe1f2a',ramdisk_id='',reservation_id='r-1xj02ksr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-1470050886',owner_user_name='tempest-AttachVolumeNegativeTest-1470050886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:07:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c99d09acd2e849a69846a6ccda1e0bc7',uuid=6e5889e0-b5b3-442c-b7dd-0434b1da7c96,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "address": "fa:16:3e:53:dd:ea", "network": {"id": "93735878-f62d-4a5f-96df-bf97f85d787a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-259751152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "924f976bcbb74ec195730b68eebe1f2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d14633-c4", "ovs_interfaceid": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.600 250273 DEBUG nova.network.os_vif_util [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Converting VIF {"id": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "address": "fa:16:3e:53:dd:ea", "network": {"id": "93735878-f62d-4a5f-96df-bf97f85d787a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-259751152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "924f976bcbb74ec195730b68eebe1f2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap06d14633-c4", "ovs_interfaceid": "06d14633-c41f-49f8-a5ac-9c37e350d1ae", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.602 250273 DEBUG nova.network.os_vif_util [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:dd:ea,bridge_name='br-int',has_traffic_filtering=True,id=06d14633-c41f-49f8-a5ac-9c37e350d1ae,network=Network(93735878-f62d-4a5f-96df-bf97f85d787a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06d14633-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.603 250273 DEBUG os_vif [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:dd:ea,bridge_name='br-int',has_traffic_filtering=True,id=06d14633-c41f-49f8-a5ac-9c37e350d1ae,network=Network(93735878-f62d-4a5f-96df-bf97f85d787a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06d14633-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.606 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.606 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06d14633-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.608 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.609 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.612 250273 INFO os_vif [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:dd:ea,bridge_name='br-int',has_traffic_filtering=True,id=06d14633-c41f-49f8-a5ac-9c37e350d1ae,network=Network(93735878-f62d-4a5f-96df-bf97f85d787a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap06d14633-c4')#033[00m
Jan 23 05:08:52 np0005593232 neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a[327271]: [NOTICE]   (327278) : haproxy version is 2.8.14-c23fe91
Jan 23 05:08:52 np0005593232 neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a[327271]: [NOTICE]   (327278) : path to executable is /usr/sbin/haproxy
Jan 23 05:08:52 np0005593232 neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a[327271]: [WARNING]  (327278) : Exiting Master process...
Jan 23 05:08:52 np0005593232 neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a[327271]: [ALERT]    (327278) : Current worker (327281) exited with code 143 (Terminated)
Jan 23 05:08:52 np0005593232 neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a[327271]: [WARNING]  (327278) : All workers exited. Exiting... (0)
Jan 23 05:08:52 np0005593232 systemd[1]: libpod-4b79f8651e62e0e3723f85e1f308dccf0e4cebccb9d7c61728fb1b203cebaa6e.scope: Deactivated successfully.
Jan 23 05:08:52 np0005593232 podman[329472]: 2026-01-23 10:08:52.632235402 +0000 UTC m=+0.050430024 container died 4b79f8651e62e0e3723f85e1f308dccf0e4cebccb9d7c61728fb1b203cebaa6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 23 05:08:52 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4b79f8651e62e0e3723f85e1f308dccf0e4cebccb9d7c61728fb1b203cebaa6e-userdata-shm.mount: Deactivated successfully.
Jan 23 05:08:52 np0005593232 systemd[1]: var-lib-containers-storage-overlay-877c98b958e76ad23ec03ce0660b6af6f29b18f60c401e3b3c3ae5dcc2c1616f-merged.mount: Deactivated successfully.
Jan 23 05:08:52 np0005593232 podman[329472]: 2026-01-23 10:08:52.713886862 +0000 UTC m=+0.132081484 container cleanup 4b79f8651e62e0e3723f85e1f308dccf0e4cebccb9d7c61728fb1b203cebaa6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:08:52 np0005593232 systemd[1]: libpod-conmon-4b79f8651e62e0e3723f85e1f308dccf0e4cebccb9d7c61728fb1b203cebaa6e.scope: Deactivated successfully.
Jan 23 05:08:52 np0005593232 podman[329527]: 2026-01-23 10:08:52.77329532 +0000 UTC m=+0.039594806 container remove 4b79f8651e62e0e3723f85e1f308dccf0e4cebccb9d7c61728fb1b203cebaa6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 23 05:08:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:52.779 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec280c9-a3b3-4c1e-8e04-c31813657d06]: (4, ('Fri Jan 23 10:08:52 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a (4b79f8651e62e0e3723f85e1f308dccf0e4cebccb9d7c61728fb1b203cebaa6e)\n4b79f8651e62e0e3723f85e1f308dccf0e4cebccb9d7c61728fb1b203cebaa6e\nFri Jan 23 10:08:52 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a (4b79f8651e62e0e3723f85e1f308dccf0e4cebccb9d7c61728fb1b203cebaa6e)\n4b79f8651e62e0e3723f85e1f308dccf0e4cebccb9d7c61728fb1b203cebaa6e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:08:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:52.782 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c801d7cc-1bd4-4c31-8d2c-dbd760e774b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:08:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:52.782 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93735878-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.784 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:52 np0005593232 kernel: tap93735878-f0: left promiscuous mode
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.800 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:52.804 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5d87602c-0273-4137-bd79-53f141383f10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.816 250273 DEBUG nova.compute.manager [req-2316ea90-3bc0-4094-9b0f-26f71efa2dcb req-9a737c89-b3c7-41a8-80f1-eb2d34f74bc0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Received event network-vif-unplugged-06d14633-c41f-49f8-a5ac-9c37e350d1ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.817 250273 DEBUG oslo_concurrency.lockutils [req-2316ea90-3bc0-4094-9b0f-26f71efa2dcb req-9a737c89-b3c7-41a8-80f1-eb2d34f74bc0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.817 250273 DEBUG oslo_concurrency.lockutils [req-2316ea90-3bc0-4094-9b0f-26f71efa2dcb req-9a737c89-b3c7-41a8-80f1-eb2d34f74bc0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.818 250273 DEBUG oslo_concurrency.lockutils [req-2316ea90-3bc0-4094-9b0f-26f71efa2dcb req-9a737c89-b3c7-41a8-80f1-eb2d34f74bc0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.818 250273 DEBUG nova.compute.manager [req-2316ea90-3bc0-4094-9b0f-26f71efa2dcb req-9a737c89-b3c7-41a8-80f1-eb2d34f74bc0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] No waiting events found dispatching network-vif-unplugged-06d14633-c41f-49f8-a5ac-9c37e350d1ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:08:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:52.818 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[693c4ae0-8e68-4af5-b25a-fa6138489396]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:08:52 np0005593232 nova_compute[250269]: 2026-01-23 10:08:52.819 250273 DEBUG nova.compute.manager [req-2316ea90-3bc0-4094-9b0f-26f71efa2dcb req-9a737c89-b3c7-41a8-80f1-eb2d34f74bc0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Received event network-vif-unplugged-06d14633-c41f-49f8-a5ac-9c37e350d1ae for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:08:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:52.819 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e8b91154-9d73-4c25-a83a-c2f070790a12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:08:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:52.835 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ab19852e-a2ac-4b72-b535-533dead01798]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 675638, 'reachable_time': 41098, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329542, 'error': None, 'target': 'ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:08:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:52.838 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:08:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:08:52.838 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[6c29d682-5ab6-4a0e-b95c-c3597905fc0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:08:52 np0005593232 systemd[1]: run-netns-ovnmeta\x2d93735878\x2df62d\x2d4a5f\x2d96df\x2dbf97f85d787a.mount: Deactivated successfully.
Jan 23 05:08:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2343: 321 pgs: 321 active+clean; 438 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 178 op/s
Jan 23 05:08:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:53.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:54.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:54 np0005593232 nova_compute[250269]: 2026-01-23 10:08:54.973 250273 DEBUG nova.compute.manager [req-c2172644-eb9c-4aa4-8a0e-9cd293a99955 req-f879d5ef-9d4c-430a-9142-8bdc8708fa4c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Received event network-vif-plugged-06d14633-c41f-49f8-a5ac-9c37e350d1ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:08:54 np0005593232 nova_compute[250269]: 2026-01-23 10:08:54.973 250273 DEBUG oslo_concurrency.lockutils [req-c2172644-eb9c-4aa4-8a0e-9cd293a99955 req-f879d5ef-9d4c-430a-9142-8bdc8708fa4c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:08:54 np0005593232 nova_compute[250269]: 2026-01-23 10:08:54.974 250273 DEBUG oslo_concurrency.lockutils [req-c2172644-eb9c-4aa4-8a0e-9cd293a99955 req-f879d5ef-9d4c-430a-9142-8bdc8708fa4c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:08:54 np0005593232 nova_compute[250269]: 2026-01-23 10:08:54.974 250273 DEBUG oslo_concurrency.lockutils [req-c2172644-eb9c-4aa4-8a0e-9cd293a99955 req-f879d5ef-9d4c-430a-9142-8bdc8708fa4c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:08:54 np0005593232 nova_compute[250269]: 2026-01-23 10:08:54.974 250273 DEBUG nova.compute.manager [req-c2172644-eb9c-4aa4-8a0e-9cd293a99955 req-f879d5ef-9d4c-430a-9142-8bdc8708fa4c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] No waiting events found dispatching network-vif-plugged-06d14633-c41f-49f8-a5ac-9c37e350d1ae pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:08:54 np0005593232 nova_compute[250269]: 2026-01-23 10:08:54.974 250273 WARNING nova.compute.manager [req-c2172644-eb9c-4aa4-8a0e-9cd293a99955 req-f879d5ef-9d4c-430a-9142-8bdc8708fa4c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Received unexpected event network-vif-plugged-06d14633-c41f-49f8-a5ac-9c37e350d1ae for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:08:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2344: 321 pgs: 321 active+clean; 438 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 153 op/s
Jan 23 05:08:55 np0005593232 nova_compute[250269]: 2026-01-23 10:08:55.523 250273 INFO nova.virt.libvirt.driver [None req-f093e56a-ebe3-4f74-a2b1-18e384789223 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Snapshot image upload complete#033[00m
Jan 23 05:08:55 np0005593232 nova_compute[250269]: 2026-01-23 10:08:55.524 250273 INFO nova.compute.manager [None req-f093e56a-ebe3-4f74-a2b1-18e384789223 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Took 6.87 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 23 05:08:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:55.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:56 np0005593232 nova_compute[250269]: 2026-01-23 10:08:56.276 250273 DEBUG nova.compute.manager [None req-f093e56a-ebe3-4f74-a2b1-18e384789223 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450#033[00m
Jan 23 05:08:56 np0005593232 nova_compute[250269]: 2026-01-23 10:08:56.413 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:56 np0005593232 nova_compute[250269]: 2026-01-23 10:08:56.477 250273 INFO nova.virt.libvirt.driver [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Deleting instance files /var/lib/nova/instances/6e5889e0-b5b3-442c-b7dd-0434b1da7c96_del#033[00m
Jan 23 05:08:56 np0005593232 nova_compute[250269]: 2026-01-23 10:08:56.478 250273 INFO nova.virt.libvirt.driver [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Deletion of /var/lib/nova/instances/6e5889e0-b5b3-442c-b7dd-0434b1da7c96_del complete#033[00m
Jan 23 05:08:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:08:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:56.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:08:56 np0005593232 nova_compute[250269]: 2026-01-23 10:08:56.617 250273 INFO nova.compute.manager [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Took 4.28 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:08:56 np0005593232 nova_compute[250269]: 2026-01-23 10:08:56.618 250273 DEBUG oslo.service.loopingcall [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:08:56 np0005593232 nova_compute[250269]: 2026-01-23 10:08:56.619 250273 DEBUG nova.compute.manager [-] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:08:56 np0005593232 nova_compute[250269]: 2026-01-23 10:08:56.619 250273 DEBUG nova.network.neutron [-] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:08:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:08:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2345: 321 pgs: 321 active+clean; 412 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.9 MiB/s rd, 6.8 MiB/s wr, 156 op/s
Jan 23 05:08:57 np0005593232 nova_compute[250269]: 2026-01-23 10:08:57.238 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:57.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:57 np0005593232 nova_compute[250269]: 2026-01-23 10:08:57.608 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:08:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:08:58.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:58 np0005593232 nova_compute[250269]: 2026-01-23 10:08:58.879 250273 DEBUG nova.compute.manager [None req-0bb44e07-073d-405a-a2a9-e21fe80cd944 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:08:58 np0005593232 nova_compute[250269]: 2026-01-23 10:08:58.945 250273 INFO nova.compute.manager [None req-0bb44e07-073d-405a-a2a9-e21fe80cd944 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] instance snapshotting#033[00m
Jan 23 05:08:58 np0005593232 nova_compute[250269]: 2026-01-23 10:08:58.946 250273 DEBUG nova.objects.instance [None req-0bb44e07-073d-405a-a2a9-e21fe80cd944 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'flavor' on Instance uuid 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:08:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2346: 321 pgs: 321 active+clean; 358 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 175 op/s
Jan 23 05:08:59 np0005593232 nova_compute[250269]: 2026-01-23 10:08:59.419 250273 INFO nova.virt.libvirt.driver [None req-0bb44e07-073d-405a-a2a9-e21fe80cd944 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Beginning live snapshot process#033[00m
Jan 23 05:08:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:08:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:08:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:08:59.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:08:59 np0005593232 nova_compute[250269]: 2026-01-23 10:08:59.616 250273 DEBUG nova.virt.libvirt.imagebackend [None req-0bb44e07-073d-405a-a2a9-e21fe80cd944 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] No parent info for 84c0ef19-7f67-4bd3-95d8-507c3e0942ed; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 23 05:08:59 np0005593232 nova_compute[250269]: 2026-01-23 10:08:59.785 250273 DEBUG nova.network.neutron [-] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:08:59 np0005593232 nova_compute[250269]: 2026-01-23 10:08:59.816 250273 INFO nova.compute.manager [-] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Took 3.20 seconds to deallocate network for instance.#033[00m
Jan 23 05:08:59 np0005593232 nova_compute[250269]: 2026-01-23 10:08:59.940 250273 DEBUG oslo_concurrency.lockutils [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:08:59 np0005593232 nova_compute[250269]: 2026-01-23 10:08:59.941 250273 DEBUG oslo_concurrency.lockutils [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:09:00 np0005593232 nova_compute[250269]: 2026-01-23 10:09:00.016 250273 DEBUG nova.compute.manager [req-b6d72baf-ae96-40e9-96a6-d51f373e9910 req-458fb7be-cf83-484d-86e7-2a6a5f5f972b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Received event network-vif-deleted-06d14633-c41f-49f8-a5ac-9c37e350d1ae external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:09:00 np0005593232 nova_compute[250269]: 2026-01-23 10:09:00.086 250273 DEBUG oslo_concurrency.processutils [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:09:00 np0005593232 nova_compute[250269]: 2026-01-23 10:09:00.167 250273 DEBUG nova.storage.rbd_utils [None req-0bb44e07-073d-405a-a2a9-e21fe80cd944 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] creating snapshot(5d99d5ef3a7a46f4b680c2a98ad840d4) on rbd image(81a8be01-ddd9-4fd2-91a1-886e7f47bfa3_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 05:09:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:09:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:00.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:09:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:09:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/542812977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:09:00 np0005593232 nova_compute[250269]: 2026-01-23 10:09:00.524 250273 DEBUG oslo_concurrency.processutils [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:09:00 np0005593232 nova_compute[250269]: 2026-01-23 10:09:00.530 250273 DEBUG nova.compute.provider_tree [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:09:00 np0005593232 nova_compute[250269]: 2026-01-23 10:09:00.560 250273 DEBUG nova.scheduler.client.report [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:09:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Jan 23 05:09:00 np0005593232 nova_compute[250269]: 2026-01-23 10:09:00.610 250273 DEBUG oslo_concurrency.lockutils [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:09:00 np0005593232 nova_compute[250269]: 2026-01-23 10:09:00.676 250273 INFO nova.scheduler.client.report [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Deleted allocations for instance 6e5889e0-b5b3-442c-b7dd-0434b1da7c96#033[00m
Jan 23 05:09:00 np0005593232 nova_compute[250269]: 2026-01-23 10:09:00.805 250273 DEBUG oslo_concurrency.lockutils [None req-cf9d397a-eb54-4805-941d-7acaf59b2b1b c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "6e5889e0-b5b3-442c-b7dd-0434b1da7c96" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:09:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2347: 321 pgs: 321 active+clean; 358 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.0 MiB/s wr, 112 op/s
Jan 23 05:09:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Jan 23 05:09:01 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Jan 23 05:09:01 np0005593232 nova_compute[250269]: 2026-01-23 10:09:01.325 250273 DEBUG nova.storage.rbd_utils [None req-0bb44e07-073d-405a-a2a9-e21fe80cd944 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] cloning vms/81a8be01-ddd9-4fd2-91a1-886e7f47bfa3_disk@5d99d5ef3a7a46f4b680c2a98ad840d4 to images/6bbc8a2d-8fd2-4651-865b-0da5b89043d8 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 23 05:09:01 np0005593232 nova_compute[250269]: 2026-01-23 10:09:01.414 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:01.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:01 np0005593232 nova_compute[250269]: 2026-01-23 10:09:01.693 250273 DEBUG nova.storage.rbd_utils [None req-0bb44e07-073d-405a-a2a9-e21fe80cd944 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] flattening images/6bbc8a2d-8fd2-4651-865b-0da5b89043d8 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 23 05:09:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:09:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Jan 23 05:09:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Jan 23 05:09:01 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Jan 23 05:09:02 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Jan 23 05:09:02 np0005593232 nova_compute[250269]: 2026-01-23 10:09:02.272 250273 DEBUG nova.storage.rbd_utils [None req-0bb44e07-073d-405a-a2a9-e21fe80cd944 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] removing snapshot(5d99d5ef3a7a46f4b680c2a98ad840d4) on rbd image(81a8be01-ddd9-4fd2-91a1-886e7f47bfa3_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 23 05:09:02 np0005593232 nova_compute[250269]: 2026-01-23 10:09:02.313 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:09:02 np0005593232 nova_compute[250269]: 2026-01-23 10:09:02.314 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 23 05:09:02 np0005593232 nova_compute[250269]: 2026-01-23 10:09:02.335 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 23 05:09:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:09:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:02.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:09:02 np0005593232 nova_compute[250269]: 2026-01-23 10:09:02.628 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Jan 23 05:09:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Jan 23 05:09:02 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Jan 23 05:09:02 np0005593232 nova_compute[250269]: 2026-01-23 10:09:02.978 250273 DEBUG nova.storage.rbd_utils [None req-0bb44e07-073d-405a-a2a9-e21fe80cd944 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] creating snapshot(snap) on rbd image(6bbc8a2d-8fd2-4651-865b-0da5b89043d8) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 23 05:09:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2351: 321 pgs: 321 active+clean; 408 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.6 MiB/s rd, 4.3 MiB/s wr, 139 op/s
Jan 23 05:09:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:03.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Jan 23 05:09:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Jan 23 05:09:04 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Jan 23 05:09:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:04.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:04 np0005593232 nova_compute[250269]: 2026-01-23 10:09:04.981 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:09:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2353: 321 pgs: 321 active+clean; 408 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 9.9 MiB/s rd, 6.4 MiB/s wr, 125 op/s
Jan 23 05:09:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:05.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:06 np0005593232 nova_compute[250269]: 2026-01-23 10:09:06.461 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:09:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:06.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:09:07 np0005593232 nova_compute[250269]: 2026-01-23 10:09:07.057 250273 INFO nova.virt.libvirt.driver [None req-0bb44e07-073d-405a-a2a9-e21fe80cd944 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Snapshot image upload complete
Jan 23 05:09:07 np0005593232 nova_compute[250269]: 2026-01-23 10:09:07.057 250273 INFO nova.compute.manager [None req-0bb44e07-073d-405a-a2a9-e21fe80cd944 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Took 8.09 seconds to snapshot the instance on the hypervisor.
Jan 23 05:09:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2354: 321 pgs: 321 active+clean; 438 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 7.9 MiB/s wr, 143 op/s
Jan 23 05:09:07 np0005593232 nova_compute[250269]: 2026-01-23 10:09:07.578 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769162932.5776412, 6e5889e0-b5b3-442c-b7dd-0434b1da7c96 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 05:09:07 np0005593232 nova_compute[250269]: 2026-01-23 10:09:07.579 250273 INFO nova.compute.manager [-] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] VM Stopped (Lifecycle Event)
Jan 23 05:09:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:09:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:09:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:09:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:09:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:09:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:09:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:07.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:07 np0005593232 nova_compute[250269]: 2026-01-23 10:09:07.619 250273 DEBUG nova.compute.manager [None req-90275508-ab16-4b1d-93cb-f599b447c742 - - - - - -] [instance: 6e5889e0-b5b3-442c-b7dd-0434b1da7c96] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:09:07 np0005593232 nova_compute[250269]: 2026-01-23 10:09:07.665 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:09:07 np0005593232 nova_compute[250269]: 2026-01-23 10:09:07.855 250273 DEBUG nova.compute.manager [None req-0bb44e07-073d-405a-a2a9-e21fe80cd944 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Jan 23 05:09:07 np0005593232 nova_compute[250269]: 2026-01-23 10:09:07.856 250273 DEBUG nova.compute.manager [None req-0bb44e07-073d-405a-a2a9-e21fe80cd944 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458
Jan 23 05:09:07 np0005593232 nova_compute[250269]: 2026-01-23 10:09:07.857 250273 DEBUG nova.compute.manager [None req-0bb44e07-073d-405a-a2a9-e21fe80cd944 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Deleting image c895afcd-4576-403b-b7a1-afa282118258 _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463
Jan 23 05:09:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:08.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2355: 321 pgs: 321 active+clean; 438 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 6.3 MiB/s wr, 139 op/s
Jan 23 05:09:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Jan 23 05:09:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:09.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Jan 23 05:09:09 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Jan 23 05:09:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:10.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2357: 321 pgs: 321 active+clean; 438 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 976 KiB/s rd, 2.6 MiB/s wr, 68 op/s
Jan 23 05:09:11 np0005593232 nova_compute[250269]: 2026-01-23 10:09:11.463 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:09:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:11.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:09:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Jan 23 05:09:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Jan 23 05:09:11 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Jan 23 05:09:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:12.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:12 np0005593232 nova_compute[250269]: 2026-01-23 10:09:12.718 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:09:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2359: 321 pgs: 321 active+clean; 358 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1001 KiB/s rd, 2.7 MiB/s wr, 108 op/s
Jan 23 05:09:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:09:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:13.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:09:14 np0005593232 podman[329940]: 2026-01-23 10:09:14.420137977 +0000 UTC m=+0.064049041 container exec 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 05:09:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:09:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:14.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:09:14 np0005593232 podman[329940]: 2026-01-23 10:09:14.523424012 +0000 UTC m=+0.167335086 container exec_died 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:09:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Jan 23 05:09:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Jan 23 05:09:14 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Jan 23 05:09:15 np0005593232 podman[330095]: 2026-01-23 10:09:15.139147968 +0000 UTC m=+0.058550735 container exec b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 05:09:15 np0005593232 podman[330095]: 2026-01-23 10:09:15.161225085 +0000 UTC m=+0.080627852 container exec_died b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 05:09:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2361: 321 pgs: 321 active+clean; 358 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 20 KiB/s wr, 52 op/s
Jan 23 05:09:15 np0005593232 podman[330159]: 2026-01-23 10:09:15.357235824 +0000 UTC m=+0.045973257 container exec 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, architecture=x86_64, description=keepalived for Ceph, io.buildah.version=1.28.2, version=2.2.4, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived)
Jan 23 05:09:15 np0005593232 podman[330159]: 2026-01-23 10:09:15.373116185 +0000 UTC m=+0.061853588 container exec_died 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=keepalived for Ceph, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, version=2.2.4, distribution-scope=public, release=1793)
Jan 23 05:09:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:09:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:09:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:09:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:09:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:09:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:15.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:09:15 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:09:15 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:09:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:09:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:09:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:09:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:09:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:09:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:09:16 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 52d7ba52-3fa6-4431-88c4-cfc09b890cc7 does not exist
Jan 23 05:09:16 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev f27e6f53-3e22-4911-8398-da120f088a93 does not exist
Jan 23 05:09:16 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev eedbd46f-c2c7-4d4d-b873-7ed640a98c82 does not exist
Jan 23 05:09:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:09:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:09:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:09:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:09:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:09:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:09:16 np0005593232 nova_compute[250269]: 2026-01-23 10:09:16.465 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:09:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:09:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:16.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:09:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:09:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Jan 23 05:09:16 np0005593232 podman[330465]: 2026-01-23 10:09:16.738472103 +0000 UTC m=+0.039980477 container create a9dc44fdadcf6e56f7d902ea2df0bf2140399bca94b0554b5c306111b053f55a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_villani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:09:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Jan 23 05:09:16 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Jan 23 05:09:16 np0005593232 systemd[1]: Started libpod-conmon-a9dc44fdadcf6e56f7d902ea2df0bf2140399bca94b0554b5c306111b053f55a.scope.
Jan 23 05:09:16 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:09:16 np0005593232 podman[330465]: 2026-01-23 10:09:16.717014843 +0000 UTC m=+0.018523247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:09:16 np0005593232 podman[330465]: 2026-01-23 10:09:16.833275117 +0000 UTC m=+0.134783521 container init a9dc44fdadcf6e56f7d902ea2df0bf2140399bca94b0554b5c306111b053f55a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_villani, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:09:16 np0005593232 podman[330465]: 2026-01-23 10:09:16.8421765 +0000 UTC m=+0.143684884 container start a9dc44fdadcf6e56f7d902ea2df0bf2140399bca94b0554b5c306111b053f55a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:09:16 np0005593232 podman[330465]: 2026-01-23 10:09:16.846475232 +0000 UTC m=+0.147983646 container attach a9dc44fdadcf6e56f7d902ea2df0bf2140399bca94b0554b5c306111b053f55a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_villani, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:09:16 np0005593232 systemd[1]: libpod-a9dc44fdadcf6e56f7d902ea2df0bf2140399bca94b0554b5c306111b053f55a.scope: Deactivated successfully.
Jan 23 05:09:16 np0005593232 goofy_villani[330481]: 167 167
Jan 23 05:09:16 np0005593232 conmon[330481]: conmon a9dc44fdadcf6e56f7d9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a9dc44fdadcf6e56f7d902ea2df0bf2140399bca94b0554b5c306111b053f55a.scope/container/memory.events
Jan 23 05:09:16 np0005593232 podman[330465]: 2026-01-23 10:09:16.84994137 +0000 UTC m=+0.151449764 container died a9dc44fdadcf6e56f7d902ea2df0bf2140399bca94b0554b5c306111b053f55a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 05:09:16 np0005593232 systemd[1]: var-lib-containers-storage-overlay-012b7c8a5062a5e0f7195075e66bda86d2dfe32ec58a20876c5958213bac74f8-merged.mount: Deactivated successfully.
Jan 23 05:09:16 np0005593232 podman[330465]: 2026-01-23 10:09:16.893900099 +0000 UTC m=+0.195408483 container remove a9dc44fdadcf6e56f7d902ea2df0bf2140399bca94b0554b5c306111b053f55a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_villani, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 23 05:09:16 np0005593232 systemd[1]: libpod-conmon-a9dc44fdadcf6e56f7d902ea2df0bf2140399bca94b0554b5c306111b053f55a.scope: Deactivated successfully.
Jan 23 05:09:17 np0005593232 podman[330506]: 2026-01-23 10:09:17.049674956 +0000 UTC m=+0.037038444 container create 92db1da0306ce71287d483cf87519a41dbaefa3ab2c7abea204a3a90e9dcd95f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:09:17 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:09:17 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:09:17 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:09:17 np0005593232 systemd[1]: Started libpod-conmon-92db1da0306ce71287d483cf87519a41dbaefa3ab2c7abea204a3a90e9dcd95f.scope.
Jan 23 05:09:17 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:09:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c13046778e3779b7900248991477d4344ff05925cf99369fac7255b9584d598/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:09:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c13046778e3779b7900248991477d4344ff05925cf99369fac7255b9584d598/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:09:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c13046778e3779b7900248991477d4344ff05925cf99369fac7255b9584d598/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:09:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c13046778e3779b7900248991477d4344ff05925cf99369fac7255b9584d598/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:09:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c13046778e3779b7900248991477d4344ff05925cf99369fac7255b9584d598/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:09:17 np0005593232 podman[330506]: 2026-01-23 10:09:17.12968795 +0000 UTC m=+0.117051478 container init 92db1da0306ce71287d483cf87519a41dbaefa3ab2c7abea204a3a90e9dcd95f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:09:17 np0005593232 podman[330506]: 2026-01-23 10:09:17.034012951 +0000 UTC m=+0.021376479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:09:17 np0005593232 podman[330506]: 2026-01-23 10:09:17.138686255 +0000 UTC m=+0.126049753 container start 92db1da0306ce71287d483cf87519a41dbaefa3ab2c7abea204a3a90e9dcd95f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dijkstra, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:09:17 np0005593232 podman[330506]: 2026-01-23 10:09:17.14203937 +0000 UTC m=+0.129402898 container attach 92db1da0306ce71287d483cf87519a41dbaefa3ab2c7abea204a3a90e9dcd95f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 05:09:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2363: 321 pgs: 321 active+clean; 333 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 54 KiB/s rd, 172 KiB/s wr, 92 op/s
Jan 23 05:09:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:17 np0005593232 nova_compute[250269]: 2026-01-23 10:09:17.629 250273 DEBUG oslo_concurrency.lockutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquiring lock "d843365b-283f-47da-ba45-e68489a5fbdd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:09:17 np0005593232 nova_compute[250269]: 2026-01-23 10:09:17.629 250273 DEBUG oslo_concurrency.lockutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "d843365b-283f-47da-ba45-e68489a5fbdd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:09:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 05:09:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:17.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 05:09:17 np0005593232 nova_compute[250269]: 2026-01-23 10:09:17.650 250273 DEBUG nova.compute.manager [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:09:17 np0005593232 nova_compute[250269]: 2026-01-23 10:09:17.721 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:17 np0005593232 nova_compute[250269]: 2026-01-23 10:09:17.816 250273 DEBUG oslo_concurrency.lockutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:09:17 np0005593232 nova_compute[250269]: 2026-01-23 10:09:17.816 250273 DEBUG oslo_concurrency.lockutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:09:17 np0005593232 nova_compute[250269]: 2026-01-23 10:09:17.832 250273 DEBUG nova.virt.hardware [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:09:17 np0005593232 nova_compute[250269]: 2026-01-23 10:09:17.833 250273 INFO nova.compute.claims [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:09:17 np0005593232 jolly_dijkstra[330522]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:09:17 np0005593232 jolly_dijkstra[330522]: --> relative data size: 1.0
Jan 23 05:09:17 np0005593232 jolly_dijkstra[330522]: --> All data devices are unavailable
Jan 23 05:09:17 np0005593232 systemd[1]: libpod-92db1da0306ce71287d483cf87519a41dbaefa3ab2c7abea204a3a90e9dcd95f.scope: Deactivated successfully.
Jan 23 05:09:17 np0005593232 podman[330506]: 2026-01-23 10:09:17.972723225 +0000 UTC m=+0.960086733 container died 92db1da0306ce71287d483cf87519a41dbaefa3ab2c7abea204a3a90e9dcd95f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:09:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6c13046778e3779b7900248991477d4344ff05925cf99369fac7255b9584d598-merged.mount: Deactivated successfully.
Jan 23 05:09:18 np0005593232 podman[330506]: 2026-01-23 10:09:18.025232527 +0000 UTC m=+1.012596025 container remove 92db1da0306ce71287d483cf87519a41dbaefa3ab2c7abea204a3a90e9dcd95f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 05:09:18 np0005593232 systemd[1]: libpod-conmon-92db1da0306ce71287d483cf87519a41dbaefa3ab2c7abea204a3a90e9dcd95f.scope: Deactivated successfully.
Jan 23 05:09:18 np0005593232 nova_compute[250269]: 2026-01-23 10:09:18.331 250273 DEBUG oslo_concurrency.processutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:09:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:09:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:18.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:09:18 np0005593232 podman[330707]: 2026-01-23 10:09:18.58664784 +0000 UTC m=+0.037770064 container create f1a401115f934d38d33719c378a3895c2977e8242bbee71c8f36cdc4f5d3d9b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 05:09:18 np0005593232 systemd[1]: Started libpod-conmon-f1a401115f934d38d33719c378a3895c2977e8242bbee71c8f36cdc4f5d3d9b6.scope.
Jan 23 05:09:18 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:09:18 np0005593232 podman[330707]: 2026-01-23 10:09:18.658335337 +0000 UTC m=+0.109457581 container init f1a401115f934d38d33719c378a3895c2977e8242bbee71c8f36cdc4f5d3d9b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wing, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:09:18 np0005593232 podman[330707]: 2026-01-23 10:09:18.570722607 +0000 UTC m=+0.021844831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:09:18 np0005593232 podman[330707]: 2026-01-23 10:09:18.669061652 +0000 UTC m=+0.120183876 container start f1a401115f934d38d33719c378a3895c2977e8242bbee71c8f36cdc4f5d3d9b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 05:09:18 np0005593232 podman[330707]: 2026-01-23 10:09:18.672098008 +0000 UTC m=+0.123220242 container attach f1a401115f934d38d33719c378a3895c2977e8242bbee71c8f36cdc4f5d3d9b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:09:18 np0005593232 relaxed_wing[330724]: 167 167
Jan 23 05:09:18 np0005593232 systemd[1]: libpod-f1a401115f934d38d33719c378a3895c2977e8242bbee71c8f36cdc4f5d3d9b6.scope: Deactivated successfully.
Jan 23 05:09:18 np0005593232 podman[330707]: 2026-01-23 10:09:18.676011249 +0000 UTC m=+0.127133473 container died f1a401115f934d38d33719c378a3895c2977e8242bbee71c8f36cdc4f5d3d9b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wing, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:09:18 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4acb6c99c077f2013ef56ffedadb84c14ec3d512133ad4afd9293038d82fc1e5-merged.mount: Deactivated successfully.
Jan 23 05:09:18 np0005593232 podman[330707]: 2026-01-23 10:09:18.718081305 +0000 UTC m=+0.169203529 container remove f1a401115f934d38d33719c378a3895c2977e8242bbee71c8f36cdc4f5d3d9b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_wing, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:09:18 np0005593232 systemd[1]: libpod-conmon-f1a401115f934d38d33719c378a3895c2977e8242bbee71c8f36cdc4f5d3d9b6.scope: Deactivated successfully.
Jan 23 05:09:18 np0005593232 podman[330721]: 2026-01-23 10:09:18.742064466 +0000 UTC m=+0.116733188 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 23 05:09:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:09:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2046914220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:09:18 np0005593232 nova_compute[250269]: 2026-01-23 10:09:18.774 250273 DEBUG oslo_concurrency.processutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:09:18 np0005593232 nova_compute[250269]: 2026-01-23 10:09:18.782 250273 DEBUG nova.compute.provider_tree [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:09:18 np0005593232 nova_compute[250269]: 2026-01-23 10:09:18.809 250273 DEBUG nova.scheduler.client.report [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:09:18 np0005593232 nova_compute[250269]: 2026-01-23 10:09:18.848 250273 DEBUG oslo_concurrency.lockutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:09:18 np0005593232 nova_compute[250269]: 2026-01-23 10:09:18.849 250273 DEBUG nova.compute.manager [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:09:18 np0005593232 podman[330777]: 2026-01-23 10:09:18.878459341 +0000 UTC m=+0.037210478 container create bf3f50d172e22a991f1c3ca524b9513700838e085352bcb41bd34b20731d79c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_snyder, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 23 05:09:18 np0005593232 systemd[1]: Started libpod-conmon-bf3f50d172e22a991f1c3ca524b9513700838e085352bcb41bd34b20731d79c1.scope.
Jan 23 05:09:18 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:09:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c00d469759aa886c155c01cf02ab9acc0a6f7d299bca14d21804eeec9a09c15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:09:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c00d469759aa886c155c01cf02ab9acc0a6f7d299bca14d21804eeec9a09c15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:09:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c00d469759aa886c155c01cf02ab9acc0a6f7d299bca14d21804eeec9a09c15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:09:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c00d469759aa886c155c01cf02ab9acc0a6f7d299bca14d21804eeec9a09c15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:09:18 np0005593232 podman[330777]: 2026-01-23 10:09:18.949019786 +0000 UTC m=+0.107770953 container init bf3f50d172e22a991f1c3ca524b9513700838e085352bcb41bd34b20731d79c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 05:09:18 np0005593232 podman[330777]: 2026-01-23 10:09:18.955446059 +0000 UTC m=+0.114197196 container start bf3f50d172e22a991f1c3ca524b9513700838e085352bcb41bd34b20731d79c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_snyder, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 05:09:18 np0005593232 podman[330777]: 2026-01-23 10:09:18.958793564 +0000 UTC m=+0.117544701 container attach bf3f50d172e22a991f1c3ca524b9513700838e085352bcb41bd34b20731d79c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_snyder, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 05:09:18 np0005593232 podman[330777]: 2026-01-23 10:09:18.863272909 +0000 UTC m=+0.022024066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:09:18 np0005593232 nova_compute[250269]: 2026-01-23 10:09:18.985 250273 DEBUG nova.compute.manager [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:09:18 np0005593232 nova_compute[250269]: 2026-01-23 10:09:18.986 250273 DEBUG nova.network.neutron [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:09:19 np0005593232 nova_compute[250269]: 2026-01-23 10:09:19.014 250273 INFO nova.virt.libvirt.driver [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:09:19 np0005593232 nova_compute[250269]: 2026-01-23 10:09:19.071 250273 DEBUG nova.compute.manager [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:09:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2364: 321 pgs: 321 active+clean; 325 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 90 KiB/s rd, 2.9 MiB/s wr, 133 op/s
Jan 23 05:09:19 np0005593232 nova_compute[250269]: 2026-01-23 10:09:19.216 250273 DEBUG nova.compute.manager [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 05:09:19 np0005593232 nova_compute[250269]: 2026-01-23 10:09:19.217 250273 DEBUG nova.virt.libvirt.driver [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 05:09:19 np0005593232 nova_compute[250269]: 2026-01-23 10:09:19.218 250273 INFO nova.virt.libvirt.driver [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Creating image(s)
Jan 23 05:09:19 np0005593232 nova_compute[250269]: 2026-01-23 10:09:19.246 250273 DEBUG nova.storage.rbd_utils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] rbd image d843365b-283f-47da-ba45-e68489a5fbdd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:09:19 np0005593232 nova_compute[250269]: 2026-01-23 10:09:19.279 250273 DEBUG nova.storage.rbd_utils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] rbd image d843365b-283f-47da-ba45-e68489a5fbdd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:09:19 np0005593232 nova_compute[250269]: 2026-01-23 10:09:19.308 250273 DEBUG nova.storage.rbd_utils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] rbd image d843365b-283f-47da-ba45-e68489a5fbdd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:09:19 np0005593232 nova_compute[250269]: 2026-01-23 10:09:19.312 250273 DEBUG oslo_concurrency.processutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:09:19 np0005593232 nova_compute[250269]: 2026-01-23 10:09:19.344 250273 DEBUG nova.policy [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'c99d09acd2e849a69846a6ccda1e0bc7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '924f976bcbb74ec195730b68eebe1f2a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 23 05:09:19 np0005593232 nova_compute[250269]: 2026-01-23 10:09:19.382 250273 DEBUG oslo_concurrency.processutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:09:19 np0005593232 nova_compute[250269]: 2026-01-23 10:09:19.383 250273 DEBUG oslo_concurrency.lockutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:09:19 np0005593232 nova_compute[250269]: 2026-01-23 10:09:19.383 250273 DEBUG oslo_concurrency.lockutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:09:19 np0005593232 nova_compute[250269]: 2026-01-23 10:09:19.384 250273 DEBUG oslo_concurrency.lockutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:09:19 np0005593232 nova_compute[250269]: 2026-01-23 10:09:19.409 250273 DEBUG nova.storage.rbd_utils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] rbd image d843365b-283f-47da-ba45-e68489a5fbdd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:09:19 np0005593232 nova_compute[250269]: 2026-01-23 10:09:19.414 250273 DEBUG oslo_concurrency.processutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 d843365b-283f-47da-ba45-e68489a5fbdd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:09:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:09:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:19.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:09:19 np0005593232 nova_compute[250269]: 2026-01-23 10:09:19.689 250273 DEBUG oslo_concurrency.processutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 d843365b-283f-47da-ba45-e68489a5fbdd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:09:19 np0005593232 practical_snyder[330795]: {
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:    "0": [
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:        {
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:            "devices": [
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:                "/dev/loop3"
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:            ],
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:            "lv_name": "ceph_lv0",
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:            "lv_size": "7511998464",
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:            "name": "ceph_lv0",
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:            "tags": {
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:                "ceph.cluster_name": "ceph",
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:                "ceph.crush_device_class": "",
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:                "ceph.encrypted": "0",
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:                "ceph.osd_id": "0",
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:                "ceph.type": "block",
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:                "ceph.vdo": "0"
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:            },
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:            "type": "block",
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:            "vg_name": "ceph_vg0"
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:        }
Jan 23 05:09:19 np0005593232 practical_snyder[330795]:    ]
Jan 23 05:09:19 np0005593232 practical_snyder[330795]: }
Jan 23 05:09:19 np0005593232 nova_compute[250269]: 2026-01-23 10:09:19.754 250273 DEBUG nova.storage.rbd_utils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] resizing rbd image d843365b-283f-47da-ba45-e68489a5fbdd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 23 05:09:19 np0005593232 systemd[1]: libpod-bf3f50d172e22a991f1c3ca524b9513700838e085352bcb41bd34b20731d79c1.scope: Deactivated successfully.
Jan 23 05:09:19 np0005593232 podman[330950]: 2026-01-23 10:09:19.813436349 +0000 UTC m=+0.025224338 container died bf3f50d172e22a991f1c3ca524b9513700838e085352bcb41bd34b20731d79c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 05:09:19 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6c00d469759aa886c155c01cf02ab9acc0a6f7d299bca14d21804eeec9a09c15-merged.mount: Deactivated successfully.
Jan 23 05:09:19 np0005593232 nova_compute[250269]: 2026-01-23 10:09:19.850 250273 DEBUG nova.objects.instance [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lazy-loading 'migration_context' on Instance uuid d843365b-283f-47da-ba45-e68489a5fbdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 05:09:19 np0005593232 podman[330950]: 2026-01-23 10:09:19.864327435 +0000 UTC m=+0.076115394 container remove bf3f50d172e22a991f1c3ca524b9513700838e085352bcb41bd34b20731d79c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 05:09:19 np0005593232 systemd[1]: libpod-conmon-bf3f50d172e22a991f1c3ca524b9513700838e085352bcb41bd34b20731d79c1.scope: Deactivated successfully.
Jan 23 05:09:19 np0005593232 nova_compute[250269]: 2026-01-23 10:09:19.870 250273 DEBUG nova.virt.libvirt.driver [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 05:09:19 np0005593232 nova_compute[250269]: 2026-01-23 10:09:19.870 250273 DEBUG nova.virt.libvirt.driver [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Ensure instance console log exists: /var/lib/nova/instances/d843365b-283f-47da-ba45-e68489a5fbdd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 05:09:19 np0005593232 nova_compute[250269]: 2026-01-23 10:09:19.871 250273 DEBUG oslo_concurrency.lockutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:09:19 np0005593232 nova_compute[250269]: 2026-01-23 10:09:19.871 250273 DEBUG oslo_concurrency.lockutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:09:19 np0005593232 nova_compute[250269]: 2026-01-23 10:09:19.871 250273 DEBUG oslo_concurrency.lockutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:09:20 np0005593232 podman[331127]: 2026-01-23 10:09:20.451750967 +0000 UTC m=+0.038031192 container create aa0f3d95d39a877572c03364b386838c85ebe05e8efacebdf458ad202ffe7366 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_raman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:09:20 np0005593232 systemd[1]: Started libpod-conmon-aa0f3d95d39a877572c03364b386838c85ebe05e8efacebdf458ad202ffe7366.scope.
Jan 23 05:09:20 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:09:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:20.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:20 np0005593232 podman[331127]: 2026-01-23 10:09:20.517034332 +0000 UTC m=+0.103314577 container init aa0f3d95d39a877572c03364b386838c85ebe05e8efacebdf458ad202ffe7366 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_raman, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 05:09:20 np0005593232 podman[331127]: 2026-01-23 10:09:20.522344763 +0000 UTC m=+0.108624988 container start aa0f3d95d39a877572c03364b386838c85ebe05e8efacebdf458ad202ffe7366 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_raman, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 05:09:20 np0005593232 zen_raman[331143]: 167 167
Jan 23 05:09:20 np0005593232 podman[331127]: 2026-01-23 10:09:20.52609607 +0000 UTC m=+0.112376295 container attach aa0f3d95d39a877572c03364b386838c85ebe05e8efacebdf458ad202ffe7366 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_raman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 05:09:20 np0005593232 podman[331127]: 2026-01-23 10:09:20.52682768 +0000 UTC m=+0.113107905 container died aa0f3d95d39a877572c03364b386838c85ebe05e8efacebdf458ad202ffe7366 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 05:09:20 np0005593232 systemd[1]: libpod-aa0f3d95d39a877572c03364b386838c85ebe05e8efacebdf458ad202ffe7366.scope: Deactivated successfully.
Jan 23 05:09:20 np0005593232 podman[331127]: 2026-01-23 10:09:20.434908698 +0000 UTC m=+0.021188943 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:09:20 np0005593232 systemd[1]: var-lib-containers-storage-overlay-30d7b4d7bb210816b374fc471e2f12a8aa971f3e6960d587623aceb9e1b570fd-merged.mount: Deactivated successfully.
Jan 23 05:09:20 np0005593232 podman[331127]: 2026-01-23 10:09:20.559007075 +0000 UTC m=+0.145287300 container remove aa0f3d95d39a877572c03364b386838c85ebe05e8efacebdf458ad202ffe7366 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_raman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:09:20 np0005593232 systemd[1]: libpod-conmon-aa0f3d95d39a877572c03364b386838c85ebe05e8efacebdf458ad202ffe7366.scope: Deactivated successfully.
Jan 23 05:09:20 np0005593232 podman[331167]: 2026-01-23 10:09:20.720576126 +0000 UTC m=+0.042014985 container create 4cc8c7895173ef3a4543c031f7922704c701426528d4a6cd37910f2a04bafeac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 05:09:20 np0005593232 systemd[1]: Started libpod-conmon-4cc8c7895173ef3a4543c031f7922704c701426528d4a6cd37910f2a04bafeac.scope.
Jan 23 05:09:20 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:09:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a2d85db1de9e9e9467f80dd112ccbfa384a0e95bbc772dd96bd646dc6e6e20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:09:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a2d85db1de9e9e9467f80dd112ccbfa384a0e95bbc772dd96bd646dc6e6e20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:09:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a2d85db1de9e9e9467f80dd112ccbfa384a0e95bbc772dd96bd646dc6e6e20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:09:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a2d85db1de9e9e9467f80dd112ccbfa384a0e95bbc772dd96bd646dc6e6e20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:09:20 np0005593232 podman[331167]: 2026-01-23 10:09:20.789529695 +0000 UTC m=+0.110968564 container init 4cc8c7895173ef3a4543c031f7922704c701426528d4a6cd37910f2a04bafeac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_margulis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:09:20 np0005593232 podman[331167]: 2026-01-23 10:09:20.704385386 +0000 UTC m=+0.025824265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:09:20 np0005593232 podman[331167]: 2026-01-23 10:09:20.799881779 +0000 UTC m=+0.121320638 container start 4cc8c7895173ef3a4543c031f7922704c701426528d4a6cd37910f2a04bafeac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_margulis, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 05:09:20 np0005593232 podman[331167]: 2026-01-23 10:09:20.802756811 +0000 UTC m=+0.124195690 container attach 4cc8c7895173ef3a4543c031f7922704c701426528d4a6cd37910f2a04bafeac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:09:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2365: 321 pgs: 321 active+clean; 325 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 64 KiB/s rd, 2.7 MiB/s wr, 94 op/s
Jan 23 05:09:21 np0005593232 nova_compute[250269]: 2026-01-23 10:09:21.467 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:09:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:21.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:21 np0005593232 peaceful_margulis[331183]: {
Jan 23 05:09:21 np0005593232 peaceful_margulis[331183]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:09:21 np0005593232 peaceful_margulis[331183]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:09:21 np0005593232 peaceful_margulis[331183]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:09:21 np0005593232 peaceful_margulis[331183]:        "osd_id": 0,
Jan 23 05:09:21 np0005593232 peaceful_margulis[331183]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:09:21 np0005593232 peaceful_margulis[331183]:        "type": "bluestore"
Jan 23 05:09:21 np0005593232 peaceful_margulis[331183]:    }
Jan 23 05:09:21 np0005593232 peaceful_margulis[331183]: }
Jan 23 05:09:21 np0005593232 systemd[1]: libpod-4cc8c7895173ef3a4543c031f7922704c701426528d4a6cd37910f2a04bafeac.scope: Deactivated successfully.
Jan 23 05:09:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:09:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Jan 23 05:09:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Jan 23 05:09:21 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Jan 23 05:09:21 np0005593232 podman[331204]: 2026-01-23 10:09:21.750726798 +0000 UTC m=+0.029674514 container died 4cc8c7895173ef3a4543c031f7922704c701426528d4a6cd37910f2a04bafeac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:09:21 np0005593232 systemd[1]: var-lib-containers-storage-overlay-67a2d85db1de9e9e9467f80dd112ccbfa384a0e95bbc772dd96bd646dc6e6e20-merged.mount: Deactivated successfully.
Jan 23 05:09:21 np0005593232 podman[331204]: 2026-01-23 10:09:21.79758528 +0000 UTC m=+0.076532966 container remove 4cc8c7895173ef3a4543c031f7922704c701426528d4a6cd37910f2a04bafeac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:09:21 np0005593232 systemd[1]: libpod-conmon-4cc8c7895173ef3a4543c031f7922704c701426528d4a6cd37910f2a04bafeac.scope: Deactivated successfully.
Jan 23 05:09:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:09:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:09:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:09:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:09:21 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a6b02523-6aec-4e01-b507-0ba3c5ebf20f does not exist
Jan 23 05:09:21 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d5cf832f-c1eb-4df0-8deb-ffb5b1acef8a does not exist
Jan 23 05:09:21 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev e1ccd204-7e07-4a9e-8981-aaff0a41e67b does not exist
Jan 23 05:09:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:22.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:22 np0005593232 nova_compute[250269]: 2026-01-23 10:09:22.605 250273 DEBUG nova.network.neutron [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Successfully created port: cadc4646-467c-4840-8c7d-bfa5246d88f8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:09:22 np0005593232 nova_compute[250269]: 2026-01-23 10:09:22.724 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:09:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:09:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2367: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 97 KiB/s rd, 5.3 MiB/s wr, 150 op/s
Jan 23 05:09:23 np0005593232 podman[331271]: 2026-01-23 10:09:23.395928216 +0000 UTC m=+0.057990349 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 23 05:09:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:23.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:09:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:24.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:09:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2368: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 77 KiB/s rd, 4.9 MiB/s wr, 113 op/s
Jan 23 05:09:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:25.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:26 np0005593232 nova_compute[250269]: 2026-01-23 10:09:26.516 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:26.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:09:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Jan 23 05:09:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Jan 23 05:09:26 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Jan 23 05:09:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2370: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 2.7 MiB/s wr, 55 op/s
Jan 23 05:09:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:09:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:27.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:09:27 np0005593232 nova_compute[250269]: 2026-01-23 10:09:27.727 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:28.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2371: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 2.7 MiB/s wr, 55 op/s
Jan 23 05:09:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:29.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:29 np0005593232 nova_compute[250269]: 2026-01-23 10:09:29.740 250273 DEBUG nova.network.neutron [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Successfully updated port: cadc4646-467c-4840-8c7d-bfa5246d88f8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:09:30 np0005593232 nova_compute[250269]: 2026-01-23 10:09:30.124 250273 DEBUG oslo_concurrency.lockutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquiring lock "refresh_cache-d843365b-283f-47da-ba45-e68489a5fbdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:09:30 np0005593232 nova_compute[250269]: 2026-01-23 10:09:30.124 250273 DEBUG oslo_concurrency.lockutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquired lock "refresh_cache-d843365b-283f-47da-ba45-e68489a5fbdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:09:30 np0005593232 nova_compute[250269]: 2026-01-23 10:09:30.125 250273 DEBUG nova.network.neutron [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:09:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:09:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:30.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:09:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2372: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.6 MiB/s wr, 37 op/s
Jan 23 05:09:31 np0005593232 nova_compute[250269]: 2026-01-23 10:09:31.243 250273 DEBUG nova.compute.manager [req-5936ca7b-2e7c-46b5-a9d8-888bb3b02dc7 req-9f3381ee-f657-48d3-a228-ba7a61aed861 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Received event network-changed-cadc4646-467c-4840-8c7d-bfa5246d88f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:09:31 np0005593232 nova_compute[250269]: 2026-01-23 10:09:31.244 250273 DEBUG nova.compute.manager [req-5936ca7b-2e7c-46b5-a9d8-888bb3b02dc7 req-9f3381ee-f657-48d3-a228-ba7a61aed861 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Refreshing instance network info cache due to event network-changed-cadc4646-467c-4840-8c7d-bfa5246d88f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:09:31 np0005593232 nova_compute[250269]: 2026-01-23 10:09:31.244 250273 DEBUG oslo_concurrency.lockutils [req-5936ca7b-2e7c-46b5-a9d8-888bb3b02dc7 req-9f3381ee-f657-48d3-a228-ba7a61aed861 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-d843365b-283f-47da-ba45-e68489a5fbdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:09:31 np0005593232 nova_compute[250269]: 2026-01-23 10:09:31.519 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:31.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:31 np0005593232 nova_compute[250269]: 2026-01-23 10:09:31.670 250273 DEBUG nova.network.neutron [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:09:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:09:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:32.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.535 250273 DEBUG nova.network.neutron [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Updating instance_info_cache with network_info: [{"id": "cadc4646-467c-4840-8c7d-bfa5246d88f8", "address": "fa:16:3e:f8:af:f5", "network": {"id": "93735878-f62d-4a5f-96df-bf97f85d787a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-259751152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "924f976bcbb74ec195730b68eebe1f2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcadc4646-46", "ovs_interfaceid": "cadc4646-467c-4840-8c7d-bfa5246d88f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.559 250273 DEBUG oslo_concurrency.lockutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Releasing lock "refresh_cache-d843365b-283f-47da-ba45-e68489a5fbdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.560 250273 DEBUG nova.compute.manager [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Instance network_info: |[{"id": "cadc4646-467c-4840-8c7d-bfa5246d88f8", "address": "fa:16:3e:f8:af:f5", "network": {"id": "93735878-f62d-4a5f-96df-bf97f85d787a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-259751152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "924f976bcbb74ec195730b68eebe1f2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcadc4646-46", "ovs_interfaceid": "cadc4646-467c-4840-8c7d-bfa5246d88f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.560 250273 DEBUG oslo_concurrency.lockutils [req-5936ca7b-2e7c-46b5-a9d8-888bb3b02dc7 req-9f3381ee-f657-48d3-a228-ba7a61aed861 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-d843365b-283f-47da-ba45-e68489a5fbdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.561 250273 DEBUG nova.network.neutron [req-5936ca7b-2e7c-46b5-a9d8-888bb3b02dc7 req-9f3381ee-f657-48d3-a228-ba7a61aed861 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Refreshing network info cache for port cadc4646-467c-4840-8c7d-bfa5246d88f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.565 250273 DEBUG nova.virt.libvirt.driver [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Start _get_guest_xml network_info=[{"id": "cadc4646-467c-4840-8c7d-bfa5246d88f8", "address": "fa:16:3e:f8:af:f5", "network": {"id": "93735878-f62d-4a5f-96df-bf97f85d787a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-259751152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "924f976bcbb74ec195730b68eebe1f2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcadc4646-46", "ovs_interfaceid": "cadc4646-467c-4840-8c7d-bfa5246d88f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.572 250273 WARNING nova.virt.libvirt.driver [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.582 250273 DEBUG nova.virt.libvirt.host [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.583 250273 DEBUG nova.virt.libvirt.host [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.586 250273 DEBUG nova.virt.libvirt.host [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.586 250273 DEBUG nova.virt.libvirt.host [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.588 250273 DEBUG nova.virt.libvirt.driver [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.588 250273 DEBUG nova.virt.hardware [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.588 250273 DEBUG nova.virt.hardware [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.588 250273 DEBUG nova.virt.hardware [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.589 250273 DEBUG nova.virt.hardware [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.589 250273 DEBUG nova.virt.hardware [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.589 250273 DEBUG nova.virt.hardware [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.589 250273 DEBUG nova.virt.hardware [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.589 250273 DEBUG nova.virt.hardware [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.590 250273 DEBUG nova.virt.hardware [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.590 250273 DEBUG nova.virt.hardware [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.590 250273 DEBUG nova.virt.hardware [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.593 250273 DEBUG oslo_concurrency.processutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:09:32 np0005593232 nova_compute[250269]: 2026-01-23 10:09:32.729 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:09:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2536441880' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.037 250273 DEBUG oslo_concurrency.processutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.065 250273 DEBUG nova.storage.rbd_utils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] rbd image d843365b-283f-47da-ba45-e68489a5fbdd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.070 250273 DEBUG oslo_concurrency.processutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:09:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2373: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 15 KiB/s wr, 86 op/s
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.314 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:09:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:09:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3905353531' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.523 250273 DEBUG oslo_concurrency.processutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.525 250273 DEBUG nova.virt.libvirt.vif [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:09:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1088904730',display_name='tempest-AttachVolumeNegativeTest-server-1088904730',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1088904730',id=126,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLmAI3Z1MxeFb9aT2JDg2S7+wDpcpLLGaswStih3d8lYYLaJ1fS7fJVCYjd05ZJp70nkjBgezzm4tQCK8F7QF65l0K872oxXoH1fvdXzUqr8tBwew9latST+eWgu4bSQXg==',key_name='tempest-keypair-1710633129',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='924f976bcbb74ec195730b68eebe1f2a',ramdisk_id='',reservation_id='r-zxmggslp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1470050886',owner_user_name='tempest-AttachVolumeNegativeTest-1470050886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:09:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c99d09acd2e849a69846a6ccda1e0bc7',uuid=d843365b-283f-47da-ba45-e68489a5fbdd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cadc4646-467c-4840-8c7d-bfa5246d88f8", "address": "fa:16:3e:f8:af:f5", "network": {"id": "93735878-f62d-4a5f-96df-bf97f85d787a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-259751152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "924f976bcbb74ec195730b68eebe1f2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcadc4646-46", "ovs_interfaceid": "cadc4646-467c-4840-8c7d-bfa5246d88f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.526 250273 DEBUG nova.network.os_vif_util [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Converting VIF {"id": "cadc4646-467c-4840-8c7d-bfa5246d88f8", "address": "fa:16:3e:f8:af:f5", "network": {"id": "93735878-f62d-4a5f-96df-bf97f85d787a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-259751152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "924f976bcbb74ec195730b68eebe1f2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcadc4646-46", "ovs_interfaceid": "cadc4646-467c-4840-8c7d-bfa5246d88f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.527 250273 DEBUG nova.network.os_vif_util [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:af:f5,bridge_name='br-int',has_traffic_filtering=True,id=cadc4646-467c-4840-8c7d-bfa5246d88f8,network=Network(93735878-f62d-4a5f-96df-bf97f85d787a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcadc4646-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.529 250273 DEBUG nova.objects.instance [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lazy-loading 'pci_devices' on Instance uuid d843365b-283f-47da-ba45-e68489a5fbdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.544 250273 DEBUG nova.virt.libvirt.driver [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:09:33 np0005593232 nova_compute[250269]:  <uuid>d843365b-283f-47da-ba45-e68489a5fbdd</uuid>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:  <name>instance-0000007e</name>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <nova:name>tempest-AttachVolumeNegativeTest-server-1088904730</nova:name>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:09:32</nova:creationTime>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:09:33 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:        <nova:user uuid="c99d09acd2e849a69846a6ccda1e0bc7">tempest-AttachVolumeNegativeTest-1470050886-project-member</nova:user>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:        <nova:project uuid="924f976bcbb74ec195730b68eebe1f2a">tempest-AttachVolumeNegativeTest-1470050886</nova:project>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:        <nova:port uuid="cadc4646-467c-4840-8c7d-bfa5246d88f8">
Jan 23 05:09:33 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <entry name="serial">d843365b-283f-47da-ba45-e68489a5fbdd</entry>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <entry name="uuid">d843365b-283f-47da-ba45-e68489a5fbdd</entry>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/d843365b-283f-47da-ba45-e68489a5fbdd_disk">
Jan 23 05:09:33 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:09:33 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/d843365b-283f-47da-ba45-e68489a5fbdd_disk.config">
Jan 23 05:09:33 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:09:33 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:f8:af:f5"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <target dev="tapcadc4646-46"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/d843365b-283f-47da-ba45-e68489a5fbdd/console.log" append="off"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:09:33 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:09:33 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:09:33 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:09:33 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.546 250273 DEBUG nova.compute.manager [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Preparing to wait for external event network-vif-plugged-cadc4646-467c-4840-8c7d-bfa5246d88f8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.547 250273 DEBUG oslo_concurrency.lockutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquiring lock "d843365b-283f-47da-ba45-e68489a5fbdd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.547 250273 DEBUG oslo_concurrency.lockutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "d843365b-283f-47da-ba45-e68489a5fbdd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.547 250273 DEBUG oslo_concurrency.lockutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "d843365b-283f-47da-ba45-e68489a5fbdd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.548 250273 DEBUG nova.virt.libvirt.vif [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:09:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1088904730',display_name='tempest-AttachVolumeNegativeTest-server-1088904730',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1088904730',id=126,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLmAI3Z1MxeFb9aT2JDg2S7+wDpcpLLGaswStih3d8lYYLaJ1fS7fJVCYjd05ZJp70nkjBgezzm4tQCK8F7QF65l0K872oxXoH1fvdXzUqr8tBwew9latST+eWgu4bSQXg==',key_name='tempest-keypair-1710633129',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='924f976bcbb74ec195730b68eebe1f2a',ramdisk_id='',reservation_id='r-zxmggslp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeNegativeTest-1470050886',owner_user_name='tempest-AttachVolumeNegativeTest-1470050886-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:09:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c99d09acd2e849a69846a6ccda1e0bc7',uuid=d843365b-283f-47da-ba45-e68489a5fbdd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cadc4646-467c-4840-8c7d-bfa5246d88f8", "address": "fa:16:3e:f8:af:f5", "network": {"id": "93735878-f62d-4a5f-96df-bf97f85d787a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-259751152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "924f976bcbb74ec195730b68eebe1f2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcadc4646-46", "ovs_interfaceid": "cadc4646-467c-4840-8c7d-bfa5246d88f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.548 250273 DEBUG nova.network.os_vif_util [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Converting VIF {"id": "cadc4646-467c-4840-8c7d-bfa5246d88f8", "address": "fa:16:3e:f8:af:f5", "network": {"id": "93735878-f62d-4a5f-96df-bf97f85d787a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-259751152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "924f976bcbb74ec195730b68eebe1f2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcadc4646-46", "ovs_interfaceid": "cadc4646-467c-4840-8c7d-bfa5246d88f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.549 250273 DEBUG nova.network.os_vif_util [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:af:f5,bridge_name='br-int',has_traffic_filtering=True,id=cadc4646-467c-4840-8c7d-bfa5246d88f8,network=Network(93735878-f62d-4a5f-96df-bf97f85d787a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcadc4646-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.549 250273 DEBUG os_vif [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:af:f5,bridge_name='br-int',has_traffic_filtering=True,id=cadc4646-467c-4840-8c7d-bfa5246d88f8,network=Network(93735878-f62d-4a5f-96df-bf97f85d787a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcadc4646-46') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.550 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.551 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.551 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.556 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.557 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcadc4646-46, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.558 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcadc4646-46, col_values=(('external_ids', {'iface-id': 'cadc4646-467c-4840-8c7d-bfa5246d88f8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:af:f5', 'vm-uuid': 'd843365b-283f-47da-ba45-e68489a5fbdd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.590 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:33 np0005593232 NetworkManager[49057]: <info>  [1769162973.5911] manager: (tapcadc4646-46): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/206)
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.595 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.598 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.598 250273 INFO os_vif [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:af:f5,bridge_name='br-int',has_traffic_filtering=True,id=cadc4646-467c-4840-8c7d-bfa5246d88f8,network=Network(93735878-f62d-4a5f-96df-bf97f85d787a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcadc4646-46')#033[00m
Jan 23 05:09:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:09:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:33.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.669 250273 DEBUG nova.virt.libvirt.driver [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.670 250273 DEBUG nova.virt.libvirt.driver [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.670 250273 DEBUG nova.virt.libvirt.driver [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] No VIF found with MAC fa:16:3e:f8:af:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.670 250273 INFO nova.virt.libvirt.driver [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Using config drive#033[00m
Jan 23 05:09:33 np0005593232 nova_compute[250269]: 2026-01-23 10:09:33.702 250273 DEBUG nova.storage.rbd_utils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] rbd image d843365b-283f-47da-ba45-e68489a5fbdd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:09:33 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Jan 23 05:09:33 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:09:33.945584) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:09:33 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Jan 23 05:09:33 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162973945829, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1635, "num_deletes": 258, "total_data_size": 2653141, "memory_usage": 2698976, "flush_reason": "Manual Compaction"}
Jan 23 05:09:33 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Jan 23 05:09:33 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162973975603, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 2600978, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51386, "largest_seqno": 53020, "table_properties": {"data_size": 2593315, "index_size": 4541, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16662, "raw_average_key_size": 20, "raw_value_size": 2577779, "raw_average_value_size": 3218, "num_data_blocks": 198, "num_entries": 801, "num_filter_entries": 801, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769162842, "oldest_key_time": 1769162842, "file_creation_time": 1769162973, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:09:33 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 30079 microseconds, and 14007 cpu microseconds.
Jan 23 05:09:33 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:09:33 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:09:33.975750) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 2600978 bytes OK
Jan 23 05:09:33 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:09:33.975809) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Jan 23 05:09:33 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:09:33.978830) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Jan 23 05:09:33 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:09:33.978902) EVENT_LOG_v1 {"time_micros": 1769162973978893, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:09:33 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:09:33.978930) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:09:33 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 2645986, prev total WAL file size 2645986, number of live WAL files 2.
Jan 23 05:09:33 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:09:33 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:09:33.980964) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Jan 23 05:09:33 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:09:33 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(2540KB)], [116(9531KB)]
Jan 23 05:09:33 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162973981196, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 12361318, "oldest_snapshot_seqno": -1}
Jan 23 05:09:34 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7913 keys, 10457988 bytes, temperature: kUnknown
Jan 23 05:09:34 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162974082600, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 10457988, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10407351, "index_size": 29725, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19845, "raw_key_size": 204991, "raw_average_key_size": 25, "raw_value_size": 10268604, "raw_average_value_size": 1297, "num_data_blocks": 1165, "num_entries": 7913, "num_filter_entries": 7913, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769162973, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:09:34 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:09:34 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:09:34.083080) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10457988 bytes
Jan 23 05:09:34 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:09:34.084655) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 121.7 rd, 103.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 9.3 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(8.8) write-amplify(4.0) OK, records in: 8443, records dropped: 530 output_compression: NoCompression
Jan 23 05:09:34 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:09:34.084687) EVENT_LOG_v1 {"time_micros": 1769162974084673, "job": 70, "event": "compaction_finished", "compaction_time_micros": 101539, "compaction_time_cpu_micros": 46606, "output_level": 6, "num_output_files": 1, "total_output_size": 10457988, "num_input_records": 8443, "num_output_records": 7913, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:09:34 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:09:34 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162974085766, "job": 70, "event": "table_file_deletion", "file_number": 118}
Jan 23 05:09:34 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:09:34 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769162974089345, "job": 70, "event": "table_file_deletion", "file_number": 116}
Jan 23 05:09:34 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:09:33.980409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:09:34 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:09:34.089561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:09:34 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:09:34.089569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:09:34 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:09:34.089571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:09:34 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:09:34.089575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:09:34 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:09:34.089577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:09:34 np0005593232 nova_compute[250269]: 2026-01-23 10:09:34.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:09:34 np0005593232 nova_compute[250269]: 2026-01-23 10:09:34.522 250273 INFO nova.virt.libvirt.driver [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Creating config drive at /var/lib/nova/instances/d843365b-283f-47da-ba45-e68489a5fbdd/disk.config#033[00m
Jan 23 05:09:34 np0005593232 nova_compute[250269]: 2026-01-23 10:09:34.529 250273 DEBUG oslo_concurrency.processutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d843365b-283f-47da-ba45-e68489a5fbdd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw7vtb31b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:09:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:09:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:34.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:09:34 np0005593232 nova_compute[250269]: 2026-01-23 10:09:34.690 250273 DEBUG oslo_concurrency.processutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d843365b-283f-47da-ba45-e68489a5fbdd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw7vtb31b" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:09:34 np0005593232 nova_compute[250269]: 2026-01-23 10:09:34.727 250273 DEBUG nova.storage.rbd_utils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] rbd image d843365b-283f-47da-ba45-e68489a5fbdd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:09:34 np0005593232 nova_compute[250269]: 2026-01-23 10:09:34.732 250273 DEBUG oslo_concurrency.processutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d843365b-283f-47da-ba45-e68489a5fbdd/disk.config d843365b-283f-47da-ba45-e68489a5fbdd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:09:34 np0005593232 nova_compute[250269]: 2026-01-23 10:09:34.975 250273 DEBUG oslo_concurrency.processutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d843365b-283f-47da-ba45-e68489a5fbdd/disk.config d843365b-283f-47da-ba45-e68489a5fbdd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.243s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:09:34 np0005593232 nova_compute[250269]: 2026-01-23 10:09:34.976 250273 INFO nova.virt.libvirt.driver [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Deleting local config drive /var/lib/nova/instances/d843365b-283f-47da-ba45-e68489a5fbdd/disk.config because it was imported into RBD.#033[00m
Jan 23 05:09:35 np0005593232 kernel: tapcadc4646-46: entered promiscuous mode
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.036 250273 DEBUG nova.network.neutron [req-5936ca7b-2e7c-46b5-a9d8-888bb3b02dc7 req-9f3381ee-f657-48d3-a228-ba7a61aed861 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Updated VIF entry in instance network info cache for port cadc4646-467c-4840-8c7d-bfa5246d88f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.037 250273 DEBUG nova.network.neutron [req-5936ca7b-2e7c-46b5-a9d8-888bb3b02dc7 req-9f3381ee-f657-48d3-a228-ba7a61aed861 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Updating instance_info_cache with network_info: [{"id": "cadc4646-467c-4840-8c7d-bfa5246d88f8", "address": "fa:16:3e:f8:af:f5", "network": {"id": "93735878-f62d-4a5f-96df-bf97f85d787a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-259751152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "924f976bcbb74ec195730b68eebe1f2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcadc4646-46", "ovs_interfaceid": "cadc4646-467c-4840-8c7d-bfa5246d88f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:09:35 np0005593232 NetworkManager[49057]: <info>  [1769162975.0383] manager: (tapcadc4646-46): new Tun device (/org/freedesktop/NetworkManager/Devices/207)
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.059 250273 DEBUG oslo_concurrency.lockutils [req-5936ca7b-2e7c-46b5-a9d8-888bb3b02dc7 req-9f3381ee-f657-48d3-a228-ba7a61aed861 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-d843365b-283f-47da-ba45-e68489a5fbdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:09:35 np0005593232 ovn_controller[151001]: 2026-01-23T10:09:35Z|00417|binding|INFO|Claiming lport cadc4646-467c-4840-8c7d-bfa5246d88f8 for this chassis.
Jan 23 05:09:35 np0005593232 ovn_controller[151001]: 2026-01-23T10:09:35Z|00418|binding|INFO|cadc4646-467c-4840-8c7d-bfa5246d88f8: Claiming fa:16:3e:f8:af:f5 10.100.0.11
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.086 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.092 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:af:f5 10.100.0.11'], port_security=['fa:16:3e:f8:af:f5 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd843365b-283f-47da-ba45-e68489a5fbdd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93735878-f62d-4a5f-96df-bf97f85d787a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '924f976bcbb74ec195730b68eebe1f2a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fc795ddc-1837-4675-a407-b23a97d1b748', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1f72e5c-e22f-424b-b6ed-0c502ff13aa3, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=cadc4646-467c-4840-8c7d-bfa5246d88f8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.094 161902 INFO neutron.agent.ovn.metadata.agent [-] Port cadc4646-467c-4840-8c7d-bfa5246d88f8 in datapath 93735878-f62d-4a5f-96df-bf97f85d787a bound to our chassis#033[00m
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.096 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 93735878-f62d-4a5f-96df-bf97f85d787a#033[00m
Jan 23 05:09:35 np0005593232 ovn_controller[151001]: 2026-01-23T10:09:35Z|00419|binding|INFO|Setting lport cadc4646-467c-4840-8c7d-bfa5246d88f8 ovn-installed in OVS
Jan 23 05:09:35 np0005593232 ovn_controller[151001]: 2026-01-23T10:09:35Z|00420|binding|INFO|Setting lport cadc4646-467c-4840-8c7d-bfa5246d88f8 up in Southbound
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.105 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.110 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1e16751b-ccf5-498b-a45e-28abb476da7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.111 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap93735878-f1 in ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.113 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap93735878-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.113 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[55adf075-3a31-46cc-8e7a-ec652b2049fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.114 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d6e5e63f-d56e-46cd-8244-7d56554729a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:35 np0005593232 systemd-machined[215836]: New machine qemu-52-instance-0000007e.
Jan 23 05:09:35 np0005593232 systemd-udevd[331484]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.132 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[0bbd06b0-c85a-4958-935a-1057663729fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:35 np0005593232 systemd[1]: Started Virtual Machine qemu-52-instance-0000007e.
Jan 23 05:09:35 np0005593232 NetworkManager[49057]: <info>  [1769162975.1428] device (tapcadc4646-46): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:09:35 np0005593232 NetworkManager[49057]: <info>  [1769162975.1437] device (tapcadc4646-46): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.159 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7b68b62f-20fe-42b4-b340-58eb86b0619f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.198 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[4306f198-9e95-4b80-b1ba-6d2f3ec21671]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2374: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 15 KiB/s wr, 86 op/s
Jan 23 05:09:35 np0005593232 NetworkManager[49057]: <info>  [1769162975.2072] manager: (tap93735878-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/208)
Jan 23 05:09:35 np0005593232 systemd-udevd[331487]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.206 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d63fb5a7-3d57-4425-8c82-64d55417cb04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.242 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[42ebbef7-bab0-4cf1-b284-3e301f4fe336]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.246 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7c1b52-a68b-4f10-a0f8-14a19f2e5293]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:35 np0005593232 NetworkManager[49057]: <info>  [1769162975.2697] device (tap93735878-f0): carrier: link connected
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.274 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[9680a377-fa0c-43c8-938c-402f3c110b14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.296 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3402b3a2-91d1-4ba7-b745-e4d9a90b4195]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93735878-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c0:41:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688841, 'reachable_time': 39044, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331515, 'error': None, 'target': 'ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.315 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d71e2276-0d3f-4173-bc4f-bffb9be5fa4d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec0:41c8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 688841, 'tstamp': 688841}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331516, 'error': None, 'target': 'ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.333 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[adef43db-f0d3-48b0-a072-be6f49bff1a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap93735878-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c0:41:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 129], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688841, 'reachable_time': 39044, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 331517, 'error': None, 'target': 'ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.376 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a74b421b-9c11-4e41-ab3c-3ed31977791f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.451 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ece79dc9-9de5-40ef-8cfe-ad8d983811c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.454 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93735878-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.455 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.455 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap93735878-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:09:35 np0005593232 NetworkManager[49057]: <info>  [1769162975.4587] manager: (tap93735878-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/209)
Jan 23 05:09:35 np0005593232 kernel: tap93735878-f0: entered promiscuous mode
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.458 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.463 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap93735878-f0, col_values=(('external_ids', {'iface-id': 'c75eef02-aabe-4477-9239-97f7fb86cd02'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:09:35 np0005593232 ovn_controller[151001]: 2026-01-23T10:09:35Z|00421|binding|INFO|Releasing lport c75eef02-aabe-4477-9239-97f7fb86cd02 from this chassis (sb_readonly=0)
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.464 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.482 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.483 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/93735878-f62d-4a5f-96df-bf97f85d787a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/93735878-f62d-4a5f-96df-bf97f85d787a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.485 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3f0d7ef1-24a3-40a0-8caa-b5f1c7345c6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.486 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-93735878-f62d-4a5f-96df-bf97f85d787a
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/93735878-f62d-4a5f-96df-bf97f85d787a.pid.haproxy
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 93735878-f62d-4a5f-96df-bf97f85d787a
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:35.486 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a', 'env', 'PROCESS_TAG=haproxy-93735878-f62d-4a5f-96df-bf97f85d787a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/93735878-f62d-4a5f-96df-bf97f85d787a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.568 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162975.5681093, d843365b-283f-47da-ba45-e68489a5fbdd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.569 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] VM Started (Lifecycle Event)#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.597 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.602 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162975.5690327, d843365b-283f-47da-ba45-e68489a5fbdd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.602 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.607 250273 DEBUG nova.compute.manager [req-8cf3b599-c1b1-4dec-963f-d1e6178ac131 req-cf830fe6-f554-473e-bbc9-991bdc39813e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Received event network-vif-plugged-cadc4646-467c-4840-8c7d-bfa5246d88f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.608 250273 DEBUG oslo_concurrency.lockutils [req-8cf3b599-c1b1-4dec-963f-d1e6178ac131 req-cf830fe6-f554-473e-bbc9-991bdc39813e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "d843365b-283f-47da-ba45-e68489a5fbdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.608 250273 DEBUG oslo_concurrency.lockutils [req-8cf3b599-c1b1-4dec-963f-d1e6178ac131 req-cf830fe6-f554-473e-bbc9-991bdc39813e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d843365b-283f-47da-ba45-e68489a5fbdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.608 250273 DEBUG oslo_concurrency.lockutils [req-8cf3b599-c1b1-4dec-963f-d1e6178ac131 req-cf830fe6-f554-473e-bbc9-991bdc39813e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d843365b-283f-47da-ba45-e68489a5fbdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.608 250273 DEBUG nova.compute.manager [req-8cf3b599-c1b1-4dec-963f-d1e6178ac131 req-cf830fe6-f554-473e-bbc9-991bdc39813e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Processing event network-vif-plugged-cadc4646-467c-4840-8c7d-bfa5246d88f8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.609 250273 DEBUG nova.compute.manager [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.631 250273 DEBUG nova.virt.libvirt.driver [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.634 250273 INFO nova.virt.libvirt.driver [-] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Instance spawned successfully.#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.634 250273 DEBUG nova.virt.libvirt.driver [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.652 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.656 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769162975.61307, d843365b-283f-47da-ba45-e68489a5fbdd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.657 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:09:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:09:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:35.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.681 250273 DEBUG nova.virt.libvirt.driver [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.682 250273 DEBUG nova.virt.libvirt.driver [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.683 250273 DEBUG nova.virt.libvirt.driver [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.683 250273 DEBUG nova.virt.libvirt.driver [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.684 250273 DEBUG nova.virt.libvirt.driver [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.684 250273 DEBUG nova.virt.libvirt.driver [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.688 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.691 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.728 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.772 250273 INFO nova.compute.manager [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Took 16.56 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.772 250273 DEBUG nova.compute.manager [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:09:35 np0005593232 podman[331591]: 2026-01-23 10:09:35.864412586 +0000 UTC m=+0.046012778 container create 1f91cbf0caf0c14572d6b0782877e20d446ca672a88f81e10fc3605bdce28cc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:09:35 np0005593232 systemd[1]: Started libpod-conmon-1f91cbf0caf0c14572d6b0782877e20d446ca672a88f81e10fc3605bdce28cc8.scope.
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.904 250273 INFO nova.compute.manager [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Took 18.17 seconds to build instance.#033[00m
Jan 23 05:09:35 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:09:35 np0005593232 nova_compute[250269]: 2026-01-23 10:09:35.920 250273 DEBUG oslo_concurrency.lockutils [None req-fbbe9772-2f39-4e51-a8c6-eddc5cf74967 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "d843365b-283f-47da-ba45-e68489a5fbdd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:09:35 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a8542d9f19fa12eecd71ca24c29dd1f7ba66297eeef527386e0305daced02fb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:09:35 np0005593232 podman[331591]: 2026-01-23 10:09:35.839247811 +0000 UTC m=+0.020848013 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:09:35 np0005593232 podman[331591]: 2026-01-23 10:09:35.940238231 +0000 UTC m=+0.121838443 container init 1f91cbf0caf0c14572d6b0782877e20d446ca672a88f81e10fc3605bdce28cc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 23 05:09:35 np0005593232 podman[331591]: 2026-01-23 10:09:35.945168261 +0000 UTC m=+0.126768453 container start 1f91cbf0caf0c14572d6b0782877e20d446ca672a88f81e10fc3605bdce28cc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:09:35 np0005593232 neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a[331607]: [NOTICE]   (331611) : New worker (331613) forked
Jan 23 05:09:35 np0005593232 neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a[331607]: [NOTICE]   (331611) : Loading success.
Jan 23 05:09:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:09:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:36.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:09:36 np0005593232 nova_compute[250269]: 2026-01-23 10:09:36.563 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:09:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2375: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 31 KiB/s wr, 92 op/s
Jan 23 05:09:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:09:37
Jan 23 05:09:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:09:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:09:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['volumes', 'vms', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'backups']
Jan 23 05:09:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:09:37 np0005593232 nova_compute[250269]: 2026-01-23 10:09:37.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:09:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:09:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:09:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:09:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:09:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:09:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:09:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:37.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:37 np0005593232 nova_compute[250269]: 2026-01-23 10:09:37.824 250273 DEBUG nova.compute.manager [req-5520a33a-a832-4a6a-8a64-55d4eef64aaa req-438a83cd-0f5c-4b66-b23b-f9712daa89fc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Received event network-vif-plugged-cadc4646-467c-4840-8c7d-bfa5246d88f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:09:37 np0005593232 nova_compute[250269]: 2026-01-23 10:09:37.825 250273 DEBUG oslo_concurrency.lockutils [req-5520a33a-a832-4a6a-8a64-55d4eef64aaa req-438a83cd-0f5c-4b66-b23b-f9712daa89fc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "d843365b-283f-47da-ba45-e68489a5fbdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:09:37 np0005593232 nova_compute[250269]: 2026-01-23 10:09:37.825 250273 DEBUG oslo_concurrency.lockutils [req-5520a33a-a832-4a6a-8a64-55d4eef64aaa req-438a83cd-0f5c-4b66-b23b-f9712daa89fc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d843365b-283f-47da-ba45-e68489a5fbdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:09:37 np0005593232 nova_compute[250269]: 2026-01-23 10:09:37.825 250273 DEBUG oslo_concurrency.lockutils [req-5520a33a-a832-4a6a-8a64-55d4eef64aaa req-438a83cd-0f5c-4b66-b23b-f9712daa89fc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d843365b-283f-47da-ba45-e68489a5fbdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:09:37 np0005593232 nova_compute[250269]: 2026-01-23 10:09:37.825 250273 DEBUG nova.compute.manager [req-5520a33a-a832-4a6a-8a64-55d4eef64aaa req-438a83cd-0f5c-4b66-b23b-f9712daa89fc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] No waiting events found dispatching network-vif-plugged-cadc4646-467c-4840-8c7d-bfa5246d88f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:09:37 np0005593232 nova_compute[250269]: 2026-01-23 10:09:37.826 250273 WARNING nova.compute.manager [req-5520a33a-a832-4a6a-8a64-55d4eef64aaa req-438a83cd-0f5c-4b66-b23b-f9712daa89fc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Received unexpected event network-vif-plugged-cadc4646-467c-4840-8c7d-bfa5246d88f8 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:09:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:09:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:09:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:09:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:09:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:09:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:09:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:09:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:09:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:09:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:09:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:09:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:38.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:09:38 np0005593232 nova_compute[250269]: 2026-01-23 10:09:38.591 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2376: 321 pgs: 321 active+clean; 320 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 953 KiB/s wr, 154 op/s
Jan 23 05:09:39 np0005593232 nova_compute[250269]: 2026-01-23 10:09:39.360 250273 DEBUG nova.compute.manager [req-bad77f43-b581-43de-93bf-9c40393baeed req-dca60a35-4b2a-4047-bd90-dab3d6befcff 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Received event network-changed-cadc4646-467c-4840-8c7d-bfa5246d88f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:09:39 np0005593232 nova_compute[250269]: 2026-01-23 10:09:39.361 250273 DEBUG nova.compute.manager [req-bad77f43-b581-43de-93bf-9c40393baeed req-dca60a35-4b2a-4047-bd90-dab3d6befcff 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Refreshing instance network info cache due to event network-changed-cadc4646-467c-4840-8c7d-bfa5246d88f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:09:39 np0005593232 nova_compute[250269]: 2026-01-23 10:09:39.361 250273 DEBUG oslo_concurrency.lockutils [req-bad77f43-b581-43de-93bf-9c40393baeed req-dca60a35-4b2a-4047-bd90-dab3d6befcff 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-d843365b-283f-47da-ba45-e68489a5fbdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:09:39 np0005593232 nova_compute[250269]: 2026-01-23 10:09:39.362 250273 DEBUG oslo_concurrency.lockutils [req-bad77f43-b581-43de-93bf-9c40393baeed req-dca60a35-4b2a-4047-bd90-dab3d6befcff 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-d843365b-283f-47da-ba45-e68489a5fbdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:09:39 np0005593232 nova_compute[250269]: 2026-01-23 10:09:39.362 250273 DEBUG nova.network.neutron [req-bad77f43-b581-43de-93bf-9c40393baeed req-dca60a35-4b2a-4047-bd90-dab3d6befcff 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Refreshing network info cache for port cadc4646-467c-4840-8c7d-bfa5246d88f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:09:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:39.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:40 np0005593232 nova_compute[250269]: 2026-01-23 10:09:40.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:09:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:40.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2377: 321 pgs: 321 active+clean; 320 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 953 KiB/s wr, 154 op/s
Jan 23 05:09:41 np0005593232 nova_compute[250269]: 2026-01-23 10:09:41.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:09:41 np0005593232 nova_compute[250269]: 2026-01-23 10:09:41.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:09:41 np0005593232 nova_compute[250269]: 2026-01-23 10:09:41.602 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:09:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:41.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:09:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:09:41 np0005593232 nova_compute[250269]: 2026-01-23 10:09:41.836 250273 DEBUG nova.network.neutron [req-bad77f43-b581-43de-93bf-9c40393baeed req-dca60a35-4b2a-4047-bd90-dab3d6befcff 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Updated VIF entry in instance network info cache for port cadc4646-467c-4840-8c7d-bfa5246d88f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:09:41 np0005593232 nova_compute[250269]: 2026-01-23 10:09:41.837 250273 DEBUG nova.network.neutron [req-bad77f43-b581-43de-93bf-9c40393baeed req-dca60a35-4b2a-4047-bd90-dab3d6befcff 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Updating instance_info_cache with network_info: [{"id": "cadc4646-467c-4840-8c7d-bfa5246d88f8", "address": "fa:16:3e:f8:af:f5", "network": {"id": "93735878-f62d-4a5f-96df-bf97f85d787a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-259751152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "924f976bcbb74ec195730b68eebe1f2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcadc4646-46", "ovs_interfaceid": "cadc4646-467c-4840-8c7d-bfa5246d88f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:09:41 np0005593232 nova_compute[250269]: 2026-01-23 10:09:41.861 250273 DEBUG oslo_concurrency.lockutils [req-bad77f43-b581-43de-93bf-9c40393baeed req-dca60a35-4b2a-4047-bd90-dab3d6befcff 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-d843365b-283f-47da-ba45-e68489a5fbdd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:09:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:42.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:42.621 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:09:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:42.622 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:09:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:42.624 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:09:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2378: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 200 op/s
Jan 23 05:09:43 np0005593232 nova_compute[250269]: 2026-01-23 10:09:43.595 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:43.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:44 np0005593232 nova_compute[250269]: 2026-01-23 10:09:44.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:09:44 np0005593232 nova_compute[250269]: 2026-01-23 10:09:44.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:09:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:09:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:44.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:09:45 np0005593232 nova_compute[250269]: 2026-01-23 10:09:45.045 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:09:45 np0005593232 nova_compute[250269]: 2026-01-23 10:09:45.046 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:09:45 np0005593232 nova_compute[250269]: 2026-01-23 10:09:45.047 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:09:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2379: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Jan 23 05:09:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:09:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:45.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:09:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:46.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:46 np0005593232 nova_compute[250269]: 2026-01-23 10:09:46.605 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:09:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:09:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:09:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:09:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:09:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006325754932067746 of space, bias 1.0, pg target 1.897726479620324 quantized to 32 (current 32)
Jan 23 05:09:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:09:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.2722757305043737e-06 of space, bias 1.0, pg target 0.00038041044342080775 quantized to 32 (current 32)
Jan 23 05:09:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:09:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:09:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:09:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 23 05:09:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:09:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 05:09:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:09:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:09:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:09:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 05:09:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:09:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 05:09:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:09:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:09:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:09:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 05:09:47 np0005593232 nova_compute[250269]: 2026-01-23 10:09:47.150 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Updating instance_info_cache with network_info: [{"id": "8ad4c021-5d44-41aa-adad-f593da5206c1", "address": "fa:16:3e:46:50:89", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ad4c021-5d", "ovs_interfaceid": "8ad4c021-5d44-41aa-adad-f593da5206c1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:09:47 np0005593232 nova_compute[250269]: 2026-01-23 10:09:47.171 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:09:47 np0005593232 nova_compute[250269]: 2026-01-23 10:09:47.172 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:09:47 np0005593232 nova_compute[250269]: 2026-01-23 10:09:47.172 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:09:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2380: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 147 op/s
Jan 23 05:09:47 np0005593232 nova_compute[250269]: 2026-01-23 10:09:47.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:09:47 np0005593232 nova_compute[250269]: 2026-01-23 10:09:47.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:09:47 np0005593232 nova_compute[250269]: 2026-01-23 10:09:47.326 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:09:47 np0005593232 nova_compute[250269]: 2026-01-23 10:09:47.326 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:09:47 np0005593232 nova_compute[250269]: 2026-01-23 10:09:47.327 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:09:47 np0005593232 nova_compute[250269]: 2026-01-23 10:09:47.327 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:09:47 np0005593232 nova_compute[250269]: 2026-01-23 10:09:47.327 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:09:47 np0005593232 ovn_controller[151001]: 2026-01-23T10:09:47Z|00422|binding|INFO|Releasing lport b57bd565-3bb1-4ecc-8df0-a7c439ac84a6 from this chassis (sb_readonly=0)
Jan 23 05:09:47 np0005593232 ovn_controller[151001]: 2026-01-23T10:09:47Z|00423|binding|INFO|Releasing lport c75eef02-aabe-4477-9239-97f7fb86cd02 from this chassis (sb_readonly=0)
Jan 23 05:09:47 np0005593232 nova_compute[250269]: 2026-01-23 10:09:47.444 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:47.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:09:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/318647137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:09:47 np0005593232 nova_compute[250269]: 2026-01-23 10:09:47.901 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:09:47 np0005593232 nova_compute[250269]: 2026-01-23 10:09:47.991 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:09:47 np0005593232 nova_compute[250269]: 2026-01-23 10:09:47.991 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:09:47 np0005593232 nova_compute[250269]: 2026-01-23 10:09:47.997 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:09:47 np0005593232 nova_compute[250269]: 2026-01-23 10:09:47.998 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:09:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:48.125 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:09:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:48.126 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:09:48 np0005593232 nova_compute[250269]: 2026-01-23 10:09:48.127 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:48 np0005593232 nova_compute[250269]: 2026-01-23 10:09:48.220 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:09:48 np0005593232 nova_compute[250269]: 2026-01-23 10:09:48.224 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3937MB free_disk=20.855510711669922GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:09:48 np0005593232 nova_compute[250269]: 2026-01-23 10:09:48.225 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:09:48 np0005593232 nova_compute[250269]: 2026-01-23 10:09:48.226 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:09:48 np0005593232 nova_compute[250269]: 2026-01-23 10:09:48.335 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:09:48 np0005593232 nova_compute[250269]: 2026-01-23 10:09:48.336 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance d843365b-283f-47da-ba45-e68489a5fbdd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:09:48 np0005593232 nova_compute[250269]: 2026-01-23 10:09:48.336 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:09:48 np0005593232 nova_compute[250269]: 2026-01-23 10:09:48.337 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:09:48 np0005593232 nova_compute[250269]: 2026-01-23 10:09:48.364 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing inventories for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 23 05:09:48 np0005593232 nova_compute[250269]: 2026-01-23 10:09:48.403 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating ProviderTree inventory for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 23 05:09:48 np0005593232 nova_compute[250269]: 2026-01-23 10:09:48.404 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 05:09:48 np0005593232 nova_compute[250269]: 2026-01-23 10:09:48.433 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing aggregate associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 23 05:09:48 np0005593232 nova_compute[250269]: 2026-01-23 10:09:48.475 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing trait associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 23 05:09:48 np0005593232 nova_compute[250269]: 2026-01-23 10:09:48.555 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:09:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:09:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:48.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:09:48 np0005593232 ovn_controller[151001]: 2026-01-23T10:09:48Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f8:af:f5 10.100.0.11
Jan 23 05:09:48 np0005593232 ovn_controller[151001]: 2026-01-23T10:09:48Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:af:f5 10.100.0.11
Jan 23 05:09:48 np0005593232 nova_compute[250269]: 2026-01-23 10:09:48.629 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:09:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/900391563' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:09:48 np0005593232 nova_compute[250269]: 2026-01-23 10:09:48.997 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:09:49 np0005593232 nova_compute[250269]: 2026-01-23 10:09:49.004 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:09:49 np0005593232 nova_compute[250269]: 2026-01-23 10:09:49.023 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:09:49 np0005593232 nova_compute[250269]: 2026-01-23 10:09:49.051 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:09:49 np0005593232 nova_compute[250269]: 2026-01-23 10:09:49.051 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:09:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2381: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 186 op/s
Jan 23 05:09:49 np0005593232 podman[331723]: 2026-01-23 10:09:49.472855005 +0000 UTC m=+0.130352275 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 23 05:09:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:49.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:09:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:50.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:09:50 np0005593232 nova_compute[250269]: 2026-01-23 10:09:50.616 250273 DEBUG nova.compute.manager [None req-b103c1b9-8721-4b4f-a0a2-2a29e58e360e aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Getting vnc console get_vnc_console /usr/lib/python3.9/site-packages/nova/compute/manager.py:7196#033[00m
Jan 23 05:09:51 np0005593232 nova_compute[250269]: 2026-01-23 10:09:51.046 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:09:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2382: 321 pgs: 321 active+clean; 293 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 902 KiB/s wr, 112 op/s
Jan 23 05:09:51 np0005593232 nova_compute[250269]: 2026-01-23 10:09:51.607 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:09:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:51.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:09:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:09:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:52.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2383: 321 pgs: 321 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.0 MiB/s wr, 187 op/s
Jan 23 05:09:53 np0005593232 nova_compute[250269]: 2026-01-23 10:09:53.633 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:53.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:54 np0005593232 podman[331753]: 2026-01-23 10:09:54.397421518 +0000 UTC m=+0.053602064 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:09:54 np0005593232 nova_compute[250269]: 2026-01-23 10:09:54.535 250273 DEBUG oslo_concurrency.lockutils [None req-47f9d8ce-c904-4cf0-8fab-2c31171702cd aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:09:54 np0005593232 nova_compute[250269]: 2026-01-23 10:09:54.536 250273 DEBUG oslo_concurrency.lockutils [None req-47f9d8ce-c904-4cf0-8fab-2c31171702cd aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:09:54 np0005593232 nova_compute[250269]: 2026-01-23 10:09:54.536 250273 DEBUG nova.compute.manager [None req-47f9d8ce-c904-4cf0-8fab-2c31171702cd aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:09:54 np0005593232 nova_compute[250269]: 2026-01-23 10:09:54.540 250273 DEBUG nova.compute.manager [None req-47f9d8ce-c904-4cf0-8fab-2c31171702cd aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Jan 23 05:09:54 np0005593232 nova_compute[250269]: 2026-01-23 10:09:54.541 250273 DEBUG nova.objects.instance [None req-47f9d8ce-c904-4cf0-8fab-2c31171702cd aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'flavor' on Instance uuid 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:09:54 np0005593232 nova_compute[250269]: 2026-01-23 10:09:54.566 250273 DEBUG nova.virt.libvirt.driver [None req-47f9d8ce-c904-4cf0-8fab-2c31171702cd aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 23 05:09:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:54.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2384: 321 pgs: 321 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 140 op/s
Jan 23 05:09:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:55.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:56.129 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:09:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:56.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:56 np0005593232 nova_compute[250269]: 2026-01-23 10:09:56.609 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:09:56 np0005593232 kernel: tap8ad4c021-5d (unregistering): left promiscuous mode
Jan 23 05:09:56 np0005593232 NetworkManager[49057]: <info>  [1769162996.7950] device (tap8ad4c021-5d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:09:56 np0005593232 ovn_controller[151001]: 2026-01-23T10:09:56Z|00424|binding|INFO|Releasing lport 8ad4c021-5d44-41aa-adad-f593da5206c1 from this chassis (sb_readonly=0)
Jan 23 05:09:56 np0005593232 ovn_controller[151001]: 2026-01-23T10:09:56Z|00425|binding|INFO|Setting lport 8ad4c021-5d44-41aa-adad-f593da5206c1 down in Southbound
Jan 23 05:09:56 np0005593232 nova_compute[250269]: 2026-01-23 10:09:56.801 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:56 np0005593232 ovn_controller[151001]: 2026-01-23T10:09:56Z|00426|binding|INFO|Removing iface tap8ad4c021-5d ovn-installed in OVS
Jan 23 05:09:56 np0005593232 nova_compute[250269]: 2026-01-23 10:09:56.805 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:56.817 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:50:89 10.100.0.7'], port_security=['fa:16:3e:46:50:89 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '81a8be01-ddd9-4fd2-91a1-886e7f47bfa3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8d9599b4-8855-4310-af02-cdd058438f7d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9dd869ce76e44fc8a82b8bbee1654d33', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf3e0bf9-33c6-483b-a880-c8297a0be71f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=875f4baa-cb85-49ca-8f02-78715d351fdb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=8ad4c021-5d44-41aa-adad-f593da5206c1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:09:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:56.818 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 8ad4c021-5d44-41aa-adad-f593da5206c1 in datapath 8d9599b4-8855-4310-af02-cdd058438f7d unbound from our chassis#033[00m
Jan 23 05:09:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:56.819 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8d9599b4-8855-4310-af02-cdd058438f7d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:09:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:56.821 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[19e584ad-c69e-42bd-865e-0743f312b0f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:56.821 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d namespace which is not needed anymore#033[00m
Jan 23 05:09:56 np0005593232 nova_compute[250269]: 2026-01-23 10:09:56.822 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:56 np0005593232 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Jan 23 05:09:56 np0005593232 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000007a.scope: Consumed 18.259s CPU time.
Jan 23 05:09:56 np0005593232 systemd-machined[215836]: Machine qemu-51-instance-0000007a terminated.
Jan 23 05:09:56 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[328363]: [NOTICE]   (328367) : haproxy version is 2.8.14-c23fe91
Jan 23 05:09:56 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[328363]: [NOTICE]   (328367) : path to executable is /usr/sbin/haproxy
Jan 23 05:09:56 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[328363]: [WARNING]  (328367) : Exiting Master process...
Jan 23 05:09:56 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[328363]: [WARNING]  (328367) : Exiting Master process...
Jan 23 05:09:56 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[328363]: [ALERT]    (328367) : Current worker (328369) exited with code 143 (Terminated)
Jan 23 05:09:56 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[328363]: [WARNING]  (328367) : All workers exited. Exiting... (0)
Jan 23 05:09:56 np0005593232 systemd[1]: libpod-035cb19ba3e9da2312f7f323a158ed1cf4cc61cfa4cc5d8ccace6355b57d3a95.scope: Deactivated successfully.
Jan 23 05:09:56 np0005593232 conmon[328363]: conmon 035cb19ba3e9da2312f7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-035cb19ba3e9da2312f7f323a158ed1cf4cc61cfa4cc5d8ccace6355b57d3a95.scope/container/memory.events
Jan 23 05:09:56 np0005593232 podman[331796]: 2026-01-23 10:09:56.956737171 +0000 UTC m=+0.045033781 container died 035cb19ba3e9da2312f7f323a158ed1cf4cc61cfa4cc5d8ccace6355b57d3a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:09:56 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-035cb19ba3e9da2312f7f323a158ed1cf4cc61cfa4cc5d8ccace6355b57d3a95-userdata-shm.mount: Deactivated successfully.
Jan 23 05:09:56 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c78cccda20f28183b304bde3bec28fe642b369594233a83435c4e458bcf50642-merged.mount: Deactivated successfully.
Jan 23 05:09:57 np0005593232 podman[331796]: 2026-01-23 10:09:57.006721851 +0000 UTC m=+0.095018461 container cleanup 035cb19ba3e9da2312f7f323a158ed1cf4cc61cfa4cc5d8ccace6355b57d3a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.011 250273 DEBUG oslo_concurrency.lockutils [None req-34f8dc1b-e9af-41db-8042-cbff0e2bb169 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquiring lock "d843365b-283f-47da-ba45-e68489a5fbdd" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.012 250273 DEBUG oslo_concurrency.lockutils [None req-34f8dc1b-e9af-41db-8042-cbff0e2bb169 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "d843365b-283f-47da-ba45-e68489a5fbdd" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:09:57 np0005593232 systemd[1]: libpod-conmon-035cb19ba3e9da2312f7f323a158ed1cf4cc61cfa4cc5d8ccace6355b57d3a95.scope: Deactivated successfully.
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.033 250273 DEBUG nova.objects.instance [None req-34f8dc1b-e9af-41db-8042-cbff0e2bb169 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lazy-loading 'flavor' on Instance uuid d843365b-283f-47da-ba45-e68489a5fbdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:09:57 np0005593232 podman[331827]: 2026-01-23 10:09:57.073464938 +0000 UTC m=+0.045131414 container remove 035cb19ba3e9da2312f7f323a158ed1cf4cc61cfa4cc5d8ccace6355b57d3a95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.080 250273 DEBUG oslo_concurrency.lockutils [None req-34f8dc1b-e9af-41db-8042-cbff0e2bb169 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "d843365b-283f-47da-ba45-e68489a5fbdd" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:09:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:57.082 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[970d371d-0870-4aa4-9fe6-98c74a99d9d9]: (4, ('Fri Jan 23 10:09:56 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d (035cb19ba3e9da2312f7f323a158ed1cf4cc61cfa4cc5d8ccace6355b57d3a95)\n035cb19ba3e9da2312f7f323a158ed1cf4cc61cfa4cc5d8ccace6355b57d3a95\nFri Jan 23 10:09:57 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d (035cb19ba3e9da2312f7f323a158ed1cf4cc61cfa4cc5d8ccace6355b57d3a95)\n035cb19ba3e9da2312f7f323a158ed1cf4cc61cfa4cc5d8ccace6355b57d3a95\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:57.083 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ac498faf-1c20-4f0b-bf01-036ac7cbbe9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:57.084 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d9599b4-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.086 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:57 np0005593232 kernel: tap8d9599b4-80: left promiscuous mode
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.102 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:57.105 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[599e70ce-9561-42f2-836b-93558778ec6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.116 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:57.121 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[401603ad-ddc1-4372-a1ab-bd902829517b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:57.123 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8c628fcc-0821-427e-9277-a20f6e896566]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:57.136 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[603f0f9b-eea0-492b-a44f-9e94dd3dae0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 680134, 'reachable_time': 19304, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331852, 'error': None, 'target': 'ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:57 np0005593232 systemd[1]: run-netns-ovnmeta\x2d8d9599b4\x2d8855\x2d4310\x2daf02\x2dcdd058438f7d.mount: Deactivated successfully.
Jan 23 05:09:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:57.139 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:09:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:09:57.139 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[f951aa18-6a31-4ec1-af90-163282570631]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2385: 321 pgs: 321 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.509 250273 DEBUG oslo_concurrency.lockutils [None req-34f8dc1b-e9af-41db-8042-cbff0e2bb169 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquiring lock "d843365b-283f-47da-ba45-e68489a5fbdd" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.509 250273 DEBUG oslo_concurrency.lockutils [None req-34f8dc1b-e9af-41db-8042-cbff0e2bb169 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "d843365b-283f-47da-ba45-e68489a5fbdd" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.510 250273 INFO nova.compute.manager [None req-34f8dc1b-e9af-41db-8042-cbff0e2bb169 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Attaching volume b0abb614-0c00-40fa-9977-96c7edfc0c3c to /dev/vdb#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.582 250273 INFO nova.virt.libvirt.driver [None req-47f9d8ce-c904-4cf0-8fab-2c31171702cd aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Instance shutdown successfully after 3 seconds.#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.588 250273 INFO nova.virt.libvirt.driver [-] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Instance destroyed successfully.#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.588 250273 DEBUG nova.objects.instance [None req-47f9d8ce-c904-4cf0-8fab-2c31171702cd aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'numa_topology' on Instance uuid 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.592 250273 DEBUG nova.compute.manager [req-ae1db12d-83d4-4639-9f71-c3cd225770af req-c2248591-b825-4112-845f-ab1b02b70221 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Received event network-vif-unplugged-8ad4c021-5d44-41aa-adad-f593da5206c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.592 250273 DEBUG oslo_concurrency.lockutils [req-ae1db12d-83d4-4639-9f71-c3cd225770af req-c2248591-b825-4112-845f-ab1b02b70221 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.593 250273 DEBUG oslo_concurrency.lockutils [req-ae1db12d-83d4-4639-9f71-c3cd225770af req-c2248591-b825-4112-845f-ab1b02b70221 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.593 250273 DEBUG oslo_concurrency.lockutils [req-ae1db12d-83d4-4639-9f71-c3cd225770af req-c2248591-b825-4112-845f-ab1b02b70221 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.593 250273 DEBUG nova.compute.manager [req-ae1db12d-83d4-4639-9f71-c3cd225770af req-c2248591-b825-4112-845f-ab1b02b70221 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] No waiting events found dispatching network-vif-unplugged-8ad4c021-5d44-41aa-adad-f593da5206c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.593 250273 WARNING nova.compute.manager [req-ae1db12d-83d4-4639-9f71-c3cd225770af req-c2248591-b825-4112-845f-ab1b02b70221 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Received unexpected event network-vif-unplugged-8ad4c021-5d44-41aa-adad-f593da5206c1 for instance with vm_state active and task_state powering-off.#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.609 250273 DEBUG nova.compute.manager [None req-47f9d8ce-c904-4cf0-8fab-2c31171702cd aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.660 250273 DEBUG oslo_concurrency.lockutils [None req-47f9d8ce-c904-4cf0-8fab-2c31171702cd aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:09:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:57.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.769 250273 DEBUG os_brick.utils [None req-34f8dc1b-e9af-41db-8042-cbff0e2bb169 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.773 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.786 265031 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.787 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[e999a4f2-bfd7-4187-b129-f1abe575615e]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.788 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.796 265031 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.797 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[71918832-209f-4dff-93ec-04e68ec0105f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:c6473626b456', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.799 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.809 265031 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.810 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[8fd6ad39-8a6b-45e8-9bfa-ec10bd477d8e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.812 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[97b18d6f-ac76-4b23-901b-9c4dd9a53934]: (4, 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.813 250273 DEBUG oslo_concurrency.processutils [None req-34f8dc1b-e9af-41db-8042-cbff0e2bb169 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.843 250273 DEBUG oslo_concurrency.processutils [None req-34f8dc1b-e9af-41db-8042-cbff0e2bb169 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.846 250273 DEBUG os_brick.initiator.connectors.lightos [None req-34f8dc1b-e9af-41db-8042-cbff0e2bb169 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.846 250273 DEBUG os_brick.initiator.connectors.lightos [None req-34f8dc1b-e9af-41db-8042-cbff0e2bb169 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.847 250273 DEBUG os_brick.initiator.connectors.lightos [None req-34f8dc1b-e9af-41db-8042-cbff0e2bb169 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.847 250273 DEBUG os_brick.utils [None req-34f8dc1b-e9af-41db-8042-cbff0e2bb169 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] <== get_connector_properties: return (77ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:c6473626b456', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 23 05:09:57 np0005593232 nova_compute[250269]: 2026-01-23 10:09:57.848 250273 DEBUG nova.virt.block_device [None req-34f8dc1b-e9af-41db-8042-cbff0e2bb169 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Updating existing volume attachment record: 9b24b8a4-e157-403f-89a0-2131fd1bc2bb _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 23 05:09:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:09:58.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:09:58 np0005593232 nova_compute[250269]: 2026-01-23 10:09:58.636 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:09:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:09:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/328906345' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:09:58 np0005593232 nova_compute[250269]: 2026-01-23 10:09:58.909 250273 DEBUG nova.objects.instance [None req-34f8dc1b-e9af-41db-8042-cbff0e2bb169 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lazy-loading 'flavor' on Instance uuid d843365b-283f-47da-ba45-e68489a5fbdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:09:58 np0005593232 nova_compute[250269]: 2026-01-23 10:09:58.933 250273 DEBUG nova.virt.libvirt.driver [None req-34f8dc1b-e9af-41db-8042-cbff0e2bb169 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Attempting to attach volume b0abb614-0c00-40fa-9977-96c7edfc0c3c with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 23 05:09:58 np0005593232 nova_compute[250269]: 2026-01-23 10:09:58.936 250273 DEBUG nova.virt.libvirt.guest [None req-34f8dc1b-e9af-41db-8042-cbff0e2bb169 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] attach device xml: <disk type="network" device="disk">
Jan 23 05:09:58 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:09:58 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-b0abb614-0c00-40fa-9977-96c7edfc0c3c">
Jan 23 05:09:58 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:09:58 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:09:58 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:09:58 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:09:58 np0005593232 nova_compute[250269]:  <auth username="openstack">
Jan 23 05:09:58 np0005593232 nova_compute[250269]:    <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:09:58 np0005593232 nova_compute[250269]:  </auth>
Jan 23 05:09:58 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 05:09:58 np0005593232 nova_compute[250269]:  <serial>b0abb614-0c00-40fa-9977-96c7edfc0c3c</serial>
Jan 23 05:09:58 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:09:58 np0005593232 nova_compute[250269]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 23 05:09:59 np0005593232 nova_compute[250269]: 2026-01-23 10:09:59.042 250273 DEBUG nova.virt.libvirt.driver [None req-34f8dc1b-e9af-41db-8042-cbff0e2bb169 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:09:59 np0005593232 nova_compute[250269]: 2026-01-23 10:09:59.042 250273 DEBUG nova.virt.libvirt.driver [None req-34f8dc1b-e9af-41db-8042-cbff0e2bb169 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:09:59 np0005593232 nova_compute[250269]: 2026-01-23 10:09:59.042 250273 DEBUG nova.virt.libvirt.driver [None req-34f8dc1b-e9af-41db-8042-cbff0e2bb169 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:09:59 np0005593232 nova_compute[250269]: 2026-01-23 10:09:59.043 250273 DEBUG nova.virt.libvirt.driver [None req-34f8dc1b-e9af-41db-8042-cbff0e2bb169 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] No VIF found with MAC fa:16:3e:f8:af:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:09:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2386: 321 pgs: 321 active+clean; 341 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.4 MiB/s wr, 142 op/s
Jan 23 05:09:59 np0005593232 nova_compute[250269]: 2026-01-23 10:09:59.569 250273 DEBUG oslo_concurrency.lockutils [None req-34f8dc1b-e9af-41db-8042-cbff0e2bb169 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "d843365b-283f-47da-ba45-e68489a5fbdd" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:09:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:09:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:09:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:09:59.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:00 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 23 05:10:00 np0005593232 nova_compute[250269]: 2026-01-23 10:10:00.049 250273 DEBUG nova.compute.manager [req-16ff4900-c986-4106-a83f-70970fa1e0bb req-f13a154d-5428-40a8-a5af-0fda84987973 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Received event network-vif-plugged-8ad4c021-5d44-41aa-adad-f593da5206c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:10:00 np0005593232 nova_compute[250269]: 2026-01-23 10:10:00.049 250273 DEBUG oslo_concurrency.lockutils [req-16ff4900-c986-4106-a83f-70970fa1e0bb req-f13a154d-5428-40a8-a5af-0fda84987973 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:10:00 np0005593232 nova_compute[250269]: 2026-01-23 10:10:00.050 250273 DEBUG oslo_concurrency.lockutils [req-16ff4900-c986-4106-a83f-70970fa1e0bb req-f13a154d-5428-40a8-a5af-0fda84987973 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:10:00 np0005593232 nova_compute[250269]: 2026-01-23 10:10:00.050 250273 DEBUG oslo_concurrency.lockutils [req-16ff4900-c986-4106-a83f-70970fa1e0bb req-f13a154d-5428-40a8-a5af-0fda84987973 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:10:00 np0005593232 nova_compute[250269]: 2026-01-23 10:10:00.051 250273 DEBUG nova.compute.manager [req-16ff4900-c986-4106-a83f-70970fa1e0bb req-f13a154d-5428-40a8-a5af-0fda84987973 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] No waiting events found dispatching network-vif-plugged-8ad4c021-5d44-41aa-adad-f593da5206c1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:10:00 np0005593232 nova_compute[250269]: 2026-01-23 10:10:00.051 250273 WARNING nova.compute.manager [req-16ff4900-c986-4106-a83f-70970fa1e0bb req-f13a154d-5428-40a8-a5af-0fda84987973 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Received unexpected event network-vif-plugged-8ad4c021-5d44-41aa-adad-f593da5206c1 for instance with vm_state stopped and task_state resize_prep.#033[00m
Jan 23 05:10:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:10:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:00.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:10:00 np0005593232 ceph-mon[74423]: overall HEALTH_OK
Jan 23 05:10:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2387: 321 pgs: 321 active+clean; 341 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 685 KiB/s rd, 3.4 MiB/s wr, 95 op/s
Jan 23 05:10:01 np0005593232 nova_compute[250269]: 2026-01-23 10:10:01.322 250273 DEBUG oslo_concurrency.lockutils [None req-5ba54649-1358-4e51-a3cc-62c61a7278c4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquiring lock "d843365b-283f-47da-ba45-e68489a5fbdd" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:10:01 np0005593232 nova_compute[250269]: 2026-01-23 10:10:01.323 250273 DEBUG oslo_concurrency.lockutils [None req-5ba54649-1358-4e51-a3cc-62c61a7278c4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "d843365b-283f-47da-ba45-e68489a5fbdd" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:10:01 np0005593232 nova_compute[250269]: 2026-01-23 10:10:01.345 250273 INFO nova.compute.manager [None req-5ba54649-1358-4e51-a3cc-62c61a7278c4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Detaching volume b0abb614-0c00-40fa-9977-96c7edfc0c3c#033[00m
Jan 23 05:10:01 np0005593232 nova_compute[250269]: 2026-01-23 10:10:01.554 250273 INFO nova.virt.block_device [None req-5ba54649-1358-4e51-a3cc-62c61a7278c4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Attempting to driver detach volume b0abb614-0c00-40fa-9977-96c7edfc0c3c from mountpoint /dev/vdb#033[00m
Jan 23 05:10:01 np0005593232 nova_compute[250269]: 2026-01-23 10:10:01.563 250273 DEBUG nova.virt.libvirt.driver [None req-5ba54649-1358-4e51-a3cc-62c61a7278c4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Attempting to detach device vdb from instance d843365b-283f-47da-ba45-e68489a5fbdd from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 23 05:10:01 np0005593232 nova_compute[250269]: 2026-01-23 10:10:01.564 250273 DEBUG nova.virt.libvirt.guest [None req-5ba54649-1358-4e51-a3cc-62c61a7278c4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] detach device xml: <disk type="network" device="disk">
Jan 23 05:10:01 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:10:01 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-b0abb614-0c00-40fa-9977-96c7edfc0c3c">
Jan 23 05:10:01 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:10:01 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:10:01 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:10:01 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:10:01 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 05:10:01 np0005593232 nova_compute[250269]:  <serial>b0abb614-0c00-40fa-9977-96c7edfc0c3c</serial>
Jan 23 05:10:01 np0005593232 nova_compute[250269]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 23 05:10:01 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:10:01 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 05:10:01 np0005593232 nova_compute[250269]: 2026-01-23 10:10:01.571 250273 INFO nova.virt.libvirt.driver [None req-5ba54649-1358-4e51-a3cc-62c61a7278c4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Successfully detached device vdb from instance d843365b-283f-47da-ba45-e68489a5fbdd from the persistent domain config.#033[00m
Jan 23 05:10:01 np0005593232 nova_compute[250269]: 2026-01-23 10:10:01.572 250273 DEBUG nova.virt.libvirt.driver [None req-5ba54649-1358-4e51-a3cc-62c61a7278c4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance d843365b-283f-47da-ba45-e68489a5fbdd from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 23 05:10:01 np0005593232 nova_compute[250269]: 2026-01-23 10:10:01.572 250273 DEBUG nova.virt.libvirt.guest [None req-5ba54649-1358-4e51-a3cc-62c61a7278c4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] detach device xml: <disk type="network" device="disk">
Jan 23 05:10:01 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:10:01 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-b0abb614-0c00-40fa-9977-96c7edfc0c3c">
Jan 23 05:10:01 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:10:01 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:10:01 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:10:01 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:10:01 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 05:10:01 np0005593232 nova_compute[250269]:  <serial>b0abb614-0c00-40fa-9977-96c7edfc0c3c</serial>
Jan 23 05:10:01 np0005593232 nova_compute[250269]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 23 05:10:01 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:10:01 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 05:10:01 np0005593232 nova_compute[250269]: 2026-01-23 10:10:01.610 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:01 np0005593232 nova_compute[250269]: 2026-01-23 10:10:01.626 250273 DEBUG nova.virt.libvirt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Received event <DeviceRemovedEvent: 1769163001.626337, d843365b-283f-47da-ba45-e68489a5fbdd => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 23 05:10:01 np0005593232 nova_compute[250269]: 2026-01-23 10:10:01.628 250273 DEBUG nova.virt.libvirt.driver [None req-5ba54649-1358-4e51-a3cc-62c61a7278c4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance d843365b-283f-47da-ba45-e68489a5fbdd _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 23 05:10:01 np0005593232 nova_compute[250269]: 2026-01-23 10:10:01.630 250273 INFO nova.virt.libvirt.driver [None req-5ba54649-1358-4e51-a3cc-62c61a7278c4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Successfully detached device vdb from instance d843365b-283f-47da-ba45-e68489a5fbdd from the live domain config.#033[00m
Jan 23 05:10:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:01.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:10:01 np0005593232 nova_compute[250269]: 2026-01-23 10:10:01.958 250273 DEBUG nova.objects.instance [None req-5ba54649-1358-4e51-a3cc-62c61a7278c4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lazy-loading 'flavor' on Instance uuid d843365b-283f-47da-ba45-e68489a5fbdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:10:02 np0005593232 nova_compute[250269]: 2026-01-23 10:10:02.004 250273 DEBUG oslo_concurrency.lockutils [None req-5ba54649-1358-4e51-a3cc-62c61a7278c4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "d843365b-283f-47da-ba45-e68489a5fbdd" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:10:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:02.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.111 250273 DEBUG oslo_concurrency.lockutils [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquiring lock "d843365b-283f-47da-ba45-e68489a5fbdd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.112 250273 DEBUG oslo_concurrency.lockutils [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "d843365b-283f-47da-ba45-e68489a5fbdd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.112 250273 DEBUG oslo_concurrency.lockutils [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquiring lock "d843365b-283f-47da-ba45-e68489a5fbdd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.113 250273 DEBUG oslo_concurrency.lockutils [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "d843365b-283f-47da-ba45-e68489a5fbdd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.113 250273 DEBUG oslo_concurrency.lockutils [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "d843365b-283f-47da-ba45-e68489a5fbdd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.115 250273 INFO nova.compute.manager [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Terminating instance#033[00m
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.116 250273 DEBUG nova.compute.manager [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:10:03 np0005593232 kernel: tapcadc4646-46 (unregistering): left promiscuous mode
Jan 23 05:10:03 np0005593232 NetworkManager[49057]: <info>  [1769163003.1740] device (tapcadc4646-46): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:10:03 np0005593232 ovn_controller[151001]: 2026-01-23T10:10:03Z|00427|binding|INFO|Releasing lport cadc4646-467c-4840-8c7d-bfa5246d88f8 from this chassis (sb_readonly=0)
Jan 23 05:10:03 np0005593232 ovn_controller[151001]: 2026-01-23T10:10:03Z|00428|binding|INFO|Setting lport cadc4646-467c-4840-8c7d-bfa5246d88f8 down in Southbound
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.184 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:03 np0005593232 ovn_controller[151001]: 2026-01-23T10:10:03Z|00429|binding|INFO|Removing iface tapcadc4646-46 ovn-installed in OVS
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.186 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:03.192 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:af:f5 10.100.0.11'], port_security=['fa:16:3e:f8:af:f5 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd843365b-283f-47da-ba45-e68489a5fbdd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-93735878-f62d-4a5f-96df-bf97f85d787a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '924f976bcbb74ec195730b68eebe1f2a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fc795ddc-1837-4675-a407-b23a97d1b748', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.183'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e1f72e5c-e22f-424b-b6ed-0c502ff13aa3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=cadc4646-467c-4840-8c7d-bfa5246d88f8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:10:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:03.193 161902 INFO neutron.agent.ovn.metadata.agent [-] Port cadc4646-467c-4840-8c7d-bfa5246d88f8 in datapath 93735878-f62d-4a5f-96df-bf97f85d787a unbound from our chassis#033[00m
Jan 23 05:10:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:03.195 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 93735878-f62d-4a5f-96df-bf97f85d787a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:10:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:03.196 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[be18c585-5d62-47f1-8f6c-4596a3834bdb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:10:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:03.197 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a namespace which is not needed anymore#033[00m
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.209 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2388: 321 pgs: 321 active+clean; 359 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 983 KiB/s rd, 4.3 MiB/s wr, 143 op/s
Jan 23 05:10:03 np0005593232 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Jan 23 05:10:03 np0005593232 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000007e.scope: Consumed 14.524s CPU time.
Jan 23 05:10:03 np0005593232 systemd-machined[215836]: Machine qemu-52-instance-0000007e terminated.
Jan 23 05:10:03 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0[81752]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 23 05:10:03 np0005593232 neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a[331607]: [NOTICE]   (331611) : haproxy version is 2.8.14-c23fe91
Jan 23 05:10:03 np0005593232 neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a[331607]: [NOTICE]   (331611) : path to executable is /usr/sbin/haproxy
Jan 23 05:10:03 np0005593232 neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a[331607]: [WARNING]  (331611) : Exiting Master process...
Jan 23 05:10:03 np0005593232 neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a[331607]: [ALERT]    (331611) : Current worker (331613) exited with code 143 (Terminated)
Jan 23 05:10:03 np0005593232 neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a[331607]: [WARNING]  (331611) : All workers exited. Exiting... (0)
Jan 23 05:10:03 np0005593232 systemd[1]: libpod-1f91cbf0caf0c14572d6b0782877e20d446ca672a88f81e10fc3605bdce28cc8.scope: Deactivated successfully.
Jan 23 05:10:03 np0005593232 podman[331910]: 2026-01-23 10:10:03.319234688 +0000 UTC m=+0.044296090 container died 1f91cbf0caf0c14572d6b0782877e20d446ca672a88f81e10fc3605bdce28cc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 23 05:10:03 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1f91cbf0caf0c14572d6b0782877e20d446ca672a88f81e10fc3605bdce28cc8-userdata-shm.mount: Deactivated successfully.
Jan 23 05:10:03 np0005593232 systemd[1]: var-lib-containers-storage-overlay-8a8542d9f19fa12eecd71ca24c29dd1f7ba66297eeef527386e0305daced02fb-merged.mount: Deactivated successfully.
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.362 250273 INFO nova.virt.libvirt.driver [-] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Instance destroyed successfully.#033[00m
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.362 250273 DEBUG nova.objects.instance [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lazy-loading 'resources' on Instance uuid d843365b-283f-47da-ba45-e68489a5fbdd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:10:03 np0005593232 podman[331910]: 2026-01-23 10:10:03.371070231 +0000 UTC m=+0.096131633 container cleanup 1f91cbf0caf0c14572d6b0782877e20d446ca672a88f81e10fc3605bdce28cc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 05:10:03 np0005593232 systemd[1]: libpod-conmon-1f91cbf0caf0c14572d6b0782877e20d446ca672a88f81e10fc3605bdce28cc8.scope: Deactivated successfully.
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.390 250273 DEBUG nova.virt.libvirt.vif [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:09:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachVolumeNegativeTest-server-1088904730',display_name='tempest-AttachVolumeNegativeTest-server-1088904730',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumenegativetest-server-1088904730',id=126,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLmAI3Z1MxeFb9aT2JDg2S7+wDpcpLLGaswStih3d8lYYLaJ1fS7fJVCYjd05ZJp70nkjBgezzm4tQCK8F7QF65l0K872oxXoH1fvdXzUqr8tBwew9latST+eWgu4bSQXg==',key_name='tempest-keypair-1710633129',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:09:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='924f976bcbb74ec195730b68eebe1f2a',ramdisk_id='',reservation_id='r-zxmggslp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeNegativeTest-1470050886',owner_user_name='tempest-AttachVolumeNegativeTest-1470050886-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:09:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='c99d09acd2e849a69846a6ccda1e0bc7',uuid=d843365b-283f-47da-ba45-e68489a5fbdd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cadc4646-467c-4840-8c7d-bfa5246d88f8", "address": "fa:16:3e:f8:af:f5", "network": {"id": "93735878-f62d-4a5f-96df-bf97f85d787a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-259751152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "924f976bcbb74ec195730b68eebe1f2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcadc4646-46", "ovs_interfaceid": "cadc4646-467c-4840-8c7d-bfa5246d88f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.391 250273 DEBUG nova.network.os_vif_util [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Converting VIF {"id": "cadc4646-467c-4840-8c7d-bfa5246d88f8", "address": "fa:16:3e:f8:af:f5", "network": {"id": "93735878-f62d-4a5f-96df-bf97f85d787a", "bridge": "br-int", "label": "tempest-AttachVolumeNegativeTest-259751152-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.183", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "924f976bcbb74ec195730b68eebe1f2a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcadc4646-46", "ovs_interfaceid": "cadc4646-467c-4840-8c7d-bfa5246d88f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.392 250273 DEBUG nova.network.os_vif_util [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:af:f5,bridge_name='br-int',has_traffic_filtering=True,id=cadc4646-467c-4840-8c7d-bfa5246d88f8,network=Network(93735878-f62d-4a5f-96df-bf97f85d787a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcadc4646-46') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.392 250273 DEBUG os_vif [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:af:f5,bridge_name='br-int',has_traffic_filtering=True,id=cadc4646-467c-4840-8c7d-bfa5246d88f8,network=Network(93735878-f62d-4a5f-96df-bf97f85d787a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcadc4646-46') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.394 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.395 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcadc4646-46, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.396 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.399 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.402 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.404 250273 INFO os_vif [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:af:f5,bridge_name='br-int',has_traffic_filtering=True,id=cadc4646-467c-4840-8c7d-bfa5246d88f8,network=Network(93735878-f62d-4a5f-96df-bf97f85d787a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcadc4646-46')#033[00m
Jan 23 05:10:03 np0005593232 podman[331950]: 2026-01-23 10:10:03.431118307 +0000 UTC m=+0.039898275 container remove 1f91cbf0caf0c14572d6b0782877e20d446ca672a88f81e10fc3605bdce28cc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:10:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:03.437 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[677411da-3587-4c8c-85b3-c0e1536bea27]: (4, ('Fri Jan 23 10:10:03 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a (1f91cbf0caf0c14572d6b0782877e20d446ca672a88f81e10fc3605bdce28cc8)\n1f91cbf0caf0c14572d6b0782877e20d446ca672a88f81e10fc3605bdce28cc8\nFri Jan 23 10:10:03 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a (1f91cbf0caf0c14572d6b0782877e20d446ca672a88f81e10fc3605bdce28cc8)\n1f91cbf0caf0c14572d6b0782877e20d446ca672a88f81e10fc3605bdce28cc8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:10:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:03.438 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[680f729b-62ea-4d1c-aa69-173b9b88a618]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:10:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:03.439 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap93735878-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.441 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:03 np0005593232 kernel: tap93735878-f0: left promiscuous mode
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.457 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:03.460 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9b35c632-cfd9-4195-b7cf-733461b6d74c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:10:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:03.475 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[df9168ae-3c1e-4e81-958d-b84101c0fb03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:10:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:03.476 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2cc9f487-3c2d-4c1f-8a6d-8a5595e20fb6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:10:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:03.490 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ed32f756-ad16-4f4d-983a-c010fbd98e89]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 688833, 'reachable_time': 25052, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331992, 'error': None, 'target': 'ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:10:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:03.492 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-93735878-f62d-4a5f-96df-bf97f85d787a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:10:03 np0005593232 systemd[1]: run-netns-ovnmeta\x2d93735878\x2df62d\x2d4a5f\x2d96df\x2dbf97f85d787a.mount: Deactivated successfully.
Jan 23 05:10:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:03.493 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[14b8fab4-73e1-42bd-a9eb-ad3f8032a36b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:10:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:03.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.807 250273 INFO nova.virt.libvirt.driver [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Deleting instance files /var/lib/nova/instances/d843365b-283f-47da-ba45-e68489a5fbdd_del#033[00m
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.808 250273 INFO nova.virt.libvirt.driver [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Deletion of /var/lib/nova/instances/d843365b-283f-47da-ba45-e68489a5fbdd_del complete#033[00m
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.832 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.926 250273 INFO nova.compute.manager [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.927 250273 DEBUG oslo.service.loopingcall [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.928 250273 DEBUG nova.compute.manager [-] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:10:03 np0005593232 nova_compute[250269]: 2026-01-23 10:10:03.928 250273 DEBUG nova.network.neutron [-] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:10:04 np0005593232 nova_compute[250269]: 2026-01-23 10:10:04.041 250273 DEBUG nova.compute.manager [req-5860b042-9e6c-47ef-9812-301d210ae74e req-babf1999-c41e-4030-ab3d-c49b0ac55a6f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Received event network-vif-unplugged-cadc4646-467c-4840-8c7d-bfa5246d88f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:10:04 np0005593232 nova_compute[250269]: 2026-01-23 10:10:04.042 250273 DEBUG oslo_concurrency.lockutils [req-5860b042-9e6c-47ef-9812-301d210ae74e req-babf1999-c41e-4030-ab3d-c49b0ac55a6f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "d843365b-283f-47da-ba45-e68489a5fbdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:10:04 np0005593232 nova_compute[250269]: 2026-01-23 10:10:04.042 250273 DEBUG oslo_concurrency.lockutils [req-5860b042-9e6c-47ef-9812-301d210ae74e req-babf1999-c41e-4030-ab3d-c49b0ac55a6f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d843365b-283f-47da-ba45-e68489a5fbdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:10:04 np0005593232 nova_compute[250269]: 2026-01-23 10:10:04.043 250273 DEBUG oslo_concurrency.lockutils [req-5860b042-9e6c-47ef-9812-301d210ae74e req-babf1999-c41e-4030-ab3d-c49b0ac55a6f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d843365b-283f-47da-ba45-e68489a5fbdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:10:04 np0005593232 nova_compute[250269]: 2026-01-23 10:10:04.043 250273 DEBUG nova.compute.manager [req-5860b042-9e6c-47ef-9812-301d210ae74e req-babf1999-c41e-4030-ab3d-c49b0ac55a6f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] No waiting events found dispatching network-vif-unplugged-cadc4646-467c-4840-8c7d-bfa5246d88f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:10:04 np0005593232 nova_compute[250269]: 2026-01-23 10:10:04.043 250273 DEBUG nova.compute.manager [req-5860b042-9e6c-47ef-9812-301d210ae74e req-babf1999-c41e-4030-ab3d-c49b0ac55a6f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Received event network-vif-unplugged-cadc4646-467c-4840-8c7d-bfa5246d88f8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:10:04 np0005593232 nova_compute[250269]: 2026-01-23 10:10:04.063 250273 DEBUG oslo_concurrency.lockutils [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "refresh_cache-81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:10:04 np0005593232 nova_compute[250269]: 2026-01-23 10:10:04.063 250273 DEBUG oslo_concurrency.lockutils [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquired lock "refresh_cache-81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:10:04 np0005593232 nova_compute[250269]: 2026-01-23 10:10:04.063 250273 DEBUG nova.network.neutron [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:10:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:04.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2389: 321 pgs: 321 active+clean; 359 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 23 05:10:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:05.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:06 np0005593232 nova_compute[250269]: 2026-01-23 10:10:06.167 250273 DEBUG nova.network.neutron [-] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:10:06 np0005593232 nova_compute[250269]: 2026-01-23 10:10:06.194 250273 INFO nova.compute.manager [-] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Took 2.27 seconds to deallocate network for instance.#033[00m
Jan 23 05:10:06 np0005593232 nova_compute[250269]: 2026-01-23 10:10:06.205 250273 DEBUG nova.compute.manager [req-7065bb4f-9748-4786-842b-563c1e647d01 req-716587b2-1399-4de3-838e-2b93095dbb18 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Received event network-vif-plugged-cadc4646-467c-4840-8c7d-bfa5246d88f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:10:06 np0005593232 nova_compute[250269]: 2026-01-23 10:10:06.205 250273 DEBUG oslo_concurrency.lockutils [req-7065bb4f-9748-4786-842b-563c1e647d01 req-716587b2-1399-4de3-838e-2b93095dbb18 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "d843365b-283f-47da-ba45-e68489a5fbdd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:10:06 np0005593232 nova_compute[250269]: 2026-01-23 10:10:06.206 250273 DEBUG oslo_concurrency.lockutils [req-7065bb4f-9748-4786-842b-563c1e647d01 req-716587b2-1399-4de3-838e-2b93095dbb18 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d843365b-283f-47da-ba45-e68489a5fbdd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:10:06 np0005593232 nova_compute[250269]: 2026-01-23 10:10:06.206 250273 DEBUG oslo_concurrency.lockutils [req-7065bb4f-9748-4786-842b-563c1e647d01 req-716587b2-1399-4de3-838e-2b93095dbb18 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d843365b-283f-47da-ba45-e68489a5fbdd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:10:06 np0005593232 nova_compute[250269]: 2026-01-23 10:10:06.207 250273 DEBUG nova.compute.manager [req-7065bb4f-9748-4786-842b-563c1e647d01 req-716587b2-1399-4de3-838e-2b93095dbb18 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] No waiting events found dispatching network-vif-plugged-cadc4646-467c-4840-8c7d-bfa5246d88f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:10:06 np0005593232 nova_compute[250269]: 2026-01-23 10:10:06.207 250273 WARNING nova.compute.manager [req-7065bb4f-9748-4786-842b-563c1e647d01 req-716587b2-1399-4de3-838e-2b93095dbb18 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Received unexpected event network-vif-plugged-cadc4646-467c-4840-8c7d-bfa5246d88f8 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:10:06 np0005593232 nova_compute[250269]: 2026-01-23 10:10:06.252 250273 DEBUG oslo_concurrency.lockutils [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:10:06 np0005593232 nova_compute[250269]: 2026-01-23 10:10:06.253 250273 DEBUG oslo_concurrency.lockutils [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:10:06 np0005593232 nova_compute[250269]: 2026-01-23 10:10:06.320 250273 DEBUG oslo_concurrency.processutils [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:10:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:06.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:06 np0005593232 nova_compute[250269]: 2026-01-23 10:10:06.612 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:10:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:10:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2606283232' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:10:06 np0005593232 nova_compute[250269]: 2026-01-23 10:10:06.840 250273 DEBUG oslo_concurrency.processutils [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:10:06 np0005593232 nova_compute[250269]: 2026-01-23 10:10:06.849 250273 DEBUG nova.compute.provider_tree [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:10:06 np0005593232 nova_compute[250269]: 2026-01-23 10:10:06.878 250273 DEBUG nova.scheduler.client.report [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:10:06 np0005593232 nova_compute[250269]: 2026-01-23 10:10:06.907 250273 DEBUG oslo_concurrency.lockutils [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:10:06 np0005593232 nova_compute[250269]: 2026-01-23 10:10:06.934 250273 INFO nova.scheduler.client.report [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Deleted allocations for instance d843365b-283f-47da-ba45-e68489a5fbdd#033[00m
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.019 250273 DEBUG oslo_concurrency.lockutils [None req-bb6b43a1-abce-4516-ae47-988ba16089b4 c99d09acd2e849a69846a6ccda1e0bc7 924f976bcbb74ec195730b68eebe1f2a - - default default] Lock "d843365b-283f-47da-ba45-e68489a5fbdd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.096 250273 DEBUG nova.network.neutron [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Updating instance_info_cache with network_info: [{"id": "8ad4c021-5d44-41aa-adad-f593da5206c1", "address": "fa:16:3e:46:50:89", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ad4c021-5d", "ovs_interfaceid": "8ad4c021-5d44-41aa-adad-f593da5206c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.119 250273 DEBUG oslo_concurrency.lockutils [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Releasing lock "refresh_cache-81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.223 250273 DEBUG nova.virt.libvirt.driver [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.223 250273 DEBUG nova.virt.libvirt.volume.remotefs [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Creating file /var/lib/nova/instances/81a8be01-ddd9-4fd2-91a1-886e7f47bfa3/34a71ed9d50d4537a8d7603b44adc519.tmp on remote host 192.168.122.101 create_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:79#033[00m
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.224 250273 DEBUG oslo_concurrency.processutils [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/81a8be01-ddd9-4fd2-91a1-886e7f47bfa3/34a71ed9d50d4537a8d7603b44adc519.tmp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:10:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2390: 321 pgs: 321 active+clean; 333 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.261 250273 DEBUG nova.compute.manager [req-0766026a-d26d-42ed-9a9d-9cc0fa9725fe req-7290914e-0b6f-417e-b829-6a7d755ca354 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Received event network-vif-deleted-cadc4646-467c-4840-8c7d-bfa5246d88f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:10:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:10:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:10:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:10:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:10:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:10:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:10:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:07.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.726 250273 DEBUG oslo_concurrency.processutils [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/81a8be01-ddd9-4fd2-91a1-886e7f47bfa3/34a71ed9d50d4537a8d7603b44adc519.tmp" returned: 1 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.727 250273 DEBUG oslo_concurrency.processutils [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] 'ssh -o BatchMode=yes 192.168.122.101 touch /var/lib/nova/instances/81a8be01-ddd9-4fd2-91a1-886e7f47bfa3/34a71ed9d50d4537a8d7603b44adc519.tmp' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.728 250273 DEBUG nova.virt.libvirt.volume.remotefs [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Creating directory /var/lib/nova/instances/81a8be01-ddd9-4fd2-91a1-886e7f47bfa3 on remote host 192.168.122.101 create_dir /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/remotefs.py:91#033[00m
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.728 250273 DEBUG oslo_concurrency.processutils [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/81a8be01-ddd9-4fd2-91a1-886e7f47bfa3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.953 250273 DEBUG oslo_concurrency.processutils [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "ssh -o BatchMode=yes 192.168.122.101 mkdir -p /var/lib/nova/instances/81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.956 250273 INFO nova.virt.libvirt.driver [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Instance already shutdown.#033[00m
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.962 250273 INFO nova.virt.libvirt.driver [-] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Instance destroyed successfully.#033[00m
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.963 250273 DEBUG nova.virt.libvirt.vif [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:07:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-8628670',display_name='tempest-ServerActionsTestOtherB-server-8628670',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-8628670',id=122,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPpuWItOSZUstL5LlOZAhtyKqrmFs0bJ/+DBMLk1rKDBu2SnttdOypH9Db6AMV4nGhLXOyr97hIMUaALurv7OcM9NkoB1CxFMDb3d0IWPDnRphumt71Jz0jUP0kiZtXBTQ==',key_name='tempest-keypair-1844396132',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:08:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='9dd869ce76e44fc8a82b8bbee1654d33',ramdisk_id='',reservation_id='r-05boc59s',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-1052932467',owner_user_name='tempest-ServerActionsTestOtherB-1052932467-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:10:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aca3cab576d641d3b89e7dddf155d467',uuid=81a8be01-ddd9-4fd2-91a1-886e7f47bfa3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "8ad4c021-5d44-41aa-adad-f593da5206c1", "address": "fa:16:3e:46:50:89", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1325714374-network", "vif_mac": "fa:16:3e:46:50:89"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ad4c021-5d", "ovs_interfaceid": "8ad4c021-5d44-41aa-adad-f593da5206c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.963 250273 DEBUG nova.network.os_vif_util [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converting VIF {"id": "8ad4c021-5d44-41aa-adad-f593da5206c1", "address": "fa:16:3e:46:50:89", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1325714374-network", "vif_mac": "fa:16:3e:46:50:89"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ad4c021-5d", "ovs_interfaceid": "8ad4c021-5d44-41aa-adad-f593da5206c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.964 250273 DEBUG nova.network.os_vif_util [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:50:89,bridge_name='br-int',has_traffic_filtering=True,id=8ad4c021-5d44-41aa-adad-f593da5206c1,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ad4c021-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.964 250273 DEBUG os_vif [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:50:89,bridge_name='br-int',has_traffic_filtering=True,id=8ad4c021-5d44-41aa-adad-f593da5206c1,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ad4c021-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.967 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.967 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ad4c021-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.969 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.971 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.975 250273 INFO os_vif [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:50:89,bridge_name='br-int',has_traffic_filtering=True,id=8ad4c021-5d44-41aa-adad-f593da5206c1,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ad4c021-5d')#033[00m
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.981 250273 DEBUG nova.virt.libvirt.driver [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:10:07 np0005593232 nova_compute[250269]: 2026-01-23 10:10:07.981 250273 DEBUG nova.virt.libvirt.driver [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:10:08 np0005593232 nova_compute[250269]: 2026-01-23 10:10:08.301 250273 DEBUG neutronclient.v2_0.client [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 8ad4c021-5d44-41aa-adad-f593da5206c1 for host compute-1.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Jan 23 05:10:08 np0005593232 nova_compute[250269]: 2026-01-23 10:10:08.448 250273 DEBUG oslo_concurrency.lockutils [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:10:08 np0005593232 nova_compute[250269]: 2026-01-23 10:10:08.448 250273 DEBUG oslo_concurrency.lockutils [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:10:08 np0005593232 nova_compute[250269]: 2026-01-23 10:10:08.449 250273 DEBUG oslo_concurrency.lockutils [None req-892060a3-9c28-4ce5-8cf0-479dd3439d7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:10:08 np0005593232 nova_compute[250269]: 2026-01-23 10:10:08.535 250273 DEBUG oslo_concurrency.lockutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Acquiring lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:10:08 np0005593232 nova_compute[250269]: 2026-01-23 10:10:08.536 250273 DEBUG oslo_concurrency.lockutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:10:08 np0005593232 nova_compute[250269]: 2026-01-23 10:10:08.553 250273 DEBUG nova.compute.manager [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:10:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:08.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:08 np0005593232 nova_compute[250269]: 2026-01-23 10:10:08.625 250273 DEBUG oslo_concurrency.lockutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:10:08 np0005593232 nova_compute[250269]: 2026-01-23 10:10:08.625 250273 DEBUG oslo_concurrency.lockutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:10:08 np0005593232 nova_compute[250269]: 2026-01-23 10:10:08.633 250273 DEBUG nova.virt.hardware [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:10:08 np0005593232 nova_compute[250269]: 2026-01-23 10:10:08.633 250273 INFO nova.compute.claims [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:10:08 np0005593232 nova_compute[250269]: 2026-01-23 10:10:08.872 250273 DEBUG oslo_concurrency.processutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:10:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2391: 321 pgs: 321 active+clean; 279 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 348 KiB/s rd, 2.2 MiB/s wr, 96 op/s
Jan 23 05:10:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:10:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3018780498' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:10:09 np0005593232 nova_compute[250269]: 2026-01-23 10:10:09.328 250273 DEBUG oslo_concurrency.processutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:10:09 np0005593232 nova_compute[250269]: 2026-01-23 10:10:09.335 250273 DEBUG nova.compute.provider_tree [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:10:09 np0005593232 nova_compute[250269]: 2026-01-23 10:10:09.365 250273 DEBUG nova.scheduler.client.report [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:10:09 np0005593232 nova_compute[250269]: 2026-01-23 10:10:09.396 250273 DEBUG oslo_concurrency.lockutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:10:09 np0005593232 nova_compute[250269]: 2026-01-23 10:10:09.398 250273 DEBUG nova.compute.manager [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:10:09 np0005593232 nova_compute[250269]: 2026-01-23 10:10:09.472 250273 DEBUG nova.compute.manager [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:10:09 np0005593232 nova_compute[250269]: 2026-01-23 10:10:09.473 250273 DEBUG nova.network.neutron [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:10:09 np0005593232 nova_compute[250269]: 2026-01-23 10:10:09.499 250273 INFO nova.virt.libvirt.driver [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:10:09 np0005593232 nova_compute[250269]: 2026-01-23 10:10:09.524 250273 DEBUG nova.compute.manager [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:10:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:09.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:09 np0005593232 nova_compute[250269]: 2026-01-23 10:10:09.881 250273 DEBUG nova.compute.manager [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 05:10:09 np0005593232 nova_compute[250269]: 2026-01-23 10:10:09.883 250273 DEBUG nova.virt.libvirt.driver [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:10:09 np0005593232 nova_compute[250269]: 2026-01-23 10:10:09.884 250273 INFO nova.virt.libvirt.driver [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Creating image(s)#033[00m
Jan 23 05:10:09 np0005593232 nova_compute[250269]: 2026-01-23 10:10:09.917 250273 DEBUG nova.storage.rbd_utils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] rbd image 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:10:09 np0005593232 nova_compute[250269]: 2026-01-23 10:10:09.948 250273 DEBUG nova.storage.rbd_utils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] rbd image 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:10:09 np0005593232 nova_compute[250269]: 2026-01-23 10:10:09.977 250273 DEBUG nova.storage.rbd_utils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] rbd image 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:10:09 np0005593232 nova_compute[250269]: 2026-01-23 10:10:09.982 250273 DEBUG oslo_concurrency.processutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:10:10 np0005593232 nova_compute[250269]: 2026-01-23 10:10:10.061 250273 DEBUG oslo_concurrency.processutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:10:10 np0005593232 nova_compute[250269]: 2026-01-23 10:10:10.063 250273 DEBUG oslo_concurrency.lockutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:10:10 np0005593232 nova_compute[250269]: 2026-01-23 10:10:10.064 250273 DEBUG oslo_concurrency.lockutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:10:10 np0005593232 nova_compute[250269]: 2026-01-23 10:10:10.064 250273 DEBUG oslo_concurrency.lockutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:10:10 np0005593232 nova_compute[250269]: 2026-01-23 10:10:10.094 250273 DEBUG nova.storage.rbd_utils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] rbd image 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:10:10 np0005593232 nova_compute[250269]: 2026-01-23 10:10:10.100 250273 DEBUG oslo_concurrency.processutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:10:10 np0005593232 nova_compute[250269]: 2026-01-23 10:10:10.139 250273 DEBUG nova.policy [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb500aabc93044e380f4bc905205803d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f00cc6e26e5c435b902306c6421e146d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:10:10 np0005593232 nova_compute[250269]: 2026-01-23 10:10:10.442 250273 DEBUG oslo_concurrency.processutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.342s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:10:10 np0005593232 nova_compute[250269]: 2026-01-23 10:10:10.540 250273 DEBUG nova.storage.rbd_utils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] resizing rbd image 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 05:10:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:10.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:10 np0005593232 nova_compute[250269]: 2026-01-23 10:10:10.662 250273 DEBUG nova.objects.instance [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lazy-loading 'migration_context' on Instance uuid 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:10:10 np0005593232 nova_compute[250269]: 2026-01-23 10:10:10.689 250273 DEBUG nova.virt.libvirt.driver [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 05:10:10 np0005593232 nova_compute[250269]: 2026-01-23 10:10:10.690 250273 DEBUG nova.virt.libvirt.driver [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Ensure instance console log exists: /var/lib/nova/instances/280ec2fb-6ca3-4b43-bff4-ba64ac3a935b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:10:10 np0005593232 nova_compute[250269]: 2026-01-23 10:10:10.691 250273 DEBUG oslo_concurrency.lockutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:10:10 np0005593232 nova_compute[250269]: 2026-01-23 10:10:10.693 250273 DEBUG oslo_concurrency.lockutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:10:10 np0005593232 nova_compute[250269]: 2026-01-23 10:10:10.693 250273 DEBUG oslo_concurrency.lockutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:10:11 np0005593232 nova_compute[250269]: 2026-01-23 10:10:11.190 250273 DEBUG nova.compute.manager [req-8db6933e-79be-4b10-bffb-58c161d1fd2b req-d241e5a8-bd60-45f6-a71a-632722eb6189 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Received event network-changed-8ad4c021-5d44-41aa-adad-f593da5206c1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:10:11 np0005593232 nova_compute[250269]: 2026-01-23 10:10:11.190 250273 DEBUG nova.compute.manager [req-8db6933e-79be-4b10-bffb-58c161d1fd2b req-d241e5a8-bd60-45f6-a71a-632722eb6189 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Refreshing instance network info cache due to event network-changed-8ad4c021-5d44-41aa-adad-f593da5206c1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:10:11 np0005593232 nova_compute[250269]: 2026-01-23 10:10:11.191 250273 DEBUG oslo_concurrency.lockutils [req-8db6933e-79be-4b10-bffb-58c161d1fd2b req-d241e5a8-bd60-45f6-a71a-632722eb6189 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:10:11 np0005593232 nova_compute[250269]: 2026-01-23 10:10:11.192 250273 DEBUG oslo_concurrency.lockutils [req-8db6933e-79be-4b10-bffb-58c161d1fd2b req-d241e5a8-bd60-45f6-a71a-632722eb6189 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:10:11 np0005593232 nova_compute[250269]: 2026-01-23 10:10:11.192 250273 DEBUG nova.network.neutron [req-8db6933e-79be-4b10-bffb-58c161d1fd2b req-d241e5a8-bd60-45f6-a71a-632722eb6189 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Refreshing network info cache for port 8ad4c021-5d44-41aa-adad-f593da5206c1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:10:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2392: 321 pgs: 321 active+clean; 279 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 317 KiB/s rd, 951 KiB/s wr, 76 op/s
Jan 23 05:10:11 np0005593232 nova_compute[250269]: 2026-01-23 10:10:11.661 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:10:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:11.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:10:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:10:11 np0005593232 nova_compute[250269]: 2026-01-23 10:10:11.764 250273 DEBUG nova.network.neutron [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Successfully created port: acd0a614-69ea-41fa-9830-ca2c81f259b9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:10:12 np0005593232 nova_compute[250269]: 2026-01-23 10:10:12.052 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769162997.0517042, 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:10:12 np0005593232 nova_compute[250269]: 2026-01-23 10:10:12.053 250273 INFO nova.compute.manager [-] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:10:12 np0005593232 nova_compute[250269]: 2026-01-23 10:10:12.076 250273 DEBUG nova.compute.manager [None req-6f832f80-6649-4777-8a22-2e4ded273554 - - - - - -] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:10:12 np0005593232 nova_compute[250269]: 2026-01-23 10:10:12.081 250273 DEBUG nova.compute.manager [None req-6f832f80-6649-4777-8a22-2e4ded273554 - - - - - -] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: stopped, current task_state: resize_migrated, current DB power_state: 4, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:10:12 np0005593232 nova_compute[250269]: 2026-01-23 10:10:12.108 250273 INFO nova.compute.manager [None req-6f832f80-6649-4777-8a22-2e4ded273554 - - - - - -] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] During the sync_power process the instance has moved from host compute-1.ctlplane.example.com to host compute-0.ctlplane.example.com#033[00m
Jan 23 05:10:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:10:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:12.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:10:12 np0005593232 nova_compute[250269]: 2026-01-23 10:10:12.828 250273 DEBUG nova.network.neutron [req-8db6933e-79be-4b10-bffb-58c161d1fd2b req-d241e5a8-bd60-45f6-a71a-632722eb6189 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Updated VIF entry in instance network info cache for port 8ad4c021-5d44-41aa-adad-f593da5206c1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:10:12 np0005593232 nova_compute[250269]: 2026-01-23 10:10:12.829 250273 DEBUG nova.network.neutron [req-8db6933e-79be-4b10-bffb-58c161d1fd2b req-d241e5a8-bd60-45f6-a71a-632722eb6189 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Updating instance_info_cache with network_info: [{"id": "8ad4c021-5d44-41aa-adad-f593da5206c1", "address": "fa:16:3e:46:50:89", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ad4c021-5d", "ovs_interfaceid": "8ad4c021-5d44-41aa-adad-f593da5206c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:10:12 np0005593232 nova_compute[250269]: 2026-01-23 10:10:12.848 250273 DEBUG oslo_concurrency.lockutils [req-8db6933e-79be-4b10-bffb-58c161d1fd2b req-d241e5a8-bd60-45f6-a71a-632722eb6189 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:10:12 np0005593232 nova_compute[250269]: 2026-01-23 10:10:12.971 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2393: 321 pgs: 321 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 334 KiB/s rd, 2.7 MiB/s wr, 104 op/s
Jan 23 05:10:13 np0005593232 nova_compute[250269]: 2026-01-23 10:10:13.473 250273 DEBUG nova.network.neutron [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Successfully updated port: acd0a614-69ea-41fa-9830-ca2c81f259b9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:10:13 np0005593232 nova_compute[250269]: 2026-01-23 10:10:13.520 250273 DEBUG oslo_concurrency.lockutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Acquiring lock "refresh_cache-280ec2fb-6ca3-4b43-bff4-ba64ac3a935b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:10:13 np0005593232 nova_compute[250269]: 2026-01-23 10:10:13.521 250273 DEBUG oslo_concurrency.lockutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Acquired lock "refresh_cache-280ec2fb-6ca3-4b43-bff4-ba64ac3a935b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:10:13 np0005593232 nova_compute[250269]: 2026-01-23 10:10:13.521 250273 DEBUG nova.network.neutron [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:10:13 np0005593232 nova_compute[250269]: 2026-01-23 10:10:13.643 250273 DEBUG nova.compute.manager [req-e51379c4-70e8-4d3a-a6a5-6af9f1c6b6b7 req-f7cc664e-ef9c-4289-b92b-91b22106fbfc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Received event network-changed-acd0a614-69ea-41fa-9830-ca2c81f259b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:10:13 np0005593232 nova_compute[250269]: 2026-01-23 10:10:13.644 250273 DEBUG nova.compute.manager [req-e51379c4-70e8-4d3a-a6a5-6af9f1c6b6b7 req-f7cc664e-ef9c-4289-b92b-91b22106fbfc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Refreshing instance network info cache due to event network-changed-acd0a614-69ea-41fa-9830-ca2c81f259b9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:10:13 np0005593232 nova_compute[250269]: 2026-01-23 10:10:13.644 250273 DEBUG oslo_concurrency.lockutils [req-e51379c4-70e8-4d3a-a6a5-6af9f1c6b6b7 req-f7cc664e-ef9c-4289-b92b-91b22106fbfc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-280ec2fb-6ca3-4b43-bff4-ba64ac3a935b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:10:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:13.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Jan 23 05:10:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Jan 23 05:10:13 np0005593232 nova_compute[250269]: 2026-01-23 10:10:13.765 250273 DEBUG nova.network.neutron [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:10:13 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Jan 23 05:10:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:10:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:14.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:10:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2395: 321 pgs: 321 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 43 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 23 05:10:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:15.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:15 np0005593232 nova_compute[250269]: 2026-01-23 10:10:15.858 250273 DEBUG nova.network.neutron [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Updating instance_info_cache with network_info: [{"id": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "address": "fa:16:3e:50:c1:86", "network": {"id": "88b571fd-69ad-4860-a596-3bd637fdb189", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1616424882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f00cc6e26e5c435b902306c6421e146d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd0a614-69", "ovs_interfaceid": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.152 250273 DEBUG oslo_concurrency.lockutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Releasing lock "refresh_cache-280ec2fb-6ca3-4b43-bff4-ba64ac3a935b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.153 250273 DEBUG nova.compute.manager [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Instance network_info: |[{"id": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "address": "fa:16:3e:50:c1:86", "network": {"id": "88b571fd-69ad-4860-a596-3bd637fdb189", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1616424882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f00cc6e26e5c435b902306c6421e146d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd0a614-69", "ovs_interfaceid": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.154 250273 DEBUG oslo_concurrency.lockutils [req-e51379c4-70e8-4d3a-a6a5-6af9f1c6b6b7 req-f7cc664e-ef9c-4289-b92b-91b22106fbfc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-280ec2fb-6ca3-4b43-bff4-ba64ac3a935b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.154 250273 DEBUG nova.network.neutron [req-e51379c4-70e8-4d3a-a6a5-6af9f1c6b6b7 req-f7cc664e-ef9c-4289-b92b-91b22106fbfc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Refreshing network info cache for port acd0a614-69ea-41fa-9830-ca2c81f259b9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.159 250273 DEBUG nova.virt.libvirt.driver [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Start _get_guest_xml network_info=[{"id": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "address": "fa:16:3e:50:c1:86", "network": {"id": "88b571fd-69ad-4860-a596-3bd637fdb189", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1616424882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f00cc6e26e5c435b902306c6421e146d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd0a614-69", "ovs_interfaceid": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.167 250273 WARNING nova.virt.libvirt.driver [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.173 250273 DEBUG nova.virt.libvirt.host [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.174 250273 DEBUG nova.virt.libvirt.host [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.178 250273 DEBUG nova.virt.libvirt.host [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.179 250273 DEBUG nova.virt.libvirt.host [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.181 250273 DEBUG nova.virt.libvirt.driver [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.182 250273 DEBUG nova.virt.hardware [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.182 250273 DEBUG nova.virt.hardware [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.183 250273 DEBUG nova.virt.hardware [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.183 250273 DEBUG nova.virt.hardware [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.184 250273 DEBUG nova.virt.hardware [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.184 250273 DEBUG nova.virt.hardware [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.185 250273 DEBUG nova.virt.hardware [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.185 250273 DEBUG nova.virt.hardware [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.186 250273 DEBUG nova.virt.hardware [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.186 250273 DEBUG nova.virt.hardware [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.186 250273 DEBUG nova.virt.hardware [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.190 250273 DEBUG oslo_concurrency.processutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:10:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:10:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:16.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:10:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:10:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3291634168' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.663 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.672 250273 DEBUG oslo_concurrency.processutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.702 250273 DEBUG nova.storage.rbd_utils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] rbd image 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:10:16 np0005593232 nova_compute[250269]: 2026-01-23 10:10:16.706 250273 DEBUG oslo_concurrency.processutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:10:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.030 250273 DEBUG oslo_concurrency.lockutils [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.031 250273 DEBUG oslo_concurrency.lockutils [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.031 250273 DEBUG nova.compute.manager [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Going to confirm migration 17 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Jan 23 05:10:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:10:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3600427585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.149 250273 DEBUG oslo_concurrency.processutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.150 250273 DEBUG nova.virt.libvirt.vif [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:10:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1237513179',display_name='tempest-ServerRescueTestJSON-server-1237513179',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1237513179',id=128,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f00cc6e26e5c435b902306c6421e146d',ramdisk_id='',reservation_id='r-1la91wzm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-837476510',owner_user_name='tempest-ServerRescueTestJSON-837476510-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:10:09Z,user_data=None,user_id='eb500aabc93044e380f4bc905205803d',uuid=280ec2fb-6ca3-4b43-bff4-ba64ac3a935b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "address": "fa:16:3e:50:c1:86", "network": {"id": "88b571fd-69ad-4860-a596-3bd637fdb189", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1616424882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f00cc6e26e5c435b902306c6421e146d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd0a614-69", "ovs_interfaceid": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.151 250273 DEBUG nova.network.os_vif_util [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Converting VIF {"id": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "address": "fa:16:3e:50:c1:86", "network": {"id": "88b571fd-69ad-4860-a596-3bd637fdb189", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1616424882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f00cc6e26e5c435b902306c6421e146d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd0a614-69", "ovs_interfaceid": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.152 250273 DEBUG nova.network.os_vif_util [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:c1:86,bridge_name='br-int',has_traffic_filtering=True,id=acd0a614-69ea-41fa-9830-ca2c81f259b9,network=Network(88b571fd-69ad-4860-a596-3bd637fdb189),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacd0a614-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.153 250273 DEBUG nova.objects.instance [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lazy-loading 'pci_devices' on Instance uuid 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:10:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2396: 321 pgs: 321 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.246 250273 DEBUG nova.virt.libvirt.driver [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:10:17 np0005593232 nova_compute[250269]:  <uuid>280ec2fb-6ca3-4b43-bff4-ba64ac3a935b</uuid>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:  <name>instance-00000080</name>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerRescueTestJSON-server-1237513179</nova:name>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:10:16</nova:creationTime>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:10:17 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:        <nova:user uuid="eb500aabc93044e380f4bc905205803d">tempest-ServerRescueTestJSON-837476510-project-member</nova:user>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:        <nova:project uuid="f00cc6e26e5c435b902306c6421e146d">tempest-ServerRescueTestJSON-837476510</nova:project>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:        <nova:port uuid="acd0a614-69ea-41fa-9830-ca2c81f259b9">
Jan 23 05:10:17 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.2" ipVersion="4"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <entry name="serial">280ec2fb-6ca3-4b43-bff4-ba64ac3a935b</entry>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <entry name="uuid">280ec2fb-6ca3-4b43-bff4-ba64ac3a935b</entry>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk">
Jan 23 05:10:17 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:10:17 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk.config">
Jan 23 05:10:17 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:10:17 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:50:c1:86"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <target dev="tapacd0a614-69"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/280ec2fb-6ca3-4b43-bff4-ba64ac3a935b/console.log" append="off"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:10:17 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:10:17 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:10:17 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:10:17 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.247 250273 DEBUG nova.compute.manager [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Preparing to wait for external event network-vif-plugged-acd0a614-69ea-41fa-9830-ca2c81f259b9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.247 250273 DEBUG oslo_concurrency.lockutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Acquiring lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.248 250273 DEBUG oslo_concurrency.lockutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.248 250273 DEBUG oslo_concurrency.lockutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.248 250273 DEBUG nova.virt.libvirt.vif [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:10:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1237513179',display_name='tempest-ServerRescueTestJSON-server-1237513179',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1237513179',id=128,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f00cc6e26e5c435b902306c6421e146d',ramdisk_id='',reservation_id='r-1la91wzm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-837476510',owner_user_name='tempest-ServerRescueTestJSON-837476510-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:10:09Z,user_data=None,user_id='eb500aabc93044e380f4bc905205803d',uuid=280ec2fb-6ca3-4b43-bff4-ba64ac3a935b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "address": "fa:16:3e:50:c1:86", "network": {"id": "88b571fd-69ad-4860-a596-3bd637fdb189", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1616424882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f00cc6e26e5c435b902306c6421e146d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd0a614-69", "ovs_interfaceid": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.249 250273 DEBUG nova.network.os_vif_util [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Converting VIF {"id": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "address": "fa:16:3e:50:c1:86", "network": {"id": "88b571fd-69ad-4860-a596-3bd637fdb189", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1616424882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f00cc6e26e5c435b902306c6421e146d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd0a614-69", "ovs_interfaceid": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.249 250273 DEBUG nova.network.os_vif_util [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:c1:86,bridge_name='br-int',has_traffic_filtering=True,id=acd0a614-69ea-41fa-9830-ca2c81f259b9,network=Network(88b571fd-69ad-4860-a596-3bd637fdb189),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacd0a614-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.250 250273 DEBUG os_vif [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:c1:86,bridge_name='br-int',has_traffic_filtering=True,id=acd0a614-69ea-41fa-9830-ca2c81f259b9,network=Network(88b571fd-69ad-4860-a596-3bd637fdb189),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacd0a614-69') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.250 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.251 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.251 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.254 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.254 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapacd0a614-69, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.255 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapacd0a614-69, col_values=(('external_ids', {'iface-id': 'acd0a614-69ea-41fa-9830-ca2c81f259b9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:50:c1:86', 'vm-uuid': '280ec2fb-6ca3-4b43-bff4-ba64ac3a935b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.256 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:17 np0005593232 NetworkManager[49057]: <info>  [1769163017.2576] manager: (tapacd0a614-69): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/210)
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.259 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.263 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.264 250273 INFO os_vif [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:c1:86,bridge_name='br-int',has_traffic_filtering=True,id=acd0a614-69ea-41fa-9830-ca2c81f259b9,network=Network(88b571fd-69ad-4860-a596-3bd637fdb189),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacd0a614-69')#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.487 250273 DEBUG nova.virt.libvirt.driver [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.488 250273 DEBUG nova.virt.libvirt.driver [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.488 250273 DEBUG nova.virt.libvirt.driver [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] No VIF found with MAC fa:16:3e:50:c1:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.489 250273 INFO nova.virt.libvirt.driver [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Using config drive#033[00m
Jan 23 05:10:17 np0005593232 nova_compute[250269]: 2026-01-23 10:10:17.525 250273 DEBUG nova.storage.rbd_utils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] rbd image 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:10:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:10:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:17.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:10:18 np0005593232 nova_compute[250269]: 2026-01-23 10:10:18.355 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769163003.3540733, d843365b-283f-47da-ba45-e68489a5fbdd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:10:18 np0005593232 nova_compute[250269]: 2026-01-23 10:10:18.356 250273 INFO nova.compute.manager [-] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:10:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:10:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:18.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:10:18 np0005593232 nova_compute[250269]: 2026-01-23 10:10:18.736 250273 INFO nova.virt.libvirt.driver [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Creating config drive at /var/lib/nova/instances/280ec2fb-6ca3-4b43-bff4-ba64ac3a935b/disk.config#033[00m
Jan 23 05:10:18 np0005593232 nova_compute[250269]: 2026-01-23 10:10:18.742 250273 DEBUG oslo_concurrency.processutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/280ec2fb-6ca3-4b43-bff4-ba64ac3a935b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphcjeb_x6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:10:18 np0005593232 nova_compute[250269]: 2026-01-23 10:10:18.811 250273 DEBUG neutronclient.v2_0.client [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Error message: {"NeutronError": {"type": "PortBindingNotFound", "message": "Binding for port 8ad4c021-5d44-41aa-adad-f593da5206c1 for host compute-0.ctlplane.example.com could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Jan 23 05:10:18 np0005593232 nova_compute[250269]: 2026-01-23 10:10:18.813 250273 DEBUG oslo_concurrency.lockutils [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "refresh_cache-81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:10:18 np0005593232 nova_compute[250269]: 2026-01-23 10:10:18.813 250273 DEBUG oslo_concurrency.lockutils [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquired lock "refresh_cache-81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:10:18 np0005593232 nova_compute[250269]: 2026-01-23 10:10:18.813 250273 DEBUG nova.network.neutron [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:10:18 np0005593232 nova_compute[250269]: 2026-01-23 10:10:18.814 250273 DEBUG nova.objects.instance [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'info_cache' on Instance uuid 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:10:18 np0005593232 nova_compute[250269]: 2026-01-23 10:10:18.885 250273 DEBUG oslo_concurrency.processutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/280ec2fb-6ca3-4b43-bff4-ba64ac3a935b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphcjeb_x6" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:10:18 np0005593232 nova_compute[250269]: 2026-01-23 10:10:18.918 250273 DEBUG nova.storage.rbd_utils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] rbd image 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:10:18 np0005593232 nova_compute[250269]: 2026-01-23 10:10:18.922 250273 DEBUG oslo_concurrency.processutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/280ec2fb-6ca3-4b43-bff4-ba64ac3a935b/disk.config 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:10:18 np0005593232 nova_compute[250269]: 2026-01-23 10:10:18.992 250273 DEBUG nova.compute.manager [None req-8fdb519f-3fca-4115-87ab-fc7ac1fffde8 - - - - - -] [instance: d843365b-283f-47da-ba45-e68489a5fbdd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:10:19 np0005593232 nova_compute[250269]: 2026-01-23 10:10:19.100 250273 DEBUG oslo_concurrency.processutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/280ec2fb-6ca3-4b43-bff4-ba64ac3a935b/disk.config 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:10:19 np0005593232 nova_compute[250269]: 2026-01-23 10:10:19.102 250273 INFO nova.virt.libvirt.driver [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Deleting local config drive /var/lib/nova/instances/280ec2fb-6ca3-4b43-bff4-ba64ac3a935b/disk.config because it was imported into RBD.#033[00m
Jan 23 05:10:19 np0005593232 kernel: tapacd0a614-69: entered promiscuous mode
Jan 23 05:10:19 np0005593232 NetworkManager[49057]: <info>  [1769163019.1760] manager: (tapacd0a614-69): new Tun device (/org/freedesktop/NetworkManager/Devices/211)
Jan 23 05:10:19 np0005593232 ovn_controller[151001]: 2026-01-23T10:10:19Z|00430|binding|INFO|Claiming lport acd0a614-69ea-41fa-9830-ca2c81f259b9 for this chassis.
Jan 23 05:10:19 np0005593232 ovn_controller[151001]: 2026-01-23T10:10:19Z|00431|binding|INFO|acd0a614-69ea-41fa-9830-ca2c81f259b9: Claiming fa:16:3e:50:c1:86 10.100.0.2
Jan 23 05:10:19 np0005593232 nova_compute[250269]: 2026-01-23 10:10:19.179 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:19 np0005593232 ovn_controller[151001]: 2026-01-23T10:10:19Z|00432|binding|INFO|Setting lport acd0a614-69ea-41fa-9830-ca2c81f259b9 ovn-installed in OVS
Jan 23 05:10:19 np0005593232 nova_compute[250269]: 2026-01-23 10:10:19.203 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:19 np0005593232 nova_compute[250269]: 2026-01-23 10:10:19.207 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:19 np0005593232 systemd-udevd[332399]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:10:19 np0005593232 systemd-machined[215836]: New machine qemu-53-instance-00000080.
Jan 23 05:10:19 np0005593232 NetworkManager[49057]: <info>  [1769163019.2304] device (tapacd0a614-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:10:19 np0005593232 NetworkManager[49057]: <info>  [1769163019.2314] device (tapacd0a614-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:10:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2397: 321 pgs: 321 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Jan 23 05:10:19 np0005593232 systemd[1]: Started Virtual Machine qemu-53-instance-00000080.
Jan 23 05:10:19 np0005593232 ovn_controller[151001]: 2026-01-23T10:10:19Z|00433|binding|INFO|Setting lport acd0a614-69ea-41fa-9830-ca2c81f259b9 up in Southbound
Jan 23 05:10:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:19.238 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:c1:86 10.100.0.2'], port_security=['fa:16:3e:50:c1:86 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '280ec2fb-6ca3-4b43-bff4-ba64ac3a935b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88b571fd-69ad-4860-a596-3bd637fdb189', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f00cc6e26e5c435b902306c6421e146d', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a7b9167c-c78b-48f5-9e9d-ac8ada29e0a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d050303a-8173-4865-aab2-724e0c0624de, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=acd0a614-69ea-41fa-9830-ca2c81f259b9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:10:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:19.240 161902 INFO neutron.agent.ovn.metadata.agent [-] Port acd0a614-69ea-41fa-9830-ca2c81f259b9 in datapath 88b571fd-69ad-4860-a596-3bd637fdb189 bound to our chassis#033[00m
Jan 23 05:10:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:19.241 161902 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 88b571fd-69ad-4860-a596-3bd637fdb189 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 23 05:10:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:19.242 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa25a01-f79c-4b24-a16c-a4dd79d54772]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:10:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:19.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:19 np0005593232 nova_compute[250269]: 2026-01-23 10:10:19.851 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163019.8509855, 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:10:19 np0005593232 nova_compute[250269]: 2026-01-23 10:10:19.854 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] VM Started (Lifecycle Event)#033[00m
Jan 23 05:10:19 np0005593232 nova_compute[250269]: 2026-01-23 10:10:19.881 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:10:19 np0005593232 nova_compute[250269]: 2026-01-23 10:10:19.885 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163019.8513746, 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:10:19 np0005593232 nova_compute[250269]: 2026-01-23 10:10:19.885 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:10:19 np0005593232 nova_compute[250269]: 2026-01-23 10:10:19.909 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:10:19 np0005593232 nova_compute[250269]: 2026-01-23 10:10:19.912 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:10:19 np0005593232 nova_compute[250269]: 2026-01-23 10:10:19.939 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:10:19 np0005593232 nova_compute[250269]: 2026-01-23 10:10:19.962 250273 DEBUG nova.compute.manager [req-fdca2f43-5f18-44c1-be37-2dfd5cda44be req-9ced1d94-9eb7-4b69-b3a1-305533e42c4e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Received event network-vif-plugged-acd0a614-69ea-41fa-9830-ca2c81f259b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:10:19 np0005593232 nova_compute[250269]: 2026-01-23 10:10:19.963 250273 DEBUG oslo_concurrency.lockutils [req-fdca2f43-5f18-44c1-be37-2dfd5cda44be req-9ced1d94-9eb7-4b69-b3a1-305533e42c4e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:10:19 np0005593232 nova_compute[250269]: 2026-01-23 10:10:19.964 250273 DEBUG oslo_concurrency.lockutils [req-fdca2f43-5f18-44c1-be37-2dfd5cda44be req-9ced1d94-9eb7-4b69-b3a1-305533e42c4e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:10:19 np0005593232 nova_compute[250269]: 2026-01-23 10:10:19.964 250273 DEBUG oslo_concurrency.lockutils [req-fdca2f43-5f18-44c1-be37-2dfd5cda44be req-9ced1d94-9eb7-4b69-b3a1-305533e42c4e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:10:19 np0005593232 nova_compute[250269]: 2026-01-23 10:10:19.965 250273 DEBUG nova.compute.manager [req-fdca2f43-5f18-44c1-be37-2dfd5cda44be req-9ced1d94-9eb7-4b69-b3a1-305533e42c4e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Processing event network-vif-plugged-acd0a614-69ea-41fa-9830-ca2c81f259b9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:10:19 np0005593232 nova_compute[250269]: 2026-01-23 10:10:19.966 250273 DEBUG nova.compute.manager [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:10:19 np0005593232 nova_compute[250269]: 2026-01-23 10:10:19.970 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163019.969962, 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:10:19 np0005593232 nova_compute[250269]: 2026-01-23 10:10:19.970 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:10:19 np0005593232 nova_compute[250269]: 2026-01-23 10:10:19.972 250273 DEBUG nova.virt.libvirt.driver [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:10:19 np0005593232 nova_compute[250269]: 2026-01-23 10:10:19.975 250273 INFO nova.virt.libvirt.driver [-] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Instance spawned successfully.#033[00m
Jan 23 05:10:19 np0005593232 nova_compute[250269]: 2026-01-23 10:10:19.975 250273 DEBUG nova.virt.libvirt.driver [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:10:20 np0005593232 nova_compute[250269]: 2026-01-23 10:10:20.007 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:10:20 np0005593232 nova_compute[250269]: 2026-01-23 10:10:20.013 250273 DEBUG nova.virt.libvirt.driver [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:10:20 np0005593232 nova_compute[250269]: 2026-01-23 10:10:20.013 250273 DEBUG nova.virt.libvirt.driver [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:10:20 np0005593232 nova_compute[250269]: 2026-01-23 10:10:20.014 250273 DEBUG nova.virt.libvirt.driver [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:10:20 np0005593232 nova_compute[250269]: 2026-01-23 10:10:20.014 250273 DEBUG nova.virt.libvirt.driver [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:10:20 np0005593232 nova_compute[250269]: 2026-01-23 10:10:20.015 250273 DEBUG nova.virt.libvirt.driver [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:10:20 np0005593232 nova_compute[250269]: 2026-01-23 10:10:20.016 250273 DEBUG nova.virt.libvirt.driver [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:10:20 np0005593232 nova_compute[250269]: 2026-01-23 10:10:20.021 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:10:20 np0005593232 nova_compute[250269]: 2026-01-23 10:10:20.054 250273 DEBUG nova.network.neutron [req-e51379c4-70e8-4d3a-a6a5-6af9f1c6b6b7 req-f7cc664e-ef9c-4289-b92b-91b22106fbfc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Updated VIF entry in instance network info cache for port acd0a614-69ea-41fa-9830-ca2c81f259b9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:10:20 np0005593232 nova_compute[250269]: 2026-01-23 10:10:20.054 250273 DEBUG nova.network.neutron [req-e51379c4-70e8-4d3a-a6a5-6af9f1c6b6b7 req-f7cc664e-ef9c-4289-b92b-91b22106fbfc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Updating instance_info_cache with network_info: [{"id": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "address": "fa:16:3e:50:c1:86", "network": {"id": "88b571fd-69ad-4860-a596-3bd637fdb189", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1616424882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f00cc6e26e5c435b902306c6421e146d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd0a614-69", "ovs_interfaceid": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:10:20 np0005593232 nova_compute[250269]: 2026-01-23 10:10:20.095 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:10:20 np0005593232 nova_compute[250269]: 2026-01-23 10:10:20.105 250273 DEBUG oslo_concurrency.lockutils [req-e51379c4-70e8-4d3a-a6a5-6af9f1c6b6b7 req-f7cc664e-ef9c-4289-b92b-91b22106fbfc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-280ec2fb-6ca3-4b43-bff4-ba64ac3a935b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:10:20 np0005593232 nova_compute[250269]: 2026-01-23 10:10:20.181 250273 INFO nova.compute.manager [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Took 10.30 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:10:20 np0005593232 nova_compute[250269]: 2026-01-23 10:10:20.181 250273 DEBUG nova.compute.manager [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:10:20 np0005593232 nova_compute[250269]: 2026-01-23 10:10:20.241 250273 INFO nova.compute.manager [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Took 11.64 seconds to build instance.#033[00m
Jan 23 05:10:20 np0005593232 nova_compute[250269]: 2026-01-23 10:10:20.273 250273 DEBUG oslo_concurrency.lockutils [None req-055d62ac-bf46-4a35-b556-7e913df68723 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:10:20 np0005593232 podman[332452]: 2026-01-23 10:10:20.447011741 +0000 UTC m=+0.095343640 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:10:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:20.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2398: 321 pgs: 321 active+clean; 326 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Jan 23 05:10:21 np0005593232 nova_compute[250269]: 2026-01-23 10:10:21.667 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:10:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:21.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:22 np0005593232 nova_compute[250269]: 2026-01-23 10:10:22.095 250273 DEBUG nova.compute.manager [req-b1d470ba-3850-42a9-a349-41a998cf118c req-95440dca-677c-4870-a742-a41ee805e3a7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Received event network-vif-plugged-acd0a614-69ea-41fa-9830-ca2c81f259b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:10:22 np0005593232 nova_compute[250269]: 2026-01-23 10:10:22.096 250273 DEBUG oslo_concurrency.lockutils [req-b1d470ba-3850-42a9-a349-41a998cf118c req-95440dca-677c-4870-a742-a41ee805e3a7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:10:22 np0005593232 nova_compute[250269]: 2026-01-23 10:10:22.096 250273 DEBUG oslo_concurrency.lockutils [req-b1d470ba-3850-42a9-a349-41a998cf118c req-95440dca-677c-4870-a742-a41ee805e3a7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:10:22 np0005593232 nova_compute[250269]: 2026-01-23 10:10:22.097 250273 DEBUG oslo_concurrency.lockutils [req-b1d470ba-3850-42a9-a349-41a998cf118c req-95440dca-677c-4870-a742-a41ee805e3a7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:10:22 np0005593232 nova_compute[250269]: 2026-01-23 10:10:22.097 250273 DEBUG nova.compute.manager [req-b1d470ba-3850-42a9-a349-41a998cf118c req-95440dca-677c-4870-a742-a41ee805e3a7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] No waiting events found dispatching network-vif-plugged-acd0a614-69ea-41fa-9830-ca2c81f259b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:10:22 np0005593232 nova_compute[250269]: 2026-01-23 10:10:22.098 250273 WARNING nova.compute.manager [req-b1d470ba-3850-42a9-a349-41a998cf118c req-95440dca-677c-4870-a742-a41ee805e3a7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Received unexpected event network-vif-plugged-acd0a614-69ea-41fa-9830-ca2c81f259b9 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:10:22 np0005593232 nova_compute[250269]: 2026-01-23 10:10:22.119 250273 DEBUG nova.network.neutron [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Updating instance_info_cache with network_info: [{"id": "8ad4c021-5d44-41aa-adad-f593da5206c1", "address": "fa:16:3e:46:50:89", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ad4c021-5d", "ovs_interfaceid": "8ad4c021-5d44-41aa-adad-f593da5206c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:10:22 np0005593232 nova_compute[250269]: 2026-01-23 10:10:22.146 250273 DEBUG oslo_concurrency.lockutils [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Releasing lock "refresh_cache-81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:10:22 np0005593232 nova_compute[250269]: 2026-01-23 10:10:22.146 250273 DEBUG nova.objects.instance [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'migration_context' on Instance uuid 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:10:22 np0005593232 nova_compute[250269]: 2026-01-23 10:10:22.297 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:22 np0005593232 nova_compute[250269]: 2026-01-23 10:10:22.298 250273 DEBUG nova.storage.rbd_utils [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] removing snapshot(nova-resize) on rbd image(81a8be01-ddd9-4fd2-91a1-886e7f47bfa3_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 23 05:10:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:10:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:22.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:10:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2399: 321 pgs: 321 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 151 op/s
Jan 23 05:10:23 np0005593232 nova_compute[250269]: 2026-01-23 10:10:23.257 250273 INFO nova.compute.manager [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Rescuing#033[00m
Jan 23 05:10:23 np0005593232 nova_compute[250269]: 2026-01-23 10:10:23.258 250273 DEBUG oslo_concurrency.lockutils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Acquiring lock "refresh_cache-280ec2fb-6ca3-4b43-bff4-ba64ac3a935b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:10:23 np0005593232 nova_compute[250269]: 2026-01-23 10:10:23.259 250273 DEBUG oslo_concurrency.lockutils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Acquired lock "refresh_cache-280ec2fb-6ca3-4b43-bff4-ba64ac3a935b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:10:23 np0005593232 nova_compute[250269]: 2026-01-23 10:10:23.259 250273 DEBUG nova.network.neutron [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:10:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Jan 23 05:10:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Jan 23 05:10:23 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Jan 23 05:10:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:10:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:10:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:10:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:10:23 np0005593232 nova_compute[250269]: 2026-01-23 10:10:23.372 250273 DEBUG nova.virt.libvirt.vif [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:07:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-8628670',display_name='tempest-ServerActionsTestOtherB-server-8628670',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-1.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-8628670',id=122,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPpuWItOSZUstL5LlOZAhtyKqrmFs0bJ/+DBMLk1rKDBu2SnttdOypH9Db6AMV4nGhLXOyr97hIMUaALurv7OcM9NkoB1CxFMDb3d0IWPDnRphumt71Jz0jUP0kiZtXBTQ==',key_name='tempest-keypair-1844396132',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:10:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-1.ctlplane.example.com',numa_topology=<?>,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='9dd869ce76e44fc8a82b8bbee1654d33',ramdisk_id='',reservation_id='r-05boc59s',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='stopped',owner_project_name='tempest-ServerActionsTestOtherB-1052932467',owner_user_name='tempest-ServerActionsTestOtherB-1052932467-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:10:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aca3cab576d641d3b89e7dddf155d467',uuid=81a8be01-ddd9-4fd2-91a1-886e7f47bfa3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "8ad4c021-5d44-41aa-adad-f593da5206c1", "address": "fa:16:3e:46:50:89", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ad4c021-5d", "ovs_interfaceid": "8ad4c021-5d44-41aa-adad-f593da5206c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:10:23 np0005593232 nova_compute[250269]: 2026-01-23 10:10:23.373 250273 DEBUG nova.network.os_vif_util [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converting VIF {"id": "8ad4c021-5d44-41aa-adad-f593da5206c1", "address": "fa:16:3e:46:50:89", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8ad4c021-5d", "ovs_interfaceid": "8ad4c021-5d44-41aa-adad-f593da5206c1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:10:23 np0005593232 nova_compute[250269]: 2026-01-23 10:10:23.373 250273 DEBUG nova.network.os_vif_util [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:50:89,bridge_name='br-int',has_traffic_filtering=True,id=8ad4c021-5d44-41aa-adad-f593da5206c1,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ad4c021-5d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:10:23 np0005593232 nova_compute[250269]: 2026-01-23 10:10:23.373 250273 DEBUG os_vif [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:50:89,bridge_name='br-int',has_traffic_filtering=True,id=8ad4c021-5d44-41aa-adad-f593da5206c1,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ad4c021-5d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:10:23 np0005593232 nova_compute[250269]: 2026-01-23 10:10:23.375 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:23 np0005593232 nova_compute[250269]: 2026-01-23 10:10:23.375 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ad4c021-5d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:10:23 np0005593232 nova_compute[250269]: 2026-01-23 10:10:23.375 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:10:23 np0005593232 nova_compute[250269]: 2026-01-23 10:10:23.377 250273 INFO os_vif [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:50:89,bridge_name='br-int',has_traffic_filtering=True,id=8ad4c021-5d44-41aa-adad-f593da5206c1,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8ad4c021-5d')#033[00m
Jan 23 05:10:23 np0005593232 nova_compute[250269]: 2026-01-23 10:10:23.377 250273 DEBUG oslo_concurrency.lockutils [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:10:23 np0005593232 nova_compute[250269]: 2026-01-23 10:10:23.378 250273 DEBUG oslo_concurrency.lockutils [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:10:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:10:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:10:23 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b09a6371-acbd-4dc9-abff-943830cc4171 does not exist
Jan 23 05:10:23 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d9a98223-ceaf-4f0b-8b3e-1646da03a482 does not exist
Jan 23 05:10:23 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 73a8f668-1d9a-4426-994f-523fac9f2a52 does not exist
Jan 23 05:10:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:10:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:10:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:10:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:10:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:10:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:10:23 np0005593232 nova_compute[250269]: 2026-01-23 10:10:23.532 250273 DEBUG oslo_concurrency.processutils [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:10:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:23.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:10:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2012230524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:10:24 np0005593232 podman[332806]: 2026-01-23 10:10:23.99940016 +0000 UTC m=+0.052484202 container create df68a5b1cf5611345f1fe2237201f12067f97b796e9dc0e2084c8370a4dae4ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williamson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 23 05:10:24 np0005593232 nova_compute[250269]: 2026-01-23 10:10:24.010 250273 DEBUG oslo_concurrency.processutils [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:10:24 np0005593232 nova_compute[250269]: 2026-01-23 10:10:24.023 250273 DEBUG nova.compute.provider_tree [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:10:24 np0005593232 systemd[1]: Started libpod-conmon-df68a5b1cf5611345f1fe2237201f12067f97b796e9dc0e2084c8370a4dae4ba.scope.
Jan 23 05:10:24 np0005593232 podman[332806]: 2026-01-23 10:10:23.967682389 +0000 UTC m=+0.020766461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:10:24 np0005593232 nova_compute[250269]: 2026-01-23 10:10:24.067 250273 DEBUG nova.scheduler.client.report [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:10:24 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:10:24 np0005593232 podman[332806]: 2026-01-23 10:10:24.097210809 +0000 UTC m=+0.150294871 container init df68a5b1cf5611345f1fe2237201f12067f97b796e9dc0e2084c8370a4dae4ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:10:24 np0005593232 podman[332806]: 2026-01-23 10:10:24.106594916 +0000 UTC m=+0.159678968 container start df68a5b1cf5611345f1fe2237201f12067f97b796e9dc0e2084c8370a4dae4ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williamson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 23 05:10:24 np0005593232 podman[332806]: 2026-01-23 10:10:24.110700363 +0000 UTC m=+0.163784425 container attach df68a5b1cf5611345f1fe2237201f12067f97b796e9dc0e2084c8370a4dae4ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williamson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:10:24 np0005593232 peaceful_williamson[332824]: 167 167
Jan 23 05:10:24 np0005593232 systemd[1]: libpod-df68a5b1cf5611345f1fe2237201f12067f97b796e9dc0e2084c8370a4dae4ba.scope: Deactivated successfully.
Jan 23 05:10:24 np0005593232 podman[332806]: 2026-01-23 10:10:24.113824032 +0000 UTC m=+0.166908094 container died df68a5b1cf5611345f1fe2237201f12067f97b796e9dc0e2084c8370a4dae4ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williamson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:10:24 np0005593232 nova_compute[250269]: 2026-01-23 10:10:24.136 250273 DEBUG oslo_concurrency.lockutils [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:10:24 np0005593232 nova_compute[250269]: 2026-01-23 10:10:24.137 250273 DEBUG nova.compute.manager [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 81a8be01-ddd9-4fd2-91a1-886e7f47bfa3] Resized/migrated instance is powered off. Setting vm_state to 'stopped'. _confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4805#033[00m
Jan 23 05:10:24 np0005593232 systemd[1]: var-lib-containers-storage-overlay-9e4f9c24ba37537af9c8935b062bfba5d22e9ec0b3fcf369e4c234f0eddd62f9-merged.mount: Deactivated successfully.
Jan 23 05:10:24 np0005593232 podman[332806]: 2026-01-23 10:10:24.154336693 +0000 UTC m=+0.207420725 container remove df68a5b1cf5611345f1fe2237201f12067f97b796e9dc0e2084c8370a4dae4ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_williamson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 05:10:24 np0005593232 systemd[1]: libpod-conmon-df68a5b1cf5611345f1fe2237201f12067f97b796e9dc0e2084c8370a4dae4ba.scope: Deactivated successfully.
Jan 23 05:10:24 np0005593232 nova_compute[250269]: 2026-01-23 10:10:24.307 250273 INFO nova.scheduler.client.report [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Deleted allocation for migration fef9a971-4d26-4fce-accb-849c814745fe#033[00m
Jan 23 05:10:24 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:10:24 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:10:24 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:10:24 np0005593232 podman[332848]: 2026-01-23 10:10:24.361018856 +0000 UTC m=+0.054538801 container create 9401eb96ccc5532633220d98ebfa72eec55f51acfebf3ccfa52badc2946fefae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mclean, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:10:24 np0005593232 nova_compute[250269]: 2026-01-23 10:10:24.371 250273 DEBUG oslo_concurrency.lockutils [None req-210556c1-a915-4fe2-92b6-a58272a3e035 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "81a8be01-ddd9-4fd2-91a1-886e7f47bfa3" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 7.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:10:24 np0005593232 systemd[1]: Started libpod-conmon-9401eb96ccc5532633220d98ebfa72eec55f51acfebf3ccfa52badc2946fefae.scope.
Jan 23 05:10:24 np0005593232 podman[332848]: 2026-01-23 10:10:24.336456138 +0000 UTC m=+0.029976173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:10:24 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:10:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a649713041602b57ae0ecb9606e0d9de8a384637144f1201f7c52e9f4be5c2b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:10:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a649713041602b57ae0ecb9606e0d9de8a384637144f1201f7c52e9f4be5c2b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:10:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a649713041602b57ae0ecb9606e0d9de8a384637144f1201f7c52e9f4be5c2b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:10:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a649713041602b57ae0ecb9606e0d9de8a384637144f1201f7c52e9f4be5c2b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:10:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a649713041602b57ae0ecb9606e0d9de8a384637144f1201f7c52e9f4be5c2b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:10:24 np0005593232 podman[332848]: 2026-01-23 10:10:24.452627669 +0000 UTC m=+0.146147654 container init 9401eb96ccc5532633220d98ebfa72eec55f51acfebf3ccfa52badc2946fefae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:10:24 np0005593232 podman[332848]: 2026-01-23 10:10:24.534225335 +0000 UTC m=+0.227745280 container start 9401eb96ccc5532633220d98ebfa72eec55f51acfebf3ccfa52badc2946fefae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:10:24 np0005593232 podman[332848]: 2026-01-23 10:10:24.53827342 +0000 UTC m=+0.231793385 container attach 9401eb96ccc5532633220d98ebfa72eec55f51acfebf3ccfa52badc2946fefae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mclean, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:10:24 np0005593232 podman[332868]: 2026-01-23 10:10:24.579258007 +0000 UTC m=+0.126755413 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 23 05:10:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:10:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:24.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:10:25 np0005593232 nova_compute[250269]: 2026-01-23 10:10:25.119 250273 DEBUG nova.network.neutron [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Updating instance_info_cache with network_info: [{"id": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "address": "fa:16:3e:50:c1:86", "network": {"id": "88b571fd-69ad-4860-a596-3bd637fdb189", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1616424882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f00cc6e26e5c435b902306c6421e146d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd0a614-69", "ovs_interfaceid": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:10:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2401: 321 pgs: 321 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 151 op/s
Jan 23 05:10:25 np0005593232 cool_mclean[332865]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:10:25 np0005593232 cool_mclean[332865]: --> relative data size: 1.0
Jan 23 05:10:25 np0005593232 cool_mclean[332865]: --> All data devices are unavailable
Jan 23 05:10:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Jan 23 05:10:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Jan 23 05:10:25 np0005593232 systemd[1]: libpod-9401eb96ccc5532633220d98ebfa72eec55f51acfebf3ccfa52badc2946fefae.scope: Deactivated successfully.
Jan 23 05:10:25 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Jan 23 05:10:25 np0005593232 podman[332896]: 2026-01-23 10:10:25.42930107 +0000 UTC m=+0.028030617 container died 9401eb96ccc5532633220d98ebfa72eec55f51acfebf3ccfa52badc2946fefae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 23 05:10:25 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a649713041602b57ae0ecb9606e0d9de8a384637144f1201f7c52e9f4be5c2b6-merged.mount: Deactivated successfully.
Jan 23 05:10:25 np0005593232 podman[332896]: 2026-01-23 10:10:25.487313339 +0000 UTC m=+0.086042876 container remove 9401eb96ccc5532633220d98ebfa72eec55f51acfebf3ccfa52badc2946fefae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_mclean, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:10:25 np0005593232 systemd[1]: libpod-conmon-9401eb96ccc5532633220d98ebfa72eec55f51acfebf3ccfa52badc2946fefae.scope: Deactivated successfully.
Jan 23 05:10:25 np0005593232 nova_compute[250269]: 2026-01-23 10:10:25.600 250273 DEBUG oslo_concurrency.lockutils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Releasing lock "refresh_cache-280ec2fb-6ca3-4b43-bff4-ba64ac3a935b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:10:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:25.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:26 np0005593232 podman[333052]: 2026-01-23 10:10:26.347394378 +0000 UTC m=+0.040066120 container create be5ea2120a9eb9507316a66ea6bef02fd1fd7708bf2c3060fc672fe27faf6adf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cartwright, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:10:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Jan 23 05:10:26 np0005593232 systemd[1]: Started libpod-conmon-be5ea2120a9eb9507316a66ea6bef02fd1fd7708bf2c3060fc672fe27faf6adf.scope.
Jan 23 05:10:26 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:10:26 np0005593232 podman[333052]: 2026-01-23 10:10:26.328968204 +0000 UTC m=+0.021639966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:10:26 np0005593232 podman[333052]: 2026-01-23 10:10:26.444018193 +0000 UTC m=+0.136689995 container init be5ea2120a9eb9507316a66ea6bef02fd1fd7708bf2c3060fc672fe27faf6adf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cartwright, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:10:26 np0005593232 podman[333052]: 2026-01-23 10:10:26.453612466 +0000 UTC m=+0.146284208 container start be5ea2120a9eb9507316a66ea6bef02fd1fd7708bf2c3060fc672fe27faf6adf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 05:10:26 np0005593232 podman[333052]: 2026-01-23 10:10:26.456677953 +0000 UTC m=+0.149349715 container attach be5ea2120a9eb9507316a66ea6bef02fd1fd7708bf2c3060fc672fe27faf6adf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:10:26 np0005593232 recursing_cartwright[333069]: 167 167
Jan 23 05:10:26 np0005593232 systemd[1]: libpod-be5ea2120a9eb9507316a66ea6bef02fd1fd7708bf2c3060fc672fe27faf6adf.scope: Deactivated successfully.
Jan 23 05:10:26 np0005593232 podman[333052]: 2026-01-23 10:10:26.470265869 +0000 UTC m=+0.162937611 container died be5ea2120a9eb9507316a66ea6bef02fd1fd7708bf2c3060fc672fe27faf6adf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cartwright, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:10:26 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b397e0edb44163ef8548e13bdc8cb627fed515e1f5af18869aa3e4c6f34776cb-merged.mount: Deactivated successfully.
Jan 23 05:10:26 np0005593232 podman[333052]: 2026-01-23 10:10:26.50794702 +0000 UTC m=+0.200618762 container remove be5ea2120a9eb9507316a66ea6bef02fd1fd7708bf2c3060fc672fe27faf6adf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_cartwright, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 05:10:26 np0005593232 systemd[1]: libpod-conmon-be5ea2120a9eb9507316a66ea6bef02fd1fd7708bf2c3060fc672fe27faf6adf.scope: Deactivated successfully.
Jan 23 05:10:26 np0005593232 nova_compute[250269]: 2026-01-23 10:10:26.579 250273 DEBUG nova.virt.libvirt.driver [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 23 05:10:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:26.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:26 np0005593232 nova_compute[250269]: 2026-01-23 10:10:26.669 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Jan 23 05:10:26 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Jan 23 05:10:26 np0005593232 podman[333092]: 2026-01-23 10:10:26.716711652 +0000 UTC m=+0.053064079 container create 837153bffe10023ff059e69fa90cdfcc3318184f3b905767ec2af64115a86520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 05:10:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:10:26 np0005593232 systemd[1]: Started libpod-conmon-837153bffe10023ff059e69fa90cdfcc3318184f3b905767ec2af64115a86520.scope.
Jan 23 05:10:26 np0005593232 podman[333092]: 2026-01-23 10:10:26.694322286 +0000 UTC m=+0.030674713 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:10:26 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:10:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9844fd4b7cbf4c612c551f818f23b27340432842b0ffbe5cf41e76a9ed85246/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:10:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9844fd4b7cbf4c612c551f818f23b27340432842b0ffbe5cf41e76a9ed85246/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:10:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9844fd4b7cbf4c612c551f818f23b27340432842b0ffbe5cf41e76a9ed85246/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:10:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9844fd4b7cbf4c612c551f818f23b27340432842b0ffbe5cf41e76a9ed85246/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:10:26 np0005593232 podman[333092]: 2026-01-23 10:10:26.810836975 +0000 UTC m=+0.147189382 container init 837153bffe10023ff059e69fa90cdfcc3318184f3b905767ec2af64115a86520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_kare, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:10:26 np0005593232 podman[333092]: 2026-01-23 10:10:26.818292497 +0000 UTC m=+0.154644904 container start 837153bffe10023ff059e69fa90cdfcc3318184f3b905767ec2af64115a86520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_kare, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 05:10:26 np0005593232 podman[333092]: 2026-01-23 10:10:26.821740125 +0000 UTC m=+0.158092592 container attach 837153bffe10023ff059e69fa90cdfcc3318184f3b905767ec2af64115a86520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 05:10:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2404: 321 pgs: 321 active+clean; 412 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 7.3 MiB/s wr, 316 op/s
Jan 23 05:10:27 np0005593232 nova_compute[250269]: 2026-01-23 10:10:27.299 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:27 np0005593232 angry_kare[333108]: {
Jan 23 05:10:27 np0005593232 angry_kare[333108]:    "0": [
Jan 23 05:10:27 np0005593232 angry_kare[333108]:        {
Jan 23 05:10:27 np0005593232 angry_kare[333108]:            "devices": [
Jan 23 05:10:27 np0005593232 angry_kare[333108]:                "/dev/loop3"
Jan 23 05:10:27 np0005593232 angry_kare[333108]:            ],
Jan 23 05:10:27 np0005593232 angry_kare[333108]:            "lv_name": "ceph_lv0",
Jan 23 05:10:27 np0005593232 angry_kare[333108]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:10:27 np0005593232 angry_kare[333108]:            "lv_size": "7511998464",
Jan 23 05:10:27 np0005593232 angry_kare[333108]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:10:27 np0005593232 angry_kare[333108]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:10:27 np0005593232 angry_kare[333108]:            "name": "ceph_lv0",
Jan 23 05:10:27 np0005593232 angry_kare[333108]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:10:27 np0005593232 angry_kare[333108]:            "tags": {
Jan 23 05:10:27 np0005593232 angry_kare[333108]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:10:27 np0005593232 angry_kare[333108]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:10:27 np0005593232 angry_kare[333108]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:10:27 np0005593232 angry_kare[333108]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:10:27 np0005593232 angry_kare[333108]:                "ceph.cluster_name": "ceph",
Jan 23 05:10:27 np0005593232 angry_kare[333108]:                "ceph.crush_device_class": "",
Jan 23 05:10:27 np0005593232 angry_kare[333108]:                "ceph.encrypted": "0",
Jan 23 05:10:27 np0005593232 angry_kare[333108]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:10:27 np0005593232 angry_kare[333108]:                "ceph.osd_id": "0",
Jan 23 05:10:27 np0005593232 angry_kare[333108]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:10:27 np0005593232 angry_kare[333108]:                "ceph.type": "block",
Jan 23 05:10:27 np0005593232 angry_kare[333108]:                "ceph.vdo": "0"
Jan 23 05:10:27 np0005593232 angry_kare[333108]:            },
Jan 23 05:10:27 np0005593232 angry_kare[333108]:            "type": "block",
Jan 23 05:10:27 np0005593232 angry_kare[333108]:            "vg_name": "ceph_vg0"
Jan 23 05:10:27 np0005593232 angry_kare[333108]:        }
Jan 23 05:10:27 np0005593232 angry_kare[333108]:    ]
Jan 23 05:10:27 np0005593232 angry_kare[333108]: }
Jan 23 05:10:27 np0005593232 systemd[1]: libpod-837153bffe10023ff059e69fa90cdfcc3318184f3b905767ec2af64115a86520.scope: Deactivated successfully.
Jan 23 05:10:27 np0005593232 podman[333092]: 2026-01-23 10:10:27.62361804 +0000 UTC m=+0.959970477 container died 837153bffe10023ff059e69fa90cdfcc3318184f3b905767ec2af64115a86520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_kare, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 05:10:27 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d9844fd4b7cbf4c612c551f818f23b27340432842b0ffbe5cf41e76a9ed85246-merged.mount: Deactivated successfully.
Jan 23 05:10:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Jan 23 05:10:27 np0005593232 podman[333092]: 2026-01-23 10:10:27.710727605 +0000 UTC m=+1.047080022 container remove 837153bffe10023ff059e69fa90cdfcc3318184f3b905767ec2af64115a86520 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_kare, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:10:27 np0005593232 systemd[1]: libpod-conmon-837153bffe10023ff059e69fa90cdfcc3318184f3b905767ec2af64115a86520.scope: Deactivated successfully.
Jan 23 05:10:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:10:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:27.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:10:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Jan 23 05:10:28 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Jan 23 05:10:28 np0005593232 podman[333318]: 2026-01-23 10:10:28.309807758 +0000 UTC m=+0.049092706 container create c6c348ff0a895b66f27c304be1da2fafe05d059eaecca4ec3a231fb8e9457d84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shaw, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:10:28 np0005593232 systemd[1]: Started libpod-conmon-c6c348ff0a895b66f27c304be1da2fafe05d059eaecca4ec3a231fb8e9457d84.scope.
Jan 23 05:10:28 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:10:28 np0005593232 podman[333318]: 2026-01-23 10:10:28.287970238 +0000 UTC m=+0.027255206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:10:28 np0005593232 podman[333318]: 2026-01-23 10:10:28.398361094 +0000 UTC m=+0.137646052 container init c6c348ff0a895b66f27c304be1da2fafe05d059eaecca4ec3a231fb8e9457d84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:10:28 np0005593232 podman[333318]: 2026-01-23 10:10:28.40806071 +0000 UTC m=+0.147345698 container start c6c348ff0a895b66f27c304be1da2fafe05d059eaecca4ec3a231fb8e9457d84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 23 05:10:28 np0005593232 podman[333318]: 2026-01-23 10:10:28.412309311 +0000 UTC m=+0.151594299 container attach c6c348ff0a895b66f27c304be1da2fafe05d059eaecca4ec3a231fb8e9457d84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:10:28 np0005593232 interesting_shaw[333334]: 167 167
Jan 23 05:10:28 np0005593232 systemd[1]: libpod-c6c348ff0a895b66f27c304be1da2fafe05d059eaecca4ec3a231fb8e9457d84.scope: Deactivated successfully.
Jan 23 05:10:28 np0005593232 conmon[333334]: conmon c6c348ff0a895b66f27c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c6c348ff0a895b66f27c304be1da2fafe05d059eaecca4ec3a231fb8e9457d84.scope/container/memory.events
Jan 23 05:10:28 np0005593232 podman[333318]: 2026-01-23 10:10:28.418040353 +0000 UTC m=+0.157325301 container died c6c348ff0a895b66f27c304be1da2fafe05d059eaecca4ec3a231fb8e9457d84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shaw, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 05:10:28 np0005593232 systemd[1]: var-lib-containers-storage-overlay-31af241f94d719bb21613502fb38af7e09a07ba2d75c67a5e430ca2c903586c8-merged.mount: Deactivated successfully.
Jan 23 05:10:28 np0005593232 podman[333318]: 2026-01-23 10:10:28.461938431 +0000 UTC m=+0.201223389 container remove c6c348ff0a895b66f27c304be1da2fafe05d059eaecca4ec3a231fb8e9457d84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_shaw, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 05:10:28 np0005593232 systemd[1]: libpod-conmon-c6c348ff0a895b66f27c304be1da2fafe05d059eaecca4ec3a231fb8e9457d84.scope: Deactivated successfully.
Jan 23 05:10:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:10:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:28.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:10:28 np0005593232 podman[333359]: 2026-01-23 10:10:28.709072503 +0000 UTC m=+0.063980519 container create 22c986eeac3a4673d1be490c7dfdfae04ba3de40553413c507ef7e6846630d89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_banach, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:10:28 np0005593232 systemd[1]: Started libpod-conmon-22c986eeac3a4673d1be490c7dfdfae04ba3de40553413c507ef7e6846630d89.scope.
Jan 23 05:10:28 np0005593232 podman[333359]: 2026-01-23 10:10:28.671753073 +0000 UTC m=+0.026661149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:10:28 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:10:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde3344d2fb9d5f6ac2c5c8d2bf2b1facd50ff6ae6c3103f07fa1a64beafe4a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:10:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde3344d2fb9d5f6ac2c5c8d2bf2b1facd50ff6ae6c3103f07fa1a64beafe4a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:10:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde3344d2fb9d5f6ac2c5c8d2bf2b1facd50ff6ae6c3103f07fa1a64beafe4a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:10:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde3344d2fb9d5f6ac2c5c8d2bf2b1facd50ff6ae6c3103f07fa1a64beafe4a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:10:28 np0005593232 podman[333359]: 2026-01-23 10:10:28.825327806 +0000 UTC m=+0.180235842 container init 22c986eeac3a4673d1be490c7dfdfae04ba3de40553413c507ef7e6846630d89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_banach, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 05:10:28 np0005593232 podman[333359]: 2026-01-23 10:10:28.833075837 +0000 UTC m=+0.187983833 container start 22c986eeac3a4673d1be490c7dfdfae04ba3de40553413c507ef7e6846630d89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_banach, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 05:10:28 np0005593232 podman[333359]: 2026-01-23 10:10:28.837204824 +0000 UTC m=+0.192112870 container attach 22c986eeac3a4673d1be490c7dfdfae04ba3de40553413c507ef7e6846630d89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:10:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2406: 321 pgs: 321 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 8.0 MiB/s rd, 7.9 MiB/s wr, 151 op/s
Jan 23 05:10:29 np0005593232 gifted_banach[333375]: {
Jan 23 05:10:29 np0005593232 gifted_banach[333375]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:10:29 np0005593232 gifted_banach[333375]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:10:29 np0005593232 gifted_banach[333375]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:10:29 np0005593232 gifted_banach[333375]:        "osd_id": 0,
Jan 23 05:10:29 np0005593232 gifted_banach[333375]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:10:29 np0005593232 gifted_banach[333375]:        "type": "bluestore"
Jan 23 05:10:29 np0005593232 gifted_banach[333375]:    }
Jan 23 05:10:29 np0005593232 gifted_banach[333375]: }
Jan 23 05:10:29 np0005593232 systemd[1]: libpod-22c986eeac3a4673d1be490c7dfdfae04ba3de40553413c507ef7e6846630d89.scope: Deactivated successfully.
Jan 23 05:10:29 np0005593232 podman[333359]: 2026-01-23 10:10:29.676576554 +0000 UTC m=+1.031484530 container died 22c986eeac3a4673d1be490c7dfdfae04ba3de40553413c507ef7e6846630d89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 05:10:29 np0005593232 systemd[1]: var-lib-containers-storage-overlay-bde3344d2fb9d5f6ac2c5c8d2bf2b1facd50ff6ae6c3103f07fa1a64beafe4a5-merged.mount: Deactivated successfully.
Jan 23 05:10:29 np0005593232 podman[333359]: 2026-01-23 10:10:29.731303229 +0000 UTC m=+1.086211215 container remove 22c986eeac3a4673d1be490c7dfdfae04ba3de40553413c507ef7e6846630d89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_banach, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 05:10:29 np0005593232 systemd[1]: libpod-conmon-22c986eeac3a4673d1be490c7dfdfae04ba3de40553413c507ef7e6846630d89.scope: Deactivated successfully.
Jan 23 05:10:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:29.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:10:29 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:10:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:10:29 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:10:29 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 318c6adb-f70e-4751-bd28-80541f6efbe0 does not exist
Jan 23 05:10:29 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c494c042-1802-47e5-ae4e-323a58e37793 does not exist
Jan 23 05:10:29 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ffbc65d4-1a83-4cb9-80e1-cff9c10bcc3e does not exist
Jan 23 05:10:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:10:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:30.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:10:30 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:10:30 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:10:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2407: 321 pgs: 321 active+clean; 451 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 148 op/s
Jan 23 05:10:31 np0005593232 nova_compute[250269]: 2026-01-23 10:10:31.671 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:10:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Jan 23 05:10:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Jan 23 05:10:31 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Jan 23 05:10:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:10:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:31.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:10:32 np0005593232 nova_compute[250269]: 2026-01-23 10:10:32.303 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:10:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:32.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:10:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2409: 321 pgs: 321 active+clean; 464 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 6.2 MiB/s wr, 184 op/s
Jan 23 05:10:33 np0005593232 nova_compute[250269]: 2026-01-23 10:10:33.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:10:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:33.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:34.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2410: 321 pgs: 321 active+clean; 464 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 5.1 MiB/s wr, 151 op/s
Jan 23 05:10:35 np0005593232 nova_compute[250269]: 2026-01-23 10:10:35.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:10:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:35.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:10:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:36.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:10:36 np0005593232 nova_compute[250269]: 2026-01-23 10:10:36.650 250273 DEBUG nova.virt.libvirt.driver [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 23 05:10:36 np0005593232 nova_compute[250269]: 2026-01-23 10:10:36.717 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:10:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Jan 23 05:10:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Jan 23 05:10:36 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Jan 23 05:10:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2412: 321 pgs: 321 active+clean; 427 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.6 MiB/s wr, 216 op/s
Jan 23 05:10:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:10:37
Jan 23 05:10:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:10:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:10:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['backups', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'vms', 'volumes', 'default.rgw.meta']
Jan 23 05:10:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:10:37 np0005593232 nova_compute[250269]: 2026-01-23 10:10:37.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:10:37 np0005593232 nova_compute[250269]: 2026-01-23 10:10:37.306 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:10:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3593204294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:10:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:10:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:10:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:10:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:10:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:10:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:10:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:10:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:37.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:10:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:10:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:10:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:10:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:10:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:10:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:10:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:10:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:10:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:10:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:10:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:38.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2413: 321 pgs: 321 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.2 MiB/s rd, 3.2 MiB/s wr, 374 op/s
Jan 23 05:10:39 np0005593232 kernel: tapacd0a614-69 (unregistering): left promiscuous mode
Jan 23 05:10:39 np0005593232 NetworkManager[49057]: <info>  [1769163039.2822] device (tapacd0a614-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:10:39 np0005593232 ovn_controller[151001]: 2026-01-23T10:10:39Z|00434|binding|INFO|Releasing lport acd0a614-69ea-41fa-9830-ca2c81f259b9 from this chassis (sb_readonly=0)
Jan 23 05:10:39 np0005593232 ovn_controller[151001]: 2026-01-23T10:10:39Z|00435|binding|INFO|Setting lport acd0a614-69ea-41fa-9830-ca2c81f259b9 down in Southbound
Jan 23 05:10:39 np0005593232 ovn_controller[151001]: 2026-01-23T10:10:39Z|00436|binding|INFO|Removing iface tapacd0a614-69 ovn-installed in OVS
Jan 23 05:10:39 np0005593232 nova_compute[250269]: 2026-01-23 10:10:39.299 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:39 np0005593232 nova_compute[250269]: 2026-01-23 10:10:39.337 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:39 np0005593232 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000080.scope: Deactivated successfully.
Jan 23 05:10:39 np0005593232 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000080.scope: Consumed 14.188s CPU time.
Jan 23 05:10:39 np0005593232 systemd-machined[215836]: Machine qemu-53-instance-00000080 terminated.
Jan 23 05:10:39 np0005593232 kernel: tapacd0a614-69: entered promiscuous mode
Jan 23 05:10:39 np0005593232 ovn_controller[151001]: 2026-01-23T10:10:39Z|00437|if_status|INFO|Dropped 9 log messages in last 474 seconds (most recently, 474 seconds ago) due to excessive rate
Jan 23 05:10:39 np0005593232 ovn_controller[151001]: 2026-01-23T10:10:39Z|00438|if_status|INFO|Not updating pb chassis for acd0a614-69ea-41fa-9830-ca2c81f259b9 now as sb is readonly
Jan 23 05:10:39 np0005593232 nova_compute[250269]: 2026-01-23 10:10:39.539 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:39 np0005593232 NetworkManager[49057]: <info>  [1769163039.5436] manager: (tapacd0a614-69): new Tun device (/org/freedesktop/NetworkManager/Devices/212)
Jan 23 05:10:39 np0005593232 kernel: tapacd0a614-69 (unregistering): left promiscuous mode
Jan 23 05:10:39 np0005593232 nova_compute[250269]: 2026-01-23 10:10:39.571 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:39 np0005593232 ovn_controller[151001]: 2026-01-23T10:10:39Z|00439|binding|INFO|Releasing lport acd0a614-69ea-41fa-9830-ca2c81f259b9 from this chassis (sb_readonly=1)
Jan 23 05:10:39 np0005593232 ovn_controller[151001]: 2026-01-23T10:10:39Z|00440|if_status|INFO|Dropped 2 log messages in last 1547 seconds (most recently, 1547 seconds ago) due to excessive rate
Jan 23 05:10:39 np0005593232 ovn_controller[151001]: 2026-01-23T10:10:39Z|00441|if_status|INFO|Not setting lport acd0a614-69ea-41fa-9830-ca2c81f259b9 down as sb is readonly
Jan 23 05:10:39 np0005593232 nova_compute[250269]: 2026-01-23 10:10:39.579 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:39 np0005593232 nova_compute[250269]: 2026-01-23 10:10:39.601 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:39 np0005593232 nova_compute[250269]: 2026-01-23 10:10:39.666 250273 INFO nova.virt.libvirt.driver [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Instance shutdown successfully after 13 seconds.#033[00m
Jan 23 05:10:39 np0005593232 nova_compute[250269]: 2026-01-23 10:10:39.672 250273 INFO nova.virt.libvirt.driver [-] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Instance destroyed successfully.#033[00m
Jan 23 05:10:39 np0005593232 nova_compute[250269]: 2026-01-23 10:10:39.672 250273 DEBUG nova.objects.instance [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lazy-loading 'numa_topology' on Instance uuid 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:10:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:39.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:39 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:39.922 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:c1:86 10.100.0.2'], port_security=['fa:16:3e:50:c1:86 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '280ec2fb-6ca3-4b43-bff4-ba64ac3a935b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88b571fd-69ad-4860-a596-3bd637fdb189', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f00cc6e26e5c435b902306c6421e146d', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a7b9167c-c78b-48f5-9e9d-ac8ada29e0a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d050303a-8173-4865-aab2-724e0c0624de, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=acd0a614-69ea-41fa-9830-ca2c81f259b9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:10:39 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:39.923 161902 INFO neutron.agent.ovn.metadata.agent [-] Port acd0a614-69ea-41fa-9830-ca2c81f259b9 in datapath 88b571fd-69ad-4860-a596-3bd637fdb189 unbound from our chassis#033[00m
Jan 23 05:10:39 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:39.924 161902 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 88b571fd-69ad-4860-a596-3bd637fdb189 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 23 05:10:39 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 05:10:39 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:39.927 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c165bc9f-5507-4282-b150-685f96a12756]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.001 250273 INFO nova.virt.libvirt.driver [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Attempting rescue#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.003 250273 DEBUG nova.virt.libvirt.driver [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.008 250273 DEBUG nova.virt.libvirt.driver [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.008 250273 INFO nova.virt.libvirt.driver [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Creating image(s)#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.043 250273 DEBUG nova.storage.rbd_utils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] rbd image 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.048 250273 DEBUG nova.objects.instance [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lazy-loading 'trusted_certs' on Instance uuid 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.225 250273 DEBUG nova.storage.rbd_utils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] rbd image 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.255 250273 DEBUG nova.storage.rbd_utils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] rbd image 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.260 250273 DEBUG oslo_concurrency.processutils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.347 250273 DEBUG oslo_concurrency.processutils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.348 250273 DEBUG oslo_concurrency.lockutils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.349 250273 DEBUG oslo_concurrency.lockutils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.349 250273 DEBUG oslo_concurrency.lockutils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.378 250273 DEBUG nova.storage.rbd_utils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] rbd image 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.382 250273 DEBUG oslo_concurrency.processutils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.553 250273 DEBUG nova.compute.manager [req-81d1cb62-a43e-43c9-84cf-dab0e6901489 req-48b89b17-a7f9-4ef6-a2cb-0252638ac375 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Received event network-vif-unplugged-acd0a614-69ea-41fa-9830-ca2c81f259b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.556 250273 DEBUG oslo_concurrency.lockutils [req-81d1cb62-a43e-43c9-84cf-dab0e6901489 req-48b89b17-a7f9-4ef6-a2cb-0252638ac375 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.557 250273 DEBUG oslo_concurrency.lockutils [req-81d1cb62-a43e-43c9-84cf-dab0e6901489 req-48b89b17-a7f9-4ef6-a2cb-0252638ac375 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.557 250273 DEBUG oslo_concurrency.lockutils [req-81d1cb62-a43e-43c9-84cf-dab0e6901489 req-48b89b17-a7f9-4ef6-a2cb-0252638ac375 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.557 250273 DEBUG nova.compute.manager [req-81d1cb62-a43e-43c9-84cf-dab0e6901489 req-48b89b17-a7f9-4ef6-a2cb-0252638ac375 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] No waiting events found dispatching network-vif-unplugged-acd0a614-69ea-41fa-9830-ca2c81f259b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.557 250273 WARNING nova.compute.manager [req-81d1cb62-a43e-43c9-84cf-dab0e6901489 req-48b89b17-a7f9-4ef6-a2cb-0252638ac375 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Received unexpected event network-vif-unplugged-acd0a614-69ea-41fa-9830-ca2c81f259b9 for instance with vm_state active and task_state rescuing.#033[00m
Jan 23 05:10:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:40.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.694 250273 DEBUG oslo_concurrency.processutils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.695 250273 DEBUG nova.objects.instance [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lazy-loading 'migration_context' on Instance uuid 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.725 250273 DEBUG nova.virt.libvirt.driver [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.726 250273 DEBUG nova.virt.libvirt.driver [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Start _get_guest_xml network_info=[{"id": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "address": "fa:16:3e:50:c1:86", "network": {"id": "88b571fd-69ad-4860-a596-3bd637fdb189", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1616424882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1616424882-network", "vif_mac": "fa:16:3e:50:c1:86"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f00cc6e26e5c435b902306c6421e146d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd0a614-69", "ovs_interfaceid": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.726 250273 DEBUG nova.objects.instance [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lazy-loading 'resources' on Instance uuid 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:10:40 np0005593232 nova_compute[250269]: 2026-01-23 10:10:40.774 250273 WARNING nova.virt.libvirt.driver [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:10:41 np0005593232 nova_compute[250269]: 2026-01-23 10:10:41.007 250273 DEBUG nova.virt.libvirt.host [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:10:41 np0005593232 nova_compute[250269]: 2026-01-23 10:10:41.008 250273 DEBUG nova.virt.libvirt.host [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:10:41 np0005593232 nova_compute[250269]: 2026-01-23 10:10:41.011 250273 DEBUG nova.virt.libvirt.host [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:10:41 np0005593232 nova_compute[250269]: 2026-01-23 10:10:41.012 250273 DEBUG nova.virt.libvirt.host [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:10:41 np0005593232 nova_compute[250269]: 2026-01-23 10:10:41.013 250273 DEBUG nova.virt.libvirt.driver [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:10:41 np0005593232 nova_compute[250269]: 2026-01-23 10:10:41.014 250273 DEBUG nova.virt.hardware [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:10:41 np0005593232 nova_compute[250269]: 2026-01-23 10:10:41.014 250273 DEBUG nova.virt.hardware [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:10:41 np0005593232 nova_compute[250269]: 2026-01-23 10:10:41.014 250273 DEBUG nova.virt.hardware [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:10:41 np0005593232 nova_compute[250269]: 2026-01-23 10:10:41.015 250273 DEBUG nova.virt.hardware [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:10:41 np0005593232 nova_compute[250269]: 2026-01-23 10:10:41.015 250273 DEBUG nova.virt.hardware [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:10:41 np0005593232 nova_compute[250269]: 2026-01-23 10:10:41.015 250273 DEBUG nova.virt.hardware [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:10:41 np0005593232 nova_compute[250269]: 2026-01-23 10:10:41.015 250273 DEBUG nova.virt.hardware [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:10:41 np0005593232 nova_compute[250269]: 2026-01-23 10:10:41.016 250273 DEBUG nova.virt.hardware [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:10:41 np0005593232 nova_compute[250269]: 2026-01-23 10:10:41.016 250273 DEBUG nova.virt.hardware [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:10:41 np0005593232 nova_compute[250269]: 2026-01-23 10:10:41.016 250273 DEBUG nova.virt.hardware [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:10:41 np0005593232 nova_compute[250269]: 2026-01-23 10:10:41.017 250273 DEBUG nova.virt.hardware [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:10:41 np0005593232 nova_compute[250269]: 2026-01-23 10:10:41.017 250273 DEBUG nova.objects.instance [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lazy-loading 'vcpu_model' on Instance uuid 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:10:41 np0005593232 nova_compute[250269]: 2026-01-23 10:10:41.043 250273 DEBUG oslo_concurrency.processutils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:10:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2414: 321 pgs: 321 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 2.7 MiB/s wr, 311 op/s
Jan 23 05:10:41 np0005593232 nova_compute[250269]: 2026-01-23 10:10:41.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:10:41 np0005593232 nova_compute[250269]: 2026-01-23 10:10:41.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:10:41 np0005593232 nova_compute[250269]: 2026-01-23 10:10:41.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:10:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:10:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4047452357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:10:41 np0005593232 nova_compute[250269]: 2026-01-23 10:10:41.560 250273 DEBUG oslo_concurrency.processutils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:10:41 np0005593232 nova_compute[250269]: 2026-01-23 10:10:41.562 250273 DEBUG oslo_concurrency.processutils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:10:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:10:41 np0005593232 nova_compute[250269]: 2026-01-23 10:10:41.761 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:41.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:10:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1442176773' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:10:42 np0005593232 nova_compute[250269]: 2026-01-23 10:10:42.041 250273 DEBUG oslo_concurrency.processutils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:10:42 np0005593232 nova_compute[250269]: 2026-01-23 10:10:42.042 250273 DEBUG oslo_concurrency.processutils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:10:42 np0005593232 nova_compute[250269]: 2026-01-23 10:10:42.307 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:10:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/321861254' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:10:42 np0005593232 nova_compute[250269]: 2026-01-23 10:10:42.466 250273 DEBUG oslo_concurrency.processutils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:10:42 np0005593232 nova_compute[250269]: 2026-01-23 10:10:42.467 250273 DEBUG nova.virt.libvirt.vif [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:10:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1237513179',display_name='tempest-ServerRescueTestJSON-server-1237513179',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1237513179',id=128,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:10:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f00cc6e26e5c435b902306c6421e146d',ramdisk_id='',reservation_id='r-1la91wzm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-837476510',owner_user_name='tempest-ServerRescueTestJSON-837476510-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:10:20Z,user_data=None,user_id='eb500aabc93044e380f4bc905205803d',uuid=280ec2fb-6ca3-4b43-bff4-ba64ac3a935b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "address": "fa:16:3e:50:c1:86", "network": {"id": "88b571fd-69ad-4860-a596-3bd637fdb189", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1616424882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1616424882-network", "vif_mac": "fa:16:3e:50:c1:86"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f00cc6e26e5c435b902306c6421e146d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd0a614-69", "ovs_interfaceid": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:10:42 np0005593232 nova_compute[250269]: 2026-01-23 10:10:42.468 250273 DEBUG nova.network.os_vif_util [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Converting VIF {"id": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "address": "fa:16:3e:50:c1:86", "network": {"id": "88b571fd-69ad-4860-a596-3bd637fdb189", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1616424882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1616424882-network", "vif_mac": "fa:16:3e:50:c1:86"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f00cc6e26e5c435b902306c6421e146d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd0a614-69", "ovs_interfaceid": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:10:42 np0005593232 nova_compute[250269]: 2026-01-23 10:10:42.469 250273 DEBUG nova.network.os_vif_util [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:50:c1:86,bridge_name='br-int',has_traffic_filtering=True,id=acd0a614-69ea-41fa-9830-ca2c81f259b9,network=Network(88b571fd-69ad-4860-a596-3bd637fdb189),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacd0a614-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:10:42 np0005593232 nova_compute[250269]: 2026-01-23 10:10:42.470 250273 DEBUG nova.objects.instance [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lazy-loading 'pci_devices' on Instance uuid 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:10:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:42.622 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:10:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:42.623 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:10:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:42.623 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:10:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:42.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2415: 321 pgs: 321 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.1 MiB/s wr, 261 op/s
Jan 23 05:10:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:10:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:43.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:10:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:44.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:45 np0005593232 nova_compute[250269]: 2026-01-23 10:10:45.122 250273 DEBUG nova.compute.manager [req-0692547d-702c-49bb-8d8c-e17f942dc16e req-fba21c44-a882-4ba6-a2fc-ce000b26b330 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Received event network-vif-plugged-acd0a614-69ea-41fa-9830-ca2c81f259b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:10:45 np0005593232 nova_compute[250269]: 2026-01-23 10:10:45.122 250273 DEBUG oslo_concurrency.lockutils [req-0692547d-702c-49bb-8d8c-e17f942dc16e req-fba21c44-a882-4ba6-a2fc-ce000b26b330 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:10:45 np0005593232 nova_compute[250269]: 2026-01-23 10:10:45.123 250273 DEBUG oslo_concurrency.lockutils [req-0692547d-702c-49bb-8d8c-e17f942dc16e req-fba21c44-a882-4ba6-a2fc-ce000b26b330 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:10:45 np0005593232 nova_compute[250269]: 2026-01-23 10:10:45.123 250273 DEBUG oslo_concurrency.lockutils [req-0692547d-702c-49bb-8d8c-e17f942dc16e req-fba21c44-a882-4ba6-a2fc-ce000b26b330 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:10:45 np0005593232 nova_compute[250269]: 2026-01-23 10:10:45.124 250273 DEBUG nova.compute.manager [req-0692547d-702c-49bb-8d8c-e17f942dc16e req-fba21c44-a882-4ba6-a2fc-ce000b26b330 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] No waiting events found dispatching network-vif-plugged-acd0a614-69ea-41fa-9830-ca2c81f259b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:10:45 np0005593232 nova_compute[250269]: 2026-01-23 10:10:45.124 250273 WARNING nova.compute.manager [req-0692547d-702c-49bb-8d8c-e17f942dc16e req-fba21c44-a882-4ba6-a2fc-ce000b26b330 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Received unexpected event network-vif-plugged-acd0a614-69ea-41fa-9830-ca2c81f259b9 for instance with vm_state active and task_state rescuing.#033[00m
Jan 23 05:10:45 np0005593232 nova_compute[250269]: 2026-01-23 10:10:45.145 250273 DEBUG nova.virt.libvirt.driver [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:10:45 np0005593232 nova_compute[250269]:  <uuid>280ec2fb-6ca3-4b43-bff4-ba64ac3a935b</uuid>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:  <name>instance-00000080</name>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerRescueTestJSON-server-1237513179</nova:name>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:10:40</nova:creationTime>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:10:45 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:        <nova:user uuid="eb500aabc93044e380f4bc905205803d">tempest-ServerRescueTestJSON-837476510-project-member</nova:user>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:        <nova:project uuid="f00cc6e26e5c435b902306c6421e146d">tempest-ServerRescueTestJSON-837476510</nova:project>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:        <nova:port uuid="acd0a614-69ea-41fa-9830-ca2c81f259b9">
Jan 23 05:10:45 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.2" ipVersion="4"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <entry name="serial">280ec2fb-6ca3-4b43-bff4-ba64ac3a935b</entry>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <entry name="uuid">280ec2fb-6ca3-4b43-bff4-ba64ac3a935b</entry>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk.rescue">
Jan 23 05:10:45 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:10:45 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk">
Jan 23 05:10:45 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:10:45 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <target dev="vdb" bus="virtio"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk.config.rescue">
Jan 23 05:10:45 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:10:45 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:50:c1:86"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <target dev="tapacd0a614-69"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/280ec2fb-6ca3-4b43-bff4-ba64ac3a935b/console.log" append="off"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:10:45 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:10:45 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:10:45 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:10:45 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:10:45 np0005593232 nova_compute[250269]: 2026-01-23 10:10:45.153 250273 INFO nova.virt.libvirt.driver [-] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Instance destroyed successfully.#033[00m
Jan 23 05:10:45 np0005593232 nova_compute[250269]: 2026-01-23 10:10:45.238 250273 DEBUG nova.virt.libvirt.driver [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:10:45 np0005593232 nova_compute[250269]: 2026-01-23 10:10:45.239 250273 DEBUG nova.virt.libvirt.driver [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:10:45 np0005593232 nova_compute[250269]: 2026-01-23 10:10:45.239 250273 DEBUG nova.virt.libvirt.driver [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:10:45 np0005593232 nova_compute[250269]: 2026-01-23 10:10:45.240 250273 DEBUG nova.virt.libvirt.driver [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] No VIF found with MAC fa:16:3e:50:c1:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:10:45 np0005593232 nova_compute[250269]: 2026-01-23 10:10:45.241 250273 INFO nova.virt.libvirt.driver [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Using config drive#033[00m
Jan 23 05:10:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2416: 321 pgs: 321 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.1 MiB/s wr, 261 op/s
Jan 23 05:10:45 np0005593232 nova_compute[250269]: 2026-01-23 10:10:45.287 250273 DEBUG nova.storage.rbd_utils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] rbd image 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:10:45 np0005593232 nova_compute[250269]: 2026-01-23 10:10:45.349 250273 DEBUG nova.objects.instance [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lazy-loading 'ec2_ids' on Instance uuid 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:10:45 np0005593232 nova_compute[250269]: 2026-01-23 10:10:45.464 250273 DEBUG nova.objects.instance [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lazy-loading 'keypairs' on Instance uuid 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:10:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:10:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:45.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:10:46 np0005593232 nova_compute[250269]: 2026-01-23 10:10:46.191 250273 INFO nova.virt.libvirt.driver [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Creating config drive at /var/lib/nova/instances/280ec2fb-6ca3-4b43-bff4-ba64ac3a935b/disk.config.rescue#033[00m
Jan 23 05:10:46 np0005593232 nova_compute[250269]: 2026-01-23 10:10:46.200 250273 DEBUG oslo_concurrency.processutils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/280ec2fb-6ca3-4b43-bff4-ba64ac3a935b/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0sfp65cu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:10:46 np0005593232 nova_compute[250269]: 2026-01-23 10:10:46.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:10:46 np0005593232 nova_compute[250269]: 2026-01-23 10:10:46.294 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:10:46 np0005593232 nova_compute[250269]: 2026-01-23 10:10:46.294 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:10:46 np0005593232 nova_compute[250269]: 2026-01-23 10:10:46.367 250273 DEBUG oslo_concurrency.processutils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/280ec2fb-6ca3-4b43-bff4-ba64ac3a935b/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0sfp65cu" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:10:46 np0005593232 nova_compute[250269]: 2026-01-23 10:10:46.398 250273 DEBUG nova.storage.rbd_utils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] rbd image 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:10:46 np0005593232 nova_compute[250269]: 2026-01-23 10:10:46.402 250273 DEBUG oslo_concurrency.processutils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/280ec2fb-6ca3-4b43-bff4-ba64ac3a935b/disk.config.rescue 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:10:46 np0005593232 nova_compute[250269]: 2026-01-23 10:10:46.451 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-280ec2fb-6ca3-4b43-bff4-ba64ac3a935b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:10:46 np0005593232 nova_compute[250269]: 2026-01-23 10:10:46.452 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-280ec2fb-6ca3-4b43-bff4-ba64ac3a935b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:10:46 np0005593232 nova_compute[250269]: 2026-01-23 10:10:46.452 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:10:46 np0005593232 nova_compute[250269]: 2026-01-23 10:10:46.453 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:10:46 np0005593232 nova_compute[250269]: 2026-01-23 10:10:46.629 250273 DEBUG oslo_concurrency.processutils [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/280ec2fb-6ca3-4b43-bff4-ba64ac3a935b/disk.config.rescue 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:10:46 np0005593232 nova_compute[250269]: 2026-01-23 10:10:46.630 250273 INFO nova.virt.libvirt.driver [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Deleting local config drive /var/lib/nova/instances/280ec2fb-6ca3-4b43-bff4-ba64ac3a935b/disk.config.rescue because it was imported into RBD.#033[00m
Jan 23 05:10:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:10:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:46.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:10:46 np0005593232 kernel: tapacd0a614-69: entered promiscuous mode
Jan 23 05:10:46 np0005593232 NetworkManager[49057]: <info>  [1769163046.6966] manager: (tapacd0a614-69): new Tun device (/org/freedesktop/NetworkManager/Devices/213)
Jan 23 05:10:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:10:46 np0005593232 ovn_controller[151001]: 2026-01-23T10:10:46Z|00442|binding|INFO|Claiming lport acd0a614-69ea-41fa-9830-ca2c81f259b9 for this chassis.
Jan 23 05:10:46 np0005593232 ovn_controller[151001]: 2026-01-23T10:10:46Z|00443|binding|INFO|acd0a614-69ea-41fa-9830-ca2c81f259b9: Claiming fa:16:3e:50:c1:86 10.100.0.2
Jan 23 05:10:46 np0005593232 nova_compute[250269]: 2026-01-23 10:10:46.749 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:46 np0005593232 ovn_controller[151001]: 2026-01-23T10:10:46Z|00444|binding|INFO|Setting lport acd0a614-69ea-41fa-9830-ca2c81f259b9 ovn-installed in OVS
Jan 23 05:10:46 np0005593232 nova_compute[250269]: 2026-01-23 10:10:46.768 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:46 np0005593232 nova_compute[250269]: 2026-01-23 10:10:46.774 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:46 np0005593232 ovn_controller[151001]: 2026-01-23T10:10:46Z|00445|binding|INFO|Setting lport acd0a614-69ea-41fa-9830-ca2c81f259b9 up in Southbound
Jan 23 05:10:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:46.776 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:c1:86 10.100.0.2'], port_security=['fa:16:3e:50:c1:86 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '280ec2fb-6ca3-4b43-bff4-ba64ac3a935b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88b571fd-69ad-4860-a596-3bd637fdb189', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f00cc6e26e5c435b902306c6421e146d', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'a7b9167c-c78b-48f5-9e9d-ac8ada29e0a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d050303a-8173-4865-aab2-724e0c0624de, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=acd0a614-69ea-41fa-9830-ca2c81f259b9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:10:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:46.777 161902 INFO neutron.agent.ovn.metadata.agent [-] Port acd0a614-69ea-41fa-9830-ca2c81f259b9 in datapath 88b571fd-69ad-4860-a596-3bd637fdb189 bound to our chassis#033[00m
Jan 23 05:10:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:46.778 161902 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 88b571fd-69ad-4860-a596-3bd637fdb189 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 23 05:10:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:46.779 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d13ca825-2465-4683-b7e8-7bf04f21264d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:10:46 np0005593232 systemd-udevd[333709]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:10:46 np0005593232 NetworkManager[49057]: <info>  [1769163046.7934] device (tapacd0a614-69): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:10:46 np0005593232 systemd-machined[215836]: New machine qemu-54-instance-00000080.
Jan 23 05:10:46 np0005593232 NetworkManager[49057]: <info>  [1769163046.7940] device (tapacd0a614-69): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:10:46 np0005593232 systemd[1]: Started Virtual Machine qemu-54-instance-00000080.
Jan 23 05:10:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:10:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:10:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:10:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:10:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006322665119579378 of space, bias 1.0, pg target 1.8967995358738134 quantized to 32 (current 32)
Jan 23 05:10:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:10:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.635783082077052e-06 of space, bias 1.0, pg target 0.0004890991415410385 quantized to 32 (current 32)
Jan 23 05:10:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:10:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:10:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:10:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.004066738495719337 of space, bias 1.0, pg target 1.215954810220082 quantized to 32 (current 32)
Jan 23 05:10:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:10:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 23 05:10:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:10:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:10:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:10:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 23 05:10:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:10:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 23 05:10:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:10:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:10:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:10:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 23 05:10:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2417: 321 pgs: 321 active+clean; 375 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.1 MiB/s wr, 203 op/s
Jan 23 05:10:47 np0005593232 nova_compute[250269]: 2026-01-23 10:10:47.271 250273 DEBUG nova.compute.manager [req-af416c84-7f6d-4cca-b3b0-92faee2214f6 req-e0a20f21-8eb7-4abb-bb53-c2b4c54d4d54 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Received event network-vif-plugged-acd0a614-69ea-41fa-9830-ca2c81f259b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:10:47 np0005593232 nova_compute[250269]: 2026-01-23 10:10:47.272 250273 DEBUG oslo_concurrency.lockutils [req-af416c84-7f6d-4cca-b3b0-92faee2214f6 req-e0a20f21-8eb7-4abb-bb53-c2b4c54d4d54 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:10:47 np0005593232 nova_compute[250269]: 2026-01-23 10:10:47.273 250273 DEBUG oslo_concurrency.lockutils [req-af416c84-7f6d-4cca-b3b0-92faee2214f6 req-e0a20f21-8eb7-4abb-bb53-c2b4c54d4d54 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:10:47 np0005593232 nova_compute[250269]: 2026-01-23 10:10:47.273 250273 DEBUG oslo_concurrency.lockutils [req-af416c84-7f6d-4cca-b3b0-92faee2214f6 req-e0a20f21-8eb7-4abb-bb53-c2b4c54d4d54 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:10:47 np0005593232 nova_compute[250269]: 2026-01-23 10:10:47.273 250273 DEBUG nova.compute.manager [req-af416c84-7f6d-4cca-b3b0-92faee2214f6 req-e0a20f21-8eb7-4abb-bb53-c2b4c54d4d54 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] No waiting events found dispatching network-vif-plugged-acd0a614-69ea-41fa-9830-ca2c81f259b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:10:47 np0005593232 nova_compute[250269]: 2026-01-23 10:10:47.274 250273 WARNING nova.compute.manager [req-af416c84-7f6d-4cca-b3b0-92faee2214f6 req-e0a20f21-8eb7-4abb-bb53-c2b4c54d4d54 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Received unexpected event network-vif-plugged-acd0a614-69ea-41fa-9830-ca2c81f259b9 for instance with vm_state active and task_state rescuing.#033[00m
Jan 23 05:10:47 np0005593232 nova_compute[250269]: 2026-01-23 10:10:47.309 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:47 np0005593232 nova_compute[250269]: 2026-01-23 10:10:47.571 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Removed pending event for 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 23 05:10:47 np0005593232 nova_compute[250269]: 2026-01-23 10:10:47.573 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163047.570826, 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:10:47 np0005593232 nova_compute[250269]: 2026-01-23 10:10:47.573 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:10:47 np0005593232 nova_compute[250269]: 2026-01-23 10:10:47.581 250273 DEBUG nova.compute.manager [None req-e8267e25-b95c-466d-ad10-807d6eba6c19 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:10:47 np0005593232 nova_compute[250269]: 2026-01-23 10:10:47.626 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:10:47 np0005593232 nova_compute[250269]: 2026-01-23 10:10:47.630 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:10:47 np0005593232 nova_compute[250269]: 2026-01-23 10:10:47.672 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163047.5734892, 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:10:47 np0005593232 nova_compute[250269]: 2026-01-23 10:10:47.672 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] VM Started (Lifecycle Event)#033[00m
Jan 23 05:10:47 np0005593232 nova_compute[250269]: 2026-01-23 10:10:47.697 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:10:47 np0005593232 nova_compute[250269]: 2026-01-23 10:10:47.701 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:10:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:47.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:10:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:48.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:10:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2418: 321 pgs: 321 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.4 MiB/s wr, 218 op/s
Jan 23 05:10:49 np0005593232 nova_compute[250269]: 2026-01-23 10:10:49.708 250273 DEBUG nova.compute.manager [req-c9696d1b-fc21-42ac-95f9-76174b0c3058 req-4c544bf0-348b-47c6-82b1-757cf2dad6ab 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Received event network-vif-plugged-acd0a614-69ea-41fa-9830-ca2c81f259b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:10:49 np0005593232 nova_compute[250269]: 2026-01-23 10:10:49.709 250273 DEBUG oslo_concurrency.lockutils [req-c9696d1b-fc21-42ac-95f9-76174b0c3058 req-4c544bf0-348b-47c6-82b1-757cf2dad6ab 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:10:49 np0005593232 nova_compute[250269]: 2026-01-23 10:10:49.710 250273 DEBUG oslo_concurrency.lockutils [req-c9696d1b-fc21-42ac-95f9-76174b0c3058 req-4c544bf0-348b-47c6-82b1-757cf2dad6ab 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:10:49 np0005593232 nova_compute[250269]: 2026-01-23 10:10:49.710 250273 DEBUG oslo_concurrency.lockutils [req-c9696d1b-fc21-42ac-95f9-76174b0c3058 req-4c544bf0-348b-47c6-82b1-757cf2dad6ab 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:10:49 np0005593232 nova_compute[250269]: 2026-01-23 10:10:49.711 250273 DEBUG nova.compute.manager [req-c9696d1b-fc21-42ac-95f9-76174b0c3058 req-4c544bf0-348b-47c6-82b1-757cf2dad6ab 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] No waiting events found dispatching network-vif-plugged-acd0a614-69ea-41fa-9830-ca2c81f259b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:10:49 np0005593232 nova_compute[250269]: 2026-01-23 10:10:49.711 250273 WARNING nova.compute.manager [req-c9696d1b-fc21-42ac-95f9-76174b0c3058 req-4c544bf0-348b-47c6-82b1-757cf2dad6ab 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Received unexpected event network-vif-plugged-acd0a614-69ea-41fa-9830-ca2c81f259b9 for instance with vm_state rescued and task_state None.#033[00m
Jan 23 05:10:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:10:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:49.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:10:50 np0005593232 nova_compute[250269]: 2026-01-23 10:10:50.067 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Updating instance_info_cache with network_info: [{"id": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "address": "fa:16:3e:50:c1:86", "network": {"id": "88b571fd-69ad-4860-a596-3bd637fdb189", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1616424882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f00cc6e26e5c435b902306c6421e146d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd0a614-69", "ovs_interfaceid": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:10:50 np0005593232 nova_compute[250269]: 2026-01-23 10:10:50.259 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-280ec2fb-6ca3-4b43-bff4-ba64ac3a935b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:10:50 np0005593232 nova_compute[250269]: 2026-01-23 10:10:50.259 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:10:50 np0005593232 nova_compute[250269]: 2026-01-23 10:10:50.260 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:10:50 np0005593232 nova_compute[250269]: 2026-01-23 10:10:50.260 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:10:50 np0005593232 nova_compute[250269]: 2026-01-23 10:10:50.260 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:10:50 np0005593232 nova_compute[250269]: 2026-01-23 10:10:50.309 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:10:50 np0005593232 nova_compute[250269]: 2026-01-23 10:10:50.309 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:10:50 np0005593232 nova_compute[250269]: 2026-01-23 10:10:50.309 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:10:50 np0005593232 nova_compute[250269]: 2026-01-23 10:10:50.310 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:10:50 np0005593232 nova_compute[250269]: 2026-01-23 10:10:50.310 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:10:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:10:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:50.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:10:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:10:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2654693583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:10:50 np0005593232 nova_compute[250269]: 2026-01-23 10:10:50.788 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:10:50 np0005593232 podman[333855]: 2026-01-23 10:10:50.924879242 +0000 UTC m=+0.093850677 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:10:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2419: 321 pgs: 321 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 357 KiB/s rd, 3.9 MiB/s wr, 113 op/s
Jan 23 05:10:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:10:51 np0005593232 nova_compute[250269]: 2026-01-23 10:10:51.773 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:51.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:52 np0005593232 nova_compute[250269]: 2026-01-23 10:10:52.311 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:10:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:10:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:52.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:10:52 np0005593232 nova_compute[250269]: 2026-01-23 10:10:52.668 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:10:52 np0005593232 nova_compute[250269]: 2026-01-23 10:10:52.668 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:10:52 np0005593232 nova_compute[250269]: 2026-01-23 10:10:52.668 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:10:52 np0005593232 nova_compute[250269]: 2026-01-23 10:10:52.814 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:10:52 np0005593232 nova_compute[250269]: 2026-01-23 10:10:52.815 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4157MB free_disk=20.830928802490234GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:10:52 np0005593232 nova_compute[250269]: 2026-01-23 10:10:52.815 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:10:52 np0005593232 nova_compute[250269]: 2026-01-23 10:10:52.815 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:10:53 np0005593232 nova_compute[250269]: 2026-01-23 10:10:53.134 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 05:10:53 np0005593232 nova_compute[250269]: 2026-01-23 10:10:53.135 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 05:10:53 np0005593232 nova_compute[250269]: 2026-01-23 10:10:53.135 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 05:10:53 np0005593232 nova_compute[250269]: 2026-01-23 10:10:53.232 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:10:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2420: 321 pgs: 321 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 186 op/s
Jan 23 05:10:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:10:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2810855817' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:10:53 np0005593232 nova_compute[250269]: 2026-01-23 10:10:53.672 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:10:53 np0005593232 nova_compute[250269]: 2026-01-23 10:10:53.680 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 05:10:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:53.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:54 np0005593232 nova_compute[250269]: 2026-01-23 10:10:54.052 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 05:10:54 np0005593232 nova_compute[250269]: 2026-01-23 10:10:54.154 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 05:10:54 np0005593232 nova_compute[250269]: 2026-01-23 10:10:54.155 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:10:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:10:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:54.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:10:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2421: 321 pgs: 321 active+clean; 405 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 142 op/s
Jan 23 05:10:55 np0005593232 podman[333907]: 2026-01-23 10:10:55.387708531 +0000 UTC m=+0.050730222 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 23 05:10:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:55.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:56.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:10:56 np0005593232 nova_compute[250269]: 2026-01-23 10:10:56.774 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:10:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2422: 321 pgs: 321 active+clean; 412 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 145 op/s
Jan 23 05:10:57 np0005593232 nova_compute[250269]: 2026-01-23 10:10:57.313 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:10:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:57.797 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 05:10:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:10:57.798 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 05:10:57 np0005593232 nova_compute[250269]: 2026-01-23 10:10:57.799 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:10:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:57.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:10:58.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:10:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2423: 321 pgs: 321 active+clean; 461 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.7 MiB/s wr, 146 op/s
Jan 23 05:10:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:10:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:10:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:10:59.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:11:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:00.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:11:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2424: 321 pgs: 321 active+clean; 461 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 106 op/s
Jan 23 05:11:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:11:01 np0005593232 nova_compute[250269]: 2026-01-23 10:11:01.776 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:11:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:01.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:02 np0005593232 nova_compute[250269]: 2026-01-23 10:11:02.315 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:11:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:11:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:02.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:11:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2425: 321 pgs: 321 active+clean; 530 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 5.3 MiB/s wr, 188 op/s
Jan 23 05:11:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:11:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:03.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:11:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:04.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2426: 321 pgs: 321 active+clean; 569 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 7.4 MiB/s wr, 173 op/s
Jan 23 05:11:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:05.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:11:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:06.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:11:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e291 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:11:06 np0005593232 nova_compute[250269]: 2026-01-23 10:11:06.777 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:11:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:11:06.799 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 05:11:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2427: 321 pgs: 321 active+clean; 570 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 7.4 MiB/s wr, 177 op/s
Jan 23 05:11:07 np0005593232 nova_compute[250269]: 2026-01-23 10:11:07.317 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:11:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:11:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:11:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:11:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:11:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:11:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:11:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:07.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:08.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2428: 321 pgs: 321 active+clean; 527 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 7.4 MiB/s wr, 263 op/s
Jan 23 05:11:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Jan 23 05:11:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Jan 23 05:11:09 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Jan 23 05:11:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:09.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:11:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:10.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:11:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2430: 321 pgs: 321 active+clean; 527 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 6.5 MiB/s wr, 280 op/s
Jan 23 05:11:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:11:11 np0005593232 nova_compute[250269]: 2026-01-23 10:11:11.778 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:11:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:11:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:11.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:11:12 np0005593232 nova_compute[250269]: 2026-01-23 10:11:12.489 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:11:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:12.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2431: 321 pgs: 321 active+clean; 435 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.7 MiB/s rd, 2.6 MiB/s wr, 367 op/s
Jan 23 05:11:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:13.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 05:11:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:14.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 05:11:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2432: 321 pgs: 321 active+clean; 420 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 70 KiB/s wr, 352 op/s
Jan 23 05:11:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:15.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:11:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:16.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:11:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:11:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Jan 23 05:11:16 np0005593232 nova_compute[250269]: 2026-01-23 10:11:16.781 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:11:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 05:11:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 37K writes, 141K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s#012Cumulative WAL: 37K writes, 13K syncs, 2.76 writes per sync, written: 0.13 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7751 writes, 28K keys, 7751 commit groups, 1.0 writes per commit group, ingest: 28.51 MB, 0.05 MB/s#012Interval WAL: 7751 writes, 3242 syncs, 2.39 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 05:11:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Jan 23 05:11:17 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Jan 23 05:11:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:11:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3411181308' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:11:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:11:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3411181308' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:11:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2434: 321 pgs: 321 active+clean; 420 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 7.0 MiB/s rd, 41 KiB/s wr, 311 op/s
Jan 23 05:11:17 np0005593232 nova_compute[250269]: 2026-01-23 10:11:17.491 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:11:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:17.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:11:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:18.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:11:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2435: 321 pgs: 321 active+clean; 420 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.7 MiB/s rd, 35 KiB/s wr, 255 op/s
Jan 23 05:11:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:19.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:11:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:20.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:11:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2436: 321 pgs: 321 active+clean; 420 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.6 MiB/s rd, 35 KiB/s wr, 252 op/s
Jan 23 05:11:21 np0005593232 podman[333991]: 2026-01-23 10:11:21.482072725 +0000 UTC m=+0.126134165 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 23 05:11:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:11:21 np0005593232 nova_compute[250269]: 2026-01-23 10:11:21.784 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:11:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:21.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:22 np0005593232 nova_compute[250269]: 2026-01-23 10:11:22.492 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:11:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:22.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2437: 321 pgs: 321 active+clean; 441 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Jan 23 05:11:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:23.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:11:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:24.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:11:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2438: 321 pgs: 321 active+clean; 465 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 982 KiB/s rd, 4.1 MiB/s wr, 179 op/s
Jan 23 05:11:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:25.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:26 np0005593232 podman[334022]: 2026-01-23 10:11:26.390135927 +0000 UTC m=+0.050994130 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 05:11:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:11:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:26.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:11:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:11:26 np0005593232 nova_compute[250269]: 2026-01-23 10:11:26.785 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:11:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2439: 321 pgs: 321 active+clean; 483 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 5.0 MiB/s wr, 217 op/s
Jan 23 05:11:27 np0005593232 ceph-mgr[74726]: [devicehealth INFO root] Check health
Jan 23 05:11:27 np0005593232 nova_compute[250269]: 2026-01-23 10:11:27.538 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:11:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:27.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:28.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2440: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.3 MiB/s wr, 205 op/s
Jan 23 05:11:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:11:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:29.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:11:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:30.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:31 np0005593232 nova_compute[250269]: 2026-01-23 10:11:31.067 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:11:31 np0005593232 nova_compute[250269]: 2026-01-23 10:11:31.253 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:11:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2441: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.3 MiB/s wr, 203 op/s
Jan 23 05:11:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:11:31 np0005593232 nova_compute[250269]: 2026-01-23 10:11:31.788 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:11:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:11:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:31.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:11:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 05:11:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:11:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 05:11:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:11:32 np0005593232 nova_compute[250269]: 2026-01-23 10:11:32.541 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:11:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:32.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:11:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:11:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:11:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:11:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:11:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:11:32 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 909de65b-a596-46e2-a60c-0c7c9d18566a does not exist
Jan 23 05:11:32 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 0b2e9612-c0a1-4d3f-8982-f5ec592e6d9e does not exist
Jan 23 05:11:32 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d256aa7f-40dd-4533-b30c-9f2bec1c00c4 does not exist
Jan 23 05:11:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:11:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:11:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:11:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:11:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:11:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:11:32 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:11:32 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:11:32 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:11:32 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:11:32 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:11:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2442: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.3 MiB/s wr, 209 op/s
Jan 23 05:11:33 np0005593232 podman[334367]: 2026-01-23 10:11:33.410329522 +0000 UTC m=+0.042253212 container create f655d8fdcf2835fd6cc8ed37a17695db88a4678def72d00f2394f47fda95def5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kilby, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:11:33 np0005593232 systemd[1]: Started libpod-conmon-f655d8fdcf2835fd6cc8ed37a17695db88a4678def72d00f2394f47fda95def5.scope.
Jan 23 05:11:33 np0005593232 podman[334367]: 2026-01-23 10:11:33.38950098 +0000 UTC m=+0.021424680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:11:33 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:11:33 np0005593232 podman[334367]: 2026-01-23 10:11:33.51972113 +0000 UTC m=+0.151644860 container init f655d8fdcf2835fd6cc8ed37a17695db88a4678def72d00f2394f47fda95def5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 05:11:33 np0005593232 podman[334367]: 2026-01-23 10:11:33.528435778 +0000 UTC m=+0.160359468 container start f655d8fdcf2835fd6cc8ed37a17695db88a4678def72d00f2394f47fda95def5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kilby, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 05:11:33 np0005593232 podman[334367]: 2026-01-23 10:11:33.532342109 +0000 UTC m=+0.164265819 container attach f655d8fdcf2835fd6cc8ed37a17695db88a4678def72d00f2394f47fda95def5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kilby, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 05:11:33 np0005593232 great_kilby[334383]: 167 167
Jan 23 05:11:33 np0005593232 systemd[1]: libpod-f655d8fdcf2835fd6cc8ed37a17695db88a4678def72d00f2394f47fda95def5.scope: Deactivated successfully.
Jan 23 05:11:33 np0005593232 conmon[334383]: conmon f655d8fdcf2835fd6cc8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f655d8fdcf2835fd6cc8ed37a17695db88a4678def72d00f2394f47fda95def5.scope/container/memory.events
Jan 23 05:11:33 np0005593232 podman[334367]: 2026-01-23 10:11:33.537111184 +0000 UTC m=+0.169035694 container died f655d8fdcf2835fd6cc8ed37a17695db88a4678def72d00f2394f47fda95def5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kilby, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 05:11:33 np0005593232 systemd[1]: var-lib-containers-storage-overlay-214c00c58686f32088290107bca899bb47004f37e54317209c6de78fc5f30976-merged.mount: Deactivated successfully.
Jan 23 05:11:33 np0005593232 podman[334367]: 2026-01-23 10:11:33.598142709 +0000 UTC m=+0.230066399 container remove f655d8fdcf2835fd6cc8ed37a17695db88a4678def72d00f2394f47fda95def5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 05:11:33 np0005593232 systemd[1]: libpod-conmon-f655d8fdcf2835fd6cc8ed37a17695db88a4678def72d00f2394f47fda95def5.scope: Deactivated successfully.
Jan 23 05:11:33 np0005593232 podman[334407]: 2026-01-23 10:11:33.759116443 +0000 UTC m=+0.041061068 container create c1b3535e2a003408070633b4d839022f80500117d57772ee54d9fb26689c1fe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jackson, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 05:11:33 np0005593232 systemd[1]: Started libpod-conmon-c1b3535e2a003408070633b4d839022f80500117d57772ee54d9fb26689c1fe5.scope.
Jan 23 05:11:33 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:11:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be884b5aa94ee8553fa8192b67e46de2b94f2d0624b3008efc5fb23f9ff9d922/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:11:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be884b5aa94ee8553fa8192b67e46de2b94f2d0624b3008efc5fb23f9ff9d922/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:11:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be884b5aa94ee8553fa8192b67e46de2b94f2d0624b3008efc5fb23f9ff9d922/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:11:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be884b5aa94ee8553fa8192b67e46de2b94f2d0624b3008efc5fb23f9ff9d922/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:11:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be884b5aa94ee8553fa8192b67e46de2b94f2d0624b3008efc5fb23f9ff9d922/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:11:33 np0005593232 podman[334407]: 2026-01-23 10:11:33.742159981 +0000 UTC m=+0.024104606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:11:33 np0005593232 podman[334407]: 2026-01-23 10:11:33.846607229 +0000 UTC m=+0.128551874 container init c1b3535e2a003408070633b4d839022f80500117d57772ee54d9fb26689c1fe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:11:33 np0005593232 podman[334407]: 2026-01-23 10:11:33.853264268 +0000 UTC m=+0.135208893 container start c1b3535e2a003408070633b4d839022f80500117d57772ee54d9fb26689c1fe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:11:33 np0005593232 podman[334407]: 2026-01-23 10:11:33.857071856 +0000 UTC m=+0.139016481 container attach c1b3535e2a003408070633b4d839022f80500117d57772ee54d9fb26689c1fe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jackson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:11:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:33.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:11:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:34.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:11:34 np0005593232 optimistic_jackson[334424]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:11:34 np0005593232 optimistic_jackson[334424]: --> relative data size: 1.0
Jan 23 05:11:34 np0005593232 optimistic_jackson[334424]: --> All data devices are unavailable
Jan 23 05:11:34 np0005593232 systemd[1]: libpod-c1b3535e2a003408070633b4d839022f80500117d57772ee54d9fb26689c1fe5.scope: Deactivated successfully.
Jan 23 05:11:34 np0005593232 podman[334407]: 2026-01-23 10:11:34.735171277 +0000 UTC m=+1.017115922 container died c1b3535e2a003408070633b4d839022f80500117d57772ee54d9fb26689c1fe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jackson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:11:34 np0005593232 systemd[1]: var-lib-containers-storage-overlay-be884b5aa94ee8553fa8192b67e46de2b94f2d0624b3008efc5fb23f9ff9d922-merged.mount: Deactivated successfully.
Jan 23 05:11:34 np0005593232 podman[334407]: 2026-01-23 10:11:34.968810895 +0000 UTC m=+1.250755520 container remove c1b3535e2a003408070633b4d839022f80500117d57772ee54d9fb26689c1fe5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 05:11:34 np0005593232 systemd[1]: libpod-conmon-c1b3535e2a003408070633b4d839022f80500117d57772ee54d9fb26689c1fe5.scope: Deactivated successfully.
Jan 23 05:11:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2443: 321 pgs: 321 active+clean; 486 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 893 KiB/s rd, 2.6 MiB/s wr, 154 op/s
Jan 23 05:11:35 np0005593232 podman[334594]: 2026-01-23 10:11:35.560666442 +0000 UTC m=+0.042519649 container create d329e669d069cdd0d71c7f2a2513fbab1a4d70b5e675873cb53be6517f3f662c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 23 05:11:35 np0005593232 systemd[1]: Started libpod-conmon-d329e669d069cdd0d71c7f2a2513fbab1a4d70b5e675873cb53be6517f3f662c.scope.
Jan 23 05:11:35 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:11:35 np0005593232 podman[334594]: 2026-01-23 10:11:35.540203651 +0000 UTC m=+0.022056888 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:11:35 np0005593232 podman[334594]: 2026-01-23 10:11:35.643787814 +0000 UTC m=+0.125641051 container init d329e669d069cdd0d71c7f2a2513fbab1a4d70b5e675873cb53be6517f3f662c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Jan 23 05:11:35 np0005593232 podman[334594]: 2026-01-23 10:11:35.65245828 +0000 UTC m=+0.134311477 container start d329e669d069cdd0d71c7f2a2513fbab1a4d70b5e675873cb53be6517f3f662c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cohen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:11:35 np0005593232 podman[334594]: 2026-01-23 10:11:35.656179206 +0000 UTC m=+0.138032443 container attach d329e669d069cdd0d71c7f2a2513fbab1a4d70b5e675873cb53be6517f3f662c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:11:35 np0005593232 systemd[1]: libpod-d329e669d069cdd0d71c7f2a2513fbab1a4d70b5e675873cb53be6517f3f662c.scope: Deactivated successfully.
Jan 23 05:11:35 np0005593232 affectionate_cohen[334610]: 167 167
Jan 23 05:11:35 np0005593232 conmon[334610]: conmon d329e669d069cdd0d71c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d329e669d069cdd0d71c7f2a2513fbab1a4d70b5e675873cb53be6517f3f662c.scope/container/memory.events
Jan 23 05:11:35 np0005593232 podman[334594]: 2026-01-23 10:11:35.658280266 +0000 UTC m=+0.140133473 container died d329e669d069cdd0d71c7f2a2513fbab1a4d70b5e675873cb53be6517f3f662c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cohen, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:11:35 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e758c4ff99772294e2b90bf5df3d07361f13ece5e94eb9802201595ceb9d1acb-merged.mount: Deactivated successfully.
Jan 23 05:11:35 np0005593232 podman[334594]: 2026-01-23 10:11:35.712352832 +0000 UTC m=+0.194206039 container remove d329e669d069cdd0d71c7f2a2513fbab1a4d70b5e675873cb53be6517f3f662c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_cohen, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:11:35 np0005593232 systemd[1]: libpod-conmon-d329e669d069cdd0d71c7f2a2513fbab1a4d70b5e675873cb53be6517f3f662c.scope: Deactivated successfully.
Jan 23 05:11:35 np0005593232 podman[334633]: 2026-01-23 10:11:35.866613325 +0000 UTC m=+0.040390538 container create 7ae3809e12cc95fede1a9674f55b7ec93e1ce386f69b52a37ac6906320e0f18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:11:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:11:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:35.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:11:35 np0005593232 systemd[1]: Started libpod-conmon-7ae3809e12cc95fede1a9674f55b7ec93e1ce386f69b52a37ac6906320e0f18a.scope.
Jan 23 05:11:35 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:11:35 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c82c77661c652beaf67375d6eefb4ef6addcddb483a0d964959a505783ddd8ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:11:35 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c82c77661c652beaf67375d6eefb4ef6addcddb483a0d964959a505783ddd8ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:11:35 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c82c77661c652beaf67375d6eefb4ef6addcddb483a0d964959a505783ddd8ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:11:35 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c82c77661c652beaf67375d6eefb4ef6addcddb483a0d964959a505783ddd8ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:11:35 np0005593232 podman[334633]: 2026-01-23 10:11:35.848907182 +0000 UTC m=+0.022684405 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:11:35 np0005593232 podman[334633]: 2026-01-23 10:11:35.959166985 +0000 UTC m=+0.132944198 container init 7ae3809e12cc95fede1a9674f55b7ec93e1ce386f69b52a37ac6906320e0f18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_black, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:11:35 np0005593232 podman[334633]: 2026-01-23 10:11:35.96599571 +0000 UTC m=+0.139772903 container start 7ae3809e12cc95fede1a9674f55b7ec93e1ce386f69b52a37ac6906320e0f18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:11:35 np0005593232 podman[334633]: 2026-01-23 10:11:35.972527475 +0000 UTC m=+0.146304668 container attach 7ae3809e12cc95fede1a9674f55b7ec93e1ce386f69b52a37ac6906320e0f18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_black, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 23 05:11:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:11:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:36.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:11:36 np0005593232 dazzling_black[334652]: {
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:    "0": [
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:        {
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:            "devices": [
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:                "/dev/loop3"
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:            ],
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:            "lv_name": "ceph_lv0",
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:            "lv_size": "7511998464",
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:            "name": "ceph_lv0",
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:            "tags": {
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:                "ceph.cluster_name": "ceph",
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:                "ceph.crush_device_class": "",
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:                "ceph.encrypted": "0",
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:                "ceph.osd_id": "0",
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:                "ceph.type": "block",
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:                "ceph.vdo": "0"
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:            },
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:            "type": "block",
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:            "vg_name": "ceph_vg0"
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:        }
Jan 23 05:11:36 np0005593232 dazzling_black[334652]:    ]
Jan 23 05:11:36 np0005593232 dazzling_black[334652]: }
Jan 23 05:11:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:11:36 np0005593232 systemd[1]: libpod-7ae3809e12cc95fede1a9674f55b7ec93e1ce386f69b52a37ac6906320e0f18a.scope: Deactivated successfully.
Jan 23 05:11:36 np0005593232 conmon[334652]: conmon 7ae3809e12cc95fede1a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7ae3809e12cc95fede1a9674f55b7ec93e1ce386f69b52a37ac6906320e0f18a.scope/container/memory.events
Jan 23 05:11:36 np0005593232 podman[334633]: 2026-01-23 10:11:36.782273504 +0000 UTC m=+0.956050697 container died 7ae3809e12cc95fede1a9674f55b7ec93e1ce386f69b52a37ac6906320e0f18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 05:11:36 np0005593232 nova_compute[250269]: 2026-01-23 10:11:36.789 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:11:36 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c82c77661c652beaf67375d6eefb4ef6addcddb483a0d964959a505783ddd8ff-merged.mount: Deactivated successfully.
Jan 23 05:11:36 np0005593232 podman[334633]: 2026-01-23 10:11:36.834014524 +0000 UTC m=+1.007791717 container remove 7ae3809e12cc95fede1a9674f55b7ec93e1ce386f69b52a37ac6906320e0f18a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_black, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 23 05:11:36 np0005593232 systemd[1]: libpod-conmon-7ae3809e12cc95fede1a9674f55b7ec93e1ce386f69b52a37ac6906320e0f18a.scope: Deactivated successfully.
Jan 23 05:11:36 np0005593232 auditd[705]: Audit daemon rotating log files
Jan 23 05:11:37 np0005593232 nova_compute[250269]: 2026-01-23 10:11:37.186 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:11:37 np0005593232 nova_compute[250269]: 2026-01-23 10:11:37.186 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:11:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2444: 321 pgs: 321 active+clean; 510 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 403 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Jan 23 05:11:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:11:37
Jan 23 05:11:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:11:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:11:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['images', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'volumes']
Jan 23 05:11:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:11:37 np0005593232 nova_compute[250269]: 2026-01-23 10:11:37.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:11:37 np0005593232 podman[334811]: 2026-01-23 10:11:37.418713788 +0000 UTC m=+0.039044690 container create c48d547b6746cd1ba7ef496210b7a2cdfc9d31dfdaf272e4c494134c9f3bfaca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_brahmagupta, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:11:37 np0005593232 systemd[1]: Started libpod-conmon-c48d547b6746cd1ba7ef496210b7a2cdfc9d31dfdaf272e4c494134c9f3bfaca.scope.
Jan 23 05:11:37 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:11:37 np0005593232 podman[334811]: 2026-01-23 10:11:37.401937561 +0000 UTC m=+0.022268493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:11:37 np0005593232 podman[334811]: 2026-01-23 10:11:37.500277776 +0000 UTC m=+0.120608708 container init c48d547b6746cd1ba7ef496210b7a2cdfc9d31dfdaf272e4c494134c9f3bfaca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 05:11:37 np0005593232 podman[334811]: 2026-01-23 10:11:37.508314704 +0000 UTC m=+0.128645606 container start c48d547b6746cd1ba7ef496210b7a2cdfc9d31dfdaf272e4c494134c9f3bfaca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 05:11:37 np0005593232 podman[334811]: 2026-01-23 10:11:37.512026349 +0000 UTC m=+0.132357241 container attach c48d547b6746cd1ba7ef496210b7a2cdfc9d31dfdaf272e4c494134c9f3bfaca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_brahmagupta, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:11:37 np0005593232 competent_brahmagupta[334828]: 167 167
Jan 23 05:11:37 np0005593232 systemd[1]: libpod-c48d547b6746cd1ba7ef496210b7a2cdfc9d31dfdaf272e4c494134c9f3bfaca.scope: Deactivated successfully.
Jan 23 05:11:37 np0005593232 podman[334811]: 2026-01-23 10:11:37.514995084 +0000 UTC m=+0.135325986 container died c48d547b6746cd1ba7ef496210b7a2cdfc9d31dfdaf272e4c494134c9f3bfaca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_brahmagupta, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:11:37 np0005593232 systemd[1]: var-lib-containers-storage-overlay-dd1dbbb148b5f7caf2ddc7a529b28cbbdf3b436ed030289ccb335e8e13808e9c-merged.mount: Deactivated successfully.
Jan 23 05:11:37 np0005593232 nova_compute[250269]: 2026-01-23 10:11:37.543 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:11:37 np0005593232 podman[334811]: 2026-01-23 10:11:37.555071693 +0000 UTC m=+0.175402585 container remove c48d547b6746cd1ba7ef496210b7a2cdfc9d31dfdaf272e4c494134c9f3bfaca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:11:37 np0005593232 systemd[1]: libpod-conmon-c48d547b6746cd1ba7ef496210b7a2cdfc9d31dfdaf272e4c494134c9f3bfaca.scope: Deactivated successfully.
Jan 23 05:11:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:11:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:11:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:11:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:11:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:11:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:11:37 np0005593232 podman[334851]: 2026-01-23 10:11:37.738400422 +0000 UTC m=+0.041848880 container create fb38119322b6a5fea9fe4e4fd320953c882a5ef6d47724b55a7fb19103a01628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:11:37 np0005593232 systemd[1]: Started libpod-conmon-fb38119322b6a5fea9fe4e4fd320953c882a5ef6d47724b55a7fb19103a01628.scope.
Jan 23 05:11:37 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:11:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e93f7f7a7681917bfec1af70fa770bbe3970458721f8d056b54809b8aaf8f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:11:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e93f7f7a7681917bfec1af70fa770bbe3970458721f8d056b54809b8aaf8f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:11:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e93f7f7a7681917bfec1af70fa770bbe3970458721f8d056b54809b8aaf8f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:11:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e93f7f7a7681917bfec1af70fa770bbe3970458721f8d056b54809b8aaf8f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:11:37 np0005593232 podman[334851]: 2026-01-23 10:11:37.719915907 +0000 UTC m=+0.023364385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:11:37 np0005593232 podman[334851]: 2026-01-23 10:11:37.822121971 +0000 UTC m=+0.125570449 container init fb38119322b6a5fea9fe4e4fd320953c882a5ef6d47724b55a7fb19103a01628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:11:37 np0005593232 podman[334851]: 2026-01-23 10:11:37.829301345 +0000 UTC m=+0.132749803 container start fb38119322b6a5fea9fe4e4fd320953c882a5ef6d47724b55a7fb19103a01628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:11:37 np0005593232 podman[334851]: 2026-01-23 10:11:37.833108723 +0000 UTC m=+0.136557211 container attach fb38119322b6a5fea9fe4e4fd320953c882a5ef6d47724b55a7fb19103a01628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:11:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:37.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:11:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:11:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:11:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:11:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:11:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:11:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:11:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:11:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:11:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:11:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:38.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:38 np0005593232 condescending_diffie[334867]: {
Jan 23 05:11:38 np0005593232 condescending_diffie[334867]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:11:38 np0005593232 condescending_diffie[334867]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:11:38 np0005593232 condescending_diffie[334867]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:11:38 np0005593232 condescending_diffie[334867]:        "osd_id": 0,
Jan 23 05:11:38 np0005593232 condescending_diffie[334867]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:11:38 np0005593232 condescending_diffie[334867]:        "type": "bluestore"
Jan 23 05:11:38 np0005593232 condescending_diffie[334867]:    }
Jan 23 05:11:38 np0005593232 condescending_diffie[334867]: }
Jan 23 05:11:38 np0005593232 systemd[1]: libpod-fb38119322b6a5fea9fe4e4fd320953c882a5ef6d47724b55a7fb19103a01628.scope: Deactivated successfully.
Jan 23 05:11:38 np0005593232 podman[334889]: 2026-01-23 10:11:38.794020626 +0000 UTC m=+0.024731954 container died fb38119322b6a5fea9fe4e4fd320953c882a5ef6d47724b55a7fb19103a01628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 23 05:11:38 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d6e93f7f7a7681917bfec1af70fa770bbe3970458721f8d056b54809b8aaf8f9-merged.mount: Deactivated successfully.
Jan 23 05:11:38 np0005593232 podman[334889]: 2026-01-23 10:11:38.845794557 +0000 UTC m=+0.076505865 container remove fb38119322b6a5fea9fe4e4fd320953c882a5ef6d47724b55a7fb19103a01628 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_diffie, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 05:11:38 np0005593232 systemd[1]: libpod-conmon-fb38119322b6a5fea9fe4e4fd320953c882a5ef6d47724b55a7fb19103a01628.scope: Deactivated successfully.
Jan 23 05:11:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:11:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:11:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:11:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:11:39 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev e5089f87-c87a-4d2e-8bc0-562ee8c70655 does not exist
Jan 23 05:11:39 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 2b7f33bc-f90b-481e-b221-a2ee10ea764a does not exist
Jan 23 05:11:39 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 89d1c20f-5123-4e44-a614-c35569ccb520 does not exist
Jan 23 05:11:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:11:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:11:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2445: 321 pgs: 321 active+clean; 534 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 1.9 MiB/s wr, 52 op/s
Jan 23 05:11:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:39.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:11:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:40.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:11:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2446: 321 pgs: 321 active+clean; 534 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 23 05:11:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:11:41 np0005593232 nova_compute[250269]: 2026-01-23 10:11:41.791 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:11:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:41.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:41 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Jan 23 05:11:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:11:41.882239) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:11:41 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Jan 23 05:11:41 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163101882396, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1533, "num_deletes": 261, "total_data_size": 2458415, "memory_usage": 2489992, "flush_reason": "Manual Compaction"}
Jan 23 05:11:41 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Jan 23 05:11:41 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163101906764, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 2420708, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53021, "largest_seqno": 54553, "table_properties": {"data_size": 2413520, "index_size": 4131, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15848, "raw_average_key_size": 20, "raw_value_size": 2398786, "raw_average_value_size": 3087, "num_data_blocks": 180, "num_entries": 777, "num_filter_entries": 777, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769162974, "oldest_key_time": 1769162974, "file_creation_time": 1769163101, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:11:41 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 24609 microseconds, and 8264 cpu microseconds.
Jan 23 05:11:41 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:11:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:11:41.906842) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 2420708 bytes OK
Jan 23 05:11:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:11:41.906927) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Jan 23 05:11:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:11:41.909150) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Jan 23 05:11:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:11:41.909178) EVENT_LOG_v1 {"time_micros": 1769163101909171, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:11:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:11:41.909199) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:11:41 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 2451623, prev total WAL file size 2451623, number of live WAL files 2.
Jan 23 05:11:41 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:11:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:11:41.910840) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303036' seq:72057594037927935, type:22 .. '6C6F676D0032323630' seq:0, type:0; will stop at (end)
Jan 23 05:11:41 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:11:41 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(2363KB)], [119(10212KB)]
Jan 23 05:11:41 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163101911088, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 12878696, "oldest_snapshot_seqno": -1}
Jan 23 05:11:42 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 8150 keys, 12736865 bytes, temperature: kUnknown
Jan 23 05:11:42 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163102022970, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 12736865, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12682101, "index_size": 33276, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20421, "raw_key_size": 211141, "raw_average_key_size": 25, "raw_value_size": 12536728, "raw_average_value_size": 1538, "num_data_blocks": 1314, "num_entries": 8150, "num_filter_entries": 8150, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769163101, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:11:42 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:11:42 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:11:42.023323) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 12736865 bytes
Jan 23 05:11:42 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:11:42.024540) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.0 rd, 113.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 10.0 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(10.6) write-amplify(5.3) OK, records in: 8690, records dropped: 540 output_compression: NoCompression
Jan 23 05:11:42 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:11:42.024561) EVENT_LOG_v1 {"time_micros": 1769163102024552, "job": 72, "event": "compaction_finished", "compaction_time_micros": 111967, "compaction_time_cpu_micros": 50225, "output_level": 6, "num_output_files": 1, "total_output_size": 12736865, "num_input_records": 8690, "num_output_records": 8150, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:11:42 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:11:42 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163102025209, "job": 72, "event": "table_file_deletion", "file_number": 121}
Jan 23 05:11:42 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:11:42 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163102027520, "job": 72, "event": "table_file_deletion", "file_number": 119}
Jan 23 05:11:42 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:11:41.910301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:11:42 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:11:42.027684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:11:42 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:11:42.027696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:11:42 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:11:42.027697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:11:42 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:11:42.027699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:11:42 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:11:42.027700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:11:42 np0005593232 nova_compute[250269]: 2026-01-23 10:11:42.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:11:42 np0005593232 nova_compute[250269]: 2026-01-23 10:11:42.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:11:42 np0005593232 nova_compute[250269]: 2026-01-23 10:11:42.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:11:42 np0005593232 nova_compute[250269]: 2026-01-23 10:11:42.548 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:11:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:11:42.624 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:11:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:11:42.625 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:11:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:11:42.625 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:11:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:11:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:42.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:11:42 np0005593232 nova_compute[250269]: 2026-01-23 10:11:42.770 250273 DEBUG nova.compute.manager [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Jan 23 05:11:42 np0005593232 nova_compute[250269]: 2026-01-23 10:11:42.893 250273 DEBUG oslo_concurrency.lockutils [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:11:42 np0005593232 nova_compute[250269]: 2026-01-23 10:11:42.894 250273 DEBUG oslo_concurrency.lockutils [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:11:42 np0005593232 nova_compute[250269]: 2026-01-23 10:11:42.927 250273 DEBUG nova.objects.instance [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'pci_requests' on Instance uuid a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:11:42 np0005593232 nova_compute[250269]: 2026-01-23 10:11:42.946 250273 DEBUG nova.virt.hardware [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:11:42 np0005593232 nova_compute[250269]: 2026-01-23 10:11:42.947 250273 INFO nova.compute.claims [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:11:42 np0005593232 nova_compute[250269]: 2026-01-23 10:11:42.948 250273 DEBUG nova.objects.instance [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'resources' on Instance uuid a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:11:42 np0005593232 nova_compute[250269]: 2026-01-23 10:11:42.965 250273 DEBUG nova.objects.instance [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'pci_devices' on Instance uuid a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:11:43 np0005593232 nova_compute[250269]: 2026-01-23 10:11:43.038 250273 INFO nova.compute.resource_tracker [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Updating resource usage from migration 01ea1762-4856-47b8-a387-167e93cabc21#033[00m
Jan 23 05:11:43 np0005593232 nova_compute[250269]: 2026-01-23 10:11:43.039 250273 DEBUG nova.compute.resource_tracker [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Starting to track incoming migration 01ea1762-4856-47b8-a387-167e93cabc21 with flavor eebea5f8-9b11-45ad-873d-c4ea90d3de87 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Jan 23 05:11:43 np0005593232 nova_compute[250269]: 2026-01-23 10:11:43.142 250273 DEBUG oslo_concurrency.processutils [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:11:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2447: 321 pgs: 321 active+clean; 534 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Jan 23 05:11:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:11:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1475068321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:11:43 np0005593232 nova_compute[250269]: 2026-01-23 10:11:43.579 250273 DEBUG oslo_concurrency.processutils [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:11:43 np0005593232 nova_compute[250269]: 2026-01-23 10:11:43.586 250273 DEBUG nova.compute.provider_tree [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:11:43 np0005593232 nova_compute[250269]: 2026-01-23 10:11:43.607 250273 DEBUG nova.scheduler.client.report [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:11:43 np0005593232 nova_compute[250269]: 2026-01-23 10:11:43.641 250273 DEBUG oslo_concurrency.lockutils [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:11:43 np0005593232 nova_compute[250269]: 2026-01-23 10:11:43.641 250273 INFO nova.compute.manager [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Migrating#033[00m
Jan 23 05:11:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:43.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:44.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2448: 321 pgs: 321 active+clean; 534 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Jan 23 05:11:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:45.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:11:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:46.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:11:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:11:46 np0005593232 nova_compute[250269]: 2026-01-23 10:11:46.793 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:11:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:11:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:11:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:11:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:11:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.01284271473106272 of space, bias 1.0, pg target 3.852814419318816 quantized to 32 (current 32)
Jan 23 05:11:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:11:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.4540294062907128e-06 of space, bias 1.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 23 05:11:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:11:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:11:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:11:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Jan 23 05:11:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:11:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 32)
Jan 23 05:11:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:11:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:11:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:11:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Jan 23 05:11:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:11:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 23 05:11:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:11:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:11:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:11:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 23 05:11:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2449: 321 pgs: 321 active+clean; 534 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Jan 23 05:11:47 np0005593232 nova_compute[250269]: 2026-01-23 10:11:47.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:11:47 np0005593232 nova_compute[250269]: 2026-01-23 10:11:47.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:11:47 np0005593232 nova_compute[250269]: 2026-01-23 10:11:47.294 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:11:47 np0005593232 nova_compute[250269]: 2026-01-23 10:11:47.593 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:11:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:47.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:47 np0005593232 nova_compute[250269]: 2026-01-23 10:11:47.991 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-280ec2fb-6ca3-4b43-bff4-ba64ac3a935b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:11:47 np0005593232 nova_compute[250269]: 2026-01-23 10:11:47.992 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-280ec2fb-6ca3-4b43-bff4-ba64ac3a935b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:11:47 np0005593232 nova_compute[250269]: 2026-01-23 10:11:47.992 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:11:47 np0005593232 nova_compute[250269]: 2026-01-23 10:11:47.992 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:11:48 np0005593232 systemd[1]: Created slice User Slice of UID 42436.
Jan 23 05:11:48 np0005593232 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 23 05:11:48 np0005593232 systemd-logind[808]: New session 55 of user nova.
Jan 23 05:11:48 np0005593232 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 23 05:11:48 np0005593232 systemd[1]: Starting User Manager for UID 42436...
Jan 23 05:11:48 np0005593232 systemd[335035]: Queued start job for default target Main User Target.
Jan 23 05:11:48 np0005593232 systemd[335035]: Created slice User Application Slice.
Jan 23 05:11:48 np0005593232 systemd[335035]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 23 05:11:48 np0005593232 systemd[335035]: Started Daily Cleanup of User's Temporary Directories.
Jan 23 05:11:48 np0005593232 systemd[335035]: Reached target Paths.
Jan 23 05:11:48 np0005593232 systemd[335035]: Reached target Timers.
Jan 23 05:11:48 np0005593232 systemd[335035]: Starting D-Bus User Message Bus Socket...
Jan 23 05:11:48 np0005593232 systemd[335035]: Starting Create User's Volatile Files and Directories...
Jan 23 05:11:48 np0005593232 systemd[335035]: Finished Create User's Volatile Files and Directories.
Jan 23 05:11:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:48.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:48 np0005593232 systemd[335035]: Listening on D-Bus User Message Bus Socket.
Jan 23 05:11:48 np0005593232 systemd[335035]: Reached target Sockets.
Jan 23 05:11:48 np0005593232 systemd[335035]: Reached target Basic System.
Jan 23 05:11:48 np0005593232 systemd[335035]: Reached target Main User Target.
Jan 23 05:11:48 np0005593232 systemd[335035]: Startup finished in 182ms.
Jan 23 05:11:48 np0005593232 systemd[1]: Started User Manager for UID 42436.
Jan 23 05:11:48 np0005593232 systemd[1]: Started Session 55 of User nova.
Jan 23 05:11:48 np0005593232 systemd[1]: session-55.scope: Deactivated successfully.
Jan 23 05:11:48 np0005593232 systemd-logind[808]: Session 55 logged out. Waiting for processes to exit.
Jan 23 05:11:48 np0005593232 systemd-logind[808]: Removed session 55.
Jan 23 05:11:48 np0005593232 systemd-logind[808]: New session 57 of user nova.
Jan 23 05:11:48 np0005593232 systemd[1]: Started Session 57 of User nova.
Jan 23 05:11:49 np0005593232 systemd[1]: session-57.scope: Deactivated successfully.
Jan 23 05:11:49 np0005593232 systemd-logind[808]: Session 57 logged out. Waiting for processes to exit.
Jan 23 05:11:49 np0005593232 systemd-logind[808]: Removed session 57.
Jan 23 05:11:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2450: 321 pgs: 321 active+clean; 517 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 845 KiB/s wr, 157 op/s
Jan 23 05:11:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:49.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:11:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:50.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:11:51 np0005593232 nova_compute[250269]: 2026-01-23 10:11:51.261 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Updating instance_info_cache with network_info: [{"id": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "address": "fa:16:3e:50:c1:86", "network": {"id": "88b571fd-69ad-4860-a596-3bd637fdb189", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1616424882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f00cc6e26e5c435b902306c6421e146d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd0a614-69", "ovs_interfaceid": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:11:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2451: 321 pgs: 321 active+clean; 517 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 18 KiB/s wr, 139 op/s
Jan 23 05:11:51 np0005593232 nova_compute[250269]: 2026-01-23 10:11:51.281 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-280ec2fb-6ca3-4b43-bff4-ba64ac3a935b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:11:51 np0005593232 nova_compute[250269]: 2026-01-23 10:11:51.281 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:11:51 np0005593232 nova_compute[250269]: 2026-01-23 10:11:51.281 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:11:51 np0005593232 nova_compute[250269]: 2026-01-23 10:11:51.281 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:11:51 np0005593232 nova_compute[250269]: 2026-01-23 10:11:51.282 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:11:51 np0005593232 nova_compute[250269]: 2026-01-23 10:11:51.324 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:11:51 np0005593232 nova_compute[250269]: 2026-01-23 10:11:51.324 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:11:51 np0005593232 nova_compute[250269]: 2026-01-23 10:11:51.324 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:11:51 np0005593232 nova_compute[250269]: 2026-01-23 10:11:51.324 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:11:51 np0005593232 nova_compute[250269]: 2026-01-23 10:11:51.325 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:11:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:11:51 np0005593232 nova_compute[250269]: 2026-01-23 10:11:51.795 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:11:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:11:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/609940212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:11:51 np0005593232 nova_compute[250269]: 2026-01-23 10:11:51.822 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:11:51 np0005593232 nova_compute[250269]: 2026-01-23 10:11:51.908 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:11:51 np0005593232 nova_compute[250269]: 2026-01-23 10:11:51.908 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:11:51 np0005593232 nova_compute[250269]: 2026-01-23 10:11:51.908 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:11:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:51.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:51 np0005593232 nova_compute[250269]: 2026-01-23 10:11:51.987 250273 DEBUG nova.compute.manager [req-6963f36a-c3ba-412a-9e8e-ad8898896979 req-34106798-0446-4e01-b1c4-05d79a3b85f4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Received event network-vif-unplugged-87b7656f-9fbc-466f-bfe3-06171df90096 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:11:51 np0005593232 nova_compute[250269]: 2026-01-23 10:11:51.988 250273 DEBUG oslo_concurrency.lockutils [req-6963f36a-c3ba-412a-9e8e-ad8898896979 req-34106798-0446-4e01-b1c4-05d79a3b85f4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:11:51 np0005593232 nova_compute[250269]: 2026-01-23 10:11:51.988 250273 DEBUG oslo_concurrency.lockutils [req-6963f36a-c3ba-412a-9e8e-ad8898896979 req-34106798-0446-4e01-b1c4-05d79a3b85f4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:11:51 np0005593232 nova_compute[250269]: 2026-01-23 10:11:51.988 250273 DEBUG oslo_concurrency.lockutils [req-6963f36a-c3ba-412a-9e8e-ad8898896979 req-34106798-0446-4e01-b1c4-05d79a3b85f4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:11:51 np0005593232 nova_compute[250269]: 2026-01-23 10:11:51.988 250273 DEBUG nova.compute.manager [req-6963f36a-c3ba-412a-9e8e-ad8898896979 req-34106798-0446-4e01-b1c4-05d79a3b85f4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] No waiting events found dispatching network-vif-unplugged-87b7656f-9fbc-466f-bfe3-06171df90096 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:11:51 np0005593232 nova_compute[250269]: 2026-01-23 10:11:51.989 250273 WARNING nova.compute.manager [req-6963f36a-c3ba-412a-9e8e-ad8898896979 req-34106798-0446-4e01-b1c4-05d79a3b85f4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Received unexpected event network-vif-unplugged-87b7656f-9fbc-466f-bfe3-06171df90096 for instance with vm_state active and task_state resize_migrating.#033[00m
Jan 23 05:11:52 np0005593232 nova_compute[250269]: 2026-01-23 10:11:52.099 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:11:52 np0005593232 nova_compute[250269]: 2026-01-23 10:11:52.100 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4149MB free_disk=20.723583221435547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:11:52 np0005593232 nova_compute[250269]: 2026-01-23 10:11:52.100 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:11:52 np0005593232 nova_compute[250269]: 2026-01-23 10:11:52.101 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:11:52 np0005593232 nova_compute[250269]: 2026-01-23 10:11:52.179 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Migration for instance a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 23 05:11:52 np0005593232 nova_compute[250269]: 2026-01-23 10:11:52.218 250273 INFO nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Updating resource usage from migration 01ea1762-4856-47b8-a387-167e93cabc21#033[00m
Jan 23 05:11:52 np0005593232 nova_compute[250269]: 2026-01-23 10:11:52.218 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Starting to track incoming migration 01ea1762-4856-47b8-a387-167e93cabc21 with flavor eebea5f8-9b11-45ad-873d-c4ea90d3de87 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Jan 23 05:11:52 np0005593232 nova_compute[250269]: 2026-01-23 10:11:52.260 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:11:52 np0005593232 nova_compute[250269]: 2026-01-23 10:11:52.285 250273 WARNING nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c has been moved to another host compute-2.ctlplane.example.com(compute-2.ctlplane.example.com). There are allocations remaining against the source host that might need to be removed: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}.#033[00m
Jan 23 05:11:52 np0005593232 nova_compute[250269]: 2026-01-23 10:11:52.286 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:11:52 np0005593232 nova_compute[250269]: 2026-01-23 10:11:52.286 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=832MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:11:52 np0005593232 nova_compute[250269]: 2026-01-23 10:11:52.360 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:11:52 np0005593232 podman[335081]: 2026-01-23 10:11:52.489859689 +0000 UTC m=+0.144420395 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 05:11:52 np0005593232 nova_compute[250269]: 2026-01-23 10:11:52.596 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:11:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:52.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:11:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/562677735' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:11:52 np0005593232 nova_compute[250269]: 2026-01-23 10:11:52.832 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:11:52 np0005593232 nova_compute[250269]: 2026-01-23 10:11:52.839 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:11:52 np0005593232 nova_compute[250269]: 2026-01-23 10:11:52.858 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:11:52 np0005593232 nova_compute[250269]: 2026-01-23 10:11:52.860 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:11:52 np0005593232 nova_compute[250269]: 2026-01-23 10:11:52.860 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:11:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2452: 321 pgs: 321 active+clean; 447 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 32 KiB/s wr, 198 op/s
Jan 23 05:11:53 np0005593232 nova_compute[250269]: 2026-01-23 10:11:53.863 250273 INFO nova.network.neutron [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Updating port 87b7656f-9fbc-466f-bfe3-06171df90096 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Jan 23 05:11:53 np0005593232 nova_compute[250269]: 2026-01-23 10:11:53.877 250273 DEBUG oslo_concurrency.lockutils [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Acquiring lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:11:53 np0005593232 nova_compute[250269]: 2026-01-23 10:11:53.878 250273 DEBUG oslo_concurrency.lockutils [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:11:53 np0005593232 nova_compute[250269]: 2026-01-23 10:11:53.878 250273 DEBUG oslo_concurrency.lockutils [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Acquiring lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:11:53 np0005593232 nova_compute[250269]: 2026-01-23 10:11:53.879 250273 DEBUG oslo_concurrency.lockutils [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:11:53 np0005593232 nova_compute[250269]: 2026-01-23 10:11:53.879 250273 DEBUG oslo_concurrency.lockutils [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:11:53 np0005593232 nova_compute[250269]: 2026-01-23 10:11:53.880 250273 INFO nova.compute.manager [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Terminating instance#033[00m
Jan 23 05:11:53 np0005593232 nova_compute[250269]: 2026-01-23 10:11:53.881 250273 DEBUG nova.compute.manager [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:11:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:53.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:53 np0005593232 kernel: tapacd0a614-69 (unregistering): left promiscuous mode
Jan 23 05:11:53 np0005593232 NetworkManager[49057]: <info>  [1769163113.9447] device (tapacd0a614-69): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:11:53 np0005593232 nova_compute[250269]: 2026-01-23 10:11:53.953 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:11:53 np0005593232 ovn_controller[151001]: 2026-01-23T10:11:53Z|00446|binding|INFO|Releasing lport acd0a614-69ea-41fa-9830-ca2c81f259b9 from this chassis (sb_readonly=0)
Jan 23 05:11:53 np0005593232 ovn_controller[151001]: 2026-01-23T10:11:53Z|00447|binding|INFO|Setting lport acd0a614-69ea-41fa-9830-ca2c81f259b9 down in Southbound
Jan 23 05:11:53 np0005593232 ovn_controller[151001]: 2026-01-23T10:11:53Z|00448|binding|INFO|Removing iface tapacd0a614-69 ovn-installed in OVS
Jan 23 05:11:53 np0005593232 nova_compute[250269]: 2026-01-23 10:11:53.955 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:11:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:11:53.959 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:c1:86 10.100.0.2'], port_security=['fa:16:3e:50:c1:86 10.100.0.2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': '280ec2fb-6ca3-4b43-bff4-ba64ac3a935b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-88b571fd-69ad-4860-a596-3bd637fdb189', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f00cc6e26e5c435b902306c6421e146d', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'a7b9167c-c78b-48f5-9e9d-ac8ada29e0a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d050303a-8173-4865-aab2-724e0c0624de, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=acd0a614-69ea-41fa-9830-ca2c81f259b9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:11:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:11:53.960 161902 INFO neutron.agent.ovn.metadata.agent [-] Port acd0a614-69ea-41fa-9830-ca2c81f259b9 in datapath 88b571fd-69ad-4860-a596-3bd637fdb189 unbound from our chassis#033[00m
Jan 23 05:11:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:11:53.961 161902 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 88b571fd-69ad-4860-a596-3bd637fdb189 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 23 05:11:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:11:53.962 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[073ac338-464d-4657-b384-58f2d434fe99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:11:53 np0005593232 nova_compute[250269]: 2026-01-23 10:11:53.972 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:11:54 np0005593232 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000080.scope: Deactivated successfully.
Jan 23 05:11:54 np0005593232 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000080.scope: Consumed 16.139s CPU time.
Jan 23 05:11:54 np0005593232 systemd-machined[215836]: Machine qemu-54-instance-00000080 terminated.
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.105 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.111 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.121 250273 INFO nova.virt.libvirt.driver [-] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Instance destroyed successfully.#033[00m
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.121 250273 DEBUG nova.objects.instance [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lazy-loading 'resources' on Instance uuid 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.153 250273 DEBUG nova.virt.libvirt.vif [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:10:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1237513179',display_name='tempest-ServerRescueTestJSON-server-1237513179',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1237513179',id=128,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:10:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f00cc6e26e5c435b902306c6421e146d',ramdisk_id='',reservation_id='r-1la91wzm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-837476510',owner_user_name='tempest-ServerRescueTestJSON-837476510-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:10:47Z,user_data=None,user_id='eb500aabc93044e380f4bc905205803d',uuid=280ec2fb-6ca3-4b43-bff4-ba64ac3a935b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "address": "fa:16:3e:50:c1:86", "network": {"id": "88b571fd-69ad-4860-a596-3bd637fdb189", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1616424882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f00cc6e26e5c435b902306c6421e146d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd0a614-69", "ovs_interfaceid": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.154 250273 DEBUG nova.network.os_vif_util [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Converting VIF {"id": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "address": "fa:16:3e:50:c1:86", "network": {"id": "88b571fd-69ad-4860-a596-3bd637fdb189", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1616424882-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.2", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "f00cc6e26e5c435b902306c6421e146d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapacd0a614-69", "ovs_interfaceid": "acd0a614-69ea-41fa-9830-ca2c81f259b9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.155 250273 DEBUG nova.network.os_vif_util [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:50:c1:86,bridge_name='br-int',has_traffic_filtering=True,id=acd0a614-69ea-41fa-9830-ca2c81f259b9,network=Network(88b571fd-69ad-4860-a596-3bd637fdb189),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacd0a614-69') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.156 250273 DEBUG os_vif [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:c1:86,bridge_name='br-int',has_traffic_filtering=True,id=acd0a614-69ea-41fa-9830-ca2c81f259b9,network=Network(88b571fd-69ad-4860-a596-3bd637fdb189),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacd0a614-69') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.158 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.158 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapacd0a614-69, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.160 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.161 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.166 250273 DEBUG nova.compute.manager [req-23e6e034-c6b8-47b5-8d6c-1d702760e38b req-cba8e35e-e088-4ce6-b199-4147fba3b6d7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Received event network-vif-plugged-87b7656f-9fbc-466f-bfe3-06171df90096 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.166 250273 DEBUG oslo_concurrency.lockutils [req-23e6e034-c6b8-47b5-8d6c-1d702760e38b req-cba8e35e-e088-4ce6-b199-4147fba3b6d7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.166 250273 DEBUG oslo_concurrency.lockutils [req-23e6e034-c6b8-47b5-8d6c-1d702760e38b req-cba8e35e-e088-4ce6-b199-4147fba3b6d7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.167 250273 DEBUG oslo_concurrency.lockutils [req-23e6e034-c6b8-47b5-8d6c-1d702760e38b req-cba8e35e-e088-4ce6-b199-4147fba3b6d7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.167 250273 DEBUG nova.compute.manager [req-23e6e034-c6b8-47b5-8d6c-1d702760e38b req-cba8e35e-e088-4ce6-b199-4147fba3b6d7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] No waiting events found dispatching network-vif-plugged-87b7656f-9fbc-466f-bfe3-06171df90096 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.167 250273 WARNING nova.compute.manager [req-23e6e034-c6b8-47b5-8d6c-1d702760e38b req-cba8e35e-e088-4ce6-b199-4147fba3b6d7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Received unexpected event network-vif-plugged-87b7656f-9fbc-466f-bfe3-06171df90096 for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.175 250273 INFO os_vif [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:c1:86,bridge_name='br-int',has_traffic_filtering=True,id=acd0a614-69ea-41fa-9830-ca2c81f259b9,network=Network(88b571fd-69ad-4860-a596-3bd637fdb189),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapacd0a614-69')#033[00m
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.290 250273 DEBUG nova.compute.manager [req-fd685f3a-849e-4871-abdd-a323204125c3 req-3f02f338-e019-4f79-8745-22f05f2c1522 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Received event network-vif-unplugged-acd0a614-69ea-41fa-9830-ca2c81f259b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.291 250273 DEBUG oslo_concurrency.lockutils [req-fd685f3a-849e-4871-abdd-a323204125c3 req-3f02f338-e019-4f79-8745-22f05f2c1522 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.291 250273 DEBUG oslo_concurrency.lockutils [req-fd685f3a-849e-4871-abdd-a323204125c3 req-3f02f338-e019-4f79-8745-22f05f2c1522 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.292 250273 DEBUG oslo_concurrency.lockutils [req-fd685f3a-849e-4871-abdd-a323204125c3 req-3f02f338-e019-4f79-8745-22f05f2c1522 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.292 250273 DEBUG nova.compute.manager [req-fd685f3a-849e-4871-abdd-a323204125c3 req-3f02f338-e019-4f79-8745-22f05f2c1522 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] No waiting events found dispatching network-vif-unplugged-acd0a614-69ea-41fa-9830-ca2c81f259b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:11:54 np0005593232 nova_compute[250269]: 2026-01-23 10:11:54.292 250273 DEBUG nova.compute.manager [req-fd685f3a-849e-4871-abdd-a323204125c3 req-3f02f338-e019-4f79-8745-22f05f2c1522 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Received event network-vif-unplugged-acd0a614-69ea-41fa-9830-ca2c81f259b9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:11:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:11:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:54.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:11:55 np0005593232 nova_compute[250269]: 2026-01-23 10:11:55.239 250273 DEBUG oslo_concurrency.lockutils [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "refresh_cache-a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:11:55 np0005593232 nova_compute[250269]: 2026-01-23 10:11:55.240 250273 DEBUG oslo_concurrency.lockutils [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquired lock "refresh_cache-a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:11:55 np0005593232 nova_compute[250269]: 2026-01-23 10:11:55.241 250273 DEBUG nova.network.neutron [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:11:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2453: 321 pgs: 321 active+clean; 409 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 22 KiB/s wr, 168 op/s
Jan 23 05:11:55 np0005593232 nova_compute[250269]: 2026-01-23 10:11:55.528 250273 DEBUG nova.compute.manager [req-35a18290-a23a-4826-b034-0e16db4e6147 req-1c19cb95-010a-4aa3-bec6-93e88bf97af7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Received event network-changed-87b7656f-9fbc-466f-bfe3-06171df90096 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:11:55 np0005593232 nova_compute[250269]: 2026-01-23 10:11:55.528 250273 DEBUG nova.compute.manager [req-35a18290-a23a-4826-b034-0e16db4e6147 req-1c19cb95-010a-4aa3-bec6-93e88bf97af7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Refreshing instance network info cache due to event network-changed-87b7656f-9fbc-466f-bfe3-06171df90096. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:11:55 np0005593232 nova_compute[250269]: 2026-01-23 10:11:55.529 250273 DEBUG oslo_concurrency.lockutils [req-35a18290-a23a-4826-b034-0e16db4e6147 req-1c19cb95-010a-4aa3-bec6-93e88bf97af7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:11:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:11:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:55.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:11:56 np0005593232 nova_compute[250269]: 2026-01-23 10:11:56.397 250273 DEBUG nova.compute.manager [req-22f34259-b185-47a7-b4ba-b447ab19133c req-6c2cddd2-29e2-4a04-9dae-a962bcd7cdf4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Received event network-vif-plugged-acd0a614-69ea-41fa-9830-ca2c81f259b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:11:56 np0005593232 nova_compute[250269]: 2026-01-23 10:11:56.398 250273 DEBUG oslo_concurrency.lockutils [req-22f34259-b185-47a7-b4ba-b447ab19133c req-6c2cddd2-29e2-4a04-9dae-a962bcd7cdf4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:11:56 np0005593232 nova_compute[250269]: 2026-01-23 10:11:56.398 250273 DEBUG oslo_concurrency.lockutils [req-22f34259-b185-47a7-b4ba-b447ab19133c req-6c2cddd2-29e2-4a04-9dae-a962bcd7cdf4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:11:56 np0005593232 nova_compute[250269]: 2026-01-23 10:11:56.398 250273 DEBUG oslo_concurrency.lockutils [req-22f34259-b185-47a7-b4ba-b447ab19133c req-6c2cddd2-29e2-4a04-9dae-a962bcd7cdf4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:11:56 np0005593232 nova_compute[250269]: 2026-01-23 10:11:56.398 250273 DEBUG nova.compute.manager [req-22f34259-b185-47a7-b4ba-b447ab19133c req-6c2cddd2-29e2-4a04-9dae-a962bcd7cdf4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] No waiting events found dispatching network-vif-plugged-acd0a614-69ea-41fa-9830-ca2c81f259b9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:11:56 np0005593232 nova_compute[250269]: 2026-01-23 10:11:56.398 250273 WARNING nova.compute.manager [req-22f34259-b185-47a7-b4ba-b447ab19133c req-6c2cddd2-29e2-4a04-9dae-a962bcd7cdf4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Received unexpected event network-vif-plugged-acd0a614-69ea-41fa-9830-ca2c81f259b9 for instance with vm_state rescued and task_state deleting.#033[00m
Jan 23 05:11:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:56.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:11:56 np0005593232 nova_compute[250269]: 2026-01-23 10:11:56.795 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:11:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2454: 321 pgs: 321 active+clean; 396 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 22 KiB/s wr, 155 op/s
Jan 23 05:11:57 np0005593232 podman[335168]: 2026-01-23 10:11:57.394833452 +0000 UTC m=+0.055516359 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 23 05:11:57 np0005593232 nova_compute[250269]: 2026-01-23 10:11:57.719 250273 DEBUG nova.network.neutron [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Updating instance_info_cache with network_info: [{"id": "87b7656f-9fbc-466f-bfe3-06171df90096", "address": "fa:16:3e:a4:3b:96", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b7656f-9f", "ovs_interfaceid": "87b7656f-9fbc-466f-bfe3-06171df90096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:11:57 np0005593232 nova_compute[250269]: 2026-01-23 10:11:57.846 250273 DEBUG oslo_concurrency.lockutils [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Releasing lock "refresh_cache-a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:11:57 np0005593232 nova_compute[250269]: 2026-01-23 10:11:57.849 250273 DEBUG oslo_concurrency.lockutils [req-35a18290-a23a-4826-b034-0e16db4e6147 req-1c19cb95-010a-4aa3-bec6-93e88bf97af7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:11:57 np0005593232 nova_compute[250269]: 2026-01-23 10:11:57.849 250273 DEBUG nova.network.neutron [req-35a18290-a23a-4826-b034-0e16db4e6147 req-1c19cb95-010a-4aa3-bec6-93e88bf97af7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Refreshing network info cache for port 87b7656f-9fbc-466f-bfe3-06171df90096 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:11:57 np0005593232 nova_compute[250269]: 2026-01-23 10:11:57.854 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:11:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:57.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:58 np0005593232 nova_compute[250269]: 2026-01-23 10:11:58.341 250273 INFO nova.virt.libvirt.driver [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Deleting instance files /var/lib/nova/instances/280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_del#033[00m
Jan 23 05:11:58 np0005593232 nova_compute[250269]: 2026-01-23 10:11:58.342 250273 INFO nova.virt.libvirt.driver [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Deletion of /var/lib/nova/instances/280ec2fb-6ca3-4b43-bff4-ba64ac3a935b_del complete#033[00m
Jan 23 05:11:58 np0005593232 nova_compute[250269]: 2026-01-23 10:11:58.694 250273 DEBUG os_brick.utils [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 23 05:11:58 np0005593232 nova_compute[250269]: 2026-01-23 10:11:58.697 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:11:58 np0005593232 nova_compute[250269]: 2026-01-23 10:11:58.702 250273 INFO nova.compute.manager [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Took 4.82 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:11:58 np0005593232 nova_compute[250269]: 2026-01-23 10:11:58.703 250273 DEBUG oslo.service.loopingcall [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:11:58 np0005593232 nova_compute[250269]: 2026-01-23 10:11:58.703 250273 DEBUG nova.compute.manager [-] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:11:58 np0005593232 nova_compute[250269]: 2026-01-23 10:11:58.703 250273 DEBUG nova.network.neutron [-] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:11:58 np0005593232 nova_compute[250269]: 2026-01-23 10:11:58.712 265031 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:11:58 np0005593232 nova_compute[250269]: 2026-01-23 10:11:58.713 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[b6cb0260-f7e1-4352-950e-5b363f57e460]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:11:58 np0005593232 nova_compute[250269]: 2026-01-23 10:11:58.714 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:11:58 np0005593232 nova_compute[250269]: 2026-01-23 10:11:58.722 265031 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:11:58 np0005593232 nova_compute[250269]: 2026-01-23 10:11:58.722 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[6fceb6f8-4376-4dbf-b469-3632c7cfeef3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:c6473626b456', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:11:58 np0005593232 nova_compute[250269]: 2026-01-23 10:11:58.724 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:11:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:11:58.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:58 np0005593232 nova_compute[250269]: 2026-01-23 10:11:58.735 265031 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:11:58 np0005593232 nova_compute[250269]: 2026-01-23 10:11:58.735 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[955224a8-efa6-43de-87f8-adca0578c939]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:11:58 np0005593232 nova_compute[250269]: 2026-01-23 10:11:58.736 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[73d113fc-aee2-48c2-9c91-be80984513e7]: (4, 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:11:58 np0005593232 nova_compute[250269]: 2026-01-23 10:11:58.737 250273 DEBUG oslo_concurrency.processutils [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:11:58 np0005593232 nova_compute[250269]: 2026-01-23 10:11:58.764 250273 DEBUG oslo_concurrency.processutils [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:11:58 np0005593232 nova_compute[250269]: 2026-01-23 10:11:58.767 250273 DEBUG os_brick.initiator.connectors.lightos [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 23 05:11:58 np0005593232 nova_compute[250269]: 2026-01-23 10:11:58.767 250273 DEBUG os_brick.initiator.connectors.lightos [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 23 05:11:58 np0005593232 nova_compute[250269]: 2026-01-23 10:11:58.767 250273 DEBUG os_brick.initiator.connectors.lightos [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 23 05:11:58 np0005593232 nova_compute[250269]: 2026-01-23 10:11:58.768 250273 DEBUG os_brick.utils [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] <== get_connector_properties: return (73ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:c6473626b456', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 23 05:11:59 np0005593232 systemd[1]: Stopping User Manager for UID 42436...
Jan 23 05:11:59 np0005593232 systemd[335035]: Activating special unit Exit the Session...
Jan 23 05:11:59 np0005593232 systemd[335035]: Stopped target Main User Target.
Jan 23 05:11:59 np0005593232 systemd[335035]: Stopped target Basic System.
Jan 23 05:11:59 np0005593232 systemd[335035]: Stopped target Paths.
Jan 23 05:11:59 np0005593232 systemd[335035]: Stopped target Sockets.
Jan 23 05:11:59 np0005593232 systemd[335035]: Stopped target Timers.
Jan 23 05:11:59 np0005593232 systemd[335035]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 23 05:11:59 np0005593232 systemd[335035]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 23 05:11:59 np0005593232 systemd[335035]: Closed D-Bus User Message Bus Socket.
Jan 23 05:11:59 np0005593232 systemd[335035]: Stopped Create User's Volatile Files and Directories.
Jan 23 05:11:59 np0005593232 systemd[335035]: Removed slice User Application Slice.
Jan 23 05:11:59 np0005593232 systemd[335035]: Reached target Shutdown.
Jan 23 05:11:59 np0005593232 systemd[335035]: Finished Exit the Session.
Jan 23 05:11:59 np0005593232 systemd[335035]: Reached target Exit the Session.
Jan 23 05:11:59 np0005593232 systemd[1]: user@42436.service: Deactivated successfully.
Jan 23 05:11:59 np0005593232 systemd[1]: Stopped User Manager for UID 42436.
Jan 23 05:11:59 np0005593232 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 23 05:11:59 np0005593232 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 23 05:11:59 np0005593232 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 23 05:11:59 np0005593232 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 23 05:11:59 np0005593232 systemd[1]: Removed slice User Slice of UID 42436.
Jan 23 05:11:59 np0005593232 nova_compute[250269]: 2026-01-23 10:11:59.161 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:11:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2455: 321 pgs: 321 active+clean; 305 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 22 KiB/s wr, 188 op/s
Jan 23 05:11:59 np0005593232 nova_compute[250269]: 2026-01-23 10:11:59.913 250273 DEBUG nova.virt.libvirt.driver [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Jan 23 05:11:59 np0005593232 nova_compute[250269]: 2026-01-23 10:11:59.915 250273 DEBUG nova.virt.libvirt.driver [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 23 05:11:59 np0005593232 nova_compute[250269]: 2026-01-23 10:11:59.915 250273 INFO nova.virt.libvirt.driver [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Creating image(s)#033[00m
Jan 23 05:11:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:11:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:11:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:11:59.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:11:59 np0005593232 nova_compute[250269]: 2026-01-23 10:11:59.954 250273 DEBUG nova.storage.rbd_utils [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] creating snapshot(nova-resize) on rbd image(a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 05:12:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Jan 23 05:12:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Jan 23 05:12:00 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.125 250273 DEBUG nova.objects.instance [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.129 250273 DEBUG nova.network.neutron [-] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.198 250273 INFO nova.compute.manager [-] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Took 1.50 seconds to deallocate network for instance.#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.239 250273 DEBUG nova.compute.manager [req-a577e1e7-9db3-4f2c-b6d6-17ffaa6071ef req-f82e7688-ebff-4dfa-beab-8dfcd546b048 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Received event network-vif-deleted-acd0a614-69ea-41fa-9830-ca2c81f259b9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.272 250273 DEBUG nova.virt.libvirt.driver [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.272 250273 DEBUG nova.virt.libvirt.driver [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Ensure instance console log exists: /var/lib/nova/instances/a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.273 250273 DEBUG oslo_concurrency.lockutils [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.273 250273 DEBUG oslo_concurrency.lockutils [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.273 250273 DEBUG oslo_concurrency.lockutils [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.276 250273 DEBUG nova.virt.libvirt.driver [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Start _get_guest_xml network_info=[{"id": "87b7656f-9fbc-466f-bfe3-06171df90096", "address": "fa:16:3e:a4:3b:96", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1325714374-network", "vif_mac": "fa:16:3e:a4:3b:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b7656f-9f", "ovs_interfaceid": "87b7656f-9fbc-466f-bfe3-06171df90096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vdb', 'disk_bus': 'virtio', 'delete_on_termination': False, 'attachment_id': '78bf6722-1b89-4063-a036-3b0e1fd729ac', 'device_type': 'disk', 'boot_index': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-649f8ce8-126a-4838-b42c-047bd1f41e67', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '649f8ce8-126a-4838-b42c-047bd1f41e67', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': 'a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c', 'attached_at': '2026-01-23T10:11:59.000000', 'detached_at': '', 'volume_id': '649f8ce8-126a-4838-b42c-047bd1f41e67', 'serial': '649f8ce8-126a-4838-b42c-047bd1f41e67'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.281 250273 WARNING nova.virt.libvirt.driver [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.285 250273 DEBUG nova.virt.libvirt.host [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.286 250273 DEBUG nova.virt.libvirt.host [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.289 250273 DEBUG nova.virt.libvirt.host [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.289 250273 DEBUG nova.virt.libvirt.host [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.290 250273 DEBUG nova.virt.libvirt.driver [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.290 250273 DEBUG nova.virt.hardware [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='eebea5f8-9b11-45ad-873d-c4ea90d3de87',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.291 250273 DEBUG nova.virt.hardware [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.291 250273 DEBUG nova.virt.hardware [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.291 250273 DEBUG nova.virt.hardware [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.292 250273 DEBUG nova.virt.hardware [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.292 250273 DEBUG nova.virt.hardware [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.292 250273 DEBUG nova.virt.hardware [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.293 250273 DEBUG nova.virt.hardware [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.293 250273 DEBUG nova.virt.hardware [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.293 250273 DEBUG nova.virt.hardware [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.293 250273 DEBUG nova.virt.hardware [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.294 250273 DEBUG nova.objects.instance [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.332 250273 DEBUG oslo_concurrency.lockutils [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.332 250273 DEBUG oslo_concurrency.lockutils [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.342 250273 DEBUG oslo_concurrency.processutils [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.449 250273 DEBUG oslo_concurrency.processutils [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.689 250273 DEBUG nova.network.neutron [req-35a18290-a23a-4826-b034-0e16db4e6147 req-1c19cb95-010a-4aa3-bec6-93e88bf97af7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Updated VIF entry in instance network info cache for port 87b7656f-9fbc-466f-bfe3-06171df90096. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.690 250273 DEBUG nova.network.neutron [req-35a18290-a23a-4826-b034-0e16db4e6147 req-1c19cb95-010a-4aa3-bec6-93e88bf97af7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Updating instance_info_cache with network_info: [{"id": "87b7656f-9fbc-466f-bfe3-06171df90096", "address": "fa:16:3e:a4:3b:96", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b7656f-9f", "ovs_interfaceid": "87b7656f-9fbc-466f-bfe3-06171df90096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.714 250273 DEBUG oslo_concurrency.lockutils [req-35a18290-a23a-4826-b034-0e16db4e6147 req-1c19cb95-010a-4aa3-bec6-93e88bf97af7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:12:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:00.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:12:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2292748820' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.776 250273 DEBUG oslo_concurrency.processutils [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.814 250273 DEBUG oslo_concurrency.processutils [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:12:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:12:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4230354687' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.877 250273 DEBUG oslo_concurrency.processutils [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.884 250273 DEBUG nova.compute.provider_tree [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.902 250273 DEBUG nova.scheduler.client.report [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.929 250273 DEBUG oslo_concurrency.lockutils [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:12:00 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:00.958 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.958 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:00 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:00.959 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:12:00 np0005593232 nova_compute[250269]: 2026-01-23 10:12:00.997 250273 INFO nova.scheduler.client.report [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Deleted allocations for instance 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.131 250273 DEBUG oslo_concurrency.lockutils [None req-7aa9e883-b54e-4f50-8076-67ce884b05a5 eb500aabc93044e380f4bc905205803d f00cc6e26e5c435b902306c6421e146d - - default default] Lock "280ec2fb-6ca3-4b43-bff4-ba64ac3a935b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.252s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:12:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:12:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3220199298' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:12:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2457: 321 pgs: 321 active+clean; 305 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 24 KiB/s wr, 164 op/s
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.276 250273 DEBUG oslo_concurrency.processutils [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.311 250273 DEBUG nova.virt.libvirt.vif [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:10:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1307986454',display_name='tempest-ServerActionsTestOtherB-server-1307986454',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1307986454',id=130,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPpuWItOSZUstL5LlOZAhtyKqrmFs0bJ/+DBMLk1rKDBu2SnttdOypH9Db6AMV4nGhLXOyr97hIMUaALurv7OcM9NkoB1CxFMDb3d0IWPDnRphumt71Jz0jUP0kiZtXBTQ==',key_name='tempest-keypair-1844396132',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:11:08Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9dd869ce76e44fc8a82b8bbee1654d33',ramdisk_id='',reservation_id='r-7gk6dzv9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-1052932467',owner_user_name='tempest-ServerActionsTestOtherB-1052932467-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:11:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aca3cab576d641d3b89e7dddf155d467',uuid=a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "87b7656f-9fbc-466f-bfe3-06171df90096", "address": "fa:16:3e:a4:3b:96", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1325714374-network", "vif_mac": "fa:16:3e:a4:3b:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b7656f-9f", "ovs_interfaceid": "87b7656f-9fbc-466f-bfe3-06171df90096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.312 250273 DEBUG nova.network.os_vif_util [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converting VIF {"id": "87b7656f-9fbc-466f-bfe3-06171df90096", "address": "fa:16:3e:a4:3b:96", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1325714374-network", "vif_mac": "fa:16:3e:a4:3b:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b7656f-9f", "ovs_interfaceid": "87b7656f-9fbc-466f-bfe3-06171df90096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.313 250273 DEBUG nova.network.os_vif_util [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:3b:96,bridge_name='br-int',has_traffic_filtering=True,id=87b7656f-9fbc-466f-bfe3-06171df90096,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b7656f-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.315 250273 DEBUG nova.virt.libvirt.driver [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:12:01 np0005593232 nova_compute[250269]:  <uuid>a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c</uuid>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:  <name>instance-00000082</name>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:  <memory>196608</memory>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerActionsTestOtherB-server-1307986454</nova:name>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:12:00</nova:creationTime>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.micro">
Jan 23 05:12:01 np0005593232 nova_compute[250269]:        <nova:memory>192</nova:memory>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:        <nova:user uuid="aca3cab576d641d3b89e7dddf155d467">tempest-ServerActionsTestOtherB-1052932467-project-member</nova:user>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:        <nova:project uuid="9dd869ce76e44fc8a82b8bbee1654d33">tempest-ServerActionsTestOtherB-1052932467</nova:project>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:        <nova:port uuid="87b7656f-9fbc-466f-bfe3-06171df90096">
Jan 23 05:12:01 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <entry name="serial">a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c</entry>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <entry name="uuid">a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c</entry>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c_disk">
Jan 23 05:12:01 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:12:01 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c_disk.config">
Jan 23 05:12:01 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:12:01 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="volumes/volume-649f8ce8-126a-4838-b42c-047bd1f41e67">
Jan 23 05:12:01 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:12:01 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <target dev="vdb" bus="virtio"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <serial>649f8ce8-126a-4838-b42c-047bd1f41e67</serial>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:a4:3b:96"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <target dev="tap87b7656f-9f"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c/console.log" append="off"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:12:01 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:12:01 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:12:01 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:12:01 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.317 250273 DEBUG nova.virt.libvirt.vif [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:10:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1307986454',display_name='tempest-ServerActionsTestOtherB-server-1307986454',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1307986454',id=130,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPpuWItOSZUstL5LlOZAhtyKqrmFs0bJ/+DBMLk1rKDBu2SnttdOypH9Db6AMV4nGhLXOyr97hIMUaALurv7OcM9NkoB1CxFMDb3d0IWPDnRphumt71Jz0jUP0kiZtXBTQ==',key_name='tempest-keypair-1844396132',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:11:08Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='9dd869ce76e44fc8a82b8bbee1654d33',ramdisk_id='',reservation_id='r-7gk6dzv9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-1052932467',owner_user_name='tempest-ServerActionsTestOtherB-1052932467-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:11:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aca3cab576d641d3b89e7dddf155d467',uuid=a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "87b7656f-9fbc-466f-bfe3-06171df90096", "address": "fa:16:3e:a4:3b:96", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": 
"tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1325714374-network", "vif_mac": "fa:16:3e:a4:3b:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b7656f-9f", "ovs_interfaceid": "87b7656f-9fbc-466f-bfe3-06171df90096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.317 250273 DEBUG nova.network.os_vif_util [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converting VIF {"id": "87b7656f-9fbc-466f-bfe3-06171df90096", "address": "fa:16:3e:a4:3b:96", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerActionsTestOtherB-1325714374-network", "vif_mac": "fa:16:3e:a4:3b:96"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b7656f-9f", "ovs_interfaceid": "87b7656f-9fbc-466f-bfe3-06171df90096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.318 250273 DEBUG nova.network.os_vif_util [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:3b:96,bridge_name='br-int',has_traffic_filtering=True,id=87b7656f-9fbc-466f-bfe3-06171df90096,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b7656f-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.318 250273 DEBUG os_vif [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:3b:96,bridge_name='br-int',has_traffic_filtering=True,id=87b7656f-9fbc-466f-bfe3-06171df90096,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b7656f-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.319 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.319 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.319 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.323 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.324 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87b7656f-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.324 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap87b7656f-9f, col_values=(('external_ids', {'iface-id': '87b7656f-9fbc-466f-bfe3-06171df90096', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:3b:96', 'vm-uuid': 'a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.326 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:01 np0005593232 NetworkManager[49057]: <info>  [1769163121.3272] manager: (tap87b7656f-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/214)
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.329 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.333 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.333 250273 INFO os_vif [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:3b:96,bridge_name='br-int',has_traffic_filtering=True,id=87b7656f-9fbc-466f-bfe3-06171df90096,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b7656f-9f')#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.404 250273 DEBUG nova.virt.libvirt.driver [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.406 250273 DEBUG nova.virt.libvirt.driver [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.407 250273 DEBUG nova.virt.libvirt.driver [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.408 250273 DEBUG nova.virt.libvirt.driver [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] No VIF found with MAC fa:16:3e:a4:3b:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.410 250273 INFO nova.virt.libvirt.driver [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Using config drive#033[00m
Jan 23 05:12:01 np0005593232 kernel: tap87b7656f-9f: entered promiscuous mode
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.553 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:01 np0005593232 ovn_controller[151001]: 2026-01-23T10:12:01Z|00449|binding|INFO|Claiming lport 87b7656f-9fbc-466f-bfe3-06171df90096 for this chassis.
Jan 23 05:12:01 np0005593232 ovn_controller[151001]: 2026-01-23T10:12:01Z|00450|binding|INFO|87b7656f-9fbc-466f-bfe3-06171df90096: Claiming fa:16:3e:a4:3b:96 10.100.0.12
Jan 23 05:12:01 np0005593232 NetworkManager[49057]: <info>  [1769163121.5570] manager: (tap87b7656f-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/215)
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.563 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.565 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:01 np0005593232 NetworkManager[49057]: <info>  [1769163121.5668] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/216)
Jan 23 05:12:01 np0005593232 NetworkManager[49057]: <info>  [1769163121.5675] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Jan 23 05:12:01 np0005593232 systemd-udevd[335383]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:12:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:01.583 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:3b:96 10.100.0.12'], port_security=['fa:16:3e:a4:3b:96 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8d9599b4-8855-4310-af02-cdd058438f7d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9dd869ce76e44fc8a82b8bbee1654d33', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'cf3e0bf9-33c6-483b-a880-c8297a0be71f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=875f4baa-cb85-49ca-8f02-78715d351fdb, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=87b7656f-9fbc-466f-bfe3-06171df90096) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:12:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:01.584 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 87b7656f-9fbc-466f-bfe3-06171df90096 in datapath 8d9599b4-8855-4310-af02-cdd058438f7d bound to our chassis#033[00m
Jan 23 05:12:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:01.586 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8d9599b4-8855-4310-af02-cdd058438f7d#033[00m
Jan 23 05:12:01 np0005593232 NetworkManager[49057]: <info>  [1769163121.5966] device (tap87b7656f-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:12:01 np0005593232 NetworkManager[49057]: <info>  [1769163121.5975] device (tap87b7656f-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:12:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:01.599 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[14ba295a-7a64-454f-a6c8-e5d91a320e2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:01.600 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8d9599b4-81 in ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:12:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:01.603 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8d9599b4-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:12:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:01.603 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5822d672-0abf-4947-9e38-3470f5cecd4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:01.604 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[64d3914e-7a51-4268-852a-e8b9179eb879]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:01 np0005593232 systemd-machined[215836]: New machine qemu-55-instance-00000082.
Jan 23 05:12:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:01.621 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[fd079f6a-1027-453d-9c25-cf8fefc4ba9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:01 np0005593232 systemd[1]: Started Virtual Machine qemu-55-instance-00000082.
Jan 23 05:12:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:01.638 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a5491565-7226-4d47-90f9-19270761261c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:01.699 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[91752050-ae98-4ad0-b4fc-35b6df2a105b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:01 np0005593232 NetworkManager[49057]: <info>  [1769163121.7266] manager: (tap8d9599b4-80): new Veth device (/org/freedesktop/NetworkManager/Devices/218)
Jan 23 05:12:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:01.725 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[33768ded-ff47-4771-adb2-85bc3986f98a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.760 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.780 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:01.780 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[89aea580-e62f-41d3-9aa1-18f3fffcef53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:01.788 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[8c8b2ad1-0d1c-4022-895b-1be6da83f71a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:01 np0005593232 ovn_controller[151001]: 2026-01-23T10:12:01Z|00451|binding|INFO|Setting lport 87b7656f-9fbc-466f-bfe3-06171df90096 ovn-installed in OVS
Jan 23 05:12:01 np0005593232 ovn_controller[151001]: 2026-01-23T10:12:01Z|00452|binding|INFO|Setting lport 87b7656f-9fbc-466f-bfe3-06171df90096 up in Southbound
Jan 23 05:12:01 np0005593232 nova_compute[250269]: 2026-01-23 10:12:01.797 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:01 np0005593232 NetworkManager[49057]: <info>  [1769163121.8290] device (tap8d9599b4-80): carrier: link connected
Jan 23 05:12:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:01.837 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[990f650e-1207-4cda-99a2-4be59b8bc48e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:01.857 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb0e4bd-6624-45e9-8858-117a89cf600b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8d9599b4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:a1:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 137], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703497, 'reachable_time': 21682, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335419, 'error': None, 'target': 'ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:01.876 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ad17f37f-fb8b-47d4-9d97-283bee206927]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:a12b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703497, 'tstamp': 703497}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335420, 'error': None, 'target': 'ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:01.899 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[623ff681-49a2-4584-a1fd-50f70fa2736e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8d9599b4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:a1:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 137], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703497, 'reachable_time': 21682, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 335436, 'error': None, 'target': 'ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:01.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:01.950 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[96655546-825a-4962-b7f3-8b6fca11ab1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:02.021 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ff84b2a5-9043-42e5-b2b9-36e6407e635c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:02.024 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d9599b4-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:02.025 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:02.025 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d9599b4-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.028 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:02 np0005593232 kernel: tap8d9599b4-80: entered promiscuous mode
Jan 23 05:12:02 np0005593232 NetworkManager[49057]: <info>  [1769163122.0288] manager: (tap8d9599b4-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/219)
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.030 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:02.033 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8d9599b4-80, col_values=(('external_ids', {'iface-id': 'b57bd565-3bb1-4ecc-8df0-a7c439ac84a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.034 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:02 np0005593232 ovn_controller[151001]: 2026-01-23T10:12:02Z|00453|binding|INFO|Releasing lport b57bd565-3bb1-4ecc-8df0-a7c439ac84a6 from this chassis (sb_readonly=0)
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.035 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:02.035 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8d9599b4-8855-4310-af02-cdd058438f7d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8d9599b4-8855-4310-af02-cdd058438f7d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:02.036 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b0abeb1b-a9e2-4714-8fa5-d58718196ac9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:02.037 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-8d9599b4-8855-4310-af02-cdd058438f7d
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/8d9599b4-8855-4310-af02-cdd058438f7d.pid.haproxy
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 8d9599b4-8855-4310-af02-cdd058438f7d
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:12:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:02.038 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d', 'env', 'PROCESS_TAG=haproxy-8d9599b4-8855-4310-af02-cdd058438f7d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8d9599b4-8855-4310-af02-cdd058438f7d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.049 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.205 250273 DEBUG nova.compute.manager [req-8b8b184d-09ef-4796-a9e6-846a5bb82fac req-f4313b99-945d-4774-afa9-6c383d2b70c0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Received event network-vif-plugged-87b7656f-9fbc-466f-bfe3-06171df90096 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.205 250273 DEBUG oslo_concurrency.lockutils [req-8b8b184d-09ef-4796-a9e6-846a5bb82fac req-f4313b99-945d-4774-afa9-6c383d2b70c0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.211 250273 DEBUG oslo_concurrency.lockutils [req-8b8b184d-09ef-4796-a9e6-846a5bb82fac req-f4313b99-945d-4774-afa9-6c383d2b70c0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.211 250273 DEBUG oslo_concurrency.lockutils [req-8b8b184d-09ef-4796-a9e6-846a5bb82fac req-f4313b99-945d-4774-afa9-6c383d2b70c0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.212 250273 DEBUG nova.compute.manager [req-8b8b184d-09ef-4796-a9e6-846a5bb82fac req-f4313b99-945d-4774-afa9-6c383d2b70c0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] No waiting events found dispatching network-vif-plugged-87b7656f-9fbc-466f-bfe3-06171df90096 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.212 250273 WARNING nova.compute.manager [req-8b8b184d-09ef-4796-a9e6-846a5bb82fac req-f4313b99-945d-4774-afa9-6c383d2b70c0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Received unexpected event network-vif-plugged-87b7656f-9fbc-466f-bfe3-06171df90096 for instance with vm_state active and task_state resize_finish.#033[00m
Jan 23 05:12:02 np0005593232 podman[335488]: 2026-01-23 10:12:02.393721893 +0000 UTC m=+0.048325874 container create 84ae22e13f038aefc1522e6193b6b42d31fd573d0d2286c0cabc5532f85fa960 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 05:12:02 np0005593232 systemd[1]: Started libpod-conmon-84ae22e13f038aefc1522e6193b6b42d31fd573d0d2286c0cabc5532f85fa960.scope.
Jan 23 05:12:02 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:12:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05397a38262da1c053d5dc02c00f836fa1136544d7f582d9cd81aaa4deadf117/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:12:02 np0005593232 podman[335488]: 2026-01-23 10:12:02.365532352 +0000 UTC m=+0.020136353 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:12:02 np0005593232 podman[335488]: 2026-01-23 10:12:02.464574956 +0000 UTC m=+0.119178947 container init 84ae22e13f038aefc1522e6193b6b42d31fd573d0d2286c0cabc5532f85fa960 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 23 05:12:02 np0005593232 podman[335488]: 2026-01-23 10:12:02.472670306 +0000 UTC m=+0.127274287 container start 84ae22e13f038aefc1522e6193b6b42d31fd573d0d2286c0cabc5532f85fa960 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:12:02 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[335523]: [NOTICE]   (335531) : New worker (335533) forked
Jan 23 05:12:02 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[335523]: [NOTICE]   (335531) : Loading success.
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.542 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163122.5416837, a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.543 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.546 250273 DEBUG nova.compute.manager [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.551 250273 INFO nova.virt.libvirt.driver [-] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Instance running successfully.#033[00m
Jan 23 05:12:02 np0005593232 virtqemud[249592]: argument unsupported: QEMU guest agent is not configured
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.554 250273 DEBUG nova.virt.libvirt.guest [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.554 250273 DEBUG nova.virt.libvirt.driver [None req-14e0c743-b013-4bda-87da-8f34d660792a aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.708 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.710 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:12:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:02.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.780 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.780 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163122.5417933, a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.780 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] VM Started (Lifecycle Event)#033[00m
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.830 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.835 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:12:02 np0005593232 nova_compute[250269]: 2026-01-23 10:12:02.887 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 23 05:12:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2458: 321 pgs: 321 active+clean; 231 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 115 KiB/s rd, 13 KiB/s wr, 160 op/s
Jan 23 05:12:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:03.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:04 np0005593232 nova_compute[250269]: 2026-01-23 10:12:04.448 250273 DEBUG nova.compute.manager [req-923f0934-72d2-44f2-8422-71d22decaba3 req-016605a5-5206-4b50-a726-a57983246d2e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Received event network-vif-plugged-87b7656f-9fbc-466f-bfe3-06171df90096 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:12:04 np0005593232 nova_compute[250269]: 2026-01-23 10:12:04.448 250273 DEBUG oslo_concurrency.lockutils [req-923f0934-72d2-44f2-8422-71d22decaba3 req-016605a5-5206-4b50-a726-a57983246d2e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:12:04 np0005593232 nova_compute[250269]: 2026-01-23 10:12:04.449 250273 DEBUG oslo_concurrency.lockutils [req-923f0934-72d2-44f2-8422-71d22decaba3 req-016605a5-5206-4b50-a726-a57983246d2e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:12:04 np0005593232 nova_compute[250269]: 2026-01-23 10:12:04.449 250273 DEBUG oslo_concurrency.lockutils [req-923f0934-72d2-44f2-8422-71d22decaba3 req-016605a5-5206-4b50-a726-a57983246d2e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:12:04 np0005593232 nova_compute[250269]: 2026-01-23 10:12:04.449 250273 DEBUG nova.compute.manager [req-923f0934-72d2-44f2-8422-71d22decaba3 req-016605a5-5206-4b50-a726-a57983246d2e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] No waiting events found dispatching network-vif-plugged-87b7656f-9fbc-466f-bfe3-06171df90096 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:12:04 np0005593232 nova_compute[250269]: 2026-01-23 10:12:04.449 250273 WARNING nova.compute.manager [req-923f0934-72d2-44f2-8422-71d22decaba3 req-016605a5-5206-4b50-a726-a57983246d2e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Received unexpected event network-vif-plugged-87b7656f-9fbc-466f-bfe3-06171df90096 for instance with vm_state resized and task_state None.#033[00m
Jan 23 05:12:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:12:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:04.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:12:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2459: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 7.2 KiB/s wr, 190 op/s
Jan 23 05:12:05 np0005593232 ovn_controller[151001]: 2026-01-23T10:12:05Z|00454|binding|INFO|Releasing lport b57bd565-3bb1-4ecc-8df0-a7c439ac84a6 from this chassis (sb_readonly=0)
Jan 23 05:12:05 np0005593232 nova_compute[250269]: 2026-01-23 10:12:05.518 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:05 np0005593232 nova_compute[250269]: 2026-01-23 10:12:05.771 250273 DEBUG nova.network.neutron [None req-f2e789ae-fa14-4d30-ac2a-ede8fc3cfc83 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Port 87b7656f-9fbc-466f-bfe3-06171df90096 binding to destination host compute-0.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171#033[00m
Jan 23 05:12:05 np0005593232 nova_compute[250269]: 2026-01-23 10:12:05.772 250273 DEBUG oslo_concurrency.lockutils [None req-f2e789ae-fa14-4d30-ac2a-ede8fc3cfc83 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "refresh_cache-a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:12:05 np0005593232 nova_compute[250269]: 2026-01-23 10:12:05.772 250273 DEBUG oslo_concurrency.lockutils [None req-f2e789ae-fa14-4d30-ac2a-ede8fc3cfc83 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquired lock "refresh_cache-a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:12:05 np0005593232 nova_compute[250269]: 2026-01-23 10:12:05.773 250273 DEBUG nova.network.neutron [None req-f2e789ae-fa14-4d30-ac2a-ede8fc3cfc83 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:12:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:05.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:06 np0005593232 nova_compute[250269]: 2026-01-23 10:12:06.326 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:06.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:12:06 np0005593232 nova_compute[250269]: 2026-01-23 10:12:06.801 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:07 np0005593232 ovn_controller[151001]: 2026-01-23T10:12:07Z|00455|binding|INFO|Releasing lport b57bd565-3bb1-4ecc-8df0-a7c439ac84a6 from this chassis (sb_readonly=0)
Jan 23 05:12:07 np0005593232 nova_compute[250269]: 2026-01-23 10:12:07.115 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2460: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 6.0 KiB/s wr, 181 op/s
Jan 23 05:12:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:12:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:12:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:12:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:12:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:12:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:12:07 np0005593232 nova_compute[250269]: 2026-01-23 10:12:07.811 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:07.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:08 np0005593232 nova_compute[250269]: 2026-01-23 10:12:08.046 250273 DEBUG nova.network.neutron [None req-f2e789ae-fa14-4d30-ac2a-ede8fc3cfc83 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Updating instance_info_cache with network_info: [{"id": "87b7656f-9fbc-466f-bfe3-06171df90096", "address": "fa:16:3e:a4:3b:96", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b7656f-9f", "ovs_interfaceid": "87b7656f-9fbc-466f-bfe3-06171df90096", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:12:08 np0005593232 nova_compute[250269]: 2026-01-23 10:12:08.094 250273 DEBUG oslo_concurrency.lockutils [None req-f2e789ae-fa14-4d30-ac2a-ede8fc3cfc83 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Releasing lock "refresh_cache-a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:12:08 np0005593232 kernel: tap87b7656f-9f (unregistering): left promiscuous mode
Jan 23 05:12:08 np0005593232 NetworkManager[49057]: <info>  [1769163128.2492] device (tap87b7656f-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:12:08 np0005593232 ovn_controller[151001]: 2026-01-23T10:12:08Z|00456|binding|INFO|Releasing lport 87b7656f-9fbc-466f-bfe3-06171df90096 from this chassis (sb_readonly=0)
Jan 23 05:12:08 np0005593232 ovn_controller[151001]: 2026-01-23T10:12:08Z|00457|binding|INFO|Setting lport 87b7656f-9fbc-466f-bfe3-06171df90096 down in Southbound
Jan 23 05:12:08 np0005593232 ovn_controller[151001]: 2026-01-23T10:12:08Z|00458|binding|INFO|Removing iface tap87b7656f-9f ovn-installed in OVS
Jan 23 05:12:08 np0005593232 nova_compute[250269]: 2026-01-23 10:12:08.261 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:08.270 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:3b:96 10.100.0.12'], port_security=['fa:16:3e:a4:3b:96 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8d9599b4-8855-4310-af02-cdd058438f7d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9dd869ce76e44fc8a82b8bbee1654d33', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'cf3e0bf9-33c6-483b-a880-c8297a0be71f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.199', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=875f4baa-cb85-49ca-8f02-78715d351fdb, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=87b7656f-9fbc-466f-bfe3-06171df90096) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:12:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:08.272 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 87b7656f-9fbc-466f-bfe3-06171df90096 in datapath 8d9599b4-8855-4310-af02-cdd058438f7d unbound from our chassis#033[00m
Jan 23 05:12:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:08.273 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8d9599b4-8855-4310-af02-cdd058438f7d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:12:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:08.274 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[90eb89cc-a05a-4a58-92d7-30014b52ebcc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:08.275 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d namespace which is not needed anymore#033[00m
Jan 23 05:12:08 np0005593232 nova_compute[250269]: 2026-01-23 10:12:08.280 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:08 np0005593232 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000082.scope: Deactivated successfully.
Jan 23 05:12:08 np0005593232 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000082.scope: Consumed 6.181s CPU time.
Jan 23 05:12:08 np0005593232 systemd-machined[215836]: Machine qemu-55-instance-00000082 terminated.
Jan 23 05:12:08 np0005593232 nova_compute[250269]: 2026-01-23 10:12:08.354 250273 INFO nova.virt.libvirt.driver [-] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Instance destroyed successfully.#033[00m
Jan 23 05:12:08 np0005593232 nova_compute[250269]: 2026-01-23 10:12:08.355 250273 DEBUG nova.objects.instance [None req-f2e789ae-fa14-4d30-ac2a-ede8fc3cfc83 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'resources' on Instance uuid a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:12:08 np0005593232 nova_compute[250269]: 2026-01-23 10:12:08.376 250273 DEBUG nova.virt.libvirt.vif [None req-f2e789ae-fa14-4d30-ac2a-ede8fc3cfc83 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:10:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1307986454',display_name='tempest-ServerActionsTestOtherB-server-1307986454',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1307986454',id=130,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPpuWItOSZUstL5LlOZAhtyKqrmFs0bJ/+DBMLk1rKDBu2SnttdOypH9Db6AMV4nGhLXOyr97hIMUaALurv7OcM9NkoB1CxFMDb3d0IWPDnRphumt71Jz0jUP0kiZtXBTQ==',key_name='tempest-keypair-1844396132',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:12:02Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9dd869ce76e44fc8a82b8bbee1654d33',ramdisk_id='',reservation_id='r-7gk6dzv9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestOtherB-1052932467',owner_user_name='tempest-ServerActionsTestOtherB-1052932467-project-member'},tags=<?>,task_state='resize_reverting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:12:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aca3cab576d641d3b89e7dddf155d467',uuid=a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='resized') vif={"id": "87b7656f-9fbc-466f-bfe3-06171df90096", "address": "fa:16:3e:a4:3b:96", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", 
"subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b7656f-9f", "ovs_interfaceid": "87b7656f-9fbc-466f-bfe3-06171df90096", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:12:08 np0005593232 nova_compute[250269]: 2026-01-23 10:12:08.377 250273 DEBUG nova.network.os_vif_util [None req-f2e789ae-fa14-4d30-ac2a-ede8fc3cfc83 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converting VIF {"id": "87b7656f-9fbc-466f-bfe3-06171df90096", "address": "fa:16:3e:a4:3b:96", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b7656f-9f", "ovs_interfaceid": "87b7656f-9fbc-466f-bfe3-06171df90096", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:12:08 np0005593232 nova_compute[250269]: 2026-01-23 10:12:08.378 250273 DEBUG nova.network.os_vif_util [None req-f2e789ae-fa14-4d30-ac2a-ede8fc3cfc83 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a4:3b:96,bridge_name='br-int',has_traffic_filtering=True,id=87b7656f-9fbc-466f-bfe3-06171df90096,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b7656f-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:12:08 np0005593232 nova_compute[250269]: 2026-01-23 10:12:08.378 250273 DEBUG os_vif [None req-f2e789ae-fa14-4d30-ac2a-ede8fc3cfc83 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:3b:96,bridge_name='br-int',has_traffic_filtering=True,id=87b7656f-9fbc-466f-bfe3-06171df90096,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b7656f-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:12:08 np0005593232 nova_compute[250269]: 2026-01-23 10:12:08.380 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:08 np0005593232 nova_compute[250269]: 2026-01-23 10:12:08.380 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87b7656f-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:12:08 np0005593232 nova_compute[250269]: 2026-01-23 10:12:08.382 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:08 np0005593232 nova_compute[250269]: 2026-01-23 10:12:08.383 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:08 np0005593232 nova_compute[250269]: 2026-01-23 10:12:08.386 250273 INFO os_vif [None req-f2e789ae-fa14-4d30-ac2a-ede8fc3cfc83 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:3b:96,bridge_name='br-int',has_traffic_filtering=True,id=87b7656f-9fbc-466f-bfe3-06171df90096,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87b7656f-9f')#033[00m
Jan 23 05:12:08 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[335523]: [NOTICE]   (335531) : haproxy version is 2.8.14-c23fe91
Jan 23 05:12:08 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[335523]: [NOTICE]   (335531) : path to executable is /usr/sbin/haproxy
Jan 23 05:12:08 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[335523]: [WARNING]  (335531) : Exiting Master process...
Jan 23 05:12:08 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[335523]: [ALERT]    (335531) : Current worker (335533) exited with code 143 (Terminated)
Jan 23 05:12:08 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[335523]: [WARNING]  (335531) : All workers exited. Exiting... (0)
Jan 23 05:12:08 np0005593232 systemd[1]: libpod-84ae22e13f038aefc1522e6193b6b42d31fd573d0d2286c0cabc5532f85fa960.scope: Deactivated successfully.
Jan 23 05:12:08 np0005593232 conmon[335523]: conmon 84ae22e13f038aefc152 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-84ae22e13f038aefc1522e6193b6b42d31fd573d0d2286c0cabc5532f85fa960.scope/container/memory.events
Jan 23 05:12:08 np0005593232 podman[335580]: 2026-01-23 10:12:08.411254737 +0000 UTC m=+0.046660877 container died 84ae22e13f038aefc1522e6193b6b42d31fd573d0d2286c0cabc5532f85fa960 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:12:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-84ae22e13f038aefc1522e6193b6b42d31fd573d0d2286c0cabc5532f85fa960-userdata-shm.mount: Deactivated successfully.
Jan 23 05:12:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay-05397a38262da1c053d5dc02c00f836fa1136544d7f582d9cd81aaa4deadf117-merged.mount: Deactivated successfully.
Jan 23 05:12:08 np0005593232 podman[335580]: 2026-01-23 10:12:08.448639119 +0000 UTC m=+0.084045259 container cleanup 84ae22e13f038aefc1522e6193b6b42d31fd573d0d2286c0cabc5532f85fa960 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 05:12:08 np0005593232 systemd[1]: libpod-conmon-84ae22e13f038aefc1522e6193b6b42d31fd573d0d2286c0cabc5532f85fa960.scope: Deactivated successfully.
Jan 23 05:12:08 np0005593232 podman[335635]: 2026-01-23 10:12:08.513483342 +0000 UTC m=+0.040800600 container remove 84ae22e13f038aefc1522e6193b6b42d31fd573d0d2286c0cabc5532f85fa960 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 05:12:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:08.519 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2aac87d2-be6e-4fc7-9da3-8fc06623543c]: (4, ('Fri Jan 23 10:12:08 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d (84ae22e13f038aefc1522e6193b6b42d31fd573d0d2286c0cabc5532f85fa960)\n84ae22e13f038aefc1522e6193b6b42d31fd573d0d2286c0cabc5532f85fa960\nFri Jan 23 10:12:08 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d (84ae22e13f038aefc1522e6193b6b42d31fd573d0d2286c0cabc5532f85fa960)\n84ae22e13f038aefc1522e6193b6b42d31fd573d0d2286c0cabc5532f85fa960\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:08.521 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[55074bdd-6d9e-4b71-a7b2-adf4a51ac64d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:08.522 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d9599b4-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:12:08 np0005593232 nova_compute[250269]: 2026-01-23 10:12:08.523 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:08 np0005593232 kernel: tap8d9599b4-80: left promiscuous mode
Jan 23 05:12:08 np0005593232 nova_compute[250269]: 2026-01-23 10:12:08.526 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:08.530 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[cb375d01-88bf-4c31-b383-60f841859701]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:08 np0005593232 nova_compute[250269]: 2026-01-23 10:12:08.540 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:08.546 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[28704548-c692-4b70-81a9-022ad676d3c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:08.547 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[cf30a8f4-ed27-4f70-9048-eb0588fb5201]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:08.562 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[79d0bf1d-9096-406a-a2fe-e4220441e556]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703483, 'reachable_time': 37138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335674, 'error': None, 'target': 'ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:08 np0005593232 systemd[1]: run-netns-ovnmeta\x2d8d9599b4\x2d8855\x2d4310\x2daf02\x2dcdd058438f7d.mount: Deactivated successfully.
Jan 23 05:12:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:08.566 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:12:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:08.566 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[76f54a18-d707-41dd-8958-94ea16f2d129]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:08 np0005593232 nova_compute[250269]: 2026-01-23 10:12:08.693 250273 DEBUG nova.compute.manager [req-7074a644-3664-45a2-abf8-af3823623710 req-7e7412e3-0234-4375-aafd-9798ff4eac7d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Received event network-vif-unplugged-87b7656f-9fbc-466f-bfe3-06171df90096 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:12:08 np0005593232 nova_compute[250269]: 2026-01-23 10:12:08.694 250273 DEBUG oslo_concurrency.lockutils [req-7074a644-3664-45a2-abf8-af3823623710 req-7e7412e3-0234-4375-aafd-9798ff4eac7d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:12:08 np0005593232 nova_compute[250269]: 2026-01-23 10:12:08.694 250273 DEBUG oslo_concurrency.lockutils [req-7074a644-3664-45a2-abf8-af3823623710 req-7e7412e3-0234-4375-aafd-9798ff4eac7d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:12:08 np0005593232 nova_compute[250269]: 2026-01-23 10:12:08.695 250273 DEBUG oslo_concurrency.lockutils [req-7074a644-3664-45a2-abf8-af3823623710 req-7e7412e3-0234-4375-aafd-9798ff4eac7d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:12:08 np0005593232 nova_compute[250269]: 2026-01-23 10:12:08.695 250273 DEBUG nova.compute.manager [req-7074a644-3664-45a2-abf8-af3823623710 req-7e7412e3-0234-4375-aafd-9798ff4eac7d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] No waiting events found dispatching network-vif-unplugged-87b7656f-9fbc-466f-bfe3-06171df90096 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:12:08 np0005593232 nova_compute[250269]: 2026-01-23 10:12:08.695 250273 WARNING nova.compute.manager [req-7074a644-3664-45a2-abf8-af3823623710 req-7e7412e3-0234-4375-aafd-9798ff4eac7d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Received unexpected event network-vif-unplugged-87b7656f-9fbc-466f-bfe3-06171df90096 for instance with vm_state resized and task_state resize_reverting.#033[00m
Jan 23 05:12:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:12:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:08.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:12:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:08.961 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:12:09 np0005593232 nova_compute[250269]: 2026-01-23 10:12:09.120 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769163114.1186464, 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:12:09 np0005593232 nova_compute[250269]: 2026-01-23 10:12:09.120 250273 INFO nova.compute.manager [-] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:12:09 np0005593232 nova_compute[250269]: 2026-01-23 10:12:09.143 250273 DEBUG nova.compute.manager [None req-95d815de-5007-4558-b56f-24d9a9ad8489 - - - - - -] [instance: 280ec2fb-6ca3-4b43-bff4-ba64ac3a935b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:12:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2461: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 5.6 KiB/s wr, 163 op/s
Jan 23 05:12:09 np0005593232 nova_compute[250269]: 2026-01-23 10:12:09.473 250273 DEBUG oslo_concurrency.lockutils [None req-f2e789ae-fa14-4d30-ac2a-ede8fc3cfc83 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:12:09 np0005593232 nova_compute[250269]: 2026-01-23 10:12:09.473 250273 DEBUG oslo_concurrency.lockutils [None req-f2e789ae-fa14-4d30-ac2a-ede8fc3cfc83 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:12:09 np0005593232 nova_compute[250269]: 2026-01-23 10:12:09.494 250273 DEBUG nova.objects.instance [None req-f2e789ae-fa14-4d30-ac2a-ede8fc3cfc83 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'migration_context' on Instance uuid a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:12:09 np0005593232 nova_compute[250269]: 2026-01-23 10:12:09.575 250273 DEBUG oslo_concurrency.processutils [None req-f2e789ae-fa14-4d30-ac2a-ede8fc3cfc83 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:12:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:12:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:09.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:12:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:12:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2373219130' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:12:10 np0005593232 nova_compute[250269]: 2026-01-23 10:12:10.014 250273 DEBUG oslo_concurrency.processutils [None req-f2e789ae-fa14-4d30-ac2a-ede8fc3cfc83 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:12:10 np0005593232 nova_compute[250269]: 2026-01-23 10:12:10.019 250273 DEBUG nova.compute.provider_tree [None req-f2e789ae-fa14-4d30-ac2a-ede8fc3cfc83 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:12:10 np0005593232 nova_compute[250269]: 2026-01-23 10:12:10.052 250273 DEBUG nova.scheduler.client.report [None req-f2e789ae-fa14-4d30-ac2a-ede8fc3cfc83 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:12:10 np0005593232 nova_compute[250269]: 2026-01-23 10:12:10.124 250273 DEBUG oslo_concurrency.lockutils [None req-f2e789ae-fa14-4d30-ac2a-ede8fc3cfc83 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:12:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:10.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:11 np0005593232 nova_compute[250269]: 2026-01-23 10:12:11.036 250273 DEBUG nova.compute.manager [req-f74dbd75-d2a3-4bc1-ab55-042a0ed026a5 req-242d5e0f-614b-49dc-be0f-3b1c74c2d501 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Received event network-vif-plugged-87b7656f-9fbc-466f-bfe3-06171df90096 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:12:11 np0005593232 nova_compute[250269]: 2026-01-23 10:12:11.037 250273 DEBUG oslo_concurrency.lockutils [req-f74dbd75-d2a3-4bc1-ab55-042a0ed026a5 req-242d5e0f-614b-49dc-be0f-3b1c74c2d501 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:12:11 np0005593232 nova_compute[250269]: 2026-01-23 10:12:11.037 250273 DEBUG oslo_concurrency.lockutils [req-f74dbd75-d2a3-4bc1-ab55-042a0ed026a5 req-242d5e0f-614b-49dc-be0f-3b1c74c2d501 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:12:11 np0005593232 nova_compute[250269]: 2026-01-23 10:12:11.037 250273 DEBUG oslo_concurrency.lockutils [req-f74dbd75-d2a3-4bc1-ab55-042a0ed026a5 req-242d5e0f-614b-49dc-be0f-3b1c74c2d501 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:12:11 np0005593232 nova_compute[250269]: 2026-01-23 10:12:11.037 250273 DEBUG nova.compute.manager [req-f74dbd75-d2a3-4bc1-ab55-042a0ed026a5 req-242d5e0f-614b-49dc-be0f-3b1c74c2d501 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] No waiting events found dispatching network-vif-plugged-87b7656f-9fbc-466f-bfe3-06171df90096 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:12:11 np0005593232 nova_compute[250269]: 2026-01-23 10:12:11.037 250273 WARNING nova.compute.manager [req-f74dbd75-d2a3-4bc1-ab55-042a0ed026a5 req-242d5e0f-614b-49dc-be0f-3b1c74c2d501 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Received unexpected event network-vif-plugged-87b7656f-9fbc-466f-bfe3-06171df90096 for instance with vm_state resized and task_state resize_reverting.#033[00m
Jan 23 05:12:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2462: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.0 KiB/s wr, 145 op/s
Jan 23 05:12:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:12:11 np0005593232 nova_compute[250269]: 2026-01-23 10:12:11.802 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:11.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:12 np0005593232 nova_compute[250269]: 2026-01-23 10:12:12.292 250273 DEBUG nova.compute.manager [req-6cfd2254-0d84-4421-913c-69643e1118fc req-d8118929-ea14-41e8-8d0b-65a6d4faba1b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Received event network-changed-87b7656f-9fbc-466f-bfe3-06171df90096 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:12:12 np0005593232 nova_compute[250269]: 2026-01-23 10:12:12.292 250273 DEBUG nova.compute.manager [req-6cfd2254-0d84-4421-913c-69643e1118fc req-d8118929-ea14-41e8-8d0b-65a6d4faba1b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Refreshing instance network info cache due to event network-changed-87b7656f-9fbc-466f-bfe3-06171df90096. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:12:12 np0005593232 nova_compute[250269]: 2026-01-23 10:12:12.292 250273 DEBUG oslo_concurrency.lockutils [req-6cfd2254-0d84-4421-913c-69643e1118fc req-d8118929-ea14-41e8-8d0b-65a6d4faba1b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:12:12 np0005593232 nova_compute[250269]: 2026-01-23 10:12:12.292 250273 DEBUG oslo_concurrency.lockutils [req-6cfd2254-0d84-4421-913c-69643e1118fc req-d8118929-ea14-41e8-8d0b-65a6d4faba1b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:12:12 np0005593232 nova_compute[250269]: 2026-01-23 10:12:12.293 250273 DEBUG nova.network.neutron [req-6cfd2254-0d84-4421-913c-69643e1118fc req-d8118929-ea14-41e8-8d0b-65a6d4faba1b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Refreshing network info cache for port 87b7656f-9fbc-466f-bfe3-06171df90096 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:12:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:12:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:12.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:12:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2463: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.7 KiB/s wr, 136 op/s
Jan 23 05:12:13 np0005593232 nova_compute[250269]: 2026-01-23 10:12:13.383 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:13.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:12:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:14.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:12:15 np0005593232 nova_compute[250269]: 2026-01-23 10:12:15.133 250273 DEBUG nova.network.neutron [req-6cfd2254-0d84-4421-913c-69643e1118fc req-d8118929-ea14-41e8-8d0b-65a6d4faba1b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Updated VIF entry in instance network info cache for port 87b7656f-9fbc-466f-bfe3-06171df90096. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:12:15 np0005593232 nova_compute[250269]: 2026-01-23 10:12:15.134 250273 DEBUG nova.network.neutron [req-6cfd2254-0d84-4421-913c-69643e1118fc req-d8118929-ea14-41e8-8d0b-65a6d4faba1b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Updating instance_info_cache with network_info: [{"id": "87b7656f-9fbc-466f-bfe3-06171df90096", "address": "fa:16:3e:a4:3b:96", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87b7656f-9f", "ovs_interfaceid": "87b7656f-9fbc-466f-bfe3-06171df90096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:12:15 np0005593232 nova_compute[250269]: 2026-01-23 10:12:15.162 250273 DEBUG oslo_concurrency.lockutils [req-6cfd2254-0d84-4421-913c-69643e1118fc req-d8118929-ea14-41e8-8d0b-65a6d4faba1b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:12:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2464: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 597 B/s wr, 80 op/s
Jan 23 05:12:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:12:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3696689965' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:12:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:15.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:16.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:12:16 np0005593232 nova_compute[250269]: 2026-01-23 10:12:16.804 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Jan 23 05:12:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Jan 23 05:12:16 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Jan 23 05:12:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2466: 321 pgs: 321 active+clean; 200 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 0 B/s wr, 44 op/s
Jan 23 05:12:17 np0005593232 nova_compute[250269]: 2026-01-23 10:12:17.382 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:17.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:18 np0005593232 nova_compute[250269]: 2026-01-23 10:12:18.385 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:18 np0005593232 nova_compute[250269]: 2026-01-23 10:12:18.601 250273 DEBUG nova.compute.manager [req-c1dbf397-743a-47c3-9cb4-63696a7360d5 req-2e1a555e-6f7a-42c4-aa66-59ae51e53201 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Received event network-vif-plugged-87b7656f-9fbc-466f-bfe3-06171df90096 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:12:18 np0005593232 nova_compute[250269]: 2026-01-23 10:12:18.602 250273 DEBUG oslo_concurrency.lockutils [req-c1dbf397-743a-47c3-9cb4-63696a7360d5 req-2e1a555e-6f7a-42c4-aa66-59ae51e53201 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:12:18 np0005593232 nova_compute[250269]: 2026-01-23 10:12:18.603 250273 DEBUG oslo_concurrency.lockutils [req-c1dbf397-743a-47c3-9cb4-63696a7360d5 req-2e1a555e-6f7a-42c4-aa66-59ae51e53201 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:12:18 np0005593232 nova_compute[250269]: 2026-01-23 10:12:18.603 250273 DEBUG oslo_concurrency.lockutils [req-c1dbf397-743a-47c3-9cb4-63696a7360d5 req-2e1a555e-6f7a-42c4-aa66-59ae51e53201 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:12:18 np0005593232 nova_compute[250269]: 2026-01-23 10:12:18.603 250273 DEBUG nova.compute.manager [req-c1dbf397-743a-47c3-9cb4-63696a7360d5 req-2e1a555e-6f7a-42c4-aa66-59ae51e53201 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] No waiting events found dispatching network-vif-plugged-87b7656f-9fbc-466f-bfe3-06171df90096 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:12:18 np0005593232 nova_compute[250269]: 2026-01-23 10:12:18.604 250273 WARNING nova.compute.manager [req-c1dbf397-743a-47c3-9cb4-63696a7360d5 req-2e1a555e-6f7a-42c4-aa66-59ae51e53201 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Received unexpected event network-vif-plugged-87b7656f-9fbc-466f-bfe3-06171df90096 for instance with vm_state resized and task_state resize_reverting.#033[00m
Jan 23 05:12:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:12:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:18.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:12:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2467: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 200 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 102 B/s wr, 16 op/s
Jan 23 05:12:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:12:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:19.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:12:19 np0005593232 nova_compute[250269]: 2026-01-23 10:12:19.946 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:20.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2468: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 200 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 102 B/s wr, 16 op/s
Jan 23 05:12:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:12:21 np0005593232 nova_compute[250269]: 2026-01-23 10:12:21.844 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:21.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:22.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2469: 321 pgs: 321 active+clean; 214 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 963 KiB/s wr, 91 op/s
Jan 23 05:12:23 np0005593232 nova_compute[250269]: 2026-01-23 10:12:23.352 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769163128.3511987, a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:12:23 np0005593232 nova_compute[250269]: 2026-01-23 10:12:23.352 250273 INFO nova.compute.manager [-] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:12:23 np0005593232 nova_compute[250269]: 2026-01-23 10:12:23.389 250273 DEBUG nova.compute.manager [None req-d0b65722-a438-4a95-8cca-c6148e9e4acc - - - - - -] [instance: a4f2647f-5c8b-4e7d-bbf2-eb149db4db2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:12:23 np0005593232 nova_compute[250269]: 2026-01-23 10:12:23.390 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:23 np0005593232 podman[335705]: 2026-01-23 10:12:23.44670405 +0000 UTC m=+0.106458096 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 23 05:12:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:23.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:24.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:25 np0005593232 nova_compute[250269]: 2026-01-23 10:12:25.129 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2470: 321 pgs: 321 active+clean; 189 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 137 op/s
Jan 23 05:12:25 np0005593232 nova_compute[250269]: 2026-01-23 10:12:25.499 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:25.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:26.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:12:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Jan 23 05:12:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Jan 23 05:12:26 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Jan 23 05:12:26 np0005593232 nova_compute[250269]: 2026-01-23 10:12:26.846 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2472: 321 pgs: 321 active+clean; 189 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 152 op/s
Jan 23 05:12:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:27.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:28 np0005593232 nova_compute[250269]: 2026-01-23 10:12:28.392 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:28 np0005593232 podman[335735]: 2026-01-23 10:12:28.397994348 +0000 UTC m=+0.055334153 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 23 05:12:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:28.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2473: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 152 op/s
Jan 23 05:12:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:29.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:30.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2474: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 152 op/s
Jan 23 05:12:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:12:31 np0005593232 nova_compute[250269]: 2026-01-23 10:12:31.849 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:31.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:32 np0005593232 nova_compute[250269]: 2026-01-23 10:12:32.552 250273 DEBUG oslo_concurrency.lockutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Acquiring lock "8056a321-13d3-4dd8-bb33-70c832c17ac1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:12:32 np0005593232 nova_compute[250269]: 2026-01-23 10:12:32.552 250273 DEBUG oslo_concurrency.lockutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:12:32 np0005593232 nova_compute[250269]: 2026-01-23 10:12:32.580 250273 DEBUG nova.compute.manager [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 23 05:12:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:12:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:32.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:12:32 np0005593232 nova_compute[250269]: 2026-01-23 10:12:32.854 250273 DEBUG oslo_concurrency.lockutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:12:32 np0005593232 nova_compute[250269]: 2026-01-23 10:12:32.855 250273 DEBUG oslo_concurrency.lockutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:12:32 np0005593232 nova_compute[250269]: 2026-01-23 10:12:32.863 250273 DEBUG nova.virt.hardware [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 23 05:12:32 np0005593232 nova_compute[250269]: 2026-01-23 10:12:32.863 250273 INFO nova.compute.claims [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Claim successful on node compute-0.ctlplane.example.com
Jan 23 05:12:32 np0005593232 nova_compute[250269]: 2026-01-23 10:12:32.979 250273 DEBUG oslo_concurrency.processutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:12:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2475: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.2 MiB/s wr, 77 op/s
Jan 23 05:12:33 np0005593232 nova_compute[250269]: 2026-01-23 10:12:33.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:12:33 np0005593232 nova_compute[250269]: 2026-01-23 10:12:33.395 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:12:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:12:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3794114849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:12:33 np0005593232 nova_compute[250269]: 2026-01-23 10:12:33.502 250273 DEBUG oslo_concurrency.processutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:12:33 np0005593232 nova_compute[250269]: 2026-01-23 10:12:33.509 250273 DEBUG nova.compute.provider_tree [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 05:12:33 np0005593232 nova_compute[250269]: 2026-01-23 10:12:33.533 250273 DEBUG nova.scheduler.client.report [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 05:12:33 np0005593232 nova_compute[250269]: 2026-01-23 10:12:33.583 250273 DEBUG oslo_concurrency.lockutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:12:33 np0005593232 nova_compute[250269]: 2026-01-23 10:12:33.584 250273 DEBUG nova.compute.manager [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 05:12:33 np0005593232 nova_compute[250269]: 2026-01-23 10:12:33.649 250273 DEBUG nova.compute.manager [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 05:12:33 np0005593232 nova_compute[250269]: 2026-01-23 10:12:33.649 250273 DEBUG nova.network.neutron [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 05:12:33 np0005593232 nova_compute[250269]: 2026-01-23 10:12:33.672 250273 INFO nova.virt.libvirt.driver [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 05:12:33 np0005593232 nova_compute[250269]: 2026-01-23 10:12:33.691 250273 DEBUG nova.compute.manager [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 05:12:33 np0005593232 nova_compute[250269]: 2026-01-23 10:12:33.833 250273 DEBUG nova.compute.manager [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 05:12:33 np0005593232 nova_compute[250269]: 2026-01-23 10:12:33.834 250273 DEBUG nova.virt.libvirt.driver [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 05:12:33 np0005593232 nova_compute[250269]: 2026-01-23 10:12:33.835 250273 INFO nova.virt.libvirt.driver [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Creating image(s)
Jan 23 05:12:33 np0005593232 nova_compute[250269]: 2026-01-23 10:12:33.873 250273 DEBUG nova.storage.rbd_utils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] rbd image 8056a321-13d3-4dd8-bb33-70c832c17ac1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:12:33 np0005593232 nova_compute[250269]: 2026-01-23 10:12:33.913 250273 DEBUG nova.storage.rbd_utils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] rbd image 8056a321-13d3-4dd8-bb33-70c832c17ac1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:12:33 np0005593232 nova_compute[250269]: 2026-01-23 10:12:33.946 250273 DEBUG nova.storage.rbd_utils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] rbd image 8056a321-13d3-4dd8-bb33-70c832c17ac1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:12:33 np0005593232 nova_compute[250269]: 2026-01-23 10:12:33.951 250273 DEBUG oslo_concurrency.processutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:12:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:12:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:33.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:12:34 np0005593232 nova_compute[250269]: 2026-01-23 10:12:34.005 250273 DEBUG nova.policy [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fae914e59ec54f6b80928ef3cc68dbdb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0a6ba16c4b9d49d3bc24cd7b44935d1f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 23 05:12:34 np0005593232 nova_compute[250269]: 2026-01-23 10:12:34.043 250273 DEBUG oslo_concurrency.processutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:12:34 np0005593232 nova_compute[250269]: 2026-01-23 10:12:34.045 250273 DEBUG oslo_concurrency.lockutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:12:34 np0005593232 nova_compute[250269]: 2026-01-23 10:12:34.046 250273 DEBUG oslo_concurrency.lockutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:12:34 np0005593232 nova_compute[250269]: 2026-01-23 10:12:34.047 250273 DEBUG oslo_concurrency.lockutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:12:34 np0005593232 nova_compute[250269]: 2026-01-23 10:12:34.093 250273 DEBUG nova.storage.rbd_utils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] rbd image 8056a321-13d3-4dd8-bb33-70c832c17ac1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:12:34 np0005593232 nova_compute[250269]: 2026-01-23 10:12:34.101 250273 DEBUG oslo_concurrency.processutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 8056a321-13d3-4dd8-bb33-70c832c17ac1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:12:34 np0005593232 nova_compute[250269]: 2026-01-23 10:12:34.485 250273 DEBUG oslo_concurrency.processutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 8056a321-13d3-4dd8-bb33-70c832c17ac1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:12:34 np0005593232 nova_compute[250269]: 2026-01-23 10:12:34.593 250273 DEBUG nova.storage.rbd_utils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] resizing rbd image 8056a321-13d3-4dd8-bb33-70c832c17ac1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 23 05:12:34 np0005593232 nova_compute[250269]: 2026-01-23 10:12:34.731 250273 DEBUG nova.objects.instance [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lazy-loading 'migration_context' on Instance uuid 8056a321-13d3-4dd8-bb33-70c832c17ac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 05:12:34 np0005593232 nova_compute[250269]: 2026-01-23 10:12:34.769 250273 DEBUG nova.virt.libvirt.driver [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 05:12:34 np0005593232 nova_compute[250269]: 2026-01-23 10:12:34.769 250273 DEBUG nova.virt.libvirt.driver [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Ensure instance console log exists: /var/lib/nova/instances/8056a321-13d3-4dd8-bb33-70c832c17ac1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 05:12:34 np0005593232 nova_compute[250269]: 2026-01-23 10:12:34.770 250273 DEBUG oslo_concurrency.lockutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:12:34 np0005593232 nova_compute[250269]: 2026-01-23 10:12:34.771 250273 DEBUG oslo_concurrency.lockutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:12:34 np0005593232 nova_compute[250269]: 2026-01-23 10:12:34.771 250273 DEBUG oslo_concurrency.lockutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:12:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:12:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:34.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:12:34 np0005593232 nova_compute[250269]: 2026-01-23 10:12:34.874 250273 DEBUG nova.network.neutron [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Successfully created port: 5247b656-d92f-4246-8db1-32dd4ca770b1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 23 05:12:35 np0005593232 nova_compute[250269]: 2026-01-23 10:12:35.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:12:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2476: 321 pgs: 321 active+clean; 167 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 833 KiB/s wr, 33 op/s
Jan 23 05:12:35 np0005593232 nova_compute[250269]: 2026-01-23 10:12:35.867 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:12:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:35.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:36 np0005593232 nova_compute[250269]: 2026-01-23 10:12:36.045 250273 DEBUG nova.network.neutron [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Successfully updated port: 5247b656-d92f-4246-8db1-32dd4ca770b1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 23 05:12:36 np0005593232 nova_compute[250269]: 2026-01-23 10:12:36.075 250273 DEBUG oslo_concurrency.lockutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Acquiring lock "refresh_cache-8056a321-13d3-4dd8-bb33-70c832c17ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 05:12:36 np0005593232 nova_compute[250269]: 2026-01-23 10:12:36.075 250273 DEBUG oslo_concurrency.lockutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Acquired lock "refresh_cache-8056a321-13d3-4dd8-bb33-70c832c17ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 05:12:36 np0005593232 nova_compute[250269]: 2026-01-23 10:12:36.075 250273 DEBUG nova.network.neutron [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 05:12:36 np0005593232 nova_compute[250269]: 2026-01-23 10:12:36.213 250273 DEBUG nova.compute.manager [req-8d460139-6d7b-4a0b-b80c-83d2b7e26bac req-58e9453c-39fc-4399-9f1f-0d3e084b61c1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received event network-changed-5247b656-d92f-4246-8db1-32dd4ca770b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:12:36 np0005593232 nova_compute[250269]: 2026-01-23 10:12:36.214 250273 DEBUG nova.compute.manager [req-8d460139-6d7b-4a0b-b80c-83d2b7e26bac req-58e9453c-39fc-4399-9f1f-0d3e084b61c1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Refreshing instance network info cache due to event network-changed-5247b656-d92f-4246-8db1-32dd4ca770b1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 05:12:36 np0005593232 nova_compute[250269]: 2026-01-23 10:12:36.214 250273 DEBUG oslo_concurrency.lockutils [req-8d460139-6d7b-4a0b-b80c-83d2b7e26bac req-58e9453c-39fc-4399-9f1f-0d3e084b61c1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-8056a321-13d3-4dd8-bb33-70c832c17ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 05:12:36 np0005593232 nova_compute[250269]: 2026-01-23 10:12:36.390 250273 DEBUG nova.network.neutron [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 05:12:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:12:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:36.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:12:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:12:36 np0005593232 nova_compute[250269]: 2026-01-23 10:12:36.852 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:12:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:12:37
Jan 23 05:12:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:12:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:12:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', '.mgr', 'default.rgw.meta', 'volumes', 'vms', 'default.rgw.control']
Jan 23 05:12:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:12:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2477: 321 pgs: 321 active+clean; 186 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 896 KiB/s rd, 718 KiB/s wr, 53 op/s
Jan 23 05:12:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:12:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:12:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:12:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:12:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:12:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:12:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:37.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:12:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:12:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:12:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:12:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:12:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:12:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:12:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:12:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:12:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.400 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.720 250273 DEBUG nova.network.neutron [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Updating instance_info_cache with network_info: [{"id": "5247b656-d92f-4246-8db1-32dd4ca770b1", "address": "fa:16:3e:7a:b9:0c", "network": {"id": "00bd3319-bfe5-4acd-b2e4-17830ee847f9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-921943436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a6ba16c4b9d49d3bc24cd7b44935d1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5247b656-d9", "ovs_interfaceid": "5247b656-d92f-4246-8db1-32dd4ca770b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.770 250273 DEBUG oslo_concurrency.lockutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Releasing lock "refresh_cache-8056a321-13d3-4dd8-bb33-70c832c17ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.771 250273 DEBUG nova.compute.manager [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Instance network_info: |[{"id": "5247b656-d92f-4246-8db1-32dd4ca770b1", "address": "fa:16:3e:7a:b9:0c", "network": {"id": "00bd3319-bfe5-4acd-b2e4-17830ee847f9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-921943436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a6ba16c4b9d49d3bc24cd7b44935d1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5247b656-d9", "ovs_interfaceid": "5247b656-d92f-4246-8db1-32dd4ca770b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.771 250273 DEBUG oslo_concurrency.lockutils [req-8d460139-6d7b-4a0b-b80c-83d2b7e26bac req-58e9453c-39fc-4399-9f1f-0d3e084b61c1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-8056a321-13d3-4dd8-bb33-70c832c17ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.771 250273 DEBUG nova.network.neutron [req-8d460139-6d7b-4a0b-b80c-83d2b7e26bac req-58e9453c-39fc-4399-9f1f-0d3e084b61c1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Refreshing network info cache for port 5247b656-d92f-4246-8db1-32dd4ca770b1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.773 250273 DEBUG nova.virt.libvirt.driver [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Start _get_guest_xml network_info=[{"id": "5247b656-d92f-4246-8db1-32dd4ca770b1", "address": "fa:16:3e:7a:b9:0c", "network": {"id": "00bd3319-bfe5-4acd-b2e4-17830ee847f9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-921943436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a6ba16c4b9d49d3bc24cd7b44935d1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5247b656-d9", "ovs_interfaceid": "5247b656-d92f-4246-8db1-32dd4ca770b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.780 250273 WARNING nova.virt.libvirt.driver [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.785 250273 DEBUG nova.virt.libvirt.host [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.786 250273 DEBUG nova.virt.libvirt.host [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:12:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:38.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.794 250273 DEBUG nova.virt.libvirt.host [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.795 250273 DEBUG nova.virt.libvirt.host [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.797 250273 DEBUG nova.virt.libvirt.driver [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.797 250273 DEBUG nova.virt.hardware [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.797 250273 DEBUG nova.virt.hardware [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.798 250273 DEBUG nova.virt.hardware [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.798 250273 DEBUG nova.virt.hardware [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.798 250273 DEBUG nova.virt.hardware [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.799 250273 DEBUG nova.virt.hardware [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.799 250273 DEBUG nova.virt.hardware [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.799 250273 DEBUG nova.virt.hardware [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.799 250273 DEBUG nova.virt.hardware [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.800 250273 DEBUG nova.virt.hardware [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.800 250273 DEBUG nova.virt.hardware [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:12:38 np0005593232 nova_compute[250269]: 2026-01-23 10:12:38.803 250273 DEBUG oslo_concurrency.processutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:12:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2478: 321 pgs: 321 active+clean; 243 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 128 op/s
Jan 23 05:12:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:12:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2559609477' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.346 250273 DEBUG oslo_concurrency.processutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.389 250273 DEBUG nova.storage.rbd_utils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] rbd image 8056a321-13d3-4dd8-bb33-70c832c17ac1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.395 250273 DEBUG oslo_concurrency.processutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:12:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:12:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3294185647' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.849 250273 DEBUG oslo_concurrency.processutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.853 250273 DEBUG nova.virt.libvirt.vif [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:12:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1844411829',display_name='tempest-ServerRescueNegativeTestJSON-server-1844411829',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1844411829',id=133,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a6ba16c4b9d49d3bc24cd7b44935d1f',ramdisk_id='',reservation_id='r-v1vp7v6p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-87224704',owner_user_name='
tempest-ServerRescueNegativeTestJSON-87224704-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:12:33Z,user_data=None,user_id='fae914e59ec54f6b80928ef3cc68dbdb',uuid=8056a321-13d3-4dd8-bb33-70c832c17ac1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5247b656-d92f-4246-8db1-32dd4ca770b1", "address": "fa:16:3e:7a:b9:0c", "network": {"id": "00bd3319-bfe5-4acd-b2e4-17830ee847f9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-921943436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a6ba16c4b9d49d3bc24cd7b44935d1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5247b656-d9", "ovs_interfaceid": "5247b656-d92f-4246-8db1-32dd4ca770b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.853 250273 DEBUG nova.network.os_vif_util [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Converting VIF {"id": "5247b656-d92f-4246-8db1-32dd4ca770b1", "address": "fa:16:3e:7a:b9:0c", "network": {"id": "00bd3319-bfe5-4acd-b2e4-17830ee847f9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-921943436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a6ba16c4b9d49d3bc24cd7b44935d1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5247b656-d9", "ovs_interfaceid": "5247b656-d92f-4246-8db1-32dd4ca770b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.854 250273 DEBUG nova.network.os_vif_util [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:b9:0c,bridge_name='br-int',has_traffic_filtering=True,id=5247b656-d92f-4246-8db1-32dd4ca770b1,network=Network(00bd3319-bfe5-4acd-b2e4-17830ee847f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5247b656-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.855 250273 DEBUG nova.objects.instance [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lazy-loading 'pci_devices' on Instance uuid 8056a321-13d3-4dd8-bb33-70c832c17ac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.876 250273 DEBUG nova.virt.libvirt.driver [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:12:39 np0005593232 nova_compute[250269]:  <uuid>8056a321-13d3-4dd8-bb33-70c832c17ac1</uuid>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:  <name>instance-00000085</name>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-1844411829</nova:name>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:12:38</nova:creationTime>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:12:39 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:        <nova:user uuid="fae914e59ec54f6b80928ef3cc68dbdb">tempest-ServerRescueNegativeTestJSON-87224704-project-member</nova:user>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:        <nova:project uuid="0a6ba16c4b9d49d3bc24cd7b44935d1f">tempest-ServerRescueNegativeTestJSON-87224704</nova:project>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:        <nova:port uuid="5247b656-d92f-4246-8db1-32dd4ca770b1">
Jan 23 05:12:39 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <entry name="serial">8056a321-13d3-4dd8-bb33-70c832c17ac1</entry>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <entry name="uuid">8056a321-13d3-4dd8-bb33-70c832c17ac1</entry>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/8056a321-13d3-4dd8-bb33-70c832c17ac1_disk">
Jan 23 05:12:39 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:12:39 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/8056a321-13d3-4dd8-bb33-70c832c17ac1_disk.config">
Jan 23 05:12:39 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:12:39 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:7a:b9:0c"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <target dev="tap5247b656-d9"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/8056a321-13d3-4dd8-bb33-70c832c17ac1/console.log" append="off"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:12:39 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:12:39 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:12:39 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:12:39 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.877 250273 DEBUG nova.compute.manager [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Preparing to wait for external event network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.877 250273 DEBUG oslo_concurrency.lockutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Acquiring lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.878 250273 DEBUG oslo_concurrency.lockutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.878 250273 DEBUG oslo_concurrency.lockutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.879 250273 DEBUG nova.virt.libvirt.vif [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:12:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1844411829',display_name='tempest-ServerRescueNegativeTestJSON-server-1844411829',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1844411829',id=133,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a6ba16c4b9d49d3bc24cd7b44935d1f',ramdisk_id='',reservation_id='r-v1vp7v6p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-87224704',owner_u
ser_name='tempest-ServerRescueNegativeTestJSON-87224704-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:12:33Z,user_data=None,user_id='fae914e59ec54f6b80928ef3cc68dbdb',uuid=8056a321-13d3-4dd8-bb33-70c832c17ac1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5247b656-d92f-4246-8db1-32dd4ca770b1", "address": "fa:16:3e:7a:b9:0c", "network": {"id": "00bd3319-bfe5-4acd-b2e4-17830ee847f9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-921943436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a6ba16c4b9d49d3bc24cd7b44935d1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5247b656-d9", "ovs_interfaceid": "5247b656-d92f-4246-8db1-32dd4ca770b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.879 250273 DEBUG nova.network.os_vif_util [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Converting VIF {"id": "5247b656-d92f-4246-8db1-32dd4ca770b1", "address": "fa:16:3e:7a:b9:0c", "network": {"id": "00bd3319-bfe5-4acd-b2e4-17830ee847f9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-921943436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a6ba16c4b9d49d3bc24cd7b44935d1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5247b656-d9", "ovs_interfaceid": "5247b656-d92f-4246-8db1-32dd4ca770b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.879 250273 DEBUG nova.network.os_vif_util [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:b9:0c,bridge_name='br-int',has_traffic_filtering=True,id=5247b656-d92f-4246-8db1-32dd4ca770b1,network=Network(00bd3319-bfe5-4acd-b2e4-17830ee847f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5247b656-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.880 250273 DEBUG os_vif [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:b9:0c,bridge_name='br-int',has_traffic_filtering=True,id=5247b656-d92f-4246-8db1-32dd4ca770b1,network=Network(00bd3319-bfe5-4acd-b2e4-17830ee847f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5247b656-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.881 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.882 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.882 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.887 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.887 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5247b656-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.888 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5247b656-d9, col_values=(('external_ids', {'iface-id': '5247b656-d92f-4246-8db1-32dd4ca770b1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:b9:0c', 'vm-uuid': '8056a321-13d3-4dd8-bb33-70c832c17ac1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.889 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:39 np0005593232 NetworkManager[49057]: <info>  [1769163159.8905] manager: (tap5247b656-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.891 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.898 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.900 250273 INFO os_vif [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:b9:0c,bridge_name='br-int',has_traffic_filtering=True,id=5247b656-d92f-4246-8db1-32dd4ca770b1,network=Network(00bd3319-bfe5-4acd-b2e4-17830ee847f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5247b656-d9')#033[00m
Jan 23 05:12:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:39.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.975 250273 DEBUG nova.virt.libvirt.driver [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.975 250273 DEBUG nova.virt.libvirt.driver [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.975 250273 DEBUG nova.virt.libvirt.driver [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] No VIF found with MAC fa:16:3e:7a:b9:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:12:39 np0005593232 nova_compute[250269]: 2026-01-23 10:12:39.976 250273 INFO nova.virt.libvirt.driver [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Using config drive#033[00m
Jan 23 05:12:40 np0005593232 nova_compute[250269]: 2026-01-23 10:12:40.011 250273 DEBUG nova.storage.rbd_utils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] rbd image 8056a321-13d3-4dd8-bb33-70c832c17ac1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:12:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:12:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:12:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:12:40 np0005593232 nova_compute[250269]: 2026-01-23 10:12:40.660 250273 DEBUG nova.network.neutron [req-8d460139-6d7b-4a0b-b80c-83d2b7e26bac req-58e9453c-39fc-4399-9f1f-0d3e084b61c1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Updated VIF entry in instance network info cache for port 5247b656-d92f-4246-8db1-32dd4ca770b1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:12:40 np0005593232 nova_compute[250269]: 2026-01-23 10:12:40.661 250273 DEBUG nova.network.neutron [req-8d460139-6d7b-4a0b-b80c-83d2b7e26bac req-58e9453c-39fc-4399-9f1f-0d3e084b61c1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Updating instance_info_cache with network_info: [{"id": "5247b656-d92f-4246-8db1-32dd4ca770b1", "address": "fa:16:3e:7a:b9:0c", "network": {"id": "00bd3319-bfe5-4acd-b2e4-17830ee847f9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-921943436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a6ba16c4b9d49d3bc24cd7b44935d1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5247b656-d9", "ovs_interfaceid": "5247b656-d92f-4246-8db1-32dd4ca770b1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:12:40 np0005593232 nova_compute[250269]: 2026-01-23 10:12:40.679 250273 DEBUG oslo_concurrency.lockutils [req-8d460139-6d7b-4a0b-b80c-83d2b7e26bac req-58e9453c-39fc-4399-9f1f-0d3e084b61c1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-8056a321-13d3-4dd8-bb33-70c832c17ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:12:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:40.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:40 np0005593232 nova_compute[250269]: 2026-01-23 10:12:40.867 250273 INFO nova.virt.libvirt.driver [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Creating config drive at /var/lib/nova/instances/8056a321-13d3-4dd8-bb33-70c832c17ac1/disk.config#033[00m
Jan 23 05:12:40 np0005593232 nova_compute[250269]: 2026-01-23 10:12:40.872 250273 DEBUG oslo_concurrency.processutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8056a321-13d3-4dd8-bb33-70c832c17ac1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk8ro9pyt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:12:41 np0005593232 nova_compute[250269]: 2026-01-23 10:12:41.034 250273 DEBUG oslo_concurrency.processutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8056a321-13d3-4dd8-bb33-70c832c17ac1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk8ro9pyt" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:12:41 np0005593232 nova_compute[250269]: 2026-01-23 10:12:41.075 250273 DEBUG nova.storage.rbd_utils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] rbd image 8056a321-13d3-4dd8-bb33-70c832c17ac1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:12:41 np0005593232 nova_compute[250269]: 2026-01-23 10:12:41.080 250273 DEBUG oslo_concurrency.processutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8056a321-13d3-4dd8-bb33-70c832c17ac1/disk.config 8056a321-13d3-4dd8-bb33-70c832c17ac1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:12:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2479: 321 pgs: 321 active+clean; 243 MiB data, 1.0 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 123 op/s
Jan 23 05:12:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:12:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:12:41 np0005593232 nova_compute[250269]: 2026-01-23 10:12:41.857 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:41 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:12:41 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:12:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:12:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:41.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:12:42 np0005593232 nova_compute[250269]: 2026-01-23 10:12:42.096 250273 DEBUG oslo_concurrency.processutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8056a321-13d3-4dd8-bb33-70c832c17ac1/disk.config 8056a321-13d3-4dd8-bb33-70c832c17ac1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:12:42 np0005593232 nova_compute[250269]: 2026-01-23 10:12:42.097 250273 INFO nova.virt.libvirt.driver [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Deleting local config drive /var/lib/nova/instances/8056a321-13d3-4dd8-bb33-70c832c17ac1/disk.config because it was imported into RBD.#033[00m
Jan 23 05:12:42 np0005593232 kernel: tap5247b656-d9: entered promiscuous mode
Jan 23 05:12:42 np0005593232 NetworkManager[49057]: <info>  [1769163162.1714] manager: (tap5247b656-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/221)
Jan 23 05:12:42 np0005593232 ovn_controller[151001]: 2026-01-23T10:12:42Z|00459|binding|INFO|Claiming lport 5247b656-d92f-4246-8db1-32dd4ca770b1 for this chassis.
Jan 23 05:12:42 np0005593232 ovn_controller[151001]: 2026-01-23T10:12:42Z|00460|binding|INFO|5247b656-d92f-4246-8db1-32dd4ca770b1: Claiming fa:16:3e:7a:b9:0c 10.100.0.4
Jan 23 05:12:42 np0005593232 nova_compute[250269]: 2026-01-23 10:12:42.172 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:42 np0005593232 ovn_controller[151001]: 2026-01-23T10:12:42Z|00461|binding|INFO|Setting lport 5247b656-d92f-4246-8db1-32dd4ca770b1 ovn-installed in OVS
Jan 23 05:12:42 np0005593232 nova_compute[250269]: 2026-01-23 10:12:42.192 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:42 np0005593232 nova_compute[250269]: 2026-01-23 10:12:42.194 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:42 np0005593232 systemd-udevd[336376]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:12:42 np0005593232 systemd-machined[215836]: New machine qemu-56-instance-00000085.
Jan 23 05:12:42 np0005593232 NetworkManager[49057]: <info>  [1769163162.2237] device (tap5247b656-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:12:42 np0005593232 NetworkManager[49057]: <info>  [1769163162.2242] device (tap5247b656-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:12:42 np0005593232 systemd[1]: Started Virtual Machine qemu-56-instance-00000085.
Jan 23 05:12:42 np0005593232 ovn_controller[151001]: 2026-01-23T10:12:42Z|00462|binding|INFO|Setting lport 5247b656-d92f-4246-8db1-32dd4ca770b1 up in Southbound
Jan 23 05:12:42 np0005593232 nova_compute[250269]: 2026-01-23 10:12:42.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:12:42 np0005593232 nova_compute[250269]: 2026-01-23 10:12:42.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:12:42 np0005593232 nova_compute[250269]: 2026-01-23 10:12:42.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.291 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:b9:0c 10.100.0.4'], port_security=['fa:16:3e:7a:b9:0c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '8056a321-13d3-4dd8-bb33-70c832c17ac1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00bd3319-bfe5-4acd-b2e4-17830ee847f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a6ba16c4b9d49d3bc24cd7b44935d1f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6fc0d424-7779-4175-b5e0-e2613de6ecef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5fb685af-2efd-4d70-8868-8a86ed4c3ca6, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=5247b656-d92f-4246-8db1-32dd4ca770b1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.295 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 5247b656-d92f-4246-8db1-32dd4ca770b1 in datapath 00bd3319-bfe5-4acd-b2e4-17830ee847f9 bound to our chassis#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.297 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00bd3319-bfe5-4acd-b2e4-17830ee847f9#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.310 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0bd131d2-a157-42f8-bf75-694b62efe321]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.311 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap00bd3319-b1 in ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.315 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap00bd3319-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.315 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[19008f26-f920-44fc-a234-1d61af81928b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.316 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[dd566731-af1d-4faf-878b-d7dc0a8f4827]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.338 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[22526c10-0da6-4225-bf75-61f2013876b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.363 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[56e16b75-cd50-4bad-8459-67d640decbfa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:12:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:12:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:12:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:12:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:12:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:12:42 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 4d2cb69e-deeb-4498-ae64-cc71212d3dc8 does not exist
Jan 23 05:12:42 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 136600bc-5c38-4343-bb17-8702b88f617c does not exist
Jan 23 05:12:42 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 88d008cd-17c4-4f31-a0bd-16a43e2645d1 does not exist
Jan 23 05:12:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:12:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:12:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:12:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:12:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:12:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.411 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[135ac984-2ac9-4286-bfe7-218f39810c22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:42 np0005593232 systemd-udevd[336378]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:12:42 np0005593232 NetworkManager[49057]: <info>  [1769163162.4199] manager: (tap00bd3319-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/222)
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.421 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7a489094-06f0-48cc-bd67-ef52f7894d5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.467 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[33050e47-80af-4b49-be95-b8808fa52e51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.470 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[9236c683-0d11-471b-adbb-6f5ff0f62fce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:42 np0005593232 NetworkManager[49057]: <info>  [1769163162.5056] device (tap00bd3319-b0): carrier: link connected
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.513 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[cab657e5-f2b4-4923-a2ae-172c91837140]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.535 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3e827973-5fd3-4c11-b965-f345163209e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00bd3319-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:83:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707564, 'reachable_time': 28410, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336458, 'error': None, 'target': 'ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.554 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[cad46a9b-976f-43f0-b62a-7003cd5e83db]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6b:83f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 707564, 'tstamp': 707564}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336470, 'error': None, 'target': 'ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.578 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[97d12bb5-1441-46fc-97fd-f0acf6b36214]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00bd3319-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:83:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 140], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707564, 'reachable_time': 28410, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 336473, 'error': None, 'target': 'ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.614 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4596aac3-ede4-4ee9-ae6d-5fe547f63286]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.626 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.627 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.628 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:12:42 np0005593232 nova_compute[250269]: 2026-01-23 10:12:42.666 250273 DEBUG nova.compute.manager [req-14cc49e9-dc7b-4e12-92ee-dbab50f40ca0 req-39dbfad8-2c0e-4ce6-8246-4c8f966fdbe7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received event network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:12:42 np0005593232 nova_compute[250269]: 2026-01-23 10:12:42.666 250273 DEBUG oslo_concurrency.lockutils [req-14cc49e9-dc7b-4e12-92ee-dbab50f40ca0 req-39dbfad8-2c0e-4ce6-8246-4c8f966fdbe7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:12:42 np0005593232 nova_compute[250269]: 2026-01-23 10:12:42.667 250273 DEBUG oslo_concurrency.lockutils [req-14cc49e9-dc7b-4e12-92ee-dbab50f40ca0 req-39dbfad8-2c0e-4ce6-8246-4c8f966fdbe7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:12:42 np0005593232 nova_compute[250269]: 2026-01-23 10:12:42.667 250273 DEBUG oslo_concurrency.lockutils [req-14cc49e9-dc7b-4e12-92ee-dbab50f40ca0 req-39dbfad8-2c0e-4ce6-8246-4c8f966fdbe7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:12:42 np0005593232 nova_compute[250269]: 2026-01-23 10:12:42.667 250273 DEBUG nova.compute.manager [req-14cc49e9-dc7b-4e12-92ee-dbab50f40ca0 req-39dbfad8-2c0e-4ce6-8246-4c8f966fdbe7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Processing event network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.694 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[26be0112-4283-41d8-a288-02d9cefab4ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.697 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00bd3319-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.698 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.698 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00bd3319-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:12:42 np0005593232 NetworkManager[49057]: <info>  [1769163162.7016] manager: (tap00bd3319-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/223)
Jan 23 05:12:42 np0005593232 nova_compute[250269]: 2026-01-23 10:12:42.701 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:42 np0005593232 kernel: tap00bd3319-b0: entered promiscuous mode
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.712 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00bd3319-b0, col_values=(('external_ids', {'iface-id': '1788b5e6-601b-4e3d-a584-c0138c3308f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:12:42 np0005593232 nova_compute[250269]: 2026-01-23 10:12:42.714 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:42 np0005593232 ovn_controller[151001]: 2026-01-23T10:12:42Z|00463|binding|INFO|Releasing lport 1788b5e6-601b-4e3d-a584-c0138c3308f6 from this chassis (sb_readonly=0)
Jan 23 05:12:42 np0005593232 nova_compute[250269]: 2026-01-23 10:12:42.715 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:42 np0005593232 nova_compute[250269]: 2026-01-23 10:12:42.731 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.732 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/00bd3319-bfe5-4acd-b2e4-17830ee847f9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/00bd3319-bfe5-4acd-b2e4-17830ee847f9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.733 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ab336e8e-481e-44c7-add9-a1c28b82dac5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.734 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-00bd3319-bfe5-4acd-b2e4-17830ee847f9
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/00bd3319-bfe5-4acd-b2e4-17830ee847f9.pid.haproxy
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 00bd3319-bfe5-4acd-b2e4-17830ee847f9
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:12:42.735 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9', 'env', 'PROCESS_TAG=haproxy-00bd3319-bfe5-4acd-b2e4-17830ee847f9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/00bd3319-bfe5-4acd-b2e4-17830ee847f9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:12:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:12:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:42.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:12:43 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:12:43 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:12:43 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:12:43 np0005593232 podman[336609]: 2026-01-23 10:12:43.047179905 +0000 UTC m=+0.043873527 container create b988d72402048094cf71b3be11b618fddf8e425aab8ebb10c9fa6ac455ac1e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hawking, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 05:12:43 np0005593232 systemd[1]: Started libpod-conmon-b988d72402048094cf71b3be11b618fddf8e425aab8ebb10c9fa6ac455ac1e42.scope.
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.086 250273 DEBUG nova.compute.manager [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.087 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163163.0854478, 8056a321-13d3-4dd8-bb33-70c832c17ac1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.087 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] VM Started (Lifecycle Event)#033[00m
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.092 250273 DEBUG nova.virt.libvirt.driver [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.096 250273 INFO nova.virt.libvirt.driver [-] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Instance spawned successfully.#033[00m
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.096 250273 DEBUG nova.virt.libvirt.driver [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.115 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.120 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:12:43 np0005593232 podman[336609]: 2026-01-23 10:12:43.024609474 +0000 UTC m=+0.021303116 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:12:43 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.132 250273 DEBUG nova.virt.libvirt.driver [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.134 250273 DEBUG nova.virt.libvirt.driver [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.135 250273 DEBUG nova.virt.libvirt.driver [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.135 250273 DEBUG nova.virt.libvirt.driver [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.136 250273 DEBUG nova.virt.libvirt.driver [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.136 250273 DEBUG nova.virt.libvirt.driver [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:12:43 np0005593232 podman[336609]: 2026-01-23 10:12:43.142144674 +0000 UTC m=+0.138838316 container init b988d72402048094cf71b3be11b618fddf8e425aab8ebb10c9fa6ac455ac1e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:12:43 np0005593232 podman[336609]: 2026-01-23 10:12:43.150835131 +0000 UTC m=+0.147528753 container start b988d72402048094cf71b3be11b618fddf8e425aab8ebb10c9fa6ac455ac1e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hawking, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 05:12:43 np0005593232 podman[336609]: 2026-01-23 10:12:43.154607768 +0000 UTC m=+0.151301390 container attach b988d72402048094cf71b3be11b618fddf8e425aab8ebb10c9fa6ac455ac1e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hawking, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:12:43 np0005593232 vibrant_hawking[336643]: 167 167
Jan 23 05:12:43 np0005593232 systemd[1]: libpod-b988d72402048094cf71b3be11b618fddf8e425aab8ebb10c9fa6ac455ac1e42.scope: Deactivated successfully.
Jan 23 05:12:43 np0005593232 conmon[336643]: conmon b988d72402048094cf71 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b988d72402048094cf71b3be11b618fddf8e425aab8ebb10c9fa6ac455ac1e42.scope/container/memory.events
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.162 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.163 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163163.0863047, 8056a321-13d3-4dd8-bb33-70c832c17ac1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.164 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:12:43 np0005593232 podman[336651]: 2026-01-23 10:12:43.187528263 +0000 UTC m=+0.056272820 container create f594ece9e5ad3b2c3c211a432faded67887142428eccbee722fcfd9458fd8cb6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.197 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.210 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163163.0912483, 8056a321-13d3-4dd8-bb33-70c832c17ac1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:12:43 np0005593232 podman[336666]: 2026-01-23 10:12:43.210867166 +0000 UTC m=+0.031261719 container died b988d72402048094cf71b3be11b618fddf8e425aab8ebb10c9fa6ac455ac1e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hawking, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.210 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.236 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:12:43 np0005593232 systemd[1]: Started libpod-conmon-f594ece9e5ad3b2c3c211a432faded67887142428eccbee722fcfd9458fd8cb6.scope.
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.242 250273 INFO nova.compute.manager [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Took 9.41 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.243 250273 DEBUG nova.compute.manager [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.245 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:12:43 np0005593232 systemd[1]: var-lib-containers-storage-overlay-846bb11a8e84a17593264456d8089d70a84dd09bd39657d264d818f366c7102a-merged.mount: Deactivated successfully.
Jan 23 05:12:43 np0005593232 podman[336651]: 2026-01-23 10:12:43.156823031 +0000 UTC m=+0.025567618 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:12:43 np0005593232 podman[336666]: 2026-01-23 10:12:43.268269917 +0000 UTC m=+0.088664460 container remove b988d72402048094cf71b3be11b618fddf8e425aab8ebb10c9fa6ac455ac1e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_hawking, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 05:12:43 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.274 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:12:43 np0005593232 systemd[1]: libpod-conmon-b988d72402048094cf71b3be11b618fddf8e425aab8ebb10c9fa6ac455ac1e42.scope: Deactivated successfully.
Jan 23 05:12:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce0d6928129688ea86fc5ce48defcae04f495f03b04e2de2d2eb6a3286a5a9f1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:12:43 np0005593232 podman[336651]: 2026-01-23 10:12:43.296277143 +0000 UTC m=+0.165021700 container init f594ece9e5ad3b2c3c211a432faded67887142428eccbee722fcfd9458fd8cb6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:12:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2480: 321 pgs: 321 active+clean; 284 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.4 MiB/s wr, 140 op/s
Jan 23 05:12:43 np0005593232 podman[336651]: 2026-01-23 10:12:43.302529711 +0000 UTC m=+0.171274268 container start f594ece9e5ad3b2c3c211a432faded67887142428eccbee722fcfd9458fd8cb6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:12:43 np0005593232 neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9[336683]: [NOTICE]   (336689) : New worker (336691) forked
Jan 23 05:12:43 np0005593232 neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9[336683]: [NOTICE]   (336689) : Loading success.
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.340 250273 INFO nova.compute.manager [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Took 10.55 seconds to build instance.#033[00m
Jan 23 05:12:43 np0005593232 nova_compute[250269]: 2026-01-23 10:12:43.366 250273 DEBUG oslo_concurrency.lockutils [None req-e5831dd0-4125-4a4d-8c26-b6719db7f80a fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:12:43 np0005593232 podman[336705]: 2026-01-23 10:12:43.462858347 +0000 UTC m=+0.045630318 container create cf21c9e47de71d3b4a1a5cfe7bd3bd727933f33c4f898c1a5b1158cc2c1fd29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 23 05:12:43 np0005593232 systemd[1]: Started libpod-conmon-cf21c9e47de71d3b4a1a5cfe7bd3bd727933f33c4f898c1a5b1158cc2c1fd29b.scope.
Jan 23 05:12:43 np0005593232 podman[336705]: 2026-01-23 10:12:43.44677683 +0000 UTC m=+0.029548821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:12:43 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:12:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b9b8186fd5635a5b8e07050cd7eb3a6b940f28c73fb7de2fd475b0073487916/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:12:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b9b8186fd5635a5b8e07050cd7eb3a6b940f28c73fb7de2fd475b0073487916/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:12:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b9b8186fd5635a5b8e07050cd7eb3a6b940f28c73fb7de2fd475b0073487916/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:12:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b9b8186fd5635a5b8e07050cd7eb3a6b940f28c73fb7de2fd475b0073487916/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:12:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b9b8186fd5635a5b8e07050cd7eb3a6b940f28c73fb7de2fd475b0073487916/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:12:43 np0005593232 podman[336705]: 2026-01-23 10:12:43.574271322 +0000 UTC m=+0.157043303 container init cf21c9e47de71d3b4a1a5cfe7bd3bd727933f33c4f898c1a5b1158cc2c1fd29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hawking, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 05:12:43 np0005593232 podman[336705]: 2026-01-23 10:12:43.585519962 +0000 UTC m=+0.168291933 container start cf21c9e47de71d3b4a1a5cfe7bd3bd727933f33c4f898c1a5b1158cc2c1fd29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hawking, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:12:43 np0005593232 podman[336705]: 2026-01-23 10:12:43.58827798 +0000 UTC m=+0.171049981 container attach cf21c9e47de71d3b4a1a5cfe7bd3bd727933f33c4f898c1a5b1158cc2c1fd29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hawking, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:12:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:43.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:44 np0005593232 flamboyant_hawking[336721]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:12:44 np0005593232 flamboyant_hawking[336721]: --> relative data size: 1.0
Jan 23 05:12:44 np0005593232 flamboyant_hawking[336721]: --> All data devices are unavailable
Jan 23 05:12:44 np0005593232 systemd[1]: libpod-cf21c9e47de71d3b4a1a5cfe7bd3bd727933f33c4f898c1a5b1158cc2c1fd29b.scope: Deactivated successfully.
Jan 23 05:12:44 np0005593232 podman[336705]: 2026-01-23 10:12:44.572402764 +0000 UTC m=+1.155174735 container died cf21c9e47de71d3b4a1a5cfe7bd3bd727933f33c4f898c1a5b1158cc2c1fd29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hawking, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 23 05:12:44 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4b9b8186fd5635a5b8e07050cd7eb3a6b940f28c73fb7de2fd475b0073487916-merged.mount: Deactivated successfully.
Jan 23 05:12:44 np0005593232 podman[336705]: 2026-01-23 10:12:44.683389088 +0000 UTC m=+1.266161059 container remove cf21c9e47de71d3b4a1a5cfe7bd3bd727933f33c4f898c1a5b1158cc2c1fd29b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_hawking, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:12:44 np0005593232 systemd[1]: libpod-conmon-cf21c9e47de71d3b4a1a5cfe7bd3bd727933f33c4f898c1a5b1158cc2c1fd29b.scope: Deactivated successfully.
Jan 23 05:12:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:44.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:44 np0005593232 nova_compute[250269]: 2026-01-23 10:12:44.799 250273 DEBUG nova.compute.manager [req-678979e3-08f8-48ac-b058-57d8d9f79c5b req-bc1c947f-b858-42a8-a226-95ff9f570c43 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received event network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:12:44 np0005593232 nova_compute[250269]: 2026-01-23 10:12:44.800 250273 DEBUG oslo_concurrency.lockutils [req-678979e3-08f8-48ac-b058-57d8d9f79c5b req-bc1c947f-b858-42a8-a226-95ff9f570c43 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:12:44 np0005593232 nova_compute[250269]: 2026-01-23 10:12:44.800 250273 DEBUG oslo_concurrency.lockutils [req-678979e3-08f8-48ac-b058-57d8d9f79c5b req-bc1c947f-b858-42a8-a226-95ff9f570c43 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:12:44 np0005593232 nova_compute[250269]: 2026-01-23 10:12:44.801 250273 DEBUG oslo_concurrency.lockutils [req-678979e3-08f8-48ac-b058-57d8d9f79c5b req-bc1c947f-b858-42a8-a226-95ff9f570c43 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:12:44 np0005593232 nova_compute[250269]: 2026-01-23 10:12:44.801 250273 DEBUG nova.compute.manager [req-678979e3-08f8-48ac-b058-57d8d9f79c5b req-bc1c947f-b858-42a8-a226-95ff9f570c43 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] No waiting events found dispatching network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:12:44 np0005593232 nova_compute[250269]: 2026-01-23 10:12:44.801 250273 WARNING nova.compute.manager [req-678979e3-08f8-48ac-b058-57d8d9f79c5b req-bc1c947f-b858-42a8-a226-95ff9f570c43 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received unexpected event network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:12:44 np0005593232 nova_compute[250269]: 2026-01-23 10:12:44.890 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:45 np0005593232 nova_compute[250269]: 2026-01-23 10:12:45.149 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2481: 321 pgs: 321 active+clean; 306 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.3 MiB/s wr, 172 op/s
Jan 23 05:12:45 np0005593232 podman[336890]: 2026-01-23 10:12:45.304818765 +0000 UTC m=+0.023777406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:12:45 np0005593232 podman[336890]: 2026-01-23 10:12:45.445027609 +0000 UTC m=+0.163986240 container create db0648222b18773374d8f055056e029c22f933918ef596550c84186767b0df4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noyce, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 23 05:12:45 np0005593232 systemd[1]: Started libpod-conmon-db0648222b18773374d8f055056e029c22f933918ef596550c84186767b0df4e.scope.
Jan 23 05:12:45 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:12:45 np0005593232 podman[336890]: 2026-01-23 10:12:45.776401285 +0000 UTC m=+0.495359956 container init db0648222b18773374d8f055056e029c22f933918ef596550c84186767b0df4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:12:45 np0005593232 podman[336890]: 2026-01-23 10:12:45.78360699 +0000 UTC m=+0.502565611 container start db0648222b18773374d8f055056e029c22f933918ef596550c84186767b0df4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 05:12:45 np0005593232 focused_noyce[336906]: 167 167
Jan 23 05:12:45 np0005593232 systemd[1]: libpod-db0648222b18773374d8f055056e029c22f933918ef596550c84186767b0df4e.scope: Deactivated successfully.
Jan 23 05:12:45 np0005593232 podman[336890]: 2026-01-23 10:12:45.789155458 +0000 UTC m=+0.508114089 container attach db0648222b18773374d8f055056e029c22f933918ef596550c84186767b0df4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 23 05:12:45 np0005593232 podman[336890]: 2026-01-23 10:12:45.789626071 +0000 UTC m=+0.508584702 container died db0648222b18773374d8f055056e029c22f933918ef596550c84186767b0df4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 05:12:45 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c7df5e88c7b1315a89270c8604cffc2f69a736f26387384b86ec9b2b3b27b752-merged.mount: Deactivated successfully.
Jan 23 05:12:45 np0005593232 podman[336890]: 2026-01-23 10:12:45.836269186 +0000 UTC m=+0.555227807 container remove db0648222b18773374d8f055056e029c22f933918ef596550c84186767b0df4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 05:12:45 np0005593232 systemd[1]: libpod-conmon-db0648222b18773374d8f055056e029c22f933918ef596550c84186767b0df4e.scope: Deactivated successfully.
Jan 23 05:12:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:12:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:45.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:12:46 np0005593232 podman[336933]: 2026-01-23 10:12:45.981268547 +0000 UTC m=+0.023638413 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:12:46 np0005593232 podman[336933]: 2026-01-23 10:12:46.17531409 +0000 UTC m=+0.217683926 container create ff9e640ad6ffd9bb74843acb3cd880dd00b3801b727c7bd1622c17d666e5342e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cartwright, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:12:46 np0005593232 systemd[1]: Started libpod-conmon-ff9e640ad6ffd9bb74843acb3cd880dd00b3801b727c7bd1622c17d666e5342e.scope.
Jan 23 05:12:46 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:12:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ee37b18f56ef67720f0d60cf30e7e613299b7e03988aad80adb98896290deca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:12:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ee37b18f56ef67720f0d60cf30e7e613299b7e03988aad80adb98896290deca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:12:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ee37b18f56ef67720f0d60cf30e7e613299b7e03988aad80adb98896290deca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:12:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ee37b18f56ef67720f0d60cf30e7e613299b7e03988aad80adb98896290deca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:12:46 np0005593232 podman[336933]: 2026-01-23 10:12:46.745405068 +0000 UTC m=+0.787774944 container init ff9e640ad6ffd9bb74843acb3cd880dd00b3801b727c7bd1622c17d666e5342e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cartwright, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:12:46 np0005593232 podman[336933]: 2026-01-23 10:12:46.751758419 +0000 UTC m=+0.794128275 container start ff9e640ad6ffd9bb74843acb3cd880dd00b3801b727c7bd1622c17d666e5342e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cartwright, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:12:46 np0005593232 podman[336933]: 2026-01-23 10:12:46.755732732 +0000 UTC m=+0.798102588 container attach ff9e640ad6ffd9bb74843acb3cd880dd00b3801b727c7bd1622c17d666e5342e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:12:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:12:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:46.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:46 np0005593232 nova_compute[250269]: 2026-01-23 10:12:46.860 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:12:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:12:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:12:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:12:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0061381851386562445 of space, bias 1.0, pg target 1.8414555415968734 quantized to 32 (current 32)
Jan 23 05:12:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:12:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Jan 23 05:12:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:12:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:12:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:12:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 23 05:12:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:12:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 05:12:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:12:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:12:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:12:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 05:12:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:12:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 05:12:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:12:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:12:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:12:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 05:12:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2482: 321 pgs: 321 active+clean; 306 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 5.4 MiB/s wr, 202 op/s
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]: {
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:    "0": [
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:        {
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:            "devices": [
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:                "/dev/loop3"
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:            ],
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:            "lv_name": "ceph_lv0",
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:            "lv_size": "7511998464",
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:            "name": "ceph_lv0",
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:            "tags": {
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:                "ceph.cluster_name": "ceph",
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:                "ceph.crush_device_class": "",
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:                "ceph.encrypted": "0",
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:                "ceph.osd_id": "0",
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:                "ceph.type": "block",
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:                "ceph.vdo": "0"
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:            },
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:            "type": "block",
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:            "vg_name": "ceph_vg0"
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:        }
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]:    ]
Jan 23 05:12:47 np0005593232 goofy_cartwright[336950]: }
Jan 23 05:12:47 np0005593232 systemd[1]: libpod-ff9e640ad6ffd9bb74843acb3cd880dd00b3801b727c7bd1622c17d666e5342e.scope: Deactivated successfully.
Jan 23 05:12:47 np0005593232 podman[336933]: 2026-01-23 10:12:47.559401748 +0000 UTC m=+1.601771604 container died ff9e640ad6ffd9bb74843acb3cd880dd00b3801b727c7bd1622c17d666e5342e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 05:12:47 np0005593232 systemd[1]: var-lib-containers-storage-overlay-0ee37b18f56ef67720f0d60cf30e7e613299b7e03988aad80adb98896290deca-merged.mount: Deactivated successfully.
Jan 23 05:12:47 np0005593232 podman[336933]: 2026-01-23 10:12:47.617233761 +0000 UTC m=+1.659603597 container remove ff9e640ad6ffd9bb74843acb3cd880dd00b3801b727c7bd1622c17d666e5342e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 05:12:47 np0005593232 systemd[1]: libpod-conmon-ff9e640ad6ffd9bb74843acb3cd880dd00b3801b727c7bd1622c17d666e5342e.scope: Deactivated successfully.
Jan 23 05:12:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:12:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:47.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:12:48 np0005593232 podman[337112]: 2026-01-23 10:12:48.378507522 +0000 UTC m=+0.101953158 container create 2eeb46957c12bfda82e741869c80b013f5d80babb4dbeb3b24c78950d2f77433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_spence, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:12:48 np0005593232 podman[337112]: 2026-01-23 10:12:48.298655793 +0000 UTC m=+0.022101439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:12:48 np0005593232 systemd[1]: Started libpod-conmon-2eeb46957c12bfda82e741869c80b013f5d80babb4dbeb3b24c78950d2f77433.scope.
Jan 23 05:12:48 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:12:48 np0005593232 podman[337112]: 2026-01-23 10:12:48.460257415 +0000 UTC m=+0.183703061 container init 2eeb46957c12bfda82e741869c80b013f5d80babb4dbeb3b24c78950d2f77433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:12:48 np0005593232 zealous_spence[337128]: 167 167
Jan 23 05:12:48 np0005593232 systemd[1]: libpod-2eeb46957c12bfda82e741869c80b013f5d80babb4dbeb3b24c78950d2f77433.scope: Deactivated successfully.
Jan 23 05:12:48 np0005593232 podman[337112]: 2026-01-23 10:12:48.475021655 +0000 UTC m=+0.198467281 container start 2eeb46957c12bfda82e741869c80b013f5d80babb4dbeb3b24c78950d2f77433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Jan 23 05:12:48 np0005593232 podman[337112]: 2026-01-23 10:12:48.477992689 +0000 UTC m=+0.201438325 container attach 2eeb46957c12bfda82e741869c80b013f5d80babb4dbeb3b24c78950d2f77433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_spence, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 05:12:48 np0005593232 podman[337112]: 2026-01-23 10:12:48.478508504 +0000 UTC m=+0.201954130 container died 2eeb46957c12bfda82e741869c80b013f5d80babb4dbeb3b24c78950d2f77433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_spence, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 23 05:12:48 np0005593232 systemd[1]: var-lib-containers-storage-overlay-887a2c03fc068d70df44e9cc1ce6da8ec8e5a26ecf05733d685af0a020e669b1-merged.mount: Deactivated successfully.
Jan 23 05:12:48 np0005593232 podman[337112]: 2026-01-23 10:12:48.602829526 +0000 UTC m=+0.326275152 container remove 2eeb46957c12bfda82e741869c80b013f5d80babb4dbeb3b24c78950d2f77433 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_spence, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:12:48 np0005593232 systemd[1]: libpod-conmon-2eeb46957c12bfda82e741869c80b013f5d80babb4dbeb3b24c78950d2f77433.scope: Deactivated successfully.
Jan 23 05:12:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:48.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:48 np0005593232 podman[337194]: 2026-01-23 10:12:48.797304762 +0000 UTC m=+0.026886855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:12:48 np0005593232 podman[337194]: 2026-01-23 10:12:48.90035785 +0000 UTC m=+0.129939923 container create a75856800075bb638fdcbca3dac193d871645369bb14ce4a7cb823d454de40ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hugle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 05:12:48 np0005593232 systemd[1]: Started libpod-conmon-a75856800075bb638fdcbca3dac193d871645369bb14ce4a7cb823d454de40ba.scope.
Jan 23 05:12:48 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:12:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/995ca948eea1ff75d5db9bbcccf9129cd337549e071cf276d4d273c39fd3501b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:12:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/995ca948eea1ff75d5db9bbcccf9129cd337549e071cf276d4d273c39fd3501b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:12:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/995ca948eea1ff75d5db9bbcccf9129cd337549e071cf276d4d273c39fd3501b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:12:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/995ca948eea1ff75d5db9bbcccf9129cd337549e071cf276d4d273c39fd3501b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:12:49 np0005593232 podman[337194]: 2026-01-23 10:12:49.038699381 +0000 UTC m=+0.268281474 container init a75856800075bb638fdcbca3dac193d871645369bb14ce4a7cb823d454de40ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:12:49 np0005593232 podman[337194]: 2026-01-23 10:12:49.0470962 +0000 UTC m=+0.276678273 container start a75856800075bb638fdcbca3dac193d871645369bb14ce4a7cb823d454de40ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hugle, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 23 05:12:49 np0005593232 podman[337194]: 2026-01-23 10:12:49.054098329 +0000 UTC m=+0.283680402 container attach a75856800075bb638fdcbca3dac193d871645369bb14ce4a7cb823d454de40ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hugle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Jan 23 05:12:49 np0005593232 nova_compute[250269]: 2026-01-23 10:12:49.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:12:49 np0005593232 nova_compute[250269]: 2026-01-23 10:12:49.295 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:12:49 np0005593232 nova_compute[250269]: 2026-01-23 10:12:49.295 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:12:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2483: 321 pgs: 321 active+clean; 312 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 5.1 MiB/s wr, 233 op/s
Jan 23 05:12:49 np0005593232 nova_compute[250269]: 2026-01-23 10:12:49.892 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:12:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:49.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:12:50 np0005593232 awesome_hugle[337219]: {
Jan 23 05:12:50 np0005593232 awesome_hugle[337219]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:12:50 np0005593232 awesome_hugle[337219]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:12:50 np0005593232 awesome_hugle[337219]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:12:50 np0005593232 awesome_hugle[337219]:        "osd_id": 0,
Jan 23 05:12:50 np0005593232 awesome_hugle[337219]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:12:50 np0005593232 awesome_hugle[337219]:        "type": "bluestore"
Jan 23 05:12:50 np0005593232 awesome_hugle[337219]:    }
Jan 23 05:12:50 np0005593232 awesome_hugle[337219]: }
Jan 23 05:12:50 np0005593232 systemd[1]: libpod-a75856800075bb638fdcbca3dac193d871645369bb14ce4a7cb823d454de40ba.scope: Deactivated successfully.
Jan 23 05:12:50 np0005593232 podman[337194]: 2026-01-23 10:12:50.041306409 +0000 UTC m=+1.270888482 container died a75856800075bb638fdcbca3dac193d871645369bb14ce4a7cb823d454de40ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hugle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:12:50 np0005593232 systemd[1]: var-lib-containers-storage-overlay-995ca948eea1ff75d5db9bbcccf9129cd337549e071cf276d4d273c39fd3501b-merged.mount: Deactivated successfully.
Jan 23 05:12:50 np0005593232 podman[337194]: 2026-01-23 10:12:50.110939268 +0000 UTC m=+1.340521361 container remove a75856800075bb638fdcbca3dac193d871645369bb14ce4a7cb823d454de40ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 23 05:12:50 np0005593232 nova_compute[250269]: 2026-01-23 10:12:50.113 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-8056a321-13d3-4dd8-bb33-70c832c17ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:12:50 np0005593232 nova_compute[250269]: 2026-01-23 10:12:50.114 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-8056a321-13d3-4dd8-bb33-70c832c17ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:12:50 np0005593232 nova_compute[250269]: 2026-01-23 10:12:50.114 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:12:50 np0005593232 nova_compute[250269]: 2026-01-23 10:12:50.114 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8056a321-13d3-4dd8-bb33-70c832c17ac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:12:50 np0005593232 systemd[1]: libpod-conmon-a75856800075bb638fdcbca3dac193d871645369bb14ce4a7cb823d454de40ba.scope: Deactivated successfully.
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:12:50.170247) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163170170324, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 926, "num_deletes": 252, "total_data_size": 1269759, "memory_usage": 1297104, "flush_reason": "Manual Compaction"}
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163170183199, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 1255303, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54554, "largest_seqno": 55479, "table_properties": {"data_size": 1250773, "index_size": 2118, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10853, "raw_average_key_size": 20, "raw_value_size": 1241281, "raw_average_value_size": 2311, "num_data_blocks": 93, "num_entries": 537, "num_filter_entries": 537, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769163102, "oldest_key_time": 1769163102, "file_creation_time": 1769163170, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 12981 microseconds, and 5099 cpu microseconds.
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:12:50.183232) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 1255303 bytes OK
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:12:50.183248) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:12:50.185554) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:12:50.185610) EVENT_LOG_v1 {"time_micros": 1769163170185599, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:12:50.185637) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 1265233, prev total WAL file size 1286030, number of live WAL files 2.
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:12:50.186882) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1225KB)], [122(12MB)]
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163170186912, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 13992168, "oldest_snapshot_seqno": -1}
Jan 23 05:12:50 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 64b33544-1c66-45b5-89c7-7681e2e522b7 does not exist
Jan 23 05:12:50 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev cc00003a-17dc-4f81-a602-d28af58eb394 does not exist
Jan 23 05:12:50 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev f2057981-d71b-4e19-85c5-2e2863ddd052 does not exist
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 8166 keys, 12127989 bytes, temperature: kUnknown
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163170295675, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 12127989, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12073692, "index_size": 32809, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20421, "raw_key_size": 212330, "raw_average_key_size": 26, "raw_value_size": 11928581, "raw_average_value_size": 1460, "num_data_blocks": 1287, "num_entries": 8166, "num_filter_entries": 8166, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769163170, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:12:50.296706) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 12127989 bytes
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:12:50.298033) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.6 rd, 110.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 12.1 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(20.8) write-amplify(9.7) OK, records in: 8687, records dropped: 521 output_compression: NoCompression
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:12:50.298051) EVENT_LOG_v1 {"time_micros": 1769163170298044, "job": 74, "event": "compaction_finished", "compaction_time_micros": 109625, "compaction_time_cpu_micros": 41849, "output_level": 6, "num_output_files": 1, "total_output_size": 12127989, "num_input_records": 8687, "num_output_records": 8166, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163170298507, "job": 74, "event": "table_file_deletion", "file_number": 124}
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163170300507, "job": 74, "event": "table_file_deletion", "file_number": 122}
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:12:50.186754) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:12:50.300635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:12:50.300643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:12:50.300646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:12:50.300648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:12:50 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:12:50.300649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:12:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:50.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:51 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:12:51 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:12:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2484: 321 pgs: 321 active+clean; 312 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 150 op/s
Jan 23 05:12:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:12:51 np0005593232 nova_compute[250269]: 2026-01-23 10:12:51.862 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:12:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:51.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:12:52 np0005593232 nova_compute[250269]: 2026-01-23 10:12:52.565 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Updating instance_info_cache with network_info: [{"id": "5247b656-d92f-4246-8db1-32dd4ca770b1", "address": "fa:16:3e:7a:b9:0c", "network": {"id": "00bd3319-bfe5-4acd-b2e4-17830ee847f9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-921943436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a6ba16c4b9d49d3bc24cd7b44935d1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5247b656-d9", "ovs_interfaceid": "5247b656-d92f-4246-8db1-32dd4ca770b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:12:52 np0005593232 nova_compute[250269]: 2026-01-23 10:12:52.591 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-8056a321-13d3-4dd8-bb33-70c832c17ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:12:52 np0005593232 nova_compute[250269]: 2026-01-23 10:12:52.592 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:12:52 np0005593232 nova_compute[250269]: 2026-01-23 10:12:52.593 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:12:52 np0005593232 nova_compute[250269]: 2026-01-23 10:12:52.593 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:12:52 np0005593232 nova_compute[250269]: 2026-01-23 10:12:52.594 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:12:52 np0005593232 nova_compute[250269]: 2026-01-23 10:12:52.623 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:12:52 np0005593232 nova_compute[250269]: 2026-01-23 10:12:52.623 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:12:52 np0005593232 nova_compute[250269]: 2026-01-23 10:12:52.624 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:12:52 np0005593232 nova_compute[250269]: 2026-01-23 10:12:52.624 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:12:52 np0005593232 nova_compute[250269]: 2026-01-23 10:12:52.624 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:12:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:12:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:52.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:12:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:12:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1896327432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:12:53 np0005593232 nova_compute[250269]: 2026-01-23 10:12:53.126 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:12:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2485: 321 pgs: 321 active+clean; 336 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 4.8 MiB/s wr, 254 op/s
Jan 23 05:12:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:53.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:54 np0005593232 podman[337327]: 2026-01-23 10:12:54.439260935 +0000 UTC m=+0.097082170 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 23 05:12:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:54.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:54 np0005593232 nova_compute[250269]: 2026-01-23 10:12:54.894 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:55 np0005593232 nova_compute[250269]: 2026-01-23 10:12:55.262 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:12:55 np0005593232 nova_compute[250269]: 2026-01-23 10:12:55.263 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:12:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2486: 321 pgs: 321 active+clean; 372 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 4.4 MiB/s wr, 309 op/s
Jan 23 05:12:55 np0005593232 nova_compute[250269]: 2026-01-23 10:12:55.457 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:12:55 np0005593232 nova_compute[250269]: 2026-01-23 10:12:55.459 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4146MB free_disk=20.854930877685547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:12:55 np0005593232 nova_compute[250269]: 2026-01-23 10:12:55.459 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:12:55 np0005593232 nova_compute[250269]: 2026-01-23 10:12:55.459 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:12:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:12:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:55.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:12:56 np0005593232 nova_compute[250269]: 2026-01-23 10:12:56.020 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 8056a321-13d3-4dd8-bb33-70c832c17ac1 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:12:56 np0005593232 nova_compute[250269]: 2026-01-23 10:12:56.021 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:12:56 np0005593232 nova_compute[250269]: 2026-01-23 10:12:56.021 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:12:56 np0005593232 nova_compute[250269]: 2026-01-23 10:12:56.067 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:12:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:12:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4081555647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:12:56 np0005593232 nova_compute[250269]: 2026-01-23 10:12:56.537 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:12:56 np0005593232 nova_compute[250269]: 2026-01-23 10:12:56.545 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:12:56 np0005593232 nova_compute[250269]: 2026-01-23 10:12:56.566 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:12:56 np0005593232 nova_compute[250269]: 2026-01-23 10:12:56.586 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:12:56 np0005593232 nova_compute[250269]: 2026-01-23 10:12:56.587 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:12:56 np0005593232 ovn_controller[151001]: 2026-01-23T10:12:56Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7a:b9:0c 10.100.0.4
Jan 23 05:12:56 np0005593232 ovn_controller[151001]: 2026-01-23T10:12:56Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:b9:0c 10.100.0.4
Jan 23 05:12:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:12:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:56.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:56 np0005593232 nova_compute[250269]: 2026-01-23 10:12:56.864 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2487: 321 pgs: 321 active+clean; 388 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 4.5 MiB/s wr, 303 op/s
Jan 23 05:12:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:57.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:12:58.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:12:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2488: 321 pgs: 321 active+clean; 413 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 6.0 MiB/s wr, 319 op/s
Jan 23 05:12:59 np0005593232 podman[337377]: 2026-01-23 10:12:59.388092724 +0000 UTC m=+0.052504833 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 23 05:12:59 np0005593232 nova_compute[250269]: 2026-01-23 10:12:59.896 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:12:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:12:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:12:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:12:59.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:00.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2489: 321 pgs: 321 active+clean; 413 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.6 MiB/s wr, 249 op/s
Jan 23 05:13:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:13:01 np0005593232 nova_compute[250269]: 2026-01-23 10:13:01.866 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:01.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:02.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2490: 321 pgs: 321 active+clean; 437 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 6.7 MiB/s wr, 283 op/s
Jan 23 05:13:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:03.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:04.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:04 np0005593232 nova_compute[250269]: 2026-01-23 10:13:04.899 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2491: 321 pgs: 321 active+clean; 463 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 7.1 MiB/s wr, 247 op/s
Jan 23 05:13:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:06.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:06.156 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:13:06 np0005593232 nova_compute[250269]: 2026-01-23 10:13:06.157 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:06.158 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:13:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:13:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:13:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:06.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:13:06 np0005593232 nova_compute[250269]: 2026-01-23 10:13:06.868 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:07.159 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:13:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2492: 321 pgs: 321 active+clean; 476 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 988 KiB/s rd, 6.8 MiB/s wr, 189 op/s
Jan 23 05:13:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:13:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:13:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:13:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:13:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:13:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:13:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:13:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:08.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:13:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:13:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:08.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:13:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2493: 321 pgs: 321 active+clean; 484 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 5.9 MiB/s wr, 222 op/s
Jan 23 05:13:09 np0005593232 nova_compute[250269]: 2026-01-23 10:13:09.901 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:10.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:10.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2494: 321 pgs: 321 active+clean; 484 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 4.3 MiB/s wr, 174 op/s
Jan 23 05:13:11 np0005593232 nova_compute[250269]: 2026-01-23 10:13:11.870 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:13:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:12.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:12 np0005593232 nova_compute[250269]: 2026-01-23 10:13:12.267 250273 DEBUG oslo_concurrency.lockutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "b3f4a8f0-513b-4165-a2f5-3c01bac04576" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:13:12 np0005593232 nova_compute[250269]: 2026-01-23 10:13:12.267 250273 DEBUG oslo_concurrency.lockutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "b3f4a8f0-513b-4165-a2f5-3c01bac04576" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:13:12 np0005593232 nova_compute[250269]: 2026-01-23 10:13:12.296 250273 DEBUG nova.compute.manager [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:13:12 np0005593232 nova_compute[250269]: 2026-01-23 10:13:12.407 250273 DEBUG oslo_concurrency.lockutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:13:12 np0005593232 nova_compute[250269]: 2026-01-23 10:13:12.408 250273 DEBUG oslo_concurrency.lockutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:13:12 np0005593232 nova_compute[250269]: 2026-01-23 10:13:12.417 250273 DEBUG nova.virt.hardware [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:13:12 np0005593232 nova_compute[250269]: 2026-01-23 10:13:12.417 250273 INFO nova.compute.claims [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:13:12 np0005593232 nova_compute[250269]: 2026-01-23 10:13:12.555 250273 DEBUG oslo_concurrency.lockutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "74fc0ca6-968a-48e2-8496-f53ebb50daf0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:13:12 np0005593232 nova_compute[250269]: 2026-01-23 10:13:12.555 250273 DEBUG oslo_concurrency.lockutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "74fc0ca6-968a-48e2-8496-f53ebb50daf0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:13:12 np0005593232 nova_compute[250269]: 2026-01-23 10:13:12.583 250273 DEBUG nova.compute.manager [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:13:12 np0005593232 nova_compute[250269]: 2026-01-23 10:13:12.629 250273 DEBUG oslo_concurrency.processutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:13:12 np0005593232 nova_compute[250269]: 2026-01-23 10:13:12.707 250273 DEBUG oslo_concurrency.lockutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:13:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:13:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:12.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:13:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:13:13 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3634590009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.081 250273 DEBUG oslo_concurrency.processutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.087 250273 DEBUG nova.compute.provider_tree [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.110 250273 DEBUG nova.scheduler.client.report [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.139 250273 DEBUG oslo_concurrency.lockutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.140 250273 DEBUG nova.compute.manager [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.142 250273 DEBUG oslo_concurrency.lockutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.155 250273 DEBUG nova.virt.hardware [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.155 250273 INFO nova.compute.claims [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.220 250273 DEBUG nova.compute.manager [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.221 250273 DEBUG nova.network.neutron [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.253 250273 INFO nova.virt.libvirt.driver [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.272 250273 DEBUG nova.compute.manager [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:13:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2495: 321 pgs: 321 active+clean; 484 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 196 op/s
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.376 250273 DEBUG oslo_concurrency.processutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.405 250273 DEBUG nova.compute.manager [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.407 250273 DEBUG nova.virt.libvirt.driver [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.407 250273 INFO nova.virt.libvirt.driver [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Creating image(s)
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.436 250273 DEBUG nova.storage.rbd_utils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image b3f4a8f0-513b-4165-a2f5-3c01bac04576_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.470 250273 DEBUG nova.storage.rbd_utils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image b3f4a8f0-513b-4165-a2f5-3c01bac04576_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.506 250273 DEBUG nova.storage.rbd_utils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image b3f4a8f0-513b-4165-a2f5-3c01bac04576_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.510 250273 DEBUG oslo_concurrency.processutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.587 250273 DEBUG oslo_concurrency.processutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.589 250273 DEBUG oslo_concurrency.lockutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.589 250273 DEBUG oslo_concurrency.lockutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.590 250273 DEBUG oslo_concurrency.lockutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.625 250273 DEBUG nova.storage.rbd_utils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image b3f4a8f0-513b-4165-a2f5-3c01bac04576_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.628 250273 DEBUG oslo_concurrency.processutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 b3f4a8f0-513b-4165-a2f5-3c01bac04576_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:13:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:13:13 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2848370925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.838 250273 DEBUG oslo_concurrency.processutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.844 250273 DEBUG nova.compute.provider_tree [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.869 250273 DEBUG nova.scheduler.client.report [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.894 250273 DEBUG oslo_concurrency.lockutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.895 250273 DEBUG nova.compute.manager [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.931 250273 DEBUG oslo_concurrency.processutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 b3f4a8f0-513b-4165-a2f5-3c01bac04576_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.965 250273 DEBUG nova.compute.manager [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 05:13:13 np0005593232 nova_compute[250269]: 2026-01-23 10:13:13.966 250273 DEBUG nova.network.neutron [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.013 250273 INFO nova.virt.libvirt.driver [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 05:13:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:14.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.021 250273 DEBUG nova.storage.rbd_utils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] resizing rbd image b3f4a8f0-513b-4165-a2f5-3c01bac04576_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.060 250273 DEBUG nova.compute.manager [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.140 250273 DEBUG nova.objects.instance [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lazy-loading 'migration_context' on Instance uuid b3f4a8f0-513b-4165-a2f5-3c01bac04576 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.169 250273 DEBUG nova.virt.libvirt.driver [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.169 250273 DEBUG nova.virt.libvirt.driver [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Ensure instance console log exists: /var/lib/nova/instances/b3f4a8f0-513b-4165-a2f5-3c01bac04576/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.170 250273 DEBUG oslo_concurrency.lockutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.170 250273 DEBUG oslo_concurrency.lockutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.170 250273 DEBUG oslo_concurrency.lockutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.346 250273 DEBUG nova.policy [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '60291ce86b6946629a2e48f6680312cb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '98c94577fcdb4c3d893898ede79ea2d4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.349 250273 DEBUG nova.compute.manager [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.350 250273 DEBUG nova.virt.libvirt.driver [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.350 250273 INFO nova.virt.libvirt.driver [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Creating image(s)
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.378 250273 DEBUG nova.storage.rbd_utils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] rbd image 74fc0ca6-968a-48e2-8496-f53ebb50daf0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.407 250273 DEBUG nova.storage.rbd_utils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] rbd image 74fc0ca6-968a-48e2-8496-f53ebb50daf0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.438 250273 DEBUG nova.storage.rbd_utils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] rbd image 74fc0ca6-968a-48e2-8496-f53ebb50daf0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.444 250273 DEBUG oslo_concurrency.processutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.473 250273 DEBUG nova.policy [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'aca3cab576d641d3b89e7dddf155d467', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9dd869ce76e44fc8a82b8bbee1654d33', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.509 250273 DEBUG oslo_concurrency.processutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.510 250273 DEBUG oslo_concurrency.lockutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.511 250273 DEBUG oslo_concurrency.lockutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.511 250273 DEBUG oslo_concurrency.lockutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.533 250273 DEBUG nova.storage.rbd_utils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] rbd image 74fc0ca6-968a-48e2-8496-f53ebb50daf0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.537 250273 DEBUG oslo_concurrency.processutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 74fc0ca6-968a-48e2-8496-f53ebb50daf0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.809 250273 DEBUG oslo_concurrency.processutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 74fc0ca6-968a-48e2-8496-f53ebb50daf0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:13:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:13:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:14.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.885 250273 DEBUG nova.storage.rbd_utils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] resizing rbd image 74fc0ca6-968a-48e2-8496-f53ebb50daf0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 23 05:13:14 np0005593232 nova_compute[250269]: 2026-01-23 10:13:14.928 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:13:15 np0005593232 nova_compute[250269]: 2026-01-23 10:13:15.012 250273 DEBUG nova.objects.instance [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'migration_context' on Instance uuid 74fc0ca6-968a-48e2-8496-f53ebb50daf0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 05:13:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2496: 321 pgs: 321 active+clean; 476 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 203 op/s
Jan 23 05:13:15 np0005593232 nova_compute[250269]: 2026-01-23 10:13:15.830 250273 DEBUG nova.virt.libvirt.driver [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 05:13:15 np0005593232 nova_compute[250269]: 2026-01-23 10:13:15.831 250273 DEBUG nova.virt.libvirt.driver [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Ensure instance console log exists: /var/lib/nova/instances/74fc0ca6-968a-48e2-8496-f53ebb50daf0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 05:13:15 np0005593232 nova_compute[250269]: 2026-01-23 10:13:15.831 250273 DEBUG oslo_concurrency.lockutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:13:15 np0005593232 nova_compute[250269]: 2026-01-23 10:13:15.831 250273 DEBUG oslo_concurrency.lockutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:13:15 np0005593232 nova_compute[250269]: 2026-01-23 10:13:15.832 250273 DEBUG oslo_concurrency.lockutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:13:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:16.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:16.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:16 np0005593232 nova_compute[250269]: 2026-01-23 10:13:16.873 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:13:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:13:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2497: 321 pgs: 321 active+clean; 491 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 221 op/s
Jan 23 05:13:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:13:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:18.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:13:18 np0005593232 nova_compute[250269]: 2026-01-23 10:13:18.203 250273 DEBUG nova.network.neutron [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Successfully created port: 5a2eb60f-bd18-4a91-b5f8-eaec207ac51c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 23 05:13:18 np0005593232 nova_compute[250269]: 2026-01-23 10:13:18.556 250273 DEBUG nova.network.neutron [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Successfully created port: faf5bdcb-b5b5-4066-9cf4-a8f4014679a6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 23 05:13:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:13:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:18.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:13:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2498: 321 pgs: 321 active+clean; 569 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.4 MiB/s wr, 357 op/s
Jan 23 05:13:19 np0005593232 nova_compute[250269]: 2026-01-23 10:13:19.555 250273 DEBUG nova.network.neutron [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Successfully updated port: 5a2eb60f-bd18-4a91-b5f8-eaec207ac51c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 23 05:13:19 np0005593232 nova_compute[250269]: 2026-01-23 10:13:19.577 250273 DEBUG oslo_concurrency.lockutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "refresh_cache-74fc0ca6-968a-48e2-8496-f53ebb50daf0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 05:13:19 np0005593232 nova_compute[250269]: 2026-01-23 10:13:19.578 250273 DEBUG oslo_concurrency.lockutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquired lock "refresh_cache-74fc0ca6-968a-48e2-8496-f53ebb50daf0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 05:13:19 np0005593232 nova_compute[250269]: 2026-01-23 10:13:19.578 250273 DEBUG nova.network.neutron [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 05:13:19 np0005593232 nova_compute[250269]: 2026-01-23 10:13:19.717 250273 DEBUG nova.compute.manager [req-6e3b1d5b-e7e8-4d9c-8c25-ee98d0c33130 req-a31b75c3-6065-42b0-8c16-55361fefa704 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Received event network-changed-5a2eb60f-bd18-4a91-b5f8-eaec207ac51c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:13:19 np0005593232 nova_compute[250269]: 2026-01-23 10:13:19.717 250273 DEBUG nova.compute.manager [req-6e3b1d5b-e7e8-4d9c-8c25-ee98d0c33130 req-a31b75c3-6065-42b0-8c16-55361fefa704 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Refreshing instance network info cache due to event network-changed-5a2eb60f-bd18-4a91-b5f8-eaec207ac51c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 05:13:19 np0005593232 nova_compute[250269]: 2026-01-23 10:13:19.718 250273 DEBUG oslo_concurrency.lockutils [req-6e3b1d5b-e7e8-4d9c-8c25-ee98d0c33130 req-a31b75c3-6065-42b0-8c16-55361fefa704 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-74fc0ca6-968a-48e2-8496-f53ebb50daf0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 05:13:19 np0005593232 nova_compute[250269]: 2026-01-23 10:13:19.931 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:13:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:20.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:20 np0005593232 nova_compute[250269]: 2026-01-23 10:13:20.176 250273 DEBUG nova.network.neutron [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 05:13:20 np0005593232 nova_compute[250269]: 2026-01-23 10:13:20.405 250273 DEBUG nova.network.neutron [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Successfully updated port: faf5bdcb-b5b5-4066-9cf4-a8f4014679a6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 23 05:13:20 np0005593232 nova_compute[250269]: 2026-01-23 10:13:20.424 250273 DEBUG oslo_concurrency.lockutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "refresh_cache-b3f4a8f0-513b-4165-a2f5-3c01bac04576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 05:13:20 np0005593232 nova_compute[250269]: 2026-01-23 10:13:20.425 250273 DEBUG oslo_concurrency.lockutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquired lock "refresh_cache-b3f4a8f0-513b-4165-a2f5-3c01bac04576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 05:13:20 np0005593232 nova_compute[250269]: 2026-01-23 10:13:20.425 250273 DEBUG nova.network.neutron [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 05:13:20 np0005593232 nova_compute[250269]: 2026-01-23 10:13:20.747 250273 DEBUG nova.network.neutron [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 05:13:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:13:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:20.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:13:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2499: 321 pgs: 321 active+clean; 569 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 5.3 MiB/s wr, 299 op/s
Jan 23 05:13:21 np0005593232 nova_compute[250269]: 2026-01-23 10:13:21.875 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:13:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:13:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:22.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.185 250273 DEBUG nova.compute.manager [req-604854ac-73b7-4e8c-9a59-fcb1094f9114 req-5254e40e-0b2c-427c-b1e6-f8e29b618f24 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Received event network-changed-faf5bdcb-b5b5-4066-9cf4-a8f4014679a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.185 250273 DEBUG nova.compute.manager [req-604854ac-73b7-4e8c-9a59-fcb1094f9114 req-5254e40e-0b2c-427c-b1e6-f8e29b618f24 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Refreshing instance network info cache due to event network-changed-faf5bdcb-b5b5-4066-9cf4-a8f4014679a6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.185 250273 DEBUG oslo_concurrency.lockutils [req-604854ac-73b7-4e8c-9a59-fcb1094f9114 req-5254e40e-0b2c-427c-b1e6-f8e29b618f24 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-b3f4a8f0-513b-4165-a2f5-3c01bac04576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.506 250273 DEBUG nova.network.neutron [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Updating instance_info_cache with network_info: [{"id": "5a2eb60f-bd18-4a91-b5f8-eaec207ac51c", "address": "fa:16:3e:d9:7a:35", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2eb60f-bd", "ovs_interfaceid": "5a2eb60f-bd18-4a91-b5f8-eaec207ac51c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.606 250273 DEBUG oslo_concurrency.lockutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Releasing lock "refresh_cache-74fc0ca6-968a-48e2-8496-f53ebb50daf0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.606 250273 DEBUG nova.compute.manager [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Instance network_info: |[{"id": "5a2eb60f-bd18-4a91-b5f8-eaec207ac51c", "address": "fa:16:3e:d9:7a:35", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2eb60f-bd", "ovs_interfaceid": "5a2eb60f-bd18-4a91-b5f8-eaec207ac51c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.607 250273 DEBUG oslo_concurrency.lockutils [req-6e3b1d5b-e7e8-4d9c-8c25-ee98d0c33130 req-a31b75c3-6065-42b0-8c16-55361fefa704 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-74fc0ca6-968a-48e2-8496-f53ebb50daf0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.607 250273 DEBUG nova.network.neutron [req-6e3b1d5b-e7e8-4d9c-8c25-ee98d0c33130 req-a31b75c3-6065-42b0-8c16-55361fefa704 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Refreshing network info cache for port 5a2eb60f-bd18-4a91-b5f8-eaec207ac51c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.610 250273 DEBUG nova.virt.libvirt.driver [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Start _get_guest_xml network_info=[{"id": "5a2eb60f-bd18-4a91-b5f8-eaec207ac51c", "address": "fa:16:3e:d9:7a:35", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2eb60f-bd", "ovs_interfaceid": "5a2eb60f-bd18-4a91-b5f8-eaec207ac51c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.615 250273 WARNING nova.virt.libvirt.driver [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.621 250273 DEBUG nova.virt.libvirt.host [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.622 250273 DEBUG nova.virt.libvirt.host [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.625 250273 DEBUG nova.virt.libvirt.host [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.626 250273 DEBUG nova.virt.libvirt.host [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.627 250273 DEBUG nova.virt.libvirt.driver [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.627 250273 DEBUG nova.virt.hardware [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.627 250273 DEBUG nova.virt.hardware [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.628 250273 DEBUG nova.virt.hardware [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.628 250273 DEBUG nova.virt.hardware [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.628 250273 DEBUG nova.virt.hardware [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.628 250273 DEBUG nova.virt.hardware [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.629 250273 DEBUG nova.virt.hardware [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.629 250273 DEBUG nova.virt.hardware [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.629 250273 DEBUG nova.virt.hardware [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.629 250273 DEBUG nova.virt.hardware [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.630 250273 DEBUG nova.virt.hardware [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:13:22 np0005593232 nova_compute[250269]: 2026-01-23 10:13:22.632 250273 DEBUG oslo_concurrency.processutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:13:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:13:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:22.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:13:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:13:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/321448461' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.091 250273 DEBUG oslo_concurrency.processutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.119 250273 DEBUG nova.storage.rbd_utils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] rbd image 74fc0ca6-968a-48e2-8496-f53ebb50daf0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.123 250273 DEBUG oslo_concurrency.processutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.206 250273 DEBUG nova.network.neutron [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Updating instance_info_cache with network_info: [{"id": "faf5bdcb-b5b5-4066-9cf4-a8f4014679a6", "address": "fa:16:3e:13:3d:11", "network": {"id": "0a86a6e8-5e41-414d-9717-c71aa2218873", "bridge": "br-int", "label": "tempest-network-smoke--1158487566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf5bdcb-b5", "ovs_interfaceid": "faf5bdcb-b5b5-4066-9cf4-a8f4014679a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:13:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2500: 321 pgs: 321 active+clean; 577 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 5.4 MiB/s wr, 329 op/s
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.336 250273 DEBUG oslo_concurrency.lockutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Releasing lock "refresh_cache-b3f4a8f0-513b-4165-a2f5-3c01bac04576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.336 250273 DEBUG nova.compute.manager [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Instance network_info: |[{"id": "faf5bdcb-b5b5-4066-9cf4-a8f4014679a6", "address": "fa:16:3e:13:3d:11", "network": {"id": "0a86a6e8-5e41-414d-9717-c71aa2218873", "bridge": "br-int", "label": "tempest-network-smoke--1158487566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf5bdcb-b5", "ovs_interfaceid": "faf5bdcb-b5b5-4066-9cf4-a8f4014679a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.337 250273 DEBUG oslo_concurrency.lockutils [req-604854ac-73b7-4e8c-9a59-fcb1094f9114 req-5254e40e-0b2c-427c-b1e6-f8e29b618f24 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-b3f4a8f0-513b-4165-a2f5-3c01bac04576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.337 250273 DEBUG nova.network.neutron [req-604854ac-73b7-4e8c-9a59-fcb1094f9114 req-5254e40e-0b2c-427c-b1e6-f8e29b618f24 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Refreshing network info cache for port faf5bdcb-b5b5-4066-9cf4-a8f4014679a6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.340 250273 DEBUG nova.virt.libvirt.driver [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Start _get_guest_xml network_info=[{"id": "faf5bdcb-b5b5-4066-9cf4-a8f4014679a6", "address": "fa:16:3e:13:3d:11", "network": {"id": "0a86a6e8-5e41-414d-9717-c71aa2218873", "bridge": "br-int", "label": "tempest-network-smoke--1158487566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf5bdcb-b5", "ovs_interfaceid": "faf5bdcb-b5b5-4066-9cf4-a8f4014679a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.344 250273 WARNING nova.virt.libvirt.driver [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.349 250273 DEBUG nova.virt.libvirt.host [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.349 250273 DEBUG nova.virt.libvirt.host [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.351 250273 DEBUG nova.virt.libvirt.host [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.351 250273 DEBUG nova.virt.libvirt.host [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.352 250273 DEBUG nova.virt.libvirt.driver [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.352 250273 DEBUG nova.virt.hardware [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.353 250273 DEBUG nova.virt.hardware [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.353 250273 DEBUG nova.virt.hardware [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.353 250273 DEBUG nova.virt.hardware [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.354 250273 DEBUG nova.virt.hardware [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.354 250273 DEBUG nova.virt.hardware [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.354 250273 DEBUG nova.virt.hardware [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.354 250273 DEBUG nova.virt.hardware [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.354 250273 DEBUG nova.virt.hardware [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.355 250273 DEBUG nova.virt.hardware [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.355 250273 DEBUG nova.virt.hardware [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.358 250273 DEBUG oslo_concurrency.processutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:13:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:13:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/319435126' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.570 250273 DEBUG oslo_concurrency.processutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.572 250273 DEBUG nova.virt.libvirt.vif [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:13:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-256866531',display_name='tempest-ServerActionsTestOtherB-server-256866531',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-256866531',id=138,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9dd869ce76e44fc8a82b8bbee1654d33',ramdisk_id='',reservation_id='r-33x0cr32',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1052932467',owner_user_name='tempest-ServerActionsTestOtherB-1052932467-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:13:14Z,user_data=None,user_id='aca3cab576d641d3b89e7dddf155d467',uuid=74fc0ca6-968a-48e2-8496-f53ebb50daf0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a2eb60f-bd18-4a91-b5f8-eaec207ac51c", "address": "fa:16:3e:d9:7a:35", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2eb60f-bd", "ovs_interfaceid": "5a2eb60f-bd18-4a91-b5f8-eaec207ac51c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.572 250273 DEBUG nova.network.os_vif_util [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converting VIF {"id": "5a2eb60f-bd18-4a91-b5f8-eaec207ac51c", "address": "fa:16:3e:d9:7a:35", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2eb60f-bd", "ovs_interfaceid": "5a2eb60f-bd18-4a91-b5f8-eaec207ac51c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.573 250273 DEBUG nova.network.os_vif_util [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:7a:35,bridge_name='br-int',has_traffic_filtering=True,id=5a2eb60f-bd18-4a91-b5f8-eaec207ac51c,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a2eb60f-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.574 250273 DEBUG nova.objects.instance [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'pci_devices' on Instance uuid 74fc0ca6-968a-48e2-8496-f53ebb50daf0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:13:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:13:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2519461677' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.799 250273 DEBUG oslo_concurrency.processutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.825 250273 DEBUG nova.storage.rbd_utils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image b3f4a8f0-513b-4165-a2f5-3c01bac04576_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.829 250273 DEBUG oslo_concurrency.processutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.858 250273 DEBUG nova.virt.libvirt.driver [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:13:23 np0005593232 nova_compute[250269]:  <uuid>74fc0ca6-968a-48e2-8496-f53ebb50daf0</uuid>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:  <name>instance-0000008a</name>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerActionsTestOtherB-server-256866531</nova:name>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:13:22</nova:creationTime>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:13:23 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:        <nova:user uuid="aca3cab576d641d3b89e7dddf155d467">tempest-ServerActionsTestOtherB-1052932467-project-member</nova:user>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:        <nova:project uuid="9dd869ce76e44fc8a82b8bbee1654d33">tempest-ServerActionsTestOtherB-1052932467</nova:project>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:        <nova:port uuid="5a2eb60f-bd18-4a91-b5f8-eaec207ac51c">
Jan 23 05:13:23 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <entry name="serial">74fc0ca6-968a-48e2-8496-f53ebb50daf0</entry>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <entry name="uuid">74fc0ca6-968a-48e2-8496-f53ebb50daf0</entry>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/74fc0ca6-968a-48e2-8496-f53ebb50daf0_disk">
Jan 23 05:13:23 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:13:23 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/74fc0ca6-968a-48e2-8496-f53ebb50daf0_disk.config">
Jan 23 05:13:23 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:13:23 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:d9:7a:35"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <target dev="tap5a2eb60f-bd"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/74fc0ca6-968a-48e2-8496-f53ebb50daf0/console.log" append="off"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:13:23 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:13:23 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:13:23 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:13:23 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.860 250273 DEBUG nova.compute.manager [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Preparing to wait for external event network-vif-plugged-5a2eb60f-bd18-4a91-b5f8-eaec207ac51c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.860 250273 DEBUG oslo_concurrency.lockutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "74fc0ca6-968a-48e2-8496-f53ebb50daf0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.860 250273 DEBUG oslo_concurrency.lockutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "74fc0ca6-968a-48e2-8496-f53ebb50daf0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.861 250273 DEBUG oslo_concurrency.lockutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "74fc0ca6-968a-48e2-8496-f53ebb50daf0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.862 250273 DEBUG nova.virt.libvirt.vif [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:13:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-256866531',display_name='tempest-ServerActionsTestOtherB-server-256866531',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-256866531',id=138,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9dd869ce76e44fc8a82b8bbee1654d33',ramdisk_id='',reservation_id='r-33x0cr32',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1052932467',owner_user_name='tempest-ServerActionsTestOtherB-1052932467-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:13:14Z,user_data=None,user_id='aca3cab576d641d3b89e7dddf155d467',uuid=74fc0ca6-968a-48e2-8496-f53ebb50daf0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a2eb60f-bd18-4a91-b5f8-eaec207ac51c", "address": "fa:16:3e:d9:7a:35", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2eb60f-bd", "ovs_interfaceid": "5a2eb60f-bd18-4a91-b5f8-eaec207ac51c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.862 250273 DEBUG nova.network.os_vif_util [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converting VIF {"id": "5a2eb60f-bd18-4a91-b5f8-eaec207ac51c", "address": "fa:16:3e:d9:7a:35", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2eb60f-bd", "ovs_interfaceid": "5a2eb60f-bd18-4a91-b5f8-eaec207ac51c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.863 250273 DEBUG nova.network.os_vif_util [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:7a:35,bridge_name='br-int',has_traffic_filtering=True,id=5a2eb60f-bd18-4a91-b5f8-eaec207ac51c,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a2eb60f-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.863 250273 DEBUG os_vif [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:7a:35,bridge_name='br-int',has_traffic_filtering=True,id=5a2eb60f-bd18-4a91-b5f8-eaec207ac51c,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a2eb60f-bd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.864 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.865 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.865 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.869 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.870 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a2eb60f-bd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.870 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a2eb60f-bd, col_values=(('external_ids', {'iface-id': '5a2eb60f-bd18-4a91-b5f8-eaec207ac51c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:7a:35', 'vm-uuid': '74fc0ca6-968a-48e2-8496-f53ebb50daf0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.872 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.873 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:13:23 np0005593232 NetworkManager[49057]: <info>  [1769163203.8744] manager: (tap5a2eb60f-bd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.881 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:23 np0005593232 nova_compute[250269]: 2026-01-23 10:13:23.883 250273 INFO os_vif [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:7a:35,bridge_name='br-int',has_traffic_filtering=True,id=5a2eb60f-bd18-4a91-b5f8-eaec207ac51c,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a2eb60f-bd')#033[00m
Jan 23 05:13:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:13:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:24.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.075 250273 DEBUG nova.virt.libvirt.driver [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.076 250273 DEBUG nova.virt.libvirt.driver [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.076 250273 DEBUG nova.virt.libvirt.driver [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] No VIF found with MAC fa:16:3e:d9:7a:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.077 250273 INFO nova.virt.libvirt.driver [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Using config drive#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.102 250273 DEBUG nova.storage.rbd_utils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] rbd image 74fc0ca6-968a-48e2-8496-f53ebb50daf0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:13:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:13:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4252868274' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.256 250273 DEBUG oslo_concurrency.processutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.258 250273 DEBUG nova.virt.libvirt.vif [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:13:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-495186012',display_name='tempest-TestNetworkBasicOps-server-495186012',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-495186012',id=137,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJcRSiJbCJSpcIIny/K0weugJbU1lrFB/m2zYZiRLImaUCxqEBpaQ1Ck0Aehrf7Knf/qKUoGahOjDGUPouvtWFw4LRqeD1QCuGOxag4C/th3BIgZBXyUD1wzevSZevWMEw==',key_name='tempest-TestNetworkBasicOps-676077224',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98c94577fcdb4c3d893898ede79ea2d4',ramdisk_id='',reservation_id='r-mfrfos5b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-789276745',owner_user_name='tempest-TestNetworkBasicOps-789276745-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:13:13Z,user_data=None,user_id='60291ce86b6946629a2e48f6680312cb',uuid=b3f4a8f0-513b-4165-a2f5-3c01bac04576,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "faf5bdcb-b5b5-4066-9cf4-a8f4014679a6", "address": "fa:16:3e:13:3d:11", "network": {"id": "0a86a6e8-5e41-414d-9717-c71aa2218873", "bridge": "br-int", "label": "tempest-network-smoke--1158487566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf5bdcb-b5", "ovs_interfaceid": "faf5bdcb-b5b5-4066-9cf4-a8f4014679a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.258 250273 DEBUG nova.network.os_vif_util [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converting VIF {"id": "faf5bdcb-b5b5-4066-9cf4-a8f4014679a6", "address": "fa:16:3e:13:3d:11", "network": {"id": "0a86a6e8-5e41-414d-9717-c71aa2218873", "bridge": "br-int", "label": "tempest-network-smoke--1158487566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf5bdcb-b5", "ovs_interfaceid": "faf5bdcb-b5b5-4066-9cf4-a8f4014679a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.259 250273 DEBUG nova.network.os_vif_util [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:3d:11,bridge_name='br-int',has_traffic_filtering=True,id=faf5bdcb-b5b5-4066-9cf4-a8f4014679a6,network=Network(0a86a6e8-5e41-414d-9717-c71aa2218873),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf5bdcb-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.260 250273 DEBUG nova.objects.instance [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lazy-loading 'pci_devices' on Instance uuid b3f4a8f0-513b-4165-a2f5-3c01bac04576 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.299 250273 DEBUG nova.virt.libvirt.driver [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:13:24 np0005593232 nova_compute[250269]:  <uuid>b3f4a8f0-513b-4165-a2f5-3c01bac04576</uuid>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:  <name>instance-00000089</name>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <nova:name>tempest-TestNetworkBasicOps-server-495186012</nova:name>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:13:23</nova:creationTime>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:13:24 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:        <nova:user uuid="60291ce86b6946629a2e48f6680312cb">tempest-TestNetworkBasicOps-789276745-project-member</nova:user>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:        <nova:project uuid="98c94577fcdb4c3d893898ede79ea2d4">tempest-TestNetworkBasicOps-789276745</nova:project>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:        <nova:port uuid="faf5bdcb-b5b5-4066-9cf4-a8f4014679a6">
Jan 23 05:13:24 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <entry name="serial">b3f4a8f0-513b-4165-a2f5-3c01bac04576</entry>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <entry name="uuid">b3f4a8f0-513b-4165-a2f5-3c01bac04576</entry>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/b3f4a8f0-513b-4165-a2f5-3c01bac04576_disk">
Jan 23 05:13:24 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:13:24 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/b3f4a8f0-513b-4165-a2f5-3c01bac04576_disk.config">
Jan 23 05:13:24 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:13:24 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:13:3d:11"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <target dev="tapfaf5bdcb-b5"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/b3f4a8f0-513b-4165-a2f5-3c01bac04576/console.log" append="off"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:13:24 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:13:24 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:13:24 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:13:24 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.300 250273 DEBUG nova.compute.manager [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Preparing to wait for external event network-vif-plugged-faf5bdcb-b5b5-4066-9cf4-a8f4014679a6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.301 250273 DEBUG oslo_concurrency.lockutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "b3f4a8f0-513b-4165-a2f5-3c01bac04576-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.301 250273 DEBUG oslo_concurrency.lockutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "b3f4a8f0-513b-4165-a2f5-3c01bac04576-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.301 250273 DEBUG oslo_concurrency.lockutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "b3f4a8f0-513b-4165-a2f5-3c01bac04576-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.302 250273 DEBUG nova.virt.libvirt.vif [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:13:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-495186012',display_name='tempest-TestNetworkBasicOps-server-495186012',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-495186012',id=137,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJcRSiJbCJSpcIIny/K0weugJbU1lrFB/m2zYZiRLImaUCxqEBpaQ1Ck0Aehrf7Knf/qKUoGahOjDGUPouvtWFw4LRqeD1QCuGOxag4C/th3BIgZBXyUD1wzevSZevWMEw==',key_name='tempest-TestNetworkBasicOps-676077224',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98c94577fcdb4c3d893898ede79ea2d4',ramdisk_id='',reservation_id='r-mfrfos5b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-789276745',owner_user_name='tempest-TestNetworkBasicOps-789276745-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:13:13Z,user_data=None,user_id='60291ce86b6946629a2e48f6680312cb',uuid=b3f4a8f0-513b-4165-a2f5-3c01bac04576,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "faf5bdcb-b5b5-4066-9cf4-a8f4014679a6", "address": "fa:16:3e:13:3d:11", "network": {"id": "0a86a6e8-5e41-414d-9717-c71aa2218873", "bridge": "br-int", "label": "tempest-network-smoke--1158487566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf5bdcb-b5", "ovs_interfaceid": "faf5bdcb-b5b5-4066-9cf4-a8f4014679a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.302 250273 DEBUG nova.network.os_vif_util [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converting VIF {"id": "faf5bdcb-b5b5-4066-9cf4-a8f4014679a6", "address": "fa:16:3e:13:3d:11", "network": {"id": "0a86a6e8-5e41-414d-9717-c71aa2218873", "bridge": "br-int", "label": "tempest-network-smoke--1158487566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf5bdcb-b5", "ovs_interfaceid": "faf5bdcb-b5b5-4066-9cf4-a8f4014679a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.303 250273 DEBUG nova.network.os_vif_util [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:3d:11,bridge_name='br-int',has_traffic_filtering=True,id=faf5bdcb-b5b5-4066-9cf4-a8f4014679a6,network=Network(0a86a6e8-5e41-414d-9717-c71aa2218873),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf5bdcb-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.303 250273 DEBUG os_vif [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:3d:11,bridge_name='br-int',has_traffic_filtering=True,id=faf5bdcb-b5b5-4066-9cf4-a8f4014679a6,network=Network(0a86a6e8-5e41-414d-9717-c71aa2218873),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf5bdcb-b5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.304 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.304 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.304 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.307 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.308 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfaf5bdcb-b5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.308 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfaf5bdcb-b5, col_values=(('external_ids', {'iface-id': 'faf5bdcb-b5b5-4066-9cf4-a8f4014679a6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:13:3d:11', 'vm-uuid': 'b3f4a8f0-513b-4165-a2f5-3c01bac04576'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:13:24 np0005593232 NetworkManager[49057]: <info>  [1769163204.3106] manager: (tapfaf5bdcb-b5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.309 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.312 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.317 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.319 250273 INFO os_vif [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:3d:11,bridge_name='br-int',has_traffic_filtering=True,id=faf5bdcb-b5b5-4066-9cf4-a8f4014679a6,network=Network(0a86a6e8-5e41-414d-9717-c71aa2218873),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf5bdcb-b5')#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.423 250273 DEBUG nova.virt.libvirt.driver [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.424 250273 DEBUG nova.virt.libvirt.driver [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.424 250273 DEBUG nova.virt.libvirt.driver [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] No VIF found with MAC fa:16:3e:13:3d:11, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.425 250273 INFO nova.virt.libvirt.driver [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Using config drive#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.455 250273 DEBUG nova.storage.rbd_utils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image b3f4a8f0-513b-4165-a2f5-3c01bac04576_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:13:24 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:24Z|00464|binding|INFO|Releasing lport 1788b5e6-601b-4e3d-a584-c0138c3308f6 from this chassis (sb_readonly=0)
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.514 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:24 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:24Z|00465|binding|INFO|Releasing lport 1788b5e6-601b-4e3d-a584-c0138c3308f6 from this chassis (sb_readonly=0)
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.764 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.811 250273 INFO nova.virt.libvirt.driver [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Creating config drive at /var/lib/nova/instances/74fc0ca6-968a-48e2-8496-f53ebb50daf0/disk.config#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.817 250273 DEBUG oslo_concurrency.processutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/74fc0ca6-968a-48e2-8496-f53ebb50daf0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj8_y5dgm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:13:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:13:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:24.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.957 250273 DEBUG oslo_concurrency.processutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/74fc0ca6-968a-48e2-8496-f53ebb50daf0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj8_y5dgm" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.990 250273 DEBUG nova.storage.rbd_utils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] rbd image 74fc0ca6-968a-48e2-8496-f53ebb50daf0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:13:24 np0005593232 nova_compute[250269]: 2026-01-23 10:13:24.995 250273 DEBUG oslo_concurrency.processutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/74fc0ca6-968a-48e2-8496-f53ebb50daf0/disk.config 74fc0ca6-968a-48e2-8496-f53ebb50daf0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.158 250273 DEBUG oslo_concurrency.processutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/74fc0ca6-968a-48e2-8496-f53ebb50daf0/disk.config 74fc0ca6-968a-48e2-8496-f53ebb50daf0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.159 250273 INFO nova.virt.libvirt.driver [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Deleting local config drive /var/lib/nova/instances/74fc0ca6-968a-48e2-8496-f53ebb50daf0/disk.config because it was imported into RBD.#033[00m
Jan 23 05:13:25 np0005593232 NetworkManager[49057]: <info>  [1769163205.2063] manager: (tap5a2eb60f-bd): new Tun device (/org/freedesktop/NetworkManager/Devices/226)
Jan 23 05:13:25 np0005593232 kernel: tap5a2eb60f-bd: entered promiscuous mode
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.210 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:25 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:25Z|00466|binding|INFO|Claiming lport 5a2eb60f-bd18-4a91-b5f8-eaec207ac51c for this chassis.
Jan 23 05:13:25 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:25Z|00467|binding|INFO|5a2eb60f-bd18-4a91-b5f8-eaec207ac51c: Claiming fa:16:3e:d9:7a:35 10.100.0.5
Jan 23 05:13:25 np0005593232 NetworkManager[49057]: <info>  [1769163205.2221] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/227)
Jan 23 05:13:25 np0005593232 NetworkManager[49057]: <info>  [1769163205.2226] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/228)
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.221 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:25 np0005593232 systemd-machined[215836]: New machine qemu-57-instance-0000008a.
Jan 23 05:13:25 np0005593232 systemd[1]: Started Virtual Machine qemu-57-instance-0000008a.
Jan 23 05:13:25 np0005593232 systemd-udevd[338070]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:13:25 np0005593232 NetworkManager[49057]: <info>  [1769163205.2738] device (tap5a2eb60f-bd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:13:25 np0005593232 NetworkManager[49057]: <info>  [1769163205.2745] device (tap5a2eb60f-bd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.283 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:7a:35 10.100.0.5'], port_security=['fa:16:3e:d9:7a:35 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '74fc0ca6-968a-48e2-8496-f53ebb50daf0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8d9599b4-8855-4310-af02-cdd058438f7d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9dd869ce76e44fc8a82b8bbee1654d33', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b5b72284-9167-4768-aa53-98b2ad243e70', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=875f4baa-cb85-49ca-8f02-78715d351fdb, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=5a2eb60f-bd18-4a91-b5f8-eaec207ac51c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.293 250273 INFO nova.virt.libvirt.driver [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Creating config drive at /var/lib/nova/instances/b3f4a8f0-513b-4165-a2f5-3c01bac04576/disk.config#033[00m
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.285 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 5a2eb60f-bd18-4a91-b5f8-eaec207ac51c in datapath 8d9599b4-8855-4310-af02-cdd058438f7d bound to our chassis#033[00m
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.286 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8d9599b4-8855-4310-af02-cdd058438f7d#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.298 250273 DEBUG oslo_concurrency.processutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b3f4a8f0-513b-4165-a2f5-3c01bac04576/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1r6mk7h3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.299 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[03c9401f-f886-43b2-ba1e-893c515e8ddb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.300 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8d9599b4-81 in ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.303 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8d9599b4-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.303 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[33cafbc0-08bf-4b91-b5a7-c1a89a4d21f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.305 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[05f47b2c-d48a-4b22-91c0-d93761ae4c4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.318 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4df078-c395-4cce-880c-b56f9d3f7dfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2501: 321 pgs: 321 active+clean; 577 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.3 MiB/s wr, 359 op/s
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.342 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d710de5f-5787-4842-9725-325a96c7d798]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.374 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[ed3e4be2-f0a2-4148-8311-5f1b500a7ba0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.384 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[894fc85c-77e7-4f7e-9cbb-8bccfed61e7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:25 np0005593232 NetworkManager[49057]: <info>  [1769163205.3854] manager: (tap8d9599b4-80): new Veth device (/org/freedesktop/NetworkManager/Devices/229)
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.437 250273 DEBUG oslo_concurrency.processutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b3f4a8f0-513b-4165-a2f5-3c01bac04576/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1r6mk7h3" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:13:25 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:25Z|00468|binding|INFO|Releasing lport 1788b5e6-601b-4e3d-a584-c0138c3308f6 from this chassis (sb_readonly=0)
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.466 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[330afd62-e499-4a57-b460-63bfb74181e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:25 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:25Z|00469|binding|INFO|Releasing lport 1788b5e6-601b-4e3d-a584-c0138c3308f6 from this chassis (sb_readonly=0)
Jan 23 05:13:25 np0005593232 podman[338057]: 2026-01-23 10:13:25.471091222 +0000 UTC m=+0.230453209 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.472 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[51deed82-79ac-4fbd-aaf4-3e95e466a071]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.476 250273 DEBUG nova.storage.rbd_utils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image b3f4a8f0-513b-4165-a2f5-3c01bac04576_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:13:25 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:25Z|00470|binding|INFO|Setting lport 5a2eb60f-bd18-4a91-b5f8-eaec207ac51c up in Southbound
Jan 23 05:13:25 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:25Z|00471|binding|INFO|Setting lport 5a2eb60f-bd18-4a91-b5f8-eaec207ac51c ovn-installed in OVS
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.490 250273 DEBUG oslo_concurrency.processutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b3f4a8f0-513b-4165-a2f5-3c01bac04576/disk.config b3f4a8f0-513b-4165-a2f5-3c01bac04576_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:13:25 np0005593232 NetworkManager[49057]: <info>  [1769163205.4978] device (tap8d9599b4-80): carrier: link connected
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.505 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[1a91b351-1ce6-4347-b714-6cdab2a1c37b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.521 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d9457f2a-68f9-4ec9-a8f0-89fefec6054a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8d9599b4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:a1:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711864, 'reachable_time': 24078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338163, 'error': None, 'target': 'ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.527 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.544 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b7344802-f0a7-482b-a007-e3a3adda44d5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:a12b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711864, 'tstamp': 711864}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338166, 'error': None, 'target': 'ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.565 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9e192a0f-260f-40bf-9c6c-d27c5e81d0bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8d9599b4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:a1:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 142], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711864, 'reachable_time': 24078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 338179, 'error': None, 'target': 'ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.571 250273 INFO nova.compute.manager [None req-8cfcf880-6644-47f4-bce2-074eb169f20d fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Pausing#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.572 250273 DEBUG nova.objects.instance [None req-8cfcf880-6644-47f4-bce2-074eb169f20d fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lazy-loading 'flavor' on Instance uuid 8056a321-13d3-4dd8-bb33-70c832c17ac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.596 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[559ef837-2221-4cb2-ad27-f4a11f943cfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.607 250273 DEBUG nova.network.neutron [req-6e3b1d5b-e7e8-4d9c-8c25-ee98d0c33130 req-a31b75c3-6065-42b0-8c16-55361fefa704 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Updated VIF entry in instance network info cache for port 5a2eb60f-bd18-4a91-b5f8-eaec207ac51c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.608 250273 DEBUG nova.network.neutron [req-6e3b1d5b-e7e8-4d9c-8c25-ee98d0c33130 req-a31b75c3-6065-42b0-8c16-55361fefa704 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Updating instance_info_cache with network_info: [{"id": "5a2eb60f-bd18-4a91-b5f8-eaec207ac51c", "address": "fa:16:3e:d9:7a:35", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2eb60f-bd", "ovs_interfaceid": "5a2eb60f-bd18-4a91-b5f8-eaec207ac51c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.621 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163205.620849, 8056a321-13d3-4dd8-bb33-70c832c17ac1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.621 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.623 250273 DEBUG nova.compute.manager [None req-8cfcf880-6644-47f4-bce2-074eb169f20d fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.624 250273 DEBUG oslo_concurrency.lockutils [req-6e3b1d5b-e7e8-4d9c-8c25-ee98d0c33130 req-a31b75c3-6065-42b0-8c16-55361fefa704 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-74fc0ca6-968a-48e2-8496-f53ebb50daf0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.660 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0da3d219-886a-4a5f-a6c3-cb5392d6777f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.662 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.662 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d9599b4-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.663 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.663 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d9599b4-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.665 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:13:25 np0005593232 NetworkManager[49057]: <info>  [1769163205.6669] manager: (tap8d9599b4-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/230)
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.667 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:25 np0005593232 kernel: tap8d9599b4-80: entered promiscuous mode
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.671 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.672 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8d9599b4-80, col_values=(('external_ids', {'iface-id': 'b57bd565-3bb1-4ecc-8df0-a7c439ac84a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.673 250273 DEBUG oslo_concurrency.processutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b3f4a8f0-513b-4165-a2f5-3c01bac04576/disk.config b3f4a8f0-513b-4165-a2f5-3c01bac04576_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.673 250273 INFO nova.virt.libvirt.driver [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Deleting local config drive /var/lib/nova/instances/b3f4a8f0-513b-4165-a2f5-3c01bac04576/disk.config because it was imported into RBD.#033[00m
Jan 23 05:13:25 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:25Z|00472|binding|INFO|Releasing lport b57bd565-3bb1-4ecc-8df0-a7c439ac84a6 from this chassis (sb_readonly=0)
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.675 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.691 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.695 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.696 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8d9599b4-8855-4310-af02-cdd058438f7d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8d9599b4-8855-4310-af02-cdd058438f7d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.697 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a625ecab-9892-4038-93d3-5403a7544d9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.698 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-8d9599b4-8855-4310-af02-cdd058438f7d
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/8d9599b4-8855-4310-af02-cdd058438f7d.pid.haproxy
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 8d9599b4-8855-4310-af02-cdd058438f7d
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.700 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d', 'env', 'PROCESS_TAG=haproxy-8d9599b4-8855-4310-af02-cdd058438f7d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8d9599b4-8855-4310-af02-cdd058438f7d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.705 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.706 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163205.7043872, 74fc0ca6-968a-48e2-8496-f53ebb50daf0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.706 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] VM Started (Lifecycle Event)#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.734 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:13:25 np0005593232 kernel: tapfaf5bdcb-b5: entered promiscuous mode
Jan 23 05:13:25 np0005593232 NetworkManager[49057]: <info>  [1769163205.7390] manager: (tapfaf5bdcb-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/231)
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.740 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:25 np0005593232 systemd-udevd[338109]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.743 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163205.7053018, 74fc0ca6-968a-48e2-8496-f53ebb50daf0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.744 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.746 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:25 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:25Z|00473|binding|INFO|Claiming lport faf5bdcb-b5b5-4066-9cf4-a8f4014679a6 for this chassis.
Jan 23 05:13:25 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:25Z|00474|binding|INFO|faf5bdcb-b5b5-4066-9cf4-a8f4014679a6: Claiming fa:16:3e:13:3d:11 10.100.0.19
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.748 250273 DEBUG nova.compute.manager [req-fc61e60a-e21a-4e2e-ab58-a9f56a9c1166 req-72047d61-9b4d-4221-86c7-a846a04eb78d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Received event network-vif-plugged-5a2eb60f-bd18-4a91-b5f8-eaec207ac51c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.748 250273 DEBUG oslo_concurrency.lockutils [req-fc61e60a-e21a-4e2e-ab58-a9f56a9c1166 req-72047d61-9b4d-4221-86c7-a846a04eb78d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "74fc0ca6-968a-48e2-8496-f53ebb50daf0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.749 250273 DEBUG oslo_concurrency.lockutils [req-fc61e60a-e21a-4e2e-ab58-a9f56a9c1166 req-72047d61-9b4d-4221-86c7-a846a04eb78d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "74fc0ca6-968a-48e2-8496-f53ebb50daf0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.749 250273 DEBUG oslo_concurrency.lockutils [req-fc61e60a-e21a-4e2e-ab58-a9f56a9c1166 req-72047d61-9b4d-4221-86c7-a846a04eb78d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "74fc0ca6-968a-48e2-8496-f53ebb50daf0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.749 250273 DEBUG nova.compute.manager [req-fc61e60a-e21a-4e2e-ab58-a9f56a9c1166 req-72047d61-9b4d-4221-86c7-a846a04eb78d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Processing event network-vif-plugged-5a2eb60f-bd18-4a91-b5f8-eaec207ac51c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.750 250273 DEBUG nova.compute.manager [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:13:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:25.756 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:3d:11 10.100.0.19'], port_security=['fa:16:3e:13:3d:11 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': 'b3f4a8f0-513b-4165-a2f5-3c01bac04576', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a86a6e8-5e41-414d-9717-c71aa2218873', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98c94577fcdb4c3d893898ede79ea2d4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ec17e7a3-c853-4135-b8b0-167a2ed6cea5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9398d99-046a-4d16-8858-45cb48df3c81, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=faf5bdcb-b5b5-4066-9cf4-a8f4014679a6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:13:25 np0005593232 NetworkManager[49057]: <info>  [1769163205.7597] device (tapfaf5bdcb-b5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:13:25 np0005593232 NetworkManager[49057]: <info>  [1769163205.7607] device (tapfaf5bdcb-b5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.763 250273 DEBUG nova.virt.libvirt.driver [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.769 250273 INFO nova.virt.libvirt.driver [-] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Instance spawned successfully.#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.769 250273 DEBUG nova.virt.libvirt.driver [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.774 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:13:25 np0005593232 systemd-machined[215836]: New machine qemu-58-instance-00000089.
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.789 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:25 np0005593232 systemd[1]: Started Virtual Machine qemu-58-instance-00000089.
Jan 23 05:13:25 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:25Z|00475|binding|INFO|Setting lport faf5bdcb-b5b5-4066-9cf4-a8f4014679a6 ovn-installed in OVS
Jan 23 05:13:25 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:25Z|00476|binding|INFO|Setting lport faf5bdcb-b5b5-4066-9cf4-a8f4014679a6 up in Southbound
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.792 250273 DEBUG nova.virt.libvirt.driver [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.793 250273 DEBUG nova.virt.libvirt.driver [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.794 250273 DEBUG nova.virt.libvirt.driver [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.794 250273 DEBUG nova.virt.libvirt.driver [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.795 250273 DEBUG nova.virt.libvirt.driver [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.795 250273 DEBUG nova.virt.libvirt.driver [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.799 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.801 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163205.7628415, 74fc0ca6-968a-48e2-8496-f53ebb50daf0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.801 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.837 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.853 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.877 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.883 250273 INFO nova.compute.manager [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Took 11.53 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.883 250273 DEBUG nova.compute.manager [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.965 250273 INFO nova.compute.manager [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Took 13.30 seconds to build instance.#033[00m
Jan 23 05:13:25 np0005593232 nova_compute[250269]: 2026-01-23 10:13:25.986 250273 DEBUG oslo_concurrency.lockutils [None req-dee1a3d3-d332-49cd-b6c0-53733aba0368 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "74fc0ca6-968a-48e2-8496-f53ebb50daf0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.430s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:13:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:26.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:26 np0005593232 podman[338267]: 2026-01-23 10:13:26.079814258 +0000 UTC m=+0.048741726 container create db7c62d66f966585954b076a7cfbe6b68e6010c13c467bb6c0db3229951da360 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:13:26 np0005593232 systemd[1]: Started libpod-conmon-db7c62d66f966585954b076a7cfbe6b68e6010c13c467bb6c0db3229951da360.scope.
Jan 23 05:13:26 np0005593232 podman[338267]: 2026-01-23 10:13:26.054710965 +0000 UTC m=+0.023638423 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.159 250273 DEBUG nova.network.neutron [req-604854ac-73b7-4e8c-9a59-fcb1094f9114 req-5254e40e-0b2c-427c-b1e6-f8e29b618f24 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Updated VIF entry in instance network info cache for port faf5bdcb-b5b5-4066-9cf4-a8f4014679a6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.159 250273 DEBUG nova.network.neutron [req-604854ac-73b7-4e8c-9a59-fcb1094f9114 req-5254e40e-0b2c-427c-b1e6-f8e29b618f24 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Updating instance_info_cache with network_info: [{"id": "faf5bdcb-b5b5-4066-9cf4-a8f4014679a6", "address": "fa:16:3e:13:3d:11", "network": {"id": "0a86a6e8-5e41-414d-9717-c71aa2218873", "bridge": "br-int", "label": "tempest-network-smoke--1158487566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf5bdcb-b5", "ovs_interfaceid": "faf5bdcb-b5b5-4066-9cf4-a8f4014679a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:13:26 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:13:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f892184163c8f753e3bdec484b585c9640cdf1b6bb02a0cce60b7c09b43256b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:13:26 np0005593232 podman[338267]: 2026-01-23 10:13:26.179279615 +0000 UTC m=+0.148207093 container init db7c62d66f966585954b076a7cfbe6b68e6010c13c467bb6c0db3229951da360 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 23 05:13:26 np0005593232 podman[338267]: 2026-01-23 10:13:26.184219595 +0000 UTC m=+0.153147053 container start db7c62d66f966585954b076a7cfbe6b68e6010c13c467bb6c0db3229951da360 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.188 250273 DEBUG oslo_concurrency.lockutils [req-604854ac-73b7-4e8c-9a59-fcb1094f9114 req-5254e40e-0b2c-427c-b1e6-f8e29b618f24 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-b3f4a8f0-513b-4165-a2f5-3c01bac04576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:13:26 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[338282]: [NOTICE]   (338286) : New worker (338288) forked
Jan 23 05:13:26 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[338282]: [NOTICE]   (338286) : Loading success.
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.244 161902 INFO neutron.agent.ovn.metadata.agent [-] Port faf5bdcb-b5b5-4066-9cf4-a8f4014679a6 in datapath 0a86a6e8-5e41-414d-9717-c71aa2218873 unbound from our chassis#033[00m
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.247 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0a86a6e8-5e41-414d-9717-c71aa2218873#033[00m
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.259 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[57136073-a51a-4aa9-b76c-4af03c475c1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.260 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0a86a6e8-51 in ovnmeta-0a86a6e8-5e41-414d-9717-c71aa2218873 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.264 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0a86a6e8-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.264 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c8d7c61c-bb16-4217-994b-6395f02a3b07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.265 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[df51172a-864d-4d34-b908-10d80c11aa06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.283 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[dfb12e37-e1ef-4d74-bae4-8fd6b8edaf4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.298 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[33baaeb5-02d7-41e2-80c4-53ae078793a0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.332 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[f06b6da9-ca65-4f3e-b2b3-1d45130f8ccf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:26 np0005593232 NetworkManager[49057]: <info>  [1769163206.3404] manager: (tap0a86a6e8-50): new Veth device (/org/freedesktop/NetworkManager/Devices/232)
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.339 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[34ee56de-836b-4c8b-8dd8-45470b9cb1e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.377 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc5dcc3-c694-4eee-aa13-e87383045a9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.380 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[ad0a7c7a-a069-40c2-b97a-ee4b752c7ed7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:26 np0005593232 NetworkManager[49057]: <info>  [1769163206.4059] device (tap0a86a6e8-50): carrier: link connected
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.412 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[e373748b-fa82-4fa7-bc0e-aee281b5cb5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.430 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f797d550-ac9f-4649-bb6d-61154ca63142]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a86a6e8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:f1:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 144], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711954, 'reachable_time': 31863, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338341, 'error': None, 'target': 'ovnmeta-0a86a6e8-5e41-414d-9717-c71aa2218873', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.451 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[59414876-adeb-43cc-8b3e-6d0ba58ce8f3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefe:f1c1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711954, 'tstamp': 711954}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338344, 'error': None, 'target': 'ovnmeta-0a86a6e8-5e41-414d-9717-c71aa2218873', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.474 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c110ae-6313-4211-a7e9-14b41813933b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a86a6e8-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:f1:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 144], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711954, 'reachable_time': 31863, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 338349, 'error': None, 'target': 'ovnmeta-0a86a6e8-5e41-414d-9717-c71aa2218873', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.511 250273 DEBUG nova.compute.manager [req-30415dab-232a-4fdf-838b-ff4701ba2658 req-5be6bb71-a634-46ea-a16d-b6eaec546aac 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Received event network-vif-plugged-faf5bdcb-b5b5-4066-9cf4-a8f4014679a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.511 250273 DEBUG oslo_concurrency.lockutils [req-30415dab-232a-4fdf-838b-ff4701ba2658 req-5be6bb71-a634-46ea-a16d-b6eaec546aac 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "b3f4a8f0-513b-4165-a2f5-3c01bac04576-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.514 250273 DEBUG oslo_concurrency.lockutils [req-30415dab-232a-4fdf-838b-ff4701ba2658 req-5be6bb71-a634-46ea-a16d-b6eaec546aac 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b3f4a8f0-513b-4165-a2f5-3c01bac04576-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.514 250273 DEBUG oslo_concurrency.lockutils [req-30415dab-232a-4fdf-838b-ff4701ba2658 req-5be6bb71-a634-46ea-a16d-b6eaec546aac 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b3f4a8f0-513b-4165-a2f5-3c01bac04576-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.514 250273 DEBUG nova.compute.manager [req-30415dab-232a-4fdf-838b-ff4701ba2658 req-5be6bb71-a634-46ea-a16d-b6eaec546aac 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Processing event network-vif-plugged-faf5bdcb-b5b5-4066-9cf4-a8f4014679a6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.518 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[09135190-0592-4263-bb1f-a5e214b34a35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.571 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163206.5710733, b3f4a8f0-513b-4165-a2f5-3c01bac04576 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.571 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] VM Started (Lifecycle Event)#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.573 250273 DEBUG nova.compute.manager [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.576 250273 DEBUG nova.virt.libvirt.driver [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.579 250273 INFO nova.virt.libvirt.driver [-] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Instance spawned successfully.#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.579 250273 DEBUG nova.virt.libvirt.driver [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.587 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b01594-1143-4a81-8ce1-0a19e0c645a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.588 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a86a6e8-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.589 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.589 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a86a6e8-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.591 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:26 np0005593232 NetworkManager[49057]: <info>  [1769163206.5921] manager: (tap0a86a6e8-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/233)
Jan 23 05:13:26 np0005593232 kernel: tap0a86a6e8-50: entered promiscuous mode
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.596 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0a86a6e8-50, col_values=(('external_ids', {'iface-id': '3090087a-6e6f-4e48-98d5-06ed0571b2cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:13:26 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:26Z|00477|binding|INFO|Releasing lport 3090087a-6e6f-4e48-98d5-06ed0571b2cb from this chassis (sb_readonly=0)
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.598 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.599 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.601 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0a86a6e8-5e41-414d-9717-c71aa2218873.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0a86a6e8-5e41-414d-9717-c71aa2218873.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.602 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[17e3f331-3cd9-4437-830b-fbdbcde3f7fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.603 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-0a86a6e8-5e41-414d-9717-c71aa2218873
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/0a86a6e8-5e41-414d-9717-c71aa2218873.pid.haproxy
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 0a86a6e8-5e41-414d-9717-c71aa2218873
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:13:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:26.604 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0a86a6e8-5e41-414d-9717-c71aa2218873', 'env', 'PROCESS_TAG=haproxy-0a86a6e8-5e41-414d-9717-c71aa2218873', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0a86a6e8-5e41-414d-9717-c71aa2218873.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.615 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.713 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.727 250273 DEBUG nova.virt.libvirt.driver [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.727 250273 DEBUG nova.virt.libvirt.driver [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.728 250273 DEBUG nova.virt.libvirt.driver [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.728 250273 DEBUG nova.virt.libvirt.driver [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.728 250273 DEBUG nova.virt.libvirt.driver [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.729 250273 DEBUG nova.virt.libvirt.driver [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.733 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.783 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.784 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163206.5731142, b3f4a8f0-513b-4165-a2f5-3c01bac04576 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.790 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:13:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:13:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:26.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.879 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.937 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.941 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163206.575736, b3f4a8f0-513b-4165-a2f5-3c01bac04576 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.941 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:13:26 np0005593232 podman[338380]: 2026-01-23 10:13:26.98505688 +0000 UTC m=+0.057662879 container create 1bebf489aaa2a4a28747f64534a2e8d123e430b80e76cb205d015eea6c1a34f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0a86a6e8-5e41-414d-9717-c71aa2218873, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.987 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:13:26 np0005593232 nova_compute[250269]: 2026-01-23 10:13:26.993 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:13:27 np0005593232 nova_compute[250269]: 2026-01-23 10:13:27.001 250273 INFO nova.compute.manager [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Took 13.60 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:13:27 np0005593232 nova_compute[250269]: 2026-01-23 10:13:27.001 250273 DEBUG nova.compute.manager [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:13:27 np0005593232 systemd[1]: Started libpod-conmon-1bebf489aaa2a4a28747f64534a2e8d123e430b80e76cb205d015eea6c1a34f0.scope.
Jan 23 05:13:27 np0005593232 nova_compute[250269]: 2026-01-23 10:13:27.052 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:13:27 np0005593232 podman[338380]: 2026-01-23 10:13:26.95935706 +0000 UTC m=+0.031963069 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:13:27 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:13:27 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d14ee09bff3046b8011b6b05f4032873b011537d8703a62d387203867f388591/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:13:27 np0005593232 podman[338380]: 2026-01-23 10:13:27.083810316 +0000 UTC m=+0.156416325 container init 1bebf489aaa2a4a28747f64534a2e8d123e430b80e76cb205d015eea6c1a34f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0a86a6e8-5e41-414d-9717-c71aa2218873, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 05:13:27 np0005593232 podman[338380]: 2026-01-23 10:13:27.089724044 +0000 UTC m=+0.162330033 container start 1bebf489aaa2a4a28747f64534a2e8d123e430b80e76cb205d015eea6c1a34f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0a86a6e8-5e41-414d-9717-c71aa2218873, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:13:27 np0005593232 nova_compute[250269]: 2026-01-23 10:13:27.106 250273 INFO nova.compute.manager [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Took 14.74 seconds to build instance.#033[00m
Jan 23 05:13:27 np0005593232 neutron-haproxy-ovnmeta-0a86a6e8-5e41-414d-9717-c71aa2218873[338396]: [NOTICE]   (338400) : New worker (338402) forked
Jan 23 05:13:27 np0005593232 neutron-haproxy-ovnmeta-0a86a6e8-5e41-414d-9717-c71aa2218873[338396]: [NOTICE]   (338400) : Loading success.
Jan 23 05:13:27 np0005593232 nova_compute[250269]: 2026-01-23 10:13:27.176 250273 DEBUG oslo_concurrency.lockutils [None req-2ee86188-2ac8-42b4-bad8-41db317a6680 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "b3f4a8f0-513b-4165-a2f5-3c01bac04576" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:13:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2502: 321 pgs: 321 active+clean; 577 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 4.7 MiB/s wr, 334 op/s
Jan 23 05:13:27 np0005593232 nova_compute[250269]: 2026-01-23 10:13:27.827 250273 INFO nova.compute.manager [None req-2448ee4b-048b-4fed-bb8a-8cbb534a3f7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Pausing#033[00m
Jan 23 05:13:27 np0005593232 nova_compute[250269]: 2026-01-23 10:13:27.828 250273 DEBUG nova.objects.instance [None req-2448ee4b-048b-4fed-bb8a-8cbb534a3f7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'flavor' on Instance uuid 74fc0ca6-968a-48e2-8496-f53ebb50daf0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:13:27 np0005593232 nova_compute[250269]: 2026-01-23 10:13:27.868 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163207.868638, 74fc0ca6-968a-48e2-8496-f53ebb50daf0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:13:27 np0005593232 nova_compute[250269]: 2026-01-23 10:13:27.869 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:13:27 np0005593232 nova_compute[250269]: 2026-01-23 10:13:27.871 250273 DEBUG nova.compute.manager [None req-2448ee4b-048b-4fed-bb8a-8cbb534a3f7f aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:13:27 np0005593232 nova_compute[250269]: 2026-01-23 10:13:27.887 250273 DEBUG nova.compute.manager [req-86150e43-9bf1-4c70-99cd-c5918c605a18 req-13479fe3-815f-4053-9ae6-3727b37be2d0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Received event network-vif-plugged-5a2eb60f-bd18-4a91-b5f8-eaec207ac51c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:13:27 np0005593232 nova_compute[250269]: 2026-01-23 10:13:27.887 250273 DEBUG oslo_concurrency.lockutils [req-86150e43-9bf1-4c70-99cd-c5918c605a18 req-13479fe3-815f-4053-9ae6-3727b37be2d0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "74fc0ca6-968a-48e2-8496-f53ebb50daf0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:13:27 np0005593232 nova_compute[250269]: 2026-01-23 10:13:27.888 250273 DEBUG oslo_concurrency.lockutils [req-86150e43-9bf1-4c70-99cd-c5918c605a18 req-13479fe3-815f-4053-9ae6-3727b37be2d0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "74fc0ca6-968a-48e2-8496-f53ebb50daf0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:13:27 np0005593232 nova_compute[250269]: 2026-01-23 10:13:27.888 250273 DEBUG oslo_concurrency.lockutils [req-86150e43-9bf1-4c70-99cd-c5918c605a18 req-13479fe3-815f-4053-9ae6-3727b37be2d0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "74fc0ca6-968a-48e2-8496-f53ebb50daf0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:13:27 np0005593232 nova_compute[250269]: 2026-01-23 10:13:27.888 250273 DEBUG nova.compute.manager [req-86150e43-9bf1-4c70-99cd-c5918c605a18 req-13479fe3-815f-4053-9ae6-3727b37be2d0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] No waiting events found dispatching network-vif-plugged-5a2eb60f-bd18-4a91-b5f8-eaec207ac51c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:13:27 np0005593232 nova_compute[250269]: 2026-01-23 10:13:27.888 250273 WARNING nova.compute.manager [req-86150e43-9bf1-4c70-99cd-c5918c605a18 req-13479fe3-815f-4053-9ae6-3727b37be2d0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Received unexpected event network-vif-plugged-5a2eb60f-bd18-4a91-b5f8-eaec207ac51c for instance with vm_state active and task_state pausing.#033[00m
Jan 23 05:13:27 np0005593232 nova_compute[250269]: 2026-01-23 10:13:27.900 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:13:27 np0005593232 nova_compute[250269]: 2026-01-23 10:13:27.904 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:13:27 np0005593232 nova_compute[250269]: 2026-01-23 10:13:27.935 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Jan 23 05:13:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:13:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:28.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:13:28 np0005593232 nova_compute[250269]: 2026-01-23 10:13:28.627 250273 DEBUG nova.compute.manager [req-7d5f83c2-453c-435c-ae92-b848f1fe77d4 req-bef36363-7368-4483-9a1f-1d323c64d047 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Received event network-vif-plugged-faf5bdcb-b5b5-4066-9cf4-a8f4014679a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:13:28 np0005593232 nova_compute[250269]: 2026-01-23 10:13:28.628 250273 DEBUG oslo_concurrency.lockutils [req-7d5f83c2-453c-435c-ae92-b848f1fe77d4 req-bef36363-7368-4483-9a1f-1d323c64d047 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "b3f4a8f0-513b-4165-a2f5-3c01bac04576-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:13:28 np0005593232 nova_compute[250269]: 2026-01-23 10:13:28.628 250273 DEBUG oslo_concurrency.lockutils [req-7d5f83c2-453c-435c-ae92-b848f1fe77d4 req-bef36363-7368-4483-9a1f-1d323c64d047 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b3f4a8f0-513b-4165-a2f5-3c01bac04576-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:13:28 np0005593232 nova_compute[250269]: 2026-01-23 10:13:28.628 250273 DEBUG oslo_concurrency.lockutils [req-7d5f83c2-453c-435c-ae92-b848f1fe77d4 req-bef36363-7368-4483-9a1f-1d323c64d047 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b3f4a8f0-513b-4165-a2f5-3c01bac04576-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:13:28 np0005593232 nova_compute[250269]: 2026-01-23 10:13:28.629 250273 DEBUG nova.compute.manager [req-7d5f83c2-453c-435c-ae92-b848f1fe77d4 req-bef36363-7368-4483-9a1f-1d323c64d047 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] No waiting events found dispatching network-vif-plugged-faf5bdcb-b5b5-4066-9cf4-a8f4014679a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:13:28 np0005593232 nova_compute[250269]: 2026-01-23 10:13:28.629 250273 WARNING nova.compute.manager [req-7d5f83c2-453c-435c-ae92-b848f1fe77d4 req-bef36363-7368-4483-9a1f-1d323c64d047 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Received unexpected event network-vif-plugged-faf5bdcb-b5b5-4066-9cf4-a8f4014679a6 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:13:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:28.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:29 np0005593232 nova_compute[250269]: 2026-01-23 10:13:29.023 250273 INFO nova.compute.manager [None req-b3f4e444-b7e6-4bdd-a4b1-1a886258e23b fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Unpausing#033[00m
Jan 23 05:13:29 np0005593232 nova_compute[250269]: 2026-01-23 10:13:29.026 250273 DEBUG nova.objects.instance [None req-b3f4e444-b7e6-4bdd-a4b1-1a886258e23b fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lazy-loading 'flavor' on Instance uuid 8056a321-13d3-4dd8-bb33-70c832c17ac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:13:29 np0005593232 nova_compute[250269]: 2026-01-23 10:13:29.061 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163209.0595765, 8056a321-13d3-4dd8-bb33-70c832c17ac1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:13:29 np0005593232 nova_compute[250269]: 2026-01-23 10:13:29.063 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:13:29 np0005593232 virtqemud[249592]: argument unsupported: QEMU guest agent is not configured
Jan 23 05:13:29 np0005593232 nova_compute[250269]: 2026-01-23 10:13:29.068 250273 DEBUG nova.virt.libvirt.guest [None req-b3f4e444-b7e6-4bdd-a4b1-1a886258e23b fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 23 05:13:29 np0005593232 nova_compute[250269]: 2026-01-23 10:13:29.068 250273 DEBUG nova.compute.manager [None req-b3f4e444-b7e6-4bdd-a4b1-1a886258e23b fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:13:29 np0005593232 nova_compute[250269]: 2026-01-23 10:13:29.086 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:13:29 np0005593232 nova_compute[250269]: 2026-01-23 10:13:29.094 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:13:29 np0005593232 nova_compute[250269]: 2026-01-23 10:13:29.125 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] During sync_power_state the instance has a pending task (unpausing). Skip.#033[00m
Jan 23 05:13:29 np0005593232 nova_compute[250269]: 2026-01-23 10:13:29.311 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2503: 321 pgs: 321 active+clean; 577 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.4 MiB/s rd, 3.6 MiB/s wr, 343 op/s
Jan 23 05:13:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:30.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:30 np0005593232 podman[338463]: 2026-01-23 10:13:30.417644085 +0000 UTC m=+0.070831613 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 05:13:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:30.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2504: 321 pgs: 321 active+clean; 577 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 139 KiB/s wr, 191 op/s
Jan 23 05:13:31 np0005593232 nova_compute[250269]: 2026-01-23 10:13:31.883 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:13:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:32.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:32 np0005593232 nova_compute[250269]: 2026-01-23 10:13:32.841 250273 DEBUG oslo_concurrency.lockutils [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "74fc0ca6-968a-48e2-8496-f53ebb50daf0" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:13:32 np0005593232 nova_compute[250269]: 2026-01-23 10:13:32.842 250273 DEBUG oslo_concurrency.lockutils [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "74fc0ca6-968a-48e2-8496-f53ebb50daf0" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:13:32 np0005593232 nova_compute[250269]: 2026-01-23 10:13:32.842 250273 INFO nova.compute.manager [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Shelving#033[00m
Jan 23 05:13:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:32.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:32 np0005593232 kernel: tap5a2eb60f-bd (unregistering): left promiscuous mode
Jan 23 05:13:32 np0005593232 NetworkManager[49057]: <info>  [1769163212.9187] device (tap5a2eb60f-bd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:13:32 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:32Z|00478|binding|INFO|Releasing lport 5a2eb60f-bd18-4a91-b5f8-eaec207ac51c from this chassis (sb_readonly=0)
Jan 23 05:13:32 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:32Z|00479|binding|INFO|Setting lport 5a2eb60f-bd18-4a91-b5f8-eaec207ac51c down in Southbound
Jan 23 05:13:32 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:32Z|00480|binding|INFO|Removing iface tap5a2eb60f-bd ovn-installed in OVS
Jan 23 05:13:32 np0005593232 nova_compute[250269]: 2026-01-23 10:13:32.930 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:32.936 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:7a:35 10.100.0.5'], port_security=['fa:16:3e:d9:7a:35 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '74fc0ca6-968a-48e2-8496-f53ebb50daf0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8d9599b4-8855-4310-af02-cdd058438f7d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9dd869ce76e44fc8a82b8bbee1654d33', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b5b72284-9167-4768-aa53-98b2ad243e70', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=875f4baa-cb85-49ca-8f02-78715d351fdb, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=5a2eb60f-bd18-4a91-b5f8-eaec207ac51c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:13:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:32.939 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 5a2eb60f-bd18-4a91-b5f8-eaec207ac51c in datapath 8d9599b4-8855-4310-af02-cdd058438f7d unbound from our chassis#033[00m
Jan 23 05:13:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:32.942 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8d9599b4-8855-4310-af02-cdd058438f7d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:13:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:32.944 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5fd474d8-7c41-42c2-bb08-f1687e7d7903]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:32.944 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d namespace which is not needed anymore#033[00m
Jan 23 05:13:32 np0005593232 nova_compute[250269]: 2026-01-23 10:13:32.963 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:32 np0005593232 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Jan 23 05:13:32 np0005593232 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d0000008a.scope: Consumed 2.520s CPU time.
Jan 23 05:13:32 np0005593232 systemd-machined[215836]: Machine qemu-57-instance-0000008a terminated.
Jan 23 05:13:33 np0005593232 nova_compute[250269]: 2026-01-23 10:13:33.100 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:33 np0005593232 nova_compute[250269]: 2026-01-23 10:13:33.106 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:33 np0005593232 nova_compute[250269]: 2026-01-23 10:13:33.115 250273 INFO nova.virt.libvirt.driver [-] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Instance destroyed successfully.#033[00m
Jan 23 05:13:33 np0005593232 nova_compute[250269]: 2026-01-23 10:13:33.115 250273 DEBUG nova.objects.instance [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'numa_topology' on Instance uuid 74fc0ca6-968a-48e2-8496-f53ebb50daf0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:13:33 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[338282]: [NOTICE]   (338286) : haproxy version is 2.8.14-c23fe91
Jan 23 05:13:33 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[338282]: [NOTICE]   (338286) : path to executable is /usr/sbin/haproxy
Jan 23 05:13:33 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[338282]: [WARNING]  (338286) : Exiting Master process...
Jan 23 05:13:33 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[338282]: [ALERT]    (338286) : Current worker (338288) exited with code 143 (Terminated)
Jan 23 05:13:33 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[338282]: [WARNING]  (338286) : All workers exited. Exiting... (0)
Jan 23 05:13:33 np0005593232 systemd[1]: libpod-db7c62d66f966585954b076a7cfbe6b68e6010c13c467bb6c0db3229951da360.scope: Deactivated successfully.
Jan 23 05:13:33 np0005593232 podman[338506]: 2026-01-23 10:13:33.131156588 +0000 UTC m=+0.060693356 container died db7c62d66f966585954b076a7cfbe6b68e6010c13c467bb6c0db3229951da360 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 23 05:13:33 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-db7c62d66f966585954b076a7cfbe6b68e6010c13c467bb6c0db3229951da360-userdata-shm.mount: Deactivated successfully.
Jan 23 05:13:33 np0005593232 systemd[1]: var-lib-containers-storage-overlay-2f892184163c8f753e3bdec484b585c9640cdf1b6bb02a0cce60b7c09b43256b-merged.mount: Deactivated successfully.
Jan 23 05:13:33 np0005593232 podman[338506]: 2026-01-23 10:13:33.177232187 +0000 UTC m=+0.106768935 container cleanup db7c62d66f966585954b076a7cfbe6b68e6010c13c467bb6c0db3229951da360 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 05:13:33 np0005593232 systemd[1]: libpod-conmon-db7c62d66f966585954b076a7cfbe6b68e6010c13c467bb6c0db3229951da360.scope: Deactivated successfully.
Jan 23 05:13:33 np0005593232 nova_compute[250269]: 2026-01-23 10:13:33.225 250273 DEBUG nova.compute.manager [req-e224c16b-2a39-4468-83fe-d73a83a7c578 req-c0ee8d93-d09b-42e4-82fe-3a4e0eeebd69 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Received event network-vif-unplugged-5a2eb60f-bd18-4a91-b5f8-eaec207ac51c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:13:33 np0005593232 nova_compute[250269]: 2026-01-23 10:13:33.227 250273 DEBUG oslo_concurrency.lockutils [req-e224c16b-2a39-4468-83fe-d73a83a7c578 req-c0ee8d93-d09b-42e4-82fe-3a4e0eeebd69 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "74fc0ca6-968a-48e2-8496-f53ebb50daf0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:13:33 np0005593232 nova_compute[250269]: 2026-01-23 10:13:33.229 250273 DEBUG oslo_concurrency.lockutils [req-e224c16b-2a39-4468-83fe-d73a83a7c578 req-c0ee8d93-d09b-42e4-82fe-3a4e0eeebd69 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "74fc0ca6-968a-48e2-8496-f53ebb50daf0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:13:33 np0005593232 nova_compute[250269]: 2026-01-23 10:13:33.229 250273 DEBUG oslo_concurrency.lockutils [req-e224c16b-2a39-4468-83fe-d73a83a7c578 req-c0ee8d93-d09b-42e4-82fe-3a4e0eeebd69 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "74fc0ca6-968a-48e2-8496-f53ebb50daf0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:13:33 np0005593232 nova_compute[250269]: 2026-01-23 10:13:33.229 250273 DEBUG nova.compute.manager [req-e224c16b-2a39-4468-83fe-d73a83a7c578 req-c0ee8d93-d09b-42e4-82fe-3a4e0eeebd69 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] No waiting events found dispatching network-vif-unplugged-5a2eb60f-bd18-4a91-b5f8-eaec207ac51c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:13:33 np0005593232 nova_compute[250269]: 2026-01-23 10:13:33.229 250273 WARNING nova.compute.manager [req-e224c16b-2a39-4468-83fe-d73a83a7c578 req-c0ee8d93-d09b-42e4-82fe-3a4e0eeebd69 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Received unexpected event network-vif-unplugged-5a2eb60f-bd18-4a91-b5f8-eaec207ac51c for instance with vm_state paused and task_state shelving.#033[00m
Jan 23 05:13:33 np0005593232 podman[338543]: 2026-01-23 10:13:33.240620698 +0000 UTC m=+0.040940724 container remove db7c62d66f966585954b076a7cfbe6b68e6010c13c467bb6c0db3229951da360 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 05:13:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:33.247 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[acaa60c7-89db-460b-82b8-169c7a1173ce]: (4, ('Fri Jan 23 10:13:33 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d (db7c62d66f966585954b076a7cfbe6b68e6010c13c467bb6c0db3229951da360)\ndb7c62d66f966585954b076a7cfbe6b68e6010c13c467bb6c0db3229951da360\nFri Jan 23 10:13:33 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d (db7c62d66f966585954b076a7cfbe6b68e6010c13c467bb6c0db3229951da360)\ndb7c62d66f966585954b076a7cfbe6b68e6010c13c467bb6c0db3229951da360\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:33.250 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[cbf5f536-18c1-45e0-99ad-c25153052691]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:33.252 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d9599b4-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:13:33 np0005593232 nova_compute[250269]: 2026-01-23 10:13:33.255 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:33 np0005593232 kernel: tap8d9599b4-80: left promiscuous mode
Jan 23 05:13:33 np0005593232 nova_compute[250269]: 2026-01-23 10:13:33.275 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:33.280 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0339b166-b21f-4f3d-99f1-ab0e7e26ca90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:33.296 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a2fab01f-3efc-4f5a-9f85-bf0fa8b29c7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:33.299 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[200a1a10-c7af-4cc2-a9d2-5f056e797399]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:33.316 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b5130a-2c46-4132-b6a9-209670ac7170]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711851, 'reachable_time': 24317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338561, 'error': None, 'target': 'ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:33 np0005593232 systemd[1]: run-netns-ovnmeta\x2d8d9599b4\x2d8855\x2d4310\x2daf02\x2dcdd058438f7d.mount: Deactivated successfully.
Jan 23 05:13:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:33.325 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:13:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:33.326 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[a4b59e27-148f-4c5e-8216-46e3ecb0182e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2505: 321 pgs: 321 active+clean; 577 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.8 MiB/s rd, 139 KiB/s wr, 239 op/s
Jan 23 05:13:33 np0005593232 nova_compute[250269]: 2026-01-23 10:13:33.787 250273 INFO nova.virt.libvirt.driver [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Beginning cold snapshot process#033[00m
Jan 23 05:13:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:13:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:34.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:13:34 np0005593232 nova_compute[250269]: 2026-01-23 10:13:34.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:13:34 np0005593232 nova_compute[250269]: 2026-01-23 10:13:34.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:13:34 np0005593232 nova_compute[250269]: 2026-01-23 10:13:34.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 23 05:13:34 np0005593232 nova_compute[250269]: 2026-01-23 10:13:34.319 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:34 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:34Z|00481|binding|INFO|Releasing lport 1788b5e6-601b-4e3d-a584-c0138c3308f6 from this chassis (sb_readonly=0)
Jan 23 05:13:34 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:34Z|00482|binding|INFO|Releasing lport 3090087a-6e6f-4e48-98d5-06ed0571b2cb from this chassis (sb_readonly=0)
Jan 23 05:13:34 np0005593232 nova_compute[250269]: 2026-01-23 10:13:34.462 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:34 np0005593232 nova_compute[250269]: 2026-01-23 10:13:34.472 250273 DEBUG nova.virt.libvirt.imagebackend [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] No parent info for 84c0ef19-7f67-4bd3-95d8-507c3e0942ed; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 23 05:13:34 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:34Z|00483|binding|INFO|Releasing lport 1788b5e6-601b-4e3d-a584-c0138c3308f6 from this chassis (sb_readonly=0)
Jan 23 05:13:34 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:34Z|00484|binding|INFO|Releasing lport 3090087a-6e6f-4e48-98d5-06ed0571b2cb from this chassis (sb_readonly=0)
Jan 23 05:13:34 np0005593232 nova_compute[250269]: 2026-01-23 10:13:34.631 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:34 np0005593232 nova_compute[250269]: 2026-01-23 10:13:34.807 250273 DEBUG nova.storage.rbd_utils [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] creating snapshot(fd0e74e4a7f54bda99ada8c7f505300c) on rbd image(74fc0ca6-968a-48e2-8496-f53ebb50daf0_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 05:13:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:34.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Jan 23 05:13:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Jan 23 05:13:35 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Jan 23 05:13:35 np0005593232 nova_compute[250269]: 2026-01-23 10:13:35.161 250273 DEBUG nova.storage.rbd_utils [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] cloning vms/74fc0ca6-968a-48e2-8496-f53ebb50daf0_disk@fd0e74e4a7f54bda99ada8c7f505300c to images/6bd0071a-f3ee-4720-bcc3-909d56e7c877 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 23 05:13:35 np0005593232 nova_compute[250269]: 2026-01-23 10:13:35.298 250273 DEBUG nova.storage.rbd_utils [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] flattening images/6bd0071a-f3ee-4720-bcc3-909d56e7c877 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 23 05:13:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2507: 321 pgs: 321 active+clean; 577 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.1 MiB/s rd, 46 KiB/s wr, 216 op/s
Jan 23 05:13:35 np0005593232 nova_compute[250269]: 2026-01-23 10:13:35.485 250273 DEBUG nova.compute.manager [req-08cf74b8-bce6-4087-ac41-b3a75d4eb291 req-65e28472-c205-4e24-8c55-cf2f34d32f08 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Received event network-vif-plugged-5a2eb60f-bd18-4a91-b5f8-eaec207ac51c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:13:35 np0005593232 nova_compute[250269]: 2026-01-23 10:13:35.485 250273 DEBUG oslo_concurrency.lockutils [req-08cf74b8-bce6-4087-ac41-b3a75d4eb291 req-65e28472-c205-4e24-8c55-cf2f34d32f08 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "74fc0ca6-968a-48e2-8496-f53ebb50daf0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:13:35 np0005593232 nova_compute[250269]: 2026-01-23 10:13:35.486 250273 DEBUG oslo_concurrency.lockutils [req-08cf74b8-bce6-4087-ac41-b3a75d4eb291 req-65e28472-c205-4e24-8c55-cf2f34d32f08 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "74fc0ca6-968a-48e2-8496-f53ebb50daf0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:13:35 np0005593232 nova_compute[250269]: 2026-01-23 10:13:35.486 250273 DEBUG oslo_concurrency.lockutils [req-08cf74b8-bce6-4087-ac41-b3a75d4eb291 req-65e28472-c205-4e24-8c55-cf2f34d32f08 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "74fc0ca6-968a-48e2-8496-f53ebb50daf0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:13:35 np0005593232 nova_compute[250269]: 2026-01-23 10:13:35.486 250273 DEBUG nova.compute.manager [req-08cf74b8-bce6-4087-ac41-b3a75d4eb291 req-65e28472-c205-4e24-8c55-cf2f34d32f08 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] No waiting events found dispatching network-vif-plugged-5a2eb60f-bd18-4a91-b5f8-eaec207ac51c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:13:35 np0005593232 nova_compute[250269]: 2026-01-23 10:13:35.487 250273 WARNING nova.compute.manager [req-08cf74b8-bce6-4087-ac41-b3a75d4eb291 req-65e28472-c205-4e24-8c55-cf2f34d32f08 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Received unexpected event network-vif-plugged-5a2eb60f-bd18-4a91-b5f8-eaec207ac51c for instance with vm_state paused and task_state shelving_image_uploading.#033[00m
Jan 23 05:13:35 np0005593232 nova_compute[250269]: 2026-01-23 10:13:35.576 250273 DEBUG nova.storage.rbd_utils [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] removing snapshot(fd0e74e4a7f54bda99ada8c7f505300c) on rbd image(74fc0ca6-968a-48e2-8496-f53ebb50daf0_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 23 05:13:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:36.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Jan 23 05:13:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Jan 23 05:13:36 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Jan 23 05:13:36 np0005593232 nova_compute[250269]: 2026-01-23 10:13:36.182 250273 DEBUG nova.storage.rbd_utils [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] creating snapshot(snap) on rbd image(6bd0071a-f3ee-4720-bcc3-909d56e7c877) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 05:13:36 np0005593232 nova_compute[250269]: 2026-01-23 10:13:36.315 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:13:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:13:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:36.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:13:36 np0005593232 nova_compute[250269]: 2026-01-23 10:13:36.883 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:13:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Jan 23 05:13:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Jan 23 05:13:37 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Jan 23 05:13:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:13:37
Jan 23 05:13:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:13:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:13:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'vms', 'volumes', '.mgr', 'default.rgw.control', 'images', 'cephfs.cephfs.meta']
Jan 23 05:13:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:13:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2510: 321 pgs: 321 active+clean; 601 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.8 MiB/s rd, 2.0 MiB/s wr, 229 op/s
Jan 23 05:13:37 np0005593232 nova_compute[250269]: 2026-01-23 10:13:37.580 250273 INFO nova.compute.manager [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Rescuing#033[00m
Jan 23 05:13:37 np0005593232 nova_compute[250269]: 2026-01-23 10:13:37.581 250273 DEBUG oslo_concurrency.lockutils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Acquiring lock "refresh_cache-8056a321-13d3-4dd8-bb33-70c832c17ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:13:37 np0005593232 nova_compute[250269]: 2026-01-23 10:13:37.581 250273 DEBUG oslo_concurrency.lockutils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Acquired lock "refresh_cache-8056a321-13d3-4dd8-bb33-70c832c17ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:13:37 np0005593232 nova_compute[250269]: 2026-01-23 10:13:37.581 250273 DEBUG nova.network.neutron [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:13:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:13:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:13:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:13:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:13:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:13:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:13:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:38.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:13:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:13:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:13:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:13:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:13:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:13:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:13:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:13:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:13:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:13:38 np0005593232 NetworkManager[49057]: <info>  [1769163218.6306] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Jan 23 05:13:38 np0005593232 nova_compute[250269]: 2026-01-23 10:13:38.630 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:38 np0005593232 NetworkManager[49057]: <info>  [1769163218.6323] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/235)
Jan 23 05:13:38 np0005593232 nova_compute[250269]: 2026-01-23 10:13:38.793 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:38 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:38Z|00485|binding|INFO|Releasing lport 1788b5e6-601b-4e3d-a584-c0138c3308f6 from this chassis (sb_readonly=0)
Jan 23 05:13:38 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:38Z|00486|binding|INFO|Releasing lport 3090087a-6e6f-4e48-98d5-06ed0571b2cb from this chassis (sb_readonly=0)
Jan 23 05:13:38 np0005593232 nova_compute[250269]: 2026-01-23 10:13:38.812 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:13:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:38.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:13:39 np0005593232 nova_compute[250269]: 2026-01-23 10:13:39.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:13:39 np0005593232 nova_compute[250269]: 2026-01-23 10:13:39.321 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2511: 321 pgs: 321 active+clean; 639 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.2 MiB/s rd, 6.4 MiB/s wr, 309 op/s
Jan 23 05:13:39 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:39Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:13:3d:11 10.100.0.19
Jan 23 05:13:39 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:39Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:13:3d:11 10.100.0.19
Jan 23 05:13:39 np0005593232 nova_compute[250269]: 2026-01-23 10:13:39.787 250273 INFO nova.virt.libvirt.driver [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Snapshot image upload complete#033[00m
Jan 23 05:13:39 np0005593232 nova_compute[250269]: 2026-01-23 10:13:39.788 250273 DEBUG nova.compute.manager [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:13:39 np0005593232 nova_compute[250269]: 2026-01-23 10:13:39.971 250273 INFO nova.compute.manager [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Shelve offloading#033[00m
Jan 23 05:13:39 np0005593232 nova_compute[250269]: 2026-01-23 10:13:39.978 250273 INFO nova.virt.libvirt.driver [-] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Instance destroyed successfully.#033[00m
Jan 23 05:13:39 np0005593232 nova_compute[250269]: 2026-01-23 10:13:39.979 250273 DEBUG nova.compute.manager [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:13:39 np0005593232 nova_compute[250269]: 2026-01-23 10:13:39.981 250273 DEBUG oslo_concurrency.lockutils [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "refresh_cache-74fc0ca6-968a-48e2-8496-f53ebb50daf0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:13:39 np0005593232 nova_compute[250269]: 2026-01-23 10:13:39.981 250273 DEBUG oslo_concurrency.lockutils [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquired lock "refresh_cache-74fc0ca6-968a-48e2-8496-f53ebb50daf0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:13:39 np0005593232 nova_compute[250269]: 2026-01-23 10:13:39.981 250273 DEBUG nova.network.neutron [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:13:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:40.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:40.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2512: 321 pgs: 321 active+clean; 639 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 6.1 MiB/s wr, 253 op/s
Jan 23 05:13:41 np0005593232 nova_compute[250269]: 2026-01-23 10:13:41.716 250273 DEBUG nova.network.neutron [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Updating instance_info_cache with network_info: [{"id": "5247b656-d92f-4246-8db1-32dd4ca770b1", "address": "fa:16:3e:7a:b9:0c", "network": {"id": "00bd3319-bfe5-4acd-b2e4-17830ee847f9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-921943436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a6ba16c4b9d49d3bc24cd7b44935d1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5247b656-d9", "ovs_interfaceid": "5247b656-d92f-4246-8db1-32dd4ca770b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:13:41 np0005593232 nova_compute[250269]: 2026-01-23 10:13:41.743 250273 DEBUG oslo_concurrency.lockutils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Releasing lock "refresh_cache-8056a321-13d3-4dd8-bb33-70c832c17ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:13:41 np0005593232 nova_compute[250269]: 2026-01-23 10:13:41.885 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:13:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Jan 23 05:13:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Jan 23 05:13:41 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Jan 23 05:13:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:42.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:42 np0005593232 nova_compute[250269]: 2026-01-23 10:13:42.385 250273 DEBUG nova.virt.libvirt.driver [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 23 05:13:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:42.627 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:13:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:42.628 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:13:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:42.630 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:13:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:13:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:42.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:13:43 np0005593232 nova_compute[250269]: 2026-01-23 10:13:43.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:13:43 np0005593232 nova_compute[250269]: 2026-01-23 10:13:43.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:13:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2514: 321 pgs: 321 active+clean; 656 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 6.4 MiB/s wr, 273 op/s
Jan 23 05:13:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:44.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:44 np0005593232 nova_compute[250269]: 2026-01-23 10:13:44.171 250273 DEBUG nova.network.neutron [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Updating instance_info_cache with network_info: [{"id": "5a2eb60f-bd18-4a91-b5f8-eaec207ac51c", "address": "fa:16:3e:d9:7a:35", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2eb60f-bd", "ovs_interfaceid": "5a2eb60f-bd18-4a91-b5f8-eaec207ac51c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:13:44 np0005593232 nova_compute[250269]: 2026-01-23 10:13:44.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:13:44 np0005593232 nova_compute[250269]: 2026-01-23 10:13:44.324 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:44 np0005593232 nova_compute[250269]: 2026-01-23 10:13:44.722 250273 DEBUG oslo_concurrency.lockutils [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Releasing lock "refresh_cache-74fc0ca6-968a-48e2-8496-f53ebb50daf0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:13:44 np0005593232 kernel: tap5247b656-d9 (unregistering): left promiscuous mode
Jan 23 05:13:44 np0005593232 NetworkManager[49057]: <info>  [1769163224.7710] device (tap5247b656-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:13:44 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:44Z|00487|binding|INFO|Releasing lport 5247b656-d92f-4246-8db1-32dd4ca770b1 from this chassis (sb_readonly=0)
Jan 23 05:13:44 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:44Z|00488|binding|INFO|Setting lport 5247b656-d92f-4246-8db1-32dd4ca770b1 down in Southbound
Jan 23 05:13:44 np0005593232 nova_compute[250269]: 2026-01-23 10:13:44.791 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:44 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:44Z|00489|binding|INFO|Removing iface tap5247b656-d9 ovn-installed in OVS
Jan 23 05:13:44 np0005593232 nova_compute[250269]: 2026-01-23 10:13:44.797 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:44.803 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:b9:0c 10.100.0.4'], port_security=['fa:16:3e:7a:b9:0c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '8056a321-13d3-4dd8-bb33-70c832c17ac1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00bd3319-bfe5-4acd-b2e4-17830ee847f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a6ba16c4b9d49d3bc24cd7b44935d1f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6fc0d424-7779-4175-b5e0-e2613de6ecef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5fb685af-2efd-4d70-8868-8a86ed4c3ca6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=5247b656-d92f-4246-8db1-32dd4ca770b1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:13:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:44.804 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 5247b656-d92f-4246-8db1-32dd4ca770b1 in datapath 00bd3319-bfe5-4acd-b2e4-17830ee847f9 unbound from our chassis#033[00m
Jan 23 05:13:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:44.806 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 00bd3319-bfe5-4acd-b2e4-17830ee847f9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:13:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:44.808 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9b892642-540b-4e6a-a023-79802504e52a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:44.808 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9 namespace which is not needed anymore#033[00m
Jan 23 05:13:44 np0005593232 nova_compute[250269]: 2026-01-23 10:13:44.830 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:44 np0005593232 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000085.scope: Deactivated successfully.
Jan 23 05:13:44 np0005593232 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000085.scope: Consumed 16.395s CPU time.
Jan 23 05:13:44 np0005593232 systemd-machined[215836]: Machine qemu-56-instance-00000085 terminated.
Jan 23 05:13:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:13:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:44.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:13:44 np0005593232 neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9[336683]: [NOTICE]   (336689) : haproxy version is 2.8.14-c23fe91
Jan 23 05:13:44 np0005593232 neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9[336683]: [NOTICE]   (336689) : path to executable is /usr/sbin/haproxy
Jan 23 05:13:44 np0005593232 neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9[336683]: [WARNING]  (336689) : Exiting Master process...
Jan 23 05:13:44 np0005593232 neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9[336683]: [WARNING]  (336689) : Exiting Master process...
Jan 23 05:13:44 np0005593232 neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9[336683]: [ALERT]    (336689) : Current worker (336691) exited with code 143 (Terminated)
Jan 23 05:13:44 np0005593232 neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9[336683]: [WARNING]  (336689) : All workers exited. Exiting... (0)
Jan 23 05:13:44 np0005593232 systemd[1]: libpod-f594ece9e5ad3b2c3c211a432faded67887142428eccbee722fcfd9458fd8cb6.scope: Deactivated successfully.
Jan 23 05:13:44 np0005593232 podman[338744]: 2026-01-23 10:13:44.970433548 +0000 UTC m=+0.043818006 container died f594ece9e5ad3b2c3c211a432faded67887142428eccbee722fcfd9458fd8cb6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 05:13:45 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f594ece9e5ad3b2c3c211a432faded67887142428eccbee722fcfd9458fd8cb6-userdata-shm.mount: Deactivated successfully.
Jan 23 05:13:45 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ce0d6928129688ea86fc5ce48defcae04f495f03b04e2de2d2eb6a3286a5a9f1-merged.mount: Deactivated successfully.
Jan 23 05:13:45 np0005593232 podman[338744]: 2026-01-23 10:13:45.013895593 +0000 UTC m=+0.087280051 container cleanup f594ece9e5ad3b2c3c211a432faded67887142428eccbee722fcfd9458fd8cb6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:13:45 np0005593232 systemd[1]: libpod-conmon-f594ece9e5ad3b2c3c211a432faded67887142428eccbee722fcfd9458fd8cb6.scope: Deactivated successfully.
Jan 23 05:13:45 np0005593232 podman[338788]: 2026-01-23 10:13:45.076628236 +0000 UTC m=+0.039956557 container remove f594ece9e5ad3b2c3c211a432faded67887142428eccbee722fcfd9458fd8cb6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 23 05:13:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:45.082 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[fef12412-bcdc-4091-8ee6-d847c3e12daf]: (4, ('Fri Jan 23 10:13:44 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9 (f594ece9e5ad3b2c3c211a432faded67887142428eccbee722fcfd9458fd8cb6)\nf594ece9e5ad3b2c3c211a432faded67887142428eccbee722fcfd9458fd8cb6\nFri Jan 23 10:13:45 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9 (f594ece9e5ad3b2c3c211a432faded67887142428eccbee722fcfd9458fd8cb6)\nf594ece9e5ad3b2c3c211a432faded67887142428eccbee722fcfd9458fd8cb6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:45.085 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[cfeeec77-34a3-41ef-b437-1522b53bed76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:45.086 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00bd3319-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:13:45 np0005593232 nova_compute[250269]: 2026-01-23 10:13:45.088 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:45 np0005593232 kernel: tap00bd3319-b0: left promiscuous mode
Jan 23 05:13:45 np0005593232 nova_compute[250269]: 2026-01-23 10:13:45.107 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:45.110 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[110cf374-b9a5-479e-b46e-a75fda4baecc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:45.124 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[09c725ea-ed86-4f39-a26e-1abf8bb59b4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:45.125 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[89d0e2a6-403b-41e7-a6e2-3dd96570ab9b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:45.140 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7e420089-6fe1-4fba-871f-ff2c05f0264b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 707554, 'reachable_time': 29182, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338809, 'error': None, 'target': 'ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:45 np0005593232 systemd[1]: run-netns-ovnmeta\x2d00bd3319\x2dbfe5\x2d4acd\x2db2e4\x2d17830ee847f9.mount: Deactivated successfully.
Jan 23 05:13:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:45.145 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:13:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:45.145 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[418c5db0-6c6e-4bbe-8871-78a96a01569c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2515: 321 pgs: 321 active+clean; 658 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 183 op/s
Jan 23 05:13:45 np0005593232 nova_compute[250269]: 2026-01-23 10:13:45.402 250273 INFO nova.virt.libvirt.driver [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Instance shutdown successfully after 3 seconds.#033[00m
Jan 23 05:13:45 np0005593232 nova_compute[250269]: 2026-01-23 10:13:45.408 250273 INFO nova.virt.libvirt.driver [-] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Instance destroyed successfully.#033[00m
Jan 23 05:13:45 np0005593232 nova_compute[250269]: 2026-01-23 10:13:45.408 250273 DEBUG nova.objects.instance [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lazy-loading 'numa_topology' on Instance uuid 8056a321-13d3-4dd8-bb33-70c832c17ac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:13:45 np0005593232 nova_compute[250269]: 2026-01-23 10:13:45.657 250273 INFO nova.virt.libvirt.driver [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Attempting rescue#033[00m
Jan 23 05:13:45 np0005593232 nova_compute[250269]: 2026-01-23 10:13:45.658 250273 DEBUG nova.virt.libvirt.driver [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Jan 23 05:13:45 np0005593232 nova_compute[250269]: 2026-01-23 10:13:45.662 250273 DEBUG nova.virt.libvirt.driver [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 23 05:13:45 np0005593232 nova_compute[250269]: 2026-01-23 10:13:45.663 250273 INFO nova.virt.libvirt.driver [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Creating image(s)#033[00m
Jan 23 05:13:45 np0005593232 nova_compute[250269]: 2026-01-23 10:13:45.688 250273 DEBUG nova.storage.rbd_utils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] rbd image 8056a321-13d3-4dd8-bb33-70c832c17ac1_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:13:45 np0005593232 nova_compute[250269]: 2026-01-23 10:13:45.691 250273 DEBUG nova.objects.instance [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 8056a321-13d3-4dd8-bb33-70c832c17ac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:13:45 np0005593232 nova_compute[250269]: 2026-01-23 10:13:45.789 250273 DEBUG nova.storage.rbd_utils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] rbd image 8056a321-13d3-4dd8-bb33-70c832c17ac1_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:13:45 np0005593232 nova_compute[250269]: 2026-01-23 10:13:45.818 250273 DEBUG nova.storage.rbd_utils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] rbd image 8056a321-13d3-4dd8-bb33-70c832c17ac1_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:13:45 np0005593232 nova_compute[250269]: 2026-01-23 10:13:45.822 250273 DEBUG oslo_concurrency.processutils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:13:45 np0005593232 nova_compute[250269]: 2026-01-23 10:13:45.892 250273 DEBUG oslo_concurrency.processutils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:13:45 np0005593232 nova_compute[250269]: 2026-01-23 10:13:45.893 250273 DEBUG oslo_concurrency.lockutils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:13:45 np0005593232 nova_compute[250269]: 2026-01-23 10:13:45.894 250273 DEBUG oslo_concurrency.lockutils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:13:45 np0005593232 nova_compute[250269]: 2026-01-23 10:13:45.894 250273 DEBUG oslo_concurrency.lockutils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:13:45 np0005593232 nova_compute[250269]: 2026-01-23 10:13:45.923 250273 DEBUG nova.storage.rbd_utils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] rbd image 8056a321-13d3-4dd8-bb33-70c832c17ac1_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:13:45 np0005593232 nova_compute[250269]: 2026-01-23 10:13:45.928 250273 DEBUG oslo_concurrency.processutils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 8056a321-13d3-4dd8-bb33-70c832c17ac1_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:13:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:46.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.294 250273 DEBUG oslo_concurrency.processutils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 8056a321-13d3-4dd8-bb33-70c832c17ac1_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.366s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.295 250273 DEBUG nova.objects.instance [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lazy-loading 'migration_context' on Instance uuid 8056a321-13d3-4dd8-bb33-70c832c17ac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.312 250273 DEBUG nova.virt.libvirt.driver [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.312 250273 DEBUG nova.virt.libvirt.driver [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Start _get_guest_xml network_info=[{"id": "5247b656-d92f-4246-8db1-32dd4ca770b1", "address": "fa:16:3e:7a:b9:0c", "network": {"id": "00bd3319-bfe5-4acd-b2e4-17830ee847f9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-921943436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-921943436-network", "vif_mac": "fa:16:3e:7a:b9:0c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a6ba16c4b9d49d3bc24cd7b44935d1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5247b656-d9", "ovs_interfaceid": "5247b656-d92f-4246-8db1-32dd4ca770b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.313 250273 DEBUG nova.objects.instance [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lazy-loading 'resources' on Instance uuid 8056a321-13d3-4dd8-bb33-70c832c17ac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.337 250273 WARNING nova.virt.libvirt.driver [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.340 250273 DEBUG nova.compute.manager [req-7ff6bd3b-8c0a-45bc-bf91-148af3fb5171 req-9ca72f64-af5b-424c-b957-2bb1143a5cde 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received event network-vif-unplugged-5247b656-d92f-4246-8db1-32dd4ca770b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.341 250273 DEBUG oslo_concurrency.lockutils [req-7ff6bd3b-8c0a-45bc-bf91-148af3fb5171 req-9ca72f64-af5b-424c-b957-2bb1143a5cde 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.341 250273 DEBUG oslo_concurrency.lockutils [req-7ff6bd3b-8c0a-45bc-bf91-148af3fb5171 req-9ca72f64-af5b-424c-b957-2bb1143a5cde 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.341 250273 DEBUG oslo_concurrency.lockutils [req-7ff6bd3b-8c0a-45bc-bf91-148af3fb5171 req-9ca72f64-af5b-424c-b957-2bb1143a5cde 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.341 250273 DEBUG nova.compute.manager [req-7ff6bd3b-8c0a-45bc-bf91-148af3fb5171 req-9ca72f64-af5b-424c-b957-2bb1143a5cde 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] No waiting events found dispatching network-vif-unplugged-5247b656-d92f-4246-8db1-32dd4ca770b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.342 250273 WARNING nova.compute.manager [req-7ff6bd3b-8c0a-45bc-bf91-148af3fb5171 req-9ca72f64-af5b-424c-b957-2bb1143a5cde 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received unexpected event network-vif-unplugged-5247b656-d92f-4246-8db1-32dd4ca770b1 for instance with vm_state active and task_state rescuing.#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.346 250273 DEBUG nova.virt.libvirt.host [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.346 250273 DEBUG nova.virt.libvirt.host [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.349 250273 DEBUG nova.virt.libvirt.host [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.349 250273 DEBUG nova.virt.libvirt.host [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.350 250273 DEBUG nova.virt.libvirt.driver [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.350 250273 DEBUG nova.virt.hardware [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.351 250273 DEBUG nova.virt.hardware [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.351 250273 DEBUG nova.virt.hardware [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.351 250273 DEBUG nova.virt.hardware [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.351 250273 DEBUG nova.virt.hardware [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.352 250273 DEBUG nova.virt.hardware [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.352 250273 DEBUG nova.virt.hardware [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.352 250273 DEBUG nova.virt.hardware [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.352 250273 DEBUG nova.virt.hardware [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.352 250273 DEBUG nova.virt.hardware [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.353 250273 DEBUG nova.virt.hardware [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.353 250273 DEBUG nova.objects.instance [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 8056a321-13d3-4dd8-bb33-70c832c17ac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.378 250273 DEBUG oslo_concurrency.processutils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:13:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:13:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1688935479' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:13:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:13:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1352051592' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.821 250273 DEBUG oslo_concurrency.processutils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.824 250273 DEBUG oslo_concurrency.processutils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:13:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:46.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:46 np0005593232 nova_compute[250269]: 2026-01-23 10:13:46.889 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:13:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:13:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:13:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:13:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:13:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.01500885503908431 of space, bias 1.0, pg target 4.502656511725293 quantized to 32 (current 32)
Jan 23 05:13:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:13:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.0002689954401637819 quantized to 32 (current 32)
Jan 23 05:13:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:13:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:13:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:13:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8560510887772195 quantized to 32 (current 32)
Jan 23 05:13:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:13:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 32)
Jan 23 05:13:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:13:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:13:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:13:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Jan 23 05:13:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:13:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 23 05:13:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:13:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:13:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:13:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.098 250273 INFO nova.virt.libvirt.driver [-] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Instance destroyed successfully.#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.099 250273 DEBUG nova.objects.instance [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'resources' on Instance uuid 74fc0ca6-968a-48e2-8496-f53ebb50daf0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.130 250273 DEBUG nova.virt.libvirt.vif [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:13:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-256866531',display_name='tempest-ServerActionsTestOtherB-server-256866531',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-256866531',id=138,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:13:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='9dd869ce76e44fc8a82b8bbee1654d33',ramdisk_id='',reservation_id='r-33x0cr32',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1052932467',owner_user_name='tempest-ServerActionsTestOtherB-1052932467-project-member',shelved_at='2026-01-23T10:13:39.788802',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='6bd0071a-f3ee-4720-bcc3-909d56e7c877'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:13:34Z,user_data=None,user_id='aca3cab576d641d3b89e7dddf155d467',uuid=74fc0ca6-968a-48e2-8496-f53ebb50daf0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "5a2eb60f-bd18-4a91-b5f8-eaec207ac51c", "address": "fa:16:3e:d9:7a:35", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2eb60f-bd", "ovs_interfaceid": "5a2eb60f-bd18-4a91-b5f8-eaec207ac51c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.130 250273 DEBUG nova.network.os_vif_util [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converting VIF {"id": "5a2eb60f-bd18-4a91-b5f8-eaec207ac51c", "address": "fa:16:3e:d9:7a:35", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a2eb60f-bd", "ovs_interfaceid": "5a2eb60f-bd18-4a91-b5f8-eaec207ac51c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.131 250273 DEBUG nova.network.os_vif_util [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:7a:35,bridge_name='br-int',has_traffic_filtering=True,id=5a2eb60f-bd18-4a91-b5f8-eaec207ac51c,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a2eb60f-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.131 250273 DEBUG os_vif [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:7a:35,bridge_name='br-int',has_traffic_filtering=True,id=5a2eb60f-bd18-4a91-b5f8-eaec207ac51c,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a2eb60f-bd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.132 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.133 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a2eb60f-bd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.134 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.136 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.139 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.142 250273 INFO os_vif [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:7a:35,bridge_name='br-int',has_traffic_filtering=True,id=5a2eb60f-bd18-4a91-b5f8-eaec207ac51c,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a2eb60f-bd')#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.260 250273 DEBUG nova.compute.manager [req-46bf46d1-04ac-486f-a98e-0c32cf51a166 req-c8c15a80-d033-4a04-a28e-464742988057 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Received event network-changed-5a2eb60f-bd18-4a91-b5f8-eaec207ac51c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.261 250273 DEBUG nova.compute.manager [req-46bf46d1-04ac-486f-a98e-0c32cf51a166 req-c8c15a80-d033-4a04-a28e-464742988057 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Refreshing instance network info cache due to event network-changed-5a2eb60f-bd18-4a91-b5f8-eaec207ac51c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.261 250273 DEBUG oslo_concurrency.lockutils [req-46bf46d1-04ac-486f-a98e-0c32cf51a166 req-c8c15a80-d033-4a04-a28e-464742988057 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-74fc0ca6-968a-48e2-8496-f53ebb50daf0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.261 250273 DEBUG oslo_concurrency.lockutils [req-46bf46d1-04ac-486f-a98e-0c32cf51a166 req-c8c15a80-d033-4a04-a28e-464742988057 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-74fc0ca6-968a-48e2-8496-f53ebb50daf0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.261 250273 DEBUG nova.network.neutron [req-46bf46d1-04ac-486f-a98e-0c32cf51a166 req-c8c15a80-d033-4a04-a28e-464742988057 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Refreshing network info cache for port 5a2eb60f-bd18-4a91-b5f8-eaec207ac51c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:13:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:13:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3398530176' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.311 250273 DEBUG oslo_concurrency.processutils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.312 250273 DEBUG oslo_concurrency.processutils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:13:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2516: 321 pgs: 321 active+clean; 673 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.8 MiB/s wr, 154 op/s
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.581 250273 INFO nova.virt.libvirt.driver [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Deleting instance files /var/lib/nova/instances/74fc0ca6-968a-48e2-8496-f53ebb50daf0_del#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.583 250273 INFO nova.virt.libvirt.driver [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Deletion of /var/lib/nova/instances/74fc0ca6-968a-48e2-8496-f53ebb50daf0_del complete#033[00m
Jan 23 05:13:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:13:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1952083015' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.762 250273 INFO nova.scheduler.client.report [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Deleted allocations for instance 74fc0ca6-968a-48e2-8496-f53ebb50daf0#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.767 250273 DEBUG oslo_concurrency.processutils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.769 250273 DEBUG nova.virt.libvirt.vif [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:12:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1844411829',display_name='tempest-ServerRescueNegativeTestJSON-server-1844411829',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1844411829',id=133,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:12:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a6ba16c4b9d49d3bc24cd7b44935d1f',ramdisk_id='',reservation_id='r-v1vp7v6p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-87224704',owner_user_name='tempest-ServerRescueNegativeTestJSON-87224704-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:13:29Z,user_data=None,user_id='fae914e59ec54f6b80928ef3cc68dbdb',uuid=8056a321-13d3-4dd8-bb33-70c832c17ac1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5247b656-d92f-4246-8db1-32dd4ca770b1", "address": "fa:16:3e:7a:b9:0c", "network": {"id": "00bd3319-bfe5-4acd-b2e4-17830ee847f9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-921943436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-921943436-network", "vif_mac": "fa:16:3e:7a:b9:0c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a6ba16c4b9d49d3bc24cd7b44935d1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5247b656-d9", "ovs_interfaceid": "5247b656-d92f-4246-8db1-32dd4ca770b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.769 250273 DEBUG nova.network.os_vif_util [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Converting VIF {"id": "5247b656-d92f-4246-8db1-32dd4ca770b1", "address": "fa:16:3e:7a:b9:0c", "network": {"id": "00bd3319-bfe5-4acd-b2e4-17830ee847f9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-921943436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-921943436-network", "vif_mac": "fa:16:3e:7a:b9:0c"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a6ba16c4b9d49d3bc24cd7b44935d1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5247b656-d9", "ovs_interfaceid": "5247b656-d92f-4246-8db1-32dd4ca770b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.770 250273 DEBUG nova.network.os_vif_util [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:b9:0c,bridge_name='br-int',has_traffic_filtering=True,id=5247b656-d92f-4246-8db1-32dd4ca770b1,network=Network(00bd3319-bfe5-4acd-b2e4-17830ee847f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5247b656-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.772 250273 DEBUG nova.objects.instance [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lazy-loading 'pci_devices' on Instance uuid 8056a321-13d3-4dd8-bb33-70c832c17ac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.811 250273 DEBUG nova.virt.libvirt.driver [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:13:47 np0005593232 nova_compute[250269]:  <uuid>8056a321-13d3-4dd8-bb33-70c832c17ac1</uuid>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:  <name>instance-00000085</name>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerRescueNegativeTestJSON-server-1844411829</nova:name>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:13:46</nova:creationTime>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:13:47 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:        <nova:user uuid="fae914e59ec54f6b80928ef3cc68dbdb">tempest-ServerRescueNegativeTestJSON-87224704-project-member</nova:user>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:        <nova:project uuid="0a6ba16c4b9d49d3bc24cd7b44935d1f">tempest-ServerRescueNegativeTestJSON-87224704</nova:project>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:        <nova:port uuid="5247b656-d92f-4246-8db1-32dd4ca770b1">
Jan 23 05:13:47 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <entry name="serial">8056a321-13d3-4dd8-bb33-70c832c17ac1</entry>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <entry name="uuid">8056a321-13d3-4dd8-bb33-70c832c17ac1</entry>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/8056a321-13d3-4dd8-bb33-70c832c17ac1_disk.rescue">
Jan 23 05:13:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:13:47 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/8056a321-13d3-4dd8-bb33-70c832c17ac1_disk">
Jan 23 05:13:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:13:47 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <target dev="vdb" bus="virtio"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/8056a321-13d3-4dd8-bb33-70c832c17ac1_disk.config.rescue">
Jan 23 05:13:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:13:47 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:7a:b9:0c"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <target dev="tap5247b656-d9"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/8056a321-13d3-4dd8-bb33-70c832c17ac1/console.log" append="off"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:13:47 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:13:47 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:13:47 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:13:47 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.819 250273 INFO nova.virt.libvirt.driver [-] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Instance destroyed successfully.#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.832 250273 DEBUG oslo_concurrency.lockutils [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.832 250273 DEBUG oslo_concurrency.lockutils [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.919 250273 DEBUG nova.virt.libvirt.driver [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.920 250273 DEBUG nova.virt.libvirt.driver [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.920 250273 DEBUG nova.virt.libvirt.driver [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.921 250273 DEBUG nova.virt.libvirt.driver [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] No VIF found with MAC fa:16:3e:7a:b9:0c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.921 250273 INFO nova.virt.libvirt.driver [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Using config drive#033[00m
Jan 23 05:13:47 np0005593232 nova_compute[250269]: 2026-01-23 10:13:47.988 250273 DEBUG nova.storage.rbd_utils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] rbd image 8056a321-13d3-4dd8-bb33-70c832c17ac1_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:13:48 np0005593232 nova_compute[250269]: 2026-01-23 10:13:48.025 250273 DEBUG nova.objects.instance [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 8056a321-13d3-4dd8-bb33-70c832c17ac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:13:48 np0005593232 nova_compute[250269]: 2026-01-23 10:13:48.036 250273 DEBUG oslo_concurrency.processutils [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:13:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:48.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:48 np0005593232 nova_compute[250269]: 2026-01-23 10:13:48.083 250273 DEBUG nova.objects.instance [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lazy-loading 'keypairs' on Instance uuid 8056a321-13d3-4dd8-bb33-70c832c17ac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:13:48 np0005593232 nova_compute[250269]: 2026-01-23 10:13:48.113 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769163213.1109746, 74fc0ca6-968a-48e2-8496-f53ebb50daf0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:13:48 np0005593232 nova_compute[250269]: 2026-01-23 10:13:48.114 250273 INFO nova.compute.manager [-] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:13:48 np0005593232 nova_compute[250269]: 2026-01-23 10:13:48.139 250273 DEBUG nova.compute.manager [None req-a6fd055f-4476-46cd-8e7a-7962a4d6e50c - - - - - -] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:13:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:13:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1490541171' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:13:48 np0005593232 nova_compute[250269]: 2026-01-23 10:13:48.491 250273 DEBUG nova.compute.manager [req-b89abcd0-3eb8-496e-8c3a-00e8594408b3 req-fbc524e5-f556-4f78-a931-772a07b46ab9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received event network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:13:48 np0005593232 nova_compute[250269]: 2026-01-23 10:13:48.492 250273 DEBUG oslo_concurrency.lockutils [req-b89abcd0-3eb8-496e-8c3a-00e8594408b3 req-fbc524e5-f556-4f78-a931-772a07b46ab9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:13:48 np0005593232 nova_compute[250269]: 2026-01-23 10:13:48.492 250273 DEBUG oslo_concurrency.lockutils [req-b89abcd0-3eb8-496e-8c3a-00e8594408b3 req-fbc524e5-f556-4f78-a931-772a07b46ab9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:13:48 np0005593232 nova_compute[250269]: 2026-01-23 10:13:48.493 250273 DEBUG oslo_concurrency.lockutils [req-b89abcd0-3eb8-496e-8c3a-00e8594408b3 req-fbc524e5-f556-4f78-a931-772a07b46ab9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:13:48 np0005593232 nova_compute[250269]: 2026-01-23 10:13:48.493 250273 DEBUG nova.compute.manager [req-b89abcd0-3eb8-496e-8c3a-00e8594408b3 req-fbc524e5-f556-4f78-a931-772a07b46ab9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] No waiting events found dispatching network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:13:48 np0005593232 nova_compute[250269]: 2026-01-23 10:13:48.493 250273 WARNING nova.compute.manager [req-b89abcd0-3eb8-496e-8c3a-00e8594408b3 req-fbc524e5-f556-4f78-a931-772a07b46ab9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received unexpected event network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 for instance with vm_state active and task_state rescuing.#033[00m
Jan 23 05:13:48 np0005593232 nova_compute[250269]: 2026-01-23 10:13:48.494 250273 DEBUG oslo_concurrency.processutils [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:13:48 np0005593232 nova_compute[250269]: 2026-01-23 10:13:48.500 250273 DEBUG nova.compute.provider_tree [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:13:48 np0005593232 nova_compute[250269]: 2026-01-23 10:13:48.516 250273 DEBUG nova.scheduler.client.report [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:13:48 np0005593232 nova_compute[250269]: 2026-01-23 10:13:48.546 250273 DEBUG oslo_concurrency.lockutils [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:13:48 np0005593232 nova_compute[250269]: 2026-01-23 10:13:48.603 250273 INFO nova.virt.libvirt.driver [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Creating config drive at /var/lib/nova/instances/8056a321-13d3-4dd8-bb33-70c832c17ac1/disk.config.rescue#033[00m
Jan 23 05:13:48 np0005593232 nova_compute[250269]: 2026-01-23 10:13:48.608 250273 DEBUG oslo_concurrency.processutils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8056a321-13d3-4dd8-bb33-70c832c17ac1/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp10dzpj2s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:13:48 np0005593232 nova_compute[250269]: 2026-01-23 10:13:48.641 250273 DEBUG oslo_concurrency.lockutils [None req-4cc9d172-0c40-4fc9-922c-dfec86fd430d aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "74fc0ca6-968a-48e2-8496-f53ebb50daf0" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 15.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:13:48 np0005593232 nova_compute[250269]: 2026-01-23 10:13:48.746 250273 DEBUG oslo_concurrency.processutils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8056a321-13d3-4dd8-bb33-70c832c17ac1/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp10dzpj2s" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:13:48 np0005593232 nova_compute[250269]: 2026-01-23 10:13:48.781 250273 DEBUG nova.storage.rbd_utils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] rbd image 8056a321-13d3-4dd8-bb33-70c832c17ac1_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:13:48 np0005593232 nova_compute[250269]: 2026-01-23 10:13:48.786 250273 DEBUG oslo_concurrency.processutils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8056a321-13d3-4dd8-bb33-70c832c17ac1/disk.config.rescue 8056a321-13d3-4dd8-bb33-70c832c17ac1_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:13:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:48.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:49 np0005593232 nova_compute[250269]: 2026-01-23 10:13:49.320 250273 DEBUG oslo_concurrency.processutils [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8056a321-13d3-4dd8-bb33-70c832c17ac1/disk.config.rescue 8056a321-13d3-4dd8-bb33-70c832c17ac1_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:13:49 np0005593232 nova_compute[250269]: 2026-01-23 10:13:49.323 250273 INFO nova.virt.libvirt.driver [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Deleting local config drive /var/lib/nova/instances/8056a321-13d3-4dd8-bb33-70c832c17ac1/disk.config.rescue because it was imported into RBD.#033[00m
Jan 23 05:13:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2517: 321 pgs: 321 active+clean; 684 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 180 KiB/s rd, 3.1 MiB/s wr, 80 op/s
Jan 23 05:13:49 np0005593232 nova_compute[250269]: 2026-01-23 10:13:49.369 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:49 np0005593232 kernel: tap5247b656-d9: entered promiscuous mode
Jan 23 05:13:49 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:49Z|00490|binding|INFO|Claiming lport 5247b656-d92f-4246-8db1-32dd4ca770b1 for this chassis.
Jan 23 05:13:49 np0005593232 NetworkManager[49057]: <info>  [1769163229.3911] manager: (tap5247b656-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/236)
Jan 23 05:13:49 np0005593232 nova_compute[250269]: 2026-01-23 10:13:49.391 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:49 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:49Z|00491|binding|INFO|5247b656-d92f-4246-8db1-32dd4ca770b1: Claiming fa:16:3e:7a:b9:0c 10.100.0.4
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.398 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:b9:0c 10.100.0.4'], port_security=['fa:16:3e:7a:b9:0c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '8056a321-13d3-4dd8-bb33-70c832c17ac1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00bd3319-bfe5-4acd-b2e4-17830ee847f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a6ba16c4b9d49d3bc24cd7b44935d1f', 'neutron:revision_number': '5', 'neutron:security_group_ids': '6fc0d424-7779-4175-b5e0-e2613de6ecef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5fb685af-2efd-4d70-8868-8a86ed4c3ca6, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=5247b656-d92f-4246-8db1-32dd4ca770b1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.399 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 5247b656-d92f-4246-8db1-32dd4ca770b1 in datapath 00bd3319-bfe5-4acd-b2e4-17830ee847f9 bound to our chassis#033[00m
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.401 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00bd3319-bfe5-4acd-b2e4-17830ee847f9#033[00m
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.414 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[aed14d8e-dd09-4e57-8e47-e968442a6d76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.414 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap00bd3319-b1 in ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.416 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap00bd3319-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.416 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5b36065e-3e5f-4119-b0d1-253260665697]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:49 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:49Z|00492|binding|INFO|Setting lport 5247b656-d92f-4246-8db1-32dd4ca770b1 up in Southbound
Jan 23 05:13:49 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:49Z|00493|binding|INFO|Setting lport 5247b656-d92f-4246-8db1-32dd4ca770b1 ovn-installed in OVS
Jan 23 05:13:49 np0005593232 nova_compute[250269]: 2026-01-23 10:13:49.418 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.418 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[42d1d312-3dd6-45ed-9a73-04ae2fb6de14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:49 np0005593232 systemd-udevd[339139]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:13:49 np0005593232 nova_compute[250269]: 2026-01-23 10:13:49.423 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:49 np0005593232 systemd-machined[215836]: New machine qemu-59-instance-00000085.
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.435 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[4f63ae1c-6e86-4aa6-8a09-36c80ba893c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:49 np0005593232 NetworkManager[49057]: <info>  [1769163229.4405] device (tap5247b656-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:13:49 np0005593232 NetworkManager[49057]: <info>  [1769163229.4412] device (tap5247b656-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.447 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[46325f31-db4d-4eb0-a0fb-b6a63ef77c29]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:49 np0005593232 systemd[1]: Started Virtual Machine qemu-59-instance-00000085.
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.480 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[92232d93-dd77-45e8-87d6-2a3cef0b8970]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:49 np0005593232 systemd-udevd[339143]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:13:49 np0005593232 NetworkManager[49057]: <info>  [1769163229.4901] manager: (tap00bd3319-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/237)
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.489 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[294ff754-00aa-4f0c-a4a3-6b74ccd153ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.532 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[7f1208de-ee82-4899-befb-c87d468b02c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.536 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[803cd2fb-e85c-4220-845d-6b46e6a1f0e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:49 np0005593232 NetworkManager[49057]: <info>  [1769163229.5641] device (tap00bd3319-b0): carrier: link connected
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.572 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f7a230-660b-45d6-852e-3b7599b0cf68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.589 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[59e5b7e9-fe66-41c6-8d8b-6c042a76f510]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00bd3319-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:83:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714270, 'reachable_time': 17180, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339172, 'error': None, 'target': 'ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.604 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4a9e9cf7-3b0c-4240-89a7-a89a6aa680be]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6b:83f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 714270, 'tstamp': 714270}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339173, 'error': None, 'target': 'ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.618 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[fffa0db1-b965-4ab6-ba0b-a830696ccf8b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00bd3319-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:83:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 148], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714270, 'reachable_time': 17180, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 339174, 'error': None, 'target': 'ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.652 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[64f39411-4937-4177-9095-78efe7551560]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.710 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ccfd2ad2-a0b7-4566-8bb5-9c89e362b26f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.716 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00bd3319-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.717 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.717 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00bd3319-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:13:49 np0005593232 nova_compute[250269]: 2026-01-23 10:13:49.721 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:49 np0005593232 kernel: tap00bd3319-b0: entered promiscuous mode
Jan 23 05:13:49 np0005593232 nova_compute[250269]: 2026-01-23 10:13:49.724 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:49 np0005593232 NetworkManager[49057]: <info>  [1769163229.7250] manager: (tap00bd3319-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/238)
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.731 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00bd3319-b0, col_values=(('external_ids', {'iface-id': '1788b5e6-601b-4e3d-a584-c0138c3308f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:13:49 np0005593232 nova_compute[250269]: 2026-01-23 10:13:49.733 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:49 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:49Z|00494|binding|INFO|Releasing lport 1788b5e6-601b-4e3d-a584-c0138c3308f6 from this chassis (sb_readonly=0)
Jan 23 05:13:49 np0005593232 nova_compute[250269]: 2026-01-23 10:13:49.734 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.737 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/00bd3319-bfe5-4acd-b2e4-17830ee847f9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/00bd3319-bfe5-4acd-b2e4-17830ee847f9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.738 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7cd52206-2986-4756-a977-0d919dfa7714]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.739 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-00bd3319-bfe5-4acd-b2e4-17830ee847f9
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/00bd3319-bfe5-4acd-b2e4-17830ee847f9.pid.haproxy
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 00bd3319-bfe5-4acd-b2e4-17830ee847f9
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:13:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:49.740 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9', 'env', 'PROCESS_TAG=haproxy-00bd3319-bfe5-4acd-b2e4-17830ee847f9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/00bd3319-bfe5-4acd-b2e4-17830ee847f9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:13:49 np0005593232 nova_compute[250269]: 2026-01-23 10:13:49.756 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:49 np0005593232 nova_compute[250269]: 2026-01-23 10:13:49.886 250273 DEBUG nova.network.neutron [req-46bf46d1-04ac-486f-a98e-0c32cf51a166 req-c8c15a80-d033-4a04-a28e-464742988057 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Updated VIF entry in instance network info cache for port 5a2eb60f-bd18-4a91-b5f8-eaec207ac51c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:13:49 np0005593232 nova_compute[250269]: 2026-01-23 10:13:49.887 250273 DEBUG nova.network.neutron [req-46bf46d1-04ac-486f-a98e-0c32cf51a166 req-c8c15a80-d033-4a04-a28e-464742988057 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 74fc0ca6-968a-48e2-8496-f53ebb50daf0] Updating instance_info_cache with network_info: [{"id": "5a2eb60f-bd18-4a91-b5f8-eaec207ac51c", "address": "fa:16:3e:d9:7a:35", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": null, "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap5a2eb60f-bd", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:13:49 np0005593232 nova_compute[250269]: 2026-01-23 10:13:49.916 250273 DEBUG oslo_concurrency.lockutils [req-46bf46d1-04ac-486f-a98e-0c32cf51a166 req-c8c15a80-d033-4a04-a28e-464742988057 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-74fc0ca6-968a-48e2-8496-f53ebb50daf0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:13:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:50.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:50 np0005593232 nova_compute[250269]: 2026-01-23 10:13:50.084 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Removed pending event for 8056a321-13d3-4dd8-bb33-70c832c17ac1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 23 05:13:50 np0005593232 nova_compute[250269]: 2026-01-23 10:13:50.084 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163230.0834348, 8056a321-13d3-4dd8-bb33-70c832c17ac1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:13:50 np0005593232 nova_compute[250269]: 2026-01-23 10:13:50.085 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:13:50 np0005593232 nova_compute[250269]: 2026-01-23 10:13:50.089 250273 DEBUG nova.compute.manager [None req-5d2c53f2-c4f7-4e41-8110-815341fe49ec fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:13:50 np0005593232 podman[339265]: 2026-01-23 10:13:50.146467673 +0000 UTC m=+0.046098771 container create f2c9f1ccb95afce695833d94e5b0a13790ce52da271ef84389f93bb70f93370c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:13:50 np0005593232 systemd[1]: Started libpod-conmon-f2c9f1ccb95afce695833d94e5b0a13790ce52da271ef84389f93bb70f93370c.scope.
Jan 23 05:13:50 np0005593232 podman[339265]: 2026-01-23 10:13:50.122696997 +0000 UTC m=+0.022328115 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:13:50 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:13:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b2a203a3899ff4c7635e25901fd1c5a90c7668e7b9bbb0a8936038db62cc97/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:13:50 np0005593232 podman[339265]: 2026-01-23 10:13:50.260486382 +0000 UTC m=+0.160117510 container init f2c9f1ccb95afce695833d94e5b0a13790ce52da271ef84389f93bb70f93370c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 05:13:50 np0005593232 podman[339265]: 2026-01-23 10:13:50.27096636 +0000 UTC m=+0.170597468 container start f2c9f1ccb95afce695833d94e5b0a13790ce52da271ef84389f93bb70f93370c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 23 05:13:50 np0005593232 nova_compute[250269]: 2026-01-23 10:13:50.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:13:50 np0005593232 nova_compute[250269]: 2026-01-23 10:13:50.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:13:50 np0005593232 nova_compute[250269]: 2026-01-23 10:13:50.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:13:50 np0005593232 neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9[339280]: [NOTICE]   (339284) : New worker (339286) forked
Jan 23 05:13:50 np0005593232 neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9[339280]: [NOTICE]   (339284) : Loading success.
Jan 23 05:13:50 np0005593232 nova_compute[250269]: 2026-01-23 10:13:50.664 250273 DEBUG nova.compute.manager [req-0c40bb8f-2dbb-4014-b97a-3927e597060e req-7a207a39-a8e4-4286-87e0-144596c396c3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received event network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:13:50 np0005593232 nova_compute[250269]: 2026-01-23 10:13:50.664 250273 DEBUG oslo_concurrency.lockutils [req-0c40bb8f-2dbb-4014-b97a-3927e597060e req-7a207a39-a8e4-4286-87e0-144596c396c3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:13:50 np0005593232 nova_compute[250269]: 2026-01-23 10:13:50.665 250273 DEBUG oslo_concurrency.lockutils [req-0c40bb8f-2dbb-4014-b97a-3927e597060e req-7a207a39-a8e4-4286-87e0-144596c396c3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:13:50 np0005593232 nova_compute[250269]: 2026-01-23 10:13:50.666 250273 DEBUG oslo_concurrency.lockutils [req-0c40bb8f-2dbb-4014-b97a-3927e597060e req-7a207a39-a8e4-4286-87e0-144596c396c3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:13:50 np0005593232 nova_compute[250269]: 2026-01-23 10:13:50.666 250273 DEBUG nova.compute.manager [req-0c40bb8f-2dbb-4014-b97a-3927e597060e req-7a207a39-a8e4-4286-87e0-144596c396c3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] No waiting events found dispatching network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:13:50 np0005593232 nova_compute[250269]: 2026-01-23 10:13:50.666 250273 WARNING nova.compute.manager [req-0c40bb8f-2dbb-4014-b97a-3927e597060e req-7a207a39-a8e4-4286-87e0-144596c396c3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received unexpected event network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 for instance with vm_state active and task_state rescuing.#033[00m
Jan 23 05:13:50 np0005593232 nova_compute[250269]: 2026-01-23 10:13:50.690 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-8056a321-13d3-4dd8-bb33-70c832c17ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:13:50 np0005593232 nova_compute[250269]: 2026-01-23 10:13:50.690 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-8056a321-13d3-4dd8-bb33-70c832c17ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:13:50 np0005593232 nova_compute[250269]: 2026-01-23 10:13:50.690 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:13:50 np0005593232 nova_compute[250269]: 2026-01-23 10:13:50.690 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8056a321-13d3-4dd8-bb33-70c832c17ac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:13:50 np0005593232 nova_compute[250269]: 2026-01-23 10:13:50.692 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:13:50 np0005593232 nova_compute[250269]: 2026-01-23 10:13:50.696 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:13:50 np0005593232 nova_compute[250269]: 2026-01-23 10:13:50.865 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163230.0837965, 8056a321-13d3-4dd8-bb33-70c832c17ac1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:13:50 np0005593232 nova_compute[250269]: 2026-01-23 10:13:50.865 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] VM Started (Lifecycle Event)#033[00m
Jan 23 05:13:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:13:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:50.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:13:50 np0005593232 nova_compute[250269]: 2026-01-23 10:13:50.918 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:13:50 np0005593232 nova_compute[250269]: 2026-01-23 10:13:50.926 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:13:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 05:13:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:13:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 05:13:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:13:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2518: 321 pgs: 321 active+clean; 684 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 180 KiB/s rd, 3.1 MiB/s wr, 80 op/s
Jan 23 05:13:51 np0005593232 nova_compute[250269]: 2026-01-23 10:13:51.559 250273 DEBUG oslo_concurrency.lockutils [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "b3f4a8f0-513b-4165-a2f5-3c01bac04576" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:13:51 np0005593232 nova_compute[250269]: 2026-01-23 10:13:51.560 250273 DEBUG oslo_concurrency.lockutils [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "b3f4a8f0-513b-4165-a2f5-3c01bac04576" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:13:51 np0005593232 nova_compute[250269]: 2026-01-23 10:13:51.560 250273 DEBUG oslo_concurrency.lockutils [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "b3f4a8f0-513b-4165-a2f5-3c01bac04576-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:13:51 np0005593232 nova_compute[250269]: 2026-01-23 10:13:51.560 250273 DEBUG oslo_concurrency.lockutils [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "b3f4a8f0-513b-4165-a2f5-3c01bac04576-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:13:51 np0005593232 nova_compute[250269]: 2026-01-23 10:13:51.562 250273 DEBUG oslo_concurrency.lockutils [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "b3f4a8f0-513b-4165-a2f5-3c01bac04576-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:13:51 np0005593232 nova_compute[250269]: 2026-01-23 10:13:51.563 250273 INFO nova.compute.manager [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Terminating instance#033[00m
Jan 23 05:13:51 np0005593232 nova_compute[250269]: 2026-01-23 10:13:51.564 250273 DEBUG nova.compute.manager [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:13:51 np0005593232 kernel: tapfaf5bdcb-b5 (unregistering): left promiscuous mode
Jan 23 05:13:51 np0005593232 NetworkManager[49057]: <info>  [1769163231.6112] device (tapfaf5bdcb-b5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:13:51 np0005593232 nova_compute[250269]: 2026-01-23 10:13:51.672 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:51 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:51Z|00495|binding|INFO|Releasing lport faf5bdcb-b5b5-4066-9cf4-a8f4014679a6 from this chassis (sb_readonly=0)
Jan 23 05:13:51 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:51Z|00496|binding|INFO|Setting lport faf5bdcb-b5b5-4066-9cf4-a8f4014679a6 down in Southbound
Jan 23 05:13:51 np0005593232 ovn_controller[151001]: 2026-01-23T10:13:51Z|00497|binding|INFO|Removing iface tapfaf5bdcb-b5 ovn-installed in OVS
Jan 23 05:13:51 np0005593232 nova_compute[250269]: 2026-01-23 10:13:51.678 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:51.688 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:3d:11 10.100.0.19'], port_security=['fa:16:3e:13:3d:11 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': 'b3f4a8f0-513b-4165-a2f5-3c01bac04576', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a86a6e8-5e41-414d-9717-c71aa2218873', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98c94577fcdb4c3d893898ede79ea2d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ec17e7a3-c853-4135-b8b0-167a2ed6cea5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9398d99-046a-4d16-8858-45cb48df3c81, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=faf5bdcb-b5b5-4066-9cf4-a8f4014679a6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:13:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:51.689 161902 INFO neutron.agent.ovn.metadata.agent [-] Port faf5bdcb-b5b5-4066-9cf4-a8f4014679a6 in datapath 0a86a6e8-5e41-414d-9717-c71aa2218873 unbound from our chassis#033[00m
Jan 23 05:13:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:51.691 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0a86a6e8-5e41-414d-9717-c71aa2218873, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:13:51 np0005593232 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000089.scope: Deactivated successfully.
Jan 23 05:13:51 np0005593232 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000089.scope: Consumed 13.669s CPU time.
Jan 23 05:13:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:51.692 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[11633005-f17f-4372-80ac-91f6ebbbfdb7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:51.692 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0a86a6e8-5e41-414d-9717-c71aa2218873 namespace which is not needed anymore#033[00m
Jan 23 05:13:51 np0005593232 nova_compute[250269]: 2026-01-23 10:13:51.695 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:51 np0005593232 systemd-machined[215836]: Machine qemu-58-instance-00000089 terminated.
Jan 23 05:13:51 np0005593232 nova_compute[250269]: 2026-01-23 10:13:51.802 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:51 np0005593232 nova_compute[250269]: 2026-01-23 10:13:51.808 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:51 np0005593232 nova_compute[250269]: 2026-01-23 10:13:51.820 250273 INFO nova.virt.libvirt.driver [-] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Instance destroyed successfully.#033[00m
Jan 23 05:13:51 np0005593232 nova_compute[250269]: 2026-01-23 10:13:51.820 250273 DEBUG nova.objects.instance [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lazy-loading 'resources' on Instance uuid b3f4a8f0-513b-4165-a2f5-3c01bac04576 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:13:51 np0005593232 nova_compute[250269]: 2026-01-23 10:13:51.850 250273 DEBUG nova.virt.libvirt.vif [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:13:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-495186012',display_name='tempest-TestNetworkBasicOps-server-495186012',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-495186012',id=137,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJcRSiJbCJSpcIIny/K0weugJbU1lrFB/m2zYZiRLImaUCxqEBpaQ1Ck0Aehrf7Knf/qKUoGahOjDGUPouvtWFw4LRqeD1QCuGOxag4C/th3BIgZBXyUD1wzevSZevWMEw==',key_name='tempest-TestNetworkBasicOps-676077224',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:13:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='98c94577fcdb4c3d893898ede79ea2d4',ramdisk_id='',reservation_id='r-mfrfos5b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-789276745',owner_user_name='tempest-TestNetworkBasicOps-789276745-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:13:27Z,user_data=None,user_id='60291ce86b6946629a2e48f6680312cb',uuid=b3f4a8f0-513b-4165-a2f5-3c01bac04576,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "faf5bdcb-b5b5-4066-9cf4-a8f4014679a6", "address": "fa:16:3e:13:3d:11", "network": {"id": "0a86a6e8-5e41-414d-9717-c71aa2218873", "bridge": "br-int", "label": "tempest-network-smoke--1158487566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf5bdcb-b5", "ovs_interfaceid": "faf5bdcb-b5b5-4066-9cf4-a8f4014679a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:13:51 np0005593232 nova_compute[250269]: 2026-01-23 10:13:51.851 250273 DEBUG nova.network.os_vif_util [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converting VIF {"id": "faf5bdcb-b5b5-4066-9cf4-a8f4014679a6", "address": "fa:16:3e:13:3d:11", "network": {"id": "0a86a6e8-5e41-414d-9717-c71aa2218873", "bridge": "br-int", "label": "tempest-network-smoke--1158487566", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf5bdcb-b5", "ovs_interfaceid": "faf5bdcb-b5b5-4066-9cf4-a8f4014679a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:13:51 np0005593232 nova_compute[250269]: 2026-01-23 10:13:51.852 250273 DEBUG nova.network.os_vif_util [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:3d:11,bridge_name='br-int',has_traffic_filtering=True,id=faf5bdcb-b5b5-4066-9cf4-a8f4014679a6,network=Network(0a86a6e8-5e41-414d-9717-c71aa2218873),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf5bdcb-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:13:51 np0005593232 nova_compute[250269]: 2026-01-23 10:13:51.853 250273 DEBUG os_vif [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:3d:11,bridge_name='br-int',has_traffic_filtering=True,id=faf5bdcb-b5b5-4066-9cf4-a8f4014679a6,network=Network(0a86a6e8-5e41-414d-9717-c71aa2218873),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf5bdcb-b5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:13:51 np0005593232 nova_compute[250269]: 2026-01-23 10:13:51.855 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:51 np0005593232 nova_compute[250269]: 2026-01-23 10:13:51.856 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaf5bdcb-b5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:13:51 np0005593232 nova_compute[250269]: 2026-01-23 10:13:51.862 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:51 np0005593232 nova_compute[250269]: 2026-01-23 10:13:51.863 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:13:51 np0005593232 nova_compute[250269]: 2026-01-23 10:13:51.865 250273 INFO os_vif [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:3d:11,bridge_name='br-int',has_traffic_filtering=True,id=faf5bdcb-b5b5-4066-9cf4-a8f4014679a6,network=Network(0a86a6e8-5e41-414d-9717-c71aa2218873),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaf5bdcb-b5')#033[00m
Jan 23 05:13:51 np0005593232 neutron-haproxy-ovnmeta-0a86a6e8-5e41-414d-9717-c71aa2218873[338396]: [NOTICE]   (338400) : haproxy version is 2.8.14-c23fe91
Jan 23 05:13:51 np0005593232 neutron-haproxy-ovnmeta-0a86a6e8-5e41-414d-9717-c71aa2218873[338396]: [NOTICE]   (338400) : path to executable is /usr/sbin/haproxy
Jan 23 05:13:51 np0005593232 neutron-haproxy-ovnmeta-0a86a6e8-5e41-414d-9717-c71aa2218873[338396]: [WARNING]  (338400) : Exiting Master process...
Jan 23 05:13:51 np0005593232 neutron-haproxy-ovnmeta-0a86a6e8-5e41-414d-9717-c71aa2218873[338396]: [WARNING]  (338400) : Exiting Master process...
Jan 23 05:13:51 np0005593232 neutron-haproxy-ovnmeta-0a86a6e8-5e41-414d-9717-c71aa2218873[338396]: [ALERT]    (338400) : Current worker (338402) exited with code 143 (Terminated)
Jan 23 05:13:51 np0005593232 neutron-haproxy-ovnmeta-0a86a6e8-5e41-414d-9717-c71aa2218873[338396]: [WARNING]  (338400) : All workers exited. Exiting... (0)
Jan 23 05:13:51 np0005593232 systemd[1]: libpod-1bebf489aaa2a4a28747f64534a2e8d123e430b80e76cb205d015eea6c1a34f0.scope: Deactivated successfully.
Jan 23 05:13:51 np0005593232 conmon[338396]: conmon 1bebf489aaa2a4a28747 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1bebf489aaa2a4a28747f64534a2e8d123e430b80e76cb205d015eea6c1a34f0.scope/container/memory.events
Jan 23 05:13:51 np0005593232 nova_compute[250269]: 2026-01-23 10:13:51.890 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:51 np0005593232 podman[339456]: 2026-01-23 10:13:51.893959616 +0000 UTC m=+0.060982284 container died 1bebf489aaa2a4a28747f64534a2e8d123e430b80e76cb205d015eea6c1a34f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0a86a6e8-5e41-414d-9717-c71aa2218873, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 05:13:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:13:51 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1bebf489aaa2a4a28747f64534a2e8d123e430b80e76cb205d015eea6c1a34f0-userdata-shm.mount: Deactivated successfully.
Jan 23 05:13:51 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d14ee09bff3046b8011b6b05f4032873b011537d8703a62d387203867f388591-merged.mount: Deactivated successfully.
Jan 23 05:13:51 np0005593232 podman[339456]: 2026-01-23 10:13:51.968097183 +0000 UTC m=+0.135119851 container cleanup 1bebf489aaa2a4a28747f64534a2e8d123e430b80e76cb205d015eea6c1a34f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0a86a6e8-5e41-414d-9717-c71aa2218873, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 23 05:13:51 np0005593232 systemd[1]: libpod-conmon-1bebf489aaa2a4a28747f64534a2e8d123e430b80e76cb205d015eea6c1a34f0.scope: Deactivated successfully.
Jan 23 05:13:52 np0005593232 podman[339504]: 2026-01-23 10:13:52.043933618 +0000 UTC m=+0.053358118 container remove 1bebf489aaa2a4a28747f64534a2e8d123e430b80e76cb205d015eea6c1a34f0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0a86a6e8-5e41-414d-9717-c71aa2218873, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 05:13:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:52.050 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f431e424-0a1c-4db1-9324-0fab1fea0c67]: (4, ('Fri Jan 23 10:13:51 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0a86a6e8-5e41-414d-9717-c71aa2218873 (1bebf489aaa2a4a28747f64534a2e8d123e430b80e76cb205d015eea6c1a34f0)\n1bebf489aaa2a4a28747f64534a2e8d123e430b80e76cb205d015eea6c1a34f0\nFri Jan 23 10:13:51 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0a86a6e8-5e41-414d-9717-c71aa2218873 (1bebf489aaa2a4a28747f64534a2e8d123e430b80e76cb205d015eea6c1a34f0)\n1bebf489aaa2a4a28747f64534a2e8d123e430b80e76cb205d015eea6c1a34f0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:52.052 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f5cb9253-c00e-4c80-8357-6f9c717cde0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:52.053 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a86a6e8-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:13:52 np0005593232 kernel: tap0a86a6e8-50: left promiscuous mode
Jan 23 05:13:52 np0005593232 nova_compute[250269]: 2026-01-23 10:13:52.057 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:52.061 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[699ea4c9-739b-41e4-a529-10f06fd2eeeb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:52.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:52.077 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f0396cc6-826d-494e-be57-5adf653017a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:52 np0005593232 nova_compute[250269]: 2026-01-23 10:13:52.079 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:52.078 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[96568f3b-d5a3-4134-8be0-0bd0d30976b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:52.097 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[76b72e1d-be84-4662-9f1a-f792506c9e03]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711946, 'reachable_time': 16625, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339519, 'error': None, 'target': 'ovnmeta-0a86a6e8-5e41-414d-9717-c71aa2218873', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:52 np0005593232 systemd[1]: run-netns-ovnmeta\x2d0a86a6e8\x2d5e41\x2d414d\x2d9717\x2dc71aa2218873.mount: Deactivated successfully.
Jan 23 05:13:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:52.102 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0a86a6e8-5e41-414d-9717-c71aa2218873 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:13:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:13:52.102 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[c7558d9a-72c3-4bb6-bfe9-57ec8d7acd02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:13:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:13:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:13:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:13:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:13:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:13:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:13:52 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1122630f-2388-4520-8b7d-62cc028efe55 does not exist
Jan 23 05:13:52 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 4f631af8-73bf-463d-a971-efbb44b63c17 does not exist
Jan 23 05:13:52 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 89fa1afb-19b7-4732-b26e-5c92f6e922f0 does not exist
Jan 23 05:13:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:13:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:13:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:13:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:13:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:13:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:13:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:13:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:13:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:13:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:13:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:13:52 np0005593232 nova_compute[250269]: 2026-01-23 10:13:52.346 250273 INFO nova.virt.libvirt.driver [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Deleting instance files /var/lib/nova/instances/b3f4a8f0-513b-4165-a2f5-3c01bac04576_del#033[00m
Jan 23 05:13:52 np0005593232 nova_compute[250269]: 2026-01-23 10:13:52.347 250273 INFO nova.virt.libvirt.driver [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Deletion of /var/lib/nova/instances/b3f4a8f0-513b-4165-a2f5-3c01bac04576_del complete#033[00m
Jan 23 05:13:52 np0005593232 nova_compute[250269]: 2026-01-23 10:13:52.421 250273 DEBUG nova.compute.manager [req-14e11268-b8e4-44aa-9591-eea6acb50458 req-c9834b4d-eba2-4274-87ff-b11976df5545 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Received event network-vif-unplugged-faf5bdcb-b5b5-4066-9cf4-a8f4014679a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:13:52 np0005593232 nova_compute[250269]: 2026-01-23 10:13:52.422 250273 DEBUG oslo_concurrency.lockutils [req-14e11268-b8e4-44aa-9591-eea6acb50458 req-c9834b4d-eba2-4274-87ff-b11976df5545 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "b3f4a8f0-513b-4165-a2f5-3c01bac04576-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:13:52 np0005593232 nova_compute[250269]: 2026-01-23 10:13:52.423 250273 DEBUG oslo_concurrency.lockutils [req-14e11268-b8e4-44aa-9591-eea6acb50458 req-c9834b4d-eba2-4274-87ff-b11976df5545 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b3f4a8f0-513b-4165-a2f5-3c01bac04576-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:13:52 np0005593232 nova_compute[250269]: 2026-01-23 10:13:52.423 250273 DEBUG oslo_concurrency.lockutils [req-14e11268-b8e4-44aa-9591-eea6acb50458 req-c9834b4d-eba2-4274-87ff-b11976df5545 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b3f4a8f0-513b-4165-a2f5-3c01bac04576-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:13:52 np0005593232 nova_compute[250269]: 2026-01-23 10:13:52.423 250273 DEBUG nova.compute.manager [req-14e11268-b8e4-44aa-9591-eea6acb50458 req-c9834b4d-eba2-4274-87ff-b11976df5545 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] No waiting events found dispatching network-vif-unplugged-faf5bdcb-b5b5-4066-9cf4-a8f4014679a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 05:13:52 np0005593232 nova_compute[250269]: 2026-01-23 10:13:52.423 250273 DEBUG nova.compute.manager [req-14e11268-b8e4-44aa-9591-eea6acb50458 req-c9834b4d-eba2-4274-87ff-b11976df5545 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Received event network-vif-unplugged-faf5bdcb-b5b5-4066-9cf4-a8f4014679a6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 23 05:13:52 np0005593232 nova_compute[250269]: 2026-01-23 10:13:52.454 250273 INFO nova.compute.manager [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Took 0.89 seconds to destroy the instance on the hypervisor.
Jan 23 05:13:52 np0005593232 nova_compute[250269]: 2026-01-23 10:13:52.455 250273 DEBUG oslo.service.loopingcall [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 23 05:13:52 np0005593232 nova_compute[250269]: 2026-01-23 10:13:52.455 250273 DEBUG nova.compute.manager [-] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 23 05:13:52 np0005593232 nova_compute[250269]: 2026-01-23 10:13:52.455 250273 DEBUG nova.network.neutron [-] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 23 05:13:52 np0005593232 nova_compute[250269]: 2026-01-23 10:13:52.518 250273 INFO nova.compute.manager [None req-abe8a1d9-ca50-4ece-95cf-210f5c7edc1c fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Unrescuing
Jan 23 05:13:52 np0005593232 nova_compute[250269]: 2026-01-23 10:13:52.519 250273 DEBUG oslo_concurrency.lockutils [None req-abe8a1d9-ca50-4ece-95cf-210f5c7edc1c fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Acquiring lock "refresh_cache-8056a321-13d3-4dd8-bb33-70c832c17ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 05:13:52 np0005593232 podman[339663]: 2026-01-23 10:13:52.754531289 +0000 UTC m=+0.047831350 container create 57c6e2d39b57488c4627a5d71e4dcac0e8918a34a48748f50c569a4ff8de8ee8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wilbur, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 05:13:52 np0005593232 nova_compute[250269]: 2026-01-23 10:13:52.764 250273 DEBUG nova.compute.manager [req-1db33ed3-116e-40a5-8d1c-9cbe7fffb213 req-2b5cf233-36ba-4606-8fa2-f045a02e36d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received event network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:13:52 np0005593232 nova_compute[250269]: 2026-01-23 10:13:52.766 250273 DEBUG oslo_concurrency.lockutils [req-1db33ed3-116e-40a5-8d1c-9cbe7fffb213 req-2b5cf233-36ba-4606-8fa2-f045a02e36d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:13:52 np0005593232 nova_compute[250269]: 2026-01-23 10:13:52.766 250273 DEBUG oslo_concurrency.lockutils [req-1db33ed3-116e-40a5-8d1c-9cbe7fffb213 req-2b5cf233-36ba-4606-8fa2-f045a02e36d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:13:52 np0005593232 nova_compute[250269]: 2026-01-23 10:13:52.766 250273 DEBUG oslo_concurrency.lockutils [req-1db33ed3-116e-40a5-8d1c-9cbe7fffb213 req-2b5cf233-36ba-4606-8fa2-f045a02e36d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:13:52 np0005593232 nova_compute[250269]: 2026-01-23 10:13:52.766 250273 DEBUG nova.compute.manager [req-1db33ed3-116e-40a5-8d1c-9cbe7fffb213 req-2b5cf233-36ba-4606-8fa2-f045a02e36d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] No waiting events found dispatching network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 05:13:52 np0005593232 nova_compute[250269]: 2026-01-23 10:13:52.767 250273 WARNING nova.compute.manager [req-1db33ed3-116e-40a5-8d1c-9cbe7fffb213 req-2b5cf233-36ba-4606-8fa2-f045a02e36d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received unexpected event network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 for instance with vm_state rescued and task_state unrescuing.
Jan 23 05:13:52 np0005593232 systemd[1]: Started libpod-conmon-57c6e2d39b57488c4627a5d71e4dcac0e8918a34a48748f50c569a4ff8de8ee8.scope.
Jan 23 05:13:52 np0005593232 podman[339663]: 2026-01-23 10:13:52.729733434 +0000 UTC m=+0.023033525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:13:52 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:13:52 np0005593232 podman[339663]: 2026-01-23 10:13:52.860833599 +0000 UTC m=+0.154133680 container init 57c6e2d39b57488c4627a5d71e4dcac0e8918a34a48748f50c569a4ff8de8ee8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wilbur, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:13:52 np0005593232 podman[339663]: 2026-01-23 10:13:52.86788747 +0000 UTC m=+0.161187531 container start 57c6e2d39b57488c4627a5d71e4dcac0e8918a34a48748f50c569a4ff8de8ee8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wilbur, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 23 05:13:52 np0005593232 podman[339663]: 2026-01-23 10:13:52.873893021 +0000 UTC m=+0.167193102 container attach 57c6e2d39b57488c4627a5d71e4dcac0e8918a34a48748f50c569a4ff8de8ee8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wilbur, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 23 05:13:52 np0005593232 systemd[1]: libpod-57c6e2d39b57488c4627a5d71e4dcac0e8918a34a48748f50c569a4ff8de8ee8.scope: Deactivated successfully.
Jan 23 05:13:52 np0005593232 angry_wilbur[339680]: 167 167
Jan 23 05:13:52 np0005593232 conmon[339680]: conmon 57c6e2d39b57488c4627 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-57c6e2d39b57488c4627a5d71e4dcac0e8918a34a48748f50c569a4ff8de8ee8.scope/container/memory.events
Jan 23 05:13:52 np0005593232 podman[339663]: 2026-01-23 10:13:52.880175849 +0000 UTC m=+0.173475910 container died 57c6e2d39b57488c4627a5d71e4dcac0e8918a34a48748f50c569a4ff8de8ee8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wilbur, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:13:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:13:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:52.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:13:52 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7bc0b0d07d4ae825c8bba85eb51f1f32675c470e3613a8624c356a8b570b622b-merged.mount: Deactivated successfully.
Jan 23 05:13:52 np0005593232 podman[339663]: 2026-01-23 10:13:52.920118504 +0000 UTC m=+0.213418565 container remove 57c6e2d39b57488c4627a5d71e4dcac0e8918a34a48748f50c569a4ff8de8ee8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_wilbur, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:13:52 np0005593232 systemd[1]: libpod-conmon-57c6e2d39b57488c4627a5d71e4dcac0e8918a34a48748f50c569a4ff8de8ee8.scope: Deactivated successfully.
Jan 23 05:13:53 np0005593232 nova_compute[250269]: 2026-01-23 10:13:53.085 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Updating instance_info_cache with network_info: [{"id": "5247b656-d92f-4246-8db1-32dd4ca770b1", "address": "fa:16:3e:7a:b9:0c", "network": {"id": "00bd3319-bfe5-4acd-b2e4-17830ee847f9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-921943436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a6ba16c4b9d49d3bc24cd7b44935d1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5247b656-d9", "ovs_interfaceid": "5247b656-d92f-4246-8db1-32dd4ca770b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 05:13:53 np0005593232 nova_compute[250269]: 2026-01-23 10:13:53.115 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-8056a321-13d3-4dd8-bb33-70c832c17ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 05:13:53 np0005593232 nova_compute[250269]: 2026-01-23 10:13:53.115 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 23 05:13:53 np0005593232 nova_compute[250269]: 2026-01-23 10:13:53.115 250273 DEBUG oslo_concurrency.lockutils [None req-abe8a1d9-ca50-4ece-95cf-210f5c7edc1c fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Acquired lock "refresh_cache-8056a321-13d3-4dd8-bb33-70c832c17ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 05:13:53 np0005593232 nova_compute[250269]: 2026-01-23 10:13:53.116 250273 DEBUG nova.network.neutron [None req-abe8a1d9-ca50-4ece-95cf-210f5c7edc1c fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 05:13:53 np0005593232 nova_compute[250269]: 2026-01-23 10:13:53.117 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:13:53 np0005593232 nova_compute[250269]: 2026-01-23 10:13:53.118 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:13:53 np0005593232 nova_compute[250269]: 2026-01-23 10:13:53.118 250273 DEBUG nova.network.neutron [-] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 05:13:53 np0005593232 nova_compute[250269]: 2026-01-23 10:13:53.119 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:13:53 np0005593232 podman[339703]: 2026-01-23 10:13:53.145459087 +0000 UTC m=+0.061501468 container create 199ebcef7d19cc77eb3da6864b23a3d8324e62b590509ca5b7696a5479bd346a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_banzai, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 05:13:53 np0005593232 nova_compute[250269]: 2026-01-23 10:13:53.176 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:13:53 np0005593232 nova_compute[250269]: 2026-01-23 10:13:53.177 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:13:53 np0005593232 nova_compute[250269]: 2026-01-23 10:13:53.177 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:13:53 np0005593232 nova_compute[250269]: 2026-01-23 10:13:53.178 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 05:13:53 np0005593232 nova_compute[250269]: 2026-01-23 10:13:53.178 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:13:53 np0005593232 systemd[1]: Started libpod-conmon-199ebcef7d19cc77eb3da6864b23a3d8324e62b590509ca5b7696a5479bd346a.scope.
Jan 23 05:13:53 np0005593232 podman[339703]: 2026-01-23 10:13:53.124119851 +0000 UTC m=+0.040162242 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:13:53 np0005593232 nova_compute[250269]: 2026-01-23 10:13:53.218 250273 INFO nova.compute.manager [-] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Took 0.76 seconds to deallocate network for instance.
Jan 23 05:13:53 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:13:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b2010187ef3bd391b98063bcc95f87ade1a37f93ba6cdaaef5b0ff2f633722/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:13:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b2010187ef3bd391b98063bcc95f87ade1a37f93ba6cdaaef5b0ff2f633722/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:13:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b2010187ef3bd391b98063bcc95f87ade1a37f93ba6cdaaef5b0ff2f633722/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:13:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b2010187ef3bd391b98063bcc95f87ade1a37f93ba6cdaaef5b0ff2f633722/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:13:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17b2010187ef3bd391b98063bcc95f87ade1a37f93ba6cdaaef5b0ff2f633722/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:13:53 np0005593232 podman[339703]: 2026-01-23 10:13:53.264573962 +0000 UTC m=+0.180616353 container init 199ebcef7d19cc77eb3da6864b23a3d8324e62b590509ca5b7696a5479bd346a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_banzai, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 05:13:53 np0005593232 podman[339703]: 2026-01-23 10:13:53.273229798 +0000 UTC m=+0.189272169 container start 199ebcef7d19cc77eb3da6864b23a3d8324e62b590509ca5b7696a5479bd346a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_banzai, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:13:53 np0005593232 podman[339703]: 2026-01-23 10:13:53.276317415 +0000 UTC m=+0.192359796 container attach 199ebcef7d19cc77eb3da6864b23a3d8324e62b590509ca5b7696a5479bd346a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_banzai, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:13:53 np0005593232 nova_compute[250269]: 2026-01-23 10:13:53.280 250273 DEBUG oslo_concurrency.lockutils [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:13:53 np0005593232 nova_compute[250269]: 2026-01-23 10:13:53.295 250273 DEBUG oslo_concurrency.lockutils [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:13:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2519: 321 pgs: 321 active+clean; 642 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.4 MiB/s wr, 154 op/s
Jan 23 05:13:53 np0005593232 nova_compute[250269]: 2026-01-23 10:13:53.433 250273 DEBUG oslo_concurrency.processutils [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:13:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:13:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3156948847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:13:53 np0005593232 nova_compute[250269]: 2026-01-23 10:13:53.646 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:13:53 np0005593232 nova_compute[250269]: 2026-01-23 10:13:53.879 250273 DEBUG oslo_concurrency.processutils [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:13:53 np0005593232 nova_compute[250269]: 2026-01-23 10:13:53.886 250273 DEBUG nova.compute.provider_tree [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:13:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:54.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:54 np0005593232 sleepy_banzai[339720]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:13:54 np0005593232 sleepy_banzai[339720]: --> relative data size: 1.0
Jan 23 05:13:54 np0005593232 sleepy_banzai[339720]: --> All data devices are unavailable
Jan 23 05:13:54 np0005593232 systemd[1]: libpod-199ebcef7d19cc77eb3da6864b23a3d8324e62b590509ca5b7696a5479bd346a.scope: Deactivated successfully.
Jan 23 05:13:54 np0005593232 conmon[339720]: conmon 199ebcef7d19cc77eb3d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-199ebcef7d19cc77eb3da6864b23a3d8324e62b590509ca5b7696a5479bd346a.scope/container/memory.events
Jan 23 05:13:54 np0005593232 podman[339781]: 2026-01-23 10:13:54.15595365 +0000 UTC m=+0.025499835 container died 199ebcef7d19cc77eb3da6864b23a3d8324e62b590509ca5b7696a5479bd346a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_banzai, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:13:54 np0005593232 systemd[1]: var-lib-containers-storage-overlay-17b2010187ef3bd391b98063bcc95f87ade1a37f93ba6cdaaef5b0ff2f633722-merged.mount: Deactivated successfully.
Jan 23 05:13:54 np0005593232 podman[339781]: 2026-01-23 10:13:54.211630062 +0000 UTC m=+0.081176227 container remove 199ebcef7d19cc77eb3da6864b23a3d8324e62b590509ca5b7696a5479bd346a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:13:54 np0005593232 systemd[1]: libpod-conmon-199ebcef7d19cc77eb3da6864b23a3d8324e62b590509ca5b7696a5479bd346a.scope: Deactivated successfully.
Jan 23 05:13:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:13:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:54.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:13:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2520: 321 pgs: 321 active+clean; 600 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 144 op/s
Jan 23 05:13:55 np0005593232 podman[339937]: 2026-01-23 10:13:55.525416772 +0000 UTC m=+0.038262638 container create 8970deb30a709fa688cc4e7d85db6fc0264bbc894b7fe6422e13a41b13b3b32c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:13:55 np0005593232 systemd[1]: Started libpod-conmon-8970deb30a709fa688cc4e7d85db6fc0264bbc894b7fe6422e13a41b13b3b32c.scope.
Jan 23 05:13:55 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:13:55 np0005593232 podman[339937]: 2026-01-23 10:13:55.593673161 +0000 UTC m=+0.106519057 container init 8970deb30a709fa688cc4e7d85db6fc0264bbc894b7fe6422e13a41b13b3b32c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_kapitsa, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 23 05:13:55 np0005593232 podman[339937]: 2026-01-23 10:13:55.601836183 +0000 UTC m=+0.114682049 container start 8970deb30a709fa688cc4e7d85db6fc0264bbc894b7fe6422e13a41b13b3b32c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_kapitsa, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:13:55 np0005593232 podman[339937]: 2026-01-23 10:13:55.508354977 +0000 UTC m=+0.021200863 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:13:55 np0005593232 podman[339937]: 2026-01-23 10:13:55.605388314 +0000 UTC m=+0.118234200 container attach 8970deb30a709fa688cc4e7d85db6fc0264bbc894b7fe6422e13a41b13b3b32c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:13:55 np0005593232 vigorous_kapitsa[339959]: 167 167
Jan 23 05:13:55 np0005593232 systemd[1]: libpod-8970deb30a709fa688cc4e7d85db6fc0264bbc894b7fe6422e13a41b13b3b32c.scope: Deactivated successfully.
Jan 23 05:13:55 np0005593232 podman[339978]: 2026-01-23 10:13:55.645380981 +0000 UTC m=+0.021257275 container died 8970deb30a709fa688cc4e7d85db6fc0264bbc894b7fe6422e13a41b13b3b32c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 05:13:55 np0005593232 systemd[1]: var-lib-containers-storage-overlay-0e3ab676cf68c7e13d6bdf9bd062d9892f52264cfe400c77db158f0e7e50f40a-merged.mount: Deactivated successfully.
Jan 23 05:13:55 np0005593232 podman[339951]: 2026-01-23 10:13:55.678949435 +0000 UTC m=+0.119974351 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:13:55 np0005593232 podman[339978]: 2026-01-23 10:13:55.686791517 +0000 UTC m=+0.062667811 container remove 8970deb30a709fa688cc4e7d85db6fc0264bbc894b7fe6422e13a41b13b3b32c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_kapitsa, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 05:13:55 np0005593232 systemd[1]: libpod-conmon-8970deb30a709fa688cc4e7d85db6fc0264bbc894b7fe6422e13a41b13b3b32c.scope: Deactivated successfully.
Jan 23 05:13:55 np0005593232 podman[340005]: 2026-01-23 10:13:55.862748487 +0000 UTC m=+0.045067751 container create cb1ad9dbdf3ce3921fa4432e14f00200bb85d558121e36197c5b6db62f2409ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:13:55 np0005593232 systemd[1]: Started libpod-conmon-cb1ad9dbdf3ce3921fa4432e14f00200bb85d558121e36197c5b6db62f2409ad.scope.
Jan 23 05:13:55 np0005593232 podman[340005]: 2026-01-23 10:13:55.842674367 +0000 UTC m=+0.024993661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:13:55 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:13:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60ce798909ddd45a94decd15a3758daade8e9e11030959a6a5ebee368f946bca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:13:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60ce798909ddd45a94decd15a3758daade8e9e11030959a6a5ebee368f946bca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:13:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60ce798909ddd45a94decd15a3758daade8e9e11030959a6a5ebee368f946bca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:13:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60ce798909ddd45a94decd15a3758daade8e9e11030959a6a5ebee368f946bca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:13:55 np0005593232 podman[340005]: 2026-01-23 10:13:55.974431381 +0000 UTC m=+0.156750675 container init cb1ad9dbdf3ce3921fa4432e14f00200bb85d558121e36197c5b6db62f2409ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gagarin, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:13:55 np0005593232 podman[340005]: 2026-01-23 10:13:55.984051214 +0000 UTC m=+0.166370488 container start cb1ad9dbdf3ce3921fa4432e14f00200bb85d558121e36197c5b6db62f2409ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gagarin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:13:55 np0005593232 podman[340005]: 2026-01-23 10:13:55.991624249 +0000 UTC m=+0.173943513 container attach cb1ad9dbdf3ce3921fa4432e14f00200bb85d558121e36197c5b6db62f2409ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gagarin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:13:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:56.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]: {
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:    "0": [
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:        {
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:            "devices": [
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:                "/dev/loop3"
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:            ],
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:            "lv_name": "ceph_lv0",
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:            "lv_size": "7511998464",
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:            "name": "ceph_lv0",
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:            "tags": {
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:                "ceph.cluster_name": "ceph",
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:                "ceph.crush_device_class": "",
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:                "ceph.encrypted": "0",
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:                "ceph.osd_id": "0",
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:                "ceph.type": "block",
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:                "ceph.vdo": "0"
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:            },
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:            "type": "block",
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:            "vg_name": "ceph_vg0"
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:        }
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]:    ]
Jan 23 05:13:56 np0005593232 friendly_gagarin[340021]: }
Jan 23 05:13:56 np0005593232 systemd[1]: libpod-cb1ad9dbdf3ce3921fa4432e14f00200bb85d558121e36197c5b6db62f2409ad.scope: Deactivated successfully.
Jan 23 05:13:56 np0005593232 podman[340005]: 2026-01-23 10:13:56.835698223 +0000 UTC m=+1.018017497 container died cb1ad9dbdf3ce3921fa4432e14f00200bb85d558121e36197c5b6db62f2409ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:13:56 np0005593232 nova_compute[250269]: 2026-01-23 10:13:56.875 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:56 np0005593232 systemd[1]: var-lib-containers-storage-overlay-60ce798909ddd45a94decd15a3758daade8e9e11030959a6a5ebee368f946bca-merged.mount: Deactivated successfully.
Jan 23 05:13:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:56.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:56 np0005593232 nova_compute[250269]: 2026-01-23 10:13:56.893 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:13:56 np0005593232 podman[340005]: 2026-01-23 10:13:56.919490304 +0000 UTC m=+1.101809568 container remove cb1ad9dbdf3ce3921fa4432e14f00200bb85d558121e36197c5b6db62f2409ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_gagarin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 23 05:13:56 np0005593232 systemd[1]: libpod-conmon-cb1ad9dbdf3ce3921fa4432e14f00200bb85d558121e36197c5b6db62f2409ad.scope: Deactivated successfully.
Jan 23 05:13:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:13:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2521: 321 pgs: 321 active+clean; 579 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 153 op/s
Jan 23 05:13:57 np0005593232 podman[340184]: 2026-01-23 10:13:57.495576433 +0000 UTC m=+0.023967012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:13:57 np0005593232 podman[340184]: 2026-01-23 10:13:57.694465935 +0000 UTC m=+0.222856474 container create 9275a4f24f1d4a7fa84410309c58aa34386d7872ef027438ec85012501406fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_poitras, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 05:13:57 np0005593232 systemd[1]: Started libpod-conmon-9275a4f24f1d4a7fa84410309c58aa34386d7872ef027438ec85012501406fed.scope.
Jan 23 05:13:57 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:13:57 np0005593232 podman[340184]: 2026-01-23 10:13:57.873286546 +0000 UTC m=+0.401677105 container init 9275a4f24f1d4a7fa84410309c58aa34386d7872ef027438ec85012501406fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_poitras, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:13:57 np0005593232 podman[340184]: 2026-01-23 10:13:57.885634467 +0000 UTC m=+0.414025006 container start 9275a4f24f1d4a7fa84410309c58aa34386d7872ef027438ec85012501406fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_poitras, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:13:57 np0005593232 podman[340184]: 2026-01-23 10:13:57.889105355 +0000 UTC m=+0.417495894 container attach 9275a4f24f1d4a7fa84410309c58aa34386d7872ef027438ec85012501406fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_poitras, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 23 05:13:57 np0005593232 frosty_poitras[340200]: 167 167
Jan 23 05:13:57 np0005593232 systemd[1]: libpod-9275a4f24f1d4a7fa84410309c58aa34386d7872ef027438ec85012501406fed.scope: Deactivated successfully.
Jan 23 05:13:57 np0005593232 conmon[340200]: conmon 9275a4f24f1d4a7fa844 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9275a4f24f1d4a7fa84410309c58aa34386d7872ef027438ec85012501406fed.scope/container/memory.events
Jan 23 05:13:57 np0005593232 podman[340184]: 2026-01-23 10:13:57.891476973 +0000 UTC m=+0.419867532 container died 9275a4f24f1d4a7fa84410309c58aa34386d7872ef027438ec85012501406fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_poitras, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 23 05:13:57 np0005593232 systemd[1]: var-lib-containers-storage-overlay-cdeadaab95515248996f75c6679815d62f1788dae0fdef48277088efeafb4195-merged.mount: Deactivated successfully.
Jan 23 05:13:57 np0005593232 podman[340184]: 2026-01-23 10:13:57.944568191 +0000 UTC m=+0.472958730 container remove 9275a4f24f1d4a7fa84410309c58aa34386d7872ef027438ec85012501406fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 23 05:13:57 np0005593232 systemd[1]: libpod-conmon-9275a4f24f1d4a7fa84410309c58aa34386d7872ef027438ec85012501406fed.scope: Deactivated successfully.
Jan 23 05:13:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:13:58.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:58 np0005593232 podman[340224]: 2026-01-23 10:13:58.141205088 +0000 UTC m=+0.047835190 container create b49a57958ddfe6834b5d3332168a0e94efec36c787236662b8fe075dd3a9ce9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_darwin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 05:13:58 np0005593232 systemd[1]: Started libpod-conmon-b49a57958ddfe6834b5d3332168a0e94efec36c787236662b8fe075dd3a9ce9d.scope.
Jan 23 05:13:58 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:13:58 np0005593232 podman[340224]: 2026-01-23 10:13:58.124165494 +0000 UTC m=+0.030795616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:13:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19aff2aca2a303f4ed26e6aca9f59d848b83448fa7eb9237c42e2846c8dda747/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:13:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19aff2aca2a303f4ed26e6aca9f59d848b83448fa7eb9237c42e2846c8dda747/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:13:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19aff2aca2a303f4ed26e6aca9f59d848b83448fa7eb9237c42e2846c8dda747/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:13:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19aff2aca2a303f4ed26e6aca9f59d848b83448fa7eb9237c42e2846c8dda747/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:13:58 np0005593232 podman[340224]: 2026-01-23 10:13:58.256246697 +0000 UTC m=+0.162876799 container init b49a57958ddfe6834b5d3332168a0e94efec36c787236662b8fe075dd3a9ce9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:13:58 np0005593232 podman[340224]: 2026-01-23 10:13:58.265125199 +0000 UTC m=+0.171755301 container start b49a57958ddfe6834b5d3332168a0e94efec36c787236662b8fe075dd3a9ce9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_darwin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:13:58 np0005593232 podman[340224]: 2026-01-23 10:13:58.319914236 +0000 UTC m=+0.226544358 container attach b49a57958ddfe6834b5d3332168a0e94efec36c787236662b8fe075dd3a9ce9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_darwin, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 05:13:58 np0005593232 nova_compute[250269]: 2026-01-23 10:13:58.746 250273 DEBUG nova.compute.manager [req-abf31dae-132b-4e54-8284-4e9358f6726f req-3437a501-7d86-4045-9a97-19290c0b7c29 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Received event network-vif-plugged-faf5bdcb-b5b5-4066-9cf4-a8f4014679a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:13:58 np0005593232 nova_compute[250269]: 2026-01-23 10:13:58.748 250273 DEBUG oslo_concurrency.lockutils [req-abf31dae-132b-4e54-8284-4e9358f6726f req-3437a501-7d86-4045-9a97-19290c0b7c29 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "b3f4a8f0-513b-4165-a2f5-3c01bac04576-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:13:58 np0005593232 nova_compute[250269]: 2026-01-23 10:13:58.749 250273 DEBUG oslo_concurrency.lockutils [req-abf31dae-132b-4e54-8284-4e9358f6726f req-3437a501-7d86-4045-9a97-19290c0b7c29 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b3f4a8f0-513b-4165-a2f5-3c01bac04576-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:13:58 np0005593232 nova_compute[250269]: 2026-01-23 10:13:58.749 250273 DEBUG oslo_concurrency.lockutils [req-abf31dae-132b-4e54-8284-4e9358f6726f req-3437a501-7d86-4045-9a97-19290c0b7c29 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b3f4a8f0-513b-4165-a2f5-3c01bac04576-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:13:58 np0005593232 nova_compute[250269]: 2026-01-23 10:13:58.749 250273 DEBUG nova.compute.manager [req-abf31dae-132b-4e54-8284-4e9358f6726f req-3437a501-7d86-4045-9a97-19290c0b7c29 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] No waiting events found dispatching network-vif-plugged-faf5bdcb-b5b5-4066-9cf4-a8f4014679a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:13:58 np0005593232 nova_compute[250269]: 2026-01-23 10:13:58.749 250273 WARNING nova.compute.manager [req-abf31dae-132b-4e54-8284-4e9358f6726f req-3437a501-7d86-4045-9a97-19290c0b7c29 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Received unexpected event network-vif-plugged-faf5bdcb-b5b5-4066-9cf4-a8f4014679a6 for instance with vm_state deleted and task_state None.#033[00m
Jan 23 05:13:58 np0005593232 nova_compute[250269]: 2026-01-23 10:13:58.750 250273 DEBUG nova.compute.manager [req-abf31dae-132b-4e54-8284-4e9358f6726f req-3437a501-7d86-4045-9a97-19290c0b7c29 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Received event network-vif-deleted-faf5bdcb-b5b5-4066-9cf4-a8f4014679a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:13:58 np0005593232 nova_compute[250269]: 2026-01-23 10:13:58.779 250273 DEBUG nova.scheduler.client.report [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:13:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:13:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:13:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:13:58.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:13:59 np0005593232 elated_darwin[340241]: {
Jan 23 05:13:59 np0005593232 elated_darwin[340241]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:13:59 np0005593232 elated_darwin[340241]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:13:59 np0005593232 elated_darwin[340241]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:13:59 np0005593232 elated_darwin[340241]:        "osd_id": 0,
Jan 23 05:13:59 np0005593232 elated_darwin[340241]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:13:59 np0005593232 elated_darwin[340241]:        "type": "bluestore"
Jan 23 05:13:59 np0005593232 elated_darwin[340241]:    }
Jan 23 05:13:59 np0005593232 elated_darwin[340241]: }
Jan 23 05:13:59 np0005593232 systemd[1]: libpod-b49a57958ddfe6834b5d3332168a0e94efec36c787236662b8fe075dd3a9ce9d.scope: Deactivated successfully.
Jan 23 05:13:59 np0005593232 podman[340262]: 2026-01-23 10:13:59.262715575 +0000 UTC m=+0.024551369 container died b49a57958ddfe6834b5d3332168a0e94efec36c787236662b8fe075dd3a9ce9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_darwin, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 23 05:13:59 np0005593232 systemd[1]: var-lib-containers-storage-overlay-19aff2aca2a303f4ed26e6aca9f59d848b83448fa7eb9237c42e2846c8dda747-merged.mount: Deactivated successfully.
Jan 23 05:13:59 np0005593232 podman[340262]: 2026-01-23 10:13:59.310993167 +0000 UTC m=+0.072828921 container remove b49a57958ddfe6834b5d3332168a0e94efec36c787236662b8fe075dd3a9ce9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:13:59 np0005593232 systemd[1]: libpod-conmon-b49a57958ddfe6834b5d3332168a0e94efec36c787236662b8fe075dd3a9ce9d.scope: Deactivated successfully.
Jan 23 05:13:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2522: 321 pgs: 321 active+clean; 579 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 768 KiB/s wr, 150 op/s
Jan 23 05:13:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:13:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:13:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:13:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:13:59 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 57777493-cf37-4ce6-a2da-bfdd9862c33d does not exist
Jan 23 05:13:59 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev eaaaf6bb-a8c2-4f4e-8dd6-769b44052960 does not exist
Jan 23 05:13:59 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a3702ee9-1b5d-4007-a0ec-9179e5dd60d0 does not exist
Jan 23 05:14:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:14:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:00.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:14:00 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:14:00 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:14:00 np0005593232 nova_compute[250269]: 2026-01-23 10:14:00.790 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:14:00 np0005593232 nova_compute[250269]: 2026-01-23 10:14:00.790 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:14:00 np0005593232 nova_compute[250269]: 2026-01-23 10:14:00.790 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:14:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:00.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:00 np0005593232 nova_compute[250269]: 2026-01-23 10:14:00.955 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:14:00 np0005593232 nova_compute[250269]: 2026-01-23 10:14:00.956 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4048MB free_disk=20.683975219726562GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:14:00 np0005593232 nova_compute[250269]: 2026-01-23 10:14:00.957 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:14:01 np0005593232 nova_compute[250269]: 2026-01-23 10:14:01.155 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2523: 321 pgs: 321 active+clean; 579 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 124 op/s
Jan 23 05:14:01 np0005593232 podman[340328]: 2026-01-23 10:14:01.431515601 +0000 UTC m=+0.084889523 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 23 05:14:01 np0005593232 nova_compute[250269]: 2026-01-23 10:14:01.881 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:01 np0005593232 nova_compute[250269]: 2026-01-23 10:14:01.894 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:14:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:02.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:02 np0005593232 nova_compute[250269]: 2026-01-23 10:14:02.172 250273 DEBUG oslo_concurrency.lockutils [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 8.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:14:02 np0005593232 nova_compute[250269]: 2026-01-23 10:14:02.177 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 1.220s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:14:02 np0005593232 nova_compute[250269]: 2026-01-23 10:14:02.804 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 8056a321-13d3-4dd8-bb33-70c832c17ac1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:14:02 np0005593232 nova_compute[250269]: 2026-01-23 10:14:02.805 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:14:02 np0005593232 nova_compute[250269]: 2026-01-23 10:14:02.805 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:14:02 np0005593232 nova_compute[250269]: 2026-01-23 10:14:02.809 250273 INFO nova.scheduler.client.report [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Deleted allocations for instance b3f4a8f0-513b-4165-a2f5-3c01bac04576#033[00m
Jan 23 05:14:02 np0005593232 nova_compute[250269]: 2026-01-23 10:14:02.873 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:14:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:02.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:03 np0005593232 nova_compute[250269]: 2026-01-23 10:14:03.057 250273 DEBUG nova.network.neutron [None req-abe8a1d9-ca50-4ece-95cf-210f5c7edc1c fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Updating instance_info_cache with network_info: [{"id": "5247b656-d92f-4246-8db1-32dd4ca770b1", "address": "fa:16:3e:7a:b9:0c", "network": {"id": "00bd3319-bfe5-4acd-b2e4-17830ee847f9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-921943436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a6ba16c4b9d49d3bc24cd7b44935d1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5247b656-d9", "ovs_interfaceid": "5247b656-d92f-4246-8db1-32dd4ca770b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:14:03 np0005593232 nova_compute[250269]: 2026-01-23 10:14:03.177 250273 DEBUG oslo_concurrency.lockutils [None req-abe8a1d9-ca50-4ece-95cf-210f5c7edc1c fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Releasing lock "refresh_cache-8056a321-13d3-4dd8-bb33-70c832c17ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:14:03 np0005593232 nova_compute[250269]: 2026-01-23 10:14:03.179 250273 DEBUG nova.objects.instance [None req-abe8a1d9-ca50-4ece-95cf-210f5c7edc1c fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lazy-loading 'flavor' on Instance uuid 8056a321-13d3-4dd8-bb33-70c832c17ac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:14:03 np0005593232 nova_compute[250269]: 2026-01-23 10:14:03.260 250273 DEBUG oslo_concurrency.lockutils [None req-b17c8bda-cd58-442d-9c81-f2d7143704a0 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "b3f4a8f0-513b-4165-a2f5-3c01bac04576" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 11.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:14:03 np0005593232 kernel: tap5247b656-d9 (unregistering): left promiscuous mode
Jan 23 05:14:03 np0005593232 NetworkManager[49057]: <info>  [1769163243.3038] device (tap5247b656-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:14:03 np0005593232 ovn_controller[151001]: 2026-01-23T10:14:03Z|00498|binding|INFO|Releasing lport 5247b656-d92f-4246-8db1-32dd4ca770b1 from this chassis (sb_readonly=0)
Jan 23 05:14:03 np0005593232 ovn_controller[151001]: 2026-01-23T10:14:03Z|00499|binding|INFO|Setting lport 5247b656-d92f-4246-8db1-32dd4ca770b1 down in Southbound
Jan 23 05:14:03 np0005593232 ovn_controller[151001]: 2026-01-23T10:14:03Z|00500|binding|INFO|Removing iface tap5247b656-d9 ovn-installed in OVS
Jan 23 05:14:03 np0005593232 nova_compute[250269]: 2026-01-23 10:14:03.318 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2524: 321 pgs: 321 active+clean; 579 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 25 KiB/s wr, 127 op/s
Jan 23 05:14:03 np0005593232 nova_compute[250269]: 2026-01-23 10:14:03.342 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:14:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1532447865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:14:03 np0005593232 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000085.scope: Deactivated successfully.
Jan 23 05:14:03 np0005593232 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000085.scope: Consumed 13.363s CPU time.
Jan 23 05:14:03 np0005593232 systemd-machined[215836]: Machine qemu-59-instance-00000085 terminated.
Jan 23 05:14:03 np0005593232 nova_compute[250269]: 2026-01-23 10:14:03.381 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:14:03 np0005593232 nova_compute[250269]: 2026-01-23 10:14:03.386 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.426 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:b9:0c 10.100.0.4'], port_security=['fa:16:3e:7a:b9:0c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '8056a321-13d3-4dd8-bb33-70c832c17ac1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00bd3319-bfe5-4acd-b2e4-17830ee847f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a6ba16c4b9d49d3bc24cd7b44935d1f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6fc0d424-7779-4175-b5e0-e2613de6ecef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5fb685af-2efd-4d70-8868-8a86ed4c3ca6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=5247b656-d92f-4246-8db1-32dd4ca770b1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.428 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 5247b656-d92f-4246-8db1-32dd4ca770b1 in datapath 00bd3319-bfe5-4acd-b2e4-17830ee847f9 unbound from our chassis#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.430 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 00bd3319-bfe5-4acd-b2e4-17830ee847f9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.431 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f201796e-c352-457c-8712-d3d06ba0c075]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.432 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9 namespace which is not needed anymore#033[00m
Jan 23 05:14:03 np0005593232 nova_compute[250269]: 2026-01-23 10:14:03.450 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:14:03 np0005593232 nova_compute[250269]: 2026-01-23 10:14:03.464 250273 INFO nova.virt.libvirt.driver [-] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Instance destroyed successfully.#033[00m
Jan 23 05:14:03 np0005593232 nova_compute[250269]: 2026-01-23 10:14:03.465 250273 DEBUG nova.objects.instance [None req-abe8a1d9-ca50-4ece-95cf-210f5c7edc1c fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lazy-loading 'numa_topology' on Instance uuid 8056a321-13d3-4dd8-bb33-70c832c17ac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:14:03 np0005593232 nova_compute[250269]: 2026-01-23 10:14:03.529 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:14:03 np0005593232 nova_compute[250269]: 2026-01-23 10:14:03.530 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.354s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:14:03 np0005593232 nova_compute[250269]: 2026-01-23 10:14:03.531 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:14:03 np0005593232 neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9[339280]: [NOTICE]   (339284) : haproxy version is 2.8.14-c23fe91
Jan 23 05:14:03 np0005593232 neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9[339280]: [NOTICE]   (339284) : path to executable is /usr/sbin/haproxy
Jan 23 05:14:03 np0005593232 neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9[339280]: [ALERT]    (339284) : Current worker (339286) exited with code 143 (Terminated)
Jan 23 05:14:03 np0005593232 neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9[339280]: [WARNING]  (339284) : All workers exited. Exiting... (0)
Jan 23 05:14:03 np0005593232 systemd[1]: libpod-f2c9f1ccb95afce695833d94e5b0a13790ce52da271ef84389f93bb70f93370c.scope: Deactivated successfully.
Jan 23 05:14:03 np0005593232 podman[340403]: 2026-01-23 10:14:03.558475727 +0000 UTC m=+0.044974179 container died f2c9f1ccb95afce695833d94e5b0a13790ce52da271ef84389f93bb70f93370c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:14:03 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f2c9f1ccb95afce695833d94e5b0a13790ce52da271ef84389f93bb70f93370c-userdata-shm.mount: Deactivated successfully.
Jan 23 05:14:03 np0005593232 systemd[1]: var-lib-containers-storage-overlay-76b2a203a3899ff4c7635e25901fd1c5a90c7668e7b9bbb0a8936038db62cc97-merged.mount: Deactivated successfully.
Jan 23 05:14:03 np0005593232 podman[340403]: 2026-01-23 10:14:03.602669143 +0000 UTC m=+0.089167565 container cleanup f2c9f1ccb95afce695833d94e5b0a13790ce52da271ef84389f93bb70f93370c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 23 05:14:03 np0005593232 systemd[1]: libpod-conmon-f2c9f1ccb95afce695833d94e5b0a13790ce52da271ef84389f93bb70f93370c.scope: Deactivated successfully.
Jan 23 05:14:03 np0005593232 kernel: tap5247b656-d9: entered promiscuous mode
Jan 23 05:14:03 np0005593232 NetworkManager[49057]: <info>  [1769163243.6261] manager: (tap5247b656-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/239)
Jan 23 05:14:03 np0005593232 systemd-udevd[340370]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:14:03 np0005593232 nova_compute[250269]: 2026-01-23 10:14:03.629 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:03 np0005593232 ovn_controller[151001]: 2026-01-23T10:14:03Z|00501|binding|INFO|Claiming lport 5247b656-d92f-4246-8db1-32dd4ca770b1 for this chassis.
Jan 23 05:14:03 np0005593232 ovn_controller[151001]: 2026-01-23T10:14:03Z|00502|binding|INFO|5247b656-d92f-4246-8db1-32dd4ca770b1: Claiming fa:16:3e:7a:b9:0c 10.100.0.4
Jan 23 05:14:03 np0005593232 NetworkManager[49057]: <info>  [1769163243.6391] device (tap5247b656-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:14:03 np0005593232 NetworkManager[49057]: <info>  [1769163243.6401] device (tap5247b656-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:14:03 np0005593232 nova_compute[250269]: 2026-01-23 10:14:03.646 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:03 np0005593232 ovn_controller[151001]: 2026-01-23T10:14:03Z|00503|binding|INFO|Setting lport 5247b656-d92f-4246-8db1-32dd4ca770b1 ovn-installed in OVS
Jan 23 05:14:03 np0005593232 nova_compute[250269]: 2026-01-23 10:14:03.648 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:03 np0005593232 nova_compute[250269]: 2026-01-23 10:14:03.652 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:03 np0005593232 systemd-machined[215836]: New machine qemu-60-instance-00000085.
Jan 23 05:14:03 np0005593232 systemd[1]: Started Virtual Machine qemu-60-instance-00000085.
Jan 23 05:14:03 np0005593232 podman[340438]: 2026-01-23 10:14:03.679461445 +0000 UTC m=+0.049301342 container remove f2c9f1ccb95afce695833d94e5b0a13790ce52da271ef84389f93bb70f93370c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.685 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b24bf40c-3260-4ae5-bad9-aab00b5dde52]: (4, ('Fri Jan 23 10:14:03 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9 (f2c9f1ccb95afce695833d94e5b0a13790ce52da271ef84389f93bb70f93370c)\nf2c9f1ccb95afce695833d94e5b0a13790ce52da271ef84389f93bb70f93370c\nFri Jan 23 10:14:03 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9 (f2c9f1ccb95afce695833d94e5b0a13790ce52da271ef84389f93bb70f93370c)\nf2c9f1ccb95afce695833d94e5b0a13790ce52da271ef84389f93bb70f93370c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.687 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[731b9bde-7ab2-481c-9810-ef5dbe9356c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.688 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00bd3319-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.707 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:b9:0c 10.100.0.4'], port_security=['fa:16:3e:7a:b9:0c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '8056a321-13d3-4dd8-bb33-70c832c17ac1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00bd3319-bfe5-4acd-b2e4-17830ee847f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a6ba16c4b9d49d3bc24cd7b44935d1f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '6fc0d424-7779-4175-b5e0-e2613de6ecef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5fb685af-2efd-4d70-8868-8a86ed4c3ca6, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=5247b656-d92f-4246-8db1-32dd4ca770b1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:14:03 np0005593232 ovn_controller[151001]: 2026-01-23T10:14:03Z|00504|binding|INFO|Setting lport 5247b656-d92f-4246-8db1-32dd4ca770b1 up in Southbound
Jan 23 05:14:03 np0005593232 nova_compute[250269]: 2026-01-23 10:14:03.784 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:03 np0005593232 kernel: tap00bd3319-b0: left promiscuous mode
Jan 23 05:14:03 np0005593232 nova_compute[250269]: 2026-01-23 10:14:03.803 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.807 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5ccec50e-1306-4225-962a-ab950c675810]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.829 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e86ee190-07f0-47e5-8e87-f17b4442d64e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.831 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f132e45c-4a9c-4e1b-950a-b6b6fde84e7f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.849 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b2fde161-b882-4281-a89c-092838ad9ba6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 714261, 'reachable_time': 36259, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340467, 'error': None, 'target': 'ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:03 np0005593232 systemd[1]: run-netns-ovnmeta\x2d00bd3319\x2dbfe5\x2d4acd\x2db2e4\x2d17830ee847f9.mount: Deactivated successfully.
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.854 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.855 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[96877e43-7be7-4760-870e-99afd3afeb20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.855 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 5247b656-d92f-4246-8db1-32dd4ca770b1 in datapath 00bd3319-bfe5-4acd-b2e4-17830ee847f9 bound to our chassis#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.857 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00bd3319-bfe5-4acd-b2e4-17830ee847f9#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.868 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2c802f68-dcad-402b-9778-8083b942aaf6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.869 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap00bd3319-b1 in ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.870 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap00bd3319-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.871 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e2516d16-ccfa-4814-a734-152fa061affd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.872 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[38be0118-aaa1-4a24-b65b-cd4e5b241890]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.882 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[de7ab82e-daaf-4157-b99b-2854cd0f3773]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.907 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c7f26fa0-70b4-4185-a6cc-4ef276f899fa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.940 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc16661-29bb-48e7-8b26-7c3d7bd96929]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.945 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e256a5-656c-43f8-985d-e503ac804ed1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:03 np0005593232 NetworkManager[49057]: <info>  [1769163243.9470] manager: (tap00bd3319-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/240)
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.980 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[70e10810-103e-4e9e-b60c-4d537f296ecc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:03.983 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[339b8a5f-a096-483c-bbf3-2fe12a24b81b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:04 np0005593232 NetworkManager[49057]: <info>  [1769163244.0062] device (tap00bd3319-b0): carrier: link connected
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:04.011 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[d2587632-e4d5-4838-ba11-7a8fb3eb7536]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:04.027 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[703a953c-5315-44e8-8b32-510ae046edd8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00bd3319-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:83:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 715714, 'reachable_time': 28974, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340492, 'error': None, 'target': 'ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:04.042 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[541e7da2-1b91-4388-a2f7-79be84a44a08]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6b:83f8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 715714, 'tstamp': 715714}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340493, 'error': None, 'target': 'ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:04.058 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[351125fc-4391-4cb8-82db-9c0df6abf7ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00bd3319-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:83:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 152], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 715714, 'reachable_time': 28974, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 340494, 'error': None, 'target': 'ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:04.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:04.088 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[49e0376b-cfd7-4cff-a078-84c6e023e77e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:04.148 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[45bca4aa-9b09-4e7e-a7d6-76f2a07735f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:04.150 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00bd3319-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:04.150 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:04.151 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00bd3319-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.153 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:04 np0005593232 kernel: tap00bd3319-b0: entered promiscuous mode
Jan 23 05:14:04 np0005593232 NetworkManager[49057]: <info>  [1769163244.1539] manager: (tap00bd3319-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/241)
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:04.155 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00bd3319-b0, col_values=(('external_ids', {'iface-id': '1788b5e6-601b-4e3d-a584-c0138c3308f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:14:04 np0005593232 ovn_controller[151001]: 2026-01-23T10:14:04Z|00505|binding|INFO|Releasing lport 1788b5e6-601b-4e3d-a584-c0138c3308f6 from this chassis (sb_readonly=1)
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.156 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.173 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:04.175 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/00bd3319-bfe5-4acd-b2e4-17830ee847f9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/00bd3319-bfe5-4acd-b2e4-17830ee847f9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:04.176 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0dfdc1c6-425d-42be-a207-5c3d1186e85e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:04.177 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-00bd3319-bfe5-4acd-b2e4-17830ee847f9
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/00bd3319-bfe5-4acd-b2e4-17830ee847f9.pid.haproxy
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 00bd3319-bfe5-4acd-b2e4-17830ee847f9
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:14:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:04.178 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9', 'env', 'PROCESS_TAG=haproxy-00bd3319-bfe5-4acd-b2e4-17830ee847f9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/00bd3319-bfe5-4acd-b2e4-17830ee847f9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.299 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Removed pending event for 8056a321-13d3-4dd8-bb33-70c832c17ac1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.300 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163244.299179, 8056a321-13d3-4dd8-bb33-70c832c17ac1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.300 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.494 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.532 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.537 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.585 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.585 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163244.3015158, 8056a321-13d3-4dd8-bb33-70c832c17ac1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.585 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] VM Started (Lifecycle Event)#033[00m
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.587 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.587 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 23 05:14:04 np0005593232 podman[340586]: 2026-01-23 10:14:04.594628549 +0000 UTC m=+0.051312279 container create 0e7d9a35f38619c7d77ccbc62a2cb0cdc46a1d4d00f4a40f2544a99998bed99c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 23 05:14:04 np0005593232 systemd[1]: Started libpod-conmon-0e7d9a35f38619c7d77ccbc62a2cb0cdc46a1d4d00f4a40f2544a99998bed99c.scope.
Jan 23 05:14:04 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:14:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d769abd82db613460fda240ca2caa056af40b7db88deac7a8deb20e43cd22583/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:14:04 np0005593232 podman[340586]: 2026-01-23 10:14:04.569552087 +0000 UTC m=+0.026235847 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:14:04 np0005593232 podman[340586]: 2026-01-23 10:14:04.676465135 +0000 UTC m=+0.133148865 container init 0e7d9a35f38619c7d77ccbc62a2cb0cdc46a1d4d00f4a40f2544a99998bed99c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 23 05:14:04 np0005593232 podman[340586]: 2026-01-23 10:14:04.683858875 +0000 UTC m=+0.140542605 container start 0e7d9a35f38619c7d77ccbc62a2cb0cdc46a1d4d00f4a40f2544a99998bed99c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:14:04 np0005593232 neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9[340602]: [NOTICE]   (340606) : New worker (340608) forked
Jan 23 05:14:04 np0005593232 neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9[340602]: [NOTICE]   (340606) : Loading success.
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.786 250273 DEBUG nova.compute.manager [req-f4f9d0ee-ab06-4f4c-b591-74476667184b req-57835c56-ed57-4c5e-a7fd-5006469d706b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received event network-vif-unplugged-5247b656-d92f-4246-8db1-32dd4ca770b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.786 250273 DEBUG oslo_concurrency.lockutils [req-f4f9d0ee-ab06-4f4c-b591-74476667184b req-57835c56-ed57-4c5e-a7fd-5006469d706b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.786 250273 DEBUG oslo_concurrency.lockutils [req-f4f9d0ee-ab06-4f4c-b591-74476667184b req-57835c56-ed57-4c5e-a7fd-5006469d706b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.787 250273 DEBUG oslo_concurrency.lockutils [req-f4f9d0ee-ab06-4f4c-b591-74476667184b req-57835c56-ed57-4c5e-a7fd-5006469d706b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.787 250273 DEBUG nova.compute.manager [req-f4f9d0ee-ab06-4f4c-b591-74476667184b req-57835c56-ed57-4c5e-a7fd-5006469d706b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] No waiting events found dispatching network-vif-unplugged-5247b656-d92f-4246-8db1-32dd4ca770b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.787 250273 WARNING nova.compute.manager [req-f4f9d0ee-ab06-4f4c-b591-74476667184b req-57835c56-ed57-4c5e-a7fd-5006469d706b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received unexpected event network-vif-unplugged-5247b656-d92f-4246-8db1-32dd4ca770b1 for instance with vm_state rescued and task_state unrescuing.#033[00m
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.806 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.807 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.812 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.850 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Jan 23 05:14:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:14:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:04.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:14:04 np0005593232 nova_compute[250269]: 2026-01-23 10:14:04.956 250273 DEBUG nova.compute.manager [None req-abe8a1d9-ca50-4ece-95cf-210f5c7edc1c fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:14:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2525: 321 pgs: 321 active+clean; 579 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 703 KiB/s rd, 22 KiB/s wr, 63 op/s
Jan 23 05:14:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:06.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:06 np0005593232 nova_compute[250269]: 2026-01-23 10:14:06.819 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769163231.8186777, b3f4a8f0-513b-4165-a2f5-3c01bac04576 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:14:06 np0005593232 nova_compute[250269]: 2026-01-23 10:14:06.820 250273 INFO nova.compute.manager [-] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:14:06 np0005593232 nova_compute[250269]: 2026-01-23 10:14:06.863 250273 DEBUG nova.compute.manager [None req-91ecfc03-7ae1-43d4-8385-873b5f4fe13b - - - - - -] [instance: b3f4a8f0-513b-4165-a2f5-3c01bac04576] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:14:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:14:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:06.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:06 np0005593232 nova_compute[250269]: 2026-01-23 10:14:06.953 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:07 np0005593232 nova_compute[250269]: 2026-01-23 10:14:07.136 250273 DEBUG nova.compute.manager [req-20d48eba-3466-4721-8bde-c0d9e61e18b0 req-477b6f0b-41b5-46d6-91d1-c3570293d2e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received event network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:14:07 np0005593232 nova_compute[250269]: 2026-01-23 10:14:07.136 250273 DEBUG oslo_concurrency.lockutils [req-20d48eba-3466-4721-8bde-c0d9e61e18b0 req-477b6f0b-41b5-46d6-91d1-c3570293d2e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:14:07 np0005593232 nova_compute[250269]: 2026-01-23 10:14:07.137 250273 DEBUG oslo_concurrency.lockutils [req-20d48eba-3466-4721-8bde-c0d9e61e18b0 req-477b6f0b-41b5-46d6-91d1-c3570293d2e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:14:07 np0005593232 nova_compute[250269]: 2026-01-23 10:14:07.137 250273 DEBUG oslo_concurrency.lockutils [req-20d48eba-3466-4721-8bde-c0d9e61e18b0 req-477b6f0b-41b5-46d6-91d1-c3570293d2e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:14:07 np0005593232 nova_compute[250269]: 2026-01-23 10:14:07.137 250273 DEBUG nova.compute.manager [req-20d48eba-3466-4721-8bde-c0d9e61e18b0 req-477b6f0b-41b5-46d6-91d1-c3570293d2e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] No waiting events found dispatching network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:14:07 np0005593232 nova_compute[250269]: 2026-01-23 10:14:07.138 250273 WARNING nova.compute.manager [req-20d48eba-3466-4721-8bde-c0d9e61e18b0 req-477b6f0b-41b5-46d6-91d1-c3570293d2e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received unexpected event network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:14:07 np0005593232 nova_compute[250269]: 2026-01-23 10:14:07.138 250273 DEBUG nova.compute.manager [req-20d48eba-3466-4721-8bde-c0d9e61e18b0 req-477b6f0b-41b5-46d6-91d1-c3570293d2e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received event network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:14:07 np0005593232 nova_compute[250269]: 2026-01-23 10:14:07.138 250273 DEBUG oslo_concurrency.lockutils [req-20d48eba-3466-4721-8bde-c0d9e61e18b0 req-477b6f0b-41b5-46d6-91d1-c3570293d2e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:14:07 np0005593232 nova_compute[250269]: 2026-01-23 10:14:07.138 250273 DEBUG oslo_concurrency.lockutils [req-20d48eba-3466-4721-8bde-c0d9e61e18b0 req-477b6f0b-41b5-46d6-91d1-c3570293d2e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:14:07 np0005593232 nova_compute[250269]: 2026-01-23 10:14:07.139 250273 DEBUG oslo_concurrency.lockutils [req-20d48eba-3466-4721-8bde-c0d9e61e18b0 req-477b6f0b-41b5-46d6-91d1-c3570293d2e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:14:07 np0005593232 nova_compute[250269]: 2026-01-23 10:14:07.139 250273 DEBUG nova.compute.manager [req-20d48eba-3466-4721-8bde-c0d9e61e18b0 req-477b6f0b-41b5-46d6-91d1-c3570293d2e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] No waiting events found dispatching network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:14:07 np0005593232 nova_compute[250269]: 2026-01-23 10:14:07.139 250273 WARNING nova.compute.manager [req-20d48eba-3466-4721-8bde-c0d9e61e18b0 req-477b6f0b-41b5-46d6-91d1-c3570293d2e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received unexpected event network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:14:07 np0005593232 nova_compute[250269]: 2026-01-23 10:14:07.140 250273 DEBUG nova.compute.manager [req-20d48eba-3466-4721-8bde-c0d9e61e18b0 req-477b6f0b-41b5-46d6-91d1-c3570293d2e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received event network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:14:07 np0005593232 nova_compute[250269]: 2026-01-23 10:14:07.140 250273 DEBUG oslo_concurrency.lockutils [req-20d48eba-3466-4721-8bde-c0d9e61e18b0 req-477b6f0b-41b5-46d6-91d1-c3570293d2e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:14:07 np0005593232 nova_compute[250269]: 2026-01-23 10:14:07.140 250273 DEBUG oslo_concurrency.lockutils [req-20d48eba-3466-4721-8bde-c0d9e61e18b0 req-477b6f0b-41b5-46d6-91d1-c3570293d2e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:14:07 np0005593232 nova_compute[250269]: 2026-01-23 10:14:07.141 250273 DEBUG oslo_concurrency.lockutils [req-20d48eba-3466-4721-8bde-c0d9e61e18b0 req-477b6f0b-41b5-46d6-91d1-c3570293d2e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:14:07 np0005593232 nova_compute[250269]: 2026-01-23 10:14:07.141 250273 DEBUG nova.compute.manager [req-20d48eba-3466-4721-8bde-c0d9e61e18b0 req-477b6f0b-41b5-46d6-91d1-c3570293d2e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] No waiting events found dispatching network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:14:07 np0005593232 nova_compute[250269]: 2026-01-23 10:14:07.141 250273 WARNING nova.compute.manager [req-20d48eba-3466-4721-8bde-c0d9e61e18b0 req-477b6f0b-41b5-46d6-91d1-c3570293d2e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received unexpected event network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:14:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2526: 321 pgs: 321 active+clean; 564 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 24 KiB/s wr, 85 op/s
Jan 23 05:14:07 np0005593232 ovn_controller[151001]: 2026-01-23T10:14:07Z|00506|binding|INFO|Releasing lport 1788b5e6-601b-4e3d-a584-c0138c3308f6 from this chassis (sb_readonly=0)
Jan 23 05:14:07 np0005593232 nova_compute[250269]: 2026-01-23 10:14:07.411 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:14:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:14:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:14:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:14:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:14:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:14:07 np0005593232 ovn_controller[151001]: 2026-01-23T10:14:07Z|00507|binding|INFO|Releasing lport 1788b5e6-601b-4e3d-a584-c0138c3308f6 from this chassis (sb_readonly=0)
Jan 23 05:14:07 np0005593232 nova_compute[250269]: 2026-01-23 10:14:07.634 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:08.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Jan 23 05:14:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Jan 23 05:14:08 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Jan 23 05:14:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:08.606 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:14:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:08.607 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:14:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:08.608 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:14:08 np0005593232 nova_compute[250269]: 2026-01-23 10:14:08.610 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:14:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:08.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:14:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2528: 321 pgs: 321 active+clean; 534 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 1.1 MiB/s wr, 229 op/s
Jan 23 05:14:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Jan 23 05:14:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Jan 23 05:14:09 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Jan 23 05:14:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:14:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:10.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:14:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Jan 23 05:14:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Jan 23 05:14:10 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Jan 23 05:14:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:14:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:10.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:14:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2531: 321 pgs: 321 active+clean; 534 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 1.9 MiB/s wr, 336 op/s
Jan 23 05:14:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:14:11 np0005593232 nova_compute[250269]: 2026-01-23 10:14:11.956 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:12.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:14:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:12.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:14:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2532: 321 pgs: 321 active+clean; 532 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 11 MiB/s rd, 7.8 MiB/s wr, 355 op/s
Jan 23 05:14:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:14.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:14:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:14.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:14:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2533: 321 pgs: 321 active+clean; 532 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 8.3 MiB/s rd, 6.9 MiB/s wr, 289 op/s
Jan 23 05:14:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:16.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:14:16 np0005593232 nova_compute[250269]: 2026-01-23 10:14:16.959 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:16.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2534: 321 pgs: 321 active+clean; 532 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 4.5 MiB/s wr, 114 op/s
Jan 23 05:14:17 np0005593232 ovn_controller[151001]: 2026-01-23T10:14:17Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:b9:0c 10.100.0.4
Jan 23 05:14:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:18.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Jan 23 05:14:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Jan 23 05:14:18 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Jan 23 05:14:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:18.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2536: 321 pgs: 321 active+clean; 540 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.0 MiB/s wr, 201 op/s
Jan 23 05:14:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:20.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:20.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2537: 321 pgs: 321 active+clean; 540 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 6.0 MiB/s wr, 172 op/s
Jan 23 05:14:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:14:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Jan 23 05:14:21 np0005593232 nova_compute[250269]: 2026-01-23 10:14:21.960 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:21 np0005593232 nova_compute[250269]: 2026-01-23 10:14:21.963 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Jan 23 05:14:21 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Jan 23 05:14:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:14:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:22.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:14:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:14:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:22.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:14:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2539: 321 pgs: 321 active+clean; 522 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 840 KiB/s rd, 5.2 MiB/s wr, 174 op/s
Jan 23 05:14:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:24.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:24.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2540: 321 pgs: 321 active+clean; 522 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 780 KiB/s rd, 5.2 MiB/s wr, 172 op/s
Jan 23 05:14:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:26.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:26 np0005593232 podman[340680]: 2026-01-23 10:14:26.438730168 +0000 UTC m=+0.093206689 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:14:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:14:26 np0005593232 nova_compute[250269]: 2026-01-23 10:14:26.964 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:14:26 np0005593232 nova_compute[250269]: 2026-01-23 10:14:26.966 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:14:26 np0005593232 nova_compute[250269]: 2026-01-23 10:14:26.967 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 23 05:14:26 np0005593232 nova_compute[250269]: 2026-01-23 10:14:26.967 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 23 05:14:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:14:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:26.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:14:27 np0005593232 nova_compute[250269]: 2026-01-23 10:14:27.097 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:27 np0005593232 nova_compute[250269]: 2026-01-23 10:14:27.098 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 23 05:14:27 np0005593232 nova_compute[250269]: 2026-01-23 10:14:27.099 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2541: 321 pgs: 321 active+clean; 522 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 635 KiB/s rd, 4.9 MiB/s wr, 143 op/s
Jan 23 05:14:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:28.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:28.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2542: 321 pgs: 321 active+clean; 522 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 338 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 23 05:14:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:14:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:30.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:14:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:14:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:30.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:14:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2543: 321 pgs: 321 active+clean; 522 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 338 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 23 05:14:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:14:32 np0005593232 nova_compute[250269]: 2026-01-23 10:14:32.099 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:32 np0005593232 nova_compute[250269]: 2026-01-23 10:14:32.101 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:32.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:32 np0005593232 podman[340760]: 2026-01-23 10:14:32.422590007 +0000 UTC m=+0.087149167 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:14:32 np0005593232 nova_compute[250269]: 2026-01-23 10:14:32.441 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:14:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:14:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:32.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:14:33 np0005593232 nova_compute[250269]: 2026-01-23 10:14:33.243 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Triggering sync for uuid 8056a321-13d3-4dd8-bb33-70c832c17ac1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 23 05:14:33 np0005593232 nova_compute[250269]: 2026-01-23 10:14:33.244 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "8056a321-13d3-4dd8-bb33-70c832c17ac1" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:14:33 np0005593232 nova_compute[250269]: 2026-01-23 10:14:33.244 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:14:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2544: 321 pgs: 321 active+clean; 522 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 869 KiB/s rd, 1.4 MiB/s wr, 30 op/s
Jan 23 05:14:33 np0005593232 nova_compute[250269]: 2026-01-23 10:14:33.395 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:14:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:34.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:34 np0005593232 nova_compute[250269]: 2026-01-23 10:14:34.752 250273 DEBUG oslo_concurrency.lockutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "33559028-00d9-4918-9015-26172db3d00c" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:14:34 np0005593232 nova_compute[250269]: 2026-01-23 10:14:34.752 250273 DEBUG oslo_concurrency.lockutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "33559028-00d9-4918-9015-26172db3d00c" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:14:34 np0005593232 nova_compute[250269]: 2026-01-23 10:14:34.753 250273 INFO nova.compute.manager [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Unshelving#033[00m
Jan 23 05:14:34 np0005593232 nova_compute[250269]: 2026-01-23 10:14:34.863 250273 DEBUG oslo_concurrency.lockutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:14:34 np0005593232 nova_compute[250269]: 2026-01-23 10:14:34.863 250273 DEBUG oslo_concurrency.lockutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:14:34 np0005593232 nova_compute[250269]: 2026-01-23 10:14:34.869 250273 DEBUG nova.objects.instance [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'pci_requests' on Instance uuid 33559028-00d9-4918-9015-26172db3d00c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:14:34 np0005593232 nova_compute[250269]: 2026-01-23 10:14:34.890 250273 DEBUG nova.objects.instance [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'numa_topology' on Instance uuid 33559028-00d9-4918-9015-26172db3d00c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:14:34 np0005593232 nova_compute[250269]: 2026-01-23 10:14:34.906 250273 DEBUG nova.virt.hardware [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:14:34 np0005593232 nova_compute[250269]: 2026-01-23 10:14:34.906 250273 INFO nova.compute.claims [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:14:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:34.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:35 np0005593232 nova_compute[250269]: 2026-01-23 10:14:35.233 250273 DEBUG oslo_concurrency.processutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:14:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2545: 321 pgs: 321 active+clean; 522 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 12 KiB/s wr, 9 op/s
Jan 23 05:14:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:14:35 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2174047833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:14:35 np0005593232 nova_compute[250269]: 2026-01-23 10:14:35.771 250273 DEBUG oslo_concurrency.processutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:14:35 np0005593232 nova_compute[250269]: 2026-01-23 10:14:35.779 250273 DEBUG nova.compute.provider_tree [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:14:35 np0005593232 nova_compute[250269]: 2026-01-23 10:14:35.804 250273 DEBUG nova.scheduler.client.report [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:14:35 np0005593232 nova_compute[250269]: 2026-01-23 10:14:35.835 250273 DEBUG oslo_concurrency.lockutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:14:36 np0005593232 nova_compute[250269]: 2026-01-23 10:14:36.096 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:14:36 np0005593232 nova_compute[250269]: 2026-01-23 10:14:36.117 250273 INFO nova.network.neutron [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Updating port 5004fad4-5788-4709-9c83-b5fe075c0aa7 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Jan 23 05:14:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:36.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:14:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:36.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:37 np0005593232 nova_compute[250269]: 2026-01-23 10:14:37.102 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:14:37 np0005593232 nova_compute[250269]: 2026-01-23 10:14:37.105 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:14:37 np0005593232 nova_compute[250269]: 2026-01-23 10:14:37.106 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 23 05:14:37 np0005593232 nova_compute[250269]: 2026-01-23 10:14:37.106 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 23 05:14:37 np0005593232 nova_compute[250269]: 2026-01-23 10:14:37.130 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:37 np0005593232 nova_compute[250269]: 2026-01-23 10:14:37.131 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 23 05:14:37 np0005593232 nova_compute[250269]: 2026-01-23 10:14:37.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:14:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:14:37
Jan 23 05:14:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:14:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:14:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'images', 'default.rgw.log', '.mgr', 'vms', 'default.rgw.control', 'default.rgw.meta']
Jan 23 05:14:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:14:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2546: 321 pgs: 321 active+clean; 522 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 18 KiB/s wr, 14 op/s
Jan 23 05:14:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:14:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:14:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:14:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:14:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:14:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:14:37 np0005593232 ceph-mgr[74726]: client.0 ms_handle_reset on v2:192.168.122.100:6800/530399322
Jan 23 05:14:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:38.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:14:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:14:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:14:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:14:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:14:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:14:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:14:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:14:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:14:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:14:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:38.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2547: 321 pgs: 321 active+clean; 538 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 671 KiB/s wr, 23 op/s
Jan 23 05:14:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:14:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:40.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:14:40 np0005593232 nova_compute[250269]: 2026-01-23 10:14:40.187 250273 DEBUG oslo_concurrency.lockutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "refresh_cache-33559028-00d9-4918-9015-26172db3d00c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:14:40 np0005593232 nova_compute[250269]: 2026-01-23 10:14:40.187 250273 DEBUG oslo_concurrency.lockutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquired lock "refresh_cache-33559028-00d9-4918-9015-26172db3d00c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:14:40 np0005593232 nova_compute[250269]: 2026-01-23 10:14:40.187 250273 DEBUG nova.network.neutron [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:14:40 np0005593232 nova_compute[250269]: 2026-01-23 10:14:40.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:14:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:40.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2548: 321 pgs: 321 active+clean; 538 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.5 MiB/s rd, 671 KiB/s wr, 23 op/s
Jan 23 05:14:41 np0005593232 nova_compute[250269]: 2026-01-23 10:14:41.439 250273 DEBUG nova.compute.manager [req-48cd6555-d268-4de5-bf4c-6098b72d40f5 req-bb91ffcf-2500-4815-8a4f-4c0bd18b0ebb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Received event network-changed-5004fad4-5788-4709-9c83-b5fe075c0aa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:14:41 np0005593232 nova_compute[250269]: 2026-01-23 10:14:41.440 250273 DEBUG nova.compute.manager [req-48cd6555-d268-4de5-bf4c-6098b72d40f5 req-bb91ffcf-2500-4815-8a4f-4c0bd18b0ebb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Refreshing instance network info cache due to event network-changed-5004fad4-5788-4709-9c83-b5fe075c0aa7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:14:41 np0005593232 nova_compute[250269]: 2026-01-23 10:14:41.440 250273 DEBUG oslo_concurrency.lockutils [req-48cd6555-d268-4de5-bf4c-6098b72d40f5 req-bb91ffcf-2500-4815-8a4f-4c0bd18b0ebb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-33559028-00d9-4918-9015-26172db3d00c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:14:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:14:42 np0005593232 nova_compute[250269]: 2026-01-23 10:14:42.131 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:14:42 np0005593232 nova_compute[250269]: 2026-01-23 10:14:42.133 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:14:42 np0005593232 nova_compute[250269]: 2026-01-23 10:14:42.134 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 23 05:14:42 np0005593232 nova_compute[250269]: 2026-01-23 10:14:42.134 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 23 05:14:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:14:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:42.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:14:42 np0005593232 nova_compute[250269]: 2026-01-23 10:14:42.197 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:42 np0005593232 nova_compute[250269]: 2026-01-23 10:14:42.198 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 23 05:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:42.627 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:42.628 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:42.628 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:14:42 np0005593232 nova_compute[250269]: 2026-01-23 10:14:42.936 250273 DEBUG nova.network.neutron [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Updating instance_info_cache with network_info: [{"id": "5004fad4-5788-4709-9c83-b5fe075c0aa7", "address": "fa:16:3e:95:4c:75", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5004fad4-57", "ovs_interfaceid": "5004fad4-5788-4709-9c83-b5fe075c0aa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:14:42 np0005593232 nova_compute[250269]: 2026-01-23 10:14:42.963 250273 DEBUG oslo_concurrency.lockutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Releasing lock "refresh_cache-33559028-00d9-4918-9015-26172db3d00c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:14:42 np0005593232 nova_compute[250269]: 2026-01-23 10:14:42.965 250273 DEBUG nova.virt.libvirt.driver [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:14:42 np0005593232 nova_compute[250269]: 2026-01-23 10:14:42.965 250273 INFO nova.virt.libvirt.driver [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Creating image(s)#033[00m
Jan 23 05:14:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:42.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:42 np0005593232 nova_compute[250269]: 2026-01-23 10:14:42.994 250273 DEBUG nova.storage.rbd_utils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] rbd image 33559028-00d9-4918-9015-26172db3d00c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:14:42 np0005593232 nova_compute[250269]: 2026-01-23 10:14:42.998 250273 DEBUG nova.objects.instance [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 33559028-00d9-4918-9015-26172db3d00c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:14:42 np0005593232 nova_compute[250269]: 2026-01-23 10:14:42.999 250273 DEBUG oslo_concurrency.lockutils [req-48cd6555-d268-4de5-bf4c-6098b72d40f5 req-bb91ffcf-2500-4815-8a4f-4c0bd18b0ebb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-33559028-00d9-4918-9015-26172db3d00c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:14:42 np0005593232 nova_compute[250269]: 2026-01-23 10:14:42.999 250273 DEBUG nova.network.neutron [req-48cd6555-d268-4de5-bf4c-6098b72d40f5 req-bb91ffcf-2500-4815-8a4f-4c0bd18b0ebb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Refreshing network info cache for port 5004fad4-5788-4709-9c83-b5fe075c0aa7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:14:43 np0005593232 nova_compute[250269]: 2026-01-23 10:14:43.055 250273 DEBUG nova.storage.rbd_utils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] rbd image 33559028-00d9-4918-9015-26172db3d00c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:14:43 np0005593232 nova_compute[250269]: 2026-01-23 10:14:43.086 250273 DEBUG nova.storage.rbd_utils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] rbd image 33559028-00d9-4918-9015-26172db3d00c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:14:43 np0005593232 nova_compute[250269]: 2026-01-23 10:14:43.089 250273 DEBUG oslo_concurrency.lockutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "965d6271ff62317bf37023ed6b0460abebd79e77" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:14:43 np0005593232 nova_compute[250269]: 2026-01-23 10:14:43.090 250273 DEBUG oslo_concurrency.lockutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "965d6271ff62317bf37023ed6b0460abebd79e77" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:14:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2549: 321 pgs: 321 active+clean; 590 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.5 MiB/s wr, 114 op/s
Jan 23 05:14:43 np0005593232 nova_compute[250269]: 2026-01-23 10:14:43.398 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:43 np0005593232 NetworkManager[49057]: <info>  [1769163283.3990] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/242)
Jan 23 05:14:43 np0005593232 NetworkManager[49057]: <info>  [1769163283.4000] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Jan 23 05:14:43 np0005593232 nova_compute[250269]: 2026-01-23 10:14:43.593 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:43 np0005593232 ovn_controller[151001]: 2026-01-23T10:14:43Z|00508|binding|INFO|Releasing lport 1788b5e6-601b-4e3d-a584-c0138c3308f6 from this chassis (sb_readonly=0)
Jan 23 05:14:43 np0005593232 nova_compute[250269]: 2026-01-23 10:14:43.619 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:43 np0005593232 nova_compute[250269]: 2026-01-23 10:14:43.672 250273 DEBUG nova.virt.libvirt.imagebackend [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Image locations are: [{'url': 'rbd://e1533653-0a5a-584c-b34b-8689f0d32e77/images/91667598-4041-4c0e-ba8d-b3a19e535259/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e1533653-0a5a-584c-b34b-8689f0d32e77/images/91667598-4041-4c0e-ba8d-b3a19e535259/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 23 05:14:43 np0005593232 nova_compute[250269]: 2026-01-23 10:14:43.731 250273 DEBUG nova.virt.libvirt.imagebackend [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Selected location: {'url': 'rbd://e1533653-0a5a-584c-b34b-8689f0d32e77/images/91667598-4041-4c0e-ba8d-b3a19e535259/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 23 05:14:43 np0005593232 nova_compute[250269]: 2026-01-23 10:14:43.732 250273 DEBUG nova.storage.rbd_utils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] cloning images/91667598-4041-4c0e-ba8d-b3a19e535259@snap to None/33559028-00d9-4918-9015-26172db3d00c_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 23 05:14:44 np0005593232 nova_compute[250269]: 2026-01-23 10:14:44.081 250273 DEBUG oslo_concurrency.lockutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "965d6271ff62317bf37023ed6b0460abebd79e77" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.991s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:14:44 np0005593232 nova_compute[250269]: 2026-01-23 10:14:44.248 250273 DEBUG nova.objects.instance [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'migration_context' on Instance uuid 33559028-00d9-4918-9015-26172db3d00c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:14:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:44.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:44 np0005593232 nova_compute[250269]: 2026-01-23 10:14:44.323 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:14:44 np0005593232 nova_compute[250269]: 2026-01-23 10:14:44.323 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:14:44 np0005593232 nova_compute[250269]: 2026-01-23 10:14:44.332 250273 DEBUG nova.storage.rbd_utils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] flattening vms/33559028-00d9-4918-9015-26172db3d00c_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 23 05:14:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:14:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1343886908' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:14:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:14:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1343886908' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:14:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:44.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:14:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2550: 321 pgs: 321 active+clean; 604 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 6.4 MiB/s rd, 3.3 MiB/s wr, 159 op/s
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.526 250273 DEBUG nova.network.neutron [req-48cd6555-d268-4de5-bf4c-6098b72d40f5 req-bb91ffcf-2500-4815-8a4f-4c0bd18b0ebb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Updated VIF entry in instance network info cache for port 5004fad4-5788-4709-9c83-b5fe075c0aa7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.527 250273 DEBUG nova.network.neutron [req-48cd6555-d268-4de5-bf4c-6098b72d40f5 req-bb91ffcf-2500-4815-8a4f-4c0bd18b0ebb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Updating instance_info_cache with network_info: [{"id": "5004fad4-5788-4709-9c83-b5fe075c0aa7", "address": "fa:16:3e:95:4c:75", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5004fad4-57", "ovs_interfaceid": "5004fad4-5788-4709-9c83-b5fe075c0aa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.550 250273 DEBUG oslo_concurrency.lockutils [req-48cd6555-d268-4de5-bf4c-6098b72d40f5 req-bb91ffcf-2500-4815-8a4f-4c0bd18b0ebb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-33559028-00d9-4918-9015-26172db3d00c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.586 250273 DEBUG nova.virt.libvirt.driver [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Image rbd:vms/33559028-00d9-4918-9015-26172db3d00c_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.587 250273 DEBUG nova.virt.libvirt.driver [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.587 250273 DEBUG nova.virt.libvirt.driver [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Ensure instance console log exists: /var/lib/nova/instances/33559028-00d9-4918-9015-26172db3d00c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.588 250273 DEBUG oslo_concurrency.lockutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.588 250273 DEBUG oslo_concurrency.lockutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.588 250273 DEBUG oslo_concurrency.lockutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.591 250273 DEBUG nova.virt.libvirt.driver [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Start _get_guest_xml network_info=[{"id": "5004fad4-5788-4709-9c83-b5fe075c0aa7", "address": "fa:16:3e:95:4c:75", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5004fad4-57", "ovs_interfaceid": "5004fad4-5788-4709-9c83-b5fe075c0aa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-23T10:14:03Z,direct_url=<?>,disk_format='raw',id=91667598-4041-4c0e-ba8d-b3a19e535259,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-1716094682-shelved',owner='9dd869ce76e44fc8a82b8bbee1654d33',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-23T10:14:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.594 250273 WARNING nova.virt.libvirt.driver [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.600 250273 DEBUG nova.virt.libvirt.host [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.600 250273 DEBUG nova.virt.libvirt.host [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.605 250273 DEBUG nova.virt.libvirt.host [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.606 250273 DEBUG nova.virt.libvirt.host [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.607 250273 DEBUG nova.virt.libvirt.driver [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.607 250273 DEBUG nova.virt.hardware [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-23T10:14:03Z,direct_url=<?>,disk_format='raw',id=91667598-4041-4c0e-ba8d-b3a19e535259,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-1716094682-shelved',owner='9dd869ce76e44fc8a82b8bbee1654d33',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-23T10:14:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.608 250273 DEBUG nova.virt.hardware [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.608 250273 DEBUG nova.virt.hardware [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.608 250273 DEBUG nova.virt.hardware [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.608 250273 DEBUG nova.virt.hardware [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.609 250273 DEBUG nova.virt.hardware [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.609 250273 DEBUG nova.virt.hardware [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.609 250273 DEBUG nova.virt.hardware [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.609 250273 DEBUG nova.virt.hardware [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.609 250273 DEBUG nova.virt.hardware [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.609 250273 DEBUG nova.virt.hardware [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.610 250273 DEBUG nova.objects.instance [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 33559028-00d9-4918-9015-26172db3d00c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:14:45 np0005593232 nova_compute[250269]: 2026-01-23 10:14:45.633 250273 DEBUG oslo_concurrency.processutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:14:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:14:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/90868513' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:14:46 np0005593232 nova_compute[250269]: 2026-01-23 10:14:46.078 250273 DEBUG oslo_concurrency.processutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:14:46 np0005593232 nova_compute[250269]: 2026-01-23 10:14:46.108 250273 DEBUG nova.storage.rbd_utils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] rbd image 33559028-00d9-4918-9015-26172db3d00c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:14:46 np0005593232 nova_compute[250269]: 2026-01-23 10:14:46.113 250273 DEBUG oslo_concurrency.processutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:14:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:14:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:46.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:14:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:14:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3435383825' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:14:46 np0005593232 nova_compute[250269]: 2026-01-23 10:14:46.543 250273 DEBUG oslo_concurrency.processutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:14:46 np0005593232 nova_compute[250269]: 2026-01-23 10:14:46.545 250273 DEBUG nova.virt.libvirt.vif [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-23T10:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1716094682',display_name='tempest-ServerActionsTestOtherB-server-1716094682',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1716094682',id=134,image_ref='91667598-4041-4c0e-ba8d-b3a19e535259',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-1844396132',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:12:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='9dd869ce76e44fc8a82b8bbee1654d33',ramdisk_id='',reservation_id='r-frjx5azl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio
',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1052932467',owner_user_name='tempest-ServerActionsTestOtherB-1052932467-project-member',shelved_at='2026-01-23T10:14:13.370474',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='91667598-4041-4c0e-ba8d-b3a19e535259'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:14:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aca3cab576d641d3b89e7dddf155d467',uuid=33559028-00d9-4918-9015-26172db3d00c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "5004fad4-5788-4709-9c83-b5fe075c0aa7", "address": "fa:16:3e:95:4c:75", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5004fad4-57", "ovs_interfaceid": "5004fad4-5788-4709-9c83-b5fe075c0aa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:14:46 np0005593232 nova_compute[250269]: 2026-01-23 10:14:46.546 250273 DEBUG nova.network.os_vif_util [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converting VIF {"id": "5004fad4-5788-4709-9c83-b5fe075c0aa7", "address": "fa:16:3e:95:4c:75", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5004fad4-57", "ovs_interfaceid": "5004fad4-5788-4709-9c83-b5fe075c0aa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:14:46 np0005593232 nova_compute[250269]: 2026-01-23 10:14:46.547 250273 DEBUG nova.network.os_vif_util [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:4c:75,bridge_name='br-int',has_traffic_filtering=True,id=5004fad4-5788-4709-9c83-b5fe075c0aa7,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5004fad4-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:14:46 np0005593232 nova_compute[250269]: 2026-01-23 10:14:46.549 250273 DEBUG nova.objects.instance [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'pci_devices' on Instance uuid 33559028-00d9-4918-9015-26172db3d00c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:14:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:14:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:46.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:14:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:14:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:14:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:14:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008505708519449097 of space, bias 1.0, pg target 2.551712555834729 quantized to 32 (current 32)
Jan 23 05:14:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:14:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0018091760887772195 of space, bias 1.0, pg target 0.5391344744556115 quantized to 32 (current 32)
Jan 23 05:14:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:14:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:14:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:14:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.006010594058254234 of space, bias 1.0, pg target 1.7911570293597618 quantized to 32 (current 32)
Jan 23 05:14:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:14:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 32)
Jan 23 05:14:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:14:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:14:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:14:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Jan 23 05:14:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:14:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 23 05:14:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:14:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:14:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:14:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.223 250273 DEBUG nova.virt.libvirt.driver [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:14:47 np0005593232 nova_compute[250269]:  <uuid>33559028-00d9-4918-9015-26172db3d00c</uuid>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:  <name>instance-00000086</name>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerActionsTestOtherB-server-1716094682</nova:name>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:14:45</nova:creationTime>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:14:47 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:        <nova:user uuid="aca3cab576d641d3b89e7dddf155d467">tempest-ServerActionsTestOtherB-1052932467-project-member</nova:user>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:        <nova:project uuid="9dd869ce76e44fc8a82b8bbee1654d33">tempest-ServerActionsTestOtherB-1052932467</nova:project>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="91667598-4041-4c0e-ba8d-b3a19e535259"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:        <nova:port uuid="5004fad4-5788-4709-9c83-b5fe075c0aa7">
Jan 23 05:14:47 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <entry name="serial">33559028-00d9-4918-9015-26172db3d00c</entry>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <entry name="uuid">33559028-00d9-4918-9015-26172db3d00c</entry>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/33559028-00d9-4918-9015-26172db3d00c_disk">
Jan 23 05:14:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:14:47 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/33559028-00d9-4918-9015-26172db3d00c_disk.config">
Jan 23 05:14:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:14:47 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:95:4c:75"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <target dev="tap5004fad4-57"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/33559028-00d9-4918-9015-26172db3d00c/console.log" append="off"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <input type="keyboard" bus="usb"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:14:47 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:14:47 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:14:47 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:14:47 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.224 250273 DEBUG nova.compute.manager [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Preparing to wait for external event network-vif-plugged-5004fad4-5788-4709-9c83-b5fe075c0aa7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.225 250273 DEBUG oslo_concurrency.lockutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "33559028-00d9-4918-9015-26172db3d00c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.225 250273 DEBUG oslo_concurrency.lockutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "33559028-00d9-4918-9015-26172db3d00c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.226 250273 DEBUG oslo_concurrency.lockutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "33559028-00d9-4918-9015-26172db3d00c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.227 250273 DEBUG nova.virt.libvirt.vif [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-23T10:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1716094682',display_name='tempest-ServerActionsTestOtherB-server-1716094682',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1716094682',id=134,image_ref='91667598-4041-4c0e-ba8d-b3a19e535259',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name='tempest-keypair-1844396132',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:12:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='9dd869ce76e44fc8a82b8bbee1654d33',ramdisk_id='',reservation_id='r-frjx5azl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1052932467',owner_user_name='tempest-ServerActionsTestOtherB-1052932467-project-member',shelved_at='2026-01-23T10:14:13.370474',shelved_host='compute-2.ctlplane.example.com',shelved_image_id='91667598-4041-4c0e-ba8d-b3a19e535259'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:14:34Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aca3cab576d641d3b89e7dddf155d467',uuid=33559028-00d9-4918-9015-26172db3d00c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "5004fad4-5788-4709-9c83-b5fe075c0aa7", "address": "fa:16:3e:95:4c:75", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5004fad4-57", "ovs_interfaceid": "5004fad4-5788-4709-9c83-b5fe075c0aa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.227 250273 DEBUG nova.network.os_vif_util [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converting VIF {"id": "5004fad4-5788-4709-9c83-b5fe075c0aa7", "address": "fa:16:3e:95:4c:75", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5004fad4-57", "ovs_interfaceid": "5004fad4-5788-4709-9c83-b5fe075c0aa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.228 250273 DEBUG nova.network.os_vif_util [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:4c:75,bridge_name='br-int',has_traffic_filtering=True,id=5004fad4-5788-4709-9c83-b5fe075c0aa7,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5004fad4-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.228 250273 DEBUG os_vif [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:4c:75,bridge_name='br-int',has_traffic_filtering=True,id=5004fad4-5788-4709-9c83-b5fe075c0aa7,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5004fad4-57') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.229 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.230 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.231 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.240 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.240 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5004fad4-57, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.241 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5004fad4-57, col_values=(('external_ids', {'iface-id': '5004fad4-5788-4709-9c83-b5fe075c0aa7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:95:4c:75', 'vm-uuid': '33559028-00d9-4918-9015-26172db3d00c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.252 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:47 np0005593232 NetworkManager[49057]: <info>  [1769163287.2533] manager: (tap5004fad4-57): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/244)
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.256 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.260 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.261 250273 INFO os_vif [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:4c:75,bridge_name='br-int',has_traffic_filtering=True,id=5004fad4-5788-4709-9c83-b5fe075c0aa7,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5004fad4-57')#033[00m
Jan 23 05:14:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2551: 321 pgs: 321 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.4 MiB/s rd, 7.6 MiB/s wr, 241 op/s
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.417 250273 DEBUG nova.virt.libvirt.driver [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.418 250273 DEBUG nova.virt.libvirt.driver [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.418 250273 DEBUG nova.virt.libvirt.driver [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] No VIF found with MAC fa:16:3e:95:4c:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.419 250273 INFO nova.virt.libvirt.driver [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Using config drive#033[00m
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.449 250273 DEBUG nova.storage.rbd_utils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] rbd image 33559028-00d9-4918-9015-26172db3d00c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.469 250273 DEBUG nova.objects.instance [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 33559028-00d9-4918-9015-26172db3d00c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:14:47 np0005593232 nova_compute[250269]: 2026-01-23 10:14:47.514 250273 DEBUG nova.objects.instance [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'keypairs' on Instance uuid 33559028-00d9-4918-9015-26172db3d00c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:14:48 np0005593232 nova_compute[250269]: 2026-01-23 10:14:48.016 250273 INFO nova.virt.libvirt.driver [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Creating config drive at /var/lib/nova/instances/33559028-00d9-4918-9015-26172db3d00c/disk.config#033[00m
Jan 23 05:14:48 np0005593232 nova_compute[250269]: 2026-01-23 10:14:48.023 250273 DEBUG oslo_concurrency.processutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/33559028-00d9-4918-9015-26172db3d00c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyoqgvb4x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:14:48 np0005593232 nova_compute[250269]: 2026-01-23 10:14:48.162 250273 DEBUG oslo_concurrency.processutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/33559028-00d9-4918-9015-26172db3d00c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyoqgvb4x" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:14:48 np0005593232 nova_compute[250269]: 2026-01-23 10:14:48.196 250273 DEBUG nova.storage.rbd_utils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] rbd image 33559028-00d9-4918-9015-26172db3d00c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:14:48 np0005593232 nova_compute[250269]: 2026-01-23 10:14:48.200 250273 DEBUG oslo_concurrency.processutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/33559028-00d9-4918-9015-26172db3d00c/disk.config 33559028-00d9-4918-9015-26172db3d00c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:14:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:48.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:48 np0005593232 nova_compute[250269]: 2026-01-23 10:14:48.350 250273 DEBUG oslo_concurrency.processutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/33559028-00d9-4918-9015-26172db3d00c/disk.config 33559028-00d9-4918-9015-26172db3d00c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:14:48 np0005593232 nova_compute[250269]: 2026-01-23 10:14:48.350 250273 INFO nova.virt.libvirt.driver [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Deleting local config drive /var/lib/nova/instances/33559028-00d9-4918-9015-26172db3d00c/disk.config because it was imported into RBD.#033[00m
Jan 23 05:14:48 np0005593232 virtqemud[249592]: End of file while reading data: Input/output error
Jan 23 05:14:48 np0005593232 virtqemud[249592]: End of file while reading data: Input/output error
Jan 23 05:14:48 np0005593232 kernel: tap5004fad4-57: entered promiscuous mode
Jan 23 05:14:48 np0005593232 NetworkManager[49057]: <info>  [1769163288.4041] manager: (tap5004fad4-57): new Tun device (/org/freedesktop/NetworkManager/Devices/245)
Jan 23 05:14:48 np0005593232 nova_compute[250269]: 2026-01-23 10:14:48.441 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:48 np0005593232 ovn_controller[151001]: 2026-01-23T10:14:48Z|00509|binding|INFO|Claiming lport 5004fad4-5788-4709-9c83-b5fe075c0aa7 for this chassis.
Jan 23 05:14:48 np0005593232 ovn_controller[151001]: 2026-01-23T10:14:48Z|00510|binding|INFO|5004fad4-5788-4709-9c83-b5fe075c0aa7: Claiming fa:16:3e:95:4c:75 10.100.0.7
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.451 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:4c:75 10.100.0.7'], port_security=['fa:16:3e:95:4c:75 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '33559028-00d9-4918-9015-26172db3d00c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8d9599b4-8855-4310-af02-cdd058438f7d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9dd869ce76e44fc8a82b8bbee1654d33', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'cf3e0bf9-33c6-483b-a880-c8297a0be71f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=875f4baa-cb85-49ca-8f02-78715d351fdb, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=5004fad4-5788-4709-9c83-b5fe075c0aa7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.452 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 5004fad4-5788-4709-9c83-b5fe075c0aa7 in datapath 8d9599b4-8855-4310-af02-cdd058438f7d bound to our chassis#033[00m
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.453 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8d9599b4-8855-4310-af02-cdd058438f7d#033[00m
Jan 23 05:14:48 np0005593232 ovn_controller[151001]: 2026-01-23T10:14:48Z|00511|binding|INFO|Setting lport 5004fad4-5788-4709-9c83-b5fe075c0aa7 ovn-installed in OVS
Jan 23 05:14:48 np0005593232 ovn_controller[151001]: 2026-01-23T10:14:48Z|00512|binding|INFO|Setting lport 5004fad4-5788-4709-9c83-b5fe075c0aa7 up in Southbound
Jan 23 05:14:48 np0005593232 nova_compute[250269]: 2026-01-23 10:14:48.461 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:48 np0005593232 systemd-machined[215836]: New machine qemu-61-instance-00000086.
Jan 23 05:14:48 np0005593232 systemd-udevd[341163]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.469 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[134e89de-1ad8-48cf-bef7-a38c40afcb9a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.471 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8d9599b4-81 in ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.473 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8d9599b4-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.473 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3d22803e-a9f6-4f30-bc3f-e566afd5c90e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.474 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d72f76c7-64a9-4bb0-ba37-769fe87782db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:48 np0005593232 NetworkManager[49057]: <info>  [1769163288.4822] device (tap5004fad4-57): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:14:48 np0005593232 NetworkManager[49057]: <info>  [1769163288.4832] device (tap5004fad4-57): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:14:48 np0005593232 systemd[1]: Started Virtual Machine qemu-61-instance-00000086.
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.490 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[671097d4-8311-4edb-9ccf-07e1c6e8c813]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.513 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[997cbb83-6c50-4f53-bd5e-017bef8a7f90]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.551 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[e525395e-3928-4a13-b436-beda2824ed2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:48 np0005593232 NetworkManager[49057]: <info>  [1769163288.5600] manager: (tap8d9599b4-80): new Veth device (/org/freedesktop/NetworkManager/Devices/246)
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.560 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4d1804ac-c78c-4da7-84cc-1f1f00bb7f8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.597 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[af260a58-d089-4cea-bb2b-c30c85a666ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.601 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[486c8295-fe08-41a4-a3d5-6cd17ca4256c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:48 np0005593232 NetworkManager[49057]: <info>  [1769163288.7359] device (tap8d9599b4-80): carrier: link connected
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.741 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[6a5deead-a263-4321-8ed9-f4354498453c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.758 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4ba1403f-931e-43de-a24c-0f80f63216ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8d9599b4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:a1:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 154], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720187, 'reachable_time': 29899, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341195, 'error': None, 'target': 'ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:48 np0005593232 nova_compute[250269]: 2026-01-23 10:14:48.769 250273 DEBUG nova.compute.manager [req-cf8a7f9a-9c78-4692-8466-0bdc7124ebb6 req-59be9e6a-c1ca-4cc0-8cea-60d85c3dacf1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Received event network-vif-plugged-5004fad4-5788-4709-9c83-b5fe075c0aa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:14:48 np0005593232 nova_compute[250269]: 2026-01-23 10:14:48.769 250273 DEBUG oslo_concurrency.lockutils [req-cf8a7f9a-9c78-4692-8466-0bdc7124ebb6 req-59be9e6a-c1ca-4cc0-8cea-60d85c3dacf1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "33559028-00d9-4918-9015-26172db3d00c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:14:48 np0005593232 nova_compute[250269]: 2026-01-23 10:14:48.770 250273 DEBUG oslo_concurrency.lockutils [req-cf8a7f9a-9c78-4692-8466-0bdc7124ebb6 req-59be9e6a-c1ca-4cc0-8cea-60d85c3dacf1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "33559028-00d9-4918-9015-26172db3d00c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:14:48 np0005593232 nova_compute[250269]: 2026-01-23 10:14:48.771 250273 DEBUG oslo_concurrency.lockutils [req-cf8a7f9a-9c78-4692-8466-0bdc7124ebb6 req-59be9e6a-c1ca-4cc0-8cea-60d85c3dacf1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "33559028-00d9-4918-9015-26172db3d00c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:14:48 np0005593232 nova_compute[250269]: 2026-01-23 10:14:48.771 250273 DEBUG nova.compute.manager [req-cf8a7f9a-9c78-4692-8466-0bdc7124ebb6 req-59be9e6a-c1ca-4cc0-8cea-60d85c3dacf1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Processing event network-vif-plugged-5004fad4-5788-4709-9c83-b5fe075c0aa7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.774 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb839bf-fe72-4875-88db-7a66d8807db7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:a12b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 720187, 'tstamp': 720187}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341196, 'error': None, 'target': 'ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.788 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2acd4e18-0078-4fb2-b2a5-ba2d4f82ee3f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8d9599b4-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:a1:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 154], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720187, 'reachable_time': 29899, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 341197, 'error': None, 'target': 'ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.819 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1b8d14f6-a9a6-4a44-94e9-3bcc946c4db7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.873 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[52f4beba-b00e-4cc9-93a1-35d0921f1367]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.875 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d9599b4-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.875 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.876 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d9599b4-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:14:48 np0005593232 kernel: tap8d9599b4-80: entered promiscuous mode
Jan 23 05:14:48 np0005593232 NetworkManager[49057]: <info>  [1769163288.8788] manager: (tap8d9599b4-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Jan 23 05:14:48 np0005593232 nova_compute[250269]: 2026-01-23 10:14:48.878 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.881 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8d9599b4-80, col_values=(('external_ids', {'iface-id': 'b57bd565-3bb1-4ecc-8df0-a7c439ac84a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.887 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8d9599b4-8855-4310-af02-cdd058438f7d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8d9599b4-8855-4310-af02-cdd058438f7d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.888 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[77a77ce1-267c-4759-a518-e8a26dff95c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.889 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-8d9599b4-8855-4310-af02-cdd058438f7d
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/8d9599b4-8855-4310-af02-cdd058438f7d.pid.haproxy
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 8d9599b4-8855-4310-af02-cdd058438f7d
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:14:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:14:48.891 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d', 'env', 'PROCESS_TAG=haproxy-8d9599b4-8855-4310-af02-cdd058438f7d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8d9599b4-8855-4310-af02-cdd058438f7d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:14:48 np0005593232 ovn_controller[151001]: 2026-01-23T10:14:48Z|00513|binding|INFO|Releasing lport b57bd565-3bb1-4ecc-8df0-a7c439ac84a6 from this chassis (sb_readonly=0)
Jan 23 05:14:48 np0005593232 nova_compute[250269]: 2026-01-23 10:14:48.910 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:14:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:48.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:14:49 np0005593232 podman[341230]: 2026-01-23 10:14:49.254121587 +0000 UTC m=+0.059530313 container create 6173389c7e7eaa2df77128cfb438fc442db7d6559b27a834ce881e0439bc497c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:14:49 np0005593232 systemd[1]: Started libpod-conmon-6173389c7e7eaa2df77128cfb438fc442db7d6559b27a834ce881e0439bc497c.scope.
Jan 23 05:14:49 np0005593232 podman[341230]: 2026-01-23 10:14:49.221438078 +0000 UTC m=+0.026846824 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:14:49 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:14:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b87c0aab52f769463663cd11c4965559e837d532fee17b8f81e98d74b65dbed/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:14:49 np0005593232 podman[341230]: 2026-01-23 10:14:49.334589123 +0000 UTC m=+0.139997869 container init 6173389c7e7eaa2df77128cfb438fc442db7d6559b27a834ce881e0439bc497c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 05:14:49 np0005593232 podman[341230]: 2026-01-23 10:14:49.340279265 +0000 UTC m=+0.145687991 container start 6173389c7e7eaa2df77128cfb438fc442db7d6559b27a834ce881e0439bc497c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:14:49 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[341285]: [NOTICE]   (341290) : New worker (341293) forked
Jan 23 05:14:49 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[341285]: [NOTICE]   (341290) : Loading success.
Jan 23 05:14:49 np0005593232 nova_compute[250269]: 2026-01-23 10:14:49.385 250273 DEBUG nova.compute.manager [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:14:49 np0005593232 nova_compute[250269]: 2026-01-23 10:14:49.386 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163289.3846173, 33559028-00d9-4918-9015-26172db3d00c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:14:49 np0005593232 nova_compute[250269]: 2026-01-23 10:14:49.387 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 33559028-00d9-4918-9015-26172db3d00c] VM Started (Lifecycle Event)#033[00m
Jan 23 05:14:49 np0005593232 nova_compute[250269]: 2026-01-23 10:14:49.395 250273 DEBUG nova.virt.libvirt.driver [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:14:49 np0005593232 nova_compute[250269]: 2026-01-23 10:14:49.399 250273 INFO nova.virt.libvirt.driver [-] [instance: 33559028-00d9-4918-9015-26172db3d00c] Instance spawned successfully.#033[00m
Jan 23 05:14:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2552: 321 pgs: 321 active+clean; 740 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.3 MiB/s rd, 9.2 MiB/s wr, 261 op/s
Jan 23 05:14:49 np0005593232 nova_compute[250269]: 2026-01-23 10:14:49.432 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 33559028-00d9-4918-9015-26172db3d00c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:14:49 np0005593232 nova_compute[250269]: 2026-01-23 10:14:49.435 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 33559028-00d9-4918-9015-26172db3d00c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:14:49 np0005593232 nova_compute[250269]: 2026-01-23 10:14:49.467 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 33559028-00d9-4918-9015-26172db3d00c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:14:49 np0005593232 nova_compute[250269]: 2026-01-23 10:14:49.468 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163289.3849738, 33559028-00d9-4918-9015-26172db3d00c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:14:49 np0005593232 nova_compute[250269]: 2026-01-23 10:14:49.468 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 33559028-00d9-4918-9015-26172db3d00c] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:14:49 np0005593232 nova_compute[250269]: 2026-01-23 10:14:49.490 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 33559028-00d9-4918-9015-26172db3d00c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:14:49 np0005593232 nova_compute[250269]: 2026-01-23 10:14:49.493 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163289.3918445, 33559028-00d9-4918-9015-26172db3d00c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:14:49 np0005593232 nova_compute[250269]: 2026-01-23 10:14:49.493 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 33559028-00d9-4918-9015-26172db3d00c] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:14:49 np0005593232 nova_compute[250269]: 2026-01-23 10:14:49.515 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 33559028-00d9-4918-9015-26172db3d00c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:14:49 np0005593232 nova_compute[250269]: 2026-01-23 10:14:49.519 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 33559028-00d9-4918-9015-26172db3d00c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:14:49 np0005593232 nova_compute[250269]: 2026-01-23 10:14:49.541 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 33559028-00d9-4918-9015-26172db3d00c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:14:50 np0005593232 nova_compute[250269]: 2026-01-23 10:14:50.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:14:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:50.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:50 np0005593232 nova_compute[250269]: 2026-01-23 10:14:50.931 250273 DEBUG nova.compute.manager [req-bd56a6af-5d77-4689-aafe-4bb4d2073fe9 req-2abec7a5-ac43-4e05-8c91-4095a1fd656d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Received event network-vif-plugged-5004fad4-5788-4709-9c83-b5fe075c0aa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:14:50 np0005593232 nova_compute[250269]: 2026-01-23 10:14:50.932 250273 DEBUG oslo_concurrency.lockutils [req-bd56a6af-5d77-4689-aafe-4bb4d2073fe9 req-2abec7a5-ac43-4e05-8c91-4095a1fd656d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "33559028-00d9-4918-9015-26172db3d00c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:14:50 np0005593232 nova_compute[250269]: 2026-01-23 10:14:50.932 250273 DEBUG oslo_concurrency.lockutils [req-bd56a6af-5d77-4689-aafe-4bb4d2073fe9 req-2abec7a5-ac43-4e05-8c91-4095a1fd656d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "33559028-00d9-4918-9015-26172db3d00c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:14:50 np0005593232 nova_compute[250269]: 2026-01-23 10:14:50.932 250273 DEBUG oslo_concurrency.lockutils [req-bd56a6af-5d77-4689-aafe-4bb4d2073fe9 req-2abec7a5-ac43-4e05-8c91-4095a1fd656d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "33559028-00d9-4918-9015-26172db3d00c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:14:50 np0005593232 nova_compute[250269]: 2026-01-23 10:14:50.932 250273 DEBUG nova.compute.manager [req-bd56a6af-5d77-4689-aafe-4bb4d2073fe9 req-2abec7a5-ac43-4e05-8c91-4095a1fd656d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] No waiting events found dispatching network-vif-plugged-5004fad4-5788-4709-9c83-b5fe075c0aa7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:14:50 np0005593232 nova_compute[250269]: 2026-01-23 10:14:50.933 250273 WARNING nova.compute.manager [req-bd56a6af-5d77-4689-aafe-4bb4d2073fe9 req-2abec7a5-ac43-4e05-8c91-4095a1fd656d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Received unexpected event network-vif-plugged-5004fad4-5788-4709-9c83-b5fe075c0aa7 for instance with vm_state shelved_offloaded and task_state spawning.#033[00m
Jan 23 05:14:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Jan 23 05:14:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Jan 23 05:14:50 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Jan 23 05:14:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:14:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:50.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:14:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2554: 321 pgs: 321 active+clean; 740 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.9 MiB/s rd, 10 MiB/s wr, 302 op/s
Jan 23 05:14:51 np0005593232 nova_compute[250269]: 2026-01-23 10:14:51.844 250273 DEBUG nova.compute.manager [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:14:51 np0005593232 nova_compute[250269]: 2026-01-23 10:14:51.947 250273 DEBUG oslo_concurrency.lockutils [None req-265e7bf5-770f-41f0-a57f-645b14a2cef1 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "33559028-00d9-4918-9015-26172db3d00c" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 17.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:14:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:14:52 np0005593232 nova_compute[250269]: 2026-01-23 10:14:52.254 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:52 np0005593232 nova_compute[250269]: 2026-01-23 10:14:52.274 250273 DEBUG oslo_concurrency.lockutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Acquiring lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:14:52 np0005593232 nova_compute[250269]: 2026-01-23 10:14:52.275 250273 DEBUG oslo_concurrency.lockutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:14:52 np0005593232 nova_compute[250269]: 2026-01-23 10:14:52.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:14:52 np0005593232 nova_compute[250269]: 2026-01-23 10:14:52.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:14:52 np0005593232 nova_compute[250269]: 2026-01-23 10:14:52.308 250273 DEBUG nova.compute.manager [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:14:52 np0005593232 nova_compute[250269]: 2026-01-23 10:14:52.319 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:14:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:52 np0005593232 nova_compute[250269]: 2026-01-23 10:14:52.319 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:14:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:52.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:52 np0005593232 nova_compute[250269]: 2026-01-23 10:14:52.400 250273 DEBUG oslo_concurrency.lockutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:14:52 np0005593232 nova_compute[250269]: 2026-01-23 10:14:52.400 250273 DEBUG oslo_concurrency.lockutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:14:52 np0005593232 nova_compute[250269]: 2026-01-23 10:14:52.408 250273 DEBUG nova.virt.hardware [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:14:52 np0005593232 nova_compute[250269]: 2026-01-23 10:14:52.409 250273 INFO nova.compute.claims [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:14:52 np0005593232 nova_compute[250269]: 2026-01-23 10:14:52.516 250273 DEBUG nova.scheduler.client.report [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Refreshing inventories for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 23 05:14:52 np0005593232 nova_compute[250269]: 2026-01-23 10:14:52.558 250273 DEBUG nova.scheduler.client.report [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Updating ProviderTree inventory for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 23 05:14:52 np0005593232 nova_compute[250269]: 2026-01-23 10:14:52.558 250273 DEBUG nova.compute.provider_tree [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 05:14:52 np0005593232 nova_compute[250269]: 2026-01-23 10:14:52.580 250273 DEBUG nova.scheduler.client.report [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Refreshing aggregate associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 23 05:14:52 np0005593232 nova_compute[250269]: 2026-01-23 10:14:52.607 250273 DEBUG nova.scheduler.client.report [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Refreshing trait associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 23 05:14:52 np0005593232 nova_compute[250269]: 2026-01-23 10:14:52.720 250273 DEBUG oslo_concurrency.processutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:14:52 np0005593232 nova_compute[250269]: 2026-01-23 10:14:52.925 250273 DEBUG oslo_concurrency.lockutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:14:52 np0005593232 nova_compute[250269]: 2026-01-23 10:14:52.926 250273 DEBUG oslo_concurrency.lockutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:14:52 np0005593232 nova_compute[250269]: 2026-01-23 10:14:52.948 250273 DEBUG nova.compute.manager [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:14:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:52.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:14:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3005356777' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.163 250273 DEBUG oslo_concurrency.processutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.169 250273 DEBUG nova.compute.provider_tree [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.299 250273 DEBUG nova.scheduler.client.report [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.309 250273 DEBUG oslo_concurrency.lockutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.396 250273 DEBUG oslo_concurrency.lockutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.396 250273 DEBUG nova.compute.manager [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:14:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2555: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 681 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 8.1 MiB/s wr, 262 op/s
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.411 250273 DEBUG oslo_concurrency.lockutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.423 250273 DEBUG nova.virt.hardware [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.423 250273 INFO nova.compute.claims [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.515 250273 DEBUG nova.compute.manager [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.516 250273 DEBUG nova.network.neutron [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.540 250273 INFO nova.virt.libvirt.driver [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.569 250273 DEBUG nova.compute.manager [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.640 250273 INFO nova.virt.block_device [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Booting with volume 0bf60c43-104e-4a31-b83b-9fcea380005d at /dev/vda#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.684 250273 DEBUG oslo_concurrency.processutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.844 250273 DEBUG os_brick.utils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.848 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.863 265031 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.864 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[ef4f0796-e0a9-40ed-ace4-3b0bbb33708d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.867 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.879 265031 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.880 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[2c524c09-f5c6-4a22-a397-9e291c2836cf]: (4, ('InitiatorName=iqn.1994-05.com.redhat:c6473626b456', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.882 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.893 265031 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.894 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[853cca47-05e9-4e29-a1eb-d7740db55ec8]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.895 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[ffacd010-6dc5-4ad4-8e84-31ae1b205697]: (4, 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.896 250273 DEBUG oslo_concurrency.processutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.925 250273 DEBUG oslo_concurrency.processutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.927 250273 DEBUG os_brick.initiator.connectors.lightos [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.927 250273 DEBUG os_brick.initiator.connectors.lightos [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.927 250273 DEBUG os_brick.initiator.connectors.lightos [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.928 250273 DEBUG os_brick.utils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] <== get_connector_properties: return (82ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:c6473626b456', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 23 05:14:53 np0005593232 nova_compute[250269]: 2026-01-23 10:14:53.928 250273 DEBUG nova.virt.block_device [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Updating existing volume attachment record: ee693baf-e89b-42e6-a9cb-a114c373e1a2 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 23 05:14:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:14:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2229312135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.130 250273 DEBUG oslo_concurrency.processutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.138 250273 DEBUG nova.compute.provider_tree [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.163 250273 DEBUG nova.scheduler.client.report [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.207 250273 DEBUG oslo_concurrency.lockutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.208 250273 DEBUG nova.compute.manager [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.276 250273 DEBUG nova.compute.manager [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.276 250273 DEBUG nova.network.neutron [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.309 250273 INFO nova.virt.libvirt.driver [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.314 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.314 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.315 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.315 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.315 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:14:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:54.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.364 250273 DEBUG nova.compute.manager [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.476 250273 DEBUG nova.policy [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '95ac13194f0940128d42af3d45d130fa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3ae621f21a8e438fb95152309b38cee5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.594 250273 DEBUG nova.compute.manager [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.596 250273 DEBUG nova.virt.libvirt.driver [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.596 250273 INFO nova.virt.libvirt.driver [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Creating image(s)#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.624 250273 DEBUG nova.storage.rbd_utils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image c8c9cb1d-4faa-4945-afcc-f67ccc4d4237_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.653 250273 DEBUG nova.storage.rbd_utils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image c8c9cb1d-4faa-4945-afcc-f67ccc4d4237_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.682 250273 DEBUG nova.storage.rbd_utils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image c8c9cb1d-4faa-4945-afcc-f67ccc4d4237_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.687 250273 DEBUG oslo_concurrency.processutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.720 250273 DEBUG nova.policy [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '60291ce86b6946629a2e48f6680312cb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '98c94577fcdb4c3d893898ede79ea2d4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.761 250273 DEBUG oslo_concurrency.processutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.762 250273 DEBUG oslo_concurrency.lockutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.763 250273 DEBUG oslo_concurrency.lockutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:14:54 np0005593232 nova_compute[250269]: 2026-01-23 10:14:54.763 250273 DEBUG oslo_concurrency.lockutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:14:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:14:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/674754246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:14:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:14:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:55.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:14:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2556: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 682 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.3 MiB/s rd, 9.3 MiB/s wr, 304 op/s
Jan 23 05:14:55 np0005593232 nova_compute[250269]: 2026-01-23 10:14:55.880 250273 DEBUG nova.storage.rbd_utils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image c8c9cb1d-4faa-4945-afcc-f67ccc4d4237_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:14:55 np0005593232 nova_compute[250269]: 2026-01-23 10:14:55.886 250273 DEBUG oslo_concurrency.processutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 c8c9cb1d-4faa-4945-afcc-f67ccc4d4237_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:14:55 np0005593232 nova_compute[250269]: 2026-01-23 10:14:55.922 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.194 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.195 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.200 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.200 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:14:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:56.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.400 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.401 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3839MB free_disk=20.764286041259766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.401 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.402 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.451 250273 DEBUG nova.network.neutron [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Successfully created port: 4a04244c-3270-4ff3-ad30-52e80e7db513 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.580 250273 DEBUG nova.compute.manager [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.581 250273 DEBUG nova.virt.libvirt.driver [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.582 250273 INFO nova.virt.libvirt.driver [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Creating image(s)#033[00m
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.582 250273 DEBUG nova.virt.libvirt.driver [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.582 250273 DEBUG nova.virt.libvirt.driver [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Ensure instance console log exists: /var/lib/nova/instances/ccd07f55-529f-4dbb-989c-2cdbdd393a0b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.583 250273 DEBUG oslo_concurrency.lockutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.583 250273 DEBUG oslo_concurrency.lockutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.583 250273 DEBUG oslo_concurrency.lockutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.628 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 8056a321-13d3-4dd8-bb33-70c832c17ac1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.628 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 33559028-00d9-4918-9015-26172db3d00c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.628 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance ccd07f55-529f-4dbb-989c-2cdbdd393a0b actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.629 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance c8c9cb1d-4faa-4945-afcc-f67ccc4d4237 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.629 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.629 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.812 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:14:56 np0005593232 nova_compute[250269]: 2026-01-23 10:14:56.961 250273 DEBUG nova.network.neutron [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Successfully created port: e76c3794-4bfb-450d-901b-d5c2ecccb574 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:14:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:14:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:14:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:57.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:14:57 np0005593232 nova_compute[250269]: 2026-01-23 10:14:57.292 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:14:57 np0005593232 nova_compute[250269]: 2026-01-23 10:14:57.294 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:57 np0005593232 nova_compute[250269]: 2026-01-23 10:14:57.294 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5039 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 23 05:14:57 np0005593232 nova_compute[250269]: 2026-01-23 10:14:57.294 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 23 05:14:57 np0005593232 nova_compute[250269]: 2026-01-23 10:14:57.295 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 23 05:14:57 np0005593232 nova_compute[250269]: 2026-01-23 10:14:57.298 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:14:57 np0005593232 nova_compute[250269]: 2026-01-23 10:14:57.351 250273 DEBUG oslo_concurrency.processutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 c8c9cb1d-4faa-4945-afcc-f67ccc4d4237_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:14:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:14:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1898597621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:14:57 np0005593232 nova_compute[250269]: 2026-01-23 10:14:57.389 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:14:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2557: 321 pgs: 321 active+clean; 707 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 5.6 MiB/s wr, 279 op/s
Jan 23 05:14:57 np0005593232 nova_compute[250269]: 2026-01-23 10:14:57.431 250273 DEBUG nova.storage.rbd_utils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] resizing rbd image c8c9cb1d-4faa-4945-afcc-f67ccc4d4237_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 05:14:57 np0005593232 podman[341545]: 2026-01-23 10:14:57.432847991 +0000 UTC m=+0.086379775 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:14:57 np0005593232 nova_compute[250269]: 2026-01-23 10:14:57.466 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:14:57 np0005593232 nova_compute[250269]: 2026-01-23 10:14:57.493 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:14:57 np0005593232 nova_compute[250269]: 2026-01-23 10:14:57.524 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:14:57 np0005593232 nova_compute[250269]: 2026-01-23 10:14:57.524 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:14:57 np0005593232 nova_compute[250269]: 2026-01-23 10:14:57.757 250273 DEBUG nova.objects.instance [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lazy-loading 'migration_context' on Instance uuid c8c9cb1d-4faa-4945-afcc-f67ccc4d4237 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:14:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Jan 23 05:14:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:14:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:14:58.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:14:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Jan 23 05:14:58 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Jan 23 05:14:58 np0005593232 nova_compute[250269]: 2026-01-23 10:14:58.573 250273 DEBUG nova.virt.libvirt.driver [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 05:14:58 np0005593232 nova_compute[250269]: 2026-01-23 10:14:58.574 250273 DEBUG nova.virt.libvirt.driver [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Ensure instance console log exists: /var/lib/nova/instances/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:14:58 np0005593232 nova_compute[250269]: 2026-01-23 10:14:58.575 250273 DEBUG oslo_concurrency.lockutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:14:58 np0005593232 nova_compute[250269]: 2026-01-23 10:14:58.575 250273 DEBUG oslo_concurrency.lockutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:14:58 np0005593232 nova_compute[250269]: 2026-01-23 10:14:58.576 250273 DEBUG oslo_concurrency.lockutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:14:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:14:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:14:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:14:59.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:14:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2559: 321 pgs: 321 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.0 MiB/s rd, 4.9 MiB/s wr, 374 op/s
Jan 23 05:14:59 np0005593232 nova_compute[250269]: 2026-01-23 10:14:59.798 250273 DEBUG nova.network.neutron [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Successfully updated port: 4a04244c-3270-4ff3-ad30-52e80e7db513 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:14:59 np0005593232 nova_compute[250269]: 2026-01-23 10:14:59.906 250273 DEBUG oslo_concurrency.lockutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Acquiring lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:14:59 np0005593232 nova_compute[250269]: 2026-01-23 10:14:59.906 250273 DEBUG oslo_concurrency.lockutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Acquired lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:14:59 np0005593232 nova_compute[250269]: 2026-01-23 10:14:59.906 250273 DEBUG nova.network.neutron [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:15:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 05:15:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:00.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:15:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 05:15:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:15:00 np0005593232 nova_compute[250269]: 2026-01-23 10:15:00.487 250273 DEBUG nova.compute.manager [req-72bfe28c-5266-4a4c-ab20-51a774621201 req-211a4928-a7f6-4ed9-8eea-78f9ac25735a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Received event network-changed-4a04244c-3270-4ff3-ad30-52e80e7db513 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:15:00 np0005593232 nova_compute[250269]: 2026-01-23 10:15:00.488 250273 DEBUG nova.compute.manager [req-72bfe28c-5266-4a4c-ab20-51a774621201 req-211a4928-a7f6-4ed9-8eea-78f9ac25735a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Refreshing instance network info cache due to event network-changed-4a04244c-3270-4ff3-ad30-52e80e7db513. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:15:00 np0005593232 nova_compute[250269]: 2026-01-23 10:15:00.488 250273 DEBUG oslo_concurrency.lockutils [req-72bfe28c-5266-4a4c-ab20-51a774621201 req-211a4928-a7f6-4ed9-8eea-78f9ac25735a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:15:00 np0005593232 nova_compute[250269]: 2026-01-23 10:15:00.729 250273 DEBUG nova.network.neutron [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Successfully updated port: e76c3794-4bfb-450d-901b-d5c2ecccb574 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:15:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 23 05:15:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 05:15:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 23 05:15:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 05:15:00 np0005593232 nova_compute[250269]: 2026-01-23 10:15:00.769 250273 DEBUG oslo_concurrency.lockutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "refresh_cache-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:15:00 np0005593232 nova_compute[250269]: 2026-01-23 10:15:00.770 250273 DEBUG oslo_concurrency.lockutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquired lock "refresh_cache-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:15:00 np0005593232 nova_compute[250269]: 2026-01-23 10:15:00.770 250273 DEBUG nova.network.neutron [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:15:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:01.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2560: 321 pgs: 321 active+clean; 706 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 4.1 MiB/s wr, 315 op/s
Jan 23 05:15:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:15:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:15:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:15:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:15:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:15:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:15:01 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 22d8606d-8754-477a-b008-929c26af59a0 does not exist
Jan 23 05:15:01 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c2e719f6-f798-4613-9956-c510834ecfb9 does not exist
Jan 23 05:15:01 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 0d9b28ac-26ef-4006-8c98-97c86801e0a4 does not exist
Jan 23 05:15:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:15:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:15:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:15:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:15:01 np0005593232 nova_compute[250269]: 2026-01-23 10:15:01.467 250273 DEBUG nova.network.neutron [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:15:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:15:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:15:01 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:15:01 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:15:01 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 05:15:01 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 05:15:01 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:15:01 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:15:01 np0005593232 nova_compute[250269]: 2026-01-23 10:15:01.515 250273 DEBUG nova.compute.manager [req-5b2427d3-c5bc-4677-a563-c2f97842aec5 req-f84fcaad-0bae-4014-82bb-64d41ecba40b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Received event network-changed-e76c3794-4bfb-450d-901b-d5c2ecccb574 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:15:01 np0005593232 nova_compute[250269]: 2026-01-23 10:15:01.515 250273 DEBUG nova.compute.manager [req-5b2427d3-c5bc-4677-a563-c2f97842aec5 req-f84fcaad-0bae-4014-82bb-64d41ecba40b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Refreshing instance network info cache due to event network-changed-e76c3794-4bfb-450d-901b-d5c2ecccb574. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:15:01 np0005593232 nova_compute[250269]: 2026-01-23 10:15:01.516 250273 DEBUG oslo_concurrency.lockutils [req-5b2427d3-c5bc-4677-a563-c2f97842aec5 req-f84fcaad-0bae-4014-82bb-64d41ecba40b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:15:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:15:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Jan 23 05:15:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Jan 23 05:15:01 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Jan 23 05:15:02 np0005593232 podman[341921]: 2026-01-23 10:15:02.029479972 +0000 UTC m=+0.043753674 container create cd3e63ce3b4be3f35ca1f59298e250f19807769471bf727179337935e96fc2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:15:02 np0005593232 podman[341921]: 2026-01-23 10:15:02.012393027 +0000 UTC m=+0.026666759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:15:02 np0005593232 systemd[1]: Started libpod-conmon-cd3e63ce3b4be3f35ca1f59298e250f19807769471bf727179337935e96fc2fd.scope.
Jan 23 05:15:02 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:15:02 np0005593232 podman[341921]: 2026-01-23 10:15:02.167593327 +0000 UTC m=+0.181867059 container init cd3e63ce3b4be3f35ca1f59298e250f19807769471bf727179337935e96fc2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_shamir, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:15:02 np0005593232 ovn_controller[151001]: 2026-01-23T10:15:02Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:95:4c:75 10.100.0.7
Jan 23 05:15:02 np0005593232 podman[341921]: 2026-01-23 10:15:02.175909963 +0000 UTC m=+0.190183665 container start cd3e63ce3b4be3f35ca1f59298e250f19807769471bf727179337935e96fc2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_shamir, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:15:02 np0005593232 podman[341921]: 2026-01-23 10:15:02.179125175 +0000 UTC m=+0.193398897 container attach cd3e63ce3b4be3f35ca1f59298e250f19807769471bf727179337935e96fc2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 05:15:02 np0005593232 epic_shamir[341938]: 167 167
Jan 23 05:15:02 np0005593232 systemd[1]: libpod-cd3e63ce3b4be3f35ca1f59298e250f19807769471bf727179337935e96fc2fd.scope: Deactivated successfully.
Jan 23 05:15:02 np0005593232 conmon[341938]: conmon cd3e63ce3b4be3f35ca1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cd3e63ce3b4be3f35ca1f59298e250f19807769471bf727179337935e96fc2fd.scope/container/memory.events
Jan 23 05:15:02 np0005593232 podman[341921]: 2026-01-23 10:15:02.183922041 +0000 UTC m=+0.198195753 container died cd3e63ce3b4be3f35ca1f59298e250f19807769471bf727179337935e96fc2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:15:02 np0005593232 nova_compute[250269]: 2026-01-23 10:15:02.195 250273 DEBUG nova.network.neutron [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:15:02 np0005593232 systemd[1]: var-lib-containers-storage-overlay-93067467b99e2f7486cd00d166029b0c340ab2c6a3e09fbc2fd4303bce2c8085-merged.mount: Deactivated successfully.
Jan 23 05:15:02 np0005593232 podman[341921]: 2026-01-23 10:15:02.221832878 +0000 UTC m=+0.236106580 container remove cd3e63ce3b4be3f35ca1f59298e250f19807769471bf727179337935e96fc2fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 05:15:02 np0005593232 systemd[1]: libpod-conmon-cd3e63ce3b4be3f35ca1f59298e250f19807769471bf727179337935e96fc2fd.scope: Deactivated successfully.
Jan 23 05:15:02 np0005593232 nova_compute[250269]: 2026-01-23 10:15:02.312 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:02.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:15:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1599502282' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:15:02 np0005593232 podman[341961]: 2026-01-23 10:15:02.407033011 +0000 UTC m=+0.046993877 container create 84be74aece82409ada2ddf57d54227cbb09d23a50e86ce5eb91f00326b466637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_heisenberg, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:15:02 np0005593232 systemd[1]: Started libpod-conmon-84be74aece82409ada2ddf57d54227cbb09d23a50e86ce5eb91f00326b466637.scope.
Jan 23 05:15:02 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:15:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d91b80102d6391431eb92b4837307fe913d7efd817552e8a30291987afada8e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:15:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d91b80102d6391431eb92b4837307fe913d7efd817552e8a30291987afada8e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:15:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d91b80102d6391431eb92b4837307fe913d7efd817552e8a30291987afada8e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:15:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d91b80102d6391431eb92b4837307fe913d7efd817552e8a30291987afada8e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:15:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d91b80102d6391431eb92b4837307fe913d7efd817552e8a30291987afada8e6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:15:02 np0005593232 podman[341961]: 2026-01-23 10:15:02.387190967 +0000 UTC m=+0.027151853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:15:02 np0005593232 podman[341961]: 2026-01-23 10:15:02.504455448 +0000 UTC m=+0.144416334 container init 84be74aece82409ada2ddf57d54227cbb09d23a50e86ce5eb91f00326b466637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_heisenberg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:15:02 np0005593232 podman[341961]: 2026-01-23 10:15:02.512110065 +0000 UTC m=+0.152070931 container start 84be74aece82409ada2ddf57d54227cbb09d23a50e86ce5eb91f00326b466637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_heisenberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:15:02 np0005593232 podman[341961]: 2026-01-23 10:15:02.515113181 +0000 UTC m=+0.155074077 container attach 84be74aece82409ada2ddf57d54227cbb09d23a50e86ce5eb91f00326b466637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_heisenberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 05:15:02 np0005593232 podman[341978]: 2026-01-23 10:15:02.531901208 +0000 UTC m=+0.061612182 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 23 05:15:02 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:15:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:03.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:03 np0005593232 musing_heisenberg[341979]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:15:03 np0005593232 musing_heisenberg[341979]: --> relative data size: 1.0
Jan 23 05:15:03 np0005593232 musing_heisenberg[341979]: --> All data devices are unavailable
Jan 23 05:15:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2562: 321 pgs: 321 active+clean; 694 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 3.2 MiB/s wr, 247 op/s
Jan 23 05:15:03 np0005593232 systemd[1]: libpod-84be74aece82409ada2ddf57d54227cbb09d23a50e86ce5eb91f00326b466637.scope: Deactivated successfully.
Jan 23 05:15:03 np0005593232 podman[342012]: 2026-01-23 10:15:03.479633957 +0000 UTC m=+0.023556310 container died 84be74aece82409ada2ddf57d54227cbb09d23a50e86ce5eb91f00326b466637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 05:15:03 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d91b80102d6391431eb92b4837307fe913d7efd817552e8a30291987afada8e6-merged.mount: Deactivated successfully.
Jan 23 05:15:03 np0005593232 podman[342012]: 2026-01-23 10:15:03.533262341 +0000 UTC m=+0.077184684 container remove 84be74aece82409ada2ddf57d54227cbb09d23a50e86ce5eb91f00326b466637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 23 05:15:03 np0005593232 systemd[1]: libpod-conmon-84be74aece82409ada2ddf57d54227cbb09d23a50e86ce5eb91f00326b466637.scope: Deactivated successfully.
Jan 23 05:15:04 np0005593232 podman[342170]: 2026-01-23 10:15:04.184181617 +0000 UTC m=+0.037482626 container create 94676358c61cfee57371b72747e0aff493d5d5bf9061e3565651c2279abcbca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bardeen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:15:04 np0005593232 systemd[1]: Started libpod-conmon-94676358c61cfee57371b72747e0aff493d5d5bf9061e3565651c2279abcbca0.scope.
Jan 23 05:15:04 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:15:04 np0005593232 podman[342170]: 2026-01-23 10:15:04.257195531 +0000 UTC m=+0.110496570 container init 94676358c61cfee57371b72747e0aff493d5d5bf9061e3565651c2279abcbca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bardeen, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 05:15:04 np0005593232 podman[342170]: 2026-01-23 10:15:04.166780452 +0000 UTC m=+0.020081481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:15:04 np0005593232 podman[342170]: 2026-01-23 10:15:04.264050616 +0000 UTC m=+0.117351625 container start 94676358c61cfee57371b72747e0aff493d5d5bf9061e3565651c2279abcbca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 05:15:04 np0005593232 podman[342170]: 2026-01-23 10:15:04.266925958 +0000 UTC m=+0.120226997 container attach 94676358c61cfee57371b72747e0aff493d5d5bf9061e3565651c2279abcbca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 05:15:04 np0005593232 jolly_bardeen[342186]: 167 167
Jan 23 05:15:04 np0005593232 systemd[1]: libpod-94676358c61cfee57371b72747e0aff493d5d5bf9061e3565651c2279abcbca0.scope: Deactivated successfully.
Jan 23 05:15:04 np0005593232 conmon[342186]: conmon 94676358c61cfee57371 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-94676358c61cfee57371b72747e0aff493d5d5bf9061e3565651c2279abcbca0.scope/container/memory.events
Jan 23 05:15:04 np0005593232 podman[342170]: 2026-01-23 10:15:04.270915781 +0000 UTC m=+0.124216790 container died 94676358c61cfee57371b72747e0aff493d5d5bf9061e3565651c2279abcbca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bardeen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 05:15:04 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a3410e6f7fe0c75dfc848d3ea8598386fb3236f5d08ab305dc208e60a4b8e721-merged.mount: Deactivated successfully.
Jan 23 05:15:04 np0005593232 podman[342170]: 2026-01-23 10:15:04.300189023 +0000 UTC m=+0.153490032 container remove 94676358c61cfee57371b72747e0aff493d5d5bf9061e3565651c2279abcbca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 23 05:15:04 np0005593232 systemd[1]: libpod-conmon-94676358c61cfee57371b72747e0aff493d5d5bf9061e3565651c2279abcbca0.scope: Deactivated successfully.
Jan 23 05:15:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:15:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:04.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:15:04 np0005593232 podman[342210]: 2026-01-23 10:15:04.474688341 +0000 UTC m=+0.041316925 container create 7356da61e133445a9eeca62c090f4ee022dcdc71fae49804f76dbb4d40454d80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:15:04 np0005593232 systemd[1]: Started libpod-conmon-7356da61e133445a9eeca62c090f4ee022dcdc71fae49804f76dbb4d40454d80.scope.
Jan 23 05:15:04 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:15:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2b2a469ba8f4f00c68e170980ac82bfd09f86256c07b28ebdafcc37e118f5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:15:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2b2a469ba8f4f00c68e170980ac82bfd09f86256c07b28ebdafcc37e118f5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:15:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2b2a469ba8f4f00c68e170980ac82bfd09f86256c07b28ebdafcc37e118f5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:15:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2b2a469ba8f4f00c68e170980ac82bfd09f86256c07b28ebdafcc37e118f5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:15:04 np0005593232 podman[342210]: 2026-01-23 10:15:04.457173854 +0000 UTC m=+0.023802458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:15:04 np0005593232 podman[342210]: 2026-01-23 10:15:04.556351482 +0000 UTC m=+0.122980156 container init 7356da61e133445a9eeca62c090f4ee022dcdc71fae49804f76dbb4d40454d80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hawking, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 05:15:04 np0005593232 podman[342210]: 2026-01-23 10:15:04.567578441 +0000 UTC m=+0.134207055 container start 7356da61e133445a9eeca62c090f4ee022dcdc71fae49804f76dbb4d40454d80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hawking, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:15:04 np0005593232 podman[342210]: 2026-01-23 10:15:04.572068558 +0000 UTC m=+0.138697172 container attach 7356da61e133445a9eeca62c090f4ee022dcdc71fae49804f76dbb4d40454d80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 23 05:15:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:15:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:05.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]: {
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:    "0": [
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:        {
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:            "devices": [
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:                "/dev/loop3"
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:            ],
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:            "lv_name": "ceph_lv0",
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:            "lv_size": "7511998464",
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:            "name": "ceph_lv0",
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:            "tags": {
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:                "ceph.cluster_name": "ceph",
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:                "ceph.crush_device_class": "",
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:                "ceph.encrypted": "0",
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:                "ceph.osd_id": "0",
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:                "ceph.type": "block",
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:                "ceph.vdo": "0"
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:            },
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:            "type": "block",
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:            "vg_name": "ceph_vg0"
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:        }
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]:    ]
Jan 23 05:15:05 np0005593232 goofy_hawking[342226]: }
Jan 23 05:15:05 np0005593232 systemd[1]: libpod-7356da61e133445a9eeca62c090f4ee022dcdc71fae49804f76dbb4d40454d80.scope: Deactivated successfully.
Jan 23 05:15:05 np0005593232 podman[342235]: 2026-01-23 10:15:05.382399784 +0000 UTC m=+0.021671407 container died 7356da61e133445a9eeca62c090f4ee022dcdc71fae49804f76dbb4d40454d80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hawking, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 05:15:05 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ec2b2a469ba8f4f00c68e170980ac82bfd09f86256c07b28ebdafcc37e118f5c-merged.mount: Deactivated successfully.
Jan 23 05:15:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2563: 321 pgs: 321 active+clean; 694 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.3 MiB/s wr, 184 op/s
Jan 23 05:15:05 np0005593232 podman[342235]: 2026-01-23 10:15:05.442689117 +0000 UTC m=+0.081960720 container remove 7356da61e133445a9eeca62c090f4ee022dcdc71fae49804f76dbb4d40454d80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:15:05 np0005593232 systemd[1]: libpod-conmon-7356da61e133445a9eeca62c090f4ee022dcdc71fae49804f76dbb4d40454d80.scope: Deactivated successfully.
Jan 23 05:15:06 np0005593232 podman[342394]: 2026-01-23 10:15:06.028925164 +0000 UTC m=+0.036371495 container create 92609bd3c6016ff4e9f9c620ff0db8c0ac378413f1826111d75ddaf923bc6db9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dewdney, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 23 05:15:06 np0005593232 systemd[1]: Started libpod-conmon-92609bd3c6016ff4e9f9c620ff0db8c0ac378413f1826111d75ddaf923bc6db9.scope.
Jan 23 05:15:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:15:06 np0005593232 podman[342394]: 2026-01-23 10:15:06.107121425 +0000 UTC m=+0.114567766 container init 92609bd3c6016ff4e9f9c620ff0db8c0ac378413f1826111d75ddaf923bc6db9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dewdney, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 23 05:15:06 np0005593232 podman[342394]: 2026-01-23 10:15:06.012056774 +0000 UTC m=+0.019503125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:15:06 np0005593232 podman[342394]: 2026-01-23 10:15:06.114080583 +0000 UTC m=+0.121526914 container start 92609bd3c6016ff4e9f9c620ff0db8c0ac378413f1826111d75ddaf923bc6db9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 23 05:15:06 np0005593232 podman[342394]: 2026-01-23 10:15:06.117166901 +0000 UTC m=+0.124613232 container attach 92609bd3c6016ff4e9f9c620ff0db8c0ac378413f1826111d75ddaf923bc6db9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:15:06 np0005593232 recursing_dewdney[342410]: 167 167
Jan 23 05:15:06 np0005593232 systemd[1]: libpod-92609bd3c6016ff4e9f9c620ff0db8c0ac378413f1826111d75ddaf923bc6db9.scope: Deactivated successfully.
Jan 23 05:15:06 np0005593232 conmon[342410]: conmon 92609bd3c6016ff4e9f9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-92609bd3c6016ff4e9f9c620ff0db8c0ac378413f1826111d75ddaf923bc6db9.scope/container/memory.events
Jan 23 05:15:06 np0005593232 podman[342394]: 2026-01-23 10:15:06.120480655 +0000 UTC m=+0.127926996 container died 92609bd3c6016ff4e9f9c620ff0db8c0ac378413f1826111d75ddaf923bc6db9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dewdney, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 23 05:15:06 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1480c55b6ba12c3276a2f7ab3112a74965c9f26e65a8b34edbdf88a2aeacfed2-merged.mount: Deactivated successfully.
Jan 23 05:15:06 np0005593232 podman[342394]: 2026-01-23 10:15:06.151059794 +0000 UTC m=+0.158506125 container remove 92609bd3c6016ff4e9f9c620ff0db8c0ac378413f1826111d75ddaf923bc6db9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dewdney, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:15:06 np0005593232 systemd[1]: libpod-conmon-92609bd3c6016ff4e9f9c620ff0db8c0ac378413f1826111d75ddaf923bc6db9.scope: Deactivated successfully.
Jan 23 05:15:06 np0005593232 podman[342433]: 2026-01-23 10:15:06.313287154 +0000 UTC m=+0.039262677 container create 2b619ae59dfdb7a9f08891dbfe0a50a6d133c7af336ab5bee461eefa6a5ffe30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jang, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:15:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:15:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:06.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:15:06 np0005593232 systemd[1]: Started libpod-conmon-2b619ae59dfdb7a9f08891dbfe0a50a6d133c7af336ab5bee461eefa6a5ffe30.scope.
Jan 23 05:15:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:15:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/855e43376090c04679da5f3d2df20bde10184bfcc43a380e174e15a464b4c6c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:15:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/855e43376090c04679da5f3d2df20bde10184bfcc43a380e174e15a464b4c6c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:15:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/855e43376090c04679da5f3d2df20bde10184bfcc43a380e174e15a464b4c6c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:15:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/855e43376090c04679da5f3d2df20bde10184bfcc43a380e174e15a464b4c6c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:15:06 np0005593232 podman[342433]: 2026-01-23 10:15:06.389904291 +0000 UTC m=+0.115879844 container init 2b619ae59dfdb7a9f08891dbfe0a50a6d133c7af336ab5bee461eefa6a5ffe30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:15:06 np0005593232 podman[342433]: 2026-01-23 10:15:06.293568433 +0000 UTC m=+0.019543966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:15:06 np0005593232 podman[342433]: 2026-01-23 10:15:06.396088236 +0000 UTC m=+0.122063749 container start 2b619ae59dfdb7a9f08891dbfe0a50a6d133c7af336ab5bee461eefa6a5ffe30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 05:15:06 np0005593232 podman[342433]: 2026-01-23 10:15:06.399309788 +0000 UTC m=+0.125285321 container attach 2b619ae59dfdb7a9f08891dbfe0a50a6d133c7af336ab5bee461eefa6a5ffe30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:15:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:15:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Jan 23 05:15:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:07.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Jan 23 05:15:07 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Jan 23 05:15:07 np0005593232 objective_jang[342449]: {
Jan 23 05:15:07 np0005593232 objective_jang[342449]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:15:07 np0005593232 objective_jang[342449]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:15:07 np0005593232 objective_jang[342449]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:15:07 np0005593232 objective_jang[342449]:        "osd_id": 0,
Jan 23 05:15:07 np0005593232 objective_jang[342449]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:15:07 np0005593232 objective_jang[342449]:        "type": "bluestore"
Jan 23 05:15:07 np0005593232 objective_jang[342449]:    }
Jan 23 05:15:07 np0005593232 objective_jang[342449]: }
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.347 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.350 250273 DEBUG oslo_concurrency.lockutils [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "33559028-00d9-4918-9015-26172db3d00c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.350 250273 DEBUG oslo_concurrency.lockutils [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "33559028-00d9-4918-9015-26172db3d00c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.350 250273 DEBUG oslo_concurrency.lockutils [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "33559028-00d9-4918-9015-26172db3d00c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.350 250273 DEBUG oslo_concurrency.lockutils [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "33559028-00d9-4918-9015-26172db3d00c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:15:07 np0005593232 systemd[1]: libpod-2b619ae59dfdb7a9f08891dbfe0a50a6d133c7af336ab5bee461eefa6a5ffe30.scope: Deactivated successfully.
Jan 23 05:15:07 np0005593232 podman[342433]: 2026-01-23 10:15:07.350939658 +0000 UTC m=+1.076915191 container died 2b619ae59dfdb7a9f08891dbfe0a50a6d133c7af336ab5bee461eefa6a5ffe30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jang, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.351 250273 DEBUG oslo_concurrency.lockutils [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "33559028-00d9-4918-9015-26172db3d00c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.352 250273 INFO nova.compute.manager [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Terminating instance#033[00m
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.353 250273 DEBUG nova.compute.manager [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:15:07 np0005593232 systemd[1]: var-lib-containers-storage-overlay-855e43376090c04679da5f3d2df20bde10184bfcc43a380e174e15a464b4c6c1-merged.mount: Deactivated successfully.
Jan 23 05:15:07 np0005593232 podman[342433]: 2026-01-23 10:15:07.403702817 +0000 UTC m=+1.129678330 container remove 2b619ae59dfdb7a9f08891dbfe0a50a6d133c7af336ab5bee461eefa6a5ffe30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 05:15:07 np0005593232 kernel: tap5004fad4-57 (unregistering): left promiscuous mode
Jan 23 05:15:07 np0005593232 NetworkManager[49057]: <info>  [1769163307.4100] device (tap5004fad4-57): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:15:07 np0005593232 systemd[1]: libpod-conmon-2b619ae59dfdb7a9f08891dbfe0a50a6d133c7af336ab5bee461eefa6a5ffe30.scope: Deactivated successfully.
Jan 23 05:15:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2565: 321 pgs: 321 active+clean; 694 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 850 KiB/s wr, 159 op/s
Jan 23 05:15:07 np0005593232 ovn_controller[151001]: 2026-01-23T10:15:07Z|00514|binding|INFO|Releasing lport 5004fad4-5788-4709-9c83-b5fe075c0aa7 from this chassis (sb_readonly=0)
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.421 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:07 np0005593232 ovn_controller[151001]: 2026-01-23T10:15:07Z|00515|binding|INFO|Setting lport 5004fad4-5788-4709-9c83-b5fe075c0aa7 down in Southbound
Jan 23 05:15:07 np0005593232 ovn_controller[151001]: 2026-01-23T10:15:07Z|00516|binding|INFO|Removing iface tap5004fad4-57 ovn-installed in OVS
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.424 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:07.433 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:95:4c:75 10.100.0.7'], port_security=['fa:16:3e:95:4c:75 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '33559028-00d9-4918-9015-26172db3d00c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8d9599b4-8855-4310-af02-cdd058438f7d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9dd869ce76e44fc8a82b8bbee1654d33', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'cf3e0bf9-33c6-483b-a880-c8297a0be71f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.199', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=875f4baa-cb85-49ca-8f02-78715d351fdb, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=5004fad4-5788-4709-9c83-b5fe075c0aa7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:15:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:07.435 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 5004fad4-5788-4709-9c83-b5fe075c0aa7 in datapath 8d9599b4-8855-4310-af02-cdd058438f7d unbound from our chassis#033[00m
Jan 23 05:15:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:07.437 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8d9599b4-8855-4310-af02-cdd058438f7d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.439 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:07.439 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c53559ca-5a51-4ab0-adde-4ded344e0a4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:07.440 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d namespace which is not needed anymore#033[00m
Jan 23 05:15:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:15:07 np0005593232 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000086.scope: Deactivated successfully.
Jan 23 05:15:07 np0005593232 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000086.scope: Consumed 14.115s CPU time.
Jan 23 05:15:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:15:07 np0005593232 systemd-machined[215836]: Machine qemu-61-instance-00000086 terminated.
Jan 23 05:15:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:15:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:15:07 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev f68ae418-20c8-49d3-901c-2a2bec62296b does not exist
Jan 23 05:15:07 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 789264c6-7598-416f-b68b-f5226c80fae7 does not exist
Jan 23 05:15:07 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 3261b971-0236-415e-bc89-d2fad8d53e82 does not exist
Jan 23 05:15:07 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[341285]: [NOTICE]   (341290) : haproxy version is 2.8.14-c23fe91
Jan 23 05:15:07 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[341285]: [NOTICE]   (341290) : path to executable is /usr/sbin/haproxy
Jan 23 05:15:07 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[341285]: [WARNING]  (341290) : Exiting Master process...
Jan 23 05:15:07 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[341285]: [ALERT]    (341290) : Current worker (341293) exited with code 143 (Terminated)
Jan 23 05:15:07 np0005593232 neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d[341285]: [WARNING]  (341290) : All workers exited. Exiting... (0)
Jan 23 05:15:07 np0005593232 systemd[1]: libpod-6173389c7e7eaa2df77128cfb438fc442db7d6559b27a834ce881e0439bc497c.scope: Deactivated successfully.
Jan 23 05:15:07 np0005593232 conmon[341285]: conmon 6173389c7e7eaa2df771 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6173389c7e7eaa2df77128cfb438fc442db7d6559b27a834ce881e0439bc497c.scope/container/memory.events
Jan 23 05:15:07 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:15:07 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:15:07 np0005593232 podman[342531]: 2026-01-23 10:15:07.590554407 +0000 UTC m=+0.050123935 container died 6173389c7e7eaa2df77128cfb438fc442db7d6559b27a834ce881e0439bc497c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.595 250273 INFO nova.virt.libvirt.driver [-] [instance: 33559028-00d9-4918-9015-26172db3d00c] Instance destroyed successfully.#033[00m
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.596 250273 DEBUG nova.objects.instance [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lazy-loading 'resources' on Instance uuid 33559028-00d9-4918-9015-26172db3d00c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:15:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:15:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:15:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:15:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:15:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:15:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.611 250273 DEBUG nova.network.neutron [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Updating instance_info_cache with network_info: [{"id": "4a04244c-3270-4ff3-ad30-52e80e7db513", "address": "fa:16:3e:c0:68:91", "network": {"id": "f98d79de-4a23-4f29-9848-c5d4c5683a5d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1507431135-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ae621f21a8e438fb95152309b38cee5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a04244c-32", "ovs_interfaceid": "4a04244c-3270-4ff3-ad30-52e80e7db513", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:15:07 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6173389c7e7eaa2df77128cfb438fc442db7d6559b27a834ce881e0439bc497c-userdata-shm.mount: Deactivated successfully.
Jan 23 05:15:07 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1b87c0aab52f769463663cd11c4965559e837d532fee17b8f81e98d74b65dbed-merged.mount: Deactivated successfully.
Jan 23 05:15:07 np0005593232 podman[342531]: 2026-01-23 10:15:07.638215311 +0000 UTC m=+0.097784839 container cleanup 6173389c7e7eaa2df77128cfb438fc442db7d6559b27a834ce881e0439bc497c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:15:07 np0005593232 systemd[1]: libpod-conmon-6173389c7e7eaa2df77128cfb438fc442db7d6559b27a834ce881e0439bc497c.scope: Deactivated successfully.
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.685 250273 DEBUG nova.virt.libvirt.vif [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-23T10:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1716094682',display_name='tempest-ServerActionsTestOtherB-server-1716094682',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1716094682',id=134,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPpuWItOSZUstL5LlOZAhtyKqrmFs0bJ/+DBMLk1rKDBu2SnttdOypH9Db6AMV4nGhLXOyr97hIMUaALurv7OcM9NkoB1CxFMDb3d0IWPDnRphumt71Jz0jUP0kiZtXBTQ==',key_name='tempest-keypair-1844396132',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:14:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9dd869ce76e44fc8a82b8bbee1654d33',ramdisk_id='',reservation_id='r-frjx5azl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1052932467',owner_user_name='tempest-ServerActionsTestOtherB-1052932467-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:14:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='aca3cab576d641d3b89e7dddf155d467',uuid=33559028-00d9-4918-9015-26172db3d00c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5004fad4-5788-4709-9c83-b5fe075c0aa7", "address": "fa:16:3e:95:4c:75", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5004fad4-57", "ovs_interfaceid": "5004fad4-5788-4709-9c83-b5fe075c0aa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.685 250273 DEBUG nova.network.os_vif_util [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converting VIF {"id": "5004fad4-5788-4709-9c83-b5fe075c0aa7", "address": "fa:16:3e:95:4c:75", "network": {"id": "8d9599b4-8855-4310-af02-cdd058438f7d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1325714374-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9dd869ce76e44fc8a82b8bbee1654d33", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5004fad4-57", "ovs_interfaceid": "5004fad4-5788-4709-9c83-b5fe075c0aa7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.686 250273 DEBUG nova.network.os_vif_util [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:95:4c:75,bridge_name='br-int',has_traffic_filtering=True,id=5004fad4-5788-4709-9c83-b5fe075c0aa7,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5004fad4-57') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.686 250273 DEBUG os_vif [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:4c:75,bridge_name='br-int',has_traffic_filtering=True,id=5004fad4-5788-4709-9c83-b5fe075c0aa7,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5004fad4-57') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.688 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.688 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5004fad4-57, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.690 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.692 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.692 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.694 250273 INFO os_vif [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:95:4c:75,bridge_name='br-int',has_traffic_filtering=True,id=5004fad4-5788-4709-9c83-b5fe075c0aa7,network=Network(8d9599b4-8855-4310-af02-cdd058438f7d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5004fad4-57')#033[00m
Jan 23 05:15:07 np0005593232 podman[342599]: 2026-01-23 10:15:07.709808015 +0000 UTC m=+0.048015245 container remove 6173389c7e7eaa2df77128cfb438fc442db7d6559b27a834ce881e0439bc497c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.717 250273 DEBUG oslo_concurrency.lockutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Releasing lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.717 250273 DEBUG nova.compute.manager [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Instance network_info: |[{"id": "4a04244c-3270-4ff3-ad30-52e80e7db513", "address": "fa:16:3e:c0:68:91", "network": {"id": "f98d79de-4a23-4f29-9848-c5d4c5683a5d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1507431135-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ae621f21a8e438fb95152309b38cee5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a04244c-32", "ovs_interfaceid": "4a04244c-3270-4ff3-ad30-52e80e7db513", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.718 250273 DEBUG oslo_concurrency.lockutils [req-72bfe28c-5266-4a4c-ab20-51a774621201 req-211a4928-a7f6-4ed9-8eea-78f9ac25735a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 05:15:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:07.717 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3579edb4-adda-4f44-8ff9-7ec06cf44b62]: (4, ('Fri Jan 23 10:15:07 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d (6173389c7e7eaa2df77128cfb438fc442db7d6559b27a834ce881e0439bc497c)\n6173389c7e7eaa2df77128cfb438fc442db7d6559b27a834ce881e0439bc497c\nFri Jan 23 10:15:07 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d (6173389c7e7eaa2df77128cfb438fc442db7d6559b27a834ce881e0439bc497c)\n6173389c7e7eaa2df77128cfb438fc442db7d6559b27a834ce881e0439bc497c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.718 250273 DEBUG nova.network.neutron [req-72bfe28c-5266-4a4c-ab20-51a774621201 req-211a4928-a7f6-4ed9-8eea-78f9ac25735a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Refreshing network info cache for port 4a04244c-3270-4ff3-ad30-52e80e7db513 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 05:15:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:07.719 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[36910cb9-1af7-40cb-8a08-a212e25dcbb5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:15:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:07.720 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d9599b4-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.721 250273 DEBUG nova.virt.libvirt.driver [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Start _get_guest_xml network_info=[{"id": "4a04244c-3270-4ff3-ad30-52e80e7db513", "address": "fa:16:3e:c0:68:91", "network": {"id": "f98d79de-4a23-4f29-9848-c5d4c5683a5d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1507431135-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ae621f21a8e438fb95152309b38cee5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a04244c-32", "ovs_interfaceid": "4a04244c-3270-4ff3-ad30-52e80e7db513", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'delete_on_termination': False, 'attachment_id': 'ee693baf-e89b-42e6-a9cb-a114c373e1a2', 'device_type': 'disk', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-0bf60c43-104e-4a31-b83b-9fcea380005d', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '0bf60c43-104e-4a31-b83b-9fcea380005d', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'ccd07f55-529f-4dbb-989c-2cdbdd393a0b', 'attached_at': '', 'detached_at': '', 'volume_id': '0bf60c43-104e-4a31-b83b-9fcea380005d', 'serial': '0bf60c43-104e-4a31-b83b-9fcea380005d'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 23 05:15:07 np0005593232 kernel: tap8d9599b4-80: left promiscuous mode
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.724 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.728 250273 WARNING nova.virt.libvirt.driver [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.736 250273 DEBUG nova.virt.libvirt.host [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.737 250273 DEBUG nova.virt.libvirt.host [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.738 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:15:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:07.742 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1c8c0df1-9de0-45ad-a04e-6fa635adda7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.743 250273 DEBUG nova.virt.libvirt.host [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.744 250273 DEBUG nova.virt.libvirt.host [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.745 250273 DEBUG nova.virt.libvirt.driver [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.745 250273 DEBUG nova.virt.hardware [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.746 250273 DEBUG nova.virt.hardware [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.746 250273 DEBUG nova.virt.hardware [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.746 250273 DEBUG nova.virt.hardware [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.746 250273 DEBUG nova.virt.hardware [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.746 250273 DEBUG nova.virt.hardware [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.746 250273 DEBUG nova.virt.hardware [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.747 250273 DEBUG nova.virt.hardware [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.747 250273 DEBUG nova.virt.hardware [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.747 250273 DEBUG nova.virt.hardware [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.747 250273 DEBUG nova.virt.hardware [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 23 05:15:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:07.755 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[71e30c69-a9bc-42fb-8d62-bc2dde2e10a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:15:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:07.756 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[675f18f5-fc99-4aca-9e5e-9a739f27340f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:15:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:07.776 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[14eb0fd0-8956-41f4-83bf-b94398d7f889]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 720168, 'reachable_time': 16379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342639, 'error': None, 'target': 'ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:15:07 np0005593232 systemd[1]: run-netns-ovnmeta\x2d8d9599b4\x2d8855\x2d4310\x2daf02\x2dcdd058438f7d.mount: Deactivated successfully.
Jan 23 05:15:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:07.782 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8d9599b4-8855-4310-af02-cdd058438f7d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 23 05:15:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:07.782 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[e72af530-d69f-403f-a0b1-14177ac1b4e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.788 250273 DEBUG nova.storage.rbd_utils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] rbd image ccd07f55-529f-4dbb-989c-2cdbdd393a0b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:15:07 np0005593232 nova_compute[250269]: 2026-01-23 10:15:07.793 250273 DEBUG oslo_concurrency.processutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.114 250273 INFO nova.virt.libvirt.driver [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Deleting instance files /var/lib/nova/instances/33559028-00d9-4918-9015-26172db3d00c_del
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.116 250273 INFO nova.virt.libvirt.driver [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Deletion of /var/lib/nova/instances/33559028-00d9-4918-9015-26172db3d00c_del complete
Jan 23 05:15:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:15:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1965890841' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.257 250273 DEBUG oslo_concurrency.processutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.280 250273 INFO nova.compute.manager [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Took 0.93 seconds to destroy the instance on the hypervisor.
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.281 250273 DEBUG oslo.service.loopingcall [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.281 250273 DEBUG nova.compute.manager [-] [instance: 33559028-00d9-4918-9015-26172db3d00c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.281 250273 DEBUG nova.network.neutron [-] [instance: 33559028-00d9-4918-9015-26172db3d00c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 23 05:15:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:08.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.381 250273 DEBUG nova.virt.libvirt.vif [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:14:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-649765900',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-649765900',id=142,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPuMczToXGmZUNyxG5fVGeV6xaoJVOpQ6Lh9dx5t6v22bv4xalVGQLUjYNEpg7ajkuOU/WHiNfvMhffjZHY/YojnQQYOX+q0GTa9+NPbkGDFf1XELa+vTNvIe6ZV8CwP9g==',key_name='tempest-TestInstancesWithCinderVolumes-232096272',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3ae621f21a8e438fb95152309b38cee5',ramdisk_id='',reservation_id='r-zfxoc5fn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_proj
ect_name='tempest-TestInstancesWithCinderVolumes-565485208',owner_user_name='tempest-TestInstancesWithCinderVolumes-565485208-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:14:53Z,user_data=None,user_id='95ac13194f0940128d42af3d45d130fa',uuid=ccd07f55-529f-4dbb-989c-2cdbdd393a0b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4a04244c-3270-4ff3-ad30-52e80e7db513", "address": "fa:16:3e:c0:68:91", "network": {"id": "f98d79de-4a23-4f29-9848-c5d4c5683a5d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1507431135-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ae621f21a8e438fb95152309b38cee5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a04244c-32", "ovs_interfaceid": "4a04244c-3270-4ff3-ad30-52e80e7db513", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.381 250273 DEBUG nova.network.os_vif_util [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Converting VIF {"id": "4a04244c-3270-4ff3-ad30-52e80e7db513", "address": "fa:16:3e:c0:68:91", "network": {"id": "f98d79de-4a23-4f29-9848-c5d4c5683a5d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1507431135-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ae621f21a8e438fb95152309b38cee5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a04244c-32", "ovs_interfaceid": "4a04244c-3270-4ff3-ad30-52e80e7db513", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.382 250273 DEBUG nova.network.os_vif_util [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:68:91,bridge_name='br-int',has_traffic_filtering=True,id=4a04244c-3270-4ff3-ad30-52e80e7db513,network=Network(f98d79de-4a23-4f29-9848-c5d4c5683a5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a04244c-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.383 250273 DEBUG nova.objects.instance [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lazy-loading 'pci_devices' on Instance uuid ccd07f55-529f-4dbb-989c-2cdbdd393a0b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.475 250273 DEBUG nova.virt.libvirt.driver [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:15:08 np0005593232 nova_compute[250269]:  <uuid>ccd07f55-529f-4dbb-989c-2cdbdd393a0b</uuid>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:  <name>instance-0000008e</name>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <nova:name>tempest-TestInstancesWithCinderVolumes-server-649765900</nova:name>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:15:07</nova:creationTime>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:15:08 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:        <nova:user uuid="95ac13194f0940128d42af3d45d130fa">tempest-TestInstancesWithCinderVolumes-565485208-project-member</nova:user>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:        <nova:project uuid="3ae621f21a8e438fb95152309b38cee5">tempest-TestInstancesWithCinderVolumes-565485208</nova:project>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:        <nova:port uuid="4a04244c-3270-4ff3-ad30-52e80e7db513">
Jan 23 05:15:08 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <entry name="serial">ccd07f55-529f-4dbb-989c-2cdbdd393a0b</entry>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <entry name="uuid">ccd07f55-529f-4dbb-989c-2cdbdd393a0b</entry>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/ccd07f55-529f-4dbb-989c-2cdbdd393a0b_disk.config">
Jan 23 05:15:08 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:15:08 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="volumes/volume-0bf60c43-104e-4a31-b83b-9fcea380005d">
Jan 23 05:15:08 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:15:08 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <serial>0bf60c43-104e-4a31-b83b-9fcea380005d</serial>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:c0:68:91"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <target dev="tap4a04244c-32"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/ccd07f55-529f-4dbb-989c-2cdbdd393a0b/console.log" append="off"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:15:08 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:15:08 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:15:08 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:15:08 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.476 250273 DEBUG nova.compute.manager [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Preparing to wait for external event network-vif-plugged-4a04244c-3270-4ff3-ad30-52e80e7db513 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.476 250273 DEBUG oslo_concurrency.lockutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Acquiring lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.477 250273 DEBUG oslo_concurrency.lockutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.477 250273 DEBUG oslo_concurrency.lockutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.478 250273 DEBUG nova.virt.libvirt.vif [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:14:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-649765900',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-649765900',id=142,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPuMczToXGmZUNyxG5fVGeV6xaoJVOpQ6Lh9dx5t6v22bv4xalVGQLUjYNEpg7ajkuOU/WHiNfvMhffjZHY/YojnQQYOX+q0GTa9+NPbkGDFf1XELa+vTNvIe6ZV8CwP9g==',key_name='tempest-TestInstancesWithCinderVolumes-232096272',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3ae621f21a8e438fb95152309b38cee5',ramdisk_id='',reservation_id='r-zfxoc5fn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestInstancesWithCinderVolumes-565485208',owner_user_name='tempest-TestInstancesWithCinderVolumes-565485208-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:14:53Z,user_data=None,user_id='95ac13194f0940128d42af3d45d130fa',uuid=ccd07f55-529f-4dbb-989c-2cdbdd393a0b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4a04244c-3270-4ff3-ad30-52e80e7db513", "address": "fa:16:3e:c0:68:91", "network": {"id": "f98d79de-4a23-4f29-9848-c5d4c5683a5d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1507431135-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ae621f21a8e438fb95152309b38cee5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a04244c-32", "ovs_interfaceid": "4a04244c-3270-4ff3-ad30-52e80e7db513", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.478 250273 DEBUG nova.network.os_vif_util [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Converting VIF {"id": "4a04244c-3270-4ff3-ad30-52e80e7db513", "address": "fa:16:3e:c0:68:91", "network": {"id": "f98d79de-4a23-4f29-9848-c5d4c5683a5d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1507431135-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ae621f21a8e438fb95152309b38cee5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a04244c-32", "ovs_interfaceid": "4a04244c-3270-4ff3-ad30-52e80e7db513", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.479 250273 DEBUG nova.network.os_vif_util [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c0:68:91,bridge_name='br-int',has_traffic_filtering=True,id=4a04244c-3270-4ff3-ad30-52e80e7db513,network=Network(f98d79de-4a23-4f29-9848-c5d4c5683a5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a04244c-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.480 250273 DEBUG os_vif [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:68:91,bridge_name='br-int',has_traffic_filtering=True,id=4a04244c-3270-4ff3-ad30-52e80e7db513,network=Network(f98d79de-4a23-4f29-9848-c5d4c5683a5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a04244c-32') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.480 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.481 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.482 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.485 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.485 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a04244c-32, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.487 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4a04244c-32, col_values=(('external_ids', {'iface-id': '4a04244c-3270-4ff3-ad30-52e80e7db513', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c0:68:91', 'vm-uuid': 'ccd07f55-529f-4dbb-989c-2cdbdd393a0b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.529 250273 DEBUG nova.network.neutron [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Updating instance_info_cache with network_info: [{"id": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "address": "fa:16:3e:15:fb:62", "network": {"id": "06fc7c06-b10a-4dd5-9475-e6feb221ea4a", "bridge": "br-int", "label": "tempest-network-smoke--1334269906", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76c3794-4b", "ovs_interfaceid": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.534 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:08 np0005593232 NetworkManager[49057]: <info>  [1769163308.5354] manager: (tap4a04244c-32): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/248)
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.538 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.539 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.540 250273 INFO os_vif [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c0:68:91,bridge_name='br-int',has_traffic_filtering=True,id=4a04244c-3270-4ff3-ad30-52e80e7db513,network=Network(f98d79de-4a23-4f29-9848-c5d4c5683a5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a04244c-32')#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.684 250273 DEBUG oslo_concurrency.lockutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Releasing lock "refresh_cache-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.685 250273 DEBUG nova.compute.manager [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Instance network_info: |[{"id": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "address": "fa:16:3e:15:fb:62", "network": {"id": "06fc7c06-b10a-4dd5-9475-e6feb221ea4a", "bridge": "br-int", "label": "tempest-network-smoke--1334269906", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76c3794-4b", "ovs_interfaceid": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.685 250273 DEBUG oslo_concurrency.lockutils [req-5b2427d3-c5bc-4677-a563-c2f97842aec5 req-f84fcaad-0bae-4014-82bb-64d41ecba40b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.685 250273 DEBUG nova.network.neutron [req-5b2427d3-c5bc-4677-a563-c2f97842aec5 req-f84fcaad-0bae-4014-82bb-64d41ecba40b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Refreshing network info cache for port e76c3794-4bfb-450d-901b-d5c2ecccb574 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.688 250273 DEBUG nova.virt.libvirt.driver [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Start _get_guest_xml network_info=[{"id": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "address": "fa:16:3e:15:fb:62", "network": {"id": "06fc7c06-b10a-4dd5-9475-e6feb221ea4a", "bridge": "br-int", "label": "tempest-network-smoke--1334269906", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76c3794-4b", "ovs_interfaceid": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.693 250273 DEBUG nova.virt.libvirt.driver [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.694 250273 DEBUG nova.virt.libvirt.driver [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.694 250273 DEBUG nova.virt.libvirt.driver [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] No VIF found with MAC fa:16:3e:c0:68:91, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.694 250273 INFO nova.virt.libvirt.driver [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Using config drive#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.719 250273 DEBUG nova.storage.rbd_utils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] rbd image ccd07f55-529f-4dbb-989c-2cdbdd393a0b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.727 250273 WARNING nova.virt.libvirt.driver [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.740 250273 DEBUG nova.virt.libvirt.host [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.741 250273 DEBUG nova.virt.libvirt.host [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.755 250273 DEBUG nova.virt.libvirt.host [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.755 250273 DEBUG nova.virt.libvirt.host [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.756 250273 DEBUG nova.virt.libvirt.driver [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.757 250273 DEBUG nova.virt.hardware [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.757 250273 DEBUG nova.virt.hardware [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.757 250273 DEBUG nova.virt.hardware [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.757 250273 DEBUG nova.virt.hardware [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.757 250273 DEBUG nova.virt.hardware [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.758 250273 DEBUG nova.virt.hardware [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.758 250273 DEBUG nova.virt.hardware [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.758 250273 DEBUG nova.virt.hardware [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.758 250273 DEBUG nova.virt.hardware [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.758 250273 DEBUG nova.virt.hardware [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.758 250273 DEBUG nova.virt.hardware [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:15:08 np0005593232 nova_compute[250269]: 2026-01-23 10:15:08.761 250273 DEBUG oslo_concurrency.processutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:15:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:09.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:15:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3826000271' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.200 250273 DEBUG oslo_concurrency.processutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.225 250273 DEBUG nova.storage.rbd_utils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image c8c9cb1d-4faa-4945-afcc-f67ccc4d4237_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.228 250273 DEBUG oslo_concurrency.processutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:15:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2566: 321 pgs: 321 active+clean; 672 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 211 op/s
Jan 23 05:15:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:15:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2960780344' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.690 250273 DEBUG oslo_concurrency.processutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.692 250273 DEBUG nova.virt.libvirt.vif [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:14:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-167712358',display_name='tempest-TestNetworkBasicOps-server-167712358',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-167712358',id=143,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAF35E1Zip5mB7Z7bBpTlXmea73flBCQ0OxIp3E3UHfQJ8/C+PcZ9Yn+30apyBlpqi/cSP1tnLqb2v0HvL8Yo3sFqR36G/CFrDPtbVe+ut7anU4AXXWhdM5kg8fAAxRz+A==',key_name='tempest-TestNetworkBasicOps-880733138',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98c94577fcdb4c3d893898ede79ea2d4',ramdisk_id='',reservation_id='r-9al0ysbc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-789276745',owner_user_name='tempest-TestNetworkBasicOps-789276745-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:14:54Z,user_data=None,user_id='60291ce86b6946629a2e48f6680312cb',uuid=c8c9cb1d-4faa-4945-afcc-f67ccc4d4237,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "address": "fa:16:3e:15:fb:62", "network": {"id": "06fc7c06-b10a-4dd5-9475-e6feb221ea4a", "bridge": "br-int", "label": "tempest-network-smoke--1334269906", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76c3794-4b", "ovs_interfaceid": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.692 250273 DEBUG nova.network.os_vif_util [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converting VIF {"id": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "address": "fa:16:3e:15:fb:62", "network": {"id": "06fc7c06-b10a-4dd5-9475-e6feb221ea4a", "bridge": "br-int", "label": "tempest-network-smoke--1334269906", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76c3794-4b", "ovs_interfaceid": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.693 250273 DEBUG nova.network.os_vif_util [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:fb:62,bridge_name='br-int',has_traffic_filtering=True,id=e76c3794-4bfb-450d-901b-d5c2ecccb574,network=Network(06fc7c06-b10a-4dd5-9475-e6feb221ea4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape76c3794-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.694 250273 DEBUG nova.objects.instance [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lazy-loading 'pci_devices' on Instance uuid c8c9cb1d-4faa-4945-afcc-f67ccc4d4237 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.774 250273 DEBUG nova.virt.libvirt.driver [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:15:09 np0005593232 nova_compute[250269]:  <uuid>c8c9cb1d-4faa-4945-afcc-f67ccc4d4237</uuid>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:  <name>instance-0000008f</name>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <nova:name>tempest-TestNetworkBasicOps-server-167712358</nova:name>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:15:08</nova:creationTime>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:15:09 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:        <nova:user uuid="60291ce86b6946629a2e48f6680312cb">tempest-TestNetworkBasicOps-789276745-project-member</nova:user>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:        <nova:project uuid="98c94577fcdb4c3d893898ede79ea2d4">tempest-TestNetworkBasicOps-789276745</nova:project>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:        <nova:port uuid="e76c3794-4bfb-450d-901b-d5c2ecccb574">
Jan 23 05:15:09 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <entry name="serial">c8c9cb1d-4faa-4945-afcc-f67ccc4d4237</entry>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <entry name="uuid">c8c9cb1d-4faa-4945-afcc-f67ccc4d4237</entry>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237_disk">
Jan 23 05:15:09 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:15:09 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237_disk.config">
Jan 23 05:15:09 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:15:09 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:15:fb:62"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <target dev="tape76c3794-4b"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237/console.log" append="off"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:15:09 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:15:09 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:15:09 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:15:09 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.775 250273 DEBUG nova.compute.manager [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Preparing to wait for external event network-vif-plugged-e76c3794-4bfb-450d-901b-d5c2ecccb574 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.775 250273 DEBUG oslo_concurrency.lockutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.775 250273 DEBUG oslo_concurrency.lockutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.776 250273 DEBUG oslo_concurrency.lockutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.776 250273 DEBUG nova.virt.libvirt.vif [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:14:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-167712358',display_name='tempest-TestNetworkBasicOps-server-167712358',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-167712358',id=143,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAF35E1Zip5mB7Z7bBpTlXmea73flBCQ0OxIp3E3UHfQJ8/C+PcZ9Yn+30apyBlpqi/cSP1tnLqb2v0HvL8Yo3sFqR36G/CFrDPtbVe+ut7anU4AXXWhdM5kg8fAAxRz+A==',key_name='tempest-TestNetworkBasicOps-880733138',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98c94577fcdb4c3d893898ede79ea2d4',ramdisk_id='',reservation_id='r-9al0ysbc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-789276745',owner_user_name='tempest-TestNetworkBasicOps-789276745-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:14:54Z,user_data=None,user_id='60291ce86b6946629a2e48f6680312cb',uuid=c8c9cb1d-4faa-4945-afcc-f67ccc4d4237,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "address": "fa:16:3e:15:fb:62", "network": {"id": "06fc7c06-b10a-4dd5-9475-e6feb221ea4a", "bridge": "br-int", "label": "tempest-network-smoke--1334269906", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76c3794-4b", "ovs_interfaceid": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.776 250273 DEBUG nova.network.os_vif_util [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converting VIF {"id": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "address": "fa:16:3e:15:fb:62", "network": {"id": "06fc7c06-b10a-4dd5-9475-e6feb221ea4a", "bridge": "br-int", "label": "tempest-network-smoke--1334269906", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76c3794-4b", "ovs_interfaceid": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.777 250273 DEBUG nova.network.os_vif_util [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:fb:62,bridge_name='br-int',has_traffic_filtering=True,id=e76c3794-4bfb-450d-901b-d5c2ecccb574,network=Network(06fc7c06-b10a-4dd5-9475-e6feb221ea4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape76c3794-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.777 250273 DEBUG os_vif [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:fb:62,bridge_name='br-int',has_traffic_filtering=True,id=e76c3794-4bfb-450d-901b-d5c2ecccb574,network=Network(06fc7c06-b10a-4dd5-9475-e6feb221ea4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape76c3794-4b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.778 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.778 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.779 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.781 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.781 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape76c3794-4b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.781 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape76c3794-4b, col_values=(('external_ids', {'iface-id': 'e76c3794-4bfb-450d-901b-d5c2ecccb574', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:fb:62', 'vm-uuid': 'c8c9cb1d-4faa-4945-afcc-f67ccc4d4237'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:15:09 np0005593232 NetworkManager[49057]: <info>  [1769163309.7839] manager: (tape76c3794-4b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/249)
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.784 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.790 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.790 250273 INFO os_vif [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:fb:62,bridge_name='br-int',has_traffic_filtering=True,id=e76c3794-4bfb-450d-901b-d5c2ecccb574,network=Network(06fc7c06-b10a-4dd5-9475-e6feb221ea4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape76c3794-4b')#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.887 250273 DEBUG nova.virt.libvirt.driver [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.887 250273 DEBUG nova.virt.libvirt.driver [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.887 250273 DEBUG nova.virt.libvirt.driver [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] No VIF found with MAC fa:16:3e:15:fb:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.888 250273 INFO nova.virt.libvirt.driver [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Using config drive#033[00m
Jan 23 05:15:09 np0005593232 nova_compute[250269]: 2026-01-23 10:15:09.909 250273 DEBUG nova.storage.rbd_utils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image c8c9cb1d-4faa-4945-afcc-f67ccc4d4237_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:15:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:10.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:10 np0005593232 nova_compute[250269]: 2026-01-23 10:15:10.845 250273 DEBUG nova.compute.manager [req-c7a9f673-9186-454e-87bc-bd193ff7ab83 req-9fdc1b22-4df9-454f-b570-90478178933c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Received event network-vif-unplugged-5004fad4-5788-4709-9c83-b5fe075c0aa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:15:10 np0005593232 nova_compute[250269]: 2026-01-23 10:15:10.846 250273 DEBUG oslo_concurrency.lockutils [req-c7a9f673-9186-454e-87bc-bd193ff7ab83 req-9fdc1b22-4df9-454f-b570-90478178933c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "33559028-00d9-4918-9015-26172db3d00c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:15:10 np0005593232 nova_compute[250269]: 2026-01-23 10:15:10.846 250273 DEBUG oslo_concurrency.lockutils [req-c7a9f673-9186-454e-87bc-bd193ff7ab83 req-9fdc1b22-4df9-454f-b570-90478178933c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "33559028-00d9-4918-9015-26172db3d00c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:15:10 np0005593232 nova_compute[250269]: 2026-01-23 10:15:10.847 250273 DEBUG oslo_concurrency.lockutils [req-c7a9f673-9186-454e-87bc-bd193ff7ab83 req-9fdc1b22-4df9-454f-b570-90478178933c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "33559028-00d9-4918-9015-26172db3d00c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:15:10 np0005593232 nova_compute[250269]: 2026-01-23 10:15:10.847 250273 DEBUG nova.compute.manager [req-c7a9f673-9186-454e-87bc-bd193ff7ab83 req-9fdc1b22-4df9-454f-b570-90478178933c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] No waiting events found dispatching network-vif-unplugged-5004fad4-5788-4709-9c83-b5fe075c0aa7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:15:10 np0005593232 nova_compute[250269]: 2026-01-23 10:15:10.847 250273 DEBUG nova.compute.manager [req-c7a9f673-9186-454e-87bc-bd193ff7ab83 req-9fdc1b22-4df9-454f-b570-90478178933c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Received event network-vif-unplugged-5004fad4-5788-4709-9c83-b5fe075c0aa7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:15:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:11.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2567: 321 pgs: 321 active+clean; 672 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.2 MiB/s wr, 156 op/s
Jan 23 05:15:11 np0005593232 nova_compute[250269]: 2026-01-23 10:15:11.648 250273 INFO nova.virt.libvirt.driver [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Creating config drive at /var/lib/nova/instances/ccd07f55-529f-4dbb-989c-2cdbdd393a0b/disk.config#033[00m
Jan 23 05:15:11 np0005593232 nova_compute[250269]: 2026-01-23 10:15:11.656 250273 DEBUG oslo_concurrency.processutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ccd07f55-529f-4dbb-989c-2cdbdd393a0b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwmve0ld_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:15:11 np0005593232 nova_compute[250269]: 2026-01-23 10:15:11.792 250273 DEBUG oslo_concurrency.processutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ccd07f55-529f-4dbb-989c-2cdbdd393a0b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwmve0ld_" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:15:11 np0005593232 nova_compute[250269]: 2026-01-23 10:15:11.823 250273 DEBUG nova.storage.rbd_utils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] rbd image ccd07f55-529f-4dbb-989c-2cdbdd393a0b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:15:11 np0005593232 nova_compute[250269]: 2026-01-23 10:15:11.826 250273 DEBUG oslo_concurrency.processutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ccd07f55-529f-4dbb-989c-2cdbdd393a0b/disk.config ccd07f55-529f-4dbb-989c-2cdbdd393a0b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:15:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:15:11 np0005593232 nova_compute[250269]: 2026-01-23 10:15:11.991 250273 DEBUG oslo_concurrency.processutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ccd07f55-529f-4dbb-989c-2cdbdd393a0b/disk.config ccd07f55-529f-4dbb-989c-2cdbdd393a0b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:15:11 np0005593232 nova_compute[250269]: 2026-01-23 10:15:11.992 250273 INFO nova.virt.libvirt.driver [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Deleting local config drive /var/lib/nova/instances/ccd07f55-529f-4dbb-989c-2cdbdd393a0b/disk.config because it was imported into RBD.#033[00m
Jan 23 05:15:12 np0005593232 kernel: tap4a04244c-32: entered promiscuous mode
Jan 23 05:15:12 np0005593232 NetworkManager[49057]: <info>  [1769163312.0415] manager: (tap4a04244c-32): new Tun device (/org/freedesktop/NetworkManager/Devices/250)
Jan 23 05:15:12 np0005593232 ovn_controller[151001]: 2026-01-23T10:15:12Z|00517|binding|INFO|Claiming lport 4a04244c-3270-4ff3-ad30-52e80e7db513 for this chassis.
Jan 23 05:15:12 np0005593232 ovn_controller[151001]: 2026-01-23T10:15:12Z|00518|binding|INFO|4a04244c-3270-4ff3-ad30-52e80e7db513: Claiming fa:16:3e:c0:68:91 10.100.0.3
Jan 23 05:15:12 np0005593232 nova_compute[250269]: 2026-01-23 10:15:12.049 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:12 np0005593232 ovn_controller[151001]: 2026-01-23T10:15:12Z|00519|binding|INFO|Setting lport 4a04244c-3270-4ff3-ad30-52e80e7db513 ovn-installed in OVS
Jan 23 05:15:12 np0005593232 nova_compute[250269]: 2026-01-23 10:15:12.070 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:12 np0005593232 systemd-udevd[342883]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:15:12 np0005593232 nova_compute[250269]: 2026-01-23 10:15:12.078 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:12 np0005593232 NetworkManager[49057]: <info>  [1769163312.0827] device (tap4a04244c-32): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:15:12 np0005593232 NetworkManager[49057]: <info>  [1769163312.0839] device (tap4a04244c-32): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:15:12 np0005593232 ovn_controller[151001]: 2026-01-23T10:15:12Z|00520|binding|INFO|Setting lport 4a04244c-3270-4ff3-ad30-52e80e7db513 up in Southbound
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.088 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:68:91 10.100.0.3'], port_security=['fa:16:3e:c0:68:91 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ccd07f55-529f-4dbb-989c-2cdbdd393a0b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f98d79de-4a23-4f29-9848-c5d4c5683a5d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ae621f21a8e438fb95152309b38cee5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3b0a0b41-45a8-4582-a4d2-a9aff1f1a18c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5888498-07d6-4c96-95ee-546974eebd82, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=4a04244c-3270-4ff3-ad30-52e80e7db513) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.090 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 4a04244c-3270-4ff3-ad30-52e80e7db513 in datapath f98d79de-4a23-4f29-9848-c5d4c5683a5d bound to our chassis#033[00m
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.092 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f98d79de-4a23-4f29-9848-c5d4c5683a5d#033[00m
Jan 23 05:15:12 np0005593232 systemd-machined[215836]: New machine qemu-62-instance-0000008e.
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.105 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[961f682b-3bcc-4bcb-85ec-2165863fdf39]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.106 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf98d79de-41 in ovnmeta-f98d79de-4a23-4f29-9848-c5d4c5683a5d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:15:12 np0005593232 systemd[1]: Started Virtual Machine qemu-62-instance-0000008e.
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.111 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf98d79de-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.111 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[bfad1e28-babe-4870-aa74-5d6d029615bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.113 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[14c0bb54-c43a-4c37-953d-10b2425f2ea5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.126 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[8af845f2-e191-4594-b152-49d12ad1036b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.141 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[dba57dfb-6117-46a4-bf11-5e9c16f1702b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.176 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[531cb4d8-cd28-41e0-94b0-fda9f692b149]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:12 np0005593232 NetworkManager[49057]: <info>  [1769163312.1827] manager: (tapf98d79de-40): new Veth device (/org/freedesktop/NetworkManager/Devices/251)
Jan 23 05:15:12 np0005593232 systemd-udevd[342888]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.182 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a912cc06-0ed7-401e-b3d4-4e89271f88f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.219 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[447fad3a-8597-48e7-9459-3c70e90d6bf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.222 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[f037bc4c-61cc-4d30-924e-b9217a8f863e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:12 np0005593232 NetworkManager[49057]: <info>  [1769163312.2461] device (tapf98d79de-40): carrier: link connected
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.254 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[995c64e5-4d11-4faf-841e-c808e1eb17db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.269 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[128ecf2d-f8a4-4f9e-bebe-fd003699d3d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf98d79de-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:3d:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722538, 'reachable_time': 19732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342922, 'error': None, 'target': 'ovnmeta-f98d79de-4a23-4f29-9848-c5d4c5683a5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.284 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[79429d03-d498-460d-866a-385a86473567]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe79:3d5f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 722538, 'tstamp': 722538}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342923, 'error': None, 'target': 'ovnmeta-f98d79de-4a23-4f29-9848-c5d4c5683a5d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.302 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f56e3d79-f88c-4b4e-bf4a-752be6fca90e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf98d79de-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:79:3d:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 157], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722538, 'reachable_time': 19732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 342924, 'error': None, 'target': 'ovnmeta-f98d79de-4a23-4f29-9848-c5d4c5683a5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.331 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b2eef459-31b5-47d9-a14d-e598fc3ab46e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:12 np0005593232 nova_compute[250269]: 2026-01-23 10:15:12.348 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:12.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.383 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[51491f34-6e83-4f5f-970d-f8689adbf6a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.385 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf98d79de-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.385 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.385 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf98d79de-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:15:12 np0005593232 kernel: tapf98d79de-40: entered promiscuous mode
Jan 23 05:15:12 np0005593232 nova_compute[250269]: 2026-01-23 10:15:12.387 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:12 np0005593232 NetworkManager[49057]: <info>  [1769163312.3878] manager: (tapf98d79de-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/252)
Jan 23 05:15:12 np0005593232 nova_compute[250269]: 2026-01-23 10:15:12.392 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.393 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf98d79de-40, col_values=(('external_ids', {'iface-id': '2c16e447-27d9-4516-bf23-ec948f375c10'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:15:12 np0005593232 ovn_controller[151001]: 2026-01-23T10:15:12Z|00521|binding|INFO|Releasing lport 2c16e447-27d9-4516-bf23-ec948f375c10 from this chassis (sb_readonly=0)
Jan 23 05:15:12 np0005593232 nova_compute[250269]: 2026-01-23 10:15:12.395 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:12 np0005593232 nova_compute[250269]: 2026-01-23 10:15:12.415 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:12 np0005593232 nova_compute[250269]: 2026-01-23 10:15:12.421 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.422 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f98d79de-4a23-4f29-9848-c5d4c5683a5d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f98d79de-4a23-4f29-9848-c5d4c5683a5d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.423 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[93dabb49-cb7c-45cb-9f5a-5fcf9017a1b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.424 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-f98d79de-4a23-4f29-9848-c5d4c5683a5d
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/f98d79de-4a23-4f29-9848-c5d4c5683a5d.pid.haproxy
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID f98d79de-4a23-4f29-9848-c5d4c5683a5d
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:15:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:12.425 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f98d79de-4a23-4f29-9848-c5d4c5683a5d', 'env', 'PROCESS_TAG=haproxy-f98d79de-4a23-4f29-9848-c5d4c5683a5d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f98d79de-4a23-4f29-9848-c5d4c5683a5d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:15:12 np0005593232 nova_compute[250269]: 2026-01-23 10:15:12.700 250273 INFO nova.virt.libvirt.driver [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Creating config drive at /var/lib/nova/instances/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237/disk.config#033[00m
Jan 23 05:15:12 np0005593232 nova_compute[250269]: 2026-01-23 10:15:12.710 250273 DEBUG oslo_concurrency.processutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjn8y3ikd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:15:12 np0005593232 nova_compute[250269]: 2026-01-23 10:15:12.856 250273 DEBUG oslo_concurrency.processutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjn8y3ikd" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:15:12 np0005593232 podman[342993]: 2026-01-23 10:15:12.876305519 +0000 UTC m=+0.120089483 container create 32da8065e7e6d658a98dc611a644f50f8974092becf5d5ce509e2c3289babaf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f98d79de-4a23-4f29-9848-c5d4c5683a5d, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 23 05:15:12 np0005593232 podman[342993]: 2026-01-23 10:15:12.782113853 +0000 UTC m=+0.025897857 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:15:12 np0005593232 nova_compute[250269]: 2026-01-23 10:15:12.891 250273 DEBUG nova.storage.rbd_utils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image c8c9cb1d-4faa-4945-afcc-f67ccc4d4237_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:15:12 np0005593232 nova_compute[250269]: 2026-01-23 10:15:12.897 250273 DEBUG oslo_concurrency.processutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237/disk.config c8c9cb1d-4faa-4945-afcc-f67ccc4d4237_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:15:12 np0005593232 systemd[1]: Started libpod-conmon-32da8065e7e6d658a98dc611a644f50f8974092becf5d5ce509e2c3289babaf1.scope.
Jan 23 05:15:12 np0005593232 nova_compute[250269]: 2026-01-23 10:15:12.933 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163312.9140975, ccd07f55-529f-4dbb-989c-2cdbdd393a0b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:15:12 np0005593232 nova_compute[250269]: 2026-01-23 10:15:12.934 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] VM Started (Lifecycle Event)#033[00m
Jan 23 05:15:12 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:15:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e49c1afd44b824360f52b7e0ebbe39e0de112feb2b10c678b305fe509c52e75/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:15:12 np0005593232 podman[342993]: 2026-01-23 10:15:12.971229177 +0000 UTC m=+0.215013161 container init 32da8065e7e6d658a98dc611a644f50f8974092becf5d5ce509e2c3289babaf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f98d79de-4a23-4f29-9848-c5d4c5683a5d, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 23 05:15:12 np0005593232 podman[342993]: 2026-01-23 10:15:12.976074984 +0000 UTC m=+0.219858958 container start 32da8065e7e6d658a98dc611a644f50f8974092becf5d5ce509e2c3289babaf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f98d79de-4a23-4f29-9848-c5d4c5683a5d, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 05:15:12 np0005593232 neutron-haproxy-ovnmeta-f98d79de-4a23-4f29-9848-c5d4c5683a5d[343041]: [NOTICE]   (343045) : New worker (343062) forked
Jan 23 05:15:12 np0005593232 neutron-haproxy-ovnmeta-f98d79de-4a23-4f29-9848-c5d4c5683a5d[343041]: [NOTICE]   (343045) : Loading success.
Jan 23 05:15:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:15:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:13.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:15:13 np0005593232 nova_compute[250269]: 2026-01-23 10:15:13.083 250273 DEBUG oslo_concurrency.processutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237/disk.config c8c9cb1d-4faa-4945-afcc-f67ccc4d4237_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:15:13 np0005593232 nova_compute[250269]: 2026-01-23 10:15:13.084 250273 INFO nova.virt.libvirt.driver [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Deleting local config drive /var/lib/nova/instances/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237/disk.config because it was imported into RBD.#033[00m
Jan 23 05:15:13 np0005593232 kernel: tape76c3794-4b: entered promiscuous mode
Jan 23 05:15:13 np0005593232 NetworkManager[49057]: <info>  [1769163313.1396] manager: (tape76c3794-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/253)
Jan 23 05:15:13 np0005593232 systemd-udevd[342915]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:15:13 np0005593232 NetworkManager[49057]: <info>  [1769163313.1566] device (tape76c3794-4b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:15:13 np0005593232 NetworkManager[49057]: <info>  [1769163313.1572] device (tape76c3794-4b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:15:13 np0005593232 nova_compute[250269]: 2026-01-23 10:15:13.190 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:13 np0005593232 ovn_controller[151001]: 2026-01-23T10:15:13Z|00522|binding|INFO|Claiming lport e76c3794-4bfb-450d-901b-d5c2ecccb574 for this chassis.
Jan 23 05:15:13 np0005593232 ovn_controller[151001]: 2026-01-23T10:15:13Z|00523|binding|INFO|e76c3794-4bfb-450d-901b-d5c2ecccb574: Claiming fa:16:3e:15:fb:62 10.100.0.8
Jan 23 05:15:13 np0005593232 ovn_controller[151001]: 2026-01-23T10:15:13Z|00524|binding|INFO|Setting lport e76c3794-4bfb-450d-901b-d5c2ecccb574 ovn-installed in OVS
Jan 23 05:15:13 np0005593232 nova_compute[250269]: 2026-01-23 10:15:13.209 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:13 np0005593232 systemd-machined[215836]: New machine qemu-63-instance-0000008f.
Jan 23 05:15:13 np0005593232 systemd[1]: Started Virtual Machine qemu-63-instance-0000008f.
Jan 23 05:15:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2568: 321 pgs: 321 active+clean; 648 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.6 MiB/s wr, 231 op/s
Jan 23 05:15:13 np0005593232 nova_compute[250269]: 2026-01-23 10:15:13.446 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:15:13 np0005593232 nova_compute[250269]: 2026-01-23 10:15:13.452 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163312.9144182, ccd07f55-529f-4dbb-989c-2cdbdd393a0b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:15:13 np0005593232 nova_compute[250269]: 2026-01-23 10:15:13.452 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:15:14 np0005593232 ovn_controller[151001]: 2026-01-23T10:15:14Z|00525|binding|INFO|Setting lport e76c3794-4bfb-450d-901b-d5c2ecccb574 up in Southbound
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.184 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:fb:62 10.100.0.8'], port_security=['fa:16:3e:15:fb:62 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c8c9cb1d-4faa-4945-afcc-f67ccc4d4237', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-06fc7c06-b10a-4dd5-9475-e6feb221ea4a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98c94577fcdb4c3d893898ede79ea2d4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '28dbde1b-f0be-40ce-aec9-09c9d592e51c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a78bd9d7-7834-4168-80e9-f5bf32da7504, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=e76c3794-4bfb-450d-901b-d5c2ecccb574) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.186 161902 INFO neutron.agent.ovn.metadata.agent [-] Port e76c3794-4bfb-450d-901b-d5c2ecccb574 in datapath 06fc7c06-b10a-4dd5-9475-e6feb221ea4a bound to our chassis#033[00m
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.188 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 06fc7c06-b10a-4dd5-9475-e6feb221ea4a#033[00m
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.199 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[89fc7641-312e-401c-9fc6-8d80ca2b096d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.200 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap06fc7c06-b1 in ovnmeta-06fc7c06-b10a-4dd5-9475-e6feb221ea4a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.203 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap06fc7c06-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.203 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[eccb935c-dac8-4169-97aa-afb4618e2f3d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.204 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9b126949-b45b-4d07-b7d1-39e77eafaedf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.215 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[6bb33f17-524e-46b4-90d5-038336943c2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.230 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ef60c435-f1ae-4d5f-8a27-e12ed7a104df]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.258 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[3501b784-90f7-4cb7-ba79-dd708eadf96c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.262 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b59ec074-f508-4cc9-be81-7924dfd65561]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:14 np0005593232 NetworkManager[49057]: <info>  [1769163314.2638] manager: (tap06fc7c06-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/254)
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.308 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f57657-aca8-41b6-8dd6-a0031c005fad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.312 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[b3fac041-3262-4210-b57b-6dfc4169e045]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:14 np0005593232 NetworkManager[49057]: <info>  [1769163314.3396] device (tap06fc7c06-b0): carrier: link connected
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.345 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[fbf9f05d-e5af-4ff0-857b-8440854059f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:14.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.367 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0c27d2fb-d0c7-4568-b570-64e0f8c7f83f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap06fc7c06-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:a9:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722748, 'reachable_time': 30220, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343147, 'error': None, 'target': 'ovnmeta-06fc7c06-b10a-4dd5-9475-e6feb221ea4a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.386 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[24f1741e-270b-4746-80cd-d63975c670a0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6b:a966'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 722748, 'tstamp': 722748}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343148, 'error': None, 'target': 'ovnmeta-06fc7c06-b10a-4dd5-9475-e6feb221ea4a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.403 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3b42e83e-3a90-48fa-898f-be48aee2cd95]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap06fc7c06-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6b:a9:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 159], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722748, 'reachable_time': 30220, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 343149, 'error': None, 'target': 'ovnmeta-06fc7c06-b10a-4dd5-9475-e6feb221ea4a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.433 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ae780b-8601-4692-a3ba-8b69018e149b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.495 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[265dccb6-ec9e-42f3-ac84-11c1ed1eaab1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.497 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06fc7c06-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.497 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.498 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap06fc7c06-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:15:14 np0005593232 NetworkManager[49057]: <info>  [1769163314.5014] manager: (tap06fc7c06-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/255)
Jan 23 05:15:14 np0005593232 nova_compute[250269]: 2026-01-23 10:15:14.501 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:14 np0005593232 kernel: tap06fc7c06-b0: entered promiscuous mode
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.503 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap06fc7c06-b0, col_values=(('external_ids', {'iface-id': 'e77f48c3-bdcc-4620-bc5b-550b6d3814da'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:15:14 np0005593232 nova_compute[250269]: 2026-01-23 10:15:14.504 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:14 np0005593232 ovn_controller[151001]: 2026-01-23T10:15:14Z|00526|binding|INFO|Releasing lport e77f48c3-bdcc-4620-bc5b-550b6d3814da from this chassis (sb_readonly=1)
Jan 23 05:15:14 np0005593232 nova_compute[250269]: 2026-01-23 10:15:14.527 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.528 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/06fc7c06-b10a-4dd5-9475-e6feb221ea4a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/06fc7c06-b10a-4dd5-9475-e6feb221ea4a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.529 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[bf3cc7b4-89a9-4791-b96d-13326b84ebac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.529 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-06fc7c06-b10a-4dd5-9475-e6feb221ea4a
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/06fc7c06-b10a-4dd5-9475-e6feb221ea4a.pid.haproxy
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 06fc7c06-b10a-4dd5-9475-e6feb221ea4a
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:15:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:14.530 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-06fc7c06-b10a-4dd5-9475-e6feb221ea4a', 'env', 'PROCESS_TAG=haproxy-06fc7c06-b10a-4dd5-9475-e6feb221ea4a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/06fc7c06-b10a-4dd5-9475-e6feb221ea4a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:15:14 np0005593232 nova_compute[250269]: 2026-01-23 10:15:14.783 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:14 np0005593232 podman[343180]: 2026-01-23 10:15:14.874647601 +0000 UTC m=+0.047903342 container create 7ee737873017556494dd362bfc2bd33acbbfc53078adaa6280e783b322d5a128 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-06fc7c06-b10a-4dd5-9475-e6feb221ea4a, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 23 05:15:14 np0005593232 systemd[1]: Started libpod-conmon-7ee737873017556494dd362bfc2bd33acbbfc53078adaa6280e783b322d5a128.scope.
Jan 23 05:15:14 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:15:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a8951aea5dbed6b7fca36349cd9427ea1eb457028efc76354c6c84c62d4c3ff/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:15:14 np0005593232 podman[343180]: 2026-01-23 10:15:14.847735796 +0000 UTC m=+0.020991567 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:15:14 np0005593232 podman[343180]: 2026-01-23 10:15:14.95311108 +0000 UTC m=+0.126366831 container init 7ee737873017556494dd362bfc2bd33acbbfc53078adaa6280e783b322d5a128 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-06fc7c06-b10a-4dd5-9475-e6feb221ea4a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 23 05:15:14 np0005593232 podman[343180]: 2026-01-23 10:15:14.958992997 +0000 UTC m=+0.132248738 container start 7ee737873017556494dd362bfc2bd33acbbfc53078adaa6280e783b322d5a128 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-06fc7c06-b10a-4dd5-9475-e6feb221ea4a, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 23 05:15:14 np0005593232 neutron-haproxy-ovnmeta-06fc7c06-b10a-4dd5-9475-e6feb221ea4a[343195]: [NOTICE]   (343199) : New worker (343201) forked
Jan 23 05:15:14 np0005593232 neutron-haproxy-ovnmeta-06fc7c06-b10a-4dd5-9475-e6feb221ea4a[343195]: [NOTICE]   (343199) : Loading success.
Jan 23 05:15:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:15.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2569: 321 pgs: 321 active+clean; 648 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 212 op/s
Jan 23 05:15:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:15:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:16.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:15:16 np0005593232 nova_compute[250269]: 2026-01-23 10:15:16.467 250273 DEBUG nova.compute.manager [req-fd8983e6-579a-45d6-8a3c-835c37972bb3 req-59f7d9e9-5445-4a16-955c-2c5f907cc72f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Received event network-vif-plugged-5004fad4-5788-4709-9c83-b5fe075c0aa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:15:16 np0005593232 nova_compute[250269]: 2026-01-23 10:15:16.468 250273 DEBUG oslo_concurrency.lockutils [req-fd8983e6-579a-45d6-8a3c-835c37972bb3 req-59f7d9e9-5445-4a16-955c-2c5f907cc72f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "33559028-00d9-4918-9015-26172db3d00c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:15:16 np0005593232 nova_compute[250269]: 2026-01-23 10:15:16.468 250273 DEBUG oslo_concurrency.lockutils [req-fd8983e6-579a-45d6-8a3c-835c37972bb3 req-59f7d9e9-5445-4a16-955c-2c5f907cc72f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "33559028-00d9-4918-9015-26172db3d00c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:15:16 np0005593232 nova_compute[250269]: 2026-01-23 10:15:16.468 250273 DEBUG oslo_concurrency.lockutils [req-fd8983e6-579a-45d6-8a3c-835c37972bb3 req-59f7d9e9-5445-4a16-955c-2c5f907cc72f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "33559028-00d9-4918-9015-26172db3d00c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:15:16 np0005593232 nova_compute[250269]: 2026-01-23 10:15:16.469 250273 DEBUG nova.compute.manager [req-fd8983e6-579a-45d6-8a3c-835c37972bb3 req-59f7d9e9-5445-4a16-955c-2c5f907cc72f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] No waiting events found dispatching network-vif-plugged-5004fad4-5788-4709-9c83-b5fe075c0aa7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:15:16 np0005593232 nova_compute[250269]: 2026-01-23 10:15:16.469 250273 WARNING nova.compute.manager [req-fd8983e6-579a-45d6-8a3c-835c37972bb3 req-59f7d9e9-5445-4a16-955c-2c5f907cc72f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Received unexpected event network-vif-plugged-5004fad4-5788-4709-9c83-b5fe075c0aa7 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:15:16 np0005593232 nova_compute[250269]: 2026-01-23 10:15:16.504 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:15:16 np0005593232 nova_compute[250269]: 2026-01-23 10:15:16.508 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:15:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:15:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:17.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:17 np0005593232 nova_compute[250269]: 2026-01-23 10:15:17.380 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2570: 321 pgs: 321 active+clean; 648 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 215 op/s
Jan 23 05:15:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:15:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:18.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:15:18 np0005593232 nova_compute[250269]: 2026-01-23 10:15:18.826 250273 DEBUG nova.compute.manager [req-20fb847d-3eec-44b4-8503-bba64d3790a0 req-6379b4b4-0b9c-4cb2-980a-d98542ed783a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Received event network-vif-plugged-4a04244c-3270-4ff3-ad30-52e80e7db513 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:15:18 np0005593232 nova_compute[250269]: 2026-01-23 10:15:18.827 250273 DEBUG oslo_concurrency.lockutils [req-20fb847d-3eec-44b4-8503-bba64d3790a0 req-6379b4b4-0b9c-4cb2-980a-d98542ed783a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:15:18 np0005593232 nova_compute[250269]: 2026-01-23 10:15:18.828 250273 DEBUG oslo_concurrency.lockutils [req-20fb847d-3eec-44b4-8503-bba64d3790a0 req-6379b4b4-0b9c-4cb2-980a-d98542ed783a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:15:18 np0005593232 nova_compute[250269]: 2026-01-23 10:15:18.829 250273 DEBUG oslo_concurrency.lockutils [req-20fb847d-3eec-44b4-8503-bba64d3790a0 req-6379b4b4-0b9c-4cb2-980a-d98542ed783a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:15:18 np0005593232 nova_compute[250269]: 2026-01-23 10:15:18.829 250273 DEBUG nova.compute.manager [req-20fb847d-3eec-44b4-8503-bba64d3790a0 req-6379b4b4-0b9c-4cb2-980a-d98542ed783a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Processing event network-vif-plugged-4a04244c-3270-4ff3-ad30-52e80e7db513 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:15:18 np0005593232 nova_compute[250269]: 2026-01-23 10:15:18.830 250273 DEBUG nova.compute.manager [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:15:18 np0005593232 nova_compute[250269]: 2026-01-23 10:15:18.837 250273 DEBUG nova.virt.libvirt.driver [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:15:18 np0005593232 nova_compute[250269]: 2026-01-23 10:15:18.842 250273 INFO nova.virt.libvirt.driver [-] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Instance spawned successfully.#033[00m
Jan 23 05:15:18 np0005593232 nova_compute[250269]: 2026-01-23 10:15:18.843 250273 DEBUG nova.virt.libvirt.driver [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:15:18 np0005593232 nova_compute[250269]: 2026-01-23 10:15:18.930 250273 DEBUG nova.virt.libvirt.driver [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:15:18 np0005593232 nova_compute[250269]: 2026-01-23 10:15:18.930 250273 DEBUG nova.virt.libvirt.driver [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:15:18 np0005593232 nova_compute[250269]: 2026-01-23 10:15:18.930 250273 DEBUG nova.virt.libvirt.driver [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:15:18 np0005593232 nova_compute[250269]: 2026-01-23 10:15:18.931 250273 DEBUG nova.virt.libvirt.driver [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:15:18 np0005593232 nova_compute[250269]: 2026-01-23 10:15:18.931 250273 DEBUG nova.virt.libvirt.driver [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:15:18 np0005593232 nova_compute[250269]: 2026-01-23 10:15:18.931 250273 DEBUG nova.virt.libvirt.driver [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:15:18 np0005593232 nova_compute[250269]: 2026-01-23 10:15:18.935 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:15:18 np0005593232 nova_compute[250269]: 2026-01-23 10:15:18.936 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163313.655828, c8c9cb1d-4faa-4945-afcc-f67ccc4d4237 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:15:18 np0005593232 nova_compute[250269]: 2026-01-23 10:15:18.936 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] VM Started (Lifecycle Event)#033[00m
Jan 23 05:15:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:19.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2571: 321 pgs: 321 active+clean; 653 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.7 MiB/s wr, 161 op/s
Jan 23 05:15:19 np0005593232 nova_compute[250269]: 2026-01-23 10:15:19.785 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:20.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:21.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2572: 321 pgs: 321 active+clean; 653 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.9 MiB/s wr, 125 op/s
Jan 23 05:15:21 np0005593232 nova_compute[250269]: 2026-01-23 10:15:21.779 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:15:21 np0005593232 nova_compute[250269]: 2026-01-23 10:15:21.784 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163313.6559443, c8c9cb1d-4faa-4945-afcc-f67ccc4d4237 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 05:15:21 np0005593232 nova_compute[250269]: 2026-01-23 10:15:21.784 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] VM Paused (Lifecycle Event)
Jan 23 05:15:21 np0005593232 nova_compute[250269]: 2026-01-23 10:15:21.822 250273 DEBUG nova.network.neutron [req-72bfe28c-5266-4a4c-ab20-51a774621201 req-211a4928-a7f6-4ed9-8eea-78f9ac25735a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Updated VIF entry in instance network info cache for port 4a04244c-3270-4ff3-ad30-52e80e7db513. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 05:15:21 np0005593232 nova_compute[250269]: 2026-01-23 10:15:21.822 250273 DEBUG nova.network.neutron [req-72bfe28c-5266-4a4c-ab20-51a774621201 req-211a4928-a7f6-4ed9-8eea-78f9ac25735a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Updating instance_info_cache with network_info: [{"id": "4a04244c-3270-4ff3-ad30-52e80e7db513", "address": "fa:16:3e:c0:68:91", "network": {"id": "f98d79de-4a23-4f29-9848-c5d4c5683a5d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1507431135-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ae621f21a8e438fb95152309b38cee5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a04244c-32", "ovs_interfaceid": "4a04244c-3270-4ff3-ad30-52e80e7db513", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 05:15:21 np0005593232 nova_compute[250269]: 2026-01-23 10:15:21.885 250273 DEBUG nova.network.neutron [req-5b2427d3-c5bc-4677-a563-c2f97842aec5 req-f84fcaad-0bae-4014-82bb-64d41ecba40b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Updated VIF entry in instance network info cache for port e76c3794-4bfb-450d-901b-d5c2ecccb574. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 05:15:21 np0005593232 nova_compute[250269]: 2026-01-23 10:15:21.886 250273 DEBUG nova.network.neutron [req-5b2427d3-c5bc-4677-a563-c2f97842aec5 req-f84fcaad-0bae-4014-82bb-64d41ecba40b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Updating instance_info_cache with network_info: [{"id": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "address": "fa:16:3e:15:fb:62", "network": {"id": "06fc7c06-b10a-4dd5-9475-e6feb221ea4a", "bridge": "br-int", "label": "tempest-network-smoke--1334269906", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76c3794-4b", "ovs_interfaceid": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 05:15:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:15:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:22.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:22 np0005593232 nova_compute[250269]: 2026-01-23 10:15:22.416 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:15:22 np0005593232 nova_compute[250269]: 2026-01-23 10:15:22.590 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769163307.589936, 33559028-00d9-4918-9015-26172db3d00c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 05:15:22 np0005593232 nova_compute[250269]: 2026-01-23 10:15:22.591 250273 INFO nova.compute.manager [-] [instance: 33559028-00d9-4918-9015-26172db3d00c] VM Stopped (Lifecycle Event)
Jan 23 05:15:22 np0005593232 nova_compute[250269]: 2026-01-23 10:15:22.753 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:15:22 np0005593232 nova_compute[250269]: 2026-01-23 10:15:22.761 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 05:15:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:23.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2573: 321 pgs: 321 active+clean; 679 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 3.4 MiB/s wr, 227 op/s
Jan 23 05:15:23 np0005593232 nova_compute[250269]: 2026-01-23 10:15:23.811 250273 INFO nova.compute.manager [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Took 27.23 seconds to spawn the instance on the hypervisor.
Jan 23 05:15:23 np0005593232 nova_compute[250269]: 2026-01-23 10:15:23.812 250273 DEBUG nova.compute.manager [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:15:23 np0005593232 nova_compute[250269]: 2026-01-23 10:15:23.886 250273 DEBUG nova.compute.manager [None req-edc3dec2-b994-4e26-b4c6-12f03620bbf7 - - - - - -] [instance: 33559028-00d9-4918-9015-26172db3d00c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:15:23 np0005593232 nova_compute[250269]: 2026-01-23 10:15:23.892 250273 DEBUG oslo_concurrency.lockutils [req-5b2427d3-c5bc-4677-a563-c2f97842aec5 req-f84fcaad-0bae-4014-82bb-64d41ecba40b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 05:15:23 np0005593232 nova_compute[250269]: 2026-01-23 10:15:23.895 250273 DEBUG oslo_concurrency.lockutils [req-72bfe28c-5266-4a4c-ab20-51a774621201 req-211a4928-a7f6-4ed9-8eea-78f9ac25735a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 05:15:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:24.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:24 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:24.621 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 05:15:24 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:24.622 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 05:15:24 np0005593232 nova_compute[250269]: 2026-01-23 10:15:24.665 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:15:24 np0005593232 nova_compute[250269]: 2026-01-23 10:15:24.686 250273 DEBUG nova.network.neutron [-] [instance: 33559028-00d9-4918-9015-26172db3d00c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 05:15:24 np0005593232 nova_compute[250269]: 2026-01-23 10:15:24.735 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 05:15:24 np0005593232 nova_compute[250269]: 2026-01-23 10:15:24.736 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163318.836184, ccd07f55-529f-4dbb-989c-2cdbdd393a0b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 05:15:24 np0005593232 nova_compute[250269]: 2026-01-23 10:15:24.736 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] VM Resumed (Lifecycle Event)
Jan 23 05:15:24 np0005593232 nova_compute[250269]: 2026-01-23 10:15:24.742 250273 INFO nova.compute.manager [-] [instance: 33559028-00d9-4918-9015-26172db3d00c] Took 16.46 seconds to deallocate network for instance.
Jan 23 05:15:24 np0005593232 nova_compute[250269]: 2026-01-23 10:15:24.786 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:15:24 np0005593232 nova_compute[250269]: 2026-01-23 10:15:24.789 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:15:24 np0005593232 nova_compute[250269]: 2026-01-23 10:15:24.792 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 05:15:24 np0005593232 nova_compute[250269]: 2026-01-23 10:15:24.850 250273 INFO nova.compute.manager [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Took 32.48 seconds to build instance.
Jan 23 05:15:24 np0005593232 nova_compute[250269]: 2026-01-23 10:15:24.883 250273 DEBUG oslo_concurrency.lockutils [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:15:24 np0005593232 nova_compute[250269]: 2026-01-23 10:15:24.884 250273 DEBUG oslo_concurrency.lockutils [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:15:24 np0005593232 nova_compute[250269]: 2026-01-23 10:15:24.886 250273 DEBUG oslo_concurrency.lockutils [None req-6d0531da-0929-4858-9c22-440e98409be9 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 32.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:15:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:25.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:25 np0005593232 nova_compute[250269]: 2026-01-23 10:15:25.143 250273 DEBUG oslo_concurrency.processutils [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:15:25 np0005593232 nova_compute[250269]: 2026-01-23 10:15:25.214 250273 DEBUG nova.compute.manager [req-722a6dfd-8890-4e4b-8318-df7e60a8dbe0 req-9011974d-6dd0-4e0e-805f-da45cef62c84 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 33559028-00d9-4918-9015-26172db3d00c] Received event network-vif-deleted-5004fad4-5788-4709-9c83-b5fe075c0aa7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:15:25 np0005593232 nova_compute[250269]: 2026-01-23 10:15:25.253 250273 DEBUG nova.compute.manager [req-c23cbea5-43dc-4436-ace3-458233a947eb req-5fe01ada-7286-47b3-9e3b-c3b4597090bf 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Received event network-vif-plugged-4a04244c-3270-4ff3-ad30-52e80e7db513 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:15:25 np0005593232 nova_compute[250269]: 2026-01-23 10:15:25.253 250273 DEBUG oslo_concurrency.lockutils [req-c23cbea5-43dc-4436-ace3-458233a947eb req-5fe01ada-7286-47b3-9e3b-c3b4597090bf 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:15:25 np0005593232 nova_compute[250269]: 2026-01-23 10:15:25.254 250273 DEBUG oslo_concurrency.lockutils [req-c23cbea5-43dc-4436-ace3-458233a947eb req-5fe01ada-7286-47b3-9e3b-c3b4597090bf 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:15:25 np0005593232 nova_compute[250269]: 2026-01-23 10:15:25.254 250273 DEBUG oslo_concurrency.lockutils [req-c23cbea5-43dc-4436-ace3-458233a947eb req-5fe01ada-7286-47b3-9e3b-c3b4597090bf 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:15:25 np0005593232 nova_compute[250269]: 2026-01-23 10:15:25.254 250273 DEBUG nova.compute.manager [req-c23cbea5-43dc-4436-ace3-458233a947eb req-5fe01ada-7286-47b3-9e3b-c3b4597090bf 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] No waiting events found dispatching network-vif-plugged-4a04244c-3270-4ff3-ad30-52e80e7db513 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 05:15:25 np0005593232 nova_compute[250269]: 2026-01-23 10:15:25.255 250273 WARNING nova.compute.manager [req-c23cbea5-43dc-4436-ace3-458233a947eb req-5fe01ada-7286-47b3-9e3b-c3b4597090bf 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Received unexpected event network-vif-plugged-4a04244c-3270-4ff3-ad30-52e80e7db513 for instance with vm_state active and task_state None.
Jan 23 05:15:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2574: 321 pgs: 321 active+clean; 681 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 148 op/s
Jan 23 05:15:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:15:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3764370651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:15:25 np0005593232 nova_compute[250269]: 2026-01-23 10:15:25.590 250273 DEBUG oslo_concurrency.processutils [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:15:25 np0005593232 nova_compute[250269]: 2026-01-23 10:15:25.597 250273 DEBUG nova.compute.provider_tree [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 05:15:25 np0005593232 nova_compute[250269]: 2026-01-23 10:15:25.621 250273 DEBUG nova.scheduler.client.report [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 05:15:25 np0005593232 nova_compute[250269]: 2026-01-23 10:15:25.661 250273 DEBUG oslo_concurrency.lockutils [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:15:25 np0005593232 nova_compute[250269]: 2026-01-23 10:15:25.697 250273 INFO nova.scheduler.client.report [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Deleted allocations for instance 33559028-00d9-4918-9015-26172db3d00c
Jan 23 05:15:25 np0005593232 nova_compute[250269]: 2026-01-23 10:15:25.826 250273 DEBUG oslo_concurrency.lockutils [None req-95d4bce3-e80a-4ff9-aca9-ce898b696d86 aca3cab576d641d3b89e7dddf155d467 9dd869ce76e44fc8a82b8bbee1654d33 - - default default] Lock "33559028-00d9-4918-9015-26172db3d00c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 18.476s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:15:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:15:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:26.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:15:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:15:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:27.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2575: 321 pgs: 321 active+clean; 681 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 147 op/s
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.459 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.484 250273 DEBUG nova.compute.manager [req-0720c0e3-7edb-4445-a0cc-60f0591b0c21 req-3ee7c44d-08b9-4a44-9baf-480e83e0b0fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Received event network-vif-plugged-e76c3794-4bfb-450d-901b-d5c2ecccb574 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.485 250273 DEBUG oslo_concurrency.lockutils [req-0720c0e3-7edb-4445-a0cc-60f0591b0c21 req-3ee7c44d-08b9-4a44-9baf-480e83e0b0fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.485 250273 DEBUG oslo_concurrency.lockutils [req-0720c0e3-7edb-4445-a0cc-60f0591b0c21 req-3ee7c44d-08b9-4a44-9baf-480e83e0b0fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.485 250273 DEBUG oslo_concurrency.lockutils [req-0720c0e3-7edb-4445-a0cc-60f0591b0c21 req-3ee7c44d-08b9-4a44-9baf-480e83e0b0fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.485 250273 DEBUG nova.compute.manager [req-0720c0e3-7edb-4445-a0cc-60f0591b0c21 req-3ee7c44d-08b9-4a44-9baf-480e83e0b0fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Processing event network-vif-plugged-e76c3794-4bfb-450d-901b-d5c2ecccb574 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.485 250273 DEBUG nova.compute.manager [req-0720c0e3-7edb-4445-a0cc-60f0591b0c21 req-3ee7c44d-08b9-4a44-9baf-480e83e0b0fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Received event network-vif-plugged-e76c3794-4bfb-450d-901b-d5c2ecccb574 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.486 250273 DEBUG oslo_concurrency.lockutils [req-0720c0e3-7edb-4445-a0cc-60f0591b0c21 req-3ee7c44d-08b9-4a44-9baf-480e83e0b0fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.486 250273 DEBUG oslo_concurrency.lockutils [req-0720c0e3-7edb-4445-a0cc-60f0591b0c21 req-3ee7c44d-08b9-4a44-9baf-480e83e0b0fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.486 250273 DEBUG oslo_concurrency.lockutils [req-0720c0e3-7edb-4445-a0cc-60f0591b0c21 req-3ee7c44d-08b9-4a44-9baf-480e83e0b0fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.486 250273 DEBUG nova.compute.manager [req-0720c0e3-7edb-4445-a0cc-60f0591b0c21 req-3ee7c44d-08b9-4a44-9baf-480e83e0b0fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] No waiting events found dispatching network-vif-plugged-e76c3794-4bfb-450d-901b-d5c2ecccb574 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.486 250273 WARNING nova.compute.manager [req-0720c0e3-7edb-4445-a0cc-60f0591b0c21 req-3ee7c44d-08b9-4a44-9baf-480e83e0b0fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Received unexpected event network-vif-plugged-e76c3794-4bfb-450d-901b-d5c2ecccb574 for instance with vm_state building and task_state spawning.
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.487 250273 DEBUG nova.compute.manager [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Instance event wait completed in 13 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.491 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163327.4909692, c8c9cb1d-4faa-4945-afcc-f67ccc4d4237 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.491 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] VM Resumed (Lifecycle Event)
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.493 250273 DEBUG nova.virt.libvirt.driver [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.496 250273 INFO nova.virt.libvirt.driver [-] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Instance spawned successfully.
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.496 250273 DEBUG nova.virt.libvirt.driver [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.522 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.533 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.539 250273 DEBUG nova.virt.libvirt.driver [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.539 250273 DEBUG nova.virt.libvirt.driver [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.540 250273 DEBUG nova.virt.libvirt.driver [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.540 250273 DEBUG nova.virt.libvirt.driver [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.541 250273 DEBUG nova.virt.libvirt.driver [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.542 250273 DEBUG nova.virt.libvirt.driver [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.579 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.623 250273 INFO nova.compute.manager [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Took 33.03 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.624 250273 DEBUG nova.compute.manager [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.719 250273 INFO nova.compute.manager [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Took 34.70 seconds to build instance.#033[00m
Jan 23 05:15:27 np0005593232 nova_compute[250269]: 2026-01-23 10:15:27.759 250273 DEBUG oslo_concurrency.lockutils [None req-f51c5265-f76b-4de2-b2f7-5235801a973b 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 34.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:15:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:28.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:28 np0005593232 podman[343240]: 2026-01-23 10:15:28.447171965 +0000 UTC m=+0.087469757 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:15:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:15:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:29.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:15:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2576: 321 pgs: 321 active+clean; 639 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 2.2 MiB/s wr, 165 op/s
Jan 23 05:15:29 np0005593232 nova_compute[250269]: 2026-01-23 10:15:29.806 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:15:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:30.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:15:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:31.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2577: 321 pgs: 321 active+clean; 639 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.6 MiB/s wr, 156 op/s
Jan 23 05:15:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:15:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:15:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:32.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:15:32 np0005593232 nova_compute[250269]: 2026-01-23 10:15:32.499 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:33.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2578: 321 pgs: 321 active+clean; 643 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 4.5 MiB/s wr, 286 op/s
Jan 23 05:15:33 np0005593232 podman[343318]: 2026-01-23 10:15:33.452049016 +0000 UTC m=+0.092614123 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 05:15:33 np0005593232 ovn_controller[151001]: 2026-01-23T10:15:33Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c0:68:91 10.100.0.3
Jan 23 05:15:33 np0005593232 ovn_controller[151001]: 2026-01-23T10:15:33Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c0:68:91 10.100.0.3
Jan 23 05:15:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:34.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:34 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:34.625 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:15:34 np0005593232 nova_compute[250269]: 2026-01-23 10:15:34.807 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:35.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2579: 321 pgs: 321 active+clean; 672 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 198 op/s
Jan 23 05:15:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:36.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:15:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:15:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:37.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:15:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:15:37
Jan 23 05:15:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:15:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:15:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'images', 'cephfs.cephfs.meta', 'vms', '.mgr', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control']
Jan 23 05:15:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:15:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2580: 321 pgs: 321 active+clean; 681 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 3.9 MiB/s wr, 252 op/s
Jan 23 05:15:37 np0005593232 nova_compute[250269]: 2026-01-23 10:15:37.566 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:15:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:15:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:15:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:15:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:15:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:15:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:15:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:15:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:15:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:15:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:15:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:15:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:15:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:15:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:15:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:15:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:38.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:38 np0005593232 nova_compute[250269]: 2026-01-23 10:15:38.526 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:15:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.010000285s ======
Jan 23 05:15:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:39.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.010000285s
Jan 23 05:15:39 np0005593232 nova_compute[250269]: 2026-01-23 10:15:39.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:15:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2581: 321 pgs: 321 active+clean; 681 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 263 op/s
Jan 23 05:15:39 np0005593232 nova_compute[250269]: 2026-01-23 10:15:39.863 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:40 np0005593232 nova_compute[250269]: 2026-01-23 10:15:40.206 250273 DEBUG nova.compute.manager [req-3e55477d-ab00-46d7-aa73-1639c9e897fc req-c84152f5-fc4b-4672-9a62-342a4c526269 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Received event network-changed-e76c3794-4bfb-450d-901b-d5c2ecccb574 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:15:40 np0005593232 nova_compute[250269]: 2026-01-23 10:15:40.206 250273 DEBUG nova.compute.manager [req-3e55477d-ab00-46d7-aa73-1639c9e897fc req-c84152f5-fc4b-4672-9a62-342a4c526269 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Refreshing instance network info cache due to event network-changed-e76c3794-4bfb-450d-901b-d5c2ecccb574. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:15:40 np0005593232 nova_compute[250269]: 2026-01-23 10:15:40.207 250273 DEBUG oslo_concurrency.lockutils [req-3e55477d-ab00-46d7-aa73-1639c9e897fc req-c84152f5-fc4b-4672-9a62-342a4c526269 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:15:40 np0005593232 nova_compute[250269]: 2026-01-23 10:15:40.207 250273 DEBUG oslo_concurrency.lockutils [req-3e55477d-ab00-46d7-aa73-1639c9e897fc req-c84152f5-fc4b-4672-9a62-342a4c526269 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:15:40 np0005593232 nova_compute[250269]: 2026-01-23 10:15:40.207 250273 DEBUG nova.network.neutron [req-3e55477d-ab00-46d7-aa73-1639c9e897fc req-c84152f5-fc4b-4672-9a62-342a4c526269 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Refreshing network info cache for port e76c3794-4bfb-450d-901b-d5c2ecccb574 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:15:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:40.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:41 np0005593232 ovn_controller[151001]: 2026-01-23T10:15:41Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:15:fb:62 10.100.0.8
Jan 23 05:15:41 np0005593232 ovn_controller[151001]: 2026-01-23T10:15:41Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:15:fb:62 10.100.0.8
Jan 23 05:15:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:41.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:41 np0005593232 nova_compute[250269]: 2026-01-23 10:15:41.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:15:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2582: 321 pgs: 321 active+clean; 681 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 229 op/s
Jan 23 05:15:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:15:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:15:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:42.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:15:42 np0005593232 nova_compute[250269]: 2026-01-23 10:15:42.568 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:42.628 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:15:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:42.628 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:15:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:15:42.629 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:15:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:43.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2583: 321 pgs: 321 active+clean; 701 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 5.2 MiB/s wr, 261 op/s
Jan 23 05:15:43 np0005593232 nova_compute[250269]: 2026-01-23 10:15:43.956 250273 DEBUG nova.network.neutron [req-3e55477d-ab00-46d7-aa73-1639c9e897fc req-c84152f5-fc4b-4672-9a62-342a4c526269 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Updated VIF entry in instance network info cache for port e76c3794-4bfb-450d-901b-d5c2ecccb574. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:15:43 np0005593232 nova_compute[250269]: 2026-01-23 10:15:43.956 250273 DEBUG nova.network.neutron [req-3e55477d-ab00-46d7-aa73-1639c9e897fc req-c84152f5-fc4b-4672-9a62-342a4c526269 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Updating instance_info_cache with network_info: [{"id": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "address": "fa:16:3e:15:fb:62", "network": {"id": "06fc7c06-b10a-4dd5-9475-e6feb221ea4a", "bridge": "br-int", "label": "tempest-network-smoke--1334269906", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76c3794-4b", "ovs_interfaceid": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:15:44 np0005593232 nova_compute[250269]: 2026-01-23 10:15:44.018 250273 DEBUG oslo_concurrency.lockutils [req-3e55477d-ab00-46d7-aa73-1639c9e897fc req-c84152f5-fc4b-4672-9a62-342a4c526269 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:15:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:15:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:44.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:15:44 np0005593232 nova_compute[250269]: 2026-01-23 10:15:44.864 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:15:45Z|00527|binding|INFO|Releasing lport 1788b5e6-601b-4e3d-a584-c0138c3308f6 from this chassis (sb_readonly=0)
Jan 23 05:15:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:15:45Z|00528|binding|INFO|Releasing lport e77f48c3-bdcc-4620-bc5b-550b6d3814da from this chassis (sb_readonly=0)
Jan 23 05:15:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:15:45Z|00529|binding|INFO|Releasing lport 2c16e447-27d9-4516-bf23-ec948f375c10 from this chassis (sb_readonly=0)
Jan 23 05:15:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:45.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:45 np0005593232 nova_compute[250269]: 2026-01-23 10:15:45.074 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:15:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/147363235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:15:45 np0005593232 nova_compute[250269]: 2026-01-23 10:15:45.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:15:45 np0005593232 nova_compute[250269]: 2026-01-23 10:15:45.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:15:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2584: 321 pgs: 321 active+clean; 714 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 171 op/s
Jan 23 05:15:46 np0005593232 nova_compute[250269]: 2026-01-23 10:15:46.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:15:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:46.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:15:47.002328) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163347002497, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 2007, "num_deletes": 256, "total_data_size": 3396010, "memory_usage": 3454592, "flush_reason": "Manual Compaction"}
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163347021028, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 2083545, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 55480, "largest_seqno": 57486, "table_properties": {"data_size": 2076516, "index_size": 3719, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18820, "raw_average_key_size": 21, "raw_value_size": 2060815, "raw_average_value_size": 2357, "num_data_blocks": 163, "num_entries": 874, "num_filter_entries": 874, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769163170, "oldest_key_time": 1769163170, "file_creation_time": 1769163347, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 18827 microseconds, and 9178 cpu microseconds.
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:15:47.021181) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 2083545 bytes OK
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:15:47.021235) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:15:47.023468) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:15:47.023507) EVENT_LOG_v1 {"time_micros": 1769163347023500, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:15:47.023533) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 3387513, prev total WAL file size 3387513, number of live WAL files 2.
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:15:47.025411) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303036' seq:72057594037927935, type:22 .. '6D6772737461740032323539' seq:0, type:0; will stop at (end)
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(2034KB)], [125(11MB)]
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163347025599, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 14211534, "oldest_snapshot_seqno": -1}
Jan 23 05:15:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:15:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:47.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:15:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:15:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:15:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:15:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:15:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010693659268565048 of space, bias 1.0, pg target 3.2080977805695143 quantized to 32 (current 32)
Jan 23 05:15:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:15:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.006494967604224828 of space, bias 1.0, pg target 1.929005378454774 quantized to 32 (current 32)
Jan 23 05:15:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:15:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:15:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:15:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8449684766424715 quantized to 32 (current 32)
Jan 23 05:15:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:15:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 32)
Jan 23 05:15:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:15:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:15:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:15:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Jan 23 05:15:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:15:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 23 05:15:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:15:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:15:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:15:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 8588 keys, 11574669 bytes, temperature: kUnknown
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163347128284, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 11574669, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11519624, "index_size": 32468, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21509, "raw_key_size": 221603, "raw_average_key_size": 25, "raw_value_size": 11369289, "raw_average_value_size": 1323, "num_data_blocks": 1276, "num_entries": 8588, "num_filter_entries": 8588, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769163347, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:15:47.128718) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 11574669 bytes
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:15:47.130305) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.2 rd, 112.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.6 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(12.4) write-amplify(5.6) OK, records in: 9040, records dropped: 452 output_compression: NoCompression
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:15:47.130331) EVENT_LOG_v1 {"time_micros": 1769163347130320, "job": 76, "event": "compaction_finished", "compaction_time_micros": 102810, "compaction_time_cpu_micros": 55783, "output_level": 6, "num_output_files": 1, "total_output_size": 11574669, "num_input_records": 9040, "num_output_records": 8588, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163347130923, "job": 76, "event": "table_file_deletion", "file_number": 127}
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163347133846, "job": 76, "event": "table_file_deletion", "file_number": 125}
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:15:47.025197) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:15:47.133921) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:15:47.133928) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:15:47.133929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:15:47.133931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:15:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:15:47.133932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:15:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2585: 321 pgs: 321 active+clean; 686 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.4 MiB/s wr, 220 op/s
Jan 23 05:15:47 np0005593232 nova_compute[250269]: 2026-01-23 10:15:47.593 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 05:15:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:48.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 05:15:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:49.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:49 np0005593232 nova_compute[250269]: 2026-01-23 10:15:49.263 250273 INFO nova.compute.manager [None req-e13582fa-2a29-4fa7-9e4b-203b48ded3e5 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Get console output#033[00m
Jan 23 05:15:49 np0005593232 nova_compute[250269]: 2026-01-23 10:15:49.270 312104 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 23 05:15:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2586: 321 pgs: 321 active+clean; 669 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.3 MiB/s wr, 187 op/s
Jan 23 05:15:49 np0005593232 nova_compute[250269]: 2026-01-23 10:15:49.895 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:15:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1413472275' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:15:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:15:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1413472275' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:15:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:15:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:50.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:15:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:15:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:51.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:15:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2587: 321 pgs: 321 active+clean; 669 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 172 op/s
Jan 23 05:15:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:15:52 np0005593232 nova_compute[250269]: 2026-01-23 10:15:52.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:15:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:52.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:52 np0005593232 nova_compute[250269]: 2026-01-23 10:15:52.596 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:53.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:53 np0005593232 nova_compute[250269]: 2026-01-23 10:15:53.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:15:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2588: 321 pgs: 321 active+clean; 669 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 180 op/s
Jan 23 05:15:54 np0005593232 nova_compute[250269]: 2026-01-23 10:15:54.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:15:54 np0005593232 nova_compute[250269]: 2026-01-23 10:15:54.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:15:54 np0005593232 nova_compute[250269]: 2026-01-23 10:15:54.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:15:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:54.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:54 np0005593232 nova_compute[250269]: 2026-01-23 10:15:54.726 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-8056a321-13d3-4dd8-bb33-70c832c17ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:15:54 np0005593232 nova_compute[250269]: 2026-01-23 10:15:54.727 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-8056a321-13d3-4dd8-bb33-70c832c17ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:15:54 np0005593232 nova_compute[250269]: 2026-01-23 10:15:54.727 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:15:54 np0005593232 nova_compute[250269]: 2026-01-23 10:15:54.727 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8056a321-13d3-4dd8-bb33-70c832c17ac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:15:54 np0005593232 nova_compute[250269]: 2026-01-23 10:15:54.937 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:55.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2589: 321 pgs: 321 active+clean; 669 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.2 MiB/s wr, 160 op/s
Jan 23 05:15:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:15:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:56.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:15:56 np0005593232 nova_compute[250269]: 2026-01-23 10:15:56.767 250273 DEBUG oslo_concurrency.lockutils [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "interface-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:15:56 np0005593232 nova_compute[250269]: 2026-01-23 10:15:56.768 250273 DEBUG oslo_concurrency.lockutils [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "interface-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:15:56 np0005593232 nova_compute[250269]: 2026-01-23 10:15:56.768 250273 DEBUG nova.objects.instance [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lazy-loading 'flavor' on Instance uuid c8c9cb1d-4faa-4945-afcc-f67ccc4d4237 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:15:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:15:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:57.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2590: 321 pgs: 321 active+clean; 667 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 399 KiB/s wr, 142 op/s
Jan 23 05:15:57 np0005593232 nova_compute[250269]: 2026-01-23 10:15:57.641 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:15:57 np0005593232 nova_compute[250269]: 2026-01-23 10:15:57.778 250273 DEBUG nova.objects.instance [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lazy-loading 'pci_requests' on Instance uuid c8c9cb1d-4faa-4945-afcc-f67ccc4d4237 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:15:57 np0005593232 nova_compute[250269]: 2026-01-23 10:15:57.881 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Updating instance_info_cache with network_info: [{"id": "5247b656-d92f-4246-8db1-32dd4ca770b1", "address": "fa:16:3e:7a:b9:0c", "network": {"id": "00bd3319-bfe5-4acd-b2e4-17830ee847f9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-921943436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a6ba16c4b9d49d3bc24cd7b44935d1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5247b656-d9", "ovs_interfaceid": "5247b656-d92f-4246-8db1-32dd4ca770b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:15:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:15:58.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:58 np0005593232 nova_compute[250269]: 2026-01-23 10:15:58.821 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-8056a321-13d3-4dd8-bb33-70c832c17ac1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:15:58 np0005593232 nova_compute[250269]: 2026-01-23 10:15:58.821 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:15:58 np0005593232 nova_compute[250269]: 2026-01-23 10:15:58.822 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:15:58 np0005593232 nova_compute[250269]: 2026-01-23 10:15:58.838 250273 DEBUG nova.network.neutron [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:15:58 np0005593232 nova_compute[250269]: 2026-01-23 10:15:58.850 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:15:58 np0005593232 nova_compute[250269]: 2026-01-23 10:15:58.851 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:15:58 np0005593232 nova_compute[250269]: 2026-01-23 10:15:58.852 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:15:58 np0005593232 nova_compute[250269]: 2026-01-23 10:15:58.852 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:15:58 np0005593232 nova_compute[250269]: 2026-01-23 10:15:58.853 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:15:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:15:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:15:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:15:59.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:15:59 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Jan 23 05:15:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:15:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2267731780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:15:59 np0005593232 nova_compute[250269]: 2026-01-23 10:15:59.331 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:15:59 np0005593232 nova_compute[250269]: 2026-01-23 10:15:59.418 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:15:59 np0005593232 nova_compute[250269]: 2026-01-23 10:15:59.419 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:15:59 np0005593232 nova_compute[250269]: 2026-01-23 10:15:59.424 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:15:59 np0005593232 nova_compute[250269]: 2026-01-23 10:15:59.424 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:15:59 np0005593232 podman[343421]: 2026-01-23 10:15:59.423709167 +0000 UTC m=+0.082842315 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 05:15:59 np0005593232 nova_compute[250269]: 2026-01-23 10:15:59.427 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:15:59 np0005593232 nova_compute[250269]: 2026-01-23 10:15:59.427 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:15:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2591: 321 pgs: 321 active+clean; 667 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 203 KiB/s wr, 110 op/s
Jan 23 05:15:59 np0005593232 nova_compute[250269]: 2026-01-23 10:15:59.569 250273 DEBUG nova.policy [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '60291ce86b6946629a2e48f6680312cb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '98c94577fcdb4c3d893898ede79ea2d4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:15:59 np0005593232 nova_compute[250269]: 2026-01-23 10:15:59.620 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:15:59 np0005593232 nova_compute[250269]: 2026-01-23 10:15:59.621 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3623MB free_disk=20.784595489501953GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:15:59 np0005593232 nova_compute[250269]: 2026-01-23 10:15:59.621 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:15:59 np0005593232 nova_compute[250269]: 2026-01-23 10:15:59.622 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:15:59 np0005593232 nova_compute[250269]: 2026-01-23 10:15:59.780 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 8056a321-13d3-4dd8-bb33-70c832c17ac1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:15:59 np0005593232 nova_compute[250269]: 2026-01-23 10:15:59.781 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance ccd07f55-529f-4dbb-989c-2cdbdd393a0b actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:15:59 np0005593232 nova_compute[250269]: 2026-01-23 10:15:59.781 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance c8c9cb1d-4faa-4945-afcc-f67ccc4d4237 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:15:59 np0005593232 nova_compute[250269]: 2026-01-23 10:15:59.781 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:15:59 np0005593232 nova_compute[250269]: 2026-01-23 10:15:59.781 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:16:00 np0005593232 nova_compute[250269]: 2026-01-23 10:16:00.017 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:16:00 np0005593232 nova_compute[250269]: 2026-01-23 10:16:00.077 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:00.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:16:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2271557808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:16:00 np0005593232 nova_compute[250269]: 2026-01-23 10:16:00.554 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:16:00 np0005593232 nova_compute[250269]: 2026-01-23 10:16:00.560 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:16:00 np0005593232 nova_compute[250269]: 2026-01-23 10:16:00.881 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:16:00 np0005593232 nova_compute[250269]: 2026-01-23 10:16:00.919 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:16:00 np0005593232 nova_compute[250269]: 2026-01-23 10:16:00.919 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.298s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:16:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:01.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2592: 321 pgs: 321 active+clean; 667 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 470 KiB/s rd, 201 KiB/s wr, 72 op/s
Jan 23 05:16:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:16:02 np0005593232 nova_compute[250269]: 2026-01-23 10:16:02.417 250273 DEBUG nova.network.neutron [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Successfully created port: 23917d91-d9fe-4f15-98e4-bfcb56895c1f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:16:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:16:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:02.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:16:02 np0005593232 nova_compute[250269]: 2026-01-23 10:16:02.672 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:02 np0005593232 nova_compute[250269]: 2026-01-23 10:16:02.915 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:16:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:03.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2593: 321 pgs: 321 active+clean; 667 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 556 KiB/s rd, 202 KiB/s wr, 78 op/s
Jan 23 05:16:04 np0005593232 podman[343480]: 2026-01-23 10:16:04.404695096 +0000 UTC m=+0.058945615 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:16:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:16:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:04.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:16:04 np0005593232 nova_compute[250269]: 2026-01-23 10:16:04.682 250273 DEBUG nova.network.neutron [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Successfully updated port: 23917d91-d9fe-4f15-98e4-bfcb56895c1f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:16:04 np0005593232 nova_compute[250269]: 2026-01-23 10:16:04.714 250273 DEBUG oslo_concurrency.lockutils [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "refresh_cache-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:16:04 np0005593232 nova_compute[250269]: 2026-01-23 10:16:04.714 250273 DEBUG oslo_concurrency.lockutils [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquired lock "refresh_cache-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:16:04 np0005593232 nova_compute[250269]: 2026-01-23 10:16:04.714 250273 DEBUG nova.network.neutron [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:16:04 np0005593232 nova_compute[250269]: 2026-01-23 10:16:04.858 250273 DEBUG nova.compute.manager [req-297dccca-c02d-4160-be86-a98f80872f4f req-b8fad4a1-0901-43c0-a832-e50ab225aff7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Received event network-changed-23917d91-d9fe-4f15-98e4-bfcb56895c1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:16:04 np0005593232 nova_compute[250269]: 2026-01-23 10:16:04.859 250273 DEBUG nova.compute.manager [req-297dccca-c02d-4160-be86-a98f80872f4f req-b8fad4a1-0901-43c0-a832-e50ab225aff7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Refreshing instance network info cache due to event network-changed-23917d91-d9fe-4f15-98e4-bfcb56895c1f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:16:04 np0005593232 nova_compute[250269]: 2026-01-23 10:16:04.859 250273 DEBUG oslo_concurrency.lockutils [req-297dccca-c02d-4160-be86-a98f80872f4f req-b8fad4a1-0901-43c0-a832-e50ab225aff7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:16:05 np0005593232 nova_compute[250269]: 2026-01-23 10:16:05.079 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:16:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:05.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:16:05 np0005593232 nova_compute[250269]: 2026-01-23 10:16:05.342 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2594: 321 pgs: 321 active+clean; 667 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 552 KiB/s rd, 30 KiB/s wr, 71 op/s
Jan 23 05:16:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:16:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:06.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:16:06 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Jan 23 05:16:06 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:06.921966) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:16:06 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Jan 23 05:16:06 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163366922046, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 422, "num_deletes": 251, "total_data_size": 343129, "memory_usage": 352168, "flush_reason": "Manual Compaction"}
Jan 23 05:16:06 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Jan 23 05:16:06 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163366936525, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 339894, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57488, "largest_seqno": 57908, "table_properties": {"data_size": 337452, "index_size": 541, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6041, "raw_average_key_size": 18, "raw_value_size": 332596, "raw_average_value_size": 1032, "num_data_blocks": 24, "num_entries": 322, "num_filter_entries": 322, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769163347, "oldest_key_time": 1769163347, "file_creation_time": 1769163366, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:16:06 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 14592 microseconds, and 1920 cpu microseconds.
Jan 23 05:16:06 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:16:06 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:06.936574) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 339894 bytes OK
Jan 23 05:16:06 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:06.936591) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Jan 23 05:16:06 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:06.938338) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Jan 23 05:16:06 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:06.938360) EVENT_LOG_v1 {"time_micros": 1769163366938347, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:16:06 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:06.938377) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:16:06 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 340524, prev total WAL file size 340524, number of live WAL files 2.
Jan 23 05:16:06 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:16:06 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:06.938799) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Jan 23 05:16:06 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:16:06 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(331KB)], [128(11MB)]
Jan 23 05:16:06 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163366938838, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 11914563, "oldest_snapshot_seqno": -1}
Jan 23 05:16:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:16:07 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 8400 keys, 10025473 bytes, temperature: kUnknown
Jan 23 05:16:07 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163367048036, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 10025473, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9973058, "index_size": 30286, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21061, "raw_key_size": 218390, "raw_average_key_size": 25, "raw_value_size": 9827329, "raw_average_value_size": 1169, "num_data_blocks": 1177, "num_entries": 8400, "num_filter_entries": 8400, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769163366, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:16:07 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:16:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:07.049534) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 10025473 bytes
Jan 23 05:16:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:07.050954) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 108.9 rd, 91.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.0 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(64.5) write-amplify(29.5) OK, records in: 8910, records dropped: 510 output_compression: NoCompression
Jan 23 05:16:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:07.050979) EVENT_LOG_v1 {"time_micros": 1769163367050968, "job": 78, "event": "compaction_finished", "compaction_time_micros": 109375, "compaction_time_cpu_micros": 25122, "output_level": 6, "num_output_files": 1, "total_output_size": 10025473, "num_input_records": 8910, "num_output_records": 8400, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:16:07 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:16:07 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163367051405, "job": 78, "event": "table_file_deletion", "file_number": 130}
Jan 23 05:16:07 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:16:07 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163367054435, "job": 78, "event": "table_file_deletion", "file_number": 128}
Jan 23 05:16:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:06.938730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:16:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:07.054549) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:16:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:07.054556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:16:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:07.054559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:16:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:07.054562) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:16:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:07.054565) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:16:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:07.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2595: 321 pgs: 321 active+clean; 669 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 544 KiB/s rd, 37 KiB/s wr, 61 op/s
Jan 23 05:16:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:16:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:16:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:16:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:16:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:16:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:16:07 np0005593232 nova_compute[250269]: 2026-01-23 10:16:07.733 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:08.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 23 05:16:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 05:16:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:16:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:16:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:16:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:16:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:16:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:16:08 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7198db59-155c-4c34-8ae8-4fccc87799c7 does not exist
Jan 23 05:16:08 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d81def17-6be9-4242-91d1-f703794387a7 does not exist
Jan 23 05:16:08 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 5606ca8f-99c3-49a9-9fd9-a99ba79005bf does not exist
Jan 23 05:16:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:16:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:16:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:16:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:16:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:16:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:16:08 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 05:16:08 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:16:08 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:16:08 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:16:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:16:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:09.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.268 250273 DEBUG nova.network.neutron [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Updating instance_info_cache with network_info: [{"id": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "address": "fa:16:3e:15:fb:62", "network": {"id": "06fc7c06-b10a-4dd5-9475-e6feb221ea4a", "bridge": "br-int", "label": "tempest-network-smoke--1334269906", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76c3794-4b", "ovs_interfaceid": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "address": "fa:16:3e:75:67:7f", "network": {"id": "be4fe986-7e1a-41dc-a5ab-f14f84be1b07", "bridge": "br-int", "label": "tempest-network-smoke--1531756005", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23917d91-d9", "ovs_interfaceid": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.349 250273 DEBUG oslo_concurrency.lockutils [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Releasing lock "refresh_cache-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.352 250273 DEBUG oslo_concurrency.lockutils [req-297dccca-c02d-4160-be86-a98f80872f4f req-b8fad4a1-0901-43c0-a832-e50ab225aff7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.353 250273 DEBUG nova.network.neutron [req-297dccca-c02d-4160-be86-a98f80872f4f req-b8fad4a1-0901-43c0-a832-e50ab225aff7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Refreshing network info cache for port 23917d91-d9fe-4f15-98e4-bfcb56895c1f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.357 250273 DEBUG nova.virt.libvirt.vif [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:14:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-167712358',display_name='tempest-TestNetworkBasicOps-server-167712358',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-167712358',id=143,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAF35E1Zip5mB7Z7bBpTlXmea73flBCQ0OxIp3E3UHfQJ8/C+PcZ9Yn+30apyBlpqi/cSP1tnLqb2v0HvL8Yo3sFqR36G/CFrDPtbVe+ut7anU4AXXWhdM5kg8fAAxRz+A==',key_name='tempest-TestNetworkBasicOps-880733138',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:15:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='98c94577fcdb4c3d893898ede79ea2d4',ramdisk_id='',reservation_id='r-9al0ysbc',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-789276745',owner_user_name='tempest-TestNetworkBasicOps-789276745-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:15:27Z,user_data=None,user_id='60291ce86b6946629a2e48f6680312cb',uuid=c8c9cb1d-4faa-4945-afcc-f67ccc4d4237,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "address": "fa:16:3e:75:67:7f", "network": {"id": "be4fe986-7e1a-41dc-a5ab-f14f84be1b07", "bridge": "br-int", "label": "tempest-network-smoke--1531756005", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23917d91-d9", "ovs_interfaceid": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.357 250273 DEBUG nova.network.os_vif_util [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converting VIF {"id": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "address": "fa:16:3e:75:67:7f", "network": {"id": "be4fe986-7e1a-41dc-a5ab-f14f84be1b07", "bridge": "br-int", "label": "tempest-network-smoke--1531756005", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23917d91-d9", "ovs_interfaceid": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.358 250273 DEBUG nova.network.os_vif_util [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:67:7f,bridge_name='br-int',has_traffic_filtering=True,id=23917d91-d9fe-4f15-98e4-bfcb56895c1f,network=Network(be4fe986-7e1a-41dc-a5ab-f14f84be1b07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23917d91-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.358 250273 DEBUG os_vif [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:67:7f,bridge_name='br-int',has_traffic_filtering=True,id=23917d91-d9fe-4f15-98e4-bfcb56895c1f,network=Network(be4fe986-7e1a-41dc-a5ab-f14f84be1b07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23917d91-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.359 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.360 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.360 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.363 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.363 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap23917d91-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.364 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap23917d91-d9, col_values=(('external_ids', {'iface-id': '23917d91-d9fe-4f15-98e4-bfcb56895c1f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:67:7f', 'vm-uuid': 'c8c9cb1d-4faa-4945-afcc-f67ccc4d4237'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.365 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:09 np0005593232 NetworkManager[49057]: <info>  [1769163369.3665] manager: (tap23917d91-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/256)
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.367 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.374 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.375 250273 INFO os_vif [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:67:7f,bridge_name='br-int',has_traffic_filtering=True,id=23917d91-d9fe-4f15-98e4-bfcb56895c1f,network=Network(be4fe986-7e1a-41dc-a5ab-f14f84be1b07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23917d91-d9')#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.375 250273 DEBUG nova.virt.libvirt.vif [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:14:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-167712358',display_name='tempest-TestNetworkBasicOps-server-167712358',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-167712358',id=143,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAF35E1Zip5mB7Z7bBpTlXmea73flBCQ0OxIp3E3UHfQJ8/C+PcZ9Yn+30apyBlpqi/cSP1tnLqb2v0HvL8Yo3sFqR36G/CFrDPtbVe+ut7anU4AXXWhdM5kg8fAAxRz+A==',key_name='tempest-TestNetworkBasicOps-880733138',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:15:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='98c94577fcdb4c3d893898ede79ea2d4',ramdisk_id='',reservation_id='r-9al0ysbc',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-789276745',owner_user_name='tempest-TestNetworkBasicOps-789276745-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:15:27Z,user_data=None,user_id='60291ce86b6946629a2e48f6680312cb',uuid=c8c9cb1d-4faa-4945-afcc-f67ccc4d4237,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "address": "fa:16:3e:75:67:7f", "network": {"id": "be4fe986-7e1a-41dc-a5ab-f14f84be1b07", "bridge": "br-int", "label": "tempest-network-smoke--1531756005", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23917d91-d9", "ovs_interfaceid": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.376 250273 DEBUG nova.network.os_vif_util [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converting VIF {"id": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "address": "fa:16:3e:75:67:7f", "network": {"id": "be4fe986-7e1a-41dc-a5ab-f14f84be1b07", "bridge": "br-int", "label": "tempest-network-smoke--1531756005", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23917d91-d9", "ovs_interfaceid": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.376 250273 DEBUG nova.network.os_vif_util [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:67:7f,bridge_name='br-int',has_traffic_filtering=True,id=23917d91-d9fe-4f15-98e4-bfcb56895c1f,network=Network(be4fe986-7e1a-41dc-a5ab-f14f84be1b07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23917d91-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.381 250273 DEBUG nova.virt.libvirt.guest [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] attach device xml: <interface type="ethernet">
Jan 23 05:16:09 np0005593232 nova_compute[250269]:  <mac address="fa:16:3e:75:67:7f"/>
Jan 23 05:16:09 np0005593232 nova_compute[250269]:  <model type="virtio"/>
Jan 23 05:16:09 np0005593232 nova_compute[250269]:  <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:16:09 np0005593232 nova_compute[250269]:  <mtu size="1442"/>
Jan 23 05:16:09 np0005593232 nova_compute[250269]:  <target dev="tap23917d91-d9"/>
Jan 23 05:16:09 np0005593232 nova_compute[250269]: </interface>
Jan 23 05:16:09 np0005593232 nova_compute[250269]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 23 05:16:09 np0005593232 kernel: tap23917d91-d9: entered promiscuous mode
Jan 23 05:16:09 np0005593232 ovn_controller[151001]: 2026-01-23T10:16:09Z|00530|binding|INFO|Claiming lport 23917d91-d9fe-4f15-98e4-bfcb56895c1f for this chassis.
Jan 23 05:16:09 np0005593232 ovn_controller[151001]: 2026-01-23T10:16:09Z|00531|binding|INFO|23917d91-d9fe-4f15-98e4-bfcb56895c1f: Claiming fa:16:3e:75:67:7f 10.100.0.25
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.396 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:09 np0005593232 NetworkManager[49057]: <info>  [1769163369.4006] manager: (tap23917d91-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/257)
Jan 23 05:16:09 np0005593232 podman[343775]: 2026-01-23 10:16:09.410069152 +0000 UTC m=+0.056783224 container create 45d469d017c3de5a65244e6fdcf58508b6ee533d49c34a91fb445afa307626ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keller, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.417 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:67:7f 10.100.0.25'], port_security=['fa:16:3e:75:67:7f 10.100.0.25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.25/28', 'neutron:device_id': 'c8c9cb1d-4faa-4945-afcc-f67ccc4d4237', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-be4fe986-7e1a-41dc-a5ab-f14f84be1b07', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98c94577fcdb4c3d893898ede79ea2d4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '104c556a-4616-455b-9049-a55a5af0ff57', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=115c815a-98eb-4606-b70e-eb3b509317a0, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=23917d91-d9fe-4f15-98e4-bfcb56895c1f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.419 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 23917d91-d9fe-4f15-98e4-bfcb56895c1f in datapath be4fe986-7e1a-41dc-a5ab-f14f84be1b07 bound to our chassis#033[00m
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.421 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network be4fe986-7e1a-41dc-a5ab-f14f84be1b07#033[00m
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.433 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0f403487-0b80-4668-af91-23d332106ccb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.437 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbe4fe986-71 in ovnmeta-be4fe986-7e1a-41dc-a5ab-f14f84be1b07 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.439 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbe4fe986-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.439 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b0f97f68-ee45-45ea-aca9-b6ed2a297319]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.440 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4d48c07a-c496-4cf5-9ef0-dd1fd7126a23]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:09 np0005593232 systemd-udevd[343796]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:16:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2596: 321 pgs: 321 active+clean; 669 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 461 KiB/s rd, 25 KiB/s wr, 40 op/s
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.455 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[9846bc98-2f04-4acd-a844-fdb12fe21c52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:09 np0005593232 NetworkManager[49057]: <info>  [1769163369.4632] device (tap23917d91-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:16:09 np0005593232 NetworkManager[49057]: <info>  [1769163369.4649] device (tap23917d91-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:16:09 np0005593232 ovn_controller[151001]: 2026-01-23T10:16:09Z|00532|binding|INFO|Setting lport 23917d91-d9fe-4f15-98e4-bfcb56895c1f ovn-installed in OVS
Jan 23 05:16:09 np0005593232 ovn_controller[151001]: 2026-01-23T10:16:09Z|00533|binding|INFO|Setting lport 23917d91-d9fe-4f15-98e4-bfcb56895c1f up in Southbound
Jan 23 05:16:09 np0005593232 systemd[1]: Started libpod-conmon-45d469d017c3de5a65244e6fdcf58508b6ee533d49c34a91fb445afa307626ee.scope.
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.470 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:09 np0005593232 podman[343775]: 2026-01-23 10:16:09.380354728 +0000 UTC m=+0.027068820 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.479 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b5dac29d-b36d-4172-833a-f6ca925c3b68]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:09 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.513 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[f689d20d-bcb3-4293-9bce-486c53bfa19a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.522 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c8e023e1-2c89-412e-959f-64c7c143721e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:09 np0005593232 NetworkManager[49057]: <info>  [1769163369.5229] manager: (tapbe4fe986-70): new Veth device (/org/freedesktop/NetworkManager/Devices/258)
Jan 23 05:16:09 np0005593232 podman[343775]: 2026-01-23 10:16:09.523474824 +0000 UTC m=+0.170188916 container init 45d469d017c3de5a65244e6fdcf58508b6ee533d49c34a91fb445afa307626ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.526 250273 DEBUG nova.virt.libvirt.driver [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.526 250273 DEBUG nova.virt.libvirt.driver [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.528 250273 DEBUG nova.virt.libvirt.driver [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] No VIF found with MAC fa:16:3e:15:fb:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.528 250273 DEBUG nova.virt.libvirt.driver [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] No VIF found with MAC fa:16:3e:75:67:7f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:16:09 np0005593232 podman[343775]: 2026-01-23 10:16:09.531439951 +0000 UTC m=+0.178154023 container start 45d469d017c3de5a65244e6fdcf58508b6ee533d49c34a91fb445afa307626ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:16:09 np0005593232 podman[343775]: 2026-01-23 10:16:09.535798385 +0000 UTC m=+0.182512457 container attach 45d469d017c3de5a65244e6fdcf58508b6ee533d49c34a91fb445afa307626ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:16:09 np0005593232 charming_keller[343803]: 167 167
Jan 23 05:16:09 np0005593232 systemd[1]: libpod-45d469d017c3de5a65244e6fdcf58508b6ee533d49c34a91fb445afa307626ee.scope: Deactivated successfully.
Jan 23 05:16:09 np0005593232 podman[343775]: 2026-01-23 10:16:09.539961963 +0000 UTC m=+0.186676045 container died 45d469d017c3de5a65244e6fdcf58508b6ee533d49c34a91fb445afa307626ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keller, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:16:09 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4ddf1260188bc3e10387dcd1f466a07eccbd35160d67d5550bb4cf2018996992-merged.mount: Deactivated successfully.
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.569 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[1b4b6de7-68fa-4da9-a3a2-00ab622a0492]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.572 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[fb52e64e-0c36-4e6a-be9a-5780cb5a6505]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.576 250273 DEBUG nova.virt.libvirt.guest [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:16:09 np0005593232 nova_compute[250269]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:16:09 np0005593232 nova_compute[250269]:  <nova:name>tempest-TestNetworkBasicOps-server-167712358</nova:name>
Jan 23 05:16:09 np0005593232 nova_compute[250269]:  <nova:creationTime>2026-01-23 10:16:09</nova:creationTime>
Jan 23 05:16:09 np0005593232 nova_compute[250269]:  <nova:flavor name="m1.nano">
Jan 23 05:16:09 np0005593232 nova_compute[250269]:    <nova:memory>128</nova:memory>
Jan 23 05:16:09 np0005593232 nova_compute[250269]:    <nova:disk>1</nova:disk>
Jan 23 05:16:09 np0005593232 nova_compute[250269]:    <nova:swap>0</nova:swap>
Jan 23 05:16:09 np0005593232 nova_compute[250269]:    <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:16:09 np0005593232 nova_compute[250269]:    <nova:vcpus>1</nova:vcpus>
Jan 23 05:16:09 np0005593232 nova_compute[250269]:  </nova:flavor>
Jan 23 05:16:09 np0005593232 nova_compute[250269]:  <nova:owner>
Jan 23 05:16:09 np0005593232 nova_compute[250269]:    <nova:user uuid="60291ce86b6946629a2e48f6680312cb">tempest-TestNetworkBasicOps-789276745-project-member</nova:user>
Jan 23 05:16:09 np0005593232 nova_compute[250269]:    <nova:project uuid="98c94577fcdb4c3d893898ede79ea2d4">tempest-TestNetworkBasicOps-789276745</nova:project>
Jan 23 05:16:09 np0005593232 nova_compute[250269]:  </nova:owner>
Jan 23 05:16:09 np0005593232 nova_compute[250269]:  <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:16:09 np0005593232 nova_compute[250269]:  <nova:ports>
Jan 23 05:16:09 np0005593232 nova_compute[250269]:    <nova:port uuid="e76c3794-4bfb-450d-901b-d5c2ecccb574">
Jan 23 05:16:09 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 23 05:16:09 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 05:16:09 np0005593232 nova_compute[250269]:    <nova:port uuid="23917d91-d9fe-4f15-98e4-bfcb56895c1f">
Jan 23 05:16:09 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.100.0.25" ipVersion="4"/>
Jan 23 05:16:09 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 05:16:09 np0005593232 nova_compute[250269]:  </nova:ports>
Jan 23 05:16:09 np0005593232 nova_compute[250269]: </nova:instance>
Jan 23 05:16:09 np0005593232 nova_compute[250269]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 23 05:16:09 np0005593232 podman[343775]: 2026-01-23 10:16:09.587671429 +0000 UTC m=+0.234385491 container remove 45d469d017c3de5a65244e6fdcf58508b6ee533d49c34a91fb445afa307626ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_keller, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:16:09 np0005593232 NetworkManager[49057]: <info>  [1769163369.6005] device (tapbe4fe986-70): carrier: link connected
Jan 23 05:16:09 np0005593232 systemd[1]: libpod-conmon-45d469d017c3de5a65244e6fdcf58508b6ee533d49c34a91fb445afa307626ee.scope: Deactivated successfully.
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.607 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e9ab6d-13e5-40ba-9385-bf6830d06447]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.613 250273 DEBUG oslo_concurrency.lockutils [None req-a9a39980-5a99-4207-84fe-5b7361ea8887 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "interface-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 12.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.625 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c3278547-1c97-4ae1-8c3e-3e06a5b0452c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbe4fe986-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d4:ef:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728274, 'reachable_time': 20778, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343844, 'error': None, 'target': 'ovnmeta-be4fe986-7e1a-41dc-a5ab-f14f84be1b07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.640 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d683b0a0-a322-42e7-982c-577997858330]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed4:ef2c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 728274, 'tstamp': 728274}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343845, 'error': None, 'target': 'ovnmeta-be4fe986-7e1a-41dc-a5ab-f14f84be1b07', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.660 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2b94f0c9-bc0a-4a80-9902-4cc598d006a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbe4fe986-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d4:ef:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728274, 'reachable_time': 20778, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 343846, 'error': None, 'target': 'ovnmeta-be4fe986-7e1a-41dc-a5ab-f14f84be1b07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.691 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[783c5305-f45a-4e81-ad40-5e667ea8ecbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.753 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[abe43f11-f834-4c63-9ab9-e58de89800dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.755 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe4fe986-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.755 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.755 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe4fe986-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:16:09 np0005593232 kernel: tapbe4fe986-70: entered promiscuous mode
Jan 23 05:16:09 np0005593232 NetworkManager[49057]: <info>  [1769163369.7586] manager: (tapbe4fe986-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.757 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.772 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.773 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbe4fe986-70, col_values=(('external_ids', {'iface-id': '520c43ee-a517-48e8-9cb3-0fa0f3532015'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.774 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:09 np0005593232 ovn_controller[151001]: 2026-01-23T10:16:09Z|00534|binding|INFO|Releasing lport 520c43ee-a517-48e8-9cb3-0fa0f3532015 from this chassis (sb_readonly=0)
Jan 23 05:16:09 np0005593232 nova_compute[250269]: 2026-01-23 10:16:09.792 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.793 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/be4fe986-7e1a-41dc-a5ab-f14f84be1b07.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/be4fe986-7e1a-41dc-a5ab-f14f84be1b07.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.794 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7e5573f8-49b5-435f-ab4b-03be44114cb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.795 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-be4fe986-7e1a-41dc-a5ab-f14f84be1b07
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/be4fe986-7e1a-41dc-a5ab-f14f84be1b07.pid.haproxy
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID be4fe986-7e1a-41dc-a5ab-f14f84be1b07
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:16:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:09.795 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-be4fe986-7e1a-41dc-a5ab-f14f84be1b07', 'env', 'PROCESS_TAG=haproxy-be4fe986-7e1a-41dc-a5ab-f14f84be1b07', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/be4fe986-7e1a-41dc-a5ab-f14f84be1b07.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:16:09 np0005593232 podman[343858]: 2026-01-23 10:16:09.805545679 +0000 UTC m=+0.058018689 container create 06b86330e512c622732c556d5ef04e436cb0aebc66c79d115e979da8ff0e7140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cray, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:16:09 np0005593232 systemd[1]: Started libpod-conmon-06b86330e512c622732c556d5ef04e436cb0aebc66c79d115e979da8ff0e7140.scope.
Jan 23 05:16:09 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:16:09 np0005593232 podman[343858]: 2026-01-23 10:16:09.787203788 +0000 UTC m=+0.039676818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:16:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d63935fcb5022374346ff3878788d2c9545c0e20c8ef9d57ee1d48defa6266/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:16:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d63935fcb5022374346ff3878788d2c9545c0e20c8ef9d57ee1d48defa6266/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:16:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d63935fcb5022374346ff3878788d2c9545c0e20c8ef9d57ee1d48defa6266/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:16:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d63935fcb5022374346ff3878788d2c9545c0e20c8ef9d57ee1d48defa6266/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:16:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d63935fcb5022374346ff3878788d2c9545c0e20c8ef9d57ee1d48defa6266/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:16:09 np0005593232 podman[343858]: 2026-01-23 10:16:09.906243421 +0000 UTC m=+0.158716441 container init 06b86330e512c622732c556d5ef04e436cb0aebc66c79d115e979da8ff0e7140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cray, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 05:16:09 np0005593232 podman[343858]: 2026-01-23 10:16:09.912678464 +0000 UTC m=+0.165151474 container start 06b86330e512c622732c556d5ef04e436cb0aebc66c79d115e979da8ff0e7140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cray, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 05:16:09 np0005593232 podman[343858]: 2026-01-23 10:16:09.915565336 +0000 UTC m=+0.168038386 container attach 06b86330e512c622732c556d5ef04e436cb0aebc66c79d115e979da8ff0e7140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cray, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 05:16:10 np0005593232 podman[343908]: 2026-01-23 10:16:10.156060629 +0000 UTC m=+0.050083684 container create db3b219ac6cba0922dafec59430f7ae1ebb2e8fc6f8989ce4169258e2f973433 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be4fe986-7e1a-41dc-a5ab-f14f84be1b07, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 05:16:10 np0005593232 systemd[1]: Started libpod-conmon-db3b219ac6cba0922dafec59430f7ae1ebb2e8fc6f8989ce4169258e2f973433.scope.
Jan 23 05:16:10 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:16:10 np0005593232 podman[343908]: 2026-01-23 10:16:10.129896676 +0000 UTC m=+0.023919751 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:16:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a83ca709e4b90fb4b8c65189d7d0444f2110098ec6b4af612e63d5aeb5341ab8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:16:10 np0005593232 podman[343908]: 2026-01-23 10:16:10.249805743 +0000 UTC m=+0.143828828 container init db3b219ac6cba0922dafec59430f7ae1ebb2e8fc6f8989ce4169258e2f973433 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be4fe986-7e1a-41dc-a5ab-f14f84be1b07, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 23 05:16:10 np0005593232 podman[343908]: 2026-01-23 10:16:10.256949136 +0000 UTC m=+0.150972191 container start db3b219ac6cba0922dafec59430f7ae1ebb2e8fc6f8989ce4169258e2f973433 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be4fe986-7e1a-41dc-a5ab-f14f84be1b07, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:16:10 np0005593232 neutron-haproxy-ovnmeta-be4fe986-7e1a-41dc-a5ab-f14f84be1b07[343946]: [NOTICE]   (343975) : New worker (343979) forked
Jan 23 05:16:10 np0005593232 neutron-haproxy-ovnmeta-be4fe986-7e1a-41dc-a5ab-f14f84be1b07[343946]: [NOTICE]   (343975) : Loading success.
Jan 23 05:16:10 np0005593232 nova_compute[250269]: 2026-01-23 10:16:10.325 250273 DEBUG nova.compute.manager [req-2d6ec502-d9c6-4720-951b-f0d6f138fa7b req-8581843d-4f88-460a-a899-fa6c7d17d25c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Received event network-vif-plugged-23917d91-d9fe-4f15-98e4-bfcb56895c1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:16:10 np0005593232 nova_compute[250269]: 2026-01-23 10:16:10.326 250273 DEBUG oslo_concurrency.lockutils [req-2d6ec502-d9c6-4720-951b-f0d6f138fa7b req-8581843d-4f88-460a-a899-fa6c7d17d25c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:16:10 np0005593232 nova_compute[250269]: 2026-01-23 10:16:10.326 250273 DEBUG oslo_concurrency.lockutils [req-2d6ec502-d9c6-4720-951b-f0d6f138fa7b req-8581843d-4f88-460a-a899-fa6c7d17d25c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:16:10 np0005593232 nova_compute[250269]: 2026-01-23 10:16:10.326 250273 DEBUG oslo_concurrency.lockutils [req-2d6ec502-d9c6-4720-951b-f0d6f138fa7b req-8581843d-4f88-460a-a899-fa6c7d17d25c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:16:10 np0005593232 nova_compute[250269]: 2026-01-23 10:16:10.326 250273 DEBUG nova.compute.manager [req-2d6ec502-d9c6-4720-951b-f0d6f138fa7b req-8581843d-4f88-460a-a899-fa6c7d17d25c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] No waiting events found dispatching network-vif-plugged-23917d91-d9fe-4f15-98e4-bfcb56895c1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:16:10 np0005593232 nova_compute[250269]: 2026-01-23 10:16:10.326 250273 WARNING nova.compute.manager [req-2d6ec502-d9c6-4720-951b-f0d6f138fa7b req-8581843d-4f88-460a-a899-fa6c7d17d25c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Received unexpected event network-vif-plugged-23917d91-d9fe-4f15-98e4-bfcb56895c1f for instance with vm_state active and task_state None.#033[00m
Jan 23 05:16:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:16:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:10.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:16:10 np0005593232 musing_cray[343879]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:16:10 np0005593232 musing_cray[343879]: --> relative data size: 1.0
Jan 23 05:16:10 np0005593232 musing_cray[343879]: --> All data devices are unavailable
Jan 23 05:16:10 np0005593232 systemd[1]: libpod-06b86330e512c622732c556d5ef04e436cb0aebc66c79d115e979da8ff0e7140.scope: Deactivated successfully.
Jan 23 05:16:10 np0005593232 podman[343858]: 2026-01-23 10:16:10.811741969 +0000 UTC m=+1.064214979 container died 06b86330e512c622732c556d5ef04e436cb0aebc66c79d115e979da8ff0e7140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cray, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:16:10 np0005593232 systemd[1]: var-lib-containers-storage-overlay-18d63935fcb5022374346ff3878788d2c9545c0e20c8ef9d57ee1d48defa6266-merged.mount: Deactivated successfully.
Jan 23 05:16:10 np0005593232 podman[343858]: 2026-01-23 10:16:10.864112567 +0000 UTC m=+1.116585577 container remove 06b86330e512c622732c556d5ef04e436cb0aebc66c79d115e979da8ff0e7140 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:16:10 np0005593232 systemd[1]: libpod-conmon-06b86330e512c622732c556d5ef04e436cb0aebc66c79d115e979da8ff0e7140.scope: Deactivated successfully.
Jan 23 05:16:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:16:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:11.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:16:11 np0005593232 ovn_controller[151001]: 2026-01-23T10:16:11Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:75:67:7f 10.100.0.25
Jan 23 05:16:11 np0005593232 ovn_controller[151001]: 2026-01-23T10:16:11Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:75:67:7f 10.100.0.25
Jan 23 05:16:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2597: 321 pgs: 321 active+clean; 669 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 88 KiB/s rd, 22 KiB/s wr, 9 op/s
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.493 250273 DEBUG oslo_concurrency.lockutils [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "interface-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-23917d91-d9fe-4f15-98e4-bfcb56895c1f" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.493 250273 DEBUG oslo_concurrency.lockutils [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "interface-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-23917d91-d9fe-4f15-98e4-bfcb56895c1f" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.513 250273 DEBUG nova.objects.instance [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lazy-loading 'flavor' on Instance uuid c8c9cb1d-4faa-4945-afcc-f67ccc4d4237 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:16:11 np0005593232 podman[344153]: 2026-01-23 10:16:11.521999781 +0000 UTC m=+0.041535191 container create 75d3678381f8dc08c16cdcdf1867234eb91f05f24c0485840cf3072ecd5d65b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:16:11 np0005593232 systemd[1]: Started libpod-conmon-75d3678381f8dc08c16cdcdf1867234eb91f05f24c0485840cf3072ecd5d65b5.scope.
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.561 250273 DEBUG nova.virt.libvirt.vif [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:14:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-167712358',display_name='tempest-TestNetworkBasicOps-server-167712358',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-167712358',id=143,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAF35E1Zip5mB7Z7bBpTlXmea73flBCQ0OxIp3E3UHfQJ8/C+PcZ9Yn+30apyBlpqi/cSP1tnLqb2v0HvL8Yo3sFqR36G/CFrDPtbVe+ut7anU4AXXWhdM5kg8fAAxRz+A==',key_name='tempest-TestNetworkBasicOps-880733138',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:15:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='98c94577fcdb4c3d893898ede79ea2d4',ramdisk_id='',reservation_id='r-9al0ysbc',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-789276745',owner_user_name='tempest-TestNetworkBasicOps-789276745-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:15:27Z,user_data=None,user_id='60291ce86b6946629a2e48f6680312cb',uuid=c8c9cb1d-4faa-4945-afcc-f67ccc4d4237,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "address": "fa:16:3e:75:67:7f", "network": {"id": "be4fe986-7e1a-41dc-a5ab-f14f84be1b07", "bridge": "br-int", "label": "tempest-network-smoke--1531756005", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23917d91-d9", "ovs_interfaceid": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.562 250273 DEBUG nova.network.os_vif_util [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converting VIF {"id": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "address": "fa:16:3e:75:67:7f", "network": {"id": "be4fe986-7e1a-41dc-a5ab-f14f84be1b07", "bridge": "br-int", "label": "tempest-network-smoke--1531756005", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23917d91-d9", "ovs_interfaceid": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.563 250273 DEBUG nova.network.os_vif_util [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:67:7f,bridge_name='br-int',has_traffic_filtering=True,id=23917d91-d9fe-4f15-98e4-bfcb56895c1f,network=Network(be4fe986-7e1a-41dc-a5ab-f14f84be1b07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23917d91-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.567 250273 DEBUG nova.virt.libvirt.guest [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:75:67:7f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap23917d91-d9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.569 250273 DEBUG nova.virt.libvirt.guest [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:75:67:7f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap23917d91-d9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.571 250273 DEBUG nova.virt.libvirt.driver [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Attempting to detach device tap23917d91-d9 from instance c8c9cb1d-4faa-4945-afcc-f67ccc4d4237 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.571 250273 DEBUG nova.virt.libvirt.guest [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] detach device xml: <interface type="ethernet">
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <mac address="fa:16:3e:75:67:7f"/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <model type="virtio"/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <mtu size="1442"/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <target dev="tap23917d91-d9"/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]: </interface>
Jan 23 05:16:11 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.577 250273 DEBUG nova.virt.libvirt.guest [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:75:67:7f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap23917d91-d9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.581 250273 DEBUG nova.virt.libvirt.guest [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:75:67:7f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap23917d91-d9"/></interface>not found in domain: <domain type='kvm' id='63'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <name>instance-0000008f</name>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <uuid>c8c9cb1d-4faa-4945-afcc-f67ccc4d4237</uuid>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <nova:name>tempest-TestNetworkBasicOps-server-167712358</nova:name>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <nova:creationTime>2026-01-23 10:16:09</nova:creationTime>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <nova:flavor name="m1.nano">
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:memory>128</nova:memory>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:disk>1</nova:disk>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:swap>0</nova:swap>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:vcpus>1</nova:vcpus>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </nova:flavor>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <nova:owner>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:user uuid="60291ce86b6946629a2e48f6680312cb">tempest-TestNetworkBasicOps-789276745-project-member</nova:user>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:project uuid="98c94577fcdb4c3d893898ede79ea2d4">tempest-TestNetworkBasicOps-789276745</nova:project>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </nova:owner>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <nova:ports>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:port uuid="e76c3794-4bfb-450d-901b-d5c2ecccb574">
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:port uuid="23917d91-d9fe-4f15-98e4-bfcb56895c1f">
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.100.0.25" ipVersion="4"/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </nova:ports>
Jan 23 05:16:11 np0005593232 nova_compute[250269]: </nova:instance>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <memory unit='KiB'>131072</memory>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <vcpu placement='static'>1</vcpu>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <resource>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <partition>/machine</partition>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </resource>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <sysinfo type='smbios'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <entry name='manufacturer'>RDO</entry>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <entry name='product'>OpenStack Compute</entry>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <entry name='serial'>c8c9cb1d-4faa-4945-afcc-f67ccc4d4237</entry>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <entry name='uuid'>c8c9cb1d-4faa-4945-afcc-f67ccc4d4237</entry>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <entry name='family'>Virtual Machine</entry>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <boot dev='hd'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <smbios mode='sysinfo'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <vmcoreinfo state='on'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <cpu mode='custom' match='exact' check='full'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <model fallback='forbid'>Nehalem</model>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <feature policy='require' name='x2apic'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <feature policy='require' name='hypervisor'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <feature policy='require' name='vme'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <clock offset='utc'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <timer name='pit' tickpolicy='delay'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <timer name='hpet' present='no'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <on_poweroff>destroy</on_poweroff>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <on_reboot>restart</on_reboot>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <on_crash>destroy</on_crash>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <disk type='network' device='disk'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <driver name='qemu' type='raw' cache='none'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <auth username='openstack'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:        <secret type='ceph' uuid='e1533653-0a5a-584c-b34b-8689f0d32e77'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <source protocol='rbd' name='vms/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237_disk' index='2'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:        <host name='192.168.122.100' port='6789'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:        <host name='192.168.122.102' port='6789'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:        <host name='192.168.122.101' port='6789'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target dev='vda' bus='virtio'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='virtio-disk0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <disk type='network' device='cdrom'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <driver name='qemu' type='raw' cache='none'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <auth username='openstack'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:        <secret type='ceph' uuid='e1533653-0a5a-584c-b34b-8689f0d32e77'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <source protocol='rbd' name='vms/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237_disk.config' index='1'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:        <host name='192.168.122.100' port='6789'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:        <host name='192.168.122.102' port='6789'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:        <host name='192.168.122.101' port='6789'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target dev='sda' bus='sata'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <readonly/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='sata0-0-0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='0' model='pcie-root'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pcie.0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='1' port='0x10'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.1'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='2' port='0x11'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.2'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='3' port='0x12'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.3'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='4' port='0x13'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.4'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='5' port='0x14'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.5'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='6' port='0x15'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.6'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='7' port='0x16'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.7'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='8' port='0x17'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.8'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='9' port='0x18'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.9'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='10' port='0x19'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.10'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='11' port='0x1a'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.11'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='12' port='0x1b'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.12'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='13' port='0x1c'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.13'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='14' port='0x1d'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.14'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='15' port='0x1e'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.15'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='16' port='0x1f'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.16'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='17' port='0x20'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.17'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='18' port='0x21'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.18'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='19' port='0x22'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.19'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='20' port='0x23'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.20'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='21' port='0x24'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.21'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='22' port='0x25'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.22'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='23' port='0x26'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.23'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='24' port='0x27'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.24'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='25' port='0x28'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.25'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-pci-bridge'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.26'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='usb'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='sata' index='0'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='ide'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <interface type='ethernet'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <mac address='fa:16:3e:15:fb:62'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target dev='tape76c3794-4b'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model type='virtio'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <driver name='vhost' rx_queue_size='512'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <mtu size='1442'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='net0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <interface type='ethernet'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <mac address='fa:16:3e:75:67:7f'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target dev='tap23917d91-d9'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model type='virtio'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <driver name='vhost' rx_queue_size='512'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <mtu size='1442'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='net1'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <serial type='pty'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <source path='/dev/pts/2'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <log file='/var/lib/nova/instances/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237/console.log' append='off'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target type='isa-serial' port='0'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:        <model name='isa-serial'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      </target>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='serial0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <console type='pty' tty='/dev/pts/2'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <source path='/dev/pts/2'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <log file='/var/lib/nova/instances/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237/console.log' append='off'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target type='serial' port='0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='serial0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </console>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <input type='tablet' bus='usb'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='input0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='usb' bus='0' port='1'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </input>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <input type='mouse' bus='ps2'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='input1'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </input>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <input type='keyboard' bus='ps2'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='input2'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </input>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <listen type='address' address='::0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </graphics>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <audio id='1' type='none'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model type='virtio' heads='1' primary='yes'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='video0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <watchdog model='itco' action='reset'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='watchdog0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </watchdog>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <memballoon model='virtio'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <stats period='10'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='balloon0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <rng model='virtio'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <backend model='random'>/dev/urandom</backend>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='rng0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <label>system_u:system_r:svirt_t:s0:c245,c658</label>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c245,c658</imagelabel>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </seclabel>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <label>+107:+107</label>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <imagelabel>+107:+107</imagelabel>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </seclabel>
Jan 23 05:16:11 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:16:11 np0005593232 nova_compute[250269]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.582 250273 INFO nova.virt.libvirt.driver [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Successfully detached device tap23917d91-d9 from instance c8c9cb1d-4faa-4945-afcc-f67ccc4d4237 from the persistent domain config.#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.583 250273 DEBUG nova.virt.libvirt.driver [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] (1/8): Attempting to detach device tap23917d91-d9 with device alias net1 from instance c8c9cb1d-4faa-4945-afcc-f67ccc4d4237 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.583 250273 DEBUG nova.virt.libvirt.guest [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] detach device xml: <interface type="ethernet">
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <mac address="fa:16:3e:75:67:7f"/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <model type="virtio"/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <mtu size="1442"/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <target dev="tap23917d91-d9"/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]: </interface>
Jan 23 05:16:11 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 05:16:11 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:16:11 np0005593232 podman[344153]: 2026-01-23 10:16:11.50754894 +0000 UTC m=+0.027084370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:16:11 np0005593232 podman[344153]: 2026-01-23 10:16:11.610040613 +0000 UTC m=+0.129576043 container init 75d3678381f8dc08c16cdcdf1867234eb91f05f24c0485840cf3072ecd5d65b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:16:11 np0005593232 podman[344153]: 2026-01-23 10:16:11.61698838 +0000 UTC m=+0.136523810 container start 75d3678381f8dc08c16cdcdf1867234eb91f05f24c0485840cf3072ecd5d65b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_shirley, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:16:11 np0005593232 podman[344153]: 2026-01-23 10:16:11.621682634 +0000 UTC m=+0.141218064 container attach 75d3678381f8dc08c16cdcdf1867234eb91f05f24c0485840cf3072ecd5d65b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_shirley, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:16:11 np0005593232 systemd[1]: libpod-75d3678381f8dc08c16cdcdf1867234eb91f05f24c0485840cf3072ecd5d65b5.scope: Deactivated successfully.
Jan 23 05:16:11 np0005593232 laughing_shirley[344168]: 167 167
Jan 23 05:16:11 np0005593232 conmon[344168]: conmon 75d3678381f8dc08c16c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-75d3678381f8dc08c16cdcdf1867234eb91f05f24c0485840cf3072ecd5d65b5.scope/container/memory.events
Jan 23 05:16:11 np0005593232 podman[344153]: 2026-01-23 10:16:11.62612361 +0000 UTC m=+0.145659020 container died 75d3678381f8dc08c16cdcdf1867234eb91f05f24c0485840cf3072ecd5d65b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_shirley, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:16:11 np0005593232 kernel: tap23917d91-d9 (unregistering): left promiscuous mode
Jan 23 05:16:11 np0005593232 NetworkManager[49057]: <info>  [1769163371.6517] device (tap23917d91-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:16:11 np0005593232 systemd[1]: var-lib-containers-storage-overlay-afc12ef29ab1aa2e4a7ffbefa39c439240e139f8150da6385a2ac38369dcabf2-merged.mount: Deactivated successfully.
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.664 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:11 np0005593232 ovn_controller[151001]: 2026-01-23T10:16:11Z|00535|binding|INFO|Releasing lport 23917d91-d9fe-4f15-98e4-bfcb56895c1f from this chassis (sb_readonly=0)
Jan 23 05:16:11 np0005593232 ovn_controller[151001]: 2026-01-23T10:16:11Z|00536|binding|INFO|Setting lport 23917d91-d9fe-4f15-98e4-bfcb56895c1f down in Southbound
Jan 23 05:16:11 np0005593232 ovn_controller[151001]: 2026-01-23T10:16:11Z|00537|binding|INFO|Removing iface tap23917d91-d9 ovn-installed in OVS
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.668 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.669 250273 DEBUG nova.virt.libvirt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Received event <DeviceRemovedEvent: 1769163371.6680593, c8c9cb1d-4faa-4945-afcc-f67ccc4d4237 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.670 250273 DEBUG nova.virt.libvirt.driver [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Start waiting for the detach event from libvirt for device tap23917d91-d9 with device alias net1 for instance c8c9cb1d-4faa-4945-afcc-f67ccc4d4237 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.670 250273 DEBUG nova.virt.libvirt.guest [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:75:67:7f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap23917d91-d9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 23 05:16:11 np0005593232 podman[344153]: 2026-01-23 10:16:11.673172837 +0000 UTC m=+0.192708247 container remove 75d3678381f8dc08c16cdcdf1867234eb91f05f24c0485840cf3072ecd5d65b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_shirley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.673 250273 DEBUG nova.virt.libvirt.guest [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:75:67:7f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap23917d91-d9"/></interface>not found in domain: <domain type='kvm' id='63'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <name>instance-0000008f</name>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <uuid>c8c9cb1d-4faa-4945-afcc-f67ccc4d4237</uuid>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <nova:name>tempest-TestNetworkBasicOps-server-167712358</nova:name>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <nova:creationTime>2026-01-23 10:16:09</nova:creationTime>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <nova:flavor name="m1.nano">
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:memory>128</nova:memory>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:disk>1</nova:disk>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:swap>0</nova:swap>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:vcpus>1</nova:vcpus>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </nova:flavor>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <nova:owner>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:user uuid="60291ce86b6946629a2e48f6680312cb">tempest-TestNetworkBasicOps-789276745-project-member</nova:user>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:project uuid="98c94577fcdb4c3d893898ede79ea2d4">tempest-TestNetworkBasicOps-789276745</nova:project>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </nova:owner>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <nova:ports>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:port uuid="e76c3794-4bfb-450d-901b-d5c2ecccb574">
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:port uuid="23917d91-d9fe-4f15-98e4-bfcb56895c1f">
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.100.0.25" ipVersion="4"/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </nova:ports>
Jan 23 05:16:11 np0005593232 nova_compute[250269]: </nova:instance>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <memory unit='KiB'>131072</memory>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <vcpu placement='static'>1</vcpu>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <resource>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <partition>/machine</partition>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </resource>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <sysinfo type='smbios'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <entry name='manufacturer'>RDO</entry>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <entry name='product'>OpenStack Compute</entry>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <entry name='serial'>c8c9cb1d-4faa-4945-afcc-f67ccc4d4237</entry>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <entry name='uuid'>c8c9cb1d-4faa-4945-afcc-f67ccc4d4237</entry>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <entry name='family'>Virtual Machine</entry>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <boot dev='hd'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <smbios mode='sysinfo'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <vmcoreinfo state='on'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <cpu mode='custom' match='exact' check='full'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <model fallback='forbid'>Nehalem</model>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <feature policy='require' name='x2apic'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <feature policy='require' name='hypervisor'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <feature policy='require' name='vme'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <clock offset='utc'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <timer name='pit' tickpolicy='delay'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <timer name='hpet' present='no'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <on_poweroff>destroy</on_poweroff>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <on_reboot>restart</on_reboot>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <on_crash>destroy</on_crash>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <disk type='network' device='disk'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <driver name='qemu' type='raw' cache='none'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <auth username='openstack'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:        <secret type='ceph' uuid='e1533653-0a5a-584c-b34b-8689f0d32e77'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <source protocol='rbd' name='vms/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237_disk' index='2'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:        <host name='192.168.122.100' port='6789'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:        <host name='192.168.122.102' port='6789'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:        <host name='192.168.122.101' port='6789'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target dev='vda' bus='virtio'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='virtio-disk0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <disk type='network' device='cdrom'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <driver name='qemu' type='raw' cache='none'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <auth username='openstack'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:        <secret type='ceph' uuid='e1533653-0a5a-584c-b34b-8689f0d32e77'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <source protocol='rbd' name='vms/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237_disk.config' index='1'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:        <host name='192.168.122.100' port='6789'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:        <host name='192.168.122.102' port='6789'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:        <host name='192.168.122.101' port='6789'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target dev='sda' bus='sata'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <readonly/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='sata0-0-0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='0' model='pcie-root'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pcie.0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='1' port='0x10'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.1'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='2' port='0x11'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.2'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='3' port='0x12'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.3'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='4' port='0x13'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.4'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='5' port='0x14'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.5'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='6' port='0x15'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.6'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='7' port='0x16'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.7'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='8' port='0x17'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.8'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='9' port='0x18'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.9'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='10' port='0x19'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.10'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='11' port='0x1a'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.11'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='12' port='0x1b'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.12'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='13' port='0x1c'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.13'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='14' port='0x1d'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.14'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='15' port='0x1e'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.15'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='16' port='0x1f'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.16'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='17' port='0x20'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.17'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='18' port='0x21'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.18'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='19' port='0x22'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.19'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='20' port='0x23'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.20'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='21' port='0x24'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.21'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='22' port='0x25'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.22'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='23' port='0x26'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.23'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='24' port='0x27'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.24'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target chassis='25' port='0x28'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.25'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model name='pcie-pci-bridge'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='pci.26'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='usb'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <controller type='sata' index='0'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='ide'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <interface type='ethernet'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <mac address='fa:16:3e:15:fb:62'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target dev='tape76c3794-4b'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model type='virtio'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <driver name='vhost' rx_queue_size='512'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <mtu size='1442'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='net0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <serial type='pty'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <source path='/dev/pts/2'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <log file='/var/lib/nova/instances/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237/console.log' append='off'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target type='isa-serial' port='0'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:        <model name='isa-serial'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      </target>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='serial0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <console type='pty' tty='/dev/pts/2'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <source path='/dev/pts/2'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <log file='/var/lib/nova/instances/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237/console.log' append='off'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <target type='serial' port='0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='serial0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </console>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <input type='tablet' bus='usb'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='input0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='usb' bus='0' port='1'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </input>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <input type='mouse' bus='ps2'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='input1'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </input>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <input type='keyboard' bus='ps2'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='input2'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </input>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <listen type='address' address='::0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </graphics>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <audio id='1' type='none'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <model type='virtio' heads='1' primary='yes'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='video0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <watchdog model='itco' action='reset'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='watchdog0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </watchdog>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <memballoon model='virtio'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <stats period='10'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='balloon0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <rng model='virtio'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <backend model='random'>/dev/urandom</backend>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <alias name='rng0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <label>system_u:system_r:svirt_t:s0:c245,c658</label>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c245,c658</imagelabel>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </seclabel>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <label>+107:+107</label>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <imagelabel>+107:+107</imagelabel>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </seclabel>
Jan 23 05:16:11 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:16:11 np0005593232 nova_compute[250269]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.674 250273 INFO nova.virt.libvirt.driver [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Successfully detached device tap23917d91-d9 from instance c8c9cb1d-4faa-4945-afcc-f67ccc4d4237 from the live domain config.#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.674 250273 DEBUG nova.virt.libvirt.vif [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:14:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-167712358',display_name='tempest-TestNetworkBasicOps-server-167712358',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-167712358',id=143,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAF35E1Zip5mB7Z7bBpTlXmea73flBCQ0OxIp3E3UHfQJ8/C+PcZ9Yn+30apyBlpqi/cSP1tnLqb2v0HvL8Yo3sFqR36G/CFrDPtbVe+ut7anU4AXXWhdM5kg8fAAxRz+A==',key_name='tempest-TestNetworkBasicOps-880733138',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:15:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='98c94577fcdb4c3d893898ede79ea2d4',ramdisk_id='',reservation_id='r-9al0ysbc',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-789276745',owner_user_name='tempest-TestNetworkBasicOps-789276745-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:15:27Z,user_data=None,user_id='60291ce86b6946629a2e48f6680312cb',uuid=c8c9cb1d-4faa-4945-afcc-f67ccc4d4237,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "address": "fa:16:3e:75:67:7f", "network": {"id": "be4fe986-7e1a-41dc-a5ab-f14f84be1b07", "bridge": "br-int", "label": "tempest-network-smoke--1531756005", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23917d91-d9", "ovs_interfaceid": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.674 250273 DEBUG nova.network.os_vif_util [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converting VIF {"id": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "address": "fa:16:3e:75:67:7f", "network": {"id": "be4fe986-7e1a-41dc-a5ab-f14f84be1b07", "bridge": "br-int", "label": "tempest-network-smoke--1531756005", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23917d91-d9", "ovs_interfaceid": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.675 250273 DEBUG nova.network.os_vif_util [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:67:7f,bridge_name='br-int',has_traffic_filtering=True,id=23917d91-d9fe-4f15-98e4-bfcb56895c1f,network=Network(be4fe986-7e1a-41dc-a5ab-f14f84be1b07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23917d91-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.675 250273 DEBUG os_vif [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:67:7f,bridge_name='br-int',has_traffic_filtering=True,id=23917d91-d9fe-4f15-98e4-bfcb56895c1f,network=Network(be4fe986-7e1a-41dc-a5ab-f14f84be1b07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23917d91-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.677 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.677 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23917d91-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.678 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.680 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:16:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:11.682 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:67:7f 10.100.0.25'], port_security=['fa:16:3e:75:67:7f 10.100.0.25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.25/28', 'neutron:device_id': 'c8c9cb1d-4faa-4945-afcc-f67ccc4d4237', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-be4fe986-7e1a-41dc-a5ab-f14f84be1b07', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98c94577fcdb4c3d893898ede79ea2d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '104c556a-4616-455b-9049-a55a5af0ff57', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=115c815a-98eb-4606-b70e-eb3b509317a0, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=23917d91-d9fe-4f15-98e4-bfcb56895c1f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:16:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:11.683 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 23917d91-d9fe-4f15-98e4-bfcb56895c1f in datapath be4fe986-7e1a-41dc-a5ab-f14f84be1b07 unbound from our chassis#033[00m
Jan 23 05:16:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:11.685 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network be4fe986-7e1a-41dc-a5ab-f14f84be1b07, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:16:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:11.686 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[000ca0e0-bf13-452e-b307-bc3e33475427]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:11.686 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-be4fe986-7e1a-41dc-a5ab-f14f84be1b07 namespace which is not needed anymore#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.691 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.694 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.695 250273 INFO os_vif [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:67:7f,bridge_name='br-int',has_traffic_filtering=True,id=23917d91-d9fe-4f15-98e4-bfcb56895c1f,network=Network(be4fe986-7e1a-41dc-a5ab-f14f84be1b07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23917d91-d9')#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.696 250273 DEBUG nova.virt.libvirt.guest [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <nova:name>tempest-TestNetworkBasicOps-server-167712358</nova:name>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <nova:creationTime>2026-01-23 10:16:11</nova:creationTime>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <nova:flavor name="m1.nano">
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:memory>128</nova:memory>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:disk>1</nova:disk>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:swap>0</nova:swap>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:vcpus>1</nova:vcpus>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </nova:flavor>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <nova:owner>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:user uuid="60291ce86b6946629a2e48f6680312cb">tempest-TestNetworkBasicOps-789276745-project-member</nova:user>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:project uuid="98c94577fcdb4c3d893898ede79ea2d4">tempest-TestNetworkBasicOps-789276745</nova:project>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </nova:owner>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  <nova:ports>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    <nova:port uuid="e76c3794-4bfb-450d-901b-d5c2ecccb574">
Jan 23 05:16:11 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 05:16:11 np0005593232 nova_compute[250269]:  </nova:ports>
Jan 23 05:16:11 np0005593232 nova_compute[250269]: </nova:instance>
Jan 23 05:16:11 np0005593232 nova_compute[250269]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 23 05:16:11 np0005593232 systemd[1]: libpod-conmon-75d3678381f8dc08c16cdcdf1867234eb91f05f24c0485840cf3072ecd5d65b5.scope: Deactivated successfully.
Jan 23 05:16:11 np0005593232 neutron-haproxy-ovnmeta-be4fe986-7e1a-41dc-a5ab-f14f84be1b07[343946]: [NOTICE]   (343975) : haproxy version is 2.8.14-c23fe91
Jan 23 05:16:11 np0005593232 neutron-haproxy-ovnmeta-be4fe986-7e1a-41dc-a5ab-f14f84be1b07[343946]: [NOTICE]   (343975) : path to executable is /usr/sbin/haproxy
Jan 23 05:16:11 np0005593232 neutron-haproxy-ovnmeta-be4fe986-7e1a-41dc-a5ab-f14f84be1b07[343946]: [WARNING]  (343975) : Exiting Master process...
Jan 23 05:16:11 np0005593232 neutron-haproxy-ovnmeta-be4fe986-7e1a-41dc-a5ab-f14f84be1b07[343946]: [ALERT]    (343975) : Current worker (343979) exited with code 143 (Terminated)
Jan 23 05:16:11 np0005593232 neutron-haproxy-ovnmeta-be4fe986-7e1a-41dc-a5ab-f14f84be1b07[343946]: [WARNING]  (343975) : All workers exited. Exiting... (0)
Jan 23 05:16:11 np0005593232 systemd[1]: libpod-db3b219ac6cba0922dafec59430f7ae1ebb2e8fc6f8989ce4169258e2f973433.scope: Deactivated successfully.
Jan 23 05:16:11 np0005593232 podman[344208]: 2026-01-23 10:16:11.829944731 +0000 UTC m=+0.054103158 container died db3b219ac6cba0922dafec59430f7ae1ebb2e8fc6f8989ce4169258e2f973433 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be4fe986-7e1a-41dc-a5ab-f14f84be1b07, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 05:16:11 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-db3b219ac6cba0922dafec59430f7ae1ebb2e8fc6f8989ce4169258e2f973433-userdata-shm.mount: Deactivated successfully.
Jan 23 05:16:11 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a83ca709e4b90fb4b8c65189d7d0444f2110098ec6b4af612e63d5aeb5341ab8-merged.mount: Deactivated successfully.
Jan 23 05:16:11 np0005593232 podman[344208]: 2026-01-23 10:16:11.865277715 +0000 UTC m=+0.089436142 container cleanup db3b219ac6cba0922dafec59430f7ae1ebb2e8fc6f8989ce4169258e2f973433 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be4fe986-7e1a-41dc-a5ab-f14f84be1b07, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 05:16:11 np0005593232 systemd[1]: libpod-conmon-db3b219ac6cba0922dafec59430f7ae1ebb2e8fc6f8989ce4169258e2f973433.scope: Deactivated successfully.
Jan 23 05:16:11 np0005593232 podman[344223]: 2026-01-23 10:16:11.881232168 +0000 UTC m=+0.065876962 container create 04c4d76edf5cf98f2831c38b2460092081346d8fe092d3afc2ae55bff49185ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_swirles, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Jan 23 05:16:11 np0005593232 systemd[1]: Started libpod-conmon-04c4d76edf5cf98f2831c38b2460092081346d8fe092d3afc2ae55bff49185ee.scope.
Jan 23 05:16:11 np0005593232 podman[344251]: 2026-01-23 10:16:11.922728408 +0000 UTC m=+0.036822458 container remove db3b219ac6cba0922dafec59430f7ae1ebb2e8fc6f8989ce4169258e2f973433 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be4fe986-7e1a-41dc-a5ab-f14f84be1b07, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 23 05:16:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:11.928 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7b10e6c4-8b9c-4ac0-85f3-da218bcb7234]: (4, ('Fri Jan 23 10:16:11 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-be4fe986-7e1a-41dc-a5ab-f14f84be1b07 (db3b219ac6cba0922dafec59430f7ae1ebb2e8fc6f8989ce4169258e2f973433)\ndb3b219ac6cba0922dafec59430f7ae1ebb2e8fc6f8989ce4169258e2f973433\nFri Jan 23 10:16:11 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-be4fe986-7e1a-41dc-a5ab-f14f84be1b07 (db3b219ac6cba0922dafec59430f7ae1ebb2e8fc6f8989ce4169258e2f973433)\ndb3b219ac6cba0922dafec59430f7ae1ebb2e8fc6f8989ce4169258e2f973433\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:11.931 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[87093038-ae64-4e54-9273-5a0e6608413a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:11.932 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe4fe986-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.934 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:11 np0005593232 kernel: tapbe4fe986-70: left promiscuous mode
Jan 23 05:16:11 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:16:11 np0005593232 podman[344223]: 2026-01-23 10:16:11.853548482 +0000 UTC m=+0.038193296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:16:11 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c3640cb6def7b05467454d93d572687703c2b0061f8ecc18cd15c1297c91fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:16:11 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c3640cb6def7b05467454d93d572687703c2b0061f8ecc18cd15c1297c91fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:16:11 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c3640cb6def7b05467454d93d572687703c2b0061f8ecc18cd15c1297c91fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:16:11 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41c3640cb6def7b05467454d93d572687703c2b0061f8ecc18cd15c1297c91fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:16:11 np0005593232 nova_compute[250269]: 2026-01-23 10:16:11.949 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:11.953 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[180b0e7d-9295-4d23-84bf-fbb37160674d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:11 np0005593232 podman[344223]: 2026-01-23 10:16:11.962190669 +0000 UTC m=+0.146835483 container init 04c4d76edf5cf98f2831c38b2460092081346d8fe092d3afc2ae55bff49185ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 05:16:11 np0005593232 podman[344223]: 2026-01-23 10:16:11.970044362 +0000 UTC m=+0.154689156 container start 04c4d76edf5cf98f2831c38b2460092081346d8fe092d3afc2ae55bff49185ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_swirles, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:16:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:11.972 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[216d2bde-fa29-4293-bdbc-07e1fd4c059b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:11 np0005593232 podman[344223]: 2026-01-23 10:16:11.973236843 +0000 UTC m=+0.157881647 container attach 04c4d76edf5cf98f2831c38b2460092081346d8fe092d3afc2ae55bff49185ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_swirles, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 05:16:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:11.973 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[592402d4-72cc-4e9d-a493-62c7cae90cbd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:11.986 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[94954199-39f2-48ac-a40c-d39fb5cf409b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 728265, 'reachable_time': 32517, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344275, 'error': None, 'target': 'ovnmeta-be4fe986-7e1a-41dc-a5ab-f14f84be1b07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:11.989 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-be4fe986-7e1a-41dc-a5ab-f14f84be1b07 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:16:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:11.989 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[95b3f906-3419-4b26-989a-2b8b9baf0a1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:12.006659) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163372006825, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 305, "num_deletes": 250, "total_data_size": 85942, "memory_usage": 92944, "flush_reason": "Manual Compaction"}
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163372008746, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 86250, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57909, "largest_seqno": 58213, "table_properties": {"data_size": 84237, "index_size": 176, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 4296, "raw_average_key_size": 15, "raw_value_size": 80279, "raw_average_value_size": 283, "num_data_blocks": 8, "num_entries": 283, "num_filter_entries": 283, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769163367, "oldest_key_time": 1769163367, "file_creation_time": 1769163372, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 2110 microseconds, and 684 cpu microseconds.
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:12.008772) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 86250 bytes OK
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:12.008784) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:12.010082) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:12.010105) EVENT_LOG_v1 {"time_micros": 1769163372010092, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:12.010137) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 83737, prev total WAL file size 83737, number of live WAL files 2.
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:12.010708) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323530' seq:72057594037927935, type:22 .. '6B7600353031' seq:0, type:0; will stop at (end)
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(84KB)], [131(9790KB)]
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163372010748, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 10111723, "oldest_snapshot_seqno": -1}
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 8173 keys, 9041815 bytes, temperature: kUnknown
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163372069707, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 9041815, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8991764, "index_size": 28498, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20485, "raw_key_size": 215395, "raw_average_key_size": 26, "raw_value_size": 8850649, "raw_average_value_size": 1082, "num_data_blocks": 1084, "num_entries": 8173, "num_filter_entries": 8173, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769163372, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:12.069994) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 9041815 bytes
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:12.071207) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.3 rd, 153.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 9.6 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(222.1) write-amplify(104.8) OK, records in: 8683, records dropped: 510 output_compression: NoCompression
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:12.071228) EVENT_LOG_v1 {"time_micros": 1769163372071218, "job": 80, "event": "compaction_finished", "compaction_time_micros": 59046, "compaction_time_cpu_micros": 21244, "output_level": 6, "num_output_files": 1, "total_output_size": 9041815, "num_input_records": 8683, "num_output_records": 8173, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163372071367, "job": 80, "event": "table_file_deletion", "file_number": 133}
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163372073488, "job": 80, "event": "table_file_deletion", "file_number": 131}
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:12.010616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:12.073547) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:12.073555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:12.073556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:12.073558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:16:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:16:12.073560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:16:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:12.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:12 np0005593232 systemd[1]: run-netns-ovnmeta\x2dbe4fe986\x2d7e1a\x2d41dc\x2da5ab\x2df14f84be1b07.mount: Deactivated successfully.
Jan 23 05:16:12 np0005593232 nova_compute[250269]: 2026-01-23 10:16:12.678 250273 DEBUG nova.compute.manager [req-ce8b3ac7-3399-4a05-a904-70c1d8049cd9 req-be103be2-69bc-4875-83dd-d762cbc5a0cd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Received event network-vif-plugged-23917d91-d9fe-4f15-98e4-bfcb56895c1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:16:12 np0005593232 nova_compute[250269]: 2026-01-23 10:16:12.679 250273 DEBUG oslo_concurrency.lockutils [req-ce8b3ac7-3399-4a05-a904-70c1d8049cd9 req-be103be2-69bc-4875-83dd-d762cbc5a0cd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:16:12 np0005593232 nova_compute[250269]: 2026-01-23 10:16:12.679 250273 DEBUG oslo_concurrency.lockutils [req-ce8b3ac7-3399-4a05-a904-70c1d8049cd9 req-be103be2-69bc-4875-83dd-d762cbc5a0cd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:16:12 np0005593232 nova_compute[250269]: 2026-01-23 10:16:12.679 250273 DEBUG oslo_concurrency.lockutils [req-ce8b3ac7-3399-4a05-a904-70c1d8049cd9 req-be103be2-69bc-4875-83dd-d762cbc5a0cd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:16:12 np0005593232 nova_compute[250269]: 2026-01-23 10:16:12.680 250273 DEBUG nova.compute.manager [req-ce8b3ac7-3399-4a05-a904-70c1d8049cd9 req-be103be2-69bc-4875-83dd-d762cbc5a0cd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] No waiting events found dispatching network-vif-plugged-23917d91-d9fe-4f15-98e4-bfcb56895c1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:16:12 np0005593232 nova_compute[250269]: 2026-01-23 10:16:12.680 250273 WARNING nova.compute.manager [req-ce8b3ac7-3399-4a05-a904-70c1d8049cd9 req-be103be2-69bc-4875-83dd-d762cbc5a0cd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Received unexpected event network-vif-plugged-23917d91-d9fe-4f15-98e4-bfcb56895c1f for instance with vm_state active and task_state None.#033[00m
Jan 23 05:16:12 np0005593232 nova_compute[250269]: 2026-01-23 10:16:12.735 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]: {
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:    "0": [
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:        {
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:            "devices": [
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:                "/dev/loop3"
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:            ],
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:            "lv_name": "ceph_lv0",
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:            "lv_size": "7511998464",
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:            "name": "ceph_lv0",
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:            "tags": {
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:                "ceph.cluster_name": "ceph",
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:                "ceph.crush_device_class": "",
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:                "ceph.encrypted": "0",
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:                "ceph.osd_id": "0",
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:                "ceph.type": "block",
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:                "ceph.vdo": "0"
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:            },
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:            "type": "block",
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:            "vg_name": "ceph_vg0"
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:        }
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]:    ]
Jan 23 05:16:12 np0005593232 hardcore_swirles[344266]: }
Jan 23 05:16:12 np0005593232 systemd[1]: libpod-04c4d76edf5cf98f2831c38b2460092081346d8fe092d3afc2ae55bff49185ee.scope: Deactivated successfully.
Jan 23 05:16:12 np0005593232 podman[344223]: 2026-01-23 10:16:12.765084643 +0000 UTC m=+0.949729457 container died 04c4d76edf5cf98f2831c38b2460092081346d8fe092d3afc2ae55bff49185ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:16:12 np0005593232 systemd[1]: var-lib-containers-storage-overlay-41c3640cb6def7b05467454d93d572687703c2b0061f8ecc18cd15c1297c91fe-merged.mount: Deactivated successfully.
Jan 23 05:16:12 np0005593232 podman[344223]: 2026-01-23 10:16:12.917065311 +0000 UTC m=+1.101710125 container remove 04c4d76edf5cf98f2831c38b2460092081346d8fe092d3afc2ae55bff49185ee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 05:16:12 np0005593232 systemd[1]: libpod-conmon-04c4d76edf5cf98f2831c38b2460092081346d8fe092d3afc2ae55bff49185ee.scope: Deactivated successfully.
Jan 23 05:16:12 np0005593232 nova_compute[250269]: 2026-01-23 10:16:12.948 250273 DEBUG nova.network.neutron [req-297dccca-c02d-4160-be86-a98f80872f4f req-b8fad4a1-0901-43c0-a832-e50ab225aff7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Updated VIF entry in instance network info cache for port 23917d91-d9fe-4f15-98e4-bfcb56895c1f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:16:12 np0005593232 nova_compute[250269]: 2026-01-23 10:16:12.948 250273 DEBUG nova.network.neutron [req-297dccca-c02d-4160-be86-a98f80872f4f req-b8fad4a1-0901-43c0-a832-e50ab225aff7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Updating instance_info_cache with network_info: [{"id": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "address": "fa:16:3e:15:fb:62", "network": {"id": "06fc7c06-b10a-4dd5-9475-e6feb221ea4a", "bridge": "br-int", "label": "tempest-network-smoke--1334269906", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76c3794-4b", "ovs_interfaceid": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "address": "fa:16:3e:75:67:7f", "network": {"id": "be4fe986-7e1a-41dc-a5ab-f14f84be1b07", "bridge": "br-int", "label": "tempest-network-smoke--1531756005", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23917d91-d9", "ovs_interfaceid": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:16:12 np0005593232 nova_compute[250269]: 2026-01-23 10:16:12.970 250273 DEBUG oslo_concurrency.lockutils [req-297dccca-c02d-4160-be86-a98f80872f4f req-b8fad4a1-0901-43c0-a832-e50ab225aff7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:16:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:13.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.205 250273 DEBUG oslo_concurrency.lockutils [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "refresh_cache-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.205 250273 DEBUG oslo_concurrency.lockutils [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquired lock "refresh_cache-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.205 250273 DEBUG nova.network.neutron [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.297 250273 DEBUG nova.compute.manager [req-4f7b2662-9b75-4767-9beb-fc88944fb68a req-88c5680e-90fa-4bfc-8d63-c39089df6e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Received event network-vif-deleted-23917d91-d9fe-4f15-98e4-bfcb56895c1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.298 250273 INFO nova.compute.manager [req-4f7b2662-9b75-4767-9beb-fc88944fb68a req-88c5680e-90fa-4bfc-8d63-c39089df6e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Neutron deleted interface 23917d91-d9fe-4f15-98e4-bfcb56895c1f; detaching it from the instance and deleting it from the info cache#033[00m
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.298 250273 DEBUG nova.network.neutron [req-4f7b2662-9b75-4767-9beb-fc88944fb68a req-88c5680e-90fa-4bfc-8d63-c39089df6e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Updating instance_info_cache with network_info: [{"id": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "address": "fa:16:3e:15:fb:62", "network": {"id": "06fc7c06-b10a-4dd5-9475-e6feb221ea4a", "bridge": "br-int", "label": "tempest-network-smoke--1334269906", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76c3794-4b", "ovs_interfaceid": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.344 250273 DEBUG nova.objects.instance [req-4f7b2662-9b75-4767-9beb-fc88944fb68a req-88c5680e-90fa-4bfc-8d63-c39089df6e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lazy-loading 'system_metadata' on Instance uuid c8c9cb1d-4faa-4945-afcc-f67ccc4d4237 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.409 250273 DEBUG nova.objects.instance [req-4f7b2662-9b75-4767-9beb-fc88944fb68a req-88c5680e-90fa-4bfc-8d63-c39089df6e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lazy-loading 'flavor' on Instance uuid c8c9cb1d-4faa-4945-afcc-f67ccc4d4237 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.433 250273 DEBUG nova.virt.libvirt.vif [req-4f7b2662-9b75-4767-9beb-fc88944fb68a req-88c5680e-90fa-4bfc-8d63-c39089df6e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:14:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-167712358',display_name='tempest-TestNetworkBasicOps-server-167712358',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-167712358',id=143,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAF35E1Zip5mB7Z7bBpTlXmea73flBCQ0OxIp3E3UHfQJ8/C+PcZ9Yn+30apyBlpqi/cSP1tnLqb2v0HvL8Yo3sFqR36G/CFrDPtbVe+ut7anU4AXXWhdM5kg8fAAxRz+A==',key_name='tempest-TestNetworkBasicOps-880733138',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:15:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='98c94577fcdb4c3d893898ede79ea2d4',ramdisk_id='',reservation_id='r-9al0ysbc',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-789276745',owner_user_name='tempest-TestNetworkBasicOps-789276745-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:15:27Z,user_data=None,user_id='60291ce86b6946629a2e48f6680312cb',uuid=c8c9cb1d-4faa-4945-afcc-f67ccc4d4237,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "address": "fa:16:3e:75:67:7f", "network": {"id": "be4fe986-7e1a-41dc-a5ab-f14f84be1b07", "bridge": "br-int", "label": "tempest-network-smoke--1531756005", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23917d91-d9", "ovs_interfaceid": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.433 250273 DEBUG nova.network.os_vif_util [req-4f7b2662-9b75-4767-9beb-fc88944fb68a req-88c5680e-90fa-4bfc-8d63-c39089df6e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Converting VIF {"id": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "address": "fa:16:3e:75:67:7f", "network": {"id": "be4fe986-7e1a-41dc-a5ab-f14f84be1b07", "bridge": "br-int", "label": "tempest-network-smoke--1531756005", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23917d91-d9", "ovs_interfaceid": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.434 250273 DEBUG nova.network.os_vif_util [req-4f7b2662-9b75-4767-9beb-fc88944fb68a req-88c5680e-90fa-4bfc-8d63-c39089df6e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:67:7f,bridge_name='br-int',has_traffic_filtering=True,id=23917d91-d9fe-4f15-98e4-bfcb56895c1f,network=Network(be4fe986-7e1a-41dc-a5ab-f14f84be1b07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23917d91-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.438 250273 DEBUG nova.virt.libvirt.guest [req-4f7b2662-9b75-4767-9beb-fc88944fb68a req-88c5680e-90fa-4bfc-8d63-c39089df6e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:75:67:7f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap23917d91-d9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.441 250273 DEBUG nova.virt.libvirt.guest [req-4f7b2662-9b75-4767-9beb-fc88944fb68a req-88c5680e-90fa-4bfc-8d63-c39089df6e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:75:67:7f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap23917d91-d9"/></interface>not found in domain: <domain type='kvm' id='63'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <name>instance-0000008f</name>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <uuid>c8c9cb1d-4faa-4945-afcc-f67ccc4d4237</uuid>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <nova:name>tempest-TestNetworkBasicOps-server-167712358</nova:name>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <nova:creationTime>2026-01-23 10:16:11</nova:creationTime>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <nova:flavor name="m1.nano">
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:memory>128</nova:memory>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:disk>1</nova:disk>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:swap>0</nova:swap>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:vcpus>1</nova:vcpus>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </nova:flavor>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <nova:owner>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:user uuid="60291ce86b6946629a2e48f6680312cb">tempest-TestNetworkBasicOps-789276745-project-member</nova:user>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:project uuid="98c94577fcdb4c3d893898ede79ea2d4">tempest-TestNetworkBasicOps-789276745</nova:project>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </nova:owner>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <nova:ports>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:port uuid="e76c3794-4bfb-450d-901b-d5c2ecccb574">
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </nova:ports>
Jan 23 05:16:13 np0005593232 nova_compute[250269]: </nova:instance>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <memory unit='KiB'>131072</memory>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <vcpu placement='static'>1</vcpu>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <resource>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <partition>/machine</partition>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </resource>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <sysinfo type='smbios'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <entry name='manufacturer'>RDO</entry>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <entry name='product'>OpenStack Compute</entry>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <entry name='serial'>c8c9cb1d-4faa-4945-afcc-f67ccc4d4237</entry>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <entry name='uuid'>c8c9cb1d-4faa-4945-afcc-f67ccc4d4237</entry>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <entry name='family'>Virtual Machine</entry>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <boot dev='hd'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <smbios mode='sysinfo'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <vmcoreinfo state='on'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <cpu mode='custom' match='exact' check='full'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <model fallback='forbid'>Nehalem</model>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <feature policy='require' name='x2apic'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <feature policy='require' name='hypervisor'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <feature policy='require' name='vme'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <clock offset='utc'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <timer name='pit' tickpolicy='delay'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <timer name='hpet' present='no'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <on_poweroff>destroy</on_poweroff>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <on_reboot>restart</on_reboot>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <on_crash>destroy</on_crash>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <disk type='network' device='disk'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <driver name='qemu' type='raw' cache='none'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <auth username='openstack'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:        <secret type='ceph' uuid='e1533653-0a5a-584c-b34b-8689f0d32e77'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <source protocol='rbd' name='vms/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237_disk' index='2'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:        <host name='192.168.122.100' port='6789'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:        <host name='192.168.122.102' port='6789'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:        <host name='192.168.122.101' port='6789'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target dev='vda' bus='virtio'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='virtio-disk0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <disk type='network' device='cdrom'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <driver name='qemu' type='raw' cache='none'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <auth username='openstack'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:        <secret type='ceph' uuid='e1533653-0a5a-584c-b34b-8689f0d32e77'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <source protocol='rbd' name='vms/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237_disk.config' index='1'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:        <host name='192.168.122.100' port='6789'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:        <host name='192.168.122.102' port='6789'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:        <host name='192.168.122.101' port='6789'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target dev='sda' bus='sata'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <readonly/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='sata0-0-0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='0' model='pcie-root'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pcie.0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='1' port='0x10'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.1'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='2' port='0x11'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.2'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='3' port='0x12'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.3'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='4' port='0x13'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.4'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='5' port='0x14'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.5'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='6' port='0x15'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.6'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='7' port='0x16'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.7'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='8' port='0x17'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.8'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='9' port='0x18'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.9'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='10' port='0x19'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.10'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='11' port='0x1a'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.11'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='12' port='0x1b'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.12'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='13' port='0x1c'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.13'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='14' port='0x1d'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.14'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='15' port='0x1e'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.15'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='16' port='0x1f'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.16'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='17' port='0x20'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.17'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='18' port='0x21'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.18'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='19' port='0x22'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.19'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='20' port='0x23'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.20'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='21' port='0x24'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.21'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='22' port='0x25'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.22'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='23' port='0x26'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.23'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='24' port='0x27'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.24'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='25' port='0x28'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.25'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-pci-bridge'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.26'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='usb'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='sata' index='0'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='ide'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <interface type='ethernet'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <mac address='fa:16:3e:15:fb:62'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target dev='tape76c3794-4b'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model type='virtio'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <driver name='vhost' rx_queue_size='512'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <mtu size='1442'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='net0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <serial type='pty'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <source path='/dev/pts/2'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <log file='/var/lib/nova/instances/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237/console.log' append='off'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target type='isa-serial' port='0'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:        <model name='isa-serial'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      </target>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='serial0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <console type='pty' tty='/dev/pts/2'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <source path='/dev/pts/2'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <log file='/var/lib/nova/instances/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237/console.log' append='off'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target type='serial' port='0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='serial0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </console>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <input type='tablet' bus='usb'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='input0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='usb' bus='0' port='1'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </input>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <input type='mouse' bus='ps2'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='input1'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </input>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <input type='keyboard' bus='ps2'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='input2'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </input>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <listen type='address' address='::0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </graphics>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <audio id='1' type='none'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model type='virtio' heads='1' primary='yes'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='video0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <watchdog model='itco' action='reset'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='watchdog0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </watchdog>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <memballoon model='virtio'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <stats period='10'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='balloon0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <rng model='virtio'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <backend model='random'>/dev/urandom</backend>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='rng0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <label>system_u:system_r:svirt_t:s0:c245,c658</label>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c245,c658</imagelabel>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </seclabel>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <label>+107:+107</label>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <imagelabel>+107:+107</imagelabel>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </seclabel>
Jan 23 05:16:13 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:16:13 np0005593232 nova_compute[250269]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.441 250273 DEBUG nova.virt.libvirt.guest [req-4f7b2662-9b75-4767-9beb-fc88944fb68a req-88c5680e-90fa-4bfc-8d63-c39089df6e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:75:67:7f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap23917d91-d9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.446 250273 DEBUG nova.virt.libvirt.guest [req-4f7b2662-9b75-4767-9beb-fc88944fb68a req-88c5680e-90fa-4bfc-8d63-c39089df6e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:75:67:7f"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap23917d91-d9"/></interface>not found in domain: <domain type='kvm' id='63'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <name>instance-0000008f</name>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <uuid>c8c9cb1d-4faa-4945-afcc-f67ccc4d4237</uuid>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <nova:name>tempest-TestNetworkBasicOps-server-167712358</nova:name>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <nova:creationTime>2026-01-23 10:16:11</nova:creationTime>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <nova:flavor name="m1.nano">
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:memory>128</nova:memory>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:disk>1</nova:disk>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:swap>0</nova:swap>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:vcpus>1</nova:vcpus>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </nova:flavor>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <nova:owner>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:user uuid="60291ce86b6946629a2e48f6680312cb">tempest-TestNetworkBasicOps-789276745-project-member</nova:user>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:project uuid="98c94577fcdb4c3d893898ede79ea2d4">tempest-TestNetworkBasicOps-789276745</nova:project>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </nova:owner>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <nova:ports>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:port uuid="e76c3794-4bfb-450d-901b-d5c2ecccb574">
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </nova:ports>
Jan 23 05:16:13 np0005593232 nova_compute[250269]: </nova:instance>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <memory unit='KiB'>131072</memory>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <currentMemory unit='KiB'>131072</currentMemory>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <vcpu placement='static'>1</vcpu>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <resource>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <partition>/machine</partition>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </resource>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <sysinfo type='smbios'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <entry name='manufacturer'>RDO</entry>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <entry name='product'>OpenStack Compute</entry>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <entry name='serial'>c8c9cb1d-4faa-4945-afcc-f67ccc4d4237</entry>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <entry name='uuid'>c8c9cb1d-4faa-4945-afcc-f67ccc4d4237</entry>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <entry name='family'>Virtual Machine</entry>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <boot dev='hd'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <smbios mode='sysinfo'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <vmcoreinfo state='on'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <cpu mode='custom' match='exact' check='full'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <model fallback='forbid'>Nehalem</model>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <feature policy='require' name='x2apic'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <feature policy='require' name='hypervisor'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <feature policy='require' name='vme'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <clock offset='utc'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <timer name='pit' tickpolicy='delay'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <timer name='rtc' tickpolicy='catchup'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <timer name='hpet' present='no'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <on_poweroff>destroy</on_poweroff>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <on_reboot>restart</on_reboot>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <on_crash>destroy</on_crash>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <disk type='network' device='disk'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <driver name='qemu' type='raw' cache='none'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <auth username='openstack'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:        <secret type='ceph' uuid='e1533653-0a5a-584c-b34b-8689f0d32e77'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <source protocol='rbd' name='vms/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237_disk' index='2'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:        <host name='192.168.122.100' port='6789'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:        <host name='192.168.122.102' port='6789'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:        <host name='192.168.122.101' port='6789'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target dev='vda' bus='virtio'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='virtio-disk0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <disk type='network' device='cdrom'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <driver name='qemu' type='raw' cache='none'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <auth username='openstack'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:        <secret type='ceph' uuid='e1533653-0a5a-584c-b34b-8689f0d32e77'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <source protocol='rbd' name='vms/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237_disk.config' index='1'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:        <host name='192.168.122.100' port='6789'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:        <host name='192.168.122.102' port='6789'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:        <host name='192.168.122.101' port='6789'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target dev='sda' bus='sata'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <readonly/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='sata0-0-0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='0' model='pcie-root'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pcie.0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='1' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='1' port='0x10'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.1'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='2' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='2' port='0x11'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.2'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='3' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='3' port='0x12'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.3'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='4' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='4' port='0x13'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.4'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='5' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='5' port='0x14'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.5'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='6' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='6' port='0x15'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.6'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='7' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='7' port='0x16'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.7'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='8' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='8' port='0x17'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.8'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='9' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='9' port='0x18'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.9'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='10' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='10' port='0x19'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.10'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='11' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='11' port='0x1a'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.11'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='12' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='12' port='0x1b'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.12'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='13' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='13' port='0x1c'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.13'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='14' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='14' port='0x1d'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.14'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='15' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='15' port='0x1e'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.15'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='16' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='16' port='0x1f'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.16'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='17' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='17' port='0x20'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.17'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='18' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='18' port='0x21'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.18'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='19' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='19' port='0x22'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.19'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='20' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='20' port='0x23'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.20'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='21' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='21' port='0x24'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.21'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='22' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='22' port='0x25'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.22'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='23' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='23' port='0x26'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.23'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='24' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='24' port='0x27'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.24'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='25' model='pcie-root-port'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-root-port'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target chassis='25' port='0x28'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.25'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model name='pcie-pci-bridge'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='pci.26'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='usb' index='0' model='piix3-uhci'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='usb'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <controller type='sata' index='0'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='ide'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </controller>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <interface type='ethernet'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <mac address='fa:16:3e:15:fb:62'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target dev='tape76c3794-4b'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model type='virtio'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <driver name='vhost' rx_queue_size='512'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <mtu size='1442'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='net0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <serial type='pty'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <source path='/dev/pts/2'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <log file='/var/lib/nova/instances/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237/console.log' append='off'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target type='isa-serial' port='0'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:        <model name='isa-serial'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      </target>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='serial0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <console type='pty' tty='/dev/pts/2'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <source path='/dev/pts/2'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <log file='/var/lib/nova/instances/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237/console.log' append='off'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <target type='serial' port='0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='serial0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </console>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <input type='tablet' bus='usb'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='input0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='usb' bus='0' port='1'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </input>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <input type='mouse' bus='ps2'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='input1'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </input>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <input type='keyboard' bus='ps2'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='input2'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </input>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <listen type='address' address='::0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </graphics>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <audio id='1' type='none'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <model type='virtio' heads='1' primary='yes'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='video0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <watchdog model='itco' action='reset'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='watchdog0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </watchdog>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <memballoon model='virtio'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <stats period='10'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='balloon0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <rng model='virtio'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <backend model='random'>/dev/urandom</backend>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <alias name='rng0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <label>system_u:system_r:svirt_t:s0:c245,c658</label>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c245,c658</imagelabel>
Jan 23 05:16:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2598: 321 pgs: 321 active+clean; 645 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 98 KiB/s rd, 24 KiB/s wr, 22 op/s
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </seclabel>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <label>+107:+107</label>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <imagelabel>+107:+107</imagelabel>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </seclabel>
Jan 23 05:16:13 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:16:13 np0005593232 nova_compute[250269]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.447 250273 WARNING nova.virt.libvirt.driver [req-4f7b2662-9b75-4767-9beb-fc88944fb68a req-88c5680e-90fa-4bfc-8d63-c39089df6e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Detaching interface fa:16:3e:75:67:7f failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap23917d91-d9' not found.#033[00m
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.448 250273 DEBUG nova.virt.libvirt.vif [req-4f7b2662-9b75-4767-9beb-fc88944fb68a req-88c5680e-90fa-4bfc-8d63-c39089df6e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:14:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-167712358',display_name='tempest-TestNetworkBasicOps-server-167712358',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-167712358',id=143,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAF35E1Zip5mB7Z7bBpTlXmea73flBCQ0OxIp3E3UHfQJ8/C+PcZ9Yn+30apyBlpqi/cSP1tnLqb2v0HvL8Yo3sFqR36G/CFrDPtbVe+ut7anU4AXXWhdM5kg8fAAxRz+A==',key_name='tempest-TestNetworkBasicOps-880733138',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:15:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='98c94577fcdb4c3d893898ede79ea2d4',ramdisk_id='',reservation_id='r-9al0ysbc',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-789276745',owner_user_name='tempest-TestNetworkBasicOps-789276745-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:15:27Z,user_data=None,user_id='60291ce86b6946629a2e48f6680312cb',uuid=c8c9cb1d-4faa-4945-afcc-f67ccc4d4237,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "address": "fa:16:3e:75:67:7f", "network": {"id": "be4fe986-7e1a-41dc-a5ab-f14f84be1b07", "bridge": "br-int", "label": "tempest-network-smoke--1531756005", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23917d91-d9", "ovs_interfaceid": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.448 250273 DEBUG nova.network.os_vif_util [req-4f7b2662-9b75-4767-9beb-fc88944fb68a req-88c5680e-90fa-4bfc-8d63-c39089df6e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Converting VIF {"id": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "address": "fa:16:3e:75:67:7f", "network": {"id": "be4fe986-7e1a-41dc-a5ab-f14f84be1b07", "bridge": "br-int", "label": "tempest-network-smoke--1531756005", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23917d91-d9", "ovs_interfaceid": "23917d91-d9fe-4f15-98e4-bfcb56895c1f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.449 250273 DEBUG nova.network.os_vif_util [req-4f7b2662-9b75-4767-9beb-fc88944fb68a req-88c5680e-90fa-4bfc-8d63-c39089df6e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:67:7f,bridge_name='br-int',has_traffic_filtering=True,id=23917d91-d9fe-4f15-98e4-bfcb56895c1f,network=Network(be4fe986-7e1a-41dc-a5ab-f14f84be1b07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23917d91-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.450 250273 DEBUG os_vif [req-4f7b2662-9b75-4767-9beb-fc88944fb68a req-88c5680e-90fa-4bfc-8d63-c39089df6e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:67:7f,bridge_name='br-int',has_traffic_filtering=True,id=23917d91-d9fe-4f15-98e4-bfcb56895c1f,network=Network(be4fe986-7e1a-41dc-a5ab-f14f84be1b07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23917d91-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.452 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.453 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23917d91-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.453 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.456 250273 INFO os_vif [req-4f7b2662-9b75-4767-9beb-fc88944fb68a req-88c5680e-90fa-4bfc-8d63-c39089df6e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:67:7f,bridge_name='br-int',has_traffic_filtering=True,id=23917d91-d9fe-4f15-98e4-bfcb56895c1f,network=Network(be4fe986-7e1a-41dc-a5ab-f14f84be1b07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23917d91-d9')#033[00m
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.457 250273 DEBUG nova.virt.libvirt.guest [req-4f7b2662-9b75-4767-9beb-fc88944fb68a req-88c5680e-90fa-4bfc-8d63-c39089df6e65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <nova:name>tempest-TestNetworkBasicOps-server-167712358</nova:name>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <nova:creationTime>2026-01-23 10:16:13</nova:creationTime>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <nova:flavor name="m1.nano">
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:memory>128</nova:memory>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:disk>1</nova:disk>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:swap>0</nova:swap>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:vcpus>1</nova:vcpus>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </nova:flavor>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <nova:owner>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:user uuid="60291ce86b6946629a2e48f6680312cb">tempest-TestNetworkBasicOps-789276745-project-member</nova:user>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:project uuid="98c94577fcdb4c3d893898ede79ea2d4">tempest-TestNetworkBasicOps-789276745</nova:project>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </nova:owner>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  <nova:ports>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    <nova:port uuid="e76c3794-4bfb-450d-901b-d5c2ecccb574">
Jan 23 05:16:13 np0005593232 nova_compute[250269]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:    </nova:port>
Jan 23 05:16:13 np0005593232 nova_compute[250269]:  </nova:ports>
Jan 23 05:16:13 np0005593232 nova_compute[250269]: </nova:instance>
Jan 23 05:16:13 np0005593232 nova_compute[250269]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Jan 23 05:16:13 np0005593232 podman[344431]: 2026-01-23 10:16:13.553734682 +0000 UTC m=+0.042696434 container create 4a503e1aa04fa1542bc6669282a7d4b00afbc37fb7c652641b1d4eca8221b8df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:16:13 np0005593232 nova_compute[250269]: 2026-01-23 10:16:13.606 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:13 np0005593232 systemd[1]: Started libpod-conmon-4a503e1aa04fa1542bc6669282a7d4b00afbc37fb7c652641b1d4eca8221b8df.scope.
Jan 23 05:16:13 np0005593232 podman[344431]: 2026-01-23 10:16:13.534149246 +0000 UTC m=+0.023111018 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:16:13 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:16:13 np0005593232 podman[344431]: 2026-01-23 10:16:13.663946324 +0000 UTC m=+0.152908086 container init 4a503e1aa04fa1542bc6669282a7d4b00afbc37fb7c652641b1d4eca8221b8df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 05:16:13 np0005593232 podman[344431]: 2026-01-23 10:16:13.671842238 +0000 UTC m=+0.160803980 container start 4a503e1aa04fa1542bc6669282a7d4b00afbc37fb7c652641b1d4eca8221b8df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jepsen, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 05:16:13 np0005593232 podman[344431]: 2026-01-23 10:16:13.67472802 +0000 UTC m=+0.163689772 container attach 4a503e1aa04fa1542bc6669282a7d4b00afbc37fb7c652641b1d4eca8221b8df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jepsen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 05:16:13 np0005593232 kind_jepsen[344447]: 167 167
Jan 23 05:16:13 np0005593232 systemd[1]: libpod-4a503e1aa04fa1542bc6669282a7d4b00afbc37fb7c652641b1d4eca8221b8df.scope: Deactivated successfully.
Jan 23 05:16:13 np0005593232 conmon[344447]: conmon 4a503e1aa04fa1542bc6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4a503e1aa04fa1542bc6669282a7d4b00afbc37fb7c652641b1d4eca8221b8df.scope/container/memory.events
Jan 23 05:16:13 np0005593232 podman[344431]: 2026-01-23 10:16:13.678664672 +0000 UTC m=+0.167626414 container died 4a503e1aa04fa1542bc6669282a7d4b00afbc37fb7c652641b1d4eca8221b8df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jepsen, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 05:16:13 np0005593232 systemd[1]: var-lib-containers-storage-overlay-060a776c00dba8a5f1abbd5caf5725e4c7ea83e4e92a7050e3c1495a7cb6fc1d-merged.mount: Deactivated successfully.
Jan 23 05:16:13 np0005593232 podman[344431]: 2026-01-23 10:16:13.71027688 +0000 UTC m=+0.199238622 container remove 4a503e1aa04fa1542bc6669282a7d4b00afbc37fb7c652641b1d4eca8221b8df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:16:13 np0005593232 systemd[1]: libpod-conmon-4a503e1aa04fa1542bc6669282a7d4b00afbc37fb7c652641b1d4eca8221b8df.scope: Deactivated successfully.
Jan 23 05:16:13 np0005593232 podman[344470]: 2026-01-23 10:16:13.909366197 +0000 UTC m=+0.044743552 container create 8998fcfb878dd1390af3b7147e6d8d2c7052df5e16cefef4ca8deaa21b3d08ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shaw, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 05:16:13 np0005593232 systemd[1]: Started libpod-conmon-8998fcfb878dd1390af3b7147e6d8d2c7052df5e16cefef4ca8deaa21b3d08ad.scope.
Jan 23 05:16:13 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:16:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff5569487bb7a103b330961129e49e57799ef0d623ccea69e0a6a19fa9c26c4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:16:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff5569487bb7a103b330961129e49e57799ef0d623ccea69e0a6a19fa9c26c4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:16:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff5569487bb7a103b330961129e49e57799ef0d623ccea69e0a6a19fa9c26c4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:16:13 np0005593232 podman[344470]: 2026-01-23 10:16:13.892499988 +0000 UTC m=+0.027877373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:16:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff5569487bb7a103b330961129e49e57799ef0d623ccea69e0a6a19fa9c26c4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:16:13 np0005593232 podman[344470]: 2026-01-23 10:16:13.998003125 +0000 UTC m=+0.133380510 container init 8998fcfb878dd1390af3b7147e6d8d2c7052df5e16cefef4ca8deaa21b3d08ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:16:14 np0005593232 podman[344470]: 2026-01-23 10:16:14.005417236 +0000 UTC m=+0.140794601 container start 8998fcfb878dd1390af3b7147e6d8d2c7052df5e16cefef4ca8deaa21b3d08ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shaw, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 23 05:16:14 np0005593232 podman[344470]: 2026-01-23 10:16:14.008208595 +0000 UTC m=+0.143585980 container attach 8998fcfb878dd1390af3b7147e6d8d2c7052df5e16cefef4ca8deaa21b3d08ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shaw, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 05:16:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:16:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/828127319' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:16:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:14.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:14 np0005593232 quizzical_shaw[344488]: {
Jan 23 05:16:14 np0005593232 quizzical_shaw[344488]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:16:14 np0005593232 quizzical_shaw[344488]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:16:14 np0005593232 quizzical_shaw[344488]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:16:14 np0005593232 quizzical_shaw[344488]:        "osd_id": 0,
Jan 23 05:16:14 np0005593232 quizzical_shaw[344488]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:16:14 np0005593232 quizzical_shaw[344488]:        "type": "bluestore"
Jan 23 05:16:14 np0005593232 quizzical_shaw[344488]:    }
Jan 23 05:16:14 np0005593232 quizzical_shaw[344488]: }
Jan 23 05:16:14 np0005593232 nova_compute[250269]: 2026-01-23 10:16:14.929 250273 DEBUG nova.compute.manager [req-9fd21baa-9009-4e76-86d3-5e04f0be3bd1 req-fbcf0448-b07d-4eee-a2a2-8bd0af78f3a3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Received event network-vif-unplugged-23917d91-d9fe-4f15-98e4-bfcb56895c1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:16:14 np0005593232 nova_compute[250269]: 2026-01-23 10:16:14.931 250273 DEBUG oslo_concurrency.lockutils [req-9fd21baa-9009-4e76-86d3-5e04f0be3bd1 req-fbcf0448-b07d-4eee-a2a2-8bd0af78f3a3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:16:14 np0005593232 nova_compute[250269]: 2026-01-23 10:16:14.931 250273 DEBUG oslo_concurrency.lockutils [req-9fd21baa-9009-4e76-86d3-5e04f0be3bd1 req-fbcf0448-b07d-4eee-a2a2-8bd0af78f3a3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:16:14 np0005593232 nova_compute[250269]: 2026-01-23 10:16:14.931 250273 DEBUG oslo_concurrency.lockutils [req-9fd21baa-9009-4e76-86d3-5e04f0be3bd1 req-fbcf0448-b07d-4eee-a2a2-8bd0af78f3a3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:16:14 np0005593232 nova_compute[250269]: 2026-01-23 10:16:14.932 250273 DEBUG nova.compute.manager [req-9fd21baa-9009-4e76-86d3-5e04f0be3bd1 req-fbcf0448-b07d-4eee-a2a2-8bd0af78f3a3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] No waiting events found dispatching network-vif-unplugged-23917d91-d9fe-4f15-98e4-bfcb56895c1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:16:14 np0005593232 nova_compute[250269]: 2026-01-23 10:16:14.932 250273 WARNING nova.compute.manager [req-9fd21baa-9009-4e76-86d3-5e04f0be3bd1 req-fbcf0448-b07d-4eee-a2a2-8bd0af78f3a3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Received unexpected event network-vif-unplugged-23917d91-d9fe-4f15-98e4-bfcb56895c1f for instance with vm_state active and task_state None.#033[00m
Jan 23 05:16:14 np0005593232 nova_compute[250269]: 2026-01-23 10:16:14.932 250273 DEBUG nova.compute.manager [req-9fd21baa-9009-4e76-86d3-5e04f0be3bd1 req-fbcf0448-b07d-4eee-a2a2-8bd0af78f3a3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Received event network-vif-plugged-23917d91-d9fe-4f15-98e4-bfcb56895c1f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:16:14 np0005593232 nova_compute[250269]: 2026-01-23 10:16:14.932 250273 DEBUG oslo_concurrency.lockutils [req-9fd21baa-9009-4e76-86d3-5e04f0be3bd1 req-fbcf0448-b07d-4eee-a2a2-8bd0af78f3a3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:16:14 np0005593232 nova_compute[250269]: 2026-01-23 10:16:14.932 250273 DEBUG oslo_concurrency.lockutils [req-9fd21baa-9009-4e76-86d3-5e04f0be3bd1 req-fbcf0448-b07d-4eee-a2a2-8bd0af78f3a3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:16:14 np0005593232 nova_compute[250269]: 2026-01-23 10:16:14.933 250273 DEBUG oslo_concurrency.lockutils [req-9fd21baa-9009-4e76-86d3-5e04f0be3bd1 req-fbcf0448-b07d-4eee-a2a2-8bd0af78f3a3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:16:14 np0005593232 nova_compute[250269]: 2026-01-23 10:16:14.933 250273 DEBUG nova.compute.manager [req-9fd21baa-9009-4e76-86d3-5e04f0be3bd1 req-fbcf0448-b07d-4eee-a2a2-8bd0af78f3a3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] No waiting events found dispatching network-vif-plugged-23917d91-d9fe-4f15-98e4-bfcb56895c1f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:16:14 np0005593232 nova_compute[250269]: 2026-01-23 10:16:14.933 250273 WARNING nova.compute.manager [req-9fd21baa-9009-4e76-86d3-5e04f0be3bd1 req-fbcf0448-b07d-4eee-a2a2-8bd0af78f3a3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Received unexpected event network-vif-plugged-23917d91-d9fe-4f15-98e4-bfcb56895c1f for instance with vm_state active and task_state None.#033[00m
Jan 23 05:16:14 np0005593232 systemd[1]: libpod-8998fcfb878dd1390af3b7147e6d8d2c7052df5e16cefef4ca8deaa21b3d08ad.scope: Deactivated successfully.
Jan 23 05:16:14 np0005593232 podman[344470]: 2026-01-23 10:16:14.956785608 +0000 UTC m=+1.092162993 container died 8998fcfb878dd1390af3b7147e6d8d2c7052df5e16cefef4ca8deaa21b3d08ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shaw, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 23 05:16:14 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ff5569487bb7a103b330961129e49e57799ef0d623ccea69e0a6a19fa9c26c4e-merged.mount: Deactivated successfully.
Jan 23 05:16:15 np0005593232 podman[344470]: 2026-01-23 10:16:15.01032263 +0000 UTC m=+1.145699995 container remove 8998fcfb878dd1390af3b7147e6d8d2c7052df5e16cefef4ca8deaa21b3d08ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_shaw, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:16:15 np0005593232 systemd[1]: libpod-conmon-8998fcfb878dd1390af3b7147e6d8d2c7052df5e16cefef4ca8deaa21b3d08ad.scope: Deactivated successfully.
Jan 23 05:16:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:16:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:16:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:16:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:16:15 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 9bba4baa-62eb-4645-8513-22a52a6e9af1 does not exist
Jan 23 05:16:15 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7a4c9868-b889-44e9-aaab-fe5ce225f448 does not exist
Jan 23 05:16:15 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 140fb9be-0715-4deb-b870-ac35e0f9c74f does not exist
Jan 23 05:16:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:16:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:15.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:16:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2599: 321 pgs: 321 active+clean; 588 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 27 KiB/s wr, 34 op/s
Jan 23 05:16:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:16:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:16:16 np0005593232 ovn_controller[151001]: 2026-01-23T10:16:16Z|00538|binding|INFO|Releasing lport 1788b5e6-601b-4e3d-a584-c0138c3308f6 from this chassis (sb_readonly=0)
Jan 23 05:16:16 np0005593232 ovn_controller[151001]: 2026-01-23T10:16:16Z|00539|binding|INFO|Releasing lport e77f48c3-bdcc-4620-bc5b-550b6d3814da from this chassis (sb_readonly=0)
Jan 23 05:16:16 np0005593232 ovn_controller[151001]: 2026-01-23T10:16:16Z|00540|binding|INFO|Releasing lport 2c16e447-27d9-4516-bf23-ec948f375c10 from this chassis (sb_readonly=0)
Jan 23 05:16:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:16:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:16.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:16:16 np0005593232 nova_compute[250269]: 2026-01-23 10:16:16.466 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:16 np0005593232 nova_compute[250269]: 2026-01-23 10:16:16.679 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:16:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:17.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2600: 321 pgs: 321 active+clean; 588 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 23 KiB/s wr, 37 op/s
Jan 23 05:16:17 np0005593232 nova_compute[250269]: 2026-01-23 10:16:17.609 250273 INFO nova.network.neutron [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Port 23917d91-d9fe-4f15-98e4-bfcb56895c1f from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Jan 23 05:16:17 np0005593232 nova_compute[250269]: 2026-01-23 10:16:17.609 250273 DEBUG nova.network.neutron [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Updating instance_info_cache with network_info: [{"id": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "address": "fa:16:3e:15:fb:62", "network": {"id": "06fc7c06-b10a-4dd5-9475-e6feb221ea4a", "bridge": "br-int", "label": "tempest-network-smoke--1334269906", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76c3794-4b", "ovs_interfaceid": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:16:17 np0005593232 nova_compute[250269]: 2026-01-23 10:16:17.711 250273 DEBUG oslo_concurrency.lockutils [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Releasing lock "refresh_cache-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:16:17 np0005593232 nova_compute[250269]: 2026-01-23 10:16:17.759 250273 DEBUG oslo_concurrency.lockutils [None req-b41cf450-ccd6-4072-9028-f090cabc8694 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "interface-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-23917d91-d9fe-4f15-98e4-bfcb56895c1f" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 6.266s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:16:17 np0005593232 nova_compute[250269]: 2026-01-23 10:16:17.781 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:18.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:18 np0005593232 nova_compute[250269]: 2026-01-23 10:16:18.633 250273 DEBUG nova.compute.manager [req-75a9a58a-9398-4913-a6af-bac4c2025878 req-c39630a4-b76c-4f74-a9c8-1aa765cf533a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Received event network-changed-e76c3794-4bfb-450d-901b-d5c2ecccb574 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:16:18 np0005593232 nova_compute[250269]: 2026-01-23 10:16:18.634 250273 DEBUG nova.compute.manager [req-75a9a58a-9398-4913-a6af-bac4c2025878 req-c39630a4-b76c-4f74-a9c8-1aa765cf533a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Refreshing instance network info cache due to event network-changed-e76c3794-4bfb-450d-901b-d5c2ecccb574. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:16:18 np0005593232 nova_compute[250269]: 2026-01-23 10:16:18.634 250273 DEBUG oslo_concurrency.lockutils [req-75a9a58a-9398-4913-a6af-bac4c2025878 req-c39630a4-b76c-4f74-a9c8-1aa765cf533a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:16:18 np0005593232 nova_compute[250269]: 2026-01-23 10:16:18.634 250273 DEBUG oslo_concurrency.lockutils [req-75a9a58a-9398-4913-a6af-bac4c2025878 req-c39630a4-b76c-4f74-a9c8-1aa765cf533a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:16:18 np0005593232 nova_compute[250269]: 2026-01-23 10:16:18.634 250273 DEBUG nova.network.neutron [req-75a9a58a-9398-4913-a6af-bac4c2025878 req-c39630a4-b76c-4f74-a9c8-1aa765cf533a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Refreshing network info cache for port e76c3794-4bfb-450d-901b-d5c2ecccb574 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:16:18 np0005593232 nova_compute[250269]: 2026-01-23 10:16:18.839 250273 DEBUG oslo_concurrency.lockutils [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:16:18 np0005593232 nova_compute[250269]: 2026-01-23 10:16:18.840 250273 DEBUG oslo_concurrency.lockutils [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:16:18 np0005593232 nova_compute[250269]: 2026-01-23 10:16:18.840 250273 DEBUG oslo_concurrency.lockutils [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:16:18 np0005593232 nova_compute[250269]: 2026-01-23 10:16:18.840 250273 DEBUG oslo_concurrency.lockutils [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:16:18 np0005593232 nova_compute[250269]: 2026-01-23 10:16:18.840 250273 DEBUG oslo_concurrency.lockutils [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:16:18 np0005593232 nova_compute[250269]: 2026-01-23 10:16:18.842 250273 INFO nova.compute.manager [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Terminating instance#033[00m
Jan 23 05:16:18 np0005593232 nova_compute[250269]: 2026-01-23 10:16:18.843 250273 DEBUG nova.compute.manager [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:16:18 np0005593232 kernel: tape76c3794-4b (unregistering): left promiscuous mode
Jan 23 05:16:18 np0005593232 NetworkManager[49057]: <info>  [1769163378.9287] device (tape76c3794-4b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:16:18 np0005593232 ovn_controller[151001]: 2026-01-23T10:16:18Z|00541|binding|INFO|Releasing lport e76c3794-4bfb-450d-901b-d5c2ecccb574 from this chassis (sb_readonly=0)
Jan 23 05:16:18 np0005593232 ovn_controller[151001]: 2026-01-23T10:16:18Z|00542|binding|INFO|Setting lport e76c3794-4bfb-450d-901b-d5c2ecccb574 down in Southbound
Jan 23 05:16:18 np0005593232 nova_compute[250269]: 2026-01-23 10:16:18.936 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:18 np0005593232 ovn_controller[151001]: 2026-01-23T10:16:18Z|00543|binding|INFO|Removing iface tape76c3794-4b ovn-installed in OVS
Jan 23 05:16:18 np0005593232 nova_compute[250269]: 2026-01-23 10:16:18.938 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:18.949 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:fb:62 10.100.0.8'], port_security=['fa:16:3e:15:fb:62 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c8c9cb1d-4faa-4945-afcc-f67ccc4d4237', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-06fc7c06-b10a-4dd5-9475-e6feb221ea4a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98c94577fcdb4c3d893898ede79ea2d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '28dbde1b-f0be-40ce-aec9-09c9d592e51c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a78bd9d7-7834-4168-80e9-f5bf32da7504, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=e76c3794-4bfb-450d-901b-d5c2ecccb574) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:16:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:18.950 161902 INFO neutron.agent.ovn.metadata.agent [-] Port e76c3794-4bfb-450d-901b-d5c2ecccb574 in datapath 06fc7c06-b10a-4dd5-9475-e6feb221ea4a unbound from our chassis#033[00m
Jan 23 05:16:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:18.952 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 06fc7c06-b10a-4dd5-9475-e6feb221ea4a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:16:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:18.953 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[060d5b5f-a90b-4140-a584-8311f2984f54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:18.954 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-06fc7c06-b10a-4dd5-9475-e6feb221ea4a namespace which is not needed anymore#033[00m
Jan 23 05:16:18 np0005593232 nova_compute[250269]: 2026-01-23 10:16:18.974 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:18 np0005593232 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Jan 23 05:16:18 np0005593232 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d0000008f.scope: Consumed 15.887s CPU time.
Jan 23 05:16:18 np0005593232 systemd-machined[215836]: Machine qemu-63-instance-0000008f terminated.
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.067 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.074 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.081 250273 INFO nova.virt.libvirt.driver [-] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Instance destroyed successfully.#033[00m
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.081 250273 DEBUG nova.objects.instance [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lazy-loading 'resources' on Instance uuid c8c9cb1d-4faa-4945-afcc-f67ccc4d4237 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:16:19 np0005593232 neutron-haproxy-ovnmeta-06fc7c06-b10a-4dd5-9475-e6feb221ea4a[343195]: [NOTICE]   (343199) : haproxy version is 2.8.14-c23fe91
Jan 23 05:16:19 np0005593232 neutron-haproxy-ovnmeta-06fc7c06-b10a-4dd5-9475-e6feb221ea4a[343195]: [NOTICE]   (343199) : path to executable is /usr/sbin/haproxy
Jan 23 05:16:19 np0005593232 neutron-haproxy-ovnmeta-06fc7c06-b10a-4dd5-9475-e6feb221ea4a[343195]: [WARNING]  (343199) : Exiting Master process...
Jan 23 05:16:19 np0005593232 neutron-haproxy-ovnmeta-06fc7c06-b10a-4dd5-9475-e6feb221ea4a[343195]: [WARNING]  (343199) : Exiting Master process...
Jan 23 05:16:19 np0005593232 neutron-haproxy-ovnmeta-06fc7c06-b10a-4dd5-9475-e6feb221ea4a[343195]: [ALERT]    (343199) : Current worker (343201) exited with code 143 (Terminated)
Jan 23 05:16:19 np0005593232 neutron-haproxy-ovnmeta-06fc7c06-b10a-4dd5-9475-e6feb221ea4a[343195]: [WARNING]  (343199) : All workers exited. Exiting... (0)
Jan 23 05:16:19 np0005593232 systemd[1]: libpod-7ee737873017556494dd362bfc2bd33acbbfc53078adaa6280e783b322d5a128.scope: Deactivated successfully.
Jan 23 05:16:19 np0005593232 podman[344595]: 2026-01-23 10:16:19.092765661 +0000 UTC m=+0.047941023 container died 7ee737873017556494dd362bfc2bd33acbbfc53078adaa6280e783b322d5a128 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-06fc7c06-b10a-4dd5-9475-e6feb221ea4a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.100 250273 DEBUG nova.virt.libvirt.vif [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:14:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-167712358',display_name='tempest-TestNetworkBasicOps-server-167712358',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-167712358',id=143,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAF35E1Zip5mB7Z7bBpTlXmea73flBCQ0OxIp3E3UHfQJ8/C+PcZ9Yn+30apyBlpqi/cSP1tnLqb2v0HvL8Yo3sFqR36G/CFrDPtbVe+ut7anU4AXXWhdM5kg8fAAxRz+A==',key_name='tempest-TestNetworkBasicOps-880733138',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:15:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='98c94577fcdb4c3d893898ede79ea2d4',ramdisk_id='',reservation_id='r-9al0ysbc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-789276745',owner_user_name='tempest-TestNetworkBasicOps-789276745-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:15:27Z,user_data=None,user_id='60291ce86b6946629a2e48f6680312cb',uuid=c8c9cb1d-4faa-4945-afcc-f67ccc4d4237,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "address": "fa:16:3e:15:fb:62", "network": {"id": "06fc7c06-b10a-4dd5-9475-e6feb221ea4a", "bridge": "br-int", "label": "tempest-network-smoke--1334269906", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76c3794-4b", "ovs_interfaceid": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.101 250273 DEBUG nova.network.os_vif_util [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converting VIF {"id": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "address": "fa:16:3e:15:fb:62", "network": {"id": "06fc7c06-b10a-4dd5-9475-e6feb221ea4a", "bridge": "br-int", "label": "tempest-network-smoke--1334269906", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76c3794-4b", "ovs_interfaceid": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:16:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.102 250273 DEBUG nova.network.os_vif_util [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:15:fb:62,bridge_name='br-int',has_traffic_filtering=True,id=e76c3794-4bfb-450d-901b-d5c2ecccb574,network=Network(06fc7c06-b10a-4dd5-9475-e6feb221ea4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape76c3794-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:16:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:19.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.102 250273 DEBUG os_vif [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:fb:62,bridge_name='br-int',has_traffic_filtering=True,id=e76c3794-4bfb-450d-901b-d5c2ecccb574,network=Network(06fc7c06-b10a-4dd5-9475-e6feb221ea4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape76c3794-4b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.104 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.104 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape76c3794-4b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.106 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.110 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.112 250273 INFO os_vif [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:fb:62,bridge_name='br-int',has_traffic_filtering=True,id=e76c3794-4bfb-450d-901b-d5c2ecccb574,network=Network(06fc7c06-b10a-4dd5-9475-e6feb221ea4a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape76c3794-4b')#033[00m
Jan 23 05:16:19 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7ee737873017556494dd362bfc2bd33acbbfc53078adaa6280e783b322d5a128-userdata-shm.mount: Deactivated successfully.
Jan 23 05:16:19 np0005593232 systemd[1]: var-lib-containers-storage-overlay-0a8951aea5dbed6b7fca36349cd9427ea1eb457028efc76354c6c84c62d4c3ff-merged.mount: Deactivated successfully.
Jan 23 05:16:19 np0005593232 podman[344595]: 2026-01-23 10:16:19.133838328 +0000 UTC m=+0.089013690 container cleanup 7ee737873017556494dd362bfc2bd33acbbfc53078adaa6280e783b322d5a128 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-06fc7c06-b10a-4dd5-9475-e6feb221ea4a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:16:19 np0005593232 systemd[1]: libpod-conmon-7ee737873017556494dd362bfc2bd33acbbfc53078adaa6280e783b322d5a128.scope: Deactivated successfully.
Jan 23 05:16:19 np0005593232 podman[344645]: 2026-01-23 10:16:19.203707873 +0000 UTC m=+0.044270439 container remove 7ee737873017556494dd362bfc2bd33acbbfc53078adaa6280e783b322d5a128 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-06fc7c06-b10a-4dd5-9475-e6feb221ea4a, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:16:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:19.209 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8dfb1728-6af6-4721-b9f8-5c92db5cc6ba]: (4, ('Fri Jan 23 10:16:19 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-06fc7c06-b10a-4dd5-9475-e6feb221ea4a (7ee737873017556494dd362bfc2bd33acbbfc53078adaa6280e783b322d5a128)\n7ee737873017556494dd362bfc2bd33acbbfc53078adaa6280e783b322d5a128\nFri Jan 23 10:16:19 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-06fc7c06-b10a-4dd5-9475-e6feb221ea4a (7ee737873017556494dd362bfc2bd33acbbfc53078adaa6280e783b322d5a128)\n7ee737873017556494dd362bfc2bd33acbbfc53078adaa6280e783b322d5a128\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:19.211 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1a7a029a-aa98-4ec5-b1ec-d7d7b7289061]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:19.211 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap06fc7c06-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:16:19 np0005593232 kernel: tap06fc7c06-b0: left promiscuous mode
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.214 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.228 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:19.231 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0e4ba3-5d86-440b-80cb-26583fc0ffdf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:19.246 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[86588c05-2456-4fee-9442-08a74e4a7e29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:19.247 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[86726048-34a0-4051-942c-cb630641cf2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:19.264 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4e5bc817-51e8-41bc-b6a0-58a608ce4fad]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722739, 'reachable_time': 34494, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344663, 'error': None, 'target': 'ovnmeta-06fc7c06-b10a-4dd5-9475-e6feb221ea4a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:19 np0005593232 systemd[1]: run-netns-ovnmeta\x2d06fc7c06\x2db10a\x2d4dd5\x2d9475\x2de6feb221ea4a.mount: Deactivated successfully.
Jan 23 05:16:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:19.267 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-06fc7c06-b10a-4dd5-9475-e6feb221ea4a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:16:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:19.267 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[f50e0f30-c76d-4c87-9370-46edbf62833c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.336 250273 DEBUG nova.compute.manager [req-2c2cb37d-ac7c-470f-843e-c5c80eb13df5 req-6165c5b6-7e89-4a23-af35-9abd4207daf3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Received event network-vif-unplugged-e76c3794-4bfb-450d-901b-d5c2ecccb574 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.337 250273 DEBUG oslo_concurrency.lockutils [req-2c2cb37d-ac7c-470f-843e-c5c80eb13df5 req-6165c5b6-7e89-4a23-af35-9abd4207daf3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.337 250273 DEBUG oslo_concurrency.lockutils [req-2c2cb37d-ac7c-470f-843e-c5c80eb13df5 req-6165c5b6-7e89-4a23-af35-9abd4207daf3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.337 250273 DEBUG oslo_concurrency.lockutils [req-2c2cb37d-ac7c-470f-843e-c5c80eb13df5 req-6165c5b6-7e89-4a23-af35-9abd4207daf3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.338 250273 DEBUG nova.compute.manager [req-2c2cb37d-ac7c-470f-843e-c5c80eb13df5 req-6165c5b6-7e89-4a23-af35-9abd4207daf3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] No waiting events found dispatching network-vif-unplugged-e76c3794-4bfb-450d-901b-d5c2ecccb574 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.338 250273 DEBUG nova.compute.manager [req-2c2cb37d-ac7c-470f-843e-c5c80eb13df5 req-6165c5b6-7e89-4a23-af35-9abd4207daf3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Received event network-vif-unplugged-e76c3794-4bfb-450d-901b-d5c2ecccb574 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:16:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2601: 321 pgs: 321 active+clean; 588 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 12 KiB/s wr, 36 op/s
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.550 250273 INFO nova.virt.libvirt.driver [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Deleting instance files /var/lib/nova/instances/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237_del#033[00m
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.551 250273 INFO nova.virt.libvirt.driver [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Deletion of /var/lib/nova/instances/c8c9cb1d-4faa-4945-afcc-f67ccc4d4237_del complete#033[00m
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.693 250273 INFO nova.compute.manager [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.693 250273 DEBUG oslo.service.loopingcall [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.694 250273 DEBUG nova.compute.manager [-] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:16:19 np0005593232 nova_compute[250269]: 2026-01-23 10:16:19.694 250273 DEBUG nova.network.neutron [-] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:16:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:20.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:21.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2602: 321 pgs: 321 active+clean; 588 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 12 KiB/s wr, 36 op/s
Jan 23 05:16:21 np0005593232 nova_compute[250269]: 2026-01-23 10:16:21.611 250273 DEBUG nova.compute.manager [req-94dac445-2244-46f9-897f-434b495f2845 req-c6394b3c-4554-4f9b-ace5-1d61509c8bf3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Received event network-vif-plugged-e76c3794-4bfb-450d-901b-d5c2ecccb574 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:16:21 np0005593232 nova_compute[250269]: 2026-01-23 10:16:21.611 250273 DEBUG oslo_concurrency.lockutils [req-94dac445-2244-46f9-897f-434b495f2845 req-c6394b3c-4554-4f9b-ace5-1d61509c8bf3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:16:21 np0005593232 nova_compute[250269]: 2026-01-23 10:16:21.611 250273 DEBUG oslo_concurrency.lockutils [req-94dac445-2244-46f9-897f-434b495f2845 req-c6394b3c-4554-4f9b-ace5-1d61509c8bf3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:16:21 np0005593232 nova_compute[250269]: 2026-01-23 10:16:21.612 250273 DEBUG oslo_concurrency.lockutils [req-94dac445-2244-46f9-897f-434b495f2845 req-c6394b3c-4554-4f9b-ace5-1d61509c8bf3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:16:21 np0005593232 nova_compute[250269]: 2026-01-23 10:16:21.612 250273 DEBUG nova.compute.manager [req-94dac445-2244-46f9-897f-434b495f2845 req-c6394b3c-4554-4f9b-ace5-1d61509c8bf3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] No waiting events found dispatching network-vif-plugged-e76c3794-4bfb-450d-901b-d5c2ecccb574 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:16:21 np0005593232 nova_compute[250269]: 2026-01-23 10:16:21.612 250273 WARNING nova.compute.manager [req-94dac445-2244-46f9-897f-434b495f2845 req-c6394b3c-4554-4f9b-ace5-1d61509c8bf3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Received unexpected event network-vif-plugged-e76c3794-4bfb-450d-901b-d5c2ecccb574 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:16:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:16:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:16:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:22.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:16:22 np0005593232 nova_compute[250269]: 2026-01-23 10:16:22.784 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:23.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2603: 321 pgs: 321 active+clean; 532 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 22 KiB/s wr, 73 op/s
Jan 23 05:16:23 np0005593232 nova_compute[250269]: 2026-01-23 10:16:23.480 250273 DEBUG nova.network.neutron [req-75a9a58a-9398-4913-a6af-bac4c2025878 req-c39630a4-b76c-4f74-a9c8-1aa765cf533a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Updated VIF entry in instance network info cache for port e76c3794-4bfb-450d-901b-d5c2ecccb574. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:16:23 np0005593232 nova_compute[250269]: 2026-01-23 10:16:23.481 250273 DEBUG nova.network.neutron [req-75a9a58a-9398-4913-a6af-bac4c2025878 req-c39630a4-b76c-4f74-a9c8-1aa765cf533a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Updating instance_info_cache with network_info: [{"id": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "address": "fa:16:3e:15:fb:62", "network": {"id": "06fc7c06-b10a-4dd5-9475-e6feb221ea4a", "bridge": "br-int", "label": "tempest-network-smoke--1334269906", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape76c3794-4b", "ovs_interfaceid": "e76c3794-4bfb-450d-901b-d5c2ecccb574", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:16:23 np0005593232 nova_compute[250269]: 2026-01-23 10:16:23.680 250273 DEBUG nova.network.neutron [-] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:16:23 np0005593232 nova_compute[250269]: 2026-01-23 10:16:23.687 250273 DEBUG nova.compute.manager [req-7fbd8b5f-717c-4d5a-8f29-ce0cd2390056 req-0efa699d-6c22-4fdc-8ce9-820e13f5d563 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Received event network-vif-deleted-e76c3794-4bfb-450d-901b-d5c2ecccb574 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:16:23 np0005593232 nova_compute[250269]: 2026-01-23 10:16:23.688 250273 INFO nova.compute.manager [req-7fbd8b5f-717c-4d5a-8f29-ce0cd2390056 req-0efa699d-6c22-4fdc-8ce9-820e13f5d563 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Neutron deleted interface e76c3794-4bfb-450d-901b-d5c2ecccb574; detaching it from the instance and deleting it from the info cache#033[00m
Jan 23 05:16:23 np0005593232 nova_compute[250269]: 2026-01-23 10:16:23.688 250273 DEBUG nova.network.neutron [req-7fbd8b5f-717c-4d5a-8f29-ce0cd2390056 req-0efa699d-6c22-4fdc-8ce9-820e13f5d563 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:16:23 np0005593232 nova_compute[250269]: 2026-01-23 10:16:23.711 250273 DEBUG oslo_concurrency.lockutils [req-75a9a58a-9398-4913-a6af-bac4c2025878 req-c39630a4-b76c-4f74-a9c8-1aa765cf533a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:16:23 np0005593232 nova_compute[250269]: 2026-01-23 10:16:23.732 250273 INFO nova.compute.manager [-] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Took 4.04 seconds to deallocate network for instance.#033[00m
Jan 23 05:16:23 np0005593232 nova_compute[250269]: 2026-01-23 10:16:23.742 250273 DEBUG nova.compute.manager [req-7fbd8b5f-717c-4d5a-8f29-ce0cd2390056 req-0efa699d-6c22-4fdc-8ce9-820e13f5d563 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Detach interface failed, port_id=e76c3794-4bfb-450d-901b-d5c2ecccb574, reason: Instance c8c9cb1d-4faa-4945-afcc-f67ccc4d4237 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 23 05:16:24 np0005593232 nova_compute[250269]: 2026-01-23 10:16:24.107 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:24 np0005593232 nova_compute[250269]: 2026-01-23 10:16:24.233 250273 DEBUG oslo_concurrency.lockutils [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:16:24 np0005593232 nova_compute[250269]: 2026-01-23 10:16:24.233 250273 DEBUG oslo_concurrency.lockutils [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:16:24 np0005593232 nova_compute[250269]: 2026-01-23 10:16:24.389 250273 DEBUG oslo_concurrency.processutils [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:16:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:16:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:24.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:16:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:16:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/123842421' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:16:24 np0005593232 nova_compute[250269]: 2026-01-23 10:16:24.860 250273 DEBUG oslo_concurrency.processutils [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:16:24 np0005593232 nova_compute[250269]: 2026-01-23 10:16:24.866 250273 DEBUG nova.compute.provider_tree [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:16:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:16:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:25.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:16:25 np0005593232 nova_compute[250269]: 2026-01-23 10:16:25.244 250273 DEBUG nova.scheduler.client.report [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:16:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2604: 321 pgs: 321 active+clean; 468 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 71 KiB/s rd, 20 KiB/s wr, 94 op/s
Jan 23 05:16:25 np0005593232 nova_compute[250269]: 2026-01-23 10:16:25.470 250273 DEBUG oslo_concurrency.lockutils [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:16:25 np0005593232 nova_compute[250269]: 2026-01-23 10:16:25.740 250273 INFO nova.scheduler.client.report [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Deleted allocations for instance c8c9cb1d-4faa-4945-afcc-f67ccc4d4237#033[00m
Jan 23 05:16:26 np0005593232 nova_compute[250269]: 2026-01-23 10:16:26.375 250273 DEBUG oslo_concurrency.lockutils [None req-4604c851-6a84-4ffc-9dd2-9a7879a86e6f 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "c8c9cb1d-4faa-4945-afcc-f67ccc4d4237" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:16:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:16:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:26.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:16:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:16:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:16:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:27.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:16:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2605: 321 pgs: 321 active+clean; 413 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 19 KiB/s wr, 99 op/s
Jan 23 05:16:27 np0005593232 nova_compute[250269]: 2026-01-23 10:16:27.825 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:28.030 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:16:28 np0005593232 nova_compute[250269]: 2026-01-23 10:16:28.031 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:28.032 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:16:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:28.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:29 np0005593232 nova_compute[250269]: 2026-01-23 10:16:29.110 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:29.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2606: 321 pgs: 321 active+clean; 381 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 76 KiB/s rd, 13 KiB/s wr, 110 op/s
Jan 23 05:16:30 np0005593232 podman[344701]: 2026-01-23 10:16:30.473707003 +0000 UTC m=+0.123983494 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 23 05:16:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:30.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:31.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2607: 321 pgs: 321 active+clean; 381 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 80 KiB/s rd, 13 KiB/s wr, 114 op/s
Jan 23 05:16:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:16:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:16:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:32.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:16:32 np0005593232 nova_compute[250269]: 2026-01-23 10:16:32.780 250273 DEBUG oslo_concurrency.lockutils [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Acquiring lock "8056a321-13d3-4dd8-bb33-70c832c17ac1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:16:32 np0005593232 nova_compute[250269]: 2026-01-23 10:16:32.781 250273 DEBUG oslo_concurrency.lockutils [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:16:32 np0005593232 nova_compute[250269]: 2026-01-23 10:16:32.781 250273 DEBUG oslo_concurrency.lockutils [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Acquiring lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:16:32 np0005593232 nova_compute[250269]: 2026-01-23 10:16:32.782 250273 DEBUG oslo_concurrency.lockutils [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:16:32 np0005593232 nova_compute[250269]: 2026-01-23 10:16:32.782 250273 DEBUG oslo_concurrency.lockutils [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:16:32 np0005593232 nova_compute[250269]: 2026-01-23 10:16:32.783 250273 INFO nova.compute.manager [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Terminating instance#033[00m
Jan 23 05:16:32 np0005593232 nova_compute[250269]: 2026-01-23 10:16:32.785 250273 DEBUG nova.compute.manager [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:16:32 np0005593232 nova_compute[250269]: 2026-01-23 10:16:32.829 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:32 np0005593232 kernel: tap5247b656-d9 (unregistering): left promiscuous mode
Jan 23 05:16:32 np0005593232 NetworkManager[49057]: <info>  [1769163392.8946] device (tap5247b656-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:16:32 np0005593232 ovn_controller[151001]: 2026-01-23T10:16:32Z|00544|binding|INFO|Releasing lport 5247b656-d92f-4246-8db1-32dd4ca770b1 from this chassis (sb_readonly=0)
Jan 23 05:16:32 np0005593232 ovn_controller[151001]: 2026-01-23T10:16:32Z|00545|binding|INFO|Setting lport 5247b656-d92f-4246-8db1-32dd4ca770b1 down in Southbound
Jan 23 05:16:32 np0005593232 nova_compute[250269]: 2026-01-23 10:16:32.911 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:32 np0005593232 ovn_controller[151001]: 2026-01-23T10:16:32Z|00546|binding|INFO|Removing iface tap5247b656-d9 ovn-installed in OVS
Jan 23 05:16:32 np0005593232 nova_compute[250269]: 2026-01-23 10:16:32.914 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:32.936 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:b9:0c 10.100.0.4'], port_security=['fa:16:3e:7a:b9:0c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '8056a321-13d3-4dd8-bb33-70c832c17ac1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00bd3319-bfe5-4acd-b2e4-17830ee847f9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a6ba16c4b9d49d3bc24cd7b44935d1f', 'neutron:revision_number': '8', 'neutron:security_group_ids': '6fc0d424-7779-4175-b5e0-e2613de6ecef', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5fb685af-2efd-4d70-8868-8a86ed4c3ca6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=5247b656-d92f-4246-8db1-32dd4ca770b1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:16:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:32.938 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 5247b656-d92f-4246-8db1-32dd4ca770b1 in datapath 00bd3319-bfe5-4acd-b2e4-17830ee847f9 unbound from our chassis#033[00m
Jan 23 05:16:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:32.940 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 00bd3319-bfe5-4acd-b2e4-17830ee847f9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:16:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:32.941 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f27095-6e5c-4797-9e3c-1f80b5507753]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:32.942 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9 namespace which is not needed anymore#033[00m
Jan 23 05:16:32 np0005593232 nova_compute[250269]: 2026-01-23 10:16:32.950 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:32 np0005593232 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000085.scope: Deactivated successfully.
Jan 23 05:16:32 np0005593232 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000085.scope: Consumed 19.701s CPU time.
Jan 23 05:16:32 np0005593232 systemd-machined[215836]: Machine qemu-60-instance-00000085 terminated.
Jan 23 05:16:33 np0005593232 nova_compute[250269]: 2026-01-23 10:16:33.008 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:33 np0005593232 nova_compute[250269]: 2026-01-23 10:16:33.013 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:33 np0005593232 nova_compute[250269]: 2026-01-23 10:16:33.022 250273 INFO nova.virt.libvirt.driver [-] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Instance destroyed successfully.#033[00m
Jan 23 05:16:33 np0005593232 nova_compute[250269]: 2026-01-23 10:16:33.023 250273 DEBUG nova.objects.instance [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lazy-loading 'resources' on Instance uuid 8056a321-13d3-4dd8-bb33-70c832c17ac1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:16:33 np0005593232 neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9[340602]: [NOTICE]   (340606) : haproxy version is 2.8.14-c23fe91
Jan 23 05:16:33 np0005593232 neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9[340602]: [NOTICE]   (340606) : path to executable is /usr/sbin/haproxy
Jan 23 05:16:33 np0005593232 neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9[340602]: [WARNING]  (340606) : Exiting Master process...
Jan 23 05:16:33 np0005593232 neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9[340602]: [WARNING]  (340606) : Exiting Master process...
Jan 23 05:16:33 np0005593232 neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9[340602]: [ALERT]    (340606) : Current worker (340608) exited with code 143 (Terminated)
Jan 23 05:16:33 np0005593232 neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9[340602]: [WARNING]  (340606) : All workers exited. Exiting... (0)
Jan 23 05:16:33 np0005593232 systemd[1]: libpod-0e7d9a35f38619c7d77ccbc62a2cb0cdc46a1d4d00f4a40f2544a99998bed99c.scope: Deactivated successfully.
Jan 23 05:16:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:33.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:33 np0005593232 podman[344804]: 2026-01-23 10:16:33.118525857 +0000 UTC m=+0.047536612 container died 0e7d9a35f38619c7d77ccbc62a2cb0cdc46a1d4d00f4a40f2544a99998bed99c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 23 05:16:33 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0e7d9a35f38619c7d77ccbc62a2cb0cdc46a1d4d00f4a40f2544a99998bed99c-userdata-shm.mount: Deactivated successfully.
Jan 23 05:16:33 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d769abd82db613460fda240ca2caa056af40b7db88deac7a8deb20e43cd22583-merged.mount: Deactivated successfully.
Jan 23 05:16:33 np0005593232 podman[344804]: 2026-01-23 10:16:33.156562458 +0000 UTC m=+0.085573213 container cleanup 0e7d9a35f38619c7d77ccbc62a2cb0cdc46a1d4d00f4a40f2544a99998bed99c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 23 05:16:33 np0005593232 systemd[1]: libpod-conmon-0e7d9a35f38619c7d77ccbc62a2cb0cdc46a1d4d00f4a40f2544a99998bed99c.scope: Deactivated successfully.
Jan 23 05:16:33 np0005593232 podman[344835]: 2026-01-23 10:16:33.216405468 +0000 UTC m=+0.040397769 container remove 0e7d9a35f38619c7d77ccbc62a2cb0cdc46a1d4d00f4a40f2544a99998bed99c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:16:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:33.224 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[aa682f12-71b4-4e77-bda3-3d62e70cbc09]: (4, ('Fri Jan 23 10:16:33 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9 (0e7d9a35f38619c7d77ccbc62a2cb0cdc46a1d4d00f4a40f2544a99998bed99c)\n0e7d9a35f38619c7d77ccbc62a2cb0cdc46a1d4d00f4a40f2544a99998bed99c\nFri Jan 23 10:16:33 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9 (0e7d9a35f38619c7d77ccbc62a2cb0cdc46a1d4d00f4a40f2544a99998bed99c)\n0e7d9a35f38619c7d77ccbc62a2cb0cdc46a1d4d00f4a40f2544a99998bed99c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:33.227 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[92426d05-655d-4c66-be3a-a0bc43d32bd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:33.228 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00bd3319-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:16:33 np0005593232 nova_compute[250269]: 2026-01-23 10:16:33.230 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:33 np0005593232 kernel: tap00bd3319-b0: left promiscuous mode
Jan 23 05:16:33 np0005593232 nova_compute[250269]: 2026-01-23 10:16:33.264 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:33.268 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[fab77ac5-bd3e-4733-978d-a4085203a678]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:33 np0005593232 nova_compute[250269]: 2026-01-23 10:16:33.274 250273 DEBUG nova.virt.libvirt.vif [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:12:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1844411829',display_name='tempest-ServerRescueNegativeTestJSON-server-1844411829',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1844411829',id=133,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:13:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a6ba16c4b9d49d3bc24cd7b44935d1f',ramdisk_id='',reservation_id='r-v1vp7v6p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-87224704',owner_user_name='tempest-ServerRescueNegativeTestJSON-87224704-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:14:05Z,user_data=None,user_id='fae914e59ec54f6b80928ef3cc68dbdb',uuid=8056a321-13d3-4dd8-bb33-70c832c17ac1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5247b656-d92f-4246-8db1-32dd4ca770b1", "address": "fa:16:3e:7a:b9:0c", "network": {"id": "00bd3319-bfe5-4acd-b2e4-17830ee847f9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-921943436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a6ba16c4b9d49d3bc24cd7b44935d1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5247b656-d9", "ovs_interfaceid": "5247b656-d92f-4246-8db1-32dd4ca770b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:16:33 np0005593232 nova_compute[250269]: 2026-01-23 10:16:33.275 250273 DEBUG nova.network.os_vif_util [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Converting VIF {"id": "5247b656-d92f-4246-8db1-32dd4ca770b1", "address": "fa:16:3e:7a:b9:0c", "network": {"id": "00bd3319-bfe5-4acd-b2e4-17830ee847f9", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-921943436-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a6ba16c4b9d49d3bc24cd7b44935d1f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5247b656-d9", "ovs_interfaceid": "5247b656-d92f-4246-8db1-32dd4ca770b1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:16:33 np0005593232 nova_compute[250269]: 2026-01-23 10:16:33.276 250273 DEBUG nova.network.os_vif_util [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:b9:0c,bridge_name='br-int',has_traffic_filtering=True,id=5247b656-d92f-4246-8db1-32dd4ca770b1,network=Network(00bd3319-bfe5-4acd-b2e4-17830ee847f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5247b656-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:16:33 np0005593232 nova_compute[250269]: 2026-01-23 10:16:33.276 250273 DEBUG os_vif [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:b9:0c,bridge_name='br-int',has_traffic_filtering=True,id=5247b656-d92f-4246-8db1-32dd4ca770b1,network=Network(00bd3319-bfe5-4acd-b2e4-17830ee847f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5247b656-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:16:33 np0005593232 nova_compute[250269]: 2026-01-23 10:16:33.278 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:33 np0005593232 nova_compute[250269]: 2026-01-23 10:16:33.279 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5247b656-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:16:33 np0005593232 nova_compute[250269]: 2026-01-23 10:16:33.282 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:33 np0005593232 nova_compute[250269]: 2026-01-23 10:16:33.285 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:33.285 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[aa62e235-03c1-439c-87e4-7d30c8f3bb9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:33 np0005593232 nova_compute[250269]: 2026-01-23 10:16:33.287 250273 INFO os_vif [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:b9:0c,bridge_name='br-int',has_traffic_filtering=True,id=5247b656-d92f-4246-8db1-32dd4ca770b1,network=Network(00bd3319-bfe5-4acd-b2e4-17830ee847f9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5247b656-d9')#033[00m
Jan 23 05:16:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:33.288 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f2419293-ce00-49c4-a897-0c84bbbfede7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:33.304 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a60376d4-88a4-49f5-9760-5eab8f334685]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 715707, 'reachable_time': 35225, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344853, 'error': None, 'target': 'ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:33 np0005593232 systemd[1]: run-netns-ovnmeta\x2d00bd3319\x2dbfe5\x2d4acd\x2db2e4\x2d17830ee847f9.mount: Deactivated successfully.
Jan 23 05:16:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:33.309 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-00bd3319-bfe5-4acd-b2e4-17830ee847f9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:16:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:33.309 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[9acb281e-ced4-4434-94a4-3f8126704623]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:16:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2608: 321 pgs: 321 active+clean; 381 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 459 KiB/s rd, 13 KiB/s wr, 115 op/s
Jan 23 05:16:34 np0005593232 nova_compute[250269]: 2026-01-23 10:16:34.080 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769163379.0786948, c8c9cb1d-4faa-4945-afcc-f67ccc4d4237 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:16:34 np0005593232 nova_compute[250269]: 2026-01-23 10:16:34.081 250273 INFO nova.compute.manager [-] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:16:34 np0005593232 nova_compute[250269]: 2026-01-23 10:16:34.220 250273 INFO nova.virt.libvirt.driver [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Deleting instance files /var/lib/nova/instances/8056a321-13d3-4dd8-bb33-70c832c17ac1_del#033[00m
Jan 23 05:16:34 np0005593232 nova_compute[250269]: 2026-01-23 10:16:34.221 250273 INFO nova.virt.libvirt.driver [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Deletion of /var/lib/nova/instances/8056a321-13d3-4dd8-bb33-70c832c17ac1_del complete#033[00m
Jan 23 05:16:34 np0005593232 nova_compute[250269]: 2026-01-23 10:16:34.333 250273 DEBUG nova.compute.manager [None req-7e30035c-0867-4794-beb4-aeba13fb66e0 - - - - - -] [instance: c8c9cb1d-4faa-4945-afcc-f67ccc4d4237] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:16:34 np0005593232 nova_compute[250269]: 2026-01-23 10:16:34.383 250273 INFO nova.compute.manager [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Took 1.60 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:16:34 np0005593232 nova_compute[250269]: 2026-01-23 10:16:34.384 250273 DEBUG oslo.service.loopingcall [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:16:34 np0005593232 nova_compute[250269]: 2026-01-23 10:16:34.385 250273 DEBUG nova.compute.manager [-] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:16:34 np0005593232 nova_compute[250269]: 2026-01-23 10:16:34.386 250273 DEBUG nova.network.neutron [-] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:16:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:34.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:35 np0005593232 nova_compute[250269]: 2026-01-23 10:16:35.109 250273 DEBUG nova.compute.manager [req-9214f6ed-84d4-4a12-a185-a63e3b57c1f5 req-a03ba92d-803a-45cb-92b6-7e17e4bf5250 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received event network-vif-unplugged-5247b656-d92f-4246-8db1-32dd4ca770b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:16:35 np0005593232 nova_compute[250269]: 2026-01-23 10:16:35.110 250273 DEBUG oslo_concurrency.lockutils [req-9214f6ed-84d4-4a12-a185-a63e3b57c1f5 req-a03ba92d-803a-45cb-92b6-7e17e4bf5250 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:16:35 np0005593232 nova_compute[250269]: 2026-01-23 10:16:35.110 250273 DEBUG oslo_concurrency.lockutils [req-9214f6ed-84d4-4a12-a185-a63e3b57c1f5 req-a03ba92d-803a-45cb-92b6-7e17e4bf5250 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:16:35 np0005593232 nova_compute[250269]: 2026-01-23 10:16:35.110 250273 DEBUG oslo_concurrency.lockutils [req-9214f6ed-84d4-4a12-a185-a63e3b57c1f5 req-a03ba92d-803a-45cb-92b6-7e17e4bf5250 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:16:35 np0005593232 nova_compute[250269]: 2026-01-23 10:16:35.111 250273 DEBUG nova.compute.manager [req-9214f6ed-84d4-4a12-a185-a63e3b57c1f5 req-a03ba92d-803a-45cb-92b6-7e17e4bf5250 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] No waiting events found dispatching network-vif-unplugged-5247b656-d92f-4246-8db1-32dd4ca770b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:16:35 np0005593232 nova_compute[250269]: 2026-01-23 10:16:35.111 250273 DEBUG nova.compute.manager [req-9214f6ed-84d4-4a12-a185-a63e3b57c1f5 req-a03ba92d-803a-45cb-92b6-7e17e4bf5250 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received event network-vif-unplugged-5247b656-d92f-4246-8db1-32dd4ca770b1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:16:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:35.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:35 np0005593232 podman[344874]: 2026-01-23 10:16:35.4261835 +0000 UTC m=+0.070470023 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:16:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2609: 321 pgs: 321 active+clean; 361 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 3.9 KiB/s wr, 86 op/s
Jan 23 05:16:35 np0005593232 ovn_controller[151001]: 2026-01-23T10:16:35Z|00547|binding|INFO|Releasing lport 2c16e447-27d9-4516-bf23-ec948f375c10 from this chassis (sb_readonly=0)
Jan 23 05:16:35 np0005593232 nova_compute[250269]: 2026-01-23 10:16:35.987 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:36 np0005593232 nova_compute[250269]: 2026-01-23 10:16:36.470 250273 DEBUG nova.network.neutron [-] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:16:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:16:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:36.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:16:36 np0005593232 nova_compute[250269]: 2026-01-23 10:16:36.498 250273 INFO nova.compute.manager [-] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Took 2.11 seconds to deallocate network for instance.#033[00m
Jan 23 05:16:36 np0005593232 nova_compute[250269]: 2026-01-23 10:16:36.610 250273 DEBUG oslo_concurrency.lockutils [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:16:36 np0005593232 nova_compute[250269]: 2026-01-23 10:16:36.610 250273 DEBUG oslo_concurrency.lockutils [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:16:36 np0005593232 nova_compute[250269]: 2026-01-23 10:16:36.731 250273 DEBUG oslo_concurrency.processutils [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:16:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:16:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:16:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:37.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:16:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:16:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1284096215' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:16:37 np0005593232 nova_compute[250269]: 2026-01-23 10:16:37.230 250273 DEBUG oslo_concurrency.processutils [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:16:37 np0005593232 nova_compute[250269]: 2026-01-23 10:16:37.237 250273 DEBUG nova.compute.provider_tree [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:16:37 np0005593232 nova_compute[250269]: 2026-01-23 10:16:37.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:16:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:16:37
Jan 23 05:16:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:16:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:16:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'vms', 'default.rgw.log', '.mgr']
Jan 23 05:16:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:16:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2610: 321 pgs: 321 active+clean; 300 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.9 KiB/s wr, 72 op/s
Jan 23 05:16:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:16:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:16:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:16:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:16:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:16:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:16:37 np0005593232 nova_compute[250269]: 2026-01-23 10:16:37.619 250273 DEBUG nova.scheduler.client.report [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:16:37 np0005593232 nova_compute[250269]: 2026-01-23 10:16:37.703 250273 DEBUG nova.compute.manager [req-44d60e55-b12c-4d3e-ac7e-8976091b3da9 req-1358bc8a-fdf7-417a-8905-8d8018fc9c39 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received event network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:16:37 np0005593232 nova_compute[250269]: 2026-01-23 10:16:37.704 250273 DEBUG oslo_concurrency.lockutils [req-44d60e55-b12c-4d3e-ac7e-8976091b3da9 req-1358bc8a-fdf7-417a-8905-8d8018fc9c39 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:16:37 np0005593232 nova_compute[250269]: 2026-01-23 10:16:37.704 250273 DEBUG oslo_concurrency.lockutils [req-44d60e55-b12c-4d3e-ac7e-8976091b3da9 req-1358bc8a-fdf7-417a-8905-8d8018fc9c39 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:16:37 np0005593232 nova_compute[250269]: 2026-01-23 10:16:37.704 250273 DEBUG oslo_concurrency.lockutils [req-44d60e55-b12c-4d3e-ac7e-8976091b3da9 req-1358bc8a-fdf7-417a-8905-8d8018fc9c39 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:16:37 np0005593232 nova_compute[250269]: 2026-01-23 10:16:37.704 250273 DEBUG nova.compute.manager [req-44d60e55-b12c-4d3e-ac7e-8976091b3da9 req-1358bc8a-fdf7-417a-8905-8d8018fc9c39 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] No waiting events found dispatching network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:16:37 np0005593232 nova_compute[250269]: 2026-01-23 10:16:37.705 250273 WARNING nova.compute.manager [req-44d60e55-b12c-4d3e-ac7e-8976091b3da9 req-1358bc8a-fdf7-417a-8905-8d8018fc9c39 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received unexpected event network-vif-plugged-5247b656-d92f-4246-8db1-32dd4ca770b1 for instance with vm_state deleted and task_state None.#033[00m
Jan 23 05:16:37 np0005593232 nova_compute[250269]: 2026-01-23 10:16:37.705 250273 DEBUG nova.compute.manager [req-44d60e55-b12c-4d3e-ac7e-8976091b3da9 req-1358bc8a-fdf7-417a-8905-8d8018fc9c39 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Received event network-vif-deleted-5247b656-d92f-4246-8db1-32dd4ca770b1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:16:37 np0005593232 nova_compute[250269]: 2026-01-23 10:16:37.735 250273 DEBUG oslo_concurrency.lockutils [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:16:37 np0005593232 nova_compute[250269]: 2026-01-23 10:16:37.825 250273 INFO nova.scheduler.client.report [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Deleted allocations for instance 8056a321-13d3-4dd8-bb33-70c832c17ac1#033[00m
Jan 23 05:16:37 np0005593232 nova_compute[250269]: 2026-01-23 10:16:37.831 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:38.033 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:16:38 np0005593232 nova_compute[250269]: 2026-01-23 10:16:38.072 250273 DEBUG oslo_concurrency.lockutils [None req-b4ad8c0b-1278-474b-99ff-0087a208cc94 fae914e59ec54f6b80928ef3cc68dbdb 0a6ba16c4b9d49d3bc24cd7b44935d1f - - default default] Lock "8056a321-13d3-4dd8-bb33-70c832c17ac1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:16:38 np0005593232 nova_compute[250269]: 2026-01-23 10:16:38.281 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:16:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:16:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:16:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:16:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:16:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:16:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:16:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:16:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:16:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:16:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:16:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:38.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:16:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:39.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:39 np0005593232 nova_compute[250269]: 2026-01-23 10:16:39.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:16:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2611: 321 pgs: 321 active+clean; 304 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 258 KiB/s wr, 60 op/s
Jan 23 05:16:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:40.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:41.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:41 np0005593232 nova_compute[250269]: 2026-01-23 10:16:41.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:16:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2612: 321 pgs: 321 active+clean; 319 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 989 KiB/s wr, 69 op/s
Jan 23 05:16:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:16:42 np0005593232 nova_compute[250269]: 2026-01-23 10:16:42.363 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:42.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:42.629 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:16:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:42.629 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:16:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:16:42.630 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:16:42 np0005593232 nova_compute[250269]: 2026-01-23 10:16:42.835 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:43.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:43 np0005593232 nova_compute[250269]: 2026-01-23 10:16:43.283 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2613: 321 pgs: 321 active+clean; 338 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.4 MiB/s wr, 66 op/s
Jan 23 05:16:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:16:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:44.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:16:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:16:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/807368308' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:16:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:16:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/807368308' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:16:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:45.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2614: 321 pgs: 321 active+clean; 348 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.0 MiB/s wr, 69 op/s
Jan 23 05:16:46 np0005593232 nova_compute[250269]: 2026-01-23 10:16:46.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:16:46 np0005593232 nova_compute[250269]: 2026-01-23 10:16:46.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:16:46 np0005593232 nova_compute[250269]: 2026-01-23 10:16:46.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:16:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:46.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:16:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:16:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:16:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:47.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:16:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:16:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:16:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:16:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.0901672715428998e-05 of space, bias 1.0, pg target 0.006270501814628699 quantized to 32 (current 32)
Jan 23 05:16:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:16:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.007586943688349153 of space, bias 1.0, pg target 2.276083106504746 quantized to 32 (current 32)
Jan 23 05:16:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:16:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:16:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:16:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8506777231062721 quantized to 32 (current 32)
Jan 23 05:16:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:16:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 23 05:16:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:16:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:16:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:16:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 23 05:16:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:16:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 23 05:16:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:16:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:16:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:16:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 23 05:16:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2615: 321 pgs: 321 active+clean; 348 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 731 KiB/s rd, 2.0 MiB/s wr, 69 op/s
Jan 23 05:16:47 np0005593232 nova_compute[250269]: 2026-01-23 10:16:47.838 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:48 np0005593232 nova_compute[250269]: 2026-01-23 10:16:48.020 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769163393.019143, 8056a321-13d3-4dd8-bb33-70c832c17ac1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:16:48 np0005593232 nova_compute[250269]: 2026-01-23 10:16:48.021 250273 INFO nova.compute.manager [-] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:16:48 np0005593232 nova_compute[250269]: 2026-01-23 10:16:48.286 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:48.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:49.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2616: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 2.0 MiB/s wr, 50 op/s
Jan 23 05:16:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:50.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:50 np0005593232 nova_compute[250269]: 2026-01-23 10:16:50.618 250273 DEBUG nova.compute.manager [None req-5feb7452-5cf0-4240-ac83-7b027ec3676e - - - - - -] [instance: 8056a321-13d3-4dd8-bb33-70c832c17ac1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:16:50 np0005593232 ovn_controller[151001]: 2026-01-23T10:16:50Z|00548|binding|INFO|Releasing lport 2c16e447-27d9-4516-bf23-ec948f375c10 from this chassis (sb_readonly=0)
Jan 23 05:16:50 np0005593232 nova_compute[250269]: 2026-01-23 10:16:50.750 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:16:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1517793893' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:16:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:16:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1517793893' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:16:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:51.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2617: 321 pgs: 321 active+clean; 348 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.9 MiB/s wr, 46 op/s
Jan 23 05:16:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:16:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:52.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:52 np0005593232 nova_compute[250269]: 2026-01-23 10:16:52.840 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:53.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:53 np0005593232 nova_compute[250269]: 2026-01-23 10:16:53.287 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2618: 321 pgs: 321 active+clean; 348 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 MiB/s wr, 28 op/s
Jan 23 05:16:54 np0005593232 nova_compute[250269]: 2026-01-23 10:16:54.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:16:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:54.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:55.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:55 np0005593232 nova_compute[250269]: 2026-01-23 10:16:55.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:16:55 np0005593232 nova_compute[250269]: 2026-01-23 10:16:55.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:16:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2619: 321 pgs: 321 active+clean; 348 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 739 KiB/s wr, 25 op/s
Jan 23 05:16:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:16:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:56.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:16:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:16:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:16:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:57.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:16:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2620: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 184 KiB/s wr, 32 op/s
Jan 23 05:16:57 np0005593232 nova_compute[250269]: 2026-01-23 10:16:57.843 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:58 np0005593232 nova_compute[250269]: 2026-01-23 10:16:58.289 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:16:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:16:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:16:58.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:16:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:16:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:16:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:16:59.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:16:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2621: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 181 KiB/s wr, 28 op/s
Jan 23 05:17:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:00.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:01.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:01 np0005593232 podman[344979]: 2026-01-23 10:17:01.420942464 +0000 UTC m=+0.079000245 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 05:17:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2622: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 180 KiB/s wr, 26 op/s
Jan 23 05:17:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:17:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:02.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:02 np0005593232 nova_compute[250269]: 2026-01-23 10:17:02.846 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:17:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:03.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:17:03 np0005593232 nova_compute[250269]: 2026-01-23 10:17:03.290 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:17:03 np0005593232 nova_compute[250269]: 2026-01-23 10:17:03.444 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 05:17:03 np0005593232 nova_compute[250269]: 2026-01-23 10:17:03.444 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 05:17:03 np0005593232 nova_compute[250269]: 2026-01-23 10:17:03.445 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 23 05:17:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2623: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 7.3 KiB/s wr, 18 op/s
Jan 23 05:17:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:17:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3768638326' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:17:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:17:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3768638326' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:17:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:04.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:04 np0005593232 nova_compute[250269]: 2026-01-23 10:17:04.913 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:17:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:04.913 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 05:17:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:04.916 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 05:17:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:17:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:05.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:17:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2624: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 2.7 KiB/s wr, 15 op/s
Jan 23 05:17:06 np0005593232 podman[345008]: 2026-01-23 10:17:06.388831854 +0000 UTC m=+0.053373397 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 23 05:17:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:06.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:06.918 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 05:17:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:17:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:17:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:07.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:17:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2625: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 2.7 KiB/s wr, 15 op/s
Jan 23 05:17:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:17:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:17:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:17:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:17:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:17:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:17:07 np0005593232 nova_compute[250269]: 2026-01-23 10:17:07.849 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:17:08 np0005593232 nova_compute[250269]: 2026-01-23 10:17:08.291 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:17:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:17:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:08.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:17:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:09.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2626: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 KiB/s rd, 341 B/s wr, 3 op/s
Jan 23 05:17:10 np0005593232 nova_compute[250269]: 2026-01-23 10:17:10.120 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:17:10 np0005593232 nova_compute[250269]: 2026-01-23 10:17:10.196 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Updating instance_info_cache with network_info: [{"id": "4a04244c-3270-4ff3-ad30-52e80e7db513", "address": "fa:16:3e:c0:68:91", "network": {"id": "f98d79de-4a23-4f29-9848-c5d4c5683a5d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1507431135-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ae621f21a8e438fb95152309b38cee5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a04244c-32", "ovs_interfaceid": "4a04244c-3270-4ff3-ad30-52e80e7db513", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 05:17:10 np0005593232 nova_compute[250269]: 2026-01-23 10:17:10.310 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 05:17:10 np0005593232 nova_compute[250269]: 2026-01-23 10:17:10.311 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 23 05:17:10 np0005593232 nova_compute[250269]: 2026-01-23 10:17:10.311 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:17:10 np0005593232 nova_compute[250269]: 2026-01-23 10:17:10.312 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:17:10 np0005593232 nova_compute[250269]: 2026-01-23 10:17:10.361 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:17:10 np0005593232 nova_compute[250269]: 2026-01-23 10:17:10.362 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:17:10 np0005593232 nova_compute[250269]: 2026-01-23 10:17:10.362 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:17:10 np0005593232 nova_compute[250269]: 2026-01-23 10:17:10.362 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 05:17:10 np0005593232 nova_compute[250269]: 2026-01-23 10:17:10.363 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:17:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:10.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:17:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2021288148' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:17:10 np0005593232 nova_compute[250269]: 2026-01-23 10:17:10.822 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:17:11 np0005593232 nova_compute[250269]: 2026-01-23 10:17:11.065 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 23 05:17:11 np0005593232 nova_compute[250269]: 2026-01-23 10:17:11.066 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 23 05:17:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:17:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:11.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:17:11 np0005593232 nova_compute[250269]: 2026-01-23 10:17:11.282 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 05:17:11 np0005593232 nova_compute[250269]: 2026-01-23 10:17:11.283 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4043MB free_disk=20.987842559814453GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 05:17:11 np0005593232 nova_compute[250269]: 2026-01-23 10:17:11.284 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:17:11 np0005593232 nova_compute[250269]: 2026-01-23 10:17:11.284 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:17:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2627: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:17:11 np0005593232 nova_compute[250269]: 2026-01-23 10:17:11.535 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance ccd07f55-529f-4dbb-989c-2cdbdd393a0b actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 23 05:17:11 np0005593232 nova_compute[250269]: 2026-01-23 10:17:11.536 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 05:17:11 np0005593232 nova_compute[250269]: 2026-01-23 10:17:11.536 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 05:17:11 np0005593232 nova_compute[250269]: 2026-01-23 10:17:11.601 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:17:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:17:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3489617333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:17:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:17:12 np0005593232 nova_compute[250269]: 2026-01-23 10:17:12.025 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:17:12 np0005593232 nova_compute[250269]: 2026-01-23 10:17:12.031 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 05:17:12 np0005593232 nova_compute[250269]: 2026-01-23 10:17:12.072 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 05:17:12 np0005593232 nova_compute[250269]: 2026-01-23 10:17:12.142 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 05:17:12 np0005593232 nova_compute[250269]: 2026-01-23 10:17:12.142 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:17:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:17:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:12.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:17:12 np0005593232 nova_compute[250269]: 2026-01-23 10:17:12.851 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:17:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:13.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:13 np0005593232 nova_compute[250269]: 2026-01-23 10:17:13.293 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:17:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2628: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:17:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:14.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:15 np0005593232 nova_compute[250269]: 2026-01-23 10:17:15.004 250273 DEBUG oslo_concurrency.lockutils [None req-ef36f91f-f1d2-4e5e-a796-da5259ad93e2 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Acquiring lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:17:15 np0005593232 nova_compute[250269]: 2026-01-23 10:17:15.004 250273 DEBUG oslo_concurrency.lockutils [None req-ef36f91f-f1d2-4e5e-a796-da5259ad93e2 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:17:15 np0005593232 nova_compute[250269]: 2026-01-23 10:17:15.101 250273 DEBUG nova.objects.instance [None req-ef36f91f-f1d2-4e5e-a796-da5259ad93e2 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lazy-loading 'flavor' on Instance uuid ccd07f55-529f-4dbb-989c-2cdbdd393a0b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 05:17:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:15.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:15 np0005593232 nova_compute[250269]: 2026-01-23 10:17:15.402 250273 DEBUG oslo_concurrency.lockutils [None req-ef36f91f-f1d2-4e5e-a796-da5259ad93e2 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.397s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:17:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2629: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Jan 23 05:17:16 np0005593232 nova_compute[250269]: 2026-01-23 10:17:16.210 250273 DEBUG oslo_concurrency.lockutils [None req-ef36f91f-f1d2-4e5e-a796-da5259ad93e2 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Acquiring lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:17:16 np0005593232 nova_compute[250269]: 2026-01-23 10:17:16.211 250273 DEBUG oslo_concurrency.lockutils [None req-ef36f91f-f1d2-4e5e-a796-da5259ad93e2 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:17:16 np0005593232 nova_compute[250269]: 2026-01-23 10:17:16.211 250273 INFO nova.compute.manager [None req-ef36f91f-f1d2-4e5e-a796-da5259ad93e2 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Attaching volume 23e1c9da-11aa-46ec-90eb-aa0b6ee150dc to /dev/vdb
Jan 23 05:17:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:17:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:17:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:17:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:17:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:17:16 np0005593232 nova_compute[250269]: 2026-01-23 10:17:16.414 250273 DEBUG os_brick.utils [None req-ef36f91f-f1d2-4e5e-a796-da5259ad93e2 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 23 05:17:16 np0005593232 nova_compute[250269]: 2026-01-23 10:17:16.415 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:17:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:17:16 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev acff11fa-445d-4bf4-9f87-b701b78e04ef does not exist
Jan 23 05:17:16 np0005593232 nova_compute[250269]: 2026-01-23 10:17:16.427 265031 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:17:16 np0005593232 nova_compute[250269]: 2026-01-23 10:17:16.427 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[d201e0ac-6462-4c59-98a8-db99c0eaf6b8]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:17:16 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 136f9f4c-89c0-47b0-9017-a693e4dc0f5c does not exist
Jan 23 05:17:16 np0005593232 nova_compute[250269]: 2026-01-23 10:17:16.429 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:17:16 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev aaa216bd-0eae-411a-a8b9-d1aac53c25b6 does not exist
Jan 23 05:17:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:17:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:17:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:17:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:17:16 np0005593232 nova_compute[250269]: 2026-01-23 10:17:16.436 265031 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:17:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:17:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:17:16 np0005593232 nova_compute[250269]: 2026-01-23 10:17:16.437 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[a6068987-062f-4c9f-85f7-fb0b8fcb2f07]: (4, ('InitiatorName=iqn.1994-05.com.redhat:c6473626b456', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:17:16 np0005593232 nova_compute[250269]: 2026-01-23 10:17:16.438 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:17:16 np0005593232 nova_compute[250269]: 2026-01-23 10:17:16.446 265031 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:17:16 np0005593232 nova_compute[250269]: 2026-01-23 10:17:16.446 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[66bdf1ec-68a4-4c63-aa00-dc3bf51803f3]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:17:16 np0005593232 nova_compute[250269]: 2026-01-23 10:17:16.447 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[507ba50a-def9-4cb1-a9c4-1f16cfa4f629]: (4, 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:17:16 np0005593232 nova_compute[250269]: 2026-01-23 10:17:16.448 250273 DEBUG oslo_concurrency.processutils [None req-ef36f91f-f1d2-4e5e-a796-da5259ad93e2 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:17:16 np0005593232 nova_compute[250269]: 2026-01-23 10:17:16.476 250273 DEBUG oslo_concurrency.processutils [None req-ef36f91f-f1d2-4e5e-a796-da5259ad93e2 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:17:16 np0005593232 nova_compute[250269]: 2026-01-23 10:17:16.478 250273 DEBUG os_brick.initiator.connectors.lightos [None req-ef36f91f-f1d2-4e5e-a796-da5259ad93e2 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 23 05:17:16 np0005593232 nova_compute[250269]: 2026-01-23 10:17:16.479 250273 DEBUG os_brick.initiator.connectors.lightos [None req-ef36f91f-f1d2-4e5e-a796-da5259ad93e2 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 23 05:17:16 np0005593232 nova_compute[250269]: 2026-01-23 10:17:16.479 250273 DEBUG os_brick.initiator.connectors.lightos [None req-ef36f91f-f1d2-4e5e-a796-da5259ad93e2 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 23 05:17:16 np0005593232 nova_compute[250269]: 2026-01-23 10:17:16.479 250273 DEBUG os_brick.utils [None req-ef36f91f-f1d2-4e5e-a796-da5259ad93e2 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] <== get_connector_properties: return (64ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:c6473626b456', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 23 05:17:16 np0005593232 nova_compute[250269]: 2026-01-23 10:17:16.480 250273 DEBUG nova.virt.block_device [None req-ef36f91f-f1d2-4e5e-a796-da5259ad93e2 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Updating existing volume attachment record: 617bc764-5592-42ac-9132-d7451ff7bd3b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 23 05:17:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:16.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:17:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:17:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:17:16 np0005593232 podman[345404]: 2026-01-23 10:17:16.964232729 +0000 UTC m=+0.036928170 container create 0f590f53b061b1f05b84a3e1ecba507fbf5163780d3251df84afb9249aea20ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 23 05:17:16 np0005593232 systemd[1]: Started libpod-conmon-0f590f53b061b1f05b84a3e1ecba507fbf5163780d3251df84afb9249aea20ba.scope.
Jan 23 05:17:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:17:17 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:17:17 np0005593232 podman[345404]: 2026-01-23 10:17:17.043724468 +0000 UTC m=+0.116419919 container init 0f590f53b061b1f05b84a3e1ecba507fbf5163780d3251df84afb9249aea20ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:17:17 np0005593232 podman[345404]: 2026-01-23 10:17:16.948803851 +0000 UTC m=+0.021499292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:17:17 np0005593232 podman[345404]: 2026-01-23 10:17:17.050679506 +0000 UTC m=+0.123374947 container start 0f590f53b061b1f05b84a3e1ecba507fbf5163780d3251df84afb9249aea20ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_herschel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 23 05:17:17 np0005593232 podman[345404]: 2026-01-23 10:17:17.05469221 +0000 UTC m=+0.127387671 container attach 0f590f53b061b1f05b84a3e1ecba507fbf5163780d3251df84afb9249aea20ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_herschel, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:17:17 np0005593232 thirsty_herschel[345420]: 167 167
Jan 23 05:17:17 np0005593232 systemd[1]: libpod-0f590f53b061b1f05b84a3e1ecba507fbf5163780d3251df84afb9249aea20ba.scope: Deactivated successfully.
Jan 23 05:17:17 np0005593232 podman[345404]: 2026-01-23 10:17:17.057464079 +0000 UTC m=+0.130159520 container died 0f590f53b061b1f05b84a3e1ecba507fbf5163780d3251df84afb9249aea20ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_herschel, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:17:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay-fb66d1b6784924ca411f8e4ee5e15217675bbc71e6cd46424f6aebf7b0f194e3-merged.mount: Deactivated successfully.
Jan 23 05:17:17 np0005593232 podman[345404]: 2026-01-23 10:17:17.108056306 +0000 UTC m=+0.180751777 container remove 0f590f53b061b1f05b84a3e1ecba507fbf5163780d3251df84afb9249aea20ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 23 05:17:17 np0005593232 systemd[1]: libpod-conmon-0f590f53b061b1f05b84a3e1ecba507fbf5163780d3251df84afb9249aea20ba.scope: Deactivated successfully.
Jan 23 05:17:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:17.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:17 np0005593232 podman[345442]: 2026-01-23 10:17:17.286066714 +0000 UTC m=+0.035471539 container create 364e72df3aa009bf3f41c1a071375965c62bbe6a4cb8f3459e1de82325f7ecf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_carver, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 23 05:17:17 np0005593232 systemd[1]: Started libpod-conmon-364e72df3aa009bf3f41c1a071375965c62bbe6a4cb8f3459e1de82325f7ecf9.scope.
Jan 23 05:17:17 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:17:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c173905de96844b59d6b0ded3ee47693c6d3515f4e488f8b4d79d9b5eb059d6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:17:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c173905de96844b59d6b0ded3ee47693c6d3515f4e488f8b4d79d9b5eb059d6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:17:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c173905de96844b59d6b0ded3ee47693c6d3515f4e488f8b4d79d9b5eb059d6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:17:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c173905de96844b59d6b0ded3ee47693c6d3515f4e488f8b4d79d9b5eb059d6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:17:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c173905de96844b59d6b0ded3ee47693c6d3515f4e488f8b4d79d9b5eb059d6b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:17:17 np0005593232 podman[345442]: 2026-01-23 10:17:17.366285184 +0000 UTC m=+0.115690039 container init 364e72df3aa009bf3f41c1a071375965c62bbe6a4cb8f3459e1de82325f7ecf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 23 05:17:17 np0005593232 podman[345442]: 2026-01-23 10:17:17.27184843 +0000 UTC m=+0.021253265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:17:17 np0005593232 podman[345442]: 2026-01-23 10:17:17.372560062 +0000 UTC m=+0.121964877 container start 364e72df3aa009bf3f41c1a071375965c62bbe6a4cb8f3459e1de82325f7ecf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_carver, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Jan 23 05:17:17 np0005593232 podman[345442]: 2026-01-23 10:17:17.376861434 +0000 UTC m=+0.126266259 container attach 364e72df3aa009bf3f41c1a071375965c62bbe6a4cb8f3459e1de82325f7ecf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_carver, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 05:17:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2630: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 14 KiB/s wr, 52 op/s
Jan 23 05:17:17 np0005593232 nova_compute[250269]: 2026-01-23 10:17:17.696 250273 DEBUG nova.objects.instance [None req-ef36f91f-f1d2-4e5e-a796-da5259ad93e2 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lazy-loading 'flavor' on Instance uuid ccd07f55-529f-4dbb-989c-2cdbdd393a0b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 05:17:17 np0005593232 nova_compute[250269]: 2026-01-23 10:17:17.768 250273 DEBUG nova.virt.libvirt.driver [None req-ef36f91f-f1d2-4e5e-a796-da5259ad93e2 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Attempting to attach volume 23e1c9da-11aa-46ec-90eb-aa0b6ee150dc with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 23 05:17:17 np0005593232 nova_compute[250269]: 2026-01-23 10:17:17.772 250273 DEBUG nova.virt.libvirt.guest [None req-ef36f91f-f1d2-4e5e-a796-da5259ad93e2 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] attach device xml: <disk type="network" device="disk">
Jan 23 05:17:17 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:17:17 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-23e1c9da-11aa-46ec-90eb-aa0b6ee150dc">
Jan 23 05:17:17 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:17:17 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:17:17 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:17:17 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:17:17 np0005593232 nova_compute[250269]:  <auth username="openstack">
Jan 23 05:17:17 np0005593232 nova_compute[250269]:    <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:17:17 np0005593232 nova_compute[250269]:  </auth>
Jan 23 05:17:17 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 05:17:17 np0005593232 nova_compute[250269]:  <serial>23e1c9da-11aa-46ec-90eb-aa0b6ee150dc</serial>
Jan 23 05:17:17 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:17:17 np0005593232 nova_compute[250269]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 23 05:17:17 np0005593232 nova_compute[250269]: 2026-01-23 10:17:17.900 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:17:17 np0005593232 nova_compute[250269]: 2026-01-23 10:17:17.963 250273 DEBUG nova.virt.libvirt.driver [None req-ef36f91f-f1d2-4e5e-a796-da5259ad93e2 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 05:17:17 np0005593232 nova_compute[250269]: 2026-01-23 10:17:17.963 250273 DEBUG nova.virt.libvirt.driver [None req-ef36f91f-f1d2-4e5e-a796-da5259ad93e2 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 05:17:17 np0005593232 nova_compute[250269]: 2026-01-23 10:17:17.964 250273 DEBUG nova.virt.libvirt.driver [None req-ef36f91f-f1d2-4e5e-a796-da5259ad93e2 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 05:17:17 np0005593232 nova_compute[250269]: 2026-01-23 10:17:17.964 250273 DEBUG nova.virt.libvirt.driver [None req-ef36f91f-f1d2-4e5e-a796-da5259ad93e2 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] No VIF found with MAC fa:16:3e:c0:68:91, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 23 05:17:18 np0005593232 gifted_carver[345458]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:17:18 np0005593232 gifted_carver[345458]: --> relative data size: 1.0
Jan 23 05:17:18 np0005593232 gifted_carver[345458]: --> All data devices are unavailable
Jan 23 05:17:18 np0005593232 systemd[1]: libpod-364e72df3aa009bf3f41c1a071375965c62bbe6a4cb8f3459e1de82325f7ecf9.scope: Deactivated successfully.
Jan 23 05:17:18 np0005593232 podman[345442]: 2026-01-23 10:17:18.202819593 +0000 UTC m=+0.952224438 container died 364e72df3aa009bf3f41c1a071375965c62bbe6a4cb8f3459e1de82325f7ecf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:17:18 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c173905de96844b59d6b0ded3ee47693c6d3515f4e488f8b4d79d9b5eb059d6b-merged.mount: Deactivated successfully.
Jan 23 05:17:18 np0005593232 podman[345442]: 2026-01-23 10:17:18.258245498 +0000 UTC m=+1.007650323 container remove 364e72df3aa009bf3f41c1a071375965c62bbe6a4cb8f3459e1de82325f7ecf9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:17:18 np0005593232 systemd[1]: libpod-conmon-364e72df3aa009bf3f41c1a071375965c62bbe6a4cb8f3459e1de82325f7ecf9.scope: Deactivated successfully.
Jan 23 05:17:18 np0005593232 nova_compute[250269]: 2026-01-23 10:17:18.295 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:17:18 np0005593232 nova_compute[250269]: 2026-01-23 10:17:18.396 250273 DEBUG oslo_concurrency.lockutils [None req-ef36f91f-f1d2-4e5e-a796-da5259ad93e2 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:17:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:18.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:18 np0005593232 podman[345645]: 2026-01-23 10:17:18.812152346 +0000 UTC m=+0.037174857 container create b62f902c1de2a7c52ad45245b7bda7d1253f9cde86b7ee8653c0d65d417d9a7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 23 05:17:18 np0005593232 systemd[1]: Started libpod-conmon-b62f902c1de2a7c52ad45245b7bda7d1253f9cde86b7ee8653c0d65d417d9a7e.scope.
Jan 23 05:17:18 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:17:18 np0005593232 podman[345645]: 2026-01-23 10:17:18.796535723 +0000 UTC m=+0.021558264 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:17:18 np0005593232 podman[345645]: 2026-01-23 10:17:18.901155565 +0000 UTC m=+0.126178096 container init b62f902c1de2a7c52ad45245b7bda7d1253f9cde86b7ee8653c0d65d417d9a7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_blackwell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:17:18 np0005593232 podman[345645]: 2026-01-23 10:17:18.90941735 +0000 UTC m=+0.134439861 container start b62f902c1de2a7c52ad45245b7bda7d1253f9cde86b7ee8653c0d65d417d9a7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_blackwell, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 05:17:18 np0005593232 podman[345645]: 2026-01-23 10:17:18.912598741 +0000 UTC m=+0.137621282 container attach b62f902c1de2a7c52ad45245b7bda7d1253f9cde86b7ee8653c0d65d417d9a7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 05:17:18 np0005593232 vibrant_blackwell[345662]: 167 167
Jan 23 05:17:18 np0005593232 systemd[1]: libpod-b62f902c1de2a7c52ad45245b7bda7d1253f9cde86b7ee8653c0d65d417d9a7e.scope: Deactivated successfully.
Jan 23 05:17:18 np0005593232 podman[345645]: 2026-01-23 10:17:18.914829914 +0000 UTC m=+0.139852445 container died b62f902c1de2a7c52ad45245b7bda7d1253f9cde86b7ee8653c0d65d417d9a7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_blackwell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:17:18 np0005593232 systemd[1]: var-lib-containers-storage-overlay-0321964c18b7359617cd859347da34856bbea7bf86ea2cdc52d1b2e8d7c06f10-merged.mount: Deactivated successfully.
Jan 23 05:17:18 np0005593232 podman[345645]: 2026-01-23 10:17:18.956405855 +0000 UTC m=+0.181428366 container remove b62f902c1de2a7c52ad45245b7bda7d1253f9cde86b7ee8653c0d65d417d9a7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_blackwell, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 05:17:18 np0005593232 systemd[1]: libpod-conmon-b62f902c1de2a7c52ad45245b7bda7d1253f9cde86b7ee8653c0d65d417d9a7e.scope: Deactivated successfully.
Jan 23 05:17:19 np0005593232 podman[345687]: 2026-01-23 10:17:19.099381278 +0000 UTC m=+0.035679305 container create a7061869dc42196bea38673c4984c2167fed64de5b661de04af2ded3f8e34c74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_benz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 23 05:17:19 np0005593232 systemd[1]: Started libpod-conmon-a7061869dc42196bea38673c4984c2167fed64de5b661de04af2ded3f8e34c74.scope.
Jan 23 05:17:19 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:17:19 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b489bba96e342a9124b3e956a90da37893d23924b185b69499efcb85db0010da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:17:19 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b489bba96e342a9124b3e956a90da37893d23924b185b69499efcb85db0010da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:17:19 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b489bba96e342a9124b3e956a90da37893d23924b185b69499efcb85db0010da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:17:19 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b489bba96e342a9124b3e956a90da37893d23924b185b69499efcb85db0010da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:17:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:19.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:19 np0005593232 podman[345687]: 2026-01-23 10:17:19.176944272 +0000 UTC m=+0.113242289 container init a7061869dc42196bea38673c4984c2167fed64de5b661de04af2ded3f8e34c74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_benz, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 05:17:19 np0005593232 podman[345687]: 2026-01-23 10:17:19.083674272 +0000 UTC m=+0.019972309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:17:19 np0005593232 podman[345687]: 2026-01-23 10:17:19.184296151 +0000 UTC m=+0.120594158 container start a7061869dc42196bea38673c4984c2167fed64de5b661de04af2ded3f8e34c74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_benz, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:17:19 np0005593232 podman[345687]: 2026-01-23 10:17:19.189790597 +0000 UTC m=+0.126088604 container attach a7061869dc42196bea38673c4984c2167fed64de5b661de04af2ded3f8e34c74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Jan 23 05:17:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2631: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 23 05:17:19 np0005593232 nova_compute[250269]: 2026-01-23 10:17:19.567 250273 DEBUG oslo_concurrency.lockutils [None req-8e47ba1c-2283-4537-812f-71a3f0a8c90f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Acquiring lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:17:19 np0005593232 nova_compute[250269]: 2026-01-23 10:17:19.568 250273 DEBUG oslo_concurrency.lockutils [None req-8e47ba1c-2283-4537-812f-71a3f0a8c90f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:17:19 np0005593232 nova_compute[250269]: 2026-01-23 10:17:19.596 250273 DEBUG nova.objects.instance [None req-8e47ba1c-2283-4537-812f-71a3f0a8c90f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lazy-loading 'flavor' on Instance uuid ccd07f55-529f-4dbb-989c-2cdbdd393a0b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:17:19 np0005593232 nova_compute[250269]: 2026-01-23 10:17:19.665 250273 DEBUG oslo_concurrency.lockutils [None req-8e47ba1c-2283-4537-812f-71a3f0a8c90f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:17:19 np0005593232 gallant_benz[345703]: {
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:    "0": [
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:        {
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:            "devices": [
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:                "/dev/loop3"
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:            ],
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:            "lv_name": "ceph_lv0",
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:            "lv_size": "7511998464",
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:            "name": "ceph_lv0",
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:            "tags": {
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:                "ceph.cluster_name": "ceph",
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:                "ceph.crush_device_class": "",
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:                "ceph.encrypted": "0",
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:                "ceph.osd_id": "0",
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:                "ceph.type": "block",
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:                "ceph.vdo": "0"
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:            },
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:            "type": "block",
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:            "vg_name": "ceph_vg0"
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:        }
Jan 23 05:17:19 np0005593232 gallant_benz[345703]:    ]
Jan 23 05:17:19 np0005593232 gallant_benz[345703]: }
Jan 23 05:17:19 np0005593232 systemd[1]: libpod-a7061869dc42196bea38673c4984c2167fed64de5b661de04af2ded3f8e34c74.scope: Deactivated successfully.
Jan 23 05:17:19 np0005593232 podman[345687]: 2026-01-23 10:17:19.973918258 +0000 UTC m=+0.910216265 container died a7061869dc42196bea38673c4984c2167fed64de5b661de04af2ded3f8e34c74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:17:19 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b489bba96e342a9124b3e956a90da37893d23924b185b69499efcb85db0010da-merged.mount: Deactivated successfully.
Jan 23 05:17:20 np0005593232 podman[345687]: 2026-01-23 10:17:20.025117752 +0000 UTC m=+0.961415759 container remove a7061869dc42196bea38673c4984c2167fed64de5b661de04af2ded3f8e34c74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_benz, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 05:17:20 np0005593232 systemd[1]: libpod-conmon-a7061869dc42196bea38673c4984c2167fed64de5b661de04af2ded3f8e34c74.scope: Deactivated successfully.
Jan 23 05:17:20 np0005593232 nova_compute[250269]: 2026-01-23 10:17:20.117 250273 DEBUG oslo_concurrency.lockutils [None req-8e47ba1c-2283-4537-812f-71a3f0a8c90f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Acquiring lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:17:20 np0005593232 nova_compute[250269]: 2026-01-23 10:17:20.118 250273 DEBUG oslo_concurrency.lockutils [None req-8e47ba1c-2283-4537-812f-71a3f0a8c90f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:17:20 np0005593232 nova_compute[250269]: 2026-01-23 10:17:20.119 250273 INFO nova.compute.manager [None req-8e47ba1c-2283-4537-812f-71a3f0a8c90f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Attaching volume b016d51c-9f03-4ca0-86e5-02cc8ca5059f to /dev/vdc#033[00m
Jan 23 05:17:20 np0005593232 nova_compute[250269]: 2026-01-23 10:17:20.454 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:20 np0005593232 nova_compute[250269]: 2026-01-23 10:17:20.508 250273 DEBUG os_brick.utils [None req-8e47ba1c-2283-4537-812f-71a3f0a8c90f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 23 05:17:20 np0005593232 nova_compute[250269]: 2026-01-23 10:17:20.509 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:17:20 np0005593232 nova_compute[250269]: 2026-01-23 10:17:20.521 265031 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:17:20 np0005593232 nova_compute[250269]: 2026-01-23 10:17:20.522 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[0b970e6e-8af6-4cb5-8977-4247a220fef9]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:17:20 np0005593232 nova_compute[250269]: 2026-01-23 10:17:20.523 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:17:20 np0005593232 nova_compute[250269]: 2026-01-23 10:17:20.533 265031 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:17:20 np0005593232 nova_compute[250269]: 2026-01-23 10:17:20.534 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf4c5b5-6231-4781-8ce3-d2b1b9af3ffb]: (4, ('InitiatorName=iqn.1994-05.com.redhat:c6473626b456', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:17:20 np0005593232 nova_compute[250269]: 2026-01-23 10:17:20.536 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:17:20 np0005593232 nova_compute[250269]: 2026-01-23 10:17:20.545 265031 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:17:20 np0005593232 nova_compute[250269]: 2026-01-23 10:17:20.546 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[587aae59-baed-4d44-aea4-993730595cad]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:17:20 np0005593232 nova_compute[250269]: 2026-01-23 10:17:20.548 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[c5384f40-5809-466f-a13b-6608203ccc54]: (4, 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:17:20 np0005593232 nova_compute[250269]: 2026-01-23 10:17:20.548 250273 DEBUG oslo_concurrency.processutils [None req-8e47ba1c-2283-4537-812f-71a3f0a8c90f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:17:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:20.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:20 np0005593232 nova_compute[250269]: 2026-01-23 10:17:20.585 250273 DEBUG oslo_concurrency.processutils [None req-8e47ba1c-2283-4537-812f-71a3f0a8c90f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:17:20 np0005593232 nova_compute[250269]: 2026-01-23 10:17:20.589 250273 DEBUG os_brick.initiator.connectors.lightos [None req-8e47ba1c-2283-4537-812f-71a3f0a8c90f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 23 05:17:20 np0005593232 nova_compute[250269]: 2026-01-23 10:17:20.589 250273 DEBUG os_brick.initiator.connectors.lightos [None req-8e47ba1c-2283-4537-812f-71a3f0a8c90f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 23 05:17:20 np0005593232 nova_compute[250269]: 2026-01-23 10:17:20.589 250273 DEBUG os_brick.initiator.connectors.lightos [None req-8e47ba1c-2283-4537-812f-71a3f0a8c90f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 23 05:17:20 np0005593232 nova_compute[250269]: 2026-01-23 10:17:20.590 250273 DEBUG os_brick.utils [None req-8e47ba1c-2283-4537-812f-71a3f0a8c90f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] <== get_connector_properties: return (81ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:c6473626b456', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 23 05:17:20 np0005593232 nova_compute[250269]: 2026-01-23 10:17:20.590 250273 DEBUG nova.virt.block_device [None req-8e47ba1c-2283-4537-812f-71a3f0a8c90f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Updating existing volume attachment record: 44675786-87de-4130-a118-01a6f88fed0a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 23 05:17:20 np0005593232 podman[345873]: 2026-01-23 10:17:20.610321631 +0000 UTC m=+0.043007843 container create 14bd9deec89e19de6637ebbd5403d265d52a804d90c2c5fa619dec0ce0119de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 05:17:20 np0005593232 systemd[1]: Started libpod-conmon-14bd9deec89e19de6637ebbd5403d265d52a804d90c2c5fa619dec0ce0119de8.scope.
Jan 23 05:17:20 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:17:20 np0005593232 podman[345873]: 2026-01-23 10:17:20.592273978 +0000 UTC m=+0.024960200 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:17:20 np0005593232 podman[345873]: 2026-01-23 10:17:20.701151322 +0000 UTC m=+0.133837544 container init 14bd9deec89e19de6637ebbd5403d265d52a804d90c2c5fa619dec0ce0119de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:17:20 np0005593232 podman[345873]: 2026-01-23 10:17:20.708148291 +0000 UTC m=+0.140834493 container start 14bd9deec89e19de6637ebbd5403d265d52a804d90c2c5fa619dec0ce0119de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_borg, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 23 05:17:20 np0005593232 podman[345873]: 2026-01-23 10:17:20.711060513 +0000 UTC m=+0.143746745 container attach 14bd9deec89e19de6637ebbd5403d265d52a804d90c2c5fa619dec0ce0119de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_borg, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 05:17:20 np0005593232 competent_borg[345890]: 167 167
Jan 23 05:17:20 np0005593232 systemd[1]: libpod-14bd9deec89e19de6637ebbd5403d265d52a804d90c2c5fa619dec0ce0119de8.scope: Deactivated successfully.
Jan 23 05:17:20 np0005593232 podman[345873]: 2026-01-23 10:17:20.713786871 +0000 UTC m=+0.146473063 container died 14bd9deec89e19de6637ebbd5403d265d52a804d90c2c5fa619dec0ce0119de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_borg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Jan 23 05:17:20 np0005593232 systemd[1]: var-lib-containers-storage-overlay-580a91123339d0b654aca000b53c38612a7f3333d661aa5f7dd9e63fbc734063-merged.mount: Deactivated successfully.
Jan 23 05:17:20 np0005593232 podman[345873]: 2026-01-23 10:17:20.752741048 +0000 UTC m=+0.185427250 container remove 14bd9deec89e19de6637ebbd5403d265d52a804d90c2c5fa619dec0ce0119de8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 05:17:20 np0005593232 systemd[1]: libpod-conmon-14bd9deec89e19de6637ebbd5403d265d52a804d90c2c5fa619dec0ce0119de8.scope: Deactivated successfully.
Jan 23 05:17:20 np0005593232 podman[345913]: 2026-01-23 10:17:20.922582084 +0000 UTC m=+0.037448085 container create 085928d8c5fcea847e533e08717132316cfa33ad60a19ec9e37351afd8d3466c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_montalcini, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 05:17:20 np0005593232 systemd[1]: Started libpod-conmon-085928d8c5fcea847e533e08717132316cfa33ad60a19ec9e37351afd8d3466c.scope.
Jan 23 05:17:20 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:17:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb164f015d5025b24c1eeb7b4ae3a1492a22beb3819dad5f92921fc60d19d741/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:17:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb164f015d5025b24c1eeb7b4ae3a1492a22beb3819dad5f92921fc60d19d741/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:17:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb164f015d5025b24c1eeb7b4ae3a1492a22beb3819dad5f92921fc60d19d741/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:17:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb164f015d5025b24c1eeb7b4ae3a1492a22beb3819dad5f92921fc60d19d741/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:17:20 np0005593232 podman[345913]: 2026-01-23 10:17:20.99460314 +0000 UTC m=+0.109469151 container init 085928d8c5fcea847e533e08717132316cfa33ad60a19ec9e37351afd8d3466c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_montalcini, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:17:21 np0005593232 podman[345913]: 2026-01-23 10:17:21.001329071 +0000 UTC m=+0.116195072 container start 085928d8c5fcea847e533e08717132316cfa33ad60a19ec9e37351afd8d3466c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_montalcini, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 23 05:17:21 np0005593232 podman[345913]: 2026-01-23 10:17:20.907016081 +0000 UTC m=+0.021882102 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:17:21 np0005593232 podman[345913]: 2026-01-23 10:17:21.004779959 +0000 UTC m=+0.119645980 container attach 085928d8c5fcea847e533e08717132316cfa33ad60a19ec9e37351afd8d3466c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_montalcini, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:17:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:17:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:21.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:17:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2632: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 23 05:17:21 np0005593232 competent_montalcini[345929]: {
Jan 23 05:17:21 np0005593232 competent_montalcini[345929]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:17:21 np0005593232 competent_montalcini[345929]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:17:21 np0005593232 competent_montalcini[345929]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:17:21 np0005593232 competent_montalcini[345929]:        "osd_id": 0,
Jan 23 05:17:21 np0005593232 competent_montalcini[345929]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:17:21 np0005593232 competent_montalcini[345929]:        "type": "bluestore"
Jan 23 05:17:21 np0005593232 competent_montalcini[345929]:    }
Jan 23 05:17:21 np0005593232 competent_montalcini[345929]: }
Jan 23 05:17:21 np0005593232 systemd[1]: libpod-085928d8c5fcea847e533e08717132316cfa33ad60a19ec9e37351afd8d3466c.scope: Deactivated successfully.
Jan 23 05:17:21 np0005593232 podman[345913]: 2026-01-23 10:17:21.858451196 +0000 UTC m=+0.973317197 container died 085928d8c5fcea847e533e08717132316cfa33ad60a19ec9e37351afd8d3466c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 05:17:21 np0005593232 systemd[1]: var-lib-containers-storage-overlay-fb164f015d5025b24c1eeb7b4ae3a1492a22beb3819dad5f92921fc60d19d741-merged.mount: Deactivated successfully.
Jan 23 05:17:21 np0005593232 podman[345913]: 2026-01-23 10:17:21.909009453 +0000 UTC m=+1.023875454 container remove 085928d8c5fcea847e533e08717132316cfa33ad60a19ec9e37351afd8d3466c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_montalcini, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 05:17:21 np0005593232 systemd[1]: libpod-conmon-085928d8c5fcea847e533e08717132316cfa33ad60a19ec9e37351afd8d3466c.scope: Deactivated successfully.
Jan 23 05:17:21 np0005593232 nova_compute[250269]: 2026-01-23 10:17:21.934 250273 DEBUG nova.objects.instance [None req-8e47ba1c-2283-4537-812f-71a3f0a8c90f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lazy-loading 'flavor' on Instance uuid ccd07f55-529f-4dbb-989c-2cdbdd393a0b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:17:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:17:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:17:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:17:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:17:21 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c50ddda3-0664-48a5-893c-4b0c30bff3d0 does not exist
Jan 23 05:17:21 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c54473d2-588e-4df0-9a06-2fd3f838a6c0 does not exist
Jan 23 05:17:21 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 44507d0f-e686-4114-b83f-11046f602e91 does not exist
Jan 23 05:17:21 np0005593232 nova_compute[250269]: 2026-01-23 10:17:21.964 250273 DEBUG nova.virt.libvirt.driver [None req-8e47ba1c-2283-4537-812f-71a3f0a8c90f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Attempting to attach volume b016d51c-9f03-4ca0-86e5-02cc8ca5059f with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 23 05:17:21 np0005593232 nova_compute[250269]: 2026-01-23 10:17:21.967 250273 DEBUG nova.virt.libvirt.guest [None req-8e47ba1c-2283-4537-812f-71a3f0a8c90f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] attach device xml: <disk type="network" device="disk">
Jan 23 05:17:21 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:17:21 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-b016d51c-9f03-4ca0-86e5-02cc8ca5059f">
Jan 23 05:17:21 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:17:21 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:17:21 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:17:21 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:17:21 np0005593232 nova_compute[250269]:  <auth username="openstack">
Jan 23 05:17:21 np0005593232 nova_compute[250269]:    <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:17:21 np0005593232 nova_compute[250269]:  </auth>
Jan 23 05:17:21 np0005593232 nova_compute[250269]:  <target dev="vdc" bus="virtio"/>
Jan 23 05:17:21 np0005593232 nova_compute[250269]:  <serial>b016d51c-9f03-4ca0-86e5-02cc8ca5059f</serial>
Jan 23 05:17:21 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:17:21 np0005593232 nova_compute[250269]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 23 05:17:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:17:22 np0005593232 nova_compute[250269]: 2026-01-23 10:17:22.113 250273 DEBUG nova.virt.libvirt.driver [None req-8e47ba1c-2283-4537-812f-71a3f0a8c90f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:17:22 np0005593232 nova_compute[250269]: 2026-01-23 10:17:22.113 250273 DEBUG nova.virt.libvirt.driver [None req-8e47ba1c-2283-4537-812f-71a3f0a8c90f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:17:22 np0005593232 nova_compute[250269]: 2026-01-23 10:17:22.114 250273 DEBUG nova.virt.libvirt.driver [None req-8e47ba1c-2283-4537-812f-71a3f0a8c90f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:17:22 np0005593232 nova_compute[250269]: 2026-01-23 10:17:22.114 250273 DEBUG nova.virt.libvirt.driver [None req-8e47ba1c-2283-4537-812f-71a3f0a8c90f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:17:22 np0005593232 nova_compute[250269]: 2026-01-23 10:17:22.114 250273 DEBUG nova.virt.libvirt.driver [None req-8e47ba1c-2283-4537-812f-71a3f0a8c90f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] No VIF found with MAC fa:16:3e:c0:68:91, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:17:22 np0005593232 nova_compute[250269]: 2026-01-23 10:17:22.425 250273 DEBUG oslo_concurrency.lockutils [None req-8e47ba1c-2283-4537-812f-71a3f0a8c90f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:17:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:22.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:22 np0005593232 nova_compute[250269]: 2026-01-23 10:17:22.902 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:17:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:17:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:23.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:23 np0005593232 nova_compute[250269]: 2026-01-23 10:17:23.296 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2633: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 78 op/s
Jan 23 05:17:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:24.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:25 np0005593232 nova_compute[250269]: 2026-01-23 10:17:25.042 250273 DEBUG oslo_concurrency.lockutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "1d70a2c0-b633-4e56-9f11-ae40749783be" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:17:25 np0005593232 nova_compute[250269]: 2026-01-23 10:17:25.042 250273 DEBUG oslo_concurrency.lockutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "1d70a2c0-b633-4e56-9f11-ae40749783be" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:17:25 np0005593232 nova_compute[250269]: 2026-01-23 10:17:25.066 250273 DEBUG nova.compute.manager [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:17:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:17:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:25.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:17:25 np0005593232 nova_compute[250269]: 2026-01-23 10:17:25.217 250273 DEBUG oslo_concurrency.lockutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:17:25 np0005593232 nova_compute[250269]: 2026-01-23 10:17:25.217 250273 DEBUG oslo_concurrency.lockutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:17:25 np0005593232 nova_compute[250269]: 2026-01-23 10:17:25.223 250273 DEBUG nova.virt.hardware [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:17:25 np0005593232 nova_compute[250269]: 2026-01-23 10:17:25.224 250273 INFO nova.compute.claims [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:17:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2634: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 78 op/s
Jan 23 05:17:25 np0005593232 nova_compute[250269]: 2026-01-23 10:17:25.481 250273 DEBUG oslo_concurrency.processutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:17:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:17:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/981599488' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:17:25 np0005593232 nova_compute[250269]: 2026-01-23 10:17:25.921 250273 DEBUG oslo_concurrency.processutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:17:25 np0005593232 nova_compute[250269]: 2026-01-23 10:17:25.927 250273 DEBUG nova.compute.provider_tree [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:17:26 np0005593232 nova_compute[250269]: 2026-01-23 10:17:26.070 250273 DEBUG nova.compute.manager [req-13b29d5b-34eb-4c49-9037-5bafba0ec34c req-f1ac6113-4cbb-4308-abbf-036fc84a5751 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Received event network-changed-4a04244c-3270-4ff3-ad30-52e80e7db513 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:17:26 np0005593232 nova_compute[250269]: 2026-01-23 10:17:26.070 250273 DEBUG nova.compute.manager [req-13b29d5b-34eb-4c49-9037-5bafba0ec34c req-f1ac6113-4cbb-4308-abbf-036fc84a5751 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Refreshing instance network info cache due to event network-changed-4a04244c-3270-4ff3-ad30-52e80e7db513. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:17:26 np0005593232 nova_compute[250269]: 2026-01-23 10:17:26.070 250273 DEBUG oslo_concurrency.lockutils [req-13b29d5b-34eb-4c49-9037-5bafba0ec34c req-f1ac6113-4cbb-4308-abbf-036fc84a5751 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:17:26 np0005593232 nova_compute[250269]: 2026-01-23 10:17:26.070 250273 DEBUG oslo_concurrency.lockutils [req-13b29d5b-34eb-4c49-9037-5bafba0ec34c req-f1ac6113-4cbb-4308-abbf-036fc84a5751 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:17:26 np0005593232 nova_compute[250269]: 2026-01-23 10:17:26.071 250273 DEBUG nova.network.neutron [req-13b29d5b-34eb-4c49-9037-5bafba0ec34c req-f1ac6113-4cbb-4308-abbf-036fc84a5751 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Refreshing network info cache for port 4a04244c-3270-4ff3-ad30-52e80e7db513 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:17:26 np0005593232 nova_compute[250269]: 2026-01-23 10:17:26.089 250273 DEBUG nova.scheduler.client.report [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:17:26 np0005593232 nova_compute[250269]: 2026-01-23 10:17:26.410 250273 DEBUG oslo_concurrency.lockutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:17:26 np0005593232 nova_compute[250269]: 2026-01-23 10:17:26.411 250273 DEBUG nova.compute.manager [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:17:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:17:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:26.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:17:26 np0005593232 nova_compute[250269]: 2026-01-23 10:17:26.734 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:26 np0005593232 nova_compute[250269]: 2026-01-23 10:17:26.857 250273 DEBUG nova.compute.manager [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:17:26 np0005593232 nova_compute[250269]: 2026-01-23 10:17:26.857 250273 DEBUG nova.network.neutron [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:17:26 np0005593232 nova_compute[250269]: 2026-01-23 10:17:26.990 250273 INFO nova.virt.libvirt.driver [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:17:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:17:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:27 np0005593232 nova_compute[250269]: 2026-01-23 10:17:27.180 250273 DEBUG nova.compute.manager [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:17:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:27.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:27 np0005593232 nova_compute[250269]: 2026-01-23 10:17:27.455 250273 DEBUG nova.compute.manager [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 05:17:27 np0005593232 nova_compute[250269]: 2026-01-23 10:17:27.457 250273 DEBUG nova.virt.libvirt.driver [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:17:27 np0005593232 nova_compute[250269]: 2026-01-23 10:17:27.458 250273 INFO nova.virt.libvirt.driver [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Creating image(s)#033[00m
Jan 23 05:17:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2635: 321 pgs: 321 active+clean; 368 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 103 op/s
Jan 23 05:17:27 np0005593232 nova_compute[250269]: 2026-01-23 10:17:27.493 250273 DEBUG nova.storage.rbd_utils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image 1d70a2c0-b633-4e56-9f11-ae40749783be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:17:27 np0005593232 nova_compute[250269]: 2026-01-23 10:17:27.528 250273 DEBUG nova.storage.rbd_utils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image 1d70a2c0-b633-4e56-9f11-ae40749783be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:17:27 np0005593232 nova_compute[250269]: 2026-01-23 10:17:27.564 250273 DEBUG nova.storage.rbd_utils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image 1d70a2c0-b633-4e56-9f11-ae40749783be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:17:27 np0005593232 nova_compute[250269]: 2026-01-23 10:17:27.569 250273 DEBUG oslo_concurrency.processutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:17:27 np0005593232 nova_compute[250269]: 2026-01-23 10:17:27.642 250273 DEBUG oslo_concurrency.processutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:17:27 np0005593232 nova_compute[250269]: 2026-01-23 10:17:27.643 250273 DEBUG oslo_concurrency.lockutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:17:27 np0005593232 nova_compute[250269]: 2026-01-23 10:17:27.644 250273 DEBUG oslo_concurrency.lockutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:17:27 np0005593232 nova_compute[250269]: 2026-01-23 10:17:27.644 250273 DEBUG oslo_concurrency.lockutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:17:27 np0005593232 nova_compute[250269]: 2026-01-23 10:17:27.678 250273 DEBUG nova.storage.rbd_utils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image 1d70a2c0-b633-4e56-9f11-ae40749783be_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:17:27 np0005593232 nova_compute[250269]: 2026-01-23 10:17:27.682 250273 DEBUG oslo_concurrency.processutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 1d70a2c0-b633-4e56-9f11-ae40749783be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:17:27 np0005593232 nova_compute[250269]: 2026-01-23 10:17:27.799 250273 DEBUG nova.policy [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '60291ce86b6946629a2e48f6680312cb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '98c94577fcdb4c3d893898ede79ea2d4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:17:27 np0005593232 nova_compute[250269]: 2026-01-23 10:17:27.949 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:28 np0005593232 nova_compute[250269]: 2026-01-23 10:17:28.053 250273 DEBUG oslo_concurrency.processutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 1d70a2c0-b633-4e56-9f11-ae40749783be_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.371s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:17:28 np0005593232 nova_compute[250269]: 2026-01-23 10:17:28.136 250273 DEBUG nova.compute.manager [req-c0c1611e-5448-41e1-acd5-d6edd8691d7e req-7589c714-68c2-4571-b138-10646ca0ae50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Received event network-changed-4a04244c-3270-4ff3-ad30-52e80e7db513 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:17:28 np0005593232 nova_compute[250269]: 2026-01-23 10:17:28.137 250273 DEBUG nova.compute.manager [req-c0c1611e-5448-41e1-acd5-d6edd8691d7e req-7589c714-68c2-4571-b138-10646ca0ae50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Refreshing instance network info cache due to event network-changed-4a04244c-3270-4ff3-ad30-52e80e7db513. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:17:28 np0005593232 nova_compute[250269]: 2026-01-23 10:17:28.138 250273 DEBUG oslo_concurrency.lockutils [req-c0c1611e-5448-41e1-acd5-d6edd8691d7e req-7589c714-68c2-4571-b138-10646ca0ae50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:17:28 np0005593232 nova_compute[250269]: 2026-01-23 10:17:28.145 250273 DEBUG nova.storage.rbd_utils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] resizing rbd image 1d70a2c0-b633-4e56-9f11-ae40749783be_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 05:17:28 np0005593232 nova_compute[250269]: 2026-01-23 10:17:28.273 250273 DEBUG nova.objects.instance [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lazy-loading 'migration_context' on Instance uuid 1d70a2c0-b633-4e56-9f11-ae40749783be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:17:28 np0005593232 nova_compute[250269]: 2026-01-23 10:17:28.298 250273 DEBUG nova.virt.libvirt.driver [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 05:17:28 np0005593232 nova_compute[250269]: 2026-01-23 10:17:28.298 250273 DEBUG nova.virt.libvirt.driver [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Ensure instance console log exists: /var/lib/nova/instances/1d70a2c0-b633-4e56-9f11-ae40749783be/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:17:28 np0005593232 nova_compute[250269]: 2026-01-23 10:17:28.299 250273 DEBUG oslo_concurrency.lockutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:17:28 np0005593232 nova_compute[250269]: 2026-01-23 10:17:28.299 250273 DEBUG oslo_concurrency.lockutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:17:28 np0005593232 nova_compute[250269]: 2026-01-23 10:17:28.300 250273 DEBUG oslo_concurrency.lockutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:17:28 np0005593232 nova_compute[250269]: 2026-01-23 10:17:28.300 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:28.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:29.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2636: 321 pgs: 321 active+clean; 379 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 766 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Jan 23 05:17:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:17:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:30.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:17:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:31.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2637: 321 pgs: 321 active+clean; 379 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 105 KiB/s rd, 2.2 MiB/s wr, 46 op/s
Jan 23 05:17:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:17:32 np0005593232 nova_compute[250269]: 2026-01-23 10:17:32.255 250273 DEBUG nova.compute.manager [req-ded81483-bc1e-4c55-8e34-53de613bc969 req-1c4e2e6d-9c77-4c78-96cf-ff548cdb75b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Received event network-changed-4a04244c-3270-4ff3-ad30-52e80e7db513 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:17:32 np0005593232 nova_compute[250269]: 2026-01-23 10:17:32.256 250273 DEBUG nova.compute.manager [req-ded81483-bc1e-4c55-8e34-53de613bc969 req-1c4e2e6d-9c77-4c78-96cf-ff548cdb75b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Refreshing instance network info cache due to event network-changed-4a04244c-3270-4ff3-ad30-52e80e7db513. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:17:32 np0005593232 nova_compute[250269]: 2026-01-23 10:17:32.257 250273 DEBUG oslo_concurrency.lockutils [req-ded81483-bc1e-4c55-8e34-53de613bc969 req-1c4e2e6d-9c77-4c78-96cf-ff548cdb75b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:17:32 np0005593232 podman[346278]: 2026-01-23 10:17:32.443024601 +0000 UTC m=+0.092764587 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 05:17:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:32.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:32 np0005593232 nova_compute[250269]: 2026-01-23 10:17:32.950 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:33.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:33 np0005593232 nova_compute[250269]: 2026-01-23 10:17:33.302 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2638: 321 pgs: 321 active+clean; 425 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 289 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 23 05:17:33 np0005593232 nova_compute[250269]: 2026-01-23 10:17:33.959 250273 DEBUG nova.network.neutron [req-13b29d5b-34eb-4c49-9037-5bafba0ec34c req-f1ac6113-4cbb-4308-abbf-036fc84a5751 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Updated VIF entry in instance network info cache for port 4a04244c-3270-4ff3-ad30-52e80e7db513. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:17:33 np0005593232 nova_compute[250269]: 2026-01-23 10:17:33.960 250273 DEBUG nova.network.neutron [req-13b29d5b-34eb-4c49-9037-5bafba0ec34c req-f1ac6113-4cbb-4308-abbf-036fc84a5751 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Updating instance_info_cache with network_info: [{"id": "4a04244c-3270-4ff3-ad30-52e80e7db513", "address": "fa:16:3e:c0:68:91", "network": {"id": "f98d79de-4a23-4f29-9848-c5d4c5683a5d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1507431135-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ae621f21a8e438fb95152309b38cee5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a04244c-32", "ovs_interfaceid": "4a04244c-3270-4ff3-ad30-52e80e7db513", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:17:33 np0005593232 nova_compute[250269]: 2026-01-23 10:17:33.994 250273 DEBUG oslo_concurrency.lockutils [req-13b29d5b-34eb-4c49-9037-5bafba0ec34c req-f1ac6113-4cbb-4308-abbf-036fc84a5751 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:17:33 np0005593232 nova_compute[250269]: 2026-01-23 10:17:33.994 250273 DEBUG oslo_concurrency.lockutils [req-c0c1611e-5448-41e1-acd5-d6edd8691d7e req-7589c714-68c2-4571-b138-10646ca0ae50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:17:33 np0005593232 nova_compute[250269]: 2026-01-23 10:17:33.995 250273 DEBUG nova.network.neutron [req-c0c1611e-5448-41e1-acd5-d6edd8691d7e req-7589c714-68c2-4571-b138-10646ca0ae50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Refreshing network info cache for port 4a04244c-3270-4ff3-ad30-52e80e7db513 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:17:34 np0005593232 nova_compute[250269]: 2026-01-23 10:17:34.150 250273 DEBUG nova.network.neutron [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Successfully created port: 776b152a-2675-4fba-af36-cb34bf1e20ed _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:17:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:34.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:35.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2639: 321 pgs: 321 active+clean; 425 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 286 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Jan 23 05:17:36 np0005593232 nova_compute[250269]: 2026-01-23 10:17:36.381 250273 DEBUG nova.network.neutron [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Successfully updated port: 776b152a-2675-4fba-af36-cb34bf1e20ed _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:17:36 np0005593232 nova_compute[250269]: 2026-01-23 10:17:36.421 250273 DEBUG oslo_concurrency.lockutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "refresh_cache-1d70a2c0-b633-4e56-9f11-ae40749783be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:17:36 np0005593232 nova_compute[250269]: 2026-01-23 10:17:36.421 250273 DEBUG oslo_concurrency.lockutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquired lock "refresh_cache-1d70a2c0-b633-4e56-9f11-ae40749783be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:17:36 np0005593232 nova_compute[250269]: 2026-01-23 10:17:36.421 250273 DEBUG nova.network.neutron [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:17:36 np0005593232 nova_compute[250269]: 2026-01-23 10:17:36.476 250273 DEBUG nova.network.neutron [req-c0c1611e-5448-41e1-acd5-d6edd8691d7e req-7589c714-68c2-4571-b138-10646ca0ae50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Updated VIF entry in instance network info cache for port 4a04244c-3270-4ff3-ad30-52e80e7db513. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:17:36 np0005593232 nova_compute[250269]: 2026-01-23 10:17:36.477 250273 DEBUG nova.network.neutron [req-c0c1611e-5448-41e1-acd5-d6edd8691d7e req-7589c714-68c2-4571-b138-10646ca0ae50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Updating instance_info_cache with network_info: [{"id": "4a04244c-3270-4ff3-ad30-52e80e7db513", "address": "fa:16:3e:c0:68:91", "network": {"id": "f98d79de-4a23-4f29-9848-c5d4c5683a5d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1507431135-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ae621f21a8e438fb95152309b38cee5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a04244c-32", "ovs_interfaceid": "4a04244c-3270-4ff3-ad30-52e80e7db513", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:17:36 np0005593232 nova_compute[250269]: 2026-01-23 10:17:36.512 250273 DEBUG oslo_concurrency.lockutils [req-c0c1611e-5448-41e1-acd5-d6edd8691d7e req-7589c714-68c2-4571-b138-10646ca0ae50 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:17:36 np0005593232 nova_compute[250269]: 2026-01-23 10:17:36.513 250273 DEBUG oslo_concurrency.lockutils [req-ded81483-bc1e-4c55-8e34-53de613bc969 req-1c4e2e6d-9c77-4c78-96cf-ff548cdb75b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:17:36 np0005593232 nova_compute[250269]: 2026-01-23 10:17:36.513 250273 DEBUG nova.network.neutron [req-ded81483-bc1e-4c55-8e34-53de613bc969 req-1c4e2e6d-9c77-4c78-96cf-ff548cdb75b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Refreshing network info cache for port 4a04244c-3270-4ff3-ad30-52e80e7db513 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:17:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:36.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:36 np0005593232 nova_compute[250269]: 2026-01-23 10:17:36.764 250273 DEBUG nova.network.neutron [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:17:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:17:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:37.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:37 np0005593232 nova_compute[250269]: 2026-01-23 10:17:37.294 250273 DEBUG nova.compute.manager [req-72e2bc37-f1f9-40ef-a4df-0ae197667599 req-722bd9de-b13e-47d5-80d4-f07b780777ce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Received event network-changed-776b152a-2675-4fba-af36-cb34bf1e20ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:17:37 np0005593232 nova_compute[250269]: 2026-01-23 10:17:37.295 250273 DEBUG nova.compute.manager [req-72e2bc37-f1f9-40ef-a4df-0ae197667599 req-722bd9de-b13e-47d5-80d4-f07b780777ce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Refreshing instance network info cache due to event network-changed-776b152a-2675-4fba-af36-cb34bf1e20ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:17:37 np0005593232 nova_compute[250269]: 2026-01-23 10:17:37.295 250273 DEBUG oslo_concurrency.lockutils [req-72e2bc37-f1f9-40ef-a4df-0ae197667599 req-722bd9de-b13e-47d5-80d4-f07b780777ce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-1d70a2c0-b633-4e56-9f11-ae40749783be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:17:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:17:37
Jan 23 05:17:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:17:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:17:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'volumes', 'default.rgw.meta', 'default.rgw.log']
Jan 23 05:17:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:17:37 np0005593232 podman[346308]: 2026-01-23 10:17:37.402943224 +0000 UTC m=+0.058587636 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 23 05:17:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2640: 321 pgs: 321 active+clean; 425 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 286 KiB/s rd, 3.9 MiB/s wr, 88 op/s
Jan 23 05:17:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:17:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:17:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:17:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:17:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:17:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:17:38 np0005593232 nova_compute[250269]: 2026-01-23 10:17:38.005 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:38 np0005593232 nova_compute[250269]: 2026-01-23 10:17:38.302 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:17:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:17:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:17:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:17:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:17:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:17:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:17:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:17:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:17:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:17:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:38.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:39.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2641: 321 pgs: 321 active+clean; 425 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 229 KiB/s rd, 2.0 MiB/s wr, 62 op/s
Jan 23 05:17:39 np0005593232 nova_compute[250269]: 2026-01-23 10:17:39.919 250273 DEBUG nova.network.neutron [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Updating instance_info_cache with network_info: [{"id": "776b152a-2675-4fba-af36-cb34bf1e20ed", "address": "fa:16:3e:15:11:c6", "network": {"id": "41ea2617-7043-474b-913f-7e235af58db8", "bridge": "br-int", "label": "tempest-network-smoke--887500891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap776b152a-26", "ovs_interfaceid": "776b152a-2675-4fba-af36-cb34bf1e20ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:17:39 np0005593232 nova_compute[250269]: 2026-01-23 10:17:39.984 250273 DEBUG oslo_concurrency.lockutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Releasing lock "refresh_cache-1d70a2c0-b633-4e56-9f11-ae40749783be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:17:39 np0005593232 nova_compute[250269]: 2026-01-23 10:17:39.985 250273 DEBUG nova.compute.manager [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Instance network_info: |[{"id": "776b152a-2675-4fba-af36-cb34bf1e20ed", "address": "fa:16:3e:15:11:c6", "network": {"id": "41ea2617-7043-474b-913f-7e235af58db8", "bridge": "br-int", "label": "tempest-network-smoke--887500891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap776b152a-26", "ovs_interfaceid": "776b152a-2675-4fba-af36-cb34bf1e20ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:17:39 np0005593232 nova_compute[250269]: 2026-01-23 10:17:39.985 250273 DEBUG oslo_concurrency.lockutils [req-72e2bc37-f1f9-40ef-a4df-0ae197667599 req-722bd9de-b13e-47d5-80d4-f07b780777ce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-1d70a2c0-b633-4e56-9f11-ae40749783be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:17:39 np0005593232 nova_compute[250269]: 2026-01-23 10:17:39.986 250273 DEBUG nova.network.neutron [req-72e2bc37-f1f9-40ef-a4df-0ae197667599 req-722bd9de-b13e-47d5-80d4-f07b780777ce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Refreshing network info cache for port 776b152a-2675-4fba-af36-cb34bf1e20ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:17:39 np0005593232 nova_compute[250269]: 2026-01-23 10:17:39.990 250273 DEBUG nova.virt.libvirt.driver [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Start _get_guest_xml network_info=[{"id": "776b152a-2675-4fba-af36-cb34bf1e20ed", "address": "fa:16:3e:15:11:c6", "network": {"id": "41ea2617-7043-474b-913f-7e235af58db8", "bridge": "br-int", "label": "tempest-network-smoke--887500891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap776b152a-26", "ovs_interfaceid": "776b152a-2675-4fba-af36-cb34bf1e20ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:17:39 np0005593232 nova_compute[250269]: 2026-01-23 10:17:39.996 250273 WARNING nova.virt.libvirt.driver [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.009 250273 DEBUG nova.virt.libvirt.host [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.009 250273 DEBUG nova.virt.libvirt.host [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.013 250273 DEBUG nova.virt.libvirt.host [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.014 250273 DEBUG nova.virt.libvirt.host [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.015 250273 DEBUG nova.virt.libvirt.driver [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.016 250273 DEBUG nova.virt.hardware [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.016 250273 DEBUG nova.virt.hardware [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.016 250273 DEBUG nova.virt.hardware [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.016 250273 DEBUG nova.virt.hardware [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.017 250273 DEBUG nova.virt.hardware [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.017 250273 DEBUG nova.virt.hardware [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.017 250273 DEBUG nova.virt.hardware [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.017 250273 DEBUG nova.virt.hardware [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.018 250273 DEBUG nova.virt.hardware [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.018 250273 DEBUG nova.virt.hardware [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.018 250273 DEBUG nova.virt.hardware [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.021 250273 DEBUG oslo_concurrency.processutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.123 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.124 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:17:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:17:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1060925391' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.459 250273 DEBUG oslo_concurrency.processutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.487 250273 DEBUG nova.storage.rbd_utils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image 1d70a2c0-b633-4e56-9f11-ae40749783be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.494 250273 DEBUG oslo_concurrency.processutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:17:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:40.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.860 250273 DEBUG nova.network.neutron [req-ded81483-bc1e-4c55-8e34-53de613bc969 req-1c4e2e6d-9c77-4c78-96cf-ff548cdb75b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Updated VIF entry in instance network info cache for port 4a04244c-3270-4ff3-ad30-52e80e7db513. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.860 250273 DEBUG nova.network.neutron [req-ded81483-bc1e-4c55-8e34-53de613bc969 req-1c4e2e6d-9c77-4c78-96cf-ff548cdb75b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Updating instance_info_cache with network_info: [{"id": "4a04244c-3270-4ff3-ad30-52e80e7db513", "address": "fa:16:3e:c0:68:91", "network": {"id": "f98d79de-4a23-4f29-9848-c5d4c5683a5d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1507431135-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ae621f21a8e438fb95152309b38cee5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a04244c-32", "ovs_interfaceid": "4a04244c-3270-4ff3-ad30-52e80e7db513", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:17:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:17:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3359200490' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.931 250273 DEBUG oslo_concurrency.processutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.933 250273 DEBUG nova.virt.libvirt.vif [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:17:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-253961316',display_name='tempest-TestNetworkBasicOps-server-253961316',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-253961316',id=145,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDuTZsDX8Yidlcljee+mWcgr/8w1jex8qpbIhyaFbeVITtW45I3zURrpkl5L9QywgrlmJPDxvtYP7jMLWz49tvDv/cpi2iYtov3Za6bDONBt6jnbOwCbkkb5ok8znlV8Bw==',key_name='tempest-TestNetworkBasicOps-1667433613',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98c94577fcdb4c3d893898ede79ea2d4',ramdisk_id='',reservation_id='r-u0ld2fbq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-789276745',owner_user_name='tempest-TestNetworkBasicOps-789276745-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:17:27Z,user_data=None,user_id='60291ce86b6946629a2e48f6680312cb',uuid=1d70a2c0-b633-4e56-9f11-ae40749783be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "776b152a-2675-4fba-af36-cb34bf1e20ed", "address": "fa:16:3e:15:11:c6", "network": {"id": "41ea2617-7043-474b-913f-7e235af58db8", "bridge": "br-int", "label": "tempest-network-smoke--887500891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap776b152a-26", "ovs_interfaceid": "776b152a-2675-4fba-af36-cb34bf1e20ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.933 250273 DEBUG nova.network.os_vif_util [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converting VIF {"id": "776b152a-2675-4fba-af36-cb34bf1e20ed", "address": "fa:16:3e:15:11:c6", "network": {"id": "41ea2617-7043-474b-913f-7e235af58db8", "bridge": "br-int", "label": "tempest-network-smoke--887500891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap776b152a-26", "ovs_interfaceid": "776b152a-2675-4fba-af36-cb34bf1e20ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.934 250273 DEBUG nova.network.os_vif_util [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:11:c6,bridge_name='br-int',has_traffic_filtering=True,id=776b152a-2675-4fba-af36-cb34bf1e20ed,network=Network(41ea2617-7043-474b-913f-7e235af58db8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap776b152a-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.936 250273 DEBUG nova.objects.instance [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1d70a2c0-b633-4e56-9f11-ae40749783be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:17:40 np0005593232 nova_compute[250269]: 2026-01-23 10:17:40.938 250273 DEBUG oslo_concurrency.lockutils [req-ded81483-bc1e-4c55-8e34-53de613bc969 req-1c4e2e6d-9c77-4c78-96cf-ff548cdb75b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:17:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:17:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:41.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:17:41 np0005593232 nova_compute[250269]: 2026-01-23 10:17:41.418 250273 DEBUG nova.virt.libvirt.driver [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:17:41 np0005593232 nova_compute[250269]:  <uuid>1d70a2c0-b633-4e56-9f11-ae40749783be</uuid>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:  <name>instance-00000091</name>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <nova:name>tempest-TestNetworkBasicOps-server-253961316</nova:name>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:17:39</nova:creationTime>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:17:41 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:        <nova:user uuid="60291ce86b6946629a2e48f6680312cb">tempest-TestNetworkBasicOps-789276745-project-member</nova:user>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:        <nova:project uuid="98c94577fcdb4c3d893898ede79ea2d4">tempest-TestNetworkBasicOps-789276745</nova:project>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:        <nova:port uuid="776b152a-2675-4fba-af36-cb34bf1e20ed">
Jan 23 05:17:41 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <entry name="serial">1d70a2c0-b633-4e56-9f11-ae40749783be</entry>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <entry name="uuid">1d70a2c0-b633-4e56-9f11-ae40749783be</entry>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/1d70a2c0-b633-4e56-9f11-ae40749783be_disk">
Jan 23 05:17:41 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:17:41 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/1d70a2c0-b633-4e56-9f11-ae40749783be_disk.config">
Jan 23 05:17:41 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:17:41 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:15:11:c6"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <target dev="tap776b152a-26"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/1d70a2c0-b633-4e56-9f11-ae40749783be/console.log" append="off"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:17:41 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:17:41 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:17:41 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:17:41 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:17:41 np0005593232 nova_compute[250269]: 2026-01-23 10:17:41.419 250273 DEBUG nova.compute.manager [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Preparing to wait for external event network-vif-plugged-776b152a-2675-4fba-af36-cb34bf1e20ed prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:17:41 np0005593232 nova_compute[250269]: 2026-01-23 10:17:41.419 250273 DEBUG oslo_concurrency.lockutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "1d70a2c0-b633-4e56-9f11-ae40749783be-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:17:41 np0005593232 nova_compute[250269]: 2026-01-23 10:17:41.419 250273 DEBUG oslo_concurrency.lockutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "1d70a2c0-b633-4e56-9f11-ae40749783be-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:17:41 np0005593232 nova_compute[250269]: 2026-01-23 10:17:41.419 250273 DEBUG oslo_concurrency.lockutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "1d70a2c0-b633-4e56-9f11-ae40749783be-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:17:41 np0005593232 nova_compute[250269]: 2026-01-23 10:17:41.420 250273 DEBUG nova.virt.libvirt.vif [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:17:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-253961316',display_name='tempest-TestNetworkBasicOps-server-253961316',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-253961316',id=145,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDuTZsDX8Yidlcljee+mWcgr/8w1jex8qpbIhyaFbeVITtW45I3zURrpkl5L9QywgrlmJPDxvtYP7jMLWz49tvDv/cpi2iYtov3Za6bDONBt6jnbOwCbkkb5ok8znlV8Bw==',key_name='tempest-TestNetworkBasicOps-1667433613',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98c94577fcdb4c3d893898ede79ea2d4',ramdisk_id='',reservation_id='r-u0ld2fbq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-789276745',owner_user_name='tempest-TestNetworkBasicOps-789276745-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:17:27Z,user_data=None,user_id='60291ce86b6946629a2e48f6680312cb',uuid=1d70a2c0-b633-4e56-9f11-ae40749783be,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "776b152a-2675-4fba-af36-cb34bf1e20ed", "address": "fa:16:3e:15:11:c6", "network": {"id": "41ea2617-7043-474b-913f-7e235af58db8", "bridge": "br-int", "label": "tempest-network-smoke--887500891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap776b152a-26", "ovs_interfaceid": "776b152a-2675-4fba-af36-cb34bf1e20ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:17:41 np0005593232 nova_compute[250269]: 2026-01-23 10:17:41.420 250273 DEBUG nova.network.os_vif_util [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converting VIF {"id": "776b152a-2675-4fba-af36-cb34bf1e20ed", "address": "fa:16:3e:15:11:c6", "network": {"id": "41ea2617-7043-474b-913f-7e235af58db8", "bridge": "br-int", "label": "tempest-network-smoke--887500891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap776b152a-26", "ovs_interfaceid": "776b152a-2675-4fba-af36-cb34bf1e20ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:17:41 np0005593232 nova_compute[250269]: 2026-01-23 10:17:41.421 250273 DEBUG nova.network.os_vif_util [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:11:c6,bridge_name='br-int',has_traffic_filtering=True,id=776b152a-2675-4fba-af36-cb34bf1e20ed,network=Network(41ea2617-7043-474b-913f-7e235af58db8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap776b152a-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:17:41 np0005593232 nova_compute[250269]: 2026-01-23 10:17:41.421 250273 DEBUG os_vif [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:11:c6,bridge_name='br-int',has_traffic_filtering=True,id=776b152a-2675-4fba-af36-cb34bf1e20ed,network=Network(41ea2617-7043-474b-913f-7e235af58db8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap776b152a-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:17:41 np0005593232 nova_compute[250269]: 2026-01-23 10:17:41.422 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:41 np0005593232 nova_compute[250269]: 2026-01-23 10:17:41.422 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:17:41 np0005593232 nova_compute[250269]: 2026-01-23 10:17:41.423 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:17:41 np0005593232 nova_compute[250269]: 2026-01-23 10:17:41.427 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:41 np0005593232 nova_compute[250269]: 2026-01-23 10:17:41.427 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap776b152a-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:17:41 np0005593232 nova_compute[250269]: 2026-01-23 10:17:41.428 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap776b152a-26, col_values=(('external_ids', {'iface-id': '776b152a-2675-4fba-af36-cb34bf1e20ed', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:11:c6', 'vm-uuid': '1d70a2c0-b633-4e56-9f11-ae40749783be'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:17:41 np0005593232 nova_compute[250269]: 2026-01-23 10:17:41.429 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:41 np0005593232 NetworkManager[49057]: <info>  [1769163461.4310] manager: (tap776b152a-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/260)
Jan 23 05:17:41 np0005593232 nova_compute[250269]: 2026-01-23 10:17:41.431 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:17:41 np0005593232 nova_compute[250269]: 2026-01-23 10:17:41.441 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:41 np0005593232 nova_compute[250269]: 2026-01-23 10:17:41.442 250273 INFO os_vif [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:11:c6,bridge_name='br-int',has_traffic_filtering=True,id=776b152a-2675-4fba-af36-cb34bf1e20ed,network=Network(41ea2617-7043-474b-913f-7e235af58db8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap776b152a-26')#033[00m
Jan 23 05:17:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2642: 321 pgs: 321 active+clean; 425 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 185 KiB/s rd, 1.7 MiB/s wr, 46 op/s
Jan 23 05:17:41 np0005593232 nova_compute[250269]: 2026-01-23 10:17:41.540 250273 DEBUG nova.virt.libvirt.driver [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:17:41 np0005593232 nova_compute[250269]: 2026-01-23 10:17:41.540 250273 DEBUG nova.virt.libvirt.driver [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:17:41 np0005593232 nova_compute[250269]: 2026-01-23 10:17:41.541 250273 DEBUG nova.virt.libvirt.driver [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] No VIF found with MAC fa:16:3e:15:11:c6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:17:41 np0005593232 nova_compute[250269]: 2026-01-23 10:17:41.541 250273 INFO nova.virt.libvirt.driver [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Using config drive#033[00m
Jan 23 05:17:41 np0005593232 nova_compute[250269]: 2026-01-23 10:17:41.566 250273 DEBUG nova.storage.rbd_utils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image 1d70a2c0-b633-4e56-9f11-ae40749783be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:17:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:17:42 np0005593232 nova_compute[250269]: 2026-01-23 10:17:42.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:17:42 np0005593232 nova_compute[250269]: 2026-01-23 10:17:42.578 250273 DEBUG nova.compute.manager [req-2ae86478-594d-49fd-91a5-8d73e2ba9ec9 req-9d3d03c5-e5bc-4dd3-8950-57862deedd45 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Received event network-changed-4a04244c-3270-4ff3-ad30-52e80e7db513 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:17:42 np0005593232 nova_compute[250269]: 2026-01-23 10:17:42.578 250273 DEBUG nova.compute.manager [req-2ae86478-594d-49fd-91a5-8d73e2ba9ec9 req-9d3d03c5-e5bc-4dd3-8950-57862deedd45 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Refreshing instance network info cache due to event network-changed-4a04244c-3270-4ff3-ad30-52e80e7db513. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:17:42 np0005593232 nova_compute[250269]: 2026-01-23 10:17:42.579 250273 DEBUG oslo_concurrency.lockutils [req-2ae86478-594d-49fd-91a5-8d73e2ba9ec9 req-9d3d03c5-e5bc-4dd3-8950-57862deedd45 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:17:42 np0005593232 nova_compute[250269]: 2026-01-23 10:17:42.579 250273 DEBUG oslo_concurrency.lockutils [req-2ae86478-594d-49fd-91a5-8d73e2ba9ec9 req-9d3d03c5-e5bc-4dd3-8950-57862deedd45 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:17:42 np0005593232 nova_compute[250269]: 2026-01-23 10:17:42.579 250273 DEBUG nova.network.neutron [req-2ae86478-594d-49fd-91a5-8d73e2ba9ec9 req-9d3d03c5-e5bc-4dd3-8950-57862deedd45 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Refreshing network info cache for port 4a04244c-3270-4ff3-ad30-52e80e7db513 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:17:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.019000540s ======
Jan 23 05:17:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:42.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.019000540s
Jan 23 05:17:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:42.630 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:17:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:42.631 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:17:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:42.631 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:17:43 np0005593232 nova_compute[250269]: 2026-01-23 10:17:43.008 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:43.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:43 np0005593232 nova_compute[250269]: 2026-01-23 10:17:43.217 250273 INFO nova.virt.libvirt.driver [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Creating config drive at /var/lib/nova/instances/1d70a2c0-b633-4e56-9f11-ae40749783be/disk.config#033[00m
Jan 23 05:17:43 np0005593232 nova_compute[250269]: 2026-01-23 10:17:43.223 250273 DEBUG oslo_concurrency.processutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1d70a2c0-b633-4e56-9f11-ae40749783be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiej95xre execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:17:43 np0005593232 nova_compute[250269]: 2026-01-23 10:17:43.360 250273 DEBUG oslo_concurrency.processutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1d70a2c0-b633-4e56-9f11-ae40749783be/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiej95xre" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:17:43 np0005593232 nova_compute[250269]: 2026-01-23 10:17:43.395 250273 DEBUG nova.storage.rbd_utils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image 1d70a2c0-b633-4e56-9f11-ae40749783be_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:17:43 np0005593232 nova_compute[250269]: 2026-01-23 10:17:43.398 250273 DEBUG oslo_concurrency.processutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1d70a2c0-b633-4e56-9f11-ae40749783be/disk.config 1d70a2c0-b633-4e56-9f11-ae40749783be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:17:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2643: 321 pgs: 321 active+clean; 457 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 204 KiB/s rd, 3.2 MiB/s wr, 74 op/s
Jan 23 05:17:43 np0005593232 nova_compute[250269]: 2026-01-23 10:17:43.704 250273 DEBUG oslo_concurrency.processutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1d70a2c0-b633-4e56-9f11-ae40749783be/disk.config 1d70a2c0-b633-4e56-9f11-ae40749783be_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:17:43 np0005593232 nova_compute[250269]: 2026-01-23 10:17:43.705 250273 INFO nova.virt.libvirt.driver [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Deleting local config drive /var/lib/nova/instances/1d70a2c0-b633-4e56-9f11-ae40749783be/disk.config because it was imported into RBD.#033[00m
Jan 23 05:17:43 np0005593232 kernel: tap776b152a-26: entered promiscuous mode
Jan 23 05:17:43 np0005593232 NetworkManager[49057]: <info>  [1769163463.7556] manager: (tap776b152a-26): new Tun device (/org/freedesktop/NetworkManager/Devices/261)
Jan 23 05:17:43 np0005593232 systemd-udevd[346463]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:17:43 np0005593232 nova_compute[250269]: 2026-01-23 10:17:43.803 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:43 np0005593232 ovn_controller[151001]: 2026-01-23T10:17:43Z|00549|binding|INFO|Claiming lport 776b152a-2675-4fba-af36-cb34bf1e20ed for this chassis.
Jan 23 05:17:43 np0005593232 ovn_controller[151001]: 2026-01-23T10:17:43Z|00550|binding|INFO|776b152a-2675-4fba-af36-cb34bf1e20ed: Claiming fa:16:3e:15:11:c6 10.100.0.8
Jan 23 05:17:43 np0005593232 NetworkManager[49057]: <info>  [1769163463.8205] device (tap776b152a-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:17:43 np0005593232 NetworkManager[49057]: <info>  [1769163463.8211] device (tap776b152a-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:17:43 np0005593232 ovn_controller[151001]: 2026-01-23T10:17:43Z|00551|binding|INFO|Setting lport 776b152a-2675-4fba-af36-cb34bf1e20ed ovn-installed in OVS
Jan 23 05:17:43 np0005593232 nova_compute[250269]: 2026-01-23 10:17:43.828 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:43 np0005593232 nova_compute[250269]: 2026-01-23 10:17:43.829 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:43 np0005593232 ovn_controller[151001]: 2026-01-23T10:17:43Z|00552|binding|INFO|Setting lport 776b152a-2675-4fba-af36-cb34bf1e20ed up in Southbound
Jan 23 05:17:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:43.832 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:11:c6 10.100.0.8'], port_security=['fa:16:3e:15:11:c6 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1d70a2c0-b633-4e56-9f11-ae40749783be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41ea2617-7043-474b-913f-7e235af58db8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98c94577fcdb4c3d893898ede79ea2d4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '52d68d78-c737-420f-894f-05c7f67378f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53d70bbc-d32d-46a9-8fb7-68cccce823e4, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=776b152a-2675-4fba-af36-cb34bf1e20ed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:17:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:43.833 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 776b152a-2675-4fba-af36-cb34bf1e20ed in datapath 41ea2617-7043-474b-913f-7e235af58db8 bound to our chassis#033[00m
Jan 23 05:17:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:43.834 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41ea2617-7043-474b-913f-7e235af58db8#033[00m
Jan 23 05:17:43 np0005593232 systemd-machined[215836]: New machine qemu-64-instance-00000091.
Jan 23 05:17:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:43.845 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1928bb2e-dd2c-4a06-bc6f-60bd03388fa5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:17:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:43.846 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap41ea2617-71 in ovnmeta-41ea2617-7043-474b-913f-7e235af58db8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:17:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:43.847 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap41ea2617-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:17:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:43.848 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[30924cac-c206-41c2-aa25-a1757a3352c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:17:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:43.848 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e23f93d2-1fee-4cca-afcb-bb5c6d9c689c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:17:43 np0005593232 systemd[1]: Started Virtual Machine qemu-64-instance-00000091.
Jan 23 05:17:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:43.859 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[642f86df-ce02-4049-bd4e-f8466f4f0a4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:17:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:43.883 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[804ee1ed-9bf1-4807-b682-09de7223a8ae]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:17:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:43.920 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[33eb1721-0503-44a5-867f-b7c0d3ced9f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:17:43 np0005593232 NetworkManager[49057]: <info>  [1769163463.9270] manager: (tap41ea2617-70): new Veth device (/org/freedesktop/NetworkManager/Devices/262)
Jan 23 05:17:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:43.926 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e6876a37-d453-44a3-8b4a-f7419e489aa6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:17:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:43.961 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[df8e71bd-ff24-43d4-921b-558f2d317924]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:17:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:43.964 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[febe5bc2-2813-4fad-b52f-a060b476317f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:17:43 np0005593232 NetworkManager[49057]: <info>  [1769163463.9864] device (tap41ea2617-70): carrier: link connected
Jan 23 05:17:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:43.991 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[fb4cc312-1e2c-4bc7-82f1-f7185c5de6f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:44.010 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d2ebb711-1f41-4103-b76a-9e7ebc4571dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41ea2617-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:24:85:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 165], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737712, 'reachable_time': 17127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346500, 'error': None, 'target': 'ovnmeta-41ea2617-7043-474b-913f-7e235af58db8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:44.025 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[fa81dbd6-6b49-4826-a6f5-578b8baa4729]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe24:854d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 737712, 'tstamp': 737712}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346501, 'error': None, 'target': 'ovnmeta-41ea2617-7043-474b-913f-7e235af58db8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:44.039 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a9998411-2c5e-4179-b7ae-1a80ec967742]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41ea2617-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:24:85:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 165], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737712, 'reachable_time': 17127, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346502, 'error': None, 'target': 'ovnmeta-41ea2617-7043-474b-913f-7e235af58db8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:44.071 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[917de289-36da-4eab-8c7e-74b3fec51ef2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:44.136 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8ef601-a4fe-4976-859e-7138f1521ed2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:44.138 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41ea2617-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:44.139 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:44.139 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41ea2617-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:17:44 np0005593232 NetworkManager[49057]: <info>  [1769163464.1419] manager: (tap41ea2617-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/263)
Jan 23 05:17:44 np0005593232 nova_compute[250269]: 2026-01-23 10:17:44.141 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:44 np0005593232 kernel: tap41ea2617-70: entered promiscuous mode
Jan 23 05:17:44 np0005593232 nova_compute[250269]: 2026-01-23 10:17:44.146 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:44.147 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41ea2617-70, col_values=(('external_ids', {'iface-id': '8bd4ccf4-68fb-4020-8110-9c0416d4ef1e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:17:44 np0005593232 ovn_controller[151001]: 2026-01-23T10:17:44Z|00553|binding|INFO|Releasing lport 8bd4ccf4-68fb-4020-8110-9c0416d4ef1e from this chassis (sb_readonly=0)
Jan 23 05:17:44 np0005593232 nova_compute[250269]: 2026-01-23 10:17:44.148 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:44 np0005593232 nova_compute[250269]: 2026-01-23 10:17:44.178 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:44.179 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/41ea2617-7043-474b-913f-7e235af58db8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/41ea2617-7043-474b-913f-7e235af58db8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:44.180 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d4bade09-0ec9-43a5-b894-9bded82eb1c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:44.181 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-41ea2617-7043-474b-913f-7e235af58db8
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/41ea2617-7043-474b-913f-7e235af58db8.pid.haproxy
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 41ea2617-7043-474b-913f-7e235af58db8
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:17:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:17:44.183 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-41ea2617-7043-474b-913f-7e235af58db8', 'env', 'PROCESS_TAG=haproxy-41ea2617-7043-474b-913f-7e235af58db8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/41ea2617-7043-474b-913f-7e235af58db8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:17:44 np0005593232 nova_compute[250269]: 2026-01-23 10:17:44.337 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163464.336804, 1d70a2c0-b633-4e56-9f11-ae40749783be => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:17:44 np0005593232 nova_compute[250269]: 2026-01-23 10:17:44.338 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] VM Started (Lifecycle Event)#033[00m
Jan 23 05:17:44 np0005593232 nova_compute[250269]: 2026-01-23 10:17:44.541 250273 DEBUG nova.network.neutron [req-72e2bc37-f1f9-40ef-a4df-0ae197667599 req-722bd9de-b13e-47d5-80d4-f07b780777ce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Updated VIF entry in instance network info cache for port 776b152a-2675-4fba-af36-cb34bf1e20ed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:17:44 np0005593232 nova_compute[250269]: 2026-01-23 10:17:44.543 250273 DEBUG nova.network.neutron [req-72e2bc37-f1f9-40ef-a4df-0ae197667599 req-722bd9de-b13e-47d5-80d4-f07b780777ce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Updating instance_info_cache with network_info: [{"id": "776b152a-2675-4fba-af36-cb34bf1e20ed", "address": "fa:16:3e:15:11:c6", "network": {"id": "41ea2617-7043-474b-913f-7e235af58db8", "bridge": "br-int", "label": "tempest-network-smoke--887500891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap776b152a-26", "ovs_interfaceid": "776b152a-2675-4fba-af36-cb34bf1e20ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:17:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:44.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:44 np0005593232 podman[346576]: 2026-01-23 10:17:44.643195942 +0000 UTC m=+0.080259921 container create 66a9c75a54bacbb7939337b2c0fea2e7f777a25c6cd7489e53987ef80b41bc5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41ea2617-7043-474b-913f-7e235af58db8, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:17:44 np0005593232 podman[346576]: 2026-01-23 10:17:44.588239071 +0000 UTC m=+0.025303050 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:17:44 np0005593232 systemd[1]: Started libpod-conmon-66a9c75a54bacbb7939337b2c0fea2e7f777a25c6cd7489e53987ef80b41bc5f.scope.
Jan 23 05:17:44 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:17:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dbde6e351e8bfd7724b3260f1b1beb4f1bad61ace4eaba4d296b2291ed90daf7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:17:44 np0005593232 podman[346576]: 2026-01-23 10:17:44.766190787 +0000 UTC m=+0.203254786 container init 66a9c75a54bacbb7939337b2c0fea2e7f777a25c6cd7489e53987ef80b41bc5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41ea2617-7043-474b-913f-7e235af58db8, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:17:44 np0005593232 podman[346576]: 2026-01-23 10:17:44.778217309 +0000 UTC m=+0.215281268 container start 66a9c75a54bacbb7939337b2c0fea2e7f777a25c6cd7489e53987ef80b41bc5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41ea2617-7043-474b-913f-7e235af58db8, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 23 05:17:44 np0005593232 neutron-haproxy-ovnmeta-41ea2617-7043-474b-913f-7e235af58db8[346592]: [NOTICE]   (346596) : New worker (346598) forked
Jan 23 05:17:44 np0005593232 neutron-haproxy-ovnmeta-41ea2617-7043-474b-913f-7e235af58db8[346592]: [NOTICE]   (346596) : Loading success.
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.120 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.128 250273 DEBUG oslo_concurrency.lockutils [req-72e2bc37-f1f9-40ef-a4df-0ae197667599 req-722bd9de-b13e-47d5-80d4-f07b780777ce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-1d70a2c0-b633-4e56-9f11-ae40749783be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.132 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163464.3373382, 1d70a2c0-b633-4e56-9f11-ae40749783be => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.132 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:17:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:45.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.240 250273 DEBUG nova.compute.manager [req-858bc23b-2033-4fcd-9921-7f90f5014b03 req-7286dc4f-326b-46f5-aa61-6e50c1ba60d1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Received event network-vif-plugged-776b152a-2675-4fba-af36-cb34bf1e20ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.241 250273 DEBUG oslo_concurrency.lockutils [req-858bc23b-2033-4fcd-9921-7f90f5014b03 req-7286dc4f-326b-46f5-aa61-6e50c1ba60d1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1d70a2c0-b633-4e56-9f11-ae40749783be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.242 250273 DEBUG oslo_concurrency.lockutils [req-858bc23b-2033-4fcd-9921-7f90f5014b03 req-7286dc4f-326b-46f5-aa61-6e50c1ba60d1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1d70a2c0-b633-4e56-9f11-ae40749783be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.243 250273 DEBUG oslo_concurrency.lockutils [req-858bc23b-2033-4fcd-9921-7f90f5014b03 req-7286dc4f-326b-46f5-aa61-6e50c1ba60d1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1d70a2c0-b633-4e56-9f11-ae40749783be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.243 250273 DEBUG nova.compute.manager [req-858bc23b-2033-4fcd-9921-7f90f5014b03 req-7286dc4f-326b-46f5-aa61-6e50c1ba60d1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Processing event network-vif-plugged-776b152a-2675-4fba-af36-cb34bf1e20ed _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.244 250273 DEBUG nova.compute.manager [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.250 250273 DEBUG nova.virt.libvirt.driver [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.256 250273 INFO nova.virt.libvirt.driver [-] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Instance spawned successfully.#033[00m
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.256 250273 DEBUG nova.virt.libvirt.driver [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.296 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.301 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163465.2495909, 1d70a2c0-b633-4e56-9f11-ae40749783be => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.302 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:17:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2644: 321 pgs: 321 active+clean; 472 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.725 250273 DEBUG nova.virt.libvirt.driver [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.726 250273 DEBUG nova.virt.libvirt.driver [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.726 250273 DEBUG nova.virt.libvirt.driver [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.727 250273 DEBUG nova.virt.libvirt.driver [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.728 250273 DEBUG nova.virt.libvirt.driver [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.728 250273 DEBUG nova.virt.libvirt.driver [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.734 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.739 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.781 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.890 250273 INFO nova.compute.manager [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Took 18.43 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:17:45 np0005593232 nova_compute[250269]: 2026-01-23 10:17:45.890 250273 DEBUG nova.compute.manager [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:17:46 np0005593232 nova_compute[250269]: 2026-01-23 10:17:46.291 250273 INFO nova.compute.manager [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Took 21.13 seconds to build instance.#033[00m
Jan 23 05:17:46 np0005593232 nova_compute[250269]: 2026-01-23 10:17:46.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:17:46 np0005593232 nova_compute[250269]: 2026-01-23 10:17:46.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:17:46 np0005593232 nova_compute[250269]: 2026-01-23 10:17:46.294 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:17:46 np0005593232 nova_compute[250269]: 2026-01-23 10:17:46.469 250273 DEBUG oslo_concurrency.lockutils [None req-6bf390a4-a1c4-45a2-a1aa-5ad186209140 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "1d70a2c0-b633-4e56-9f11-ae40749783be" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:17:46 np0005593232 nova_compute[250269]: 2026-01-23 10:17:46.470 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:17:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:46.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:17:46 np0005593232 nova_compute[250269]: 2026-01-23 10:17:46.850 250273 DEBUG nova.network.neutron [req-2ae86478-594d-49fd-91a5-8d73e2ba9ec9 req-9d3d03c5-e5bc-4dd3-8950-57862deedd45 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Updated VIF entry in instance network info cache for port 4a04244c-3270-4ff3-ad30-52e80e7db513. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:17:46 np0005593232 nova_compute[250269]: 2026-01-23 10:17:46.850 250273 DEBUG nova.network.neutron [req-2ae86478-594d-49fd-91a5-8d73e2ba9ec9 req-9d3d03c5-e5bc-4dd3-8950-57862deedd45 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Updating instance_info_cache with network_info: [{"id": "4a04244c-3270-4ff3-ad30-52e80e7db513", "address": "fa:16:3e:c0:68:91", "network": {"id": "f98d79de-4a23-4f29-9848-c5d4c5683a5d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1507431135-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ae621f21a8e438fb95152309b38cee5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a04244c-32", "ovs_interfaceid": "4a04244c-3270-4ff3-ad30-52e80e7db513", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:17:46 np0005593232 nova_compute[250269]: 2026-01-23 10:17:46.940 250273 DEBUG oslo_concurrency.lockutils [req-2ae86478-594d-49fd-91a5-8d73e2ba9ec9 req-9d3d03c5-e5bc-4dd3-8950-57862deedd45 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:17:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:17:47 np0005593232 nova_compute[250269]: 2026-01-23 10:17:47.104 250273 DEBUG oslo_concurrency.lockutils [None req-dd43a847-3c6f-4e62-bc20-54ca0dbb695b 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Acquiring lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:17:47 np0005593232 nova_compute[250269]: 2026-01-23 10:17:47.104 250273 DEBUG oslo_concurrency.lockutils [None req-dd43a847-3c6f-4e62-bc20-54ca0dbb695b 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:17:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:17:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:17:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:17:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:17:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0020129219593337058 of space, bias 1.0, pg target 0.6038765878001118 quantized to 32 (current 32)
Jan 23 05:17:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:17:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00867437593057882 of space, bias 1.0, pg target 2.602312779173646 quantized to 32 (current 32)
Jan 23 05:17:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:17:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:17:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:17:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8506777231062721 quantized to 32 (current 32)
Jan 23 05:17:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:17:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 23 05:17:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:17:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:17:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:17:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 23 05:17:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:17:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 23 05:17:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:17:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:17:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:17:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 23 05:17:47 np0005593232 nova_compute[250269]: 2026-01-23 10:17:47.175 250273 INFO nova.compute.manager [None req-dd43a847-3c6f-4e62-bc20-54ca0dbb695b 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Detaching volume 23e1c9da-11aa-46ec-90eb-aa0b6ee150dc#033[00m
Jan 23 05:17:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:47.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:47 np0005593232 nova_compute[250269]: 2026-01-23 10:17:47.415 250273 DEBUG nova.compute.manager [req-9d6561b6-9280-4999-875c-e17f384a3c28 req-ed074c2e-cbdc-47fa-80f5-50536a2d8b7b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Received event network-vif-plugged-776b152a-2675-4fba-af36-cb34bf1e20ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:17:47 np0005593232 nova_compute[250269]: 2026-01-23 10:17:47.416 250273 DEBUG oslo_concurrency.lockutils [req-9d6561b6-9280-4999-875c-e17f384a3c28 req-ed074c2e-cbdc-47fa-80f5-50536a2d8b7b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1d70a2c0-b633-4e56-9f11-ae40749783be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:17:47 np0005593232 nova_compute[250269]: 2026-01-23 10:17:47.417 250273 DEBUG oslo_concurrency.lockutils [req-9d6561b6-9280-4999-875c-e17f384a3c28 req-ed074c2e-cbdc-47fa-80f5-50536a2d8b7b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1d70a2c0-b633-4e56-9f11-ae40749783be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:17:47 np0005593232 nova_compute[250269]: 2026-01-23 10:17:47.417 250273 DEBUG oslo_concurrency.lockutils [req-9d6561b6-9280-4999-875c-e17f384a3c28 req-ed074c2e-cbdc-47fa-80f5-50536a2d8b7b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1d70a2c0-b633-4e56-9f11-ae40749783be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:17:47 np0005593232 nova_compute[250269]: 2026-01-23 10:17:47.418 250273 DEBUG nova.compute.manager [req-9d6561b6-9280-4999-875c-e17f384a3c28 req-ed074c2e-cbdc-47fa-80f5-50536a2d8b7b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] No waiting events found dispatching network-vif-plugged-776b152a-2675-4fba-af36-cb34bf1e20ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:17:47 np0005593232 nova_compute[250269]: 2026-01-23 10:17:47.418 250273 WARNING nova.compute.manager [req-9d6561b6-9280-4999-875c-e17f384a3c28 req-ed074c2e-cbdc-47fa-80f5-50536a2d8b7b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Received unexpected event network-vif-plugged-776b152a-2675-4fba-af36-cb34bf1e20ed for instance with vm_state active and task_state None.#033[00m
Jan 23 05:17:47 np0005593232 nova_compute[250269]: 2026-01-23 10:17:47.479 250273 INFO nova.virt.block_device [None req-dd43a847-3c6f-4e62-bc20-54ca0dbb695b 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Attempting to driver detach volume 23e1c9da-11aa-46ec-90eb-aa0b6ee150dc from mountpoint /dev/vdb#033[00m
Jan 23 05:17:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2645: 321 pgs: 321 active+clean; 472 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 114 op/s
Jan 23 05:17:47 np0005593232 nova_compute[250269]: 2026-01-23 10:17:47.494 250273 DEBUG nova.virt.libvirt.driver [None req-dd43a847-3c6f-4e62-bc20-54ca0dbb695b 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Attempting to detach device vdb from instance ccd07f55-529f-4dbb-989c-2cdbdd393a0b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 23 05:17:47 np0005593232 nova_compute[250269]: 2026-01-23 10:17:47.495 250273 DEBUG nova.virt.libvirt.guest [None req-dd43a847-3c6f-4e62-bc20-54ca0dbb695b 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] detach device xml: <disk type="network" device="disk">
Jan 23 05:17:47 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:17:47 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-23e1c9da-11aa-46ec-90eb-aa0b6ee150dc">
Jan 23 05:17:47 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:17:47 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:17:47 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:17:47 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:17:47 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 05:17:47 np0005593232 nova_compute[250269]:  <serial>23e1c9da-11aa-46ec-90eb-aa0b6ee150dc</serial>
Jan 23 05:17:47 np0005593232 nova_compute[250269]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 23 05:17:47 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:17:47 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 05:17:47 np0005593232 nova_compute[250269]: 2026-01-23 10:17:47.511 250273 INFO nova.virt.libvirt.driver [None req-dd43a847-3c6f-4e62-bc20-54ca0dbb695b 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Successfully detached device vdb from instance ccd07f55-529f-4dbb-989c-2cdbdd393a0b from the persistent domain config.#033[00m
Jan 23 05:17:47 np0005593232 nova_compute[250269]: 2026-01-23 10:17:47.511 250273 DEBUG nova.virt.libvirt.driver [None req-dd43a847-3c6f-4e62-bc20-54ca0dbb695b 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance ccd07f55-529f-4dbb-989c-2cdbdd393a0b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 23 05:17:47 np0005593232 nova_compute[250269]: 2026-01-23 10:17:47.512 250273 DEBUG nova.virt.libvirt.guest [None req-dd43a847-3c6f-4e62-bc20-54ca0dbb695b 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] detach device xml: <disk type="network" device="disk">
Jan 23 05:17:47 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:17:47 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-23e1c9da-11aa-46ec-90eb-aa0b6ee150dc">
Jan 23 05:17:47 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:17:47 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:17:47 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:17:47 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:17:47 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 05:17:47 np0005593232 nova_compute[250269]:  <serial>23e1c9da-11aa-46ec-90eb-aa0b6ee150dc</serial>
Jan 23 05:17:47 np0005593232 nova_compute[250269]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 23 05:17:47 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:17:47 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 05:17:47 np0005593232 nova_compute[250269]: 2026-01-23 10:17:47.696 250273 DEBUG nova.virt.libvirt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Received event <DeviceRemovedEvent: 1769163467.6959167, ccd07f55-529f-4dbb-989c-2cdbdd393a0b => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 23 05:17:47 np0005593232 nova_compute[250269]: 2026-01-23 10:17:47.704 250273 DEBUG nova.virt.libvirt.driver [None req-dd43a847-3c6f-4e62-bc20-54ca0dbb695b 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance ccd07f55-529f-4dbb-989c-2cdbdd393a0b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 23 05:17:47 np0005593232 nova_compute[250269]: 2026-01-23 10:17:47.707 250273 INFO nova.virt.libvirt.driver [None req-dd43a847-3c6f-4e62-bc20-54ca0dbb695b 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Successfully detached device vdb from instance ccd07f55-529f-4dbb-989c-2cdbdd393a0b from the live domain config.#033[00m
Jan 23 05:17:48 np0005593232 nova_compute[250269]: 2026-01-23 10:17:48.058 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:48 np0005593232 nova_compute[250269]: 2026-01-23 10:17:48.183 250273 DEBUG nova.objects.instance [None req-dd43a847-3c6f-4e62-bc20-54ca0dbb695b 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lazy-loading 'flavor' on Instance uuid ccd07f55-529f-4dbb-989c-2cdbdd393a0b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:17:48 np0005593232 nova_compute[250269]: 2026-01-23 10:17:48.341 250273 DEBUG oslo_concurrency.lockutils [None req-dd43a847-3c6f-4e62-bc20-54ca0dbb695b 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.237s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:17:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:48.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:49.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2646: 321 pgs: 321 active+clean; 474 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 126 op/s
Jan 23 05:17:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:50.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:51.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2647: 321 pgs: 321 active+clean; 474 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 126 op/s
Jan 23 05:17:51 np0005593232 nova_compute[250269]: 2026-01-23 10:17:51.516 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:51 np0005593232 nova_compute[250269]: 2026-01-23 10:17:51.517 250273 DEBUG oslo_concurrency.lockutils [None req-b3cb7ee7-4c59-4b47-be0d-f1e3a9a157f5 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Acquiring lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:17:51 np0005593232 nova_compute[250269]: 2026-01-23 10:17:51.518 250273 DEBUG oslo_concurrency.lockutils [None req-b3cb7ee7-4c59-4b47-be0d-f1e3a9a157f5 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:17:51 np0005593232 nova_compute[250269]: 2026-01-23 10:17:51.558 250273 INFO nova.compute.manager [None req-b3cb7ee7-4c59-4b47-be0d-f1e3a9a157f5 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Detaching volume b016d51c-9f03-4ca0-86e5-02cc8ca5059f#033[00m
Jan 23 05:17:51 np0005593232 nova_compute[250269]: 2026-01-23 10:17:51.845 250273 INFO nova.virt.block_device [None req-b3cb7ee7-4c59-4b47-be0d-f1e3a9a157f5 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Attempting to driver detach volume b016d51c-9f03-4ca0-86e5-02cc8ca5059f from mountpoint /dev/vdc#033[00m
Jan 23 05:17:51 np0005593232 nova_compute[250269]: 2026-01-23 10:17:51.854 250273 DEBUG nova.virt.libvirt.driver [None req-b3cb7ee7-4c59-4b47-be0d-f1e3a9a157f5 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Attempting to detach device vdc from instance ccd07f55-529f-4dbb-989c-2cdbdd393a0b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 23 05:17:51 np0005593232 nova_compute[250269]: 2026-01-23 10:17:51.855 250273 DEBUG nova.virt.libvirt.guest [None req-b3cb7ee7-4c59-4b47-be0d-f1e3a9a157f5 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] detach device xml: <disk type="network" device="disk">
Jan 23 05:17:51 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:17:51 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-b016d51c-9f03-4ca0-86e5-02cc8ca5059f">
Jan 23 05:17:51 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:17:51 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:17:51 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:17:51 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:17:51 np0005593232 nova_compute[250269]:  <target dev="vdc" bus="virtio"/>
Jan 23 05:17:51 np0005593232 nova_compute[250269]:  <serial>b016d51c-9f03-4ca0-86e5-02cc8ca5059f</serial>
Jan 23 05:17:51 np0005593232 nova_compute[250269]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Jan 23 05:17:51 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:17:51 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 05:17:51 np0005593232 nova_compute[250269]: 2026-01-23 10:17:51.862 250273 INFO nova.virt.libvirt.driver [None req-b3cb7ee7-4c59-4b47-be0d-f1e3a9a157f5 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Successfully detached device vdc from instance ccd07f55-529f-4dbb-989c-2cdbdd393a0b from the persistent domain config.#033[00m
Jan 23 05:17:51 np0005593232 nova_compute[250269]: 2026-01-23 10:17:51.863 250273 DEBUG nova.virt.libvirt.driver [None req-b3cb7ee7-4c59-4b47-be0d-f1e3a9a157f5 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] (1/8): Attempting to detach device vdc with device alias virtio-disk2 from instance ccd07f55-529f-4dbb-989c-2cdbdd393a0b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 23 05:17:51 np0005593232 nova_compute[250269]: 2026-01-23 10:17:51.863 250273 DEBUG nova.virt.libvirt.guest [None req-b3cb7ee7-4c59-4b47-be0d-f1e3a9a157f5 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] detach device xml: <disk type="network" device="disk">
Jan 23 05:17:51 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:17:51 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-b016d51c-9f03-4ca0-86e5-02cc8ca5059f">
Jan 23 05:17:51 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:17:51 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:17:51 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:17:51 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:17:51 np0005593232 nova_compute[250269]:  <target dev="vdc" bus="virtio"/>
Jan 23 05:17:51 np0005593232 nova_compute[250269]:  <serial>b016d51c-9f03-4ca0-86e5-02cc8ca5059f</serial>
Jan 23 05:17:51 np0005593232 nova_compute[250269]:  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
Jan 23 05:17:51 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:17:51 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 05:17:51 np0005593232 nova_compute[250269]: 2026-01-23 10:17:51.977 250273 DEBUG nova.virt.libvirt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Received event <DeviceRemovedEvent: 1769163471.9768033, ccd07f55-529f-4dbb-989c-2cdbdd393a0b => virtio-disk2> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 23 05:17:51 np0005593232 nova_compute[250269]: 2026-01-23 10:17:51.979 250273 DEBUG nova.virt.libvirt.driver [None req-b3cb7ee7-4c59-4b47-be0d-f1e3a9a157f5 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Start waiting for the detach event from libvirt for device vdc with device alias virtio-disk2 for instance ccd07f55-529f-4dbb-989c-2cdbdd393a0b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 23 05:17:51 np0005593232 nova_compute[250269]: 2026-01-23 10:17:51.981 250273 INFO nova.virt.libvirt.driver [None req-b3cb7ee7-4c59-4b47-be0d-f1e3a9a157f5 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Successfully detached device vdc from instance ccd07f55-529f-4dbb-989c-2cdbdd393a0b from the live domain config.#033[00m
Jan 23 05:17:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:17:52 np0005593232 nova_compute[250269]: 2026-01-23 10:17:52.609 250273 DEBUG nova.objects.instance [None req-b3cb7ee7-4c59-4b47-be0d-f1e3a9a157f5 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lazy-loading 'flavor' on Instance uuid ccd07f55-529f-4dbb-989c-2cdbdd393a0b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:17:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:52.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:52 np0005593232 nova_compute[250269]: 2026-01-23 10:17:52.730 250273 DEBUG oslo_concurrency.lockutils [None req-b3cb7ee7-4c59-4b47-be0d-f1e3a9a157f5 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:17:53 np0005593232 nova_compute[250269]: 2026-01-23 10:17:53.061 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:53.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2648: 321 pgs: 321 active+clean; 474 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 2.0 MiB/s wr, 140 op/s
Jan 23 05:17:54 np0005593232 nova_compute[250269]: 2026-01-23 10:17:54.613 250273 DEBUG nova.compute.manager [req-b583e621-e466-49d9-b5af-30f127c55c28 req-2a0d1fac-2866-4c95-8ab6-e76b68fe7234 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Received event network-changed-4a04244c-3270-4ff3-ad30-52e80e7db513 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:17:54 np0005593232 nova_compute[250269]: 2026-01-23 10:17:54.614 250273 DEBUG nova.compute.manager [req-b583e621-e466-49d9-b5af-30f127c55c28 req-2a0d1fac-2866-4c95-8ab6-e76b68fe7234 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Refreshing instance network info cache due to event network-changed-4a04244c-3270-4ff3-ad30-52e80e7db513. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:17:54 np0005593232 nova_compute[250269]: 2026-01-23 10:17:54.614 250273 DEBUG oslo_concurrency.lockutils [req-b583e621-e466-49d9-b5af-30f127c55c28 req-2a0d1fac-2866-4c95-8ab6-e76b68fe7234 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:17:54 np0005593232 nova_compute[250269]: 2026-01-23 10:17:54.614 250273 DEBUG oslo_concurrency.lockutils [req-b583e621-e466-49d9-b5af-30f127c55c28 req-2a0d1fac-2866-4c95-8ab6-e76b68fe7234 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:17:54 np0005593232 nova_compute[250269]: 2026-01-23 10:17:54.614 250273 DEBUG nova.network.neutron [req-b583e621-e466-49d9-b5af-30f127c55c28 req-2a0d1fac-2866-4c95-8ab6-e76b68fe7234 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Refreshing network info cache for port 4a04244c-3270-4ff3-ad30-52e80e7db513 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:17:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:54.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:55.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2649: 321 pgs: 321 active+clean; 460 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 630 KiB/s wr, 127 op/s
Jan 23 05:17:56 np0005593232 nova_compute[250269]: 2026-01-23 10:17:56.045 250273 DEBUG nova.compute.manager [req-0581e767-7c56-4997-840c-0f4491d548c0 req-a905ffd5-01ec-42dc-ab99-5141696ecd8d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Received event network-changed-776b152a-2675-4fba-af36-cb34bf1e20ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:17:56 np0005593232 nova_compute[250269]: 2026-01-23 10:17:56.046 250273 DEBUG nova.compute.manager [req-0581e767-7c56-4997-840c-0f4491d548c0 req-a905ffd5-01ec-42dc-ab99-5141696ecd8d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Refreshing instance network info cache due to event network-changed-776b152a-2675-4fba-af36-cb34bf1e20ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:17:56 np0005593232 nova_compute[250269]: 2026-01-23 10:17:56.047 250273 DEBUG oslo_concurrency.lockutils [req-0581e767-7c56-4997-840c-0f4491d548c0 req-a905ffd5-01ec-42dc-ab99-5141696ecd8d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-1d70a2c0-b633-4e56-9f11-ae40749783be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:17:56 np0005593232 nova_compute[250269]: 2026-01-23 10:17:56.047 250273 DEBUG oslo_concurrency.lockutils [req-0581e767-7c56-4997-840c-0f4491d548c0 req-a905ffd5-01ec-42dc-ab99-5141696ecd8d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-1d70a2c0-b633-4e56-9f11-ae40749783be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:17:56 np0005593232 nova_compute[250269]: 2026-01-23 10:17:56.048 250273 DEBUG nova.network.neutron [req-0581e767-7c56-4997-840c-0f4491d548c0 req-a905ffd5-01ec-42dc-ab99-5141696ecd8d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Refreshing network info cache for port 776b152a-2675-4fba-af36-cb34bf1e20ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:17:56 np0005593232 nova_compute[250269]: 2026-01-23 10:17:56.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:17:56 np0005593232 nova_compute[250269]: 2026-01-23 10:17:56.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:17:56 np0005593232 nova_compute[250269]: 2026-01-23 10:17:56.550 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:17:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:56.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:17:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:17:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:17:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:57.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:17:57 np0005593232 nova_compute[250269]: 2026-01-23 10:17:57.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:17:57 np0005593232 nova_compute[250269]: 2026-01-23 10:17:57.344 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:17:57 np0005593232 nova_compute[250269]: 2026-01-23 10:17:57.345 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:17:57 np0005593232 nova_compute[250269]: 2026-01-23 10:17:57.375 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:17:57 np0005593232 nova_compute[250269]: 2026-01-23 10:17:57.376 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:17:57 np0005593232 nova_compute[250269]: 2026-01-23 10:17:57.437 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:17:57 np0005593232 nova_compute[250269]: 2026-01-23 10:17:57.438 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:17:57 np0005593232 nova_compute[250269]: 2026-01-23 10:17:57.438 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:17:57 np0005593232 nova_compute[250269]: 2026-01-23 10:17:57.438 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:17:57 np0005593232 nova_compute[250269]: 2026-01-23 10:17:57.439 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:17:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2650: 321 pgs: 321 active+clean; 420 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 1.0 MiB/s wr, 230 op/s
Jan 23 05:17:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:17:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1163182349' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:17:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:17:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3486774755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:17:57 np0005593232 nova_compute[250269]: 2026-01-23 10:17:57.924 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:17:58 np0005593232 nova_compute[250269]: 2026-01-23 10:17:58.017 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:17:58 np0005593232 nova_compute[250269]: 2026-01-23 10:17:58.018 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:17:58 np0005593232 nova_compute[250269]: 2026-01-23 10:17:58.021 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:17:58 np0005593232 nova_compute[250269]: 2026-01-23 10:17:58.021 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:17:58 np0005593232 nova_compute[250269]: 2026-01-23 10:17:58.061 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:17:58 np0005593232 nova_compute[250269]: 2026-01-23 10:17:58.194 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:17:58 np0005593232 nova_compute[250269]: 2026-01-23 10:17:58.195 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3732MB free_disk=20.94619369506836GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:17:58 np0005593232 nova_compute[250269]: 2026-01-23 10:17:58.195 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:17:58 np0005593232 nova_compute[250269]: 2026-01-23 10:17:58.195 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:17:58 np0005593232 nova_compute[250269]: 2026-01-23 10:17:58.296 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance ccd07f55-529f-4dbb-989c-2cdbdd393a0b actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:17:58 np0005593232 nova_compute[250269]: 2026-01-23 10:17:58.296 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 1d70a2c0-b633-4e56-9f11-ae40749783be actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:17:58 np0005593232 nova_compute[250269]: 2026-01-23 10:17:58.296 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:17:58 np0005593232 nova_compute[250269]: 2026-01-23 10:17:58.296 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:17:58 np0005593232 nova_compute[250269]: 2026-01-23 10:17:58.372 250273 DEBUG nova.network.neutron [req-b583e621-e466-49d9-b5af-30f127c55c28 req-2a0d1fac-2866-4c95-8ab6-e76b68fe7234 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Updated VIF entry in instance network info cache for port 4a04244c-3270-4ff3-ad30-52e80e7db513. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:17:58 np0005593232 nova_compute[250269]: 2026-01-23 10:17:58.373 250273 DEBUG nova.network.neutron [req-b583e621-e466-49d9-b5af-30f127c55c28 req-2a0d1fac-2866-4c95-8ab6-e76b68fe7234 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Updating instance_info_cache with network_info: [{"id": "4a04244c-3270-4ff3-ad30-52e80e7db513", "address": "fa:16:3e:c0:68:91", "network": {"id": "f98d79de-4a23-4f29-9848-c5d4c5683a5d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1507431135-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ae621f21a8e438fb95152309b38cee5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a04244c-32", "ovs_interfaceid": "4a04244c-3270-4ff3-ad30-52e80e7db513", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:17:58 np0005593232 nova_compute[250269]: 2026-01-23 10:17:58.425 250273 DEBUG oslo_concurrency.lockutils [req-b583e621-e466-49d9-b5af-30f127c55c28 req-2a0d1fac-2866-4c95-8ab6-e76b68fe7234 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:17:58 np0005593232 nova_compute[250269]: 2026-01-23 10:17:58.446 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:17:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:17:58.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:17:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/13219395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:17:58 np0005593232 nova_compute[250269]: 2026-01-23 10:17:58.921 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:17:58 np0005593232 nova_compute[250269]: 2026-01-23 10:17:58.927 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:17:58 np0005593232 nova_compute[250269]: 2026-01-23 10:17:58.947 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:17:58 np0005593232 nova_compute[250269]: 2026-01-23 10:17:58.985 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:17:58 np0005593232 nova_compute[250269]: 2026-01-23 10:17:58.986 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:17:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:17:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:17:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:17:59.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:17:59 np0005593232 ovn_controller[151001]: 2026-01-23T10:17:59Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:15:11:c6 10.100.0.8
Jan 23 05:17:59 np0005593232 ovn_controller[151001]: 2026-01-23T10:17:59Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:15:11:c6 10.100.0.8
Jan 23 05:17:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2651: 321 pgs: 321 active+clean; 452 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.8 MiB/s wr, 202 op/s
Jan 23 05:17:59 np0005593232 nova_compute[250269]: 2026-01-23 10:17:59.540 250273 DEBUG nova.network.neutron [req-0581e767-7c56-4997-840c-0f4491d548c0 req-a905ffd5-01ec-42dc-ab99-5141696ecd8d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Updated VIF entry in instance network info cache for port 776b152a-2675-4fba-af36-cb34bf1e20ed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:17:59 np0005593232 nova_compute[250269]: 2026-01-23 10:17:59.540 250273 DEBUG nova.network.neutron [req-0581e767-7c56-4997-840c-0f4491d548c0 req-a905ffd5-01ec-42dc-ab99-5141696ecd8d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Updating instance_info_cache with network_info: [{"id": "776b152a-2675-4fba-af36-cb34bf1e20ed", "address": "fa:16:3e:15:11:c6", "network": {"id": "41ea2617-7043-474b-913f-7e235af58db8", "bridge": "br-int", "label": "tempest-network-smoke--887500891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap776b152a-26", "ovs_interfaceid": "776b152a-2675-4fba-af36-cb34bf1e20ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:17:59 np0005593232 nova_compute[250269]: 2026-01-23 10:17:59.656 250273 DEBUG oslo_concurrency.lockutils [req-0581e767-7c56-4997-840c-0f4491d548c0 req-a905ffd5-01ec-42dc-ab99-5141696ecd8d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-1d70a2c0-b633-4e56-9f11-ae40749783be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:18:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:00.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:01.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2652: 321 pgs: 321 active+clean; 452 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 2.6 MiB/s wr, 190 op/s
Jan 23 05:18:01 np0005593232 nova_compute[250269]: 2026-01-23 10:18:01.552 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:01 np0005593232 nova_compute[250269]: 2026-01-23 10:18:01.993 250273 DEBUG nova.compute.manager [req-2d318209-ffac-48f3-8e75-b83c7537a209 req-ff48117c-c871-4e43-8e38-58e9caeb3e60 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Received event network-changed-4a04244c-3270-4ff3-ad30-52e80e7db513 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:18:01 np0005593232 nova_compute[250269]: 2026-01-23 10:18:01.993 250273 DEBUG nova.compute.manager [req-2d318209-ffac-48f3-8e75-b83c7537a209 req-ff48117c-c871-4e43-8e38-58e9caeb3e60 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Refreshing instance network info cache due to event network-changed-4a04244c-3270-4ff3-ad30-52e80e7db513. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:18:01 np0005593232 nova_compute[250269]: 2026-01-23 10:18:01.994 250273 DEBUG oslo_concurrency.lockutils [req-2d318209-ffac-48f3-8e75-b83c7537a209 req-ff48117c-c871-4e43-8e38-58e9caeb3e60 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:18:01 np0005593232 nova_compute[250269]: 2026-01-23 10:18:01.994 250273 DEBUG oslo_concurrency.lockutils [req-2d318209-ffac-48f3-8e75-b83c7537a209 req-ff48117c-c871-4e43-8e38-58e9caeb3e60 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:18:01 np0005593232 nova_compute[250269]: 2026-01-23 10:18:01.994 250273 DEBUG nova.network.neutron [req-2d318209-ffac-48f3-8e75-b83c7537a209 req-ff48117c-c871-4e43-8e38-58e9caeb3e60 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Refreshing network info cache for port 4a04244c-3270-4ff3-ad30-52e80e7db513 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:18:02.045129) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163482045202, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 1253, "num_deletes": 257, "total_data_size": 1985851, "memory_usage": 2025120, "flush_reason": "Manual Compaction"}
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163482059455, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 1954264, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58214, "largest_seqno": 59466, "table_properties": {"data_size": 1948456, "index_size": 3074, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12831, "raw_average_key_size": 19, "raw_value_size": 1936579, "raw_average_value_size": 2970, "num_data_blocks": 136, "num_entries": 652, "num_filter_entries": 652, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769163372, "oldest_key_time": 1769163372, "file_creation_time": 1769163482, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 14383 microseconds, and 6842 cpu microseconds.
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:18:02.059512) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 1954264 bytes OK
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:18:02.059532) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:18:02.061253) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:18:02.061302) EVENT_LOG_v1 {"time_micros": 1769163482061264, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:18:02.061325) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 1980195, prev total WAL file size 1980195, number of live WAL files 2.
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:18:02.062100) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323539' seq:72057594037927935, type:22 .. '6C6F676D0032353132' seq:0, type:0; will stop at (end)
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(1908KB)], [134(8829KB)]
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163482062203, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 10996079, "oldest_snapshot_seqno": -1}
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 8298 keys, 10860092 bytes, temperature: kUnknown
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163482146815, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 10860092, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10807196, "index_size": 31023, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20805, "raw_key_size": 219013, "raw_average_key_size": 26, "raw_value_size": 10661894, "raw_average_value_size": 1284, "num_data_blocks": 1189, "num_entries": 8298, "num_filter_entries": 8298, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769163482, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:18:02.147144) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 10860092 bytes
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:18:02.148896) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.8 rd, 128.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 8.6 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(11.2) write-amplify(5.6) OK, records in: 8825, records dropped: 527 output_compression: NoCompression
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:18:02.148920) EVENT_LOG_v1 {"time_micros": 1769163482148908, "job": 82, "event": "compaction_finished", "compaction_time_micros": 84734, "compaction_time_cpu_micros": 32008, "output_level": 6, "num_output_files": 1, "total_output_size": 10860092, "num_input_records": 8825, "num_output_records": 8298, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163482149519, "job": 82, "event": "table_file_deletion", "file_number": 136}
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163482151645, "job": 82, "event": "table_file_deletion", "file_number": 134}
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:18:02.061940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:18:02.151737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:18:02.151746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:18:02.151747) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:18:02.151749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:18:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:18:02.151751) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:18:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:02.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:03 np0005593232 nova_compute[250269]: 2026-01-23 10:18:03.063 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:03.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:03 np0005593232 podman[346715]: 2026-01-23 10:18:03.438578743 +0000 UTC m=+0.097508722 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 23 05:18:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2653: 321 pgs: 321 active+clean; 472 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.1 MiB/s wr, 246 op/s
Jan 23 05:18:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:04.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:18:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:05.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:18:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2654: 321 pgs: 321 active+clean; 472 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 4.1 MiB/s wr, 274 op/s
Jan 23 05:18:05 np0005593232 nova_compute[250269]: 2026-01-23 10:18:05.558 250273 DEBUG nova.network.neutron [req-2d318209-ffac-48f3-8e75-b83c7537a209 req-ff48117c-c871-4e43-8e38-58e9caeb3e60 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Updated VIF entry in instance network info cache for port 4a04244c-3270-4ff3-ad30-52e80e7db513. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:18:05 np0005593232 nova_compute[250269]: 2026-01-23 10:18:05.559 250273 DEBUG nova.network.neutron [req-2d318209-ffac-48f3-8e75-b83c7537a209 req-ff48117c-c871-4e43-8e38-58e9caeb3e60 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Updating instance_info_cache with network_info: [{"id": "4a04244c-3270-4ff3-ad30-52e80e7db513", "address": "fa:16:3e:c0:68:91", "network": {"id": "f98d79de-4a23-4f29-9848-c5d4c5683a5d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1507431135-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ae621f21a8e438fb95152309b38cee5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a04244c-32", "ovs_interfaceid": "4a04244c-3270-4ff3-ad30-52e80e7db513", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:18:05 np0005593232 nova_compute[250269]: 2026-01-23 10:18:05.606 250273 DEBUG oslo_concurrency.lockutils [req-2d318209-ffac-48f3-8e75-b83c7537a209 req-ff48117c-c871-4e43-8e38-58e9caeb3e60 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-ccd07f55-529f-4dbb-989c-2cdbdd393a0b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:18:06 np0005593232 nova_compute[250269]: 2026-01-23 10:18:06.206 250273 INFO nova.compute.manager [None req-b525213f-7200-4f14-862d-c536fb0a94cb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Get console output#033[00m
Jan 23 05:18:06 np0005593232 nova_compute[250269]: 2026-01-23 10:18:06.217 312104 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 23 05:18:06 np0005593232 nova_compute[250269]: 2026-01-23 10:18:06.555 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:06.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:18:07 np0005593232 nova_compute[250269]: 2026-01-23 10:18:07.101 250273 INFO nova.compute.manager [None req-c9d10b14-d774-41e4-a279-12ed8a333ab2 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Get console output#033[00m
Jan 23 05:18:07 np0005593232 nova_compute[250269]: 2026-01-23 10:18:07.108 312104 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 23 05:18:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:18:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:07.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:18:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2655: 321 pgs: 321 active+clean; 472 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 261 op/s
Jan 23 05:18:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:18:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:18:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:18:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:18:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:18:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:18:08 np0005593232 nova_compute[250269]: 2026-01-23 10:18:08.066 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:08 np0005593232 podman[346746]: 2026-01-23 10:18:08.412809434 +0000 UTC m=+0.066432249 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 05:18:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:08.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:08 np0005593232 nova_compute[250269]: 2026-01-23 10:18:08.697 250273 DEBUG nova.compute.manager [req-546e1a26-5794-45cf-b196-53d316e764bc req-1de9a7f9-5012-43c9-8202-0802c9d2a45f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Received event network-changed-776b152a-2675-4fba-af36-cb34bf1e20ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:18:08 np0005593232 nova_compute[250269]: 2026-01-23 10:18:08.698 250273 DEBUG nova.compute.manager [req-546e1a26-5794-45cf-b196-53d316e764bc req-1de9a7f9-5012-43c9-8202-0802c9d2a45f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Refreshing instance network info cache due to event network-changed-776b152a-2675-4fba-af36-cb34bf1e20ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:18:08 np0005593232 nova_compute[250269]: 2026-01-23 10:18:08.698 250273 DEBUG oslo_concurrency.lockutils [req-546e1a26-5794-45cf-b196-53d316e764bc req-1de9a7f9-5012-43c9-8202-0802c9d2a45f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-1d70a2c0-b633-4e56-9f11-ae40749783be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:18:08 np0005593232 nova_compute[250269]: 2026-01-23 10:18:08.698 250273 DEBUG oslo_concurrency.lockutils [req-546e1a26-5794-45cf-b196-53d316e764bc req-1de9a7f9-5012-43c9-8202-0802c9d2a45f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-1d70a2c0-b633-4e56-9f11-ae40749783be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:18:08 np0005593232 nova_compute[250269]: 2026-01-23 10:18:08.698 250273 DEBUG nova.network.neutron [req-546e1a26-5794-45cf-b196-53d316e764bc req-1de9a7f9-5012-43c9-8202-0802c9d2a45f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Refreshing network info cache for port 776b152a-2675-4fba-af36-cb34bf1e20ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:18:08 np0005593232 nova_compute[250269]: 2026-01-23 10:18:08.848 250273 DEBUG oslo_concurrency.lockutils [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "1d70a2c0-b633-4e56-9f11-ae40749783be" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:08 np0005593232 nova_compute[250269]: 2026-01-23 10:18:08.848 250273 DEBUG oslo_concurrency.lockutils [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "1d70a2c0-b633-4e56-9f11-ae40749783be" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:08 np0005593232 nova_compute[250269]: 2026-01-23 10:18:08.849 250273 DEBUG oslo_concurrency.lockutils [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "1d70a2c0-b633-4e56-9f11-ae40749783be-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:08 np0005593232 nova_compute[250269]: 2026-01-23 10:18:08.849 250273 DEBUG oslo_concurrency.lockutils [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "1d70a2c0-b633-4e56-9f11-ae40749783be-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:08 np0005593232 nova_compute[250269]: 2026-01-23 10:18:08.849 250273 DEBUG oslo_concurrency.lockutils [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "1d70a2c0-b633-4e56-9f11-ae40749783be-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:08 np0005593232 nova_compute[250269]: 2026-01-23 10:18:08.851 250273 INFO nova.compute.manager [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Terminating instance#033[00m
Jan 23 05:18:08 np0005593232 nova_compute[250269]: 2026-01-23 10:18:08.853 250273 DEBUG nova.compute.manager [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:18:08 np0005593232 kernel: tap776b152a-26 (unregistering): left promiscuous mode
Jan 23 05:18:08 np0005593232 NetworkManager[49057]: <info>  [1769163488.9194] device (tap776b152a-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:18:08 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:08Z|00554|binding|INFO|Releasing lport 776b152a-2675-4fba-af36-cb34bf1e20ed from this chassis (sb_readonly=0)
Jan 23 05:18:08 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:08Z|00555|binding|INFO|Setting lport 776b152a-2675-4fba-af36-cb34bf1e20ed down in Southbound
Jan 23 05:18:08 np0005593232 nova_compute[250269]: 2026-01-23 10:18:08.967 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:08 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:08Z|00556|binding|INFO|Removing iface tap776b152a-26 ovn-installed in OVS
Jan 23 05:18:08 np0005593232 nova_compute[250269]: 2026-01-23 10:18:08.969 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:08.979 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:11:c6 10.100.0.8'], port_security=['fa:16:3e:15:11:c6 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1d70a2c0-b633-4e56-9f11-ae40749783be', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41ea2617-7043-474b-913f-7e235af58db8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98c94577fcdb4c3d893898ede79ea2d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '52d68d78-c737-420f-894f-05c7f67378f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=53d70bbc-d32d-46a9-8fb7-68cccce823e4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=776b152a-2675-4fba-af36-cb34bf1e20ed) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:18:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:08.982 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 776b152a-2675-4fba-af36-cb34bf1e20ed in datapath 41ea2617-7043-474b-913f-7e235af58db8 unbound from our chassis#033[00m
Jan 23 05:18:08 np0005593232 nova_compute[250269]: 2026-01-23 10:18:08.984 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:08.984 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41ea2617-7043-474b-913f-7e235af58db8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:18:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:08.986 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e4bb848a-a7b0-4d50-b1fb-b481282f74a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:08.987 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-41ea2617-7043-474b-913f-7e235af58db8 namespace which is not needed anymore#033[00m
Jan 23 05:18:09 np0005593232 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000091.scope: Deactivated successfully.
Jan 23 05:18:09 np0005593232 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000091.scope: Consumed 14.029s CPU time.
Jan 23 05:18:09 np0005593232 systemd-machined[215836]: Machine qemu-64-instance-00000091 terminated.
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.087 250273 INFO nova.virt.libvirt.driver [-] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Instance destroyed successfully.#033[00m
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.088 250273 DEBUG nova.objects.instance [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lazy-loading 'resources' on Instance uuid 1d70a2c0-b633-4e56-9f11-ae40749783be obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:18:09 np0005593232 neutron-haproxy-ovnmeta-41ea2617-7043-474b-913f-7e235af58db8[346592]: [NOTICE]   (346596) : haproxy version is 2.8.14-c23fe91
Jan 23 05:18:09 np0005593232 neutron-haproxy-ovnmeta-41ea2617-7043-474b-913f-7e235af58db8[346592]: [NOTICE]   (346596) : path to executable is /usr/sbin/haproxy
Jan 23 05:18:09 np0005593232 neutron-haproxy-ovnmeta-41ea2617-7043-474b-913f-7e235af58db8[346592]: [WARNING]  (346596) : Exiting Master process...
Jan 23 05:18:09 np0005593232 neutron-haproxy-ovnmeta-41ea2617-7043-474b-913f-7e235af58db8[346592]: [ALERT]    (346596) : Current worker (346598) exited with code 143 (Terminated)
Jan 23 05:18:09 np0005593232 neutron-haproxy-ovnmeta-41ea2617-7043-474b-913f-7e235af58db8[346592]: [WARNING]  (346596) : All workers exited. Exiting... (0)
Jan 23 05:18:09 np0005593232 systemd[1]: libpod-66a9c75a54bacbb7939337b2c0fea2e7f777a25c6cd7489e53987ef80b41bc5f.scope: Deactivated successfully.
Jan 23 05:18:09 np0005593232 podman[346789]: 2026-01-23 10:18:09.123156697 +0000 UTC m=+0.049569399 container died 66a9c75a54bacbb7939337b2c0fea2e7f777a25c6cd7489e53987ef80b41bc5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41ea2617-7043-474b-913f-7e235af58db8, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 05:18:09 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-66a9c75a54bacbb7939337b2c0fea2e7f777a25c6cd7489e53987ef80b41bc5f-userdata-shm.mount: Deactivated successfully.
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.163 250273 DEBUG nova.virt.libvirt.vif [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:17:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-253961316',display_name='tempest-TestNetworkBasicOps-server-253961316',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-253961316',id=145,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDuTZsDX8Yidlcljee+mWcgr/8w1jex8qpbIhyaFbeVITtW45I3zURrpkl5L9QywgrlmJPDxvtYP7jMLWz49tvDv/cpi2iYtov3Za6bDONBt6jnbOwCbkkb5ok8znlV8Bw==',key_name='tempest-TestNetworkBasicOps-1667433613',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:17:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='98c94577fcdb4c3d893898ede79ea2d4',ramdisk_id='',reservation_id='r-u0ld2fbq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-789276745',owner_user_name='tempest-TestNetworkBasicOps-789276745-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:17:45Z,user_data=None,user_id='60291ce86b6946629a2e48f6680312cb',uuid=1d70a2c0-b633-4e56-9f11-ae40749783be,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "776b152a-2675-4fba-af36-cb34bf1e20ed", "address": "fa:16:3e:15:11:c6", "network": {"id": "41ea2617-7043-474b-913f-7e235af58db8", "bridge": "br-int", "label": "tempest-network-smoke--887500891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap776b152a-26", "ovs_interfaceid": "776b152a-2675-4fba-af36-cb34bf1e20ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.165 250273 DEBUG nova.network.os_vif_util [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converting VIF {"id": "776b152a-2675-4fba-af36-cb34bf1e20ed", "address": "fa:16:3e:15:11:c6", "network": {"id": "41ea2617-7043-474b-913f-7e235af58db8", "bridge": "br-int", "label": "tempest-network-smoke--887500891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap776b152a-26", "ovs_interfaceid": "776b152a-2675-4fba-af36-cb34bf1e20ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.167 250273 DEBUG nova.network.os_vif_util [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:15:11:c6,bridge_name='br-int',has_traffic_filtering=True,id=776b152a-2675-4fba-af36-cb34bf1e20ed,network=Network(41ea2617-7043-474b-913f-7e235af58db8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap776b152a-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.167 250273 DEBUG os_vif [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:11:c6,bridge_name='br-int',has_traffic_filtering=True,id=776b152a-2675-4fba-af36-cb34bf1e20ed,network=Network(41ea2617-7043-474b-913f-7e235af58db8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap776b152a-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:18:09 np0005593232 systemd[1]: var-lib-containers-storage-overlay-dbde6e351e8bfd7724b3260f1b1beb4f1bad61ace4eaba4d296b2291ed90daf7-merged.mount: Deactivated successfully.
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.170 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.170 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap776b152a-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.172 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.174 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:09 np0005593232 podman[346789]: 2026-01-23 10:18:09.176100141 +0000 UTC m=+0.102512843 container cleanup 66a9c75a54bacbb7939337b2c0fea2e7f777a25c6cd7489e53987ef80b41bc5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41ea2617-7043-474b-913f-7e235af58db8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.176 250273 INFO os_vif [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:15:11:c6,bridge_name='br-int',has_traffic_filtering=True,id=776b152a-2675-4fba-af36-cb34bf1e20ed,network=Network(41ea2617-7043-474b-913f-7e235af58db8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap776b152a-26')#033[00m
Jan 23 05:18:09 np0005593232 systemd[1]: libpod-conmon-66a9c75a54bacbb7939337b2c0fea2e7f777a25c6cd7489e53987ef80b41bc5f.scope: Deactivated successfully.
Jan 23 05:18:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:09.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:09 np0005593232 podman[346837]: 2026-01-23 10:18:09.236279021 +0000 UTC m=+0.038759072 container remove 66a9c75a54bacbb7939337b2c0fea2e7f777a25c6cd7489e53987ef80b41bc5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41ea2617-7043-474b-913f-7e235af58db8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:18:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:09.243 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[038a4e25-5c7d-4fa9-a4f5-a50864e26ddb]: (4, ('Fri Jan 23 10:18:09 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-41ea2617-7043-474b-913f-7e235af58db8 (66a9c75a54bacbb7939337b2c0fea2e7f777a25c6cd7489e53987ef80b41bc5f)\n66a9c75a54bacbb7939337b2c0fea2e7f777a25c6cd7489e53987ef80b41bc5f\nFri Jan 23 10:18:09 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-41ea2617-7043-474b-913f-7e235af58db8 (66a9c75a54bacbb7939337b2c0fea2e7f777a25c6cd7489e53987ef80b41bc5f)\n66a9c75a54bacbb7939337b2c0fea2e7f777a25c6cd7489e53987ef80b41bc5f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:09.245 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[804db336-068b-4b1f-9e91-9b74eb58fb36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:09.246 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41ea2617-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:18:09 np0005593232 kernel: tap41ea2617-70: left promiscuous mode
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.248 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.263 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:09.268 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[fa5fbff9-39cb-42b2-80d7-e6780bbe995c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:09.286 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ec18a45b-afa1-4118-bb7f-eb08e9135215]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:09.288 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[db9a37c2-0793-4fb8-80b0-705128268fba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:09.304 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e1bd1138-b3f2-4fa1-b519-42c8e86bdb65]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 737705, 'reachable_time': 35811, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346863, 'error': None, 'target': 'ovnmeta-41ea2617-7043-474b-913f-7e235af58db8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:09 np0005593232 systemd[1]: run-netns-ovnmeta\x2d41ea2617\x2d7043\x2d474b\x2d913f\x2d7e235af58db8.mount: Deactivated successfully.
Jan 23 05:18:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:09.309 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-41ea2617-7043-474b-913f-7e235af58db8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:18:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:09.310 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[0e569a66-ec45-4302-bf5f-393ceef1e0e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.445 250273 DEBUG oslo_concurrency.lockutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "96b953f7-8990-4761-81e9-d93cee7240dc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.445 250273 DEBUG oslo_concurrency.lockutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "96b953f7-8990-4761-81e9-d93cee7240dc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.485 250273 DEBUG nova.compute.manager [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:18:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2656: 321 pgs: 321 active+clean; 480 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 3.8 MiB/s wr, 176 op/s
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.541 250273 INFO nova.virt.libvirt.driver [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Deleting instance files /var/lib/nova/instances/1d70a2c0-b633-4e56-9f11-ae40749783be_del#033[00m
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.542 250273 INFO nova.virt.libvirt.driver [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Deletion of /var/lib/nova/instances/1d70a2c0-b633-4e56-9f11-ae40749783be_del complete#033[00m
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.628 250273 INFO nova.compute.manager [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.628 250273 DEBUG oslo.service.loopingcall [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.629 250273 DEBUG nova.compute.manager [-] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.629 250273 DEBUG nova.network.neutron [-] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.646 250273 DEBUG oslo_concurrency.lockutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.647 250273 DEBUG oslo_concurrency.lockutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.659 250273 DEBUG nova.virt.hardware [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:18:09 np0005593232 nova_compute[250269]: 2026-01-23 10:18:09.659 250273 INFO nova.compute.claims [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:18:10 np0005593232 nova_compute[250269]: 2026-01-23 10:18:10.317 250273 DEBUG oslo_concurrency.processutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:18:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:18:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:10.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:18:10 np0005593232 nova_compute[250269]: 2026-01-23 10:18:10.700 250273 DEBUG nova.compute.manager [req-91aadd46-eef1-4a15-b132-d17c00ee7183 req-b6d047e7-4d37-4e3f-8601-270ffb9504ea 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Received event network-vif-unplugged-776b152a-2675-4fba-af36-cb34bf1e20ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:18:10 np0005593232 nova_compute[250269]: 2026-01-23 10:18:10.700 250273 DEBUG oslo_concurrency.lockutils [req-91aadd46-eef1-4a15-b132-d17c00ee7183 req-b6d047e7-4d37-4e3f-8601-270ffb9504ea 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1d70a2c0-b633-4e56-9f11-ae40749783be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:10 np0005593232 nova_compute[250269]: 2026-01-23 10:18:10.700 250273 DEBUG oslo_concurrency.lockutils [req-91aadd46-eef1-4a15-b132-d17c00ee7183 req-b6d047e7-4d37-4e3f-8601-270ffb9504ea 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1d70a2c0-b633-4e56-9f11-ae40749783be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:10 np0005593232 nova_compute[250269]: 2026-01-23 10:18:10.701 250273 DEBUG oslo_concurrency.lockutils [req-91aadd46-eef1-4a15-b132-d17c00ee7183 req-b6d047e7-4d37-4e3f-8601-270ffb9504ea 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1d70a2c0-b633-4e56-9f11-ae40749783be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:10 np0005593232 nova_compute[250269]: 2026-01-23 10:18:10.701 250273 DEBUG nova.compute.manager [req-91aadd46-eef1-4a15-b132-d17c00ee7183 req-b6d047e7-4d37-4e3f-8601-270ffb9504ea 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] No waiting events found dispatching network-vif-unplugged-776b152a-2675-4fba-af36-cb34bf1e20ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:18:10 np0005593232 nova_compute[250269]: 2026-01-23 10:18:10.701 250273 DEBUG nova.compute.manager [req-91aadd46-eef1-4a15-b132-d17c00ee7183 req-b6d047e7-4d37-4e3f-8601-270ffb9504ea 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Received event network-vif-unplugged-776b152a-2675-4fba-af36-cb34bf1e20ed for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:18:10 np0005593232 nova_compute[250269]: 2026-01-23 10:18:10.701 250273 DEBUG nova.compute.manager [req-91aadd46-eef1-4a15-b132-d17c00ee7183 req-b6d047e7-4d37-4e3f-8601-270ffb9504ea 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Received event network-vif-plugged-776b152a-2675-4fba-af36-cb34bf1e20ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:18:10 np0005593232 nova_compute[250269]: 2026-01-23 10:18:10.702 250273 DEBUG oslo_concurrency.lockutils [req-91aadd46-eef1-4a15-b132-d17c00ee7183 req-b6d047e7-4d37-4e3f-8601-270ffb9504ea 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "1d70a2c0-b633-4e56-9f11-ae40749783be-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:10 np0005593232 nova_compute[250269]: 2026-01-23 10:18:10.702 250273 DEBUG oslo_concurrency.lockutils [req-91aadd46-eef1-4a15-b132-d17c00ee7183 req-b6d047e7-4d37-4e3f-8601-270ffb9504ea 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1d70a2c0-b633-4e56-9f11-ae40749783be-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:10 np0005593232 nova_compute[250269]: 2026-01-23 10:18:10.702 250273 DEBUG oslo_concurrency.lockutils [req-91aadd46-eef1-4a15-b132-d17c00ee7183 req-b6d047e7-4d37-4e3f-8601-270ffb9504ea 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "1d70a2c0-b633-4e56-9f11-ae40749783be-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:10 np0005593232 nova_compute[250269]: 2026-01-23 10:18:10.703 250273 DEBUG nova.compute.manager [req-91aadd46-eef1-4a15-b132-d17c00ee7183 req-b6d047e7-4d37-4e3f-8601-270ffb9504ea 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] No waiting events found dispatching network-vif-plugged-776b152a-2675-4fba-af36-cb34bf1e20ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:18:10 np0005593232 nova_compute[250269]: 2026-01-23 10:18:10.703 250273 WARNING nova.compute.manager [req-91aadd46-eef1-4a15-b132-d17c00ee7183 req-b6d047e7-4d37-4e3f-8601-270ffb9504ea 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Received unexpected event network-vif-plugged-776b152a-2675-4fba-af36-cb34bf1e20ed for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:18:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:18:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2023694854' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:18:10 np0005593232 nova_compute[250269]: 2026-01-23 10:18:10.783 250273 DEBUG oslo_concurrency.processutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:18:10 np0005593232 nova_compute[250269]: 2026-01-23 10:18:10.789 250273 DEBUG nova.compute.provider_tree [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:18:10 np0005593232 nova_compute[250269]: 2026-01-23 10:18:10.839 250273 DEBUG nova.scheduler.client.report [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:18:10 np0005593232 nova_compute[250269]: 2026-01-23 10:18:10.888 250273 DEBUG oslo_concurrency.lockutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:10 np0005593232 nova_compute[250269]: 2026-01-23 10:18:10.889 250273 DEBUG nova.compute.manager [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:18:10 np0005593232 nova_compute[250269]: 2026-01-23 10:18:10.944 250273 DEBUG nova.compute.manager [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:18:10 np0005593232 nova_compute[250269]: 2026-01-23 10:18:10.945 250273 DEBUG nova.network.neutron [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:18:10 np0005593232 nova_compute[250269]: 2026-01-23 10:18:10.983 250273 INFO nova.virt.libvirt.driver [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:18:11 np0005593232 nova_compute[250269]: 2026-01-23 10:18:11.003 250273 DEBUG nova.compute.manager [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:18:11 np0005593232 nova_compute[250269]: 2026-01-23 10:18:11.161 250273 DEBUG nova.compute.manager [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 05:18:11 np0005593232 nova_compute[250269]: 2026-01-23 10:18:11.162 250273 DEBUG nova.virt.libvirt.driver [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:18:11 np0005593232 nova_compute[250269]: 2026-01-23 10:18:11.162 250273 INFO nova.virt.libvirt.driver [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Creating image(s)#033[00m
Jan 23 05:18:11 np0005593232 nova_compute[250269]: 2026-01-23 10:18:11.187 250273 DEBUG nova.storage.rbd_utils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image 96b953f7-8990-4761-81e9-d93cee7240dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:18:11 np0005593232 nova_compute[250269]: 2026-01-23 10:18:11.212 250273 DEBUG nova.storage.rbd_utils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image 96b953f7-8990-4761-81e9-d93cee7240dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:18:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:11.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:11 np0005593232 nova_compute[250269]: 2026-01-23 10:18:11.241 250273 DEBUG nova.storage.rbd_utils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image 96b953f7-8990-4761-81e9-d93cee7240dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:18:11 np0005593232 nova_compute[250269]: 2026-01-23 10:18:11.244 250273 DEBUG oslo_concurrency.processutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:18:11 np0005593232 nova_compute[250269]: 2026-01-23 10:18:11.307 250273 DEBUG oslo_concurrency.processutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:18:11 np0005593232 nova_compute[250269]: 2026-01-23 10:18:11.308 250273 DEBUG oslo_concurrency.lockutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:11 np0005593232 nova_compute[250269]: 2026-01-23 10:18:11.309 250273 DEBUG oslo_concurrency.lockutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:11 np0005593232 nova_compute[250269]: 2026-01-23 10:18:11.309 250273 DEBUG oslo_concurrency.lockutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:11 np0005593232 nova_compute[250269]: 2026-01-23 10:18:11.332 250273 DEBUG nova.storage.rbd_utils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image 96b953f7-8990-4761-81e9-d93cee7240dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:18:11 np0005593232 nova_compute[250269]: 2026-01-23 10:18:11.335 250273 DEBUG oslo_concurrency.processutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 96b953f7-8990-4761-81e9-d93cee7240dc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:18:11 np0005593232 nova_compute[250269]: 2026-01-23 10:18:11.366 250273 DEBUG nova.policy [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ec99ae7c69d0438280441e0434374cbf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c59351a1b59c4cc9ad389dff900935f2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:18:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2657: 321 pgs: 321 active+clean; 480 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Jan 23 05:18:11 np0005593232 nova_compute[250269]: 2026-01-23 10:18:11.617 250273 DEBUG oslo_concurrency.processutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 96b953f7-8990-4761-81e9-d93cee7240dc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:18:11 np0005593232 nova_compute[250269]: 2026-01-23 10:18:11.683 250273 DEBUG nova.storage.rbd_utils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] resizing rbd image 96b953f7-8990-4761-81e9-d93cee7240dc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 05:18:11 np0005593232 nova_compute[250269]: 2026-01-23 10:18:11.819 250273 DEBUG nova.objects.instance [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lazy-loading 'migration_context' on Instance uuid 96b953f7-8990-4761-81e9-d93cee7240dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:18:11 np0005593232 nova_compute[250269]: 2026-01-23 10:18:11.849 250273 DEBUG nova.virt.libvirt.driver [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 05:18:11 np0005593232 nova_compute[250269]: 2026-01-23 10:18:11.849 250273 DEBUG nova.virt.libvirt.driver [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Ensure instance console log exists: /var/lib/nova/instances/96b953f7-8990-4761-81e9-d93cee7240dc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:18:11 np0005593232 nova_compute[250269]: 2026-01-23 10:18:11.850 250273 DEBUG oslo_concurrency.lockutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:11 np0005593232 nova_compute[250269]: 2026-01-23 10:18:11.850 250273 DEBUG oslo_concurrency.lockutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:11 np0005593232 nova_compute[250269]: 2026-01-23 10:18:11.851 250273 DEBUG oslo_concurrency.lockutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:18:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:18:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:12.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:18:12 np0005593232 nova_compute[250269]: 2026-01-23 10:18:12.682 250273 DEBUG nova.network.neutron [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Successfully created port: e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:18:12 np0005593232 nova_compute[250269]: 2026-01-23 10:18:12.792 250273 DEBUG nova.network.neutron [-] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:18:12 np0005593232 nova_compute[250269]: 2026-01-23 10:18:12.830 250273 INFO nova.compute.manager [-] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Took 3.20 seconds to deallocate network for instance.#033[00m
Jan 23 05:18:12 np0005593232 nova_compute[250269]: 2026-01-23 10:18:12.899 250273 DEBUG nova.network.neutron [req-546e1a26-5794-45cf-b196-53d316e764bc req-1de9a7f9-5012-43c9-8202-0802c9d2a45f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Updated VIF entry in instance network info cache for port 776b152a-2675-4fba-af36-cb34bf1e20ed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:18:12 np0005593232 nova_compute[250269]: 2026-01-23 10:18:12.900 250273 DEBUG nova.network.neutron [req-546e1a26-5794-45cf-b196-53d316e764bc req-1de9a7f9-5012-43c9-8202-0802c9d2a45f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Updating instance_info_cache with network_info: [{"id": "776b152a-2675-4fba-af36-cb34bf1e20ed", "address": "fa:16:3e:15:11:c6", "network": {"id": "41ea2617-7043-474b-913f-7e235af58db8", "bridge": "br-int", "label": "tempest-network-smoke--887500891", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap776b152a-26", "ovs_interfaceid": "776b152a-2675-4fba-af36-cb34bf1e20ed", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:18:12 np0005593232 nova_compute[250269]: 2026-01-23 10:18:12.912 250273 DEBUG oslo_concurrency.lockutils [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:12 np0005593232 nova_compute[250269]: 2026-01-23 10:18:12.912 250273 DEBUG oslo_concurrency.lockutils [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:12 np0005593232 nova_compute[250269]: 2026-01-23 10:18:12.924 250273 DEBUG oslo_concurrency.lockutils [req-546e1a26-5794-45cf-b196-53d316e764bc req-1de9a7f9-5012-43c9-8202-0802c9d2a45f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-1d70a2c0-b633-4e56-9f11-ae40749783be" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:18:13 np0005593232 nova_compute[250269]: 2026-01-23 10:18:13.071 250273 DEBUG oslo_concurrency.processutils [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:18:13 np0005593232 nova_compute[250269]: 2026-01-23 10:18:13.100 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:13.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:13 np0005593232 nova_compute[250269]: 2026-01-23 10:18:13.270 250273 DEBUG nova.compute.manager [req-615797ad-b1ba-4a03-9775-28f15d7532b4 req-becb769c-4aae-4438-ad7b-50557ecb5605 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Received event network-vif-deleted-776b152a-2675-4fba-af36-cb34bf1e20ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:18:13 np0005593232 nova_compute[250269]: 2026-01-23 10:18:13.271 250273 INFO nova.compute.manager [req-615797ad-b1ba-4a03-9775-28f15d7532b4 req-becb769c-4aae-4438-ad7b-50557ecb5605 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Neutron deleted interface 776b152a-2675-4fba-af36-cb34bf1e20ed; detaching it from the instance and deleting it from the info cache#033[00m
Jan 23 05:18:13 np0005593232 nova_compute[250269]: 2026-01-23 10:18:13.271 250273 DEBUG nova.network.neutron [req-615797ad-b1ba-4a03-9775-28f15d7532b4 req-becb769c-4aae-4438-ad7b-50557ecb5605 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:18:13 np0005593232 nova_compute[250269]: 2026-01-23 10:18:13.299 250273 DEBUG nova.compute.manager [req-615797ad-b1ba-4a03-9775-28f15d7532b4 req-becb769c-4aae-4438-ad7b-50557ecb5605 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Detach interface failed, port_id=776b152a-2675-4fba-af36-cb34bf1e20ed, reason: Instance 1d70a2c0-b633-4e56-9f11-ae40749783be could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 23 05:18:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2658: 321 pgs: 321 active+clean; 477 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 5.4 MiB/s wr, 206 op/s
Jan 23 05:18:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:18:13 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1528295005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:18:13 np0005593232 nova_compute[250269]: 2026-01-23 10:18:13.530 250273 DEBUG oslo_concurrency.processutils [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:18:13 np0005593232 nova_compute[250269]: 2026-01-23 10:18:13.537 250273 DEBUG nova.compute.provider_tree [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:18:13 np0005593232 nova_compute[250269]: 2026-01-23 10:18:13.560 250273 DEBUG nova.scheduler.client.report [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:18:13 np0005593232 nova_compute[250269]: 2026-01-23 10:18:13.620 250273 DEBUG oslo_concurrency.lockutils [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:13 np0005593232 nova_compute[250269]: 2026-01-23 10:18:13.662 250273 INFO nova.scheduler.client.report [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Deleted allocations for instance 1d70a2c0-b633-4e56-9f11-ae40749783be#033[00m
Jan 23 05:18:13 np0005593232 nova_compute[250269]: 2026-01-23 10:18:13.747 250273 DEBUG oslo_concurrency.lockutils [None req-b0ba67dc-2074-4f25-9655-f34ba0ec71ab 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "1d70a2c0-b633-4e56-9f11-ae40749783be" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.899s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:14 np0005593232 nova_compute[250269]: 2026-01-23 10:18:14.174 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:14 np0005593232 nova_compute[250269]: 2026-01-23 10:18:14.635 250273 DEBUG nova.network.neutron [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Successfully updated port: e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:18:14 np0005593232 nova_compute[250269]: 2026-01-23 10:18:14.660 250273 DEBUG oslo_concurrency.lockutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "refresh_cache-96b953f7-8990-4761-81e9-d93cee7240dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:18:14 np0005593232 nova_compute[250269]: 2026-01-23 10:18:14.660 250273 DEBUG oslo_concurrency.lockutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquired lock "refresh_cache-96b953f7-8990-4761-81e9-d93cee7240dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:18:14 np0005593232 nova_compute[250269]: 2026-01-23 10:18:14.660 250273 DEBUG nova.network.neutron [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:18:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:18:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:14.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:18:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:18:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:15.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:18:15 np0005593232 nova_compute[250269]: 2026-01-23 10:18:15.307 250273 DEBUG nova.network.neutron [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:18:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2659: 321 pgs: 321 active+clean; 491 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 5.6 MiB/s wr, 178 op/s
Jan 23 05:18:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:16.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:16 np0005593232 nova_compute[250269]: 2026-01-23 10:18:16.879 250273 DEBUG nova.compute.manager [req-5d8f30b3-3d7f-4783-af23-5db834ea5d0d req-9d23ecc9-50be-43d0-b37f-2e534b219b57 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Received event network-changed-e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:18:16 np0005593232 nova_compute[250269]: 2026-01-23 10:18:16.880 250273 DEBUG nova.compute.manager [req-5d8f30b3-3d7f-4783-af23-5db834ea5d0d req-9d23ecc9-50be-43d0-b37f-2e534b219b57 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Refreshing instance network info cache due to event network-changed-e3abfed6-0093-41f6-95f6-e5c9c0b94fc4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:18:16 np0005593232 nova_compute[250269]: 2026-01-23 10:18:16.880 250273 DEBUG oslo_concurrency.lockutils [req-5d8f30b3-3d7f-4783-af23-5db834ea5d0d req-9d23ecc9-50be-43d0-b37f-2e534b219b57 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-96b953f7-8990-4761-81e9-d93cee7240dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:18:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:18:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:17.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2660: 321 pgs: 321 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 743 KiB/s rd, 6.1 MiB/s wr, 185 op/s
Jan 23 05:18:17 np0005593232 nova_compute[250269]: 2026-01-23 10:18:17.591 250273 DEBUG oslo_concurrency.lockutils [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Acquiring lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:17 np0005593232 nova_compute[250269]: 2026-01-23 10:18:17.592 250273 DEBUG oslo_concurrency.lockutils [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:17 np0005593232 nova_compute[250269]: 2026-01-23 10:18:17.593 250273 DEBUG oslo_concurrency.lockutils [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Acquiring lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:17 np0005593232 nova_compute[250269]: 2026-01-23 10:18:17.593 250273 DEBUG oslo_concurrency.lockutils [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:17 np0005593232 nova_compute[250269]: 2026-01-23 10:18:17.594 250273 DEBUG oslo_concurrency.lockutils [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:17 np0005593232 nova_compute[250269]: 2026-01-23 10:18:17.596 250273 INFO nova.compute.manager [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Terminating instance#033[00m
Jan 23 05:18:17 np0005593232 nova_compute[250269]: 2026-01-23 10:18:17.599 250273 DEBUG nova.compute.manager [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:18:17 np0005593232 kernel: tap4a04244c-32 (unregistering): left promiscuous mode
Jan 23 05:18:17 np0005593232 NetworkManager[49057]: <info>  [1769163497.6846] device (tap4a04244c-32): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:18:17 np0005593232 nova_compute[250269]: 2026-01-23 10:18:17.737 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:17 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:17Z|00557|binding|INFO|Releasing lport 4a04244c-3270-4ff3-ad30-52e80e7db513 from this chassis (sb_readonly=0)
Jan 23 05:18:17 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:17Z|00558|binding|INFO|Setting lport 4a04244c-3270-4ff3-ad30-52e80e7db513 down in Southbound
Jan 23 05:18:17 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:17Z|00559|binding|INFO|Removing iface tap4a04244c-32 ovn-installed in OVS
Jan 23 05:18:17 np0005593232 nova_compute[250269]: 2026-01-23 10:18:17.741 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:17.746 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c0:68:91 10.100.0.3'], port_security=['fa:16:3e:c0:68:91 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ccd07f55-529f-4dbb-989c-2cdbdd393a0b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f98d79de-4a23-4f29-9848-c5d4c5683a5d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ae621f21a8e438fb95152309b38cee5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '3b0a0b41-45a8-4582-a4d2-a9aff1f1a18c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e5888498-07d6-4c96-95ee-546974eebd82, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=4a04244c-3270-4ff3-ad30-52e80e7db513) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:18:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:17.749 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 4a04244c-3270-4ff3-ad30-52e80e7db513 in datapath f98d79de-4a23-4f29-9848-c5d4c5683a5d unbound from our chassis#033[00m
Jan 23 05:18:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:17.751 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f98d79de-4a23-4f29-9848-c5d4c5683a5d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:18:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:17.753 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[35582459-f99a-4066-af8f-4e4dcec02f83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:17.754 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f98d79de-4a23-4f29-9848-c5d4c5683a5d namespace which is not needed anymore#033[00m
Jan 23 05:18:17 np0005593232 nova_compute[250269]: 2026-01-23 10:18:17.771 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:17 np0005593232 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Jan 23 05:18:17 np0005593232 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d0000008e.scope: Consumed 22.609s CPU time.
Jan 23 05:18:17 np0005593232 systemd-machined[215836]: Machine qemu-62-instance-0000008e terminated.
Jan 23 05:18:17 np0005593232 kernel: tap4a04244c-32: entered promiscuous mode
Jan 23 05:18:17 np0005593232 kernel: tap4a04244c-32 (unregistering): left promiscuous mode
Jan 23 05:18:17 np0005593232 NetworkManager[49057]: <info>  [1769163497.8312] manager: (tap4a04244c-32): new Tun device (/org/freedesktop/NetworkManager/Devices/264)
Jan 23 05:18:17 np0005593232 nova_compute[250269]: 2026-01-23 10:18:17.840 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:17 np0005593232 nova_compute[250269]: 2026-01-23 10:18:17.849 250273 INFO nova.virt.libvirt.driver [-] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Instance destroyed successfully.#033[00m
Jan 23 05:18:17 np0005593232 nova_compute[250269]: 2026-01-23 10:18:17.849 250273 DEBUG nova.objects.instance [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lazy-loading 'resources' on Instance uuid ccd07f55-529f-4dbb-989c-2cdbdd393a0b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:18:17 np0005593232 nova_compute[250269]: 2026-01-23 10:18:17.871 250273 DEBUG nova.virt.libvirt.vif [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:14:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-TestInstancesWithCinderVolumes-server-649765900',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testinstanceswithcindervolumes-server-649765900',id=142,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPuMczToXGmZUNyxG5fVGeV6xaoJVOpQ6Lh9dx5t6v22bv4xalVGQLUjYNEpg7ajkuOU/WHiNfvMhffjZHY/YojnQQYOX+q0GTa9+NPbkGDFf1XELa+vTNvIe6ZV8CwP9g==',key_name='tempest-TestInstancesWithCinderVolumes-232096272',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:15:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ae621f21a8e438fb95152309b38cee5',ramdisk_id='',reservation_id='r-zfxoc5fn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',
image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestInstancesWithCinderVolumes-565485208',owner_user_name='tempest-TestInstancesWithCinderVolumes-565485208-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:15:24Z,user_data=None,user_id='95ac13194f0940128d42af3d45d130fa',uuid=ccd07f55-529f-4dbb-989c-2cdbdd393a0b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4a04244c-3270-4ff3-ad30-52e80e7db513", "address": "fa:16:3e:c0:68:91", "network": {"id": "f98d79de-4a23-4f29-9848-c5d4c5683a5d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1507431135-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ae621f21a8e438fb95152309b38cee5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a04244c-32", "ovs_interfaceid": "4a04244c-3270-4ff3-ad30-52e80e7db513", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:18:17 np0005593232 nova_compute[250269]: 2026-01-23 10:18:17.872 250273 DEBUG nova.network.os_vif_util [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Converting VIF {"id": "4a04244c-3270-4ff3-ad30-52e80e7db513", "address": "fa:16:3e:c0:68:91", "network": {"id": "f98d79de-4a23-4f29-9848-c5d4c5683a5d", "bridge": "br-int", "label": "tempest-TestInstancesWithCinderVolumes-1507431135-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ae621f21a8e438fb95152309b38cee5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4a04244c-32", "ovs_interfaceid": "4a04244c-3270-4ff3-ad30-52e80e7db513", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:18:17 np0005593232 nova_compute[250269]: 2026-01-23 10:18:17.873 250273 DEBUG nova.network.os_vif_util [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c0:68:91,bridge_name='br-int',has_traffic_filtering=True,id=4a04244c-3270-4ff3-ad30-52e80e7db513,network=Network(f98d79de-4a23-4f29-9848-c5d4c5683a5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a04244c-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:18:17 np0005593232 nova_compute[250269]: 2026-01-23 10:18:17.873 250273 DEBUG os_vif [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:68:91,bridge_name='br-int',has_traffic_filtering=True,id=4a04244c-3270-4ff3-ad30-52e80e7db513,network=Network(f98d79de-4a23-4f29-9848-c5d4c5683a5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a04244c-32') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:18:17 np0005593232 nova_compute[250269]: 2026-01-23 10:18:17.875 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:17 np0005593232 nova_compute[250269]: 2026-01-23 10:18:17.875 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a04244c-32, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:18:17 np0005593232 nova_compute[250269]: 2026-01-23 10:18:17.877 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:17 np0005593232 nova_compute[250269]: 2026-01-23 10:18:17.878 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:17 np0005593232 nova_compute[250269]: 2026-01-23 10:18:17.881 250273 INFO os_vif [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c0:68:91,bridge_name='br-int',has_traffic_filtering=True,id=4a04244c-3270-4ff3-ad30-52e80e7db513,network=Network(f98d79de-4a23-4f29-9848-c5d4c5683a5d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4a04244c-32')#033[00m
Jan 23 05:18:17 np0005593232 neutron-haproxy-ovnmeta-f98d79de-4a23-4f29-9848-c5d4c5683a5d[343041]: [NOTICE]   (343045) : haproxy version is 2.8.14-c23fe91
Jan 23 05:18:17 np0005593232 neutron-haproxy-ovnmeta-f98d79de-4a23-4f29-9848-c5d4c5683a5d[343041]: [NOTICE]   (343045) : path to executable is /usr/sbin/haproxy
Jan 23 05:18:17 np0005593232 neutron-haproxy-ovnmeta-f98d79de-4a23-4f29-9848-c5d4c5683a5d[343041]: [WARNING]  (343045) : Exiting Master process...
Jan 23 05:18:17 np0005593232 neutron-haproxy-ovnmeta-f98d79de-4a23-4f29-9848-c5d4c5683a5d[343041]: [WARNING]  (343045) : Exiting Master process...
Jan 23 05:18:17 np0005593232 neutron-haproxy-ovnmeta-f98d79de-4a23-4f29-9848-c5d4c5683a5d[343041]: [ALERT]    (343045) : Current worker (343062) exited with code 143 (Terminated)
Jan 23 05:18:17 np0005593232 neutron-haproxy-ovnmeta-f98d79de-4a23-4f29-9848-c5d4c5683a5d[343041]: [WARNING]  (343045) : All workers exited. Exiting... (0)
Jan 23 05:18:17 np0005593232 systemd[1]: libpod-32da8065e7e6d658a98dc611a644f50f8974092becf5d5ce509e2c3289babaf1.scope: Deactivated successfully.
Jan 23 05:18:17 np0005593232 podman[347154]: 2026-01-23 10:18:17.96637333 +0000 UTC m=+0.065328317 container died 32da8065e7e6d658a98dc611a644f50f8974092becf5d5ce509e2c3289babaf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f98d79de-4a23-4f29-9848-c5d4c5683a5d, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 05:18:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-32da8065e7e6d658a98dc611a644f50f8974092becf5d5ce509e2c3289babaf1-userdata-shm.mount: Deactivated successfully.
Jan 23 05:18:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1e49c1afd44b824360f52b7e0ebbe39e0de112feb2b10c678b305fe509c52e75-merged.mount: Deactivated successfully.
Jan 23 05:18:18 np0005593232 podman[347154]: 2026-01-23 10:18:18.001567161 +0000 UTC m=+0.100522118 container cleanup 32da8065e7e6d658a98dc611a644f50f8974092becf5d5ce509e2c3289babaf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f98d79de-4a23-4f29-9848-c5d4c5683a5d, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:18:18 np0005593232 systemd[1]: libpod-conmon-32da8065e7e6d658a98dc611a644f50f8974092becf5d5ce509e2c3289babaf1.scope: Deactivated successfully.
Jan 23 05:18:18 np0005593232 podman[347201]: 2026-01-23 10:18:18.064704904 +0000 UTC m=+0.042653602 container remove 32da8065e7e6d658a98dc611a644f50f8974092becf5d5ce509e2c3289babaf1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f98d79de-4a23-4f29-9848-c5d4c5683a5d, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.069 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:18.070 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a4516908-d215-4f29-afdc-b6b9ffb98c76]: (4, ('Fri Jan 23 10:18:17 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f98d79de-4a23-4f29-9848-c5d4c5683a5d (32da8065e7e6d658a98dc611a644f50f8974092becf5d5ce509e2c3289babaf1)\n32da8065e7e6d658a98dc611a644f50f8974092becf5d5ce509e2c3289babaf1\nFri Jan 23 10:18:18 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f98d79de-4a23-4f29-9848-c5d4c5683a5d (32da8065e7e6d658a98dc611a644f50f8974092becf5d5ce509e2c3289babaf1)\n32da8065e7e6d658a98dc611a644f50f8974092becf5d5ce509e2c3289babaf1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:18.072 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3fbd99d5-d36f-4fd5-92f6-56131aec1cd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:18.073 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf98d79de-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.075 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:18 np0005593232 kernel: tapf98d79de-40: left promiscuous mode
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.078 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:18.080 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8aa4a23a-6432-4921-846b-e2abe04ac697]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.092 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:18.094 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[93549f42-fc15-43e8-a0b2-d66ce0fb33af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:18.095 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[49d264f9-5abe-47c5-9275-b726e3ea15c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.107 250273 INFO nova.virt.libvirt.driver [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Deleting instance files /var/lib/nova/instances/ccd07f55-529f-4dbb-989c-2cdbdd393a0b_del#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.108 250273 INFO nova.virt.libvirt.driver [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Deletion of /var/lib/nova/instances/ccd07f55-529f-4dbb-989c-2cdbdd393a0b_del complete#033[00m
Jan 23 05:18:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:18.111 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ad15c722-f541-4e92-a7a3-270fc94527e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 722531, 'reachable_time': 39176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347219, 'error': None, 'target': 'ovnmeta-f98d79de-4a23-4f29-9848-c5d4c5683a5d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:18.115 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f98d79de-4a23-4f29-9848-c5d4c5683a5d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:18:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:18.115 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[4f36c751-6ae8-4f43-b0c5-26b793d08d5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:18 np0005593232 systemd[1]: run-netns-ovnmeta\x2df98d79de\x2d4a23\x2d4f29\x2d9848\x2dc5d4c5683a5d.mount: Deactivated successfully.
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.204 250273 INFO nova.compute.manager [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Took 0.60 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.204 250273 DEBUG oslo.service.loopingcall [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.205 250273 DEBUG nova.compute.manager [-] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.205 250273 DEBUG nova.network.neutron [-] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:18:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:18.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.903 250273 DEBUG nova.network.neutron [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Updating instance_info_cache with network_info: [{"id": "e3abfed6-0093-41f6-95f6-e5c9c0b94fc4", "address": "fa:16:3e:7d:dd:31", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3abfed6-00", "ovs_interfaceid": "e3abfed6-0093-41f6-95f6-e5c9c0b94fc4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.943 250273 DEBUG oslo_concurrency.lockutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Releasing lock "refresh_cache-96b953f7-8990-4761-81e9-d93cee7240dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.944 250273 DEBUG nova.compute.manager [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Instance network_info: |[{"id": "e3abfed6-0093-41f6-95f6-e5c9c0b94fc4", "address": "fa:16:3e:7d:dd:31", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3abfed6-00", "ovs_interfaceid": "e3abfed6-0093-41f6-95f6-e5c9c0b94fc4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.945 250273 DEBUG oslo_concurrency.lockutils [req-5d8f30b3-3d7f-4783-af23-5db834ea5d0d req-9d23ecc9-50be-43d0-b37f-2e534b219b57 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-96b953f7-8990-4761-81e9-d93cee7240dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.945 250273 DEBUG nova.network.neutron [req-5d8f30b3-3d7f-4783-af23-5db834ea5d0d req-9d23ecc9-50be-43d0-b37f-2e534b219b57 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Refreshing network info cache for port e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.951 250273 DEBUG nova.virt.libvirt.driver [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Start _get_guest_xml network_info=[{"id": "e3abfed6-0093-41f6-95f6-e5c9c0b94fc4", "address": "fa:16:3e:7d:dd:31", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3abfed6-00", "ovs_interfaceid": "e3abfed6-0093-41f6-95f6-e5c9c0b94fc4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.957 250273 WARNING nova.virt.libvirt.driver [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.963 250273 DEBUG nova.virt.libvirt.host [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.964 250273 DEBUG nova.virt.libvirt.host [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.970 250273 DEBUG nova.virt.libvirt.host [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.971 250273 DEBUG nova.virt.libvirt.host [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.973 250273 DEBUG nova.virt.libvirt.driver [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.974 250273 DEBUG nova.virt.hardware [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.975 250273 DEBUG nova.virt.hardware [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.975 250273 DEBUG nova.virt.hardware [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.976 250273 DEBUG nova.virt.hardware [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.977 250273 DEBUG nova.virt.hardware [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.977 250273 DEBUG nova.virt.hardware [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.978 250273 DEBUG nova.virt.hardware [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.979 250273 DEBUG nova.virt.hardware [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.979 250273 DEBUG nova.virt.hardware [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.980 250273 DEBUG nova.virt.hardware [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.981 250273 DEBUG nova.virt.hardware [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:18:18 np0005593232 nova_compute[250269]: 2026-01-23 10:18:18.986 250273 DEBUG oslo_concurrency.processutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:18:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:19.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:18:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4046235666' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.445 250273 DEBUG oslo_concurrency.processutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.469 250273 DEBUG nova.storage.rbd_utils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image 96b953f7-8990-4761-81e9-d93cee7240dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.473 250273 DEBUG oslo_concurrency.processutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:18:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2661: 321 pgs: 321 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 743 KiB/s rd, 6.0 MiB/s wr, 185 op/s
Jan 23 05:18:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:18:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/866125585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.916 250273 DEBUG oslo_concurrency.processutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.918 250273 DEBUG nova.virt.libvirt.vif [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:18:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1417985880',display_name='tempest-ServersTestJSON-server-1417985880',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1417985880',id=147,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c59351a1b59c4cc9ad389dff900935f2',ramdisk_id='',reservation_id='r-7l5yrd3o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1611255243',owner_user_name='tempest-ServersTestJSON-1611255243-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:18:11Z,user_data=None,user_id='ec99ae7c69d0438280441e0434374cbf',uuid=96b953f7-8990-4761-81e9-d93cee7240dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e3abfed6-0093-41f6-95f6-e5c9c0b94fc4", "address": "fa:16:3e:7d:dd:31", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3abfed6-00", "ovs_interfaceid": "e3abfed6-0093-41f6-95f6-e5c9c0b94fc4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.918 250273 DEBUG nova.network.os_vif_util [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Converting VIF {"id": "e3abfed6-0093-41f6-95f6-e5c9c0b94fc4", "address": "fa:16:3e:7d:dd:31", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3abfed6-00", "ovs_interfaceid": "e3abfed6-0093-41f6-95f6-e5c9c0b94fc4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.919 250273 DEBUG nova.network.os_vif_util [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:dd:31,bridge_name='br-int',has_traffic_filtering=True,id=e3abfed6-0093-41f6-95f6-e5c9c0b94fc4,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3abfed6-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.920 250273 DEBUG nova.objects.instance [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 96b953f7-8990-4761-81e9-d93cee7240dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.947 250273 DEBUG nova.virt.libvirt.driver [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:18:19 np0005593232 nova_compute[250269]:  <uuid>96b953f7-8990-4761-81e9-d93cee7240dc</uuid>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:  <name>instance-00000093</name>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServersTestJSON-server-1417985880</nova:name>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:18:18</nova:creationTime>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:18:19 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:        <nova:user uuid="ec99ae7c69d0438280441e0434374cbf">tempest-ServersTestJSON-1611255243-project-member</nova:user>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:        <nova:project uuid="c59351a1b59c4cc9ad389dff900935f2">tempest-ServersTestJSON-1611255243</nova:project>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:        <nova:port uuid="e3abfed6-0093-41f6-95f6-e5c9c0b94fc4">
Jan 23 05:18:19 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <entry name="serial">96b953f7-8990-4761-81e9-d93cee7240dc</entry>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <entry name="uuid">96b953f7-8990-4761-81e9-d93cee7240dc</entry>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/96b953f7-8990-4761-81e9-d93cee7240dc_disk">
Jan 23 05:18:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:18:19 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/96b953f7-8990-4761-81e9-d93cee7240dc_disk.config">
Jan 23 05:18:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:18:19 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:7d:dd:31"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <target dev="tape3abfed6-00"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/96b953f7-8990-4761-81e9-d93cee7240dc/console.log" append="off"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:18:19 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:18:19 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:18:19 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:18:19 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.949 250273 DEBUG nova.compute.manager [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Preparing to wait for external event network-vif-plugged-e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.949 250273 DEBUG oslo_concurrency.lockutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "96b953f7-8990-4761-81e9-d93cee7240dc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.949 250273 DEBUG oslo_concurrency.lockutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "96b953f7-8990-4761-81e9-d93cee7240dc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.949 250273 DEBUG oslo_concurrency.lockutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "96b953f7-8990-4761-81e9-d93cee7240dc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.950 250273 DEBUG nova.virt.libvirt.vif [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:18:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1417985880',display_name='tempest-ServersTestJSON-server-1417985880',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1417985880',id=147,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c59351a1b59c4cc9ad389dff900935f2',ramdisk_id='',reservation_id='r-7l5yrd3o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1611255243',owner_user_name='tempest-ServersTestJSON-1611255243-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:18:11Z,user_data=None,user_id='ec99ae7c69d0438280441e0434374cbf',uuid=96b953f7-8990-4761-81e9-d93cee7240dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e3abfed6-0093-41f6-95f6-e5c9c0b94fc4", "address": "fa:16:3e:7d:dd:31", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3abfed6-00", "ovs_interfaceid": "e3abfed6-0093-41f6-95f6-e5c9c0b94fc4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.950 250273 DEBUG nova.network.os_vif_util [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Converting VIF {"id": "e3abfed6-0093-41f6-95f6-e5c9c0b94fc4", "address": "fa:16:3e:7d:dd:31", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3abfed6-00", "ovs_interfaceid": "e3abfed6-0093-41f6-95f6-e5c9c0b94fc4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.951 250273 DEBUG nova.network.os_vif_util [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:dd:31,bridge_name='br-int',has_traffic_filtering=True,id=e3abfed6-0093-41f6-95f6-e5c9c0b94fc4,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3abfed6-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.951 250273 DEBUG os_vif [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:dd:31,bridge_name='br-int',has_traffic_filtering=True,id=e3abfed6-0093-41f6-95f6-e5c9c0b94fc4,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3abfed6-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.952 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.952 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.953 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.956 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.956 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3abfed6-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.957 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape3abfed6-00, col_values=(('external_ids', {'iface-id': 'e3abfed6-0093-41f6-95f6-e5c9c0b94fc4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:dd:31', 'vm-uuid': '96b953f7-8990-4761-81e9-d93cee7240dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:18:19 np0005593232 NetworkManager[49057]: <info>  [1769163499.9606] manager: (tape3abfed6-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/265)
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.964 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.968 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.969 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:19 np0005593232 nova_compute[250269]: 2026-01-23 10:18:19.973 250273 INFO os_vif [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:dd:31,bridge_name='br-int',has_traffic_filtering=True,id=e3abfed6-0093-41f6-95f6-e5c9c0b94fc4,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3abfed6-00')#033[00m
Jan 23 05:18:20 np0005593232 nova_compute[250269]: 2026-01-23 10:18:20.035 250273 DEBUG nova.compute.manager [req-989a5a11-55b1-4abc-b403-3446e6aa6e06 req-e53e18a2-9c96-42ce-891a-e1bbccc5a9b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Received event network-vif-unplugged-4a04244c-3270-4ff3-ad30-52e80e7db513 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:18:20 np0005593232 nova_compute[250269]: 2026-01-23 10:18:20.036 250273 DEBUG oslo_concurrency.lockutils [req-989a5a11-55b1-4abc-b403-3446e6aa6e06 req-e53e18a2-9c96-42ce-891a-e1bbccc5a9b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:20 np0005593232 nova_compute[250269]: 2026-01-23 10:18:20.036 250273 DEBUG oslo_concurrency.lockutils [req-989a5a11-55b1-4abc-b403-3446e6aa6e06 req-e53e18a2-9c96-42ce-891a-e1bbccc5a9b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:20 np0005593232 nova_compute[250269]: 2026-01-23 10:18:20.036 250273 DEBUG oslo_concurrency.lockutils [req-989a5a11-55b1-4abc-b403-3446e6aa6e06 req-e53e18a2-9c96-42ce-891a-e1bbccc5a9b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:20 np0005593232 nova_compute[250269]: 2026-01-23 10:18:20.036 250273 DEBUG nova.compute.manager [req-989a5a11-55b1-4abc-b403-3446e6aa6e06 req-e53e18a2-9c96-42ce-891a-e1bbccc5a9b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] No waiting events found dispatching network-vif-unplugged-4a04244c-3270-4ff3-ad30-52e80e7db513 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:18:20 np0005593232 nova_compute[250269]: 2026-01-23 10:18:20.037 250273 DEBUG nova.compute.manager [req-989a5a11-55b1-4abc-b403-3446e6aa6e06 req-e53e18a2-9c96-42ce-891a-e1bbccc5a9b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Received event network-vif-unplugged-4a04244c-3270-4ff3-ad30-52e80e7db513 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:18:20 np0005593232 nova_compute[250269]: 2026-01-23 10:18:20.051 250273 DEBUG nova.virt.libvirt.driver [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:18:20 np0005593232 nova_compute[250269]: 2026-01-23 10:18:20.051 250273 DEBUG nova.virt.libvirt.driver [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:18:20 np0005593232 nova_compute[250269]: 2026-01-23 10:18:20.051 250273 DEBUG nova.virt.libvirt.driver [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] No VIF found with MAC fa:16:3e:7d:dd:31, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:18:20 np0005593232 nova_compute[250269]: 2026-01-23 10:18:20.052 250273 INFO nova.virt.libvirt.driver [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Using config drive#033[00m
Jan 23 05:18:20 np0005593232 nova_compute[250269]: 2026-01-23 10:18:20.077 250273 DEBUG nova.storage.rbd_utils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image 96b953f7-8990-4761-81e9-d93cee7240dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:18:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:20.225 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:18:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:20.226 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:18:20 np0005593232 nova_compute[250269]: 2026-01-23 10:18:20.226 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:20.227 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:18:20 np0005593232 nova_compute[250269]: 2026-01-23 10:18:20.584 250273 DEBUG nova.network.neutron [-] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:18:20 np0005593232 nova_compute[250269]: 2026-01-23 10:18:20.621 250273 INFO nova.compute.manager [-] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Took 2.42 seconds to deallocate network for instance.#033[00m
Jan 23 05:18:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:20.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:20 np0005593232 nova_compute[250269]: 2026-01-23 10:18:20.721 250273 INFO nova.virt.libvirt.driver [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Creating config drive at /var/lib/nova/instances/96b953f7-8990-4761-81e9-d93cee7240dc/disk.config#033[00m
Jan 23 05:18:20 np0005593232 nova_compute[250269]: 2026-01-23 10:18:20.725 250273 DEBUG oslo_concurrency.processutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/96b953f7-8990-4761-81e9-d93cee7240dc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4tpde8b7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:18:20 np0005593232 nova_compute[250269]: 2026-01-23 10:18:20.761 250273 DEBUG nova.compute.manager [req-984391be-658a-4c9d-8608-3c590cc63604 req-fd3ace94-cb2b-4efa-a08d-84c450d3696e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Received event network-vif-deleted-4a04244c-3270-4ff3-ad30-52e80e7db513 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:18:20 np0005593232 nova_compute[250269]: 2026-01-23 10:18:20.870 250273 DEBUG oslo_concurrency.processutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/96b953f7-8990-4761-81e9-d93cee7240dc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4tpde8b7" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:18:20 np0005593232 nova_compute[250269]: 2026-01-23 10:18:20.901 250273 DEBUG nova.storage.rbd_utils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image 96b953f7-8990-4761-81e9-d93cee7240dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:18:20 np0005593232 nova_compute[250269]: 2026-01-23 10:18:20.906 250273 DEBUG oslo_concurrency.processutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/96b953f7-8990-4761-81e9-d93cee7240dc/disk.config 96b953f7-8990-4761-81e9-d93cee7240dc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:18:21 np0005593232 nova_compute[250269]: 2026-01-23 10:18:21.022 250273 INFO nova.compute.manager [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Took 0.40 seconds to detach 1 volumes for instance.#033[00m
Jan 23 05:18:21 np0005593232 nova_compute[250269]: 2026-01-23 10:18:21.063 250273 DEBUG oslo_concurrency.processutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/96b953f7-8990-4761-81e9-d93cee7240dc/disk.config 96b953f7-8990-4761-81e9-d93cee7240dc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:18:21 np0005593232 nova_compute[250269]: 2026-01-23 10:18:21.063 250273 INFO nova.virt.libvirt.driver [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Deleting local config drive /var/lib/nova/instances/96b953f7-8990-4761-81e9-d93cee7240dc/disk.config because it was imported into RBD.#033[00m
Jan 23 05:18:21 np0005593232 nova_compute[250269]: 2026-01-23 10:18:21.104 250273 DEBUG oslo_concurrency.lockutils [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:21 np0005593232 nova_compute[250269]: 2026-01-23 10:18:21.104 250273 DEBUG oslo_concurrency.lockutils [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:21 np0005593232 kernel: tape3abfed6-00: entered promiscuous mode
Jan 23 05:18:21 np0005593232 NetworkManager[49057]: <info>  [1769163501.1120] manager: (tape3abfed6-00): new Tun device (/org/freedesktop/NetworkManager/Devices/266)
Jan 23 05:18:21 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:21Z|00560|binding|INFO|Claiming lport e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 for this chassis.
Jan 23 05:18:21 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:21Z|00561|binding|INFO|e3abfed6-0093-41f6-95f6-e5c9c0b94fc4: Claiming fa:16:3e:7d:dd:31 10.100.0.11
Jan 23 05:18:21 np0005593232 nova_compute[250269]: 2026-01-23 10:18:21.113 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.122 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:dd:31 10.100.0.11'], port_security=['fa:16:3e:7d:dd:31 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '96b953f7-8990-4761-81e9-d93cee7240dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c59351a1b59c4cc9ad389dff900935f2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd59f1dd0-018a-40d5-b9a0-54c6c1f9d925', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c808b115-ccf1-41c4-acea-daabae8abf5b, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=e3abfed6-0093-41f6-95f6-e5c9c0b94fc4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.124 161902 INFO neutron.agent.ovn.metadata.agent [-] Port e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 in datapath 43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 bound to our chassis#033[00m
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.127 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7#033[00m
Jan 23 05:18:21 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:21Z|00562|binding|INFO|Setting lport e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 ovn-installed in OVS
Jan 23 05:18:21 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:21Z|00563|binding|INFO|Setting lport e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 up in Southbound
Jan 23 05:18:21 np0005593232 nova_compute[250269]: 2026-01-23 10:18:21.139 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.140 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6cc4304b-9590-4a96-b901-58958be0fd64]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.140 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap43bdb40a-e1 in ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.142 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap43bdb40a-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.142 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c7d2e705-5f22-4c65-874b-901f06878baa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.143 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c24ec1-4215-449c-8aa1-a587453720d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:21 np0005593232 systemd-udevd[347357]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:18:21 np0005593232 nova_compute[250269]: 2026-01-23 10:18:21.144 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:21 np0005593232 systemd-machined[215836]: New machine qemu-65-instance-00000093.
Jan 23 05:18:21 np0005593232 NetworkManager[49057]: <info>  [1769163501.1564] device (tape3abfed6-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:18:21 np0005593232 NetworkManager[49057]: <info>  [1769163501.1571] device (tape3abfed6-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.156 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[cd20fb99-2332-4686-8eac-ea9fa3046f58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:21 np0005593232 systemd[1]: Started Virtual Machine qemu-65-instance-00000093.
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.182 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[45867b58-f98f-4572-9925-420ab4d54aa4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.214 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d7c63f-4652-419c-a809-70d301b4bfb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:21 np0005593232 systemd-udevd[347361]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:18:21 np0005593232 NetworkManager[49057]: <info>  [1769163501.2253] manager: (tap43bdb40a-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/267)
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.224 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[dc5f7991-a50b-4fea-9797-b93d7a601c02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:18:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:21.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:18:21 np0005593232 nova_compute[250269]: 2026-01-23 10:18:21.256 250273 DEBUG oslo_concurrency.processutils [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.261 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[95ee0f49-cfe2-448a-97c1-ed9d1c218b73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.269 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[091edb2b-ae85-4a02-a32b-2bb751607a88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:21 np0005593232 NetworkManager[49057]: <info>  [1769163501.2980] device (tap43bdb40a-e0): carrier: link connected
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.308 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[3418bb3d-ee22-4db9-ab53-d308dc444df5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.331 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[39d18752-7666-40b2-832c-7a64d79de1ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43bdb40a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:5e:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741444, 'reachable_time': 42630, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 347391, 'error': None, 'target': 'ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.347 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[30b7dc5e-1be1-421b-9300-bb3883f22f2b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:5ee5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 741444, 'tstamp': 741444}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 347392, 'error': None, 'target': 'ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.365 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1b108270-76d5-4578-b7e5-80146b8d0959]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43bdb40a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:5e:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741444, 'reachable_time': 42630, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 347393, 'error': None, 'target': 'ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.403 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[828e0bd1-18ed-427d-98d4-c013686c2aab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.468 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3273dea7-98e3-4564-a5d8-99392184dc4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.470 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43bdb40a-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.470 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.470 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43bdb40a-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:18:21 np0005593232 kernel: tap43bdb40a-e0: entered promiscuous mode
Jan 23 05:18:21 np0005593232 NetworkManager[49057]: <info>  [1769163501.4730] manager: (tap43bdb40a-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/268)
Jan 23 05:18:21 np0005593232 nova_compute[250269]: 2026-01-23 10:18:21.472 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.475 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43bdb40a-e0, col_values=(('external_ids', {'iface-id': '8a8ef4f2-2ba5-405a-811e-058c5ff2b91e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:18:21 np0005593232 nova_compute[250269]: 2026-01-23 10:18:21.476 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:21 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:21Z|00564|binding|INFO|Releasing lport 8a8ef4f2-2ba5-405a-811e-058c5ff2b91e from this chassis (sb_readonly=0)
Jan 23 05:18:21 np0005593232 nova_compute[250269]: 2026-01-23 10:18:21.490 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.491 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.492 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[56b8b955-be7d-41ba-8d82-244d54b76a5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.493 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7.pid.haproxy
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 23 05:18:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:21.494 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'env', 'PROCESS_TAG=haproxy-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 23 05:18:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2662: 321 pgs: 321 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 552 KiB/s rd, 5.5 MiB/s wr, 162 op/s
Jan 23 05:18:21 np0005593232 nova_compute[250269]: 2026-01-23 10:18:21.625 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163501.624735, 96b953f7-8990-4761-81e9-d93cee7240dc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 05:18:21 np0005593232 nova_compute[250269]: 2026-01-23 10:18:21.625 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] VM Started (Lifecycle Event)
Jan 23 05:18:21 np0005593232 nova_compute[250269]: 2026-01-23 10:18:21.657 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:18:21 np0005593232 nova_compute[250269]: 2026-01-23 10:18:21.663 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163501.6257591, 96b953f7-8990-4761-81e9-d93cee7240dc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 05:18:21 np0005593232 nova_compute[250269]: 2026-01-23 10:18:21.664 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] VM Paused (Lifecycle Event)
Jan 23 05:18:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:18:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4230946624' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:18:21 np0005593232 nova_compute[250269]: 2026-01-23 10:18:21.698 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:18:21 np0005593232 nova_compute[250269]: 2026-01-23 10:18:21.702 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 05:18:21 np0005593232 nova_compute[250269]: 2026-01-23 10:18:21.713 250273 DEBUG oslo_concurrency.processutils [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:18:21 np0005593232 nova_compute[250269]: 2026-01-23 10:18:21.718 250273 DEBUG nova.compute.provider_tree [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 05:18:21 np0005593232 nova_compute[250269]: 2026-01-23 10:18:21.742 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 05:18:21 np0005593232 nova_compute[250269]: 2026-01-23 10:18:21.750 250273 DEBUG nova.scheduler.client.report [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 05:18:21 np0005593232 podman[347488]: 2026-01-23 10:18:21.841785508 +0000 UTC m=+0.042482549 container create 2361b5712333a617f56a618ec75602795a4e32cd2d6aa64b57f151aaeec99367 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:18:21 np0005593232 systemd[1]: Started libpod-conmon-2361b5712333a617f56a618ec75602795a4e32cd2d6aa64b57f151aaeec99367.scope.
Jan 23 05:18:21 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:18:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/744367de6641cff53c49a53e60ed04c74ae46db1c6873c2b375c867e07f5b542/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:18:21 np0005593232 podman[347488]: 2026-01-23 10:18:21.915152612 +0000 UTC m=+0.115849673 container init 2361b5712333a617f56a618ec75602795a4e32cd2d6aa64b57f151aaeec99367 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:18:21 np0005593232 podman[347488]: 2026-01-23 10:18:21.819391251 +0000 UTC m=+0.020088312 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:18:21 np0005593232 podman[347488]: 2026-01-23 10:18:21.920900286 +0000 UTC m=+0.121597327 container start 2361b5712333a617f56a618ec75602795a4e32cd2d6aa64b57f151aaeec99367 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:18:21 np0005593232 nova_compute[250269]: 2026-01-23 10:18:21.941 250273 DEBUG oslo_concurrency.lockutils [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:18:21 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[347504]: [NOTICE]   (347508) : New worker (347511) forked
Jan 23 05:18:21 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[347504]: [NOTICE]   (347508) : Loading success.
Jan 23 05:18:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:18:22 np0005593232 nova_compute[250269]: 2026-01-23 10:18:22.496 250273 DEBUG nova.compute.manager [req-b0793c87-4dc9-471e-ba82-f5371e22f027 req-a81f5a17-d702-4bd4-be43-c36f2a688729 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Received event network-vif-plugged-e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:18:22 np0005593232 nova_compute[250269]: 2026-01-23 10:18:22.498 250273 DEBUG oslo_concurrency.lockutils [req-b0793c87-4dc9-471e-ba82-f5371e22f027 req-a81f5a17-d702-4bd4-be43-c36f2a688729 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "96b953f7-8990-4761-81e9-d93cee7240dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:18:22 np0005593232 nova_compute[250269]: 2026-01-23 10:18:22.498 250273 DEBUG oslo_concurrency.lockutils [req-b0793c87-4dc9-471e-ba82-f5371e22f027 req-a81f5a17-d702-4bd4-be43-c36f2a688729 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "96b953f7-8990-4761-81e9-d93cee7240dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:18:22 np0005593232 nova_compute[250269]: 2026-01-23 10:18:22.498 250273 DEBUG oslo_concurrency.lockutils [req-b0793c87-4dc9-471e-ba82-f5371e22f027 req-a81f5a17-d702-4bd4-be43-c36f2a688729 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "96b953f7-8990-4761-81e9-d93cee7240dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:18:22 np0005593232 nova_compute[250269]: 2026-01-23 10:18:22.498 250273 DEBUG nova.compute.manager [req-b0793c87-4dc9-471e-ba82-f5371e22f027 req-a81f5a17-d702-4bd4-be43-c36f2a688729 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Processing event network-vif-plugged-e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 23 05:18:22 np0005593232 nova_compute[250269]: 2026-01-23 10:18:22.499 250273 DEBUG nova.compute.manager [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 23 05:18:22 np0005593232 nova_compute[250269]: 2026-01-23 10:18:22.504 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163502.5039256, 96b953f7-8990-4761-81e9-d93cee7240dc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 05:18:22 np0005593232 nova_compute[250269]: 2026-01-23 10:18:22.504 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] VM Resumed (Lifecycle Event)
Jan 23 05:18:22 np0005593232 nova_compute[250269]: 2026-01-23 10:18:22.508 250273 DEBUG nova.virt.libvirt.driver [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 23 05:18:22 np0005593232 nova_compute[250269]: 2026-01-23 10:18:22.513 250273 INFO nova.virt.libvirt.driver [-] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Instance spawned successfully.
Jan 23 05:18:22 np0005593232 nova_compute[250269]: 2026-01-23 10:18:22.514 250273 DEBUG nova.virt.libvirt.driver [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 23 05:18:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.003000085s ======
Jan 23 05:18:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:22.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000085s
Jan 23 05:18:23 np0005593232 nova_compute[250269]: 2026-01-23 10:18:23.070 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:18:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:18:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:23.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:18:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2663: 321 pgs: 321 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 563 KiB/s rd, 5.5 MiB/s wr, 178 op/s
Jan 23 05:18:24 np0005593232 nova_compute[250269]: 2026-01-23 10:18:24.086 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769163489.084624, 1d70a2c0-b633-4e56-9f11-ae40749783be => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 05:18:24 np0005593232 nova_compute[250269]: 2026-01-23 10:18:24.087 250273 INFO nova.compute.manager [-] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] VM Stopped (Lifecycle Event)
Jan 23 05:18:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 05:18:24 np0005593232 nova_compute[250269]: 2026-01-23 10:18:24.123 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:18:24 np0005593232 nova_compute[250269]: 2026-01-23 10:18:24.129 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 05:18:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:18:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 05:18:24 np0005593232 nova_compute[250269]: 2026-01-23 10:18:24.362 250273 DEBUG nova.virt.libvirt.driver [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:18:24 np0005593232 nova_compute[250269]: 2026-01-23 10:18:24.363 250273 DEBUG nova.virt.libvirt.driver [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:18:24 np0005593232 nova_compute[250269]: 2026-01-23 10:18:24.365 250273 DEBUG nova.virt.libvirt.driver [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:18:24 np0005593232 nova_compute[250269]: 2026-01-23 10:18:24.366 250273 DEBUG nova.virt.libvirt.driver [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:18:24 np0005593232 nova_compute[250269]: 2026-01-23 10:18:24.367 250273 DEBUG nova.virt.libvirt.driver [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:18:24 np0005593232 nova_compute[250269]: 2026-01-23 10:18:24.367 250273 DEBUG nova.virt.libvirt.driver [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:18:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:18:24 np0005593232 nova_compute[250269]: 2026-01-23 10:18:24.666 250273 DEBUG nova.compute.manager [None req-8a495258-bb6e-4a21-8273-e69e5ce81664 - - - - - -] [instance: 1d70a2c0-b633-4e56-9f11-ae40749783be] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:18:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:24.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:24 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:18:24 np0005593232 nova_compute[250269]: 2026-01-23 10:18:24.960 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:18:24 np0005593232 nova_compute[250269]: 2026-01-23 10:18:24.992 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 05:18:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:25.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2664: 321 pgs: 321 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 135 op/s
Jan 23 05:18:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:18:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:18:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:18:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:18:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:18:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:18:25 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c8ef8b4c-bada-46c8-8e3c-4bfbbee1115d does not exist
Jan 23 05:18:25 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7896322e-7154-4d5c-a24d-ab02b070fba1 does not exist
Jan 23 05:18:25 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 4a5ecff7-1a18-4629-a5c9-9741589a7025 does not exist
Jan 23 05:18:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:18:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:18:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:18:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:18:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:18:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:18:25 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:18:25 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:18:25 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:18:25 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:18:26 np0005593232 nova_compute[250269]: 2026-01-23 10:18:26.106 250273 INFO nova.compute.manager [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Took 14.94 seconds to spawn the instance on the hypervisor.
Jan 23 05:18:26 np0005593232 nova_compute[250269]: 2026-01-23 10:18:26.107 250273 DEBUG nova.compute.manager [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:18:26 np0005593232 podman[347794]: 2026-01-23 10:18:26.180474369 +0000 UTC m=+0.040932594 container create 071fe7972bb6483fe51f314457d8896fef814d1dd05600b175ed3edd108b8ce2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 23 05:18:26 np0005593232 nova_compute[250269]: 2026-01-23 10:18:26.188 250273 INFO nova.scheduler.client.report [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Deleted allocations for instance ccd07f55-529f-4dbb-989c-2cdbdd393a0b
Jan 23 05:18:26 np0005593232 nova_compute[250269]: 2026-01-23 10:18:26.197 250273 INFO nova.compute.manager [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Took 16.62 seconds to build instance.
Jan 23 05:18:26 np0005593232 systemd[1]: Started libpod-conmon-071fe7972bb6483fe51f314457d8896fef814d1dd05600b175ed3edd108b8ce2.scope.
Jan 23 05:18:26 np0005593232 nova_compute[250269]: 2026-01-23 10:18:26.234 250273 DEBUG oslo_concurrency.lockutils [None req-b1e60e4f-037e-4547-b71f-a248d47edfbc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "96b953f7-8990-4761-81e9-d93cee7240dc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:18:26 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:18:26 np0005593232 nova_compute[250269]: 2026-01-23 10:18:26.257 250273 DEBUG nova.compute.manager [req-1e76e046-7a7a-4dc2-ac1a-ab3938d9da81 req-6a206097-d0ee-4f39-bb75-705552ed6f72 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Received event network-vif-plugged-4a04244c-3270-4ff3-ad30-52e80e7db513 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:18:26 np0005593232 nova_compute[250269]: 2026-01-23 10:18:26.257 250273 DEBUG oslo_concurrency.lockutils [req-1e76e046-7a7a-4dc2-ac1a-ab3938d9da81 req-6a206097-d0ee-4f39-bb75-705552ed6f72 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:18:26 np0005593232 podman[347794]: 2026-01-23 10:18:26.163187008 +0000 UTC m=+0.023645233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:18:26 np0005593232 nova_compute[250269]: 2026-01-23 10:18:26.258 250273 DEBUG oslo_concurrency.lockutils [req-1e76e046-7a7a-4dc2-ac1a-ab3938d9da81 req-6a206097-d0ee-4f39-bb75-705552ed6f72 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:18:26 np0005593232 nova_compute[250269]: 2026-01-23 10:18:26.258 250273 DEBUG oslo_concurrency.lockutils [req-1e76e046-7a7a-4dc2-ac1a-ab3938d9da81 req-6a206097-d0ee-4f39-bb75-705552ed6f72 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:18:26 np0005593232 nova_compute[250269]: 2026-01-23 10:18:26.258 250273 DEBUG nova.compute.manager [req-1e76e046-7a7a-4dc2-ac1a-ab3938d9da81 req-6a206097-d0ee-4f39-bb75-705552ed6f72 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] No waiting events found dispatching network-vif-plugged-4a04244c-3270-4ff3-ad30-52e80e7db513 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 05:18:26 np0005593232 nova_compute[250269]: 2026-01-23 10:18:26.259 250273 WARNING nova.compute.manager [req-1e76e046-7a7a-4dc2-ac1a-ab3938d9da81 req-6a206097-d0ee-4f39-bb75-705552ed6f72 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Received unexpected event network-vif-plugged-4a04244c-3270-4ff3-ad30-52e80e7db513 for instance with vm_state deleted and task_state None.
Jan 23 05:18:26 np0005593232 podman[347794]: 2026-01-23 10:18:26.271440854 +0000 UTC m=+0.131899099 container init 071fe7972bb6483fe51f314457d8896fef814d1dd05600b175ed3edd108b8ce2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_pascal, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:18:26 np0005593232 nova_compute[250269]: 2026-01-23 10:18:26.273 250273 DEBUG oslo_concurrency.lockutils [None req-c46ff4ea-b3b6-4b87-83e1-11141f8aa71f 95ac13194f0940128d42af3d45d130fa 3ae621f21a8e438fb95152309b38cee5 - - default default] Lock "ccd07f55-529f-4dbb-989c-2cdbdd393a0b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.681s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:18:26 np0005593232 podman[347794]: 2026-01-23 10:18:26.27869212 +0000 UTC m=+0.139150345 container start 071fe7972bb6483fe51f314457d8896fef814d1dd05600b175ed3edd108b8ce2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_pascal, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 05:18:26 np0005593232 podman[347794]: 2026-01-23 10:18:26.28152908 +0000 UTC m=+0.141987325 container attach 071fe7972bb6483fe51f314457d8896fef814d1dd05600b175ed3edd108b8ce2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_pascal, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:18:26 np0005593232 clever_pascal[347810]: 167 167
Jan 23 05:18:26 np0005593232 systemd[1]: libpod-071fe7972bb6483fe51f314457d8896fef814d1dd05600b175ed3edd108b8ce2.scope: Deactivated successfully.
Jan 23 05:18:26 np0005593232 podman[347794]: 2026-01-23 10:18:26.287094509 +0000 UTC m=+0.147552734 container died 071fe7972bb6483fe51f314457d8896fef814d1dd05600b175ed3edd108b8ce2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:18:26 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5ad74d258ddfbea10d2ee63a2ec2e9f35b118dfb428719186ac87701f648ff3f-merged.mount: Deactivated successfully.
Jan 23 05:18:26 np0005593232 podman[347794]: 2026-01-23 10:18:26.320411385 +0000 UTC m=+0.180869610 container remove 071fe7972bb6483fe51f314457d8896fef814d1dd05600b175ed3edd108b8ce2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_pascal, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 05:18:26 np0005593232 systemd[1]: libpod-conmon-071fe7972bb6483fe51f314457d8896fef814d1dd05600b175ed3edd108b8ce2.scope: Deactivated successfully.
Jan 23 05:18:26 np0005593232 nova_compute[250269]: 2026-01-23 10:18:26.341 250273 DEBUG nova.network.neutron [req-5d8f30b3-3d7f-4783-af23-5db834ea5d0d req-9d23ecc9-50be-43d0-b37f-2e534b219b57 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Updated VIF entry in instance network info cache for port e3abfed6-0093-41f6-95f6-e5c9c0b94fc4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 05:18:26 np0005593232 nova_compute[250269]: 2026-01-23 10:18:26.341 250273 DEBUG nova.network.neutron [req-5d8f30b3-3d7f-4783-af23-5db834ea5d0d req-9d23ecc9-50be-43d0-b37f-2e534b219b57 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Updating instance_info_cache with network_info: [{"id": "e3abfed6-0093-41f6-95f6-e5c9c0b94fc4", "address": "fa:16:3e:7d:dd:31", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3abfed6-00", "ovs_interfaceid": "e3abfed6-0093-41f6-95f6-e5c9c0b94fc4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 05:18:26 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:26Z|00565|binding|INFO|Releasing lport 8a8ef4f2-2ba5-405a-811e-058c5ff2b91e from this chassis (sb_readonly=0)
Jan 23 05:18:26 np0005593232 nova_compute[250269]: 2026-01-23 10:18:26.376 250273 DEBUG oslo_concurrency.lockutils [req-5d8f30b3-3d7f-4783-af23-5db834ea5d0d req-9d23ecc9-50be-43d0-b37f-2e534b219b57 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-96b953f7-8990-4761-81e9-d93cee7240dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 05:18:26 np0005593232 nova_compute[250269]: 2026-01-23 10:18:26.418 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:18:26 np0005593232 podman[347832]: 2026-01-23 10:18:26.516851616 +0000 UTC m=+0.042752556 container create 824c1c35ec0d3c0557927236d4aa0db45126bf6aec8062e71c0fc75b1ee2d07c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:18:26 np0005593232 podman[347832]: 2026-01-23 10:18:26.500702407 +0000 UTC m=+0.026603367 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:18:26 np0005593232 systemd[1]: Started libpod-conmon-824c1c35ec0d3c0557927236d4aa0db45126bf6aec8062e71c0fc75b1ee2d07c.scope.
Jan 23 05:18:26 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:26Z|00566|binding|INFO|Releasing lport 8a8ef4f2-2ba5-405a-811e-058c5ff2b91e from this chassis (sb_readonly=0)
Jan 23 05:18:26 np0005593232 nova_compute[250269]: 2026-01-23 10:18:26.619 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:18:26 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:18:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a40079535b468b354f7a36bd92d4cd311d3e7fbfd7d6e1715b3bf7a8cb32e276/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:18:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a40079535b468b354f7a36bd92d4cd311d3e7fbfd7d6e1715b3bf7a8cb32e276/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:18:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a40079535b468b354f7a36bd92d4cd311d3e7fbfd7d6e1715b3bf7a8cb32e276/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:18:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a40079535b468b354f7a36bd92d4cd311d3e7fbfd7d6e1715b3bf7a8cb32e276/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:18:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a40079535b468b354f7a36bd92d4cd311d3e7fbfd7d6e1715b3bf7a8cb32e276/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:18:26 np0005593232 podman[347832]: 2026-01-23 10:18:26.659195621 +0000 UTC m=+0.185096591 container init 824c1c35ec0d3c0557927236d4aa0db45126bf6aec8062e71c0fc75b1ee2d07c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_wescoff, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:18:26 np0005593232 podman[347832]: 2026-01-23 10:18:26.667641501 +0000 UTC m=+0.193542431 container start 824c1c35ec0d3c0557927236d4aa0db45126bf6aec8062e71c0fc75b1ee2d07c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Jan 23 05:18:26 np0005593232 podman[347832]: 2026-01-23 10:18:26.671594763 +0000 UTC m=+0.197495733 container attach 824c1c35ec0d3c0557927236d4aa0db45126bf6aec8062e71c0fc75b1ee2d07c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_wescoff, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 05:18:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:18:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:26.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:18:26 np0005593232 nova_compute[250269]: 2026-01-23 10:18:26.844 250273 DEBUG nova.compute.manager [req-f2334193-5333-4b3c-9f8b-a39e8ca81104 req-e3696a04-15e9-4e1a-a95d-fc18d67b54b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Received event network-vif-plugged-e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:18:26 np0005593232 nova_compute[250269]: 2026-01-23 10:18:26.844 250273 DEBUG oslo_concurrency.lockutils [req-f2334193-5333-4b3c-9f8b-a39e8ca81104 req-e3696a04-15e9-4e1a-a95d-fc18d67b54b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "96b953f7-8990-4761-81e9-d93cee7240dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:18:26 np0005593232 nova_compute[250269]: 2026-01-23 10:18:26.845 250273 DEBUG oslo_concurrency.lockutils [req-f2334193-5333-4b3c-9f8b-a39e8ca81104 req-e3696a04-15e9-4e1a-a95d-fc18d67b54b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "96b953f7-8990-4761-81e9-d93cee7240dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:18:26 np0005593232 nova_compute[250269]: 2026-01-23 10:18:26.845 250273 DEBUG oslo_concurrency.lockutils [req-f2334193-5333-4b3c-9f8b-a39e8ca81104 req-e3696a04-15e9-4e1a-a95d-fc18d67b54b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "96b953f7-8990-4761-81e9-d93cee7240dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:18:26 np0005593232 nova_compute[250269]: 2026-01-23 10:18:26.845 250273 DEBUG nova.compute.manager [req-f2334193-5333-4b3c-9f8b-a39e8ca81104 req-e3696a04-15e9-4e1a-a95d-fc18d67b54b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] No waiting events found dispatching network-vif-plugged-e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 05:18:26 np0005593232 nova_compute[250269]: 2026-01-23 10:18:26.845 250273 WARNING nova.compute.manager [req-f2334193-5333-4b3c-9f8b-a39e8ca81104 req-e3696a04-15e9-4e1a-a95d-fc18d67b54b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Received unexpected event network-vif-plugged-e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 for instance with vm_state active and task_state None.
Jan 23 05:18:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:18:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:18:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:27.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:18:27 np0005593232 optimistic_wescoff[347850]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:18:27 np0005593232 optimistic_wescoff[347850]: --> relative data size: 1.0
Jan 23 05:18:27 np0005593232 optimistic_wescoff[347850]: --> All data devices are unavailable
Jan 23 05:18:27 np0005593232 systemd[1]: libpod-824c1c35ec0d3c0557927236d4aa0db45126bf6aec8062e71c0fc75b1ee2d07c.scope: Deactivated successfully.
Jan 23 05:18:27 np0005593232 podman[347832]: 2026-01-23 10:18:27.489816672 +0000 UTC m=+1.015717612 container died 824c1c35ec0d3c0557927236d4aa0db45126bf6aec8062e71c0fc75b1ee2d07c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 05:18:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2665: 321 pgs: 321 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 482 KiB/s wr, 137 op/s
Jan 23 05:18:27 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a40079535b468b354f7a36bd92d4cd311d3e7fbfd7d6e1715b3bf7a8cb32e276-merged.mount: Deactivated successfully.
Jan 23 05:18:27 np0005593232 podman[347832]: 2026-01-23 10:18:27.540319707 +0000 UTC m=+1.066220648 container remove 824c1c35ec0d3c0557927236d4aa0db45126bf6aec8062e71c0fc75b1ee2d07c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_wescoff, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 05:18:27 np0005593232 systemd[1]: libpod-conmon-824c1c35ec0d3c0557927236d4aa0db45126bf6aec8062e71c0fc75b1ee2d07c.scope: Deactivated successfully.
Jan 23 05:18:28 np0005593232 nova_compute[250269]: 2026-01-23 10:18:28.072 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:18:28 np0005593232 podman[348017]: 2026-01-23 10:18:28.103430908 +0000 UTC m=+0.037792045 container create 02fb40439437014d2b4e1f48bb6d1931792434e58b6ef9d88c6c9d1bd9e09ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 05:18:28 np0005593232 systemd[1]: Started libpod-conmon-02fb40439437014d2b4e1f48bb6d1931792434e58b6ef9d88c6c9d1bd9e09ab5.scope.
Jan 23 05:18:28 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:18:28 np0005593232 podman[348017]: 2026-01-23 10:18:28.088156314 +0000 UTC m=+0.022517471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:18:28 np0005593232 podman[348017]: 2026-01-23 10:18:28.185327655 +0000 UTC m=+0.119688822 container init 02fb40439437014d2b4e1f48bb6d1931792434e58b6ef9d88c6c9d1bd9e09ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kowalevski, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:18:28 np0005593232 podman[348017]: 2026-01-23 10:18:28.193042964 +0000 UTC m=+0.127404101 container start 02fb40439437014d2b4e1f48bb6d1931792434e58b6ef9d88c6c9d1bd9e09ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kowalevski, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 05:18:28 np0005593232 podman[348017]: 2026-01-23 10:18:28.196028709 +0000 UTC m=+0.130389846 container attach 02fb40439437014d2b4e1f48bb6d1931792434e58b6ef9d88c6c9d1bd9e09ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kowalevski, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 23 05:18:28 np0005593232 sharp_kowalevski[348035]: 167 167
Jan 23 05:18:28 np0005593232 podman[348017]: 2026-01-23 10:18:28.199196419 +0000 UTC m=+0.133557556 container died 02fb40439437014d2b4e1f48bb6d1931792434e58b6ef9d88c6c9d1bd9e09ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kowalevski, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:18:28 np0005593232 systemd[1]: libpod-02fb40439437014d2b4e1f48bb6d1931792434e58b6ef9d88c6c9d1bd9e09ab5.scope: Deactivated successfully.
Jan 23 05:18:28 np0005593232 systemd[1]: var-lib-containers-storage-overlay-be8fd2848b8cf1ade484825ce8d3625bd62e7900ba39dd1693482079c5e2afb5-merged.mount: Deactivated successfully.
Jan 23 05:18:28 np0005593232 podman[348017]: 2026-01-23 10:18:28.230835048 +0000 UTC m=+0.165196185 container remove 02fb40439437014d2b4e1f48bb6d1931792434e58b6ef9d88c6c9d1bd9e09ab5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:18:28 np0005593232 systemd[1]: libpod-conmon-02fb40439437014d2b4e1f48bb6d1931792434e58b6ef9d88c6c9d1bd9e09ab5.scope: Deactivated successfully.
Jan 23 05:18:28 np0005593232 podman[348059]: 2026-01-23 10:18:28.401328313 +0000 UTC m=+0.047571053 container create e736f844885ef3207de8e1ee58e45c8cb62c48e7ce75e063554a994ec98baf89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ptolemy, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:18:28 np0005593232 systemd[1]: Started libpod-conmon-e736f844885ef3207de8e1ee58e45c8cb62c48e7ce75e063554a994ec98baf89.scope.
Jan 23 05:18:28 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:18:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b411bb3b2340905b2ac583a10889d395aa85b93652044140b91b6bf2c2650d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:18:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b411bb3b2340905b2ac583a10889d395aa85b93652044140b91b6bf2c2650d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:18:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b411bb3b2340905b2ac583a10889d395aa85b93652044140b91b6bf2c2650d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:18:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b411bb3b2340905b2ac583a10889d395aa85b93652044140b91b6bf2c2650d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:18:28 np0005593232 podman[348059]: 2026-01-23 10:18:28.37658375 +0000 UTC m=+0.022826510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:18:28 np0005593232 podman[348059]: 2026-01-23 10:18:28.478690871 +0000 UTC m=+0.124933631 container init e736f844885ef3207de8e1ee58e45c8cb62c48e7ce75e063554a994ec98baf89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ptolemy, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 05:18:28 np0005593232 podman[348059]: 2026-01-23 10:18:28.483960261 +0000 UTC m=+0.130203001 container start e736f844885ef3207de8e1ee58e45c8cb62c48e7ce75e063554a994ec98baf89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 05:18:28 np0005593232 podman[348059]: 2026-01-23 10:18:28.488120829 +0000 UTC m=+0.134363599 container attach e736f844885ef3207de8e1ee58e45c8cb62c48e7ce75e063554a994ec98baf89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ptolemy, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:18:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:28.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]: {
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:    "0": [
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:        {
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:            "devices": [
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:                "/dev/loop3"
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:            ],
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:            "lv_name": "ceph_lv0",
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:            "lv_size": "7511998464",
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:            "name": "ceph_lv0",
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:            "tags": {
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:                "ceph.cluster_name": "ceph",
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:                "ceph.crush_device_class": "",
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:                "ceph.encrypted": "0",
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:                "ceph.osd_id": "0",
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:                "ceph.type": "block",
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:                "ceph.vdo": "0"
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:            },
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:            "type": "block",
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:            "vg_name": "ceph_vg0"
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:        }
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]:    ]
Jan 23 05:18:29 np0005593232 confident_ptolemy[348075]: }
Jan 23 05:18:29 np0005593232 systemd[1]: libpod-e736f844885ef3207de8e1ee58e45c8cb62c48e7ce75e063554a994ec98baf89.scope: Deactivated successfully.
Jan 23 05:18:29 np0005593232 podman[348059]: 2026-01-23 10:18:29.220667494 +0000 UTC m=+0.866910234 container died e736f844885ef3207de8e1ee58e45c8cb62c48e7ce75e063554a994ec98baf89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ptolemy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:18:29 np0005593232 systemd[1]: var-lib-containers-storage-overlay-92b411bb3b2340905b2ac583a10889d395aa85b93652044140b91b6bf2c2650d-merged.mount: Deactivated successfully.
Jan 23 05:18:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:18:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:29.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:18:29 np0005593232 podman[348059]: 2026-01-23 10:18:29.284421746 +0000 UTC m=+0.930664496 container remove e736f844885ef3207de8e1ee58e45c8cb62c48e7ce75e063554a994ec98baf89 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_ptolemy, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:18:29 np0005593232 systemd[1]: libpod-conmon-e736f844885ef3207de8e1ee58e45c8cb62c48e7ce75e063554a994ec98baf89.scope: Deactivated successfully.
Jan 23 05:18:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2666: 321 pgs: 321 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 88 op/s
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.561 250273 DEBUG oslo_concurrency.lockutils [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "96b953f7-8990-4761-81e9-d93cee7240dc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.565 250273 DEBUG oslo_concurrency.lockutils [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "96b953f7-8990-4761-81e9-d93cee7240dc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.565 250273 DEBUG oslo_concurrency.lockutils [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "96b953f7-8990-4761-81e9-d93cee7240dc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.566 250273 DEBUG oslo_concurrency.lockutils [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "96b953f7-8990-4761-81e9-d93cee7240dc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.566 250273 DEBUG oslo_concurrency.lockutils [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "96b953f7-8990-4761-81e9-d93cee7240dc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.567 250273 INFO nova.compute.manager [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Terminating instance#033[00m
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.568 250273 DEBUG nova.compute.manager [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:18:29 np0005593232 kernel: tape3abfed6-00 (unregistering): left promiscuous mode
Jan 23 05:18:29 np0005593232 NetworkManager[49057]: <info>  [1769163509.6207] device (tape3abfed6-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:18:29 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:29Z|00567|binding|INFO|Releasing lport e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 from this chassis (sb_readonly=0)
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.634 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:29 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:29Z|00568|binding|INFO|Setting lport e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 down in Southbound
Jan 23 05:18:29 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:29Z|00569|binding|INFO|Removing iface tape3abfed6-00 ovn-installed in OVS
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.637 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:29 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:29.645 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:dd:31 10.100.0.11'], port_security=['fa:16:3e:7d:dd:31 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '96b953f7-8990-4761-81e9-d93cee7240dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c59351a1b59c4cc9ad389dff900935f2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd59f1dd0-018a-40d5-b9a0-54c6c1f9d925', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c808b115-ccf1-41c4-acea-daabae8abf5b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=e3abfed6-0093-41f6-95f6-e5c9c0b94fc4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:18:29 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:29.646 161902 INFO neutron.agent.ovn.metadata.agent [-] Port e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 in datapath 43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 unbound from our chassis#033[00m
Jan 23 05:18:29 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:29.648 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:18:29 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:29.650 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[300f314b-e3bb-40e2-92b6-8c7b4c8b955b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:29 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:29.650 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 namespace which is not needed anymore#033[00m
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.663 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:29 np0005593232 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000093.scope: Deactivated successfully.
Jan 23 05:18:29 np0005593232 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000093.scope: Consumed 7.784s CPU time.
Jan 23 05:18:29 np0005593232 systemd-machined[215836]: Machine qemu-65-instance-00000093 terminated.
Jan 23 05:18:29 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[347504]: [NOTICE]   (347508) : haproxy version is 2.8.14-c23fe91
Jan 23 05:18:29 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[347504]: [NOTICE]   (347508) : path to executable is /usr/sbin/haproxy
Jan 23 05:18:29 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[347504]: [WARNING]  (347508) : Exiting Master process...
Jan 23 05:18:29 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[347504]: [ALERT]    (347508) : Current worker (347511) exited with code 143 (Terminated)
Jan 23 05:18:29 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[347504]: [WARNING]  (347508) : All workers exited. Exiting... (0)
Jan 23 05:18:29 np0005593232 systemd[1]: libpod-2361b5712333a617f56a618ec75602795a4e32cd2d6aa64b57f151aaeec99367.scope: Deactivated successfully.
Jan 23 05:18:29 np0005593232 podman[348224]: 2026-01-23 10:18:29.78327776 +0000 UTC m=+0.041991134 container died 2361b5712333a617f56a618ec75602795a4e32cd2d6aa64b57f151aaeec99367 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:18:29 np0005593232 kernel: tape3abfed6-00: entered promiscuous mode
Jan 23 05:18:29 np0005593232 NetworkManager[49057]: <info>  [1769163509.7911] manager: (tape3abfed6-00): new Tun device (/org/freedesktop/NetworkManager/Devices/269)
Jan 23 05:18:29 np0005593232 kernel: tape3abfed6-00 (unregistering): left promiscuous mode
Jan 23 05:18:29 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:29Z|00570|binding|INFO|Claiming lport e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 for this chassis.
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.796 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:29 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:29Z|00571|binding|INFO|e3abfed6-0093-41f6-95f6-e5c9c0b94fc4: Claiming fa:16:3e:7d:dd:31 10.100.0.11
Jan 23 05:18:29 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:29.803 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:dd:31 10.100.0.11'], port_security=['fa:16:3e:7d:dd:31 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '96b953f7-8990-4761-81e9-d93cee7240dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c59351a1b59c4cc9ad389dff900935f2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd59f1dd0-018a-40d5-b9a0-54c6c1f9d925', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c808b115-ccf1-41c4-acea-daabae8abf5b, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=e3abfed6-0093-41f6-95f6-e5c9c0b94fc4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.813 250273 INFO nova.virt.libvirt.driver [-] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Instance destroyed successfully.#033[00m
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.814 250273 DEBUG nova.objects.instance [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lazy-loading 'resources' on Instance uuid 96b953f7-8990-4761-81e9-d93cee7240dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:18:29 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:29Z|00572|binding|INFO|Setting lport e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 ovn-installed in OVS
Jan 23 05:18:29 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:29Z|00573|binding|INFO|Setting lport e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 up in Southbound
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.827 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:29 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:29Z|00574|binding|INFO|Releasing lport e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 from this chassis (sb_readonly=1)
Jan 23 05:18:29 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:29Z|00575|binding|INFO|Removing iface tape3abfed6-00 ovn-installed in OVS
Jan 23 05:18:29 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:29Z|00576|if_status|INFO|Dropped 4 log messages in last 471 seconds (most recently, 471 seconds ago) due to excessive rate
Jan 23 05:18:29 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:29Z|00577|if_status|INFO|Not setting lport e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 down as sb is readonly
Jan 23 05:18:29 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2361b5712333a617f56a618ec75602795a4e32cd2d6aa64b57f151aaeec99367-userdata-shm.mount: Deactivated successfully.
Jan 23 05:18:29 np0005593232 systemd[1]: var-lib-containers-storage-overlay-744367de6641cff53c49a53e60ed04c74ae46db1c6873c2b375c867e07f5b542-merged.mount: Deactivated successfully.
Jan 23 05:18:29 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:29Z|00578|binding|INFO|Releasing lport e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 from this chassis (sb_readonly=0)
Jan 23 05:18:29 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:29Z|00579|binding|INFO|Setting lport e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 down in Southbound
Jan 23 05:18:29 np0005593232 podman[348224]: 2026-01-23 10:18:29.841657539 +0000 UTC m=+0.100370913 container cleanup 2361b5712333a617f56a618ec75602795a4e32cd2d6aa64b57f151aaeec99367 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:18:29 np0005593232 systemd[1]: libpod-conmon-2361b5712333a617f56a618ec75602795a4e32cd2d6aa64b57f151aaeec99367.scope: Deactivated successfully.
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.846 250273 DEBUG nova.virt.libvirt.vif [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:18:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1417985880',display_name='tempest-ServersTestJSON-server-1417985880',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1417985880',id=147,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:18:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c59351a1b59c4cc9ad389dff900935f2',ramdisk_id='',reservation_id='r-7l5yrd3o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1611255243',owner_user_name='tempest-ServersTestJSON-1611255243-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:18:26Z,user_data=None,user_id='ec99ae7c69d0438280441e0434374cbf',uuid=96b953f7-8990-4761-81e9-d93cee7240dc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e3abfed6-0093-41f6-95f6-e5c9c0b94fc4", "address": "fa:16:3e:7d:dd:31", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3abfed6-00", "ovs_interfaceid": "e3abfed6-0093-41f6-95f6-e5c9c0b94fc4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.847 250273 DEBUG nova.network.os_vif_util [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Converting VIF {"id": "e3abfed6-0093-41f6-95f6-e5c9c0b94fc4", "address": "fa:16:3e:7d:dd:31", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3abfed6-00", "ovs_interfaceid": "e3abfed6-0093-41f6-95f6-e5c9c0b94fc4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.847 250273 DEBUG nova.network.os_vif_util [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:dd:31,bridge_name='br-int',has_traffic_filtering=True,id=e3abfed6-0093-41f6-95f6-e5c9c0b94fc4,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3abfed6-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.848 250273 DEBUG os_vif [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:dd:31,bridge_name='br-int',has_traffic_filtering=True,id=e3abfed6-0093-41f6-95f6-e5c9c0b94fc4,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3abfed6-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.850 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.851 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3abfed6-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.852 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.852 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.853 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.855 250273 INFO os_vif [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:dd:31,bridge_name='br-int',has_traffic_filtering=True,id=e3abfed6-0093-41f6-95f6-e5c9c0b94fc4,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3abfed6-00')#033[00m
Jan 23 05:18:29 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:29.887 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:dd:31 10.100.0.11'], port_security=['fa:16:3e:7d:dd:31 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '96b953f7-8990-4761-81e9-d93cee7240dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c59351a1b59c4cc9ad389dff900935f2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd59f1dd0-018a-40d5-b9a0-54c6c1f9d925', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c808b115-ccf1-41c4-acea-daabae8abf5b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=e3abfed6-0093-41f6-95f6-e5c9c0b94fc4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:18:29 np0005593232 podman[348279]: 2026-01-23 10:18:29.933224201 +0000 UTC m=+0.065654616 container remove 2361b5712333a617f56a618ec75602795a4e32cd2d6aa64b57f151aaeec99367 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:18:29 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:29.941 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e46569d2-082e-402e-b74f-32173bf2ceac]: (4, ('Fri Jan 23 10:18:29 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 (2361b5712333a617f56a618ec75602795a4e32cd2d6aa64b57f151aaeec99367)\n2361b5712333a617f56a618ec75602795a4e32cd2d6aa64b57f151aaeec99367\nFri Jan 23 10:18:29 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 (2361b5712333a617f56a618ec75602795a4e32cd2d6aa64b57f151aaeec99367)\n2361b5712333a617f56a618ec75602795a4e32cd2d6aa64b57f151aaeec99367\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:29 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:29.943 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e2086297-64e2-45d2-81fd-a4eb9fe0fca9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:29 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:29.944 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43bdb40a-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.946 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:29 np0005593232 kernel: tap43bdb40a-e0: left promiscuous mode
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.948 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:29 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:29.951 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8236f86c-4264-4f04-9ca2-78e63eb8ff72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:29 np0005593232 nova_compute[250269]: 2026-01-23 10:18:29.969 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:29 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:29.977 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a9a8b94d-6a29-45c5-ba46-6ffa9e3a9b82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:29 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:29.978 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[de5e8396-258d-4101-89ce-d9ce8ac7b413]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:29 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:29.994 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[45b85ffb-1000-46c1-b6b8-ff492847dfd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 741435, 'reachable_time': 35402, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348334, 'error': None, 'target': 'ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:29 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:29.997 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:18:29 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:29.998 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[33c7ac8b-8db8-466a-b19a-18783e73158e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:29 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:29.999 161902 INFO neutron.agent.ovn.metadata.agent [-] Port e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 in datapath 43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 unbound from our chassis#033[00m
Jan 23 05:18:29 np0005593232 systemd[1]: run-netns-ovnmeta\x2d43bdb40a\x2deff5\x2d45cd\x2d9cb3\x2dcfdf465ad1f7.mount: Deactivated successfully.
Jan 23 05:18:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:30.001 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:18:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:30.003 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5de3de5e-e64f-4c89-b5d3-0c3545aab4da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:30.003 161902 INFO neutron.agent.ovn.metadata.agent [-] Port e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 in datapath 43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 unbound from our chassis#033[00m
Jan 23 05:18:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:30.005 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:18:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:30.006 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2aad5ce1-a414-488a-a3e6-e6ef982208f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:30 np0005593232 podman[348322]: 2026-01-23 10:18:30.020109759 +0000 UTC m=+0.052336087 container create 5a8721b31950e6c7d9c3aed49f68addd5e824a65189d432aea1d442a635b09a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_dijkstra, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Jan 23 05:18:30 np0005593232 systemd[1]: Started libpod-conmon-5a8721b31950e6c7d9c3aed49f68addd5e824a65189d432aea1d442a635b09a4.scope.
Jan 23 05:18:30 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:18:30 np0005593232 podman[348322]: 2026-01-23 10:18:29.999380631 +0000 UTC m=+0.031607019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:18:30 np0005593232 podman[348322]: 2026-01-23 10:18:30.10494532 +0000 UTC m=+0.137171708 container init 5a8721b31950e6c7d9c3aed49f68addd5e824a65189d432aea1d442a635b09a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:18:30 np0005593232 podman[348322]: 2026-01-23 10:18:30.111474155 +0000 UTC m=+0.143700503 container start 5a8721b31950e6c7d9c3aed49f68addd5e824a65189d432aea1d442a635b09a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_dijkstra, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 05:18:30 np0005593232 podman[348322]: 2026-01-23 10:18:30.115412267 +0000 UTC m=+0.147638645 container attach 5a8721b31950e6c7d9c3aed49f68addd5e824a65189d432aea1d442a635b09a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_dijkstra, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 23 05:18:30 np0005593232 cool_dijkstra[348342]: 167 167
Jan 23 05:18:30 np0005593232 systemd[1]: libpod-5a8721b31950e6c7d9c3aed49f68addd5e824a65189d432aea1d442a635b09a4.scope: Deactivated successfully.
Jan 23 05:18:30 np0005593232 podman[348322]: 2026-01-23 10:18:30.117164227 +0000 UTC m=+0.149390555 container died 5a8721b31950e6c7d9c3aed49f68addd5e824a65189d432aea1d442a635b09a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 05:18:30 np0005593232 podman[348322]: 2026-01-23 10:18:30.146577123 +0000 UTC m=+0.178803441 container remove 5a8721b31950e6c7d9c3aed49f68addd5e824a65189d432aea1d442a635b09a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_dijkstra, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:18:30 np0005593232 systemd[1]: libpod-conmon-5a8721b31950e6c7d9c3aed49f68addd5e824a65189d432aea1d442a635b09a4.scope: Deactivated successfully.
Jan 23 05:18:30 np0005593232 nova_compute[250269]: 2026-01-23 10:18:30.244 250273 INFO nova.virt.libvirt.driver [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Deleting instance files /var/lib/nova/instances/96b953f7-8990-4761-81e9-d93cee7240dc_del#033[00m
Jan 23 05:18:30 np0005593232 nova_compute[250269]: 2026-01-23 10:18:30.245 250273 INFO nova.virt.libvirt.driver [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Deletion of /var/lib/nova/instances/96b953f7-8990-4761-81e9-d93cee7240dc_del complete#033[00m
Jan 23 05:18:30 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b1101c5fcae94da45de1fee3061b9c77889d1083c070c940ccbc8225d94cd2f4-merged.mount: Deactivated successfully.
Jan 23 05:18:30 np0005593232 podman[348366]: 2026-01-23 10:18:30.322254964 +0000 UTC m=+0.042751505 container create 90f679ccf930130bed2b6a4f33580297951d757e195bca44bfb521f812498d9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:18:30 np0005593232 systemd[1]: Started libpod-conmon-90f679ccf930130bed2b6a4f33580297951d757e195bca44bfb521f812498d9b.scope.
Jan 23 05:18:30 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:18:30 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/361bf9a911670d1d2e0dd1a025f53e779c73c3c5aa51a26037c68de62ab1e627/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:18:30 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/361bf9a911670d1d2e0dd1a025f53e779c73c3c5aa51a26037c68de62ab1e627/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:18:30 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/361bf9a911670d1d2e0dd1a025f53e779c73c3c5aa51a26037c68de62ab1e627/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:18:30 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/361bf9a911670d1d2e0dd1a025f53e779c73c3c5aa51a26037c68de62ab1e627/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:18:30 np0005593232 podman[348366]: 2026-01-23 10:18:30.303189513 +0000 UTC m=+0.023686104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:18:30 np0005593232 podman[348366]: 2026-01-23 10:18:30.400780096 +0000 UTC m=+0.121276667 container init 90f679ccf930130bed2b6a4f33580297951d757e195bca44bfb521f812498d9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_dewdney, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 05:18:30 np0005593232 podman[348366]: 2026-01-23 10:18:30.414950238 +0000 UTC m=+0.135446779 container start 90f679ccf930130bed2b6a4f33580297951d757e195bca44bfb521f812498d9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:18:30 np0005593232 podman[348366]: 2026-01-23 10:18:30.419138467 +0000 UTC m=+0.139635048 container attach 90f679ccf930130bed2b6a4f33580297951d757e195bca44bfb521f812498d9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_dewdney, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:18:30 np0005593232 nova_compute[250269]: 2026-01-23 10:18:30.432 250273 INFO nova.compute.manager [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:18:30 np0005593232 nova_compute[250269]: 2026-01-23 10:18:30.434 250273 DEBUG oslo.service.loopingcall [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:18:30 np0005593232 nova_compute[250269]: 2026-01-23 10:18:30.434 250273 DEBUG nova.compute.manager [-] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:18:30 np0005593232 nova_compute[250269]: 2026-01-23 10:18:30.434 250273 DEBUG nova.network.neutron [-] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:18:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:18:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:30.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:18:31 np0005593232 goofy_dewdney[348382]: {
Jan 23 05:18:31 np0005593232 goofy_dewdney[348382]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:18:31 np0005593232 goofy_dewdney[348382]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:18:31 np0005593232 goofy_dewdney[348382]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:18:31 np0005593232 goofy_dewdney[348382]:        "osd_id": 0,
Jan 23 05:18:31 np0005593232 goofy_dewdney[348382]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:18:31 np0005593232 goofy_dewdney[348382]:        "type": "bluestore"
Jan 23 05:18:31 np0005593232 goofy_dewdney[348382]:    }
Jan 23 05:18:31 np0005593232 goofy_dewdney[348382]: }
Jan 23 05:18:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:18:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:31.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:18:31 np0005593232 systemd[1]: libpod-90f679ccf930130bed2b6a4f33580297951d757e195bca44bfb521f812498d9b.scope: Deactivated successfully.
Jan 23 05:18:31 np0005593232 podman[348366]: 2026-01-23 10:18:31.27195942 +0000 UTC m=+0.992456011 container died 90f679ccf930130bed2b6a4f33580297951d757e195bca44bfb521f812498d9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_dewdney, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 05:18:31 np0005593232 systemd[1]: var-lib-containers-storage-overlay-361bf9a911670d1d2e0dd1a025f53e779c73c3c5aa51a26037c68de62ab1e627-merged.mount: Deactivated successfully.
Jan 23 05:18:31 np0005593232 podman[348366]: 2026-01-23 10:18:31.3370547 +0000 UTC m=+1.057551241 container remove 90f679ccf930130bed2b6a4f33580297951d757e195bca44bfb521f812498d9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:18:31 np0005593232 systemd[1]: libpod-conmon-90f679ccf930130bed2b6a4f33580297951d757e195bca44bfb521f812498d9b.scope: Deactivated successfully.
Jan 23 05:18:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:18:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:18:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:18:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:18:31 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 69b3310c-90e8-4e0e-926b-e7a026ae46a3 does not exist
Jan 23 05:18:31 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 0b36cb71-d13b-4870-af92-0d5460c6b12e does not exist
Jan 23 05:18:31 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 79f92a22-3724-46ff-9d89-cb24da504f63 does not exist
Jan 23 05:18:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2667: 321 pgs: 321 active+clean; 505 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 87 op/s
Jan 23 05:18:31 np0005593232 nova_compute[250269]: 2026-01-23 10:18:31.553 250273 DEBUG nova.compute.manager [req-7a00c5a2-fa80-4edb-b664-34bc02bb3609 req-4fb6aeec-8c0e-4e7e-8ceb-bad053094361 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Received event network-vif-unplugged-e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:18:31 np0005593232 nova_compute[250269]: 2026-01-23 10:18:31.554 250273 DEBUG oslo_concurrency.lockutils [req-7a00c5a2-fa80-4edb-b664-34bc02bb3609 req-4fb6aeec-8c0e-4e7e-8ceb-bad053094361 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "96b953f7-8990-4761-81e9-d93cee7240dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:31 np0005593232 nova_compute[250269]: 2026-01-23 10:18:31.555 250273 DEBUG oslo_concurrency.lockutils [req-7a00c5a2-fa80-4edb-b664-34bc02bb3609 req-4fb6aeec-8c0e-4e7e-8ceb-bad053094361 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "96b953f7-8990-4761-81e9-d93cee7240dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:31 np0005593232 nova_compute[250269]: 2026-01-23 10:18:31.555 250273 DEBUG oslo_concurrency.lockutils [req-7a00c5a2-fa80-4edb-b664-34bc02bb3609 req-4fb6aeec-8c0e-4e7e-8ceb-bad053094361 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "96b953f7-8990-4761-81e9-d93cee7240dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:31 np0005593232 nova_compute[250269]: 2026-01-23 10:18:31.555 250273 DEBUG nova.compute.manager [req-7a00c5a2-fa80-4edb-b664-34bc02bb3609 req-4fb6aeec-8c0e-4e7e-8ceb-bad053094361 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] No waiting events found dispatching network-vif-unplugged-e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:18:31 np0005593232 nova_compute[250269]: 2026-01-23 10:18:31.556 250273 DEBUG nova.compute.manager [req-7a00c5a2-fa80-4edb-b664-34bc02bb3609 req-4fb6aeec-8c0e-4e7e-8ceb-bad053094361 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Received event network-vif-unplugged-e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:18:31 np0005593232 nova_compute[250269]: 2026-01-23 10:18:31.556 250273 DEBUG nova.compute.manager [req-7a00c5a2-fa80-4edb-b664-34bc02bb3609 req-4fb6aeec-8c0e-4e7e-8ceb-bad053094361 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Received event network-vif-plugged-e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:18:31 np0005593232 nova_compute[250269]: 2026-01-23 10:18:31.556 250273 DEBUG oslo_concurrency.lockutils [req-7a00c5a2-fa80-4edb-b664-34bc02bb3609 req-4fb6aeec-8c0e-4e7e-8ceb-bad053094361 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "96b953f7-8990-4761-81e9-d93cee7240dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:31 np0005593232 nova_compute[250269]: 2026-01-23 10:18:31.556 250273 DEBUG oslo_concurrency.lockutils [req-7a00c5a2-fa80-4edb-b664-34bc02bb3609 req-4fb6aeec-8c0e-4e7e-8ceb-bad053094361 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "96b953f7-8990-4761-81e9-d93cee7240dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:31 np0005593232 nova_compute[250269]: 2026-01-23 10:18:31.556 250273 DEBUG oslo_concurrency.lockutils [req-7a00c5a2-fa80-4edb-b664-34bc02bb3609 req-4fb6aeec-8c0e-4e7e-8ceb-bad053094361 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "96b953f7-8990-4761-81e9-d93cee7240dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:31 np0005593232 nova_compute[250269]: 2026-01-23 10:18:31.557 250273 DEBUG nova.compute.manager [req-7a00c5a2-fa80-4edb-b664-34bc02bb3609 req-4fb6aeec-8c0e-4e7e-8ceb-bad053094361 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] No waiting events found dispatching network-vif-plugged-e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:18:31 np0005593232 nova_compute[250269]: 2026-01-23 10:18:31.557 250273 WARNING nova.compute.manager [req-7a00c5a2-fa80-4edb-b664-34bc02bb3609 req-4fb6aeec-8c0e-4e7e-8ceb-bad053094361 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Received unexpected event network-vif-plugged-e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:18:31 np0005593232 nova_compute[250269]: 2026-01-23 10:18:31.557 250273 DEBUG nova.compute.manager [req-7a00c5a2-fa80-4edb-b664-34bc02bb3609 req-4fb6aeec-8c0e-4e7e-8ceb-bad053094361 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Received event network-vif-plugged-e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:18:31 np0005593232 nova_compute[250269]: 2026-01-23 10:18:31.557 250273 DEBUG oslo_concurrency.lockutils [req-7a00c5a2-fa80-4edb-b664-34bc02bb3609 req-4fb6aeec-8c0e-4e7e-8ceb-bad053094361 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "96b953f7-8990-4761-81e9-d93cee7240dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:31 np0005593232 nova_compute[250269]: 2026-01-23 10:18:31.557 250273 DEBUG oslo_concurrency.lockutils [req-7a00c5a2-fa80-4edb-b664-34bc02bb3609 req-4fb6aeec-8c0e-4e7e-8ceb-bad053094361 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "96b953f7-8990-4761-81e9-d93cee7240dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:31 np0005593232 nova_compute[250269]: 2026-01-23 10:18:31.557 250273 DEBUG oslo_concurrency.lockutils [req-7a00c5a2-fa80-4edb-b664-34bc02bb3609 req-4fb6aeec-8c0e-4e7e-8ceb-bad053094361 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "96b953f7-8990-4761-81e9-d93cee7240dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:31 np0005593232 nova_compute[250269]: 2026-01-23 10:18:31.558 250273 DEBUG nova.compute.manager [req-7a00c5a2-fa80-4edb-b664-34bc02bb3609 req-4fb6aeec-8c0e-4e7e-8ceb-bad053094361 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] No waiting events found dispatching network-vif-plugged-e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:18:31 np0005593232 nova_compute[250269]: 2026-01-23 10:18:31.558 250273 WARNING nova.compute.manager [req-7a00c5a2-fa80-4edb-b664-34bc02bb3609 req-4fb6aeec-8c0e-4e7e-8ceb-bad053094361 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Received unexpected event network-vif-plugged-e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:18:31 np0005593232 nova_compute[250269]: 2026-01-23 10:18:31.559 250273 DEBUG nova.network.neutron [-] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:18:31 np0005593232 nova_compute[250269]: 2026-01-23 10:18:31.579 250273 INFO nova.compute.manager [-] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Took 1.14 seconds to deallocate network for instance.#033[00m
Jan 23 05:18:31 np0005593232 nova_compute[250269]: 2026-01-23 10:18:31.654 250273 DEBUG oslo_concurrency.lockutils [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:31 np0005593232 nova_compute[250269]: 2026-01-23 10:18:31.655 250273 DEBUG oslo_concurrency.lockutils [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:31 np0005593232 nova_compute[250269]: 2026-01-23 10:18:31.755 250273 DEBUG nova.compute.manager [req-53cd9ab1-1182-41f3-873f-a581725fa3a3 req-1f01e7c3-a25b-46dd-9cca-c81ec688367a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Received event network-vif-deleted-e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:18:31 np0005593232 nova_compute[250269]: 2026-01-23 10:18:31.769 250273 DEBUG oslo_concurrency.processutils [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:18:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:18:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:18:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4155880973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:18:32 np0005593232 nova_compute[250269]: 2026-01-23 10:18:32.218 250273 DEBUG oslo_concurrency.processutils [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:18:32 np0005593232 nova_compute[250269]: 2026-01-23 10:18:32.228 250273 DEBUG nova.compute.provider_tree [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:18:32 np0005593232 nova_compute[250269]: 2026-01-23 10:18:32.264 250273 DEBUG nova.scheduler.client.report [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:18:32 np0005593232 nova_compute[250269]: 2026-01-23 10:18:32.302 250273 DEBUG oslo_concurrency.lockutils [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:32 np0005593232 nova_compute[250269]: 2026-01-23 10:18:32.358 250273 INFO nova.scheduler.client.report [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Deleted allocations for instance 96b953f7-8990-4761-81e9-d93cee7240dc#033[00m
Jan 23 05:18:32 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:18:32 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:18:32 np0005593232 nova_compute[250269]: 2026-01-23 10:18:32.468 250273 DEBUG oslo_concurrency.lockutils [None req-9179f287-9716-4741-9eb2-fae23a51ea54 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "96b953f7-8990-4761-81e9-d93cee7240dc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.903s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:32.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:32 np0005593232 nova_compute[250269]: 2026-01-23 10:18:32.847 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769163497.8461905, ccd07f55-529f-4dbb-989c-2cdbdd393a0b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:18:32 np0005593232 nova_compute[250269]: 2026-01-23 10:18:32.848 250273 INFO nova.compute.manager [-] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:18:32 np0005593232 nova_compute[250269]: 2026-01-23 10:18:32.869 250273 DEBUG nova.compute.manager [None req-dd62d69d-82c4-46d9-9bae-42fac67afc7d - - - - - -] [instance: ccd07f55-529f-4dbb-989c-2cdbdd393a0b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:18:33 np0005593232 nova_compute[250269]: 2026-01-23 10:18:33.074 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:33.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2668: 321 pgs: 321 active+clean; 482 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 31 KiB/s wr, 110 op/s
Jan 23 05:18:33 np0005593232 nova_compute[250269]: 2026-01-23 10:18:33.714 250273 DEBUG nova.compute.manager [req-98aa4314-2e55-44ca-b801-969a42033041 req-78bf65be-7d2e-44fd-979d-30c0c1856515 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Received event network-vif-plugged-e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:18:33 np0005593232 nova_compute[250269]: 2026-01-23 10:18:33.715 250273 DEBUG oslo_concurrency.lockutils [req-98aa4314-2e55-44ca-b801-969a42033041 req-78bf65be-7d2e-44fd-979d-30c0c1856515 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "96b953f7-8990-4761-81e9-d93cee7240dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:33 np0005593232 nova_compute[250269]: 2026-01-23 10:18:33.716 250273 DEBUG oslo_concurrency.lockutils [req-98aa4314-2e55-44ca-b801-969a42033041 req-78bf65be-7d2e-44fd-979d-30c0c1856515 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "96b953f7-8990-4761-81e9-d93cee7240dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:33 np0005593232 nova_compute[250269]: 2026-01-23 10:18:33.717 250273 DEBUG oslo_concurrency.lockutils [req-98aa4314-2e55-44ca-b801-969a42033041 req-78bf65be-7d2e-44fd-979d-30c0c1856515 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "96b953f7-8990-4761-81e9-d93cee7240dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:33 np0005593232 nova_compute[250269]: 2026-01-23 10:18:33.717 250273 DEBUG nova.compute.manager [req-98aa4314-2e55-44ca-b801-969a42033041 req-78bf65be-7d2e-44fd-979d-30c0c1856515 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] No waiting events found dispatching network-vif-plugged-e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:18:33 np0005593232 nova_compute[250269]: 2026-01-23 10:18:33.718 250273 WARNING nova.compute.manager [req-98aa4314-2e55-44ca-b801-969a42033041 req-78bf65be-7d2e-44fd-979d-30c0c1856515 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Received unexpected event network-vif-plugged-e3abfed6-0093-41f6-95f6-e5c9c0b94fc4 for instance with vm_state deleted and task_state None.#033[00m
Jan 23 05:18:34 np0005593232 podman[348537]: 2026-01-23 10:18:34.552961067 +0000 UTC m=+0.207760374 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 23 05:18:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:34.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:34 np0005593232 nova_compute[250269]: 2026-01-23 10:18:34.853 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:35.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2669: 321 pgs: 321 active+clean; 458 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 8.2 KiB/s wr, 121 op/s
Jan 23 05:18:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:18:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:36.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:18:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:18:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:37.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:18:37
Jan 23 05:18:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:18:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:18:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'vms', 'volumes']
Jan 23 05:18:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:18:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2670: 321 pgs: 321 active+clean; 399 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 998 KiB/s rd, 9.2 KiB/s wr, 104 op/s
Jan 23 05:18:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:18:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:18:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:18:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:18:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:18:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:18:38 np0005593232 nova_compute[250269]: 2026-01-23 10:18:38.076 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:18:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:18:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:18:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:18:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:18:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:18:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:18:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:18:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:18:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:18:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:38.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:39 np0005593232 nova_compute[250269]: 2026-01-23 10:18:39.194 250273 DEBUG oslo_concurrency.lockutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "e401e666-cd99-4f22-882c-fe5130ab4471" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:39 np0005593232 nova_compute[250269]: 2026-01-23 10:18:39.194 250273 DEBUG oslo_concurrency.lockutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "e401e666-cd99-4f22-882c-fe5130ab4471" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:39 np0005593232 nova_compute[250269]: 2026-01-23 10:18:39.218 250273 DEBUG nova.compute.manager [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:18:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:39.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:39 np0005593232 nova_compute[250269]: 2026-01-23 10:18:39.310 250273 DEBUG oslo_concurrency.lockutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:39 np0005593232 nova_compute[250269]: 2026-01-23 10:18:39.311 250273 DEBUG oslo_concurrency.lockutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:39 np0005593232 nova_compute[250269]: 2026-01-23 10:18:39.318 250273 DEBUG nova.virt.hardware [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:18:39 np0005593232 nova_compute[250269]: 2026-01-23 10:18:39.319 250273 INFO nova.compute.claims [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:18:39 np0005593232 podman[348565]: 2026-01-23 10:18:39.416354608 +0000 UTC m=+0.071408350 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 05:18:39 np0005593232 nova_compute[250269]: 2026-01-23 10:18:39.492 250273 DEBUG oslo_concurrency.processutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:18:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2671: 321 pgs: 321 active+clean; 379 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 5.5 KiB/s wr, 83 op/s
Jan 23 05:18:39 np0005593232 nova_compute[250269]: 2026-01-23 10:18:39.857 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:18:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4175613630' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:18:39 np0005593232 nova_compute[250269]: 2026-01-23 10:18:39.930 250273 DEBUG oslo_concurrency.processutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:18:39 np0005593232 nova_compute[250269]: 2026-01-23 10:18:39.940 250273 DEBUG nova.compute.provider_tree [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:18:39 np0005593232 nova_compute[250269]: 2026-01-23 10:18:39.966 250273 DEBUG nova.scheduler.client.report [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:18:40 np0005593232 nova_compute[250269]: 2026-01-23 10:18:40.003 250273 DEBUG oslo_concurrency.lockutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:40 np0005593232 nova_compute[250269]: 2026-01-23 10:18:40.004 250273 DEBUG nova.compute.manager [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:18:40 np0005593232 nova_compute[250269]: 2026-01-23 10:18:40.107 250273 DEBUG nova.compute.manager [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:18:40 np0005593232 nova_compute[250269]: 2026-01-23 10:18:40.107 250273 DEBUG nova.network.neutron [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:18:40 np0005593232 nova_compute[250269]: 2026-01-23 10:18:40.144 250273 INFO nova.virt.libvirt.driver [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:18:40 np0005593232 nova_compute[250269]: 2026-01-23 10:18:40.191 250273 DEBUG nova.compute.manager [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:18:40 np0005593232 nova_compute[250269]: 2026-01-23 10:18:40.469 250273 DEBUG nova.compute.manager [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 05:18:40 np0005593232 nova_compute[250269]: 2026-01-23 10:18:40.471 250273 DEBUG nova.virt.libvirt.driver [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:18:40 np0005593232 nova_compute[250269]: 2026-01-23 10:18:40.472 250273 INFO nova.virt.libvirt.driver [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Creating image(s)#033[00m
Jan 23 05:18:40 np0005593232 nova_compute[250269]: 2026-01-23 10:18:40.515 250273 DEBUG nova.storage.rbd_utils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image e401e666-cd99-4f22-882c-fe5130ab4471_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:18:40 np0005593232 nova_compute[250269]: 2026-01-23 10:18:40.556 250273 DEBUG nova.storage.rbd_utils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image e401e666-cd99-4f22-882c-fe5130ab4471_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:18:40 np0005593232 nova_compute[250269]: 2026-01-23 10:18:40.589 250273 DEBUG nova.storage.rbd_utils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image e401e666-cd99-4f22-882c-fe5130ab4471_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:18:40 np0005593232 nova_compute[250269]: 2026-01-23 10:18:40.593 250273 DEBUG oslo_concurrency.processutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:18:40 np0005593232 nova_compute[250269]: 2026-01-23 10:18:40.642 250273 DEBUG nova.policy [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ec99ae7c69d0438280441e0434374cbf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c59351a1b59c4cc9ad389dff900935f2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:18:40 np0005593232 nova_compute[250269]: 2026-01-23 10:18:40.692 250273 DEBUG oslo_concurrency.processutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:18:40 np0005593232 nova_compute[250269]: 2026-01-23 10:18:40.693 250273 DEBUG oslo_concurrency.lockutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:40 np0005593232 nova_compute[250269]: 2026-01-23 10:18:40.694 250273 DEBUG oslo_concurrency.lockutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:40 np0005593232 nova_compute[250269]: 2026-01-23 10:18:40.695 250273 DEBUG oslo_concurrency.lockutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:18:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:40.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:18:40 np0005593232 nova_compute[250269]: 2026-01-23 10:18:40.738 250273 DEBUG nova.storage.rbd_utils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image e401e666-cd99-4f22-882c-fe5130ab4471_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:18:40 np0005593232 nova_compute[250269]: 2026-01-23 10:18:40.743 250273 DEBUG oslo_concurrency.processutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 e401e666-cd99-4f22-882c-fe5130ab4471_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:18:40 np0005593232 nova_compute[250269]: 2026-01-23 10:18:40.902 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:18:40 np0005593232 nova_compute[250269]: 2026-01-23 10:18:40.903 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:18:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:18:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3303662156' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:18:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:18:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3303662156' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:18:41 np0005593232 nova_compute[250269]: 2026-01-23 10:18:41.126 250273 DEBUG oslo_concurrency.processutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 e401e666-cd99-4f22-882c-fe5130ab4471_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.382s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:18:41 np0005593232 nova_compute[250269]: 2026-01-23 10:18:41.221 250273 DEBUG nova.storage.rbd_utils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] resizing rbd image e401e666-cd99-4f22-882c-fe5130ab4471_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 05:18:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:18:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:41.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:18:41 np0005593232 nova_compute[250269]: 2026-01-23 10:18:41.351 250273 DEBUG nova.objects.instance [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lazy-loading 'migration_context' on Instance uuid e401e666-cd99-4f22-882c-fe5130ab4471 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:18:41 np0005593232 nova_compute[250269]: 2026-01-23 10:18:41.369 250273 DEBUG nova.virt.libvirt.driver [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 05:18:41 np0005593232 nova_compute[250269]: 2026-01-23 10:18:41.369 250273 DEBUG nova.virt.libvirt.driver [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Ensure instance console log exists: /var/lib/nova/instances/e401e666-cd99-4f22-882c-fe5130ab4471/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:18:41 np0005593232 nova_compute[250269]: 2026-01-23 10:18:41.370 250273 DEBUG oslo_concurrency.lockutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:41 np0005593232 nova_compute[250269]: 2026-01-23 10:18:41.370 250273 DEBUG oslo_concurrency.lockutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:41 np0005593232 nova_compute[250269]: 2026-01-23 10:18:41.370 250273 DEBUG oslo_concurrency.lockutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2672: 321 pgs: 321 active+clean; 379 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 59 KiB/s rd, 5.5 KiB/s wr, 82 op/s
Jan 23 05:18:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:18:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3616452496' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:18:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:18:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3616452496' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:18:41 np0005593232 nova_compute[250269]: 2026-01-23 10:18:41.747 250273 DEBUG nova.network.neutron [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Successfully created port: f2451ca0-b377-4357-87c4-3e92af9c4af9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:18:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:18:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:42.632 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:42.633 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:42.633 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:42.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:43 np0005593232 nova_compute[250269]: 2026-01-23 10:18:43.123 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:43.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:43 np0005593232 nova_compute[250269]: 2026-01-23 10:18:43.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:18:43 np0005593232 nova_compute[250269]: 2026-01-23 10:18:43.333 250273 DEBUG nova.network.neutron [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Successfully updated port: f2451ca0-b377-4357-87c4-3e92af9c4af9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:18:43 np0005593232 nova_compute[250269]: 2026-01-23 10:18:43.354 250273 DEBUG oslo_concurrency.lockutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "refresh_cache-e401e666-cd99-4f22-882c-fe5130ab4471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:18:43 np0005593232 nova_compute[250269]: 2026-01-23 10:18:43.355 250273 DEBUG oslo_concurrency.lockutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquired lock "refresh_cache-e401e666-cd99-4f22-882c-fe5130ab4471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:18:43 np0005593232 nova_compute[250269]: 2026-01-23 10:18:43.355 250273 DEBUG nova.network.neutron [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:18:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2673: 321 pgs: 321 active+clean; 241 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 102 KiB/s rd, 1.1 MiB/s wr, 145 op/s
Jan 23 05:18:43 np0005593232 nova_compute[250269]: 2026-01-23 10:18:43.519 250273 DEBUG nova.compute.manager [req-593ef68c-5b84-49a4-94a3-21623d413b56 req-1243876b-fe56-4043-9ca4-3b3f7b172171 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Received event network-changed-f2451ca0-b377-4357-87c4-3e92af9c4af9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:18:43 np0005593232 nova_compute[250269]: 2026-01-23 10:18:43.520 250273 DEBUG nova.compute.manager [req-593ef68c-5b84-49a4-94a3-21623d413b56 req-1243876b-fe56-4043-9ca4-3b3f7b172171 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Refreshing instance network info cache due to event network-changed-f2451ca0-b377-4357-87c4-3e92af9c4af9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:18:43 np0005593232 nova_compute[250269]: 2026-01-23 10:18:43.520 250273 DEBUG oslo_concurrency.lockutils [req-593ef68c-5b84-49a4-94a3-21623d413b56 req-1243876b-fe56-4043-9ca4-3b3f7b172171 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-e401e666-cd99-4f22-882c-fe5130ab4471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:18:43 np0005593232 nova_compute[250269]: 2026-01-23 10:18:43.708 250273 DEBUG nova.network.neutron [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:18:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:44.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:44 np0005593232 nova_compute[250269]: 2026-01-23 10:18:44.813 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769163509.8106875, 96b953f7-8990-4761-81e9-d93cee7240dc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:18:44 np0005593232 nova_compute[250269]: 2026-01-23 10:18:44.814 250273 INFO nova.compute.manager [-] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:18:44 np0005593232 nova_compute[250269]: 2026-01-23 10:18:44.861 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:44 np0005593232 nova_compute[250269]: 2026-01-23 10:18:44.865 250273 DEBUG nova.compute.manager [None req-f0f5d4bb-79f7-4dd7-bc0f-c4a4f7b9c77e - - - - - -] [instance: 96b953f7-8990-4761-81e9-d93cee7240dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:18:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Jan 23 05:18:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:18:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:45.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:18:45 np0005593232 nova_compute[250269]: 2026-01-23 10:18:45.503 250273 DEBUG nova.network.neutron [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Updating instance_info_cache with network_info: [{"id": "f2451ca0-b377-4357-87c4-3e92af9c4af9", "address": "fa:16:3e:98:0f:4e", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2451ca0-b3", "ovs_interfaceid": "f2451ca0-b377-4357-87c4-3e92af9c4af9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:18:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2674: 321 pgs: 321 active+clean; 187 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 92 KiB/s rd, 1.8 MiB/s wr, 137 op/s
Jan 23 05:18:45 np0005593232 nova_compute[250269]: 2026-01-23 10:18:45.542 250273 DEBUG oslo_concurrency.lockutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Releasing lock "refresh_cache-e401e666-cd99-4f22-882c-fe5130ab4471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:18:45 np0005593232 nova_compute[250269]: 2026-01-23 10:18:45.543 250273 DEBUG nova.compute.manager [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Instance network_info: |[{"id": "f2451ca0-b377-4357-87c4-3e92af9c4af9", "address": "fa:16:3e:98:0f:4e", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2451ca0-b3", "ovs_interfaceid": "f2451ca0-b377-4357-87c4-3e92af9c4af9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:18:45 np0005593232 nova_compute[250269]: 2026-01-23 10:18:45.543 250273 DEBUG oslo_concurrency.lockutils [req-593ef68c-5b84-49a4-94a3-21623d413b56 req-1243876b-fe56-4043-9ca4-3b3f7b172171 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-e401e666-cd99-4f22-882c-fe5130ab4471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:18:45 np0005593232 nova_compute[250269]: 2026-01-23 10:18:45.544 250273 DEBUG nova.network.neutron [req-593ef68c-5b84-49a4-94a3-21623d413b56 req-1243876b-fe56-4043-9ca4-3b3f7b172171 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Refreshing network info cache for port f2451ca0-b377-4357-87c4-3e92af9c4af9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:18:45 np0005593232 nova_compute[250269]: 2026-01-23 10:18:45.547 250273 DEBUG nova.virt.libvirt.driver [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Start _get_guest_xml network_info=[{"id": "f2451ca0-b377-4357-87c4-3e92af9c4af9", "address": "fa:16:3e:98:0f:4e", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2451ca0-b3", "ovs_interfaceid": "f2451ca0-b377-4357-87c4-3e92af9c4af9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:18:45 np0005593232 nova_compute[250269]: 2026-01-23 10:18:45.552 250273 WARNING nova.virt.libvirt.driver [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:18:45 np0005593232 nova_compute[250269]: 2026-01-23 10:18:45.559 250273 DEBUG nova.virt.libvirt.host [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:18:45 np0005593232 nova_compute[250269]: 2026-01-23 10:18:45.560 250273 DEBUG nova.virt.libvirt.host [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:18:45 np0005593232 nova_compute[250269]: 2026-01-23 10:18:45.566 250273 DEBUG nova.virt.libvirt.host [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:18:45 np0005593232 nova_compute[250269]: 2026-01-23 10:18:45.566 250273 DEBUG nova.virt.libvirt.host [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:18:45 np0005593232 nova_compute[250269]: 2026-01-23 10:18:45.568 250273 DEBUG nova.virt.libvirt.driver [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:18:45 np0005593232 nova_compute[250269]: 2026-01-23 10:18:45.568 250273 DEBUG nova.virt.hardware [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:18:45 np0005593232 nova_compute[250269]: 2026-01-23 10:18:45.569 250273 DEBUG nova.virt.hardware [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:18:45 np0005593232 nova_compute[250269]: 2026-01-23 10:18:45.569 250273 DEBUG nova.virt.hardware [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:18:45 np0005593232 nova_compute[250269]: 2026-01-23 10:18:45.569 250273 DEBUG nova.virt.hardware [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:18:45 np0005593232 nova_compute[250269]: 2026-01-23 10:18:45.569 250273 DEBUG nova.virt.hardware [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:18:45 np0005593232 nova_compute[250269]: 2026-01-23 10:18:45.570 250273 DEBUG nova.virt.hardware [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:18:45 np0005593232 nova_compute[250269]: 2026-01-23 10:18:45.570 250273 DEBUG nova.virt.hardware [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:18:45 np0005593232 nova_compute[250269]: 2026-01-23 10:18:45.570 250273 DEBUG nova.virt.hardware [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:18:45 np0005593232 nova_compute[250269]: 2026-01-23 10:18:45.571 250273 DEBUG nova.virt.hardware [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:18:45 np0005593232 nova_compute[250269]: 2026-01-23 10:18:45.571 250273 DEBUG nova.virt.hardware [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:18:45 np0005593232 nova_compute[250269]: 2026-01-23 10:18:45.571 250273 DEBUG nova.virt.hardware [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:18:45 np0005593232 nova_compute[250269]: 2026-01-23 10:18:45.574 250273 DEBUG oslo_concurrency.processutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:18:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Jan 23 05:18:45 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Jan 23 05:18:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:18:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2905993421' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:18:46 np0005593232 nova_compute[250269]: 2026-01-23 10:18:46.049 250273 DEBUG oslo_concurrency.processutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:18:46 np0005593232 nova_compute[250269]: 2026-01-23 10:18:46.077 250273 DEBUG nova.storage.rbd_utils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image e401e666-cd99-4f22-882c-fe5130ab4471_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:18:46 np0005593232 nova_compute[250269]: 2026-01-23 10:18:46.082 250273 DEBUG oslo_concurrency.processutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:18:46 np0005593232 nova_compute[250269]: 2026-01-23 10:18:46.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:18:46 np0005593232 nova_compute[250269]: 2026-01-23 10:18:46.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:18:46 np0005593232 nova_compute[250269]: 2026-01-23 10:18:46.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:18:46 np0005593232 nova_compute[250269]: 2026-01-23 10:18:46.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 23 05:18:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:18:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3118762230' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:18:46 np0005593232 nova_compute[250269]: 2026-01-23 10:18:46.572 250273 DEBUG oslo_concurrency.processutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:18:46 np0005593232 nova_compute[250269]: 2026-01-23 10:18:46.574 250273 DEBUG nova.virt.libvirt.vif [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:18:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-247034792',display_name='tempest-ServersTestJSON-server-247034792',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-247034792',id=148,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+YyteyDdyOuOjXPX1ByQY3uJqR9UTB66Nyph0Jb+GOWEVqPc3r3RrAzMewCHCtSWU0UY+8XoTz67A/gHf9c1r30V4NLdEGj9pn8g0jz3y+sdHA6v2H21ZNyJ94fvVYbw==',key_name='tempest-key-1590543153',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c59351a1b59c4cc9ad389dff900935f2',ramdisk_id='',reservation_id='r-im7ewgfl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1611255243',owner_user_name='tempest-ServersTestJSON-1611255243-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:18:40Z,user_data=None,user_id='ec99ae7c69d0438280441e0434374cbf',uuid=e401e666-cd99-4f22-882c-fe5130ab4471,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f2451ca0-b377-4357-87c4-3e92af9c4af9", "address": "fa:16:3e:98:0f:4e", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2451ca0-b3", "ovs_interfaceid": "f2451ca0-b377-4357-87c4-3e92af9c4af9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:18:46 np0005593232 nova_compute[250269]: 2026-01-23 10:18:46.575 250273 DEBUG nova.network.os_vif_util [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Converting VIF {"id": "f2451ca0-b377-4357-87c4-3e92af9c4af9", "address": "fa:16:3e:98:0f:4e", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2451ca0-b3", "ovs_interfaceid": "f2451ca0-b377-4357-87c4-3e92af9c4af9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:18:46 np0005593232 nova_compute[250269]: 2026-01-23 10:18:46.576 250273 DEBUG nova.network.os_vif_util [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:0f:4e,bridge_name='br-int',has_traffic_filtering=True,id=f2451ca0-b377-4357-87c4-3e92af9c4af9,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2451ca0-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:18:46 np0005593232 nova_compute[250269]: 2026-01-23 10:18:46.578 250273 DEBUG nova.objects.instance [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lazy-loading 'pci_devices' on Instance uuid e401e666-cd99-4f22-882c-fe5130ab4471 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:18:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:18:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:46.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:18:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:18:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:18:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:18:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:18:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:18:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0031566978410571375 of space, bias 1.0, pg target 0.9470093523171412 quantized to 32 (current 32)
Jan 23 05:18:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:18:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 05:18:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:18:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:18:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:18:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8563869695700725 quantized to 32 (current 32)
Jan 23 05:18:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:18:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 05:18:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:18:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:18:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:18:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 05:18:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:18:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 05:18:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:18:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:18:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:18:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 05:18:47 np0005593232 nova_compute[250269]: 2026-01-23 10:18:47.190 250273 DEBUG nova.virt.libvirt.driver [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:18:47 np0005593232 nova_compute[250269]:  <uuid>e401e666-cd99-4f22-882c-fe5130ab4471</uuid>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:  <name>instance-00000094</name>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServersTestJSON-server-247034792</nova:name>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:18:45</nova:creationTime>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:18:47 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:        <nova:user uuid="ec99ae7c69d0438280441e0434374cbf">tempest-ServersTestJSON-1611255243-project-member</nova:user>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:        <nova:project uuid="c59351a1b59c4cc9ad389dff900935f2">tempest-ServersTestJSON-1611255243</nova:project>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:        <nova:port uuid="f2451ca0-b377-4357-87c4-3e92af9c4af9">
Jan 23 05:18:47 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <entry name="serial">e401e666-cd99-4f22-882c-fe5130ab4471</entry>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <entry name="uuid">e401e666-cd99-4f22-882c-fe5130ab4471</entry>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/e401e666-cd99-4f22-882c-fe5130ab4471_disk">
Jan 23 05:18:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:18:47 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/e401e666-cd99-4f22-882c-fe5130ab4471_disk.config">
Jan 23 05:18:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:18:47 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:98:0f:4e"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <target dev="tapf2451ca0-b3"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/e401e666-cd99-4f22-882c-fe5130ab4471/console.log" append="off"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:18:47 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:18:47 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:18:47 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:18:47 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:18:47 np0005593232 nova_compute[250269]: 2026-01-23 10:18:47.191 250273 DEBUG nova.compute.manager [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Preparing to wait for external event network-vif-plugged-f2451ca0-b377-4357-87c4-3e92af9c4af9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:18:47 np0005593232 nova_compute[250269]: 2026-01-23 10:18:47.191 250273 DEBUG oslo_concurrency.lockutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "e401e666-cd99-4f22-882c-fe5130ab4471-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:47 np0005593232 nova_compute[250269]: 2026-01-23 10:18:47.192 250273 DEBUG oslo_concurrency.lockutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "e401e666-cd99-4f22-882c-fe5130ab4471-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:47 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 05:18:47 np0005593232 nova_compute[250269]: 2026-01-23 10:18:47.192 250273 DEBUG oslo_concurrency.lockutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "e401e666-cd99-4f22-882c-fe5130ab4471-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:47 np0005593232 nova_compute[250269]: 2026-01-23 10:18:47.193 250273 DEBUG nova.virt.libvirt.vif [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:18:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-247034792',display_name='tempest-ServersTestJSON-server-247034792',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-247034792',id=148,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+YyteyDdyOuOjXPX1ByQY3uJqR9UTB66Nyph0Jb+GOWEVqPc3r3RrAzMewCHCtSWU0UY+8XoTz67A/gHf9c1r30V4NLdEGj9pn8g0jz3y+sdHA6v2H21ZNyJ94fvVYbw==',key_name='tempest-key-1590543153',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c59351a1b59c4cc9ad389dff900935f2',ramdisk_id='',reservation_id='r-im7ewgfl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1611255243',owner_user_name='tempest-ServersTestJSON-1611255243-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:18:40Z,user_data=None,user_id='ec99ae7c69d0438280441e0434374cbf',uuid=e401e666-cd99-4f22-882c-fe5130ab4471,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f2451ca0-b377-4357-87c4-3e92af9c4af9", "address": "fa:16:3e:98:0f:4e", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2451ca0-b3", "ovs_interfaceid": "f2451ca0-b377-4357-87c4-3e92af9c4af9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:18:47 np0005593232 nova_compute[250269]: 2026-01-23 10:18:47.193 250273 DEBUG nova.network.os_vif_util [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Converting VIF {"id": "f2451ca0-b377-4357-87c4-3e92af9c4af9", "address": "fa:16:3e:98:0f:4e", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2451ca0-b3", "ovs_interfaceid": "f2451ca0-b377-4357-87c4-3e92af9c4af9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:18:47 np0005593232 nova_compute[250269]: 2026-01-23 10:18:47.194 250273 DEBUG nova.network.os_vif_util [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:0f:4e,bridge_name='br-int',has_traffic_filtering=True,id=f2451ca0-b377-4357-87c4-3e92af9c4af9,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2451ca0-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:18:47 np0005593232 nova_compute[250269]: 2026-01-23 10:18:47.194 250273 DEBUG os_vif [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:0f:4e,bridge_name='br-int',has_traffic_filtering=True,id=f2451ca0-b377-4357-87c4-3e92af9c4af9,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2451ca0-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:18:47 np0005593232 nova_compute[250269]: 2026-01-23 10:18:47.195 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:47 np0005593232 nova_compute[250269]: 2026-01-23 10:18:47.195 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:18:47 np0005593232 nova_compute[250269]: 2026-01-23 10:18:47.196 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:18:47 np0005593232 nova_compute[250269]: 2026-01-23 10:18:47.200 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:47 np0005593232 nova_compute[250269]: 2026-01-23 10:18:47.201 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2451ca0-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:18:47 np0005593232 nova_compute[250269]: 2026-01-23 10:18:47.202 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf2451ca0-b3, col_values=(('external_ids', {'iface-id': 'f2451ca0-b377-4357-87c4-3e92af9c4af9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:98:0f:4e', 'vm-uuid': 'e401e666-cd99-4f22-882c-fe5130ab4471'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:18:47 np0005593232 nova_compute[250269]: 2026-01-23 10:18:47.205 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:47 np0005593232 NetworkManager[49057]: <info>  [1769163527.2059] manager: (tapf2451ca0-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/270)
Jan 23 05:18:47 np0005593232 nova_compute[250269]: 2026-01-23 10:18:47.209 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:18:47 np0005593232 nova_compute[250269]: 2026-01-23 10:18:47.216 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:47 np0005593232 nova_compute[250269]: 2026-01-23 10:18:47.217 250273 INFO os_vif [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:0f:4e,bridge_name='br-int',has_traffic_filtering=True,id=f2451ca0-b377-4357-87c4-3e92af9c4af9,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2451ca0-b3')#033[00m
Jan 23 05:18:47 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 05:18:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:47.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:47 np0005593232 nova_compute[250269]: 2026-01-23 10:18:47.308 250273 DEBUG nova.virt.libvirt.driver [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:18:47 np0005593232 nova_compute[250269]: 2026-01-23 10:18:47.310 250273 DEBUG nova.virt.libvirt.driver [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:18:47 np0005593232 nova_compute[250269]: 2026-01-23 10:18:47.311 250273 DEBUG nova.virt.libvirt.driver [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] No VIF found with MAC fa:16:3e:98:0f:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:18:47 np0005593232 nova_compute[250269]: 2026-01-23 10:18:47.312 250273 INFO nova.virt.libvirt.driver [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Using config drive#033[00m
Jan 23 05:18:47 np0005593232 nova_compute[250269]: 2026-01-23 10:18:47.354 250273 DEBUG nova.storage.rbd_utils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image e401e666-cd99-4f22-882c-fe5130ab4471_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:18:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2676: 321 pgs: 321 active+clean; 179 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 87 KiB/s rd, 2.1 MiB/s wr, 129 op/s
Jan 23 05:18:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 05:18:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.0 total, 600.0 interval#012Cumulative writes: 13K writes, 59K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s#012Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1641 writes, 7728 keys, 1641 commit groups, 1.0 writes per commit group, ingest: 10.98 MB, 0.02 MB/s#012Interval WAL: 1641 writes, 1641 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     43.0      1.85              0.30        41    0.045       0      0       0.0       0.0#012  L6      1/0   10.36 MB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   4.8     84.2     71.4      5.39              1.26        40    0.135    264K    21K       0.0       0.0#012 Sum      1/0   10.36 MB   0.0      0.4     0.1      0.4       0.5      0.1       0.0   5.8     62.7     64.1      7.24              1.56        81    0.089    264K    21K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.2    103.5    104.8      0.80              0.32        14    0.057     61K   3590       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   0.0     84.2     71.4      5.39              1.26        40    0.135    264K    21K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     43.1      1.84              0.30        40    0.046       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4800.0 total, 600.0 interval#012Flush(GB): cumulative 0.078, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.45 GB write, 0.10 MB/s write, 0.44 GB read, 0.09 MB/s read, 7.2 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a844231f0#2 capacity: 304.00 MB usage: 49.25 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000409 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(2854,47.30 MB,15.5587%) FilterBlock(82,744.30 KB,0.239096%) IndexBlock(82,1.22 MB,0.401472%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 23 05:18:48 np0005593232 nova_compute[250269]: 2026-01-23 10:18:48.202 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:48 np0005593232 nova_compute[250269]: 2026-01-23 10:18:48.319 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:18:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:48.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:48 np0005593232 nova_compute[250269]: 2026-01-23 10:18:48.815 250273 INFO nova.virt.libvirt.driver [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Creating config drive at /var/lib/nova/instances/e401e666-cd99-4f22-882c-fe5130ab4471/disk.config#033[00m
Jan 23 05:18:48 np0005593232 nova_compute[250269]: 2026-01-23 10:18:48.821 250273 DEBUG oslo_concurrency.processutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e401e666-cd99-4f22-882c-fe5130ab4471/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv914rauo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:18:48 np0005593232 nova_compute[250269]: 2026-01-23 10:18:48.950 250273 DEBUG nova.network.neutron [req-593ef68c-5b84-49a4-94a3-21623d413b56 req-1243876b-fe56-4043-9ca4-3b3f7b172171 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Updated VIF entry in instance network info cache for port f2451ca0-b377-4357-87c4-3e92af9c4af9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:18:48 np0005593232 nova_compute[250269]: 2026-01-23 10:18:48.951 250273 DEBUG nova.network.neutron [req-593ef68c-5b84-49a4-94a3-21623d413b56 req-1243876b-fe56-4043-9ca4-3b3f7b172171 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Updating instance_info_cache with network_info: [{"id": "f2451ca0-b377-4357-87c4-3e92af9c4af9", "address": "fa:16:3e:98:0f:4e", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2451ca0-b3", "ovs_interfaceid": "f2451ca0-b377-4357-87c4-3e92af9c4af9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:18:48 np0005593232 nova_compute[250269]: 2026-01-23 10:18:48.963 250273 DEBUG oslo_concurrency.processutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e401e666-cd99-4f22-882c-fe5130ab4471/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv914rauo" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:18:48 np0005593232 nova_compute[250269]: 2026-01-23 10:18:48.995 250273 DEBUG nova.storage.rbd_utils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image e401e666-cd99-4f22-882c-fe5130ab4471_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:18:49 np0005593232 nova_compute[250269]: 2026-01-23 10:18:48.999 250273 DEBUG oslo_concurrency.processutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e401e666-cd99-4f22-882c-fe5130ab4471/disk.config e401e666-cd99-4f22-882c-fe5130ab4471_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:18:49 np0005593232 nova_compute[250269]: 2026-01-23 10:18:49.046 250273 DEBUG oslo_concurrency.lockutils [req-593ef68c-5b84-49a4-94a3-21623d413b56 req-1243876b-fe56-4043-9ca4-3b3f7b172171 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-e401e666-cd99-4f22-882c-fe5130ab4471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:18:49 np0005593232 nova_compute[250269]: 2026-01-23 10:18:49.208 250273 DEBUG oslo_concurrency.processutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e401e666-cd99-4f22-882c-fe5130ab4471/disk.config e401e666-cd99-4f22-882c-fe5130ab4471_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:18:49 np0005593232 nova_compute[250269]: 2026-01-23 10:18:49.209 250273 INFO nova.virt.libvirt.driver [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Deleting local config drive /var/lib/nova/instances/e401e666-cd99-4f22-882c-fe5130ab4471/disk.config because it was imported into RBD.#033[00m
Jan 23 05:18:49 np0005593232 kernel: tapf2451ca0-b3: entered promiscuous mode
Jan 23 05:18:49 np0005593232 NetworkManager[49057]: <info>  [1769163529.2663] manager: (tapf2451ca0-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/271)
Jan 23 05:18:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:49.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:49 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:49Z|00580|binding|INFO|Claiming lport f2451ca0-b377-4357-87c4-3e92af9c4af9 for this chassis.
Jan 23 05:18:49 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:49Z|00581|binding|INFO|f2451ca0-b377-4357-87c4-3e92af9c4af9: Claiming fa:16:3e:98:0f:4e 10.100.0.7
Jan 23 05:18:49 np0005593232 nova_compute[250269]: 2026-01-23 10:18:49.310 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.318 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:0f:4e 10.100.0.7'], port_security=['fa:16:3e:98:0f:4e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e401e666-cd99-4f22-882c-fe5130ab4471', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c59351a1b59c4cc9ad389dff900935f2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd59f1dd0-018a-40d5-b9a0-54c6c1f9d925', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c808b115-ccf1-41c4-acea-daabae8abf5b, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=f2451ca0-b377-4357-87c4-3e92af9c4af9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.320 161902 INFO neutron.agent.ovn.metadata.agent [-] Port f2451ca0-b377-4357-87c4-3e92af9c4af9 in datapath 43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 bound to our chassis#033[00m
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.321 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7#033[00m
Jan 23 05:18:49 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:49Z|00582|binding|INFO|Setting lport f2451ca0-b377-4357-87c4-3e92af9c4af9 ovn-installed in OVS
Jan 23 05:18:49 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:49Z|00583|binding|INFO|Setting lport f2451ca0-b377-4357-87c4-3e92af9c4af9 up in Southbound
Jan 23 05:18:49 np0005593232 nova_compute[250269]: 2026-01-23 10:18:49.328 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:49 np0005593232 nova_compute[250269]: 2026-01-23 10:18:49.330 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.335 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[168cc0e0-2a2f-4fc9-b97d-5c80965f1e80]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.337 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap43bdb40a-e1 in ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.339 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap43bdb40a-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.339 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[31caa4d4-7700-4d0e-ba55-8d5e88f846e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.342 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9c8fa72d-a7ce-4138-b862-64d73d920a43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:49 np0005593232 systemd-machined[215836]: New machine qemu-66-instance-00000094.
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.357 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[993241d6-5b30-4455-b026-0c3ec3504afc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:49 np0005593232 systemd[1]: Started Virtual Machine qemu-66-instance-00000094.
Jan 23 05:18:49 np0005593232 systemd-udevd[348917]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.385 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6620e494-0d85-483a-bc89-63cd124c74d8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:49 np0005593232 NetworkManager[49057]: <info>  [1769163529.3930] device (tapf2451ca0-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:18:49 np0005593232 NetworkManager[49057]: <info>  [1769163529.3941] device (tapf2451ca0-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.417 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[948fe667-51aa-49d9-a11d-f09cd653830f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.422 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[937843c7-2e38-48ba-8715-ef851a13632b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:49 np0005593232 NetworkManager[49057]: <info>  [1769163529.4240] manager: (tap43bdb40a-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/272)
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.464 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[be7b7500-b349-4845-85e6-966fb82d2598]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.468 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[3fe4763b-d85a-4758-b6d8-230ef50ce9db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:49 np0005593232 NetworkManager[49057]: <info>  [1769163529.4903] device (tap43bdb40a-e0): carrier: link connected
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.495 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[9e230228-abe9-4b6f-8662-8e0105e7a544]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.510 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[03a47802-8264-4caa-a338-9de3249093b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43bdb40a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:5e:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 744263, 'reachable_time': 38763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348947, 'error': None, 'target': 'ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2677: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 2.1 MiB/s wr, 124 op/s
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.526 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[00973618-025c-4891-9f3d-b3a5c3188deb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:5ee5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 744263, 'tstamp': 744263}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348948, 'error': None, 'target': 'ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.542 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[837e8a88-826b-4264-842a-b3d9823ab80d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43bdb40a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:5e:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 172], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 744263, 'reachable_time': 38763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 348949, 'error': None, 'target': 'ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.572 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b34ec257-196b-405f-84b3-4ee59791685a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.650 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[087dac42-2164-433c-a406-0862a7d763b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.652 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43bdb40a-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.652 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.652 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43bdb40a-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:18:49 np0005593232 nova_compute[250269]: 2026-01-23 10:18:49.654 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:49 np0005593232 kernel: tap43bdb40a-e0: entered promiscuous mode
Jan 23 05:18:49 np0005593232 NetworkManager[49057]: <info>  [1769163529.6549] manager: (tap43bdb40a-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/273)
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.657 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43bdb40a-e0, col_values=(('external_ids', {'iface-id': '8a8ef4f2-2ba5-405a-811e-058c5ff2b91e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:18:49 np0005593232 nova_compute[250269]: 2026-01-23 10:18:49.657 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:49 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:49Z|00584|binding|INFO|Releasing lport 8a8ef4f2-2ba5-405a-811e-058c5ff2b91e from this chassis (sb_readonly=0)
Jan 23 05:18:49 np0005593232 nova_compute[250269]: 2026-01-23 10:18:49.676 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.677 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.678 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a70d75-76ec-42a1-a58c-e70e8b0c392a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.679 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7.pid.haproxy
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:18:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:49.681 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'env', 'PROCESS_TAG=haproxy-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:18:49 np0005593232 nova_compute[250269]: 2026-01-23 10:18:49.798 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163529.7971044, e401e666-cd99-4f22-882c-fe5130ab4471 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:18:49 np0005593232 nova_compute[250269]: 2026-01-23 10:18:49.798 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] VM Started (Lifecycle Event)#033[00m
Jan 23 05:18:49 np0005593232 nova_compute[250269]: 2026-01-23 10:18:49.837 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:18:49 np0005593232 nova_compute[250269]: 2026-01-23 10:18:49.844 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163529.7977943, e401e666-cd99-4f22-882c-fe5130ab4471 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:18:49 np0005593232 nova_compute[250269]: 2026-01-23 10:18:49.844 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:18:49 np0005593232 nova_compute[250269]: 2026-01-23 10:18:49.867 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:18:49 np0005593232 nova_compute[250269]: 2026-01-23 10:18:49.870 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:18:49 np0005593232 nova_compute[250269]: 2026-01-23 10:18:49.893 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:18:50 np0005593232 podman[349024]: 2026-01-23 10:18:50.113198609 +0000 UTC m=+0.072934634 container create ff93621cd596b773eee01825d8e8c4da2153e2e1d64d730ca8f3c0c8447758b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 23 05:18:50 np0005593232 systemd[1]: Started libpod-conmon-ff93621cd596b773eee01825d8e8c4da2153e2e1d64d730ca8f3c0c8447758b3.scope.
Jan 23 05:18:50 np0005593232 podman[349024]: 2026-01-23 10:18:50.070343841 +0000 UTC m=+0.030079946 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:18:50 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:18:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2722aaff08114c42064bb2a2cc94b90bd3e3703573d67711f1ad46a9105cec7b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:18:50 np0005593232 podman[349024]: 2026-01-23 10:18:50.194652795 +0000 UTC m=+0.154388830 container init ff93621cd596b773eee01825d8e8c4da2153e2e1d64d730ca8f3c0c8447758b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:18:50 np0005593232 podman[349024]: 2026-01-23 10:18:50.204943577 +0000 UTC m=+0.164679632 container start ff93621cd596b773eee01825d8e8c4da2153e2e1d64d730ca8f3c0c8447758b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 05:18:50 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[349039]: [NOTICE]   (349043) : New worker (349045) forked
Jan 23 05:18:50 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[349039]: [NOTICE]   (349043) : Loading success.
Jan 23 05:18:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:18:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:50.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:18:50 np0005593232 nova_compute[250269]: 2026-01-23 10:18:50.924 250273 DEBUG nova.compute.manager [req-2b6527ed-3772-4b83-9405-efe10df7dbb4 req-f53062e5-2cbd-42ba-a5ee-347690f38a90 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Received event network-vif-plugged-f2451ca0-b377-4357-87c4-3e92af9c4af9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:18:50 np0005593232 nova_compute[250269]: 2026-01-23 10:18:50.925 250273 DEBUG oslo_concurrency.lockutils [req-2b6527ed-3772-4b83-9405-efe10df7dbb4 req-f53062e5-2cbd-42ba-a5ee-347690f38a90 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "e401e666-cd99-4f22-882c-fe5130ab4471-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:18:50 np0005593232 nova_compute[250269]: 2026-01-23 10:18:50.926 250273 DEBUG oslo_concurrency.lockutils [req-2b6527ed-3772-4b83-9405-efe10df7dbb4 req-f53062e5-2cbd-42ba-a5ee-347690f38a90 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "e401e666-cd99-4f22-882c-fe5130ab4471-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:18:50 np0005593232 nova_compute[250269]: 2026-01-23 10:18:50.926 250273 DEBUG oslo_concurrency.lockutils [req-2b6527ed-3772-4b83-9405-efe10df7dbb4 req-f53062e5-2cbd-42ba-a5ee-347690f38a90 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "e401e666-cd99-4f22-882c-fe5130ab4471-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:18:50 np0005593232 nova_compute[250269]: 2026-01-23 10:18:50.927 250273 DEBUG nova.compute.manager [req-2b6527ed-3772-4b83-9405-efe10df7dbb4 req-f53062e5-2cbd-42ba-a5ee-347690f38a90 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Processing event network-vif-plugged-f2451ca0-b377-4357-87c4-3e92af9c4af9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 23 05:18:50 np0005593232 nova_compute[250269]: 2026-01-23 10:18:50.928 250273 DEBUG nova.compute.manager [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 23 05:18:50 np0005593232 nova_compute[250269]: 2026-01-23 10:18:50.935 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163530.9351704, e401e666-cd99-4f22-882c-fe5130ab4471 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 05:18:50 np0005593232 nova_compute[250269]: 2026-01-23 10:18:50.936 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] VM Resumed (Lifecycle Event)
Jan 23 05:18:50 np0005593232 nova_compute[250269]: 2026-01-23 10:18:50.938 250273 DEBUG nova.virt.libvirt.driver [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 23 05:18:50 np0005593232 nova_compute[250269]: 2026-01-23 10:18:50.943 250273 INFO nova.virt.libvirt.driver [-] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Instance spawned successfully.
Jan 23 05:18:50 np0005593232 nova_compute[250269]: 2026-01-23 10:18:50.944 250273 DEBUG nova.virt.libvirt.driver [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 23 05:18:50 np0005593232 nova_compute[250269]: 2026-01-23 10:18:50.972 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:18:50 np0005593232 nova_compute[250269]: 2026-01-23 10:18:50.978 250273 DEBUG nova.virt.libvirt.driver [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:18:50 np0005593232 nova_compute[250269]: 2026-01-23 10:18:50.978 250273 DEBUG nova.virt.libvirt.driver [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:18:50 np0005593232 nova_compute[250269]: 2026-01-23 10:18:50.979 250273 DEBUG nova.virt.libvirt.driver [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:18:50 np0005593232 nova_compute[250269]: 2026-01-23 10:18:50.979 250273 DEBUG nova.virt.libvirt.driver [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:18:50 np0005593232 nova_compute[250269]: 2026-01-23 10:18:50.980 250273 DEBUG nova.virt.libvirt.driver [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:18:50 np0005593232 nova_compute[250269]: 2026-01-23 10:18:50.981 250273 DEBUG nova.virt.libvirt.driver [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 23 05:18:50 np0005593232 nova_compute[250269]: 2026-01-23 10:18:50.987 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 05:18:51 np0005593232 nova_compute[250269]: 2026-01-23 10:18:51.033 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 05:18:51 np0005593232 nova_compute[250269]: 2026-01-23 10:18:51.073 250273 INFO nova.compute.manager [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Took 10.60 seconds to spawn the instance on the hypervisor.
Jan 23 05:18:51 np0005593232 nova_compute[250269]: 2026-01-23 10:18:51.074 250273 DEBUG nova.compute.manager [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:18:51 np0005593232 nova_compute[250269]: 2026-01-23 10:18:51.168 250273 INFO nova.compute.manager [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Took 11.89 seconds to build instance.
Jan 23 05:18:51 np0005593232 nova_compute[250269]: 2026-01-23 10:18:51.214 250273 DEBUG oslo_concurrency.lockutils [None req-b1348296-d6b6-4482-a51c-b149d93dd73b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "e401e666-cd99-4f22-882c-fe5130ab4471" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:18:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:51.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2678: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 82 KiB/s rd, 2.1 MiB/s wr, 124 op/s
Jan 23 05:18:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:18:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Jan 23 05:18:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Jan 23 05:18:52 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Jan 23 05:18:52 np0005593232 nova_compute[250269]: 2026-01-23 10:18:52.206 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:18:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:52.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:53 np0005593232 nova_compute[250269]: 2026-01-23 10:18:53.097 250273 DEBUG nova.compute.manager [req-2fec0132-3ab0-4e6e-a727-b75971555c8b req-ecfe52ab-35fb-4a8c-80cf-6d575a386323 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Received event network-vif-plugged-f2451ca0-b377-4357-87c4-3e92af9c4af9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:18:53 np0005593232 nova_compute[250269]: 2026-01-23 10:18:53.097 250273 DEBUG oslo_concurrency.lockutils [req-2fec0132-3ab0-4e6e-a727-b75971555c8b req-ecfe52ab-35fb-4a8c-80cf-6d575a386323 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "e401e666-cd99-4f22-882c-fe5130ab4471-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:18:53 np0005593232 nova_compute[250269]: 2026-01-23 10:18:53.098 250273 DEBUG oslo_concurrency.lockutils [req-2fec0132-3ab0-4e6e-a727-b75971555c8b req-ecfe52ab-35fb-4a8c-80cf-6d575a386323 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "e401e666-cd99-4f22-882c-fe5130ab4471-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:18:53 np0005593232 nova_compute[250269]: 2026-01-23 10:18:53.098 250273 DEBUG oslo_concurrency.lockutils [req-2fec0132-3ab0-4e6e-a727-b75971555c8b req-ecfe52ab-35fb-4a8c-80cf-6d575a386323 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "e401e666-cd99-4f22-882c-fe5130ab4471-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:18:53 np0005593232 nova_compute[250269]: 2026-01-23 10:18:53.099 250273 DEBUG nova.compute.manager [req-2fec0132-3ab0-4e6e-a727-b75971555c8b req-ecfe52ab-35fb-4a8c-80cf-6d575a386323 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] No waiting events found dispatching network-vif-plugged-f2451ca0-b377-4357-87c4-3e92af9c4af9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 05:18:53 np0005593232 nova_compute[250269]: 2026-01-23 10:18:53.099 250273 WARNING nova.compute.manager [req-2fec0132-3ab0-4e6e-a727-b75971555c8b req-ecfe52ab-35fb-4a8c-80cf-6d575a386323 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Received unexpected event network-vif-plugged-f2451ca0-b377-4357-87c4-3e92af9c4af9 for instance with vm_state active and task_state None.
Jan 23 05:18:53 np0005593232 nova_compute[250269]: 2026-01-23 10:18:53.204 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:18:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:53.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2680: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 20 KiB/s wr, 89 op/s
Jan 23 05:18:54 np0005593232 nova_compute[250269]: 2026-01-23 10:18:54.464 250273 DEBUG oslo_concurrency.lockutils [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "e401e666-cd99-4f22-882c-fe5130ab4471" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:18:54 np0005593232 nova_compute[250269]: 2026-01-23 10:18:54.466 250273 DEBUG oslo_concurrency.lockutils [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "e401e666-cd99-4f22-882c-fe5130ab4471" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:18:54 np0005593232 nova_compute[250269]: 2026-01-23 10:18:54.466 250273 DEBUG oslo_concurrency.lockutils [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "e401e666-cd99-4f22-882c-fe5130ab4471-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:18:54 np0005593232 nova_compute[250269]: 2026-01-23 10:18:54.466 250273 DEBUG oslo_concurrency.lockutils [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "e401e666-cd99-4f22-882c-fe5130ab4471-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:18:54 np0005593232 nova_compute[250269]: 2026-01-23 10:18:54.466 250273 DEBUG oslo_concurrency.lockutils [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "e401e666-cd99-4f22-882c-fe5130ab4471-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:18:54 np0005593232 nova_compute[250269]: 2026-01-23 10:18:54.467 250273 INFO nova.compute.manager [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Terminating instance
Jan 23 05:18:54 np0005593232 nova_compute[250269]: 2026-01-23 10:18:54.468 250273 DEBUG nova.compute.manager [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 23 05:18:54 np0005593232 kernel: tapf2451ca0-b3 (unregistering): left promiscuous mode
Jan 23 05:18:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:54.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:54 np0005593232 NetworkManager[49057]: <info>  [1769163534.7334] device (tapf2451ca0-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:18:54 np0005593232 nova_compute[250269]: 2026-01-23 10:18:54.739 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:18:54 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:54Z|00585|binding|INFO|Releasing lport f2451ca0-b377-4357-87c4-3e92af9c4af9 from this chassis (sb_readonly=0)
Jan 23 05:18:54 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:54Z|00586|binding|INFO|Setting lport f2451ca0-b377-4357-87c4-3e92af9c4af9 down in Southbound
Jan 23 05:18:54 np0005593232 ovn_controller[151001]: 2026-01-23T10:18:54Z|00587|binding|INFO|Removing iface tapf2451ca0-b3 ovn-installed in OVS
Jan 23 05:18:54 np0005593232 nova_compute[250269]: 2026-01-23 10:18:54.741 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:18:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:54.750 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:0f:4e 10.100.0.7'], port_security=['fa:16:3e:98:0f:4e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e401e666-cd99-4f22-882c-fe5130ab4471', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c59351a1b59c4cc9ad389dff900935f2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd59f1dd0-018a-40d5-b9a0-54c6c1f9d925', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c808b115-ccf1-41c4-acea-daabae8abf5b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=f2451ca0-b377-4357-87c4-3e92af9c4af9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 05:18:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:54.751 161902 INFO neutron.agent.ovn.metadata.agent [-] Port f2451ca0-b377-4357-87c4-3e92af9c4af9 in datapath 43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 unbound from our chassis
Jan 23 05:18:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:54.752 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 23 05:18:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:54.753 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[16e36b9a-7896-49de-a7b9-eabbbf361dad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:18:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:54.754 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 namespace which is not needed anymore
Jan 23 05:18:54 np0005593232 nova_compute[250269]: 2026-01-23 10:18:54.761 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:18:54 np0005593232 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000094.scope: Deactivated successfully.
Jan 23 05:18:54 np0005593232 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000094.scope: Consumed 4.256s CPU time.
Jan 23 05:18:54 np0005593232 systemd-machined[215836]: Machine qemu-66-instance-00000094 terminated.
Jan 23 05:18:54 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[349039]: [NOTICE]   (349043) : haproxy version is 2.8.14-c23fe91
Jan 23 05:18:54 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[349039]: [NOTICE]   (349043) : path to executable is /usr/sbin/haproxy
Jan 23 05:18:54 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[349039]: [WARNING]  (349043) : Exiting Master process...
Jan 23 05:18:54 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[349039]: [ALERT]    (349043) : Current worker (349045) exited with code 143 (Terminated)
Jan 23 05:18:54 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[349039]: [WARNING]  (349043) : All workers exited. Exiting... (0)
Jan 23 05:18:54 np0005593232 systemd[1]: libpod-ff93621cd596b773eee01825d8e8c4da2153e2e1d64d730ca8f3c0c8447758b3.scope: Deactivated successfully.
Jan 23 05:18:54 np0005593232 podman[349129]: 2026-01-23 10:18:54.875813968 +0000 UTC m=+0.041976074 container died ff93621cd596b773eee01825d8e8c4da2153e2e1d64d730ca8f3c0c8447758b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:18:54 np0005593232 systemd[1]: var-lib-containers-storage-overlay-2722aaff08114c42064bb2a2cc94b90bd3e3703573d67711f1ad46a9105cec7b-merged.mount: Deactivated successfully.
Jan 23 05:18:54 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ff93621cd596b773eee01825d8e8c4da2153e2e1d64d730ca8f3c0c8447758b3-userdata-shm.mount: Deactivated successfully.
Jan 23 05:18:54 np0005593232 podman[349129]: 2026-01-23 10:18:54.920295813 +0000 UTC m=+0.086457919 container cleanup ff93621cd596b773eee01825d8e8c4da2153e2e1d64d730ca8f3c0c8447758b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:18:54 np0005593232 nova_compute[250269]: 2026-01-23 10:18:54.920 250273 INFO nova.virt.libvirt.driver [-] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Instance destroyed successfully.
Jan 23 05:18:54 np0005593232 nova_compute[250269]: 2026-01-23 10:18:54.921 250273 DEBUG nova.objects.instance [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lazy-loading 'resources' on Instance uuid e401e666-cd99-4f22-882c-fe5130ab4471 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 05:18:54 np0005593232 systemd[1]: libpod-conmon-ff93621cd596b773eee01825d8e8c4da2153e2e1d64d730ca8f3c0c8447758b3.scope: Deactivated successfully.
Jan 23 05:18:54 np0005593232 nova_compute[250269]: 2026-01-23 10:18:54.948 250273 DEBUG nova.virt.libvirt.vif [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:18:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-247034792',display_name='tempest-ServersTestJSON-server-247034792',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-247034792',id=148,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK+YyteyDdyOuOjXPX1ByQY3uJqR9UTB66Nyph0Jb+GOWEVqPc3r3RrAzMewCHCtSWU0UY+8XoTz67A/gHf9c1r30V4NLdEGj9pn8g0jz3y+sdHA6v2H21ZNyJ94fvVYbw==',key_name='tempest-key-1590543153',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:18:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c59351a1b59c4cc9ad389dff900935f2',ramdisk_id='',reservation_id='r-im7ewgfl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1611255243',owner_user_name='tempest-ServersTestJSON-1611255243-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:18:51Z,user_data=None,user_id='ec99ae7c69d0438280441e0434374cbf',uuid=e401e666-cd99-4f22-882c-fe5130ab4471,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2451ca0-b377-4357-87c4-3e92af9c4af9", "address": "fa:16:3e:98:0f:4e", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2451ca0-b3", "ovs_interfaceid": "f2451ca0-b377-4357-87c4-3e92af9c4af9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 23 05:18:54 np0005593232 nova_compute[250269]: 2026-01-23 10:18:54.948 250273 DEBUG nova.network.os_vif_util [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Converting VIF {"id": "f2451ca0-b377-4357-87c4-3e92af9c4af9", "address": "fa:16:3e:98:0f:4e", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2451ca0-b3", "ovs_interfaceid": "f2451ca0-b377-4357-87c4-3e92af9c4af9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 05:18:54 np0005593232 nova_compute[250269]: 2026-01-23 10:18:54.949 250273 DEBUG nova.network.os_vif_util [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:0f:4e,bridge_name='br-int',has_traffic_filtering=True,id=f2451ca0-b377-4357-87c4-3e92af9c4af9,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2451ca0-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 05:18:54 np0005593232 nova_compute[250269]: 2026-01-23 10:18:54.950 250273 DEBUG os_vif [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:0f:4e,bridge_name='br-int',has_traffic_filtering=True,id=f2451ca0-b377-4357-87c4-3e92af9c4af9,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2451ca0-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 23 05:18:54 np0005593232 nova_compute[250269]: 2026-01-23 10:18:54.951 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:18:54 np0005593232 nova_compute[250269]: 2026-01-23 10:18:54.951 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2451ca0-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 05:18:54 np0005593232 nova_compute[250269]: 2026-01-23 10:18:54.953 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:18:54 np0005593232 nova_compute[250269]: 2026-01-23 10:18:54.954 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:18:54 np0005593232 nova_compute[250269]: 2026-01-23 10:18:54.957 250273 INFO os_vif [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:0f:4e,bridge_name='br-int',has_traffic_filtering=True,id=f2451ca0-b377-4357-87c4-3e92af9c4af9,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2451ca0-b3')
Jan 23 05:18:54 np0005593232 podman[349168]: 2026-01-23 10:18:54.982844871 +0000 UTC m=+0.039603697 container remove ff93621cd596b773eee01825d8e8c4da2153e2e1d64d730ca8f3c0c8447758b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:18:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:54.989 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[95dad0d2-b5bc-439e-8f55-4287b0aa25a9]: (4, ('Fri Jan 23 10:18:54 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 (ff93621cd596b773eee01825d8e8c4da2153e2e1d64d730ca8f3c0c8447758b3)\nff93621cd596b773eee01825d8e8c4da2153e2e1d64d730ca8f3c0c8447758b3\nFri Jan 23 10:18:54 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 (ff93621cd596b773eee01825d8e8c4da2153e2e1d64d730ca8f3c0c8447758b3)\nff93621cd596b773eee01825d8e8c4da2153e2e1d64d730ca8f3c0c8447758b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:54.990 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5f1c3bcb-5cb5-4889-bfde-781eb1a0ae6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:54.991 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43bdb40a-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:18:54 np0005593232 nova_compute[250269]: 2026-01-23 10:18:54.992 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:54 np0005593232 kernel: tap43bdb40a-e0: left promiscuous mode
Jan 23 05:18:55 np0005593232 nova_compute[250269]: 2026-01-23 10:18:55.006 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:55.009 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[08882e8f-16fd-4363-8d22-835548130faf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:55.025 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[74d24ac9-a5d3-46d3-a466-e974017841fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:55.026 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[24330237-1054-423a-a16a-cd9f4a5040a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:55.041 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6a0741bd-598f-4a9d-954e-0cdb742c3c82]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 744255, 'reachable_time': 32171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349202, 'error': None, 'target': 'ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:55 np0005593232 systemd[1]: run-netns-ovnmeta\x2d43bdb40a\x2deff5\x2d45cd\x2d9cb3\x2dcfdf465ad1f7.mount: Deactivated successfully.
Jan 23 05:18:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:55.044 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:18:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:18:55.044 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[79cc1fc2-36c0-4914-97b6-004ef71fb8bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:18:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:55.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:55 np0005593232 nova_compute[250269]: 2026-01-23 10:18:55.337 250273 INFO nova.virt.libvirt.driver [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Deleting instance files /var/lib/nova/instances/e401e666-cd99-4f22-882c-fe5130ab4471_del#033[00m
Jan 23 05:18:55 np0005593232 nova_compute[250269]: 2026-01-23 10:18:55.338 250273 INFO nova.virt.libvirt.driver [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Deletion of /var/lib/nova/instances/e401e666-cd99-4f22-882c-fe5130ab4471_del complete#033[00m
Jan 23 05:18:55 np0005593232 nova_compute[250269]: 2026-01-23 10:18:55.402 250273 DEBUG nova.compute.manager [req-b92ae0c8-6eac-4624-bf06-a302758ec3a5 req-34fd41b4-b3c0-451d-9772-76bfe142c018 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Received event network-vif-unplugged-f2451ca0-b377-4357-87c4-3e92af9c4af9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:18:55 np0005593232 nova_compute[250269]: 2026-01-23 10:18:55.402 250273 DEBUG oslo_concurrency.lockutils [req-b92ae0c8-6eac-4624-bf06-a302758ec3a5 req-34fd41b4-b3c0-451d-9772-76bfe142c018 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "e401e666-cd99-4f22-882c-fe5130ab4471-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:55 np0005593232 nova_compute[250269]: 2026-01-23 10:18:55.403 250273 DEBUG oslo_concurrency.lockutils [req-b92ae0c8-6eac-4624-bf06-a302758ec3a5 req-34fd41b4-b3c0-451d-9772-76bfe142c018 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "e401e666-cd99-4f22-882c-fe5130ab4471-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:55 np0005593232 nova_compute[250269]: 2026-01-23 10:18:55.403 250273 DEBUG oslo_concurrency.lockutils [req-b92ae0c8-6eac-4624-bf06-a302758ec3a5 req-34fd41b4-b3c0-451d-9772-76bfe142c018 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "e401e666-cd99-4f22-882c-fe5130ab4471-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:55 np0005593232 nova_compute[250269]: 2026-01-23 10:18:55.403 250273 DEBUG nova.compute.manager [req-b92ae0c8-6eac-4624-bf06-a302758ec3a5 req-34fd41b4-b3c0-451d-9772-76bfe142c018 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] No waiting events found dispatching network-vif-unplugged-f2451ca0-b377-4357-87c4-3e92af9c4af9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:18:55 np0005593232 nova_compute[250269]: 2026-01-23 10:18:55.403 250273 DEBUG nova.compute.manager [req-b92ae0c8-6eac-4624-bf06-a302758ec3a5 req-34fd41b4-b3c0-451d-9772-76bfe142c018 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Received event network-vif-unplugged-f2451ca0-b377-4357-87c4-3e92af9c4af9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:18:55 np0005593232 nova_compute[250269]: 2026-01-23 10:18:55.471 250273 INFO nova.compute.manager [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Took 1.00 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:18:55 np0005593232 nova_compute[250269]: 2026-01-23 10:18:55.472 250273 DEBUG oslo.service.loopingcall [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:18:55 np0005593232 nova_compute[250269]: 2026-01-23 10:18:55.472 250273 DEBUG nova.compute.manager [-] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:18:55 np0005593232 nova_compute[250269]: 2026-01-23 10:18:55.472 250273 DEBUG nova.network.neutron [-] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:18:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2681: 321 pgs: 321 active+clean; 179 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 405 KiB/s wr, 131 op/s
Jan 23 05:18:56 np0005593232 nova_compute[250269]: 2026-01-23 10:18:56.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:18:56 np0005593232 nova_compute[250269]: 2026-01-23 10:18:56.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:18:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:56.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:18:57 np0005593232 nova_compute[250269]: 2026-01-23 10:18:57.060 250273 DEBUG nova.network.neutron [-] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:18:57 np0005593232 nova_compute[250269]: 2026-01-23 10:18:57.104 250273 INFO nova.compute.manager [-] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Took 1.63 seconds to deallocate network for instance.#033[00m
Jan 23 05:18:57 np0005593232 nova_compute[250269]: 2026-01-23 10:18:57.206 250273 DEBUG oslo_concurrency.lockutils [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:57 np0005593232 nova_compute[250269]: 2026-01-23 10:18:57.206 250273 DEBUG oslo_concurrency.lockutils [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:57 np0005593232 nova_compute[250269]: 2026-01-23 10:18:57.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:18:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:57.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:57 np0005593232 nova_compute[250269]: 2026-01-23 10:18:57.367 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:57 np0005593232 nova_compute[250269]: 2026-01-23 10:18:57.471 250273 DEBUG oslo_concurrency.processutils [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:18:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2682: 321 pgs: 321 active+clean; 183 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Jan 23 05:18:57 np0005593232 nova_compute[250269]: 2026-01-23 10:18:57.581 250273 DEBUG nova.compute.manager [req-ce45ee68-186d-4f17-bd77-2456b193d2ce req-1bcf637d-dc24-4c02-b6a9-226e920c45fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Received event network-vif-plugged-f2451ca0-b377-4357-87c4-3e92af9c4af9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:18:57 np0005593232 nova_compute[250269]: 2026-01-23 10:18:57.582 250273 DEBUG oslo_concurrency.lockutils [req-ce45ee68-186d-4f17-bd77-2456b193d2ce req-1bcf637d-dc24-4c02-b6a9-226e920c45fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "e401e666-cd99-4f22-882c-fe5130ab4471-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:57 np0005593232 nova_compute[250269]: 2026-01-23 10:18:57.583 250273 DEBUG oslo_concurrency.lockutils [req-ce45ee68-186d-4f17-bd77-2456b193d2ce req-1bcf637d-dc24-4c02-b6a9-226e920c45fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "e401e666-cd99-4f22-882c-fe5130ab4471-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:57 np0005593232 nova_compute[250269]: 2026-01-23 10:18:57.583 250273 DEBUG oslo_concurrency.lockutils [req-ce45ee68-186d-4f17-bd77-2456b193d2ce req-1bcf637d-dc24-4c02-b6a9-226e920c45fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "e401e666-cd99-4f22-882c-fe5130ab4471-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:57 np0005593232 nova_compute[250269]: 2026-01-23 10:18:57.584 250273 DEBUG nova.compute.manager [req-ce45ee68-186d-4f17-bd77-2456b193d2ce req-1bcf637d-dc24-4c02-b6a9-226e920c45fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] No waiting events found dispatching network-vif-plugged-f2451ca0-b377-4357-87c4-3e92af9c4af9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:18:57 np0005593232 nova_compute[250269]: 2026-01-23 10:18:57.584 250273 WARNING nova.compute.manager [req-ce45ee68-186d-4f17-bd77-2456b193d2ce req-1bcf637d-dc24-4c02-b6a9-226e920c45fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Received unexpected event network-vif-plugged-f2451ca0-b377-4357-87c4-3e92af9c4af9 for instance with vm_state deleted and task_state None.#033[00m
Jan 23 05:18:57 np0005593232 nova_compute[250269]: 2026-01-23 10:18:57.585 250273 DEBUG nova.compute.manager [req-ce45ee68-186d-4f17-bd77-2456b193d2ce req-1bcf637d-dc24-4c02-b6a9-226e920c45fa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Received event network-vif-deleted-f2451ca0-b377-4357-87c4-3e92af9c4af9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:18:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:18:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3437280170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:18:57 np0005593232 nova_compute[250269]: 2026-01-23 10:18:57.898 250273 DEBUG oslo_concurrency.processutils [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:18:57 np0005593232 nova_compute[250269]: 2026-01-23 10:18:57.906 250273 DEBUG nova.compute.provider_tree [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:18:57 np0005593232 nova_compute[250269]: 2026-01-23 10:18:57.942 250273 DEBUG nova.scheduler.client.report [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:18:57 np0005593232 nova_compute[250269]: 2026-01-23 10:18:57.972 250273 DEBUG oslo_concurrency.lockutils [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:57 np0005593232 nova_compute[250269]: 2026-01-23 10:18:57.975 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:57 np0005593232 nova_compute[250269]: 2026-01-23 10:18:57.975 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:57 np0005593232 nova_compute[250269]: 2026-01-23 10:18:57.976 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:18:57 np0005593232 nova_compute[250269]: 2026-01-23 10:18:57.976 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:18:58 np0005593232 nova_compute[250269]: 2026-01-23 10:18:58.162 250273 INFO nova.scheduler.client.report [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Deleted allocations for instance e401e666-cd99-4f22-882c-fe5130ab4471#033[00m
Jan 23 05:18:58 np0005593232 nova_compute[250269]: 2026-01-23 10:18:58.205 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:18:58 np0005593232 nova_compute[250269]: 2026-01-23 10:18:58.258 250273 DEBUG oslo_concurrency.lockutils [None req-0256e1bf-b585-4fb1-9fa9-96b4c36a3bc7 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "e401e666-cd99-4f22-882c-fe5130ab4471" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:18:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3747365930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:18:58 np0005593232 nova_compute[250269]: 2026-01-23 10:18:58.399 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:18:58 np0005593232 nova_compute[250269]: 2026-01-23 10:18:58.536 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:18:58 np0005593232 nova_compute[250269]: 2026-01-23 10:18:58.537 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4249MB free_disk=20.914398193359375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:18:58 np0005593232 nova_compute[250269]: 2026-01-23 10:18:58.538 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:18:58 np0005593232 nova_compute[250269]: 2026-01-23 10:18:58.538 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:18:58 np0005593232 nova_compute[250269]: 2026-01-23 10:18:58.640 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:18:58 np0005593232 nova_compute[250269]: 2026-01-23 10:18:58.640 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:18:58 np0005593232 nova_compute[250269]: 2026-01-23 10:18:58.680 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:18:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:18:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:18:58.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:18:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:18:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3462537773' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:18:59 np0005593232 nova_compute[250269]: 2026-01-23 10:18:59.100 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:18:59 np0005593232 nova_compute[250269]: 2026-01-23 10:18:59.110 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:18:59 np0005593232 nova_compute[250269]: 2026-01-23 10:18:59.200 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:18:59 np0005593232 nova_compute[250269]: 2026-01-23 10:18:59.232 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:18:59 np0005593232 nova_compute[250269]: 2026-01-23 10:18:59.233 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:18:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:18:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:18:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:18:59.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:18:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2683: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 148 op/s
Jan 23 05:18:59 np0005593232 nova_compute[250269]: 2026-01-23 10:18:59.953 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:00 np0005593232 nova_compute[250269]: 2026-01-23 10:19:00.235 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:19:00 np0005593232 nova_compute[250269]: 2026-01-23 10:19:00.236 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:19:00 np0005593232 nova_compute[250269]: 2026-01-23 10:19:00.237 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:19:00 np0005593232 nova_compute[250269]: 2026-01-23 10:19:00.257 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:19:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:00.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:01.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2684: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 148 op/s
Jan 23 05:19:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:19:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:02.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:03 np0005593232 nova_compute[250269]: 2026-01-23 10:19:03.265 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:03.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2685: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 1.9 MiB/s wr, 113 op/s
Jan 23 05:19:04 np0005593232 nova_compute[250269]: 2026-01-23 10:19:04.455 250273 DEBUG oslo_concurrency.lockutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "26889fbb-8ea6-4457-897c-0e236ccc4b38" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:19:04 np0005593232 nova_compute[250269]: 2026-01-23 10:19:04.456 250273 DEBUG oslo_concurrency.lockutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "26889fbb-8ea6-4457-897c-0e236ccc4b38" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:19:04 np0005593232 nova_compute[250269]: 2026-01-23 10:19:04.493 250273 DEBUG nova.compute.manager [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:19:04 np0005593232 nova_compute[250269]: 2026-01-23 10:19:04.640 250273 DEBUG oslo_concurrency.lockutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:19:04 np0005593232 nova_compute[250269]: 2026-01-23 10:19:04.640 250273 DEBUG oslo_concurrency.lockutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:19:04 np0005593232 nova_compute[250269]: 2026-01-23 10:19:04.652 250273 DEBUG nova.virt.hardware [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:19:04 np0005593232 nova_compute[250269]: 2026-01-23 10:19:04.652 250273 INFO nova.compute.claims [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:19:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:19:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:04.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:19:04 np0005593232 nova_compute[250269]: 2026-01-23 10:19:04.802 250273 DEBUG oslo_concurrency.processutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:19:04 np0005593232 nova_compute[250269]: 2026-01-23 10:19:04.958 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:19:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/78531397' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:19:05 np0005593232 nova_compute[250269]: 2026-01-23 10:19:05.296 250273 DEBUG oslo_concurrency.processutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:19:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:05 np0005593232 nova_compute[250269]: 2026-01-23 10:19:05.307 250273 DEBUG nova.compute.provider_tree [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:19:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:19:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:05.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:19:05 np0005593232 nova_compute[250269]: 2026-01-23 10:19:05.384 250273 DEBUG nova.scheduler.client.report [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:19:05 np0005593232 nova_compute[250269]: 2026-01-23 10:19:05.449 250273 DEBUG oslo_concurrency.lockutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:19:05 np0005593232 nova_compute[250269]: 2026-01-23 10:19:05.451 250273 DEBUG nova.compute.manager [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:19:05 np0005593232 podman[349298]: 2026-01-23 10:19:05.521826614 +0000 UTC m=+0.160158344 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 23 05:19:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2686: 321 pgs: 321 active+clean; 167 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Jan 23 05:19:05 np0005593232 nova_compute[250269]: 2026-01-23 10:19:05.540 250273 DEBUG nova.compute.manager [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:19:05 np0005593232 nova_compute[250269]: 2026-01-23 10:19:05.540 250273 DEBUG nova.network.neutron [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:19:05 np0005593232 nova_compute[250269]: 2026-01-23 10:19:05.578 250273 INFO nova.virt.libvirt.driver [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:19:05 np0005593232 nova_compute[250269]: 2026-01-23 10:19:05.608 250273 DEBUG nova.compute.manager [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:19:05 np0005593232 nova_compute[250269]: 2026-01-23 10:19:05.758 250273 DEBUG nova.compute.manager [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 05:19:05 np0005593232 nova_compute[250269]: 2026-01-23 10:19:05.760 250273 DEBUG nova.virt.libvirt.driver [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:19:05 np0005593232 nova_compute[250269]: 2026-01-23 10:19:05.761 250273 INFO nova.virt.libvirt.driver [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Creating image(s)#033[00m
Jan 23 05:19:05 np0005593232 nova_compute[250269]: 2026-01-23 10:19:05.798 250273 DEBUG nova.storage.rbd_utils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image 26889fbb-8ea6-4457-897c-0e236ccc4b38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:19:05 np0005593232 nova_compute[250269]: 2026-01-23 10:19:05.838 250273 DEBUG nova.storage.rbd_utils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image 26889fbb-8ea6-4457-897c-0e236ccc4b38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:19:05 np0005593232 nova_compute[250269]: 2026-01-23 10:19:05.877 250273 DEBUG nova.storage.rbd_utils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image 26889fbb-8ea6-4457-897c-0e236ccc4b38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:19:05 np0005593232 nova_compute[250269]: 2026-01-23 10:19:05.882 250273 DEBUG oslo_concurrency.processutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:19:05 np0005593232 nova_compute[250269]: 2026-01-23 10:19:05.971 250273 DEBUG oslo_concurrency.processutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:19:05 np0005593232 nova_compute[250269]: 2026-01-23 10:19:05.973 250273 DEBUG oslo_concurrency.lockutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:19:05 np0005593232 nova_compute[250269]: 2026-01-23 10:19:05.974 250273 DEBUG oslo_concurrency.lockutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:19:05 np0005593232 nova_compute[250269]: 2026-01-23 10:19:05.974 250273 DEBUG oslo_concurrency.lockutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:19:06 np0005593232 nova_compute[250269]: 2026-01-23 10:19:06.005 250273 DEBUG nova.storage.rbd_utils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image 26889fbb-8ea6-4457-897c-0e236ccc4b38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:19:06 np0005593232 nova_compute[250269]: 2026-01-23 10:19:06.009 250273 DEBUG oslo_concurrency.processutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 26889fbb-8ea6-4457-897c-0e236ccc4b38_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:19:06 np0005593232 nova_compute[250269]: 2026-01-23 10:19:06.044 250273 DEBUG nova.policy [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ec99ae7c69d0438280441e0434374cbf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c59351a1b59c4cc9ad389dff900935f2', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:19:06 np0005593232 nova_compute[250269]: 2026-01-23 10:19:06.416 250273 DEBUG oslo_concurrency.processutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 26889fbb-8ea6-4457-897c-0e236ccc4b38_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.407s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:19:06 np0005593232 nova_compute[250269]: 2026-01-23 10:19:06.545 250273 DEBUG nova.storage.rbd_utils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] resizing rbd image 26889fbb-8ea6-4457-897c-0e236ccc4b38_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 05:19:06 np0005593232 nova_compute[250269]: 2026-01-23 10:19:06.693 250273 DEBUG nova.objects.instance [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lazy-loading 'migration_context' on Instance uuid 26889fbb-8ea6-4457-897c-0e236ccc4b38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:19:06 np0005593232 nova_compute[250269]: 2026-01-23 10:19:06.742 250273 DEBUG nova.virt.libvirt.driver [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 05:19:06 np0005593232 nova_compute[250269]: 2026-01-23 10:19:06.743 250273 DEBUG nova.virt.libvirt.driver [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Ensure instance console log exists: /var/lib/nova/instances/26889fbb-8ea6-4457-897c-0e236ccc4b38/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:19:06 np0005593232 nova_compute[250269]: 2026-01-23 10:19:06.743 250273 DEBUG oslo_concurrency.lockutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:19:06 np0005593232 nova_compute[250269]: 2026-01-23 10:19:06.744 250273 DEBUG oslo_concurrency.lockutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:19:06 np0005593232 nova_compute[250269]: 2026-01-23 10:19:06.744 250273 DEBUG oslo_concurrency.lockutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:19:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:19:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:06.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:19:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:19:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:07.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2687: 321 pgs: 321 active+clean; 191 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 125 op/s
Jan 23 05:19:07 np0005593232 nova_compute[250269]: 2026-01-23 10:19:07.541 250273 DEBUG nova.network.neutron [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Successfully created port: 356b746d-7a0d-4dda-8cdd-f648ae1b22e1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:19:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:19:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:19:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:19:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:19:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:19:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:19:08 np0005593232 nova_compute[250269]: 2026-01-23 10:19:08.295 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:08.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:09.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2688: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 117 op/s
Jan 23 05:19:09 np0005593232 nova_compute[250269]: 2026-01-23 10:19:09.630 250273 DEBUG nova.network.neutron [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Successfully updated port: 356b746d-7a0d-4dda-8cdd-f648ae1b22e1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:19:09 np0005593232 nova_compute[250269]: 2026-01-23 10:19:09.692 250273 DEBUG oslo_concurrency.lockutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "refresh_cache-26889fbb-8ea6-4457-897c-0e236ccc4b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:19:09 np0005593232 nova_compute[250269]: 2026-01-23 10:19:09.692 250273 DEBUG oslo_concurrency.lockutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquired lock "refresh_cache-26889fbb-8ea6-4457-897c-0e236ccc4b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:19:09 np0005593232 nova_compute[250269]: 2026-01-23 10:19:09.692 250273 DEBUG nova.network.neutron [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:19:09 np0005593232 nova_compute[250269]: 2026-01-23 10:19:09.919 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769163534.9186041, e401e666-cd99-4f22-882c-fe5130ab4471 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:19:09 np0005593232 nova_compute[250269]: 2026-01-23 10:19:09.920 250273 INFO nova.compute.manager [-] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:19:09 np0005593232 nova_compute[250269]: 2026-01-23 10:19:09.947 250273 DEBUG nova.compute.manager [None req-fda22133-ffad-4f71-b7f0-c7e42e6528fd - - - - - -] [instance: e401e666-cd99-4f22-882c-fe5130ab4471] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:19:09 np0005593232 nova_compute[250269]: 2026-01-23 10:19:09.961 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:10 np0005593232 nova_compute[250269]: 2026-01-23 10:19:10.005 250273 DEBUG nova.compute.manager [req-7fa2f8cc-876f-460b-a34b-368de176abe3 req-e4360661-6351-4d3e-b1e6-afa1b72ed43b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Received event network-changed-356b746d-7a0d-4dda-8cdd-f648ae1b22e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:19:10 np0005593232 nova_compute[250269]: 2026-01-23 10:19:10.006 250273 DEBUG nova.compute.manager [req-7fa2f8cc-876f-460b-a34b-368de176abe3 req-e4360661-6351-4d3e-b1e6-afa1b72ed43b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Refreshing instance network info cache due to event network-changed-356b746d-7a0d-4dda-8cdd-f648ae1b22e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:19:10 np0005593232 nova_compute[250269]: 2026-01-23 10:19:10.006 250273 DEBUG oslo_concurrency.lockutils [req-7fa2f8cc-876f-460b-a34b-368de176abe3 req-e4360661-6351-4d3e-b1e6-afa1b72ed43b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-26889fbb-8ea6-4457-897c-0e236ccc4b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:19:10 np0005593232 nova_compute[250269]: 2026-01-23 10:19:10.207 250273 DEBUG nova.network.neutron [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:19:10 np0005593232 nova_compute[250269]: 2026-01-23 10:19:10.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:19:10 np0005593232 podman[349493]: 2026-01-23 10:19:10.420350014 +0000 UTC m=+0.066863472 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 23 05:19:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:19:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:10.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:19:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:19:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:11.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:19:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2689: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 23 05:19:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.291 250273 DEBUG nova.network.neutron [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Updating instance_info_cache with network_info: [{"id": "356b746d-7a0d-4dda-8cdd-f648ae1b22e1", "address": "fa:16:3e:fe:11:5d", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap356b746d-7a", "ovs_interfaceid": "356b746d-7a0d-4dda-8cdd-f648ae1b22e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.346 250273 DEBUG oslo_concurrency.lockutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Releasing lock "refresh_cache-26889fbb-8ea6-4457-897c-0e236ccc4b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.347 250273 DEBUG nova.compute.manager [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Instance network_info: |[{"id": "356b746d-7a0d-4dda-8cdd-f648ae1b22e1", "address": "fa:16:3e:fe:11:5d", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap356b746d-7a", "ovs_interfaceid": "356b746d-7a0d-4dda-8cdd-f648ae1b22e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.347 250273 DEBUG oslo_concurrency.lockutils [req-7fa2f8cc-876f-460b-a34b-368de176abe3 req-e4360661-6351-4d3e-b1e6-afa1b72ed43b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-26889fbb-8ea6-4457-897c-0e236ccc4b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.347 250273 DEBUG nova.network.neutron [req-7fa2f8cc-876f-460b-a34b-368de176abe3 req-e4360661-6351-4d3e-b1e6-afa1b72ed43b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Refreshing network info cache for port 356b746d-7a0d-4dda-8cdd-f648ae1b22e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.350 250273 DEBUG nova.virt.libvirt.driver [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Start _get_guest_xml network_info=[{"id": "356b746d-7a0d-4dda-8cdd-f648ae1b22e1", "address": "fa:16:3e:fe:11:5d", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap356b746d-7a", "ovs_interfaceid": "356b746d-7a0d-4dda-8cdd-f648ae1b22e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.356 250273 WARNING nova.virt.libvirt.driver [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.364 250273 DEBUG nova.virt.libvirt.host [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.365 250273 DEBUG nova.virt.libvirt.host [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.369 250273 DEBUG nova.virt.libvirt.host [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.370 250273 DEBUG nova.virt.libvirt.host [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.371 250273 DEBUG nova.virt.libvirt.driver [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.372 250273 DEBUG nova.virt.hardware [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.372 250273 DEBUG nova.virt.hardware [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.372 250273 DEBUG nova.virt.hardware [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.373 250273 DEBUG nova.virt.hardware [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.373 250273 DEBUG nova.virt.hardware [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.373 250273 DEBUG nova.virt.hardware [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.374 250273 DEBUG nova.virt.hardware [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.374 250273 DEBUG nova.virt.hardware [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.374 250273 DEBUG nova.virt.hardware [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.375 250273 DEBUG nova.virt.hardware [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.375 250273 DEBUG nova.virt.hardware [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.378 250273 DEBUG oslo_concurrency.processutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:19:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:12.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:19:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/798977347' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.802 250273 DEBUG oslo_concurrency.processutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.835 250273 DEBUG nova.storage.rbd_utils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image 26889fbb-8ea6-4457-897c-0e236ccc4b38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:19:12 np0005593232 nova_compute[250269]: 2026-01-23 10:19:12.838 250273 DEBUG oslo_concurrency.processutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:19:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:19:13 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2905802174' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.297 250273 DEBUG oslo_concurrency.processutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.301 250273 DEBUG nova.virt.libvirt.vif [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:19:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-2121647420',display_name='tempest-ServersTestJSON-server-2121647420',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-2121647420',id=150,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c59351a1b59c4cc9ad389dff900935f2',ramdisk_id='',reservation_id='r-bdkvc653',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1611255243',owner_user_name='tempest-ServersTestJSON-1611255243-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:19:05Z,user_data=None,user_id='ec99ae7c69d0438280441e0434374cbf',uuid=26889fbb-8ea6-4457-897c-0e236ccc4b38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "356b746d-7a0d-4dda-8cdd-f648ae1b22e1", "address": "fa:16:3e:fe:11:5d", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap356b746d-7a", "ovs_interfaceid": "356b746d-7a0d-4dda-8cdd-f648ae1b22e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.302 250273 DEBUG nova.network.os_vif_util [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Converting VIF {"id": "356b746d-7a0d-4dda-8cdd-f648ae1b22e1", "address": "fa:16:3e:fe:11:5d", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap356b746d-7a", "ovs_interfaceid": "356b746d-7a0d-4dda-8cdd-f648ae1b22e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.303 250273 DEBUG nova.network.os_vif_util [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:11:5d,bridge_name='br-int',has_traffic_filtering=True,id=356b746d-7a0d-4dda-8cdd-f648ae1b22e1,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap356b746d-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.305 250273 DEBUG nova.objects.instance [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 26889fbb-8ea6-4457-897c-0e236ccc4b38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:19:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:13.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.496 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2690: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.558 250273 DEBUG nova.virt.libvirt.driver [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:19:13 np0005593232 nova_compute[250269]:  <uuid>26889fbb-8ea6-4457-897c-0e236ccc4b38</uuid>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:  <name>instance-00000096</name>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServersTestJSON-server-2121647420</nova:name>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:19:12</nova:creationTime>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:19:13 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:        <nova:user uuid="ec99ae7c69d0438280441e0434374cbf">tempest-ServersTestJSON-1611255243-project-member</nova:user>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:        <nova:project uuid="c59351a1b59c4cc9ad389dff900935f2">tempest-ServersTestJSON-1611255243</nova:project>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:        <nova:port uuid="356b746d-7a0d-4dda-8cdd-f648ae1b22e1">
Jan 23 05:19:13 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <entry name="serial">26889fbb-8ea6-4457-897c-0e236ccc4b38</entry>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <entry name="uuid">26889fbb-8ea6-4457-897c-0e236ccc4b38</entry>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/26889fbb-8ea6-4457-897c-0e236ccc4b38_disk">
Jan 23 05:19:13 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:19:13 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/26889fbb-8ea6-4457-897c-0e236ccc4b38_disk.config">
Jan 23 05:19:13 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:19:13 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:fe:11:5d"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <target dev="tap356b746d-7a"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/26889fbb-8ea6-4457-897c-0e236ccc4b38/console.log" append="off"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:19:13 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:19:13 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:19:13 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:19:13 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.560 250273 DEBUG nova.compute.manager [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Preparing to wait for external event network-vif-plugged-356b746d-7a0d-4dda-8cdd-f648ae1b22e1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.561 250273 DEBUG oslo_concurrency.lockutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "26889fbb-8ea6-4457-897c-0e236ccc4b38-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.561 250273 DEBUG oslo_concurrency.lockutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "26889fbb-8ea6-4457-897c-0e236ccc4b38-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.561 250273 DEBUG oslo_concurrency.lockutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "26889fbb-8ea6-4457-897c-0e236ccc4b38-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.562 250273 DEBUG nova.virt.libvirt.vif [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:19:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-2121647420',display_name='tempest-ServersTestJSON-server-2121647420',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-2121647420',id=150,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c59351a1b59c4cc9ad389dff900935f2',ramdisk_id='',reservation_id='r-bdkvc653',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1611255243',owner_user_name='tempest-ServersTestJSON-1611255243-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:19:05Z,user_data=None,user_id='ec99ae7c69d0438280441e0434374cbf',uuid=26889fbb-8ea6-4457-897c-0e236ccc4b38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "356b746d-7a0d-4dda-8cdd-f648ae1b22e1", "address": "fa:16:3e:fe:11:5d", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap356b746d-7a", "ovs_interfaceid": "356b746d-7a0d-4dda-8cdd-f648ae1b22e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.562 250273 DEBUG nova.network.os_vif_util [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Converting VIF {"id": "356b746d-7a0d-4dda-8cdd-f648ae1b22e1", "address": "fa:16:3e:fe:11:5d", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap356b746d-7a", "ovs_interfaceid": "356b746d-7a0d-4dda-8cdd-f648ae1b22e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.563 250273 DEBUG nova.network.os_vif_util [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:11:5d,bridge_name='br-int',has_traffic_filtering=True,id=356b746d-7a0d-4dda-8cdd-f648ae1b22e1,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap356b746d-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.563 250273 DEBUG os_vif [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:11:5d,bridge_name='br-int',has_traffic_filtering=True,id=356b746d-7a0d-4dda-8cdd-f648ae1b22e1,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap356b746d-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.563 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.564 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.564 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.566 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.567 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap356b746d-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.567 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap356b746d-7a, col_values=(('external_ids', {'iface-id': '356b746d-7a0d-4dda-8cdd-f648ae1b22e1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:11:5d', 'vm-uuid': '26889fbb-8ea6-4457-897c-0e236ccc4b38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.568 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:13 np0005593232 NetworkManager[49057]: <info>  [1769163553.5694] manager: (tap356b746d-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/274)
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.572 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.576 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.577 250273 INFO os_vif [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:11:5d,bridge_name='br-int',has_traffic_filtering=True,id=356b746d-7a0d-4dda-8cdd-f648ae1b22e1,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap356b746d-7a')#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.677 250273 DEBUG nova.virt.libvirt.driver [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.677 250273 DEBUG nova.virt.libvirt.driver [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.677 250273 DEBUG nova.virt.libvirt.driver [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] No VIF found with MAC fa:16:3e:fe:11:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.677 250273 INFO nova.virt.libvirt.driver [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Using config drive#033[00m
Jan 23 05:19:13 np0005593232 nova_compute[250269]: 2026-01-23 10:19:13.707 250273 DEBUG nova.storage.rbd_utils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image 26889fbb-8ea6-4457-897c-0e236ccc4b38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:19:14 np0005593232 nova_compute[250269]: 2026-01-23 10:19:14.559 250273 INFO nova.virt.libvirt.driver [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Creating config drive at /var/lib/nova/instances/26889fbb-8ea6-4457-897c-0e236ccc4b38/disk.config#033[00m
Jan 23 05:19:14 np0005593232 nova_compute[250269]: 2026-01-23 10:19:14.565 250273 DEBUG oslo_concurrency.processutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/26889fbb-8ea6-4457-897c-0e236ccc4b38/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjhm9pcfm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:19:14 np0005593232 nova_compute[250269]: 2026-01-23 10:19:14.710 250273 DEBUG oslo_concurrency.processutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/26889fbb-8ea6-4457-897c-0e236ccc4b38/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjhm9pcfm" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:19:14 np0005593232 nova_compute[250269]: 2026-01-23 10:19:14.744 250273 DEBUG nova.storage.rbd_utils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image 26889fbb-8ea6-4457-897c-0e236ccc4b38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:19:14 np0005593232 nova_compute[250269]: 2026-01-23 10:19:14.749 250273 DEBUG oslo_concurrency.processutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/26889fbb-8ea6-4457-897c-0e236ccc4b38/disk.config 26889fbb-8ea6-4457-897c-0e236ccc4b38_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:19:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:19:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:14.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:19:14 np0005593232 nova_compute[250269]: 2026-01-23 10:19:14.946 250273 DEBUG oslo_concurrency.processutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/26889fbb-8ea6-4457-897c-0e236ccc4b38/disk.config 26889fbb-8ea6-4457-897c-0e236ccc4b38_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:19:14 np0005593232 nova_compute[250269]: 2026-01-23 10:19:14.948 250273 INFO nova.virt.libvirt.driver [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Deleting local config drive /var/lib/nova/instances/26889fbb-8ea6-4457-897c-0e236ccc4b38/disk.config because it was imported into RBD.#033[00m
Jan 23 05:19:15 np0005593232 kernel: tap356b746d-7a: entered promiscuous mode
Jan 23 05:19:15 np0005593232 NetworkManager[49057]: <info>  [1769163555.0097] manager: (tap356b746d-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/275)
Jan 23 05:19:15 np0005593232 ovn_controller[151001]: 2026-01-23T10:19:15Z|00588|binding|INFO|Claiming lport 356b746d-7a0d-4dda-8cdd-f648ae1b22e1 for this chassis.
Jan 23 05:19:15 np0005593232 ovn_controller[151001]: 2026-01-23T10:19:15Z|00589|binding|INFO|356b746d-7a0d-4dda-8cdd-f648ae1b22e1: Claiming fa:16:3e:fe:11:5d 10.100.0.6
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.012 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.024 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:11:5d 10.100.0.6'], port_security=['fa:16:3e:fe:11:5d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '26889fbb-8ea6-4457-897c-0e236ccc4b38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c59351a1b59c4cc9ad389dff900935f2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd59f1dd0-018a-40d5-b9a0-54c6c1f9d925', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c808b115-ccf1-41c4-acea-daabae8abf5b, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=356b746d-7a0d-4dda-8cdd-f648ae1b22e1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.026 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 356b746d-7a0d-4dda-8cdd-f648ae1b22e1 in datapath 43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 bound to our chassis#033[00m
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.027 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7#033[00m
Jan 23 05:19:15 np0005593232 ovn_controller[151001]: 2026-01-23T10:19:15Z|00590|binding|INFO|Setting lport 356b746d-7a0d-4dda-8cdd-f648ae1b22e1 ovn-installed in OVS
Jan 23 05:19:15 np0005593232 ovn_controller[151001]: 2026-01-23T10:19:15Z|00591|binding|INFO|Setting lport 356b746d-7a0d-4dda-8cdd-f648ae1b22e1 up in Southbound
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.036 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.040 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.043 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6fa2fbe6-0ce7-4ef5-8060-2ac3a7553927]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.044 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap43bdb40a-e1 in ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:19:15 np0005593232 systemd-machined[215836]: New machine qemu-67-instance-00000096.
Jan 23 05:19:15 np0005593232 systemd-udevd[349700]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.046 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap43bdb40a-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.046 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a49a3fd2-27ee-45f7-8e16-4e40109bd730]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.051 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0b7d93d3-5145-46a1-a85b-334c95eff3db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:19:15 np0005593232 NetworkManager[49057]: <info>  [1769163555.0616] device (tap356b746d-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:19:15 np0005593232 NetworkManager[49057]: <info>  [1769163555.0622] device (tap356b746d-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:19:15 np0005593232 systemd[1]: Started Virtual Machine qemu-67-instance-00000096.
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.068 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[4afbadba-58a7-4b12-b108-bbe482aa99dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.098 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[46dd3636-44bb-4eba-bf04-4c27e2ca6b0d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.135 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[84b94de9-f67d-4e32-9327-f2b530a142f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:19:15 np0005593232 NetworkManager[49057]: <info>  [1769163555.1420] manager: (tap43bdb40a-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/276)
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.140 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[db634f1a-3e86-4df4-8129-f7dfcf2a59e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.170 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[da9778cb-1b2a-4948-a543-737c59ac07ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.173 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[416038c6-ae6a-4bfd-919c-8d3aad320dd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:19:15 np0005593232 NetworkManager[49057]: <info>  [1769163555.1924] device (tap43bdb40a-e0): carrier: link connected
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.198 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[09c60631-5d82-4db6-b75e-279818ef9051]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.216 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e3cefc71-a45f-4f6f-af0b-4f757b0000e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43bdb40a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:5e:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 175], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746833, 'reachable_time': 22776, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349732, 'error': None, 'target': 'ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.232 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e7bf6c2d-e49b-436c-842e-182e70a7bc1f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:5ee5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 746833, 'tstamp': 746833}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349733, 'error': None, 'target': 'ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.249 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d4d46273-1a3f-41c4-ae68-162f2d1a1d4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43bdb40a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:5e:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 175], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746833, 'reachable_time': 22776, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 349734, 'error': None, 'target': 'ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.279 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b818f989-79f4-40c6-a454-112f1eddac9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:19:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:15.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.337 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5835d0b2-5e2b-45b3-8c91-ee9b1c22b50d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.341 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43bdb40a-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.341 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.342 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43bdb40a-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.345 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:15 np0005593232 NetworkManager[49057]: <info>  [1769163555.3456] manager: (tap43bdb40a-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Jan 23 05:19:15 np0005593232 kernel: tap43bdb40a-e0: entered promiscuous mode
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.347 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.348 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43bdb40a-e0, col_values=(('external_ids', {'iface-id': '8a8ef4f2-2ba5-405a-811e-058c5ff2b91e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.349 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:15 np0005593232 ovn_controller[151001]: 2026-01-23T10:19:15Z|00592|binding|INFO|Releasing lport 8a8ef4f2-2ba5-405a-811e-058c5ff2b91e from this chassis (sb_readonly=0)
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.351 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.351 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.352 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[09099b65-0861-4de5-8d26-35c45e7db35e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.353 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7.pid.haproxy
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:19:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:15.354 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'env', 'PROCESS_TAG=haproxy-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.365 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.429 250273 DEBUG nova.compute.manager [req-3f4a0666-17a3-4810-84d3-28dfa9e7b3d8 req-30f1c180-27db-43dc-a725-2792452db7a0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Received event network-vif-plugged-356b746d-7a0d-4dda-8cdd-f648ae1b22e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.430 250273 DEBUG oslo_concurrency.lockutils [req-3f4a0666-17a3-4810-84d3-28dfa9e7b3d8 req-30f1c180-27db-43dc-a725-2792452db7a0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "26889fbb-8ea6-4457-897c-0e236ccc4b38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.431 250273 DEBUG oslo_concurrency.lockutils [req-3f4a0666-17a3-4810-84d3-28dfa9e7b3d8 req-30f1c180-27db-43dc-a725-2792452db7a0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "26889fbb-8ea6-4457-897c-0e236ccc4b38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.431 250273 DEBUG oslo_concurrency.lockutils [req-3f4a0666-17a3-4810-84d3-28dfa9e7b3d8 req-30f1c180-27db-43dc-a725-2792452db7a0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "26889fbb-8ea6-4457-897c-0e236ccc4b38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.431 250273 DEBUG nova.compute.manager [req-3f4a0666-17a3-4810-84d3-28dfa9e7b3d8 req-30f1c180-27db-43dc-a725-2792452db7a0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Processing event network-vif-plugged-356b746d-7a0d-4dda-8cdd-f648ae1b22e1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.503 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163555.5023773, 26889fbb-8ea6-4457-897c-0e236ccc4b38 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.516 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] VM Started (Lifecycle Event)#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.520 250273 DEBUG nova.compute.manager [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.525 250273 DEBUG nova.virt.libvirt.driver [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.531 250273 INFO nova.virt.libvirt.driver [-] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Instance spawned successfully.#033[00m
Jan 23 05:19:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2691: 321 pgs: 321 active+clean; 213 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.532 250273 DEBUG nova.virt.libvirt.driver [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.565 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.575 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.580 250273 DEBUG nova.virt.libvirt.driver [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.582 250273 DEBUG nova.virt.libvirt.driver [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.583 250273 DEBUG nova.virt.libvirt.driver [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.583 250273 DEBUG nova.virt.libvirt.driver [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.584 250273 DEBUG nova.virt.libvirt.driver [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.586 250273 DEBUG nova.virt.libvirt.driver [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.597 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.598 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163555.5025535, 26889fbb-8ea6-4457-897c-0e236ccc4b38 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.598 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.621 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.625 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163555.5244768, 26889fbb-8ea6-4457-897c-0e236ccc4b38 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.626 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.630 250273 DEBUG nova.network.neutron [req-7fa2f8cc-876f-460b-a34b-368de176abe3 req-e4360661-6351-4d3e-b1e6-afa1b72ed43b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Updated VIF entry in instance network info cache for port 356b746d-7a0d-4dda-8cdd-f648ae1b22e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.630 250273 DEBUG nova.network.neutron [req-7fa2f8cc-876f-460b-a34b-368de176abe3 req-e4360661-6351-4d3e-b1e6-afa1b72ed43b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Updating instance_info_cache with network_info: [{"id": "356b746d-7a0d-4dda-8cdd-f648ae1b22e1", "address": "fa:16:3e:fe:11:5d", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap356b746d-7a", "ovs_interfaceid": "356b746d-7a0d-4dda-8cdd-f648ae1b22e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.682 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.684 250273 DEBUG oslo_concurrency.lockutils [req-7fa2f8cc-876f-460b-a34b-368de176abe3 req-e4360661-6351-4d3e-b1e6-afa1b72ed43b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-26889fbb-8ea6-4457-897c-0e236ccc4b38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.687 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.696 250273 INFO nova.compute.manager [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Took 9.94 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.698 250273 DEBUG nova.compute.manager [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:19:15 np0005593232 podman[349808]: 2026-01-23 10:19:15.720338449 +0000 UTC m=+0.047349277 container create d53b804ae953ce69d5528b74bfc63296fbca97bf46519531261d04bb9d049eaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.735 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:19:15 np0005593232 systemd[1]: Started libpod-conmon-d53b804ae953ce69d5528b74bfc63296fbca97bf46519531261d04bb9d049eaf.scope.
Jan 23 05:19:15 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:19:15 np0005593232 podman[349808]: 2026-01-23 10:19:15.696997175 +0000 UTC m=+0.024008033 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:19:15 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d41ef0b67ce382634a0d5af90acaf0dd546476803716edd6512ce5f1aaebf6e0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:19:15 np0005593232 podman[349808]: 2026-01-23 10:19:15.806354634 +0000 UTC m=+0.133365482 container init d53b804ae953ce69d5528b74bfc63296fbca97bf46519531261d04bb9d049eaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.812 250273 INFO nova.compute.manager [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Took 11.21 seconds to build instance.#033[00m
Jan 23 05:19:15 np0005593232 podman[349808]: 2026-01-23 10:19:15.813487977 +0000 UTC m=+0.140498805 container start d53b804ae953ce69d5528b74bfc63296fbca97bf46519531261d04bb9d049eaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 05:19:15 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[349821]: [NOTICE]   (349826) : New worker (349828) forked
Jan 23 05:19:15 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[349821]: [NOTICE]   (349826) : Loading success.
Jan 23 05:19:15 np0005593232 nova_compute[250269]: 2026-01-23 10:19:15.852 250273 DEBUG oslo_concurrency.lockutils [None req-3766a86d-e34f-4909-a229-20a73cc4d58a ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "26889fbb-8ea6-4457-897c-0e236ccc4b38" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.397s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:19:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:16.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:19:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:17.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:17 np0005593232 nova_compute[250269]: 2026-01-23 10:19:17.365 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:19:17 np0005593232 nova_compute[250269]: 2026-01-23 10:19:17.365 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 23 05:19:17 np0005593232 nova_compute[250269]: 2026-01-23 10:19:17.395 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 23 05:19:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2692: 321 pgs: 321 active+clean; 236 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.1 MiB/s wr, 168 op/s
Jan 23 05:19:17 np0005593232 nova_compute[250269]: 2026-01-23 10:19:17.569 250273 DEBUG nova.compute.manager [req-7f44b438-2cb0-4c4f-8b3f-f371c08bf935 req-e626d9f9-9742-498c-9633-a0895dc10a55 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Received event network-vif-plugged-356b746d-7a0d-4dda-8cdd-f648ae1b22e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:19:17 np0005593232 nova_compute[250269]: 2026-01-23 10:19:17.570 250273 DEBUG oslo_concurrency.lockutils [req-7f44b438-2cb0-4c4f-8b3f-f371c08bf935 req-e626d9f9-9742-498c-9633-a0895dc10a55 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "26889fbb-8ea6-4457-897c-0e236ccc4b38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:19:17 np0005593232 nova_compute[250269]: 2026-01-23 10:19:17.570 250273 DEBUG oslo_concurrency.lockutils [req-7f44b438-2cb0-4c4f-8b3f-f371c08bf935 req-e626d9f9-9742-498c-9633-a0895dc10a55 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "26889fbb-8ea6-4457-897c-0e236ccc4b38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:19:17 np0005593232 nova_compute[250269]: 2026-01-23 10:19:17.571 250273 DEBUG oslo_concurrency.lockutils [req-7f44b438-2cb0-4c4f-8b3f-f371c08bf935 req-e626d9f9-9742-498c-9633-a0895dc10a55 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "26889fbb-8ea6-4457-897c-0e236ccc4b38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:19:17 np0005593232 nova_compute[250269]: 2026-01-23 10:19:17.571 250273 DEBUG nova.compute.manager [req-7f44b438-2cb0-4c4f-8b3f-f371c08bf935 req-e626d9f9-9742-498c-9633-a0895dc10a55 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] No waiting events found dispatching network-vif-plugged-356b746d-7a0d-4dda-8cdd-f648ae1b22e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:19:17 np0005593232 nova_compute[250269]: 2026-01-23 10:19:17.571 250273 WARNING nova.compute.manager [req-7f44b438-2cb0-4c4f-8b3f-f371c08bf935 req-e626d9f9-9742-498c-9633-a0895dc10a55 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Received unexpected event network-vif-plugged-356b746d-7a0d-4dda-8cdd-f648ae1b22e1 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:19:18 np0005593232 nova_compute[250269]: 2026-01-23 10:19:18.408 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:18 np0005593232 nova_compute[250269]: 2026-01-23 10:19:18.568 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:18.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:19:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:19.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:19:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2693: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.9 MiB/s wr, 154 op/s
Jan 23 05:19:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:19:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:20.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:19:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:20.997 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:19:20 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:20.999 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:19:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:21.000 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:19:21 np0005593232 nova_compute[250269]: 2026-01-23 10:19:21.021 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:21.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2694: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Jan 23 05:19:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:19:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:22.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:19:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:23.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:19:23 np0005593232 nova_compute[250269]: 2026-01-23 10:19:23.411 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2695: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Jan 23 05:19:23 np0005593232 nova_compute[250269]: 2026-01-23 10:19:23.569 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:24.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:25.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2696: 321 pgs: 321 active+clean; 246 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Jan 23 05:19:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:26.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:19:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:27.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2697: 321 pgs: 321 active+clean; 296 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.4 MiB/s wr, 179 op/s
Jan 23 05:19:28 np0005593232 ovn_controller[151001]: 2026-01-23T10:19:28Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fe:11:5d 10.100.0.6
Jan 23 05:19:28 np0005593232 ovn_controller[151001]: 2026-01-23T10:19:28Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fe:11:5d 10.100.0.6
Jan 23 05:19:28 np0005593232 nova_compute[250269]: 2026-01-23 10:19:28.457 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:28 np0005593232 nova_compute[250269]: 2026-01-23 10:19:28.570 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:28.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:29.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2698: 321 pgs: 321 active+clean; 341 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 5.2 MiB/s wr, 157 op/s
Jan 23 05:19:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:19:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:30.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:19:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:31.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2699: 321 pgs: 321 active+clean; 341 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 281 KiB/s rd, 4.4 MiB/s wr, 103 op/s
Jan 23 05:19:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:19:32 np0005593232 podman[350065]: 2026-01-23 10:19:32.769469799 +0000 UTC m=+0.085054888 container exec 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 23 05:19:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:32.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:32 np0005593232 podman[350065]: 2026-01-23 10:19:32.901248886 +0000 UTC m=+0.216833975 container exec_died 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:19:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:19:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:33.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:19:33 np0005593232 nova_compute[250269]: 2026-01-23 10:19:33.495 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2700: 321 pgs: 321 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 366 KiB/s rd, 5.7 MiB/s wr, 118 op/s
Jan 23 05:19:33 np0005593232 nova_compute[250269]: 2026-01-23 10:19:33.573 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:33 np0005593232 podman[350222]: 2026-01-23 10:19:33.694808464 +0000 UTC m=+0.074259212 container exec b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 05:19:33 np0005593232 podman[350222]: 2026-01-23 10:19:33.708951026 +0000 UTC m=+0.088401724 container exec_died b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:19:33.817219) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163573817775, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 1074, "num_deletes": 253, "total_data_size": 1686615, "memory_usage": 1719320, "flush_reason": "Manual Compaction"}
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163573836807, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 1670104, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59467, "largest_seqno": 60540, "table_properties": {"data_size": 1664752, "index_size": 2747, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11991, "raw_average_key_size": 20, "raw_value_size": 1653925, "raw_average_value_size": 2808, "num_data_blocks": 120, "num_entries": 589, "num_filter_entries": 589, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769163483, "oldest_key_time": 1769163483, "file_creation_time": 1769163573, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 19655 microseconds, and 6149 cpu microseconds.
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:19:33.836889) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 1670104 bytes OK
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:19:33.836907) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:19:33.838904) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:19:33.838934) EVENT_LOG_v1 {"time_micros": 1769163573838915, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:19:33.838954) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 1681592, prev total WAL file size 1681592, number of live WAL files 2.
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:19:33.839780) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(1630KB)], [137(10MB)]
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163573839897, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 12530196, "oldest_snapshot_seqno": -1}
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 8362 keys, 10637617 bytes, temperature: kUnknown
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163573944037, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 10637617, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10584521, "index_size": 31095, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20933, "raw_key_size": 221138, "raw_average_key_size": 26, "raw_value_size": 10438321, "raw_average_value_size": 1248, "num_data_blocks": 1187, "num_entries": 8362, "num_filter_entries": 8362, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769163573, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:19:33.944281) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 10637617 bytes
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:19:33.964278) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.2 rd, 102.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 10.4 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(13.9) write-amplify(6.4) OK, records in: 8887, records dropped: 525 output_compression: NoCompression
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:19:33.964336) EVENT_LOG_v1 {"time_micros": 1769163573964315, "job": 84, "event": "compaction_finished", "compaction_time_micros": 104211, "compaction_time_cpu_micros": 27993, "output_level": 6, "num_output_files": 1, "total_output_size": 10637617, "num_input_records": 8887, "num_output_records": 8362, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163573966155, "job": 84, "event": "table_file_deletion", "file_number": 139}
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163573969676, "job": 84, "event": "table_file_deletion", "file_number": 137}
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:19:33.839626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:19:33.969977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:19:33.969987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:19:33.969990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:19:33.969992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:19:33 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:19:33.969994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:19:33 np0005593232 podman[350283]: 2026-01-23 10:19:33.985312643 +0000 UTC m=+0.101895578 container exec 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, com.redhat.component=keepalived-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, vcs-type=git, io.buildah.version=1.28.2)
Jan 23 05:19:34 np0005593232 podman[350283]: 2026-01-23 10:19:34.001145693 +0000 UTC m=+0.117728598 container exec_died 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, distribution-scope=public, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, version=2.2.4, description=keepalived for Ceph, name=keepalived, vcs-type=git, io.buildah.version=1.28.2)
Jan 23 05:19:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:19:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:19:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:19:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:19:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:34.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:19:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:19:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:19:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:19:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:19:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:19:34 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev f36d110d-1ee2-456b-80e5-1324f03aa18b does not exist
Jan 23 05:19:34 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 3b46c37c-4f58-499f-910b-eccacd918c5e does not exist
Jan 23 05:19:34 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 6eb7a88f-3ef1-416b-a7ad-969ee3c89818 does not exist
Jan 23 05:19:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:19:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:19:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:19:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:19:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:19:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:19:35 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:19:35 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:19:35 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:19:35 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:19:35 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:19:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:19:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:35.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:19:35 np0005593232 podman[350585]: 2026-01-23 10:19:35.492358663 +0000 UTC m=+0.061884660 container create cdd688baa38e98dc10d8a35fe7d74c768b4b8697303ab8d7c3eef180e7ff5649 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kirch, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:19:35 np0005593232 systemd[1]: Started libpod-conmon-cdd688baa38e98dc10d8a35fe7d74c768b4b8697303ab8d7c3eef180e7ff5649.scope.
Jan 23 05:19:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2701: 321 pgs: 321 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 360 KiB/s rd, 5.7 MiB/s wr, 118 op/s
Jan 23 05:19:35 np0005593232 podman[350585]: 2026-01-23 10:19:35.461096914 +0000 UTC m=+0.030623011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:19:35 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:19:35 np0005593232 podman[350585]: 2026-01-23 10:19:35.590982947 +0000 UTC m=+0.160508944 container init cdd688baa38e98dc10d8a35fe7d74c768b4b8697303ab8d7c3eef180e7ff5649 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kirch, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 05:19:35 np0005593232 podman[350585]: 2026-01-23 10:19:35.598420228 +0000 UTC m=+0.167946225 container start cdd688baa38e98dc10d8a35fe7d74c768b4b8697303ab8d7c3eef180e7ff5649 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:19:35 np0005593232 lucid_kirch[350601]: 167 167
Jan 23 05:19:35 np0005593232 systemd[1]: libpod-cdd688baa38e98dc10d8a35fe7d74c768b4b8697303ab8d7c3eef180e7ff5649.scope: Deactivated successfully.
Jan 23 05:19:35 np0005593232 conmon[350601]: conmon cdd688baa38e98dc10d8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cdd688baa38e98dc10d8a35fe7d74c768b4b8697303ab8d7c3eef180e7ff5649.scope/container/memory.events
Jan 23 05:19:35 np0005593232 podman[350585]: 2026-01-23 10:19:35.605060867 +0000 UTC m=+0.174586884 container attach cdd688baa38e98dc10d8a35fe7d74c768b4b8697303ab8d7c3eef180e7ff5649 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 05:19:35 np0005593232 podman[350585]: 2026-01-23 10:19:35.605375126 +0000 UTC m=+0.174901123 container died cdd688baa38e98dc10d8a35fe7d74c768b4b8697303ab8d7c3eef180e7ff5649 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 05:19:35 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a0805d342aa34568cff80c5bf39092a639dde28d9dc9d7d4e29e2d2fbfb7b659-merged.mount: Deactivated successfully.
Jan 23 05:19:35 np0005593232 podman[350585]: 2026-01-23 10:19:35.645847326 +0000 UTC m=+0.215373323 container remove cdd688baa38e98dc10d8a35fe7d74c768b4b8697303ab8d7c3eef180e7ff5649 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:19:35 np0005593232 systemd[1]: libpod-conmon-cdd688baa38e98dc10d8a35fe7d74c768b4b8697303ab8d7c3eef180e7ff5649.scope: Deactivated successfully.
Jan 23 05:19:35 np0005593232 podman[350603]: 2026-01-23 10:19:35.660187074 +0000 UTC m=+0.089179386 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 05:19:35 np0005593232 podman[350648]: 2026-01-23 10:19:35.838769531 +0000 UTC m=+0.047348347 container create dca0e2eacb54c0ddbeed5ed239cefe260ed8f8cc088f032d46f3af1f03666d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wozniak, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 05:19:35 np0005593232 systemd[1]: Started libpod-conmon-dca0e2eacb54c0ddbeed5ed239cefe260ed8f8cc088f032d46f3af1f03666d65.scope.
Jan 23 05:19:35 np0005593232 podman[350648]: 2026-01-23 10:19:35.818783602 +0000 UTC m=+0.027362428 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:19:35 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:19:35 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0acb4e55508efd62caaf23f4255d7425f6d51e2de34391446847a20cd357b45f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:19:35 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0acb4e55508efd62caaf23f4255d7425f6d51e2de34391446847a20cd357b45f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:19:35 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0acb4e55508efd62caaf23f4255d7425f6d51e2de34391446847a20cd357b45f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:19:35 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0acb4e55508efd62caaf23f4255d7425f6d51e2de34391446847a20cd357b45f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:19:35 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0acb4e55508efd62caaf23f4255d7425f6d51e2de34391446847a20cd357b45f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:19:35 np0005593232 podman[350648]: 2026-01-23 10:19:35.937638791 +0000 UTC m=+0.146217617 container init dca0e2eacb54c0ddbeed5ed239cefe260ed8f8cc088f032d46f3af1f03666d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wozniak, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:19:35 np0005593232 podman[350648]: 2026-01-23 10:19:35.946503623 +0000 UTC m=+0.155082439 container start dca0e2eacb54c0ddbeed5ed239cefe260ed8f8cc088f032d46f3af1f03666d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:19:35 np0005593232 podman[350648]: 2026-01-23 10:19:35.951071913 +0000 UTC m=+0.159650729 container attach dca0e2eacb54c0ddbeed5ed239cefe260ed8f8cc088f032d46f3af1f03666d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wozniak, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:19:36 np0005593232 kind_wozniak[350665]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:19:36 np0005593232 kind_wozniak[350665]: --> relative data size: 1.0
Jan 23 05:19:36 np0005593232 kind_wozniak[350665]: --> All data devices are unavailable
Jan 23 05:19:36 np0005593232 systemd[1]: libpod-dca0e2eacb54c0ddbeed5ed239cefe260ed8f8cc088f032d46f3af1f03666d65.scope: Deactivated successfully.
Jan 23 05:19:36 np0005593232 podman[350648]: 2026-01-23 10:19:36.782922271 +0000 UTC m=+0.991501087 container died dca0e2eacb54c0ddbeed5ed239cefe260ed8f8cc088f032d46f3af1f03666d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Jan 23 05:19:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:36.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:36 np0005593232 systemd[1]: var-lib-containers-storage-overlay-0acb4e55508efd62caaf23f4255d7425f6d51e2de34391446847a20cd357b45f-merged.mount: Deactivated successfully.
Jan 23 05:19:36 np0005593232 podman[350648]: 2026-01-23 10:19:36.843217705 +0000 UTC m=+1.051796511 container remove dca0e2eacb54c0ddbeed5ed239cefe260ed8f8cc088f032d46f3af1f03666d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Jan 23 05:19:36 np0005593232 systemd[1]: libpod-conmon-dca0e2eacb54c0ddbeed5ed239cefe260ed8f8cc088f032d46f3af1f03666d65.scope: Deactivated successfully.
Jan 23 05:19:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:19:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:19:37
Jan 23 05:19:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:19:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:19:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'vms', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'backups', '.mgr', 'volumes']
Jan 23 05:19:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:19:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:19:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:37.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:19:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2702: 321 pgs: 321 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 360 KiB/s rd, 5.7 MiB/s wr, 119 op/s
Jan 23 05:19:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:19:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:19:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:19:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:19:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:19:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:19:37 np0005593232 podman[350833]: 2026-01-23 10:19:37.698288542 +0000 UTC m=+0.047763899 container create 21b81396680f57cec836ba3081f41189c3f30cee80cd72a84ac6fa9705e6c1d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:19:37 np0005593232 systemd[1]: Started libpod-conmon-21b81396680f57cec836ba3081f41189c3f30cee80cd72a84ac6fa9705e6c1d7.scope.
Jan 23 05:19:37 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:19:37 np0005593232 podman[350833]: 2026-01-23 10:19:37.679500638 +0000 UTC m=+0.028976025 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:19:37 np0005593232 podman[350833]: 2026-01-23 10:19:37.786486909 +0000 UTC m=+0.135962276 container init 21b81396680f57cec836ba3081f41189c3f30cee80cd72a84ac6fa9705e6c1d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:19:37 np0005593232 podman[350833]: 2026-01-23 10:19:37.792730397 +0000 UTC m=+0.142205744 container start 21b81396680f57cec836ba3081f41189c3f30cee80cd72a84ac6fa9705e6c1d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_antonelli, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 05:19:37 np0005593232 podman[350833]: 2026-01-23 10:19:37.796283708 +0000 UTC m=+0.145759055 container attach 21b81396680f57cec836ba3081f41189c3f30cee80cd72a84ac6fa9705e6c1d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 23 05:19:37 np0005593232 cool_antonelli[350849]: 167 167
Jan 23 05:19:37 np0005593232 systemd[1]: libpod-21b81396680f57cec836ba3081f41189c3f30cee80cd72a84ac6fa9705e6c1d7.scope: Deactivated successfully.
Jan 23 05:19:37 np0005593232 podman[350833]: 2026-01-23 10:19:37.800279311 +0000 UTC m=+0.149754668 container died 21b81396680f57cec836ba3081f41189c3f30cee80cd72a84ac6fa9705e6c1d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 05:19:37 np0005593232 systemd[1]: var-lib-containers-storage-overlay-573d10c96a76adb305070ef4edbf6e360b3a01591177da1bdcfe9188224ab3bd-merged.mount: Deactivated successfully.
Jan 23 05:19:37 np0005593232 podman[350833]: 2026-01-23 10:19:37.836019727 +0000 UTC m=+0.185495084 container remove 21b81396680f57cec836ba3081f41189c3f30cee80cd72a84ac6fa9705e6c1d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 23 05:19:37 np0005593232 systemd[1]: libpod-conmon-21b81396680f57cec836ba3081f41189c3f30cee80cd72a84ac6fa9705e6c1d7.scope: Deactivated successfully.
Jan 23 05:19:38 np0005593232 podman[350875]: 2026-01-23 10:19:38.013102991 +0000 UTC m=+0.044793344 container create 325a6a748121fd5779b594ef761f98d9d70c9869a4b9173550312e891557d155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:19:38 np0005593232 systemd[1]: Started libpod-conmon-325a6a748121fd5779b594ef761f98d9d70c9869a4b9173550312e891557d155.scope.
Jan 23 05:19:38 np0005593232 podman[350875]: 2026-01-23 10:19:37.994232095 +0000 UTC m=+0.025922438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:19:38 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:19:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ac181f0463c280a96edf24937214d380b2eb9eced565724cf0bc8209fda31e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:19:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ac181f0463c280a96edf24937214d380b2eb9eced565724cf0bc8209fda31e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:19:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ac181f0463c280a96edf24937214d380b2eb9eced565724cf0bc8209fda31e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:19:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ac181f0463c280a96edf24937214d380b2eb9eced565724cf0bc8209fda31e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:19:38 np0005593232 podman[350875]: 2026-01-23 10:19:38.129665034 +0000 UTC m=+0.161355367 container init 325a6a748121fd5779b594ef761f98d9d70c9869a4b9173550312e891557d155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 05:19:38 np0005593232 podman[350875]: 2026-01-23 10:19:38.142252902 +0000 UTC m=+0.173943225 container start 325a6a748121fd5779b594ef761f98d9d70c9869a4b9173550312e891557d155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wilson, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 23 05:19:38 np0005593232 podman[350875]: 2026-01-23 10:19:38.145988588 +0000 UTC m=+0.177679011 container attach 325a6a748121fd5779b594ef761f98d9d70c9869a4b9173550312e891557d155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wilson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 05:19:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:19:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:19:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:19:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:19:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:19:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:19:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:19:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:19:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:19:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:19:38 np0005593232 nova_compute[250269]: 2026-01-23 10:19:38.545 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:38 np0005593232 nova_compute[250269]: 2026-01-23 10:19:38.575 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:38.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]: {
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:    "0": [
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:        {
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:            "devices": [
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:                "/dev/loop3"
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:            ],
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:            "lv_name": "ceph_lv0",
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:            "lv_size": "7511998464",
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:            "name": "ceph_lv0",
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:            "tags": {
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:                "ceph.cluster_name": "ceph",
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:                "ceph.crush_device_class": "",
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:                "ceph.encrypted": "0",
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:                "ceph.osd_id": "0",
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:                "ceph.type": "block",
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:                "ceph.vdo": "0"
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:            },
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:            "type": "block",
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:            "vg_name": "ceph_vg0"
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:        }
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]:    ]
Jan 23 05:19:38 np0005593232 goofy_wilson[350891]: }
Jan 23 05:19:39 np0005593232 systemd[1]: libpod-325a6a748121fd5779b594ef761f98d9d70c9869a4b9173550312e891557d155.scope: Deactivated successfully.
Jan 23 05:19:39 np0005593232 podman[350875]: 2026-01-23 10:19:39.025662015 +0000 UTC m=+1.057352368 container died 325a6a748121fd5779b594ef761f98d9d70c9869a4b9173550312e891557d155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 23 05:19:39 np0005593232 systemd[1]: var-lib-containers-storage-overlay-2ac181f0463c280a96edf24937214d380b2eb9eced565724cf0bc8209fda31e6-merged.mount: Deactivated successfully.
Jan 23 05:19:39 np0005593232 podman[350875]: 2026-01-23 10:19:39.096518559 +0000 UTC m=+1.128208912 container remove 325a6a748121fd5779b594ef761f98d9d70c9869a4b9173550312e891557d155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_wilson, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:19:39 np0005593232 systemd[1]: libpod-conmon-325a6a748121fd5779b594ef761f98d9d70c9869a4b9173550312e891557d155.scope: Deactivated successfully.
Jan 23 05:19:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:39.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:39 np0005593232 nova_compute[250269]: 2026-01-23 10:19:39.533 250273 DEBUG oslo_concurrency.lockutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "7599824a-e407-425c-9d2a-28db0744adb1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:19:39 np0005593232 nova_compute[250269]: 2026-01-23 10:19:39.535 250273 DEBUG oslo_concurrency.lockutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "7599824a-e407-425c-9d2a-28db0744adb1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:19:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2703: 321 pgs: 321 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 763 KiB/s rd, 3.4 MiB/s wr, 104 op/s
Jan 23 05:19:39 np0005593232 nova_compute[250269]: 2026-01-23 10:19:39.584 250273 DEBUG nova.compute.manager [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:19:39 np0005593232 podman[351052]: 2026-01-23 10:19:39.733220519 +0000 UTC m=+0.047715287 container create 2059686dcdf55a37acd8f7135a99eb55da20869af33701becb74ee1c9627188f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 05:19:39 np0005593232 systemd[1]: Started libpod-conmon-2059686dcdf55a37acd8f7135a99eb55da20869af33701becb74ee1c9627188f.scope.
Jan 23 05:19:39 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:19:39 np0005593232 podman[351052]: 2026-01-23 10:19:39.712476149 +0000 UTC m=+0.026970967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:19:39 np0005593232 podman[351052]: 2026-01-23 10:19:39.819540253 +0000 UTC m=+0.134035061 container init 2059686dcdf55a37acd8f7135a99eb55da20869af33701becb74ee1c9627188f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 05:19:39 np0005593232 podman[351052]: 2026-01-23 10:19:39.826880342 +0000 UTC m=+0.141375120 container start 2059686dcdf55a37acd8f7135a99eb55da20869af33701becb74ee1c9627188f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shaw, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 05:19:39 np0005593232 podman[351052]: 2026-01-23 10:19:39.830367831 +0000 UTC m=+0.144862639 container attach 2059686dcdf55a37acd8f7135a99eb55da20869af33701becb74ee1c9627188f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shaw, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 05:19:39 np0005593232 musing_shaw[351068]: 167 167
Jan 23 05:19:39 np0005593232 systemd[1]: libpod-2059686dcdf55a37acd8f7135a99eb55da20869af33701becb74ee1c9627188f.scope: Deactivated successfully.
Jan 23 05:19:39 np0005593232 podman[351052]: 2026-01-23 10:19:39.832308156 +0000 UTC m=+0.146803004 container died 2059686dcdf55a37acd8f7135a99eb55da20869af33701becb74ee1c9627188f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 23 05:19:39 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b69ab75f476671a67a89a6371b1c7db5ad1c2684eda986df753a731c0bea73de-merged.mount: Deactivated successfully.
Jan 23 05:19:39 np0005593232 podman[351052]: 2026-01-23 10:19:39.869702369 +0000 UTC m=+0.184197137 container remove 2059686dcdf55a37acd8f7135a99eb55da20869af33701becb74ee1c9627188f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:19:39 np0005593232 systemd[1]: libpod-conmon-2059686dcdf55a37acd8f7135a99eb55da20869af33701becb74ee1c9627188f.scope: Deactivated successfully.
Jan 23 05:19:40 np0005593232 podman[351093]: 2026-01-23 10:19:40.051214359 +0000 UTC m=+0.048624523 container create 07fa9f7fb68657d676500fadea1e827359fc9c64a236e0ae6718413f2e065d8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lamarr, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 05:19:40 np0005593232 systemd[1]: Started libpod-conmon-07fa9f7fb68657d676500fadea1e827359fc9c64a236e0ae6718413f2e065d8c.scope.
Jan 23 05:19:40 np0005593232 podman[351093]: 2026-01-23 10:19:40.027121974 +0000 UTC m=+0.024532218 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:19:40 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:19:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/560870c3303f1992d4cf87bfb2ba40072c8e2356701a6cc931edc206d96dd700/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:19:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/560870c3303f1992d4cf87bfb2ba40072c8e2356701a6cc931edc206d96dd700/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:19:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/560870c3303f1992d4cf87bfb2ba40072c8e2356701a6cc931edc206d96dd700/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:19:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/560870c3303f1992d4cf87bfb2ba40072c8e2356701a6cc931edc206d96dd700/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:19:40 np0005593232 podman[351093]: 2026-01-23 10:19:40.1564471 +0000 UTC m=+0.153857274 container init 07fa9f7fb68657d676500fadea1e827359fc9c64a236e0ae6718413f2e065d8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lamarr, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 23 05:19:40 np0005593232 podman[351093]: 2026-01-23 10:19:40.166382813 +0000 UTC m=+0.163792977 container start 07fa9f7fb68657d676500fadea1e827359fc9c64a236e0ae6718413f2e065d8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lamarr, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:19:40 np0005593232 podman[351093]: 2026-01-23 10:19:40.170042777 +0000 UTC m=+0.167452991 container attach 07fa9f7fb68657d676500fadea1e827359fc9c64a236e0ae6718413f2e065d8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lamarr, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 23 05:19:40 np0005593232 nova_compute[250269]: 2026-01-23 10:19:40.315 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:19:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:40.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:41 np0005593232 recursing_lamarr[351110]: {
Jan 23 05:19:41 np0005593232 recursing_lamarr[351110]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:19:41 np0005593232 recursing_lamarr[351110]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:19:41 np0005593232 recursing_lamarr[351110]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:19:41 np0005593232 recursing_lamarr[351110]:        "osd_id": 0,
Jan 23 05:19:41 np0005593232 recursing_lamarr[351110]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:19:41 np0005593232 recursing_lamarr[351110]:        "type": "bluestore"
Jan 23 05:19:41 np0005593232 recursing_lamarr[351110]:    }
Jan 23 05:19:41 np0005593232 recursing_lamarr[351110]: }
Jan 23 05:19:41 np0005593232 systemd[1]: libpod-07fa9f7fb68657d676500fadea1e827359fc9c64a236e0ae6718413f2e065d8c.scope: Deactivated successfully.
Jan 23 05:19:41 np0005593232 podman[351093]: 2026-01-23 10:19:41.144941971 +0000 UTC m=+1.142352135 container died 07fa9f7fb68657d676500fadea1e827359fc9c64a236e0ae6718413f2e065d8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lamarr, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:19:41 np0005593232 systemd[1]: var-lib-containers-storage-overlay-560870c3303f1992d4cf87bfb2ba40072c8e2356701a6cc931edc206d96dd700-merged.mount: Deactivated successfully.
Jan 23 05:19:41 np0005593232 podman[351093]: 2026-01-23 10:19:41.265984952 +0000 UTC m=+1.263395126 container remove 07fa9f7fb68657d676500fadea1e827359fc9c64a236e0ae6718413f2e065d8c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lamarr, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 05:19:41 np0005593232 podman[351133]: 2026-01-23 10:19:41.283920301 +0000 UTC m=+0.106805437 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 05:19:41 np0005593232 nova_compute[250269]: 2026-01-23 10:19:41.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:19:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:19:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:19:41 np0005593232 systemd[1]: libpod-conmon-07fa9f7fb68657d676500fadea1e827359fc9c64a236e0ae6718413f2e065d8c.scope: Deactivated successfully.
Jan 23 05:19:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:19:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:19:41 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 5e6f2cd2-802b-4388-a4ac-e4b66b6498d4 does not exist
Jan 23 05:19:41 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev e8a66819-ba29-416d-b376-e644c2f4006d does not exist
Jan 23 05:19:41 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev f4f3837e-a7f7-44b8-b9f0-ff990bd6eb5c does not exist
Jan 23 05:19:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:41.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2704: 321 pgs: 321 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 686 KiB/s rd, 1.3 MiB/s wr, 43 op/s
Jan 23 05:19:41 np0005593232 nova_compute[250269]: 2026-01-23 10:19:41.793 250273 DEBUG oslo_concurrency.lockutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:19:41 np0005593232 nova_compute[250269]: 2026-01-23 10:19:41.794 250273 DEBUG oslo_concurrency.lockutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:19:41 np0005593232 nova_compute[250269]: 2026-01-23 10:19:41.806 250273 DEBUG nova.virt.hardware [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:19:41 np0005593232 nova_compute[250269]: 2026-01-23 10:19:41.806 250273 INFO nova.compute.claims [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:19:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:19:42 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:19:42 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:19:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:42.633 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:19:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:42.634 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:19:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:42.635 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:19:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:42.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:43 np0005593232 nova_compute[250269]: 2026-01-23 10:19:43.121 250273 DEBUG oslo_concurrency.processutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:19:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:43.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2705: 321 pgs: 321 active+clean; 372 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.3 MiB/s wr, 143 op/s
Jan 23 05:19:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:19:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3327524108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:19:43 np0005593232 nova_compute[250269]: 2026-01-23 10:19:43.577 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:19:43 np0005593232 nova_compute[250269]: 2026-01-23 10:19:43.579 250273 DEBUG oslo_concurrency.processutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:19:43 np0005593232 nova_compute[250269]: 2026-01-23 10:19:43.579 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:19:43 np0005593232 nova_compute[250269]: 2026-01-23 10:19:43.580 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 23 05:19:43 np0005593232 nova_compute[250269]: 2026-01-23 10:19:43.580 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 23 05:19:43 np0005593232 nova_compute[250269]: 2026-01-23 10:19:43.585 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:43 np0005593232 nova_compute[250269]: 2026-01-23 10:19:43.586 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 23 05:19:43 np0005593232 nova_compute[250269]: 2026-01-23 10:19:43.594 250273 DEBUG nova.compute.provider_tree [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:19:43 np0005593232 nova_compute[250269]: 2026-01-23 10:19:43.691 250273 DEBUG nova.scheduler.client.report [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:19:43 np0005593232 nova_compute[250269]: 2026-01-23 10:19:43.722 250273 DEBUG oslo_concurrency.lockutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:19:43 np0005593232 nova_compute[250269]: 2026-01-23 10:19:43.723 250273 DEBUG nova.compute.manager [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:19:43 np0005593232 nova_compute[250269]: 2026-01-23 10:19:43.801 250273 DEBUG nova.compute.manager [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:19:43 np0005593232 nova_compute[250269]: 2026-01-23 10:19:43.802 250273 DEBUG nova.network.neutron [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:19:43 np0005593232 nova_compute[250269]: 2026-01-23 10:19:43.858 250273 INFO nova.virt.libvirt.driver [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:19:43 np0005593232 nova_compute[250269]: 2026-01-23 10:19:43.906 250273 DEBUG nova.compute.manager [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:19:44 np0005593232 nova_compute[250269]: 2026-01-23 10:19:44.073 250273 DEBUG nova.compute.manager [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 05:19:44 np0005593232 nova_compute[250269]: 2026-01-23 10:19:44.076 250273 DEBUG nova.virt.libvirt.driver [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:19:44 np0005593232 nova_compute[250269]: 2026-01-23 10:19:44.077 250273 INFO nova.virt.libvirt.driver [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Creating image(s)#033[00m
Jan 23 05:19:44 np0005593232 nova_compute[250269]: 2026-01-23 10:19:44.124 250273 DEBUG nova.storage.rbd_utils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image 7599824a-e407-425c-9d2a-28db0744adb1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:19:44 np0005593232 nova_compute[250269]: 2026-01-23 10:19:44.169 250273 DEBUG nova.storage.rbd_utils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image 7599824a-e407-425c-9d2a-28db0744adb1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:19:44 np0005593232 nova_compute[250269]: 2026-01-23 10:19:44.213 250273 DEBUG nova.storage.rbd_utils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image 7599824a-e407-425c-9d2a-28db0744adb1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:19:44 np0005593232 nova_compute[250269]: 2026-01-23 10:19:44.219 250273 DEBUG oslo_concurrency.processutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:19:44 np0005593232 nova_compute[250269]: 2026-01-23 10:19:44.313 250273 DEBUG nova.policy [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '60291ce86b6946629a2e48f6680312cb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '98c94577fcdb4c3d893898ede79ea2d4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:19:44 np0005593232 nova_compute[250269]: 2026-01-23 10:19:44.324 250273 DEBUG oslo_concurrency.processutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:19:44 np0005593232 nova_compute[250269]: 2026-01-23 10:19:44.325 250273 DEBUG oslo_concurrency.lockutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:19:44 np0005593232 nova_compute[250269]: 2026-01-23 10:19:44.326 250273 DEBUG oslo_concurrency.lockutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:19:44 np0005593232 nova_compute[250269]: 2026-01-23 10:19:44.327 250273 DEBUG oslo_concurrency.lockutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:19:44 np0005593232 nova_compute[250269]: 2026-01-23 10:19:44.367 250273 DEBUG nova.storage.rbd_utils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image 7599824a-e407-425c-9d2a-28db0744adb1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:19:44 np0005593232 nova_compute[250269]: 2026-01-23 10:19:44.373 250273 DEBUG oslo_concurrency.processutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 7599824a-e407-425c-9d2a-28db0744adb1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:19:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:19:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/256659622' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:19:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:19:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/256659622' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:19:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:44.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:45 np0005593232 nova_compute[250269]: 2026-01-23 10:19:45.239 250273 DEBUG oslo_concurrency.processutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 7599824a-e407-425c-9d2a-28db0744adb1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.866s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:19:45 np0005593232 nova_compute[250269]: 2026-01-23 10:19:45.319 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:19:45 np0005593232 nova_compute[250269]: 2026-01-23 10:19:45.327 250273 DEBUG nova.storage.rbd_utils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] resizing rbd image 7599824a-e407-425c-9d2a-28db0744adb1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 05:19:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:19:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:45.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:19:45 np0005593232 nova_compute[250269]: 2026-01-23 10:19:45.442 250273 DEBUG nova.objects.instance [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lazy-loading 'migration_context' on Instance uuid 7599824a-e407-425c-9d2a-28db0744adb1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:19:45 np0005593232 nova_compute[250269]: 2026-01-23 10:19:45.474 250273 DEBUG nova.virt.libvirt.driver [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 05:19:45 np0005593232 nova_compute[250269]: 2026-01-23 10:19:45.474 250273 DEBUG nova.virt.libvirt.driver [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Ensure instance console log exists: /var/lib/nova/instances/7599824a-e407-425c-9d2a-28db0744adb1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:19:45 np0005593232 nova_compute[250269]: 2026-01-23 10:19:45.475 250273 DEBUG oslo_concurrency.lockutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:19:45 np0005593232 nova_compute[250269]: 2026-01-23 10:19:45.475 250273 DEBUG oslo_concurrency.lockutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:19:45 np0005593232 nova_compute[250269]: 2026-01-23 10:19:45.476 250273 DEBUG oslo_concurrency.lockutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:19:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2706: 321 pgs: 321 active+clean; 358 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 39 KiB/s wr, 160 op/s
Jan 23 05:19:46 np0005593232 nova_compute[250269]: 2026-01-23 10:19:46.295 250273 DEBUG nova.network.neutron [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Successfully created port: d048f86b-08c3-4963-8743-6566b8a1a571 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:19:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:19:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:46.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:19:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:19:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:19:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:19:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:19:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:19:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008097853270984553 of space, bias 1.0, pg target 2.429355981295366 quantized to 32 (current 32)
Jan 23 05:19:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:19:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Jan 23 05:19:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:19:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:19:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:19:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 23 05:19:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:19:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 23 05:19:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:19:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:19:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:19:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 23 05:19:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:19:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 23 05:19:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:19:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:19:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:19:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 23 05:19:47 np0005593232 nova_compute[250269]: 2026-01-23 10:19:47.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:19:47 np0005593232 nova_compute[250269]: 2026-01-23 10:19:47.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:19:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:19:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:47.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:19:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2707: 321 pgs: 321 active+clean; 330 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 203 op/s
Jan 23 05:19:48 np0005593232 nova_compute[250269]: 2026-01-23 10:19:48.587 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:19:48 np0005593232 nova_compute[250269]: 2026-01-23 10:19:48.589 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:19:48 np0005593232 nova_compute[250269]: 2026-01-23 10:19:48.589 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 23 05:19:48 np0005593232 nova_compute[250269]: 2026-01-23 10:19:48.589 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 23 05:19:48 np0005593232 nova_compute[250269]: 2026-01-23 10:19:48.627 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:48 np0005593232 nova_compute[250269]: 2026-01-23 10:19:48.628 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 23 05:19:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:48.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:49 np0005593232 nova_compute[250269]: 2026-01-23 10:19:49.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:19:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:19:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:49.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:19:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2708: 321 pgs: 321 active+clean; 325 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 226 op/s
Jan 23 05:19:50 np0005593232 nova_compute[250269]: 2026-01-23 10:19:50.072 250273 DEBUG nova.network.neutron [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Successfully updated port: d048f86b-08c3-4963-8743-6566b8a1a571 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:19:50 np0005593232 nova_compute[250269]: 2026-01-23 10:19:50.102 250273 DEBUG oslo_concurrency.lockutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "refresh_cache-7599824a-e407-425c-9d2a-28db0744adb1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:19:50 np0005593232 nova_compute[250269]: 2026-01-23 10:19:50.102 250273 DEBUG oslo_concurrency.lockutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquired lock "refresh_cache-7599824a-e407-425c-9d2a-28db0744adb1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:19:50 np0005593232 nova_compute[250269]: 2026-01-23 10:19:50.103 250273 DEBUG nova.network.neutron [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:19:50 np0005593232 nova_compute[250269]: 2026-01-23 10:19:50.290 250273 DEBUG nova.compute.manager [req-1c444ed0-55ca-493f-b967-bdd34b6b74ee req-3720867a-c669-4d4d-a427-9cc7f15a59af 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Received event network-changed-d048f86b-08c3-4963-8743-6566b8a1a571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:19:50 np0005593232 nova_compute[250269]: 2026-01-23 10:19:50.291 250273 DEBUG nova.compute.manager [req-1c444ed0-55ca-493f-b967-bdd34b6b74ee req-3720867a-c669-4d4d-a427-9cc7f15a59af 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Refreshing instance network info cache due to event network-changed-d048f86b-08c3-4963-8743-6566b8a1a571. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:19:50 np0005593232 nova_compute[250269]: 2026-01-23 10:19:50.291 250273 DEBUG oslo_concurrency.lockutils [req-1c444ed0-55ca-493f-b967-bdd34b6b74ee req-3720867a-c669-4d4d-a427-9cc7f15a59af 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-7599824a-e407-425c-9d2a-28db0744adb1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:19:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:50.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:51 np0005593232 nova_compute[250269]: 2026-01-23 10:19:51.041 250273 DEBUG nova.network.neutron [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:19:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:51.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2709: 321 pgs: 321 active+clean; 325 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 199 op/s
Jan 23 05:19:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:19:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:52.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:52 np0005593232 nova_compute[250269]: 2026-01-23 10:19:52.886 250273 DEBUG oslo_concurrency.lockutils [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "26889fbb-8ea6-4457-897c-0e236ccc4b38" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:19:52 np0005593232 nova_compute[250269]: 2026-01-23 10:19:52.887 250273 DEBUG oslo_concurrency.lockutils [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "26889fbb-8ea6-4457-897c-0e236ccc4b38" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:19:52 np0005593232 nova_compute[250269]: 2026-01-23 10:19:52.888 250273 DEBUG oslo_concurrency.lockutils [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "26889fbb-8ea6-4457-897c-0e236ccc4b38-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:19:52 np0005593232 nova_compute[250269]: 2026-01-23 10:19:52.888 250273 DEBUG oslo_concurrency.lockutils [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "26889fbb-8ea6-4457-897c-0e236ccc4b38-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:19:52 np0005593232 nova_compute[250269]: 2026-01-23 10:19:52.889 250273 DEBUG oslo_concurrency.lockutils [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "26889fbb-8ea6-4457-897c-0e236ccc4b38-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:19:52 np0005593232 nova_compute[250269]: 2026-01-23 10:19:52.891 250273 INFO nova.compute.manager [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Terminating instance#033[00m
Jan 23 05:19:52 np0005593232 nova_compute[250269]: 2026-01-23 10:19:52.894 250273 DEBUG nova.compute.manager [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:19:52 np0005593232 kernel: tap356b746d-7a (unregistering): left promiscuous mode
Jan 23 05:19:52 np0005593232 NetworkManager[49057]: <info>  [1769163592.9692] device (tap356b746d-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:19:52 np0005593232 ovn_controller[151001]: 2026-01-23T10:19:52Z|00593|binding|INFO|Releasing lport 356b746d-7a0d-4dda-8cdd-f648ae1b22e1 from this chassis (sb_readonly=0)
Jan 23 05:19:52 np0005593232 ovn_controller[151001]: 2026-01-23T10:19:52Z|00594|binding|INFO|Setting lport 356b746d-7a0d-4dda-8cdd-f648ae1b22e1 down in Southbound
Jan 23 05:19:52 np0005593232 ovn_controller[151001]: 2026-01-23T10:19:52Z|00595|binding|INFO|Removing iface tap356b746d-7a ovn-installed in OVS
Jan 23 05:19:52 np0005593232 nova_compute[250269]: 2026-01-23 10:19:52.988 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:53.027 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:11:5d 10.100.0.6'], port_security=['fa:16:3e:fe:11:5d 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '26889fbb-8ea6-4457-897c-0e236ccc4b38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c59351a1b59c4cc9ad389dff900935f2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd59f1dd0-018a-40d5-b9a0-54c6c1f9d925', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c808b115-ccf1-41c4-acea-daabae8abf5b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=356b746d-7a0d-4dda-8cdd-f648ae1b22e1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.028 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:53.030 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 356b746d-7a0d-4dda-8cdd-f648ae1b22e1 in datapath 43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 unbound from our chassis#033[00m
Jan 23 05:19:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:53.032 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:19:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:53.034 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[150ccff7-6c8d-4b21-ae95-481bc6e88a1f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:19:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:53.035 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 namespace which is not needed anymore#033[00m
Jan 23 05:19:53 np0005593232 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000096.scope: Deactivated successfully.
Jan 23 05:19:53 np0005593232 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d00000096.scope: Consumed 14.194s CPU time.
Jan 23 05:19:53 np0005593232 systemd-machined[215836]: Machine qemu-67-instance-00000096 terminated.
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.122 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.127 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.136 250273 INFO nova.virt.libvirt.driver [-] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Instance destroyed successfully.#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.137 250273 DEBUG nova.objects.instance [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lazy-loading 'resources' on Instance uuid 26889fbb-8ea6-4457-897c-0e236ccc4b38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:19:53 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[349821]: [NOTICE]   (349826) : haproxy version is 2.8.14-c23fe91
Jan 23 05:19:53 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[349821]: [NOTICE]   (349826) : path to executable is /usr/sbin/haproxy
Jan 23 05:19:53 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[349821]: [WARNING]  (349826) : Exiting Master process...
Jan 23 05:19:53 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[349821]: [ALERT]    (349826) : Current worker (349828) exited with code 143 (Terminated)
Jan 23 05:19:53 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[349821]: [WARNING]  (349826) : All workers exited. Exiting... (0)
Jan 23 05:19:53 np0005593232 systemd[1]: libpod-d53b804ae953ce69d5528b74bfc63296fbca97bf46519531261d04bb9d049eaf.scope: Deactivated successfully.
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.184 250273 DEBUG nova.virt.libvirt.vif [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:19:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-2121647420',display_name='tempest-ServersTestJSON-server-2121647420',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-2121647420',id=150,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:19:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c59351a1b59c4cc9ad389dff900935f2',ramdisk_id='',reservation_id='r-bdkvc653',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1611255243',owner_user_name='tempest-ServersTestJSON-1611255243-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:19:15Z,user_data=None,user_id='ec99ae7c69d0438280441e0434374cbf',uuid=26889fbb-8ea6-4457-897c-0e236ccc4b38,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "356b746d-7a0d-4dda-8cdd-f648ae1b22e1", "address": "fa:16:3e:fe:11:5d", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap356b746d-7a", "ovs_interfaceid": "356b746d-7a0d-4dda-8cdd-f648ae1b22e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.185 250273 DEBUG nova.network.os_vif_util [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Converting VIF {"id": "356b746d-7a0d-4dda-8cdd-f648ae1b22e1", "address": "fa:16:3e:fe:11:5d", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap356b746d-7a", "ovs_interfaceid": "356b746d-7a0d-4dda-8cdd-f648ae1b22e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.185 250273 DEBUG nova.network.os_vif_util [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:11:5d,bridge_name='br-int',has_traffic_filtering=True,id=356b746d-7a0d-4dda-8cdd-f648ae1b22e1,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap356b746d-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.186 250273 DEBUG os_vif [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:11:5d,bridge_name='br-int',has_traffic_filtering=True,id=356b746d-7a0d-4dda-8cdd-f648ae1b22e1,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap356b746d-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:19:53 np0005593232 podman[351487]: 2026-01-23 10:19:53.187441194 +0000 UTC m=+0.044428164 container died d53b804ae953ce69d5528b74bfc63296fbca97bf46519531261d04bb9d049eaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.187 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.187 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap356b746d-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.189 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.190 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.193 250273 INFO os_vif [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:11:5d,bridge_name='br-int',has_traffic_filtering=True,id=356b746d-7a0d-4dda-8cdd-f648ae1b22e1,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap356b746d-7a')#033[00m
Jan 23 05:19:53 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d53b804ae953ce69d5528b74bfc63296fbca97bf46519531261d04bb9d049eaf-userdata-shm.mount: Deactivated successfully.
Jan 23 05:19:53 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d41ef0b67ce382634a0d5af90acaf0dd546476803716edd6512ce5f1aaebf6e0-merged.mount: Deactivated successfully.
Jan 23 05:19:53 np0005593232 podman[351487]: 2026-01-23 10:19:53.224654442 +0000 UTC m=+0.081641412 container cleanup d53b804ae953ce69d5528b74bfc63296fbca97bf46519531261d04bb9d049eaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 05:19:53 np0005593232 systemd[1]: libpod-conmon-d53b804ae953ce69d5528b74bfc63296fbca97bf46519531261d04bb9d049eaf.scope: Deactivated successfully.
Jan 23 05:19:53 np0005593232 podman[351535]: 2026-01-23 10:19:53.297791211 +0000 UTC m=+0.041828620 container remove d53b804ae953ce69d5528b74bfc63296fbca97bf46519531261d04bb9d049eaf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 05:19:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:53.304 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[639eff1b-7df6-4bb4-880e-9482ad615d71]: (4, ('Fri Jan 23 10:19:53 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 (d53b804ae953ce69d5528b74bfc63296fbca97bf46519531261d04bb9d049eaf)\nd53b804ae953ce69d5528b74bfc63296fbca97bf46519531261d04bb9d049eaf\nFri Jan 23 10:19:53 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 (d53b804ae953ce69d5528b74bfc63296fbca97bf46519531261d04bb9d049eaf)\nd53b804ae953ce69d5528b74bfc63296fbca97bf46519531261d04bb9d049eaf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:19:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:53.306 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4f356d16-4c88-4fa4-825a-8bb04e5ac807]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:19:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:53.307 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43bdb40a-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.309 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:53 np0005593232 kernel: tap43bdb40a-e0: left promiscuous mode
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.311 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:53.315 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[13a1efcb-7887-48e0-98cd-2ee3b74cbb15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.326 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:53.335 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c6bfa71a-2975-4ffb-a14f-1ecfb3a6ca79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:19:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:53.337 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6b5b9868-7ea6-401d-8e8b-dbf7d961e7e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:19:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:53.353 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[56f0ccbc-af64-4129-9b43-c99d47cd1719]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 746827, 'reachable_time': 27937, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351553, 'error': None, 'target': 'ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:19:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:53.356 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:19:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:19:53.357 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb76367-2828-40b0-903f-155239a95d21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:19:53 np0005593232 systemd[1]: run-netns-ovnmeta\x2d43bdb40a\x2deff5\x2d45cd\x2d9cb3\x2dcfdf465ad1f7.mount: Deactivated successfully.
Jan 23 05:19:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:53.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2710: 321 pgs: 321 active+clean; 325 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.8 MiB/s wr, 199 op/s
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.618 250273 INFO nova.virt.libvirt.driver [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Deleting instance files /var/lib/nova/instances/26889fbb-8ea6-4457-897c-0e236ccc4b38_del#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.619 250273 INFO nova.virt.libvirt.driver [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Deletion of /var/lib/nova/instances/26889fbb-8ea6-4457-897c-0e236ccc4b38_del complete#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.629 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.713 250273 DEBUG nova.compute.manager [req-a34f9879-9d76-4dd7-a7a0-2d55665dd8c8 req-7a108fe9-31a3-4f9b-8a85-91e5a84317c3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Received event network-vif-unplugged-356b746d-7a0d-4dda-8cdd-f648ae1b22e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.714 250273 DEBUG oslo_concurrency.lockutils [req-a34f9879-9d76-4dd7-a7a0-2d55665dd8c8 req-7a108fe9-31a3-4f9b-8a85-91e5a84317c3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "26889fbb-8ea6-4457-897c-0e236ccc4b38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.714 250273 DEBUG oslo_concurrency.lockutils [req-a34f9879-9d76-4dd7-a7a0-2d55665dd8c8 req-7a108fe9-31a3-4f9b-8a85-91e5a84317c3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "26889fbb-8ea6-4457-897c-0e236ccc4b38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.714 250273 DEBUG oslo_concurrency.lockutils [req-a34f9879-9d76-4dd7-a7a0-2d55665dd8c8 req-7a108fe9-31a3-4f9b-8a85-91e5a84317c3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "26889fbb-8ea6-4457-897c-0e236ccc4b38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.715 250273 DEBUG nova.compute.manager [req-a34f9879-9d76-4dd7-a7a0-2d55665dd8c8 req-7a108fe9-31a3-4f9b-8a85-91e5a84317c3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] No waiting events found dispatching network-vif-unplugged-356b746d-7a0d-4dda-8cdd-f648ae1b22e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.715 250273 DEBUG nova.compute.manager [req-a34f9879-9d76-4dd7-a7a0-2d55665dd8c8 req-7a108fe9-31a3-4f9b-8a85-91e5a84317c3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Received event network-vif-unplugged-356b746d-7a0d-4dda-8cdd-f648ae1b22e1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.770 250273 INFO nova.compute.manager [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.771 250273 DEBUG oslo.service.loopingcall [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.771 250273 DEBUG nova.compute.manager [-] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:19:53 np0005593232 nova_compute[250269]: 2026-01-23 10:19:53.771 250273 DEBUG nova.network.neutron [-] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:19:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:54.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:54 np0005593232 nova_compute[250269]: 2026-01-23 10:19:54.974 250273 DEBUG nova.network.neutron [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Updating instance_info_cache with network_info: [{"id": "d048f86b-08c3-4963-8743-6566b8a1a571", "address": "fa:16:3e:bd:12:18", "network": {"id": "ef2a274e-4da0-400b-bcb7-ccf7f53401c1", "bridge": "br-int", "label": "tempest-network-smoke--403303166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd048f86b-08", "ovs_interfaceid": "d048f86b-08c3-4963-8743-6566b8a1a571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.267 250273 DEBUG oslo_concurrency.lockutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Releasing lock "refresh_cache-7599824a-e407-425c-9d2a-28db0744adb1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.268 250273 DEBUG nova.compute.manager [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Instance network_info: |[{"id": "d048f86b-08c3-4963-8743-6566b8a1a571", "address": "fa:16:3e:bd:12:18", "network": {"id": "ef2a274e-4da0-400b-bcb7-ccf7f53401c1", "bridge": "br-int", "label": "tempest-network-smoke--403303166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd048f86b-08", "ovs_interfaceid": "d048f86b-08c3-4963-8743-6566b8a1a571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.268 250273 DEBUG oslo_concurrency.lockutils [req-1c444ed0-55ca-493f-b967-bdd34b6b74ee req-3720867a-c669-4d4d-a427-9cc7f15a59af 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-7599824a-e407-425c-9d2a-28db0744adb1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.269 250273 DEBUG nova.network.neutron [req-1c444ed0-55ca-493f-b967-bdd34b6b74ee req-3720867a-c669-4d4d-a427-9cc7f15a59af 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Refreshing network info cache for port d048f86b-08c3-4963-8743-6566b8a1a571 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.275 250273 DEBUG nova.virt.libvirt.driver [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Start _get_guest_xml network_info=[{"id": "d048f86b-08c3-4963-8743-6566b8a1a571", "address": "fa:16:3e:bd:12:18", "network": {"id": "ef2a274e-4da0-400b-bcb7-ccf7f53401c1", "bridge": "br-int", "label": "tempest-network-smoke--403303166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd048f86b-08", "ovs_interfaceid": "d048f86b-08c3-4963-8743-6566b8a1a571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.281 250273 WARNING nova.virt.libvirt.driver [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.293 250273 DEBUG nova.virt.libvirt.host [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.294 250273 DEBUG nova.virt.libvirt.host [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.303 250273 DEBUG nova.virt.libvirt.host [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.304 250273 DEBUG nova.virt.libvirt.host [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.307 250273 DEBUG nova.virt.libvirt.driver [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.307 250273 DEBUG nova.virt.hardware [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.308 250273 DEBUG nova.virt.hardware [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.309 250273 DEBUG nova.virt.hardware [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.309 250273 DEBUG nova.virt.hardware [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.310 250273 DEBUG nova.virt.hardware [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.311 250273 DEBUG nova.virt.hardware [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.311 250273 DEBUG nova.virt.hardware [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.312 250273 DEBUG nova.virt.hardware [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.312 250273 DEBUG nova.virt.hardware [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.313 250273 DEBUG nova.virt.hardware [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.313 250273 DEBUG nova.virt.hardware [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.318 250273 DEBUG oslo_concurrency.processutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:19:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:19:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:55.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:19:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2711: 321 pgs: 321 active+clean; 289 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 682 KiB/s rd, 1.8 MiB/s wr, 108 op/s
Jan 23 05:19:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:19:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/429264553' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.819 250273 DEBUG oslo_concurrency.processutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.847 250273 DEBUG nova.storage.rbd_utils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image 7599824a-e407-425c-9d2a-28db0744adb1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:19:55 np0005593232 nova_compute[250269]: 2026-01-23 10:19:55.852 250273 DEBUG oslo_concurrency.processutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:19:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:19:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/417885859' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:19:56 np0005593232 nova_compute[250269]: 2026-01-23 10:19:56.335 250273 DEBUG oslo_concurrency.processutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:19:56 np0005593232 nova_compute[250269]: 2026-01-23 10:19:56.338 250273 DEBUG nova.virt.libvirt.vif [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:19:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-984675457',display_name='tempest-TestNetworkBasicOps-server-984675457',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-984675457',id=153,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK1nM+eHevP0D1+qwz2Lwd9FCCupUyTEuZC5XAb2ITgMd9JK8Wsp66qF0mr4mibxTKRMv/w7qXU01TwS1bHnFdS3XtvDRoLHy2ERucHi8SRKNY0fLgC3FOWjksab2YQuQ==',key_name='tempest-TestNetworkBasicOps-1652034461',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98c94577fcdb4c3d893898ede79ea2d4',ramdisk_id='',reservation_id='r-q98hh0ig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-789276745',owner_user_name='tempest-TestNetworkBasicOps-789276745-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:19:43Z,user_data=None,user_id='60291ce86b6946629a2e48f6680312cb',uuid=7599824a-e407-425c-9d2a-28db0744adb1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d048f86b-08c3-4963-8743-6566b8a1a571", "address": "fa:16:3e:bd:12:18", "network": {"id": "ef2a274e-4da0-400b-bcb7-ccf7f53401c1", "bridge": "br-int", "label": "tempest-network-smoke--403303166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd048f86b-08", "ovs_interfaceid": "d048f86b-08c3-4963-8743-6566b8a1a571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:19:56 np0005593232 nova_compute[250269]: 2026-01-23 10:19:56.338 250273 DEBUG nova.network.os_vif_util [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converting VIF {"id": "d048f86b-08c3-4963-8743-6566b8a1a571", "address": "fa:16:3e:bd:12:18", "network": {"id": "ef2a274e-4da0-400b-bcb7-ccf7f53401c1", "bridge": "br-int", "label": "tempest-network-smoke--403303166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd048f86b-08", "ovs_interfaceid": "d048f86b-08c3-4963-8743-6566b8a1a571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:19:56 np0005593232 nova_compute[250269]: 2026-01-23 10:19:56.339 250273 DEBUG nova.network.os_vif_util [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:12:18,bridge_name='br-int',has_traffic_filtering=True,id=d048f86b-08c3-4963-8743-6566b8a1a571,network=Network(ef2a274e-4da0-400b-bcb7-ccf7f53401c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd048f86b-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:19:56 np0005593232 nova_compute[250269]: 2026-01-23 10:19:56.341 250273 DEBUG nova.objects.instance [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7599824a-e407-425c-9d2a-28db0744adb1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:19:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:56.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:19:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Jan 23 05:19:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Jan 23 05:19:57 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Jan 23 05:19:57 np0005593232 nova_compute[250269]: 2026-01-23 10:19:57.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:19:57 np0005593232 nova_compute[250269]: 2026-01-23 10:19:57.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:19:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:57.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2713: 321 pgs: 321 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 2.3 MiB/s wr, 78 op/s
Jan 23 05:19:58 np0005593232 nova_compute[250269]: 2026-01-23 10:19:58.227 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:58 np0005593232 nova_compute[250269]: 2026-01-23 10:19:58.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:19:58 np0005593232 nova_compute[250269]: 2026-01-23 10:19:58.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:19:58 np0005593232 nova_compute[250269]: 2026-01-23 10:19:58.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:19:58 np0005593232 nova_compute[250269]: 2026-01-23 10:19:58.633 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:19:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:19:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:19:58.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:19:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:19:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:19:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:19:59.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:19:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2714: 321 pgs: 321 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 2.0 MiB/s wr, 49 op/s
Jan 23 05:20:00 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 23 05:20:00 np0005593232 ceph-mon[74423]: overall HEALTH_OK
Jan 23 05:20:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:00.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:20:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:01.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:20:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2715: 321 pgs: 321 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 2.0 MiB/s wr, 49 op/s
Jan 23 05:20:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:20:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:20:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:02.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:20:03 np0005593232 nova_compute[250269]: 2026-01-23 10:20:03.231 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:03.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:03 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0[81752]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 23 05:20:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2716: 321 pgs: 321 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 2.0 MiB/s wr, 52 op/s
Jan 23 05:20:03 np0005593232 nova_compute[250269]: 2026-01-23 10:20:03.636 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:04.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:20:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:05.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:20:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2717: 321 pgs: 321 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 2.0 MiB/s wr, 42 op/s
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.243 250273 DEBUG nova.compute.manager [req-95d162a9-97c2-4774-8d9d-e9203d42a6f4 req-37c037e1-b3c3-4109-92d7-20dec6279eb4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Received event network-vif-plugged-356b746d-7a0d-4dda-8cdd-f648ae1b22e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.243 250273 DEBUG oslo_concurrency.lockutils [req-95d162a9-97c2-4774-8d9d-e9203d42a6f4 req-37c037e1-b3c3-4109-92d7-20dec6279eb4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "26889fbb-8ea6-4457-897c-0e236ccc4b38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.244 250273 DEBUG oslo_concurrency.lockutils [req-95d162a9-97c2-4774-8d9d-e9203d42a6f4 req-37c037e1-b3c3-4109-92d7-20dec6279eb4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "26889fbb-8ea6-4457-897c-0e236ccc4b38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.245 250273 DEBUG oslo_concurrency.lockutils [req-95d162a9-97c2-4774-8d9d-e9203d42a6f4 req-37c037e1-b3c3-4109-92d7-20dec6279eb4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "26889fbb-8ea6-4457-897c-0e236ccc4b38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.245 250273 DEBUG nova.compute.manager [req-95d162a9-97c2-4774-8d9d-e9203d42a6f4 req-37c037e1-b3c3-4109-92d7-20dec6279eb4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] No waiting events found dispatching network-vif-plugged-356b746d-7a0d-4dda-8cdd-f648ae1b22e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.245 250273 WARNING nova.compute.manager [req-95d162a9-97c2-4774-8d9d-e9203d42a6f4 req-37c037e1-b3c3-4109-92d7-20dec6279eb4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Received unexpected event network-vif-plugged-356b746d-7a0d-4dda-8cdd-f648ae1b22e1 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:20:06 np0005593232 podman[351625]: 2026-01-23 10:20:06.496860105 +0000 UTC m=+0.146729902 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.701 250273 DEBUG nova.virt.libvirt.driver [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:20:06 np0005593232 nova_compute[250269]:  <uuid>7599824a-e407-425c-9d2a-28db0744adb1</uuid>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:  <name>instance-00000099</name>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <nova:name>tempest-TestNetworkBasicOps-server-984675457</nova:name>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:19:55</nova:creationTime>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:20:06 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:        <nova:user uuid="60291ce86b6946629a2e48f6680312cb">tempest-TestNetworkBasicOps-789276745-project-member</nova:user>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:        <nova:project uuid="98c94577fcdb4c3d893898ede79ea2d4">tempest-TestNetworkBasicOps-789276745</nova:project>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:        <nova:port uuid="d048f86b-08c3-4963-8743-6566b8a1a571">
Jan 23 05:20:06 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <entry name="serial">7599824a-e407-425c-9d2a-28db0744adb1</entry>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <entry name="uuid">7599824a-e407-425c-9d2a-28db0744adb1</entry>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/7599824a-e407-425c-9d2a-28db0744adb1_disk">
Jan 23 05:20:06 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:20:06 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/7599824a-e407-425c-9d2a-28db0744adb1_disk.config">
Jan 23 05:20:06 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:20:06 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:bd:12:18"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <target dev="tapd048f86b-08"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/7599824a-e407-425c-9d2a-28db0744adb1/console.log" append="off"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:20:06 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:20:06 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:20:06 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:20:06 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.703 250273 DEBUG nova.compute.manager [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Preparing to wait for external event network-vif-plugged-d048f86b-08c3-4963-8743-6566b8a1a571 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.703 250273 DEBUG oslo_concurrency.lockutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "7599824a-e407-425c-9d2a-28db0744adb1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.704 250273 DEBUG oslo_concurrency.lockutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "7599824a-e407-425c-9d2a-28db0744adb1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.704 250273 DEBUG oslo_concurrency.lockutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "7599824a-e407-425c-9d2a-28db0744adb1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.705 250273 DEBUG nova.virt.libvirt.vif [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:19:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-984675457',display_name='tempest-TestNetworkBasicOps-server-984675457',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-984675457',id=153,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK1nM+eHevP0D1+qwz2Lwd9FCCupUyTEuZC5XAb2ITgMd9JK8Wsp66qF0mr4mibxTKRMv/w7qXU01TwS1bHnFdS3XtvDRoLHy2ERucHi8SRKNY0fLgC3FOWjksab2YQuQ==',key_name='tempest-TestNetworkBasicOps-1652034461',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98c94577fcdb4c3d893898ede79ea2d4',ramdisk_id='',reservation_id='r-q98hh0ig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-789276745',owner_user_name='tempest-TestNetworkBasicOps-789276745-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:19:43Z,user_data=None,user_id='60291ce86b6946629a2e48f6680312cb',uuid=7599824a-e407-425c-9d2a-28db0744adb1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d048f86b-08c3-4963-8743-6566b8a1a571", "address": "fa:16:3e:bd:12:18", "network": {"id": "ef2a274e-4da0-400b-bcb7-ccf7f53401c1", "bridge": "br-int", "label": "tempest-network-smoke--403303166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd048f86b-08", "ovs_interfaceid": "d048f86b-08c3-4963-8743-6566b8a1a571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.705 250273 DEBUG nova.network.os_vif_util [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converting VIF {"id": "d048f86b-08c3-4963-8743-6566b8a1a571", "address": "fa:16:3e:bd:12:18", "network": {"id": "ef2a274e-4da0-400b-bcb7-ccf7f53401c1", "bridge": "br-int", "label": "tempest-network-smoke--403303166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd048f86b-08", "ovs_interfaceid": "d048f86b-08c3-4963-8743-6566b8a1a571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.706 250273 DEBUG nova.network.os_vif_util [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:12:18,bridge_name='br-int',has_traffic_filtering=True,id=d048f86b-08c3-4963-8743-6566b8a1a571,network=Network(ef2a274e-4da0-400b-bcb7-ccf7f53401c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd048f86b-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.706 250273 DEBUG os_vif [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:12:18,bridge_name='br-int',has_traffic_filtering=True,id=d048f86b-08c3-4963-8743-6566b8a1a571,network=Network(ef2a274e-4da0-400b-bcb7-ccf7f53401c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd048f86b-08') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.707 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.707 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.708 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.711 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.711 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd048f86b-08, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.711 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd048f86b-08, col_values=(('external_ids', {'iface-id': 'd048f86b-08c3-4963-8743-6566b8a1a571', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bd:12:18', 'vm-uuid': '7599824a-e407-425c-9d2a-28db0744adb1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:20:06 np0005593232 NetworkManager[49057]: <info>  [1769163606.7147] manager: (tapd048f86b-08): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/278)
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.714 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.715 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.715 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.715 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.718 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.718 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.724 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:06 np0005593232 nova_compute[250269]: 2026-01-23 10:20:06.725 250273 INFO os_vif [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:12:18,bridge_name='br-int',has_traffic_filtering=True,id=d048f86b-08c3-4963-8743-6566b8a1a571,network=Network(ef2a274e-4da0-400b-bcb7-ccf7f53401c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd048f86b-08')#033[00m
Jan 23 05:20:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:06.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:20:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:07.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2718: 321 pgs: 321 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 790 KiB/s wr, 19 op/s
Jan 23 05:20:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:20:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:20:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:20:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:20:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:20:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:20:08 np0005593232 nova_compute[250269]: 2026-01-23 10:20:08.135 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769163593.134464, 26889fbb-8ea6-4457-897c-0e236ccc4b38 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:20:08 np0005593232 nova_compute[250269]: 2026-01-23 10:20:08.136 250273 INFO nova.compute.manager [-] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:20:08 np0005593232 nova_compute[250269]: 2026-01-23 10:20:08.663 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:08.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:09.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:09 np0005593232 nova_compute[250269]: 2026-01-23 10:20:09.455 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:20:09 np0005593232 nova_compute[250269]: 2026-01-23 10:20:09.456 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:20:09 np0005593232 nova_compute[250269]: 2026-01-23 10:20:09.456 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:20:09 np0005593232 nova_compute[250269]: 2026-01-23 10:20:09.456 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:20:09 np0005593232 nova_compute[250269]: 2026-01-23 10:20:09.456 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:20:09 np0005593232 nova_compute[250269]: 2026-01-23 10:20:09.492 250273 DEBUG nova.compute.manager [None req-b8b3f67c-5b11-4e99-82b9-858c583560e9 - - - - - -] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:20:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2719: 321 pgs: 321 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.9 KiB/s rd, 255 B/s wr, 3 op/s
Jan 23 05:20:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:20:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2656889830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:20:09 np0005593232 nova_compute[250269]: 2026-01-23 10:20:09.973 250273 DEBUG nova.virt.libvirt.driver [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:20:09 np0005593232 nova_compute[250269]: 2026-01-23 10:20:09.974 250273 DEBUG nova.virt.libvirt.driver [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:20:09 np0005593232 nova_compute[250269]: 2026-01-23 10:20:09.974 250273 DEBUG nova.virt.libvirt.driver [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] No VIF found with MAC fa:16:3e:bd:12:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:20:09 np0005593232 nova_compute[250269]: 2026-01-23 10:20:09.975 250273 INFO nova.virt.libvirt.driver [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Using config drive#033[00m
Jan 23 05:20:10 np0005593232 nova_compute[250269]: 2026-01-23 10:20:10.002 250273 DEBUG nova.storage.rbd_utils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image 7599824a-e407-425c-9d2a-28db0744adb1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:20:10 np0005593232 nova_compute[250269]: 2026-01-23 10:20:10.008 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:20:10 np0005593232 nova_compute[250269]: 2026-01-23 10:20:10.447 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:20:10 np0005593232 nova_compute[250269]: 2026-01-23 10:20:10.447 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-00000099 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:20:10 np0005593232 nova_compute[250269]: 2026-01-23 10:20:10.617 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:20:10 np0005593232 nova_compute[250269]: 2026-01-23 10:20:10.618 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4244MB free_disk=20.87645721435547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:20:10 np0005593232 nova_compute[250269]: 2026-01-23 10:20:10.618 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:20:10 np0005593232 nova_compute[250269]: 2026-01-23 10:20:10.618 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:20:10 np0005593232 nova_compute[250269]: 2026-01-23 10:20:10.729 250273 DEBUG nova.network.neutron [-] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:20:10 np0005593232 nova_compute[250269]: 2026-01-23 10:20:10.764 250273 INFO nova.compute.manager [-] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Took 16.99 seconds to deallocate network for instance.#033[00m
Jan 23 05:20:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:10.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:11 np0005593232 nova_compute[250269]: 2026-01-23 10:20:11.027 250273 DEBUG nova.compute.manager [req-220b5797-d613-4191-a942-ffdbc98cbd7b req-69348da2-c7b7-4dd6-9f4a-a21f330326c7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 26889fbb-8ea6-4457-897c-0e236ccc4b38] Received event network-vif-deleted-356b746d-7a0d-4dda-8cdd-f648ae1b22e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:20:11 np0005593232 nova_compute[250269]: 2026-01-23 10:20:11.042 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 26889fbb-8ea6-4457-897c-0e236ccc4b38 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:20:11 np0005593232 nova_compute[250269]: 2026-01-23 10:20:11.042 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 7599824a-e407-425c-9d2a-28db0744adb1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:20:11 np0005593232 nova_compute[250269]: 2026-01-23 10:20:11.042 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:20:11 np0005593232 nova_compute[250269]: 2026-01-23 10:20:11.043 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:20:11 np0005593232 nova_compute[250269]: 2026-01-23 10:20:11.131 250273 DEBUG oslo_concurrency.lockutils [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:20:11 np0005593232 nova_compute[250269]: 2026-01-23 10:20:11.158 250273 DEBUG nova.network.neutron [req-1c444ed0-55ca-493f-b967-bdd34b6b74ee req-3720867a-c669-4d4d-a427-9cc7f15a59af 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Updated VIF entry in instance network info cache for port d048f86b-08c3-4963-8743-6566b8a1a571. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:20:11 np0005593232 nova_compute[250269]: 2026-01-23 10:20:11.158 250273 DEBUG nova.network.neutron [req-1c444ed0-55ca-493f-b967-bdd34b6b74ee req-3720867a-c669-4d4d-a427-9cc7f15a59af 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Updating instance_info_cache with network_info: [{"id": "d048f86b-08c3-4963-8743-6566b8a1a571", "address": "fa:16:3e:bd:12:18", "network": {"id": "ef2a274e-4da0-400b-bcb7-ccf7f53401c1", "bridge": "br-int", "label": "tempest-network-smoke--403303166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd048f86b-08", "ovs_interfaceid": "d048f86b-08c3-4963-8743-6566b8a1a571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:20:11 np0005593232 nova_compute[250269]: 2026-01-23 10:20:11.174 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing inventories for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 23 05:20:11 np0005593232 nova_compute[250269]: 2026-01-23 10:20:11.248 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating ProviderTree inventory for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 23 05:20:11 np0005593232 nova_compute[250269]: 2026-01-23 10:20:11.249 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 05:20:11 np0005593232 nova_compute[250269]: 2026-01-23 10:20:11.351 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing aggregate associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 23 05:20:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:20:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:11.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:20:11 np0005593232 nova_compute[250269]: 2026-01-23 10:20:11.394 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing trait associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 23 05:20:11 np0005593232 podman[351697]: 2026-01-23 10:20:11.418695459 +0000 UTC m=+0.079104130 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 23 05:20:11 np0005593232 nova_compute[250269]: 2026-01-23 10:20:11.542 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:20:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2720: 321 pgs: 321 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 255 B/s wr, 3 op/s
Jan 23 05:20:11 np0005593232 nova_compute[250269]: 2026-01-23 10:20:11.718 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:20:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2489850020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:20:11 np0005593232 nova_compute[250269]: 2026-01-23 10:20:11.996 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:20:12 np0005593232 nova_compute[250269]: 2026-01-23 10:20:12.002 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:20:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:20:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:20:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:12.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:20:13 np0005593232 nova_compute[250269]: 2026-01-23 10:20:13.306 250273 DEBUG oslo_concurrency.lockutils [req-1c444ed0-55ca-493f-b967-bdd34b6b74ee req-3720867a-c669-4d4d-a427-9cc7f15a59af 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-7599824a-e407-425c-9d2a-28db0744adb1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:20:13 np0005593232 nova_compute[250269]: 2026-01-23 10:20:13.341 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:20:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:13.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2721: 321 pgs: 321 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 255 B/s wr, 3 op/s
Jan 23 05:20:13 np0005593232 nova_compute[250269]: 2026-01-23 10:20:13.666 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:13 np0005593232 nova_compute[250269]: 2026-01-23 10:20:13.911 250273 INFO nova.virt.libvirt.driver [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Creating config drive at /var/lib/nova/instances/7599824a-e407-425c-9d2a-28db0744adb1/disk.config#033[00m
Jan 23 05:20:13 np0005593232 nova_compute[250269]: 2026-01-23 10:20:13.918 250273 DEBUG oslo_concurrency.processutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7599824a-e407-425c-9d2a-28db0744adb1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpikemmaju execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:20:14 np0005593232 nova_compute[250269]: 2026-01-23 10:20:14.085 250273 DEBUG oslo_concurrency.processutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7599824a-e407-425c-9d2a-28db0744adb1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpikemmaju" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:20:14 np0005593232 nova_compute[250269]: 2026-01-23 10:20:14.122 250273 DEBUG nova.storage.rbd_utils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image 7599824a-e407-425c-9d2a-28db0744adb1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:20:14 np0005593232 nova_compute[250269]: 2026-01-23 10:20:14.127 250273 DEBUG oslo_concurrency.processutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7599824a-e407-425c-9d2a-28db0744adb1/disk.config 7599824a-e407-425c-9d2a-28db0744adb1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:20:14 np0005593232 nova_compute[250269]: 2026-01-23 10:20:14.299 250273 DEBUG oslo_concurrency.processutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7599824a-e407-425c-9d2a-28db0744adb1/disk.config 7599824a-e407-425c-9d2a-28db0744adb1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:20:14 np0005593232 nova_compute[250269]: 2026-01-23 10:20:14.300 250273 INFO nova.virt.libvirt.driver [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Deleting local config drive /var/lib/nova/instances/7599824a-e407-425c-9d2a-28db0744adb1/disk.config because it was imported into RBD.#033[00m
Jan 23 05:20:14 np0005593232 kernel: tapd048f86b-08: entered promiscuous mode
Jan 23 05:20:14 np0005593232 NetworkManager[49057]: <info>  [1769163614.3656] manager: (tapd048f86b-08): new Tun device (/org/freedesktop/NetworkManager/Devices/279)
Jan 23 05:20:14 np0005593232 ovn_controller[151001]: 2026-01-23T10:20:14Z|00596|binding|INFO|Claiming lport d048f86b-08c3-4963-8743-6566b8a1a571 for this chassis.
Jan 23 05:20:14 np0005593232 ovn_controller[151001]: 2026-01-23T10:20:14Z|00597|binding|INFO|d048f86b-08c3-4963-8743-6566b8a1a571: Claiming fa:16:3e:bd:12:18 10.100.0.14
Jan 23 05:20:14 np0005593232 nova_compute[250269]: 2026-01-23 10:20:14.403 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:14 np0005593232 nova_compute[250269]: 2026-01-23 10:20:14.410 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:14 np0005593232 systemd-udevd[351842]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:20:14 np0005593232 systemd-machined[215836]: New machine qemu-68-instance-00000099.
Jan 23 05:20:14 np0005593232 NetworkManager[49057]: <info>  [1769163614.4411] device (tapd048f86b-08): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:20:14 np0005593232 NetworkManager[49057]: <info>  [1769163614.4418] device (tapd048f86b-08): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:20:14 np0005593232 ovn_controller[151001]: 2026-01-23T10:20:14Z|00598|binding|INFO|Setting lport d048f86b-08c3-4963-8743-6566b8a1a571 ovn-installed in OVS
Jan 23 05:20:14 np0005593232 nova_compute[250269]: 2026-01-23 10:20:14.482 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:14 np0005593232 systemd[1]: Started Virtual Machine qemu-68-instance-00000099.
Jan 23 05:20:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:14.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:15.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:15 np0005593232 nova_compute[250269]: 2026-01-23 10:20:15.446 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163615.4459558, 7599824a-e407-425c-9d2a-28db0744adb1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:20:15 np0005593232 nova_compute[250269]: 2026-01-23 10:20:15.447 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] VM Started (Lifecycle Event)#033[00m
Jan 23 05:20:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2722: 321 pgs: 321 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 682 B/s rd, 341 B/s wr, 1 op/s
Jan 23 05:20:16 np0005593232 ovn_controller[151001]: 2026-01-23T10:20:16Z|00599|binding|INFO|Setting lport d048f86b-08c3-4963-8743-6566b8a1a571 up in Southbound
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.063 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:12:18 10.100.0.14'], port_security=['fa:16:3e:bd:12:18 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7599824a-e407-425c-9d2a-28db0744adb1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef2a274e-4da0-400b-bcb7-ccf7f53401c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98c94577fcdb4c3d893898ede79ea2d4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '641b8665-1bd0-4720-8aa5-a1f02bd1ccbe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=211b0b88-3773-4743-9d70-bd2a35ab028e, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=d048f86b-08c3-4963-8743-6566b8a1a571) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.066 161902 INFO neutron.agent.ovn.metadata.agent [-] Port d048f86b-08c3-4963-8743-6566b8a1a571 in datapath ef2a274e-4da0-400b-bcb7-ccf7f53401c1 bound to our chassis#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.069 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef2a274e-4da0-400b-bcb7-ccf7f53401c1#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.082 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[74220287-8115-4c3a-a1bc-ee1b3165a634]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.083 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapef2a274e-41 in ovnmeta-ef2a274e-4da0-400b-bcb7-ccf7f53401c1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.085 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapef2a274e-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.085 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9e5a367b-1aa6-452d-863a-19b0b239c2a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.086 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5c8f7609-cf7a-49fa-b044-9e0a9e5f217a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.099 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[eef1db01-ab19-4aa7-9964-fb14ef600ee3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.113 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[274ab002-802f-454f-820b-a2e3755c67cc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.146 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[f239762a-911f-4541-96ba-4caadde79759]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.157 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[60e0a245-4ab7-48e2-9fc8-41fdfc6f87ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:20:16 np0005593232 NetworkManager[49057]: <info>  [1769163616.1593] manager: (tapef2a274e-40): new Veth device (/org/freedesktop/NetworkManager/Devices/280)
Jan 23 05:20:16 np0005593232 nova_compute[250269]: 2026-01-23 10:20:16.181 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:20:16 np0005593232 nova_compute[250269]: 2026-01-23 10:20:16.182 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 5.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:20:16 np0005593232 nova_compute[250269]: 2026-01-23 10:20:16.182 250273 DEBUG oslo_concurrency.lockutils [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 5.051s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.198 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[223e3f2d-3eaa-4b58-b8b4-7f1bad07c0fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.202 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[913f113b-4c0b-4f00-9513-15ad05bbf2e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:20:16 np0005593232 NetworkManager[49057]: <info>  [1769163616.2324] device (tapef2a274e-40): carrier: link connected
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.238 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[39d4dba7-5e8d-45e3-acd0-2484d36ea7ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.263 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[88b58bed-388c-4a31-9dc4-e3709ba43764]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef2a274e-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:af:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752937, 'reachable_time': 26090, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351921, 'error': None, 'target': 'ovnmeta-ef2a274e-4da0-400b-bcb7-ccf7f53401c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.282 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[eef2cdb6-b76f-4cc6-93df-51fa3512d203]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:af99'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 752937, 'tstamp': 752937}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351922, 'error': None, 'target': 'ovnmeta-ef2a274e-4da0-400b-bcb7-ccf7f53401c1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:20:16 np0005593232 nova_compute[250269]: 2026-01-23 10:20:16.294 250273 DEBUG oslo_concurrency.processutils [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.314 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0a71bb3c-7c49-41e0-bf96-fce0e573f641]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef2a274e-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:af:99'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 178], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752937, 'reachable_time': 26090, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 351923, 'error': None, 'target': 'ovnmeta-ef2a274e-4da0-400b-bcb7-ccf7f53401c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.357 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[770351c2-1cd5-45d7-bc3a-07f0141ead04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.417 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e668233a-782d-4130-b4cc-82c06aa1a737]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.418 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef2a274e-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.419 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.419 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef2a274e-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:20:16 np0005593232 NetworkManager[49057]: <info>  [1769163616.4219] manager: (tapef2a274e-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/281)
Jan 23 05:20:16 np0005593232 nova_compute[250269]: 2026-01-23 10:20:16.421 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:16 np0005593232 kernel: tapef2a274e-40: entered promiscuous mode
Jan 23 05:20:16 np0005593232 nova_compute[250269]: 2026-01-23 10:20:16.424 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.430 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef2a274e-40, col_values=(('external_ids', {'iface-id': '4dd3507c-09b2-4097-8357-2c398ef8e03c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:20:16 np0005593232 nova_compute[250269]: 2026-01-23 10:20:16.431 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:16 np0005593232 ovn_controller[151001]: 2026-01-23T10:20:16Z|00600|binding|INFO|Releasing lport 4dd3507c-09b2-4097-8357-2c398ef8e03c from this chassis (sb_readonly=1)
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.433 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ef2a274e-4da0-400b-bcb7-ccf7f53401c1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ef2a274e-4da0-400b-bcb7-ccf7f53401c1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.433 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2f512885-60e4-4316-937b-994088544f42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.434 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-ef2a274e-4da0-400b-bcb7-ccf7f53401c1
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/ef2a274e-4da0-400b-bcb7-ccf7f53401c1.pid.haproxy
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID ef2a274e-4da0-400b-bcb7-ccf7f53401c1
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:20:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:16.435 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ef2a274e-4da0-400b-bcb7-ccf7f53401c1', 'env', 'PROCESS_TAG=haproxy-ef2a274e-4da0-400b-bcb7-ccf7f53401c1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ef2a274e-4da0-400b-bcb7-ccf7f53401c1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:20:16 np0005593232 nova_compute[250269]: 2026-01-23 10:20:16.447 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:16 np0005593232 nova_compute[250269]: 2026-01-23 10:20:16.720 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:20:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/41262605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:20:16 np0005593232 nova_compute[250269]: 2026-01-23 10:20:16.746 250273 DEBUG oslo_concurrency.processutils [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:20:16 np0005593232 nova_compute[250269]: 2026-01-23 10:20:16.753 250273 DEBUG nova.compute.provider_tree [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:20:16 np0005593232 podman[351976]: 2026-01-23 10:20:16.781817268 +0000 UTC m=+0.044636280 container create 21b1d5ba0e26d42a309e244170fa37e179ab9f300e2448bdc624bda2f57262de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef2a274e-4da0-400b-bcb7-ccf7f53401c1, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:20:16 np0005593232 systemd[1]: Started libpod-conmon-21b1d5ba0e26d42a309e244170fa37e179ab9f300e2448bdc624bda2f57262de.scope.
Jan 23 05:20:16 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:20:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1275a4a302a20303297c4079d0737845ff7d797cc05e7578c3c26c67b8353ea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:20:16 np0005593232 podman[351976]: 2026-01-23 10:20:16.758515115 +0000 UTC m=+0.021334137 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:20:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:16.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:16 np0005593232 podman[351976]: 2026-01-23 10:20:16.867831843 +0000 UTC m=+0.130650885 container init 21b1d5ba0e26d42a309e244170fa37e179ab9f300e2448bdc624bda2f57262de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef2a274e-4da0-400b-bcb7-ccf7f53401c1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 23 05:20:16 np0005593232 podman[351976]: 2026-01-23 10:20:16.877441146 +0000 UTC m=+0.140260198 container start 21b1d5ba0e26d42a309e244170fa37e179ab9f300e2448bdc624bda2f57262de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef2a274e-4da0-400b-bcb7-ccf7f53401c1, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 05:20:16 np0005593232 neutron-haproxy-ovnmeta-ef2a274e-4da0-400b-bcb7-ccf7f53401c1[351993]: [NOTICE]   (351997) : New worker (351999) forked
Jan 23 05:20:16 np0005593232 neutron-haproxy-ovnmeta-ef2a274e-4da0-400b-bcb7-ccf7f53401c1[351993]: [NOTICE]   (351997) : Loading success.
Jan 23 05:20:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:20:17 np0005593232 nova_compute[250269]: 2026-01-23 10:20:17.246 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:20:17 np0005593232 nova_compute[250269]: 2026-01-23 10:20:17.251 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163615.4463031, 7599824a-e407-425c-9d2a-28db0744adb1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:20:17 np0005593232 nova_compute[250269]: 2026-01-23 10:20:17.251 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:20:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:17.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2723: 321 pgs: 321 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Jan 23 05:20:18 np0005593232 nova_compute[250269]: 2026-01-23 10:20:18.669 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:18.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:19.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2724: 321 pgs: 321 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 21 KiB/s wr, 11 op/s
Jan 23 05:20:20 np0005593232 nova_compute[250269]: 2026-01-23 10:20:20.150 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:20:20 np0005593232 nova_compute[250269]: 2026-01-23 10:20:20.151 250273 DEBUG nova.scheduler.client.report [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:20:20 np0005593232 nova_compute[250269]: 2026-01-23 10:20:20.157 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:20:20 np0005593232 nova_compute[250269]: 2026-01-23 10:20:20.177 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:20:20 np0005593232 nova_compute[250269]: 2026-01-23 10:20:20.386 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:20:20 np0005593232 nova_compute[250269]: 2026-01-23 10:20:20.441 250273 DEBUG oslo_concurrency.lockutils [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 4.259s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:20:20 np0005593232 nova_compute[250269]: 2026-01-23 10:20:20.532 250273 INFO nova.scheduler.client.report [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Deleted allocations for instance 26889fbb-8ea6-4457-897c-0e236ccc4b38#033[00m
Jan 23 05:20:20 np0005593232 nova_compute[250269]: 2026-01-23 10:20:20.650 250273 DEBUG oslo_concurrency.lockutils [None req-e9e44c60-46a3-494a-9f88-1be1b8ac254b ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "26889fbb-8ea6-4457-897c-0e236ccc4b38" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 27.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:20:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:20:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:20.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:20:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:21.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2725: 321 pgs: 321 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 21 KiB/s wr, 11 op/s
Jan 23 05:20:21 np0005593232 nova_compute[250269]: 2026-01-23 10:20:21.653 250273 DEBUG nova.compute.manager [req-0913bed9-3f5a-4254-a27f-98dcdc45575c req-7ec33439-b5aa-4036-97a8-f6f28ef6d2f1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Received event network-vif-plugged-d048f86b-08c3-4963-8743-6566b8a1a571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:20:21 np0005593232 nova_compute[250269]: 2026-01-23 10:20:21.653 250273 DEBUG oslo_concurrency.lockutils [req-0913bed9-3f5a-4254-a27f-98dcdc45575c req-7ec33439-b5aa-4036-97a8-f6f28ef6d2f1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "7599824a-e407-425c-9d2a-28db0744adb1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:20:21 np0005593232 nova_compute[250269]: 2026-01-23 10:20:21.654 250273 DEBUG oslo_concurrency.lockutils [req-0913bed9-3f5a-4254-a27f-98dcdc45575c req-7ec33439-b5aa-4036-97a8-f6f28ef6d2f1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7599824a-e407-425c-9d2a-28db0744adb1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:20:21 np0005593232 nova_compute[250269]: 2026-01-23 10:20:21.654 250273 DEBUG oslo_concurrency.lockutils [req-0913bed9-3f5a-4254-a27f-98dcdc45575c req-7ec33439-b5aa-4036-97a8-f6f28ef6d2f1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7599824a-e407-425c-9d2a-28db0744adb1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:20:21 np0005593232 nova_compute[250269]: 2026-01-23 10:20:21.654 250273 DEBUG nova.compute.manager [req-0913bed9-3f5a-4254-a27f-98dcdc45575c req-7ec33439-b5aa-4036-97a8-f6f28ef6d2f1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Processing event network-vif-plugged-d048f86b-08c3-4963-8743-6566b8a1a571 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:20:21 np0005593232 nova_compute[250269]: 2026-01-23 10:20:21.655 250273 DEBUG nova.compute.manager [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:20:21 np0005593232 nova_compute[250269]: 2026-01-23 10:20:21.659 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163621.659162, 7599824a-e407-425c-9d2a-28db0744adb1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:20:21 np0005593232 nova_compute[250269]: 2026-01-23 10:20:21.659 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:20:21 np0005593232 nova_compute[250269]: 2026-01-23 10:20:21.661 250273 DEBUG nova.virt.libvirt.driver [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:20:21 np0005593232 nova_compute[250269]: 2026-01-23 10:20:21.665 250273 INFO nova.virt.libvirt.driver [-] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Instance spawned successfully.#033[00m
Jan 23 05:20:21 np0005593232 nova_compute[250269]: 2026-01-23 10:20:21.665 250273 DEBUG nova.virt.libvirt.driver [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:20:21 np0005593232 nova_compute[250269]: 2026-01-23 10:20:21.721 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:20:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:22.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:23.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:23 np0005593232 nova_compute[250269]: 2026-01-23 10:20:23.497 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:23.500 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:20:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:23.504 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:20:23 np0005593232 nova_compute[250269]: 2026-01-23 10:20:23.563 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:20:23 np0005593232 nova_compute[250269]: 2026-01-23 10:20:23.568 250273 DEBUG nova.virt.libvirt.driver [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:20:23 np0005593232 nova_compute[250269]: 2026-01-23 10:20:23.568 250273 DEBUG nova.virt.libvirt.driver [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:20:23 np0005593232 nova_compute[250269]: 2026-01-23 10:20:23.569 250273 DEBUG nova.virt.libvirt.driver [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:20:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2726: 321 pgs: 321 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 173 KiB/s rd, 21 KiB/s wr, 16 op/s
Jan 23 05:20:23 np0005593232 nova_compute[250269]: 2026-01-23 10:20:23.569 250273 DEBUG nova.virt.libvirt.driver [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:20:23 np0005593232 nova_compute[250269]: 2026-01-23 10:20:23.570 250273 DEBUG nova.virt.libvirt.driver [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:20:23 np0005593232 nova_compute[250269]: 2026-01-23 10:20:23.570 250273 DEBUG nova.virt.libvirt.driver [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:20:23 np0005593232 nova_compute[250269]: 2026-01-23 10:20:23.574 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:20:23 np0005593232 nova_compute[250269]: 2026-01-23 10:20:23.669 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:24 np0005593232 nova_compute[250269]: 2026-01-23 10:20:24.103 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:20:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:24.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:24 np0005593232 nova_compute[250269]: 2026-01-23 10:20:24.944 250273 INFO nova.compute.manager [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Took 40.87 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:20:24 np0005593232 nova_compute[250269]: 2026-01-23 10:20:24.946 250273 DEBUG nova.compute.manager [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:20:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:25.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2727: 321 pgs: 321 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 477 KiB/s rd, 21 KiB/s wr, 28 op/s
Jan 23 05:20:26 np0005593232 nova_compute[250269]: 2026-01-23 10:20:26.723 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:20:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:26.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:20:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:20:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:27.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2728: 321 pgs: 321 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 73 op/s
Jan 23 05:20:28 np0005593232 nova_compute[250269]: 2026-01-23 10:20:28.705 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:20:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:28.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:20:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:20:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:29.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:20:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2729: 321 pgs: 321 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 8.4 KiB/s wr, 65 op/s
Jan 23 05:20:30 np0005593232 nova_compute[250269]: 2026-01-23 10:20:30.511 250273 DEBUG nova.compute.manager [req-5eb4e5d6-76f7-4d60-b455-b298235b54a6 req-0ad30702-7d64-41fc-b871-483637e7e022 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Received event network-vif-plugged-d048f86b-08c3-4963-8743-6566b8a1a571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:20:30 np0005593232 nova_compute[250269]: 2026-01-23 10:20:30.511 250273 DEBUG oslo_concurrency.lockutils [req-5eb4e5d6-76f7-4d60-b455-b298235b54a6 req-0ad30702-7d64-41fc-b871-483637e7e022 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "7599824a-e407-425c-9d2a-28db0744adb1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:20:30 np0005593232 nova_compute[250269]: 2026-01-23 10:20:30.513 250273 DEBUG oslo_concurrency.lockutils [req-5eb4e5d6-76f7-4d60-b455-b298235b54a6 req-0ad30702-7d64-41fc-b871-483637e7e022 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7599824a-e407-425c-9d2a-28db0744adb1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:20:30 np0005593232 nova_compute[250269]: 2026-01-23 10:20:30.513 250273 DEBUG oslo_concurrency.lockutils [req-5eb4e5d6-76f7-4d60-b455-b298235b54a6 req-0ad30702-7d64-41fc-b871-483637e7e022 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7599824a-e407-425c-9d2a-28db0744adb1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:20:30 np0005593232 nova_compute[250269]: 2026-01-23 10:20:30.514 250273 DEBUG nova.compute.manager [req-5eb4e5d6-76f7-4d60-b455-b298235b54a6 req-0ad30702-7d64-41fc-b871-483637e7e022 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] No waiting events found dispatching network-vif-plugged-d048f86b-08c3-4963-8743-6566b8a1a571 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:20:30 np0005593232 nova_compute[250269]: 2026-01-23 10:20:30.514 250273 WARNING nova.compute.manager [req-5eb4e5d6-76f7-4d60-b455-b298235b54a6 req-0ad30702-7d64-41fc-b871-483637e7e022 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Received unexpected event network-vif-plugged-d048f86b-08c3-4963-8743-6566b8a1a571 for instance with vm_state building and task_state spawning.#033[00m
Jan 23 05:20:30 np0005593232 nova_compute[250269]: 2026-01-23 10:20:30.870 250273 INFO nova.compute.manager [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Took 49.13 seconds to build instance.#033[00m
Jan 23 05:20:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:20:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:30.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:20:30 np0005593232 nova_compute[250269]: 2026-01-23 10:20:30.946 250273 DEBUG oslo_concurrency.lockutils [None req-8f756d31-7f11-42f7-ad80-c7100dd02feb 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "7599824a-e407-425c-9d2a-28db0744adb1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 51.412s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:20:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:31.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2730: 321 pgs: 321 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 63 op/s
Jan 23 05:20:31 np0005593232 nova_compute[250269]: 2026-01-23 10:20:31.726 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:20:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:32.508 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:20:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:32.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:33.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2731: 321 pgs: 321 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 63 op/s
Jan 23 05:20:33 np0005593232 nova_compute[250269]: 2026-01-23 10:20:33.753 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:34.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:35.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2732: 321 pgs: 321 active+clean; 278 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 1.0 MiB/s wr, 73 op/s
Jan 23 05:20:35 np0005593232 ovn_controller[151001]: 2026-01-23T10:20:35Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bd:12:18 10.100.0.14
Jan 23 05:20:35 np0005593232 ovn_controller[151001]: 2026-01-23T10:20:35Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bd:12:18 10.100.0.14
Jan 23 05:20:36 np0005593232 nova_compute[250269]: 2026-01-23 10:20:36.729 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:36.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:20:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:20:37
Jan 23 05:20:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:20:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:20:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['vms', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', '.mgr']
Jan 23 05:20:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:20:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:37.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:37 np0005593232 podman[352069]: 2026-01-23 10:20:37.502086129 +0000 UTC m=+0.145972061 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 05:20:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2733: 321 pgs: 321 active+clean; 292 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 95 op/s
Jan 23 05:20:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:20:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:20:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:20:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:20:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:20:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:20:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:20:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:20:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:20:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:20:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:20:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:20:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:20:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:20:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:20:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:20:38 np0005593232 nova_compute[250269]: 2026-01-23 10:20:38.785 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:20:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:38.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:20:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:39.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2734: 321 pgs: 321 active+clean; 300 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 23 05:20:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:40.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:20:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:41.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:20:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2735: 321 pgs: 321 active+clean; 300 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 23 05:20:41 np0005593232 nova_compute[250269]: 2026-01-23 10:20:41.732 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:41 np0005593232 podman[352122]: 2026-01-23 10:20:41.918127257 +0000 UTC m=+0.052512144 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 23 05:20:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:20:42 np0005593232 nova_compute[250269]: 2026-01-23 10:20:42.516 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:20:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:42.634 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:20:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:42.634 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:20:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:42.635 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:20:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:20:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:20:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:20:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:20:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:20:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:20:42 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 847dc4dd-d9ce-46c6-b761-d93c3ff36435 does not exist
Jan 23 05:20:42 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ac6d8c7a-92f8-4f30-a8a8-3bf536d6ff84 does not exist
Jan 23 05:20:42 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 198d68e1-3347-495d-950a-28b2ab2af15d does not exist
Jan 23 05:20:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:20:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:20:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:20:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:20:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:20:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:20:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:20:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:42.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:20:43 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:20:43 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:20:43 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:20:43 np0005593232 nova_compute[250269]: 2026-01-23 10:20:43.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:20:43 np0005593232 podman[352391]: 2026-01-23 10:20:43.435095249 +0000 UTC m=+0.066522042 container create 407abeb493206caa22b8f7326f3c4a1fb184591dd9ce2d8f17919b1c66acff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:20:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:43.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:43 np0005593232 systemd[1]: Started libpod-conmon-407abeb493206caa22b8f7326f3c4a1fb184591dd9ce2d8f17919b1c66acff33.scope.
Jan 23 05:20:43 np0005593232 podman[352391]: 2026-01-23 10:20:43.407277069 +0000 UTC m=+0.038703902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:20:43 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:20:43 np0005593232 podman[352391]: 2026-01-23 10:20:43.54205546 +0000 UTC m=+0.173482293 container init 407abeb493206caa22b8f7326f3c4a1fb184591dd9ce2d8f17919b1c66acff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 23 05:20:43 np0005593232 podman[352391]: 2026-01-23 10:20:43.549761359 +0000 UTC m=+0.181188122 container start 407abeb493206caa22b8f7326f3c4a1fb184591dd9ce2d8f17919b1c66acff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_golick, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:20:43 np0005593232 podman[352391]: 2026-01-23 10:20:43.554073462 +0000 UTC m=+0.185500305 container attach 407abeb493206caa22b8f7326f3c4a1fb184591dd9ce2d8f17919b1c66acff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_golick, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 23 05:20:43 np0005593232 elegant_golick[352407]: 167 167
Jan 23 05:20:43 np0005593232 systemd[1]: libpod-407abeb493206caa22b8f7326f3c4a1fb184591dd9ce2d8f17919b1c66acff33.scope: Deactivated successfully.
Jan 23 05:20:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2736: 321 pgs: 321 active+clean; 300 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 23 05:20:43 np0005593232 podman[352414]: 2026-01-23 10:20:43.62612255 +0000 UTC m=+0.048881671 container died 407abeb493206caa22b8f7326f3c4a1fb184591dd9ce2d8f17919b1c66acff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 05:20:43 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c36176dcb82081853fa5473dce931bf6423251941ca118e803dd93490be2d3ed-merged.mount: Deactivated successfully.
Jan 23 05:20:43 np0005593232 podman[352414]: 2026-01-23 10:20:43.681433012 +0000 UTC m=+0.104192123 container remove 407abeb493206caa22b8f7326f3c4a1fb184591dd9ce2d8f17919b1c66acff33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:20:43 np0005593232 systemd[1]: libpod-conmon-407abeb493206caa22b8f7326f3c4a1fb184591dd9ce2d8f17919b1c66acff33.scope: Deactivated successfully.
Jan 23 05:20:43 np0005593232 nova_compute[250269]: 2026-01-23 10:20:43.833 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:43 np0005593232 podman[352436]: 2026-01-23 10:20:43.891817083 +0000 UTC m=+0.054956983 container create 0b78a7d92637dc911ebbdebaf66db475a9ac174c6507240d4eced455c6d056cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_johnson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:20:43 np0005593232 systemd[1]: Started libpod-conmon-0b78a7d92637dc911ebbdebaf66db475a9ac174c6507240d4eced455c6d056cb.scope.
Jan 23 05:20:43 np0005593232 podman[352436]: 2026-01-23 10:20:43.869625652 +0000 UTC m=+0.032765542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:20:43 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:20:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f42bfa50f1547e2f3542f6ce08f22acca2838f73aacff34ca010be5a074492/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:20:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f42bfa50f1547e2f3542f6ce08f22acca2838f73aacff34ca010be5a074492/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:20:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f42bfa50f1547e2f3542f6ce08f22acca2838f73aacff34ca010be5a074492/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:20:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f42bfa50f1547e2f3542f6ce08f22acca2838f73aacff34ca010be5a074492/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:20:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f42bfa50f1547e2f3542f6ce08f22acca2838f73aacff34ca010be5a074492/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:20:43 np0005593232 podman[352436]: 2026-01-23 10:20:43.998846255 +0000 UTC m=+0.161986215 container init 0b78a7d92637dc911ebbdebaf66db475a9ac174c6507240d4eced455c6d056cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_johnson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 05:20:44 np0005593232 podman[352436]: 2026-01-23 10:20:44.014958633 +0000 UTC m=+0.178098513 container start 0b78a7d92637dc911ebbdebaf66db475a9ac174c6507240d4eced455c6d056cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_johnson, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:20:44 np0005593232 podman[352436]: 2026-01-23 10:20:44.01869646 +0000 UTC m=+0.181836340 container attach 0b78a7d92637dc911ebbdebaf66db475a9ac174c6507240d4eced455c6d056cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:20:44 np0005593232 goofy_johnson[352454]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:20:44 np0005593232 goofy_johnson[352454]: --> relative data size: 1.0
Jan 23 05:20:44 np0005593232 goofy_johnson[352454]: --> All data devices are unavailable
Jan 23 05:20:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:20:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:44.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:20:44 np0005593232 systemd[1]: libpod-0b78a7d92637dc911ebbdebaf66db475a9ac174c6507240d4eced455c6d056cb.scope: Deactivated successfully.
Jan 23 05:20:44 np0005593232 podman[352436]: 2026-01-23 10:20:44.904605664 +0000 UTC m=+1.067745634 container died 0b78a7d92637dc911ebbdebaf66db475a9ac174c6507240d4eced455c6d056cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:20:44 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f0f42bfa50f1547e2f3542f6ce08f22acca2838f73aacff34ca010be5a074492-merged.mount: Deactivated successfully.
Jan 23 05:20:44 np0005593232 podman[352436]: 2026-01-23 10:20:44.977492056 +0000 UTC m=+1.140631936 container remove 0b78a7d92637dc911ebbdebaf66db475a9ac174c6507240d4eced455c6d056cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 05:20:44 np0005593232 systemd[1]: libpod-conmon-0b78a7d92637dc911ebbdebaf66db475a9ac174c6507240d4eced455c6d056cb.scope: Deactivated successfully.
Jan 23 05:20:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:20:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3433986020' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:20:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:20:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3433986020' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:20:45 np0005593232 nova_compute[250269]: 2026-01-23 10:20:45.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:20:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:45.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2737: 321 pgs: 321 active+clean; 327 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 84 op/s
Jan 23 05:20:45 np0005593232 podman[352621]: 2026-01-23 10:20:45.715379592 +0000 UTC m=+0.072678577 container create b0716eefe3f80d4956aaf38e7d5194900ad9b371b8ca8f1c21bcae867434e077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 05:20:45 np0005593232 systemd[1]: Started libpod-conmon-b0716eefe3f80d4956aaf38e7d5194900ad9b371b8ca8f1c21bcae867434e077.scope.
Jan 23 05:20:45 np0005593232 podman[352621]: 2026-01-23 10:20:45.682027224 +0000 UTC m=+0.039326259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:20:45 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:20:45 np0005593232 podman[352621]: 2026-01-23 10:20:45.823829445 +0000 UTC m=+0.181128440 container init b0716eefe3f80d4956aaf38e7d5194900ad9b371b8ca8f1c21bcae867434e077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_golick, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 23 05:20:45 np0005593232 podman[352621]: 2026-01-23 10:20:45.836014511 +0000 UTC m=+0.193313506 container start b0716eefe3f80d4956aaf38e7d5194900ad9b371b8ca8f1c21bcae867434e077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_golick, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:20:45 np0005593232 podman[352621]: 2026-01-23 10:20:45.840391166 +0000 UTC m=+0.197690151 container attach b0716eefe3f80d4956aaf38e7d5194900ad9b371b8ca8f1c21bcae867434e077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_golick, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:20:45 np0005593232 jolly_golick[352637]: 167 167
Jan 23 05:20:45 np0005593232 systemd[1]: libpod-b0716eefe3f80d4956aaf38e7d5194900ad9b371b8ca8f1c21bcae867434e077.scope: Deactivated successfully.
Jan 23 05:20:45 np0005593232 podman[352642]: 2026-01-23 10:20:45.8950613 +0000 UTC m=+0.035225622 container died b0716eefe3f80d4956aaf38e7d5194900ad9b371b8ca8f1c21bcae867434e077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_golick, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:20:45 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5b9d320a7cc3e7f38dc0432951c6aea76c98f09006c953a009fa01d9b9ef37ed-merged.mount: Deactivated successfully.
Jan 23 05:20:45 np0005593232 podman[352642]: 2026-01-23 10:20:45.970203966 +0000 UTC m=+0.110368288 container remove b0716eefe3f80d4956aaf38e7d5194900ad9b371b8ca8f1c21bcae867434e077 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_golick, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:20:45 np0005593232 systemd[1]: libpod-conmon-b0716eefe3f80d4956aaf38e7d5194900ad9b371b8ca8f1c21bcae867434e077.scope: Deactivated successfully.
Jan 23 05:20:46 np0005593232 NetworkManager[49057]: <info>  [1769163646.1671] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/282)
Jan 23 05:20:46 np0005593232 NetworkManager[49057]: <info>  [1769163646.1683] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/283)
Jan 23 05:20:46 np0005593232 nova_compute[250269]: 2026-01-23 10:20:46.166 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:46 np0005593232 podman[352665]: 2026-01-23 10:20:46.228490228 +0000 UTC m=+0.053042009 container create 03710cf4707ebeeb56ddf5b28283caaef3d12bf87382ca89aa5a2a86514e2678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jennings, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:20:46 np0005593232 nova_compute[250269]: 2026-01-23 10:20:46.264 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:46 np0005593232 ovn_controller[151001]: 2026-01-23T10:20:46Z|00601|binding|INFO|Releasing lport 4dd3507c-09b2-4097-8357-2c398ef8e03c from this chassis (sb_readonly=0)
Jan 23 05:20:46 np0005593232 systemd[1]: Started libpod-conmon-03710cf4707ebeeb56ddf5b28283caaef3d12bf87382ca89aa5a2a86514e2678.scope.
Jan 23 05:20:46 np0005593232 nova_compute[250269]: 2026-01-23 10:20:46.294 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:46 np0005593232 podman[352665]: 2026-01-23 10:20:46.213123681 +0000 UTC m=+0.037675492 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:20:46 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:20:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ac00275dc9b185d7631b5a3003e47eb677932c68eae00d3fd6886dc7875066/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:20:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ac00275dc9b185d7631b5a3003e47eb677932c68eae00d3fd6886dc7875066/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:20:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ac00275dc9b185d7631b5a3003e47eb677932c68eae00d3fd6886dc7875066/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:20:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24ac00275dc9b185d7631b5a3003e47eb677932c68eae00d3fd6886dc7875066/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:20:46 np0005593232 podman[352665]: 2026-01-23 10:20:46.369060654 +0000 UTC m=+0.193612485 container init 03710cf4707ebeeb56ddf5b28283caaef3d12bf87382ca89aa5a2a86514e2678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jennings, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:20:46 np0005593232 podman[352665]: 2026-01-23 10:20:46.376848325 +0000 UTC m=+0.201400136 container start 03710cf4707ebeeb56ddf5b28283caaef3d12bf87382ca89aa5a2a86514e2678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jennings, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:20:46 np0005593232 podman[352665]: 2026-01-23 10:20:46.392934582 +0000 UTC m=+0.217486373 container attach 03710cf4707ebeeb56ddf5b28283caaef3d12bf87382ca89aa5a2a86514e2678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jennings, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:20:46 np0005593232 nova_compute[250269]: 2026-01-23 10:20:46.734 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:20:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:46.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:20:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:20:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:20:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:20:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:20:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007100934359296482 of space, bias 1.0, pg target 2.1302803077889445 quantized to 32 (current 32)
Jan 23 05:20:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:20:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Jan 23 05:20:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:20:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:20:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:20:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8506777231062721 quantized to 32 (current 32)
Jan 23 05:20:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:20:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 23 05:20:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:20:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:20:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:20:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 23 05:20:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:20:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 23 05:20:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:20:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:20:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:20:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]: {
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:    "0": [
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:        {
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:            "devices": [
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:                "/dev/loop3"
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:            ],
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:            "lv_name": "ceph_lv0",
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:            "lv_size": "7511998464",
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:            "name": "ceph_lv0",
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:            "tags": {
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:                "ceph.cluster_name": "ceph",
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:                "ceph.crush_device_class": "",
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:                "ceph.encrypted": "0",
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:                "ceph.osd_id": "0",
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:                "ceph.type": "block",
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:                "ceph.vdo": "0"
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:            },
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:            "type": "block",
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:            "vg_name": "ceph_vg0"
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:        }
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]:    ]
Jan 23 05:20:47 np0005593232 gifted_jennings[352682]: }
Jan 23 05:20:47 np0005593232 systemd[1]: libpod-03710cf4707ebeeb56ddf5b28283caaef3d12bf87382ca89aa5a2a86514e2678.scope: Deactivated successfully.
Jan 23 05:20:47 np0005593232 podman[352665]: 2026-01-23 10:20:47.261553975 +0000 UTC m=+1.086105756 container died 03710cf4707ebeeb56ddf5b28283caaef3d12bf87382ca89aa5a2a86514e2678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:20:47 np0005593232 systemd[1]: var-lib-containers-storage-overlay-24ac00275dc9b185d7631b5a3003e47eb677932c68eae00d3fd6886dc7875066-merged.mount: Deactivated successfully.
Jan 23 05:20:47 np0005593232 podman[352665]: 2026-01-23 10:20:47.327902611 +0000 UTC m=+1.152454392 container remove 03710cf4707ebeeb56ddf5b28283caaef3d12bf87382ca89aa5a2a86514e2678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jennings, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 23 05:20:47 np0005593232 systemd[1]: libpod-conmon-03710cf4707ebeeb56ddf5b28283caaef3d12bf87382ca89aa5a2a86514e2678.scope: Deactivated successfully.
Jan 23 05:20:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:20:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:47.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:20:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2738: 321 pgs: 321 active+clean; 346 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.9 MiB/s wr, 82 op/s
Jan 23 05:20:48 np0005593232 podman[352847]: 2026-01-23 10:20:48.029583688 +0000 UTC m=+0.062186439 container create ab08e84a96311e1c2fdd5134fed47aa2926689807b951a691e18934a33296385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 05:20:48 np0005593232 systemd[1]: Started libpod-conmon-ab08e84a96311e1c2fdd5134fed47aa2926689807b951a691e18934a33296385.scope.
Jan 23 05:20:48 np0005593232 podman[352847]: 2026-01-23 10:20:48.006724778 +0000 UTC m=+0.039327619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:20:48 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:20:48 np0005593232 podman[352847]: 2026-01-23 10:20:48.11936885 +0000 UTC m=+0.151971621 container init ab08e84a96311e1c2fdd5134fed47aa2926689807b951a691e18934a33296385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hugle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:20:48 np0005593232 podman[352847]: 2026-01-23 10:20:48.125623938 +0000 UTC m=+0.158226719 container start ab08e84a96311e1c2fdd5134fed47aa2926689807b951a691e18934a33296385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 05:20:48 np0005593232 systemd[1]: libpod-ab08e84a96311e1c2fdd5134fed47aa2926689807b951a691e18934a33296385.scope: Deactivated successfully.
Jan 23 05:20:48 np0005593232 awesome_hugle[352863]: 167 167
Jan 23 05:20:48 np0005593232 conmon[352863]: conmon ab08e84a96311e1c2fdd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ab08e84a96311e1c2fdd5134fed47aa2926689807b951a691e18934a33296385.scope/container/memory.events
Jan 23 05:20:48 np0005593232 podman[352847]: 2026-01-23 10:20:48.133929994 +0000 UTC m=+0.166532775 container attach ab08e84a96311e1c2fdd5134fed47aa2926689807b951a691e18934a33296385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hugle, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:20:48 np0005593232 podman[352847]: 2026-01-23 10:20:48.134373277 +0000 UTC m=+0.166976038 container died ab08e84a96311e1c2fdd5134fed47aa2926689807b951a691e18934a33296385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 05:20:48 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a046f839553ed8aae7c87abcf28bbfda58f03cf68c74fa776b6e8ee5e575dc35-merged.mount: Deactivated successfully.
Jan 23 05:20:48 np0005593232 podman[352847]: 2026-01-23 10:20:48.178502371 +0000 UTC m=+0.211105142 container remove ab08e84a96311e1c2fdd5134fed47aa2926689807b951a691e18934a33296385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_hugle, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 23 05:20:48 np0005593232 systemd[1]: libpod-conmon-ab08e84a96311e1c2fdd5134fed47aa2926689807b951a691e18934a33296385.scope: Deactivated successfully.
Jan 23 05:20:48 np0005593232 podman[352886]: 2026-01-23 10:20:48.368275536 +0000 UTC m=+0.045007590 container create 9fbed7872f6741046cbdf28252509553029166bc7e56dc5832d74ed054732faa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_noyce, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:20:48 np0005593232 systemd[1]: Started libpod-conmon-9fbed7872f6741046cbdf28252509553029166bc7e56dc5832d74ed054732faa.scope.
Jan 23 05:20:48 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:20:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/773fdf32fe6dfccfe2e9727a2faa0b629b84aa7566a8354ab1e39cecdf27075f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:20:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/773fdf32fe6dfccfe2e9727a2faa0b629b84aa7566a8354ab1e39cecdf27075f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:20:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/773fdf32fe6dfccfe2e9727a2faa0b629b84aa7566a8354ab1e39cecdf27075f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:20:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/773fdf32fe6dfccfe2e9727a2faa0b629b84aa7566a8354ab1e39cecdf27075f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:20:48 np0005593232 podman[352886]: 2026-01-23 10:20:48.347586788 +0000 UTC m=+0.024318812 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:20:48 np0005593232 podman[352886]: 2026-01-23 10:20:48.447242301 +0000 UTC m=+0.123974335 container init 9fbed7872f6741046cbdf28252509553029166bc7e56dc5832d74ed054732faa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_noyce, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 05:20:48 np0005593232 podman[352886]: 2026-01-23 10:20:48.45566009 +0000 UTC m=+0.132392114 container start 9fbed7872f6741046cbdf28252509553029166bc7e56dc5832d74ed054732faa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_noyce, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 05:20:48 np0005593232 podman[352886]: 2026-01-23 10:20:48.459303184 +0000 UTC m=+0.136035258 container attach 9fbed7872f6741046cbdf28252509553029166bc7e56dc5832d74ed054732faa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:20:48 np0005593232 nova_compute[250269]: 2026-01-23 10:20:48.871 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:20:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:48.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:49 np0005593232 trusting_noyce[352902]: {
Jan 23 05:20:49 np0005593232 trusting_noyce[352902]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:20:49 np0005593232 trusting_noyce[352902]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:20:49 np0005593232 trusting_noyce[352902]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:20:49 np0005593232 trusting_noyce[352902]:        "osd_id": 0,
Jan 23 05:20:49 np0005593232 trusting_noyce[352902]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:20:49 np0005593232 trusting_noyce[352902]:        "type": "bluestore"
Jan 23 05:20:49 np0005593232 trusting_noyce[352902]:    }
Jan 23 05:20:49 np0005593232 trusting_noyce[352902]: }
Jan 23 05:20:49 np0005593232 nova_compute[250269]: 2026-01-23 10:20:49.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:20:49 np0005593232 nova_compute[250269]: 2026-01-23 10:20:49.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 05:20:49 np0005593232 systemd[1]: libpod-9fbed7872f6741046cbdf28252509553029166bc7e56dc5832d74ed054732faa.scope: Deactivated successfully.
Jan 23 05:20:49 np0005593232 podman[352886]: 2026-01-23 10:20:49.302208715 +0000 UTC m=+0.978940799 container died 9fbed7872f6741046cbdf28252509553029166bc7e56dc5832d74ed054732faa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_noyce, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 05:20:49 np0005593232 systemd[1]: var-lib-containers-storage-overlay-773fdf32fe6dfccfe2e9727a2faa0b629b84aa7566a8354ab1e39cecdf27075f-merged.mount: Deactivated successfully.
Jan 23 05:20:49 np0005593232 podman[352886]: 2026-01-23 10:20:49.376911309 +0000 UTC m=+1.053643343 container remove 9fbed7872f6741046cbdf28252509553029166bc7e56dc5832d74ed054732faa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:20:49 np0005593232 systemd[1]: libpod-conmon-9fbed7872f6741046cbdf28252509553029166bc7e56dc5832d74ed054732faa.scope: Deactivated successfully.
Jan 23 05:20:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:20:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:20:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:20:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:49.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:20:49 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d200ea79-e804-4f44-a9df-96a751a050a0 does not exist
Jan 23 05:20:49 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c0dae675-753d-4c64-9649-c37902e4e70e does not exist
Jan 23 05:20:49 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 863e4d5e-b058-4050-90c7-1486b6fc03da does not exist
Jan 23 05:20:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2739: 321 pgs: 321 active+clean; 373 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 3.0 MiB/s wr, 62 op/s
Jan 23 05:20:50 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:20:50 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:20:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:20:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:50.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:20:51 np0005593232 nova_compute[250269]: 2026-01-23 10:20:51.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:20:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:51.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2740: 321 pgs: 321 active+clean; 373 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.0 MiB/s wr, 46 op/s
Jan 23 05:20:51 np0005593232 nova_compute[250269]: 2026-01-23 10:20:51.655 250273 DEBUG nova.compute.manager [req-261ee281-db82-4146-9db2-8c332e88b60f req-c513263c-be51-4341-b919-a3aefb0d143f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Received event network-changed-d048f86b-08c3-4963-8743-6566b8a1a571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:20:51 np0005593232 nova_compute[250269]: 2026-01-23 10:20:51.655 250273 DEBUG nova.compute.manager [req-261ee281-db82-4146-9db2-8c332e88b60f req-c513263c-be51-4341-b919-a3aefb0d143f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Refreshing instance network info cache due to event network-changed-d048f86b-08c3-4963-8743-6566b8a1a571. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:20:51 np0005593232 nova_compute[250269]: 2026-01-23 10:20:51.656 250273 DEBUG oslo_concurrency.lockutils [req-261ee281-db82-4146-9db2-8c332e88b60f req-c513263c-be51-4341-b919-a3aefb0d143f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-7599824a-e407-425c-9d2a-28db0744adb1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:20:51 np0005593232 nova_compute[250269]: 2026-01-23 10:20:51.656 250273 DEBUG oslo_concurrency.lockutils [req-261ee281-db82-4146-9db2-8c332e88b60f req-c513263c-be51-4341-b919-a3aefb0d143f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-7599824a-e407-425c-9d2a-28db0744adb1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:20:51 np0005593232 nova_compute[250269]: 2026-01-23 10:20:51.656 250273 DEBUG nova.network.neutron [req-261ee281-db82-4146-9db2-8c332e88b60f req-c513263c-be51-4341-b919-a3aefb0d143f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Refreshing network info cache for port d048f86b-08c3-4963-8743-6566b8a1a571 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:20:51 np0005593232 nova_compute[250269]: 2026-01-23 10:20:51.738 250273 INFO nova.compute.manager [None req-e3a3b887-09bb-4313-8c2e-718da250113c 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Get console output#033[00m
Jan 23 05:20:51 np0005593232 nova_compute[250269]: 2026-01-23 10:20:51.739 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:51 np0005593232 nova_compute[250269]: 2026-01-23 10:20:51.746 312104 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 23 05:20:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:20:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:52.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:53 np0005593232 nova_compute[250269]: 2026-01-23 10:20:53.260 250273 DEBUG oslo_concurrency.lockutils [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "7599824a-e407-425c-9d2a-28db0744adb1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:20:53 np0005593232 nova_compute[250269]: 2026-01-23 10:20:53.261 250273 DEBUG oslo_concurrency.lockutils [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "7599824a-e407-425c-9d2a-28db0744adb1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:20:53 np0005593232 nova_compute[250269]: 2026-01-23 10:20:53.261 250273 DEBUG oslo_concurrency.lockutils [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "7599824a-e407-425c-9d2a-28db0744adb1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:20:53 np0005593232 nova_compute[250269]: 2026-01-23 10:20:53.261 250273 DEBUG oslo_concurrency.lockutils [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "7599824a-e407-425c-9d2a-28db0744adb1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:20:53 np0005593232 nova_compute[250269]: 2026-01-23 10:20:53.262 250273 DEBUG oslo_concurrency.lockutils [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "7599824a-e407-425c-9d2a-28db0744adb1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:20:53 np0005593232 nova_compute[250269]: 2026-01-23 10:20:53.263 250273 INFO nova.compute.manager [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Terminating instance#033[00m
Jan 23 05:20:53 np0005593232 nova_compute[250269]: 2026-01-23 10:20:53.264 250273 DEBUG nova.compute.manager [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:20:53 np0005593232 kernel: tapd048f86b-08 (unregistering): left promiscuous mode
Jan 23 05:20:53 np0005593232 NetworkManager[49057]: <info>  [1769163653.3207] device (tapd048f86b-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:20:53 np0005593232 ovn_controller[151001]: 2026-01-23T10:20:53Z|00602|binding|INFO|Releasing lport d048f86b-08c3-4963-8743-6566b8a1a571 from this chassis (sb_readonly=0)
Jan 23 05:20:53 np0005593232 ovn_controller[151001]: 2026-01-23T10:20:53Z|00603|binding|INFO|Setting lport d048f86b-08c3-4963-8743-6566b8a1a571 down in Southbound
Jan 23 05:20:53 np0005593232 ovn_controller[151001]: 2026-01-23T10:20:53Z|00604|binding|INFO|Removing iface tapd048f86b-08 ovn-installed in OVS
Jan 23 05:20:53 np0005593232 nova_compute[250269]: 2026-01-23 10:20:53.336 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:53.349 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:12:18 10.100.0.14'], port_security=['fa:16:3e:bd:12:18 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7599824a-e407-425c-9d2a-28db0744adb1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef2a274e-4da0-400b-bcb7-ccf7f53401c1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98c94577fcdb4c3d893898ede79ea2d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '641b8665-1bd0-4720-8aa5-a1f02bd1ccbe', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=211b0b88-3773-4743-9d70-bd2a35ab028e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=d048f86b-08c3-4963-8743-6566b8a1a571) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:20:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:53.350 161902 INFO neutron.agent.ovn.metadata.agent [-] Port d048f86b-08c3-4963-8743-6566b8a1a571 in datapath ef2a274e-4da0-400b-bcb7-ccf7f53401c1 unbound from our chassis#033[00m
Jan 23 05:20:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:53.351 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ef2a274e-4da0-400b-bcb7-ccf7f53401c1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:20:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:53.353 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[85127218-1ff1-42bb-bb18-51ef9d5ea431]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:20:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:53.354 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ef2a274e-4da0-400b-bcb7-ccf7f53401c1 namespace which is not needed anymore#033[00m
Jan 23 05:20:53 np0005593232 nova_compute[250269]: 2026-01-23 10:20:53.368 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:53 np0005593232 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000099.scope: Deactivated successfully.
Jan 23 05:20:53 np0005593232 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d00000099.scope: Consumed 15.776s CPU time.
Jan 23 05:20:53 np0005593232 systemd-machined[215836]: Machine qemu-68-instance-00000099 terminated.
Jan 23 05:20:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:53.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:53 np0005593232 nova_compute[250269]: 2026-01-23 10:20:53.532 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:53 np0005593232 neutron-haproxy-ovnmeta-ef2a274e-4da0-400b-bcb7-ccf7f53401c1[351993]: [NOTICE]   (351997) : haproxy version is 2.8.14-c23fe91
Jan 23 05:20:53 np0005593232 neutron-haproxy-ovnmeta-ef2a274e-4da0-400b-bcb7-ccf7f53401c1[351993]: [NOTICE]   (351997) : path to executable is /usr/sbin/haproxy
Jan 23 05:20:53 np0005593232 neutron-haproxy-ovnmeta-ef2a274e-4da0-400b-bcb7-ccf7f53401c1[351993]: [WARNING]  (351997) : Exiting Master process...
Jan 23 05:20:53 np0005593232 neutron-haproxy-ovnmeta-ef2a274e-4da0-400b-bcb7-ccf7f53401c1[351993]: [ALERT]    (351997) : Current worker (351999) exited with code 143 (Terminated)
Jan 23 05:20:53 np0005593232 neutron-haproxy-ovnmeta-ef2a274e-4da0-400b-bcb7-ccf7f53401c1[351993]: [WARNING]  (351997) : All workers exited. Exiting... (0)
Jan 23 05:20:53 np0005593232 systemd[1]: libpod-21b1d5ba0e26d42a309e244170fa37e179ab9f300e2448bdc624bda2f57262de.scope: Deactivated successfully.
Jan 23 05:20:53 np0005593232 nova_compute[250269]: 2026-01-23 10:20:53.545 250273 INFO nova.virt.libvirt.driver [-] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Instance destroyed successfully.#033[00m
Jan 23 05:20:53 np0005593232 nova_compute[250269]: 2026-01-23 10:20:53.546 250273 DEBUG nova.objects.instance [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lazy-loading 'resources' on Instance uuid 7599824a-e407-425c-9d2a-28db0744adb1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:20:53 np0005593232 podman[353064]: 2026-01-23 10:20:53.54998264 +0000 UTC m=+0.098689837 container died 21b1d5ba0e26d42a309e244170fa37e179ab9f300e2448bdc624bda2f57262de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef2a274e-4da0-400b-bcb7-ccf7f53401c1, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:20:53 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-21b1d5ba0e26d42a309e244170fa37e179ab9f300e2448bdc624bda2f57262de-userdata-shm.mount: Deactivated successfully.
Jan 23 05:20:53 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b1275a4a302a20303297c4079d0737845ff7d797cc05e7578c3c26c67b8353ea-merged.mount: Deactivated successfully.
Jan 23 05:20:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2741: 321 pgs: 321 active+clean; 392 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 61 op/s
Jan 23 05:20:53 np0005593232 podman[353064]: 2026-01-23 10:20:53.598632582 +0000 UTC m=+0.147339759 container cleanup 21b1d5ba0e26d42a309e244170fa37e179ab9f300e2448bdc624bda2f57262de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef2a274e-4da0-400b-bcb7-ccf7f53401c1, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 23 05:20:53 np0005593232 nova_compute[250269]: 2026-01-23 10:20:53.601 250273 DEBUG nova.virt.libvirt.vif [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:19:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-984675457',display_name='tempest-TestNetworkBasicOps-server-984675457',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-984675457',id=153,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCK1nM+eHevP0D1+qwz2Lwd9FCCupUyTEuZC5XAb2ITgMd9JK8Wsp66qF0mr4mibxTKRMv/w7qXU01TwS1bHnFdS3XtvDRoLHy2ERucHi8SRKNY0fLgC3FOWjksab2YQuQ==',key_name='tempest-TestNetworkBasicOps-1652034461',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:20:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='98c94577fcdb4c3d893898ede79ea2d4',ramdisk_id='',reservation_id='r-q98hh0ig',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-789276745',owner_user_name='tempest-TestNetworkBasicOps-789276745-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:20:30Z,user_data=None,user_id='60291ce86b6946629a2e48f6680312cb',uuid=7599824a-e407-425c-9d2a-28db0744adb1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d048f86b-08c3-4963-8743-6566b8a1a571", "address": "fa:16:3e:bd:12:18", "network": {"id": "ef2a274e-4da0-400b-bcb7-ccf7f53401c1", "bridge": "br-int", "label": "tempest-network-smoke--403303166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd048f86b-08", "ovs_interfaceid": "d048f86b-08c3-4963-8743-6566b8a1a571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:20:53 np0005593232 nova_compute[250269]: 2026-01-23 10:20:53.602 250273 DEBUG nova.network.os_vif_util [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converting VIF {"id": "d048f86b-08c3-4963-8743-6566b8a1a571", "address": "fa:16:3e:bd:12:18", "network": {"id": "ef2a274e-4da0-400b-bcb7-ccf7f53401c1", "bridge": "br-int", "label": "tempest-network-smoke--403303166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd048f86b-08", "ovs_interfaceid": "d048f86b-08c3-4963-8743-6566b8a1a571", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:20:53 np0005593232 nova_compute[250269]: 2026-01-23 10:20:53.603 250273 DEBUG nova.network.os_vif_util [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:12:18,bridge_name='br-int',has_traffic_filtering=True,id=d048f86b-08c3-4963-8743-6566b8a1a571,network=Network(ef2a274e-4da0-400b-bcb7-ccf7f53401c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd048f86b-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:20:53 np0005593232 nova_compute[250269]: 2026-01-23 10:20:53.603 250273 DEBUG os_vif [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:12:18,bridge_name='br-int',has_traffic_filtering=True,id=d048f86b-08c3-4963-8743-6566b8a1a571,network=Network(ef2a274e-4da0-400b-bcb7-ccf7f53401c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd048f86b-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:20:53 np0005593232 nova_compute[250269]: 2026-01-23 10:20:53.605 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:53 np0005593232 nova_compute[250269]: 2026-01-23 10:20:53.605 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd048f86b-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:20:53 np0005593232 systemd[1]: libpod-conmon-21b1d5ba0e26d42a309e244170fa37e179ab9f300e2448bdc624bda2f57262de.scope: Deactivated successfully.
Jan 23 05:20:53 np0005593232 nova_compute[250269]: 2026-01-23 10:20:53.609 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:53 np0005593232 nova_compute[250269]: 2026-01-23 10:20:53.612 250273 INFO os_vif [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:12:18,bridge_name='br-int',has_traffic_filtering=True,id=d048f86b-08c3-4963-8743-6566b8a1a571,network=Network(ef2a274e-4da0-400b-bcb7-ccf7f53401c1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd048f86b-08')#033[00m
Jan 23 05:20:53 np0005593232 podman[353103]: 2026-01-23 10:20:53.662699113 +0000 UTC m=+0.043264050 container remove 21b1d5ba0e26d42a309e244170fa37e179ab9f300e2448bdc624bda2f57262de (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef2a274e-4da0-400b-bcb7-ccf7f53401c1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 05:20:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:53.669 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[85800556-37bd-4fd6-9b5c-d2c0bf91af6f]: (4, ('Fri Jan 23 10:20:53 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ef2a274e-4da0-400b-bcb7-ccf7f53401c1 (21b1d5ba0e26d42a309e244170fa37e179ab9f300e2448bdc624bda2f57262de)\n21b1d5ba0e26d42a309e244170fa37e179ab9f300e2448bdc624bda2f57262de\nFri Jan 23 10:20:53 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ef2a274e-4da0-400b-bcb7-ccf7f53401c1 (21b1d5ba0e26d42a309e244170fa37e179ab9f300e2448bdc624bda2f57262de)\n21b1d5ba0e26d42a309e244170fa37e179ab9f300e2448bdc624bda2f57262de\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:20:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:53.672 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[03e6c727-4a35-4845-a961-917724ec4144]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:20:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:53.673 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef2a274e-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:20:53 np0005593232 nova_compute[250269]: 2026-01-23 10:20:53.675 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:53 np0005593232 kernel: tapef2a274e-40: left promiscuous mode
Jan 23 05:20:53 np0005593232 nova_compute[250269]: 2026-01-23 10:20:53.688 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:53.692 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[752e26e7-cddd-44f7-b096-f9b84362cf46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:20:53 np0005593232 nova_compute[250269]: 2026-01-23 10:20:53.926 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:53.932 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[08c8470d-55a1-4735-aabe-e3ef0b11cde6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:20:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:53.934 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[31b0c386-7a3f-4b16-8006-e78053c4e911]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:20:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:53.952 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7bee3616-c887-48af-94d7-4ce027c036a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 752928, 'reachable_time': 32465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353135, 'error': None, 'target': 'ovnmeta-ef2a274e-4da0-400b-bcb7-ccf7f53401c1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:20:53 np0005593232 systemd[1]: run-netns-ovnmeta\x2def2a274e\x2d4da0\x2d400b\x2dbcb7\x2dccf7f53401c1.mount: Deactivated successfully.
Jan 23 05:20:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:53.958 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ef2a274e-4da0-400b-bcb7-ccf7f53401c1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:20:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:20:53.959 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[ac05d813-11d6-45e3-a221-e203debb0b0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:20:54 np0005593232 nova_compute[250269]: 2026-01-23 10:20:54.652 250273 INFO nova.virt.libvirt.driver [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Deleting instance files /var/lib/nova/instances/7599824a-e407-425c-9d2a-28db0744adb1_del#033[00m
Jan 23 05:20:54 np0005593232 nova_compute[250269]: 2026-01-23 10:20:54.653 250273 INFO nova.virt.libvirt.driver [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Deletion of /var/lib/nova/instances/7599824a-e407-425c-9d2a-28db0744adb1_del complete#033[00m
Jan 23 05:20:54 np0005593232 nova_compute[250269]: 2026-01-23 10:20:54.776 250273 INFO nova.compute.manager [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Took 1.51 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:20:54 np0005593232 nova_compute[250269]: 2026-01-23 10:20:54.777 250273 DEBUG oslo.service.loopingcall [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:20:54 np0005593232 nova_compute[250269]: 2026-01-23 10:20:54.777 250273 DEBUG nova.compute.manager [-] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:20:54 np0005593232 nova_compute[250269]: 2026-01-23 10:20:54.777 250273 DEBUG nova.network.neutron [-] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:20:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:20:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:54.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:20:54 np0005593232 nova_compute[250269]: 2026-01-23 10:20:54.980 250273 DEBUG nova.compute.manager [req-7d5998a6-200a-4a88-af26-00dd9e59a883 req-b3433cfa-fb43-43d9-85c4-a4d0060f6a08 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Received event network-vif-unplugged-d048f86b-08c3-4963-8743-6566b8a1a571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:20:54 np0005593232 nova_compute[250269]: 2026-01-23 10:20:54.981 250273 DEBUG oslo_concurrency.lockutils [req-7d5998a6-200a-4a88-af26-00dd9e59a883 req-b3433cfa-fb43-43d9-85c4-a4d0060f6a08 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "7599824a-e407-425c-9d2a-28db0744adb1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:20:54 np0005593232 nova_compute[250269]: 2026-01-23 10:20:54.982 250273 DEBUG oslo_concurrency.lockutils [req-7d5998a6-200a-4a88-af26-00dd9e59a883 req-b3433cfa-fb43-43d9-85c4-a4d0060f6a08 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7599824a-e407-425c-9d2a-28db0744adb1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:20:54 np0005593232 nova_compute[250269]: 2026-01-23 10:20:54.982 250273 DEBUG oslo_concurrency.lockutils [req-7d5998a6-200a-4a88-af26-00dd9e59a883 req-b3433cfa-fb43-43d9-85c4-a4d0060f6a08 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7599824a-e407-425c-9d2a-28db0744adb1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:20:54 np0005593232 nova_compute[250269]: 2026-01-23 10:20:54.983 250273 DEBUG nova.compute.manager [req-7d5998a6-200a-4a88-af26-00dd9e59a883 req-b3433cfa-fb43-43d9-85c4-a4d0060f6a08 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] No waiting events found dispatching network-vif-unplugged-d048f86b-08c3-4963-8743-6566b8a1a571 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:20:54 np0005593232 nova_compute[250269]: 2026-01-23 10:20:54.984 250273 DEBUG nova.compute.manager [req-7d5998a6-200a-4a88-af26-00dd9e59a883 req-b3433cfa-fb43-43d9-85c4-a4d0060f6a08 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Received event network-vif-unplugged-d048f86b-08c3-4963-8743-6566b8a1a571 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:20:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:20:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:55.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:20:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2742: 321 pgs: 321 active+clean; 360 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 3.6 MiB/s wr, 59 op/s
Jan 23 05:20:56 np0005593232 nova_compute[250269]: 2026-01-23 10:20:56.603 250273 DEBUG nova.network.neutron [req-261ee281-db82-4146-9db2-8c332e88b60f req-c513263c-be51-4341-b919-a3aefb0d143f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Updated VIF entry in instance network info cache for port d048f86b-08c3-4963-8743-6566b8a1a571. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:20:56 np0005593232 nova_compute[250269]: 2026-01-23 10:20:56.604 250273 DEBUG nova.network.neutron [req-261ee281-db82-4146-9db2-8c332e88b60f req-c513263c-be51-4341-b919-a3aefb0d143f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Updating instance_info_cache with network_info: [{"id": "d048f86b-08c3-4963-8743-6566b8a1a571", "address": "fa:16:3e:bd:12:18", "network": {"id": "ef2a274e-4da0-400b-bcb7-ccf7f53401c1", "bridge": "br-int", "label": "tempest-network-smoke--403303166", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd048f86b-08", "ovs_interfaceid": "d048f86b-08c3-4963-8743-6566b8a1a571", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:20:56 np0005593232 nova_compute[250269]: 2026-01-23 10:20:56.641 250273 DEBUG oslo_concurrency.lockutils [req-261ee281-db82-4146-9db2-8c332e88b60f req-c513263c-be51-4341-b919-a3aefb0d143f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-7599824a-e407-425c-9d2a-28db0744adb1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:20:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:20:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:56.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:20:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:20:57 np0005593232 nova_compute[250269]: 2026-01-23 10:20:57.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:20:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:20:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:57.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:20:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2743: 321 pgs: 321 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 2.5 MiB/s wr, 65 op/s
Jan 23 05:20:57 np0005593232 nova_compute[250269]: 2026-01-23 10:20:57.607 250273 DEBUG nova.compute.manager [req-b70368a2-93a6-490c-ab68-a736f3d3cfaf req-83d875e8-97e7-4877-8dcc-e76f022fb204 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Received event network-vif-plugged-d048f86b-08c3-4963-8743-6566b8a1a571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:20:57 np0005593232 nova_compute[250269]: 2026-01-23 10:20:57.607 250273 DEBUG oslo_concurrency.lockutils [req-b70368a2-93a6-490c-ab68-a736f3d3cfaf req-83d875e8-97e7-4877-8dcc-e76f022fb204 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "7599824a-e407-425c-9d2a-28db0744adb1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:20:57 np0005593232 nova_compute[250269]: 2026-01-23 10:20:57.608 250273 DEBUG oslo_concurrency.lockutils [req-b70368a2-93a6-490c-ab68-a736f3d3cfaf req-83d875e8-97e7-4877-8dcc-e76f022fb204 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7599824a-e407-425c-9d2a-28db0744adb1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:20:57 np0005593232 nova_compute[250269]: 2026-01-23 10:20:57.608 250273 DEBUG oslo_concurrency.lockutils [req-b70368a2-93a6-490c-ab68-a736f3d3cfaf req-83d875e8-97e7-4877-8dcc-e76f022fb204 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7599824a-e407-425c-9d2a-28db0744adb1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:20:57 np0005593232 nova_compute[250269]: 2026-01-23 10:20:57.609 250273 DEBUG nova.compute.manager [req-b70368a2-93a6-490c-ab68-a736f3d3cfaf req-83d875e8-97e7-4877-8dcc-e76f022fb204 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] No waiting events found dispatching network-vif-plugged-d048f86b-08c3-4963-8743-6566b8a1a571 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:20:57 np0005593232 nova_compute[250269]: 2026-01-23 10:20:57.609 250273 WARNING nova.compute.manager [req-b70368a2-93a6-490c-ab68-a736f3d3cfaf req-83d875e8-97e7-4877-8dcc-e76f022fb204 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Received unexpected event network-vif-plugged-d048f86b-08c3-4963-8743-6566b8a1a571 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:20:58 np0005593232 nova_compute[250269]: 2026-01-23 10:20:58.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:20:58 np0005593232 nova_compute[250269]: 2026-01-23 10:20:58.396 250273 DEBUG nova.network.neutron [-] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:20:58 np0005593232 nova_compute[250269]: 2026-01-23 10:20:58.464 250273 INFO nova.compute.manager [-] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Took 3.69 seconds to deallocate network for instance.#033[00m
Jan 23 05:20:58 np0005593232 nova_compute[250269]: 2026-01-23 10:20:58.541 250273 DEBUG oslo_concurrency.lockutils [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:20:58 np0005593232 nova_compute[250269]: 2026-01-23 10:20:58.542 250273 DEBUG oslo_concurrency.lockutils [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:20:58 np0005593232 nova_compute[250269]: 2026-01-23 10:20:58.608 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:58 np0005593232 nova_compute[250269]: 2026-01-23 10:20:58.637 250273 DEBUG nova.compute.manager [req-bbee8d08-32a8-4dfa-8a0a-bac2f1764281 req-e48cad2a-c53f-4b93-9097-ca30d87fd53e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Received event network-vif-deleted-d048f86b-08c3-4963-8743-6566b8a1a571 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:20:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:20:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:20:58.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:20:58 np0005593232 nova_compute[250269]: 2026-01-23 10:20:58.930 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:20:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:20:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:20:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:20:59.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:20:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2744: 321 pgs: 321 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Jan 23 05:21:00 np0005593232 nova_compute[250269]: 2026-01-23 10:21:00.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:21:00 np0005593232 nova_compute[250269]: 2026-01-23 10:21:00.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:21:00 np0005593232 nova_compute[250269]: 2026-01-23 10:21:00.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:21:00 np0005593232 nova_compute[250269]: 2026-01-23 10:21:00.863 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-7599824a-e407-425c-9d2a-28db0744adb1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:21:00 np0005593232 nova_compute[250269]: 2026-01-23 10:21:00.864 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-7599824a-e407-425c-9d2a-28db0744adb1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:21:00 np0005593232 nova_compute[250269]: 2026-01-23 10:21:00.864 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:21:00 np0005593232 nova_compute[250269]: 2026-01-23 10:21:00.864 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7599824a-e407-425c-9d2a-28db0744adb1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:21:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:21:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:00.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:21:00 np0005593232 nova_compute[250269]: 2026-01-23 10:21:00.927 250273 DEBUG oslo_concurrency.processutils [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:21:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:21:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1827785258' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:21:01 np0005593232 nova_compute[250269]: 2026-01-23 10:21:01.405 250273 DEBUG oslo_concurrency.processutils [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:21:01 np0005593232 nova_compute[250269]: 2026-01-23 10:21:01.416 250273 DEBUG nova.compute.provider_tree [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:21:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:01.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2745: 321 pgs: 321 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 580 KiB/s wr, 45 op/s
Jan 23 05:21:01 np0005593232 nova_compute[250269]: 2026-01-23 10:21:01.806 250273 DEBUG nova.scheduler.client.report [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:21:01 np0005593232 nova_compute[250269]: 2026-01-23 10:21:01.851 250273 DEBUG oslo_concurrency.lockutils [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 3.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:21:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:21:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:21:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:02.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:21:02 np0005593232 nova_compute[250269]: 2026-01-23 10:21:02.932 250273 INFO nova.scheduler.client.report [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Deleted allocations for instance 7599824a-e407-425c-9d2a-28db0744adb1#033[00m
Jan 23 05:21:02 np0005593232 nova_compute[250269]: 2026-01-23 10:21:02.952 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:21:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:03.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2746: 321 pgs: 321 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 802 KiB/s rd, 593 KiB/s wr, 72 op/s
Jan 23 05:21:03 np0005593232 nova_compute[250269]: 2026-01-23 10:21:03.631 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:21:03 np0005593232 nova_compute[250269]: 2026-01-23 10:21:03.933 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:21:04 np0005593232 nova_compute[250269]: 2026-01-23 10:21:04.178 250273 DEBUG oslo_concurrency.lockutils [None req-9d850f24-3134-47b9-b80b-f5d8ae6003ef 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "7599824a-e407-425c-9d2a-28db0744adb1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:21:04 np0005593232 nova_compute[250269]: 2026-01-23 10:21:04.324 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:21:04 np0005593232 nova_compute[250269]: 2026-01-23 10:21:04.773 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-7599824a-e407-425c-9d2a-28db0744adb1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:21:04 np0005593232 nova_compute[250269]: 2026-01-23 10:21:04.774 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:21:04 np0005593232 nova_compute[250269]: 2026-01-23 10:21:04.774 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:21:04 np0005593232 nova_compute[250269]: 2026-01-23 10:21:04.921 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:21:04 np0005593232 nova_compute[250269]: 2026-01-23 10:21:04.922 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:21:04 np0005593232 nova_compute[250269]: 2026-01-23 10:21:04.922 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:21:04 np0005593232 nova_compute[250269]: 2026-01-23 10:21:04.922 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:21:04 np0005593232 nova_compute[250269]: 2026-01-23 10:21:04.923 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:21:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:04.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:21:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3414816319' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:21:05 np0005593232 nova_compute[250269]: 2026-01-23 10:21:05.414 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:21:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:21:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:05.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:21:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2747: 321 pgs: 321 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Jan 23 05:21:05 np0005593232 nova_compute[250269]: 2026-01-23 10:21:05.601 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 05:21:05 np0005593232 nova_compute[250269]: 2026-01-23 10:21:05.602 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4266MB free_disk=20.855560302734375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 23 05:21:05 np0005593232 nova_compute[250269]: 2026-01-23 10:21:05.603 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:21:05 np0005593232 nova_compute[250269]: 2026-01-23 10:21:05.603 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:21:05 np0005593232 nova_compute[250269]: 2026-01-23 10:21:05.946 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 05:21:05 np0005593232 nova_compute[250269]: 2026-01-23 10:21:05.947 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 05:21:05 np0005593232 nova_compute[250269]: 2026-01-23 10:21:05.992 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:21:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:21:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3621266651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:21:06 np0005593232 nova_compute[250269]: 2026-01-23 10:21:06.476 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:21:06 np0005593232 nova_compute[250269]: 2026-01-23 10:21:06.483 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 05:21:06 np0005593232 nova_compute[250269]: 2026-01-23 10:21:06.525 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 05:21:06 np0005593232 nova_compute[250269]: 2026-01-23 10:21:06.700 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 05:21:06 np0005593232 nova_compute[250269]: 2026-01-23 10:21:06.701 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:21:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:21:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:06.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:21:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:21:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:07.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2748: 321 pgs: 321 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 13 KiB/s wr, 125 op/s
Jan 23 05:21:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:21:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:21:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:21:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:21:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:21:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:21:08 np0005593232 podman[353212]: 2026-01-23 10:21:08.437757454 +0000 UTC m=+0.092282214 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 23 05:21:08 np0005593232 nova_compute[250269]: 2026-01-23 10:21:08.543 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769163653.5415998, 7599824a-e407-425c-9d2a-28db0744adb1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 05:21:08 np0005593232 nova_compute[250269]: 2026-01-23 10:21:08.543 250273 INFO nova.compute.manager [-] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] VM Stopped (Lifecycle Event)
Jan 23 05:21:08 np0005593232 nova_compute[250269]: 2026-01-23 10:21:08.678 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:21:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:08 np0005593232 nova_compute[250269]: 2026-01-23 10:21:08.934 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:21:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:08.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:21:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:09.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:21:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2749: 321 pgs: 321 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 150 op/s
Jan 23 05:21:10 np0005593232 nova_compute[250269]: 2026-01-23 10:21:10.303 250273 DEBUG nova.compute.manager [None req-0cccd40c-9010-48fb-b923-c1d8f0f1b581 - - - - - -] [instance: 7599824a-e407-425c-9d2a-28db0744adb1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:21:10 np0005593232 nova_compute[250269]: 2026-01-23 10:21:10.857 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:21:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:21:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:10.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:21:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:21:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:11.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:21:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2750: 321 pgs: 321 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 144 op/s
Jan 23 05:21:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:21:12 np0005593232 podman[353243]: 2026-01-23 10:21:12.408170663 +0000 UTC m=+0.064830474 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:21:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:21:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:12.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:21:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:21:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:13.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:21:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2751: 321 pgs: 321 active+clean; 333 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 1.3 MiB/s wr, 169 op/s
Jan 23 05:21:13 np0005593232 nova_compute[250269]: 2026-01-23 10:21:13.680 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:21:13 np0005593232 nova_compute[250269]: 2026-01-23 10:21:13.936 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:21:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:14.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:15.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2752: 321 pgs: 321 active+clean; 344 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 164 op/s
Jan 23 05:21:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 05:21:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 44K writes, 166K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s#012Cumulative WAL: 44K writes, 16K syncs, 2.69 writes per sync, written: 0.15 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6893 writes, 24K keys, 6893 commit groups, 1.0 writes per commit group, ingest: 24.63 MB, 0.04 MB/s#012Interval WAL: 6892 writes, 2911 syncs, 2.37 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 05:21:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:21:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:16.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:21:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:21:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:17.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2753: 321 pgs: 321 active+clean; 316 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Jan 23 05:21:18 np0005593232 nova_compute[250269]: 2026-01-23 10:21:18.744 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:21:18 np0005593232 nova_compute[250269]: 2026-01-23 10:21:18.940 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:21:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:18.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:21:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:19.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:21:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2754: 321 pgs: 321 active+clean; 266 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Jan 23 05:21:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:20.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:21.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2755: 321 pgs: 321 active+clean; 266 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 94 op/s
Jan 23 05:21:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:21:22 np0005593232 nova_compute[250269]: 2026-01-23 10:21:22.474 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:21:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:22.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:23.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2756: 321 pgs: 321 active+clean; 220 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 364 KiB/s rd, 2.1 MiB/s wr, 120 op/s
Jan 23 05:21:23 np0005593232 nova_compute[250269]: 2026-01-23 10:21:23.746 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:21:23 np0005593232 nova_compute[250269]: 2026-01-23 10:21:23.940 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:21:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Jan 23 05:21:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:24.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Jan 23 05:21:25 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Jan 23 05:21:25 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #50. Immutable memtables: 0.
Jan 23 05:21:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:21:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:25.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:21:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2758: 321 pgs: 321 active+clean; 220 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 115 KiB/s rd, 70 KiB/s wr, 88 op/s
Jan 23 05:21:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:26.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:21:27 np0005593232 ceph-mgr[74726]: [devicehealth INFO root] Check health
Jan 23 05:21:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:27.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2759: 321 pgs: 321 active+clean; 220 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 18 KiB/s wr, 74 op/s
Jan 23 05:21:28 np0005593232 nova_compute[250269]: 2026-01-23 10:21:28.749 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:21:28 np0005593232 nova_compute[250269]: 2026-01-23 10:21:28.943 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:21:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:21:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:28.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:21:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:29.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2760: 321 pgs: 321 active+clean; 220 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 16 KiB/s wr, 39 op/s
Jan 23 05:21:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:30.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:31.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2761: 321 pgs: 321 active+clean; 220 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 16 KiB/s wr, 39 op/s
Jan 23 05:21:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:21:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:21:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:32.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:21:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:33.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2762: 321 pgs: 321 active+clean; 220 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.9 KiB/s rd, 14 KiB/s wr, 8 op/s
Jan 23 05:21:33 np0005593232 nova_compute[250269]: 2026-01-23 10:21:33.753 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:21:33 np0005593232 nova_compute[250269]: 2026-01-23 10:21:33.944 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:21:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:34.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:35.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2763: 321 pgs: 321 active+clean; 220 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 5.3 KiB/s rd, 1.5 KiB/s wr, 7 op/s
Jan 23 05:21:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:36.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:21:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:21:37
Jan 23 05:21:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:21:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:21:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['images', 'vms', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr']
Jan 23 05:21:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:21:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:37.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2764: 321 pgs: 321 active+clean; 220 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 KiB/s rd, 1.3 KiB/s wr, 6 op/s
Jan 23 05:21:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:21:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:21:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:21:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:21:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:21:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:21:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:21:38.177 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:21:38 np0005593232 nova_compute[250269]: 2026-01-23 10:21:38.178 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:21:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:21:38.180 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:21:38 np0005593232 nova_compute[250269]: 2026-01-23 10:21:38.199 250273 DEBUG oslo_concurrency.lockutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Acquiring lock "4b1c1eac-7834-4d9a-92c7-304ff03a42a0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:21:38 np0005593232 nova_compute[250269]: 2026-01-23 10:21:38.199 250273 DEBUG oslo_concurrency.lockutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Lock "4b1c1eac-7834-4d9a-92c7-304ff03a42a0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:21:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:21:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:21:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:21:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:21:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:21:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:21:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:21:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:21:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:21:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:21:38 np0005593232 nova_compute[250269]: 2026-01-23 10:21:38.384 250273 DEBUG oslo_concurrency.lockutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "6ddf1404-0e71-447c-ba86-6d730ff54120" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:21:38 np0005593232 nova_compute[250269]: 2026-01-23 10:21:38.385 250273 DEBUG oslo_concurrency.lockutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "6ddf1404-0e71-447c-ba86-6d730ff54120" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:21:38 np0005593232 nova_compute[250269]: 2026-01-23 10:21:38.655 250273 DEBUG nova.compute.manager [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:21:38 np0005593232 nova_compute[250269]: 2026-01-23 10:21:38.794 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:21:38 np0005593232 nova_compute[250269]: 2026-01-23 10:21:38.946 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:21:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:21:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:38.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:21:39 np0005593232 nova_compute[250269]: 2026-01-23 10:21:39.205 250273 DEBUG nova.compute.manager [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:21:39 np0005593232 podman[353376]: 2026-01-23 10:21:39.42482599 +0000 UTC m=+0.082118666 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible)
Jan 23 05:21:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:39.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2765: 321 pgs: 321 active+clean; 220 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 85 B/s rd, 1.1 KiB/s wr, 0 op/s
Jan 23 05:21:39 np0005593232 nova_compute[250269]: 2026-01-23 10:21:39.762 250273 DEBUG oslo_concurrency.lockutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:21:39 np0005593232 nova_compute[250269]: 2026-01-23 10:21:39.763 250273 DEBUG oslo_concurrency.lockutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:21:39 np0005593232 nova_compute[250269]: 2026-01-23 10:21:39.773 250273 DEBUG nova.virt.hardware [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:21:39 np0005593232 nova_compute[250269]: 2026-01-23 10:21:39.773 250273 INFO nova.compute.claims [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:21:40 np0005593232 nova_compute[250269]: 2026-01-23 10:21:40.163 250273 DEBUG oslo_concurrency.lockutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:21:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:40.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:21:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:41.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:21:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2766: 321 pgs: 321 active+clean; 220 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 23 05:21:42 np0005593232 nova_compute[250269]: 2026-01-23 10:21:42.027 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:21:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:21:42 np0005593232 nova_compute[250269]: 2026-01-23 10:21:42.330 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:21:42 np0005593232 nova_compute[250269]: 2026-01-23 10:21:42.533 250273 DEBUG oslo_concurrency.processutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:21:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:21:42.635 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:21:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:21:42.635 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:21:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:21:42.636 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:21:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:21:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:42.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:21:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:21:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1223326219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:21:43 np0005593232 nova_compute[250269]: 2026-01-23 10:21:43.016 250273 DEBUG oslo_concurrency.processutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:21:43 np0005593232 nova_compute[250269]: 2026-01-23 10:21:43.025 250273 DEBUG nova.compute.provider_tree [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:21:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:21:43.182 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:21:43 np0005593232 nova_compute[250269]: 2026-01-23 10:21:43.283 250273 DEBUG nova.scheduler.client.report [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:21:43 np0005593232 podman[353428]: 2026-01-23 10:21:43.385641813 +0000 UTC m=+0.048622673 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:21:43 np0005593232 nova_compute[250269]: 2026-01-23 10:21:43.481 250273 DEBUG oslo_concurrency.lockutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:21:43 np0005593232 nova_compute[250269]: 2026-01-23 10:21:43.482 250273 DEBUG nova.compute.manager [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:21:43 np0005593232 nova_compute[250269]: 2026-01-23 10:21:43.484 250273 DEBUG oslo_concurrency.lockutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 3.321s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:21:43 np0005593232 nova_compute[250269]: 2026-01-23 10:21:43.491 250273 DEBUG nova.virt.hardware [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:21:43 np0005593232 nova_compute[250269]: 2026-01-23 10:21:43.492 250273 INFO nova.compute.claims [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:21:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:43.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2767: 321 pgs: 321 active+clean; 220 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 23 05:21:43 np0005593232 nova_compute[250269]: 2026-01-23 10:21:43.701 250273 DEBUG nova.compute.manager [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:21:43 np0005593232 nova_compute[250269]: 2026-01-23 10:21:43.702 250273 DEBUG nova.network.neutron [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:21:43 np0005593232 nova_compute[250269]: 2026-01-23 10:21:43.797 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:21:43 np0005593232 nova_compute[250269]: 2026-01-23 10:21:43.947 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:21:43 np0005593232 nova_compute[250269]: 2026-01-23 10:21:43.976 250273 INFO nova.virt.libvirt.driver [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:21:44 np0005593232 nova_compute[250269]: 2026-01-23 10:21:44.129 250273 DEBUG nova.compute.manager [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:21:44 np0005593232 nova_compute[250269]: 2026-01-23 10:21:44.185 250273 DEBUG oslo_concurrency.processutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:21:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:21:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3388830088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:21:44 np0005593232 nova_compute[250269]: 2026-01-23 10:21:44.644 250273 DEBUG oslo_concurrency.processutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:21:44 np0005593232 nova_compute[250269]: 2026-01-23 10:21:44.650 250273 DEBUG nova.compute.provider_tree [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:21:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:21:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/344471546' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:21:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:21:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/344471546' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:21:44 np0005593232 nova_compute[250269]: 2026-01-23 10:21:44.782 250273 DEBUG nova.compute.manager [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 05:21:44 np0005593232 nova_compute[250269]: 2026-01-23 10:21:44.783 250273 DEBUG nova.virt.libvirt.driver [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 05:21:44 np0005593232 nova_compute[250269]: 2026-01-23 10:21:44.784 250273 INFO nova.virt.libvirt.driver [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Creating image(s)
Jan 23 05:21:44 np0005593232 nova_compute[250269]: 2026-01-23 10:21:44.820 250273 DEBUG nova.storage.rbd_utils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] rbd image 4b1c1eac-7834-4d9a-92c7-304ff03a42a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:21:44 np0005593232 nova_compute[250269]: 2026-01-23 10:21:44.863 250273 DEBUG nova.storage.rbd_utils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] rbd image 4b1c1eac-7834-4d9a-92c7-304ff03a42a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:21:44 np0005593232 nova_compute[250269]: 2026-01-23 10:21:44.894 250273 DEBUG nova.storage.rbd_utils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] rbd image 4b1c1eac-7834-4d9a-92c7-304ff03a42a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:21:44 np0005593232 nova_compute[250269]: 2026-01-23 10:21:44.899 250273 DEBUG oslo_concurrency.processutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:21:44 np0005593232 nova_compute[250269]: 2026-01-23 10:21:44.946 250273 DEBUG nova.policy [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '558050347cf24d20a73a9a6d08d4c242', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a6757124292b484abb7a27e68cab3408', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 23 05:21:44 np0005593232 nova_compute[250269]: 2026-01-23 10:21:44.954 250273 DEBUG nova.scheduler.client.report [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 05:21:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:21:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:44.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:21:45 np0005593232 nova_compute[250269]: 2026-01-23 10:21:45.000 250273 DEBUG oslo_concurrency.processutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:21:45 np0005593232 nova_compute[250269]: 2026-01-23 10:21:45.001 250273 DEBUG oslo_concurrency.lockutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:21:45 np0005593232 nova_compute[250269]: 2026-01-23 10:21:45.002 250273 DEBUG oslo_concurrency.lockutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:21:45 np0005593232 nova_compute[250269]: 2026-01-23 10:21:45.003 250273 DEBUG oslo_concurrency.lockutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:21:45 np0005593232 nova_compute[250269]: 2026-01-23 10:21:45.034 250273 DEBUG nova.storage.rbd_utils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] rbd image 4b1c1eac-7834-4d9a-92c7-304ff03a42a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:21:45 np0005593232 nova_compute[250269]: 2026-01-23 10:21:45.037 250273 DEBUG oslo_concurrency.processutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 4b1c1eac-7834-4d9a-92c7-304ff03a42a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:21:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:21:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:45.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:21:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2768: 321 pgs: 321 active+clean; 220 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 3.1 KiB/s wr, 0 op/s
Jan 23 05:21:45 np0005593232 nova_compute[250269]: 2026-01-23 10:21:45.911 250273 DEBUG oslo_concurrency.processutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 4b1c1eac-7834-4d9a-92c7-304ff03a42a0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.874s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:21:46 np0005593232 nova_compute[250269]: 2026-01-23 10:21:46.009 250273 DEBUG nova.storage.rbd_utils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] resizing rbd image 4b1c1eac-7834-4d9a-92c7-304ff03a42a0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 23 05:21:46 np0005593232 nova_compute[250269]: 2026-01-23 10:21:46.194 250273 DEBUG nova.objects.instance [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Lazy-loading 'migration_context' on Instance uuid 4b1c1eac-7834-4d9a-92c7-304ff03a42a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 05:21:46 np0005593232 nova_compute[250269]: 2026-01-23 10:21:46.219 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:21:46 np0005593232 nova_compute[250269]: 2026-01-23 10:21:46.220 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:21:46 np0005593232 nova_compute[250269]: 2026-01-23 10:21:46.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:21:46 np0005593232 nova_compute[250269]: 2026-01-23 10:21:46.423 250273 DEBUG oslo_concurrency.lockutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:21:46 np0005593232 nova_compute[250269]: 2026-01-23 10:21:46.424 250273 DEBUG nova.compute.manager [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 05:21:46 np0005593232 nova_compute[250269]: 2026-01-23 10:21:46.430 250273 DEBUG nova.virt.libvirt.driver [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 05:21:46 np0005593232 nova_compute[250269]: 2026-01-23 10:21:46.431 250273 DEBUG nova.virt.libvirt.driver [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Ensure instance console log exists: /var/lib/nova/instances/4b1c1eac-7834-4d9a-92c7-304ff03a42a0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 05:21:46 np0005593232 nova_compute[250269]: 2026-01-23 10:21:46.431 250273 DEBUG oslo_concurrency.lockutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:21:46 np0005593232 nova_compute[250269]: 2026-01-23 10:21:46.432 250273 DEBUG oslo_concurrency.lockutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:21:46 np0005593232 nova_compute[250269]: 2026-01-23 10:21:46.432 250273 DEBUG oslo_concurrency.lockutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:21:46 np0005593232 nova_compute[250269]: 2026-01-23 10:21:46.776 250273 DEBUG nova.compute.manager [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 05:21:46 np0005593232 nova_compute[250269]: 2026-01-23 10:21:46.776 250273 DEBUG nova.network.neutron [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 05:21:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:46.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:47 np0005593232 nova_compute[250269]: 2026-01-23 10:21:47.074 250273 INFO nova.virt.libvirt.driver [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 05:21:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:21:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:21:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:21:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:21:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:21:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004337733226316769 of space, bias 1.0, pg target 1.3013199678950307 quantized to 32 (current 32)
Jan 23 05:21:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:21:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 05:21:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:21:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:21:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:21:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8535323463381723 quantized to 32 (current 32)
Jan 23 05:21:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:21:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 05:21:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:21:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:21:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:21:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 05:21:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:21:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 05:21:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:21:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:21:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:21:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 05:21:47 np0005593232 nova_compute[250269]: 2026-01-23 10:21:47.285 250273 DEBUG nova.compute.manager [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 05:21:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:47.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2769: 321 pgs: 321 active+clean; 240 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1023 B/s rd, 841 KiB/s wr, 4 op/s
Jan 23 05:21:48 np0005593232 nova_compute[250269]: 2026-01-23 10:21:48.530 250273 DEBUG nova.policy [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ec99ae7c69d0438280441e0434374cbf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c59351a1b59c4cc9ad389dff900935f2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 23 05:21:48 np0005593232 nova_compute[250269]: 2026-01-23 10:21:48.801 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:21:48 np0005593232 nova_compute[250269]: 2026-01-23 10:21:48.949 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:21:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:21:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:48.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:21:49 np0005593232 nova_compute[250269]: 2026-01-23 10:21:49.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:21:49 np0005593232 nova_compute[250269]: 2026-01-23 10:21:49.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 05:21:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:49.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2770: 321 pgs: 321 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:21:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:51.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:21:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:51.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:21:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 05:21:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:21:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 05:21:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2771: 321 pgs: 321 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:21:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:21:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:21:52 np0005593232 nova_compute[250269]: 2026-01-23 10:21:52.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:21:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:21:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:21:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:21:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:21:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:21:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:21:52 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 81cf4956-8a59-44d7-ad4f-a5d3d1181c23 does not exist
Jan 23 05:21:52 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 667e5b3a-6d80-4b0f-b8d1-565465242dec does not exist
Jan 23 05:21:52 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 0ff4cc1a-8306-44f0-adbf-9127fd1d7483 does not exist
Jan 23 05:21:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:21:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:21:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:21:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:21:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:21:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:21:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:21:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:21:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:21:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:21:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:53.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:53 np0005593232 nova_compute[250269]: 2026-01-23 10:21:53.173 250273 DEBUG nova.compute.manager [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 05:21:53 np0005593232 nova_compute[250269]: 2026-01-23 10:21:53.175 250273 DEBUG nova.virt.libvirt.driver [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 05:21:53 np0005593232 nova_compute[250269]: 2026-01-23 10:21:53.176 250273 INFO nova.virt.libvirt.driver [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Creating image(s)
Jan 23 05:21:53 np0005593232 nova_compute[250269]: 2026-01-23 10:21:53.213 250273 DEBUG nova.storage.rbd_utils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image 6ddf1404-0e71-447c-ba86-6d730ff54120_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:21:53 np0005593232 nova_compute[250269]: 2026-01-23 10:21:53.241 250273 DEBUG nova.storage.rbd_utils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image 6ddf1404-0e71-447c-ba86-6d730ff54120_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:21:53 np0005593232 nova_compute[250269]: 2026-01-23 10:21:53.271 250273 DEBUG nova.storage.rbd_utils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image 6ddf1404-0e71-447c-ba86-6d730ff54120_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:21:53 np0005593232 nova_compute[250269]: 2026-01-23 10:21:53.274 250273 DEBUG oslo_concurrency.processutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:21:53 np0005593232 podman[353976]: 2026-01-23 10:21:53.295306969 +0000 UTC m=+0.078052660 container create cc0a12a3e20025804daa7731ed51a90c6393faf14c5b0f471011238bc392cb43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:21:53 np0005593232 systemd[1]: Started libpod-conmon-cc0a12a3e20025804daa7731ed51a90c6393faf14c5b0f471011238bc392cb43.scope.
Jan 23 05:21:53 np0005593232 podman[353976]: 2026-01-23 10:21:53.244992609 +0000 UTC m=+0.027738310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:21:53 np0005593232 nova_compute[250269]: 2026-01-23 10:21:53.368 250273 DEBUG oslo_concurrency.processutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:21:53 np0005593232 nova_compute[250269]: 2026-01-23 10:21:53.369 250273 DEBUG oslo_concurrency.lockutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:21:53 np0005593232 nova_compute[250269]: 2026-01-23 10:21:53.370 250273 DEBUG oslo_concurrency.lockutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:21:53 np0005593232 nova_compute[250269]: 2026-01-23 10:21:53.371 250273 DEBUG oslo_concurrency.lockutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:21:53 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:21:53 np0005593232 podman[353976]: 2026-01-23 10:21:53.399378397 +0000 UTC m=+0.182124098 container init cc0a12a3e20025804daa7731ed51a90c6393faf14c5b0f471011238bc392cb43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:21:53 np0005593232 nova_compute[250269]: 2026-01-23 10:21:53.404 250273 DEBUG nova.storage.rbd_utils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image 6ddf1404-0e71-447c-ba86-6d730ff54120_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:21:53 np0005593232 podman[353976]: 2026-01-23 10:21:53.407782376 +0000 UTC m=+0.190528067 container start cc0a12a3e20025804daa7731ed51a90c6393faf14c5b0f471011238bc392cb43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:21:53 np0005593232 nova_compute[250269]: 2026-01-23 10:21:53.409 250273 DEBUG oslo_concurrency.processutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 6ddf1404-0e71-447c-ba86-6d730ff54120_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:21:53 np0005593232 podman[353976]: 2026-01-23 10:21:53.412270224 +0000 UTC m=+0.195015925 container attach cc0a12a3e20025804daa7731ed51a90c6393faf14c5b0f471011238bc392cb43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 05:21:53 np0005593232 hardcore_feistel[354032]: 167 167
Jan 23 05:21:53 np0005593232 systemd[1]: libpod-cc0a12a3e20025804daa7731ed51a90c6393faf14c5b0f471011238bc392cb43.scope: Deactivated successfully.
Jan 23 05:21:53 np0005593232 podman[353976]: 2026-01-23 10:21:53.419493409 +0000 UTC m=+0.202239090 container died cc0a12a3e20025804daa7731ed51a90c6393faf14c5b0f471011238bc392cb43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_feistel, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:21:53 np0005593232 systemd[1]: var-lib-containers-storage-overlay-598229830ded526ca23e1f3d7fd196b31d20b248d461405dc56c2f06769f6213-merged.mount: Deactivated successfully.
Jan 23 05:21:53 np0005593232 podman[353976]: 2026-01-23 10:21:53.463065148 +0000 UTC m=+0.245810829 container remove cc0a12a3e20025804daa7731ed51a90c6393faf14c5b0f471011238bc392cb43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 05:21:53 np0005593232 systemd[1]: libpod-conmon-cc0a12a3e20025804daa7731ed51a90c6393faf14c5b0f471011238bc392cb43.scope: Deactivated successfully.
Jan 23 05:21:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:53.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2772: 321 pgs: 321 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:21:53 np0005593232 podman[354091]: 2026-01-23 10:21:53.633290757 +0000 UTC m=+0.025198207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:21:53 np0005593232 nova_compute[250269]: 2026-01-23 10:21:53.835 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:21:53 np0005593232 nova_compute[250269]: 2026-01-23 10:21:53.953 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:21:54 np0005593232 podman[354091]: 2026-01-23 10:21:54.530720807 +0000 UTC m=+0.922628237 container create a02d1ba5809b112be7710fa1ad9b828e57d6e1a19622fe0e6440904fab7e39e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shtern, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 05:21:54 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:21:54 np0005593232 systemd[1]: Started libpod-conmon-a02d1ba5809b112be7710fa1ad9b828e57d6e1a19622fe0e6440904fab7e39e0.scope.
Jan 23 05:21:54 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:21:54 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80bd817932c85b3289166b27b74c82ef9549114ca1addc5af231e3e11db2c868/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:21:54 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80bd817932c85b3289166b27b74c82ef9549114ca1addc5af231e3e11db2c868/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:21:54 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80bd817932c85b3289166b27b74c82ef9549114ca1addc5af231e3e11db2c868/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:21:54 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80bd817932c85b3289166b27b74c82ef9549114ca1addc5af231e3e11db2c868/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:21:54 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80bd817932c85b3289166b27b74c82ef9549114ca1addc5af231e3e11db2c868/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:21:54 np0005593232 podman[354091]: 2026-01-23 10:21:54.663749479 +0000 UTC m=+1.055656949 container init a02d1ba5809b112be7710fa1ad9b828e57d6e1a19622fe0e6440904fab7e39e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 05:21:54 np0005593232 podman[354091]: 2026-01-23 10:21:54.671667414 +0000 UTC m=+1.063574864 container start a02d1ba5809b112be7710fa1ad9b828e57d6e1a19622fe0e6440904fab7e39e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shtern, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 05:21:54 np0005593232 podman[354091]: 2026-01-23 10:21:54.70248601 +0000 UTC m=+1.094393450 container attach a02d1ba5809b112be7710fa1ad9b828e57d6e1a19622fe0e6440904fab7e39e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 23 05:21:54 np0005593232 nova_compute[250269]: 2026-01-23 10:21:54.765 250273 DEBUG oslo_concurrency.processutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 6ddf1404-0e71-447c-ba86-6d730ff54120_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:21:54 np0005593232 nova_compute[250269]: 2026-01-23 10:21:54.850 250273 DEBUG nova.storage.rbd_utils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] resizing rbd image 6ddf1404-0e71-447c-ba86-6d730ff54120_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 23 05:21:54 np0005593232 nova_compute[250269]: 2026-01-23 10:21:54.983 250273 DEBUG nova.objects.instance [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lazy-loading 'migration_context' on Instance uuid 6ddf1404-0e71-447c-ba86-6d730ff54120 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 05:21:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:55.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:55 np0005593232 nova_compute[250269]: 2026-01-23 10:21:55.200 250273 DEBUG nova.virt.libvirt.driver [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 05:21:55 np0005593232 nova_compute[250269]: 2026-01-23 10:21:55.201 250273 DEBUG nova.virt.libvirt.driver [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Ensure instance console log exists: /var/lib/nova/instances/6ddf1404-0e71-447c-ba86-6d730ff54120/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 05:21:55 np0005593232 nova_compute[250269]: 2026-01-23 10:21:55.201 250273 DEBUG oslo_concurrency.lockutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:21:55 np0005593232 nova_compute[250269]: 2026-01-23 10:21:55.201 250273 DEBUG oslo_concurrency.lockutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:21:55 np0005593232 nova_compute[250269]: 2026-01-23 10:21:55.202 250273 DEBUG oslo_concurrency.lockutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:21:55 np0005593232 determined_shtern[354115]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:21:55 np0005593232 determined_shtern[354115]: --> relative data size: 1.0
Jan 23 05:21:55 np0005593232 determined_shtern[354115]: --> All data devices are unavailable
Jan 23 05:21:55 np0005593232 systemd[1]: libpod-a02d1ba5809b112be7710fa1ad9b828e57d6e1a19622fe0e6440904fab7e39e0.scope: Deactivated successfully.
Jan 23 05:21:55 np0005593232 podman[354091]: 2026-01-23 10:21:55.471860262 +0000 UTC m=+1.863767732 container died a02d1ba5809b112be7710fa1ad9b828e57d6e1a19622fe0e6440904fab7e39e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 23 05:21:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:55.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2773: 321 pgs: 321 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 23 05:21:55 np0005593232 nova_compute[250269]: 2026-01-23 10:21:55.806 250273 DEBUG nova.network.neutron [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Successfully created port: 138b230d-d47c-4ec5-9572-c92a07aacb0b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 23 05:21:55 np0005593232 systemd[1]: var-lib-containers-storage-overlay-80bd817932c85b3289166b27b74c82ef9549114ca1addc5af231e3e11db2c868-merged.mount: Deactivated successfully.
Jan 23 05:21:55 np0005593232 nova_compute[250269]: 2026-01-23 10:21:55.968 250273 DEBUG nova.network.neutron [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Successfully created port: eeb6e4f9-f657-4553-8f43-ca56cd0a1034 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 23 05:21:56 np0005593232 podman[354091]: 2026-01-23 10:21:56.306964181 +0000 UTC m=+2.698871611 container remove a02d1ba5809b112be7710fa1ad9b828e57d6e1a19622fe0e6440904fab7e39e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 05:21:56 np0005593232 systemd[1]: libpod-conmon-a02d1ba5809b112be7710fa1ad9b828e57d6e1a19622fe0e6440904fab7e39e0.scope: Deactivated successfully.
Jan 23 05:21:56 np0005593232 podman[354359]: 2026-01-23 10:21:56.987474817 +0000 UTC m=+0.035017817 container create ca4a1ca1738869c9df3f1ebc75af7104f1f92a26fc32c21761e78be8998898b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 05:21:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:21:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:57.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:21:57 np0005593232 systemd[1]: Started libpod-conmon-ca4a1ca1738869c9df3f1ebc75af7104f1f92a26fc32c21761e78be8998898b2.scope.
Jan 23 05:21:57 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:21:57 np0005593232 podman[354359]: 2026-01-23 10:21:57.068317195 +0000 UTC m=+0.115860195 container init ca4a1ca1738869c9df3f1ebc75af7104f1f92a26fc32c21761e78be8998898b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 23 05:21:57 np0005593232 podman[354359]: 2026-01-23 10:21:56.97212265 +0000 UTC m=+0.019665670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:21:57 np0005593232 podman[354359]: 2026-01-23 10:21:57.077372802 +0000 UTC m=+0.124915802 container start ca4a1ca1738869c9df3f1ebc75af7104f1f92a26fc32c21761e78be8998898b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 05:21:57 np0005593232 podman[354359]: 2026-01-23 10:21:57.080724087 +0000 UTC m=+0.128267087 container attach ca4a1ca1738869c9df3f1ebc75af7104f1f92a26fc32c21761e78be8998898b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_brown, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:21:57 np0005593232 tender_brown[354375]: 167 167
Jan 23 05:21:57 np0005593232 systemd[1]: libpod-ca4a1ca1738869c9df3f1ebc75af7104f1f92a26fc32c21761e78be8998898b2.scope: Deactivated successfully.
Jan 23 05:21:57 np0005593232 conmon[354375]: conmon ca4a1ca1738869c9df3f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ca4a1ca1738869c9df3f1ebc75af7104f1f92a26fc32c21761e78be8998898b2.scope/container/memory.events
Jan 23 05:21:57 np0005593232 podman[354359]: 2026-01-23 10:21:57.085297097 +0000 UTC m=+0.132840107 container died ca4a1ca1738869c9df3f1ebc75af7104f1f92a26fc32c21761e78be8998898b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:21:57 np0005593232 systemd[1]: var-lib-containers-storage-overlay-192977ee36c3de1f14a52b2a8a1adf74cb26fddbd44fde0ff010ae465985c0c9-merged.mount: Deactivated successfully.
Jan 23 05:21:57 np0005593232 podman[354359]: 2026-01-23 10:21:57.121223769 +0000 UTC m=+0.168766769 container remove ca4a1ca1738869c9df3f1ebc75af7104f1f92a26fc32c21761e78be8998898b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 23 05:21:57 np0005593232 systemd[1]: libpod-conmon-ca4a1ca1738869c9df3f1ebc75af7104f1f92a26fc32c21761e78be8998898b2.scope: Deactivated successfully.
Jan 23 05:21:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:21:57 np0005593232 podman[354399]: 2026-01-23 10:21:57.281699041 +0000 UTC m=+0.039004740 container create 2bcba4f315bcf386d4b08746729ae29e221a77243c38b1c336e3820905015996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mcnulty, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:21:57 np0005593232 systemd[1]: Started libpod-conmon-2bcba4f315bcf386d4b08746729ae29e221a77243c38b1c336e3820905015996.scope.
Jan 23 05:21:57 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:21:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e14d228be3efc9c76421407234771f7d173752214a940685d8029a282bf91d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:21:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e14d228be3efc9c76421407234771f7d173752214a940685d8029a282bf91d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:21:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e14d228be3efc9c76421407234771f7d173752214a940685d8029a282bf91d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:21:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e14d228be3efc9c76421407234771f7d173752214a940685d8029a282bf91d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:21:57 np0005593232 podman[354399]: 2026-01-23 10:21:57.262827744 +0000 UTC m=+0.020133463 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:21:57 np0005593232 podman[354399]: 2026-01-23 10:21:57.380804108 +0000 UTC m=+0.138109817 container init 2bcba4f315bcf386d4b08746729ae29e221a77243c38b1c336e3820905015996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mcnulty, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:21:57 np0005593232 podman[354399]: 2026-01-23 10:21:57.387480568 +0000 UTC m=+0.144786267 container start 2bcba4f315bcf386d4b08746729ae29e221a77243c38b1c336e3820905015996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:21:57 np0005593232 podman[354399]: 2026-01-23 10:21:57.390801742 +0000 UTC m=+0.148107441 container attach 2bcba4f315bcf386d4b08746729ae29e221a77243c38b1c336e3820905015996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mcnulty, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:21:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:57.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2774: 321 pgs: 321 active+clean; 306 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 25 KiB/s rd, 3.3 MiB/s wr, 42 op/s
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]: {
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:    "0": [
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:        {
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:            "devices": [
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:                "/dev/loop3"
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:            ],
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:            "lv_name": "ceph_lv0",
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:            "lv_size": "7511998464",
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:            "name": "ceph_lv0",
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:            "tags": {
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:                "ceph.cluster_name": "ceph",
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:                "ceph.crush_device_class": "",
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:                "ceph.encrypted": "0",
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:                "ceph.osd_id": "0",
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:                "ceph.type": "block",
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:                "ceph.vdo": "0"
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:            },
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:            "type": "block",
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:            "vg_name": "ceph_vg0"
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:        }
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]:    ]
Jan 23 05:21:58 np0005593232 intelligent_mcnulty[354415]: }
Jan 23 05:21:58 np0005593232 systemd[1]: libpod-2bcba4f315bcf386d4b08746729ae29e221a77243c38b1c336e3820905015996.scope: Deactivated successfully.
Jan 23 05:21:58 np0005593232 podman[354399]: 2026-01-23 10:21:58.126443954 +0000 UTC m=+0.883749673 container died 2bcba4f315bcf386d4b08746729ae29e221a77243c38b1c336e3820905015996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:21:58 np0005593232 systemd[1]: var-lib-containers-storage-overlay-0e14d228be3efc9c76421407234771f7d173752214a940685d8029a282bf91d5-merged.mount: Deactivated successfully.
Jan 23 05:21:58 np0005593232 podman[354399]: 2026-01-23 10:21:58.188537819 +0000 UTC m=+0.945843528 container remove 2bcba4f315bcf386d4b08746729ae29e221a77243c38b1c336e3820905015996 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mcnulty, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 23 05:21:58 np0005593232 systemd[1]: libpod-conmon-2bcba4f315bcf386d4b08746729ae29e221a77243c38b1c336e3820905015996.scope: Deactivated successfully.
Jan 23 05:21:58 np0005593232 nova_compute[250269]: 2026-01-23 10:21:58.211 250273 DEBUG nova.network.neutron [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Successfully updated port: 138b230d-d47c-4ec5-9572-c92a07aacb0b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 23 05:21:58 np0005593232 nova_compute[250269]: 2026-01-23 10:21:58.243 250273 DEBUG oslo_concurrency.lockutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Acquiring lock "refresh_cache-4b1c1eac-7834-4d9a-92c7-304ff03a42a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 05:21:58 np0005593232 nova_compute[250269]: 2026-01-23 10:21:58.244 250273 DEBUG oslo_concurrency.lockutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Acquired lock "refresh_cache-4b1c1eac-7834-4d9a-92c7-304ff03a42a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 05:21:58 np0005593232 nova_compute[250269]: 2026-01-23 10:21:58.244 250273 DEBUG nova.network.neutron [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 05:21:58 np0005593232 nova_compute[250269]: 2026-01-23 10:21:58.263 250273 DEBUG nova.network.neutron [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Successfully updated port: eeb6e4f9-f657-4553-8f43-ca56cd0a1034 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 23 05:21:58 np0005593232 nova_compute[250269]: 2026-01-23 10:21:58.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:21:58 np0005593232 nova_compute[250269]: 2026-01-23 10:21:58.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:21:58 np0005593232 nova_compute[250269]: 2026-01-23 10:21:58.324 250273 DEBUG oslo_concurrency.lockutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "refresh_cache-6ddf1404-0e71-447c-ba86-6d730ff54120" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 05:21:58 np0005593232 nova_compute[250269]: 2026-01-23 10:21:58.325 250273 DEBUG oslo_concurrency.lockutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquired lock "refresh_cache-6ddf1404-0e71-447c-ba86-6d730ff54120" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 05:21:58 np0005593232 nova_compute[250269]: 2026-01-23 10:21:58.325 250273 DEBUG nova.network.neutron [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 05:21:58 np0005593232 nova_compute[250269]: 2026-01-23 10:21:58.440 250273 DEBUG nova.compute.manager [req-3ccfc559-0368-4a9c-b64e-bfcaf29ae33c req-d002b7c2-b456-479d-8442-e3d7e6dc0e62 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Received event network-changed-eeb6e4f9-f657-4553-8f43-ca56cd0a1034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:21:58 np0005593232 nova_compute[250269]: 2026-01-23 10:21:58.440 250273 DEBUG nova.compute.manager [req-3ccfc559-0368-4a9c-b64e-bfcaf29ae33c req-d002b7c2-b456-479d-8442-e3d7e6dc0e62 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Refreshing instance network info cache due to event network-changed-eeb6e4f9-f657-4553-8f43-ca56cd0a1034. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 05:21:58 np0005593232 nova_compute[250269]: 2026-01-23 10:21:58.440 250273 DEBUG oslo_concurrency.lockutils [req-3ccfc559-0368-4a9c-b64e-bfcaf29ae33c req-d002b7c2-b456-479d-8442-e3d7e6dc0e62 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-6ddf1404-0e71-447c-ba86-6d730ff54120" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 05:21:58 np0005593232 nova_compute[250269]: 2026-01-23 10:21:58.567 250273 DEBUG nova.compute.manager [req-df14040a-b166-4d79-b6e6-16357c27ba1e req-4f187904-13fd-4090-9a92-14e054370df9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Received event network-changed-138b230d-d47c-4ec5-9572-c92a07aacb0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:21:58 np0005593232 nova_compute[250269]: 2026-01-23 10:21:58.567 250273 DEBUG nova.compute.manager [req-df14040a-b166-4d79-b6e6-16357c27ba1e req-4f187904-13fd-4090-9a92-14e054370df9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Refreshing instance network info cache due to event network-changed-138b230d-d47c-4ec5-9572-c92a07aacb0b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 05:21:58 np0005593232 nova_compute[250269]: 2026-01-23 10:21:58.567 250273 DEBUG oslo_concurrency.lockutils [req-df14040a-b166-4d79-b6e6-16357c27ba1e req-4f187904-13fd-4090-9a92-14e054370df9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-4b1c1eac-7834-4d9a-92c7-304ff03a42a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 05:21:58 np0005593232 nova_compute[250269]: 2026-01-23 10:21:58.839 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:21:58 np0005593232 podman[354579]: 2026-01-23 10:21:58.945697033 +0000 UTC m=+0.052691369 container create def0181b83f0462a712dcb4f30b3a55196695380495b99841811f81da1662333 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_murdock, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 23 05:21:58 np0005593232 nova_compute[250269]: 2026-01-23 10:21:58.953 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:21:58 np0005593232 systemd[1]: Started libpod-conmon-def0181b83f0462a712dcb4f30b3a55196695380495b99841811f81da1662333.scope.
Jan 23 05:21:59 np0005593232 podman[354579]: 2026-01-23 10:21:58.917782539 +0000 UTC m=+0.024776965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:21:59 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:21:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:21:59.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:59 np0005593232 podman[354579]: 2026-01-23 10:21:59.028492517 +0000 UTC m=+0.135486863 container init def0181b83f0462a712dcb4f30b3a55196695380495b99841811f81da1662333 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:21:59 np0005593232 podman[354579]: 2026-01-23 10:21:59.04021665 +0000 UTC m=+0.147210986 container start def0181b83f0462a712dcb4f30b3a55196695380495b99841811f81da1662333 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_murdock, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 05:21:59 np0005593232 podman[354579]: 2026-01-23 10:21:59.043367349 +0000 UTC m=+0.150361685 container attach def0181b83f0462a712dcb4f30b3a55196695380495b99841811f81da1662333 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 05:21:59 np0005593232 stupefied_murdock[354595]: 167 167
Jan 23 05:21:59 np0005593232 systemd[1]: libpod-def0181b83f0462a712dcb4f30b3a55196695380495b99841811f81da1662333.scope: Deactivated successfully.
Jan 23 05:21:59 np0005593232 podman[354579]: 2026-01-23 10:21:59.048006411 +0000 UTC m=+0.155000837 container died def0181b83f0462a712dcb4f30b3a55196695380495b99841811f81da1662333 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_murdock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 05:21:59 np0005593232 systemd[1]: var-lib-containers-storage-overlay-49ed69f4e6e046cfba2b120446abec182c204b042ae0fb2c250c7c83e618ad02-merged.mount: Deactivated successfully.
Jan 23 05:21:59 np0005593232 podman[354579]: 2026-01-23 10:21:59.088099031 +0000 UTC m=+0.195093357 container remove def0181b83f0462a712dcb4f30b3a55196695380495b99841811f81da1662333 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_murdock, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 05:21:59 np0005593232 systemd[1]: libpod-conmon-def0181b83f0462a712dcb4f30b3a55196695380495b99841811f81da1662333.scope: Deactivated successfully.
Jan 23 05:21:59 np0005593232 podman[354618]: 2026-01-23 10:21:59.312278014 +0000 UTC m=+0.077477054 container create 2d622a857b9c5a92e1c682e52e4ddd14307482688155d273f4b106d8ee49e761 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:21:59 np0005593232 systemd[1]: Started libpod-conmon-2d622a857b9c5a92e1c682e52e4ddd14307482688155d273f4b106d8ee49e761.scope.
Jan 23 05:21:59 np0005593232 podman[354618]: 2026-01-23 10:21:59.282052515 +0000 UTC m=+0.047251615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:21:59 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:21:59 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ef76e9ce95574a1d6591901dab925c09fd3fc7202f7e047350f73c47146377/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:21:59 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ef76e9ce95574a1d6591901dab925c09fd3fc7202f7e047350f73c47146377/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:21:59 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ef76e9ce95574a1d6591901dab925c09fd3fc7202f7e047350f73c47146377/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:21:59 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95ef76e9ce95574a1d6591901dab925c09fd3fc7202f7e047350f73c47146377/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:21:59 np0005593232 nova_compute[250269]: 2026-01-23 10:21:59.434 250273 DEBUG nova.network.neutron [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 05:21:59 np0005593232 nova_compute[250269]: 2026-01-23 10:21:59.443 250273 DEBUG nova.network.neutron [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 05:21:59 np0005593232 podman[354618]: 2026-01-23 10:21:59.448857236 +0000 UTC m=+0.214056336 container init 2d622a857b9c5a92e1c682e52e4ddd14307482688155d273f4b106d8ee49e761 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_germain, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:21:59 np0005593232 podman[354618]: 2026-01-23 10:21:59.463180544 +0000 UTC m=+0.228379594 container start 2d622a857b9c5a92e1c682e52e4ddd14307482688155d273f4b106d8ee49e761 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:21:59 np0005593232 podman[354618]: 2026-01-23 10:21:59.468221407 +0000 UTC m=+0.233420437 container attach 2d622a857b9c5a92e1c682e52e4ddd14307482688155d273f4b106d8ee49e761 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_germain, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 23 05:21:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:21:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:21:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:21:59.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:21:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2775: 321 pgs: 321 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 2.7 MiB/s wr, 50 op/s
Jan 23 05:22:00 np0005593232 nova_compute[250269]: 2026-01-23 10:22:00.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:22:00 np0005593232 nova_compute[250269]: 2026-01-23 10:22:00.378 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:22:00 np0005593232 nova_compute[250269]: 2026-01-23 10:22:00.379 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:22:00 np0005593232 nova_compute[250269]: 2026-01-23 10:22:00.380 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:22:00 np0005593232 frosty_germain[354635]: {
Jan 23 05:22:00 np0005593232 frosty_germain[354635]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:22:00 np0005593232 frosty_germain[354635]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:22:00 np0005593232 frosty_germain[354635]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:22:00 np0005593232 frosty_germain[354635]:        "osd_id": 0,
Jan 23 05:22:00 np0005593232 frosty_germain[354635]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:22:00 np0005593232 frosty_germain[354635]:        "type": "bluestore"
Jan 23 05:22:00 np0005593232 frosty_germain[354635]:    }
Jan 23 05:22:00 np0005593232 frosty_germain[354635]: }
Jan 23 05:22:00 np0005593232 nova_compute[250269]: 2026-01-23 10:22:00.381 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:22:00 np0005593232 nova_compute[250269]: 2026-01-23 10:22:00.383 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:22:00 np0005593232 systemd[1]: libpod-2d622a857b9c5a92e1c682e52e4ddd14307482688155d273f4b106d8ee49e761.scope: Deactivated successfully.
Jan 23 05:22:00 np0005593232 podman[354658]: 2026-01-23 10:22:00.515130448 +0000 UTC m=+0.060083079 container died 2d622a857b9c5a92e1c682e52e4ddd14307482688155d273f4b106d8ee49e761 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:22:00 np0005593232 systemd[1]: var-lib-containers-storage-overlay-95ef76e9ce95574a1d6591901dab925c09fd3fc7202f7e047350f73c47146377-merged.mount: Deactivated successfully.
Jan 23 05:22:00 np0005593232 podman[354658]: 2026-01-23 10:22:00.589203234 +0000 UTC m=+0.134155835 container remove 2d622a857b9c5a92e1c682e52e4ddd14307482688155d273f4b106d8ee49e761 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_germain, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 23 05:22:00 np0005593232 systemd[1]: libpod-conmon-2d622a857b9c5a92e1c682e52e4ddd14307482688155d273f4b106d8ee49e761.scope: Deactivated successfully.
Jan 23 05:22:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:22:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:22:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:22:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:22:00 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1fb4843c-34e6-4489-9f3d-d6062e634b74 does not exist
Jan 23 05:22:00 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev e6399435-3f04-4951-82bc-c9302dee7b39 does not exist
Jan 23 05:22:00 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d91fed31-0fa7-47df-9e2b-a2ee51bdc35e does not exist
Jan 23 05:22:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:22:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3218611956' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:22:00 np0005593232 nova_compute[250269]: 2026-01-23 10:22:00.920 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:22:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:01.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:01 np0005593232 nova_compute[250269]: 2026-01-23 10:22:01.129 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:22:01 np0005593232 nova_compute[250269]: 2026-01-23 10:22:01.131 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4197MB free_disk=20.855735778808594GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:22:01 np0005593232 nova_compute[250269]: 2026-01-23 10:22:01.132 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:22:01 np0005593232 nova_compute[250269]: 2026-01-23 10:22:01.132 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:22:01 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:22:01 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:22:01 np0005593232 nova_compute[250269]: 2026-01-23 10:22:01.284 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 4b1c1eac-7834-4d9a-92c7-304ff03a42a0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:22:01 np0005593232 nova_compute[250269]: 2026-01-23 10:22:01.285 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 6ddf1404-0e71-447c-ba86-6d730ff54120 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:22:01 np0005593232 nova_compute[250269]: 2026-01-23 10:22:01.285 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:22:01 np0005593232 nova_compute[250269]: 2026-01-23 10:22:01.285 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:22:01 np0005593232 nova_compute[250269]: 2026-01-23 10:22:01.408 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:22:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:22:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:01.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:22:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2776: 321 pgs: 321 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:22:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:22:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/627026216' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:22:01 np0005593232 nova_compute[250269]: 2026-01-23 10:22:01.882 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:22:01 np0005593232 nova_compute[250269]: 2026-01-23 10:22:01.895 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:22:01 np0005593232 nova_compute[250269]: 2026-01-23 10:22:01.925 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:22:01 np0005593232 nova_compute[250269]: 2026-01-23 10:22:01.959 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:22:01 np0005593232 nova_compute[250269]: 2026-01-23 10:22:01.960 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:22:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.392 250273 DEBUG nova.network.neutron [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Updating instance_info_cache with network_info: [{"id": "eeb6e4f9-f657-4553-8f43-ca56cd0a1034", "address": "fa:16:3e:c1:4f:53", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeeb6e4f9-f6", "ovs_interfaceid": "eeb6e4f9-f657-4553-8f43-ca56cd0a1034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.428 250273 DEBUG nova.network.neutron [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Updating instance_info_cache with network_info: [{"id": "138b230d-d47c-4ec5-9572-c92a07aacb0b", "address": "fa:16:3e:d7:e0:96", "network": {"id": "79dd8398-301e-4e13-ab64-2cdbc4503040", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1285411968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6757124292b484abb7a27e68cab3408", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138b230d-d4", "ovs_interfaceid": "138b230d-d47c-4ec5-9572-c92a07aacb0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.467 250273 DEBUG oslo_concurrency.lockutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Releasing lock "refresh_cache-6ddf1404-0e71-447c-ba86-6d730ff54120" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.468 250273 DEBUG nova.compute.manager [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Instance network_info: |[{"id": "eeb6e4f9-f657-4553-8f43-ca56cd0a1034", "address": "fa:16:3e:c1:4f:53", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeeb6e4f9-f6", "ovs_interfaceid": "eeb6e4f9-f657-4553-8f43-ca56cd0a1034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.469 250273 DEBUG oslo_concurrency.lockutils [req-3ccfc559-0368-4a9c-b64e-bfcaf29ae33c req-d002b7c2-b456-479d-8442-e3d7e6dc0e62 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-6ddf1404-0e71-447c-ba86-6d730ff54120" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.470 250273 DEBUG nova.network.neutron [req-3ccfc559-0368-4a9c-b64e-bfcaf29ae33c req-d002b7c2-b456-479d-8442-e3d7e6dc0e62 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Refreshing network info cache for port eeb6e4f9-f657-4553-8f43-ca56cd0a1034 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.475 250273 DEBUG nova.virt.libvirt.driver [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Start _get_guest_xml network_info=[{"id": "eeb6e4f9-f657-4553-8f43-ca56cd0a1034", "address": "fa:16:3e:c1:4f:53", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeeb6e4f9-f6", "ovs_interfaceid": "eeb6e4f9-f657-4553-8f43-ca56cd0a1034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.476 250273 DEBUG oslo_concurrency.lockutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Releasing lock "refresh_cache-4b1c1eac-7834-4d9a-92c7-304ff03a42a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.477 250273 DEBUG nova.compute.manager [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Instance network_info: |[{"id": "138b230d-d47c-4ec5-9572-c92a07aacb0b", "address": "fa:16:3e:d7:e0:96", "network": {"id": "79dd8398-301e-4e13-ab64-2cdbc4503040", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1285411968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6757124292b484abb7a27e68cab3408", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138b230d-d4", "ovs_interfaceid": "138b230d-d47c-4ec5-9572-c92a07aacb0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.479 250273 DEBUG oslo_concurrency.lockutils [req-df14040a-b166-4d79-b6e6-16357c27ba1e req-4f187904-13fd-4090-9a92-14e054370df9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-4b1c1eac-7834-4d9a-92c7-304ff03a42a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.479 250273 DEBUG nova.network.neutron [req-df14040a-b166-4d79-b6e6-16357c27ba1e req-4f187904-13fd-4090-9a92-14e054370df9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Refreshing network info cache for port 138b230d-d47c-4ec5-9572-c92a07aacb0b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.484 250273 DEBUG nova.virt.libvirt.driver [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Start _get_guest_xml network_info=[{"id": "138b230d-d47c-4ec5-9572-c92a07aacb0b", "address": "fa:16:3e:d7:e0:96", "network": {"id": "79dd8398-301e-4e13-ab64-2cdbc4503040", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1285411968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6757124292b484abb7a27e68cab3408", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138b230d-d4", "ovs_interfaceid": "138b230d-d47c-4ec5-9572-c92a07aacb0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.494 250273 WARNING nova.virt.libvirt.driver [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.498 250273 WARNING nova.virt.libvirt.driver [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.504 250273 DEBUG nova.virt.libvirt.host [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.505 250273 DEBUG nova.virt.libvirt.host [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.506 250273 DEBUG nova.virt.libvirt.host [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.507 250273 DEBUG nova.virt.libvirt.host [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.520 250273 DEBUG nova.virt.libvirt.host [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.521 250273 DEBUG nova.virt.libvirt.host [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.523 250273 DEBUG nova.virt.libvirt.driver [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.523 250273 DEBUG nova.virt.hardware [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.524 250273 DEBUG nova.virt.hardware [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.525 250273 DEBUG nova.virt.hardware [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.525 250273 DEBUG nova.virt.hardware [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.526 250273 DEBUG nova.virt.hardware [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.526 250273 DEBUG nova.virt.hardware [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.526 250273 DEBUG nova.virt.hardware [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.527 250273 DEBUG nova.virt.hardware [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.528 250273 DEBUG nova.virt.hardware [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.528 250273 DEBUG nova.virt.hardware [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.528 250273 DEBUG nova.virt.hardware [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.533 250273 DEBUG oslo_concurrency.processutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.592 250273 DEBUG nova.virt.libvirt.host [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.593 250273 DEBUG nova.virt.libvirt.host [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.594 250273 DEBUG nova.virt.libvirt.driver [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.595 250273 DEBUG nova.virt.hardware [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.595 250273 DEBUG nova.virt.hardware [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.596 250273 DEBUG nova.virt.hardware [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.596 250273 DEBUG nova.virt.hardware [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.596 250273 DEBUG nova.virt.hardware [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.596 250273 DEBUG nova.virt.hardware [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.597 250273 DEBUG nova.virt.hardware [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.597 250273 DEBUG nova.virt.hardware [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.597 250273 DEBUG nova.virt.hardware [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.598 250273 DEBUG nova.virt.hardware [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.598 250273 DEBUG nova.virt.hardware [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:22:02 np0005593232 nova_compute[250269]: 2026-01-23 10:22:02.602 250273 DEBUG oslo_concurrency.processutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:22:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:22:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1432040452' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:22:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:22:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:03.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:22:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:22:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3365225750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.065 250273 DEBUG oslo_concurrency.processutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.098 250273 DEBUG nova.storage.rbd_utils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image 6ddf1404-0e71-447c-ba86-6d730ff54120_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.104 250273 DEBUG oslo_concurrency.processutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.137 250273 DEBUG oslo_concurrency.processutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.178 250273 DEBUG nova.storage.rbd_utils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] rbd image 4b1c1eac-7834-4d9a-92c7-304ff03a42a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.186 250273 DEBUG oslo_concurrency.processutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:22:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:22:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:03.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:22:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2777: 321 pgs: 321 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:22:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:22:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2567741496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.654 250273 DEBUG oslo_concurrency.processutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.657 250273 DEBUG nova.virt.libvirt.vif [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:21:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-182217745',display_name='tempest-ServersTestJSON-server-182217745',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-182217745',id=157,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c59351a1b59c4cc9ad389dff900935f2',ramdisk_id='',reservation_id='r-0w9a5fem',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1611255243',owner_user_name='tempest-ServersTestJSON-1611255243-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:21:48Z,user_data=None,user_id='ec99ae7c69d0438280441e0434374cbf',uuid=6ddf1404-0e71-447c-ba86-6d730ff54120,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eeb6e4f9-f657-4553-8f43-ca56cd0a1034", "address": "fa:16:3e:c1:4f:53", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeeb6e4f9-f6", "ovs_interfaceid": "eeb6e4f9-f657-4553-8f43-ca56cd0a1034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.658 250273 DEBUG nova.network.os_vif_util [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Converting VIF {"id": "eeb6e4f9-f657-4553-8f43-ca56cd0a1034", "address": "fa:16:3e:c1:4f:53", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeeb6e4f9-f6", "ovs_interfaceid": "eeb6e4f9-f657-4553-8f43-ca56cd0a1034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.660 250273 DEBUG nova.network.os_vif_util [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:4f:53,bridge_name='br-int',has_traffic_filtering=True,id=eeb6e4f9-f657-4553-8f43-ca56cd0a1034,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeeb6e4f9-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.662 250273 DEBUG nova.objects.instance [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6ddf1404-0e71-447c-ba86-6d730ff54120 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.690 250273 DEBUG nova.virt.libvirt.driver [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  <uuid>6ddf1404-0e71-447c-ba86-6d730ff54120</uuid>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  <name>instance-0000009d</name>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServersTestJSON-server-182217745</nova:name>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:22:02</nova:creationTime>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <nova:user uuid="ec99ae7c69d0438280441e0434374cbf">tempest-ServersTestJSON-1611255243-project-member</nova:user>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <nova:project uuid="c59351a1b59c4cc9ad389dff900935f2">tempest-ServersTestJSON-1611255243</nova:project>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <nova:port uuid="eeb6e4f9-f657-4553-8f43-ca56cd0a1034">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <entry name="serial">6ddf1404-0e71-447c-ba86-6d730ff54120</entry>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <entry name="uuid">6ddf1404-0e71-447c-ba86-6d730ff54120</entry>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/6ddf1404-0e71-447c-ba86-6d730ff54120_disk">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/6ddf1404-0e71-447c-ba86-6d730ff54120_disk.config">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:c1:4f:53"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <target dev="tapeeb6e4f9-f6"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/6ddf1404-0e71-447c-ba86-6d730ff54120/console.log" append="off"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:22:03 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:22:03 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.693 250273 DEBUG nova.compute.manager [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Preparing to wait for external event network-vif-plugged-eeb6e4f9-f657-4553-8f43-ca56cd0a1034 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.693 250273 DEBUG oslo_concurrency.lockutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "6ddf1404-0e71-447c-ba86-6d730ff54120-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.694 250273 DEBUG oslo_concurrency.lockutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "6ddf1404-0e71-447c-ba86-6d730ff54120-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.694 250273 DEBUG oslo_concurrency.lockutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "6ddf1404-0e71-447c-ba86-6d730ff54120-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.696 250273 DEBUG nova.virt.libvirt.vif [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:21:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-182217745',display_name='tempest-ServersTestJSON-server-182217745',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-182217745',id=157,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c59351a1b59c4cc9ad389dff900935f2',ramdisk_id='',reservation_id='r-0w9a5fem',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1611255243',owner_user_name='tempest-ServersTestJSON-1611255243-project-
member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:21:48Z,user_data=None,user_id='ec99ae7c69d0438280441e0434374cbf',uuid=6ddf1404-0e71-447c-ba86-6d730ff54120,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eeb6e4f9-f657-4553-8f43-ca56cd0a1034", "address": "fa:16:3e:c1:4f:53", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeeb6e4f9-f6", "ovs_interfaceid": "eeb6e4f9-f657-4553-8f43-ca56cd0a1034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.696 250273 DEBUG nova.network.os_vif_util [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Converting VIF {"id": "eeb6e4f9-f657-4553-8f43-ca56cd0a1034", "address": "fa:16:3e:c1:4f:53", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeeb6e4f9-f6", "ovs_interfaceid": "eeb6e4f9-f657-4553-8f43-ca56cd0a1034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.698 250273 DEBUG nova.network.os_vif_util [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:4f:53,bridge_name='br-int',has_traffic_filtering=True,id=eeb6e4f9-f657-4553-8f43-ca56cd0a1034,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeeb6e4f9-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.698 250273 DEBUG os_vif [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:4f:53,bridge_name='br-int',has_traffic_filtering=True,id=eeb6e4f9-f657-4553-8f43-ca56cd0a1034,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeeb6e4f9-f6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.699 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:22:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3990934784' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.700 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.701 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.710 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.711 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeeb6e4f9-f6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.712 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeeb6e4f9-f6, col_values=(('external_ids', {'iface-id': 'eeb6e4f9-f657-4553-8f43-ca56cd0a1034', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:4f:53', 'vm-uuid': '6ddf1404-0e71-447c-ba86-6d730ff54120'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:22:03 np0005593232 NetworkManager[49057]: <info>  [1769163723.7166] manager: (tapeeb6e4f9-f6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/284)
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.718 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.725 250273 DEBUG oslo_concurrency.processutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.728 250273 DEBUG nova.virt.libvirt.vif [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:21:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-1582557665',display_name='tempest-ServerMetadataTestJSON-server-1582557665',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-1582557665',id=156,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a6757124292b484abb7a27e68cab3408',ramdisk_id='',reservation_id='r-1bmpdmwj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-1422902891',owner_user_name='tempest-ServerMetadata
TestJSON-1422902891-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:21:44Z,user_data=None,user_id='558050347cf24d20a73a9a6d08d4c242',uuid=4b1c1eac-7834-4d9a-92c7-304ff03a42a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "138b230d-d47c-4ec5-9572-c92a07aacb0b", "address": "fa:16:3e:d7:e0:96", "network": {"id": "79dd8398-301e-4e13-ab64-2cdbc4503040", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1285411968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6757124292b484abb7a27e68cab3408", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138b230d-d4", "ovs_interfaceid": "138b230d-d47c-4ec5-9572-c92a07aacb0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.729 250273 DEBUG nova.network.os_vif_util [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Converting VIF {"id": "138b230d-d47c-4ec5-9572-c92a07aacb0b", "address": "fa:16:3e:d7:e0:96", "network": {"id": "79dd8398-301e-4e13-ab64-2cdbc4503040", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1285411968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6757124292b484abb7a27e68cab3408", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138b230d-d4", "ovs_interfaceid": "138b230d-d47c-4ec5-9572-c92a07aacb0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.730 250273 DEBUG nova.network.os_vif_util [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:e0:96,bridge_name='br-int',has_traffic_filtering=True,id=138b230d-d47c-4ec5-9572-c92a07aacb0b,network=Network(79dd8398-301e-4e13-ab64-2cdbc4503040),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap138b230d-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.732 250273 DEBUG nova.objects.instance [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4b1c1eac-7834-4d9a-92c7-304ff03a42a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.734 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.736 250273 INFO os_vif [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:4f:53,bridge_name='br-int',has_traffic_filtering=True,id=eeb6e4f9-f657-4553-8f43-ca56cd0a1034,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeeb6e4f9-f6')#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.755 250273 DEBUG nova.virt.libvirt.driver [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  <uuid>4b1c1eac-7834-4d9a-92c7-304ff03a42a0</uuid>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  <name>instance-0000009c</name>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerMetadataTestJSON-server-1582557665</nova:name>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:22:02</nova:creationTime>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <nova:user uuid="558050347cf24d20a73a9a6d08d4c242">tempest-ServerMetadataTestJSON-1422902891-project-member</nova:user>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <nova:project uuid="a6757124292b484abb7a27e68cab3408">tempest-ServerMetadataTestJSON-1422902891</nova:project>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <nova:port uuid="138b230d-d47c-4ec5-9572-c92a07aacb0b">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <entry name="serial">4b1c1eac-7834-4d9a-92c7-304ff03a42a0</entry>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <entry name="uuid">4b1c1eac-7834-4d9a-92c7-304ff03a42a0</entry>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/4b1c1eac-7834-4d9a-92c7-304ff03a42a0_disk">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/4b1c1eac-7834-4d9a-92c7-304ff03a42a0_disk.config">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:d7:e0:96"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <target dev="tap138b230d-d4"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/4b1c1eac-7834-4d9a-92c7-304ff03a42a0/console.log" append="off"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:22:03 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:22:03 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:22:03 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:22:03 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.758 250273 DEBUG nova.compute.manager [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Preparing to wait for external event network-vif-plugged-138b230d-d47c-4ec5-9572-c92a07aacb0b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.758 250273 DEBUG oslo_concurrency.lockutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Acquiring lock "4b1c1eac-7834-4d9a-92c7-304ff03a42a0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.759 250273 DEBUG oslo_concurrency.lockutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Lock "4b1c1eac-7834-4d9a-92c7-304ff03a42a0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.759 250273 DEBUG oslo_concurrency.lockutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Lock "4b1c1eac-7834-4d9a-92c7-304ff03a42a0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.760 250273 DEBUG nova.virt.libvirt.vif [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:21:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-1582557665',display_name='tempest-ServerMetadataTestJSON-server-1582557665',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-1582557665',id=156,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a6757124292b484abb7a27e68cab3408',ramdisk_id='',reservation_id='r-1bmpdmwj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-1422902891',owner_user_name='tempest-ServerMetadataTestJSON-1422902891-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:21:44Z,user_data=None,user_id='558050347cf24d20a73a9a6d08d4c242',uuid=4b1c1eac-7834-4d9a-92c7-304ff03a42a0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "138b230d-d47c-4ec5-9572-c92a07aacb0b", "address": "fa:16:3e:d7:e0:96", "network": {"id": "79dd8398-301e-4e13-ab64-2cdbc4503040", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1285411968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6757124292b484abb7a27e68cab3408", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138b230d-d4", "ovs_interfaceid": "138b230d-d47c-4ec5-9572-c92a07aacb0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.761 250273 DEBUG nova.network.os_vif_util [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Converting VIF {"id": "138b230d-d47c-4ec5-9572-c92a07aacb0b", "address": "fa:16:3e:d7:e0:96", "network": {"id": "79dd8398-301e-4e13-ab64-2cdbc4503040", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1285411968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6757124292b484abb7a27e68cab3408", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138b230d-d4", "ovs_interfaceid": "138b230d-d47c-4ec5-9572-c92a07aacb0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.762 250273 DEBUG nova.network.os_vif_util [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:e0:96,bridge_name='br-int',has_traffic_filtering=True,id=138b230d-d47c-4ec5-9572-c92a07aacb0b,network=Network(79dd8398-301e-4e13-ab64-2cdbc4503040),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap138b230d-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.762 250273 DEBUG os_vif [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:e0:96,bridge_name='br-int',has_traffic_filtering=True,id=138b230d-d47c-4ec5-9572-c92a07aacb0b,network=Network(79dd8398-301e-4e13-ab64-2cdbc4503040),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap138b230d-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.763 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.763 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.764 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.769 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.769 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap138b230d-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.769 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap138b230d-d4, col_values=(('external_ids', {'iface-id': '138b230d-d47c-4ec5-9572-c92a07aacb0b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d7:e0:96', 'vm-uuid': '4b1c1eac-7834-4d9a-92c7-304ff03a42a0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.771 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:03 np0005593232 NetworkManager[49057]: <info>  [1769163723.7729] manager: (tap138b230d-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/285)
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.774 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.783 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.784 250273 INFO os_vif [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:e0:96,bridge_name='br-int',has_traffic_filtering=True,id=138b230d-d47c-4ec5-9572-c92a07aacb0b,network=Network(79dd8398-301e-4e13-ab64-2cdbc4503040),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap138b230d-d4')#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.853 250273 DEBUG nova.virt.libvirt.driver [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.854 250273 DEBUG nova.virt.libvirt.driver [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.854 250273 DEBUG nova.virt.libvirt.driver [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] No VIF found with MAC fa:16:3e:c1:4f:53, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.855 250273 INFO nova.virt.libvirt.driver [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Using config drive#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.891 250273 DEBUG nova.storage.rbd_utils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image 6ddf1404-0e71-447c-ba86-6d730ff54120_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.960 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.965 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.965 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.965 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.974 250273 DEBUG nova.virt.libvirt.driver [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.975 250273 DEBUG nova.virt.libvirt.driver [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.975 250273 DEBUG nova.virt.libvirt.driver [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] No VIF found with MAC fa:16:3e:d7:e0:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:22:03 np0005593232 nova_compute[250269]: 2026-01-23 10:22:03.975 250273 INFO nova.virt.libvirt.driver [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Using config drive#033[00m
Jan 23 05:22:04 np0005593232 nova_compute[250269]: 2026-01-23 10:22:04.011 250273 DEBUG nova.storage.rbd_utils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] rbd image 4b1c1eac-7834-4d9a-92c7-304ff03a42a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:22:04 np0005593232 nova_compute[250269]: 2026-01-23 10:22:04.019 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 23 05:22:04 np0005593232 nova_compute[250269]: 2026-01-23 10:22:04.020 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 23 05:22:04 np0005593232 nova_compute[250269]: 2026-01-23 10:22:04.020 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:22:04 np0005593232 nova_compute[250269]: 2026-01-23 10:22:04.342 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:22:04 np0005593232 nova_compute[250269]: 2026-01-23 10:22:04.846 250273 INFO nova.virt.libvirt.driver [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Creating config drive at /var/lib/nova/instances/4b1c1eac-7834-4d9a-92c7-304ff03a42a0/disk.config#033[00m
Jan 23 05:22:04 np0005593232 nova_compute[250269]: 2026-01-23 10:22:04.854 250273 DEBUG oslo_concurrency.processutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4b1c1eac-7834-4d9a-92c7-304ff03a42a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgvk7s2x7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:22:04 np0005593232 nova_compute[250269]: 2026-01-23 10:22:04.912 250273 INFO nova.virt.libvirt.driver [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Creating config drive at /var/lib/nova/instances/6ddf1404-0e71-447c-ba86-6d730ff54120/disk.config#033[00m
Jan 23 05:22:04 np0005593232 nova_compute[250269]: 2026-01-23 10:22:04.920 250273 DEBUG oslo_concurrency.processutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6ddf1404-0e71-447c-ba86-6d730ff54120/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbe3re7vj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:22:05 np0005593232 nova_compute[250269]: 2026-01-23 10:22:05.024 250273 DEBUG oslo_concurrency.processutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4b1c1eac-7834-4d9a-92c7-304ff03a42a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgvk7s2x7" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:22:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:22:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:05.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:22:05 np0005593232 nova_compute[250269]: 2026-01-23 10:22:05.073 250273 DEBUG nova.storage.rbd_utils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] rbd image 4b1c1eac-7834-4d9a-92c7-304ff03a42a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:22:05 np0005593232 nova_compute[250269]: 2026-01-23 10:22:05.081 250273 DEBUG oslo_concurrency.processutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4b1c1eac-7834-4d9a-92c7-304ff03a42a0/disk.config 4b1c1eac-7834-4d9a-92c7-304ff03a42a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:22:05 np0005593232 nova_compute[250269]: 2026-01-23 10:22:05.141 250273 DEBUG oslo_concurrency.processutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6ddf1404-0e71-447c-ba86-6d730ff54120/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbe3re7vj" returned: 0 in 0.220s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:22:05 np0005593232 nova_compute[250269]: 2026-01-23 10:22:05.175 250273 DEBUG nova.storage.rbd_utils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] rbd image 6ddf1404-0e71-447c-ba86-6d730ff54120_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:22:05 np0005593232 nova_compute[250269]: 2026-01-23 10:22:05.180 250273 DEBUG oslo_concurrency.processutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6ddf1404-0e71-447c-ba86-6d730ff54120/disk.config 6ddf1404-0e71-447c-ba86-6d730ff54120_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:22:05 np0005593232 nova_compute[250269]: 2026-01-23 10:22:05.306 250273 DEBUG oslo_concurrency.processutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4b1c1eac-7834-4d9a-92c7-304ff03a42a0/disk.config 4b1c1eac-7834-4d9a-92c7-304ff03a42a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.225s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:22:05 np0005593232 nova_compute[250269]: 2026-01-23 10:22:05.308 250273 INFO nova.virt.libvirt.driver [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Deleting local config drive /var/lib/nova/instances/4b1c1eac-7834-4d9a-92c7-304ff03a42a0/disk.config because it was imported into RBD.#033[00m
Jan 23 05:22:05 np0005593232 nova_compute[250269]: 2026-01-23 10:22:05.370 250273 DEBUG oslo_concurrency.processutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6ddf1404-0e71-447c-ba86-6d730ff54120/disk.config 6ddf1404-0e71-447c-ba86-6d730ff54120_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:22:05 np0005593232 nova_compute[250269]: 2026-01-23 10:22:05.371 250273 INFO nova.virt.libvirt.driver [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Deleting local config drive /var/lib/nova/instances/6ddf1404-0e71-447c-ba86-6d730ff54120/disk.config because it was imported into RBD.#033[00m
Jan 23 05:22:05 np0005593232 kernel: tap138b230d-d4: entered promiscuous mode
Jan 23 05:22:05 np0005593232 NetworkManager[49057]: <info>  [1769163725.3849] manager: (tap138b230d-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/286)
Jan 23 05:22:05 np0005593232 ovn_controller[151001]: 2026-01-23T10:22:05Z|00605|binding|INFO|Claiming lport 138b230d-d47c-4ec5-9572-c92a07aacb0b for this chassis.
Jan 23 05:22:05 np0005593232 nova_compute[250269]: 2026-01-23 10:22:05.387 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:05 np0005593232 ovn_controller[151001]: 2026-01-23T10:22:05Z|00606|binding|INFO|138b230d-d47c-4ec5-9572-c92a07aacb0b: Claiming fa:16:3e:d7:e0:96 10.100.0.5
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.409 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:e0:96 10.100.0.5'], port_security=['fa:16:3e:d7:e0:96 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4b1c1eac-7834-4d9a-92c7-304ff03a42a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79dd8398-301e-4e13-ab64-2cdbc4503040', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6757124292b484abb7a27e68cab3408', 'neutron:revision_number': '2', 'neutron:security_group_ids': '80da2f70-1021-49f4-a1b9-2db158ba45a9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c646f4f2-19b5-463c-b4ed-89aab42b4633, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=138b230d-d47c-4ec5-9572-c92a07aacb0b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.413 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 138b230d-d47c-4ec5-9572-c92a07aacb0b in datapath 79dd8398-301e-4e13-ab64-2cdbc4503040 bound to our chassis#033[00m
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.415 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 79dd8398-301e-4e13-ab64-2cdbc4503040#033[00m
Jan 23 05:22:05 np0005593232 systemd-udevd[355029]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.440 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee4b1b1-5772-4595-9fe1-5fc30b63f15a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.442 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap79dd8398-31 in ovnmeta-79dd8398-301e-4e13-ab64-2cdbc4503040 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:22:05 np0005593232 NetworkManager[49057]: <info>  [1769163725.4449] device (tap138b230d-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:22:05 np0005593232 NetworkManager[49057]: <info>  [1769163725.4477] manager: (tapeeb6e4f9-f6): new Tun device (/org/freedesktop/NetworkManager/Devices/287)
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.446 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap79dd8398-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.447 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8cffb4bc-a11c-4048-a02c-6a930c6f54cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.448 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b7495402-0d61-466d-9b68-cda0009f6e56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:05 np0005593232 NetworkManager[49057]: <info>  [1769163725.4509] device (tap138b230d-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:22:05 np0005593232 systemd-udevd[355038]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:22:05 np0005593232 systemd-machined[215836]: New machine qemu-69-instance-0000009c.
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.468 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[20814442-f13f-4901-b8e5-3e31bcfade2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:05 np0005593232 systemd[1]: Started Virtual Machine qemu-69-instance-0000009c.
Jan 23 05:22:05 np0005593232 NetworkManager[49057]: <info>  [1769163725.4905] device (tapeeb6e4f9-f6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:22:05 np0005593232 kernel: tapeeb6e4f9-f6: entered promiscuous mode
Jan 23 05:22:05 np0005593232 NetworkManager[49057]: <info>  [1769163725.4917] device (tapeeb6e4f9-f6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:22:05 np0005593232 ovn_controller[151001]: 2026-01-23T10:22:05Z|00607|binding|INFO|Claiming lport eeb6e4f9-f657-4553-8f43-ca56cd0a1034 for this chassis.
Jan 23 05:22:05 np0005593232 ovn_controller[151001]: 2026-01-23T10:22:05Z|00608|binding|INFO|eeb6e4f9-f657-4553-8f43-ca56cd0a1034: Claiming fa:16:3e:c1:4f:53 10.100.0.7
Jan 23 05:22:05 np0005593232 nova_compute[250269]: 2026-01-23 10:22:05.496 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.501 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[27949ff8-d712-404a-b5f5-c65f38c55986]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:05 np0005593232 nova_compute[250269]: 2026-01-23 10:22:05.507 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.507 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:4f:53 10.100.0.7'], port_security=['fa:16:3e:c1:4f:53 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6ddf1404-0e71-447c-ba86-6d730ff54120', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c59351a1b59c4cc9ad389dff900935f2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd59f1dd0-018a-40d5-b9a0-54c6c1f9d925', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c808b115-ccf1-41c4-acea-daabae8abf5b, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=eeb6e4f9-f657-4553-8f43-ca56cd0a1034) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:22:05 np0005593232 ovn_controller[151001]: 2026-01-23T10:22:05Z|00609|binding|INFO|Setting lport 138b230d-d47c-4ec5-9572-c92a07aacb0b up in Southbound
Jan 23 05:22:05 np0005593232 ovn_controller[151001]: 2026-01-23T10:22:05Z|00610|binding|INFO|Setting lport 138b230d-d47c-4ec5-9572-c92a07aacb0b ovn-installed in OVS
Jan 23 05:22:05 np0005593232 nova_compute[250269]: 2026-01-23 10:22:05.512 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:05 np0005593232 systemd-machined[215836]: New machine qemu-70-instance-0000009d.
Jan 23 05:22:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:05.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.554 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[7f4fd9ba-a4fb-49f7-a647-8401010b4e05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:05 np0005593232 systemd[1]: Started Virtual Machine qemu-70-instance-0000009d.
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.564 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a1553ce3-b0ab-4f81-8629-564435d62c35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:05 np0005593232 NetworkManager[49057]: <info>  [1769163725.5654] manager: (tap79dd8398-30): new Veth device (/org/freedesktop/NetworkManager/Devices/288)
Jan 23 05:22:05 np0005593232 ovn_controller[151001]: 2026-01-23T10:22:05Z|00611|binding|INFO|Setting lport eeb6e4f9-f657-4553-8f43-ca56cd0a1034 ovn-installed in OVS
Jan 23 05:22:05 np0005593232 ovn_controller[151001]: 2026-01-23T10:22:05Z|00612|binding|INFO|Setting lport eeb6e4f9-f657-4553-8f43-ca56cd0a1034 up in Southbound
Jan 23 05:22:05 np0005593232 nova_compute[250269]: 2026-01-23 10:22:05.581 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.607 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[05777840-2e23-452b-ba13-b6adfba2ee93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.612 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[e989b444-6870-4918-8897-a2b9dccf8543]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2778: 321 pgs: 321 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:22:05 np0005593232 NetworkManager[49057]: <info>  [1769163725.6457] device (tap79dd8398-30): carrier: link connected
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.651 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[935db8c1-6042-4c27-acb5-371e5477280b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.679 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f5437314-e8a2-47c4-9f03-e7c7d8dd1bb9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79dd8398-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:b0:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 182], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763878, 'reachable_time': 39432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355078, 'error': None, 'target': 'ovnmeta-79dd8398-301e-4e13-ab64-2cdbc4503040', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.702 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e695e1f2-d0e0-49e9-b70d-e136f5e5f87b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe94:b0ab'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763878, 'tstamp': 763878}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355079, 'error': None, 'target': 'ovnmeta-79dd8398-301e-4e13-ab64-2cdbc4503040', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.726 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[073d5043-408a-4364-964d-f45869d53650]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap79dd8398-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:94:b0:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 182], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763878, 'reachable_time': 39432, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355080, 'error': None, 'target': 'ovnmeta-79dd8398-301e-4e13-ab64-2cdbc4503040', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.779 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3ab8a383-bb96-4f1f-8531-e018a065ba5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.873 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5b85b40f-776d-4411-bd91-37bdfb3c0566]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.876 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79dd8398-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.876 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.877 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79dd8398-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:22:05 np0005593232 nova_compute[250269]: 2026-01-23 10:22:05.880 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:05 np0005593232 NetworkManager[49057]: <info>  [1769163725.8818] manager: (tap79dd8398-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/289)
Jan 23 05:22:05 np0005593232 kernel: tap79dd8398-30: entered promiscuous mode
Jan 23 05:22:05 np0005593232 nova_compute[250269]: 2026-01-23 10:22:05.884 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.886 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap79dd8398-30, col_values=(('external_ids', {'iface-id': '2ec0855c-307b-4b1e-a1b1-a551163c8841'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:22:05 np0005593232 ovn_controller[151001]: 2026-01-23T10:22:05Z|00613|binding|INFO|Releasing lport 2ec0855c-307b-4b1e-a1b1-a551163c8841 from this chassis (sb_readonly=0)
Jan 23 05:22:05 np0005593232 nova_compute[250269]: 2026-01-23 10:22:05.887 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:05 np0005593232 nova_compute[250269]: 2026-01-23 10:22:05.917 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.919 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/79dd8398-301e-4e13-ab64-2cdbc4503040.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/79dd8398-301e-4e13-ab64-2cdbc4503040.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.921 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a8782504-6f46-49b3-b27f-b37deec93f29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.922 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-79dd8398-301e-4e13-ab64-2cdbc4503040
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/79dd8398-301e-4e13-ab64-2cdbc4503040.pid.haproxy
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 79dd8398-301e-4e13-ab64-2cdbc4503040
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:22:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:05.923 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-79dd8398-301e-4e13-ab64-2cdbc4503040', 'env', 'PROCESS_TAG=haproxy-79dd8398-301e-4e13-ab64-2cdbc4503040', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/79dd8398-301e-4e13-ab64-2cdbc4503040.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:22:05 np0005593232 nova_compute[250269]: 2026-01-23 10:22:05.970 250273 DEBUG nova.network.neutron [req-df14040a-b166-4d79-b6e6-16357c27ba1e req-4f187904-13fd-4090-9a92-14e054370df9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Updated VIF entry in instance network info cache for port 138b230d-d47c-4ec5-9572-c92a07aacb0b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:22:05 np0005593232 nova_compute[250269]: 2026-01-23 10:22:05.972 250273 DEBUG nova.network.neutron [req-df14040a-b166-4d79-b6e6-16357c27ba1e req-4f187904-13fd-4090-9a92-14e054370df9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Updating instance_info_cache with network_info: [{"id": "138b230d-d47c-4ec5-9572-c92a07aacb0b", "address": "fa:16:3e:d7:e0:96", "network": {"id": "79dd8398-301e-4e13-ab64-2cdbc4503040", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1285411968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6757124292b484abb7a27e68cab3408", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138b230d-d4", "ovs_interfaceid": "138b230d-d47c-4ec5-9572-c92a07aacb0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.022 250273 DEBUG oslo_concurrency.lockutils [req-df14040a-b166-4d79-b6e6-16357c27ba1e req-4f187904-13fd-4090-9a92-14e054370df9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-4b1c1eac-7834-4d9a-92c7-304ff03a42a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.127 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163726.1271179, 4b1c1eac-7834-4d9a-92c7-304ff03a42a0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.128 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] VM Started (Lifecycle Event)#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.171 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.178 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163726.1283574, 4b1c1eac-7834-4d9a-92c7-304ff03a42a0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.178 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.207 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.212 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.245 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.245 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163726.2290826, 6ddf1404-0e71-447c-ba86-6d730ff54120 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.245 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] VM Started (Lifecycle Event)#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.280 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.285 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163726.229236, 6ddf1404-0e71-447c-ba86-6d730ff54120 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.286 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.329 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.333 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.371 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:22:06 np0005593232 podman[355197]: 2026-01-23 10:22:06.410836996 +0000 UTC m=+0.065382640 container create 383bd7aaafad056df7e72c26ba8aeff7f5660a8cf6f2b8b11e102eb268989c35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79dd8398-301e-4e13-ab64-2cdbc4503040, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.450 250273 DEBUG nova.compute.manager [req-bad41369-fa67-4223-aae8-9d22aa4d2a6b req-984ae240-0234-441c-979d-9aa225813133 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Received event network-vif-plugged-eeb6e4f9-f657-4553-8f43-ca56cd0a1034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.452 250273 DEBUG oslo_concurrency.lockutils [req-bad41369-fa67-4223-aae8-9d22aa4d2a6b req-984ae240-0234-441c-979d-9aa225813133 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "6ddf1404-0e71-447c-ba86-6d730ff54120-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.453 250273 DEBUG oslo_concurrency.lockutils [req-bad41369-fa67-4223-aae8-9d22aa4d2a6b req-984ae240-0234-441c-979d-9aa225813133 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6ddf1404-0e71-447c-ba86-6d730ff54120-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.454 250273 DEBUG oslo_concurrency.lockutils [req-bad41369-fa67-4223-aae8-9d22aa4d2a6b req-984ae240-0234-441c-979d-9aa225813133 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6ddf1404-0e71-447c-ba86-6d730ff54120-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.455 250273 DEBUG nova.compute.manager [req-bad41369-fa67-4223-aae8-9d22aa4d2a6b req-984ae240-0234-441c-979d-9aa225813133 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Processing event network-vif-plugged-eeb6e4f9-f657-4553-8f43-ca56cd0a1034 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.456 250273 DEBUG nova.compute.manager [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:22:06 np0005593232 systemd[1]: Started libpod-conmon-383bd7aaafad056df7e72c26ba8aeff7f5660a8cf6f2b8b11e102eb268989c35.scope.
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.462 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163726.462514, 6ddf1404-0e71-447c-ba86-6d730ff54120 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.463 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.466 250273 DEBUG nova.virt.libvirt.driver [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:22:06 np0005593232 podman[355197]: 2026-01-23 10:22:06.37929447 +0000 UTC m=+0.033840094 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.471 250273 INFO nova.virt.libvirt.driver [-] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Instance spawned successfully.#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.472 250273 DEBUG nova.virt.libvirt.driver [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:22:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.492 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:22:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53016305e880af240a5c81fec9f621e6b0a5dfdad2011671c49b131531f48af8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.500 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.509 250273 DEBUG nova.virt.libvirt.driver [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.510 250273 DEBUG nova.virt.libvirt.driver [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.511 250273 DEBUG nova.virt.libvirt.driver [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.511 250273 DEBUG nova.virt.libvirt.driver [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.512 250273 DEBUG nova.virt.libvirt.driver [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.512 250273 DEBUG nova.virt.libvirt.driver [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:22:06 np0005593232 podman[355197]: 2026-01-23 10:22:06.522683606 +0000 UTC m=+0.177229240 container init 383bd7aaafad056df7e72c26ba8aeff7f5660a8cf6f2b8b11e102eb268989c35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79dd8398-301e-4e13-ab64-2cdbc4503040, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 23 05:22:06 np0005593232 podman[355197]: 2026-01-23 10:22:06.530473047 +0000 UTC m=+0.185018651 container start 383bd7aaafad056df7e72c26ba8aeff7f5660a8cf6f2b8b11e102eb268989c35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79dd8398-301e-4e13-ab64-2cdbc4503040, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.545 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:22:06 np0005593232 neutron-haproxy-ovnmeta-79dd8398-301e-4e13-ab64-2cdbc4503040[355212]: [NOTICE]   (355216) : New worker (355218) forked
Jan 23 05:22:06 np0005593232 neutron-haproxy-ovnmeta-79dd8398-301e-4e13-ab64-2cdbc4503040[355212]: [NOTICE]   (355216) : Loading success.
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.594 250273 INFO nova.compute.manager [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Took 13.42 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.594 250273 DEBUG nova.compute.manager [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.602 161902 INFO neutron.agent.ovn.metadata.agent [-] Port eeb6e4f9-f657-4553-8f43-ca56cd0a1034 in datapath 43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 unbound from our chassis#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.605 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.616 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[fba6855b-1bac-4845-bc4a-87d41b604129]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.618 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap43bdb40a-e1 in ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.619 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap43bdb40a-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.620 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[53343d98-471d-484b-9a6a-e9dd7b9d1b1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.621 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8fd34f80-9610-43d0-b553-464d56b60e7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.633 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[e88e1f52-304a-4d8a-ac59-506377a96b61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.657 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e0da5959-2d60-4a95-a1f3-fd3ba12eec4f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.692 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[ba7e2a0f-1a88-4c1d-89db-6cff39341668]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.694 250273 INFO nova.compute.manager [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Took 26.56 seconds to build instance.#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.701 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7adc47b8-b14e-4c0b-bd2a-dafd798d551f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:06 np0005593232 NetworkManager[49057]: <info>  [1769163726.7029] manager: (tap43bdb40a-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/290)
Jan 23 05:22:06 np0005593232 systemd-udevd[355060]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.737 250273 DEBUG oslo_concurrency.lockutils [None req-1cc65f01-b6d4-4add-ba25-f9f610f27cbb ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "6ddf1404-0e71-447c-ba86-6d730ff54120" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 28.352s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.740 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[bf8d438e-2aa0-44c4-ade5-68f104c678c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.743 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[1170975b-d3ad-4a93-935f-a044f462be3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:06 np0005593232 NetworkManager[49057]: <info>  [1769163726.7698] device (tap43bdb40a-e0): carrier: link connected
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.774 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[69f42e39-0aa0-4846-9add-e55ac7b58cab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.791 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5b8777c2-f0c9-4895-85a7-d743f8c9f66d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43bdb40a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:5e:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 183], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763991, 'reachable_time': 38719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355237, 'error': None, 'target': 'ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.809 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3b506416-cf78-44ba-9b53-ee0cd26ddd61]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2b:5ee5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 763991, 'tstamp': 763991}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355238, 'error': None, 'target': 'ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.826 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9e54c862-6628-4bda-804f-4076799accea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43bdb40a-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2b:5e:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 183], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763991, 'reachable_time': 38719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355239, 'error': None, 'target': 'ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.859 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5b1a45d1-46fc-47f5-b8ab-a02e93de1012]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.915 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[93fef046-c26e-4240-af35-54e120520249]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.917 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43bdb40a-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.917 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.918 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43bdb40a-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.919 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:06 np0005593232 NetworkManager[49057]: <info>  [1769163726.9206] manager: (tap43bdb40a-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/291)
Jan 23 05:22:06 np0005593232 kernel: tap43bdb40a-e0: entered promiscuous mode
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.923 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.924 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43bdb40a-e0, col_values=(('external_ids', {'iface-id': '8a8ef4f2-2ba5-405a-811e-058c5ff2b91e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.924 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:06 np0005593232 ovn_controller[151001]: 2026-01-23T10:22:06Z|00614|binding|INFO|Releasing lport 8a8ef4f2-2ba5-405a-811e-058c5ff2b91e from this chassis (sb_readonly=0)
Jan 23 05:22:06 np0005593232 nova_compute[250269]: 2026-01-23 10:22:06.953 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.954 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.955 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d7aed7ef-b0ba-4d5c-8dce-20637fb335eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.955 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7.pid.haproxy
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:22:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:06.956 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'env', 'PROCESS_TAG=haproxy-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:22:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:07.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.252 250273 DEBUG nova.compute.manager [req-e6debc38-cef5-4cb6-9664-21da23072c46 req-8bd2b675-e641-41bd-8cbc-2d83312bf529 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Received event network-vif-plugged-138b230d-d47c-4ec5-9572-c92a07aacb0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.253 250273 DEBUG oslo_concurrency.lockutils [req-e6debc38-cef5-4cb6-9664-21da23072c46 req-8bd2b675-e641-41bd-8cbc-2d83312bf529 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "4b1c1eac-7834-4d9a-92c7-304ff03a42a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.253 250273 DEBUG oslo_concurrency.lockutils [req-e6debc38-cef5-4cb6-9664-21da23072c46 req-8bd2b675-e641-41bd-8cbc-2d83312bf529 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "4b1c1eac-7834-4d9a-92c7-304ff03a42a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.253 250273 DEBUG oslo_concurrency.lockutils [req-e6debc38-cef5-4cb6-9664-21da23072c46 req-8bd2b675-e641-41bd-8cbc-2d83312bf529 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "4b1c1eac-7834-4d9a-92c7-304ff03a42a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.254 250273 DEBUG nova.compute.manager [req-e6debc38-cef5-4cb6-9664-21da23072c46 req-8bd2b675-e641-41bd-8cbc-2d83312bf529 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Processing event network-vif-plugged-138b230d-d47c-4ec5-9572-c92a07aacb0b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.255 250273 DEBUG nova.compute.manager [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.260 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163727.2602117, 4b1c1eac-7834-4d9a-92c7-304ff03a42a0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.261 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.264 250273 DEBUG nova.virt.libvirt.driver [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.268 250273 INFO nova.virt.libvirt.driver [-] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Instance spawned successfully.#033[00m
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.268 250273 DEBUG nova.virt.libvirt.driver [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.291 250273 DEBUG nova.network.neutron [req-3ccfc559-0368-4a9c-b64e-bfcaf29ae33c req-d002b7c2-b456-479d-8442-e3d7e6dc0e62 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Updated VIF entry in instance network info cache for port eeb6e4f9-f657-4553-8f43-ca56cd0a1034. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.292 250273 DEBUG nova.network.neutron [req-3ccfc559-0368-4a9c-b64e-bfcaf29ae33c req-d002b7c2-b456-479d-8442-e3d7e6dc0e62 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Updating instance_info_cache with network_info: [{"id": "eeb6e4f9-f657-4553-8f43-ca56cd0a1034", "address": "fa:16:3e:c1:4f:53", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeeb6e4f9-f6", "ovs_interfaceid": "eeb6e4f9-f657-4553-8f43-ca56cd0a1034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.296 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.304 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.309 250273 DEBUG nova.virt.libvirt.driver [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.309 250273 DEBUG nova.virt.libvirt.driver [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.310 250273 DEBUG nova.virt.libvirt.driver [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.310 250273 DEBUG nova.virt.libvirt.driver [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.311 250273 DEBUG nova.virt.libvirt.driver [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.311 250273 DEBUG nova.virt.libvirt.driver [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:22:07 np0005593232 podman[355273]: 2026-01-23 10:22:07.396204298 +0000 UTC m=+0.054590723 container create 277caa91008f1bb460dec29f0a245a9e00a92fb8bb47e3ed8b53d2272ea41d43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.408 250273 DEBUG oslo_concurrency.lockutils [req-3ccfc559-0368-4a9c-b64e-bfcaf29ae33c req-d002b7c2-b456-479d-8442-e3d7e6dc0e62 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-6ddf1404-0e71-447c-ba86-6d730ff54120" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:22:07 np0005593232 systemd[1]: Started libpod-conmon-277caa91008f1bb460dec29f0a245a9e00a92fb8bb47e3ed8b53d2272ea41d43.scope.
Jan 23 05:22:07 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:22:07 np0005593232 podman[355273]: 2026-01-23 10:22:07.367471541 +0000 UTC m=+0.025858016 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:22:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea72b2c3c7b53df4164cd19130c36e1611f2eb5fe78cc2a87d5e51c29aa43b6d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:22:07 np0005593232 podman[355273]: 2026-01-23 10:22:07.485115215 +0000 UTC m=+0.143501720 container init 277caa91008f1bb460dec29f0a245a9e00a92fb8bb47e3ed8b53d2272ea41d43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:22:07 np0005593232 podman[355273]: 2026-01-23 10:22:07.498584728 +0000 UTC m=+0.156971173 container start 277caa91008f1bb460dec29f0a245a9e00a92fb8bb47e3ed8b53d2272ea41d43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:22:07 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[355288]: [NOTICE]   (355292) : New worker (355294) forked
Jan 23 05:22:07 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[355288]: [NOTICE]   (355292) : Loading success.
Jan 23 05:22:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:22:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:07.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:22:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:22:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:22:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:22:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:22:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:22:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:22:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2779: 321 pgs: 321 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 875 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.683 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.906 250273 INFO nova.compute.manager [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Took 23.12 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:22:07 np0005593232 nova_compute[250269]: 2026-01-23 10:22:07.907 250273 DEBUG nova.compute.manager [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:22:08 np0005593232 nova_compute[250269]: 2026-01-23 10:22:08.157 250273 INFO nova.compute.manager [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Took 28.45 seconds to build instance.#033[00m
Jan 23 05:22:08 np0005593232 nova_compute[250269]: 2026-01-23 10:22:08.774 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:08 np0005593232 nova_compute[250269]: 2026-01-23 10:22:08.961 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:09.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:09 np0005593232 nova_compute[250269]: 2026-01-23 10:22:09.138 250273 DEBUG oslo_concurrency.lockutils [None req-1086fe7a-a053-40f8-b5ca-a6036be817b6 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Lock "4b1c1eac-7834-4d9a-92c7-304ff03a42a0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 30.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:22:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:09.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2780: 321 pgs: 321 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 258 KiB/s wr, 100 op/s
Jan 23 05:22:09 np0005593232 nova_compute[250269]: 2026-01-23 10:22:09.652 250273 DEBUG nova.compute.manager [req-4d351d83-0b45-4ac1-aac6-75cc511dab5a req-86844aec-e902-4e92-9d4b-236f8d6b5304 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Received event network-vif-plugged-eeb6e4f9-f657-4553-8f43-ca56cd0a1034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:22:09 np0005593232 nova_compute[250269]: 2026-01-23 10:22:09.653 250273 DEBUG oslo_concurrency.lockutils [req-4d351d83-0b45-4ac1-aac6-75cc511dab5a req-86844aec-e902-4e92-9d4b-236f8d6b5304 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "6ddf1404-0e71-447c-ba86-6d730ff54120-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:22:09 np0005593232 nova_compute[250269]: 2026-01-23 10:22:09.653 250273 DEBUG oslo_concurrency.lockutils [req-4d351d83-0b45-4ac1-aac6-75cc511dab5a req-86844aec-e902-4e92-9d4b-236f8d6b5304 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6ddf1404-0e71-447c-ba86-6d730ff54120-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:22:09 np0005593232 nova_compute[250269]: 2026-01-23 10:22:09.654 250273 DEBUG oslo_concurrency.lockutils [req-4d351d83-0b45-4ac1-aac6-75cc511dab5a req-86844aec-e902-4e92-9d4b-236f8d6b5304 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6ddf1404-0e71-447c-ba86-6d730ff54120-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:22:09 np0005593232 nova_compute[250269]: 2026-01-23 10:22:09.654 250273 DEBUG nova.compute.manager [req-4d351d83-0b45-4ac1-aac6-75cc511dab5a req-86844aec-e902-4e92-9d4b-236f8d6b5304 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] No waiting events found dispatching network-vif-plugged-eeb6e4f9-f657-4553-8f43-ca56cd0a1034 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:22:09 np0005593232 nova_compute[250269]: 2026-01-23 10:22:09.654 250273 WARNING nova.compute.manager [req-4d351d83-0b45-4ac1-aac6-75cc511dab5a req-86844aec-e902-4e92-9d4b-236f8d6b5304 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Received unexpected event network-vif-plugged-eeb6e4f9-f657-4553-8f43-ca56cd0a1034 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:22:09 np0005593232 nova_compute[250269]: 2026-01-23 10:22:09.887 250273 DEBUG nova.compute.manager [req-ef872b16-3321-4fa7-8c73-dcb66282db0f req-1b4a8dcc-4d6d-44ee-8e22-7a08e89b67c4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Received event network-vif-plugged-138b230d-d47c-4ec5-9572-c92a07aacb0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:22:09 np0005593232 nova_compute[250269]: 2026-01-23 10:22:09.888 250273 DEBUG oslo_concurrency.lockutils [req-ef872b16-3321-4fa7-8c73-dcb66282db0f req-1b4a8dcc-4d6d-44ee-8e22-7a08e89b67c4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "4b1c1eac-7834-4d9a-92c7-304ff03a42a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:22:09 np0005593232 nova_compute[250269]: 2026-01-23 10:22:09.888 250273 DEBUG oslo_concurrency.lockutils [req-ef872b16-3321-4fa7-8c73-dcb66282db0f req-1b4a8dcc-4d6d-44ee-8e22-7a08e89b67c4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "4b1c1eac-7834-4d9a-92c7-304ff03a42a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:22:09 np0005593232 nova_compute[250269]: 2026-01-23 10:22:09.888 250273 DEBUG oslo_concurrency.lockutils [req-ef872b16-3321-4fa7-8c73-dcb66282db0f req-1b4a8dcc-4d6d-44ee-8e22-7a08e89b67c4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "4b1c1eac-7834-4d9a-92c7-304ff03a42a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:22:09 np0005593232 nova_compute[250269]: 2026-01-23 10:22:09.888 250273 DEBUG nova.compute.manager [req-ef872b16-3321-4fa7-8c73-dcb66282db0f req-1b4a8dcc-4d6d-44ee-8e22-7a08e89b67c4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] No waiting events found dispatching network-vif-plugged-138b230d-d47c-4ec5-9572-c92a07aacb0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:22:09 np0005593232 nova_compute[250269]: 2026-01-23 10:22:09.888 250273 WARNING nova.compute.manager [req-ef872b16-3321-4fa7-8c73-dcb66282db0f req-1b4a8dcc-4d6d-44ee-8e22-7a08e89b67c4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Received unexpected event network-vif-plugged-138b230d-d47c-4ec5-9572-c92a07aacb0b for instance with vm_state active and task_state None.#033[00m
Jan 23 05:22:10 np0005593232 podman[355305]: 2026-01-23 10:22:10.472972471 +0000 UTC m=+0.125974442 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 23 05:22:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:11.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:11.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2781: 321 pgs: 321 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 25 KiB/s wr, 88 op/s
Jan 23 05:22:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:22:13 np0005593232 nova_compute[250269]: 2026-01-23 10:22:13.029 250273 DEBUG oslo_concurrency.lockutils [None req-d0b5880a-095b-4276-b524-a6c29f7548e3 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "6ddf1404-0e71-447c-ba86-6d730ff54120" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:22:13 np0005593232 nova_compute[250269]: 2026-01-23 10:22:13.029 250273 DEBUG oslo_concurrency.lockutils [None req-d0b5880a-095b-4276-b524-a6c29f7548e3 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "6ddf1404-0e71-447c-ba86-6d730ff54120" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:22:13 np0005593232 nova_compute[250269]: 2026-01-23 10:22:13.030 250273 DEBUG nova.compute.manager [None req-d0b5880a-095b-4276-b524-a6c29f7548e3 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:22:13 np0005593232 nova_compute[250269]: 2026-01-23 10:22:13.034 250273 DEBUG nova.compute.manager [None req-d0b5880a-095b-4276-b524-a6c29f7548e3 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Jan 23 05:22:13 np0005593232 nova_compute[250269]: 2026-01-23 10:22:13.035 250273 DEBUG nova.objects.instance [None req-d0b5880a-095b-4276-b524-a6c29f7548e3 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lazy-loading 'flavor' on Instance uuid 6ddf1404-0e71-447c-ba86-6d730ff54120 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:22:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:13.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:13 np0005593232 nova_compute[250269]: 2026-01-23 10:22:13.085 250273 DEBUG nova.virt.libvirt.driver [None req-d0b5880a-095b-4276-b524-a6c29f7548e3 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 23 05:22:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:13.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2782: 321 pgs: 321 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 35 KiB/s wr, 149 op/s
Jan 23 05:22:13 np0005593232 nova_compute[250269]: 2026-01-23 10:22:13.779 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Jan 23 05:22:13 np0005593232 nova_compute[250269]: 2026-01-23 10:22:13.962 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Jan 23 05:22:14 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Jan 23 05:22:14 np0005593232 podman[355383]: 2026-01-23 10:22:14.448031041 +0000 UTC m=+0.098869941 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 23 05:22:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:15.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:22:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:15.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:22:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2784: 321 pgs: 321 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 4.6 MiB/s rd, 50 KiB/s wr, 181 op/s
Jan 23 05:22:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:22:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:17.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:22:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.268 250273 DEBUG oslo_concurrency.lockutils [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Acquiring lock "4b1c1eac-7834-4d9a-92c7-304ff03a42a0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.269 250273 DEBUG oslo_concurrency.lockutils [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Lock "4b1c1eac-7834-4d9a-92c7-304ff03a42a0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.269 250273 DEBUG oslo_concurrency.lockutils [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Acquiring lock "4b1c1eac-7834-4d9a-92c7-304ff03a42a0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.270 250273 DEBUG oslo_concurrency.lockutils [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Lock "4b1c1eac-7834-4d9a-92c7-304ff03a42a0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.270 250273 DEBUG oslo_concurrency.lockutils [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Lock "4b1c1eac-7834-4d9a-92c7-304ff03a42a0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.273 250273 INFO nova.compute.manager [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Terminating instance#033[00m
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.274 250273 DEBUG nova.compute.manager [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:22:17 np0005593232 kernel: tap138b230d-d4 (unregistering): left promiscuous mode
Jan 23 05:22:17 np0005593232 NetworkManager[49057]: <info>  [1769163737.3193] device (tap138b230d-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.333 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:17 np0005593232 ovn_controller[151001]: 2026-01-23T10:22:17Z|00615|binding|INFO|Releasing lport 138b230d-d47c-4ec5-9572-c92a07aacb0b from this chassis (sb_readonly=0)
Jan 23 05:22:17 np0005593232 ovn_controller[151001]: 2026-01-23T10:22:17Z|00616|binding|INFO|Setting lport 138b230d-d47c-4ec5-9572-c92a07aacb0b down in Southbound
Jan 23 05:22:17 np0005593232 ovn_controller[151001]: 2026-01-23T10:22:17Z|00617|binding|INFO|Removing iface tap138b230d-d4 ovn-installed in OVS
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.336 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:17.349 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d7:e0:96 10.100.0.5'], port_security=['fa:16:3e:d7:e0:96 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '4b1c1eac-7834-4d9a-92c7-304ff03a42a0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-79dd8398-301e-4e13-ab64-2cdbc4503040', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a6757124292b484abb7a27e68cab3408', 'neutron:revision_number': '4', 'neutron:security_group_ids': '80da2f70-1021-49f4-a1b9-2db158ba45a9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c646f4f2-19b5-463c-b4ed-89aab42b4633, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=138b230d-d47c-4ec5-9572-c92a07aacb0b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:22:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:17.352 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 138b230d-d47c-4ec5-9572-c92a07aacb0b in datapath 79dd8398-301e-4e13-ab64-2cdbc4503040 unbound from our chassis#033[00m
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.352 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:17.355 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 79dd8398-301e-4e13-ab64-2cdbc4503040, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:22:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:17.357 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8ce3e6-b24b-4032-ad67-e5b423ccc78a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:17.358 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-79dd8398-301e-4e13-ab64-2cdbc4503040 namespace which is not needed anymore#033[00m
Jan 23 05:22:17 np0005593232 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000009c.scope: Deactivated successfully.
Jan 23 05:22:17 np0005593232 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000009c.scope: Consumed 10.799s CPU time.
Jan 23 05:22:17 np0005593232 systemd-machined[215836]: Machine qemu-69-instance-0000009c terminated.
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.507 250273 INFO nova.virt.libvirt.driver [-] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Instance destroyed successfully.#033[00m
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.508 250273 DEBUG nova.objects.instance [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Lazy-loading 'resources' on Instance uuid 4b1c1eac-7834-4d9a-92c7-304ff03a42a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:22:17 np0005593232 neutron-haproxy-ovnmeta-79dd8398-301e-4e13-ab64-2cdbc4503040[355212]: [NOTICE]   (355216) : haproxy version is 2.8.14-c23fe91
Jan 23 05:22:17 np0005593232 neutron-haproxy-ovnmeta-79dd8398-301e-4e13-ab64-2cdbc4503040[355212]: [NOTICE]   (355216) : path to executable is /usr/sbin/haproxy
Jan 23 05:22:17 np0005593232 neutron-haproxy-ovnmeta-79dd8398-301e-4e13-ab64-2cdbc4503040[355212]: [WARNING]  (355216) : Exiting Master process...
Jan 23 05:22:17 np0005593232 neutron-haproxy-ovnmeta-79dd8398-301e-4e13-ab64-2cdbc4503040[355212]: [ALERT]    (355216) : Current worker (355218) exited with code 143 (Terminated)
Jan 23 05:22:17 np0005593232 neutron-haproxy-ovnmeta-79dd8398-301e-4e13-ab64-2cdbc4503040[355212]: [WARNING]  (355216) : All workers exited. Exiting... (0)
Jan 23 05:22:17 np0005593232 systemd[1]: libpod-383bd7aaafad056df7e72c26ba8aeff7f5660a8cf6f2b8b11e102eb268989c35.scope: Deactivated successfully.
Jan 23 05:22:17 np0005593232 podman[355426]: 2026-01-23 10:22:17.531367122 +0000 UTC m=+0.062860678 container died 383bd7aaafad056df7e72c26ba8aeff7f5660a8cf6f2b8b11e102eb268989c35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79dd8398-301e-4e13-ab64-2cdbc4503040, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.543 250273 DEBUG nova.virt.libvirt.vif [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:21:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-1582557665',display_name='tempest-ServerMetadataTestJSON-server-1582557665',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-1582557665',id=156,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:22:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a6757124292b484abb7a27e68cab3408',ramdisk_id='',reservation_id='r-1bmpdmwj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-1422902891',owner_user_name='tempest-ServerMetadataTestJSON-1422902891-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:22:16Z,user_data=None,user_id='558050347cf24d20a73a9a6d08d4c242',uuid=4b1c1eac-7834-4d9a-92c7-304ff03a42a0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "138b230d-d47c-4ec5-9572-c92a07aacb0b", "address": "fa:16:3e:d7:e0:96", "network": {"id": "79dd8398-301e-4e13-ab64-2cdbc4503040", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1285411968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6757124292b484abb7a27e68cab3408", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138b230d-d4", "ovs_interfaceid": "138b230d-d47c-4ec5-9572-c92a07aacb0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.543 250273 DEBUG nova.network.os_vif_util [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Converting VIF {"id": "138b230d-d47c-4ec5-9572-c92a07aacb0b", "address": "fa:16:3e:d7:e0:96", "network": {"id": "79dd8398-301e-4e13-ab64-2cdbc4503040", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1285411968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a6757124292b484abb7a27e68cab3408", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap138b230d-d4", "ovs_interfaceid": "138b230d-d47c-4ec5-9572-c92a07aacb0b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.545 250273 DEBUG nova.network.os_vif_util [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d7:e0:96,bridge_name='br-int',has_traffic_filtering=True,id=138b230d-d47c-4ec5-9572-c92a07aacb0b,network=Network(79dd8398-301e-4e13-ab64-2cdbc4503040),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap138b230d-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.545 250273 DEBUG os_vif [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:e0:96,bridge_name='br-int',has_traffic_filtering=True,id=138b230d-d47c-4ec5-9572-c92a07aacb0b,network=Network(79dd8398-301e-4e13-ab64-2cdbc4503040),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap138b230d-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.547 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.548 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap138b230d-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:22:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-383bd7aaafad056df7e72c26ba8aeff7f5660a8cf6f2b8b11e102eb268989c35-userdata-shm.mount: Deactivated successfully.
Jan 23 05:22:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:17.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.579 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay-53016305e880af240a5c81fec9f621e6b0a5dfdad2011671c49b131531f48af8-merged.mount: Deactivated successfully.
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.583 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.586 250273 INFO os_vif [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d7:e0:96,bridge_name='br-int',has_traffic_filtering=True,id=138b230d-d47c-4ec5-9572-c92a07aacb0b,network=Network(79dd8398-301e-4e13-ab64-2cdbc4503040),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap138b230d-d4')#033[00m
Jan 23 05:22:17 np0005593232 podman[355426]: 2026-01-23 10:22:17.595617198 +0000 UTC m=+0.127110754 container cleanup 383bd7aaafad056df7e72c26ba8aeff7f5660a8cf6f2b8b11e102eb268989c35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79dd8398-301e-4e13-ab64-2cdbc4503040, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:22:17 np0005593232 systemd[1]: libpod-conmon-383bd7aaafad056df7e72c26ba8aeff7f5660a8cf6f2b8b11e102eb268989c35.scope: Deactivated successfully.
Jan 23 05:22:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2785: 321 pgs: 321 active+clean; 313 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 21 KiB/s wr, 152 op/s
Jan 23 05:22:17 np0005593232 podman[355482]: 2026-01-23 10:22:17.663946801 +0000 UTC m=+0.042670434 container remove 383bd7aaafad056df7e72c26ba8aeff7f5660a8cf6f2b8b11e102eb268989c35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-79dd8398-301e-4e13-ab64-2cdbc4503040, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 05:22:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:17.671 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[28a7f96d-42a8-4c1c-828e-ae7ce41186fe]: (4, ('Fri Jan 23 10:22:17 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-79dd8398-301e-4e13-ab64-2cdbc4503040 (383bd7aaafad056df7e72c26ba8aeff7f5660a8cf6f2b8b11e102eb268989c35)\n383bd7aaafad056df7e72c26ba8aeff7f5660a8cf6f2b8b11e102eb268989c35\nFri Jan 23 10:22:17 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-79dd8398-301e-4e13-ab64-2cdbc4503040 (383bd7aaafad056df7e72c26ba8aeff7f5660a8cf6f2b8b11e102eb268989c35)\n383bd7aaafad056df7e72c26ba8aeff7f5660a8cf6f2b8b11e102eb268989c35\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:17.673 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[035c5686-0b4d-44d4-987c-5162cdc7f013]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:17.674 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79dd8398-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.676 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:17 np0005593232 kernel: tap79dd8398-30: left promiscuous mode
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.678 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:17.681 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c915cef1-3eb5-46fe-b16e-6bda2218841e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.690 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:17.701 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[afd1ee37-6806-419a-8172-3a40455f6fcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:17.702 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8c625111-6a9e-417e-aa27-83d5472213dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:17.719 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a8abdf8b-2fcb-4372-b096-b6c1d102fcaf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763868, 'reachable_time': 44540, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355498, 'error': None, 'target': 'ovnmeta-79dd8398-301e-4e13-ab64-2cdbc4503040', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:17.722 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-79dd8398-301e-4e13-ab64-2cdbc4503040 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:22:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:17.722 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[17bd8998-ea8b-4110-b79e-2cbf5a006150]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:17 np0005593232 systemd[1]: run-netns-ovnmeta\x2d79dd8398\x2d301e\x2d4e13\x2dab64\x2d2cdbc4503040.mount: Deactivated successfully.
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.993 250273 DEBUG nova.compute.manager [req-dcfbd9d6-1c56-4ab1-9598-57067a123c2a req-5a2ef63c-71dc-40fa-9739-98690d9cb6e1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Received event network-vif-unplugged-138b230d-d47c-4ec5-9572-c92a07aacb0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.993 250273 DEBUG oslo_concurrency.lockutils [req-dcfbd9d6-1c56-4ab1-9598-57067a123c2a req-5a2ef63c-71dc-40fa-9739-98690d9cb6e1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "4b1c1eac-7834-4d9a-92c7-304ff03a42a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.994 250273 DEBUG oslo_concurrency.lockutils [req-dcfbd9d6-1c56-4ab1-9598-57067a123c2a req-5a2ef63c-71dc-40fa-9739-98690d9cb6e1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "4b1c1eac-7834-4d9a-92c7-304ff03a42a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.994 250273 DEBUG oslo_concurrency.lockutils [req-dcfbd9d6-1c56-4ab1-9598-57067a123c2a req-5a2ef63c-71dc-40fa-9739-98690d9cb6e1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "4b1c1eac-7834-4d9a-92c7-304ff03a42a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.994 250273 DEBUG nova.compute.manager [req-dcfbd9d6-1c56-4ab1-9598-57067a123c2a req-5a2ef63c-71dc-40fa-9739-98690d9cb6e1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] No waiting events found dispatching network-vif-unplugged-138b230d-d47c-4ec5-9572-c92a07aacb0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:22:17 np0005593232 nova_compute[250269]: 2026-01-23 10:22:17.994 250273 DEBUG nova.compute.manager [req-dcfbd9d6-1c56-4ab1-9598-57067a123c2a req-5a2ef63c-71dc-40fa-9739-98690d9cb6e1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Received event network-vif-unplugged-138b230d-d47c-4ec5-9572-c92a07aacb0b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:22:18 np0005593232 nova_compute[250269]: 2026-01-23 10:22:18.314 250273 INFO nova.virt.libvirt.driver [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Deleting instance files /var/lib/nova/instances/4b1c1eac-7834-4d9a-92c7-304ff03a42a0_del#033[00m
Jan 23 05:22:18 np0005593232 nova_compute[250269]: 2026-01-23 10:22:18.315 250273 INFO nova.virt.libvirt.driver [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Deletion of /var/lib/nova/instances/4b1c1eac-7834-4d9a-92c7-304ff03a42a0_del complete#033[00m
Jan 23 05:22:18 np0005593232 nova_compute[250269]: 2026-01-23 10:22:18.432 250273 INFO nova.compute.manager [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Took 1.16 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:22:18 np0005593232 nova_compute[250269]: 2026-01-23 10:22:18.433 250273 DEBUG oslo.service.loopingcall [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:22:18 np0005593232 nova_compute[250269]: 2026-01-23 10:22:18.433 250273 DEBUG nova.compute.manager [-] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:22:18 np0005593232 nova_compute[250269]: 2026-01-23 10:22:18.433 250273 DEBUG nova.network.neutron [-] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:22:18 np0005593232 nova_compute[250269]: 2026-01-23 10:22:18.998 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:19 np0005593232 ovn_controller[151001]: 2026-01-23T10:22:19Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c1:4f:53 10.100.0.7
Jan 23 05:22:19 np0005593232 ovn_controller[151001]: 2026-01-23T10:22:19Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c1:4f:53 10.100.0.7
Jan 23 05:22:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:19.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:19.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2786: 321 pgs: 321 active+clean; 297 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 981 KiB/s wr, 128 op/s
Jan 23 05:22:20 np0005593232 nova_compute[250269]: 2026-01-23 10:22:20.219 250273 DEBUG nova.compute.manager [req-b23b9628-487f-4132-85c1-6f1506dab225 req-13680c5d-888e-4159-9c86-3d565cd5f9fc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Received event network-vif-plugged-138b230d-d47c-4ec5-9572-c92a07aacb0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:22:20 np0005593232 nova_compute[250269]: 2026-01-23 10:22:20.219 250273 DEBUG oslo_concurrency.lockutils [req-b23b9628-487f-4132-85c1-6f1506dab225 req-13680c5d-888e-4159-9c86-3d565cd5f9fc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "4b1c1eac-7834-4d9a-92c7-304ff03a42a0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:22:20 np0005593232 nova_compute[250269]: 2026-01-23 10:22:20.219 250273 DEBUG oslo_concurrency.lockutils [req-b23b9628-487f-4132-85c1-6f1506dab225 req-13680c5d-888e-4159-9c86-3d565cd5f9fc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "4b1c1eac-7834-4d9a-92c7-304ff03a42a0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:22:20 np0005593232 nova_compute[250269]: 2026-01-23 10:22:20.219 250273 DEBUG oslo_concurrency.lockutils [req-b23b9628-487f-4132-85c1-6f1506dab225 req-13680c5d-888e-4159-9c86-3d565cd5f9fc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "4b1c1eac-7834-4d9a-92c7-304ff03a42a0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:22:20 np0005593232 nova_compute[250269]: 2026-01-23 10:22:20.219 250273 DEBUG nova.compute.manager [req-b23b9628-487f-4132-85c1-6f1506dab225 req-13680c5d-888e-4159-9c86-3d565cd5f9fc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] No waiting events found dispatching network-vif-plugged-138b230d-d47c-4ec5-9572-c92a07aacb0b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:22:20 np0005593232 nova_compute[250269]: 2026-01-23 10:22:20.220 250273 WARNING nova.compute.manager [req-b23b9628-487f-4132-85c1-6f1506dab225 req-13680c5d-888e-4159-9c86-3d565cd5f9fc 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Received unexpected event network-vif-plugged-138b230d-d47c-4ec5-9572-c92a07aacb0b for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:22:21 np0005593232 nova_compute[250269]: 2026-01-23 10:22:21.004 250273 DEBUG nova.network.neutron [-] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:22:21 np0005593232 nova_compute[250269]: 2026-01-23 10:22:21.038 250273 INFO nova.compute.manager [-] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Took 2.61 seconds to deallocate network for instance.#033[00m
Jan 23 05:22:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:21.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:21 np0005593232 nova_compute[250269]: 2026-01-23 10:22:21.163 250273 DEBUG oslo_concurrency.lockutils [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:22:21 np0005593232 nova_compute[250269]: 2026-01-23 10:22:21.164 250273 DEBUG oslo_concurrency.lockutils [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:22:21 np0005593232 nova_compute[250269]: 2026-01-23 10:22:21.340 250273 DEBUG oslo_concurrency.processutils [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:22:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:21.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2787: 321 pgs: 321 active+clean; 297 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 981 KiB/s wr, 128 op/s
Jan 23 05:22:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:22:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1119136550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:22:21 np0005593232 nova_compute[250269]: 2026-01-23 10:22:21.863 250273 DEBUG oslo_concurrency.processutils [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:22:21 np0005593232 nova_compute[250269]: 2026-01-23 10:22:21.871 250273 DEBUG nova.compute.provider_tree [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:22:21 np0005593232 nova_compute[250269]: 2026-01-23 10:22:21.896 250273 DEBUG nova.scheduler.client.report [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:22:21 np0005593232 nova_compute[250269]: 2026-01-23 10:22:21.946 250273 DEBUG oslo_concurrency.lockutils [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:22:22 np0005593232 nova_compute[250269]: 2026-01-23 10:22:22.002 250273 INFO nova.scheduler.client.report [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Deleted allocations for instance 4b1c1eac-7834-4d9a-92c7-304ff03a42a0#033[00m
Jan 23 05:22:22 np0005593232 nova_compute[250269]: 2026-01-23 10:22:22.116 250273 DEBUG oslo_concurrency.lockutils [None req-88467e22-6292-41b7-af06-e5c78063b5ce 558050347cf24d20a73a9a6d08d4c242 a6757124292b484abb7a27e68cab3408 - - default default] Lock "4b1c1eac-7834-4d9a-92c7-304ff03a42a0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:22:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:22:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Jan 23 05:22:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Jan 23 05:22:22 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Jan 23 05:22:22 np0005593232 nova_compute[250269]: 2026-01-23 10:22:22.473 250273 DEBUG nova.compute.manager [req-d9604549-c9d3-441f-9c4a-ae6daa290910 req-9d67cfe5-f303-4af0-a597-e7ffa3592404 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Received event network-vif-deleted-138b230d-d47c-4ec5-9572-c92a07aacb0b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:22:22 np0005593232 nova_compute[250269]: 2026-01-23 10:22:22.580 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:23.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:23 np0005593232 nova_compute[250269]: 2026-01-23 10:22:23.132 250273 DEBUG nova.virt.libvirt.driver [None req-d0b5880a-095b-4276-b524-a6c29f7548e3 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 23 05:22:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:22:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:23.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:22:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2789: 321 pgs: 321 active+clean; 298 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 426 KiB/s rd, 2.7 MiB/s wr, 125 op/s
Jan 23 05:22:24 np0005593232 nova_compute[250269]: 2026-01-23 10:22:24.012 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:25.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:25 np0005593232 kernel: tapeeb6e4f9-f6 (unregistering): left promiscuous mode
Jan 23 05:22:25 np0005593232 NetworkManager[49057]: <info>  [1769163745.4416] device (tapeeb6e4f9-f6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:22:25 np0005593232 ovn_controller[151001]: 2026-01-23T10:22:25Z|00618|binding|INFO|Releasing lport eeb6e4f9-f657-4553-8f43-ca56cd0a1034 from this chassis (sb_readonly=0)
Jan 23 05:22:25 np0005593232 ovn_controller[151001]: 2026-01-23T10:22:25Z|00619|binding|INFO|Setting lport eeb6e4f9-f657-4553-8f43-ca56cd0a1034 down in Southbound
Jan 23 05:22:25 np0005593232 nova_compute[250269]: 2026-01-23 10:22:25.450 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:25 np0005593232 ovn_controller[151001]: 2026-01-23T10:22:25Z|00620|binding|INFO|Removing iface tapeeb6e4f9-f6 ovn-installed in OVS
Jan 23 05:22:25 np0005593232 nova_compute[250269]: 2026-01-23 10:22:25.454 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:25.459 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:4f:53 10.100.0.7'], port_security=['fa:16:3e:c1:4f:53 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6ddf1404-0e71-447c-ba86-6d730ff54120', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c59351a1b59c4cc9ad389dff900935f2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd59f1dd0-018a-40d5-b9a0-54c6c1f9d925', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c808b115-ccf1-41c4-acea-daabae8abf5b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=eeb6e4f9-f657-4553-8f43-ca56cd0a1034) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:22:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:25.460 161902 INFO neutron.agent.ovn.metadata.agent [-] Port eeb6e4f9-f657-4553-8f43-ca56cd0a1034 in datapath 43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 unbound from our chassis#033[00m
Jan 23 05:22:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:25.461 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:22:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:25.462 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4142139f-cd0d-4053-aeeb-2afe13908a12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:25.463 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 namespace which is not needed anymore#033[00m
Jan 23 05:22:25 np0005593232 nova_compute[250269]: 2026-01-23 10:22:25.470 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:25 np0005593232 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000009d.scope: Deactivated successfully.
Jan 23 05:22:25 np0005593232 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000009d.scope: Consumed 13.546s CPU time.
Jan 23 05:22:25 np0005593232 systemd-machined[215836]: Machine qemu-70-instance-0000009d terminated.
Jan 23 05:22:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:22:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:25.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:22:25 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[355288]: [NOTICE]   (355292) : haproxy version is 2.8.14-c23fe91
Jan 23 05:22:25 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[355288]: [NOTICE]   (355292) : path to executable is /usr/sbin/haproxy
Jan 23 05:22:25 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[355288]: [WARNING]  (355292) : Exiting Master process...
Jan 23 05:22:25 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[355288]: [ALERT]    (355292) : Current worker (355294) exited with code 143 (Terminated)
Jan 23 05:22:25 np0005593232 neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7[355288]: [WARNING]  (355292) : All workers exited. Exiting... (0)
Jan 23 05:22:25 np0005593232 systemd[1]: libpod-277caa91008f1bb460dec29f0a245a9e00a92fb8bb47e3ed8b53d2272ea41d43.scope: Deactivated successfully.
Jan 23 05:22:25 np0005593232 podman[355548]: 2026-01-23 10:22:25.629339314 +0000 UTC m=+0.061927322 container died 277caa91008f1bb460dec29f0a245a9e00a92fb8bb47e3ed8b53d2272ea41d43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 23 05:22:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2790: 321 pgs: 321 active+clean; 300 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.6 MiB/s wr, 176 op/s
Jan 23 05:22:25 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-277caa91008f1bb460dec29f0a245a9e00a92fb8bb47e3ed8b53d2272ea41d43-userdata-shm.mount: Deactivated successfully.
Jan 23 05:22:25 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ea72b2c3c7b53df4164cd19130c36e1611f2eb5fe78cc2a87d5e51c29aa43b6d-merged.mount: Deactivated successfully.
Jan 23 05:22:25 np0005593232 podman[355548]: 2026-01-23 10:22:25.676743741 +0000 UTC m=+0.109331739 container cleanup 277caa91008f1bb460dec29f0a245a9e00a92fb8bb47e3ed8b53d2272ea41d43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 23 05:22:25 np0005593232 systemd[1]: libpod-conmon-277caa91008f1bb460dec29f0a245a9e00a92fb8bb47e3ed8b53d2272ea41d43.scope: Deactivated successfully.
Jan 23 05:22:25 np0005593232 podman[355582]: 2026-01-23 10:22:25.736392187 +0000 UTC m=+0.040424150 container remove 277caa91008f1bb460dec29f0a245a9e00a92fb8bb47e3ed8b53d2272ea41d43 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:22:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:25.742 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[cf784803-08e6-4332-a5ba-9ee1f5ceddbb]: (4, ('Fri Jan 23 10:22:25 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 (277caa91008f1bb460dec29f0a245a9e00a92fb8bb47e3ed8b53d2272ea41d43)\n277caa91008f1bb460dec29f0a245a9e00a92fb8bb47e3ed8b53d2272ea41d43\nFri Jan 23 10:22:25 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 (277caa91008f1bb460dec29f0a245a9e00a92fb8bb47e3ed8b53d2272ea41d43)\n277caa91008f1bb460dec29f0a245a9e00a92fb8bb47e3ed8b53d2272ea41d43\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:25.743 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5c03c975-9134-44e6-90a2-64e0ab8ec1d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:25.744 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43bdb40a-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:22:25 np0005593232 nova_compute[250269]: 2026-01-23 10:22:25.746 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:25 np0005593232 kernel: tap43bdb40a-e0: left promiscuous mode
Jan 23 05:22:25 np0005593232 nova_compute[250269]: 2026-01-23 10:22:25.765 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:25.767 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b5346d71-5067-4da5-b572-1a61f2afa4f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:25.786 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5ba0a25f-962d-4541-9f57-a23aff63d915]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:25.787 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e6945042-5645-44d8-a09f-d461f207fecb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:25.802 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b853e495-3ed9-49b4-a22a-180fa8644b4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 763982, 'reachable_time': 39626, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355607, 'error': None, 'target': 'ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:25 np0005593232 systemd[1]: run-netns-ovnmeta\x2d43bdb40a\x2deff5\x2d45cd\x2d9cb3\x2dcfdf465ad1f7.mount: Deactivated successfully.
Jan 23 05:22:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:25.806 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:22:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:25.806 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[0d4e62cb-0725-4dbe-b6a2-4adbcb4b1dd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:22:26 np0005593232 nova_compute[250269]: 2026-01-23 10:22:26.147 250273 INFO nova.virt.libvirt.driver [None req-d0b5880a-095b-4276-b524-a6c29f7548e3 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Instance shutdown successfully after 13 seconds.#033[00m
Jan 23 05:22:26 np0005593232 nova_compute[250269]: 2026-01-23 10:22:26.153 250273 INFO nova.virt.libvirt.driver [-] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Instance destroyed successfully.#033[00m
Jan 23 05:22:26 np0005593232 nova_compute[250269]: 2026-01-23 10:22:26.154 250273 DEBUG nova.objects.instance [None req-d0b5880a-095b-4276-b524-a6c29f7548e3 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lazy-loading 'numa_topology' on Instance uuid 6ddf1404-0e71-447c-ba86-6d730ff54120 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:22:26 np0005593232 nova_compute[250269]: 2026-01-23 10:22:26.207 250273 DEBUG nova.compute.manager [None req-d0b5880a-095b-4276-b524-a6c29f7548e3 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:22:26 np0005593232 nova_compute[250269]: 2026-01-23 10:22:26.283 250273 DEBUG oslo_concurrency.lockutils [None req-d0b5880a-095b-4276-b524-a6c29f7548e3 ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "6ddf1404-0e71-447c-ba86-6d730ff54120" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:22:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:27.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:22:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:22:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:27.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:22:27 np0005593232 nova_compute[250269]: 2026-01-23 10:22:27.583 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2791: 321 pgs: 321 active+clean; 300 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 199 op/s
Jan 23 05:22:29 np0005593232 nova_compute[250269]: 2026-01-23 10:22:29.014 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:22:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:29.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:22:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:29.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:29 np0005593232 nova_compute[250269]: 2026-01-23 10:22:29.621 250273 DEBUG nova.compute.manager [req-18c6704c-1f49-421c-9bc8-a1f4000cf7fd req-39ae0ea4-f232-45c7-be37-ccc3d7f6dcce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Received event network-vif-unplugged-eeb6e4f9-f657-4553-8f43-ca56cd0a1034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:22:29 np0005593232 nova_compute[250269]: 2026-01-23 10:22:29.622 250273 DEBUG oslo_concurrency.lockutils [req-18c6704c-1f49-421c-9bc8-a1f4000cf7fd req-39ae0ea4-f232-45c7-be37-ccc3d7f6dcce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "6ddf1404-0e71-447c-ba86-6d730ff54120-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:22:29 np0005593232 nova_compute[250269]: 2026-01-23 10:22:29.622 250273 DEBUG oslo_concurrency.lockutils [req-18c6704c-1f49-421c-9bc8-a1f4000cf7fd req-39ae0ea4-f232-45c7-be37-ccc3d7f6dcce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6ddf1404-0e71-447c-ba86-6d730ff54120-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:22:29 np0005593232 nova_compute[250269]: 2026-01-23 10:22:29.623 250273 DEBUG oslo_concurrency.lockutils [req-18c6704c-1f49-421c-9bc8-a1f4000cf7fd req-39ae0ea4-f232-45c7-be37-ccc3d7f6dcce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6ddf1404-0e71-447c-ba86-6d730ff54120-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:22:29 np0005593232 nova_compute[250269]: 2026-01-23 10:22:29.623 250273 DEBUG nova.compute.manager [req-18c6704c-1f49-421c-9bc8-a1f4000cf7fd req-39ae0ea4-f232-45c7-be37-ccc3d7f6dcce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] No waiting events found dispatching network-vif-unplugged-eeb6e4f9-f657-4553-8f43-ca56cd0a1034 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:22:29 np0005593232 nova_compute[250269]: 2026-01-23 10:22:29.623 250273 WARNING nova.compute.manager [req-18c6704c-1f49-421c-9bc8-a1f4000cf7fd req-39ae0ea4-f232-45c7-be37-ccc3d7f6dcce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Received unexpected event network-vif-unplugged-eeb6e4f9-f657-4553-8f43-ca56cd0a1034 for instance with vm_state stopped and task_state None.#033[00m
Jan 23 05:22:29 np0005593232 nova_compute[250269]: 2026-01-23 10:22:29.624 250273 DEBUG nova.compute.manager [req-18c6704c-1f49-421c-9bc8-a1f4000cf7fd req-39ae0ea4-f232-45c7-be37-ccc3d7f6dcce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Received event network-vif-plugged-eeb6e4f9-f657-4553-8f43-ca56cd0a1034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:22:29 np0005593232 nova_compute[250269]: 2026-01-23 10:22:29.624 250273 DEBUG oslo_concurrency.lockutils [req-18c6704c-1f49-421c-9bc8-a1f4000cf7fd req-39ae0ea4-f232-45c7-be37-ccc3d7f6dcce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "6ddf1404-0e71-447c-ba86-6d730ff54120-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:22:29 np0005593232 nova_compute[250269]: 2026-01-23 10:22:29.624 250273 DEBUG oslo_concurrency.lockutils [req-18c6704c-1f49-421c-9bc8-a1f4000cf7fd req-39ae0ea4-f232-45c7-be37-ccc3d7f6dcce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6ddf1404-0e71-447c-ba86-6d730ff54120-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:22:29 np0005593232 nova_compute[250269]: 2026-01-23 10:22:29.625 250273 DEBUG oslo_concurrency.lockutils [req-18c6704c-1f49-421c-9bc8-a1f4000cf7fd req-39ae0ea4-f232-45c7-be37-ccc3d7f6dcce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "6ddf1404-0e71-447c-ba86-6d730ff54120-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:22:29 np0005593232 nova_compute[250269]: 2026-01-23 10:22:29.625 250273 DEBUG nova.compute.manager [req-18c6704c-1f49-421c-9bc8-a1f4000cf7fd req-39ae0ea4-f232-45c7-be37-ccc3d7f6dcce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] No waiting events found dispatching network-vif-plugged-eeb6e4f9-f657-4553-8f43-ca56cd0a1034 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:22:29 np0005593232 nova_compute[250269]: 2026-01-23 10:22:29.626 250273 WARNING nova.compute.manager [req-18c6704c-1f49-421c-9bc8-a1f4000cf7fd req-39ae0ea4-f232-45c7-be37-ccc3d7f6dcce 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Received unexpected event network-vif-plugged-eeb6e4f9-f657-4553-8f43-ca56cd0a1034 for instance with vm_state stopped and task_state None.#033[00m
Jan 23 05:22:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2792: 321 pgs: 321 active+clean; 300 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.6 MiB/s wr, 156 op/s
Jan 23 05:22:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:31.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:31.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2793: 321 pgs: 321 active+clean; 300 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 1.6 MiB/s wr, 156 op/s
Jan 23 05:22:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:22:32 np0005593232 nova_compute[250269]: 2026-01-23 10:22:32.504 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769163737.5030313, 4b1c1eac-7834-4d9a-92c7-304ff03a42a0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:22:32 np0005593232 nova_compute[250269]: 2026-01-23 10:22:32.504 250273 INFO nova.compute.manager [-] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:22:32 np0005593232 nova_compute[250269]: 2026-01-23 10:22:32.537 250273 DEBUG nova.compute.manager [None req-7525c53f-6dd9-411a-a8f2-acf6526af916 - - - - - -] [instance: 4b1c1eac-7834-4d9a-92c7-304ff03a42a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:22:32 np0005593232 nova_compute[250269]: 2026-01-23 10:22:32.586 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:33.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:33 np0005593232 nova_compute[250269]: 2026-01-23 10:22:33.508 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:22:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:33.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:22:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2794: 321 pgs: 321 active+clean; 300 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.4 MiB/s wr, 136 op/s
Jan 23 05:22:33 np0005593232 nova_compute[250269]: 2026-01-23 10:22:33.703 250273 DEBUG oslo_concurrency.lockutils [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "6ddf1404-0e71-447c-ba86-6d730ff54120" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:22:33 np0005593232 nova_compute[250269]: 2026-01-23 10:22:33.704 250273 DEBUG oslo_concurrency.lockutils [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "6ddf1404-0e71-447c-ba86-6d730ff54120" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:22:33 np0005593232 nova_compute[250269]: 2026-01-23 10:22:33.705 250273 DEBUG oslo_concurrency.lockutils [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "6ddf1404-0e71-447c-ba86-6d730ff54120-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:22:33 np0005593232 nova_compute[250269]: 2026-01-23 10:22:33.705 250273 DEBUG oslo_concurrency.lockutils [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "6ddf1404-0e71-447c-ba86-6d730ff54120-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:22:33 np0005593232 nova_compute[250269]: 2026-01-23 10:22:33.706 250273 DEBUG oslo_concurrency.lockutils [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "6ddf1404-0e71-447c-ba86-6d730ff54120-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:22:33 np0005593232 nova_compute[250269]: 2026-01-23 10:22:33.708 250273 INFO nova.compute.manager [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Terminating instance#033[00m
Jan 23 05:22:33 np0005593232 nova_compute[250269]: 2026-01-23 10:22:33.710 250273 DEBUG nova.compute.manager [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:22:33 np0005593232 nova_compute[250269]: 2026-01-23 10:22:33.720 250273 INFO nova.virt.libvirt.driver [-] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Instance destroyed successfully.#033[00m
Jan 23 05:22:33 np0005593232 nova_compute[250269]: 2026-01-23 10:22:33.720 250273 DEBUG nova.objects.instance [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lazy-loading 'resources' on Instance uuid 6ddf1404-0e71-447c-ba86-6d730ff54120 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:22:33 np0005593232 nova_compute[250269]: 2026-01-23 10:22:33.748 250273 DEBUG nova.virt.libvirt.vif [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:21:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-182217745',display_name='tempest-Íñstáñcé-969460751',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-182217745',id=157,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:22:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c59351a1b59c4cc9ad389dff900935f2',ramdisk_id='',reservation_id='r-0w9a5fem',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1611255243',owner_user_name='tempest-ServersTestJSON-1611255243-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:22:29Z,user_data=None,user_id='ec99ae7c69d0438280441e0434374cbf',uuid=6ddf1404-0e71-447c-ba86-6d730ff54120,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "eeb6e4f9-f657-4553-8f43-ca56cd0a1034", "address": "fa:16:3e:c1:4f:53", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeeb6e4f9-f6", "ovs_interfaceid": "eeb6e4f9-f657-4553-8f43-ca56cd0a1034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:22:33 np0005593232 nova_compute[250269]: 2026-01-23 10:22:33.748 250273 DEBUG nova.network.os_vif_util [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Converting VIF {"id": "eeb6e4f9-f657-4553-8f43-ca56cd0a1034", "address": "fa:16:3e:c1:4f:53", "network": {"id": "43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7", "bridge": "br-int", "label": "tempest-ServersTestJSON-244954383-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c59351a1b59c4cc9ad389dff900935f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeeb6e4f9-f6", "ovs_interfaceid": "eeb6e4f9-f657-4553-8f43-ca56cd0a1034", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:22:33 np0005593232 nova_compute[250269]: 2026-01-23 10:22:33.749 250273 DEBUG nova.network.os_vif_util [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:4f:53,bridge_name='br-int',has_traffic_filtering=True,id=eeb6e4f9-f657-4553-8f43-ca56cd0a1034,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeeb6e4f9-f6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:22:33 np0005593232 nova_compute[250269]: 2026-01-23 10:22:33.750 250273 DEBUG os_vif [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:4f:53,bridge_name='br-int',has_traffic_filtering=True,id=eeb6e4f9-f657-4553-8f43-ca56cd0a1034,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeeb6e4f9-f6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:22:33 np0005593232 nova_compute[250269]: 2026-01-23 10:22:33.752 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:33 np0005593232 nova_compute[250269]: 2026-01-23 10:22:33.752 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeeb6e4f9-f6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:22:33 np0005593232 nova_compute[250269]: 2026-01-23 10:22:33.759 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:22:33 np0005593232 nova_compute[250269]: 2026-01-23 10:22:33.764 250273 INFO os_vif [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:4f:53,bridge_name='br-int',has_traffic_filtering=True,id=eeb6e4f9-f657-4553-8f43-ca56cd0a1034,network=Network(43bdb40a-eff5-45cd-9cb3-cfdf465ad1f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeeb6e4f9-f6')#033[00m
Jan 23 05:22:34 np0005593232 nova_compute[250269]: 2026-01-23 10:22:34.016 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:34 np0005593232 nova_compute[250269]: 2026-01-23 10:22:34.232 250273 INFO nova.virt.libvirt.driver [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Deleting instance files /var/lib/nova/instances/6ddf1404-0e71-447c-ba86-6d730ff54120_del#033[00m
Jan 23 05:22:34 np0005593232 nova_compute[250269]: 2026-01-23 10:22:34.233 250273 INFO nova.virt.libvirt.driver [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Deletion of /var/lib/nova/instances/6ddf1404-0e71-447c-ba86-6d730ff54120_del complete#033[00m
Jan 23 05:22:34 np0005593232 nova_compute[250269]: 2026-01-23 10:22:34.348 250273 INFO nova.compute.manager [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Took 0.64 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:22:34 np0005593232 nova_compute[250269]: 2026-01-23 10:22:34.349 250273 DEBUG oslo.service.loopingcall [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:22:34 np0005593232 nova_compute[250269]: 2026-01-23 10:22:34.349 250273 DEBUG nova.compute.manager [-] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:22:34 np0005593232 nova_compute[250269]: 2026-01-23 10:22:34.350 250273 DEBUG nova.network.neutron [-] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:22:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:22:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:35.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:22:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:35.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2795: 321 pgs: 321 active+clean; 280 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 30 KiB/s wr, 105 op/s
Jan 23 05:22:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:22:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:37.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:22:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:22:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:22:37
Jan 23 05:22:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:22:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:22:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'images', 'vms', 'default.rgw.meta', 'backups', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'volumes']
Jan 23 05:22:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:22:37 np0005593232 nova_compute[250269]: 2026-01-23 10:22:37.561 250273 DEBUG nova.network.neutron [-] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:22:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:37.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:37 np0005593232 nova_compute[250269]: 2026-01-23 10:22:37.608 250273 INFO nova.compute.manager [-] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Took 3.26 seconds to deallocate network for instance.#033[00m
Jan 23 05:22:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:22:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:22:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:22:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:22:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:22:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:22:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2796: 321 pgs: 321 active+clean; 220 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 38 KiB/s wr, 93 op/s
Jan 23 05:22:37 np0005593232 nova_compute[250269]: 2026-01-23 10:22:37.703 250273 DEBUG oslo_concurrency.lockutils [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:22:37 np0005593232 nova_compute[250269]: 2026-01-23 10:22:37.703 250273 DEBUG oslo_concurrency.lockutils [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:22:37 np0005593232 nova_compute[250269]: 2026-01-23 10:22:37.763 250273 DEBUG nova.compute.manager [req-787ad1c4-18fe-4f32-9b11-f8116aeef6ba req-e1c2f3e4-8f06-445a-948e-093347a191a5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Received event network-vif-deleted-eeb6e4f9-f657-4553-8f43-ca56cd0a1034 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:22:37 np0005593232 nova_compute[250269]: 2026-01-23 10:22:37.825 250273 DEBUG oslo_concurrency.processutils [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:22:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:22:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3828199311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:22:38 np0005593232 nova_compute[250269]: 2026-01-23 10:22:38.254 250273 DEBUG oslo_concurrency.processutils [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:22:38 np0005593232 nova_compute[250269]: 2026-01-23 10:22:38.262 250273 DEBUG nova.compute.provider_tree [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:22:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:22:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:22:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:22:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:22:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:22:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:22:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:22:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:22:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:22:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:22:38 np0005593232 nova_compute[250269]: 2026-01-23 10:22:38.550 250273 DEBUG nova.scheduler.client.report [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:22:38 np0005593232 nova_compute[250269]: 2026-01-23 10:22:38.731 250273 DEBUG oslo_concurrency.lockutils [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:22:38 np0005593232 nova_compute[250269]: 2026-01-23 10:22:38.756 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:38 np0005593232 nova_compute[250269]: 2026-01-23 10:22:38.780 250273 INFO nova.scheduler.client.report [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Deleted allocations for instance 6ddf1404-0e71-447c-ba86-6d730ff54120#033[00m
Jan 23 05:22:38 np0005593232 nova_compute[250269]: 2026-01-23 10:22:38.932 250273 DEBUG oslo_concurrency.lockutils [None req-4dd97696-ee9e-4c3f-9349-3f7943a23fcc ec99ae7c69d0438280441e0434374cbf c59351a1b59c4cc9ad389dff900935f2 - - default default] Lock "6ddf1404-0e71-447c-ba86-6d730ff54120" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:22:39 np0005593232 nova_compute[250269]: 2026-01-23 10:22:39.027 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:39.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:39.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2797: 321 pgs: 321 active+clean; 233 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 555 KiB/s rd, 444 KiB/s wr, 85 op/s
Jan 23 05:22:40 np0005593232 nova_compute[250269]: 2026-01-23 10:22:40.687 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769163745.6861432, 6ddf1404-0e71-447c-ba86-6d730ff54120 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:22:40 np0005593232 nova_compute[250269]: 2026-01-23 10:22:40.688 250273 INFO nova.compute.manager [-] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:22:40 np0005593232 nova_compute[250269]: 2026-01-23 10:22:40.739 250273 DEBUG nova.compute.manager [None req-166b5a23-fdd9-462b-a9e8-3305d10cb93d - - - - - -] [instance: 6ddf1404-0e71-447c-ba86-6d730ff54120] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:22:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:41.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:41 np0005593232 podman[355709]: 2026-01-23 10:22:41.533830222 +0000 UTC m=+0.177827706 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 23 05:22:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:41.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2798: 321 pgs: 321 active+clean; 233 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 555 KiB/s rd, 444 KiB/s wr, 84 op/s
Jan 23 05:22:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:22:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:42.637 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:22:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:42.638 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:22:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:42.638 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:22:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:22:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:43.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:22:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:43.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2799: 321 pgs: 321 active+clean; 267 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 562 KiB/s rd, 1.8 MiB/s wr, 98 op/s
Jan 23 05:22:43 np0005593232 nova_compute[250269]: 2026-01-23 10:22:43.759 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:44 np0005593232 nova_compute[250269]: 2026-01-23 10:22:44.059 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:44 np0005593232 nova_compute[250269]: 2026-01-23 10:22:44.321 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:22:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:44.504 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:22:44 np0005593232 nova_compute[250269]: 2026-01-23 10:22:44.504 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:44.506 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:22:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:45.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:45 np0005593232 podman[355738]: 2026-01-23 10:22:45.407318935 +0000 UTC m=+0.066304726 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:22:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:22:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:45.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:22:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2800: 321 pgs: 321 active+clean; 240 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 569 KiB/s rd, 1.8 MiB/s wr, 108 op/s
Jan 23 05:22:46 np0005593232 nova_compute[250269]: 2026-01-23 10:22:46.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:22:46 np0005593232 nova_compute[250269]: 2026-01-23 10:22:46.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:22:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:22:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:47.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:22:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:22:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:22:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:22:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:22:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:22:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00486045679787828 of space, bias 1.0, pg target 1.458137039363484 quantized to 32 (current 32)
Jan 23 05:22:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:22:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.452610273590173e-07 of space, bias 1.0, pg target 0.00016303304718034617 quantized to 32 (current 32)
Jan 23 05:22:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:22:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:22:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:22:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8535323463381723 quantized to 32 (current 32)
Jan 23 05:22:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:22:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 05:22:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:22:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:22:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:22:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 05:22:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:22:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 05:22:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:22:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:22:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:22:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 05:22:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:47.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2801: 321 pgs: 321 active+clean; 189 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 413 KiB/s rd, 1.8 MiB/s wr, 103 op/s
Jan 23 05:22:48 np0005593232 nova_compute[250269]: 2026-01-23 10:22:48.764 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:49 np0005593232 nova_compute[250269]: 2026-01-23 10:22:49.094 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:49.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:49 np0005593232 nova_compute[250269]: 2026-01-23 10:22:49.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:22:49 np0005593232 nova_compute[250269]: 2026-01-23 10:22:49.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:22:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:49.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2802: 321 pgs: 321 active+clean; 189 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 658 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 23 05:22:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:22:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:51.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:22:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:51.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2803: 321 pgs: 321 active+clean; 189 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 565 KiB/s rd, 1.4 MiB/s wr, 69 op/s
Jan 23 05:22:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:22:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:53.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:22:53.509 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:22:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:53.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2804: 321 pgs: 321 active+clean; 189 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 116 op/s
Jan 23 05:22:53 np0005593232 nova_compute[250269]: 2026-01-23 10:22:53.769 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:54 np0005593232 nova_compute[250269]: 2026-01-23 10:22:54.097 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:54 np0005593232 nova_compute[250269]: 2026-01-23 10:22:54.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:22:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:22:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:55.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:22:55 np0005593232 nova_compute[250269]: 2026-01-23 10:22:55.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:22:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:55.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2805: 321 pgs: 321 active+clean; 189 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 103 op/s
Jan 23 05:22:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:57.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:22:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:57.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2806: 321 pgs: 321 active+clean; 189 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 25 KiB/s wr, 94 op/s
Jan 23 05:22:58 np0005593232 nova_compute[250269]: 2026-01-23 10:22:58.773 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:22:59.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:59 np0005593232 nova_compute[250269]: 2026-01-23 10:22:59.126 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:22:59 np0005593232 nova_compute[250269]: 2026-01-23 10:22:59.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:22:59 np0005593232 nova_compute[250269]: 2026-01-23 10:22:59.473 250273 DEBUG oslo_concurrency.lockutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Acquiring lock "600f3a38-db46-412a-86f4-3859d28c7e4f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:22:59 np0005593232 nova_compute[250269]: 2026-01-23 10:22:59.474 250273 DEBUG oslo_concurrency.lockutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Lock "600f3a38-db46-412a-86f4-3859d28c7e4f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:22:59 np0005593232 nova_compute[250269]: 2026-01-23 10:22:59.517 250273 DEBUG nova.compute.manager [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:22:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:22:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:22:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:22:59.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:22:59 np0005593232 nova_compute[250269]: 2026-01-23 10:22:59.641 250273 DEBUG oslo_concurrency.lockutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:22:59 np0005593232 nova_compute[250269]: 2026-01-23 10:22:59.642 250273 DEBUG oslo_concurrency.lockutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:22:59 np0005593232 nova_compute[250269]: 2026-01-23 10:22:59.656 250273 DEBUG nova.virt.hardware [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:22:59 np0005593232 nova_compute[250269]: 2026-01-23 10:22:59.657 250273 INFO nova.compute.claims [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:22:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2807: 321 pgs: 321 active+clean; 189 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 24 KiB/s wr, 72 op/s
Jan 23 05:22:59 np0005593232 nova_compute[250269]: 2026-01-23 10:22:59.809 250273 DEBUG oslo_concurrency.processutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.294 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.318 250273 DEBUG oslo_concurrency.processutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.326 250273 DEBUG nova.compute.provider_tree [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.334 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.346 250273 DEBUG nova.scheduler.client.report [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.441 250273 DEBUG oslo_concurrency.lockutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.443 250273 DEBUG nova.compute.manager [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.448 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.115s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.449 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.449 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.450 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.558 250273 DEBUG nova.compute.manager [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.560 250273 DEBUG nova.network.neutron [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.585 250273 INFO nova.virt.libvirt.driver [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.604 250273 DEBUG nova.compute.manager [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.738 250273 DEBUG nova.compute.manager [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.740 250273 DEBUG nova.virt.libvirt.driver [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.741 250273 INFO nova.virt.libvirt.driver [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Creating image(s)#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.783 250273 DEBUG nova.storage.rbd_utils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] rbd image 600f3a38-db46-412a-86f4-3859d28c7e4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.827 250273 DEBUG nova.storage.rbd_utils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] rbd image 600f3a38-db46-412a-86f4-3859d28c7e4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.868 250273 DEBUG nova.storage.rbd_utils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] rbd image 600f3a38-db46-412a-86f4-3859d28c7e4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.873 250273 DEBUG oslo_concurrency.processutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.924 250273 DEBUG nova.policy [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ac37a02e35ca45168d217a1444415569', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aa744bcb00cc46deb672354c357387d4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:23:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:23:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/903961690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.977 250273 DEBUG oslo_concurrency.processutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.979 250273 DEBUG oslo_concurrency.lockutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.980 250273 DEBUG oslo_concurrency.lockutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:23:00 np0005593232 nova_compute[250269]: 2026-01-23 10:23:00.980 250273 DEBUG oslo_concurrency.lockutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:23:01 np0005593232 nova_compute[250269]: 2026-01-23 10:23:01.026 250273 DEBUG nova.storage.rbd_utils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] rbd image 600f3a38-db46-412a-86f4-3859d28c7e4f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:23:01 np0005593232 nova_compute[250269]: 2026-01-23 10:23:01.033 250273 DEBUG oslo_concurrency.processutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 600f3a38-db46-412a-86f4-3859d28c7e4f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:23:01 np0005593232 nova_compute[250269]: 2026-01-23 10:23:01.089 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:23:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:23:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:01.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:23:01 np0005593232 nova_compute[250269]: 2026-01-23 10:23:01.420 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:23:01 np0005593232 nova_compute[250269]: 2026-01-23 10:23:01.423 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4220MB free_disk=20.921714782714844GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:23:01 np0005593232 nova_compute[250269]: 2026-01-23 10:23:01.423 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:23:01 np0005593232 nova_compute[250269]: 2026-01-23 10:23:01.424 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:23:01 np0005593232 nova_compute[250269]: 2026-01-23 10:23:01.501 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 600f3a38-db46-412a-86f4-3859d28c7e4f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:23:01 np0005593232 nova_compute[250269]: 2026-01-23 10:23:01.501 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:23:01 np0005593232 nova_compute[250269]: 2026-01-23 10:23:01.502 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:23:01 np0005593232 nova_compute[250269]: 2026-01-23 10:23:01.547 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:23:01 np0005593232 nova_compute[250269]: 2026-01-23 10:23:01.609 250273 DEBUG oslo_concurrency.processutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 600f3a38-db46-412a-86f4-3859d28c7e4f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:23:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:01.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2808: 321 pgs: 321 active+clean; 189 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 11 KiB/s wr, 51 op/s
Jan 23 05:23:01 np0005593232 nova_compute[250269]: 2026-01-23 10:23:01.719 250273 DEBUG nova.storage.rbd_utils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] resizing rbd image 600f3a38-db46-412a-86f4-3859d28c7e4f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 05:23:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:23:01 np0005593232 nova_compute[250269]: 2026-01-23 10:23:01.887 250273 DEBUG nova.objects.instance [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Lazy-loading 'migration_context' on Instance uuid 600f3a38-db46-412a-86f4-3859d28c7e4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:23:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:23:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:23:01 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Jan 23 05:23:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:23:01.902774) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:23:01 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Jan 23 05:23:01 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163781902899, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2123, "num_deletes": 252, "total_data_size": 3744170, "memory_usage": 3811264, "flush_reason": "Manual Compaction"}
Jan 23 05:23:01 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Jan 23 05:23:01 np0005593232 nova_compute[250269]: 2026-01-23 10:23:01.909 250273 DEBUG nova.virt.libvirt.driver [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 05:23:01 np0005593232 nova_compute[250269]: 2026-01-23 10:23:01.910 250273 DEBUG nova.virt.libvirt.driver [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Ensure instance console log exists: /var/lib/nova/instances/600f3a38-db46-412a-86f4-3859d28c7e4f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:23:01 np0005593232 nova_compute[250269]: 2026-01-23 10:23:01.911 250273 DEBUG oslo_concurrency.lockutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:23:01 np0005593232 nova_compute[250269]: 2026-01-23 10:23:01.913 250273 DEBUG oslo_concurrency.lockutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:23:01 np0005593232 nova_compute[250269]: 2026-01-23 10:23:01.914 250273 DEBUG oslo_concurrency.lockutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:23:01 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163781931178, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 3670488, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60541, "largest_seqno": 62663, "table_properties": {"data_size": 3660948, "index_size": 5969, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20183, "raw_average_key_size": 20, "raw_value_size": 3641701, "raw_average_value_size": 3700, "num_data_blocks": 260, "num_entries": 984, "num_filter_entries": 984, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769163574, "oldest_key_time": 1769163574, "file_creation_time": 1769163781, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:23:01 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 28447 microseconds, and 11422 cpu microseconds.
Jan 23 05:23:01 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:23:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:23:01.931228) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 3670488 bytes OK
Jan 23 05:23:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:23:01.931250) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Jan 23 05:23:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:23:01.942622) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Jan 23 05:23:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:23:01.942645) EVENT_LOG_v1 {"time_micros": 1769163781942638, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:23:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:23:01.942660) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:23:01 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 3735381, prev total WAL file size 3776857, number of live WAL files 2.
Jan 23 05:23:01 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:23:01 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:23:01.944056) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Jan 23 05:23:01 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:23:01 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(3584KB)], [140(10MB)]
Jan 23 05:23:01 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163781944297, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 14308105, "oldest_snapshot_seqno": -1}
Jan 23 05:23:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:23:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:23:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2727975651' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:23:02 np0005593232 nova_compute[250269]: 2026-01-23 10:23:02.065 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:23:02 np0005593232 nova_compute[250269]: 2026-01-23 10:23:02.072 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:23:02 np0005593232 nova_compute[250269]: 2026-01-23 10:23:02.095 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:23:02 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8823 keys, 12408109 bytes, temperature: kUnknown
Jan 23 05:23:02 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163782106086, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 12408109, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12350448, "index_size": 34508, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22085, "raw_key_size": 231549, "raw_average_key_size": 26, "raw_value_size": 12194806, "raw_average_value_size": 1382, "num_data_blocks": 1327, "num_entries": 8823, "num_filter_entries": 8823, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769163781, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:23:02 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:23:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:23:02.106498) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 12408109 bytes
Jan 23 05:23:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:23:02.108514) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 88.4 rd, 76.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 10.1 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 9346, records dropped: 523 output_compression: NoCompression
Jan 23 05:23:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:23:02.108542) EVENT_LOG_v1 {"time_micros": 1769163782108528, "job": 86, "event": "compaction_finished", "compaction_time_micros": 161901, "compaction_time_cpu_micros": 56409, "output_level": 6, "num_output_files": 1, "total_output_size": 12408109, "num_input_records": 9346, "num_output_records": 8823, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:23:02 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:23:02 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163782109523, "job": 86, "event": "table_file_deletion", "file_number": 142}
Jan 23 05:23:02 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:23:02 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163782111911, "job": 86, "event": "table_file_deletion", "file_number": 140}
Jan 23 05:23:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:23:01.943901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:23:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:23:02.112023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:23:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:23:02.112033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:23:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:23:02.112035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:23:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:23:02.112038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:23:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:23:02.112040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:23:02 np0005593232 nova_compute[250269]: 2026-01-23 10:23:02.132 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:23:02 np0005593232 nova_compute[250269]: 2026-01-23 10:23:02.132 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:23:02 np0005593232 nova_compute[250269]: 2026-01-23 10:23:02.305 250273 DEBUG nova.network.neutron [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Successfully created port: d74ba041-6a1d-406b-b08e-5f67e7c5def6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:23:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:23:02 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:23:02 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:23:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:23:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:23:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:23:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:23:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:23:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:23:03 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev dd5ee8c1-bd57-4b72-97c4-2581cabaab99 does not exist
Jan 23 05:23:03 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ed568d08-22cb-4e86-8e52-e68cb37fcf7d does not exist
Jan 23 05:23:03 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 3a495c44-3bce-4201-ab67-d4c625a3bbb7 does not exist
Jan 23 05:23:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:23:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:23:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:23:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:23:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:23:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:23:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:23:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:03.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:23:03 np0005593232 nova_compute[250269]: 2026-01-23 10:23:03.314 250273 DEBUG nova.network.neutron [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Successfully updated port: d74ba041-6a1d-406b-b08e-5f67e7c5def6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:23:03 np0005593232 nova_compute[250269]: 2026-01-23 10:23:03.342 250273 DEBUG oslo_concurrency.lockutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Acquiring lock "refresh_cache-600f3a38-db46-412a-86f4-3859d28c7e4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:23:03 np0005593232 nova_compute[250269]: 2026-01-23 10:23:03.343 250273 DEBUG oslo_concurrency.lockutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Acquired lock "refresh_cache-600f3a38-db46-412a-86f4-3859d28c7e4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:23:03 np0005593232 nova_compute[250269]: 2026-01-23 10:23:03.343 250273 DEBUG nova.network.neutron [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:23:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:03.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2809: 321 pgs: 321 active+clean; 234 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 122 op/s
Jan 23 05:23:03 np0005593232 nova_compute[250269]: 2026-01-23 10:23:03.776 250273 DEBUG nova.network.neutron [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:23:03 np0005593232 nova_compute[250269]: 2026-01-23 10:23:03.782 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:03 np0005593232 nova_compute[250269]: 2026-01-23 10:23:03.881 250273 DEBUG nova.compute.manager [req-1d2d3fbd-9640-4028-8b92-ccb97d969a8b req-e94c9ebf-0ec7-4979-af98-945988e80972 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Received event network-changed-d74ba041-6a1d-406b-b08e-5f67e7c5def6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:23:03 np0005593232 nova_compute[250269]: 2026-01-23 10:23:03.882 250273 DEBUG nova.compute.manager [req-1d2d3fbd-9640-4028-8b92-ccb97d969a8b req-e94c9ebf-0ec7-4979-af98-945988e80972 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Refreshing instance network info cache due to event network-changed-d74ba041-6a1d-406b-b08e-5f67e7c5def6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:23:03 np0005593232 nova_compute[250269]: 2026-01-23 10:23:03.882 250273 DEBUG oslo_concurrency.lockutils [req-1d2d3fbd-9640-4028-8b92-ccb97d969a8b req-e94c9ebf-0ec7-4979-af98-945988e80972 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-600f3a38-db46-412a-86f4-3859d28c7e4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:23:03 np0005593232 podman[356441]: 2026-01-23 10:23:03.90400466 +0000 UTC m=+0.051094893 container create 6238ab2a9e27c6b820629ee79ccebc07e9d3bce83eb78a74590f2aa69ad7f4a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hermann, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 05:23:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:23:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:23:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:23:03 np0005593232 systemd[1]: Started libpod-conmon-6238ab2a9e27c6b820629ee79ccebc07e9d3bce83eb78a74590f2aa69ad7f4a5.scope.
Jan 23 05:23:03 np0005593232 podman[356441]: 2026-01-23 10:23:03.880937724 +0000 UTC m=+0.028027987 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:23:03 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:23:04 np0005593232 podman[356441]: 2026-01-23 10:23:04.01269613 +0000 UTC m=+0.159786383 container init 6238ab2a9e27c6b820629ee79ccebc07e9d3bce83eb78a74590f2aa69ad7f4a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 05:23:04 np0005593232 podman[356441]: 2026-01-23 10:23:04.027698066 +0000 UTC m=+0.174788299 container start 6238ab2a9e27c6b820629ee79ccebc07e9d3bce83eb78a74590f2aa69ad7f4a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hermann, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:23:04 np0005593232 podman[356441]: 2026-01-23 10:23:04.031938717 +0000 UTC m=+0.179028970 container attach 6238ab2a9e27c6b820629ee79ccebc07e9d3bce83eb78a74590f2aa69ad7f4a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hermann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:23:04 np0005593232 ecstatic_hermann[356458]: 167 167
Jan 23 05:23:04 np0005593232 systemd[1]: libpod-6238ab2a9e27c6b820629ee79ccebc07e9d3bce83eb78a74590f2aa69ad7f4a5.scope: Deactivated successfully.
Jan 23 05:23:04 np0005593232 conmon[356458]: conmon 6238ab2a9e27c6b82062 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6238ab2a9e27c6b820629ee79ccebc07e9d3bce83eb78a74590f2aa69ad7f4a5.scope/container/memory.events
Jan 23 05:23:04 np0005593232 podman[356441]: 2026-01-23 10:23:04.042077765 +0000 UTC m=+0.189168038 container died 6238ab2a9e27c6b820629ee79ccebc07e9d3bce83eb78a74590f2aa69ad7f4a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hermann, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:23:04 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b871e936710b69f76be48d6660d747e9dc990c2d3aef2571919be6c3fa6c126b-merged.mount: Deactivated successfully.
Jan 23 05:23:04 np0005593232 podman[356441]: 2026-01-23 10:23:04.093148347 +0000 UTC m=+0.240238590 container remove 6238ab2a9e27c6b820629ee79ccebc07e9d3bce83eb78a74590f2aa69ad7f4a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_hermann, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 05:23:04 np0005593232 systemd[1]: libpod-conmon-6238ab2a9e27c6b820629ee79ccebc07e9d3bce83eb78a74590f2aa69ad7f4a5.scope: Deactivated successfully.
Jan 23 05:23:04 np0005593232 nova_compute[250269]: 2026-01-23 10:23:04.131 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:04 np0005593232 podman[356481]: 2026-01-23 10:23:04.322474676 +0000 UTC m=+0.061922481 container create 8f5df3550470f68469932590d660bf5ceb6cdc9f325bfacd88131dabac2a0dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:23:04 np0005593232 systemd[1]: Started libpod-conmon-8f5df3550470f68469932590d660bf5ceb6cdc9f325bfacd88131dabac2a0dc8.scope.
Jan 23 05:23:04 np0005593232 podman[356481]: 2026-01-23 10:23:04.294968144 +0000 UTC m=+0.034415959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:23:04 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:23:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b78f13c369c90a5629771e3db5bd16b2fc0cd08a20657418927a51217a9fdc32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:23:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b78f13c369c90a5629771e3db5bd16b2fc0cd08a20657418927a51217a9fdc32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:23:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b78f13c369c90a5629771e3db5bd16b2fc0cd08a20657418927a51217a9fdc32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:23:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b78f13c369c90a5629771e3db5bd16b2fc0cd08a20657418927a51217a9fdc32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:23:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b78f13c369c90a5629771e3db5bd16b2fc0cd08a20657418927a51217a9fdc32/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:23:04 np0005593232 podman[356481]: 2026-01-23 10:23:04.443712553 +0000 UTC m=+0.183160348 container init 8f5df3550470f68469932590d660bf5ceb6cdc9f325bfacd88131dabac2a0dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lamport, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:23:04 np0005593232 podman[356481]: 2026-01-23 10:23:04.450599548 +0000 UTC m=+0.190047333 container start 8f5df3550470f68469932590d660bf5ceb6cdc9f325bfacd88131dabac2a0dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lamport, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 05:23:04 np0005593232 podman[356481]: 2026-01-23 10:23:04.45418408 +0000 UTC m=+0.193631875 container attach 8f5df3550470f68469932590d660bf5ceb6cdc9f325bfacd88131dabac2a0dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:23:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:05.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.133 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.135 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.136 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.162 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.163 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:23:05 np0005593232 dazzling_lamport[356497]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:23:05 np0005593232 dazzling_lamport[356497]: --> relative data size: 1.0
Jan 23 05:23:05 np0005593232 dazzling_lamport[356497]: --> All data devices are unavailable
Jan 23 05:23:05 np0005593232 systemd[1]: libpod-8f5df3550470f68469932590d660bf5ceb6cdc9f325bfacd88131dabac2a0dc8.scope: Deactivated successfully.
Jan 23 05:23:05 np0005593232 podman[356481]: 2026-01-23 10:23:05.435291701 +0000 UTC m=+1.174739506 container died 8f5df3550470f68469932590d660bf5ceb6cdc9f325bfacd88131dabac2a0dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lamport, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:23:05 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b78f13c369c90a5629771e3db5bd16b2fc0cd08a20657418927a51217a9fdc32-merged.mount: Deactivated successfully.
Jan 23 05:23:05 np0005593232 podman[356481]: 2026-01-23 10:23:05.526606396 +0000 UTC m=+1.266054211 container remove 8f5df3550470f68469932590d660bf5ceb6cdc9f325bfacd88131dabac2a0dc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 23 05:23:05 np0005593232 systemd[1]: libpod-conmon-8f5df3550470f68469932590d660bf5ceb6cdc9f325bfacd88131dabac2a0dc8.scope: Deactivated successfully.
Jan 23 05:23:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:05.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2810: 321 pgs: 321 active+clean; 269 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 661 KiB/s rd, 3.9 MiB/s wr, 100 op/s
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.923 250273 DEBUG nova.network.neutron [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Updating instance_info_cache with network_info: [{"id": "d74ba041-6a1d-406b-b08e-5f67e7c5def6", "address": "fa:16:3e:34:45:84", "network": {"id": "ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1617422078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa744bcb00cc46deb672354c357387d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd74ba041-6a", "ovs_interfaceid": "d74ba041-6a1d-406b-b08e-5f67e7c5def6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.948 250273 DEBUG oslo_concurrency.lockutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Releasing lock "refresh_cache-600f3a38-db46-412a-86f4-3859d28c7e4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.949 250273 DEBUG nova.compute.manager [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Instance network_info: |[{"id": "d74ba041-6a1d-406b-b08e-5f67e7c5def6", "address": "fa:16:3e:34:45:84", "network": {"id": "ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1617422078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa744bcb00cc46deb672354c357387d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd74ba041-6a", "ovs_interfaceid": "d74ba041-6a1d-406b-b08e-5f67e7c5def6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.949 250273 DEBUG oslo_concurrency.lockutils [req-1d2d3fbd-9640-4028-8b92-ccb97d969a8b req-e94c9ebf-0ec7-4979-af98-945988e80972 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-600f3a38-db46-412a-86f4-3859d28c7e4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.949 250273 DEBUG nova.network.neutron [req-1d2d3fbd-9640-4028-8b92-ccb97d969a8b req-e94c9ebf-0ec7-4979-af98-945988e80972 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Refreshing network info cache for port d74ba041-6a1d-406b-b08e-5f67e7c5def6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.954 250273 DEBUG nova.virt.libvirt.driver [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Start _get_guest_xml network_info=[{"id": "d74ba041-6a1d-406b-b08e-5f67e7c5def6", "address": "fa:16:3e:34:45:84", "network": {"id": "ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1617422078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa744bcb00cc46deb672354c357387d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd74ba041-6a", "ovs_interfaceid": "d74ba041-6a1d-406b-b08e-5f67e7c5def6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.962 250273 WARNING nova.virt.libvirt.driver [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.969 250273 DEBUG nova.virt.libvirt.host [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.970 250273 DEBUG nova.virt.libvirt.host [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.979 250273 DEBUG nova.virt.libvirt.host [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.980 250273 DEBUG nova.virt.libvirt.host [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.981 250273 DEBUG nova.virt.libvirt.driver [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.982 250273 DEBUG nova.virt.hardware [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.982 250273 DEBUG nova.virt.hardware [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.983 250273 DEBUG nova.virt.hardware [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.983 250273 DEBUG nova.virt.hardware [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.983 250273 DEBUG nova.virt.hardware [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.984 250273 DEBUG nova.virt.hardware [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.984 250273 DEBUG nova.virt.hardware [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.984 250273 DEBUG nova.virt.hardware [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.985 250273 DEBUG nova.virt.hardware [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.985 250273 DEBUG nova.virt.hardware [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.985 250273 DEBUG nova.virt.hardware [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:23:05 np0005593232 nova_compute[250269]: 2026-01-23 10:23:05.989 250273 DEBUG oslo_concurrency.processutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:23:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:23:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3463192671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:23:06 np0005593232 podman[356684]: 2026-01-23 10:23:06.493421839 +0000 UTC m=+0.075156737 container create ab7b1a3a219617097df991b657b8ac0f2eee2d9f45ab417e24951db1e0cdca63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_williamson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 05:23:06 np0005593232 nova_compute[250269]: 2026-01-23 10:23:06.518 250273 DEBUG oslo_concurrency.processutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:23:06 np0005593232 podman[356684]: 2026-01-23 10:23:06.460064701 +0000 UTC m=+0.041799659 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:23:06 np0005593232 systemd[1]: Started libpod-conmon-ab7b1a3a219617097df991b657b8ac0f2eee2d9f45ab417e24951db1e0cdca63.scope.
Jan 23 05:23:06 np0005593232 nova_compute[250269]: 2026-01-23 10:23:06.568 250273 DEBUG nova.storage.rbd_utils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] rbd image 600f3a38-db46-412a-86f4-3859d28c7e4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:23:06 np0005593232 nova_compute[250269]: 2026-01-23 10:23:06.580 250273 DEBUG oslo_concurrency.processutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:23:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:23:06 np0005593232 podman[356684]: 2026-01-23 10:23:06.623418465 +0000 UTC m=+0.205153413 container init ab7b1a3a219617097df991b657b8ac0f2eee2d9f45ab417e24951db1e0cdca63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:23:06 np0005593232 podman[356684]: 2026-01-23 10:23:06.638115313 +0000 UTC m=+0.219850221 container start ab7b1a3a219617097df991b657b8ac0f2eee2d9f45ab417e24951db1e0cdca63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:23:06 np0005593232 podman[356684]: 2026-01-23 10:23:06.643087974 +0000 UTC m=+0.224823002 container attach ab7b1a3a219617097df991b657b8ac0f2eee2d9f45ab417e24951db1e0cdca63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_williamson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 05:23:06 np0005593232 cool_williamson[356718]: 167 167
Jan 23 05:23:06 np0005593232 systemd[1]: libpod-ab7b1a3a219617097df991b657b8ac0f2eee2d9f45ab417e24951db1e0cdca63.scope: Deactivated successfully.
Jan 23 05:23:06 np0005593232 podman[356684]: 2026-01-23 10:23:06.652030748 +0000 UTC m=+0.233765646 container died ab7b1a3a219617097df991b657b8ac0f2eee2d9f45ab417e24951db1e0cdca63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_williamson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 23 05:23:06 np0005593232 systemd[1]: var-lib-containers-storage-overlay-33fcab7a7dca715c9cffbdaff497ec5f4b4adf2e66f762fba01b100b8046437c-merged.mount: Deactivated successfully.
Jan 23 05:23:06 np0005593232 podman[356684]: 2026-01-23 10:23:06.711253982 +0000 UTC m=+0.292988890 container remove ab7b1a3a219617097df991b657b8ac0f2eee2d9f45ab417e24951db1e0cdca63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_williamson, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:23:06 np0005593232 systemd[1]: libpod-conmon-ab7b1a3a219617097df991b657b8ac0f2eee2d9f45ab417e24951db1e0cdca63.scope: Deactivated successfully.
Jan 23 05:23:06 np0005593232 podman[356765]: 2026-01-23 10:23:06.937326099 +0000 UTC m=+0.056232570 container create 8e30c32ab1722da637fdf8cc45a4d47917a389842263942befb89144c27b6860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_burnell, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:23:06 np0005593232 systemd[1]: Started libpod-conmon-8e30c32ab1722da637fdf8cc45a4d47917a389842263942befb89144c27b6860.scope.
Jan 23 05:23:07 np0005593232 podman[356765]: 2026-01-23 10:23:06.913731668 +0000 UTC m=+0.032638129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:23:07 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:23:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d76cab381275ebc98ef302b3bc4e3e703beeb1fddcb91a01f88e177bc2ed2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:23:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d76cab381275ebc98ef302b3bc4e3e703beeb1fddcb91a01f88e177bc2ed2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:23:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d76cab381275ebc98ef302b3bc4e3e703beeb1fddcb91a01f88e177bc2ed2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:23:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d76cab381275ebc98ef302b3bc4e3e703beeb1fddcb91a01f88e177bc2ed2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:23:07 np0005593232 podman[356765]: 2026-01-23 10:23:07.070070682 +0000 UTC m=+0.188977143 container init 8e30c32ab1722da637fdf8cc45a4d47917a389842263942befb89144c27b6860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:23:07 np0005593232 podman[356765]: 2026-01-23 10:23:07.079725317 +0000 UTC m=+0.198631748 container start 8e30c32ab1722da637fdf8cc45a4d47917a389842263942befb89144c27b6860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_burnell, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 05:23:07 np0005593232 podman[356765]: 2026-01-23 10:23:07.083615167 +0000 UTC m=+0.202521598 container attach 8e30c32ab1722da637fdf8cc45a4d47917a389842263942befb89144c27b6860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_burnell, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:23:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:23:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/571191023' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:23:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:23:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:07.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.144 250273 DEBUG oslo_concurrency.processutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.151 250273 DEBUG nova.virt.libvirt.vif [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:22:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-1915755406',display_name='tempest-ServerPasswordTestJSON-server-1915755406',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-1915755406',id=159,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa744bcb00cc46deb672354c357387d4',ramdisk_id='',reservation_id='r-ia58gl6r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-937516278',owner_user_name='tempest-ServerPasswordTestJSON-937516278-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:23:00Z,user_data=None,user_id='ac37a02e35ca45168d217a1444415569',uuid=600f3a38-db46-412a-86f4-3859d28c7e4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d74ba041-6a1d-406b-b08e-5f67e7c5def6", "address": "fa:16:3e:34:45:84", "network": {"id": "ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1617422078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa744bcb00cc46deb672354c357387d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd74ba041-6a", "ovs_interfaceid": "d74ba041-6a1d-406b-b08e-5f67e7c5def6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.152 250273 DEBUG nova.network.os_vif_util [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Converting VIF {"id": "d74ba041-6a1d-406b-b08e-5f67e7c5def6", "address": "fa:16:3e:34:45:84", "network": {"id": "ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1617422078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa744bcb00cc46deb672354c357387d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd74ba041-6a", "ovs_interfaceid": "d74ba041-6a1d-406b-b08e-5f67e7c5def6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.154 250273 DEBUG nova.network.os_vif_util [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:45:84,bridge_name='br-int',has_traffic_filtering=True,id=d74ba041-6a1d-406b-b08e-5f67e7c5def6,network=Network(ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd74ba041-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.157 250273 DEBUG nova.objects.instance [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 600f3a38-db46-412a-86f4-3859d28c7e4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.184 250273 DEBUG nova.virt.libvirt.driver [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:23:07 np0005593232 nova_compute[250269]:  <uuid>600f3a38-db46-412a-86f4-3859d28c7e4f</uuid>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:  <name>instance-0000009f</name>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerPasswordTestJSON-server-1915755406</nova:name>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:23:05</nova:creationTime>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:23:07 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:        <nova:user uuid="ac37a02e35ca45168d217a1444415569">tempest-ServerPasswordTestJSON-937516278-project-member</nova:user>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:        <nova:project uuid="aa744bcb00cc46deb672354c357387d4">tempest-ServerPasswordTestJSON-937516278</nova:project>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:        <nova:port uuid="d74ba041-6a1d-406b-b08e-5f67e7c5def6">
Jan 23 05:23:07 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <entry name="serial">600f3a38-db46-412a-86f4-3859d28c7e4f</entry>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <entry name="uuid">600f3a38-db46-412a-86f4-3859d28c7e4f</entry>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/600f3a38-db46-412a-86f4-3859d28c7e4f_disk">
Jan 23 05:23:07 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:23:07 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/600f3a38-db46-412a-86f4-3859d28c7e4f_disk.config">
Jan 23 05:23:07 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:23:07 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:34:45:84"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <target dev="tapd74ba041-6a"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/600f3a38-db46-412a-86f4-3859d28c7e4f/console.log" append="off"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:23:07 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:23:07 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:23:07 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:23:07 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.198 250273 DEBUG nova.compute.manager [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Preparing to wait for external event network-vif-plugged-d74ba041-6a1d-406b-b08e-5f67e7c5def6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.199 250273 DEBUG oslo_concurrency.lockutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Acquiring lock "600f3a38-db46-412a-86f4-3859d28c7e4f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.200 250273 DEBUG oslo_concurrency.lockutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Lock "600f3a38-db46-412a-86f4-3859d28c7e4f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.200 250273 DEBUG oslo_concurrency.lockutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Lock "600f3a38-db46-412a-86f4-3859d28c7e4f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.203 250273 DEBUG nova.virt.libvirt.vif [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:22:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-1915755406',display_name='tempest-ServerPasswordTestJSON-server-1915755406',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-1915755406',id=159,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa744bcb00cc46deb672354c357387d4',ramdisk_id='',reservation_id='r-ia58gl6r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-937516278',owner_user_name='tempest-Serve
rPasswordTestJSON-937516278-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:23:00Z,user_data=None,user_id='ac37a02e35ca45168d217a1444415569',uuid=600f3a38-db46-412a-86f4-3859d28c7e4f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d74ba041-6a1d-406b-b08e-5f67e7c5def6", "address": "fa:16:3e:34:45:84", "network": {"id": "ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1617422078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa744bcb00cc46deb672354c357387d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd74ba041-6a", "ovs_interfaceid": "d74ba041-6a1d-406b-b08e-5f67e7c5def6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.204 250273 DEBUG nova.network.os_vif_util [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Converting VIF {"id": "d74ba041-6a1d-406b-b08e-5f67e7c5def6", "address": "fa:16:3e:34:45:84", "network": {"id": "ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1617422078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa744bcb00cc46deb672354c357387d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd74ba041-6a", "ovs_interfaceid": "d74ba041-6a1d-406b-b08e-5f67e7c5def6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.206 250273 DEBUG nova.network.os_vif_util [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:45:84,bridge_name='br-int',has_traffic_filtering=True,id=d74ba041-6a1d-406b-b08e-5f67e7c5def6,network=Network(ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd74ba041-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.207 250273 DEBUG os_vif [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:45:84,bridge_name='br-int',has_traffic_filtering=True,id=d74ba041-6a1d-406b-b08e-5f67e7c5def6,network=Network(ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd74ba041-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.209 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.212 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.213 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.220 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.221 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd74ba041-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.221 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd74ba041-6a, col_values=(('external_ids', {'iface-id': 'd74ba041-6a1d-406b-b08e-5f67e7c5def6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:34:45:84', 'vm-uuid': '600f3a38-db46-412a-86f4-3859d28c7e4f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.224 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:07 np0005593232 NetworkManager[49057]: <info>  [1769163787.2261] manager: (tapd74ba041-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/292)
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.230 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.235 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.236 250273 INFO os_vif [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:45:84,bridge_name='br-int',has_traffic_filtering=True,id=d74ba041-6a1d-406b-b08e-5f67e7c5def6,network=Network(ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd74ba041-6a')#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.304 250273 DEBUG nova.virt.libvirt.driver [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.305 250273 DEBUG nova.virt.libvirt.driver [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.305 250273 DEBUG nova.virt.libvirt.driver [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] No VIF found with MAC fa:16:3e:34:45:84, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.306 250273 INFO nova.virt.libvirt.driver [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Using config drive#033[00m
Jan 23 05:23:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.344 250273 DEBUG nova.storage.rbd_utils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] rbd image 600f3a38-db46-412a-86f4-3859d28c7e4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:23:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:23:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:23:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:23:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:23:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:23:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:23:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:07.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2811: 321 pgs: 321 active+clean; 269 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 660 KiB/s rd, 3.9 MiB/s wr, 100 op/s
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.774 250273 INFO nova.virt.libvirt.driver [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Creating config drive at /var/lib/nova/instances/600f3a38-db46-412a-86f4-3859d28c7e4f/disk.config#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.784 250273 DEBUG oslo_concurrency.processutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/600f3a38-db46-412a-86f4-3859d28c7e4f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe5h3qb0q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]: {
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:    "0": [
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:        {
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:            "devices": [
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:                "/dev/loop3"
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:            ],
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:            "lv_name": "ceph_lv0",
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:            "lv_size": "7511998464",
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:            "name": "ceph_lv0",
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:            "tags": {
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:                "ceph.cluster_name": "ceph",
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:                "ceph.crush_device_class": "",
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:                "ceph.encrypted": "0",
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:                "ceph.osd_id": "0",
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:                "ceph.type": "block",
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:                "ceph.vdo": "0"
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:            },
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:            "type": "block",
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:            "vg_name": "ceph_vg0"
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:        }
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]:    ]
Jan 23 05:23:07 np0005593232 zealous_burnell[356782]: }
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.933 250273 DEBUG oslo_concurrency.processutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/600f3a38-db46-412a-86f4-3859d28c7e4f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe5h3qb0q" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:23:07 np0005593232 systemd[1]: libpod-8e30c32ab1722da637fdf8cc45a4d47917a389842263942befb89144c27b6860.scope: Deactivated successfully.
Jan 23 05:23:07 np0005593232 podman[356765]: 2026-01-23 10:23:07.954306429 +0000 UTC m=+1.073212910 container died 8e30c32ab1722da637fdf8cc45a4d47917a389842263942befb89144c27b6860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.985 250273 DEBUG nova.storage.rbd_utils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] rbd image 600f3a38-db46-412a-86f4-3859d28c7e4f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:23:07 np0005593232 nova_compute[250269]: 2026-01-23 10:23:07.995 250273 DEBUG oslo_concurrency.processutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/600f3a38-db46-412a-86f4-3859d28c7e4f/disk.config 600f3a38-db46-412a-86f4-3859d28c7e4f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:23:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay-12d76cab381275ebc98ef302b3bc4e3e703beeb1fddcb91a01f88e177bc2ed2b-merged.mount: Deactivated successfully.
Jan 23 05:23:08 np0005593232 podman[356765]: 2026-01-23 10:23:08.045435079 +0000 UTC m=+1.164341560 container remove 8e30c32ab1722da637fdf8cc45a4d47917a389842263942befb89144c27b6860 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_burnell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:23:08 np0005593232 systemd[1]: libpod-conmon-8e30c32ab1722da637fdf8cc45a4d47917a389842263942befb89144c27b6860.scope: Deactivated successfully.
Jan 23 05:23:08 np0005593232 nova_compute[250269]: 2026-01-23 10:23:08.271 250273 DEBUG oslo_concurrency.processutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/600f3a38-db46-412a-86f4-3859d28c7e4f/disk.config 600f3a38-db46-412a-86f4-3859d28c7e4f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:23:08 np0005593232 nova_compute[250269]: 2026-01-23 10:23:08.272 250273 INFO nova.virt.libvirt.driver [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Deleting local config drive /var/lib/nova/instances/600f3a38-db46-412a-86f4-3859d28c7e4f/disk.config because it was imported into RBD.#033[00m
Jan 23 05:23:08 np0005593232 kernel: tapd74ba041-6a: entered promiscuous mode
Jan 23 05:23:08 np0005593232 NetworkManager[49057]: <info>  [1769163788.3597] manager: (tapd74ba041-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/293)
Jan 23 05:23:08 np0005593232 ovn_controller[151001]: 2026-01-23T10:23:08Z|00621|binding|INFO|Claiming lport d74ba041-6a1d-406b-b08e-5f67e7c5def6 for this chassis.
Jan 23 05:23:08 np0005593232 nova_compute[250269]: 2026-01-23 10:23:08.362 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:08 np0005593232 ovn_controller[151001]: 2026-01-23T10:23:08Z|00622|binding|INFO|d74ba041-6a1d-406b-b08e-5f67e7c5def6: Claiming fa:16:3e:34:45:84 10.100.0.13
Jan 23 05:23:08 np0005593232 nova_compute[250269]: 2026-01-23 10:23:08.373 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:08 np0005593232 systemd-udevd[356973]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.394 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:45:84 10.100.0.13'], port_security=['fa:16:3e:34:45:84 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '600f3a38-db46-412a-86f4-3859d28c7e4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa744bcb00cc46deb672354c357387d4', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'efa06a2a-f6fb-42e7-a030-f855667d36d9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1fcf7702-391a-4c11-ae2c-231e08c003b7, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=d74ba041-6a1d-406b-b08e-5f67e7c5def6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.397 161902 INFO neutron.agent.ovn.metadata.agent [-] Port d74ba041-6a1d-406b-b08e-5f67e7c5def6 in datapath ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8 bound to our chassis#033[00m
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.398 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8#033[00m
Jan 23 05:23:08 np0005593232 systemd-machined[215836]: New machine qemu-71-instance-0000009f.
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.416 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[344b2ca3-08f7-4f8c-b6b8-888c8694f965]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.418 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce7ab3f8-21 in ovnmeta-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.422 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce7ab3f8-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.422 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[73159f67-fae4-4738-99ce-cc590ef1e3b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:08 np0005593232 NetworkManager[49057]: <info>  [1769163788.4237] device (tapd74ba041-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.423 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[35ea0906-bbc8-4021-ab20-803b785aea12]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:08 np0005593232 NetworkManager[49057]: <info>  [1769163788.4247] device (tapd74ba041-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.441 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[65abe798-268a-4dba-95af-7ec4a32abeeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:08 np0005593232 systemd[1]: Started Virtual Machine qemu-71-instance-0000009f.
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.471 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2f792a89-ceaa-4e78-a9fb-d0eadb40911d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:08 np0005593232 nova_compute[250269]: 2026-01-23 10:23:08.472 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:08 np0005593232 ovn_controller[151001]: 2026-01-23T10:23:08Z|00623|binding|INFO|Setting lport d74ba041-6a1d-406b-b08e-5f67e7c5def6 ovn-installed in OVS
Jan 23 05:23:08 np0005593232 ovn_controller[151001]: 2026-01-23T10:23:08Z|00624|binding|INFO|Setting lport d74ba041-6a1d-406b-b08e-5f67e7c5def6 up in Southbound
Jan 23 05:23:08 np0005593232 nova_compute[250269]: 2026-01-23 10:23:08.482 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.511 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[a5dad5ac-31d5-41bd-8134-01cdb9465c25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:08 np0005593232 NetworkManager[49057]: <info>  [1769163788.5215] manager: (tapce7ab3f8-20): new Veth device (/org/freedesktop/NetworkManager/Devices/294)
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.520 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6ea7a26c-24c8-4deb-985a-de319482c2d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.565 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[9e0b5f8d-5119-48a8-a811-8236442beb8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.572 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[e008ccad-7847-4b29-b83b-459aba399925]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:08 np0005593232 nova_compute[250269]: 2026-01-23 10:23:08.587 250273 DEBUG nova.network.neutron [req-1d2d3fbd-9640-4028-8b92-ccb97d969a8b req-e94c9ebf-0ec7-4979-af98-945988e80972 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Updated VIF entry in instance network info cache for port d74ba041-6a1d-406b-b08e-5f67e7c5def6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:23:08 np0005593232 nova_compute[250269]: 2026-01-23 10:23:08.587 250273 DEBUG nova.network.neutron [req-1d2d3fbd-9640-4028-8b92-ccb97d969a8b req-e94c9ebf-0ec7-4979-af98-945988e80972 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Updating instance_info_cache with network_info: [{"id": "d74ba041-6a1d-406b-b08e-5f67e7c5def6", "address": "fa:16:3e:34:45:84", "network": {"id": "ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1617422078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa744bcb00cc46deb672354c357387d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd74ba041-6a", "ovs_interfaceid": "d74ba041-6a1d-406b-b08e-5f67e7c5def6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:23:08 np0005593232 NetworkManager[49057]: <info>  [1769163788.5984] device (tapce7ab3f8-20): carrier: link connected
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.603 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[b930ffd4-f717-4a1f-b8e7-520ac6c2f744]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:08 np0005593232 nova_compute[250269]: 2026-01-23 10:23:08.605 250273 DEBUG oslo_concurrency.lockutils [req-1d2d3fbd-9640-4028-8b92-ccb97d969a8b req-e94c9ebf-0ec7-4979-af98-945988e80972 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-600f3a38-db46-412a-86f4-3859d28c7e4f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.624 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0a411674-1bd6-4eaa-9b5a-5f254ac1f703]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce7ab3f8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:aa:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 187], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 770174, 'reachable_time': 30631, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357024, 'error': None, 'target': 'ovnmeta-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.645 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4c977ca1-f6fe-4987-b827-78ec644eb7d3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe09:aa9c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 770174, 'tstamp': 770174}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357034, 'error': None, 'target': 'ovnmeta-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.669 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ad5f2a-0ade-436e-b299-0dd20a1b53a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce7ab3f8-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:09:aa:9c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 187], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 770174, 'reachable_time': 30631, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357038, 'error': None, 'target': 'ovnmeta-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.711 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9e5f98ef-9f1b-45c5-a09a-c282c8a0df2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.801 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a7469a24-4092-461a-aaf0-be3192711af4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.805 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce7ab3f8-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.805 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.806 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce7ab3f8-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:23:08 np0005593232 podman[357055]: 2026-01-23 10:23:08.808631265 +0000 UTC m=+0.051453234 container create 7765335a7d6881876d2205fdef5a3ab8e330471346d102aaff046a7b112a3164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Jan 23 05:23:08 np0005593232 kernel: tapce7ab3f8-20: entered promiscuous mode
Jan 23 05:23:08 np0005593232 nova_compute[250269]: 2026-01-23 10:23:08.809 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:08 np0005593232 NetworkManager[49057]: <info>  [1769163788.8141] manager: (tapce7ab3f8-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/295)
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.819 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce7ab3f8-20, col_values=(('external_ids', {'iface-id': 'edc321fb-3fbe-40ba-bf56-84579abbd736'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:23:08 np0005593232 ovn_controller[151001]: 2026-01-23T10:23:08Z|00625|binding|INFO|Releasing lport edc321fb-3fbe-40ba-bf56-84579abbd736 from this chassis (sb_readonly=0)
Jan 23 05:23:08 np0005593232 nova_compute[250269]: 2026-01-23 10:23:08.821 250273 DEBUG nova.compute.manager [req-6c3634d0-08a9-4bf3-9469-33e4b73154ee req-3f492ac0-bcbf-44d8-827f-8e36511c2c01 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Received event network-vif-plugged-d74ba041-6a1d-406b-b08e-5f67e7c5def6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:23:08 np0005593232 nova_compute[250269]: 2026-01-23 10:23:08.822 250273 DEBUG oslo_concurrency.lockutils [req-6c3634d0-08a9-4bf3-9469-33e4b73154ee req-3f492ac0-bcbf-44d8-827f-8e36511c2c01 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "600f3a38-db46-412a-86f4-3859d28c7e4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:23:08 np0005593232 nova_compute[250269]: 2026-01-23 10:23:08.822 250273 DEBUG oslo_concurrency.lockutils [req-6c3634d0-08a9-4bf3-9469-33e4b73154ee req-3f492ac0-bcbf-44d8-827f-8e36511c2c01 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "600f3a38-db46-412a-86f4-3859d28c7e4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:23:08 np0005593232 nova_compute[250269]: 2026-01-23 10:23:08.823 250273 DEBUG oslo_concurrency.lockutils [req-6c3634d0-08a9-4bf3-9469-33e4b73154ee req-3f492ac0-bcbf-44d8-827f-8e36511c2c01 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "600f3a38-db46-412a-86f4-3859d28c7e4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:23:08 np0005593232 nova_compute[250269]: 2026-01-23 10:23:08.823 250273 DEBUG nova.compute.manager [req-6c3634d0-08a9-4bf3-9469-33e4b73154ee req-3f492ac0-bcbf-44d8-827f-8e36511c2c01 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Processing event network-vif-plugged-d74ba041-6a1d-406b-b08e-5f67e7c5def6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:23:08 np0005593232 nova_compute[250269]: 2026-01-23 10:23:08.824 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:08 np0005593232 nova_compute[250269]: 2026-01-23 10:23:08.852 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.855 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:23:08 np0005593232 systemd[1]: Started libpod-conmon-7765335a7d6881876d2205fdef5a3ab8e330471346d102aaff046a7b112a3164.scope.
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.857 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b92f1306-2126-4b5f-917d-8ff1714410ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.859 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8.pid.haproxy
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:23:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:08.865 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8', 'env', 'PROCESS_TAG=haproxy-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:23:08 np0005593232 podman[357055]: 2026-01-23 10:23:08.784626863 +0000 UTC m=+0.027448832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:23:08 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:23:08 np0005593232 podman[357055]: 2026-01-23 10:23:08.922156172 +0000 UTC m=+0.164978211 container init 7765335a7d6881876d2205fdef5a3ab8e330471346d102aaff046a7b112a3164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mirzakhani, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:23:08 np0005593232 podman[357055]: 2026-01-23 10:23:08.936428058 +0000 UTC m=+0.179250027 container start 7765335a7d6881876d2205fdef5a3ab8e330471346d102aaff046a7b112a3164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:23:08 np0005593232 podman[357055]: 2026-01-23 10:23:08.940246617 +0000 UTC m=+0.183068586 container attach 7765335a7d6881876d2205fdef5a3ab8e330471346d102aaff046a7b112a3164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 23 05:23:08 np0005593232 busy_mirzakhani[357074]: 167 167
Jan 23 05:23:08 np0005593232 systemd[1]: libpod-7765335a7d6881876d2205fdef5a3ab8e330471346d102aaff046a7b112a3164.scope: Deactivated successfully.
Jan 23 05:23:08 np0005593232 conmon[357074]: conmon 7765335a7d6881876d22 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7765335a7d6881876d2205fdef5a3ab8e330471346d102aaff046a7b112a3164.scope/container/memory.events
Jan 23 05:23:08 np0005593232 podman[357055]: 2026-01-23 10:23:08.95161816 +0000 UTC m=+0.194440169 container died 7765335a7d6881876d2205fdef5a3ab8e330471346d102aaff046a7b112a3164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mirzakhani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 05:23:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7e0dfc6df3a3b290b6812ba7676ff17182bdee59ec7dcf5003b3911a5e0b677b-merged.mount: Deactivated successfully.
Jan 23 05:23:09 np0005593232 podman[357055]: 2026-01-23 10:23:09.021767094 +0000 UTC m=+0.264589073 container remove 7765335a7d6881876d2205fdef5a3ab8e330471346d102aaff046a7b112a3164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_mirzakhani, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:23:09 np0005593232 systemd[1]: libpod-conmon-7765335a7d6881876d2205fdef5a3ab8e330471346d102aaff046a7b112a3164.scope: Deactivated successfully.
Jan 23 05:23:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:09.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.160 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:09 np0005593232 podman[357146]: 2026-01-23 10:23:09.280637832 +0000 UTC m=+0.062502968 container create a4d7644cc410f2e580eb7ea2c7c025f6a0fa4325000953d8a839affadff8b4d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mcnulty, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.318 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163789.3173194, 600f3a38-db46-412a-86f4-3859d28c7e4f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.318 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] VM Started (Lifecycle Event)#033[00m
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.321 250273 DEBUG nova.compute.manager [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.326 250273 DEBUG nova.virt.libvirt.driver [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:23:09 np0005593232 systemd[1]: Started libpod-conmon-a4d7644cc410f2e580eb7ea2c7c025f6a0fa4325000953d8a839affadff8b4d4.scope.
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.331 250273 INFO nova.virt.libvirt.driver [-] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Instance spawned successfully.#033[00m
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.331 250273 DEBUG nova.virt.libvirt.driver [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:23:09 np0005593232 podman[357146]: 2026-01-23 10:23:09.258303067 +0000 UTC m=+0.040168223 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.351 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.357 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:23:09 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.364 250273 DEBUG nova.virt.libvirt.driver [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.364 250273 DEBUG nova.virt.libvirt.driver [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.364 250273 DEBUG nova.virt.libvirt.driver [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.365 250273 DEBUG nova.virt.libvirt.driver [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.365 250273 DEBUG nova.virt.libvirt.driver [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:23:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07d44fb9b59611a09cfa7f74faa3b60024c38ba016c93d498f0c960779f82448/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.365 250273 DEBUG nova.virt.libvirt.driver [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:23:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07d44fb9b59611a09cfa7f74faa3b60024c38ba016c93d498f0c960779f82448/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:23:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07d44fb9b59611a09cfa7f74faa3b60024c38ba016c93d498f0c960779f82448/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:23:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07d44fb9b59611a09cfa7f74faa3b60024c38ba016c93d498f0c960779f82448/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:23:09 np0005593232 podman[357146]: 2026-01-23 10:23:09.383816475 +0000 UTC m=+0.165681631 container init a4d7644cc410f2e580eb7ea2c7c025f6a0fa4325000953d8a839affadff8b4d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:23:09 np0005593232 podman[357146]: 2026-01-23 10:23:09.39314159 +0000 UTC m=+0.175006736 container start a4d7644cc410f2e580eb7ea2c7c025f6a0fa4325000953d8a839affadff8b4d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mcnulty, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:23:09 np0005593232 podman[357146]: 2026-01-23 10:23:09.398144092 +0000 UTC m=+0.180009248 container attach a4d7644cc410f2e580eb7ea2c7c025f6a0fa4325000953d8a839affadff8b4d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.401 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.402 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163789.3208573, 600f3a38-db46-412a-86f4-3859d28c7e4f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.403 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:23:09 np0005593232 podman[357182]: 2026-01-23 10:23:09.414571119 +0000 UTC m=+0.071351249 container create be0617274c1dee80fe1fe7477c871775c6250effde84c7f78e806115be14e994 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 05:23:09 np0005593232 systemd[1]: Started libpod-conmon-be0617274c1dee80fe1fe7477c871775c6250effde84c7f78e806115be14e994.scope.
Jan 23 05:23:09 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:23:09 np0005593232 podman[357182]: 2026-01-23 10:23:09.38257695 +0000 UTC m=+0.039357100 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:23:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/103ad7f874fcbc077f95cd80f7b3dc0fc4e977f2943e0cfa84fcef5547b819e4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:23:09 np0005593232 podman[357182]: 2026-01-23 10:23:09.493915235 +0000 UTC m=+0.150695375 container init be0617274c1dee80fe1fe7477c871775c6250effde84c7f78e806115be14e994 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:23:09 np0005593232 podman[357182]: 2026-01-23 10:23:09.499538165 +0000 UTC m=+0.156318295 container start be0617274c1dee80fe1fe7477c871775c6250effde84c7f78e806115be14e994 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 23 05:23:09 np0005593232 neutron-haproxy-ovnmeta-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8[357202]: [NOTICE]   (357206) : New worker (357208) forked
Jan 23 05:23:09 np0005593232 neutron-haproxy-ovnmeta-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8[357202]: [NOTICE]   (357206) : Loading success.
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.646 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.652 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163789.3250644, 600f3a38-db46-412a-86f4-3859d28c7e4f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.652 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:23:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:23:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:09.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:23:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2812: 321 pgs: 321 active+clean; 269 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 610 KiB/s rd, 3.9 MiB/s wr, 102 op/s
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.707 250273 INFO nova.compute.manager [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Took 8.97 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.707 250273 DEBUG nova.compute.manager [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.939 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.945 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.975 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:23:09 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.977 250273 INFO nova.compute.manager [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Took 10.38 seconds to build instance.#033[00m
Jan 23 05:23:10 np0005593232 nova_compute[250269]: 2026-01-23 10:23:09.999 250273 DEBUG oslo_concurrency.lockutils [None req-1b86ff9b-41aa-4528-b3da-073126472aa6 ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Lock "600f3a38-db46-412a-86f4-3859d28c7e4f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:23:10 np0005593232 compassionate_mcnulty[357183]: {
Jan 23 05:23:10 np0005593232 compassionate_mcnulty[357183]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:23:10 np0005593232 compassionate_mcnulty[357183]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:23:10 np0005593232 compassionate_mcnulty[357183]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:23:10 np0005593232 compassionate_mcnulty[357183]:        "osd_id": 0,
Jan 23 05:23:10 np0005593232 compassionate_mcnulty[357183]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:23:10 np0005593232 compassionate_mcnulty[357183]:        "type": "bluestore"
Jan 23 05:23:10 np0005593232 compassionate_mcnulty[357183]:    }
Jan 23 05:23:10 np0005593232 compassionate_mcnulty[357183]: }
Jan 23 05:23:10 np0005593232 systemd[1]: libpod-a4d7644cc410f2e580eb7ea2c7c025f6a0fa4325000953d8a839affadff8b4d4.scope: Deactivated successfully.
Jan 23 05:23:10 np0005593232 podman[357146]: 2026-01-23 10:23:10.361992342 +0000 UTC m=+1.143857568 container died a4d7644cc410f2e580eb7ea2c7c025f6a0fa4325000953d8a839affadff8b4d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mcnulty, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:23:10 np0005593232 systemd[1]: var-lib-containers-storage-overlay-07d44fb9b59611a09cfa7f74faa3b60024c38ba016c93d498f0c960779f82448-merged.mount: Deactivated successfully.
Jan 23 05:23:10 np0005593232 podman[357146]: 2026-01-23 10:23:10.440245567 +0000 UTC m=+1.222110713 container remove a4d7644cc410f2e580eb7ea2c7c025f6a0fa4325000953d8a839affadff8b4d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:23:10 np0005593232 systemd[1]: libpod-conmon-a4d7644cc410f2e580eb7ea2c7c025f6a0fa4325000953d8a839affadff8b4d4.scope: Deactivated successfully.
Jan 23 05:23:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:23:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:23:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:23:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:23:10 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 6bc15946-9c83-4488-be09-8a34e34bed1b does not exist
Jan 23 05:23:10 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d050967f-d68a-481b-b8be-e874b5743a9a does not exist
Jan 23 05:23:10 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 379c21f5-8e2e-411a-b9c4-1f8ada15bbfc does not exist
Jan 23 05:23:11 np0005593232 nova_compute[250269]: 2026-01-23 10:23:11.092 250273 DEBUG nova.compute.manager [req-9176591c-c1dc-4772-b57f-7abddd958085 req-74202089-a929-40ce-8d0e-057c4922d93d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Received event network-vif-plugged-d74ba041-6a1d-406b-b08e-5f67e7c5def6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:23:11 np0005593232 nova_compute[250269]: 2026-01-23 10:23:11.092 250273 DEBUG oslo_concurrency.lockutils [req-9176591c-c1dc-4772-b57f-7abddd958085 req-74202089-a929-40ce-8d0e-057c4922d93d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "600f3a38-db46-412a-86f4-3859d28c7e4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:23:11 np0005593232 nova_compute[250269]: 2026-01-23 10:23:11.093 250273 DEBUG oslo_concurrency.lockutils [req-9176591c-c1dc-4772-b57f-7abddd958085 req-74202089-a929-40ce-8d0e-057c4922d93d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "600f3a38-db46-412a-86f4-3859d28c7e4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:23:11 np0005593232 nova_compute[250269]: 2026-01-23 10:23:11.093 250273 DEBUG oslo_concurrency.lockutils [req-9176591c-c1dc-4772-b57f-7abddd958085 req-74202089-a929-40ce-8d0e-057c4922d93d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "600f3a38-db46-412a-86f4-3859d28c7e4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:23:11 np0005593232 nova_compute[250269]: 2026-01-23 10:23:11.094 250273 DEBUG nova.compute.manager [req-9176591c-c1dc-4772-b57f-7abddd958085 req-74202089-a929-40ce-8d0e-057c4922d93d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] No waiting events found dispatching network-vif-plugged-d74ba041-6a1d-406b-b08e-5f67e7c5def6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:23:11 np0005593232 nova_compute[250269]: 2026-01-23 10:23:11.094 250273 WARNING nova.compute.manager [req-9176591c-c1dc-4772-b57f-7abddd958085 req-74202089-a929-40ce-8d0e-057c4922d93d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Received unexpected event network-vif-plugged-d74ba041-6a1d-406b-b08e-5f67e7c5def6 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:23:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:23:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:11.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:23:11 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:23:11 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:23:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:11.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2813: 321 pgs: 321 active+clean; 269 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 571 KiB/s rd, 3.9 MiB/s wr, 99 op/s
Jan 23 05:23:12 np0005593232 nova_compute[250269]: 2026-01-23 10:23:12.224 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:23:12 np0005593232 podman[357299]: 2026-01-23 10:23:12.457616856 +0000 UTC m=+0.111896122 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 23 05:23:12 np0005593232 nova_compute[250269]: 2026-01-23 10:23:12.659 250273 DEBUG oslo_concurrency.lockutils [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Acquiring lock "600f3a38-db46-412a-86f4-3859d28c7e4f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:23:12 np0005593232 nova_compute[250269]: 2026-01-23 10:23:12.659 250273 DEBUG oslo_concurrency.lockutils [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Lock "600f3a38-db46-412a-86f4-3859d28c7e4f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:23:12 np0005593232 nova_compute[250269]: 2026-01-23 10:23:12.660 250273 DEBUG oslo_concurrency.lockutils [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Acquiring lock "600f3a38-db46-412a-86f4-3859d28c7e4f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:23:12 np0005593232 nova_compute[250269]: 2026-01-23 10:23:12.660 250273 DEBUG oslo_concurrency.lockutils [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Lock "600f3a38-db46-412a-86f4-3859d28c7e4f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:23:12 np0005593232 nova_compute[250269]: 2026-01-23 10:23:12.660 250273 DEBUG oslo_concurrency.lockutils [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Lock "600f3a38-db46-412a-86f4-3859d28c7e4f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:23:12 np0005593232 nova_compute[250269]: 2026-01-23 10:23:12.661 250273 INFO nova.compute.manager [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Terminating instance#033[00m
Jan 23 05:23:12 np0005593232 nova_compute[250269]: 2026-01-23 10:23:12.662 250273 DEBUG nova.compute.manager [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:23:12 np0005593232 kernel: tapd74ba041-6a (unregistering): left promiscuous mode
Jan 23 05:23:12 np0005593232 NetworkManager[49057]: <info>  [1769163792.7208] device (tapd74ba041-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:23:12 np0005593232 ovn_controller[151001]: 2026-01-23T10:23:12Z|00626|binding|INFO|Releasing lport d74ba041-6a1d-406b-b08e-5f67e7c5def6 from this chassis (sb_readonly=0)
Jan 23 05:23:12 np0005593232 nova_compute[250269]: 2026-01-23 10:23:12.735 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:12 np0005593232 ovn_controller[151001]: 2026-01-23T10:23:12Z|00627|binding|INFO|Setting lport d74ba041-6a1d-406b-b08e-5f67e7c5def6 down in Southbound
Jan 23 05:23:12 np0005593232 ovn_controller[151001]: 2026-01-23T10:23:12Z|00628|binding|INFO|Removing iface tapd74ba041-6a ovn-installed in OVS
Jan 23 05:23:12 np0005593232 nova_compute[250269]: 2026-01-23 10:23:12.739 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:12.744 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:45:84 10.100.0.13'], port_security=['fa:16:3e:34:45:84 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '600f3a38-db46-412a-86f4-3859d28c7e4f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa744bcb00cc46deb672354c357387d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'efa06a2a-f6fb-42e7-a030-f855667d36d9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1fcf7702-391a-4c11-ae2c-231e08c003b7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=d74ba041-6a1d-406b-b08e-5f67e7c5def6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:23:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:12.746 161902 INFO neutron.agent.ovn.metadata.agent [-] Port d74ba041-6a1d-406b-b08e-5f67e7c5def6 in datapath ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8 unbound from our chassis#033[00m
Jan 23 05:23:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:12.748 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:23:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:12.749 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3db95670-414f-461d-b880-e9187f17df58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:12.750 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8 namespace which is not needed anymore#033[00m
Jan 23 05:23:12 np0005593232 nova_compute[250269]: 2026-01-23 10:23:12.798 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:12 np0005593232 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000009f.scope: Deactivated successfully.
Jan 23 05:23:12 np0005593232 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d0000009f.scope: Consumed 4.335s CPU time.
Jan 23 05:23:12 np0005593232 systemd-machined[215836]: Machine qemu-71-instance-0000009f terminated.
Jan 23 05:23:12 np0005593232 nova_compute[250269]: 2026-01-23 10:23:12.897 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:12 np0005593232 nova_compute[250269]: 2026-01-23 10:23:12.910 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:12 np0005593232 nova_compute[250269]: 2026-01-23 10:23:12.917 250273 INFO nova.virt.libvirt.driver [-] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Instance destroyed successfully.#033[00m
Jan 23 05:23:12 np0005593232 nova_compute[250269]: 2026-01-23 10:23:12.917 250273 DEBUG nova.objects.instance [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Lazy-loading 'resources' on Instance uuid 600f3a38-db46-412a-86f4-3859d28c7e4f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:23:12 np0005593232 neutron-haproxy-ovnmeta-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8[357202]: [NOTICE]   (357206) : haproxy version is 2.8.14-c23fe91
Jan 23 05:23:12 np0005593232 neutron-haproxy-ovnmeta-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8[357202]: [NOTICE]   (357206) : path to executable is /usr/sbin/haproxy
Jan 23 05:23:12 np0005593232 neutron-haproxy-ovnmeta-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8[357202]: [WARNING]  (357206) : Exiting Master process...
Jan 23 05:23:12 np0005593232 neutron-haproxy-ovnmeta-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8[357202]: [ALERT]    (357206) : Current worker (357208) exited with code 143 (Terminated)
Jan 23 05:23:12 np0005593232 neutron-haproxy-ovnmeta-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8[357202]: [WARNING]  (357206) : All workers exited. Exiting... (0)
Jan 23 05:23:12 np0005593232 systemd[1]: libpod-be0617274c1dee80fe1fe7477c871775c6250effde84c7f78e806115be14e994.scope: Deactivated successfully.
Jan 23 05:23:12 np0005593232 nova_compute[250269]: 2026-01-23 10:23:12.935 250273 DEBUG nova.virt.libvirt.vif [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:22:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-1915755406',display_name='tempest-ServerPasswordTestJSON-server-1915755406',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-1915755406',id=159,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:23:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aa744bcb00cc46deb672354c357387d4',ramdisk_id='',reservation_id='r-ia58gl6r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-937516278',owner_user_name='tempest-ServerPasswordTestJSON-937516278-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:23:12Z,user_data=None,user_id='ac37a02e35ca45168d217a1444415569',uuid=600f3a38-db46-412a-86f4-3859d28c7e4f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d74ba041-6a1d-406b-b08e-5f67e7c5def6", "address": "fa:16:3e:34:45:84", "network": {"id": "ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1617422078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa744bcb00cc46deb672354c357387d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd74ba041-6a", "ovs_interfaceid": "d74ba041-6a1d-406b-b08e-5f67e7c5def6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:23:12 np0005593232 nova_compute[250269]: 2026-01-23 10:23:12.936 250273 DEBUG nova.network.os_vif_util [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Converting VIF {"id": "d74ba041-6a1d-406b-b08e-5f67e7c5def6", "address": "fa:16:3e:34:45:84", "network": {"id": "ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1617422078-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa744bcb00cc46deb672354c357387d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd74ba041-6a", "ovs_interfaceid": "d74ba041-6a1d-406b-b08e-5f67e7c5def6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:23:12 np0005593232 nova_compute[250269]: 2026-01-23 10:23:12.937 250273 DEBUG nova.network.os_vif_util [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:45:84,bridge_name='br-int',has_traffic_filtering=True,id=d74ba041-6a1d-406b-b08e-5f67e7c5def6,network=Network(ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd74ba041-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:23:12 np0005593232 nova_compute[250269]: 2026-01-23 10:23:12.937 250273 DEBUG os_vif [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:45:84,bridge_name='br-int',has_traffic_filtering=True,id=d74ba041-6a1d-406b-b08e-5f67e7c5def6,network=Network(ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd74ba041-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:23:12 np0005593232 nova_compute[250269]: 2026-01-23 10:23:12.939 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:12 np0005593232 nova_compute[250269]: 2026-01-23 10:23:12.939 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd74ba041-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:23:12 np0005593232 podman[357351]: 2026-01-23 10:23:12.940750069 +0000 UTC m=+0.062822237 container died be0617274c1dee80fe1fe7477c871775c6250effde84c7f78e806115be14e994 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:23:12 np0005593232 nova_compute[250269]: 2026-01-23 10:23:12.941 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:12 np0005593232 nova_compute[250269]: 2026-01-23 10:23:12.942 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:12 np0005593232 nova_compute[250269]: 2026-01-23 10:23:12.946 250273 INFO os_vif [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:45:84,bridge_name='br-int',has_traffic_filtering=True,id=d74ba041-6a1d-406b-b08e-5f67e7c5def6,network=Network(ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd74ba041-6a')#033[00m
Jan 23 05:23:12 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-be0617274c1dee80fe1fe7477c871775c6250effde84c7f78e806115be14e994-userdata-shm.mount: Deactivated successfully.
Jan 23 05:23:12 np0005593232 systemd[1]: var-lib-containers-storage-overlay-103ad7f874fcbc077f95cd80f7b3dc0fc4e977f2943e0cfa84fcef5547b819e4-merged.mount: Deactivated successfully.
Jan 23 05:23:12 np0005593232 podman[357351]: 2026-01-23 10:23:12.985067269 +0000 UTC m=+0.107139447 container cleanup be0617274c1dee80fe1fe7477c871775c6250effde84c7f78e806115be14e994 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:23:12 np0005593232 systemd[1]: libpod-conmon-be0617274c1dee80fe1fe7477c871775c6250effde84c7f78e806115be14e994.scope: Deactivated successfully.
Jan 23 05:23:13 np0005593232 podman[357403]: 2026-01-23 10:23:13.053993228 +0000 UTC m=+0.047574813 container remove be0617274c1dee80fe1fe7477c871775c6250effde84c7f78e806115be14e994 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 23 05:23:13 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:13.062 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6873c78e-0ab1-4445-8c4b-d0787cc437ab]: (4, ('Fri Jan 23 10:23:12 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8 (be0617274c1dee80fe1fe7477c871775c6250effde84c7f78e806115be14e994)\nbe0617274c1dee80fe1fe7477c871775c6250effde84c7f78e806115be14e994\nFri Jan 23 10:23:12 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8 (be0617274c1dee80fe1fe7477c871775c6250effde84c7f78e806115be14e994)\nbe0617274c1dee80fe1fe7477c871775c6250effde84c7f78e806115be14e994\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:13 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:13.064 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e2db02bd-94ca-4ae1-8533-6cb61a3f4c28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:13 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:13.066 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce7ab3f8-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:23:13 np0005593232 nova_compute[250269]: 2026-01-23 10:23:13.068 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:13 np0005593232 kernel: tapce7ab3f8-20: left promiscuous mode
Jan 23 05:23:13 np0005593232 nova_compute[250269]: 2026-01-23 10:23:13.088 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:13 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:13.091 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[03e21962-80e8-4cf7-b10c-f86517d47564]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:13 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:13.112 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[980c3614-a2cb-4426-9083-2d34eef33f95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:13 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:13.114 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[59fd5fa3-4f5b-4e09-bdb4-62613e4d1bed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:13.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:13 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:13.135 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c8125e74-41a4-4d9d-b24d-12d2294276d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 770164, 'reachable_time': 26697, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357421, 'error': None, 'target': 'ovnmeta-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:13 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:13.138 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce7ab3f8-21a8-452d-b07a-39bd3dfa53d8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:23:13 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:13.138 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[1b7b2db8-6a77-4aff-94eb-75c3d4c64a9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:13 np0005593232 systemd[1]: run-netns-ovnmeta\x2dce7ab3f8\x2d21a8\x2d452d\x2db07a\x2d39bd3dfa53d8.mount: Deactivated successfully.
Jan 23 05:23:13 np0005593232 nova_compute[250269]: 2026-01-23 10:23:13.399 250273 DEBUG nova.compute.manager [req-e48ba7c7-0c99-44d3-8ac0-d799af4cbc9f req-75c68762-7717-4458-938c-c1265574a364 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Received event network-vif-unplugged-d74ba041-6a1d-406b-b08e-5f67e7c5def6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:23:13 np0005593232 nova_compute[250269]: 2026-01-23 10:23:13.399 250273 DEBUG oslo_concurrency.lockutils [req-e48ba7c7-0c99-44d3-8ac0-d799af4cbc9f req-75c68762-7717-4458-938c-c1265574a364 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "600f3a38-db46-412a-86f4-3859d28c7e4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:23:13 np0005593232 nova_compute[250269]: 2026-01-23 10:23:13.399 250273 DEBUG oslo_concurrency.lockutils [req-e48ba7c7-0c99-44d3-8ac0-d799af4cbc9f req-75c68762-7717-4458-938c-c1265574a364 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "600f3a38-db46-412a-86f4-3859d28c7e4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:23:13 np0005593232 nova_compute[250269]: 2026-01-23 10:23:13.400 250273 DEBUG oslo_concurrency.lockutils [req-e48ba7c7-0c99-44d3-8ac0-d799af4cbc9f req-75c68762-7717-4458-938c-c1265574a364 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "600f3a38-db46-412a-86f4-3859d28c7e4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:23:13 np0005593232 nova_compute[250269]: 2026-01-23 10:23:13.400 250273 DEBUG nova.compute.manager [req-e48ba7c7-0c99-44d3-8ac0-d799af4cbc9f req-75c68762-7717-4458-938c-c1265574a364 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] No waiting events found dispatching network-vif-unplugged-d74ba041-6a1d-406b-b08e-5f67e7c5def6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:23:13 np0005593232 nova_compute[250269]: 2026-01-23 10:23:13.400 250273 DEBUG nova.compute.manager [req-e48ba7c7-0c99-44d3-8ac0-d799af4cbc9f req-75c68762-7717-4458-938c-c1265574a364 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Received event network-vif-unplugged-d74ba041-6a1d-406b-b08e-5f67e7c5def6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:23:13 np0005593232 nova_compute[250269]: 2026-01-23 10:23:13.412 250273 INFO nova.virt.libvirt.driver [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Deleting instance files /var/lib/nova/instances/600f3a38-db46-412a-86f4-3859d28c7e4f_del#033[00m
Jan 23 05:23:13 np0005593232 nova_compute[250269]: 2026-01-23 10:23:13.413 250273 INFO nova.virt.libvirt.driver [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Deletion of /var/lib/nova/instances/600f3a38-db46-412a-86f4-3859d28c7e4f_del complete#033[00m
Jan 23 05:23:13 np0005593232 nova_compute[250269]: 2026-01-23 10:23:13.486 250273 INFO nova.compute.manager [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:23:13 np0005593232 nova_compute[250269]: 2026-01-23 10:23:13.487 250273 DEBUG oslo.service.loopingcall [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:23:13 np0005593232 nova_compute[250269]: 2026-01-23 10:23:13.487 250273 DEBUG nova.compute.manager [-] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:23:13 np0005593232 nova_compute[250269]: 2026-01-23 10:23:13.487 250273 DEBUG nova.network.neutron [-] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:23:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:13.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2814: 321 pgs: 321 active+clean; 269 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 143 op/s
Jan 23 05:23:14 np0005593232 nova_compute[250269]: 2026-01-23 10:23:14.163 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:23:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:15.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:23:15 np0005593232 nova_compute[250269]: 2026-01-23 10:23:15.209 250273 DEBUG nova.network.neutron [-] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:23:15 np0005593232 nova_compute[250269]: 2026-01-23 10:23:15.241 250273 INFO nova.compute.manager [-] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Took 1.75 seconds to deallocate network for instance.#033[00m
Jan 23 05:23:15 np0005593232 nova_compute[250269]: 2026-01-23 10:23:15.320 250273 DEBUG oslo_concurrency.lockutils [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:23:15 np0005593232 nova_compute[250269]: 2026-01-23 10:23:15.321 250273 DEBUG oslo_concurrency.lockutils [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:23:15 np0005593232 nova_compute[250269]: 2026-01-23 10:23:15.408 250273 DEBUG oslo_concurrency.processutils [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:23:15 np0005593232 nova_compute[250269]: 2026-01-23 10:23:15.524 250273 DEBUG nova.compute.manager [req-b2986549-b8cf-46cd-8d73-40dbd782d18d req-d9662e29-10ca-4caf-88a7-7838c6b28d94 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Received event network-vif-plugged-d74ba041-6a1d-406b-b08e-5f67e7c5def6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:23:15 np0005593232 nova_compute[250269]: 2026-01-23 10:23:15.525 250273 DEBUG oslo_concurrency.lockutils [req-b2986549-b8cf-46cd-8d73-40dbd782d18d req-d9662e29-10ca-4caf-88a7-7838c6b28d94 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "600f3a38-db46-412a-86f4-3859d28c7e4f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:23:15 np0005593232 nova_compute[250269]: 2026-01-23 10:23:15.525 250273 DEBUG oslo_concurrency.lockutils [req-b2986549-b8cf-46cd-8d73-40dbd782d18d req-d9662e29-10ca-4caf-88a7-7838c6b28d94 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "600f3a38-db46-412a-86f4-3859d28c7e4f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:23:15 np0005593232 nova_compute[250269]: 2026-01-23 10:23:15.526 250273 DEBUG oslo_concurrency.lockutils [req-b2986549-b8cf-46cd-8d73-40dbd782d18d req-d9662e29-10ca-4caf-88a7-7838c6b28d94 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "600f3a38-db46-412a-86f4-3859d28c7e4f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:23:15 np0005593232 nova_compute[250269]: 2026-01-23 10:23:15.526 250273 DEBUG nova.compute.manager [req-b2986549-b8cf-46cd-8d73-40dbd782d18d req-d9662e29-10ca-4caf-88a7-7838c6b28d94 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] No waiting events found dispatching network-vif-plugged-d74ba041-6a1d-406b-b08e-5f67e7c5def6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 05:23:15 np0005593232 nova_compute[250269]: 2026-01-23 10:23:15.526 250273 WARNING nova.compute.manager [req-b2986549-b8cf-46cd-8d73-40dbd782d18d req-d9662e29-10ca-4caf-88a7-7838c6b28d94 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Received unexpected event network-vif-plugged-d74ba041-6a1d-406b-b08e-5f67e7c5def6 for instance with vm_state deleted and task_state None.
Jan 23 05:23:15 np0005593232 nova_compute[250269]: 2026-01-23 10:23:15.526 250273 DEBUG nova.compute.manager [req-b2986549-b8cf-46cd-8d73-40dbd782d18d req-d9662e29-10ca-4caf-88a7-7838c6b28d94 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Received event network-vif-deleted-d74ba041-6a1d-406b-b08e-5f67e7c5def6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:23:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:15.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2815: 321 pgs: 321 active+clean; 206 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 142 op/s
Jan 23 05:23:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:23:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3991721102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:23:16 np0005593232 nova_compute[250269]: 2026-01-23 10:23:16.045 250273 DEBUG oslo_concurrency.processutils [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:23:16 np0005593232 nova_compute[250269]: 2026-01-23 10:23:16.051 250273 DEBUG nova.compute.provider_tree [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 05:23:16 np0005593232 nova_compute[250269]: 2026-01-23 10:23:16.086 250273 DEBUG nova.scheduler.client.report [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 05:23:16 np0005593232 nova_compute[250269]: 2026-01-23 10:23:16.127 250273 DEBUG oslo_concurrency.lockutils [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:23:16 np0005593232 nova_compute[250269]: 2026-01-23 10:23:16.163 250273 INFO nova.scheduler.client.report [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Deleted allocations for instance 600f3a38-db46-412a-86f4-3859d28c7e4f
Jan 23 05:23:16 np0005593232 nova_compute[250269]: 2026-01-23 10:23:16.233 250273 DEBUG oslo_concurrency.lockutils [None req-746485e1-142a-475b-b37a-b619b7aad42a ac37a02e35ca45168d217a1444415569 aa744bcb00cc46deb672354c357387d4 - - default default] Lock "600f3a38-db46-412a-86f4-3859d28c7e4f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:23:16 np0005593232 podman[357497]: 2026-01-23 10:23:16.410046521 +0000 UTC m=+0.065227224 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 23 05:23:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:17.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:23:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Jan 23 05:23:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Jan 23 05:23:17 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Jan 23 05:23:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:17.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2817: 321 pgs: 321 active+clean; 141 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 36 KiB/s wr, 287 op/s
Jan 23 05:23:17 np0005593232 nova_compute[250269]: 2026-01-23 10:23:17.942 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:23:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:19.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:19 np0005593232 nova_compute[250269]: 2026-01-23 10:23:19.165 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:23:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Jan 23 05:23:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Jan 23 05:23:19 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Jan 23 05:23:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:19.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2819: 321 pgs: 321 active+clean; 154 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.6 MiB/s wr, 402 op/s
Jan 23 05:23:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:21.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:21.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2820: 321 pgs: 321 active+clean; 154 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 337 op/s
Jan 23 05:23:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:23:22 np0005593232 nova_compute[250269]: 2026-01-23 10:23:22.943 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:23:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:23.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:23.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2821: 321 pgs: 321 active+clean; 141 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 174 KiB/s rd, 2.6 MiB/s wr, 273 op/s
Jan 23 05:23:24 np0005593232 nova_compute[250269]: 2026-01-23 10:23:24.167 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:23:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:25.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:25.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2822: 321 pgs: 321 active+clean; 141 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 2.5 MiB/s wr, 87 op/s
Jan 23 05:23:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:23:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:27.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:23:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:23:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Jan 23 05:23:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Jan 23 05:23:27 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Jan 23 05:23:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:23:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:27.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:23:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2824: 321 pgs: 321 active+clean; 141 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 1013 KiB/s wr, 42 op/s
Jan 23 05:23:27 np0005593232 nova_compute[250269]: 2026-01-23 10:23:27.914 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769163792.9136086, 600f3a38-db46-412a-86f4-3859d28c7e4f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 05:23:27 np0005593232 nova_compute[250269]: 2026-01-23 10:23:27.914 250273 INFO nova.compute.manager [-] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] VM Stopped (Lifecycle Event)
Jan 23 05:23:28 np0005593232 nova_compute[250269]: 2026-01-23 10:23:28.002 250273 DEBUG nova.compute.manager [None req-a6f2e4f2-525c-4153-8310-8924f2125bd2 - - - - - -] [instance: 600f3a38-db46-412a-86f4-3859d28c7e4f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:23:28 np0005593232 nova_compute[250269]: 2026-01-23 10:23:28.003 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:23:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:29.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:29 np0005593232 nova_compute[250269]: 2026-01-23 10:23:29.229 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:23:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2825: 321 pgs: 321 active+clean; 141 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 823 KiB/s wr, 35 op/s
Jan 23 05:23:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:29.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:31.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2826: 321 pgs: 321 active+clean; 141 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 823 KiB/s wr, 35 op/s
Jan 23 05:23:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:31.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:23:33 np0005593232 nova_compute[250269]: 2026-01-23 10:23:33.006 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:23:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:23:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:33.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:23:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2827: 321 pgs: 321 active+clean; 165 MiB data, 1.1 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 38 op/s
Jan 23 05:23:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:33.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:34 np0005593232 nova_compute[250269]: 2026-01-23 10:23:34.232 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:23:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:23:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:35.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:23:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2828: 321 pgs: 321 active+clean; 187 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 42 op/s
Jan 23 05:23:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:35.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:36 np0005593232 nova_compute[250269]: 2026-01-23 10:23:36.873 250273 DEBUG oslo_concurrency.lockutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "de449075-cfee-456b-ac52-1d6f8284a3f0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:23:36 np0005593232 nova_compute[250269]: 2026-01-23 10:23:36.873 250273 DEBUG oslo_concurrency.lockutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "de449075-cfee-456b-ac52-1d6f8284a3f0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:23:36 np0005593232 nova_compute[250269]: 2026-01-23 10:23:36.908 250273 DEBUG nova.compute.manager [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 23 05:23:36 np0005593232 nova_compute[250269]: 2026-01-23 10:23:36.998 250273 DEBUG oslo_concurrency.lockutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:23:36 np0005593232 nova_compute[250269]: 2026-01-23 10:23:36.999 250273 DEBUG oslo_concurrency.lockutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:23:37 np0005593232 nova_compute[250269]: 2026-01-23 10:23:37.008 250273 DEBUG nova.virt.hardware [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 23 05:23:37 np0005593232 nova_compute[250269]: 2026-01-23 10:23:37.009 250273 INFO nova.compute.claims [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Claim successful on node compute-0.ctlplane.example.com
Jan 23 05:23:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:37.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:23:37
Jan 23 05:23:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:23:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:23:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'images', 'default.rgw.log', 'backups', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'default.rgw.meta']
Jan 23 05:23:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:23:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:23:37 np0005593232 nova_compute[250269]: 2026-01-23 10:23:37.466 250273 DEBUG oslo_concurrency.processutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:23:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:23:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:23:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:23:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:23:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:23:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:23:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2829: 321 pgs: 321 active+clean; 188 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 77 op/s
Jan 23 05:23:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:23:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:37.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:23:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:23:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2032733719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:23:37 np0005593232 nova_compute[250269]: 2026-01-23 10:23:37.979 250273 DEBUG oslo_concurrency.processutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:23:37 np0005593232 nova_compute[250269]: 2026-01-23 10:23:37.989 250273 DEBUG nova.compute.provider_tree [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 05:23:38 np0005593232 nova_compute[250269]: 2026-01-23 10:23:38.010 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:23:38 np0005593232 nova_compute[250269]: 2026-01-23 10:23:38.200 250273 DEBUG nova.scheduler.client.report [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 05:23:38 np0005593232 nova_compute[250269]: 2026-01-23 10:23:38.245 250273 DEBUG oslo_concurrency.lockutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:23:38 np0005593232 nova_compute[250269]: 2026-01-23 10:23:38.246 250273 DEBUG nova.compute.manager [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 05:23:38 np0005593232 nova_compute[250269]: 2026-01-23 10:23:38.312 250273 DEBUG nova.compute.manager [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 05:23:38 np0005593232 nova_compute[250269]: 2026-01-23 10:23:38.312 250273 DEBUG nova.network.neutron [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 05:23:38 np0005593232 nova_compute[250269]: 2026-01-23 10:23:38.334 250273 INFO nova.virt.libvirt.driver [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 05:23:38 np0005593232 nova_compute[250269]: 2026-01-23 10:23:38.372 250273 DEBUG nova.compute.manager [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 05:23:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:23:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:23:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:23:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:23:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:23:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:23:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:23:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:23:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:23:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:23:38 np0005593232 nova_compute[250269]: 2026-01-23 10:23:38.480 250273 DEBUG nova.compute.manager [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 05:23:38 np0005593232 nova_compute[250269]: 2026-01-23 10:23:38.482 250273 DEBUG nova.virt.libvirt.driver [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:23:38 np0005593232 nova_compute[250269]: 2026-01-23 10:23:38.482 250273 INFO nova.virt.libvirt.driver [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Creating image(s)#033[00m
Jan 23 05:23:38 np0005593232 nova_compute[250269]: 2026-01-23 10:23:38.512 250273 DEBUG nova.storage.rbd_utils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image de449075-cfee-456b-ac52-1d6f8284a3f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:23:38 np0005593232 nova_compute[250269]: 2026-01-23 10:23:38.562 250273 DEBUG nova.storage.rbd_utils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image de449075-cfee-456b-ac52-1d6f8284a3f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:23:38 np0005593232 nova_compute[250269]: 2026-01-23 10:23:38.608 250273 DEBUG nova.storage.rbd_utils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image de449075-cfee-456b-ac52-1d6f8284a3f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:23:38 np0005593232 nova_compute[250269]: 2026-01-23 10:23:38.612 250273 DEBUG oslo_concurrency.processutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:23:38 np0005593232 nova_compute[250269]: 2026-01-23 10:23:38.706 250273 DEBUG nova.policy [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '60291ce86b6946629a2e48f6680312cb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '98c94577fcdb4c3d893898ede79ea2d4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:23:38 np0005593232 nova_compute[250269]: 2026-01-23 10:23:38.712 250273 DEBUG oslo_concurrency.processutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:23:38 np0005593232 nova_compute[250269]: 2026-01-23 10:23:38.715 250273 DEBUG oslo_concurrency.lockutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:23:38 np0005593232 nova_compute[250269]: 2026-01-23 10:23:38.716 250273 DEBUG oslo_concurrency.lockutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:23:38 np0005593232 nova_compute[250269]: 2026-01-23 10:23:38.717 250273 DEBUG oslo_concurrency.lockutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:23:38 np0005593232 nova_compute[250269]: 2026-01-23 10:23:38.810 250273 DEBUG nova.storage.rbd_utils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image de449075-cfee-456b-ac52-1d6f8284a3f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:23:38 np0005593232 nova_compute[250269]: 2026-01-23 10:23:38.816 250273 DEBUG oslo_concurrency.processutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 de449075-cfee-456b-ac52-1d6f8284a3f0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:23:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:23:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:39.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:23:39 np0005593232 nova_compute[250269]: 2026-01-23 10:23:39.200 250273 DEBUG oslo_concurrency.processutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 de449075-cfee-456b-ac52-1d6f8284a3f0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:23:39 np0005593232 nova_compute[250269]: 2026-01-23 10:23:39.254 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:39 np0005593232 nova_compute[250269]: 2026-01-23 10:23:39.333 250273 DEBUG nova.storage.rbd_utils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] resizing rbd image de449075-cfee-456b-ac52-1d6f8284a3f0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 05:23:39 np0005593232 nova_compute[250269]: 2026-01-23 10:23:39.457 250273 DEBUG nova.objects.instance [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lazy-loading 'migration_context' on Instance uuid de449075-cfee-456b-ac52-1d6f8284a3f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:23:39 np0005593232 nova_compute[250269]: 2026-01-23 10:23:39.498 250273 DEBUG nova.virt.libvirt.driver [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 05:23:39 np0005593232 nova_compute[250269]: 2026-01-23 10:23:39.498 250273 DEBUG nova.virt.libvirt.driver [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Ensure instance console log exists: /var/lib/nova/instances/de449075-cfee-456b-ac52-1d6f8284a3f0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:23:39 np0005593232 nova_compute[250269]: 2026-01-23 10:23:39.499 250273 DEBUG oslo_concurrency.lockutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:23:39 np0005593232 nova_compute[250269]: 2026-01-23 10:23:39.499 250273 DEBUG oslo_concurrency.lockutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:23:39 np0005593232 nova_compute[250269]: 2026-01-23 10:23:39.500 250273 DEBUG oslo_concurrency.lockutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:23:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2830: 321 pgs: 321 active+clean; 191 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 120 op/s
Jan 23 05:23:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:39.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:41.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:41 np0005593232 nova_compute[250269]: 2026-01-23 10:23:41.317 250273 DEBUG nova.network.neutron [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Successfully created port: 88963f9b-dcc2-45e1-b825-d8646349d037 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:23:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:23:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:41.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:23:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2831: 321 pgs: 321 active+clean; 191 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 120 op/s
Jan 23 05:23:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:23:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:42.638 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:23:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:42.639 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:23:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:42.639 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:23:43 np0005593232 nova_compute[250269]: 2026-01-23 10:23:43.013 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:43 np0005593232 nova_compute[250269]: 2026-01-23 10:23:43.095 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:23:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:43.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:43 np0005593232 nova_compute[250269]: 2026-01-23 10:23:43.290 250273 DEBUG nova.network.neutron [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Successfully updated port: 88963f9b-dcc2-45e1-b825-d8646349d037 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:23:43 np0005593232 nova_compute[250269]: 2026-01-23 10:23:43.323 250273 DEBUG oslo_concurrency.lockutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "refresh_cache-de449075-cfee-456b-ac52-1d6f8284a3f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:23:43 np0005593232 nova_compute[250269]: 2026-01-23 10:23:43.324 250273 DEBUG oslo_concurrency.lockutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquired lock "refresh_cache-de449075-cfee-456b-ac52-1d6f8284a3f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:23:43 np0005593232 nova_compute[250269]: 2026-01-23 10:23:43.324 250273 DEBUG nova.network.neutron [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:23:43 np0005593232 podman[357768]: 2026-01-23 10:23:43.452464082 +0000 UTC m=+0.111106780 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:23:43 np0005593232 nova_compute[250269]: 2026-01-23 10:23:43.488 250273 DEBUG nova.compute.manager [req-56ce1e8c-1f59-40ed-9e22-dd24337ce522 req-b52a06e9-aaf0-402a-96c3-449f207dccb3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Received event network-changed-88963f9b-dcc2-45e1-b825-d8646349d037 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:23:43 np0005593232 nova_compute[250269]: 2026-01-23 10:23:43.488 250273 DEBUG nova.compute.manager [req-56ce1e8c-1f59-40ed-9e22-dd24337ce522 req-b52a06e9-aaf0-402a-96c3-449f207dccb3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Refreshing instance network info cache due to event network-changed-88963f9b-dcc2-45e1-b825-d8646349d037. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:23:43 np0005593232 nova_compute[250269]: 2026-01-23 10:23:43.488 250273 DEBUG oslo_concurrency.lockutils [req-56ce1e8c-1f59-40ed-9e22-dd24337ce522 req-b52a06e9-aaf0-402a-96c3-449f207dccb3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-de449075-cfee-456b-ac52-1d6f8284a3f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:23:43 np0005593232 nova_compute[250269]: 2026-01-23 10:23:43.634 250273 DEBUG nova.network.neutron [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:23:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:43.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2832: 321 pgs: 321 active+clean; 254 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.2 MiB/s wr, 152 op/s
Jan 23 05:23:44 np0005593232 nova_compute[250269]: 2026-01-23 10:23:44.237 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:23:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:45.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:23:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:45.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2833: 321 pgs: 321 active+clean; 280 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.4 MiB/s wr, 131 op/s
Jan 23 05:23:45 np0005593232 nova_compute[250269]: 2026-01-23 10:23:45.955 250273 DEBUG nova.network.neutron [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Updating instance_info_cache with network_info: [{"id": "88963f9b-dcc2-45e1-b825-d8646349d037", "address": "fa:16:3e:af:d6:d2", "network": {"id": "e227a777-0e88-4409-a4a5-266ef225baae", "bridge": "br-int", "label": "tempest-network-smoke--2008856721", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88963f9b-dc", "ovs_interfaceid": "88963f9b-dcc2-45e1-b825-d8646349d037", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.014 250273 DEBUG oslo_concurrency.lockutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Releasing lock "refresh_cache-de449075-cfee-456b-ac52-1d6f8284a3f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.014 250273 DEBUG nova.compute.manager [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Instance network_info: |[{"id": "88963f9b-dcc2-45e1-b825-d8646349d037", "address": "fa:16:3e:af:d6:d2", "network": {"id": "e227a777-0e88-4409-a4a5-266ef225baae", "bridge": "br-int", "label": "tempest-network-smoke--2008856721", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88963f9b-dc", "ovs_interfaceid": "88963f9b-dcc2-45e1-b825-d8646349d037", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.015 250273 DEBUG oslo_concurrency.lockutils [req-56ce1e8c-1f59-40ed-9e22-dd24337ce522 req-b52a06e9-aaf0-402a-96c3-449f207dccb3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-de449075-cfee-456b-ac52-1d6f8284a3f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.015 250273 DEBUG nova.network.neutron [req-56ce1e8c-1f59-40ed-9e22-dd24337ce522 req-b52a06e9-aaf0-402a-96c3-449f207dccb3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Refreshing network info cache for port 88963f9b-dcc2-45e1-b825-d8646349d037 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.017 250273 DEBUG nova.virt.libvirt.driver [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Start _get_guest_xml network_info=[{"id": "88963f9b-dcc2-45e1-b825-d8646349d037", "address": "fa:16:3e:af:d6:d2", "network": {"id": "e227a777-0e88-4409-a4a5-266ef225baae", "bridge": "br-int", "label": "tempest-network-smoke--2008856721", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88963f9b-dc", "ovs_interfaceid": "88963f9b-dcc2-45e1-b825-d8646349d037", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.022 250273 WARNING nova.virt.libvirt.driver [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.030 250273 DEBUG nova.virt.libvirt.host [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.031 250273 DEBUG nova.virt.libvirt.host [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.038 250273 DEBUG nova.virt.libvirt.host [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.039 250273 DEBUG nova.virt.libvirt.host [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.040 250273 DEBUG nova.virt.libvirt.driver [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.040 250273 DEBUG nova.virt.hardware [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.040 250273 DEBUG nova.virt.hardware [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.040 250273 DEBUG nova.virt.hardware [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.041 250273 DEBUG nova.virt.hardware [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.041 250273 DEBUG nova.virt.hardware [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.041 250273 DEBUG nova.virt.hardware [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.041 250273 DEBUG nova.virt.hardware [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.042 250273 DEBUG nova.virt.hardware [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.042 250273 DEBUG nova.virt.hardware [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.042 250273 DEBUG nova.virt.hardware [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.042 250273 DEBUG nova.virt.hardware [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.045 250273 DEBUG oslo_concurrency.processutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.289 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:23:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:23:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/197750598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.527 250273 DEBUG oslo_concurrency.processutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.553 250273 DEBUG nova.storage.rbd_utils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image de449075-cfee-456b-ac52-1d6f8284a3f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:23:46 np0005593232 nova_compute[250269]: 2026-01-23 10:23:46.558 250273 DEBUG oslo_concurrency.processutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:23:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:23:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3729657970' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.037 250273 DEBUG oslo_concurrency.processutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.040 250273 DEBUG nova.virt.libvirt.vif [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:23:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-430049992',display_name='tempest-TestNetworkBasicOps-server-430049992',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-430049992',id=161,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCXjfY4J2leZkUFqywwEh/hJwVUFbbyhVPL+4OrbHmG3fAIzZdVwERtLKYtaFZPmy2cEzabxetaK6cnPdiCPKltm8GLjpZYcNnGtMR/l7P3TWEKb7uzSi0YG6Qrc4W9Bfw==',key_name='tempest-TestNetworkBasicOps-880469914',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98c94577fcdb4c3d893898ede79ea2d4',ramdisk_id='',reservation_id='r-m0gd0yrc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-789276745',owner_user_name='tempest-TestNetworkBasicOps-789276745-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:23:38Z,user_data=None,user_id='60291ce86b6946629a2e48f6680312cb',uuid=de449075-cfee-456b-ac52-1d6f8284a3f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88963f9b-dcc2-45e1-b825-d8646349d037", "address": "fa:16:3e:af:d6:d2", "network": {"id": "e227a777-0e88-4409-a4a5-266ef225baae", "bridge": "br-int", "label": "tempest-network-smoke--2008856721", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88963f9b-dc", "ovs_interfaceid": "88963f9b-dcc2-45e1-b825-d8646349d037", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.040 250273 DEBUG nova.network.os_vif_util [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converting VIF {"id": "88963f9b-dcc2-45e1-b825-d8646349d037", "address": "fa:16:3e:af:d6:d2", "network": {"id": "e227a777-0e88-4409-a4a5-266ef225baae", "bridge": "br-int", "label": "tempest-network-smoke--2008856721", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88963f9b-dc", "ovs_interfaceid": "88963f9b-dcc2-45e1-b825-d8646349d037", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.041 250273 DEBUG nova.network.os_vif_util [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:d6:d2,bridge_name='br-int',has_traffic_filtering=True,id=88963f9b-dcc2-45e1-b825-d8646349d037,network=Network(e227a777-0e88-4409-a4a5-266ef225baae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88963f9b-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.042 250273 DEBUG nova.objects.instance [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lazy-loading 'pci_devices' on Instance uuid de449075-cfee-456b-ac52-1d6f8284a3f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.077 250273 DEBUG nova.virt.libvirt.driver [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:23:47 np0005593232 nova_compute[250269]:  <uuid>de449075-cfee-456b-ac52-1d6f8284a3f0</uuid>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:  <name>instance-000000a1</name>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <nova:name>tempest-TestNetworkBasicOps-server-430049992</nova:name>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:23:46</nova:creationTime>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:23:47 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:        <nova:user uuid="60291ce86b6946629a2e48f6680312cb">tempest-TestNetworkBasicOps-789276745-project-member</nova:user>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:        <nova:project uuid="98c94577fcdb4c3d893898ede79ea2d4">tempest-TestNetworkBasicOps-789276745</nova:project>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:        <nova:port uuid="88963f9b-dcc2-45e1-b825-d8646349d037">
Jan 23 05:23:47 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.25" ipVersion="4"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <entry name="serial">de449075-cfee-456b-ac52-1d6f8284a3f0</entry>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <entry name="uuid">de449075-cfee-456b-ac52-1d6f8284a3f0</entry>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/de449075-cfee-456b-ac52-1d6f8284a3f0_disk">
Jan 23 05:23:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:23:47 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/de449075-cfee-456b-ac52-1d6f8284a3f0_disk.config">
Jan 23 05:23:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:23:47 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:af:d6:d2"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <target dev="tap88963f9b-dc"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/de449075-cfee-456b-ac52-1d6f8284a3f0/console.log" append="off"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:23:47 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:23:47 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:23:47 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:23:47 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.079 250273 DEBUG nova.compute.manager [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Preparing to wait for external event network-vif-plugged-88963f9b-dcc2-45e1-b825-d8646349d037 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.080 250273 DEBUG oslo_concurrency.lockutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.081 250273 DEBUG oslo_concurrency.lockutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.081 250273 DEBUG oslo_concurrency.lockutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.083 250273 DEBUG nova.virt.libvirt.vif [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:23:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-430049992',display_name='tempest-TestNetworkBasicOps-server-430049992',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-430049992',id=161,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCXjfY4J2leZkUFqywwEh/hJwVUFbbyhVPL+4OrbHmG3fAIzZdVwERtLKYtaFZPmy2cEzabxetaK6cnPdiCPKltm8GLjpZYcNnGtMR/l7P3TWEKb7uzSi0YG6Qrc4W9Bfw==',key_name='tempest-TestNetworkBasicOps-880469914',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98c94577fcdb4c3d893898ede79ea2d4',ramdisk_id='',reservation_id='r-m0gd0yrc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-789276745',owner_user_name='tempest-TestNetworkBasicOps-789276745-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:23:38Z,user_data=None,user_id='60291ce86b6946629a2e48f6680312cb',uuid=de449075-cfee-456b-ac52-1d6f8284a3f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "88963f9b-dcc2-45e1-b825-d8646349d037", "address": "fa:16:3e:af:d6:d2", "network": {"id": "e227a777-0e88-4409-a4a5-266ef225baae", "bridge": "br-int", "label": "tempest-network-smoke--2008856721", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88963f9b-dc", "ovs_interfaceid": "88963f9b-dcc2-45e1-b825-d8646349d037", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.084 250273 DEBUG nova.network.os_vif_util [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converting VIF {"id": "88963f9b-dcc2-45e1-b825-d8646349d037", "address": "fa:16:3e:af:d6:d2", "network": {"id": "e227a777-0e88-4409-a4a5-266ef225baae", "bridge": "br-int", "label": "tempest-network-smoke--2008856721", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88963f9b-dc", "ovs_interfaceid": "88963f9b-dcc2-45e1-b825-d8646349d037", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.086 250273 DEBUG nova.network.os_vif_util [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:af:d6:d2,bridge_name='br-int',has_traffic_filtering=True,id=88963f9b-dcc2-45e1-b825-d8646349d037,network=Network(e227a777-0e88-4409-a4a5-266ef225baae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88963f9b-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.087 250273 DEBUG os_vif [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:d6:d2,bridge_name='br-int',has_traffic_filtering=True,id=88963f9b-dcc2-45e1-b825-d8646349d037,network=Network(e227a777-0e88-4409-a4a5-266ef225baae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88963f9b-dc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.088 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.090 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.091 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.096 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.097 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap88963f9b-dc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.098 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap88963f9b-dc, col_values=(('external_ids', {'iface-id': '88963f9b-dcc2-45e1-b825-d8646349d037', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:af:d6:d2', 'vm-uuid': 'de449075-cfee-456b-ac52-1d6f8284a3f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.101 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:47 np0005593232 NetworkManager[49057]: <info>  [1769163827.1031] manager: (tap88963f9b-dc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/296)
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.105 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.112 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.113 250273 INFO os_vif [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:af:d6:d2,bridge_name='br-int',has_traffic_filtering=True,id=88963f9b-dcc2-45e1-b825-d8646349d037,network=Network(e227a777-0e88-4409-a4a5-266ef225baae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88963f9b-dc')#033[00m
Jan 23 05:23:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:47.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:47 np0005593232 podman[357861]: 2026-01-23 10:23:47.208798095 +0000 UTC m=+0.060694676 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.211 250273 DEBUG nova.virt.libvirt.driver [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.212 250273 DEBUG nova.virt.libvirt.driver [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.212 250273 DEBUG nova.virt.libvirt.driver [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] No VIF found with MAC fa:16:3e:af:d6:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.212 250273 INFO nova.virt.libvirt.driver [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Using config drive#033[00m
Jan 23 05:23:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.240 250273 DEBUG nova.storage.rbd_utils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image de449075-cfee-456b-ac52-1d6f8284a3f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:23:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:23:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:23:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:23:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.005147082344593337 of space, bias 1.0, pg target 1.5441247033780012 quantized to 32 (current 32)
Jan 23 05:23:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:23:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 23 05:23:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:23:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:23:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:23:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0028546232319002418 of space, bias 1.0, pg target 0.8535323463381723 quantized to 32 (current 32)
Jan 23 05:23:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:23:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 05:23:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:23:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:23:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:23:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 05:23:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:23:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 05:23:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:23:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:23:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:23:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:23:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:23:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:47.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2834: 321 pgs: 321 active+clean; 280 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 131 op/s
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.829 250273 INFO nova.virt.libvirt.driver [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Creating config drive at /var/lib/nova/instances/de449075-cfee-456b-ac52-1d6f8284a3f0/disk.config#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.836 250273 DEBUG oslo_concurrency.processutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/de449075-cfee-456b-ac52-1d6f8284a3f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj18lbxca execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:23:47 np0005593232 nova_compute[250269]: 2026-01-23 10:23:47.993 250273 DEBUG oslo_concurrency.processutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/de449075-cfee-456b-ac52-1d6f8284a3f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj18lbxca" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:23:48 np0005593232 nova_compute[250269]: 2026-01-23 10:23:48.027 250273 DEBUG nova.storage.rbd_utils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] rbd image de449075-cfee-456b-ac52-1d6f8284a3f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:23:48 np0005593232 nova_compute[250269]: 2026-01-23 10:23:48.032 250273 DEBUG oslo_concurrency.processutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/de449075-cfee-456b-ac52-1d6f8284a3f0/disk.config de449075-cfee-456b-ac52-1d6f8284a3f0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:23:48 np0005593232 nova_compute[250269]: 2026-01-23 10:23:48.232 250273 DEBUG oslo_concurrency.processutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/de449075-cfee-456b-ac52-1d6f8284a3f0/disk.config de449075-cfee-456b-ac52-1d6f8284a3f0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:23:48 np0005593232 nova_compute[250269]: 2026-01-23 10:23:48.233 250273 INFO nova.virt.libvirt.driver [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Deleting local config drive /var/lib/nova/instances/de449075-cfee-456b-ac52-1d6f8284a3f0/disk.config because it was imported into RBD.#033[00m
Jan 23 05:23:48 np0005593232 kernel: tap88963f9b-dc: entered promiscuous mode
Jan 23 05:23:48 np0005593232 NetworkManager[49057]: <info>  [1769163828.2938] manager: (tap88963f9b-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/297)
Jan 23 05:23:48 np0005593232 nova_compute[250269]: 2026-01-23 10:23:48.295 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:48 np0005593232 ovn_controller[151001]: 2026-01-23T10:23:48Z|00629|binding|INFO|Claiming lport 88963f9b-dcc2-45e1-b825-d8646349d037 for this chassis.
Jan 23 05:23:48 np0005593232 ovn_controller[151001]: 2026-01-23T10:23:48Z|00630|binding|INFO|88963f9b-dcc2-45e1-b825-d8646349d037: Claiming fa:16:3e:af:d6:d2 10.100.0.25
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.310 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:d6:d2 10.100.0.25'], port_security=['fa:16:3e:af:d6:d2 10.100.0.25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.25/28', 'neutron:device_id': 'de449075-cfee-456b-ac52-1d6f8284a3f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e227a777-0e88-4409-a4a5-266ef225baae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98c94577fcdb4c3d893898ede79ea2d4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '36b702c3-000b-4837-b755-3dc876c460bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9352757c-3308-4452-a338-cff1ca2f64b6, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=88963f9b-dcc2-45e1-b825-d8646349d037) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.312 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 88963f9b-dcc2-45e1-b825-d8646349d037 in datapath e227a777-0e88-4409-a4a5-266ef225baae bound to our chassis#033[00m
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.314 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e227a777-0e88-4409-a4a5-266ef225baae#033[00m
Jan 23 05:23:48 np0005593232 systemd-udevd[357954]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.326 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4db64c47-1c10-4955-89df-4abf93701b84]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.327 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape227a777-01 in ovnmeta-e227a777-0e88-4409-a4a5-266ef225baae namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:23:48 np0005593232 systemd-machined[215836]: New machine qemu-72-instance-000000a1.
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.333 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape227a777-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.333 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7d8f5f6c-c1e9-4e2e-80b5-647348de8f57]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.334 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[eab4b824-9798-4d4b-8a33-079d29c3e354]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:48 np0005593232 NetworkManager[49057]: <info>  [1769163828.3417] device (tap88963f9b-dc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:23:48 np0005593232 nova_compute[250269]: 2026-01-23 10:23:48.341 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:48 np0005593232 NetworkManager[49057]: <info>  [1769163828.3439] device (tap88963f9b-dc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.348 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[15e70438-4df4-44e1-a842-345600a31a97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:48 np0005593232 nova_compute[250269]: 2026-01-23 10:23:48.349 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:48 np0005593232 ovn_controller[151001]: 2026-01-23T10:23:48Z|00631|binding|INFO|Setting lport 88963f9b-dcc2-45e1-b825-d8646349d037 ovn-installed in OVS
Jan 23 05:23:48 np0005593232 ovn_controller[151001]: 2026-01-23T10:23:48Z|00632|binding|INFO|Setting lport 88963f9b-dcc2-45e1-b825-d8646349d037 up in Southbound
Jan 23 05:23:48 np0005593232 systemd[1]: Started Virtual Machine qemu-72-instance-000000a1.
Jan 23 05:23:48 np0005593232 nova_compute[250269]: 2026-01-23 10:23:48.353 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.362 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4d5ebe10-b9a4-46a4-8da3-532634582516]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.391 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[0615e25f-9922-4051-ad48-150cc4d4d7e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:48 np0005593232 NetworkManager[49057]: <info>  [1769163828.3977] manager: (tape227a777-00): new Veth device (/org/freedesktop/NetworkManager/Devices/298)
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.396 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4e3f5fcb-3c9e-41fb-81d0-1b916deeeb1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.428 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[59e0ec86-a423-47ec-ad4e-8dcdbdaa87c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.433 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[47e07b6b-8046-452c-a8f7-1fb4cbdc5569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:48 np0005593232 NetworkManager[49057]: <info>  [1769163828.4553] device (tape227a777-00): carrier: link connected
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.462 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[31070dff-c71f-45d2-854f-5777e4f3a1b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.482 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9393baec-7877-4c61-9dbd-d5a60d2071ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape227a777-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:39:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 190], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 774159, 'reachable_time': 18538, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357987, 'error': None, 'target': 'ovnmeta-e227a777-0e88-4409-a4a5-266ef225baae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.499 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d52b4bf4-e41b-4273-8414-c1336961f7e5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fead:3980'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 774159, 'tstamp': 774159}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357988, 'error': None, 'target': 'ovnmeta-e227a777-0e88-4409-a4a5-266ef225baae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.516 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c4400c-beb3-4c90-8755-0ab5a65cdd0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape227a777-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ad:39:80'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 190], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 774159, 'reachable_time': 18538, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357989, 'error': None, 'target': 'ovnmeta-e227a777-0e88-4409-a4a5-266ef225baae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.549 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4bdeedf2-d784-499c-80b8-802f8b340917]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.608 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[777df0ad-c6f6-4782-beb4-3a3f6fa09941]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.609 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape227a777-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.610 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.610 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape227a777-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:23:48 np0005593232 nova_compute[250269]: 2026-01-23 10:23:48.612 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:48 np0005593232 NetworkManager[49057]: <info>  [1769163828.6126] manager: (tape227a777-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/299)
Jan 23 05:23:48 np0005593232 kernel: tape227a777-00: entered promiscuous mode
Jan 23 05:23:48 np0005593232 nova_compute[250269]: 2026-01-23 10:23:48.613 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.614 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape227a777-00, col_values=(('external_ids', {'iface-id': 'bec6dd0a-3f1b-4b60-8b52-1e4ba653a56d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:23:48 np0005593232 nova_compute[250269]: 2026-01-23 10:23:48.615 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:48 np0005593232 nova_compute[250269]: 2026-01-23 10:23:48.616 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.617 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e227a777-0e88-4409-a4a5-266ef225baae.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e227a777-0e88-4409-a4a5-266ef225baae.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:23:48 np0005593232 ovn_controller[151001]: 2026-01-23T10:23:48Z|00633|binding|INFO|Releasing lport bec6dd0a-3f1b-4b60-8b52-1e4ba653a56d from this chassis (sb_readonly=0)
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.633 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[97051a45-6fa3-4bf8-a6b5-2a06e1fe37b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.634 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-e227a777-0e88-4409-a4a5-266ef225baae
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/e227a777-0e88-4409-a4a5-266ef225baae.pid.haproxy
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID e227a777-0e88-4409-a4a5-266ef225baae
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:23:48 np0005593232 nova_compute[250269]: 2026-01-23 10:23:48.634 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.635 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e227a777-0e88-4409-a4a5-266ef225baae', 'env', 'PROCESS_TAG=haproxy-e227a777-0e88-4409-a4a5-266ef225baae', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e227a777-0e88-4409-a4a5-266ef225baae.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:23:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:48.875 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:23:48 np0005593232 nova_compute[250269]: 2026-01-23 10:23:48.876 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:49 np0005593232 nova_compute[250269]: 2026-01-23 10:23:49.009 250273 DEBUG nova.network.neutron [req-56ce1e8c-1f59-40ed-9e22-dd24337ce522 req-b52a06e9-aaf0-402a-96c3-449f207dccb3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Updated VIF entry in instance network info cache for port 88963f9b-dcc2-45e1-b825-d8646349d037. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:23:49 np0005593232 nova_compute[250269]: 2026-01-23 10:23:49.010 250273 DEBUG nova.network.neutron [req-56ce1e8c-1f59-40ed-9e22-dd24337ce522 req-b52a06e9-aaf0-402a-96c3-449f207dccb3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Updating instance_info_cache with network_info: [{"id": "88963f9b-dcc2-45e1-b825-d8646349d037", "address": "fa:16:3e:af:d6:d2", "network": {"id": "e227a777-0e88-4409-a4a5-266ef225baae", "bridge": "br-int", "label": "tempest-network-smoke--2008856721", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88963f9b-dc", "ovs_interfaceid": "88963f9b-dcc2-45e1-b825-d8646349d037", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:23:49 np0005593232 nova_compute[250269]: 2026-01-23 10:23:49.046 250273 DEBUG oslo_concurrency.lockutils [req-56ce1e8c-1f59-40ed-9e22-dd24337ce522 req-b52a06e9-aaf0-402a-96c3-449f207dccb3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-de449075-cfee-456b-ac52-1d6f8284a3f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:23:49 np0005593232 podman[358021]: 2026-01-23 10:23:49.100062058 +0000 UTC m=+0.079711767 container create 99a90b991747c5100ad7b42ca02b83adf2503cfa047e53f5ddb560b409a62e3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e227a777-0e88-4409-a4a5-266ef225baae, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 23 05:23:49 np0005593232 systemd[1]: Started libpod-conmon-99a90b991747c5100ad7b42ca02b83adf2503cfa047e53f5ddb560b409a62e3e.scope.
Jan 23 05:23:49 np0005593232 podman[358021]: 2026-01-23 10:23:49.061633865 +0000 UTC m=+0.041283614 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:23:49 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:23:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed6fab68dcfc7e7c1241676bd89827eefe0c1857615b70bcce9e43be53b06579/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:23:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:49.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:49 np0005593232 podman[358021]: 2026-01-23 10:23:49.201254925 +0000 UTC m=+0.180904634 container init 99a90b991747c5100ad7b42ca02b83adf2503cfa047e53f5ddb560b409a62e3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e227a777-0e88-4409-a4a5-266ef225baae, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 23 05:23:49 np0005593232 podman[358021]: 2026-01-23 10:23:49.212126034 +0000 UTC m=+0.191775743 container start 99a90b991747c5100ad7b42ca02b83adf2503cfa047e53f5ddb560b409a62e3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e227a777-0e88-4409-a4a5-266ef225baae, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 23 05:23:49 np0005593232 neutron-haproxy-ovnmeta-e227a777-0e88-4409-a4a5-266ef225baae[358037]: [NOTICE]   (358041) : New worker (358043) forked
Jan 23 05:23:49 np0005593232 neutron-haproxy-ovnmeta-e227a777-0e88-4409-a4a5-266ef225baae[358037]: [NOTICE]   (358041) : Loading success.
Jan 23 05:23:49 np0005593232 nova_compute[250269]: 2026-01-23 10:23:49.271 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:49.287 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:23:49 np0005593232 nova_compute[250269]: 2026-01-23 10:23:49.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:23:49 np0005593232 nova_compute[250269]: 2026-01-23 10:23:49.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:23:49 np0005593232 nova_compute[250269]: 2026-01-23 10:23:49.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:23:49 np0005593232 nova_compute[250269]: 2026-01-23 10:23:49.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 23 05:23:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:49.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2835: 321 pgs: 321 active+clean; 296 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.6 MiB/s wr, 129 op/s
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.002 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163830.001632, de449075-cfee-456b-ac52-1d6f8284a3f0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.003 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] VM Started (Lifecycle Event)#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.027 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.032 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163830.0032034, de449075-cfee-456b-ac52-1d6f8284a3f0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.032 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.061 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.065 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.092 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:23:50 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:23:50.289 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.752 250273 DEBUG nova.compute.manager [req-b1302e5e-ba30-4943-bdfe-595b4452ee43 req-a76dc21b-558b-481c-bb46-bca6719bfc3b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Received event network-vif-plugged-88963f9b-dcc2-45e1-b825-d8646349d037 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.752 250273 DEBUG oslo_concurrency.lockutils [req-b1302e5e-ba30-4943-bdfe-595b4452ee43 req-a76dc21b-558b-481c-bb46-bca6719bfc3b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.752 250273 DEBUG oslo_concurrency.lockutils [req-b1302e5e-ba30-4943-bdfe-595b4452ee43 req-a76dc21b-558b-481c-bb46-bca6719bfc3b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.752 250273 DEBUG oslo_concurrency.lockutils [req-b1302e5e-ba30-4943-bdfe-595b4452ee43 req-a76dc21b-558b-481c-bb46-bca6719bfc3b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.753 250273 DEBUG nova.compute.manager [req-b1302e5e-ba30-4943-bdfe-595b4452ee43 req-a76dc21b-558b-481c-bb46-bca6719bfc3b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Processing event network-vif-plugged-88963f9b-dcc2-45e1-b825-d8646349d037 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.753 250273 DEBUG nova.compute.manager [req-b1302e5e-ba30-4943-bdfe-595b4452ee43 req-a76dc21b-558b-481c-bb46-bca6719bfc3b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Received event network-vif-plugged-88963f9b-dcc2-45e1-b825-d8646349d037 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.753 250273 DEBUG oslo_concurrency.lockutils [req-b1302e5e-ba30-4943-bdfe-595b4452ee43 req-a76dc21b-558b-481c-bb46-bca6719bfc3b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.753 250273 DEBUG oslo_concurrency.lockutils [req-b1302e5e-ba30-4943-bdfe-595b4452ee43 req-a76dc21b-558b-481c-bb46-bca6719bfc3b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.753 250273 DEBUG oslo_concurrency.lockutils [req-b1302e5e-ba30-4943-bdfe-595b4452ee43 req-a76dc21b-558b-481c-bb46-bca6719bfc3b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.753 250273 DEBUG nova.compute.manager [req-b1302e5e-ba30-4943-bdfe-595b4452ee43 req-a76dc21b-558b-481c-bb46-bca6719bfc3b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] No waiting events found dispatching network-vif-plugged-88963f9b-dcc2-45e1-b825-d8646349d037 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.753 250273 WARNING nova.compute.manager [req-b1302e5e-ba30-4943-bdfe-595b4452ee43 req-a76dc21b-558b-481c-bb46-bca6719bfc3b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Received unexpected event network-vif-plugged-88963f9b-dcc2-45e1-b825-d8646349d037 for instance with vm_state building and task_state spawning.#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.754 250273 DEBUG nova.compute.manager [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.758 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163830.75835, de449075-cfee-456b-ac52-1d6f8284a3f0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.758 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.760 250273 DEBUG nova.virt.libvirt.driver [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.763 250273 INFO nova.virt.libvirt.driver [-] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Instance spawned successfully.#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.763 250273 DEBUG nova.virt.libvirt.driver [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.788 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.792 250273 DEBUG nova.virt.libvirt.driver [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.793 250273 DEBUG nova.virt.libvirt.driver [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.793 250273 DEBUG nova.virt.libvirt.driver [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.793 250273 DEBUG nova.virt.libvirt.driver [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.794 250273 DEBUG nova.virt.libvirt.driver [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.794 250273 DEBUG nova.virt.libvirt.driver [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.798 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.864 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.913 250273 INFO nova.compute.manager [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Took 12.43 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:23:50 np0005593232 nova_compute[250269]: 2026-01-23 10:23:50.914 250273 DEBUG nova.compute.manager [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:23:51 np0005593232 nova_compute[250269]: 2026-01-23 10:23:51.163 250273 INFO nova.compute.manager [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Took 14.21 seconds to build instance.#033[00m
Jan 23 05:23:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:51.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:51 np0005593232 nova_compute[250269]: 2026-01-23 10:23:51.359 250273 DEBUG oslo_concurrency.lockutils [None req-a1f10c42-b335-43a9-a293-4bb62afb8d9a 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "de449075-cfee-456b-ac52-1d6f8284a3f0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:23:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:23:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:51.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:23:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2836: 321 pgs: 321 active+clean; 296 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 376 KiB/s rd, 4.6 MiB/s wr, 75 op/s
Jan 23 05:23:52 np0005593232 nova_compute[250269]: 2026-01-23 10:23:52.102 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:23:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:53.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:53.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2837: 321 pgs: 321 active+clean; 340 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.0 MiB/s wr, 212 op/s
Jan 23 05:23:54 np0005593232 nova_compute[250269]: 2026-01-23 10:23:54.274 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:55.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:23:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:55.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:23:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2838: 321 pgs: 321 active+clean; 360 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 5.0 MiB/s wr, 250 op/s
Jan 23 05:23:56 np0005593232 nova_compute[250269]: 2026-01-23 10:23:56.320 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:23:57 np0005593232 nova_compute[250269]: 2026-01-23 10:23:57.105 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:57.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:23:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:57.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2839: 321 pgs: 321 active+clean; 360 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 243 op/s
Jan 23 05:23:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:23:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:23:59.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:23:59 np0005593232 nova_compute[250269]: 2026-01-23 10:23:59.278 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:23:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:23:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:23:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:23:59.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:23:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2840: 321 pgs: 321 active+clean; 360 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 240 op/s
Jan 23 05:24:00 np0005593232 nova_compute[250269]: 2026-01-23 10:24:00.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:24:00 np0005593232 nova_compute[250269]: 2026-01-23 10:24:00.321 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:24:00 np0005593232 nova_compute[250269]: 2026-01-23 10:24:00.322 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:24:00 np0005593232 nova_compute[250269]: 2026-01-23 10:24:00.322 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:24:00 np0005593232 nova_compute[250269]: 2026-01-23 10:24:00.322 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:24:00 np0005593232 nova_compute[250269]: 2026-01-23 10:24:00.323 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:24:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:24:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1358503290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:24:00 np0005593232 nova_compute[250269]: 2026-01-23 10:24:00.784 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:24:00 np0005593232 nova_compute[250269]: 2026-01-23 10:24:00.892 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:24:00 np0005593232 nova_compute[250269]: 2026-01-23 10:24:00.893 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000a1 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:24:01 np0005593232 nova_compute[250269]: 2026-01-23 10:24:01.040 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:24:01 np0005593232 nova_compute[250269]: 2026-01-23 10:24:01.041 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4059MB free_disk=20.834545135498047GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:24:01 np0005593232 nova_compute[250269]: 2026-01-23 10:24:01.042 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:24:01 np0005593232 nova_compute[250269]: 2026-01-23 10:24:01.042 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:24:01 np0005593232 nova_compute[250269]: 2026-01-23 10:24:01.126 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance de449075-cfee-456b-ac52-1d6f8284a3f0 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:24:01 np0005593232 nova_compute[250269]: 2026-01-23 10:24:01.127 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:24:01 np0005593232 nova_compute[250269]: 2026-01-23 10:24:01.127 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:24:01 np0005593232 nova_compute[250269]: 2026-01-23 10:24:01.175 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:24:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:01.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:24:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2750676341' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:24:01 np0005593232 nova_compute[250269]: 2026-01-23 10:24:01.599 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:24:01 np0005593232 nova_compute[250269]: 2026-01-23 10:24:01.607 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:24:01 np0005593232 nova_compute[250269]: 2026-01-23 10:24:01.633 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:24:01 np0005593232 nova_compute[250269]: 2026-01-23 10:24:01.666 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:24:01 np0005593232 nova_compute[250269]: 2026-01-23 10:24:01.667 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:24:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:01.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2841: 321 pgs: 321 active+clean; 360 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.9 MiB/s wr, 210 op/s
Jan 23 05:24:02 np0005593232 nova_compute[250269]: 2026-01-23 10:24:02.109 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:24:02 np0005593232 nova_compute[250269]: 2026-01-23 10:24:02.669 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:24:02 np0005593232 nova_compute[250269]: 2026-01-23 10:24:02.670 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:24:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:03.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:03 np0005593232 nova_compute[250269]: 2026-01-23 10:24:03.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:24:03 np0005593232 nova_compute[250269]: 2026-01-23 10:24:03.294 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:24:03 np0005593232 nova_compute[250269]: 2026-01-23 10:24:03.294 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:24:03 np0005593232 nova_compute[250269]: 2026-01-23 10:24:03.624 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-de449075-cfee-456b-ac52-1d6f8284a3f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:24:03 np0005593232 nova_compute[250269]: 2026-01-23 10:24:03.625 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-de449075-cfee-456b-ac52-1d6f8284a3f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:24:03 np0005593232 nova_compute[250269]: 2026-01-23 10:24:03.626 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:24:03 np0005593232 nova_compute[250269]: 2026-01-23 10:24:03.626 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid de449075-cfee-456b-ac52-1d6f8284a3f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:24:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:03.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2842: 321 pgs: 321 active+clean; 381 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 4.3 MiB/s wr, 260 op/s
Jan 23 05:24:03 np0005593232 ovn_controller[151001]: 2026-01-23T10:24:03Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:af:d6:d2 10.100.0.25
Jan 23 05:24:03 np0005593232 ovn_controller[151001]: 2026-01-23T10:24:03Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:af:d6:d2 10.100.0.25
Jan 23 05:24:04 np0005593232 nova_compute[250269]: 2026-01-23 10:24:04.280 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:05.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:05.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2843: 321 pgs: 321 active+clean; 405 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.3 MiB/s wr, 151 op/s
Jan 23 05:24:07 np0005593232 nova_compute[250269]: 2026-01-23 10:24:07.112 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:07.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:07.357771) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163847357827, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 888, "num_deletes": 251, "total_data_size": 1259917, "memory_usage": 1285176, "flush_reason": "Manual Compaction"}
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163847369096, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 824235, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 62664, "largest_seqno": 63551, "table_properties": {"data_size": 820435, "index_size": 1451, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10302, "raw_average_key_size": 21, "raw_value_size": 812251, "raw_average_value_size": 1664, "num_data_blocks": 64, "num_entries": 488, "num_filter_entries": 488, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769163781, "oldest_key_time": 1769163781, "file_creation_time": 1769163847, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 11409 microseconds, and 3284 cpu microseconds.
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:07.369175) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 824235 bytes OK
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:07.369199) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:07.373830) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:07.373847) EVENT_LOG_v1 {"time_micros": 1769163847373841, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:07.373886) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 1255574, prev total WAL file size 1255574, number of live WAL files 2.
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:07.374672) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323538' seq:72057594037927935, type:22 .. '6D6772737461740032353039' seq:0, type:0; will stop at (end)
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(804KB)], [143(11MB)]
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163847374712, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 13232344, "oldest_snapshot_seqno": -1}
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 8819 keys, 9795430 bytes, temperature: kUnknown
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163847486552, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 9795430, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9741698, "index_size": 30579, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22085, "raw_key_size": 231761, "raw_average_key_size": 26, "raw_value_size": 9590135, "raw_average_value_size": 1087, "num_data_blocks": 1165, "num_entries": 8819, "num_filter_entries": 8819, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769163847, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:07.486932) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 9795430 bytes
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:07.488264) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 118.2 rd, 87.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 11.8 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(27.9) write-amplify(11.9) OK, records in: 9311, records dropped: 492 output_compression: NoCompression
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:07.488281) EVENT_LOG_v1 {"time_micros": 1769163847488272, "job": 88, "event": "compaction_finished", "compaction_time_micros": 111988, "compaction_time_cpu_micros": 25081, "output_level": 6, "num_output_files": 1, "total_output_size": 9795430, "num_input_records": 9311, "num_output_records": 8819, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163847488643, "job": 88, "event": "table_file_deletion", "file_number": 145}
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163847490846, "job": 88, "event": "table_file_deletion", "file_number": 143}
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:07.374568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:07.490989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:07.490994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:07.490996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:07.490997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:24:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:07.490999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:24:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:24:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:24:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:24:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:24:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:24:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:24:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:24:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:07.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:24:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2844: 321 pgs: 321 active+clean; 421 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 177 op/s
Jan 23 05:24:08 np0005593232 nova_compute[250269]: 2026-01-23 10:24:08.154 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Updating instance_info_cache with network_info: [{"id": "88963f9b-dcc2-45e1-b825-d8646349d037", "address": "fa:16:3e:af:d6:d2", "network": {"id": "e227a777-0e88-4409-a4a5-266ef225baae", "bridge": "br-int", "label": "tempest-network-smoke--2008856721", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88963f9b-dc", "ovs_interfaceid": "88963f9b-dcc2-45e1-b825-d8646349d037", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:24:08 np0005593232 nova_compute[250269]: 2026-01-23 10:24:08.184 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-de449075-cfee-456b-ac52-1d6f8284a3f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:24:08 np0005593232 nova_compute[250269]: 2026-01-23 10:24:08.184 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:24:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:24:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:09.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:24:09 np0005593232 nova_compute[250269]: 2026-01-23 10:24:09.282 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:24:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:09.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:24:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2845: 321 pgs: 321 active+clean; 426 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 205 op/s
Jan 23 05:24:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Jan 23 05:24:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Jan 23 05:24:10 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
Jan 23 05:24:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:11.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Jan 23 05:24:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Jan 23 05:24:11 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Jan 23 05:24:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 05:24:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:24:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 05:24:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:24:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:11.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2848: 321 pgs: 321 active+clean; 426 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 4.3 MiB/s wr, 231 op/s
Jan 23 05:24:12 np0005593232 nova_compute[250269]: 2026-01-23 10:24:12.114 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:24:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Jan 23 05:24:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Jan 23 05:24:12 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Jan 23 05:24:12 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:24:12 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:24:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:24:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:24:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:24:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:24:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:24:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:24:12 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 9165f4ee-f3ed-46f8-bc0a-aab614373640 does not exist
Jan 23 05:24:12 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 8a281ce8-dd27-4294-889e-97822f260851 does not exist
Jan 23 05:24:12 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 40f0b871-52a8-41af-9e4f-ae59338c1fc8 does not exist
Jan 23 05:24:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:24:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:24:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:24:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:24:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:24:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:24:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:13.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:13 np0005593232 podman[358477]: 2026-01-23 10:24:13.432256469 +0000 UTC m=+0.051835965 container create a6d4d780f222741fb49bfed4bcb5224a5fec1c76a6b4ff2d77a4e9d8c6290e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hopper, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 05:24:13 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:24:13 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:24:13 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:24:13 np0005593232 systemd[1]: Started libpod-conmon-a6d4d780f222741fb49bfed4bcb5224a5fec1c76a6b4ff2d77a4e9d8c6290e7c.scope.
Jan 23 05:24:13 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:24:13 np0005593232 podman[358477]: 2026-01-23 10:24:13.408075171 +0000 UTC m=+0.027654677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:24:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:13.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:13 np0005593232 podman[358477]: 2026-01-23 10:24:13.699270568 +0000 UTC m=+0.318850074 container init a6d4d780f222741fb49bfed4bcb5224a5fec1c76a6b4ff2d77a4e9d8c6290e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 05:24:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2850: 321 pgs: 321 active+clean; 468 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.5 MiB/s wr, 78 op/s
Jan 23 05:24:13 np0005593232 podman[358477]: 2026-01-23 10:24:13.709609632 +0000 UTC m=+0.329189118 container start a6d4d780f222741fb49bfed4bcb5224a5fec1c76a6b4ff2d77a4e9d8c6290e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hopper, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 05:24:13 np0005593232 lucid_hopper[358493]: 167 167
Jan 23 05:24:13 np0005593232 systemd[1]: libpod-a6d4d780f222741fb49bfed4bcb5224a5fec1c76a6b4ff2d77a4e9d8c6290e7c.scope: Deactivated successfully.
Jan 23 05:24:13 np0005593232 podman[358477]: 2026-01-23 10:24:13.718897486 +0000 UTC m=+0.338477002 container attach a6d4d780f222741fb49bfed4bcb5224a5fec1c76a6b4ff2d77a4e9d8c6290e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 05:24:13 np0005593232 podman[358477]: 2026-01-23 10:24:13.719494083 +0000 UTC m=+0.339073569 container died a6d4d780f222741fb49bfed4bcb5224a5fec1c76a6b4ff2d77a4e9d8c6290e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hopper, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:24:13 np0005593232 systemd[1]: var-lib-containers-storage-overlay-9fcbbc0263443485e1ac7a921b5439f2742902d86e2a4e43c2cbd86f4a3f0827-merged.mount: Deactivated successfully.
Jan 23 05:24:13 np0005593232 podman[358477]: 2026-01-23 10:24:13.768725702 +0000 UTC m=+0.388305198 container remove a6d4d780f222741fb49bfed4bcb5224a5fec1c76a6b4ff2d77a4e9d8c6290e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_hopper, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:24:13 np0005593232 systemd[1]: libpod-conmon-a6d4d780f222741fb49bfed4bcb5224a5fec1c76a6b4ff2d77a4e9d8c6290e7c.scope: Deactivated successfully.
Jan 23 05:24:13 np0005593232 podman[358494]: 2026-01-23 10:24:13.845975018 +0000 UTC m=+0.357302227 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Jan 23 05:24:13 np0005593232 podman[358543]: 2026-01-23 10:24:13.975098959 +0000 UTC m=+0.060978624 container create dae7ee19e8a0ec1a54f38d520dadb8e0a2c48d6fd635d8d11f7cb93c08cab4fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 05:24:14 np0005593232 systemd[1]: Started libpod-conmon-dae7ee19e8a0ec1a54f38d520dadb8e0a2c48d6fd635d8d11f7cb93c08cab4fd.scope.
Jan 23 05:24:14 np0005593232 podman[358543]: 2026-01-23 10:24:13.943226563 +0000 UTC m=+0.029106318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:24:14 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:24:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878a4f01915d6545cb61aa7dd83c4498e469bcda5be2d55eeb199379b63ad2dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:24:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878a4f01915d6545cb61aa7dd83c4498e469bcda5be2d55eeb199379b63ad2dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:24:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878a4f01915d6545cb61aa7dd83c4498e469bcda5be2d55eeb199379b63ad2dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:24:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878a4f01915d6545cb61aa7dd83c4498e469bcda5be2d55eeb199379b63ad2dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:24:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878a4f01915d6545cb61aa7dd83c4498e469bcda5be2d55eeb199379b63ad2dc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:24:14 np0005593232 podman[358543]: 2026-01-23 10:24:14.088190504 +0000 UTC m=+0.174070189 container init dae7ee19e8a0ec1a54f38d520dadb8e0a2c48d6fd635d8d11f7cb93c08cab4fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 23 05:24:14 np0005593232 podman[358543]: 2026-01-23 10:24:14.100337369 +0000 UTC m=+0.186217044 container start dae7ee19e8a0ec1a54f38d520dadb8e0a2c48d6fd635d8d11f7cb93c08cab4fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:24:14 np0005593232 podman[358543]: 2026-01-23 10:24:14.103958132 +0000 UTC m=+0.189837827 container attach dae7ee19e8a0ec1a54f38d520dadb8e0a2c48d6fd635d8d11f7cb93c08cab4fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Jan 23 05:24:14 np0005593232 nova_compute[250269]: 2026-01-23 10:24:14.176 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:24:14 np0005593232 nova_compute[250269]: 2026-01-23 10:24:14.298 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:14 np0005593232 dazzling_hellman[358561]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:24:14 np0005593232 dazzling_hellman[358561]: --> relative data size: 1.0
Jan 23 05:24:14 np0005593232 dazzling_hellman[358561]: --> All data devices are unavailable
Jan 23 05:24:14 np0005593232 systemd[1]: libpod-dae7ee19e8a0ec1a54f38d520dadb8e0a2c48d6fd635d8d11f7cb93c08cab4fd.scope: Deactivated successfully.
Jan 23 05:24:14 np0005593232 podman[358543]: 2026-01-23 10:24:14.978428841 +0000 UTC m=+1.064308506 container died dae7ee19e8a0ec1a54f38d520dadb8e0a2c48d6fd635d8d11f7cb93c08cab4fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 05:24:15 np0005593232 systemd[1]: var-lib-containers-storage-overlay-878a4f01915d6545cb61aa7dd83c4498e469bcda5be2d55eeb199379b63ad2dc-merged.mount: Deactivated successfully.
Jan 23 05:24:15 np0005593232 podman[358543]: 2026-01-23 10:24:15.081266945 +0000 UTC m=+1.167146610 container remove dae7ee19e8a0ec1a54f38d520dadb8e0a2c48d6fd635d8d11f7cb93c08cab4fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:24:15 np0005593232 systemd[1]: libpod-conmon-dae7ee19e8a0ec1a54f38d520dadb8e0a2c48d6fd635d8d11f7cb93c08cab4fd.scope: Deactivated successfully.
Jan 23 05:24:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:15.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:15 np0005593232 podman[358777]: 2026-01-23 10:24:15.639072502 +0000 UTC m=+0.041714257 container create 0dd0f41ea1c143c2c9ffe997133815c8fe7ddd325ebb8853afcf1143289077d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lichterman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Jan 23 05:24:15 np0005593232 systemd[1]: Started libpod-conmon-0dd0f41ea1c143c2c9ffe997133815c8fe7ddd325ebb8853afcf1143289077d2.scope.
Jan 23 05:24:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:24:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:15.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:24:15 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:24:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2851: 321 pgs: 321 active+clean; 472 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 126 op/s
Jan 23 05:24:15 np0005593232 podman[358777]: 2026-01-23 10:24:15.623684274 +0000 UTC m=+0.026326069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:24:15 np0005593232 podman[358777]: 2026-01-23 10:24:15.725000324 +0000 UTC m=+0.127642099 container init 0dd0f41ea1c143c2c9ffe997133815c8fe7ddd325ebb8853afcf1143289077d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 23 05:24:15 np0005593232 podman[358777]: 2026-01-23 10:24:15.731801218 +0000 UTC m=+0.134442973 container start 0dd0f41ea1c143c2c9ffe997133815c8fe7ddd325ebb8853afcf1143289077d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 23 05:24:15 np0005593232 podman[358777]: 2026-01-23 10:24:15.734957417 +0000 UTC m=+0.137599292 container attach 0dd0f41ea1c143c2c9ffe997133815c8fe7ddd325ebb8853afcf1143289077d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lichterman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:24:15 np0005593232 cranky_lichterman[358794]: 167 167
Jan 23 05:24:15 np0005593232 systemd[1]: libpod-0dd0f41ea1c143c2c9ffe997133815c8fe7ddd325ebb8853afcf1143289077d2.scope: Deactivated successfully.
Jan 23 05:24:15 np0005593232 podman[358777]: 2026-01-23 10:24:15.736845691 +0000 UTC m=+0.139487456 container died 0dd0f41ea1c143c2c9ffe997133815c8fe7ddd325ebb8853afcf1143289077d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lichterman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Jan 23 05:24:15 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c1de2238b86b747808fb793b7593a20010d2fde8471ec0c3409b8e4bf9796ebb-merged.mount: Deactivated successfully.
Jan 23 05:24:15 np0005593232 podman[358777]: 2026-01-23 10:24:15.776186249 +0000 UTC m=+0.178828004 container remove 0dd0f41ea1c143c2c9ffe997133815c8fe7ddd325ebb8853afcf1143289077d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 05:24:15 np0005593232 systemd[1]: libpod-conmon-0dd0f41ea1c143c2c9ffe997133815c8fe7ddd325ebb8853afcf1143289077d2.scope: Deactivated successfully.
Jan 23 05:24:15 np0005593232 podman[358818]: 2026-01-23 10:24:15.952251594 +0000 UTC m=+0.042798708 container create 8d7fa71f8234388e8c8ba450b0f85855838c32e5478f113bbae7bba2746e394b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:24:15 np0005593232 systemd[1]: Started libpod-conmon-8d7fa71f8234388e8c8ba450b0f85855838c32e5478f113bbae7bba2746e394b.scope.
Jan 23 05:24:16 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:24:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/709335ec65305827776833bb207b4e4bdcf8cfcb275ebbddfa874a20dced06a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:24:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/709335ec65305827776833bb207b4e4bdcf8cfcb275ebbddfa874a20dced06a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:24:16 np0005593232 podman[358818]: 2026-01-23 10:24:15.934479949 +0000 UTC m=+0.025027082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:24:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/709335ec65305827776833bb207b4e4bdcf8cfcb275ebbddfa874a20dced06a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:24:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/709335ec65305827776833bb207b4e4bdcf8cfcb275ebbddfa874a20dced06a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:24:16 np0005593232 podman[358818]: 2026-01-23 10:24:16.044439445 +0000 UTC m=+0.134986608 container init 8d7fa71f8234388e8c8ba450b0f85855838c32e5478f113bbae7bba2746e394b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 05:24:16 np0005593232 podman[358818]: 2026-01-23 10:24:16.053020549 +0000 UTC m=+0.143567662 container start 8d7fa71f8234388e8c8ba450b0f85855838c32e5478f113bbae7bba2746e394b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wescoff, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:24:16 np0005593232 podman[358818]: 2026-01-23 10:24:16.055980043 +0000 UTC m=+0.146527166 container attach 8d7fa71f8234388e8c8ba450b0f85855838c32e5478f113bbae7bba2746e394b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.439 250273 DEBUG oslo_concurrency.lockutils [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "de449075-cfee-456b-ac52-1d6f8284a3f0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.441 250273 DEBUG oslo_concurrency.lockutils [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "de449075-cfee-456b-ac52-1d6f8284a3f0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.441 250273 DEBUG oslo_concurrency.lockutils [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.442 250273 DEBUG oslo_concurrency.lockutils [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.442 250273 DEBUG oslo_concurrency.lockutils [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.444 250273 INFO nova.compute.manager [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Terminating instance#033[00m
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.446 250273 DEBUG nova.compute.manager [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:24:16 np0005593232 kernel: tap88963f9b-dc (unregistering): left promiscuous mode
Jan 23 05:24:16 np0005593232 NetworkManager[49057]: <info>  [1769163856.5110] device (tap88963f9b-dc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:24:16 np0005593232 ovn_controller[151001]: 2026-01-23T10:24:16Z|00634|binding|INFO|Releasing lport 88963f9b-dcc2-45e1-b825-d8646349d037 from this chassis (sb_readonly=0)
Jan 23 05:24:16 np0005593232 ovn_controller[151001]: 2026-01-23T10:24:16Z|00635|binding|INFO|Setting lport 88963f9b-dcc2-45e1-b825-d8646349d037 down in Southbound
Jan 23 05:24:16 np0005593232 ovn_controller[151001]: 2026-01-23T10:24:16Z|00636|binding|INFO|Removing iface tap88963f9b-dc ovn-installed in OVS
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.524 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:16.532 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:d6:d2 10.100.0.25'], port_security=['fa:16:3e:af:d6:d2 10.100.0.25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.25/28', 'neutron:device_id': 'de449075-cfee-456b-ac52-1d6f8284a3f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e227a777-0e88-4409-a4a5-266ef225baae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98c94577fcdb4c3d893898ede79ea2d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '36b702c3-000b-4837-b755-3dc876c460bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9352757c-3308-4452-a338-cff1ca2f64b6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=88963f9b-dcc2-45e1-b825-d8646349d037) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:24:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:16.536 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 88963f9b-dcc2-45e1-b825-d8646349d037 in datapath e227a777-0e88-4409-a4a5-266ef225baae unbound from our chassis#033[00m
Jan 23 05:24:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:16.539 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e227a777-0e88-4409-a4a5-266ef225baae, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:24:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:16.544 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c942c6d9-d0d4-4efc-adeb-9d66c9392568]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:16.545 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e227a777-0e88-4409-a4a5-266ef225baae namespace which is not needed anymore#033[00m
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.551 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:16 np0005593232 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d000000a1.scope: Deactivated successfully.
Jan 23 05:24:16 np0005593232 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d000000a1.scope: Consumed 15.503s CPU time.
Jan 23 05:24:16 np0005593232 systemd-machined[215836]: Machine qemu-72-instance-000000a1 terminated.
Jan 23 05:24:16 np0005593232 kernel: tap88963f9b-dc: entered promiscuous mode
Jan 23 05:24:16 np0005593232 NetworkManager[49057]: <info>  [1769163856.6668] manager: (tap88963f9b-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/300)
Jan 23 05:24:16 np0005593232 kernel: tap88963f9b-dc (unregistering): left promiscuous mode
Jan 23 05:24:16 np0005593232 ovn_controller[151001]: 2026-01-23T10:24:16Z|00637|binding|INFO|Claiming lport 88963f9b-dcc2-45e1-b825-d8646349d037 for this chassis.
Jan 23 05:24:16 np0005593232 ovn_controller[151001]: 2026-01-23T10:24:16Z|00638|binding|INFO|88963f9b-dcc2-45e1-b825-d8646349d037: Claiming fa:16:3e:af:d6:d2 10.100.0.25
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.675 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:16.690 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:d6:d2 10.100.0.25'], port_security=['fa:16:3e:af:d6:d2 10.100.0.25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.25/28', 'neutron:device_id': 'de449075-cfee-456b-ac52-1d6f8284a3f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e227a777-0e88-4409-a4a5-266ef225baae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98c94577fcdb4c3d893898ede79ea2d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '36b702c3-000b-4837-b755-3dc876c460bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9352757c-3308-4452-a338-cff1ca2f64b6, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=88963f9b-dcc2-45e1-b825-d8646349d037) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.695 250273 INFO nova.virt.libvirt.driver [-] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Instance destroyed successfully.#033[00m
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.695 250273 DEBUG nova.objects.instance [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lazy-loading 'resources' on Instance uuid de449075-cfee-456b-ac52-1d6f8284a3f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:24:16 np0005593232 ovn_controller[151001]: 2026-01-23T10:24:16Z|00639|binding|INFO|Setting lport 88963f9b-dcc2-45e1-b825-d8646349d037 ovn-installed in OVS
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.701 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:16 np0005593232 ovn_controller[151001]: 2026-01-23T10:24:16Z|00640|binding|INFO|Setting lport 88963f9b-dcc2-45e1-b825-d8646349d037 up in Southbound
Jan 23 05:24:16 np0005593232 ovn_controller[151001]: 2026-01-23T10:24:16Z|00641|binding|INFO|Releasing lport 88963f9b-dcc2-45e1-b825-d8646349d037 from this chassis (sb_readonly=1)
Jan 23 05:24:16 np0005593232 ovn_controller[151001]: 2026-01-23T10:24:16Z|00642|if_status|INFO|Dropped 1 log messages in last 347 seconds (most recently, 347 seconds ago) due to excessive rate
Jan 23 05:24:16 np0005593232 ovn_controller[151001]: 2026-01-23T10:24:16Z|00643|if_status|INFO|Not setting lport 88963f9b-dcc2-45e1-b825-d8646349d037 down as sb is readonly
Jan 23 05:24:16 np0005593232 ovn_controller[151001]: 2026-01-23T10:24:16Z|00644|binding|INFO|Removing iface tap88963f9b-dc ovn-installed in OVS
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.704 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:16 np0005593232 ovn_controller[151001]: 2026-01-23T10:24:16Z|00645|binding|INFO|Releasing lport 88963f9b-dcc2-45e1-b825-d8646349d037 from this chassis (sb_readonly=0)
Jan 23 05:24:16 np0005593232 ovn_controller[151001]: 2026-01-23T10:24:16Z|00646|binding|INFO|Setting lport 88963f9b-dcc2-45e1-b825-d8646349d037 down in Southbound
Jan 23 05:24:16 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:16.718 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:af:d6:d2 10.100.0.25'], port_security=['fa:16:3e:af:d6:d2 10.100.0.25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.25/28', 'neutron:device_id': 'de449075-cfee-456b-ac52-1d6f8284a3f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e227a777-0e88-4409-a4a5-266ef225baae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98c94577fcdb4c3d893898ede79ea2d4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '36b702c3-000b-4837-b755-3dc876c460bd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9352757c-3308-4452-a338-cff1ca2f64b6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=88963f9b-dcc2-45e1-b825-d8646349d037) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.718 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.720 250273 DEBUG nova.virt.libvirt.vif [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:23:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-430049992',display_name='tempest-TestNetworkBasicOps-server-430049992',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-430049992',id=161,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCXjfY4J2leZkUFqywwEh/hJwVUFbbyhVPL+4OrbHmG3fAIzZdVwERtLKYtaFZPmy2cEzabxetaK6cnPdiCPKltm8GLjpZYcNnGtMR/l7P3TWEKb7uzSi0YG6Qrc4W9Bfw==',key_name='tempest-TestNetworkBasicOps-880469914',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:23:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='98c94577fcdb4c3d893898ede79ea2d4',ramdisk_id='',reservation_id='r-m0gd0yrc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-789276745',owner_user_name='tempest-TestNetworkBasicOps-789276745-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:23:51Z,user_data=None,user_id='60291ce86b6946629a2e48f6680312cb',uuid=de449075-cfee-456b-ac52-1d6f8284a3f0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "88963f9b-dcc2-45e1-b825-d8646349d037", "address": "fa:16:3e:af:d6:d2", "network": {"id": "e227a777-0e88-4409-a4a5-266ef225baae", "bridge": "br-int", "label": "tempest-network-smoke--2008856721", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88963f9b-dc", "ovs_interfaceid": "88963f9b-dcc2-45e1-b825-d8646349d037", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.720 250273 DEBUG nova.network.os_vif_util [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converting VIF {"id": "88963f9b-dcc2-45e1-b825-d8646349d037", "address": "fa:16:3e:af:d6:d2", "network": {"id": "e227a777-0e88-4409-a4a5-266ef225baae", "bridge": "br-int", "label": "tempest-network-smoke--2008856721", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98c94577fcdb4c3d893898ede79ea2d4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap88963f9b-dc", "ovs_interfaceid": "88963f9b-dcc2-45e1-b825-d8646349d037", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.721 250273 DEBUG nova.network.os_vif_util [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:af:d6:d2,bridge_name='br-int',has_traffic_filtering=True,id=88963f9b-dcc2-45e1-b825-d8646349d037,network=Network(e227a777-0e88-4409-a4a5-266ef225baae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88963f9b-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.721 250273 DEBUG os_vif [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:d6:d2,bridge_name='br-int',has_traffic_filtering=True,id=88963f9b-dcc2-45e1-b825-d8646349d037,network=Network(e227a777-0e88-4409-a4a5-266ef225baae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88963f9b-dc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.722 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.723 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap88963f9b-dc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.725 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.728 250273 INFO os_vif [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:af:d6:d2,bridge_name='br-int',has_traffic_filtering=True,id=88963f9b-dcc2-45e1-b825-d8646349d037,network=Network(e227a777-0e88-4409-a4a5-266ef225baae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap88963f9b-dc')#033[00m
Jan 23 05:24:16 np0005593232 neutron-haproxy-ovnmeta-e227a777-0e88-4409-a4a5-266ef225baae[358037]: [NOTICE]   (358041) : haproxy version is 2.8.14-c23fe91
Jan 23 05:24:16 np0005593232 neutron-haproxy-ovnmeta-e227a777-0e88-4409-a4a5-266ef225baae[358037]: [NOTICE]   (358041) : path to executable is /usr/sbin/haproxy
Jan 23 05:24:16 np0005593232 neutron-haproxy-ovnmeta-e227a777-0e88-4409-a4a5-266ef225baae[358037]: [WARNING]  (358041) : Exiting Master process...
Jan 23 05:24:16 np0005593232 neutron-haproxy-ovnmeta-e227a777-0e88-4409-a4a5-266ef225baae[358037]: [WARNING]  (358041) : Exiting Master process...
Jan 23 05:24:16 np0005593232 neutron-haproxy-ovnmeta-e227a777-0e88-4409-a4a5-266ef225baae[358037]: [ALERT]    (358041) : Current worker (358043) exited with code 143 (Terminated)
Jan 23 05:24:16 np0005593232 neutron-haproxy-ovnmeta-e227a777-0e88-4409-a4a5-266ef225baae[358037]: [WARNING]  (358041) : All workers exited. Exiting... (0)
Jan 23 05:24:16 np0005593232 systemd[1]: libpod-99a90b991747c5100ad7b42ca02b83adf2503cfa047e53f5ddb560b409a62e3e.scope: Deactivated successfully.
Jan 23 05:24:16 np0005593232 podman[358866]: 2026-01-23 10:24:16.796556086 +0000 UTC m=+0.125937121 container died 99a90b991747c5100ad7b42ca02b83adf2503cfa047e53f5ddb560b409a62e3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e227a777-0e88-4409-a4a5-266ef225baae, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]: {
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:    "0": [
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:        {
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:            "devices": [
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:                "/dev/loop3"
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:            ],
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:            "lv_name": "ceph_lv0",
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:            "lv_size": "7511998464",
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:            "name": "ceph_lv0",
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:            "tags": {
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:                "ceph.cluster_name": "ceph",
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:                "ceph.crush_device_class": "",
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:                "ceph.encrypted": "0",
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:                "ceph.osd_id": "0",
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:                "ceph.type": "block",
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:                "ceph.vdo": "0"
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:            },
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:            "type": "block",
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:            "vg_name": "ceph_vg0"
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:        }
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]:    ]
Jan 23 05:24:16 np0005593232 funny_wescoff[358836]: }
Jan 23 05:24:16 np0005593232 systemd[1]: libpod-8d7fa71f8234388e8c8ba450b0f85855838c32e5478f113bbae7bba2746e394b.scope: Deactivated successfully.
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.878 250273 DEBUG nova.compute.manager [req-0b225c0b-30a9-4872-bc74-1ca3e3a64b81 req-ed50e00e-66f4-410e-8f8d-9bfa86ad78d2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Received event network-vif-unplugged-88963f9b-dcc2-45e1-b825-d8646349d037 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.878 250273 DEBUG oslo_concurrency.lockutils [req-0b225c0b-30a9-4872-bc74-1ca3e3a64b81 req-ed50e00e-66f4-410e-8f8d-9bfa86ad78d2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.879 250273 DEBUG oslo_concurrency.lockutils [req-0b225c0b-30a9-4872-bc74-1ca3e3a64b81 req-ed50e00e-66f4-410e-8f8d-9bfa86ad78d2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.879 250273 DEBUG oslo_concurrency.lockutils [req-0b225c0b-30a9-4872-bc74-1ca3e3a64b81 req-ed50e00e-66f4-410e-8f8d-9bfa86ad78d2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.879 250273 DEBUG nova.compute.manager [req-0b225c0b-30a9-4872-bc74-1ca3e3a64b81 req-ed50e00e-66f4-410e-8f8d-9bfa86ad78d2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] No waiting events found dispatching network-vif-unplugged-88963f9b-dcc2-45e1-b825-d8646349d037 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:24:16 np0005593232 nova_compute[250269]: 2026-01-23 10:24:16.879 250273 DEBUG nova.compute.manager [req-0b225c0b-30a9-4872-bc74-1ca3e3a64b81 req-ed50e00e-66f4-410e-8f8d-9bfa86ad78d2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Received event network-vif-unplugged-88963f9b-dcc2-45e1-b825-d8646349d037 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:24:16 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ed6fab68dcfc7e7c1241676bd89827eefe0c1857615b70bcce9e43be53b06579-merged.mount: Deactivated successfully.
Jan 23 05:24:16 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-99a90b991747c5100ad7b42ca02b83adf2503cfa047e53f5ddb560b409a62e3e-userdata-shm.mount: Deactivated successfully.
Jan 23 05:24:16 np0005593232 podman[358818]: 2026-01-23 10:24:16.933833428 +0000 UTC m=+1.024380561 container died 8d7fa71f8234388e8c8ba450b0f85855838c32e5478f113bbae7bba2746e394b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Jan 23 05:24:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:17.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:24:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Jan 23 05:24:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:17.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2852: 321 pgs: 321 active+clean; 433 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 3.0 MiB/s wr, 131 op/s
Jan 23 05:24:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Jan 23 05:24:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay-709335ec65305827776833bb207b4e4bdcf8cfcb275ebbddfa874a20dced06a3-merged.mount: Deactivated successfully.
Jan 23 05:24:17 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Jan 23 05:24:17 np0005593232 podman[358916]: 2026-01-23 10:24:17.807361389 +0000 UTC m=+0.960104123 container remove 8d7fa71f8234388e8c8ba450b0f85855838c32e5478f113bbae7bba2746e394b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:24:17 np0005593232 systemd[1]: libpod-conmon-8d7fa71f8234388e8c8ba450b0f85855838c32e5478f113bbae7bba2746e394b.scope: Deactivated successfully.
Jan 23 05:24:17 np0005593232 podman[358929]: 2026-01-23 10:24:17.844620249 +0000 UTC m=+0.491393890 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 23 05:24:17 np0005593232 podman[358866]: 2026-01-23 10:24:17.897675307 +0000 UTC m=+1.227056342 container cleanup 99a90b991747c5100ad7b42ca02b83adf2503cfa047e53f5ddb560b409a62e3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e227a777-0e88-4409-a4a5-266ef225baae, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 23 05:24:17 np0005593232 systemd[1]: libpod-conmon-99a90b991747c5100ad7b42ca02b83adf2503cfa047e53f5ddb560b409a62e3e.scope: Deactivated successfully.
Jan 23 05:24:17 np0005593232 podman[358976]: 2026-01-23 10:24:17.973814501 +0000 UTC m=+0.047707837 container remove 99a90b991747c5100ad7b42ca02b83adf2503cfa047e53f5ddb560b409a62e3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e227a777-0e88-4409-a4a5-266ef225baae, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:24:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:17.981 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c8200f01-4b5c-49bf-9215-d24565fae2fe]: (4, ('Fri Jan 23 10:24:16 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e227a777-0e88-4409-a4a5-266ef225baae (99a90b991747c5100ad7b42ca02b83adf2503cfa047e53f5ddb560b409a62e3e)\n99a90b991747c5100ad7b42ca02b83adf2503cfa047e53f5ddb560b409a62e3e\nFri Jan 23 10:24:17 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e227a777-0e88-4409-a4a5-266ef225baae (99a90b991747c5100ad7b42ca02b83adf2503cfa047e53f5ddb560b409a62e3e)\n99a90b991747c5100ad7b42ca02b83adf2503cfa047e53f5ddb560b409a62e3e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:17.984 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[12308a45-f372-4dae-b13a-96928e0e3c4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:17.985 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape227a777-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:24:17 np0005593232 nova_compute[250269]: 2026-01-23 10:24:17.987 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:17 np0005593232 kernel: tape227a777-00: left promiscuous mode
Jan 23 05:24:18 np0005593232 nova_compute[250269]: 2026-01-23 10:24:18.003 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:18.007 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf5218c-ad21-4a4d-8df4-29664b5b1c6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:18.020 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1b94e400-be3b-4717-b6e1-1761ff55c55a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:18.022 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[315eeefe-1b2d-4e2b-aded-ebb95c3d7a8b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:18.038 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e1f3cb84-be90-4076-ba4f-46957ec8cb79]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 774152, 'reachable_time': 32811, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359036, 'error': None, 'target': 'ovnmeta-e227a777-0e88-4409-a4a5-266ef225baae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:18.041 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e227a777-0e88-4409-a4a5-266ef225baae deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:24:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:18.041 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[a473a9cd-d4aa-4a5a-b28b-50a8b0e20fb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:18.042 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 88963f9b-dcc2-45e1-b825-d8646349d037 in datapath e227a777-0e88-4409-a4a5-266ef225baae unbound from our chassis#033[00m
Jan 23 05:24:18 np0005593232 systemd[1]: run-netns-ovnmeta\x2de227a777\x2d0e88\x2d4409\x2da4a5\x2d266ef225baae.mount: Deactivated successfully.
Jan 23 05:24:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:18.044 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e227a777-0e88-4409-a4a5-266ef225baae, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:24:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:18.045 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d51b1332-d50f-416d-9296-30ef707e56b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:18.047 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 88963f9b-dcc2-45e1-b825-d8646349d037 in datapath e227a777-0e88-4409-a4a5-266ef225baae unbound from our chassis#033[00m
Jan 23 05:24:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:18.049 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e227a777-0e88-4409-a4a5-266ef225baae, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:24:18 np0005593232 nova_compute[250269]: 2026-01-23 10:24:18.048 250273 INFO nova.virt.libvirt.driver [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Deleting instance files /var/lib/nova/instances/de449075-cfee-456b-ac52-1d6f8284a3f0_del#033[00m
Jan 23 05:24:18 np0005593232 nova_compute[250269]: 2026-01-23 10:24:18.049 250273 INFO nova.virt.libvirt.driver [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Deletion of /var/lib/nova/instances/de449075-cfee-456b-ac52-1d6f8284a3f0_del complete#033[00m
Jan 23 05:24:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:18.049 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7ec522da-6a63-4137-a4a2-4acaa59bc2b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:18 np0005593232 nova_compute[250269]: 2026-01-23 10:24:18.139 250273 INFO nova.compute.manager [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Took 1.69 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:24:18 np0005593232 nova_compute[250269]: 2026-01-23 10:24:18.139 250273 DEBUG oslo.service.loopingcall [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:24:18 np0005593232 nova_compute[250269]: 2026-01-23 10:24:18.139 250273 DEBUG nova.compute.manager [-] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:24:18 np0005593232 nova_compute[250269]: 2026-01-23 10:24:18.140 250273 DEBUG nova.network.neutron [-] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:24:18 np0005593232 podman[359109]: 2026-01-23 10:24:18.458436128 +0000 UTC m=+0.037136227 container create acd85994d9b7b29e59b5dee5f99340a5197045b1c7fa7495387c7138bd161ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:24:18 np0005593232 systemd[1]: Started libpod-conmon-acd85994d9b7b29e59b5dee5f99340a5197045b1c7fa7495387c7138bd161ec2.scope.
Jan 23 05:24:18 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:24:18 np0005593232 podman[359109]: 2026-01-23 10:24:18.538171675 +0000 UTC m=+0.116871794 container init acd85994d9b7b29e59b5dee5f99340a5197045b1c7fa7495387c7138bd161ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:24:18 np0005593232 podman[359109]: 2026-01-23 10:24:18.441508287 +0000 UTC m=+0.020208406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:24:18 np0005593232 podman[359109]: 2026-01-23 10:24:18.547742577 +0000 UTC m=+0.126442676 container start acd85994d9b7b29e59b5dee5f99340a5197045b1c7fa7495387c7138bd161ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 05:24:18 np0005593232 podman[359109]: 2026-01-23 10:24:18.551289867 +0000 UTC m=+0.129989966 container attach acd85994d9b7b29e59b5dee5f99340a5197045b1c7fa7495387c7138bd161ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_easley, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:24:18 np0005593232 elastic_easley[359125]: 167 167
Jan 23 05:24:18 np0005593232 systemd[1]: libpod-acd85994d9b7b29e59b5dee5f99340a5197045b1c7fa7495387c7138bd161ec2.scope: Deactivated successfully.
Jan 23 05:24:18 np0005593232 podman[359109]: 2026-01-23 10:24:18.556402523 +0000 UTC m=+0.135102622 container died acd85994d9b7b29e59b5dee5f99340a5197045b1c7fa7495387c7138bd161ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 05:24:18 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4e1ed37aa16d7ee28182ba01b1144cbc5dc9c49ec182a05e3bbb6b65cfdfa1cb-merged.mount: Deactivated successfully.
Jan 23 05:24:18 np0005593232 podman[359109]: 2026-01-23 10:24:18.603229954 +0000 UTC m=+0.181930053 container remove acd85994d9b7b29e59b5dee5f99340a5197045b1c7fa7495387c7138bd161ec2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_easley, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 05:24:18 np0005593232 systemd[1]: libpod-conmon-acd85994d9b7b29e59b5dee5f99340a5197045b1c7fa7495387c7138bd161ec2.scope: Deactivated successfully.
Jan 23 05:24:18 np0005593232 podman[359149]: 2026-01-23 10:24:18.77614753 +0000 UTC m=+0.046427241 container create a841aeaf7c6d2b61053a780c1e392984bb4ce05adb3582965f5718f56f77ff50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hoover, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 23 05:24:18 np0005593232 systemd[1]: Started libpod-conmon-a841aeaf7c6d2b61053a780c1e392984bb4ce05adb3582965f5718f56f77ff50.scope.
Jan 23 05:24:18 np0005593232 podman[359149]: 2026-01-23 10:24:18.759197608 +0000 UTC m=+0.029477369 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:24:18 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:24:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc28eff6155fc2bb33f77c3d5f214078a981fc047a995f5ac8dbd2b433ed6c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:24:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc28eff6155fc2bb33f77c3d5f214078a981fc047a995f5ac8dbd2b433ed6c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:24:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc28eff6155fc2bb33f77c3d5f214078a981fc047a995f5ac8dbd2b433ed6c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:24:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fc28eff6155fc2bb33f77c3d5f214078a981fc047a995f5ac8dbd2b433ed6c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:24:18 np0005593232 podman[359149]: 2026-01-23 10:24:18.881437803 +0000 UTC m=+0.151717524 container init a841aeaf7c6d2b61053a780c1e392984bb4ce05adb3582965f5718f56f77ff50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 23 05:24:18 np0005593232 podman[359149]: 2026-01-23 10:24:18.888822793 +0000 UTC m=+0.159102524 container start a841aeaf7c6d2b61053a780c1e392984bb4ce05adb3582965f5718f56f77ff50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hoover, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 05:24:18 np0005593232 podman[359149]: 2026-01-23 10:24:18.89190433 +0000 UTC m=+0.162184091 container attach a841aeaf7c6d2b61053a780c1e392984bb4ce05adb3582965f5718f56f77ff50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hoover, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.086 250273 DEBUG nova.compute.manager [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Received event network-vif-plugged-88963f9b-dcc2-45e1-b825-d8646349d037 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.088 250273 DEBUG oslo_concurrency.lockutils [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.088 250273 DEBUG oslo_concurrency.lockutils [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.088 250273 DEBUG oslo_concurrency.lockutils [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.088 250273 DEBUG nova.compute.manager [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] No waiting events found dispatching network-vif-plugged-88963f9b-dcc2-45e1-b825-d8646349d037 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.088 250273 WARNING nova.compute.manager [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Received unexpected event network-vif-plugged-88963f9b-dcc2-45e1-b825-d8646349d037 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.089 250273 DEBUG nova.compute.manager [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Received event network-vif-plugged-88963f9b-dcc2-45e1-b825-d8646349d037 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.089 250273 DEBUG oslo_concurrency.lockutils [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.089 250273 DEBUG oslo_concurrency.lockutils [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.089 250273 DEBUG oslo_concurrency.lockutils [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.090 250273 DEBUG nova.compute.manager [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] No waiting events found dispatching network-vif-plugged-88963f9b-dcc2-45e1-b825-d8646349d037 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.090 250273 WARNING nova.compute.manager [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Received unexpected event network-vif-plugged-88963f9b-dcc2-45e1-b825-d8646349d037 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.090 250273 DEBUG nova.compute.manager [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Received event network-vif-plugged-88963f9b-dcc2-45e1-b825-d8646349d037 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.090 250273 DEBUG oslo_concurrency.lockutils [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.091 250273 DEBUG oslo_concurrency.lockutils [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.091 250273 DEBUG oslo_concurrency.lockutils [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.091 250273 DEBUG nova.compute.manager [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] No waiting events found dispatching network-vif-plugged-88963f9b-dcc2-45e1-b825-d8646349d037 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.091 250273 WARNING nova.compute.manager [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Received unexpected event network-vif-plugged-88963f9b-dcc2-45e1-b825-d8646349d037 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.092 250273 DEBUG nova.compute.manager [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Received event network-vif-unplugged-88963f9b-dcc2-45e1-b825-d8646349d037 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.092 250273 DEBUG oslo_concurrency.lockutils [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.092 250273 DEBUG oslo_concurrency.lockutils [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.092 250273 DEBUG oslo_concurrency.lockutils [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.093 250273 DEBUG nova.compute.manager [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] No waiting events found dispatching network-vif-unplugged-88963f9b-dcc2-45e1-b825-d8646349d037 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.093 250273 DEBUG nova.compute.manager [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Received event network-vif-unplugged-88963f9b-dcc2-45e1-b825-d8646349d037 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.093 250273 DEBUG nova.compute.manager [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Received event network-vif-plugged-88963f9b-dcc2-45e1-b825-d8646349d037 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.094 250273 DEBUG oslo_concurrency.lockutils [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.094 250273 DEBUG oslo_concurrency.lockutils [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.094 250273 DEBUG oslo_concurrency.lockutils [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "de449075-cfee-456b-ac52-1d6f8284a3f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.095 250273 DEBUG nova.compute.manager [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] No waiting events found dispatching network-vif-plugged-88963f9b-dcc2-45e1-b825-d8646349d037 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.095 250273 WARNING nova.compute.manager [req-73666e33-9e93-479e-bb89-8e537c33ebf6 req-62819e4b-f1b6-4145-8f1a-b89ad03a1812 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Received unexpected event network-vif-plugged-88963f9b-dcc2-45e1-b825-d8646349d037 for instance with vm_state active and task_state deleting.
Jan 23 05:24:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:24:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:19.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.256 250273 DEBUG nova.network.neutron [-] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.275 250273 INFO nova.compute.manager [-] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Took 1.14 seconds to deallocate network for instance.
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.300 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.325 250273 DEBUG oslo_concurrency.lockutils [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.327 250273 DEBUG oslo_concurrency.lockutils [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.365 250273 DEBUG nova.compute.manager [req-98eb499f-27cd-4615-8b54-b7acf18c3f34 req-3ce3ead0-5bb6-40a2-9baa-9fbb1f3b39a3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Received event network-vif-deleted-88963f9b-dcc2-45e1-b825-d8646349d037 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.395 250273 DEBUG oslo_concurrency.processutils [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:24:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:19.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2854: 321 pgs: 321 active+clean; 425 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 4.1 MiB/s wr, 179 op/s
Jan 23 05:24:19 np0005593232 compassionate_hoover[359166]: {
Jan 23 05:24:19 np0005593232 compassionate_hoover[359166]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:24:19 np0005593232 compassionate_hoover[359166]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:24:19 np0005593232 compassionate_hoover[359166]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:24:19 np0005593232 compassionate_hoover[359166]:        "osd_id": 0,
Jan 23 05:24:19 np0005593232 compassionate_hoover[359166]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:24:19 np0005593232 compassionate_hoover[359166]:        "type": "bluestore"
Jan 23 05:24:19 np0005593232 compassionate_hoover[359166]:    }
Jan 23 05:24:19 np0005593232 compassionate_hoover[359166]: }
Jan 23 05:24:19 np0005593232 systemd[1]: libpod-a841aeaf7c6d2b61053a780c1e392984bb4ce05adb3582965f5718f56f77ff50.scope: Deactivated successfully.
Jan 23 05:24:19 np0005593232 podman[359149]: 2026-01-23 10:24:19.782695743 +0000 UTC m=+1.052975464 container died a841aeaf7c6d2b61053a780c1e392984bb4ce05adb3582965f5718f56f77ff50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:24:19 np0005593232 systemd[1]: var-lib-containers-storage-overlay-9fc28eff6155fc2bb33f77c3d5f214078a981fc047a995f5ac8dbd2b433ed6c2-merged.mount: Deactivated successfully.
Jan 23 05:24:19 np0005593232 podman[359149]: 2026-01-23 10:24:19.852442666 +0000 UTC m=+1.122722387 container remove a841aeaf7c6d2b61053a780c1e392984bb4ce05adb3582965f5718f56f77ff50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 23 05:24:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:24:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/974882284' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:24:19 np0005593232 systemd[1]: libpod-conmon-a841aeaf7c6d2b61053a780c1e392984bb4ce05adb3582965f5718f56f77ff50.scope: Deactivated successfully.
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.890 250273 DEBUG oslo_concurrency.processutils [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:24:19 np0005593232 nova_compute[250269]: 2026-01-23 10:24:19.896 250273 DEBUG nova.compute.provider_tree [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 05:24:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:24:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:24:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:24:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:24:20 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 32d335b3-923e-47b7-8239-3c4d69329f6d does not exist
Jan 23 05:24:20 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev f5fb124b-d882-4152-ae32-efb0e1cc94b3 does not exist
Jan 23 05:24:20 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1036343e-738d-4ca4-a10e-bf4cff54e1c9 does not exist
Jan 23 05:24:20 np0005593232 nova_compute[250269]: 2026-01-23 10:24:20.194 250273 DEBUG nova.scheduler.client.report [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 05:24:20 np0005593232 nova_compute[250269]: 2026-01-23 10:24:20.230 250273 DEBUG oslo_concurrency.lockutils [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.903s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:24:20 np0005593232 nova_compute[250269]: 2026-01-23 10:24:20.285 250273 INFO nova.scheduler.client.report [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Deleted allocations for instance de449075-cfee-456b-ac52-1d6f8284a3f0
Jan 23 05:24:20 np0005593232 nova_compute[250269]: 2026-01-23 10:24:20.395 250273 DEBUG oslo_concurrency.lockutils [None req-0cb1cd9e-4de0-42d6-8d96-fcb92884b802 60291ce86b6946629a2e48f6680312cb 98c94577fcdb4c3d893898ede79ea2d4 - - default default] Lock "de449075-cfee-456b-ac52-1d6f8284a3f0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.954s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:24:20 np0005593232 nova_compute[250269]: 2026-01-23 10:24:20.446 250273 DEBUG oslo_concurrency.lockutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Acquiring lock "ae979986-7780-443a-afbc-6b4be8f71da1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:24:20 np0005593232 nova_compute[250269]: 2026-01-23 10:24:20.446 250273 DEBUG oslo_concurrency.lockutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:24:20 np0005593232 nova_compute[250269]: 2026-01-23 10:24:20.461 250273 DEBUG nova.compute.manager [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 23 05:24:20 np0005593232 nova_compute[250269]: 2026-01-23 10:24:20.531 250273 DEBUG oslo_concurrency.lockutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:24:20 np0005593232 nova_compute[250269]: 2026-01-23 10:24:20.532 250273 DEBUG oslo_concurrency.lockutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:24:20 np0005593232 nova_compute[250269]: 2026-01-23 10:24:20.538 250273 DEBUG nova.virt.hardware [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 23 05:24:20 np0005593232 nova_compute[250269]: 2026-01-23 10:24:20.538 250273 INFO nova.compute.claims [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Claim successful on node compute-0.ctlplane.example.com
Jan 23 05:24:20 np0005593232 nova_compute[250269]: 2026-01-23 10:24:20.651 250273 DEBUG oslo_concurrency.processutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:24:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:24:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2451512119' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:24:21 np0005593232 nova_compute[250269]: 2026-01-23 10:24:21.099 250273 DEBUG oslo_concurrency.processutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:24:21 np0005593232 nova_compute[250269]: 2026-01-23 10:24:21.105 250273 DEBUG nova.compute.provider_tree [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 05:24:21 np0005593232 nova_compute[250269]: 2026-01-23 10:24:21.124 250273 DEBUG nova.scheduler.client.report [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 05:24:21 np0005593232 nova_compute[250269]: 2026-01-23 10:24:21.166 250273 DEBUG oslo_concurrency.lockutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:24:21 np0005593232 nova_compute[250269]: 2026-01-23 10:24:21.167 250273 DEBUG nova.compute.manager [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 05:24:21 np0005593232 nova_compute[250269]: 2026-01-23 10:24:21.231 250273 DEBUG nova.compute.manager [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 05:24:21 np0005593232 nova_compute[250269]: 2026-01-23 10:24:21.231 250273 DEBUG nova.network.neutron [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 05:24:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:21.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:21 np0005593232 nova_compute[250269]: 2026-01-23 10:24:21.276 250273 INFO nova.virt.libvirt.driver [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 05:24:21 np0005593232 nova_compute[250269]: 2026-01-23 10:24:21.303 250273 DEBUG nova.compute.manager [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 05:24:21 np0005593232 nova_compute[250269]: 2026-01-23 10:24:21.383 250273 INFO nova.virt.block_device [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Booting with blank volume at /dev/vda
Jan 23 05:24:21 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:24:21 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:24:21 np0005593232 nova_compute[250269]: 2026-01-23 10:24:21.660 250273 DEBUG nova.policy [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0d6a628e0dcb441fa41457bf719e65a0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5c27429e1d8f433a8a67ddb76f8798f1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 23 05:24:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:21.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2855: 321 pgs: 321 active+clean; 425 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.1 MiB/s wr, 146 op/s
Jan 23 05:24:21 np0005593232 nova_compute[250269]: 2026-01-23 10:24:21.774 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:24:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:24:22 np0005593232 nova_compute[250269]: 2026-01-23 10:24:22.697 250273 DEBUG nova.network.neutron [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Successfully created port: 8a815bf1-0b58-47f0-a81a-267ec84efc82 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 23 05:24:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:24:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:23.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:24:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:23.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2856: 321 pgs: 321 active+clean; 421 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 226 op/s
Jan 23 05:24:23 np0005593232 nova_compute[250269]: 2026-01-23 10:24:23.737 250273 DEBUG nova.network.neutron [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Successfully updated port: 8a815bf1-0b58-47f0-a81a-267ec84efc82 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 23 05:24:24 np0005593232 nova_compute[250269]: 2026-01-23 10:24:24.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:24:24 np0005593232 nova_compute[250269]: 2026-01-23 10:24:24.303 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:24:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:25.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:25 np0005593232 nova_compute[250269]: 2026-01-23 10:24:25.681 250273 DEBUG nova.compute.manager [req-4f38b10d-faea-484a-bf3e-494158930b78 req-4af00275-6030-41ae-98e6-37e4a1b2529b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received event network-changed-8a815bf1-0b58-47f0-a81a-267ec84efc82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:24:25 np0005593232 nova_compute[250269]: 2026-01-23 10:24:25.681 250273 DEBUG nova.compute.manager [req-4f38b10d-faea-484a-bf3e-494158930b78 req-4af00275-6030-41ae-98e6-37e4a1b2529b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Refreshing instance network info cache due to event network-changed-8a815bf1-0b58-47f0-a81a-267ec84efc82. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 05:24:25 np0005593232 nova_compute[250269]: 2026-01-23 10:24:25.681 250273 DEBUG oslo_concurrency.lockutils [req-4f38b10d-faea-484a-bf3e-494158930b78 req-4af00275-6030-41ae-98e6-37e4a1b2529b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-ae979986-7780-443a-afbc-6b4be8f71da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 05:24:25 np0005593232 nova_compute[250269]: 2026-01-23 10:24:25.682 250273 DEBUG oslo_concurrency.lockutils [req-4f38b10d-faea-484a-bf3e-494158930b78 req-4af00275-6030-41ae-98e6-37e4a1b2529b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-ae979986-7780-443a-afbc-6b4be8f71da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 05:24:25 np0005593232 nova_compute[250269]: 2026-01-23 10:24:25.682 250273 DEBUG nova.network.neutron [req-4f38b10d-faea-484a-bf3e-494158930b78 req-4af00275-6030-41ae-98e6-37e4a1b2529b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Refreshing network info cache for port 8a815bf1-0b58-47f0-a81a-267ec84efc82 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 05:24:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:25.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2857: 321 pgs: 321 active+clean; 425 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 209 op/s
Jan 23 05:24:25 np0005593232 nova_compute[250269]: 2026-01-23 10:24:25.718 250273 DEBUG oslo_concurrency.lockutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Acquiring lock "refresh_cache-ae979986-7780-443a-afbc-6b4be8f71da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 05:24:26 np0005593232 nova_compute[250269]: 2026-01-23 10:24:26.034 250273 DEBUG nova.network.neutron [req-4f38b10d-faea-484a-bf3e-494158930b78 req-4af00275-6030-41ae-98e6-37e4a1b2529b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 05:24:26 np0005593232 nova_compute[250269]: 2026-01-23 10:24:26.115 250273 DEBUG os_brick.utils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 23 05:24:26 np0005593232 nova_compute[250269]: 2026-01-23 10:24:26.119 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:24:26 np0005593232 nova_compute[250269]: 2026-01-23 10:24:26.136 265031 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:24:26 np0005593232 nova_compute[250269]: 2026-01-23 10:24:26.137 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[d2123092-c418-4c79-907d-46798af3fc84]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:24:26 np0005593232 nova_compute[250269]: 2026-01-23 10:24:26.138 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:24:26 np0005593232 nova_compute[250269]: 2026-01-23 10:24:26.151 265031 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:24:26 np0005593232 nova_compute[250269]: 2026-01-23 10:24:26.152 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[7c1e21d0-d180-4604-a270-75ec793f50d9]: (4, ('InitiatorName=iqn.1994-05.com.redhat:c6473626b456', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:24:26 np0005593232 nova_compute[250269]: 2026-01-23 10:24:26.154 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:24:26 np0005593232 nova_compute[250269]: 2026-01-23 10:24:26.164 265031 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:24:26 np0005593232 nova_compute[250269]: 2026-01-23 10:24:26.164 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[78ab5e2a-719b-4ca0-9631-db7bd9ffe595]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:24:26 np0005593232 nova_compute[250269]: 2026-01-23 10:24:26.167 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b3ac02-3465-4979-a603-0e3f8effeb2f]: (4, 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:24:26 np0005593232 nova_compute[250269]: 2026-01-23 10:24:26.167 250273 DEBUG oslo_concurrency.processutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:24:26 np0005593232 nova_compute[250269]: 2026-01-23 10:24:26.213 250273 DEBUG oslo_concurrency.processutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CMD "nvme version" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:24:26 np0005593232 nova_compute[250269]: 2026-01-23 10:24:26.218 250273 DEBUG os_brick.initiator.connectors.lightos [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 23 05:24:26 np0005593232 nova_compute[250269]: 2026-01-23 10:24:26.219 250273 DEBUG os_brick.initiator.connectors.lightos [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 23 05:24:26 np0005593232 nova_compute[250269]: 2026-01-23 10:24:26.219 250273 DEBUG os_brick.initiator.connectors.lightos [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 23 05:24:26 np0005593232 nova_compute[250269]: 2026-01-23 10:24:26.221 250273 DEBUG os_brick.utils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] <== get_connector_properties: return (104ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:c6473626b456', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 23 05:24:26 np0005593232 nova_compute[250269]: 2026-01-23 10:24:26.221 250273 DEBUG nova.virt.block_device [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Updating existing volume attachment record: 791794f9-87d4-4aa0-a262-f6adf41498a4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 23 05:24:26 np0005593232 nova_compute[250269]: 2026-01-23 10:24:26.412 250273 DEBUG nova.network.neutron [req-4f38b10d-faea-484a-bf3e-494158930b78 req-4af00275-6030-41ae-98e6-37e4a1b2529b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:24:26 np0005593232 nova_compute[250269]: 2026-01-23 10:24:26.777 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:26 np0005593232 nova_compute[250269]: 2026-01-23 10:24:26.911 250273 DEBUG oslo_concurrency.lockutils [req-4f38b10d-faea-484a-bf3e-494158930b78 req-4af00275-6030-41ae-98e6-37e4a1b2529b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-ae979986-7780-443a-afbc-6b4be8f71da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:24:26 np0005593232 nova_compute[250269]: 2026-01-23 10:24:26.913 250273 DEBUG oslo_concurrency.lockutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Acquired lock "refresh_cache-ae979986-7780-443a-afbc-6b4be8f71da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:24:26 np0005593232 nova_compute[250269]: 2026-01-23 10:24:26.913 250273 DEBUG nova.network.neutron [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:24:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:27.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:24:27 np0005593232 nova_compute[250269]: 2026-01-23 10:24:27.600 250273 DEBUG nova.network.neutron [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:24:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:24:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:27.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:24:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2858: 321 pgs: 321 active+clean; 425 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 201 op/s
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.135 250273 DEBUG nova.compute.manager [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.138 250273 DEBUG nova.virt.libvirt.driver [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.138 250273 INFO nova.virt.libvirt.driver [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Creating image(s)#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.139 250273 DEBUG nova.virt.libvirt.driver [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.139 250273 DEBUG nova.virt.libvirt.driver [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Ensure instance console log exists: /var/lib/nova/instances/ae979986-7780-443a-afbc-6b4be8f71da1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.140 250273 DEBUG oslo_concurrency.lockutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.140 250273 DEBUG oslo_concurrency.lockutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.140 250273 DEBUG oslo_concurrency.lockutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.141 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.691 250273 DEBUG nova.network.neutron [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Updating instance_info_cache with network_info: [{"id": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "address": "fa:16:3e:da:00:1f", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a815bf1-0b", "ovs_interfaceid": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.714 250273 DEBUG oslo_concurrency.lockutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Releasing lock "refresh_cache-ae979986-7780-443a-afbc-6b4be8f71da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.715 250273 DEBUG nova.compute.manager [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Instance network_info: |[{"id": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "address": "fa:16:3e:da:00:1f", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a815bf1-0b", "ovs_interfaceid": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.717 250273 DEBUG nova.virt.libvirt.driver [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Start _get_guest_xml network_info=[{"id": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "address": "fa:16:3e:da:00:1f", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a815bf1-0b", "ovs_interfaceid": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'delete_on_termination': False, 'attachment_id': '791794f9-87d4-4aa0-a262-f6adf41498a4', 'device_type': 'disk', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-890430a6-e7b9-4647-b7ef-49be14bad5fe', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '890430a6-e7b9-4647-b7ef-49be14bad5fe', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'ae979986-7780-443a-afbc-6b4be8f71da1', 'attached_at': '', 'detached_at': '', 'volume_id': '890430a6-e7b9-4647-b7ef-49be14bad5fe', 'serial': '890430a6-e7b9-4647-b7ef-49be14bad5fe'}, 'guest_format': None, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.721 250273 WARNING nova.virt.libvirt.driver [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.727 250273 DEBUG nova.virt.libvirt.host [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.727 250273 DEBUG nova.virt.libvirt.host [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.730 250273 DEBUG nova.virt.libvirt.host [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.731 250273 DEBUG nova.virt.libvirt.host [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.732 250273 DEBUG nova.virt.libvirt.driver [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.733 250273 DEBUG nova.virt.hardware [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.734 250273 DEBUG nova.virt.hardware [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.734 250273 DEBUG nova.virt.hardware [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.734 250273 DEBUG nova.virt.hardware [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.735 250273 DEBUG nova.virt.hardware [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.735 250273 DEBUG nova.virt.hardware [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.735 250273 DEBUG nova.virt.hardware [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.735 250273 DEBUG nova.virt.hardware [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.736 250273 DEBUG nova.virt.hardware [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.736 250273 DEBUG nova.virt.hardware [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.736 250273 DEBUG nova.virt.hardware [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.875 250273 DEBUG nova.storage.rbd_utils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] rbd image ae979986-7780-443a-afbc-6b4be8f71da1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:24:28 np0005593232 nova_compute[250269]: 2026-01-23 10:24:28.880 250273 DEBUG oslo_concurrency.processutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:24:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:29.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:24:29 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3842194447' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.324 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.348 250273 DEBUG oslo_concurrency.processutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.380 250273 DEBUG nova.virt.libvirt.vif [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:24:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-464630064',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-464630064',id=164,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5c27429e1d8f433a8a67ddb76f8798f1',ramdisk_id='',reservation_id='r-gwy2rl7j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1351337832',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1351337832-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:24:21Z,user_data=None,user_id='0d6a628e0dcb441fa41457bf719e65a0',uuid=ae979986-7780-443a-afbc-6b4be8f71da1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "address": "fa:16:3e:da:00:1f", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a815bf1-0b", "ovs_interfaceid": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.381 250273 DEBUG nova.network.os_vif_util [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Converting VIF {"id": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "address": "fa:16:3e:da:00:1f", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a815bf1-0b", "ovs_interfaceid": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.382 250273 DEBUG nova.network.os_vif_util [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:00:1f,bridge_name='br-int',has_traffic_filtering=True,id=8a815bf1-0b58-47f0-a81a-267ec84efc82,network=Network(fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a815bf1-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.384 250273 DEBUG nova.objects.instance [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lazy-loading 'pci_devices' on Instance uuid ae979986-7780-443a-afbc-6b4be8f71da1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.404 250273 DEBUG nova.virt.libvirt.driver [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:24:29 np0005593232 nova_compute[250269]:  <uuid>ae979986-7780-443a-afbc-6b4be8f71da1</uuid>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:  <name>instance-000000a4</name>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-464630064</nova:name>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:24:28</nova:creationTime>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:24:29 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:        <nova:user uuid="0d6a628e0dcb441fa41457bf719e65a0">tempest-ServerBootFromVolumeStableRescueTest-1351337832-project-member</nova:user>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:        <nova:project uuid="5c27429e1d8f433a8a67ddb76f8798f1">tempest-ServerBootFromVolumeStableRescueTest-1351337832</nova:project>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:        <nova:port uuid="8a815bf1-0b58-47f0-a81a-267ec84efc82">
Jan 23 05:24:29 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <entry name="serial">ae979986-7780-443a-afbc-6b4be8f71da1</entry>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <entry name="uuid">ae979986-7780-443a-afbc-6b4be8f71da1</entry>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/ae979986-7780-443a-afbc-6b4be8f71da1_disk.config">
Jan 23 05:24:29 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:24:29 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="volumes/volume-890430a6-e7b9-4647-b7ef-49be14bad5fe">
Jan 23 05:24:29 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:24:29 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <serial>890430a6-e7b9-4647-b7ef-49be14bad5fe</serial>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:da:00:1f"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <target dev="tap8a815bf1-0b"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/ae979986-7780-443a-afbc-6b4be8f71da1/console.log" append="off"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:24:29 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:24:29 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:24:29 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:24:29 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.406 250273 DEBUG nova.compute.manager [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Preparing to wait for external event network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.406 250273 DEBUG oslo_concurrency.lockutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Acquiring lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.406 250273 DEBUG oslo_concurrency.lockutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.407 250273 DEBUG oslo_concurrency.lockutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.407 250273 DEBUG nova.virt.libvirt.vif [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:24:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-464630064',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-464630064',id=164,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5c27429e1d8f433a8a67ddb76f8798f1',ramdisk_id='',reservation_id='r-gwy2rl7j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1351337832',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1351337832-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:24:21Z,user_data=None,user_id='0d6a628e0dcb441fa41457bf719e65a0',uuid=ae979986-7780-443a-afbc-6b4be8f71da1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "address": "fa:16:3e:da:00:1f", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a815bf1-0b", "ovs_interfaceid": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.408 250273 DEBUG nova.network.os_vif_util [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Converting VIF {"id": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "address": "fa:16:3e:da:00:1f", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a815bf1-0b", "ovs_interfaceid": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.408 250273 DEBUG nova.network.os_vif_util [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:00:1f,bridge_name='br-int',has_traffic_filtering=True,id=8a815bf1-0b58-47f0-a81a-267ec84efc82,network=Network(fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a815bf1-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.409 250273 DEBUG os_vif [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:00:1f,bridge_name='br-int',has_traffic_filtering=True,id=8a815bf1-0b58-47f0-a81a-267ec84efc82,network=Network(fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a815bf1-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.409 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.409 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.410 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.414 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.414 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a815bf1-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.414 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8a815bf1-0b, col_values=(('external_ids', {'iface-id': '8a815bf1-0b58-47f0-a81a-267ec84efc82', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:00:1f', 'vm-uuid': 'ae979986-7780-443a-afbc-6b4be8f71da1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.416 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:29 np0005593232 NetworkManager[49057]: <info>  [1769163869.4172] manager: (tap8a815bf1-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/301)
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.418 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.424 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.425 250273 INFO os_vif [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:00:1f,bridge_name='br-int',has_traffic_filtering=True,id=8a815bf1-0b58-47f0-a81a-267ec84efc82,network=Network(fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a815bf1-0b')#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.693 250273 DEBUG nova.virt.libvirt.driver [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.693 250273 DEBUG nova.virt.libvirt.driver [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.694 250273 DEBUG nova.virt.libvirt.driver [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] No VIF found with MAC fa:16:3e:da:00:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.694 250273 INFO nova.virt.libvirt.driver [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Using config drive#033[00m
Jan 23 05:24:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2859: 321 pgs: 321 active+clean; 425 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 207 op/s
Jan 23 05:24:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:24:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:29.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:24:29 np0005593232 nova_compute[250269]: 2026-01-23 10:24:29.746 250273 DEBUG nova.storage.rbd_utils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] rbd image ae979986-7780-443a-afbc-6b4be8f71da1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:24:30 np0005593232 nova_compute[250269]: 2026-01-23 10:24:30.162 250273 INFO nova.virt.libvirt.driver [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Creating config drive at /var/lib/nova/instances/ae979986-7780-443a-afbc-6b4be8f71da1/disk.config#033[00m
Jan 23 05:24:30 np0005593232 nova_compute[250269]: 2026-01-23 10:24:30.167 250273 DEBUG oslo_concurrency.processutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ae979986-7780-443a-afbc-6b4be8f71da1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprbqn77w0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:24:30 np0005593232 nova_compute[250269]: 2026-01-23 10:24:30.308 250273 DEBUG oslo_concurrency.processutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ae979986-7780-443a-afbc-6b4be8f71da1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprbqn77w0" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:24:30 np0005593232 nova_compute[250269]: 2026-01-23 10:24:30.340 250273 DEBUG nova.storage.rbd_utils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] rbd image ae979986-7780-443a-afbc-6b4be8f71da1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:24:30 np0005593232 nova_compute[250269]: 2026-01-23 10:24:30.344 250273 DEBUG oslo_concurrency.processutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ae979986-7780-443a-afbc-6b4be8f71da1/disk.config ae979986-7780-443a-afbc-6b4be8f71da1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:24:30 np0005593232 nova_compute[250269]: 2026-01-23 10:24:30.715 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:24:30 np0005593232 nova_compute[250269]: 2026-01-23 10:24:30.715 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 23 05:24:30 np0005593232 nova_compute[250269]: 2026-01-23 10:24:30.785 250273 DEBUG oslo_concurrency.processutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ae979986-7780-443a-afbc-6b4be8f71da1/disk.config ae979986-7780-443a-afbc-6b4be8f71da1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:24:30 np0005593232 nova_compute[250269]: 2026-01-23 10:24:30.785 250273 INFO nova.virt.libvirt.driver [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Deleting local config drive /var/lib/nova/instances/ae979986-7780-443a-afbc-6b4be8f71da1/disk.config because it was imported into RBD.#033[00m
Jan 23 05:24:30 np0005593232 kernel: tap8a815bf1-0b: entered promiscuous mode
Jan 23 05:24:30 np0005593232 NetworkManager[49057]: <info>  [1769163870.8500] manager: (tap8a815bf1-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/302)
Jan 23 05:24:30 np0005593232 nova_compute[250269]: 2026-01-23 10:24:30.851 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:30 np0005593232 ovn_controller[151001]: 2026-01-23T10:24:30Z|00647|binding|INFO|Claiming lport 8a815bf1-0b58-47f0-a81a-267ec84efc82 for this chassis.
Jan 23 05:24:30 np0005593232 ovn_controller[151001]: 2026-01-23T10:24:30Z|00648|binding|INFO|8a815bf1-0b58-47f0-a81a-267ec84efc82: Claiming fa:16:3e:da:00:1f 10.100.0.9
Jan 23 05:24:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:30.878 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:00:1f 10.100.0.9'], port_security=['fa:16:3e:da:00:1f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ae979986-7780-443a-afbc-6b4be8f71da1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c27429e1d8f433a8a67ddb76f8798f1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '29028637-714b-453c-9e54-c753b1c8b7f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0dedc65-79e0-4ae8-b1b0-46423e11b58a, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=8a815bf1-0b58-47f0-a81a-267ec84efc82) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:24:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:30.879 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 8a815bf1-0b58-47f0-a81a-267ec84efc82 in datapath fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4 bound to our chassis#033[00m
Jan 23 05:24:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:30.881 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4#033[00m
Jan 23 05:24:30 np0005593232 systemd-udevd[359420]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:24:30 np0005593232 systemd-machined[215836]: New machine qemu-73-instance-000000a4.
Jan 23 05:24:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:30.894 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e7bbe87a-590c-4676-8248-79878eb28cae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:30.895 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfbd64ab8-91 in ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:24:30 np0005593232 nova_compute[250269]: 2026-01-23 10:24:30.897 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 23 05:24:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:30.897 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfbd64ab8-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:24:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:30.898 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8b26d3df-70ca-4779-952c-22797155b479]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:30.898 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7607cabd-ba35-4ef6-95ba-b75dc165dfb1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:30 np0005593232 NetworkManager[49057]: <info>  [1769163870.9013] device (tap8a815bf1-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:24:30 np0005593232 NetworkManager[49057]: <info>  [1769163870.9024] device (tap8a815bf1-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:24:30 np0005593232 systemd[1]: Started Virtual Machine qemu-73-instance-000000a4.
Jan 23 05:24:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:30.915 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[1073ea60-0801-45ad-a553-c57f47b478a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:30 np0005593232 nova_compute[250269]: 2026-01-23 10:24:30.927 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:30 np0005593232 ovn_controller[151001]: 2026-01-23T10:24:30Z|00649|binding|INFO|Setting lport 8a815bf1-0b58-47f0-a81a-267ec84efc82 ovn-installed in OVS
Jan 23 05:24:30 np0005593232 ovn_controller[151001]: 2026-01-23T10:24:30Z|00650|binding|INFO|Setting lport 8a815bf1-0b58-47f0-a81a-267ec84efc82 up in Southbound
Jan 23 05:24:30 np0005593232 nova_compute[250269]: 2026-01-23 10:24:30.932 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:30.943 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a855d3df-f704-47e1-af15-a3e8b9ac5130]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:30.974 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[813cc2e3-67f2-4088-904a-7fed04185ab4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:30 np0005593232 NetworkManager[49057]: <info>  [1769163870.9802] manager: (tapfbd64ab8-90): new Veth device (/org/freedesktop/NetworkManager/Devices/303)
Jan 23 05:24:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:30.979 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2a742d16-89ef-42cd-b4e6-7e5d0d2d2ec6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:31.013 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[c8a5980e-6df5-45ea-9d68-fc2bd330d989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:31.017 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[bc1ef87a-a391-42e9-bcad-4b93793c5f21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:31 np0005593232 NetworkManager[49057]: <info>  [1769163871.0379] device (tapfbd64ab8-90): carrier: link connected
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:31.044 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[80dd51f9-401c-4015-8814-8de91356df9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:31.060 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9a4944a4-b4c2-4161-bf6e-f986d7db787d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbd64ab8-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:7c:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 778418, 'reachable_time': 18301, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359453, 'error': None, 'target': 'ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:31.074 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[43b3097d-820d-4007-8c7c-8d88ade2aede]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:7c5a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 778418, 'tstamp': 778418}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 359454, 'error': None, 'target': 'ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:31.091 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b2d2884f-8aa3-4c4b-a3d9-7bfbcc4a447b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbd64ab8-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:7c:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 193], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 778418, 'reachable_time': 18301, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 359455, 'error': None, 'target': 'ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:31.122 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[339946d2-186c-4d10-a86a-cb25c0d2edcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:31.176 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a77008cd-9883-4d59-b413-a532a601d8ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:31.178 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbd64ab8-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:31.178 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:31.179 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbd64ab8-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:24:31 np0005593232 kernel: tapfbd64ab8-90: entered promiscuous mode
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:31.183 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbd64ab8-90, col_values=(('external_ids', {'iface-id': 'b648300b-e46c-4d3b-b02e-94ff684c03ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:24:31 np0005593232 nova_compute[250269]: 2026-01-23 10:24:31.181 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:31 np0005593232 ovn_controller[151001]: 2026-01-23T10:24:31Z|00651|binding|INFO|Releasing lport b648300b-e46c-4d3b-b02e-94ff684c03ae from this chassis (sb_readonly=0)
Jan 23 05:24:31 np0005593232 NetworkManager[49057]: <info>  [1769163871.1854] manager: (tapfbd64ab8-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/304)
Jan 23 05:24:31 np0005593232 nova_compute[250269]: 2026-01-23 10:24:31.186 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:31 np0005593232 nova_compute[250269]: 2026-01-23 10:24:31.198 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:31.199 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:31.200 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[60adb34c-71cf-4d99-b48d-026d6c002302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:31.201 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4.pid.haproxy
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:24:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:31.202 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'env', 'PROCESS_TAG=haproxy-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:24:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:31.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:31 np0005593232 nova_compute[250269]: 2026-01-23 10:24:31.695 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769163856.693683, de449075-cfee-456b-ac52-1d6f8284a3f0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:24:31 np0005593232 nova_compute[250269]: 2026-01-23 10:24:31.696 250273 INFO nova.compute.manager [-] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:24:31 np0005593232 podman[359521]: 2026-01-23 10:24:31.622261989 +0000 UTC m=+0.038254068 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:24:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2860: 321 pgs: 321 active+clean; 425 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.3 MiB/s rd, 1.2 MiB/s wr, 166 op/s
Jan 23 05:24:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:31.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:31 np0005593232 nova_compute[250269]: 2026-01-23 10:24:31.746 250273 DEBUG nova.compute.manager [None req-3dbda958-ee47-4aee-ac5e-06a4eac46788 - - - - - -] [instance: de449075-cfee-456b-ac52-1d6f8284a3f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:24:31 np0005593232 podman[359521]: 2026-01-23 10:24:31.757387451 +0000 UTC m=+0.173379490 container create 91c364847bad9c894595836bdd5a50fd9542f17e271ae18e959d960670902c82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 23 05:24:31 np0005593232 systemd[1]: Started libpod-conmon-91c364847bad9c894595836bdd5a50fd9542f17e271ae18e959d960670902c82.scope.
Jan 23 05:24:31 np0005593232 nova_compute[250269]: 2026-01-23 10:24:31.815 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163871.8124712, ae979986-7780-443a-afbc-6b4be8f71da1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:24:31 np0005593232 nova_compute[250269]: 2026-01-23 10:24:31.816 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] VM Started (Lifecycle Event)#033[00m
Jan 23 05:24:31 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:24:31 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61f132d562dfbfe84802b5fbf48b09828601978a3d98d07668a2ebd778905e09/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:24:31 np0005593232 podman[359521]: 2026-01-23 10:24:31.842216642 +0000 UTC m=+0.258208691 container init 91c364847bad9c894595836bdd5a50fd9542f17e271ae18e959d960670902c82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:24:31 np0005593232 podman[359521]: 2026-01-23 10:24:31.847535003 +0000 UTC m=+0.263527033 container start 91c364847bad9c894595836bdd5a50fd9542f17e271ae18e959d960670902c82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 05:24:31 np0005593232 neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4[359542]: [NOTICE]   (359546) : New worker (359548) forked
Jan 23 05:24:31 np0005593232 neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4[359542]: [NOTICE]   (359546) : Loading success.
Jan 23 05:24:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:24:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:33.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:33 np0005593232 nova_compute[250269]: 2026-01-23 10:24:33.433 250273 DEBUG nova.compute.manager [req-dabe4164-186a-4607-a381-c7369b40ed73 req-462beb0a-8a37-4596-bfe2-20d784ed43ec 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received event network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:24:33 np0005593232 nova_compute[250269]: 2026-01-23 10:24:33.434 250273 DEBUG oslo_concurrency.lockutils [req-dabe4164-186a-4607-a381-c7369b40ed73 req-462beb0a-8a37-4596-bfe2-20d784ed43ec 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:24:33 np0005593232 nova_compute[250269]: 2026-01-23 10:24:33.434 250273 DEBUG oslo_concurrency.lockutils [req-dabe4164-186a-4607-a381-c7369b40ed73 req-462beb0a-8a37-4596-bfe2-20d784ed43ec 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:24:33 np0005593232 nova_compute[250269]: 2026-01-23 10:24:33.434 250273 DEBUG oslo_concurrency.lockutils [req-dabe4164-186a-4607-a381-c7369b40ed73 req-462beb0a-8a37-4596-bfe2-20d784ed43ec 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:24:33 np0005593232 nova_compute[250269]: 2026-01-23 10:24:33.434 250273 DEBUG nova.compute.manager [req-dabe4164-186a-4607-a381-c7369b40ed73 req-462beb0a-8a37-4596-bfe2-20d784ed43ec 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Processing event network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:24:33 np0005593232 nova_compute[250269]: 2026-01-23 10:24:33.435 250273 DEBUG nova.compute.manager [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:24:33 np0005593232 nova_compute[250269]: 2026-01-23 10:24:33.440 250273 DEBUG nova.virt.libvirt.driver [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:24:33 np0005593232 nova_compute[250269]: 2026-01-23 10:24:33.446 250273 INFO nova.virt.libvirt.driver [-] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Instance spawned successfully.#033[00m
Jan 23 05:24:33 np0005593232 nova_compute[250269]: 2026-01-23 10:24:33.446 250273 DEBUG nova.virt.libvirt.driver [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:24:33 np0005593232 nova_compute[250269]: 2026-01-23 10:24:33.453 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:24:33 np0005593232 nova_compute[250269]: 2026-01-23 10:24:33.457 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:24:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2861: 321 pgs: 321 active+clean; 371 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.1 MiB/s rd, 1.2 MiB/s wr, 218 op/s
Jan 23 05:24:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:33.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:34 np0005593232 nova_compute[250269]: 2026-01-23 10:24:34.325 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:34 np0005593232 nova_compute[250269]: 2026-01-23 10:24:34.417 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:35.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2862: 321 pgs: 321 active+clean; 346 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 56 KiB/s wr, 174 op/s
Jan 23 05:24:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:24:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:35.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:24:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:37.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:24:37
Jan 23 05:24:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:24:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:24:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['vms', 'default.rgw.control', '.rgw.root', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'volumes', 'images']
Jan 23 05:24:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:24:37 np0005593232 nova_compute[250269]: 2026-01-23 10:24:37.403 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:24:37 np0005593232 nova_compute[250269]: 2026-01-23 10:24:37.444 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:24:37 np0005593232 nova_compute[250269]: 2026-01-23 10:24:37.444 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163871.8157995, ae979986-7780-443a-afbc-6b4be8f71da1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:24:37 np0005593232 nova_compute[250269]: 2026-01-23 10:24:37.444 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:24:37 np0005593232 nova_compute[250269]: 2026-01-23 10:24:37.452 250273 DEBUG nova.virt.libvirt.driver [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:24:37 np0005593232 nova_compute[250269]: 2026-01-23 10:24:37.453 250273 DEBUG nova.virt.libvirt.driver [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:24:37 np0005593232 nova_compute[250269]: 2026-01-23 10:24:37.453 250273 DEBUG nova.virt.libvirt.driver [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:24:37 np0005593232 nova_compute[250269]: 2026-01-23 10:24:37.454 250273 DEBUG nova.virt.libvirt.driver [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:24:37 np0005593232 nova_compute[250269]: 2026-01-23 10:24:37.454 250273 DEBUG nova.virt.libvirt.driver [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:24:37 np0005593232 nova_compute[250269]: 2026-01-23 10:24:37.455 250273 DEBUG nova.virt.libvirt.driver [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:24:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:24:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:24:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:24:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:24:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:24:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:24:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:24:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2863: 321 pgs: 321 active+clean; 346 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 39 KiB/s wr, 159 op/s
Jan 23 05:24:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:24:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:37.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:24:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:24:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:24:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:24:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:24:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:24:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:24:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:24:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:24:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:24:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:24:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:39.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:39 np0005593232 nova_compute[250269]: 2026-01-23 10:24:39.327 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:39 np0005593232 nova_compute[250269]: 2026-01-23 10:24:39.420 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2864: 321 pgs: 321 active+clean; 348 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 49 KiB/s wr, 156 op/s
Jan 23 05:24:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:39.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:40 np0005593232 nova_compute[250269]: 2026-01-23 10:24:40.176 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:24:40 np0005593232 nova_compute[250269]: 2026-01-23 10:24:40.186 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163873.4398127, ae979986-7780-443a-afbc-6b4be8f71da1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:24:40 np0005593232 nova_compute[250269]: 2026-01-23 10:24:40.186 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:24:40 np0005593232 nova_compute[250269]: 2026-01-23 10:24:40.188 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Triggering sync for uuid ae979986-7780-443a-afbc-6b4be8f71da1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 23 05:24:40 np0005593232 nova_compute[250269]: 2026-01-23 10:24:40.189 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "ae979986-7780-443a-afbc-6b4be8f71da1" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:24:40 np0005593232 nova_compute[250269]: 2026-01-23 10:24:40.418 250273 DEBUG nova.compute.manager [req-ec4464fb-a433-40bc-be93-db733b7fb0e3 req-eecee468-9cd6-4c56-93c7-6ed67c370742 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received event network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:24:40 np0005593232 nova_compute[250269]: 2026-01-23 10:24:40.419 250273 DEBUG oslo_concurrency.lockutils [req-ec4464fb-a433-40bc-be93-db733b7fb0e3 req-eecee468-9cd6-4c56-93c7-6ed67c370742 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:24:40 np0005593232 nova_compute[250269]: 2026-01-23 10:24:40.420 250273 DEBUG oslo_concurrency.lockutils [req-ec4464fb-a433-40bc-be93-db733b7fb0e3 req-eecee468-9cd6-4c56-93c7-6ed67c370742 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:24:40 np0005593232 nova_compute[250269]: 2026-01-23 10:24:40.420 250273 DEBUG oslo_concurrency.lockutils [req-ec4464fb-a433-40bc-be93-db733b7fb0e3 req-eecee468-9cd6-4c56-93c7-6ed67c370742 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:24:40 np0005593232 nova_compute[250269]: 2026-01-23 10:24:40.420 250273 DEBUG nova.compute.manager [req-ec4464fb-a433-40bc-be93-db733b7fb0e3 req-eecee468-9cd6-4c56-93c7-6ed67c370742 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] No waiting events found dispatching network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:24:40 np0005593232 nova_compute[250269]: 2026-01-23 10:24:40.421 250273 WARNING nova.compute.manager [req-ec4464fb-a433-40bc-be93-db733b7fb0e3 req-eecee468-9cd6-4c56-93c7-6ed67c370742 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received unexpected event network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 for instance with vm_state building and task_state spawning.#033[00m
Jan 23 05:24:40 np0005593232 nova_compute[250269]: 2026-01-23 10:24:40.506 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:24:40 np0005593232 nova_compute[250269]: 2026-01-23 10:24:40.510 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:24:40 np0005593232 nova_compute[250269]: 2026-01-23 10:24:40.527 250273 INFO nova.compute.manager [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Took 12.39 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:24:40 np0005593232 nova_compute[250269]: 2026-01-23 10:24:40.527 250273 DEBUG nova.compute.manager [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:24:41 np0005593232 nova_compute[250269]: 2026-01-23 10:24:41.043 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:24:41 np0005593232 nova_compute[250269]: 2026-01-23 10:24:41.227 250273 INFO nova.compute.manager [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Took 20.72 seconds to build instance.#033[00m
Jan 23 05:24:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:41.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:41 np0005593232 nova_compute[250269]: 2026-01-23 10:24:41.539 250273 DEBUG oslo_concurrency.lockutils [None req-48f2854e-3f6a-4724-8c1f-d7cc22af6d51 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:24:41 np0005593232 nova_compute[250269]: 2026-01-23 10:24:41.540 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "ae979986-7780-443a-afbc-6b4be8f71da1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 1.351s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:24:41 np0005593232 nova_compute[250269]: 2026-01-23 10:24:41.540 250273 INFO nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:24:41 np0005593232 nova_compute[250269]: 2026-01-23 10:24:41.540 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "ae979986-7780-443a-afbc-6b4be8f71da1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:24:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2865: 321 pgs: 321 active+clean; 348 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 38 KiB/s wr, 117 op/s
Jan 23 05:24:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:41.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:24:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:42.639 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:24:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:42.640 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:24:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:42.640 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:24:42 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Jan 23 05:24:42 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:42.681120) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:24:42 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Jan 23 05:24:42 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163882681188, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 643, "num_deletes": 257, "total_data_size": 760159, "memory_usage": 773368, "flush_reason": "Manual Compaction"}
Jan 23 05:24:42 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Jan 23 05:24:42 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163882688482, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 752775, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63552, "largest_seqno": 64194, "table_properties": {"data_size": 749273, "index_size": 1345, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8106, "raw_average_key_size": 19, "raw_value_size": 742106, "raw_average_value_size": 1766, "num_data_blocks": 58, "num_entries": 420, "num_filter_entries": 420, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769163848, "oldest_key_time": 1769163848, "file_creation_time": 1769163882, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:24:42 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 7386 microseconds, and 2818 cpu microseconds.
Jan 23 05:24:42 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:24:42 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:42.688514) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 752775 bytes OK
Jan 23 05:24:42 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:42.688529) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Jan 23 05:24:42 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:42.690082) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Jan 23 05:24:42 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:42.690095) EVENT_LOG_v1 {"time_micros": 1769163882690091, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:24:42 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:42.690118) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:24:42 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 756697, prev total WAL file size 756697, number of live WAL files 2.
Jan 23 05:24:42 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:24:42 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:42.690699) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353131' seq:72057594037927935, type:22 .. '6C6F676D0032373633' seq:0, type:0; will stop at (end)
Jan 23 05:24:42 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:24:42 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(735KB)], [146(9565KB)]
Jan 23 05:24:42 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163882690778, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 10548205, "oldest_snapshot_seqno": -1}
Jan 23 05:24:42 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8707 keys, 10400487 bytes, temperature: kUnknown
Jan 23 05:24:42 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163882935280, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 10400487, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10346296, "index_size": 31294, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21829, "raw_key_size": 230402, "raw_average_key_size": 26, "raw_value_size": 10195497, "raw_average_value_size": 1170, "num_data_blocks": 1192, "num_entries": 8707, "num_filter_entries": 8707, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769163882, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:24:42 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:24:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:42.935629) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 10400487 bytes
Jan 23 05:24:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:43.143297) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 43.1 rd, 42.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.3 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(27.8) write-amplify(13.8) OK, records in: 9239, records dropped: 532 output_compression: NoCompression
Jan 23 05:24:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:43.143340) EVENT_LOG_v1 {"time_micros": 1769163883143324, "job": 90, "event": "compaction_finished", "compaction_time_micros": 244605, "compaction_time_cpu_micros": 26701, "output_level": 6, "num_output_files": 1, "total_output_size": 10400487, "num_input_records": 9239, "num_output_records": 8707, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:24:43 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:24:43 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163883143672, "job": 90, "event": "table_file_deletion", "file_number": 148}
Jan 23 05:24:43 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:24:43 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163883145444, "job": 90, "event": "table_file_deletion", "file_number": 146}
Jan 23 05:24:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:42.690543) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:24:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:43.145554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:24:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:43.145564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:24:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:43.145567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:24:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:43.145570) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:24:43 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:24:43.145583) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:24:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:43.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2866: 321 pgs: 321 active+clean; 348 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 50 KiB/s wr, 148 op/s
Jan 23 05:24:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:43.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:44 np0005593232 nova_compute[250269]: 2026-01-23 10:24:44.330 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:44 np0005593232 nova_compute[250269]: 2026-01-23 10:24:44.422 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:44 np0005593232 podman[359614]: 2026-01-23 10:24:44.459929977 +0000 UTC m=+0.118823728 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 23 05:24:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:45.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:45 np0005593232 nova_compute[250269]: 2026-01-23 10:24:45.416 250273 INFO nova.compute.manager [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Rescuing#033[00m
Jan 23 05:24:45 np0005593232 nova_compute[250269]: 2026-01-23 10:24:45.417 250273 DEBUG oslo_concurrency.lockutils [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Acquiring lock "refresh_cache-ae979986-7780-443a-afbc-6b4be8f71da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:24:45 np0005593232 nova_compute[250269]: 2026-01-23 10:24:45.417 250273 DEBUG oslo_concurrency.lockutils [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Acquired lock "refresh_cache-ae979986-7780-443a-afbc-6b4be8f71da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:24:45 np0005593232 nova_compute[250269]: 2026-01-23 10:24:45.417 250273 DEBUG nova.network.neutron [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:24:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2867: 321 pgs: 321 active+clean; 348 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 36 KiB/s wr, 104 op/s
Jan 23 05:24:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:24:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:45.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:24:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:24:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:24:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:24:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:24:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006522775916620138 of space, bias 1.0, pg target 1.9568327749860415 quantized to 32 (current 32)
Jan 23 05:24:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:24:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.2722757305043737e-06 of space, bias 1.0, pg target 0.00038041044342080775 quantized to 32 (current 32)
Jan 23 05:24:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:24:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:24:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:24:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003843544981853713 of space, bias 1.0, pg target 1.14921994957426 quantized to 32 (current 32)
Jan 23 05:24:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:24:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 23 05:24:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:24:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:24:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:24:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 23 05:24:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:24:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 23 05:24:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:24:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:24:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:24:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 23 05:24:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:24:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:47.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:24:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:24:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2868: 321 pgs: 321 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 835 KiB/s rd, 45 KiB/s wr, 72 op/s
Jan 23 05:24:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:47.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:48 np0005593232 podman[359644]: 2026-01-23 10:24:48.434449091 +0000 UTC m=+0.085905763 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:24:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:49.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:49 np0005593232 nova_compute[250269]: 2026-01-23 10:24:49.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:24:49 np0005593232 nova_compute[250269]: 2026-01-23 10:24:49.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:24:49 np0005593232 nova_compute[250269]: 2026-01-23 10:24:49.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:24:49 np0005593232 nova_compute[250269]: 2026-01-23 10:24:49.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:24:49 np0005593232 nova_compute[250269]: 2026-01-23 10:24:49.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:24:49 np0005593232 nova_compute[250269]: 2026-01-23 10:24:49.333 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:49 np0005593232 nova_compute[250269]: 2026-01-23 10:24:49.424 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2869: 321 pgs: 321 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 527 KiB/s rd, 37 KiB/s wr, 46 op/s
Jan 23 05:24:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:49.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:50 np0005593232 nova_compute[250269]: 2026-01-23 10:24:50.462 250273 DEBUG nova.network.neutron [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Updating instance_info_cache with network_info: [{"id": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "address": "fa:16:3e:da:00:1f", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a815bf1-0b", "ovs_interfaceid": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:24:50 np0005593232 nova_compute[250269]: 2026-01-23 10:24:50.488 250273 DEBUG oslo_concurrency.lockutils [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Releasing lock "refresh_cache-ae979986-7780-443a-afbc-6b4be8f71da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:24:50 np0005593232 nova_compute[250269]: 2026-01-23 10:24:50.929 250273 DEBUG nova.virt.libvirt.driver [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 23 05:24:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:51.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2870: 321 pgs: 321 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 480 KiB/s rd, 25 KiB/s wr, 40 op/s
Jan 23 05:24:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:51.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:24:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:53.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2871: 321 pgs: 321 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 480 KiB/s rd, 29 KiB/s wr, 40 op/s
Jan 23 05:24:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:53.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:54 np0005593232 nova_compute[250269]: 2026-01-23 10:24:54.333 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:54 np0005593232 nova_compute[250269]: 2026-01-23 10:24:54.426 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:55.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2872: 321 pgs: 321 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 83 KiB/s rd, 19 KiB/s wr, 10 op/s
Jan 23 05:24:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:55.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:56 np0005593232 nova_compute[250269]: 2026-01-23 10:24:56.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:24:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:56.335 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:24:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:24:56.336 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:24:56 np0005593232 nova_compute[250269]: 2026-01-23 10:24:56.336 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:24:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:57.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:24:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:24:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2873: 321 pgs: 321 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 27 KiB/s wr, 3 op/s
Jan 23 05:24:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:24:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:57.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:24:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:24:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:24:59.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:24:59 np0005593232 nova_compute[250269]: 2026-01-23 10:24:59.336 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:59 np0005593232 nova_compute[250269]: 2026-01-23 10:24:59.426 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:24:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2874: 321 pgs: 321 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 18 KiB/s wr, 2 op/s
Jan 23 05:24:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:24:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:24:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:24:59.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:25:00 np0005593232 nova_compute[250269]: 2026-01-23 10:25:00.982 250273 DEBUG nova.virt.libvirt.driver [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 23 05:25:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:25:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:01.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:25:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2875: 321 pgs: 321 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s rd, 15 KiB/s wr, 2 op/s
Jan 23 05:25:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:01.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:02 np0005593232 nova_compute[250269]: 2026-01-23 10:25:02.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:25:02 np0005593232 nova_compute[250269]: 2026-01-23 10:25:02.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:25:02 np0005593232 nova_compute[250269]: 2026-01-23 10:25:02.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:25:02 np0005593232 nova_compute[250269]: 2026-01-23 10:25:02.330 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:25:02 np0005593232 nova_compute[250269]: 2026-01-23 10:25:02.331 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:25:02 np0005593232 nova_compute[250269]: 2026-01-23 10:25:02.331 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:25:02 np0005593232 nova_compute[250269]: 2026-01-23 10:25:02.331 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:25:02 np0005593232 nova_compute[250269]: 2026-01-23 10:25:02.332 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:25:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:25:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:25:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/451832764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:25:02 np0005593232 nova_compute[250269]: 2026-01-23 10:25:02.878 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:25:02 np0005593232 nova_compute[250269]: 2026-01-23 10:25:02.991 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:25:02 np0005593232 nova_compute[250269]: 2026-01-23 10:25:02.992 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:25:03 np0005593232 nova_compute[250269]: 2026-01-23 10:25:03.274 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:25:03 np0005593232 nova_compute[250269]: 2026-01-23 10:25:03.276 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4203MB free_disk=20.85132598876953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:25:03 np0005593232 nova_compute[250269]: 2026-01-23 10:25:03.276 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:25:03 np0005593232 nova_compute[250269]: 2026-01-23 10:25:03.277 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:25:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:03.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2876: 321 pgs: 321 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.7 KiB/s rd, 16 KiB/s wr, 3 op/s
Jan 23 05:25:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:03.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:04 np0005593232 nova_compute[250269]: 2026-01-23 10:25:04.338 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:04 np0005593232 nova_compute[250269]: 2026-01-23 10:25:04.429 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:05.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2877: 321 pgs: 321 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.3 KiB/s rd, 14 KiB/s wr, 3 op/s
Jan 23 05:25:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:25:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:05.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:25:05 np0005593232 nova_compute[250269]: 2026-01-23 10:25:05.940 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance ae979986-7780-443a-afbc-6b4be8f71da1 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:25:05 np0005593232 nova_compute[250269]: 2026-01-23 10:25:05.941 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:25:05 np0005593232 nova_compute[250269]: 2026-01-23 10:25:05.942 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:25:05 np0005593232 nova_compute[250269]: 2026-01-23 10:25:05.997 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:25:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:06.339 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:25:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:25:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1566826227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:25:06 np0005593232 nova_compute[250269]: 2026-01-23 10:25:06.588 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:25:06 np0005593232 nova_compute[250269]: 2026-01-23 10:25:06.598 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:25:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:07.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:25:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:25:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:25:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:25:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:25:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:25:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:25:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2878: 321 pgs: 321 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.5 KiB/s rd, 25 KiB/s wr, 5 op/s
Jan 23 05:25:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:07.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:08 np0005593232 nova_compute[250269]: 2026-01-23 10:25:08.661 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:25:09 np0005593232 nova_compute[250269]: 2026-01-23 10:25:09.289 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:25:09 np0005593232 nova_compute[250269]: 2026-01-23 10:25:09.290 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 6.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:25:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:09.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:09 np0005593232 nova_compute[250269]: 2026-01-23 10:25:09.339 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:09 np0005593232 nova_compute[250269]: 2026-01-23 10:25:09.432 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2879: 321 pgs: 321 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 KiB/s rd, 18 KiB/s wr, 4 op/s
Jan 23 05:25:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:09.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:11.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2880: 321 pgs: 321 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 5.5 KiB/s rd, 16 KiB/s wr, 3 op/s
Jan 23 05:25:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:25:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:11.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:25:12 np0005593232 nova_compute[250269]: 2026-01-23 10:25:12.090 250273 DEBUG nova.virt.libvirt.driver [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 23 05:25:12 np0005593232 nova_compute[250269]: 2026-01-23 10:25:12.290 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:25:12 np0005593232 nova_compute[250269]: 2026-01-23 10:25:12.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:25:12 np0005593232 nova_compute[250269]: 2026-01-23 10:25:12.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:25:12 np0005593232 nova_compute[250269]: 2026-01-23 10:25:12.614 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-ae979986-7780-443a-afbc-6b4be8f71da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:25:12 np0005593232 nova_compute[250269]: 2026-01-23 10:25:12.614 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-ae979986-7780-443a-afbc-6b4be8f71da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:25:12 np0005593232 nova_compute[250269]: 2026-01-23 10:25:12.615 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:25:12 np0005593232 nova_compute[250269]: 2026-01-23 10:25:12.615 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ae979986-7780-443a-afbc-6b4be8f71da1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:25:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:25:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:25:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:13.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:25:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2881: 321 pgs: 321 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 16 KiB/s wr, 16 op/s
Jan 23 05:25:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:25:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:13.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:25:14 np0005593232 nova_compute[250269]: 2026-01-23 10:25:14.358 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:14 np0005593232 nova_compute[250269]: 2026-01-23 10:25:14.434 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:15 np0005593232 podman[359796]: 2026-01-23 10:25:15.06450619 +0000 UTC m=+0.126508517 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:25:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:15.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:15 np0005593232 nova_compute[250269]: 2026-01-23 10:25:15.707 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Updating instance_info_cache with network_info: [{"id": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "address": "fa:16:3e:da:00:1f", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a815bf1-0b", "ovs_interfaceid": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:25:15 np0005593232 nova_compute[250269]: 2026-01-23 10:25:15.732 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-ae979986-7780-443a-afbc-6b4be8f71da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:25:15 np0005593232 nova_compute[250269]: 2026-01-23 10:25:15.733 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:25:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2882: 321 pgs: 321 active+clean; 350 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 15 KiB/s wr, 16 op/s
Jan 23 05:25:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:15.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:17.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:25:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2883: 321 pgs: 321 active+clean; 314 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 14 KiB/s wr, 33 op/s
Jan 23 05:25:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:25:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:17.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:25:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:25:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:19.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:25:19 np0005593232 nova_compute[250269]: 2026-01-23 10:25:19.359 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:19 np0005593232 podman[359851]: 2026-01-23 10:25:19.46448752 +0000 UTC m=+0.111824280 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 05:25:19 np0005593232 nova_compute[250269]: 2026-01-23 10:25:19.464 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2884: 321 pgs: 321 active+clean; 269 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 1.5 KiB/s wr, 41 op/s
Jan 23 05:25:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:19.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 05:25:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:21.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:25:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 05:25:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:25:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 23 05:25:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 05:25:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2885: 321 pgs: 321 active+clean; 269 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Jan 23 05:25:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 23 05:25:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 05:25:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:21.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:25:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:25:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:25:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:25:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:25:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Jan 23 05:25:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:25:22 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 2ef001ea-48ba-4992-baf6-096581efee75 does not exist
Jan 23 05:25:22 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 62cfdeb7-f4bb-4a0a-a1df-959ed6dec5e9 does not exist
Jan 23 05:25:22 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d7da6440-501d-4cd3-bd36-abadc6184024 does not exist
Jan 23 05:25:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:25:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:25:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:25:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:25:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:25:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:25:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Jan 23 05:25:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:25:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:25:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 05:25:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 05:25:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:25:22 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Jan 23 05:25:23 np0005593232 nova_compute[250269]: 2026-01-23 10:25:23.162 250273 DEBUG nova.virt.libvirt.driver [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Instance in state 1 after 32 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 23 05:25:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:25:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:23.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:25:23 np0005593232 podman[360142]: 2026-01-23 10:25:23.508632353 +0000 UTC m=+0.055210140 container create 2c6a2d810caf8181fc6ae8854c729e85ff63f75b4621ec4a925a2b3d19457363 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_faraday, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:25:23 np0005593232 systemd[1]: Started libpod-conmon-2c6a2d810caf8181fc6ae8854c729e85ff63f75b4621ec4a925a2b3d19457363.scope.
Jan 23 05:25:23 np0005593232 podman[360142]: 2026-01-23 10:25:23.485450554 +0000 UTC m=+0.032028321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:25:23 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:25:23 np0005593232 podman[360142]: 2026-01-23 10:25:23.615599254 +0000 UTC m=+0.162177021 container init 2c6a2d810caf8181fc6ae8854c729e85ff63f75b4621ec4a925a2b3d19457363 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:25:23 np0005593232 podman[360142]: 2026-01-23 10:25:23.626195425 +0000 UTC m=+0.172773192 container start 2c6a2d810caf8181fc6ae8854c729e85ff63f75b4621ec4a925a2b3d19457363 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_faraday, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:25:23 np0005593232 podman[360142]: 2026-01-23 10:25:23.630367734 +0000 UTC m=+0.176945511 container attach 2c6a2d810caf8181fc6ae8854c729e85ff63f75b4621ec4a925a2b3d19457363 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_faraday, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:25:23 np0005593232 loving_faraday[360158]: 167 167
Jan 23 05:25:23 np0005593232 systemd[1]: libpod-2c6a2d810caf8181fc6ae8854c729e85ff63f75b4621ec4a925a2b3d19457363.scope: Deactivated successfully.
Jan 23 05:25:23 np0005593232 podman[360142]: 2026-01-23 10:25:23.637305651 +0000 UTC m=+0.183883438 container died 2c6a2d810caf8181fc6ae8854c729e85ff63f75b4621ec4a925a2b3d19457363 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 05:25:23 np0005593232 systemd[1]: var-lib-containers-storage-overlay-326b64df6ac9a05675f04df805e8f38d2dd2f13bb998353eb0f9155db46ee83d-merged.mount: Deactivated successfully.
Jan 23 05:25:23 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:25:23 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:25:23 np0005593232 podman[360142]: 2026-01-23 10:25:23.699124668 +0000 UTC m=+0.245702465 container remove 2c6a2d810caf8181fc6ae8854c729e85ff63f75b4621ec4a925a2b3d19457363 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_faraday, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:25:23 np0005593232 systemd[1]: libpod-conmon-2c6a2d810caf8181fc6ae8854c729e85ff63f75b4621ec4a925a2b3d19457363.scope: Deactivated successfully.
Jan 23 05:25:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2887: 321 pgs: 321 active+clean; 269 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 641 KiB/s rd, 2.1 KiB/s wr, 70 op/s
Jan 23 05:25:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:23.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:23 np0005593232 podman[360181]: 2026-01-23 10:25:23.925436812 +0000 UTC m=+0.047110330 container create 01fffc73e8cbca123eeae5f0be50a7b9ba356185fccdf2abbab910f58a85522a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_herschel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 23 05:25:23 np0005593232 systemd[1]: Started libpod-conmon-01fffc73e8cbca123eeae5f0be50a7b9ba356185fccdf2abbab910f58a85522a.scope.
Jan 23 05:25:24 np0005593232 podman[360181]: 2026-01-23 10:25:23.906650088 +0000 UTC m=+0.028323616 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:25:24 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:25:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3a3be59c710e6eb0a939227af2cb2c3a8403f6534cb546547fa3049e4efbda8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:25:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3a3be59c710e6eb0a939227af2cb2c3a8403f6534cb546547fa3049e4efbda8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:25:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3a3be59c710e6eb0a939227af2cb2c3a8403f6534cb546547fa3049e4efbda8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:25:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3a3be59c710e6eb0a939227af2cb2c3a8403f6534cb546547fa3049e4efbda8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:25:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3a3be59c710e6eb0a939227af2cb2c3a8403f6534cb546547fa3049e4efbda8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:25:24 np0005593232 podman[360181]: 2026-01-23 10:25:24.096198796 +0000 UTC m=+0.217872324 container init 01fffc73e8cbca123eeae5f0be50a7b9ba356185fccdf2abbab910f58a85522a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_herschel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 23 05:25:24 np0005593232 podman[360181]: 2026-01-23 10:25:24.10266525 +0000 UTC m=+0.224338758 container start 01fffc73e8cbca123eeae5f0be50a7b9ba356185fccdf2abbab910f58a85522a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_herschel, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:25:24 np0005593232 podman[360181]: 2026-01-23 10:25:24.137366446 +0000 UTC m=+0.259039954 container attach 01fffc73e8cbca123eeae5f0be50a7b9ba356185fccdf2abbab910f58a85522a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 05:25:24 np0005593232 nova_compute[250269]: 2026-01-23 10:25:24.362 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:24 np0005593232 nova_compute[250269]: 2026-01-23 10:25:24.467 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:24 np0005593232 beautiful_herschel[360200]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:25:24 np0005593232 beautiful_herschel[360200]: --> relative data size: 1.0
Jan 23 05:25:24 np0005593232 beautiful_herschel[360200]: --> All data devices are unavailable
Jan 23 05:25:24 np0005593232 systemd[1]: libpod-01fffc73e8cbca123eeae5f0be50a7b9ba356185fccdf2abbab910f58a85522a.scope: Deactivated successfully.
Jan 23 05:25:24 np0005593232 podman[360181]: 2026-01-23 10:25:24.919711407 +0000 UTC m=+1.041384915 container died 01fffc73e8cbca123eeae5f0be50a7b9ba356185fccdf2abbab910f58a85522a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_herschel, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 05:25:24 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e3a3be59c710e6eb0a939227af2cb2c3a8403f6534cb546547fa3049e4efbda8-merged.mount: Deactivated successfully.
Jan 23 05:25:24 np0005593232 podman[360181]: 2026-01-23 10:25:24.979719212 +0000 UTC m=+1.101392720 container remove 01fffc73e8cbca123eeae5f0be50a7b9ba356185fccdf2abbab910f58a85522a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 05:25:24 np0005593232 systemd[1]: libpod-conmon-01fffc73e8cbca123eeae5f0be50a7b9ba356185fccdf2abbab910f58a85522a.scope: Deactivated successfully.
Jan 23 05:25:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:25:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:25.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:25:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2888: 321 pgs: 321 active+clean; 256 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 2.1 KiB/s wr, 116 op/s
Jan 23 05:25:25 np0005593232 podman[360371]: 2026-01-23 10:25:25.782972836 +0000 UTC m=+0.056313372 container create e1294ea131006a937f51fab04c942cdd2b036c319e2f7129c386de9d126c3a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_euclid, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:25:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:25.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:25 np0005593232 systemd[1]: Started libpod-conmon-e1294ea131006a937f51fab04c942cdd2b036c319e2f7129c386de9d126c3a65.scope.
Jan 23 05:25:25 np0005593232 podman[360371]: 2026-01-23 10:25:25.755620998 +0000 UTC m=+0.028961534 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:25:25 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:25:25 np0005593232 podman[360371]: 2026-01-23 10:25:25.898669825 +0000 UTC m=+0.172010351 container init e1294ea131006a937f51fab04c942cdd2b036c319e2f7129c386de9d126c3a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_euclid, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:25:25 np0005593232 podman[360371]: 2026-01-23 10:25:25.9093884 +0000 UTC m=+0.182728946 container start e1294ea131006a937f51fab04c942cdd2b036c319e2f7129c386de9d126c3a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_euclid, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:25:25 np0005593232 distracted_euclid[360387]: 167 167
Jan 23 05:25:25 np0005593232 podman[360371]: 2026-01-23 10:25:25.914028721 +0000 UTC m=+0.187369277 container attach e1294ea131006a937f51fab04c942cdd2b036c319e2f7129c386de9d126c3a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_euclid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:25:25 np0005593232 systemd[1]: libpod-e1294ea131006a937f51fab04c942cdd2b036c319e2f7129c386de9d126c3a65.scope: Deactivated successfully.
Jan 23 05:25:25 np0005593232 podman[360371]: 2026-01-23 10:25:25.916484341 +0000 UTC m=+0.189824847 container died e1294ea131006a937f51fab04c942cdd2b036c319e2f7129c386de9d126c3a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 05:25:25 np0005593232 systemd[1]: var-lib-containers-storage-overlay-577bafc8aa73930245aa11f6d302d1ac9b301020f851dc44c5fb32ffff17621a-merged.mount: Deactivated successfully.
Jan 23 05:25:25 np0005593232 podman[360371]: 2026-01-23 10:25:25.967074499 +0000 UTC m=+0.240414995 container remove e1294ea131006a937f51fab04c942cdd2b036c319e2f7129c386de9d126c3a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_euclid, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 23 05:25:25 np0005593232 systemd[1]: libpod-conmon-e1294ea131006a937f51fab04c942cdd2b036c319e2f7129c386de9d126c3a65.scope: Deactivated successfully.
Jan 23 05:25:26 np0005593232 podman[360412]: 2026-01-23 10:25:26.146746507 +0000 UTC m=+0.049629552 container create 654044d8763b6a3bb7779632c4d9d77434ed0d315f59701e4bfa0bc62fbdf6a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 23 05:25:26 np0005593232 systemd[1]: Started libpod-conmon-654044d8763b6a3bb7779632c4d9d77434ed0d315f59701e4bfa0bc62fbdf6a9.scope.
Jan 23 05:25:26 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:25:26 np0005593232 podman[360412]: 2026-01-23 10:25:26.124476864 +0000 UTC m=+0.027359989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:25:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bdb4843cdcd4d0a1a59595b9b2780c0fc3eb6b166c7311022d3750a6d7331fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:25:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bdb4843cdcd4d0a1a59595b9b2780c0fc3eb6b166c7311022d3750a6d7331fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:25:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bdb4843cdcd4d0a1a59595b9b2780c0fc3eb6b166c7311022d3750a6d7331fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:25:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bdb4843cdcd4d0a1a59595b9b2780c0fc3eb6b166c7311022d3750a6d7331fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:25:26 np0005593232 podman[360412]: 2026-01-23 10:25:26.244160556 +0000 UTC m=+0.147043701 container init 654044d8763b6a3bb7779632c4d9d77434ed0d315f59701e4bfa0bc62fbdf6a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_nobel, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:25:26 np0005593232 podman[360412]: 2026-01-23 10:25:26.256214999 +0000 UTC m=+0.159098054 container start 654044d8763b6a3bb7779632c4d9d77434ed0d315f59701e4bfa0bc62fbdf6a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_nobel, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:25:26 np0005593232 podman[360412]: 2026-01-23 10:25:26.259712118 +0000 UTC m=+0.162595183 container attach 654044d8763b6a3bb7779632c4d9d77434ed0d315f59701e4bfa0bc62fbdf6a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_nobel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]: {
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:    "0": [
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:        {
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:            "devices": [
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:                "/dev/loop3"
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:            ],
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:            "lv_name": "ceph_lv0",
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:            "lv_size": "7511998464",
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:            "name": "ceph_lv0",
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:            "tags": {
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:                "ceph.cluster_name": "ceph",
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:                "ceph.crush_device_class": "",
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:                "ceph.encrypted": "0",
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:                "ceph.osd_id": "0",
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:                "ceph.type": "block",
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:                "ceph.vdo": "0"
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:            },
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:            "type": "block",
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:            "vg_name": "ceph_vg0"
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:        }
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]:    ]
Jan 23 05:25:27 np0005593232 naughty_nobel[360428]: }
Jan 23 05:25:27 np0005593232 systemd[1]: libpod-654044d8763b6a3bb7779632c4d9d77434ed0d315f59701e4bfa0bc62fbdf6a9.scope: Deactivated successfully.
Jan 23 05:25:27 np0005593232 podman[360437]: 2026-01-23 10:25:27.184158298 +0000 UTC m=+0.044377753 container died 654044d8763b6a3bb7779632c4d9d77434ed0d315f59701e4bfa0bc62fbdf6a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:25:27 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7bdb4843cdcd4d0a1a59595b9b2780c0fc3eb6b166c7311022d3750a6d7331fb-merged.mount: Deactivated successfully.
Jan 23 05:25:27 np0005593232 podman[360437]: 2026-01-23 10:25:27.247041706 +0000 UTC m=+0.107261141 container remove 654044d8763b6a3bb7779632c4d9d77434ed0d315f59701e4bfa0bc62fbdf6a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 23 05:25:27 np0005593232 systemd[1]: libpod-conmon-654044d8763b6a3bb7779632c4d9d77434ed0d315f59701e4bfa0bc62fbdf6a9.scope: Deactivated successfully.
Jan 23 05:25:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:25:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:27.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:25:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:25:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2889: 321 pgs: 321 active+clean; 259 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 553 KiB/s wr, 129 op/s
Jan 23 05:25:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:25:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:27.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:25:28 np0005593232 podman[360593]: 2026-01-23 10:25:28.094483296 +0000 UTC m=+0.051139195 container create 653703ed06b9f2eb5162fa8f78dab6b525f0ae7a767ffe7bf23eede5422b42c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 05:25:28 np0005593232 systemd[1]: Started libpod-conmon-653703ed06b9f2eb5162fa8f78dab6b525f0ae7a767ffe7bf23eede5422b42c9.scope.
Jan 23 05:25:28 np0005593232 podman[360593]: 2026-01-23 10:25:28.069943489 +0000 UTC m=+0.026599468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:25:28 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:25:28 np0005593232 podman[360593]: 2026-01-23 10:25:28.198307648 +0000 UTC m=+0.154963637 container init 653703ed06b9f2eb5162fa8f78dab6b525f0ae7a767ffe7bf23eede5422b42c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hodgkin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Jan 23 05:25:28 np0005593232 podman[360593]: 2026-01-23 10:25:28.21209131 +0000 UTC m=+0.168747239 container start 653703ed06b9f2eb5162fa8f78dab6b525f0ae7a767ffe7bf23eede5422b42c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hodgkin, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:25:28 np0005593232 podman[360593]: 2026-01-23 10:25:28.216639949 +0000 UTC m=+0.173295948 container attach 653703ed06b9f2eb5162fa8f78dab6b525f0ae7a767ffe7bf23eede5422b42c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 23 05:25:28 np0005593232 angry_hodgkin[360609]: 167 167
Jan 23 05:25:28 np0005593232 systemd[1]: libpod-653703ed06b9f2eb5162fa8f78dab6b525f0ae7a767ffe7bf23eede5422b42c9.scope: Deactivated successfully.
Jan 23 05:25:28 np0005593232 podman[360593]: 2026-01-23 10:25:28.222370942 +0000 UTC m=+0.179026881 container died 653703ed06b9f2eb5162fa8f78dab6b525f0ae7a767ffe7bf23eede5422b42c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:25:28 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4b4c7e2cca3318879e4f3b8d26464be3f90aa851ac2af200f4d72f249f2727fb-merged.mount: Deactivated successfully.
Jan 23 05:25:28 np0005593232 podman[360593]: 2026-01-23 10:25:28.276721337 +0000 UTC m=+0.233377246 container remove 653703ed06b9f2eb5162fa8f78dab6b525f0ae7a767ffe7bf23eede5422b42c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_hodgkin, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 05:25:28 np0005593232 systemd[1]: libpod-conmon-653703ed06b9f2eb5162fa8f78dab6b525f0ae7a767ffe7bf23eede5422b42c9.scope: Deactivated successfully.
Jan 23 05:25:28 np0005593232 podman[360634]: 2026-01-23 10:25:28.517021188 +0000 UTC m=+0.054470140 container create 4e8192a69d6b37c657a23d6272081e49da9cff9c277a52407fcd54dda6f5940c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:25:28 np0005593232 systemd[1]: Started libpod-conmon-4e8192a69d6b37c657a23d6272081e49da9cff9c277a52407fcd54dda6f5940c.scope.
Jan 23 05:25:28 np0005593232 podman[360634]: 2026-01-23 10:25:28.494396225 +0000 UTC m=+0.031845207 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:25:28 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:25:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c31bc0a69d95a5f55055b4cd798117583104dc34ccd854d134fd1e7c028586/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:25:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c31bc0a69d95a5f55055b4cd798117583104dc34ccd854d134fd1e7c028586/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:25:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c31bc0a69d95a5f55055b4cd798117583104dc34ccd854d134fd1e7c028586/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:25:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63c31bc0a69d95a5f55055b4cd798117583104dc34ccd854d134fd1e7c028586/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:25:28 np0005593232 podman[360634]: 2026-01-23 10:25:28.632636294 +0000 UTC m=+0.170085336 container init 4e8192a69d6b37c657a23d6272081e49da9cff9c277a52407fcd54dda6f5940c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 05:25:28 np0005593232 podman[360634]: 2026-01-23 10:25:28.645941213 +0000 UTC m=+0.183390195 container start 4e8192a69d6b37c657a23d6272081e49da9cff9c277a52407fcd54dda6f5940c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hamilton, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Jan 23 05:25:28 np0005593232 podman[360634]: 2026-01-23 10:25:28.650477332 +0000 UTC m=+0.187926314 container attach 4e8192a69d6b37c657a23d6272081e49da9cff9c277a52407fcd54dda6f5940c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hamilton, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:25:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:25:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:29.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:25:29 np0005593232 nova_compute[250269]: 2026-01-23 10:25:29.364 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:29 np0005593232 nova_compute[250269]: 2026-01-23 10:25:29.470 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:29 np0005593232 admiring_hamilton[360651]: {
Jan 23 05:25:29 np0005593232 admiring_hamilton[360651]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:25:29 np0005593232 admiring_hamilton[360651]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:25:29 np0005593232 admiring_hamilton[360651]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:25:29 np0005593232 admiring_hamilton[360651]:        "osd_id": 0,
Jan 23 05:25:29 np0005593232 admiring_hamilton[360651]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:25:29 np0005593232 admiring_hamilton[360651]:        "type": "bluestore"
Jan 23 05:25:29 np0005593232 admiring_hamilton[360651]:    }
Jan 23 05:25:29 np0005593232 admiring_hamilton[360651]: }
Jan 23 05:25:29 np0005593232 systemd[1]: libpod-4e8192a69d6b37c657a23d6272081e49da9cff9c277a52407fcd54dda6f5940c.scope: Deactivated successfully.
Jan 23 05:25:29 np0005593232 systemd[1]: libpod-4e8192a69d6b37c657a23d6272081e49da9cff9c277a52407fcd54dda6f5940c.scope: Consumed 1.100s CPU time.
Jan 23 05:25:29 np0005593232 podman[360634]: 2026-01-23 10:25:29.743808811 +0000 UTC m=+1.281257763 container died 4e8192a69d6b37c657a23d6272081e49da9cff9c277a52407fcd54dda6f5940c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hamilton, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 05:25:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2890: 321 pgs: 321 active+clean; 285 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 131 op/s
Jan 23 05:25:29 np0005593232 systemd[1]: var-lib-containers-storage-overlay-63c31bc0a69d95a5f55055b4cd798117583104dc34ccd854d134fd1e7c028586-merged.mount: Deactivated successfully.
Jan 23 05:25:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:25:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:29.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:25:29 np0005593232 podman[360634]: 2026-01-23 10:25:29.819527194 +0000 UTC m=+1.356976166 container remove 4e8192a69d6b37c657a23d6272081e49da9cff9c277a52407fcd54dda6f5940c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:25:29 np0005593232 systemd[1]: libpod-conmon-4e8192a69d6b37c657a23d6272081e49da9cff9c277a52407fcd54dda6f5940c.scope: Deactivated successfully.
Jan 23 05:25:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:25:29 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:25:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:25:29 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:25:29 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ba557b3b-310c-40b3-a08c-88d4a411c2b6 does not exist
Jan 23 05:25:29 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b44501ae-910b-4b74-8291-419699de2191 does not exist
Jan 23 05:25:29 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1beb98a0-f354-466f-940c-9400a6e76b37 does not exist
Jan 23 05:25:30 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:25:30 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:25:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:31.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2891: 321 pgs: 321 active+clean; 285 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 131 op/s
Jan 23 05:25:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:31.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:25:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Jan 23 05:25:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Jan 23 05:25:32 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Jan 23 05:25:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:33.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2893: 321 pgs: 321 active+clean; 295 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.1 MiB/s wr, 106 op/s
Jan 23 05:25:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:25:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:33.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:25:34 np0005593232 nova_compute[250269]: 2026-01-23 10:25:34.231 250273 DEBUG nova.virt.libvirt.driver [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Instance in state 1 after 43 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 23 05:25:34 np0005593232 nova_compute[250269]: 2026-01-23 10:25:34.368 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:34 np0005593232 nova_compute[250269]: 2026-01-23 10:25:34.475 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:25:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:35.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:25:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2894: 321 pgs: 321 active+clean; 295 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 548 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Jan 23 05:25:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:35.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:25:37
Jan 23 05:25:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:25:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:25:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'backups', 'images']
Jan 23 05:25:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:25:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:25:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:37.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:25:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:25:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:25:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:25:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:25:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:25:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:25:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:25:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2895: 321 pgs: 321 active+clean; 295 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 661 KiB/s rd, 1.6 MiB/s wr, 85 op/s
Jan 23 05:25:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:25:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:37.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:25:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:25:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:25:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:25:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:25:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:25:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:25:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:25:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:25:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:25:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:25:39 np0005593232 nova_compute[250269]: 2026-01-23 10:25:39.371 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:25:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:39.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:25:39 np0005593232 nova_compute[250269]: 2026-01-23 10:25:39.477 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2896: 321 pgs: 321 active+clean; 295 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 287 KiB/s wr, 121 op/s
Jan 23 05:25:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:39.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:25:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:41.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:25:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2897: 321 pgs: 321 active+clean; 295 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 287 KiB/s wr, 121 op/s
Jan 23 05:25:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:41.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:42.640 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:25:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:42.642 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:25:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:42.644 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:25:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:25:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:43.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2898: 321 pgs: 321 active+clean; 251 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 36 KiB/s wr, 152 op/s
Jan 23 05:25:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:25:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:43.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:25:44 np0005593232 nova_compute[250269]: 2026-01-23 10:25:44.374 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:44 np0005593232 nova_compute[250269]: 2026-01-23 10:25:44.479 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:45 np0005593232 nova_compute[250269]: 2026-01-23 10:25:45.300 250273 DEBUG nova.virt.libvirt.driver [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Instance in state 1 after 54 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 23 05:25:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:45.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:45 np0005593232 podman[360792]: 2026-01-23 10:25:45.518273625 +0000 UTC m=+0.167904894 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Jan 23 05:25:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2899: 321 pgs: 321 active+clean; 214 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 33 KiB/s wr, 143 op/s
Jan 23 05:25:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:45.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:25:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:25:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:25:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:25:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003178326528475712 of space, bias 1.0, pg target 0.9534979585427136 quantized to 32 (current 32)
Jan 23 05:25:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:25:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 9.087683789316955e-07 of space, bias 1.0, pg target 0.00027263051367950865 quantized to 32 (current 32)
Jan 23 05:25:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:25:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:25:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:25:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8676193467336684 quantized to 32 (current 32)
Jan 23 05:25:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:25:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 05:25:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:25:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:25:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:25:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 05:25:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:25:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 05:25:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:25:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:25:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:25:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 05:25:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:47.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:25:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2900: 321 pgs: 321 active+clean; 193 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 34 KiB/s wr, 148 op/s
Jan 23 05:25:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:47.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:49 np0005593232 nova_compute[250269]: 2026-01-23 10:25:49.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:25:49 np0005593232 nova_compute[250269]: 2026-01-23 10:25:49.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:25:49 np0005593232 nova_compute[250269]: 2026-01-23 10:25:49.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:25:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:49.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:49 np0005593232 nova_compute[250269]: 2026-01-23 10:25:49.404 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:49 np0005593232 nova_compute[250269]: 2026-01-23 10:25:49.481 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2901: 321 pgs: 321 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 123 op/s
Jan 23 05:25:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:49.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:50 np0005593232 podman[360822]: 2026-01-23 10:25:50.4556059 +0000 UTC m=+0.097260826 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 05:25:51 np0005593232 nova_compute[250269]: 2026-01-23 10:25:51.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:25:51 np0005593232 nova_compute[250269]: 2026-01-23 10:25:51.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:25:51 np0005593232 nova_compute[250269]: 2026-01-23 10:25:51.332 250273 INFO nova.virt.libvirt.driver [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Instance failed to shutdown in 60 seconds.#033[00m
Jan 23 05:25:51 np0005593232 kernel: tap8a815bf1-0b (unregistering): left promiscuous mode
Jan 23 05:25:51 np0005593232 NetworkManager[49057]: <info>  [1769163951.3923] device (tap8a815bf1-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:25:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:25:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:51.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:25:51 np0005593232 nova_compute[250269]: 2026-01-23 10:25:51.413 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:51 np0005593232 ovn_controller[151001]: 2026-01-23T10:25:51Z|00652|binding|INFO|Releasing lport 8a815bf1-0b58-47f0-a81a-267ec84efc82 from this chassis (sb_readonly=0)
Jan 23 05:25:51 np0005593232 ovn_controller[151001]: 2026-01-23T10:25:51Z|00653|binding|INFO|Setting lport 8a815bf1-0b58-47f0-a81a-267ec84efc82 down in Southbound
Jan 23 05:25:51 np0005593232 ovn_controller[151001]: 2026-01-23T10:25:51Z|00654|binding|INFO|Removing iface tap8a815bf1-0b ovn-installed in OVS
Jan 23 05:25:51 np0005593232 nova_compute[250269]: 2026-01-23 10:25:51.419 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:51.430 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:00:1f 10.100.0.9'], port_security=['fa:16:3e:da:00:1f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ae979986-7780-443a-afbc-6b4be8f71da1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c27429e1d8f433a8a67ddb76f8798f1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '29028637-714b-453c-9e54-c753b1c8b7f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0dedc65-79e0-4ae8-b1b0-46423e11b58a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=8a815bf1-0b58-47f0-a81a-267ec84efc82) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:25:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:51.433 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 8a815bf1-0b58-47f0-a81a-267ec84efc82 in datapath fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4 unbound from our chassis#033[00m
Jan 23 05:25:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:51.436 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:25:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:51.439 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[cf2507eb-6277-48ee-b14a-c0f1fa8e9125]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:25:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:51.440 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4 namespace which is not needed anymore#033[00m
Jan 23 05:25:51 np0005593232 nova_compute[250269]: 2026-01-23 10:25:51.444 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:51 np0005593232 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d000000a4.scope: Deactivated successfully.
Jan 23 05:25:51 np0005593232 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d000000a4.scope: Consumed 1.605s CPU time.
Jan 23 05:25:51 np0005593232 systemd-machined[215836]: Machine qemu-73-instance-000000a4 terminated.
Jan 23 05:25:51 np0005593232 nova_compute[250269]: 2026-01-23 10:25:51.587 250273 INFO nova.virt.libvirt.driver [-] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Instance destroyed successfully.#033[00m
Jan 23 05:25:51 np0005593232 nova_compute[250269]: 2026-01-23 10:25:51.587 250273 DEBUG nova.objects.instance [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lazy-loading 'numa_topology' on Instance uuid ae979986-7780-443a-afbc-6b4be8f71da1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:25:51 np0005593232 neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4[359542]: [NOTICE]   (359546) : haproxy version is 2.8.14-c23fe91
Jan 23 05:25:51 np0005593232 neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4[359542]: [NOTICE]   (359546) : path to executable is /usr/sbin/haproxy
Jan 23 05:25:51 np0005593232 neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4[359542]: [WARNING]  (359546) : Exiting Master process...
Jan 23 05:25:51 np0005593232 neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4[359542]: [WARNING]  (359546) : Exiting Master process...
Jan 23 05:25:51 np0005593232 nova_compute[250269]: 2026-01-23 10:25:51.615 250273 INFO nova.virt.libvirt.driver [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Attempting a stable device rescue#033[00m
Jan 23 05:25:51 np0005593232 neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4[359542]: [ALERT]    (359546) : Current worker (359548) exited with code 143 (Terminated)
Jan 23 05:25:51 np0005593232 neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4[359542]: [WARNING]  (359546) : All workers exited. Exiting... (0)
Jan 23 05:25:51 np0005593232 systemd[1]: libpod-91c364847bad9c894595836bdd5a50fd9542f17e271ae18e959d960670902c82.scope: Deactivated successfully.
Jan 23 05:25:51 np0005593232 podman[360867]: 2026-01-23 10:25:51.626344231 +0000 UTC m=+0.066242244 container died 91c364847bad9c894595836bdd5a50fd9542f17e271ae18e959d960670902c82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 05:25:51 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-91c364847bad9c894595836bdd5a50fd9542f17e271ae18e959d960670902c82-userdata-shm.mount: Deactivated successfully.
Jan 23 05:25:51 np0005593232 systemd[1]: var-lib-containers-storage-overlay-61f132d562dfbfe84802b5fbf48b09828601978a3d98d07668a2ebd778905e09-merged.mount: Deactivated successfully.
Jan 23 05:25:51 np0005593232 podman[360867]: 2026-01-23 10:25:51.692970865 +0000 UTC m=+0.132868878 container cleanup 91c364847bad9c894595836bdd5a50fd9542f17e271ae18e959d960670902c82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 23 05:25:51 np0005593232 systemd[1]: libpod-conmon-91c364847bad9c894595836bdd5a50fd9542f17e271ae18e959d960670902c82.scope: Deactivated successfully.
Jan 23 05:25:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2902: 321 pgs: 321 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 812 KiB/s rd, 13 KiB/s wr, 81 op/s
Jan 23 05:25:51 np0005593232 podman[360908]: 2026-01-23 10:25:51.804366481 +0000 UTC m=+0.070149875 container remove 91c364847bad9c894595836bdd5a50fd9542f17e271ae18e959d960670902c82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 23 05:25:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:51.817 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5bb9cc24-5a92-4c36-8c53-e80876057861]: (4, ('Fri Jan 23 10:25:51 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4 (91c364847bad9c894595836bdd5a50fd9542f17e271ae18e959d960670902c82)\n91c364847bad9c894595836bdd5a50fd9542f17e271ae18e959d960670902c82\nFri Jan 23 10:25:51 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4 (91c364847bad9c894595836bdd5a50fd9542f17e271ae18e959d960670902c82)\n91c364847bad9c894595836bdd5a50fd9542f17e271ae18e959d960670902c82\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:25:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:51.821 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e4bdac39-dfee-4f09-848d-96a6a6411c6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:25:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:51.823 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbd64ab8-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:25:51 np0005593232 nova_compute[250269]: 2026-01-23 10:25:51.823 250273 DEBUG nova.compute.manager [req-1b1d973a-f41b-4823-9f9a-2b0642b9198a req-5a734705-afdc-4cf4-8e9b-70866c8c6147 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received event network-vif-unplugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:25:51 np0005593232 nova_compute[250269]: 2026-01-23 10:25:51.824 250273 DEBUG oslo_concurrency.lockutils [req-1b1d973a-f41b-4823-9f9a-2b0642b9198a req-5a734705-afdc-4cf4-8e9b-70866c8c6147 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:25:51 np0005593232 nova_compute[250269]: 2026-01-23 10:25:51.824 250273 DEBUG oslo_concurrency.lockutils [req-1b1d973a-f41b-4823-9f9a-2b0642b9198a req-5a734705-afdc-4cf4-8e9b-70866c8c6147 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:25:51 np0005593232 nova_compute[250269]: 2026-01-23 10:25:51.825 250273 DEBUG oslo_concurrency.lockutils [req-1b1d973a-f41b-4823-9f9a-2b0642b9198a req-5a734705-afdc-4cf4-8e9b-70866c8c6147 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:25:51 np0005593232 nova_compute[250269]: 2026-01-23 10:25:51.861 250273 DEBUG nova.compute.manager [req-1b1d973a-f41b-4823-9f9a-2b0642b9198a req-5a734705-afdc-4cf4-8e9b-70866c8c6147 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] No waiting events found dispatching network-vif-unplugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:25:51 np0005593232 nova_compute[250269]: 2026-01-23 10:25:51.862 250273 WARNING nova.compute.manager [req-1b1d973a-f41b-4823-9f9a-2b0642b9198a req-5a734705-afdc-4cf4-8e9b-70866c8c6147 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received unexpected event network-vif-unplugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 for instance with vm_state active and task_state rescuing.#033[00m
Jan 23 05:25:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:25:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:51.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:25:51 np0005593232 nova_compute[250269]: 2026-01-23 10:25:51.863 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:51 np0005593232 kernel: tapfbd64ab8-90: left promiscuous mode
Jan 23 05:25:51 np0005593232 nova_compute[250269]: 2026-01-23 10:25:51.892 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:51 np0005593232 nova_compute[250269]: 2026-01-23 10:25:51.892 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:51.896 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[cc792444-5ecb-4b54-bba5-59287a8731d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:25:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:51.923 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d8ee4ae4-fa1f-4f3f-a976-25ce675f3122]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:25:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:51.925 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4286b973-00ab-4d0b-a691-859904ddff7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:25:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:51.945 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[07304719-480a-43e4-b4f1-aa1cc5262985]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 778411, 'reachable_time': 34975, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360927, 'error': None, 'target': 'ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:25:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:51.950 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:25:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:51.951 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[726f32b0-8ea8-4cfa-a598-f89420c50bca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:25:51 np0005593232 systemd[1]: run-netns-ovnmeta\x2dfbd64ab8\x2d9e5b\x2d4300\x2d98d7\x2d50a5d6fbefc4.mount: Deactivated successfully.
Jan 23 05:25:52 np0005593232 nova_compute[250269]: 2026-01-23 10:25:52.182 250273 DEBUG nova.virt.libvirt.driver [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Jan 23 05:25:52 np0005593232 nova_compute[250269]: 2026-01-23 10:25:52.190 250273 DEBUG nova.virt.libvirt.driver [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 23 05:25:52 np0005593232 nova_compute[250269]: 2026-01-23 10:25:52.190 250273 INFO nova.virt.libvirt.driver [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Creating image(s)#033[00m
Jan 23 05:25:52 np0005593232 nova_compute[250269]: 2026-01-23 10:25:52.231 250273 DEBUG nova.storage.rbd_utils [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] rbd image ae979986-7780-443a-afbc-6b4be8f71da1_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:25:52 np0005593232 nova_compute[250269]: 2026-01-23 10:25:52.238 250273 DEBUG nova.objects.instance [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid ae979986-7780-443a-afbc-6b4be8f71da1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:25:52 np0005593232 nova_compute[250269]: 2026-01-23 10:25:52.300 250273 DEBUG nova.storage.rbd_utils [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] rbd image ae979986-7780-443a-afbc-6b4be8f71da1_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:25:52 np0005593232 nova_compute[250269]: 2026-01-23 10:25:52.349 250273 DEBUG nova.storage.rbd_utils [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] rbd image ae979986-7780-443a-afbc-6b4be8f71da1_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:25:52 np0005593232 nova_compute[250269]: 2026-01-23 10:25:52.356 250273 DEBUG oslo_concurrency.lockutils [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Acquiring lock "8a0445e74d87b682f16a565a672bb4d6cbd82c0d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:25:52 np0005593232 nova_compute[250269]: 2026-01-23 10:25:52.358 250273 DEBUG oslo_concurrency.lockutils [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "8a0445e74d87b682f16a565a672bb4d6cbd82c0d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:25:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:25:52 np0005593232 nova_compute[250269]: 2026-01-23 10:25:52.706 250273 DEBUG nova.virt.libvirt.imagebackend [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Image locations are: [{'url': 'rbd://e1533653-0a5a-584c-b34b-8689f0d32e77/images/a891f488-4cba-4fea-b482-6ac469142f81/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e1533653-0a5a-584c-b34b-8689f0d32e77/images/a891f488-4cba-4fea-b482-6ac469142f81/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 23 05:25:52 np0005593232 nova_compute[250269]: 2026-01-23 10:25:52.797 250273 DEBUG nova.virt.libvirt.imagebackend [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Selected location: {'url': 'rbd://e1533653-0a5a-584c-b34b-8689f0d32e77/images/a891f488-4cba-4fea-b482-6ac469142f81/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 23 05:25:52 np0005593232 nova_compute[250269]: 2026-01-23 10:25:52.798 250273 DEBUG nova.storage.rbd_utils [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] cloning images/a891f488-4cba-4fea-b482-6ac469142f81@snap to None/ae979986-7780-443a-afbc-6b4be8f71da1_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 23 05:25:52 np0005593232 nova_compute[250269]: 2026-01-23 10:25:52.963 250273 DEBUG oslo_concurrency.lockutils [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "8a0445e74d87b682f16a565a672bb4d6cbd82c0d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.039 250273 DEBUG nova.objects.instance [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lazy-loading 'migration_context' on Instance uuid ae979986-7780-443a-afbc-6b4be8f71da1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.069 250273 DEBUG nova.virt.libvirt.driver [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.072 250273 DEBUG nova.virt.libvirt.driver [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Start _get_guest_xml network_info=[{"id": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "address": "fa:16:3e:da:00:1f", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "vif_mac": "fa:16:3e:da:00:1f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a815bf1-0b", "ovs_interfaceid": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'a891f488-4cba-4fea-b482-6ac469142f81', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'delete_on_termination': False, 'attachment_id': '791794f9-87d4-4aa0-a262-f6adf41498a4', 'device_type': 'disk', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-890430a6-e7b9-4647-b7ef-49be14bad5fe', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '890430a6-e7b9-4647-b7ef-49be14bad5fe', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'ae979986-7780-443a-afbc-6b4be8f71da1', 'attached_at': '', 'detached_at': '', 'volume_id': '890430a6-e7b9-4647-b7ef-49be14bad5fe', 'serial': '890430a6-e7b9-4647-b7ef-49be14bad5fe'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.073 250273 DEBUG nova.objects.instance [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lazy-loading 'resources' on Instance uuid ae979986-7780-443a-afbc-6b4be8f71da1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.113 250273 WARNING nova.virt.libvirt.driver [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.118 250273 DEBUG nova.virt.libvirt.host [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.119 250273 DEBUG nova.virt.libvirt.host [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.124 250273 DEBUG nova.virt.libvirt.host [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.124 250273 DEBUG nova.virt.libvirt.host [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.125 250273 DEBUG nova.virt.libvirt.driver [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.125 250273 DEBUG nova.virt.hardware [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.126 250273 DEBUG nova.virt.hardware [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.126 250273 DEBUG nova.virt.hardware [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.126 250273 DEBUG nova.virt.hardware [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.126 250273 DEBUG nova.virt.hardware [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.127 250273 DEBUG nova.virt.hardware [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.127 250273 DEBUG nova.virt.hardware [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.127 250273 DEBUG nova.virt.hardware [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.127 250273 DEBUG nova.virt.hardware [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.128 250273 DEBUG nova.virt.hardware [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.128 250273 DEBUG nova.virt.hardware [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.128 250273 DEBUG nova.objects.instance [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid ae979986-7780-443a-afbc-6b4be8f71da1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.195 250273 DEBUG oslo_concurrency.processutils [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:25:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:25:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:53.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:25:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2903: 321 pgs: 321 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 812 KiB/s rd, 13 KiB/s wr, 81 op/s
Jan 23 05:25:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:53.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.959 250273 DEBUG nova.compute.manager [req-2c082b04-1e1f-4960-a2dd-bff1ebdf8e6b req-57a93313-e4da-4790-b8bd-142131435b9d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received event network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.960 250273 DEBUG oslo_concurrency.lockutils [req-2c082b04-1e1f-4960-a2dd-bff1ebdf8e6b req-57a93313-e4da-4790-b8bd-142131435b9d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.961 250273 DEBUG oslo_concurrency.lockutils [req-2c082b04-1e1f-4960-a2dd-bff1ebdf8e6b req-57a93313-e4da-4790-b8bd-142131435b9d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.961 250273 DEBUG oslo_concurrency.lockutils [req-2c082b04-1e1f-4960-a2dd-bff1ebdf8e6b req-57a93313-e4da-4790-b8bd-142131435b9d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.962 250273 DEBUG nova.compute.manager [req-2c082b04-1e1f-4960-a2dd-bff1ebdf8e6b req-57a93313-e4da-4790-b8bd-142131435b9d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] No waiting events found dispatching network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:25:53 np0005593232 nova_compute[250269]: 2026-01-23 10:25:53.962 250273 WARNING nova.compute.manager [req-2c082b04-1e1f-4960-a2dd-bff1ebdf8e6b req-57a93313-e4da-4790-b8bd-142131435b9d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received unexpected event network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 for instance with vm_state active and task_state rescuing.#033[00m
Jan 23 05:25:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:25:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2468612211' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:25:54 np0005593232 nova_compute[250269]: 2026-01-23 10:25:54.755 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:54 np0005593232 nova_compute[250269]: 2026-01-23 10:25:54.783 250273 DEBUG oslo_concurrency.processutils [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:25:54 np0005593232 nova_compute[250269]: 2026-01-23 10:25:54.949 250273 DEBUG oslo_concurrency.processutils [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:25:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:55.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:25:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3881084919' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:25:55 np0005593232 nova_compute[250269]: 2026-01-23 10:25:55.483 250273 DEBUG oslo_concurrency.processutils [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:25:55 np0005593232 nova_compute[250269]: 2026-01-23 10:25:55.488 250273 DEBUG nova.virt.libvirt.vif [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:24:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-464630064',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-464630064',id=164,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:24:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5c27429e1d8f433a8a67ddb76f8798f1',ramdisk_id='',reservation_id='r-gwy2rl7j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1351337832',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1351337832-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:24:41Z,user_data=None,user_id='0d6a628e0dcb441fa41457bf719e65a0',uuid=ae979986-7780-443a-afbc-6b4be8f71da1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "address": "fa:16:3e:da:00:1f", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "vif_mac": "fa:16:3e:da:00:1f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a815bf1-0b", "ovs_interfaceid": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:25:55 np0005593232 nova_compute[250269]: 2026-01-23 10:25:55.489 250273 DEBUG nova.network.os_vif_util [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Converting VIF {"id": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "address": "fa:16:3e:da:00:1f", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "vif_mac": "fa:16:3e:da:00:1f"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a815bf1-0b", "ovs_interfaceid": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:25:55 np0005593232 nova_compute[250269]: 2026-01-23 10:25:55.491 250273 DEBUG nova.network.os_vif_util [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:da:00:1f,bridge_name='br-int',has_traffic_filtering=True,id=8a815bf1-0b58-47f0-a81a-267ec84efc82,network=Network(fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a815bf1-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:25:55 np0005593232 nova_compute[250269]: 2026-01-23 10:25:55.493 250273 DEBUG nova.objects.instance [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lazy-loading 'pci_devices' on Instance uuid ae979986-7780-443a-afbc-6b4be8f71da1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:25:55 np0005593232 nova_compute[250269]: 2026-01-23 10:25:55.510 250273 DEBUG nova.virt.libvirt.driver [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:25:55 np0005593232 nova_compute[250269]:  <uuid>ae979986-7780-443a-afbc-6b4be8f71da1</uuid>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:  <name>instance-000000a4</name>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-464630064</nova:name>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:25:53</nova:creationTime>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:25:55 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:        <nova:user uuid="0d6a628e0dcb441fa41457bf719e65a0">tempest-ServerBootFromVolumeStableRescueTest-1351337832-project-member</nova:user>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:        <nova:project uuid="5c27429e1d8f433a8a67ddb76f8798f1">tempest-ServerBootFromVolumeStableRescueTest-1351337832</nova:project>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:        <nova:port uuid="8a815bf1-0b58-47f0-a81a-267ec84efc82">
Jan 23 05:25:55 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <entry name="serial">ae979986-7780-443a-afbc-6b4be8f71da1</entry>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <entry name="uuid">ae979986-7780-443a-afbc-6b4be8f71da1</entry>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/ae979986-7780-443a-afbc-6b4be8f71da1_disk.config">
Jan 23 05:25:55 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:25:55 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="volumes/volume-890430a6-e7b9-4647-b7ef-49be14bad5fe">
Jan 23 05:25:55 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:25:55 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <serial>890430a6-e7b9-4647-b7ef-49be14bad5fe</serial>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/ae979986-7780-443a-afbc-6b4be8f71da1_disk.rescue">
Jan 23 05:25:55 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:25:55 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <target dev="vdb" bus="virtio"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <boot order="1"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:da:00:1f"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <target dev="tap8a815bf1-0b"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/ae979986-7780-443a-afbc-6b4be8f71da1/console.log" append="off"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:25:55 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:25:55 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:25:55 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:25:55 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:25:55 np0005593232 nova_compute[250269]: 2026-01-23 10:25:55.525 250273 INFO nova.virt.libvirt.driver [-] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Instance destroyed successfully.#033[00m
Jan 23 05:25:55 np0005593232 nova_compute[250269]: 2026-01-23 10:25:55.601 250273 DEBUG nova.virt.libvirt.driver [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:25:55 np0005593232 nova_compute[250269]: 2026-01-23 10:25:55.601 250273 DEBUG nova.virt.libvirt.driver [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:25:55 np0005593232 nova_compute[250269]: 2026-01-23 10:25:55.602 250273 DEBUG nova.virt.libvirt.driver [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:25:55 np0005593232 nova_compute[250269]: 2026-01-23 10:25:55.602 250273 DEBUG nova.virt.libvirt.driver [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] No VIF found with MAC fa:16:3e:da:00:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:25:55 np0005593232 nova_compute[250269]: 2026-01-23 10:25:55.603 250273 INFO nova.virt.libvirt.driver [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Using config drive#033[00m
Jan 23 05:25:55 np0005593232 nova_compute[250269]: 2026-01-23 10:25:55.634 250273 DEBUG nova.storage.rbd_utils [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] rbd image ae979986-7780-443a-afbc-6b4be8f71da1_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:25:55 np0005593232 nova_compute[250269]: 2026-01-23 10:25:55.683 250273 DEBUG nova.objects.instance [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid ae979986-7780-443a-afbc-6b4be8f71da1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:25:55 np0005593232 nova_compute[250269]: 2026-01-23 10:25:55.723 250273 DEBUG nova.objects.instance [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lazy-loading 'keypairs' on Instance uuid ae979986-7780-443a-afbc-6b4be8f71da1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:25:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2904: 321 pgs: 321 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 53 KiB/s rd, 2.6 KiB/s wr, 47 op/s
Jan 23 05:25:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:55.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:56 np0005593232 nova_compute[250269]: 2026-01-23 10:25:56.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:25:56 np0005593232 nova_compute[250269]: 2026-01-23 10:25:56.442 250273 INFO nova.virt.libvirt.driver [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Creating config drive at /var/lib/nova/instances/ae979986-7780-443a-afbc-6b4be8f71da1/disk.config.rescue#033[00m
Jan 23 05:25:56 np0005593232 nova_compute[250269]: 2026-01-23 10:25:56.453 250273 DEBUG oslo_concurrency.processutils [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ae979986-7780-443a-afbc-6b4be8f71da1/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeyupiqte execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:25:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:56.468 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:25:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:56.469 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:25:56 np0005593232 nova_compute[250269]: 2026-01-23 10:25:56.511 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:56 np0005593232 nova_compute[250269]: 2026-01-23 10:25:56.617 250273 DEBUG oslo_concurrency.processutils [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ae979986-7780-443a-afbc-6b4be8f71da1/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeyupiqte" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:25:56 np0005593232 nova_compute[250269]: 2026-01-23 10:25:56.669 250273 DEBUG nova.storage.rbd_utils [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] rbd image ae979986-7780-443a-afbc-6b4be8f71da1_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:25:56 np0005593232 nova_compute[250269]: 2026-01-23 10:25:56.675 250273 DEBUG oslo_concurrency.processutils [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ae979986-7780-443a-afbc-6b4be8f71da1/disk.config.rescue ae979986-7780-443a-afbc-6b4be8f71da1_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:25:56 np0005593232 nova_compute[250269]: 2026-01-23 10:25:56.880 250273 DEBUG oslo_concurrency.processutils [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ae979986-7780-443a-afbc-6b4be8f71da1/disk.config.rescue ae979986-7780-443a-afbc-6b4be8f71da1_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:25:56 np0005593232 nova_compute[250269]: 2026-01-23 10:25:56.881 250273 INFO nova.virt.libvirt.driver [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Deleting local config drive /var/lib/nova/instances/ae979986-7780-443a-afbc-6b4be8f71da1/disk.config.rescue because it was imported into RBD.#033[00m
Jan 23 05:25:56 np0005593232 kernel: tap8a815bf1-0b: entered promiscuous mode
Jan 23 05:25:56 np0005593232 NetworkManager[49057]: <info>  [1769163956.9436] manager: (tap8a815bf1-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/305)
Jan 23 05:25:56 np0005593232 ovn_controller[151001]: 2026-01-23T10:25:56Z|00655|binding|INFO|Claiming lport 8a815bf1-0b58-47f0-a81a-267ec84efc82 for this chassis.
Jan 23 05:25:56 np0005593232 ovn_controller[151001]: 2026-01-23T10:25:56Z|00656|binding|INFO|8a815bf1-0b58-47f0-a81a-267ec84efc82: Claiming fa:16:3e:da:00:1f 10.100.0.9
Jan 23 05:25:56 np0005593232 nova_compute[250269]: 2026-01-23 10:25:56.943 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:56.950 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:00:1f 10.100.0.9'], port_security=['fa:16:3e:da:00:1f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ae979986-7780-443a-afbc-6b4be8f71da1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c27429e1d8f433a8a67ddb76f8798f1', 'neutron:revision_number': '5', 'neutron:security_group_ids': '29028637-714b-453c-9e54-c753b1c8b7f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0dedc65-79e0-4ae8-b1b0-46423e11b58a, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=8a815bf1-0b58-47f0-a81a-267ec84efc82) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:25:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:56.952 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 8a815bf1-0b58-47f0-a81a-267ec84efc82 in datapath fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4 bound to our chassis#033[00m
Jan 23 05:25:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:56.953 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4#033[00m
Jan 23 05:25:56 np0005593232 ovn_controller[151001]: 2026-01-23T10:25:56Z|00657|binding|INFO|Setting lport 8a815bf1-0b58-47f0-a81a-267ec84efc82 ovn-installed in OVS
Jan 23 05:25:56 np0005593232 ovn_controller[151001]: 2026-01-23T10:25:56Z|00658|binding|INFO|Setting lport 8a815bf1-0b58-47f0-a81a-267ec84efc82 up in Southbound
Jan 23 05:25:56 np0005593232 nova_compute[250269]: 2026-01-23 10:25:56.969 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:56.969 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd016d4-746f-44a4-9d77-20c76ab5873e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:25:56 np0005593232 nova_compute[250269]: 2026-01-23 10:25:56.974 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:56.971 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfbd64ab8-91 in ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:25:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:56.974 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfbd64ab8-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:25:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:56.974 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4e217b01-5f49-4395-aa02-b7e84738bba2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:25:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:56.975 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0c0724b3-bd9e-4aeb-8e9b-1256d2884df9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:25:56 np0005593232 systemd-udevd[361257]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:25:56 np0005593232 systemd-machined[215836]: New machine qemu-74-instance-000000a4.
Jan 23 05:25:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:56.989 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[773f107c-d2bf-44b9-ada6-61f5eaba8400]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:25:56 np0005593232 NetworkManager[49057]: <info>  [1769163956.9920] device (tap8a815bf1-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:25:56 np0005593232 NetworkManager[49057]: <info>  [1769163956.9924] device (tap8a815bf1-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:25:56 np0005593232 systemd[1]: Started Virtual Machine qemu-74-instance-000000a4.
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:57.012 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7b263e9b-c702-4b5b-8c79-fb93ebcd3a0c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:57.051 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[bae32c88-ff92-43c3-9dd2-1f1e56615519]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:57.056 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7d39c95e-8067-4062-836b-53c10e433bb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:25:57 np0005593232 NetworkManager[49057]: <info>  [1769163957.0576] manager: (tapfbd64ab8-90): new Veth device (/org/freedesktop/NetworkManager/Devices/306)
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:57.104 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[2c957a42-ec4f-4774-a258-03a96e1d309b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:57.109 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[f851afb4-d528-4676-8292-0f2a314c0be3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:25:57 np0005593232 NetworkManager[49057]: <info>  [1769163957.1398] device (tapfbd64ab8-90): carrier: link connected
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:57.146 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c2d82d-cefa-446d-92a1-694d35a965dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:57.165 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2db4a3b6-64dc-45b2-926f-ddb30f9655bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbd64ab8-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:7c:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 196], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787028, 'reachable_time': 27750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361289, 'error': None, 'target': 'ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:57.181 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3f4a3105-7ef8-4489-a179-2f9695aecfcd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:7c5a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 787028, 'tstamp': 787028}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361290, 'error': None, 'target': 'ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:57.197 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[347f6df3-0f2a-47a9-acd1-41a3934912cb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbd64ab8-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:7c:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 196], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787028, 'reachable_time': 27750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361291, 'error': None, 'target': 'ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:57.228 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[528ac4b2-b27c-43a3-9c3d-fdaba8ab3529]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:25:57 np0005593232 nova_compute[250269]: 2026-01-23 10:25:57.249 250273 DEBUG nova.compute.manager [req-dee0695b-1d00-40de-ba65-f9614a30a6e0 req-24393b20-54ba-4b9a-a788-4e47470def0f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received event network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:25:57 np0005593232 nova_compute[250269]: 2026-01-23 10:25:57.250 250273 DEBUG oslo_concurrency.lockutils [req-dee0695b-1d00-40de-ba65-f9614a30a6e0 req-24393b20-54ba-4b9a-a788-4e47470def0f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:25:57 np0005593232 nova_compute[250269]: 2026-01-23 10:25:57.250 250273 DEBUG oslo_concurrency.lockutils [req-dee0695b-1d00-40de-ba65-f9614a30a6e0 req-24393b20-54ba-4b9a-a788-4e47470def0f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:25:57 np0005593232 nova_compute[250269]: 2026-01-23 10:25:57.251 250273 DEBUG oslo_concurrency.lockutils [req-dee0695b-1d00-40de-ba65-f9614a30a6e0 req-24393b20-54ba-4b9a-a788-4e47470def0f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:25:57 np0005593232 nova_compute[250269]: 2026-01-23 10:25:57.251 250273 DEBUG nova.compute.manager [req-dee0695b-1d00-40de-ba65-f9614a30a6e0 req-24393b20-54ba-4b9a-a788-4e47470def0f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] No waiting events found dispatching network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:25:57 np0005593232 nova_compute[250269]: 2026-01-23 10:25:57.251 250273 WARNING nova.compute.manager [req-dee0695b-1d00-40de-ba65-f9614a30a6e0 req-24393b20-54ba-4b9a-a788-4e47470def0f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received unexpected event network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 for instance with vm_state active and task_state rescuing.#033[00m
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:57.286 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b0417ef8-54dc-4999-9f03-4726fda6df3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:57.287 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbd64ab8-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:57.287 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:57.288 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbd64ab8-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:25:57 np0005593232 nova_compute[250269]: 2026-01-23 10:25:57.289 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:57 np0005593232 kernel: tapfbd64ab8-90: entered promiscuous mode
Jan 23 05:25:57 np0005593232 NetworkManager[49057]: <info>  [1769163957.2904] manager: (tapfbd64ab8-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/307)
Jan 23 05:25:57 np0005593232 nova_compute[250269]: 2026-01-23 10:25:57.291 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:57.295 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbd64ab8-90, col_values=(('external_ids', {'iface-id': 'b648300b-e46c-4d3b-b02e-94ff684c03ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:25:57 np0005593232 nova_compute[250269]: 2026-01-23 10:25:57.296 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:57 np0005593232 ovn_controller[151001]: 2026-01-23T10:25:57Z|00659|binding|INFO|Releasing lport b648300b-e46c-4d3b-b02e-94ff684c03ae from this chassis (sb_readonly=0)
Jan 23 05:25:57 np0005593232 nova_compute[250269]: 2026-01-23 10:25:57.297 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:57.299 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:25:57 np0005593232 nova_compute[250269]: 2026-01-23 10:25:57.309 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:57.308 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d5e6cf96-6654-4799-8cb4-179778a167ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:57.311 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4.pid.haproxy
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:25:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:25:57.313 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'env', 'PROCESS_TAG=haproxy-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:25:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:25:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:57.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:25:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:25:57 np0005593232 nova_compute[250269]: 2026-01-23 10:25:57.691 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Removed pending event for ae979986-7780-443a-afbc-6b4be8f71da1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 23 05:25:57 np0005593232 nova_compute[250269]: 2026-01-23 10:25:57.693 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163957.689857, ae979986-7780-443a-afbc-6b4be8f71da1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:25:57 np0005593232 nova_compute[250269]: 2026-01-23 10:25:57.694 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:25:57 np0005593232 nova_compute[250269]: 2026-01-23 10:25:57.700 250273 DEBUG nova.compute.manager [None req-b2d09c81-e720-4ecb-8e59-60523757dd45 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:25:57 np0005593232 podman[361384]: 2026-01-23 10:25:57.732965914 +0000 UTC m=+0.060511001 container create 23bdca7f7a03939712e5374c2f5f0d267b4f4a5266d200bbf11610342988dd02 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 05:25:57 np0005593232 nova_compute[250269]: 2026-01-23 10:25:57.738 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:25:57 np0005593232 nova_compute[250269]: 2026-01-23 10:25:57.744 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:25:57 np0005593232 systemd[1]: Started libpod-conmon-23bdca7f7a03939712e5374c2f5f0d267b4f4a5266d200bbf11610342988dd02.scope.
Jan 23 05:25:57 np0005593232 nova_compute[250269]: 2026-01-23 10:25:57.783 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Jan 23 05:25:57 np0005593232 nova_compute[250269]: 2026-01-23 10:25:57.784 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163957.6900496, ae979986-7780-443a-afbc-6b4be8f71da1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:25:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2905: 321 pgs: 321 active+clean; 188 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 55 KiB/s rd, 967 KiB/s wr, 78 op/s
Jan 23 05:25:57 np0005593232 nova_compute[250269]: 2026-01-23 10:25:57.785 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] VM Started (Lifecycle Event)#033[00m
Jan 23 05:25:57 np0005593232 podman[361384]: 2026-01-23 10:25:57.706354878 +0000 UTC m=+0.033899995 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:25:57 np0005593232 nova_compute[250269]: 2026-01-23 10:25:57.824 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:25:57 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:25:57 np0005593232 nova_compute[250269]: 2026-01-23 10:25:57.833 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:25:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4114799e3b590c875b48144ab11d1bc24da1a148c1a0754d3f11b107b3987aa9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:25:57 np0005593232 podman[361384]: 2026-01-23 10:25:57.859323506 +0000 UTC m=+0.186868603 container init 23bdca7f7a03939712e5374c2f5f0d267b4f4a5266d200bbf11610342988dd02 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 23 05:25:57 np0005593232 podman[361384]: 2026-01-23 10:25:57.867027945 +0000 UTC m=+0.194573042 container start 23bdca7f7a03939712e5374c2f5f0d267b4f4a5266d200bbf11610342988dd02 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:25:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:25:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:57.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:25:57 np0005593232 neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4[361398]: [NOTICE]   (361402) : New worker (361404) forked
Jan 23 05:25:57 np0005593232 neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4[361398]: [NOTICE]   (361402) : Loading success.
Jan 23 05:25:58 np0005593232 nova_compute[250269]: 2026-01-23 10:25:58.616 250273 INFO nova.compute.manager [None req-f538ccbf-641f-4732-bb42-1a8b9eb0660c 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Unrescuing#033[00m
Jan 23 05:25:58 np0005593232 nova_compute[250269]: 2026-01-23 10:25:58.617 250273 DEBUG oslo_concurrency.lockutils [None req-f538ccbf-641f-4732-bb42-1a8b9eb0660c 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Acquiring lock "refresh_cache-ae979986-7780-443a-afbc-6b4be8f71da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:25:58 np0005593232 nova_compute[250269]: 2026-01-23 10:25:58.617 250273 DEBUG oslo_concurrency.lockutils [None req-f538ccbf-641f-4732-bb42-1a8b9eb0660c 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Acquired lock "refresh_cache-ae979986-7780-443a-afbc-6b4be8f71da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:25:58 np0005593232 nova_compute[250269]: 2026-01-23 10:25:58.618 250273 DEBUG nova.network.neutron [None req-f538ccbf-641f-4732-bb42-1a8b9eb0660c 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:25:59 np0005593232 nova_compute[250269]: 2026-01-23 10:25:59.356 250273 DEBUG nova.compute.manager [req-f351212f-86bc-4fa9-a77a-689f78daa182 req-0f7b085c-2f74-43a6-b7d5-83df0fbc32c4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received event network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:25:59 np0005593232 nova_compute[250269]: 2026-01-23 10:25:59.358 250273 DEBUG oslo_concurrency.lockutils [req-f351212f-86bc-4fa9-a77a-689f78daa182 req-0f7b085c-2f74-43a6-b7d5-83df0fbc32c4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:25:59 np0005593232 nova_compute[250269]: 2026-01-23 10:25:59.359 250273 DEBUG oslo_concurrency.lockutils [req-f351212f-86bc-4fa9-a77a-689f78daa182 req-0f7b085c-2f74-43a6-b7d5-83df0fbc32c4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:25:59 np0005593232 nova_compute[250269]: 2026-01-23 10:25:59.359 250273 DEBUG oslo_concurrency.lockutils [req-f351212f-86bc-4fa9-a77a-689f78daa182 req-0f7b085c-2f74-43a6-b7d5-83df0fbc32c4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:25:59 np0005593232 nova_compute[250269]: 2026-01-23 10:25:59.359 250273 DEBUG nova.compute.manager [req-f351212f-86bc-4fa9-a77a-689f78daa182 req-0f7b085c-2f74-43a6-b7d5-83df0fbc32c4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] No waiting events found dispatching network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:25:59 np0005593232 nova_compute[250269]: 2026-01-23 10:25:59.360 250273 WARNING nova.compute.manager [req-f351212f-86bc-4fa9-a77a-689f78daa182 req-0f7b085c-2f74-43a6-b7d5-83df0fbc32c4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received unexpected event network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 for instance with vm_state rescued and task_state unrescuing.#033[00m
Jan 23 05:25:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 05:25:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:25:59.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 05:25:59 np0005593232 nova_compute[250269]: 2026-01-23 10:25:59.761 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:25:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2906: 321 pgs: 321 active+clean; 214 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 96 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Jan 23 05:25:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:25:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:25:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:25:59.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:26:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:01.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:01 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:01.473 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:26:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2907: 321 pgs: 321 active+clean; 214 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 87 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Jan 23 05:26:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:26:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:01.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:26:02 np0005593232 nova_compute[250269]: 2026-01-23 10:26:02.163 250273 DEBUG nova.network.neutron [None req-f538ccbf-641f-4732-bb42-1a8b9eb0660c 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Updating instance_info_cache with network_info: [{"id": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "address": "fa:16:3e:da:00:1f", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a815bf1-0b", "ovs_interfaceid": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:26:02 np0005593232 nova_compute[250269]: 2026-01-23 10:26:02.294 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:26:02 np0005593232 nova_compute[250269]: 2026-01-23 10:26:02.296 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:26:02 np0005593232 nova_compute[250269]: 2026-01-23 10:26:02.499 250273 DEBUG oslo_concurrency.lockutils [None req-f538ccbf-641f-4732-bb42-1a8b9eb0660c 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Releasing lock "refresh_cache-ae979986-7780-443a-afbc-6b4be8f71da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:26:02 np0005593232 nova_compute[250269]: 2026-01-23 10:26:02.500 250273 DEBUG nova.objects.instance [None req-f538ccbf-641f-4732-bb42-1a8b9eb0660c 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lazy-loading 'flavor' on Instance uuid ae979986-7780-443a-afbc-6b4be8f71da1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:26:02 np0005593232 nova_compute[250269]: 2026-01-23 10:26:02.504 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:26:02 np0005593232 nova_compute[250269]: 2026-01-23 10:26:02.504 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:26:02 np0005593232 nova_compute[250269]: 2026-01-23 10:26:02.505 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:26:02 np0005593232 nova_compute[250269]: 2026-01-23 10:26:02.505 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:26:02 np0005593232 nova_compute[250269]: 2026-01-23 10:26:02.506 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:26:02 np0005593232 kernel: tap8a815bf1-0b (unregistering): left promiscuous mode
Jan 23 05:26:02 np0005593232 NetworkManager[49057]: <info>  [1769163962.6271] device (tap8a815bf1-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:26:02 np0005593232 ovn_controller[151001]: 2026-01-23T10:26:02Z|00660|binding|INFO|Releasing lport 8a815bf1-0b58-47f0-a81a-267ec84efc82 from this chassis (sb_readonly=0)
Jan 23 05:26:02 np0005593232 ovn_controller[151001]: 2026-01-23T10:26:02Z|00661|binding|INFO|Setting lport 8a815bf1-0b58-47f0-a81a-267ec84efc82 down in Southbound
Jan 23 05:26:02 np0005593232 nova_compute[250269]: 2026-01-23 10:26:02.646 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:02 np0005593232 ovn_controller[151001]: 2026-01-23T10:26:02Z|00662|binding|INFO|Removing iface tap8a815bf1-0b ovn-installed in OVS
Jan 23 05:26:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:02.670 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:00:1f 10.100.0.9'], port_security=['fa:16:3e:da:00:1f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ae979986-7780-443a-afbc-6b4be8f71da1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c27429e1d8f433a8a67ddb76f8798f1', 'neutron:revision_number': '6', 'neutron:security_group_ids': '29028637-714b-453c-9e54-c753b1c8b7f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0dedc65-79e0-4ae8-b1b0-46423e11b58a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=8a815bf1-0b58-47f0-a81a-267ec84efc82) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:26:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:02.671 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 8a815bf1-0b58-47f0-a81a-267ec84efc82 in datapath fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4 unbound from our chassis#033[00m
Jan 23 05:26:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:02.672 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:26:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:02.673 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[103621a1-c4b0-4e60-b889-441209ce0744]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:02.674 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4 namespace which is not needed anymore#033[00m
Jan 23 05:26:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:26:02 np0005593232 nova_compute[250269]: 2026-01-23 10:26:02.690 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:02 np0005593232 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d000000a4.scope: Deactivated successfully.
Jan 23 05:26:02 np0005593232 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d000000a4.scope: Consumed 5.988s CPU time.
Jan 23 05:26:02 np0005593232 systemd-machined[215836]: Machine qemu-74-instance-000000a4 terminated.
Jan 23 05:26:02 np0005593232 nova_compute[250269]: 2026-01-23 10:26:02.798 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:02 np0005593232 nova_compute[250269]: 2026-01-23 10:26:02.812 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:02 np0005593232 nova_compute[250269]: 2026-01-23 10:26:02.822 250273 INFO nova.virt.libvirt.driver [-] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Instance destroyed successfully.#033[00m
Jan 23 05:26:02 np0005593232 nova_compute[250269]: 2026-01-23 10:26:02.823 250273 DEBUG nova.objects.instance [None req-f538ccbf-641f-4732-bb42-1a8b9eb0660c 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lazy-loading 'numa_topology' on Instance uuid ae979986-7780-443a-afbc-6b4be8f71da1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:26:02 np0005593232 neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4[361398]: [NOTICE]   (361402) : haproxy version is 2.8.14-c23fe91
Jan 23 05:26:02 np0005593232 neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4[361398]: [NOTICE]   (361402) : path to executable is /usr/sbin/haproxy
Jan 23 05:26:02 np0005593232 neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4[361398]: [WARNING]  (361402) : Exiting Master process...
Jan 23 05:26:02 np0005593232 neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4[361398]: [ALERT]    (361402) : Current worker (361404) exited with code 143 (Terminated)
Jan 23 05:26:02 np0005593232 neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4[361398]: [WARNING]  (361402) : All workers exited. Exiting... (0)
Jan 23 05:26:02 np0005593232 systemd[1]: libpod-23bdca7f7a03939712e5374c2f5f0d267b4f4a5266d200bbf11610342988dd02.scope: Deactivated successfully.
Jan 23 05:26:02 np0005593232 podman[361461]: 2026-01-23 10:26:02.858719516 +0000 UTC m=+0.059647037 container died 23bdca7f7a03939712e5374c2f5f0d267b4f4a5266d200bbf11610342988dd02 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:26:02 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-23bdca7f7a03939712e5374c2f5f0d267b4f4a5266d200bbf11610342988dd02-userdata-shm.mount: Deactivated successfully.
Jan 23 05:26:02 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4114799e3b590c875b48144ab11d1bc24da1a148c1a0754d3f11b107b3987aa9-merged.mount: Deactivated successfully.
Jan 23 05:26:02 np0005593232 podman[361461]: 2026-01-23 10:26:02.921569932 +0000 UTC m=+0.122497423 container cleanup 23bdca7f7a03939712e5374c2f5f0d267b4f4a5266d200bbf11610342988dd02 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 05:26:02 np0005593232 systemd[1]: libpod-conmon-23bdca7f7a03939712e5374c2f5f0d267b4f4a5266d200bbf11610342988dd02.scope: Deactivated successfully.
Jan 23 05:26:03 np0005593232 podman[361497]: 2026-01-23 10:26:03.020055792 +0000 UTC m=+0.064794583 container remove 23bdca7f7a03939712e5374c2f5f0d267b4f4a5266d200bbf11610342988dd02 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.027 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[989f206c-ff16-4565-b258-2c29661b6ce1]: (4, ('Fri Jan 23 10:26:02 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4 (23bdca7f7a03939712e5374c2f5f0d267b4f4a5266d200bbf11610342988dd02)\n23bdca7f7a03939712e5374c2f5f0d267b4f4a5266d200bbf11610342988dd02\nFri Jan 23 10:26:02 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4 (23bdca7f7a03939712e5374c2f5f0d267b4f4a5266d200bbf11610342988dd02)\n23bdca7f7a03939712e5374c2f5f0d267b4f4a5266d200bbf11610342988dd02\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.030 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3322ed97-877e-4319-b81c-c78d27ce521a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.031 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbd64ab8-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:26:03 np0005593232 kernel: tapfbd64ab8-90: left promiscuous mode
Jan 23 05:26:03 np0005593232 nova_compute[250269]: 2026-01-23 10:26:03.033 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:03 np0005593232 NetworkManager[49057]: <info>  [1769163963.0531] manager: (tap8a815bf1-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/308)
Jan 23 05:26:03 np0005593232 systemd-udevd[361420]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:26:03 np0005593232 nova_compute[250269]: 2026-01-23 10:26:03.054 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:26:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3040910128' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.056 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[764b4b95-45b8-4ead-8038-0a6d3ee869fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:03 np0005593232 kernel: tap8a815bf1-0b: entered promiscuous mode
Jan 23 05:26:03 np0005593232 ovn_controller[151001]: 2026-01-23T10:26:03Z|00663|binding|INFO|Claiming lport 8a815bf1-0b58-47f0-a81a-267ec84efc82 for this chassis.
Jan 23 05:26:03 np0005593232 ovn_controller[151001]: 2026-01-23T10:26:03Z|00664|binding|INFO|8a815bf1-0b58-47f0-a81a-267ec84efc82: Claiming fa:16:3e:da:00:1f 10.100.0.9
Jan 23 05:26:03 np0005593232 nova_compute[250269]: 2026-01-23 10:26:03.063 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:03 np0005593232 NetworkManager[49057]: <info>  [1769163963.0718] device (tap8a815bf1-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:26:03 np0005593232 NetworkManager[49057]: <info>  [1769163963.0723] device (tap8a815bf1-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.070 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[73c43a97-7bf2-4a5b-8c11-9ead9a736d65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.073 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[53a87764-3765-415b-9776-f422e29c1516]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:03 np0005593232 ovn_controller[151001]: 2026-01-23T10:26:03Z|00665|binding|INFO|Setting lport 8a815bf1-0b58-47f0-a81a-267ec84efc82 ovn-installed in OVS
Jan 23 05:26:03 np0005593232 nova_compute[250269]: 2026-01-23 10:26:03.086 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:26:03 np0005593232 nova_compute[250269]: 2026-01-23 10:26:03.087 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.089 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[fdd70c84-4da2-4985-b610-59221e3b9b35]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787018, 'reachable_time': 19189, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361524, 'error': None, 'target': 'ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.092 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.092 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[fb775ef6-250a-4f33-8edc-c6cb977aa623]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:03 np0005593232 systemd[1]: run-netns-ovnmeta\x2dfbd64ab8\x2d9e5b\x2d4300\x2d98d7\x2d50a5d6fbefc4.mount: Deactivated successfully.
Jan 23 05:26:03 np0005593232 ovn_controller[151001]: 2026-01-23T10:26:03Z|00666|binding|INFO|Setting lport 8a815bf1-0b58-47f0-a81a-267ec84efc82 up in Southbound
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.128 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:00:1f 10.100.0.9'], port_security=['fa:16:3e:da:00:1f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ae979986-7780-443a-afbc-6b4be8f71da1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c27429e1d8f433a8a67ddb76f8798f1', 'neutron:revision_number': '6', 'neutron:security_group_ids': '29028637-714b-453c-9e54-c753b1c8b7f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0dedc65-79e0-4ae8-b1b0-46423e11b58a, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=8a815bf1-0b58-47f0-a81a-267ec84efc82) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.130 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 8a815bf1-0b58-47f0-a81a-267ec84efc82 in datapath fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4 bound to our chassis#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.133 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4#033[00m
Jan 23 05:26:03 np0005593232 systemd-machined[215836]: New machine qemu-75-instance-000000a4.
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.145 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0e7cbd-e2ba-41d5-9eae-f5050bb60ca5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.148 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfbd64ab8-91 in ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.150 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfbd64ab8-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.151 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[328bf0f6-bfee-4b7d-8f1c-0a453b5d93ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.151 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8077e797-3802-469f-ba0a-efdd65bcf108]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:03 np0005593232 systemd[1]: Started Virtual Machine qemu-75-instance-000000a4.
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.166 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[57057707-d2e1-4d1d-ac11-ddabb48bc553]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.189 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e742f548-c36b-43a0-ba76-0223107b89fd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.219 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[4141c85a-895c-4d40-8b98-33742704a73c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:03 np0005593232 NetworkManager[49057]: <info>  [1769163963.2276] manager: (tapfbd64ab8-90): new Veth device (/org/freedesktop/NetworkManager/Devices/309)
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.228 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[82355143-f85d-424f-b34f-a9ccf83481ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.262 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[5b98ff06-a603-4df7-bcf5-1c456edbe70f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.264 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[b390972a-acc4-4aa5-bc74-9fa6db99b2ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:03 np0005593232 NetworkManager[49057]: <info>  [1769163963.2831] device (tapfbd64ab8-90): carrier: link connected
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.291 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[bee33aa1-ec7a-4d46-b816-1e866fe78ae1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.305 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9a8c72d2-9607-4f24-ae10-0953a443d66f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbd64ab8-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:7c:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 199], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787642, 'reachable_time': 39698, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361560, 'error': None, 'target': 'ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.329 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf1586d-2372-4668-b91e-337e17aae105]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:7c5a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 787642, 'tstamp': 787642}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361561, 'error': None, 'target': 'ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.347 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0787e5d7-c756-4837-a94e-05fdebc7ba54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbd64ab8-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:7c:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 199], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787642, 'reachable_time': 39698, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361562, 'error': None, 'target': 'ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.389 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7871b429-ed56-4dd8-8d87-c518c5dac77d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:03.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.458 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d815de29-7b83-435b-86ff-ddfdea4400d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.459 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbd64ab8-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.460 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.460 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbd64ab8-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:26:03 np0005593232 kernel: tapfbd64ab8-90: entered promiscuous mode
Jan 23 05:26:03 np0005593232 NetworkManager[49057]: <info>  [1769163963.4641] manager: (tapfbd64ab8-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/310)
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.464 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbd64ab8-90, col_values=(('external_ids', {'iface-id': 'b648300b-e46c-4d3b-b02e-94ff684c03ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:26:03 np0005593232 ovn_controller[151001]: 2026-01-23T10:26:03Z|00667|binding|INFO|Releasing lport b648300b-e46c-4d3b-b02e-94ff684c03ae from this chassis (sb_readonly=0)
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.468 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.469 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[25aace55-8a18-4ce5-8be3-cb3bd6a0d954]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.470 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4.pid.haproxy
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:26:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:03.472 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'env', 'PROCESS_TAG=haproxy-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:26:03 np0005593232 nova_compute[250269]: 2026-01-23 10:26:03.479 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2908: 321 pgs: 321 active+clean; 214 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Jan 23 05:26:03 np0005593232 nova_compute[250269]: 2026-01-23 10:26:03.869 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Removed pending event for ae979986-7780-443a-afbc-6b4be8f71da1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 23 05:26:03 np0005593232 nova_compute[250269]: 2026-01-23 10:26:03.870 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163963.8689966, ae979986-7780-443a-afbc-6b4be8f71da1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:26:03 np0005593232 nova_compute[250269]: 2026-01-23 10:26:03.871 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:26:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:03.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:03 np0005593232 podman[361635]: 2026-01-23 10:26:03.974839704 +0000 UTC m=+0.088392784 container create d56c23a30231b68090a1d4feffdf42220c920816131ae7108314318f9e32de35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 05:26:03 np0005593232 nova_compute[250269]: 2026-01-23 10:26:03.989 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:26:03 np0005593232 nova_compute[250269]: 2026-01-23 10:26:03.998 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:26:03 np0005593232 nova_compute[250269]: 2026-01-23 10:26:03.998 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:26:04 np0005593232 nova_compute[250269]: 2026-01-23 10:26:04.000 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:26:04 np0005593232 podman[361635]: 2026-01-23 10:26:03.935757793 +0000 UTC m=+0.049310883 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:26:04 np0005593232 nova_compute[250269]: 2026-01-23 10:26:04.035 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Jan 23 05:26:04 np0005593232 nova_compute[250269]: 2026-01-23 10:26:04.039 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163963.8713064, ae979986-7780-443a-afbc-6b4be8f71da1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:26:04 np0005593232 nova_compute[250269]: 2026-01-23 10:26:04.040 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] VM Started (Lifecycle Event)#033[00m
Jan 23 05:26:04 np0005593232 systemd[1]: Started libpod-conmon-d56c23a30231b68090a1d4feffdf42220c920816131ae7108314318f9e32de35.scope.
Jan 23 05:26:04 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:26:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5204ad2e1d5ec9589fcda41b7a30409d4167b4cd72e0dad5027d919787567051/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:26:04 np0005593232 podman[361635]: 2026-01-23 10:26:04.114662059 +0000 UTC m=+0.228215219 container init d56c23a30231b68090a1d4feffdf42220c920816131ae7108314318f9e32de35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 05:26:04 np0005593232 podman[361635]: 2026-01-23 10:26:04.123770798 +0000 UTC m=+0.237323888 container start d56c23a30231b68090a1d4feffdf42220c920816131ae7108314318f9e32de35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 05:26:04 np0005593232 neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4[361670]: [NOTICE]   (361674) : New worker (361677) forked
Jan 23 05:26:04 np0005593232 neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4[361670]: [NOTICE]   (361674) : Loading success.
Jan 23 05:26:04 np0005593232 nova_compute[250269]: 2026-01-23 10:26:04.283 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:26:04 np0005593232 nova_compute[250269]: 2026-01-23 10:26:04.284 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4231MB free_disk=20.921730041503906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:26:04 np0005593232 nova_compute[250269]: 2026-01-23 10:26:04.285 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:26:04 np0005593232 nova_compute[250269]: 2026-01-23 10:26:04.285 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:26:04 np0005593232 nova_compute[250269]: 2026-01-23 10:26:04.367 250273 DEBUG nova.compute.manager [None req-f538ccbf-641f-4732-bb42-1a8b9eb0660c 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:26:04 np0005593232 nova_compute[250269]: 2026-01-23 10:26:04.712 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:26:04 np0005593232 nova_compute[250269]: 2026-01-23 10:26:04.725 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:26:04 np0005593232 nova_compute[250269]: 2026-01-23 10:26:04.762 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:04 np0005593232 nova_compute[250269]: 2026-01-23 10:26:04.812 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Jan 23 05:26:05 np0005593232 nova_compute[250269]: 2026-01-23 10:26:05.081 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance ae979986-7780-443a-afbc-6b4be8f71da1 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:26:05 np0005593232 nova_compute[250269]: 2026-01-23 10:26:05.082 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:26:05 np0005593232 nova_compute[250269]: 2026-01-23 10:26:05.082 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:26:05 np0005593232 nova_compute[250269]: 2026-01-23 10:26:05.186 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing inventories for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 23 05:26:05 np0005593232 nova_compute[250269]: 2026-01-23 10:26:05.304 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating ProviderTree inventory for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 23 05:26:05 np0005593232 nova_compute[250269]: 2026-01-23 10:26:05.305 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 05:26:05 np0005593232 nova_compute[250269]: 2026-01-23 10:26:05.350 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing aggregate associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 23 05:26:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:26:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:05.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:26:05 np0005593232 nova_compute[250269]: 2026-01-23 10:26:05.532 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing trait associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 23 05:26:05 np0005593232 nova_compute[250269]: 2026-01-23 10:26:05.673 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:26:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2909: 321 pgs: 321 active+clean; 214 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 143 op/s
Jan 23 05:26:05 np0005593232 nova_compute[250269]: 2026-01-23 10:26:05.839 250273 DEBUG nova.compute.manager [req-492f8075-be1d-42f1-b34d-5c0f88446cca req-1f683eb1-242c-4296-9967-6aa529fa2838 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received event network-vif-unplugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:26:05 np0005593232 nova_compute[250269]: 2026-01-23 10:26:05.841 250273 DEBUG oslo_concurrency.lockutils [req-492f8075-be1d-42f1-b34d-5c0f88446cca req-1f683eb1-242c-4296-9967-6aa529fa2838 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:26:05 np0005593232 nova_compute[250269]: 2026-01-23 10:26:05.841 250273 DEBUG oslo_concurrency.lockutils [req-492f8075-be1d-42f1-b34d-5c0f88446cca req-1f683eb1-242c-4296-9967-6aa529fa2838 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:26:05 np0005593232 nova_compute[250269]: 2026-01-23 10:26:05.842 250273 DEBUG oslo_concurrency.lockutils [req-492f8075-be1d-42f1-b34d-5c0f88446cca req-1f683eb1-242c-4296-9967-6aa529fa2838 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:26:05 np0005593232 nova_compute[250269]: 2026-01-23 10:26:05.842 250273 DEBUG nova.compute.manager [req-492f8075-be1d-42f1-b34d-5c0f88446cca req-1f683eb1-242c-4296-9967-6aa529fa2838 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] No waiting events found dispatching network-vif-unplugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:26:05 np0005593232 nova_compute[250269]: 2026-01-23 10:26:05.843 250273 WARNING nova.compute.manager [req-492f8075-be1d-42f1-b34d-5c0f88446cca req-1f683eb1-242c-4296-9967-6aa529fa2838 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received unexpected event network-vif-unplugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:26:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 05:26:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:05.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 05:26:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:26:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3744239460' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:26:06 np0005593232 nova_compute[250269]: 2026-01-23 10:26:06.200 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:26:06 np0005593232 nova_compute[250269]: 2026-01-23 10:26:06.211 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:26:06 np0005593232 nova_compute[250269]: 2026-01-23 10:26:06.568 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:26:07 np0005593232 nova_compute[250269]: 2026-01-23 10:26:07.108 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:26:07 np0005593232 nova_compute[250269]: 2026-01-23 10:26:07.109 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:26:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:07.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:26:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:26:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:26:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:26:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:26:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:26:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:26:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2910: 321 pgs: 321 active+clean; 213 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 139 op/s
Jan 23 05:26:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:07.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:08 np0005593232 nova_compute[250269]: 2026-01-23 10:26:08.079 250273 DEBUG nova.compute.manager [req-e933e4a7-c376-47c2-ba6a-66c158d68ef6 req-b39e6328-176d-497f-b88d-894ad580fc19 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received event network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:26:08 np0005593232 nova_compute[250269]: 2026-01-23 10:26:08.080 250273 DEBUG oslo_concurrency.lockutils [req-e933e4a7-c376-47c2-ba6a-66c158d68ef6 req-b39e6328-176d-497f-b88d-894ad580fc19 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:26:08 np0005593232 nova_compute[250269]: 2026-01-23 10:26:08.080 250273 DEBUG oslo_concurrency.lockutils [req-e933e4a7-c376-47c2-ba6a-66c158d68ef6 req-b39e6328-176d-497f-b88d-894ad580fc19 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:26:08 np0005593232 nova_compute[250269]: 2026-01-23 10:26:08.081 250273 DEBUG oslo_concurrency.lockutils [req-e933e4a7-c376-47c2-ba6a-66c158d68ef6 req-b39e6328-176d-497f-b88d-894ad580fc19 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:26:08 np0005593232 nova_compute[250269]: 2026-01-23 10:26:08.081 250273 DEBUG nova.compute.manager [req-e933e4a7-c376-47c2-ba6a-66c158d68ef6 req-b39e6328-176d-497f-b88d-894ad580fc19 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] No waiting events found dispatching network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:26:08 np0005593232 nova_compute[250269]: 2026-01-23 10:26:08.081 250273 WARNING nova.compute.manager [req-e933e4a7-c376-47c2-ba6a-66c158d68ef6 req-b39e6328-176d-497f-b88d-894ad580fc19 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received unexpected event network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:26:08 np0005593232 nova_compute[250269]: 2026-01-23 10:26:08.107 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:26:08 np0005593232 nova_compute[250269]: 2026-01-23 10:26:08.108 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:26:08 np0005593232 nova_compute[250269]: 2026-01-23 10:26:08.108 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:26:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:26:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:09.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:26:09 np0005593232 nova_compute[250269]: 2026-01-23 10:26:09.766 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2911: 321 pgs: 321 active+clean; 225 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 990 KiB/s wr, 118 op/s
Jan 23 05:26:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:26:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:09.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:26:09 np0005593232 nova_compute[250269]: 2026-01-23 10:26:09.945 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-ae979986-7780-443a-afbc-6b4be8f71da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:26:09 np0005593232 nova_compute[250269]: 2026-01-23 10:26:09.945 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-ae979986-7780-443a-afbc-6b4be8f71da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:26:09 np0005593232 nova_compute[250269]: 2026-01-23 10:26:09.946 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:26:09 np0005593232 nova_compute[250269]: 2026-01-23 10:26:09.946 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ae979986-7780-443a-afbc-6b4be8f71da1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:26:10 np0005593232 nova_compute[250269]: 2026-01-23 10:26:10.200 250273 DEBUG nova.compute.manager [req-6e6de69c-15b3-466c-b987-5d6b464df8f6 req-497f6c41-b589-4989-a4ab-4f73582f67a2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received event network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:26:10 np0005593232 nova_compute[250269]: 2026-01-23 10:26:10.201 250273 DEBUG oslo_concurrency.lockutils [req-6e6de69c-15b3-466c-b987-5d6b464df8f6 req-497f6c41-b589-4989-a4ab-4f73582f67a2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:26:10 np0005593232 nova_compute[250269]: 2026-01-23 10:26:10.202 250273 DEBUG oslo_concurrency.lockutils [req-6e6de69c-15b3-466c-b987-5d6b464df8f6 req-497f6c41-b589-4989-a4ab-4f73582f67a2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:26:10 np0005593232 nova_compute[250269]: 2026-01-23 10:26:10.203 250273 DEBUG oslo_concurrency.lockutils [req-6e6de69c-15b3-466c-b987-5d6b464df8f6 req-497f6c41-b589-4989-a4ab-4f73582f67a2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:26:10 np0005593232 nova_compute[250269]: 2026-01-23 10:26:10.204 250273 DEBUG nova.compute.manager [req-6e6de69c-15b3-466c-b987-5d6b464df8f6 req-497f6c41-b589-4989-a4ab-4f73582f67a2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] No waiting events found dispatching network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:26:10 np0005593232 nova_compute[250269]: 2026-01-23 10:26:10.204 250273 WARNING nova.compute.manager [req-6e6de69c-15b3-466c-b987-5d6b464df8f6 req-497f6c41-b589-4989-a4ab-4f73582f67a2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received unexpected event network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:26:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:26:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:11.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:26:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2912: 321 pgs: 321 active+clean; 225 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 128 KiB/s wr, 106 op/s
Jan 23 05:26:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:26:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:11.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:26:12 np0005593232 nova_compute[250269]: 2026-01-23 10:26:12.332 250273 DEBUG nova.compute.manager [req-2791e314-a8b4-45a8-ae9c-154c1b12475e req-e983a732-4fe6-4bc9-ae3d-c0b8f4550d71 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received event network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:26:12 np0005593232 nova_compute[250269]: 2026-01-23 10:26:12.333 250273 DEBUG oslo_concurrency.lockutils [req-2791e314-a8b4-45a8-ae9c-154c1b12475e req-e983a732-4fe6-4bc9-ae3d-c0b8f4550d71 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:26:12 np0005593232 nova_compute[250269]: 2026-01-23 10:26:12.334 250273 DEBUG oslo_concurrency.lockutils [req-2791e314-a8b4-45a8-ae9c-154c1b12475e req-e983a732-4fe6-4bc9-ae3d-c0b8f4550d71 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:26:12 np0005593232 nova_compute[250269]: 2026-01-23 10:26:12.334 250273 DEBUG oslo_concurrency.lockutils [req-2791e314-a8b4-45a8-ae9c-154c1b12475e req-e983a732-4fe6-4bc9-ae3d-c0b8f4550d71 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:26:12 np0005593232 nova_compute[250269]: 2026-01-23 10:26:12.335 250273 DEBUG nova.compute.manager [req-2791e314-a8b4-45a8-ae9c-154c1b12475e req-e983a732-4fe6-4bc9-ae3d-c0b8f4550d71 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] No waiting events found dispatching network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:26:12 np0005593232 nova_compute[250269]: 2026-01-23 10:26:12.336 250273 WARNING nova.compute.manager [req-2791e314-a8b4-45a8-ae9c-154c1b12475e req-e983a732-4fe6-4bc9-ae3d-c0b8f4550d71 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received unexpected event network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:26:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:26:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:26:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:13.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:26:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2913: 321 pgs: 321 active+clean; 260 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 121 op/s
Jan 23 05:26:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:26:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:13.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:26:14 np0005593232 nova_compute[250269]: 2026-01-23 10:26:14.768 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:14 np0005593232 nova_compute[250269]: 2026-01-23 10:26:14.770 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:15.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:15 np0005593232 podman[361737]: 2026-01-23 10:26:15.797080509 +0000 UTC m=+0.147636258 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:26:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2914: 321 pgs: 321 active+clean; 260 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 23 05:26:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:26:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:15.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:26:16 np0005593232 nova_compute[250269]: 2026-01-23 10:26:16.048 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Updating instance_info_cache with network_info: [{"id": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "address": "fa:16:3e:da:00:1f", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a815bf1-0b", "ovs_interfaceid": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:26:16 np0005593232 nova_compute[250269]: 2026-01-23 10:26:16.077 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-ae979986-7780-443a-afbc-6b4be8f71da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:26:16 np0005593232 nova_compute[250269]: 2026-01-23 10:26:16.077 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:26:16 np0005593232 nova_compute[250269]: 2026-01-23 10:26:16.078 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:26:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:26:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:17.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:26:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:26:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2915: 321 pgs: 321 active+clean; 260 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 1.8 MiB/s wr, 48 op/s
Jan 23 05:26:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:17.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:18 np0005593232 nova_compute[250269]: 2026-01-23 10:26:18.417 250273 DEBUG oslo_concurrency.lockutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Acquiring lock "da8fd4b4-46c3-412e-aeeb-499a3fec1bc5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:26:18 np0005593232 nova_compute[250269]: 2026-01-23 10:26:18.418 250273 DEBUG oslo_concurrency.lockutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "da8fd4b4-46c3-412e-aeeb-499a3fec1bc5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:26:18 np0005593232 nova_compute[250269]: 2026-01-23 10:26:18.457 250273 DEBUG nova.compute.manager [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:26:18 np0005593232 nova_compute[250269]: 2026-01-23 10:26:18.546 250273 DEBUG oslo_concurrency.lockutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:26:18 np0005593232 nova_compute[250269]: 2026-01-23 10:26:18.546 250273 DEBUG oslo_concurrency.lockutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:26:18 np0005593232 nova_compute[250269]: 2026-01-23 10:26:18.556 250273 DEBUG nova.virt.hardware [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:26:18 np0005593232 nova_compute[250269]: 2026-01-23 10:26:18.557 250273 INFO nova.compute.claims [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:26:18 np0005593232 nova_compute[250269]: 2026-01-23 10:26:18.718 250273 DEBUG oslo_concurrency.processutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:26:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:26:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4078492264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:26:19 np0005593232 nova_compute[250269]: 2026-01-23 10:26:19.269 250273 DEBUG oslo_concurrency.processutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:26:19 np0005593232 nova_compute[250269]: 2026-01-23 10:26:19.277 250273 DEBUG nova.compute.provider_tree [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:26:19 np0005593232 nova_compute[250269]: 2026-01-23 10:26:19.313 250273 DEBUG nova.scheduler.client.report [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:26:19 np0005593232 nova_compute[250269]: 2026-01-23 10:26:19.339 250273 DEBUG oslo_concurrency.lockutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:26:19 np0005593232 nova_compute[250269]: 2026-01-23 10:26:19.341 250273 DEBUG nova.compute.manager [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:26:19 np0005593232 nova_compute[250269]: 2026-01-23 10:26:19.389 250273 DEBUG nova.compute.manager [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:26:19 np0005593232 nova_compute[250269]: 2026-01-23 10:26:19.389 250273 DEBUG nova.network.neutron [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:26:19 np0005593232 nova_compute[250269]: 2026-01-23 10:26:19.411 250273 INFO nova.virt.libvirt.driver [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:26:19 np0005593232 nova_compute[250269]: 2026-01-23 10:26:19.431 250273 DEBUG nova.compute.manager [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:26:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:26:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:19.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:26:19 np0005593232 nova_compute[250269]: 2026-01-23 10:26:19.541 250273 DEBUG nova.compute.manager [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 05:26:19 np0005593232 nova_compute[250269]: 2026-01-23 10:26:19.542 250273 DEBUG nova.virt.libvirt.driver [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:26:19 np0005593232 nova_compute[250269]: 2026-01-23 10:26:19.543 250273 INFO nova.virt.libvirt.driver [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Creating image(s)#033[00m
Jan 23 05:26:19 np0005593232 nova_compute[250269]: 2026-01-23 10:26:19.581 250273 DEBUG nova.storage.rbd_utils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] rbd image da8fd4b4-46c3-412e-aeeb-499a3fec1bc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:26:19 np0005593232 nova_compute[250269]: 2026-01-23 10:26:19.639 250273 DEBUG nova.storage.rbd_utils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] rbd image da8fd4b4-46c3-412e-aeeb-499a3fec1bc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:26:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2916: 321 pgs: 321 active+clean; 260 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 858 KiB/s rd, 1.8 MiB/s wr, 63 op/s
Jan 23 05:26:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:19.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:19 np0005593232 nova_compute[250269]: 2026-01-23 10:26:19.968 250273 DEBUG nova.storage.rbd_utils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] rbd image da8fd4b4-46c3-412e-aeeb-499a3fec1bc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:26:19 np0005593232 nova_compute[250269]: 2026-01-23 10:26:19.973 250273 DEBUG oslo_concurrency.processutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:26:20 np0005593232 nova_compute[250269]: 2026-01-23 10:26:20.022 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:20 np0005593232 nova_compute[250269]: 2026-01-23 10:26:20.030 250273 DEBUG nova.policy [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0d6a628e0dcb441fa41457bf719e65a0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5c27429e1d8f433a8a67ddb76f8798f1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:26:20 np0005593232 nova_compute[250269]: 2026-01-23 10:26:20.079 250273 DEBUG oslo_concurrency.processutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:26:20 np0005593232 nova_compute[250269]: 2026-01-23 10:26:20.080 250273 DEBUG oslo_concurrency.lockutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:26:20 np0005593232 nova_compute[250269]: 2026-01-23 10:26:20.081 250273 DEBUG oslo_concurrency.lockutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:26:20 np0005593232 nova_compute[250269]: 2026-01-23 10:26:20.081 250273 DEBUG oslo_concurrency.lockutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:26:20 np0005593232 nova_compute[250269]: 2026-01-23 10:26:20.114 250273 DEBUG nova.storage.rbd_utils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] rbd image da8fd4b4-46c3-412e-aeeb-499a3fec1bc5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:26:20 np0005593232 nova_compute[250269]: 2026-01-23 10:26:20.118 250273 DEBUG oslo_concurrency.processutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 da8fd4b4-46c3-412e-aeeb-499a3fec1bc5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:26:20 np0005593232 nova_compute[250269]: 2026-01-23 10:26:20.418 250273 DEBUG oslo_concurrency.processutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 da8fd4b4-46c3-412e-aeeb-499a3fec1bc5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:26:20 np0005593232 nova_compute[250269]: 2026-01-23 10:26:20.526 250273 DEBUG nova.storage.rbd_utils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] resizing rbd image da8fd4b4-46c3-412e-aeeb-499a3fec1bc5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 05:26:20 np0005593232 nova_compute[250269]: 2026-01-23 10:26:20.695 250273 DEBUG nova.objects.instance [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lazy-loading 'migration_context' on Instance uuid da8fd4b4-46c3-412e-aeeb-499a3fec1bc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:26:20 np0005593232 nova_compute[250269]: 2026-01-23 10:26:20.717 250273 DEBUG nova.virt.libvirt.driver [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 05:26:20 np0005593232 nova_compute[250269]: 2026-01-23 10:26:20.718 250273 DEBUG nova.virt.libvirt.driver [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Ensure instance console log exists: /var/lib/nova/instances/da8fd4b4-46c3-412e-aeeb-499a3fec1bc5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:26:20 np0005593232 nova_compute[250269]: 2026-01-23 10:26:20.719 250273 DEBUG oslo_concurrency.lockutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:26:20 np0005593232 nova_compute[250269]: 2026-01-23 10:26:20.720 250273 DEBUG oslo_concurrency.lockutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:26:20 np0005593232 nova_compute[250269]: 2026-01-23 10:26:20.720 250273 DEBUG oslo_concurrency.lockutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:26:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:21 np0005593232 podman[361984]: 2026-01-23 10:26:21.445375234 +0000 UTC m=+0.085695647 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 05:26:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:21.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2917: 321 pgs: 321 active+clean; 260 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 848 KiB/s rd, 1.7 MiB/s wr, 51 op/s
Jan 23 05:26:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:26:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:21.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:26:22 np0005593232 nova_compute[250269]: 2026-01-23 10:26:22.258 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:26:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:26:23 np0005593232 nova_compute[250269]: 2026-01-23 10:26:23.276 250273 DEBUG nova.network.neutron [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Successfully created port: 610277e6-a454-47a7-8c51-aa13506c9f66 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:26:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:23.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2918: 321 pgs: 321 active+clean; 284 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 114 op/s
Jan 23 05:26:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:26:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:23.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:26:24 np0005593232 nova_compute[250269]: 2026-01-23 10:26:24.774 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:24 np0005593232 nova_compute[250269]: 2026-01-23 10:26:24.939 250273 DEBUG nova.network.neutron [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Successfully updated port: 610277e6-a454-47a7-8c51-aa13506c9f66 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:26:24 np0005593232 nova_compute[250269]: 2026-01-23 10:26:24.956 250273 DEBUG oslo_concurrency.lockutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Acquiring lock "refresh_cache-da8fd4b4-46c3-412e-aeeb-499a3fec1bc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:26:24 np0005593232 nova_compute[250269]: 2026-01-23 10:26:24.957 250273 DEBUG oslo_concurrency.lockutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Acquired lock "refresh_cache-da8fd4b4-46c3-412e-aeeb-499a3fec1bc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:26:24 np0005593232 nova_compute[250269]: 2026-01-23 10:26:24.957 250273 DEBUG nova.network.neutron [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:26:25 np0005593232 nova_compute[250269]: 2026-01-23 10:26:25.027 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:25 np0005593232 nova_compute[250269]: 2026-01-23 10:26:25.174 250273 DEBUG nova.network.neutron [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:26:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:25.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:26:25.724758) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163985724842, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 1140, "num_deletes": 252, "total_data_size": 1841886, "memory_usage": 1874816, "flush_reason": "Manual Compaction"}
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163985746707, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 1823916, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64195, "largest_seqno": 65334, "table_properties": {"data_size": 1818311, "index_size": 2935, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12543, "raw_average_key_size": 20, "raw_value_size": 1806945, "raw_average_value_size": 2938, "num_data_blocks": 128, "num_entries": 615, "num_filter_entries": 615, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769163883, "oldest_key_time": 1769163883, "file_creation_time": 1769163985, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 22044 microseconds, and 6864 cpu microseconds.
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:26:25.746801) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 1823916 bytes OK
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:26:25.746847) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:26:25.749473) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:26:25.749504) EVENT_LOG_v1 {"time_micros": 1769163985749493, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:26:25.749544) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 1836606, prev total WAL file size 1836606, number of live WAL files 2.
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:26:25.751458) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(1781KB)], [149(10156KB)]
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163985751532, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 12224403, "oldest_snapshot_seqno": -1}
Jan 23 05:26:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2919: 321 pgs: 321 active+clean; 306 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 118 op/s
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 8799 keys, 10313330 bytes, temperature: kUnknown
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163985867539, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 10313330, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10258686, "index_size": 31555, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22021, "raw_key_size": 233173, "raw_average_key_size": 26, "raw_value_size": 10106338, "raw_average_value_size": 1148, "num_data_blocks": 1198, "num_entries": 8799, "num_filter_entries": 8799, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769163985, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:26:25.868287) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 10313330 bytes
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:26:25.897722) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.2 rd, 88.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 9.9 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(12.4) write-amplify(5.7) OK, records in: 9322, records dropped: 523 output_compression: NoCompression
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:26:25.897785) EVENT_LOG_v1 {"time_micros": 1769163985897759, "job": 92, "event": "compaction_finished", "compaction_time_micros": 116219, "compaction_time_cpu_micros": 43412, "output_level": 6, "num_output_files": 1, "total_output_size": 10313330, "num_input_records": 9322, "num_output_records": 8799, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163985898832, "job": 92, "event": "table_file_deletion", "file_number": 151}
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769163985903042, "job": 92, "event": "table_file_deletion", "file_number": 149}
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:26:25.751234) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:26:25.903187) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:26:25.903201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:26:25.903207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:26:25.903211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:26:25 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:26:25.903215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:26:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:25.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:26 np0005593232 nova_compute[250269]: 2026-01-23 10:26:26.368 250273 DEBUG nova.compute.manager [req-77585b1f-e7c7-450a-8ff2-4091b5dd317d req-e40e3eee-8c4a-4366-9d21-00893e793afa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Received event network-changed-610277e6-a454-47a7-8c51-aa13506c9f66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:26:26 np0005593232 nova_compute[250269]: 2026-01-23 10:26:26.369 250273 DEBUG nova.compute.manager [req-77585b1f-e7c7-450a-8ff2-4091b5dd317d req-e40e3eee-8c4a-4366-9d21-00893e793afa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Refreshing instance network info cache due to event network-changed-610277e6-a454-47a7-8c51-aa13506c9f66. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:26:26 np0005593232 nova_compute[250269]: 2026-01-23 10:26:26.369 250273 DEBUG oslo_concurrency.lockutils [req-77585b1f-e7c7-450a-8ff2-4091b5dd317d req-e40e3eee-8c4a-4366-9d21-00893e793afa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-da8fd4b4-46c3-412e-aeeb-499a3fec1bc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:26:26 np0005593232 nova_compute[250269]: 2026-01-23 10:26:26.883 250273 DEBUG nova.network.neutron [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Updating instance_info_cache with network_info: [{"id": "610277e6-a454-47a7-8c51-aa13506c9f66", "address": "fa:16:3e:e9:0a:45", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap610277e6-a4", "ovs_interfaceid": "610277e6-a454-47a7-8c51-aa13506c9f66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:26:26 np0005593232 nova_compute[250269]: 2026-01-23 10:26:26.970 250273 DEBUG oslo_concurrency.lockutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Releasing lock "refresh_cache-da8fd4b4-46c3-412e-aeeb-499a3fec1bc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:26:26 np0005593232 nova_compute[250269]: 2026-01-23 10:26:26.971 250273 DEBUG nova.compute.manager [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Instance network_info: |[{"id": "610277e6-a454-47a7-8c51-aa13506c9f66", "address": "fa:16:3e:e9:0a:45", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap610277e6-a4", "ovs_interfaceid": "610277e6-a454-47a7-8c51-aa13506c9f66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:26:26 np0005593232 nova_compute[250269]: 2026-01-23 10:26:26.972 250273 DEBUG oslo_concurrency.lockutils [req-77585b1f-e7c7-450a-8ff2-4091b5dd317d req-e40e3eee-8c4a-4366-9d21-00893e793afa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-da8fd4b4-46c3-412e-aeeb-499a3fec1bc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:26:26 np0005593232 nova_compute[250269]: 2026-01-23 10:26:26.974 250273 DEBUG nova.network.neutron [req-77585b1f-e7c7-450a-8ff2-4091b5dd317d req-e40e3eee-8c4a-4366-9d21-00893e793afa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Refreshing network info cache for port 610277e6-a454-47a7-8c51-aa13506c9f66 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:26:26 np0005593232 nova_compute[250269]: 2026-01-23 10:26:26.981 250273 DEBUG nova.virt.libvirt.driver [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Start _get_guest_xml network_info=[{"id": "610277e6-a454-47a7-8c51-aa13506c9f66", "address": "fa:16:3e:e9:0a:45", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap610277e6-a4", "ovs_interfaceid": "610277e6-a454-47a7-8c51-aa13506c9f66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:26:26 np0005593232 nova_compute[250269]: 2026-01-23 10:26:26.993 250273 WARNING nova.virt.libvirt.driver [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:26:27 np0005593232 nova_compute[250269]: 2026-01-23 10:26:27.025 250273 DEBUG nova.virt.libvirt.host [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:26:27 np0005593232 nova_compute[250269]: 2026-01-23 10:26:27.027 250273 DEBUG nova.virt.libvirt.host [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:26:27 np0005593232 nova_compute[250269]: 2026-01-23 10:26:27.034 250273 DEBUG nova.virt.libvirt.host [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:26:27 np0005593232 nova_compute[250269]: 2026-01-23 10:26:27.036 250273 DEBUG nova.virt.libvirt.host [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:26:27 np0005593232 nova_compute[250269]: 2026-01-23 10:26:27.039 250273 DEBUG nova.virt.libvirt.driver [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:26:27 np0005593232 nova_compute[250269]: 2026-01-23 10:26:27.040 250273 DEBUG nova.virt.hardware [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:26:27 np0005593232 nova_compute[250269]: 2026-01-23 10:26:27.041 250273 DEBUG nova.virt.hardware [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:26:27 np0005593232 nova_compute[250269]: 2026-01-23 10:26:27.041 250273 DEBUG nova.virt.hardware [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:26:27 np0005593232 nova_compute[250269]: 2026-01-23 10:26:27.042 250273 DEBUG nova.virt.hardware [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:26:27 np0005593232 nova_compute[250269]: 2026-01-23 10:26:27.042 250273 DEBUG nova.virt.hardware [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:26:27 np0005593232 nova_compute[250269]: 2026-01-23 10:26:27.042 250273 DEBUG nova.virt.hardware [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:26:27 np0005593232 nova_compute[250269]: 2026-01-23 10:26:27.043 250273 DEBUG nova.virt.hardware [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:26:27 np0005593232 nova_compute[250269]: 2026-01-23 10:26:27.043 250273 DEBUG nova.virt.hardware [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:26:27 np0005593232 nova_compute[250269]: 2026-01-23 10:26:27.043 250273 DEBUG nova.virt.hardware [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:26:27 np0005593232 nova_compute[250269]: 2026-01-23 10:26:27.044 250273 DEBUG nova.virt.hardware [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:26:27 np0005593232 nova_compute[250269]: 2026-01-23 10:26:27.044 250273 DEBUG nova.virt.hardware [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:26:27 np0005593232 nova_compute[250269]: 2026-01-23 10:26:27.048 250273 DEBUG oslo_concurrency.processutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:26:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:26:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:27.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:26:27 np0005593232 nova_compute[250269]: 2026-01-23 10:26:27.550 250273 DEBUG oslo_concurrency.processutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:26:27 np0005593232 nova_compute[250269]: 2026-01-23 10:26:27.594 250273 DEBUG nova.storage.rbd_utils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] rbd image da8fd4b4-46c3-412e-aeeb-499a3fec1bc5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:26:27 np0005593232 nova_compute[250269]: 2026-01-23 10:26:27.603 250273 DEBUG oslo_concurrency.processutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:26:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:26:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2920: 321 pgs: 321 active+clean; 296 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 2.8 MiB/s wr, 192 op/s
Jan 23 05:26:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:27.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:26:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2817452558' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.097 250273 DEBUG oslo_concurrency.processutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.099 250273 DEBUG nova.virt.libvirt.vif [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:26:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-543715089',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-543715089',id=168,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5c27429e1d8f433a8a67ddb76f8798f1',ramdisk_id='',reservation_id='r-cwl09i3q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1351337832',owner_user_name='tempest-ServerBootFromVolume
StableRescueTest-1351337832-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:26:19Z,user_data=None,user_id='0d6a628e0dcb441fa41457bf719e65a0',uuid=da8fd4b4-46c3-412e-aeeb-499a3fec1bc5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "610277e6-a454-47a7-8c51-aa13506c9f66", "address": "fa:16:3e:e9:0a:45", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap610277e6-a4", "ovs_interfaceid": "610277e6-a454-47a7-8c51-aa13506c9f66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.100 250273 DEBUG nova.network.os_vif_util [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Converting VIF {"id": "610277e6-a454-47a7-8c51-aa13506c9f66", "address": "fa:16:3e:e9:0a:45", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap610277e6-a4", "ovs_interfaceid": "610277e6-a454-47a7-8c51-aa13506c9f66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.101 250273 DEBUG nova.network.os_vif_util [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:0a:45,bridge_name='br-int',has_traffic_filtering=True,id=610277e6-a454-47a7-8c51-aa13506c9f66,network=Network(fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap610277e6-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.102 250273 DEBUG nova.objects.instance [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lazy-loading 'pci_devices' on Instance uuid da8fd4b4-46c3-412e-aeeb-499a3fec1bc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.119 250273 DEBUG nova.virt.libvirt.driver [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:26:28 np0005593232 nova_compute[250269]:  <uuid>da8fd4b4-46c3-412e-aeeb-499a3fec1bc5</uuid>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:  <name>instance-000000a8</name>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerBootFromVolumeStableRescueTest-server-543715089</nova:name>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:26:26</nova:creationTime>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:26:28 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:        <nova:user uuid="0d6a628e0dcb441fa41457bf719e65a0">tempest-ServerBootFromVolumeStableRescueTest-1351337832-project-member</nova:user>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:        <nova:project uuid="5c27429e1d8f433a8a67ddb76f8798f1">tempest-ServerBootFromVolumeStableRescueTest-1351337832</nova:project>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:        <nova:port uuid="610277e6-a454-47a7-8c51-aa13506c9f66">
Jan 23 05:26:28 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <entry name="serial">da8fd4b4-46c3-412e-aeeb-499a3fec1bc5</entry>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <entry name="uuid">da8fd4b4-46c3-412e-aeeb-499a3fec1bc5</entry>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/da8fd4b4-46c3-412e-aeeb-499a3fec1bc5_disk">
Jan 23 05:26:28 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:26:28 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/da8fd4b4-46c3-412e-aeeb-499a3fec1bc5_disk.config">
Jan 23 05:26:28 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:26:28 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:e9:0a:45"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <target dev="tap610277e6-a4"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/da8fd4b4-46c3-412e-aeeb-499a3fec1bc5/console.log" append="off"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:26:28 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:26:28 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:26:28 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:26:28 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.121 250273 DEBUG nova.compute.manager [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Preparing to wait for external event network-vif-plugged-610277e6-a454-47a7-8c51-aa13506c9f66 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.121 250273 DEBUG oslo_concurrency.lockutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Acquiring lock "da8fd4b4-46c3-412e-aeeb-499a3fec1bc5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.121 250273 DEBUG oslo_concurrency.lockutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "da8fd4b4-46c3-412e-aeeb-499a3fec1bc5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.121 250273 DEBUG oslo_concurrency.lockutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "da8fd4b4-46c3-412e-aeeb-499a3fec1bc5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.122 250273 DEBUG nova.virt.libvirt.vif [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:26:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-543715089',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-543715089',id=168,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5c27429e1d8f433a8a67ddb76f8798f1',ramdisk_id='',reservation_id='r-cwl09i3q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1351337832',owner_user_name='tempest-ServerBoot
FromVolumeStableRescueTest-1351337832-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:26:19Z,user_data=None,user_id='0d6a628e0dcb441fa41457bf719e65a0',uuid=da8fd4b4-46c3-412e-aeeb-499a3fec1bc5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "610277e6-a454-47a7-8c51-aa13506c9f66", "address": "fa:16:3e:e9:0a:45", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap610277e6-a4", "ovs_interfaceid": "610277e6-a454-47a7-8c51-aa13506c9f66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.122 250273 DEBUG nova.network.os_vif_util [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Converting VIF {"id": "610277e6-a454-47a7-8c51-aa13506c9f66", "address": "fa:16:3e:e9:0a:45", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap610277e6-a4", "ovs_interfaceid": "610277e6-a454-47a7-8c51-aa13506c9f66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.123 250273 DEBUG nova.network.os_vif_util [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:0a:45,bridge_name='br-int',has_traffic_filtering=True,id=610277e6-a454-47a7-8c51-aa13506c9f66,network=Network(fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap610277e6-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.123 250273 DEBUG os_vif [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:0a:45,bridge_name='br-int',has_traffic_filtering=True,id=610277e6-a454-47a7-8c51-aa13506c9f66,network=Network(fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap610277e6-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.124 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.125 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.125 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.130 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.130 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap610277e6-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.131 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap610277e6-a4, col_values=(('external_ids', {'iface-id': '610277e6-a454-47a7-8c51-aa13506c9f66', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e9:0a:45', 'vm-uuid': 'da8fd4b4-46c3-412e-aeeb-499a3fec1bc5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.187 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:28 np0005593232 NetworkManager[49057]: <info>  [1769163988.1891] manager: (tap610277e6-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/311)
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.191 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.199 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.200 250273 INFO os_vif [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:0a:45,bridge_name='br-int',has_traffic_filtering=True,id=610277e6-a454-47a7-8c51-aa13506c9f66,network=Network(fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap610277e6-a4')#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.276 250273 DEBUG nova.virt.libvirt.driver [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.277 250273 DEBUG nova.virt.libvirt.driver [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.278 250273 DEBUG nova.virt.libvirt.driver [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] No VIF found with MAC fa:16:3e:e9:0a:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.279 250273 INFO nova.virt.libvirt.driver [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Using config drive#033[00m
Jan 23 05:26:28 np0005593232 nova_compute[250269]: 2026-01-23 10:26:28.325 250273 DEBUG nova.storage.rbd_utils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] rbd image da8fd4b4-46c3-412e-aeeb-499a3fec1bc5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:26:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:29.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:29 np0005593232 nova_compute[250269]: 2026-01-23 10:26:29.777 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2921: 321 pgs: 321 active+clean; 306 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 218 op/s
Jan 23 05:26:29 np0005593232 nova_compute[250269]: 2026-01-23 10:26:29.837 250273 INFO nova.virt.libvirt.driver [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Creating config drive at /var/lib/nova/instances/da8fd4b4-46c3-412e-aeeb-499a3fec1bc5/disk.config#033[00m
Jan 23 05:26:29 np0005593232 nova_compute[250269]: 2026-01-23 10:26:29.851 250273 DEBUG oslo_concurrency.processutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/da8fd4b4-46c3-412e-aeeb-499a3fec1bc5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpacabpfr_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:26:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:29.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:29 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 7.
Jan 23 05:26:30 np0005593232 nova_compute[250269]: 2026-01-23 10:26:30.030 250273 DEBUG oslo_concurrency.processutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/da8fd4b4-46c3-412e-aeeb-499a3fec1bc5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpacabpfr_" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:26:30 np0005593232 nova_compute[250269]: 2026-01-23 10:26:30.070 250273 DEBUG nova.storage.rbd_utils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] rbd image da8fd4b4-46c3-412e-aeeb-499a3fec1bc5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:26:30 np0005593232 nova_compute[250269]: 2026-01-23 10:26:30.075 250273 DEBUG oslo_concurrency.processutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/da8fd4b4-46c3-412e-aeeb-499a3fec1bc5/disk.config da8fd4b4-46c3-412e-aeeb-499a3fec1bc5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:26:30 np0005593232 nova_compute[250269]: 2026-01-23 10:26:30.289 250273 DEBUG oslo_concurrency.processutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/da8fd4b4-46c3-412e-aeeb-499a3fec1bc5/disk.config da8fd4b4-46c3-412e-aeeb-499a3fec1bc5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:26:30 np0005593232 nova_compute[250269]: 2026-01-23 10:26:30.291 250273 INFO nova.virt.libvirt.driver [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Deleting local config drive /var/lib/nova/instances/da8fd4b4-46c3-412e-aeeb-499a3fec1bc5/disk.config because it was imported into RBD.#033[00m
Jan 23 05:26:30 np0005593232 kernel: tap610277e6-a4: entered promiscuous mode
Jan 23 05:26:30 np0005593232 NetworkManager[49057]: <info>  [1769163990.3739] manager: (tap610277e6-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/312)
Jan 23 05:26:30 np0005593232 ovn_controller[151001]: 2026-01-23T10:26:30Z|00668|binding|INFO|Claiming lport 610277e6-a454-47a7-8c51-aa13506c9f66 for this chassis.
Jan 23 05:26:30 np0005593232 ovn_controller[151001]: 2026-01-23T10:26:30Z|00669|binding|INFO|610277e6-a454-47a7-8c51-aa13506c9f66: Claiming fa:16:3e:e9:0a:45 10.100.0.7
Jan 23 05:26:30 np0005593232 nova_compute[250269]: 2026-01-23 10:26:30.376 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:30.389 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:0a:45 10.100.0.7'], port_security=['fa:16:3e:e9:0a:45 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'da8fd4b4-46c3-412e-aeeb-499a3fec1bc5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c27429e1d8f433a8a67ddb76f8798f1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '29028637-714b-453c-9e54-c753b1c8b7f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0dedc65-79e0-4ae8-b1b0-46423e11b58a, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=610277e6-a454-47a7-8c51-aa13506c9f66) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:26:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:30.391 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 610277e6-a454-47a7-8c51-aa13506c9f66 in datapath fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4 bound to our chassis#033[00m
Jan 23 05:26:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:30.395 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4#033[00m
Jan 23 05:26:30 np0005593232 nova_compute[250269]: 2026-01-23 10:26:30.397 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:30 np0005593232 ovn_controller[151001]: 2026-01-23T10:26:30Z|00670|binding|INFO|Setting lport 610277e6-a454-47a7-8c51-aa13506c9f66 ovn-installed in OVS
Jan 23 05:26:30 np0005593232 ovn_controller[151001]: 2026-01-23T10:26:30Z|00671|binding|INFO|Setting lport 610277e6-a454-47a7-8c51-aa13506c9f66 up in Southbound
Jan 23 05:26:30 np0005593232 nova_compute[250269]: 2026-01-23 10:26:30.409 250273 DEBUG nova.network.neutron [req-77585b1f-e7c7-450a-8ff2-4091b5dd317d req-e40e3eee-8c4a-4366-9d21-00893e793afa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Updated VIF entry in instance network info cache for port 610277e6-a454-47a7-8c51-aa13506c9f66. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:26:30 np0005593232 nova_compute[250269]: 2026-01-23 10:26:30.411 250273 DEBUG nova.network.neutron [req-77585b1f-e7c7-450a-8ff2-4091b5dd317d req-e40e3eee-8c4a-4366-9d21-00893e793afa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Updating instance_info_cache with network_info: [{"id": "610277e6-a454-47a7-8c51-aa13506c9f66", "address": "fa:16:3e:e9:0a:45", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap610277e6-a4", "ovs_interfaceid": "610277e6-a454-47a7-8c51-aa13506c9f66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:26:30 np0005593232 nova_compute[250269]: 2026-01-23 10:26:30.415 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:30.428 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[459a4990-976a-402d-8146-57cba01d2e69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:30 np0005593232 nova_compute[250269]: 2026-01-23 10:26:30.432 250273 DEBUG oslo_concurrency.lockutils [req-77585b1f-e7c7-450a-8ff2-4091b5dd317d req-e40e3eee-8c4a-4366-9d21-00893e793afa 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-da8fd4b4-46c3-412e-aeeb-499a3fec1bc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:26:30 np0005593232 systemd-machined[215836]: New machine qemu-76-instance-000000a8.
Jan 23 05:26:30 np0005593232 systemd-udevd[362144]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:26:30 np0005593232 systemd[1]: Started Virtual Machine qemu-76-instance-000000a8.
Jan 23 05:26:30 np0005593232 NetworkManager[49057]: <info>  [1769163990.4635] device (tap610277e6-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:26:30 np0005593232 NetworkManager[49057]: <info>  [1769163990.4646] device (tap610277e6-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:26:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:30.487 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[e924776c-b114-4671-9a43-4ac872bdefda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:30.493 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[5545392e-80ab-4634-a463-8cd646a945c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:30.548 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[f1c32ca8-9284-441e-b9ff-6a948b120c58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:30.578 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c9cb80f3-20bd-4c3b-898c-20afc2decc73]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbd64ab8-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:7c:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 199], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787642, 'reachable_time': 39698, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362180, 'error': None, 'target': 'ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:30.605 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9a790b7c-9658-48e5-bc4d-3552dc3b154f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbd64ab8-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 787655, 'tstamp': 787655}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362189, 'error': None, 'target': 'ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbd64ab8-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 787659, 'tstamp': 787659}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362189, 'error': None, 'target': 'ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:26:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:30.610 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbd64ab8-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:26:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:30.615 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbd64ab8-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:26:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:30.616 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:26:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:30.616 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbd64ab8-90, col_values=(('external_ids', {'iface-id': 'b648300b-e46c-4d3b-b02e-94ff684c03ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:26:30 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:30.617 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:26:30 np0005593232 nova_compute[250269]: 2026-01-23 10:26:30.616 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:30 np0005593232 nova_compute[250269]: 2026-01-23 10:26:30.726 250273 DEBUG nova.compute.manager [req-b3002481-516b-441b-a3cb-180f9cfcc8e3 req-ad49a208-1899-448b-9403-cab2c13631ea 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Received event network-vif-plugged-610277e6-a454-47a7-8c51-aa13506c9f66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:26:30 np0005593232 nova_compute[250269]: 2026-01-23 10:26:30.727 250273 DEBUG oslo_concurrency.lockutils [req-b3002481-516b-441b-a3cb-180f9cfcc8e3 req-ad49a208-1899-448b-9403-cab2c13631ea 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "da8fd4b4-46c3-412e-aeeb-499a3fec1bc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:26:30 np0005593232 nova_compute[250269]: 2026-01-23 10:26:30.728 250273 DEBUG oslo_concurrency.lockutils [req-b3002481-516b-441b-a3cb-180f9cfcc8e3 req-ad49a208-1899-448b-9403-cab2c13631ea 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "da8fd4b4-46c3-412e-aeeb-499a3fec1bc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:26:30 np0005593232 nova_compute[250269]: 2026-01-23 10:26:30.728 250273 DEBUG oslo_concurrency.lockutils [req-b3002481-516b-441b-a3cb-180f9cfcc8e3 req-ad49a208-1899-448b-9403-cab2c13631ea 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "da8fd4b4-46c3-412e-aeeb-499a3fec1bc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:26:30 np0005593232 nova_compute[250269]: 2026-01-23 10:26:30.729 250273 DEBUG nova.compute.manager [req-b3002481-516b-441b-a3cb-180f9cfcc8e3 req-ad49a208-1899-448b-9403-cab2c13631ea 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Processing event network-vif-plugged-610277e6-a454-47a7-8c51-aa13506c9f66 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:26:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 23 05:26:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.254 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163991.2541995, da8fd4b4-46c3-412e-aeeb-499a3fec1bc5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.256 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] VM Started (Lifecycle Event)#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.260 250273 DEBUG nova.compute.manager [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.265 250273 DEBUG nova.virt.libvirt.driver [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.271 250273 INFO nova.virt.libvirt.driver [-] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Instance spawned successfully.#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.272 250273 DEBUG nova.virt.libvirt.driver [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.277 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.280 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.295 250273 DEBUG nova.virt.libvirt.driver [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.295 250273 DEBUG nova.virt.libvirt.driver [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.295 250273 DEBUG nova.virt.libvirt.driver [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.296 250273 DEBUG nova.virt.libvirt.driver [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.296 250273 DEBUG nova.virt.libvirt.driver [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.296 250273 DEBUG nova.virt.libvirt.driver [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.304 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.304 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163991.2556322, da8fd4b4-46c3-412e-aeeb-499a3fec1bc5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.304 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.340 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.343 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769163991.2632568, da8fd4b4-46c3-412e-aeeb-499a3fec1bc5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.343 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.365 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.369 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.377 250273 INFO nova.compute.manager [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Took 11.84 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.378 250273 DEBUG nova.compute.manager [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.410 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.457 250273 INFO nova.compute.manager [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Took 12.94 seconds to build instance.#033[00m
Jan 23 05:26:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:31.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:31 np0005593232 nova_compute[250269]: 2026-01-23 10:26:31.476 250273 DEBUG oslo_concurrency.lockutils [None req-4e9e7691-7eb9-4f30-8165-0425a1747467 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "da8fd4b4-46c3-412e-aeeb-499a3fec1bc5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:26:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:26:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:26:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:26:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:26:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:26:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:26:31 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1268322e-7009-4651-a5cc-2bb47d5e54f2 does not exist
Jan 23 05:26:31 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 8811d53d-be2a-424d-8e71-4b02fd2da204 does not exist
Jan 23 05:26:31 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 31422ca6-fa34-4ebd-be10-43f8c8fb87d1 does not exist
Jan 23 05:26:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:26:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:26:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:26:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:26:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:26:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:26:31 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 05:26:31 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:26:31 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:26:31 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:26:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2922: 321 pgs: 321 active+clean; 306 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.6 MiB/s wr, 192 op/s
Jan 23 05:26:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:26:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:31.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:26:32 np0005593232 podman[362472]: 2026-01-23 10:26:32.453632315 +0000 UTC m=+0.076459524 container create d084f9c64acc82000ec509f3c70a6c62be996558fcfa3755763efc634b23f7d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:26:32 np0005593232 systemd[1]: Started libpod-conmon-d084f9c64acc82000ec509f3c70a6c62be996558fcfa3755763efc634b23f7d0.scope.
Jan 23 05:26:32 np0005593232 podman[362472]: 2026-01-23 10:26:32.422398387 +0000 UTC m=+0.045225636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:26:32 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:26:32 np0005593232 podman[362472]: 2026-01-23 10:26:32.568932383 +0000 UTC m=+0.191759672 container init d084f9c64acc82000ec509f3c70a6c62be996558fcfa3755763efc634b23f7d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 23 05:26:32 np0005593232 podman[362472]: 2026-01-23 10:26:32.581276064 +0000 UTC m=+0.204103263 container start d084f9c64acc82000ec509f3c70a6c62be996558fcfa3755763efc634b23f7d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:26:32 np0005593232 podman[362472]: 2026-01-23 10:26:32.587413668 +0000 UTC m=+0.210240887 container attach d084f9c64acc82000ec509f3c70a6c62be996558fcfa3755763efc634b23f7d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 05:26:32 np0005593232 systemd[1]: libpod-d084f9c64acc82000ec509f3c70a6c62be996558fcfa3755763efc634b23f7d0.scope: Deactivated successfully.
Jan 23 05:26:32 np0005593232 laughing_allen[362488]: 167 167
Jan 23 05:26:32 np0005593232 conmon[362488]: conmon d084f9c64acc82000ec5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d084f9c64acc82000ec509f3c70a6c62be996558fcfa3755763efc634b23f7d0.scope/container/memory.events
Jan 23 05:26:32 np0005593232 podman[362472]: 2026-01-23 10:26:32.594748767 +0000 UTC m=+0.217575976 container died d084f9c64acc82000ec509f3c70a6c62be996558fcfa3755763efc634b23f7d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:26:32 np0005593232 systemd[1]: var-lib-containers-storage-overlay-01e71430d3531e5f46e2b224cca2daa5616f3be6ba3aa240f6386c736c44952c-merged.mount: Deactivated successfully.
Jan 23 05:26:32 np0005593232 podman[362472]: 2026-01-23 10:26:32.65009116 +0000 UTC m=+0.272918389 container remove d084f9c64acc82000ec509f3c70a6c62be996558fcfa3755763efc634b23f7d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_allen, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:26:32 np0005593232 systemd[1]: libpod-conmon-d084f9c64acc82000ec509f3c70a6c62be996558fcfa3755763efc634b23f7d0.scope: Deactivated successfully.
Jan 23 05:26:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:26:32 np0005593232 nova_compute[250269]: 2026-01-23 10:26:32.885 250273 DEBUG nova.compute.manager [None req-759461d4-d9e4-4d01-871a-496c3c4e66c0 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:26:32 np0005593232 podman[362512]: 2026-01-23 10:26:32.929217925 +0000 UTC m=+0.069863507 container create 246ac8c0de870e2894990607c4327c0fe7ebfbe1d7b8ea2fda20ca2a386c3aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 05:26:32 np0005593232 nova_compute[250269]: 2026-01-23 10:26:32.935 250273 INFO nova.compute.manager [None req-759461d4-d9e4-4d01-871a-496c3c4e66c0 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] instance snapshotting#033[00m
Jan 23 05:26:32 np0005593232 systemd[1]: Started libpod-conmon-246ac8c0de870e2894990607c4327c0fe7ebfbe1d7b8ea2fda20ca2a386c3aa4.scope.
Jan 23 05:26:33 np0005593232 podman[362512]: 2026-01-23 10:26:32.911861722 +0000 UTC m=+0.052507314 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:26:33 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:26:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58aab634b454e85aa398484480cf2b53a457fe376faa18e25f8cc2072060bea0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:26:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58aab634b454e85aa398484480cf2b53a457fe376faa18e25f8cc2072060bea0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:26:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58aab634b454e85aa398484480cf2b53a457fe376faa18e25f8cc2072060bea0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:26:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58aab634b454e85aa398484480cf2b53a457fe376faa18e25f8cc2072060bea0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:26:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58aab634b454e85aa398484480cf2b53a457fe376faa18e25f8cc2072060bea0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:26:33 np0005593232 podman[362512]: 2026-01-23 10:26:33.053699634 +0000 UTC m=+0.194345276 container init 246ac8c0de870e2894990607c4327c0fe7ebfbe1d7b8ea2fda20ca2a386c3aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_stonebraker, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:26:33 np0005593232 podman[362512]: 2026-01-23 10:26:33.073129006 +0000 UTC m=+0.213774638 container start 246ac8c0de870e2894990607c4327c0fe7ebfbe1d7b8ea2fda20ca2a386c3aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 05:26:33 np0005593232 podman[362512]: 2026-01-23 10:26:33.076936774 +0000 UTC m=+0.217582426 container attach 246ac8c0de870e2894990607c4327c0fe7ebfbe1d7b8ea2fda20ca2a386c3aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 05:26:33 np0005593232 nova_compute[250269]: 2026-01-23 10:26:33.083 250273 DEBUG nova.compute.manager [req-39b9ac34-2162-44ab-ac88-458d05183eb6 req-9a4ff099-4986-4e93-b972-1748477ede93 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Received event network-vif-plugged-610277e6-a454-47a7-8c51-aa13506c9f66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:26:33 np0005593232 nova_compute[250269]: 2026-01-23 10:26:33.084 250273 DEBUG oslo_concurrency.lockutils [req-39b9ac34-2162-44ab-ac88-458d05183eb6 req-9a4ff099-4986-4e93-b972-1748477ede93 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "da8fd4b4-46c3-412e-aeeb-499a3fec1bc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:26:33 np0005593232 nova_compute[250269]: 2026-01-23 10:26:33.085 250273 DEBUG oslo_concurrency.lockutils [req-39b9ac34-2162-44ab-ac88-458d05183eb6 req-9a4ff099-4986-4e93-b972-1748477ede93 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "da8fd4b4-46c3-412e-aeeb-499a3fec1bc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:26:33 np0005593232 nova_compute[250269]: 2026-01-23 10:26:33.085 250273 DEBUG oslo_concurrency.lockutils [req-39b9ac34-2162-44ab-ac88-458d05183eb6 req-9a4ff099-4986-4e93-b972-1748477ede93 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "da8fd4b4-46c3-412e-aeeb-499a3fec1bc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:26:33 np0005593232 nova_compute[250269]: 2026-01-23 10:26:33.085 250273 DEBUG nova.compute.manager [req-39b9ac34-2162-44ab-ac88-458d05183eb6 req-9a4ff099-4986-4e93-b972-1748477ede93 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] No waiting events found dispatching network-vif-plugged-610277e6-a454-47a7-8c51-aa13506c9f66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 05:26:33 np0005593232 nova_compute[250269]: 2026-01-23 10:26:33.086 250273 WARNING nova.compute.manager [req-39b9ac34-2162-44ab-ac88-458d05183eb6 req-9a4ff099-4986-4e93-b972-1748477ede93 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Received unexpected event network-vif-plugged-610277e6-a454-47a7-8c51-aa13506c9f66 for instance with vm_state active and task_state image_snapshot.
Jan 23 05:26:33 np0005593232 nova_compute[250269]: 2026-01-23 10:26:33.187 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:26:33 np0005593232 nova_compute[250269]: 2026-01-23 10:26:33.327 250273 INFO nova.virt.libvirt.driver [None req-759461d4-d9e4-4d01-871a-496c3c4e66c0 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Beginning live snapshot process
Jan 23 05:26:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:26:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:33.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:26:33 np0005593232 nova_compute[250269]: 2026-01-23 10:26:33.550 250273 DEBUG nova.virt.libvirt.imagebackend [None req-759461d4-d9e4-4d01-871a-496c3c4e66c0 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] No parent info for 84c0ef19-7f67-4bd3-95d8-507c3e0942ed; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 23 05:26:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2923: 321 pgs: 321 active+clean; 319 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 4.5 MiB/s wr, 252 op/s
Jan 23 05:26:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:26:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:33.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:26:34 np0005593232 nova_compute[250269]: 2026-01-23 10:26:34.032 250273 DEBUG nova.storage.rbd_utils [None req-759461d4-d9e4-4d01-871a-496c3c4e66c0 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] creating snapshot(9e7c1f81e2de4995a0b3b819f8658094) on rbd image(da8fd4b4-46c3-412e-aeeb-499a3fec1bc5_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 23 05:26:34 np0005593232 sleepy_stonebraker[362528]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:26:34 np0005593232 sleepy_stonebraker[362528]: --> relative data size: 1.0
Jan 23 05:26:34 np0005593232 sleepy_stonebraker[362528]: --> All data devices are unavailable
Jan 23 05:26:34 np0005593232 systemd[1]: libpod-246ac8c0de870e2894990607c4327c0fe7ebfbe1d7b8ea2fda20ca2a386c3aa4.scope: Deactivated successfully.
Jan 23 05:26:34 np0005593232 podman[362512]: 2026-01-23 10:26:34.098696199 +0000 UTC m=+1.239341811 container died 246ac8c0de870e2894990607c4327c0fe7ebfbe1d7b8ea2fda20ca2a386c3aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_stonebraker, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 05:26:34 np0005593232 systemd[1]: var-lib-containers-storage-overlay-58aab634b454e85aa398484480cf2b53a457fe376faa18e25f8cc2072060bea0-merged.mount: Deactivated successfully.
Jan 23 05:26:34 np0005593232 podman[362512]: 2026-01-23 10:26:34.162831273 +0000 UTC m=+1.303476905 container remove 246ac8c0de870e2894990607c4327c0fe7ebfbe1d7b8ea2fda20ca2a386c3aa4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_stonebraker, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:26:34 np0005593232 systemd[1]: libpod-conmon-246ac8c0de870e2894990607c4327c0fe7ebfbe1d7b8ea2fda20ca2a386c3aa4.scope: Deactivated successfully.
Jan 23 05:26:34 np0005593232 nova_compute[250269]: 2026-01-23 10:26:34.781 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:26:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Jan 23 05:26:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Jan 23 05:26:35 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Jan 23 05:26:35 np0005593232 podman[362747]: 2026-01-23 10:26:35.158564359 +0000 UTC m=+0.050243449 container create 8c9de0ec3c5c9569979c88c2adf4447be2a3d8a18e08af10b0a0cf0283111788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_darwin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 23 05:26:35 np0005593232 nova_compute[250269]: 2026-01-23 10:26:35.161 250273 DEBUG nova.storage.rbd_utils [None req-759461d4-d9e4-4d01-871a-496c3c4e66c0 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] cloning vms/da8fd4b4-46c3-412e-aeeb-499a3fec1bc5_disk@9e7c1f81e2de4995a0b3b819f8658094 to images/468a5657-114a-42e3-900b-a72cd304261c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 23 05:26:35 np0005593232 systemd[1]: Started libpod-conmon-8c9de0ec3c5c9569979c88c2adf4447be2a3d8a18e08af10b0a0cf0283111788.scope.
Jan 23 05:26:35 np0005593232 podman[362747]: 2026-01-23 10:26:35.140135645 +0000 UTC m=+0.031814805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:26:35 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:26:35 np0005593232 podman[362747]: 2026-01-23 10:26:35.2774975 +0000 UTC m=+0.169176650 container init 8c9de0ec3c5c9569979c88c2adf4447be2a3d8a18e08af10b0a0cf0283111788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:26:35 np0005593232 podman[362747]: 2026-01-23 10:26:35.287034361 +0000 UTC m=+0.178713461 container start 8c9de0ec3c5c9569979c88c2adf4447be2a3d8a18e08af10b0a0cf0283111788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:26:35 np0005593232 podman[362747]: 2026-01-23 10:26:35.291129277 +0000 UTC m=+0.182808387 container attach 8c9de0ec3c5c9569979c88c2adf4447be2a3d8a18e08af10b0a0cf0283111788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_darwin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:26:35 np0005593232 quirky_darwin[362785]: 167 167
Jan 23 05:26:35 np0005593232 systemd[1]: libpod-8c9de0ec3c5c9569979c88c2adf4447be2a3d8a18e08af10b0a0cf0283111788.scope: Deactivated successfully.
Jan 23 05:26:35 np0005593232 podman[362747]: 2026-01-23 10:26:35.298140297 +0000 UTC m=+0.189819417 container died 8c9de0ec3c5c9569979c88c2adf4447be2a3d8a18e08af10b0a0cf0283111788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_darwin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:26:35 np0005593232 systemd[1]: var-lib-containers-storage-overlay-cb579db3bb5c12fb311289f82dd132b5e73d0c169f7bbbe0f31a24654c845dcf-merged.mount: Deactivated successfully.
Jan 23 05:26:35 np0005593232 podman[362747]: 2026-01-23 10:26:35.354283863 +0000 UTC m=+0.245962943 container remove 8c9de0ec3c5c9569979c88c2adf4447be2a3d8a18e08af10b0a0cf0283111788 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_darwin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:26:35 np0005593232 nova_compute[250269]: 2026-01-23 10:26:35.362 250273 DEBUG nova.storage.rbd_utils [None req-759461d4-d9e4-4d01-871a-496c3c4e66c0 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] flattening images/468a5657-114a-42e3-900b-a72cd304261c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 23 05:26:35 np0005593232 systemd[1]: libpod-conmon-8c9de0ec3c5c9569979c88c2adf4447be2a3d8a18e08af10b0a0cf0283111788.scope: Deactivated successfully.
Jan 23 05:26:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:35.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:35 np0005593232 podman[362841]: 2026-01-23 10:26:35.616017963 +0000 UTC m=+0.070441623 container create f13b16c64bd4a0495ff4fa1d1a4f4d5f235bb32ad34b28d231ef4078a5fe4073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_easley, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 23 05:26:35 np0005593232 podman[362841]: 2026-01-23 10:26:35.587562704 +0000 UTC m=+0.041986404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:26:35 np0005593232 systemd[1]: Started libpod-conmon-f13b16c64bd4a0495ff4fa1d1a4f4d5f235bb32ad34b28d231ef4078a5fe4073.scope.
Jan 23 05:26:35 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:26:35 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f152175729bfb6fdde7f5d9f626ffdc1cd2b82e963ad7891bf27f5eef042cd9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:26:35 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f152175729bfb6fdde7f5d9f626ffdc1cd2b82e963ad7891bf27f5eef042cd9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:26:35 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f152175729bfb6fdde7f5d9f626ffdc1cd2b82e963ad7891bf27f5eef042cd9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:26:35 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f152175729bfb6fdde7f5d9f626ffdc1cd2b82e963ad7891bf27f5eef042cd9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:26:35 np0005593232 podman[362841]: 2026-01-23 10:26:35.746567564 +0000 UTC m=+0.200991264 container init f13b16c64bd4a0495ff4fa1d1a4f4d5f235bb32ad34b28d231ef4078a5fe4073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_easley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 23 05:26:35 np0005593232 podman[362841]: 2026-01-23 10:26:35.755375065 +0000 UTC m=+0.209798765 container start f13b16c64bd4a0495ff4fa1d1a4f4d5f235bb32ad34b28d231ef4078a5fe4073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 05:26:35 np0005593232 podman[362841]: 2026-01-23 10:26:35.760021707 +0000 UTC m=+0.214445397 container attach f13b16c64bd4a0495ff4fa1d1a4f4d5f235bb32ad34b28d231ef4078a5fe4073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_easley, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:26:35 np0005593232 nova_compute[250269]: 2026-01-23 10:26:35.762 250273 DEBUG nova.storage.rbd_utils [None req-759461d4-d9e4-4d01-871a-496c3c4e66c0 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] removing snapshot(9e7c1f81e2de4995a0b3b819f8658094) on rbd image(da8fd4b4-46c3-412e-aeeb-499a3fec1bc5_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 23 05:26:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2925: 321 pgs: 321 active+clean; 339 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.3 MiB/s rd, 4.7 MiB/s wr, 284 op/s
Jan 23 05:26:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:26:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:35.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:26:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Jan 23 05:26:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Jan 23 05:26:36 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Jan 23 05:26:36 np0005593232 nova_compute[250269]: 2026-01-23 10:26:36.157 250273 DEBUG nova.storage.rbd_utils [None req-759461d4-d9e4-4d01-871a-496c3c4e66c0 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] creating snapshot(snap) on rbd image(468a5657-114a-42e3-900b-a72cd304261c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 23 05:26:36 np0005593232 modest_easley[362873]: {
Jan 23 05:26:36 np0005593232 modest_easley[362873]:    "0": [
Jan 23 05:26:36 np0005593232 modest_easley[362873]:        {
Jan 23 05:26:36 np0005593232 modest_easley[362873]:            "devices": [
Jan 23 05:26:36 np0005593232 modest_easley[362873]:                "/dev/loop3"
Jan 23 05:26:36 np0005593232 modest_easley[362873]:            ],
Jan 23 05:26:36 np0005593232 modest_easley[362873]:            "lv_name": "ceph_lv0",
Jan 23 05:26:36 np0005593232 modest_easley[362873]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:26:36 np0005593232 modest_easley[362873]:            "lv_size": "7511998464",
Jan 23 05:26:36 np0005593232 modest_easley[362873]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:26:36 np0005593232 modest_easley[362873]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:26:36 np0005593232 modest_easley[362873]:            "name": "ceph_lv0",
Jan 23 05:26:36 np0005593232 modest_easley[362873]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:26:36 np0005593232 modest_easley[362873]:            "tags": {
Jan 23 05:26:36 np0005593232 modest_easley[362873]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:26:36 np0005593232 modest_easley[362873]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:26:36 np0005593232 modest_easley[362873]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:26:36 np0005593232 modest_easley[362873]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:26:36 np0005593232 modest_easley[362873]:                "ceph.cluster_name": "ceph",
Jan 23 05:26:36 np0005593232 modest_easley[362873]:                "ceph.crush_device_class": "",
Jan 23 05:26:36 np0005593232 modest_easley[362873]:                "ceph.encrypted": "0",
Jan 23 05:26:36 np0005593232 modest_easley[362873]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:26:36 np0005593232 modest_easley[362873]:                "ceph.osd_id": "0",
Jan 23 05:26:36 np0005593232 modest_easley[362873]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:26:36 np0005593232 modest_easley[362873]:                "ceph.type": "block",
Jan 23 05:26:36 np0005593232 modest_easley[362873]:                "ceph.vdo": "0"
Jan 23 05:26:36 np0005593232 modest_easley[362873]:            },
Jan 23 05:26:36 np0005593232 modest_easley[362873]:            "type": "block",
Jan 23 05:26:36 np0005593232 modest_easley[362873]:            "vg_name": "ceph_vg0"
Jan 23 05:26:36 np0005593232 modest_easley[362873]:        }
Jan 23 05:26:36 np0005593232 modest_easley[362873]:    ]
Jan 23 05:26:36 np0005593232 modest_easley[362873]: }
Jan 23 05:26:36 np0005593232 systemd[1]: libpod-f13b16c64bd4a0495ff4fa1d1a4f4d5f235bb32ad34b28d231ef4078a5fe4073.scope: Deactivated successfully.
Jan 23 05:26:36 np0005593232 conmon[362873]: conmon f13b16c64bd4a0495ff4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f13b16c64bd4a0495ff4fa1d1a4f4d5f235bb32ad34b28d231ef4078a5fe4073.scope/container/memory.events
Jan 23 05:26:36 np0005593232 podman[362841]: 2026-01-23 10:26:36.685739033 +0000 UTC m=+1.140162723 container died f13b16c64bd4a0495ff4fa1d1a4f4d5f235bb32ad34b28d231ef4078a5fe4073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_easley, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 23 05:26:36 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f152175729bfb6fdde7f5d9f626ffdc1cd2b82e963ad7891bf27f5eef042cd9d-merged.mount: Deactivated successfully.
Jan 23 05:26:36 np0005593232 podman[362841]: 2026-01-23 10:26:36.765116509 +0000 UTC m=+1.219540199 container remove f13b16c64bd4a0495ff4fa1d1a4f4d5f235bb32ad34b28d231ef4078a5fe4073 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_easley, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:26:36 np0005593232 systemd[1]: libpod-conmon-f13b16c64bd4a0495ff4fa1d1a4f4d5f235bb32ad34b28d231ef4078a5fe4073.scope: Deactivated successfully.
Jan 23 05:26:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Jan 23 05:26:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:26:37
Jan 23 05:26:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:26:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:26:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['images', 'backups', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log']
Jan 23 05:26:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:26:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:37.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Jan 23 05:26:37 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Jan 23 05:26:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:26:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:26:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:26:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:26:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:26:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:26:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:26:37 np0005593232 podman[363110]: 2026-01-23 10:26:37.726496758 +0000 UTC m=+0.069742384 container create 44f0a7d9e44f5612cdef7a938a0e166119c5cbdd4353789538fcfdf32583b1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:26:37 np0005593232 systemd[1]: Started libpod-conmon-44f0a7d9e44f5612cdef7a938a0e166119c5cbdd4353789538fcfdf32583b1a9.scope.
Jan 23 05:26:37 np0005593232 podman[363110]: 2026-01-23 10:26:37.699684306 +0000 UTC m=+0.042929922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:26:37 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:26:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2928: 321 pgs: 321 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 8.5 MiB/s rd, 6.5 MiB/s wr, 440 op/s
Jan 23 05:26:37 np0005593232 podman[363110]: 2026-01-23 10:26:37.82542369 +0000 UTC m=+0.168669286 container init 44f0a7d9e44f5612cdef7a938a0e166119c5cbdd4353789538fcfdf32583b1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:26:37 np0005593232 podman[363110]: 2026-01-23 10:26:37.838341157 +0000 UTC m=+0.181586733 container start 44f0a7d9e44f5612cdef7a938a0e166119c5cbdd4353789538fcfdf32583b1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dijkstra, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 05:26:37 np0005593232 podman[363110]: 2026-01-23 10:26:37.842009692 +0000 UTC m=+0.185255268 container attach 44f0a7d9e44f5612cdef7a938a0e166119c5cbdd4353789538fcfdf32583b1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dijkstra, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 05:26:37 np0005593232 naughty_dijkstra[363126]: 167 167
Jan 23 05:26:37 np0005593232 systemd[1]: libpod-44f0a7d9e44f5612cdef7a938a0e166119c5cbdd4353789538fcfdf32583b1a9.scope: Deactivated successfully.
Jan 23 05:26:37 np0005593232 podman[363110]: 2026-01-23 10:26:37.846505359 +0000 UTC m=+0.189750965 container died 44f0a7d9e44f5612cdef7a938a0e166119c5cbdd4353789538fcfdf32583b1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dijkstra, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 05:26:37 np0005593232 systemd[1]: var-lib-containers-storage-overlay-41ab85c7039e793c88ac453350b3de616231cd52b50918056ee46594d58e6ce4-merged.mount: Deactivated successfully.
Jan 23 05:26:37 np0005593232 podman[363110]: 2026-01-23 10:26:37.903846719 +0000 UTC m=+0.247092305 container remove 44f0a7d9e44f5612cdef7a938a0e166119c5cbdd4353789538fcfdf32583b1a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 05:26:37 np0005593232 systemd[1]: libpod-conmon-44f0a7d9e44f5612cdef7a938a0e166119c5cbdd4353789538fcfdf32583b1a9.scope: Deactivated successfully.
Jan 23 05:26:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:26:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:37.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:26:38 np0005593232 podman[363150]: 2026-01-23 10:26:38.186806523 +0000 UTC m=+0.072017188 container create 1ccb06fedba1695b31d14aad0edd81aab04138288016a5031bf59275777f34c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cartwright, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 05:26:38 np0005593232 nova_compute[250269]: 2026-01-23 10:26:38.189 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:38 np0005593232 systemd[1]: Started libpod-conmon-1ccb06fedba1695b31d14aad0edd81aab04138288016a5031bf59275777f34c6.scope.
Jan 23 05:26:38 np0005593232 podman[363150]: 2026-01-23 10:26:38.155129113 +0000 UTC m=+0.040339828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:26:38 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:26:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6c7d1e48088c8dcc22c3193e6ffd9a0c14bec685fc2c215845c298b7baf1a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:26:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6c7d1e48088c8dcc22c3193e6ffd9a0c14bec685fc2c215845c298b7baf1a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:26:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6c7d1e48088c8dcc22c3193e6ffd9a0c14bec685fc2c215845c298b7baf1a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:26:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6c7d1e48088c8dcc22c3193e6ffd9a0c14bec685fc2c215845c298b7baf1a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:26:38 np0005593232 podman[363150]: 2026-01-23 10:26:38.313952778 +0000 UTC m=+0.199163473 container init 1ccb06fedba1695b31d14aad0edd81aab04138288016a5031bf59275777f34c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 23 05:26:38 np0005593232 podman[363150]: 2026-01-23 10:26:38.324505088 +0000 UTC m=+0.209715753 container start 1ccb06fedba1695b31d14aad0edd81aab04138288016a5031bf59275777f34c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cartwright, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:26:38 np0005593232 podman[363150]: 2026-01-23 10:26:38.329028586 +0000 UTC m=+0.214239281 container attach 1ccb06fedba1695b31d14aad0edd81aab04138288016a5031bf59275777f34c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:26:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:26:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:26:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:26:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:26:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:26:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:26:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:26:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:26:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:26:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:26:39 np0005593232 nova_compute[250269]: 2026-01-23 10:26:39.017 250273 INFO nova.virt.libvirt.driver [None req-759461d4-d9e4-4d01-871a-496c3c4e66c0 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Snapshot image upload complete#033[00m
Jan 23 05:26:39 np0005593232 nova_compute[250269]: 2026-01-23 10:26:39.019 250273 INFO nova.compute.manager [None req-759461d4-d9e4-4d01-871a-496c3c4e66c0 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Took 6.08 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 23 05:26:39 np0005593232 eloquent_cartwright[363166]: {
Jan 23 05:26:39 np0005593232 eloquent_cartwright[363166]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:26:39 np0005593232 eloquent_cartwright[363166]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:26:39 np0005593232 eloquent_cartwright[363166]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:26:39 np0005593232 eloquent_cartwright[363166]:        "osd_id": 0,
Jan 23 05:26:39 np0005593232 eloquent_cartwright[363166]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:26:39 np0005593232 eloquent_cartwright[363166]:        "type": "bluestore"
Jan 23 05:26:39 np0005593232 eloquent_cartwright[363166]:    }
Jan 23 05:26:39 np0005593232 eloquent_cartwright[363166]: }
Jan 23 05:26:39 np0005593232 systemd[1]: libpod-1ccb06fedba1695b31d14aad0edd81aab04138288016a5031bf59275777f34c6.scope: Deactivated successfully.
Jan 23 05:26:39 np0005593232 podman[363150]: 2026-01-23 10:26:39.370810682 +0000 UTC m=+1.256021357 container died 1ccb06fedba1695b31d14aad0edd81aab04138288016a5031bf59275777f34c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cartwright, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 05:26:39 np0005593232 systemd[1]: libpod-1ccb06fedba1695b31d14aad0edd81aab04138288016a5031bf59275777f34c6.scope: Consumed 1.047s CPU time.
Jan 23 05:26:39 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7e6c7d1e48088c8dcc22c3193e6ffd9a0c14bec685fc2c215845c298b7baf1a2-merged.mount: Deactivated successfully.
Jan 23 05:26:39 np0005593232 podman[363150]: 2026-01-23 10:26:39.444845076 +0000 UTC m=+1.330055751 container remove 1ccb06fedba1695b31d14aad0edd81aab04138288016a5031bf59275777f34c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 23 05:26:39 np0005593232 systemd[1]: libpod-conmon-1ccb06fedba1695b31d14aad0edd81aab04138288016a5031bf59275777f34c6.scope: Deactivated successfully.
Jan 23 05:26:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:39.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:26:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:26:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:26:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:26:39 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 41daf345-8ceb-4a32-81d3-9ec01ea54a4b does not exist
Jan 23 05:26:39 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev f2e5e4ba-200b-4fb4-bc04-2f2bf4b48494 does not exist
Jan 23 05:26:39 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d239e0c0-c32e-4b1c-bc21-8a2b142e51df does not exist
Jan 23 05:26:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:26:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:26:39 np0005593232 nova_compute[250269]: 2026-01-23 10:26:39.783 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2929: 321 pgs: 321 active+clean; 386 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 10 MiB/s rd, 6.0 MiB/s wr, 428 op/s
Jan 23 05:26:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:39.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:41.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2930: 321 pgs: 321 active+clean; 386 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 7.2 MiB/s rd, 3.2 MiB/s wr, 262 op/s
Jan 23 05:26:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:26:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:41.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:26:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:42.642 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:26:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:42.643 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:26:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:26:42.645 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:26:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:26:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Jan 23 05:26:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Jan 23 05:26:42 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Jan 23 05:26:43 np0005593232 nova_compute[250269]: 2026-01-23 10:26:43.194 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:26:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:43.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:26:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2932: 321 pgs: 321 active+clean; 386 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.3 MiB/s rd, 2.8 MiB/s wr, 234 op/s
Jan 23 05:26:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:43.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:44 np0005593232 nova_compute[250269]: 2026-01-23 10:26:44.785 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:45.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2933: 321 pgs: 321 active+clean; 386 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 1.4 MiB/s wr, 111 op/s
Jan 23 05:26:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:26:45Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e9:0a:45 10.100.0.7
Jan 23 05:26:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:26:45Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e9:0a:45 10.100.0.7
Jan 23 05:26:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:45.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:46 np0005593232 podman[363253]: 2026-01-23 10:26:46.5068584 +0000 UTC m=+0.142232475 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 23 05:26:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:26:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:26:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:26:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:26:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006336841906290713 of space, bias 1.0, pg target 1.9010525718872138 quantized to 32 (current 32)
Jan 23 05:26:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:26:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.2722757305043737e-06 of space, bias 1.0, pg target 0.00038041044342080775 quantized to 32 (current 32)
Jan 23 05:26:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:26:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:26:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:26:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003880986239065699 of space, bias 1.0, pg target 1.160414885480644 quantized to 32 (current 32)
Jan 23 05:26:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:26:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 23 05:26:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:26:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:26:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:26:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 23 05:26:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:26:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 23 05:26:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:26:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:26:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:26:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 23 05:26:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:47.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:26:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2934: 321 pgs: 321 active+clean; 404 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.3 MiB/s wr, 131 op/s
Jan 23 05:26:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:26:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:47.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:26:48 np0005593232 nova_compute[250269]: 2026-01-23 10:26:48.196 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:49 np0005593232 nova_compute[250269]: 2026-01-23 10:26:49.332 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:26:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:26:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:49.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:26:49 np0005593232 nova_compute[250269]: 2026-01-23 10:26:49.788 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2935: 321 pgs: 321 active+clean; 425 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.4 MiB/s wr, 116 op/s
Jan 23 05:26:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:26:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:49.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:26:50 np0005593232 nova_compute[250269]: 2026-01-23 10:26:50.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:26:51 np0005593232 nova_compute[250269]: 2026-01-23 10:26:51.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:26:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:26:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:51.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:26:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2936: 321 pgs: 321 active+clean; 425 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 3.4 MiB/s wr, 116 op/s
Jan 23 05:26:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:51.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:52 np0005593232 nova_compute[250269]: 2026-01-23 10:26:52.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:26:52 np0005593232 nova_compute[250269]: 2026-01-23 10:26:52.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:26:52 np0005593232 podman[363283]: 2026-01-23 10:26:52.433549208 +0000 UTC m=+0.081430905 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 23 05:26:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:26:53 np0005593232 nova_compute[250269]: 2026-01-23 10:26:53.210 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:53.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2937: 321 pgs: 321 active+clean; 446 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.6 MiB/s wr, 142 op/s
Jan 23 05:26:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:53.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:54 np0005593232 nova_compute[250269]: 2026-01-23 10:26:54.791 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:55.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2938: 321 pgs: 321 active+clean; 451 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 144 op/s
Jan 23 05:26:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:55.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:56 np0005593232 nova_compute[250269]: 2026-01-23 10:26:56.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:26:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:26:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2920527337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:26:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:57.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2939: 321 pgs: 321 active+clean; 451 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 148 op/s
Jan 23 05:26:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:26:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:57.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:26:58 np0005593232 nova_compute[250269]: 2026-01-23 10:26:58.213 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:26:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:26:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:26:59.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:26:59 np0005593232 nova_compute[250269]: 2026-01-23 10:26:59.794 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:26:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2940: 321 pgs: 321 active+clean; 424 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 108 op/s
Jan 23 05:26:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:26:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:26:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:26:59.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:27:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:01.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2941: 321 pgs: 321 active+clean; 424 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.4 MiB/s wr, 67 op/s
Jan 23 05:27:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:01.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:02 np0005593232 nova_compute[250269]: 2026-01-23 10:27:02.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:27:03 np0005593232 nova_compute[250269]: 2026-01-23 10:27:03.267 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:03 np0005593232 nova_compute[250269]: 2026-01-23 10:27:03.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:27:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:27:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:27:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:03.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:27:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2942: 321 pgs: 321 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 3.2 MiB/s wr, 111 op/s
Jan 23 05:27:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:03.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:04 np0005593232 nova_compute[250269]: 2026-01-23 10:27:04.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:27:04 np0005593232 nova_compute[250269]: 2026-01-23 10:27:04.795 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:05 np0005593232 nova_compute[250269]: 2026-01-23 10:27:05.341 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:27:05 np0005593232 nova_compute[250269]: 2026-01-23 10:27:05.342 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:27:05 np0005593232 nova_compute[250269]: 2026-01-23 10:27:05.342 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:27:05 np0005593232 nova_compute[250269]: 2026-01-23 10:27:05.342 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:27:05 np0005593232 nova_compute[250269]: 2026-01-23 10:27:05.343 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:27:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:05.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:27:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/454799675' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:27:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2943: 321 pgs: 321 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 93 KiB/s rd, 1.9 MiB/s wr, 76 op/s
Jan 23 05:27:05 np0005593232 nova_compute[250269]: 2026-01-23 10:27:05.850 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:27:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:05.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:06 np0005593232 nova_compute[250269]: 2026-01-23 10:27:06.900 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:27:06 np0005593232 nova_compute[250269]: 2026-01-23 10:27:06.900 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:27:06 np0005593232 nova_compute[250269]: 2026-01-23 10:27:06.904 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:27:06 np0005593232 nova_compute[250269]: 2026-01-23 10:27:06.904 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:27:07 np0005593232 nova_compute[250269]: 2026-01-23 10:27:07.060 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:27:07 np0005593232 nova_compute[250269]: 2026-01-23 10:27:07.061 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4007MB free_disk=20.851627349853516GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:27:07 np0005593232 nova_compute[250269]: 2026-01-23 10:27:07.062 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:27:07 np0005593232 nova_compute[250269]: 2026-01-23 10:27:07.062 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:27:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:07.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:27:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:27:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:27:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:27:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:27:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:27:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2944: 321 pgs: 321 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Jan 23 05:27:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:07.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:08 np0005593232 nova_compute[250269]: 2026-01-23 10:27:08.269 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:27:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:27:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:09.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:27:09 np0005593232 nova_compute[250269]: 2026-01-23 10:27:09.796 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2945: 321 pgs: 321 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Jan 23 05:27:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:09.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:10 np0005593232 nova_compute[250269]: 2026-01-23 10:27:10.528 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance ae979986-7780-443a-afbc-6b4be8f71da1 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:27:10 np0005593232 nova_compute[250269]: 2026-01-23 10:27:10.529 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance da8fd4b4-46c3-412e-aeeb-499a3fec1bc5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:27:10 np0005593232 nova_compute[250269]: 2026-01-23 10:27:10.529 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:27:10 np0005593232 nova_compute[250269]: 2026-01-23 10:27:10.529 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:27:10 np0005593232 nova_compute[250269]: 2026-01-23 10:27:10.596 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:27:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:27:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3120822850' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:27:11 np0005593232 nova_compute[250269]: 2026-01-23 10:27:11.049 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:27:11 np0005593232 nova_compute[250269]: 2026-01-23 10:27:11.055 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:27:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:11.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2946: 321 pgs: 321 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 864 KiB/s wr, 44 op/s
Jan 23 05:27:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:11.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:13 np0005593232 nova_compute[250269]: 2026-01-23 10:27:13.025 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:27:13 np0005593232 nova_compute[250269]: 2026-01-23 10:27:13.096 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:27:13 np0005593232 nova_compute[250269]: 2026-01-23 10:27:13.097 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 6.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:27:13 np0005593232 nova_compute[250269]: 2026-01-23 10:27:13.272 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:27:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:27:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:13.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:27:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2947: 321 pgs: 321 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 864 KiB/s wr, 47 op/s
Jan 23 05:27:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:27:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:13.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:27:14 np0005593232 nova_compute[250269]: 2026-01-23 10:27:14.798 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:15 np0005593232 nova_compute[250269]: 2026-01-23 10:27:15.098 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:27:15 np0005593232 nova_compute[250269]: 2026-01-23 10:27:15.098 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:27:15 np0005593232 nova_compute[250269]: 2026-01-23 10:27:15.098 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:27:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:27:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:15.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:27:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2948: 321 pgs: 321 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 4.2 KiB/s rd, 15 KiB/s wr, 6 op/s
Jan 23 05:27:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:27:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:15.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:27:16 np0005593232 nova_compute[250269]: 2026-01-23 10:27:16.802 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-ae979986-7780-443a-afbc-6b4be8f71da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:27:16 np0005593232 nova_compute[250269]: 2026-01-23 10:27:16.802 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-ae979986-7780-443a-afbc-6b4be8f71da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:27:16 np0005593232 nova_compute[250269]: 2026-01-23 10:27:16.802 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:27:16 np0005593232 nova_compute[250269]: 2026-01-23 10:27:16.802 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ae979986-7780-443a-afbc-6b4be8f71da1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:27:17 np0005593232 podman[363459]: 2026-01-23 10:27:17.423703937 +0000 UTC m=+0.086917572 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:27:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:17.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:17 np0005593232 nova_compute[250269]: 2026-01-23 10:27:17.815 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:27:17.815 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:27:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:27:17.817 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:27:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2949: 321 pgs: 321 active+clean; 418 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 995 KiB/s rd, 15 KiB/s wr, 43 op/s
Jan 23 05:27:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:27:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:17.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:27:18 np0005593232 nova_compute[250269]: 2026-01-23 10:27:18.303 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:27:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:27:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3657072557' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:27:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:27:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3657072557' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:27:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:19.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:19 np0005593232 nova_compute[250269]: 2026-01-23 10:27:19.800 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2950: 321 pgs: 321 active+clean; 430 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 581 KiB/s wr, 96 op/s
Jan 23 05:27:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:27:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:19.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:27:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:27:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:21.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:27:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2951: 321 pgs: 321 active+clean; 430 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 578 KiB/s wr, 96 op/s
Jan 23 05:27:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:21.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:22 np0005593232 nova_compute[250269]: 2026-01-23 10:27:22.024 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Updating instance_info_cache with network_info: [{"id": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "address": "fa:16:3e:da:00:1f", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a815bf1-0b", "ovs_interfaceid": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:27:22 np0005593232 nova_compute[250269]: 2026-01-23 10:27:22.062 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-ae979986-7780-443a-afbc-6b4be8f71da1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:27:22 np0005593232 nova_compute[250269]: 2026-01-23 10:27:22.062 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:27:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:27:22.819 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:27:23 np0005593232 nova_compute[250269]: 2026-01-23 10:27:23.305 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:27:23 np0005593232 podman[363490]: 2026-01-23 10:27:23.392899855 +0000 UTC m=+0.057448454 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 23 05:27:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:23.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2952: 321 pgs: 321 active+clean; 465 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 114 op/s
Jan 23 05:27:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:27:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:23.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:27:24 np0005593232 nova_compute[250269]: 2026-01-23 10:27:24.803 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:25.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2953: 321 pgs: 321 active+clean; 465 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Jan 23 05:27:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:25.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:27.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2954: 321 pgs: 321 active+clean; 476 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.8 MiB/s wr, 169 op/s
Jan 23 05:27:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:27.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:28 np0005593232 nova_compute[250269]: 2026-01-23 10:27:28.308 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:27:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:29.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:29 np0005593232 nova_compute[250269]: 2026-01-23 10:27:29.803 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2955: 321 pgs: 321 active+clean; 498 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 4.0 MiB/s wr, 168 op/s
Jan 23 05:27:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:29.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:27:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:31.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:27:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2956: 321 pgs: 321 active+clean; 498 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 373 KiB/s rd, 3.4 MiB/s wr, 115 op/s
Jan 23 05:27:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:31.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:33 np0005593232 nova_compute[250269]: 2026-01-23 10:27:33.366 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:27:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:33.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2957: 321 pgs: 321 active+clean; 498 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 3.4 MiB/s wr, 168 op/s
Jan 23 05:27:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:34.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:34 np0005593232 nova_compute[250269]: 2026-01-23 10:27:34.805 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:35.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2958: 321 pgs: 321 active+clean; 498 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 189 op/s
Jan 23 05:27:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:36.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:27:37
Jan 23 05:27:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:27:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:27:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.log', 'volumes']
Jan 23 05:27:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:27:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:37.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:27:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:27:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:27:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:27:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:27:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:27:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2959: 321 pgs: 321 active+clean; 498 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 194 op/s
Jan 23 05:27:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:38.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:38 np0005593232 nova_compute[250269]: 2026-01-23 10:27:38.368 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:27:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:27:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:27:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:27:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:27:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:27:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:27:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:27:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:27:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:27:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:27:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:27:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:39.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:27:39 np0005593232 nova_compute[250269]: 2026-01-23 10:27:39.807 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2960: 321 pgs: 321 active+clean; 498 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 1.2 MiB/s wr, 182 op/s
Jan 23 05:27:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:27:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:40.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:27:40 np0005593232 nova_compute[250269]: 2026-01-23 10:27:40.475 250273 DEBUG oslo_concurrency.lockutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Acquiring lock "13cce6b0-b75f-41bf-90dd-1b5cc5d2344b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:27:40 np0005593232 nova_compute[250269]: 2026-01-23 10:27:40.477 250273 DEBUG oslo_concurrency.lockutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Lock "13cce6b0-b75f-41bf-90dd-1b5cc5d2344b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:27:40 np0005593232 nova_compute[250269]: 2026-01-23 10:27:40.498 250273 DEBUG nova.compute.manager [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 23 05:27:40 np0005593232 nova_compute[250269]: 2026-01-23 10:27:40.593 250273 DEBUG oslo_concurrency.lockutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:27:40 np0005593232 nova_compute[250269]: 2026-01-23 10:27:40.594 250273 DEBUG oslo_concurrency.lockutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:27:40 np0005593232 nova_compute[250269]: 2026-01-23 10:27:40.603 250273 DEBUG nova.virt.hardware [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 23 05:27:40 np0005593232 nova_compute[250269]: 2026-01-23 10:27:40.604 250273 INFO nova.compute.claims [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Claim successful on node compute-0.ctlplane.example.com
Jan 23 05:27:40 np0005593232 nova_compute[250269]: 2026-01-23 10:27:40.818 250273 DEBUG oslo_concurrency.processutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:27:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:27:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:27:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:27:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:27:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:27:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:27:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/39726101' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:27:41 np0005593232 nova_compute[250269]: 2026-01-23 10:27:41.312 250273 DEBUG oslo_concurrency.processutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:27:41 np0005593232 nova_compute[250269]: 2026-01-23 10:27:41.318 250273 DEBUG nova.compute.provider_tree [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 05:27:41 np0005593232 nova_compute[250269]: 2026-01-23 10:27:41.343 250273 DEBUG nova.scheduler.client.report [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 05:27:41 np0005593232 nova_compute[250269]: 2026-01-23 10:27:41.393 250273 DEBUG oslo_concurrency.lockutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:27:41 np0005593232 nova_compute[250269]: 2026-01-23 10:27:41.394 250273 DEBUG nova.compute.manager [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 05:27:41 np0005593232 nova_compute[250269]: 2026-01-23 10:27:41.459 250273 DEBUG nova.compute.manager [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 05:27:41 np0005593232 nova_compute[250269]: 2026-01-23 10:27:41.460 250273 DEBUG nova.network.neutron [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 05:27:41 np0005593232 nova_compute[250269]: 2026-01-23 10:27:41.498 250273 INFO nova.virt.libvirt.driver [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 05:27:41 np0005593232 nova_compute[250269]: 2026-01-23 10:27:41.519 250273 DEBUG nova.compute.manager [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 05:27:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:41.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:41 np0005593232 nova_compute[250269]: 2026-01-23 10:27:41.651 250273 DEBUG nova.compute.manager [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 05:27:41 np0005593232 nova_compute[250269]: 2026-01-23 10:27:41.653 250273 DEBUG nova.virt.libvirt.driver [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 05:27:41 np0005593232 nova_compute[250269]: 2026-01-23 10:27:41.653 250273 INFO nova.virt.libvirt.driver [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Creating image(s)
Jan 23 05:27:41 np0005593232 nova_compute[250269]: 2026-01-23 10:27:41.751 250273 DEBUG nova.storage.rbd_utils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] rbd image 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:27:41 np0005593232 nova_compute[250269]: 2026-01-23 10:27:41.782 250273 DEBUG nova.storage.rbd_utils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] rbd image 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:27:41 np0005593232 nova_compute[250269]: 2026-01-23 10:27:41.810 250273 DEBUG nova.storage.rbd_utils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] rbd image 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:27:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:27:41 np0005593232 nova_compute[250269]: 2026-01-23 10:27:41.813 250273 DEBUG oslo_concurrency.processutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:27:41 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 5f8a9567-a3ac-4e0a-af78-ecbe72c10d58 does not exist
Jan 23 05:27:41 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 3d42688c-ae0e-4f79-832b-3dbad5a7ccee does not exist
Jan 23 05:27:41 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 4fe24e4a-bbb3-43a0-a2d9-866126e19f70 does not exist
Jan 23 05:27:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:27:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:27:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:27:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:27:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:27:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:27:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2961: 321 pgs: 321 active+clean; 498 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 39 KiB/s wr, 145 op/s
Jan 23 05:27:41 np0005593232 nova_compute[250269]: 2026-01-23 10:27:41.886 250273 DEBUG oslo_concurrency.processutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:27:41 np0005593232 nova_compute[250269]: 2026-01-23 10:27:41.887 250273 DEBUG oslo_concurrency.lockutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:27:41 np0005593232 nova_compute[250269]: 2026-01-23 10:27:41.888 250273 DEBUG oslo_concurrency.lockutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:27:41 np0005593232 nova_compute[250269]: 2026-01-23 10:27:41.888 250273 DEBUG oslo_concurrency.lockutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:27:41 np0005593232 nova_compute[250269]: 2026-01-23 10:27:41.918 250273 DEBUG nova.storage.rbd_utils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] rbd image 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:27:41 np0005593232 nova_compute[250269]: 2026-01-23 10:27:41.923 250273 DEBUG oslo_concurrency.processutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:27:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:27:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:42.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:27:42 np0005593232 nova_compute[250269]: 2026-01-23 10:27:42.214 250273 DEBUG nova.policy [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3c17acb50b0d49d4b062d68aa88d1f7f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd8887855ebd545a6bdab3b6a18c19dd9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 23 05:27:42 np0005593232 podman[363955]: 2026-01-23 10:27:42.432752571 +0000 UTC m=+0.022502630 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:27:42 np0005593232 podman[363955]: 2026-01-23 10:27:42.60894763 +0000 UTC m=+0.198697699 container create 965594a72c6f0cf1c40b277cd469bd9dd516d34baab8d125fcd17d3eff98fac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dijkstra, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 05:27:42 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:27:42 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:27:42 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:27:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:27:42.642 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:27:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:27:42.643 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:27:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:27:42.644 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:27:42 np0005593232 systemd[1]: Started libpod-conmon-965594a72c6f0cf1c40b277cd469bd9dd516d34baab8d125fcd17d3eff98fac0.scope.
Jan 23 05:27:42 np0005593232 nova_compute[250269]: 2026-01-23 10:27:42.677 250273 DEBUG oslo_concurrency.processutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.754s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:27:42 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:27:42 np0005593232 nova_compute[250269]: 2026-01-23 10:27:42.756 250273 DEBUG nova.storage.rbd_utils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] resizing rbd image 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 23 05:27:42 np0005593232 podman[363955]: 2026-01-23 10:27:42.839694968 +0000 UTC m=+0.429445017 container init 965594a72c6f0cf1c40b277cd469bd9dd516d34baab8d125fcd17d3eff98fac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:27:42 np0005593232 podman[363955]: 2026-01-23 10:27:42.851484573 +0000 UTC m=+0.441234622 container start 965594a72c6f0cf1c40b277cd469bd9dd516d34baab8d125fcd17d3eff98fac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dijkstra, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:27:42 np0005593232 distracted_dijkstra[363972]: 167 167
Jan 23 05:27:42 np0005593232 systemd[1]: libpod-965594a72c6f0cf1c40b277cd469bd9dd516d34baab8d125fcd17d3eff98fac0.scope: Deactivated successfully.
Jan 23 05:27:42 np0005593232 podman[363955]: 2026-01-23 10:27:42.950780286 +0000 UTC m=+0.540530375 container attach 965594a72c6f0cf1c40b277cd469bd9dd516d34baab8d125fcd17d3eff98fac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dijkstra, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 05:27:42 np0005593232 podman[363955]: 2026-01-23 10:27:42.952143684 +0000 UTC m=+0.541893763 container died 965594a72c6f0cf1c40b277cd469bd9dd516d34baab8d125fcd17d3eff98fac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dijkstra, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:27:43 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4ae3fe97f42971b8b89a6ca5685eca0867a1ce3e91ba7b77e82c8d43042751f5-merged.mount: Deactivated successfully.
Jan 23 05:27:43 np0005593232 podman[363955]: 2026-01-23 10:27:43.043649355 +0000 UTC m=+0.633399394 container remove 965594a72c6f0cf1c40b277cd469bd9dd516d34baab8d125fcd17d3eff98fac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 05:27:43 np0005593232 nova_compute[250269]: 2026-01-23 10:27:43.066 250273 DEBUG nova.objects.instance [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Lazy-loading 'migration_context' on Instance uuid 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 05:27:43 np0005593232 nova_compute[250269]: 2026-01-23 10:27:43.081 250273 DEBUG nova.virt.libvirt.driver [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 05:27:43 np0005593232 nova_compute[250269]: 2026-01-23 10:27:43.081 250273 DEBUG nova.virt.libvirt.driver [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Ensure instance console log exists: /var/lib/nova/instances/13cce6b0-b75f-41bf-90dd-1b5cc5d2344b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 05:27:43 np0005593232 nova_compute[250269]: 2026-01-23 10:27:43.082 250273 DEBUG oslo_concurrency.lockutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:27:43 np0005593232 nova_compute[250269]: 2026-01-23 10:27:43.082 250273 DEBUG oslo_concurrency.lockutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:27:43 np0005593232 nova_compute[250269]: 2026-01-23 10:27:43.083 250273 DEBUG oslo_concurrency.lockutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:27:43 np0005593232 systemd[1]: libpod-conmon-965594a72c6f0cf1c40b277cd469bd9dd516d34baab8d125fcd17d3eff98fac0.scope: Deactivated successfully.
Jan 23 05:27:43 np0005593232 podman[364068]: 2026-01-23 10:27:43.212210546 +0000 UTC m=+0.046073420 container create 2febc2dbd7514dbe82eb035c446e1d05db7d8854362f3ec47040d256928e56d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poincare, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 05:27:43 np0005593232 systemd[1]: Started libpod-conmon-2febc2dbd7514dbe82eb035c446e1d05db7d8854362f3ec47040d256928e56d0.scope.
Jan 23 05:27:43 np0005593232 podman[364068]: 2026-01-23 10:27:43.192764013 +0000 UTC m=+0.026626897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:27:43 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:27:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c7702d373510c33e66c048def7d8b15fbfe0e0d2dd561748476a3344911dee7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:27:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c7702d373510c33e66c048def7d8b15fbfe0e0d2dd561748476a3344911dee7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:27:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c7702d373510c33e66c048def7d8b15fbfe0e0d2dd561748476a3344911dee7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:27:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c7702d373510c33e66c048def7d8b15fbfe0e0d2dd561748476a3344911dee7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:27:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c7702d373510c33e66c048def7d8b15fbfe0e0d2dd561748476a3344911dee7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:27:43 np0005593232 podman[364068]: 2026-01-23 10:27:43.323289324 +0000 UTC m=+0.157152238 container init 2febc2dbd7514dbe82eb035c446e1d05db7d8854362f3ec47040d256928e56d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:27:43 np0005593232 podman[364068]: 2026-01-23 10:27:43.332628349 +0000 UTC m=+0.166491233 container start 2febc2dbd7514dbe82eb035c446e1d05db7d8854362f3ec47040d256928e56d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poincare, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 23 05:27:43 np0005593232 podman[364068]: 2026-01-23 10:27:43.336412127 +0000 UTC m=+0.170275001 container attach 2febc2dbd7514dbe82eb035c446e1d05db7d8854362f3ec47040d256928e56d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poincare, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:27:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:27:43 np0005593232 nova_compute[250269]: 2026-01-23 10:27:43.428 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:27:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:43.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:27:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2962: 321 pgs: 321 active+clean; 513 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 987 KiB/s wr, 190 op/s
Jan 23 05:27:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:44.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:44 np0005593232 agitated_poincare[364085]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:27:44 np0005593232 agitated_poincare[364085]: --> relative data size: 1.0
Jan 23 05:27:44 np0005593232 agitated_poincare[364085]: --> All data devices are unavailable
Jan 23 05:27:44 np0005593232 systemd[1]: libpod-2febc2dbd7514dbe82eb035c446e1d05db7d8854362f3ec47040d256928e56d0.scope: Deactivated successfully.
Jan 23 05:27:44 np0005593232 podman[364068]: 2026-01-23 10:27:44.202058571 +0000 UTC m=+1.035921455 container died 2febc2dbd7514dbe82eb035c446e1d05db7d8854362f3ec47040d256928e56d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poincare, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:27:44 np0005593232 systemd[1]: var-lib-containers-storage-overlay-8c7702d373510c33e66c048def7d8b15fbfe0e0d2dd561748476a3344911dee7-merged.mount: Deactivated successfully.
Jan 23 05:27:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:27:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4275379426' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:27:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:27:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4275379426' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:27:44 np0005593232 podman[364068]: 2026-01-23 10:27:44.671345831 +0000 UTC m=+1.505208715 container remove 2febc2dbd7514dbe82eb035c446e1d05db7d8854362f3ec47040d256928e56d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 23 05:27:44 np0005593232 systemd[1]: libpod-conmon-2febc2dbd7514dbe82eb035c446e1d05db7d8854362f3ec47040d256928e56d0.scope: Deactivated successfully.
Jan 23 05:27:44 np0005593232 nova_compute[250269]: 2026-01-23 10:27:44.808 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:45 np0005593232 podman[364255]: 2026-01-23 10:27:45.374015382 +0000 UTC m=+0.067468319 container create c6d5b5c259248170612476353fec82dc845447c6096cd5500ec0fc2bf1bccb58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:27:45 np0005593232 systemd[1]: Started libpod-conmon-c6d5b5c259248170612476353fec82dc845447c6096cd5500ec0fc2bf1bccb58.scope.
Jan 23 05:27:45 np0005593232 podman[364255]: 2026-01-23 10:27:45.330529216 +0000 UTC m=+0.023982163 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:27:45 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:27:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:45.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:45 np0005593232 podman[364255]: 2026-01-23 10:27:45.692790893 +0000 UTC m=+0.386243850 container init c6d5b5c259248170612476353fec82dc845447c6096cd5500ec0fc2bf1bccb58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bouman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 05:27:45 np0005593232 podman[364255]: 2026-01-23 10:27:45.707303925 +0000 UTC m=+0.400756892 container start c6d5b5c259248170612476353fec82dc845447c6096cd5500ec0fc2bf1bccb58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 05:27:45 np0005593232 sharp_bouman[364272]: 167 167
Jan 23 05:27:45 np0005593232 systemd[1]: libpod-c6d5b5c259248170612476353fec82dc845447c6096cd5500ec0fc2bf1bccb58.scope: Deactivated successfully.
Jan 23 05:27:45 np0005593232 podman[364255]: 2026-01-23 10:27:45.74545364 +0000 UTC m=+0.438906657 container attach c6d5b5c259248170612476353fec82dc845447c6096cd5500ec0fc2bf1bccb58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 23 05:27:45 np0005593232 podman[364255]: 2026-01-23 10:27:45.746376856 +0000 UTC m=+0.439829843 container died c6d5b5c259248170612476353fec82dc845447c6096cd5500ec0fc2bf1bccb58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:27:45 np0005593232 systemd[1]: var-lib-containers-storage-overlay-06f72ddb912bbd2a30541f30ccf36bd499ddf309b17cd8b9ab73827ea013b056-merged.mount: Deactivated successfully.
Jan 23 05:27:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2963: 321 pgs: 321 active+clean; 543 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 2.9 MiB/s wr, 215 op/s
Jan 23 05:27:45 np0005593232 podman[364255]: 2026-01-23 10:27:45.992288506 +0000 UTC m=+0.685741433 container remove c6d5b5c259248170612476353fec82dc845447c6096cd5500ec0fc2bf1bccb58 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:27:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:27:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:46.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:27:46 np0005593232 systemd[1]: libpod-conmon-c6d5b5c259248170612476353fec82dc845447c6096cd5500ec0fc2bf1bccb58.scope: Deactivated successfully.
Jan 23 05:27:46 np0005593232 nova_compute[250269]: 2026-01-23 10:27:46.079 250273 DEBUG nova.network.neutron [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Successfully created port: 4db60331-fd91-4563-aa5e-c8bb81879499 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:27:46 np0005593232 podman[364297]: 2026-01-23 10:27:46.207698978 +0000 UTC m=+0.035174340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:27:46 np0005593232 podman[364297]: 2026-01-23 10:27:46.343673283 +0000 UTC m=+0.171148605 container create c307a4c1a036406eabe26a7c52b75e30cdb2d81f182fdd46795e668d8dbbe3ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 05:27:46 np0005593232 systemd[1]: Started libpod-conmon-c307a4c1a036406eabe26a7c52b75e30cdb2d81f182fdd46795e668d8dbbe3ce.scope.
Jan 23 05:27:46 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:27:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c0f49989ce2e9e95b71531b392af89a8fe5c3dfff832c3e34c7ea7e110a51e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:27:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c0f49989ce2e9e95b71531b392af89a8fe5c3dfff832c3e34c7ea7e110a51e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:27:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c0f49989ce2e9e95b71531b392af89a8fe5c3dfff832c3e34c7ea7e110a51e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:27:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0c0f49989ce2e9e95b71531b392af89a8fe5c3dfff832c3e34c7ea7e110a51e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:27:46 np0005593232 podman[364297]: 2026-01-23 10:27:46.528531877 +0000 UTC m=+0.356007239 container init c307a4c1a036406eabe26a7c52b75e30cdb2d81f182fdd46795e668d8dbbe3ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_robinson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 23 05:27:46 np0005593232 podman[364297]: 2026-01-23 10:27:46.545138099 +0000 UTC m=+0.372613441 container start c307a4c1a036406eabe26a7c52b75e30cdb2d81f182fdd46795e668d8dbbe3ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_robinson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:27:46 np0005593232 podman[364297]: 2026-01-23 10:27:46.591811706 +0000 UTC m=+0.419287068 container attach c307a4c1a036406eabe26a7c52b75e30cdb2d81f182fdd46795e668d8dbbe3ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_robinson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:27:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:27:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:27:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:27:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:27:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009414476898380793 of space, bias 1.0, pg target 2.8243430695142377 quantized to 32 (current 32)
Jan 23 05:27:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:27:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6482179415596502 quantized to 32 (current 32)
Jan 23 05:27:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:27:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:27:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:27:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003880986239065699 of space, bias 1.0, pg target 1.1565338992415783 quantized to 32 (current 32)
Jan 23 05:27:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:27:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 32)
Jan 23 05:27:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:27:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:27:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:27:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Jan 23 05:27:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:27:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 23 05:27:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:27:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:27:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:27:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]: {
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:    "0": [
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:        {
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:            "devices": [
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:                "/dev/loop3"
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:            ],
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:            "lv_name": "ceph_lv0",
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:            "lv_size": "7511998464",
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:            "name": "ceph_lv0",
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:            "tags": {
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:                "ceph.cluster_name": "ceph",
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:                "ceph.crush_device_class": "",
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:                "ceph.encrypted": "0",
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:                "ceph.osd_id": "0",
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:                "ceph.type": "block",
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:                "ceph.vdo": "0"
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:            },
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:            "type": "block",
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:            "vg_name": "ceph_vg0"
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:        }
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]:    ]
Jan 23 05:27:47 np0005593232 heuristic_robinson[364313]: }
Jan 23 05:27:47 np0005593232 systemd[1]: libpod-c307a4c1a036406eabe26a7c52b75e30cdb2d81f182fdd46795e668d8dbbe3ce.scope: Deactivated successfully.
Jan 23 05:27:47 np0005593232 conmon[364313]: conmon c307a4c1a036406eabe2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c307a4c1a036406eabe26a7c52b75e30cdb2d81f182fdd46795e668d8dbbe3ce.scope/container/memory.events
Jan 23 05:27:47 np0005593232 podman[364297]: 2026-01-23 10:27:47.400996006 +0000 UTC m=+1.228471338 container died c307a4c1a036406eabe26a7c52b75e30cdb2d81f182fdd46795e668d8dbbe3ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:27:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:27:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:47.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:27:47 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a0c0f49989ce2e9e95b71531b392af89a8fe5c3dfff832c3e34c7ea7e110a51e-merged.mount: Deactivated successfully.
Jan 23 05:27:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2964: 321 pgs: 321 active+clean; 574 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 252 op/s
Jan 23 05:27:47 np0005593232 nova_compute[250269]: 2026-01-23 10:27:47.893 250273 DEBUG nova.network.neutron [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Successfully updated port: 4db60331-fd91-4563-aa5e-c8bb81879499 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:27:47 np0005593232 nova_compute[250269]: 2026-01-23 10:27:47.920 250273 DEBUG oslo_concurrency.lockutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Acquiring lock "refresh_cache-13cce6b0-b75f-41bf-90dd-1b5cc5d2344b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:27:47 np0005593232 nova_compute[250269]: 2026-01-23 10:27:47.921 250273 DEBUG oslo_concurrency.lockutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Acquired lock "refresh_cache-13cce6b0-b75f-41bf-90dd-1b5cc5d2344b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:27:47 np0005593232 nova_compute[250269]: 2026-01-23 10:27:47.921 250273 DEBUG nova.network.neutron [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:27:47 np0005593232 podman[364297]: 2026-01-23 10:27:47.974045984 +0000 UTC m=+1.801521306 container remove c307a4c1a036406eabe26a7c52b75e30cdb2d81f182fdd46795e668d8dbbe3ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_robinson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:27:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:27:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:48.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:27:48 np0005593232 nova_compute[250269]: 2026-01-23 10:27:48.024 250273 DEBUG nova.compute.manager [req-ef4c7775-09a8-4d21-bba8-82af4c070e9a req-84c0e6cd-4764-4837-819b-78858f66534d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Received event network-changed-4db60331-fd91-4563-aa5e-c8bb81879499 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:27:48 np0005593232 nova_compute[250269]: 2026-01-23 10:27:48.025 250273 DEBUG nova.compute.manager [req-ef4c7775-09a8-4d21-bba8-82af4c070e9a req-84c0e6cd-4764-4837-819b-78858f66534d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Refreshing instance network info cache due to event network-changed-4db60331-fd91-4563-aa5e-c8bb81879499. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:27:48 np0005593232 nova_compute[250269]: 2026-01-23 10:27:48.025 250273 DEBUG oslo_concurrency.lockutils [req-ef4c7775-09a8-4d21-bba8-82af4c070e9a req-84c0e6cd-4764-4837-819b-78858f66534d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-13cce6b0-b75f-41bf-90dd-1b5cc5d2344b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:27:48 np0005593232 systemd[1]: libpod-conmon-c307a4c1a036406eabe26a7c52b75e30cdb2d81f182fdd46795e668d8dbbe3ce.scope: Deactivated successfully.
Jan 23 05:27:48 np0005593232 nova_compute[250269]: 2026-01-23 10:27:48.115 250273 DEBUG nova.network.neutron [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:27:48 np0005593232 podman[364335]: 2026-01-23 10:27:48.178941588 +0000 UTC m=+0.701353756 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:27:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:27:48 np0005593232 nova_compute[250269]: 2026-01-23 10:27:48.430 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:48 np0005593232 podman[364503]: 2026-01-23 10:27:48.806533325 +0000 UTC m=+0.076693200 container create ea31a233b2a587dfe99b1ef226d2ed58bc6bc189c8eba2da3bc95279535faa38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_albattani, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 05:27:48 np0005593232 systemd[1]: Started libpod-conmon-ea31a233b2a587dfe99b1ef226d2ed58bc6bc189c8eba2da3bc95279535faa38.scope.
Jan 23 05:27:48 np0005593232 podman[364503]: 2026-01-23 10:27:48.757897633 +0000 UTC m=+0.028057538 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:27:48 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:27:49 np0005593232 podman[364503]: 2026-01-23 10:27:49.120318644 +0000 UTC m=+0.390478539 container init ea31a233b2a587dfe99b1ef226d2ed58bc6bc189c8eba2da3bc95279535faa38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 05:27:49 np0005593232 podman[364503]: 2026-01-23 10:27:49.130087442 +0000 UTC m=+0.400247357 container start ea31a233b2a587dfe99b1ef226d2ed58bc6bc189c8eba2da3bc95279535faa38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_albattani, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 05:27:49 np0005593232 vigilant_albattani[364519]: 167 167
Jan 23 05:27:49 np0005593232 systemd[1]: libpod-ea31a233b2a587dfe99b1ef226d2ed58bc6bc189c8eba2da3bc95279535faa38.scope: Deactivated successfully.
Jan 23 05:27:49 np0005593232 podman[364503]: 2026-01-23 10:27:49.275774943 +0000 UTC m=+0.545934818 container attach ea31a233b2a587dfe99b1ef226d2ed58bc6bc189c8eba2da3bc95279535faa38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_albattani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:27:49 np0005593232 podman[364503]: 2026-01-23 10:27:49.277807081 +0000 UTC m=+0.547966956 container died ea31a233b2a587dfe99b1ef226d2ed58bc6bc189c8eba2da3bc95279535faa38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Jan 23 05:27:49 np0005593232 systemd[1]: var-lib-containers-storage-overlay-fd7719b87cc21e148231da7ba4c56e3b72c556049f048c5c2b01863a6354b1ca-merged.mount: Deactivated successfully.
Jan 23 05:27:49 np0005593232 podman[364503]: 2026-01-23 10:27:49.321858993 +0000 UTC m=+0.592018868 container remove ea31a233b2a587dfe99b1ef226d2ed58bc6bc189c8eba2da3bc95279535faa38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:27:49 np0005593232 systemd[1]: libpod-conmon-ea31a233b2a587dfe99b1ef226d2ed58bc6bc189c8eba2da3bc95279535faa38.scope: Deactivated successfully.
Jan 23 05:27:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:49.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:49 np0005593232 podman[364543]: 2026-01-23 10:27:49.494304094 +0000 UTC m=+0.027048969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:27:49 np0005593232 podman[364543]: 2026-01-23 10:27:49.696960105 +0000 UTC m=+0.229704960 container create 9f5fb85781b35d8bc54c15a53fb2af95542ec498da2df308205b7d5ebc360bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:27:49 np0005593232 nova_compute[250269]: 2026-01-23 10:27:49.813 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2965: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 248 op/s
Jan 23 05:27:49 np0005593232 systemd[1]: Started libpod-conmon-9f5fb85781b35d8bc54c15a53fb2af95542ec498da2df308205b7d5ebc360bef.scope.
Jan 23 05:27:49 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:27:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/701396340c93f87585d4d9126f7080c9c73ebc905a2387ac7ceb23cf01f5072a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:27:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/701396340c93f87585d4d9126f7080c9c73ebc905a2387ac7ceb23cf01f5072a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:27:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/701396340c93f87585d4d9126f7080c9c73ebc905a2387ac7ceb23cf01f5072a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:27:49 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/701396340c93f87585d4d9126f7080c9c73ebc905a2387ac7ceb23cf01f5072a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:27:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:27:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:50.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:27:50 np0005593232 podman[364543]: 2026-01-23 10:27:50.082491593 +0000 UTC m=+0.615236538 container init 9f5fb85781b35d8bc54c15a53fb2af95542ec498da2df308205b7d5ebc360bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_villani, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:27:50 np0005593232 podman[364543]: 2026-01-23 10:27:50.100399192 +0000 UTC m=+0.633144047 container start 9f5fb85781b35d8bc54c15a53fb2af95542ec498da2df308205b7d5ebc360bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_villani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 23 05:27:50 np0005593232 podman[364543]: 2026-01-23 10:27:50.108328817 +0000 UTC m=+0.641073752 container attach 9f5fb85781b35d8bc54c15a53fb2af95542ec498da2df308205b7d5ebc360bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 05:27:50 np0005593232 nova_compute[250269]: 2026-01-23 10:27:50.172 250273 DEBUG nova.network.neutron [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Updating instance_info_cache with network_info: [{"id": "4db60331-fd91-4563-aa5e-c8bb81879499", "address": "fa:16:3e:e3:f7:86", "network": {"id": "a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad", "bridge": "br-int", "label": "tempest-network-smoke--274742252", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8887855ebd545a6bdab3b6a18c19dd9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db60331-fd", "ovs_interfaceid": "4db60331-fd91-4563-aa5e-c8bb81879499", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:27:50 np0005593232 nova_compute[250269]: 2026-01-23 10:27:50.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:27:51 np0005593232 serene_villani[364560]: {
Jan 23 05:27:51 np0005593232 serene_villani[364560]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:27:51 np0005593232 serene_villani[364560]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:27:51 np0005593232 serene_villani[364560]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:27:51 np0005593232 serene_villani[364560]:        "osd_id": 0,
Jan 23 05:27:51 np0005593232 serene_villani[364560]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:27:51 np0005593232 serene_villani[364560]:        "type": "bluestore"
Jan 23 05:27:51 np0005593232 serene_villani[364560]:    }
Jan 23 05:27:51 np0005593232 serene_villani[364560]: }
Jan 23 05:27:51 np0005593232 systemd[1]: libpod-9f5fb85781b35d8bc54c15a53fb2af95542ec498da2df308205b7d5ebc360bef.scope: Deactivated successfully.
Jan 23 05:27:51 np0005593232 podman[364543]: 2026-01-23 10:27:51.216948978 +0000 UTC m=+1.749693863 container died 9f5fb85781b35d8bc54c15a53fb2af95542ec498da2df308205b7d5ebc360bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_villani, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 23 05:27:51 np0005593232 systemd[1]: libpod-9f5fb85781b35d8bc54c15a53fb2af95542ec498da2df308205b7d5ebc360bef.scope: Consumed 1.116s CPU time.
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.298 250273 DEBUG oslo_concurrency.lockutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Releasing lock "refresh_cache-13cce6b0-b75f-41bf-90dd-1b5cc5d2344b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.299 250273 DEBUG nova.compute.manager [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Instance network_info: |[{"id": "4db60331-fd91-4563-aa5e-c8bb81879499", "address": "fa:16:3e:e3:f7:86", "network": {"id": "a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad", "bridge": "br-int", "label": "tempest-network-smoke--274742252", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8887855ebd545a6bdab3b6a18c19dd9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db60331-fd", "ovs_interfaceid": "4db60331-fd91-4563-aa5e-c8bb81879499", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.300 250273 DEBUG oslo_concurrency.lockutils [req-ef4c7775-09a8-4d21-bba8-82af4c070e9a req-84c0e6cd-4764-4837-819b-78858f66534d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-13cce6b0-b75f-41bf-90dd-1b5cc5d2344b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.301 250273 DEBUG nova.network.neutron [req-ef4c7775-09a8-4d21-bba8-82af4c070e9a req-84c0e6cd-4764-4837-819b-78858f66534d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Refreshing network info cache for port 4db60331-fd91-4563-aa5e-c8bb81879499 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.307 250273 DEBUG nova.virt.libvirt.driver [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Start _get_guest_xml network_info=[{"id": "4db60331-fd91-4563-aa5e-c8bb81879499", "address": "fa:16:3e:e3:f7:86", "network": {"id": "a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad", "bridge": "br-int", "label": "tempest-network-smoke--274742252", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8887855ebd545a6bdab3b6a18c19dd9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db60331-fd", "ovs_interfaceid": "4db60331-fd91-4563-aa5e-c8bb81879499", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.317 250273 WARNING nova.virt.libvirt.driver [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.324 250273 DEBUG nova.virt.libvirt.host [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.325 250273 DEBUG nova.virt.libvirt.host [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.328 250273 DEBUG nova.virt.libvirt.host [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.329 250273 DEBUG nova.virt.libvirt.host [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.330 250273 DEBUG nova.virt.libvirt.driver [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.330 250273 DEBUG nova.virt.hardware [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.331 250273 DEBUG nova.virt.hardware [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.331 250273 DEBUG nova.virt.hardware [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.331 250273 DEBUG nova.virt.hardware [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.331 250273 DEBUG nova.virt.hardware [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.332 250273 DEBUG nova.virt.hardware [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.332 250273 DEBUG nova.virt.hardware [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.332 250273 DEBUG nova.virt.hardware [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.332 250273 DEBUG nova.virt.hardware [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.332 250273 DEBUG nova.virt.hardware [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.333 250273 DEBUG nova.virt.hardware [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.336 250273 DEBUG oslo_concurrency.processutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:27:51 np0005593232 systemd[1]: var-lib-containers-storage-overlay-701396340c93f87585d4d9126f7080c9c73ebc905a2387ac7ceb23cf01f5072a-merged.mount: Deactivated successfully.
Jan 23 05:27:51 np0005593232 podman[364543]: 2026-01-23 10:27:51.563088547 +0000 UTC m=+2.095833422 container remove 9f5fb85781b35d8bc54c15a53fb2af95542ec498da2df308205b7d5ebc360bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:27:51 np0005593232 systemd[1]: libpod-conmon-9f5fb85781b35d8bc54c15a53fb2af95542ec498da2df308205b7d5ebc360bef.scope: Deactivated successfully.
Jan 23 05:27:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:27:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:51.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:27:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:27:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:27:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:27:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:27:51 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev bb5decc0-4bb3-496a-8031-1dc54f926dae does not exist
Jan 23 05:27:51 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 018cf097-7140-48bd-b788-b67cb1b85881 does not exist
Jan 23 05:27:51 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev fa82a47f-d953-444b-95e7-57dac03778f1 does not exist
Jan 23 05:27:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:27:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3977413553' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.831 250273 DEBUG oslo_concurrency.processutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:27:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2966: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.9 MiB/s wr, 213 op/s
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.880 250273 DEBUG nova.storage.rbd_utils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] rbd image 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:27:51 np0005593232 nova_compute[250269]: 2026-01-23 10:27:51.887 250273 DEBUG oslo_concurrency.processutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:27:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:52.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:27:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2077272617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:27:52 np0005593232 nova_compute[250269]: 2026-01-23 10:27:52.359 250273 DEBUG oslo_concurrency.processutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:27:52 np0005593232 nova_compute[250269]: 2026-01-23 10:27:52.364 250273 DEBUG nova.virt.libvirt.vif [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:27:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-544089771-access_point-180081163',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-544089771-access_point-180081163',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-544089771-acc',id=172,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBITO660aaiIUEAYb/nIuqp5SSLXHJpCV/HBAbegUJr3DnRFdwDI34p9mLd+os4mF8/ldbnLSdChV+EEJIvwwqIGTsOASPzz+4e5dcKF7BKU9NU8cHs4HfSeTHXa5VE87Ig==',key_name='tempest-TestSecurityGroupsBasicOps-1528247556',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d8887855ebd545a6bdab3b6a18c19dd9',ramdisk_id='',reservation_id='r-d1vtnkr2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-544089771',owner_user_name='tempest-TestSecurityGroupsBasicOps-544089771-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:27:41Z,user_data=None,user_id='3c17acb50b0d49d4b062d68aa88d1f7f',uuid=13cce6b0-b75f-41bf-90dd-1b5cc5d2344b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4db60331-fd91-4563-aa5e-c8bb81879499", "address": "fa:16:3e:e3:f7:86", "network": {"id": "a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad", "bridge": "br-int", "label": "tempest-network-smoke--274742252", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "d8887855ebd545a6bdab3b6a18c19dd9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db60331-fd", "ovs_interfaceid": "4db60331-fd91-4563-aa5e-c8bb81879499", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:27:52 np0005593232 nova_compute[250269]: 2026-01-23 10:27:52.365 250273 DEBUG nova.network.os_vif_util [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Converting VIF {"id": "4db60331-fd91-4563-aa5e-c8bb81879499", "address": "fa:16:3e:e3:f7:86", "network": {"id": "a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad", "bridge": "br-int", "label": "tempest-network-smoke--274742252", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8887855ebd545a6bdab3b6a18c19dd9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db60331-fd", "ovs_interfaceid": "4db60331-fd91-4563-aa5e-c8bb81879499", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:27:52 np0005593232 nova_compute[250269]: 2026-01-23 10:27:52.367 250273 DEBUG nova.network.os_vif_util [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:f7:86,bridge_name='br-int',has_traffic_filtering=True,id=4db60331-fd91-4563-aa5e-c8bb81879499,network=Network(a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4db60331-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:27:52 np0005593232 nova_compute[250269]: 2026-01-23 10:27:52.371 250273 DEBUG nova.objects.instance [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:27:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:27:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:27:53 np0005593232 nova_compute[250269]: 2026-01-23 10:27:53.086 250273 DEBUG nova.virt.libvirt.driver [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:27:53 np0005593232 nova_compute[250269]:  <uuid>13cce6b0-b75f-41bf-90dd-1b5cc5d2344b</uuid>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:  <name>instance-000000ac</name>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-544089771-access_point-180081163</nova:name>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:27:51</nova:creationTime>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:27:53 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:        <nova:user uuid="3c17acb50b0d49d4b062d68aa88d1f7f">tempest-TestSecurityGroupsBasicOps-544089771-project-member</nova:user>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:        <nova:project uuid="d8887855ebd545a6bdab3b6a18c19dd9">tempest-TestSecurityGroupsBasicOps-544089771</nova:project>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:        <nova:port uuid="4db60331-fd91-4563-aa5e-c8bb81879499">
Jan 23 05:27:53 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <entry name="serial">13cce6b0-b75f-41bf-90dd-1b5cc5d2344b</entry>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <entry name="uuid">13cce6b0-b75f-41bf-90dd-1b5cc5d2344b</entry>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/13cce6b0-b75f-41bf-90dd-1b5cc5d2344b_disk">
Jan 23 05:27:53 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:27:53 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/13cce6b0-b75f-41bf-90dd-1b5cc5d2344b_disk.config">
Jan 23 05:27:53 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:27:53 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:e3:f7:86"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <target dev="tap4db60331-fd"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/13cce6b0-b75f-41bf-90dd-1b5cc5d2344b/console.log" append="off"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:27:53 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:27:53 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:27:53 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:27:53 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:27:53 np0005593232 nova_compute[250269]: 2026-01-23 10:27:53.087 250273 DEBUG nova.compute.manager [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Preparing to wait for external event network-vif-plugged-4db60331-fd91-4563-aa5e-c8bb81879499 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:27:53 np0005593232 nova_compute[250269]: 2026-01-23 10:27:53.087 250273 DEBUG oslo_concurrency.lockutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Acquiring lock "13cce6b0-b75f-41bf-90dd-1b5cc5d2344b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:27:53 np0005593232 nova_compute[250269]: 2026-01-23 10:27:53.088 250273 DEBUG oslo_concurrency.lockutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Lock "13cce6b0-b75f-41bf-90dd-1b5cc5d2344b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:27:53 np0005593232 nova_compute[250269]: 2026-01-23 10:27:53.088 250273 DEBUG oslo_concurrency.lockutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Lock "13cce6b0-b75f-41bf-90dd-1b5cc5d2344b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:27:53 np0005593232 nova_compute[250269]: 2026-01-23 10:27:53.089 250273 DEBUG nova.virt.libvirt.vif [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:27:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-544089771-access_point-180081163',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-544089771-access_point-180081163',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-544089771-acc',id=172,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBITO660aaiIUEAYb/nIuqp5SSLXHJpCV/HBAbegUJr3DnRFdwDI34p9mLd+os4mF8/ldbnLSdChV+EEJIvwwqIGTsOASPzz+4e5dcKF7BKU9NU8cHs4HfSeTHXa5VE87Ig==',key_name='tempest-TestSecurityGroupsBasicOps-1528247556',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d8887855ebd545a6bdab3b6a18c19dd9',ramdisk_id='',reservation_id='r-d1vtnkr2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-544089771',owner_user_name='tempest-TestSecurityGroupsBasicOps-544089771-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:27:41Z,user_data=None,user_id='3c17acb50b0d49d4b062d68aa88d1f7f',uuid=13cce6b0-b75f-41bf-90dd-1b5cc5d2344b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4db60331-fd91-4563-aa5e-c8bb81879499", "address": "fa:16:3e:e3:f7:86", "network": {"id": "a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad", "bridge": "br-int", "label": "tempest-network-smoke--274742252", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8887855ebd545a6bdab3b6a18c19dd9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db60331-fd", "ovs_interfaceid": "4db60331-fd91-4563-aa5e-c8bb81879499", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:27:53 np0005593232 nova_compute[250269]: 2026-01-23 10:27:53.090 250273 DEBUG nova.network.os_vif_util [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Converting VIF {"id": "4db60331-fd91-4563-aa5e-c8bb81879499", "address": "fa:16:3e:e3:f7:86", "network": {"id": "a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad", "bridge": "br-int", "label": "tempest-network-smoke--274742252", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8887855ebd545a6bdab3b6a18c19dd9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db60331-fd", "ovs_interfaceid": "4db60331-fd91-4563-aa5e-c8bb81879499", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:27:53 np0005593232 nova_compute[250269]: 2026-01-23 10:27:53.091 250273 DEBUG nova.network.os_vif_util [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:f7:86,bridge_name='br-int',has_traffic_filtering=True,id=4db60331-fd91-4563-aa5e-c8bb81879499,network=Network(a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4db60331-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:27:53 np0005593232 nova_compute[250269]: 2026-01-23 10:27:53.092 250273 DEBUG os_vif [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:f7:86,bridge_name='br-int',has_traffic_filtering=True,id=4db60331-fd91-4563-aa5e-c8bb81879499,network=Network(a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4db60331-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:27:53 np0005593232 nova_compute[250269]: 2026-01-23 10:27:53.094 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:53 np0005593232 nova_compute[250269]: 2026-01-23 10:27:53.095 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:27:53 np0005593232 nova_compute[250269]: 2026-01-23 10:27:53.096 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:27:53 np0005593232 nova_compute[250269]: 2026-01-23 10:27:53.102 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:53 np0005593232 nova_compute[250269]: 2026-01-23 10:27:53.103 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4db60331-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:27:53 np0005593232 nova_compute[250269]: 2026-01-23 10:27:53.104 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4db60331-fd, col_values=(('external_ids', {'iface-id': '4db60331-fd91-4563-aa5e-c8bb81879499', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e3:f7:86', 'vm-uuid': '13cce6b0-b75f-41bf-90dd-1b5cc5d2344b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:27:53 np0005593232 nova_compute[250269]: 2026-01-23 10:27:53.159 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:53 np0005593232 NetworkManager[49057]: <info>  [1769164073.1602] manager: (tap4db60331-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/313)
Jan 23 05:27:53 np0005593232 nova_compute[250269]: 2026-01-23 10:27:53.163 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:27:53 np0005593232 nova_compute[250269]: 2026-01-23 10:27:53.176 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:53 np0005593232 nova_compute[250269]: 2026-01-23 10:27:53.178 250273 INFO os_vif [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:f7:86,bridge_name='br-int',has_traffic_filtering=True,id=4db60331-fd91-4563-aa5e-c8bb81879499,network=Network(a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4db60331-fd')#033[00m
Jan 23 05:27:53 np0005593232 nova_compute[250269]: 2026-01-23 10:27:53.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:27:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:27:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:27:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:53.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:27:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2967: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.1 MiB/s rd, 3.9 MiB/s wr, 223 op/s
Jan 23 05:27:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:27:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:54.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:27:54 np0005593232 nova_compute[250269]: 2026-01-23 10:27:54.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:27:54 np0005593232 nova_compute[250269]: 2026-01-23 10:27:54.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:27:54 np0005593232 podman[364710]: 2026-01-23 10:27:54.469864026 +0000 UTC m=+0.094205288 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 23 05:27:54 np0005593232 nova_compute[250269]: 2026-01-23 10:27:54.816 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:55 np0005593232 nova_compute[250269]: 2026-01-23 10:27:55.518 250273 DEBUG nova.virt.libvirt.driver [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:27:55 np0005593232 nova_compute[250269]: 2026-01-23 10:27:55.519 250273 DEBUG nova.virt.libvirt.driver [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:27:55 np0005593232 nova_compute[250269]: 2026-01-23 10:27:55.519 250273 DEBUG nova.virt.libvirt.driver [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] No VIF found with MAC fa:16:3e:e3:f7:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:27:55 np0005593232 nova_compute[250269]: 2026-01-23 10:27:55.520 250273 INFO nova.virt.libvirt.driver [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Using config drive#033[00m
Jan 23 05:27:55 np0005593232 nova_compute[250269]: 2026-01-23 10:27:55.562 250273 DEBUG nova.storage.rbd_utils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] rbd image 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:27:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:55.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2968: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 180 op/s
Jan 23 05:27:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:27:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:56.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:27:57 np0005593232 nova_compute[250269]: 2026-01-23 10:27:57.452 250273 INFO nova.virt.libvirt.driver [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Creating config drive at /var/lib/nova/instances/13cce6b0-b75f-41bf-90dd-1b5cc5d2344b/disk.config#033[00m
Jan 23 05:27:57 np0005593232 nova_compute[250269]: 2026-01-23 10:27:57.461 250273 DEBUG oslo_concurrency.processutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/13cce6b0-b75f-41bf-90dd-1b5cc5d2344b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3z2k3zlx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:27:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:27:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:57.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:27:57 np0005593232 nova_compute[250269]: 2026-01-23 10:27:57.625 250273 DEBUG oslo_concurrency.processutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/13cce6b0-b75f-41bf-90dd-1b5cc5d2344b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3z2k3zlx" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:27:57 np0005593232 nova_compute[250269]: 2026-01-23 10:27:57.653 250273 DEBUG nova.storage.rbd_utils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] rbd image 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:27:57 np0005593232 nova_compute[250269]: 2026-01-23 10:27:57.657 250273 DEBUG oslo_concurrency.processutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/13cce6b0-b75f-41bf-90dd-1b5cc5d2344b/disk.config 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:27:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2969: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.0 MiB/s wr, 103 op/s
Jan 23 05:27:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:27:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:27:58.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:27:58 np0005593232 nova_compute[250269]: 2026-01-23 10:27:58.107 250273 DEBUG oslo_concurrency.processutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/13cce6b0-b75f-41bf-90dd-1b5cc5d2344b/disk.config 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:27:58 np0005593232 nova_compute[250269]: 2026-01-23 10:27:58.108 250273 INFO nova.virt.libvirt.driver [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Deleting local config drive /var/lib/nova/instances/13cce6b0-b75f-41bf-90dd-1b5cc5d2344b/disk.config because it was imported into RBD.#033[00m
Jan 23 05:27:58 np0005593232 nova_compute[250269]: 2026-01-23 10:27:58.159 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:58 np0005593232 kernel: tap4db60331-fd: entered promiscuous mode
Jan 23 05:27:58 np0005593232 NetworkManager[49057]: <info>  [1769164078.1893] manager: (tap4db60331-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/314)
Jan 23 05:27:58 np0005593232 ovn_controller[151001]: 2026-01-23T10:27:58Z|00672|binding|INFO|Claiming lport 4db60331-fd91-4563-aa5e-c8bb81879499 for this chassis.
Jan 23 05:27:58 np0005593232 ovn_controller[151001]: 2026-01-23T10:27:58Z|00673|binding|INFO|4db60331-fd91-4563-aa5e-c8bb81879499: Claiming fa:16:3e:e3:f7:86 10.100.0.11
Jan 23 05:27:58 np0005593232 nova_compute[250269]: 2026-01-23 10:27:58.190 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:58 np0005593232 systemd-udevd[364853]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:27:58 np0005593232 systemd-machined[215836]: New machine qemu-77-instance-000000ac.
Jan 23 05:27:58 np0005593232 NetworkManager[49057]: <info>  [1769164078.2451] device (tap4db60331-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:27:58 np0005593232 NetworkManager[49057]: <info>  [1769164078.2456] device (tap4db60331-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:27:58 np0005593232 systemd[1]: Started Virtual Machine qemu-77-instance-000000ac.
Jan 23 05:27:58 np0005593232 nova_compute[250269]: 2026-01-23 10:27:58.260 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:58 np0005593232 ovn_controller[151001]: 2026-01-23T10:27:58Z|00674|binding|INFO|Setting lport 4db60331-fd91-4563-aa5e-c8bb81879499 ovn-installed in OVS
Jan 23 05:27:58 np0005593232 nova_compute[250269]: 2026-01-23 10:27:58.267 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:58 np0005593232 nova_compute[250269]: 2026-01-23 10:27:58.294 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:27:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:27:58 np0005593232 nova_compute[250269]: 2026-01-23 10:27:58.698 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164078.6976588, 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:27:58 np0005593232 nova_compute[250269]: 2026-01-23 10:27:58.698 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] VM Started (Lifecycle Event)#033[00m
Jan 23 05:27:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:27:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:27:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:27:59.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:27:59 np0005593232 nova_compute[250269]: 2026-01-23 10:27:59.827 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:27:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2970: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 222 KiB/s rd, 43 KiB/s wr, 31 op/s
Jan 23 05:28:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:00.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:01.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2971: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 23 KiB/s rd, 13 KiB/s wr, 16 op/s
Jan 23 05:28:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:02.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:03 np0005593232 nova_compute[250269]: 2026-01-23 10:28:03.203 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:03 np0005593232 nova_compute[250269]: 2026-01-23 10:28:03.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:28:03 np0005593232 nova_compute[250269]: 2026-01-23 10:28:03.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:28:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:28:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:28:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:03.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:28:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2972: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 31 KiB/s wr, 22 op/s
Jan 23 05:28:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:28:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:04.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:28:04 np0005593232 nova_compute[250269]: 2026-01-23 10:28:04.830 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:05.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2973: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 18 KiB/s wr, 13 op/s
Jan 23 05:28:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:06.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:06 np0005593232 nova_compute[250269]: 2026-01-23 10:28:06.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:28:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:07.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:28:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:28:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:28:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:28:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:28:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:28:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2974: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 18 KiB/s wr, 10 op/s
Jan 23 05:28:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:28:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:08.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:28:08 np0005593232 nova_compute[250269]: 2026-01-23 10:28:08.207 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:28:09 np0005593232 ovn_controller[151001]: 2026-01-23T10:28:09Z|00675|binding|INFO|Setting lport 4db60331-fd91-4563-aa5e-c8bb81879499 up in Southbound
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.123 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:f7:86 10.100.0.11'], port_security=['fa:16:3e:e3:f7:86 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '13cce6b0-b75f-41bf-90dd-1b5cc5d2344b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8887855ebd545a6bdab3b6a18c19dd9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '07b099b4-00b4-46d9-9c83-f076e9832801 48acf45e-b74f-4870-9a4a-f3c4cef0ea2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b86d0cc-189d-499d-83d5-aecc5111cdb7, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=4db60331-fd91-4563-aa5e-c8bb81879499) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.125 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 4db60331-fd91-4563-aa5e-c8bb81879499 in datapath a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad bound to our chassis#033[00m
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.129 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad#033[00m
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.153 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[474a622c-1eeb-4846-82ce-878ec390c68d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.155 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa7bf86f8-f1 in ovnmeta-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.159 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa7bf86f8-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.159 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[04944a9a-4d7a-40ad-b29a-09d8700ac613]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.162 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b47e4b17-7014-46ea-8860-af0f75f7abba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.182 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[ea345830-3b26-4739-bdbb-00eca01456c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.205 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5121d16d-7367-4e0d-92ed-f8f5bf01a476]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.259 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa832ed-2e52-4190-b141-8f77fd3693c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.269 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[878710e6-3a25-4bb0-802b-ef88864af393]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:09 np0005593232 NetworkManager[49057]: <info>  [1769164089.2722] manager: (tapa7bf86f8-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/315)
Jan 23 05:28:09 np0005593232 systemd-udevd[364917]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:28:09 np0005593232 nova_compute[250269]: 2026-01-23 10:28:09.309 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:28:09 np0005593232 nova_compute[250269]: 2026-01-23 10:28:09.317 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164078.6978955, 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.317 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[d960048d-5082-4615-ac59-095b01c21b35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:09 np0005593232 nova_compute[250269]: 2026-01-23 10:28:09.319 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.323 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[766f67e5-fe73-499f-a86b-f37a582934da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:09 np0005593232 NetworkManager[49057]: <info>  [1769164089.4270] device (tapa7bf86f8-f0): carrier: link connected
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.434 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[6fbad246-8761-4f18-9cee-6cb735e456e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.454 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e79cc3-eec1-41f1-b4bb-b7974341241d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7bf86f8-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:03:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 202], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 800256, 'reachable_time': 34015, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364936, 'error': None, 'target': 'ovnmeta-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.471 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d0705396-c39e-4431-8f8c-617f32d2d08a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feeb:3cc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 800256, 'tstamp': 800256}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364937, 'error': None, 'target': 'ovnmeta-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.494 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3dfc2688-92b8-4937-9ee7-ee82265a1c2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa7bf86f8-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:03:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 202], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 800256, 'reachable_time': 34015, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364938, 'error': None, 'target': 'ovnmeta-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.534 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[911714b5-9bdd-4dff-8356-00d3f1539f3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:09.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.627 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6e9b65f6-2d58-49cd-9336-a978a603ac9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.629 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7bf86f8-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.630 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.630 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa7bf86f8-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:28:09 np0005593232 kernel: tapa7bf86f8-f0: entered promiscuous mode
Jan 23 05:28:09 np0005593232 nova_compute[250269]: 2026-01-23 10:28:09.633 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:09 np0005593232 NetworkManager[49057]: <info>  [1769164089.6355] manager: (tapa7bf86f8-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/316)
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.638 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa7bf86f8-f0, col_values=(('external_ids', {'iface-id': '1e84a51b-d3a3-4559-9baf-4ae27c5bded6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:28:09 np0005593232 ovn_controller[151001]: 2026-01-23T10:28:09Z|00676|binding|INFO|Releasing lport 1e84a51b-d3a3-4559-9baf-4ae27c5bded6 from this chassis (sb_readonly=1)
Jan 23 05:28:09 np0005593232 nova_compute[250269]: 2026-01-23 10:28:09.640 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.642 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.643 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e22a8b99-fc61-4f2d-93f9-aa7aaa185173]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.644 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad.pid.haproxy
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:28:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:09.645 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad', 'env', 'PROCESS_TAG=haproxy-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:28:09 np0005593232 nova_compute[250269]: 2026-01-23 10:28:09.655 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:09 np0005593232 nova_compute[250269]: 2026-01-23 10:28:09.831 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2975: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.2 KiB/s rd, 18 KiB/s wr, 10 op/s
Jan 23 05:28:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:10.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:10 np0005593232 podman[364972]: 2026-01-23 10:28:10.065999712 +0000 UTC m=+0.028198953 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:28:10 np0005593232 nova_compute[250269]: 2026-01-23 10:28:10.182 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:28:10 np0005593232 nova_compute[250269]: 2026-01-23 10:28:10.184 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:28:10 np0005593232 nova_compute[250269]: 2026-01-23 10:28:10.184 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:28:10 np0005593232 nova_compute[250269]: 2026-01-23 10:28:10.184 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:28:10 np0005593232 nova_compute[250269]: 2026-01-23 10:28:10.185 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:28:10 np0005593232 podman[364972]: 2026-01-23 10:28:10.488624134 +0000 UTC m=+0.450823385 container create 7608666a045420de3468231ee09dfc6d02b0078b33bcc3e904b12ca341d259dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 05:28:10 np0005593232 systemd[1]: Started libpod-conmon-7608666a045420de3468231ee09dfc6d02b0078b33bcc3e904b12ca341d259dd.scope.
Jan 23 05:28:10 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:28:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a590bb8032e5f960385d26f03eabc3404a02748216c292ca4da59a4a767f8b5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:28:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:28:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/341486057' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:28:10 np0005593232 nova_compute[250269]: 2026-01-23 10:28:10.751 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:28:11 np0005593232 podman[364972]: 2026-01-23 10:28:11.087589359 +0000 UTC m=+1.049788700 container init 7608666a045420de3468231ee09dfc6d02b0078b33bcc3e904b12ca341d259dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 23 05:28:11 np0005593232 podman[364972]: 2026-01-23 10:28:11.100798454 +0000 UTC m=+1.062997705 container start 7608666a045420de3468231ee09dfc6d02b0078b33bcc3e904b12ca341d259dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:28:11 np0005593232 neutron-haproxy-ovnmeta-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad[365008]: [NOTICE]   (365015) : New worker (365017) forked
Jan 23 05:28:11 np0005593232 neutron-haproxy-ovnmeta-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad[365008]: [NOTICE]   (365015) : Loading success.
Jan 23 05:28:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:11.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2976: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 17 KiB/s wr, 6 op/s
Jan 23 05:28:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:28:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:12.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:28:12 np0005593232 nova_compute[250269]: 2026-01-23 10:28:12.313 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:28:12 np0005593232 nova_compute[250269]: 2026-01-23 10:28:12.319 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:28:13 np0005593232 nova_compute[250269]: 2026-01-23 10:28:13.209 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:28:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:13.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2977: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.4 KiB/s rd, 18 KiB/s wr, 6 op/s
Jan 23 05:28:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:14.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:14 np0005593232 nova_compute[250269]: 2026-01-23 10:28:14.835 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:28:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:15.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:28:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2978: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 23 05:28:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:28:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:16.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:28:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:28:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:17.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:28:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2979: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 KiB/s wr, 0 op/s
Jan 23 05:28:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:18.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:18 np0005593232 nova_compute[250269]: 2026-01-23 10:28:18.263 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:28:18 np0005593232 podman[365081]: 2026-01-23 10:28:18.461389605 +0000 UTC m=+0.119449676 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:28:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:28:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:19.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:28:19 np0005593232 ovn_controller[151001]: 2026-01-23T10:28:19Z|00677|binding|INFO|Releasing lport 1e84a51b-d3a3-4559-9baf-4ae27c5bded6 from this chassis (sb_readonly=0)
Jan 23 05:28:19 np0005593232 ovn_controller[151001]: 2026-01-23T10:28:19Z|00678|binding|INFO|Releasing lport b648300b-e46c-4d3b-b02e-94ff684c03ae from this chassis (sb_readonly=0)
Jan 23 05:28:19 np0005593232 nova_compute[250269]: 2026-01-23 10:28:19.837 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2980: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s wr, 0 op/s
Jan 23 05:28:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:28:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:20.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:28:20 np0005593232 nova_compute[250269]: 2026-01-23 10:28:20.373 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:28:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:28:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:21.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:28:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2981: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s wr, 0 op/s
Jan 23 05:28:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:28:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:22.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:28:23 np0005593232 nova_compute[250269]: 2026-01-23 10:28:23.267 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:28:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:23.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2982: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 4.1 KiB/s wr, 0 op/s
Jan 23 05:28:24 np0005593232 nova_compute[250269]: 2026-01-23 10:28:24.032 250273 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 0.41 sec#033[00m
Jan 23 05:28:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:28:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:24.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:28:24 np0005593232 nova_compute[250269]: 2026-01-23 10:28:24.253 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000ac as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:28:24 np0005593232 nova_compute[250269]: 2026-01-23 10:28:24.253 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000ac as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:28:24 np0005593232 nova_compute[250269]: 2026-01-23 10:28:24.257 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:28:24 np0005593232 nova_compute[250269]: 2026-01-23 10:28:24.257 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000a8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:28:24 np0005593232 nova_compute[250269]: 2026-01-23 10:28:24.261 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:28:24 np0005593232 nova_compute[250269]: 2026-01-23 10:28:24.261 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000a4 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:28:24 np0005593232 nova_compute[250269]: 2026-01-23 10:28:24.478 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:28:24 np0005593232 nova_compute[250269]: 2026-01-23 10:28:24.480 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3836MB free_disk=20.78510284423828GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:28:24 np0005593232 nova_compute[250269]: 2026-01-23 10:28:24.480 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:28:24 np0005593232 nova_compute[250269]: 2026-01-23 10:28:24.480 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:28:24 np0005593232 nova_compute[250269]: 2026-01-23 10:28:24.839 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:25 np0005593232 podman[365111]: 2026-01-23 10:28:25.399451949 +0000 UTC m=+0.058719650 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:28:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:28:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:25.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:28:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2983: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 3.1 KiB/s wr, 0 op/s
Jan 23 05:28:25 np0005593232 nova_compute[250269]: 2026-01-23 10:28:25.903 250273 DEBUG nova.network.neutron [req-ef4c7775-09a8-4d21-bba8-82af4c070e9a req-84c0e6cd-4764-4837-819b-78858f66534d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Updated VIF entry in instance network info cache for port 4db60331-fd91-4563-aa5e-c8bb81879499. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:28:25 np0005593232 nova_compute[250269]: 2026-01-23 10:28:25.904 250273 DEBUG nova.network.neutron [req-ef4c7775-09a8-4d21-bba8-82af4c070e9a req-84c0e6cd-4764-4837-819b-78858f66534d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Updating instance_info_cache with network_info: [{"id": "4db60331-fd91-4563-aa5e-c8bb81879499", "address": "fa:16:3e:e3:f7:86", "network": {"id": "a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad", "bridge": "br-int", "label": "tempest-network-smoke--274742252", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8887855ebd545a6bdab3b6a18c19dd9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db60331-fd", "ovs_interfaceid": "4db60331-fd91-4563-aa5e-c8bb81879499", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:28:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:28:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:26.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:28:26 np0005593232 nova_compute[250269]: 2026-01-23 10:28:26.164 250273 DEBUG oslo_concurrency.lockutils [req-ef4c7775-09a8-4d21-bba8-82af4c070e9a req-84c0e6cd-4764-4837-819b-78858f66534d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-13cce6b0-b75f-41bf-90dd-1b5cc5d2344b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:28:26 np0005593232 nova_compute[250269]: 2026-01-23 10:28:26.302 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance ae979986-7780-443a-afbc-6b4be8f71da1 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:28:26 np0005593232 nova_compute[250269]: 2026-01-23 10:28:26.302 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance da8fd4b4-46c3-412e-aeeb-499a3fec1bc5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:28:26 np0005593232 nova_compute[250269]: 2026-01-23 10:28:26.302 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:28:26 np0005593232 nova_compute[250269]: 2026-01-23 10:28:26.302 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:28:26 np0005593232 nova_compute[250269]: 2026-01-23 10:28:26.303 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:28:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:26.627 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:28:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:26.629 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:28:26 np0005593232 nova_compute[250269]: 2026-01-23 10:28:26.762 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:27 np0005593232 nova_compute[250269]: 2026-01-23 10:28:27.381 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:28:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:28:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:27.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:28:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2984: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 0 B/s rd, 5.7 KiB/s wr, 0 op/s
Jan 23 05:28:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:28:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3679525578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:28:27 np0005593232 nova_compute[250269]: 2026-01-23 10:28:27.931 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:28:27 np0005593232 nova_compute[250269]: 2026-01-23 10:28:27.939 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:28:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:28.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:28 np0005593232 nova_compute[250269]: 2026-01-23 10:28:28.300 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:28:28 np0005593232 nova_compute[250269]: 2026-01-23 10:28:28.754 250273 DEBUG nova.compute.manager [req-9e13cb7c-7c4a-4a73-a636-5258f6d1772b req-94ee54d9-6d44-4408-a1f3-5255e1222e44 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Received event network-vif-plugged-4db60331-fd91-4563-aa5e-c8bb81879499 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:28:28 np0005593232 nova_compute[250269]: 2026-01-23 10:28:28.755 250273 DEBUG oslo_concurrency.lockutils [req-9e13cb7c-7c4a-4a73-a636-5258f6d1772b req-94ee54d9-6d44-4408-a1f3-5255e1222e44 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "13cce6b0-b75f-41bf-90dd-1b5cc5d2344b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:28:28 np0005593232 nova_compute[250269]: 2026-01-23 10:28:28.756 250273 DEBUG oslo_concurrency.lockutils [req-9e13cb7c-7c4a-4a73-a636-5258f6d1772b req-94ee54d9-6d44-4408-a1f3-5255e1222e44 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "13cce6b0-b75f-41bf-90dd-1b5cc5d2344b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:28:28 np0005593232 nova_compute[250269]: 2026-01-23 10:28:28.756 250273 DEBUG oslo_concurrency.lockutils [req-9e13cb7c-7c4a-4a73-a636-5258f6d1772b req-94ee54d9-6d44-4408-a1f3-5255e1222e44 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "13cce6b0-b75f-41bf-90dd-1b5cc5d2344b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:28:28 np0005593232 nova_compute[250269]: 2026-01-23 10:28:28.757 250273 DEBUG nova.compute.manager [req-9e13cb7c-7c4a-4a73-a636-5258f6d1772b req-94ee54d9-6d44-4408-a1f3-5255e1222e44 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Processing event network-vif-plugged-4db60331-fd91-4563-aa5e-c8bb81879499 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:28:28 np0005593232 nova_compute[250269]: 2026-01-23 10:28:28.758 250273 DEBUG nova.compute.manager [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Instance event wait completed in 30 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:28:28 np0005593232 nova_compute[250269]: 2026-01-23 10:28:28.765 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164108.76509, 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:28:28 np0005593232 nova_compute[250269]: 2026-01-23 10:28:28.766 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:28:28 np0005593232 nova_compute[250269]: 2026-01-23 10:28:28.768 250273 DEBUG nova.virt.libvirt.driver [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:28:28 np0005593232 nova_compute[250269]: 2026-01-23 10:28:28.770 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:28:28 np0005593232 nova_compute[250269]: 2026-01-23 10:28:28.777 250273 INFO nova.virt.libvirt.driver [-] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Instance spawned successfully.#033[00m
Jan 23 05:28:28 np0005593232 nova_compute[250269]: 2026-01-23 10:28:28.778 250273 DEBUG nova.virt.libvirt.driver [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:28:29 np0005593232 nova_compute[250269]: 2026-01-23 10:28:29.090 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:28:29 np0005593232 nova_compute[250269]: 2026-01-23 10:28:29.098 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:28:29 np0005593232 nova_compute[250269]: 2026-01-23 10:28:29.099 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:28:29 np0005593232 nova_compute[250269]: 2026-01-23 10:28:29.103 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:28:29 np0005593232 nova_compute[250269]: 2026-01-23 10:28:29.111 250273 DEBUG nova.virt.libvirt.driver [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:28:29 np0005593232 nova_compute[250269]: 2026-01-23 10:28:29.112 250273 DEBUG nova.virt.libvirt.driver [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:28:29 np0005593232 nova_compute[250269]: 2026-01-23 10:28:29.113 250273 DEBUG nova.virt.libvirt.driver [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:28:29 np0005593232 nova_compute[250269]: 2026-01-23 10:28:29.114 250273 DEBUG nova.virt.libvirt.driver [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:28:29 np0005593232 nova_compute[250269]: 2026-01-23 10:28:29.115 250273 DEBUG nova.virt.libvirt.driver [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:28:29 np0005593232 nova_compute[250269]: 2026-01-23 10:28:29.116 250273 DEBUG nova.virt.libvirt.driver [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:28:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:28:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:29.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:28:29 np0005593232 nova_compute[250269]: 2026-01-23 10:28:29.840 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2985: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 4.4 KiB/s wr, 1 op/s
Jan 23 05:28:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:30.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:30 np0005593232 nova_compute[250269]: 2026-01-23 10:28:30.096 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:28:30 np0005593232 nova_compute[250269]: 2026-01-23 10:28:30.394 250273 INFO nova.compute.manager [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Took 48.74 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:28:30 np0005593232 nova_compute[250269]: 2026-01-23 10:28:30.395 250273 DEBUG nova.compute.manager [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:28:31 np0005593232 nova_compute[250269]: 2026-01-23 10:28:31.103 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:28:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:28:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:31.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:28:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2986: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 3.7 KiB/s wr, 0 op/s
Jan 23 05:28:32 np0005593232 nova_compute[250269]: 2026-01-23 10:28:32.001 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:28:32 np0005593232 nova_compute[250269]: 2026-01-23 10:28:32.002 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:28:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:32.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:32 np0005593232 nova_compute[250269]: 2026-01-23 10:28:32.629 250273 INFO nova.compute.manager [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Took 52.08 seconds to build instance.#033[00m
Jan 23 05:28:32 np0005593232 nova_compute[250269]: 2026-01-23 10:28:32.851 250273 DEBUG oslo_concurrency.lockutils [None req-434173e7-240d-43b7-b67c-05050e4dc732 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Lock "13cce6b0-b75f-41bf-90dd-1b5cc5d2344b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 52.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:28:33 np0005593232 nova_compute[250269]: 2026-01-23 10:28:33.105 250273 DEBUG nova.compute.manager [req-25946b2d-a03f-40c7-a35b-4e7e133b4b0f req-d3d4c05e-b04d-41a6-9161-d9257706c00b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Received event network-vif-plugged-4db60331-fd91-4563-aa5e-c8bb81879499 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:28:33 np0005593232 nova_compute[250269]: 2026-01-23 10:28:33.106 250273 DEBUG oslo_concurrency.lockutils [req-25946b2d-a03f-40c7-a35b-4e7e133b4b0f req-d3d4c05e-b04d-41a6-9161-d9257706c00b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "13cce6b0-b75f-41bf-90dd-1b5cc5d2344b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:28:33 np0005593232 nova_compute[250269]: 2026-01-23 10:28:33.106 250273 DEBUG oslo_concurrency.lockutils [req-25946b2d-a03f-40c7-a35b-4e7e133b4b0f req-d3d4c05e-b04d-41a6-9161-d9257706c00b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "13cce6b0-b75f-41bf-90dd-1b5cc5d2344b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:28:33 np0005593232 nova_compute[250269]: 2026-01-23 10:28:33.106 250273 DEBUG oslo_concurrency.lockutils [req-25946b2d-a03f-40c7-a35b-4e7e133b4b0f req-d3d4c05e-b04d-41a6-9161-d9257706c00b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "13cce6b0-b75f-41bf-90dd-1b5cc5d2344b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:28:33 np0005593232 nova_compute[250269]: 2026-01-23 10:28:33.106 250273 DEBUG nova.compute.manager [req-25946b2d-a03f-40c7-a35b-4e7e133b4b0f req-d3d4c05e-b04d-41a6-9161-d9257706c00b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] No waiting events found dispatching network-vif-plugged-4db60331-fd91-4563-aa5e-c8bb81879499 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:28:33 np0005593232 nova_compute[250269]: 2026-01-23 10:28:33.106 250273 WARNING nova.compute.manager [req-25946b2d-a03f-40c7-a35b-4e7e133b4b0f req-d3d4c05e-b04d-41a6-9161-d9257706c00b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Received unexpected event network-vif-plugged-4db60331-fd91-4563-aa5e-c8bb81879499 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:28:33 np0005593232 nova_compute[250269]: 2026-01-23 10:28:33.160 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-da8fd4b4-46c3-412e-aeeb-499a3fec1bc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:28:33 np0005593232 nova_compute[250269]: 2026-01-23 10:28:33.161 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-da8fd4b4-46c3-412e-aeeb-499a3fec1bc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:28:33 np0005593232 nova_compute[250269]: 2026-01-23 10:28:33.161 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:28:33 np0005593232 nova_compute[250269]: 2026-01-23 10:28:33.303 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:28:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:33.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2987: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 507 KiB/s rd, 6.7 KiB/s wr, 18 op/s
Jan 23 05:28:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:34.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Jan 23 05:28:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Jan 23 05:28:34 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Jan 23 05:28:34 np0005593232 nova_compute[250269]: 2026-01-23 10:28:34.842 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:35.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2989: 321 pgs: 321 active+clean; 542 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 7.8 KiB/s wr, 111 op/s
Jan 23 05:28:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:28:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:36.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:28:36 np0005593232 nova_compute[250269]: 2026-01-23 10:28:36.433 250273 DEBUG oslo_concurrency.lockutils [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Acquiring lock "da8fd4b4-46c3-412e-aeeb-499a3fec1bc5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:28:36 np0005593232 nova_compute[250269]: 2026-01-23 10:28:36.434 250273 DEBUG oslo_concurrency.lockutils [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "da8fd4b4-46c3-412e-aeeb-499a3fec1bc5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:28:36 np0005593232 nova_compute[250269]: 2026-01-23 10:28:36.434 250273 DEBUG oslo_concurrency.lockutils [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Acquiring lock "da8fd4b4-46c3-412e-aeeb-499a3fec1bc5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:28:36 np0005593232 nova_compute[250269]: 2026-01-23 10:28:36.434 250273 DEBUG oslo_concurrency.lockutils [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "da8fd4b4-46c3-412e-aeeb-499a3fec1bc5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:28:36 np0005593232 nova_compute[250269]: 2026-01-23 10:28:36.434 250273 DEBUG oslo_concurrency.lockutils [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "da8fd4b4-46c3-412e-aeeb-499a3fec1bc5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:28:36 np0005593232 nova_compute[250269]: 2026-01-23 10:28:36.435 250273 INFO nova.compute.manager [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Terminating instance#033[00m
Jan 23 05:28:36 np0005593232 nova_compute[250269]: 2026-01-23 10:28:36.436 250273 DEBUG nova.compute.manager [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:28:36 np0005593232 kernel: tap610277e6-a4 (unregistering): left promiscuous mode
Jan 23 05:28:36 np0005593232 NetworkManager[49057]: <info>  [1769164116.4959] device (tap610277e6-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:28:36 np0005593232 ovn_controller[151001]: 2026-01-23T10:28:36Z|00679|binding|INFO|Releasing lport 610277e6-a454-47a7-8c51-aa13506c9f66 from this chassis (sb_readonly=0)
Jan 23 05:28:36 np0005593232 ovn_controller[151001]: 2026-01-23T10:28:36Z|00680|binding|INFO|Setting lport 610277e6-a454-47a7-8c51-aa13506c9f66 down in Southbound
Jan 23 05:28:36 np0005593232 ovn_controller[151001]: 2026-01-23T10:28:36Z|00681|binding|INFO|Removing iface tap610277e6-a4 ovn-installed in OVS
Jan 23 05:28:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:36.523 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:0a:45 10.100.0.7'], port_security=['fa:16:3e:e9:0a:45 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'da8fd4b4-46c3-412e-aeeb-499a3fec1bc5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c27429e1d8f433a8a67ddb76f8798f1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '29028637-714b-453c-9e54-c753b1c8b7f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0dedc65-79e0-4ae8-b1b0-46423e11b58a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=610277e6-a454-47a7-8c51-aa13506c9f66) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:28:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:36.524 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 610277e6-a454-47a7-8c51-aa13506c9f66 in datapath fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4 unbound from our chassis#033[00m
Jan 23 05:28:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:36.526 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4#033[00m
Jan 23 05:28:36 np0005593232 nova_compute[250269]: 2026-01-23 10:28:36.525 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:36 np0005593232 nova_compute[250269]: 2026-01-23 10:28:36.544 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:36.557 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[585e7e21-e261-4fcc-a27e-0069d8cfb371]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:36 np0005593232 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d000000a8.scope: Deactivated successfully.
Jan 23 05:28:36 np0005593232 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d000000a8.scope: Consumed 20.577s CPU time.
Jan 23 05:28:36 np0005593232 systemd-machined[215836]: Machine qemu-76-instance-000000a8 terminated.
Jan 23 05:28:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:36.591 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[836a010b-80f8-4a2d-af3e-04dfbf989c83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:36.594 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf1d327-1d49-4abe-a23a-a24b65afbcc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:36.622 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[a5bd8431-111f-4959-afb9-749118fb4ea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:36.631 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:28:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:36.638 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[57f4ba39-b63b-4fe2-90d5-10eed4de023a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfbd64ab8-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:7c:5a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 199], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787642, 'reachable_time': 30923, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365169, 'error': None, 'target': 'ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:36.654 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[657e1388-1b8c-4be2-b33b-05f2662dbae3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfbd64ab8-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 787655, 'tstamp': 787655}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365170, 'error': None, 'target': 'ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfbd64ab8-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 787659, 'tstamp': 787659}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365170, 'error': None, 'target': 'ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:36.658 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbd64ab8-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:28:36 np0005593232 nova_compute[250269]: 2026-01-23 10:28:36.659 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:36 np0005593232 nova_compute[250269]: 2026-01-23 10:28:36.666 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:36.666 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfbd64ab8-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:28:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:36.667 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:28:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:36.667 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfbd64ab8-90, col_values=(('external_ids', {'iface-id': 'b648300b-e46c-4d3b-b02e-94ff684c03ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:28:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:36.667 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:28:36 np0005593232 nova_compute[250269]: 2026-01-23 10:28:36.668 250273 INFO nova.virt.libvirt.driver [-] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Instance destroyed successfully.#033[00m
Jan 23 05:28:36 np0005593232 nova_compute[250269]: 2026-01-23 10:28:36.668 250273 DEBUG nova.objects.instance [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lazy-loading 'resources' on Instance uuid da8fd4b4-46c3-412e-aeeb-499a3fec1bc5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:28:36 np0005593232 nova_compute[250269]: 2026-01-23 10:28:36.732 250273 DEBUG nova.virt.libvirt.vif [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:26:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-543715089',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-543715089',id=168,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:26:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5c27429e1d8f433a8a67ddb76f8798f1',ramdisk_id='',reservation_id='r-cwl09i3q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',ima
ge_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1351337832',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1351337832-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:26:39Z,user_data=None,user_id='0d6a628e0dcb441fa41457bf719e65a0',uuid=da8fd4b4-46c3-412e-aeeb-499a3fec1bc5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "610277e6-a454-47a7-8c51-aa13506c9f66", "address": "fa:16:3e:e9:0a:45", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap610277e6-a4", "ovs_interfaceid": "610277e6-a454-47a7-8c51-aa13506c9f66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:28:36 np0005593232 nova_compute[250269]: 2026-01-23 10:28:36.734 250273 DEBUG nova.network.os_vif_util [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Converting VIF {"id": "610277e6-a454-47a7-8c51-aa13506c9f66", "address": "fa:16:3e:e9:0a:45", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap610277e6-a4", "ovs_interfaceid": "610277e6-a454-47a7-8c51-aa13506c9f66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:28:36 np0005593232 nova_compute[250269]: 2026-01-23 10:28:36.735 250273 DEBUG nova.network.os_vif_util [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:0a:45,bridge_name='br-int',has_traffic_filtering=True,id=610277e6-a454-47a7-8c51-aa13506c9f66,network=Network(fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap610277e6-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:28:36 np0005593232 nova_compute[250269]: 2026-01-23 10:28:36.736 250273 DEBUG os_vif [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:0a:45,bridge_name='br-int',has_traffic_filtering=True,id=610277e6-a454-47a7-8c51-aa13506c9f66,network=Network(fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap610277e6-a4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:28:36 np0005593232 nova_compute[250269]: 2026-01-23 10:28:36.744 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:36 np0005593232 nova_compute[250269]: 2026-01-23 10:28:36.744 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap610277e6-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:28:36 np0005593232 nova_compute[250269]: 2026-01-23 10:28:36.746 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:36 np0005593232 nova_compute[250269]: 2026-01-23 10:28:36.748 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:28:36 np0005593232 nova_compute[250269]: 2026-01-23 10:28:36.751 250273 INFO os_vif [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:0a:45,bridge_name='br-int',has_traffic_filtering=True,id=610277e6-a454-47a7-8c51-aa13506c9f66,network=Network(fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap610277e6-a4')#033[00m
Jan 23 05:28:37 np0005593232 nova_compute[250269]: 2026-01-23 10:28:37.188 250273 INFO nova.virt.libvirt.driver [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Deleting instance files /var/lib/nova/instances/da8fd4b4-46c3-412e-aeeb-499a3fec1bc5_del#033[00m
Jan 23 05:28:37 np0005593232 nova_compute[250269]: 2026-01-23 10:28:37.190 250273 INFO nova.virt.libvirt.driver [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Deletion of /var/lib/nova/instances/da8fd4b4-46c3-412e-aeeb-499a3fec1bc5_del complete#033[00m
Jan 23 05:28:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:28:37
Jan 23 05:28:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:28:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:28:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'vms', 'images', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log']
Jan 23 05:28:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:28:37 np0005593232 nova_compute[250269]: 2026-01-23 10:28:37.468 250273 INFO nova.compute.manager [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Took 1.03 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:28:37 np0005593232 nova_compute[250269]: 2026-01-23 10:28:37.469 250273 DEBUG oslo.service.loopingcall [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:28:37 np0005593232 nova_compute[250269]: 2026-01-23 10:28:37.470 250273 DEBUG nova.compute.manager [-] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:28:37 np0005593232 nova_compute[250269]: 2026-01-23 10:28:37.470 250273 DEBUG nova.network.neutron [-] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:28:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:28:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:28:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:28:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:28:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:28:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:28:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:37.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2990: 321 pgs: 321 active+clean; 408 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 7.4 KiB/s wr, 163 op/s
Jan 23 05:28:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:28:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:38.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:28:38 np0005593232 nova_compute[250269]: 2026-01-23 10:28:38.311 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Updating instance_info_cache with network_info: [{"id": "610277e6-a454-47a7-8c51-aa13506c9f66", "address": "fa:16:3e:e9:0a:45", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap610277e6-a4", "ovs_interfaceid": "610277e6-a454-47a7-8c51-aa13506c9f66", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:28:38 np0005593232 nova_compute[250269]: 2026-01-23 10:28:38.386 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-da8fd4b4-46c3-412e-aeeb-499a3fec1bc5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:28:38 np0005593232 nova_compute[250269]: 2026-01-23 10:28:38.386 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:28:38 np0005593232 nova_compute[250269]: 2026-01-23 10:28:38.392 250273 DEBUG nova.compute.manager [req-abba2218-b374-43ff-9c6c-2a939ee6e447 req-39e70aa9-5076-49c9-813c-2f203fd0123d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Received event network-vif-unplugged-610277e6-a454-47a7-8c51-aa13506c9f66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:28:38 np0005593232 nova_compute[250269]: 2026-01-23 10:28:38.392 250273 DEBUG oslo_concurrency.lockutils [req-abba2218-b374-43ff-9c6c-2a939ee6e447 req-39e70aa9-5076-49c9-813c-2f203fd0123d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "da8fd4b4-46c3-412e-aeeb-499a3fec1bc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:28:38 np0005593232 nova_compute[250269]: 2026-01-23 10:28:38.393 250273 DEBUG oslo_concurrency.lockutils [req-abba2218-b374-43ff-9c6c-2a939ee6e447 req-39e70aa9-5076-49c9-813c-2f203fd0123d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "da8fd4b4-46c3-412e-aeeb-499a3fec1bc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:28:38 np0005593232 nova_compute[250269]: 2026-01-23 10:28:38.393 250273 DEBUG oslo_concurrency.lockutils [req-abba2218-b374-43ff-9c6c-2a939ee6e447 req-39e70aa9-5076-49c9-813c-2f203fd0123d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "da8fd4b4-46c3-412e-aeeb-499a3fec1bc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:28:38 np0005593232 nova_compute[250269]: 2026-01-23 10:28:38.393 250273 DEBUG nova.compute.manager [req-abba2218-b374-43ff-9c6c-2a939ee6e447 req-39e70aa9-5076-49c9-813c-2f203fd0123d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] No waiting events found dispatching network-vif-unplugged-610277e6-a454-47a7-8c51-aa13506c9f66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:28:38 np0005593232 nova_compute[250269]: 2026-01-23 10:28:38.394 250273 DEBUG nova.compute.manager [req-abba2218-b374-43ff-9c6c-2a939ee6e447 req-39e70aa9-5076-49c9-813c-2f203fd0123d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Received event network-vif-unplugged-610277e6-a454-47a7-8c51-aa13506c9f66 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:28:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:28:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Jan 23 05:28:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Jan 23 05:28:38 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Jan 23 05:28:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:28:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:28:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:28:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:28:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:28:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:28:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:28:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:28:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:28:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:28:39 np0005593232 nova_compute[250269]: 2026-01-23 10:28:39.445 250273 DEBUG nova.network.neutron [-] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:28:39 np0005593232 nova_compute[250269]: 2026-01-23 10:28:39.474 250273 INFO nova.compute.manager [-] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Took 2.00 seconds to deallocate network for instance.#033[00m
Jan 23 05:28:39 np0005593232 nova_compute[250269]: 2026-01-23 10:28:39.532 250273 DEBUG oslo_concurrency.lockutils [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:28:39 np0005593232 nova_compute[250269]: 2026-01-23 10:28:39.532 250273 DEBUG oslo_concurrency.lockutils [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:28:39 np0005593232 nova_compute[250269]: 2026-01-23 10:28:39.557 250273 DEBUG nova.compute.manager [req-fc6cf0ec-3032-4796-8eaf-f9a3e5fedbc4 req-17ee7b33-3c6a-4e44-8deb-efb0bc4fa20c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Received event network-vif-deleted-610277e6-a454-47a7-8c51-aa13506c9f66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:28:39 np0005593232 nova_compute[250269]: 2026-01-23 10:28:39.639 250273 DEBUG oslo_concurrency.processutils [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:28:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:39.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:39 np0005593232 nova_compute[250269]: 2026-01-23 10:28:39.844 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2992: 321 pgs: 321 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 9.7 KiB/s wr, 211 op/s
Jan 23 05:28:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:28:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3267454624' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:28:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:40.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:40 np0005593232 nova_compute[250269]: 2026-01-23 10:28:40.098 250273 DEBUG oslo_concurrency.processutils [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:28:40 np0005593232 nova_compute[250269]: 2026-01-23 10:28:40.107 250273 DEBUG nova.compute.provider_tree [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:28:40 np0005593232 nova_compute[250269]: 2026-01-23 10:28:40.125 250273 DEBUG nova.scheduler.client.report [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:28:40 np0005593232 nova_compute[250269]: 2026-01-23 10:28:40.166 250273 DEBUG oslo_concurrency.lockutils [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:28:40 np0005593232 nova_compute[250269]: 2026-01-23 10:28:40.209 250273 INFO nova.scheduler.client.report [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Deleted allocations for instance da8fd4b4-46c3-412e-aeeb-499a3fec1bc5#033[00m
Jan 23 05:28:40 np0005593232 nova_compute[250269]: 2026-01-23 10:28:40.306 250273 DEBUG oslo_concurrency.lockutils [None req-a59d9b8c-fbeb-4157-b6e2-ed62e22b75ad 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "da8fd4b4-46c3-412e-aeeb-499a3fec1bc5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.873s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:28:40 np0005593232 nova_compute[250269]: 2026-01-23 10:28:40.509 250273 DEBUG nova.compute.manager [req-ea745dbd-5da1-44a5-9d36-983ef82dd0f1 req-fdf347cc-6ba5-4494-afee-06ab6054dfa7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Received event network-vif-plugged-610277e6-a454-47a7-8c51-aa13506c9f66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:28:40 np0005593232 nova_compute[250269]: 2026-01-23 10:28:40.509 250273 DEBUG oslo_concurrency.lockutils [req-ea745dbd-5da1-44a5-9d36-983ef82dd0f1 req-fdf347cc-6ba5-4494-afee-06ab6054dfa7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "da8fd4b4-46c3-412e-aeeb-499a3fec1bc5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:28:40 np0005593232 nova_compute[250269]: 2026-01-23 10:28:40.509 250273 DEBUG oslo_concurrency.lockutils [req-ea745dbd-5da1-44a5-9d36-983ef82dd0f1 req-fdf347cc-6ba5-4494-afee-06ab6054dfa7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "da8fd4b4-46c3-412e-aeeb-499a3fec1bc5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:28:40 np0005593232 nova_compute[250269]: 2026-01-23 10:28:40.510 250273 DEBUG oslo_concurrency.lockutils [req-ea745dbd-5da1-44a5-9d36-983ef82dd0f1 req-fdf347cc-6ba5-4494-afee-06ab6054dfa7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "da8fd4b4-46c3-412e-aeeb-499a3fec1bc5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:28:40 np0005593232 nova_compute[250269]: 2026-01-23 10:28:40.510 250273 DEBUG nova.compute.manager [req-ea745dbd-5da1-44a5-9d36-983ef82dd0f1 req-fdf347cc-6ba5-4494-afee-06ab6054dfa7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] No waiting events found dispatching network-vif-plugged-610277e6-a454-47a7-8c51-aa13506c9f66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:28:40 np0005593232 nova_compute[250269]: 2026-01-23 10:28:40.510 250273 WARNING nova.compute.manager [req-ea745dbd-5da1-44a5-9d36-983ef82dd0f1 req-fdf347cc-6ba5-4494-afee-06ab6054dfa7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Received unexpected event network-vif-plugged-610277e6-a454-47a7-8c51-aa13506c9f66 for instance with vm_state deleted and task_state None.#033[00m
Jan 23 05:28:40 np0005593232 NetworkManager[49057]: <info>  [1769164120.8413] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/317)
Jan 23 05:28:40 np0005593232 nova_compute[250269]: 2026-01-23 10:28:40.840 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:40 np0005593232 NetworkManager[49057]: <info>  [1769164120.8421] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/318)
Jan 23 05:28:40 np0005593232 nova_compute[250269]: 2026-01-23 10:28:40.905 250273 DEBUG oslo_concurrency.lockutils [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Acquiring lock "ae979986-7780-443a-afbc-6b4be8f71da1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:28:40 np0005593232 nova_compute[250269]: 2026-01-23 10:28:40.906 250273 DEBUG oslo_concurrency.lockutils [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:28:40 np0005593232 nova_compute[250269]: 2026-01-23 10:28:40.906 250273 DEBUG oslo_concurrency.lockutils [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Acquiring lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:28:40 np0005593232 nova_compute[250269]: 2026-01-23 10:28:40.906 250273 DEBUG oslo_concurrency.lockutils [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:28:40 np0005593232 nova_compute[250269]: 2026-01-23 10:28:40.906 250273 DEBUG oslo_concurrency.lockutils [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:28:40 np0005593232 nova_compute[250269]: 2026-01-23 10:28:40.908 250273 INFO nova.compute.manager [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Terminating instance#033[00m
Jan 23 05:28:40 np0005593232 nova_compute[250269]: 2026-01-23 10:28:40.909 250273 DEBUG nova.compute.manager [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:28:40 np0005593232 kernel: tap8a815bf1-0b (unregistering): left promiscuous mode
Jan 23 05:28:40 np0005593232 NetworkManager[49057]: <info>  [1769164120.9935] device (tap8a815bf1-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:28:40 np0005593232 nova_compute[250269]: 2026-01-23 10:28:40.994 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:41 np0005593232 nova_compute[250269]: 2026-01-23 10:28:41.006 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:41 np0005593232 ovn_controller[151001]: 2026-01-23T10:28:41Z|00682|binding|INFO|Releasing lport 8a815bf1-0b58-47f0-a81a-267ec84efc82 from this chassis (sb_readonly=0)
Jan 23 05:28:41 np0005593232 ovn_controller[151001]: 2026-01-23T10:28:41Z|00683|binding|INFO|Releasing lport 1e84a51b-d3a3-4559-9baf-4ae27c5bded6 from this chassis (sb_readonly=0)
Jan 23 05:28:41 np0005593232 ovn_controller[151001]: 2026-01-23T10:28:41Z|00684|binding|INFO|Releasing lport b648300b-e46c-4d3b-b02e-94ff684c03ae from this chassis (sb_readonly=0)
Jan 23 05:28:41 np0005593232 ovn_controller[151001]: 2026-01-23T10:28:41Z|00685|binding|INFO|Removing iface tap8a815bf1-0b ovn-installed in OVS
Jan 23 05:28:41 np0005593232 ovn_controller[151001]: 2026-01-23T10:28:41Z|00686|binding|INFO|Setting lport 8a815bf1-0b58-47f0-a81a-267ec84efc82 down in Southbound
Jan 23 05:28:41 np0005593232 nova_compute[250269]: 2026-01-23 10:28:41.034 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:41.040 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:00:1f 10.100.0.9'], port_security=['fa:16:3e:da:00:1f 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'ae979986-7780-443a-afbc-6b4be8f71da1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5c27429e1d8f433a8a67ddb76f8798f1', 'neutron:revision_number': '8', 'neutron:security_group_ids': '29028637-714b-453c-9e54-c753b1c8b7f6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0dedc65-79e0-4ae8-b1b0-46423e11b58a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=8a815bf1-0b58-47f0-a81a-267ec84efc82) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:28:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:41.041 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 8a815bf1-0b58-47f0-a81a-267ec84efc82 in datapath fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4 unbound from our chassis#033[00m
Jan 23 05:28:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:41.044 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:28:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:41.045 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0cfa20ef-1916-46bd-8f57-446469c93af0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:41.045 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4 namespace which is not needed anymore#033[00m
Jan 23 05:28:41 np0005593232 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d000000a4.scope: Deactivated successfully.
Jan 23 05:28:41 np0005593232 nova_compute[250269]: 2026-01-23 10:28:41.058 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:41 np0005593232 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d000000a4.scope: Consumed 2.217s CPU time.
Jan 23 05:28:41 np0005593232 systemd-machined[215836]: Machine qemu-75-instance-000000a4 terminated.
Jan 23 05:28:41 np0005593232 nova_compute[250269]: 2026-01-23 10:28:41.163 250273 INFO nova.virt.libvirt.driver [-] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Instance destroyed successfully.#033[00m
Jan 23 05:28:41 np0005593232 nova_compute[250269]: 2026-01-23 10:28:41.167 250273 DEBUG nova.objects.instance [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lazy-loading 'resources' on Instance uuid ae979986-7780-443a-afbc-6b4be8f71da1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:28:41 np0005593232 nova_compute[250269]: 2026-01-23 10:28:41.190 250273 DEBUG nova.virt.libvirt.vif [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:24:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerBootFromVolumeStableRescueTest-server-464630064',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverbootfromvolumestablerescuetest-server-464630064',id=164,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:25:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5c27429e1d8f433a8a67ddb76f8798f1',ramdisk_id='',reservation_id='r-gwy2rl7j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',ima
ge_min_ram='0',owner_project_name='tempest-ServerBootFromVolumeStableRescueTest-1351337832',owner_user_name='tempest-ServerBootFromVolumeStableRescueTest-1351337832-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:26:04Z,user_data=None,user_id='0d6a628e0dcb441fa41457bf719e65a0',uuid=ae979986-7780-443a-afbc-6b4be8f71da1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "address": "fa:16:3e:da:00:1f", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a815bf1-0b", "ovs_interfaceid": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:28:41 np0005593232 nova_compute[250269]: 2026-01-23 10:28:41.192 250273 DEBUG nova.network.os_vif_util [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Converting VIF {"id": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "address": "fa:16:3e:da:00:1f", "network": {"id": "fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4", "bridge": "br-int", "label": "tempest-ServerBootFromVolumeStableRescueTest-596908432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5c27429e1d8f433a8a67ddb76f8798f1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a815bf1-0b", "ovs_interfaceid": "8a815bf1-0b58-47f0-a81a-267ec84efc82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:28:41 np0005593232 nova_compute[250269]: 2026-01-23 10:28:41.193 250273 DEBUG nova.network.os_vif_util [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:da:00:1f,bridge_name='br-int',has_traffic_filtering=True,id=8a815bf1-0b58-47f0-a81a-267ec84efc82,network=Network(fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a815bf1-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:28:41 np0005593232 nova_compute[250269]: 2026-01-23 10:28:41.194 250273 DEBUG os_vif [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:00:1f,bridge_name='br-int',has_traffic_filtering=True,id=8a815bf1-0b58-47f0-a81a-267ec84efc82,network=Network(fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a815bf1-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:28:41 np0005593232 nova_compute[250269]: 2026-01-23 10:28:41.197 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:41 np0005593232 nova_compute[250269]: 2026-01-23 10:28:41.198 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a815bf1-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:28:41 np0005593232 nova_compute[250269]: 2026-01-23 10:28:41.202 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:41 np0005593232 nova_compute[250269]: 2026-01-23 10:28:41.204 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:28:41 np0005593232 nova_compute[250269]: 2026-01-23 10:28:41.208 250273 INFO os_vif [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:00:1f,bridge_name='br-int',has_traffic_filtering=True,id=8a815bf1-0b58-47f0-a81a-267ec84efc82,network=Network(fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a815bf1-0b')#033[00m
Jan 23 05:28:41 np0005593232 neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4[361670]: [NOTICE]   (361674) : haproxy version is 2.8.14-c23fe91
Jan 23 05:28:41 np0005593232 neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4[361670]: [NOTICE]   (361674) : path to executable is /usr/sbin/haproxy
Jan 23 05:28:41 np0005593232 neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4[361670]: [WARNING]  (361674) : Exiting Master process...
Jan 23 05:28:41 np0005593232 neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4[361670]: [WARNING]  (361674) : Exiting Master process...
Jan 23 05:28:41 np0005593232 neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4[361670]: [ALERT]    (361674) : Current worker (361677) exited with code 143 (Terminated)
Jan 23 05:28:41 np0005593232 neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4[361670]: [WARNING]  (361674) : All workers exited. Exiting... (0)
Jan 23 05:28:41 np0005593232 systemd[1]: libpod-d56c23a30231b68090a1d4feffdf42220c920816131ae7108314318f9e32de35.scope: Deactivated successfully.
Jan 23 05:28:41 np0005593232 podman[365310]: 2026-01-23 10:28:41.243400236 +0000 UTC m=+0.065451551 container died d56c23a30231b68090a1d4feffdf42220c920816131ae7108314318f9e32de35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 23 05:28:41 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d56c23a30231b68090a1d4feffdf42220c920816131ae7108314318f9e32de35-userdata-shm.mount: Deactivated successfully.
Jan 23 05:28:41 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5204ad2e1d5ec9589fcda41b7a30409d4167b4cd72e0dad5027d919787567051-merged.mount: Deactivated successfully.
Jan 23 05:28:41 np0005593232 podman[365310]: 2026-01-23 10:28:41.293683445 +0000 UTC m=+0.115734750 container cleanup d56c23a30231b68090a1d4feffdf42220c920816131ae7108314318f9e32de35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:28:41 np0005593232 systemd[1]: libpod-conmon-d56c23a30231b68090a1d4feffdf42220c920816131ae7108314318f9e32de35.scope: Deactivated successfully.
Jan 23 05:28:41 np0005593232 podman[365358]: 2026-01-23 10:28:41.377612441 +0000 UTC m=+0.053358538 container remove d56c23a30231b68090a1d4feffdf42220c920816131ae7108314318f9e32de35 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 23 05:28:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:41.384 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4a27c662-a955-4a05-8337-fd1bb730d5b5]: (4, ('Fri Jan 23 10:28:41 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4 (d56c23a30231b68090a1d4feffdf42220c920816131ae7108314318f9e32de35)\nd56c23a30231b68090a1d4feffdf42220c920816131ae7108314318f9e32de35\nFri Jan 23 10:28:41 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4 (d56c23a30231b68090a1d4feffdf42220c920816131ae7108314318f9e32de35)\nd56c23a30231b68090a1d4feffdf42220c920816131ae7108314318f9e32de35\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:41.386 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ec9bd14d-505e-4bb6-a912-bba0d27beeaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:41.387 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfbd64ab8-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:28:41 np0005593232 kernel: tapfbd64ab8-90: left promiscuous mode
Jan 23 05:28:41 np0005593232 nova_compute[250269]: 2026-01-23 10:28:41.391 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:41 np0005593232 nova_compute[250269]: 2026-01-23 10:28:41.403 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:41 np0005593232 nova_compute[250269]: 2026-01-23 10:28:41.405 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:41.407 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[34a3f0cd-15c8-4ba4-a3d2-c60bad46bfc3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:41.425 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ea033879-d1ba-4300-80a9-8ed44af7d9f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:41.428 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4cf98300-c9fe-427c-a1c5-87fb13686fa9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:41.454 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[51023ddc-88e1-4777-94bc-fb60da78736b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787635, 'reachable_time': 41571, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365374, 'error': None, 'target': 'ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:41.458 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fbd64ab8-9e5b-4300-98d7-50a5d6fbefc4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:28:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:41.458 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[a7e711e0-a49d-4428-b9e2-ab98ca249933]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:28:41 np0005593232 systemd[1]: run-netns-ovnmeta\x2dfbd64ab8\x2d9e5b\x2d4300\x2d98d7\x2d50a5d6fbefc4.mount: Deactivated successfully.
Jan 23 05:28:41 np0005593232 nova_compute[250269]: 2026-01-23 10:28:41.599 250273 INFO nova.virt.libvirt.driver [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Deleting instance files /var/lib/nova/instances/ae979986-7780-443a-afbc-6b4be8f71da1_del#033[00m
Jan 23 05:28:41 np0005593232 nova_compute[250269]: 2026-01-23 10:28:41.601 250273 INFO nova.virt.libvirt.driver [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Deletion of /var/lib/nova/instances/ae979986-7780-443a-afbc-6b4be8f71da1_del complete#033[00m
Jan 23 05:28:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:41.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2993: 321 pgs: 321 active+clean; 372 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 5.2 KiB/s wr, 184 op/s
Jan 23 05:28:41 np0005593232 nova_compute[250269]: 2026-01-23 10:28:41.918 250273 DEBUG nova.compute.manager [req-d035e159-71ef-41e1-9858-83572fa1743d req-c03e7b92-0f2b-494d-b2db-e43d2a3e57e1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Received event network-changed-4db60331-fd91-4563-aa5e-c8bb81879499 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:28:41 np0005593232 nova_compute[250269]: 2026-01-23 10:28:41.919 250273 DEBUG nova.compute.manager [req-d035e159-71ef-41e1-9858-83572fa1743d req-c03e7b92-0f2b-494d-b2db-e43d2a3e57e1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Refreshing instance network info cache due to event network-changed-4db60331-fd91-4563-aa5e-c8bb81879499. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:28:41 np0005593232 nova_compute[250269]: 2026-01-23 10:28:41.920 250273 DEBUG oslo_concurrency.lockutils [req-d035e159-71ef-41e1-9858-83572fa1743d req-c03e7b92-0f2b-494d-b2db-e43d2a3e57e1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-13cce6b0-b75f-41bf-90dd-1b5cc5d2344b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:28:41 np0005593232 nova_compute[250269]: 2026-01-23 10:28:41.921 250273 DEBUG oslo_concurrency.lockutils [req-d035e159-71ef-41e1-9858-83572fa1743d req-c03e7b92-0f2b-494d-b2db-e43d2a3e57e1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-13cce6b0-b75f-41bf-90dd-1b5cc5d2344b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:28:41 np0005593232 nova_compute[250269]: 2026-01-23 10:28:41.921 250273 DEBUG nova.network.neutron [req-d035e159-71ef-41e1-9858-83572fa1743d req-c03e7b92-0f2b-494d-b2db-e43d2a3e57e1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Refreshing network info cache for port 4db60331-fd91-4563-aa5e-c8bb81879499 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:28:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:28:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:42.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:28:42 np0005593232 nova_compute[250269]: 2026-01-23 10:28:42.119 250273 INFO nova.compute.manager [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Took 1.21 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:28:42 np0005593232 nova_compute[250269]: 2026-01-23 10:28:42.120 250273 DEBUG oslo.service.loopingcall [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:28:42 np0005593232 nova_compute[250269]: 2026-01-23 10:28:42.121 250273 DEBUG nova.compute.manager [-] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:28:42 np0005593232 nova_compute[250269]: 2026-01-23 10:28:42.122 250273 DEBUG nova.network.neutron [-] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:28:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:42.644 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:28:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:42.645 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:28:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:28:42.646 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:28:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:28:43 np0005593232 nova_compute[250269]: 2026-01-23 10:28:43.484 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:43 np0005593232 ovn_controller[151001]: 2026-01-23T10:28:43Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e3:f7:86 10.100.0.11
Jan 23 05:28:43 np0005593232 ovn_controller[151001]: 2026-01-23T10:28:43Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e3:f7:86 10.100.0.11
Jan 23 05:28:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:28:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:43.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:28:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2994: 321 pgs: 321 active+clean; 379 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 192 op/s
Jan 23 05:28:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:44.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:44 np0005593232 nova_compute[250269]: 2026-01-23 10:28:44.121 250273 DEBUG nova.network.neutron [-] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:28:44 np0005593232 nova_compute[250269]: 2026-01-23 10:28:44.161 250273 DEBUG nova.compute.manager [req-1e758de9-2baa-4381-a19b-432ba44273f0 req-2ba86c73-6d04-4b1d-a6c1-4f8aa5ce3017 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received event network-vif-deleted-8a815bf1-0b58-47f0-a81a-267ec84efc82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:28:44 np0005593232 nova_compute[250269]: 2026-01-23 10:28:44.161 250273 INFO nova.compute.manager [req-1e758de9-2baa-4381-a19b-432ba44273f0 req-2ba86c73-6d04-4b1d-a6c1-4f8aa5ce3017 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Neutron deleted interface 8a815bf1-0b58-47f0-a81a-267ec84efc82; detaching it from the instance and deleting it from the info cache#033[00m
Jan 23 05:28:44 np0005593232 nova_compute[250269]: 2026-01-23 10:28:44.162 250273 DEBUG nova.network.neutron [req-1e758de9-2baa-4381-a19b-432ba44273f0 req-2ba86c73-6d04-4b1d-a6c1-4f8aa5ce3017 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:28:44 np0005593232 nova_compute[250269]: 2026-01-23 10:28:44.239 250273 INFO nova.compute.manager [-] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Took 2.12 seconds to deallocate network for instance.#033[00m
Jan 23 05:28:44 np0005593232 nova_compute[250269]: 2026-01-23 10:28:44.256 250273 DEBUG nova.compute.manager [req-1e758de9-2baa-4381-a19b-432ba44273f0 req-2ba86c73-6d04-4b1d-a6c1-4f8aa5ce3017 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Detach interface failed, port_id=8a815bf1-0b58-47f0-a81a-267ec84efc82, reason: Instance ae979986-7780-443a-afbc-6b4be8f71da1 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 23 05:28:44 np0005593232 nova_compute[250269]: 2026-01-23 10:28:44.561 250273 DEBUG nova.compute.manager [req-976e20b8-4ed1-45c8-a0d2-d34975d9ec51 req-6f0c5123-4714-492f-b08a-e11f34417ee1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received event network-vif-unplugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:28:44 np0005593232 nova_compute[250269]: 2026-01-23 10:28:44.562 250273 DEBUG oslo_concurrency.lockutils [req-976e20b8-4ed1-45c8-a0d2-d34975d9ec51 req-6f0c5123-4714-492f-b08a-e11f34417ee1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:28:44 np0005593232 nova_compute[250269]: 2026-01-23 10:28:44.564 250273 DEBUG oslo_concurrency.lockutils [req-976e20b8-4ed1-45c8-a0d2-d34975d9ec51 req-6f0c5123-4714-492f-b08a-e11f34417ee1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:28:44 np0005593232 nova_compute[250269]: 2026-01-23 10:28:44.564 250273 DEBUG oslo_concurrency.lockutils [req-976e20b8-4ed1-45c8-a0d2-d34975d9ec51 req-6f0c5123-4714-492f-b08a-e11f34417ee1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:28:44 np0005593232 nova_compute[250269]: 2026-01-23 10:28:44.564 250273 DEBUG nova.compute.manager [req-976e20b8-4ed1-45c8-a0d2-d34975d9ec51 req-6f0c5123-4714-492f-b08a-e11f34417ee1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] No waiting events found dispatching network-vif-unplugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:28:44 np0005593232 nova_compute[250269]: 2026-01-23 10:28:44.565 250273 DEBUG nova.compute.manager [req-976e20b8-4ed1-45c8-a0d2-d34975d9ec51 req-6f0c5123-4714-492f-b08a-e11f34417ee1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received event network-vif-unplugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:28:44 np0005593232 nova_compute[250269]: 2026-01-23 10:28:44.565 250273 DEBUG nova.compute.manager [req-976e20b8-4ed1-45c8-a0d2-d34975d9ec51 req-6f0c5123-4714-492f-b08a-e11f34417ee1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received event network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:28:44 np0005593232 nova_compute[250269]: 2026-01-23 10:28:44.565 250273 DEBUG oslo_concurrency.lockutils [req-976e20b8-4ed1-45c8-a0d2-d34975d9ec51 req-6f0c5123-4714-492f-b08a-e11f34417ee1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:28:44 np0005593232 nova_compute[250269]: 2026-01-23 10:28:44.565 250273 DEBUG oslo_concurrency.lockutils [req-976e20b8-4ed1-45c8-a0d2-d34975d9ec51 req-6f0c5123-4714-492f-b08a-e11f34417ee1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:28:44 np0005593232 nova_compute[250269]: 2026-01-23 10:28:44.565 250273 DEBUG oslo_concurrency.lockutils [req-976e20b8-4ed1-45c8-a0d2-d34975d9ec51 req-6f0c5123-4714-492f-b08a-e11f34417ee1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:28:44 np0005593232 nova_compute[250269]: 2026-01-23 10:28:44.566 250273 DEBUG nova.compute.manager [req-976e20b8-4ed1-45c8-a0d2-d34975d9ec51 req-6f0c5123-4714-492f-b08a-e11f34417ee1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] No waiting events found dispatching network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:28:44 np0005593232 nova_compute[250269]: 2026-01-23 10:28:44.566 250273 WARNING nova.compute.manager [req-976e20b8-4ed1-45c8-a0d2-d34975d9ec51 req-6f0c5123-4714-492f-b08a-e11f34417ee1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Received unexpected event network-vif-plugged-8a815bf1-0b58-47f0-a81a-267ec84efc82 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:28:44 np0005593232 nova_compute[250269]: 2026-01-23 10:28:44.848 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:45 np0005593232 nova_compute[250269]: 2026-01-23 10:28:45.121 250273 INFO nova.compute.manager [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Took 0.88 seconds to detach 1 volumes for instance.#033[00m
Jan 23 05:28:45 np0005593232 nova_compute[250269]: 2026-01-23 10:28:45.174 250273 DEBUG nova.network.neutron [req-d035e159-71ef-41e1-9858-83572fa1743d req-c03e7b92-0f2b-494d-b2db-e43d2a3e57e1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Updated VIF entry in instance network info cache for port 4db60331-fd91-4563-aa5e-c8bb81879499. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:28:45 np0005593232 nova_compute[250269]: 2026-01-23 10:28:45.175 250273 DEBUG nova.network.neutron [req-d035e159-71ef-41e1-9858-83572fa1743d req-c03e7b92-0f2b-494d-b2db-e43d2a3e57e1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Updating instance_info_cache with network_info: [{"id": "4db60331-fd91-4563-aa5e-c8bb81879499", "address": "fa:16:3e:e3:f7:86", "network": {"id": "a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad", "bridge": "br-int", "label": "tempest-network-smoke--274742252", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8887855ebd545a6bdab3b6a18c19dd9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db60331-fd", "ovs_interfaceid": "4db60331-fd91-4563-aa5e-c8bb81879499", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:28:45 np0005593232 nova_compute[250269]: 2026-01-23 10:28:45.431 250273 DEBUG oslo_concurrency.lockutils [req-d035e159-71ef-41e1-9858-83572fa1743d req-c03e7b92-0f2b-494d-b2db-e43d2a3e57e1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-13cce6b0-b75f-41bf-90dd-1b5cc5d2344b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:28:45 np0005593232 nova_compute[250269]: 2026-01-23 10:28:45.582 250273 DEBUG oslo_concurrency.lockutils [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:28:45 np0005593232 nova_compute[250269]: 2026-01-23 10:28:45.584 250273 DEBUG oslo_concurrency.lockutils [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:28:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:28:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:45.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:28:45 np0005593232 nova_compute[250269]: 2026-01-23 10:28:45.717 250273 DEBUG oslo_concurrency.processutils [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:28:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2995: 321 pgs: 321 active+clean; 396 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 421 KiB/s rd, 2.5 MiB/s wr, 135 op/s
Jan 23 05:28:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:28:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:46.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:28:46 np0005593232 nova_compute[250269]: 2026-01-23 10:28:46.204 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:28:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/807027297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:28:46 np0005593232 nova_compute[250269]: 2026-01-23 10:28:46.268 250273 DEBUG oslo_concurrency.processutils [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:28:46 np0005593232 nova_compute[250269]: 2026-01-23 10:28:46.281 250273 DEBUG nova.compute.provider_tree [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:28:46 np0005593232 nova_compute[250269]: 2026-01-23 10:28:46.593 250273 DEBUG nova.scheduler.client.report [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:28:46 np0005593232 nova_compute[250269]: 2026-01-23 10:28:46.805 250273 DEBUG oslo_concurrency.lockutils [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:28:46 np0005593232 nova_compute[250269]: 2026-01-23 10:28:46.926 250273 INFO nova.scheduler.client.report [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Deleted allocations for instance ae979986-7780-443a-afbc-6b4be8f71da1#033[00m
Jan 23 05:28:47 np0005593232 nova_compute[250269]: 2026-01-23 10:28:47.004 250273 DEBUG oslo_concurrency.lockutils [None req-59e77c8a-8aaa-45a1-a97f-84679bf95a22 0d6a628e0dcb441fa41457bf719e65a0 5c27429e1d8f433a8a67ddb76f8798f1 - - default default] Lock "ae979986-7780-443a-afbc-6b4be8f71da1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:28:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:28:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:28:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:28:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:28:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006473338916806254 of space, bias 1.0, pg target 1.942001675041876 quantized to 32 (current 32)
Jan 23 05:28:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:28:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.650393169551461 quantized to 32 (current 32)
Jan 23 05:28:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:28:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:28:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:28:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.002892064489112228 of space, bias 1.0, pg target 0.8647272822445562 quantized to 32 (current 32)
Jan 23 05:28:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:28:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 05:28:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:28:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:28:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:28:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 05:28:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:28:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 05:28:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:28:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:28:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:28:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 05:28:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:28:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:47.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:28:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 05:28:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 5400.0 total, 600.0 interval
Cumulative writes: 14K writes, 66K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s
Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1537 writes, 6746 keys, 1537 commit groups, 1.0 writes per commit group, ingest: 10.42 MB, 0.02 MB/s
Interval WAL: 1537 writes, 1537 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     45.4      1.93              0.33        46    0.042       0      0       0.0       0.0
  L6      1/0    9.84 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.0     83.8     71.1      6.13              1.44        45    0.136    310K    24K       0.0       0.0
 Sum      1/0    9.84 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.0     63.7     64.9      8.07              1.77        91    0.089    310K    24K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.1     72.4     71.8      0.83              0.21        10    0.083     46K   2595       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0     83.8     71.1      6.13              1.44        45    0.136    310K    24K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     45.4      1.93              0.33        45    0.043       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 5400.0 total, 600.0 interval
Flush(GB): cumulative 0.086, interval 0.008
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.51 GB write, 0.10 MB/s write, 0.50 GB read, 0.10 MB/s read, 8.1 seconds
Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.8 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557a844231f0#2 capacity: 304.00 MB usage: 56.98 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000949 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3295,54.73 MB,18.0018%) FilterBlock(92,873.11 KB,0.280476%) IndexBlock(92,1.40 MB,0.460389%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Jan 23 05:28:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2996: 321 pgs: 321 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 439 KiB/s rd, 2.6 MiB/s wr, 99 op/s
Jan 23 05:28:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:28:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:48.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:28:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:28:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e330 do_prune osdmap full prune enabled
Jan 23 05:28:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e331 e331: 3 total, 3 up, 3 in
Jan 23 05:28:48 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e331: 3 total, 3 up, 3 in
Jan 23 05:28:49 np0005593232 podman[365403]: 2026-01-23 10:28:49.538840511 +0000 UTC m=+0.175809778 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 23 05:28:49 np0005593232 ovn_controller[151001]: 2026-01-23T10:28:49Z|00687|binding|INFO|Releasing lport 1e84a51b-d3a3-4559-9baf-4ae27c5bded6 from this chassis (sb_readonly=0)
Jan 23 05:28:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:49.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:49 np0005593232 nova_compute[250269]: 2026-01-23 10:28:49.747 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:49 np0005593232 nova_compute[250269]: 2026-01-23 10:28:49.851 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2998: 321 pgs: 321 active+clean; 401 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 453 KiB/s rd, 2.6 MiB/s wr, 116 op/s
Jan 23 05:28:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:50.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:51 np0005593232 nova_compute[250269]: 2026-01-23 10:28:51.210 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:51 np0005593232 nova_compute[250269]: 2026-01-23 10:28:51.666 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769164116.6651413, da8fd4b4-46c3-412e-aeeb-499a3fec1bc5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:28:51 np0005593232 nova_compute[250269]: 2026-01-23 10:28:51.667 250273 INFO nova.compute.manager [-] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:28:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:28:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:51.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:28:51 np0005593232 nova_compute[250269]: 2026-01-23 10:28:51.737 250273 DEBUG nova.compute.manager [None req-21af0f76-70d0-4bb0-9753-d83435e9a9bf - - - - - -] [instance: da8fd4b4-46c3-412e-aeeb-499a3fec1bc5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:28:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v2999: 321 pgs: 321 active+clean; 401 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 453 KiB/s rd, 2.6 MiB/s wr, 116 op/s
Jan 23 05:28:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:52.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:52 np0005593232 nova_compute[250269]: 2026-01-23 10:28:52.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:28:53 np0005593232 nova_compute[250269]: 2026-01-23 10:28:53.352 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:28:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:28:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:28:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:53.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:28:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3000: 321 pgs: 321 active+clean; 311 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 340 KiB/s rd, 1.4 MiB/s wr, 84 op/s
Jan 23 05:28:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:54.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:54 np0005593232 nova_compute[250269]: 2026-01-23 10:28:54.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:28:54 np0005593232 nova_compute[250269]: 2026-01-23 10:28:54.374 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 05:28:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:28:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 05:28:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:28:54 np0005593232 nova_compute[250269]: 2026-01-23 10:28:54.855 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:55 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:28:55 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:28:55 np0005593232 nova_compute[250269]: 2026-01-23 10:28:55.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:28:55 np0005593232 nova_compute[250269]: 2026-01-23 10:28:55.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:28:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:28:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:28:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:28:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:28:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:28:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:55.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:28:55 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d16f3694-0028-45c4-a37c-90cbe1788ede does not exist
Jan 23 05:28:55 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 2b89bbdb-cfb9-411f-b861-be024a747d29 does not exist
Jan 23 05:28:55 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev f199c9bb-7c83-48c1-9022-d5ca5886df7e does not exist
Jan 23 05:28:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:28:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:28:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:28:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:28:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:28:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:28:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3001: 321 pgs: 321 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 101 KiB/s rd, 109 KiB/s wr, 77 op/s
Jan 23 05:28:55 np0005593232 podman[365590]: 2026-01-23 10:28:55.919675256 +0000 UTC m=+0.066946834 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 23 05:28:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:56.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:56 np0005593232 nova_compute[250269]: 2026-01-23 10:28:56.160 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769164121.1577625, ae979986-7780-443a-afbc-6b4be8f71da1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:28:56 np0005593232 nova_compute[250269]: 2026-01-23 10:28:56.160 250273 INFO nova.compute.manager [-] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:28:56 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:28:56 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:28:56 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:28:56 np0005593232 nova_compute[250269]: 2026-01-23 10:28:56.213 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:56 np0005593232 podman[365728]: 2026-01-23 10:28:56.611981702 +0000 UTC m=+0.083346000 container create 0097b02b5b91f5618f2c4ce248a081ba2d8bcbacdca48abaad03ca7fbedf3869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_gould, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 23 05:28:56 np0005593232 systemd[1]: Started libpod-conmon-0097b02b5b91f5618f2c4ce248a081ba2d8bcbacdca48abaad03ca7fbedf3869.scope.
Jan 23 05:28:56 np0005593232 podman[365728]: 2026-01-23 10:28:56.578067278 +0000 UTC m=+0.049431646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:28:56 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:28:56 np0005593232 podman[365728]: 2026-01-23 10:28:56.729726519 +0000 UTC m=+0.201090807 container init 0097b02b5b91f5618f2c4ce248a081ba2d8bcbacdca48abaad03ca7fbedf3869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_gould, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 05:28:56 np0005593232 podman[365728]: 2026-01-23 10:28:56.745690173 +0000 UTC m=+0.217054451 container start 0097b02b5b91f5618f2c4ce248a081ba2d8bcbacdca48abaad03ca7fbedf3869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 05:28:56 np0005593232 podman[365728]: 2026-01-23 10:28:56.750026316 +0000 UTC m=+0.221390584 container attach 0097b02b5b91f5618f2c4ce248a081ba2d8bcbacdca48abaad03ca7fbedf3869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_gould, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:28:56 np0005593232 zen_gould[365744]: 167 167
Jan 23 05:28:56 np0005593232 systemd[1]: libpod-0097b02b5b91f5618f2c4ce248a081ba2d8bcbacdca48abaad03ca7fbedf3869.scope: Deactivated successfully.
Jan 23 05:28:56 np0005593232 conmon[365744]: conmon 0097b02b5b91f5618f2c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0097b02b5b91f5618f2c4ce248a081ba2d8bcbacdca48abaad03ca7fbedf3869.scope/container/memory.events
Jan 23 05:28:56 np0005593232 podman[365728]: 2026-01-23 10:28:56.761573104 +0000 UTC m=+0.232937382 container died 0097b02b5b91f5618f2c4ce248a081ba2d8bcbacdca48abaad03ca7fbedf3869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:28:56 np0005593232 systemd[1]: var-lib-containers-storage-overlay-fcfe2ded3258c57f2f1d4d69e1ac45c9378fca0b311b859306b0e460b45c8209-merged.mount: Deactivated successfully.
Jan 23 05:28:56 np0005593232 podman[365728]: 2026-01-23 10:28:56.803857236 +0000 UTC m=+0.275221504 container remove 0097b02b5b91f5618f2c4ce248a081ba2d8bcbacdca48abaad03ca7fbedf3869 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_gould, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 05:28:56 np0005593232 systemd[1]: libpod-conmon-0097b02b5b91f5618f2c4ce248a081ba2d8bcbacdca48abaad03ca7fbedf3869.scope: Deactivated successfully.
Jan 23 05:28:57 np0005593232 podman[365768]: 2026-01-23 10:28:57.059679778 +0000 UTC m=+0.071319588 container create 479725a353a7af1efba4f6d16e3bab61591b2b3fe4ab0a71e50fdf3c03590017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 23 05:28:57 np0005593232 systemd[1]: Started libpod-conmon-479725a353a7af1efba4f6d16e3bab61591b2b3fe4ab0a71e50fdf3c03590017.scope.
Jan 23 05:28:57 np0005593232 podman[365768]: 2026-01-23 10:28:57.037629991 +0000 UTC m=+0.049269821 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:28:57 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:28:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac450ae307db566ef91bbb2a6bee3a1d38e3e98215d53d38a2000c5b8d25327/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:28:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac450ae307db566ef91bbb2a6bee3a1d38e3e98215d53d38a2000c5b8d25327/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:28:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac450ae307db566ef91bbb2a6bee3a1d38e3e98215d53d38a2000c5b8d25327/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:28:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac450ae307db566ef91bbb2a6bee3a1d38e3e98215d53d38a2000c5b8d25327/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:28:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ac450ae307db566ef91bbb2a6bee3a1d38e3e98215d53d38a2000c5b8d25327/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:28:57 np0005593232 podman[365768]: 2026-01-23 10:28:57.160690949 +0000 UTC m=+0.172330759 container init 479725a353a7af1efba4f6d16e3bab61591b2b3fe4ab0a71e50fdf3c03590017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_khayyam, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:28:57 np0005593232 podman[365768]: 2026-01-23 10:28:57.173506783 +0000 UTC m=+0.185146593 container start 479725a353a7af1efba4f6d16e3bab61591b2b3fe4ab0a71e50fdf3c03590017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_khayyam, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 05:28:57 np0005593232 podman[365768]: 2026-01-23 10:28:57.177852697 +0000 UTC m=+0.189492507 container attach 479725a353a7af1efba4f6d16e3bab61591b2b3fe4ab0a71e50fdf3c03590017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:28:57 np0005593232 nova_compute[250269]: 2026-01-23 10:28:57.310 250273 DEBUG nova.compute.manager [None req-56ba426b-a957-491f-a407-1a18ce52075c - - - - - -] [instance: ae979986-7780-443a-afbc-6b4be8f71da1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:28:57 np0005593232 nova_compute[250269]: 2026-01-23 10:28:57.474 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:57.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3002: 321 pgs: 321 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 48 KiB/s rd, 21 KiB/s wr, 61 op/s
Jan 23 05:28:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:28:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:28:58.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:28:58 np0005593232 nice_khayyam[365809]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:28:58 np0005593232 nice_khayyam[365809]: --> relative data size: 1.0
Jan 23 05:28:58 np0005593232 nice_khayyam[365809]: --> All data devices are unavailable
Jan 23 05:28:58 np0005593232 systemd[1]: libpod-479725a353a7af1efba4f6d16e3bab61591b2b3fe4ab0a71e50fdf3c03590017.scope: Deactivated successfully.
Jan 23 05:28:58 np0005593232 podman[365768]: 2026-01-23 10:28:58.154857397 +0000 UTC m=+1.166497207 container died 479725a353a7af1efba4f6d16e3bab61591b2b3fe4ab0a71e50fdf3c03590017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 23 05:28:58 np0005593232 systemd[1]: var-lib-containers-storage-overlay-8ac450ae307db566ef91bbb2a6bee3a1d38e3e98215d53d38a2000c5b8d25327-merged.mount: Deactivated successfully.
Jan 23 05:28:58 np0005593232 podman[365768]: 2026-01-23 10:28:58.225793723 +0000 UTC m=+1.237433543 container remove 479725a353a7af1efba4f6d16e3bab61591b2b3fe4ab0a71e50fdf3c03590017 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_khayyam, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 23 05:28:58 np0005593232 systemd[1]: libpod-conmon-479725a353a7af1efba4f6d16e3bab61591b2b3fe4ab0a71e50fdf3c03590017.scope: Deactivated successfully.
Jan 23 05:28:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:28:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e331 do_prune osdmap full prune enabled
Jan 23 05:28:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e332 e332: 3 total, 3 up, 3 in
Jan 23 05:28:58 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e332: 3 total, 3 up, 3 in
Jan 23 05:28:58 np0005593232 podman[366003]: 2026-01-23 10:28:58.978011994 +0000 UTC m=+0.048200401 container create e63ec12ea5b1736a39b5fce519f8024e845d5b9a43745107094a99a562ae9bc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 23 05:28:59 np0005593232 systemd[1]: Started libpod-conmon-e63ec12ea5b1736a39b5fce519f8024e845d5b9a43745107094a99a562ae9bc4.scope.
Jan 23 05:28:59 np0005593232 podman[366003]: 2026-01-23 10:28:58.952074706 +0000 UTC m=+0.022263133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:28:59 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:28:59 np0005593232 podman[366003]: 2026-01-23 10:28:59.073290232 +0000 UTC m=+0.143478679 container init e63ec12ea5b1736a39b5fce519f8024e845d5b9a43745107094a99a562ae9bc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hugle, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 23 05:28:59 np0005593232 podman[366003]: 2026-01-23 10:28:59.082879824 +0000 UTC m=+0.153068231 container start e63ec12ea5b1736a39b5fce519f8024e845d5b9a43745107094a99a562ae9bc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hugle, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 05:28:59 np0005593232 podman[366003]: 2026-01-23 10:28:59.086528028 +0000 UTC m=+0.156716455 container attach e63ec12ea5b1736a39b5fce519f8024e845d5b9a43745107094a99a562ae9bc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 05:28:59 np0005593232 cool_hugle[366019]: 167 167
Jan 23 05:28:59 np0005593232 systemd[1]: libpod-e63ec12ea5b1736a39b5fce519f8024e845d5b9a43745107094a99a562ae9bc4.scope: Deactivated successfully.
Jan 23 05:28:59 np0005593232 podman[366003]: 2026-01-23 10:28:59.089143202 +0000 UTC m=+0.159331629 container died e63ec12ea5b1736a39b5fce519f8024e845d5b9a43745107094a99a562ae9bc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hugle, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Jan 23 05:28:59 np0005593232 systemd[1]: var-lib-containers-storage-overlay-bf8a8538fd7411147bb938abfb851ff5ba7a3bcdec02ac2a87117ca97c0b5da3-merged.mount: Deactivated successfully.
Jan 23 05:28:59 np0005593232 podman[366003]: 2026-01-23 10:28:59.127923405 +0000 UTC m=+0.198111822 container remove e63ec12ea5b1736a39b5fce519f8024e845d5b9a43745107094a99a562ae9bc4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:28:59 np0005593232 systemd[1]: libpod-conmon-e63ec12ea5b1736a39b5fce519f8024e845d5b9a43745107094a99a562ae9bc4.scope: Deactivated successfully.
Jan 23 05:28:59 np0005593232 podman[366042]: 2026-01-23 10:28:59.361838753 +0000 UTC m=+0.056890258 container create a80c307b926fee103be7afafd99c361939cc8a39c2338cc265b93ed567c1bd34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shaw, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 05:28:59 np0005593232 systemd[1]: Started libpod-conmon-a80c307b926fee103be7afafd99c361939cc8a39c2338cc265b93ed567c1bd34.scope.
Jan 23 05:28:59 np0005593232 podman[366042]: 2026-01-23 10:28:59.335568417 +0000 UTC m=+0.030620012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:28:59 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:28:59 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf10e76246710e0562d671df3f70f4b8e80b34316b1a33fb015982ffb3617cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:28:59 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf10e76246710e0562d671df3f70f4b8e80b34316b1a33fb015982ffb3617cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:28:59 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf10e76246710e0562d671df3f70f4b8e80b34316b1a33fb015982ffb3617cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:28:59 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cf10e76246710e0562d671df3f70f4b8e80b34316b1a33fb015982ffb3617cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:28:59 np0005593232 podman[366042]: 2026-01-23 10:28:59.477541522 +0000 UTC m=+0.172593137 container init a80c307b926fee103be7afafd99c361939cc8a39c2338cc265b93ed567c1bd34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 05:28:59 np0005593232 podman[366042]: 2026-01-23 10:28:59.486340422 +0000 UTC m=+0.181391947 container start a80c307b926fee103be7afafd99c361939cc8a39c2338cc265b93ed567c1bd34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:28:59 np0005593232 podman[366042]: 2026-01-23 10:28:59.490393387 +0000 UTC m=+0.185444942 container attach a80c307b926fee103be7afafd99c361939cc8a39c2338cc265b93ed567c1bd34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shaw, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:28:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:28:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:28:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:28:59.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:28:59 np0005593232 nova_compute[250269]: 2026-01-23 10:28:59.858 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:28:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3004: 321 pgs: 321 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 21 KiB/s wr, 39 op/s
Jan 23 05:29:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:29:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:00.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:29:00 np0005593232 nova_compute[250269]: 2026-01-23 10:29:00.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]: {
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:    "0": [
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:        {
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:            "devices": [
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:                "/dev/loop3"
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:            ],
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:            "lv_name": "ceph_lv0",
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:            "lv_size": "7511998464",
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:            "name": "ceph_lv0",
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:            "tags": {
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:                "ceph.cluster_name": "ceph",
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:                "ceph.crush_device_class": "",
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:                "ceph.encrypted": "0",
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:                "ceph.osd_id": "0",
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:                "ceph.type": "block",
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:                "ceph.vdo": "0"
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:            },
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:            "type": "block",
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:            "vg_name": "ceph_vg0"
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:        }
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]:    ]
Jan 23 05:29:00 np0005593232 trusting_shaw[366059]: }
Jan 23 05:29:00 np0005593232 systemd[1]: libpod-a80c307b926fee103be7afafd99c361939cc8a39c2338cc265b93ed567c1bd34.scope: Deactivated successfully.
Jan 23 05:29:00 np0005593232 podman[366042]: 2026-01-23 10:29:00.41826476 +0000 UTC m=+1.113316315 container died a80c307b926fee103be7afafd99c361939cc8a39c2338cc265b93ed567c1bd34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 05:29:00 np0005593232 systemd[1]: var-lib-containers-storage-overlay-8cf10e76246710e0562d671df3f70f4b8e80b34316b1a33fb015982ffb3617cd-merged.mount: Deactivated successfully.
Jan 23 05:29:00 np0005593232 podman[366042]: 2026-01-23 10:29:00.488543727 +0000 UTC m=+1.183595242 container remove a80c307b926fee103be7afafd99c361939cc8a39c2338cc265b93ed567c1bd34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:29:00 np0005593232 systemd[1]: libpod-conmon-a80c307b926fee103be7afafd99c361939cc8a39c2338cc265b93ed567c1bd34.scope: Deactivated successfully.
Jan 23 05:29:01 np0005593232 podman[366224]: 2026-01-23 10:29:01.147635081 +0000 UTC m=+0.048861420 container create 7ce10a98f85b16258c048c78ce815643985dfefc1d96ea698ee72cffe86f3c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:29:01 np0005593232 systemd[1]: Started libpod-conmon-7ce10a98f85b16258c048c78ce815643985dfefc1d96ea698ee72cffe86f3c92.scope.
Jan 23 05:29:01 np0005593232 podman[366224]: 2026-01-23 10:29:01.12541669 +0000 UTC m=+0.026643019 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:29:01 np0005593232 nova_compute[250269]: 2026-01-23 10:29:01.217 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:01 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:29:01 np0005593232 podman[366224]: 2026-01-23 10:29:01.251773811 +0000 UTC m=+0.153000150 container init 7ce10a98f85b16258c048c78ce815643985dfefc1d96ea698ee72cffe86f3c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jennings, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:29:01 np0005593232 podman[366224]: 2026-01-23 10:29:01.270397461 +0000 UTC m=+0.171623810 container start 7ce10a98f85b16258c048c78ce815643985dfefc1d96ea698ee72cffe86f3c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 23 05:29:01 np0005593232 podman[366224]: 2026-01-23 10:29:01.27459141 +0000 UTC m=+0.175817949 container attach 7ce10a98f85b16258c048c78ce815643985dfefc1d96ea698ee72cffe86f3c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 23 05:29:01 np0005593232 practical_jennings[366241]: 167 167
Jan 23 05:29:01 np0005593232 systemd[1]: libpod-7ce10a98f85b16258c048c78ce815643985dfefc1d96ea698ee72cffe86f3c92.scope: Deactivated successfully.
Jan 23 05:29:01 np0005593232 podman[366224]: 2026-01-23 10:29:01.28127758 +0000 UTC m=+0.182503899 container died 7ce10a98f85b16258c048c78ce815643985dfefc1d96ea698ee72cffe86f3c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jennings, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:29:01 np0005593232 nova_compute[250269]: 2026-01-23 10:29:01.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:29:01 np0005593232 nova_compute[250269]: 2026-01-23 10:29:01.294 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 23 05:29:01 np0005593232 systemd[1]: var-lib-containers-storage-overlay-34ea3e19d5962576119431fe782670dd2d56acd1bd1b5c3880589c713214efe3-merged.mount: Deactivated successfully.
Jan 23 05:29:01 np0005593232 podman[366224]: 2026-01-23 10:29:01.326124254 +0000 UTC m=+0.227350563 container remove 7ce10a98f85b16258c048c78ce815643985dfefc1d96ea698ee72cffe86f3c92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 05:29:01 np0005593232 systemd[1]: libpod-conmon-7ce10a98f85b16258c048c78ce815643985dfefc1d96ea698ee72cffe86f3c92.scope: Deactivated successfully.
Jan 23 05:29:01 np0005593232 podman[366265]: 2026-01-23 10:29:01.546323243 +0000 UTC m=+0.056566499 container create a264953c846d89718b2120fed4a91c296da4b776230a0ea06dbb1b460273afa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bassi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 05:29:01 np0005593232 systemd[1]: Started libpod-conmon-a264953c846d89718b2120fed4a91c296da4b776230a0ea06dbb1b460273afa7.scope.
Jan 23 05:29:01 np0005593232 podman[366265]: 2026-01-23 10:29:01.521288382 +0000 UTC m=+0.031531618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:29:01 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:29:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdd89f129b6233bfce0a334b1c95585bfa9def6b794e3a68848c1c882791ee0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:29:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdd89f129b6233bfce0a334b1c95585bfa9def6b794e3a68848c1c882791ee0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:29:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdd89f129b6233bfce0a334b1c95585bfa9def6b794e3a68848c1c882791ee0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:29:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdd89f129b6233bfce0a334b1c95585bfa9def6b794e3a68848c1c882791ee0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:29:01 np0005593232 podman[366265]: 2026-01-23 10:29:01.644505174 +0000 UTC m=+0.154748730 container init a264953c846d89718b2120fed4a91c296da4b776230a0ea06dbb1b460273afa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bassi, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 05:29:01 np0005593232 podman[366265]: 2026-01-23 10:29:01.653735296 +0000 UTC m=+0.163978512 container start a264953c846d89718b2120fed4a91c296da4b776230a0ea06dbb1b460273afa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 05:29:01 np0005593232 podman[366265]: 2026-01-23 10:29:01.657508694 +0000 UTC m=+0.167751990 container attach a264953c846d89718b2120fed4a91c296da4b776230a0ea06dbb1b460273afa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bassi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 23 05:29:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:01.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3005: 321 pgs: 321 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 37 KiB/s rd, 21 KiB/s wr, 39 op/s
Jan 23 05:29:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:29:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:02.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:29:02 np0005593232 thirsty_bassi[366281]: {
Jan 23 05:29:02 np0005593232 thirsty_bassi[366281]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:29:02 np0005593232 thirsty_bassi[366281]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:29:02 np0005593232 thirsty_bassi[366281]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:29:02 np0005593232 thirsty_bassi[366281]:        "osd_id": 0,
Jan 23 05:29:02 np0005593232 thirsty_bassi[366281]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:29:02 np0005593232 thirsty_bassi[366281]:        "type": "bluestore"
Jan 23 05:29:02 np0005593232 thirsty_bassi[366281]:    }
Jan 23 05:29:02 np0005593232 thirsty_bassi[366281]: }
Jan 23 05:29:02 np0005593232 systemd[1]: libpod-a264953c846d89718b2120fed4a91c296da4b776230a0ea06dbb1b460273afa7.scope: Deactivated successfully.
Jan 23 05:29:02 np0005593232 podman[366265]: 2026-01-23 10:29:02.618163519 +0000 UTC m=+1.128406735 container died a264953c846d89718b2120fed4a91c296da4b776230a0ea06dbb1b460273afa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:29:02 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ecdd89f129b6233bfce0a334b1c95585bfa9def6b794e3a68848c1c882791ee0-merged.mount: Deactivated successfully.
Jan 23 05:29:02 np0005593232 podman[366265]: 2026-01-23 10:29:02.684413592 +0000 UTC m=+1.194656808 container remove a264953c846d89718b2120fed4a91c296da4b776230a0ea06dbb1b460273afa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 23 05:29:02 np0005593232 systemd[1]: libpod-conmon-a264953c846d89718b2120fed4a91c296da4b776230a0ea06dbb1b460273afa7.scope: Deactivated successfully.
Jan 23 05:29:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:29:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:29:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:29:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:29:02 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 34f90337-02b2-40c9-92d9-591bea0aa92b does not exist
Jan 23 05:29:02 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b1ad6f3f-44bb-4024-bb5b-eb0fc4d57a6d does not exist
Jan 23 05:29:02 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev e1601ae7-4958-4fee-93d4-2077c372c0c7 does not exist
Jan 23 05:29:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:29:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:29:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:03.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:29:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:29:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:29:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3006: 321 pgs: 321 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 8.1 KiB/s wr, 30 op/s
Jan 23 05:29:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:04.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:04 np0005593232 nova_compute[250269]: 2026-01-23 10:29:04.288 250273 DEBUG nova.compute.manager [req-cc2c31de-ca8e-4e2d-8a82-17ae7de60b70 req-0cf1bae8-1dfe-4c6c-9e71-c1a49438aa81 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Received event network-changed-4db60331-fd91-4563-aa5e-c8bb81879499 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:29:04 np0005593232 nova_compute[250269]: 2026-01-23 10:29:04.289 250273 DEBUG nova.compute.manager [req-cc2c31de-ca8e-4e2d-8a82-17ae7de60b70 req-0cf1bae8-1dfe-4c6c-9e71-c1a49438aa81 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Refreshing instance network info cache due to event network-changed-4db60331-fd91-4563-aa5e-c8bb81879499. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:29:04 np0005593232 nova_compute[250269]: 2026-01-23 10:29:04.289 250273 DEBUG oslo_concurrency.lockutils [req-cc2c31de-ca8e-4e2d-8a82-17ae7de60b70 req-0cf1bae8-1dfe-4c6c-9e71-c1a49438aa81 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-13cce6b0-b75f-41bf-90dd-1b5cc5d2344b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:29:04 np0005593232 nova_compute[250269]: 2026-01-23 10:29:04.289 250273 DEBUG oslo_concurrency.lockutils [req-cc2c31de-ca8e-4e2d-8a82-17ae7de60b70 req-0cf1bae8-1dfe-4c6c-9e71-c1a49438aa81 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-13cce6b0-b75f-41bf-90dd-1b5cc5d2344b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:29:04 np0005593232 nova_compute[250269]: 2026-01-23 10:29:04.290 250273 DEBUG nova.network.neutron [req-cc2c31de-ca8e-4e2d-8a82-17ae7de60b70 req-0cf1bae8-1dfe-4c6c-9e71-c1a49438aa81 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Refreshing network info cache for port 4db60331-fd91-4563-aa5e-c8bb81879499 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:29:04 np0005593232 nova_compute[250269]: 2026-01-23 10:29:04.859 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:05 np0005593232 nova_compute[250269]: 2026-01-23 10:29:05.101 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:29:05 np0005593232 nova_compute[250269]: 2026-01-23 10:29:05.102 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:29:05 np0005593232 nova_compute[250269]: 2026-01-23 10:29:05.285 250273 DEBUG oslo_concurrency.lockutils [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Acquiring lock "13cce6b0-b75f-41bf-90dd-1b5cc5d2344b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:29:05 np0005593232 nova_compute[250269]: 2026-01-23 10:29:05.285 250273 DEBUG oslo_concurrency.lockutils [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Lock "13cce6b0-b75f-41bf-90dd-1b5cc5d2344b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:29:05 np0005593232 nova_compute[250269]: 2026-01-23 10:29:05.287 250273 DEBUG oslo_concurrency.lockutils [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Acquiring lock "13cce6b0-b75f-41bf-90dd-1b5cc5d2344b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:29:05 np0005593232 nova_compute[250269]: 2026-01-23 10:29:05.287 250273 DEBUG oslo_concurrency.lockutils [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Lock "13cce6b0-b75f-41bf-90dd-1b5cc5d2344b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:29:05 np0005593232 nova_compute[250269]: 2026-01-23 10:29:05.288 250273 DEBUG oslo_concurrency.lockutils [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Lock "13cce6b0-b75f-41bf-90dd-1b5cc5d2344b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:29:05 np0005593232 nova_compute[250269]: 2026-01-23 10:29:05.290 250273 INFO nova.compute.manager [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Terminating instance#033[00m
Jan 23 05:29:05 np0005593232 nova_compute[250269]: 2026-01-23 10:29:05.292 250273 DEBUG nova.compute.manager [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:29:05 np0005593232 kernel: tap4db60331-fd (unregistering): left promiscuous mode
Jan 23 05:29:05 np0005593232 NetworkManager[49057]: <info>  [1769164145.3924] device (tap4db60331-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:29:05 np0005593232 ovn_controller[151001]: 2026-01-23T10:29:05Z|00688|binding|INFO|Releasing lport 4db60331-fd91-4563-aa5e-c8bb81879499 from this chassis (sb_readonly=0)
Jan 23 05:29:05 np0005593232 ovn_controller[151001]: 2026-01-23T10:29:05Z|00689|binding|INFO|Setting lport 4db60331-fd91-4563-aa5e-c8bb81879499 down in Southbound
Jan 23 05:29:05 np0005593232 nova_compute[250269]: 2026-01-23 10:29:05.407 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:05 np0005593232 ovn_controller[151001]: 2026-01-23T10:29:05Z|00690|binding|INFO|Removing iface tap4db60331-fd ovn-installed in OVS
Jan 23 05:29:05 np0005593232 nova_compute[250269]: 2026-01-23 10:29:05.411 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:05 np0005593232 nova_compute[250269]: 2026-01-23 10:29:05.448 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:05 np0005593232 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d000000ac.scope: Deactivated successfully.
Jan 23 05:29:05 np0005593232 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d000000ac.scope: Consumed 16.412s CPU time.
Jan 23 05:29:05 np0005593232 systemd-machined[215836]: Machine qemu-77-instance-000000ac terminated.
Jan 23 05:29:05 np0005593232 nova_compute[250269]: 2026-01-23 10:29:05.520 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:05 np0005593232 nova_compute[250269]: 2026-01-23 10:29:05.525 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:05 np0005593232 nova_compute[250269]: 2026-01-23 10:29:05.544 250273 INFO nova.virt.libvirt.driver [-] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Instance destroyed successfully.#033[00m
Jan 23 05:29:05 np0005593232 nova_compute[250269]: 2026-01-23 10:29:05.545 250273 DEBUG nova.objects.instance [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Lazy-loading 'resources' on Instance uuid 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:29:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:05.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3007: 321 pgs: 321 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.4 KiB/s rd, 7.3 KiB/s wr, 1 op/s
Jan 23 05:29:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:29:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:06.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:29:06 np0005593232 nova_compute[250269]: 2026-01-23 10:29:06.222 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:06 np0005593232 nova_compute[250269]: 2026-01-23 10:29:06.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:29:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:06.523 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:f7:86 10.100.0.11'], port_security=['fa:16:3e:e3:f7:86 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '13cce6b0-b75f-41bf-90dd-1b5cc5d2344b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8887855ebd545a6bdab3b6a18c19dd9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '07b099b4-00b4-46d9-9c83-f076e9832801 48acf45e-b74f-4870-9a4a-f3c4cef0ea2c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6b86d0cc-189d-499d-83d5-aecc5111cdb7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=4db60331-fd91-4563-aa5e-c8bb81879499) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:29:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:06.526 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 4db60331-fd91-4563-aa5e-c8bb81879499 in datapath a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad unbound from our chassis#033[00m
Jan 23 05:29:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:06.530 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:29:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:06.533 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[754b4b06-8def-4cd9-a99a-5526cc7dece1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:29:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:06.535 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad namespace which is not needed anymore#033[00m
Jan 23 05:29:06 np0005593232 neutron-haproxy-ovnmeta-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad[365008]: [NOTICE]   (365015) : haproxy version is 2.8.14-c23fe91
Jan 23 05:29:06 np0005593232 neutron-haproxy-ovnmeta-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad[365008]: [NOTICE]   (365015) : path to executable is /usr/sbin/haproxy
Jan 23 05:29:06 np0005593232 neutron-haproxy-ovnmeta-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad[365008]: [WARNING]  (365015) : Exiting Master process...
Jan 23 05:29:06 np0005593232 neutron-haproxy-ovnmeta-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad[365008]: [ALERT]    (365015) : Current worker (365017) exited with code 143 (Terminated)
Jan 23 05:29:06 np0005593232 neutron-haproxy-ovnmeta-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad[365008]: [WARNING]  (365015) : All workers exited. Exiting... (0)
Jan 23 05:29:06 np0005593232 systemd[1]: libpod-7608666a045420de3468231ee09dfc6d02b0078b33bcc3e904b12ca341d259dd.scope: Deactivated successfully.
Jan 23 05:29:06 np0005593232 podman[366402]: 2026-01-23 10:29:06.70640757 +0000 UTC m=+0.051459113 container died 7608666a045420de3468231ee09dfc6d02b0078b33bcc3e904b12ca341d259dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 23 05:29:06 np0005593232 nova_compute[250269]: 2026-01-23 10:29:06.740 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:29:06 np0005593232 nova_compute[250269]: 2026-01-23 10:29:06.742 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:29:06 np0005593232 nova_compute[250269]: 2026-01-23 10:29:06.743 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:29:06 np0005593232 nova_compute[250269]: 2026-01-23 10:29:06.744 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:29:06 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7608666a045420de3468231ee09dfc6d02b0078b33bcc3e904b12ca341d259dd-userdata-shm.mount: Deactivated successfully.
Jan 23 05:29:06 np0005593232 nova_compute[250269]: 2026-01-23 10:29:06.745 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:29:06 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7a590bb8032e5f960385d26f03eabc3404a02748216c292ca4da59a4a767f8b5-merged.mount: Deactivated successfully.
Jan 23 05:29:06 np0005593232 podman[366402]: 2026-01-23 10:29:06.761372623 +0000 UTC m=+0.106424166 container cleanup 7608666a045420de3468231ee09dfc6d02b0078b33bcc3e904b12ca341d259dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 23 05:29:06 np0005593232 systemd[1]: libpod-conmon-7608666a045420de3468231ee09dfc6d02b0078b33bcc3e904b12ca341d259dd.scope: Deactivated successfully.
Jan 23 05:29:06 np0005593232 nova_compute[250269]: 2026-01-23 10:29:06.801 250273 DEBUG nova.virt.libvirt.vif [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:27:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-544089771-access_point-180081163',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-544089771-access_point-180081163',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-544089771-acc',id=172,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBITO660aaiIUEAYb/nIuqp5SSLXHJpCV/HBAbegUJr3DnRFdwDI34p9mLd+os4mF8/ldbnLSdChV+EEJIvwwqIGTsOASPzz+4e5dcKF7BKU9NU8cHs4HfSeTHXa5VE87Ig==',key_name='tempest-TestSecurityGroupsBasicOps-1528247556',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:28:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d8887855ebd545a6bdab3b6a18c19dd9',ramdisk_id='',reservation_id='r-d1vtnkr2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-544089771',owner_user_name='tempest-TestSecurityGroupsBasicOps-544089771-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:28:31Z,user_data=None,user_id='3c17acb50b0d49d4b062d68aa88d1f7f',uuid=13cce6b0-b75f-41bf-90dd-1b5cc5d2344b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4db60331-fd91-4563-aa5e-c8bb81879499", "address": "fa:16:3e:e3:f7:86", "network": {"id": "a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad", "bridge": "br-int", "label": "tempest-network-smoke--274742252", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8887855ebd545a6bdab3b6a18c19dd9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db60331-fd", "ovs_interfaceid": "4db60331-fd91-4563-aa5e-c8bb81879499", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:29:06 np0005593232 nova_compute[250269]: 2026-01-23 10:29:06.803 250273 DEBUG nova.network.os_vif_util [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Converting VIF {"id": "4db60331-fd91-4563-aa5e-c8bb81879499", "address": "fa:16:3e:e3:f7:86", "network": {"id": "a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad", "bridge": "br-int", "label": "tempest-network-smoke--274742252", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8887855ebd545a6bdab3b6a18c19dd9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db60331-fd", "ovs_interfaceid": "4db60331-fd91-4563-aa5e-c8bb81879499", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:29:06 np0005593232 nova_compute[250269]: 2026-01-23 10:29:06.804 250273 DEBUG nova.network.os_vif_util [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e3:f7:86,bridge_name='br-int',has_traffic_filtering=True,id=4db60331-fd91-4563-aa5e-c8bb81879499,network=Network(a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4db60331-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:29:06 np0005593232 nova_compute[250269]: 2026-01-23 10:29:06.805 250273 DEBUG os_vif [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:f7:86,bridge_name='br-int',has_traffic_filtering=True,id=4db60331-fd91-4563-aa5e-c8bb81879499,network=Network(a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4db60331-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:29:06 np0005593232 nova_compute[250269]: 2026-01-23 10:29:06.807 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:06 np0005593232 nova_compute[250269]: 2026-01-23 10:29:06.807 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4db60331-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:29:06 np0005593232 nova_compute[250269]: 2026-01-23 10:29:06.847 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:06 np0005593232 nova_compute[250269]: 2026-01-23 10:29:06.852 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:29:06 np0005593232 nova_compute[250269]: 2026-01-23 10:29:06.859 250273 INFO os_vif [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:f7:86,bridge_name='br-int',has_traffic_filtering=True,id=4db60331-fd91-4563-aa5e-c8bb81879499,network=Network(a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4db60331-fd')#033[00m
Jan 23 05:29:06 np0005593232 podman[366431]: 2026-01-23 10:29:06.868134727 +0000 UTC m=+0.079327746 container remove 7608666a045420de3468231ee09dfc6d02b0078b33bcc3e904b12ca341d259dd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:29:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:06.877 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[dadc92c5-e861-4214-a71b-e195f00a1918]: (4, ('Fri Jan 23 10:29:06 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad (7608666a045420de3468231ee09dfc6d02b0078b33bcc3e904b12ca341d259dd)\n7608666a045420de3468231ee09dfc6d02b0078b33bcc3e904b12ca341d259dd\nFri Jan 23 10:29:06 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad (7608666a045420de3468231ee09dfc6d02b0078b33bcc3e904b12ca341d259dd)\n7608666a045420de3468231ee09dfc6d02b0078b33bcc3e904b12ca341d259dd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:29:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:06.880 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ffed70-4dad-4c6a-9665-305610731bb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:29:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:06.883 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa7bf86f8-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:29:06 np0005593232 kernel: tapa7bf86f8-f0: left promiscuous mode
Jan 23 05:29:06 np0005593232 nova_compute[250269]: 2026-01-23 10:29:06.889 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:06 np0005593232 nova_compute[250269]: 2026-01-23 10:29:06.901 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:06.905 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c90c8d-af99-4ca8-9482-4d2233cacb2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:29:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:06.928 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7d68ccbe-90a0-41f7-997f-30139ac78436]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:29:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:06.930 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c87e1bce-fbf0-4928-896d-a316e6eb7452]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:29:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:06.945 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b5554db6-26c4-44ba-a76a-787898ac4dc2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 800239, 'reachable_time': 17215, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366484, 'error': None, 'target': 'ovnmeta-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:29:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:06.948 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:29:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:06.948 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[e04c5f64-367d-4987-8be2-7f23b99f54ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:29:06 np0005593232 systemd[1]: run-netns-ovnmeta\x2da7bf86f8\x2df670\x2d4e9e\x2db3d7\x2dd7a3c130d2ad.mount: Deactivated successfully.
Jan 23 05:29:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:29:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3214560071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:29:07 np0005593232 nova_compute[250269]: 2026-01-23 10:29:07.201 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:29:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:29:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:29:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:29:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:29:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:29:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:29:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:29:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:07.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:29:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3008: 321 pgs: 321 active+clean; 279 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 6.8 KiB/s rd, 4.4 KiB/s wr, 1 op/s
Jan 23 05:29:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:08.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:08 np0005593232 nova_compute[250269]: 2026-01-23 10:29:08.143 250273 INFO nova.virt.libvirt.driver [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Deleting instance files /var/lib/nova/instances/13cce6b0-b75f-41bf-90dd-1b5cc5d2344b_del#033[00m
Jan 23 05:29:08 np0005593232 nova_compute[250269]: 2026-01-23 10:29:08.144 250273 INFO nova.virt.libvirt.driver [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Deletion of /var/lib/nova/instances/13cce6b0-b75f-41bf-90dd-1b5cc5d2344b_del complete#033[00m
Jan 23 05:29:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:29:09 np0005593232 nova_compute[250269]: 2026-01-23 10:29:09.120 250273 INFO nova.compute.manager [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Took 3.83 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:29:09 np0005593232 nova_compute[250269]: 2026-01-23 10:29:09.120 250273 DEBUG oslo.service.loopingcall [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:29:09 np0005593232 nova_compute[250269]: 2026-01-23 10:29:09.121 250273 DEBUG nova.compute.manager [-] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:29:09 np0005593232 nova_compute[250269]: 2026-01-23 10:29:09.121 250273 DEBUG nova.network.neutron [-] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:29:09 np0005593232 nova_compute[250269]: 2026-01-23 10:29:09.132 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Error from libvirt while getting description of instance-000000ac: [Error Code 42] Domain not found: no domain with matching uuid '13cce6b0-b75f-41bf-90dd-1b5cc5d2344b' (instance-000000ac): libvirt.libvirtError: Domain not found: no domain with matching uuid '13cce6b0-b75f-41bf-90dd-1b5cc5d2344b' (instance-000000ac)#033[00m
Jan 23 05:29:09 np0005593232 nova_compute[250269]: 2026-01-23 10:29:09.265 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:29:09 np0005593232 nova_compute[250269]: 2026-01-23 10:29:09.266 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4230MB free_disk=20.897132873535156GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:29:09 np0005593232 nova_compute[250269]: 2026-01-23 10:29:09.266 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:29:09 np0005593232 nova_compute[250269]: 2026-01-23 10:29:09.267 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:29:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:09.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:09 np0005593232 nova_compute[250269]: 2026-01-23 10:29:09.863 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3009: 321 pgs: 321 active+clean; 239 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 4.2 KiB/s wr, 15 op/s
Jan 23 05:29:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:10.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:11 np0005593232 nova_compute[250269]: 2026-01-23 10:29:11.277 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:29:11 np0005593232 nova_compute[250269]: 2026-01-23 10:29:11.278 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:29:11 np0005593232 nova_compute[250269]: 2026-01-23 10:29:11.278 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:29:11 np0005593232 nova_compute[250269]: 2026-01-23 10:29:11.324 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:29:11 np0005593232 nova_compute[250269]: 2026-01-23 10:29:11.667 250273 DEBUG nova.compute.manager [req-8a662fbe-d0e0-4aac-bc78-556ca3414ebf req-7bbe55b1-9690-4ef8-b5d2-69f8883e4c03 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Received event network-vif-unplugged-4db60331-fd91-4563-aa5e-c8bb81879499 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:29:11 np0005593232 nova_compute[250269]: 2026-01-23 10:29:11.668 250273 DEBUG oslo_concurrency.lockutils [req-8a662fbe-d0e0-4aac-bc78-556ca3414ebf req-7bbe55b1-9690-4ef8-b5d2-69f8883e4c03 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "13cce6b0-b75f-41bf-90dd-1b5cc5d2344b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:29:11 np0005593232 nova_compute[250269]: 2026-01-23 10:29:11.668 250273 DEBUG oslo_concurrency.lockutils [req-8a662fbe-d0e0-4aac-bc78-556ca3414ebf req-7bbe55b1-9690-4ef8-b5d2-69f8883e4c03 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "13cce6b0-b75f-41bf-90dd-1b5cc5d2344b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:29:11 np0005593232 nova_compute[250269]: 2026-01-23 10:29:11.668 250273 DEBUG oslo_concurrency.lockutils [req-8a662fbe-d0e0-4aac-bc78-556ca3414ebf req-7bbe55b1-9690-4ef8-b5d2-69f8883e4c03 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "13cce6b0-b75f-41bf-90dd-1b5cc5d2344b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:29:11 np0005593232 nova_compute[250269]: 2026-01-23 10:29:11.668 250273 DEBUG nova.compute.manager [req-8a662fbe-d0e0-4aac-bc78-556ca3414ebf req-7bbe55b1-9690-4ef8-b5d2-69f8883e4c03 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] No waiting events found dispatching network-vif-unplugged-4db60331-fd91-4563-aa5e-c8bb81879499 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:29:11 np0005593232 nova_compute[250269]: 2026-01-23 10:29:11.668 250273 DEBUG nova.compute.manager [req-8a662fbe-d0e0-4aac-bc78-556ca3414ebf req-7bbe55b1-9690-4ef8-b5d2-69f8883e4c03 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Received event network-vif-unplugged-4db60331-fd91-4563-aa5e-c8bb81879499 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:29:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:11.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:29:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3915844503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:29:11 np0005593232 nova_compute[250269]: 2026-01-23 10:29:11.767 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:29:11 np0005593232 nova_compute[250269]: 2026-01-23 10:29:11.774 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:29:11 np0005593232 nova_compute[250269]: 2026-01-23 10:29:11.849 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:11 np0005593232 nova_compute[250269]: 2026-01-23 10:29:11.875 250273 DEBUG nova.network.neutron [req-cc2c31de-ca8e-4e2d-8a82-17ae7de60b70 req-0cf1bae8-1dfe-4c6c-9e71-c1a49438aa81 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Updated VIF entry in instance network info cache for port 4db60331-fd91-4563-aa5e-c8bb81879499. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:29:11 np0005593232 nova_compute[250269]: 2026-01-23 10:29:11.876 250273 DEBUG nova.network.neutron [req-cc2c31de-ca8e-4e2d-8a82-17ae7de60b70 req-0cf1bae8-1dfe-4c6c-9e71-c1a49438aa81 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Updating instance_info_cache with network_info: [{"id": "4db60331-fd91-4563-aa5e-c8bb81879499", "address": "fa:16:3e:e3:f7:86", "network": {"id": "a7bf86f8-f670-4e9e-b3d7-d7a3c130d2ad", "bridge": "br-int", "label": "tempest-network-smoke--274742252", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8887855ebd545a6bdab3b6a18c19dd9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4db60331-fd", "ovs_interfaceid": "4db60331-fd91-4563-aa5e-c8bb81879499", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:29:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3010: 321 pgs: 321 active+clean; 239 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.3 KiB/s rd, 1.7 KiB/s wr, 13 op/s
Jan 23 05:29:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:12.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:12 np0005593232 nova_compute[250269]: 2026-01-23 10:29:12.209 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:29:12 np0005593232 nova_compute[250269]: 2026-01-23 10:29:12.657 250273 DEBUG oslo_concurrency.lockutils [req-cc2c31de-ca8e-4e2d-8a82-17ae7de60b70 req-0cf1bae8-1dfe-4c6c-9e71-c1a49438aa81 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-13cce6b0-b75f-41bf-90dd-1b5cc5d2344b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:29:12 np0005593232 nova_compute[250269]: 2026-01-23 10:29:12.660 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:29:12 np0005593232 nova_compute[250269]: 2026-01-23 10:29:12.660 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.393s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:29:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:29:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:13.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:13 np0005593232 nova_compute[250269]: 2026-01-23 10:29:13.801 250273 DEBUG nova.compute.manager [req-5b62c061-9860-4ebe-99e1-40897e8f84c0 req-20c8b8d9-8f73-4e64-9899-8ac68063618c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Received event network-vif-plugged-4db60331-fd91-4563-aa5e-c8bb81879499 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:29:13 np0005593232 nova_compute[250269]: 2026-01-23 10:29:13.802 250273 DEBUG oslo_concurrency.lockutils [req-5b62c061-9860-4ebe-99e1-40897e8f84c0 req-20c8b8d9-8f73-4e64-9899-8ac68063618c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "13cce6b0-b75f-41bf-90dd-1b5cc5d2344b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:29:13 np0005593232 nova_compute[250269]: 2026-01-23 10:29:13.802 250273 DEBUG oslo_concurrency.lockutils [req-5b62c061-9860-4ebe-99e1-40897e8f84c0 req-20c8b8d9-8f73-4e64-9899-8ac68063618c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "13cce6b0-b75f-41bf-90dd-1b5cc5d2344b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:29:13 np0005593232 nova_compute[250269]: 2026-01-23 10:29:13.802 250273 DEBUG oslo_concurrency.lockutils [req-5b62c061-9860-4ebe-99e1-40897e8f84c0 req-20c8b8d9-8f73-4e64-9899-8ac68063618c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "13cce6b0-b75f-41bf-90dd-1b5cc5d2344b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:29:13 np0005593232 nova_compute[250269]: 2026-01-23 10:29:13.803 250273 DEBUG nova.compute.manager [req-5b62c061-9860-4ebe-99e1-40897e8f84c0 req-20c8b8d9-8f73-4e64-9899-8ac68063618c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] No waiting events found dispatching network-vif-plugged-4db60331-fd91-4563-aa5e-c8bb81879499 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:29:13 np0005593232 nova_compute[250269]: 2026-01-23 10:29:13.803 250273 WARNING nova.compute.manager [req-5b62c061-9860-4ebe-99e1-40897e8f84c0 req-20c8b8d9-8f73-4e64-9899-8ac68063618c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Received unexpected event network-vif-plugged-4db60331-fd91-4563-aa5e-c8bb81879499 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:29:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3011: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 2.5 KiB/s wr, 27 op/s
Jan 23 05:29:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:29:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:14.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:29:14 np0005593232 nova_compute[250269]: 2026-01-23 10:29:14.865 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:15 np0005593232 nova_compute[250269]: 2026-01-23 10:29:15.661 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:29:15 np0005593232 nova_compute[250269]: 2026-01-23 10:29:15.661 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:29:15 np0005593232 nova_compute[250269]: 2026-01-23 10:29:15.661 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:29:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:15.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3012: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 05:29:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:29:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:16.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:29:16 np0005593232 nova_compute[250269]: 2026-01-23 10:29:16.883 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:17.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3013: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 05:29:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:18.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:29:18 np0005593232 nova_compute[250269]: 2026-01-23 10:29:18.623 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 23 05:29:18 np0005593232 nova_compute[250269]: 2026-01-23 10:29:18.623 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:29:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:19.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:19 np0005593232 nova_compute[250269]: 2026-01-23 10:29:19.867 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3014: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 05:29:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:20.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:20 np0005593232 podman[366569]: 2026-01-23 10:29:20.451954101 +0000 UTC m=+0.111740017 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 23 05:29:20 np0005593232 nova_compute[250269]: 2026-01-23 10:29:20.543 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769164145.5416067, 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:29:20 np0005593232 nova_compute[250269]: 2026-01-23 10:29:20.544 250273 INFO nova.compute.manager [-] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:29:20 np0005593232 nova_compute[250269]: 2026-01-23 10:29:20.648 250273 DEBUG nova.compute.manager [None req-cef05ac5-33d3-4578-b33a-c61624cb4517 - - - - - -] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:29:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:29:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:21.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:29:21 np0005593232 nova_compute[250269]: 2026-01-23 10:29:21.886 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3015: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.5 KiB/s rd, 852 B/s wr, 14 op/s
Jan 23 05:29:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:22.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:29:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:29:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:23.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:29:23 np0005593232 nova_compute[250269]: 2026-01-23 10:29:23.787 250273 DEBUG nova.network.neutron [-] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:29:23 np0005593232 nova_compute[250269]: 2026-01-23 10:29:23.794 250273 DEBUG nova.compute.manager [req-ba5b9e08-99e1-43ee-9ebe-eb36b76afb96 req-6e8c469b-ef30-41de-98f5-a456e3f71d44 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Received event network-vif-deleted-4db60331-fd91-4563-aa5e-c8bb81879499 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:29:23 np0005593232 nova_compute[250269]: 2026-01-23 10:29:23.795 250273 INFO nova.compute.manager [req-ba5b9e08-99e1-43ee-9ebe-eb36b76afb96 req-6e8c469b-ef30-41de-98f5-a456e3f71d44 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Neutron deleted interface 4db60331-fd91-4563-aa5e-c8bb81879499; detaching it from the instance and deleting it from the info cache#033[00m
Jan 23 05:29:23 np0005593232 nova_compute[250269]: 2026-01-23 10:29:23.795 250273 DEBUG nova.network.neutron [req-ba5b9e08-99e1-43ee-9ebe-eb36b76afb96 req-6e8c469b-ef30-41de-98f5-a456e3f71d44 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:29:23 np0005593232 nova_compute[250269]: 2026-01-23 10:29:23.883 250273 DEBUG nova.compute.manager [req-ba5b9e08-99e1-43ee-9ebe-eb36b76afb96 req-6e8c469b-ef30-41de-98f5-a456e3f71d44 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Detach interface failed, port_id=4db60331-fd91-4563-aa5e-c8bb81879499, reason: Instance 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 23 05:29:23 np0005593232 nova_compute[250269]: 2026-01-23 10:29:23.890 250273 INFO nova.compute.manager [-] [instance: 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b] Took 14.77 seconds to deallocate network for instance.#033[00m
Jan 23 05:29:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3016: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 9.5 KiB/s rd, 852 B/s wr, 14 op/s
Jan 23 05:29:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:29:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:24.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:29:24 np0005593232 nova_compute[250269]: 2026-01-23 10:29:24.214 250273 DEBUG oslo_concurrency.lockutils [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:29:24 np0005593232 nova_compute[250269]: 2026-01-23 10:29:24.214 250273 DEBUG oslo_concurrency.lockutils [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:29:24 np0005593232 nova_compute[250269]: 2026-01-23 10:29:24.274 250273 DEBUG oslo_concurrency.processutils [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:29:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:29:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1734273158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:29:24 np0005593232 nova_compute[250269]: 2026-01-23 10:29:24.727 250273 DEBUG oslo_concurrency.processutils [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:29:24 np0005593232 nova_compute[250269]: 2026-01-23 10:29:24.735 250273 DEBUG nova.compute.provider_tree [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:29:24 np0005593232 nova_compute[250269]: 2026-01-23 10:29:24.869 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:25 np0005593232 nova_compute[250269]: 2026-01-23 10:29:25.186 250273 DEBUG nova.scheduler.client.report [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:29:25 np0005593232 nova_compute[250269]: 2026-01-23 10:29:25.489 250273 DEBUG oslo_concurrency.lockutils [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:29:25 np0005593232 nova_compute[250269]: 2026-01-23 10:29:25.586 250273 INFO nova.scheduler.client.report [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Deleted allocations for instance 13cce6b0-b75f-41bf-90dd-1b5cc5d2344b#033[00m
Jan 23 05:29:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:29:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:25.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:29:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3017: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 341 B/s wr, 0 op/s
Jan 23 05:29:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:26.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:26 np0005593232 nova_compute[250269]: 2026-01-23 10:29:26.207 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:26 np0005593232 podman[366620]: 2026-01-23 10:29:26.420775475 +0000 UTC m=+0.067995174 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 23 05:29:26 np0005593232 nova_compute[250269]: 2026-01-23 10:29:26.781 250273 DEBUG oslo_concurrency.lockutils [None req-53d95bf4-8d7f-4bc5-82dc-2e9b60a78136 3c17acb50b0d49d4b062d68aa88d1f7f d8887855ebd545a6bdab3b6a18c19dd9 - - default default] Lock "13cce6b0-b75f-41bf-90dd-1b5cc5d2344b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 21.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:29:26 np0005593232 nova_compute[250269]: 2026-01-23 10:29:26.888 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:27.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3018: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 341 B/s wr, 0 op/s
Jan 23 05:29:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:28.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:29:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:28.887 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:29:28 np0005593232 nova_compute[250269]: 2026-01-23 10:29:28.888 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:28.888 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:29:29 np0005593232 nova_compute[250269]: 2026-01-23 10:29:29.020 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:29 np0005593232 nova_compute[250269]: 2026-01-23 10:29:29.290 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:29 np0005593232 nova_compute[250269]: 2026-01-23 10:29:29.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:29:29 np0005593232 nova_compute[250269]: 2026-01-23 10:29:29.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 23 05:29:29 np0005593232 nova_compute[250269]: 2026-01-23 10:29:29.386 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 23 05:29:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:29.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:29 np0005593232 nova_compute[250269]: 2026-01-23 10:29:29.871 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:29:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3019: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s wr, 0 op/s
Jan 23 05:29:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:30.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:31.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3020: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s wr, 0 op/s
Jan 23 05:29:31 np0005593232 nova_compute[250269]: 2026-01-23 10:29:31.930 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:29:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:32.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:29:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:33.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3021: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s wr, 0 op/s
Jan 23 05:29:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:29:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:34.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:29:34 np0005593232 nova_compute[250269]: 2026-01-23 10:29:34.874 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:29:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:35.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3022: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.7 KiB/s wr, 0 op/s
Jan 23 05:29:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:36.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:36.891 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 05:29:36 np0005593232 nova_compute[250269]: 2026-01-23 10:29:36.934 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:29:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:29:37
Jan 23 05:29:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:29:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:29:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['volumes', '.mgr', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'backups', 'default.rgw.meta', 'default.rgw.log', 'images']
Jan 23 05:29:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:29:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:29:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:29:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:29:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:29:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:29:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:29:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:37.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3023: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 KiB/s wr, 0 op/s
Jan 23 05:29:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:38.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:38 np0005593232 nova_compute[250269]: 2026-01-23 10:29:38.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:29:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:29:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:29:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:29:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:29:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:29:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:29:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:29:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:29:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:29:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:29:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:29:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:39.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:39 np0005593232 nova_compute[250269]: 2026-01-23 10:29:39.877 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:29:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3024: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.3 KiB/s wr, 0 op/s
Jan 23 05:29:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:40.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:41.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3025: 321 pgs: 321 active+clean; 200 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:29:41 np0005593232 nova_compute[250269]: 2026-01-23 10:29:41.936 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:29:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:42.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:42 np0005593232 nova_compute[250269]: 2026-01-23 10:29:42.502 250273 DEBUG oslo_concurrency.lockutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Acquiring lock "713eba08-716b-48ed-866e-e231d09ebfaf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:29:42 np0005593232 nova_compute[250269]: 2026-01-23 10:29:42.502 250273 DEBUG oslo_concurrency.lockutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Lock "713eba08-716b-48ed-866e-e231d09ebfaf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:29:42 np0005593232 nova_compute[250269]: 2026-01-23 10:29:42.522 250273 DEBUG nova.compute.manager [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 23 05:29:42 np0005593232 nova_compute[250269]: 2026-01-23 10:29:42.608 250273 DEBUG oslo_concurrency.lockutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:29:42 np0005593232 nova_compute[250269]: 2026-01-23 10:29:42.608 250273 DEBUG oslo_concurrency.lockutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:29:42 np0005593232 nova_compute[250269]: 2026-01-23 10:29:42.616 250273 DEBUG nova.virt.hardware [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 23 05:29:42 np0005593232 nova_compute[250269]: 2026-01-23 10:29:42.616 250273 INFO nova.compute.claims [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Claim successful on node compute-0.ctlplane.example.com
Jan 23 05:29:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:42.645 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:29:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:42.646 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:29:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:42.646 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:29:42 np0005593232 nova_compute[250269]: 2026-01-23 10:29:42.750 250273 DEBUG oslo_concurrency.processutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:29:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:29:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1881062564' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:29:43 np0005593232 nova_compute[250269]: 2026-01-23 10:29:43.177 250273 DEBUG oslo_concurrency.processutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:29:43 np0005593232 nova_compute[250269]: 2026-01-23 10:29:43.182 250273 DEBUG nova.compute.provider_tree [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 05:29:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:29:43 np0005593232 nova_compute[250269]: 2026-01-23 10:29:43.602 250273 DEBUG nova.scheduler.client.report [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 05:29:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:29:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:43.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:29:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3026: 321 pgs: 321 active+clean; 155 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 15 KiB/s rd, 341 B/s wr, 19 op/s
Jan 23 05:29:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:29:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:44.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:29:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:29:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2084646791' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:29:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:29:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2084646791' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:29:44 np0005593232 nova_compute[250269]: 2026-01-23 10:29:44.714 250273 DEBUG oslo_concurrency.lockutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:29:44 np0005593232 nova_compute[250269]: 2026-01-23 10:29:44.715 250273 DEBUG nova.compute.manager [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 23 05:29:44 np0005593232 nova_compute[250269]: 2026-01-23 10:29:44.877 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:29:44 np0005593232 nova_compute[250269]: 2026-01-23 10:29:44.911 250273 DEBUG nova.compute.manager [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 23 05:29:44 np0005593232 nova_compute[250269]: 2026-01-23 10:29:44.912 250273 DEBUG nova.network.neutron [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 23 05:29:45 np0005593232 nova_compute[250269]: 2026-01-23 10:29:45.004 250273 INFO nova.virt.libvirt.driver [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 23 05:29:45 np0005593232 nova_compute[250269]: 2026-01-23 10:29:45.032 250273 DEBUG nova.compute.manager [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 23 05:29:45 np0005593232 nova_compute[250269]: 2026-01-23 10:29:45.172 250273 DEBUG nova.compute.manager [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 05:29:45 np0005593232 nova_compute[250269]: 2026-01-23 10:29:45.173 250273 DEBUG nova.virt.libvirt.driver [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 05:29:45 np0005593232 nova_compute[250269]: 2026-01-23 10:29:45.174 250273 INFO nova.virt.libvirt.driver [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Creating image(s)
Jan 23 05:29:45 np0005593232 nova_compute[250269]: 2026-01-23 10:29:45.199 250273 DEBUG nova.storage.rbd_utils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] rbd image 713eba08-716b-48ed-866e-e231d09ebfaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:29:45 np0005593232 nova_compute[250269]: 2026-01-23 10:29:45.227 250273 DEBUG nova.storage.rbd_utils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] rbd image 713eba08-716b-48ed-866e-e231d09ebfaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:29:45 np0005593232 nova_compute[250269]: 2026-01-23 10:29:45.256 250273 DEBUG nova.storage.rbd_utils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] rbd image 713eba08-716b-48ed-866e-e231d09ebfaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:29:45 np0005593232 nova_compute[250269]: 2026-01-23 10:29:45.260 250273 DEBUG oslo_concurrency.processutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:29:45 np0005593232 nova_compute[250269]: 2026-01-23 10:29:45.303 250273 DEBUG nova.policy [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8e1f41f21f79408d8dff1331cfd1e0db', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7be5cb5abaf44b0a9c0c307d348d8f75', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 23 05:29:45 np0005593232 nova_compute[250269]: 2026-01-23 10:29:45.357 250273 DEBUG oslo_concurrency.processutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:29:45 np0005593232 nova_compute[250269]: 2026-01-23 10:29:45.358 250273 DEBUG oslo_concurrency.lockutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:29:45 np0005593232 nova_compute[250269]: 2026-01-23 10:29:45.360 250273 DEBUG oslo_concurrency.lockutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:29:45 np0005593232 nova_compute[250269]: 2026-01-23 10:29:45.360 250273 DEBUG oslo_concurrency.lockutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:29:45 np0005593232 nova_compute[250269]: 2026-01-23 10:29:45.399 250273 DEBUG nova.storage.rbd_utils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] rbd image 713eba08-716b-48ed-866e-e231d09ebfaf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:29:45 np0005593232 nova_compute[250269]: 2026-01-23 10:29:45.404 250273 DEBUG oslo_concurrency.processutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 713eba08-716b-48ed-866e-e231d09ebfaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:29:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:29:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:45.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:29:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3027: 321 pgs: 321 active+clean; 121 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 682 B/s wr, 24 op/s
Jan 23 05:29:46 np0005593232 nova_compute[250269]: 2026-01-23 10:29:46.136 250273 DEBUG oslo_concurrency.processutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 713eba08-716b-48ed-866e-e231d09ebfaf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.732s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:29:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:46.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:46 np0005593232 nova_compute[250269]: 2026-01-23 10:29:46.223 250273 DEBUG nova.storage.rbd_utils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] resizing rbd image 713eba08-716b-48ed-866e-e231d09ebfaf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 23 05:29:46 np0005593232 nova_compute[250269]: 2026-01-23 10:29:46.484 250273 DEBUG nova.objects.instance [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Lazy-loading 'migration_context' on Instance uuid 713eba08-716b-48ed-866e-e231d09ebfaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 05:29:46 np0005593232 nova_compute[250269]: 2026-01-23 10:29:46.939 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:29:47 np0005593232 nova_compute[250269]: 2026-01-23 10:29:47.044 250273 DEBUG nova.virt.libvirt.driver [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 05:29:47 np0005593232 nova_compute[250269]: 2026-01-23 10:29:47.045 250273 DEBUG nova.virt.libvirt.driver [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Ensure instance console log exists: /var/lib/nova/instances/713eba08-716b-48ed-866e-e231d09ebfaf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 05:29:47 np0005593232 nova_compute[250269]: 2026-01-23 10:29:47.046 250273 DEBUG oslo_concurrency.lockutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:29:47 np0005593232 nova_compute[250269]: 2026-01-23 10:29:47.047 250273 DEBUG oslo_concurrency.lockutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:29:47 np0005593232 nova_compute[250269]: 2026-01-23 10:29:47.047 250273 DEBUG oslo_concurrency.lockutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:29:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:29:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:29:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:29:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:29:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.906639679880886e-06 of space, bias 1.0, pg target 0.002071991903964266 quantized to 32 (current 32)
Jan 23 05:29:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:29:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 05:29:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:29:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:29:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:29:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 05:29:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:29:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 05:29:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:29:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:29:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:29:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 05:29:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:29:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 05:29:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:29:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:29:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:29:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 05:29:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:29:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:47.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:29:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3028: 321 pgs: 321 active+clean; 150 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 1.4 MiB/s wr, 51 op/s
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:29:48.147450) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164188147570, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 2135, "num_deletes": 255, "total_data_size": 3707994, "memory_usage": 3770440, "flush_reason": "Manual Compaction"}
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Jan 23 05:29:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:48.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164188171206, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 3645647, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65335, "largest_seqno": 67469, "table_properties": {"data_size": 3636033, "index_size": 6043, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20333, "raw_average_key_size": 20, "raw_value_size": 3616659, "raw_average_value_size": 3668, "num_data_blocks": 263, "num_entries": 986, "num_filter_entries": 986, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769163986, "oldest_key_time": 1769163986, "file_creation_time": 1769164188, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 23816 microseconds, and 10033 cpu microseconds.
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:29:48.171280) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 3645647 bytes OK
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:29:48.171302) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:29:48.177603) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:29:48.177621) EVENT_LOG_v1 {"time_micros": 1769164188177614, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:29:48.177638) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 3699154, prev total WAL file size 3699154, number of live WAL files 2.
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:29:48.179004) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(3560KB)], [152(10071KB)]
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164188179083, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 13958977, "oldest_snapshot_seqno": -1}
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 9257 keys, 12060643 bytes, temperature: kUnknown
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164188317975, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 12060643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12001352, "index_size": 35050, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23173, "raw_key_size": 243499, "raw_average_key_size": 26, "raw_value_size": 11839456, "raw_average_value_size": 1278, "num_data_blocks": 1341, "num_entries": 9257, "num_filter_entries": 9257, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769164188, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:29:48.318297) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 12060643 bytes
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:29:48.357635) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 100.4 rd, 86.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 9.8 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(7.1) write-amplify(3.3) OK, records in: 9785, records dropped: 528 output_compression: NoCompression
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:29:48.357676) EVENT_LOG_v1 {"time_micros": 1769164188357661, "job": 94, "event": "compaction_finished", "compaction_time_micros": 138993, "compaction_time_cpu_micros": 41575, "output_level": 6, "num_output_files": 1, "total_output_size": 12060643, "num_input_records": 9785, "num_output_records": 9257, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164188358698, "job": 94, "event": "table_file_deletion", "file_number": 154}
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164188360612, "job": 94, "event": "table_file_deletion", "file_number": 152}
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:29:48.178924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:29:48.360734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:29:48.360742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:29:48.360745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:29:48.360748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:29:48.360750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:29:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:29:48 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 05:29:48 np0005593232 nova_compute[250269]: 2026-01-23 10:29:48.511 250273 DEBUG nova.network.neutron [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Successfully created port: dc21586e-25cd-4cb5-923b-a4766c5ef9cc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:29:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:29:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:49.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:29:49 np0005593232 nova_compute[250269]: 2026-01-23 10:29:49.920 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3029: 321 pgs: 321 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 23 05:29:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:29:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:50.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:29:50 np0005593232 nova_compute[250269]: 2026-01-23 10:29:50.964 250273 DEBUG nova.network.neutron [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Successfully updated port: dc21586e-25cd-4cb5-923b-a4766c5ef9cc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:29:50 np0005593232 nova_compute[250269]: 2026-01-23 10:29:50.982 250273 DEBUG oslo_concurrency.lockutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Acquiring lock "refresh_cache-713eba08-716b-48ed-866e-e231d09ebfaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:29:50 np0005593232 nova_compute[250269]: 2026-01-23 10:29:50.982 250273 DEBUG oslo_concurrency.lockutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Acquired lock "refresh_cache-713eba08-716b-48ed-866e-e231d09ebfaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:29:50 np0005593232 nova_compute[250269]: 2026-01-23 10:29:50.982 250273 DEBUG nova.network.neutron [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:29:51 np0005593232 nova_compute[250269]: 2026-01-23 10:29:51.165 250273 DEBUG nova.compute.manager [req-07edf851-118f-44f2-8099-acae94c5f4c0 req-f7da402f-e374-4ba9-9e42-3f269882c058 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Received event network-changed-dc21586e-25cd-4cb5-923b-a4766c5ef9cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:29:51 np0005593232 nova_compute[250269]: 2026-01-23 10:29:51.165 250273 DEBUG nova.compute.manager [req-07edf851-118f-44f2-8099-acae94c5f4c0 req-f7da402f-e374-4ba9-9e42-3f269882c058 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Refreshing instance network info cache due to event network-changed-dc21586e-25cd-4cb5-923b-a4766c5ef9cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:29:51 np0005593232 nova_compute[250269]: 2026-01-23 10:29:51.166 250273 DEBUG oslo_concurrency.lockutils [req-07edf851-118f-44f2-8099-acae94c5f4c0 req-f7da402f-e374-4ba9-9e42-3f269882c058 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-713eba08-716b-48ed-866e-e231d09ebfaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:29:51 np0005593232 nova_compute[250269]: 2026-01-23 10:29:51.243 250273 DEBUG nova.network.neutron [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:29:51 np0005593232 podman[366892]: 2026-01-23 10:29:51.474938688 +0000 UTC m=+0.127588287 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:29:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:51.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3030: 321 pgs: 321 active+clean; 167 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 23 05:29:51 np0005593232 nova_compute[250269]: 2026-01-23 10:29:51.941 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:52.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:52 np0005593232 nova_compute[250269]: 2026-01-23 10:29:52.314 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.213 250273 DEBUG nova.network.neutron [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Updating instance_info_cache with network_info: [{"id": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "address": "fa:16:3e:28:d3:85", "network": {"id": "bd95237d-0845-479e-9505-318e01879565", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-2097735183-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7be5cb5abaf44b0a9c0c307d348d8f75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc21586e-25", "ovs_interfaceid": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.242 250273 DEBUG oslo_concurrency.lockutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Releasing lock "refresh_cache-713eba08-716b-48ed-866e-e231d09ebfaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.242 250273 DEBUG nova.compute.manager [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Instance network_info: |[{"id": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "address": "fa:16:3e:28:d3:85", "network": {"id": "bd95237d-0845-479e-9505-318e01879565", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-2097735183-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7be5cb5abaf44b0a9c0c307d348d8f75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc21586e-25", "ovs_interfaceid": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.243 250273 DEBUG oslo_concurrency.lockutils [req-07edf851-118f-44f2-8099-acae94c5f4c0 req-f7da402f-e374-4ba9-9e42-3f269882c058 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-713eba08-716b-48ed-866e-e231d09ebfaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.243 250273 DEBUG nova.network.neutron [req-07edf851-118f-44f2-8099-acae94c5f4c0 req-f7da402f-e374-4ba9-9e42-3f269882c058 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Refreshing network info cache for port dc21586e-25cd-4cb5-923b-a4766c5ef9cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.247 250273 DEBUG nova.virt.libvirt.driver [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Start _get_guest_xml network_info=[{"id": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "address": "fa:16:3e:28:d3:85", "network": {"id": "bd95237d-0845-479e-9505-318e01879565", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-2097735183-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7be5cb5abaf44b0a9c0c307d348d8f75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc21586e-25", "ovs_interfaceid": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.253 250273 WARNING nova.virt.libvirt.driver [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.261 250273 DEBUG nova.virt.libvirt.host [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.262 250273 DEBUG nova.virt.libvirt.host [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.273 250273 DEBUG nova.virt.libvirt.host [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.274 250273 DEBUG nova.virt.libvirt.host [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.275 250273 DEBUG nova.virt.libvirt.driver [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.275 250273 DEBUG nova.virt.hardware [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.276 250273 DEBUG nova.virt.hardware [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.276 250273 DEBUG nova.virt.hardware [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.276 250273 DEBUG nova.virt.hardware [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.276 250273 DEBUG nova.virt.hardware [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.277 250273 DEBUG nova.virt.hardware [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.277 250273 DEBUG nova.virt.hardware [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.277 250273 DEBUG nova.virt.hardware [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.277 250273 DEBUG nova.virt.hardware [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.278 250273 DEBUG nova.virt.hardware [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.278 250273 DEBUG nova.virt.hardware [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.280 250273 DEBUG oslo_concurrency.processutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:29:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:29:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:29:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3452586967' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.766 250273 DEBUG oslo_concurrency.processutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:29:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:53.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.794 250273 DEBUG nova.storage.rbd_utils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] rbd image 713eba08-716b-48ed-866e-e231d09ebfaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:29:53 np0005593232 nova_compute[250269]: 2026-01-23 10:29:53.798 250273 DEBUG oslo_concurrency.processutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:29:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3031: 321 pgs: 321 active+clean; 183 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 44 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Jan 23 05:29:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:29:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:54.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:29:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:29:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3192275663' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.239 250273 DEBUG oslo_concurrency.processutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.241 250273 DEBUG nova.virt.libvirt.vif [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:29:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1234686874',display_name='tempest-TestSnapshotPattern-server-1234686874',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1234686874',id=173,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO0lEKBsepjSG1BUrT2qgopJ/7aCoBcgDi3hhuJKTvppGpJeuS7bRrTAjsHpfJAjqSviKitZ9vmMFVrUxqv9t4cjKwPE6pfdP8/KJg/bYjfHtBTugoC0prDbk1bWow1ivA==',key_name='tempest-TestSnapshotPattern-313488550',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7be5cb5abaf44b0a9c0c307d348d8f75',ramdisk_id='',reservation_id='r-6fm0yb09',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-428739353',owner_user_name='tempest-TestSnapshotPattern-428739353-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:29:45Z,user_data=None,user_id='8e1f41f21f79408d8dff1331cfd1e0db',uuid=713eba08-716b-48ed-866e-e231d09ebfaf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "address": "fa:16:3e:28:d3:85", "network": {"id": "bd95237d-0845-479e-9505-318e01879565", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-2097735183-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "7be5cb5abaf44b0a9c0c307d348d8f75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc21586e-25", "ovs_interfaceid": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.242 250273 DEBUG nova.network.os_vif_util [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Converting VIF {"id": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "address": "fa:16:3e:28:d3:85", "network": {"id": "bd95237d-0845-479e-9505-318e01879565", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-2097735183-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7be5cb5abaf44b0a9c0c307d348d8f75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc21586e-25", "ovs_interfaceid": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.243 250273 DEBUG nova.network.os_vif_util [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:d3:85,bridge_name='br-int',has_traffic_filtering=True,id=dc21586e-25cd-4cb5-923b-a4766c5ef9cc,network=Network(bd95237d-0845-479e-9505-318e01879565),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc21586e-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.245 250273 DEBUG nova.objects.instance [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 713eba08-716b-48ed-866e-e231d09ebfaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.268 250273 DEBUG nova.virt.libvirt.driver [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:29:54 np0005593232 nova_compute[250269]:  <uuid>713eba08-716b-48ed-866e-e231d09ebfaf</uuid>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:  <name>instance-000000ad</name>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <nova:name>tempest-TestSnapshotPattern-server-1234686874</nova:name>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:29:53</nova:creationTime>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:29:54 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:        <nova:user uuid="8e1f41f21f79408d8dff1331cfd1e0db">tempest-TestSnapshotPattern-428739353-project-member</nova:user>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:        <nova:project uuid="7be5cb5abaf44b0a9c0c307d348d8f75">tempest-TestSnapshotPattern-428739353</nova:project>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:        <nova:port uuid="dc21586e-25cd-4cb5-923b-a4766c5ef9cc">
Jan 23 05:29:54 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <entry name="serial">713eba08-716b-48ed-866e-e231d09ebfaf</entry>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <entry name="uuid">713eba08-716b-48ed-866e-e231d09ebfaf</entry>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/713eba08-716b-48ed-866e-e231d09ebfaf_disk">
Jan 23 05:29:54 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:29:54 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/713eba08-716b-48ed-866e-e231d09ebfaf_disk.config">
Jan 23 05:29:54 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:29:54 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:28:d3:85"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <target dev="tapdc21586e-25"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/713eba08-716b-48ed-866e-e231d09ebfaf/console.log" append="off"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:29:54 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:29:54 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:29:54 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:29:54 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.269 250273 DEBUG nova.compute.manager [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Preparing to wait for external event network-vif-plugged-dc21586e-25cd-4cb5-923b-a4766c5ef9cc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.270 250273 DEBUG oslo_concurrency.lockutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Acquiring lock "713eba08-716b-48ed-866e-e231d09ebfaf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.270 250273 DEBUG oslo_concurrency.lockutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Lock "713eba08-716b-48ed-866e-e231d09ebfaf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.270 250273 DEBUG oslo_concurrency.lockutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Lock "713eba08-716b-48ed-866e-e231d09ebfaf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.271 250273 DEBUG nova.virt.libvirt.vif [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:29:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1234686874',display_name='tempest-TestSnapshotPattern-server-1234686874',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1234686874',id=173,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO0lEKBsepjSG1BUrT2qgopJ/7aCoBcgDi3hhuJKTvppGpJeuS7bRrTAjsHpfJAjqSviKitZ9vmMFVrUxqv9t4cjKwPE6pfdP8/KJg/bYjfHtBTugoC0prDbk1bWow1ivA==',key_name='tempest-TestSnapshotPattern-313488550',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7be5cb5abaf44b0a9c0c307d348d8f75',ramdisk_id='',reservation_id='r-6fm0yb09',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-428739353',owner_user_name='tempest-TestSnapshotPattern-428739353-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:29:45Z,user_data=None,user_id='8e1f41f21f79408d8dff1331cfd1e0db',uuid=713eba08-716b-48ed-866e-e231d09ebfaf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "address": "fa:16:3e:28:d3:85", "network": {"id": "bd95237d-0845-479e-9505-318e01879565", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-2097735183-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "7be5cb5abaf44b0a9c0c307d348d8f75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc21586e-25", "ovs_interfaceid": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.272 250273 DEBUG nova.network.os_vif_util [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Converting VIF {"id": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "address": "fa:16:3e:28:d3:85", "network": {"id": "bd95237d-0845-479e-9505-318e01879565", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-2097735183-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7be5cb5abaf44b0a9c0c307d348d8f75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc21586e-25", "ovs_interfaceid": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.273 250273 DEBUG nova.network.os_vif_util [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:d3:85,bridge_name='br-int',has_traffic_filtering=True,id=dc21586e-25cd-4cb5-923b-a4766c5ef9cc,network=Network(bd95237d-0845-479e-9505-318e01879565),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc21586e-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.273 250273 DEBUG os_vif [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:d3:85,bridge_name='br-int',has_traffic_filtering=True,id=dc21586e-25cd-4cb5-923b-a4766c5ef9cc,network=Network(bd95237d-0845-479e-9505-318e01879565),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc21586e-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.274 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.275 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.275 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.280 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.280 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdc21586e-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.281 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdc21586e-25, col_values=(('external_ids', {'iface-id': 'dc21586e-25cd-4cb5-923b-a4766c5ef9cc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:d3:85', 'vm-uuid': '713eba08-716b-48ed-866e-e231d09ebfaf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.283 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.285 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:29:54 np0005593232 NetworkManager[49057]: <info>  [1769164194.2851] manager: (tapdc21586e-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/319)
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.293 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.294 250273 INFO os_vif [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:d3:85,bridge_name='br-int',has_traffic_filtering=True,id=dc21586e-25cd-4cb5-923b-a4766c5ef9cc,network=Network(bd95237d-0845-479e-9505-318e01879565),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc21586e-25')#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.366 250273 DEBUG nova.virt.libvirt.driver [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.367 250273 DEBUG nova.virt.libvirt.driver [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.367 250273 DEBUG nova.virt.libvirt.driver [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] No VIF found with MAC fa:16:3e:28:d3:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.368 250273 INFO nova.virt.libvirt.driver [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Using config drive#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.400 250273 DEBUG nova.storage.rbd_utils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] rbd image 713eba08-716b-48ed-866e-e231d09ebfaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:29:54 np0005593232 nova_compute[250269]: 2026-01-23 10:29:54.922 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:55 np0005593232 nova_compute[250269]: 2026-01-23 10:29:55.216 250273 INFO nova.virt.libvirt.driver [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Creating config drive at /var/lib/nova/instances/713eba08-716b-48ed-866e-e231d09ebfaf/disk.config#033[00m
Jan 23 05:29:55 np0005593232 nova_compute[250269]: 2026-01-23 10:29:55.222 250273 DEBUG oslo_concurrency.processutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/713eba08-716b-48ed-866e-e231d09ebfaf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4st9re88 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:29:55 np0005593232 nova_compute[250269]: 2026-01-23 10:29:55.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:29:55 np0005593232 nova_compute[250269]: 2026-01-23 10:29:55.358 250273 DEBUG oslo_concurrency.processutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/713eba08-716b-48ed-866e-e231d09ebfaf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4st9re88" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:29:55 np0005593232 nova_compute[250269]: 2026-01-23 10:29:55.396 250273 DEBUG nova.storage.rbd_utils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] rbd image 713eba08-716b-48ed-866e-e231d09ebfaf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:29:55 np0005593232 nova_compute[250269]: 2026-01-23 10:29:55.400 250273 DEBUG oslo_concurrency.processutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/713eba08-716b-48ed-866e-e231d09ebfaf/disk.config 713eba08-716b-48ed-866e-e231d09ebfaf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:29:55 np0005593232 nova_compute[250269]: 2026-01-23 10:29:55.606 250273 DEBUG oslo_concurrency.processutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/713eba08-716b-48ed-866e-e231d09ebfaf/disk.config 713eba08-716b-48ed-866e-e231d09ebfaf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:29:55 np0005593232 nova_compute[250269]: 2026-01-23 10:29:55.607 250273 INFO nova.virt.libvirt.driver [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Deleting local config drive /var/lib/nova/instances/713eba08-716b-48ed-866e-e231d09ebfaf/disk.config because it was imported into RBD.#033[00m
Jan 23 05:29:55 np0005593232 kernel: tapdc21586e-25: entered promiscuous mode
Jan 23 05:29:55 np0005593232 ovn_controller[151001]: 2026-01-23T10:29:55Z|00691|binding|INFO|Claiming lport dc21586e-25cd-4cb5-923b-a4766c5ef9cc for this chassis.
Jan 23 05:29:55 np0005593232 nova_compute[250269]: 2026-01-23 10:29:55.675 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:55 np0005593232 NetworkManager[49057]: <info>  [1769164195.6761] manager: (tapdc21586e-25): new Tun device (/org/freedesktop/NetworkManager/Devices/320)
Jan 23 05:29:55 np0005593232 ovn_controller[151001]: 2026-01-23T10:29:55Z|00692|binding|INFO|dc21586e-25cd-4cb5-923b-a4766c5ef9cc: Claiming fa:16:3e:28:d3:85 10.100.0.5
Jan 23 05:29:55 np0005593232 nova_compute[250269]: 2026-01-23 10:29:55.679 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:55.691 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:d3:85 10.100.0.5'], port_security=['fa:16:3e:28:d3:85 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '713eba08-716b-48ed-866e-e231d09ebfaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd95237d-0845-479e-9505-318e01879565', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7be5cb5abaf44b0a9c0c307d348d8f75', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9caee2dd-fa48-495f-923a-9b90f0b8d219', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fddb1949-170b-4939-a509-14ac4d8149d1, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=dc21586e-25cd-4cb5-923b-a4766c5ef9cc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:29:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:55.693 161902 INFO neutron.agent.ovn.metadata.agent [-] Port dc21586e-25cd-4cb5-923b-a4766c5ef9cc in datapath bd95237d-0845-479e-9505-318e01879565 bound to our chassis#033[00m
Jan 23 05:29:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:55.695 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bd95237d-0845-479e-9505-318e01879565#033[00m
Jan 23 05:29:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:55.708 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[bdfdbfdd-400b-4d9c-83fb-11d407e469b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:29:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:55.709 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbd95237d-01 in ovnmeta-bd95237d-0845-479e-9505-318e01879565 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:29:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:55.711 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbd95237d-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:29:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:55.711 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e533f7ab-9209-4422-ba57-24f4624f7193]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:29:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:55.712 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0a1967fa-be23-4f2a-b1f4-b11cab63163b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:29:55 np0005593232 systemd-udevd[367057]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:29:55 np0005593232 systemd-machined[215836]: New machine qemu-78-instance-000000ad.
Jan 23 05:29:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:55.727 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[71d7e2fa-195b-4568-8b60-ecf368bc4b4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:29:55 np0005593232 NetworkManager[49057]: <info>  [1769164195.7329] device (tapdc21586e-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:29:55 np0005593232 NetworkManager[49057]: <info>  [1769164195.7337] device (tapdc21586e-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:29:55 np0005593232 systemd[1]: Started Virtual Machine qemu-78-instance-000000ad.
Jan 23 05:29:55 np0005593232 nova_compute[250269]: 2026-01-23 10:29:55.741 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:55 np0005593232 ovn_controller[151001]: 2026-01-23T10:29:55Z|00693|binding|INFO|Setting lport dc21586e-25cd-4cb5-923b-a4766c5ef9cc ovn-installed in OVS
Jan 23 05:29:55 np0005593232 ovn_controller[151001]: 2026-01-23T10:29:55Z|00694|binding|INFO|Setting lport dc21586e-25cd-4cb5-923b-a4766c5ef9cc up in Southbound
Jan 23 05:29:55 np0005593232 nova_compute[250269]: 2026-01-23 10:29:55.751 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:55.751 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e655a45e-3fe6-416f-bc1a-12641b387597]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:29:55 np0005593232 nova_compute[250269]: 2026-01-23 10:29:55.786 250273 DEBUG nova.network.neutron [req-07edf851-118f-44f2-8099-acae94c5f4c0 req-f7da402f-e374-4ba9-9e42-3f269882c058 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Updated VIF entry in instance network info cache for port dc21586e-25cd-4cb5-923b-a4766c5ef9cc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:29:55 np0005593232 nova_compute[250269]: 2026-01-23 10:29:55.787 250273 DEBUG nova.network.neutron [req-07edf851-118f-44f2-8099-acae94c5f4c0 req-f7da402f-e374-4ba9-9e42-3f269882c058 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Updating instance_info_cache with network_info: [{"id": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "address": "fa:16:3e:28:d3:85", "network": {"id": "bd95237d-0845-479e-9505-318e01879565", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-2097735183-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7be5cb5abaf44b0a9c0c307d348d8f75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc21586e-25", "ovs_interfaceid": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:29:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:55.788 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[62df6fb8-8838-4c29-9830-a0240d17a6e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:29:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:55.794 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7ce9768e-4b86-4657-bce6-193cd68ed247]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:29:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:55 np0005593232 NetworkManager[49057]: <info>  [1769164195.7952] manager: (tapbd95237d-00): new Veth device (/org/freedesktop/NetworkManager/Devices/321)
Jan 23 05:29:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:29:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:55.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:29:55 np0005593232 nova_compute[250269]: 2026-01-23 10:29:55.812 250273 DEBUG oslo_concurrency.lockutils [req-07edf851-118f-44f2-8099-acae94c5f4c0 req-f7da402f-e374-4ba9-9e42-3f269882c058 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-713eba08-716b-48ed-866e-e231d09ebfaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:29:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:55.826 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[267c56cd-4ccb-4826-b8d0-e96a03a4d771]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:29:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:55.830 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[80b95546-4925-4b1e-aba9-7c0dcc53bb50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:29:55 np0005593232 NetworkManager[49057]: <info>  [1769164195.8558] device (tapbd95237d-00): carrier: link connected
Jan 23 05:29:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:55.862 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[63d33336-be72-4955-b2af-926d59c46fa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:29:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:55.882 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[789becdc-f889-4603-a5d7-5628257a3e85]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd95237d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:77:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 207], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810899, 'reachable_time': 31779, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367089, 'error': None, 'target': 'ovnmeta-bd95237d-0845-479e-9505-318e01879565', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:29:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:55.898 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[688d492d-4784-4791-942e-ba0f9ff0287b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee9:776b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 810899, 'tstamp': 810899}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367090, 'error': None, 'target': 'ovnmeta-bd95237d-0845-479e-9505-318e01879565', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:29:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:55.918 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf0bc69-cb84-40f4-a115-31c16cdd838c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd95237d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e9:77:6b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 207], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810899, 'reachable_time': 31779, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 367091, 'error': None, 'target': 'ovnmeta-bd95237d-0845-479e-9505-318e01879565', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:29:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3032: 321 pgs: 321 active+clean; 213 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 3.5 MiB/s wr, 62 op/s
Jan 23 05:29:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:55.949 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[bec1fca8-ffac-4b07-97a6-c260eb3387de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:56.004 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7013bcd9-072f-46d2-b5af-fdabd761b364]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:56.006 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd95237d-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:56.006 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:56.006 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd95237d-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:29:56 np0005593232 NetworkManager[49057]: <info>  [1769164196.0088] manager: (tapbd95237d-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/322)
Jan 23 05:29:56 np0005593232 nova_compute[250269]: 2026-01-23 10:29:56.009 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:56 np0005593232 kernel: tapbd95237d-00: entered promiscuous mode
Jan 23 05:29:56 np0005593232 nova_compute[250269]: 2026-01-23 10:29:56.013 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:56.015 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbd95237d-00, col_values=(('external_ids', {'iface-id': '0f4f3525-34df-42ca-96c3-3c7e0c388556'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:29:56 np0005593232 nova_compute[250269]: 2026-01-23 10:29:56.015 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:56 np0005593232 ovn_controller[151001]: 2026-01-23T10:29:56Z|00695|binding|INFO|Releasing lport 0f4f3525-34df-42ca-96c3-3c7e0c388556 from this chassis (sb_readonly=0)
Jan 23 05:29:56 np0005593232 nova_compute[250269]: 2026-01-23 10:29:56.041 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:56.043 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bd95237d-0845-479e-9505-318e01879565.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bd95237d-0845-479e-9505-318e01879565.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:56.044 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a61f985d-3dac-4321-a3ef-03db6a8c229c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:56.045 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-bd95237d-0845-479e-9505-318e01879565
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/bd95237d-0845-479e-9505-318e01879565.pid.haproxy
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID bd95237d-0845-479e-9505-318e01879565
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:29:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:29:56.047 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bd95237d-0845-479e-9505-318e01879565', 'env', 'PROCESS_TAG=haproxy-bd95237d-0845-479e-9505-318e01879565', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bd95237d-0845-479e-9505-318e01879565.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:29:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:56.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:56 np0005593232 nova_compute[250269]: 2026-01-23 10:29:56.447 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164196.4464102, 713eba08-716b-48ed-866e-e231d09ebfaf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:29:56 np0005593232 nova_compute[250269]: 2026-01-23 10:29:56.448 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] VM Started (Lifecycle Event)#033[00m
Jan 23 05:29:56 np0005593232 podman[367164]: 2026-01-23 10:29:56.490396243 +0000 UTC m=+0.052348139 container create e9e334ea513b5164bef2548361efd765d8e80a83205a7585ba80b2e8ed341200 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd95237d-0845-479e-9505-318e01879565, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 05:29:56 np0005593232 systemd[1]: Started libpod-conmon-e9e334ea513b5164bef2548361efd765d8e80a83205a7585ba80b2e8ed341200.scope.
Jan 23 05:29:56 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:29:56 np0005593232 podman[367164]: 2026-01-23 10:29:56.463460978 +0000 UTC m=+0.025412884 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:29:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/779247c66a7255db323c8d458194c020c1f52713d918878df6f54f726cc5e8de/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:29:56 np0005593232 podman[367164]: 2026-01-23 10:29:56.57790199 +0000 UTC m=+0.139853906 container init e9e334ea513b5164bef2548361efd765d8e80a83205a7585ba80b2e8ed341200 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd95237d-0845-479e-9505-318e01879565, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:29:56 np0005593232 podman[367164]: 2026-01-23 10:29:56.585044553 +0000 UTC m=+0.146996439 container start e9e334ea513b5164bef2548361efd765d8e80a83205a7585ba80b2e8ed341200 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd95237d-0845-479e-9505-318e01879565, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 23 05:29:56 np0005593232 podman[367177]: 2026-01-23 10:29:56.594530373 +0000 UTC m=+0.069797105 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 23 05:29:56 np0005593232 neutron-haproxy-ovnmeta-bd95237d-0845-479e-9505-318e01879565[367186]: [NOTICE]   (367203) : New worker (367205) forked
Jan 23 05:29:56 np0005593232 neutron-haproxy-ovnmeta-bd95237d-0845-479e-9505-318e01879565[367186]: [NOTICE]   (367203) : Loading success.
Jan 23 05:29:57 np0005593232 nova_compute[250269]: 2026-01-23 10:29:57.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:29:57 np0005593232 nova_compute[250269]: 2026-01-23 10:29:57.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:29:57 np0005593232 nova_compute[250269]: 2026-01-23 10:29:57.529 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:29:57 np0005593232 nova_compute[250269]: 2026-01-23 10:29:57.536 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164196.4476552, 713eba08-716b-48ed-866e-e231d09ebfaf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:29:57 np0005593232 nova_compute[250269]: 2026-01-23 10:29:57.536 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:29:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:57.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:57 np0005593232 nova_compute[250269]: 2026-01-23 10:29:57.905 250273 DEBUG nova.compute.manager [req-e90a8ca7-f9cc-47d3-8368-011c44c0c40c req-2323a164-7f43-41a4-badf-e159412ec74f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Received event network-vif-plugged-dc21586e-25cd-4cb5-923b-a4766c5ef9cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:29:57 np0005593232 nova_compute[250269]: 2026-01-23 10:29:57.907 250273 DEBUG oslo_concurrency.lockutils [req-e90a8ca7-f9cc-47d3-8368-011c44c0c40c req-2323a164-7f43-41a4-badf-e159412ec74f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "713eba08-716b-48ed-866e-e231d09ebfaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:29:57 np0005593232 nova_compute[250269]: 2026-01-23 10:29:57.912 250273 DEBUG oslo_concurrency.lockutils [req-e90a8ca7-f9cc-47d3-8368-011c44c0c40c req-2323a164-7f43-41a4-badf-e159412ec74f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "713eba08-716b-48ed-866e-e231d09ebfaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:29:57 np0005593232 nova_compute[250269]: 2026-01-23 10:29:57.912 250273 DEBUG oslo_concurrency.lockutils [req-e90a8ca7-f9cc-47d3-8368-011c44c0c40c req-2323a164-7f43-41a4-badf-e159412ec74f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "713eba08-716b-48ed-866e-e231d09ebfaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:29:57 np0005593232 nova_compute[250269]: 2026-01-23 10:29:57.912 250273 DEBUG nova.compute.manager [req-e90a8ca7-f9cc-47d3-8368-011c44c0c40c req-2323a164-7f43-41a4-badf-e159412ec74f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Processing event network-vif-plugged-dc21586e-25cd-4cb5-923b-a4766c5ef9cc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:29:57 np0005593232 nova_compute[250269]: 2026-01-23 10:29:57.913 250273 DEBUG nova.compute.manager [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:29:57 np0005593232 nova_compute[250269]: 2026-01-23 10:29:57.918 250273 DEBUG nova.virt.libvirt.driver [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:29:57 np0005593232 nova_compute[250269]: 2026-01-23 10:29:57.923 250273 INFO nova.virt.libvirt.driver [-] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Instance spawned successfully.#033[00m
Jan 23 05:29:57 np0005593232 nova_compute[250269]: 2026-01-23 10:29:57.923 250273 DEBUG nova.virt.libvirt.driver [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:29:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3033: 321 pgs: 321 active+clean; 213 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 3.6 MiB/s wr, 60 op/s
Jan 23 05:29:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:29:58.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:29:59 np0005593232 nova_compute[250269]: 2026-01-23 10:29:59.285 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:59 np0005593232 nova_compute[250269]: 2026-01-23 10:29:59.694 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:29:59 np0005593232 nova_compute[250269]: 2026-01-23 10:29:59.705 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164197.9177585, 713eba08-716b-48ed-866e-e231d09ebfaf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:29:59 np0005593232 nova_compute[250269]: 2026-01-23 10:29:59.706 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:29:59 np0005593232 nova_compute[250269]: 2026-01-23 10:29:59.757 250273 DEBUG nova.virt.libvirt.driver [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:29:59 np0005593232 nova_compute[250269]: 2026-01-23 10:29:59.758 250273 DEBUG nova.virt.libvirt.driver [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:29:59 np0005593232 nova_compute[250269]: 2026-01-23 10:29:59.759 250273 DEBUG nova.virt.libvirt.driver [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:29:59 np0005593232 nova_compute[250269]: 2026-01-23 10:29:59.760 250273 DEBUG nova.virt.libvirt.driver [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:29:59 np0005593232 nova_compute[250269]: 2026-01-23 10:29:59.761 250273 DEBUG nova.virt.libvirt.driver [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:29:59 np0005593232 nova_compute[250269]: 2026-01-23 10:29:59.762 250273 DEBUG nova.virt.libvirt.driver [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:29:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:29:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:29:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:29:59.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:29:59 np0005593232 nova_compute[250269]: 2026-01-23 10:29:59.827 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:29:59 np0005593232 nova_compute[250269]: 2026-01-23 10:29:59.832 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:29:59 np0005593232 nova_compute[250269]: 2026-01-23 10:29:59.923 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:29:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3034: 321 pgs: 321 active+clean; 213 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 423 KiB/s rd, 2.2 MiB/s wr, 54 op/s
Jan 23 05:30:00 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 23 05:30:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:00.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:00 np0005593232 nova_compute[250269]: 2026-01-23 10:30:00.500 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 23 05:30:00 np0005593232 ceph-mon[74423]: overall HEALTH_OK
Jan 23 05:30:00 np0005593232 nova_compute[250269]: 2026-01-23 10:30:00.939 250273 INFO nova.compute.manager [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Took 15.77 seconds to spawn the instance on the hypervisor.
Jan 23 05:30:00 np0005593232 nova_compute[250269]: 2026-01-23 10:30:00.940 250273 DEBUG nova.compute.manager [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:30:01 np0005593232 nova_compute[250269]: 2026-01-23 10:30:01.294 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:30:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:01.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3035: 321 pgs: 321 active+clean; 213 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 422 KiB/s rd, 1.8 MiB/s wr, 50 op/s
Jan 23 05:30:01 np0005593232 nova_compute[250269]: 2026-01-23 10:30:01.967 250273 INFO nova.compute.manager [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Took 19.39 seconds to build instance.
Jan 23 05:30:02 np0005593232 nova_compute[250269]: 2026-01-23 10:30:02.026 250273 DEBUG nova.compute.manager [req-ff53de5c-5938-4082-86eb-285024a9b74e req-2c3ed90e-b0b8-4d5f-b6b6-383257a1b394 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Received event network-vif-plugged-dc21586e-25cd-4cb5-923b-a4766c5ef9cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:30:02 np0005593232 nova_compute[250269]: 2026-01-23 10:30:02.027 250273 DEBUG oslo_concurrency.lockutils [req-ff53de5c-5938-4082-86eb-285024a9b74e req-2c3ed90e-b0b8-4d5f-b6b6-383257a1b394 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "713eba08-716b-48ed-866e-e231d09ebfaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:30:02 np0005593232 nova_compute[250269]: 2026-01-23 10:30:02.027 250273 DEBUG oslo_concurrency.lockutils [req-ff53de5c-5938-4082-86eb-285024a9b74e req-2c3ed90e-b0b8-4d5f-b6b6-383257a1b394 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "713eba08-716b-48ed-866e-e231d09ebfaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:30:02 np0005593232 nova_compute[250269]: 2026-01-23 10:30:02.030 250273 DEBUG oslo_concurrency.lockutils [req-ff53de5c-5938-4082-86eb-285024a9b74e req-2c3ed90e-b0b8-4d5f-b6b6-383257a1b394 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "713eba08-716b-48ed-866e-e231d09ebfaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:30:02 np0005593232 nova_compute[250269]: 2026-01-23 10:30:02.031 250273 DEBUG nova.compute.manager [req-ff53de5c-5938-4082-86eb-285024a9b74e req-2c3ed90e-b0b8-4d5f-b6b6-383257a1b394 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] No waiting events found dispatching network-vif-plugged-dc21586e-25cd-4cb5-923b-a4766c5ef9cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 05:30:02 np0005593232 nova_compute[250269]: 2026-01-23 10:30:02.031 250273 WARNING nova.compute.manager [req-ff53de5c-5938-4082-86eb-285024a9b74e req-2c3ed90e-b0b8-4d5f-b6b6-383257a1b394 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Received unexpected event network-vif-plugged-dc21586e-25cd-4cb5-923b-a4766c5ef9cc for instance with vm_state building and task_state spawning.
Jan 23 05:30:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:02.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:02 np0005593232 nova_compute[250269]: 2026-01-23 10:30:02.304 250273 DEBUG oslo_concurrency.lockutils [None req-49bf862c-acc5-4189-a656-b6a4a39a05f7 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Lock "713eba08-716b-48ed-866e-e231d09ebfaf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:30:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:30:03 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0[81752]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 23 05:30:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:03.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3036: 321 pgs: 321 active+clean; 213 MiB data, 1.2 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 23 05:30:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:04.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:04 np0005593232 nova_compute[250269]: 2026-01-23 10:30:04.289 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:30:04 np0005593232 podman[367439]: 2026-01-23 10:30:04.402471559 +0000 UTC m=+0.108723380 container exec 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:30:04 np0005593232 podman[367439]: 2026-01-23 10:30:04.541744028 +0000 UTC m=+0.247995849 container exec_died 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:30:04 np0005593232 nova_compute[250269]: 2026-01-23 10:30:04.926 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:30:05 np0005593232 nova_compute[250269]: 2026-01-23 10:30:05.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:30:05 np0005593232 nova_compute[250269]: 2026-01-23 10:30:05.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:30:05 np0005593232 podman[367594]: 2026-01-23 10:30:05.365918434 +0000 UTC m=+0.091214934 container exec b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 05:30:05 np0005593232 podman[367594]: 2026-01-23 10:30:05.376801563 +0000 UTC m=+0.102098003 container exec_died b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 05:30:05 np0005593232 podman[367661]: 2026-01-23 10:30:05.734607193 +0000 UTC m=+0.098342856 container exec 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, distribution-scope=public, io.buildah.version=1.28.2, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, vcs-type=git, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 05:30:05 np0005593232 podman[367661]: 2026-01-23 10:30:05.750189036 +0000 UTC m=+0.113924689 container exec_died 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.buildah.version=1.28.2, release=1793, version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, vcs-type=git, io.openshift.tags=Ceph keepalived, name=keepalived, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 23 05:30:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:05.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:30:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:30:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:30:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:30:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3037: 321 pgs: 321 active+clean; 235 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.3 MiB/s wr, 112 op/s
Jan 23 05:30:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:06.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:30:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:30:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:30:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:30:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:30:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:30:06 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 3be5d5bc-02c0-4cd4-95bc-bce968fad1c4 does not exist
Jan 23 05:30:06 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1c3da0b8-1428-4d72-ab74-5e081d9d1aea does not exist
Jan 23 05:30:06 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev f2604991-30d2-4afa-8d0c-898bc2ac1694 does not exist
Jan 23 05:30:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:30:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:30:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:30:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:30:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:30:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:30:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:30:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:30:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:30:07 np0005593232 nova_compute[250269]: 2026-01-23 10:30:07.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:30:07 np0005593232 podman[367963]: 2026-01-23 10:30:07.491384497 +0000 UTC m=+0.042800277 container create f0bfb3de8a652a3ab9709d2ecf9304881742eee9256cab11cb90469f2b560af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_merkle, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 05:30:07 np0005593232 systemd[1]: Started libpod-conmon-f0bfb3de8a652a3ab9709d2ecf9304881742eee9256cab11cb90469f2b560af0.scope.
Jan 23 05:30:07 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:30:07 np0005593232 podman[367963]: 2026-01-23 10:30:07.471324007 +0000 UTC m=+0.022739837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:30:07 np0005593232 podman[367963]: 2026-01-23 10:30:07.569528138 +0000 UTC m=+0.120943958 container init f0bfb3de8a652a3ab9709d2ecf9304881742eee9256cab11cb90469f2b560af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:30:07 np0005593232 podman[367963]: 2026-01-23 10:30:07.57767337 +0000 UTC m=+0.129089170 container start f0bfb3de8a652a3ab9709d2ecf9304881742eee9256cab11cb90469f2b560af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:30:07 np0005593232 podman[367963]: 2026-01-23 10:30:07.581842238 +0000 UTC m=+0.133258078 container attach f0bfb3de8a652a3ab9709d2ecf9304881742eee9256cab11cb90469f2b560af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_merkle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:30:07 np0005593232 jolly_merkle[367979]: 167 167
Jan 23 05:30:07 np0005593232 systemd[1]: libpod-f0bfb3de8a652a3ab9709d2ecf9304881742eee9256cab11cb90469f2b560af0.scope: Deactivated successfully.
Jan 23 05:30:07 np0005593232 podman[367963]: 2026-01-23 10:30:07.585039529 +0000 UTC m=+0.136455309 container died f0bfb3de8a652a3ab9709d2ecf9304881742eee9256cab11cb90469f2b560af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_merkle, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 05:30:07 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a15d6e92163960c04009968e303b86e277bbbdaec180b54575f1ad95096ec977-merged.mount: Deactivated successfully.
Jan 23 05:30:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:30:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:30:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:30:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:30:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:30:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:30:07 np0005593232 podman[367963]: 2026-01-23 10:30:07.629880764 +0000 UTC m=+0.181296534 container remove f0bfb3de8a652a3ab9709d2ecf9304881742eee9256cab11cb90469f2b560af0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 05:30:07 np0005593232 systemd[1]: libpod-conmon-f0bfb3de8a652a3ab9709d2ecf9304881742eee9256cab11cb90469f2b560af0.scope: Deactivated successfully.
Jan 23 05:30:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:07.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:07 np0005593232 podman[368003]: 2026-01-23 10:30:07.815625503 +0000 UTC m=+0.047254984 container create 452855faf7d21d50a0756527d83fd7c3fe23178f57ea00a669e845b4875e6630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 05:30:07 np0005593232 nova_compute[250269]: 2026-01-23 10:30:07.819 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:30:07 np0005593232 nova_compute[250269]: 2026-01-23 10:30:07.820 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:30:07 np0005593232 nova_compute[250269]: 2026-01-23 10:30:07.820 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:30:07 np0005593232 nova_compute[250269]: 2026-01-23 10:30:07.820 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 23 05:30:07 np0005593232 nova_compute[250269]: 2026-01-23 10:30:07.821 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:30:07 np0005593232 systemd[1]: Started libpod-conmon-452855faf7d21d50a0756527d83fd7c3fe23178f57ea00a669e845b4875e6630.scope.
Jan 23 05:30:07 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:30:07 np0005593232 podman[368003]: 2026-01-23 10:30:07.795735428 +0000 UTC m=+0.027364929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:30:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39334138c77d73f3543d37c10243e52f69e75bc5905d6367c8ad9553dd95646e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:30:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39334138c77d73f3543d37c10243e52f69e75bc5905d6367c8ad9553dd95646e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:30:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39334138c77d73f3543d37c10243e52f69e75bc5905d6367c8ad9553dd95646e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:30:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39334138c77d73f3543d37c10243e52f69e75bc5905d6367c8ad9553dd95646e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:30:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39334138c77d73f3543d37c10243e52f69e75bc5905d6367c8ad9553dd95646e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:30:07 np0005593232 podman[368003]: 2026-01-23 10:30:07.924703284 +0000 UTC m=+0.156332785 container init 452855faf7d21d50a0756527d83fd7c3fe23178f57ea00a669e845b4875e6630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sammet, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 05:30:07 np0005593232 podman[368003]: 2026-01-23 10:30:07.933281027 +0000 UTC m=+0.164910508 container start 452855faf7d21d50a0756527d83fd7c3fe23178f57ea00a669e845b4875e6630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sammet, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:30:07 np0005593232 podman[368003]: 2026-01-23 10:30:07.939884835 +0000 UTC m=+0.171514316 container attach 452855faf7d21d50a0756527d83fd7c3fe23178f57ea00a669e845b4875e6630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sammet, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:30:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3038: 321 pgs: 321 active+clean; 260 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Jan 23 05:30:07 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:30:07 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:30:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:08.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:30:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1716553519' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:30:08 np0005593232 nova_compute[250269]: 2026-01-23 10:30:08.324 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:30:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:30:08 np0005593232 bold_sammet[368020]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:30:08 np0005593232 bold_sammet[368020]: --> relative data size: 1.0
Jan 23 05:30:08 np0005593232 bold_sammet[368020]: --> All data devices are unavailable
Jan 23 05:30:08 np0005593232 nova_compute[250269]: 2026-01-23 10:30:08.779 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000ad as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:30:08 np0005593232 nova_compute[250269]: 2026-01-23 10:30:08.780 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000ad as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:30:08 np0005593232 systemd[1]: libpod-452855faf7d21d50a0756527d83fd7c3fe23178f57ea00a669e845b4875e6630.scope: Deactivated successfully.
Jan 23 05:30:08 np0005593232 podman[368058]: 2026-01-23 10:30:08.832038342 +0000 UTC m=+0.026973898 container died 452855faf7d21d50a0756527d83fd7c3fe23178f57ea00a669e845b4875e6630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:30:08 np0005593232 nova_compute[250269]: 2026-01-23 10:30:08.934 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:30:08 np0005593232 nova_compute[250269]: 2026-01-23 10:30:08.936 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4018MB free_disk=20.925731658935547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:30:08 np0005593232 nova_compute[250269]: 2026-01-23 10:30:08.936 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:30:08 np0005593232 nova_compute[250269]: 2026-01-23 10:30:08.937 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:30:09 np0005593232 nova_compute[250269]: 2026-01-23 10:30:09.138 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 713eba08-716b-48ed-866e-e231d09ebfaf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:30:09 np0005593232 nova_compute[250269]: 2026-01-23 10:30:09.139 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:30:09 np0005593232 nova_compute[250269]: 2026-01-23 10:30:09.139 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:30:09 np0005593232 nova_compute[250269]: 2026-01-23 10:30:09.189 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:30:09 np0005593232 nova_compute[250269]: 2026-01-23 10:30:09.293 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:30:09 np0005593232 systemd[1]: var-lib-containers-storage-overlay-39334138c77d73f3543d37c10243e52f69e75bc5905d6367c8ad9553dd95646e-merged.mount: Deactivated successfully.
Jan 23 05:30:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:30:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/44508774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:30:09 np0005593232 nova_compute[250269]: 2026-01-23 10:30:09.671 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:30:09 np0005593232 nova_compute[250269]: 2026-01-23 10:30:09.678 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:30:09 np0005593232 nova_compute[250269]: 2026-01-23 10:30:09.720 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:30:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:09.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:09 np0005593232 nova_compute[250269]: 2026-01-23 10:30:09.921 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:30:09 np0005593232 nova_compute[250269]: 2026-01-23 10:30:09.922 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.985s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:30:09 np0005593232 nova_compute[250269]: 2026-01-23 10:30:09.927 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:30:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3039: 321 pgs: 321 active+clean; 260 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.4 MiB/s rd, 1.8 MiB/s wr, 154 op/s
Jan 23 05:30:09 np0005593232 podman[368058]: 2026-01-23 10:30:09.981639768 +0000 UTC m=+1.176575324 container remove 452855faf7d21d50a0756527d83fd7c3fe23178f57ea00a669e845b4875e6630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sammet, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:30:09 np0005593232 systemd[1]: libpod-conmon-452855faf7d21d50a0756527d83fd7c3fe23178f57ea00a669e845b4875e6630.scope: Deactivated successfully.
Jan 23 05:30:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:10.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:10 np0005593232 podman[368236]: 2026-01-23 10:30:10.58788791 +0000 UTC m=+0.021714408 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:30:10 np0005593232 podman[368236]: 2026-01-23 10:30:10.74479112 +0000 UTC m=+0.178617618 container create 7043f0f0a0c25b2d1047466a20d1aa1de489cc918b60a7f28f017b77d2926b14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_swanson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:30:10 np0005593232 systemd[1]: Started libpod-conmon-7043f0f0a0c25b2d1047466a20d1aa1de489cc918b60a7f28f017b77d2926b14.scope.
Jan 23 05:30:10 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:30:10 np0005593232 podman[368236]: 2026-01-23 10:30:10.833823861 +0000 UTC m=+0.267650379 container init 7043f0f0a0c25b2d1047466a20d1aa1de489cc918b60a7f28f017b77d2926b14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_swanson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 05:30:10 np0005593232 podman[368236]: 2026-01-23 10:30:10.841330614 +0000 UTC m=+0.275157112 container start 7043f0f0a0c25b2d1047466a20d1aa1de489cc918b60a7f28f017b77d2926b14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_swanson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:30:10 np0005593232 podman[368236]: 2026-01-23 10:30:10.844611017 +0000 UTC m=+0.278437525 container attach 7043f0f0a0c25b2d1047466a20d1aa1de489cc918b60a7f28f017b77d2926b14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_swanson, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:30:10 np0005593232 recursing_swanson[368252]: 167 167
Jan 23 05:30:10 np0005593232 systemd[1]: libpod-7043f0f0a0c25b2d1047466a20d1aa1de489cc918b60a7f28f017b77d2926b14.scope: Deactivated successfully.
Jan 23 05:30:10 np0005593232 podman[368236]: 2026-01-23 10:30:10.847661204 +0000 UTC m=+0.281487732 container died 7043f0f0a0c25b2d1047466a20d1aa1de489cc918b60a7f28f017b77d2926b14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:30:10 np0005593232 systemd[1]: var-lib-containers-storage-overlay-9cb055577b5a8be1d1e552b9ae890bb53ed698542484671cbe415011e077a801-merged.mount: Deactivated successfully.
Jan 23 05:30:11 np0005593232 podman[368236]: 2026-01-23 10:30:11.026906409 +0000 UTC m=+0.460732907 container remove 7043f0f0a0c25b2d1047466a20d1aa1de489cc918b60a7f28f017b77d2926b14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 23 05:30:11 np0005593232 systemd[1]: libpod-conmon-7043f0f0a0c25b2d1047466a20d1aa1de489cc918b60a7f28f017b77d2926b14.scope: Deactivated successfully.
Jan 23 05:30:11 np0005593232 podman[368275]: 2026-01-23 10:30:11.21095584 +0000 UTC m=+0.052216295 container create e8903107e49a5dbc1b66fd1ea2fcf5026a4e1dae5e63ea18ac25cb3e93f91099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cerf, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 05:30:11 np0005593232 systemd[1]: Started libpod-conmon-e8903107e49a5dbc1b66fd1ea2fcf5026a4e1dae5e63ea18ac25cb3e93f91099.scope.
Jan 23 05:30:11 np0005593232 podman[368275]: 2026-01-23 10:30:11.183940322 +0000 UTC m=+0.025200807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:30:11 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:30:11 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3dd8830ab6c2344c1bbbfda3e56fd436d110ed433384918ff90dad2a221e9cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:30:11 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3dd8830ab6c2344c1bbbfda3e56fd436d110ed433384918ff90dad2a221e9cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:30:11 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3dd8830ab6c2344c1bbbfda3e56fd436d110ed433384918ff90dad2a221e9cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:30:11 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3dd8830ab6c2344c1bbbfda3e56fd436d110ed433384918ff90dad2a221e9cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:30:11 np0005593232 podman[368275]: 2026-01-23 10:30:11.314492703 +0000 UTC m=+0.155753168 container init e8903107e49a5dbc1b66fd1ea2fcf5026a4e1dae5e63ea18ac25cb3e93f91099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 05:30:11 np0005593232 podman[368275]: 2026-01-23 10:30:11.321814021 +0000 UTC m=+0.163074456 container start e8903107e49a5dbc1b66fd1ea2fcf5026a4e1dae5e63ea18ac25cb3e93f91099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cerf, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:30:11 np0005593232 podman[368275]: 2026-01-23 10:30:11.327076141 +0000 UTC m=+0.168336576 container attach e8903107e49a5dbc1b66fd1ea2fcf5026a4e1dae5e63ea18ac25cb3e93f91099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 05:30:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:30:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:11.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:30:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3040: 321 pgs: 321 active+clean; 260 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 134 op/s
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]: {
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:    "0": [
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:        {
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:            "devices": [
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:                "/dev/loop3"
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:            ],
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:            "lv_name": "ceph_lv0",
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:            "lv_size": "7511998464",
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:            "name": "ceph_lv0",
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:            "tags": {
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:                "ceph.cluster_name": "ceph",
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:                "ceph.crush_device_class": "",
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:                "ceph.encrypted": "0",
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:                "ceph.osd_id": "0",
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:                "ceph.type": "block",
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:                "ceph.vdo": "0"
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:            },
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:            "type": "block",
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:            "vg_name": "ceph_vg0"
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:        }
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]:    ]
Jan 23 05:30:12 np0005593232 interesting_cerf[368292]: }
Jan 23 05:30:12 np0005593232 systemd[1]: libpod-e8903107e49a5dbc1b66fd1ea2fcf5026a4e1dae5e63ea18ac25cb3e93f91099.scope: Deactivated successfully.
Jan 23 05:30:12 np0005593232 podman[368275]: 2026-01-23 10:30:12.134060467 +0000 UTC m=+0.975320912 container died e8903107e49a5dbc1b66fd1ea2fcf5026a4e1dae5e63ea18ac25cb3e93f91099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cerf, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 05:30:12 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f3dd8830ab6c2344c1bbbfda3e56fd436d110ed433384918ff90dad2a221e9cf-merged.mount: Deactivated successfully.
Jan 23 05:30:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:12.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:12 np0005593232 podman[368275]: 2026-01-23 10:30:12.190566473 +0000 UTC m=+1.031826908 container remove e8903107e49a5dbc1b66fd1ea2fcf5026a4e1dae5e63ea18ac25cb3e93f91099 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cerf, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:30:12 np0005593232 systemd[1]: libpod-conmon-e8903107e49a5dbc1b66fd1ea2fcf5026a4e1dae5e63ea18ac25cb3e93f91099.scope: Deactivated successfully.
Jan 23 05:30:12 np0005593232 ovn_controller[151001]: 2026-01-23T10:30:12Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:28:d3:85 10.100.0.5
Jan 23 05:30:12 np0005593232 ovn_controller[151001]: 2026-01-23T10:30:12Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:28:d3:85 10.100.0.5
Jan 23 05:30:12 np0005593232 podman[368451]: 2026-01-23 10:30:12.777665461 +0000 UTC m=+0.035981084 container create c35a12c66ae0692ff2e38ecc4f4e953cd7abcd4b573e7f63612e00b5ddbc7792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_chaum, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:30:12 np0005593232 systemd[1]: Started libpod-conmon-c35a12c66ae0692ff2e38ecc4f4e953cd7abcd4b573e7f63612e00b5ddbc7792.scope.
Jan 23 05:30:12 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:30:12 np0005593232 podman[368451]: 2026-01-23 10:30:12.85575196 +0000 UTC m=+0.114067603 container init c35a12c66ae0692ff2e38ecc4f4e953cd7abcd4b573e7f63612e00b5ddbc7792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_chaum, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:30:12 np0005593232 podman[368451]: 2026-01-23 10:30:12.76147024 +0000 UTC m=+0.019785893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:30:12 np0005593232 podman[368451]: 2026-01-23 10:30:12.863252443 +0000 UTC m=+0.121568066 container start c35a12c66ae0692ff2e38ecc4f4e953cd7abcd4b573e7f63612e00b5ddbc7792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_chaum, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 05:30:12 np0005593232 podman[368451]: 2026-01-23 10:30:12.866359292 +0000 UTC m=+0.124674935 container attach c35a12c66ae0692ff2e38ecc4f4e953cd7abcd4b573e7f63612e00b5ddbc7792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_chaum, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 05:30:12 np0005593232 blissful_chaum[368467]: 167 167
Jan 23 05:30:12 np0005593232 systemd[1]: libpod-c35a12c66ae0692ff2e38ecc4f4e953cd7abcd4b573e7f63612e00b5ddbc7792.scope: Deactivated successfully.
Jan 23 05:30:12 np0005593232 podman[368451]: 2026-01-23 10:30:12.867573406 +0000 UTC m=+0.125889029 container died c35a12c66ae0692ff2e38ecc4f4e953cd7abcd4b573e7f63612e00b5ddbc7792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_chaum, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:30:12 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7528a45668536c04ae750ae2726c7e8537c0122c7c1847368893b055ab44d49f-merged.mount: Deactivated successfully.
Jan 23 05:30:12 np0005593232 podman[368451]: 2026-01-23 10:30:12.89904036 +0000 UTC m=+0.157355983 container remove c35a12c66ae0692ff2e38ecc4f4e953cd7abcd4b573e7f63612e00b5ddbc7792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_chaum, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:30:12 np0005593232 systemd[1]: libpod-conmon-c35a12c66ae0692ff2e38ecc4f4e953cd7abcd4b573e7f63612e00b5ddbc7792.scope: Deactivated successfully.
Jan 23 05:30:13 np0005593232 podman[368492]: 2026-01-23 10:30:13.083557255 +0000 UTC m=+0.046748110 container create 181559f9c21ac213b784588c3fbc6c4982416e5b5332e15da30c4cbb5fca5d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 23 05:30:13 np0005593232 systemd[1]: Started libpod-conmon-181559f9c21ac213b784588c3fbc6c4982416e5b5332e15da30c4cbb5fca5d81.scope.
Jan 23 05:30:13 np0005593232 podman[368492]: 2026-01-23 10:30:13.063483065 +0000 UTC m=+0.026673950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:30:13 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:30:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c20963c2dbcf39083fe4a87bc277f544b8f9f3edb6ca003818d6969768ef45/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:30:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c20963c2dbcf39083fe4a87bc277f544b8f9f3edb6ca003818d6969768ef45/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:30:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c20963c2dbcf39083fe4a87bc277f544b8f9f3edb6ca003818d6969768ef45/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:30:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72c20963c2dbcf39083fe4a87bc277f544b8f9f3edb6ca003818d6969768ef45/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:30:13 np0005593232 podman[368492]: 2026-01-23 10:30:13.202121385 +0000 UTC m=+0.165312260 container init 181559f9c21ac213b784588c3fbc6c4982416e5b5332e15da30c4cbb5fca5d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:30:13 np0005593232 podman[368492]: 2026-01-23 10:30:13.209058442 +0000 UTC m=+0.172249297 container start 181559f9c21ac213b784588c3fbc6c4982416e5b5332e15da30c4cbb5fca5d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 05:30:13 np0005593232 podman[368492]: 2026-01-23 10:30:13.212413178 +0000 UTC m=+0.175604053 container attach 181559f9c21ac213b784588c3fbc6c4982416e5b5332e15da30c4cbb5fca5d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:30:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:30:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:30:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:13.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:30:13 np0005593232 nova_compute[250269]: 2026-01-23 10:30:13.924 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:30:13 np0005593232 nova_compute[250269]: 2026-01-23 10:30:13.926 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:30:13 np0005593232 nova_compute[250269]: 2026-01-23 10:30:13.926 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:30:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3041: 321 pgs: 321 active+clean; 270 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 2.7 MiB/s wr, 183 op/s
Jan 23 05:30:13 np0005593232 nova_compute[250269]: 2026-01-23 10:30:13.994 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:30:13 np0005593232 NetworkManager[49057]: <info>  [1769164213.9948] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/323)
Jan 23 05:30:13 np0005593232 NetworkManager[49057]: <info>  [1769164213.9959] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/324)
Jan 23 05:30:14 np0005593232 trusting_dijkstra[368508]: {
Jan 23 05:30:14 np0005593232 trusting_dijkstra[368508]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:30:14 np0005593232 trusting_dijkstra[368508]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:30:14 np0005593232 trusting_dijkstra[368508]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:30:14 np0005593232 trusting_dijkstra[368508]:        "osd_id": 0,
Jan 23 05:30:14 np0005593232 trusting_dijkstra[368508]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:30:14 np0005593232 trusting_dijkstra[368508]:        "type": "bluestore"
Jan 23 05:30:14 np0005593232 trusting_dijkstra[368508]:    }
Jan 23 05:30:14 np0005593232 trusting_dijkstra[368508]: }
Jan 23 05:30:14 np0005593232 systemd[1]: libpod-181559f9c21ac213b784588c3fbc6c4982416e5b5332e15da30c4cbb5fca5d81.scope: Deactivated successfully.
Jan 23 05:30:14 np0005593232 podman[368530]: 2026-01-23 10:30:14.128851436 +0000 UTC m=+0.023714525 container died 181559f9c21ac213b784588c3fbc6c4982416e5b5332e15da30c4cbb5fca5d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_dijkstra, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:30:14 np0005593232 systemd[1]: var-lib-containers-storage-overlay-72c20963c2dbcf39083fe4a87bc277f544b8f9f3edb6ca003818d6969768ef45-merged.mount: Deactivated successfully.
Jan 23 05:30:14 np0005593232 podman[368530]: 2026-01-23 10:30:14.183105998 +0000 UTC m=+0.077969087 container remove 181559f9c21ac213b784588c3fbc6c4982416e5b5332e15da30c4cbb5fca5d81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_dijkstra, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:30:14 np0005593232 systemd[1]: libpod-conmon-181559f9c21ac213b784588c3fbc6c4982416e5b5332e15da30c4cbb5fca5d81.scope: Deactivated successfully.
Jan 23 05:30:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:14.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:14 np0005593232 nova_compute[250269]: 2026-01-23 10:30:14.206 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:30:14 np0005593232 ovn_controller[151001]: 2026-01-23T10:30:14Z|00696|binding|INFO|Releasing lport 0f4f3525-34df-42ca-96c3-3c7e0c388556 from this chassis (sb_readonly=0)
Jan 23 05:30:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:30:14 np0005593232 nova_compute[250269]: 2026-01-23 10:30:14.236 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:30:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:30:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:30:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:30:14 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 6abf7749-5d2b-4057-9920-8d5a5ce45e48 does not exist
Jan 23 05:30:14 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 659b6e21-33b3-45b6-897f-4221fce886a8 does not exist
Jan 23 05:30:14 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 6a232bbc-0528-4a50-875b-a57819941416 does not exist
Jan 23 05:30:14 np0005593232 nova_compute[250269]: 2026-01-23 10:30:14.270 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-713eba08-716b-48ed-866e-e231d09ebfaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:30:14 np0005593232 nova_compute[250269]: 2026-01-23 10:30:14.271 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-713eba08-716b-48ed-866e-e231d09ebfaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:30:14 np0005593232 nova_compute[250269]: 2026-01-23 10:30:14.271 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:30:14 np0005593232 nova_compute[250269]: 2026-01-23 10:30:14.271 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 713eba08-716b-48ed-866e-e231d09ebfaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:30:14 np0005593232 nova_compute[250269]: 2026-01-23 10:30:14.295 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:30:14 np0005593232 nova_compute[250269]: 2026-01-23 10:30:14.929 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:30:15 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:30:15 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:30:15 np0005593232 nova_compute[250269]: 2026-01-23 10:30:15.525 250273 DEBUG nova.compute.manager [req-4fb9defa-2a3c-4722-a3bb-869a3b2c4c32 req-9e47dbf3-e3cf-4fdb-ae73-0820f2e25b70 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Received event network-changed-dc21586e-25cd-4cb5-923b-a4766c5ef9cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:30:15 np0005593232 nova_compute[250269]: 2026-01-23 10:30:15.525 250273 DEBUG nova.compute.manager [req-4fb9defa-2a3c-4722-a3bb-869a3b2c4c32 req-9e47dbf3-e3cf-4fdb-ae73-0820f2e25b70 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Refreshing instance network info cache due to event network-changed-dc21586e-25cd-4cb5-923b-a4766c5ef9cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:30:15 np0005593232 nova_compute[250269]: 2026-01-23 10:30:15.526 250273 DEBUG oslo_concurrency.lockutils [req-4fb9defa-2a3c-4722-a3bb-869a3b2c4c32 req-9e47dbf3-e3cf-4fdb-ae73-0820f2e25b70 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-713eba08-716b-48ed-866e-e231d09ebfaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:30:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:30:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:15.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:30:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3042: 321 pgs: 321 active+clean; 284 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 161 op/s
Jan 23 05:30:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:30:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:16.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:30:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:17.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3043: 321 pgs: 321 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 147 op/s
Jan 23 05:30:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:18.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:30:19 np0005593232 nova_compute[250269]: 2026-01-23 10:30:19.298 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:30:19 np0005593232 nova_compute[250269]: 2026-01-23 10:30:19.441 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Updating instance_info_cache with network_info: [{"id": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "address": "fa:16:3e:28:d3:85", "network": {"id": "bd95237d-0845-479e-9505-318e01879565", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-2097735183-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7be5cb5abaf44b0a9c0c307d348d8f75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc21586e-25", "ovs_interfaceid": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:30:19 np0005593232 nova_compute[250269]: 2026-01-23 10:30:19.718 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-713eba08-716b-48ed-866e-e231d09ebfaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:30:19 np0005593232 nova_compute[250269]: 2026-01-23 10:30:19.718 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:30:19 np0005593232 nova_compute[250269]: 2026-01-23 10:30:19.718 250273 DEBUG oslo_concurrency.lockutils [req-4fb9defa-2a3c-4722-a3bb-869a3b2c4c32 req-9e47dbf3-e3cf-4fdb-ae73-0820f2e25b70 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-713eba08-716b-48ed-866e-e231d09ebfaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:30:19 np0005593232 nova_compute[250269]: 2026-01-23 10:30:19.719 250273 DEBUG nova.network.neutron [req-4fb9defa-2a3c-4722-a3bb-869a3b2c4c32 req-9e47dbf3-e3cf-4fdb-ae73-0820f2e25b70 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Refreshing network info cache for port dc21586e-25cd-4cb5-923b-a4766c5ef9cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:30:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:19.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:19 np0005593232 nova_compute[250269]: 2026-01-23 10:30:19.932 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:30:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3044: 321 pgs: 321 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Jan 23 05:30:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:20.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:20 np0005593232 nova_compute[250269]: 2026-01-23 10:30:20.655 250273 DEBUG nova.compute.manager [None req-ff12c03d-5ef0-44c0-8069-12020921ac07 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:30:20 np0005593232 nova_compute[250269]: 2026-01-23 10:30:20.869 250273 INFO nova.compute.manager [None req-ff12c03d-5ef0-44c0-8069-12020921ac07 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] instance snapshotting#033[00m
Jan 23 05:30:21 np0005593232 nova_compute[250269]: 2026-01-23 10:30:21.356 250273 INFO nova.virt.libvirt.driver [None req-ff12c03d-5ef0-44c0-8069-12020921ac07 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Beginning live snapshot process#033[00m
Jan 23 05:30:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:21.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3045: 321 pgs: 321 active+clean; 293 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 769 KiB/s rd, 2.2 MiB/s wr, 89 op/s
Jan 23 05:30:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:22.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:22 np0005593232 podman[368651]: 2026-01-23 10:30:22.520952356 +0000 UTC m=+0.160993817 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 05:30:22 np0005593232 nova_compute[250269]: 2026-01-23 10:30:22.569 250273 DEBUG nova.virt.libvirt.imagebackend [None req-ff12c03d-5ef0-44c0-8069-12020921ac07 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] No parent info for 84c0ef19-7f67-4bd3-95d8-507c3e0942ed; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 23 05:30:22 np0005593232 nova_compute[250269]: 2026-01-23 10:30:22.857 250273 DEBUG nova.storage.rbd_utils [None req-ff12c03d-5ef0-44c0-8069-12020921ac07 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] creating snapshot(ed2592dc56034e3f850d0de9443cacfe) on rbd image(713eba08-716b-48ed-866e-e231d09ebfaf_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 05:30:23 np0005593232 nova_compute[250269]: 2026-01-23 10:30:23.082 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:30:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:30:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:23.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3046: 321 pgs: 321 active+clean; 308 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.0 MiB/s rd, 3.1 MiB/s wr, 137 op/s
Jan 23 05:30:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e332 do_prune osdmap full prune enabled
Jan 23 05:30:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:24.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e333 e333: 3 total, 3 up, 3 in
Jan 23 05:30:24 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e333: 3 total, 3 up, 3 in
Jan 23 05:30:24 np0005593232 nova_compute[250269]: 2026-01-23 10:30:24.301 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:30:24 np0005593232 nova_compute[250269]: 2026-01-23 10:30:24.548 250273 DEBUG nova.storage.rbd_utils [None req-ff12c03d-5ef0-44c0-8069-12020921ac07 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] cloning vms/713eba08-716b-48ed-866e-e231d09ebfaf_disk@ed2592dc56034e3f850d0de9443cacfe to images/81a92860-f94f-4274-aba5-1ec35fd1f681 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 23 05:30:24 np0005593232 nova_compute[250269]: 2026-01-23 10:30:24.728 250273 DEBUG nova.storage.rbd_utils [None req-ff12c03d-5ef0-44c0-8069-12020921ac07 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] flattening images/81a92860-f94f-4274-aba5-1ec35fd1f681 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 23 05:30:24 np0005593232 nova_compute[250269]: 2026-01-23 10:30:24.934 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:30:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:25.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3048: 321 pgs: 321 active+clean; 325 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.6 MiB/s wr, 134 op/s
Jan 23 05:30:26 np0005593232 nova_compute[250269]: 2026-01-23 10:30:26.117 250273 DEBUG nova.storage.rbd_utils [None req-ff12c03d-5ef0-44c0-8069-12020921ac07 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] removing snapshot(ed2592dc56034e3f850d0de9443cacfe) on rbd image(713eba08-716b-48ed-866e-e231d09ebfaf_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 23 05:30:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:26.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e333 do_prune osdmap full prune enabled
Jan 23 05:30:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e334 e334: 3 total, 3 up, 3 in
Jan 23 05:30:26 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e334: 3 total, 3 up, 3 in
Jan 23 05:30:26 np0005593232 nova_compute[250269]: 2026-01-23 10:30:26.405 250273 DEBUG nova.storage.rbd_utils [None req-ff12c03d-5ef0-44c0-8069-12020921ac07 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] creating snapshot(snap) on rbd image(81a92860-f94f-4274-aba5-1ec35fd1f681) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 05:30:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e334 do_prune osdmap full prune enabled
Jan 23 05:30:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e335 e335: 3 total, 3 up, 3 in
Jan 23 05:30:27 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e335: 3 total, 3 up, 3 in
Jan 23 05:30:27 np0005593232 podman[368821]: 2026-01-23 10:30:27.431169671 +0000 UTC m=+0.087172289 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 05:30:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:27.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:27 np0005593232 nova_compute[250269]: 2026-01-23 10:30:27.860 250273 DEBUG nova.network.neutron [req-4fb9defa-2a3c-4722-a3bb-869a3b2c4c32 req-9e47dbf3-e3cf-4fdb-ae73-0820f2e25b70 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Updated VIF entry in instance network info cache for port dc21586e-25cd-4cb5-923b-a4766c5ef9cc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:30:27 np0005593232 nova_compute[250269]: 2026-01-23 10:30:27.861 250273 DEBUG nova.network.neutron [req-4fb9defa-2a3c-4722-a3bb-869a3b2c4c32 req-9e47dbf3-e3cf-4fdb-ae73-0820f2e25b70 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Updating instance_info_cache with network_info: [{"id": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "address": "fa:16:3e:28:d3:85", "network": {"id": "bd95237d-0845-479e-9505-318e01879565", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-2097735183-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7be5cb5abaf44b0a9c0c307d348d8f75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc21586e-25", "ovs_interfaceid": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:30:27 np0005593232 nova_compute[250269]: 2026-01-23 10:30:27.891 250273 DEBUG oslo_concurrency.lockutils [req-4fb9defa-2a3c-4722-a3bb-869a3b2c4c32 req-9e47dbf3-e3cf-4fdb-ae73-0820f2e25b70 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-713eba08-716b-48ed-866e-e231d09ebfaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:30:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3051: 321 pgs: 321 active+clean; 387 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 9.2 MiB/s rd, 11 MiB/s wr, 406 op/s
Jan 23 05:30:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:28.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:28 np0005593232 nova_compute[250269]: 2026-01-23 10:30:28.337 250273 DEBUG oslo_concurrency.lockutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "7fed3293-7f29-4792-952a-17b0d8962482" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:30:28 np0005593232 nova_compute[250269]: 2026-01-23 10:30:28.337 250273 DEBUG oslo_concurrency.lockutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "7fed3293-7f29-4792-952a-17b0d8962482" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:30:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:30:28 np0005593232 nova_compute[250269]: 2026-01-23 10:30:28.706 250273 DEBUG nova.compute.manager [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:30:28 np0005593232 nova_compute[250269]: 2026-01-23 10:30:28.871 250273 DEBUG oslo_concurrency.lockutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:30:28 np0005593232 nova_compute[250269]: 2026-01-23 10:30:28.871 250273 DEBUG oslo_concurrency.lockutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:30:28 np0005593232 nova_compute[250269]: 2026-01-23 10:30:28.882 250273 DEBUG nova.virt.hardware [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:30:28 np0005593232 nova_compute[250269]: 2026-01-23 10:30:28.883 250273 INFO nova.compute.claims [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:30:29 np0005593232 nova_compute[250269]: 2026-01-23 10:30:29.023 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:30:29 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:29.024 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:30:29 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:29.026 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:30:29 np0005593232 nova_compute[250269]: 2026-01-23 10:30:29.053 250273 DEBUG oslo_concurrency.processutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:30:29 np0005593232 nova_compute[250269]: 2026-01-23 10:30:29.302 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:30:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:30:29 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3206991178' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:30:29 np0005593232 nova_compute[250269]: 2026-01-23 10:30:29.492 250273 DEBUG oslo_concurrency.processutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:30:29 np0005593232 nova_compute[250269]: 2026-01-23 10:30:29.499 250273 DEBUG nova.compute.provider_tree [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:30:29 np0005593232 nova_compute[250269]: 2026-01-23 10:30:29.526 250273 DEBUG nova.scheduler.client.report [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:30:29 np0005593232 nova_compute[250269]: 2026-01-23 10:30:29.559 250273 DEBUG oslo_concurrency.lockutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:30:29 np0005593232 nova_compute[250269]: 2026-01-23 10:30:29.560 250273 DEBUG nova.compute.manager [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:30:29 np0005593232 nova_compute[250269]: 2026-01-23 10:30:29.633 250273 DEBUG nova.compute.manager [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:30:29 np0005593232 nova_compute[250269]: 2026-01-23 10:30:29.634 250273 DEBUG nova.network.neutron [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:30:29 np0005593232 nova_compute[250269]: 2026-01-23 10:30:29.659 250273 INFO nova.virt.libvirt.driver [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:30:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:29.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:29 np0005593232 nova_compute[250269]: 2026-01-23 10:30:29.929 250273 INFO nova.virt.libvirt.driver [None req-ff12c03d-5ef0-44c0-8069-12020921ac07 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Snapshot image upload complete#033[00m
Jan 23 05:30:29 np0005593232 nova_compute[250269]: 2026-01-23 10:30:29.930 250273 INFO nova.compute.manager [None req-ff12c03d-5ef0-44c0-8069-12020921ac07 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Took 9.06 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 23 05:30:29 np0005593232 nova_compute[250269]: 2026-01-23 10:30:29.936 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:30:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3052: 321 pgs: 321 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 12 MiB/s rd, 10 MiB/s wr, 320 op/s
Jan 23 05:30:30 np0005593232 nova_compute[250269]: 2026-01-23 10:30:30.021 250273 DEBUG nova.compute.manager [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:30:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000056s ======
Jan 23 05:30:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:30.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Jan 23 05:30:31 np0005593232 nova_compute[250269]: 2026-01-23 10:30:31.125 250273 DEBUG nova.policy [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a3cd8c3758e14f9c8e4ad1a9a94a9995', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b27af793a8cc42259216fbeaa302ba03', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:30:31 np0005593232 nova_compute[250269]: 2026-01-23 10:30:31.727 250273 DEBUG nova.compute.manager [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 23 05:30:31 np0005593232 nova_compute[250269]: 2026-01-23 10:30:31.728 250273 DEBUG nova.virt.libvirt.driver [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 23 05:30:31 np0005593232 nova_compute[250269]: 2026-01-23 10:30:31.729 250273 INFO nova.virt.libvirt.driver [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Creating image(s)
Jan 23 05:30:31 np0005593232 nova_compute[250269]: 2026-01-23 10:30:31.756 250273 DEBUG nova.storage.rbd_utils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] rbd image 7fed3293-7f29-4792-952a-17b0d8962482_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:30:31 np0005593232 nova_compute[250269]: 2026-01-23 10:30:31.783 250273 DEBUG nova.storage.rbd_utils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] rbd image 7fed3293-7f29-4792-952a-17b0d8962482_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:30:31 np0005593232 nova_compute[250269]: 2026-01-23 10:30:31.810 250273 DEBUG nova.storage.rbd_utils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] rbd image 7fed3293-7f29-4792-952a-17b0d8962482_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:30:31 np0005593232 nova_compute[250269]: 2026-01-23 10:30:31.815 250273 DEBUG oslo_concurrency.processutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:30:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:30:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:31.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:30:31 np0005593232 nova_compute[250269]: 2026-01-23 10:30:31.886 250273 DEBUG oslo_concurrency.processutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:30:31 np0005593232 nova_compute[250269]: 2026-01-23 10:30:31.887 250273 DEBUG oslo_concurrency.lockutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:30:31 np0005593232 nova_compute[250269]: 2026-01-23 10:30:31.888 250273 DEBUG oslo_concurrency.lockutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:30:31 np0005593232 nova_compute[250269]: 2026-01-23 10:30:31.888 250273 DEBUG oslo_concurrency.lockutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:30:31 np0005593232 nova_compute[250269]: 2026-01-23 10:30:31.920 250273 DEBUG nova.storage.rbd_utils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] rbd image 7fed3293-7f29-4792-952a-17b0d8962482_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:30:31 np0005593232 nova_compute[250269]: 2026-01-23 10:30:31.924 250273 DEBUG oslo_concurrency.processutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 7fed3293-7f29-4792-952a-17b0d8962482_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:30:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3053: 321 pgs: 321 active+clean; 405 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 9.2 MiB/s rd, 8.0 MiB/s wr, 249 op/s
Jan 23 05:30:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:32.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:32 np0005593232 nova_compute[250269]: 2026-01-23 10:30:32.253 250273 DEBUG oslo_concurrency.processutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 7fed3293-7f29-4792-952a-17b0d8962482_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:30:32 np0005593232 nova_compute[250269]: 2026-01-23 10:30:32.325 250273 DEBUG nova.storage.rbd_utils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] resizing rbd image 7fed3293-7f29-4792-952a-17b0d8962482_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 23 05:30:32 np0005593232 nova_compute[250269]: 2026-01-23 10:30:32.461 250273 DEBUG nova.objects.instance [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lazy-loading 'migration_context' on Instance uuid 7fed3293-7f29-4792-952a-17b0d8962482 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 05:30:32 np0005593232 nova_compute[250269]: 2026-01-23 10:30:32.508 250273 DEBUG nova.virt.libvirt.driver [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 23 05:30:32 np0005593232 nova_compute[250269]: 2026-01-23 10:30:32.509 250273 DEBUG nova.virt.libvirt.driver [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Ensure instance console log exists: /var/lib/nova/instances/7fed3293-7f29-4792-952a-17b0d8962482/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 23 05:30:32 np0005593232 nova_compute[250269]: 2026-01-23 10:30:32.510 250273 DEBUG oslo_concurrency.lockutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:30:32 np0005593232 nova_compute[250269]: 2026-01-23 10:30:32.510 250273 DEBUG oslo_concurrency.lockutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:30:32 np0005593232 nova_compute[250269]: 2026-01-23 10:30:32.511 250273 DEBUG oslo_concurrency.lockutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:30:33 np0005593232 nova_compute[250269]: 2026-01-23 10:30:33.418 250273 DEBUG nova.network.neutron [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Successfully created port: 8019b64f-0aa7-4f8c-9650-9de06caea07e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 23 05:30:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:30:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e335 do_prune osdmap full prune enabled
Jan 23 05:30:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e336 e336: 3 total, 3 up, 3 in
Jan 23 05:30:33 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e336: 3 total, 3 up, 3 in
Jan 23 05:30:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:30:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:33.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:30:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3055: 321 pgs: 321 active+clean; 429 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 7.8 MiB/s wr, 226 op/s
Jan 23 05:30:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:30:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:34.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:30:34 np0005593232 nova_compute[250269]: 2026-01-23 10:30:34.306 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:30:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e336 do_prune osdmap full prune enabled
Jan 23 05:30:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e337 e337: 3 total, 3 up, 3 in
Jan 23 05:30:34 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e337: 3 total, 3 up, 3 in
Jan 23 05:30:34 np0005593232 nova_compute[250269]: 2026-01-23 10:30:34.939 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:30:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e337 do_prune osdmap full prune enabled
Jan 23 05:30:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e338 e338: 3 total, 3 up, 3 in
Jan 23 05:30:35 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e338: 3 total, 3 up, 3 in
Jan 23 05:30:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:30:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:35.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:30:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3058: 321 pgs: 321 active+clean; 475 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.8 MiB/s rd, 5.0 MiB/s wr, 130 op/s
Jan 23 05:30:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:30:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:36.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:30:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:30:37
Jan 23 05:30:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:30:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:30:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'images', '.mgr', 'default.rgw.log']
Jan 23 05:30:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:30:37 np0005593232 nova_compute[250269]: 2026-01-23 10:30:37.438 250273 DEBUG nova.network.neutron [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Successfully updated port: 8019b64f-0aa7-4f8c-9650-9de06caea07e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 23 05:30:37 np0005593232 nova_compute[250269]: 2026-01-23 10:30:37.484 250273 DEBUG oslo_concurrency.lockutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "refresh_cache-7fed3293-7f29-4792-952a-17b0d8962482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 05:30:37 np0005593232 nova_compute[250269]: 2026-01-23 10:30:37.485 250273 DEBUG oslo_concurrency.lockutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquired lock "refresh_cache-7fed3293-7f29-4792-952a-17b0d8962482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 05:30:37 np0005593232 nova_compute[250269]: 2026-01-23 10:30:37.485 250273 DEBUG nova.network.neutron [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 05:30:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:30:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:30:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:30:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:30:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:30:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:30:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:30:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:37.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:30:37 np0005593232 nova_compute[250269]: 2026-01-23 10:30:37.908 250273 DEBUG nova.network.neutron [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 23 05:30:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3059: 321 pgs: 321 active+clean; 511 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.0 MiB/s rd, 9.9 MiB/s wr, 248 op/s
Jan 23 05:30:38 np0005593232 nova_compute[250269]: 2026-01-23 10:30:38.025 250273 DEBUG nova.compute.manager [req-cd543699-de90-4eef-8ab1-8bb31a82f68a req-adde271b-b58f-49d9-b0f1-b1f78ed255b9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Received event network-changed-8019b64f-0aa7-4f8c-9650-9de06caea07e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:30:38 np0005593232 nova_compute[250269]: 2026-01-23 10:30:38.025 250273 DEBUG nova.compute.manager [req-cd543699-de90-4eef-8ab1-8bb31a82f68a req-adde271b-b58f-49d9-b0f1-b1f78ed255b9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Refreshing instance network info cache due to event network-changed-8019b64f-0aa7-4f8c-9650-9de06caea07e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 23 05:30:38 np0005593232 nova_compute[250269]: 2026-01-23 10:30:38.026 250273 DEBUG oslo_concurrency.lockutils [req-cd543699-de90-4eef-8ab1-8bb31a82f68a req-adde271b-b58f-49d9-b0f1-b1f78ed255b9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-7fed3293-7f29-4792-952a-17b0d8962482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 05:30:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 05:30:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:38.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 05:30:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:30:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:30:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:30:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:30:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:30:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:30:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:30:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:30:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:30:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:30:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:30:39 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:39.028 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 05:30:39 np0005593232 nova_compute[250269]: 2026-01-23 10:30:39.309 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:30:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:39.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:39 np0005593232 nova_compute[250269]: 2026-01-23 10:30:39.941 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:30:39 np0005593232 nova_compute[250269]: 2026-01-23 10:30:39.948 250273 DEBUG nova.network.neutron [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Updating instance_info_cache with network_info: [{"id": "8019b64f-0aa7-4f8c-9650-9de06caea07e", "address": "fa:16:3e:6e:f7:02", "network": {"id": "4d8920ab-3f59-40bb-a223-b071277b1888", "bridge": "br-int", "label": "tempest-network-smoke--1695810502", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8019b64f-0a", "ovs_interfaceid": "8019b64f-0aa7-4f8c-9650-9de06caea07e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 05:30:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3060: 321 pgs: 321 active+clean; 524 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.9 MiB/s rd, 8.6 MiB/s wr, 238 op/s
Jan 23 05:30:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:40.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:40 np0005593232 nova_compute[250269]: 2026-01-23 10:30:40.410 250273 DEBUG oslo_concurrency.lockutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Releasing lock "refresh_cache-7fed3293-7f29-4792-952a-17b0d8962482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 05:30:40 np0005593232 nova_compute[250269]: 2026-01-23 10:30:40.411 250273 DEBUG nova.compute.manager [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Instance network_info: |[{"id": "8019b64f-0aa7-4f8c-9650-9de06caea07e", "address": "fa:16:3e:6e:f7:02", "network": {"id": "4d8920ab-3f59-40bb-a223-b071277b1888", "bridge": "br-int", "label": "tempest-network-smoke--1695810502", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8019b64f-0a", "ovs_interfaceid": "8019b64f-0aa7-4f8c-9650-9de06caea07e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 23 05:30:40 np0005593232 nova_compute[250269]: 2026-01-23 10:30:40.411 250273 DEBUG oslo_concurrency.lockutils [req-cd543699-de90-4eef-8ab1-8bb31a82f68a req-adde271b-b58f-49d9-b0f1-b1f78ed255b9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-7fed3293-7f29-4792-952a-17b0d8962482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 05:30:40 np0005593232 nova_compute[250269]: 2026-01-23 10:30:40.411 250273 DEBUG nova.network.neutron [req-cd543699-de90-4eef-8ab1-8bb31a82f68a req-adde271b-b58f-49d9-b0f1-b1f78ed255b9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Refreshing network info cache for port 8019b64f-0aa7-4f8c-9650-9de06caea07e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 23 05:30:40 np0005593232 nova_compute[250269]: 2026-01-23 10:30:40.415 250273 DEBUG nova.virt.libvirt.driver [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Start _get_guest_xml network_info=[{"id": "8019b64f-0aa7-4f8c-9650-9de06caea07e", "address": "fa:16:3e:6e:f7:02", "network": {"id": "4d8920ab-3f59-40bb-a223-b071277b1888", "bridge": "br-int", "label": "tempest-network-smoke--1695810502", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8019b64f-0a", "ovs_interfaceid": "8019b64f-0aa7-4f8c-9650-9de06caea07e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 23 05:30:40 np0005593232 nova_compute[250269]: 2026-01-23 10:30:40.422 250273 WARNING nova.virt.libvirt.driver [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 23 05:30:40 np0005593232 nova_compute[250269]: 2026-01-23 10:30:40.430 250273 DEBUG nova.virt.libvirt.host [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 23 05:30:40 np0005593232 nova_compute[250269]: 2026-01-23 10:30:40.431 250273 DEBUG nova.virt.libvirt.host [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 23 05:30:40 np0005593232 nova_compute[250269]: 2026-01-23 10:30:40.438 250273 DEBUG nova.virt.libvirt.host [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 23 05:30:40 np0005593232 nova_compute[250269]: 2026-01-23 10:30:40.438 250273 DEBUG nova.virt.libvirt.host [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 23 05:30:40 np0005593232 nova_compute[250269]: 2026-01-23 10:30:40.439 250273 DEBUG nova.virt.libvirt.driver [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 23 05:30:40 np0005593232 nova_compute[250269]: 2026-01-23 10:30:40.440 250273 DEBUG nova.virt.hardware [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 23 05:30:40 np0005593232 nova_compute[250269]: 2026-01-23 10:30:40.440 250273 DEBUG nova.virt.hardware [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 23 05:30:40 np0005593232 nova_compute[250269]: 2026-01-23 10:30:40.440 250273 DEBUG nova.virt.hardware [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 23 05:30:40 np0005593232 nova_compute[250269]: 2026-01-23 10:30:40.441 250273 DEBUG nova.virt.hardware [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 23 05:30:40 np0005593232 nova_compute[250269]: 2026-01-23 10:30:40.441 250273 DEBUG nova.virt.hardware [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 23 05:30:40 np0005593232 nova_compute[250269]: 2026-01-23 10:30:40.441 250273 DEBUG nova.virt.hardware [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 23 05:30:40 np0005593232 nova_compute[250269]: 2026-01-23 10:30:40.441 250273 DEBUG nova.virt.hardware [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 23 05:30:40 np0005593232 nova_compute[250269]: 2026-01-23 10:30:40.442 250273 DEBUG nova.virt.hardware [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 23 05:30:40 np0005593232 nova_compute[250269]: 2026-01-23 10:30:40.442 250273 DEBUG nova.virt.hardware [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 23 05:30:40 np0005593232 nova_compute[250269]: 2026-01-23 10:30:40.442 250273 DEBUG nova.virt.hardware [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:30:40 np0005593232 nova_compute[250269]: 2026-01-23 10:30:40.442 250273 DEBUG nova.virt.hardware [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:30:40 np0005593232 nova_compute[250269]: 2026-01-23 10:30:40.445 250273 DEBUG oslo_concurrency.processutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:30:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:30:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4199071441' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.093 250273 DEBUG oslo_concurrency.processutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.123 250273 DEBUG nova.storage.rbd_utils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] rbd image 7fed3293-7f29-4792-952a-17b0d8962482_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.128 250273 DEBUG oslo_concurrency.processutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:30:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:30:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2334174961' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.602 250273 DEBUG oslo_concurrency.processutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.604 250273 DEBUG nova.virt.libvirt.vif [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:30:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-622349977-access_point-839314558',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-622349977-access_point-839314558',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-622349977-acc',id=176,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKupE4epLmgYLAdAj+FLKAkKAmBXmdOwgX+oMoeS46mz1daV80ym+/nNG6TQn7iL9TDYCuW4Gc2E4iMSeMZBjYm+yTMAaXHo2qMMkDwzwxd8ZHn30a3jeIEr/ZWv2szkRw==',key_name='tempest-TestSecurityGroupsBasicOps-55099826',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b27af793a8cc42259216fbeaa302ba03',ramdisk_id='',reservation_id='r-sgbpa098',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-622349977',owner_user_name='tempest-TestSecurityGroupsBasicOps-622349977-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:30:31Z,user_data=None,user_id='a3cd8c3758e14f9c8e4ad1a9a94a9995',uuid=7fed3293-7f29-4792-952a-17b0d8962482,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8019b64f-0aa7-4f8c-9650-9de06caea07e", "address": "fa:16:3e:6e:f7:02", "network": {"id": "4d8920ab-3f59-40bb-a223-b071277b1888", "bridge": "br-int", "label": "tempest-network-smoke--1695810502", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8019b64f-0a", "ovs_interfaceid": "8019b64f-0aa7-4f8c-9650-9de06caea07e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.604 250273 DEBUG nova.network.os_vif_util [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Converting VIF {"id": "8019b64f-0aa7-4f8c-9650-9de06caea07e", "address": "fa:16:3e:6e:f7:02", "network": {"id": "4d8920ab-3f59-40bb-a223-b071277b1888", "bridge": "br-int", "label": "tempest-network-smoke--1695810502", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8019b64f-0a", "ovs_interfaceid": "8019b64f-0aa7-4f8c-9650-9de06caea07e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.605 250273 DEBUG nova.network.os_vif_util [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:f7:02,bridge_name='br-int',has_traffic_filtering=True,id=8019b64f-0aa7-4f8c-9650-9de06caea07e,network=Network(4d8920ab-3f59-40bb-a223-b071277b1888),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8019b64f-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.606 250273 DEBUG nova.objects.instance [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7fed3293-7f29-4792-952a-17b0d8962482 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.627 250273 DEBUG nova.virt.libvirt.driver [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:30:41 np0005593232 nova_compute[250269]:  <uuid>7fed3293-7f29-4792-952a-17b0d8962482</uuid>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:  <name>instance-000000b0</name>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-622349977-access_point-839314558</nova:name>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:30:40</nova:creationTime>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:30:41 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:        <nova:user uuid="a3cd8c3758e14f9c8e4ad1a9a94a9995">tempest-TestSecurityGroupsBasicOps-622349977-project-member</nova:user>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:        <nova:project uuid="b27af793a8cc42259216fbeaa302ba03">tempest-TestSecurityGroupsBasicOps-622349977</nova:project>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:        <nova:port uuid="8019b64f-0aa7-4f8c-9650-9de06caea07e">
Jan 23 05:30:41 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <entry name="serial">7fed3293-7f29-4792-952a-17b0d8962482</entry>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <entry name="uuid">7fed3293-7f29-4792-952a-17b0d8962482</entry>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/7fed3293-7f29-4792-952a-17b0d8962482_disk">
Jan 23 05:30:41 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:30:41 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/7fed3293-7f29-4792-952a-17b0d8962482_disk.config">
Jan 23 05:30:41 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:30:41 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:6e:f7:02"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <target dev="tap8019b64f-0a"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/7fed3293-7f29-4792-952a-17b0d8962482/console.log" append="off"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:30:41 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:30:41 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:30:41 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:30:41 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.628 250273 DEBUG nova.compute.manager [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Preparing to wait for external event network-vif-plugged-8019b64f-0aa7-4f8c-9650-9de06caea07e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.629 250273 DEBUG oslo_concurrency.lockutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "7fed3293-7f29-4792-952a-17b0d8962482-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.629 250273 DEBUG oslo_concurrency.lockutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "7fed3293-7f29-4792-952a-17b0d8962482-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.629 250273 DEBUG oslo_concurrency.lockutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "7fed3293-7f29-4792-952a-17b0d8962482-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.630 250273 DEBUG nova.virt.libvirt.vif [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:30:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-622349977-access_point-839314558',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-622349977-access_point-839314558',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-622349977-acc',id=176,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKupE4epLmgYLAdAj+FLKAkKAmBXmdOwgX+oMoeS46mz1daV80ym+/nNG6TQn7iL9TDYCuW4Gc2E4iMSeMZBjYm+yTMAaXHo2qMMkDwzwxd8ZHn30a3jeIEr/ZWv2szkRw==',key_name='tempest-TestSecurityGroupsBasicOps-55099826',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b27af793a8cc42259216fbeaa302ba03',ramdisk_id='',reservation_id='r-sgbpa098',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-622349977',owner_user_name='tempest-TestSecurityGroupsBasicOps-622349977-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:30:31Z,user_data=None,user_id='a3cd8c3758e14f9c8e4ad1a9a94a9995',uuid=7fed3293-7f29-4792-952a-17b0d8962482,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8019b64f-0aa7-4f8c-9650-9de06caea07e", "address": "fa:16:3e:6e:f7:02", "network": {"id": "4d8920ab-3f59-40bb-a223-b071277b1888", "bridge": "br-int", "label": "tempest-network-smoke--1695810502", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8019b64f-0a", "ovs_interfaceid": "8019b64f-0aa7-4f8c-9650-9de06caea07e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.630 250273 DEBUG nova.network.os_vif_util [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Converting VIF {"id": "8019b64f-0aa7-4f8c-9650-9de06caea07e", "address": "fa:16:3e:6e:f7:02", "network": {"id": "4d8920ab-3f59-40bb-a223-b071277b1888", "bridge": "br-int", "label": "tempest-network-smoke--1695810502", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8019b64f-0a", "ovs_interfaceid": "8019b64f-0aa7-4f8c-9650-9de06caea07e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.631 250273 DEBUG nova.network.os_vif_util [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:f7:02,bridge_name='br-int',has_traffic_filtering=True,id=8019b64f-0aa7-4f8c-9650-9de06caea07e,network=Network(4d8920ab-3f59-40bb-a223-b071277b1888),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8019b64f-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.631 250273 DEBUG os_vif [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:f7:02,bridge_name='br-int',has_traffic_filtering=True,id=8019b64f-0aa7-4f8c-9650-9de06caea07e,network=Network(4d8920ab-3f59-40bb-a223-b071277b1888),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8019b64f-0a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.632 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.633 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.633 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.637 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.638 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8019b64f-0a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.638 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8019b64f-0a, col_values=(('external_ids', {'iface-id': '8019b64f-0aa7-4f8c-9650-9de06caea07e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:f7:02', 'vm-uuid': '7fed3293-7f29-4792-952a-17b0d8962482'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.640 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:30:41 np0005593232 NetworkManager[49057]: <info>  [1769164241.6409] manager: (tap8019b64f-0a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/325)
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.644 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.650 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:30:41 np0005593232 nova_compute[250269]: 2026-01-23 10:30:41.652 250273 INFO os_vif [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:f7:02,bridge_name='br-int',has_traffic_filtering=True,id=8019b64f-0aa7-4f8c-9650-9de06caea07e,network=Network(4d8920ab-3f59-40bb-a223-b071277b1888),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8019b64f-0a')
Jan 23 05:30:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:30:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:41.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:30:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3061: 321 pgs: 321 active+clean; 524 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 7.0 MiB/s wr, 193 op/s
Jan 23 05:30:42 np0005593232 nova_compute[250269]: 2026-01-23 10:30:42.018 250273 DEBUG nova.virt.libvirt.driver [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 05:30:42 np0005593232 nova_compute[250269]: 2026-01-23 10:30:42.018 250273 DEBUG nova.virt.libvirt.driver [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 05:30:42 np0005593232 nova_compute[250269]: 2026-01-23 10:30:42.019 250273 DEBUG nova.virt.libvirt.driver [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] No VIF found with MAC fa:16:3e:6e:f7:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 23 05:30:42 np0005593232 nova_compute[250269]: 2026-01-23 10:30:42.019 250273 INFO nova.virt.libvirt.driver [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Using config drive
Jan 23 05:30:42 np0005593232 nova_compute[250269]: 2026-01-23 10:30:42.048 250273 DEBUG nova.storage.rbd_utils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] rbd image 7fed3293-7f29-4792-952a-17b0d8962482_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:30:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:42.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:42.646 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:30:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:42.647 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:30:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:42.648 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:30:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:30:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e338 do_prune osdmap full prune enabled
Jan 23 05:30:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:30:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:43.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:30:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3062: 321 pgs: 321 active+clean; 530 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.0 MiB/s wr, 164 op/s
Jan 23 05:30:44 np0005593232 nova_compute[250269]: 2026-01-23 10:30:44.202 250273 DEBUG nova.network.neutron [req-cd543699-de90-4eef-8ab1-8bb31a82f68a req-adde271b-b58f-49d9-b0f1-b1f78ed255b9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Updated VIF entry in instance network info cache for port 8019b64f-0aa7-4f8c-9650-9de06caea07e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 05:30:44 np0005593232 nova_compute[250269]: 2026-01-23 10:30:44.202 250273 DEBUG nova.network.neutron [req-cd543699-de90-4eef-8ab1-8bb31a82f68a req-adde271b-b58f-49d9-b0f1-b1f78ed255b9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Updating instance_info_cache with network_info: [{"id": "8019b64f-0aa7-4f8c-9650-9de06caea07e", "address": "fa:16:3e:6e:f7:02", "network": {"id": "4d8920ab-3f59-40bb-a223-b071277b1888", "bridge": "br-int", "label": "tempest-network-smoke--1695810502", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8019b64f-0a", "ovs_interfaceid": "8019b64f-0aa7-4f8c-9650-9de06caea07e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 05:30:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:44.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:44 np0005593232 nova_compute[250269]: 2026-01-23 10:30:44.233 250273 DEBUG oslo_concurrency.lockutils [req-cd543699-de90-4eef-8ab1-8bb31a82f68a req-adde271b-b58f-49d9-b0f1-b1f78ed255b9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-7fed3293-7f29-4792-952a-17b0d8962482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 05:30:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e339 e339: 3 total, 3 up, 3 in
Jan 23 05:30:44 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e339: 3 total, 3 up, 3 in
Jan 23 05:30:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:30:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4192949609' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:30:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:30:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4192949609' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:30:44 np0005593232 nova_compute[250269]: 2026-01-23 10:30:44.617 250273 INFO nova.virt.libvirt.driver [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Creating config drive at /var/lib/nova/instances/7fed3293-7f29-4792-952a-17b0d8962482/disk.config
Jan 23 05:30:44 np0005593232 nova_compute[250269]: 2026-01-23 10:30:44.622 250273 DEBUG oslo_concurrency.processutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7fed3293-7f29-4792-952a-17b0d8962482/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgbab0sic execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:30:44 np0005593232 nova_compute[250269]: 2026-01-23 10:30:44.762 250273 DEBUG oslo_concurrency.processutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7fed3293-7f29-4792-952a-17b0d8962482/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgbab0sic" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:30:44 np0005593232 nova_compute[250269]: 2026-01-23 10:30:44.794 250273 DEBUG nova.storage.rbd_utils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] rbd image 7fed3293-7f29-4792-952a-17b0d8962482_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 23 05:30:44 np0005593232 nova_compute[250269]: 2026-01-23 10:30:44.798 250273 DEBUG oslo_concurrency.processutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7fed3293-7f29-4792-952a-17b0d8962482/disk.config 7fed3293-7f29-4792-952a-17b0d8962482_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:30:44 np0005593232 nova_compute[250269]: 2026-01-23 10:30:44.943 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:30:44 np0005593232 nova_compute[250269]: 2026-01-23 10:30:44.953 250273 DEBUG oslo_concurrency.processutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7fed3293-7f29-4792-952a-17b0d8962482/disk.config 7fed3293-7f29-4792-952a-17b0d8962482_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:30:44 np0005593232 nova_compute[250269]: 2026-01-23 10:30:44.953 250273 INFO nova.virt.libvirt.driver [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Deleting local config drive /var/lib/nova/instances/7fed3293-7f29-4792-952a-17b0d8962482/disk.config because it was imported into RBD.
Jan 23 05:30:45 np0005593232 kernel: tap8019b64f-0a: entered promiscuous mode
Jan 23 05:30:45 np0005593232 NetworkManager[49057]: <info>  [1769164245.0164] manager: (tap8019b64f-0a): new Tun device (/org/freedesktop/NetworkManager/Devices/326)
Jan 23 05:30:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:30:45Z|00697|binding|INFO|Claiming lport 8019b64f-0aa7-4f8c-9650-9de06caea07e for this chassis.
Jan 23 05:30:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:30:45Z|00698|binding|INFO|8019b64f-0aa7-4f8c-9650-9de06caea07e: Claiming fa:16:3e:6e:f7:02 10.100.0.8
Jan 23 05:30:45 np0005593232 nova_compute[250269]: 2026-01-23 10:30:45.021 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:30:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:30:45Z|00699|binding|INFO|Setting lport 8019b64f-0aa7-4f8c-9650-9de06caea07e ovn-installed in OVS
Jan 23 05:30:45 np0005593232 nova_compute[250269]: 2026-01-23 10:30:45.044 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:30:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:30:45Z|00700|binding|INFO|Setting lport 8019b64f-0aa7-4f8c-9650-9de06caea07e up in Southbound
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.046 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:f7:02 10.100.0.8'], port_security=['fa:16:3e:6e:f7:02 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '7fed3293-7f29-4792-952a-17b0d8962482', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d8920ab-3f59-40bb-a223-b071277b1888', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b27af793a8cc42259216fbeaa302ba03', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1feca004-4cb8-48c6-97b5-f4e0f9b48a29 571af405-339d-428c-b637-699c80b29e30', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90aca808-149b-4d83-8945-235dd296217f, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=8019b64f-0aa7-4f8c-9650-9de06caea07e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.048 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 8019b64f-0aa7-4f8c-9650-9de06caea07e in datapath 4d8920ab-3f59-40bb-a223-b071277b1888 bound to our chassis
Jan 23 05:30:45 np0005593232 nova_compute[250269]: 2026-01-23 10:30:45.048 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.049 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d8920ab-3f59-40bb-a223-b071277b1888
Jan 23 05:30:45 np0005593232 systemd-udevd[369221]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:30:45 np0005593232 systemd-machined[215836]: New machine qemu-79-instance-000000b0.
Jan 23 05:30:45 np0005593232 NetworkManager[49057]: <info>  [1769164245.0689] device (tap8019b64f-0a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:30:45 np0005593232 NetworkManager[49057]: <info>  [1769164245.0698] device (tap8019b64f-0a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.070 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d54eb6b4-e9a3-4b96-9cf7-5cff0f59b169]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.072 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4d8920ab-31 in ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.073 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4d8920ab-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.073 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[48b9c7b7-1632-42c7-a6db-586e05f82ae0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.074 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b74115-eb9e-42d2-8ee6-2f65c99d5378]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:30:45 np0005593232 systemd[1]: Started Virtual Machine qemu-79-instance-000000b0.
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.090 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[82fb4cd0-8eef-489c-934c-ab7b2e906b9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.106 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e343baf3-6c5b-4f55-a637-5284d73ed198]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.152 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[dd225649-f05d-4f3c-b73f-fcaa5d141557]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:30:45 np0005593232 NetworkManager[49057]: <info>  [1769164245.1629] manager: (tap4d8920ab-30): new Veth device (/org/freedesktop/NetworkManager/Devices/327)
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.161 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[476d60cd-a8dc-42df-9325-3a2640703bf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.212 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc8104c-0a9b-42b6-9923-d50732d44fe2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.218 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[e93ca3aa-8222-46cd-b8fc-8b59820ba962]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:30:45 np0005593232 NetworkManager[49057]: <info>  [1769164245.2562] device (tap4d8920ab-30): carrier: link connected
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.266 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[388ca439-51de-4866-9d45-cd03749b9e4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.295 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8ec299cd-0479-4ce2-b039-acb04820c0cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d8920ab-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:f8:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 209], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 815839, 'reachable_time': 31288, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369255, 'error': None, 'target': 'ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.317 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[35ed05e9-fcc6-4ba9-9e81-7ea791024da3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:f8d1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 815839, 'tstamp': 815839}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369256, 'error': None, 'target': 'ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.337 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3a3e4c27-0d8b-4610-8572-5d57f5428408]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d8920ab-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:f8:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 209], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 815839, 'reachable_time': 31288, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 369257, 'error': None, 'target': 'ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.385 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f90ec53f-2485-4999-b521-1ed2792a28b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.452 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3ea9b290-8ac4-4f01-9105-2c8bcace1fcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.453 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d8920ab-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.453 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.454 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d8920ab-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:30:45 np0005593232 nova_compute[250269]: 2026-01-23 10:30:45.456 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:30:45 np0005593232 NetworkManager[49057]: <info>  [1769164245.4573] manager: (tap4d8920ab-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/328)
Jan 23 05:30:45 np0005593232 kernel: tap4d8920ab-30: entered promiscuous mode
Jan 23 05:30:45 np0005593232 nova_compute[250269]: 2026-01-23 10:30:45.459 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.460 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d8920ab-30, col_values=(('external_ids', {'iface-id': '3e4779d9-d53f-4b05-82f8-912a2ec8334d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:30:45 np0005593232 nova_compute[250269]: 2026-01-23 10:30:45.461 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:30:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:30:45Z|00701|binding|INFO|Releasing lport 3e4779d9-d53f-4b05-82f8-912a2ec8334d from this chassis (sb_readonly=0)
Jan 23 05:30:45 np0005593232 nova_compute[250269]: 2026-01-23 10:30:45.475 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.478 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4d8920ab-3f59-40bb-a223-b071277b1888.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4d8920ab-3f59-40bb-a223-b071277b1888.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.481 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2d99d8df-261e-41b3-b3af-62c03ceb7caf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.482 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-4d8920ab-3f59-40bb-a223-b071277b1888
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/4d8920ab-3f59-40bb-a223-b071277b1888.pid.haproxy
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 4d8920ab-3f59-40bb-a223-b071277b1888
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:30:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:30:45.483 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888', 'env', 'PROCESS_TAG=haproxy-4d8920ab-3f59-40bb-a223-b071277b1888', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4d8920ab-3f59-40bb-a223-b071277b1888.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:30:45 np0005593232 nova_compute[250269]: 2026-01-23 10:30:45.592 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164245.5919926, 7fed3293-7f29-4792-952a-17b0d8962482 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:30:45 np0005593232 nova_compute[250269]: 2026-01-23 10:30:45.593 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] VM Started (Lifecycle Event)#033[00m
Jan 23 05:30:45 np0005593232 nova_compute[250269]: 2026-01-23 10:30:45.622 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:30:45 np0005593232 nova_compute[250269]: 2026-01-23 10:30:45.627 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164245.5936449, 7fed3293-7f29-4792-952a-17b0d8962482 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:30:45 np0005593232 nova_compute[250269]: 2026-01-23 10:30:45.628 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:30:45 np0005593232 nova_compute[250269]: 2026-01-23 10:30:45.728 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:30:45 np0005593232 nova_compute[250269]: 2026-01-23 10:30:45.734 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:30:45 np0005593232 nova_compute[250269]: 2026-01-23 10:30:45.792 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:30:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:45.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:45 np0005593232 podman[369331]: 2026-01-23 10:30:45.9003004 +0000 UTC m=+0.062991361 container create b7b1a9ce2d44542fb71bc9c6f04a477c8d1087f3cbb35b04d13b925954d6ba34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:30:45 np0005593232 systemd[1]: Started libpod-conmon-b7b1a9ce2d44542fb71bc9c6f04a477c8d1087f3cbb35b04d13b925954d6ba34.scope.
Jan 23 05:30:45 np0005593232 podman[369331]: 2026-01-23 10:30:45.86651269 +0000 UTC m=+0.029203631 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:30:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3064: 321 pgs: 321 active+clean; 530 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.8 MiB/s wr, 161 op/s
Jan 23 05:30:45 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:30:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31bc6c5c71c9dff3d80bc7ed33565de6cb819553c86f7fa2e6f123479112e5ba/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:30:46 np0005593232 podman[369331]: 2026-01-23 10:30:46.011844371 +0000 UTC m=+0.174535382 container init b7b1a9ce2d44542fb71bc9c6f04a477c8d1087f3cbb35b04d13b925954d6ba34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:30:46 np0005593232 podman[369331]: 2026-01-23 10:30:46.02307556 +0000 UTC m=+0.185766511 container start b7b1a9ce2d44542fb71bc9c6f04a477c8d1087f3cbb35b04d13b925954d6ba34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 05:30:46 np0005593232 neutron-haproxy-ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888[369346]: [NOTICE]   (369351) : New worker (369353) forked
Jan 23 05:30:46 np0005593232 neutron-haproxy-ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888[369346]: [NOTICE]   (369351) : Loading success.
Jan 23 05:30:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:46.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:46 np0005593232 nova_compute[250269]: 2026-01-23 10:30:46.465 250273 DEBUG nova.compute.manager [req-fdbcc2e0-42dc-4f1e-8879-c683922a604a req-4359b9ef-5909-4603-ab3b-1a237f761504 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Received event network-vif-plugged-8019b64f-0aa7-4f8c-9650-9de06caea07e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:30:46 np0005593232 nova_compute[250269]: 2026-01-23 10:30:46.466 250273 DEBUG oslo_concurrency.lockutils [req-fdbcc2e0-42dc-4f1e-8879-c683922a604a req-4359b9ef-5909-4603-ab3b-1a237f761504 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "7fed3293-7f29-4792-952a-17b0d8962482-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:30:46 np0005593232 nova_compute[250269]: 2026-01-23 10:30:46.466 250273 DEBUG oslo_concurrency.lockutils [req-fdbcc2e0-42dc-4f1e-8879-c683922a604a req-4359b9ef-5909-4603-ab3b-1a237f761504 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7fed3293-7f29-4792-952a-17b0d8962482-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:30:46 np0005593232 nova_compute[250269]: 2026-01-23 10:30:46.467 250273 DEBUG oslo_concurrency.lockutils [req-fdbcc2e0-42dc-4f1e-8879-c683922a604a req-4359b9ef-5909-4603-ab3b-1a237f761504 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7fed3293-7f29-4792-952a-17b0d8962482-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:30:46 np0005593232 nova_compute[250269]: 2026-01-23 10:30:46.467 250273 DEBUG nova.compute.manager [req-fdbcc2e0-42dc-4f1e-8879-c683922a604a req-4359b9ef-5909-4603-ab3b-1a237f761504 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Processing event network-vif-plugged-8019b64f-0aa7-4f8c-9650-9de06caea07e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:30:46 np0005593232 nova_compute[250269]: 2026-01-23 10:30:46.467 250273 DEBUG nova.compute.manager [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:30:46 np0005593232 nova_compute[250269]: 2026-01-23 10:30:46.474 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164246.4739254, 7fed3293-7f29-4792-952a-17b0d8962482 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:30:46 np0005593232 nova_compute[250269]: 2026-01-23 10:30:46.474 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:30:46 np0005593232 nova_compute[250269]: 2026-01-23 10:30:46.477 250273 DEBUG nova.virt.libvirt.driver [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:30:46 np0005593232 nova_compute[250269]: 2026-01-23 10:30:46.487 250273 INFO nova.virt.libvirt.driver [-] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Instance spawned successfully.#033[00m
Jan 23 05:30:46 np0005593232 nova_compute[250269]: 2026-01-23 10:30:46.488 250273 DEBUG nova.virt.libvirt.driver [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:30:46 np0005593232 nova_compute[250269]: 2026-01-23 10:30:46.516 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:30:46 np0005593232 nova_compute[250269]: 2026-01-23 10:30:46.523 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:30:46 np0005593232 nova_compute[250269]: 2026-01-23 10:30:46.529 250273 DEBUG nova.virt.libvirt.driver [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:30:46 np0005593232 nova_compute[250269]: 2026-01-23 10:30:46.529 250273 DEBUG nova.virt.libvirt.driver [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:30:46 np0005593232 nova_compute[250269]: 2026-01-23 10:30:46.530 250273 DEBUG nova.virt.libvirt.driver [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:30:46 np0005593232 nova_compute[250269]: 2026-01-23 10:30:46.531 250273 DEBUG nova.virt.libvirt.driver [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:30:46 np0005593232 nova_compute[250269]: 2026-01-23 10:30:46.531 250273 DEBUG nova.virt.libvirt.driver [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:30:46 np0005593232 nova_compute[250269]: 2026-01-23 10:30:46.532 250273 DEBUG nova.virt.libvirt.driver [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:30:46 np0005593232 nova_compute[250269]: 2026-01-23 10:30:46.642 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:30:46 np0005593232 nova_compute[250269]: 2026-01-23 10:30:46.972 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:30:47 np0005593232 nova_compute[250269]: 2026-01-23 10:30:47.257 250273 INFO nova.compute.manager [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Took 15.53 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:30:47 np0005593232 nova_compute[250269]: 2026-01-23 10:30:47.259 250273 DEBUG nova.compute.manager [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:30:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:30:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:30:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:30:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:30:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.007492977037967616 of space, bias 1.0, pg target 2.247893111390285 quantized to 32 (current 32)
Jan 23 05:30:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:30:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6482179415596502 quantized to 32 (current 32)
Jan 23 05:30:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:30:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:30:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:30:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.005057841289782245 of space, bias 1.0, pg target 1.507236704355109 quantized to 32 (current 32)
Jan 23 05:30:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:30:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 32)
Jan 23 05:30:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:30:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:30:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:30:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Jan 23 05:30:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:30:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 23 05:30:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:30:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:30:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:30:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 23 05:30:47 np0005593232 nova_compute[250269]: 2026-01-23 10:30:47.468 250273 INFO nova.compute.manager [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Took 18.63 seconds to build instance.#033[00m
Jan 23 05:30:47 np0005593232 nova_compute[250269]: 2026-01-23 10:30:47.519 250273 DEBUG oslo_concurrency.lockutils [None req-7e7ba8ab-af4f-44da-9cae-6ead132e70a5 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "7fed3293-7f29-4792-952a-17b0d8962482" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:30:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:47.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3065: 321 pgs: 321 active+clean; 552 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 480 KiB/s rd, 1.8 MiB/s wr, 127 op/s
Jan 23 05:30:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:30:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:48.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:30:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:30:48 np0005593232 nova_compute[250269]: 2026-01-23 10:30:48.600 250273 DEBUG nova.compute.manager [req-15da54f9-2797-4081-9c7b-3ce1a634b12f req-b62b2037-472b-4a55-8c2f-e7f2b37558bf 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Received event network-vif-plugged-8019b64f-0aa7-4f8c-9650-9de06caea07e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:30:48 np0005593232 nova_compute[250269]: 2026-01-23 10:30:48.602 250273 DEBUG oslo_concurrency.lockutils [req-15da54f9-2797-4081-9c7b-3ce1a634b12f req-b62b2037-472b-4a55-8c2f-e7f2b37558bf 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "7fed3293-7f29-4792-952a-17b0d8962482-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:30:48 np0005593232 nova_compute[250269]: 2026-01-23 10:30:48.603 250273 DEBUG oslo_concurrency.lockutils [req-15da54f9-2797-4081-9c7b-3ce1a634b12f req-b62b2037-472b-4a55-8c2f-e7f2b37558bf 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7fed3293-7f29-4792-952a-17b0d8962482-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:30:48 np0005593232 nova_compute[250269]: 2026-01-23 10:30:48.603 250273 DEBUG oslo_concurrency.lockutils [req-15da54f9-2797-4081-9c7b-3ce1a634b12f req-b62b2037-472b-4a55-8c2f-e7f2b37558bf 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7fed3293-7f29-4792-952a-17b0d8962482-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:30:48 np0005593232 nova_compute[250269]: 2026-01-23 10:30:48.604 250273 DEBUG nova.compute.manager [req-15da54f9-2797-4081-9c7b-3ce1a634b12f req-b62b2037-472b-4a55-8c2f-e7f2b37558bf 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] No waiting events found dispatching network-vif-plugged-8019b64f-0aa7-4f8c-9650-9de06caea07e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:30:48 np0005593232 nova_compute[250269]: 2026-01-23 10:30:48.604 250273 WARNING nova.compute.manager [req-15da54f9-2797-4081-9c7b-3ce1a634b12f req-b62b2037-472b-4a55-8c2f-e7f2b37558bf 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Received unexpected event network-vif-plugged-8019b64f-0aa7-4f8c-9650-9de06caea07e for instance with vm_state active and task_state None.#033[00m
Jan 23 05:30:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:49.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:49 np0005593232 nova_compute[250269]: 2026-01-23 10:30:49.946 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:30:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3066: 321 pgs: 321 active+clean; 576 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 131 op/s
Jan 23 05:30:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:50.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:51 np0005593232 nova_compute[250269]: 2026-01-23 10:30:51.646 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:30:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:51.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3067: 321 pgs: 321 active+clean; 576 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.2 MiB/s wr, 131 op/s
Jan 23 05:30:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:52.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:53 np0005593232 nova_compute[250269]: 2026-01-23 10:30:53.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:30:53 np0005593232 nova_compute[250269]: 2026-01-23 10:30:53.398 250273 DEBUG nova.compute.manager [req-1fe302a2-9a2b-493d-861b-a853683559dc req-a480d919-29de-4896-8ff8-736ea2735c88 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Received event network-changed-8019b64f-0aa7-4f8c-9650-9de06caea07e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:30:53 np0005593232 nova_compute[250269]: 2026-01-23 10:30:53.399 250273 DEBUG nova.compute.manager [req-1fe302a2-9a2b-493d-861b-a853683559dc req-a480d919-29de-4896-8ff8-736ea2735c88 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Refreshing instance network info cache due to event network-changed-8019b64f-0aa7-4f8c-9650-9de06caea07e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:30:53 np0005593232 nova_compute[250269]: 2026-01-23 10:30:53.399 250273 DEBUG oslo_concurrency.lockutils [req-1fe302a2-9a2b-493d-861b-a853683559dc req-a480d919-29de-4896-8ff8-736ea2735c88 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-7fed3293-7f29-4792-952a-17b0d8962482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:30:53 np0005593232 nova_compute[250269]: 2026-01-23 10:30:53.400 250273 DEBUG oslo_concurrency.lockutils [req-1fe302a2-9a2b-493d-861b-a853683559dc req-a480d919-29de-4896-8ff8-736ea2735c88 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-7fed3293-7f29-4792-952a-17b0d8962482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:30:53 np0005593232 nova_compute[250269]: 2026-01-23 10:30:53.400 250273 DEBUG nova.network.neutron [req-1fe302a2-9a2b-493d-861b-a853683559dc req-a480d919-29de-4896-8ff8-736ea2735c88 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Refreshing network info cache for port 8019b64f-0aa7-4f8c-9650-9de06caea07e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:30:53 np0005593232 podman[369365]: 2026-01-23 10:30:53.494708668 +0000 UTC m=+0.139567978 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 23 05:30:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:30:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:53.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3068: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 152 op/s
Jan 23 05:30:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:30:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:54.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:30:54 np0005593232 nova_compute[250269]: 2026-01-23 10:30:54.948 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:30:55 np0005593232 nova_compute[250269]: 2026-01-23 10:30:55.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:30:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:30:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:55.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:30:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3069: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 133 op/s
Jan 23 05:30:55 np0005593232 nova_compute[250269]: 2026-01-23 10:30:55.979 250273 DEBUG nova.network.neutron [req-1fe302a2-9a2b-493d-861b-a853683559dc req-a480d919-29de-4896-8ff8-736ea2735c88 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Updated VIF entry in instance network info cache for port 8019b64f-0aa7-4f8c-9650-9de06caea07e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:30:55 np0005593232 nova_compute[250269]: 2026-01-23 10:30:55.981 250273 DEBUG nova.network.neutron [req-1fe302a2-9a2b-493d-861b-a853683559dc req-a480d919-29de-4896-8ff8-736ea2735c88 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Updating instance_info_cache with network_info: [{"id": "8019b64f-0aa7-4f8c-9650-9de06caea07e", "address": "fa:16:3e:6e:f7:02", "network": {"id": "4d8920ab-3f59-40bb-a223-b071277b1888", "bridge": "br-int", "label": "tempest-network-smoke--1695810502", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8019b64f-0a", "ovs_interfaceid": "8019b64f-0aa7-4f8c-9650-9de06caea07e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:30:56 np0005593232 nova_compute[250269]: 2026-01-23 10:30:56.042 250273 DEBUG oslo_concurrency.lockutils [req-1fe302a2-9a2b-493d-861b-a853683559dc req-a480d919-29de-4896-8ff8-736ea2735c88 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-7fed3293-7f29-4792-952a-17b0d8962482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:30:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:30:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:56.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:30:56 np0005593232 nova_compute[250269]: 2026-01-23 10:30:56.648 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:30:57 np0005593232 nova_compute[250269]: 2026-01-23 10:30:57.288 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:30:57 np0005593232 nova_compute[250269]: 2026-01-23 10:30:57.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:30:57 np0005593232 nova_compute[250269]: 2026-01-23 10:30:57.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:30:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:30:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:57.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:30:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3070: 321 pgs: 321 active+clean; 577 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 1.9 MiB/s wr, 214 op/s
Jan 23 05:30:58 np0005593232 podman[369417]: 2026-01-23 10:30:58.220363366 +0000 UTC m=+0.060151780 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true)
Jan 23 05:30:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:30:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:30:58.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:30:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:30:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:30:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/201560293' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:30:59 np0005593232 nova_compute[250269]: 2026-01-23 10:30:59.951 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:30:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3071: 321 pgs: 321 active+clean; 581 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 6.1 MiB/s rd, 1.7 MiB/s wr, 283 op/s
Jan 23 05:30:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:30:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:30:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:30:59.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:31:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:31:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:00.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:31:00 np0005593232 ovn_controller[151001]: 2026-01-23T10:31:00Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6e:f7:02 10.100.0.8
Jan 23 05:31:00 np0005593232 ovn_controller[151001]: 2026-01-23T10:31:00Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6e:f7:02 10.100.0.8
Jan 23 05:31:01 np0005593232 nova_compute[250269]: 2026-01-23 10:31:01.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:31:01 np0005593232 nova_compute[250269]: 2026-01-23 10:31:01.651 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3072: 321 pgs: 321 active+clean; 581 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.3 MiB/s rd, 699 KiB/s wr, 241 op/s
Jan 23 05:31:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:31:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:01.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:31:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:31:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:02.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:31:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:31:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3073: 321 pgs: 321 active+clean; 601 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 6.8 MiB/s rd, 1.7 MiB/s wr, 326 op/s
Jan 23 05:31:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:03.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:04.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:04 np0005593232 nova_compute[250269]: 2026-01-23 10:31:04.953 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:05 np0005593232 nova_compute[250269]: 2026-01-23 10:31:05.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:31:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3074: 321 pgs: 321 active+clean; 610 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.1 MiB/s rd, 2.2 MiB/s wr, 336 op/s
Jan 23 05:31:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:05.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:31:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:06.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:31:06 np0005593232 nova_compute[250269]: 2026-01-23 10:31:06.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:31:06 np0005593232 nova_compute[250269]: 2026-01-23 10:31:06.655 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:31:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:31:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:31:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:31:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:31:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:31:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3075: 321 pgs: 321 active+clean; 610 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.0 MiB/s rd, 2.2 MiB/s wr, 385 op/s
Jan 23 05:31:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:07.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:08.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:08 np0005593232 nova_compute[250269]: 2026-01-23 10:31:08.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:31:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:31:09 np0005593232 nova_compute[250269]: 2026-01-23 10:31:09.001 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:31:09 np0005593232 nova_compute[250269]: 2026-01-23 10:31:09.001 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:31:09 np0005593232 nova_compute[250269]: 2026-01-23 10:31:09.002 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:31:09 np0005593232 nova_compute[250269]: 2026-01-23 10:31:09.002 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:31:09 np0005593232 nova_compute[250269]: 2026-01-23 10:31:09.002 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:31:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:31:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1437851152' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:31:09 np0005593232 nova_compute[250269]: 2026-01-23 10:31:09.507 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:31:09 np0005593232 nova_compute[250269]: 2026-01-23 10:31:09.955 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3076: 321 pgs: 321 active+clean; 614 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.2 MiB/s rd, 2.3 MiB/s wr, 305 op/s
Jan 23 05:31:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:31:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:10.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:31:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:10.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:11 np0005593232 nova_compute[250269]: 2026-01-23 10:31:11.525 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000b0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:31:11 np0005593232 nova_compute[250269]: 2026-01-23 10:31:11.526 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000b0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:31:11 np0005593232 nova_compute[250269]: 2026-01-23 10:31:11.528 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000ad as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:31:11 np0005593232 nova_compute[250269]: 2026-01-23 10:31:11.529 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000ad as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:31:11 np0005593232 nova_compute[250269]: 2026-01-23 10:31:11.657 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:11 np0005593232 nova_compute[250269]: 2026-01-23 10:31:11.756 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:31:11 np0005593232 nova_compute[250269]: 2026-01-23 10:31:11.759 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3741MB free_disk=20.785133361816406GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:31:11 np0005593232 nova_compute[250269]: 2026-01-23 10:31:11.759 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:31:11 np0005593232 nova_compute[250269]: 2026-01-23 10:31:11.759 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:31:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3077: 321 pgs: 321 active+clean; 614 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 1.7 MiB/s wr, 206 op/s
Jan 23 05:31:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:12.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:12 np0005593232 nova_compute[250269]: 2026-01-23 10:31:12.061 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 713eba08-716b-48ed-866e-e231d09ebfaf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:31:12 np0005593232 nova_compute[250269]: 2026-01-23 10:31:12.061 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 7fed3293-7f29-4792-952a-17b0d8962482 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:31:12 np0005593232 nova_compute[250269]: 2026-01-23 10:31:12.064 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:31:12 np0005593232 nova_compute[250269]: 2026-01-23 10:31:12.065 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:31:12 np0005593232 nova_compute[250269]: 2026-01-23 10:31:12.202 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing inventories for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 23 05:31:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:12.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:12 np0005593232 nova_compute[250269]: 2026-01-23 10:31:12.350 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating ProviderTree inventory for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 23 05:31:12 np0005593232 nova_compute[250269]: 2026-01-23 10:31:12.351 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 05:31:12 np0005593232 nova_compute[250269]: 2026-01-23 10:31:12.380 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing aggregate associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 23 05:31:12 np0005593232 nova_compute[250269]: 2026-01-23 10:31:12.429 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing trait associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 23 05:31:12 np0005593232 nova_compute[250269]: 2026-01-23 10:31:12.580 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:31:13 np0005593232 nova_compute[250269]: 2026-01-23 10:31:13.016 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:31:13 np0005593232 nova_compute[250269]: 2026-01-23 10:31:13.026 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:31:13 np0005593232 nova_compute[250269]: 2026-01-23 10:31:13.287 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:31:13 np0005593232 nova_compute[250269]: 2026-01-23 10:31:13.347 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:31:13 np0005593232 nova_compute[250269]: 2026-01-23 10:31:13.348 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:31:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:31:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3078: 321 pgs: 321 active+clean; 644 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.8 MiB/s rd, 3.1 MiB/s wr, 279 op/s
Jan 23 05:31:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:14.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:31:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:14.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:31:14 np0005593232 nova_compute[250269]: 2026-01-23 10:31:14.957 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3079: 321 pgs: 321 active+clean; 657 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.1 MiB/s wr, 229 op/s
Jan 23 05:31:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:16.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:16.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:31:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:31:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:31:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:31:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 05:31:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:31:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 05:31:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:31:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:31:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:31:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:31:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:31:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:31:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:31:16 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 85624f5d-a434-4c03-bc2f-6a4b734a0b47 does not exist
Jan 23 05:31:16 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 75b57d18-9ac4-4188-942b-114b309b145f does not exist
Jan 23 05:31:16 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1e911d9d-579c-4700-997d-cad55af71a0a does not exist
Jan 23 05:31:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:31:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:31:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:31:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:31:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:31:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:31:16 np0005593232 nova_compute[250269]: 2026-01-23 10:31:16.660 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 05:31:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 49K writes, 185K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s#012Cumulative WAL: 49K writes, 18K syncs, 2.68 writes per sync, written: 0.17 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5081 writes, 19K keys, 5081 commit groups, 1.0 writes per commit group, ingest: 23.09 MB, 0.04 MB/s#012Interval WAL: 5082 writes, 1977 syncs, 2.57 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 05:31:17 np0005593232 podman[369906]: 2026-01-23 10:31:17.427583169 +0000 UTC m=+0.067259413 container create 4aa7f434898318c85fb227b18b5dc105c7d24d869c5032e24b49a8706e687ba4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 05:31:17 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:31:17 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:31:17 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:31:17 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:31:17 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:31:17 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:31:17 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:31:17 np0005593232 systemd[1]: Started libpod-conmon-4aa7f434898318c85fb227b18b5dc105c7d24d869c5032e24b49a8706e687ba4.scope.
Jan 23 05:31:17 np0005593232 podman[369906]: 2026-01-23 10:31:17.406132419 +0000 UTC m=+0.045808673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:31:17 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:31:17 np0005593232 podman[369906]: 2026-01-23 10:31:17.545694946 +0000 UTC m=+0.185371230 container init 4aa7f434898318c85fb227b18b5dc105c7d24d869c5032e24b49a8706e687ba4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_sutherland, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True)
Jan 23 05:31:17 np0005593232 podman[369906]: 2026-01-23 10:31:17.562978297 +0000 UTC m=+0.202654541 container start 4aa7f434898318c85fb227b18b5dc105c7d24d869c5032e24b49a8706e687ba4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:31:17 np0005593232 podman[369906]: 2026-01-23 10:31:17.567466805 +0000 UTC m=+0.207143039 container attach 4aa7f434898318c85fb227b18b5dc105c7d24d869c5032e24b49a8706e687ba4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_sutherland, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:31:17 np0005593232 sharp_sutherland[369923]: 167 167
Jan 23 05:31:17 np0005593232 systemd[1]: libpod-4aa7f434898318c85fb227b18b5dc105c7d24d869c5032e24b49a8706e687ba4.scope: Deactivated successfully.
Jan 23 05:31:17 np0005593232 podman[369906]: 2026-01-23 10:31:17.575950986 +0000 UTC m=+0.215627250 container died 4aa7f434898318c85fb227b18b5dc105c7d24d869c5032e24b49a8706e687ba4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 05:31:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7e72d3aded0b05151ceab82d6f102bde242b367c42a9b9c9e51104f83e0a9751-merged.mount: Deactivated successfully.
Jan 23 05:31:17 np0005593232 podman[369906]: 2026-01-23 10:31:17.62359587 +0000 UTC m=+0.263272104 container remove 4aa7f434898318c85fb227b18b5dc105c7d24d869c5032e24b49a8706e687ba4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_sutherland, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:31:17 np0005593232 systemd[1]: libpod-conmon-4aa7f434898318c85fb227b18b5dc105c7d24d869c5032e24b49a8706e687ba4.scope: Deactivated successfully.
Jan 23 05:31:17 np0005593232 podman[369947]: 2026-01-23 10:31:17.868480701 +0000 UTC m=+0.054957034 container create 3b271404d7088d163209591fd71065efd9fa496957698bb6ff848ceb796536c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 05:31:17 np0005593232 systemd[1]: Started libpod-conmon-3b271404d7088d163209591fd71065efd9fa496957698bb6ff848ceb796536c1.scope.
Jan 23 05:31:17 np0005593232 podman[369947]: 2026-01-23 10:31:17.849069859 +0000 UTC m=+0.035546202 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:31:17 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:31:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/907b55e62fcd9a72f9279cc9d75de75cf2ac478c47742523eb79b4c4a06227c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:31:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/907b55e62fcd9a72f9279cc9d75de75cf2ac478c47742523eb79b4c4a06227c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:31:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/907b55e62fcd9a72f9279cc9d75de75cf2ac478c47742523eb79b4c4a06227c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:31:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/907b55e62fcd9a72f9279cc9d75de75cf2ac478c47742523eb79b4c4a06227c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:31:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/907b55e62fcd9a72f9279cc9d75de75cf2ac478c47742523eb79b4c4a06227c1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:31:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3080: 321 pgs: 321 active+clean; 657 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 191 op/s
Jan 23 05:31:17 np0005593232 podman[369947]: 2026-01-23 10:31:17.98948506 +0000 UTC m=+0.175961393 container init 3b271404d7088d163209591fd71065efd9fa496957698bb6ff848ceb796536c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_brown, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:31:18 np0005593232 podman[369947]: 2026-01-23 10:31:18.000556325 +0000 UTC m=+0.187032688 container start 3b271404d7088d163209591fd71065efd9fa496957698bb6ff848ceb796536c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_brown, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:31:18 np0005593232 podman[369947]: 2026-01-23 10:31:18.005725921 +0000 UTC m=+0.192202254 container attach 3b271404d7088d163209591fd71065efd9fa496957698bb6ff848ceb796536c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_brown, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:31:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:18.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:31:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:18.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:31:18 np0005593232 nova_compute[250269]: 2026-01-23 10:31:18.350 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:31:18 np0005593232 nova_compute[250269]: 2026-01-23 10:31:18.351 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:31:18 np0005593232 nova_compute[250269]: 2026-01-23 10:31:18.351 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:31:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:31:18 np0005593232 naughty_brown[369963]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:31:18 np0005593232 naughty_brown[369963]: --> relative data size: 1.0
Jan 23 05:31:18 np0005593232 naughty_brown[369963]: --> All data devices are unavailable
Jan 23 05:31:18 np0005593232 systemd[1]: libpod-3b271404d7088d163209591fd71065efd9fa496957698bb6ff848ceb796536c1.scope: Deactivated successfully.
Jan 23 05:31:18 np0005593232 podman[369947]: 2026-01-23 10:31:18.905153056 +0000 UTC m=+1.091629379 container died 3b271404d7088d163209591fd71065efd9fa496957698bb6ff848ceb796536c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 23 05:31:18 np0005593232 systemd[1]: var-lib-containers-storage-overlay-907b55e62fcd9a72f9279cc9d75de75cf2ac478c47742523eb79b4c4a06227c1-merged.mount: Deactivated successfully.
Jan 23 05:31:18 np0005593232 podman[369947]: 2026-01-23 10:31:18.988377182 +0000 UTC m=+1.174853505 container remove 3b271404d7088d163209591fd71065efd9fa496957698bb6ff848ceb796536c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_brown, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:31:18 np0005593232 systemd[1]: libpod-conmon-3b271404d7088d163209591fd71065efd9fa496957698bb6ff848ceb796536c1.scope: Deactivated successfully.
Jan 23 05:31:19 np0005593232 podman[370183]: 2026-01-23 10:31:19.828985404 +0000 UTC m=+0.053769939 container create 658c9c74eb569e4fdde7ea432370294410fb59a110ff24ac21c05e44d57b557f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_meninsky, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:31:19 np0005593232 systemd[1]: Started libpod-conmon-658c9c74eb569e4fdde7ea432370294410fb59a110ff24ac21c05e44d57b557f.scope.
Jan 23 05:31:19 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:31:19 np0005593232 podman[370183]: 2026-01-23 10:31:19.808242334 +0000 UTC m=+0.033026889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:31:19 np0005593232 podman[370183]: 2026-01-23 10:31:19.931756975 +0000 UTC m=+0.156541580 container init 658c9c74eb569e4fdde7ea432370294410fb59a110ff24ac21c05e44d57b557f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_meninsky, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 05:31:19 np0005593232 nova_compute[250269]: 2026-01-23 10:31:19.932 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-713eba08-716b-48ed-866e-e231d09ebfaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:31:19 np0005593232 nova_compute[250269]: 2026-01-23 10:31:19.934 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-713eba08-716b-48ed-866e-e231d09ebfaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:31:19 np0005593232 nova_compute[250269]: 2026-01-23 10:31:19.935 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:31:19 np0005593232 nova_compute[250269]: 2026-01-23 10:31:19.935 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 713eba08-716b-48ed-866e-e231d09ebfaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:31:19 np0005593232 podman[370183]: 2026-01-23 10:31:19.94177266 +0000 UTC m=+0.166557205 container start 658c9c74eb569e4fdde7ea432370294410fb59a110ff24ac21c05e44d57b557f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 05:31:19 np0005593232 podman[370183]: 2026-01-23 10:31:19.946458743 +0000 UTC m=+0.171243358 container attach 658c9c74eb569e4fdde7ea432370294410fb59a110ff24ac21c05e44d57b557f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_meninsky, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:31:19 np0005593232 interesting_meninsky[370199]: 167 167
Jan 23 05:31:19 np0005593232 systemd[1]: libpod-658c9c74eb569e4fdde7ea432370294410fb59a110ff24ac21c05e44d57b557f.scope: Deactivated successfully.
Jan 23 05:31:19 np0005593232 podman[370183]: 2026-01-23 10:31:19.951531927 +0000 UTC m=+0.176316492 container died 658c9c74eb569e4fdde7ea432370294410fb59a110ff24ac21c05e44d57b557f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_meninsky, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 05:31:19 np0005593232 nova_compute[250269]: 2026-01-23 10:31:19.959 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3081: 321 pgs: 321 active+clean; 660 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 162 op/s
Jan 23 05:31:19 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f4759e595d27a205a8c8c785f12604cad04f6964f650e0eec34ca7fc398ddef5-merged.mount: Deactivated successfully.
Jan 23 05:31:20 np0005593232 podman[370183]: 2026-01-23 10:31:20.003958447 +0000 UTC m=+0.228743022 container remove 658c9c74eb569e4fdde7ea432370294410fb59a110ff24ac21c05e44d57b557f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_meninsky, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 05:31:20 np0005593232 systemd[1]: libpod-conmon-658c9c74eb569e4fdde7ea432370294410fb59a110ff24ac21c05e44d57b557f.scope: Deactivated successfully.
Jan 23 05:31:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:20.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:20 np0005593232 podman[370225]: 2026-01-23 10:31:20.258133262 +0000 UTC m=+0.087408686 container create f790a075ac6309ee7db42691874334bf8c1d03a797a7214c5c97d548f4f878bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:31:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:31:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:20.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:31:20 np0005593232 podman[370225]: 2026-01-23 10:31:20.22465452 +0000 UTC m=+0.053930004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:31:20 np0005593232 systemd[1]: Started libpod-conmon-f790a075ac6309ee7db42691874334bf8c1d03a797a7214c5c97d548f4f878bd.scope.
Jan 23 05:31:20 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:31:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21bc8028eb133ed52b981e271a271964accc4b0db412f54eeb40127536430d4b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:31:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21bc8028eb133ed52b981e271a271964accc4b0db412f54eeb40127536430d4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:31:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21bc8028eb133ed52b981e271a271964accc4b0db412f54eeb40127536430d4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:31:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21bc8028eb133ed52b981e271a271964accc4b0db412f54eeb40127536430d4b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:31:20 np0005593232 podman[370225]: 2026-01-23 10:31:20.37206222 +0000 UTC m=+0.201337644 container init f790a075ac6309ee7db42691874334bf8c1d03a797a7214c5c97d548f4f878bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_poitras, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:31:20 np0005593232 podman[370225]: 2026-01-23 10:31:20.386046138 +0000 UTC m=+0.215321532 container start f790a075ac6309ee7db42691874334bf8c1d03a797a7214c5c97d548f4f878bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 23 05:31:20 np0005593232 podman[370225]: 2026-01-23 10:31:20.389407133 +0000 UTC m=+0.218682557 container attach f790a075ac6309ee7db42691874334bf8c1d03a797a7214c5c97d548f4f878bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_poitras, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:31:20 np0005593232 nova_compute[250269]: 2026-01-23 10:31:20.936 250273 DEBUG oslo_concurrency.lockutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "b467845b-6847-4ed6-8239-9cb93760cfc7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:31:20 np0005593232 nova_compute[250269]: 2026-01-23 10:31:20.938 250273 DEBUG oslo_concurrency.lockutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "b467845b-6847-4ed6-8239-9cb93760cfc7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:31:20 np0005593232 nova_compute[250269]: 2026-01-23 10:31:20.982 250273 DEBUG nova.compute.manager [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:31:21 np0005593232 nova_compute[250269]: 2026-01-23 10:31:21.139 250273 DEBUG oslo_concurrency.lockutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:31:21 np0005593232 nova_compute[250269]: 2026-01-23 10:31:21.140 250273 DEBUG oslo_concurrency.lockutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:31:21 np0005593232 nova_compute[250269]: 2026-01-23 10:31:21.152 250273 DEBUG nova.virt.hardware [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:31:21 np0005593232 nova_compute[250269]: 2026-01-23 10:31:21.152 250273 INFO nova.compute.claims [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]: {
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:    "0": [
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:        {
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:            "devices": [
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:                "/dev/loop3"
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:            ],
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:            "lv_name": "ceph_lv0",
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:            "lv_size": "7511998464",
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:            "name": "ceph_lv0",
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:            "tags": {
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:                "ceph.cluster_name": "ceph",
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:                "ceph.crush_device_class": "",
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:                "ceph.encrypted": "0",
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:                "ceph.osd_id": "0",
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:                "ceph.type": "block",
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:                "ceph.vdo": "0"
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:            },
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:            "type": "block",
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:            "vg_name": "ceph_vg0"
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:        }
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]:    ]
Jan 23 05:31:21 np0005593232 thirsty_poitras[370242]: }
Jan 23 05:31:21 np0005593232 systemd[1]: libpod-f790a075ac6309ee7db42691874334bf8c1d03a797a7214c5c97d548f4f878bd.scope: Deactivated successfully.
Jan 23 05:31:21 np0005593232 podman[370225]: 2026-01-23 10:31:21.308459216 +0000 UTC m=+1.137734700 container died f790a075ac6309ee7db42691874334bf8c1d03a797a7214c5c97d548f4f878bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 05:31:21 np0005593232 systemd[1]: var-lib-containers-storage-overlay-21bc8028eb133ed52b981e271a271964accc4b0db412f54eeb40127536430d4b-merged.mount: Deactivated successfully.
Jan 23 05:31:21 np0005593232 podman[370225]: 2026-01-23 10:31:21.37228832 +0000 UTC m=+1.201563714 container remove f790a075ac6309ee7db42691874334bf8c1d03a797a7214c5c97d548f4f878bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_poitras, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:31:21 np0005593232 systemd[1]: libpod-conmon-f790a075ac6309ee7db42691874334bf8c1d03a797a7214c5c97d548f4f878bd.scope: Deactivated successfully.
Jan 23 05:31:21 np0005593232 nova_compute[250269]: 2026-01-23 10:31:21.663 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:21 np0005593232 nova_compute[250269]: 2026-01-23 10:31:21.865 250273 DEBUG oslo_concurrency.processutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:31:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3082: 321 pgs: 321 active+clean; 660 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.5 MiB/s wr, 155 op/s
Jan 23 05:31:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:31:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:22.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:31:22 np0005593232 podman[370408]: 2026-01-23 10:31:22.060336707 +0000 UTC m=+0.037110266 container create e4a30466767cf97ec1640d1f0a6289796b2acef9337c7639ea6a5cc3a39b6ab0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 05:31:22 np0005593232 systemd[1]: Started libpod-conmon-e4a30466767cf97ec1640d1f0a6289796b2acef9337c7639ea6a5cc3a39b6ab0.scope.
Jan 23 05:31:22 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:31:22 np0005593232 podman[370408]: 2026-01-23 10:31:22.045469815 +0000 UTC m=+0.022243394 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:31:22 np0005593232 podman[370408]: 2026-01-23 10:31:22.143401118 +0000 UTC m=+0.120174697 container init e4a30466767cf97ec1640d1f0a6289796b2acef9337c7639ea6a5cc3a39b6ab0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_liskov, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:31:22 np0005593232 podman[370408]: 2026-01-23 10:31:22.153057093 +0000 UTC m=+0.129830652 container start e4a30466767cf97ec1640d1f0a6289796b2acef9337c7639ea6a5cc3a39b6ab0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:31:22 np0005593232 podman[370408]: 2026-01-23 10:31:22.157142949 +0000 UTC m=+0.133916518 container attach e4a30466767cf97ec1640d1f0a6289796b2acef9337c7639ea6a5cc3a39b6ab0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_liskov, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 05:31:22 np0005593232 gifted_liskov[370442]: 167 167
Jan 23 05:31:22 np0005593232 systemd[1]: libpod-e4a30466767cf97ec1640d1f0a6289796b2acef9337c7639ea6a5cc3a39b6ab0.scope: Deactivated successfully.
Jan 23 05:31:22 np0005593232 podman[370408]: 2026-01-23 10:31:22.162485521 +0000 UTC m=+0.139259110 container died e4a30466767cf97ec1640d1f0a6289796b2acef9337c7639ea6a5cc3a39b6ab0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_liskov, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 05:31:22 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4f2205681f68947e9e680bf20e67b9f2625c6e5c146f74392c2d527bd0ed8b3f-merged.mount: Deactivated successfully.
Jan 23 05:31:22 np0005593232 podman[370408]: 2026-01-23 10:31:22.216773054 +0000 UTC m=+0.193546643 container remove e4a30466767cf97ec1640d1f0a6289796b2acef9337c7639ea6a5cc3a39b6ab0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_liskov, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:31:22 np0005593232 systemd[1]: libpod-conmon-e4a30466767cf97ec1640d1f0a6289796b2acef9337c7639ea6a5cc3a39b6ab0.scope: Deactivated successfully.
Jan 23 05:31:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:22.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:31:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/171907973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:31:22 np0005593232 nova_compute[250269]: 2026-01-23 10:31:22.331 250273 DEBUG oslo_concurrency.processutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:31:22 np0005593232 nova_compute[250269]: 2026-01-23 10:31:22.339 250273 DEBUG nova.compute.provider_tree [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:31:22 np0005593232 podman[370471]: 2026-01-23 10:31:22.446072281 +0000 UTC m=+0.062078195 container create 78f584d87bb6f43dc7631adc71a8c9758ecc5e49d2c53d155c909f94352b4080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_solomon, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 23 05:31:22 np0005593232 systemd[1]: Started libpod-conmon-78f584d87bb6f43dc7631adc71a8c9758ecc5e49d2c53d155c909f94352b4080.scope.
Jan 23 05:31:22 np0005593232 podman[370471]: 2026-01-23 10:31:22.408509483 +0000 UTC m=+0.024515507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:31:22 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:31:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/913da4a9ccc49e49c3c256d88a382da75a838dbe0737f765771ca5b52b71d36f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:31:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/913da4a9ccc49e49c3c256d88a382da75a838dbe0737f765771ca5b52b71d36f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:31:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/913da4a9ccc49e49c3c256d88a382da75a838dbe0737f765771ca5b52b71d36f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:31:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/913da4a9ccc49e49c3c256d88a382da75a838dbe0737f765771ca5b52b71d36f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:31:22 np0005593232 podman[370471]: 2026-01-23 10:31:22.554252726 +0000 UTC m=+0.170258670 container init 78f584d87bb6f43dc7631adc71a8c9758ecc5e49d2c53d155c909f94352b4080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 05:31:22 np0005593232 podman[370471]: 2026-01-23 10:31:22.574412729 +0000 UTC m=+0.190418653 container start 78f584d87bb6f43dc7631adc71a8c9758ecc5e49d2c53d155c909f94352b4080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_solomon, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 23 05:31:22 np0005593232 podman[370471]: 2026-01-23 10:31:22.582589501 +0000 UTC m=+0.198595445 container attach 78f584d87bb6f43dc7631adc71a8c9758ecc5e49d2c53d155c909f94352b4080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_solomon, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:31:22 np0005593232 nova_compute[250269]: 2026-01-23 10:31:22.739 250273 DEBUG nova.scheduler.client.report [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:31:22 np0005593232 nova_compute[250269]: 2026-01-23 10:31:22.801 250273 DEBUG oslo_concurrency.lockutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:31:22 np0005593232 nova_compute[250269]: 2026-01-23 10:31:22.802 250273 DEBUG nova.compute.manager [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:31:23 np0005593232 nova_compute[250269]: 2026-01-23 10:31:23.178 250273 DEBUG nova.compute.manager [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:31:23 np0005593232 nova_compute[250269]: 2026-01-23 10:31:23.178 250273 DEBUG nova.network.neutron [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:31:23 np0005593232 nova_compute[250269]: 2026-01-23 10:31:23.231 250273 INFO nova.virt.libvirt.driver [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:31:23 np0005593232 nova_compute[250269]: 2026-01-23 10:31:23.254 250273 DEBUG nova.compute.manager [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:31:23 np0005593232 elastic_solomon[370488]: {
Jan 23 05:31:23 np0005593232 elastic_solomon[370488]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:31:23 np0005593232 elastic_solomon[370488]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:31:23 np0005593232 elastic_solomon[370488]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:31:23 np0005593232 elastic_solomon[370488]:        "osd_id": 0,
Jan 23 05:31:23 np0005593232 elastic_solomon[370488]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:31:23 np0005593232 elastic_solomon[370488]:        "type": "bluestore"
Jan 23 05:31:23 np0005593232 elastic_solomon[370488]:    }
Jan 23 05:31:23 np0005593232 elastic_solomon[370488]: }
Jan 23 05:31:23 np0005593232 systemd[1]: libpod-78f584d87bb6f43dc7631adc71a8c9758ecc5e49d2c53d155c909f94352b4080.scope: Deactivated successfully.
Jan 23 05:31:23 np0005593232 podman[370471]: 2026-01-23 10:31:23.424542482 +0000 UTC m=+1.040548476 container died 78f584d87bb6f43dc7631adc71a8c9758ecc5e49d2c53d155c909f94352b4080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:31:23 np0005593232 systemd[1]: var-lib-containers-storage-overlay-913da4a9ccc49e49c3c256d88a382da75a838dbe0737f765771ca5b52b71d36f-merged.mount: Deactivated successfully.
Jan 23 05:31:23 np0005593232 podman[370471]: 2026-01-23 10:31:23.512371048 +0000 UTC m=+1.128376982 container remove 78f584d87bb6f43dc7631adc71a8c9758ecc5e49d2c53d155c909f94352b4080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 05:31:23 np0005593232 systemd[1]: libpod-conmon-78f584d87bb6f43dc7631adc71a8c9758ecc5e49d2c53d155c909f94352b4080.scope: Deactivated successfully.
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:31:23.562364) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164283562414, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1230, "num_deletes": 260, "total_data_size": 1884696, "memory_usage": 1919248, "flush_reason": "Manual Compaction"}
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164283586550, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 1842297, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 67470, "largest_seqno": 68699, "table_properties": {"data_size": 1836455, "index_size": 3108, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13115, "raw_average_key_size": 20, "raw_value_size": 1824385, "raw_average_value_size": 2789, "num_data_blocks": 136, "num_entries": 654, "num_filter_entries": 654, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769164189, "oldest_key_time": 1769164189, "file_creation_time": 1769164283, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 24366 microseconds, and 8099 cpu microseconds.
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:31:23.586721) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 1842297 bytes OK
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:31:23.586785) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:31:23.591045) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:31:23.591077) EVENT_LOG_v1 {"time_micros": 1769164283591068, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:31:23.591108) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 1879030, prev total WAL file size 1882765, number of live WAL files 2.
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:31:23.592718) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373632' seq:72057594037927935, type:22 .. '6C6F676D0033303136' seq:0, type:0; will stop at (end)
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(1799KB)], [155(11MB)]
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164283592779, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 13902940, "oldest_snapshot_seqno": -1}
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:31:23 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 8b50ba1b-05dc-4bf0-b1d7-5ed08a6793a7 does not exist
Jan 23 05:31:23 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 6185224c-74ce-42d2-9aa4-9611f8d249d1 does not exist
Jan 23 05:31:23 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 11e32097-7f93-4a0b-9b4d-c9a51e07c0bf does not exist
Jan 23 05:31:23 np0005593232 podman[370523]: 2026-01-23 10:31:23.678985074 +0000 UTC m=+0.112714385 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 9373 keys, 13758607 bytes, temperature: kUnknown
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164283721598, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 13758607, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13696461, "index_size": 37585, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23493, "raw_key_size": 247010, "raw_average_key_size": 26, "raw_value_size": 13530591, "raw_average_value_size": 1443, "num_data_blocks": 1446, "num_entries": 9373, "num_filter_entries": 9373, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769164283, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:31:23.722070) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 13758607 bytes
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:31:23.739150) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 107.8 rd, 106.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 11.5 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(15.0) write-amplify(7.5) OK, records in: 9911, records dropped: 538 output_compression: NoCompression
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:31:23.739190) EVENT_LOG_v1 {"time_micros": 1769164283739173, "job": 96, "event": "compaction_finished", "compaction_time_micros": 128934, "compaction_time_cpu_micros": 32535, "output_level": 6, "num_output_files": 1, "total_output_size": 13758607, "num_input_records": 9911, "num_output_records": 9373, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164283739952, "job": 96, "event": "table_file_deletion", "file_number": 157}
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164283743634, "job": 96, "event": "table_file_deletion", "file_number": 155}
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:31:23.592526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:31:23.743738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:31:23.743746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:31:23.743748) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:31:23.743750) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:31:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:31:23.743752) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:31:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3083: 321 pgs: 321 active+clean; 682 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.0 MiB/s wr, 182 op/s
Jan 23 05:31:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:31:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:24.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:31:24 np0005593232 nova_compute[250269]: 2026-01-23 10:31:24.064 250273 DEBUG nova.compute.manager [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 05:31:24 np0005593232 nova_compute[250269]: 2026-01-23 10:31:24.066 250273 DEBUG nova.virt.libvirt.driver [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:31:24 np0005593232 nova_compute[250269]: 2026-01-23 10:31:24.067 250273 INFO nova.virt.libvirt.driver [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Creating image(s)#033[00m
Jan 23 05:31:24 np0005593232 nova_compute[250269]: 2026-01-23 10:31:24.112 250273 DEBUG nova.storage.rbd_utils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] rbd image b467845b-6847-4ed6-8239-9cb93760cfc7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:31:24 np0005593232 nova_compute[250269]: 2026-01-23 10:31:24.150 250273 DEBUG nova.storage.rbd_utils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] rbd image b467845b-6847-4ed6-8239-9cb93760cfc7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:31:24 np0005593232 nova_compute[250269]: 2026-01-23 10:31:24.181 250273 DEBUG nova.storage.rbd_utils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] rbd image b467845b-6847-4ed6-8239-9cb93760cfc7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:31:24 np0005593232 nova_compute[250269]: 2026-01-23 10:31:24.185 250273 DEBUG oslo_concurrency.processutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:31:24 np0005593232 nova_compute[250269]: 2026-01-23 10:31:24.273 250273 DEBUG oslo_concurrency.processutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:31:24 np0005593232 nova_compute[250269]: 2026-01-23 10:31:24.274 250273 DEBUG oslo_concurrency.lockutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:31:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:24.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:24 np0005593232 nova_compute[250269]: 2026-01-23 10:31:24.274 250273 DEBUG oslo_concurrency.lockutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:31:24 np0005593232 nova_compute[250269]: 2026-01-23 10:31:24.275 250273 DEBUG oslo_concurrency.lockutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:31:24 np0005593232 nova_compute[250269]: 2026-01-23 10:31:24.301 250273 DEBUG nova.storage.rbd_utils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] rbd image b467845b-6847-4ed6-8239-9cb93760cfc7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:31:24 np0005593232 nova_compute[250269]: 2026-01-23 10:31:24.305 250273 DEBUG oslo_concurrency.processutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 b467845b-6847-4ed6-8239-9cb93760cfc7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:31:24 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:31:24 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:31:24 np0005593232 nova_compute[250269]: 2026-01-23 10:31:24.679 250273 DEBUG oslo_concurrency.processutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 b467845b-6847-4ed6-8239-9cb93760cfc7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:31:24 np0005593232 nova_compute[250269]: 2026-01-23 10:31:24.752 250273 DEBUG nova.storage.rbd_utils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] resizing rbd image b467845b-6847-4ed6-8239-9cb93760cfc7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 05:31:24 np0005593232 nova_compute[250269]: 2026-01-23 10:31:24.867 250273 DEBUG nova.objects.instance [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lazy-loading 'migration_context' on Instance uuid b467845b-6847-4ed6-8239-9cb93760cfc7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:31:24 np0005593232 nova_compute[250269]: 2026-01-23 10:31:24.902 250273 DEBUG nova.virt.libvirt.driver [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 05:31:24 np0005593232 nova_compute[250269]: 2026-01-23 10:31:24.903 250273 DEBUG nova.virt.libvirt.driver [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Ensure instance console log exists: /var/lib/nova/instances/b467845b-6847-4ed6-8239-9cb93760cfc7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:31:24 np0005593232 nova_compute[250269]: 2026-01-23 10:31:24.904 250273 DEBUG oslo_concurrency.lockutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:31:24 np0005593232 nova_compute[250269]: 2026-01-23 10:31:24.904 250273 DEBUG oslo_concurrency.lockutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:31:24 np0005593232 nova_compute[250269]: 2026-01-23 10:31:24.904 250273 DEBUG oslo_concurrency.lockutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:31:24 np0005593232 nova_compute[250269]: 2026-01-23 10:31:24.962 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:25 np0005593232 nova_compute[250269]: 2026-01-23 10:31:25.031 250273 DEBUG nova.policy [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a3cd8c3758e14f9c8e4ad1a9a94a9995', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b27af793a8cc42259216fbeaa302ba03', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:31:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3084: 321 pgs: 321 active+clean; 708 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 793 KiB/s rd, 2.9 MiB/s wr, 111 op/s
Jan 23 05:31:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:26.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:26 np0005593232 nova_compute[250269]: 2026-01-23 10:31:26.119 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Updating instance_info_cache with network_info: [{"id": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "address": "fa:16:3e:28:d3:85", "network": {"id": "bd95237d-0845-479e-9505-318e01879565", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-2097735183-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7be5cb5abaf44b0a9c0c307d348d8f75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc21586e-25", "ovs_interfaceid": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:31:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:26.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:26 np0005593232 nova_compute[250269]: 2026-01-23 10:31:26.692 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:26 np0005593232 nova_compute[250269]: 2026-01-23 10:31:26.923 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-713eba08-716b-48ed-866e-e231d09ebfaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:31:26 np0005593232 nova_compute[250269]: 2026-01-23 10:31:26.923 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:31:27 np0005593232 ceph-mgr[74726]: [devicehealth INFO root] Check health
Jan 23 05:31:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3085: 321 pgs: 321 active+clean; 733 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 592 KiB/s rd, 2.5 MiB/s wr, 91 op/s
Jan 23 05:31:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:31:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:28.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:31:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:31:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:28.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:31:28 np0005593232 podman[370768]: 2026-01-23 10:31:28.412495719 +0000 UTC m=+0.062626261 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 23 05:31:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:31:29 np0005593232 nova_compute[250269]: 2026-01-23 10:31:29.965 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3086: 321 pgs: 321 active+clean; 754 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 246 KiB/s rd, 3.6 MiB/s wr, 82 op/s
Jan 23 05:31:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:30.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:30.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:31 np0005593232 nova_compute[250269]: 2026-01-23 10:31:31.694 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3087: 321 pgs: 321 active+clean; 754 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 3.6 MiB/s wr, 56 op/s
Jan 23 05:31:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:32.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:32.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:33 np0005593232 nova_compute[250269]: 2026-01-23 10:31:33.351 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:31:33.354 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:31:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:31:33.356 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:31:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:31:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3088: 321 pgs: 321 active+clean; 754 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 3.6 MiB/s wr, 56 op/s
Jan 23 05:31:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:34.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:34.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:34 np0005593232 nova_compute[250269]: 2026-01-23 10:31:34.968 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:31:35.358 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:31:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3089: 321 pgs: 321 active+clean; 754 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 39 KiB/s rd, 3.1 MiB/s wr, 31 op/s
Jan 23 05:31:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:36.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:31:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:36.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:31:36 np0005593232 nova_compute[250269]: 2026-01-23 10:31:36.697 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:37 np0005593232 nova_compute[250269]: 2026-01-23 10:31:37.328 250273 DEBUG nova.network.neutron [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Successfully created port: 77481d6e-4c79-4c13-b9bc-38c906a20223 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:31:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:31:37
Jan 23 05:31:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:31:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:31:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'backups', '.mgr', '.rgw.root', 'default.rgw.meta', 'vms', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'images']
Jan 23 05:31:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:31:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:31:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:31:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:31:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:31:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:31:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:31:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3090: 321 pgs: 321 active+clean; 754 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 230 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 23 05:31:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:38.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:38.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e339 do_prune osdmap full prune enabled
Jan 23 05:31:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e340 e340: 3 total, 3 up, 3 in
Jan 23 05:31:38 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e340: 3 total, 3 up, 3 in
Jan 23 05:31:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:31:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:31:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:31:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:31:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:31:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:31:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:31:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:31:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:31:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:31:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e340 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:31:38 np0005593232 nova_compute[250269]: 2026-01-23 10:31:38.785 250273 DEBUG nova.network.neutron [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Successfully updated port: 77481d6e-4c79-4c13-b9bc-38c906a20223 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:31:38 np0005593232 nova_compute[250269]: 2026-01-23 10:31:38.809 250273 DEBUG oslo_concurrency.lockutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "refresh_cache-b467845b-6847-4ed6-8239-9cb93760cfc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:31:38 np0005593232 nova_compute[250269]: 2026-01-23 10:31:38.810 250273 DEBUG oslo_concurrency.lockutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquired lock "refresh_cache-b467845b-6847-4ed6-8239-9cb93760cfc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:31:38 np0005593232 nova_compute[250269]: 2026-01-23 10:31:38.810 250273 DEBUG nova.network.neutron [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:31:39 np0005593232 nova_compute[250269]: 2026-01-23 10:31:39.116 250273 DEBUG nova.compute.manager [req-1834eef3-8f07-4e93-9fed-28e6a6e73837 req-64bbe4fe-a956-49ac-bc9d-21f458a94e59 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Received event network-changed-77481d6e-4c79-4c13-b9bc-38c906a20223 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:31:39 np0005593232 nova_compute[250269]: 2026-01-23 10:31:39.117 250273 DEBUG nova.compute.manager [req-1834eef3-8f07-4e93-9fed-28e6a6e73837 req-64bbe4fe-a956-49ac-bc9d-21f458a94e59 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Refreshing instance network info cache due to event network-changed-77481d6e-4c79-4c13-b9bc-38c906a20223. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:31:39 np0005593232 nova_compute[250269]: 2026-01-23 10:31:39.117 250273 DEBUG oslo_concurrency.lockutils [req-1834eef3-8f07-4e93-9fed-28e6a6e73837 req-64bbe4fe-a956-49ac-bc9d-21f458a94e59 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-b467845b-6847-4ed6-8239-9cb93760cfc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:31:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e340 do_prune osdmap full prune enabled
Jan 23 05:31:39 np0005593232 nova_compute[250269]: 2026-01-23 10:31:39.493 250273 DEBUG nova.network.neutron [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:31:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e341 e341: 3 total, 3 up, 3 in
Jan 23 05:31:39 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e341: 3 total, 3 up, 3 in
Jan 23 05:31:39 np0005593232 nova_compute[250269]: 2026-01-23 10:31:39.970 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3093: 321 pgs: 321 active+clean; 786 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.6 MiB/s wr, 23 op/s
Jan 23 05:31:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:31:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:40.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:31:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:40.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e341 do_prune osdmap full prune enabled
Jan 23 05:31:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e342 e342: 3 total, 3 up, 3 in
Jan 23 05:31:40 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e342: 3 total, 3 up, 3 in
Jan 23 05:31:41 np0005593232 nova_compute[250269]: 2026-01-23 10:31:41.739 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3095: 321 pgs: 321 active+clean; 808 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 12 MiB/s wr, 121 op/s
Jan 23 05:31:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:31:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:42.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.095 250273 DEBUG nova.network.neutron [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Updating instance_info_cache with network_info: [{"id": "77481d6e-4c79-4c13-b9bc-38c906a20223", "address": "fa:16:3e:14:ce:b6", "network": {"id": "4d8920ab-3f59-40bb-a223-b071277b1888", "bridge": "br-int", "label": "tempest-network-smoke--1695810502", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77481d6e-4c", "ovs_interfaceid": "77481d6e-4c79-4c13-b9bc-38c906a20223", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:31:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:31:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:42.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.341 250273 DEBUG oslo_concurrency.lockutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Releasing lock "refresh_cache-b467845b-6847-4ed6-8239-9cb93760cfc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.342 250273 DEBUG nova.compute.manager [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Instance network_info: |[{"id": "77481d6e-4c79-4c13-b9bc-38c906a20223", "address": "fa:16:3e:14:ce:b6", "network": {"id": "4d8920ab-3f59-40bb-a223-b071277b1888", "bridge": "br-int", "label": "tempest-network-smoke--1695810502", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77481d6e-4c", "ovs_interfaceid": "77481d6e-4c79-4c13-b9bc-38c906a20223", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.343 250273 DEBUG oslo_concurrency.lockutils [req-1834eef3-8f07-4e93-9fed-28e6a6e73837 req-64bbe4fe-a956-49ac-bc9d-21f458a94e59 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-b467845b-6847-4ed6-8239-9cb93760cfc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.344 250273 DEBUG nova.network.neutron [req-1834eef3-8f07-4e93-9fed-28e6a6e73837 req-64bbe4fe-a956-49ac-bc9d-21f458a94e59 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Refreshing network info cache for port 77481d6e-4c79-4c13-b9bc-38c906a20223 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.349 250273 DEBUG nova.virt.libvirt.driver [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Start _get_guest_xml network_info=[{"id": "77481d6e-4c79-4c13-b9bc-38c906a20223", "address": "fa:16:3e:14:ce:b6", "network": {"id": "4d8920ab-3f59-40bb-a223-b071277b1888", "bridge": "br-int", "label": "tempest-network-smoke--1695810502", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77481d6e-4c", "ovs_interfaceid": "77481d6e-4c79-4c13-b9bc-38c906a20223", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.361 250273 WARNING nova.virt.libvirt.driver [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.453 250273 DEBUG nova.virt.libvirt.host [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.455 250273 DEBUG nova.virt.libvirt.host [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.465 250273 DEBUG nova.virt.libvirt.host [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.466 250273 DEBUG nova.virt.libvirt.host [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.469 250273 DEBUG nova.virt.libvirt.driver [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.469 250273 DEBUG nova.virt.hardware [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.470 250273 DEBUG nova.virt.hardware [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.470 250273 DEBUG nova.virt.hardware [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.471 250273 DEBUG nova.virt.hardware [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.471 250273 DEBUG nova.virt.hardware [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.471 250273 DEBUG nova.virt.hardware [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.472 250273 DEBUG nova.virt.hardware [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.472 250273 DEBUG nova.virt.hardware [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.473 250273 DEBUG nova.virt.hardware [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.473 250273 DEBUG nova.virt.hardware [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.473 250273 DEBUG nova.virt.hardware [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.478 250273 DEBUG oslo_concurrency.processutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:31:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:31:42.647 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:31:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:31:42.648 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:31:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:31:42.649 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:31:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:31:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1614833683' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.955 250273 DEBUG oslo_concurrency.processutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.991 250273 DEBUG nova.storage.rbd_utils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] rbd image b467845b-6847-4ed6-8239-9cb93760cfc7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:31:42 np0005593232 nova_compute[250269]: 2026-01-23 10:31:42.996 250273 DEBUG oslo_concurrency.processutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:31:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:31:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/120019670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.498 250273 DEBUG oslo_concurrency.processutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.500 250273 DEBUG nova.virt.libvirt.vif [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:31:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-622349977-gen-0-803800213',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-622349977-gen-0-803800213',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-622349977-gen',id=180,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKupE4epLmgYLAdAj+FLKAkKAmBXmdOwgX+oMoeS46mz1daV80ym+/nNG6TQn7iL9TDYCuW4Gc2E4iMSeMZBjYm+yTMAaXHo2qMMkDwzwxd8ZHn30a3jeIEr/ZWv2szkRw==',key_name='tempest-TestSecurityGroupsBasicOps-55099826',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b27af793a8cc42259216fbeaa302ba03',ramdisk_id='',reservation_id='r-0zkixqmi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-622349977',owner_user_name='tempest-TestSecurityGroupsBasicOps-622349977-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:31:23Z,user_data=None,user_id='a3cd8c3758e14f9c8e4ad1a9a94a9995',uuid=b467845b-6847-4ed6-8239-9cb93760cfc7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "77481d6e-4c79-4c13-b9bc-38c906a20223", "address": "fa:16:3e:14:ce:b6", "network": {"id": "4d8920ab-3f59-40bb-a223-b071277b1888", "bridge": "br-int", "label": "tempest-network-smoke--1695810502", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77481d6e-4c", "ovs_interfaceid": "77481d6e-4c79-4c13-b9bc-38c906a20223", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.500 250273 DEBUG nova.network.os_vif_util [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Converting VIF {"id": "77481d6e-4c79-4c13-b9bc-38c906a20223", "address": "fa:16:3e:14:ce:b6", "network": {"id": "4d8920ab-3f59-40bb-a223-b071277b1888", "bridge": "br-int", "label": "tempest-network-smoke--1695810502", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77481d6e-4c", "ovs_interfaceid": "77481d6e-4c79-4c13-b9bc-38c906a20223", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.501 250273 DEBUG nova.network.os_vif_util [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:ce:b6,bridge_name='br-int',has_traffic_filtering=True,id=77481d6e-4c79-4c13-b9bc-38c906a20223,network=Network(4d8920ab-3f59-40bb-a223-b071277b1888),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77481d6e-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.502 250273 DEBUG nova.objects.instance [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lazy-loading 'pci_devices' on Instance uuid b467845b-6847-4ed6-8239-9cb93760cfc7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:31:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.571 250273 DEBUG nova.virt.libvirt.driver [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:31:43 np0005593232 nova_compute[250269]:  <uuid>b467845b-6847-4ed6-8239-9cb93760cfc7</uuid>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:  <name>instance-000000b4</name>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-622349977-gen-0-803800213</nova:name>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:31:42</nova:creationTime>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:31:43 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:        <nova:user uuid="a3cd8c3758e14f9c8e4ad1a9a94a9995">tempest-TestSecurityGroupsBasicOps-622349977-project-member</nova:user>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:        <nova:project uuid="b27af793a8cc42259216fbeaa302ba03">tempest-TestSecurityGroupsBasicOps-622349977</nova:project>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:        <nova:port uuid="77481d6e-4c79-4c13-b9bc-38c906a20223">
Jan 23 05:31:43 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <entry name="serial">b467845b-6847-4ed6-8239-9cb93760cfc7</entry>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <entry name="uuid">b467845b-6847-4ed6-8239-9cb93760cfc7</entry>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/b467845b-6847-4ed6-8239-9cb93760cfc7_disk">
Jan 23 05:31:43 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:31:43 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/b467845b-6847-4ed6-8239-9cb93760cfc7_disk.config">
Jan 23 05:31:43 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:31:43 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:14:ce:b6"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <target dev="tap77481d6e-4c"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/b467845b-6847-4ed6-8239-9cb93760cfc7/console.log" append="off"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:31:43 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:31:43 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:31:43 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:31:43 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.573 250273 DEBUG nova.compute.manager [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Preparing to wait for external event network-vif-plugged-77481d6e-4c79-4c13-b9bc-38c906a20223 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.574 250273 DEBUG oslo_concurrency.lockutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "b467845b-6847-4ed6-8239-9cb93760cfc7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.575 250273 DEBUG oslo_concurrency.lockutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "b467845b-6847-4ed6-8239-9cb93760cfc7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.575 250273 DEBUG oslo_concurrency.lockutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "b467845b-6847-4ed6-8239-9cb93760cfc7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.576 250273 DEBUG nova.virt.libvirt.vif [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:31:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-622349977-gen-0-803800213',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-622349977-gen-0-803800213',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-622349977-gen',id=180,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKupE4epLmgYLAdAj+FLKAkKAmBXmdOwgX+oMoeS46mz1daV80ym+/nNG6TQn7iL9TDYCuW4Gc2E4iMSeMZBjYm+yTMAaXHo2qMMkDwzwxd8ZHn30a3jeIEr/ZWv2szkRw==',key_name='tempest-TestSecurityGroupsBasicOps-55099826',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b27af793a8cc42259216fbeaa302ba03',ramdisk_id='',reservation_id='r-0zkixqmi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-622349977',owner_user_name='tempest-TestSecurityGroupsBasicOps-622349977-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:31:23Z,user_data=None,user_id='a3cd8c3758e14f9c8e4ad1a9a94a9995',uuid=b467845b-6847-4ed6-8239-9cb93760cfc7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "77481d6e-4c79-4c13-b9bc-38c906a20223", "address": "fa:16:3e:14:ce:b6", "network": {"id": "4d8920ab-3f59-40bb-a223-b071277b1888", "bridge": "br-int", "label": "tempest-network-smoke--1695810502", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77481d6e-4c", "ovs_interfaceid": "77481d6e-4c79-4c13-b9bc-38c906a20223", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.577 250273 DEBUG nova.network.os_vif_util [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Converting VIF {"id": "77481d6e-4c79-4c13-b9bc-38c906a20223", "address": "fa:16:3e:14:ce:b6", "network": {"id": "4d8920ab-3f59-40bb-a223-b071277b1888", "bridge": "br-int", "label": "tempest-network-smoke--1695810502", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77481d6e-4c", "ovs_interfaceid": "77481d6e-4c79-4c13-b9bc-38c906a20223", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.578 250273 DEBUG nova.network.os_vif_util [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:ce:b6,bridge_name='br-int',has_traffic_filtering=True,id=77481d6e-4c79-4c13-b9bc-38c906a20223,network=Network(4d8920ab-3f59-40bb-a223-b071277b1888),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77481d6e-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.579 250273 DEBUG os_vif [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:ce:b6,bridge_name='br-int',has_traffic_filtering=True,id=77481d6e-4c79-4c13-b9bc-38c906a20223,network=Network(4d8920ab-3f59-40bb-a223-b071277b1888),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77481d6e-4c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.580 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.581 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.582 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.588 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.588 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77481d6e-4c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.589 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap77481d6e-4c, col_values=(('external_ids', {'iface-id': '77481d6e-4c79-4c13-b9bc-38c906a20223', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:14:ce:b6', 'vm-uuid': 'b467845b-6847-4ed6-8239-9cb93760cfc7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.592 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:43 np0005593232 NetworkManager[49057]: <info>  [1769164303.5934] manager: (tap77481d6e-4c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/329)
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.596 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.601 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.602 250273 INFO os_vif [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:ce:b6,bridge_name='br-int',has_traffic_filtering=True,id=77481d6e-4c79-4c13-b9bc-38c906a20223,network=Network(4d8920ab-3f59-40bb-a223-b071277b1888),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77481d6e-4c')#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.679 250273 DEBUG nova.virt.libvirt.driver [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.680 250273 DEBUG nova.virt.libvirt.driver [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.680 250273 DEBUG nova.virt.libvirt.driver [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] No VIF found with MAC fa:16:3e:14:ce:b6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.681 250273 INFO nova.virt.libvirt.driver [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Using config drive#033[00m
Jan 23 05:31:43 np0005593232 nova_compute[250269]: 2026-01-23 10:31:43.714 250273 DEBUG nova.storage.rbd_utils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] rbd image b467845b-6847-4ed6-8239-9cb93760cfc7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:31:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3096: 321 pgs: 321 active+clean; 818 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 8.2 MiB/s rd, 15 MiB/s wr, 200 op/s
Jan 23 05:31:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:44.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:44.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:44 np0005593232 nova_compute[250269]: 2026-01-23 10:31:44.622 250273 INFO nova.virt.libvirt.driver [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Creating config drive at /var/lib/nova/instances/b467845b-6847-4ed6-8239-9cb93760cfc7/disk.config#033[00m
Jan 23 05:31:44 np0005593232 nova_compute[250269]: 2026-01-23 10:31:44.628 250273 DEBUG oslo_concurrency.processutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b467845b-6847-4ed6-8239-9cb93760cfc7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw6hm2td5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:31:44 np0005593232 nova_compute[250269]: 2026-01-23 10:31:44.789 250273 DEBUG oslo_concurrency.processutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b467845b-6847-4ed6-8239-9cb93760cfc7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw6hm2td5" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:31:44 np0005593232 nova_compute[250269]: 2026-01-23 10:31:44.824 250273 DEBUG nova.storage.rbd_utils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] rbd image b467845b-6847-4ed6-8239-9cb93760cfc7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:31:44 np0005593232 nova_compute[250269]: 2026-01-23 10:31:44.829 250273 DEBUG oslo_concurrency.processutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b467845b-6847-4ed6-8239-9cb93760cfc7/disk.config b467845b-6847-4ed6-8239-9cb93760cfc7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:31:44 np0005593232 nova_compute[250269]: 2026-01-23 10:31:44.970 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:45 np0005593232 nova_compute[250269]: 2026-01-23 10:31:45.022 250273 DEBUG oslo_concurrency.processutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b467845b-6847-4ed6-8239-9cb93760cfc7/disk.config b467845b-6847-4ed6-8239-9cb93760cfc7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:31:45 np0005593232 nova_compute[250269]: 2026-01-23 10:31:45.023 250273 INFO nova.virt.libvirt.driver [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Deleting local config drive /var/lib/nova/instances/b467845b-6847-4ed6-8239-9cb93760cfc7/disk.config because it was imported into RBD.#033[00m
Jan 23 05:31:45 np0005593232 kernel: tap77481d6e-4c: entered promiscuous mode
Jan 23 05:31:45 np0005593232 NetworkManager[49057]: <info>  [1769164305.0817] manager: (tap77481d6e-4c): new Tun device (/org/freedesktop/NetworkManager/Devices/330)
Jan 23 05:31:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:31:45Z|00702|binding|INFO|Claiming lport 77481d6e-4c79-4c13-b9bc-38c906a20223 for this chassis.
Jan 23 05:31:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:31:45Z|00703|binding|INFO|77481d6e-4c79-4c13-b9bc-38c906a20223: Claiming fa:16:3e:14:ce:b6 10.100.0.14
Jan 23 05:31:45 np0005593232 nova_compute[250269]: 2026-01-23 10:31:45.082 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:31:45.089 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:ce:b6 10.100.0.14'], port_security=['fa:16:3e:14:ce:b6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b467845b-6847-4ed6-8239-9cb93760cfc7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d8920ab-3f59-40bb-a223-b071277b1888', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b27af793a8cc42259216fbeaa302ba03', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1feca004-4cb8-48c6-97b5-f4e0f9b48a29', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90aca808-149b-4d83-8945-235dd296217f, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=77481d6e-4c79-4c13-b9bc-38c906a20223) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:31:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:31:45.091 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 77481d6e-4c79-4c13-b9bc-38c906a20223 in datapath 4d8920ab-3f59-40bb-a223-b071277b1888 bound to our chassis#033[00m
Jan 23 05:31:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:31:45.092 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d8920ab-3f59-40bb-a223-b071277b1888#033[00m
Jan 23 05:31:45 np0005593232 nova_compute[250269]: 2026-01-23 10:31:45.093 250273 DEBUG nova.network.neutron [req-1834eef3-8f07-4e93-9fed-28e6a6e73837 req-64bbe4fe-a956-49ac-bc9d-21f458a94e59 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Updated VIF entry in instance network info cache for port 77481d6e-4c79-4c13-b9bc-38c906a20223. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:31:45 np0005593232 nova_compute[250269]: 2026-01-23 10:31:45.093 250273 DEBUG nova.network.neutron [req-1834eef3-8f07-4e93-9fed-28e6a6e73837 req-64bbe4fe-a956-49ac-bc9d-21f458a94e59 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Updating instance_info_cache with network_info: [{"id": "77481d6e-4c79-4c13-b9bc-38c906a20223", "address": "fa:16:3e:14:ce:b6", "network": {"id": "4d8920ab-3f59-40bb-a223-b071277b1888", "bridge": "br-int", "label": "tempest-network-smoke--1695810502", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77481d6e-4c", "ovs_interfaceid": "77481d6e-4c79-4c13-b9bc-38c906a20223", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:31:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:31:45Z|00704|binding|INFO|Setting lport 77481d6e-4c79-4c13-b9bc-38c906a20223 ovn-installed in OVS
Jan 23 05:31:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:31:45Z|00705|binding|INFO|Setting lport 77481d6e-4c79-4c13-b9bc-38c906a20223 up in Southbound
Jan 23 05:31:45 np0005593232 nova_compute[250269]: 2026-01-23 10:31:45.103 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:45 np0005593232 nova_compute[250269]: 2026-01-23 10:31:45.105 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:31:45.111 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1b4e34dd-7a7a-41e6-bcfc-3f0945e3ef87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:31:45 np0005593232 nova_compute[250269]: 2026-01-23 10:31:45.123 250273 DEBUG oslo_concurrency.lockutils [req-1834eef3-8f07-4e93-9fed-28e6a6e73837 req-64bbe4fe-a956-49ac-bc9d-21f458a94e59 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-b467845b-6847-4ed6-8239-9cb93760cfc7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:31:45 np0005593232 systemd-udevd[370982]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:31:45 np0005593232 systemd-machined[215836]: New machine qemu-80-instance-000000b4.
Jan 23 05:31:45 np0005593232 systemd[1]: Started Virtual Machine qemu-80-instance-000000b4.
Jan 23 05:31:45 np0005593232 NetworkManager[49057]: <info>  [1769164305.1408] device (tap77481d6e-4c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:31:45 np0005593232 NetworkManager[49057]: <info>  [1769164305.1417] device (tap77481d6e-4c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:31:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:31:45.148 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[d5146f57-b249-4721-a237-e5c438e196ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:31:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:31:45.151 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[c9c133fb-66ce-4c64-aa01-fe7d66d710d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:31:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:31:45.180 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[f19fca88-d91d-40c0-9561-cb3174f0ce92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:31:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:31:45.198 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[158c72e6-2ceb-434d-87ba-6088aa6b61e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d8920ab-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:f8:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 209], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 815839, 'reachable_time': 43897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370993, 'error': None, 'target': 'ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:31:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:31:45.217 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[784a56a8-9eed-4b0e-adb9-4c5bfbe7a96b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4d8920ab-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 815855, 'tstamp': 815855}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370994, 'error': None, 'target': 'ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4d8920ab-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 815859, 'tstamp': 815859}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370994, 'error': None, 'target': 'ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:31:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:31:45.219 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d8920ab-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:31:45 np0005593232 nova_compute[250269]: 2026-01-23 10:31:45.220 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:45 np0005593232 nova_compute[250269]: 2026-01-23 10:31:45.221 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:31:45.223 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d8920ab-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:31:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:31:45.224 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:31:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:31:45.224 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d8920ab-30, col_values=(('external_ids', {'iface-id': '3e4779d9-d53f-4b05-82f8-912a2ec8334d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:31:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:31:45.224 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:31:45 np0005593232 nova_compute[250269]: 2026-01-23 10:31:45.824 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164305.8234875, b467845b-6847-4ed6-8239-9cb93760cfc7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:31:45 np0005593232 nova_compute[250269]: 2026-01-23 10:31:45.824 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] VM Started (Lifecycle Event)#033[00m
Jan 23 05:31:45 np0005593232 nova_compute[250269]: 2026-01-23 10:31:45.876 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:31:45 np0005593232 nova_compute[250269]: 2026-01-23 10:31:45.880 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164305.8262143, b467845b-6847-4ed6-8239-9cb93760cfc7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:31:45 np0005593232 nova_compute[250269]: 2026-01-23 10:31:45.880 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:31:45 np0005593232 nova_compute[250269]: 2026-01-23 10:31:45.911 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:31:45 np0005593232 nova_compute[250269]: 2026-01-23 10:31:45.915 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:31:45 np0005593232 nova_compute[250269]: 2026-01-23 10:31:45.945 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:31:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3097: 321 pgs: 321 active+clean; 780 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 6.6 MiB/s rd, 12 MiB/s wr, 187 op/s
Jan 23 05:31:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:46.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:46 np0005593232 nova_compute[250269]: 2026-01-23 10:31:46.080 250273 DEBUG nova.compute.manager [req-124e06b0-3ecc-4e87-9555-7ca23916f0bd req-636d884d-1a12-499d-ba58-3523a3d44d91 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Received event network-vif-plugged-77481d6e-4c79-4c13-b9bc-38c906a20223 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:31:46 np0005593232 nova_compute[250269]: 2026-01-23 10:31:46.080 250273 DEBUG oslo_concurrency.lockutils [req-124e06b0-3ecc-4e87-9555-7ca23916f0bd req-636d884d-1a12-499d-ba58-3523a3d44d91 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "b467845b-6847-4ed6-8239-9cb93760cfc7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:31:46 np0005593232 nova_compute[250269]: 2026-01-23 10:31:46.081 250273 DEBUG oslo_concurrency.lockutils [req-124e06b0-3ecc-4e87-9555-7ca23916f0bd req-636d884d-1a12-499d-ba58-3523a3d44d91 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b467845b-6847-4ed6-8239-9cb93760cfc7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:31:46 np0005593232 nova_compute[250269]: 2026-01-23 10:31:46.081 250273 DEBUG oslo_concurrency.lockutils [req-124e06b0-3ecc-4e87-9555-7ca23916f0bd req-636d884d-1a12-499d-ba58-3523a3d44d91 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b467845b-6847-4ed6-8239-9cb93760cfc7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:31:46 np0005593232 nova_compute[250269]: 2026-01-23 10:31:46.081 250273 DEBUG nova.compute.manager [req-124e06b0-3ecc-4e87-9555-7ca23916f0bd req-636d884d-1a12-499d-ba58-3523a3d44d91 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Processing event network-vif-plugged-77481d6e-4c79-4c13-b9bc-38c906a20223 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:31:46 np0005593232 nova_compute[250269]: 2026-01-23 10:31:46.083 250273 DEBUG nova.compute.manager [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:31:46 np0005593232 nova_compute[250269]: 2026-01-23 10:31:46.087 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164306.087658, b467845b-6847-4ed6-8239-9cb93760cfc7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:31:46 np0005593232 nova_compute[250269]: 2026-01-23 10:31:46.088 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:31:46 np0005593232 nova_compute[250269]: 2026-01-23 10:31:46.092 250273 DEBUG nova.virt.libvirt.driver [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:31:46 np0005593232 nova_compute[250269]: 2026-01-23 10:31:46.098 250273 INFO nova.virt.libvirt.driver [-] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Instance spawned successfully.#033[00m
Jan 23 05:31:46 np0005593232 nova_compute[250269]: 2026-01-23 10:31:46.099 250273 DEBUG nova.virt.libvirt.driver [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:31:46 np0005593232 nova_compute[250269]: 2026-01-23 10:31:46.137 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:31:46 np0005593232 nova_compute[250269]: 2026-01-23 10:31:46.143 250273 DEBUG nova.virt.libvirt.driver [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:31:46 np0005593232 nova_compute[250269]: 2026-01-23 10:31:46.144 250273 DEBUG nova.virt.libvirt.driver [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:31:46 np0005593232 nova_compute[250269]: 2026-01-23 10:31:46.144 250273 DEBUG nova.virt.libvirt.driver [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:31:46 np0005593232 nova_compute[250269]: 2026-01-23 10:31:46.145 250273 DEBUG nova.virt.libvirt.driver [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:31:46 np0005593232 nova_compute[250269]: 2026-01-23 10:31:46.145 250273 DEBUG nova.virt.libvirt.driver [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:31:46 np0005593232 nova_compute[250269]: 2026-01-23 10:31:46.146 250273 DEBUG nova.virt.libvirt.driver [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:31:46 np0005593232 nova_compute[250269]: 2026-01-23 10:31:46.152 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:31:46 np0005593232 nova_compute[250269]: 2026-01-23 10:31:46.226 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:31:46 np0005593232 nova_compute[250269]: 2026-01-23 10:31:46.245 250273 INFO nova.compute.manager [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Took 22.18 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:31:46 np0005593232 nova_compute[250269]: 2026-01-23 10:31:46.245 250273 DEBUG nova.compute.manager [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:31:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:31:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:46.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:31:46 np0005593232 nova_compute[250269]: 2026-01-23 10:31:46.324 250273 INFO nova.compute.manager [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Took 25.26 seconds to build instance.#033[00m
Jan 23 05:31:46 np0005593232 nova_compute[250269]: 2026-01-23 10:31:46.365 250273 DEBUG oslo_concurrency.lockutils [None req-ec7dcc4b-d154-45e6-b68e-fdc0997ad123 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "b467845b-6847-4ed6-8239-9cb93760cfc7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:31:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e342 do_prune osdmap full prune enabled
Jan 23 05:31:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e343 e343: 3 total, 3 up, 3 in
Jan 23 05:31:46 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e343: 3 total, 3 up, 3 in
Jan 23 05:31:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:31:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:31:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:31:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:31:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.011065163781872325 of space, bias 1.0, pg target 3.3195491345616976 quantized to 32 (current 32)
Jan 23 05:31:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:31:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6460427135678393 quantized to 32 (current 32)
Jan 23 05:31:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:31:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:31:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:31:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.009257441722501395 of space, bias 1.0, pg target 2.7494601915829144 quantized to 32 (current 32)
Jan 23 05:31:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:31:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001715754699423041 quantized to 16 (current 32)
Jan 23 05:31:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:31:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:31:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:31:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021446933742788013 quantized to 32 (current 32)
Jan 23 05:31:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:31:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018229893681369813 quantized to 32 (current 32)
Jan 23 05:31:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:31:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:31:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:31:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00042893867485576027 quantized to 32 (current 32)
Jan 23 05:31:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3099: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 756 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 8.1 MiB/s wr, 238 op/s
Jan 23 05:31:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:31:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:48.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:31:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:48.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:48 np0005593232 nova_compute[250269]: 2026-01-23 10:31:48.482 250273 DEBUG nova.compute.manager [req-2ee1efa3-e2f4-4762-bdda-381e513c268a req-db8bb176-25c0-4fc2-8f2c-c96c1fb447b1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Received event network-vif-plugged-77481d6e-4c79-4c13-b9bc-38c906a20223 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:31:48 np0005593232 nova_compute[250269]: 2026-01-23 10:31:48.482 250273 DEBUG oslo_concurrency.lockutils [req-2ee1efa3-e2f4-4762-bdda-381e513c268a req-db8bb176-25c0-4fc2-8f2c-c96c1fb447b1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "b467845b-6847-4ed6-8239-9cb93760cfc7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:31:48 np0005593232 nova_compute[250269]: 2026-01-23 10:31:48.482 250273 DEBUG oslo_concurrency.lockutils [req-2ee1efa3-e2f4-4762-bdda-381e513c268a req-db8bb176-25c0-4fc2-8f2c-c96c1fb447b1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b467845b-6847-4ed6-8239-9cb93760cfc7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:31:48 np0005593232 nova_compute[250269]: 2026-01-23 10:31:48.483 250273 DEBUG oslo_concurrency.lockutils [req-2ee1efa3-e2f4-4762-bdda-381e513c268a req-db8bb176-25c0-4fc2-8f2c-c96c1fb447b1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b467845b-6847-4ed6-8239-9cb93760cfc7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:31:48 np0005593232 nova_compute[250269]: 2026-01-23 10:31:48.483 250273 DEBUG nova.compute.manager [req-2ee1efa3-e2f4-4762-bdda-381e513c268a req-db8bb176-25c0-4fc2-8f2c-c96c1fb447b1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] No waiting events found dispatching network-vif-plugged-77481d6e-4c79-4c13-b9bc-38c906a20223 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:31:48 np0005593232 nova_compute[250269]: 2026-01-23 10:31:48.483 250273 WARNING nova.compute.manager [req-2ee1efa3-e2f4-4762-bdda-381e513c268a req-db8bb176-25c0-4fc2-8f2c-c96c1fb447b1 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Received unexpected event network-vif-plugged-77481d6e-4c79-4c13-b9bc-38c906a20223 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:31:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e343 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:31:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e343 do_prune osdmap full prune enabled
Jan 23 05:31:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e344 e344: 3 total, 3 up, 3 in
Jan 23 05:31:48 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e344: 3 total, 3 up, 3 in
Jan 23 05:31:48 np0005593232 nova_compute[250269]: 2026-01-23 10:31:48.597 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:49 np0005593232 nova_compute[250269]: 2026-01-23 10:31:49.976 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3101: 321 pgs: 2 active+clean+snaptrim, 319 active+clean; 677 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.6 MiB/s rd, 2.7 MiB/s wr, 259 op/s
Jan 23 05:31:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:31:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:50.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:31:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:31:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:50.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:31:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3102: 321 pgs: 321 active+clean; 613 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 57 KiB/s wr, 231 op/s
Jan 23 05:31:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 05:31:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:52.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 05:31:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:31:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:52.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:31:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:31:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e344 do_prune osdmap full prune enabled
Jan 23 05:31:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e345 e345: 3 total, 3 up, 3 in
Jan 23 05:31:53 np0005593232 nova_compute[250269]: 2026-01-23 10:31:53.602 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:53 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e345: 3 total, 3 up, 3 in
Jan 23 05:31:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3104: 321 pgs: 321 active+clean; 600 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 62 KiB/s wr, 268 op/s
Jan 23 05:31:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:54.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:31:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:54.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:31:54 np0005593232 podman[371043]: 2026-01-23 10:31:54.519696642 +0000 UTC m=+0.162938883 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:31:54 np0005593232 nova_compute[250269]: 2026-01-23 10:31:54.993 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:55 np0005593232 nova_compute[250269]: 2026-01-23 10:31:55.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:31:55 np0005593232 nova_compute[250269]: 2026-01-23 10:31:55.295 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:31:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3105: 321 pgs: 321 active+clean; 593 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.3 MiB/s rd, 4.2 KiB/s wr, 250 op/s
Jan 23 05:31:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:31:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:56.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:31:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:31:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:56.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:31:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e345 do_prune osdmap full prune enabled
Jan 23 05:31:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e346 e346: 3 total, 3 up, 3 in
Jan 23 05:31:56 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e346: 3 total, 3 up, 3 in
Jan 23 05:31:57 np0005593232 nova_compute[250269]: 2026-01-23 10:31:57.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:31:57 np0005593232 nova_compute[250269]: 2026-01-23 10:31:57.295 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:31:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e346 do_prune osdmap full prune enabled
Jan 23 05:31:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e347 e347: 3 total, 3 up, 3 in
Jan 23 05:31:57 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e347: 3 total, 3 up, 3 in
Jan 23 05:31:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3108: 321 pgs: 321 active+clean; 597 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 986 KiB/s wr, 225 op/s
Jan 23 05:31:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:31:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:31:58.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:31:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:31:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:31:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:31:58.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:31:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:31:58 np0005593232 nova_compute[250269]: 2026-01-23 10:31:58.605 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:31:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e347 do_prune osdmap full prune enabled
Jan 23 05:31:58 np0005593232 podman[371095]: 2026-01-23 10:31:58.914061515 +0000 UTC m=+0.103286587 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 23 05:31:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e348 e348: 3 total, 3 up, 3 in
Jan 23 05:31:58 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e348: 3 total, 3 up, 3 in
Jan 23 05:31:59 np0005593232 nova_compute[250269]: 2026-01-23 10:31:59.290 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:32:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3110: 321 pgs: 321 active+clean; 624 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.2 MiB/s rd, 4.6 MiB/s wr, 199 op/s
Jan 23 05:32:00 np0005593232 nova_compute[250269]: 2026-01-23 10:32:00.024 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:32:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:00.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:32:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:32:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:00.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:32:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e348 do_prune osdmap full prune enabled
Jan 23 05:32:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e349 e349: 3 total, 3 up, 3 in
Jan 23 05:32:00 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e349: 3 total, 3 up, 3 in
Jan 23 05:32:01 np0005593232 ovn_controller[151001]: 2026-01-23T10:32:01Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:14:ce:b6 10.100.0.14
Jan 23 05:32:01 np0005593232 ovn_controller[151001]: 2026-01-23T10:32:01Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:14:ce:b6 10.100.0.14
Jan 23 05:32:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3112: 321 pgs: 321 active+clean; 613 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 6.9 MiB/s wr, 237 op/s
Jan 23 05:32:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:02.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:02.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:03 np0005593232 nova_compute[250269]: 2026-01-23 10:32:03.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:32:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:32:03 np0005593232 nova_compute[250269]: 2026-01-23 10:32:03.663 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3113: 321 pgs: 321 active+clean; 601 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 8.0 MiB/s wr, 261 op/s
Jan 23 05:32:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:04.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:04.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:04 np0005593232 ovn_controller[151001]: 2026-01-23T10:32:04Z|00706|binding|INFO|Releasing lport 3e4779d9-d53f-4b05-82f8-912a2ec8334d from this chassis (sb_readonly=0)
Jan 23 05:32:04 np0005593232 ovn_controller[151001]: 2026-01-23T10:32:04Z|00707|binding|INFO|Releasing lport 0f4f3525-34df-42ca-96c3-3c7e0c388556 from this chassis (sb_readonly=0)
Jan 23 05:32:04 np0005593232 nova_compute[250269]: 2026-01-23 10:32:04.512 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:05 np0005593232 nova_compute[250269]: 2026-01-23 10:32:05.027 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3114: 321 pgs: 321 active+clean; 603 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 7.7 MiB/s wr, 248 op/s
Jan 23 05:32:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:32:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:06.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:32:06 np0005593232 nova_compute[250269]: 2026-01-23 10:32:06.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:32:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:32:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:06.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:32:06 np0005593232 nova_compute[250269]: 2026-01-23 10:32:06.622 250273 DEBUG nova.compute.manager [req-cb9ad423-7789-45d7-b770-b955caefd57c req-0f4f6b78-a88e-40b4-adff-b3cc7c05411b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Received event network-changed-dc21586e-25cd-4cb5-923b-a4766c5ef9cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:32:06 np0005593232 nova_compute[250269]: 2026-01-23 10:32:06.623 250273 DEBUG nova.compute.manager [req-cb9ad423-7789-45d7-b770-b955caefd57c req-0f4f6b78-a88e-40b4-adff-b3cc7c05411b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Refreshing instance network info cache due to event network-changed-dc21586e-25cd-4cb5-923b-a4766c5ef9cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:32:06 np0005593232 nova_compute[250269]: 2026-01-23 10:32:06.624 250273 DEBUG oslo_concurrency.lockutils [req-cb9ad423-7789-45d7-b770-b955caefd57c req-0f4f6b78-a88e-40b4-adff-b3cc7c05411b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-713eba08-716b-48ed-866e-e231d09ebfaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:32:06 np0005593232 nova_compute[250269]: 2026-01-23 10:32:06.624 250273 DEBUG oslo_concurrency.lockutils [req-cb9ad423-7789-45d7-b770-b955caefd57c req-0f4f6b78-a88e-40b4-adff-b3cc7c05411b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-713eba08-716b-48ed-866e-e231d09ebfaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:32:06 np0005593232 nova_compute[250269]: 2026-01-23 10:32:06.625 250273 DEBUG nova.network.neutron [req-cb9ad423-7789-45d7-b770-b955caefd57c req-0f4f6b78-a88e-40b4-adff-b3cc7c05411b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Refreshing network info cache for port dc21586e-25cd-4cb5-923b-a4766c5ef9cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:32:06 np0005593232 nova_compute[250269]: 2026-01-23 10:32:06.835 250273 DEBUG oslo_concurrency.lockutils [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Acquiring lock "713eba08-716b-48ed-866e-e231d09ebfaf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:32:06 np0005593232 nova_compute[250269]: 2026-01-23 10:32:06.836 250273 DEBUG oslo_concurrency.lockutils [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Lock "713eba08-716b-48ed-866e-e231d09ebfaf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:32:06 np0005593232 nova_compute[250269]: 2026-01-23 10:32:06.836 250273 DEBUG oslo_concurrency.lockutils [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Acquiring lock "713eba08-716b-48ed-866e-e231d09ebfaf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:32:06 np0005593232 nova_compute[250269]: 2026-01-23 10:32:06.838 250273 DEBUG oslo_concurrency.lockutils [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Lock "713eba08-716b-48ed-866e-e231d09ebfaf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:32:06 np0005593232 nova_compute[250269]: 2026-01-23 10:32:06.839 250273 DEBUG oslo_concurrency.lockutils [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Lock "713eba08-716b-48ed-866e-e231d09ebfaf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:32:06 np0005593232 nova_compute[250269]: 2026-01-23 10:32:06.841 250273 INFO nova.compute.manager [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Terminating instance#033[00m
Jan 23 05:32:06 np0005593232 nova_compute[250269]: 2026-01-23 10:32:06.843 250273 DEBUG nova.compute.manager [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:32:07 np0005593232 kernel: tapdc21586e-25 (unregistering): left promiscuous mode
Jan 23 05:32:07 np0005593232 NetworkManager[49057]: <info>  [1769164327.2099] device (tapdc21586e-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:32:07 np0005593232 ovn_controller[151001]: 2026-01-23T10:32:07Z|00708|binding|INFO|Releasing lport dc21586e-25cd-4cb5-923b-a4766c5ef9cc from this chassis (sb_readonly=0)
Jan 23 05:32:07 np0005593232 ovn_controller[151001]: 2026-01-23T10:32:07Z|00709|binding|INFO|Setting lport dc21586e-25cd-4cb5-923b-a4766c5ef9cc down in Southbound
Jan 23 05:32:07 np0005593232 nova_compute[250269]: 2026-01-23 10:32:07.236 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:07 np0005593232 ovn_controller[151001]: 2026-01-23T10:32:07Z|00710|binding|INFO|Removing iface tapdc21586e-25 ovn-installed in OVS
Jan 23 05:32:07 np0005593232 nova_compute[250269]: 2026-01-23 10:32:07.242 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:07 np0005593232 nova_compute[250269]: 2026-01-23 10:32:07.248 250273 DEBUG oslo_concurrency.lockutils [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "b467845b-6847-4ed6-8239-9cb93760cfc7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:32:07 np0005593232 nova_compute[250269]: 2026-01-23 10:32:07.249 250273 DEBUG oslo_concurrency.lockutils [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "b467845b-6847-4ed6-8239-9cb93760cfc7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:32:07 np0005593232 nova_compute[250269]: 2026-01-23 10:32:07.249 250273 DEBUG oslo_concurrency.lockutils [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "b467845b-6847-4ed6-8239-9cb93760cfc7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:32:07 np0005593232 nova_compute[250269]: 2026-01-23 10:32:07.250 250273 DEBUG oslo_concurrency.lockutils [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "b467845b-6847-4ed6-8239-9cb93760cfc7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:32:07 np0005593232 nova_compute[250269]: 2026-01-23 10:32:07.250 250273 DEBUG oslo_concurrency.lockutils [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "b467845b-6847-4ed6-8239-9cb93760cfc7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:32:07 np0005593232 nova_compute[250269]: 2026-01-23 10:32:07.252 250273 INFO nova.compute.manager [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Terminating instance#033[00m
Jan 23 05:32:07 np0005593232 nova_compute[250269]: 2026-01-23 10:32:07.253 250273 DEBUG nova.compute.manager [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:32:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:07.268 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:d3:85 10.100.0.5'], port_security=['fa:16:3e:28:d3:85 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '713eba08-716b-48ed-866e-e231d09ebfaf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd95237d-0845-479e-9505-318e01879565', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7be5cb5abaf44b0a9c0c307d348d8f75', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9caee2dd-fa48-495f-923a-9b90f0b8d219', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fddb1949-170b-4939-a509-14ac4d8149d1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=dc21586e-25cd-4cb5-923b-a4766c5ef9cc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:32:07 np0005593232 nova_compute[250269]: 2026-01-23 10:32:07.273 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:07.274 161902 INFO neutron.agent.ovn.metadata.agent [-] Port dc21586e-25cd-4cb5-923b-a4766c5ef9cc in datapath bd95237d-0845-479e-9505-318e01879565 unbound from our chassis#033[00m
Jan 23 05:32:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:07.277 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bd95237d-0845-479e-9505-318e01879565, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:32:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:07.283 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7f824088-1d5b-44f9-a2cf-123de1066ff7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:32:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:07.285 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bd95237d-0845-479e-9505-318e01879565 namespace which is not needed anymore#033[00m
Jan 23 05:32:07 np0005593232 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000ad.scope: Deactivated successfully.
Jan 23 05:32:07 np0005593232 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d000000ad.scope: Consumed 20.805s CPU time.
Jan 23 05:32:07 np0005593232 systemd-machined[215836]: Machine qemu-78-instance-000000ad terminated.
Jan 23 05:32:07 np0005593232 nova_compute[250269]: 2026-01-23 10:32:07.507 250273 INFO nova.virt.libvirt.driver [-] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Instance destroyed successfully.#033[00m
Jan 23 05:32:07 np0005593232 nova_compute[250269]: 2026-01-23 10:32:07.508 250273 DEBUG nova.objects.instance [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Lazy-loading 'resources' on Instance uuid 713eba08-716b-48ed-866e-e231d09ebfaf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:32:07 np0005593232 nova_compute[250269]: 2026-01-23 10:32:07.544 250273 DEBUG nova.virt.libvirt.vif [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:29:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1234686874',display_name='tempest-TestSnapshotPattern-server-1234686874',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1234686874',id=173,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO0lEKBsepjSG1BUrT2qgopJ/7aCoBcgDi3hhuJKTvppGpJeuS7bRrTAjsHpfJAjqSviKitZ9vmMFVrUxqv9t4cjKwPE6pfdP8/KJg/bYjfHtBTugoC0prDbk1bWow1ivA==',key_name='tempest-TestSnapshotPattern-313488550',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:30:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7be5cb5abaf44b0a9c0c307d348d8f75',ramdisk_id='',reservation_id='r-6fm0yb09',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-428739353',owner_user_name='tempest-TestSnapshotPattern-428739353-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:30:31Z,user_data=None,user_id='8e1f41f21f79408d8dff1331cfd1e0db',uuid=713eba08-716b-48ed-866e-e231d09ebfaf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "address": "fa:16:3e:28:d3:85", "network": {"id": "bd95237d-0845-479e-9505-318e01879565", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-2097735183-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7be5cb5abaf44b0a9c0c307d348d8f75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc21586e-25", "ovs_interfaceid": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:32:07 np0005593232 nova_compute[250269]: 2026-01-23 10:32:07.545 250273 DEBUG nova.network.os_vif_util [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Converting VIF {"id": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "address": "fa:16:3e:28:d3:85", "network": {"id": "bd95237d-0845-479e-9505-318e01879565", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-2097735183-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7be5cb5abaf44b0a9c0c307d348d8f75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc21586e-25", "ovs_interfaceid": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:32:07 np0005593232 nova_compute[250269]: 2026-01-23 10:32:07.546 250273 DEBUG nova.network.os_vif_util [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:28:d3:85,bridge_name='br-int',has_traffic_filtering=True,id=dc21586e-25cd-4cb5-923b-a4766c5ef9cc,network=Network(bd95237d-0845-479e-9505-318e01879565),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc21586e-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:32:07 np0005593232 nova_compute[250269]: 2026-01-23 10:32:07.546 250273 DEBUG os_vif [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:d3:85,bridge_name='br-int',has_traffic_filtering=True,id=dc21586e-25cd-4cb5-923b-a4766c5ef9cc,network=Network(bd95237d-0845-479e-9505-318e01879565),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc21586e-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:32:07 np0005593232 nova_compute[250269]: 2026-01-23 10:32:07.549 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:07 np0005593232 nova_compute[250269]: 2026-01-23 10:32:07.549 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdc21586e-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:32:07 np0005593232 nova_compute[250269]: 2026-01-23 10:32:07.551 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:07 np0005593232 nova_compute[250269]: 2026-01-23 10:32:07.554 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:07 np0005593232 nova_compute[250269]: 2026-01-23 10:32:07.560 250273 INFO os_vif [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:d3:85,bridge_name='br-int',has_traffic_filtering=True,id=dc21586e-25cd-4cb5-923b-a4766c5ef9cc,network=Network(bd95237d-0845-479e-9505-318e01879565),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdc21586e-25')#033[00m
Jan 23 05:32:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:32:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:32:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:32:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:32:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:32:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:32:07 np0005593232 neutron-haproxy-ovnmeta-bd95237d-0845-479e-9505-318e01879565[367186]: [NOTICE]   (367203) : haproxy version is 2.8.14-c23fe91
Jan 23 05:32:07 np0005593232 neutron-haproxy-ovnmeta-bd95237d-0845-479e-9505-318e01879565[367186]: [NOTICE]   (367203) : path to executable is /usr/sbin/haproxy
Jan 23 05:32:07 np0005593232 neutron-haproxy-ovnmeta-bd95237d-0845-479e-9505-318e01879565[367186]: [WARNING]  (367203) : Exiting Master process...
Jan 23 05:32:07 np0005593232 neutron-haproxy-ovnmeta-bd95237d-0845-479e-9505-318e01879565[367186]: [ALERT]    (367203) : Current worker (367205) exited with code 143 (Terminated)
Jan 23 05:32:07 np0005593232 neutron-haproxy-ovnmeta-bd95237d-0845-479e-9505-318e01879565[367186]: [WARNING]  (367203) : All workers exited. Exiting... (0)
Jan 23 05:32:07 np0005593232 systemd[1]: libpod-e9e334ea513b5164bef2548361efd765d8e80a83205a7585ba80b2e8ed341200.scope: Deactivated successfully.
Jan 23 05:32:07 np0005593232 podman[371168]: 2026-01-23 10:32:07.876375012 +0000 UTC m=+0.421034249 container died e9e334ea513b5164bef2548361efd765d8e80a83205a7585ba80b2e8ed341200 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd95237d-0845-479e-9505-318e01879565, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 05:32:07 np0005593232 kernel: tap77481d6e-4c (unregistering): left promiscuous mode
Jan 23 05:32:07 np0005593232 NetworkManager[49057]: <info>  [1769164327.9183] device (tap77481d6e-4c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:32:07 np0005593232 ovn_controller[151001]: 2026-01-23T10:32:07Z|00711|binding|INFO|Releasing lport 77481d6e-4c79-4c13-b9bc-38c906a20223 from this chassis (sb_readonly=0)
Jan 23 05:32:07 np0005593232 ovn_controller[151001]: 2026-01-23T10:32:07Z|00712|binding|INFO|Setting lport 77481d6e-4c79-4c13-b9bc-38c906a20223 down in Southbound
Jan 23 05:32:07 np0005593232 nova_compute[250269]: 2026-01-23 10:32:07.932 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:07 np0005593232 ovn_controller[151001]: 2026-01-23T10:32:07Z|00713|binding|INFO|Removing iface tap77481d6e-4c ovn-installed in OVS
Jan 23 05:32:07 np0005593232 nova_compute[250269]: 2026-01-23 10:32:07.935 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:07.942 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:14:ce:b6 10.100.0.14'], port_security=['fa:16:3e:14:ce:b6 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b467845b-6847-4ed6-8239-9cb93760cfc7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d8920ab-3f59-40bb-a223-b071277b1888', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b27af793a8cc42259216fbeaa302ba03', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1feca004-4cb8-48c6-97b5-f4e0f9b48a29', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90aca808-149b-4d83-8945-235dd296217f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=77481d6e-4c79-4c13-b9bc-38c906a20223) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:32:07 np0005593232 nova_compute[250269]: 2026-01-23 10:32:07.949 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:07 np0005593232 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000b4.scope: Deactivated successfully.
Jan 23 05:32:07 np0005593232 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d000000b4.scope: Consumed 15.626s CPU time.
Jan 23 05:32:07 np0005593232 systemd-machined[215836]: Machine qemu-80-instance-000000b4 terminated.
Jan 23 05:32:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3115: 321 pgs: 321 active+clean; 611 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 7.4 MiB/s wr, 271 op/s
Jan 23 05:32:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 05:32:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:08.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 05:32:08 np0005593232 nova_compute[250269]: 2026-01-23 10:32:08.133 250273 INFO nova.virt.libvirt.driver [-] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Instance destroyed successfully.#033[00m
Jan 23 05:32:08 np0005593232 nova_compute[250269]: 2026-01-23 10:32:08.134 250273 DEBUG nova.objects.instance [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lazy-loading 'resources' on Instance uuid b467845b-6847-4ed6-8239-9cb93760cfc7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:32:08 np0005593232 nova_compute[250269]: 2026-01-23 10:32:08.161 250273 DEBUG nova.virt.libvirt.vif [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:31:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-622349977-gen-0-803800213',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-622349977-gen-0-803800213',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-622349977-gen',id=180,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKupE4epLmgYLAdAj+FLKAkKAmBXmdOwgX+oMoeS46mz1daV80ym+/nNG6TQn7iL9TDYCuW4Gc2E4iMSeMZBjYm+yTMAaXHo2qMMkDwzwxd8ZHn30a3jeIEr/ZWv2szkRw==',key_name='tempest-TestSecurityGroupsBasicOps-55099826',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:31:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b27af793a8cc42259216fbeaa302ba03',ramdisk_id='',reservation_id='r-0zkixqmi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-622349977',owner_user_name='tempest-TestSecurityGroupsBasicOps-622349977-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:31:46Z,user_data=None,user_id='a3cd8c3758e14f9c8e4ad1a9a94a9995',uuid=b467845b-6847-4ed6-8239-9cb93760cfc7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77481d6e-4c79-4c13-b9bc-38c906a20223", "address": "fa:16:3e:14:ce:b6", "network": {"id": "4d8920ab-3f59-40bb-a223-b071277b1888", "bridge": "br-int", "label": "tempest-network-smoke--1695810502", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77481d6e-4c", "ovs_interfaceid": "77481d6e-4c79-4c13-b9bc-38c906a20223", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:32:08 np0005593232 nova_compute[250269]: 2026-01-23 10:32:08.162 250273 DEBUG nova.network.os_vif_util [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Converting VIF {"id": "77481d6e-4c79-4c13-b9bc-38c906a20223", "address": "fa:16:3e:14:ce:b6", "network": {"id": "4d8920ab-3f59-40bb-a223-b071277b1888", "bridge": "br-int", "label": "tempest-network-smoke--1695810502", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77481d6e-4c", "ovs_interfaceid": "77481d6e-4c79-4c13-b9bc-38c906a20223", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:32:08 np0005593232 nova_compute[250269]: 2026-01-23 10:32:08.162 250273 DEBUG nova.network.os_vif_util [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:14:ce:b6,bridge_name='br-int',has_traffic_filtering=True,id=77481d6e-4c79-4c13-b9bc-38c906a20223,network=Network(4d8920ab-3f59-40bb-a223-b071277b1888),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77481d6e-4c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:32:08 np0005593232 nova_compute[250269]: 2026-01-23 10:32:08.163 250273 DEBUG os_vif [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:ce:b6,bridge_name='br-int',has_traffic_filtering=True,id=77481d6e-4c79-4c13-b9bc-38c906a20223,network=Network(4d8920ab-3f59-40bb-a223-b071277b1888),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77481d6e-4c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:32:08 np0005593232 nova_compute[250269]: 2026-01-23 10:32:08.164 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:08 np0005593232 nova_compute[250269]: 2026-01-23 10:32:08.165 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77481d6e-4c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:32:08 np0005593232 nova_compute[250269]: 2026-01-23 10:32:08.204 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:08 np0005593232 nova_compute[250269]: 2026-01-23 10:32:08.206 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:32:08 np0005593232 nova_compute[250269]: 2026-01-23 10:32:08.209 250273 INFO os_vif [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:14:ce:b6,bridge_name='br-int',has_traffic_filtering=True,id=77481d6e-4c79-4c13-b9bc-38c906a20223,network=Network(4d8920ab-3f59-40bb-a223-b071277b1888),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77481d6e-4c')#033[00m
Jan 23 05:32:08 np0005593232 nova_compute[250269]: 2026-01-23 10:32:08.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:32:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:08.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:08 np0005593232 nova_compute[250269]: 2026-01-23 10:32:08.389 250273 DEBUG nova.compute.manager [req-72c13b01-3add-42a2-b638-05ebee7ec8f5 req-a332f388-d608-47a1-9423-25942cafee3a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Received event network-vif-unplugged-dc21586e-25cd-4cb5-923b-a4766c5ef9cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:32:08 np0005593232 nova_compute[250269]: 2026-01-23 10:32:08.390 250273 DEBUG oslo_concurrency.lockutils [req-72c13b01-3add-42a2-b638-05ebee7ec8f5 req-a332f388-d608-47a1-9423-25942cafee3a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "713eba08-716b-48ed-866e-e231d09ebfaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:32:08 np0005593232 nova_compute[250269]: 2026-01-23 10:32:08.390 250273 DEBUG oslo_concurrency.lockutils [req-72c13b01-3add-42a2-b638-05ebee7ec8f5 req-a332f388-d608-47a1-9423-25942cafee3a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "713eba08-716b-48ed-866e-e231d09ebfaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:32:08 np0005593232 nova_compute[250269]: 2026-01-23 10:32:08.391 250273 DEBUG oslo_concurrency.lockutils [req-72c13b01-3add-42a2-b638-05ebee7ec8f5 req-a332f388-d608-47a1-9423-25942cafee3a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "713eba08-716b-48ed-866e-e231d09ebfaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:32:08 np0005593232 nova_compute[250269]: 2026-01-23 10:32:08.391 250273 DEBUG nova.compute.manager [req-72c13b01-3add-42a2-b638-05ebee7ec8f5 req-a332f388-d608-47a1-9423-25942cafee3a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] No waiting events found dispatching network-vif-unplugged-dc21586e-25cd-4cb5-923b-a4766c5ef9cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:32:08 np0005593232 nova_compute[250269]: 2026-01-23 10:32:08.391 250273 DEBUG nova.compute.manager [req-72c13b01-3add-42a2-b638-05ebee7ec8f5 req-a332f388-d608-47a1-9423-25942cafee3a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Received event network-vif-unplugged-dc21586e-25cd-4cb5-923b-a4766c5ef9cc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:32:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e349 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:32:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e349 do_prune osdmap full prune enabled
Jan 23 05:32:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e9e334ea513b5164bef2548361efd765d8e80a83205a7585ba80b2e8ed341200-userdata-shm.mount: Deactivated successfully.
Jan 23 05:32:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay-779247c66a7255db323c8d458194c020c1f52713d918878df6f54f726cc5e8de-merged.mount: Deactivated successfully.
Jan 23 05:32:08 np0005593232 podman[371168]: 2026-01-23 10:32:08.736028876 +0000 UTC m=+1.280688033 container cleanup e9e334ea513b5164bef2548361efd765d8e80a83205a7585ba80b2e8ed341200 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd95237d-0845-479e-9505-318e01879565, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 05:32:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e350 e350: 3 total, 3 up, 3 in
Jan 23 05:32:08 np0005593232 systemd[1]: libpod-conmon-e9e334ea513b5164bef2548361efd765d8e80a83205a7585ba80b2e8ed341200.scope: Deactivated successfully.
Jan 23 05:32:08 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e350: 3 total, 3 up, 3 in
Jan 23 05:32:08 np0005593232 podman[371260]: 2026-01-23 10:32:08.857739345 +0000 UTC m=+0.073018956 container remove e9e334ea513b5164bef2548361efd765d8e80a83205a7585ba80b2e8ed341200 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd95237d-0845-479e-9505-318e01879565, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:08.869 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ced0951d-6912-46d7-8e14-0070ac45aa27]: (4, ('Fri Jan 23 10:32:07 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-bd95237d-0845-479e-9505-318e01879565 (e9e334ea513b5164bef2548361efd765d8e80a83205a7585ba80b2e8ed341200)\ne9e334ea513b5164bef2548361efd765d8e80a83205a7585ba80b2e8ed341200\nFri Jan 23 10:32:08 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-bd95237d-0845-479e-9505-318e01879565 (e9e334ea513b5164bef2548361efd765d8e80a83205a7585ba80b2e8ed341200)\ne9e334ea513b5164bef2548361efd765d8e80a83205a7585ba80b2e8ed341200\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:08.871 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b64d14fe-167e-43cf-82c3-4eb61ce92e1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:08.873 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd95237d-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:32:08 np0005593232 kernel: tapbd95237d-00: left promiscuous mode
Jan 23 05:32:08 np0005593232 nova_compute[250269]: 2026-01-23 10:32:08.875 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:08.881 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c802c4f2-ed04-4340-8fc8-718a8a4beb04]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:32:08 np0005593232 nova_compute[250269]: 2026-01-23 10:32:08.898 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:08.904 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f6398959-5eef-480e-8405-8489d51d73fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:08.907 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[fd55aaf6-9231-45d7-bace-daa9a0936dae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:08.938 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2417fdff-655d-4b57-a962-77e32c7893b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 810892, 'reachable_time': 35688, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371276, 'error': None, 'target': 'ovnmeta-bd95237d-0845-479e-9505-318e01879565', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:08.942 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bd95237d-0845-479e-9505-318e01879565 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:08.942 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[7d0bef62-b5b8-4b9f-add8-f00535b47e49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:32:08 np0005593232 systemd[1]: run-netns-ovnmeta\x2dbd95237d\x2d0845\x2d479e\x2d9505\x2d318e01879565.mount: Deactivated successfully.
Jan 23 05:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:08.945 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 77481d6e-4c79-4c13-b9bc-38c906a20223 in datapath 4d8920ab-3f59-40bb-a223-b071277b1888 unbound from our chassis#033[00m
Jan 23 05:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:08.947 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4d8920ab-3f59-40bb-a223-b071277b1888#033[00m
Jan 23 05:32:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:08.969 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7e996e0a-f21d-414e-8105-812c6afd1cdf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:32:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:09.026 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe91887-79ed-4af7-8e05-241b8acef03f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:32:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:09.031 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[233fafff-6f2c-48ae-8a53-bc7f3cc90498]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:32:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:09.087 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[f5954643-eabe-4f40-a3fe-4813bc81f74d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:32:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:09.113 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4bf4717f-8b7d-40fa-ba96-059f23d3c9e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4d8920ab-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:f8:d1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 658, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 209], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 815839, 'reachable_time': 43897, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371283, 'error': None, 'target': 'ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:32:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:09.135 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[488036d4-9ddd-435d-b65b-ef147a67c582]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4d8920ab-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 815855, 'tstamp': 815855}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371284, 'error': None, 'target': 'ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4d8920ab-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 815859, 'tstamp': 815859}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371284, 'error': None, 'target': 'ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:32:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:09.137 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d8920ab-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:32:09 np0005593232 nova_compute[250269]: 2026-01-23 10:32:09.139 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:09 np0005593232 nova_compute[250269]: 2026-01-23 10:32:09.141 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:09.141 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4d8920ab-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:32:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:09.141 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:32:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:09.142 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4d8920ab-30, col_values=(('external_ids', {'iface-id': '3e4779d9-d53f-4b05-82f8-912a2ec8334d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:32:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:09.142 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:32:09 np0005593232 nova_compute[250269]: 2026-01-23 10:32:09.180 250273 INFO nova.virt.libvirt.driver [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Deleting instance files /var/lib/nova/instances/b467845b-6847-4ed6-8239-9cb93760cfc7_del#033[00m
Jan 23 05:32:09 np0005593232 nova_compute[250269]: 2026-01-23 10:32:09.182 250273 INFO nova.virt.libvirt.driver [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Deletion of /var/lib/nova/instances/b467845b-6847-4ed6-8239-9cb93760cfc7_del complete#033[00m
Jan 23 05:32:09 np0005593232 nova_compute[250269]: 2026-01-23 10:32:09.238 250273 INFO nova.virt.libvirt.driver [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Deleting instance files /var/lib/nova/instances/713eba08-716b-48ed-866e-e231d09ebfaf_del#033[00m
Jan 23 05:32:09 np0005593232 nova_compute[250269]: 2026-01-23 10:32:09.239 250273 INFO nova.virt.libvirt.driver [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Deletion of /var/lib/nova/instances/713eba08-716b-48ed-866e-e231d09ebfaf_del complete#033[00m
Jan 23 05:32:09 np0005593232 nova_compute[250269]: 2026-01-23 10:32:09.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:32:09 np0005593232 nova_compute[250269]: 2026-01-23 10:32:09.367 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:32:09 np0005593232 nova_compute[250269]: 2026-01-23 10:32:09.368 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:32:09 np0005593232 nova_compute[250269]: 2026-01-23 10:32:09.368 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:32:09 np0005593232 nova_compute[250269]: 2026-01-23 10:32:09.369 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:32:09 np0005593232 nova_compute[250269]: 2026-01-23 10:32:09.369 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:32:09 np0005593232 nova_compute[250269]: 2026-01-23 10:32:09.448 250273 INFO nova.compute.manager [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Took 2.19 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:32:09 np0005593232 nova_compute[250269]: 2026-01-23 10:32:09.450 250273 DEBUG oslo.service.loopingcall [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:32:09 np0005593232 nova_compute[250269]: 2026-01-23 10:32:09.451 250273 DEBUG nova.compute.manager [-] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:32:09 np0005593232 nova_compute[250269]: 2026-01-23 10:32:09.452 250273 DEBUG nova.network.neutron [-] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:32:09 np0005593232 nova_compute[250269]: 2026-01-23 10:32:09.462 250273 INFO nova.compute.manager [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Took 2.62 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:32:09 np0005593232 nova_compute[250269]: 2026-01-23 10:32:09.463 250273 DEBUG oslo.service.loopingcall [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:32:09 np0005593232 nova_compute[250269]: 2026-01-23 10:32:09.465 250273 DEBUG nova.compute.manager [-] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:32:09 np0005593232 nova_compute[250269]: 2026-01-23 10:32:09.465 250273 DEBUG nova.network.neutron [-] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:32:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:32:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2021599879' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:32:09 np0005593232 nova_compute[250269]: 2026-01-23 10:32:09.921 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:32:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3117: 321 pgs: 321 active+clean; 541 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 4.9 MiB/s wr, 281 op/s
Jan 23 05:32:10 np0005593232 nova_compute[250269]: 2026-01-23 10:32:10.031 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:10 np0005593232 nova_compute[250269]: 2026-01-23 10:32:10.055 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000b0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:32:10 np0005593232 nova_compute[250269]: 2026-01-23 10:32:10.056 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000b0 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:32:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:32:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:10.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:32:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:32:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:10.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:32:10 np0005593232 nova_compute[250269]: 2026-01-23 10:32:10.441 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:32:10 np0005593232 nova_compute[250269]: 2026-01-23 10:32:10.444 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3965MB free_disk=20.760608673095703GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:32:10 np0005593232 nova_compute[250269]: 2026-01-23 10:32:10.444 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:32:10 np0005593232 nova_compute[250269]: 2026-01-23 10:32:10.445 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:32:10 np0005593232 nova_compute[250269]: 2026-01-23 10:32:10.698 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 713eba08-716b-48ed-866e-e231d09ebfaf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:32:10 np0005593232 nova_compute[250269]: 2026-01-23 10:32:10.699 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 7fed3293-7f29-4792-952a-17b0d8962482 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:32:10 np0005593232 nova_compute[250269]: 2026-01-23 10:32:10.700 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance b467845b-6847-4ed6-8239-9cb93760cfc7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:32:10 np0005593232 nova_compute[250269]: 2026-01-23 10:32:10.700 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:32:10 np0005593232 nova_compute[250269]: 2026-01-23 10:32:10.701 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:32:10 np0005593232 nova_compute[250269]: 2026-01-23 10:32:10.778 250273 DEBUG nova.network.neutron [req-cb9ad423-7789-45d7-b770-b955caefd57c req-0f4f6b78-a88e-40b4-adff-b3cc7c05411b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Updated VIF entry in instance network info cache for port dc21586e-25cd-4cb5-923b-a4766c5ef9cc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:32:10 np0005593232 nova_compute[250269]: 2026-01-23 10:32:10.779 250273 DEBUG nova.network.neutron [req-cb9ad423-7789-45d7-b770-b955caefd57c req-0f4f6b78-a88e-40b4-adff-b3cc7c05411b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Updating instance_info_cache with network_info: [{"id": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "address": "fa:16:3e:28:d3:85", "network": {"id": "bd95237d-0845-479e-9505-318e01879565", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-2097735183-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7be5cb5abaf44b0a9c0c307d348d8f75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdc21586e-25", "ovs_interfaceid": "dc21586e-25cd-4cb5-923b-a4766c5ef9cc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:32:10 np0005593232 nova_compute[250269]: 2026-01-23 10:32:10.839 250273 DEBUG oslo_concurrency.lockutils [req-cb9ad423-7789-45d7-b770-b955caefd57c req-0f4f6b78-a88e-40b4-adff-b3cc7c05411b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-713eba08-716b-48ed-866e-e231d09ebfaf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:32:10 np0005593232 nova_compute[250269]: 2026-01-23 10:32:10.900 250273 DEBUG nova.compute.manager [req-09b520bf-343b-4bf3-929e-84be08236745 req-7f6d50ce-fceb-48ea-b7da-bf0917bd2f09 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Received event network-vif-plugged-dc21586e-25cd-4cb5-923b-a4766c5ef9cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:32:10 np0005593232 nova_compute[250269]: 2026-01-23 10:32:10.901 250273 DEBUG oslo_concurrency.lockutils [req-09b520bf-343b-4bf3-929e-84be08236745 req-7f6d50ce-fceb-48ea-b7da-bf0917bd2f09 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "713eba08-716b-48ed-866e-e231d09ebfaf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:32:10 np0005593232 nova_compute[250269]: 2026-01-23 10:32:10.901 250273 DEBUG oslo_concurrency.lockutils [req-09b520bf-343b-4bf3-929e-84be08236745 req-7f6d50ce-fceb-48ea-b7da-bf0917bd2f09 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "713eba08-716b-48ed-866e-e231d09ebfaf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:32:10 np0005593232 nova_compute[250269]: 2026-01-23 10:32:10.901 250273 DEBUG oslo_concurrency.lockutils [req-09b520bf-343b-4bf3-929e-84be08236745 req-7f6d50ce-fceb-48ea-b7da-bf0917bd2f09 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "713eba08-716b-48ed-866e-e231d09ebfaf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:32:10 np0005593232 nova_compute[250269]: 2026-01-23 10:32:10.901 250273 DEBUG nova.compute.manager [req-09b520bf-343b-4bf3-929e-84be08236745 req-7f6d50ce-fceb-48ea-b7da-bf0917bd2f09 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] No waiting events found dispatching network-vif-plugged-dc21586e-25cd-4cb5-923b-a4766c5ef9cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:32:10 np0005593232 nova_compute[250269]: 2026-01-23 10:32:10.902 250273 WARNING nova.compute.manager [req-09b520bf-343b-4bf3-929e-84be08236745 req-7f6d50ce-fceb-48ea-b7da-bf0917bd2f09 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Received unexpected event network-vif-plugged-dc21586e-25cd-4cb5-923b-a4766c5ef9cc for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:32:10 np0005593232 nova_compute[250269]: 2026-01-23 10:32:10.903 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:32:11 np0005593232 nova_compute[250269]: 2026-01-23 10:32:11.081 250273 DEBUG nova.network.neutron [-] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:32:11 np0005593232 nova_compute[250269]: 2026-01-23 10:32:11.114 250273 INFO nova.compute.manager [-] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Took 1.65 seconds to deallocate network for instance.#033[00m
Jan 23 05:32:11 np0005593232 nova_compute[250269]: 2026-01-23 10:32:11.230 250273 DEBUG oslo_concurrency.lockutils [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:32:11 np0005593232 nova_compute[250269]: 2026-01-23 10:32:11.234 250273 DEBUG nova.compute.manager [req-64ca180a-cc34-4ab5-aaf4-e4ad879d0d96 req-e6703629-44cd-4dac-8d57-047917a2ff6a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Received event network-vif-deleted-dc21586e-25cd-4cb5-923b-a4766c5ef9cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:32:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:32:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1328348431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:32:11 np0005593232 nova_compute[250269]: 2026-01-23 10:32:11.461 250273 DEBUG nova.network.neutron [-] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:32:11 np0005593232 nova_compute[250269]: 2026-01-23 10:32:11.472 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:32:11 np0005593232 nova_compute[250269]: 2026-01-23 10:32:11.479 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:32:11 np0005593232 nova_compute[250269]: 2026-01-23 10:32:11.525 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:32:11 np0005593232 nova_compute[250269]: 2026-01-23 10:32:11.533 250273 INFO nova.compute.manager [-] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Took 2.08 seconds to deallocate network for instance.#033[00m
Jan 23 05:32:11 np0005593232 nova_compute[250269]: 2026-01-23 10:32:11.554 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:32:11 np0005593232 nova_compute[250269]: 2026-01-23 10:32:11.555 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:32:11 np0005593232 nova_compute[250269]: 2026-01-23 10:32:11.555 250273 DEBUG oslo_concurrency.lockutils [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.325s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:32:11 np0005593232 nova_compute[250269]: 2026-01-23 10:32:11.606 250273 DEBUG oslo_concurrency.lockutils [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:32:11 np0005593232 nova_compute[250269]: 2026-01-23 10:32:11.883 250273 DEBUG oslo_concurrency.processutils [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:32:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3118: 321 pgs: 321 active+clean; 500 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 783 KiB/s rd, 3.7 MiB/s wr, 207 op/s
Jan 23 05:32:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:32:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:12.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:32:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:12.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:32:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2073891876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:32:12 np0005593232 nova_compute[250269]: 2026-01-23 10:32:12.469 250273 DEBUG oslo_concurrency.processutils [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:32:12 np0005593232 nova_compute[250269]: 2026-01-23 10:32:12.480 250273 DEBUG nova.compute.provider_tree [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:32:12 np0005593232 nova_compute[250269]: 2026-01-23 10:32:12.662 250273 DEBUG nova.scheduler.client.report [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:32:12 np0005593232 nova_compute[250269]: 2026-01-23 10:32:12.802 250273 DEBUG oslo_concurrency.lockutils [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:32:12 np0005593232 nova_compute[250269]: 2026-01-23 10:32:12.806 250273 DEBUG oslo_concurrency.lockutils [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:32:12 np0005593232 nova_compute[250269]: 2026-01-23 10:32:12.829 250273 INFO nova.scheduler.client.report [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Deleted allocations for instance 713eba08-716b-48ed-866e-e231d09ebfaf#033[00m
Jan 23 05:32:12 np0005593232 nova_compute[250269]: 2026-01-23 10:32:12.957 250273 DEBUG oslo_concurrency.processutils [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:32:13 np0005593232 nova_compute[250269]: 2026-01-23 10:32:13.032 250273 DEBUG oslo_concurrency.lockutils [None req-3bc7d55c-8aed-4d04-9b03-eb3fcab59603 8e1f41f21f79408d8dff1331cfd1e0db 7be5cb5abaf44b0a9c0c307d348d8f75 - - default default] Lock "713eba08-716b-48ed-866e-e231d09ebfaf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:32:13 np0005593232 nova_compute[250269]: 2026-01-23 10:32:13.107 250273 DEBUG nova.compute.manager [req-446b2586-63ae-4988-8c6c-388c725d7d76 req-0121a23a-340f-4b38-a17f-84a987b736e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Received event network-vif-unplugged-77481d6e-4c79-4c13-b9bc-38c906a20223 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:32:13 np0005593232 nova_compute[250269]: 2026-01-23 10:32:13.108 250273 DEBUG oslo_concurrency.lockutils [req-446b2586-63ae-4988-8c6c-388c725d7d76 req-0121a23a-340f-4b38-a17f-84a987b736e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "b467845b-6847-4ed6-8239-9cb93760cfc7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:32:13 np0005593232 nova_compute[250269]: 2026-01-23 10:32:13.108 250273 DEBUG oslo_concurrency.lockutils [req-446b2586-63ae-4988-8c6c-388c725d7d76 req-0121a23a-340f-4b38-a17f-84a987b736e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b467845b-6847-4ed6-8239-9cb93760cfc7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:32:13 np0005593232 nova_compute[250269]: 2026-01-23 10:32:13.109 250273 DEBUG oslo_concurrency.lockutils [req-446b2586-63ae-4988-8c6c-388c725d7d76 req-0121a23a-340f-4b38-a17f-84a987b736e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b467845b-6847-4ed6-8239-9cb93760cfc7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:32:13 np0005593232 nova_compute[250269]: 2026-01-23 10:32:13.109 250273 DEBUG nova.compute.manager [req-446b2586-63ae-4988-8c6c-388c725d7d76 req-0121a23a-340f-4b38-a17f-84a987b736e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] No waiting events found dispatching network-vif-unplugged-77481d6e-4c79-4c13-b9bc-38c906a20223 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:32:13 np0005593232 nova_compute[250269]: 2026-01-23 10:32:13.110 250273 WARNING nova.compute.manager [req-446b2586-63ae-4988-8c6c-388c725d7d76 req-0121a23a-340f-4b38-a17f-84a987b736e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Received unexpected event network-vif-unplugged-77481d6e-4c79-4c13-b9bc-38c906a20223 for instance with vm_state deleted and task_state None.#033[00m
Jan 23 05:32:13 np0005593232 nova_compute[250269]: 2026-01-23 10:32:13.110 250273 DEBUG nova.compute.manager [req-446b2586-63ae-4988-8c6c-388c725d7d76 req-0121a23a-340f-4b38-a17f-84a987b736e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Received event network-vif-plugged-77481d6e-4c79-4c13-b9bc-38c906a20223 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:32:13 np0005593232 nova_compute[250269]: 2026-01-23 10:32:13.111 250273 DEBUG oslo_concurrency.lockutils [req-446b2586-63ae-4988-8c6c-388c725d7d76 req-0121a23a-340f-4b38-a17f-84a987b736e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "b467845b-6847-4ed6-8239-9cb93760cfc7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:32:13 np0005593232 nova_compute[250269]: 2026-01-23 10:32:13.111 250273 DEBUG oslo_concurrency.lockutils [req-446b2586-63ae-4988-8c6c-388c725d7d76 req-0121a23a-340f-4b38-a17f-84a987b736e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b467845b-6847-4ed6-8239-9cb93760cfc7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:32:13 np0005593232 nova_compute[250269]: 2026-01-23 10:32:13.111 250273 DEBUG oslo_concurrency.lockutils [req-446b2586-63ae-4988-8c6c-388c725d7d76 req-0121a23a-340f-4b38-a17f-84a987b736e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b467845b-6847-4ed6-8239-9cb93760cfc7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:32:13 np0005593232 nova_compute[250269]: 2026-01-23 10:32:13.112 250273 DEBUG nova.compute.manager [req-446b2586-63ae-4988-8c6c-388c725d7d76 req-0121a23a-340f-4b38-a17f-84a987b736e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] No waiting events found dispatching network-vif-plugged-77481d6e-4c79-4c13-b9bc-38c906a20223 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:32:13 np0005593232 nova_compute[250269]: 2026-01-23 10:32:13.112 250273 WARNING nova.compute.manager [req-446b2586-63ae-4988-8c6c-388c725d7d76 req-0121a23a-340f-4b38-a17f-84a987b736e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Received unexpected event network-vif-plugged-77481d6e-4c79-4c13-b9bc-38c906a20223 for instance with vm_state deleted and task_state None.#033[00m
Jan 23 05:32:13 np0005593232 nova_compute[250269]: 2026-01-23 10:32:13.113 250273 DEBUG nova.compute.manager [req-446b2586-63ae-4988-8c6c-388c725d7d76 req-0121a23a-340f-4b38-a17f-84a987b736e5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Received event network-vif-deleted-77481d6e-4c79-4c13-b9bc-38c906a20223 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:32:13 np0005593232 nova_compute[250269]: 2026-01-23 10:32:13.201 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:32:13 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3675048649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:32:13 np0005593232 nova_compute[250269]: 2026-01-23 10:32:13.491 250273 DEBUG oslo_concurrency.processutils [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:32:13 np0005593232 nova_compute[250269]: 2026-01-23 10:32:13.500 250273 DEBUG nova.compute.provider_tree [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:32:13 np0005593232 nova_compute[250269]: 2026-01-23 10:32:13.532 250273 DEBUG nova.scheduler.client.report [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:32:13 np0005593232 nova_compute[250269]: 2026-01-23 10:32:13.566 250273 DEBUG oslo_concurrency.lockutils [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:32:13 np0005593232 nova_compute[250269]: 2026-01-23 10:32:13.638 250273 INFO nova.scheduler.client.report [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Deleted allocations for instance b467845b-6847-4ed6-8239-9cb93760cfc7#033[00m
Jan 23 05:32:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:32:13 np0005593232 nova_compute[250269]: 2026-01-23 10:32:13.783 250273 DEBUG oslo_concurrency.lockutils [None req-aa4ffc46-1014-4952-b0b5-1114905a3e85 a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "b467845b-6847-4ed6-8239-9cb93760cfc7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:32:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3119: 321 pgs: 321 active+clean; 453 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 521 KiB/s rd, 1.8 MiB/s wr, 154 op/s
Jan 23 05:32:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:14.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:32:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:14.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:32:15 np0005593232 nova_compute[250269]: 2026-01-23 10:32:15.035 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:15 np0005593232 nova_compute[250269]: 2026-01-23 10:32:15.557 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:32:15 np0005593232 nova_compute[250269]: 2026-01-23 10:32:15.558 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:32:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3120: 321 pgs: 321 active+clean; 453 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 190 KiB/s rd, 565 KiB/s wr, 115 op/s
Jan 23 05:32:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:32:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:16.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:32:16 np0005593232 nova_compute[250269]: 2026-01-23 10:32:16.123 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-7fed3293-7f29-4792-952a-17b0d8962482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:32:16 np0005593232 nova_compute[250269]: 2026-01-23 10:32:16.124 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-7fed3293-7f29-4792-952a-17b0d8962482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:32:16 np0005593232 nova_compute[250269]: 2026-01-23 10:32:16.125 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:32:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:32:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:16.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:32:16 np0005593232 nova_compute[250269]: 2026-01-23 10:32:16.942 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:17 np0005593232 nova_compute[250269]: 2026-01-23 10:32:17.445 250273 DEBUG nova.compute.manager [req-eaf6edcc-5345-42c7-bab1-4e1ca236728e req-38a2930e-a800-4468-9399-c552483a1d09 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Received event network-changed-8019b64f-0aa7-4f8c-9650-9de06caea07e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:32:17 np0005593232 nova_compute[250269]: 2026-01-23 10:32:17.446 250273 DEBUG nova.compute.manager [req-eaf6edcc-5345-42c7-bab1-4e1ca236728e req-38a2930e-a800-4468-9399-c552483a1d09 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Refreshing instance network info cache due to event network-changed-8019b64f-0aa7-4f8c-9650-9de06caea07e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:32:17 np0005593232 nova_compute[250269]: 2026-01-23 10:32:17.447 250273 DEBUG oslo_concurrency.lockutils [req-eaf6edcc-5345-42c7-bab1-4e1ca236728e req-38a2930e-a800-4468-9399-c552483a1d09 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-7fed3293-7f29-4792-952a-17b0d8962482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:32:17 np0005593232 nova_compute[250269]: 2026-01-23 10:32:17.590 250273 DEBUG oslo_concurrency.lockutils [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "7fed3293-7f29-4792-952a-17b0d8962482" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:32:17 np0005593232 nova_compute[250269]: 2026-01-23 10:32:17.591 250273 DEBUG oslo_concurrency.lockutils [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "7fed3293-7f29-4792-952a-17b0d8962482" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:32:17 np0005593232 nova_compute[250269]: 2026-01-23 10:32:17.592 250273 DEBUG oslo_concurrency.lockutils [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "7fed3293-7f29-4792-952a-17b0d8962482-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:32:17 np0005593232 nova_compute[250269]: 2026-01-23 10:32:17.592 250273 DEBUG oslo_concurrency.lockutils [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "7fed3293-7f29-4792-952a-17b0d8962482-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:32:17 np0005593232 nova_compute[250269]: 2026-01-23 10:32:17.593 250273 DEBUG oslo_concurrency.lockutils [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "7fed3293-7f29-4792-952a-17b0d8962482-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:32:17 np0005593232 nova_compute[250269]: 2026-01-23 10:32:17.595 250273 INFO nova.compute.manager [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Terminating instance#033[00m
Jan 23 05:32:17 np0005593232 nova_compute[250269]: 2026-01-23 10:32:17.598 250273 DEBUG nova.compute.manager [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:32:17 np0005593232 kernel: tap8019b64f-0a (unregistering): left promiscuous mode
Jan 23 05:32:17 np0005593232 NetworkManager[49057]: <info>  [1769164337.6927] device (tap8019b64f-0a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:32:17 np0005593232 ovn_controller[151001]: 2026-01-23T10:32:17Z|00714|binding|INFO|Releasing lport 8019b64f-0aa7-4f8c-9650-9de06caea07e from this chassis (sb_readonly=0)
Jan 23 05:32:17 np0005593232 nova_compute[250269]: 2026-01-23 10:32:17.702 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:17 np0005593232 ovn_controller[151001]: 2026-01-23T10:32:17Z|00715|binding|INFO|Setting lport 8019b64f-0aa7-4f8c-9650-9de06caea07e down in Southbound
Jan 23 05:32:17 np0005593232 ovn_controller[151001]: 2026-01-23T10:32:17Z|00716|binding|INFO|Removing iface tap8019b64f-0a ovn-installed in OVS
Jan 23 05:32:17 np0005593232 nova_compute[250269]: 2026-01-23 10:32:17.707 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:17.716 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:f7:02 10.100.0.8'], port_security=['fa:16:3e:6e:f7:02 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '7fed3293-7f29-4792-952a-17b0d8962482', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4d8920ab-3f59-40bb-a223-b071277b1888', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b27af793a8cc42259216fbeaa302ba03', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1feca004-4cb8-48c6-97b5-f4e0f9b48a29 571af405-339d-428c-b637-699c80b29e30', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90aca808-149b-4d83-8945-235dd296217f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=8019b64f-0aa7-4f8c-9650-9de06caea07e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:32:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:17.717 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 8019b64f-0aa7-4f8c-9650-9de06caea07e in datapath 4d8920ab-3f59-40bb-a223-b071277b1888 unbound from our chassis#033[00m
Jan 23 05:32:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:17.719 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4d8920ab-3f59-40bb-a223-b071277b1888, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:32:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:17.720 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4e48835f-988c-4bf3-bcf5-d65b84ed892a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:32:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:17.721 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888 namespace which is not needed anymore#033[00m
Jan 23 05:32:17 np0005593232 nova_compute[250269]: 2026-01-23 10:32:17.746 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:17 np0005593232 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000b0.scope: Deactivated successfully.
Jan 23 05:32:17 np0005593232 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d000000b0.scope: Consumed 19.097s CPU time.
Jan 23 05:32:17 np0005593232 systemd-machined[215836]: Machine qemu-79-instance-000000b0 terminated.
Jan 23 05:32:17 np0005593232 nova_compute[250269]: 2026-01-23 10:32:17.844 250273 INFO nova.virt.libvirt.driver [-] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Instance destroyed successfully.#033[00m
Jan 23 05:32:17 np0005593232 nova_compute[250269]: 2026-01-23 10:32:17.846 250273 DEBUG nova.objects.instance [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lazy-loading 'resources' on Instance uuid 7fed3293-7f29-4792-952a-17b0d8962482 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:32:17 np0005593232 neutron-haproxy-ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888[369346]: [NOTICE]   (369351) : haproxy version is 2.8.14-c23fe91
Jan 23 05:32:17 np0005593232 neutron-haproxy-ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888[369346]: [NOTICE]   (369351) : path to executable is /usr/sbin/haproxy
Jan 23 05:32:17 np0005593232 neutron-haproxy-ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888[369346]: [WARNING]  (369351) : Exiting Master process...
Jan 23 05:32:17 np0005593232 neutron-haproxy-ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888[369346]: [ALERT]    (369351) : Current worker (369353) exited with code 143 (Terminated)
Jan 23 05:32:17 np0005593232 neutron-haproxy-ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888[369346]: [WARNING]  (369351) : All workers exited. Exiting... (0)
Jan 23 05:32:17 np0005593232 systemd[1]: libpod-b7b1a9ce2d44542fb71bc9c6f04a477c8d1087f3cbb35b04d13b925954d6ba34.scope: Deactivated successfully.
Jan 23 05:32:17 np0005593232 podman[371411]: 2026-01-23 10:32:17.943156403 +0000 UTC m=+0.057824345 container died b7b1a9ce2d44542fb71bc9c6f04a477c8d1087f3cbb35b04d13b925954d6ba34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 23 05:32:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b7b1a9ce2d44542fb71bc9c6f04a477c8d1087f3cbb35b04d13b925954d6ba34-userdata-shm.mount: Deactivated successfully.
Jan 23 05:32:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay-31bc6c5c71c9dff3d80bc7ed33565de6cb819553c86f7fa2e6f123479112e5ba-merged.mount: Deactivated successfully.
Jan 23 05:32:17 np0005593232 podman[371411]: 2026-01-23 10:32:17.993169934 +0000 UTC m=+0.107837916 container cleanup b7b1a9ce2d44542fb71bc9c6f04a477c8d1087f3cbb35b04d13b925954d6ba34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 05:32:18 np0005593232 systemd[1]: libpod-conmon-b7b1a9ce2d44542fb71bc9c6f04a477c8d1087f3cbb35b04d13b925954d6ba34.scope: Deactivated successfully.
Jan 23 05:32:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3121: 321 pgs: 321 active+clean; 453 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 45 KiB/s rd, 22 KiB/s wr, 67 op/s
Jan 23 05:32:18 np0005593232 nova_compute[250269]: 2026-01-23 10:32:18.033 250273 DEBUG nova.virt.libvirt.vif [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:30:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-622349977-access_point-839314558',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-622349977-access_point-839314558',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-622349977-acc',id=176,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKupE4epLmgYLAdAj+FLKAkKAmBXmdOwgX+oMoeS46mz1daV80ym+/nNG6TQn7iL9TDYCuW4Gc2E4iMSeMZBjYm+yTMAaXHo2qMMkDwzwxd8ZHn30a3jeIEr/ZWv2szkRw==',key_name='tempest-TestSecurityGroupsBasicOps-55099826',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:30:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b27af793a8cc42259216fbeaa302ba03',ramdisk_id='',reservation_id='r-sgbpa098',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-622349977',owner_user_name='tempest-TestSecurityGroupsBasicOps-622349977-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:30:47Z,user_data=None,user_id='a3cd8c3758e14f9c8e4ad1a9a94a9995',uuid=7fed3293-7f29-4792-952a-17b0d8962482,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8019b64f-0aa7-4f8c-9650-9de06caea07e", "address": "fa:16:3e:6e:f7:02", "network": {"id": "4d8920ab-3f59-40bb-a223-b071277b1888", "bridge": "br-int", "label": "tempest-network-smoke--1695810502", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8019b64f-0a", "ovs_interfaceid": "8019b64f-0aa7-4f8c-9650-9de06caea07e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:32:18 np0005593232 nova_compute[250269]: 2026-01-23 10:32:18.033 250273 DEBUG nova.network.os_vif_util [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Converting VIF {"id": "8019b64f-0aa7-4f8c-9650-9de06caea07e", "address": "fa:16:3e:6e:f7:02", "network": {"id": "4d8920ab-3f59-40bb-a223-b071277b1888", "bridge": "br-int", "label": "tempest-network-smoke--1695810502", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8019b64f-0a", "ovs_interfaceid": "8019b64f-0aa7-4f8c-9650-9de06caea07e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:32:18 np0005593232 nova_compute[250269]: 2026-01-23 10:32:18.034 250273 DEBUG nova.network.os_vif_util [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6e:f7:02,bridge_name='br-int',has_traffic_filtering=True,id=8019b64f-0aa7-4f8c-9650-9de06caea07e,network=Network(4d8920ab-3f59-40bb-a223-b071277b1888),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8019b64f-0a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:32:18 np0005593232 nova_compute[250269]: 2026-01-23 10:32:18.034 250273 DEBUG os_vif [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:f7:02,bridge_name='br-int',has_traffic_filtering=True,id=8019b64f-0aa7-4f8c-9650-9de06caea07e,network=Network(4d8920ab-3f59-40bb-a223-b071277b1888),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8019b64f-0a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:32:18 np0005593232 nova_compute[250269]: 2026-01-23 10:32:18.035 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:18 np0005593232 nova_compute[250269]: 2026-01-23 10:32:18.036 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8019b64f-0a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:32:18 np0005593232 nova_compute[250269]: 2026-01-23 10:32:18.037 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:18 np0005593232 nova_compute[250269]: 2026-01-23 10:32:18.039 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:32:18 np0005593232 nova_compute[250269]: 2026-01-23 10:32:18.040 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:18 np0005593232 nova_compute[250269]: 2026-01-23 10:32:18.043 250273 INFO os_vif [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:f7:02,bridge_name='br-int',has_traffic_filtering=True,id=8019b64f-0aa7-4f8c-9650-9de06caea07e,network=Network(4d8920ab-3f59-40bb-a223-b071277b1888),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8019b64f-0a')#033[00m
Jan 23 05:32:18 np0005593232 podman[371441]: 2026-01-23 10:32:18.074264079 +0000 UTC m=+0.051690920 container remove b7b1a9ce2d44542fb71bc9c6f04a477c8d1087f3cbb35b04d13b925954d6ba34 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 23 05:32:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:18.081 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[07adc8ee-bc26-4b1a-b686-965e1b70ad41]: (4, ('Fri Jan 23 10:32:17 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888 (b7b1a9ce2d44542fb71bc9c6f04a477c8d1087f3cbb35b04d13b925954d6ba34)\nb7b1a9ce2d44542fb71bc9c6f04a477c8d1087f3cbb35b04d13b925954d6ba34\nFri Jan 23 10:32:18 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888 (b7b1a9ce2d44542fb71bc9c6f04a477c8d1087f3cbb35b04d13b925954d6ba34)\nb7b1a9ce2d44542fb71bc9c6f04a477c8d1087f3cbb35b04d13b925954d6ba34\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:32:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:18.084 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1a1539da-cb38-454a-a420-63b60887a59c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:32:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:18.085 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4d8920ab-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:32:18 np0005593232 nova_compute[250269]: 2026-01-23 10:32:18.087 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:18 np0005593232 kernel: tap4d8920ab-30: left promiscuous mode
Jan 23 05:32:18 np0005593232 nova_compute[250269]: 2026-01-23 10:32:18.102 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:18.105 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d720f85c-fae4-4d3b-b624-c8f0ffb08340]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:32:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:32:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:18.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:32:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:18.121 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2adb8fb1-dfbb-4871-b83f-7a75c0c8d7f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:32:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:18.123 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a965f32b-39b3-448f-b1bf-90b65fc47461]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:32:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:18.146 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[89801ad1-4883-4753-94b0-bc963aa465c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 815828, 'reachable_time': 35108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371474, 'error': None, 'target': 'ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:32:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:18.148 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4d8920ab-3f59-40bb-a223-b071277b1888 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:32:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:18.149 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[47ae774d-ebde-4238-ad5a-f7a5d4f74b13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:32:18 np0005593232 systemd[1]: run-netns-ovnmeta\x2d4d8920ab\x2d3f59\x2d40bb\x2da223\x2db071277b1888.mount: Deactivated successfully.
Jan 23 05:32:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:18.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:18 np0005593232 nova_compute[250269]: 2026-01-23 10:32:18.493 250273 INFO nova.virt.libvirt.driver [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Deleting instance files /var/lib/nova/instances/7fed3293-7f29-4792-952a-17b0d8962482_del#033[00m
Jan 23 05:32:18 np0005593232 nova_compute[250269]: 2026-01-23 10:32:18.495 250273 INFO nova.virt.libvirt.driver [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Deletion of /var/lib/nova/instances/7fed3293-7f29-4792-952a-17b0d8962482_del complete#033[00m
Jan 23 05:32:18 np0005593232 nova_compute[250269]: 2026-01-23 10:32:18.566 250273 INFO nova.compute.manager [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Took 0.97 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:32:18 np0005593232 nova_compute[250269]: 2026-01-23 10:32:18.566 250273 DEBUG oslo.service.loopingcall [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:32:18 np0005593232 nova_compute[250269]: 2026-01-23 10:32:18.567 250273 DEBUG nova.compute.manager [-] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:32:18 np0005593232 nova_compute[250269]: 2026-01-23 10:32:18.567 250273 DEBUG nova.network.neutron [-] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:32:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:32:19 np0005593232 nova_compute[250269]: 2026-01-23 10:32:19.566 250273 DEBUG nova.compute.manager [req-94ca271c-a7fc-4088-bc5b-96801fa51b15 req-4db5cb01-47f5-4327-beff-93c34e10eb8f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Received event network-vif-unplugged-8019b64f-0aa7-4f8c-9650-9de06caea07e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:32:19 np0005593232 nova_compute[250269]: 2026-01-23 10:32:19.566 250273 DEBUG oslo_concurrency.lockutils [req-94ca271c-a7fc-4088-bc5b-96801fa51b15 req-4db5cb01-47f5-4327-beff-93c34e10eb8f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "7fed3293-7f29-4792-952a-17b0d8962482-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:32:19 np0005593232 nova_compute[250269]: 2026-01-23 10:32:19.567 250273 DEBUG oslo_concurrency.lockutils [req-94ca271c-a7fc-4088-bc5b-96801fa51b15 req-4db5cb01-47f5-4327-beff-93c34e10eb8f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7fed3293-7f29-4792-952a-17b0d8962482-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:32:19 np0005593232 nova_compute[250269]: 2026-01-23 10:32:19.567 250273 DEBUG oslo_concurrency.lockutils [req-94ca271c-a7fc-4088-bc5b-96801fa51b15 req-4db5cb01-47f5-4327-beff-93c34e10eb8f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7fed3293-7f29-4792-952a-17b0d8962482-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:32:19 np0005593232 nova_compute[250269]: 2026-01-23 10:32:19.567 250273 DEBUG nova.compute.manager [req-94ca271c-a7fc-4088-bc5b-96801fa51b15 req-4db5cb01-47f5-4327-beff-93c34e10eb8f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] No waiting events found dispatching network-vif-unplugged-8019b64f-0aa7-4f8c-9650-9de06caea07e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:32:19 np0005593232 nova_compute[250269]: 2026-01-23 10:32:19.567 250273 DEBUG nova.compute.manager [req-94ca271c-a7fc-4088-bc5b-96801fa51b15 req-4db5cb01-47f5-4327-beff-93c34e10eb8f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Received event network-vif-unplugged-8019b64f-0aa7-4f8c-9650-9de06caea07e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:32:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3122: 321 pgs: 321 active+clean; 418 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 41 KiB/s rd, 23 KiB/s wr, 63 op/s
Jan 23 05:32:20 np0005593232 nova_compute[250269]: 2026-01-23 10:32:20.038 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:20.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:20.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:20 np0005593232 nova_compute[250269]: 2026-01-23 10:32:20.869 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Updating instance_info_cache with network_info: [{"id": "8019b64f-0aa7-4f8c-9650-9de06caea07e", "address": "fa:16:3e:6e:f7:02", "network": {"id": "4d8920ab-3f59-40bb-a223-b071277b1888", "bridge": "br-int", "label": "tempest-network-smoke--1695810502", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8019b64f-0a", "ovs_interfaceid": "8019b64f-0aa7-4f8c-9650-9de06caea07e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:32:20 np0005593232 nova_compute[250269]: 2026-01-23 10:32:20.901 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-7fed3293-7f29-4792-952a-17b0d8962482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:32:20 np0005593232 nova_compute[250269]: 2026-01-23 10:32:20.902 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:32:20 np0005593232 nova_compute[250269]: 2026-01-23 10:32:20.902 250273 DEBUG oslo_concurrency.lockutils [req-eaf6edcc-5345-42c7-bab1-4e1ca236728e req-38a2930e-a800-4468-9399-c552483a1d09 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-7fed3293-7f29-4792-952a-17b0d8962482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:32:20 np0005593232 nova_compute[250269]: 2026-01-23 10:32:20.902 250273 DEBUG nova.network.neutron [req-eaf6edcc-5345-42c7-bab1-4e1ca236728e req-38a2930e-a800-4468-9399-c552483a1d09 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Refreshing network info cache for port 8019b64f-0aa7-4f8c-9650-9de06caea07e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:32:21 np0005593232 nova_compute[250269]: 2026-01-23 10:32:21.080 250273 DEBUG nova.network.neutron [-] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:32:21 np0005593232 nova_compute[250269]: 2026-01-23 10:32:21.134 250273 INFO nova.compute.manager [-] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Took 2.57 seconds to deallocate network for instance.#033[00m
Jan 23 05:32:21 np0005593232 nova_compute[250269]: 2026-01-23 10:32:21.211 250273 DEBUG oslo_concurrency.lockutils [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:32:21 np0005593232 nova_compute[250269]: 2026-01-23 10:32:21.211 250273 DEBUG oslo_concurrency.lockutils [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:32:21 np0005593232 nova_compute[250269]: 2026-01-23 10:32:21.273 250273 DEBUG nova.compute.manager [req-3d8bc5a3-ebb1-4927-9d8d-54147fc34007 req-7ea6ffd0-58db-4ff0-a6ce-e68f15e201a3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Received event network-vif-deleted-8019b64f-0aa7-4f8c-9650-9de06caea07e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:32:21 np0005593232 nova_compute[250269]: 2026-01-23 10:32:21.295 250273 DEBUG oslo_concurrency.processutils [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:32:21 np0005593232 nova_compute[250269]: 2026-01-23 10:32:21.341 250273 INFO nova.network.neutron [req-eaf6edcc-5345-42c7-bab1-4e1ca236728e req-38a2930e-a800-4468-9399-c552483a1d09 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Port 8019b64f-0aa7-4f8c-9650-9de06caea07e from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Jan 23 05:32:21 np0005593232 nova_compute[250269]: 2026-01-23 10:32:21.343 250273 DEBUG nova.network.neutron [req-eaf6edcc-5345-42c7-bab1-4e1ca236728e req-38a2930e-a800-4468-9399-c552483a1d09 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:32:21 np0005593232 nova_compute[250269]: 2026-01-23 10:32:21.366 250273 DEBUG oslo_concurrency.lockutils [req-eaf6edcc-5345-42c7-bab1-4e1ca236728e req-38a2930e-a800-4468-9399-c552483a1d09 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-7fed3293-7f29-4792-952a-17b0d8962482" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:32:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:32:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2877944839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:32:21 np0005593232 nova_compute[250269]: 2026-01-23 10:32:21.756 250273 DEBUG oslo_concurrency.processutils [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:32:21 np0005593232 nova_compute[250269]: 2026-01-23 10:32:21.765 250273 DEBUG nova.compute.provider_tree [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:32:21 np0005593232 nova_compute[250269]: 2026-01-23 10:32:21.781 250273 DEBUG nova.scheduler.client.report [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:32:21 np0005593232 nova_compute[250269]: 2026-01-23 10:32:21.819 250273 DEBUG oslo_concurrency.lockutils [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:32:21 np0005593232 nova_compute[250269]: 2026-01-23 10:32:21.846 250273 DEBUG nova.compute.manager [req-a28feaa4-ef62-41fb-b265-43c32b08cc98 req-00c59b45-129f-4b3e-8718-7f489dcb47b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Received event network-vif-plugged-8019b64f-0aa7-4f8c-9650-9de06caea07e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:32:21 np0005593232 nova_compute[250269]: 2026-01-23 10:32:21.846 250273 DEBUG oslo_concurrency.lockutils [req-a28feaa4-ef62-41fb-b265-43c32b08cc98 req-00c59b45-129f-4b3e-8718-7f489dcb47b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "7fed3293-7f29-4792-952a-17b0d8962482-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:32:21 np0005593232 nova_compute[250269]: 2026-01-23 10:32:21.846 250273 DEBUG oslo_concurrency.lockutils [req-a28feaa4-ef62-41fb-b265-43c32b08cc98 req-00c59b45-129f-4b3e-8718-7f489dcb47b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7fed3293-7f29-4792-952a-17b0d8962482-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:32:21 np0005593232 nova_compute[250269]: 2026-01-23 10:32:21.846 250273 DEBUG oslo_concurrency.lockutils [req-a28feaa4-ef62-41fb-b265-43c32b08cc98 req-00c59b45-129f-4b3e-8718-7f489dcb47b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "7fed3293-7f29-4792-952a-17b0d8962482-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:32:21 np0005593232 nova_compute[250269]: 2026-01-23 10:32:21.847 250273 DEBUG nova.compute.manager [req-a28feaa4-ef62-41fb-b265-43c32b08cc98 req-00c59b45-129f-4b3e-8718-7f489dcb47b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] No waiting events found dispatching network-vif-plugged-8019b64f-0aa7-4f8c-9650-9de06caea07e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:32:21 np0005593232 nova_compute[250269]: 2026-01-23 10:32:21.847 250273 WARNING nova.compute.manager [req-a28feaa4-ef62-41fb-b265-43c32b08cc98 req-00c59b45-129f-4b3e-8718-7f489dcb47b5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Received unexpected event network-vif-plugged-8019b64f-0aa7-4f8c-9650-9de06caea07e for instance with vm_state deleted and task_state None.#033[00m
Jan 23 05:32:21 np0005593232 nova_compute[250269]: 2026-01-23 10:32:21.922 250273 INFO nova.scheduler.client.report [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Deleted allocations for instance 7fed3293-7f29-4792-952a-17b0d8962482#033[00m
Jan 23 05:32:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3123: 321 pgs: 321 active+clean; 407 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 34 KiB/s wr, 52 op/s
Jan 23 05:32:22 np0005593232 nova_compute[250269]: 2026-01-23 10:32:22.087 250273 DEBUG oslo_concurrency.lockutils [None req-76f33cef-bd9c-4fa6-a399-f2802c4196bb a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "7fed3293-7f29-4792-952a-17b0d8962482" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.496s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:32:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:22.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:32:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:22.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:32:22 np0005593232 nova_compute[250269]: 2026-01-23 10:32:22.505 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769164327.5036552, 713eba08-716b-48ed-866e-e231d09ebfaf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:32:22 np0005593232 nova_compute[250269]: 2026-01-23 10:32:22.506 250273 INFO nova.compute.manager [-] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:32:22 np0005593232 nova_compute[250269]: 2026-01-23 10:32:22.543 250273 DEBUG nova.compute.manager [None req-ef59c200-bfa4-4b3d-8aa7-f598145448f5 - - - - - -] [instance: 713eba08-716b-48ed-866e-e231d09ebfaf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:32:22 np0005593232 nova_compute[250269]: 2026-01-23 10:32:22.633 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:32:22 np0005593232 nova_compute[250269]: 2026-01-23 10:32:22.861 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:23 np0005593232 nova_compute[250269]: 2026-01-23 10:32:23.037 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:23 np0005593232 nova_compute[250269]: 2026-01-23 10:32:23.131 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769164328.1292975, b467845b-6847-4ed6-8239-9cb93760cfc7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:32:23 np0005593232 nova_compute[250269]: 2026-01-23 10:32:23.131 250273 INFO nova.compute.manager [-] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:32:23 np0005593232 nova_compute[250269]: 2026-01-23 10:32:23.165 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:23 np0005593232 nova_compute[250269]: 2026-01-23 10:32:23.339 250273 DEBUG nova.compute.manager [None req-500ae3b0-8e60-45ca-bb6f-db3374ee8783 - - - - - -] [instance: b467845b-6847-4ed6-8239-9cb93760cfc7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:32:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:32:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3124: 321 pgs: 321 active+clean; 373 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 25 KiB/s wr, 54 op/s
Jan 23 05:32:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:24.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:24.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:25 np0005593232 nova_compute[250269]: 2026-01-23 10:32:25.040 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:25 np0005593232 podman[371706]: 2026-01-23 10:32:25.104941614 +0000 UTC m=+0.120802325 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, 
org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:32:25 np0005593232 podman[371848]: 2026-01-23 10:32:25.487994891 +0000 UTC m=+0.043844527 container create 70277f1c9b68d00d0a5464ecb68a9b2817e3900ec5349a27e6d1c84911ce0df9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 05:32:25 np0005593232 systemd[1]: Started libpod-conmon-70277f1c9b68d00d0a5464ecb68a9b2817e3900ec5349a27e6d1c84911ce0df9.scope.
Jan 23 05:32:25 np0005593232 podman[371848]: 2026-01-23 10:32:25.46964874 +0000 UTC m=+0.025498396 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:32:25 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:32:25 np0005593232 podman[371848]: 2026-01-23 10:32:25.587375736 +0000 UTC m=+0.143225392 container init 70277f1c9b68d00d0a5464ecb68a9b2817e3900ec5349a27e6d1c84911ce0df9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dewdney, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 23 05:32:25 np0005593232 podman[371848]: 2026-01-23 10:32:25.594013855 +0000 UTC m=+0.149863491 container start 70277f1c9b68d00d0a5464ecb68a9b2817e3900ec5349a27e6d1c84911ce0df9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dewdney, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:32:25 np0005593232 podman[371848]: 2026-01-23 10:32:25.596791854 +0000 UTC m=+0.152641520 container attach 70277f1c9b68d00d0a5464ecb68a9b2817e3900ec5349a27e6d1c84911ce0df9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 23 05:32:25 np0005593232 charming_dewdney[371865]: 167 167
Jan 23 05:32:25 np0005593232 systemd[1]: libpod-70277f1c9b68d00d0a5464ecb68a9b2817e3900ec5349a27e6d1c84911ce0df9.scope: Deactivated successfully.
Jan 23 05:32:25 np0005593232 podman[371848]: 2026-01-23 10:32:25.600510979 +0000 UTC m=+0.156360665 container died 70277f1c9b68d00d0a5464ecb68a9b2817e3900ec5349a27e6d1c84911ce0df9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:32:25 np0005593232 systemd[1]: var-lib-containers-storage-overlay-676c4c15367b41c8b5b1453c8689232253f8f3ecd05be24436cd0d4b2e8246b2-merged.mount: Deactivated successfully.
Jan 23 05:32:25 np0005593232 podman[371848]: 2026-01-23 10:32:25.649921324 +0000 UTC m=+0.205770960 container remove 70277f1c9b68d00d0a5464ecb68a9b2817e3900ec5349a27e6d1c84911ce0df9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 05:32:25 np0005593232 systemd[1]: libpod-conmon-70277f1c9b68d00d0a5464ecb68a9b2817e3900ec5349a27e6d1c84911ce0df9.scope: Deactivated successfully.
Jan 23 05:32:25 np0005593232 podman[371888]: 2026-01-23 10:32:25.817376364 +0000 UTC m=+0.044918028 container create a580e6521fcc01a15cb4ff8f186c1aa499d09d4cf37c9280f1f30fc44de83082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_feistel, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 05:32:25 np0005593232 systemd[1]: Started libpod-conmon-a580e6521fcc01a15cb4ff8f186c1aa499d09d4cf37c9280f1f30fc44de83082.scope.
Jan 23 05:32:25 np0005593232 podman[371888]: 2026-01-23 10:32:25.798117116 +0000 UTC m=+0.025658800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:32:25 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:32:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95bca09c75000a584b9ec6761293a1093830bc1f2bd20a67f5094899c5b58f39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:32:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95bca09c75000a584b9ec6761293a1093830bc1f2bd20a67f5094899c5b58f39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:32:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95bca09c75000a584b9ec6761293a1093830bc1f2bd20a67f5094899c5b58f39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:32:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95bca09c75000a584b9ec6761293a1093830bc1f2bd20a67f5094899c5b58f39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:32:25 np0005593232 podman[371888]: 2026-01-23 10:32:25.929853901 +0000 UTC m=+0.157395585 container init a580e6521fcc01a15cb4ff8f186c1aa499d09d4cf37c9280f1f30fc44de83082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:32:25 np0005593232 podman[371888]: 2026-01-23 10:32:25.936631613 +0000 UTC m=+0.164173277 container start a580e6521fcc01a15cb4ff8f186c1aa499d09d4cf37c9280f1f30fc44de83082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:32:25 np0005593232 podman[371888]: 2026-01-23 10:32:25.939643969 +0000 UTC m=+0.167185653 container attach a580e6521fcc01a15cb4ff8f186c1aa499d09d4cf37c9280f1f30fc44de83082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_feistel, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:32:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3125: 321 pgs: 321 active+clean; 373 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 23 KiB/s wr, 32 op/s
Jan 23 05:32:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:26.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:26.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 05:32:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:32:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 05:32:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]: [
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:    {
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:        "available": false,
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:        "ceph_device": false,
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:        "lsm_data": {},
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:        "lvs": [],
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:        "path": "/dev/sr0",
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:        "rejected_reasons": [
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "Has a FileSystem",
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "Insufficient space (<5GB)"
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:        ],
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:        "sys_api": {
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "actuators": null,
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "device_nodes": "sr0",
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "devname": "sr0",
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "human_readable_size": "482.00 KB",
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "id_bus": "ata",
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "model": "QEMU DVD-ROM",
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "nr_requests": "2",
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "parent": "/dev/sr0",
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "partitions": {},
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "path": "/dev/sr0",
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "removable": "1",
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "rev": "2.5+",
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "ro": "0",
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "rotational": "1",
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "sas_address": "",
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "sas_device_handle": "",
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "scheduler_mode": "mq-deadline",
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "sectors": 0,
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "sectorsize": "2048",
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "size": 493568.0,
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "support_discard": "2048",
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "type": "disk",
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:            "vendor": "QEMU"
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:        }
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]:    }
Jan 23 05:32:27 np0005593232 stupefied_feistel[371904]: ]
Jan 23 05:32:27 np0005593232 systemd[1]: libpod-a580e6521fcc01a15cb4ff8f186c1aa499d09d4cf37c9280f1f30fc44de83082.scope: Deactivated successfully.
Jan 23 05:32:27 np0005593232 podman[371888]: 2026-01-23 10:32:27.192439048 +0000 UTC m=+1.419980742 container died a580e6521fcc01a15cb4ff8f186c1aa499d09d4cf37c9280f1f30fc44de83082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_feistel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Jan 23 05:32:27 np0005593232 systemd[1]: libpod-a580e6521fcc01a15cb4ff8f186c1aa499d09d4cf37c9280f1f30fc44de83082.scope: Consumed 1.260s CPU time.
Jan 23 05:32:27 np0005593232 systemd[1]: var-lib-containers-storage-overlay-95bca09c75000a584b9ec6761293a1093830bc1f2bd20a67f5094899c5b58f39-merged.mount: Deactivated successfully.
Jan 23 05:32:27 np0005593232 podman[371888]: 2026-01-23 10:32:27.286198073 +0000 UTC m=+1.513739737 container remove a580e6521fcc01a15cb4ff8f186c1aa499d09d4cf37c9280f1f30fc44de83082 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_feistel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 23 05:32:27 np0005593232 systemd[1]: libpod-conmon-a580e6521fcc01a15cb4ff8f186c1aa499d09d4cf37c9280f1f30fc44de83082.scope: Deactivated successfully.
Jan 23 05:32:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:32:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:32:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:32:27 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:32:27 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:32:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:32:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 05:32:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:32:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 05:32:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 05:32:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3126: 321 pgs: 321 active+clean; 373 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 23 KiB/s wr, 43 op/s
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 05:32:28 np0005593232 nova_compute[250269]: 2026-01-23 10:32:28.040 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:32:28 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c0fca01c-9434-4d1a-9531-c5666032bb82 does not exist
Jan 23 05:32:28 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 71705079-2afc-4170-b0cb-c9cf38673754 does not exist
Jan 23 05:32:28 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 306513b3-f59f-4ea0-8e1f-ca3d65193502 does not exist
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:32:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:28.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:28.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:28.755138) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164348755392, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1010, "num_deletes": 253, "total_data_size": 1488591, "memory_usage": 1510456, "flush_reason": "Manual Compaction"}
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164348769884, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 1052623, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68700, "largest_seqno": 69709, "table_properties": {"data_size": 1048141, "index_size": 2005, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11611, "raw_average_key_size": 21, "raw_value_size": 1038658, "raw_average_value_size": 1930, "num_data_blocks": 86, "num_entries": 538, "num_filter_entries": 538, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769164283, "oldest_key_time": 1769164283, "file_creation_time": 1769164348, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 14743 microseconds, and 10696 cpu microseconds.
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:28.769918) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 1052623 bytes OK
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:28.769937) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:28.771216) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:28.771229) EVENT_LOG_v1 {"time_micros": 1769164348771225, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:28.771251) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 1483728, prev total WAL file size 1483728, number of live WAL files 2.
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:28.772209) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353038' seq:72057594037927935, type:22 .. '6D6772737461740032373539' seq:0, type:0; will stop at (end)
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(1027KB)], [158(13MB)]
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164348772434, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 14811230, "oldest_snapshot_seqno": -1}
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 9411 keys, 11351018 bytes, temperature: kUnknown
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164348869199, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 11351018, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11292231, "index_size": 34162, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23557, "raw_key_size": 248154, "raw_average_key_size": 26, "raw_value_size": 11129171, "raw_average_value_size": 1182, "num_data_blocks": 1305, "num_entries": 9411, "num_filter_entries": 9411, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769164348, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:28.869703) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 11351018 bytes
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:28.872096) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.8 rd, 117.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.1 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(24.9) write-amplify(10.8) OK, records in: 9911, records dropped: 500 output_compression: NoCompression
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:28.872130) EVENT_LOG_v1 {"time_micros": 1769164348872114, "job": 98, "event": "compaction_finished", "compaction_time_micros": 96905, "compaction_time_cpu_micros": 39658, "output_level": 6, "num_output_files": 1, "total_output_size": 11351018, "num_input_records": 9911, "num_output_records": 9411, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164348872689, "job": 98, "event": "table_file_deletion", "file_number": 160}
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164348877471, "job": 98, "event": "table_file_deletion", "file_number": 158}
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:28.771980) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:28.877583) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:28.877603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:28.877608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:28.877612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:32:28 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:28.877616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:32:28 np0005593232 podman[373138]: 2026-01-23 10:32:28.883700229 +0000 UTC m=+0.053694048 container create f7f0b98eceba1777a2f8c4b8a8bd888bdb6b02a3e3dae480f77dcf83e75c6e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 05:32:28 np0005593232 systemd[1]: Started libpod-conmon-f7f0b98eceba1777a2f8c4b8a8bd888bdb6b02a3e3dae480f77dcf83e75c6e30.scope.
Jan 23 05:32:28 np0005593232 podman[373138]: 2026-01-23 10:32:28.864148253 +0000 UTC m=+0.034142082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:32:28 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:32:28 np0005593232 podman[373138]: 2026-01-23 10:32:28.988393274 +0000 UTC m=+0.158387123 container init f7f0b98eceba1777a2f8c4b8a8bd888bdb6b02a3e3dae480f77dcf83e75c6e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_austin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:32:29 np0005593232 podman[373138]: 2026-01-23 10:32:29.006083777 +0000 UTC m=+0.176077596 container start f7f0b98eceba1777a2f8c4b8a8bd888bdb6b02a3e3dae480f77dcf83e75c6e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_austin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 23 05:32:29 np0005593232 sharp_austin[373154]: 167 167
Jan 23 05:32:29 np0005593232 systemd[1]: libpod-f7f0b98eceba1777a2f8c4b8a8bd888bdb6b02a3e3dae480f77dcf83e75c6e30.scope: Deactivated successfully.
Jan 23 05:32:29 np0005593232 podman[373138]: 2026-01-23 10:32:29.01110669 +0000 UTC m=+0.181100529 container attach f7f0b98eceba1777a2f8c4b8a8bd888bdb6b02a3e3dae480f77dcf83e75c6e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_austin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:32:29 np0005593232 podman[373138]: 2026-01-23 10:32:29.011649725 +0000 UTC m=+0.181643534 container died f7f0b98eceba1777a2f8c4b8a8bd888bdb6b02a3e3dae480f77dcf83e75c6e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_austin, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 23 05:32:29 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7a806b8cb37744604add270fe790802020b1d5be4ae717168f1e7f0e47731964-merged.mount: Deactivated successfully.
Jan 23 05:32:29 np0005593232 podman[373156]: 2026-01-23 10:32:29.051400195 +0000 UTC m=+0.089022811 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:32:29 np0005593232 podman[373138]: 2026-01-23 10:32:29.056511711 +0000 UTC m=+0.226505520 container remove f7f0b98eceba1777a2f8c4b8a8bd888bdb6b02a3e3dae480f77dcf83e75c6e30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_austin, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 05:32:29 np0005593232 systemd[1]: libpod-conmon-f7f0b98eceba1777a2f8c4b8a8bd888bdb6b02a3e3dae480f77dcf83e75c6e30.scope: Deactivated successfully.
Jan 23 05:32:29 np0005593232 podman[373199]: 2026-01-23 10:32:29.265222183 +0000 UTC m=+0.064831894 container create fd31bc71163fc86e164cee1fb707ffee4a1e1b9a4a76013acde2f79d759c9527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:32:29 np0005593232 podman[373199]: 2026-01-23 10:32:29.231504034 +0000 UTC m=+0.031113785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:32:29 np0005593232 systemd[1]: Started libpod-conmon-fd31bc71163fc86e164cee1fb707ffee4a1e1b9a4a76013acde2f79d759c9527.scope.
Jan 23 05:32:29 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:32:29 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5bf4af1d78beaf023b913b2b262b2bddd1aaa383fdb0f6504682d60ebbc7bdd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:32:29 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5bf4af1d78beaf023b913b2b262b2bddd1aaa383fdb0f6504682d60ebbc7bdd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:32:29 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5bf4af1d78beaf023b913b2b262b2bddd1aaa383fdb0f6504682d60ebbc7bdd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:32:29 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5bf4af1d78beaf023b913b2b262b2bddd1aaa383fdb0f6504682d60ebbc7bdd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:32:29 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5bf4af1d78beaf023b913b2b262b2bddd1aaa383fdb0f6504682d60ebbc7bdd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:32:29 np0005593232 podman[373199]: 2026-01-23 10:32:29.408751242 +0000 UTC m=+0.208360993 container init fd31bc71163fc86e164cee1fb707ffee4a1e1b9a4a76013acde2f79d759c9527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_austin, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 05:32:29 np0005593232 podman[373199]: 2026-01-23 10:32:29.430581483 +0000 UTC m=+0.230191194 container start fd31bc71163fc86e164cee1fb707ffee4a1e1b9a4a76013acde2f79d759c9527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_austin, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 05:32:29 np0005593232 podman[373199]: 2026-01-23 10:32:29.436031758 +0000 UTC m=+0.235641519 container attach fd31bc71163fc86e164cee1fb707ffee4a1e1b9a4a76013acde2f79d759c9527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_austin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:32:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3127: 321 pgs: 321 active+clean; 373 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 35 KiB/s rd, 22 KiB/s wr, 51 op/s
Jan 23 05:32:30 np0005593232 nova_compute[250269]: 2026-01-23 10:32:30.044 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:30.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:30 np0005593232 brave_austin[373216]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:32:30 np0005593232 brave_austin[373216]: --> relative data size: 1.0
Jan 23 05:32:30 np0005593232 brave_austin[373216]: --> All data devices are unavailable
Jan 23 05:32:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:30.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:30 np0005593232 systemd[1]: libpod-fd31bc71163fc86e164cee1fb707ffee4a1e1b9a4a76013acde2f79d759c9527.scope: Deactivated successfully.
Jan 23 05:32:30 np0005593232 podman[373199]: 2026-01-23 10:32:30.392376601 +0000 UTC m=+1.191986312 container died fd31bc71163fc86e164cee1fb707ffee4a1e1b9a4a76013acde2f79d759c9527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_austin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:32:30 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e5bf4af1d78beaf023b913b2b262b2bddd1aaa383fdb0f6504682d60ebbc7bdd-merged.mount: Deactivated successfully.
Jan 23 05:32:30 np0005593232 podman[373199]: 2026-01-23 10:32:30.478173569 +0000 UTC m=+1.277783270 container remove fd31bc71163fc86e164cee1fb707ffee4a1e1b9a4a76013acde2f79d759c9527 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:32:30 np0005593232 systemd[1]: libpod-conmon-fd31bc71163fc86e164cee1fb707ffee4a1e1b9a4a76013acde2f79d759c9527.scope: Deactivated successfully.
Jan 23 05:32:31 np0005593232 podman[373388]: 2026-01-23 10:32:31.36483522 +0000 UTC m=+0.060202632 container create f55a468cd411283883724e7a4b11b7008fcbba67d6ada9f2289acc42c422136d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 05:32:31 np0005593232 systemd[1]: Started libpod-conmon-f55a468cd411283883724e7a4b11b7008fcbba67d6ada9f2289acc42c422136d.scope.
Jan 23 05:32:31 np0005593232 podman[373388]: 2026-01-23 10:32:31.336211327 +0000 UTC m=+0.031578809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:32:31 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:32:31 np0005593232 podman[373388]: 2026-01-23 10:32:31.468663382 +0000 UTC m=+0.164030844 container init f55a468cd411283883724e7a4b11b7008fcbba67d6ada9f2289acc42c422136d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 23 05:32:31 np0005593232 podman[373388]: 2026-01-23 10:32:31.481131866 +0000 UTC m=+0.176499278 container start f55a468cd411283883724e7a4b11b7008fcbba67d6ada9f2289acc42c422136d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:32:31 np0005593232 naughty_hofstadter[373404]: 167 167
Jan 23 05:32:31 np0005593232 podman[373388]: 2026-01-23 10:32:31.485682235 +0000 UTC m=+0.181049647 container attach f55a468cd411283883724e7a4b11b7008fcbba67d6ada9f2289acc42c422136d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hofstadter, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:32:31 np0005593232 systemd[1]: libpod-f55a468cd411283883724e7a4b11b7008fcbba67d6ada9f2289acc42c422136d.scope: Deactivated successfully.
Jan 23 05:32:31 np0005593232 podman[373388]: 2026-01-23 10:32:31.487236019 +0000 UTC m=+0.182603471 container died f55a468cd411283883724e7a4b11b7008fcbba67d6ada9f2289acc42c422136d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 05:32:31 np0005593232 systemd[1]: var-lib-containers-storage-overlay-190f0c1ee8a5d41a9fc79e2f1923c4b5ff999b7db5ab0d7cb8e4d18636667b5d-merged.mount: Deactivated successfully.
Jan 23 05:32:31 np0005593232 podman[373388]: 2026-01-23 10:32:31.539465714 +0000 UTC m=+0.234833116 container remove f55a468cd411283883724e7a4b11b7008fcbba67d6ada9f2289acc42c422136d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hofstadter, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 05:32:31 np0005593232 systemd[1]: libpod-conmon-f55a468cd411283883724e7a4b11b7008fcbba67d6ada9f2289acc42c422136d.scope: Deactivated successfully.
Jan 23 05:32:31 np0005593232 podman[373427]: 2026-01-23 10:32:31.824000451 +0000 UTC m=+0.079510000 container create 3f421533ccae6d907b585cfbd4d46f3dfa49502fc5c4531cbaf8195b6de24ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dewdney, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:32:31 np0005593232 systemd[1]: Started libpod-conmon-3f421533ccae6d907b585cfbd4d46f3dfa49502fc5c4531cbaf8195b6de24ec4.scope.
Jan 23 05:32:31 np0005593232 podman[373427]: 2026-01-23 10:32:31.787745361 +0000 UTC m=+0.043254990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:32:31 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:32:31 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912797cfd2a9f31ca574181c454ac8f9b912b10c8818f82d9b8063b6ebf079dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:32:31 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912797cfd2a9f31ca574181c454ac8f9b912b10c8818f82d9b8063b6ebf079dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:32:31 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912797cfd2a9f31ca574181c454ac8f9b912b10c8818f82d9b8063b6ebf079dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:32:31 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/912797cfd2a9f31ca574181c454ac8f9b912b10c8818f82d9b8063b6ebf079dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:32:31 np0005593232 podman[373427]: 2026-01-23 10:32:31.956845047 +0000 UTC m=+0.212354676 container init 3f421533ccae6d907b585cfbd4d46f3dfa49502fc5c4531cbaf8195b6de24ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dewdney, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:32:31 np0005593232 podman[373427]: 2026-01-23 10:32:31.972383789 +0000 UTC m=+0.227893328 container start 3f421533ccae6d907b585cfbd4d46f3dfa49502fc5c4531cbaf8195b6de24ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 05:32:31 np0005593232 podman[373427]: 2026-01-23 10:32:31.97594802 +0000 UTC m=+0.231457659 container attach 3f421533ccae6d907b585cfbd4d46f3dfa49502fc5c4531cbaf8195b6de24ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 05:32:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3128: 321 pgs: 321 active+clean; 373 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 19 KiB/s wr, 47 op/s
Jan 23 05:32:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:32.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:32:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:32.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]: {
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:    "0": [
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:        {
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:            "devices": [
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:                "/dev/loop3"
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:            ],
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:            "lv_name": "ceph_lv0",
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:            "lv_size": "7511998464",
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:            "name": "ceph_lv0",
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:            "tags": {
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:                "ceph.cluster_name": "ceph",
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:                "ceph.crush_device_class": "",
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:                "ceph.encrypted": "0",
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:                "ceph.osd_id": "0",
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:                "ceph.type": "block",
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:                "ceph.vdo": "0"
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:            },
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:            "type": "block",
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:            "vg_name": "ceph_vg0"
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:        }
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]:    ]
Jan 23 05:32:32 np0005593232 pensive_dewdney[373443]: }
Jan 23 05:32:32 np0005593232 nova_compute[250269]: 2026-01-23 10:32:32.844 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769164337.842154, 7fed3293-7f29-4792-952a-17b0d8962482 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:32:32 np0005593232 nova_compute[250269]: 2026-01-23 10:32:32.846 250273 INFO nova.compute.manager [-] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:32:32 np0005593232 systemd[1]: libpod-3f421533ccae6d907b585cfbd4d46f3dfa49502fc5c4531cbaf8195b6de24ec4.scope: Deactivated successfully.
Jan 23 05:32:32 np0005593232 podman[373427]: 2026-01-23 10:32:32.850614011 +0000 UTC m=+1.106123601 container died 3f421533ccae6d907b585cfbd4d46f3dfa49502fc5c4531cbaf8195b6de24ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:32:32 np0005593232 systemd[1]: var-lib-containers-storage-overlay-912797cfd2a9f31ca574181c454ac8f9b912b10c8818f82d9b8063b6ebf079dd-merged.mount: Deactivated successfully.
Jan 23 05:32:32 np0005593232 podman[373427]: 2026-01-23 10:32:32.94097732 +0000 UTC m=+1.196486879 container remove 3f421533ccae6d907b585cfbd4d46f3dfa49502fc5c4531cbaf8195b6de24ec4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dewdney, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:32:32 np0005593232 systemd[1]: libpod-conmon-3f421533ccae6d907b585cfbd4d46f3dfa49502fc5c4531cbaf8195b6de24ec4.scope: Deactivated successfully.
Jan 23 05:32:33 np0005593232 nova_compute[250269]: 2026-01-23 10:32:33.043 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:33 np0005593232 nova_compute[250269]: 2026-01-23 10:32:33.543 250273 DEBUG nova.compute.manager [None req-6d37fad2-7db2-4ee1-a836-4e50d3c269a7 - - - - - -] [instance: 7fed3293-7f29-4792-952a-17b0d8962482] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:32:33 np0005593232 nova_compute[250269]: 2026-01-23 10:32:33.617 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:33.617 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=67, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=66) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:32:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:33.620 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:32:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:32:33 np0005593232 podman[373606]: 2026-01-23 10:32:33.825965695 +0000 UTC m=+0.071200445 container create 3a492f522ac04655b468a359fe627911b1c96fb5a4d5450ceddf9c7dcffc1883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_golick, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 23 05:32:33 np0005593232 systemd[1]: Started libpod-conmon-3a492f522ac04655b468a359fe627911b1c96fb5a4d5450ceddf9c7dcffc1883.scope.
Jan 23 05:32:33 np0005593232 podman[373606]: 2026-01-23 10:32:33.792779441 +0000 UTC m=+0.038014251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:32:33 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:32:33 np0005593232 podman[373606]: 2026-01-23 10:32:33.935313763 +0000 UTC m=+0.180548493 container init 3a492f522ac04655b468a359fe627911b1c96fb5a4d5450ceddf9c7dcffc1883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 23 05:32:33 np0005593232 podman[373606]: 2026-01-23 10:32:33.947477998 +0000 UTC m=+0.192712748 container start 3a492f522ac04655b468a359fe627911b1c96fb5a4d5450ceddf9c7dcffc1883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_golick, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 05:32:33 np0005593232 podman[373606]: 2026-01-23 10:32:33.952328276 +0000 UTC m=+0.197563006 container attach 3a492f522ac04655b468a359fe627911b1c96fb5a4d5450ceddf9c7dcffc1883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_golick, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Jan 23 05:32:33 np0005593232 fervent_golick[373622]: 167 167
Jan 23 05:32:33 np0005593232 systemd[1]: libpod-3a492f522ac04655b468a359fe627911b1c96fb5a4d5450ceddf9c7dcffc1883.scope: Deactivated successfully.
Jan 23 05:32:33 np0005593232 podman[373606]: 2026-01-23 10:32:33.95668251 +0000 UTC m=+0.201917290 container died 3a492f522ac04655b468a359fe627911b1c96fb5a4d5450ceddf9c7dcffc1883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_golick, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:32:33 np0005593232 systemd[1]: var-lib-containers-storage-overlay-18d035a8c1e0c5bfff9f70f9ad072cb919b6a83c137499d1a0f90d23baa2f542-merged.mount: Deactivated successfully.
Jan 23 05:32:34 np0005593232 podman[373606]: 2026-01-23 10:32:34.013013241 +0000 UTC m=+0.258247991 container remove 3a492f522ac04655b468a359fe627911b1c96fb5a4d5450ceddf9c7dcffc1883 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:32:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3129: 321 pgs: 321 active+clean; 373 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 4.1 KiB/s wr, 34 op/s
Jan 23 05:32:34 np0005593232 systemd[1]: libpod-conmon-3a492f522ac04655b468a359fe627911b1c96fb5a4d5450ceddf9c7dcffc1883.scope: Deactivated successfully.
Jan 23 05:32:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:32:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:34.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:32:34 np0005593232 podman[373648]: 2026-01-23 10:32:34.265327403 +0000 UTC m=+0.057023002 container create 4f5e7e879fc05f3f4197893e7c0713c35236a357301958b16be312f9317b4371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_feynman, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:32:34 np0005593232 systemd[1]: Started libpod-conmon-4f5e7e879fc05f3f4197893e7c0713c35236a357301958b16be312f9317b4371.scope.
Jan 23 05:32:34 np0005593232 podman[373648]: 2026-01-23 10:32:34.244624904 +0000 UTC m=+0.036320483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:32:34 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:32:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcad5afef063211c9c4f93e43cb109b67e275b5173935506bd9b59852b959014/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:32:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcad5afef063211c9c4f93e43cb109b67e275b5173935506bd9b59852b959014/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:32:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcad5afef063211c9c4f93e43cb109b67e275b5173935506bd9b59852b959014/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:32:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcad5afef063211c9c4f93e43cb109b67e275b5173935506bd9b59852b959014/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:32:34 np0005593232 podman[373648]: 2026-01-23 10:32:34.366043786 +0000 UTC m=+0.157739445 container init 4f5e7e879fc05f3f4197893e7c0713c35236a357301958b16be312f9317b4371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_feynman, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:32:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:34.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:34 np0005593232 podman[373648]: 2026-01-23 10:32:34.373337083 +0000 UTC m=+0.165032642 container start 4f5e7e879fc05f3f4197893e7c0713c35236a357301958b16be312f9317b4371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:32:34 np0005593232 podman[373648]: 2026-01-23 10:32:34.377648375 +0000 UTC m=+0.169344014 container attach 4f5e7e879fc05f3f4197893e7c0713c35236a357301958b16be312f9317b4371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_feynman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Jan 23 05:32:35 np0005593232 nova_compute[250269]: 2026-01-23 10:32:35.047 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:35 np0005593232 cool_feynman[373665]: {
Jan 23 05:32:35 np0005593232 cool_feynman[373665]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:32:35 np0005593232 cool_feynman[373665]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:32:35 np0005593232 cool_feynman[373665]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:32:35 np0005593232 cool_feynman[373665]:        "osd_id": 0,
Jan 23 05:32:35 np0005593232 cool_feynman[373665]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:32:35 np0005593232 cool_feynman[373665]:        "type": "bluestore"
Jan 23 05:32:35 np0005593232 cool_feynman[373665]:    }
Jan 23 05:32:35 np0005593232 cool_feynman[373665]: }
Jan 23 05:32:35 np0005593232 systemd[1]: libpod-4f5e7e879fc05f3f4197893e7c0713c35236a357301958b16be312f9317b4371.scope: Deactivated successfully.
Jan 23 05:32:35 np0005593232 podman[373648]: 2026-01-23 10:32:35.342772177 +0000 UTC m=+1.134467776 container died 4f5e7e879fc05f3f4197893e7c0713c35236a357301958b16be312f9317b4371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_feynman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:32:35 np0005593232 systemd[1]: var-lib-containers-storage-overlay-bcad5afef063211c9c4f93e43cb109b67e275b5173935506bd9b59852b959014-merged.mount: Deactivated successfully.
Jan 23 05:32:35 np0005593232 podman[373648]: 2026-01-23 10:32:35.430615683 +0000 UTC m=+1.222311252 container remove 4f5e7e879fc05f3f4197893e7c0713c35236a357301958b16be312f9317b4371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_feynman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:32:35 np0005593232 systemd[1]: libpod-conmon-4f5e7e879fc05f3f4197893e7c0713c35236a357301958b16be312f9317b4371.scope: Deactivated successfully.
Jan 23 05:32:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:32:35 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:32:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:32:35 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:32:35 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 3642de89-1b13-4a35-8ffc-ecf0996265b6 does not exist
Jan 23 05:32:35 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 177c15df-aa73-4dbc-bd87-b62cd6638d51 does not exist
Jan 23 05:32:35 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 194215c5-84bd-4399-a13e-9a48fb0221b4 does not exist
Jan 23 05:32:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3130: 321 pgs: 321 active+clean; 374 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 14 KiB/s wr, 21 op/s
Jan 23 05:32:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:36.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:36.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:36 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:32:36 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:32:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:32:37
Jan 23 05:32:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:32:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:32:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'backups', 'default.rgw.control', 'vms', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root']
Jan 23 05:32:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:32:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:32:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:32:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:32:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:32:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:32:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:32:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3131: 321 pgs: 321 active+clean; 374 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 13 KiB/s wr, 34 op/s
Jan 23 05:32:38 np0005593232 nova_compute[250269]: 2026-01-23 10:32:38.045 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:32:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:38.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:32:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:32:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:38.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:32:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:32:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:32:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:32:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:32:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:32:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:32:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:32:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:32:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:32:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:32:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:38.624 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '67'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:32:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:32:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3132: 321 pgs: 321 active+clean; 374 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 13 KiB/s wr, 23 op/s
Jan 23 05:32:40 np0005593232 nova_compute[250269]: 2026-01-23 10:32:40.050 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:32:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:40.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:32:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:40.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3133: 321 pgs: 321 active+clean; 374 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 12 KiB/s wr, 15 op/s
Jan 23 05:32:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:42.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:32:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:42.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:32:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:42.648 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:32:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:42.649 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:32:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:32:42.649 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:32:43 np0005593232 nova_compute[250269]: 2026-01-23 10:32:43.048 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:32:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3134: 321 pgs: 321 active+clean; 374 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 12 KiB/s wr, 15 op/s
Jan 23 05:32:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:32:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:44.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:32:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:44.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:45 np0005593232 nova_compute[250269]: 2026-01-23 10:32:45.086 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3135: 321 pgs: 321 active+clean; 374 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 12 KiB/s wr, 15 op/s
Jan 23 05:32:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:46.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:46.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:32:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:32:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:32:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:32:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0043490019542155225 of space, bias 1.0, pg target 1.3047005862646568 quantized to 32 (current 32)
Jan 23 05:32:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:32:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.650393169551461 quantized to 32 (current 32)
Jan 23 05:32:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:32:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:32:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:32:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003880986239065699 of space, bias 1.0, pg target 1.160414885480644 quantized to 32 (current 32)
Jan 23 05:32:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:32:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 23 05:32:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:32:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:32:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:32:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 23 05:32:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:32:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 23 05:32:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:32:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:32:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:32:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 23 05:32:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3136: 321 pgs: 321 active+clean; 373 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.2 KiB/s wr, 107 op/s
Jan 23 05:32:48 np0005593232 nova_compute[250269]: 2026-01-23 10:32:48.050 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:32:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:48.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:32:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:48.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:32:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3137: 321 pgs: 321 active+clean; 373 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1023 B/s wr, 93 op/s
Jan 23 05:32:50 np0005593232 nova_compute[250269]: 2026-01-23 10:32:50.088 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:32:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:50.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:32:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:50.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3138: 321 pgs: 321 active+clean; 373 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1023 B/s wr, 93 op/s
Jan 23 05:32:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:32:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:52.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:32:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:32:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:52.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:32:53 np0005593232 nova_compute[250269]: 2026-01-23 10:32:53.052 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:32:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3139: 321 pgs: 321 active+clean; 373 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1023 B/s wr, 93 op/s
Jan 23 05:32:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:32:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:54.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:32:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:32:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:54.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:32:54 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Jan 23 05:32:54 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:54.996390) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:32:54 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Jan 23 05:32:54 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164374996568, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 481, "num_deletes": 251, "total_data_size": 516932, "memory_usage": 527256, "flush_reason": "Manual Compaction"}
Jan 23 05:32:54 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164375005689, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 512617, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69710, "largest_seqno": 70190, "table_properties": {"data_size": 509827, "index_size": 825, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6568, "raw_average_key_size": 19, "raw_value_size": 504300, "raw_average_value_size": 1461, "num_data_blocks": 36, "num_entries": 345, "num_filter_entries": 345, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769164349, "oldest_key_time": 1769164349, "file_creation_time": 1769164374, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 9372 microseconds, and 4489 cpu microseconds.
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:55.005776) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 512617 bytes OK
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:55.005816) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:55.007826) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:55.007854) EVENT_LOG_v1 {"time_micros": 1769164375007845, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:55.007924) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 514118, prev total WAL file size 514118, number of live WAL files 2.
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:55.009044) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(500KB)], [161(10MB)]
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164375009185, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 11863635, "oldest_snapshot_seqno": -1}
Jan 23 05:32:55 np0005593232 nova_compute[250269]: 2026-01-23 10:32:55.091 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 9242 keys, 9912069 bytes, temperature: kUnknown
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164375131764, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 9912069, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9855569, "index_size": 32259, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23173, "raw_key_size": 245311, "raw_average_key_size": 26, "raw_value_size": 9696648, "raw_average_value_size": 1049, "num_data_blocks": 1218, "num_entries": 9242, "num_filter_entries": 9242, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769164375, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:55.132525) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 9912069 bytes
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:55.134740) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 96.5 rd, 80.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.8 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(42.5) write-amplify(19.3) OK, records in: 9756, records dropped: 514 output_compression: NoCompression
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:55.134777) EVENT_LOG_v1 {"time_micros": 1769164375134759, "job": 100, "event": "compaction_finished", "compaction_time_micros": 122930, "compaction_time_cpu_micros": 55203, "output_level": 6, "num_output_files": 1, "total_output_size": 9912069, "num_input_records": 9756, "num_output_records": 9242, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164375135303, "job": 100, "event": "table_file_deletion", "file_number": 163}
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164375140203, "job": 100, "event": "table_file_deletion", "file_number": 161}
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:55.008710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:55.140692) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:55.140707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:55.140712) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:55.140715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:32:55 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:32:55.140718) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:32:55 np0005593232 podman[373810]: 2026-01-23 10:32:55.513261692 +0000 UTC m=+0.152183227 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 05:32:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3140: 321 pgs: 321 active+clean; 373 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1023 B/s wr, 93 op/s
Jan 23 05:32:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:56.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:56 np0005593232 nova_compute[250269]: 2026-01-23 10:32:56.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:32:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:56.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:57 np0005593232 nova_compute[250269]: 2026-01-23 10:32:57.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:32:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3141: 321 pgs: 321 active+clean; 373 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 13 KiB/s wr, 136 op/s
Jan 23 05:32:58 np0005593232 nova_compute[250269]: 2026-01-23 10:32:58.055 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:32:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:32:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:32:58.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:32:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:32:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:32:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:32:58.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:32:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:32:59 np0005593232 nova_compute[250269]: 2026-01-23 10:32:59.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:32:59 np0005593232 nova_compute[250269]: 2026-01-23 10:32:59.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:32:59 np0005593232 podman[373838]: 2026-01-23 10:32:59.437689627 +0000 UTC m=+0.099021956 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 23 05:33:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3142: 321 pgs: 321 active+clean; 373 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 528 KiB/s rd, 12 KiB/s wr, 42 op/s
Jan 23 05:33:00 np0005593232 nova_compute[250269]: 2026-01-23 10:33:00.094 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:00.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:00 np0005593232 nova_compute[250269]: 2026-01-23 10:33:00.288 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:33:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:00.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3143: 321 pgs: 321 active+clean; 396 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 544 KiB/s rd, 1.2 MiB/s wr, 66 op/s
Jan 23 05:33:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:02.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:33:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:02.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:33:03 np0005593232 nova_compute[250269]: 2026-01-23 10:33:03.057 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:33:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3144: 321 pgs: 321 active+clean; 468 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 563 KiB/s rd, 3.6 MiB/s wr, 97 op/s
Jan 23 05:33:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:33:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:04.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:33:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:33:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:04.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:33:05 np0005593232 nova_compute[250269]: 2026-01-23 10:33:05.097 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:05 np0005593232 nova_compute[250269]: 2026-01-23 10:33:05.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:33:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3145: 321 pgs: 321 active+clean; 468 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 563 KiB/s rd, 3.6 MiB/s wr, 97 op/s
Jan 23 05:33:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:06.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:33:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:06.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:33:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:33:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:33:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:33:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:33:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:33:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:33:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3146: 321 pgs: 321 active+clean; 468 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 563 KiB/s rd, 3.6 MiB/s wr, 97 op/s
Jan 23 05:33:08 np0005593232 nova_compute[250269]: 2026-01-23 10:33:08.060 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:33:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:08.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:33:08 np0005593232 nova_compute[250269]: 2026-01-23 10:33:08.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:33:08 np0005593232 nova_compute[250269]: 2026-01-23 10:33:08.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:33:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:33:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:08.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:33:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:33:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3147: 321 pgs: 321 active+clean; 468 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 34 KiB/s rd, 3.6 MiB/s wr, 54 op/s
Jan 23 05:33:10 np0005593232 nova_compute[250269]: 2026-01-23 10:33:10.100 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:10.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:10.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:11 np0005593232 nova_compute[250269]: 2026-01-23 10:33:11.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:33:11 np0005593232 nova_compute[250269]: 2026-01-23 10:33:11.339 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:33:11 np0005593232 nova_compute[250269]: 2026-01-23 10:33:11.340 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:33:11 np0005593232 nova_compute[250269]: 2026-01-23 10:33:11.340 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:33:11 np0005593232 nova_compute[250269]: 2026-01-23 10:33:11.340 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:33:11 np0005593232 nova_compute[250269]: 2026-01-23 10:33:11.340 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:33:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:33:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/596987187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:33:11 np0005593232 nova_compute[250269]: 2026-01-23 10:33:11.801 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:33:12 np0005593232 nova_compute[250269]: 2026-01-23 10:33:12.002 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:33:12 np0005593232 nova_compute[250269]: 2026-01-23 10:33:12.003 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4192MB free_disk=20.855636596679688GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:33:12 np0005593232 nova_compute[250269]: 2026-01-23 10:33:12.004 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:33:12 np0005593232 nova_compute[250269]: 2026-01-23 10:33:12.004 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:33:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3148: 321 pgs: 321 active+clean; 482 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 594 KiB/s rd, 4.0 MiB/s wr, 75 op/s
Jan 23 05:33:12 np0005593232 nova_compute[250269]: 2026-01-23 10:33:12.099 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:33:12 np0005593232 nova_compute[250269]: 2026-01-23 10:33:12.099 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:33:12 np0005593232 nova_compute[250269]: 2026-01-23 10:33:12.129 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:33:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:33:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:12.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:33:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:33:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:12.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:33:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:33:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/87467525' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:33:12 np0005593232 nova_compute[250269]: 2026-01-23 10:33:12.581 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:33:12 np0005593232 nova_compute[250269]: 2026-01-23 10:33:12.587 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:33:12 np0005593232 nova_compute[250269]: 2026-01-23 10:33:12.610 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:33:12 np0005593232 nova_compute[250269]: 2026-01-23 10:33:12.656 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:33:12 np0005593232 nova_compute[250269]: 2026-01-23 10:33:12.657 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:33:13 np0005593232 nova_compute[250269]: 2026-01-23 10:33:13.062 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:33:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3149: 321 pgs: 321 active+clean; 514 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.1 MiB/s wr, 129 op/s
Jan 23 05:33:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:14.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:33:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:14.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:33:15 np0005593232 nova_compute[250269]: 2026-01-23 10:33:15.102 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:15 np0005593232 nova_compute[250269]: 2026-01-23 10:33:15.657 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:33:15 np0005593232 nova_compute[250269]: 2026-01-23 10:33:15.658 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:33:15 np0005593232 nova_compute[250269]: 2026-01-23 10:33:15.658 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:33:15 np0005593232 nova_compute[250269]: 2026-01-23 10:33:15.704 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:33:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e350 do_prune osdmap full prune enabled
Jan 23 05:33:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e351 e351: 3 total, 3 up, 3 in
Jan 23 05:33:15 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e351: 3 total, 3 up, 3 in
Jan 23 05:33:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3151: 321 pgs: 321 active+clean; 514 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 2.2 MiB/s wr, 117 op/s
Jan 23 05:33:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:16.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:33:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:16.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:33:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e351 do_prune osdmap full prune enabled
Jan 23 05:33:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e352 e352: 3 total, 3 up, 3 in
Jan 23 05:33:16 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e352: 3 total, 3 up, 3 in
Jan 23 05:33:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e352 do_prune osdmap full prune enabled
Jan 23 05:33:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e353 e353: 3 total, 3 up, 3 in
Jan 23 05:33:17 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e353: 3 total, 3 up, 3 in
Jan 23 05:33:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3154: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 561 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 9.7 MiB/s rd, 6.2 MiB/s wr, 690 op/s
Jan 23 05:33:18 np0005593232 nova_compute[250269]: 2026-01-23 10:33:18.064 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:33:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:18.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:33:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:18.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:33:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3155: 321 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 315 active+clean; 561 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.7 MiB/s rd, 3.5 MiB/s wr, 536 op/s
Jan 23 05:33:20 np0005593232 nova_compute[250269]: 2026-01-23 10:33:20.104 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:33:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:20.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:33:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:20.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3156: 321 pgs: 321 active+clean; 561 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 8.1 MiB/s rd, 3.4 MiB/s wr, 574 op/s
Jan 23 05:33:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:22.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:22.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:23 np0005593232 nova_compute[250269]: 2026-01-23 10:33:23.066 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:33:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e353 do_prune osdmap full prune enabled
Jan 23 05:33:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e354 e354: 3 total, 3 up, 3 in
Jan 23 05:33:23 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e354: 3 total, 3 up, 3 in
Jan 23 05:33:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3158: 321 pgs: 321 active+clean; 561 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.1 MiB/s rd, 3.0 MiB/s wr, 536 op/s
Jan 23 05:33:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.004000113s ======
Jan 23 05:33:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:24.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000113s
Jan 23 05:33:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:24.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:25 np0005593232 nova_compute[250269]: 2026-01-23 10:33:25.107 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3159: 321 pgs: 321 active+clean; 561 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 583 KiB/s rd, 19 KiB/s wr, 77 op/s
Jan 23 05:33:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:33:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:26.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:33:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:33:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:26.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:33:26 np0005593232 podman[374016]: 2026-01-23 10:33:26.520804188 +0000 UTC m=+0.168112830 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:33:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3160: 321 pgs: 321 active+clean; 594 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.6 MiB/s wr, 206 op/s
Jan 23 05:33:28 np0005593232 nova_compute[250269]: 2026-01-23 10:33:28.068 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:28.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:28.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:33:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3161: 321 pgs: 321 active+clean; 594 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.6 MiB/s wr, 206 op/s
Jan 23 05:33:30 np0005593232 nova_compute[250269]: 2026-01-23 10:33:30.111 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:30.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:30 np0005593232 podman[374046]: 2026-01-23 10:33:30.443656588 +0000 UTC m=+0.090733150 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 23 05:33:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:33:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:30.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:33:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3162: 321 pgs: 321 active+clean; 600 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.9 MiB/s wr, 186 op/s
Jan 23 05:33:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:32.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:33:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:32.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:33:33 np0005593232 nova_compute[250269]: 2026-01-23 10:33:33.069 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:33:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3163: 321 pgs: 321 active+clean; 627 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.0 MiB/s wr, 216 op/s
Jan 23 05:33:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:34.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:33:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:34.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:33:35 np0005593232 nova_compute[250269]: 2026-01-23 10:33:35.114 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3164: 321 pgs: 321 active+clean; 627 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 185 op/s
Jan 23 05:33:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:33:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:36.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:33:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:33:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:36.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:33:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:33:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:33:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:33:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:33:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:33:36.953 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=68, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=67) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:33:36 np0005593232 nova_compute[250269]: 2026-01-23 10:33:36.953 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:36 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:33:36.955 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:33:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:33:37
Jan 23 05:33:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:33:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:33:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'default.rgw.control', 'default.rgw.log', 'volumes', 'images', 'backups', '.mgr', 'cephfs.cephfs.data', '.rgw.root']
Jan 23 05:33:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:33:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:33:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:33:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:33:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:33:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:33:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:33:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:33:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:33:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3165: 321 pgs: 321 active+clean; 660 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 6.4 MiB/s wr, 286 op/s
Jan 23 05:33:38 np0005593232 nova_compute[250269]: 2026-01-23 10:33:38.070 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:38.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:33:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:38.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:33:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:33:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:33:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:33:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:33:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:33:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:33:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:33:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:33:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:33:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:33:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:33:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 05:33:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:33:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 05:33:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:33:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:33:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:33:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:33:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:33:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:33:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:33:39 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 12cd6ec1-15f7-4362-a2f8-b5e9a4ac6fdd does not exist
Jan 23 05:33:39 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 0f62adb5-3096-4749-9ff7-5bcaf6d98ed5 does not exist
Jan 23 05:33:39 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 9aac6c52-6092-488e-b518-56e9c3aa2239 does not exist
Jan 23 05:33:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:33:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:33:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:33:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:33:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:33:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:33:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:33:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:33:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:33:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:33:39 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:33:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3166: 321 pgs: 321 active+clean; 660 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 733 KiB/s rd, 4.3 MiB/s wr, 167 op/s
Jan 23 05:33:40 np0005593232 nova_compute[250269]: 2026-01-23 10:33:40.116 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:33:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:40.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:33:40 np0005593232 podman[374509]: 2026-01-23 10:33:40.445165242 +0000 UTC m=+0.061099308 container create 1180e93a347248251f60a8388d19cfa4503317b41db7aa9a2ce8a8303dba9050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rhodes, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:33:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:33:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:40.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:33:40 np0005593232 podman[374509]: 2026-01-23 10:33:40.422570779 +0000 UTC m=+0.038504875 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:33:40 np0005593232 systemd[1]: Started libpod-conmon-1180e93a347248251f60a8388d19cfa4503317b41db7aa9a2ce8a8303dba9050.scope.
Jan 23 05:33:40 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:33:40 np0005593232 podman[374509]: 2026-01-23 10:33:40.598634224 +0000 UTC m=+0.214568300 container init 1180e93a347248251f60a8388d19cfa4503317b41db7aa9a2ce8a8303dba9050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rhodes, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:33:40 np0005593232 podman[374509]: 2026-01-23 10:33:40.612185269 +0000 UTC m=+0.228119345 container start 1180e93a347248251f60a8388d19cfa4503317b41db7aa9a2ce8a8303dba9050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:33:40 np0005593232 podman[374509]: 2026-01-23 10:33:40.617185611 +0000 UTC m=+0.233119677 container attach 1180e93a347248251f60a8388d19cfa4503317b41db7aa9a2ce8a8303dba9050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rhodes, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:33:40 np0005593232 wonderful_rhodes[374525]: 167 167
Jan 23 05:33:40 np0005593232 systemd[1]: libpod-1180e93a347248251f60a8388d19cfa4503317b41db7aa9a2ce8a8303dba9050.scope: Deactivated successfully.
Jan 23 05:33:40 np0005593232 podman[374509]: 2026-01-23 10:33:40.623741637 +0000 UTC m=+0.239675723 container died 1180e93a347248251f60a8388d19cfa4503317b41db7aa9a2ce8a8303dba9050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rhodes, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:33:40 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c937100ea3c1ffd72e7adf6e375c85f10e9e77841687fc8a17d172ebf98cc250-merged.mount: Deactivated successfully.
Jan 23 05:33:40 np0005593232 podman[374509]: 2026-01-23 10:33:40.673860572 +0000 UTC m=+0.289794618 container remove 1180e93a347248251f60a8388d19cfa4503317b41db7aa9a2ce8a8303dba9050 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rhodes, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:33:40 np0005593232 systemd[1]: libpod-conmon-1180e93a347248251f60a8388d19cfa4503317b41db7aa9a2ce8a8303dba9050.scope: Deactivated successfully.
Jan 23 05:33:40 np0005593232 podman[374548]: 2026-01-23 10:33:40.866418815 +0000 UTC m=+0.049962151 container create 61eb9a423c57281d6d0ee119e22ec6953d7fcd9e3e7f9824737792f9ea06c0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_moser, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Jan 23 05:33:40 np0005593232 podman[374548]: 2026-01-23 10:33:40.843982777 +0000 UTC m=+0.027526093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:33:41 np0005593232 systemd[1]: Started libpod-conmon-61eb9a423c57281d6d0ee119e22ec6953d7fcd9e3e7f9824737792f9ea06c0ab.scope.
Jan 23 05:33:41 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:33:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a84abc89164d0e982424d713764fe7dc7b60109b8957991882072ca85e381b2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:33:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a84abc89164d0e982424d713764fe7dc7b60109b8957991882072ca85e381b2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:33:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a84abc89164d0e982424d713764fe7dc7b60109b8957991882072ca85e381b2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:33:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a84abc89164d0e982424d713764fe7dc7b60109b8957991882072ca85e381b2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:33:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a84abc89164d0e982424d713764fe7dc7b60109b8957991882072ca85e381b2c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:33:41 np0005593232 podman[374548]: 2026-01-23 10:33:41.264564082 +0000 UTC m=+0.448107468 container init 61eb9a423c57281d6d0ee119e22ec6953d7fcd9e3e7f9824737792f9ea06c0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_moser, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:33:41 np0005593232 podman[374548]: 2026-01-23 10:33:41.273619309 +0000 UTC m=+0.457162635 container start 61eb9a423c57281d6d0ee119e22ec6953d7fcd9e3e7f9824737792f9ea06c0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_moser, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 05:33:41 np0005593232 podman[374548]: 2026-01-23 10:33:41.278272491 +0000 UTC m=+0.461815877 container attach 61eb9a423c57281d6d0ee119e22ec6953d7fcd9e3e7f9824737792f9ea06c0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_moser, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:33:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3167: 321 pgs: 321 active+clean; 660 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 4.3 MiB/s wr, 196 op/s
Jan 23 05:33:42 np0005593232 gallant_moser[374564]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:33:42 np0005593232 gallant_moser[374564]: --> relative data size: 1.0
Jan 23 05:33:42 np0005593232 gallant_moser[374564]: --> All data devices are unavailable
Jan 23 05:33:42 np0005593232 systemd[1]: libpod-61eb9a423c57281d6d0ee119e22ec6953d7fcd9e3e7f9824737792f9ea06c0ab.scope: Deactivated successfully.
Jan 23 05:33:42 np0005593232 podman[374548]: 2026-01-23 10:33:42.151748148 +0000 UTC m=+1.335291464 container died 61eb9a423c57281d6d0ee119e22ec6953d7fcd9e3e7f9824737792f9ea06c0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_moser, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 05:33:42 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a84abc89164d0e982424d713764fe7dc7b60109b8957991882072ca85e381b2c-merged.mount: Deactivated successfully.
Jan 23 05:33:42 np0005593232 podman[374548]: 2026-01-23 10:33:42.218885137 +0000 UTC m=+1.402428443 container remove 61eb9a423c57281d6d0ee119e22ec6953d7fcd9e3e7f9824737792f9ea06c0ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 05:33:42 np0005593232 systemd[1]: libpod-conmon-61eb9a423c57281d6d0ee119e22ec6953d7fcd9e3e7f9824737792f9ea06c0ab.scope: Deactivated successfully.
Jan 23 05:33:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:42.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:42.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:33:42.649 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:33:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:33:42.651 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:33:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:33:42.652 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:33:42 np0005593232 podman[374731]: 2026-01-23 10:33:42.978796055 +0000 UTC m=+0.052457512 container create b0c281e8de101f69f94f68b5c2e1b42acdaeb6f5bb80018c3244fe363bdcabc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:33:43 np0005593232 podman[374731]: 2026-01-23 10:33:42.956415009 +0000 UTC m=+0.030076506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:33:43 np0005593232 nova_compute[250269]: 2026-01-23 10:33:43.072 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:43 np0005593232 systemd[1]: Started libpod-conmon-b0c281e8de101f69f94f68b5c2e1b42acdaeb6f5bb80018c3244fe363bdcabc2.scope.
Jan 23 05:33:43 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:33:43 np0005593232 podman[374731]: 2026-01-23 10:33:43.133166373 +0000 UTC m=+0.206827820 container init b0c281e8de101f69f94f68b5c2e1b42acdaeb6f5bb80018c3244fe363bdcabc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 05:33:43 np0005593232 podman[374731]: 2026-01-23 10:33:43.138819324 +0000 UTC m=+0.212480771 container start b0c281e8de101f69f94f68b5c2e1b42acdaeb6f5bb80018c3244fe363bdcabc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilson, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:33:43 np0005593232 sad_wilson[374747]: 167 167
Jan 23 05:33:43 np0005593232 systemd[1]: libpod-b0c281e8de101f69f94f68b5c2e1b42acdaeb6f5bb80018c3244fe363bdcabc2.scope: Deactivated successfully.
Jan 23 05:33:43 np0005593232 podman[374731]: 2026-01-23 10:33:43.180834728 +0000 UTC m=+0.254496175 container attach b0c281e8de101f69f94f68b5c2e1b42acdaeb6f5bb80018c3244fe363bdcabc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:33:43 np0005593232 podman[374731]: 2026-01-23 10:33:43.183114333 +0000 UTC m=+0.256775870 container died b0c281e8de101f69f94f68b5c2e1b42acdaeb6f5bb80018c3244fe363bdcabc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilson, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 05:33:43 np0005593232 systemd[1]: var-lib-containers-storage-overlay-250817ed94fef2af492005acdb2dbca084bbeb1b430d32c5d27a0287bc294493-merged.mount: Deactivated successfully.
Jan 23 05:33:43 np0005593232 podman[374731]: 2026-01-23 10:33:43.229698697 +0000 UTC m=+0.303360144 container remove b0c281e8de101f69f94f68b5c2e1b42acdaeb6f5bb80018c3244fe363bdcabc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wilson, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 05:33:43 np0005593232 systemd[1]: libpod-conmon-b0c281e8de101f69f94f68b5c2e1b42acdaeb6f5bb80018c3244fe363bdcabc2.scope: Deactivated successfully.
Jan 23 05:33:43 np0005593232 podman[374774]: 2026-01-23 10:33:43.447834607 +0000 UTC m=+0.041280774 container create 30d1c283875972c2ca7c72b0caa7b65878452d802d9553e157ffaf1b371c1957 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dubinsky, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 05:33:43 np0005593232 systemd[1]: Started libpod-conmon-30d1c283875972c2ca7c72b0caa7b65878452d802d9553e157ffaf1b371c1957.scope.
Jan 23 05:33:43 np0005593232 podman[374774]: 2026-01-23 10:33:43.431549014 +0000 UTC m=+0.024995211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:33:43 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:33:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b61d05f4b713eaa0af025981eaf5596f86bd594fbd1f7314debaab676b80e994/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:33:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b61d05f4b713eaa0af025981eaf5596f86bd594fbd1f7314debaab676b80e994/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:33:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b61d05f4b713eaa0af025981eaf5596f86bd594fbd1f7314debaab676b80e994/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:33:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b61d05f4b713eaa0af025981eaf5596f86bd594fbd1f7314debaab676b80e994/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:33:43 np0005593232 podman[374774]: 2026-01-23 10:33:43.565951214 +0000 UTC m=+0.159397411 container init 30d1c283875972c2ca7c72b0caa7b65878452d802d9553e157ffaf1b371c1957 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 05:33:43 np0005593232 podman[374774]: 2026-01-23 10:33:43.583422871 +0000 UTC m=+0.176869048 container start 30d1c283875972c2ca7c72b0caa7b65878452d802d9553e157ffaf1b371c1957 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dubinsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 05:33:43 np0005593232 podman[374774]: 2026-01-23 10:33:43.587305371 +0000 UTC m=+0.180751568 container attach 30d1c283875972c2ca7c72b0caa7b65878452d802d9553e157ffaf1b371c1957 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dubinsky, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:33:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:33:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3168: 321 pgs: 321 active+clean; 660 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.1 MiB/s wr, 220 op/s
Jan 23 05:33:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:44.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]: {
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:    "0": [
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:        {
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:            "devices": [
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:                "/dev/loop3"
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:            ],
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:            "lv_name": "ceph_lv0",
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:            "lv_size": "7511998464",
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:            "name": "ceph_lv0",
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:            "tags": {
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:                "ceph.cluster_name": "ceph",
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:                "ceph.crush_device_class": "",
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:                "ceph.encrypted": "0",
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:                "ceph.osd_id": "0",
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:                "ceph.type": "block",
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:                "ceph.vdo": "0"
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:            },
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:            "type": "block",
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:            "vg_name": "ceph_vg0"
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:        }
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]:    ]
Jan 23 05:33:44 np0005593232 hungry_dubinsky[374791]: }
Jan 23 05:33:44 np0005593232 systemd[1]: libpod-30d1c283875972c2ca7c72b0caa7b65878452d802d9553e157ffaf1b371c1957.scope: Deactivated successfully.
Jan 23 05:33:44 np0005593232 podman[374774]: 2026-01-23 10:33:44.397194841 +0000 UTC m=+0.990641018 container died 30d1c283875972c2ca7c72b0caa7b65878452d802d9553e157ffaf1b371c1957 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dubinsky, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:33:44 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b61d05f4b713eaa0af025981eaf5596f86bd594fbd1f7314debaab676b80e994-merged.mount: Deactivated successfully.
Jan 23 05:33:44 np0005593232 podman[374774]: 2026-01-23 10:33:44.446674168 +0000 UTC m=+1.040120345 container remove 30d1c283875972c2ca7c72b0caa7b65878452d802d9553e157ffaf1b371c1957 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_dubinsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 05:33:44 np0005593232 systemd[1]: libpod-conmon-30d1c283875972c2ca7c72b0caa7b65878452d802d9553e157ffaf1b371c1957.scope: Deactivated successfully.
Jan 23 05:33:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:33:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:44.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:33:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:33:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2550201890' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:33:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:33:44.958 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '68'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:33:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:33:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2550201890' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:33:45 np0005593232 nova_compute[250269]: 2026-01-23 10:33:45.119 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:45 np0005593232 podman[374952]: 2026-01-23 10:33:45.186013192 +0000 UTC m=+0.040210984 container create 8bbaf4b571cdd878d6a7c220dd3ccde14836d9529649808e6297e5d717879167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:33:45 np0005593232 systemd[1]: Started libpod-conmon-8bbaf4b571cdd878d6a7c220dd3ccde14836d9529649808e6297e5d717879167.scope.
Jan 23 05:33:45 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:33:45 np0005593232 podman[374952]: 2026-01-23 10:33:45.260220852 +0000 UTC m=+0.114418694 container init 8bbaf4b571cdd878d6a7c220dd3ccde14836d9529649808e6297e5d717879167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_edison, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:33:45 np0005593232 podman[374952]: 2026-01-23 10:33:45.168150655 +0000 UTC m=+0.022348457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:33:45 np0005593232 podman[374952]: 2026-01-23 10:33:45.270669759 +0000 UTC m=+0.124867551 container start 8bbaf4b571cdd878d6a7c220dd3ccde14836d9529649808e6297e5d717879167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_edison, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 05:33:45 np0005593232 podman[374952]: 2026-01-23 10:33:45.273676004 +0000 UTC m=+0.127873806 container attach 8bbaf4b571cdd878d6a7c220dd3ccde14836d9529649808e6297e5d717879167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_edison, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Jan 23 05:33:45 np0005593232 fervent_edison[374968]: 167 167
Jan 23 05:33:45 np0005593232 systemd[1]: libpod-8bbaf4b571cdd878d6a7c220dd3ccde14836d9529649808e6297e5d717879167.scope: Deactivated successfully.
Jan 23 05:33:45 np0005593232 podman[374952]: 2026-01-23 10:33:45.277437411 +0000 UTC m=+0.131635193 container died 8bbaf4b571cdd878d6a7c220dd3ccde14836d9529649808e6297e5d717879167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_edison, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 05:33:45 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f01360113b60d35878a6169eb3c0d215f6a9ef523071a46617ad4d7754ffb7f7-merged.mount: Deactivated successfully.
Jan 23 05:33:45 np0005593232 podman[374952]: 2026-01-23 10:33:45.310256594 +0000 UTC m=+0.164454386 container remove 8bbaf4b571cdd878d6a7c220dd3ccde14836d9529649808e6297e5d717879167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 05:33:45 np0005593232 systemd[1]: libpod-conmon-8bbaf4b571cdd878d6a7c220dd3ccde14836d9529649808e6297e5d717879167.scope: Deactivated successfully.
Jan 23 05:33:45 np0005593232 podman[374992]: 2026-01-23 10:33:45.472493155 +0000 UTC m=+0.043953460 container create 1cb72e0022117f2dd1802175e77e73a64a9be7c5cacd442e3af23728361e505f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_carver, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:33:45 np0005593232 systemd[1]: Started libpod-conmon-1cb72e0022117f2dd1802175e77e73a64a9be7c5cacd442e3af23728361e505f.scope.
Jan 23 05:33:45 np0005593232 podman[374992]: 2026-01-23 10:33:45.453191207 +0000 UTC m=+0.024651542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:33:45 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:33:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4d2c8ed02801507acda19e2ea775fb3c689a0881b2cc9e445e6528f3577eda0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:33:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4d2c8ed02801507acda19e2ea775fb3c689a0881b2cc9e445e6528f3577eda0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:33:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4d2c8ed02801507acda19e2ea775fb3c689a0881b2cc9e445e6528f3577eda0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:33:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4d2c8ed02801507acda19e2ea775fb3c689a0881b2cc9e445e6528f3577eda0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:33:45 np0005593232 podman[374992]: 2026-01-23 10:33:45.57575768 +0000 UTC m=+0.147217995 container init 1cb72e0022117f2dd1802175e77e73a64a9be7c5cacd442e3af23728361e505f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:33:45 np0005593232 podman[374992]: 2026-01-23 10:33:45.583629204 +0000 UTC m=+0.155089499 container start 1cb72e0022117f2dd1802175e77e73a64a9be7c5cacd442e3af23728361e505f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_carver, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 05:33:45 np0005593232 podman[374992]: 2026-01-23 10:33:45.586711522 +0000 UTC m=+0.158171857 container attach 1cb72e0022117f2dd1802175e77e73a64a9be7c5cacd442e3af23728361e505f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_carver, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:33:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3169: 321 pgs: 321 active+clean; 660 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 165 op/s
Jan 23 05:33:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:46.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:46 np0005593232 naughty_carver[375008]: {
Jan 23 05:33:46 np0005593232 naughty_carver[375008]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:33:46 np0005593232 naughty_carver[375008]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:33:46 np0005593232 naughty_carver[375008]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:33:46 np0005593232 naughty_carver[375008]:        "osd_id": 0,
Jan 23 05:33:46 np0005593232 naughty_carver[375008]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:33:46 np0005593232 naughty_carver[375008]:        "type": "bluestore"
Jan 23 05:33:46 np0005593232 naughty_carver[375008]:    }
Jan 23 05:33:46 np0005593232 naughty_carver[375008]: }
Jan 23 05:33:46 np0005593232 systemd[1]: libpod-1cb72e0022117f2dd1802175e77e73a64a9be7c5cacd442e3af23728361e505f.scope: Deactivated successfully.
Jan 23 05:33:46 np0005593232 podman[374992]: 2026-01-23 10:33:46.443516334 +0000 UTC m=+1.014976639 container died 1cb72e0022117f2dd1802175e77e73a64a9be7c5cacd442e3af23728361e505f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_carver, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:33:46 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f4d2c8ed02801507acda19e2ea775fb3c689a0881b2cc9e445e6528f3577eda0-merged.mount: Deactivated successfully.
Jan 23 05:33:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:33:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:46.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:33:46 np0005593232 podman[374992]: 2026-01-23 10:33:46.490843219 +0000 UTC m=+1.062303524 container remove 1cb72e0022117f2dd1802175e77e73a64a9be7c5cacd442e3af23728361e505f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_carver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 05:33:46 np0005593232 systemd[1]: libpod-conmon-1cb72e0022117f2dd1802175e77e73a64a9be7c5cacd442e3af23728361e505f.scope: Deactivated successfully.
Jan 23 05:33:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:33:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:33:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:33:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:33:46 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 6886c840-9e80-44df-aa8d-48e65fa8b0df does not exist
Jan 23 05:33:46 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 2bd28b43-d964-4de4-8278-2f2ce48bb986 does not exist
Jan 23 05:33:46 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 6dbddca4-558b-415d-95f2-f230ebf4c21a does not exist
Jan 23 05:33:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:33:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:33:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:33:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:33:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.010857055823096966 of space, bias 1.0, pg target 3.25711674692909 quantized to 32 (current 32)
Jan 23 05:33:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:33:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6460427135678393 quantized to 32 (current 32)
Jan 23 05:33:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:33:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:33:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:33:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00486990798901917 of space, bias 1.0, pg target 1.4463626727386936 quantized to 32 (current 32)
Jan 23 05:33:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:33:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 32)
Jan 23 05:33:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:33:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:33:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:33:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Jan 23 05:33:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:33:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 23 05:33:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:33:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:33:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:33:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 23 05:33:47 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:33:47 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:33:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3170: 321 pgs: 321 active+clean; 659 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 256 op/s
Jan 23 05:33:48 np0005593232 nova_compute[250269]: 2026-01-23 10:33:48.073 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:33:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:48.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:33:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:48.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:33:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3171: 321 pgs: 321 active+clean; 659 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 18 KiB/s wr, 155 op/s
Jan 23 05:33:50 np0005593232 nova_compute[250269]: 2026-01-23 10:33:50.122 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:50.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:50.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3172: 321 pgs: 321 active+clean; 646 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 18 KiB/s wr, 159 op/s
Jan 23 05:33:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:52.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:52.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:53 np0005593232 nova_compute[250269]: 2026-01-23 10:33:53.077 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:33:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3173: 321 pgs: 321 active+clean; 580 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 8.2 KiB/s wr, 155 op/s
Jan 23 05:33:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:33:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:54.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:33:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:54.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:55 np0005593232 nova_compute[250269]: 2026-01-23 10:33:55.124 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3174: 321 pgs: 321 active+clean; 580 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.8 KiB/s wr, 119 op/s
Jan 23 05:33:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:33:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:56.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:33:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:33:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:56.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:33:57 np0005593232 nova_compute[250269]: 2026-01-23 10:33:57.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:33:57 np0005593232 podman[375096]: 2026-01-23 10:33:57.451166619 +0000 UTC m=+0.106852839 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:33:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3175: 321 pgs: 321 active+clean; 524 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 25 KiB/s wr, 181 op/s
Jan 23 05:33:58 np0005593232 nova_compute[250269]: 2026-01-23 10:33:58.079 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:33:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:33:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:33:58.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:33:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:33:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:33:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:33:58.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:33:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:33:59 np0005593232 nova_compute[250269]: 2026-01-23 10:33:59.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:33:59 np0005593232 nova_compute[250269]: 2026-01-23 10:33:59.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:33:59 np0005593232 nova_compute[250269]: 2026-01-23 10:33:59.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:34:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3176: 321 pgs: 321 active+clean; 524 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 553 KiB/s rd, 20 KiB/s wr, 90 op/s
Jan 23 05:34:00 np0005593232 nova_compute[250269]: 2026-01-23 10:34:00.127 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:00.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:00.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:01 np0005593232 podman[375174]: 2026-01-23 10:34:01.451261225 +0000 UTC m=+0.093092757 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:34:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3177: 321 pgs: 321 active+clean; 503 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 560 KiB/s rd, 21 KiB/s wr, 99 op/s
Jan 23 05:34:02 np0005593232 nova_compute[250269]: 2026-01-23 10:34:02.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:34:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:02.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:02 np0005593232 nova_compute[250269]: 2026-01-23 10:34:02.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:34:02 np0005593232 nova_compute[250269]: 2026-01-23 10:34:02.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 23 05:34:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:34:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:02.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:34:03 np0005593232 nova_compute[250269]: 2026-01-23 10:34:03.083 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:34:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3178: 321 pgs: 321 active+clean; 503 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 558 KiB/s rd, 29 KiB/s wr, 97 op/s
Jan 23 05:34:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:04.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:04.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:05 np0005593232 nova_compute[250269]: 2026-01-23 10:34:05.129 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:05 np0005593232 nova_compute[250269]: 2026-01-23 10:34:05.217 250273 DEBUG oslo_concurrency.lockutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Acquiring lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:34:05 np0005593232 nova_compute[250269]: 2026-01-23 10:34:05.217 250273 DEBUG oslo_concurrency.lockutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:34:05 np0005593232 nova_compute[250269]: 2026-01-23 10:34:05.246 250273 DEBUG nova.compute.manager [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:34:05 np0005593232 nova_compute[250269]: 2026-01-23 10:34:05.432 250273 DEBUG oslo_concurrency.lockutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:34:05 np0005593232 nova_compute[250269]: 2026-01-23 10:34:05.432 250273 DEBUG oslo_concurrency.lockutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:34:05 np0005593232 nova_compute[250269]: 2026-01-23 10:34:05.442 250273 DEBUG nova.virt.hardware [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:34:05 np0005593232 nova_compute[250269]: 2026-01-23 10:34:05.443 250273 INFO nova.compute.claims [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:34:05 np0005593232 nova_compute[250269]: 2026-01-23 10:34:05.621 250273 DEBUG oslo_concurrency.processutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:34:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:34:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3338684539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:34:06 np0005593232 nova_compute[250269]: 2026-01-23 10:34:06.073 250273 DEBUG oslo_concurrency.processutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:34:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3179: 321 pgs: 321 active+clean; 503 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 541 KiB/s rd, 28 KiB/s wr, 71 op/s
Jan 23 05:34:06 np0005593232 nova_compute[250269]: 2026-01-23 10:34:06.080 250273 DEBUG nova.compute.provider_tree [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:34:06 np0005593232 nova_compute[250269]: 2026-01-23 10:34:06.117 250273 DEBUG nova.scheduler.client.report [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:34:06 np0005593232 nova_compute[250269]: 2026-01-23 10:34:06.191 250273 DEBUG oslo_concurrency.lockutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:34:06 np0005593232 nova_compute[250269]: 2026-01-23 10:34:06.193 250273 DEBUG nova.compute.manager [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:34:06 np0005593232 nova_compute[250269]: 2026-01-23 10:34:06.284 250273 DEBUG nova.compute.manager [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:34:06 np0005593232 nova_compute[250269]: 2026-01-23 10:34:06.286 250273 DEBUG nova.network.neutron [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:34:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:06.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:06 np0005593232 nova_compute[250269]: 2026-01-23 10:34:06.319 250273 INFO nova.virt.libvirt.driver [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:34:06 np0005593232 nova_compute[250269]: 2026-01-23 10:34:06.332 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:34:06 np0005593232 nova_compute[250269]: 2026-01-23 10:34:06.424 250273 DEBUG nova.compute.manager [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:34:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:06.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:06 np0005593232 nova_compute[250269]: 2026-01-23 10:34:06.662 250273 DEBUG nova.compute.manager [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 05:34:06 np0005593232 nova_compute[250269]: 2026-01-23 10:34:06.664 250273 DEBUG nova.virt.libvirt.driver [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:34:06 np0005593232 nova_compute[250269]: 2026-01-23 10:34:06.665 250273 INFO nova.virt.libvirt.driver [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Creating image(s)#033[00m
Jan 23 05:34:06 np0005593232 nova_compute[250269]: 2026-01-23 10:34:06.704 250273 DEBUG nova.storage.rbd_utils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] rbd image 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:34:06 np0005593232 nova_compute[250269]: 2026-01-23 10:34:06.737 250273 DEBUG nova.storage.rbd_utils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] rbd image 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:34:06 np0005593232 nova_compute[250269]: 2026-01-23 10:34:06.775 250273 DEBUG nova.storage.rbd_utils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] rbd image 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:34:06 np0005593232 nova_compute[250269]: 2026-01-23 10:34:06.780 250273 DEBUG oslo_concurrency.processutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:34:06 np0005593232 nova_compute[250269]: 2026-01-23 10:34:06.854 250273 DEBUG oslo_concurrency.processutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:34:06 np0005593232 nova_compute[250269]: 2026-01-23 10:34:06.856 250273 DEBUG oslo_concurrency.lockutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:34:06 np0005593232 nova_compute[250269]: 2026-01-23 10:34:06.856 250273 DEBUG oslo_concurrency.lockutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:34:06 np0005593232 nova_compute[250269]: 2026-01-23 10:34:06.857 250273 DEBUG oslo_concurrency.lockutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:34:06 np0005593232 nova_compute[250269]: 2026-01-23 10:34:06.883 250273 DEBUG nova.storage.rbd_utils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] rbd image 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:34:06 np0005593232 nova_compute[250269]: 2026-01-23 10:34:06.888 250273 DEBUG oslo_concurrency.processutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:34:07 np0005593232 nova_compute[250269]: 2026-01-23 10:34:07.019 250273 DEBUG nova.policy [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e1629a4b14764dddaabcadd16f3e1c1c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '815b71acf60d4ed8933ebd05228fa0c0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:34:07 np0005593232 nova_compute[250269]: 2026-01-23 10:34:07.340 250273 DEBUG oslo_concurrency.processutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:34:07 np0005593232 nova_compute[250269]: 2026-01-23 10:34:07.442 250273 DEBUG nova.storage.rbd_utils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] resizing rbd image 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 05:34:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:34:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:34:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:34:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:34:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:34:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:34:07 np0005593232 nova_compute[250269]: 2026-01-23 10:34:07.762 250273 DEBUG nova.objects.instance [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lazy-loading 'migration_context' on Instance uuid 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:34:07 np0005593232 nova_compute[250269]: 2026-01-23 10:34:07.799 250273 DEBUG nova.virt.libvirt.driver [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 05:34:07 np0005593232 nova_compute[250269]: 2026-01-23 10:34:07.800 250273 DEBUG nova.virt.libvirt.driver [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Ensure instance console log exists: /var/lib/nova/instances/5d42acd2-a3c4-40d2-b1c7-0dd7920671fe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:34:07 np0005593232 nova_compute[250269]: 2026-01-23 10:34:07.801 250273 DEBUG oslo_concurrency.lockutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:34:07 np0005593232 nova_compute[250269]: 2026-01-23 10:34:07.802 250273 DEBUG oslo_concurrency.lockutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:34:07 np0005593232 nova_compute[250269]: 2026-01-23 10:34:07.803 250273 DEBUG oslo_concurrency.lockutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:34:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3180: 321 pgs: 321 active+clean; 541 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 551 KiB/s rd, 1.5 MiB/s wr, 89 op/s
Jan 23 05:34:08 np0005593232 nova_compute[250269]: 2026-01-23 10:34:08.087 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:08 np0005593232 nova_compute[250269]: 2026-01-23 10:34:08.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:34:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:08.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:34:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:08.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:34:08 np0005593232 nova_compute[250269]: 2026-01-23 10:34:08.669 250273 DEBUG nova.network.neutron [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Successfully created port: 629a33c7-4917-4a44-9978-d78fecc89001 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:34:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:34:09 np0005593232 nova_compute[250269]: 2026-01-23 10:34:09.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:34:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3181: 321 pgs: 321 active+clean; 541 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 27 op/s
Jan 23 05:34:10 np0005593232 nova_compute[250269]: 2026-01-23 10:34:10.131 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:34:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:10.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:34:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:34:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:10.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:34:10 np0005593232 nova_compute[250269]: 2026-01-23 10:34:10.639 250273 DEBUG nova.network.neutron [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Successfully updated port: 629a33c7-4917-4a44-9978-d78fecc89001 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:34:10 np0005593232 nova_compute[250269]: 2026-01-23 10:34:10.675 250273 DEBUG oslo_concurrency.lockutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Acquiring lock "refresh_cache-5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:34:10 np0005593232 nova_compute[250269]: 2026-01-23 10:34:10.675 250273 DEBUG oslo_concurrency.lockutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Acquired lock "refresh_cache-5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:34:10 np0005593232 nova_compute[250269]: 2026-01-23 10:34:10.676 250273 DEBUG nova.network.neutron [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:34:10 np0005593232 nova_compute[250269]: 2026-01-23 10:34:10.861 250273 DEBUG nova.compute.manager [req-f7a151a8-98b7-4a1c-a45b-3f369aab049b req-67c5b6e9-20b3-4ab7-b018-8ad9387ff7eb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received event network-changed-629a33c7-4917-4a44-9978-d78fecc89001 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:34:10 np0005593232 nova_compute[250269]: 2026-01-23 10:34:10.862 250273 DEBUG nova.compute.manager [req-f7a151a8-98b7-4a1c-a45b-3f369aab049b req-67c5b6e9-20b3-4ab7-b018-8ad9387ff7eb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Refreshing instance network info cache due to event network-changed-629a33c7-4917-4a44-9978-d78fecc89001. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:34:10 np0005593232 nova_compute[250269]: 2026-01-23 10:34:10.862 250273 DEBUG oslo_concurrency.lockutils [req-f7a151a8-98b7-4a1c-a45b-3f369aab049b req-67c5b6e9-20b3-4ab7-b018-8ad9387ff7eb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:34:11 np0005593232 nova_compute[250269]: 2026-01-23 10:34:11.416 250273 DEBUG nova.network.neutron [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:34:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3182: 321 pgs: 321 active+clean; 549 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 23 05:34:12 np0005593232 nova_compute[250269]: 2026-01-23 10:34:12.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:34:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:12.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:12 np0005593232 nova_compute[250269]: 2026-01-23 10:34:12.338 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:34:12 np0005593232 nova_compute[250269]: 2026-01-23 10:34:12.339 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:34:12 np0005593232 nova_compute[250269]: 2026-01-23 10:34:12.339 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:34:12 np0005593232 nova_compute[250269]: 2026-01-23 10:34:12.339 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:34:12 np0005593232 nova_compute[250269]: 2026-01-23 10:34:12.339 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:34:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:12.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:34:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2209890529' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:34:12 np0005593232 nova_compute[250269]: 2026-01-23 10:34:12.825 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:34:13 np0005593232 nova_compute[250269]: 2026-01-23 10:34:13.031 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:34:13 np0005593232 nova_compute[250269]: 2026-01-23 10:34:13.032 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4177MB free_disk=20.83083724975586GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:34:13 np0005593232 nova_compute[250269]: 2026-01-23 10:34:13.033 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:34:13 np0005593232 nova_compute[250269]: 2026-01-23 10:34:13.033 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:34:13 np0005593232 nova_compute[250269]: 2026-01-23 10:34:13.089 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:13 np0005593232 nova_compute[250269]: 2026-01-23 10:34:13.160 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:34:13 np0005593232 nova_compute[250269]: 2026-01-23 10:34:13.161 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:34:13 np0005593232 nova_compute[250269]: 2026-01-23 10:34:13.161 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:34:13 np0005593232 nova_compute[250269]: 2026-01-23 10:34:13.286 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:34:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:34:13 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4008311550' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:34:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:34:13 np0005593232 nova_compute[250269]: 2026-01-23 10:34:13.789 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:34:13 np0005593232 nova_compute[250269]: 2026-01-23 10:34:13.800 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:34:13 np0005593232 nova_compute[250269]: 2026-01-23 10:34:13.843 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:34:13 np0005593232 nova_compute[250269]: 2026-01-23 10:34:13.990 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:34:13 np0005593232 nova_compute[250269]: 2026-01-23 10:34:13.991 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.958s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:34:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3183: 321 pgs: 321 active+clean; 549 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 23 05:34:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:34:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:14.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:34:14 np0005593232 nova_compute[250269]: 2026-01-23 10:34:14.461 250273 DEBUG nova.network.neutron [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Updating instance_info_cache with network_info: [{"id": "629a33c7-4917-4a44-9978-d78fecc89001", "address": "fa:16:3e:fe:27:90", "network": {"id": "d7d5530f-5227-4f75-bac0-2604bb3d68e2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1351383381-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815b71acf60d4ed8933ebd05228fa0c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap629a33c7-49", "ovs_interfaceid": "629a33c7-4917-4a44-9978-d78fecc89001", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:34:14 np0005593232 nova_compute[250269]: 2026-01-23 10:34:14.503 250273 DEBUG oslo_concurrency.lockutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Releasing lock "refresh_cache-5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:34:14 np0005593232 nova_compute[250269]: 2026-01-23 10:34:14.503 250273 DEBUG nova.compute.manager [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Instance network_info: |[{"id": "629a33c7-4917-4a44-9978-d78fecc89001", "address": "fa:16:3e:fe:27:90", "network": {"id": "d7d5530f-5227-4f75-bac0-2604bb3d68e2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1351383381-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815b71acf60d4ed8933ebd05228fa0c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap629a33c7-49", "ovs_interfaceid": "629a33c7-4917-4a44-9978-d78fecc89001", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:34:14 np0005593232 nova_compute[250269]: 2026-01-23 10:34:14.504 250273 DEBUG oslo_concurrency.lockutils [req-f7a151a8-98b7-4a1c-a45b-3f369aab049b req-67c5b6e9-20b3-4ab7-b018-8ad9387ff7eb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:34:14 np0005593232 nova_compute[250269]: 2026-01-23 10:34:14.504 250273 DEBUG nova.network.neutron [req-f7a151a8-98b7-4a1c-a45b-3f369aab049b req-67c5b6e9-20b3-4ab7-b018-8ad9387ff7eb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Refreshing network info cache for port 629a33c7-4917-4a44-9978-d78fecc89001 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:34:14 np0005593232 nova_compute[250269]: 2026-01-23 10:34:14.508 250273 DEBUG nova.virt.libvirt.driver [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Start _get_guest_xml network_info=[{"id": "629a33c7-4917-4a44-9978-d78fecc89001", "address": "fa:16:3e:fe:27:90", "network": {"id": "d7d5530f-5227-4f75-bac0-2604bb3d68e2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1351383381-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815b71acf60d4ed8933ebd05228fa0c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap629a33c7-49", "ovs_interfaceid": "629a33c7-4917-4a44-9978-d78fecc89001", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:34:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:14.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:14 np0005593232 nova_compute[250269]: 2026-01-23 10:34:14.514 250273 WARNING nova.virt.libvirt.driver [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:34:14 np0005593232 nova_compute[250269]: 2026-01-23 10:34:14.520 250273 DEBUG nova.virt.libvirt.host [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:34:14 np0005593232 nova_compute[250269]: 2026-01-23 10:34:14.521 250273 DEBUG nova.virt.libvirt.host [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:34:14 np0005593232 nova_compute[250269]: 2026-01-23 10:34:14.524 250273 DEBUG nova.virt.libvirt.host [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:34:14 np0005593232 nova_compute[250269]: 2026-01-23 10:34:14.525 250273 DEBUG nova.virt.libvirt.host [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:34:14 np0005593232 nova_compute[250269]: 2026-01-23 10:34:14.526 250273 DEBUG nova.virt.libvirt.driver [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:34:14 np0005593232 nova_compute[250269]: 2026-01-23 10:34:14.526 250273 DEBUG nova.virt.hardware [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:34:14 np0005593232 nova_compute[250269]: 2026-01-23 10:34:14.527 250273 DEBUG nova.virt.hardware [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:34:14 np0005593232 nova_compute[250269]: 2026-01-23 10:34:14.527 250273 DEBUG nova.virt.hardware [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:34:14 np0005593232 nova_compute[250269]: 2026-01-23 10:34:14.528 250273 DEBUG nova.virt.hardware [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:34:14 np0005593232 nova_compute[250269]: 2026-01-23 10:34:14.528 250273 DEBUG nova.virt.hardware [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:34:14 np0005593232 nova_compute[250269]: 2026-01-23 10:34:14.528 250273 DEBUG nova.virt.hardware [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:34:14 np0005593232 nova_compute[250269]: 2026-01-23 10:34:14.529 250273 DEBUG nova.virt.hardware [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:34:14 np0005593232 nova_compute[250269]: 2026-01-23 10:34:14.529 250273 DEBUG nova.virt.hardware [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:34:14 np0005593232 nova_compute[250269]: 2026-01-23 10:34:14.529 250273 DEBUG nova.virt.hardware [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:34:14 np0005593232 nova_compute[250269]: 2026-01-23 10:34:14.530 250273 DEBUG nova.virt.hardware [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:34:14 np0005593232 nova_compute[250269]: 2026-01-23 10:34:14.530 250273 DEBUG nova.virt.hardware [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:34:14 np0005593232 nova_compute[250269]: 2026-01-23 10:34:14.534 250273 DEBUG oslo_concurrency.processutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:34:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:34:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2865950511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.021 250273 DEBUG oslo_concurrency.processutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.063 250273 DEBUG nova.storage.rbd_utils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] rbd image 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.068 250273 DEBUG oslo_concurrency.processutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.133 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:34:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1663288695' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.501 250273 DEBUG oslo_concurrency.processutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.504 250273 DEBUG nova.virt.libvirt.vif [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:34:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-171773642',display_name='tempest-ServerStableDeviceRescueTest-server-171773642',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-171773642',id=184,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBb+3zp8o0K+puz2JQkOGzmnnNl24MGD8VzK0xADWahKH4uswOlO8X7EcIpKY4ojueYsF9jTYZVtsKkrwP7uCzwhKymJejXuTC0hLydoquX1zBJO2KneNThhWactU3vFAw==',key_name='tempest-keypair-455350267',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='815b71acf60d4ed8933ebd05228fa0c0',ramdisk_id='',reservation_id='r-c01cxkoa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-1802220041',owner_user_name='tempest-ServerStableDeviceRescueTest-1802220041-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:34:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e1629a4b14764dddaabcadd16f3e1c1c',uuid=5d42acd2-a3c4-40d2-b1c7-0dd7920671fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "629a33c7-4917-4a44-9978-d78fecc89001", "address": "fa:16:3e:fe:27:90", "network": {"id": "d7d5530f-5227-4f75-bac0-2604bb3d68e2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1351383381-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815b71acf60d4ed8933ebd05228fa0c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap629a33c7-49", "ovs_interfaceid": "629a33c7-4917-4a44-9978-d78fecc89001", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.505 250273 DEBUG nova.network.os_vif_util [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Converting VIF {"id": "629a33c7-4917-4a44-9978-d78fecc89001", "address": "fa:16:3e:fe:27:90", "network": {"id": "d7d5530f-5227-4f75-bac0-2604bb3d68e2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1351383381-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815b71acf60d4ed8933ebd05228fa0c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap629a33c7-49", "ovs_interfaceid": "629a33c7-4917-4a44-9978-d78fecc89001", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.508 250273 DEBUG nova.network.os_vif_util [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:27:90,bridge_name='br-int',has_traffic_filtering=True,id=629a33c7-4917-4a44-9978-d78fecc89001,network=Network(d7d5530f-5227-4f75-bac0-2604bb3d68e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap629a33c7-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.511 250273 DEBUG nova.objects.instance [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.534 250273 DEBUG nova.virt.libvirt.driver [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:34:15 np0005593232 nova_compute[250269]:  <uuid>5d42acd2-a3c4-40d2-b1c7-0dd7920671fe</uuid>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:  <name>instance-000000b8</name>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-171773642</nova:name>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:34:14</nova:creationTime>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:34:15 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:        <nova:user uuid="e1629a4b14764dddaabcadd16f3e1c1c">tempest-ServerStableDeviceRescueTest-1802220041-project-member</nova:user>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:        <nova:project uuid="815b71acf60d4ed8933ebd05228fa0c0">tempest-ServerStableDeviceRescueTest-1802220041</nova:project>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:        <nova:port uuid="629a33c7-4917-4a44-9978-d78fecc89001">
Jan 23 05:34:15 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <entry name="serial">5d42acd2-a3c4-40d2-b1c7-0dd7920671fe</entry>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <entry name="uuid">5d42acd2-a3c4-40d2-b1c7-0dd7920671fe</entry>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk">
Jan 23 05:34:15 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:34:15 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk.config">
Jan 23 05:34:15 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:34:15 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:fe:27:90"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <target dev="tap629a33c7-49"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/5d42acd2-a3c4-40d2-b1c7-0dd7920671fe/console.log" append="off"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:34:15 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:34:15 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:34:15 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:34:15 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.536 250273 DEBUG nova.compute.manager [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Preparing to wait for external event network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.537 250273 DEBUG oslo_concurrency.lockutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Acquiring lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.538 250273 DEBUG oslo_concurrency.lockutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.538 250273 DEBUG oslo_concurrency.lockutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.539 250273 DEBUG nova.virt.libvirt.vif [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:34:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-171773642',display_name='tempest-ServerStableDeviceRescueTest-server-171773642',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-171773642',id=184,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBb+3zp8o0K+puz2JQkOGzmnnNl24MGD8VzK0xADWahKH4uswOlO8X7EcIpKY4ojueYsF9jTYZVtsKkrwP7uCzwhKymJejXuTC0hLydoquX1zBJO2KneNThhWactU3vFAw==',key_name='tempest-keypair-455350267',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='815b71acf60d4ed8933ebd05228fa0c0',ramdisk_id='',reservation_id='r-c01cxkoa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerStableDeviceRescueTest-1802220041',owner_user_name='tempest-ServerStableDeviceRescueTest-1802220041-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:34:06Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e1629a4b14764dddaabcadd16f3e1c1c',uuid=5d42acd2-a3c4-40d2-b1c7-0dd7920671fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "629a33c7-4917-4a44-9978-d78fecc89001", "address": "fa:16:3e:fe:27:90", "network": {"id": "d7d5530f-5227-4f75-bac0-2604bb3d68e2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1351383381-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815b71acf60d4ed8933ebd05228fa0c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap629a33c7-49", "ovs_interfaceid": "629a33c7-4917-4a44-9978-d78fecc89001", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.540 250273 DEBUG nova.network.os_vif_util [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Converting VIF {"id": "629a33c7-4917-4a44-9978-d78fecc89001", "address": "fa:16:3e:fe:27:90", "network": {"id": "d7d5530f-5227-4f75-bac0-2604bb3d68e2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1351383381-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815b71acf60d4ed8933ebd05228fa0c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap629a33c7-49", "ovs_interfaceid": "629a33c7-4917-4a44-9978-d78fecc89001", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.541 250273 DEBUG nova.network.os_vif_util [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:27:90,bridge_name='br-int',has_traffic_filtering=True,id=629a33c7-4917-4a44-9978-d78fecc89001,network=Network(d7d5530f-5227-4f75-bac0-2604bb3d68e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap629a33c7-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.542 250273 DEBUG os_vif [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:27:90,bridge_name='br-int',has_traffic_filtering=True,id=629a33c7-4917-4a44-9978-d78fecc89001,network=Network(d7d5530f-5227-4f75-bac0-2604bb3d68e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap629a33c7-49') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.544 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.545 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.545 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.552 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.552 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap629a33c7-49, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.553 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap629a33c7-49, col_values=(('external_ids', {'iface-id': '629a33c7-4917-4a44-9978-d78fecc89001', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:27:90', 'vm-uuid': '5d42acd2-a3c4-40d2-b1c7-0dd7920671fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.557 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:15 np0005593232 NetworkManager[49057]: <info>  [1769164455.5585] manager: (tap629a33c7-49): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/331)
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.560 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.568 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.570 250273 INFO os_vif [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:27:90,bridge_name='br-int',has_traffic_filtering=True,id=629a33c7-4917-4a44-9978-d78fecc89001,network=Network(d7d5530f-5227-4f75-bac0-2604bb3d68e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap629a33c7-49')#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.700 250273 DEBUG nova.virt.libvirt.driver [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.701 250273 DEBUG nova.virt.libvirt.driver [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.701 250273 DEBUG nova.virt.libvirt.driver [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] No VIF found with MAC fa:16:3e:fe:27:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.702 250273 INFO nova.virt.libvirt.driver [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Using config drive#033[00m
Jan 23 05:34:15 np0005593232 nova_compute[250269]: 2026-01-23 10:34:15.737 250273 DEBUG nova.storage.rbd_utils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] rbd image 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:34:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3184: 321 pgs: 321 active+clean; 549 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:34:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:16.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:16.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:16 np0005593232 nova_compute[250269]: 2026-01-23 10:34:16.906 250273 INFO nova.virt.libvirt.driver [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Creating config drive at /var/lib/nova/instances/5d42acd2-a3c4-40d2-b1c7-0dd7920671fe/disk.config#033[00m
Jan 23 05:34:16 np0005593232 nova_compute[250269]: 2026-01-23 10:34:16.921 250273 DEBUG oslo_concurrency.processutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5d42acd2-a3c4-40d2-b1c7-0dd7920671fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6qo_n6mf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:34:16 np0005593232 nova_compute[250269]: 2026-01-23 10:34:16.992 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:34:16 np0005593232 nova_compute[250269]: 2026-01-23 10:34:16.993 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:34:16 np0005593232 nova_compute[250269]: 2026-01-23 10:34:16.994 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:34:17 np0005593232 nova_compute[250269]: 2026-01-23 10:34:17.025 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 23 05:34:17 np0005593232 nova_compute[250269]: 2026-01-23 10:34:17.026 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:34:17 np0005593232 nova_compute[250269]: 2026-01-23 10:34:17.083 250273 DEBUG oslo_concurrency.processutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5d42acd2-a3c4-40d2-b1c7-0dd7920671fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6qo_n6mf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:34:17 np0005593232 nova_compute[250269]: 2026-01-23 10:34:17.117 250273 DEBUG nova.storage.rbd_utils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] rbd image 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:34:17 np0005593232 nova_compute[250269]: 2026-01-23 10:34:17.126 250273 DEBUG oslo_concurrency.processutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5d42acd2-a3c4-40d2-b1c7-0dd7920671fe/disk.config 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:34:18 np0005593232 nova_compute[250269]: 2026-01-23 10:34:18.063 250273 DEBUG nova.network.neutron [req-f7a151a8-98b7-4a1c-a45b-3f369aab049b req-67c5b6e9-20b3-4ab7-b018-8ad9387ff7eb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Updated VIF entry in instance network info cache for port 629a33c7-4917-4a44-9978-d78fecc89001. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:34:18 np0005593232 nova_compute[250269]: 2026-01-23 10:34:18.064 250273 DEBUG nova.network.neutron [req-f7a151a8-98b7-4a1c-a45b-3f369aab049b req-67c5b6e9-20b3-4ab7-b018-8ad9387ff7eb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Updating instance_info_cache with network_info: [{"id": "629a33c7-4917-4a44-9978-d78fecc89001", "address": "fa:16:3e:fe:27:90", "network": {"id": "d7d5530f-5227-4f75-bac0-2604bb3d68e2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1351383381-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815b71acf60d4ed8933ebd05228fa0c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap629a33c7-49", "ovs_interfaceid": "629a33c7-4917-4a44-9978-d78fecc89001", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:34:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3185: 321 pgs: 321 active+clean; 549 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:34:18 np0005593232 nova_compute[250269]: 2026-01-23 10:34:18.087 250273 DEBUG oslo_concurrency.lockutils [req-f7a151a8-98b7-4a1c-a45b-3f369aab049b req-67c5b6e9-20b3-4ab7-b018-8ad9387ff7eb 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:34:18 np0005593232 nova_compute[250269]: 2026-01-23 10:34:18.111 250273 DEBUG oslo_concurrency.processutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5d42acd2-a3c4-40d2-b1c7-0dd7920671fe/disk.config 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.985s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:34:18 np0005593232 nova_compute[250269]: 2026-01-23 10:34:18.112 250273 INFO nova.virt.libvirt.driver [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Deleting local config drive /var/lib/nova/instances/5d42acd2-a3c4-40d2-b1c7-0dd7920671fe/disk.config because it was imported into RBD.#033[00m
Jan 23 05:34:18 np0005593232 kernel: tap629a33c7-49: entered promiscuous mode
Jan 23 05:34:18 np0005593232 NetworkManager[49057]: <info>  [1769164458.1907] manager: (tap629a33c7-49): new Tun device (/org/freedesktop/NetworkManager/Devices/332)
Jan 23 05:34:18 np0005593232 ovn_controller[151001]: 2026-01-23T10:34:18Z|00717|binding|INFO|Claiming lport 629a33c7-4917-4a44-9978-d78fecc89001 for this chassis.
Jan 23 05:34:18 np0005593232 ovn_controller[151001]: 2026-01-23T10:34:18Z|00718|binding|INFO|629a33c7-4917-4a44-9978-d78fecc89001: Claiming fa:16:3e:fe:27:90 10.100.0.5
Jan 23 05:34:18 np0005593232 nova_compute[250269]: 2026-01-23 10:34:18.190 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:18 np0005593232 nova_compute[250269]: 2026-01-23 10:34:18.195 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:18 np0005593232 systemd-udevd[375568]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:34:18 np0005593232 systemd-machined[215836]: New machine qemu-81-instance-000000b8.
Jan 23 05:34:18 np0005593232 NetworkManager[49057]: <info>  [1769164458.2408] device (tap629a33c7-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:34:18 np0005593232 NetworkManager[49057]: <info>  [1769164458.2420] device (tap629a33c7-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:34:18 np0005593232 nova_compute[250269]: 2026-01-23 10:34:18.255 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:18 np0005593232 ovn_controller[151001]: 2026-01-23T10:34:18Z|00719|binding|INFO|Setting lport 629a33c7-4917-4a44-9978-d78fecc89001 ovn-installed in OVS
Jan 23 05:34:18 np0005593232 systemd[1]: Started Virtual Machine qemu-81-instance-000000b8.
Jan 23 05:34:18 np0005593232 nova_compute[250269]: 2026-01-23 10:34:18.263 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:18 np0005593232 ovn_controller[151001]: 2026-01-23T10:34:18Z|00720|binding|INFO|Setting lport 629a33c7-4917-4a44-9978-d78fecc89001 up in Southbound
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.273 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:27:90 10.100.0.5'], port_security=['fa:16:3e:fe:27:90 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5d42acd2-a3c4-40d2-b1c7-0dd7920671fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7d5530f-5227-4f75-bac0-2604bb3d68e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '815b71acf60d4ed8933ebd05228fa0c0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5072a19e-23bb-454f-8286-abdf06d201ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11c3d371-746a-4085-8cb4-b3d90e2e50bf, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=629a33c7-4917-4a44-9978-d78fecc89001) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.275 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 629a33c7-4917-4a44-9978-d78fecc89001 in datapath d7d5530f-5227-4f75-bac0-2604bb3d68e2 bound to our chassis#033[00m
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.277 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7d5530f-5227-4f75-bac0-2604bb3d68e2#033[00m
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.296 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[115caa7f-19c9-4e7d-941a-a1fddbd28362]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.298 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd7d5530f-51 in ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.301 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd7d5530f-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.301 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[97123cf5-534e-4d28-93b6-a23b2bdd95d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.302 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0c7ea123-6ef6-47a1-b0dd-7b2921be11c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.316 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[013bdca4-9a61-41ab-be25-cfb9590ad8cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:34:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:18.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:18 np0005593232 nova_compute[250269]: 2026-01-23 10:34:18.324 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.333 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[39907573-c13c-4a23-a2b1-9aaf244876ee]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.366 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[2a2a5548-5849-4e8f-aa9f-01fea5498256]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:34:18 np0005593232 NetworkManager[49057]: <info>  [1769164458.3755] manager: (tapd7d5530f-50): new Veth device (/org/freedesktop/NetworkManager/Devices/333)
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.375 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8e30acbc-97ed-449a-b2cb-494dcdb5339d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.419 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[ef787898-d44e-43a9-917e-70fc3a64fd38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.422 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[b46e65e0-6bdd-41e9-89a2-fe0a57286341]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:34:18 np0005593232 NetworkManager[49057]: <info>  [1769164458.4428] device (tapd7d5530f-50): carrier: link connected
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.447 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[66be13ef-e68b-44ec-ac5d-679a97b73808]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.465 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a0899796-a4d4-4797-aad1-cb0248375877]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7d5530f-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:67:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 837158, 'reachable_time': 38443, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375602, 'error': None, 'target': 'ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.482 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[26de2f36-a3ad-492d-91e2-124b01348518]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe00:67cc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 837158, 'tstamp': 837158}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375603, 'error': None, 'target': 'ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.500 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[81807263-7ea9-4911-bf0a-ef0229919885]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7d5530f-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:67:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 215], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 837158, 'reachable_time': 38443, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 375604, 'error': None, 'target': 'ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:34:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:18.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.537 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9ae93f87-9e8d-4268-85a4-893fd465b92d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.597 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[19860560-4007-4495-9bac-0116e796d309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.599 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7d5530f-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.600 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.600 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7d5530f-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:34:18 np0005593232 nova_compute[250269]: 2026-01-23 10:34:18.603 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:18 np0005593232 kernel: tapd7d5530f-50: entered promiscuous mode
Jan 23 05:34:18 np0005593232 NetworkManager[49057]: <info>  [1769164458.6039] manager: (tapd7d5530f-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/334)
Jan 23 05:34:18 np0005593232 nova_compute[250269]: 2026-01-23 10:34:18.605 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.606 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7d5530f-50, col_values=(('external_ids', {'iface-id': '4c99eeb5-c437-4d31-ac3b-bfd151140733'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:34:18 np0005593232 nova_compute[250269]: 2026-01-23 10:34:18.607 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:18 np0005593232 ovn_controller[151001]: 2026-01-23T10:34:18Z|00721|binding|INFO|Releasing lport 4c99eeb5-c437-4d31-ac3b-bfd151140733 from this chassis (sb_readonly=0)
Jan 23 05:34:18 np0005593232 nova_compute[250269]: 2026-01-23 10:34:18.609 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.610 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d7d5530f-5227-4f75-bac0-2604bb3d68e2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d7d5530f-5227-4f75-bac0-2604bb3d68e2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.611 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7b44a5e5-103f-4aa5-806c-855e7a48c687]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.612 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-d7d5530f-5227-4f75-bac0-2604bb3d68e2
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/d7d5530f-5227-4f75-bac0-2604bb3d68e2.pid.haproxy
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID d7d5530f-5227-4f75-bac0-2604bb3d68e2
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:34:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:18.614 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2', 'env', 'PROCESS_TAG=haproxy-d7d5530f-5227-4f75-bac0-2604bb3d68e2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d7d5530f-5227-4f75-bac0-2604bb3d68e2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:34:18 np0005593232 nova_compute[250269]: 2026-01-23 10:34:18.622 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:34:18 np0005593232 nova_compute[250269]: 2026-01-23 10:34:18.879 250273 DEBUG nova.compute.manager [req-8dd957e4-a1d3-4eb6-b352-38f0b5d3d580 req-32468a81-ab31-4bff-9710-52bd5ebe2873 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received event network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:34:18 np0005593232 nova_compute[250269]: 2026-01-23 10:34:18.881 250273 DEBUG oslo_concurrency.lockutils [req-8dd957e4-a1d3-4eb6-b352-38f0b5d3d580 req-32468a81-ab31-4bff-9710-52bd5ebe2873 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:34:18 np0005593232 nova_compute[250269]: 2026-01-23 10:34:18.881 250273 DEBUG oslo_concurrency.lockutils [req-8dd957e4-a1d3-4eb6-b352-38f0b5d3d580 req-32468a81-ab31-4bff-9710-52bd5ebe2873 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:34:18 np0005593232 nova_compute[250269]: 2026-01-23 10:34:18.882 250273 DEBUG oslo_concurrency.lockutils [req-8dd957e4-a1d3-4eb6-b352-38f0b5d3d580 req-32468a81-ab31-4bff-9710-52bd5ebe2873 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:34:18 np0005593232 nova_compute[250269]: 2026-01-23 10:34:18.882 250273 DEBUG nova.compute.manager [req-8dd957e4-a1d3-4eb6-b352-38f0b5d3d580 req-32468a81-ab31-4bff-9710-52bd5ebe2873 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Processing event network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:34:19 np0005593232 podman[375672]: 2026-01-23 10:34:19.083380469 +0000 UTC m=+0.062338493 container create 3e9c4df6fe46fd8ff2754998b433875b8d9729caa44d425b5568fa415332748e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 05:34:19 np0005593232 systemd[1]: Started libpod-conmon-3e9c4df6fe46fd8ff2754998b433875b8d9729caa44d425b5568fa415332748e.scope.
Jan 23 05:34:19 np0005593232 podman[375672]: 2026-01-23 10:34:19.052802129 +0000 UTC m=+0.031760173 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:34:19 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:34:19 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ce2300dbfb36f8604dae0c79e74e744728979faf109214d0ba3ef55d13058a4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:34:19 np0005593232 podman[375672]: 2026-01-23 10:34:19.188540768 +0000 UTC m=+0.167498792 container init 3e9c4df6fe46fd8ff2754998b433875b8d9729caa44d425b5568fa415332748e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 23 05:34:19 np0005593232 podman[375672]: 2026-01-23 10:34:19.196447852 +0000 UTC m=+0.175405876 container start 3e9c4df6fe46fd8ff2754998b433875b8d9729caa44d425b5568fa415332748e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.197 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164459.1971326, 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.198 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] VM Started (Lifecycle Event)#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.201 250273 DEBUG nova.compute.manager [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.207 250273 DEBUG nova.virt.libvirt.driver [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.214 250273 INFO nova.virt.libvirt.driver [-] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Instance spawned successfully.#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.215 250273 DEBUG nova.virt.libvirt.driver [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:34:19 np0005593232 neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2[375692]: [NOTICE]   (375697) : New worker (375699) forked
Jan 23 05:34:19 np0005593232 neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2[375692]: [NOTICE]   (375697) : Loading success.
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.230 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.238 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.241 250273 DEBUG nova.virt.libvirt.driver [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.242 250273 DEBUG nova.virt.libvirt.driver [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.242 250273 DEBUG nova.virt.libvirt.driver [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.243 250273 DEBUG nova.virt.libvirt.driver [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.243 250273 DEBUG nova.virt.libvirt.driver [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.244 250273 DEBUG nova.virt.libvirt.driver [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.288 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.289 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164459.1973417, 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.289 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.326 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.332 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164459.2054894, 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.333 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.338 250273 INFO nova.compute.manager [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Took 12.68 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.339 250273 DEBUG nova.compute.manager [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.368 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.372 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.406 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.420 250273 INFO nova.compute.manager [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Took 14.06 seconds to build instance.#033[00m
Jan 23 05:34:19 np0005593232 nova_compute[250269]: 2026-01-23 10:34:19.444 250273 DEBUG oslo_concurrency.lockutils [None req-b3a99499-17c8-471b-98bc-6d58457c027a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:34:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3186: 321 pgs: 321 active+clean; 549 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.1 KiB/s rd, 351 KiB/s wr, 9 op/s
Jan 23 05:34:20 np0005593232 nova_compute[250269]: 2026-01-23 10:34:20.183 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:20.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:20.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:20 np0005593232 nova_compute[250269]: 2026-01-23 10:34:20.557 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:21 np0005593232 nova_compute[250269]: 2026-01-23 10:34:21.043 250273 DEBUG nova.compute.manager [req-19e53fd7-710e-41ec-8e47-35be5ae6fdad req-13381124-ceff-457b-b735-476d6c384e2e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received event network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:34:21 np0005593232 nova_compute[250269]: 2026-01-23 10:34:21.044 250273 DEBUG oslo_concurrency.lockutils [req-19e53fd7-710e-41ec-8e47-35be5ae6fdad req-13381124-ceff-457b-b735-476d6c384e2e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:34:21 np0005593232 nova_compute[250269]: 2026-01-23 10:34:21.044 250273 DEBUG oslo_concurrency.lockutils [req-19e53fd7-710e-41ec-8e47-35be5ae6fdad req-13381124-ceff-457b-b735-476d6c384e2e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:34:21 np0005593232 nova_compute[250269]: 2026-01-23 10:34:21.045 250273 DEBUG oslo_concurrency.lockutils [req-19e53fd7-710e-41ec-8e47-35be5ae6fdad req-13381124-ceff-457b-b735-476d6c384e2e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:34:21 np0005593232 nova_compute[250269]: 2026-01-23 10:34:21.045 250273 DEBUG nova.compute.manager [req-19e53fd7-710e-41ec-8e47-35be5ae6fdad req-13381124-ceff-457b-b735-476d6c384e2e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] No waiting events found dispatching network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:34:21 np0005593232 nova_compute[250269]: 2026-01-23 10:34:21.046 250273 WARNING nova.compute.manager [req-19e53fd7-710e-41ec-8e47-35be5ae6fdad req-13381124-ceff-457b-b735-476d6c384e2e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received unexpected event network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:34:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3187: 321 pgs: 321 active+clean; 549 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 424 KiB/s rd, 352 KiB/s wr, 29 op/s
Jan 23 05:34:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:22.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:34:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:22.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:34:22 np0005593232 nova_compute[250269]: 2026-01-23 10:34:22.897 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:22 np0005593232 NetworkManager[49057]: <info>  [1769164462.8986] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/335)
Jan 23 05:34:22 np0005593232 NetworkManager[49057]: <info>  [1769164462.9006] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/336)
Jan 23 05:34:23 np0005593232 nova_compute[250269]: 2026-01-23 10:34:23.015 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:23 np0005593232 ovn_controller[151001]: 2026-01-23T10:34:23Z|00722|binding|INFO|Releasing lport 4c99eeb5-c437-4d31-ac3b-bfd151140733 from this chassis (sb_readonly=0)
Jan 23 05:34:23 np0005593232 nova_compute[250269]: 2026-01-23 10:34:23.022 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:23 np0005593232 nova_compute[250269]: 2026-01-23 10:34:23.728 250273 DEBUG nova.compute.manager [req-3e860db8-9d32-4100-a906-509a8af79c0c req-4d124192-2548-42b8-9dc4-200ac5019102 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received event network-changed-629a33c7-4917-4a44-9978-d78fecc89001 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:34:23 np0005593232 nova_compute[250269]: 2026-01-23 10:34:23.729 250273 DEBUG nova.compute.manager [req-3e860db8-9d32-4100-a906-509a8af79c0c req-4d124192-2548-42b8-9dc4-200ac5019102 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Refreshing instance network info cache due to event network-changed-629a33c7-4917-4a44-9978-d78fecc89001. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:34:23 np0005593232 nova_compute[250269]: 2026-01-23 10:34:23.730 250273 DEBUG oslo_concurrency.lockutils [req-3e860db8-9d32-4100-a906-509a8af79c0c req-4d124192-2548-42b8-9dc4-200ac5019102 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:34:23 np0005593232 nova_compute[250269]: 2026-01-23 10:34:23.730 250273 DEBUG oslo_concurrency.lockutils [req-3e860db8-9d32-4100-a906-509a8af79c0c req-4d124192-2548-42b8-9dc4-200ac5019102 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:34:23 np0005593232 nova_compute[250269]: 2026-01-23 10:34:23.731 250273 DEBUG nova.network.neutron [req-3e860db8-9d32-4100-a906-509a8af79c0c req-4d124192-2548-42b8-9dc4-200ac5019102 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Refreshing network info cache for port 629a33c7-4917-4a44-9978-d78fecc89001 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:34:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:34:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3188: 321 pgs: 321 active+clean; 549 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 82 op/s
Jan 23 05:34:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:24.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:34:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:24.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:34:25 np0005593232 nova_compute[250269]: 2026-01-23 10:34:25.184 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:25 np0005593232 nova_compute[250269]: 2026-01-23 10:34:25.561 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3189: 321 pgs: 321 active+clean; 549 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 73 op/s
Jan 23 05:34:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:34:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:26.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:34:26 np0005593232 nova_compute[250269]: 2026-01-23 10:34:26.421 250273 DEBUG nova.network.neutron [req-3e860db8-9d32-4100-a906-509a8af79c0c req-4d124192-2548-42b8-9dc4-200ac5019102 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Updated VIF entry in instance network info cache for port 629a33c7-4917-4a44-9978-d78fecc89001. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:34:26 np0005593232 nova_compute[250269]: 2026-01-23 10:34:26.423 250273 DEBUG nova.network.neutron [req-3e860db8-9d32-4100-a906-509a8af79c0c req-4d124192-2548-42b8-9dc4-200ac5019102 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Updating instance_info_cache with network_info: [{"id": "629a33c7-4917-4a44-9978-d78fecc89001", "address": "fa:16:3e:fe:27:90", "network": {"id": "d7d5530f-5227-4f75-bac0-2604bb3d68e2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1351383381-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815b71acf60d4ed8933ebd05228fa0c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap629a33c7-49", "ovs_interfaceid": "629a33c7-4917-4a44-9978-d78fecc89001", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:34:26 np0005593232 nova_compute[250269]: 2026-01-23 10:34:26.447 250273 DEBUG oslo_concurrency.lockutils [req-3e860db8-9d32-4100-a906-509a8af79c0c req-4d124192-2548-42b8-9dc4-200ac5019102 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:34:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:26.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:27 np0005593232 nova_compute[250269]: 2026-01-23 10:34:27.104 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3190: 321 pgs: 321 active+clean; 549 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 74 op/s
Jan 23 05:34:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:28.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:28 np0005593232 podman[375765]: 2026-01-23 10:34:28.477220383 +0000 UTC m=+0.126897248 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 23 05:34:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:28.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:34:29 np0005593232 nova_compute[250269]: 2026-01-23 10:34:29.202 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3191: 321 pgs: 321 active+clean; 549 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 73 op/s
Jan 23 05:34:30 np0005593232 nova_compute[250269]: 2026-01-23 10:34:30.187 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:30.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:30.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:30 np0005593232 nova_compute[250269]: 2026-01-23 10:34:30.583 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3192: 321 pgs: 321 active+clean; 549 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 73 op/s
Jan 23 05:34:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 05:34:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:32.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 05:34:32 np0005593232 podman[375793]: 2026-01-23 10:34:32.430138979 +0000 UTC m=+0.083132494 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:34:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:34:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:32.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:34:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:34:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3193: 321 pgs: 321 active+clean; 553 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 917 KiB/s wr, 70 op/s
Jan 23 05:34:34 np0005593232 ovn_controller[151001]: 2026-01-23T10:34:34Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fe:27:90 10.100.0.5
Jan 23 05:34:34 np0005593232 ovn_controller[151001]: 2026-01-23T10:34:34Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fe:27:90 10.100.0.5
Jan 23 05:34:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:34.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:34:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:34.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:34:35 np0005593232 nova_compute[250269]: 2026-01-23 10:34:35.226 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:35 np0005593232 nova_compute[250269]: 2026-01-23 10:34:35.586 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3194: 321 pgs: 321 active+clean; 553 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 903 KiB/s wr, 16 op/s
Jan 23 05:34:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:36.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:36.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:34:37
Jan 23 05:34:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:34:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:34:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'images', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'vms', '.rgw.root']
Jan 23 05:34:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:34:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:34:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:34:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:34:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:34:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:34:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:34:37 np0005593232 nova_compute[250269]: 2026-01-23 10:34:37.689 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:37.692 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=69, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=68) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:34:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:37.695 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:34:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3195: 321 pgs: 321 active+clean; 582 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 23 05:34:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:38.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:34:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:38.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:34:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:34:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:34:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:34:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:34:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:34:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:34:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:34:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:34:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:34:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:34:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:34:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3196: 321 pgs: 321 active+clean; 582 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 23 05:34:40 np0005593232 nova_compute[250269]: 2026-01-23 10:34:40.231 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:34:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:40.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:34:40 np0005593232 nova_compute[250269]: 2026-01-23 10:34:40.481 250273 DEBUG nova.compute.manager [None req-4e646d47-9353-41be-9015-5475c5392e8c e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:34:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:40.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:40 np0005593232 nova_compute[250269]: 2026-01-23 10:34:40.574 250273 INFO nova.compute.manager [None req-4e646d47-9353-41be-9015-5475c5392e8c e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] instance snapshotting#033[00m
Jan 23 05:34:40 np0005593232 nova_compute[250269]: 2026-01-23 10:34:40.589 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:41 np0005593232 nova_compute[250269]: 2026-01-23 10:34:41.123 250273 INFO nova.virt.libvirt.driver [None req-4e646d47-9353-41be-9015-5475c5392e8c e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Beginning live snapshot process#033[00m
Jan 23 05:34:41 np0005593232 nova_compute[250269]: 2026-01-23 10:34:41.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:34:41 np0005593232 nova_compute[250269]: 2026-01-23 10:34:41.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 23 05:34:41 np0005593232 nova_compute[250269]: 2026-01-23 10:34:41.348 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 23 05:34:41 np0005593232 nova_compute[250269]: 2026-01-23 10:34:41.353 250273 DEBUG nova.virt.libvirt.imagebackend [None req-4e646d47-9353-41be-9015-5475c5392e8c e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] No parent info for 84c0ef19-7f67-4bd3-95d8-507c3e0942ed; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 23 05:34:41 np0005593232 nova_compute[250269]: 2026-01-23 10:34:41.874 250273 DEBUG nova.storage.rbd_utils [None req-4e646d47-9353-41be-9015-5475c5392e8c e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] creating snapshot(97eb4f8a38df4915ba34e2b750a9eb03) on rbd image(5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 05:34:42 np0005593232 nova_compute[250269]: 2026-01-23 10:34:42.009 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3197: 321 pgs: 321 active+clean; 594 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 311 KiB/s rd, 2.8 MiB/s wr, 74 op/s
Jan 23 05:34:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e354 do_prune osdmap full prune enabled
Jan 23 05:34:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:42.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e355 e355: 3 total, 3 up, 3 in
Jan 23 05:34:42 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e355: 3 total, 3 up, 3 in
Jan 23 05:34:42 np0005593232 nova_compute[250269]: 2026-01-23 10:34:42.513 250273 DEBUG nova.storage.rbd_utils [None req-4e646d47-9353-41be-9015-5475c5392e8c e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] cloning vms/5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk@97eb4f8a38df4915ba34e2b750a9eb03 to images/e01877f1-023b-4adf-9357-0984581d9119 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 23 05:34:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:34:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:42.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:34:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:42.650 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:34:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:42.651 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:34:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:42.651 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:34:42 np0005593232 nova_compute[250269]: 2026-01-23 10:34:42.701 250273 DEBUG nova.storage.rbd_utils [None req-4e646d47-9353-41be-9015-5475c5392e8c e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] flattening images/e01877f1-023b-4adf-9357-0984581d9119 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 23 05:34:43 np0005593232 nova_compute[250269]: 2026-01-23 10:34:43.242 250273 DEBUG nova.storage.rbd_utils [None req-4e646d47-9353-41be-9015-5475c5392e8c e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] removing snapshot(97eb4f8a38df4915ba34e2b750a9eb03) on rbd image(5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 23 05:34:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e355 do_prune osdmap full prune enabled
Jan 23 05:34:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e356 e356: 3 total, 3 up, 3 in
Jan 23 05:34:43 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e356: 3 total, 3 up, 3 in
Jan 23 05:34:43 np0005593232 nova_compute[250269]: 2026-01-23 10:34:43.523 250273 DEBUG nova.storage.rbd_utils [None req-4e646d47-9353-41be-9015-5475c5392e8c e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] creating snapshot(snap) on rbd image(e01877f1-023b-4adf-9357-0984581d9119) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 05:34:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:34:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3200: 321 pgs: 321 active+clean; 628 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 439 KiB/s rd, 4.6 MiB/s wr, 115 op/s
Jan 23 05:34:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:44.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e356 do_prune osdmap full prune enabled
Jan 23 05:34:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e357 e357: 3 total, 3 up, 3 in
Jan 23 05:34:44 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e357: 3 total, 3 up, 3 in
Jan 23 05:34:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:34:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:44.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:34:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:34:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2036369111' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:34:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:34:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2036369111' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:34:44 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:34:44.697 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '69'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:34:45 np0005593232 nova_compute[250269]: 2026-01-23 10:34:45.263 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:45 np0005593232 nova_compute[250269]: 2026-01-23 10:34:45.277 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:34:45 np0005593232 nova_compute[250269]: 2026-01-23 10:34:45.357 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Triggering sync for uuid 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 23 05:34:45 np0005593232 nova_compute[250269]: 2026-01-23 10:34:45.358 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:34:45 np0005593232 nova_compute[250269]: 2026-01-23 10:34:45.359 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:34:45 np0005593232 nova_compute[250269]: 2026-01-23 10:34:45.359 250273 INFO nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] During sync_power_state the instance has a pending task (image_uploading). Skip.#033[00m
Jan 23 05:34:45 np0005593232 nova_compute[250269]: 2026-01-23 10:34:45.360 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:34:45 np0005593232 nova_compute[250269]: 2026-01-23 10:34:45.360 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:34:45 np0005593232 nova_compute[250269]: 2026-01-23 10:34:45.591 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3202: 321 pgs: 321 active+clean; 628 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Jan 23 05:34:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:46.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:46.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:34:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:34:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:34:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:34:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.009673112379024754 of space, bias 1.0, pg target 2.901933713707426 quantized to 32 (current 32)
Jan 23 05:34:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:34:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002175591499162479 of space, bias 1.0, pg target 0.6483262667504187 quantized to 32 (current 32)
Jan 23 05:34:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:34:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:34:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:34:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00486990798901917 of space, bias 1.0, pg target 1.4512325807277127 quantized to 32 (current 32)
Jan 23 05:34:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:34:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 32)
Jan 23 05:34:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:34:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:34:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:34:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Jan 23 05:34:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:34:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 23 05:34:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:34:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:34:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:34:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 23 05:34:47 np0005593232 nova_compute[250269]: 2026-01-23 10:34:47.730 250273 INFO nova.virt.libvirt.driver [None req-4e646d47-9353-41be-9015-5475c5392e8c e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Snapshot image upload complete#033[00m
Jan 23 05:34:47 np0005593232 nova_compute[250269]: 2026-01-23 10:34:47.732 250273 INFO nova.compute.manager [None req-4e646d47-9353-41be-9015-5475c5392e8c e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Took 7.16 seconds to snapshot the instance on the hypervisor.#033[00m
Jan 23 05:34:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 05:34:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3203: 321 pgs: 321 active+clean; 708 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.9 MiB/s rd, 10 MiB/s wr, 201 op/s
Jan 23 05:34:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:34:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 05:34:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:34:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:34:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:48.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:34:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:48.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:34:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e357 do_prune osdmap full prune enabled
Jan 23 05:34:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e358 e358: 3 total, 3 up, 3 in
Jan 23 05:34:48 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e358: 3 total, 3 up, 3 in
Jan 23 05:34:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:34:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:34:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:34:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:34:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:34:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:34:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:34:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:34:49 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 4899b142-6547-4eff-878c-567eee9499a3 does not exist
Jan 23 05:34:49 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c111862b-014c-43b2-a1c9-7f623b4cf56b does not exist
Jan 23 05:34:49 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c7925ec9-7940-41bd-80fa-167d761036d0 does not exist
Jan 23 05:34:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:34:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:34:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:34:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:34:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:34:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:34:50 np0005593232 podman[376284]: 2026-01-23 10:34:50.044620074 +0000 UTC m=+0.062072255 container create 2d39e7136ccf9fe43591b7906aa844e25271d63dfdf7ab5ded853279bde95f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_moser, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 05:34:50 np0005593232 systemd[1]: Started libpod-conmon-2d39e7136ccf9fe43591b7906aa844e25271d63dfdf7ab5ded853279bde95f84.scope.
Jan 23 05:34:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3205: 321 pgs: 321 active+clean; 708 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.1 MiB/s rd, 7.0 MiB/s wr, 148 op/s
Jan 23 05:34:50 np0005593232 podman[376284]: 2026-01-23 10:34:50.014846768 +0000 UTC m=+0.032298969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:34:50 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:34:50 np0005593232 nova_compute[250269]: 2026-01-23 10:34:50.188 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:50 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:34:50 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:34:50 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:34:50 np0005593232 podman[376284]: 2026-01-23 10:34:50.212831765 +0000 UTC m=+0.230284026 container init 2d39e7136ccf9fe43591b7906aa844e25271d63dfdf7ab5ded853279bde95f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:34:50 np0005593232 podman[376284]: 2026-01-23 10:34:50.22286028 +0000 UTC m=+0.240312471 container start 2d39e7136ccf9fe43591b7906aa844e25271d63dfdf7ab5ded853279bde95f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_moser, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 05:34:50 np0005593232 podman[376284]: 2026-01-23 10:34:50.227030109 +0000 UTC m=+0.244482500 container attach 2d39e7136ccf9fe43591b7906aa844e25271d63dfdf7ab5ded853279bde95f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_moser, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 23 05:34:50 np0005593232 nifty_moser[376301]: 167 167
Jan 23 05:34:50 np0005593232 systemd[1]: libpod-2d39e7136ccf9fe43591b7906aa844e25271d63dfdf7ab5ded853279bde95f84.scope: Deactivated successfully.
Jan 23 05:34:50 np0005593232 nova_compute[250269]: 2026-01-23 10:34:50.267 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:50 np0005593232 podman[376306]: 2026-01-23 10:34:50.300353593 +0000 UTC m=+0.048498250 container died 2d39e7136ccf9fe43591b7906aa844e25271d63dfdf7ab5ded853279bde95f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_moser, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 23 05:34:50 np0005593232 systemd[1]: var-lib-containers-storage-overlay-fa0b1162b5944781365735054a4e3b90433b9dfd9c8efcf3d1b6ef9f53ca628c-merged.mount: Deactivated successfully.
Jan 23 05:34:50 np0005593232 podman[376306]: 2026-01-23 10:34:50.36146663 +0000 UTC m=+0.109611277 container remove 2d39e7136ccf9fe43591b7906aa844e25271d63dfdf7ab5ded853279bde95f84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:34:50 np0005593232 systemd[1]: libpod-conmon-2d39e7136ccf9fe43591b7906aa844e25271d63dfdf7ab5ded853279bde95f84.scope: Deactivated successfully.
Jan 23 05:34:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:34:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:50.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:34:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:34:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:50.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:34:50 np0005593232 podman[376328]: 2026-01-23 10:34:50.58032054 +0000 UTC m=+0.053589554 container create 7a547deb06c88cd63c7cc7e1163cdec0fe72c677eb69382276c054f50cfaf45e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mcnulty, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:34:50 np0005593232 nova_compute[250269]: 2026-01-23 10:34:50.594 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:50 np0005593232 systemd[1]: Started libpod-conmon-7a547deb06c88cd63c7cc7e1163cdec0fe72c677eb69382276c054f50cfaf45e.scope.
Jan 23 05:34:50 np0005593232 podman[376328]: 2026-01-23 10:34:50.557318377 +0000 UTC m=+0.030587421 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:34:50 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:34:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879b4ebfcee3f5dd80eaeb345e9cc3ef9badb5fb874c4d326d9e9a9cbffc13d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:34:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879b4ebfcee3f5dd80eaeb345e9cc3ef9badb5fb874c4d326d9e9a9cbffc13d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:34:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879b4ebfcee3f5dd80eaeb345e9cc3ef9badb5fb874c4d326d9e9a9cbffc13d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:34:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879b4ebfcee3f5dd80eaeb345e9cc3ef9badb5fb874c4d326d9e9a9cbffc13d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:34:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/879b4ebfcee3f5dd80eaeb345e9cc3ef9badb5fb874c4d326d9e9a9cbffc13d7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:34:50 np0005593232 podman[376328]: 2026-01-23 10:34:50.706421415 +0000 UTC m=+0.179690489 container init 7a547deb06c88cd63c7cc7e1163cdec0fe72c677eb69382276c054f50cfaf45e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:34:50 np0005593232 podman[376328]: 2026-01-23 10:34:50.715472842 +0000 UTC m=+0.188741886 container start 7a547deb06c88cd63c7cc7e1163cdec0fe72c677eb69382276c054f50cfaf45e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 05:34:50 np0005593232 podman[376328]: 2026-01-23 10:34:50.720155424 +0000 UTC m=+0.193424468 container attach 7a547deb06c88cd63c7cc7e1163cdec0fe72c677eb69382276c054f50cfaf45e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:34:51 np0005593232 youthful_mcnulty[376344]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:34:51 np0005593232 youthful_mcnulty[376344]: --> relative data size: 1.0
Jan 23 05:34:51 np0005593232 youthful_mcnulty[376344]: --> All data devices are unavailable
Jan 23 05:34:51 np0005593232 systemd[1]: libpod-7a547deb06c88cd63c7cc7e1163cdec0fe72c677eb69382276c054f50cfaf45e.scope: Deactivated successfully.
Jan 23 05:34:51 np0005593232 conmon[376344]: conmon 7a547deb06c88cd63c7c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7a547deb06c88cd63c7cc7e1163cdec0fe72c677eb69382276c054f50cfaf45e.scope/container/memory.events
Jan 23 05:34:51 np0005593232 podman[376359]: 2026-01-23 10:34:51.744470929 +0000 UTC m=+0.053064279 container died 7a547deb06c88cd63c7cc7e1163cdec0fe72c677eb69382276c054f50cfaf45e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mcnulty, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 05:34:51 np0005593232 systemd[1]: var-lib-containers-storage-overlay-879b4ebfcee3f5dd80eaeb345e9cc3ef9badb5fb874c4d326d9e9a9cbffc13d7-merged.mount: Deactivated successfully.
Jan 23 05:34:51 np0005593232 podman[376359]: 2026-01-23 10:34:51.82929056 +0000 UTC m=+0.137883900 container remove 7a547deb06c88cd63c7cc7e1163cdec0fe72c677eb69382276c054f50cfaf45e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:34:51 np0005593232 systemd[1]: libpod-conmon-7a547deb06c88cd63c7cc7e1163cdec0fe72c677eb69382276c054f50cfaf45e.scope: Deactivated successfully.
Jan 23 05:34:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3206: 321 pgs: 321 active+clean; 708 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 122 op/s
Jan 23 05:34:52 np0005593232 nova_compute[250269]: 2026-01-23 10:34:52.297 250273 DEBUG oslo_concurrency.lockutils [None req-3c49b605-525e-4954-b330-2ffb3e839454 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Acquiring lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:34:52 np0005593232 nova_compute[250269]: 2026-01-23 10:34:52.298 250273 DEBUG oslo_concurrency.lockutils [None req-3c49b605-525e-4954-b330-2ffb3e839454 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:34:52 np0005593232 nova_compute[250269]: 2026-01-23 10:34:52.326 250273 DEBUG nova.objects.instance [None req-3c49b605-525e-4954-b330-2ffb3e839454 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lazy-loading 'flavor' on Instance uuid 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:34:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:34:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:52.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:34:52 np0005593232 nova_compute[250269]: 2026-01-23 10:34:52.385 250273 DEBUG oslo_concurrency.lockutils [None req-3c49b605-525e-4954-b330-2ffb3e839454 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:34:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:52.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:52 np0005593232 nova_compute[250269]: 2026-01-23 10:34:52.836 250273 DEBUG oslo_concurrency.lockutils [None req-3c49b605-525e-4954-b330-2ffb3e839454 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Acquiring lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:34:52 np0005593232 nova_compute[250269]: 2026-01-23 10:34:52.837 250273 DEBUG oslo_concurrency.lockutils [None req-3c49b605-525e-4954-b330-2ffb3e839454 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:34:52 np0005593232 nova_compute[250269]: 2026-01-23 10:34:52.837 250273 INFO nova.compute.manager [None req-3c49b605-525e-4954-b330-2ffb3e839454 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Attaching volume c35b89da-3ccd-4fac-ae16-afc4a62713b5 to /dev/vdb#033[00m
Jan 23 05:34:52 np0005593232 podman[376516]: 2026-01-23 10:34:52.845126603 +0000 UTC m=+0.071976576 container create 62b492df3b6caca50dde474c2bc87cec0b004563366dc3969d72c5e0ec3b1ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:34:52 np0005593232 systemd[1]: Started libpod-conmon-62b492df3b6caca50dde474c2bc87cec0b004563366dc3969d72c5e0ec3b1ec9.scope.
Jan 23 05:34:52 np0005593232 podman[376516]: 2026-01-23 10:34:52.821757599 +0000 UTC m=+0.048607582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:34:52 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:34:52 np0005593232 podman[376516]: 2026-01-23 10:34:52.94175866 +0000 UTC m=+0.168608653 container init 62b492df3b6caca50dde474c2bc87cec0b004563366dc3969d72c5e0ec3b1ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 05:34:52 np0005593232 podman[376516]: 2026-01-23 10:34:52.949788378 +0000 UTC m=+0.176638341 container start 62b492df3b6caca50dde474c2bc87cec0b004563366dc3969d72c5e0ec3b1ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:34:52 np0005593232 podman[376516]: 2026-01-23 10:34:52.95371791 +0000 UTC m=+0.180567963 container attach 62b492df3b6caca50dde474c2bc87cec0b004563366dc3969d72c5e0ec3b1ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:34:52 np0005593232 wonderful_leakey[376532]: 167 167
Jan 23 05:34:52 np0005593232 systemd[1]: libpod-62b492df3b6caca50dde474c2bc87cec0b004563366dc3969d72c5e0ec3b1ec9.scope: Deactivated successfully.
Jan 23 05:34:52 np0005593232 podman[376516]: 2026-01-23 10:34:52.956418527 +0000 UTC m=+0.183268470 container died 62b492df3b6caca50dde474c2bc87cec0b004563366dc3969d72c5e0ec3b1ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_leakey, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 05:34:52 np0005593232 systemd[1]: var-lib-containers-storage-overlay-371a1019c78feda28243f5c4fe9569d3ef5f86407e8b16d1dc37a32139f77caf-merged.mount: Deactivated successfully.
Jan 23 05:34:53 np0005593232 podman[376516]: 2026-01-23 10:34:53.005861322 +0000 UTC m=+0.232711275 container remove 62b492df3b6caca50dde474c2bc87cec0b004563366dc3969d72c5e0ec3b1ec9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:34:53 np0005593232 systemd[1]: libpod-conmon-62b492df3b6caca50dde474c2bc87cec0b004563366dc3969d72c5e0ec3b1ec9.scope: Deactivated successfully.
Jan 23 05:34:53 np0005593232 podman[376555]: 2026-01-23 10:34:53.242748185 +0000 UTC m=+0.062323022 container create 0ab0b8ebc3371eb323743e8e258972e8228514754794114a7fc53e09aea0be0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_northcutt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:34:53 np0005593232 nova_compute[250269]: 2026-01-23 10:34:53.254 250273 DEBUG os_brick.utils [None req-3c49b605-525e-4954-b330-2ffb3e839454 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 23 05:34:53 np0005593232 nova_compute[250269]: 2026-01-23 10:34:53.257 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:34:53 np0005593232 nova_compute[250269]: 2026-01-23 10:34:53.290 265031 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:34:53 np0005593232 nova_compute[250269]: 2026-01-23 10:34:53.290 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[9736ae28-a124-49de-a823-f923b9676498]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:34:53 np0005593232 nova_compute[250269]: 2026-01-23 10:34:53.292 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:34:53 np0005593232 systemd[1]: Started libpod-conmon-0ab0b8ebc3371eb323743e8e258972e8228514754794114a7fc53e09aea0be0f.scope.
Jan 23 05:34:53 np0005593232 podman[376555]: 2026-01-23 10:34:53.211988431 +0000 UTC m=+0.031563278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:34:53 np0005593232 nova_compute[250269]: 2026-01-23 10:34:53.305 265031 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:34:53 np0005593232 nova_compute[250269]: 2026-01-23 10:34:53.305 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[7af553d6-ddd1-42f6-ba4d-ba78f6ef5c60]: (4, ('InitiatorName=iqn.1994-05.com.redhat:c6473626b456', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:34:53 np0005593232 nova_compute[250269]: 2026-01-23 10:34:53.308 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:34:53 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:34:53 np0005593232 nova_compute[250269]: 2026-01-23 10:34:53.327 265031 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:34:53 np0005593232 nova_compute[250269]: 2026-01-23 10:34:53.328 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[c2222759-dbbe-485c-9147-1107524e0267]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:34:53 np0005593232 nova_compute[250269]: 2026-01-23 10:34:53.330 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[88afcc31-010c-4b9d-a35d-8f4be5653fa7]: (4, 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:34:53 np0005593232 nova_compute[250269]: 2026-01-23 10:34:53.331 250273 DEBUG oslo_concurrency.processutils [None req-3c49b605-525e-4954-b330-2ffb3e839454 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:34:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f5708a56151459fc51bc4b48703f73fadbc58ad1a361fdc878fddb64d1d593/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:34:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f5708a56151459fc51bc4b48703f73fadbc58ad1a361fdc878fddb64d1d593/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:34:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f5708a56151459fc51bc4b48703f73fadbc58ad1a361fdc878fddb64d1d593/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:34:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f5708a56151459fc51bc4b48703f73fadbc58ad1a361fdc878fddb64d1d593/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:34:53 np0005593232 podman[376555]: 2026-01-23 10:34:53.352388202 +0000 UTC m=+0.171963029 container init 0ab0b8ebc3371eb323743e8e258972e8228514754794114a7fc53e09aea0be0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_northcutt, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:34:53 np0005593232 podman[376555]: 2026-01-23 10:34:53.371244848 +0000 UTC m=+0.190819645 container start 0ab0b8ebc3371eb323743e8e258972e8228514754794114a7fc53e09aea0be0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_northcutt, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 23 05:34:53 np0005593232 podman[376555]: 2026-01-23 10:34:53.378278417 +0000 UTC m=+0.197853264 container attach 0ab0b8ebc3371eb323743e8e258972e8228514754794114a7fc53e09aea0be0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_northcutt, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 23 05:34:53 np0005593232 nova_compute[250269]: 2026-01-23 10:34:53.388 250273 DEBUG oslo_concurrency.processutils [None req-3c49b605-525e-4954-b330-2ffb3e839454 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] CMD "nvme version" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:34:53 np0005593232 nova_compute[250269]: 2026-01-23 10:34:53.392 250273 DEBUG os_brick.initiator.connectors.lightos [None req-3c49b605-525e-4954-b330-2ffb3e839454 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 23 05:34:53 np0005593232 nova_compute[250269]: 2026-01-23 10:34:53.392 250273 DEBUG os_brick.initiator.connectors.lightos [None req-3c49b605-525e-4954-b330-2ffb3e839454 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 23 05:34:53 np0005593232 nova_compute[250269]: 2026-01-23 10:34:53.392 250273 DEBUG os_brick.initiator.connectors.lightos [None req-3c49b605-525e-4954-b330-2ffb3e839454 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 23 05:34:53 np0005593232 nova_compute[250269]: 2026-01-23 10:34:53.393 250273 DEBUG os_brick.utils [None req-3c49b605-525e-4954-b330-2ffb3e839454 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] <== get_connector_properties: return (138ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:c6473626b456', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 23 05:34:53 np0005593232 nova_compute[250269]: 2026-01-23 10:34:53.393 250273 DEBUG nova.virt.block_device [None req-3c49b605-525e-4954-b330-2ffb3e839454 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Updating existing volume attachment record: 47071b21-d18c-49d3-b6b7-352b220eac3d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 23 05:34:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:34:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3207: 321 pgs: 321 active+clean; 708 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.9 MiB/s rd, 4.9 MiB/s wr, 103 op/s
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]: {
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:    "0": [
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:        {
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:            "devices": [
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:                "/dev/loop3"
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:            ],
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:            "lv_name": "ceph_lv0",
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:            "lv_size": "7511998464",
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:            "name": "ceph_lv0",
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:            "tags": {
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:                "ceph.cluster_name": "ceph",
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:                "ceph.crush_device_class": "",
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:                "ceph.encrypted": "0",
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:                "ceph.osd_id": "0",
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:                "ceph.type": "block",
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:                "ceph.vdo": "0"
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:            },
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:            "type": "block",
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:            "vg_name": "ceph_vg0"
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:        }
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]:    ]
Jan 23 05:34:54 np0005593232 sweet_northcutt[376575]: }
Jan 23 05:34:54 np0005593232 systemd[1]: libpod-0ab0b8ebc3371eb323743e8e258972e8228514754794114a7fc53e09aea0be0f.scope: Deactivated successfully.
Jan 23 05:34:54 np0005593232 podman[376555]: 2026-01-23 10:34:54.210634646 +0000 UTC m=+1.030209453 container died 0ab0b8ebc3371eb323743e8e258972e8228514754794114a7fc53e09aea0be0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:34:54 np0005593232 systemd[1]: var-lib-containers-storage-overlay-95f5708a56151459fc51bc4b48703f73fadbc58ad1a361fdc878fddb64d1d593-merged.mount: Deactivated successfully.
Jan 23 05:34:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:34:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/806156508' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:34:54 np0005593232 podman[376555]: 2026-01-23 10:34:54.304275427 +0000 UTC m=+1.123850254 container remove 0ab0b8ebc3371eb323743e8e258972e8228514754794114a7fc53e09aea0be0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 05:34:54 np0005593232 systemd[1]: libpod-conmon-0ab0b8ebc3371eb323743e8e258972e8228514754794114a7fc53e09aea0be0f.scope: Deactivated successfully.
Jan 23 05:34:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:54.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:54 np0005593232 nova_compute[250269]: 2026-01-23 10:34:54.557 250273 DEBUG nova.objects.instance [None req-3c49b605-525e-4954-b330-2ffb3e839454 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lazy-loading 'flavor' on Instance uuid 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:34:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:34:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:54.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:34:54 np0005593232 nova_compute[250269]: 2026-01-23 10:34:54.586 250273 DEBUG nova.virt.libvirt.driver [None req-3c49b605-525e-4954-b330-2ffb3e839454 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Attempting to attach volume c35b89da-3ccd-4fac-ae16-afc4a62713b5 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 23 05:34:54 np0005593232 nova_compute[250269]: 2026-01-23 10:34:54.593 250273 DEBUG nova.virt.libvirt.guest [None req-3c49b605-525e-4954-b330-2ffb3e839454 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] attach device xml: <disk type="network" device="disk">
Jan 23 05:34:54 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:34:54 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-c35b89da-3ccd-4fac-ae16-afc4a62713b5">
Jan 23 05:34:54 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:34:54 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:34:54 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:34:54 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:34:54 np0005593232 nova_compute[250269]:  <auth username="openstack">
Jan 23 05:34:54 np0005593232 nova_compute[250269]:    <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:34:54 np0005593232 nova_compute[250269]:  </auth>
Jan 23 05:34:54 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 05:34:54 np0005593232 nova_compute[250269]:  <serial>c35b89da-3ccd-4fac-ae16-afc4a62713b5</serial>
Jan 23 05:34:54 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:34:54 np0005593232 nova_compute[250269]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 23 05:34:54 np0005593232 nova_compute[250269]: 2026-01-23 10:34:54.781 250273 DEBUG nova.virt.libvirt.driver [None req-3c49b605-525e-4954-b330-2ffb3e839454 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:34:54 np0005593232 nova_compute[250269]: 2026-01-23 10:34:54.782 250273 DEBUG nova.virt.libvirt.driver [None req-3c49b605-525e-4954-b330-2ffb3e839454 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:34:54 np0005593232 nova_compute[250269]: 2026-01-23 10:34:54.782 250273 DEBUG nova.virt.libvirt.driver [None req-3c49b605-525e-4954-b330-2ffb3e839454 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:34:54 np0005593232 nova_compute[250269]: 2026-01-23 10:34:54.782 250273 DEBUG nova.virt.libvirt.driver [None req-3c49b605-525e-4954-b330-2ffb3e839454 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] No VIF found with MAC fa:16:3e:fe:27:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:34:55 np0005593232 nova_compute[250269]: 2026-01-23 10:34:55.135 250273 DEBUG oslo_concurrency.lockutils [None req-3c49b605-525e-4954-b330-2ffb3e839454 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.298s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:34:55 np0005593232 podman[376761]: 2026-01-23 10:34:55.161159962 +0000 UTC m=+0.051063522 container create d99b389b49c49ae545687c72372dcc3f9acf5969616bfa1b39e25af2b853675e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_gould, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 05:34:55 np0005593232 systemd[1]: Started libpod-conmon-d99b389b49c49ae545687c72372dcc3f9acf5969616bfa1b39e25af2b853675e.scope.
Jan 23 05:34:55 np0005593232 podman[376761]: 2026-01-23 10:34:55.138952581 +0000 UTC m=+0.028856171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:34:55 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:34:55 np0005593232 podman[376761]: 2026-01-23 10:34:55.257171011 +0000 UTC m=+0.147074651 container init d99b389b49c49ae545687c72372dcc3f9acf5969616bfa1b39e25af2b853675e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_gould, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 05:34:55 np0005593232 podman[376761]: 2026-01-23 10:34:55.266139386 +0000 UTC m=+0.156042966 container start d99b389b49c49ae545687c72372dcc3f9acf5969616bfa1b39e25af2b853675e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_gould, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:34:55 np0005593232 podman[376761]: 2026-01-23 10:34:55.270406618 +0000 UTC m=+0.160310268 container attach d99b389b49c49ae545687c72372dcc3f9acf5969616bfa1b39e25af2b853675e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:34:55 np0005593232 intelligent_gould[376777]: 167 167
Jan 23 05:34:55 np0005593232 systemd[1]: libpod-d99b389b49c49ae545687c72372dcc3f9acf5969616bfa1b39e25af2b853675e.scope: Deactivated successfully.
Jan 23 05:34:55 np0005593232 podman[376761]: 2026-01-23 10:34:55.336149036 +0000 UTC m=+0.226052606 container died d99b389b49c49ae545687c72372dcc3f9acf5969616bfa1b39e25af2b853675e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_gould, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 05:34:55 np0005593232 nova_compute[250269]: 2026-01-23 10:34:55.336 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:55 np0005593232 systemd[1]: var-lib-containers-storage-overlay-9291b0b9c97bc57cf329e0869dbfb84e23121fa06b703473505c521b84c1726b-merged.mount: Deactivated successfully.
Jan 23 05:34:55 np0005593232 podman[376761]: 2026-01-23 10:34:55.414490503 +0000 UTC m=+0.304394103 container remove d99b389b49c49ae545687c72372dcc3f9acf5969616bfa1b39e25af2b853675e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_gould, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:34:55 np0005593232 systemd[1]: libpod-conmon-d99b389b49c49ae545687c72372dcc3f9acf5969616bfa1b39e25af2b853675e.scope: Deactivated successfully.
Jan 23 05:34:55 np0005593232 nova_compute[250269]: 2026-01-23 10:34:55.596 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:34:55 np0005593232 podman[376801]: 2026-01-23 10:34:55.611290067 +0000 UTC m=+0.054691866 container create 225c8eed0875ca08686c8d1a95c34acbe96b6cf19e8fe0d4c13538f733d9f1dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jepsen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:34:55 np0005593232 systemd[1]: Started libpod-conmon-225c8eed0875ca08686c8d1a95c34acbe96b6cf19e8fe0d4c13538f733d9f1dd.scope.
Jan 23 05:34:55 np0005593232 podman[376801]: 2026-01-23 10:34:55.593250474 +0000 UTC m=+0.036652303 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:34:55 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:34:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22b57484bced7b813e6d5c51e9dc2c27077e4c6119740cac07b00c5e2d83c469/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:34:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22b57484bced7b813e6d5c51e9dc2c27077e4c6119740cac07b00c5e2d83c469/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:34:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22b57484bced7b813e6d5c51e9dc2c27077e4c6119740cac07b00c5e2d83c469/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:34:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22b57484bced7b813e6d5c51e9dc2c27077e4c6119740cac07b00c5e2d83c469/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:34:55 np0005593232 podman[376801]: 2026-01-23 10:34:55.720268134 +0000 UTC m=+0.163670023 container init 225c8eed0875ca08686c8d1a95c34acbe96b6cf19e8fe0d4c13538f733d9f1dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 05:34:55 np0005593232 podman[376801]: 2026-01-23 10:34:55.731524844 +0000 UTC m=+0.174926683 container start 225c8eed0875ca08686c8d1a95c34acbe96b6cf19e8fe0d4c13538f733d9f1dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jepsen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:34:55 np0005593232 podman[376801]: 2026-01-23 10:34:55.736097604 +0000 UTC m=+0.179499443 container attach 225c8eed0875ca08686c8d1a95c34acbe96b6cf19e8fe0d4c13538f733d9f1dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 05:34:55 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Jan 23 05:34:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3208: 321 pgs: 321 active+clean; 708 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 99 op/s
Jan 23 05:34:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:56.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:34:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:56.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:34:56 np0005593232 hopeful_jepsen[376818]: {
Jan 23 05:34:56 np0005593232 hopeful_jepsen[376818]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:34:56 np0005593232 hopeful_jepsen[376818]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:34:56 np0005593232 hopeful_jepsen[376818]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:34:56 np0005593232 hopeful_jepsen[376818]:        "osd_id": 0,
Jan 23 05:34:56 np0005593232 hopeful_jepsen[376818]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:34:56 np0005593232 hopeful_jepsen[376818]:        "type": "bluestore"
Jan 23 05:34:56 np0005593232 hopeful_jepsen[376818]:    }
Jan 23 05:34:56 np0005593232 hopeful_jepsen[376818]: }
Jan 23 05:34:56 np0005593232 systemd[1]: libpod-225c8eed0875ca08686c8d1a95c34acbe96b6cf19e8fe0d4c13538f733d9f1dd.scope: Deactivated successfully.
Jan 23 05:34:56 np0005593232 podman[376801]: 2026-01-23 10:34:56.661983021 +0000 UTC m=+1.105384820 container died 225c8eed0875ca08686c8d1a95c34acbe96b6cf19e8fe0d4c13538f733d9f1dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:34:56 np0005593232 systemd[1]: var-lib-containers-storage-overlay-22b57484bced7b813e6d5c51e9dc2c27077e4c6119740cac07b00c5e2d83c469-merged.mount: Deactivated successfully.
Jan 23 05:34:56 np0005593232 podman[376801]: 2026-01-23 10:34:56.720624418 +0000 UTC m=+1.164026217 container remove 225c8eed0875ca08686c8d1a95c34acbe96b6cf19e8fe0d4c13538f733d9f1dd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jepsen, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 05:34:56 np0005593232 systemd[1]: libpod-conmon-225c8eed0875ca08686c8d1a95c34acbe96b6cf19e8fe0d4c13538f733d9f1dd.scope: Deactivated successfully.
Jan 23 05:34:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:34:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:34:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:34:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:34:57 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev cc51e059-f176-46a5-bb3f-f5c967c2c49b does not exist
Jan 23 05:34:57 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c2e1be4a-29e2-47ed-83ef-673c559edaa0 does not exist
Jan 23 05:34:57 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 0a089b78-a538-4cde-a512-6d6055038ab5 does not exist
Jan 23 05:34:57 np0005593232 nova_compute[250269]: 2026-01-23 10:34:57.346 250273 INFO nova.compute.manager [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Rescuing#033[00m
Jan 23 05:34:57 np0005593232 nova_compute[250269]: 2026-01-23 10:34:57.347 250273 DEBUG oslo_concurrency.lockutils [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Acquiring lock "refresh_cache-5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:34:57 np0005593232 nova_compute[250269]: 2026-01-23 10:34:57.347 250273 DEBUG oslo_concurrency.lockutils [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Acquired lock "refresh_cache-5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:34:57 np0005593232 nova_compute[250269]: 2026-01-23 10:34:57.347 250273 DEBUG nova.network.neutron [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:34:57 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:34:57 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:34:57 np0005593232 nova_compute[250269]: 2026-01-23 10:34:57.424 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:34:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3209: 321 pgs: 321 active+clean; 708 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 22 KiB/s wr, 94 op/s
Jan 23 05:34:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:34:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:34:58.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:34:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:34:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:34:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:34:58.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:34:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:34:59 np0005593232 nova_compute[250269]: 2026-01-23 10:34:59.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:34:59 np0005593232 nova_compute[250269]: 2026-01-23 10:34:59.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:34:59 np0005593232 nova_compute[250269]: 2026-01-23 10:34:59.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:34:59 np0005593232 podman[376901]: 2026-01-23 10:34:59.525761678 +0000 UTC m=+0.168474599 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 23 05:34:59 np0005593232 nova_compute[250269]: 2026-01-23 10:34:59.528 250273 DEBUG nova.network.neutron [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Updating instance_info_cache with network_info: [{"id": "629a33c7-4917-4a44-9978-d78fecc89001", "address": "fa:16:3e:fe:27:90", "network": {"id": "d7d5530f-5227-4f75-bac0-2604bb3d68e2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1351383381-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815b71acf60d4ed8933ebd05228fa0c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap629a33c7-49", "ovs_interfaceid": "629a33c7-4917-4a44-9978-d78fecc89001", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:34:59 np0005593232 nova_compute[250269]: 2026-01-23 10:34:59.548 250273 DEBUG oslo_concurrency.lockutils [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Releasing lock "refresh_cache-5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:35:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3210: 321 pgs: 321 active+clean; 708 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 20 KiB/s wr, 83 op/s
Jan 23 05:35:00 np0005593232 nova_compute[250269]: 2026-01-23 10:35:00.339 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:00.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:00.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:00 np0005593232 nova_compute[250269]: 2026-01-23 10:35:00.608 250273 DEBUG nova.virt.libvirt.driver [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 23 05:35:00 np0005593232 nova_compute[250269]: 2026-01-23 10:35:00.632 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3211: 321 pgs: 321 active+clean; 708 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 79 op/s
Jan 23 05:35:02 np0005593232 nova_compute[250269]: 2026-01-23 10:35:02.288 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:35:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:02.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:35:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:02.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:35:03 np0005593232 podman[376980]: 2026-01-23 10:35:03.465986662 +0000 UTC m=+0.112589561 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 05:35:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:35:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3212: 321 pgs: 321 active+clean; 708 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 36 KiB/s wr, 83 op/s
Jan 23 05:35:04 np0005593232 kernel: tap629a33c7-49 (unregistering): left promiscuous mode
Jan 23 05:35:04 np0005593232 NetworkManager[49057]: <info>  [1769164504.1299] device (tap629a33c7-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:35:04 np0005593232 ovn_controller[151001]: 2026-01-23T10:35:04Z|00723|binding|INFO|Releasing lport 629a33c7-4917-4a44-9978-d78fecc89001 from this chassis (sb_readonly=0)
Jan 23 05:35:04 np0005593232 ovn_controller[151001]: 2026-01-23T10:35:04Z|00724|binding|INFO|Setting lport 629a33c7-4917-4a44-9978-d78fecc89001 down in Southbound
Jan 23 05:35:04 np0005593232 nova_compute[250269]: 2026-01-23 10:35:04.150 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:04 np0005593232 ovn_controller[151001]: 2026-01-23T10:35:04Z|00725|binding|INFO|Removing iface tap629a33c7-49 ovn-installed in OVS
Jan 23 05:35:04 np0005593232 nova_compute[250269]: 2026-01-23 10:35:04.154 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:04.162 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:27:90 10.100.0.5'], port_security=['fa:16:3e:fe:27:90 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5d42acd2-a3c4-40d2-b1c7-0dd7920671fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7d5530f-5227-4f75-bac0-2604bb3d68e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '815b71acf60d4ed8933ebd05228fa0c0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5072a19e-23bb-454f-8286-abdf06d201ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.187'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11c3d371-746a-4085-8cb4-b3d90e2e50bf, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=629a33c7-4917-4a44-9978-d78fecc89001) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:35:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:04.164 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 629a33c7-4917-4a44-9978-d78fecc89001 in datapath d7d5530f-5227-4f75-bac0-2604bb3d68e2 unbound from our chassis#033[00m
Jan 23 05:35:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:04.168 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d7d5530f-5227-4f75-bac0-2604bb3d68e2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:35:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:04.173 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[eec79970-14bd-40cb-a866-ef80b25f2480]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:04.174 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2 namespace which is not needed anymore#033[00m
Jan 23 05:35:04 np0005593232 nova_compute[250269]: 2026-01-23 10:35:04.213 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:04 np0005593232 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000b8.scope: Deactivated successfully.
Jan 23 05:35:04 np0005593232 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d000000b8.scope: Consumed 16.089s CPU time.
Jan 23 05:35:04 np0005593232 systemd-machined[215836]: Machine qemu-81-instance-000000b8 terminated.
Jan 23 05:35:04 np0005593232 neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2[375692]: [NOTICE]   (375697) : haproxy version is 2.8.14-c23fe91
Jan 23 05:35:04 np0005593232 neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2[375692]: [NOTICE]   (375697) : path to executable is /usr/sbin/haproxy
Jan 23 05:35:04 np0005593232 neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2[375692]: [WARNING]  (375697) : Exiting Master process...
Jan 23 05:35:04 np0005593232 neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2[375692]: [WARNING]  (375697) : Exiting Master process...
Jan 23 05:35:04 np0005593232 neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2[375692]: [ALERT]    (375697) : Current worker (375699) exited with code 143 (Terminated)
Jan 23 05:35:04 np0005593232 neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2[375692]: [WARNING]  (375697) : All workers exited. Exiting... (0)
Jan 23 05:35:04 np0005593232 systemd[1]: libpod-3e9c4df6fe46fd8ff2754998b433875b8d9729caa44d425b5568fa415332748e.scope: Deactivated successfully.
Jan 23 05:35:04 np0005593232 podman[377025]: 2026-01-23 10:35:04.370979855 +0000 UTC m=+0.059328737 container died 3e9c4df6fe46fd8ff2754998b433875b8d9729caa44d425b5568fa415332748e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 23 05:35:04 np0005593232 nova_compute[250269]: 2026-01-23 10:35:04.372 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:04 np0005593232 nova_compute[250269]: 2026-01-23 10:35:04.376 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:35:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:04.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:35:04 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3e9c4df6fe46fd8ff2754998b433875b8d9729caa44d425b5568fa415332748e-userdata-shm.mount: Deactivated successfully.
Jan 23 05:35:04 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4ce2300dbfb36f8604dae0c79e74e744728979faf109214d0ba3ef55d13058a4-merged.mount: Deactivated successfully.
Jan 23 05:35:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:35:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:04.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:35:04 np0005593232 nova_compute[250269]: 2026-01-23 10:35:04.641 250273 INFO nova.virt.libvirt.driver [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Instance shutdown successfully after 4 seconds.#033[00m
Jan 23 05:35:04 np0005593232 nova_compute[250269]: 2026-01-23 10:35:04.651 250273 INFO nova.virt.libvirt.driver [-] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Instance destroyed successfully.#033[00m
Jan 23 05:35:04 np0005593232 nova_compute[250269]: 2026-01-23 10:35:04.652 250273 DEBUG nova.objects.instance [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lazy-loading 'numa_topology' on Instance uuid 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:35:04 np0005593232 nova_compute[250269]: 2026-01-23 10:35:04.744 250273 INFO nova.virt.libvirt.driver [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Attempting a stable device rescue#033[00m
Jan 23 05:35:05 np0005593232 podman[377025]: 2026-01-23 10:35:05.007144048 +0000 UTC m=+0.695492970 container cleanup 3e9c4df6fe46fd8ff2754998b433875b8d9729caa44d425b5568fa415332748e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 23 05:35:05 np0005593232 systemd[1]: libpod-conmon-3e9c4df6fe46fd8ff2754998b433875b8d9729caa44d425b5568fa415332748e.scope: Deactivated successfully.
Jan 23 05:35:05 np0005593232 podman[377068]: 2026-01-23 10:35:05.322779328 +0000 UTC m=+0.272617550 container remove 3e9c4df6fe46fd8ff2754998b433875b8d9729caa44d425b5568fa415332748e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 05:35:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:05.333 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3152d59c-1ece-411e-980a-3afe59fa1c10]: (4, ('Fri Jan 23 10:35:04 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2 (3e9c4df6fe46fd8ff2754998b433875b8d9729caa44d425b5568fa415332748e)\n3e9c4df6fe46fd8ff2754998b433875b8d9729caa44d425b5568fa415332748e\nFri Jan 23 10:35:05 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2 (3e9c4df6fe46fd8ff2754998b433875b8d9729caa44d425b5568fa415332748e)\n3e9c4df6fe46fd8ff2754998b433875b8d9729caa44d425b5568fa415332748e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:05.336 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[46827609-c25d-4d70-b322-50544867f2c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:05.339 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7d5530f-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:35:05 np0005593232 nova_compute[250269]: 2026-01-23 10:35:05.398 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:05 np0005593232 kernel: tapd7d5530f-50: left promiscuous mode
Jan 23 05:35:05 np0005593232 nova_compute[250269]: 2026-01-23 10:35:05.419 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:05.424 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[30b06827-e979-4961-9ce8-c0f25a21ab58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:05.447 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f00adc34-dabf-4d6c-bf8d-600d634bf677]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:05.449 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[fecc5a04-0889-4764-868c-225da6c91571]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:05.478 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[22854e6b-0234-4467-b234-4c978501c30a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 837150, 'reachable_time': 23557, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377086, 'error': None, 'target': 'ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:05.482 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:35:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:05.483 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[9e4b99ca-46a6-4e2e-94cd-b522648b1852]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:05 np0005593232 systemd[1]: run-netns-ovnmeta\x2dd7d5530f\x2d5227\x2d4f75\x2dbac0\x2d2604bb3d68e2.mount: Deactivated successfully.
Jan 23 05:35:05 np0005593232 nova_compute[250269]: 2026-01-23 10:35:05.594 250273 DEBUG nova.virt.libvirt.driver [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314#033[00m
Jan 23 05:35:05 np0005593232 nova_compute[250269]: 2026-01-23 10:35:05.602 250273 DEBUG nova.virt.libvirt.driver [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 23 05:35:05 np0005593232 nova_compute[250269]: 2026-01-23 10:35:05.603 250273 INFO nova.virt.libvirt.driver [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Creating image(s)#033[00m
Jan 23 05:35:05 np0005593232 nova_compute[250269]: 2026-01-23 10:35:05.642 250273 DEBUG nova.storage.rbd_utils [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] rbd image 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:35:05 np0005593232 nova_compute[250269]: 2026-01-23 10:35:05.648 250273 DEBUG nova.objects.instance [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:35:05 np0005593232 nova_compute[250269]: 2026-01-23 10:35:05.649 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:05 np0005593232 nova_compute[250269]: 2026-01-23 10:35:05.669 250273 DEBUG nova.compute.manager [req-d21c55ba-5427-40c2-ad42-4cb4044cef73 req-addd6d55-640d-4c8b-8c4c-fb56e74a6d28 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received event network-vif-unplugged-629a33c7-4917-4a44-9978-d78fecc89001 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:35:05 np0005593232 nova_compute[250269]: 2026-01-23 10:35:05.669 250273 DEBUG oslo_concurrency.lockutils [req-d21c55ba-5427-40c2-ad42-4cb4044cef73 req-addd6d55-640d-4c8b-8c4c-fb56e74a6d28 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:35:05 np0005593232 nova_compute[250269]: 2026-01-23 10:35:05.669 250273 DEBUG oslo_concurrency.lockutils [req-d21c55ba-5427-40c2-ad42-4cb4044cef73 req-addd6d55-640d-4c8b-8c4c-fb56e74a6d28 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:35:05 np0005593232 nova_compute[250269]: 2026-01-23 10:35:05.669 250273 DEBUG oslo_concurrency.lockutils [req-d21c55ba-5427-40c2-ad42-4cb4044cef73 req-addd6d55-640d-4c8b-8c4c-fb56e74a6d28 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:35:05 np0005593232 nova_compute[250269]: 2026-01-23 10:35:05.670 250273 DEBUG nova.compute.manager [req-d21c55ba-5427-40c2-ad42-4cb4044cef73 req-addd6d55-640d-4c8b-8c4c-fb56e74a6d28 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] No waiting events found dispatching network-vif-unplugged-629a33c7-4917-4a44-9978-d78fecc89001 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:35:05 np0005593232 nova_compute[250269]: 2026-01-23 10:35:05.670 250273 WARNING nova.compute.manager [req-d21c55ba-5427-40c2-ad42-4cb4044cef73 req-addd6d55-640d-4c8b-8c4c-fb56e74a6d28 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received unexpected event network-vif-unplugged-629a33c7-4917-4a44-9978-d78fecc89001 for instance with vm_state active and task_state rescuing.#033[00m
Jan 23 05:35:05 np0005593232 nova_compute[250269]: 2026-01-23 10:35:05.706 250273 DEBUG nova.storage.rbd_utils [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] rbd image 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:35:05 np0005593232 nova_compute[250269]: 2026-01-23 10:35:05.753 250273 DEBUG nova.storage.rbd_utils [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] rbd image 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:35:05 np0005593232 nova_compute[250269]: 2026-01-23 10:35:05.759 250273 DEBUG oslo_concurrency.lockutils [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Acquiring lock "233f78aca32a272efbd96ef009a28f5f3b2ff2dd" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:35:05 np0005593232 nova_compute[250269]: 2026-01-23 10:35:05.761 250273 DEBUG oslo_concurrency.lockutils [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lock "233f78aca32a272efbd96ef009a28f5f3b2ff2dd" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:35:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3213: 321 pgs: 321 active+clean; 708 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 81 op/s
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.256 250273 DEBUG nova.virt.libvirt.imagebackend [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Image locations are: [{'url': 'rbd://e1533653-0a5a-584c-b34b-8689f0d32e77/images/e01877f1-023b-4adf-9357-0984581d9119/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e1533653-0a5a-584c-b34b-8689f0d32e77/images/e01877f1-023b-4adf-9357-0984581d9119/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.341 250273 DEBUG nova.virt.libvirt.imagebackend [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Selected location: {'url': 'rbd://e1533653-0a5a-584c-b34b-8689f0d32e77/images/e01877f1-023b-4adf-9357-0984581d9119/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.342 250273 DEBUG nova.storage.rbd_utils [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] cloning images/e01877f1-023b-4adf-9357-0984581d9119@snap to None/5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk.rescue clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 23 05:35:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:06.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:06.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.766 250273 DEBUG oslo_concurrency.lockutils [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lock "233f78aca32a272efbd96ef009a28f5f3b2ff2dd" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.881 250273 DEBUG nova.objects.instance [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lazy-loading 'migration_context' on Instance uuid 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.904 250273 DEBUG nova.virt.libvirt.driver [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.910 250273 DEBUG nova.virt.libvirt.driver [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Start _get_guest_xml network_info=[{"id": "629a33c7-4917-4a44-9978-d78fecc89001", "address": "fa:16:3e:fe:27:90", "network": {"id": "d7d5530f-5227-4f75-bac0-2604bb3d68e2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1351383381-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-1351383381-network", "vif_mac": "fa:16:3e:fe:27:90"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815b71acf60d4ed8933ebd05228fa0c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap629a33c7-49", "ovs_interfaceid": "629a33c7-4917-4a44-9978-d78fecc89001", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}, 'disk.rescue': {'bus': 'virtio', 'dev': 'vdc', 'type': 'disk'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'e01877f1-023b-4adf-9357-0984581d9119', 'kernel_id': '', 'ramdisk_id': ''} block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vdb', 'disk_bus': 'virtio', 'delete_on_termination': False, 'attachment_id': '47071b21-d18c-49d3-b6b7-352b220eac3d', 'device_type': 'disk', 'boot_index': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-c35b89da-3ccd-4fac-ae16-afc4a62713b5', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'c35b89da-3ccd-4fac-ae16-afc4a62713b5', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '5d42acd2-a3c4-40d2-b1c7-0dd7920671fe', 'attached_at': '', 'detached_at': '', 'volume_id': 'c35b89da-3ccd-4fac-ae16-afc4a62713b5', 'serial': 'c35b89da-3ccd-4fac-ae16-afc4a62713b5'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.910 250273 DEBUG nova.objects.instance [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lazy-loading 'resources' on Instance uuid 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.947 250273 WARNING nova.virt.libvirt.driver [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.955 250273 DEBUG nova.virt.libvirt.host [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.956 250273 DEBUG nova.virt.libvirt.host [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.962 250273 DEBUG nova.virt.libvirt.host [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.963 250273 DEBUG nova.virt.libvirt.host [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.965 250273 DEBUG nova.virt.libvirt.driver [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.965 250273 DEBUG nova.virt.hardware [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.966 250273 DEBUG nova.virt.hardware [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.967 250273 DEBUG nova.virt.hardware [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.967 250273 DEBUG nova.virt.hardware [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.968 250273 DEBUG nova.virt.hardware [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.968 250273 DEBUG nova.virt.hardware [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.968 250273 DEBUG nova.virt.hardware [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.969 250273 DEBUG nova.virt.hardware [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.969 250273 DEBUG nova.virt.hardware [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.970 250273 DEBUG nova.virt.hardware [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.970 250273 DEBUG nova.virt.hardware [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:35:06 np0005593232 nova_compute[250269]: 2026-01-23 10:35:06.971 250273 DEBUG nova.objects.instance [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:35:07 np0005593232 nova_compute[250269]: 2026-01-23 10:35:07.009 250273 DEBUG oslo_concurrency.processutils [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:35:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:35:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/670716618' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:35:07 np0005593232 nova_compute[250269]: 2026-01-23 10:35:07.523 250273 DEBUG oslo_concurrency.processutils [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:35:07 np0005593232 nova_compute[250269]: 2026-01-23 10:35:07.591 250273 DEBUG oslo_concurrency.processutils [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:35:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:35:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:35:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:35:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:35:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:35:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:35:07 np0005593232 nova_compute[250269]: 2026-01-23 10:35:07.861 250273 DEBUG nova.compute.manager [req-4e3a53eb-c5f9-49ac-b1b4-b19e4353cdda req-6bf98432-7a91-4d71-bf43-e3c023b159d7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received event network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:35:07 np0005593232 nova_compute[250269]: 2026-01-23 10:35:07.862 250273 DEBUG oslo_concurrency.lockutils [req-4e3a53eb-c5f9-49ac-b1b4-b19e4353cdda req-6bf98432-7a91-4d71-bf43-e3c023b159d7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:35:07 np0005593232 nova_compute[250269]: 2026-01-23 10:35:07.863 250273 DEBUG oslo_concurrency.lockutils [req-4e3a53eb-c5f9-49ac-b1b4-b19e4353cdda req-6bf98432-7a91-4d71-bf43-e3c023b159d7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:35:07 np0005593232 nova_compute[250269]: 2026-01-23 10:35:07.863 250273 DEBUG oslo_concurrency.lockutils [req-4e3a53eb-c5f9-49ac-b1b4-b19e4353cdda req-6bf98432-7a91-4d71-bf43-e3c023b159d7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:35:07 np0005593232 nova_compute[250269]: 2026-01-23 10:35:07.864 250273 DEBUG nova.compute.manager [req-4e3a53eb-c5f9-49ac-b1b4-b19e4353cdda req-6bf98432-7a91-4d71-bf43-e3c023b159d7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] No waiting events found dispatching network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:35:07 np0005593232 nova_compute[250269]: 2026-01-23 10:35:07.864 250273 WARNING nova.compute.manager [req-4e3a53eb-c5f9-49ac-b1b4-b19e4353cdda req-6bf98432-7a91-4d71-bf43-e3c023b159d7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received unexpected event network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 for instance with vm_state active and task_state rescuing.#033[00m
Jan 23 05:35:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:35:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2002492863' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:35:08 np0005593232 nova_compute[250269]: 2026-01-23 10:35:08.083 250273 DEBUG oslo_concurrency.processutils [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:35:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3214: 321 pgs: 321 active+clean; 775 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 166 op/s
Jan 23 05:35:08 np0005593232 nova_compute[250269]: 2026-01-23 10:35:08.116 250273 DEBUG oslo_concurrency.processutils [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:35:08 np0005593232 nova_compute[250269]: 2026-01-23 10:35:08.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:35:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:35:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:08.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:35:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:35:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/704201926' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:35:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:35:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:08.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:35:08 np0005593232 nova_compute[250269]: 2026-01-23 10:35:08.589 250273 DEBUG oslo_concurrency.processutils [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:35:08 np0005593232 nova_compute[250269]: 2026-01-23 10:35:08.592 250273 DEBUG nova.virt.libvirt.vif [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:34:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-171773642',display_name='tempest-ServerStableDeviceRescueTest-server-171773642',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-171773642',id=184,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBb+3zp8o0K+puz2JQkOGzmnnNl24MGD8VzK0xADWahKH4uswOlO8X7EcIpKY4ojueYsF9jTYZVtsKkrwP7uCzwhKymJejXuTC0hLydoquX1zBJO2KneNThhWactU3vFAw==',key_name='tempest-keypair-455350267',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:34:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='815b71acf60d4ed8933ebd05228fa0c0',ramdisk_id='',reservation_id='r-c01cxkoa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-1802220041',owner_user_name='tempest-ServerStableDeviceRescueTest-1802220041-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:34:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e1629a4b14764dddaabcadd16f3e1c1c',uuid=5d42acd2-a3c4-40d2-b1c7-0dd7920671fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "629a33c7-4917-4a44-9978-d78fecc89001", "address": "fa:16:3e:fe:27:90", "network": {"id": "d7d5530f-5227-4f75-bac0-2604bb3d68e2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1351383381-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-1351383381-network", "vif_mac": "fa:16:3e:fe:27:90"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815b71acf60d4ed8933ebd05228fa0c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap629a33c7-49", "ovs_interfaceid": "629a33c7-4917-4a44-9978-d78fecc89001", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:35:08 np0005593232 nova_compute[250269]: 2026-01-23 10:35:08.593 250273 DEBUG nova.network.os_vif_util [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Converting VIF {"id": "629a33c7-4917-4a44-9978-d78fecc89001", "address": "fa:16:3e:fe:27:90", "network": {"id": "d7d5530f-5227-4f75-bac0-2604bb3d68e2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1351383381-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-ServerStableDeviceRescueTest-1351383381-network", "vif_mac": "fa:16:3e:fe:27:90"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815b71acf60d4ed8933ebd05228fa0c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap629a33c7-49", "ovs_interfaceid": "629a33c7-4917-4a44-9978-d78fecc89001", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:35:08 np0005593232 nova_compute[250269]: 2026-01-23 10:35:08.594 250273 DEBUG nova.network.os_vif_util [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fe:27:90,bridge_name='br-int',has_traffic_filtering=True,id=629a33c7-4917-4a44-9978-d78fecc89001,network=Network(d7d5530f-5227-4f75-bac0-2604bb3d68e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap629a33c7-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:35:08 np0005593232 nova_compute[250269]: 2026-01-23 10:35:08.597 250273 DEBUG nova.objects.instance [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:35:08 np0005593232 nova_compute[250269]: 2026-01-23 10:35:08.627 250273 DEBUG nova.virt.libvirt.driver [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:35:08 np0005593232 nova_compute[250269]:  <uuid>5d42acd2-a3c4-40d2-b1c7-0dd7920671fe</uuid>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:  <name>instance-000000b8</name>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <nova:name>tempest-ServerStableDeviceRescueTest-server-171773642</nova:name>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:35:06</nova:creationTime>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        <nova:user uuid="e1629a4b14764dddaabcadd16f3e1c1c">tempest-ServerStableDeviceRescueTest-1802220041-project-member</nova:user>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        <nova:project uuid="815b71acf60d4ed8933ebd05228fa0c0">tempest-ServerStableDeviceRescueTest-1802220041</nova:project>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        <nova:port uuid="629a33c7-4917-4a44-9978-d78fecc89001">
Jan 23 05:35:08 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <entry name="serial">5d42acd2-a3c4-40d2-b1c7-0dd7920671fe</entry>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <entry name="uuid">5d42acd2-a3c4-40d2-b1c7-0dd7920671fe</entry>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk">
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk.config">
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="volumes/volume-c35b89da-3ccd-4fac-ae16-afc4a62713b5">
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <target dev="vdb" bus="virtio"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <serial>c35b89da-3ccd-4fac-ae16-afc4a62713b5</serial>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk.rescue">
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:35:08 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <target dev="vdc" bus="virtio"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <boot order="1"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:fe:27:90"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <target dev="tap629a33c7-49"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/5d42acd2-a3c4-40d2-b1c7-0dd7920671fe/console.log" append="off"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:35:08 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:35:08 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:35:08 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:35:08 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:35:08 np0005593232 nova_compute[250269]: 2026-01-23 10:35:08.639 250273 INFO nova.virt.libvirt.driver [-] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Instance destroyed successfully.#033[00m
Jan 23 05:35:08 np0005593232 nova_compute[250269]: 2026-01-23 10:35:08.772 250273 DEBUG nova.virt.libvirt.driver [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:35:08 np0005593232 nova_compute[250269]: 2026-01-23 10:35:08.773 250273 DEBUG nova.virt.libvirt.driver [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:35:08 np0005593232 nova_compute[250269]: 2026-01-23 10:35:08.774 250273 DEBUG nova.virt.libvirt.driver [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] No BDM found with device name vdc, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:35:08 np0005593232 nova_compute[250269]: 2026-01-23 10:35:08.775 250273 DEBUG nova.virt.libvirt.driver [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:35:08 np0005593232 nova_compute[250269]: 2026-01-23 10:35:08.775 250273 DEBUG nova.virt.libvirt.driver [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] No VIF found with MAC fa:16:3e:fe:27:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:35:08 np0005593232 nova_compute[250269]: 2026-01-23 10:35:08.777 250273 INFO nova.virt.libvirt.driver [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Using config drive#033[00m
Jan 23 05:35:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:35:08 np0005593232 nova_compute[250269]: 2026-01-23 10:35:08.824 250273 DEBUG nova.storage.rbd_utils [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] rbd image 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:35:08 np0005593232 nova_compute[250269]: 2026-01-23 10:35:08.903 250273 DEBUG nova.objects.instance [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:35:08 np0005593232 nova_compute[250269]: 2026-01-23 10:35:08.945 250273 DEBUG nova.objects.instance [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lazy-loading 'keypairs' on Instance uuid 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:35:09 np0005593232 nova_compute[250269]: 2026-01-23 10:35:09.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:35:09 np0005593232 nova_compute[250269]: 2026-01-23 10:35:09.780 250273 INFO nova.virt.libvirt.driver [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Creating config drive at /var/lib/nova/instances/5d42acd2-a3c4-40d2-b1c7-0dd7920671fe/disk.config.rescue#033[00m
Jan 23 05:35:09 np0005593232 nova_compute[250269]: 2026-01-23 10:35:09.793 250273 DEBUG oslo_concurrency.processutils [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5d42acd2-a3c4-40d2-b1c7-0dd7920671fe/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppoq8xsdg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:35:09 np0005593232 nova_compute[250269]: 2026-01-23 10:35:09.968 250273 DEBUG oslo_concurrency.processutils [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5d42acd2-a3c4-40d2-b1c7-0dd7920671fe/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppoq8xsdg" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:35:10 np0005593232 nova_compute[250269]: 2026-01-23 10:35:10.033 250273 DEBUG nova.storage.rbd_utils [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] rbd image 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:35:10 np0005593232 nova_compute[250269]: 2026-01-23 10:35:10.040 250273 DEBUG oslo_concurrency.processutils [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5d42acd2-a3c4-40d2-b1c7-0dd7920671fe/disk.config.rescue 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:35:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3215: 321 pgs: 321 active+clean; 775 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 233 KiB/s rd, 3.8 MiB/s wr, 88 op/s
Jan 23 05:35:10 np0005593232 nova_compute[250269]: 2026-01-23 10:35:10.294 250273 DEBUG oslo_concurrency.processutils [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5d42acd2-a3c4-40d2-b1c7-0dd7920671fe/disk.config.rescue 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.254s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:35:10 np0005593232 nova_compute[250269]: 2026-01-23 10:35:10.296 250273 INFO nova.virt.libvirt.driver [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Deleting local config drive /var/lib/nova/instances/5d42acd2-a3c4-40d2-b1c7-0dd7920671fe/disk.config.rescue because it was imported into RBD.#033[00m
Jan 23 05:35:10 np0005593232 kernel: tap629a33c7-49: entered promiscuous mode
Jan 23 05:35:10 np0005593232 NetworkManager[49057]: <info>  [1769164510.4005] manager: (tap629a33c7-49): new Tun device (/org/freedesktop/NetworkManager/Devices/337)
Jan 23 05:35:10 np0005593232 ovn_controller[151001]: 2026-01-23T10:35:10Z|00726|binding|INFO|Claiming lport 629a33c7-4917-4a44-9978-d78fecc89001 for this chassis.
Jan 23 05:35:10 np0005593232 ovn_controller[151001]: 2026-01-23T10:35:10Z|00727|binding|INFO|629a33c7-4917-4a44-9978-d78fecc89001: Claiming fa:16:3e:fe:27:90 10.100.0.5
Jan 23 05:35:10 np0005593232 nova_compute[250269]: 2026-01-23 10:35:10.404 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:10.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.415 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:27:90 10.100.0.5'], port_security=['fa:16:3e:fe:27:90 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5d42acd2-a3c4-40d2-b1c7-0dd7920671fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7d5530f-5227-4f75-bac0-2604bb3d68e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '815b71acf60d4ed8933ebd05228fa0c0', 'neutron:revision_number': '5', 'neutron:security_group_ids': '5072a19e-23bb-454f-8286-abdf06d201ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.187'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11c3d371-746a-4085-8cb4-b3d90e2e50bf, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=629a33c7-4917-4a44-9978-d78fecc89001) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.418 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 629a33c7-4917-4a44-9978-d78fecc89001 in datapath d7d5530f-5227-4f75-bac0-2604bb3d68e2 bound to our chassis#033[00m
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.421 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7d5530f-5227-4f75-bac0-2604bb3d68e2#033[00m
Jan 23 05:35:10 np0005593232 ovn_controller[151001]: 2026-01-23T10:35:10Z|00728|binding|INFO|Setting lport 629a33c7-4917-4a44-9978-d78fecc89001 ovn-installed in OVS
Jan 23 05:35:10 np0005593232 ovn_controller[151001]: 2026-01-23T10:35:10Z|00729|binding|INFO|Setting lport 629a33c7-4917-4a44-9978-d78fecc89001 up in Southbound
Jan 23 05:35:10 np0005593232 nova_compute[250269]: 2026-01-23 10:35:10.437 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:10 np0005593232 nova_compute[250269]: 2026-01-23 10:35:10.441 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.448 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9c12c429-4585-46ed-bf05-b31e04a136f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.451 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd7d5530f-51 in ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.453 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd7d5530f-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.454 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f62bb439-4752-438c-b082-11200b26be0d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.456 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[088996ce-8452-44d0-bfbc-504f87f2b9e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:10 np0005593232 systemd-machined[215836]: New machine qemu-82-instance-000000b8.
Jan 23 05:35:10 np0005593232 systemd-udevd[377388]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.474 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[76e68e48-d227-4b61-9b58-6dbfd3c67016]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:10 np0005593232 systemd[1]: Started Virtual Machine qemu-82-instance-000000b8.
Jan 23 05:35:10 np0005593232 NetworkManager[49057]: <info>  [1769164510.4864] device (tap629a33c7-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:35:10 np0005593232 NetworkManager[49057]: <info>  [1769164510.4879] device (tap629a33c7-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.497 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e9e29f6b-3a80-456f-ba6d-24bf8fe6391c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.550 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[a5dc6472-e406-4cdc-acdc-086189fd4627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:10 np0005593232 NetworkManager[49057]: <info>  [1769164510.5613] manager: (tapd7d5530f-50): new Veth device (/org/freedesktop/NetworkManager/Devices/338)
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.560 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[95ef5031-0519-4a75-a246-41e68d7c1832]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:10.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.613 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[6527428b-898e-48b9-b29d-dfbf3a36bdd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.618 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[07a7b5d1-3822-457c-9a1e-27c5dd0a3fe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:10 np0005593232 nova_compute[250269]: 2026-01-23 10:35:10.651 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:10 np0005593232 NetworkManager[49057]: <info>  [1769164510.6625] device (tapd7d5530f-50): carrier: link connected
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.673 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[9ddf376f-73c2-42be-9d2e-8ef1494e0d9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.704 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c312d02c-e4d1-4e9e-9c88-9c59508ea2f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7d5530f-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:67:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 218], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 842380, 'reachable_time': 26510, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377420, 'error': None, 'target': 'ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.735 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4e44aad2-1391-4f14-8d17-616777c2c0e8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe00:67cc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 842380, 'tstamp': 842380}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377421, 'error': None, 'target': 'ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.769 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9e994ee5-105f-444b-9d59-ef28f93172f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7d5530f-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:67:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 218], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 842380, 'reachable_time': 26510, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 377422, 'error': None, 'target': 'ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:10 np0005593232 nova_compute[250269]: 2026-01-23 10:35:10.812 250273 DEBUG nova.compute.manager [req-782e97b7-9e38-47e9-98ed-128bf4ec210d req-478b9b75-e127-445f-b5c7-bb83083cd392 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received event network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:35:10 np0005593232 nova_compute[250269]: 2026-01-23 10:35:10.813 250273 DEBUG oslo_concurrency.lockutils [req-782e97b7-9e38-47e9-98ed-128bf4ec210d req-478b9b75-e127-445f-b5c7-bb83083cd392 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:35:10 np0005593232 nova_compute[250269]: 2026-01-23 10:35:10.814 250273 DEBUG oslo_concurrency.lockutils [req-782e97b7-9e38-47e9-98ed-128bf4ec210d req-478b9b75-e127-445f-b5c7-bb83083cd392 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:35:10 np0005593232 nova_compute[250269]: 2026-01-23 10:35:10.814 250273 DEBUG oslo_concurrency.lockutils [req-782e97b7-9e38-47e9-98ed-128bf4ec210d req-478b9b75-e127-445f-b5c7-bb83083cd392 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:35:10 np0005593232 nova_compute[250269]: 2026-01-23 10:35:10.815 250273 DEBUG nova.compute.manager [req-782e97b7-9e38-47e9-98ed-128bf4ec210d req-478b9b75-e127-445f-b5c7-bb83083cd392 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] No waiting events found dispatching network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:35:10 np0005593232 nova_compute[250269]: 2026-01-23 10:35:10.815 250273 WARNING nova.compute.manager [req-782e97b7-9e38-47e9-98ed-128bf4ec210d req-478b9b75-e127-445f-b5c7-bb83083cd392 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received unexpected event network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 for instance with vm_state active and task_state rescuing.#033[00m
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.824 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[47ff5e6b-e833-42b6-afef-db5e83f85334]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.923 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f76080d0-a6e2-4de1-ad57-2d8d89f6bd45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.925 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7d5530f-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.925 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.926 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7d5530f-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:35:10 np0005593232 kernel: tapd7d5530f-50: entered promiscuous mode
Jan 23 05:35:10 np0005593232 NetworkManager[49057]: <info>  [1769164510.9325] manager: (tapd7d5530f-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/339)
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.932 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7d5530f-50, col_values=(('external_ids', {'iface-id': '4c99eeb5-c437-4d31-ac3b-bfd151140733'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:35:10 np0005593232 ovn_controller[151001]: 2026-01-23T10:35:10Z|00730|binding|INFO|Releasing lport 4c99eeb5-c437-4d31-ac3b-bfd151140733 from this chassis (sb_readonly=0)
Jan 23 05:35:10 np0005593232 nova_compute[250269]: 2026-01-23 10:35:10.933 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:10 np0005593232 nova_compute[250269]: 2026-01-23 10:35:10.947 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.948 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d7d5530f-5227-4f75-bac0-2604bb3d68e2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d7d5530f-5227-4f75-bac0-2604bb3d68e2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.949 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c60b21af-40e5-4f3b-9fed-0e62a365e6b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.950 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-d7d5530f-5227-4f75-bac0-2604bb3d68e2
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/d7d5530f-5227-4f75-bac0-2604bb3d68e2.pid.haproxy
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID d7d5530f-5227-4f75-bac0-2604bb3d68e2
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:35:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:10.951 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2', 'env', 'PROCESS_TAG=haproxy-d7d5530f-5227-4f75-bac0-2604bb3d68e2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d7d5530f-5227-4f75-bac0-2604bb3d68e2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:35:11 np0005593232 nova_compute[250269]: 2026-01-23 10:35:11.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:35:11 np0005593232 podman[377453]: 2026-01-23 10:35:11.376228568 +0000 UTC m=+0.036524089 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:35:11 np0005593232 podman[377453]: 2026-01-23 10:35:11.510119074 +0000 UTC m=+0.170414545 container create 96353f444ce4e97fec055dbac166475273a4e7da317a4f3da7ef3204490507f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 23 05:35:11 np0005593232 systemd[1]: Started libpod-conmon-96353f444ce4e97fec055dbac166475273a4e7da317a4f3da7ef3204490507f1.scope.
Jan 23 05:35:11 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:35:11 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7adb2205971e950cd9e8b950a63f12910be255d4f7864c744ad715872f285813/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:35:11 np0005593232 podman[377453]: 2026-01-23 10:35:11.630108264 +0000 UTC m=+0.290403805 container init 96353f444ce4e97fec055dbac166475273a4e7da317a4f3da7ef3204490507f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 23 05:35:11 np0005593232 podman[377453]: 2026-01-23 10:35:11.638319467 +0000 UTC m=+0.298614918 container start 96353f444ce4e97fec055dbac166475273a4e7da317a4f3da7ef3204490507f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:35:11 np0005593232 neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2[377504]: [NOTICE]   (377508) : New worker (377510) forked
Jan 23 05:35:11 np0005593232 neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2[377504]: [NOTICE]   (377508) : Loading success.
Jan 23 05:35:12 np0005593232 nova_compute[250269]: 2026-01-23 10:35:12.072 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Removed pending event for 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 23 05:35:12 np0005593232 nova_compute[250269]: 2026-01-23 10:35:12.073 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164512.0713127, 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:35:12 np0005593232 nova_compute[250269]: 2026-01-23 10:35:12.074 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:35:12 np0005593232 nova_compute[250269]: 2026-01-23 10:35:12.085 250273 DEBUG nova.compute.manager [None req-a0b23a7d-6c95-4676-a9db-bb8bdcbdf140 e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:35:12 np0005593232 nova_compute[250269]: 2026-01-23 10:35:12.103 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:35:12 np0005593232 nova_compute[250269]: 2026-01-23 10:35:12.109 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:35:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3216: 321 pgs: 321 active+clean; 777 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 301 KiB/s rd, 3.8 MiB/s wr, 96 op/s
Jan 23 05:35:12 np0005593232 nova_compute[250269]: 2026-01-23 10:35:12.145 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] During sync_power_state the instance has a pending task (rescuing). Skip.#033[00m
Jan 23 05:35:12 np0005593232 nova_compute[250269]: 2026-01-23 10:35:12.146 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164512.0785618, 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:35:12 np0005593232 nova_compute[250269]: 2026-01-23 10:35:12.146 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] VM Started (Lifecycle Event)#033[00m
Jan 23 05:35:12 np0005593232 nova_compute[250269]: 2026-01-23 10:35:12.184 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:35:12 np0005593232 nova_compute[250269]: 2026-01-23 10:35:12.190 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:35:12 np0005593232 nova_compute[250269]: 2026-01-23 10:35:12.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:35:12 np0005593232 nova_compute[250269]: 2026-01-23 10:35:12.322 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:35:12 np0005593232 nova_compute[250269]: 2026-01-23 10:35:12.323 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:35:12 np0005593232 nova_compute[250269]: 2026-01-23 10:35:12.323 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:35:12 np0005593232 nova_compute[250269]: 2026-01-23 10:35:12.324 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:35:12 np0005593232 nova_compute[250269]: 2026-01-23 10:35:12.325 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:35:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:35:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:12.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:35:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:35:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:12.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:35:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:35:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2433245179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:35:12 np0005593232 nova_compute[250269]: 2026-01-23 10:35:12.854 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:35:12 np0005593232 nova_compute[250269]: 2026-01-23 10:35:12.995 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:35:12 np0005593232 nova_compute[250269]: 2026-01-23 10:35:12.997 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:35:12 np0005593232 nova_compute[250269]: 2026-01-23 10:35:12.997 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:35:12 np0005593232 nova_compute[250269]: 2026-01-23 10:35:12.997 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000b8 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:35:13 np0005593232 nova_compute[250269]: 2026-01-23 10:35:13.069 250273 DEBUG nova.compute.manager [req-2b26afa3-cba9-43d1-aaca-f5f469f3ccd3 req-2c703cf5-73e5-4ec5-8dcf-af525e8a33ad 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received event network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:35:13 np0005593232 nova_compute[250269]: 2026-01-23 10:35:13.069 250273 DEBUG oslo_concurrency.lockutils [req-2b26afa3-cba9-43d1-aaca-f5f469f3ccd3 req-2c703cf5-73e5-4ec5-8dcf-af525e8a33ad 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:35:13 np0005593232 nova_compute[250269]: 2026-01-23 10:35:13.070 250273 DEBUG oslo_concurrency.lockutils [req-2b26afa3-cba9-43d1-aaca-f5f469f3ccd3 req-2c703cf5-73e5-4ec5-8dcf-af525e8a33ad 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:35:13 np0005593232 nova_compute[250269]: 2026-01-23 10:35:13.070 250273 DEBUG oslo_concurrency.lockutils [req-2b26afa3-cba9-43d1-aaca-f5f469f3ccd3 req-2c703cf5-73e5-4ec5-8dcf-af525e8a33ad 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:35:13 np0005593232 nova_compute[250269]: 2026-01-23 10:35:13.071 250273 DEBUG nova.compute.manager [req-2b26afa3-cba9-43d1-aaca-f5f469f3ccd3 req-2c703cf5-73e5-4ec5-8dcf-af525e8a33ad 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] No waiting events found dispatching network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:35:13 np0005593232 nova_compute[250269]: 2026-01-23 10:35:13.071 250273 WARNING nova.compute.manager [req-2b26afa3-cba9-43d1-aaca-f5f469f3ccd3 req-2c703cf5-73e5-4ec5-8dcf-af525e8a33ad 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received unexpected event network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 for instance with vm_state rescued and task_state None.#033[00m
Jan 23 05:35:13 np0005593232 nova_compute[250269]: 2026-01-23 10:35:13.238 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:35:13 np0005593232 nova_compute[250269]: 2026-01-23 10:35:13.240 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3959MB free_disk=20.740543365478516GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:35:13 np0005593232 nova_compute[250269]: 2026-01-23 10:35:13.240 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:35:13 np0005593232 nova_compute[250269]: 2026-01-23 10:35:13.241 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:35:13 np0005593232 nova_compute[250269]: 2026-01-23 10:35:13.365 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:35:13 np0005593232 nova_compute[250269]: 2026-01-23 10:35:13.366 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:35:13 np0005593232 nova_compute[250269]: 2026-01-23 10:35:13.367 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:35:13 np0005593232 nova_compute[250269]: 2026-01-23 10:35:13.523 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:35:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:35:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:35:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2612655551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:35:14 np0005593232 nova_compute[250269]: 2026-01-23 10:35:14.061 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:35:14 np0005593232 nova_compute[250269]: 2026-01-23 10:35:14.069 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:35:14 np0005593232 nova_compute[250269]: 2026-01-23 10:35:14.092 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:35:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3217: 321 pgs: 321 active+clean; 787 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 438 KiB/s rd, 3.9 MiB/s wr, 131 op/s
Jan 23 05:35:14 np0005593232 nova_compute[250269]: 2026-01-23 10:35:14.178 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:35:14 np0005593232 nova_compute[250269]: 2026-01-23 10:35:14.179 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:35:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:14.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:14.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:14 np0005593232 nova_compute[250269]: 2026-01-23 10:35:14.644 250273 INFO nova.compute.manager [None req-d7031a76-e63e-4f34-bbed-84f23005efdd e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Unrescuing#033[00m
Jan 23 05:35:14 np0005593232 nova_compute[250269]: 2026-01-23 10:35:14.645 250273 DEBUG oslo_concurrency.lockutils [None req-d7031a76-e63e-4f34-bbed-84f23005efdd e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Acquiring lock "refresh_cache-5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:35:14 np0005593232 nova_compute[250269]: 2026-01-23 10:35:14.645 250273 DEBUG oslo_concurrency.lockutils [None req-d7031a76-e63e-4f34-bbed-84f23005efdd e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Acquired lock "refresh_cache-5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:35:14 np0005593232 nova_compute[250269]: 2026-01-23 10:35:14.646 250273 DEBUG nova.network.neutron [None req-d7031a76-e63e-4f34-bbed-84f23005efdd e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:35:15 np0005593232 nova_compute[250269]: 2026-01-23 10:35:15.442 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:15 np0005593232 nova_compute[250269]: 2026-01-23 10:35:15.653 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3218: 321 pgs: 321 active+clean; 787 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 435 KiB/s rd, 3.9 MiB/s wr, 127 op/s
Jan 23 05:35:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 05:35:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:16.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 05:35:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:16.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.025 250273 DEBUG nova.network.neutron [None req-d7031a76-e63e-4f34-bbed-84f23005efdd e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Updating instance_info_cache with network_info: [{"id": "629a33c7-4917-4a44-9978-d78fecc89001", "address": "fa:16:3e:fe:27:90", "network": {"id": "d7d5530f-5227-4f75-bac0-2604bb3d68e2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1351383381-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815b71acf60d4ed8933ebd05228fa0c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap629a33c7-49", "ovs_interfaceid": "629a33c7-4917-4a44-9978-d78fecc89001", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.075 250273 DEBUG oslo_concurrency.lockutils [None req-d7031a76-e63e-4f34-bbed-84f23005efdd e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Releasing lock "refresh_cache-5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.077 250273 DEBUG nova.objects.instance [None req-d7031a76-e63e-4f34-bbed-84f23005efdd e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lazy-loading 'flavor' on Instance uuid 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.180 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.180 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.181 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.208 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.209 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.209 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.210 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:35:17 np0005593232 kernel: tap629a33c7-49 (unregistering): left promiscuous mode
Jan 23 05:35:17 np0005593232 NetworkManager[49057]: <info>  [1769164517.2338] device (tap629a33c7-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.246 250273 DEBUG nova.compute.manager [req-c2dd66b3-66e5-4e80-a52f-f72c07689e40 req-3ea6459e-d4d0-4296-a07c-54e13b73dcbd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received event network-changed-629a33c7-4917-4a44-9978-d78fecc89001 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.249 250273 DEBUG nova.compute.manager [req-c2dd66b3-66e5-4e80-a52f-f72c07689e40 req-3ea6459e-d4d0-4296-a07c-54e13b73dcbd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Refreshing instance network info cache due to event network-changed-629a33c7-4917-4a44-9978-d78fecc89001. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.250 250273 DEBUG oslo_concurrency.lockutils [req-c2dd66b3-66e5-4e80-a52f-f72c07689e40 req-3ea6459e-d4d0-4296-a07c-54e13b73dcbd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:35:17 np0005593232 ovn_controller[151001]: 2026-01-23T10:35:17Z|00731|binding|INFO|Releasing lport 629a33c7-4917-4a44-9978-d78fecc89001 from this chassis (sb_readonly=0)
Jan 23 05:35:17 np0005593232 ovn_controller[151001]: 2026-01-23T10:35:17Z|00732|binding|INFO|Setting lport 629a33c7-4917-4a44-9978-d78fecc89001 down in Southbound
Jan 23 05:35:17 np0005593232 ovn_controller[151001]: 2026-01-23T10:35:17Z|00733|binding|INFO|Removing iface tap629a33c7-49 ovn-installed in OVS
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.299 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.311 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:27:90 10.100.0.5'], port_security=['fa:16:3e:fe:27:90 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5d42acd2-a3c4-40d2-b1c7-0dd7920671fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7d5530f-5227-4f75-bac0-2604bb3d68e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '815b71acf60d4ed8933ebd05228fa0c0', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5072a19e-23bb-454f-8286-abdf06d201ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.187', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11c3d371-746a-4085-8cb4-b3d90e2e50bf, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=629a33c7-4917-4a44-9978-d78fecc89001) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.316 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.318 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 629a33c7-4917-4a44-9978-d78fecc89001 in datapath d7d5530f-5227-4f75-bac0-2604bb3d68e2 unbound from our chassis#033[00m
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.321 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d7d5530f-5227-4f75-bac0-2604bb3d68e2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.323 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9971a384-f5b0-414b-b8fb-ab07589e3474]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.324 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2 namespace which is not needed anymore#033[00m
Jan 23 05:35:17 np0005593232 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000b8.scope: Deactivated successfully.
Jan 23 05:35:17 np0005593232 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d000000b8.scope: Consumed 6.673s CPU time.
Jan 23 05:35:17 np0005593232 systemd-machined[215836]: Machine qemu-82-instance-000000b8 terminated.
Jan 23 05:35:17 np0005593232 neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2[377504]: [NOTICE]   (377508) : haproxy version is 2.8.14-c23fe91
Jan 23 05:35:17 np0005593232 neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2[377504]: [NOTICE]   (377508) : path to executable is /usr/sbin/haproxy
Jan 23 05:35:17 np0005593232 neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2[377504]: [WARNING]  (377508) : Exiting Master process...
Jan 23 05:35:17 np0005593232 neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2[377504]: [ALERT]    (377508) : Current worker (377510) exited with code 143 (Terminated)
Jan 23 05:35:17 np0005593232 neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2[377504]: [WARNING]  (377508) : All workers exited. Exiting... (0)
Jan 23 05:35:17 np0005593232 systemd[1]: libpod-96353f444ce4e97fec055dbac166475273a4e7da317a4f3da7ef3204490507f1.scope: Deactivated successfully.
Jan 23 05:35:17 np0005593232 podman[377634]: 2026-01-23 10:35:17.487717968 +0000 UTC m=+0.054027527 container died 96353f444ce4e97fec055dbac166475273a4e7da317a4f3da7ef3204490507f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:35:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-96353f444ce4e97fec055dbac166475273a4e7da317a4f3da7ef3204490507f1-userdata-shm.mount: Deactivated successfully.
Jan 23 05:35:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7adb2205971e950cd9e8b950a63f12910be255d4f7864c744ad715872f285813-merged.mount: Deactivated successfully.
Jan 23 05:35:17 np0005593232 podman[377634]: 2026-01-23 10:35:17.554482295 +0000 UTC m=+0.120791814 container cleanup 96353f444ce4e97fec055dbac166475273a4e7da317a4f3da7ef3204490507f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.573 250273 INFO nova.virt.libvirt.driver [-] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Instance destroyed successfully.#033[00m
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.574 250273 DEBUG nova.objects.instance [None req-d7031a76-e63e-4f34-bbed-84f23005efdd e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lazy-loading 'numa_topology' on Instance uuid 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:35:17 np0005593232 systemd[1]: libpod-conmon-96353f444ce4e97fec055dbac166475273a4e7da317a4f3da7ef3204490507f1.scope: Deactivated successfully.
Jan 23 05:35:17 np0005593232 podman[377670]: 2026-01-23 10:35:17.640772208 +0000 UTC m=+0.058357680 container remove 96353f444ce4e97fec055dbac166475273a4e7da317a4f3da7ef3204490507f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.647 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2a3a4c40-c98b-4385-8198-ccbb787fac93]: (4, ('Fri Jan 23 10:35:17 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2 (96353f444ce4e97fec055dbac166475273a4e7da317a4f3da7ef3204490507f1)\n96353f444ce4e97fec055dbac166475273a4e7da317a4f3da7ef3204490507f1\nFri Jan 23 10:35:17 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2 (96353f444ce4e97fec055dbac166475273a4e7da317a4f3da7ef3204490507f1)\n96353f444ce4e97fec055dbac166475273a4e7da317a4f3da7ef3204490507f1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.651 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3814ad43-0876-41f0-8ac2-dcbf40e22617]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.652 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7d5530f-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.654 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:17 np0005593232 kernel: tapd7d5530f-50: left promiscuous mode
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.688 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.692 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ca654cc6-7295-4ff7-85c3-1eaf51904d15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:17 np0005593232 kernel: tap629a33c7-49: entered promiscuous mode
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.704 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:17 np0005593232 NetworkManager[49057]: <info>  [1769164517.7057] manager: (tap629a33c7-49): new Tun device (/org/freedesktop/NetworkManager/Devices/340)
Jan 23 05:35:17 np0005593232 ovn_controller[151001]: 2026-01-23T10:35:17Z|00734|binding|INFO|Claiming lport 629a33c7-4917-4a44-9978-d78fecc89001 for this chassis.
Jan 23 05:35:17 np0005593232 ovn_controller[151001]: 2026-01-23T10:35:17Z|00735|binding|INFO|629a33c7-4917-4a44-9978-d78fecc89001: Claiming fa:16:3e:fe:27:90 10.100.0.5
Jan 23 05:35:17 np0005593232 systemd-udevd[377700]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.707 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[fedb7c77-9ec5-4d23-9889-f22680f3a280]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.708 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e995d095-02ee-4336-bbf0-ecbac61cb8f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.717 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:27:90 10.100.0.5'], port_security=['fa:16:3e:fe:27:90 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5d42acd2-a3c4-40d2-b1c7-0dd7920671fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7d5530f-5227-4f75-bac0-2604bb3d68e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '815b71acf60d4ed8933ebd05228fa0c0', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5072a19e-23bb-454f-8286-abdf06d201ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.187', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11c3d371-746a-4085-8cb4-b3d90e2e50bf, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=629a33c7-4917-4a44-9978-d78fecc89001) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:35:17 np0005593232 NetworkManager[49057]: <info>  [1769164517.7249] device (tap629a33c7-49): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:35:17 np0005593232 NetworkManager[49057]: <info>  [1769164517.7260] device (tap629a33c7-49): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:35:17 np0005593232 ovn_controller[151001]: 2026-01-23T10:35:17Z|00736|binding|INFO|Setting lport 629a33c7-4917-4a44-9978-d78fecc89001 ovn-installed in OVS
Jan 23 05:35:17 np0005593232 ovn_controller[151001]: 2026-01-23T10:35:17Z|00737|binding|INFO|Setting lport 629a33c7-4917-4a44-9978-d78fecc89001 up in Southbound
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.730 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[92b671bb-219a-4ebb-aa11-b9c96cb7eff6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 842368, 'reachable_time': 15113, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377704, 'error': None, 'target': 'ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.731 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:17 np0005593232 systemd[1]: run-netns-ovnmeta\x2dd7d5530f\x2d5227\x2d4f75\x2dbac0\x2d2604bb3d68e2.mount: Deactivated successfully.
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.735 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.737 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.737 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[3f81d535-b123-4e37-8b86-d822520ac7fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.738 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 629a33c7-4917-4a44-9978-d78fecc89001 in datapath d7d5530f-5227-4f75-bac0-2604bb3d68e2 unbound from our chassis#033[00m
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.740 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7d5530f-5227-4f75-bac0-2604bb3d68e2#033[00m
Jan 23 05:35:17 np0005593232 systemd-machined[215836]: New machine qemu-83-instance-000000b8.
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.757 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d49d178a-4cd3-4f0e-a395-994c84d05057]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.759 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd7d5530f-51 in ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.761 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd7d5530f-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.761 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3d02b29c-0e9e-44a6-ba11-fa6957c781ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.762 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[736d5e05-3960-4942-9c95-db3c0b2483c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:17 np0005593232 systemd[1]: Started Virtual Machine qemu-83-instance-000000b8.
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.780 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[7cba9177-059e-42bf-8bd2-6600b09436b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.794 250273 DEBUG nova.compute.manager [req-53c6560f-f265-4f5a-a261-c814f1d02682 req-1a362d7e-5b14-4b77-b94f-0c4420553310 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received event network-vif-unplugged-629a33c7-4917-4a44-9978-d78fecc89001 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.795 250273 DEBUG oslo_concurrency.lockutils [req-53c6560f-f265-4f5a-a261-c814f1d02682 req-1a362d7e-5b14-4b77-b94f-0c4420553310 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.796 250273 DEBUG oslo_concurrency.lockutils [req-53c6560f-f265-4f5a-a261-c814f1d02682 req-1a362d7e-5b14-4b77-b94f-0c4420553310 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.796 250273 DEBUG oslo_concurrency.lockutils [req-53c6560f-f265-4f5a-a261-c814f1d02682 req-1a362d7e-5b14-4b77-b94f-0c4420553310 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.797 250273 DEBUG nova.compute.manager [req-53c6560f-f265-4f5a-a261-c814f1d02682 req-1a362d7e-5b14-4b77-b94f-0c4420553310 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] No waiting events found dispatching network-vif-unplugged-629a33c7-4917-4a44-9978-d78fecc89001 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.798 250273 WARNING nova.compute.manager [req-53c6560f-f265-4f5a-a261-c814f1d02682 req-1a362d7e-5b14-4b77-b94f-0c4420553310 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received unexpected event network-vif-unplugged-629a33c7-4917-4a44-9978-d78fecc89001 for instance with vm_state rescued and task_state unrescuing.#033[00m
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.819 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2ea996dc-aa51-443c-8f86-4fdb5140391d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.857 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[df63e5ce-11d4-4889-8022-7c33ce38c8da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:17 np0005593232 systemd-udevd[377612]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.865 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6667738a-0a03-4708-b314-82aca39a5108]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:17 np0005593232 NetworkManager[49057]: <info>  [1769164517.8673] manager: (tapd7d5530f-50): new Veth device (/org/freedesktop/NetworkManager/Devices/341)
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.910 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[defee0df-ab81-4166-bb62-3b0b018f8501]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.914 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[ca7b50d3-a2fb-4276-a87e-ae9aa92668d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:17 np0005593232 NetworkManager[49057]: <info>  [1769164517.9498] device (tapd7d5530f-50): carrier: link connected
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.958 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd953c9-a46b-4acc-bbc9-03f2a177cb60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:17 np0005593232 nova_compute[250269]: 2026-01-23 10:35:17.972 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.975 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=70, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=69) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.978 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[405be015-a607-4f09-9e90-13068d728a62]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7d5530f-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:67:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 843109, 'reachable_time': 27131, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 377738, 'error': None, 'target': 'ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:17.998 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5e066764-1091-46b0-86df-632ec70112c6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe00:67cc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 843109, 'tstamp': 843109}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 377739, 'error': None, 'target': 'ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:18.018 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[69175dd6-df3b-4ab0-b1fe-1a8c70c6a567]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7d5530f-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:00:67:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 221], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 843109, 'reachable_time': 27131, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 377740, 'error': None, 'target': 'ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:18.057 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f6aedd2c-cd7c-4e60-a667-34af7d17b16c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3219: 321 pgs: 321 active+clean; 787 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 3.9 MiB/s wr, 259 op/s
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:18.128 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6e87ff34-d5c3-42d0-baf0-c4fa824dc4c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:18.129 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7d5530f-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:18.130 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:18.130 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7d5530f-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:35:18 np0005593232 nova_compute[250269]: 2026-01-23 10:35:18.132 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:18 np0005593232 NetworkManager[49057]: <info>  [1769164518.1326] manager: (tapd7d5530f-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/342)
Jan 23 05:35:18 np0005593232 kernel: tapd7d5530f-50: entered promiscuous mode
Jan 23 05:35:18 np0005593232 nova_compute[250269]: 2026-01-23 10:35:18.135 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:18.137 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7d5530f-50, col_values=(('external_ids', {'iface-id': '4c99eeb5-c437-4d31-ac3b-bfd151140733'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:35:18 np0005593232 ovn_controller[151001]: 2026-01-23T10:35:18Z|00738|binding|INFO|Releasing lport 4c99eeb5-c437-4d31-ac3b-bfd151140733 from this chassis (sb_readonly=0)
Jan 23 05:35:18 np0005593232 nova_compute[250269]: 2026-01-23 10:35:18.138 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:18 np0005593232 nova_compute[250269]: 2026-01-23 10:35:18.166 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:18.168 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d7d5530f-5227-4f75-bac0-2604bb3d68e2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d7d5530f-5227-4f75-bac0-2604bb3d68e2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:18.169 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[83b9f666-3417-4c22-9661-59137c833bf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:18.170 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-d7d5530f-5227-4f75-bac0-2604bb3d68e2
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/d7d5530f-5227-4f75-bac0-2604bb3d68e2.pid.haproxy
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID d7d5530f-5227-4f75-bac0-2604bb3d68e2
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:18.172 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2', 'env', 'PROCESS_TAG=haproxy-d7d5530f-5227-4f75-bac0-2604bb3d68e2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d7d5530f-5227-4f75-bac0-2604bb3d68e2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:35:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:35:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:18.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:35:18 np0005593232 podman[377809]: 2026-01-23 10:35:18.586050066 +0000 UTC m=+0.049076396 container create 4b9b9028e3ae1b343b49c48654ce140bfb956515020ab778e753880552ce4cc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 23 05:35:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:18.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:18 np0005593232 systemd[1]: Started libpod-conmon-4b9b9028e3ae1b343b49c48654ce140bfb956515020ab778e753880552ce4cc8.scope.
Jan 23 05:35:18 np0005593232 podman[377809]: 2026-01-23 10:35:18.562527337 +0000 UTC m=+0.025553687 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:35:18 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:35:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/462208e503e76ce032de1df26aa91ac5201748cd608d9520ed5c8879264d62e9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:35:18 np0005593232 podman[377809]: 2026-01-23 10:35:18.685194374 +0000 UTC m=+0.148220724 container init 4b9b9028e3ae1b343b49c48654ce140bfb956515020ab778e753880552ce4cc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Jan 23 05:35:18 np0005593232 podman[377809]: 2026-01-23 10:35:18.69739106 +0000 UTC m=+0.160417390 container start 4b9b9028e3ae1b343b49c48654ce140bfb956515020ab778e753880552ce4cc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:35:18 np0005593232 neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2[377822]: [NOTICE]   (377845) : New worker (377852) forked
Jan 23 05:35:18 np0005593232 neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2[377822]: [NOTICE]   (377845) : Loading success.
Jan 23 05:35:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:18.764 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:35:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:35:18 np0005593232 nova_compute[250269]: 2026-01-23 10:35:18.815 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Removed pending event for 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 23 05:35:18 np0005593232 nova_compute[250269]: 2026-01-23 10:35:18.817 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164518.8153317, 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:35:18 np0005593232 nova_compute[250269]: 2026-01-23 10:35:18.817 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:35:18 np0005593232 nova_compute[250269]: 2026-01-23 10:35:18.954 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:35:18 np0005593232 nova_compute[250269]: 2026-01-23 10:35:18.960 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:35:18 np0005593232 nova_compute[250269]: 2026-01-23 10:35:18.989 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Jan 23 05:35:18 np0005593232 nova_compute[250269]: 2026-01-23 10:35:18.990 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164518.8204994, 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:35:18 np0005593232 nova_compute[250269]: 2026-01-23 10:35:18.990 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] VM Started (Lifecycle Event)#033[00m
Jan 23 05:35:19 np0005593232 nova_compute[250269]: 2026-01-23 10:35:19.032 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:35:19 np0005593232 nova_compute[250269]: 2026-01-23 10:35:19.037 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:35:19 np0005593232 nova_compute[250269]: 2026-01-23 10:35:19.076 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] During sync_power_state the instance has a pending task (unrescuing). Skip.#033[00m
Jan 23 05:35:19 np0005593232 nova_compute[250269]: 2026-01-23 10:35:19.401 250273 DEBUG nova.compute.manager [req-4cec25c0-24ad-4267-a39b-37f3a1cc49a8 req-8223e701-997d-4178-a63f-e7e4880283c6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received event network-changed-629a33c7-4917-4a44-9978-d78fecc89001 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:35:19 np0005593232 nova_compute[250269]: 2026-01-23 10:35:19.402 250273 DEBUG nova.compute.manager [req-4cec25c0-24ad-4267-a39b-37f3a1cc49a8 req-8223e701-997d-4178-a63f-e7e4880283c6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Refreshing instance network info cache due to event network-changed-629a33c7-4917-4a44-9978-d78fecc89001. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:35:19 np0005593232 nova_compute[250269]: 2026-01-23 10:35:19.402 250273 DEBUG oslo_concurrency.lockutils [req-4cec25c0-24ad-4267-a39b-37f3a1cc49a8 req-8223e701-997d-4178-a63f-e7e4880283c6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:35:19 np0005593232 nova_compute[250269]: 2026-01-23 10:35:19.966 250273 DEBUG nova.compute.manager [req-5cb68876-72c6-4cdc-998d-046b9591ba4a req-9de54ea7-16a0-45ac-bf3c-de323ff0c99b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received event network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:35:19 np0005593232 nova_compute[250269]: 2026-01-23 10:35:19.967 250273 DEBUG oslo_concurrency.lockutils [req-5cb68876-72c6-4cdc-998d-046b9591ba4a req-9de54ea7-16a0-45ac-bf3c-de323ff0c99b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:35:19 np0005593232 nova_compute[250269]: 2026-01-23 10:35:19.967 250273 DEBUG oslo_concurrency.lockutils [req-5cb68876-72c6-4cdc-998d-046b9591ba4a req-9de54ea7-16a0-45ac-bf3c-de323ff0c99b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:35:19 np0005593232 nova_compute[250269]: 2026-01-23 10:35:19.967 250273 DEBUG oslo_concurrency.lockutils [req-5cb68876-72c6-4cdc-998d-046b9591ba4a req-9de54ea7-16a0-45ac-bf3c-de323ff0c99b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:35:19 np0005593232 nova_compute[250269]: 2026-01-23 10:35:19.968 250273 DEBUG nova.compute.manager [req-5cb68876-72c6-4cdc-998d-046b9591ba4a req-9de54ea7-16a0-45ac-bf3c-de323ff0c99b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] No waiting events found dispatching network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:35:19 np0005593232 nova_compute[250269]: 2026-01-23 10:35:19.968 250273 WARNING nova.compute.manager [req-5cb68876-72c6-4cdc-998d-046b9591ba4a req-9de54ea7-16a0-45ac-bf3c-de323ff0c99b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received unexpected event network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 for instance with vm_state rescued and task_state unrescuing.#033[00m
Jan 23 05:35:19 np0005593232 nova_compute[250269]: 2026-01-23 10:35:19.969 250273 DEBUG nova.compute.manager [req-5cb68876-72c6-4cdc-998d-046b9591ba4a req-9de54ea7-16a0-45ac-bf3c-de323ff0c99b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received event network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:35:19 np0005593232 nova_compute[250269]: 2026-01-23 10:35:19.969 250273 DEBUG oslo_concurrency.lockutils [req-5cb68876-72c6-4cdc-998d-046b9591ba4a req-9de54ea7-16a0-45ac-bf3c-de323ff0c99b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:35:19 np0005593232 nova_compute[250269]: 2026-01-23 10:35:19.969 250273 DEBUG oslo_concurrency.lockutils [req-5cb68876-72c6-4cdc-998d-046b9591ba4a req-9de54ea7-16a0-45ac-bf3c-de323ff0c99b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:35:19 np0005593232 nova_compute[250269]: 2026-01-23 10:35:19.970 250273 DEBUG oslo_concurrency.lockutils [req-5cb68876-72c6-4cdc-998d-046b9591ba4a req-9de54ea7-16a0-45ac-bf3c-de323ff0c99b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:35:19 np0005593232 nova_compute[250269]: 2026-01-23 10:35:19.970 250273 DEBUG nova.compute.manager [req-5cb68876-72c6-4cdc-998d-046b9591ba4a req-9de54ea7-16a0-45ac-bf3c-de323ff0c99b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] No waiting events found dispatching network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:35:19 np0005593232 nova_compute[250269]: 2026-01-23 10:35:19.971 250273 WARNING nova.compute.manager [req-5cb68876-72c6-4cdc-998d-046b9591ba4a req-9de54ea7-16a0-45ac-bf3c-de323ff0c99b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received unexpected event network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 for instance with vm_state rescued and task_state unrescuing.#033[00m
Jan 23 05:35:19 np0005593232 nova_compute[250269]: 2026-01-23 10:35:19.971 250273 DEBUG nova.compute.manager [req-5cb68876-72c6-4cdc-998d-046b9591ba4a req-9de54ea7-16a0-45ac-bf3c-de323ff0c99b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received event network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:35:19 np0005593232 nova_compute[250269]: 2026-01-23 10:35:19.971 250273 DEBUG oslo_concurrency.lockutils [req-5cb68876-72c6-4cdc-998d-046b9591ba4a req-9de54ea7-16a0-45ac-bf3c-de323ff0c99b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:35:19 np0005593232 nova_compute[250269]: 2026-01-23 10:35:19.972 250273 DEBUG oslo_concurrency.lockutils [req-5cb68876-72c6-4cdc-998d-046b9591ba4a req-9de54ea7-16a0-45ac-bf3c-de323ff0c99b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:35:19 np0005593232 nova_compute[250269]: 2026-01-23 10:35:19.972 250273 DEBUG oslo_concurrency.lockutils [req-5cb68876-72c6-4cdc-998d-046b9591ba4a req-9de54ea7-16a0-45ac-bf3c-de323ff0c99b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:35:19 np0005593232 nova_compute[250269]: 2026-01-23 10:35:19.973 250273 DEBUG nova.compute.manager [req-5cb68876-72c6-4cdc-998d-046b9591ba4a req-9de54ea7-16a0-45ac-bf3c-de323ff0c99b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] No waiting events found dispatching network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:35:19 np0005593232 nova_compute[250269]: 2026-01-23 10:35:19.973 250273 WARNING nova.compute.manager [req-5cb68876-72c6-4cdc-998d-046b9591ba4a req-9de54ea7-16a0-45ac-bf3c-de323ff0c99b 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received unexpected event network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 for instance with vm_state rescued and task_state unrescuing.#033[00m
Jan 23 05:35:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3220: 321 pgs: 321 active+clean; 787 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 134 KiB/s wr, 175 op/s
Jan 23 05:35:20 np0005593232 nova_compute[250269]: 2026-01-23 10:35:20.245 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Updating instance_info_cache with network_info: [{"id": "629a33c7-4917-4a44-9978-d78fecc89001", "address": "fa:16:3e:fe:27:90", "network": {"id": "d7d5530f-5227-4f75-bac0-2604bb3d68e2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1351383381-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815b71acf60d4ed8933ebd05228fa0c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap629a33c7-49", "ovs_interfaceid": "629a33c7-4917-4a44-9978-d78fecc89001", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:35:20 np0005593232 nova_compute[250269]: 2026-01-23 10:35:20.304 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:35:20 np0005593232 nova_compute[250269]: 2026-01-23 10:35:20.305 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:35:20 np0005593232 nova_compute[250269]: 2026-01-23 10:35:20.306 250273 DEBUG oslo_concurrency.lockutils [req-c2dd66b3-66e5-4e80-a52f-f72c07689e40 req-3ea6459e-d4d0-4296-a07c-54e13b73dcbd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:35:20 np0005593232 nova_compute[250269]: 2026-01-23 10:35:20.306 250273 DEBUG nova.network.neutron [req-c2dd66b3-66e5-4e80-a52f-f72c07689e40 req-3ea6459e-d4d0-4296-a07c-54e13b73dcbd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Refreshing network info cache for port 629a33c7-4917-4a44-9978-d78fecc89001 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:35:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:20.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:20 np0005593232 nova_compute[250269]: 2026-01-23 10:35:20.441 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:35:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:20.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:35:20 np0005593232 nova_compute[250269]: 2026-01-23 10:35:20.694 250273 DEBUG nova.compute.manager [None req-d7031a76-e63e-4f34-bbed-84f23005efdd e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:35:20 np0005593232 nova_compute[250269]: 2026-01-23 10:35:20.697 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3221: 321 pgs: 321 active+clean; 787 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.4 MiB/s rd, 134 KiB/s wr, 199 op/s
Jan 23 05:35:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:22.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:35:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:22.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:35:22 np0005593232 nova_compute[250269]: 2026-01-23 10:35:22.976 250273 DEBUG nova.network.neutron [req-c2dd66b3-66e5-4e80-a52f-f72c07689e40 req-3ea6459e-d4d0-4296-a07c-54e13b73dcbd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Updated VIF entry in instance network info cache for port 629a33c7-4917-4a44-9978-d78fecc89001. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:35:22 np0005593232 nova_compute[250269]: 2026-01-23 10:35:22.979 250273 DEBUG nova.network.neutron [req-c2dd66b3-66e5-4e80-a52f-f72c07689e40 req-3ea6459e-d4d0-4296-a07c-54e13b73dcbd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Updating instance_info_cache with network_info: [{"id": "629a33c7-4917-4a44-9978-d78fecc89001", "address": "fa:16:3e:fe:27:90", "network": {"id": "d7d5530f-5227-4f75-bac0-2604bb3d68e2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1351383381-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815b71acf60d4ed8933ebd05228fa0c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap629a33c7-49", "ovs_interfaceid": "629a33c7-4917-4a44-9978-d78fecc89001", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:35:23 np0005593232 nova_compute[250269]: 2026-01-23 10:35:23.058 250273 DEBUG oslo_concurrency.lockutils [req-c2dd66b3-66e5-4e80-a52f-f72c07689e40 req-3ea6459e-d4d0-4296-a07c-54e13b73dcbd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:35:23 np0005593232 nova_compute[250269]: 2026-01-23 10:35:23.060 250273 DEBUG oslo_concurrency.lockutils [req-4cec25c0-24ad-4267-a39b-37f3a1cc49a8 req-8223e701-997d-4178-a63f-e7e4880283c6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:35:23 np0005593232 nova_compute[250269]: 2026-01-23 10:35:23.061 250273 DEBUG nova.network.neutron [req-4cec25c0-24ad-4267-a39b-37f3a1cc49a8 req-8223e701-997d-4178-a63f-e7e4880283c6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Refreshing network info cache for port 629a33c7-4917-4a44-9978-d78fecc89001 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:35:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:35:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3222: 321 pgs: 321 active+clean; 787 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 134 KiB/s wr, 269 op/s
Jan 23 05:35:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:24.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:35:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:24.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:35:25 np0005593232 nova_compute[250269]: 2026-01-23 10:35:25.443 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:25 np0005593232 nova_compute[250269]: 2026-01-23 10:35:25.699 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3223: 321 pgs: 321 active+clean; 787 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.8 MiB/s rd, 26 KiB/s wr, 234 op/s
Jan 23 05:35:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:26.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:26.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:27.767 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '70'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:35:27 np0005593232 nova_compute[250269]: 2026-01-23 10:35:27.783 250273 DEBUG nova.network.neutron [req-4cec25c0-24ad-4267-a39b-37f3a1cc49a8 req-8223e701-997d-4178-a63f-e7e4880283c6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Updated VIF entry in instance network info cache for port 629a33c7-4917-4a44-9978-d78fecc89001. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:35:27 np0005593232 nova_compute[250269]: 2026-01-23 10:35:27.786 250273 DEBUG nova.network.neutron [req-4cec25c0-24ad-4267-a39b-37f3a1cc49a8 req-8223e701-997d-4178-a63f-e7e4880283c6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Updating instance_info_cache with network_info: [{"id": "629a33c7-4917-4a44-9978-d78fecc89001", "address": "fa:16:3e:fe:27:90", "network": {"id": "d7d5530f-5227-4f75-bac0-2604bb3d68e2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1351383381-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815b71acf60d4ed8933ebd05228fa0c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap629a33c7-49", "ovs_interfaceid": "629a33c7-4917-4a44-9978-d78fecc89001", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:35:27 np0005593232 nova_compute[250269]: 2026-01-23 10:35:27.913 250273 DEBUG oslo_concurrency.lockutils [req-4cec25c0-24ad-4267-a39b-37f3a1cc49a8 req-8223e701-997d-4178-a63f-e7e4880283c6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:35:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3224: 321 pgs: 321 active+clean; 833 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 261 op/s
Jan 23 05:35:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:28.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:28.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:35:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3225: 321 pgs: 321 active+clean; 833 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Jan 23 05:35:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:30.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:30 np0005593232 nova_compute[250269]: 2026-01-23 10:35:30.446 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:30 np0005593232 podman[377938]: 2026-01-23 10:35:30.466812805 +0000 UTC m=+0.113428365 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 23 05:35:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:30.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:30 np0005593232 nova_compute[250269]: 2026-01-23 10:35:30.701 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3226: 321 pgs: 321 active+clean; 842 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.0 MiB/s wr, 150 op/s
Jan 23 05:35:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:32.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:32.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:33 np0005593232 ovn_controller[151001]: 2026-01-23T10:35:33Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fe:27:90 10.100.0.5
Jan 23 05:35:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:35:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3227: 321 pgs: 321 active+clean; 866 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 212 op/s
Jan 23 05:35:34 np0005593232 podman[377967]: 2026-01-23 10:35:34.402969734 +0000 UTC m=+0.062873548 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:35:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:35:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:34.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:35:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:35:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:34.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:35:35 np0005593232 nova_compute[250269]: 2026-01-23 10:35:35.483 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:35 np0005593232 nova_compute[250269]: 2026-01-23 10:35:35.703 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3228: 321 pgs: 321 active+clean; 866 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 941 KiB/s rd, 3.9 MiB/s wr, 135 op/s
Jan 23 05:35:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:35:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:36.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:35:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:35:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:36.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:35:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:35:37
Jan 23 05:35:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:35:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:35:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'vms', '.rgw.root', 'default.rgw.control', 'images', 'default.rgw.log']
Jan 23 05:35:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:35:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:35:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:35:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:35:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:35:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:35:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:35:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3229: 321 pgs: 321 active+clean; 866 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 963 KiB/s rd, 3.9 MiB/s wr, 141 op/s
Jan 23 05:35:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:35:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:38.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:35:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:35:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:35:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:35:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:35:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:35:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:35:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:35:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:35:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:35:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:35:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:38.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:35:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3230: 321 pgs: 321 active+clean; 866 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 944 KiB/s rd, 2.2 MiB/s wr, 113 op/s
Jan 23 05:35:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:40.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:40 np0005593232 nova_compute[250269]: 2026-01-23 10:35:40.488 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:35:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:40.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:35:40 np0005593232 nova_compute[250269]: 2026-01-23 10:35:40.704 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3231: 321 pgs: 321 active+clean; 868 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.2 MiB/s wr, 125 op/s
Jan 23 05:35:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:42.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:35:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:42.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:35:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:42.652 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:35:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:42.653 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:35:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:42.653 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:35:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:35:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3232: 321 pgs: 321 active+clean; 868 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 1.9 MiB/s wr, 167 op/s
Jan 23 05:35:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:44.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:44.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:45 np0005593232 nova_compute[250269]: 2026-01-23 10:35:45.494 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:45 np0005593232 nova_compute[250269]: 2026-01-23 10:35:45.707 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3233: 321 pgs: 321 active+clean; 868 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 38 KiB/s wr, 80 op/s
Jan 23 05:35:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 05:35:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:46.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 05:35:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:35:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:46.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:35:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:35:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:35:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:35:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:35:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.014023568362646566 of space, bias 1.0, pg target 4.2070705087939695 quantized to 32 (current 32)
Jan 23 05:35:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:35:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021759550065140517 of space, bias 1.0, pg target 0.6440826819281593 quantized to 32 (current 32)
Jan 23 05:35:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:35:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:35:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:35:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.007032958484552392 of space, bias 1.0, pg target 2.081755711427508 quantized to 32 (current 32)
Jan 23 05:35:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:35:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017099385817978784 quantized to 16 (current 32)
Jan 23 05:35:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:35:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:35:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:35:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002137423227247348 quantized to 32 (current 32)
Jan 23 05:35:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:35:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018168097431602458 quantized to 32 (current 32)
Jan 23 05:35:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:35:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:35:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:35:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004274846454494696 quantized to 32 (current 32)
Jan 23 05:35:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3234: 321 pgs: 321 active+clean; 870 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 234 KiB/s wr, 87 op/s
Jan 23 05:35:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:35:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:48.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:35:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:48.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e358 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:35:49 np0005593232 nova_compute[250269]: 2026-01-23 10:35:49.601 250273 DEBUG oslo_concurrency.lockutils [None req-0ef86c61-63b0-4a1b-a0ef-aaf168b845cd e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Acquiring lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:35:49 np0005593232 nova_compute[250269]: 2026-01-23 10:35:49.602 250273 DEBUG oslo_concurrency.lockutils [None req-0ef86c61-63b0-4a1b-a0ef-aaf168b845cd e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:35:49 np0005593232 nova_compute[250269]: 2026-01-23 10:35:49.622 250273 INFO nova.compute.manager [None req-0ef86c61-63b0-4a1b-a0ef-aaf168b845cd e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Detaching volume c35b89da-3ccd-4fac-ae16-afc4a62713b5#033[00m
Jan 23 05:35:49 np0005593232 nova_compute[250269]: 2026-01-23 10:35:49.916 250273 INFO nova.virt.block_device [None req-0ef86c61-63b0-4a1b-a0ef-aaf168b845cd e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Attempting to driver detach volume c35b89da-3ccd-4fac-ae16-afc4a62713b5 from mountpoint /dev/vdb#033[00m
Jan 23 05:35:49 np0005593232 nova_compute[250269]: 2026-01-23 10:35:49.934 250273 DEBUG nova.virt.libvirt.driver [None req-0ef86c61-63b0-4a1b-a0ef-aaf168b845cd e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Attempting to detach device vdb from instance 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 23 05:35:49 np0005593232 nova_compute[250269]: 2026-01-23 10:35:49.935 250273 DEBUG nova.virt.libvirt.guest [None req-0ef86c61-63b0-4a1b-a0ef-aaf168b845cd e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] detach device xml: <disk type="network" device="disk">
Jan 23 05:35:49 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:35:49 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-c35b89da-3ccd-4fac-ae16-afc4a62713b5">
Jan 23 05:35:49 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:35:49 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:35:49 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:35:49 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:35:49 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 05:35:49 np0005593232 nova_compute[250269]:  <serial>c35b89da-3ccd-4fac-ae16-afc4a62713b5</serial>
Jan 23 05:35:49 np0005593232 nova_compute[250269]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 23 05:35:49 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:35:49 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 05:35:50 np0005593232 nova_compute[250269]: 2026-01-23 10:35:50.034 250273 INFO nova.virt.libvirt.driver [None req-0ef86c61-63b0-4a1b-a0ef-aaf168b845cd e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Successfully detached device vdb from instance 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe from the persistent domain config.#033[00m
Jan 23 05:35:50 np0005593232 nova_compute[250269]: 2026-01-23 10:35:50.035 250273 DEBUG nova.virt.libvirt.driver [None req-0ef86c61-63b0-4a1b-a0ef-aaf168b845cd e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 23 05:35:50 np0005593232 nova_compute[250269]: 2026-01-23 10:35:50.035 250273 DEBUG nova.virt.libvirt.guest [None req-0ef86c61-63b0-4a1b-a0ef-aaf168b845cd e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] detach device xml: <disk type="network" device="disk">
Jan 23 05:35:50 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:35:50 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-c35b89da-3ccd-4fac-ae16-afc4a62713b5">
Jan 23 05:35:50 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:35:50 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:35:50 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:35:50 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:35:50 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 05:35:50 np0005593232 nova_compute[250269]:  <serial>c35b89da-3ccd-4fac-ae16-afc4a62713b5</serial>
Jan 23 05:35:50 np0005593232 nova_compute[250269]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 23 05:35:50 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:35:50 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 05:35:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e358 do_prune osdmap full prune enabled
Jan 23 05:35:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3235: 321 pgs: 321 active+clean; 870 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 200 KiB/s wr, 81 op/s
Jan 23 05:35:50 np0005593232 nova_compute[250269]: 2026-01-23 10:35:50.239 250273 DEBUG nova.virt.libvirt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Received event <DeviceRemovedEvent: 1769164550.2389197, 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 23 05:35:50 np0005593232 nova_compute[250269]: 2026-01-23 10:35:50.241 250273 DEBUG nova.virt.libvirt.driver [None req-0ef86c61-63b0-4a1b-a0ef-aaf168b845cd e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 23 05:35:50 np0005593232 nova_compute[250269]: 2026-01-23 10:35:50.244 250273 INFO nova.virt.libvirt.driver [None req-0ef86c61-63b0-4a1b-a0ef-aaf168b845cd e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Successfully detached device vdb from instance 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe from the live domain config.#033[00m
Jan 23 05:35:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e359 e359: 3 total, 3 up, 3 in
Jan 23 05:35:50 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e359: 3 total, 3 up, 3 in
Jan 23 05:35:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:35:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:50.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:35:50 np0005593232 nova_compute[250269]: 2026-01-23 10:35:50.497 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:50.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:50 np0005593232 nova_compute[250269]: 2026-01-23 10:35:50.709 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:50 np0005593232 nova_compute[250269]: 2026-01-23 10:35:50.941 250273 DEBUG nova.objects.instance [None req-0ef86c61-63b0-4a1b-a0ef-aaf168b845cd e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lazy-loading 'flavor' on Instance uuid 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:35:51 np0005593232 nova_compute[250269]: 2026-01-23 10:35:51.007 250273 DEBUG oslo_concurrency.lockutils [None req-0ef86c61-63b0-4a1b-a0ef-aaf168b845cd e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.405s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:35:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3237: 321 pgs: 321 active+clean; 870 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 240 KiB/s wr, 86 op/s
Jan 23 05:35:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:52.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000056s ======
Jan 23 05:35:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:52.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Jan 23 05:35:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:35:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3238: 321 pgs: 321 active+clean; 870 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 250 KiB/s wr, 20 op/s
Jan 23 05:35:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e359 do_prune osdmap full prune enabled
Jan 23 05:35:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:35:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:54.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:35:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.005000142s ======
Jan 23 05:35:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:54.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.005000142s
Jan 23 05:35:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e360 e360: 3 total, 3 up, 3 in
Jan 23 05:35:55 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e360: 3 total, 3 up, 3 in
Jan 23 05:35:55 np0005593232 nova_compute[250269]: 2026-01-23 10:35:55.500 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:55 np0005593232 nova_compute[250269]: 2026-01-23 10:35:55.712 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3240: 321 pgs: 321 active+clean; 870 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 18 KiB/s wr, 14 op/s
Jan 23 05:35:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e360 do_prune osdmap full prune enabled
Jan 23 05:35:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:56.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:56.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e361 e361: 3 total, 3 up, 3 in
Jan 23 05:35:57 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e361: 3 total, 3 up, 3 in
Jan 23 05:35:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3242: 321 pgs: 321 active+clean; 812 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 172 KiB/s rd, 3.2 MiB/s wr, 97 op/s
Jan 23 05:35:58 np0005593232 nova_compute[250269]: 2026-01-23 10:35:58.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:35:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 05:35:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:35:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 05:35:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:35:58.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:35:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:35:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:35:58.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:35:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 23 05:35:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 05:35:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:35:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e361 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:35:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e361 do_prune osdmap full prune enabled
Jan 23 05:35:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 23 05:35:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 05:35:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e362 e362: 3 total, 3 up, 3 in
Jan 23 05:35:58 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e362: 3 total, 3 up, 3 in
Jan 23 05:35:58 np0005593232 nova_compute[250269]: 2026-01-23 10:35:58.929 250273 DEBUG oslo_concurrency.lockutils [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Acquiring lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:35:58 np0005593232 nova_compute[250269]: 2026-01-23 10:35:58.929 250273 DEBUG oslo_concurrency.lockutils [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:35:58 np0005593232 nova_compute[250269]: 2026-01-23 10:35:58.929 250273 DEBUG oslo_concurrency.lockutils [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Acquiring lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:35:58 np0005593232 nova_compute[250269]: 2026-01-23 10:35:58.929 250273 DEBUG oslo_concurrency.lockutils [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:35:58 np0005593232 nova_compute[250269]: 2026-01-23 10:35:58.930 250273 DEBUG oslo_concurrency.lockutils [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:35:58 np0005593232 nova_compute[250269]: 2026-01-23 10:35:58.931 250273 INFO nova.compute.manager [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Terminating instance#033[00m
Jan 23 05:35:58 np0005593232 nova_compute[250269]: 2026-01-23 10:35:58.931 250273 DEBUG nova.compute.manager [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:35:58 np0005593232 kernel: tap629a33c7-49 (unregistering): left promiscuous mode
Jan 23 05:35:58 np0005593232 NetworkManager[49057]: <info>  [1769164558.9907] device (tap629a33c7-49): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:35:59 np0005593232 ovn_controller[151001]: 2026-01-23T10:35:59Z|00739|binding|INFO|Releasing lport 629a33c7-4917-4a44-9978-d78fecc89001 from this chassis (sb_readonly=0)
Jan 23 05:35:59 np0005593232 ovn_controller[151001]: 2026-01-23T10:35:59Z|00740|binding|INFO|Setting lport 629a33c7-4917-4a44-9978-d78fecc89001 down in Southbound
Jan 23 05:35:59 np0005593232 nova_compute[250269]: 2026-01-23 10:35:59.045 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:59 np0005593232 ovn_controller[151001]: 2026-01-23T10:35:59Z|00741|binding|INFO|Removing iface tap629a33c7-49 ovn-installed in OVS
Jan 23 05:35:59 np0005593232 nova_compute[250269]: 2026-01-23 10:35:59.052 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:59.055 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:27:90 10.100.0.5'], port_security=['fa:16:3e:fe:27:90 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5d42acd2-a3c4-40d2-b1c7-0dd7920671fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7d5530f-5227-4f75-bac0-2604bb3d68e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '815b71acf60d4ed8933ebd05228fa0c0', 'neutron:revision_number': '8', 'neutron:security_group_ids': '5072a19e-23bb-454f-8286-abdf06d201ec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.187', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11c3d371-746a-4085-8cb4-b3d90e2e50bf, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=629a33c7-4917-4a44-9978-d78fecc89001) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:35:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:59.056 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 629a33c7-4917-4a44-9978-d78fecc89001 in datapath d7d5530f-5227-4f75-bac0-2604bb3d68e2 unbound from our chassis#033[00m
Jan 23 05:35:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:59.058 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d7d5530f-5227-4f75-bac0-2604bb3d68e2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:35:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:59.061 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b0582c5f-642b-46ea-8627-f5b6c057546f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:59.062 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2 namespace which is not needed anymore#033[00m
Jan 23 05:35:59 np0005593232 nova_compute[250269]: 2026-01-23 10:35:59.069 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:59 np0005593232 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000b8.scope: Deactivated successfully.
Jan 23 05:35:59 np0005593232 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d000000b8.scope: Consumed 15.871s CPU time.
Jan 23 05:35:59 np0005593232 systemd-machined[215836]: Machine qemu-83-instance-000000b8 terminated.
Jan 23 05:35:59 np0005593232 nova_compute[250269]: 2026-01-23 10:35:59.162 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:59 np0005593232 nova_compute[250269]: 2026-01-23 10:35:59.170 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:59 np0005593232 nova_compute[250269]: 2026-01-23 10:35:59.175 250273 INFO nova.virt.libvirt.driver [-] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Instance destroyed successfully.#033[00m
Jan 23 05:35:59 np0005593232 nova_compute[250269]: 2026-01-23 10:35:59.175 250273 DEBUG nova.objects.instance [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lazy-loading 'resources' on Instance uuid 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:35:59 np0005593232 nova_compute[250269]: 2026-01-23 10:35:59.200 250273 DEBUG nova.virt.libvirt.vif [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:34:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerStableDeviceRescueTest-server-171773642',display_name='tempest-ServerStableDeviceRescueTest-server-171773642',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstabledevicerescuetest-server-171773642',id=184,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBb+3zp8o0K+puz2JQkOGzmnnNl24MGD8VzK0xADWahKH4uswOlO8X7EcIpKY4ojueYsF9jTYZVtsKkrwP7uCzwhKymJejXuTC0hLydoquX1zBJO2KneNThhWactU3vFAw==',key_name='tempest-keypair-455350267',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:35:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='815b71acf60d4ed8933ebd05228fa0c0',ramdisk_id='',reservation_id='r-c01cxkoa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerStableDeviceRescueTest-1802220041',owner_user_name='tempest-ServerStableDeviceRescueTest-1802220041-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:35:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e1629a4b14764dddaabcadd16f3e1c1c',uuid=5d42acd2-a3c4-40d2-b1c7-0dd7920671fe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "629a33c7-4917-4a44-9978-d78fecc89001", "address": "fa:16:3e:fe:27:90", "network": {"id": "d7d5530f-5227-4f75-bac0-2604bb3d68e2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1351383381-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815b71acf60d4ed8933ebd05228fa0c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap629a33c7-49", "ovs_interfaceid": "629a33c7-4917-4a44-9978-d78fecc89001", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:35:59 np0005593232 nova_compute[250269]: 2026-01-23 10:35:59.201 250273 DEBUG nova.network.os_vif_util [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Converting VIF {"id": "629a33c7-4917-4a44-9978-d78fecc89001", "address": "fa:16:3e:fe:27:90", "network": {"id": "d7d5530f-5227-4f75-bac0-2604bb3d68e2", "bridge": "br-int", "label": "tempest-ServerStableDeviceRescueTest-1351383381-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "815b71acf60d4ed8933ebd05228fa0c0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap629a33c7-49", "ovs_interfaceid": "629a33c7-4917-4a44-9978-d78fecc89001", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:35:59 np0005593232 nova_compute[250269]: 2026-01-23 10:35:59.202 250273 DEBUG nova.network.os_vif_util [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fe:27:90,bridge_name='br-int',has_traffic_filtering=True,id=629a33c7-4917-4a44-9978-d78fecc89001,network=Network(d7d5530f-5227-4f75-bac0-2604bb3d68e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap629a33c7-49') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:35:59 np0005593232 nova_compute[250269]: 2026-01-23 10:35:59.202 250273 DEBUG os_vif [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:27:90,bridge_name='br-int',has_traffic_filtering=True,id=629a33c7-4917-4a44-9978-d78fecc89001,network=Network(d7d5530f-5227-4f75-bac0-2604bb3d68e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap629a33c7-49') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:35:59 np0005593232 nova_compute[250269]: 2026-01-23 10:35:59.205 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:59 np0005593232 nova_compute[250269]: 2026-01-23 10:35:59.205 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap629a33c7-49, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:35:59 np0005593232 nova_compute[250269]: 2026-01-23 10:35:59.207 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:59 np0005593232 nova_compute[250269]: 2026-01-23 10:35:59.209 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:59 np0005593232 nova_compute[250269]: 2026-01-23 10:35:59.219 250273 INFO os_vif [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fe:27:90,bridge_name='br-int',has_traffic_filtering=True,id=629a33c7-4917-4a44-9978-d78fecc89001,network=Network(d7d5530f-5227-4f75-bac0-2604bb3d68e2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap629a33c7-49')#033[00m
Jan 23 05:35:59 np0005593232 neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2[377822]: [NOTICE]   (377845) : haproxy version is 2.8.14-c23fe91
Jan 23 05:35:59 np0005593232 neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2[377822]: [NOTICE]   (377845) : path to executable is /usr/sbin/haproxy
Jan 23 05:35:59 np0005593232 neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2[377822]: [WARNING]  (377845) : Exiting Master process...
Jan 23 05:35:59 np0005593232 neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2[377822]: [ALERT]    (377845) : Current worker (377852) exited with code 143 (Terminated)
Jan 23 05:35:59 np0005593232 neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2[377822]: [WARNING]  (377845) : All workers exited. Exiting... (0)
Jan 23 05:35:59 np0005593232 systemd[1]: libpod-4b9b9028e3ae1b343b49c48654ce140bfb956515020ab778e753880552ce4cc8.scope: Deactivated successfully.
Jan 23 05:35:59 np0005593232 podman[378209]: 2026-01-23 10:35:59.245943456 +0000 UTC m=+0.064641498 container died 4b9b9028e3ae1b343b49c48654ce140bfb956515020ab778e753880552ce4cc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 05:35:59 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4b9b9028e3ae1b343b49c48654ce140bfb956515020ab778e753880552ce4cc8-userdata-shm.mount: Deactivated successfully.
Jan 23 05:35:59 np0005593232 systemd[1]: var-lib-containers-storage-overlay-462208e503e76ce032de1df26aa91ac5201748cd608d9520ed5c8879264d62e9-merged.mount: Deactivated successfully.
Jan 23 05:35:59 np0005593232 podman[378209]: 2026-01-23 10:35:59.297586474 +0000 UTC m=+0.116284526 container cleanup 4b9b9028e3ae1b343b49c48654ce140bfb956515020ab778e753880552ce4cc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:35:59 np0005593232 systemd[1]: libpod-conmon-4b9b9028e3ae1b343b49c48654ce140bfb956515020ab778e753880552ce4cc8.scope: Deactivated successfully.
Jan 23 05:35:59 np0005593232 podman[378263]: 2026-01-23 10:35:59.391703609 +0000 UTC m=+0.060407488 container remove 4b9b9028e3ae1b343b49c48654ce140bfb956515020ab778e753880552ce4cc8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:35:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:59.400 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[917a04e4-8d98-4bfa-8dd0-406036fb3ee1]: (4, ('Fri Jan 23 10:35:59 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2 (4b9b9028e3ae1b343b49c48654ce140bfb956515020ab778e753880552ce4cc8)\n4b9b9028e3ae1b343b49c48654ce140bfb956515020ab778e753880552ce4cc8\nFri Jan 23 10:35:59 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2 (4b9b9028e3ae1b343b49c48654ce140bfb956515020ab778e753880552ce4cc8)\n4b9b9028e3ae1b343b49c48654ce140bfb956515020ab778e753880552ce4cc8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:59.402 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5b019100-8a0c-4dc4-8c82-951b36b60814]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:59.404 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7d5530f-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:35:59 np0005593232 nova_compute[250269]: 2026-01-23 10:35:59.406 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:59 np0005593232 kernel: tapd7d5530f-50: left promiscuous mode
Jan 23 05:35:59 np0005593232 nova_compute[250269]: 2026-01-23 10:35:59.423 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:35:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:59.427 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[61b7850b-bd95-45f3-a790-2f949e7f9712]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:59.444 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[73061442-f0de-4658-b61b-ec69a3ebdf2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:59.447 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[eecd6788-4e3c-4ecc-99be-749ab88a78db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:59.473 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d5f4c4bf-b407-4e8e-afcc-4e0b2ba31361]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 843099, 'reachable_time': 15088, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378276, 'error': None, 'target': 'ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:59 np0005593232 systemd[1]: run-netns-ovnmeta\x2dd7d5530f\x2d5227\x2d4f75\x2dbac0\x2d2604bb3d68e2.mount: Deactivated successfully.
Jan 23 05:35:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:59.479 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d7d5530f-5227-4f75-bac0-2604bb3d68e2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:35:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:35:59.480 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[7a9cb1a4-237e-4b26-bf45-a1c6983b99bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:35:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:35:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:35:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:35:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:35:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:35:59 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:35:59 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 05:35:59 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:35:59 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 05:35:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e362 do_prune osdmap full prune enabled
Jan 23 05:36:00 np0005593232 nova_compute[250269]: 2026-01-23 10:36:00.012 250273 DEBUG nova.compute.manager [req-c3529af2-e98c-4d23-90dd-9f0b5a6cb27d req-f38c0f90-8e5f-4d02-aa06-bfd240bb9654 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received event network-vif-unplugged-629a33c7-4917-4a44-9978-d78fecc89001 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:36:00 np0005593232 nova_compute[250269]: 2026-01-23 10:36:00.013 250273 DEBUG oslo_concurrency.lockutils [req-c3529af2-e98c-4d23-90dd-9f0b5a6cb27d req-f38c0f90-8e5f-4d02-aa06-bfd240bb9654 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:36:00 np0005593232 nova_compute[250269]: 2026-01-23 10:36:00.013 250273 DEBUG oslo_concurrency.lockutils [req-c3529af2-e98c-4d23-90dd-9f0b5a6cb27d req-f38c0f90-8e5f-4d02-aa06-bfd240bb9654 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:36:00 np0005593232 nova_compute[250269]: 2026-01-23 10:36:00.013 250273 DEBUG oslo_concurrency.lockutils [req-c3529af2-e98c-4d23-90dd-9f0b5a6cb27d req-f38c0f90-8e5f-4d02-aa06-bfd240bb9654 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:36:00 np0005593232 nova_compute[250269]: 2026-01-23 10:36:00.014 250273 DEBUG nova.compute.manager [req-c3529af2-e98c-4d23-90dd-9f0b5a6cb27d req-f38c0f90-8e5f-4d02-aa06-bfd240bb9654 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] No waiting events found dispatching network-vif-unplugged-629a33c7-4917-4a44-9978-d78fecc89001 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:36:00 np0005593232 nova_compute[250269]: 2026-01-23 10:36:00.014 250273 DEBUG nova.compute.manager [req-c3529af2-e98c-4d23-90dd-9f0b5a6cb27d req-f38c0f90-8e5f-4d02-aa06-bfd240bb9654 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received event network-vif-unplugged-629a33c7-4917-4a44-9978-d78fecc89001 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:36:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:36:00 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7bd57e18-cf9f-431c-96e9-52f732abea49 does not exist
Jan 23 05:36:00 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 2e7355a9-5561-4bc9-995c-47d20d19a5bb does not exist
Jan 23 05:36:00 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 2f5d99a9-6023-4bd1-a5cf-3d684cced8b7 does not exist
Jan 23 05:36:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:36:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:36:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:36:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:36:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:36:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:36:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e363 e363: 3 total, 3 up, 3 in
Jan 23 05:36:00 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e363: 3 total, 3 up, 3 in
Jan 23 05:36:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3245: 321 pgs: 321 active+clean; 812 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 249 KiB/s rd, 5.1 MiB/s wr, 131 op/s
Jan 23 05:36:00 np0005593232 nova_compute[250269]: 2026-01-23 10:36:00.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:36:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:36:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:00.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:36:00 np0005593232 nova_compute[250269]: 2026-01-23 10:36:00.502 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:36:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:00.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:36:00 np0005593232 podman[378416]: 2026-01-23 10:36:00.828774086 +0000 UTC m=+0.066993515 container create 43a0f1390333f1b0d601657c75d1f4a06fb8b624fd5d5679dff73c1b616a46b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_elgamal, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 23 05:36:00 np0005593232 systemd[1]: Started libpod-conmon-43a0f1390333f1b0d601657c75d1f4a06fb8b624fd5d5679dff73c1b616a46b8.scope.
Jan 23 05:36:00 np0005593232 podman[378416]: 2026-01-23 10:36:00.79831632 +0000 UTC m=+0.036535809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:36:00 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:36:00 np0005593232 podman[378416]: 2026-01-23 10:36:00.928203382 +0000 UTC m=+0.166422841 container init 43a0f1390333f1b0d601657c75d1f4a06fb8b624fd5d5679dff73c1b616a46b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_elgamal, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:36:00 np0005593232 podman[378416]: 2026-01-23 10:36:00.941057678 +0000 UTC m=+0.179277147 container start 43a0f1390333f1b0d601657c75d1f4a06fb8b624fd5d5679dff73c1b616a46b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 05:36:00 np0005593232 podman[378416]: 2026-01-23 10:36:00.944694331 +0000 UTC m=+0.182913780 container attach 43a0f1390333f1b0d601657c75d1f4a06fb8b624fd5d5679dff73c1b616a46b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:36:00 np0005593232 systemd[1]: libpod-43a0f1390333f1b0d601657c75d1f4a06fb8b624fd5d5679dff73c1b616a46b8.scope: Deactivated successfully.
Jan 23 05:36:00 np0005593232 conmon[378438]: conmon 43a0f1390333f1b0d601 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-43a0f1390333f1b0d601657c75d1f4a06fb8b624fd5d5679dff73c1b616a46b8.scope/container/memory.events
Jan 23 05:36:00 np0005593232 stoic_elgamal[378438]: 167 167
Jan 23 05:36:00 np0005593232 podman[378416]: 2026-01-23 10:36:00.953363857 +0000 UTC m=+0.191583276 container died 43a0f1390333f1b0d601657c75d1f4a06fb8b624fd5d5679dff73c1b616a46b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 05:36:00 np0005593232 podman[378430]: 2026-01-23 10:36:00.981516358 +0000 UTC m=+0.117002947 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 23 05:36:00 np0005593232 systemd[1]: var-lib-containers-storage-overlay-af51e3c72329bf792a2c56ede3a959013dfb0e56d59ecfe7196758579bbeb17e-merged.mount: Deactivated successfully.
Jan 23 05:36:00 np0005593232 podman[378416]: 2026-01-23 10:36:00.993639602 +0000 UTC m=+0.231859031 container remove 43a0f1390333f1b0d601657c75d1f4a06fb8b624fd5d5679dff73c1b616a46b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:36:00 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:36:00 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:36:00 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:36:01 np0005593232 systemd[1]: libpod-conmon-43a0f1390333f1b0d601657c75d1f4a06fb8b624fd5d5679dff73c1b616a46b8.scope: Deactivated successfully.
Jan 23 05:36:01 np0005593232 podman[378529]: 2026-01-23 10:36:01.171650562 +0000 UTC m=+0.044662431 container create 5958511f3babbae3a1c578ff20175dfc9ef92eda14f62e200c59042598d45b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dijkstra, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:36:01 np0005593232 systemd[1]: Started libpod-conmon-5958511f3babbae3a1c578ff20175dfc9ef92eda14f62e200c59042598d45b17.scope.
Jan 23 05:36:01 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:36:01 np0005593232 podman[378529]: 2026-01-23 10:36:01.153549217 +0000 UTC m=+0.026561106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:36:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d48375710d442bdcac6a7131f80a1e2a1e6d03df57ee276dfc6e0c33c2d2e90d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:36:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d48375710d442bdcac6a7131f80a1e2a1e6d03df57ee276dfc6e0c33c2d2e90d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:36:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d48375710d442bdcac6a7131f80a1e2a1e6d03df57ee276dfc6e0c33c2d2e90d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:36:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d48375710d442bdcac6a7131f80a1e2a1e6d03df57ee276dfc6e0c33c2d2e90d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:36:01 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d48375710d442bdcac6a7131f80a1e2a1e6d03df57ee276dfc6e0c33c2d2e90d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:36:01 np0005593232 podman[378529]: 2026-01-23 10:36:01.279731984 +0000 UTC m=+0.152743863 container init 5958511f3babbae3a1c578ff20175dfc9ef92eda14f62e200c59042598d45b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dijkstra, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 05:36:01 np0005593232 nova_compute[250269]: 2026-01-23 10:36:01.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:36:01 np0005593232 nova_compute[250269]: 2026-01-23 10:36:01.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:36:01 np0005593232 podman[378529]: 2026-01-23 10:36:01.304710754 +0000 UTC m=+0.177722623 container start 5958511f3babbae3a1c578ff20175dfc9ef92eda14f62e200c59042598d45b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dijkstra, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:36:01 np0005593232 podman[378529]: 2026-01-23 10:36:01.309434168 +0000 UTC m=+0.182446067 container attach 5958511f3babbae3a1c578ff20175dfc9ef92eda14f62e200c59042598d45b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dijkstra, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 05:36:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3246: 321 pgs: 321 active+clean; 816 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 6.0 MiB/s wr, 218 op/s
Jan 23 05:36:02 np0005593232 nova_compute[250269]: 2026-01-23 10:36:02.185 250273 DEBUG nova.compute.manager [req-7305aa35-4f77-47e9-8d03-ae4bcd1dab90 req-326dfdc9-cc42-4bd3-9979-bc22a08b787c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received event network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:36:02 np0005593232 nova_compute[250269]: 2026-01-23 10:36:02.186 250273 DEBUG oslo_concurrency.lockutils [req-7305aa35-4f77-47e9-8d03-ae4bcd1dab90 req-326dfdc9-cc42-4bd3-9979-bc22a08b787c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:36:02 np0005593232 nova_compute[250269]: 2026-01-23 10:36:02.186 250273 DEBUG oslo_concurrency.lockutils [req-7305aa35-4f77-47e9-8d03-ae4bcd1dab90 req-326dfdc9-cc42-4bd3-9979-bc22a08b787c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:36:02 np0005593232 nova_compute[250269]: 2026-01-23 10:36:02.186 250273 DEBUG oslo_concurrency.lockutils [req-7305aa35-4f77-47e9-8d03-ae4bcd1dab90 req-326dfdc9-cc42-4bd3-9979-bc22a08b787c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:36:02 np0005593232 nova_compute[250269]: 2026-01-23 10:36:02.186 250273 DEBUG nova.compute.manager [req-7305aa35-4f77-47e9-8d03-ae4bcd1dab90 req-326dfdc9-cc42-4bd3-9979-bc22a08b787c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] No waiting events found dispatching network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:36:02 np0005593232 nova_compute[250269]: 2026-01-23 10:36:02.187 250273 WARNING nova.compute.manager [req-7305aa35-4f77-47e9-8d03-ae4bcd1dab90 req-326dfdc9-cc42-4bd3-9979-bc22a08b787c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received unexpected event network-vif-plugged-629a33c7-4917-4a44-9978-d78fecc89001 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:36:02 np0005593232 sweet_dijkstra[378547]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:36:02 np0005593232 sweet_dijkstra[378547]: --> relative data size: 1.0
Jan 23 05:36:02 np0005593232 sweet_dijkstra[378547]: --> All data devices are unavailable
Jan 23 05:36:02 np0005593232 systemd[1]: libpod-5958511f3babbae3a1c578ff20175dfc9ef92eda14f62e200c59042598d45b17.scope: Deactivated successfully.
Jan 23 05:36:02 np0005593232 podman[378529]: 2026-01-23 10:36:02.258514654 +0000 UTC m=+1.131526523 container died 5958511f3babbae3a1c578ff20175dfc9ef92eda14f62e200c59042598d45b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dijkstra, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:36:02 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d48375710d442bdcac6a7131f80a1e2a1e6d03df57ee276dfc6e0c33c2d2e90d-merged.mount: Deactivated successfully.
Jan 23 05:36:02 np0005593232 podman[378529]: 2026-01-23 10:36:02.320103344 +0000 UTC m=+1.193115213 container remove 5958511f3babbae3a1c578ff20175dfc9ef92eda14f62e200c59042598d45b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_dijkstra, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:36:02 np0005593232 systemd[1]: libpod-conmon-5958511f3babbae3a1c578ff20175dfc9ef92eda14f62e200c59042598d45b17.scope: Deactivated successfully.
Jan 23 05:36:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:36:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:02.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:36:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:02.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:03 np0005593232 nova_compute[250269]: 2026-01-23 10:36:03.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:36:03 np0005593232 nova_compute[250269]: 2026-01-23 10:36:03.294 250273 INFO nova.virt.libvirt.driver [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Deleting instance files /var/lib/nova/instances/5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_del#033[00m
Jan 23 05:36:03 np0005593232 nova_compute[250269]: 2026-01-23 10:36:03.295 250273 INFO nova.virt.libvirt.driver [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Deletion of /var/lib/nova/instances/5d42acd2-a3c4-40d2-b1c7-0dd7920671fe_del complete#033[00m
Jan 23 05:36:03 np0005593232 podman[378715]: 2026-01-23 10:36:03.301046636 +0000 UTC m=+0.060902532 container create af47bd594c3a9bbe228b9b9170e6cd509887bfa0690f1a703a55c174f337d307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_jones, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 23 05:36:03 np0005593232 systemd[1]: Started libpod-conmon-af47bd594c3a9bbe228b9b9170e6cd509887bfa0690f1a703a55c174f337d307.scope.
Jan 23 05:36:03 np0005593232 nova_compute[250269]: 2026-01-23 10:36:03.363 250273 INFO nova.compute.manager [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Took 4.43 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:36:03 np0005593232 nova_compute[250269]: 2026-01-23 10:36:03.364 250273 DEBUG oslo.service.loopingcall [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:36:03 np0005593232 nova_compute[250269]: 2026-01-23 10:36:03.365 250273 DEBUG nova.compute.manager [-] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:36:03 np0005593232 nova_compute[250269]: 2026-01-23 10:36:03.365 250273 DEBUG nova.network.neutron [-] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:36:03 np0005593232 podman[378715]: 2026-01-23 10:36:03.273691088 +0000 UTC m=+0.033547074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:36:03 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:36:03 np0005593232 podman[378715]: 2026-01-23 10:36:03.421386536 +0000 UTC m=+0.181242452 container init af47bd594c3a9bbe228b9b9170e6cd509887bfa0690f1a703a55c174f337d307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 05:36:03 np0005593232 podman[378715]: 2026-01-23 10:36:03.437279238 +0000 UTC m=+0.197135154 container start af47bd594c3a9bbe228b9b9170e6cd509887bfa0690f1a703a55c174f337d307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_jones, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:36:03 np0005593232 podman[378715]: 2026-01-23 10:36:03.441455197 +0000 UTC m=+0.201311163 container attach af47bd594c3a9bbe228b9b9170e6cd509887bfa0690f1a703a55c174f337d307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_jones, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:36:03 np0005593232 eloquent_jones[378731]: 167 167
Jan 23 05:36:03 np0005593232 systemd[1]: libpod-af47bd594c3a9bbe228b9b9170e6cd509887bfa0690f1a703a55c174f337d307.scope: Deactivated successfully.
Jan 23 05:36:03 np0005593232 podman[378715]: 2026-01-23 10:36:03.447057976 +0000 UTC m=+0.206913892 container died af47bd594c3a9bbe228b9b9170e6cd509887bfa0690f1a703a55c174f337d307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_jones, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 05:36:03 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1b2f62b40dc9d3626e6bd2ac0e315661222de4df6b13dfe6c3fdf687081a3603-merged.mount: Deactivated successfully.
Jan 23 05:36:03 np0005593232 podman[378715]: 2026-01-23 10:36:03.496246744 +0000 UTC m=+0.256102660 container remove af47bd594c3a9bbe228b9b9170e6cd509887bfa0690f1a703a55c174f337d307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_jones, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 23 05:36:03 np0005593232 systemd[1]: libpod-conmon-af47bd594c3a9bbe228b9b9170e6cd509887bfa0690f1a703a55c174f337d307.scope: Deactivated successfully.
Jan 23 05:36:03 np0005593232 podman[378755]: 2026-01-23 10:36:03.750326916 +0000 UTC m=+0.065961496 container create 22167cbf33a91d38e3f1817406a41f48c8f38d6c6cce668537334ca0d5bd2fd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:36:03 np0005593232 systemd[1]: Started libpod-conmon-22167cbf33a91d38e3f1817406a41f48c8f38d6c6cce668537334ca0d5bd2fd1.scope.
Jan 23 05:36:03 np0005593232 podman[378755]: 2026-01-23 10:36:03.727494877 +0000 UTC m=+0.043129507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:36:03 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:36:03 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd8f7c468db1ca90a5ab20076edba5c46aa4311ccd959c98b58f7f34e865f87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:36:03 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd8f7c468db1ca90a5ab20076edba5c46aa4311ccd959c98b58f7f34e865f87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:36:03 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd8f7c468db1ca90a5ab20076edba5c46aa4311ccd959c98b58f7f34e865f87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:36:03 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd8f7c468db1ca90a5ab20076edba5c46aa4311ccd959c98b58f7f34e865f87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:36:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:36:03 np0005593232 podman[378755]: 2026-01-23 10:36:03.852405957 +0000 UTC m=+0.168040557 container init 22167cbf33a91d38e3f1817406a41f48c8f38d6c6cce668537334ca0d5bd2fd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:36:03 np0005593232 podman[378755]: 2026-01-23 10:36:03.865551911 +0000 UTC m=+0.181186491 container start 22167cbf33a91d38e3f1817406a41f48c8f38d6c6cce668537334ca0d5bd2fd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_northcutt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 23 05:36:03 np0005593232 podman[378755]: 2026-01-23 10:36:03.869279807 +0000 UTC m=+0.184914397 container attach 22167cbf33a91d38e3f1817406a41f48c8f38d6c6cce668537334ca0d5bd2fd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 05:36:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3247: 321 pgs: 321 active+clean; 822 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.4 MiB/s rd, 9.9 MiB/s wr, 314 op/s
Jan 23 05:36:04 np0005593232 nova_compute[250269]: 2026-01-23 10:36:04.209 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:36:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:04.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]: {
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:    "0": [
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:        {
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:            "devices": [
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:                "/dev/loop3"
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:            ],
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:            "lv_name": "ceph_lv0",
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:            "lv_size": "7511998464",
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:            "name": "ceph_lv0",
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:            "tags": {
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:                "ceph.cluster_name": "ceph",
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:                "ceph.crush_device_class": "",
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:                "ceph.encrypted": "0",
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:                "ceph.osd_id": "0",
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:                "ceph.type": "block",
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:                "ceph.vdo": "0"
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:            },
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:            "type": "block",
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:            "vg_name": "ceph_vg0"
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:        }
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]:    ]
Jan 23 05:36:04 np0005593232 kind_northcutt[378771]: }
Jan 23 05:36:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:04.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:04 np0005593232 systemd[1]: libpod-22167cbf33a91d38e3f1817406a41f48c8f38d6c6cce668537334ca0d5bd2fd1.scope: Deactivated successfully.
Jan 23 05:36:04 np0005593232 podman[378782]: 2026-01-23 10:36:04.769121604 +0000 UTC m=+0.033540995 container died 22167cbf33a91d38e3f1817406a41f48c8f38d6c6cce668537334ca0d5bd2fd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_northcutt, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 05:36:04 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4cd8f7c468db1ca90a5ab20076edba5c46aa4311ccd959c98b58f7f34e865f87-merged.mount: Deactivated successfully.
Jan 23 05:36:05 np0005593232 podman[378782]: 2026-01-23 10:36:05.066365393 +0000 UTC m=+0.330784724 container remove 22167cbf33a91d38e3f1817406a41f48c8f38d6c6cce668537334ca0d5bd2fd1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_northcutt, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 05:36:05 np0005593232 systemd[1]: libpod-conmon-22167cbf33a91d38e3f1817406a41f48c8f38d6c6cce668537334ca0d5bd2fd1.scope: Deactivated successfully.
Jan 23 05:36:05 np0005593232 podman[378781]: 2026-01-23 10:36:05.123756314 +0000 UTC m=+0.358080219 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:36:05 np0005593232 nova_compute[250269]: 2026-01-23 10:36:05.506 250273 DEBUG nova.network.neutron [-] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:36:05 np0005593232 nova_compute[250269]: 2026-01-23 10:36:05.508 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:05 np0005593232 nova_compute[250269]: 2026-01-23 10:36:05.531 250273 INFO nova.compute.manager [-] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Took 2.17 seconds to deallocate network for instance.#033[00m
Jan 23 05:36:05 np0005593232 nova_compute[250269]: 2026-01-23 10:36:05.629 250273 DEBUG oslo_concurrency.lockutils [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:36:05 np0005593232 nova_compute[250269]: 2026-01-23 10:36:05.630 250273 DEBUG oslo_concurrency.lockutils [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:36:05 np0005593232 nova_compute[250269]: 2026-01-23 10:36:05.633 250273 DEBUG nova.compute.manager [req-65a677b8-3be4-4ea4-883d-91a7648036f5 req-839357d6-072b-4ef2-816a-beacc3ebb2b8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Received event network-vif-deleted-629a33c7-4917-4a44-9978-d78fecc89001 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:36:05 np0005593232 nova_compute[250269]: 2026-01-23 10:36:05.742 250273 DEBUG oslo_concurrency.processutils [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:36:05 np0005593232 nova_compute[250269]: 2026-01-23 10:36:05.821 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:05.821 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=71, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=70) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:36:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:05.824 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:36:05 np0005593232 podman[378957]: 2026-01-23 10:36:05.956921714 +0000 UTC m=+0.073907110 container create 57d7f0c74aa4be4ef6c41a1d8e45eba310b24417b81bca95a593c270cfc11db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hodgkin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 23 05:36:06 np0005593232 systemd[1]: Started libpod-conmon-57d7f0c74aa4be4ef6c41a1d8e45eba310b24417b81bca95a593c270cfc11db0.scope.
Jan 23 05:36:06 np0005593232 podman[378957]: 2026-01-23 10:36:05.921719394 +0000 UTC m=+0.038704830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:36:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:36:06 np0005593232 podman[378957]: 2026-01-23 10:36:06.072447728 +0000 UTC m=+0.189433164 container init 57d7f0c74aa4be4ef6c41a1d8e45eba310b24417b81bca95a593c270cfc11db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hodgkin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 23 05:36:06 np0005593232 podman[378957]: 2026-01-23 10:36:06.086501538 +0000 UTC m=+0.203486924 container start 57d7f0c74aa4be4ef6c41a1d8e45eba310b24417b81bca95a593c270cfc11db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:36:06 np0005593232 podman[378957]: 2026-01-23 10:36:06.091238782 +0000 UTC m=+0.208224218 container attach 57d7f0c74aa4be4ef6c41a1d8e45eba310b24417b81bca95a593c270cfc11db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 05:36:06 np0005593232 naughty_hodgkin[378993]: 167 167
Jan 23 05:36:06 np0005593232 systemd[1]: libpod-57d7f0c74aa4be4ef6c41a1d8e45eba310b24417b81bca95a593c270cfc11db0.scope: Deactivated successfully.
Jan 23 05:36:06 np0005593232 podman[378957]: 2026-01-23 10:36:06.099275211 +0000 UTC m=+0.216260617 container died 57d7f0c74aa4be4ef6c41a1d8e45eba310b24417b81bca95a593c270cfc11db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hodgkin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:36:06 np0005593232 systemd[1]: var-lib-containers-storage-overlay-aaf532c2362f094609df04e53b6c70faafe3fad8ab85ba4cf6ecbc522469a446-merged.mount: Deactivated successfully.
Jan 23 05:36:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3248: 321 pgs: 321 active+clean; 822 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 6.3 MiB/s rd, 6.0 MiB/s wr, 211 op/s
Jan 23 05:36:06 np0005593232 podman[378957]: 2026-01-23 10:36:06.158558076 +0000 UTC m=+0.275543452 container remove 57d7f0c74aa4be4ef6c41a1d8e45eba310b24417b81bca95a593c270cfc11db0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_hodgkin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 05:36:06 np0005593232 systemd[1]: libpod-conmon-57d7f0c74aa4be4ef6c41a1d8e45eba310b24417b81bca95a593c270cfc11db0.scope: Deactivated successfully.
Jan 23 05:36:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:36:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/534524620' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:36:06 np0005593232 nova_compute[250269]: 2026-01-23 10:36:06.316 250273 DEBUG oslo_concurrency.processutils [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:36:06 np0005593232 nova_compute[250269]: 2026-01-23 10:36:06.331 250273 DEBUG nova.compute.provider_tree [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:36:06 np0005593232 nova_compute[250269]: 2026-01-23 10:36:06.373 250273 DEBUG nova.scheduler.client.report [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:36:06 np0005593232 nova_compute[250269]: 2026-01-23 10:36:06.411 250273 DEBUG oslo_concurrency.lockutils [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:36:06 np0005593232 nova_compute[250269]: 2026-01-23 10:36:06.460 250273 INFO nova.scheduler.client.report [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Deleted allocations for instance 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe#033[00m
Jan 23 05:36:06 np0005593232 podman[379019]: 2026-01-23 10:36:06.485243331 +0000 UTC m=+0.120518176 container create df590d55a745dc847c4cbbcafca9e590fdf98dd68785f6cfc852236f7fb26354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_colden, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 23 05:36:06 np0005593232 podman[379019]: 2026-01-23 10:36:06.410698062 +0000 UTC m=+0.045972997 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:36:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:36:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:06.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:36:06 np0005593232 systemd[1]: Started libpod-conmon-df590d55a745dc847c4cbbcafca9e590fdf98dd68785f6cfc852236f7fb26354.scope.
Jan 23 05:36:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:36:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16fac38edc1e8ed4b7d6630b9608016f1c91216087a751a2078802ed288127eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:36:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16fac38edc1e8ed4b7d6630b9608016f1c91216087a751a2078802ed288127eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:36:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16fac38edc1e8ed4b7d6630b9608016f1c91216087a751a2078802ed288127eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:36:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16fac38edc1e8ed4b7d6630b9608016f1c91216087a751a2078802ed288127eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:36:06 np0005593232 podman[379019]: 2026-01-23 10:36:06.646759962 +0000 UTC m=+0.282034847 container init df590d55a745dc847c4cbbcafca9e590fdf98dd68785f6cfc852236f7fb26354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_colden, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 05:36:06 np0005593232 podman[379019]: 2026-01-23 10:36:06.659966457 +0000 UTC m=+0.295241302 container start df590d55a745dc847c4cbbcafca9e590fdf98dd68785f6cfc852236f7fb26354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_colden, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 23 05:36:06 np0005593232 podman[379019]: 2026-01-23 10:36:06.663827537 +0000 UTC m=+0.299102462 container attach df590d55a745dc847c4cbbcafca9e590fdf98dd68785f6cfc852236f7fb26354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:36:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:36:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:06.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:36:06 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:06.825 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '71'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:36:06 np0005593232 nova_compute[250269]: 2026-01-23 10:36:06.985 250273 DEBUG oslo_concurrency.lockutils [None req-2def721b-5e10-4e28-a306-44091961de9a e1629a4b14764dddaabcadd16f3e1c1c 815b71acf60d4ed8933ebd05228fa0c0 - - default default] Lock "5d42acd2-a3c4-40d2-b1c7-0dd7920671fe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:36:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:36:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:36:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:36:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:36:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:36:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:36:07 np0005593232 tender_colden[379035]: {
Jan 23 05:36:07 np0005593232 tender_colden[379035]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:36:07 np0005593232 tender_colden[379035]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:36:07 np0005593232 tender_colden[379035]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:36:07 np0005593232 tender_colden[379035]:        "osd_id": 0,
Jan 23 05:36:07 np0005593232 tender_colden[379035]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:36:07 np0005593232 tender_colden[379035]:        "type": "bluestore"
Jan 23 05:36:07 np0005593232 tender_colden[379035]:    }
Jan 23 05:36:07 np0005593232 tender_colden[379035]: }
Jan 23 05:36:07 np0005593232 systemd[1]: libpod-df590d55a745dc847c4cbbcafca9e590fdf98dd68785f6cfc852236f7fb26354.scope: Deactivated successfully.
Jan 23 05:36:07 np0005593232 podman[379019]: 2026-01-23 10:36:07.69861705 +0000 UTC m=+1.333891935 container died df590d55a745dc847c4cbbcafca9e590fdf98dd68785f6cfc852236f7fb26354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_colden, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:36:07 np0005593232 systemd[1]: libpod-df590d55a745dc847c4cbbcafca9e590fdf98dd68785f6cfc852236f7fb26354.scope: Consumed 1.041s CPU time.
Jan 23 05:36:07 np0005593232 systemd[1]: var-lib-containers-storage-overlay-16fac38edc1e8ed4b7d6630b9608016f1c91216087a751a2078802ed288127eb-merged.mount: Deactivated successfully.
Jan 23 05:36:07 np0005593232 podman[379019]: 2026-01-23 10:36:07.78728592 +0000 UTC m=+1.422560795 container remove df590d55a745dc847c4cbbcafca9e590fdf98dd68785f6cfc852236f7fb26354 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:36:07 np0005593232 systemd[1]: libpod-conmon-df590d55a745dc847c4cbbcafca9e590fdf98dd68785f6cfc852236f7fb26354.scope: Deactivated successfully.
Jan 23 05:36:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:36:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:36:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:36:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:36:07 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7bad4a87-2226-40d1-baac-29c3dac0e1ae does not exist
Jan 23 05:36:07 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 900cdc53-3f91-45f6-a705-e0364b1abc58 does not exist
Jan 23 05:36:07 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 56b7b2ef-5afb-4267-86b8-2f85aa663cce does not exist
Jan 23 05:36:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3249: 321 pgs: 321 active+clean; 743 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.4 MiB/s rd, 5.2 MiB/s wr, 235 op/s
Jan 23 05:36:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:08.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:08.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:36:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e363 do_prune osdmap full prune enabled
Jan 23 05:36:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e364 e364: 3 total, 3 up, 3 in
Jan 23 05:36:08 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e364: 3 total, 3 up, 3 in
Jan 23 05:36:09 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:36:09 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:36:09 np0005593232 nova_compute[250269]: 2026-01-23 10:36:09.213 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:09 np0005593232 nova_compute[250269]: 2026-01-23 10:36:09.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:36:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3251: 321 pgs: 321 active+clean; 743 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.1 MiB/s rd, 4.8 MiB/s wr, 218 op/s
Jan 23 05:36:10 np0005593232 nova_compute[250269]: 2026-01-23 10:36:10.506 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:10.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:10.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:11 np0005593232 nova_compute[250269]: 2026-01-23 10:36:11.128 250273 DEBUG oslo_concurrency.lockutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Acquiring lock "9f8603a4-2f28-496b-91b4-e30cc94657b4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:36:11 np0005593232 nova_compute[250269]: 2026-01-23 10:36:11.129 250273 DEBUG oslo_concurrency.lockutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lock "9f8603a4-2f28-496b-91b4-e30cc94657b4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:36:11 np0005593232 nova_compute[250269]: 2026-01-23 10:36:11.187 250273 DEBUG nova.compute.manager [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:36:11 np0005593232 nova_compute[250269]: 2026-01-23 10:36:11.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:36:11 np0005593232 nova_compute[250269]: 2026-01-23 10:36:11.338 250273 DEBUG oslo_concurrency.lockutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:36:11 np0005593232 nova_compute[250269]: 2026-01-23 10:36:11.338 250273 DEBUG oslo_concurrency.lockutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:36:11 np0005593232 nova_compute[250269]: 2026-01-23 10:36:11.349 250273 DEBUG nova.virt.hardware [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:36:11 np0005593232 nova_compute[250269]: 2026-01-23 10:36:11.350 250273 INFO nova.compute.claims [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:36:11 np0005593232 nova_compute[250269]: 2026-01-23 10:36:11.595 250273 DEBUG oslo_concurrency.processutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:36:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:36:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/165391708' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:36:12 np0005593232 nova_compute[250269]: 2026-01-23 10:36:12.043 250273 DEBUG oslo_concurrency.processutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:36:12 np0005593232 nova_compute[250269]: 2026-01-23 10:36:12.050 250273 DEBUG nova.compute.provider_tree [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:36:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3252: 321 pgs: 321 active+clean; 743 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.6 MiB/s wr, 156 op/s
Jan 23 05:36:12 np0005593232 nova_compute[250269]: 2026-01-23 10:36:12.179 250273 DEBUG nova.scheduler.client.report [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:36:12 np0005593232 nova_compute[250269]: 2026-01-23 10:36:12.226 250273 DEBUG oslo_concurrency.lockutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.887s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:36:12 np0005593232 nova_compute[250269]: 2026-01-23 10:36:12.227 250273 DEBUG nova.compute.manager [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:36:12 np0005593232 nova_compute[250269]: 2026-01-23 10:36:12.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:36:12 np0005593232 nova_compute[250269]: 2026-01-23 10:36:12.296 250273 DEBUG nova.compute.manager [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:36:12 np0005593232 nova_compute[250269]: 2026-01-23 10:36:12.297 250273 DEBUG nova.network.neutron [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:36:12 np0005593232 nova_compute[250269]: 2026-01-23 10:36:12.332 250273 INFO nova.virt.libvirt.driver [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:36:12 np0005593232 nova_compute[250269]: 2026-01-23 10:36:12.366 250273 DEBUG nova.compute.manager [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:36:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:36:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:12.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:36:12 np0005593232 nova_compute[250269]: 2026-01-23 10:36:12.530 250273 DEBUG nova.compute.manager [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 05:36:12 np0005593232 nova_compute[250269]: 2026-01-23 10:36:12.531 250273 DEBUG nova.virt.libvirt.driver [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:36:12 np0005593232 nova_compute[250269]: 2026-01-23 10:36:12.532 250273 INFO nova.virt.libvirt.driver [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Creating image(s)#033[00m
Jan 23 05:36:12 np0005593232 nova_compute[250269]: 2026-01-23 10:36:12.562 250273 DEBUG nova.storage.rbd_utils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] rbd image 9f8603a4-2f28-496b-91b4-e30cc94657b4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:36:12 np0005593232 nova_compute[250269]: 2026-01-23 10:36:12.591 250273 DEBUG nova.storage.rbd_utils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] rbd image 9f8603a4-2f28-496b-91b4-e30cc94657b4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:36:12 np0005593232 nova_compute[250269]: 2026-01-23 10:36:12.617 250273 DEBUG nova.storage.rbd_utils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] rbd image 9f8603a4-2f28-496b-91b4-e30cc94657b4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:36:12 np0005593232 nova_compute[250269]: 2026-01-23 10:36:12.621 250273 DEBUG oslo_concurrency.lockutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Acquiring lock "837b7ab03ad4fb4610105a045b2601effdd3b429" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:36:12 np0005593232 nova_compute[250269]: 2026-01-23 10:36:12.622 250273 DEBUG oslo_concurrency.lockutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lock "837b7ab03ad4fb4610105a045b2601effdd3b429" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:36:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:36:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:12.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:36:12 np0005593232 nova_compute[250269]: 2026-01-23 10:36:12.870 250273 DEBUG nova.policy [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9a8ce4c88e8b46c5806ada5e3a6cdbbf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd59dad6496894352a2f4c7eb66ca1914', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:36:13.068333) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164573068537, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 2186, "num_deletes": 254, "total_data_size": 3870452, "memory_usage": 3934360, "flush_reason": "Manual Compaction"}
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Jan 23 05:36:13 np0005593232 nova_compute[250269]: 2026-01-23 10:36:13.091 250273 DEBUG nova.virt.libvirt.imagebackend [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Image locations are: [{'url': 'rbd://e1533653-0a5a-584c-b34b-8689f0d32e77/images/60e9525a-9f0e-4c80-9c46-e7936c55e48b/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://e1533653-0a5a-584c-b34b-8689f0d32e77/images/60e9525a-9f0e-4c80-9c46-e7936c55e48b/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164573117665, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 3784504, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70191, "largest_seqno": 72376, "table_properties": {"data_size": 3774477, "index_size": 6392, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20991, "raw_average_key_size": 20, "raw_value_size": 3754358, "raw_average_value_size": 3709, "num_data_blocks": 276, "num_entries": 1012, "num_filter_entries": 1012, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769164376, "oldest_key_time": 1769164376, "file_creation_time": 1769164573, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 49765 microseconds, and 16988 cpu microseconds.
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:36:13.118090) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 3784504 bytes OK
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:36:13.118247) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:36:13.120871) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:36:13.120929) EVENT_LOG_v1 {"time_micros": 1769164573120917, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:36:13.120966) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 3861316, prev total WAL file size 3861316, number of live WAL files 2.
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:36:13.124078) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(3695KB)], [164(9679KB)]
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164573124396, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 13696573, "oldest_snapshot_seqno": -1}
Jan 23 05:36:13 np0005593232 nova_compute[250269]: 2026-01-23 10:36:13.169 250273 DEBUG nova.virt.libvirt.imagebackend [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Selected location: {'url': 'rbd://e1533653-0a5a-584c-b34b-8689f0d32e77/images/60e9525a-9f0e-4c80-9c46-e7936c55e48b/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Jan 23 05:36:13 np0005593232 nova_compute[250269]: 2026-01-23 10:36:13.169 250273 DEBUG nova.storage.rbd_utils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] cloning images/60e9525a-9f0e-4c80-9c46-e7936c55e48b@snap to None/9f8603a4-2f28-496b-91b4-e30cc94657b4_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 9727 keys, 11804431 bytes, temperature: kUnknown
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164573243431, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 11804431, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11743089, "index_size": 35949, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24325, "raw_key_size": 256328, "raw_average_key_size": 26, "raw_value_size": 11574086, "raw_average_value_size": 1189, "num_data_blocks": 1369, "num_entries": 9727, "num_filter_entries": 9727, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769164573, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:36:13.243721) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 11804431 bytes
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:36:13.285559) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.0 rd, 99.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.6, 9.5 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 10254, records dropped: 527 output_compression: NoCompression
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:36:13.285600) EVENT_LOG_v1 {"time_micros": 1769164573285581, "job": 102, "event": "compaction_finished", "compaction_time_micros": 119129, "compaction_time_cpu_micros": 57114, "output_level": 6, "num_output_files": 1, "total_output_size": 11804431, "num_input_records": 10254, "num_output_records": 9727, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164573286590, "job": 102, "event": "table_file_deletion", "file_number": 166}
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164573288668, "job": 102, "event": "table_file_deletion", "file_number": 164}
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:36:13.123957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:36:13.288708) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:36:13.288713) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:36:13.288716) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:36:13.288717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:36:13.288720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:36:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:36:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3253: 321 pgs: 321 active+clean; 742 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 15 KiB/s wr, 73 op/s
Jan 23 05:36:14 np0005593232 nova_compute[250269]: 2026-01-23 10:36:14.173 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769164559.1716297, 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:36:14 np0005593232 nova_compute[250269]: 2026-01-23 10:36:14.174 250273 INFO nova.compute.manager [-] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:36:14 np0005593232 nova_compute[250269]: 2026-01-23 10:36:14.209 250273 DEBUG nova.compute.manager [None req-0bf4e7a1-0e34-45ad-8882-f3c4a48253c6 - - - - - -] [instance: 5d42acd2-a3c4-40d2-b1c7-0dd7920671fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:36:14 np0005593232 nova_compute[250269]: 2026-01-23 10:36:14.217 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:14 np0005593232 nova_compute[250269]: 2026-01-23 10:36:14.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:36:14 np0005593232 nova_compute[250269]: 2026-01-23 10:36:14.310 250273 DEBUG nova.network.neutron [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Successfully created port: a0c65008-fe55-4ef8-95db-ee5d2b022c41 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:36:14 np0005593232 nova_compute[250269]: 2026-01-23 10:36:14.320 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:36:14 np0005593232 nova_compute[250269]: 2026-01-23 10:36:14.321 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:36:14 np0005593232 nova_compute[250269]: 2026-01-23 10:36:14.321 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:36:14 np0005593232 nova_compute[250269]: 2026-01-23 10:36:14.321 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:36:14 np0005593232 nova_compute[250269]: 2026-01-23 10:36:14.322 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:36:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:14.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:14 np0005593232 nova_compute[250269]: 2026-01-23 10:36:14.559 250273 DEBUG oslo_concurrency.lockutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lock "837b7ab03ad4fb4610105a045b2601effdd3b429" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:36:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:14.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:14 np0005593232 nova_compute[250269]: 2026-01-23 10:36:14.692 250273 DEBUG nova.objects.instance [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lazy-loading 'migration_context' on Instance uuid 9f8603a4-2f28-496b-91b4-e30cc94657b4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:36:14 np0005593232 nova_compute[250269]: 2026-01-23 10:36:14.752 250273 DEBUG nova.virt.libvirt.driver [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 05:36:14 np0005593232 nova_compute[250269]: 2026-01-23 10:36:14.752 250273 DEBUG nova.virt.libvirt.driver [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Ensure instance console log exists: /var/lib/nova/instances/9f8603a4-2f28-496b-91b4-e30cc94657b4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:36:14 np0005593232 nova_compute[250269]: 2026-01-23 10:36:14.753 250273 DEBUG oslo_concurrency.lockutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:36:14 np0005593232 nova_compute[250269]: 2026-01-23 10:36:14.753 250273 DEBUG oslo_concurrency.lockutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:36:14 np0005593232 nova_compute[250269]: 2026-01-23 10:36:14.753 250273 DEBUG oslo_concurrency.lockutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:36:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:36:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1622217632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:36:14 np0005593232 nova_compute[250269]: 2026-01-23 10:36:14.804 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:36:14 np0005593232 nova_compute[250269]: 2026-01-23 10:36:14.976 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:36:14 np0005593232 nova_compute[250269]: 2026-01-23 10:36:14.977 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4076MB free_disk=20.783443450927734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:36:14 np0005593232 nova_compute[250269]: 2026-01-23 10:36:14.977 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:36:14 np0005593232 nova_compute[250269]: 2026-01-23 10:36:14.978 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:36:15 np0005593232 nova_compute[250269]: 2026-01-23 10:36:15.186 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 9f8603a4-2f28-496b-91b4-e30cc94657b4 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:36:15 np0005593232 nova_compute[250269]: 2026-01-23 10:36:15.186 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:36:15 np0005593232 nova_compute[250269]: 2026-01-23 10:36:15.187 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:36:15 np0005593232 nova_compute[250269]: 2026-01-23 10:36:15.307 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing inventories for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 23 05:36:15 np0005593232 nova_compute[250269]: 2026-01-23 10:36:15.456 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating ProviderTree inventory for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 23 05:36:15 np0005593232 nova_compute[250269]: 2026-01-23 10:36:15.457 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 05:36:15 np0005593232 nova_compute[250269]: 2026-01-23 10:36:15.494 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing aggregate associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 23 05:36:15 np0005593232 nova_compute[250269]: 2026-01-23 10:36:15.509 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:15 np0005593232 nova_compute[250269]: 2026-01-23 10:36:15.525 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing trait associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 23 05:36:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e364 do_prune osdmap full prune enabled
Jan 23 05:36:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e365 e365: 3 total, 3 up, 3 in
Jan 23 05:36:15 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e365: 3 total, 3 up, 3 in
Jan 23 05:36:15 np0005593232 nova_compute[250269]: 2026-01-23 10:36:15.571 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:36:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:36:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3303724014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:36:16 np0005593232 nova_compute[250269]: 2026-01-23 10:36:16.097 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:36:16 np0005593232 nova_compute[250269]: 2026-01-23 10:36:16.103 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:36:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3255: 321 pgs: 321 active+clean; 742 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 16 KiB/s wr, 30 op/s
Jan 23 05:36:16 np0005593232 nova_compute[250269]: 2026-01-23 10:36:16.161 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:36:16 np0005593232 nova_compute[250269]: 2026-01-23 10:36:16.209 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:36:16 np0005593232 nova_compute[250269]: 2026-01-23 10:36:16.210 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:36:16 np0005593232 nova_compute[250269]: 2026-01-23 10:36:16.283 250273 DEBUG nova.network.neutron [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Successfully updated port: a0c65008-fe55-4ef8-95db-ee5d2b022c41 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:36:16 np0005593232 nova_compute[250269]: 2026-01-23 10:36:16.313 250273 DEBUG oslo_concurrency.lockutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Acquiring lock "refresh_cache-9f8603a4-2f28-496b-91b4-e30cc94657b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:36:16 np0005593232 nova_compute[250269]: 2026-01-23 10:36:16.314 250273 DEBUG oslo_concurrency.lockutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Acquired lock "refresh_cache-9f8603a4-2f28-496b-91b4-e30cc94657b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:36:16 np0005593232 nova_compute[250269]: 2026-01-23 10:36:16.314 250273 DEBUG nova.network.neutron [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:36:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:16.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:16 np0005593232 nova_compute[250269]: 2026-01-23 10:36:16.631 250273 DEBUG nova.network.neutron [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:36:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:36:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:16.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:36:17 np0005593232 nova_compute[250269]: 2026-01-23 10:36:17.211 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:36:17 np0005593232 nova_compute[250269]: 2026-01-23 10:36:17.211 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:36:17 np0005593232 nova_compute[250269]: 2026-01-23 10:36:17.212 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:36:17 np0005593232 nova_compute[250269]: 2026-01-23 10:36:17.251 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 23 05:36:17 np0005593232 nova_compute[250269]: 2026-01-23 10:36:17.252 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:36:17 np0005593232 nova_compute[250269]: 2026-01-23 10:36:17.765 250273 DEBUG nova.compute.manager [req-ac94e2b7-caa1-40a1-b80f-3d2921c18af0 req-73212bda-844e-4474-8877-e7d8366b1738 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Received event network-changed-a0c65008-fe55-4ef8-95db-ee5d2b022c41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:36:17 np0005593232 nova_compute[250269]: 2026-01-23 10:36:17.766 250273 DEBUG nova.compute.manager [req-ac94e2b7-caa1-40a1-b80f-3d2921c18af0 req-73212bda-844e-4474-8877-e7d8366b1738 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Refreshing instance network info cache due to event network-changed-a0c65008-fe55-4ef8-95db-ee5d2b022c41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:36:17 np0005593232 nova_compute[250269]: 2026-01-23 10:36:17.766 250273 DEBUG oslo_concurrency.lockutils [req-ac94e2b7-caa1-40a1-b80f-3d2921c18af0 req-73212bda-844e-4474-8877-e7d8366b1738 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-9f8603a4-2f28-496b-91b4-e30cc94657b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:36:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3256: 321 pgs: 321 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 92 KiB/s rd, 18 KiB/s wr, 128 op/s
Jan 23 05:36:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:18.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:36:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:18.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:36:18 np0005593232 nova_compute[250269]: 2026-01-23 10:36:18.811 250273 DEBUG nova.network.neutron [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Updating instance_info_cache with network_info: [{"id": "a0c65008-fe55-4ef8-95db-ee5d2b022c41", "address": "fa:16:3e:d3:48:be", "network": {"id": "f2cadf27-eae4-40fd-be37-b605e054ab76", "bridge": "br-int", "label": "tempest-TestStampPattern-1832792840-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d59dad6496894352a2f4c7eb66ca1914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0c65008-fe", "ovs_interfaceid": "a0c65008-fe55-4ef8-95db-ee5d2b022c41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:36:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:36:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e365 do_prune osdmap full prune enabled
Jan 23 05:36:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e366 e366: 3 total, 3 up, 3 in
Jan 23 05:36:18 np0005593232 nova_compute[250269]: 2026-01-23 10:36:18.864 250273 DEBUG oslo_concurrency.lockutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Releasing lock "refresh_cache-9f8603a4-2f28-496b-91b4-e30cc94657b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:36:18 np0005593232 nova_compute[250269]: 2026-01-23 10:36:18.865 250273 DEBUG nova.compute.manager [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Instance network_info: |[{"id": "a0c65008-fe55-4ef8-95db-ee5d2b022c41", "address": "fa:16:3e:d3:48:be", "network": {"id": "f2cadf27-eae4-40fd-be37-b605e054ab76", "bridge": "br-int", "label": "tempest-TestStampPattern-1832792840-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d59dad6496894352a2f4c7eb66ca1914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0c65008-fe", "ovs_interfaceid": "a0c65008-fe55-4ef8-95db-ee5d2b022c41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:36:18 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e366: 3 total, 3 up, 3 in
Jan 23 05:36:18 np0005593232 nova_compute[250269]: 2026-01-23 10:36:18.866 250273 DEBUG oslo_concurrency.lockutils [req-ac94e2b7-caa1-40a1-b80f-3d2921c18af0 req-73212bda-844e-4474-8877-e7d8366b1738 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-9f8603a4-2f28-496b-91b4-e30cc94657b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:36:18 np0005593232 nova_compute[250269]: 2026-01-23 10:36:18.866 250273 DEBUG nova.network.neutron [req-ac94e2b7-caa1-40a1-b80f-3d2921c18af0 req-73212bda-844e-4474-8877-e7d8366b1738 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Refreshing network info cache for port a0c65008-fe55-4ef8-95db-ee5d2b022c41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:36:18 np0005593232 nova_compute[250269]: 2026-01-23 10:36:18.869 250273 DEBUG nova.virt.libvirt.driver [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Start _get_guest_xml network_info=[{"id": "a0c65008-fe55-4ef8-95db-ee5d2b022c41", "address": "fa:16:3e:d3:48:be", "network": {"id": "f2cadf27-eae4-40fd-be37-b605e054ab76", "bridge": "br-int", "label": "tempest-TestStampPattern-1832792840-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d59dad6496894352a2f4c7eb66ca1914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0c65008-fe", "ovs_interfaceid": "a0c65008-fe55-4ef8-95db-ee5d2b022c41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-23T10:35:53Z,direct_url=<?>,disk_format='raw',id=60e9525a-9f0e-4c80-9c46-e7936c55e48b,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-447735217',owner='d59dad6496894352a2f4c7eb66ca1914',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-23T10:36:03Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '60e9525a-9f0e-4c80-9c46-e7936c55e48b'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:36:18 np0005593232 nova_compute[250269]: 2026-01-23 10:36:18.873 250273 WARNING nova.virt.libvirt.driver [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:36:18 np0005593232 nova_compute[250269]: 2026-01-23 10:36:18.879 250273 DEBUG nova.virt.libvirt.host [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:36:18 np0005593232 nova_compute[250269]: 2026-01-23 10:36:18.880 250273 DEBUG nova.virt.libvirt.host [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:36:18 np0005593232 nova_compute[250269]: 2026-01-23 10:36:18.883 250273 DEBUG nova.virt.libvirt.host [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:36:18 np0005593232 nova_compute[250269]: 2026-01-23 10:36:18.884 250273 DEBUG nova.virt.libvirt.host [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:36:18 np0005593232 nova_compute[250269]: 2026-01-23 10:36:18.886 250273 DEBUG nova.virt.libvirt.driver [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:36:18 np0005593232 nova_compute[250269]: 2026-01-23 10:36:18.886 250273 DEBUG nova.virt.hardware [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-23T10:35:53Z,direct_url=<?>,disk_format='raw',id=60e9525a-9f0e-4c80-9c46-e7936c55e48b,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-447735217',owner='d59dad6496894352a2f4c7eb66ca1914',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-23T10:36:03Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:36:18 np0005593232 nova_compute[250269]: 2026-01-23 10:36:18.887 250273 DEBUG nova.virt.hardware [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:36:18 np0005593232 nova_compute[250269]: 2026-01-23 10:36:18.887 250273 DEBUG nova.virt.hardware [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:36:18 np0005593232 nova_compute[250269]: 2026-01-23 10:36:18.887 250273 DEBUG nova.virt.hardware [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:36:18 np0005593232 nova_compute[250269]: 2026-01-23 10:36:18.888 250273 DEBUG nova.virt.hardware [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:36:18 np0005593232 nova_compute[250269]: 2026-01-23 10:36:18.888 250273 DEBUG nova.virt.hardware [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:36:18 np0005593232 nova_compute[250269]: 2026-01-23 10:36:18.888 250273 DEBUG nova.virt.hardware [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:36:18 np0005593232 nova_compute[250269]: 2026-01-23 10:36:18.888 250273 DEBUG nova.virt.hardware [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:36:18 np0005593232 nova_compute[250269]: 2026-01-23 10:36:18.889 250273 DEBUG nova.virt.hardware [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:36:18 np0005593232 nova_compute[250269]: 2026-01-23 10:36:18.889 250273 DEBUG nova.virt.hardware [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:36:18 np0005593232 nova_compute[250269]: 2026-01-23 10:36:18.889 250273 DEBUG nova.virt.hardware [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:36:18 np0005593232 nova_compute[250269]: 2026-01-23 10:36:18.894 250273 DEBUG oslo_concurrency.processutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:36:19 np0005593232 nova_compute[250269]: 2026-01-23 10:36:19.220 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:36:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1632166235' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:36:19 np0005593232 nova_compute[250269]: 2026-01-23 10:36:19.417 250273 DEBUG oslo_concurrency.processutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:36:19 np0005593232 nova_compute[250269]: 2026-01-23 10:36:19.454 250273 DEBUG nova.storage.rbd_utils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] rbd image 9f8603a4-2f28-496b-91b4-e30cc94657b4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:36:19 np0005593232 nova_compute[250269]: 2026-01-23 10:36:19.459 250273 DEBUG oslo_concurrency.processutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:36:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:36:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1717345741' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.014 250273 DEBUG oslo_concurrency.processutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.019 250273 DEBUG nova.virt.libvirt.vif [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:36:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1699138195',display_name='tempest-TestStampPattern-server-1699138195',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1699138195',id=188,image_ref='60e9525a-9f0e-4c80-9c46-e7936c55e48b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMrKK+vqQ2ONAoFKX7V4eVrHBpyCPyjGn5U244sG4513gIb+5QaK2mU3GvydCfCOzo9xS+SqUIELsowqSaXGJbd+N0J3WtlcZAfr/OV3xzB4Bu/L3WF2HV34qxyNgfmi9Q==',key_name='tempest-TestStampPattern-1711959098',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d59dad6496894352a2f4c7eb66ca1914',ramdisk_id='',reservation_id='r-rfkoiaj3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='6673e062-5d99-4c31-a3e0-673f55438d6e',image_min_disk='1',image_min_ram='0',image_owner_id='d59dad6496894352a2f4c7eb66ca1914',image_owner_project_name='tempest-TestStampPattern-1763690147',image_owner_user_name='tempest-TestStampPattern-1763690147-project-member',image_user_id='9a8ce4c88e8b46c5806ada5e3a6cdbbf',network_allocated='True',owner_project_name='tempest-TestStampPattern-1763690147',owner_user_name='tempest-TestStampPattern-1763690147-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:36:12Z,user_data=None,user_id='9a8ce4c88e8b46c5806ada5e3a6cdbbf',uuid=9f8603a4-2f28-496b-91b4-e30cc94657b4,vcpu_
model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0c65008-fe55-4ef8-95db-ee5d2b022c41", "address": "fa:16:3e:d3:48:be", "network": {"id": "f2cadf27-eae4-40fd-be37-b605e054ab76", "bridge": "br-int", "label": "tempest-TestStampPattern-1832792840-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d59dad6496894352a2f4c7eb66ca1914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0c65008-fe", "ovs_interfaceid": "a0c65008-fe55-4ef8-95db-ee5d2b022c41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.020 250273 DEBUG nova.network.os_vif_util [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Converting VIF {"id": "a0c65008-fe55-4ef8-95db-ee5d2b022c41", "address": "fa:16:3e:d3:48:be", "network": {"id": "f2cadf27-eae4-40fd-be37-b605e054ab76", "bridge": "br-int", "label": "tempest-TestStampPattern-1832792840-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d59dad6496894352a2f4c7eb66ca1914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0c65008-fe", "ovs_interfaceid": "a0c65008-fe55-4ef8-95db-ee5d2b022c41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.022 250273 DEBUG nova.network.os_vif_util [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:48:be,bridge_name='br-int',has_traffic_filtering=True,id=a0c65008-fe55-4ef8-95db-ee5d2b022c41,network=Network(f2cadf27-eae4-40fd-be37-b605e054ab76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0c65008-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.024 250273 DEBUG nova.objects.instance [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f8603a4-2f28-496b-91b4-e30cc94657b4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.059 250273 DEBUG nova.virt.libvirt.driver [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:36:20 np0005593232 nova_compute[250269]:  <uuid>9f8603a4-2f28-496b-91b4-e30cc94657b4</uuid>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:  <name>instance-000000bc</name>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <nova:name>tempest-TestStampPattern-server-1699138195</nova:name>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:36:18</nova:creationTime>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:36:20 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:        <nova:user uuid="9a8ce4c88e8b46c5806ada5e3a6cdbbf">tempest-TestStampPattern-1763690147-project-member</nova:user>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:        <nova:project uuid="d59dad6496894352a2f4c7eb66ca1914">tempest-TestStampPattern-1763690147</nova:project>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="60e9525a-9f0e-4c80-9c46-e7936c55e48b"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:        <nova:port uuid="a0c65008-fe55-4ef8-95db-ee5d2b022c41">
Jan 23 05:36:20 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <entry name="serial">9f8603a4-2f28-496b-91b4-e30cc94657b4</entry>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <entry name="uuid">9f8603a4-2f28-496b-91b4-e30cc94657b4</entry>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/9f8603a4-2f28-496b-91b4-e30cc94657b4_disk">
Jan 23 05:36:20 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:36:20 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/9f8603a4-2f28-496b-91b4-e30cc94657b4_disk.config">
Jan 23 05:36:20 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:36:20 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:d3:48:be"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <target dev="tapa0c65008-fe"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/9f8603a4-2f28-496b-91b4-e30cc94657b4/console.log" append="off"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <input type="keyboard" bus="usb"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:36:20 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:36:20 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:36:20 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:36:20 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.060 250273 DEBUG nova.compute.manager [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Preparing to wait for external event network-vif-plugged-a0c65008-fe55-4ef8-95db-ee5d2b022c41 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.061 250273 DEBUG oslo_concurrency.lockutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Acquiring lock "9f8603a4-2f28-496b-91b4-e30cc94657b4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.061 250273 DEBUG oslo_concurrency.lockutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lock "9f8603a4-2f28-496b-91b4-e30cc94657b4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.061 250273 DEBUG oslo_concurrency.lockutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lock "9f8603a4-2f28-496b-91b4-e30cc94657b4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.062 250273 DEBUG nova.virt.libvirt.vif [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:36:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1699138195',display_name='tempest-TestStampPattern-server-1699138195',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1699138195',id=188,image_ref='60e9525a-9f0e-4c80-9c46-e7936c55e48b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMrKK+vqQ2ONAoFKX7V4eVrHBpyCPyjGn5U244sG4513gIb+5QaK2mU3GvydCfCOzo9xS+SqUIELsowqSaXGJbd+N0J3WtlcZAfr/OV3xzB4Bu/L3WF2HV34qxyNgfmi9Q==',key_name='tempest-TestStampPattern-1711959098',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d59dad6496894352a2f4c7eb66ca1914',ramdisk_id='',reservation_id='r-rfkoiaj3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='6673e062-5d99-4c31-a3e0-673f55438d6e',image_min_disk='1',image_min_ram='0',image_owner_id='d59dad6496894352a2f4c7eb66ca1914',image_owner_project_name='tempest-TestStampPattern-1763690147',image_owner_user_name='tempest-TestStampPattern-1763690147-project-member',image_user_id='9a8ce4c88e8b46c5806ada5e3a6cdbbf',network_allocated='True',owner_project_name='tempest-TestStampPattern-1763690147',owner_user_name='tempest-TestStampPattern-1763690147-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:36:12Z,user_data=None,user_id='9a8ce4c88e8b46c5806ada5e3a6cdbbf',uuid=9f8603a4-2f28-496b-91b4-e30cc946
57b4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0c65008-fe55-4ef8-95db-ee5d2b022c41", "address": "fa:16:3e:d3:48:be", "network": {"id": "f2cadf27-eae4-40fd-be37-b605e054ab76", "bridge": "br-int", "label": "tempest-TestStampPattern-1832792840-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d59dad6496894352a2f4c7eb66ca1914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0c65008-fe", "ovs_interfaceid": "a0c65008-fe55-4ef8-95db-ee5d2b022c41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.063 250273 DEBUG nova.network.os_vif_util [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Converting VIF {"id": "a0c65008-fe55-4ef8-95db-ee5d2b022c41", "address": "fa:16:3e:d3:48:be", "network": {"id": "f2cadf27-eae4-40fd-be37-b605e054ab76", "bridge": "br-int", "label": "tempest-TestStampPattern-1832792840-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d59dad6496894352a2f4c7eb66ca1914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0c65008-fe", "ovs_interfaceid": "a0c65008-fe55-4ef8-95db-ee5d2b022c41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.063 250273 DEBUG nova.network.os_vif_util [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:48:be,bridge_name='br-int',has_traffic_filtering=True,id=a0c65008-fe55-4ef8-95db-ee5d2b022c41,network=Network(f2cadf27-eae4-40fd-be37-b605e054ab76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0c65008-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.064 250273 DEBUG os_vif [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:48:be,bridge_name='br-int',has_traffic_filtering=True,id=a0c65008-fe55-4ef8-95db-ee5d2b022c41,network=Network(f2cadf27-eae4-40fd-be37-b605e054ab76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0c65008-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.064 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.065 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.066 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.070 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.070 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0c65008-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.071 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa0c65008-fe, col_values=(('external_ids', {'iface-id': 'a0c65008-fe55-4ef8-95db-ee5d2b022c41', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d3:48:be', 'vm-uuid': '9f8603a4-2f28-496b-91b4-e30cc94657b4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.073 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:20 np0005593232 NetworkManager[49057]: <info>  [1769164580.0747] manager: (tapa0c65008-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/343)
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.075 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.082 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.083 250273 INFO os_vif [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:48:be,bridge_name='br-int',has_traffic_filtering=True,id=a0c65008-fe55-4ef8-95db-ee5d2b022c41,network=Network(f2cadf27-eae4-40fd-be37-b605e054ab76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0c65008-fe')#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.142 250273 DEBUG nova.virt.libvirt.driver [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.142 250273 DEBUG nova.virt.libvirt.driver [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.143 250273 DEBUG nova.virt.libvirt.driver [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] No VIF found with MAC fa:16:3e:d3:48:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.145 250273 INFO nova.virt.libvirt.driver [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Using config drive#033[00m
Jan 23 05:36:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3258: 321 pgs: 321 active+clean; 617 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 101 KiB/s rd, 21 KiB/s wr, 141 op/s
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.177 250273 DEBUG nova.storage.rbd_utils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] rbd image 9f8603a4-2f28-496b-91b4-e30cc94657b4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:36:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:20.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:20 np0005593232 nova_compute[250269]: 2026-01-23 10:36:20.571 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:20.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:21 np0005593232 nova_compute[250269]: 2026-01-23 10:36:21.059 250273 INFO nova.virt.libvirt.driver [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Creating config drive at /var/lib/nova/instances/9f8603a4-2f28-496b-91b4-e30cc94657b4/disk.config#033[00m
Jan 23 05:36:21 np0005593232 nova_compute[250269]: 2026-01-23 10:36:21.065 250273 DEBUG oslo_concurrency.processutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9f8603a4-2f28-496b-91b4-e30cc94657b4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl5bi3xw2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:36:21 np0005593232 nova_compute[250269]: 2026-01-23 10:36:21.219 250273 DEBUG oslo_concurrency.processutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9f8603a4-2f28-496b-91b4-e30cc94657b4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl5bi3xw2" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:36:21 np0005593232 nova_compute[250269]: 2026-01-23 10:36:21.894 250273 DEBUG nova.storage.rbd_utils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] rbd image 9f8603a4-2f28-496b-91b4-e30cc94657b4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:36:21 np0005593232 nova_compute[250269]: 2026-01-23 10:36:21.901 250273 DEBUG oslo_concurrency.processutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9f8603a4-2f28-496b-91b4-e30cc94657b4/disk.config 9f8603a4-2f28-496b-91b4-e30cc94657b4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:36:21 np0005593232 nova_compute[250269]: 2026-01-23 10:36:21.954 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:36:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3259: 321 pgs: 321 active+clean; 592 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 96 KiB/s rd, 5.0 KiB/s wr, 130 op/s
Jan 23 05:36:22 np0005593232 nova_compute[250269]: 2026-01-23 10:36:22.190 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:22 np0005593232 nova_compute[250269]: 2026-01-23 10:36:22.250 250273 DEBUG nova.network.neutron [req-ac94e2b7-caa1-40a1-b80f-3d2921c18af0 req-73212bda-844e-4474-8877-e7d8366b1738 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Updated VIF entry in instance network info cache for port a0c65008-fe55-4ef8-95db-ee5d2b022c41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:36:22 np0005593232 nova_compute[250269]: 2026-01-23 10:36:22.250 250273 DEBUG nova.network.neutron [req-ac94e2b7-caa1-40a1-b80f-3d2921c18af0 req-73212bda-844e-4474-8877-e7d8366b1738 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Updating instance_info_cache with network_info: [{"id": "a0c65008-fe55-4ef8-95db-ee5d2b022c41", "address": "fa:16:3e:d3:48:be", "network": {"id": "f2cadf27-eae4-40fd-be37-b605e054ab76", "bridge": "br-int", "label": "tempest-TestStampPattern-1832792840-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d59dad6496894352a2f4c7eb66ca1914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0c65008-fe", "ovs_interfaceid": "a0c65008-fe55-4ef8-95db-ee5d2b022c41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:36:22 np0005593232 nova_compute[250269]: 2026-01-23 10:36:22.265 250273 DEBUG oslo_concurrency.lockutils [req-ac94e2b7-caa1-40a1-b80f-3d2921c18af0 req-73212bda-844e-4474-8877-e7d8366b1738 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-9f8603a4-2f28-496b-91b4-e30cc94657b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:36:22 np0005593232 nova_compute[250269]: 2026-01-23 10:36:22.431 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:22.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:36:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:22.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:36:22 np0005593232 nova_compute[250269]: 2026-01-23 10:36:22.744 250273 DEBUG oslo_concurrency.processutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9f8603a4-2f28-496b-91b4-e30cc94657b4/disk.config 9f8603a4-2f28-496b-91b4-e30cc94657b4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.843s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:36:22 np0005593232 nova_compute[250269]: 2026-01-23 10:36:22.744 250273 INFO nova.virt.libvirt.driver [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Deleting local config drive /var/lib/nova/instances/9f8603a4-2f28-496b-91b4-e30cc94657b4/disk.config because it was imported into RBD.#033[00m
Jan 23 05:36:22 np0005593232 kernel: tapa0c65008-fe: entered promiscuous mode
Jan 23 05:36:22 np0005593232 NetworkManager[49057]: <info>  [1769164582.8239] manager: (tapa0c65008-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/344)
Jan 23 05:36:22 np0005593232 ovn_controller[151001]: 2026-01-23T10:36:22Z|00742|binding|INFO|Claiming lport a0c65008-fe55-4ef8-95db-ee5d2b022c41 for this chassis.
Jan 23 05:36:22 np0005593232 ovn_controller[151001]: 2026-01-23T10:36:22Z|00743|binding|INFO|a0c65008-fe55-4ef8-95db-ee5d2b022c41: Claiming fa:16:3e:d3:48:be 10.100.0.7
Jan 23 05:36:22 np0005593232 nova_compute[250269]: 2026-01-23 10:36:22.825 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:22 np0005593232 nova_compute[250269]: 2026-01-23 10:36:22.833 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:22 np0005593232 NetworkManager[49057]: <info>  [1769164582.8437] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/345)
Jan 23 05:36:22 np0005593232 NetworkManager[49057]: <info>  [1769164582.8444] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/346)
Jan 23 05:36:22 np0005593232 nova_compute[250269]: 2026-01-23 10:36:22.842 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:22.847 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:48:be 10.100.0.7'], port_security=['fa:16:3e:d3:48:be 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9f8603a4-2f28-496b-91b4-e30cc94657b4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f2cadf27-eae4-40fd-be37-b605e054ab76', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd59dad6496894352a2f4c7eb66ca1914', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b37d3a3b-2932-4053-8960-53ee514541cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=abb49df5-6f5b-4d51-9b7e-cc6910f0a6bb, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=a0c65008-fe55-4ef8-95db-ee5d2b022c41) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:36:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:22.848 161902 INFO neutron.agent.ovn.metadata.agent [-] Port a0c65008-fe55-4ef8-95db-ee5d2b022c41 in datapath f2cadf27-eae4-40fd-be37-b605e054ab76 bound to our chassis#033[00m
Jan 23 05:36:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:22.850 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f2cadf27-eae4-40fd-be37-b605e054ab76#033[00m
Jan 23 05:36:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:22.869 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6f5ec0da-997d-473d-b637-01010014e97b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:36:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:22.870 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf2cadf27-e1 in ovnmeta-f2cadf27-eae4-40fd-be37-b605e054ab76 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:36:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:22.874 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf2cadf27-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:36:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:22.875 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8ebcb3c8-d7a8-459c-931a-27a35ca51be6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:36:22 np0005593232 systemd-machined[215836]: New machine qemu-84-instance-000000bc.
Jan 23 05:36:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:22.878 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f02bfb14-f000-466b-853c-b4f86f9fed4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:36:22 np0005593232 systemd[1]: Started Virtual Machine qemu-84-instance-000000bc.
Jan 23 05:36:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:22.894 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[dea289b8-7317-4e26-9fb8-ed8cacb803d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:36:22 np0005593232 systemd-udevd[379560]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:36:22 np0005593232 NetworkManager[49057]: <info>  [1769164582.9291] device (tapa0c65008-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:36:22 np0005593232 NetworkManager[49057]: <info>  [1769164582.9306] device (tapa0c65008-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:36:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:22.930 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[cfdd2364-21a6-4574-a3fa-17a705b1c80a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:36:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:22.975 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[075de553-7be3-46a4-8607-c68bfeb78e2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:36:22 np0005593232 NetworkManager[49057]: <info>  [1769164582.9878] manager: (tapf2cadf27-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/347)
Jan 23 05:36:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:22.986 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1891e364-408a-4876-b939-12dba6f0507f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:36:22 np0005593232 systemd-udevd[379565]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.035 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.039 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:23.038 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[1e444d20-25c8-4447-afb4-25b523944966]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:23.042 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[64e2e3f8-575f-4027-8795-0ba58040ba73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.044 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.066 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:23 np0005593232 NetworkManager[49057]: <info>  [1769164583.0726] device (tapf2cadf27-e0): carrier: link connected
Jan 23 05:36:23 np0005593232 ovn_controller[151001]: 2026-01-23T10:36:23Z|00744|binding|INFO|Setting lport a0c65008-fe55-4ef8-95db-ee5d2b022c41 ovn-installed in OVS
Jan 23 05:36:23 np0005593232 ovn_controller[151001]: 2026-01-23T10:36:23Z|00745|binding|INFO|Setting lport a0c65008-fe55-4ef8-95db-ee5d2b022c41 up in Southbound
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:23.078 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[74c667ed-0157-4c41-9f72-095c5e436bce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.079 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:23.105 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4a114532-68f9-42f1-b574-3a939fd87d93]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf2cadf27-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:20:5f:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 849621, 'reachable_time': 43394, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379592, 'error': None, 'target': 'ovnmeta-f2cadf27-eae4-40fd-be37-b605e054ab76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:23.128 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e4440e8d-cec1-4c8f-a038-92f86575dc5e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe20:5fa8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 849621, 'tstamp': 849621}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379607, 'error': None, 'target': 'ovnmeta-f2cadf27-eae4-40fd-be37-b605e054ab76', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:23.339 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2fbc21a7-6da8-4a37-882b-8abb912a4004]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf2cadf27-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:20:5f:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 224], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 849621, 'reachable_time': 43394, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 379611, 'error': None, 'target': 'ovnmeta-f2cadf27-eae4-40fd-be37-b605e054ab76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:23.389 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6cef216c-b866-4ba2-9a8c-b8f28171797b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:23.500 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[52018e28-63ef-4260-bc3a-a437894ed653]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:23.502 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2cadf27-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:23.502 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:23.503 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2cadf27-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.505 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:23 np0005593232 kernel: tapf2cadf27-e0: entered promiscuous mode
Jan 23 05:36:23 np0005593232 NetworkManager[49057]: <info>  [1769164583.5063] manager: (tapf2cadf27-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/348)
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:23.509 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf2cadf27-e0, col_values=(('external_ids', {'iface-id': 'f45d82fe-2ce9-4738-9a72-0ec8f8b8032e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:36:23 np0005593232 ovn_controller[151001]: 2026-01-23T10:36:23Z|00746|binding|INFO|Releasing lport f45d82fe-2ce9-4738-9a72-0ec8f8b8032e from this chassis (sb_readonly=0)
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.543 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:23.544 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f2cadf27-eae4-40fd-be37-b605e054ab76.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f2cadf27-eae4-40fd-be37-b605e054ab76.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:23.545 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[782d46c7-9773-4263-9ef8-a594142a25b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:23.546 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-f2cadf27-eae4-40fd-be37-b605e054ab76
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/f2cadf27-eae4-40fd-be37-b605e054ab76.pid.haproxy
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID f2cadf27-eae4-40fd-be37-b605e054ab76
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:36:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:23.547 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f2cadf27-eae4-40fd-be37-b605e054ab76', 'env', 'PROCESS_TAG=haproxy-f2cadf27-eae4-40fd-be37-b605e054ab76', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f2cadf27-eae4-40fd-be37-b605e054ab76.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.580 250273 DEBUG nova.compute.manager [req-53b4c14a-e121-47c4-9417-4d558a18618d req-2e66e729-2bc2-4d3e-9a56-5a6c2598b90c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Received event network-vif-plugged-a0c65008-fe55-4ef8-95db-ee5d2b022c41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.580 250273 DEBUG oslo_concurrency.lockutils [req-53b4c14a-e121-47c4-9417-4d558a18618d req-2e66e729-2bc2-4d3e-9a56-5a6c2598b90c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9f8603a4-2f28-496b-91b4-e30cc94657b4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.581 250273 DEBUG oslo_concurrency.lockutils [req-53b4c14a-e121-47c4-9417-4d558a18618d req-2e66e729-2bc2-4d3e-9a56-5a6c2598b90c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9f8603a4-2f28-496b-91b4-e30cc94657b4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.581 250273 DEBUG oslo_concurrency.lockutils [req-53b4c14a-e121-47c4-9417-4d558a18618d req-2e66e729-2bc2-4d3e-9a56-5a6c2598b90c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9f8603a4-2f28-496b-91b4-e30cc94657b4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.581 250273 DEBUG nova.compute.manager [req-53b4c14a-e121-47c4-9417-4d558a18618d req-2e66e729-2bc2-4d3e-9a56-5a6c2598b90c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Processing event network-vif-plugged-a0c65008-fe55-4ef8-95db-ee5d2b022c41 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.618 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164583.6176805, 9f8603a4-2f28-496b-91b4-e30cc94657b4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.619 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] VM Started (Lifecycle Event)#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.625 250273 DEBUG nova.compute.manager [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.630 250273 DEBUG nova.virt.libvirt.driver [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.636 250273 INFO nova.virt.libvirt.driver [-] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Instance spawned successfully.#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.636 250273 INFO nova.compute.manager [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Took 11.11 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.637 250273 DEBUG nova.compute.manager [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.642 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.657 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.722 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.723 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164583.6194117, 9f8603a4-2f28-496b-91b4-e30cc94657b4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.723 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.764 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.770 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164583.6291537, 9f8603a4-2f28-496b-91b4-e30cc94657b4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.770 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.797 250273 INFO nova.compute.manager [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Took 12.51 seconds to build instance.#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.802 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.808 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:36:23 np0005593232 nova_compute[250269]: 2026-01-23 10:36:23.838 250273 DEBUG oslo_concurrency.lockutils [None req-d8edb64d-dd6f-4f9b-90b5-a13aa75b6d54 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lock "9f8603a4-2f28-496b-91b4-e30cc94657b4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:36:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:36:24 np0005593232 podman[379667]: 2026-01-23 10:36:24.045685279 +0000 UTC m=+0.043850318 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:36:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3260: 321 pgs: 321 active+clean; 536 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 107 KiB/s rd, 6.5 KiB/s wr, 149 op/s
Jan 23 05:36:24 np0005593232 podman[379667]: 2026-01-23 10:36:24.172169544 +0000 UTC m=+0.170334523 container create 2cfe4cc42937588f1f59a80d6f5f080328b3b8b4fb741a02a37b2566a3347828 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f2cadf27-eae4-40fd-be37-b605e054ab76, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 05:36:24 np0005593232 systemd[1]: Started libpod-conmon-2cfe4cc42937588f1f59a80d6f5f080328b3b8b4fb741a02a37b2566a3347828.scope.
Jan 23 05:36:24 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:36:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1162163afb41419ac0fdae16782011381f83bcb8185857165f37764080eef324/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:36:24 np0005593232 podman[379667]: 2026-01-23 10:36:24.292059661 +0000 UTC m=+0.290224650 container init 2cfe4cc42937588f1f59a80d6f5f080328b3b8b4fb741a02a37b2566a3347828 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f2cadf27-eae4-40fd-be37-b605e054ab76, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:36:24 np0005593232 podman[379667]: 2026-01-23 10:36:24.300161302 +0000 UTC m=+0.298326281 container start 2cfe4cc42937588f1f59a80d6f5f080328b3b8b4fb741a02a37b2566a3347828 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f2cadf27-eae4-40fd-be37-b605e054ab76, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 05:36:24 np0005593232 neutron-haproxy-ovnmeta-f2cadf27-eae4-40fd-be37-b605e054ab76[379681]: [NOTICE]   (379685) : New worker (379687) forked
Jan 23 05:36:24 np0005593232 neutron-haproxy-ovnmeta-f2cadf27-eae4-40fd-be37-b605e054ab76[379681]: [NOTICE]   (379685) : Loading success.
Jan 23 05:36:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:24.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:36:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:24.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:36:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e366 do_prune osdmap full prune enabled
Jan 23 05:36:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e367 e367: 3 total, 3 up, 3 in
Jan 23 05:36:24 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e367: 3 total, 3 up, 3 in
Jan 23 05:36:25 np0005593232 nova_compute[250269]: 2026-01-23 10:36:25.075 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:25 np0005593232 nova_compute[250269]: 2026-01-23 10:36:25.576 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:25 np0005593232 nova_compute[250269]: 2026-01-23 10:36:25.755 250273 DEBUG nova.compute.manager [req-3aaad6e3-94e3-4e52-91bf-fea7ba87f9b5 req-ab8b61b4-e3ad-43d5-8a68-8d85142d4470 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Received event network-vif-plugged-a0c65008-fe55-4ef8-95db-ee5d2b022c41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:36:25 np0005593232 nova_compute[250269]: 2026-01-23 10:36:25.756 250273 DEBUG oslo_concurrency.lockutils [req-3aaad6e3-94e3-4e52-91bf-fea7ba87f9b5 req-ab8b61b4-e3ad-43d5-8a68-8d85142d4470 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9f8603a4-2f28-496b-91b4-e30cc94657b4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:36:25 np0005593232 nova_compute[250269]: 2026-01-23 10:36:25.757 250273 DEBUG oslo_concurrency.lockutils [req-3aaad6e3-94e3-4e52-91bf-fea7ba87f9b5 req-ab8b61b4-e3ad-43d5-8a68-8d85142d4470 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9f8603a4-2f28-496b-91b4-e30cc94657b4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:36:25 np0005593232 nova_compute[250269]: 2026-01-23 10:36:25.757 250273 DEBUG oslo_concurrency.lockutils [req-3aaad6e3-94e3-4e52-91bf-fea7ba87f9b5 req-ab8b61b4-e3ad-43d5-8a68-8d85142d4470 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9f8603a4-2f28-496b-91b4-e30cc94657b4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:36:25 np0005593232 nova_compute[250269]: 2026-01-23 10:36:25.757 250273 DEBUG nova.compute.manager [req-3aaad6e3-94e3-4e52-91bf-fea7ba87f9b5 req-ab8b61b4-e3ad-43d5-8a68-8d85142d4470 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] No waiting events found dispatching network-vif-plugged-a0c65008-fe55-4ef8-95db-ee5d2b022c41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:36:25 np0005593232 nova_compute[250269]: 2026-01-23 10:36:25.758 250273 WARNING nova.compute.manager [req-3aaad6e3-94e3-4e52-91bf-fea7ba87f9b5 req-ab8b61b4-e3ad-43d5-8a68-8d85142d4470 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Received unexpected event network-vif-plugged-a0c65008-fe55-4ef8-95db-ee5d2b022c41 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:36:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3262: 321 pgs: 321 active+clean; 536 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 2.0 KiB/s wr, 42 op/s
Jan 23 05:36:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:26.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:36:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:26.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:36:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3263: 321 pgs: 321 active+clean; 536 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 18 KiB/s wr, 153 op/s
Jan 23 05:36:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:36:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:28.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:36:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:28.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:36:30 np0005593232 nova_compute[250269]: 2026-01-23 10:36:30.078 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3264: 321 pgs: 321 active+clean; 536 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 142 op/s
Jan 23 05:36:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:36:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:30.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:36:30 np0005593232 nova_compute[250269]: 2026-01-23 10:36:30.579 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:36:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:30.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:36:30 np0005593232 nova_compute[250269]: 2026-01-23 10:36:30.895 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:31 np0005593232 podman[379700]: 2026-01-23 10:36:31.438597689 +0000 UTC m=+0.090917565 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 23 05:36:31 np0005593232 nova_compute[250269]: 2026-01-23 10:36:31.449 250273 DEBUG nova.compute.manager [req-4defe9cf-2013-4be0-841c-974acf840970 req-713ec302-a2af-4fcf-afce-09db8b5c9b2a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Received event network-changed-a0c65008-fe55-4ef8-95db-ee5d2b022c41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:36:31 np0005593232 nova_compute[250269]: 2026-01-23 10:36:31.449 250273 DEBUG nova.compute.manager [req-4defe9cf-2013-4be0-841c-974acf840970 req-713ec302-a2af-4fcf-afce-09db8b5c9b2a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Refreshing instance network info cache due to event network-changed-a0c65008-fe55-4ef8-95db-ee5d2b022c41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:36:31 np0005593232 nova_compute[250269]: 2026-01-23 10:36:31.450 250273 DEBUG oslo_concurrency.lockutils [req-4defe9cf-2013-4be0-841c-974acf840970 req-713ec302-a2af-4fcf-afce-09db8b5c9b2a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-9f8603a4-2f28-496b-91b4-e30cc94657b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:36:31 np0005593232 nova_compute[250269]: 2026-01-23 10:36:31.450 250273 DEBUG oslo_concurrency.lockutils [req-4defe9cf-2013-4be0-841c-974acf840970 req-713ec302-a2af-4fcf-afce-09db8b5c9b2a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-9f8603a4-2f28-496b-91b4-e30cc94657b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:36:31 np0005593232 nova_compute[250269]: 2026-01-23 10:36:31.450 250273 DEBUG nova.network.neutron [req-4defe9cf-2013-4be0-841c-974acf840970 req-713ec302-a2af-4fcf-afce-09db8b5c9b2a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Refreshing network info cache for port a0c65008-fe55-4ef8-95db-ee5d2b022c41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:36:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3265: 321 pgs: 321 active+clean; 505 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 18 KiB/s wr, 153 op/s
Jan 23 05:36:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:32.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:36:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:32.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:36:33 np0005593232 nova_compute[250269]: 2026-01-23 10:36:33.528 250273 DEBUG nova.network.neutron [req-4defe9cf-2013-4be0-841c-974acf840970 req-713ec302-a2af-4fcf-afce-09db8b5c9b2a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Updated VIF entry in instance network info cache for port a0c65008-fe55-4ef8-95db-ee5d2b022c41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:36:33 np0005593232 nova_compute[250269]: 2026-01-23 10:36:33.528 250273 DEBUG nova.network.neutron [req-4defe9cf-2013-4be0-841c-974acf840970 req-713ec302-a2af-4fcf-afce-09db8b5c9b2a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Updating instance_info_cache with network_info: [{"id": "a0c65008-fe55-4ef8-95db-ee5d2b022c41", "address": "fa:16:3e:d3:48:be", "network": {"id": "f2cadf27-eae4-40fd-be37-b605e054ab76", "bridge": "br-int", "label": "tempest-TestStampPattern-1832792840-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d59dad6496894352a2f4c7eb66ca1914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0c65008-fe", "ovs_interfaceid": "a0c65008-fe55-4ef8-95db-ee5d2b022c41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:36:33 np0005593232 nova_compute[250269]: 2026-01-23 10:36:33.548 250273 DEBUG oslo_concurrency.lockutils [req-4defe9cf-2013-4be0-841c-974acf840970 req-713ec302-a2af-4fcf-afce-09db8b5c9b2a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-9f8603a4-2f28-496b-91b4-e30cc94657b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:36:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:36:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e367 do_prune osdmap full prune enabled
Jan 23 05:36:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e368 e368: 3 total, 3 up, 3 in
Jan 23 05:36:33 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e368: 3 total, 3 up, 3 in
Jan 23 05:36:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3267: 321 pgs: 321 active+clean; 408 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 19 KiB/s wr, 162 op/s
Jan 23 05:36:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:34.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:34.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:35 np0005593232 nova_compute[250269]: 2026-01-23 10:36:35.082 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:35 np0005593232 podman[379727]: 2026-01-23 10:36:35.435622238 +0000 UTC m=+0.080049966 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:36:35 np0005593232 nova_compute[250269]: 2026-01-23 10:36:35.583 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3268: 321 pgs: 321 active+clean; 408 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 18 KiB/s wr, 150 op/s
Jan 23 05:36:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:36:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:36.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:36:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:36.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e368 do_prune osdmap full prune enabled
Jan 23 05:36:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e369 e369: 3 total, 3 up, 3 in
Jan 23 05:36:37 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e369: 3 total, 3 up, 3 in
Jan 23 05:36:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:36:37
Jan 23 05:36:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:36:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:36:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'backups', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'default.rgw.control', 'images', 'vms']
Jan 23 05:36:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:36:37 np0005593232 nova_compute[250269]: 2026-01-23 10:36:37.505 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:36:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:36:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:36:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:36:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:36:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:36:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3270: 321 pgs: 4 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 308 active+clean; 423 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 757 KiB/s wr, 126 op/s
Jan 23 05:36:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:38.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:36:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:36:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:36:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:36:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:36:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:36:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:36:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:36:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:36:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:36:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:36:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:38.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:36:38 np0005593232 ovn_controller[151001]: 2026-01-23T10:36:38Z|00094|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.7
Jan 23 05:36:38 np0005593232 ovn_controller[151001]: 2026-01-23T10:36:38Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:d3:48:be 10.100.0.7
Jan 23 05:36:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:36:40 np0005593232 nova_compute[250269]: 2026-01-23 10:36:40.087 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3271: 321 pgs: 4 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 308 active+clean; 423 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 755 KiB/s wr, 99 op/s
Jan 23 05:36:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:40.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:40 np0005593232 nova_compute[250269]: 2026-01-23 10:36:40.585 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:40.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3272: 321 pgs: 4 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 308 active+clean; 388 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 732 KiB/s wr, 86 op/s
Jan 23 05:36:42 np0005593232 ovn_controller[151001]: 2026-01-23T10:36:42Z|00096|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.7
Jan 23 05:36:42 np0005593232 ovn_controller[151001]: 2026-01-23T10:36:42Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:d3:48:be 10.100.0.7
Jan 23 05:36:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:42.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:42.653 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:36:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:42.654 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:36:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:36:42.654 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:36:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:36:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:42.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:36:43 np0005593232 ovn_controller[151001]: 2026-01-23T10:36:43Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d3:48:be 10.100.0.7
Jan 23 05:36:43 np0005593232 ovn_controller[151001]: 2026-01-23T10:36:43Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d3:48:be 10.100.0.7
Jan 23 05:36:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:36:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3273: 321 pgs: 321 active+clean; 295 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 606 KiB/s wr, 123 op/s
Jan 23 05:36:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:36:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:44.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:36:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:44.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:45 np0005593232 nova_compute[250269]: 2026-01-23 10:36:45.089 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:45 np0005593232 nova_compute[250269]: 2026-01-23 10:36:45.588 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3274: 321 pgs: 321 active+clean; 295 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.3 MiB/s rd, 606 KiB/s wr, 123 op/s
Jan 23 05:36:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:46.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:46.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:36:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:36:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:36:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:36:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002473667527452075 of space, bias 1.0, pg target 0.7421002582356225 quantized to 32 (current 32)
Jan 23 05:36:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:36:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0022695581495440166 of space, bias 1.0, pg target 0.6808674448632049 quantized to 32 (current 32)
Jan 23 05:36:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:36:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:36:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:36:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00407110058393821 of space, bias 1.0, pg target 1.221330175181463 quantized to 32 (current 32)
Jan 23 05:36:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:36:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 05:36:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:36:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:36:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:36:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 05:36:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:36:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 05:36:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:36:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:36:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:36:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 05:36:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3275: 321 pgs: 321 active+clean; 345 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.5 MiB/s wr, 138 op/s
Jan 23 05:36:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:36:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:48.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:36:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:36:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:48.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:36:48 np0005593232 ovn_controller[151001]: 2026-01-23T10:36:48Z|00747|binding|INFO|Releasing lport f45d82fe-2ce9-4738-9a72-0ec8f8b8032e from this chassis (sb_readonly=0)
Jan 23 05:36:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e369 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:36:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e369 do_prune osdmap full prune enabled
Jan 23 05:36:48 np0005593232 nova_compute[250269]: 2026-01-23 10:36:48.930 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e370 e370: 3 total, 3 up, 3 in
Jan 23 05:36:48 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e370: 3 total, 3 up, 3 in
Jan 23 05:36:50 np0005593232 nova_compute[250269]: 2026-01-23 10:36:50.092 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3277: 321 pgs: 321 active+clean; 345 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 340 KiB/s rd, 2.2 MiB/s wr, 100 op/s
Jan 23 05:36:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:50.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:50 np0005593232 nova_compute[250269]: 2026-01-23 10:36:50.590 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:36:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:50.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:36:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3278: 321 pgs: 321 active+clean; 345 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 338 KiB/s rd, 2.2 MiB/s wr, 96 op/s
Jan 23 05:36:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:36:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:52.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:36:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:36:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:52.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:36:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:36:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3279: 321 pgs: 321 active+clean; 349 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 253 KiB/s rd, 2.4 MiB/s wr, 53 op/s
Jan 23 05:36:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:36:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:54.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:36:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:36:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:54.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:36:55 np0005593232 nova_compute[250269]: 2026-01-23 10:36:55.096 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:55 np0005593232 nova_compute[250269]: 2026-01-23 10:36:55.596 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:36:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3280: 321 pgs: 321 active+clean; 349 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 253 KiB/s rd, 2.4 MiB/s wr, 53 op/s
Jan 23 05:36:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:56.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:36:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:56.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:36:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3281: 321 pgs: 321 active+clean; 349 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 254 KiB/s wr, 95 op/s
Jan 23 05:36:58 np0005593232 nova_compute[250269]: 2026-01-23 10:36:58.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:36:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:36:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:36:58.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:36:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:36:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:36:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:36:58.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:36:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:37:00 np0005593232 nova_compute[250269]: 2026-01-23 10:37:00.099 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3282: 321 pgs: 321 active+clean; 349 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.2 MiB/s rd, 226 KiB/s wr, 84 op/s
Jan 23 05:37:00 np0005593232 nova_compute[250269]: 2026-01-23 10:37:00.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:37:00 np0005593232 nova_compute[250269]: 2026-01-23 10:37:00.597 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:00.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:00.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:01 np0005593232 podman[379834]: 2026-01-23 10:37:01.677806687 +0000 UTC m=+0.091793060 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 23 05:37:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3283: 321 pgs: 321 active+clean; 359 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 669 KiB/s wr, 80 op/s
Jan 23 05:37:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:02.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:02.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:03 np0005593232 nova_compute[250269]: 2026-01-23 10:37:03.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:37:03 np0005593232 nova_compute[250269]: 2026-01-23 10:37:03.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:37:03 np0005593232 nova_compute[250269]: 2026-01-23 10:37:03.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:37:03 np0005593232 nova_compute[250269]: 2026-01-23 10:37:03.796 250273 DEBUG oslo_concurrency.lockutils [None req-1c4af3c1-c81a-449e-be34-296a5a72e3cd 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Acquiring lock "9f8603a4-2f28-496b-91b4-e30cc94657b4" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:37:03 np0005593232 nova_compute[250269]: 2026-01-23 10:37:03.797 250273 DEBUG oslo_concurrency.lockutils [None req-1c4af3c1-c81a-449e-be34-296a5a72e3cd 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lock "9f8603a4-2f28-496b-91b4-e30cc94657b4" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:37:03 np0005593232 nova_compute[250269]: 2026-01-23 10:37:03.818 250273 DEBUG nova.objects.instance [None req-1c4af3c1-c81a-449e-be34-296a5a72e3cd 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lazy-loading 'flavor' on Instance uuid 9f8603a4-2f28-496b-91b4-e30cc94657b4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:37:03 np0005593232 nova_compute[250269]: 2026-01-23 10:37:03.876 250273 DEBUG oslo_concurrency.lockutils [None req-1c4af3c1-c81a-449e-be34-296a5a72e3cd 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lock "9f8603a4-2f28-496b-91b4-e30cc94657b4" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:37:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:37:04 np0005593232 nova_compute[250269]: 2026-01-23 10:37:04.184 250273 DEBUG oslo_concurrency.lockutils [None req-1c4af3c1-c81a-449e-be34-296a5a72e3cd 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Acquiring lock "9f8603a4-2f28-496b-91b4-e30cc94657b4" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:37:04 np0005593232 nova_compute[250269]: 2026-01-23 10:37:04.184 250273 DEBUG oslo_concurrency.lockutils [None req-1c4af3c1-c81a-449e-be34-296a5a72e3cd 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lock "9f8603a4-2f28-496b-91b4-e30cc94657b4" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:37:04 np0005593232 nova_compute[250269]: 2026-01-23 10:37:04.185 250273 INFO nova.compute.manager [None req-1c4af3c1-c81a-449e-be34-296a5a72e3cd 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Attaching volume 8bbabf32-b7e7-4da1-b195-dac1294e522d to /dev/vdb
Jan 23 05:37:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3284: 321 pgs: 321 active+clean; 395 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 99 op/s
Jan 23 05:37:04 np0005593232 nova_compute[250269]: 2026-01-23 10:37:04.350 250273 DEBUG os_brick.utils [None req-1c4af3c1-c81a-449e-be34-296a5a72e3cd 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Jan 23 05:37:04 np0005593232 nova_compute[250269]: 2026-01-23 10:37:04.353 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:37:04 np0005593232 nova_compute[250269]: 2026-01-23 10:37:04.375 265031 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:37:04 np0005593232 nova_compute[250269]: 2026-01-23 10:37:04.375 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[c6c7db50-07e0-4d82-82b8-2e4977175a20]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:37:04 np0005593232 nova_compute[250269]: 2026-01-23 10:37:04.377 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:37:04 np0005593232 nova_compute[250269]: 2026-01-23 10:37:04.391 265031 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:37:04 np0005593232 nova_compute[250269]: 2026-01-23 10:37:04.392 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[f7455e61-e7ac-4004-bf81-8e9ee25b90be]: (4, ('InitiatorName=iqn.1994-05.com.redhat:c6473626b456', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:37:04 np0005593232 nova_compute[250269]: 2026-01-23 10:37:04.394 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:37:04 np0005593232 nova_compute[250269]: 2026-01-23 10:37:04.410 265031 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:37:04 np0005593232 nova_compute[250269]: 2026-01-23 10:37:04.411 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[2f3d699d-6be7-4dcd-958f-6c9f1278b3e0]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:37:04 np0005593232 nova_compute[250269]: 2026-01-23 10:37:04.412 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[15320a0d-fc43-45fc-8734-3c30849d625c]: (4, 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:37:04 np0005593232 nova_compute[250269]: 2026-01-23 10:37:04.413 250273 DEBUG oslo_concurrency.processutils [None req-1c4af3c1-c81a-449e-be34-296a5a72e3cd 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:37:04 np0005593232 nova_compute[250269]: 2026-01-23 10:37:04.460 250273 DEBUG oslo_concurrency.processutils [None req-1c4af3c1-c81a-449e-be34-296a5a72e3cd 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] CMD "nvme version" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:37:04 np0005593232 nova_compute[250269]: 2026-01-23 10:37:04.463 250273 DEBUG os_brick.initiator.connectors.lightos [None req-1c4af3c1-c81a-449e-be34-296a5a72e3cd 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Jan 23 05:37:04 np0005593232 nova_compute[250269]: 2026-01-23 10:37:04.463 250273 DEBUG os_brick.initiator.connectors.lightos [None req-1c4af3c1-c81a-449e-be34-296a5a72e3cd 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Jan 23 05:37:04 np0005593232 nova_compute[250269]: 2026-01-23 10:37:04.463 250273 DEBUG os_brick.initiator.connectors.lightos [None req-1c4af3c1-c81a-449e-be34-296a5a72e3cd 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Jan 23 05:37:04 np0005593232 nova_compute[250269]: 2026-01-23 10:37:04.464 250273 DEBUG os_brick.utils [None req-1c4af3c1-c81a-449e-be34-296a5a72e3cd 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] <== get_connector_properties: return (113ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:c6473626b456', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Jan 23 05:37:04 np0005593232 nova_compute[250269]: 2026-01-23 10:37:04.464 250273 DEBUG nova.virt.block_device [None req-1c4af3c1-c81a-449e-be34-296a5a72e3cd 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Updating existing volume attachment record: 8dda7496-20b2-4bc6-8d18-61807f6209d4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Jan 23 05:37:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:04.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:04.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:05 np0005593232 nova_compute[250269]: 2026-01-23 10:37:05.103 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:37:05 np0005593232 nova_compute[250269]: 2026-01-23 10:37:05.215 250273 DEBUG nova.objects.instance [None req-1c4af3c1-c81a-449e-be34-296a5a72e3cd 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lazy-loading 'flavor' on Instance uuid 9f8603a4-2f28-496b-91b4-e30cc94657b4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 05:37:05 np0005593232 nova_compute[250269]: 2026-01-23 10:37:05.237 250273 DEBUG nova.virt.libvirt.driver [None req-1c4af3c1-c81a-449e-be34-296a5a72e3cd 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Attempting to attach volume 8bbabf32-b7e7-4da1-b195-dac1294e522d with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Jan 23 05:37:05 np0005593232 nova_compute[250269]: 2026-01-23 10:37:05.241 250273 DEBUG nova.virt.libvirt.guest [None req-1c4af3c1-c81a-449e-be34-296a5a72e3cd 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] attach device xml: <disk type="network" device="disk">
Jan 23 05:37:05 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:37:05 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-8bbabf32-b7e7-4da1-b195-dac1294e522d">
Jan 23 05:37:05 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:37:05 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:37:05 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:37:05 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:37:05 np0005593232 nova_compute[250269]:  <auth username="openstack">
Jan 23 05:37:05 np0005593232 nova_compute[250269]:    <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:37:05 np0005593232 nova_compute[250269]:  </auth>
Jan 23 05:37:05 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 05:37:05 np0005593232 nova_compute[250269]:  <serial>8bbabf32-b7e7-4da1-b195-dac1294e522d</serial>
Jan 23 05:37:05 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:37:05 np0005593232 nova_compute[250269]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 23 05:37:05 np0005593232 nova_compute[250269]: 2026-01-23 10:37:05.418 250273 DEBUG nova.virt.libvirt.driver [None req-1c4af3c1-c81a-449e-be34-296a5a72e3cd 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 05:37:05 np0005593232 nova_compute[250269]: 2026-01-23 10:37:05.419 250273 DEBUG nova.virt.libvirt.driver [None req-1c4af3c1-c81a-449e-be34-296a5a72e3cd 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 05:37:05 np0005593232 nova_compute[250269]: 2026-01-23 10:37:05.419 250273 DEBUG nova.virt.libvirt.driver [None req-1c4af3c1-c81a-449e-be34-296a5a72e3cd 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 23 05:37:05 np0005593232 nova_compute[250269]: 2026-01-23 10:37:05.419 250273 DEBUG nova.virt.libvirt.driver [None req-1c4af3c1-c81a-449e-be34-296a5a72e3cd 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] No VIF found with MAC fa:16:3e:d3:48:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 23 05:37:05 np0005593232 nova_compute[250269]: 2026-01-23 10:37:05.582 250273 DEBUG oslo_concurrency.lockutils [None req-1c4af3c1-c81a-449e-be34-296a5a72e3cd 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lock "9f8603a4-2f28-496b-91b4-e30cc94657b4" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.398s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:37:05 np0005593232 nova_compute[250269]: 2026-01-23 10:37:05.601 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:37:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3285: 321 pgs: 321 active+clean; 395 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Jan 23 05:37:06 np0005593232 podman[379913]: 2026-01-23 10:37:06.415510429 +0000 UTC m=+0.070828685 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Jan 23 05:37:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:06.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:06.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:37:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:37:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:37:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:37:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:37:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:37:07 np0005593232 nova_compute[250269]: 2026-01-23 10:37:07.943 250273 DEBUG oslo_concurrency.lockutils [None req-8170d266-c350-4182-ba5f-99840731979c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Acquiring lock "9f8603a4-2f28-496b-91b4-e30cc94657b4" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:37:07 np0005593232 nova_compute[250269]: 2026-01-23 10:37:07.943 250273 DEBUG oslo_concurrency.lockutils [None req-8170d266-c350-4182-ba5f-99840731979c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lock "9f8603a4-2f28-496b-91b4-e30cc94657b4" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:37:07 np0005593232 nova_compute[250269]: 2026-01-23 10:37:07.961 250273 INFO nova.compute.manager [None req-8170d266-c350-4182-ba5f-99840731979c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Detaching volume 8bbabf32-b7e7-4da1-b195-dac1294e522d
Jan 23 05:37:08 np0005593232 nova_compute[250269]: 2026-01-23 10:37:08.103 250273 INFO nova.virt.block_device [None req-8170d266-c350-4182-ba5f-99840731979c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Attempting to driver detach volume 8bbabf32-b7e7-4da1-b195-dac1294e522d from mountpoint /dev/vdb
Jan 23 05:37:08 np0005593232 nova_compute[250269]: 2026-01-23 10:37:08.113 250273 DEBUG nova.virt.libvirt.driver [None req-8170d266-c350-4182-ba5f-99840731979c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Attempting to detach device vdb from instance 9f8603a4-2f28-496b-91b4-e30cc94657b4 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 23 05:37:08 np0005593232 nova_compute[250269]: 2026-01-23 10:37:08.113 250273 DEBUG nova.virt.libvirt.guest [None req-8170d266-c350-4182-ba5f-99840731979c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] detach device xml: <disk type="network" device="disk">
Jan 23 05:37:08 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:37:08 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-8bbabf32-b7e7-4da1-b195-dac1294e522d">
Jan 23 05:37:08 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:37:08 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:37:08 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:37:08 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:37:08 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 05:37:08 np0005593232 nova_compute[250269]:  <serial>8bbabf32-b7e7-4da1-b195-dac1294e522d</serial>
Jan 23 05:37:08 np0005593232 nova_compute[250269]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 23 05:37:08 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:37:08 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 23 05:37:08 np0005593232 nova_compute[250269]: 2026-01-23 10:37:08.123 250273 INFO nova.virt.libvirt.driver [None req-8170d266-c350-4182-ba5f-99840731979c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Successfully detached device vdb from instance 9f8603a4-2f28-496b-91b4-e30cc94657b4 from the persistent domain config.
Jan 23 05:37:08 np0005593232 nova_compute[250269]: 2026-01-23 10:37:08.124 250273 DEBUG nova.virt.libvirt.driver [None req-8170d266-c350-4182-ba5f-99840731979c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 9f8603a4-2f28-496b-91b4-e30cc94657b4 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 23 05:37:08 np0005593232 nova_compute[250269]: 2026-01-23 10:37:08.124 250273 DEBUG nova.virt.libvirt.guest [None req-8170d266-c350-4182-ba5f-99840731979c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] detach device xml: <disk type="network" device="disk">
Jan 23 05:37:08 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:37:08 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-8bbabf32-b7e7-4da1-b195-dac1294e522d">
Jan 23 05:37:08 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:37:08 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:37:08 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:37:08 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:37:08 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 05:37:08 np0005593232 nova_compute[250269]:  <serial>8bbabf32-b7e7-4da1-b195-dac1294e522d</serial>
Jan 23 05:37:08 np0005593232 nova_compute[250269]:  <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Jan 23 05:37:08 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:37:08 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 23 05:37:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3286: 321 pgs: 321 active+clean; 430 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 4.1 MiB/s wr, 242 op/s
Jan 23 05:37:08 np0005593232 nova_compute[250269]: 2026-01-23 10:37:08.521 250273 DEBUG nova.virt.libvirt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Received event <DeviceRemovedEvent: 1769164628.5209537, 9f8603a4-2f28-496b-91b4-e30cc94657b4 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 23 05:37:08 np0005593232 nova_compute[250269]: 2026-01-23 10:37:08.525 250273 DEBUG nova.virt.libvirt.driver [None req-8170d266-c350-4182-ba5f-99840731979c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 9f8603a4-2f28-496b-91b4-e30cc94657b4 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 23 05:37:08 np0005593232 nova_compute[250269]: 2026-01-23 10:37:08.528 250273 INFO nova.virt.libvirt.driver [None req-8170d266-c350-4182-ba5f-99840731979c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Successfully detached device vdb from instance 9f8603a4-2f28-496b-91b4-e30cc94657b4 from the live domain config.
Jan 23 05:37:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:08.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:08 np0005593232 nova_compute[250269]: 2026-01-23 10:37:08.687 250273 DEBUG nova.objects.instance [None req-8170d266-c350-4182-ba5f-99840731979c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lazy-loading 'flavor' on Instance uuid 9f8603a4-2f28-496b-91b4-e30cc94657b4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 05:37:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:37:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:08.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:37:08 np0005593232 nova_compute[250269]: 2026-01-23 10:37:08.823 250273 DEBUG oslo_concurrency.lockutils [None req-8170d266-c350-4182-ba5f-99840731979c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lock "9f8603a4-2f28-496b-91b4-e30cc94657b4" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:37:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:37:09 np0005593232 nova_compute[250269]: 2026-01-23 10:37:09.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:37:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 23 05:37:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 05:37:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:37:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:37:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:37:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:37:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:37:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:37:09 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 00a44731-fd35-4207-969b-c9e8d763b419 does not exist
Jan 23 05:37:09 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 2e965938-f664-421e-b0b9-2f0fe7b270ad does not exist
Jan 23 05:37:09 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 95e92219-ec64-45fa-804b-625eb9112b5d does not exist
Jan 23 05:37:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:37:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:37:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:37:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:37:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:37:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:37:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:09.965 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=72, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=71) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 05:37:09 np0005593232 nova_compute[250269]: 2026-01-23 10:37:09.967 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:37:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:09.968 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 05:37:10 np0005593232 nova_compute[250269]: 2026-01-23 10:37:10.105 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:37:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3287: 321 pgs: 321 active+clean; 430 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.1 MiB/s wr, 178 op/s
Jan 23 05:37:10 np0005593232 podman[380211]: 2026-01-23 10:37:10.507304641 +0000 UTC m=+0.041546212 container create c44df70e04ce57a7e412e89147912ca510ca8c938d0b7e4c737b974cc859f4e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:37:10 np0005593232 systemd[1]: Started libpod-conmon-c44df70e04ce57a7e412e89147912ca510ca8c938d0b7e4c737b974cc859f4e1.scope.
Jan 23 05:37:10 np0005593232 podman[380211]: 2026-01-23 10:37:10.486545531 +0000 UTC m=+0.020787112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:37:10 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:37:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:10.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:10 np0005593232 nova_compute[250269]: 2026-01-23 10:37:10.647 250273 DEBUG nova.compute.manager [req-02c5f545-082f-43f0-a3aa-5131f80db665 req-63c44405-2c22-45b9-a53e-3dcf36772673 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Received event network-changed-a0c65008-fe55-4ef8-95db-ee5d2b022c41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:37:10 np0005593232 nova_compute[250269]: 2026-01-23 10:37:10.649 250273 DEBUG nova.compute.manager [req-02c5f545-082f-43f0-a3aa-5131f80db665 req-63c44405-2c22-45b9-a53e-3dcf36772673 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Refreshing instance network info cache due to event network-changed-a0c65008-fe55-4ef8-95db-ee5d2b022c41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:37:10 np0005593232 nova_compute[250269]: 2026-01-23 10:37:10.649 250273 DEBUG oslo_concurrency.lockutils [req-02c5f545-082f-43f0-a3aa-5131f80db665 req-63c44405-2c22-45b9-a53e-3dcf36772673 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-9f8603a4-2f28-496b-91b4-e30cc94657b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:37:10 np0005593232 nova_compute[250269]: 2026-01-23 10:37:10.650 250273 DEBUG oslo_concurrency.lockutils [req-02c5f545-082f-43f0-a3aa-5131f80db665 req-63c44405-2c22-45b9-a53e-3dcf36772673 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-9f8603a4-2f28-496b-91b4-e30cc94657b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:37:10 np0005593232 nova_compute[250269]: 2026-01-23 10:37:10.650 250273 DEBUG nova.network.neutron [req-02c5f545-082f-43f0-a3aa-5131f80db665 req-63c44405-2c22-45b9-a53e-3dcf36772673 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Refreshing network info cache for port a0c65008-fe55-4ef8-95db-ee5d2b022c41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:37:10 np0005593232 nova_compute[250269]: 2026-01-23 10:37:10.659 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:10 np0005593232 podman[380211]: 2026-01-23 10:37:10.660014442 +0000 UTC m=+0.194256043 container init c44df70e04ce57a7e412e89147912ca510ca8c938d0b7e4c737b974cc859f4e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_noyce, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:37:10 np0005593232 podman[380211]: 2026-01-23 10:37:10.668364419 +0000 UTC m=+0.202605980 container start c44df70e04ce57a7e412e89147912ca510ca8c938d0b7e4c737b974cc859f4e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_noyce, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 05:37:10 np0005593232 podman[380211]: 2026-01-23 10:37:10.672888268 +0000 UTC m=+0.207129869 container attach c44df70e04ce57a7e412e89147912ca510ca8c938d0b7e4c737b974cc859f4e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 23 05:37:10 np0005593232 distracted_noyce[380228]: 167 167
Jan 23 05:37:10 np0005593232 systemd[1]: libpod-c44df70e04ce57a7e412e89147912ca510ca8c938d0b7e4c737b974cc859f4e1.scope: Deactivated successfully.
Jan 23 05:37:10 np0005593232 conmon[380228]: conmon c44df70e04ce57a7e412 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c44df70e04ce57a7e412e89147912ca510ca8c938d0b7e4c737b974cc859f4e1.scope/container/memory.events
Jan 23 05:37:10 np0005593232 podman[380211]: 2026-01-23 10:37:10.677827958 +0000 UTC m=+0.212069529 container died c44df70e04ce57a7e412e89147912ca510ca8c938d0b7e4c737b974cc859f4e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_noyce, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:37:10 np0005593232 systemd[1]: var-lib-containers-storage-overlay-feb80a2bee1ab448cc1cbcbeef6c7d82999f4f6b4cbc692fb0aa33d034e0f5ce-merged.mount: Deactivated successfully.
Jan 23 05:37:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 05:37:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:37:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:37:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:37:10 np0005593232 podman[380211]: 2026-01-23 10:37:10.718972657 +0000 UTC m=+0.253214218 container remove c44df70e04ce57a7e412e89147912ca510ca8c938d0b7e4c737b974cc859f4e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:37:10 np0005593232 systemd[1]: libpod-conmon-c44df70e04ce57a7e412e89147912ca510ca8c938d0b7e4c737b974cc859f4e1.scope: Deactivated successfully.
Jan 23 05:37:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:10.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:10 np0005593232 nova_compute[250269]: 2026-01-23 10:37:10.801 250273 DEBUG oslo_concurrency.lockutils [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Acquiring lock "9f8603a4-2f28-496b-91b4-e30cc94657b4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:37:10 np0005593232 nova_compute[250269]: 2026-01-23 10:37:10.802 250273 DEBUG oslo_concurrency.lockutils [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lock "9f8603a4-2f28-496b-91b4-e30cc94657b4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:37:10 np0005593232 nova_compute[250269]: 2026-01-23 10:37:10.802 250273 DEBUG oslo_concurrency.lockutils [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Acquiring lock "9f8603a4-2f28-496b-91b4-e30cc94657b4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:37:10 np0005593232 nova_compute[250269]: 2026-01-23 10:37:10.803 250273 DEBUG oslo_concurrency.lockutils [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lock "9f8603a4-2f28-496b-91b4-e30cc94657b4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:37:10 np0005593232 nova_compute[250269]: 2026-01-23 10:37:10.803 250273 DEBUG oslo_concurrency.lockutils [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lock "9f8603a4-2f28-496b-91b4-e30cc94657b4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:37:10 np0005593232 nova_compute[250269]: 2026-01-23 10:37:10.804 250273 INFO nova.compute.manager [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Terminating instance#033[00m
Jan 23 05:37:10 np0005593232 nova_compute[250269]: 2026-01-23 10:37:10.806 250273 DEBUG nova.compute.manager [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:37:10 np0005593232 podman[380252]: 2026-01-23 10:37:10.868703873 +0000 UTC m=+0.024013703 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:37:11 np0005593232 kernel: tapa0c65008-fe (unregistering): left promiscuous mode
Jan 23 05:37:11 np0005593232 NetworkManager[49057]: <info>  [1769164631.0770] device (tapa0c65008-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:37:11 np0005593232 podman[380252]: 2026-01-23 10:37:11.080852423 +0000 UTC m=+0.236162233 container create 616db4f9a85a5c00030561fdae4121baf6a7749fa7c9b42c6cc4fb91d3344919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Jan 23 05:37:11 np0005593232 ovn_controller[151001]: 2026-01-23T10:37:11Z|00748|binding|INFO|Releasing lport a0c65008-fe55-4ef8-95db-ee5d2b022c41 from this chassis (sb_readonly=0)
Jan 23 05:37:11 np0005593232 ovn_controller[151001]: 2026-01-23T10:37:11Z|00749|binding|INFO|Setting lport a0c65008-fe55-4ef8-95db-ee5d2b022c41 down in Southbound
Jan 23 05:37:11 np0005593232 nova_compute[250269]: 2026-01-23 10:37:11.094 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:11 np0005593232 ovn_controller[151001]: 2026-01-23T10:37:11Z|00750|binding|INFO|Removing iface tapa0c65008-fe ovn-installed in OVS
Jan 23 05:37:11 np0005593232 nova_compute[250269]: 2026-01-23 10:37:11.097 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:11 np0005593232 nova_compute[250269]: 2026-01-23 10:37:11.122 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:11.121 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:48:be 10.100.0.7'], port_security=['fa:16:3e:d3:48:be 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9f8603a4-2f28-496b-91b4-e30cc94657b4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f2cadf27-eae4-40fd-be37-b605e054ab76', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd59dad6496894352a2f4c7eb66ca1914', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b37d3a3b-2932-4053-8960-53ee514541cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=abb49df5-6f5b-4d51-9b7e-cc6910f0a6bb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=a0c65008-fe55-4ef8-95db-ee5d2b022c41) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:37:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:11.123 161902 INFO neutron.agent.ovn.metadata.agent [-] Port a0c65008-fe55-4ef8-95db-ee5d2b022c41 in datapath f2cadf27-eae4-40fd-be37-b605e054ab76 unbound from our chassis#033[00m
Jan 23 05:37:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:11.125 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f2cadf27-eae4-40fd-be37-b605e054ab76, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:37:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:11.128 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a0630769-8144-464c-a362-aa9389ff5bc8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:11.129 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f2cadf27-eae4-40fd-be37-b605e054ab76 namespace which is not needed anymore#033[00m
Jan 23 05:37:11 np0005593232 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000bc.scope: Deactivated successfully.
Jan 23 05:37:11 np0005593232 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d000000bc.scope: Consumed 17.655s CPU time.
Jan 23 05:37:11 np0005593232 systemd-machined[215836]: Machine qemu-84-instance-000000bc terminated.
Jan 23 05:37:11 np0005593232 nova_compute[250269]: 2026-01-23 10:37:11.245 250273 INFO nova.virt.libvirt.driver [-] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Instance destroyed successfully.#033[00m
Jan 23 05:37:11 np0005593232 nova_compute[250269]: 2026-01-23 10:37:11.246 250273 DEBUG nova.objects.instance [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lazy-loading 'resources' on Instance uuid 9f8603a4-2f28-496b-91b4-e30cc94657b4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:37:11 np0005593232 nova_compute[250269]: 2026-01-23 10:37:11.263 250273 DEBUG nova.virt.libvirt.vif [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:36:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-1699138195',display_name='tempest-TestStampPattern-server-1699138195',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1699138195',id=188,image_ref='60e9525a-9f0e-4c80-9c46-e7936c55e48b',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMrKK+vqQ2ONAoFKX7V4eVrHBpyCPyjGn5U244sG4513gIb+5QaK2mU3GvydCfCOzo9xS+SqUIELsowqSaXGJbd+N0J3WtlcZAfr/OV3xzB4Bu/L3WF2HV34qxyNgfmi9Q==',key_name='tempest-TestStampPattern-1711959098',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:36:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d59dad6496894352a2f4c7eb66ca1914',ramdisk_id='',reservation_id='r-rfkoiaj3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='6673e062-5d99-4c31-a3e0-673f55438d6e',image_min_disk='1',image_min_ram='0',image_owner_id='d59dad6496894352a2f4c7eb66ca1914',image_owner_project_name='tempest-TestStampPattern-1763690147',image_owner_user_name='tempest-TestStampPattern-1763690147-project-member',image_user_id='9a8ce4c88e8b46c5806ada5e3a6cdbbf',owner_project_name='tempest-TestStampPattern-1763690147',owner_user_name='tempest-TestStampPattern-1763690147-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:36:23Z,user_data=None,user_id='9a8ce4c88e8b46c5806ada5e3a6cdbbf',uuid=9f8603a4-2f28-496b-91b4-e30cc94657b4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state
='active') vif={"id": "a0c65008-fe55-4ef8-95db-ee5d2b022c41", "address": "fa:16:3e:d3:48:be", "network": {"id": "f2cadf27-eae4-40fd-be37-b605e054ab76", "bridge": "br-int", "label": "tempest-TestStampPattern-1832792840-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d59dad6496894352a2f4c7eb66ca1914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0c65008-fe", "ovs_interfaceid": "a0c65008-fe55-4ef8-95db-ee5d2b022c41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:37:11 np0005593232 nova_compute[250269]: 2026-01-23 10:37:11.263 250273 DEBUG nova.network.os_vif_util [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Converting VIF {"id": "a0c65008-fe55-4ef8-95db-ee5d2b022c41", "address": "fa:16:3e:d3:48:be", "network": {"id": "f2cadf27-eae4-40fd-be37-b605e054ab76", "bridge": "br-int", "label": "tempest-TestStampPattern-1832792840-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d59dad6496894352a2f4c7eb66ca1914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0c65008-fe", "ovs_interfaceid": "a0c65008-fe55-4ef8-95db-ee5d2b022c41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:37:11 np0005593232 nova_compute[250269]: 2026-01-23 10:37:11.264 250273 DEBUG nova.network.os_vif_util [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d3:48:be,bridge_name='br-int',has_traffic_filtering=True,id=a0c65008-fe55-4ef8-95db-ee5d2b022c41,network=Network(f2cadf27-eae4-40fd-be37-b605e054ab76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0c65008-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:37:11 np0005593232 nova_compute[250269]: 2026-01-23 10:37:11.265 250273 DEBUG os_vif [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:48:be,bridge_name='br-int',has_traffic_filtering=True,id=a0c65008-fe55-4ef8-95db-ee5d2b022c41,network=Network(f2cadf27-eae4-40fd-be37-b605e054ab76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0c65008-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:37:11 np0005593232 nova_compute[250269]: 2026-01-23 10:37:11.266 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:11 np0005593232 nova_compute[250269]: 2026-01-23 10:37:11.267 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0c65008-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:37:11 np0005593232 nova_compute[250269]: 2026-01-23 10:37:11.268 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:11 np0005593232 nova_compute[250269]: 2026-01-23 10:37:11.269 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:11 np0005593232 nova_compute[250269]: 2026-01-23 10:37:11.272 250273 INFO os_vif [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:48:be,bridge_name='br-int',has_traffic_filtering=True,id=a0c65008-fe55-4ef8-95db-ee5d2b022c41,network=Network(f2cadf27-eae4-40fd-be37-b605e054ab76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0c65008-fe')#033[00m
Jan 23 05:37:11 np0005593232 systemd[1]: Started libpod-conmon-616db4f9a85a5c00030561fdae4121baf6a7749fa7c9b42c6cc4fb91d3344919.scope.
Jan 23 05:37:11 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:37:11 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b80bf5cb9225688d661c138a615df9bac9bda79dd5e3df3652b43c9269c80014/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:37:11 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b80bf5cb9225688d661c138a615df9bac9bda79dd5e3df3652b43c9269c80014/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:37:11 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b80bf5cb9225688d661c138a615df9bac9bda79dd5e3df3652b43c9269c80014/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:37:11 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b80bf5cb9225688d661c138a615df9bac9bda79dd5e3df3652b43c9269c80014/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:37:11 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b80bf5cb9225688d661c138a615df9bac9bda79dd5e3df3652b43c9269c80014/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:37:11 np0005593232 neutron-haproxy-ovnmeta-f2cadf27-eae4-40fd-be37-b605e054ab76[379681]: [NOTICE]   (379685) : haproxy version is 2.8.14-c23fe91
Jan 23 05:37:11 np0005593232 neutron-haproxy-ovnmeta-f2cadf27-eae4-40fd-be37-b605e054ab76[379681]: [NOTICE]   (379685) : path to executable is /usr/sbin/haproxy
Jan 23 05:37:11 np0005593232 neutron-haproxy-ovnmeta-f2cadf27-eae4-40fd-be37-b605e054ab76[379681]: [WARNING]  (379685) : Exiting Master process...
Jan 23 05:37:11 np0005593232 neutron-haproxy-ovnmeta-f2cadf27-eae4-40fd-be37-b605e054ab76[379681]: [ALERT]    (379685) : Current worker (379687) exited with code 143 (Terminated)
Jan 23 05:37:11 np0005593232 neutron-haproxy-ovnmeta-f2cadf27-eae4-40fd-be37-b605e054ab76[379681]: [WARNING]  (379685) : All workers exited. Exiting... (0)
Jan 23 05:37:11 np0005593232 systemd[1]: libpod-2cfe4cc42937588f1f59a80d6f5f080328b3b8b4fb741a02a37b2566a3347828.scope: Deactivated successfully.
Jan 23 05:37:11 np0005593232 podman[380303]: 2026-01-23 10:37:11.405355397 +0000 UTC m=+0.116943375 container died 2cfe4cc42937588f1f59a80d6f5f080328b3b8b4fb741a02a37b2566a3347828 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f2cadf27-eae4-40fd-be37-b605e054ab76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 23 05:37:11 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2cfe4cc42937588f1f59a80d6f5f080328b3b8b4fb741a02a37b2566a3347828-userdata-shm.mount: Deactivated successfully.
Jan 23 05:37:11 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1162163afb41419ac0fdae16782011381f83bcb8185857165f37764080eef324-merged.mount: Deactivated successfully.
Jan 23 05:37:11 np0005593232 podman[380303]: 2026-01-23 10:37:11.526250913 +0000 UTC m=+0.237838891 container cleanup 2cfe4cc42937588f1f59a80d6f5f080328b3b8b4fb741a02a37b2566a3347828 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f2cadf27-eae4-40fd-be37-b605e054ab76, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:37:11 np0005593232 systemd[1]: libpod-conmon-2cfe4cc42937588f1f59a80d6f5f080328b3b8b4fb741a02a37b2566a3347828.scope: Deactivated successfully.
Jan 23 05:37:11 np0005593232 podman[380252]: 2026-01-23 10:37:11.561736612 +0000 UTC m=+0.717046442 container init 616db4f9a85a5c00030561fdae4121baf6a7749fa7c9b42c6cc4fb91d3344919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:37:11 np0005593232 podman[380252]: 2026-01-23 10:37:11.571437518 +0000 UTC m=+0.726747328 container start 616db4f9a85a5c00030561fdae4121baf6a7749fa7c9b42c6cc4fb91d3344919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_benz, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 23 05:37:11 np0005593232 podman[380252]: 2026-01-23 10:37:11.574766602 +0000 UTC m=+0.730076412 container attach 616db4f9a85a5c00030561fdae4121baf6a7749fa7c9b42c6cc4fb91d3344919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 23 05:37:11 np0005593232 podman[380351]: 2026-01-23 10:37:11.59651488 +0000 UTC m=+0.047412158 container remove 2cfe4cc42937588f1f59a80d6f5f080328b3b8b4fb741a02a37b2566a3347828 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f2cadf27-eae4-40fd-be37-b605e054ab76, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 23 05:37:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:11.602 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[143ceaf0-1a48-496d-8760-229d35c88237]: (4, ('Fri Jan 23 10:37:11 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f2cadf27-eae4-40fd-be37-b605e054ab76 (2cfe4cc42937588f1f59a80d6f5f080328b3b8b4fb741a02a37b2566a3347828)\n2cfe4cc42937588f1f59a80d6f5f080328b3b8b4fb741a02a37b2566a3347828\nFri Jan 23 10:37:11 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f2cadf27-eae4-40fd-be37-b605e054ab76 (2cfe4cc42937588f1f59a80d6f5f080328b3b8b4fb741a02a37b2566a3347828)\n2cfe4cc42937588f1f59a80d6f5f080328b3b8b4fb741a02a37b2566a3347828\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:11.605 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b453f342-cbb6-44d9-aafb-3e211691028a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:11.606 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2cadf27-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:37:11 np0005593232 nova_compute[250269]: 2026-01-23 10:37:11.608 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:11 np0005593232 kernel: tapf2cadf27-e0: left promiscuous mode
Jan 23 05:37:11 np0005593232 nova_compute[250269]: 2026-01-23 10:37:11.623 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:11.630 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3dabad0b-c677-4c15-82c8-01fb9e4f0dfb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:11.648 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9a7716a6-79ad-459e-a794-bd23b53817a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:11.653 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[90ef39b2-1777-4de0-8681-03e01fb5ad76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:11.669 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f737ad32-dbb2-4425-8158-93dc049a1b39]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 849611, 'reachable_time': 21453, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380370, 'error': None, 'target': 'ovnmeta-f2cadf27-eae4-40fd-be37-b605e054ab76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:11.674 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f2cadf27-eae4-40fd-be37-b605e054ab76 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:37:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:11.674 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[af38c1af-8881-497e-a0b5-373b802b99ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:11 np0005593232 systemd[1]: run-netns-ovnmeta\x2df2cadf27\x2deae4\x2d40fd\x2dbe37\x2db605e054ab76.mount: Deactivated successfully.
Jan 23 05:37:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3288: 321 pgs: 321 active+clean; 430 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.1 MiB/s wr, 178 op/s
Jan 23 05:37:12 np0005593232 reverent_benz[380324]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:37:12 np0005593232 reverent_benz[380324]: --> relative data size: 1.0
Jan 23 05:37:12 np0005593232 reverent_benz[380324]: --> All data devices are unavailable
Jan 23 05:37:12 np0005593232 systemd[1]: libpod-616db4f9a85a5c00030561fdae4121baf6a7749fa7c9b42c6cc4fb91d3344919.scope: Deactivated successfully.
Jan 23 05:37:12 np0005593232 podman[380252]: 2026-01-23 10:37:12.396452198 +0000 UTC m=+1.551762048 container died 616db4f9a85a5c00030561fdae4121baf6a7749fa7c9b42c6cc4fb91d3344919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:37:12 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b80bf5cb9225688d661c138a615df9bac9bda79dd5e3df3652b43c9269c80014-merged.mount: Deactivated successfully.
Jan 23 05:37:12 np0005593232 podman[380252]: 2026-01-23 10:37:12.455151266 +0000 UTC m=+1.610461076 container remove 616db4f9a85a5c00030561fdae4121baf6a7749fa7c9b42c6cc4fb91d3344919 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_benz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:37:12 np0005593232 systemd[1]: libpod-conmon-616db4f9a85a5c00030561fdae4121baf6a7749fa7c9b42c6cc4fb91d3344919.scope: Deactivated successfully.
Jan 23 05:37:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:37:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:12.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:37:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:37:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:12.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:37:12 np0005593232 nova_compute[250269]: 2026-01-23 10:37:12.766 250273 DEBUG nova.compute.manager [req-05ae5685-b574-4492-b01e-e7d0d37e2939 req-87c9b690-6354-4b45-9f9d-fd78aa8ec9a5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Received event network-vif-unplugged-a0c65008-fe55-4ef8-95db-ee5d2b022c41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:37:12 np0005593232 nova_compute[250269]: 2026-01-23 10:37:12.768 250273 DEBUG oslo_concurrency.lockutils [req-05ae5685-b574-4492-b01e-e7d0d37e2939 req-87c9b690-6354-4b45-9f9d-fd78aa8ec9a5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9f8603a4-2f28-496b-91b4-e30cc94657b4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:37:12 np0005593232 nova_compute[250269]: 2026-01-23 10:37:12.768 250273 DEBUG oslo_concurrency.lockutils [req-05ae5685-b574-4492-b01e-e7d0d37e2939 req-87c9b690-6354-4b45-9f9d-fd78aa8ec9a5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9f8603a4-2f28-496b-91b4-e30cc94657b4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:37:12 np0005593232 nova_compute[250269]: 2026-01-23 10:37:12.769 250273 DEBUG oslo_concurrency.lockutils [req-05ae5685-b574-4492-b01e-e7d0d37e2939 req-87c9b690-6354-4b45-9f9d-fd78aa8ec9a5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9f8603a4-2f28-496b-91b4-e30cc94657b4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:37:12 np0005593232 nova_compute[250269]: 2026-01-23 10:37:12.769 250273 DEBUG nova.compute.manager [req-05ae5685-b574-4492-b01e-e7d0d37e2939 req-87c9b690-6354-4b45-9f9d-fd78aa8ec9a5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] No waiting events found dispatching network-vif-unplugged-a0c65008-fe55-4ef8-95db-ee5d2b022c41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:37:12 np0005593232 nova_compute[250269]: 2026-01-23 10:37:12.769 250273 DEBUG nova.compute.manager [req-05ae5685-b574-4492-b01e-e7d0d37e2939 req-87c9b690-6354-4b45-9f9d-fd78aa8ec9a5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Received event network-vif-unplugged-a0c65008-fe55-4ef8-95db-ee5d2b022c41 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:37:13 np0005593232 podman[380536]: 2026-01-23 10:37:13.138686245 +0000 UTC m=+0.056038894 container create 427d7485d5649da942f60943c2a35b7a75fb6950e2c6c11d72333809babafa9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 23 05:37:13 np0005593232 systemd[1]: Started libpod-conmon-427d7485d5649da942f60943c2a35b7a75fb6950e2c6c11d72333809babafa9e.scope.
Jan 23 05:37:13 np0005593232 podman[380536]: 2026-01-23 10:37:13.108540748 +0000 UTC m=+0.025893467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:37:13 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:37:13 np0005593232 podman[380536]: 2026-01-23 10:37:13.26306406 +0000 UTC m=+0.180416769 container init 427d7485d5649da942f60943c2a35b7a75fb6950e2c6c11d72333809babafa9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shirley, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:37:13 np0005593232 podman[380536]: 2026-01-23 10:37:13.274748152 +0000 UTC m=+0.192100801 container start 427d7485d5649da942f60943c2a35b7a75fb6950e2c6c11d72333809babafa9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shirley, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:37:13 np0005593232 funny_shirley[380552]: 167 167
Jan 23 05:37:13 np0005593232 systemd[1]: libpod-427d7485d5649da942f60943c2a35b7a75fb6950e2c6c11d72333809babafa9e.scope: Deactivated successfully.
Jan 23 05:37:13 np0005593232 nova_compute[250269]: 2026-01-23 10:37:13.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:37:13 np0005593232 nova_compute[250269]: 2026-01-23 10:37:13.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:37:13 np0005593232 podman[380536]: 2026-01-23 10:37:13.47803089 +0000 UTC m=+0.395383549 container attach 427d7485d5649da942f60943c2a35b7a75fb6950e2c6c11d72333809babafa9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shirley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:37:13 np0005593232 podman[380536]: 2026-01-23 10:37:13.481188649 +0000 UTC m=+0.398541308 container died 427d7485d5649da942f60943c2a35b7a75fb6950e2c6c11d72333809babafa9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shirley, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:37:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:37:13 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:13.970 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '72'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:37:14 np0005593232 systemd[1]: var-lib-containers-storage-overlay-0aa8ae0cbbfcd4e3963f11e1403b792b62a3369ac2c6e8b3bfb025512759fba4-merged.mount: Deactivated successfully.
Jan 23 05:37:14 np0005593232 podman[380536]: 2026-01-23 10:37:14.033603851 +0000 UTC m=+0.950956470 container remove 427d7485d5649da942f60943c2a35b7a75fb6950e2c6c11d72333809babafa9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shirley, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:37:14 np0005593232 systemd[1]: libpod-conmon-427d7485d5649da942f60943c2a35b7a75fb6950e2c6c11d72333809babafa9e.scope: Deactivated successfully.
Jan 23 05:37:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3289: 321 pgs: 321 active+clean; 430 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 3.6 MiB/s wr, 185 op/s
Jan 23 05:37:14 np0005593232 podman[380580]: 2026-01-23 10:37:14.247101859 +0000 UTC m=+0.051723861 container create 1f05febd14f60772a32130952ac210b77e3c32b7f0ac5e273aae28f0a32ea789 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:37:14 np0005593232 systemd[1]: Started libpod-conmon-1f05febd14f60772a32130952ac210b77e3c32b7f0ac5e273aae28f0a32ea789.scope.
Jan 23 05:37:14 np0005593232 podman[380580]: 2026-01-23 10:37:14.220248366 +0000 UTC m=+0.024870378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:37:14 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:37:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61e2b11b3ffe48ab09bb9ad8bfb44466c44bcd6ffa3e5906ae39d21b2d7ed78d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:37:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61e2b11b3ffe48ab09bb9ad8bfb44466c44bcd6ffa3e5906ae39d21b2d7ed78d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:37:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61e2b11b3ffe48ab09bb9ad8bfb44466c44bcd6ffa3e5906ae39d21b2d7ed78d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:37:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61e2b11b3ffe48ab09bb9ad8bfb44466c44bcd6ffa3e5906ae39d21b2d7ed78d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:37:14 np0005593232 podman[380580]: 2026-01-23 10:37:14.338988891 +0000 UTC m=+0.143610883 container init 1f05febd14f60772a32130952ac210b77e3c32b7f0ac5e273aae28f0a32ea789 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_banzai, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Jan 23 05:37:14 np0005593232 podman[380580]: 2026-01-23 10:37:14.352851895 +0000 UTC m=+0.157473857 container start 1f05febd14f60772a32130952ac210b77e3c32b7f0ac5e273aae28f0a32ea789 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_banzai, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:37:14 np0005593232 podman[380580]: 2026-01-23 10:37:14.356821018 +0000 UTC m=+0.161442980 container attach 1f05febd14f60772a32130952ac210b77e3c32b7f0ac5e273aae28f0a32ea789 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_banzai, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:37:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:14.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:14 np0005593232 nova_compute[250269]: 2026-01-23 10:37:14.710 250273 DEBUG nova.network.neutron [req-02c5f545-082f-43f0-a3aa-5131f80db665 req-63c44405-2c22-45b9-a53e-3dcf36772673 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Updated VIF entry in instance network info cache for port a0c65008-fe55-4ef8-95db-ee5d2b022c41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:37:14 np0005593232 nova_compute[250269]: 2026-01-23 10:37:14.711 250273 DEBUG nova.network.neutron [req-02c5f545-082f-43f0-a3aa-5131f80db665 req-63c44405-2c22-45b9-a53e-3dcf36772673 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Updating instance_info_cache with network_info: [{"id": "a0c65008-fe55-4ef8-95db-ee5d2b022c41", "address": "fa:16:3e:d3:48:be", "network": {"id": "f2cadf27-eae4-40fd-be37-b605e054ab76", "bridge": "br-int", "label": "tempest-TestStampPattern-1832792840-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d59dad6496894352a2f4c7eb66ca1914", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0c65008-fe", "ovs_interfaceid": "a0c65008-fe55-4ef8-95db-ee5d2b022c41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:37:14 np0005593232 nova_compute[250269]: 2026-01-23 10:37:14.737 250273 DEBUG oslo_concurrency.lockutils [req-02c5f545-082f-43f0-a3aa-5131f80db665 req-63c44405-2c22-45b9-a53e-3dcf36772673 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-9f8603a4-2f28-496b-91b4-e30cc94657b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:37:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:37:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:14.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.003 250273 DEBUG nova.compute.manager [req-99c84698-423d-4c8d-9c23-078f46276548 req-a1bbcf02-33b5-43ab-947a-d8df881c7a17 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Received event network-vif-plugged-a0c65008-fe55-4ef8-95db-ee5d2b022c41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.003 250273 DEBUG oslo_concurrency.lockutils [req-99c84698-423d-4c8d-9c23-078f46276548 req-a1bbcf02-33b5-43ab-947a-d8df881c7a17 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9f8603a4-2f28-496b-91b4-e30cc94657b4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.004 250273 DEBUG oslo_concurrency.lockutils [req-99c84698-423d-4c8d-9c23-078f46276548 req-a1bbcf02-33b5-43ab-947a-d8df881c7a17 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9f8603a4-2f28-496b-91b4-e30cc94657b4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.004 250273 DEBUG oslo_concurrency.lockutils [req-99c84698-423d-4c8d-9c23-078f46276548 req-a1bbcf02-33b5-43ab-947a-d8df881c7a17 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9f8603a4-2f28-496b-91b4-e30cc94657b4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.004 250273 DEBUG nova.compute.manager [req-99c84698-423d-4c8d-9c23-078f46276548 req-a1bbcf02-33b5-43ab-947a-d8df881c7a17 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] No waiting events found dispatching network-vif-plugged-a0c65008-fe55-4ef8-95db-ee5d2b022c41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.004 250273 WARNING nova.compute.manager [req-99c84698-423d-4c8d-9c23-078f46276548 req-a1bbcf02-33b5-43ab-947a-d8df881c7a17 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Received unexpected event network-vif-plugged-a0c65008-fe55-4ef8-95db-ee5d2b022c41 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.016 250273 INFO nova.virt.libvirt.driver [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Deleting instance files /var/lib/nova/instances/9f8603a4-2f28-496b-91b4-e30cc94657b4_del#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.017 250273 INFO nova.virt.libvirt.driver [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Deletion of /var/lib/nova/instances/9f8603a4-2f28-496b-91b4-e30cc94657b4_del complete#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.074 250273 INFO nova.compute.manager [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Took 4.27 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.075 250273 DEBUG oslo.service.loopingcall [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.076 250273 DEBUG nova.compute.manager [-] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.076 250273 DEBUG nova.network.neutron [-] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]: {
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:    "0": [
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:        {
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:            "devices": [
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:                "/dev/loop3"
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:            ],
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:            "lv_name": "ceph_lv0",
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:            "lv_size": "7511998464",
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:            "name": "ceph_lv0",
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:            "tags": {
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:                "ceph.cluster_name": "ceph",
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:                "ceph.crush_device_class": "",
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:                "ceph.encrypted": "0",
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:                "ceph.osd_id": "0",
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:                "ceph.type": "block",
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:                "ceph.vdo": "0"
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:            },
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:            "type": "block",
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:            "vg_name": "ceph_vg0"
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:        }
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]:    ]
Jan 23 05:37:15 np0005593232 goofy_banzai[380597]: }
Jan 23 05:37:15 np0005593232 systemd[1]: libpod-1f05febd14f60772a32130952ac210b77e3c32b7f0ac5e273aae28f0a32ea789.scope: Deactivated successfully.
Jan 23 05:37:15 np0005593232 podman[380580]: 2026-01-23 10:37:15.160581173 +0000 UTC m=+0.965203155 container died 1f05febd14f60772a32130952ac210b77e3c32b7f0ac5e273aae28f0a32ea789 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 23 05:37:15 np0005593232 systemd[1]: var-lib-containers-storage-overlay-61e2b11b3ffe48ab09bb9ad8bfb44466c44bcd6ffa3e5906ae39d21b2d7ed78d-merged.mount: Deactivated successfully.
Jan 23 05:37:15 np0005593232 podman[380580]: 2026-01-23 10:37:15.212632233 +0000 UTC m=+1.017254195 container remove 1f05febd14f60772a32130952ac210b77e3c32b7f0ac5e273aae28f0a32ea789 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_banzai, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:37:15 np0005593232 systemd[1]: libpod-conmon-1f05febd14f60772a32130952ac210b77e3c32b7f0ac5e273aae28f0a32ea789.scope: Deactivated successfully.
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.321 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.321 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.322 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.355 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.355 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.356 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.356 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.356 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.661 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.668 250273 DEBUG nova.network.neutron [-] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.692 250273 INFO nova.compute.manager [-] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Took 0.62 seconds to deallocate network for instance.#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.754 250273 DEBUG nova.compute.manager [req-2e759538-4b66-4d76-9a63-4a4294b8bff2 req-34235a36-eed2-478c-b521-d58d7996b632 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Received event network-vif-deleted-a0c65008-fe55-4ef8-95db-ee5d2b022c41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.758 250273 DEBUG oslo_concurrency.lockutils [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.758 250273 DEBUG oslo_concurrency.lockutils [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:37:15 np0005593232 podman[380778]: 2026-01-23 10:37:15.762963145 +0000 UTC m=+0.039039090 container create 411e79b9da3eabd93e90ea6e74567946744084dd85b422ebf5c09e11dd50d3df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:37:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:37:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1852048324' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.796 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:37:15 np0005593232 systemd[1]: Started libpod-conmon-411e79b9da3eabd93e90ea6e74567946744084dd85b422ebf5c09e11dd50d3df.scope.
Jan 23 05:37:15 np0005593232 nova_compute[250269]: 2026-01-23 10:37:15.810 250273 DEBUG oslo_concurrency.processutils [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:37:15 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:37:15 np0005593232 podman[380778]: 2026-01-23 10:37:15.746115496 +0000 UTC m=+0.022191461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:37:15 np0005593232 podman[380778]: 2026-01-23 10:37:15.842842606 +0000 UTC m=+0.118918551 container init 411e79b9da3eabd93e90ea6e74567946744084dd85b422ebf5c09e11dd50d3df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_buck, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:37:15 np0005593232 podman[380778]: 2026-01-23 10:37:15.849493665 +0000 UTC m=+0.125569600 container start 411e79b9da3eabd93e90ea6e74567946744084dd85b422ebf5c09e11dd50d3df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:37:15 np0005593232 podman[380778]: 2026-01-23 10:37:15.853422456 +0000 UTC m=+0.129498421 container attach 411e79b9da3eabd93e90ea6e74567946744084dd85b422ebf5c09e11dd50d3df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_buck, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:37:15 np0005593232 relaxed_buck[380796]: 167 167
Jan 23 05:37:15 np0005593232 systemd[1]: libpod-411e79b9da3eabd93e90ea6e74567946744084dd85b422ebf5c09e11dd50d3df.scope: Deactivated successfully.
Jan 23 05:37:15 np0005593232 podman[380778]: 2026-01-23 10:37:15.855065263 +0000 UTC m=+0.131141208 container died 411e79b9da3eabd93e90ea6e74567946744084dd85b422ebf5c09e11dd50d3df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_buck, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:37:15 np0005593232 systemd[1]: var-lib-containers-storage-overlay-2b140c55e371e85c4a10a92c96e8b48e0eccfefe2f63e558de17676bc204e0a3-merged.mount: Deactivated successfully.
Jan 23 05:37:15 np0005593232 podman[380778]: 2026-01-23 10:37:15.886949939 +0000 UTC m=+0.163025884 container remove 411e79b9da3eabd93e90ea6e74567946744084dd85b422ebf5c09e11dd50d3df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_buck, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:37:15 np0005593232 systemd[1]: libpod-conmon-411e79b9da3eabd93e90ea6e74567946744084dd85b422ebf5c09e11dd50d3df.scope: Deactivated successfully.
Jan 23 05:37:16 np0005593232 nova_compute[250269]: 2026-01-23 10:37:16.038 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:37:16 np0005593232 nova_compute[250269]: 2026-01-23 10:37:16.043 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4061MB free_disk=20.86777114868164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:37:16 np0005593232 nova_compute[250269]: 2026-01-23 10:37:16.044 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:37:16 np0005593232 podman[380839]: 2026-01-23 10:37:16.050414206 +0000 UTC m=+0.040644067 container create 69e353b48f4197faf007fb2f81b4e6356856b7c3b027d8e412b76a46365e28cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:37:16 np0005593232 systemd[1]: Started libpod-conmon-69e353b48f4197faf007fb2f81b4e6356856b7c3b027d8e412b76a46365e28cb.scope.
Jan 23 05:37:16 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:37:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0504d747183c296014830dd86a2f1445a2094b47261287ac528dbf2f72fbf68e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:37:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0504d747183c296014830dd86a2f1445a2094b47261287ac528dbf2f72fbf68e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:37:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0504d747183c296014830dd86a2f1445a2094b47261287ac528dbf2f72fbf68e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:37:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0504d747183c296014830dd86a2f1445a2094b47261287ac528dbf2f72fbf68e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:37:16 np0005593232 podman[380839]: 2026-01-23 10:37:16.033827974 +0000 UTC m=+0.024057855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:37:16 np0005593232 podman[380839]: 2026-01-23 10:37:16.143038818 +0000 UTC m=+0.133268709 container init 69e353b48f4197faf007fb2f81b4e6356856b7c3b027d8e412b76a46365e28cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_gagarin, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 23 05:37:16 np0005593232 podman[380839]: 2026-01-23 10:37:16.149316007 +0000 UTC m=+0.139545868 container start 69e353b48f4197faf007fb2f81b4e6356856b7c3b027d8e412b76a46365e28cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:37:16 np0005593232 podman[380839]: 2026-01-23 10:37:16.152746184 +0000 UTC m=+0.142976065 container attach 69e353b48f4197faf007fb2f81b4e6356856b7c3b027d8e412b76a46365e28cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_gagarin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:37:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3290: 321 pgs: 321 active+clean; 430 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 2.3 MiB/s wr, 158 op/s
Jan 23 05:37:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:37:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1030352435' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:37:16 np0005593232 nova_compute[250269]: 2026-01-23 10:37:16.269 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:16 np0005593232 nova_compute[250269]: 2026-01-23 10:37:16.273 250273 DEBUG oslo_concurrency.processutils [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:37:16 np0005593232 nova_compute[250269]: 2026-01-23 10:37:16.278 250273 DEBUG nova.compute.provider_tree [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:37:16 np0005593232 nova_compute[250269]: 2026-01-23 10:37:16.308 250273 DEBUG nova.scheduler.client.report [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:37:16 np0005593232 nova_compute[250269]: 2026-01-23 10:37:16.331 250273 DEBUG oslo_concurrency.lockutils [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.573s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:37:16 np0005593232 nova_compute[250269]: 2026-01-23 10:37:16.335 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:37:16 np0005593232 nova_compute[250269]: 2026-01-23 10:37:16.364 250273 INFO nova.scheduler.client.report [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Deleted allocations for instance 9f8603a4-2f28-496b-91b4-e30cc94657b4#033[00m
Jan 23 05:37:16 np0005593232 nova_compute[250269]: 2026-01-23 10:37:16.401 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:37:16 np0005593232 nova_compute[250269]: 2026-01-23 10:37:16.401 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:37:16 np0005593232 nova_compute[250269]: 2026-01-23 10:37:16.419 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:37:16 np0005593232 nova_compute[250269]: 2026-01-23 10:37:16.453 250273 DEBUG oslo_concurrency.lockutils [None req-effba9c9-e11e-49fc-aa76-7d8a246e5d9c 9a8ce4c88e8b46c5806ada5e3a6cdbbf d59dad6496894352a2f4c7eb66ca1914 - - default default] Lock "9f8603a4-2f28-496b-91b4-e30cc94657b4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:37:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:37:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:16.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:37:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:16.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:37:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/638667220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:37:16 np0005593232 nova_compute[250269]: 2026-01-23 10:37:16.841 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:37:16 np0005593232 nova_compute[250269]: 2026-01-23 10:37:16.848 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:37:16 np0005593232 nova_compute[250269]: 2026-01-23 10:37:16.872 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:37:16 np0005593232 nova_compute[250269]: 2026-01-23 10:37:16.894 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:37:16 np0005593232 nova_compute[250269]: 2026-01-23 10:37:16.894 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.559s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:37:16 np0005593232 amazing_gagarin[380857]: {
Jan 23 05:37:16 np0005593232 amazing_gagarin[380857]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:37:16 np0005593232 amazing_gagarin[380857]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:37:16 np0005593232 amazing_gagarin[380857]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:37:16 np0005593232 amazing_gagarin[380857]:        "osd_id": 0,
Jan 23 05:37:16 np0005593232 amazing_gagarin[380857]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:37:16 np0005593232 amazing_gagarin[380857]:        "type": "bluestore"
Jan 23 05:37:16 np0005593232 amazing_gagarin[380857]:    }
Jan 23 05:37:16 np0005593232 amazing_gagarin[380857]: }
Jan 23 05:37:16 np0005593232 systemd[1]: libpod-69e353b48f4197faf007fb2f81b4e6356856b7c3b027d8e412b76a46365e28cb.scope: Deactivated successfully.
Jan 23 05:37:16 np0005593232 podman[380839]: 2026-01-23 10:37:16.984185157 +0000 UTC m=+0.974415018 container died 69e353b48f4197faf007fb2f81b4e6356856b7c3b027d8e412b76a46365e28cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_gagarin, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:37:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay-0504d747183c296014830dd86a2f1445a2094b47261287ac528dbf2f72fbf68e-merged.mount: Deactivated successfully.
Jan 23 05:37:17 np0005593232 podman[380839]: 2026-01-23 10:37:17.274360015 +0000 UTC m=+1.264589916 container remove 69e353b48f4197faf007fb2f81b4e6356856b7c3b027d8e412b76a46365e28cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:37:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:37:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:37:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:37:17 np0005593232 systemd[1]: libpod-conmon-69e353b48f4197faf007fb2f81b4e6356856b7c3b027d8e412b76a46365e28cb.scope: Deactivated successfully.
Jan 23 05:37:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:37:17 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev e0a06b74-c4de-40b2-8e47-412ac7111067 does not exist
Jan 23 05:37:17 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev dd1bba9a-c9d9-4e77-b269-89df0dc6e892 does not exist
Jan 23 05:37:17 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev f42b938b-6b3e-49a0-a3c6-15c392dc3995 does not exist
Jan 23 05:37:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3291: 321 pgs: 321 active+clean; 434 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 4.4 MiB/s wr, 225 op/s
Jan 23 05:37:18 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:37:18 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:37:18 np0005593232 nova_compute[250269]: 2026-01-23 10:37:18.588 250273 DEBUG oslo_concurrency.lockutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "62d1083a-9525-43f2-8862-dac2d5af1dff" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:37:18 np0005593232 nova_compute[250269]: 2026-01-23 10:37:18.588 250273 DEBUG oslo_concurrency.lockutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "62d1083a-9525-43f2-8862-dac2d5af1dff" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:37:18 np0005593232 nova_compute[250269]: 2026-01-23 10:37:18.610 250273 DEBUG nova.compute.manager [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:37:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:18.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:37:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:18.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:37:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:37:18 np0005593232 nova_compute[250269]: 2026-01-23 10:37:18.949 250273 DEBUG oslo_concurrency.lockutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:37:18 np0005593232 nova_compute[250269]: 2026-01-23 10:37:18.950 250273 DEBUG oslo_concurrency.lockutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:37:18 np0005593232 nova_compute[250269]: 2026-01-23 10:37:18.959 250273 DEBUG nova.virt.hardware [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:37:18 np0005593232 nova_compute[250269]: 2026-01-23 10:37:18.959 250273 INFO nova.compute.claims [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:37:19 np0005593232 nova_compute[250269]: 2026-01-23 10:37:19.088 250273 DEBUG oslo_concurrency.processutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:37:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e370 do_prune osdmap full prune enabled
Jan 23 05:37:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e371 e371: 3 total, 3 up, 3 in
Jan 23 05:37:19 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e371: 3 total, 3 up, 3 in
Jan 23 05:37:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:37:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1782919186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:37:19 np0005593232 nova_compute[250269]: 2026-01-23 10:37:19.531 250273 DEBUG oslo_concurrency.processutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:37:19 np0005593232 nova_compute[250269]: 2026-01-23 10:37:19.539 250273 DEBUG nova.compute.provider_tree [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:37:19 np0005593232 nova_compute[250269]: 2026-01-23 10:37:19.565 250273 DEBUG nova.scheduler.client.report [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:37:19 np0005593232 nova_compute[250269]: 2026-01-23 10:37:19.606 250273 DEBUG oslo_concurrency.lockutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:37:19 np0005593232 nova_compute[250269]: 2026-01-23 10:37:19.607 250273 DEBUG nova.compute.manager [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:37:19 np0005593232 nova_compute[250269]: 2026-01-23 10:37:19.657 250273 DEBUG nova.compute.manager [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:37:19 np0005593232 nova_compute[250269]: 2026-01-23 10:37:19.657 250273 DEBUG nova.network.neutron [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:37:19 np0005593232 nova_compute[250269]: 2026-01-23 10:37:19.685 250273 INFO nova.virt.libvirt.driver [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:37:19 np0005593232 nova_compute[250269]: 2026-01-23 10:37:19.707 250273 DEBUG nova.compute.manager [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:37:19 np0005593232 nova_compute[250269]: 2026-01-23 10:37:19.820 250273 DEBUG nova.compute.manager [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 05:37:19 np0005593232 nova_compute[250269]: 2026-01-23 10:37:19.821 250273 DEBUG nova.virt.libvirt.driver [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:37:19 np0005593232 nova_compute[250269]: 2026-01-23 10:37:19.822 250273 INFO nova.virt.libvirt.driver [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Creating image(s)#033[00m
Jan 23 05:37:19 np0005593232 nova_compute[250269]: 2026-01-23 10:37:19.856 250273 DEBUG nova.storage.rbd_utils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] rbd image 62d1083a-9525-43f2-8862-dac2d5af1dff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:37:19 np0005593232 nova_compute[250269]: 2026-01-23 10:37:19.901 250273 DEBUG nova.storage.rbd_utils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] rbd image 62d1083a-9525-43f2-8862-dac2d5af1dff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:37:19 np0005593232 nova_compute[250269]: 2026-01-23 10:37:19.945 250273 DEBUG nova.storage.rbd_utils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] rbd image 62d1083a-9525-43f2-8862-dac2d5af1dff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:37:19 np0005593232 nova_compute[250269]: 2026-01-23 10:37:19.951 250273 DEBUG oslo_concurrency.processutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:37:20 np0005593232 nova_compute[250269]: 2026-01-23 10:37:20.006 250273 DEBUG nova.policy [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a3cd8c3758e14f9c8e4ad1a9a94a9995', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b27af793a8cc42259216fbeaa302ba03', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:37:20 np0005593232 nova_compute[250269]: 2026-01-23 10:37:20.042 250273 DEBUG oslo_concurrency.processutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:37:20 np0005593232 nova_compute[250269]: 2026-01-23 10:37:20.044 250273 DEBUG oslo_concurrency.lockutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:37:20 np0005593232 nova_compute[250269]: 2026-01-23 10:37:20.045 250273 DEBUG oslo_concurrency.lockutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:37:20 np0005593232 nova_compute[250269]: 2026-01-23 10:37:20.046 250273 DEBUG oslo_concurrency.lockutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:37:20 np0005593232 nova_compute[250269]: 2026-01-23 10:37:20.075 250273 DEBUG nova.storage.rbd_utils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] rbd image 62d1083a-9525-43f2-8862-dac2d5af1dff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:37:20 np0005593232 nova_compute[250269]: 2026-01-23 10:37:20.079 250273 DEBUG oslo_concurrency.processutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 62d1083a-9525-43f2-8862-dac2d5af1dff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:37:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3293: 321 pgs: 321 active+clean; 434 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 224 KiB/s rd, 2.5 MiB/s wr, 88 op/s
Jan 23 05:37:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e371 do_prune osdmap full prune enabled
Jan 23 05:37:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e372 e372: 3 total, 3 up, 3 in
Jan 23 05:37:20 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e372: 3 total, 3 up, 3 in
Jan 23 05:37:20 np0005593232 nova_compute[250269]: 2026-01-23 10:37:20.439 250273 DEBUG oslo_concurrency.processutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 62d1083a-9525-43f2-8862-dac2d5af1dff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.360s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:37:20 np0005593232 nova_compute[250269]: 2026-01-23 10:37:20.515 250273 DEBUG nova.storage.rbd_utils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] resizing rbd image 62d1083a-9525-43f2-8862-dac2d5af1dff_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 05:37:20 np0005593232 nova_compute[250269]: 2026-01-23 10:37:20.616 250273 DEBUG nova.objects.instance [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lazy-loading 'migration_context' on Instance uuid 62d1083a-9525-43f2-8862-dac2d5af1dff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:37:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:20.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:20 np0005593232 nova_compute[250269]: 2026-01-23 10:37:20.632 250273 DEBUG nova.virt.libvirt.driver [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 05:37:20 np0005593232 nova_compute[250269]: 2026-01-23 10:37:20.633 250273 DEBUG nova.virt.libvirt.driver [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Ensure instance console log exists: /var/lib/nova/instances/62d1083a-9525-43f2-8862-dac2d5af1dff/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:37:20 np0005593232 nova_compute[250269]: 2026-01-23 10:37:20.633 250273 DEBUG oslo_concurrency.lockutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:37:20 np0005593232 nova_compute[250269]: 2026-01-23 10:37:20.633 250273 DEBUG oslo_concurrency.lockutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:37:20 np0005593232 nova_compute[250269]: 2026-01-23 10:37:20.634 250273 DEBUG oslo_concurrency.lockutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:37:20 np0005593232 nova_compute[250269]: 2026-01-23 10:37:20.663 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:37:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:20.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:37:21 np0005593232 nova_compute[250269]: 2026-01-23 10:37:21.159 250273 DEBUG nova.network.neutron [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Successfully created port: 362368d8-3e1c-40ea-bc59-1224e20856c7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:37:21 np0005593232 nova_compute[250269]: 2026-01-23 10:37:21.271 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3295: 321 pgs: 321 active+clean; 425 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 316 KiB/s rd, 4.7 MiB/s wr, 153 op/s
Jan 23 05:37:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:37:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:22.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:37:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:37:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:22.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:37:23 np0005593232 nova_compute[250269]: 2026-01-23 10:37:23.028 250273 DEBUG nova.network.neutron [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Successfully updated port: 362368d8-3e1c-40ea-bc59-1224e20856c7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:37:23 np0005593232 nova_compute[250269]: 2026-01-23 10:37:23.049 250273 DEBUG oslo_concurrency.lockutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "refresh_cache-62d1083a-9525-43f2-8862-dac2d5af1dff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:37:23 np0005593232 nova_compute[250269]: 2026-01-23 10:37:23.049 250273 DEBUG oslo_concurrency.lockutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquired lock "refresh_cache-62d1083a-9525-43f2-8862-dac2d5af1dff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:37:23 np0005593232 nova_compute[250269]: 2026-01-23 10:37:23.050 250273 DEBUG nova.network.neutron [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:37:23 np0005593232 nova_compute[250269]: 2026-01-23 10:37:23.226 250273 DEBUG nova.compute.manager [req-c9501fe4-2b99-4ada-9db5-7586efb07951 req-6193dc60-8e49-47fc-a924-5956c404e096 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Received event network-changed-362368d8-3e1c-40ea-bc59-1224e20856c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:37:23 np0005593232 nova_compute[250269]: 2026-01-23 10:37:23.226 250273 DEBUG nova.compute.manager [req-c9501fe4-2b99-4ada-9db5-7586efb07951 req-6193dc60-8e49-47fc-a924-5956c404e096 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Refreshing instance network info cache due to event network-changed-362368d8-3e1c-40ea-bc59-1224e20856c7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:37:23 np0005593232 nova_compute[250269]: 2026-01-23 10:37:23.227 250273 DEBUG oslo_concurrency.lockutils [req-c9501fe4-2b99-4ada-9db5-7586efb07951 req-6193dc60-8e49-47fc-a924-5956c404e096 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-62d1083a-9525-43f2-8862-dac2d5af1dff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:37:23 np0005593232 nova_compute[250269]: 2026-01-23 10:37:23.303 250273 DEBUG nova.network.neutron [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:37:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:37:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3296: 321 pgs: 321 active+clean; 343 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 455 KiB/s rd, 6.8 MiB/s wr, 300 op/s
Jan 23 05:37:24 np0005593232 nova_compute[250269]: 2026-01-23 10:37:24.330 250273 DEBUG nova.network.neutron [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Updating instance_info_cache with network_info: [{"id": "362368d8-3e1c-40ea-bc59-1224e20856c7", "address": "fa:16:3e:f7:f4:16", "network": {"id": "868ec025-7796-402b-ba12-8a3a5dac7373", "bridge": "br-int", "label": "tempest-network-smoke--2132727647", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap362368d8-3e", "ovs_interfaceid": "362368d8-3e1c-40ea-bc59-1224e20856c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:37:24 np0005593232 nova_compute[250269]: 2026-01-23 10:37:24.354 250273 DEBUG oslo_concurrency.lockutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Releasing lock "refresh_cache-62d1083a-9525-43f2-8862-dac2d5af1dff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:37:24 np0005593232 nova_compute[250269]: 2026-01-23 10:37:24.355 250273 DEBUG nova.compute.manager [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Instance network_info: |[{"id": "362368d8-3e1c-40ea-bc59-1224e20856c7", "address": "fa:16:3e:f7:f4:16", "network": {"id": "868ec025-7796-402b-ba12-8a3a5dac7373", "bridge": "br-int", "label": "tempest-network-smoke--2132727647", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap362368d8-3e", "ovs_interfaceid": "362368d8-3e1c-40ea-bc59-1224e20856c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:37:24 np0005593232 nova_compute[250269]: 2026-01-23 10:37:24.356 250273 DEBUG oslo_concurrency.lockutils [req-c9501fe4-2b99-4ada-9db5-7586efb07951 req-6193dc60-8e49-47fc-a924-5956c404e096 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-62d1083a-9525-43f2-8862-dac2d5af1dff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:37:24 np0005593232 nova_compute[250269]: 2026-01-23 10:37:24.356 250273 DEBUG nova.network.neutron [req-c9501fe4-2b99-4ada-9db5-7586efb07951 req-6193dc60-8e49-47fc-a924-5956c404e096 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Refreshing network info cache for port 362368d8-3e1c-40ea-bc59-1224e20856c7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:37:24 np0005593232 nova_compute[250269]: 2026-01-23 10:37:24.361 250273 DEBUG nova.virt.libvirt.driver [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Start _get_guest_xml network_info=[{"id": "362368d8-3e1c-40ea-bc59-1224e20856c7", "address": "fa:16:3e:f7:f4:16", "network": {"id": "868ec025-7796-402b-ba12-8a3a5dac7373", "bridge": "br-int", "label": "tempest-network-smoke--2132727647", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap362368d8-3e", "ovs_interfaceid": "362368d8-3e1c-40ea-bc59-1224e20856c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:37:24 np0005593232 nova_compute[250269]: 2026-01-23 10:37:24.370 250273 WARNING nova.virt.libvirt.driver [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:37:24 np0005593232 nova_compute[250269]: 2026-01-23 10:37:24.380 250273 DEBUG nova.virt.libvirt.host [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:37:24 np0005593232 nova_compute[250269]: 2026-01-23 10:37:24.381 250273 DEBUG nova.virt.libvirt.host [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:37:24 np0005593232 nova_compute[250269]: 2026-01-23 10:37:24.386 250273 DEBUG nova.virt.libvirt.host [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:37:24 np0005593232 nova_compute[250269]: 2026-01-23 10:37:24.387 250273 DEBUG nova.virt.libvirt.host [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:37:24 np0005593232 nova_compute[250269]: 2026-01-23 10:37:24.390 250273 DEBUG nova.virt.libvirt.driver [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:37:24 np0005593232 nova_compute[250269]: 2026-01-23 10:37:24.391 250273 DEBUG nova.virt.hardware [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:37:24 np0005593232 nova_compute[250269]: 2026-01-23 10:37:24.392 250273 DEBUG nova.virt.hardware [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:37:24 np0005593232 nova_compute[250269]: 2026-01-23 10:37:24.393 250273 DEBUG nova.virt.hardware [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:37:24 np0005593232 nova_compute[250269]: 2026-01-23 10:37:24.395 250273 DEBUG nova.virt.hardware [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:37:24 np0005593232 nova_compute[250269]: 2026-01-23 10:37:24.398 250273 DEBUG nova.virt.hardware [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:37:24 np0005593232 nova_compute[250269]: 2026-01-23 10:37:24.399 250273 DEBUG nova.virt.hardware [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:37:24 np0005593232 nova_compute[250269]: 2026-01-23 10:37:24.399 250273 DEBUG nova.virt.hardware [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:37:24 np0005593232 nova_compute[250269]: 2026-01-23 10:37:24.400 250273 DEBUG nova.virt.hardware [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:37:24 np0005593232 nova_compute[250269]: 2026-01-23 10:37:24.401 250273 DEBUG nova.virt.hardware [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:37:24 np0005593232 nova_compute[250269]: 2026-01-23 10:37:24.401 250273 DEBUG nova.virt.hardware [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:37:24 np0005593232 nova_compute[250269]: 2026-01-23 10:37:24.402 250273 DEBUG nova.virt.hardware [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:37:24 np0005593232 nova_compute[250269]: 2026-01-23 10:37:24.413 250273 DEBUG oslo_concurrency.processutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:37:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:24.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:24.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:37:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1324330249' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.023 250273 DEBUG oslo_concurrency.processutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.060 250273 DEBUG nova.storage.rbd_utils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] rbd image 62d1083a-9525-43f2-8862-dac2d5af1dff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.067 250273 DEBUG oslo_concurrency.processutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:37:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:37:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/798263677' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.540 250273 DEBUG oslo_concurrency.processutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.543 250273 DEBUG nova.virt.libvirt.vif [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:37:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-622349977-gen-1-960718962',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-622349977-gen-1-960718962',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-622349977-gen',id=191,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBQ7hbI9dSlZowLrM+55BW4vZefgM5C7AtbCmYzjlG7RhMLD86z6HKT1ky7da4FJ/rvc4D//2MBDEN2yr//ERuTxme2OPEqVyC6OVqkosE4nxK5JvPAi4Vemn/j2yc45jQ==',key_name='tempest-TestSecurityGroupsBasicOps-832716294',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b27af793a8cc42259216fbeaa302ba03',ramdisk_id='',reservation_id='r-7xxgmy8d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-622349977',owner_user_name='tempest-TestSecurityGroupsBasicOps-622349977-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:37:19Z,user_data=None,user_id='a3cd8c3758e14f9c8e4ad1a9a94a9995',uuid=62d1083a-9525-43f2-8862-dac2d5af1dff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "362368d8-3e1c-40ea-bc59-1224e20856c7", "address": "fa:16:3e:f7:f4:16", "network": {"id": "868ec025-7796-402b-ba12-8a3a5dac7373", "bridge": "br-int", "label": "tempest-network-smoke--2132727647", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap362368d8-3e", "ovs_interfaceid": "362368d8-3e1c-40ea-bc59-1224e20856c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.544 250273 DEBUG nova.network.os_vif_util [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Converting VIF {"id": "362368d8-3e1c-40ea-bc59-1224e20856c7", "address": "fa:16:3e:f7:f4:16", "network": {"id": "868ec025-7796-402b-ba12-8a3a5dac7373", "bridge": "br-int", "label": "tempest-network-smoke--2132727647", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap362368d8-3e", "ovs_interfaceid": "362368d8-3e1c-40ea-bc59-1224e20856c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.546 250273 DEBUG nova.network.os_vif_util [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:f4:16,bridge_name='br-int',has_traffic_filtering=True,id=362368d8-3e1c-40ea-bc59-1224e20856c7,network=Network(868ec025-7796-402b-ba12-8a3a5dac7373),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap362368d8-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.548 250273 DEBUG nova.objects.instance [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lazy-loading 'pci_devices' on Instance uuid 62d1083a-9525-43f2-8862-dac2d5af1dff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.567 250273 DEBUG nova.virt.libvirt.driver [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:37:25 np0005593232 nova_compute[250269]:  <uuid>62d1083a-9525-43f2-8862-dac2d5af1dff</uuid>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:  <name>instance-000000bf</name>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-622349977-gen-1-960718962</nova:name>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:37:24</nova:creationTime>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:37:25 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:        <nova:user uuid="a3cd8c3758e14f9c8e4ad1a9a94a9995">tempest-TestSecurityGroupsBasicOps-622349977-project-member</nova:user>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:        <nova:project uuid="b27af793a8cc42259216fbeaa302ba03">tempest-TestSecurityGroupsBasicOps-622349977</nova:project>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:        <nova:port uuid="362368d8-3e1c-40ea-bc59-1224e20856c7">
Jan 23 05:37:25 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <entry name="serial">62d1083a-9525-43f2-8862-dac2d5af1dff</entry>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <entry name="uuid">62d1083a-9525-43f2-8862-dac2d5af1dff</entry>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/62d1083a-9525-43f2-8862-dac2d5af1dff_disk">
Jan 23 05:37:25 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:37:25 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/62d1083a-9525-43f2-8862-dac2d5af1dff_disk.config">
Jan 23 05:37:25 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:37:25 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:f7:f4:16"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <target dev="tap362368d8-3e"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/62d1083a-9525-43f2-8862-dac2d5af1dff/console.log" append="off"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:37:25 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:37:25 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:37:25 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:37:25 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.568 250273 DEBUG nova.compute.manager [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Preparing to wait for external event network-vif-plugged-362368d8-3e1c-40ea-bc59-1224e20856c7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.569 250273 DEBUG oslo_concurrency.lockutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "62d1083a-9525-43f2-8862-dac2d5af1dff-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.569 250273 DEBUG oslo_concurrency.lockutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "62d1083a-9525-43f2-8862-dac2d5af1dff-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.569 250273 DEBUG oslo_concurrency.lockutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "62d1083a-9525-43f2-8862-dac2d5af1dff-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.570 250273 DEBUG nova.virt.libvirt.vif [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:37:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-622349977-gen-1-960718962',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-622349977-gen-1-960718962',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-622349977-gen',id=191,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBQ7hbI9dSlZowLrM+55BW4vZefgM5C7AtbCmYzjlG7RhMLD86z6HKT1ky7da4FJ/rvc4D//2MBDEN2yr//ERuTxme2OPEqVyC6OVqkosE4nxK5JvPAi4Vemn/j2yc45jQ==',key_name='tempest-TestSecurityGroupsBasicOps-832716294',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b27af793a8cc42259216fbeaa302ba03',ramdisk_id='',reservation_id='r-7xxgmy8d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-622349977',owner_user_name='tempest-TestSecurityGroupsBasicOps-622349977-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:37:19Z,user_data=None,user_id='a3cd8c3758e14f9c8e4ad1a9a94a9995',uuid=62d1083a-9525-43f2-8862-dac2d5af1dff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "362368d8-3e1c-40ea-bc59-1224e20856c7", "address": "fa:16:3e:f7:f4:16", "network": {"id": "868ec025-7796-402b-ba12-8a3a5dac7373", "bridge": "br-int", "label": "tempest-network-smoke--2132727647", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap362368d8-3e", "ovs_interfaceid": "362368d8-3e1c-40ea-bc59-1224e20856c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.570 250273 DEBUG nova.network.os_vif_util [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Converting VIF {"id": "362368d8-3e1c-40ea-bc59-1224e20856c7", "address": "fa:16:3e:f7:f4:16", "network": {"id": "868ec025-7796-402b-ba12-8a3a5dac7373", "bridge": "br-int", "label": "tempest-network-smoke--2132727647", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap362368d8-3e", "ovs_interfaceid": "362368d8-3e1c-40ea-bc59-1224e20856c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.571 250273 DEBUG nova.network.os_vif_util [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f7:f4:16,bridge_name='br-int',has_traffic_filtering=True,id=362368d8-3e1c-40ea-bc59-1224e20856c7,network=Network(868ec025-7796-402b-ba12-8a3a5dac7373),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap362368d8-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.572 250273 DEBUG os_vif [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:f4:16,bridge_name='br-int',has_traffic_filtering=True,id=362368d8-3e1c-40ea-bc59-1224e20856c7,network=Network(868ec025-7796-402b-ba12-8a3a5dac7373),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap362368d8-3e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.575 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.576 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.577 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.584 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.585 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap362368d8-3e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.586 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap362368d8-3e, col_values=(('external_ids', {'iface-id': '362368d8-3e1c-40ea-bc59-1224e20856c7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f7:f4:16', 'vm-uuid': '62d1083a-9525-43f2-8862-dac2d5af1dff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.590 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:25 np0005593232 NetworkManager[49057]: <info>  [1769164645.5914] manager: (tap362368d8-3e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/349)
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.594 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.604 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.606 250273 INFO os_vif [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f7:f4:16,bridge_name='br-int',has_traffic_filtering=True,id=362368d8-3e1c-40ea-bc59-1224e20856c7,network=Network(868ec025-7796-402b-ba12-8a3a5dac7373),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap362368d8-3e')#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.654 250273 DEBUG nova.virt.libvirt.driver [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.655 250273 DEBUG nova.virt.libvirt.driver [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.655 250273 DEBUG nova.virt.libvirt.driver [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] No VIF found with MAC fa:16:3e:f7:f4:16, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.657 250273 INFO nova.virt.libvirt.driver [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Using config drive#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.698 250273 DEBUG nova.storage.rbd_utils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] rbd image 62d1083a-9525-43f2-8862-dac2d5af1dff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.706 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.717 250273 DEBUG nova.network.neutron [req-c9501fe4-2b99-4ada-9db5-7586efb07951 req-6193dc60-8e49-47fc-a924-5956c404e096 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Updated VIF entry in instance network info cache for port 362368d8-3e1c-40ea-bc59-1224e20856c7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.718 250273 DEBUG nova.network.neutron [req-c9501fe4-2b99-4ada-9db5-7586efb07951 req-6193dc60-8e49-47fc-a924-5956c404e096 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Updating instance_info_cache with network_info: [{"id": "362368d8-3e1c-40ea-bc59-1224e20856c7", "address": "fa:16:3e:f7:f4:16", "network": {"id": "868ec025-7796-402b-ba12-8a3a5dac7373", "bridge": "br-int", "label": "tempest-network-smoke--2132727647", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap362368d8-3e", "ovs_interfaceid": "362368d8-3e1c-40ea-bc59-1224e20856c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:37:25 np0005593232 nova_compute[250269]: 2026-01-23 10:37:25.734 250273 DEBUG oslo_concurrency.lockutils [req-c9501fe4-2b99-4ada-9db5-7586efb07951 req-6193dc60-8e49-47fc-a924-5956c404e096 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-62d1083a-9525-43f2-8862-dac2d5af1dff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:37:26 np0005593232 nova_compute[250269]: 2026-01-23 10:37:26.098 250273 INFO nova.virt.libvirt.driver [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Creating config drive at /var/lib/nova/instances/62d1083a-9525-43f2-8862-dac2d5af1dff/disk.config#033[00m
Jan 23 05:37:26 np0005593232 nova_compute[250269]: 2026-01-23 10:37:26.104 250273 DEBUG oslo_concurrency.processutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/62d1083a-9525-43f2-8862-dac2d5af1dff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz7ijf7ev execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:37:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3297: 321 pgs: 321 active+clean; 343 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 182 KiB/s rd, 3.7 MiB/s wr, 200 op/s
Jan 23 05:37:26 np0005593232 nova_compute[250269]: 2026-01-23 10:37:26.244 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769164631.2422297, 9f8603a4-2f28-496b-91b4-e30cc94657b4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:37:26 np0005593232 nova_compute[250269]: 2026-01-23 10:37:26.245 250273 INFO nova.compute.manager [-] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:37:26 np0005593232 nova_compute[250269]: 2026-01-23 10:37:26.255 250273 DEBUG oslo_concurrency.processutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/62d1083a-9525-43f2-8862-dac2d5af1dff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz7ijf7ev" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:37:26 np0005593232 nova_compute[250269]: 2026-01-23 10:37:26.305 250273 DEBUG nova.storage.rbd_utils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] rbd image 62d1083a-9525-43f2-8862-dac2d5af1dff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:37:26 np0005593232 nova_compute[250269]: 2026-01-23 10:37:26.311 250273 DEBUG oslo_concurrency.processutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/62d1083a-9525-43f2-8862-dac2d5af1dff/disk.config 62d1083a-9525-43f2-8862-dac2d5af1dff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:37:26 np0005593232 nova_compute[250269]: 2026-01-23 10:37:26.370 250273 DEBUG nova.compute.manager [None req-b15fae8a-a90b-4cca-b958-4892db78c010 - - - - - -] [instance: 9f8603a4-2f28-496b-91b4-e30cc94657b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:37:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:37:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:26.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:37:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:26.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:26 np0005593232 nova_compute[250269]: 2026-01-23 10:37:26.876 250273 DEBUG oslo_concurrency.processutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/62d1083a-9525-43f2-8862-dac2d5af1dff/disk.config 62d1083a-9525-43f2-8862-dac2d5af1dff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:37:26 np0005593232 nova_compute[250269]: 2026-01-23 10:37:26.877 250273 INFO nova.virt.libvirt.driver [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Deleting local config drive /var/lib/nova/instances/62d1083a-9525-43f2-8862-dac2d5af1dff/disk.config because it was imported into RBD.#033[00m
Jan 23 05:37:26 np0005593232 kernel: tap362368d8-3e: entered promiscuous mode
Jan 23 05:37:26 np0005593232 NetworkManager[49057]: <info>  [1769164646.9735] manager: (tap362368d8-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/350)
Jan 23 05:37:26 np0005593232 ovn_controller[151001]: 2026-01-23T10:37:26Z|00751|binding|INFO|Claiming lport 362368d8-3e1c-40ea-bc59-1224e20856c7 for this chassis.
Jan 23 05:37:26 np0005593232 ovn_controller[151001]: 2026-01-23T10:37:26Z|00752|binding|INFO|362368d8-3e1c-40ea-bc59-1224e20856c7: Claiming fa:16:3e:f7:f4:16 10.100.0.5
Jan 23 05:37:26 np0005593232 nova_compute[250269]: 2026-01-23 10:37:26.974 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:26.982 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:f4:16 10.100.0.5'], port_security=['fa:16:3e:f7:f4:16 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '62d1083a-9525-43f2-8862-dac2d5af1dff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-868ec025-7796-402b-ba12-8a3a5dac7373', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b27af793a8cc42259216fbeaa302ba03', 'neutron:revision_number': '2', 'neutron:security_group_ids': '92c18c07-ed83-4ff4-89c6-5a468e492558', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07547cf6-dd0a-47b0-85a9-11b383a1aaf0, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=362368d8-3e1c-40ea-bc59-1224e20856c7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:37:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:26.983 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 362368d8-3e1c-40ea-bc59-1224e20856c7 in datapath 868ec025-7796-402b-ba12-8a3a5dac7373 bound to our chassis#033[00m
Jan 23 05:37:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:26.984 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 868ec025-7796-402b-ba12-8a3a5dac7373#033[00m
Jan 23 05:37:27 np0005593232 ovn_controller[151001]: 2026-01-23T10:37:27Z|00753|binding|INFO|Setting lport 362368d8-3e1c-40ea-bc59-1224e20856c7 ovn-installed in OVS
Jan 23 05:37:27 np0005593232 ovn_controller[151001]: 2026-01-23T10:37:27Z|00754|binding|INFO|Setting lport 362368d8-3e1c-40ea-bc59-1224e20856c7 up in Southbound
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.010 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.013 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.012 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[dd42432b-a084-4773-be98-8e34fa6d1131]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.013 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap868ec025-71 in ovnmeta-868ec025-7796-402b-ba12-8a3a5dac7373 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.035 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap868ec025-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.035 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7d39ba7f-af65-468a-ae67-6ec8f2d183c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.036 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d0663357-bd24-499d-b949-09ce1ad86f26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:27 np0005593232 systemd-machined[215836]: New machine qemu-85-instance-000000bf.
Jan 23 05:37:27 np0005593232 systemd[1]: Started Virtual Machine qemu-85-instance-000000bf.
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.057 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[4f009154-c2cf-4a34-92c8-c63518f42b39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:27 np0005593232 systemd-udevd[381344]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:37:27 np0005593232 NetworkManager[49057]: <info>  [1769164647.0860] device (tap362368d8-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:37:27 np0005593232 NetworkManager[49057]: <info>  [1769164647.0877] device (tap362368d8-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.091 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[525887fe-a69d-4536-b906-4d058348c687]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.131 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[b384b35b-2ff0-4f22-b6c8-23cbb9f3a7e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:27 np0005593232 systemd-udevd[381350]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.142 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3fbb3fdd-1572-4651-acd7-aba62da58333]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:27 np0005593232 NetworkManager[49057]: <info>  [1769164647.1437] manager: (tap868ec025-70): new Veth device (/org/freedesktop/NetworkManager/Devices/351)
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.196 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[1e9032a6-935a-41fa-960a-ca8918221fef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.200 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[cdb4efa6-f259-4fac-927e-8fb52029b3d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:27 np0005593232 NetworkManager[49057]: <info>  [1769164647.2430] device (tap868ec025-70): carrier: link connected
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.252 250273 DEBUG nova.compute.manager [req-5cc3bce4-4c2f-4252-811a-2091e982b8c1 req-e345a5a0-7978-4603-b36c-5be0ccb0ae9c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Received event network-vif-plugged-362368d8-3e1c-40ea-bc59-1224e20856c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.253 250273 DEBUG oslo_concurrency.lockutils [req-5cc3bce4-4c2f-4252-811a-2091e982b8c1 req-e345a5a0-7978-4603-b36c-5be0ccb0ae9c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "62d1083a-9525-43f2-8862-dac2d5af1dff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.253 250273 DEBUG oslo_concurrency.lockutils [req-5cc3bce4-4c2f-4252-811a-2091e982b8c1 req-e345a5a0-7978-4603-b36c-5be0ccb0ae9c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "62d1083a-9525-43f2-8862-dac2d5af1dff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.254 250273 DEBUG oslo_concurrency.lockutils [req-5cc3bce4-4c2f-4252-811a-2091e982b8c1 req-e345a5a0-7978-4603-b36c-5be0ccb0ae9c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "62d1083a-9525-43f2-8862-dac2d5af1dff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.254 250273 DEBUG nova.compute.manager [req-5cc3bce4-4c2f-4252-811a-2091e982b8c1 req-e345a5a0-7978-4603-b36c-5be0ccb0ae9c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Processing event network-vif-plugged-362368d8-3e1c-40ea-bc59-1224e20856c7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.261 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[983808e3-bc0f-4ff4-b0f2-78686d4b94bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.291 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ce3bfb9a-04f8-4eab-bbb2-9c37b3b3fa04]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap868ec025-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:85:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856038, 'reachable_time': 43389, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381375, 'error': None, 'target': 'ovnmeta-868ec025-7796-402b-ba12-8a3a5dac7373', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.325 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[82de9317-3208-4bbf-a8f0-8743c1e2d6a7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea0:858e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 856038, 'tstamp': 856038}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381376, 'error': None, 'target': 'ovnmeta-868ec025-7796-402b-ba12-8a3a5dac7373', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.359 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0df56c7b-782b-4a19-a8e0-1b4646c1f78c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap868ec025-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a0:85:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 227], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856038, 'reachable_time': 43389, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 381392, 'error': None, 'target': 'ovnmeta-868ec025-7796-402b-ba12-8a3a5dac7373', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.396 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c8ea14bb-df53-42a4-81e3-17e4d6280fff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.475 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[be99e4a8-9799-4a9f-8a0f-5ca51e3dc339]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.477 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap868ec025-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.477 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.477 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap868ec025-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.479 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:27 np0005593232 kernel: tap868ec025-70: entered promiscuous mode
Jan 23 05:37:27 np0005593232 NetworkManager[49057]: <info>  [1769164647.4812] manager: (tap868ec025-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/352)
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.482 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap868ec025-70, col_values=(('external_ids', {'iface-id': 'a8d6af13-33ee-459a-b9e0-d67de69b3614'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:37:27 np0005593232 ovn_controller[151001]: 2026-01-23T10:37:27Z|00755|binding|INFO|Releasing lport a8d6af13-33ee-459a-b9e0-d67de69b3614 from this chassis (sb_readonly=0)
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.497 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.500 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/868ec025-7796-402b-ba12-8a3a5dac7373.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/868ec025-7796-402b-ba12-8a3a5dac7373.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.501 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a756560f-f712-4155-862a-9dfc2a25a6c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.502 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-868ec025-7796-402b-ba12-8a3a5dac7373
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/868ec025-7796-402b-ba12-8a3a5dac7373.pid.haproxy
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 868ec025-7796-402b-ba12-8a3a5dac7373
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:37:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:27.504 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-868ec025-7796-402b-ba12-8a3a5dac7373', 'env', 'PROCESS_TAG=haproxy-868ec025-7796-402b-ba12-8a3a5dac7373', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/868ec025-7796-402b-ba12-8a3a5dac7373.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.563 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164647.562721, 62d1083a-9525-43f2-8862-dac2d5af1dff => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.564 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] VM Started (Lifecycle Event)#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.569 250273 DEBUG nova.compute.manager [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.574 250273 DEBUG nova.virt.libvirt.driver [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.579 250273 INFO nova.virt.libvirt.driver [-] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Instance spawned successfully.#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.579 250273 DEBUG nova.virt.libvirt.driver [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.597 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.604 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.609 250273 DEBUG nova.virt.libvirt.driver [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.610 250273 DEBUG nova.virt.libvirt.driver [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.610 250273 DEBUG nova.virt.libvirt.driver [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.611 250273 DEBUG nova.virt.libvirt.driver [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.611 250273 DEBUG nova.virt.libvirt.driver [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.612 250273 DEBUG nova.virt.libvirt.driver [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.646 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.647 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164647.5630653, 62d1083a-9525-43f2-8862-dac2d5af1dff => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.647 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.677 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.682 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164647.572366, 62d1083a-9525-43f2-8862-dac2d5af1dff => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.682 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.686 250273 INFO nova.compute.manager [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Took 7.87 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.686 250273 DEBUG nova.compute.manager [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.700 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.704 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.743 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.766 250273 INFO nova.compute.manager [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Took 8.84 seconds to build instance.#033[00m
Jan 23 05:37:27 np0005593232 nova_compute[250269]: 2026-01-23 10:37:27.785 250273 DEBUG oslo_concurrency.lockutils [None req-7db7481e-1223-4d8e-aa26-a52f21a6544a a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "62d1083a-9525-43f2-8862-dac2d5af1dff" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:37:27 np0005593232 podman[381448]: 2026-01-23 10:37:27.929557339 +0000 UTC m=+0.066113290 container create 6b0bab2a9662072cd1cd7ae4315fd86f66a76e6c0d7f3b4f5d572a4107101a0d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-868ec025-7796-402b-ba12-8a3a5dac7373, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 23 05:37:27 np0005593232 systemd[1]: Started libpod-conmon-6b0bab2a9662072cd1cd7ae4315fd86f66a76e6c0d7f3b4f5d572a4107101a0d.scope.
Jan 23 05:37:27 np0005593232 podman[381448]: 2026-01-23 10:37:27.895970104 +0000 UTC m=+0.032526075 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:37:28 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:37:28 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/daf1086d32adecf2d9e5b8f097c12e7810865acef858e4bf0a7580dcc252ecda/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:37:28 np0005593232 podman[381448]: 2026-01-23 10:37:28.03372685 +0000 UTC m=+0.170282841 container init 6b0bab2a9662072cd1cd7ae4315fd86f66a76e6c0d7f3b4f5d572a4107101a0d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-868ec025-7796-402b-ba12-8a3a5dac7373, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 05:37:28 np0005593232 podman[381448]: 2026-01-23 10:37:28.041086519 +0000 UTC m=+0.177642470 container start 6b0bab2a9662072cd1cd7ae4315fd86f66a76e6c0d7f3b4f5d572a4107101a0d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-868ec025-7796-402b-ba12-8a3a5dac7373, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 05:37:28 np0005593232 neutron-haproxy-ovnmeta-868ec025-7796-402b-ba12-8a3a5dac7373[381464]: [NOTICE]   (381469) : New worker (381471) forked
Jan 23 05:37:28 np0005593232 neutron-haproxy-ovnmeta-868ec025-7796-402b-ba12-8a3a5dac7373[381464]: [NOTICE]   (381469) : Loading success.
Jan 23 05:37:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3298: 321 pgs: 321 active+clean; 257 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.0 MiB/s wr, 400 op/s
Jan 23 05:37:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:28.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:37:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:28.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:37:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e372 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:37:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e372 do_prune osdmap full prune enabled
Jan 23 05:37:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e373 e373: 3 total, 3 up, 3 in
Jan 23 05:37:29 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e373: 3 total, 3 up, 3 in
Jan 23 05:37:29 np0005593232 nova_compute[250269]: 2026-01-23 10:37:29.475 250273 DEBUG nova.compute.manager [req-5a3982d3-0056-48ea-b0d5-e3efffe729cd req-4e41f36e-58b0-46c7-b8fa-f68076bd4873 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Received event network-vif-plugged-362368d8-3e1c-40ea-bc59-1224e20856c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:37:29 np0005593232 nova_compute[250269]: 2026-01-23 10:37:29.476 250273 DEBUG oslo_concurrency.lockutils [req-5a3982d3-0056-48ea-b0d5-e3efffe729cd req-4e41f36e-58b0-46c7-b8fa-f68076bd4873 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "62d1083a-9525-43f2-8862-dac2d5af1dff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:37:29 np0005593232 nova_compute[250269]: 2026-01-23 10:37:29.477 250273 DEBUG oslo_concurrency.lockutils [req-5a3982d3-0056-48ea-b0d5-e3efffe729cd req-4e41f36e-58b0-46c7-b8fa-f68076bd4873 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "62d1083a-9525-43f2-8862-dac2d5af1dff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:37:29 np0005593232 nova_compute[250269]: 2026-01-23 10:37:29.478 250273 DEBUG oslo_concurrency.lockutils [req-5a3982d3-0056-48ea-b0d5-e3efffe729cd req-4e41f36e-58b0-46c7-b8fa-f68076bd4873 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "62d1083a-9525-43f2-8862-dac2d5af1dff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:37:29 np0005593232 nova_compute[250269]: 2026-01-23 10:37:29.478 250273 DEBUG nova.compute.manager [req-5a3982d3-0056-48ea-b0d5-e3efffe729cd req-4e41f36e-58b0-46c7-b8fa-f68076bd4873 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] No waiting events found dispatching network-vif-plugged-362368d8-3e1c-40ea-bc59-1224e20856c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:37:29 np0005593232 nova_compute[250269]: 2026-01-23 10:37:29.479 250273 WARNING nova.compute.manager [req-5a3982d3-0056-48ea-b0d5-e3efffe729cd req-4e41f36e-58b0-46c7-b8fa-f68076bd4873 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Received unexpected event network-vif-plugged-362368d8-3e1c-40ea-bc59-1224e20856c7 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:37:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3300: 321 pgs: 321 active+clean; 257 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 4.5 MiB/s wr, 360 op/s
Jan 23 05:37:30 np0005593232 nova_compute[250269]: 2026-01-23 10:37:30.636 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:30.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:30 np0005593232 nova_compute[250269]: 2026-01-23 10:37:30.668 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:37:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:30.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:37:31 np0005593232 ovn_controller[151001]: 2026-01-23T10:37:31Z|00756|binding|INFO|Releasing lport a8d6af13-33ee-459a-b9e0-d67de69b3614 from this chassis (sb_readonly=0)
Jan 23 05:37:31 np0005593232 nova_compute[250269]: 2026-01-23 10:37:31.322 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3301: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 3.2 MiB/s wr, 352 op/s
Jan 23 05:37:32 np0005593232 podman[381482]: 2026-01-23 10:37:32.455996065 +0000 UTC m=+0.111453189 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 05:37:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:37:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:32.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:37:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:32.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:33 np0005593232 nova_compute[250269]: 2026-01-23 10:37:33.821 250273 DEBUG nova.compute.manager [req-465574f8-c692-4c86-81d9-2211aec63f02 req-d24176ac-6664-46a5-882f-35ac5e9c5552 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Received event network-changed-362368d8-3e1c-40ea-bc59-1224e20856c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:37:33 np0005593232 nova_compute[250269]: 2026-01-23 10:37:33.822 250273 DEBUG nova.compute.manager [req-465574f8-c692-4c86-81d9-2211aec63f02 req-d24176ac-6664-46a5-882f-35ac5e9c5552 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Refreshing instance network info cache due to event network-changed-362368d8-3e1c-40ea-bc59-1224e20856c7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:37:33 np0005593232 nova_compute[250269]: 2026-01-23 10:37:33.823 250273 DEBUG oslo_concurrency.lockutils [req-465574f8-c692-4c86-81d9-2211aec63f02 req-d24176ac-6664-46a5-882f-35ac5e9c5552 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-62d1083a-9525-43f2-8862-dac2d5af1dff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:37:33 np0005593232 nova_compute[250269]: 2026-01-23 10:37:33.823 250273 DEBUG oslo_concurrency.lockutils [req-465574f8-c692-4c86-81d9-2211aec63f02 req-d24176ac-6664-46a5-882f-35ac5e9c5552 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-62d1083a-9525-43f2-8862-dac2d5af1dff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:37:33 np0005593232 nova_compute[250269]: 2026-01-23 10:37:33.823 250273 DEBUG nova.network.neutron [req-465574f8-c692-4c86-81d9-2211aec63f02 req-d24176ac-6664-46a5-882f-35ac5e9c5552 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Refreshing network info cache for port 362368d8-3e1c-40ea-bc59-1224e20856c7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:37:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:37:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3302: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 1.5 MiB/s wr, 277 op/s
Jan 23 05:37:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:34.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:37:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:34.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:37:35 np0005593232 nova_compute[250269]: 2026-01-23 10:37:35.640 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:35 np0005593232 nova_compute[250269]: 2026-01-23 10:37:35.670 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3303: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 4.7 MiB/s rd, 1.5 MiB/s wr, 277 op/s
Jan 23 05:37:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:36.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:36 np0005593232 nova_compute[250269]: 2026-01-23 10:37:36.714 250273 DEBUG nova.network.neutron [req-465574f8-c692-4c86-81d9-2211aec63f02 req-d24176ac-6664-46a5-882f-35ac5e9c5552 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Updated VIF entry in instance network info cache for port 362368d8-3e1c-40ea-bc59-1224e20856c7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:37:36 np0005593232 nova_compute[250269]: 2026-01-23 10:37:36.714 250273 DEBUG nova.network.neutron [req-465574f8-c692-4c86-81d9-2211aec63f02 req-d24176ac-6664-46a5-882f-35ac5e9c5552 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Updating instance_info_cache with network_info: [{"id": "362368d8-3e1c-40ea-bc59-1224e20856c7", "address": "fa:16:3e:f7:f4:16", "network": {"id": "868ec025-7796-402b-ba12-8a3a5dac7373", "bridge": "br-int", "label": "tempest-network-smoke--2132727647", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap362368d8-3e", "ovs_interfaceid": "362368d8-3e1c-40ea-bc59-1224e20856c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:37:36 np0005593232 nova_compute[250269]: 2026-01-23 10:37:36.758 250273 DEBUG oslo_concurrency.lockutils [req-465574f8-c692-4c86-81d9-2211aec63f02 req-d24176ac-6664-46a5-882f-35ac5e9c5552 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-62d1083a-9525-43f2-8862-dac2d5af1dff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:37:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:37:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:36.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:37:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:37:37
Jan 23 05:37:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:37:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:37:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'images', 'default.rgw.control', 'volumes', '.rgw.root', 'backups']
Jan 23 05:37:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:37:37 np0005593232 podman[381511]: 2026-01-23 10:37:37.419794972 +0000 UTC m=+0.072637075 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:37:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:37:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:37:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:37:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:37:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:37:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:37:38 np0005593232 ovn_controller[151001]: 2026-01-23T10:37:38Z|00757|binding|INFO|Releasing lport a8d6af13-33ee-459a-b9e0-d67de69b3614 from this chassis (sb_readonly=0)
Jan 23 05:37:38 np0005593232 nova_compute[250269]: 2026-01-23 10:37:38.142 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3304: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 5.1 KiB/s wr, 85 op/s
Jan 23 05:37:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:37:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:37:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:37:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:37:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:37:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:37:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:37:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:37:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:37:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:37:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:37:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:38.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:37:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:37:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:38.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:37:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:37:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3305: 321 pgs: 321 active+clean; 246 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.1 MiB/s rd, 4.7 KiB/s wr, 78 op/s
Jan 23 05:37:40 np0005593232 nova_compute[250269]: 2026-01-23 10:37:40.644 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:40.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:40 np0005593232 nova_compute[250269]: 2026-01-23 10:37:40.671 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:37:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:40.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:37:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3306: 321 pgs: 321 active+clean; 254 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 619 KiB/s wr, 85 op/s
Jan 23 05:37:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:42.654 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:37:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:42.654 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:37:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:42.655 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:37:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:37:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:42.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:37:42 np0005593232 ovn_controller[151001]: 2026-01-23T10:37:42Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f7:f4:16 10.100.0.5
Jan 23 05:37:42 np0005593232 ovn_controller[151001]: 2026-01-23T10:37:42Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f7:f4:16 10.100.0.5
Jan 23 05:37:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:42.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:37:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3307: 321 pgs: 321 active+clean; 272 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 85 op/s
Jan 23 05:37:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:37:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/483731196' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:37:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:37:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/483731196' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:37:44 np0005593232 nova_compute[250269]: 2026-01-23 10:37:44.635 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:44.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 05:37:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:44.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 05:37:45 np0005593232 nova_compute[250269]: 2026-01-23 10:37:45.646 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:45 np0005593232 nova_compute[250269]: 2026-01-23 10:37:45.674 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3308: 321 pgs: 321 active+clean; 272 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 281 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Jan 23 05:37:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:37:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:46.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:37:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:46.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:37:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:37:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:37:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:37:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.004311924204355109 of space, bias 1.0, pg target 1.2935772613065328 quantized to 32 (current 32)
Jan 23 05:37:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:37:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.650393169551461 quantized to 32 (current 32)
Jan 23 05:37:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:37:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:37:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:37:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 23 05:37:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:37:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 05:37:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:37:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:37:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:37:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 05:37:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:37:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 05:37:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:37:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:37:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:37:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 05:37:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3309: 321 pgs: 321 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 23 05:37:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:48.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:48.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:37:49 np0005593232 nova_compute[250269]: 2026-01-23 10:37:49.071 250273 DEBUG oslo_concurrency.lockutils [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "62d1083a-9525-43f2-8862-dac2d5af1dff" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:37:49 np0005593232 nova_compute[250269]: 2026-01-23 10:37:49.072 250273 DEBUG oslo_concurrency.lockutils [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "62d1083a-9525-43f2-8862-dac2d5af1dff" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:37:49 np0005593232 nova_compute[250269]: 2026-01-23 10:37:49.073 250273 DEBUG oslo_concurrency.lockutils [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "62d1083a-9525-43f2-8862-dac2d5af1dff-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:37:49 np0005593232 nova_compute[250269]: 2026-01-23 10:37:49.073 250273 DEBUG oslo_concurrency.lockutils [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "62d1083a-9525-43f2-8862-dac2d5af1dff-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:37:49 np0005593232 nova_compute[250269]: 2026-01-23 10:37:49.074 250273 DEBUG oslo_concurrency.lockutils [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "62d1083a-9525-43f2-8862-dac2d5af1dff-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:37:49 np0005593232 nova_compute[250269]: 2026-01-23 10:37:49.076 250273 INFO nova.compute.manager [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Terminating instance#033[00m
Jan 23 05:37:49 np0005593232 nova_compute[250269]: 2026-01-23 10:37:49.077 250273 DEBUG nova.compute.manager [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:37:49 np0005593232 kernel: tap362368d8-3e (unregistering): left promiscuous mode
Jan 23 05:37:49 np0005593232 NetworkManager[49057]: <info>  [1769164669.4577] device (tap362368d8-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:37:49 np0005593232 nova_compute[250269]: 2026-01-23 10:37:49.478 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:49 np0005593232 ovn_controller[151001]: 2026-01-23T10:37:49Z|00758|binding|INFO|Releasing lport 362368d8-3e1c-40ea-bc59-1224e20856c7 from this chassis (sb_readonly=0)
Jan 23 05:37:49 np0005593232 ovn_controller[151001]: 2026-01-23T10:37:49Z|00759|binding|INFO|Setting lport 362368d8-3e1c-40ea-bc59-1224e20856c7 down in Southbound
Jan 23 05:37:49 np0005593232 ovn_controller[151001]: 2026-01-23T10:37:49Z|00760|binding|INFO|Removing iface tap362368d8-3e ovn-installed in OVS
Jan 23 05:37:49 np0005593232 nova_compute[250269]: 2026-01-23 10:37:49.522 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:49 np0005593232 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000bf.scope: Deactivated successfully.
Jan 23 05:37:49 np0005593232 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d000000bf.scope: Consumed 14.754s CPU time.
Jan 23 05:37:49 np0005593232 systemd-machined[215836]: Machine qemu-85-instance-000000bf terminated.
Jan 23 05:37:49 np0005593232 kernel: tap362368d8-3e: entered promiscuous mode
Jan 23 05:37:49 np0005593232 kernel: tap362368d8-3e (unregistering): left promiscuous mode
Jan 23 05:37:49 np0005593232 nova_compute[250269]: 2026-01-23 10:37:49.710 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:37:49 np0005593232 nova_compute[250269]: 2026-01-23 10:37:49.719 250273 INFO nova.virt.libvirt.driver [-] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Instance destroyed successfully.#033[00m
Jan 23 05:37:49 np0005593232 nova_compute[250269]: 2026-01-23 10:37:49.719 250273 DEBUG nova.objects.instance [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lazy-loading 'resources' on Instance uuid 62d1083a-9525-43f2-8862-dac2d5af1dff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:37:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:49.899 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f7:f4:16 10.100.0.5'], port_security=['fa:16:3e:f7:f4:16 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '62d1083a-9525-43f2-8862-dac2d5af1dff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-868ec025-7796-402b-ba12-8a3a5dac7373', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b27af793a8cc42259216fbeaa302ba03', 'neutron:revision_number': '5', 'neutron:security_group_ids': '4f95f3a9-bfe8-439f-b277-b796a4874838', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07547cf6-dd0a-47b0-85a9-11b383a1aaf0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=362368d8-3e1c-40ea-bc59-1224e20856c7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:37:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:49.902 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 362368d8-3e1c-40ea-bc59-1224e20856c7 in datapath 868ec025-7796-402b-ba12-8a3a5dac7373 unbound from our chassis#033[00m
Jan 23 05:37:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:49.904 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 868ec025-7796-402b-ba12-8a3a5dac7373, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:37:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:49.906 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[586456fe-87ef-4bde-afb6-916cbee33f5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:49.907 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-868ec025-7796-402b-ba12-8a3a5dac7373 namespace which is not needed anymore#033[00m
Jan 23 05:37:49 np0005593232 nova_compute[250269]: 2026-01-23 10:37:49.908 250273 DEBUG nova.virt.libvirt.vif [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:37:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-622349977-gen-1-960718962',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-622349977-gen-1-960718962',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-622349977-gen',id=191,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBQ7hbI9dSlZowLrM+55BW4vZefgM5C7AtbCmYzjlG7RhMLD86z6HKT1ky7da4FJ/rvc4D//2MBDEN2yr//ERuTxme2OPEqVyC6OVqkosE4nxK5JvPAi4Vemn/j2yc45jQ==',key_name='tempest-TestSecurityGroupsBasicOps-832716294',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:37:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b27af793a8cc42259216fbeaa302ba03',ramdisk_id='',reservation_id='r-7xxgmy8d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-622349977',owner_user_name='tempest-TestSecurityGroupsBasicOps-622349977-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:37:27Z,user_data=None,user_id='a3cd8c3758e14f9c8e4ad1a9a94a9995',uuid=62d1083a-9525-43f2-8862-dac2d5af1dff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "362368d8-3e1c-40ea-bc59-1224e20856c7", "address": "fa:16:3e:f7:f4:16", "network": {"id": "868ec025-7796-402b-ba12-8a3a5dac7373", "bridge": "br-int", "label": "tempest-network-smoke--2132727647", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap362368d8-3e", "ovs_interfaceid": "362368d8-3e1c-40ea-bc59-1224e20856c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:37:49 np0005593232 nova_compute[250269]: 2026-01-23 10:37:49.910 250273 DEBUG nova.network.os_vif_util [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Converting VIF {"id": "362368d8-3e1c-40ea-bc59-1224e20856c7", "address": "fa:16:3e:f7:f4:16", "network": {"id": "868ec025-7796-402b-ba12-8a3a5dac7373", "bridge": "br-int", "label": "tempest-network-smoke--2132727647", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b27af793a8cc42259216fbeaa302ba03", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap362368d8-3e", "ovs_interfaceid": "362368d8-3e1c-40ea-bc59-1224e20856c7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:37:49 np0005593232 nova_compute[250269]: 2026-01-23 10:37:49.911 250273 DEBUG nova.network.os_vif_util [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f7:f4:16,bridge_name='br-int',has_traffic_filtering=True,id=362368d8-3e1c-40ea-bc59-1224e20856c7,network=Network(868ec025-7796-402b-ba12-8a3a5dac7373),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap362368d8-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 23 05:37:49 np0005593232 nova_compute[250269]: 2026-01-23 10:37:49.911 250273 DEBUG os_vif [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:f4:16,bridge_name='br-int',has_traffic_filtering=True,id=362368d8-3e1c-40ea-bc59-1224e20856c7,network=Network(868ec025-7796-402b-ba12-8a3a5dac7373),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap362368d8-3e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 23 05:37:49 np0005593232 nova_compute[250269]: 2026-01-23 10:37:49.913 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:37:49 np0005593232 nova_compute[250269]: 2026-01-23 10:37:49.913 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap362368d8-3e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 05:37:49 np0005593232 nova_compute[250269]: 2026-01-23 10:37:49.916 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:37:49 np0005593232 nova_compute[250269]: 2026-01-23 10:37:49.917 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:37:49 np0005593232 nova_compute[250269]: 2026-01-23 10:37:49.920 250273 INFO os_vif [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f7:f4:16,bridge_name='br-int',has_traffic_filtering=True,id=362368d8-3e1c-40ea-bc59-1224e20856c7,network=Network(868ec025-7796-402b-ba12-8a3a5dac7373),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap362368d8-3e')
Jan 23 05:37:50 np0005593232 neutron-haproxy-ovnmeta-868ec025-7796-402b-ba12-8a3a5dac7373[381464]: [NOTICE]   (381469) : haproxy version is 2.8.14-c23fe91
Jan 23 05:37:50 np0005593232 neutron-haproxy-ovnmeta-868ec025-7796-402b-ba12-8a3a5dac7373[381464]: [NOTICE]   (381469) : path to executable is /usr/sbin/haproxy
Jan 23 05:37:50 np0005593232 neutron-haproxy-ovnmeta-868ec025-7796-402b-ba12-8a3a5dac7373[381464]: [WARNING]  (381469) : Exiting Master process...
Jan 23 05:37:50 np0005593232 neutron-haproxy-ovnmeta-868ec025-7796-402b-ba12-8a3a5dac7373[381464]: [ALERT]    (381469) : Current worker (381471) exited with code 143 (Terminated)
Jan 23 05:37:50 np0005593232 neutron-haproxy-ovnmeta-868ec025-7796-402b-ba12-8a3a5dac7373[381464]: [WARNING]  (381469) : All workers exited. Exiting... (0)
Jan 23 05:37:50 np0005593232 systemd[1]: libpod-6b0bab2a9662072cd1cd7ae4315fd86f66a76e6c0d7f3b4f5d572a4107101a0d.scope: Deactivated successfully.
Jan 23 05:37:50 np0005593232 podman[381640]: 2026-01-23 10:37:50.203048582 +0000 UTC m=+0.184386852 container died 6b0bab2a9662072cd1cd7ae4315fd86f66a76e6c0d7f3b4f5d572a4107101a0d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-868ec025-7796-402b-ba12-8a3a5dac7373, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:37:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3310: 321 pgs: 321 active+clean; 279 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 23 05:37:50 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6b0bab2a9662072cd1cd7ae4315fd86f66a76e6c0d7f3b4f5d572a4107101a0d-userdata-shm.mount: Deactivated successfully.
Jan 23 05:37:50 np0005593232 systemd[1]: var-lib-containers-storage-overlay-daf1086d32adecf2d9e5b8f097c12e7810865acef858e4bf0a7580dcc252ecda-merged.mount: Deactivated successfully.
Jan 23 05:37:50 np0005593232 podman[381640]: 2026-01-23 10:37:50.350621267 +0000 UTC m=+0.331959527 container cleanup 6b0bab2a9662072cd1cd7ae4315fd86f66a76e6c0d7f3b4f5d572a4107101a0d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-868ec025-7796-402b-ba12-8a3a5dac7373, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 23 05:37:50 np0005593232 systemd[1]: libpod-conmon-6b0bab2a9662072cd1cd7ae4315fd86f66a76e6c0d7f3b4f5d572a4107101a0d.scope: Deactivated successfully.
Jan 23 05:37:50 np0005593232 podman[381672]: 2026-01-23 10:37:50.43376799 +0000 UTC m=+0.060306295 container remove 6b0bab2a9662072cd1cd7ae4315fd86f66a76e6c0d7f3b4f5d572a4107101a0d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-868ec025-7796-402b-ba12-8a3a5dac7373, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:37:50 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:50.440 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[97d2bba9-71e7-417e-9a58-c88585273090]: (4, ('Fri Jan 23 10:37:50 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-868ec025-7796-402b-ba12-8a3a5dac7373 (6b0bab2a9662072cd1cd7ae4315fd86f66a76e6c0d7f3b4f5d572a4107101a0d)\n6b0bab2a9662072cd1cd7ae4315fd86f66a76e6c0d7f3b4f5d572a4107101a0d\nFri Jan 23 10:37:50 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-868ec025-7796-402b-ba12-8a3a5dac7373 (6b0bab2a9662072cd1cd7ae4315fd86f66a76e6c0d7f3b4f5d572a4107101a0d)\n6b0bab2a9662072cd1cd7ae4315fd86f66a76e6c0d7f3b4f5d572a4107101a0d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:37:50 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:50.442 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[15c3e4d6-2434-4f13-bec9-fdf4fa5c68a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:37:50 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:50.443 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap868ec025-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 05:37:50 np0005593232 nova_compute[250269]: 2026-01-23 10:37:50.445 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:37:50 np0005593232 kernel: tap868ec025-70: left promiscuous mode
Jan 23 05:37:50 np0005593232 nova_compute[250269]: 2026-01-23 10:37:50.447 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:37:50 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:50.450 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[120f77c2-a8a8-4b1d-8b63-5bcee66bc37c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:37:50 np0005593232 nova_compute[250269]: 2026-01-23 10:37:50.461 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:37:50 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:50.471 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[40d1c90b-5f59-4114-9e52-f31ec029d1ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:37:50 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:50.472 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b58c6325-394b-450b-9ce4-13b8bdddbd96]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:37:50 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:50.486 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[dadccb09-6bc1-462a-ba72-ddddc8fe3c2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856026, 'reachable_time': 34955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381688, 'error': None, 'target': 'ovnmeta-868ec025-7796-402b-ba12-8a3a5dac7373', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:37:50 np0005593232 systemd[1]: run-netns-ovnmeta\x2d868ec025\x2d7796\x2d402b\x2dba12\x2d8a3a5dac7373.mount: Deactivated successfully.
Jan 23 05:37:50 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:50.490 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-868ec025-7796-402b-ba12-8a3a5dac7373 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 23 05:37:50 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:37:50.491 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[57c68e28-aa46-45f1-a5f6-5e7076a1a33e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 05:37:50 np0005593232 nova_compute[250269]: 2026-01-23 10:37:50.678 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:37:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:50.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:37:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:50.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:37:51 np0005593232 nova_compute[250269]: 2026-01-23 10:37:51.601 250273 DEBUG nova.compute.manager [req-8fe2faa0-dcc3-4f6d-86ca-348b5abb68d0 req-533d6c00-81c4-4643-9005-aa94ed0f6023 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Received event network-vif-unplugged-362368d8-3e1c-40ea-bc59-1224e20856c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:37:51 np0005593232 nova_compute[250269]: 2026-01-23 10:37:51.602 250273 DEBUG oslo_concurrency.lockutils [req-8fe2faa0-dcc3-4f6d-86ca-348b5abb68d0 req-533d6c00-81c4-4643-9005-aa94ed0f6023 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "62d1083a-9525-43f2-8862-dac2d5af1dff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:37:51 np0005593232 nova_compute[250269]: 2026-01-23 10:37:51.602 250273 DEBUG oslo_concurrency.lockutils [req-8fe2faa0-dcc3-4f6d-86ca-348b5abb68d0 req-533d6c00-81c4-4643-9005-aa94ed0f6023 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "62d1083a-9525-43f2-8862-dac2d5af1dff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:37:51 np0005593232 nova_compute[250269]: 2026-01-23 10:37:51.603 250273 DEBUG oslo_concurrency.lockutils [req-8fe2faa0-dcc3-4f6d-86ca-348b5abb68d0 req-533d6c00-81c4-4643-9005-aa94ed0f6023 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "62d1083a-9525-43f2-8862-dac2d5af1dff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:37:51 np0005593232 nova_compute[250269]: 2026-01-23 10:37:51.603 250273 DEBUG nova.compute.manager [req-8fe2faa0-dcc3-4f6d-86ca-348b5abb68d0 req-533d6c00-81c4-4643-9005-aa94ed0f6023 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] No waiting events found dispatching network-vif-unplugged-362368d8-3e1c-40ea-bc59-1224e20856c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 05:37:51 np0005593232 nova_compute[250269]: 2026-01-23 10:37:51.604 250273 DEBUG nova.compute.manager [req-8fe2faa0-dcc3-4f6d-86ca-348b5abb68d0 req-533d6c00-81c4-4643-9005-aa94ed0f6023 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Received event network-vif-unplugged-362368d8-3e1c-40ea-bc59-1224e20856c7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 23 05:37:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3311: 321 pgs: 321 active+clean; 253 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Jan 23 05:37:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:37:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:52.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:37:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:37:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:52.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:37:53 np0005593232 nova_compute[250269]: 2026-01-23 10:37:53.740 250273 DEBUG nova.compute.manager [req-77bdd99f-57b3-42d6-a548-1c0987e9c119 req-0e341e60-5ac1-43ea-af7a-acfb5a1b1e9f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Received event network-vif-plugged-362368d8-3e1c-40ea-bc59-1224e20856c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:37:53 np0005593232 nova_compute[250269]: 2026-01-23 10:37:53.741 250273 DEBUG oslo_concurrency.lockutils [req-77bdd99f-57b3-42d6-a548-1c0987e9c119 req-0e341e60-5ac1-43ea-af7a-acfb5a1b1e9f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "62d1083a-9525-43f2-8862-dac2d5af1dff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:37:53 np0005593232 nova_compute[250269]: 2026-01-23 10:37:53.741 250273 DEBUG oslo_concurrency.lockutils [req-77bdd99f-57b3-42d6-a548-1c0987e9c119 req-0e341e60-5ac1-43ea-af7a-acfb5a1b1e9f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "62d1083a-9525-43f2-8862-dac2d5af1dff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:37:53 np0005593232 nova_compute[250269]: 2026-01-23 10:37:53.742 250273 DEBUG oslo_concurrency.lockutils [req-77bdd99f-57b3-42d6-a548-1c0987e9c119 req-0e341e60-5ac1-43ea-af7a-acfb5a1b1e9f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "62d1083a-9525-43f2-8862-dac2d5af1dff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:37:53 np0005593232 nova_compute[250269]: 2026-01-23 10:37:53.742 250273 DEBUG nova.compute.manager [req-77bdd99f-57b3-42d6-a548-1c0987e9c119 req-0e341e60-5ac1-43ea-af7a-acfb5a1b1e9f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] No waiting events found dispatching network-vif-plugged-362368d8-3e1c-40ea-bc59-1224e20856c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 05:37:53 np0005593232 nova_compute[250269]: 2026-01-23 10:37:53.742 250273 WARNING nova.compute.manager [req-77bdd99f-57b3-42d6-a548-1c0987e9c119 req-0e341e60-5ac1-43ea-af7a-acfb5a1b1e9f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Received unexpected event network-vif-plugged-362368d8-3e1c-40ea-bc59-1224e20856c7 for instance with vm_state active and task_state deleting.
Jan 23 05:37:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:37:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3312: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 244 KiB/s rd, 1.6 MiB/s wr, 73 op/s
Jan 23 05:37:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:37:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:54.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:37:54 np0005593232 nova_compute[250269]: 2026-01-23 10:37:54.719 250273 INFO nova.virt.libvirt.driver [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Deleting instance files /var/lib/nova/instances/62d1083a-9525-43f2-8862-dac2d5af1dff_del
Jan 23 05:37:54 np0005593232 nova_compute[250269]: 2026-01-23 10:37:54.721 250273 INFO nova.virt.libvirt.driver [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Deletion of /var/lib/nova/instances/62d1083a-9525-43f2-8862-dac2d5af1dff_del complete
Jan 23 05:37:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:54.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:54 np0005593232 nova_compute[250269]: 2026-01-23 10:37:54.829 250273 INFO nova.compute.manager [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Took 5.75 seconds to destroy the instance on the hypervisor.
Jan 23 05:37:54 np0005593232 nova_compute[250269]: 2026-01-23 10:37:54.830 250273 DEBUG oslo.service.loopingcall [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 23 05:37:54 np0005593232 nova_compute[250269]: 2026-01-23 10:37:54.830 250273 DEBUG nova.compute.manager [-] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 23 05:37:54 np0005593232 nova_compute[250269]: 2026-01-23 10:37:54.831 250273 DEBUG nova.network.neutron [-] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 23 05:37:54 np0005593232 nova_compute[250269]: 2026-01-23 10:37:54.893 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:37:54 np0005593232 nova_compute[250269]: 2026-01-23 10:37:54.915 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:37:55 np0005593232 nova_compute[250269]: 2026-01-23 10:37:55.681 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:37:55 np0005593232 nova_compute[250269]: 2026-01-23 10:37:55.692 250273 DEBUG nova.network.neutron [-] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 05:37:55 np0005593232 nova_compute[250269]: 2026-01-23 10:37:55.719 250273 INFO nova.compute.manager [-] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Took 0.89 seconds to deallocate network for instance.
Jan 23 05:37:55 np0005593232 nova_compute[250269]: 2026-01-23 10:37:55.778 250273 DEBUG oslo_concurrency.lockutils [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:37:55 np0005593232 nova_compute[250269]: 2026-01-23 10:37:55.779 250273 DEBUG oslo_concurrency.lockutils [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:37:55 np0005593232 nova_compute[250269]: 2026-01-23 10:37:55.803 250273 DEBUG nova.compute.manager [req-27fed958-30bb-489c-ad2e-0b9b4219ceaa req-8be339f5-a463-4f41-9314-b8042bb98b95 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Received event network-vif-deleted-362368d8-3e1c-40ea-bc59-1224e20856c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:37:55 np0005593232 nova_compute[250269]: 2026-01-23 10:37:55.863 250273 DEBUG oslo_concurrency.processutils [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:37:55 np0005593232 nova_compute[250269]: 2026-01-23 10:37:55.916 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:37:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3313: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 91 KiB/s wr, 37 op/s
Jan 23 05:37:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:37:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3190741548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:37:56 np0005593232 nova_compute[250269]: 2026-01-23 10:37:56.380 250273 DEBUG oslo_concurrency.processutils [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:37:56 np0005593232 nova_compute[250269]: 2026-01-23 10:37:56.390 250273 DEBUG nova.compute.provider_tree [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 05:37:56 np0005593232 nova_compute[250269]: 2026-01-23 10:37:56.482 250273 DEBUG nova.scheduler.client.report [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 05:37:56 np0005593232 nova_compute[250269]: 2026-01-23 10:37:56.524 250273 DEBUG oslo_concurrency.lockutils [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:37:56 np0005593232 nova_compute[250269]: 2026-01-23 10:37:56.569 250273 INFO nova.scheduler.client.report [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Deleted allocations for instance 62d1083a-9525-43f2-8862-dac2d5af1dff
Jan 23 05:37:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:37:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:56.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:37:56 np0005593232 nova_compute[250269]: 2026-01-23 10:37:56.706 250273 DEBUG oslo_concurrency.lockutils [None req-f9c62fec-aa35-4734-a3e1-74894c3614ce a3cd8c3758e14f9c8e4ad1a9a94a9995 b27af793a8cc42259216fbeaa302ba03 - - default default] Lock "62d1083a-9525-43f2-8862-dac2d5af1dff" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:37:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:56.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3314: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 64 KiB/s rd, 93 KiB/s wr, 43 op/s
Jan 23 05:37:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:37:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:37:58.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:37:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:37:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:37:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:37:58.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:37:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:37:59.213564) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164679213701, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 1290, "num_deletes": 265, "total_data_size": 1977448, "memory_usage": 2002032, "flush_reason": "Manual Compaction"}
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164679230699, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 1955379, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72378, "largest_seqno": 73666, "table_properties": {"data_size": 1949267, "index_size": 3314, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13695, "raw_average_key_size": 20, "raw_value_size": 1936705, "raw_average_value_size": 2873, "num_data_blocks": 146, "num_entries": 674, "num_filter_entries": 674, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769164573, "oldest_key_time": 1769164573, "file_creation_time": 1769164679, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 17191 microseconds, and 10177 cpu microseconds.
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:37:59.230764) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 1955379 bytes OK
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:37:59.230790) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:37:59.238547) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:37:59.238574) EVENT_LOG_v1 {"time_micros": 1769164679238566, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:37:59.238600) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 1971561, prev total WAL file size 1971561, number of live WAL files 2.
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:37:59.239836) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303135' seq:72057594037927935, type:22 .. '6C6F676D0033323638' seq:0, type:0; will stop at (end)
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(1909KB)], [167(11MB)]
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164679239966, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 13759810, "oldest_snapshot_seqno": -1}
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 9857 keys, 13626063 bytes, temperature: kUnknown
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164679423250, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 13626063, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13561564, "index_size": 38722, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24709, "raw_key_size": 260091, "raw_average_key_size": 26, "raw_value_size": 13388040, "raw_average_value_size": 1358, "num_data_blocks": 1484, "num_entries": 9857, "num_filter_entries": 9857, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769164679, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:37:59.423506) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 13626063 bytes
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:37:59.425740) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 75.0 rd, 74.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 11.3 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(14.0) write-amplify(7.0) OK, records in: 10401, records dropped: 544 output_compression: NoCompression
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:37:59.425755) EVENT_LOG_v1 {"time_micros": 1769164679425748, "job": 104, "event": "compaction_finished", "compaction_time_micros": 183360, "compaction_time_cpu_micros": 62508, "output_level": 6, "num_output_files": 1, "total_output_size": 13626063, "num_input_records": 10401, "num_output_records": 9857, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164679426295, "job": 104, "event": "table_file_deletion", "file_number": 169}
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164679428468, "job": 104, "event": "table_file_deletion", "file_number": 167}
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:37:59.239738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:37:59.428525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:37:59.428530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:37:59.428532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:37:59.428534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:37:59 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:37:59.428535) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:37:59 np0005593232 nova_compute[250269]: 2026-01-23 10:37:59.919 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3315: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 24 KiB/s wr, 29 op/s
Jan 23 05:38:00 np0005593232 nova_compute[250269]: 2026-01-23 10:38:00.683 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:00.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:00.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:01 np0005593232 nova_compute[250269]: 2026-01-23 10:38:01.865 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:38:01 np0005593232 nova_compute[250269]: 2026-01-23 10:38:01.866 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:38:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3316: 321 pgs: 321 active+clean; 178 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 406 KiB/s rd, 24 KiB/s wr, 41 op/s
Jan 23 05:38:02 np0005593232 ceph-osd[85010]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 23 05:38:02 np0005593232 nova_compute[250269]: 2026-01-23 10:38:02.500 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:38:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:02.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:38:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:02.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:03 np0005593232 nova_compute[250269]: 2026-01-23 10:38:03.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:38:03 np0005593232 nova_compute[250269]: 2026-01-23 10:38:03.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:38:03 np0005593232 podman[381768]: 2026-01-23 10:38:03.425732625 +0000 UTC m=+0.084236125 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 23 05:38:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:38:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3317: 321 pgs: 321 active+clean; 121 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 24 KiB/s wr, 50 op/s
Jan 23 05:38:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:04.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:04 np0005593232 nova_compute[250269]: 2026-01-23 10:38:04.718 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769164669.716481, 62d1083a-9525-43f2-8862-dac2d5af1dff => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:38:04 np0005593232 nova_compute[250269]: 2026-01-23 10:38:04.718 250273 INFO nova.compute.manager [-] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:38:04 np0005593232 nova_compute[250269]: 2026-01-23 10:38:04.741 250273 DEBUG nova.compute.manager [None req-a0f3524b-3481-4618-b0b0-5569ed616166 - - - - - -] [instance: 62d1083a-9525-43f2-8862-dac2d5af1dff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:38:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:04.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:04 np0005593232 nova_compute[250269]: 2026-01-23 10:38:04.922 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:05 np0005593232 nova_compute[250269]: 2026-01-23 10:38:05.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:38:05 np0005593232 nova_compute[250269]: 2026-01-23 10:38:05.686 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3318: 321 pgs: 321 active+clean; 121 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.5 KiB/s wr, 35 op/s
Jan 23 05:38:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:06.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:38:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:06.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:38:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:38:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:38:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:38:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:38:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:38:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:38:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3319: 321 pgs: 321 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 3.5 KiB/s wr, 55 op/s
Jan 23 05:38:08 np0005593232 podman[381798]: 2026-01-23 10:38:08.418238527 +0000 UTC m=+0.065824702 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 23 05:38:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:08.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:08.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:38:09 np0005593232 nova_compute[250269]: 2026-01-23 10:38:09.925 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:10 np0005593232 nova_compute[250269]: 2026-01-23 10:38:10.140 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3320: 321 pgs: 321 active+clean; 120 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.0 KiB/s wr, 49 op/s
Jan 23 05:38:10 np0005593232 nova_compute[250269]: 2026-01-23 10:38:10.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:38:10 np0005593232 nova_compute[250269]: 2026-01-23 10:38:10.486 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:38:10.487 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=73, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=72) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:38:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:38:10.496 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:38:10 np0005593232 nova_compute[250269]: 2026-01-23 10:38:10.688 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:10.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:38:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:10.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:38:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3321: 321 pgs: 321 active+clean; 132 MiB data, 1.3 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 457 KiB/s wr, 52 op/s
Jan 23 05:38:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:38:12.499 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '73'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:38:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:38:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:12.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:38:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:38:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:12.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:38:12 np0005593232 nova_compute[250269]: 2026-01-23 10:38:12.882 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:13 np0005593232 nova_compute[250269]: 2026-01-23 10:38:13.117 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:13 np0005593232 nova_compute[250269]: 2026-01-23 10:38:13.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:38:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:38:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3322: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 23 05:38:14 np0005593232 nova_compute[250269]: 2026-01-23 10:38:14.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:38:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:14.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:38:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:14.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:38:14 np0005593232 nova_compute[250269]: 2026-01-23 10:38:14.928 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:15 np0005593232 nova_compute[250269]: 2026-01-23 10:38:15.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:38:15 np0005593232 nova_compute[250269]: 2026-01-23 10:38:15.429 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:38:15 np0005593232 nova_compute[250269]: 2026-01-23 10:38:15.430 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:38:15 np0005593232 nova_compute[250269]: 2026-01-23 10:38:15.430 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:38:15 np0005593232 nova_compute[250269]: 2026-01-23 10:38:15.430 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:38:15 np0005593232 nova_compute[250269]: 2026-01-23 10:38:15.431 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:38:15 np0005593232 nova_compute[250269]: 2026-01-23 10:38:15.692 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:38:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3847413735' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:38:15 np0005593232 nova_compute[250269]: 2026-01-23 10:38:15.882 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:38:16 np0005593232 nova_compute[250269]: 2026-01-23 10:38:16.104 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:38:16 np0005593232 nova_compute[250269]: 2026-01-23 10:38:16.106 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4145MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:38:16 np0005593232 nova_compute[250269]: 2026-01-23 10:38:16.106 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:38:16 np0005593232 nova_compute[250269]: 2026-01-23 10:38:16.107 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:38:16 np0005593232 nova_compute[250269]: 2026-01-23 10:38:16.190 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:38:16 np0005593232 nova_compute[250269]: 2026-01-23 10:38:16.190 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:38:16 np0005593232 nova_compute[250269]: 2026-01-23 10:38:16.207 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:38:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3323: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 23 05:38:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:38:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2652841328' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:38:16 np0005593232 nova_compute[250269]: 2026-01-23 10:38:16.719 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:38:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:38:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:16.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:38:16 np0005593232 nova_compute[250269]: 2026-01-23 10:38:16.726 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:38:16 np0005593232 nova_compute[250269]: 2026-01-23 10:38:16.748 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:38:16 np0005593232 nova_compute[250269]: 2026-01-23 10:38:16.776 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:38:16 np0005593232 nova_compute[250269]: 2026-01-23 10:38:16.777 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:38:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:16.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:17 np0005593232 nova_compute[250269]: 2026-01-23 10:38:17.779 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:38:17 np0005593232 nova_compute[250269]: 2026-01-23 10:38:17.779 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:38:17 np0005593232 nova_compute[250269]: 2026-01-23 10:38:17.780 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:38:17 np0005593232 nova_compute[250269]: 2026-01-23 10:38:17.806 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:38:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3324: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Jan 23 05:38:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:38:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:18.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:38:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:38:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:18.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:38:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:38:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:38:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:38:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:38:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:38:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:38:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:38:19 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 640d9fbf-fa17-411f-9b4a-d664869f8216 does not exist
Jan 23 05:38:19 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 5ff61393-4605-4a5c-91ca-5dafe454b197 does not exist
Jan 23 05:38:19 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 3fcec0b4-209d-49e7-9969-5f73c0107196 does not exist
Jan 23 05:38:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:38:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:38:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:38:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:38:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:38:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:38:19 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:38:19 np0005593232 nova_compute[250269]: 2026-01-23 10:38:19.931 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:19 np0005593232 podman[382136]: 2026-01-23 10:38:19.950276716 +0000 UTC m=+0.059361638 container create 071a350f045c9535dc7b1a8034f4de5c970ea78147b47bcb53d99fc7218b1ae5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_clarke, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 23 05:38:19 np0005593232 systemd[1]: Started libpod-conmon-071a350f045c9535dc7b1a8034f4de5c970ea78147b47bcb53d99fc7218b1ae5.scope.
Jan 23 05:38:20 np0005593232 podman[382136]: 2026-01-23 10:38:19.919220953 +0000 UTC m=+0.028305935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:38:20 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:38:20 np0005593232 podman[382136]: 2026-01-23 10:38:20.070858504 +0000 UTC m=+0.179943396 container init 071a350f045c9535dc7b1a8034f4de5c970ea78147b47bcb53d99fc7218b1ae5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 05:38:20 np0005593232 podman[382136]: 2026-01-23 10:38:20.088832194 +0000 UTC m=+0.197917086 container start 071a350f045c9535dc7b1a8034f4de5c970ea78147b47bcb53d99fc7218b1ae5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_clarke, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:38:20 np0005593232 podman[382136]: 2026-01-23 10:38:20.092655383 +0000 UTC m=+0.201740275 container attach 071a350f045c9535dc7b1a8034f4de5c970ea78147b47bcb53d99fc7218b1ae5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_clarke, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:38:20 np0005593232 infallible_clarke[382152]: 167 167
Jan 23 05:38:20 np0005593232 systemd[1]: libpod-071a350f045c9535dc7b1a8034f4de5c970ea78147b47bcb53d99fc7218b1ae5.scope: Deactivated successfully.
Jan 23 05:38:20 np0005593232 podman[382136]: 2026-01-23 10:38:20.103237514 +0000 UTC m=+0.212322456 container died 071a350f045c9535dc7b1a8034f4de5c970ea78147b47bcb53d99fc7218b1ae5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:38:20 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1eac08b8961e6a028442cf7467ef51b2e6e7c6e195be0f470fb0e58dfca78a14-merged.mount: Deactivated successfully.
Jan 23 05:38:20 np0005593232 podman[382136]: 2026-01-23 10:38:20.164151115 +0000 UTC m=+0.273236017 container remove 071a350f045c9535dc7b1a8034f4de5c970ea78147b47bcb53d99fc7218b1ae5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 05:38:20 np0005593232 systemd[1]: libpod-conmon-071a350f045c9535dc7b1a8034f4de5c970ea78147b47bcb53d99fc7218b1ae5.scope: Deactivated successfully.
Jan 23 05:38:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3325: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Jan 23 05:38:20 np0005593232 podman[382179]: 2026-01-23 10:38:20.405219017 +0000 UTC m=+0.056164477 container create fa4f70814457b3b940eb8e849d77698438cab0f5682a77b5111fbafedc08470a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bassi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 23 05:38:20 np0005593232 systemd[1]: Started libpod-conmon-fa4f70814457b3b940eb8e849d77698438cab0f5682a77b5111fbafedc08470a.scope.
Jan 23 05:38:20 np0005593232 podman[382179]: 2026-01-23 10:38:20.383598463 +0000 UTC m=+0.034543953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:38:20 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:38:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6da2abdc298b01dadd70d16beb83997315bd23dc0801b987422999171d4998c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:38:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6da2abdc298b01dadd70d16beb83997315bd23dc0801b987422999171d4998c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:38:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6da2abdc298b01dadd70d16beb83997315bd23dc0801b987422999171d4998c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:38:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6da2abdc298b01dadd70d16beb83997315bd23dc0801b987422999171d4998c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:38:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6da2abdc298b01dadd70d16beb83997315bd23dc0801b987422999171d4998c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:38:20 np0005593232 podman[382179]: 2026-01-23 10:38:20.535384597 +0000 UTC m=+0.186330077 container init fa4f70814457b3b940eb8e849d77698438cab0f5682a77b5111fbafedc08470a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bassi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:38:20 np0005593232 podman[382179]: 2026-01-23 10:38:20.546709919 +0000 UTC m=+0.197655389 container start fa4f70814457b3b940eb8e849d77698438cab0f5682a77b5111fbafedc08470a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bassi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:38:20 np0005593232 podman[382179]: 2026-01-23 10:38:20.551083763 +0000 UTC m=+0.202029233 container attach fa4f70814457b3b940eb8e849d77698438cab0f5682a77b5111fbafedc08470a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bassi, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:38:20 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:38:20 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:38:20 np0005593232 nova_compute[250269]: 2026-01-23 10:38:20.692 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:20.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:20.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:21 np0005593232 keen_bassi[382196]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:38:21 np0005593232 keen_bassi[382196]: --> relative data size: 1.0
Jan 23 05:38:21 np0005593232 keen_bassi[382196]: --> All data devices are unavailable
Jan 23 05:38:21 np0005593232 systemd[1]: libpod-fa4f70814457b3b940eb8e849d77698438cab0f5682a77b5111fbafedc08470a.scope: Deactivated successfully.
Jan 23 05:38:21 np0005593232 podman[382179]: 2026-01-23 10:38:21.441561934 +0000 UTC m=+1.092507384 container died fa4f70814457b3b940eb8e849d77698438cab0f5682a77b5111fbafedc08470a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 05:38:21 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a6da2abdc298b01dadd70d16beb83997315bd23dc0801b987422999171d4998c-merged.mount: Deactivated successfully.
Jan 23 05:38:21 np0005593232 podman[382179]: 2026-01-23 10:38:21.515939138 +0000 UTC m=+1.166884628 container remove fa4f70814457b3b940eb8e849d77698438cab0f5682a77b5111fbafedc08470a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bassi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 05:38:21 np0005593232 systemd[1]: libpod-conmon-fa4f70814457b3b940eb8e849d77698438cab0f5682a77b5111fbafedc08470a.scope: Deactivated successfully.
Jan 23 05:38:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3326: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 395 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:38:22 np0005593232 podman[382402]: 2026-01-23 10:38:22.398838682 +0000 UTC m=+0.079364307 container create 0c3ba1297c1880d52715a0c1e277225d3d917b78f236ed8e3d2ce432a391c281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:38:22 np0005593232 systemd[1]: Started libpod-conmon-0c3ba1297c1880d52715a0c1e277225d3d917b78f236ed8e3d2ce432a391c281.scope.
Jan 23 05:38:22 np0005593232 podman[382402]: 2026-01-23 10:38:22.366324158 +0000 UTC m=+0.046849853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:38:22 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:38:22 np0005593232 podman[382402]: 2026-01-23 10:38:22.498188806 +0000 UTC m=+0.178714491 container init 0c3ba1297c1880d52715a0c1e277225d3d917b78f236ed8e3d2ce432a391c281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shtern, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:38:22 np0005593232 podman[382402]: 2026-01-23 10:38:22.510581508 +0000 UTC m=+0.191107133 container start 0c3ba1297c1880d52715a0c1e277225d3d917b78f236ed8e3d2ce432a391c281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shtern, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 05:38:22 np0005593232 podman[382402]: 2026-01-23 10:38:22.515468657 +0000 UTC m=+0.195994292 container attach 0c3ba1297c1880d52715a0c1e277225d3d917b78f236ed8e3d2ce432a391c281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shtern, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 23 05:38:22 np0005593232 fervent_shtern[382431]: 167 167
Jan 23 05:38:22 np0005593232 systemd[1]: libpod-0c3ba1297c1880d52715a0c1e277225d3d917b78f236ed8e3d2ce432a391c281.scope: Deactivated successfully.
Jan 23 05:38:22 np0005593232 podman[382402]: 2026-01-23 10:38:22.52050858 +0000 UTC m=+0.201034215 container died 0c3ba1297c1880d52715a0c1e277225d3d917b78f236ed8e3d2ce432a391c281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:38:22 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4e298152774de97d7357c9131435652985e6375065d25163419f020a5985fb84-merged.mount: Deactivated successfully.
Jan 23 05:38:22 np0005593232 podman[382402]: 2026-01-23 10:38:22.575179614 +0000 UTC m=+0.255705239 container remove 0c3ba1297c1880d52715a0c1e277225d3d917b78f236ed8e3d2ce432a391c281 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 05:38:22 np0005593232 systemd[1]: libpod-conmon-0c3ba1297c1880d52715a0c1e277225d3d917b78f236ed8e3d2ce432a391c281.scope: Deactivated successfully.
Jan 23 05:38:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:22.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:22 np0005593232 podman[382456]: 2026-01-23 10:38:22.814679682 +0000 UTC m=+0.063039973 container create 79ff57bb4efbc609059f5e809fbb3ba65f87d952405c3af007e69b49dc19ebec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bassi, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:38:22 np0005593232 systemd[1]: Started libpod-conmon-79ff57bb4efbc609059f5e809fbb3ba65f87d952405c3af007e69b49dc19ebec.scope.
Jan 23 05:38:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:38:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:22.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:38:22 np0005593232 podman[382456]: 2026-01-23 10:38:22.79704035 +0000 UTC m=+0.045400661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:38:22 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:38:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8cdfce4d5e0999af11009a9bac93bebb227c6d7df375992bcc0012b9ebd4f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:38:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8cdfce4d5e0999af11009a9bac93bebb227c6d7df375992bcc0012b9ebd4f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:38:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8cdfce4d5e0999af11009a9bac93bebb227c6d7df375992bcc0012b9ebd4f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:38:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de8cdfce4d5e0999af11009a9bac93bebb227c6d7df375992bcc0012b9ebd4f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:38:22 np0005593232 podman[382456]: 2026-01-23 10:38:22.916196677 +0000 UTC m=+0.164556998 container init 79ff57bb4efbc609059f5e809fbb3ba65f87d952405c3af007e69b49dc19ebec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bassi, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:38:22 np0005593232 podman[382456]: 2026-01-23 10:38:22.927600781 +0000 UTC m=+0.175961082 container start 79ff57bb4efbc609059f5e809fbb3ba65f87d952405c3af007e69b49dc19ebec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bassi, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Jan 23 05:38:22 np0005593232 podman[382456]: 2026-01-23 10:38:22.931279646 +0000 UTC m=+0.179640017 container attach 79ff57bb4efbc609059f5e809fbb3ba65f87d952405c3af007e69b49dc19ebec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bassi, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]: {
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:    "0": [
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:        {
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:            "devices": [
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:                "/dev/loop3"
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:            ],
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:            "lv_name": "ceph_lv0",
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:            "lv_size": "7511998464",
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:            "name": "ceph_lv0",
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:            "tags": {
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:                "ceph.cluster_name": "ceph",
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:                "ceph.crush_device_class": "",
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:                "ceph.encrypted": "0",
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:                "ceph.osd_id": "0",
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:                "ceph.type": "block",
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:                "ceph.vdo": "0"
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:            },
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:            "type": "block",
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:            "vg_name": "ceph_vg0"
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:        }
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]:    ]
Jan 23 05:38:23 np0005593232 interesting_bassi[382472]: }
Jan 23 05:38:23 np0005593232 systemd[1]: libpod-79ff57bb4efbc609059f5e809fbb3ba65f87d952405c3af007e69b49dc19ebec.scope: Deactivated successfully.
Jan 23 05:38:23 np0005593232 podman[382456]: 2026-01-23 10:38:23.774678568 +0000 UTC m=+1.023038869 container died 79ff57bb4efbc609059f5e809fbb3ba65f87d952405c3af007e69b49dc19ebec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bassi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:38:23 np0005593232 systemd[1]: var-lib-containers-storage-overlay-de8cdfce4d5e0999af11009a9bac93bebb227c6d7df375992bcc0012b9ebd4f5-merged.mount: Deactivated successfully.
Jan 23 05:38:23 np0005593232 podman[382456]: 2026-01-23 10:38:23.987014933 +0000 UTC m=+1.235375254 container remove 79ff57bb4efbc609059f5e809fbb3ba65f87d952405c3af007e69b49dc19ebec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bassi, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:38:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:38:24 np0005593232 systemd[1]: libpod-conmon-79ff57bb4efbc609059f5e809fbb3ba65f87d952405c3af007e69b49dc19ebec.scope: Deactivated successfully.
Jan 23 05:38:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3327: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.3 MiB/s wr, 28 op/s
Jan 23 05:38:24 np0005593232 nova_compute[250269]: 2026-01-23 10:38:24.313 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:38:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:24.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:24 np0005593232 podman[382634]: 2026-01-23 10:38:24.771976235 +0000 UTC m=+0.077451502 container create 6ae1a77f3aada1a6be7aebb053203554c41295665c815167c41d83ac702a2d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kowalevski, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:38:24 np0005593232 podman[382634]: 2026-01-23 10:38:24.718528136 +0000 UTC m=+0.024003443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:38:24 np0005593232 systemd[1]: Started libpod-conmon-6ae1a77f3aada1a6be7aebb053203554c41295665c815167c41d83ac702a2d28.scope.
Jan 23 05:38:24 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:38:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:38:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:24.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:38:24 np0005593232 nova_compute[250269]: 2026-01-23 10:38:24.935 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:24 np0005593232 podman[382634]: 2026-01-23 10:38:24.990358822 +0000 UTC m=+0.295834129 container init 6ae1a77f3aada1a6be7aebb053203554c41295665c815167c41d83ac702a2d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 05:38:25 np0005593232 podman[382634]: 2026-01-23 10:38:24.998479423 +0000 UTC m=+0.303954680 container start 6ae1a77f3aada1a6be7aebb053203554c41295665c815167c41d83ac702a2d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 23 05:38:25 np0005593232 dazzling_kowalevski[382648]: 167 167
Jan 23 05:38:25 np0005593232 systemd[1]: libpod-6ae1a77f3aada1a6be7aebb053203554c41295665c815167c41d83ac702a2d28.scope: Deactivated successfully.
Jan 23 05:38:25 np0005593232 podman[382634]: 2026-01-23 10:38:25.004628398 +0000 UTC m=+0.310103715 container attach 6ae1a77f3aada1a6be7aebb053203554c41295665c815167c41d83ac702a2d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kowalevski, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:38:25 np0005593232 conmon[382648]: conmon 6ae1a77f3aada1a6be7a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6ae1a77f3aada1a6be7aebb053203554c41295665c815167c41d83ac702a2d28.scope/container/memory.events
Jan 23 05:38:25 np0005593232 podman[382634]: 2026-01-23 10:38:25.005438931 +0000 UTC m=+0.310914188 container died 6ae1a77f3aada1a6be7aebb053203554c41295665c815167c41d83ac702a2d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 23 05:38:25 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b67b705be1909878ceb473cdbf2e5bed6b1ef6cdfbe9f15c022f98a75b83bbff-merged.mount: Deactivated successfully.
Jan 23 05:38:25 np0005593232 podman[382634]: 2026-01-23 10:38:25.052427036 +0000 UTC m=+0.357902303 container remove 6ae1a77f3aada1a6be7aebb053203554c41295665c815167c41d83ac702a2d28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kowalevski, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 05:38:25 np0005593232 systemd[1]: libpod-conmon-6ae1a77f3aada1a6be7aebb053203554c41295665c815167c41d83ac702a2d28.scope: Deactivated successfully.
Jan 23 05:38:25 np0005593232 podman[382675]: 2026-01-23 10:38:25.278291196 +0000 UTC m=+0.043268721 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:38:25 np0005593232 podman[382675]: 2026-01-23 10:38:25.392178133 +0000 UTC m=+0.157155578 container create ce41daef478ed8a588a215ba1e3a1a8937dc831ca03d9290e82eff8d21b2f5b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:38:25 np0005593232 systemd[1]: Started libpod-conmon-ce41daef478ed8a588a215ba1e3a1a8937dc831ca03d9290e82eff8d21b2f5b2.scope.
Jan 23 05:38:25 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:38:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b48a5a3e2214413a346ffb4ee66bdee182ca319ecb19973b5b5d77ad0118c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:38:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b48a5a3e2214413a346ffb4ee66bdee182ca319ecb19973b5b5d77ad0118c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:38:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b48a5a3e2214413a346ffb4ee66bdee182ca319ecb19973b5b5d77ad0118c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:38:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b48a5a3e2214413a346ffb4ee66bdee182ca319ecb19973b5b5d77ad0118c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:38:25 np0005593232 podman[382675]: 2026-01-23 10:38:25.554440984 +0000 UTC m=+0.319418489 container init ce41daef478ed8a588a215ba1e3a1a8937dc831ca03d9290e82eff8d21b2f5b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:38:25 np0005593232 podman[382675]: 2026-01-23 10:38:25.569679187 +0000 UTC m=+0.334656672 container start ce41daef478ed8a588a215ba1e3a1a8937dc831ca03d9290e82eff8d21b2f5b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 23 05:38:25 np0005593232 podman[382675]: 2026-01-23 10:38:25.574667399 +0000 UTC m=+0.339644884 container attach ce41daef478ed8a588a215ba1e3a1a8937dc831ca03d9290e82eff8d21b2f5b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_neumann, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 23 05:38:25 np0005593232 nova_compute[250269]: 2026-01-23 10:38:25.694 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3328: 321 pgs: 321 active+clean; 167 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 682 B/s wr, 15 op/s
Jan 23 05:38:26 np0005593232 nifty_neumann[382692]: {
Jan 23 05:38:26 np0005593232 nifty_neumann[382692]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:38:26 np0005593232 nifty_neumann[382692]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:38:26 np0005593232 nifty_neumann[382692]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:38:26 np0005593232 nifty_neumann[382692]:        "osd_id": 0,
Jan 23 05:38:26 np0005593232 nifty_neumann[382692]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:38:26 np0005593232 nifty_neumann[382692]:        "type": "bluestore"
Jan 23 05:38:26 np0005593232 nifty_neumann[382692]:    }
Jan 23 05:38:26 np0005593232 nifty_neumann[382692]: }
Jan 23 05:38:26 np0005593232 systemd[1]: libpod-ce41daef478ed8a588a215ba1e3a1a8937dc831ca03d9290e82eff8d21b2f5b2.scope: Deactivated successfully.
Jan 23 05:38:26 np0005593232 conmon[382692]: conmon ce41daef478ed8a588a2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce41daef478ed8a588a215ba1e3a1a8937dc831ca03d9290e82eff8d21b2f5b2.scope/container/memory.events
Jan 23 05:38:26 np0005593232 podman[382675]: 2026-01-23 10:38:26.514737689 +0000 UTC m=+1.279715134 container died ce41daef478ed8a588a215ba1e3a1a8937dc831ca03d9290e82eff8d21b2f5b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_neumann, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:38:26 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f3b48a5a3e2214413a346ffb4ee66bdee182ca319ecb19973b5b5d77ad0118c9-merged.mount: Deactivated successfully.
Jan 23 05:38:26 np0005593232 podman[382675]: 2026-01-23 10:38:26.584804801 +0000 UTC m=+1.349782256 container remove ce41daef478ed8a588a215ba1e3a1a8937dc831ca03d9290e82eff8d21b2f5b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_neumann, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:38:26 np0005593232 systemd[1]: libpod-conmon-ce41daef478ed8a588a215ba1e3a1a8937dc831ca03d9290e82eff8d21b2f5b2.scope: Deactivated successfully.
Jan 23 05:38:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:38:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:38:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:38:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:38:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:26.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:38:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:38:26 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 46cb73c4-139a-4c44-b217-c533a0bde0d7 does not exist
Jan 23 05:38:26 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 9db2efd6-1537-4989-8abc-aeca3ae15c35 does not exist
Jan 23 05:38:26 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 312f5e80-07db-4fdc-a701-f7f2945334f5 does not exist
Jan 23 05:38:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:26.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:27 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:38:27 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:38:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3329: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 81 op/s
Jan 23 05:38:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:28.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:28.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:38:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e373 do_prune osdmap full prune enabled
Jan 23 05:38:29 np0005593232 nova_compute[250269]: 2026-01-23 10:38:29.938 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e374 e374: 3 total, 3 up, 3 in
Jan 23 05:38:29 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e374: 3 total, 3 up, 3 in
Jan 23 05:38:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3331: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.9 MiB/s rd, 2.1 MiB/s wr, 84 op/s
Jan 23 05:38:30 np0005593232 nova_compute[250269]: 2026-01-23 10:38:30.696 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:30.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:30.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e374 do_prune osdmap full prune enabled
Jan 23 05:38:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e375 e375: 3 total, 3 up, 3 in
Jan 23 05:38:31 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e375: 3 total, 3 up, 3 in
Jan 23 05:38:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e375 do_prune osdmap full prune enabled
Jan 23 05:38:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e376 e376: 3 total, 3 up, 3 in
Jan 23 05:38:32 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e376: 3 total, 3 up, 3 in
Jan 23 05:38:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3334: 321 pgs: 321 active+clean; 213 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 163 op/s
Jan 23 05:38:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:32.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:38:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:32.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:38:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:38:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3335: 321 pgs: 321 active+clean; 214 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 17 KiB/s wr, 107 op/s
Jan 23 05:38:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e376 do_prune osdmap full prune enabled
Jan 23 05:38:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e377 e377: 3 total, 3 up, 3 in
Jan 23 05:38:34 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e377: 3 total, 3 up, 3 in
Jan 23 05:38:34 np0005593232 podman[382783]: 2026-01-23 10:38:34.508059059 +0000 UTC m=+0.163449337 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 23 05:38:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:34.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:38:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:34.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:38:34 np0005593232 nova_compute[250269]: 2026-01-23 10:38:34.941 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:35 np0005593232 nova_compute[250269]: 2026-01-23 10:38:35.698 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3337: 321 pgs: 321 active+clean; 214 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 3.7 MiB/s rd, 17 KiB/s wr, 107 op/s
Jan 23 05:38:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:36.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:36.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:38:37
Jan 23 05:38:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:38:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:38:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'images', 'backups', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'vms', 'volumes']
Jan 23 05:38:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:38:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:38:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:38:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:38:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:38:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:38:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:38:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3338: 321 pgs: 321 active+clean; 259 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 2.9 MiB/s wr, 119 op/s
Jan 23 05:38:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:38:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/641068743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:38:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:38:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:38:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:38:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:38:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:38:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:38:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:38:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:38:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:38:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:38:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:38:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:38.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:38:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:38.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:38:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e377 do_prune osdmap full prune enabled
Jan 23 05:38:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e378 e378: 3 total, 3 up, 3 in
Jan 23 05:38:39 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e378: 3 total, 3 up, 3 in
Jan 23 05:38:39 np0005593232 podman[382812]: 2026-01-23 10:38:39.414215269 +0000 UTC m=+0.069031173 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 05:38:39 np0005593232 nova_compute[250269]: 2026-01-23 10:38:39.944 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e378 do_prune osdmap full prune enabled
Jan 23 05:38:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e379 e379: 3 total, 3 up, 3 in
Jan 23 05:38:40 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e379: 3 total, 3 up, 3 in
Jan 23 05:38:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3341: 321 pgs: 321 active+clean; 259 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.4 MiB/s rd, 3.5 MiB/s wr, 35 op/s
Jan 23 05:38:40 np0005593232 nova_compute[250269]: 2026-01-23 10:38:40.701 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:40.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:40.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3342: 321 pgs: 321 active+clean; 301 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.7 MiB/s rd, 5.5 MiB/s wr, 48 op/s
Jan 23 05:38:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:38:42.654 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:38:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:38:42.655 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:38:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:38:42.656 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:38:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:42.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:38:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:42.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:38:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:38:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3343: 321 pgs: 321 active+clean; 385 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 5.0 MiB/s rd, 11 MiB/s wr, 201 op/s
Jan 23 05:38:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:38:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3513497207' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:38:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:38:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3513497207' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:38:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:44.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:38:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:44.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:38:44 np0005593232 nova_compute[250269]: 2026-01-23 10:38:44.948 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:45 np0005593232 nova_compute[250269]: 2026-01-23 10:38:45.703 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3344: 321 pgs: 321 active+clean; 385 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.2 MiB/s rd, 8.5 MiB/s wr, 175 op/s
Jan 23 05:38:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:38:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:46.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:38:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:46.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:38:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:38:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:38:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:38:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009958283896333519 of space, bias 1.0, pg target 0.2987485168900056 quantized to 32 (current 32)
Jan 23 05:38:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:38:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00630867008654383 of space, bias 1.0, pg target 1.892601025963149 quantized to 32 (current 32)
Jan 23 05:38:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:38:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0009892852573050437 of space, bias 1.0, pg target 0.2957962919342081 quantized to 32 (current 32)
Jan 23 05:38:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:38:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 23 05:38:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:38:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 05:38:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:38:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:38:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:38:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 05:38:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:38:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 05:38:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:38:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:38:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:38:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 05:38:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 05:38:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.0 total, 600.0 interval#012Cumulative writes: 16K writes, 74K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.02 MB/s#012Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.11 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1651 writes, 7500 keys, 1651 commit groups, 1.0 writes per commit group, ingest: 11.26 MB, 0.02 MB/s#012Interval WAL: 1651 writes, 1651 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     48.2      2.07              0.39        52    0.040       0      0       0.0       0.0#012  L6      1/0   12.99 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.1     85.5     73.0      6.92              1.73        51    0.136    370K    27K       0.0       0.0#012 Sum      1/0   12.99 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.1     65.8     67.3      9.00              2.12       103    0.087    370K    27K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.7     84.1     87.5      0.93              0.35        12    0.077     60K   3151       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0     85.5     73.0      6.92              1.73        51    0.136    370K    27K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     48.3      2.07              0.39        51    0.041       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.0 total, 600.0 interval#012Flush(GB): cumulative 0.098, interval 0.012#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.59 GB write, 0.10 MB/s write, 0.58 GB read, 0.10 MB/s read, 9.0 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a844231f0#2 capacity: 304.00 MB usage: 65.22 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000521 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3760,62.57 MB,20.5806%) FilterBlock(104,1.01 MB,0.331111%) IndexBlock(104,1.65 MB,0.542244%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 23 05:38:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3345: 321 pgs: 321 active+clean; 385 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.8 MiB/s rd, 7.4 MiB/s wr, 152 op/s
Jan 23 05:38:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:38:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:48.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:38:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:48.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:38:49 np0005593232 nova_compute[250269]: 2026-01-23 10:38:49.951 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3346: 321 pgs: 321 active+clean; 385 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.5 MiB/s rd, 6.7 MiB/s wr, 137 op/s
Jan 23 05:38:50 np0005593232 nova_compute[250269]: 2026-01-23 10:38:50.705 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:50.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:38:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:50.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:38:52 np0005593232 nova_compute[250269]: 2026-01-23 10:38:52.241 250273 DEBUG oslo_concurrency.lockutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Acquiring lock "c664bd16-8380-4052-abce-702c782ec7b0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:38:52 np0005593232 nova_compute[250269]: 2026-01-23 10:38:52.242 250273 DEBUG oslo_concurrency.lockutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Lock "c664bd16-8380-4052-abce-702c782ec7b0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:38:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3347: 321 pgs: 321 active+clean; 385 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 132 op/s
Jan 23 05:38:52 np0005593232 nova_compute[250269]: 2026-01-23 10:38:52.268 250273 DEBUG nova.compute.manager [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:38:52 np0005593232 nova_compute[250269]: 2026-01-23 10:38:52.358 250273 DEBUG oslo_concurrency.lockutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:38:52 np0005593232 nova_compute[250269]: 2026-01-23 10:38:52.359 250273 DEBUG oslo_concurrency.lockutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:38:52 np0005593232 nova_compute[250269]: 2026-01-23 10:38:52.371 250273 DEBUG nova.virt.hardware [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:38:52 np0005593232 nova_compute[250269]: 2026-01-23 10:38:52.372 250273 INFO nova.compute.claims [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:38:52 np0005593232 nova_compute[250269]: 2026-01-23 10:38:52.526 250273 DEBUG oslo_concurrency.processutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:38:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:38:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:52.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:38:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:52.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:38:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2558265131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.064 250273 DEBUG oslo_concurrency.processutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.075 250273 DEBUG nova.compute.provider_tree [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.100 250273 DEBUG nova.scheduler.client.report [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.133 250273 DEBUG oslo_concurrency.lockutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.134 250273 DEBUG nova.compute.manager [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.199 250273 DEBUG nova.compute.manager [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.200 250273 DEBUG nova.network.neutron [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.226 250273 INFO nova.virt.libvirt.driver [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.251 250273 DEBUG nova.compute.manager [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.305 250273 INFO nova.virt.block_device [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Booting with volume abf07740-6e5f-44bf-b631-41bec916b9fe at /dev/vda#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.568 250273 DEBUG os_brick.utils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.571 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.586 265031 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.586 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[2801396b-4914-417f-a31e-1005e274ce0d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.588 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.597 265031 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.598 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[01235555-9904-4cd3-bd04-31fb284fdec0]: (4, ('InitiatorName=iqn.1994-05.com.redhat:c6473626b456', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.599 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.609 265031 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.609 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[18255a90-7005-4830-b508-bc3e2bf595ee]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.611 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[4c318752-c293-428b-9125-4a623752c0e2]: (4, 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.612 250273 DEBUG oslo_concurrency.processutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.647 250273 DEBUG nova.policy [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9609ee98328640299138fa34258ef48f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '95243c9b3c544aff8e9ee6043bb6f522', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.651 250273 DEBUG oslo_concurrency.processutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] CMD "nvme version" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.654 250273 DEBUG os_brick.initiator.connectors.lightos [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.654 250273 DEBUG os_brick.initiator.connectors.lightos [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.654 250273 DEBUG os_brick.initiator.connectors.lightos [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.655 250273 DEBUG os_brick.utils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] <== get_connector_properties: return (85ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:c6473626b456', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 23 05:38:53 np0005593232 nova_compute[250269]: 2026-01-23 10:38:53.655 250273 DEBUG nova.virt.block_device [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Updating existing volume attachment record: e578a84a-b1f5-4de5-81cd-7fbcde412aeb _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 23 05:38:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:38:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3348: 321 pgs: 321 active+clean; 385 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.5 MiB/s rd, 3.8 MiB/s wr, 176 op/s
Jan 23 05:38:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:38:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1371413049' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:38:54 np0005593232 nova_compute[250269]: 2026-01-23 10:38:54.726 250273 DEBUG nova.compute.manager [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 05:38:54 np0005593232 nova_compute[250269]: 2026-01-23 10:38:54.728 250273 DEBUG nova.virt.libvirt.driver [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:38:54 np0005593232 nova_compute[250269]: 2026-01-23 10:38:54.728 250273 INFO nova.virt.libvirt.driver [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Creating image(s)#033[00m
Jan 23 05:38:54 np0005593232 nova_compute[250269]: 2026-01-23 10:38:54.729 250273 DEBUG nova.virt.libvirt.driver [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 23 05:38:54 np0005593232 nova_compute[250269]: 2026-01-23 10:38:54.729 250273 DEBUG nova.virt.libvirt.driver [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Ensure instance console log exists: /var/lib/nova/instances/c664bd16-8380-4052-abce-702c782ec7b0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:38:54 np0005593232 nova_compute[250269]: 2026-01-23 10:38:54.729 250273 DEBUG oslo_concurrency.lockutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:38:54 np0005593232 nova_compute[250269]: 2026-01-23 10:38:54.730 250273 DEBUG oslo_concurrency.lockutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:38:54 np0005593232 nova_compute[250269]: 2026-01-23 10:38:54.730 250273 DEBUG oslo_concurrency.lockutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:38:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:54.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:54.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:54 np0005593232 nova_compute[250269]: 2026-01-23 10:38:54.955 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:55 np0005593232 nova_compute[250269]: 2026-01-23 10:38:55.708 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:38:56 np0005593232 nova_compute[250269]: 2026-01-23 10:38:56.204 250273 DEBUG nova.network.neutron [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Successfully created port: ba062aba-c2c0-4bcb-b9fe-732316c56b6f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:38:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3349: 321 pgs: 321 active+clean; 385 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 74 op/s
Jan 23 05:38:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:56.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:56.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3350: 321 pgs: 321 active+clean; 385 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 22 KiB/s wr, 74 op/s
Jan 23 05:38:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:38:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:38:58.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:38:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:38:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:38:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:38:58.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:38:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:38:59 np0005593232 nova_compute[250269]: 2026-01-23 10:38:59.959 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3351: 321 pgs: 321 active+clean; 385 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 23 05:39:00 np0005593232 nova_compute[250269]: 2026-01-23 10:39:00.689 250273 DEBUG nova.network.neutron [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Successfully updated port: ba062aba-c2c0-4bcb-b9fe-732316c56b6f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:39:00 np0005593232 nova_compute[250269]: 2026-01-23 10:39:00.708 250273 DEBUG oslo_concurrency.lockutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Acquiring lock "refresh_cache-c664bd16-8380-4052-abce-702c782ec7b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:39:00 np0005593232 nova_compute[250269]: 2026-01-23 10:39:00.708 250273 DEBUG oslo_concurrency.lockutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Acquired lock "refresh_cache-c664bd16-8380-4052-abce-702c782ec7b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:39:00 np0005593232 nova_compute[250269]: 2026-01-23 10:39:00.708 250273 DEBUG nova.network.neutron [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:39:00 np0005593232 nova_compute[250269]: 2026-01-23 10:39:00.756 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:00 np0005593232 nova_compute[250269]: 2026-01-23 10:39:00.790 250273 DEBUG nova.compute.manager [req-5cf9e2e3-0a03-435b-98af-3bb95a6568f0 req-93ed9eb9-96cf-4b9b-a457-afe116c796f2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Received event network-changed-ba062aba-c2c0-4bcb-b9fe-732316c56b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:39:00 np0005593232 nova_compute[250269]: 2026-01-23 10:39:00.791 250273 DEBUG nova.compute.manager [req-5cf9e2e3-0a03-435b-98af-3bb95a6568f0 req-93ed9eb9-96cf-4b9b-a457-afe116c796f2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Refreshing instance network info cache due to event network-changed-ba062aba-c2c0-4bcb-b9fe-732316c56b6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:39:00 np0005593232 nova_compute[250269]: 2026-01-23 10:39:00.791 250273 DEBUG oslo_concurrency.lockutils [req-5cf9e2e3-0a03-435b-98af-3bb95a6568f0 req-93ed9eb9-96cf-4b9b-a457-afe116c796f2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-c664bd16-8380-4052-abce-702c782ec7b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:39:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:39:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:00.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:39:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:00.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:00 np0005593232 nova_compute[250269]: 2026-01-23 10:39:00.925 250273 DEBUG nova.network.neutron [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:39:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3352: 321 pgs: 321 active+clean; 411 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 670 KiB/s wr, 95 op/s
Jan 23 05:39:02 np0005593232 nova_compute[250269]: 2026-01-23 10:39:02.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:39:02 np0005593232 nova_compute[250269]: 2026-01-23 10:39:02.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:39:02 np0005593232 nova_compute[250269]: 2026-01-23 10:39:02.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:39:02 np0005593232 nova_compute[250269]: 2026-01-23 10:39:02.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 23 05:39:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:02.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:39:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:02.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.739 250273 DEBUG nova.network.neutron [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Updating instance_info_cache with network_info: [{"id": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "address": "fa:16:3e:0d:26:c0", "network": {"id": "0c962c42-ea1d-4660-859c-00539e47d1a1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-505955194-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95243c9b3c544aff8e9ee6043bb6f522", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba062aba-c2", "ovs_interfaceid": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.778 250273 DEBUG oslo_concurrency.lockutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Releasing lock "refresh_cache-c664bd16-8380-4052-abce-702c782ec7b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.779 250273 DEBUG nova.compute.manager [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Instance network_info: |[{"id": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "address": "fa:16:3e:0d:26:c0", "network": {"id": "0c962c42-ea1d-4660-859c-00539e47d1a1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-505955194-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95243c9b3c544aff8e9ee6043bb6f522", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba062aba-c2", "ovs_interfaceid": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.779 250273 DEBUG oslo_concurrency.lockutils [req-5cf9e2e3-0a03-435b-98af-3bb95a6568f0 req-93ed9eb9-96cf-4b9b-a457-afe116c796f2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-c664bd16-8380-4052-abce-702c782ec7b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.780 250273 DEBUG nova.network.neutron [req-5cf9e2e3-0a03-435b-98af-3bb95a6568f0 req-93ed9eb9-96cf-4b9b-a457-afe116c796f2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Refreshing network info cache for port ba062aba-c2c0-4bcb-b9fe-732316c56b6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.783 250273 DEBUG nova.virt.libvirt.driver [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Start _get_guest_xml network_info=[{"id": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "address": "fa:16:3e:0d:26:c0", "network": {"id": "0c962c42-ea1d-4660-859c-00539e47d1a1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-505955194-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95243c9b3c544aff8e9ee6043bb6f522", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba062aba-c2", "ovs_interfaceid": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'delete_on_termination': False, 'attachment_id': 'e578a84a-b1f5-4de5-81cd-7fbcde412aeb', 'device_type': 'disk', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-abf07740-6e5f-44bf-b631-41bec916b9fe', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'abf07740-6e5f-44bf-b631-41bec916b9fe', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'c664bd16-8380-4052-abce-702c782ec7b0', 'attached_at': '', 'detached_at': '', 'volume_id': 'abf07740-6e5f-44bf-b631-41bec916b9fe', 'serial': 'abf07740-6e5f-44bf-b631-41bec916b9fe'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.787 250273 WARNING nova.virt.libvirt.driver [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.802 250273 DEBUG nova.virt.libvirt.host [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.803 250273 DEBUG nova.virt.libvirt.host [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.807 250273 DEBUG nova.virt.libvirt.host [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.807 250273 DEBUG nova.virt.libvirt.host [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.808 250273 DEBUG nova.virt.libvirt.driver [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.808 250273 DEBUG nova.virt.hardware [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.809 250273 DEBUG nova.virt.hardware [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.809 250273 DEBUG nova.virt.hardware [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.809 250273 DEBUG nova.virt.hardware [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.809 250273 DEBUG nova.virt.hardware [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.810 250273 DEBUG nova.virt.hardware [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.810 250273 DEBUG nova.virt.hardware [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.810 250273 DEBUG nova.virt.hardware [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.810 250273 DEBUG nova.virt.hardware [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.811 250273 DEBUG nova.virt.hardware [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.811 250273 DEBUG nova.virt.hardware [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.846 250273 DEBUG nova.storage.rbd_utils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] rbd image c664bd16-8380-4052-abce-702c782ec7b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:39:03 np0005593232 nova_compute[250269]: 2026-01-23 10:39:03.851 250273 DEBUG oslo_concurrency.processutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:39:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:39:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3353: 321 pgs: 321 active+clean; 478 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 111 op/s
Jan 23 05:39:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:39:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1479130788' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.314 250273 DEBUG oslo_concurrency.processutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.346 250273 DEBUG nova.virt.libvirt.vif [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:38:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-93762835',display_name='tempest-TestVolumeBackupRestore-server-93762835',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-93762835',id=194,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBX2xFmZxNP6bRufhziZpgwtkNkHilS3rnY04UWe0caEUCJTElILZ86JEOY4NskaZbsQl3T2gH6bMCRe7r2pXDGtSwrnbrxiZNdr3ByPJ0sRLAFFXiCrXTdWNg6j1BnFig==',key_name='tempest-TestVolumeBackupRestore-1870318488',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='95243c9b3c544aff8e9ee6043bb6f522',ramdisk_id='',reservation_id='r-edrokjv4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-1966435226',owner_user_name='tempest-TestVolumeBackupRestore-1966435226-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:38:53Z,user_data=None,user_id='9609ee98328640299138fa34258ef48f',uuid=c664bd16-8380-4052-abce-702c782ec7b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "address": "fa:16:3e:0d:26:c0", "network": {"id": "0c962c42-ea1d-4660-859c-00539e47d1a1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-505955194-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"95243c9b3c544aff8e9ee6043bb6f522", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba062aba-c2", "ovs_interfaceid": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.346 250273 DEBUG nova.network.os_vif_util [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Converting VIF {"id": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "address": "fa:16:3e:0d:26:c0", "network": {"id": "0c962c42-ea1d-4660-859c-00539e47d1a1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-505955194-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95243c9b3c544aff8e9ee6043bb6f522", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba062aba-c2", "ovs_interfaceid": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.348 250273 DEBUG nova.network.os_vif_util [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:26:c0,bridge_name='br-int',has_traffic_filtering=True,id=ba062aba-c2c0-4bcb-b9fe-732316c56b6f,network=Network(0c962c42-ea1d-4660-859c-00539e47d1a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba062aba-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.350 250273 DEBUG nova.objects.instance [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Lazy-loading 'pci_devices' on Instance uuid c664bd16-8380-4052-abce-702c782ec7b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.364 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.365 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.371 250273 DEBUG nova.virt.libvirt.driver [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:39:04 np0005593232 nova_compute[250269]:  <uuid>c664bd16-8380-4052-abce-702c782ec7b0</uuid>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:  <name>instance-000000c2</name>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <nova:name>tempest-TestVolumeBackupRestore-server-93762835</nova:name>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:39:03</nova:creationTime>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:39:04 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:        <nova:user uuid="9609ee98328640299138fa34258ef48f">tempest-TestVolumeBackupRestore-1966435226-project-member</nova:user>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:        <nova:project uuid="95243c9b3c544aff8e9ee6043bb6f522">tempest-TestVolumeBackupRestore-1966435226</nova:project>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:        <nova:port uuid="ba062aba-c2c0-4bcb-b9fe-732316c56b6f">
Jan 23 05:39:04 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <entry name="serial">c664bd16-8380-4052-abce-702c782ec7b0</entry>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <entry name="uuid">c664bd16-8380-4052-abce-702c782ec7b0</entry>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/c664bd16-8380-4052-abce-702c782ec7b0_disk.config">
Jan 23 05:39:04 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:39:04 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="volumes/volume-abf07740-6e5f-44bf-b631-41bec916b9fe">
Jan 23 05:39:04 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:39:04 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <serial>abf07740-6e5f-44bf-b631-41bec916b9fe</serial>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:0d:26:c0"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <target dev="tapba062aba-c2"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/c664bd16-8380-4052-abce-702c782ec7b0/console.log" append="off"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:39:04 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:39:04 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:39:04 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:39:04 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.372 250273 DEBUG nova.compute.manager [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Preparing to wait for external event network-vif-plugged-ba062aba-c2c0-4bcb-b9fe-732316c56b6f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.372 250273 DEBUG oslo_concurrency.lockutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Acquiring lock "c664bd16-8380-4052-abce-702c782ec7b0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.372 250273 DEBUG oslo_concurrency.lockutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Lock "c664bd16-8380-4052-abce-702c782ec7b0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.372 250273 DEBUG oslo_concurrency.lockutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Lock "c664bd16-8380-4052-abce-702c782ec7b0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.373 250273 DEBUG nova.virt.libvirt.vif [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:38:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-93762835',display_name='tempest-TestVolumeBackupRestore-server-93762835',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-93762835',id=194,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBX2xFmZxNP6bRufhziZpgwtkNkHilS3rnY04UWe0caEUCJTElILZ86JEOY4NskaZbsQl3T2gH6bMCRe7r2pXDGtSwrnbrxiZNdr3ByPJ0sRLAFFXiCrXTdWNg6j1BnFig==',key_name='tempest-TestVolumeBackupRestore-1870318488',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='95243c9b3c544aff8e9ee6043bb6f522',ramdisk_id='',reservation_id='r-edrokjv4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-1966435226',owner_user_name='tempest-TestVolumeBackupRestore-1966435226-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:38:53Z,user_data=None,user_id='9609ee98328640299138fa34258ef48f',uuid=c664bd16-8380-4052-abce-702c782ec7b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "address": "fa:16:3e:0d:26:c0", "network": {"id": "0c962c42-ea1d-4660-859c-00539e47d1a1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-505955194-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"95243c9b3c544aff8e9ee6043bb6f522", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba062aba-c2", "ovs_interfaceid": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.373 250273 DEBUG nova.network.os_vif_util [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Converting VIF {"id": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "address": "fa:16:3e:0d:26:c0", "network": {"id": "0c962c42-ea1d-4660-859c-00539e47d1a1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-505955194-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95243c9b3c544aff8e9ee6043bb6f522", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba062aba-c2", "ovs_interfaceid": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.374 250273 DEBUG nova.network.os_vif_util [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0d:26:c0,bridge_name='br-int',has_traffic_filtering=True,id=ba062aba-c2c0-4bcb-b9fe-732316c56b6f,network=Network(0c962c42-ea1d-4660-859c-00539e47d1a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba062aba-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.374 250273 DEBUG os_vif [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:26:c0,bridge_name='br-int',has_traffic_filtering=True,id=ba062aba-c2c0-4bcb-b9fe-732316c56b6f,network=Network(0c962c42-ea1d-4660-859c-00539e47d1a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba062aba-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.375 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.376 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.376 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.382 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.382 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba062aba-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.382 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapba062aba-c2, col_values=(('external_ids', {'iface-id': 'ba062aba-c2c0-4bcb-b9fe-732316c56b6f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0d:26:c0', 'vm-uuid': 'c664bd16-8380-4052-abce-702c782ec7b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:39:04 np0005593232 NetworkManager[49057]: <info>  [1769164744.3856] manager: (tapba062aba-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/353)
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.386 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.393 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.394 250273 INFO os_vif [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0d:26:c0,bridge_name='br-int',has_traffic_filtering=True,id=ba062aba-c2c0-4bcb-b9fe-732316c56b6f,network=Network(0c962c42-ea1d-4660-859c-00539e47d1a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba062aba-c2')#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.466 250273 DEBUG nova.virt.libvirt.driver [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.467 250273 DEBUG nova.virt.libvirt.driver [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.467 250273 DEBUG nova.virt.libvirt.driver [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] No VIF found with MAC fa:16:3e:0d:26:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.468 250273 INFO nova.virt.libvirt.driver [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Using config drive#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.499 250273 DEBUG nova.storage.rbd_utils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] rbd image c664bd16-8380-4052-abce-702c782ec7b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:39:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:04.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:39:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:04.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.972 250273 INFO nova.virt.libvirt.driver [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Creating config drive at /var/lib/nova/instances/c664bd16-8380-4052-abce-702c782ec7b0/disk.config#033[00m
Jan 23 05:39:04 np0005593232 nova_compute[250269]: 2026-01-23 10:39:04.983 250273 DEBUG oslo_concurrency.processutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c664bd16-8380-4052-abce-702c782ec7b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj45xm545 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:39:05 np0005593232 nova_compute[250269]: 2026-01-23 10:39:05.156 250273 DEBUG oslo_concurrency.processutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c664bd16-8380-4052-abce-702c782ec7b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj45xm545" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:39:05 np0005593232 nova_compute[250269]: 2026-01-23 10:39:05.198 250273 DEBUG nova.storage.rbd_utils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] rbd image c664bd16-8380-4052-abce-702c782ec7b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:39:05 np0005593232 nova_compute[250269]: 2026-01-23 10:39:05.204 250273 DEBUG oslo_concurrency.processutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c664bd16-8380-4052-abce-702c782ec7b0/disk.config c664bd16-8380-4052-abce-702c782ec7b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:39:05 np0005593232 nova_compute[250269]: 2026-01-23 10:39:05.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:39:05 np0005593232 nova_compute[250269]: 2026-01-23 10:39:05.393 250273 DEBUG oslo_concurrency.processutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c664bd16-8380-4052-abce-702c782ec7b0/disk.config c664bd16-8380-4052-abce-702c782ec7b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:39:05 np0005593232 nova_compute[250269]: 2026-01-23 10:39:05.394 250273 INFO nova.virt.libvirt.driver [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Deleting local config drive /var/lib/nova/instances/c664bd16-8380-4052-abce-702c782ec7b0/disk.config because it was imported into RBD.#033[00m
Jan 23 05:39:05 np0005593232 kernel: tapba062aba-c2: entered promiscuous mode
Jan 23 05:39:05 np0005593232 NetworkManager[49057]: <info>  [1769164745.5041] manager: (tapba062aba-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/354)
Jan 23 05:39:05 np0005593232 ovn_controller[151001]: 2026-01-23T10:39:05Z|00761|binding|INFO|Claiming lport ba062aba-c2c0-4bcb-b9fe-732316c56b6f for this chassis.
Jan 23 05:39:05 np0005593232 ovn_controller[151001]: 2026-01-23T10:39:05Z|00762|binding|INFO|ba062aba-c2c0-4bcb-b9fe-732316c56b6f: Claiming fa:16:3e:0d:26:c0 10.100.0.6
Jan 23 05:39:05 np0005593232 nova_compute[250269]: 2026-01-23 10:39:05.505 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:05 np0005593232 nova_compute[250269]: 2026-01-23 10:39:05.512 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.523 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:26:c0 10.100.0.6'], port_security=['fa:16:3e:0d:26:c0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c664bd16-8380-4052-abce-702c782ec7b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0c962c42-ea1d-4660-859c-00539e47d1a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95243c9b3c544aff8e9ee6043bb6f522', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c2c746b2-6e40-4f3c-ad8b-26c2cab5b7b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c8df0499-9f01-4fad-bac2-4aad85c08d62, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=ba062aba-c2c0-4bcb-b9fe-732316c56b6f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.525 161902 INFO neutron.agent.ovn.metadata.agent [-] Port ba062aba-c2c0-4bcb-b9fe-732316c56b6f in datapath 0c962c42-ea1d-4660-859c-00539e47d1a1 bound to our chassis#033[00m
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.527 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0c962c42-ea1d-4660-859c-00539e47d1a1#033[00m
Jan 23 05:39:05 np0005593232 systemd-machined[215836]: New machine qemu-86-instance-000000c2.
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.547 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff956bf-0aa8-4fb2-b79c-eb98e72a9b64]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.549 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0c962c42-e1 in ovnmeta-0c962c42-ea1d-4660-859c-00539e47d1a1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.551 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0c962c42-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.552 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[75c675a4-a05a-4477-ab02-4c55dd966828]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.553 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[dd8d9e3b-198c-4d73-a631-86a10a33c7c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:05 np0005593232 systemd[1]: Started Virtual Machine qemu-86-instance-000000c2.
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.574 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[38e1e0b2-f256-4ecf-825a-21e3ddf6a345]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:05 np0005593232 ovn_controller[151001]: 2026-01-23T10:39:05Z|00763|binding|INFO|Setting lport ba062aba-c2c0-4bcb-b9fe-732316c56b6f ovn-installed in OVS
Jan 23 05:39:05 np0005593232 ovn_controller[151001]: 2026-01-23T10:39:05Z|00764|binding|INFO|Setting lport ba062aba-c2c0-4bcb-b9fe-732316c56b6f up in Southbound
Jan 23 05:39:05 np0005593232 systemd-udevd[383113]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:39:05 np0005593232 nova_compute[250269]: 2026-01-23 10:39:05.594 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.604 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a08e98e5-eff2-48e9-87a7-51a4a5a75325]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:05 np0005593232 NetworkManager[49057]: <info>  [1769164745.6138] device (tapba062aba-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:39:05 np0005593232 NetworkManager[49057]: <info>  [1769164745.6147] device (tapba062aba-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:39:05 np0005593232 podman[383070]: 2026-01-23 10:39:05.621775692 +0000 UTC m=+0.258679454 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true)
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.639 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[77a79ac9-e9c8-46dd-b110-e2170c9c5d0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.645 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b29d87-181e-4c67-9edf-de7556207e26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:05 np0005593232 systemd-udevd[383116]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:39:05 np0005593232 NetworkManager[49057]: <info>  [1769164745.6466] manager: (tap0c962c42-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/355)
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.682 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[d962c12b-c517-413b-b1bd-93036c1e8ea0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.686 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[368ffc57-66e1-479f-8183-c5c215bf6642]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:05 np0005593232 NetworkManager[49057]: <info>  [1769164745.7097] device (tap0c962c42-e0): carrier: link connected
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.714 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[e9033960-5bb1-4f4e-9b0b-5fdb33a94caa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.732 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e848c497-f3fb-4a10-8479-78d05996767b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0c962c42-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:98:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 865885, 'reachable_time': 37259, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383143, 'error': None, 'target': 'ovnmeta-0c962c42-ea1d-4660-859c-00539e47d1a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.749 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[aa28d3da-6e99-418f-b108-b3876080dd2f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe83:98cb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 865885, 'tstamp': 865885}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383144, 'error': None, 'target': 'ovnmeta-0c962c42-ea1d-4660-859c-00539e47d1a1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:05 np0005593232 nova_compute[250269]: 2026-01-23 10:39:05.759 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.768 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a98772cd-6270-4462-aa21-7354085391c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0c962c42-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:83:98:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 230], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 865885, 'reachable_time': 37259, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 383145, 'error': None, 'target': 'ovnmeta-0c962c42-ea1d-4660-859c-00539e47d1a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.806 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[29208ff0-32dd-46e5-94be-6a3c83522a6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:05 np0005593232 nova_compute[250269]: 2026-01-23 10:39:05.867 250273 DEBUG nova.compute.manager [req-137df193-bc44-4be0-8995-1f1bed4d8cae req-7bfad3db-7da1-47a7-a663-65841ff0fb05 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Received event network-vif-plugged-ba062aba-c2c0-4bcb-b9fe-732316c56b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:39:05 np0005593232 nova_compute[250269]: 2026-01-23 10:39:05.867 250273 DEBUG oslo_concurrency.lockutils [req-137df193-bc44-4be0-8995-1f1bed4d8cae req-7bfad3db-7da1-47a7-a663-65841ff0fb05 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "c664bd16-8380-4052-abce-702c782ec7b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:39:05 np0005593232 nova_compute[250269]: 2026-01-23 10:39:05.868 250273 DEBUG oslo_concurrency.lockutils [req-137df193-bc44-4be0-8995-1f1bed4d8cae req-7bfad3db-7da1-47a7-a663-65841ff0fb05 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "c664bd16-8380-4052-abce-702c782ec7b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:39:05 np0005593232 nova_compute[250269]: 2026-01-23 10:39:05.868 250273 DEBUG oslo_concurrency.lockutils [req-137df193-bc44-4be0-8995-1f1bed4d8cae req-7bfad3db-7da1-47a7-a663-65841ff0fb05 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "c664bd16-8380-4052-abce-702c782ec7b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:39:05 np0005593232 nova_compute[250269]: 2026-01-23 10:39:05.869 250273 DEBUG nova.compute.manager [req-137df193-bc44-4be0-8995-1f1bed4d8cae req-7bfad3db-7da1-47a7-a663-65841ff0fb05 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Processing event network-vif-plugged-ba062aba-c2c0-4bcb-b9fe-732316c56b6f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.882 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[67542ecd-5d50-45a8-9e6d-55ece10e639d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.884 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c962c42-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.884 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.884 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0c962c42-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:39:05 np0005593232 nova_compute[250269]: 2026-01-23 10:39:05.886 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:05 np0005593232 NetworkManager[49057]: <info>  [1769164745.8871] manager: (tap0c962c42-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/356)
Jan 23 05:39:05 np0005593232 kernel: tap0c962c42-e0: entered promiscuous mode
Jan 23 05:39:05 np0005593232 nova_compute[250269]: 2026-01-23 10:39:05.890 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.891 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0c962c42-e0, col_values=(('external_ids', {'iface-id': 'a247fd67-3a52-4cce-af91-ca5a4e3f937b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:39:05 np0005593232 nova_compute[250269]: 2026-01-23 10:39:05.891 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:05 np0005593232 nova_compute[250269]: 2026-01-23 10:39:05.894 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:05 np0005593232 ovn_controller[151001]: 2026-01-23T10:39:05Z|00765|binding|INFO|Releasing lport a247fd67-3a52-4cce-af91-ca5a4e3f937b from this chassis (sb_readonly=0)
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.895 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0c962c42-ea1d-4660-859c-00539e47d1a1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0c962c42-ea1d-4660-859c-00539e47d1a1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.896 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c2416694-2065-41e1-9cf4-a60e2d4f61c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.896 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-0c962c42-ea1d-4660-859c-00539e47d1a1
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/0c962c42-ea1d-4660-859c-00539e47d1a1.pid.haproxy
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 0c962c42-ea1d-4660-859c-00539e47d1a1
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:39:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:05.897 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0c962c42-ea1d-4660-859c-00539e47d1a1', 'env', 'PROCESS_TAG=haproxy-0c962c42-ea1d-4660-859c-00539e47d1a1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0c962c42-ea1d-4660-859c-00539e47d1a1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:39:05 np0005593232 nova_compute[250269]: 2026-01-23 10:39:05.930 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3354: 321 pgs: 321 active+clean; 478 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 40 KiB/s rd, 3.5 MiB/s wr, 53 op/s
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.299 250273 DEBUG nova.network.neutron [req-5cf9e2e3-0a03-435b-98af-3bb95a6568f0 req-93ed9eb9-96cf-4b9b-a457-afe116c796f2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Updated VIF entry in instance network info cache for port ba062aba-c2c0-4bcb-b9fe-732316c56b6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.300 250273 DEBUG nova.network.neutron [req-5cf9e2e3-0a03-435b-98af-3bb95a6568f0 req-93ed9eb9-96cf-4b9b-a457-afe116c796f2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Updating instance_info_cache with network_info: [{"id": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "address": "fa:16:3e:0d:26:c0", "network": {"id": "0c962c42-ea1d-4660-859c-00539e47d1a1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-505955194-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95243c9b3c544aff8e9ee6043bb6f522", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba062aba-c2", "ovs_interfaceid": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.326 250273 DEBUG oslo_concurrency.lockutils [req-5cf9e2e3-0a03-435b-98af-3bb95a6568f0 req-93ed9eb9-96cf-4b9b-a457-afe116c796f2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-c664bd16-8380-4052-abce-702c782ec7b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.328 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164746.3255231, c664bd16-8380-4052-abce-702c782ec7b0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.328 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: c664bd16-8380-4052-abce-702c782ec7b0] VM Started (Lifecycle Event)#033[00m
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.330 250273 DEBUG nova.compute.manager [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.334 250273 DEBUG nova.virt.libvirt.driver [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:39:06 np0005593232 podman[383219]: 2026-01-23 10:39:06.338345629 +0000 UTC m=+0.086514080 container create 9704e331a57394307f328e6ea5372f40466d4631ceb58de63717ce7aded1b3c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c962c42-ea1d-4660-859c-00539e47d1a1, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.340 250273 INFO nova.virt.libvirt.driver [-] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Instance spawned successfully.#033[00m
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.342 250273 DEBUG nova.virt.libvirt.driver [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.351 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.360 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.366 250273 DEBUG nova.virt.libvirt.driver [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.367 250273 DEBUG nova.virt.libvirt.driver [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.368 250273 DEBUG nova.virt.libvirt.driver [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.369 250273 DEBUG nova.virt.libvirt.driver [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.369 250273 DEBUG nova.virt.libvirt.driver [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.370 250273 DEBUG nova.virt.libvirt.driver [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.378 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: c664bd16-8380-4052-abce-702c782ec7b0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.379 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164746.327368, c664bd16-8380-4052-abce-702c782ec7b0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.380 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: c664bd16-8380-4052-abce-702c782ec7b0] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:39:06 np0005593232 systemd[1]: Started libpod-conmon-9704e331a57394307f328e6ea5372f40466d4631ceb58de63717ce7aded1b3c9.scope.
Jan 23 05:39:06 np0005593232 podman[383219]: 2026-01-23 10:39:06.305965159 +0000 UTC m=+0.054133690 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.405 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.408 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164746.3338542, c664bd16-8380-4052-abce-702c782ec7b0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.408 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: c664bd16-8380-4052-abce-702c782ec7b0] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:39:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:39:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a508d31a310e0719f91bd9511b9655800a7e75774f07222009059c529fb98e87/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.433 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.436 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.442 250273 INFO nova.compute.manager [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Took 11.72 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.442 250273 DEBUG nova.compute.manager [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:39:06 np0005593232 podman[383219]: 2026-01-23 10:39:06.443286512 +0000 UTC m=+0.191455003 container init 9704e331a57394307f328e6ea5372f40466d4631ceb58de63717ce7aded1b3c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c962c42-ea1d-4660-859c-00539e47d1a1, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 23 05:39:06 np0005593232 podman[383219]: 2026-01-23 10:39:06.450437615 +0000 UTC m=+0.198606066 container start 9704e331a57394307f328e6ea5372f40466d4631ceb58de63717ce7aded1b3c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c962c42-ea1d-4660-859c-00539e47d1a1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.469 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: c664bd16-8380-4052-abce-702c782ec7b0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:39:06 np0005593232 neutron-haproxy-ovnmeta-0c962c42-ea1d-4660-859c-00539e47d1a1[383235]: [NOTICE]   (383239) : New worker (383241) forked
Jan 23 05:39:06 np0005593232 neutron-haproxy-ovnmeta-0c962c42-ea1d-4660-859c-00539e47d1a1[383235]: [NOTICE]   (383239) : Loading success.
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.505 250273 INFO nova.compute.manager [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Took 14.18 seconds to build instance.#033[00m
Jan 23 05:39:06 np0005593232 nova_compute[250269]: 2026-01-23 10:39:06.523 250273 DEBUG oslo_concurrency.lockutils [None req-9bab0455-6d54-48ca-ab98-9ca6df6a4dc1 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Lock "c664bd16-8380-4052-abce-702c782ec7b0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:39:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:06.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:39:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:06.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:39:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:39:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:39:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:39:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:39:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:39:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:39:07 np0005593232 nova_compute[250269]: 2026-01-23 10:39:07.992 250273 DEBUG nova.compute.manager [req-00cf85b9-ca47-4f57-ae56-41b5a53e94f5 req-74708bec-2a66-4a6f-9515-885de9c288e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Received event network-vif-plugged-ba062aba-c2c0-4bcb-b9fe-732316c56b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:39:07 np0005593232 nova_compute[250269]: 2026-01-23 10:39:07.992 250273 DEBUG oslo_concurrency.lockutils [req-00cf85b9-ca47-4f57-ae56-41b5a53e94f5 req-74708bec-2a66-4a6f-9515-885de9c288e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "c664bd16-8380-4052-abce-702c782ec7b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:39:07 np0005593232 nova_compute[250269]: 2026-01-23 10:39:07.992 250273 DEBUG oslo_concurrency.lockutils [req-00cf85b9-ca47-4f57-ae56-41b5a53e94f5 req-74708bec-2a66-4a6f-9515-885de9c288e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "c664bd16-8380-4052-abce-702c782ec7b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:39:07 np0005593232 nova_compute[250269]: 2026-01-23 10:39:07.993 250273 DEBUG oslo_concurrency.lockutils [req-00cf85b9-ca47-4f57-ae56-41b5a53e94f5 req-74708bec-2a66-4a6f-9515-885de9c288e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "c664bd16-8380-4052-abce-702c782ec7b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:39:07 np0005593232 nova_compute[250269]: 2026-01-23 10:39:07.993 250273 DEBUG nova.compute.manager [req-00cf85b9-ca47-4f57-ae56-41b5a53e94f5 req-74708bec-2a66-4a6f-9515-885de9c288e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] No waiting events found dispatching network-vif-plugged-ba062aba-c2c0-4bcb-b9fe-732316c56b6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:39:07 np0005593232 nova_compute[250269]: 2026-01-23 10:39:07.993 250273 WARNING nova.compute.manager [req-00cf85b9-ca47-4f57-ae56-41b5a53e94f5 req-74708bec-2a66-4a6f-9515-885de9c288e0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Received unexpected event network-vif-plugged-ba062aba-c2c0-4bcb-b9fe-732316c56b6f for instance with vm_state active and task_state None.#033[00m
Jan 23 05:39:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3355: 321 pgs: 321 active+clean; 511 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.7 MiB/s wr, 173 op/s
Jan 23 05:39:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:08.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:39:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:08.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:39:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:39:09 np0005593232 nova_compute[250269]: 2026-01-23 10:39:09.386 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:09 np0005593232 nova_compute[250269]: 2026-01-23 10:39:09.807 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:09 np0005593232 NetworkManager[49057]: <info>  [1769164749.8104] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/357)
Jan 23 05:39:09 np0005593232 NetworkManager[49057]: <info>  [1769164749.8114] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/358)
Jan 23 05:39:10 np0005593232 nova_compute[250269]: 2026-01-23 10:39:10.024 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:10 np0005593232 ovn_controller[151001]: 2026-01-23T10:39:10Z|00766|binding|INFO|Releasing lport a247fd67-3a52-4cce-af91-ca5a4e3f937b from this chassis (sb_readonly=0)
Jan 23 05:39:10 np0005593232 nova_compute[250269]: 2026-01-23 10:39:10.050 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:10 np0005593232 nova_compute[250269]: 2026-01-23 10:39:10.152 250273 DEBUG nova.compute.manager [req-125a4c66-7bd7-4095-8b0d-4e988fee56fc req-06036397-7f56-499d-882f-68ce41ae4251 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Received event network-changed-ba062aba-c2c0-4bcb-b9fe-732316c56b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:39:10 np0005593232 nova_compute[250269]: 2026-01-23 10:39:10.153 250273 DEBUG nova.compute.manager [req-125a4c66-7bd7-4095-8b0d-4e988fee56fc req-06036397-7f56-499d-882f-68ce41ae4251 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Refreshing instance network info cache due to event network-changed-ba062aba-c2c0-4bcb-b9fe-732316c56b6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:39:10 np0005593232 nova_compute[250269]: 2026-01-23 10:39:10.153 250273 DEBUG oslo_concurrency.lockutils [req-125a4c66-7bd7-4095-8b0d-4e988fee56fc req-06036397-7f56-499d-882f-68ce41ae4251 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-c664bd16-8380-4052-abce-702c782ec7b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:39:10 np0005593232 nova_compute[250269]: 2026-01-23 10:39:10.153 250273 DEBUG oslo_concurrency.lockutils [req-125a4c66-7bd7-4095-8b0d-4e988fee56fc req-06036397-7f56-499d-882f-68ce41ae4251 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-c664bd16-8380-4052-abce-702c782ec7b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:39:10 np0005593232 nova_compute[250269]: 2026-01-23 10:39:10.154 250273 DEBUG nova.network.neutron [req-125a4c66-7bd7-4095-8b0d-4e988fee56fc req-06036397-7f56-499d-882f-68ce41ae4251 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Refreshing network info cache for port ba062aba-c2c0-4bcb-b9fe-732316c56b6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:39:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3356: 321 pgs: 321 active+clean; 511 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 5.7 MiB/s wr, 173 op/s
Jan 23 05:39:10 np0005593232 nova_compute[250269]: 2026-01-23 10:39:10.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:39:10 np0005593232 podman[383253]: 2026-01-23 10:39:10.441198916 +0000 UTC m=+0.084930585 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 23 05:39:10 np0005593232 nova_compute[250269]: 2026-01-23 10:39:10.761 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:10.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:10.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:11 np0005593232 nova_compute[250269]: 2026-01-23 10:39:11.037 250273 DEBUG nova.compute.manager [req-3210f681-d001-4c77-8cd6-270056a487f0 req-268aca6e-507f-4a3c-b4cd-c636aa2bfc21 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Received event network-changed-ba062aba-c2c0-4bcb-b9fe-732316c56b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:39:11 np0005593232 nova_compute[250269]: 2026-01-23 10:39:11.037 250273 DEBUG nova.compute.manager [req-3210f681-d001-4c77-8cd6-270056a487f0 req-268aca6e-507f-4a3c-b4cd-c636aa2bfc21 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Refreshing instance network info cache due to event network-changed-ba062aba-c2c0-4bcb-b9fe-732316c56b6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:39:11 np0005593232 nova_compute[250269]: 2026-01-23 10:39:11.038 250273 DEBUG oslo_concurrency.lockutils [req-3210f681-d001-4c77-8cd6-270056a487f0 req-268aca6e-507f-4a3c-b4cd-c636aa2bfc21 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-c664bd16-8380-4052-abce-702c782ec7b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:39:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3357: 321 pgs: 321 active+clean; 511 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 5.7 MiB/s wr, 186 op/s
Jan 23 05:39:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:12.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:12 np0005593232 nova_compute[250269]: 2026-01-23 10:39:12.841 250273 DEBUG nova.network.neutron [req-125a4c66-7bd7-4095-8b0d-4e988fee56fc req-06036397-7f56-499d-882f-68ce41ae4251 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Updated VIF entry in instance network info cache for port ba062aba-c2c0-4bcb-b9fe-732316c56b6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:39:12 np0005593232 nova_compute[250269]: 2026-01-23 10:39:12.842 250273 DEBUG nova.network.neutron [req-125a4c66-7bd7-4095-8b0d-4e988fee56fc req-06036397-7f56-499d-882f-68ce41ae4251 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Updating instance_info_cache with network_info: [{"id": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "address": "fa:16:3e:0d:26:c0", "network": {"id": "0c962c42-ea1d-4660-859c-00539e47d1a1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-505955194-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95243c9b3c544aff8e9ee6043bb6f522", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba062aba-c2", "ovs_interfaceid": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:39:12 np0005593232 nova_compute[250269]: 2026-01-23 10:39:12.862 250273 DEBUG oslo_concurrency.lockutils [req-125a4c66-7bd7-4095-8b0d-4e988fee56fc req-06036397-7f56-499d-882f-68ce41ae4251 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-c664bd16-8380-4052-abce-702c782ec7b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:39:12 np0005593232 nova_compute[250269]: 2026-01-23 10:39:12.863 250273 DEBUG oslo_concurrency.lockutils [req-3210f681-d001-4c77-8cd6-270056a487f0 req-268aca6e-507f-4a3c-b4cd-c636aa2bfc21 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-c664bd16-8380-4052-abce-702c782ec7b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:39:12 np0005593232 nova_compute[250269]: 2026-01-23 10:39:12.863 250273 DEBUG nova.network.neutron [req-3210f681-d001-4c77-8cd6-270056a487f0 req-268aca6e-507f-4a3c-b4cd-c636aa2bfc21 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Refreshing network info cache for port ba062aba-c2c0-4bcb-b9fe-732316c56b6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:39:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:39:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:12.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:39:13 np0005593232 nova_compute[250269]: 2026-01-23 10:39:13.159 250273 DEBUG nova.compute.manager [req-6a92ec23-7797-4032-9dd8-c103e6b2ccfb req-221455d3-4a54-42d9-9516-f58a0121d7a2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Received event network-changed-ba062aba-c2c0-4bcb-b9fe-732316c56b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:39:13 np0005593232 nova_compute[250269]: 2026-01-23 10:39:13.159 250273 DEBUG nova.compute.manager [req-6a92ec23-7797-4032-9dd8-c103e6b2ccfb req-221455d3-4a54-42d9-9516-f58a0121d7a2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Refreshing instance network info cache due to event network-changed-ba062aba-c2c0-4bcb-b9fe-732316c56b6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:39:13 np0005593232 nova_compute[250269]: 2026-01-23 10:39:13.160 250273 DEBUG oslo_concurrency.lockutils [req-6a92ec23-7797-4032-9dd8-c103e6b2ccfb req-221455d3-4a54-42d9-9516-f58a0121d7a2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-c664bd16-8380-4052-abce-702c782ec7b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:39:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:39:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3358: 321 pgs: 321 active+clean; 511 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 5.1 MiB/s wr, 220 op/s
Jan 23 05:39:14 np0005593232 nova_compute[250269]: 2026-01-23 10:39:14.388 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:14 np0005593232 nova_compute[250269]: 2026-01-23 10:39:14.533 250273 DEBUG nova.network.neutron [req-3210f681-d001-4c77-8cd6-270056a487f0 req-268aca6e-507f-4a3c-b4cd-c636aa2bfc21 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Updated VIF entry in instance network info cache for port ba062aba-c2c0-4bcb-b9fe-732316c56b6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:39:14 np0005593232 nova_compute[250269]: 2026-01-23 10:39:14.534 250273 DEBUG nova.network.neutron [req-3210f681-d001-4c77-8cd6-270056a487f0 req-268aca6e-507f-4a3c-b4cd-c636aa2bfc21 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Updating instance_info_cache with network_info: [{"id": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "address": "fa:16:3e:0d:26:c0", "network": {"id": "0c962c42-ea1d-4660-859c-00539e47d1a1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-505955194-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95243c9b3c544aff8e9ee6043bb6f522", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba062aba-c2", "ovs_interfaceid": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:39:14 np0005593232 nova_compute[250269]: 2026-01-23 10:39:14.577 250273 DEBUG oslo_concurrency.lockutils [req-3210f681-d001-4c77-8cd6-270056a487f0 req-268aca6e-507f-4a3c-b4cd-c636aa2bfc21 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-c664bd16-8380-4052-abce-702c782ec7b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:39:14 np0005593232 nova_compute[250269]: 2026-01-23 10:39:14.579 250273 DEBUG oslo_concurrency.lockutils [req-6a92ec23-7797-4032-9dd8-c103e6b2ccfb req-221455d3-4a54-42d9-9516-f58a0121d7a2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-c664bd16-8380-4052-abce-702c782ec7b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:39:14 np0005593232 nova_compute[250269]: 2026-01-23 10:39:14.579 250273 DEBUG nova.network.neutron [req-6a92ec23-7797-4032-9dd8-c103e6b2ccfb req-221455d3-4a54-42d9-9516-f58a0121d7a2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Refreshing network info cache for port ba062aba-c2c0-4bcb-b9fe-732316c56b6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:39:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:14.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:39:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:14.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:39:15 np0005593232 nova_compute[250269]: 2026-01-23 10:39:15.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:39:15 np0005593232 nova_compute[250269]: 2026-01-23 10:39:15.765 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3359: 321 pgs: 321 active+clean; 511 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 189 op/s
Jan 23 05:39:16 np0005593232 nova_compute[250269]: 2026-01-23 10:39:16.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:39:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:16.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:39:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:16.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:39:17 np0005593232 nova_compute[250269]: 2026-01-23 10:39:17.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:39:17 np0005593232 nova_compute[250269]: 2026-01-23 10:39:17.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:39:17 np0005593232 nova_compute[250269]: 2026-01-23 10:39:17.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:39:17 np0005593232 nova_compute[250269]: 2026-01-23 10:39:17.330 250273 DEBUG nova.network.neutron [req-6a92ec23-7797-4032-9dd8-c103e6b2ccfb req-221455d3-4a54-42d9-9516-f58a0121d7a2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Updated VIF entry in instance network info cache for port ba062aba-c2c0-4bcb-b9fe-732316c56b6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:39:17 np0005593232 nova_compute[250269]: 2026-01-23 10:39:17.331 250273 DEBUG nova.network.neutron [req-6a92ec23-7797-4032-9dd8-c103e6b2ccfb req-221455d3-4a54-42d9-9516-f58a0121d7a2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Updating instance_info_cache with network_info: [{"id": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "address": "fa:16:3e:0d:26:c0", "network": {"id": "0c962c42-ea1d-4660-859c-00539e47d1a1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-505955194-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95243c9b3c544aff8e9ee6043bb6f522", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba062aba-c2", "ovs_interfaceid": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:39:17 np0005593232 nova_compute[250269]: 2026-01-23 10:39:17.567 250273 DEBUG oslo_concurrency.lockutils [req-6a92ec23-7797-4032-9dd8-c103e6b2ccfb req-221455d3-4a54-42d9-9516-f58a0121d7a2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-c664bd16-8380-4052-abce-702c782ec7b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:39:17 np0005593232 nova_compute[250269]: 2026-01-23 10:39:17.945 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-c664bd16-8380-4052-abce-702c782ec7b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:39:17 np0005593232 nova_compute[250269]: 2026-01-23 10:39:17.946 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-c664bd16-8380-4052-abce-702c782ec7b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:39:17 np0005593232 nova_compute[250269]: 2026-01-23 10:39:17.946 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:39:17 np0005593232 nova_compute[250269]: 2026-01-23 10:39:17.946 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c664bd16-8380-4052-abce-702c782ec7b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:39:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:18.040 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=74, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=73) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:39:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:18.041 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:39:18 np0005593232 nova_compute[250269]: 2026-01-23 10:39:18.084 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3360: 321 pgs: 321 active+clean; 511 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.1 MiB/s rd, 2.2 MiB/s wr, 288 op/s
Jan 23 05:39:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:18.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:18.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:39:19 np0005593232 ovn_controller[151001]: 2026-01-23T10:39:19Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0d:26:c0 10.100.0.6
Jan 23 05:39:19 np0005593232 ovn_controller[151001]: 2026-01-23T10:39:19Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0d:26:c0 10.100.0.6
Jan 23 05:39:19 np0005593232 nova_compute[250269]: 2026-01-23 10:39:19.435 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3361: 321 pgs: 321 active+clean; 511 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 43 KiB/s wr, 168 op/s
Jan 23 05:39:20 np0005593232 nova_compute[250269]: 2026-01-23 10:39:20.769 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:20.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:20.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:21 np0005593232 nova_compute[250269]: 2026-01-23 10:39:21.512 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Updating instance_info_cache with network_info: [{"id": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "address": "fa:16:3e:0d:26:c0", "network": {"id": "0c962c42-ea1d-4660-859c-00539e47d1a1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-505955194-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95243c9b3c544aff8e9ee6043bb6f522", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba062aba-c2", "ovs_interfaceid": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:39:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3362: 321 pgs: 321 active+clean; 528 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.5 MiB/s rd, 979 KiB/s wr, 194 op/s
Jan 23 05:39:22 np0005593232 nova_compute[250269]: 2026-01-23 10:39:22.420 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-c664bd16-8380-4052-abce-702c782ec7b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:39:22 np0005593232 nova_compute[250269]: 2026-01-23 10:39:22.420 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:39:22 np0005593232 nova_compute[250269]: 2026-01-23 10:39:22.420 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:39:22 np0005593232 nova_compute[250269]: 2026-01-23 10:39:22.454 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:39:22 np0005593232 nova_compute[250269]: 2026-01-23 10:39:22.455 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:39:22 np0005593232 nova_compute[250269]: 2026-01-23 10:39:22.455 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:39:22 np0005593232 nova_compute[250269]: 2026-01-23 10:39:22.455 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:39:22 np0005593232 nova_compute[250269]: 2026-01-23 10:39:22.456 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:39:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:39:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:22.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:39:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:22.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:39:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4157999872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:39:22 np0005593232 nova_compute[250269]: 2026-01-23 10:39:22.993 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:39:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:39:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3363: 321 pgs: 321 active+clean; 553 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 2.2 MiB/s wr, 229 op/s
Jan 23 05:39:24 np0005593232 nova_compute[250269]: 2026-01-23 10:39:24.373 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000c2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:39:24 np0005593232 nova_compute[250269]: 2026-01-23 10:39:24.373 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000c2 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:39:24 np0005593232 nova_compute[250269]: 2026-01-23 10:39:24.437 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:24 np0005593232 nova_compute[250269]: 2026-01-23 10:39:24.588 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:39:24 np0005593232 nova_compute[250269]: 2026-01-23 10:39:24.590 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3967MB free_disk=20.900588989257812GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:39:24 np0005593232 nova_compute[250269]: 2026-01-23 10:39:24.591 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:39:24 np0005593232 nova_compute[250269]: 2026-01-23 10:39:24.592 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:39:24 np0005593232 nova_compute[250269]: 2026-01-23 10:39:24.742 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance c664bd16-8380-4052-abce-702c782ec7b0 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:39:24 np0005593232 nova_compute[250269]: 2026-01-23 10:39:24.743 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:39:24 np0005593232 nova_compute[250269]: 2026-01-23 10:39:24.743 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:39:24 np0005593232 nova_compute[250269]: 2026-01-23 10:39:24.796 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:39:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:24.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:39:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:24.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:39:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:39:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/860192945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:39:25 np0005593232 nova_compute[250269]: 2026-01-23 10:39:25.318 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:39:25 np0005593232 nova_compute[250269]: 2026-01-23 10:39:25.325 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:39:25 np0005593232 nova_compute[250269]: 2026-01-23 10:39:25.772 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3364: 321 pgs: 321 active+clean; 553 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.1 MiB/s wr, 173 op/s
Jan 23 05:39:26 np0005593232 nova_compute[250269]: 2026-01-23 10:39:26.507 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:39:26 np0005593232 nova_compute[250269]: 2026-01-23 10:39:26.529 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:39:26 np0005593232 nova_compute[250269]: 2026-01-23 10:39:26.530 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:39:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:39:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:26.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:39:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:39:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:26.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.343 250273 DEBUG nova.compute.manager [req-2653e650-3862-472a-bcc8-4184527710ab req-0925a2b5-6a67-40ab-89d9-d0beccc995b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Received event network-changed-ba062aba-c2c0-4bcb-b9fe-732316c56b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.344 250273 DEBUG nova.compute.manager [req-2653e650-3862-472a-bcc8-4184527710ab req-0925a2b5-6a67-40ab-89d9-d0beccc995b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Refreshing instance network info cache due to event network-changed-ba062aba-c2c0-4bcb-b9fe-732316c56b6f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.344 250273 DEBUG oslo_concurrency.lockutils [req-2653e650-3862-472a-bcc8-4184527710ab req-0925a2b5-6a67-40ab-89d9-d0beccc995b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-c664bd16-8380-4052-abce-702c782ec7b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.345 250273 DEBUG oslo_concurrency.lockutils [req-2653e650-3862-472a-bcc8-4184527710ab req-0925a2b5-6a67-40ab-89d9-d0beccc995b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-c664bd16-8380-4052-abce-702c782ec7b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.345 250273 DEBUG nova.network.neutron [req-2653e650-3862-472a-bcc8-4184527710ab req-0925a2b5-6a67-40ab-89d9-d0beccc995b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Refreshing network info cache for port ba062aba-c2c0-4bcb-b9fe-732316c56b6f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.450 250273 DEBUG oslo_concurrency.lockutils [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Acquiring lock "c664bd16-8380-4052-abce-702c782ec7b0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.451 250273 DEBUG oslo_concurrency.lockutils [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Lock "c664bd16-8380-4052-abce-702c782ec7b0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.451 250273 DEBUG oslo_concurrency.lockutils [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Acquiring lock "c664bd16-8380-4052-abce-702c782ec7b0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.452 250273 DEBUG oslo_concurrency.lockutils [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Lock "c664bd16-8380-4052-abce-702c782ec7b0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.452 250273 DEBUG oslo_concurrency.lockutils [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Lock "c664bd16-8380-4052-abce-702c782ec7b0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.454 250273 INFO nova.compute.manager [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Terminating instance#033[00m
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.455 250273 DEBUG nova.compute.manager [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:39:27 np0005593232 kernel: tapba062aba-c2 (unregistering): left promiscuous mode
Jan 23 05:39:27 np0005593232 NetworkManager[49057]: <info>  [1769164767.6042] device (tapba062aba-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:39:27 np0005593232 ovn_controller[151001]: 2026-01-23T10:39:27Z|00767|binding|INFO|Releasing lport ba062aba-c2c0-4bcb-b9fe-732316c56b6f from this chassis (sb_readonly=0)
Jan 23 05:39:27 np0005593232 ovn_controller[151001]: 2026-01-23T10:39:27Z|00768|binding|INFO|Setting lport ba062aba-c2c0-4bcb-b9fe-732316c56b6f down in Southbound
Jan 23 05:39:27 np0005593232 ovn_controller[151001]: 2026-01-23T10:39:27Z|00769|binding|INFO|Removing iface tapba062aba-c2 ovn-installed in OVS
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.655 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:27.664 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:26:c0 10.100.0.6'], port_security=['fa:16:3e:0d:26:c0 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c664bd16-8380-4052-abce-702c782ec7b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0c962c42-ea1d-4660-859c-00539e47d1a1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '95243c9b3c544aff8e9ee6043bb6f522', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c2c746b2-6e40-4f3c-ad8b-26c2cab5b7b0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c8df0499-9f01-4fad-bac2-4aad85c08d62, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=ba062aba-c2c0-4bcb-b9fe-732316c56b6f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:39:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:27.666 161902 INFO neutron.agent.ovn.metadata.agent [-] Port ba062aba-c2c0-4bcb-b9fe-732316c56b6f in datapath 0c962c42-ea1d-4660-859c-00539e47d1a1 unbound from our chassis#033[00m
Jan 23 05:39:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:27.667 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0c962c42-ea1d-4660-859c-00539e47d1a1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.669 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:27.669 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e1743600-4738-46d6-9c18-5e56f5c7da97]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:27.670 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0c962c42-ea1d-4660-859c-00539e47d1a1 namespace which is not needed anymore#033[00m
Jan 23 05:39:27 np0005593232 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000c2.scope: Deactivated successfully.
Jan 23 05:39:27 np0005593232 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d000000c2.scope: Consumed 14.456s CPU time.
Jan 23 05:39:27 np0005593232 systemd-machined[215836]: Machine qemu-86-instance-000000c2 terminated.
Jan 23 05:39:27 np0005593232 neutron-haproxy-ovnmeta-0c962c42-ea1d-4660-859c-00539e47d1a1[383235]: [NOTICE]   (383239) : haproxy version is 2.8.14-c23fe91
Jan 23 05:39:27 np0005593232 neutron-haproxy-ovnmeta-0c962c42-ea1d-4660-859c-00539e47d1a1[383235]: [NOTICE]   (383239) : path to executable is /usr/sbin/haproxy
Jan 23 05:39:27 np0005593232 neutron-haproxy-ovnmeta-0c962c42-ea1d-4660-859c-00539e47d1a1[383235]: [WARNING]  (383239) : Exiting Master process...
Jan 23 05:39:27 np0005593232 neutron-haproxy-ovnmeta-0c962c42-ea1d-4660-859c-00539e47d1a1[383235]: [ALERT]    (383239) : Current worker (383241) exited with code 143 (Terminated)
Jan 23 05:39:27 np0005593232 neutron-haproxy-ovnmeta-0c962c42-ea1d-4660-859c-00539e47d1a1[383235]: [WARNING]  (383239) : All workers exited. Exiting... (0)
Jan 23 05:39:27 np0005593232 systemd[1]: libpod-9704e331a57394307f328e6ea5372f40466d4631ceb58de63717ce7aded1b3c9.scope: Deactivated successfully.
Jan 23 05:39:27 np0005593232 podman[383504]: 2026-01-23 10:39:27.850466877 +0000 UTC m=+0.054482899 container died 9704e331a57394307f328e6ea5372f40466d4631ceb58de63717ce7aded1b3c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c962c42-ea1d-4660-859c-00539e47d1a1, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 05:39:27 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9704e331a57394307f328e6ea5372f40466d4631ceb58de63717ce7aded1b3c9-userdata-shm.mount: Deactivated successfully.
Jan 23 05:39:27 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a508d31a310e0719f91bd9511b9655800a7e75774f07222009059c529fb98e87-merged.mount: Deactivated successfully.
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.915 250273 INFO nova.virt.libvirt.driver [-] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Instance destroyed successfully.#033[00m
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.916 250273 DEBUG nova.objects.instance [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Lazy-loading 'resources' on Instance uuid c664bd16-8380-4052-abce-702c782ec7b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:39:27 np0005593232 podman[383504]: 2026-01-23 10:39:27.921958329 +0000 UTC m=+0.125974351 container cleanup 9704e331a57394307f328e6ea5372f40466d4631ceb58de63717ce7aded1b3c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c962c42-ea1d-4660-859c-00539e47d1a1, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:39:27 np0005593232 systemd[1]: libpod-conmon-9704e331a57394307f328e6ea5372f40466d4631ceb58de63717ce7aded1b3c9.scope: Deactivated successfully.
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.932 250273 DEBUG nova.virt.libvirt.vif [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:38:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-93762835',display_name='tempest-TestVolumeBackupRestore-server-93762835',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-93762835',id=194,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBX2xFmZxNP6bRufhziZpgwtkNkHilS3rnY04UWe0caEUCJTElILZ86JEOY4NskaZbsQl3T2gH6bMCRe7r2pXDGtSwrnbrxiZNdr3ByPJ0sRLAFFXiCrXTdWNg6j1BnFig==',key_name='tempest-TestVolumeBackupRestore-1870318488',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:39:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='95243c9b3c544aff8e9ee6043bb6f522',ramdisk_id='',reservation_id='r-edrokjv4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBackupRestore-1966435226',owner_user_name='tempest-TestVolumeBackupRestore-1966435226-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:39:06Z,user_data=None,user_id='9609ee98328640299138fa34258ef48f',uuid=c664bd16-8380-4052-abce-702c782ec7b0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "address": "fa:16:3e:0d:26:c0", "network": {"id": "0c962c42-ea1d-4660-859c-00539e47d1a1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-505955194-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95243c9b3c544aff8e9ee6043bb6f522", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba062aba-c2", "ovs_interfaceid": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.933 250273 DEBUG nova.network.os_vif_util [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Converting VIF {"id": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "address": "fa:16:3e:0d:26:c0", "network": {"id": "0c962c42-ea1d-4660-859c-00539e47d1a1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-505955194-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95243c9b3c544aff8e9ee6043bb6f522", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba062aba-c2", "ovs_interfaceid": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.934 250273 DEBUG nova.network.os_vif_util [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0d:26:c0,bridge_name='br-int',has_traffic_filtering=True,id=ba062aba-c2c0-4bcb-b9fe-732316c56b6f,network=Network(0c962c42-ea1d-4660-859c-00539e47d1a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba062aba-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.934 250273 DEBUG os_vif [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:26:c0,bridge_name='br-int',has_traffic_filtering=True,id=ba062aba-c2c0-4bcb-b9fe-732316c56b6f,network=Network(0c962c42-ea1d-4660-859c-00539e47d1a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba062aba-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.937 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.937 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba062aba-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.939 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.940 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:27 np0005593232 nova_compute[250269]: 2026-01-23 10:39:27.944 250273 INFO os_vif [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0d:26:c0,bridge_name='br-int',has_traffic_filtering=True,id=ba062aba-c2c0-4bcb-b9fe-732316c56b6f,network=Network(0c962c42-ea1d-4660-859c-00539e47d1a1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba062aba-c2')#033[00m
Jan 23 05:39:28 np0005593232 podman[383554]: 2026-01-23 10:39:28.011779832 +0000 UTC m=+0.050353402 container remove 9704e331a57394307f328e6ea5372f40466d4631ceb58de63717ce7aded1b3c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0c962c42-ea1d-4660-859c-00539e47d1a1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 05:39:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:28.022 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[26103519-91a7-47c8-8889-7ef9e64a1001]: (4, ('Fri Jan 23 10:39:27 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-0c962c42-ea1d-4660-859c-00539e47d1a1 (9704e331a57394307f328e6ea5372f40466d4631ceb58de63717ce7aded1b3c9)\n9704e331a57394307f328e6ea5372f40466d4631ceb58de63717ce7aded1b3c9\nFri Jan 23 10:39:27 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-0c962c42-ea1d-4660-859c-00539e47d1a1 (9704e331a57394307f328e6ea5372f40466d4631ceb58de63717ce7aded1b3c9)\n9704e331a57394307f328e6ea5372f40466d4631ceb58de63717ce7aded1b3c9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:28.026 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b13aa7-dba0-4960-9d81-c4509c056cd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:28.027 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c962c42-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:39:28 np0005593232 kernel: tap0c962c42-e0: left promiscuous mode
Jan 23 05:39:28 np0005593232 nova_compute[250269]: 2026-01-23 10:39:28.029 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:28.034 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4e6afa4b-de56-4de2-8624-059ee6db8685]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:28 np0005593232 nova_compute[250269]: 2026-01-23 10:39:28.045 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:28.047 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '74'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:39:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:28.057 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[83774bd0-56b4-4320-8958-8b053cdd9411]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:28.058 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[40e929a2-f386-4884-b05f-c2eef6e6abec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:28.075 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3d8c2463-8475-4e48-a1a3-9b683a0d7400]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 865877, 'reachable_time': 23430, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383591, 'error': None, 'target': 'ovnmeta-0c962c42-ea1d-4660-859c-00539e47d1a1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:28.079 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0c962c42-ea1d-4660-859c-00539e47d1a1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:39:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:28.080 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[a59cdccb-f19c-4f53-8813-c0dccb77b270]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:28 np0005593232 systemd[1]: run-netns-ovnmeta\x2d0c962c42\x2dea1d\x2d4660\x2d859c\x2d00539e47d1a1.mount: Deactivated successfully.
Jan 23 05:39:28 np0005593232 nova_compute[250269]: 2026-01-23 10:39:28.204 250273 INFO nova.virt.libvirt.driver [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Deleting instance files /var/lib/nova/instances/c664bd16-8380-4052-abce-702c782ec7b0_del#033[00m
Jan 23 05:39:28 np0005593232 nova_compute[250269]: 2026-01-23 10:39:28.204 250273 INFO nova.virt.libvirt.driver [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Deletion of /var/lib/nova/instances/c664bd16-8380-4052-abce-702c782ec7b0_del complete#033[00m
Jan 23 05:39:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3365: 321 pgs: 321 active+clean; 612 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.7 MiB/s rd, 6.4 MiB/s wr, 288 op/s
Jan 23 05:39:28 np0005593232 nova_compute[250269]: 2026-01-23 10:39:28.332 250273 INFO nova.compute.manager [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Took 0.88 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:39:28 np0005593232 nova_compute[250269]: 2026-01-23 10:39:28.333 250273 DEBUG oslo.service.loopingcall [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:39:28 np0005593232 nova_compute[250269]: 2026-01-23 10:39:28.333 250273 DEBUG nova.compute.manager [-] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:39:28 np0005593232 nova_compute[250269]: 2026-01-23 10:39:28.333 250273 DEBUG nova.network.neutron [-] [instance: c664bd16-8380-4052-abce-702c782ec7b0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:39:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:28.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:39:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:28.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:39:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:39:29 np0005593232 nova_compute[250269]: 2026-01-23 10:39:29.067 250273 DEBUG nova.network.neutron [req-2653e650-3862-472a-bcc8-4184527710ab req-0925a2b5-6a67-40ab-89d9-d0beccc995b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Updated VIF entry in instance network info cache for port ba062aba-c2c0-4bcb-b9fe-732316c56b6f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:39:29 np0005593232 nova_compute[250269]: 2026-01-23 10:39:29.068 250273 DEBUG nova.network.neutron [req-2653e650-3862-472a-bcc8-4184527710ab req-0925a2b5-6a67-40ab-89d9-d0beccc995b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Updating instance_info_cache with network_info: [{"id": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "address": "fa:16:3e:0d:26:c0", "network": {"id": "0c962c42-ea1d-4660-859c-00539e47d1a1", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-505955194-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "95243c9b3c544aff8e9ee6043bb6f522", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba062aba-c2", "ovs_interfaceid": "ba062aba-c2c0-4bcb-b9fe-732316c56b6f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:39:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 05:39:29 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:39:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 05:39:29 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:39:29 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:39:29 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:39:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:39:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:39:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:39:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:39:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:39:30 np0005593232 nova_compute[250269]: 2026-01-23 10:39:30.213 250273 DEBUG oslo_concurrency.lockutils [req-2653e650-3862-472a-bcc8-4184527710ab req-0925a2b5-6a67-40ab-89d9-d0beccc995b0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-c664bd16-8380-4052-abce-702c782ec7b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:39:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:39:30 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 403d523e-3fc4-4818-86f0-fca4feae2c89 does not exist
Jan 23 05:39:30 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 41468024-ea4f-4ae5-ad12-6269d61f49ed does not exist
Jan 23 05:39:30 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b41e10cd-59ce-4fca-bd9b-4ae43095d788 does not exist
Jan 23 05:39:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:39:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:39:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:39:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:39:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:39:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:39:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3366: 321 pgs: 321 active+clean; 612 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 910 KiB/s rd, 6.4 MiB/s wr, 188 op/s
Jan 23 05:39:30 np0005593232 nova_compute[250269]: 2026-01-23 10:39:30.357 250273 DEBUG nova.network.neutron [-] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:39:30 np0005593232 nova_compute[250269]: 2026-01-23 10:39:30.391 250273 INFO nova.compute.manager [-] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Took 2.06 seconds to deallocate network for instance.#033[00m
Jan 23 05:39:30 np0005593232 nova_compute[250269]: 2026-01-23 10:39:30.614 250273 INFO nova.compute.manager [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Took 0.22 seconds to detach 1 volumes for instance.#033[00m
Jan 23 05:39:30 np0005593232 nova_compute[250269]: 2026-01-23 10:39:30.651 250273 DEBUG nova.compute.manager [req-60a1acbc-e923-48b3-924e-775c4ab7654f req-9d1e9384-2654-4c6e-b730-4d4c1d6344be 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Received event network-vif-unplugged-ba062aba-c2c0-4bcb-b9fe-732316c56b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:39:30 np0005593232 nova_compute[250269]: 2026-01-23 10:39:30.652 250273 DEBUG oslo_concurrency.lockutils [req-60a1acbc-e923-48b3-924e-775c4ab7654f req-9d1e9384-2654-4c6e-b730-4d4c1d6344be 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "c664bd16-8380-4052-abce-702c782ec7b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:39:30 np0005593232 nova_compute[250269]: 2026-01-23 10:39:30.653 250273 DEBUG oslo_concurrency.lockutils [req-60a1acbc-e923-48b3-924e-775c4ab7654f req-9d1e9384-2654-4c6e-b730-4d4c1d6344be 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "c664bd16-8380-4052-abce-702c782ec7b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:39:30 np0005593232 nova_compute[250269]: 2026-01-23 10:39:30.654 250273 DEBUG oslo_concurrency.lockutils [req-60a1acbc-e923-48b3-924e-775c4ab7654f req-9d1e9384-2654-4c6e-b730-4d4c1d6344be 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "c664bd16-8380-4052-abce-702c782ec7b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:39:30 np0005593232 nova_compute[250269]: 2026-01-23 10:39:30.654 250273 DEBUG nova.compute.manager [req-60a1acbc-e923-48b3-924e-775c4ab7654f req-9d1e9384-2654-4c6e-b730-4d4c1d6344be 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] No waiting events found dispatching network-vif-unplugged-ba062aba-c2c0-4bcb-b9fe-732316c56b6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:39:30 np0005593232 nova_compute[250269]: 2026-01-23 10:39:30.654 250273 DEBUG nova.compute.manager [req-60a1acbc-e923-48b3-924e-775c4ab7654f req-9d1e9384-2654-4c6e-b730-4d4c1d6344be 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Received event network-vif-unplugged-ba062aba-c2c0-4bcb-b9fe-732316c56b6f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:39:30 np0005593232 nova_compute[250269]: 2026-01-23 10:39:30.655 250273 DEBUG nova.compute.manager [req-60a1acbc-e923-48b3-924e-775c4ab7654f req-9d1e9384-2654-4c6e-b730-4d4c1d6344be 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Received event network-vif-plugged-ba062aba-c2c0-4bcb-b9fe-732316c56b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:39:30 np0005593232 nova_compute[250269]: 2026-01-23 10:39:30.655 250273 DEBUG oslo_concurrency.lockutils [req-60a1acbc-e923-48b3-924e-775c4ab7654f req-9d1e9384-2654-4c6e-b730-4d4c1d6344be 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "c664bd16-8380-4052-abce-702c782ec7b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:39:30 np0005593232 nova_compute[250269]: 2026-01-23 10:39:30.655 250273 DEBUG oslo_concurrency.lockutils [req-60a1acbc-e923-48b3-924e-775c4ab7654f req-9d1e9384-2654-4c6e-b730-4d4c1d6344be 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "c664bd16-8380-4052-abce-702c782ec7b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:39:30 np0005593232 nova_compute[250269]: 2026-01-23 10:39:30.656 250273 DEBUG oslo_concurrency.lockutils [req-60a1acbc-e923-48b3-924e-775c4ab7654f req-9d1e9384-2654-4c6e-b730-4d4c1d6344be 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "c664bd16-8380-4052-abce-702c782ec7b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:39:30 np0005593232 nova_compute[250269]: 2026-01-23 10:39:30.656 250273 DEBUG nova.compute.manager [req-60a1acbc-e923-48b3-924e-775c4ab7654f req-9d1e9384-2654-4c6e-b730-4d4c1d6344be 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] No waiting events found dispatching network-vif-plugged-ba062aba-c2c0-4bcb-b9fe-732316c56b6f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:39:30 np0005593232 nova_compute[250269]: 2026-01-23 10:39:30.656 250273 WARNING nova.compute.manager [req-60a1acbc-e923-48b3-924e-775c4ab7654f req-9d1e9384-2654-4c6e-b730-4d4c1d6344be 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Received unexpected event network-vif-plugged-ba062aba-c2c0-4bcb-b9fe-732316c56b6f for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:39:30 np0005593232 nova_compute[250269]: 2026-01-23 10:39:30.668 250273 DEBUG oslo_concurrency.lockutils [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:39:30 np0005593232 nova_compute[250269]: 2026-01-23 10:39:30.668 250273 DEBUG oslo_concurrency.lockutils [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:39:30 np0005593232 nova_compute[250269]: 2026-01-23 10:39:30.747 250273 DEBUG oslo_concurrency.processutils [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:39:30 np0005593232 nova_compute[250269]: 2026-01-23 10:39:30.787 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:30 np0005593232 podman[383751]: 2026-01-23 10:39:30.823211582 +0000 UTC m=+0.035808849 container create 8750be53876b0e65d040d22c3e64a2c490ef2abeb3e3f29c1445992206b12505 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 23 05:39:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:30.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:30 np0005593232 systemd[1]: Started libpod-conmon-8750be53876b0e65d040d22c3e64a2c490ef2abeb3e3f29c1445992206b12505.scope.
Jan 23 05:39:30 np0005593232 podman[383751]: 2026-01-23 10:39:30.807976809 +0000 UTC m=+0.020574106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:39:30 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:39:30 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:39:30 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:39:30 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:39:30 np0005593232 podman[383751]: 2026-01-23 10:39:30.934530316 +0000 UTC m=+0.147127633 container init 8750be53876b0e65d040d22c3e64a2c490ef2abeb3e3f29c1445992206b12505 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 05:39:30 np0005593232 podman[383751]: 2026-01-23 10:39:30.944256713 +0000 UTC m=+0.156853990 container start 8750be53876b0e65d040d22c3e64a2c490ef2abeb3e3f29c1445992206b12505 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 05:39:30 np0005593232 podman[383751]: 2026-01-23 10:39:30.947982478 +0000 UTC m=+0.160579815 container attach 8750be53876b0e65d040d22c3e64a2c490ef2abeb3e3f29c1445992206b12505 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_khorana, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:39:30 np0005593232 zen_khorana[383770]: 167 167
Jan 23 05:39:30 np0005593232 systemd[1]: libpod-8750be53876b0e65d040d22c3e64a2c490ef2abeb3e3f29c1445992206b12505.scope: Deactivated successfully.
Jan 23 05:39:30 np0005593232 podman[383751]: 2026-01-23 10:39:30.952020003 +0000 UTC m=+0.164617300 container died 8750be53876b0e65d040d22c3e64a2c490ef2abeb3e3f29c1445992206b12505 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_khorana, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:39:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:30.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:30 np0005593232 systemd[1]: var-lib-containers-storage-overlay-97d2338b950293e7b309250652d2a366ee2c456f40c91cfc71c97b6a2a04b62d-merged.mount: Deactivated successfully.
Jan 23 05:39:30 np0005593232 podman[383751]: 2026-01-23 10:39:30.993111011 +0000 UTC m=+0.205708308 container remove 8750be53876b0e65d040d22c3e64a2c490ef2abeb3e3f29c1445992206b12505 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_khorana, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:39:31 np0005593232 systemd[1]: libpod-conmon-8750be53876b0e65d040d22c3e64a2c490ef2abeb3e3f29c1445992206b12505.scope: Deactivated successfully.
Jan 23 05:39:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:39:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1140089813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:39:31 np0005593232 nova_compute[250269]: 2026-01-23 10:39:31.182 250273 DEBUG oslo_concurrency.processutils [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:39:31 np0005593232 nova_compute[250269]: 2026-01-23 10:39:31.193 250273 DEBUG nova.compute.provider_tree [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:39:31 np0005593232 podman[383813]: 2026-01-23 10:39:31.200497306 +0000 UTC m=+0.063018852 container create 8ef44dbbe49bcc3d2dc533802e47cdce4600503c7fba3e333c04d5b26440a6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:39:31 np0005593232 nova_compute[250269]: 2026-01-23 10:39:31.218 250273 DEBUG nova.scheduler.client.report [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:39:31 np0005593232 nova_compute[250269]: 2026-01-23 10:39:31.243 250273 DEBUG oslo_concurrency.lockutils [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:39:31 np0005593232 systemd[1]: Started libpod-conmon-8ef44dbbe49bcc3d2dc533802e47cdce4600503c7fba3e333c04d5b26440a6be.scope.
Jan 23 05:39:31 np0005593232 podman[383813]: 2026-01-23 10:39:31.179724435 +0000 UTC m=+0.042245961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:39:31 np0005593232 nova_compute[250269]: 2026-01-23 10:39:31.289 250273 INFO nova.scheduler.client.report [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Deleted allocations for instance c664bd16-8380-4052-abce-702c782ec7b0#033[00m
Jan 23 05:39:31 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:39:31 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e1bad3ce7a1ac0ca2307f8dab9fc36f90008a7720d396ec7991f1464423d92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:39:31 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e1bad3ce7a1ac0ca2307f8dab9fc36f90008a7720d396ec7991f1464423d92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:39:31 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e1bad3ce7a1ac0ca2307f8dab9fc36f90008a7720d396ec7991f1464423d92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:39:31 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e1bad3ce7a1ac0ca2307f8dab9fc36f90008a7720d396ec7991f1464423d92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:39:31 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e1bad3ce7a1ac0ca2307f8dab9fc36f90008a7720d396ec7991f1464423d92/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:39:31 np0005593232 podman[383813]: 2026-01-23 10:39:31.318437488 +0000 UTC m=+0.180958994 container init 8ef44dbbe49bcc3d2dc533802e47cdce4600503c7fba3e333c04d5b26440a6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_archimedes, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:39:31 np0005593232 podman[383813]: 2026-01-23 10:39:31.335705319 +0000 UTC m=+0.198226865 container start 8ef44dbbe49bcc3d2dc533802e47cdce4600503c7fba3e333c04d5b26440a6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_archimedes, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 23 05:39:31 np0005593232 podman[383813]: 2026-01-23 10:39:31.34030869 +0000 UTC m=+0.202830216 container attach 8ef44dbbe49bcc3d2dc533802e47cdce4600503c7fba3e333c04d5b26440a6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_archimedes, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 23 05:39:31 np0005593232 nova_compute[250269]: 2026-01-23 10:39:31.367 250273 DEBUG oslo_concurrency.lockutils [None req-ea8a2d97-2e80-4e50-95ab-b90728b2ba83 9609ee98328640299138fa34258ef48f 95243c9b3c544aff8e9ee6043bb6f522 - - default default] Lock "c664bd16-8380-4052-abce-702c782ec7b0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.916s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:39:32 np0005593232 practical_archimedes[383832]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:39:32 np0005593232 practical_archimedes[383832]: --> relative data size: 1.0
Jan 23 05:39:32 np0005593232 practical_archimedes[383832]: --> All data devices are unavailable
Jan 23 05:39:32 np0005593232 systemd[1]: libpod-8ef44dbbe49bcc3d2dc533802e47cdce4600503c7fba3e333c04d5b26440a6be.scope: Deactivated successfully.
Jan 23 05:39:32 np0005593232 podman[383813]: 2026-01-23 10:39:32.171187276 +0000 UTC m=+1.033708822 container died 8ef44dbbe49bcc3d2dc533802e47cdce4600503c7fba3e333c04d5b26440a6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_archimedes, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:39:32 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e7e1bad3ce7a1ac0ca2307f8dab9fc36f90008a7720d396ec7991f1464423d92-merged.mount: Deactivated successfully.
Jan 23 05:39:32 np0005593232 podman[383813]: 2026-01-23 10:39:32.245018435 +0000 UTC m=+1.107539931 container remove 8ef44dbbe49bcc3d2dc533802e47cdce4600503c7fba3e333c04d5b26440a6be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 05:39:32 np0005593232 systemd[1]: libpod-conmon-8ef44dbbe49bcc3d2dc533802e47cdce4600503c7fba3e333c04d5b26440a6be.scope: Deactivated successfully.
Jan 23 05:39:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3367: 321 pgs: 321 active+clean; 613 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 962 KiB/s rd, 6.4 MiB/s wr, 208 op/s
Jan 23 05:39:32 np0005593232 nova_compute[250269]: 2026-01-23 10:39:32.783 250273 DEBUG nova.compute.manager [req-495a259d-ae26-4618-8ed6-21d7410beae4 req-7013e211-e20f-4384-8d23-5baf6678b8a4 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Received event network-vif-deleted-ba062aba-c2c0-4bcb-b9fe-732316c56b6f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:39:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:39:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:32.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:39:32 np0005593232 podman[383999]: 2026-01-23 10:39:32.902017589 +0000 UTC m=+0.052628057 container create 4fdc279a3df4747d2624621f049a2f76eea153d61680227bd3d48d3f3881608f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 23 05:39:32 np0005593232 nova_compute[250269]: 2026-01-23 10:39:32.941 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:39:32 np0005593232 systemd[1]: Started libpod-conmon-4fdc279a3df4747d2624621f049a2f76eea153d61680227bd3d48d3f3881608f.scope.
Jan 23 05:39:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:32.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:32 np0005593232 podman[383999]: 2026-01-23 10:39:32.876956727 +0000 UTC m=+0.027567275 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:39:32 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:39:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e379 do_prune osdmap full prune enabled
Jan 23 05:39:32 np0005593232 podman[383999]: 2026-01-23 10:39:32.990516445 +0000 UTC m=+0.141126903 container init 4fdc279a3df4747d2624621f049a2f76eea153d61680227bd3d48d3f3881608f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 23 05:39:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e380 e380: 3 total, 3 up, 3 in
Jan 23 05:39:32 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e380: 3 total, 3 up, 3 in
Jan 23 05:39:32 np0005593232 podman[383999]: 2026-01-23 10:39:32.999526821 +0000 UTC m=+0.150137289 container start 4fdc279a3df4747d2624621f049a2f76eea153d61680227bd3d48d3f3881608f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bouman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 05:39:33 np0005593232 podman[383999]: 2026-01-23 10:39:33.005079698 +0000 UTC m=+0.155690166 container attach 4fdc279a3df4747d2624621f049a2f76eea153d61680227bd3d48d3f3881608f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bouman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:39:33 np0005593232 dreamy_bouman[384015]: 167 167
Jan 23 05:39:33 np0005593232 systemd[1]: libpod-4fdc279a3df4747d2624621f049a2f76eea153d61680227bd3d48d3f3881608f.scope: Deactivated successfully.
Jan 23 05:39:33 np0005593232 podman[383999]: 2026-01-23 10:39:33.007716503 +0000 UTC m=+0.158326971 container died 4fdc279a3df4747d2624621f049a2f76eea153d61680227bd3d48d3f3881608f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bouman, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 05:39:33 np0005593232 systemd[1]: var-lib-containers-storage-overlay-9d961f27a993403608332ece5ca1f15d7aea0041b5f1818920187aa1abd40b1c-merged.mount: Deactivated successfully.
Jan 23 05:39:33 np0005593232 podman[383999]: 2026-01-23 10:39:33.050183481 +0000 UTC m=+0.200793959 container remove 4fdc279a3df4747d2624621f049a2f76eea153d61680227bd3d48d3f3881608f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 05:39:33 np0005593232 systemd[1]: libpod-conmon-4fdc279a3df4747d2624621f049a2f76eea153d61680227bd3d48d3f3881608f.scope: Deactivated successfully.
Jan 23 05:39:33 np0005593232 podman[384040]: 2026-01-23 10:39:33.286681573 +0000 UTC m=+0.078844622 container create 595f079e4ca14c90b840a5cd295093e7ce43431a6cccd3614698bfa2d74c9f8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Jan 23 05:39:33 np0005593232 podman[384040]: 2026-01-23 10:39:33.242369323 +0000 UTC m=+0.034532452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:39:33 np0005593232 systemd[1]: Started libpod-conmon-595f079e4ca14c90b840a5cd295093e7ce43431a6cccd3614698bfa2d74c9f8d.scope.
Jan 23 05:39:33 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:39:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae96d6737ae54f8547b544a134cb3989665d027e3fb96d7536c75da8c88b5f8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:39:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae96d6737ae54f8547b544a134cb3989665d027e3fb96d7536c75da8c88b5f8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:39:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae96d6737ae54f8547b544a134cb3989665d027e3fb96d7536c75da8c88b5f8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:39:33 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae96d6737ae54f8547b544a134cb3989665d027e3fb96d7536c75da8c88b5f8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:39:33 np0005593232 podman[384040]: 2026-01-23 10:39:33.412360215 +0000 UTC m=+0.204523304 container init 595f079e4ca14c90b840a5cd295093e7ce43431a6cccd3614698bfa2d74c9f8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bhabha, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 05:39:33 np0005593232 podman[384040]: 2026-01-23 10:39:33.421254588 +0000 UTC m=+0.213417627 container start 595f079e4ca14c90b840a5cd295093e7ce43431a6cccd3614698bfa2d74c9f8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 05:39:33 np0005593232 podman[384040]: 2026-01-23 10:39:33.425008614 +0000 UTC m=+0.217171663 container attach 595f079e4ca14c90b840a5cd295093e7ce43431a6cccd3614698bfa2d74c9f8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:39:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]: {
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:    "0": [
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:        {
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:            "devices": [
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:                "/dev/loop3"
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:            ],
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:            "lv_name": "ceph_lv0",
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:            "lv_size": "7511998464",
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:            "name": "ceph_lv0",
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:            "tags": {
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:                "ceph.cluster_name": "ceph",
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:                "ceph.crush_device_class": "",
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:                "ceph.encrypted": "0",
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:                "ceph.osd_id": "0",
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:                "ceph.type": "block",
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:                "ceph.vdo": "0"
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:            },
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:            "type": "block",
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:            "vg_name": "ceph_vg0"
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:        }
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]:    ]
Jan 23 05:39:34 np0005593232 vibrant_bhabha[384056]: }
Jan 23 05:39:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3369: 321 pgs: 321 active+clean; 619 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 815 KiB/s rd, 5.1 MiB/s wr, 191 op/s
Jan 23 05:39:34 np0005593232 systemd[1]: libpod-595f079e4ca14c90b840a5cd295093e7ce43431a6cccd3614698bfa2d74c9f8d.scope: Deactivated successfully.
Jan 23 05:39:34 np0005593232 podman[384040]: 2026-01-23 10:39:34.298513331 +0000 UTC m=+1.090676380 container died 595f079e4ca14c90b840a5cd295093e7ce43431a6cccd3614698bfa2d74c9f8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bhabha, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:39:34 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ae96d6737ae54f8547b544a134cb3989665d027e3fb96d7536c75da8c88b5f8f-merged.mount: Deactivated successfully.
Jan 23 05:39:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:39:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:34.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:39:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:34.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:35 np0005593232 podman[384040]: 2026-01-23 10:39:35.067706975 +0000 UTC m=+1.859870014 container remove 595f079e4ca14c90b840a5cd295093e7ce43431a6cccd3614698bfa2d74c9f8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 05:39:35 np0005593232 systemd[1]: libpod-conmon-595f079e4ca14c90b840a5cd295093e7ce43431a6cccd3614698bfa2d74c9f8d.scope: Deactivated successfully.
Jan 23 05:39:35 np0005593232 nova_compute[250269]: 2026-01-23 10:39:35.776 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:39:35 np0005593232 podman[384220]: 2026-01-23 10:39:35.832610086 +0000 UTC m=+0.050657291 container create a8ba3201df83487fdc0fb4e318d249941c662fcbadccea7d17662c0592792f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclaren, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:39:35 np0005593232 systemd[1]: Started libpod-conmon-a8ba3201df83487fdc0fb4e318d249941c662fcbadccea7d17662c0592792f4d.scope.
Jan 23 05:39:35 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:39:35 np0005593232 podman[384220]: 2026-01-23 10:39:35.808101329 +0000 UTC m=+0.026148554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:39:35 np0005593232 podman[384220]: 2026-01-23 10:39:35.912191808 +0000 UTC m=+0.130239003 container init a8ba3201df83487fdc0fb4e318d249941c662fcbadccea7d17662c0592792f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 23 05:39:35 np0005593232 podman[384220]: 2026-01-23 10:39:35.919275849 +0000 UTC m=+0.137323024 container start a8ba3201df83487fdc0fb4e318d249941c662fcbadccea7d17662c0592792f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 23 05:39:35 np0005593232 podman[384220]: 2026-01-23 10:39:35.922607824 +0000 UTC m=+0.140655029 container attach a8ba3201df83487fdc0fb4e318d249941c662fcbadccea7d17662c0592792f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:39:35 np0005593232 sharp_mclaren[384237]: 167 167
Jan 23 05:39:35 np0005593232 systemd[1]: libpod-a8ba3201df83487fdc0fb4e318d249941c662fcbadccea7d17662c0592792f4d.scope: Deactivated successfully.
Jan 23 05:39:35 np0005593232 podman[384220]: 2026-01-23 10:39:35.92845213 +0000 UTC m=+0.146499305 container died a8ba3201df83487fdc0fb4e318d249941c662fcbadccea7d17662c0592792f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclaren, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:39:35 np0005593232 systemd[1]: var-lib-containers-storage-overlay-780e938f20d2697dc8fd1ed9658079979c613046af0513bb1695df6d7b5c2131-merged.mount: Deactivated successfully.
Jan 23 05:39:35 np0005593232 podman[384220]: 2026-01-23 10:39:35.969985981 +0000 UTC m=+0.188033186 container remove a8ba3201df83487fdc0fb4e318d249941c662fcbadccea7d17662c0592792f4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclaren, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Jan 23 05:39:35 np0005593232 systemd[1]: libpod-conmon-a8ba3201df83487fdc0fb4e318d249941c662fcbadccea7d17662c0592792f4d.scope: Deactivated successfully.
Jan 23 05:39:36 np0005593232 podman[384234]: 2026-01-23 10:39:36.000152718 +0000 UTC m=+0.117060898 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:39:36 np0005593232 podman[384288]: 2026-01-23 10:39:36.17822166 +0000 UTC m=+0.069150657 container create fa1e700c1fac50b8f96caa7fe084715900c98009034ac968b883a3a1a3f4d703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 05:39:36 np0005593232 systemd[1]: Started libpod-conmon-fa1e700c1fac50b8f96caa7fe084715900c98009034ac968b883a3a1a3f4d703.scope.
Jan 23 05:39:36 np0005593232 podman[384288]: 2026-01-23 10:39:36.156032749 +0000 UTC m=+0.046961796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:39:36 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:39:36 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8873f7d363d8ba311fa1f6e71e8018fda8505329cd6a3fec442b79cfe3bec97e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:39:36 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8873f7d363d8ba311fa1f6e71e8018fda8505329cd6a3fec442b79cfe3bec97e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:39:36 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8873f7d363d8ba311fa1f6e71e8018fda8505329cd6a3fec442b79cfe3bec97e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:39:36 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8873f7d363d8ba311fa1f6e71e8018fda8505329cd6a3fec442b79cfe3bec97e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:39:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3370: 321 pgs: 321 active+clean; 619 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 815 KiB/s rd, 5.1 MiB/s wr, 191 op/s
Jan 23 05:39:36 np0005593232 podman[384288]: 2026-01-23 10:39:36.27604906 +0000 UTC m=+0.166978147 container init fa1e700c1fac50b8f96caa7fe084715900c98009034ac968b883a3a1a3f4d703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:39:36 np0005593232 podman[384288]: 2026-01-23 10:39:36.284557052 +0000 UTC m=+0.175486059 container start fa1e700c1fac50b8f96caa7fe084715900c98009034ac968b883a3a1a3f4d703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:39:36 np0005593232 podman[384288]: 2026-01-23 10:39:36.288255577 +0000 UTC m=+0.179184614 container attach fa1e700c1fac50b8f96caa7fe084715900c98009034ac968b883a3a1a3f4d703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:39:36.713191) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164776713339, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1183, "num_deletes": 251, "total_data_size": 1828267, "memory_usage": 1860672, "flush_reason": "Manual Compaction"}
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164776732772, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 1809017, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 73667, "largest_seqno": 74849, "table_properties": {"data_size": 1803231, "index_size": 3116, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13032, "raw_average_key_size": 20, "raw_value_size": 1791371, "raw_average_value_size": 2821, "num_data_blocks": 136, "num_entries": 635, "num_filter_entries": 635, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769164679, "oldest_key_time": 1769164679, "file_creation_time": 1769164776, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 19611 microseconds, and 9682 cpu microseconds.
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:39:36.732825) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 1809017 bytes OK
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:39:36.732850) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:39:36.734965) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:39:36.734988) EVENT_LOG_v1 {"time_micros": 1769164776734981, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:39:36.735011) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 1822795, prev total WAL file size 1822795, number of live WAL files 2.
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:39:36.736469) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(1766KB)], [170(12MB)]
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164776736633, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 15435080, "oldest_snapshot_seqno": -1}
Jan 23 05:39:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:39:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:36.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 9971 keys, 13459596 bytes, temperature: kUnknown
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164776875666, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 13459596, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13394465, "index_size": 39098, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24965, "raw_key_size": 263445, "raw_average_key_size": 26, "raw_value_size": 13219129, "raw_average_value_size": 1325, "num_data_blocks": 1493, "num_entries": 9971, "num_filter_entries": 9971, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769164776, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:39:36.876286) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 13459596 bytes
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:39:36.878651) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 110.8 rd, 96.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 13.0 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(16.0) write-amplify(7.4) OK, records in: 10492, records dropped: 521 output_compression: NoCompression
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:39:36.878687) EVENT_LOG_v1 {"time_micros": 1769164776878669, "job": 106, "event": "compaction_finished", "compaction_time_micros": 139248, "compaction_time_cpu_micros": 69409, "output_level": 6, "num_output_files": 1, "total_output_size": 13459596, "num_input_records": 10492, "num_output_records": 9971, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164776879441, "job": 106, "event": "table_file_deletion", "file_number": 172}
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164776884909, "job": 106, "event": "table_file_deletion", "file_number": 170}
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:39:36.736086) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:39:36.885040) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:39:36.885049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:39:36.885051) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:39:36.885052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:39:36 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:39:36.885054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:39:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:39:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:36.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:39:37 np0005593232 festive_carver[384305]: {
Jan 23 05:39:37 np0005593232 festive_carver[384305]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:39:37 np0005593232 festive_carver[384305]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:39:37 np0005593232 festive_carver[384305]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:39:37 np0005593232 festive_carver[384305]:        "osd_id": 0,
Jan 23 05:39:37 np0005593232 festive_carver[384305]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:39:37 np0005593232 festive_carver[384305]:        "type": "bluestore"
Jan 23 05:39:37 np0005593232 festive_carver[384305]:    }
Jan 23 05:39:37 np0005593232 festive_carver[384305]: }
Jan 23 05:39:37 np0005593232 systemd[1]: libpod-fa1e700c1fac50b8f96caa7fe084715900c98009034ac968b883a3a1a3f4d703.scope: Deactivated successfully.
Jan 23 05:39:37 np0005593232 podman[384327]: 2026-01-23 10:39:37.277038671 +0000 UTC m=+0.033662388 container died fa1e700c1fac50b8f96caa7fe084715900c98009034ac968b883a3a1a3f4d703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:39:37 np0005593232 systemd[1]: var-lib-containers-storage-overlay-8873f7d363d8ba311fa1f6e71e8018fda8505329cd6a3fec442b79cfe3bec97e-merged.mount: Deactivated successfully.
Jan 23 05:39:37 np0005593232 podman[384327]: 2026-01-23 10:39:37.334574106 +0000 UTC m=+0.091197813 container remove fa1e700c1fac50b8f96caa7fe084715900c98009034ac968b883a3a1a3f4d703 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_carver, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 05:39:37 np0005593232 systemd[1]: libpod-conmon-fa1e700c1fac50b8f96caa7fe084715900c98009034ac968b883a3a1a3f4d703.scope: Deactivated successfully.
Jan 23 05:39:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:39:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:39:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:39:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:39:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:39:37
Jan 23 05:39:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:39:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:39:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'images', 'vms', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes']
Jan 23 05:39:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:39:37 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 41fdab73-0a65-4bc4-8532-645afe9c448f does not exist
Jan 23 05:39:37 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a2ad6dbe-f774-47a9-a77c-1943db1d4e1c does not exist
Jan 23 05:39:37 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 073e341d-8b6f-4241-90da-ee27a36b321e does not exist
Jan 23 05:39:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:39:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:39:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:39:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:39:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:39:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:39:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:39:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:39:37 np0005593232 nova_compute[250269]: 2026-01-23 10:39:37.991 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3371: 321 pgs: 321 active+clean; 438 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 141 KiB/s rd, 82 KiB/s wr, 117 op/s
Jan 23 05:39:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:39:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:39:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:39:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:39:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:39:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:39:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:39:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:39:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:39:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:39:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:39:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:38.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:39:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:38.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:39:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e380 do_prune osdmap full prune enabled
Jan 23 05:39:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e381 e381: 3 total, 3 up, 3 in
Jan 23 05:39:39 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e381: 3 total, 3 up, 3 in
Jan 23 05:39:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3373: 321 pgs: 321 active+clean; 438 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 97 KiB/s rd, 58 KiB/s wr, 116 op/s
Jan 23 05:39:40 np0005593232 nova_compute[250269]: 2026-01-23 10:39:40.778 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:40.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:40.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:41 np0005593232 nova_compute[250269]: 2026-01-23 10:39:41.100 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:41 np0005593232 nova_compute[250269]: 2026-01-23 10:39:41.434 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:41 np0005593232 podman[384394]: 2026-01-23 10:39:41.497701526 +0000 UTC m=+0.064140314 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:39:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3374: 321 pgs: 321 active+clean; 438 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 45 KiB/s rd, 22 KiB/s wr, 68 op/s
Jan 23 05:39:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:42.656 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:39:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:42.657 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:39:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:42.657 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:39:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:42.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:42 np0005593232 nova_compute[250269]: 2026-01-23 10:39:42.911 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769164767.9095063, c664bd16-8380-4052-abce-702c782ec7b0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:39:42 np0005593232 nova_compute[250269]: 2026-01-23 10:39:42.912 250273 INFO nova.compute.manager [-] [instance: c664bd16-8380-4052-abce-702c782ec7b0] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:39:42 np0005593232 nova_compute[250269]: 2026-01-23 10:39:42.980 250273 DEBUG nova.compute.manager [None req-f1bc1cd3-36a9-4279-8d98-6ceb8ffb7ed6 - - - - - -] [instance: c664bd16-8380-4052-abce-702c782ec7b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:39:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:39:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:42.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:39:42 np0005593232 nova_compute[250269]: 2026-01-23 10:39:42.994 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:43 np0005593232 nova_compute[250269]: 2026-01-23 10:39:43.674 250273 DEBUG oslo_concurrency.lockutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "0888913c-71a6-45fe-97bf-9dddd2b7b521" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:39:43 np0005593232 nova_compute[250269]: 2026-01-23 10:39:43.674 250273 DEBUG oslo_concurrency.lockutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:39:43 np0005593232 nova_compute[250269]: 2026-01-23 10:39:43.933 250273 DEBUG nova.compute.manager [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:39:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:39:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3375: 321 pgs: 321 active+clean; 438 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 27 KiB/s wr, 63 op/s
Jan 23 05:39:44 np0005593232 nova_compute[250269]: 2026-01-23 10:39:44.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:39:44 np0005593232 nova_compute[250269]: 2026-01-23 10:39:44.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 23 05:39:44 np0005593232 nova_compute[250269]: 2026-01-23 10:39:44.493 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 23 05:39:44 np0005593232 nova_compute[250269]: 2026-01-23 10:39:44.587 250273 DEBUG oslo_concurrency.lockutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:39:44 np0005593232 nova_compute[250269]: 2026-01-23 10:39:44.588 250273 DEBUG oslo_concurrency.lockutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:39:44 np0005593232 nova_compute[250269]: 2026-01-23 10:39:44.599 250273 DEBUG nova.virt.hardware [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:39:44 np0005593232 nova_compute[250269]: 2026-01-23 10:39:44.600 250273 INFO nova.compute.claims [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:39:44 np0005593232 nova_compute[250269]: 2026-01-23 10:39:44.763 250273 DEBUG oslo_concurrency.processutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:39:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:44.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:39:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:44.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:39:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:39:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1963671110' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:39:45 np0005593232 nova_compute[250269]: 2026-01-23 10:39:45.227 250273 DEBUG oslo_concurrency.processutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:39:45 np0005593232 nova_compute[250269]: 2026-01-23 10:39:45.236 250273 DEBUG nova.compute.provider_tree [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:39:45 np0005593232 nova_compute[250269]: 2026-01-23 10:39:45.254 250273 DEBUG nova.scheduler.client.report [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:39:45 np0005593232 nova_compute[250269]: 2026-01-23 10:39:45.282 250273 DEBUG oslo_concurrency.lockutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:39:45 np0005593232 nova_compute[250269]: 2026-01-23 10:39:45.283 250273 DEBUG nova.compute.manager [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:39:45 np0005593232 nova_compute[250269]: 2026-01-23 10:39:45.372 250273 DEBUG nova.compute.manager [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:39:45 np0005593232 nova_compute[250269]: 2026-01-23 10:39:45.372 250273 DEBUG nova.network.neutron [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:39:45 np0005593232 nova_compute[250269]: 2026-01-23 10:39:45.392 250273 INFO nova.virt.libvirt.driver [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:39:45 np0005593232 nova_compute[250269]: 2026-01-23 10:39:45.413 250273 DEBUG nova.compute.manager [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:39:45 np0005593232 nova_compute[250269]: 2026-01-23 10:39:45.505 250273 DEBUG nova.compute.manager [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 05:39:45 np0005593232 nova_compute[250269]: 2026-01-23 10:39:45.506 250273 DEBUG nova.virt.libvirt.driver [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:39:45 np0005593232 nova_compute[250269]: 2026-01-23 10:39:45.506 250273 INFO nova.virt.libvirt.driver [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Creating image(s)#033[00m
Jan 23 05:39:45 np0005593232 nova_compute[250269]: 2026-01-23 10:39:45.537 250273 DEBUG nova.storage.rbd_utils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] rbd image 0888913c-71a6-45fe-97bf-9dddd2b7b521_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:39:45 np0005593232 nova_compute[250269]: 2026-01-23 10:39:45.565 250273 DEBUG nova.storage.rbd_utils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] rbd image 0888913c-71a6-45fe-97bf-9dddd2b7b521_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:39:45 np0005593232 nova_compute[250269]: 2026-01-23 10:39:45.596 250273 DEBUG nova.storage.rbd_utils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] rbd image 0888913c-71a6-45fe-97bf-9dddd2b7b521_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:39:45 np0005593232 nova_compute[250269]: 2026-01-23 10:39:45.600 250273 DEBUG oslo_concurrency.processutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:39:45 np0005593232 nova_compute[250269]: 2026-01-23 10:39:45.638 250273 DEBUG nova.policy [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '93cd560e84264023877c47122b5919de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6e762fca3b634c7aa1d994314c059c54', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:39:45 np0005593232 nova_compute[250269]: 2026-01-23 10:39:45.703 250273 DEBUG oslo_concurrency.processutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:39:45 np0005593232 nova_compute[250269]: 2026-01-23 10:39:45.704 250273 DEBUG oslo_concurrency.lockutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:39:45 np0005593232 nova_compute[250269]: 2026-01-23 10:39:45.705 250273 DEBUG oslo_concurrency.lockutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:39:45 np0005593232 nova_compute[250269]: 2026-01-23 10:39:45.705 250273 DEBUG oslo_concurrency.lockutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:39:45 np0005593232 nova_compute[250269]: 2026-01-23 10:39:45.737 250273 DEBUG nova.storage.rbd_utils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] rbd image 0888913c-71a6-45fe-97bf-9dddd2b7b521_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:39:45 np0005593232 nova_compute[250269]: 2026-01-23 10:39:45.742 250273 DEBUG oslo_concurrency.processutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 0888913c-71a6-45fe-97bf-9dddd2b7b521_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:39:45 np0005593232 nova_compute[250269]: 2026-01-23 10:39:45.792 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3376: 321 pgs: 321 active+clean; 438 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 41 KiB/s rd, 27 KiB/s wr, 63 op/s
Jan 23 05:39:46 np0005593232 nova_compute[250269]: 2026-01-23 10:39:46.630 250273 DEBUG oslo_concurrency.processutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 0888913c-71a6-45fe-97bf-9dddd2b7b521_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.888s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:39:46 np0005593232 nova_compute[250269]: 2026-01-23 10:39:46.748 250273 DEBUG nova.storage.rbd_utils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] resizing rbd image 0888913c-71a6-45fe-97bf-9dddd2b7b521_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 05:39:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:46.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:46.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:47 np0005593232 nova_compute[250269]: 2026-01-23 10:39:47.002 250273 DEBUG nova.network.neutron [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Successfully created port: 456d51f3-4b45-4e54-acee-c50facafcd50 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:39:47 np0005593232 nova_compute[250269]: 2026-01-23 10:39:47.372 250273 DEBUG nova.objects.instance [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lazy-loading 'migration_context' on Instance uuid 0888913c-71a6-45fe-97bf-9dddd2b7b521 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:39:47 np0005593232 nova_compute[250269]: 2026-01-23 10:39:47.405 250273 DEBUG nova.virt.libvirt.driver [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 05:39:47 np0005593232 nova_compute[250269]: 2026-01-23 10:39:47.405 250273 DEBUG nova.virt.libvirt.driver [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Ensure instance console log exists: /var/lib/nova/instances/0888913c-71a6-45fe-97bf-9dddd2b7b521/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:39:47 np0005593232 nova_compute[250269]: 2026-01-23 10:39:47.406 250273 DEBUG oslo_concurrency.lockutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:39:47 np0005593232 nova_compute[250269]: 2026-01-23 10:39:47.407 250273 DEBUG oslo_concurrency.lockutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:39:47 np0005593232 nova_compute[250269]: 2026-01-23 10:39:47.407 250273 DEBUG oslo_concurrency.lockutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:39:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:39:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:39:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:39:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:39:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.006519867857807556 of space, bias 1.0, pg target 1.9559603573422668 quantized to 32 (current 32)
Jan 23 05:39:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:39:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004335733935883119 of space, bias 1.0, pg target 1.2963844468290526 quantized to 32 (current 32)
Jan 23 05:39:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:39:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.416259538432905e-05 quantized to 32 (current 32)
Jan 23 05:39:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:39:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5671365362693095 quantized to 32 (current 32)
Jan 23 05:39:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:39:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017332030522985297 quantized to 16 (current 32)
Jan 23 05:39:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:39:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:39:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:39:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002166503815373162 quantized to 32 (current 32)
Jan 23 05:39:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:39:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018415282430671877 quantized to 32 (current 32)
Jan 23 05:39:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:39:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:39:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:39:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004333007630746324 quantized to 32 (current 32)
Jan 23 05:39:48 np0005593232 nova_compute[250269]: 2026-01-23 10:39:48.046 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3377: 321 pgs: 321 active+clean; 484 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 2.1 MiB/s wr, 34 op/s
Jan 23 05:39:48 np0005593232 nova_compute[250269]: 2026-01-23 10:39:48.427 250273 DEBUG nova.network.neutron [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Successfully updated port: 456d51f3-4b45-4e54-acee-c50facafcd50 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:39:48 np0005593232 nova_compute[250269]: 2026-01-23 10:39:48.443 250273 DEBUG oslo_concurrency.lockutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:39:48 np0005593232 nova_compute[250269]: 2026-01-23 10:39:48.444 250273 DEBUG oslo_concurrency.lockutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquired lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:39:48 np0005593232 nova_compute[250269]: 2026-01-23 10:39:48.445 250273 DEBUG nova.network.neutron [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:39:48 np0005593232 nova_compute[250269]: 2026-01-23 10:39:48.530 250273 DEBUG nova.compute.manager [req-8312bcc2-3b56-4cb0-b148-bf8eb32c4360 req-45a5603a-b4e8-4efd-96a3-12ef240129b2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Received event network-changed-456d51f3-4b45-4e54-acee-c50facafcd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:39:48 np0005593232 nova_compute[250269]: 2026-01-23 10:39:48.531 250273 DEBUG nova.compute.manager [req-8312bcc2-3b56-4cb0-b148-bf8eb32c4360 req-45a5603a-b4e8-4efd-96a3-12ef240129b2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Refreshing instance network info cache due to event network-changed-456d51f3-4b45-4e54-acee-c50facafcd50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:39:48 np0005593232 nova_compute[250269]: 2026-01-23 10:39:48.531 250273 DEBUG oslo_concurrency.lockutils [req-8312bcc2-3b56-4cb0-b148-bf8eb32c4360 req-45a5603a-b4e8-4efd-96a3-12ef240129b2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:39:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:48.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:39:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:48.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:39:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:39:49 np0005593232 nova_compute[250269]: 2026-01-23 10:39:49.034 250273 DEBUG nova.network.neutron [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:39:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3378: 321 pgs: 321 active+clean; 484 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.9 MiB/s wr, 30 op/s
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.790 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.836 250273 DEBUG nova.network.neutron [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Updating instance_info_cache with network_info: [{"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:39:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.878 250273 DEBUG oslo_concurrency.lockutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Releasing lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:39:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:50.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.878 250273 DEBUG nova.compute.manager [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Instance network_info: |[{"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.878 250273 DEBUG oslo_concurrency.lockutils [req-8312bcc2-3b56-4cb0-b148-bf8eb32c4360 req-45a5603a-b4e8-4efd-96a3-12ef240129b2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.879 250273 DEBUG nova.network.neutron [req-8312bcc2-3b56-4cb0-b148-bf8eb32c4360 req-45a5603a-b4e8-4efd-96a3-12ef240129b2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Refreshing network info cache for port 456d51f3-4b45-4e54-acee-c50facafcd50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.882 250273 DEBUG nova.virt.libvirt.driver [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Start _get_guest_xml network_info=[{"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.887 250273 WARNING nova.virt.libvirt.driver [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.892 250273 DEBUG nova.virt.libvirt.host [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.893 250273 DEBUG nova.virt.libvirt.host [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.896 250273 DEBUG nova.virt.libvirt.host [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.896 250273 DEBUG nova.virt.libvirt.host [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.898 250273 DEBUG nova.virt.libvirt.driver [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.898 250273 DEBUG nova.virt.hardware [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.898 250273 DEBUG nova.virt.hardware [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.899 250273 DEBUG nova.virt.hardware [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.899 250273 DEBUG nova.virt.hardware [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.899 250273 DEBUG nova.virt.hardware [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.900 250273 DEBUG nova.virt.hardware [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.900 250273 DEBUG nova.virt.hardware [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.901 250273 DEBUG nova.virt.hardware [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.901 250273 DEBUG nova.virt.hardware [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.901 250273 DEBUG nova.virt.hardware [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.901 250273 DEBUG nova.virt.hardware [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:39:50 np0005593232 nova_compute[250269]: 2026-01-23 10:39:50.905 250273 DEBUG oslo_concurrency.processutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:39:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:39:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:50.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:39:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:39:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3170876753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.355 250273 DEBUG oslo_concurrency.processutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.409 250273 DEBUG nova.storage.rbd_utils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] rbd image 0888913c-71a6-45fe-97bf-9dddd2b7b521_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.416 250273 DEBUG oslo_concurrency.processutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:39:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:39:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3080704497' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.879 250273 DEBUG oslo_concurrency.processutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.881 250273 DEBUG nova.virt.libvirt.vif [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:39:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=197,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP3FfIOd2lnI+tPBfDtyl7+3bVUJP3jvoQEZS2+zpCm94FEzq78d4QEW/4ixP6N6S+NwXEvQperhCcfeORiYVMygQWeTqWJgqUherQ/1aiNrcs4OJRb36XBDXhjh6k5P/Q==',key_name='tempest-keypair-529522234',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6e762fca3b634c7aa1d994314c059c54',ramdisk_id='',reservation_id='r-v6jorgzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-63035580',owner_user_name='tempest-AttachVolumeMultiAttachTest-63035580-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:39:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93cd560e84264023877c47122b5919de',uuid=0888913c-71a6-45fe-97bf-9dddd2b7b521,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.881 250273 DEBUG nova.network.os_vif_util [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Converting VIF {"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.882 250273 DEBUG nova.network.os_vif_util [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:5d:ea,bridge_name='br-int',has_traffic_filtering=True,id=456d51f3-4b45-4e54-acee-c50facafcd50,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap456d51f3-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.884 250273 DEBUG nova.objects.instance [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0888913c-71a6-45fe-97bf-9dddd2b7b521 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.907 250273 DEBUG nova.virt.libvirt.driver [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:39:51 np0005593232 nova_compute[250269]:  <uuid>0888913c-71a6-45fe-97bf-9dddd2b7b521</uuid>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:  <name>instance-000000c5</name>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <nova:name>multiattach-server-0</nova:name>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:39:50</nova:creationTime>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:39:51 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:        <nova:user uuid="93cd560e84264023877c47122b5919de">tempest-AttachVolumeMultiAttachTest-63035580-project-member</nova:user>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:        <nova:project uuid="6e762fca3b634c7aa1d994314c059c54">tempest-AttachVolumeMultiAttachTest-63035580</nova:project>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:        <nova:port uuid="456d51f3-4b45-4e54-acee-c50facafcd50">
Jan 23 05:39:51 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <entry name="serial">0888913c-71a6-45fe-97bf-9dddd2b7b521</entry>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <entry name="uuid">0888913c-71a6-45fe-97bf-9dddd2b7b521</entry>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/0888913c-71a6-45fe-97bf-9dddd2b7b521_disk">
Jan 23 05:39:51 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:39:51 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/0888913c-71a6-45fe-97bf-9dddd2b7b521_disk.config">
Jan 23 05:39:51 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:39:51 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:06:5d:ea"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <target dev="tap456d51f3-4b"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/0888913c-71a6-45fe-97bf-9dddd2b7b521/console.log" append="off"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:39:51 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:39:51 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:39:51 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:39:51 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.910 250273 DEBUG nova.compute.manager [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Preparing to wait for external event network-vif-plugged-456d51f3-4b45-4e54-acee-c50facafcd50 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.910 250273 DEBUG oslo_concurrency.lockutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.911 250273 DEBUG oslo_concurrency.lockutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.911 250273 DEBUG oslo_concurrency.lockutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.912 250273 DEBUG nova.virt.libvirt.vif [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:39:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=197,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP3FfIOd2lnI+tPBfDtyl7+3bVUJP3jvoQEZS2+zpCm94FEzq78d4QEW/4ixP6N6S+NwXEvQperhCcfeORiYVMygQWeTqWJgqUherQ/1aiNrcs4OJRb36XBDXhjh6k5P/Q==',key_name='tempest-keypair-529522234',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6e762fca3b634c7aa1d994314c059c54',ramdisk_id='',reservation_id='r-v6jorgzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-63035580',owner_user_name='tempest-AttachVolumeMultiAttachTest-63035580-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:39:45Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93cd560e84264023877c47122b5919de',uuid=0888913c-71a6-45fe-97bf-9dddd2b7b521,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.912 250273 DEBUG nova.network.os_vif_util [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Converting VIF {"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.913 250273 DEBUG nova.network.os_vif_util [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:5d:ea,bridge_name='br-int',has_traffic_filtering=True,id=456d51f3-4b45-4e54-acee-c50facafcd50,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap456d51f3-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.913 250273 DEBUG os_vif [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:5d:ea,bridge_name='br-int',has_traffic_filtering=True,id=456d51f3-4b45-4e54-acee-c50facafcd50,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap456d51f3-4b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.914 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.914 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.915 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.917 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.918 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap456d51f3-4b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.918 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap456d51f3-4b, col_values=(('external_ids', {'iface-id': '456d51f3-4b45-4e54-acee-c50facafcd50', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:06:5d:ea', 'vm-uuid': '0888913c-71a6-45fe-97bf-9dddd2b7b521'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.964 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:51 np0005593232 NetworkManager[49057]: <info>  [1769164791.9652] manager: (tap456d51f3-4b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/359)
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.967 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.975 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:51 np0005593232 nova_compute[250269]: 2026-01-23 10:39:51.976 250273 INFO os_vif [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:5d:ea,bridge_name='br-int',has_traffic_filtering=True,id=456d51f3-4b45-4e54-acee-c50facafcd50,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap456d51f3-4b')#033[00m
Jan 23 05:39:52 np0005593232 nova_compute[250269]: 2026-01-23 10:39:52.089 250273 DEBUG nova.virt.libvirt.driver [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:39:52 np0005593232 nova_compute[250269]: 2026-01-23 10:39:52.090 250273 DEBUG nova.virt.libvirt.driver [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:39:52 np0005593232 nova_compute[250269]: 2026-01-23 10:39:52.090 250273 DEBUG nova.virt.libvirt.driver [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] No VIF found with MAC fa:16:3e:06:5d:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:39:52 np0005593232 nova_compute[250269]: 2026-01-23 10:39:52.090 250273 INFO nova.virt.libvirt.driver [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Using config drive#033[00m
Jan 23 05:39:52 np0005593232 nova_compute[250269]: 2026-01-23 10:39:52.117 250273 DEBUG nova.storage.rbd_utils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] rbd image 0888913c-71a6-45fe-97bf-9dddd2b7b521_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:39:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3379: 321 pgs: 321 active+clean; 484 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Jan 23 05:39:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:52.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:52 np0005593232 nova_compute[250269]: 2026-01-23 10:39:52.939 250273 INFO nova.virt.libvirt.driver [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Creating config drive at /var/lib/nova/instances/0888913c-71a6-45fe-97bf-9dddd2b7b521/disk.config#033[00m
Jan 23 05:39:52 np0005593232 nova_compute[250269]: 2026-01-23 10:39:52.951 250273 DEBUG oslo_concurrency.processutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0888913c-71a6-45fe-97bf-9dddd2b7b521/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprm1o2_h1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:39:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:39:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:52.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:39:53 np0005593232 nova_compute[250269]: 2026-01-23 10:39:53.119 250273 DEBUG oslo_concurrency.processutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0888913c-71a6-45fe-97bf-9dddd2b7b521/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprm1o2_h1" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:39:53 np0005593232 nova_compute[250269]: 2026-01-23 10:39:53.154 250273 DEBUG nova.storage.rbd_utils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] rbd image 0888913c-71a6-45fe-97bf-9dddd2b7b521_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:39:53 np0005593232 nova_compute[250269]: 2026-01-23 10:39:53.161 250273 DEBUG oslo_concurrency.processutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0888913c-71a6-45fe-97bf-9dddd2b7b521/disk.config 0888913c-71a6-45fe-97bf-9dddd2b7b521_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:39:53 np0005593232 nova_compute[250269]: 2026-01-23 10:39:53.418 250273 DEBUG oslo_concurrency.processutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0888913c-71a6-45fe-97bf-9dddd2b7b521/disk.config 0888913c-71a6-45fe-97bf-9dddd2b7b521_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:39:53 np0005593232 nova_compute[250269]: 2026-01-23 10:39:53.419 250273 INFO nova.virt.libvirt.driver [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Deleting local config drive /var/lib/nova/instances/0888913c-71a6-45fe-97bf-9dddd2b7b521/disk.config because it was imported into RBD.#033[00m
Jan 23 05:39:53 np0005593232 NetworkManager[49057]: <info>  [1769164793.4704] manager: (tap456d51f3-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/360)
Jan 23 05:39:53 np0005593232 kernel: tap456d51f3-4b: entered promiscuous mode
Jan 23 05:39:53 np0005593232 ovn_controller[151001]: 2026-01-23T10:39:53Z|00770|binding|INFO|Claiming lport 456d51f3-4b45-4e54-acee-c50facafcd50 for this chassis.
Jan 23 05:39:53 np0005593232 ovn_controller[151001]: 2026-01-23T10:39:53Z|00771|binding|INFO|456d51f3-4b45-4e54-acee-c50facafcd50: Claiming fa:16:3e:06:5d:ea 10.100.0.8
Jan 23 05:39:53 np0005593232 nova_compute[250269]: 2026-01-23 10:39:53.471 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:53 np0005593232 nova_compute[250269]: 2026-01-23 10:39:53.478 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:53 np0005593232 nova_compute[250269]: 2026-01-23 10:39:53.484 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:53 np0005593232 NetworkManager[49057]: <info>  [1769164793.4852] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/361)
Jan 23 05:39:53 np0005593232 NetworkManager[49057]: <info>  [1769164793.4864] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/362)
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.489 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:5d:ea 10.100.0.8'], port_security=['fa:16:3e:06:5d:ea 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0888913c-71a6-45fe-97bf-9dddd2b7b521', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6e762fca3b634c7aa1d994314c059c54', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ed138636-f650-4a09-b808-0b05f9067a5a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0936335-b706-4400-8411-bdd084c8cdf7, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=456d51f3-4b45-4e54-acee-c50facafcd50) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.490 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 456d51f3-4b45-4e54-acee-c50facafcd50 in datapath fba2ba4a-d82c-4f8b-9754-c13fbec41a04 bound to our chassis#033[00m
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.491 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fba2ba4a-d82c-4f8b-9754-c13fbec41a04#033[00m
Jan 23 05:39:53 np0005593232 systemd-machined[215836]: New machine qemu-87-instance-000000c5.
Jan 23 05:39:53 np0005593232 systemd-udevd[384796]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.508 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[61cc43a1-31c0-4113-8f6a-765fa7c489df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.509 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfba2ba4a-d1 in ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.510 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfba2ba4a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.510 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5a4aeafa-d174-40a0-b32d-06cfbcb45c36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.511 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f9723a16-7428-4bed-87cf-f8d01e3e8134]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.522 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[5295ca80-2d27-4a52-b2ee-f60a22bb705d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:53 np0005593232 NetworkManager[49057]: <info>  [1769164793.5264] device (tap456d51f3-4b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:39:53 np0005593232 NetworkManager[49057]: <info>  [1769164793.5275] device (tap456d51f3-4b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:39:53 np0005593232 systemd[1]: Started Virtual Machine qemu-87-instance-000000c5.
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.549 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[41c5c7f7-2e1f-4bf7-afbe-b2d3931555f5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.578 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[21d17429-27ec-46c5-9a71-ab8f3ac7b8fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:53 np0005593232 NetworkManager[49057]: <info>  [1769164793.6009] manager: (tapfba2ba4a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/363)
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.603 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[47522ae1-ce37-4079-a98c-a46d677a4436]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:53 np0005593232 nova_compute[250269]: 2026-01-23 10:39:53.619 250273 DEBUG nova.network.neutron [req-8312bcc2-3b56-4cb0-b148-bf8eb32c4360 req-45a5603a-b4e8-4efd-96a3-12ef240129b2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Updated VIF entry in instance network info cache for port 456d51f3-4b45-4e54-acee-c50facafcd50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:39:53 np0005593232 nova_compute[250269]: 2026-01-23 10:39:53.620 250273 DEBUG nova.network.neutron [req-8312bcc2-3b56-4cb0-b148-bf8eb32c4360 req-45a5603a-b4e8-4efd-96a3-12ef240129b2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Updating instance_info_cache with network_info: [{"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:39:53 np0005593232 nova_compute[250269]: 2026-01-23 10:39:53.638 250273 DEBUG oslo_concurrency.lockutils [req-8312bcc2-3b56-4cb0-b148-bf8eb32c4360 req-45a5603a-b4e8-4efd-96a3-12ef240129b2 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.644 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c455a5-4bf3-4cfc-ae87-9a4b879538db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.648 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[de28794f-de04-47ba-bab3-77c2092e55f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:53 np0005593232 nova_compute[250269]: 2026-01-23 10:39:53.669 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:53 np0005593232 nova_compute[250269]: 2026-01-23 10:39:53.672 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:53 np0005593232 NetworkManager[49057]: <info>  [1769164793.6751] device (tapfba2ba4a-d0): carrier: link connected
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.681 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[74dab734-d6ce-4eb9-82b9-c13d724de144]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:53 np0005593232 nova_compute[250269]: 2026-01-23 10:39:53.696 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.701 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[fa80dfe3-efc9-4a7f-8c96-ec357fcac24c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfba2ba4a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:db:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870681, 'reachable_time': 29343, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384828, 'error': None, 'target': 'ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:53 np0005593232 ovn_controller[151001]: 2026-01-23T10:39:53Z|00772|binding|INFO|Setting lport 456d51f3-4b45-4e54-acee-c50facafcd50 ovn-installed in OVS
Jan 23 05:39:53 np0005593232 ovn_controller[151001]: 2026-01-23T10:39:53Z|00773|binding|INFO|Setting lport 456d51f3-4b45-4e54-acee-c50facafcd50 up in Southbound
Jan 23 05:39:53 np0005593232 nova_compute[250269]: 2026-01-23 10:39:53.712 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.718 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8be45637-c047-4aa8-a05a-4f8264e22d4b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe27:db55'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 870681, 'tstamp': 870681}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384829, 'error': None, 'target': 'ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.734 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3dd7591f-a0d7-4d08-923f-294554e0aff8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfba2ba4a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:db:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 233], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870681, 'reachable_time': 29343, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 384830, 'error': None, 'target': 'ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.772 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3a437664-ae2f-4fb2-a958-166a95906b5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.826 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ca9da270-d99d-4d3a-b63a-5758b0c4f14c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.828 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfba2ba4a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.828 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.829 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfba2ba4a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:39:53 np0005593232 nova_compute[250269]: 2026-01-23 10:39:53.830 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:53 np0005593232 NetworkManager[49057]: <info>  [1769164793.8317] manager: (tapfba2ba4a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/364)
Jan 23 05:39:53 np0005593232 kernel: tapfba2ba4a-d0: entered promiscuous mode
Jan 23 05:39:53 np0005593232 nova_compute[250269]: 2026-01-23 10:39:53.833 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.835 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfba2ba4a-d0, col_values=(('external_ids', {'iface-id': '2348ddba-3dc3-4456-a637-f3065ba0d8f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:39:53 np0005593232 nova_compute[250269]: 2026-01-23 10:39:53.836 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:53 np0005593232 ovn_controller[151001]: 2026-01-23T10:39:53Z|00774|binding|INFO|Releasing lport 2348ddba-3dc3-4456-a637-f3065ba0d8f6 from this chassis (sb_readonly=0)
Jan 23 05:39:53 np0005593232 nova_compute[250269]: 2026-01-23 10:39:53.837 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.839 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fba2ba4a-d82c-4f8b-9754-c13fbec41a04.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fba2ba4a-d82c-4f8b-9754-c13fbec41a04.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.839 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e0a182c6-213e-4328-a667-09429bf5e962]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.840 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-fba2ba4a-d82c-4f8b-9754-c13fbec41a04
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/fba2ba4a-d82c-4f8b-9754-c13fbec41a04.pid.haproxy
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID fba2ba4a-d82c-4f8b-9754-c13fbec41a04
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:39:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:39:53.841 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'env', 'PROCESS_TAG=haproxy-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fba2ba4a-d82c-4f8b-9754-c13fbec41a04.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:39:53 np0005593232 nova_compute[250269]: 2026-01-23 10:39:53.854 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.031 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164794.030945, 0888913c-71a6-45fe-97bf-9dddd2b7b521 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.033 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] VM Started (Lifecycle Event)#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.057 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.063 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164794.0312207, 0888913c-71a6-45fe-97bf-9dddd2b7b521 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.064 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.087 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.092 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.126 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.144 250273 DEBUG nova.compute.manager [req-81a29904-8b19-403e-a5bb-72baacc174c6 req-76468d2b-5431-448b-8bc2-cc7ee89f4c0e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Received event network-vif-plugged-456d51f3-4b45-4e54-acee-c50facafcd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.145 250273 DEBUG oslo_concurrency.lockutils [req-81a29904-8b19-403e-a5bb-72baacc174c6 req-76468d2b-5431-448b-8bc2-cc7ee89f4c0e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.145 250273 DEBUG oslo_concurrency.lockutils [req-81a29904-8b19-403e-a5bb-72baacc174c6 req-76468d2b-5431-448b-8bc2-cc7ee89f4c0e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.145 250273 DEBUG oslo_concurrency.lockutils [req-81a29904-8b19-403e-a5bb-72baacc174c6 req-76468d2b-5431-448b-8bc2-cc7ee89f4c0e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.145 250273 DEBUG nova.compute.manager [req-81a29904-8b19-403e-a5bb-72baacc174c6 req-76468d2b-5431-448b-8bc2-cc7ee89f4c0e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Processing event network-vif-plugged-456d51f3-4b45-4e54-acee-c50facafcd50 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.146 250273 DEBUG nova.compute.manager [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:39:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3380: 321 pgs: 321 active+clean; 484 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 23 05:39:54 np0005593232 podman[384905]: 2026-01-23 10:39:54.18670566 +0000 UTC m=+0.020484113 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.644 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164794.6443837, 0888913c-71a6-45fe-97bf-9dddd2b7b521 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.645 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.649 250273 DEBUG nova.virt.libvirt.driver [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.653 250273 INFO nova.virt.libvirt.driver [-] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Instance spawned successfully.#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.654 250273 DEBUG nova.virt.libvirt.driver [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:39:54 np0005593232 podman[384905]: 2026-01-23 10:39:54.665369875 +0000 UTC m=+0.499148318 container create 167fef3247cdacfc1b0169e9d91db6f7f303d1e3bb44fcfa4463a29d4671ec8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:39:54 np0005593232 systemd[1]: Started libpod-conmon-167fef3247cdacfc1b0169e9d91db6f7f303d1e3bb44fcfa4463a29d4671ec8c.scope.
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.713 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.719 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.722 250273 DEBUG nova.virt.libvirt.driver [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.723 250273 DEBUG nova.virt.libvirt.driver [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.723 250273 DEBUG nova.virt.libvirt.driver [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.724 250273 DEBUG nova.virt.libvirt.driver [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.724 250273 DEBUG nova.virt.libvirt.driver [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.724 250273 DEBUG nova.virt.libvirt.driver [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:39:54 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:39:54 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47e7b399d815f9aba5bdc51c98c287f8c7c7a47cbbc7dca7a2e9a45cbbf6294e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:39:54 np0005593232 podman[384905]: 2026-01-23 10:39:54.757070092 +0000 UTC m=+0.590848545 container init 167fef3247cdacfc1b0169e9d91db6f7f303d1e3bb44fcfa4463a29d4671ec8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:39:54 np0005593232 podman[384905]: 2026-01-23 10:39:54.765913853 +0000 UTC m=+0.599692286 container start 167fef3247cdacfc1b0169e9d91db6f7f303d1e3bb44fcfa4463a29d4671ec8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.773 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:39:54 np0005593232 neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04[384920]: [NOTICE]   (384924) : New worker (384926) forked
Jan 23 05:39:54 np0005593232 neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04[384920]: [NOTICE]   (384924) : Loading success.
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.817 250273 INFO nova.compute.manager [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Took 9.31 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.818 250273 DEBUG nova.compute.manager [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.876 250273 INFO nova.compute.manager [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Took 10.34 seconds to build instance.#033[00m
Jan 23 05:39:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:39:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:54.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:39:54 np0005593232 nova_compute[250269]: 2026-01-23 10:39:54.892 250273 DEBUG oslo_concurrency.lockutils [None req-d976c469-072f-40c3-80a1-55f4b2cce2bb 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:39:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:39:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:55.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:39:55 np0005593232 nova_compute[250269]: 2026-01-23 10:39:55.792 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:56 np0005593232 nova_compute[250269]: 2026-01-23 10:39:56.273 250273 DEBUG nova.compute.manager [req-c670ba2b-6319-45ce-a1c3-db2bdc369f8e req-cca853d1-7ec2-4728-afe0-27c7699605db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Received event network-vif-plugged-456d51f3-4b45-4e54-acee-c50facafcd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:39:56 np0005593232 nova_compute[250269]: 2026-01-23 10:39:56.273 250273 DEBUG oslo_concurrency.lockutils [req-c670ba2b-6319-45ce-a1c3-db2bdc369f8e req-cca853d1-7ec2-4728-afe0-27c7699605db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:39:56 np0005593232 nova_compute[250269]: 2026-01-23 10:39:56.274 250273 DEBUG oslo_concurrency.lockutils [req-c670ba2b-6319-45ce-a1c3-db2bdc369f8e req-cca853d1-7ec2-4728-afe0-27c7699605db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:39:56 np0005593232 nova_compute[250269]: 2026-01-23 10:39:56.274 250273 DEBUG oslo_concurrency.lockutils [req-c670ba2b-6319-45ce-a1c3-db2bdc369f8e req-cca853d1-7ec2-4728-afe0-27c7699605db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:39:56 np0005593232 nova_compute[250269]: 2026-01-23 10:39:56.274 250273 DEBUG nova.compute.manager [req-c670ba2b-6319-45ce-a1c3-db2bdc369f8e req-cca853d1-7ec2-4728-afe0-27c7699605db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] No waiting events found dispatching network-vif-plugged-456d51f3-4b45-4e54-acee-c50facafcd50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:39:56 np0005593232 nova_compute[250269]: 2026-01-23 10:39:56.275 250273 WARNING nova.compute.manager [req-c670ba2b-6319-45ce-a1c3-db2bdc369f8e req-cca853d1-7ec2-4728-afe0-27c7699605db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Received unexpected event network-vif-plugged-456d51f3-4b45-4e54-acee-c50facafcd50 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:39:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3381: 321 pgs: 321 active+clean; 484 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Jan 23 05:39:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:39:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:56.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:39:56 np0005593232 nova_compute[250269]: 2026-01-23 10:39:56.965 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:39:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:57.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3382: 321 pgs: 321 active+clean; 405 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 133 op/s
Jan 23 05:39:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:39:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:39:58.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:39:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:39:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 05:39:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:39:59.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 05:39:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:39:59 np0005593232 nova_compute[250269]: 2026-01-23 10:39:59.156 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:00 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 23 05:40:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3383: 321 pgs: 321 active+clean; 405 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 105 op/s
Jan 23 05:40:00 np0005593232 nova_compute[250269]: 2026-01-23 10:40:00.642 250273 DEBUG nova.compute.manager [req-4ff108b7-6b23-4883-ab4c-0eb6b5ea514c req-be5e27fc-d4ce-444a-84f9-43a2d06a1f03 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Received event network-changed-456d51f3-4b45-4e54-acee-c50facafcd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:40:00 np0005593232 nova_compute[250269]: 2026-01-23 10:40:00.643 250273 DEBUG nova.compute.manager [req-4ff108b7-6b23-4883-ab4c-0eb6b5ea514c req-be5e27fc-d4ce-444a-84f9-43a2d06a1f03 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Refreshing instance network info cache due to event network-changed-456d51f3-4b45-4e54-acee-c50facafcd50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:40:00 np0005593232 nova_compute[250269]: 2026-01-23 10:40:00.644 250273 DEBUG oslo_concurrency.lockutils [req-4ff108b7-6b23-4883-ab4c-0eb6b5ea514c req-be5e27fc-d4ce-444a-84f9-43a2d06a1f03 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:40:00 np0005593232 nova_compute[250269]: 2026-01-23 10:40:00.644 250273 DEBUG oslo_concurrency.lockutils [req-4ff108b7-6b23-4883-ab4c-0eb6b5ea514c req-be5e27fc-d4ce-444a-84f9-43a2d06a1f03 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:40:00 np0005593232 nova_compute[250269]: 2026-01-23 10:40:00.644 250273 DEBUG nova.network.neutron [req-4ff108b7-6b23-4883-ab4c-0eb6b5ea514c req-be5e27fc-d4ce-444a-84f9-43a2d06a1f03 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Refreshing network info cache for port 456d51f3-4b45-4e54-acee-c50facafcd50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:40:00 np0005593232 nova_compute[250269]: 2026-01-23 10:40:00.795 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:00 np0005593232 ceph-mon[74423]: overall HEALTH_OK
Jan 23 05:40:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:00.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 05:40:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:01.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 05:40:01 np0005593232 nova_compute[250269]: 2026-01-23 10:40:01.967 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3384: 321 pgs: 321 active+clean; 405 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 105 op/s
Jan 23 05:40:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:02.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:03.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:03 np0005593232 nova_compute[250269]: 2026-01-23 10:40:03.317 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:40:03 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0[81752]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 23 05:40:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:40:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3385: 321 pgs: 321 active+clean; 405 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 103 op/s
Jan 23 05:40:04 np0005593232 nova_compute[250269]: 2026-01-23 10:40:04.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:40:04 np0005593232 nova_compute[250269]: 2026-01-23 10:40:04.561 250273 DEBUG nova.network.neutron [req-4ff108b7-6b23-4883-ab4c-0eb6b5ea514c req-be5e27fc-d4ce-444a-84f9-43a2d06a1f03 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Updated VIF entry in instance network info cache for port 456d51f3-4b45-4e54-acee-c50facafcd50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:40:04 np0005593232 nova_compute[250269]: 2026-01-23 10:40:04.561 250273 DEBUG nova.network.neutron [req-4ff108b7-6b23-4883-ab4c-0eb6b5ea514c req-be5e27fc-d4ce-444a-84f9-43a2d06a1f03 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Updating instance_info_cache with network_info: [{"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:40:04 np0005593232 nova_compute[250269]: 2026-01-23 10:40:04.584 250273 DEBUG oslo_concurrency.lockutils [req-4ff108b7-6b23-4883-ab4c-0eb6b5ea514c req-be5e27fc-d4ce-444a-84f9-43a2d06a1f03 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:40:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:04.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:40:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:05.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:40:05 np0005593232 nova_compute[250269]: 2026-01-23 10:40:05.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:40:05 np0005593232 nova_compute[250269]: 2026-01-23 10:40:05.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:40:05 np0005593232 nova_compute[250269]: 2026-01-23 10:40:05.798 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:40:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1146671172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:40:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3386: 321 pgs: 321 active+clean; 405 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 102 op/s
Jan 23 05:40:06 np0005593232 podman[384991]: 2026-01-23 10:40:06.434695348 +0000 UTC m=+0.095900976 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:40:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:06.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:06 np0005593232 nova_compute[250269]: 2026-01-23 10:40:06.969 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:07.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:07 np0005593232 nova_compute[250269]: 2026-01-23 10:40:07.265 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:07 np0005593232 nova_compute[250269]: 2026-01-23 10:40:07.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:40:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:40:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:40:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:40:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:40:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:40:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:40:07 np0005593232 ovn_controller[151001]: 2026-01-23T10:40:07Z|00775|binding|INFO|Releasing lport 2348ddba-3dc3-4456-a637-f3065ba0d8f6 from this chassis (sb_readonly=0)
Jan 23 05:40:08 np0005593232 nova_compute[250269]: 2026-01-23 10:40:08.034 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:08 np0005593232 ovn_controller[151001]: 2026-01-23T10:40:08Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:06:5d:ea 10.100.0.8
Jan 23 05:40:08 np0005593232 ovn_controller[151001]: 2026-01-23T10:40:08Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:06:5d:ea 10.100.0.8
Jan 23 05:40:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3387: 321 pgs: 321 active+clean; 473 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.7 MiB/s wr, 153 op/s
Jan 23 05:40:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:08.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:09.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:40:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3388: 321 pgs: 321 active+clean; 473 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 69 KiB/s rd, 3.7 MiB/s wr, 50 op/s
Jan 23 05:40:10 np0005593232 nova_compute[250269]: 2026-01-23 10:40:10.800 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:40:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:10.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:40:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:11.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:11 np0005593232 nova_compute[250269]: 2026-01-23 10:40:11.970 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3389: 321 pgs: 321 active+clean; 478 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 224 KiB/s rd, 3.9 MiB/s wr, 69 op/s
Jan 23 05:40:12 np0005593232 nova_compute[250269]: 2026-01-23 10:40:12.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:40:12 np0005593232 podman[385022]: 2026-01-23 10:40:12.392050726 +0000 UTC m=+0.049530269 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 23 05:40:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:12.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:13.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:13 np0005593232 ovn_controller[151001]: 2026-01-23T10:40:13Z|00776|binding|INFO|Releasing lport 2348ddba-3dc3-4456-a637-f3065ba0d8f6 from this chassis (sb_readonly=0)
Jan 23 05:40:13 np0005593232 nova_compute[250269]: 2026-01-23 10:40:13.905 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:40:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3390: 321 pgs: 321 active+clean; 484 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 377 KiB/s rd, 3.9 MiB/s wr, 94 op/s
Jan 23 05:40:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:14.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:15.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:15 np0005593232 nova_compute[250269]: 2026-01-23 10:40:15.310 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:15 np0005593232 nova_compute[250269]: 2026-01-23 10:40:15.802 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3391: 321 pgs: 321 active+clean; 484 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 377 KiB/s rd, 3.9 MiB/s wr, 94 op/s
Jan 23 05:40:16 np0005593232 nova_compute[250269]: 2026-01-23 10:40:16.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:40:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:16.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:16 np0005593232 nova_compute[250269]: 2026-01-23 10:40:16.973 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:40:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:17.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:40:17 np0005593232 nova_compute[250269]: 2026-01-23 10:40:17.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:40:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3392: 321 pgs: 321 active+clean; 484 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 169 op/s
Jan 23 05:40:18 np0005593232 nova_compute[250269]: 2026-01-23 10:40:18.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:40:18 np0005593232 nova_compute[250269]: 2026-01-23 10:40:18.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:40:18 np0005593232 nova_compute[250269]: 2026-01-23 10:40:18.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:40:18 np0005593232 nova_compute[250269]: 2026-01-23 10:40:18.511 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:40:18.510 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=75, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=74) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:40:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:40:18.512 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:40:18 np0005593232 nova_compute[250269]: 2026-01-23 10:40:18.601 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:40:18 np0005593232 nova_compute[250269]: 2026-01-23 10:40:18.602 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:40:18 np0005593232 nova_compute[250269]: 2026-01-23 10:40:18.602 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:40:18 np0005593232 nova_compute[250269]: 2026-01-23 10:40:18.602 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0888913c-71a6-45fe-97bf-9dddd2b7b521 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:40:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:18.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:19.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:40:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3393: 321 pgs: 321 active+clean; 484 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 221 KiB/s wr, 118 op/s
Jan 23 05:40:20 np0005593232 nova_compute[250269]: 2026-01-23 10:40:20.805 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:20.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:21.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:21 np0005593232 nova_compute[250269]: 2026-01-23 10:40:21.663 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Updating instance_info_cache with network_info: [{"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:40:21 np0005593232 nova_compute[250269]: 2026-01-23 10:40:21.685 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:40:21 np0005593232 nova_compute[250269]: 2026-01-23 10:40:21.685 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:40:21 np0005593232 nova_compute[250269]: 2026-01-23 10:40:21.686 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:40:21 np0005593232 nova_compute[250269]: 2026-01-23 10:40:21.729 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:40:21 np0005593232 nova_compute[250269]: 2026-01-23 10:40:21.729 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:40:21 np0005593232 nova_compute[250269]: 2026-01-23 10:40:21.729 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:40:21 np0005593232 nova_compute[250269]: 2026-01-23 10:40:21.729 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:40:21 np0005593232 nova_compute[250269]: 2026-01-23 10:40:21.730 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:40:21 np0005593232 nova_compute[250269]: 2026-01-23 10:40:21.976 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:22 np0005593232 nova_compute[250269]: 2026-01-23 10:40:22.027 250273 DEBUG nova.compute.manager [req-3fbc684f-1a52-4b17-9961-ecd63f17fc0f req-cd5cc33b-0323-4a22-a40e-05e503fa7bd7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Received event network-changed-456d51f3-4b45-4e54-acee-c50facafcd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:40:22 np0005593232 nova_compute[250269]: 2026-01-23 10:40:22.027 250273 DEBUG nova.compute.manager [req-3fbc684f-1a52-4b17-9961-ecd63f17fc0f req-cd5cc33b-0323-4a22-a40e-05e503fa7bd7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Refreshing instance network info cache due to event network-changed-456d51f3-4b45-4e54-acee-c50facafcd50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:40:22 np0005593232 nova_compute[250269]: 2026-01-23 10:40:22.028 250273 DEBUG oslo_concurrency.lockutils [req-3fbc684f-1a52-4b17-9961-ecd63f17fc0f req-cd5cc33b-0323-4a22-a40e-05e503fa7bd7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:40:22 np0005593232 nova_compute[250269]: 2026-01-23 10:40:22.028 250273 DEBUG oslo_concurrency.lockutils [req-3fbc684f-1a52-4b17-9961-ecd63f17fc0f req-cd5cc33b-0323-4a22-a40e-05e503fa7bd7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:40:22 np0005593232 nova_compute[250269]: 2026-01-23 10:40:22.028 250273 DEBUG nova.network.neutron [req-3fbc684f-1a52-4b17-9961-ecd63f17fc0f req-cd5cc33b-0323-4a22-a40e-05e503fa7bd7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Refreshing network info cache for port 456d51f3-4b45-4e54-acee-c50facafcd50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:40:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:40:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/446045889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:40:22 np0005593232 nova_compute[250269]: 2026-01-23 10:40:22.185 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:40:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3394: 321 pgs: 321 active+clean; 484 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 221 KiB/s wr, 118 op/s
Jan 23 05:40:22 np0005593232 nova_compute[250269]: 2026-01-23 10:40:22.544 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:40:22 np0005593232 nova_compute[250269]: 2026-01-23 10:40:22.544 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:40:22 np0005593232 nova_compute[250269]: 2026-01-23 10:40:22.709 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:40:22 np0005593232 nova_compute[250269]: 2026-01-23 10:40:22.711 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3924MB free_disk=20.830509185791016GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:40:22 np0005593232 nova_compute[250269]: 2026-01-23 10:40:22.711 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:40:22 np0005593232 nova_compute[250269]: 2026-01-23 10:40:22.711 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:40:22 np0005593232 nova_compute[250269]: 2026-01-23 10:40:22.820 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 0888913c-71a6-45fe-97bf-9dddd2b7b521 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:40:22 np0005593232 nova_compute[250269]: 2026-01-23 10:40:22.820 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:40:22 np0005593232 nova_compute[250269]: 2026-01-23 10:40:22.821 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:40:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:40:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:22.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:40:22 np0005593232 nova_compute[250269]: 2026-01-23 10:40:22.929 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:40:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:40:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:23.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:40:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:40:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3748736507' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:40:23 np0005593232 nova_compute[250269]: 2026-01-23 10:40:23.385 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:40:23 np0005593232 nova_compute[250269]: 2026-01-23 10:40:23.392 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:40:23 np0005593232 nova_compute[250269]: 2026-01-23 10:40:23.413 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:40:23 np0005593232 nova_compute[250269]: 2026-01-23 10:40:23.453 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:40:23 np0005593232 nova_compute[250269]: 2026-01-23 10:40:23.453 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:40:23 np0005593232 nova_compute[250269]: 2026-01-23 10:40:23.923 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:40:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3395: 321 pgs: 321 active+clean; 484 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 80 KiB/s wr, 99 op/s
Jan 23 05:40:24 np0005593232 nova_compute[250269]: 2026-01-23 10:40:24.871 250273 DEBUG nova.network.neutron [req-3fbc684f-1a52-4b17-9961-ecd63f17fc0f req-cd5cc33b-0323-4a22-a40e-05e503fa7bd7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Updated VIF entry in instance network info cache for port 456d51f3-4b45-4e54-acee-c50facafcd50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:40:24 np0005593232 nova_compute[250269]: 2026-01-23 10:40:24.872 250273 DEBUG nova.network.neutron [req-3fbc684f-1a52-4b17-9961-ecd63f17fc0f req-cd5cc33b-0323-4a22-a40e-05e503fa7bd7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Updating instance_info_cache with network_info: [{"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:40:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:24.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:25.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:25 np0005593232 nova_compute[250269]: 2026-01-23 10:40:25.191 250273 DEBUG oslo_concurrency.lockutils [req-3fbc684f-1a52-4b17-9961-ecd63f17fc0f req-cd5cc33b-0323-4a22-a40e-05e503fa7bd7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:40:25 np0005593232 nova_compute[250269]: 2026-01-23 10:40:25.808 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3396: 321 pgs: 321 active+clean; 484 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 74 op/s
Jan 23 05:40:26 np0005593232 nova_compute[250269]: 2026-01-23 10:40:26.844 250273 DEBUG oslo_concurrency.lockutils [None req-c06c41d6-213e-474f-8030-2380dc80fd4f 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "0888913c-71a6-45fe-97bf-9dddd2b7b521" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:40:26 np0005593232 nova_compute[250269]: 2026-01-23 10:40:26.845 250273 DEBUG oslo_concurrency.lockutils [None req-c06c41d6-213e-474f-8030-2380dc80fd4f 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:40:26 np0005593232 nova_compute[250269]: 2026-01-23 10:40:26.882 250273 DEBUG nova.objects.instance [None req-c06c41d6-213e-474f-8030-2380dc80fd4f 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lazy-loading 'flavor' on Instance uuid 0888913c-71a6-45fe-97bf-9dddd2b7b521 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:40:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:40:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:26.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:40:26 np0005593232 nova_compute[250269]: 2026-01-23 10:40:26.934 250273 DEBUG oslo_concurrency.lockutils [None req-c06c41d6-213e-474f-8030-2380dc80fd4f 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:40:26 np0005593232 nova_compute[250269]: 2026-01-23 10:40:26.979 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:27.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:27 np0005593232 nova_compute[250269]: 2026-01-23 10:40:27.231 250273 DEBUG oslo_concurrency.lockutils [None req-c06c41d6-213e-474f-8030-2380dc80fd4f 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "0888913c-71a6-45fe-97bf-9dddd2b7b521" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:40:27 np0005593232 nova_compute[250269]: 2026-01-23 10:40:27.232 250273 DEBUG oslo_concurrency.lockutils [None req-c06c41d6-213e-474f-8030-2380dc80fd4f 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:40:27 np0005593232 nova_compute[250269]: 2026-01-23 10:40:27.233 250273 INFO nova.compute.manager [None req-c06c41d6-213e-474f-8030-2380dc80fd4f 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Attaching volume 4393c992-1666-40a0-ab11-4cc66bdcd721 to /dev/vdb#033[00m
Jan 23 05:40:27 np0005593232 nova_compute[250269]: 2026-01-23 10:40:27.418 250273 DEBUG os_brick.utils [None req-c06c41d6-213e-474f-8030-2380dc80fd4f 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 23 05:40:27 np0005593232 nova_compute[250269]: 2026-01-23 10:40:27.421 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:40:27 np0005593232 nova_compute[250269]: 2026-01-23 10:40:27.433 265031 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:40:27 np0005593232 nova_compute[250269]: 2026-01-23 10:40:27.433 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[f8fb9498-09a7-4ddf-a593-e5b8d2c3f831]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:40:27 np0005593232 nova_compute[250269]: 2026-01-23 10:40:27.435 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:40:27 np0005593232 nova_compute[250269]: 2026-01-23 10:40:27.444 265031 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:40:27 np0005593232 nova_compute[250269]: 2026-01-23 10:40:27.445 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[5c87fba7-f70b-495e-a92c-83a65a3f12ec]: (4, ('InitiatorName=iqn.1994-05.com.redhat:c6473626b456', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:40:27 np0005593232 nova_compute[250269]: 2026-01-23 10:40:27.446 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:40:27 np0005593232 nova_compute[250269]: 2026-01-23 10:40:27.454 265031 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:40:27 np0005593232 nova_compute[250269]: 2026-01-23 10:40:27.454 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3844ac-4517-4147-abea-8b14a1a768cb]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:40:27 np0005593232 nova_compute[250269]: 2026-01-23 10:40:27.457 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[d56cf794-5e1d-4381-8228-35df65216903]: (4, 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:40:27 np0005593232 nova_compute[250269]: 2026-01-23 10:40:27.458 250273 DEBUG oslo_concurrency.processutils [None req-c06c41d6-213e-474f-8030-2380dc80fd4f 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:40:27 np0005593232 nova_compute[250269]: 2026-01-23 10:40:27.507 250273 DEBUG oslo_concurrency.processutils [None req-c06c41d6-213e-474f-8030-2380dc80fd4f 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "nvme version" returned: 0 in 0.049s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:40:27 np0005593232 nova_compute[250269]: 2026-01-23 10:40:27.509 250273 DEBUG os_brick.initiator.connectors.lightos [None req-c06c41d6-213e-474f-8030-2380dc80fd4f 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 23 05:40:27 np0005593232 nova_compute[250269]: 2026-01-23 10:40:27.510 250273 DEBUG os_brick.initiator.connectors.lightos [None req-c06c41d6-213e-474f-8030-2380dc80fd4f 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 23 05:40:27 np0005593232 nova_compute[250269]: 2026-01-23 10:40:27.510 250273 DEBUG os_brick.initiator.connectors.lightos [None req-c06c41d6-213e-474f-8030-2380dc80fd4f 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 23 05:40:27 np0005593232 nova_compute[250269]: 2026-01-23 10:40:27.510 250273 DEBUG os_brick.utils [None req-c06c41d6-213e-474f-8030-2380dc80fd4f 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] <== get_connector_properties: return (91ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:c6473626b456', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 23 05:40:27 np0005593232 nova_compute[250269]: 2026-01-23 10:40:27.511 250273 DEBUG nova.virt.block_device [None req-c06c41d6-213e-474f-8030-2380dc80fd4f 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Updating existing volume attachment record: b975d685-7101-4c02-9d08-f95e23de2674 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 23 05:40:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:40:27.514 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '75'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:40:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3397: 321 pgs: 321 active+clean; 484 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 79 op/s
Jan 23 05:40:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:28.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:40:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:40:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:29.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:40:29 np0005593232 nova_compute[250269]: 2026-01-23 10:40:29.824 250273 DEBUG nova.objects.instance [None req-c06c41d6-213e-474f-8030-2380dc80fd4f 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lazy-loading 'flavor' on Instance uuid 0888913c-71a6-45fe-97bf-9dddd2b7b521 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:40:29 np0005593232 nova_compute[250269]: 2026-01-23 10:40:29.893 250273 DEBUG nova.virt.libvirt.driver [None req-c06c41d6-213e-474f-8030-2380dc80fd4f 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Attempting to attach volume 4393c992-1666-40a0-ab11-4cc66bdcd721 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168#033[00m
Jan 23 05:40:29 np0005593232 nova_compute[250269]: 2026-01-23 10:40:29.896 250273 DEBUG nova.virt.libvirt.guest [None req-c06c41d6-213e-474f-8030-2380dc80fd4f 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] attach device xml: <disk type="network" device="disk">
Jan 23 05:40:29 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:40:29 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-4393c992-1666-40a0-ab11-4cc66bdcd721">
Jan 23 05:40:29 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:40:29 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:40:29 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:40:29 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:40:29 np0005593232 nova_compute[250269]:  <auth username="openstack">
Jan 23 05:40:29 np0005593232 nova_compute[250269]:    <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:40:29 np0005593232 nova_compute[250269]:  </auth>
Jan 23 05:40:29 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 05:40:29 np0005593232 nova_compute[250269]:  <serial>4393c992-1666-40a0-ab11-4cc66bdcd721</serial>
Jan 23 05:40:29 np0005593232 nova_compute[250269]:  <shareable/>
Jan 23 05:40:29 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:40:29 np0005593232 nova_compute[250269]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Jan 23 05:40:30 np0005593232 nova_compute[250269]: 2026-01-23 10:40:30.195 250273 DEBUG nova.virt.libvirt.driver [None req-c06c41d6-213e-474f-8030-2380dc80fd4f 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:40:30 np0005593232 nova_compute[250269]: 2026-01-23 10:40:30.196 250273 DEBUG nova.virt.libvirt.driver [None req-c06c41d6-213e-474f-8030-2380dc80fd4f 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:40:30 np0005593232 nova_compute[250269]: 2026-01-23 10:40:30.196 250273 DEBUG nova.virt.libvirt.driver [None req-c06c41d6-213e-474f-8030-2380dc80fd4f 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:40:30 np0005593232 nova_compute[250269]: 2026-01-23 10:40:30.196 250273 DEBUG nova.virt.libvirt.driver [None req-c06c41d6-213e-474f-8030-2380dc80fd4f 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] No VIF found with MAC fa:16:3e:06:5d:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:40:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3398: 321 pgs: 321 active+clean; 484 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 KiB/s rd, 511 B/s wr, 4 op/s
Jan 23 05:40:30 np0005593232 nova_compute[250269]: 2026-01-23 10:40:30.448 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:40:30 np0005593232 nova_compute[250269]: 2026-01-23 10:40:30.511 250273 DEBUG oslo_concurrency.lockutils [None req-c06c41d6-213e-474f-8030-2380dc80fd4f 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 3.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:40:30 np0005593232 nova_compute[250269]: 2026-01-23 10:40:30.859 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:30.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:31.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:32 np0005593232 nova_compute[250269]: 2026-01-23 10:40:32.002 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3399: 321 pgs: 321 active+clean; 490 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.4 KiB/s rd, 347 KiB/s wr, 8 op/s
Jan 23 05:40:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:40:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:32.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:40:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:33.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:40:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3400: 321 pgs: 321 active+clean; 516 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 244 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 23 05:40:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:34.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:35.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:35 np0005593232 nova_compute[250269]: 2026-01-23 10:40:35.894 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3401: 321 pgs: 321 active+clean; 516 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 244 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 23 05:40:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:36.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:37.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:37 np0005593232 nova_compute[250269]: 2026-01-23 10:40:37.053 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:40:37
Jan 23 05:40:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:40:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:40:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'default.rgw.log', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'vms']
Jan 23 05:40:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:40:37 np0005593232 podman[385176]: 2026-01-23 10:40:37.414559837 +0000 UTC m=+0.073798529 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:40:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:40:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:40:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:40:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:40:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:40:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:40:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3402: 321 pgs: 321 active+clean; 518 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 305 KiB/s rd, 2.2 MiB/s wr, 75 op/s
Jan 23 05:40:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:40:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:40:38 np0005593232 podman[385374]: 2026-01-23 10:40:38.626592915 +0000 UTC m=+0.072424799 container exec 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:40:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:40:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:40:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:40:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:40:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:40:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:40:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:40:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:40:38 np0005593232 podman[385374]: 2026-01-23 10:40:38.746796722 +0000 UTC m=+0.192628696 container exec_died 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:40:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:40:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:38.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:40:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:40:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:40:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:39.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:40:39 np0005593232 podman[385527]: 2026-01-23 10:40:39.499781874 +0000 UTC m=+0.054990094 container exec b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 05:40:39 np0005593232 podman[385527]: 2026-01-23 10:40:39.522270524 +0000 UTC m=+0.077478704 container exec_died b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 05:40:39 np0005593232 podman[385593]: 2026-01-23 10:40:39.716899886 +0000 UTC m=+0.052489493 container exec 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, vcs-type=git, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, release=1793, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2)
Jan 23 05:40:39 np0005593232 podman[385593]: 2026-01-23 10:40:39.73322782 +0000 UTC m=+0.068817407 container exec_died 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, build-date=2023-02-22T09:23:20, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, version=2.2.4, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., description=keepalived for Ceph, release=1793)
Jan 23 05:40:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:40:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:40:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:40:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:40:40 np0005593232 nova_compute[250269]: 2026-01-23 10:40:40.257 250273 DEBUG nova.compute.manager [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Jan 23 05:40:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3403: 321 pgs: 321 active+clean; 518 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 300 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Jan 23 05:40:40 np0005593232 nova_compute[250269]: 2026-01-23 10:40:40.407 250273 DEBUG oslo_concurrency.lockutils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:40:40 np0005593232 nova_compute[250269]: 2026-01-23 10:40:40.408 250273 DEBUG oslo_concurrency.lockutils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:40:40 np0005593232 nova_compute[250269]: 2026-01-23 10:40:40.438 250273 DEBUG nova.objects.instance [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lazy-loading 'pci_requests' on Instance uuid 0888913c-71a6-45fe-97bf-9dddd2b7b521 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:40:40 np0005593232 nova_compute[250269]: 2026-01-23 10:40:40.452 250273 DEBUG nova.virt.hardware [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:40:40 np0005593232 nova_compute[250269]: 2026-01-23 10:40:40.452 250273 INFO nova.compute.claims [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:40:40 np0005593232 nova_compute[250269]: 2026-01-23 10:40:40.452 250273 DEBUG nova.objects.instance [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lazy-loading 'resources' on Instance uuid 0888913c-71a6-45fe-97bf-9dddd2b7b521 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:40:40 np0005593232 nova_compute[250269]: 2026-01-23 10:40:40.467 250273 DEBUG nova.objects.instance [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0888913c-71a6-45fe-97bf-9dddd2b7b521 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:40:40 np0005593232 nova_compute[250269]: 2026-01-23 10:40:40.510 250273 INFO nova.compute.resource_tracker [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Updating resource usage from migration a4212371-48a4-4079-aaa5-8560115dd624#033[00m
Jan 23 05:40:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:40:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:40:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:40:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:40:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:40:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:40:40 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d8f06f06-68c1-4d01-86f1-3eefc5bd7108 does not exist
Jan 23 05:40:40 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 03d4c43b-c43b-441f-8a05-405601091987 does not exist
Jan 23 05:40:40 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev e8abb7c6-be9d-4eb2-b52f-02b574a73018 does not exist
Jan 23 05:40:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:40:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:40:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:40:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:40:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:40:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:40:40 np0005593232 nova_compute[250269]: 2026-01-23 10:40:40.574 250273 DEBUG oslo_concurrency.processutils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:40:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:40:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:40:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:40:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:40:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:40:40 np0005593232 nova_compute[250269]: 2026-01-23 10:40:40.900 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:40:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:40.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:40:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:40:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/460237991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:40:41 np0005593232 nova_compute[250269]: 2026-01-23 10:40:41.021 250273 DEBUG oslo_concurrency.processutils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:40:41 np0005593232 nova_compute[250269]: 2026-01-23 10:40:41.029 250273 DEBUG nova.compute.provider_tree [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:40:41 np0005593232 nova_compute[250269]: 2026-01-23 10:40:41.055 250273 DEBUG nova.scheduler.client.report [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:40:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:41.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:41 np0005593232 nova_compute[250269]: 2026-01-23 10:40:41.082 250273 DEBUG oslo_concurrency.lockutils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:40:41 np0005593232 nova_compute[250269]: 2026-01-23 10:40:41.082 250273 INFO nova.compute.manager [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Migrating#033[00m
Jan 23 05:40:41 np0005593232 nova_compute[250269]: 2026-01-23 10:40:41.129 250273 DEBUG oslo_concurrency.lockutils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:40:41 np0005593232 nova_compute[250269]: 2026-01-23 10:40:41.129 250273 DEBUG oslo_concurrency.lockutils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquired lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:40:41 np0005593232 nova_compute[250269]: 2026-01-23 10:40:41.130 250273 DEBUG nova.network.neutron [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:40:41 np0005593232 podman[385920]: 2026-01-23 10:40:41.167776095 +0000 UTC m=+0.021518743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:40:41 np0005593232 podman[385920]: 2026-01-23 10:40:41.357189308 +0000 UTC m=+0.210931956 container create 423e8a260fb63a3dd5051525dbe1d0edb33e36e4d2e52b5df63c1915b3be19d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:40:41 np0005593232 systemd[1]: Started libpod-conmon-423e8a260fb63a3dd5051525dbe1d0edb33e36e4d2e52b5df63c1915b3be19d6.scope.
Jan 23 05:40:41 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:40:41 np0005593232 podman[385920]: 2026-01-23 10:40:41.511345629 +0000 UTC m=+0.365088317 container init 423e8a260fb63a3dd5051525dbe1d0edb33e36e4d2e52b5df63c1915b3be19d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:40:41 np0005593232 podman[385920]: 2026-01-23 10:40:41.523229567 +0000 UTC m=+0.376972195 container start 423e8a260fb63a3dd5051525dbe1d0edb33e36e4d2e52b5df63c1915b3be19d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cori, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 05:40:41 np0005593232 ecstatic_cori[385936]: 167 167
Jan 23 05:40:41 np0005593232 systemd[1]: libpod-423e8a260fb63a3dd5051525dbe1d0edb33e36e4d2e52b5df63c1915b3be19d6.scope: Deactivated successfully.
Jan 23 05:40:41 np0005593232 conmon[385936]: conmon 423e8a260fb63a3dd505 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-423e8a260fb63a3dd5051525dbe1d0edb33e36e4d2e52b5df63c1915b3be19d6.scope/container/memory.events
Jan 23 05:40:41 np0005593232 podman[385920]: 2026-01-23 10:40:41.756585749 +0000 UTC m=+0.610328467 container attach 423e8a260fb63a3dd5051525dbe1d0edb33e36e4d2e52b5df63c1915b3be19d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cori, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 23 05:40:41 np0005593232 podman[385920]: 2026-01-23 10:40:41.75730337 +0000 UTC m=+0.611046028 container died 423e8a260fb63a3dd5051525dbe1d0edb33e36e4d2e52b5df63c1915b3be19d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cori, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:40:41 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d875e45551250d9492cb7461fcf18bcbc22d6744560f1c85cbdd4cb4913c3764-merged.mount: Deactivated successfully.
Jan 23 05:40:41 np0005593232 podman[385920]: 2026-01-23 10:40:41.980299958 +0000 UTC m=+0.834042586 container remove 423e8a260fb63a3dd5051525dbe1d0edb33e36e4d2e52b5df63c1915b3be19d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 23 05:40:42 np0005593232 systemd[1]: libpod-conmon-423e8a260fb63a3dd5051525dbe1d0edb33e36e4d2e52b5df63c1915b3be19d6.scope: Deactivated successfully.
Jan 23 05:40:42 np0005593232 nova_compute[250269]: 2026-01-23 10:40:42.055 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:42 np0005593232 podman[385961]: 2026-01-23 10:40:42.134147491 +0000 UTC m=+0.023215471 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:40:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3404: 321 pgs: 321 active+clean; 518 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 300 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Jan 23 05:40:42 np0005593232 podman[385961]: 2026-01-23 10:40:42.355247896 +0000 UTC m=+0.244315866 container create 9df11e4e14967d4b2902f78608738ae0bbd6e1e140f4b011b98967ab4c49ecf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:40:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:40:42.657 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:40:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:40:42.658 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:40:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:40:42.659 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:40:42 np0005593232 systemd[1]: Started libpod-conmon-9df11e4e14967d4b2902f78608738ae0bbd6e1e140f4b011b98967ab4c49ecf7.scope.
Jan 23 05:40:42 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:40:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4324446b06378b932e0b399a146daa751027d89c84880d7948c7045ee65aa2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:40:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4324446b06378b932e0b399a146daa751027d89c84880d7948c7045ee65aa2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:40:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4324446b06378b932e0b399a146daa751027d89c84880d7948c7045ee65aa2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:40:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4324446b06378b932e0b399a146daa751027d89c84880d7948c7045ee65aa2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:40:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4324446b06378b932e0b399a146daa751027d89c84880d7948c7045ee65aa2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:40:42 np0005593232 podman[385961]: 2026-01-23 10:40:42.801960763 +0000 UTC m=+0.691028783 container init 9df11e4e14967d4b2902f78608738ae0bbd6e1e140f4b011b98967ab4c49ecf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_boyd, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 05:40:42 np0005593232 podman[385961]: 2026-01-23 10:40:42.810575708 +0000 UTC m=+0.699643638 container start 9df11e4e14967d4b2902f78608738ae0bbd6e1e140f4b011b98967ab4c49ecf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_boyd, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 05:40:42 np0005593232 podman[385979]: 2026-01-23 10:40:42.807584773 +0000 UTC m=+0.108211647 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 23 05:40:42 np0005593232 podman[385961]: 2026-01-23 10:40:42.822947519 +0000 UTC m=+0.712015499 container attach 9df11e4e14967d4b2902f78608738ae0bbd6e1e140f4b011b98967ab4c49ecf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_boyd, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:40:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:40:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:42.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:40:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:43.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:43 np0005593232 unruffled_boyd[385978]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:40:43 np0005593232 unruffled_boyd[385978]: --> relative data size: 1.0
Jan 23 05:40:43 np0005593232 unruffled_boyd[385978]: --> All data devices are unavailable
Jan 23 05:40:43 np0005593232 systemd[1]: libpod-9df11e4e14967d4b2902f78608738ae0bbd6e1e140f4b011b98967ab4c49ecf7.scope: Deactivated successfully.
Jan 23 05:40:43 np0005593232 podman[385961]: 2026-01-23 10:40:43.662737799 +0000 UTC m=+1.551805739 container died 9df11e4e14967d4b2902f78608738ae0bbd6e1e140f4b011b98967ab4c49ecf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:40:44 np0005593232 nova_compute[250269]: 2026-01-23 10:40:44.002 250273 DEBUG nova.network.neutron [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Updating instance_info_cache with network_info: [{"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:40:44 np0005593232 nova_compute[250269]: 2026-01-23 10:40:44.051 250273 DEBUG oslo_concurrency.lockutils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Releasing lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:40:44 np0005593232 nova_compute[250269]: 2026-01-23 10:40:44.168 250273 DEBUG nova.virt.libvirt.driver [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Jan 23 05:40:44 np0005593232 nova_compute[250269]: 2026-01-23 10:40:44.174 250273 DEBUG nova.virt.libvirt.driver [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 23 05:40:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:40:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3405: 321 pgs: 321 active+clean; 518 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 299 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Jan 23 05:40:44 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ad4324446b06378b932e0b399a146daa751027d89c84880d7948c7045ee65aa2-merged.mount: Deactivated successfully.
Jan 23 05:40:44 np0005593232 podman[385961]: 2026-01-23 10:40:44.804999277 +0000 UTC m=+2.694067247 container remove 9df11e4e14967d4b2902f78608738ae0bbd6e1e140f4b011b98967ab4c49ecf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 05:40:44 np0005593232 systemd[1]: libpod-conmon-9df11e4e14967d4b2902f78608738ae0bbd6e1e140f4b011b98967ab4c49ecf7.scope: Deactivated successfully.
Jan 23 05:40:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:44.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:45.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:40:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4159593942' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:40:45 np0005593232 podman[386217]: 2026-01-23 10:40:45.372327622 +0000 UTC m=+0.036960322 container create 93dda4047d1eb5e6f42d41166099f84079769f79deeedd71dfea9f037c64bd7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_agnesi, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:40:45 np0005593232 systemd[1]: Started libpod-conmon-93dda4047d1eb5e6f42d41166099f84079769f79deeedd71dfea9f037c64bd7c.scope.
Jan 23 05:40:45 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:40:45 np0005593232 podman[386217]: 2026-01-23 10:40:45.356050749 +0000 UTC m=+0.020683469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:40:45 np0005593232 podman[386217]: 2026-01-23 10:40:45.453057726 +0000 UTC m=+0.117690446 container init 93dda4047d1eb5e6f42d41166099f84079769f79deeedd71dfea9f037c64bd7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 05:40:45 np0005593232 podman[386217]: 2026-01-23 10:40:45.459088008 +0000 UTC m=+0.123720708 container start 93dda4047d1eb5e6f42d41166099f84079769f79deeedd71dfea9f037c64bd7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_agnesi, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 23 05:40:45 np0005593232 focused_agnesi[386233]: 167 167
Jan 23 05:40:45 np0005593232 podman[386217]: 2026-01-23 10:40:45.462501605 +0000 UTC m=+0.127134305 container attach 93dda4047d1eb5e6f42d41166099f84079769f79deeedd71dfea9f037c64bd7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:40:45 np0005593232 systemd[1]: libpod-93dda4047d1eb5e6f42d41166099f84079769f79deeedd71dfea9f037c64bd7c.scope: Deactivated successfully.
Jan 23 05:40:45 np0005593232 podman[386217]: 2026-01-23 10:40:45.463173454 +0000 UTC m=+0.127806144 container died 93dda4047d1eb5e6f42d41166099f84079769f79deeedd71dfea9f037c64bd7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_agnesi, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:40:45 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f23360a09fb8bf04edd7e3bc7cb4f4eb9f7cd302fd84d5aa28ed11d6b5d266f4-merged.mount: Deactivated successfully.
Jan 23 05:40:45 np0005593232 podman[386217]: 2026-01-23 10:40:45.50243268 +0000 UTC m=+0.167065380 container remove 93dda4047d1eb5e6f42d41166099f84079769f79deeedd71dfea9f037c64bd7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_agnesi, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:40:45 np0005593232 systemd[1]: libpod-conmon-93dda4047d1eb5e6f42d41166099f84079769f79deeedd71dfea9f037c64bd7c.scope: Deactivated successfully.
Jan 23 05:40:45 np0005593232 podman[386259]: 2026-01-23 10:40:45.66640809 +0000 UTC m=+0.046108881 container create 8cb5a4e386031b4c1218dfdf7275c70c841143ee40ab7564d6c192be3d3c84a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lumiere, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 23 05:40:45 np0005593232 systemd[1]: Started libpod-conmon-8cb5a4e386031b4c1218dfdf7275c70c841143ee40ab7564d6c192be3d3c84a4.scope.
Jan 23 05:40:45 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:40:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/554fce3c1ac11de7142d77d98efb17e7dbc305518ce404c0d0eb3fc1a6d16fdf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:40:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/554fce3c1ac11de7142d77d98efb17e7dbc305518ce404c0d0eb3fc1a6d16fdf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:40:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/554fce3c1ac11de7142d77d98efb17e7dbc305518ce404c0d0eb3fc1a6d16fdf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:40:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/554fce3c1ac11de7142d77d98efb17e7dbc305518ce404c0d0eb3fc1a6d16fdf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:40:45 np0005593232 podman[386259]: 2026-01-23 10:40:45.64213404 +0000 UTC m=+0.021834881 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:40:45 np0005593232 podman[386259]: 2026-01-23 10:40:45.748822913 +0000 UTC m=+0.128523744 container init 8cb5a4e386031b4c1218dfdf7275c70c841143ee40ab7564d6c192be3d3c84a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lumiere, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 05:40:45 np0005593232 podman[386259]: 2026-01-23 10:40:45.756600434 +0000 UTC m=+0.136301215 container start 8cb5a4e386031b4c1218dfdf7275c70c841143ee40ab7564d6c192be3d3c84a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:40:45 np0005593232 podman[386259]: 2026-01-23 10:40:45.760137404 +0000 UTC m=+0.139838265 container attach 8cb5a4e386031b4c1218dfdf7275c70c841143ee40ab7564d6c192be3d3c84a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:40:45 np0005593232 nova_compute[250269]: 2026-01-23 10:40:45.944 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3406: 321 pgs: 321 active+clean; 518 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 61 KiB/s rd, 38 KiB/s wr, 13 op/s
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]: {
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:    "0": [
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:        {
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:            "devices": [
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:                "/dev/loop3"
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:            ],
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:            "lv_name": "ceph_lv0",
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:            "lv_size": "7511998464",
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:            "name": "ceph_lv0",
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:            "tags": {
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:                "ceph.cluster_name": "ceph",
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:                "ceph.crush_device_class": "",
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:                "ceph.encrypted": "0",
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:                "ceph.osd_id": "0",
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:                "ceph.type": "block",
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:                "ceph.vdo": "0"
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:            },
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:            "type": "block",
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:            "vg_name": "ceph_vg0"
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:        }
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]:    ]
Jan 23 05:40:46 np0005593232 laughing_lumiere[386275]: }
Jan 23 05:40:46 np0005593232 systemd[1]: libpod-8cb5a4e386031b4c1218dfdf7275c70c841143ee40ab7564d6c192be3d3c84a4.scope: Deactivated successfully.
Jan 23 05:40:46 np0005593232 podman[386259]: 2026-01-23 10:40:46.59697125 +0000 UTC m=+0.976672041 container died 8cb5a4e386031b4c1218dfdf7275c70c841143ee40ab7564d6c192be3d3c84a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lumiere, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:40:46 np0005593232 systemd[1]: var-lib-containers-storage-overlay-554fce3c1ac11de7142d77d98efb17e7dbc305518ce404c0d0eb3fc1a6d16fdf-merged.mount: Deactivated successfully.
Jan 23 05:40:46 np0005593232 podman[386259]: 2026-01-23 10:40:46.660740473 +0000 UTC m=+1.040441264 container remove 8cb5a4e386031b4c1218dfdf7275c70c841143ee40ab7564d6c192be3d3c84a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_lumiere, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:40:46 np0005593232 systemd[1]: libpod-conmon-8cb5a4e386031b4c1218dfdf7275c70c841143ee40ab7564d6c192be3d3c84a4.scope: Deactivated successfully.
Jan 23 05:40:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:40:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:46.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:40:47 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 05:40:47 np0005593232 kernel: tap456d51f3-4b (unregistering): left promiscuous mode
Jan 23 05:40:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:47.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:47 np0005593232 NetworkManager[49057]: <info>  [1769164847.1020] device (tap456d51f3-4b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.101 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.111 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:47 np0005593232 ovn_controller[151001]: 2026-01-23T10:40:47Z|00777|binding|INFO|Releasing lport 456d51f3-4b45-4e54-acee-c50facafcd50 from this chassis (sb_readonly=0)
Jan 23 05:40:47 np0005593232 ovn_controller[151001]: 2026-01-23T10:40:47Z|00778|binding|INFO|Setting lport 456d51f3-4b45-4e54-acee-c50facafcd50 down in Southbound
Jan 23 05:40:47 np0005593232 ovn_controller[151001]: 2026-01-23T10:40:47Z|00779|binding|INFO|Removing iface tap456d51f3-4b ovn-installed in OVS
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.113 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:40:47.120 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:5d:ea 10.100.0.8'], port_security=['fa:16:3e:06:5d:ea 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0888913c-71a6-45fe-97bf-9dddd2b7b521', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6e762fca3b634c7aa1d994314c059c54', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ed138636-f650-4a09-b808-0b05f9067a5a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0936335-b706-4400-8411-bdd084c8cdf7, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=456d51f3-4b45-4e54-acee-c50facafcd50) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:40:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:40:47.122 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 456d51f3-4b45-4e54-acee-c50facafcd50 in datapath fba2ba4a-d82c-4f8b-9754-c13fbec41a04 unbound from our chassis#033[00m
Jan 23 05:40:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:40:47.123 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fba2ba4a-d82c-4f8b-9754-c13fbec41a04, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:40:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:40:47.127 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[bcbfbc4e-75b0-489d-8916-c5d4a3e7ea71]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:40:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:40:47.128 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04 namespace which is not needed anymore#033[00m
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.146 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:47 np0005593232 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000c5.scope: Deactivated successfully.
Jan 23 05:40:47 np0005593232 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d000000c5.scope: Consumed 15.224s CPU time.
Jan 23 05:40:47 np0005593232 systemd-machined[215836]: Machine qemu-87-instance-000000c5 terminated.
Jan 23 05:40:47 np0005593232 neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04[384920]: [NOTICE]   (384924) : haproxy version is 2.8.14-c23fe91
Jan 23 05:40:47 np0005593232 neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04[384920]: [NOTICE]   (384924) : path to executable is /usr/sbin/haproxy
Jan 23 05:40:47 np0005593232 neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04[384920]: [WARNING]  (384924) : Exiting Master process...
Jan 23 05:40:47 np0005593232 neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04[384920]: [ALERT]    (384924) : Current worker (384926) exited with code 143 (Terminated)
Jan 23 05:40:47 np0005593232 neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04[384920]: [WARNING]  (384924) : All workers exited. Exiting... (0)
Jan 23 05:40:47 np0005593232 systemd[1]: libpod-167fef3247cdacfc1b0169e9d91db6f7f303d1e3bb44fcfa4463a29d4671ec8c.scope: Deactivated successfully.
Jan 23 05:40:47 np0005593232 podman[386436]: 2026-01-23 10:40:47.277140933 +0000 UTC m=+0.042318064 container died 167fef3247cdacfc1b0169e9d91db6f7f303d1e3bb44fcfa4463a29d4671ec8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:40:47 np0005593232 kernel: tap456d51f3-4b: entered promiscuous mode
Jan 23 05:40:47 np0005593232 kernel: tap456d51f3-4b (unregistering): left promiscuous mode
Jan 23 05:40:47 np0005593232 NetworkManager[49057]: <info>  [1769164847.3319] manager: (tap456d51f3-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/365)
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.339 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.355 250273 INFO nova.virt.libvirt.driver [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Instance shutdown successfully after 3 seconds.#033[00m
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.359 250273 INFO nova.virt.libvirt.driver [-] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Instance destroyed successfully.#033[00m
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.360 250273 DEBUG nova.virt.libvirt.vif [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:39:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=197,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP3FfIOd2lnI+tPBfDtyl7+3bVUJP3jvoQEZS2+zpCm94FEzq78d4QEW/4ixP6N6S+NwXEvQperhCcfeORiYVMygQWeTqWJgqUherQ/1aiNrcs4OJRb36XBDXhjh6k5P/Q==',key_name='tempest-keypair-529522234',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:39:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6e762fca3b634c7aa1d994314c059c54',ramdisk_id='',reservation_id='r-v6jorgzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',i
mage_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-63035580',owner_user_name='tempest-AttachVolumeMultiAttachTest-63035580-project-member'},tags=<?>,task_state='resize_migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:40:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93cd560e84264023877c47122b5919de',uuid=0888913c-71a6-45fe-97bf-9dddd2b7b521,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "vif_mac": "fa:16:3e:06:5d:ea"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.360 250273 DEBUG nova.network.os_vif_util [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Converting VIF {"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "vif_mac": "fa:16:3e:06:5d:ea"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.361 250273 DEBUG nova.network.os_vif_util [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:06:5d:ea,bridge_name='br-int',has_traffic_filtering=True,id=456d51f3-4b45-4e54-acee-c50facafcd50,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap456d51f3-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.361 250273 DEBUG os_vif [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:06:5d:ea,bridge_name='br-int',has_traffic_filtering=True,id=456d51f3-4b45-4e54-acee-c50facafcd50,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap456d51f3-4b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.363 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.363 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap456d51f3-4b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.366 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.368 250273 INFO os_vif [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:06:5d:ea,bridge_name='br-int',has_traffic_filtering=True,id=456d51f3-4b45-4e54-acee-c50facafcd50,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap456d51f3-4b')#033[00m
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.415 250273 DEBUG nova.compute.manager [req-74fc51dd-6187-484f-9cd0-3b4b239a8c6d req-254f1618-e185-4b92-ba13-58e49757804d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Received event network-vif-unplugged-456d51f3-4b45-4e54-acee-c50facafcd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.416 250273 DEBUG oslo_concurrency.lockutils [req-74fc51dd-6187-484f-9cd0-3b4b239a8c6d req-254f1618-e185-4b92-ba13-58e49757804d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.416 250273 DEBUG oslo_concurrency.lockutils [req-74fc51dd-6187-484f-9cd0-3b4b239a8c6d req-254f1618-e185-4b92-ba13-58e49757804d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.417 250273 DEBUG oslo_concurrency.lockutils [req-74fc51dd-6187-484f-9cd0-3b4b239a8c6d req-254f1618-e185-4b92-ba13-58e49757804d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.417 250273 DEBUG nova.compute.manager [req-74fc51dd-6187-484f-9cd0-3b4b239a8c6d req-254f1618-e185-4b92-ba13-58e49757804d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] No waiting events found dispatching network-vif-unplugged-456d51f3-4b45-4e54-acee-c50facafcd50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.417 250273 WARNING nova.compute.manager [req-74fc51dd-6187-484f-9cd0-3b4b239a8c6d req-254f1618-e185-4b92-ba13-58e49757804d 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Received unexpected event network-vif-unplugged-456d51f3-4b45-4e54-acee-c50facafcd50 for instance with vm_state active and task_state resize_migrating.#033[00m
Jan 23 05:40:47 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-167fef3247cdacfc1b0169e9d91db6f7f303d1e3bb44fcfa4463a29d4671ec8c-userdata-shm.mount: Deactivated successfully.
Jan 23 05:40:47 np0005593232 systemd[1]: var-lib-containers-storage-overlay-47e7b399d815f9aba5bdc51c98c287f8c7c7a47cbbc7dca7a2e9a45cbbf6294e-merged.mount: Deactivated successfully.
Jan 23 05:40:47 np0005593232 podman[386436]: 2026-01-23 10:40:47.56728815 +0000 UTC m=+0.332465311 container cleanup 167fef3247cdacfc1b0169e9d91db6f7f303d1e3bb44fcfa4463a29d4671ec8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:40:47 np0005593232 systemd[1]: libpod-conmon-167fef3247cdacfc1b0169e9d91db6f7f303d1e3bb44fcfa4463a29d4671ec8c.scope: Deactivated successfully.
Jan 23 05:40:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:40:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:40:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:40:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:40:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008692006037130095 of space, bias 1.0, pg target 2.6076018111390282 quantized to 32 (current 32)
Jan 23 05:40:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:40:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004347911432160804 of space, bias 1.0, pg target 1.2956776067839195 quantized to 32 (current 32)
Jan 23 05:40:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:40:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.398084170854272e-05 quantized to 32 (current 32)
Jan 23 05:40:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:40:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Jan 23 05:40:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:40:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 32)
Jan 23 05:40:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:40:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:40:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:40:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Jan 23 05:40:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:40:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 23 05:40:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:40:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:40:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:40:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.628 250273 DEBUG nova.virt.libvirt.driver [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.629 250273 DEBUG nova.virt.libvirt.driver [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.630 250273 DEBUG nova.virt.libvirt.driver [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:40:47 np0005593232 podman[386487]: 2026-01-23 10:40:47.812066278 +0000 UTC m=+0.206552122 container remove 167fef3247cdacfc1b0169e9d91db6f7f303d1e3bb44fcfa4463a29d4671ec8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 05:40:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:40:47.818 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[588cc96e-495c-403c-a8f2-a004b8eb2651]: (4, ('Fri Jan 23 10:40:47 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04 (167fef3247cdacfc1b0169e9d91db6f7f303d1e3bb44fcfa4463a29d4671ec8c)\n167fef3247cdacfc1b0169e9d91db6f7f303d1e3bb44fcfa4463a29d4671ec8c\nFri Jan 23 10:40:47 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04 (167fef3247cdacfc1b0169e9d91db6f7f303d1e3bb44fcfa4463a29d4671ec8c)\n167fef3247cdacfc1b0169e9d91db6f7f303d1e3bb44fcfa4463a29d4671ec8c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:40:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:40:47.821 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0a245b20-33ef-46a3-b955-4d402c2238a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:40:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:40:47.822 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfba2ba4a-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:40:47 np0005593232 kernel: tapfba2ba4a-d0: left promiscuous mode
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.824 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.827 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:40:47.830 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[18cf7a79-c24a-4630-9b9a-88e661b3faaa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:40:47 np0005593232 nova_compute[250269]: 2026-01-23 10:40:47.842 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:40:47.853 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7a990a31-b7a0-4f60-9446-639b20a7c5a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:40:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:40:47.854 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5eaa51e1-13dd-46f2-b559-7ec344294a5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:40:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:40:47.869 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0339dd10-9487-46d9-ae1d-af6f50b12bc4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 870671, 'reachable_time': 25874, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386520, 'error': None, 'target': 'ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:40:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:40:47.872 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:40:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:40:47.872 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[eac96fe3-8190-4246-a67e-3dbc9c3f62e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:40:47 np0005593232 systemd[1]: run-netns-ovnmeta\x2dfba2ba4a\x2dd82c\x2d4f8b\x2d9754\x2dc13fbec41a04.mount: Deactivated successfully.
Jan 23 05:40:47 np0005593232 podman[386516]: 2026-01-23 10:40:47.897909028 +0000 UTC m=+0.038913327 container create 7419759917a01abe3b64f75e0e897e4deefe582f36cf3bc8c235680a92bf639b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:40:47 np0005593232 systemd[1]: Started libpod-conmon-7419759917a01abe3b64f75e0e897e4deefe582f36cf3bc8c235680a92bf639b.scope.
Jan 23 05:40:47 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:40:47 np0005593232 podman[386516]: 2026-01-23 10:40:47.973407144 +0000 UTC m=+0.114411523 container init 7419759917a01abe3b64f75e0e897e4deefe582f36cf3bc8c235680a92bf639b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:40:47 np0005593232 podman[386516]: 2026-01-23 10:40:47.883089837 +0000 UTC m=+0.024094166 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:40:47 np0005593232 podman[386516]: 2026-01-23 10:40:47.987495974 +0000 UTC m=+0.128500323 container start 7419759917a01abe3b64f75e0e897e4deefe582f36cf3bc8c235680a92bf639b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:40:47 np0005593232 podman[386516]: 2026-01-23 10:40:47.99228514 +0000 UTC m=+0.133289459 container attach 7419759917a01abe3b64f75e0e897e4deefe582f36cf3bc8c235680a92bf639b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 23 05:40:47 np0005593232 stupefied_keldysh[386534]: 167 167
Jan 23 05:40:47 np0005593232 systemd[1]: libpod-7419759917a01abe3b64f75e0e897e4deefe582f36cf3bc8c235680a92bf639b.scope: Deactivated successfully.
Jan 23 05:40:47 np0005593232 podman[386516]: 2026-01-23 10:40:47.995223074 +0000 UTC m=+0.136227383 container died 7419759917a01abe3b64f75e0e897e4deefe582f36cf3bc8c235680a92bf639b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:40:48 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d7f3eef72f9976c738a42ca4078c892159770decc4069c9ec05d0eb20d91bf40-merged.mount: Deactivated successfully.
Jan 23 05:40:48 np0005593232 podman[386516]: 2026-01-23 10:40:48.045143563 +0000 UTC m=+0.186147902 container remove 7419759917a01abe3b64f75e0e897e4deefe582f36cf3bc8c235680a92bf639b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 05:40:48 np0005593232 systemd[1]: libpod-conmon-7419759917a01abe3b64f75e0e897e4deefe582f36cf3bc8c235680a92bf639b.scope: Deactivated successfully.
Jan 23 05:40:48 np0005593232 podman[386559]: 2026-01-23 10:40:48.243554452 +0000 UTC m=+0.040312136 container create 42dfc900d7718382baf7291e669e96b554cb5816370c0e08c79af86a5236d217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 05:40:48 np0005593232 systemd[1]: Started libpod-conmon-42dfc900d7718382baf7291e669e96b554cb5816370c0e08c79af86a5236d217.scope.
Jan 23 05:40:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3407: 321 pgs: 321 active+clean; 518 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 64 KiB/s rd, 53 KiB/s wr, 16 op/s
Jan 23 05:40:48 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:40:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d12abbad191540dd6d78f132efab445f3272f9e01ba6430bc5f798478fcdedcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:40:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d12abbad191540dd6d78f132efab445f3272f9e01ba6430bc5f798478fcdedcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:40:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d12abbad191540dd6d78f132efab445f3272f9e01ba6430bc5f798478fcdedcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:40:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d12abbad191540dd6d78f132efab445f3272f9e01ba6430bc5f798478fcdedcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:40:48 np0005593232 podman[386559]: 2026-01-23 10:40:48.226811237 +0000 UTC m=+0.023568951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:40:48 np0005593232 podman[386559]: 2026-01-23 10:40:48.324149963 +0000 UTC m=+0.120907657 container init 42dfc900d7718382baf7291e669e96b554cb5816370c0e08c79af86a5236d217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 23 05:40:48 np0005593232 podman[386559]: 2026-01-23 10:40:48.331319287 +0000 UTC m=+0.128076971 container start 42dfc900d7718382baf7291e669e96b554cb5816370c0e08c79af86a5236d217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 23 05:40:48 np0005593232 podman[386559]: 2026-01-23 10:40:48.335063063 +0000 UTC m=+0.131820747 container attach 42dfc900d7718382baf7291e669e96b554cb5816370c0e08c79af86a5236d217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 05:40:48 np0005593232 nova_compute[250269]: 2026-01-23 10:40:48.511 250273 DEBUG nova.network.neutron [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Port 456d51f3-4b45-4e54-acee-c50facafcd50 binding to destination host compute-0.ctlplane.example.com is already ACTIVE migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3171#033[00m
Jan 23 05:40:48 np0005593232 nova_compute[250269]: 2026-01-23 10:40:48.625 250273 DEBUG oslo_concurrency.lockutils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:40:48 np0005593232 nova_compute[250269]: 2026-01-23 10:40:48.626 250273 DEBUG oslo_concurrency.lockutils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:40:48 np0005593232 nova_compute[250269]: 2026-01-23 10:40:48.627 250273 DEBUG oslo_concurrency.lockutils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:40:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:48.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:49.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:49 np0005593232 agitated_faraday[386576]: {
Jan 23 05:40:49 np0005593232 agitated_faraday[386576]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:40:49 np0005593232 agitated_faraday[386576]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:40:49 np0005593232 agitated_faraday[386576]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:40:49 np0005593232 agitated_faraday[386576]:        "osd_id": 0,
Jan 23 05:40:49 np0005593232 agitated_faraday[386576]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:40:49 np0005593232 agitated_faraday[386576]:        "type": "bluestore"
Jan 23 05:40:49 np0005593232 agitated_faraday[386576]:    }
Jan 23 05:40:49 np0005593232 agitated_faraday[386576]: }
Jan 23 05:40:49 np0005593232 systemd[1]: libpod-42dfc900d7718382baf7291e669e96b554cb5816370c0e08c79af86a5236d217.scope: Deactivated successfully.
Jan 23 05:40:49 np0005593232 podman[386559]: 2026-01-23 10:40:49.187389119 +0000 UTC m=+0.984146823 container died 42dfc900d7718382baf7291e669e96b554cb5816370c0e08c79af86a5236d217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:40:49 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d12abbad191540dd6d78f132efab445f3272f9e01ba6430bc5f798478fcdedcb-merged.mount: Deactivated successfully.
Jan 23 05:40:49 np0005593232 podman[386559]: 2026-01-23 10:40:49.240582391 +0000 UTC m=+1.037340075 container remove 42dfc900d7718382baf7291e669e96b554cb5816370c0e08c79af86a5236d217 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 05:40:49 np0005593232 systemd[1]: libpod-conmon-42dfc900d7718382baf7291e669e96b554cb5816370c0e08c79af86a5236d217.scope: Deactivated successfully.
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:40:49.334341) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164849334425, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 905, "num_deletes": 253, "total_data_size": 1353417, "memory_usage": 1374904, "flush_reason": "Manual Compaction"}
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164849348801, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 853521, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74850, "largest_seqno": 75754, "table_properties": {"data_size": 849839, "index_size": 1397, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 10230, "raw_average_key_size": 21, "raw_value_size": 841737, "raw_average_value_size": 1735, "num_data_blocks": 62, "num_entries": 485, "num_filter_entries": 485, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769164777, "oldest_key_time": 1769164777, "file_creation_time": 1769164849, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 14546 microseconds, and 6723 cpu microseconds.
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:40:49.348902) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 853521 bytes OK
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:40:49.348928) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:40:49.351125) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:40:49.351169) EVENT_LOG_v1 {"time_micros": 1769164849351141, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:40:49.351196) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 1349037, prev total WAL file size 1370030, number of live WAL files 2.
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:40:49.352165) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373538' seq:72057594037927935, type:22 .. '6D6772737461740033303130' seq:0, type:0; will stop at (end)
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(833KB)], [173(12MB)]
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164849352246, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 14313117, "oldest_snapshot_seqno": -1}
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 9963 keys, 10926789 bytes, temperature: kUnknown
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164849513268, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 10926789, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10865537, "index_size": 35219, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24965, "raw_key_size": 263461, "raw_average_key_size": 26, "raw_value_size": 10694147, "raw_average_value_size": 1073, "num_data_blocks": 1333, "num_entries": 9963, "num_filter_entries": 9963, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769164849, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:40:49.513781) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 10926789 bytes
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:40:49.515494) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 88.8 rd, 67.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 12.8 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(29.6) write-amplify(12.8) OK, records in: 10456, records dropped: 493 output_compression: NoCompression
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:40:49.515515) EVENT_LOG_v1 {"time_micros": 1769164849515506, "job": 108, "event": "compaction_finished", "compaction_time_micros": 161204, "compaction_time_cpu_micros": 53217, "output_level": 6, "num_output_files": 1, "total_output_size": 10926789, "num_input_records": 10456, "num_output_records": 9963, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164849515859, "job": 108, "event": "table_file_deletion", "file_number": 175}
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164849518254, "job": 108, "event": "table_file_deletion", "file_number": 173}
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:40:49.352054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:40:49.518384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:40:49.518397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:40:49.518401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:40:49.518405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:40:49.518410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:40:49 np0005593232 nova_compute[250269]: 2026-01-23 10:40:49.533 250273 DEBUG nova.compute.manager [req-030d4401-4094-4d80-9d37-431ebd3a2102 req-c305821e-5f46-4a33-b0a6-823828623e27 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Received event network-vif-plugged-456d51f3-4b45-4e54-acee-c50facafcd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:40:49 np0005593232 nova_compute[250269]: 2026-01-23 10:40:49.534 250273 DEBUG oslo_concurrency.lockutils [req-030d4401-4094-4d80-9d37-431ebd3a2102 req-c305821e-5f46-4a33-b0a6-823828623e27 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:40:49 np0005593232 nova_compute[250269]: 2026-01-23 10:40:49.534 250273 DEBUG oslo_concurrency.lockutils [req-030d4401-4094-4d80-9d37-431ebd3a2102 req-c305821e-5f46-4a33-b0a6-823828623e27 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:40:49 np0005593232 nova_compute[250269]: 2026-01-23 10:40:49.535 250273 DEBUG oslo_concurrency.lockutils [req-030d4401-4094-4d80-9d37-431ebd3a2102 req-c305821e-5f46-4a33-b0a6-823828623e27 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:40:49 np0005593232 nova_compute[250269]: 2026-01-23 10:40:49.535 250273 DEBUG nova.compute.manager [req-030d4401-4094-4d80-9d37-431ebd3a2102 req-c305821e-5f46-4a33-b0a6-823828623e27 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] No waiting events found dispatching network-vif-plugged-456d51f3-4b45-4e54-acee-c50facafcd50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:40:49 np0005593232 nova_compute[250269]: 2026-01-23 10:40:49.535 250273 WARNING nova.compute.manager [req-030d4401-4094-4d80-9d37-431ebd3a2102 req-c305821e-5f46-4a33-b0a6-823828623e27 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Received unexpected event network-vif-plugged-456d51f3-4b45-4e54-acee-c50facafcd50 for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 23 05:40:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:40:49 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev e4f33f40-e57b-4c93-8f0a-5f7666865de4 does not exist
Jan 23 05:40:49 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 0f7a3bf5-bc8d-42f7-b0a7-891ea35e9dbc does not exist
Jan 23 05:40:49 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 642510d9-e059-489a-a254-e97fa53d09df does not exist
Jan 23 05:40:49 np0005593232 nova_compute[250269]: 2026-01-23 10:40:49.914 250273 DEBUG oslo_concurrency.lockutils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:40:49 np0005593232 nova_compute[250269]: 2026-01-23 10:40:49.915 250273 DEBUG oslo_concurrency.lockutils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquired lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:40:49 np0005593232 nova_compute[250269]: 2026-01-23 10:40:49.915 250273 DEBUG nova.network.neutron [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:40:50 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:40:50 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:40:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3408: 321 pgs: 321 active+clean; 518 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 29 KiB/s wr, 4 op/s
Jan 23 05:40:50 np0005593232 nova_compute[250269]: 2026-01-23 10:40:50.946 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:40:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:50.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:40:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:40:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:51.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:40:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3409: 321 pgs: 321 active+clean; 518 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 29 KiB/s wr, 4 op/s
Jan 23 05:40:52 np0005593232 nova_compute[250269]: 2026-01-23 10:40:52.366 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:52.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:53.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:40:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3410: 321 pgs: 321 active+clean; 518 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.8 KiB/s rd, 40 KiB/s wr, 11 op/s
Jan 23 05:40:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:40:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:54.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:40:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:40:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:55.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:40:55 np0005593232 nova_compute[250269]: 2026-01-23 10:40:55.948 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3411: 321 pgs: 321 active+clean; 518 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.7 KiB/s rd, 28 KiB/s wr, 10 op/s
Jan 23 05:40:56 np0005593232 nova_compute[250269]: 2026-01-23 10:40:56.405 250273 DEBUG nova.network.neutron [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Updating instance_info_cache with network_info: [{"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:40:56 np0005593232 nova_compute[250269]: 2026-01-23 10:40:56.510 250273 DEBUG oslo_concurrency.lockutils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Releasing lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:40:56 np0005593232 nova_compute[250269]: 2026-01-23 10:40:56.673 250273 DEBUG os_brick.utils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 23 05:40:56 np0005593232 nova_compute[250269]: 2026-01-23 10:40:56.674 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:40:56 np0005593232 nova_compute[250269]: 2026-01-23 10:40:56.692 265031 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:40:56 np0005593232 nova_compute[250269]: 2026-01-23 10:40:56.692 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb904a7-2e95-4f8f-93bf-585643c8a52d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:40:56 np0005593232 nova_compute[250269]: 2026-01-23 10:40:56.697 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:40:56 np0005593232 nova_compute[250269]: 2026-01-23 10:40:56.708 265031 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:40:56 np0005593232 nova_compute[250269]: 2026-01-23 10:40:56.709 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[4dbfd6a4-be47-4403-8567-b39d36eae2d3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:c6473626b456', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:40:56 np0005593232 nova_compute[250269]: 2026-01-23 10:40:56.711 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:40:56 np0005593232 nova_compute[250269]: 2026-01-23 10:40:56.720 265031 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:40:56 np0005593232 nova_compute[250269]: 2026-01-23 10:40:56.720 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[0334dd66-2497-4e08-a20f-1e4d324a3b64]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:40:56 np0005593232 nova_compute[250269]: 2026-01-23 10:40:56.722 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[3b60627d-5710-479e-90e6-031aa2c3080d]: (4, 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:40:56 np0005593232 nova_compute[250269]: 2026-01-23 10:40:56.723 250273 DEBUG oslo_concurrency.processutils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:40:56 np0005593232 nova_compute[250269]: 2026-01-23 10:40:56.764 250273 DEBUG oslo_concurrency.processutils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "nvme version" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:40:56 np0005593232 nova_compute[250269]: 2026-01-23 10:40:56.769 250273 DEBUG os_brick.initiator.connectors.lightos [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 23 05:40:56 np0005593232 nova_compute[250269]: 2026-01-23 10:40:56.770 250273 DEBUG os_brick.initiator.connectors.lightos [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 23 05:40:56 np0005593232 nova_compute[250269]: 2026-01-23 10:40:56.770 250273 DEBUG os_brick.initiator.connectors.lightos [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 23 05:40:56 np0005593232 nova_compute[250269]: 2026-01-23 10:40:56.771 250273 DEBUG os_brick.utils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] <== get_connector_properties: return (97ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:c6473626b456', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 23 05:40:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:40:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:56.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:40:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:40:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:57.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:40:57 np0005593232 nova_compute[250269]: 2026-01-23 10:40:57.369 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:40:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3412: 321 pgs: 321 active+clean; 518 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 28 KiB/s wr, 13 op/s
Jan 23 05:40:58 np0005593232 nova_compute[250269]: 2026-01-23 10:40:58.530 250273 DEBUG nova.virt.libvirt.driver [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Jan 23 05:40:58 np0005593232 nova_compute[250269]: 2026-01-23 10:40:58.531 250273 DEBUG nova.virt.libvirt.driver [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 23 05:40:58 np0005593232 nova_compute[250269]: 2026-01-23 10:40:58.533 250273 INFO nova.virt.libvirt.driver [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Creating image(s)#033[00m
Jan 23 05:40:58 np0005593232 nova_compute[250269]: 2026-01-23 10:40:58.604 250273 DEBUG nova.storage.rbd_utils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] creating snapshot(nova-resize) on rbd image(0888913c-71a6-45fe-97bf-9dddd2b7b521_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 05:40:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:40:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:40:58.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:40:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:40:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:40:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:40:59.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:40:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:40:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e381 do_prune osdmap full prune enabled
Jan 23 05:40:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e382 e382: 3 total, 3 up, 3 in
Jan 23 05:40:59 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e382: 3 total, 3 up, 3 in
Jan 23 05:40:59 np0005593232 nova_compute[250269]: 2026-01-23 10:40:59.594 250273 DEBUG nova.objects.instance [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 0888913c-71a6-45fe-97bf-9dddd2b7b521 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:41:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3414: 321 pgs: 321 active+clean; 518 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 16 KiB/s wr, 12 op/s
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.373 250273 DEBUG nova.virt.libvirt.driver [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.374 250273 DEBUG nova.virt.libvirt.driver [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Ensure instance console log exists: /var/lib/nova/instances/0888913c-71a6-45fe-97bf-9dddd2b7b521/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.374 250273 DEBUG oslo_concurrency.lockutils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.374 250273 DEBUG oslo_concurrency.lockutils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.375 250273 DEBUG oslo_concurrency.lockutils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.380 250273 DEBUG nova.virt.libvirt.driver [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Start _get_guest_xml network_info=[{"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "vif_mac": "fa:16:3e:06:5d:ea"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vdb', 'disk_bus': 'virtio', 'delete_on_termination': False, 'attachment_id': '1f8ab79e-6332-4cd6-979a-3a7c8fade811', 'device_type': 'disk', 'boot_index': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4393c992-1666-40a0-ab11-4cc66bdcd721', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4393c992-1666-40a0-ab11-4cc66bdcd721', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': '0888913c-71a6-45fe-97bf-9dddd2b7b521', 'attached_at': '2026-01-23T10:40:58.000000', 'detached_at': '', 'volume_id': '4393c992-1666-40a0-ab11-4cc66bdcd721', 'multiattach': True, 'serial': '4393c992-1666-40a0-ab11-4cc66bdcd721'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.390 250273 WARNING nova.virt.libvirt.driver [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.400 250273 DEBUG nova.virt.libvirt.host [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.401 250273 DEBUG nova.virt.libvirt.host [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.412 250273 DEBUG nova.virt.libvirt.host [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.412 250273 DEBUG nova.virt.libvirt.host [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.414 250273 DEBUG nova.virt.libvirt.driver [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.414 250273 DEBUG nova.virt.hardware [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='eebea5f8-9b11-45ad-873d-c4ea90d3de87',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.414 250273 DEBUG nova.virt.hardware [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.415 250273 DEBUG nova.virt.hardware [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.415 250273 DEBUG nova.virt.hardware [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.415 250273 DEBUG nova.virt.hardware [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.415 250273 DEBUG nova.virt.hardware [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.416 250273 DEBUG nova.virt.hardware [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.416 250273 DEBUG nova.virt.hardware [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.416 250273 DEBUG nova.virt.hardware [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.416 250273 DEBUG nova.virt.hardware [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.417 250273 DEBUG nova.virt.hardware [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.417 250273 DEBUG nova.objects.instance [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 0888913c-71a6-45fe-97bf-9dddd2b7b521 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.599 250273 DEBUG oslo_concurrency.processutils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:41:00 np0005593232 nova_compute[250269]: 2026-01-23 10:41:00.951 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:00.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:01.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:41:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2896474132' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.118 250273 DEBUG oslo_concurrency.processutils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.185 250273 DEBUG oslo_concurrency.processutils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:41:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:41:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1901990334' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.654 250273 DEBUG oslo_concurrency.processutils [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.772 250273 DEBUG nova.virt.libvirt.vif [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:39:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=197,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP3FfIOd2lnI+tPBfDtyl7+3bVUJP3jvoQEZS2+zpCm94FEzq78d4QEW/4ixP6N6S+NwXEvQperhCcfeORiYVMygQWeTqWJgqUherQ/1aiNrcs4OJRb36XBDXhjh6k5P/Q==',key_name='tempest-keypair-529522234',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:39:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6e762fca3b634c7aa1d994314c059c54',ramdisk_id='',reservation_id='r-v6jorgzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virt
io',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-63035580',owner_user_name='tempest-AttachVolumeMultiAttachTest-63035580-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:40:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93cd560e84264023877c47122b5919de',uuid=0888913c-71a6-45fe-97bf-9dddd2b7b521,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "vif_mac": "fa:16:3e:06:5d:ea"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.774 250273 DEBUG nova.network.os_vif_util [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Converting VIF {"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "vif_mac": "fa:16:3e:06:5d:ea"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.776 250273 DEBUG nova.network.os_vif_util [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:5d:ea,bridge_name='br-int',has_traffic_filtering=True,id=456d51f3-4b45-4e54-acee-c50facafcd50,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap456d51f3-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.782 250273 DEBUG nova.virt.libvirt.driver [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:41:01 np0005593232 nova_compute[250269]:  <uuid>0888913c-71a6-45fe-97bf-9dddd2b7b521</uuid>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:  <name>instance-000000c5</name>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:  <memory>196608</memory>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <nova:name>multiattach-server-0</nova:name>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:41:00</nova:creationTime>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.micro">
Jan 23 05:41:01 np0005593232 nova_compute[250269]:        <nova:memory>192</nova:memory>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:        <nova:user uuid="93cd560e84264023877c47122b5919de">tempest-AttachVolumeMultiAttachTest-63035580-project-member</nova:user>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:        <nova:project uuid="6e762fca3b634c7aa1d994314c059c54">tempest-AttachVolumeMultiAttachTest-63035580</nova:project>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:        <nova:port uuid="456d51f3-4b45-4e54-acee-c50facafcd50">
Jan 23 05:41:01 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <entry name="serial">0888913c-71a6-45fe-97bf-9dddd2b7b521</entry>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <entry name="uuid">0888913c-71a6-45fe-97bf-9dddd2b7b521</entry>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/0888913c-71a6-45fe-97bf-9dddd2b7b521_disk">
Jan 23 05:41:01 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:41:01 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/0888913c-71a6-45fe-97bf-9dddd2b7b521_disk.config">
Jan 23 05:41:01 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:41:01 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="volumes/volume-4393c992-1666-40a0-ab11-4cc66bdcd721">
Jan 23 05:41:01 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:41:01 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <target dev="vdb" bus="virtio"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <serial>4393c992-1666-40a0-ab11-4cc66bdcd721</serial>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <shareable/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:06:5d:ea"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <target dev="tap456d51f3-4b"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/0888913c-71a6-45fe-97bf-9dddd2b7b521/console.log" append="off"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:41:01 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:41:01 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:41:01 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:41:01 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.784 250273 DEBUG nova.virt.libvirt.vif [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:39:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=197,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP3FfIOd2lnI+tPBfDtyl7+3bVUJP3jvoQEZS2+zpCm94FEzq78d4QEW/4ixP6N6S+NwXEvQperhCcfeORiYVMygQWeTqWJgqUherQ/1aiNrcs4OJRb36XBDXhjh6k5P/Q==',key_name='tempest-keypair-529522234',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:39:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6e762fca3b634c7aa1d994314c059c54',ramdisk_id='',reservation_id='r-v6jorgzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virt
io',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-63035580',owner_user_name='tempest-AttachVolumeMultiAttachTest-63035580-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:40:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93cd560e84264023877c47122b5919de',uuid=0888913c-71a6-45fe-97bf-9dddd2b7b521,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "vif_mac": "fa:16:3e:06:5d:ea"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.786 250273 DEBUG nova.network.os_vif_util [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Converting VIF {"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "vif_mac": "fa:16:3e:06:5d:ea"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.787 250273 DEBUG nova.network.os_vif_util [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:5d:ea,bridge_name='br-int',has_traffic_filtering=True,id=456d51f3-4b45-4e54-acee-c50facafcd50,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap456d51f3-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.788 250273 DEBUG os_vif [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:5d:ea,bridge_name='br-int',has_traffic_filtering=True,id=456d51f3-4b45-4e54-acee-c50facafcd50,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap456d51f3-4b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.789 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.790 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.791 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.796 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.796 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap456d51f3-4b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.798 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap456d51f3-4b, col_values=(('external_ids', {'iface-id': '456d51f3-4b45-4e54-acee-c50facafcd50', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:06:5d:ea', 'vm-uuid': '0888913c-71a6-45fe-97bf-9dddd2b7b521'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.800 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:01 np0005593232 NetworkManager[49057]: <info>  [1769164861.8026] manager: (tap456d51f3-4b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/366)
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.804 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.815 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.818 250273 INFO os_vif [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:5d:ea,bridge_name='br-int',has_traffic_filtering=True,id=456d51f3-4b45-4e54-acee-c50facafcd50,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap456d51f3-4b')#033[00m
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.935 250273 DEBUG nova.virt.libvirt.driver [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.936 250273 DEBUG nova.virt.libvirt.driver [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.937 250273 DEBUG nova.virt.libvirt.driver [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.937 250273 DEBUG nova.virt.libvirt.driver [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] No VIF found with MAC fa:16:3e:06:5d:ea, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:41:01 np0005593232 nova_compute[250269]: 2026-01-23 10:41:01.938 250273 INFO nova.virt.libvirt.driver [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Using config drive#033[00m
Jan 23 05:41:02 np0005593232 virtqemud[249592]: End of file while reading data: Input/output error
Jan 23 05:41:02 np0005593232 virtqemud[249592]: End of file while reading data: Input/output error
Jan 23 05:41:02 np0005593232 kernel: tap456d51f3-4b: entered promiscuous mode
Jan 23 05:41:02 np0005593232 NetworkManager[49057]: <info>  [1769164862.1170] manager: (tap456d51f3-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/367)
Jan 23 05:41:02 np0005593232 ovn_controller[151001]: 2026-01-23T10:41:02Z|00780|binding|INFO|Claiming lport 456d51f3-4b45-4e54-acee-c50facafcd50 for this chassis.
Jan 23 05:41:02 np0005593232 ovn_controller[151001]: 2026-01-23T10:41:02Z|00781|binding|INFO|456d51f3-4b45-4e54-acee-c50facafcd50: Claiming fa:16:3e:06:5d:ea 10.100.0.8
Jan 23 05:41:02 np0005593232 nova_compute[250269]: 2026-01-23 10:41:02.119 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.138 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:5d:ea 10.100.0.8'], port_security=['fa:16:3e:06:5d:ea 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0888913c-71a6-45fe-97bf-9dddd2b7b521', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6e762fca3b634c7aa1d994314c059c54', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'ed138636-f650-4a09-b808-0b05f9067a5a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0936335-b706-4400-8411-bdd084c8cdf7, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=456d51f3-4b45-4e54-acee-c50facafcd50) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.140 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 456d51f3-4b45-4e54-acee-c50facafcd50 in datapath fba2ba4a-d82c-4f8b-9754-c13fbec41a04 bound to our chassis#033[00m
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.141 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fba2ba4a-d82c-4f8b-9754-c13fbec41a04#033[00m
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.157 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb0449c-57a5-424b-a4e9-1f6c51650181]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.158 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfba2ba4a-d1 in ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.160 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfba2ba4a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.160 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ecac4bf5-beb5-45b0-89e0-66082a942fa7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:02 np0005593232 ovn_controller[151001]: 2026-01-23T10:41:02Z|00782|binding|INFO|Setting lport 456d51f3-4b45-4e54-acee-c50facafcd50 ovn-installed in OVS
Jan 23 05:41:02 np0005593232 ovn_controller[151001]: 2026-01-23T10:41:02Z|00783|binding|INFO|Setting lport 456d51f3-4b45-4e54-acee-c50facafcd50 up in Southbound
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.161 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9194fe2b-573c-49e5-9111-bd7caeaaa578]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:02 np0005593232 nova_compute[250269]: 2026-01-23 10:41:02.165 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:02 np0005593232 systemd-udevd[386842]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:41:02 np0005593232 nova_compute[250269]: 2026-01-23 10:41:02.169 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:02 np0005593232 systemd-machined[215836]: New machine qemu-88-instance-000000c5.
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.183 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[9821b515-b434-48f1-b162-0229f189b6e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:02 np0005593232 systemd[1]: Started Virtual Machine qemu-88-instance-000000c5.
Jan 23 05:41:02 np0005593232 NetworkManager[49057]: <info>  [1769164862.1927] device (tap456d51f3-4b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:41:02 np0005593232 NetworkManager[49057]: <info>  [1769164862.1951] device (tap456d51f3-4b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.211 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c0795c93-216f-44a2-bf36-65a901220e8c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.259 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[4fdd2402-4e20-49f1-9cf0-d969915967ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.264 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9b6c5b5a-cfa1-4fbe-8128-0165c6263815]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:02 np0005593232 NetworkManager[49057]: <info>  [1769164862.2659] manager: (tapfba2ba4a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/368)
Jan 23 05:41:02 np0005593232 systemd-udevd[386846]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.302 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[834e9c3d-cb20-4421-a7cf-fe84a4623353]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.305 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[d2510681-2e0d-4637-9bc9-142b8a6b1507]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3415: 321 pgs: 321 active+clean; 518 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 16 KiB/s wr, 13 op/s
Jan 23 05:41:02 np0005593232 NetworkManager[49057]: <info>  [1769164862.3258] device (tapfba2ba4a-d0): carrier: link connected
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.331 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[718edd09-bcbe-4ade-b11c-e6b317ba755c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.350 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[31e4b91e-3340-475c-bd23-238d71ab82c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfba2ba4a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:db:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 236], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 877546, 'reachable_time': 39120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386874, 'error': None, 'target': 'ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:02 np0005593232 nova_compute[250269]: 2026-01-23 10:41:02.356 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769164847.355012, 0888913c-71a6-45fe-97bf-9dddd2b7b521 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:41:02 np0005593232 nova_compute[250269]: 2026-01-23 10:41:02.357 250273 INFO nova.compute.manager [-] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.369 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[09bdfbb5-3fda-40f9-8a26-c3438c322d9e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe27:db55'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 877546, 'tstamp': 877546}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386875, 'error': None, 'target': 'ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.393 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a6a3f70b-ba58-4ef6-9281-e91aefb9477d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfba2ba4a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:db:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 236], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 877546, 'reachable_time': 39120, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 386876, 'error': None, 'target': 'ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.430 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8c31a764-439e-4ddd-a62c-c2173bedaf2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.535 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4c791d4a-483a-4339-8de6-84da21dae2f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.537 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfba2ba4a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.537 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.538 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfba2ba4a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:41:02 np0005593232 NetworkManager[49057]: <info>  [1769164862.5405] manager: (tapfba2ba4a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Jan 23 05:41:02 np0005593232 kernel: tapfba2ba4a-d0: entered promiscuous mode
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.543 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfba2ba4a-d0, col_values=(('external_ids', {'iface-id': '2348ddba-3dc3-4456-a637-f3065ba0d8f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:41:02 np0005593232 ovn_controller[151001]: 2026-01-23T10:41:02Z|00784|binding|INFO|Releasing lport 2348ddba-3dc3-4456-a637-f3065ba0d8f6 from this chassis (sb_readonly=0)
Jan 23 05:41:02 np0005593232 nova_compute[250269]: 2026-01-23 10:41:02.544 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:02 np0005593232 nova_compute[250269]: 2026-01-23 10:41:02.552 250273 DEBUG nova.compute.manager [None req-52623604-e7df-40f9-a62e-f9e639c09803 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:41:02 np0005593232 nova_compute[250269]: 2026-01-23 10:41:02.567 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.570 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fba2ba4a-d82c-4f8b-9754-c13fbec41a04.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fba2ba4a-d82c-4f8b-9754-c13fbec41a04.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.572 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6aac0103-370d-4c60-8668-d158f3694eaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.574 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-fba2ba4a-d82c-4f8b-9754-c13fbec41a04
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/fba2ba4a-d82c-4f8b-9754-c13fbec41a04.pid.haproxy
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID fba2ba4a-d82c-4f8b-9754-c13fbec41a04
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:41:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:02.576 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'env', 'PROCESS_TAG=haproxy-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fba2ba4a-d82c-4f8b-9754-c13fbec41a04.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:41:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:02.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:03 np0005593232 podman[386967]: 2026-01-23 10:41:03.072858579 +0000 UTC m=+0.089448313 container create f7d9420006b852b926415ec2055af761175a89522d4cffce2cb073ff5b94486f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 23 05:41:03 np0005593232 nova_compute[250269]: 2026-01-23 10:41:03.079 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164863.0784986, 0888913c-71a6-45fe-97bf-9dddd2b7b521 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:41:03 np0005593232 nova_compute[250269]: 2026-01-23 10:41:03.080 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:41:03 np0005593232 nova_compute[250269]: 2026-01-23 10:41:03.084 250273 DEBUG nova.compute.manager [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:41:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:03.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:03 np0005593232 nova_compute[250269]: 2026-01-23 10:41:03.088 250273 INFO nova.virt.libvirt.driver [-] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Instance running successfully.#033[00m
Jan 23 05:41:03 np0005593232 virtqemud[249592]: argument unsupported: QEMU guest agent is not configured
Jan 23 05:41:03 np0005593232 nova_compute[250269]: 2026-01-23 10:41:03.093 250273 DEBUG nova.virt.libvirt.guest [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 23 05:41:03 np0005593232 nova_compute[250269]: 2026-01-23 10:41:03.094 250273 DEBUG nova.virt.libvirt.driver [None req-3ce75456-354b-4ca4-9818-a6d3c4afa068 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Jan 23 05:41:03 np0005593232 podman[386967]: 2026-01-23 10:41:03.013136491 +0000 UTC m=+0.029726225 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:41:03 np0005593232 systemd[1]: Started libpod-conmon-f7d9420006b852b926415ec2055af761175a89522d4cffce2cb073ff5b94486f.scope.
Jan 23 05:41:03 np0005593232 nova_compute[250269]: 2026-01-23 10:41:03.133 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:41:03 np0005593232 nova_compute[250269]: 2026-01-23 10:41:03.138 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:41:03 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:41:03 np0005593232 nova_compute[250269]: 2026-01-23 10:41:03.171 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 23 05:41:03 np0005593232 nova_compute[250269]: 2026-01-23 10:41:03.171 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164863.0832982, 0888913c-71a6-45fe-97bf-9dddd2b7b521 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:41:03 np0005593232 nova_compute[250269]: 2026-01-23 10:41:03.172 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] VM Started (Lifecycle Event)#033[00m
Jan 23 05:41:03 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b40bd483516f699516ff9a160cd8337f77bb3fa7547b3184d329b65853aab4b1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:41:03 np0005593232 podman[386967]: 2026-01-23 10:41:03.201605278 +0000 UTC m=+0.218195012 container init f7d9420006b852b926415ec2055af761175a89522d4cffce2cb073ff5b94486f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:41:03 np0005593232 podman[386967]: 2026-01-23 10:41:03.209155053 +0000 UTC m=+0.225744767 container start f7d9420006b852b926415ec2055af761175a89522d4cffce2cb073ff5b94486f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 23 05:41:03 np0005593232 nova_compute[250269]: 2026-01-23 10:41:03.212 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:41:03 np0005593232 nova_compute[250269]: 2026-01-23 10:41:03.221 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:41:03 np0005593232 neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04[386984]: [NOTICE]   (386988) : New worker (386990) forked
Jan 23 05:41:03 np0005593232 neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04[386984]: [NOTICE]   (386988) : Loading success.
Jan 23 05:41:03 np0005593232 nova_compute[250269]: 2026-01-23 10:41:03.296 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 23 05:41:03 np0005593232 nova_compute[250269]: 2026-01-23 10:41:03.733 250273 DEBUG nova.compute.manager [req-01854864-3f53-4aad-a1c6-d97350fb812c req-bbb849aa-15a2-422c-936f-40a52e087ccd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Received event network-vif-plugged-456d51f3-4b45-4e54-acee-c50facafcd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:41:03 np0005593232 nova_compute[250269]: 2026-01-23 10:41:03.734 250273 DEBUG oslo_concurrency.lockutils [req-01854864-3f53-4aad-a1c6-d97350fb812c req-bbb849aa-15a2-422c-936f-40a52e087ccd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:41:03 np0005593232 nova_compute[250269]: 2026-01-23 10:41:03.735 250273 DEBUG oslo_concurrency.lockutils [req-01854864-3f53-4aad-a1c6-d97350fb812c req-bbb849aa-15a2-422c-936f-40a52e087ccd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:41:03 np0005593232 nova_compute[250269]: 2026-01-23 10:41:03.735 250273 DEBUG oslo_concurrency.lockutils [req-01854864-3f53-4aad-a1c6-d97350fb812c req-bbb849aa-15a2-422c-936f-40a52e087ccd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:41:03 np0005593232 nova_compute[250269]: 2026-01-23 10:41:03.736 250273 DEBUG nova.compute.manager [req-01854864-3f53-4aad-a1c6-d97350fb812c req-bbb849aa-15a2-422c-936f-40a52e087ccd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] No waiting events found dispatching network-vif-plugged-456d51f3-4b45-4e54-acee-c50facafcd50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:41:03 np0005593232 nova_compute[250269]: 2026-01-23 10:41:03.736 250273 WARNING nova.compute.manager [req-01854864-3f53-4aad-a1c6-d97350fb812c req-bbb849aa-15a2-422c-936f-40a52e087ccd 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Received unexpected event network-vif-plugged-456d51f3-4b45-4e54-acee-c50facafcd50 for instance with vm_state active and task_state resize_finish.#033[00m
Jan 23 05:41:04 np0005593232 nova_compute[250269]: 2026-01-23 10:41:04.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:41:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:41:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3416: 321 pgs: 321 active+clean; 518 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 56 KiB/s rd, 1.1 KiB/s wr, 39 op/s
Jan 23 05:41:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:41:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:04.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:41:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:41:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:05.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:41:05 np0005593232 nova_compute[250269]: 2026-01-23 10:41:05.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:41:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:05.944 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=76, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=75) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:41:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:05.945 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:41:05 np0005593232 nova_compute[250269]: 2026-01-23 10:41:05.949 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:05 np0005593232 nova_compute[250269]: 2026-01-23 10:41:05.951 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:06 np0005593232 nova_compute[250269]: 2026-01-23 10:41:06.084 250273 DEBUG nova.compute.manager [req-66c1b043-74a9-4b20-88d3-17c9594fface req-65adc375-b781-4481-88ca-ba8c300b981c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Received event network-vif-plugged-456d51f3-4b45-4e54-acee-c50facafcd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:41:06 np0005593232 nova_compute[250269]: 2026-01-23 10:41:06.085 250273 DEBUG oslo_concurrency.lockutils [req-66c1b043-74a9-4b20-88d3-17c9594fface req-65adc375-b781-4481-88ca-ba8c300b981c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:41:06 np0005593232 nova_compute[250269]: 2026-01-23 10:41:06.086 250273 DEBUG oslo_concurrency.lockutils [req-66c1b043-74a9-4b20-88d3-17c9594fface req-65adc375-b781-4481-88ca-ba8c300b981c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:41:06 np0005593232 nova_compute[250269]: 2026-01-23 10:41:06.086 250273 DEBUG oslo_concurrency.lockutils [req-66c1b043-74a9-4b20-88d3-17c9594fface req-65adc375-b781-4481-88ca-ba8c300b981c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:41:06 np0005593232 nova_compute[250269]: 2026-01-23 10:41:06.087 250273 DEBUG nova.compute.manager [req-66c1b043-74a9-4b20-88d3-17c9594fface req-65adc375-b781-4481-88ca-ba8c300b981c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] No waiting events found dispatching network-vif-plugged-456d51f3-4b45-4e54-acee-c50facafcd50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:41:06 np0005593232 nova_compute[250269]: 2026-01-23 10:41:06.087 250273 WARNING nova.compute.manager [req-66c1b043-74a9-4b20-88d3-17c9594fface req-65adc375-b781-4481-88ca-ba8c300b981c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Received unexpected event network-vif-plugged-456d51f3-4b45-4e54-acee-c50facafcd50 for instance with vm_state resized and task_state None.#033[00m
Jan 23 05:41:06 np0005593232 nova_compute[250269]: 2026-01-23 10:41:06.247 250273 DEBUG oslo_concurrency.lockutils [None req-df147b5e-1780-45c7-984e-b10a65d16d5d 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "0888913c-71a6-45fe-97bf-9dddd2b7b521" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:41:06 np0005593232 nova_compute[250269]: 2026-01-23 10:41:06.248 250273 DEBUG oslo_concurrency.lockutils [None req-df147b5e-1780-45c7-984e-b10a65d16d5d 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521" acquired by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:41:06 np0005593232 nova_compute[250269]: 2026-01-23 10:41:06.248 250273 DEBUG nova.compute.manager [None req-df147b5e-1780-45c7-984e-b10a65d16d5d 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Going to confirm migration 19 do_confirm_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:4679#033[00m
Jan 23 05:41:06 np0005593232 nova_compute[250269]: 2026-01-23 10:41:06.290 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:41:06 np0005593232 nova_compute[250269]: 2026-01-23 10:41:06.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:41:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3417: 321 pgs: 321 active+clean; 518 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 56 KiB/s rd, 1.1 KiB/s wr, 39 op/s
Jan 23 05:41:06 np0005593232 nova_compute[250269]: 2026-01-23 10:41:06.785 250273 DEBUG oslo_concurrency.lockutils [None req-df147b5e-1780-45c7-984e-b10a65d16d5d 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:41:06 np0005593232 nova_compute[250269]: 2026-01-23 10:41:06.787 250273 DEBUG oslo_concurrency.lockutils [None req-df147b5e-1780-45c7-984e-b10a65d16d5d 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquired lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:41:06 np0005593232 nova_compute[250269]: 2026-01-23 10:41:06.787 250273 DEBUG nova.network.neutron [None req-df147b5e-1780-45c7-984e-b10a65d16d5d 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:41:06 np0005593232 nova_compute[250269]: 2026-01-23 10:41:06.788 250273 DEBUG nova.objects.instance [None req-df147b5e-1780-45c7-984e-b10a65d16d5d 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lazy-loading 'info_cache' on Instance uuid 0888913c-71a6-45fe-97bf-9dddd2b7b521 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:41:06 np0005593232 nova_compute[250269]: 2026-01-23 10:41:06.802 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:06.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:07.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:41:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:41:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:41:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:41:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:41:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:41:08 np0005593232 nova_compute[250269]: 2026-01-23 10:41:08.288 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:41:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3418: 321 pgs: 321 active+clean; 518 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.1 KiB/s wr, 122 op/s
Jan 23 05:41:08 np0005593232 podman[387052]: 2026-01-23 10:41:08.454650637 +0000 UTC m=+0.104057738 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 05:41:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:41:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:08.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:41:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:09.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:41:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:09.947 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '76'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:41:10 np0005593232 nova_compute[250269]: 2026-01-23 10:41:10.277 250273 DEBUG nova.network.neutron [None req-df147b5e-1780-45c7-984e-b10a65d16d5d 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Updating instance_info_cache with network_info: [{"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:41:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3419: 321 pgs: 321 active+clean; 518 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 3.8 KiB/s wr, 113 op/s
Jan 23 05:41:10 np0005593232 nova_compute[250269]: 2026-01-23 10:41:10.711 250273 DEBUG oslo_concurrency.lockutils [None req-df147b5e-1780-45c7-984e-b10a65d16d5d 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Releasing lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:41:10 np0005593232 nova_compute[250269]: 2026-01-23 10:41:10.712 250273 DEBUG nova.objects.instance [None req-df147b5e-1780-45c7-984e-b10a65d16d5d 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lazy-loading 'migration_context' on Instance uuid 0888913c-71a6-45fe-97bf-9dddd2b7b521 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:41:10 np0005593232 nova_compute[250269]: 2026-01-23 10:41:10.953 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:41:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:10.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:41:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:11.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:11 np0005593232 nova_compute[250269]: 2026-01-23 10:41:11.516 250273 DEBUG nova.storage.rbd_utils [None req-df147b5e-1780-45c7-984e-b10a65d16d5d 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] removing snapshot(nova-resize) on rbd image(0888913c-71a6-45fe-97bf-9dddd2b7b521_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 23 05:41:11 np0005593232 nova_compute[250269]: 2026-01-23 10:41:11.806 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e382 do_prune osdmap full prune enabled
Jan 23 05:41:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e383 e383: 3 total, 3 up, 3 in
Jan 23 05:41:11 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e383: 3 total, 3 up, 3 in
Jan 23 05:41:11 np0005593232 nova_compute[250269]: 2026-01-23 10:41:11.952 250273 DEBUG oslo_concurrency.lockutils [None req-df147b5e-1780-45c7-984e-b10a65d16d5d 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:41:11 np0005593232 nova_compute[250269]: 2026-01-23 10:41:11.954 250273 DEBUG oslo_concurrency.lockutils [None req-df147b5e-1780-45c7-984e-b10a65d16d5d 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:41:12 np0005593232 nova_compute[250269]: 2026-01-23 10:41:12.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:41:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3421: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 518 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.1 KiB/s wr, 123 op/s
Jan 23 05:41:12 np0005593232 nova_compute[250269]: 2026-01-23 10:41:12.359 250273 DEBUG oslo_concurrency.processutils [None req-df147b5e-1780-45c7-984e-b10a65d16d5d 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:41:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:41:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/755137964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:41:12 np0005593232 nova_compute[250269]: 2026-01-23 10:41:12.805 250273 DEBUG oslo_concurrency.processutils [None req-df147b5e-1780-45c7-984e-b10a65d16d5d 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:41:12 np0005593232 nova_compute[250269]: 2026-01-23 10:41:12.813 250273 DEBUG nova.compute.provider_tree [None req-df147b5e-1780-45c7-984e-b10a65d16d5d 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:41:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:13.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:13.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:13 np0005593232 nova_compute[250269]: 2026-01-23 10:41:13.158 250273 DEBUG nova.scheduler.client.report [None req-df147b5e-1780-45c7-984e-b10a65d16d5d 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:41:13 np0005593232 podman[387140]: 2026-01-23 10:41:13.432947877 +0000 UTC m=+0.082915648 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 23 05:41:13 np0005593232 nova_compute[250269]: 2026-01-23 10:41:13.604 250273 DEBUG oslo_concurrency.lockutils [None req-df147b5e-1780-45c7-984e-b10a65d16d5d 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_source" :: held 1.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:41:14 np0005593232 nova_compute[250269]: 2026-01-23 10:41:14.056 250273 INFO nova.scheduler.client.report [None req-df147b5e-1780-45c7-984e-b10a65d16d5d 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Deleted allocation for migration a4212371-48a4-4079-aaa5-8560115dd624#033[00m
Jan 23 05:41:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:41:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3422: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 517 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.2 KiB/s wr, 115 op/s
Jan 23 05:41:14 np0005593232 nova_compute[250269]: 2026-01-23 10:41:14.924 250273 DEBUG oslo_concurrency.lockutils [None req-df147b5e-1780-45c7-984e-b10a65d16d5d 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521" "released" by "nova.compute.manager.ComputeManager.confirm_resize.<locals>.do_confirm_resize" :: held 8.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:41:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:15.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:15.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:15 np0005593232 nova_compute[250269]: 2026-01-23 10:41:15.955 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3423: 321 pgs: 1 active+clean+snaptrim, 320 active+clean; 517 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 4.2 KiB/s wr, 115 op/s
Jan 23 05:41:16 np0005593232 ovn_controller[151001]: 2026-01-23T10:41:16Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:06:5d:ea 10.100.0.8
Jan 23 05:41:16 np0005593232 nova_compute[250269]: 2026-01-23 10:41:16.809 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 05:41:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 55K writes, 207K keys, 55K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s#012Cumulative WAL: 55K writes, 21K syncs, 2.65 writes per sync, written: 0.19 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6411 writes, 22K keys, 6411 commit groups, 1.0 writes per commit group, ingest: 22.44 MB, 0.04 MB/s#012Interval WAL: 6411 writes, 2659 syncs, 2.41 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fb9efc0430#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fb9efc0430#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 
Jan 23 05:41:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:41:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:17.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:41:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:17.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:18 np0005593232 nova_compute[250269]: 2026-01-23 10:41:18.300 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:41:18 np0005593232 nova_compute[250269]: 2026-01-23 10:41:18.301 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:41:18 np0005593232 nova_compute[250269]: 2026-01-23 10:41:18.301 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:41:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3424: 321 pgs: 321 active+clean; 517 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 660 KiB/s rd, 15 KiB/s wr, 82 op/s
Jan 23 05:41:18 np0005593232 nova_compute[250269]: 2026-01-23 10:41:18.746 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:41:18 np0005593232 nova_compute[250269]: 2026-01-23 10:41:18.746 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:41:18 np0005593232 nova_compute[250269]: 2026-01-23 10:41:18.746 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:41:18 np0005593232 nova_compute[250269]: 2026-01-23 10:41:18.746 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0888913c-71a6-45fe-97bf-9dddd2b7b521 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:41:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:19.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:19.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:41:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3425: 321 pgs: 321 active+clean; 517 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 660 KiB/s rd, 15 KiB/s wr, 82 op/s
Jan 23 05:41:20 np0005593232 nova_compute[250269]: 2026-01-23 10:41:20.958 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:21.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:41:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:21.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:41:21 np0005593232 nova_compute[250269]: 2026-01-23 10:41:21.813 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3426: 321 pgs: 321 active+clean; 517 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 15 KiB/s wr, 80 op/s
Jan 23 05:41:22 np0005593232 nova_compute[250269]: 2026-01-23 10:41:22.923 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Updating instance_info_cache with network_info: [{"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:41:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:23.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:41:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:23.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:41:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3427: 321 pgs: 321 active+clean; 519 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 32 KiB/s wr, 77 op/s
Jan 23 05:41:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:41:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e383 do_prune osdmap full prune enabled
Jan 23 05:41:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e384 e384: 3 total, 3 up, 3 in
Jan 23 05:41:24 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e384: 3 total, 3 up, 3 in
Jan 23 05:41:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:25.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:25.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:25 np0005593232 nova_compute[250269]: 2026-01-23 10:41:25.959 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3429: 321 pgs: 321 active+clean; 519 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 37 KiB/s wr, 66 op/s
Jan 23 05:41:26 np0005593232 nova_compute[250269]: 2026-01-23 10:41:26.816 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:41:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:27.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:41:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 05:41:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:27.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 05:41:27 np0005593232 nova_compute[250269]: 2026-01-23 10:41:27.278 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:41:27 np0005593232 nova_compute[250269]: 2026-01-23 10:41:27.278 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:41:27 np0005593232 nova_compute[250269]: 2026-01-23 10:41:27.279 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:41:27 np0005593232 nova_compute[250269]: 2026-01-23 10:41:27.280 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:41:27 np0005593232 nova_compute[250269]: 2026-01-23 10:41:27.280 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:41:27 np0005593232 nova_compute[250269]: 2026-01-23 10:41:27.410 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:41:27 np0005593232 nova_compute[250269]: 2026-01-23 10:41:27.411 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:41:27 np0005593232 nova_compute[250269]: 2026-01-23 10:41:27.412 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:41:27 np0005593232 nova_compute[250269]: 2026-01-23 10:41:27.412 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:41:27 np0005593232 nova_compute[250269]: 2026-01-23 10:41:27.413 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:41:27 np0005593232 ceph-mgr[74726]: [devicehealth INFO root] Check health
Jan 23 05:41:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:41:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2147825269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:41:27 np0005593232 nova_compute[250269]: 2026-01-23 10:41:27.955 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:41:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3430: 321 pgs: 321 active+clean; 519 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 23 KiB/s wr, 22 op/s
Jan 23 05:41:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:41:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:29.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:41:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:41:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:29.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:41:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:41:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3431: 321 pgs: 321 active+clean; 519 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 23 KiB/s wr, 22 op/s
Jan 23 05:41:30 np0005593232 nova_compute[250269]: 2026-01-23 10:41:30.738 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:41:30 np0005593232 nova_compute[250269]: 2026-01-23 10:41:30.739 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:41:30 np0005593232 nova_compute[250269]: 2026-01-23 10:41:30.739 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:41:31 np0005593232 nova_compute[250269]: 2026-01-23 10:41:31.000 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:41:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:31.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:41:31 np0005593232 nova_compute[250269]: 2026-01-23 10:41:31.048 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:41:31 np0005593232 nova_compute[250269]: 2026-01-23 10:41:31.051 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3879MB free_disk=20.805767059326172GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:41:31 np0005593232 nova_compute[250269]: 2026-01-23 10:41:31.051 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:41:31 np0005593232 nova_compute[250269]: 2026-01-23 10:41:31.052 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:41:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:31.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:31 np0005593232 nova_compute[250269]: 2026-01-23 10:41:31.819 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3432: 321 pgs: 321 active+clean; 541 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 745 KiB/s wr, 43 op/s
Jan 23 05:41:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:33.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:41:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:33.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:41:33 np0005593232 nova_compute[250269]: 2026-01-23 10:41:33.750 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 0888913c-71a6-45fe-97bf-9dddd2b7b521 actively managed on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 192, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:41:33 np0005593232 nova_compute[250269]: 2026-01-23 10:41:33.751 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:41:33 np0005593232 nova_compute[250269]: 2026-01-23 10:41:33.751 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=704MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:41:33 np0005593232 nova_compute[250269]: 2026-01-23 10:41:33.823 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing inventories for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 23 05:41:33 np0005593232 nova_compute[250269]: 2026-01-23 10:41:33.898 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating ProviderTree inventory for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 23 05:41:33 np0005593232 nova_compute[250269]: 2026-01-23 10:41:33.899 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 05:41:33 np0005593232 nova_compute[250269]: 2026-01-23 10:41:33.920 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing aggregate associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 23 05:41:33 np0005593232 nova_compute[250269]: 2026-01-23 10:41:33.943 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing trait associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 23 05:41:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3433: 321 pgs: 321 active+clean; 565 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 2.1 MiB/s wr, 41 op/s
Jan 23 05:41:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:41:34 np0005593232 nova_compute[250269]: 2026-01-23 10:41:34.572 250273 DEBUG nova.compute.manager [req-dc7a3861-49d4-4b37-bf69-9fb13fa3b6a7 req-05e462f9-67f1-438d-8229-4efeb419dc53 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Received event network-changed-456d51f3-4b45-4e54-acee-c50facafcd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:41:34 np0005593232 nova_compute[250269]: 2026-01-23 10:41:34.572 250273 DEBUG nova.compute.manager [req-dc7a3861-49d4-4b37-bf69-9fb13fa3b6a7 req-05e462f9-67f1-438d-8229-4efeb419dc53 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Refreshing instance network info cache due to event network-changed-456d51f3-4b45-4e54-acee-c50facafcd50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:41:34 np0005593232 nova_compute[250269]: 2026-01-23 10:41:34.573 250273 DEBUG oslo_concurrency.lockutils [req-dc7a3861-49d4-4b37-bf69-9fb13fa3b6a7 req-05e462f9-67f1-438d-8229-4efeb419dc53 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:41:34 np0005593232 nova_compute[250269]: 2026-01-23 10:41:34.573 250273 DEBUG oslo_concurrency.lockutils [req-dc7a3861-49d4-4b37-bf69-9fb13fa3b6a7 req-05e462f9-67f1-438d-8229-4efeb419dc53 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:41:34 np0005593232 nova_compute[250269]: 2026-01-23 10:41:34.573 250273 DEBUG nova.network.neutron [req-dc7a3861-49d4-4b37-bf69-9fb13fa3b6a7 req-05e462f9-67f1-438d-8229-4efeb419dc53 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Refreshing network info cache for port 456d51f3-4b45-4e54-acee-c50facafcd50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:41:34 np0005593232 nova_compute[250269]: 2026-01-23 10:41:34.637 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:41:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:35.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:41:35 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3832577021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:41:35 np0005593232 nova_compute[250269]: 2026-01-23 10:41:35.105 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:41:35 np0005593232 nova_compute[250269]: 2026-01-23 10:41:35.115 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:41:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:41:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:35.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:41:35 np0005593232 nova_compute[250269]: 2026-01-23 10:41:35.473 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:41:35 np0005593232 nova_compute[250269]: 2026-01-23 10:41:35.476 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:41:35 np0005593232 nova_compute[250269]: 2026-01-23 10:41:35.477 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.424s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:41:36 np0005593232 nova_compute[250269]: 2026-01-23 10:41:36.049 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3434: 321 pgs: 321 active+clean; 565 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 23 05:41:36 np0005593232 nova_compute[250269]: 2026-01-23 10:41:36.822 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:41:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:37.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:41:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:41:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:37.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:41:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:41:37
Jan 23 05:41:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:41:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:41:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['.mgr', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'default.rgw.meta', 'backups']
Jan 23 05:41:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:41:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:41:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:41:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:41:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:41:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:41:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:41:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3435: 321 pgs: 321 active+clean; 565 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 23 05:41:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:41:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:41:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:41:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:41:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:41:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:41:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:41:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:41:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:41:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:41:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:39.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:41:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:39.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:41:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:41:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e384 do_prune osdmap full prune enabled
Jan 23 05:41:39 np0005593232 podman[387268]: 2026-01-23 10:41:39.498887755 +0000 UTC m=+0.132228289 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, 
org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 23 05:41:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e385 e385: 3 total, 3 up, 3 in
Jan 23 05:41:39 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e385: 3 total, 3 up, 3 in
Jan 23 05:41:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3437: 321 pgs: 321 active+clean; 565 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 2.1 MiB/s wr, 30 op/s
Jan 23 05:41:40 np0005593232 nova_compute[250269]: 2026-01-23 10:41:40.335 250273 DEBUG nova.network.neutron [req-dc7a3861-49d4-4b37-bf69-9fb13fa3b6a7 req-05e462f9-67f1-438d-8229-4efeb419dc53 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Updated VIF entry in instance network info cache for port 456d51f3-4b45-4e54-acee-c50facafcd50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:41:40 np0005593232 nova_compute[250269]: 2026-01-23 10:41:40.336 250273 DEBUG nova.network.neutron [req-dc7a3861-49d4-4b37-bf69-9fb13fa3b6a7 req-05e462f9-67f1-438d-8229-4efeb419dc53 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Updating instance_info_cache with network_info: [{"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:41:40 np0005593232 nova_compute[250269]: 2026-01-23 10:41:40.626 250273 DEBUG oslo_concurrency.lockutils [req-dc7a3861-49d4-4b37-bf69-9fb13fa3b6a7 req-05e462f9-67f1-438d-8229-4efeb419dc53 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:41:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:41.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:41 np0005593232 nova_compute[250269]: 2026-01-23 10:41:41.049 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:41:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:41.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:41:41 np0005593232 nova_compute[250269]: 2026-01-23 10:41:41.760 250273 DEBUG nova.compute.manager [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Jan 23 05:41:41 np0005593232 nova_compute[250269]: 2026-01-23 10:41:41.863 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:41 np0005593232 nova_compute[250269]: 2026-01-23 10:41:41.891 250273 DEBUG oslo_concurrency.lockutils [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:41:41 np0005593232 nova_compute[250269]: 2026-01-23 10:41:41.892 250273 DEBUG oslo_concurrency.lockutils [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:41:41 np0005593232 nova_compute[250269]: 2026-01-23 10:41:41.929 250273 DEBUG nova.objects.instance [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lazy-loading 'pci_requests' on Instance uuid f34f1af9-6c51-42ec-97f8-fb5bb146aeb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:41:41 np0005593232 nova_compute[250269]: 2026-01-23 10:41:41.950 250273 DEBUG nova.virt.hardware [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:41:41 np0005593232 nova_compute[250269]: 2026-01-23 10:41:41.950 250273 INFO nova.compute.claims [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:41:41 np0005593232 nova_compute[250269]: 2026-01-23 10:41:41.950 250273 DEBUG nova.objects.instance [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lazy-loading 'resources' on Instance uuid f34f1af9-6c51-42ec-97f8-fb5bb146aeb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:41:41 np0005593232 nova_compute[250269]: 2026-01-23 10:41:41.968 250273 DEBUG nova.objects.instance [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lazy-loading 'pci_devices' on Instance uuid f34f1af9-6c51-42ec-97f8-fb5bb146aeb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:41:42 np0005593232 nova_compute[250269]: 2026-01-23 10:41:42.017 250273 INFO nova.compute.resource_tracker [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Updating resource usage from migration adbf924a-ffa1-465a-bc4b-f2bc3ae2e761#033[00m
Jan 23 05:41:42 np0005593232 nova_compute[250269]: 2026-01-23 10:41:42.018 250273 DEBUG nova.compute.resource_tracker [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Starting to track incoming migration adbf924a-ffa1-465a-bc4b-f2bc3ae2e761 with flavor eebea5f8-9b11-45ad-873d-c4ea90d3de87 _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Jan 23 05:41:42 np0005593232 nova_compute[250269]: 2026-01-23 10:41:42.114 250273 DEBUG oslo_concurrency.processutils [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:41:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3438: 321 pgs: 321 active+clean; 565 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 1.4 MiB/s wr, 16 op/s
Jan 23 05:41:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:41:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1604398264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:41:42 np0005593232 nova_compute[250269]: 2026-01-23 10:41:42.594 250273 DEBUG oslo_concurrency.processutils [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:41:42 np0005593232 nova_compute[250269]: 2026-01-23 10:41:42.601 250273 DEBUG nova.compute.provider_tree [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:41:42 np0005593232 nova_compute[250269]: 2026-01-23 10:41:42.616 250273 DEBUG nova.scheduler.client.report [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:41:42 np0005593232 nova_compute[250269]: 2026-01-23 10:41:42.641 250273 DEBUG oslo_concurrency.lockutils [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:41:42 np0005593232 nova_compute[250269]: 2026-01-23 10:41:42.642 250273 INFO nova.compute.manager [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Migrating#033[00m
Jan 23 05:41:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:42.658 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:41:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:42.659 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:41:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:42.660 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:41:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:41:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:43.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:41:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:43.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:44 np0005593232 podman[387342]: 2026-01-23 10:41:44.114338093 +0000 UTC m=+0.059284886 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 05:41:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3439: 321 pgs: 321 active+clean; 565 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.1 KiB/s rd, 3.7 KiB/s wr, 8 op/s
Jan 23 05:41:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:41:44 np0005593232 systemd-logind[808]: New session 58 of user nova.
Jan 23 05:41:44 np0005593232 systemd[1]: Created slice User Slice of UID 42436.
Jan 23 05:41:44 np0005593232 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 23 05:41:44 np0005593232 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 23 05:41:44 np0005593232 systemd[1]: Starting User Manager for UID 42436...
Jan 23 05:41:45 np0005593232 systemd[387392]: Queued start job for default target Main User Target.
Jan 23 05:41:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:45 np0005593232 systemd[387392]: Created slice User Application Slice.
Jan 23 05:41:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:45.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:45 np0005593232 systemd[387392]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 23 05:41:45 np0005593232 systemd[387392]: Started Daily Cleanup of User's Temporary Directories.
Jan 23 05:41:45 np0005593232 systemd[387392]: Reached target Paths.
Jan 23 05:41:45 np0005593232 systemd[387392]: Reached target Timers.
Jan 23 05:41:45 np0005593232 systemd[387392]: Starting D-Bus User Message Bus Socket...
Jan 23 05:41:45 np0005593232 systemd[387392]: Starting Create User's Volatile Files and Directories...
Jan 23 05:41:45 np0005593232 systemd[387392]: Listening on D-Bus User Message Bus Socket.
Jan 23 05:41:45 np0005593232 systemd[387392]: Reached target Sockets.
Jan 23 05:41:45 np0005593232 systemd[387392]: Finished Create User's Volatile Files and Directories.
Jan 23 05:41:45 np0005593232 systemd[387392]: Reached target Basic System.
Jan 23 05:41:45 np0005593232 systemd[387392]: Reached target Main User Target.
Jan 23 05:41:45 np0005593232 systemd[387392]: Startup finished in 171ms.
Jan 23 05:41:45 np0005593232 systemd[1]: Started User Manager for UID 42436.
Jan 23 05:41:45 np0005593232 systemd[1]: Started Session 58 of User nova.
Jan 23 05:41:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:45.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:45 np0005593232 systemd[1]: session-58.scope: Deactivated successfully.
Jan 23 05:41:45 np0005593232 systemd-logind[808]: Session 58 logged out. Waiting for processes to exit.
Jan 23 05:41:45 np0005593232 systemd-logind[808]: Removed session 58.
Jan 23 05:41:45 np0005593232 systemd-logind[808]: New session 60 of user nova.
Jan 23 05:41:45 np0005593232 systemd[1]: Started Session 60 of User nova.
Jan 23 05:41:45 np0005593232 systemd[1]: session-60.scope: Deactivated successfully.
Jan 23 05:41:45 np0005593232 systemd-logind[808]: Session 60 logged out. Waiting for processes to exit.
Jan 23 05:41:45 np0005593232 systemd-logind[808]: Removed session 60.
Jan 23 05:41:46 np0005593232 nova_compute[250269]: 2026-01-23 10:41:46.050 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3440: 321 pgs: 321 active+clean; 565 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.1 KiB/s rd, 3.7 KiB/s wr, 8 op/s
Jan 23 05:41:46 np0005593232 nova_compute[250269]: 2026-01-23 10:41:46.866 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:41:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:47.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:41:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:47.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:41:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:41:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:41:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:41:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.008696368125348967 of space, bias 1.0, pg target 2.60891043760469 quantized to 32 (current 32)
Jan 23 05:41:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:41:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.005324837439512377 of space, bias 1.0, pg target 1.5868015569746883 quantized to 32 (current 32)
Jan 23 05:41:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:41:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.398084170854272e-05 quantized to 32 (current 32)
Jan 23 05:41:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:41:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Jan 23 05:41:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:41:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 32)
Jan 23 05:41:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:41:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:41:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:41:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Jan 23 05:41:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:41:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 23 05:41:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:41:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:41:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:41:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 23 05:41:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3441: 321 pgs: 321 active+clean; 566 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 23 op/s
Jan 23 05:41:48 np0005593232 nova_compute[250269]: 2026-01-23 10:41:48.664 250273 DEBUG nova.compute.manager [req-d73228b2-a896-490f-a4d2-d02433f02f23 req-28091c24-bedf-4910-afd0-1dff1eaf85d0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Received event network-vif-unplugged-3b1ac782-1188-42b9-a89f-eb26c7876140 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:41:48 np0005593232 nova_compute[250269]: 2026-01-23 10:41:48.664 250273 DEBUG oslo_concurrency.lockutils [req-d73228b2-a896-490f-a4d2-d02433f02f23 req-28091c24-bedf-4910-afd0-1dff1eaf85d0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:41:48 np0005593232 nova_compute[250269]: 2026-01-23 10:41:48.665 250273 DEBUG oslo_concurrency.lockutils [req-d73228b2-a896-490f-a4d2-d02433f02f23 req-28091c24-bedf-4910-afd0-1dff1eaf85d0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:41:48 np0005593232 nova_compute[250269]: 2026-01-23 10:41:48.665 250273 DEBUG oslo_concurrency.lockutils [req-d73228b2-a896-490f-a4d2-d02433f02f23 req-28091c24-bedf-4910-afd0-1dff1eaf85d0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:41:48 np0005593232 nova_compute[250269]: 2026-01-23 10:41:48.665 250273 DEBUG nova.compute.manager [req-d73228b2-a896-490f-a4d2-d02433f02f23 req-28091c24-bedf-4910-afd0-1dff1eaf85d0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] No waiting events found dispatching network-vif-unplugged-3b1ac782-1188-42b9-a89f-eb26c7876140 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:41:48 np0005593232 nova_compute[250269]: 2026-01-23 10:41:48.665 250273 WARNING nova.compute.manager [req-d73228b2-a896-490f-a4d2-d02433f02f23 req-28091c24-bedf-4910-afd0-1dff1eaf85d0 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Received unexpected event network-vif-unplugged-3b1ac782-1188-42b9-a89f-eb26c7876140 for instance with vm_state active and task_state resize_migrating.#033[00m
Jan 23 05:41:48 np0005593232 nova_compute[250269]: 2026-01-23 10:41:48.973 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:48.973 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=77, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=76) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:41:48 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:48.975 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:41:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:49.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:41:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:49.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:41:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:41:49 np0005593232 nova_compute[250269]: 2026-01-23 10:41:49.921 250273 INFO nova.network.neutron [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Updating port 3b1ac782-1188-42b9-a89f-eb26c7876140 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Jan 23 05:41:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3442: 321 pgs: 321 active+clean; 566 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 14 KiB/s wr, 21 op/s
Jan 23 05:41:50 np0005593232 nova_compute[250269]: 2026-01-23 10:41:50.662 250273 DEBUG oslo_concurrency.lockutils [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "refresh_cache-f34f1af9-6c51-42ec-97f8-fb5bb146aeb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:41:50 np0005593232 nova_compute[250269]: 2026-01-23 10:41:50.662 250273 DEBUG oslo_concurrency.lockutils [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquired lock "refresh_cache-f34f1af9-6c51-42ec-97f8-fb5bb146aeb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:41:50 np0005593232 nova_compute[250269]: 2026-01-23 10:41:50.663 250273 DEBUG nova.network.neutron [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:41:50 np0005593232 nova_compute[250269]: 2026-01-23 10:41:50.808 250273 DEBUG nova.compute.manager [req-9b563e15-ac82-4079-83f9-a0260e79e7aa req-bfa8addd-cdd4-45fc-945e-4f86b59fc37c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Received event network-vif-plugged-3b1ac782-1188-42b9-a89f-eb26c7876140 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:41:50 np0005593232 nova_compute[250269]: 2026-01-23 10:41:50.810 250273 DEBUG oslo_concurrency.lockutils [req-9b563e15-ac82-4079-83f9-a0260e79e7aa req-bfa8addd-cdd4-45fc-945e-4f86b59fc37c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:41:50 np0005593232 nova_compute[250269]: 2026-01-23 10:41:50.811 250273 DEBUG oslo_concurrency.lockutils [req-9b563e15-ac82-4079-83f9-a0260e79e7aa req-bfa8addd-cdd4-45fc-945e-4f86b59fc37c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:41:50 np0005593232 nova_compute[250269]: 2026-01-23 10:41:50.811 250273 DEBUG oslo_concurrency.lockutils [req-9b563e15-ac82-4079-83f9-a0260e79e7aa req-bfa8addd-cdd4-45fc-945e-4f86b59fc37c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:41:50 np0005593232 nova_compute[250269]: 2026-01-23 10:41:50.812 250273 DEBUG nova.compute.manager [req-9b563e15-ac82-4079-83f9-a0260e79e7aa req-bfa8addd-cdd4-45fc-945e-4f86b59fc37c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] No waiting events found dispatching network-vif-plugged-3b1ac782-1188-42b9-a89f-eb26c7876140 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:41:50 np0005593232 nova_compute[250269]: 2026-01-23 10:41:50.812 250273 WARNING nova.compute.manager [req-9b563e15-ac82-4079-83f9-a0260e79e7aa req-bfa8addd-cdd4-45fc-945e-4f86b59fc37c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Received unexpected event network-vif-plugged-3b1ac782-1188-42b9-a89f-eb26c7876140 for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 23 05:41:50 np0005593232 nova_compute[250269]: 2026-01-23 10:41:50.812 250273 DEBUG nova.compute.manager [req-9b563e15-ac82-4079-83f9-a0260e79e7aa req-bfa8addd-cdd4-45fc-945e-4f86b59fc37c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Received event network-changed-3b1ac782-1188-42b9-a89f-eb26c7876140 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:41:50 np0005593232 nova_compute[250269]: 2026-01-23 10:41:50.813 250273 DEBUG nova.compute.manager [req-9b563e15-ac82-4079-83f9-a0260e79e7aa req-bfa8addd-cdd4-45fc-945e-4f86b59fc37c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Refreshing instance network info cache due to event network-changed-3b1ac782-1188-42b9-a89f-eb26c7876140. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:41:50 np0005593232 nova_compute[250269]: 2026-01-23 10:41:50.813 250273 DEBUG oslo_concurrency.lockutils [req-9b563e15-ac82-4079-83f9-a0260e79e7aa req-bfa8addd-cdd4-45fc-945e-4f86b59fc37c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-f34f1af9-6c51-42ec-97f8-fb5bb146aeb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:41:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:41:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:51.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:41:51 np0005593232 nova_compute[250269]: 2026-01-23 10:41:51.086 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:41:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:51.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:41:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:41:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:41:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:41:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:41:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:41:51 np0005593232 nova_compute[250269]: 2026-01-23 10:41:51.869 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:41:52 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 217e6332-a0b0-4578-907b-986f29751d05 does not exist
Jan 23 05:41:52 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev bfdd7681-9edf-422f-8df1-21b3f224d556 does not exist
Jan 23 05:41:52 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c756bb45-73db-4de2-ad37-b45ea565d631 does not exist
Jan 23 05:41:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:41:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:41:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:41:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:41:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:41:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:41:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3443: 321 pgs: 321 active+clean; 566 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 12 KiB/s wr, 19 op/s
Jan 23 05:41:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:41:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:41:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:41:52 np0005593232 podman[387689]: 2026-01-23 10:41:52.790509785 +0000 UTC m=+0.058398280 container create 605ab40bdb8188aedfc9539a2f0f33678f1e06a166e25f25d9e8c16f6e995128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:41:52 np0005593232 systemd[1]: Started libpod-conmon-605ab40bdb8188aedfc9539a2f0f33678f1e06a166e25f25d9e8c16f6e995128.scope.
Jan 23 05:41:52 np0005593232 podman[387689]: 2026-01-23 10:41:52.754897673 +0000 UTC m=+0.022786188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:41:52 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:41:52 np0005593232 podman[387689]: 2026-01-23 10:41:52.881700877 +0000 UTC m=+0.149589392 container init 605ab40bdb8188aedfc9539a2f0f33678f1e06a166e25f25d9e8c16f6e995128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:41:52 np0005593232 podman[387689]: 2026-01-23 10:41:52.888782079 +0000 UTC m=+0.156670574 container start 605ab40bdb8188aedfc9539a2f0f33678f1e06a166e25f25d9e8c16f6e995128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:41:52 np0005593232 podman[387689]: 2026-01-23 10:41:52.891955529 +0000 UTC m=+0.159844034 container attach 605ab40bdb8188aedfc9539a2f0f33678f1e06a166e25f25d9e8c16f6e995128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:41:52 np0005593232 charming_chaplygin[387705]: 167 167
Jan 23 05:41:52 np0005593232 systemd[1]: libpod-605ab40bdb8188aedfc9539a2f0f33678f1e06a166e25f25d9e8c16f6e995128.scope: Deactivated successfully.
Jan 23 05:41:52 np0005593232 podman[387689]: 2026-01-23 10:41:52.894564103 +0000 UTC m=+0.162452618 container died 605ab40bdb8188aedfc9539a2f0f33678f1e06a166e25f25d9e8c16f6e995128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:41:52 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5d3b451723901fd05a573bb497424f38715a9713a13e44f6696317495f844db6-merged.mount: Deactivated successfully.
Jan 23 05:41:52 np0005593232 podman[387689]: 2026-01-23 10:41:52.938262615 +0000 UTC m=+0.206151110 container remove 605ab40bdb8188aedfc9539a2f0f33678f1e06a166e25f25d9e8c16f6e995128 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chaplygin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:41:52 np0005593232 systemd[1]: libpod-conmon-605ab40bdb8188aedfc9539a2f0f33678f1e06a166e25f25d9e8c16f6e995128.scope: Deactivated successfully.
Jan 23 05:41:52 np0005593232 nova_compute[250269]: 2026-01-23 10:41:52.955 250273 DEBUG nova.network.neutron [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Updating instance_info_cache with network_info: [{"id": "3b1ac782-1188-42b9-a89f-eb26c7876140", "address": "fa:16:3e:ee:36:e0", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b1ac782-11", "ovs_interfaceid": "3b1ac782-1188-42b9-a89f-eb26c7876140", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:41:52 np0005593232 nova_compute[250269]: 2026-01-23 10:41:52.974 250273 DEBUG oslo_concurrency.lockutils [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Releasing lock "refresh_cache-f34f1af9-6c51-42ec-97f8-fb5bb146aeb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:41:52 np0005593232 nova_compute[250269]: 2026-01-23 10:41:52.977 250273 DEBUG oslo_concurrency.lockutils [req-9b563e15-ac82-4079-83f9-a0260e79e7aa req-bfa8addd-cdd4-45fc-945e-4f86b59fc37c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-f34f1af9-6c51-42ec-97f8-fb5bb146aeb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:41:52 np0005593232 nova_compute[250269]: 2026-01-23 10:41:52.977 250273 DEBUG nova.network.neutron [req-9b563e15-ac82-4079-83f9-a0260e79e7aa req-bfa8addd-cdd4-45fc-945e-4f86b59fc37c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Refreshing network info cache for port 3b1ac782-1188-42b9-a89f-eb26c7876140 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:41:53 np0005593232 nova_compute[250269]: 2026-01-23 10:41:53.046 250273 DEBUG os_brick.utils [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 23 05:41:53 np0005593232 nova_compute[250269]: 2026-01-23 10:41:53.048 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:41:53 np0005593232 nova_compute[250269]: 2026-01-23 10:41:53.064 265031 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:41:53 np0005593232 nova_compute[250269]: 2026-01-23 10:41:53.065 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[cf739158-7a5f-4996-b449-e7c07fcf2f6d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:53 np0005593232 nova_compute[250269]: 2026-01-23 10:41:53.067 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:41:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:53.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:53 np0005593232 nova_compute[250269]: 2026-01-23 10:41:53.076 265031 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:41:53 np0005593232 nova_compute[250269]: 2026-01-23 10:41:53.077 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[f181e21b-a24e-4bb6-876d-dd77980136c0]: (4, ('InitiatorName=iqn.1994-05.com.redhat:c6473626b456', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:53 np0005593232 nova_compute[250269]: 2026-01-23 10:41:53.078 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:41:53 np0005593232 nova_compute[250269]: 2026-01-23 10:41:53.087 265031 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:41:53 np0005593232 nova_compute[250269]: 2026-01-23 10:41:53.087 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[e8f5fc1f-123c-4205-9a36-3326b0baa887]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:53 np0005593232 nova_compute[250269]: 2026-01-23 10:41:53.089 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[15b71fa3-b016-4170-b22b-a27408d13449]: (4, 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:53 np0005593232 nova_compute[250269]: 2026-01-23 10:41:53.089 250273 DEBUG oslo_concurrency.processutils [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:41:53 np0005593232 nova_compute[250269]: 2026-01-23 10:41:53.123 250273 DEBUG oslo_concurrency.processutils [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:41:53 np0005593232 nova_compute[250269]: 2026-01-23 10:41:53.125 250273 DEBUG os_brick.initiator.connectors.lightos [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 23 05:41:53 np0005593232 nova_compute[250269]: 2026-01-23 10:41:53.125 250273 DEBUG os_brick.initiator.connectors.lightos [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 23 05:41:53 np0005593232 nova_compute[250269]: 2026-01-23 10:41:53.126 250273 DEBUG os_brick.initiator.connectors.lightos [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 23 05:41:53 np0005593232 nova_compute[250269]: 2026-01-23 10:41:53.126 250273 DEBUG os_brick.utils [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] <== get_connector_properties: return (79ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:c6473626b456', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 23 05:41:53 np0005593232 podman[387732]: 2026-01-23 10:41:53.143897369 +0000 UTC m=+0.054772828 container create a98af87b9610915c45a531a9496fb5d73c310da53b6cbf1377bed5a859992890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sinoussi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:41:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:41:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:53.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:41:53 np0005593232 systemd[1]: Started libpod-conmon-a98af87b9610915c45a531a9496fb5d73c310da53b6cbf1377bed5a859992890.scope.
Jan 23 05:41:53 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:41:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91acfdbdab0943bdfcada6ef878f9c5aa1d716aa16946cb28dcf67af4920332/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:41:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91acfdbdab0943bdfcada6ef878f9c5aa1d716aa16946cb28dcf67af4920332/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:41:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91acfdbdab0943bdfcada6ef878f9c5aa1d716aa16946cb28dcf67af4920332/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:41:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91acfdbdab0943bdfcada6ef878f9c5aa1d716aa16946cb28dcf67af4920332/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:41:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b91acfdbdab0943bdfcada6ef878f9c5aa1d716aa16946cb28dcf67af4920332/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:41:53 np0005593232 podman[387732]: 2026-01-23 10:41:53.123499609 +0000 UTC m=+0.034375078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:41:53 np0005593232 podman[387732]: 2026-01-23 10:41:53.21885766 +0000 UTC m=+0.129733109 container init a98af87b9610915c45a531a9496fb5d73c310da53b6cbf1377bed5a859992890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sinoussi, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 23 05:41:53 np0005593232 podman[387732]: 2026-01-23 10:41:53.227378312 +0000 UTC m=+0.138253761 container start a98af87b9610915c45a531a9496fb5d73c310da53b6cbf1377bed5a859992890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 05:41:53 np0005593232 podman[387732]: 2026-01-23 10:41:53.232120467 +0000 UTC m=+0.142995946 container attach a98af87b9610915c45a531a9496fb5d73c310da53b6cbf1377bed5a859992890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sinoussi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 05:41:54 np0005593232 wonderful_sinoussi[387752]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:41:54 np0005593232 wonderful_sinoussi[387752]: --> relative data size: 1.0
Jan 23 05:41:54 np0005593232 wonderful_sinoussi[387752]: --> All data devices are unavailable
Jan 23 05:41:54 np0005593232 systemd[1]: libpod-a98af87b9610915c45a531a9496fb5d73c310da53b6cbf1377bed5a859992890.scope: Deactivated successfully.
Jan 23 05:41:54 np0005593232 podman[387732]: 2026-01-23 10:41:54.089464735 +0000 UTC m=+1.000340184 container died a98af87b9610915c45a531a9496fb5d73c310da53b6cbf1377bed5a859992890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 05:41:54 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b91acfdbdab0943bdfcada6ef878f9c5aa1d716aa16946cb28dcf67af4920332-merged.mount: Deactivated successfully.
Jan 23 05:41:54 np0005593232 podman[387732]: 2026-01-23 10:41:54.214629463 +0000 UTC m=+1.125504932 container remove a98af87b9610915c45a531a9496fb5d73c310da53b6cbf1377bed5a859992890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:41:54 np0005593232 systemd[1]: libpod-conmon-a98af87b9610915c45a531a9496fb5d73c310da53b6cbf1377bed5a859992890.scope: Deactivated successfully.
Jan 23 05:41:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3444: 321 pgs: 321 active+clean; 566 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 12 KiB/s wr, 13 op/s
Jan 23 05:41:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:41:54 np0005593232 nova_compute[250269]: 2026-01-23 10:41:54.465 250273 DEBUG nova.virt.libvirt.driver [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Jan 23 05:41:54 np0005593232 nova_compute[250269]: 2026-01-23 10:41:54.469 250273 DEBUG nova.virt.libvirt.driver [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 23 05:41:54 np0005593232 nova_compute[250269]: 2026-01-23 10:41:54.469 250273 INFO nova.virt.libvirt.driver [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Creating image(s)#033[00m
Jan 23 05:41:54 np0005593232 nova_compute[250269]: 2026-01-23 10:41:54.523 250273 DEBUG nova.storage.rbd_utils [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] creating snapshot(nova-resize) on rbd image(f34f1af9-6c51-42ec-97f8-fb5bb146aeb6_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 05:41:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e385 do_prune osdmap full prune enabled
Jan 23 05:41:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e386 e386: 3 total, 3 up, 3 in
Jan 23 05:41:54 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e386: 3 total, 3 up, 3 in
Jan 23 05:41:55 np0005593232 podman[387959]: 2026-01-23 10:41:54.906721585 +0000 UTC m=+0.024910199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:41:55 np0005593232 nova_compute[250269]: 2026-01-23 10:41:55.058 250273 DEBUG nova.network.neutron [req-9b563e15-ac82-4079-83f9-a0260e79e7aa req-bfa8addd-cdd4-45fc-945e-4f86b59fc37c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Updated VIF entry in instance network info cache for port 3b1ac782-1188-42b9-a89f-eb26c7876140. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:41:55 np0005593232 nova_compute[250269]: 2026-01-23 10:41:55.058 250273 DEBUG nova.network.neutron [req-9b563e15-ac82-4079-83f9-a0260e79e7aa req-bfa8addd-cdd4-45fc-945e-4f86b59fc37c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Updating instance_info_cache with network_info: [{"id": "3b1ac782-1188-42b9-a89f-eb26c7876140", "address": "fa:16:3e:ee:36:e0", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b1ac782-11", "ovs_interfaceid": "3b1ac782-1188-42b9-a89f-eb26c7876140", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:41:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:55.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:55 np0005593232 nova_compute[250269]: 2026-01-23 10:41:55.076 250273 DEBUG oslo_concurrency.lockutils [req-9b563e15-ac82-4079-83f9-a0260e79e7aa req-bfa8addd-cdd4-45fc-945e-4f86b59fc37c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-f34f1af9-6c51-42ec-97f8-fb5bb146aeb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:41:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:55.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:55 np0005593232 podman[387959]: 2026-01-23 10:41:55.394253452 +0000 UTC m=+0.512442046 container create 06efe51a617ee2d969596d732f5cb1c2d0dbdf53115e1878f5277b09ef0f9901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 23 05:41:55 np0005593232 systemd[1]: Stopping User Manager for UID 42436...
Jan 23 05:41:55 np0005593232 systemd[387392]: Activating special unit Exit the Session...
Jan 23 05:41:55 np0005593232 systemd[387392]: Stopped target Main User Target.
Jan 23 05:41:55 np0005593232 systemd[387392]: Stopped target Basic System.
Jan 23 05:41:55 np0005593232 systemd[387392]: Stopped target Paths.
Jan 23 05:41:55 np0005593232 systemd[387392]: Stopped target Sockets.
Jan 23 05:41:55 np0005593232 systemd[387392]: Stopped target Timers.
Jan 23 05:41:55 np0005593232 systemd[387392]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 23 05:41:55 np0005593232 systemd[387392]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 23 05:41:55 np0005593232 systemd[387392]: Closed D-Bus User Message Bus Socket.
Jan 23 05:41:55 np0005593232 systemd[387392]: Stopped Create User's Volatile Files and Directories.
Jan 23 05:41:55 np0005593232 systemd[387392]: Removed slice User Application Slice.
Jan 23 05:41:55 np0005593232 systemd[387392]: Reached target Shutdown.
Jan 23 05:41:55 np0005593232 systemd[387392]: Finished Exit the Session.
Jan 23 05:41:55 np0005593232 systemd[387392]: Reached target Exit the Session.
Jan 23 05:41:55 np0005593232 systemd[1]: user@42436.service: Deactivated successfully.
Jan 23 05:41:55 np0005593232 systemd[1]: Stopped User Manager for UID 42436.
Jan 23 05:41:55 np0005593232 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 23 05:41:55 np0005593232 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 23 05:41:55 np0005593232 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 23 05:41:55 np0005593232 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 23 05:41:55 np0005593232 systemd[1]: Removed slice User Slice of UID 42436.
Jan 23 05:41:55 np0005593232 systemd[1]: Started libpod-conmon-06efe51a617ee2d969596d732f5cb1c2d0dbdf53115e1878f5277b09ef0f9901.scope.
Jan 23 05:41:55 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:41:56 np0005593232 podman[387959]: 2026-01-23 10:41:56.081335402 +0000 UTC m=+1.199524096 container init 06efe51a617ee2d969596d732f5cb1c2d0dbdf53115e1878f5277b09ef0f9901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_tu, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.089 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:56 np0005593232 podman[387959]: 2026-01-23 10:41:56.09146435 +0000 UTC m=+1.209652954 container start 06efe51a617ee2d969596d732f5cb1c2d0dbdf53115e1878f5277b09ef0f9901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 05:41:56 np0005593232 practical_tu[387978]: 167 167
Jan 23 05:41:56 np0005593232 systemd[1]: libpod-06efe51a617ee2d969596d732f5cb1c2d0dbdf53115e1878f5277b09ef0f9901.scope: Deactivated successfully.
Jan 23 05:41:56 np0005593232 conmon[387978]: conmon 06efe51a617ee2d96959 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-06efe51a617ee2d969596d732f5cb1c2d0dbdf53115e1878f5277b09ef0f9901.scope/container/memory.events
Jan 23 05:41:56 np0005593232 podman[387959]: 2026-01-23 10:41:56.10766704 +0000 UTC m=+1.225855654 container attach 06efe51a617ee2d969596d732f5cb1c2d0dbdf53115e1878f5277b09ef0f9901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_tu, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:41:56 np0005593232 podman[387959]: 2026-01-23 10:41:56.10904703 +0000 UTC m=+1.227235624 container died 06efe51a617ee2d969596d732f5cb1c2d0dbdf53115e1878f5277b09ef0f9901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 05:41:56 np0005593232 systemd[1]: var-lib-containers-storage-overlay-456617fdbdac53966427da504f97e7f9611423a5743b502fa448f2db0e4143c6-merged.mount: Deactivated successfully.
Jan 23 05:41:56 np0005593232 podman[387959]: 2026-01-23 10:41:56.160671217 +0000 UTC m=+1.278859821 container remove 06efe51a617ee2d969596d732f5cb1c2d0dbdf53115e1878f5277b09ef0f9901 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_tu, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:41:56 np0005593232 systemd[1]: libpod-conmon-06efe51a617ee2d969596d732f5cb1c2d0dbdf53115e1878f5277b09ef0f9901.scope: Deactivated successfully.
Jan 23 05:41:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3446: 321 pgs: 321 active+clean; 566 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 14 KiB/s wr, 15 op/s
Jan 23 05:41:56 np0005593232 podman[388003]: 2026-01-23 10:41:56.354200758 +0000 UTC m=+0.050847557 container create 0a0b7fc75560eec1ac23e86f91ab89cbf68781e8123e3d68b1e0bf2495b98a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_tharp, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 05:41:56 np0005593232 systemd[1]: Started libpod-conmon-0a0b7fc75560eec1ac23e86f91ab89cbf68781e8123e3d68b1e0bf2495b98a0f.scope.
Jan 23 05:41:56 np0005593232 podman[388003]: 2026-01-23 10:41:56.330948737 +0000 UTC m=+0.027595576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:41:56 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:41:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5e57f97b384cc0b68f4212043a699475aa4a3e72a9cb9f463789f56fac75eea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:41:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5e57f97b384cc0b68f4212043a699475aa4a3e72a9cb9f463789f56fac75eea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:41:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5e57f97b384cc0b68f4212043a699475aa4a3e72a9cb9f463789f56fac75eea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:41:56 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5e57f97b384cc0b68f4212043a699475aa4a3e72a9cb9f463789f56fac75eea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:41:56 np0005593232 podman[388003]: 2026-01-23 10:41:56.455787535 +0000 UTC m=+0.152434354 container init 0a0b7fc75560eec1ac23e86f91ab89cbf68781e8123e3d68b1e0bf2495b98a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 05:41:56 np0005593232 podman[388003]: 2026-01-23 10:41:56.462514936 +0000 UTC m=+0.159161735 container start 0a0b7fc75560eec1ac23e86f91ab89cbf68781e8123e3d68b1e0bf2495b98a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_tharp, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 05:41:56 np0005593232 podman[388003]: 2026-01-23 10:41:56.4672006 +0000 UTC m=+0.163847399 container attach 0a0b7fc75560eec1ac23e86f91ab89cbf68781e8123e3d68b1e0bf2495b98a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.822 250273 DEBUG nova.objects.instance [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lazy-loading 'trusted_certs' on Instance uuid f34f1af9-6c51-42ec-97f8-fb5bb146aeb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.874 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.928 250273 DEBUG nova.virt.libvirt.driver [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.928 250273 DEBUG nova.virt.libvirt.driver [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Ensure instance console log exists: /var/lib/nova/instances/f34f1af9-6c51-42ec-97f8-fb5bb146aeb6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.929 250273 DEBUG oslo_concurrency.lockutils [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.929 250273 DEBUG oslo_concurrency.lockutils [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.929 250273 DEBUG oslo_concurrency.lockutils [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.933 250273 DEBUG nova.virt.libvirt.driver [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Start _get_guest_xml network_info=[{"id": "3b1ac782-1188-42b9-a89f-eb26c7876140", "address": "fa:16:3e:ee:36:e0", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "vif_mac": "fa:16:3e:ee:36:e0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b1ac782-11", "ovs_interfaceid": "3b1ac782-1188-42b9-a89f-eb26c7876140", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vdb', 'disk_bus': 'virtio', 'delete_on_termination': False, 'attachment_id': '366ccb94-0e1f-4db2-ac2b-26a36302f406', 'device_type': 'disk', 'boot_index': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-4393c992-1666-40a0-ab11-4cc66bdcd721', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '4393c992-1666-40a0-ab11-4cc66bdcd721', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'attaching', 'instance': 'f34f1af9-6c51-42ec-97f8-fb5bb146aeb6', 'attached_at': '2026-01-23T10:41:54.000000', 'detached_at': '', 'volume_id': '4393c992-1666-40a0-ab11-4cc66bdcd721', 'multiattach': True, 'serial': '4393c992-1666-40a0-ab11-4cc66bdcd721'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.940 250273 WARNING nova.virt.libvirt.driver [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.945 250273 DEBUG nova.virt.libvirt.host [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.947 250273 DEBUG nova.virt.libvirt.host [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.949 250273 DEBUG nova.virt.libvirt.host [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.950 250273 DEBUG nova.virt.libvirt.host [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.952 250273 DEBUG nova.virt.libvirt.driver [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.952 250273 DEBUG nova.virt.hardware [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='eebea5f8-9b11-45ad-873d-c4ea90d3de87',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.953 250273 DEBUG nova.virt.hardware [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.954 250273 DEBUG nova.virt.hardware [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.954 250273 DEBUG nova.virt.hardware [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.955 250273 DEBUG nova.virt.hardware [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.955 250273 DEBUG nova.virt.hardware [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.955 250273 DEBUG nova.virt.hardware [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.956 250273 DEBUG nova.virt.hardware [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.956 250273 DEBUG nova.virt.hardware [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.956 250273 DEBUG nova.virt.hardware [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.957 250273 DEBUG nova.virt.hardware [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.957 250273 DEBUG nova.objects.instance [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lazy-loading 'vcpu_model' on Instance uuid f34f1af9-6c51-42ec-97f8-fb5bb146aeb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:41:56 np0005593232 nova_compute[250269]: 2026-01-23 10:41:56.979 250273 DEBUG oslo_concurrency.processutils [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:41:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:41:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:57.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:41:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:41:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:57.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]: {
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:    "0": [
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:        {
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:            "devices": [
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:                "/dev/loop3"
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:            ],
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:            "lv_name": "ceph_lv0",
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:            "lv_size": "7511998464",
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:            "name": "ceph_lv0",
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:            "tags": {
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:                "ceph.cluster_name": "ceph",
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:                "ceph.crush_device_class": "",
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:                "ceph.encrypted": "0",
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:                "ceph.osd_id": "0",
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:                "ceph.type": "block",
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:                "ceph.vdo": "0"
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:            },
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:            "type": "block",
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:            "vg_name": "ceph_vg0"
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:        }
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]:    ]
Jan 23 05:41:57 np0005593232 romantic_tharp[388019]: }
Jan 23 05:41:57 np0005593232 systemd[1]: libpod-0a0b7fc75560eec1ac23e86f91ab89cbf68781e8123e3d68b1e0bf2495b98a0f.scope: Deactivated successfully.
Jan 23 05:41:57 np0005593232 podman[388003]: 2026-01-23 10:41:57.284096878 +0000 UTC m=+0.980743697 container died 0a0b7fc75560eec1ac23e86f91ab89cbf68781e8123e3d68b1e0bf2495b98a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_tharp, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 05:41:57 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b5e57f97b384cc0b68f4212043a699475aa4a3e72a9cb9f463789f56fac75eea-merged.mount: Deactivated successfully.
Jan 23 05:41:57 np0005593232 podman[388003]: 2026-01-23 10:41:57.347355286 +0000 UTC m=+1.044002115 container remove 0a0b7fc75560eec1ac23e86f91ab89cbf68781e8123e3d68b1e0bf2495b98a0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_tharp, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 05:41:57 np0005593232 systemd[1]: libpod-conmon-0a0b7fc75560eec1ac23e86f91ab89cbf68781e8123e3d68b1e0bf2495b98a0f.scope: Deactivated successfully.
Jan 23 05:41:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:41:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1501525944' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:41:57 np0005593232 nova_compute[250269]: 2026-01-23 10:41:57.451 250273 DEBUG oslo_concurrency.processutils [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:41:57 np0005593232 nova_compute[250269]: 2026-01-23 10:41:57.500 250273 DEBUG oslo_concurrency.processutils [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:41:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:41:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2626680456' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:41:57 np0005593232 nova_compute[250269]: 2026-01-23 10:41:57.959 250273 DEBUG oslo_concurrency.processutils [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:41:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:57.976 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '77'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:41:57 np0005593232 podman[388279]: 2026-01-23 10:41:57.993609995 +0000 UTC m=+0.050668711 container create f6e438a74874caed06e41d14dfc4693f09011c92d256666da7e204a5f4fd6f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_galois, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 05:41:57 np0005593232 nova_compute[250269]: 2026-01-23 10:41:57.994 250273 DEBUG nova.virt.libvirt.vif [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:40:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=198,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP3FfIOd2lnI+tPBfDtyl7+3bVUJP3jvoQEZS2+zpCm94FEzq78d4QEW/4ixP6N6S+NwXEvQperhCcfeORiYVMygQWeTqWJgqUherQ/1aiNrcs4OJRb36XBDXhjh6k5P/Q==',key_name='tempest-keypair-529522234',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:40:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6e762fca3b634c7aa1d994314c059c54',ramdisk_id='',reservation_id='r-1k48k1p2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-63035580',owner_user_name='tempest-AttachVolumeMultiAttachTest-63035580-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:41:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93cd560e84264023877c47122b5919de',uuid=f34f1af9-6c51-42ec-97f8-fb5bb146aeb6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b1ac782-1188-42b9-a89f-eb26c7876140", "address": "fa:16:3e:ee:36:e0", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "vif_mac": "fa:16:3e:ee:36:e0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b1ac782-11", "ovs_interfaceid": "3b1ac782-1188-42b9-a89f-eb26c7876140", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:41:57 np0005593232 nova_compute[250269]: 2026-01-23 10:41:57.994 250273 DEBUG nova.network.os_vif_util [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Converting VIF {"id": "3b1ac782-1188-42b9-a89f-eb26c7876140", "address": "fa:16:3e:ee:36:e0", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "vif_mac": "fa:16:3e:ee:36:e0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b1ac782-11", "ovs_interfaceid": "3b1ac782-1188-42b9-a89f-eb26c7876140", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:41:57 np0005593232 nova_compute[250269]: 2026-01-23 10:41:57.995 250273 DEBUG nova.network.os_vif_util [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:36:e0,bridge_name='br-int',has_traffic_filtering=True,id=3b1ac782-1188-42b9-a89f-eb26c7876140,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b1ac782-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.001 250273 DEBUG nova.virt.libvirt.driver [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:41:58 np0005593232 nova_compute[250269]:  <uuid>f34f1af9-6c51-42ec-97f8-fb5bb146aeb6</uuid>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:  <name>instance-000000c6</name>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:  <memory>196608</memory>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <nova:name>multiattach-server-1</nova:name>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:41:56</nova:creationTime>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.micro">
Jan 23 05:41:58 np0005593232 nova_compute[250269]:        <nova:memory>192</nova:memory>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:        <nova:user uuid="93cd560e84264023877c47122b5919de">tempest-AttachVolumeMultiAttachTest-63035580-project-member</nova:user>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:        <nova:project uuid="6e762fca3b634c7aa1d994314c059c54">tempest-AttachVolumeMultiAttachTest-63035580</nova:project>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:        <nova:port uuid="3b1ac782-1188-42b9-a89f-eb26c7876140">
Jan 23 05:41:58 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <entry name="serial">f34f1af9-6c51-42ec-97f8-fb5bb146aeb6</entry>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <entry name="uuid">f34f1af9-6c51-42ec-97f8-fb5bb146aeb6</entry>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/f34f1af9-6c51-42ec-97f8-fb5bb146aeb6_disk">
Jan 23 05:41:58 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:41:58 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/f34f1af9-6c51-42ec-97f8-fb5bb146aeb6_disk.config">
Jan 23 05:41:58 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:41:58 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="volumes/volume-4393c992-1666-40a0-ab11-4cc66bdcd721">
Jan 23 05:41:58 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:41:58 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <target dev="vdb" bus="virtio"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <serial>4393c992-1666-40a0-ab11-4cc66bdcd721</serial>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <shareable/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:ee:36:e0"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <target dev="tap3b1ac782-11"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/f34f1af9-6c51-42ec-97f8-fb5bb146aeb6/console.log" append="off"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:41:58 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:41:58 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:41:58 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:41:58 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.003 250273 DEBUG nova.virt.libvirt.vif [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:40:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=198,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP3FfIOd2lnI+tPBfDtyl7+3bVUJP3jvoQEZS2+zpCm94FEzq78d4QEW/4ixP6N6S+NwXEvQperhCcfeORiYVMygQWeTqWJgqUherQ/1aiNrcs4OJRb36XBDXhjh6k5P/Q==',key_name='tempest-keypair-529522234',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:40:16Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=MigrationContext,new_flavor=Flavor(2),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6e762fca3b634c7aa1d994314c059c54',ramdisk_id='',reservation_id='r-1k48k1p2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-AttachVolumeMultiAttachTest-63035580',owner_user_name='tempest-AttachVolumeMultiAttachTest-63035580-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:41:49Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93cd560e84264023877c47122b5919de',uuid=f34f1af9-6c51-42ec-97f8-fb5bb146aeb6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b1ac782-1188-42b9-a89f-eb26c7876140", "address": "fa:16:3e:ee:36:e0", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "vif_mac": "fa:16:3e:ee:36:e0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b1ac782-11", "ovs_interfaceid": "3b1ac782-1188-42b9-a89f-eb26c7876140", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.004 250273 DEBUG nova.network.os_vif_util [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Converting VIF {"id": "3b1ac782-1188-42b9-a89f-eb26c7876140", "address": "fa:16:3e:ee:36:e0", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "vif_mac": "fa:16:3e:ee:36:e0"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b1ac782-11", "ovs_interfaceid": "3b1ac782-1188-42b9-a89f-eb26c7876140", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.005 250273 DEBUG nova.network.os_vif_util [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:36:e0,bridge_name='br-int',has_traffic_filtering=True,id=3b1ac782-1188-42b9-a89f-eb26c7876140,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b1ac782-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.005 250273 DEBUG os_vif [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:36:e0,bridge_name='br-int',has_traffic_filtering=True,id=3b1ac782-1188-42b9-a89f-eb26c7876140,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b1ac782-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.006 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.006 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.007 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.013 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.014 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b1ac782-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.014 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3b1ac782-11, col_values=(('external_ids', {'iface-id': '3b1ac782-1188-42b9-a89f-eb26c7876140', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:36:e0', 'vm-uuid': 'f34f1af9-6c51-42ec-97f8-fb5bb146aeb6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.017 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:58 np0005593232 NetworkManager[49057]: <info>  [1769164918.0186] manager: (tap3b1ac782-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/370)
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.020 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.024 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.025 250273 INFO os_vif [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:36:e0,bridge_name='br-int',has_traffic_filtering=True,id=3b1ac782-1188-42b9-a89f-eb26c7876140,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b1ac782-11')#033[00m
Jan 23 05:41:58 np0005593232 systemd[1]: Started libpod-conmon-f6e438a74874caed06e41d14dfc4693f09011c92d256666da7e204a5f4fd6f74.scope.
Jan 23 05:41:58 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:41:58 np0005593232 podman[388279]: 2026-01-23 10:41:58.064236542 +0000 UTC m=+0.121295248 container init f6e438a74874caed06e41d14dfc4693f09011c92d256666da7e204a5f4fd6f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:41:58 np0005593232 podman[388279]: 2026-01-23 10:41:57.969754507 +0000 UTC m=+0.026813253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:41:58 np0005593232 podman[388279]: 2026-01-23 10:41:58.070818959 +0000 UTC m=+0.127877665 container start f6e438a74874caed06e41d14dfc4693f09011c92d256666da7e204a5f4fd6f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_galois, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:41:58 np0005593232 podman[388279]: 2026-01-23 10:41:58.07578264 +0000 UTC m=+0.132841346 container attach f6e438a74874caed06e41d14dfc4693f09011c92d256666da7e204a5f4fd6f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:41:58 np0005593232 goofy_galois[388299]: 167 167
Jan 23 05:41:58 np0005593232 systemd[1]: libpod-f6e438a74874caed06e41d14dfc4693f09011c92d256666da7e204a5f4fd6f74.scope: Deactivated successfully.
Jan 23 05:41:58 np0005593232 podman[388279]: 2026-01-23 10:41:58.076575143 +0000 UTC m=+0.133633849 container died f6e438a74874caed06e41d14dfc4693f09011c92d256666da7e204a5f4fd6f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_galois, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.075 250273 DEBUG nova.virt.libvirt.driver [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.077 250273 DEBUG nova.virt.libvirt.driver [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.077 250273 DEBUG nova.virt.libvirt.driver [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.077 250273 DEBUG nova.virt.libvirt.driver [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] No VIF found with MAC fa:16:3e:ee:36:e0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.078 250273 INFO nova.virt.libvirt.driver [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Using config drive#033[00m
Jan 23 05:41:58 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d7d93bc82ece2bbce9810d1c79cb1f3f126a66a42ec44663feee0ef10fbe0edc-merged.mount: Deactivated successfully.
Jan 23 05:41:58 np0005593232 podman[388279]: 2026-01-23 10:41:58.116985331 +0000 UTC m=+0.174044037 container remove f6e438a74874caed06e41d14dfc4693f09011c92d256666da7e204a5f4fd6f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 05:41:58 np0005593232 systemd[1]: libpod-conmon-f6e438a74874caed06e41d14dfc4693f09011c92d256666da7e204a5f4fd6f74.scope: Deactivated successfully.
Jan 23 05:41:58 np0005593232 kernel: tap3b1ac782-11: entered promiscuous mode
Jan 23 05:41:58 np0005593232 NetworkManager[49057]: <info>  [1769164918.1899] manager: (tap3b1ac782-11): new Tun device (/org/freedesktop/NetworkManager/Devices/371)
Jan 23 05:41:58 np0005593232 ovn_controller[151001]: 2026-01-23T10:41:58Z|00785|binding|INFO|Claiming lport 3b1ac782-1188-42b9-a89f-eb26c7876140 for this chassis.
Jan 23 05:41:58 np0005593232 ovn_controller[151001]: 2026-01-23T10:41:58Z|00786|binding|INFO|3b1ac782-1188-42b9-a89f-eb26c7876140: Claiming fa:16:3e:ee:36:e0 10.100.0.4
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.236 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:58.243 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:36:e0 10.100.0.4'], port_security=['fa:16:3e:ee:36:e0 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f34f1af9-6c51-42ec-97f8-fb5bb146aeb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6e762fca3b634c7aa1d994314c059c54', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'ed138636-f650-4a09-b808-0b05f9067a5a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0936335-b706-4400-8411-bdd084c8cdf7, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=3b1ac782-1188-42b9-a89f-eb26c7876140) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:41:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:58.244 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 3b1ac782-1188-42b9-a89f-eb26c7876140 in datapath fba2ba4a-d82c-4f8b-9754-c13fbec41a04 bound to our chassis#033[00m
Jan 23 05:41:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:58.245 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fba2ba4a-d82c-4f8b-9754-c13fbec41a04#033[00m
Jan 23 05:41:58 np0005593232 ovn_controller[151001]: 2026-01-23T10:41:58Z|00787|binding|INFO|Setting lport 3b1ac782-1188-42b9-a89f-eb26c7876140 ovn-installed in OVS
Jan 23 05:41:58 np0005593232 ovn_controller[151001]: 2026-01-23T10:41:58Z|00788|binding|INFO|Setting lport 3b1ac782-1188-42b9-a89f-eb26c7876140 up in Southbound
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.260 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.262 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:58 np0005593232 systemd-machined[215836]: New machine qemu-89-instance-000000c6.
Jan 23 05:41:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:58.268 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8c4e90c2-0972-4f9a-a31b-cebb619f8a73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:58 np0005593232 systemd[1]: Started Virtual Machine qemu-89-instance-000000c6.
Jan 23 05:41:58 np0005593232 systemd-udevd[388361]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:41:58 np0005593232 NetworkManager[49057]: <info>  [1769164918.2995] device (tap3b1ac782-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:41:58 np0005593232 NetworkManager[49057]: <info>  [1769164918.3003] device (tap3b1ac782-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:41:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:58.300 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[5e93c1a6-9133-42e6-a9d2-9356901ddbc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:58.304 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee49b22-14fa-4aa6-bfb0-c76b9c182943]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:58 np0005593232 podman[388353]: 2026-01-23 10:41:58.332504007 +0000 UTC m=+0.052227705 container create a67fd0e28ebad7f10a7a794583d9f223379c227c0584017f212259c0286e1001 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 05:41:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:58.332 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[fccd665e-18fa-4de1-8e54-cd15827fd6df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3447: 321 pgs: 321 active+clean; 566 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 15 KiB/s wr, 83 op/s
Jan 23 05:41:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:58.350 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3e67b22c-5162-4cb2-b2a8-b87f00fbc930]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfba2ba4a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:db:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 236], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 877546, 'reachable_time': 25719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388379, 'error': None, 'target': 'ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:58.364 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9da92e92-e0d8-49f8-9470-e94fef23f893]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfba2ba4a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 877561, 'tstamp': 877561}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388381, 'error': None, 'target': 'ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfba2ba4a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 877566, 'tstamp': 877566}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388381, 'error': None, 'target': 'ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:41:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:58.369 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfba2ba4a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.371 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:58 np0005593232 nova_compute[250269]: 2026-01-23 10:41:58.372 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:41:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:58.373 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfba2ba4a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:41:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:58.373 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:41:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:58.374 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfba2ba4a-d0, col_values=(('external_ids', {'iface-id': '2348ddba-3dc3-4456-a637-f3065ba0d8f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:41:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:41:58.375 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:41:58 np0005593232 systemd[1]: Started libpod-conmon-a67fd0e28ebad7f10a7a794583d9f223379c227c0584017f212259c0286e1001.scope.
Jan 23 05:41:58 np0005593232 podman[388353]: 2026-01-23 10:41:58.305977133 +0000 UTC m=+0.025700881 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:41:58 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:41:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d838ea309ac275db9f62bbaeab73a6e5d9a76cec43e731ae79322a218a121291/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:41:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d838ea309ac275db9f62bbaeab73a6e5d9a76cec43e731ae79322a218a121291/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:41:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d838ea309ac275db9f62bbaeab73a6e5d9a76cec43e731ae79322a218a121291/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:41:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d838ea309ac275db9f62bbaeab73a6e5d9a76cec43e731ae79322a218a121291/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:41:58 np0005593232 podman[388353]: 2026-01-23 10:41:58.466705782 +0000 UTC m=+0.186429500 container init a67fd0e28ebad7f10a7a794583d9f223379c227c0584017f212259c0286e1001 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:41:58 np0005593232 podman[388353]: 2026-01-23 10:41:58.480866014 +0000 UTC m=+0.200589722 container start a67fd0e28ebad7f10a7a794583d9f223379c227c0584017f212259c0286e1001 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bardeen, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 23 05:41:58 np0005593232 podman[388353]: 2026-01-23 10:41:58.485048143 +0000 UTC m=+0.204771861 container attach a67fd0e28ebad7f10a7a794583d9f223379c227c0584017f212259c0286e1001 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bardeen, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:41:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:41:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:41:59.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:41:59 np0005593232 nova_compute[250269]: 2026-01-23 10:41:59.103 250273 DEBUG nova.compute.manager [req-c986ae52-e071-4c92-b953-cbd564e39362 req-f2a225af-dc66-46bf-9e58-5df1e2e32335 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Received event network-vif-plugged-3b1ac782-1188-42b9-a89f-eb26c7876140 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:41:59 np0005593232 nova_compute[250269]: 2026-01-23 10:41:59.105 250273 DEBUG oslo_concurrency.lockutils [req-c986ae52-e071-4c92-b953-cbd564e39362 req-f2a225af-dc66-46bf-9e58-5df1e2e32335 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:41:59 np0005593232 nova_compute[250269]: 2026-01-23 10:41:59.105 250273 DEBUG oslo_concurrency.lockutils [req-c986ae52-e071-4c92-b953-cbd564e39362 req-f2a225af-dc66-46bf-9e58-5df1e2e32335 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:41:59 np0005593232 nova_compute[250269]: 2026-01-23 10:41:59.105 250273 DEBUG oslo_concurrency.lockutils [req-c986ae52-e071-4c92-b953-cbd564e39362 req-f2a225af-dc66-46bf-9e58-5df1e2e32335 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:41:59 np0005593232 nova_compute[250269]: 2026-01-23 10:41:59.105 250273 DEBUG nova.compute.manager [req-c986ae52-e071-4c92-b953-cbd564e39362 req-f2a225af-dc66-46bf-9e58-5df1e2e32335 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] No waiting events found dispatching network-vif-plugged-3b1ac782-1188-42b9-a89f-eb26c7876140 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:41:59 np0005593232 nova_compute[250269]: 2026-01-23 10:41:59.106 250273 WARNING nova.compute.manager [req-c986ae52-e071-4c92-b953-cbd564e39362 req-f2a225af-dc66-46bf-9e58-5df1e2e32335 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Received unexpected event network-vif-plugged-3b1ac782-1188-42b9-a89f-eb26c7876140 for instance with vm_state active and task_state resize_finish.#033[00m
Jan 23 05:41:59 np0005593232 nova_compute[250269]: 2026-01-23 10:41:59.125 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164919.1252112, f34f1af9-6c51-42ec-97f8-fb5bb146aeb6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:41:59 np0005593232 nova_compute[250269]: 2026-01-23 10:41:59.126 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:41:59 np0005593232 nova_compute[250269]: 2026-01-23 10:41:59.129 250273 DEBUG nova.compute.manager [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:41:59 np0005593232 nova_compute[250269]: 2026-01-23 10:41:59.134 250273 INFO nova.virt.libvirt.driver [-] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Instance running successfully.#033[00m
Jan 23 05:41:59 np0005593232 virtqemud[249592]: argument unsupported: QEMU guest agent is not configured
Jan 23 05:41:59 np0005593232 nova_compute[250269]: 2026-01-23 10:41:59.138 250273 DEBUG nova.virt.libvirt.guest [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 23 05:41:59 np0005593232 nova_compute[250269]: 2026-01-23 10:41:59.138 250273 DEBUG nova.virt.libvirt.driver [None req-0e0edcb4-dfa1-45aa-97de-307869ae07c0 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Jan 23 05:41:59 np0005593232 nova_compute[250269]: 2026-01-23 10:41:59.149 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:41:59 np0005593232 nova_compute[250269]: 2026-01-23 10:41:59.155 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:41:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:41:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:41:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:41:59.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:41:59 np0005593232 nova_compute[250269]: 2026-01-23 10:41:59.183 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 23 05:41:59 np0005593232 nova_compute[250269]: 2026-01-23 10:41:59.184 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769164919.1285324, f34f1af9-6c51-42ec-97f8-fb5bb146aeb6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:41:59 np0005593232 nova_compute[250269]: 2026-01-23 10:41:59.184 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] VM Started (Lifecycle Event)#033[00m
Jan 23 05:41:59 np0005593232 nova_compute[250269]: 2026-01-23 10:41:59.210 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:41:59 np0005593232 nova_compute[250269]: 2026-01-23 10:41:59.214 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:41:59 np0005593232 nova_compute[250269]: 2026-01-23 10:41:59.253 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 23 05:41:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:41:59 np0005593232 angry_bardeen[388384]: {
Jan 23 05:41:59 np0005593232 angry_bardeen[388384]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:41:59 np0005593232 angry_bardeen[388384]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:41:59 np0005593232 angry_bardeen[388384]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:41:59 np0005593232 angry_bardeen[388384]:        "osd_id": 0,
Jan 23 05:41:59 np0005593232 angry_bardeen[388384]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:41:59 np0005593232 angry_bardeen[388384]:        "type": "bluestore"
Jan 23 05:41:59 np0005593232 angry_bardeen[388384]:    }
Jan 23 05:41:59 np0005593232 angry_bardeen[388384]: }
Jan 23 05:41:59 np0005593232 systemd[1]: libpod-a67fd0e28ebad7f10a7a794583d9f223379c227c0584017f212259c0286e1001.scope: Deactivated successfully.
Jan 23 05:41:59 np0005593232 podman[388353]: 2026-01-23 10:41:59.44645283 +0000 UTC m=+1.166176568 container died a67fd0e28ebad7f10a7a794583d9f223379c227c0584017f212259c0286e1001 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Jan 23 05:41:59 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d838ea309ac275db9f62bbaeab73a6e5d9a76cec43e731ae79322a218a121291-merged.mount: Deactivated successfully.
Jan 23 05:41:59 np0005593232 podman[388353]: 2026-01-23 10:41:59.537030454 +0000 UTC m=+1.256754182 container remove a67fd0e28ebad7f10a7a794583d9f223379c227c0584017f212259c0286e1001 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 05:41:59 np0005593232 systemd[1]: libpod-conmon-a67fd0e28ebad7f10a7a794583d9f223379c227c0584017f212259c0286e1001.scope: Deactivated successfully.
Jan 23 05:41:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:41:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:41:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:41:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:41:59 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 32e5c101-6415-4995-805b-ba53c2e06bb9 does not exist
Jan 23 05:41:59 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ec08f978-cea6-43e5-b643-56354aeba3ed does not exist
Jan 23 05:41:59 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 17de6820-889f-4826-a72d-4b560b2780e3 does not exist
Jan 23 05:42:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3448: 321 pgs: 321 active+clean; 566 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 15 KiB/s wr, 83 op/s
Jan 23 05:42:00 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:42:00 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:42:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:42:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:01.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:42:01 np0005593232 nova_compute[250269]: 2026-01-23 10:42:01.093 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:42:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:01.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:42:01 np0005593232 nova_compute[250269]: 2026-01-23 10:42:01.188 250273 DEBUG nova.compute.manager [req-e83bee79-cc1c-4238-bcdd-1525aca5bb09 req-ecbf1275-c2e0-4dc0-90df-5a510f0df801 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Received event network-vif-plugged-3b1ac782-1188-42b9-a89f-eb26c7876140 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:42:01 np0005593232 nova_compute[250269]: 2026-01-23 10:42:01.189 250273 DEBUG oslo_concurrency.lockutils [req-e83bee79-cc1c-4238-bcdd-1525aca5bb09 req-ecbf1275-c2e0-4dc0-90df-5a510f0df801 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:42:01 np0005593232 nova_compute[250269]: 2026-01-23 10:42:01.189 250273 DEBUG oslo_concurrency.lockutils [req-e83bee79-cc1c-4238-bcdd-1525aca5bb09 req-ecbf1275-c2e0-4dc0-90df-5a510f0df801 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:42:01 np0005593232 nova_compute[250269]: 2026-01-23 10:42:01.190 250273 DEBUG oslo_concurrency.lockutils [req-e83bee79-cc1c-4238-bcdd-1525aca5bb09 req-ecbf1275-c2e0-4dc0-90df-5a510f0df801 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:42:01 np0005593232 nova_compute[250269]: 2026-01-23 10:42:01.190 250273 DEBUG nova.compute.manager [req-e83bee79-cc1c-4238-bcdd-1525aca5bb09 req-ecbf1275-c2e0-4dc0-90df-5a510f0df801 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] No waiting events found dispatching network-vif-plugged-3b1ac782-1188-42b9-a89f-eb26c7876140 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:42:01 np0005593232 nova_compute[250269]: 2026-01-23 10:42:01.190 250273 WARNING nova.compute.manager [req-e83bee79-cc1c-4238-bcdd-1525aca5bb09 req-ecbf1275-c2e0-4dc0-90df-5a510f0df801 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Received unexpected event network-vif-plugged-3b1ac782-1188-42b9-a89f-eb26c7876140 for instance with vm_state resized and task_state None.#033[00m
Jan 23 05:42:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3449: 321 pgs: 321 active+clean; 566 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 16 KiB/s wr, 104 op/s
Jan 23 05:42:03 np0005593232 nova_compute[250269]: 2026-01-23 10:42:03.020 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:03.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:42:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:03.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:42:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3450: 321 pgs: 321 active+clean; 566 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.6 MiB/s rd, 16 KiB/s wr, 214 op/s
Jan 23 05:42:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:42:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:05.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:42:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:05.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:42:06 np0005593232 nova_compute[250269]: 2026-01-23 10:42:06.095 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3451: 321 pgs: 321 active+clean; 566 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 14 KiB/s wr, 184 op/s
Jan 23 05:42:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:07.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:07.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:42:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:42:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:42:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:42:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:42:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:42:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e386 do_prune osdmap full prune enabled
Jan 23 05:42:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e387 e387: 3 total, 3 up, 3 in
Jan 23 05:42:07 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e387: 3 total, 3 up, 3 in
Jan 23 05:42:08 np0005593232 nova_compute[250269]: 2026-01-23 10:42:08.115 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3453: 321 pgs: 321 active+clean; 566 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.4 KiB/s wr, 171 op/s
Jan 23 05:42:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:09.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:09.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:42:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3454: 321 pgs: 321 active+clean; 566 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 2.4 KiB/s wr, 171 op/s
Jan 23 05:42:10 np0005593232 podman[388582]: 2026-01-23 10:42:10.537200736 +0000 UTC m=+0.177495837 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 23 05:42:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:42:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:11.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:42:11 np0005593232 nova_compute[250269]: 2026-01-23 10:42:11.098 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:11.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3455: 321 pgs: 321 active+clean; 544 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.8 MiB/s rd, 2.5 KiB/s wr, 169 op/s
Jan 23 05:42:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:13.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:13 np0005593232 nova_compute[250269]: 2026-01-23 10:42:13.154 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:13.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:13 np0005593232 ovn_controller[151001]: 2026-01-23T10:42:13Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ee:36:e0 10.100.0.4
Jan 23 05:42:13 np0005593232 nova_compute[250269]: 2026-01-23 10:42:13.489 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:42:13 np0005593232 nova_compute[250269]: 2026-01-23 10:42:13.490 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:42:13 np0005593232 nova_compute[250269]: 2026-01-23 10:42:13.490 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:42:13 np0005593232 nova_compute[250269]: 2026-01-23 10:42:13.491 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:42:13 np0005593232 nova_compute[250269]: 2026-01-23 10:42:13.491 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:42:13 np0005593232 nova_compute[250269]: 2026-01-23 10:42:13.491 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:42:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3456: 321 pgs: 321 active+clean; 519 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 303 KiB/s rd, 17 KiB/s wr, 82 op/s
Jan 23 05:42:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:42:14 np0005593232 podman[388611]: 2026-01-23 10:42:14.417675953 +0000 UTC m=+0.082628409 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 23 05:42:14 np0005593232 nova_compute[250269]: 2026-01-23 10:42:14.871 250273 DEBUG nova.compute.manager [req-21b33a1b-c9ce-4f28-85dc-2e4f4e0cb1b1 req-38f26dba-59b0-4773-aec1-4dadc00f3893 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Received event network-changed-456d51f3-4b45-4e54-acee-c50facafcd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:42:14 np0005593232 nova_compute[250269]: 2026-01-23 10:42:14.872 250273 DEBUG nova.compute.manager [req-21b33a1b-c9ce-4f28-85dc-2e4f4e0cb1b1 req-38f26dba-59b0-4773-aec1-4dadc00f3893 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Refreshing instance network info cache due to event network-changed-456d51f3-4b45-4e54-acee-c50facafcd50. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:42:14 np0005593232 nova_compute[250269]: 2026-01-23 10:42:14.872 250273 DEBUG oslo_concurrency.lockutils [req-21b33a1b-c9ce-4f28-85dc-2e4f4e0cb1b1 req-38f26dba-59b0-4773-aec1-4dadc00f3893 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:42:14 np0005593232 nova_compute[250269]: 2026-01-23 10:42:14.872 250273 DEBUG oslo_concurrency.lockutils [req-21b33a1b-c9ce-4f28-85dc-2e4f4e0cb1b1 req-38f26dba-59b0-4773-aec1-4dadc00f3893 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:42:14 np0005593232 nova_compute[250269]: 2026-01-23 10:42:14.872 250273 DEBUG nova.network.neutron [req-21b33a1b-c9ce-4f28-85dc-2e4f4e0cb1b1 req-38f26dba-59b0-4773-aec1-4dadc00f3893 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Refreshing network info cache for port 456d51f3-4b45-4e54-acee-c50facafcd50 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:42:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:15.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:15.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:16 np0005593232 nova_compute[250269]: 2026-01-23 10:42:16.157 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3457: 321 pgs: 321 active+clean; 519 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 303 KiB/s rd, 17 KiB/s wr, 82 op/s
Jan 23 05:42:16 np0005593232 nova_compute[250269]: 2026-01-23 10:42:16.969 250273 DEBUG nova.compute.manager [req-5891daa7-8b0e-4ca6-b1f7-bf1bb80802fb req-5ffc7862-275e-4a34-a8eb-1076ba9dbec7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Received event network-changed-3b1ac782-1188-42b9-a89f-eb26c7876140 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:42:16 np0005593232 nova_compute[250269]: 2026-01-23 10:42:16.970 250273 DEBUG nova.compute.manager [req-5891daa7-8b0e-4ca6-b1f7-bf1bb80802fb req-5ffc7862-275e-4a34-a8eb-1076ba9dbec7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Refreshing instance network info cache due to event network-changed-3b1ac782-1188-42b9-a89f-eb26c7876140. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:42:16 np0005593232 nova_compute[250269]: 2026-01-23 10:42:16.970 250273 DEBUG oslo_concurrency.lockutils [req-5891daa7-8b0e-4ca6-b1f7-bf1bb80802fb req-5ffc7862-275e-4a34-a8eb-1076ba9dbec7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-f34f1af9-6c51-42ec-97f8-fb5bb146aeb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:42:16 np0005593232 nova_compute[250269]: 2026-01-23 10:42:16.971 250273 DEBUG oslo_concurrency.lockutils [req-5891daa7-8b0e-4ca6-b1f7-bf1bb80802fb req-5ffc7862-275e-4a34-a8eb-1076ba9dbec7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-f34f1af9-6c51-42ec-97f8-fb5bb146aeb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:42:16 np0005593232 nova_compute[250269]: 2026-01-23 10:42:16.971 250273 DEBUG nova.network.neutron [req-5891daa7-8b0e-4ca6-b1f7-bf1bb80802fb req-5ffc7862-275e-4a34-a8eb-1076ba9dbec7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Refreshing network info cache for port 3b1ac782-1188-42b9-a89f-eb26c7876140 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:42:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:42:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:17.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:42:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:42:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:17.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:42:17 np0005593232 nova_compute[250269]: 2026-01-23 10:42:17.222 250273 DEBUG nova.network.neutron [req-21b33a1b-c9ce-4f28-85dc-2e4f4e0cb1b1 req-38f26dba-59b0-4773-aec1-4dadc00f3893 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Updated VIF entry in instance network info cache for port 456d51f3-4b45-4e54-acee-c50facafcd50. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:42:17 np0005593232 nova_compute[250269]: 2026-01-23 10:42:17.222 250273 DEBUG nova.network.neutron [req-21b33a1b-c9ce-4f28-85dc-2e4f4e0cb1b1 req-38f26dba-59b0-4773-aec1-4dadc00f3893 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Updating instance_info_cache with network_info: [{"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:42:17 np0005593232 nova_compute[250269]: 2026-01-23 10:42:17.256 250273 DEBUG oslo_concurrency.lockutils [req-21b33a1b-c9ce-4f28-85dc-2e4f4e0cb1b1 req-38f26dba-59b0-4773-aec1-4dadc00f3893 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:42:18 np0005593232 nova_compute[250269]: 2026-01-23 10:42:18.157 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:18 np0005593232 nova_compute[250269]: 2026-01-23 10:42:18.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:42:18 np0005593232 nova_compute[250269]: 2026-01-23 10:42:18.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:42:18 np0005593232 nova_compute[250269]: 2026-01-23 10:42:18.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:42:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3458: 321 pgs: 321 active+clean; 519 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.5 MiB/s rd, 21 KiB/s wr, 82 op/s
Jan 23 05:42:18 np0005593232 nova_compute[250269]: 2026-01-23 10:42:18.586 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:42:18 np0005593232 nova_compute[250269]: 2026-01-23 10:42:18.587 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:42:18 np0005593232 nova_compute[250269]: 2026-01-23 10:42:18.587 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:42:18 np0005593232 nova_compute[250269]: 2026-01-23 10:42:18.587 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0888913c-71a6-45fe-97bf-9dddd2b7b521 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:42:19 np0005593232 nova_compute[250269]: 2026-01-23 10:42:19.122 250273 DEBUG nova.network.neutron [req-5891daa7-8b0e-4ca6-b1f7-bf1bb80802fb req-5ffc7862-275e-4a34-a8eb-1076ba9dbec7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Updated VIF entry in instance network info cache for port 3b1ac782-1188-42b9-a89f-eb26c7876140. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:42:19 np0005593232 nova_compute[250269]: 2026-01-23 10:42:19.123 250273 DEBUG nova.network.neutron [req-5891daa7-8b0e-4ca6-b1f7-bf1bb80802fb req-5ffc7862-275e-4a34-a8eb-1076ba9dbec7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Updating instance_info_cache with network_info: [{"id": "3b1ac782-1188-42b9-a89f-eb26c7876140", "address": "fa:16:3e:ee:36:e0", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b1ac782-11", "ovs_interfaceid": "3b1ac782-1188-42b9-a89f-eb26c7876140", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:42:19 np0005593232 nova_compute[250269]: 2026-01-23 10:42:19.149 250273 DEBUG oslo_concurrency.lockutils [req-5891daa7-8b0e-4ca6-b1f7-bf1bb80802fb req-5ffc7862-275e-4a34-a8eb-1076ba9dbec7 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-f34f1af9-6c51-42ec-97f8-fb5bb146aeb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:42:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:19.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:19.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:42:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e387 do_prune osdmap full prune enabled
Jan 23 05:42:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e388 e388: 3 total, 3 up, 3 in
Jan 23 05:42:19 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e388: 3 total, 3 up, 3 in
Jan 23 05:42:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3460: 321 pgs: 321 active+clean; 519 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 23 KiB/s wr, 87 op/s
Jan 23 05:42:20 np0005593232 nova_compute[250269]: 2026-01-23 10:42:20.354 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Updating instance_info_cache with network_info: [{"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:42:20 np0005593232 nova_compute[250269]: 2026-01-23 10:42:20.378 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-0888913c-71a6-45fe-97bf-9dddd2b7b521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:42:20 np0005593232 nova_compute[250269]: 2026-01-23 10:42:20.379 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:42:20 np0005593232 nova_compute[250269]: 2026-01-23 10:42:20.379 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:42:20 np0005593232 nova_compute[250269]: 2026-01-23 10:42:20.379 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:42:21 np0005593232 nova_compute[250269]: 2026-01-23 10:42:21.154 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:42:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:21.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:42:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:42:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:21.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:42:22 np0005593232 nova_compute[250269]: 2026-01-23 10:42:22.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:42:22 np0005593232 nova_compute[250269]: 2026-01-23 10:42:22.322 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:42:22 np0005593232 nova_compute[250269]: 2026-01-23 10:42:22.322 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:42:22 np0005593232 nova_compute[250269]: 2026-01-23 10:42:22.323 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:42:22 np0005593232 nova_compute[250269]: 2026-01-23 10:42:22.323 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:42:22 np0005593232 nova_compute[250269]: 2026-01-23 10:42:22.323 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:42:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3461: 321 pgs: 321 active+clean; 537 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 230 KiB/s wr, 87 op/s
Jan 23 05:42:22 np0005593232 nova_compute[250269]: 2026-01-23 10:42:22.693 250273 DEBUG oslo_concurrency.lockutils [None req-471ab53e-9330-4339-93e7-850e7eaf4186 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "0888913c-71a6-45fe-97bf-9dddd2b7b521" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:42:22 np0005593232 nova_compute[250269]: 2026-01-23 10:42:22.693 250273 DEBUG oslo_concurrency.lockutils [None req-471ab53e-9330-4339-93e7-850e7eaf4186 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:42:22 np0005593232 nova_compute[250269]: 2026-01-23 10:42:22.730 250273 INFO nova.compute.manager [None req-471ab53e-9330-4339-93e7-850e7eaf4186 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Detaching volume 4393c992-1666-40a0-ab11-4cc66bdcd721#033[00m
Jan 23 05:42:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:42:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3741217946' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:42:22 np0005593232 nova_compute[250269]: 2026-01-23 10:42:22.762 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:42:22 np0005593232 nova_compute[250269]: 2026-01-23 10:42:22.847 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:42:22 np0005593232 nova_compute[250269]: 2026-01-23 10:42:22.848 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:42:22 np0005593232 nova_compute[250269]: 2026-01-23 10:42:22.848 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:42:22 np0005593232 nova_compute[250269]: 2026-01-23 10:42:22.852 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000c6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:42:22 np0005593232 nova_compute[250269]: 2026-01-23 10:42:22.852 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000c6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:42:22 np0005593232 nova_compute[250269]: 2026-01-23 10:42:22.852 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000c6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:42:22 np0005593232 nova_compute[250269]: 2026-01-23 10:42:22.907 250273 INFO nova.virt.block_device [None req-471ab53e-9330-4339-93e7-850e7eaf4186 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Attempting to driver detach volume 4393c992-1666-40a0-ab11-4cc66bdcd721 from mountpoint /dev/vdb#033[00m
Jan 23 05:42:22 np0005593232 nova_compute[250269]: 2026-01-23 10:42:22.917 250273 DEBUG nova.virt.libvirt.driver [None req-471ab53e-9330-4339-93e7-850e7eaf4186 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Attempting to detach device vdb from instance 0888913c-71a6-45fe-97bf-9dddd2b7b521 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 23 05:42:22 np0005593232 nova_compute[250269]: 2026-01-23 10:42:22.918 250273 DEBUG nova.virt.libvirt.guest [None req-471ab53e-9330-4339-93e7-850e7eaf4186 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] detach device xml: <disk type="network" device="disk">
Jan 23 05:42:22 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:42:22 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-4393c992-1666-40a0-ab11-4cc66bdcd721">
Jan 23 05:42:22 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:42:22 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:42:22 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:42:22 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:42:22 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 05:42:22 np0005593232 nova_compute[250269]:  <serial>4393c992-1666-40a0-ab11-4cc66bdcd721</serial>
Jan 23 05:42:22 np0005593232 nova_compute[250269]:  <shareable/>
Jan 23 05:42:22 np0005593232 nova_compute[250269]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 23 05:42:22 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:42:22 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 05:42:22 np0005593232 nova_compute[250269]: 2026-01-23 10:42:22.926 250273 INFO nova.virt.libvirt.driver [None req-471ab53e-9330-4339-93e7-850e7eaf4186 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Successfully detached device vdb from instance 0888913c-71a6-45fe-97bf-9dddd2b7b521 from the persistent domain config.#033[00m
Jan 23 05:42:22 np0005593232 nova_compute[250269]: 2026-01-23 10:42:22.926 250273 DEBUG nova.virt.libvirt.driver [None req-471ab53e-9330-4339-93e7-850e7eaf4186 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 0888913c-71a6-45fe-97bf-9dddd2b7b521 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 23 05:42:22 np0005593232 nova_compute[250269]: 2026-01-23 10:42:22.927 250273 DEBUG nova.virt.libvirt.guest [None req-471ab53e-9330-4339-93e7-850e7eaf4186 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] detach device xml: <disk type="network" device="disk">
Jan 23 05:42:22 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:42:22 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-4393c992-1666-40a0-ab11-4cc66bdcd721">
Jan 23 05:42:22 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:42:22 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:42:22 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:42:22 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:42:22 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 05:42:22 np0005593232 nova_compute[250269]:  <serial>4393c992-1666-40a0-ab11-4cc66bdcd721</serial>
Jan 23 05:42:22 np0005593232 nova_compute[250269]:  <shareable/>
Jan 23 05:42:22 np0005593232 nova_compute[250269]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 23 05:42:22 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:42:22 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 05:42:23 np0005593232 nova_compute[250269]: 2026-01-23 10:42:23.037 250273 DEBUG nova.virt.libvirt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Received event <DeviceRemovedEvent: 1769164943.036889, 0888913c-71a6-45fe-97bf-9dddd2b7b521 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 23 05:42:23 np0005593232 nova_compute[250269]: 2026-01-23 10:42:23.038 250273 DEBUG nova.virt.libvirt.driver [None req-471ab53e-9330-4339-93e7-850e7eaf4186 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 0888913c-71a6-45fe-97bf-9dddd2b7b521 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 23 05:42:23 np0005593232 nova_compute[250269]: 2026-01-23 10:42:23.040 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:42:23 np0005593232 nova_compute[250269]: 2026-01-23 10:42:23.041 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3681MB free_disk=20.805667877197266GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:42:23 np0005593232 nova_compute[250269]: 2026-01-23 10:42:23.041 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:42:23 np0005593232 nova_compute[250269]: 2026-01-23 10:42:23.041 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:42:23 np0005593232 nova_compute[250269]: 2026-01-23 10:42:23.043 250273 INFO nova.virt.libvirt.driver [None req-471ab53e-9330-4339-93e7-850e7eaf4186 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Successfully detached device vdb from instance 0888913c-71a6-45fe-97bf-9dddd2b7b521 from the live domain config.#033[00m
Jan 23 05:42:23 np0005593232 nova_compute[250269]: 2026-01-23 10:42:23.137 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 0888913c-71a6-45fe-97bf-9dddd2b7b521 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:42:23 np0005593232 nova_compute[250269]: 2026-01-23 10:42:23.137 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance f34f1af9-6c51-42ec-97f8-fb5bb146aeb6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:42:23 np0005593232 nova_compute[250269]: 2026-01-23 10:42:23.137 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:42:23 np0005593232 nova_compute[250269]: 2026-01-23 10:42:23.137 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:42:23 np0005593232 nova_compute[250269]: 2026-01-23 10:42:23.159 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:23.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:23.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:23 np0005593232 nova_compute[250269]: 2026-01-23 10:42:23.220 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:42:23 np0005593232 nova_compute[250269]: 2026-01-23 10:42:23.278 250273 INFO nova.virt.libvirt.driver [None req-471ab53e-9330-4339-93e7-850e7eaf4186 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Detected multiple connections on this host for volume: 4393c992-1666-40a0-ab11-4cc66bdcd721, skipping target disconnect.#033[00m
Jan 23 05:42:23 np0005593232 nova_compute[250269]: 2026-01-23 10:42:23.630 250273 DEBUG nova.objects.instance [None req-471ab53e-9330-4339-93e7-850e7eaf4186 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lazy-loading 'flavor' on Instance uuid 0888913c-71a6-45fe-97bf-9dddd2b7b521 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:42:23 np0005593232 nova_compute[250269]: 2026-01-23 10:42:23.665 250273 DEBUG oslo_concurrency.lockutils [None req-471ab53e-9330-4339-93e7-850e7eaf4186 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:42:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:42:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3533574599' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:42:23 np0005593232 nova_compute[250269]: 2026-01-23 10:42:23.689 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:42:23 np0005593232 nova_compute[250269]: 2026-01-23 10:42:23.695 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:42:23 np0005593232 nova_compute[250269]: 2026-01-23 10:42:23.725 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:42:23 np0005593232 nova_compute[250269]: 2026-01-23 10:42:23.755 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:42:23 np0005593232 nova_compute[250269]: 2026-01-23 10:42:23.755 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:42:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3462: 321 pgs: 321 active+clean; 567 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 85 op/s
Jan 23 05:42:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:42:24 np0005593232 nova_compute[250269]: 2026-01-23 10:42:24.518 250273 DEBUG oslo_concurrency.lockutils [None req-01c634d6-4511-4c23-ac8d-4ae36fdfd240 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:42:24 np0005593232 nova_compute[250269]: 2026-01-23 10:42:24.518 250273 DEBUG oslo_concurrency.lockutils [None req-01c634d6-4511-4c23-ac8d-4ae36fdfd240 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:42:24 np0005593232 nova_compute[250269]: 2026-01-23 10:42:24.532 250273 INFO nova.compute.manager [None req-01c634d6-4511-4c23-ac8d-4ae36fdfd240 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Detaching volume 4393c992-1666-40a0-ab11-4cc66bdcd721#033[00m
Jan 23 05:42:24 np0005593232 nova_compute[250269]: 2026-01-23 10:42:24.793 250273 INFO nova.virt.block_device [None req-01c634d6-4511-4c23-ac8d-4ae36fdfd240 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Attempting to driver detach volume 4393c992-1666-40a0-ab11-4cc66bdcd721 from mountpoint /dev/vdb#033[00m
Jan 23 05:42:24 np0005593232 nova_compute[250269]: 2026-01-23 10:42:24.807 250273 DEBUG nova.virt.libvirt.driver [None req-01c634d6-4511-4c23-ac8d-4ae36fdfd240 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Attempting to detach device vdb from instance f34f1af9-6c51-42ec-97f8-fb5bb146aeb6 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Jan 23 05:42:24 np0005593232 nova_compute[250269]: 2026-01-23 10:42:24.808 250273 DEBUG nova.virt.libvirt.guest [None req-01c634d6-4511-4c23-ac8d-4ae36fdfd240 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] detach device xml: <disk type="network" device="disk">
Jan 23 05:42:24 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:42:24 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-4393c992-1666-40a0-ab11-4cc66bdcd721">
Jan 23 05:42:24 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:42:24 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:42:24 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:42:24 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:42:24 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 05:42:24 np0005593232 nova_compute[250269]:  <serial>4393c992-1666-40a0-ab11-4cc66bdcd721</serial>
Jan 23 05:42:24 np0005593232 nova_compute[250269]:  <shareable/>
Jan 23 05:42:24 np0005593232 nova_compute[250269]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 23 05:42:24 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:42:24 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 05:42:25 np0005593232 nova_compute[250269]: 2026-01-23 10:42:25.026 250273 INFO nova.virt.libvirt.driver [None req-01c634d6-4511-4c23-ac8d-4ae36fdfd240 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Successfully detached device vdb from instance f34f1af9-6c51-42ec-97f8-fb5bb146aeb6 from the persistent domain config.#033[00m
Jan 23 05:42:25 np0005593232 nova_compute[250269]: 2026-01-23 10:42:25.026 250273 DEBUG nova.virt.libvirt.driver [None req-01c634d6-4511-4c23-ac8d-4ae36fdfd240 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance f34f1af9-6c51-42ec-97f8-fb5bb146aeb6 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Jan 23 05:42:25 np0005593232 nova_compute[250269]: 2026-01-23 10:42:25.027 250273 DEBUG nova.virt.libvirt.guest [None req-01c634d6-4511-4c23-ac8d-4ae36fdfd240 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] detach device xml: <disk type="network" device="disk">
Jan 23 05:42:25 np0005593232 nova_compute[250269]:  <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:42:25 np0005593232 nova_compute[250269]:  <source protocol="rbd" name="volumes/volume-4393c992-1666-40a0-ab11-4cc66bdcd721">
Jan 23 05:42:25 np0005593232 nova_compute[250269]:    <host name="192.168.122.100" port="6789"/>
Jan 23 05:42:25 np0005593232 nova_compute[250269]:    <host name="192.168.122.102" port="6789"/>
Jan 23 05:42:25 np0005593232 nova_compute[250269]:    <host name="192.168.122.101" port="6789"/>
Jan 23 05:42:25 np0005593232 nova_compute[250269]:  </source>
Jan 23 05:42:25 np0005593232 nova_compute[250269]:  <target dev="vdb" bus="virtio"/>
Jan 23 05:42:25 np0005593232 nova_compute[250269]:  <serial>4393c992-1666-40a0-ab11-4cc66bdcd721</serial>
Jan 23 05:42:25 np0005593232 nova_compute[250269]:  <shareable/>
Jan 23 05:42:25 np0005593232 nova_compute[250269]:  <address type="pci" domain="0x0000" bus="0x04" slot="0x00" function="0x0"/>
Jan 23 05:42:25 np0005593232 nova_compute[250269]: </disk>
Jan 23 05:42:25 np0005593232 nova_compute[250269]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Jan 23 05:42:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:25.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:25.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:25 np0005593232 nova_compute[250269]: 2026-01-23 10:42:25.324 250273 DEBUG nova.virt.libvirt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Received event <DeviceRemovedEvent: 1769164945.3238468, f34f1af9-6c51-42ec-97f8-fb5bb146aeb6 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Jan 23 05:42:25 np0005593232 nova_compute[250269]: 2026-01-23 10:42:25.328 250273 DEBUG nova.virt.libvirt.driver [None req-01c634d6-4511-4c23-ac8d-4ae36fdfd240 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance f34f1af9-6c51-42ec-97f8-fb5bb146aeb6 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Jan 23 05:42:25 np0005593232 nova_compute[250269]: 2026-01-23 10:42:25.331 250273 INFO nova.virt.libvirt.driver [None req-01c634d6-4511-4c23-ac8d-4ae36fdfd240 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Successfully detached device vdb from instance f34f1af9-6c51-42ec-97f8-fb5bb146aeb6 from the live domain config.#033[00m
Jan 23 05:42:26 np0005593232 nova_compute[250269]: 2026-01-23 10:42:26.157 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3463: 321 pgs: 321 active+clean; 567 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 85 op/s
Jan 23 05:42:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:42:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:27.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:42:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:27.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:28 np0005593232 nova_compute[250269]: 2026-01-23 10:42:28.163 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:28 np0005593232 nova_compute[250269]: 2026-01-23 10:42:28.270 250273 DEBUG nova.objects.instance [None req-01c634d6-4511-4c23-ac8d-4ae36fdfd240 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lazy-loading 'flavor' on Instance uuid f34f1af9-6c51-42ec-97f8-fb5bb146aeb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:42:28 np0005593232 nova_compute[250269]: 2026-01-23 10:42:28.342 250273 DEBUG oslo_concurrency.lockutils [None req-01c634d6-4511-4c23-ac8d-4ae36fdfd240 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 3.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:42:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3464: 321 pgs: 321 active+clean; 567 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 28 KiB/s rd, 2.1 MiB/s wr, 42 op/s
Jan 23 05:42:28 np0005593232 nova_compute[250269]: 2026-01-23 10:42:28.749 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:42:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:42:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:29.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:42:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:42:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:29.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:42:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:42:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3465: 321 pgs: 321 active+clean; 567 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 25 KiB/s rd, 2.0 MiB/s wr, 38 op/s
Jan 23 05:42:31 np0005593232 nova_compute[250269]: 2026-01-23 10:42:31.159 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:31.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:42:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:31.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:42:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3466: 321 pgs: 321 active+clean; 567 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 401 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 23 05:42:33 np0005593232 nova_compute[250269]: 2026-01-23 10:42:33.165 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:33.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:33.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3467: 321 pgs: 321 active+clean; 567 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.6 MiB/s wr, 25 op/s
Jan 23 05:42:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:42:34 np0005593232 ceph-osd[85010]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 23 05:42:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:35.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:35.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:36 np0005593232 nova_compute[250269]: 2026-01-23 10:42:36.219 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3468: 321 pgs: 321 active+clean; 567 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 13 KiB/s wr, 8 op/s
Jan 23 05:42:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:42:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3222135964' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:42:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:37.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:37.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:42:37
Jan 23 05:42:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:42:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:42:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['vms', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'images', 'default.rgw.meta', 'backups', 'default.rgw.log', '.mgr', '.rgw.root']
Jan 23 05:42:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:42:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:42:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:42:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:42:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:42:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:42:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:42:38 np0005593232 nova_compute[250269]: 2026-01-23 10:42:38.169 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3469: 321 pgs: 321 active+clean; 567 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 14 KiB/s wr, 13 op/s
Jan 23 05:42:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:42:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:42:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:42:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:42:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:42:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:42:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:42:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:42:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:42:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:42:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:39.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:42:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:39.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:42:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:42:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3470: 321 pgs: 321 active+clean; 567 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 10 KiB/s wr, 12 op/s
Jan 23 05:42:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:41.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:41.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:41 np0005593232 nova_compute[250269]: 2026-01-23 10:42:41.223 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:41 np0005593232 podman[388744]: 2026-01-23 10:42:41.457239679 +0000 UTC m=+0.112526599 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:42:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3471: 321 pgs: 321 active+clean; 567 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 10 KiB/s wr, 21 op/s
Jan 23 05:42:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:42:42.660 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:42:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:42:42.660 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:42:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:42:42.661 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:42:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:43.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:42:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:43.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:42:43 np0005593232 nova_compute[250269]: 2026-01-23 10:42:43.223 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3472: 321 pgs: 321 active+clean; 567 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 10 KiB/s wr, 22 op/s
Jan 23 05:42:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:42:44 np0005593232 podman[388796]: 2026-01-23 10:42:44.633986083 +0000 UTC m=+0.055221561 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 05:42:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:45.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:45.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:46 np0005593232 nova_compute[250269]: 2026-01-23 10:42:46.225 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3473: 321 pgs: 321 active+clean; 567 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 1.1 KiB/s wr, 14 op/s
Jan 23 05:42:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:47.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:47.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:42:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:42:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:42:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:42:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00870218424297413 of space, bias 1.0, pg target 2.610655272892239 quantized to 32 (current 32)
Jan 23 05:42:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:42:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00532465568583659 of space, bias 1.0, pg target 1.5867473943793038 quantized to 32 (current 32)
Jan 23 05:42:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:42:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.398084170854272e-05 quantized to 32 (current 32)
Jan 23 05:42:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:42:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5652333935301508 quantized to 32 (current 32)
Jan 23 05:42:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:42:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001727386934673367 quantized to 16 (current 32)
Jan 23 05:42:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:42:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:42:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:42:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021592336683417087 quantized to 32 (current 32)
Jan 23 05:42:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:42:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018353486180904522 quantized to 32 (current 32)
Jan 23 05:42:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:42:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:42:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:42:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043184673366834174 quantized to 32 (current 32)
Jan 23 05:42:48 np0005593232 nova_compute[250269]: 2026-01-23 10:42:48.276 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3474: 321 pgs: 321 active+clean; 568 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 34 KiB/s rd, 16 KiB/s wr, 28 op/s
Jan 23 05:42:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:42:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:49.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:42:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:49.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:42:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3475: 321 pgs: 321 active+clean; 568 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 15 KiB/s wr, 24 op/s
Jan 23 05:42:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:42:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:51.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:42:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:42:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:51.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:42:51 np0005593232 nova_compute[250269]: 2026-01-23 10:42:51.228 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:51 np0005593232 ceph-osd[85010]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/lock/cls_lock.cc:291: Could not read list of current lockers off disk: (2) No such file or directory
Jan 23 05:42:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3476: 321 pgs: 321 active+clean; 579 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 311 KiB/s rd, 411 KiB/s wr, 45 op/s
Jan 23 05:42:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:53.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:53.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:53 np0005593232 nova_compute[250269]: 2026-01-23 10:42:53.282 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3477: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Jan 23 05:42:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:42:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:55.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:55.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:56 np0005593232 nova_compute[250269]: 2026-01-23 10:42:56.229 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3478: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Jan 23 05:42:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:42:57.102 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=78, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=77) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:42:57 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:42:57.102 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:42:57 np0005593232 nova_compute[250269]: 2026-01-23 10:42:57.103 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:57.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:42:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:57.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:42:58 np0005593232 nova_compute[250269]: 2026-01-23 10:42:58.284 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:42:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3479: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Jan 23 05:42:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:42:59.105 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '78'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:42:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:42:59.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:42:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:42:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:42:59.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:42:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:43:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3480: 321 pgs: 321 active+clean; 614 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 83 op/s
Jan 23 05:43:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:01.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:01.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:01 np0005593232 nova_compute[250269]: 2026-01-23 10:43:01.230 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3481: 321 pgs: 321 active+clean; 621 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 109 op/s
Jan 23 05:43:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 05:43:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:43:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 05:43:02 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Jan 23 05:43:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:43:02.852023) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:43:02 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Jan 23 05:43:02 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164982852137, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 1482, "num_deletes": 254, "total_data_size": 2443246, "memory_usage": 2479280, "flush_reason": "Manual Compaction"}
Jan 23 05:43:02 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Jan 23 05:43:02 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164982880232, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 2406749, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75755, "largest_seqno": 77236, "table_properties": {"data_size": 2399853, "index_size": 3966, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15021, "raw_average_key_size": 20, "raw_value_size": 2385849, "raw_average_value_size": 3246, "num_data_blocks": 174, "num_entries": 735, "num_filter_entries": 735, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769164849, "oldest_key_time": 1769164849, "file_creation_time": 1769164982, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:43:02 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 28257 microseconds, and 7909 cpu microseconds.
Jan 23 05:43:02 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:43:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:43:02.880320) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 2406749 bytes OK
Jan 23 05:43:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:43:02.880353) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Jan 23 05:43:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:43:02.885363) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Jan 23 05:43:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:43:02.885421) EVENT_LOG_v1 {"time_micros": 1769164982885378, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:43:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:43:02.885439) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:43:02 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 2436770, prev total WAL file size 2457825, number of live WAL files 2.
Jan 23 05:43:02 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:43:02 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:43:02.886555) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Jan 23 05:43:02 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:43:02 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(2350KB)], [176(10MB)]
Jan 23 05:43:02 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164982886710, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 13333538, "oldest_snapshot_seqno": -1}
Jan 23 05:43:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 10173 keys, 11379329 bytes, temperature: kUnknown
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164983058689, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 11379329, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11316251, "index_size": 36533, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25477, "raw_key_size": 268618, "raw_average_key_size": 26, "raw_value_size": 11140919, "raw_average_value_size": 1095, "num_data_blocks": 1383, "num_entries": 10173, "num_filter_entries": 10173, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769164982, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:43:03.059079) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 11379329 bytes
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:43:03.061469) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 77.5 rd, 66.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 10.4 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(10.3) write-amplify(4.7) OK, records in: 10698, records dropped: 525 output_compression: NoCompression
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:43:03.061570) EVENT_LOG_v1 {"time_micros": 1769164983061532, "job": 110, "event": "compaction_finished", "compaction_time_micros": 172133, "compaction_time_cpu_micros": 36896, "output_level": 6, "num_output_files": 1, "total_output_size": 11379329, "num_input_records": 10698, "num_output_records": 10173, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164983063120, "job": 110, "event": "table_file_deletion", "file_number": 178}
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769164983068192, "job": 110, "event": "table_file_deletion", "file_number": 176}
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:43:02.886223) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:43:03.068262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:43:03.068271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:43:03.068286) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:43:03.068290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:43:03.068294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:43:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:03.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:03.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:03 np0005593232 nova_compute[250269]: 2026-01-23 10:43:03.323 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:43:03 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c0180f89-1616-419b-be8c-625fc6f4d09c does not exist
Jan 23 05:43:03 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d9d2d93f-d2ae-4d4d-8a89-5ed19550be23 does not exist
Jan 23 05:43:03 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 76d4a895-36ac-467c-a4da-995870248b77 does not exist
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:43:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:43:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3482: 321 pgs: 321 active+clean; 645 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 122 op/s
Jan 23 05:43:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:43:04 np0005593232 podman[389124]: 2026-01-23 10:43:04.494134044 +0000 UTC m=+0.087152898 container create b6d4231b1230d5e5bbb4d1e35757f454e99f4249a7312a285794e23fde5819f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_noyce, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 23 05:43:04 np0005593232 podman[389124]: 2026-01-23 10:43:04.441304053 +0000 UTC m=+0.034322917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:43:04 np0005593232 systemd[1]: Started libpod-conmon-b6d4231b1230d5e5bbb4d1e35757f454e99f4249a7312a285794e23fde5819f0.scope.
Jan 23 05:43:04 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:43:04 np0005593232 podman[389124]: 2026-01-23 10:43:04.672680548 +0000 UTC m=+0.265699412 container init b6d4231b1230d5e5bbb4d1e35757f454e99f4249a7312a285794e23fde5819f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_noyce, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:43:04 np0005593232 podman[389124]: 2026-01-23 10:43:04.680072488 +0000 UTC m=+0.273091332 container start b6d4231b1230d5e5bbb4d1e35757f454e99f4249a7312a285794e23fde5819f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_noyce, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 05:43:04 np0005593232 distracted_noyce[389141]: 167 167
Jan 23 05:43:04 np0005593232 systemd[1]: libpod-b6d4231b1230d5e5bbb4d1e35757f454e99f4249a7312a285794e23fde5819f0.scope: Deactivated successfully.
Jan 23 05:43:04 np0005593232 conmon[389141]: conmon b6d4231b1230d5e5bbb4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b6d4231b1230d5e5bbb4d1e35757f454e99f4249a7312a285794e23fde5819f0.scope/container/memory.events
Jan 23 05:43:04 np0005593232 podman[389124]: 2026-01-23 10:43:04.700970512 +0000 UTC m=+0.293989376 container attach b6d4231b1230d5e5bbb4d1e35757f454e99f4249a7312a285794e23fde5819f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 05:43:04 np0005593232 podman[389124]: 2026-01-23 10:43:04.701380294 +0000 UTC m=+0.294399158 container died b6d4231b1230d5e5bbb4d1e35757f454e99f4249a7312a285794e23fde5819f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_noyce, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 05:43:04 np0005593232 systemd[1]: var-lib-containers-storage-overlay-cd8425204498c1105591982784d3366de2d39986833beea48d4c2cc7f92163c9-merged.mount: Deactivated successfully.
Jan 23 05:43:04 np0005593232 podman[389124]: 2026-01-23 10:43:04.761381739 +0000 UTC m=+0.354400583 container remove b6d4231b1230d5e5bbb4d1e35757f454e99f4249a7312a285794e23fde5819f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 05:43:04 np0005593232 systemd[1]: libpod-conmon-b6d4231b1230d5e5bbb4d1e35757f454e99f4249a7312a285794e23fde5819f0.scope: Deactivated successfully.
Jan 23 05:43:04 np0005593232 podman[389214]: 2026-01-23 10:43:04.928446518 +0000 UTC m=+0.043520798 container create 7d57a34ecc9a43b4811f9c90692a823e3e402b10949226d6dac7fb49f06f3898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_haslett, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:43:04 np0005593232 systemd[1]: Started libpod-conmon-7d57a34ecc9a43b4811f9c90692a823e3e402b10949226d6dac7fb49f06f3898.scope.
Jan 23 05:43:04 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:43:05 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c764f738261083b0c81f89cc14ad62e5bbe9dc637f9600c6414f4a476b57ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:43:05 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c764f738261083b0c81f89cc14ad62e5bbe9dc637f9600c6414f4a476b57ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:43:05 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c764f738261083b0c81f89cc14ad62e5bbe9dc637f9600c6414f4a476b57ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:43:05 np0005593232 podman[389214]: 2026-01-23 10:43:04.910558629 +0000 UTC m=+0.025632929 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:43:05 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c764f738261083b0c81f89cc14ad62e5bbe9dc637f9600c6414f4a476b57ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:43:05 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c764f738261083b0c81f89cc14ad62e5bbe9dc637f9600c6414f4a476b57ec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:43:05 np0005593232 podman[389214]: 2026-01-23 10:43:05.014934636 +0000 UTC m=+0.130008916 container init 7d57a34ecc9a43b4811f9c90692a823e3e402b10949226d6dac7fb49f06f3898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_haslett, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 05:43:05 np0005593232 podman[389214]: 2026-01-23 10:43:05.023822509 +0000 UTC m=+0.138896779 container start 7d57a34ecc9a43b4811f9c90692a823e3e402b10949226d6dac7fb49f06f3898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 23 05:43:05 np0005593232 podman[389214]: 2026-01-23 10:43:05.028019978 +0000 UTC m=+0.143094278 container attach 7d57a34ecc9a43b4811f9c90692a823e3e402b10949226d6dac7fb49f06f3898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_haslett, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 05:43:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:05.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:43:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:05.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:43:05 np0005593232 nova_compute[250269]: 2026-01-23 10:43:05.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:43:05 np0005593232 festive_haslett[389230]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:43:05 np0005593232 festive_haslett[389230]: --> relative data size: 1.0
Jan 23 05:43:05 np0005593232 festive_haslett[389230]: --> All data devices are unavailable
Jan 23 05:43:05 np0005593232 systemd[1]: libpod-7d57a34ecc9a43b4811f9c90692a823e3e402b10949226d6dac7fb49f06f3898.scope: Deactivated successfully.
Jan 23 05:43:05 np0005593232 podman[389214]: 2026-01-23 10:43:05.876452854 +0000 UTC m=+0.991527144 container died 7d57a34ecc9a43b4811f9c90692a823e3e402b10949226d6dac7fb49f06f3898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_haslett, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:43:05 np0005593232 systemd[1]: var-lib-containers-storage-overlay-80c764f738261083b0c81f89cc14ad62e5bbe9dc637f9600c6414f4a476b57ec-merged.mount: Deactivated successfully.
Jan 23 05:43:05 np0005593232 podman[389214]: 2026-01-23 10:43:05.936738417 +0000 UTC m=+1.051812687 container remove 7d57a34ecc9a43b4811f9c90692a823e3e402b10949226d6dac7fb49f06f3898 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_haslett, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 05:43:05 np0005593232 systemd[1]: libpod-conmon-7d57a34ecc9a43b4811f9c90692a823e3e402b10949226d6dac7fb49f06f3898.scope: Deactivated successfully.
Jan 23 05:43:06 np0005593232 nova_compute[250269]: 2026-01-23 10:43:06.232 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:06 np0005593232 nova_compute[250269]: 2026-01-23 10:43:06.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:43:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3483: 321 pgs: 321 active+clean; 645 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 383 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 23 05:43:06 np0005593232 podman[389398]: 2026-01-23 10:43:06.581316478 +0000 UTC m=+0.052783751 container create 2dc939be6575dabd4f329b44ceef3c842530ffe9cdd8867499c75513925c7af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_feynman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 23 05:43:06 np0005593232 systemd[1]: Started libpod-conmon-2dc939be6575dabd4f329b44ceef3c842530ffe9cdd8867499c75513925c7af9.scope.
Jan 23 05:43:06 np0005593232 podman[389398]: 2026-01-23 10:43:06.5563787 +0000 UTC m=+0.027845983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:43:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:43:06 np0005593232 podman[389398]: 2026-01-23 10:43:06.681061673 +0000 UTC m=+0.152528966 container init 2dc939be6575dabd4f329b44ceef3c842530ffe9cdd8867499c75513925c7af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_feynman, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 23 05:43:06 np0005593232 podman[389398]: 2026-01-23 10:43:06.693328372 +0000 UTC m=+0.164795625 container start 2dc939be6575dabd4f329b44ceef3c842530ffe9cdd8867499c75513925c7af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 05:43:06 np0005593232 podman[389398]: 2026-01-23 10:43:06.697338276 +0000 UTC m=+0.168805569 container attach 2dc939be6575dabd4f329b44ceef3c842530ffe9cdd8867499c75513925c7af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_feynman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 05:43:06 np0005593232 friendly_feynman[389414]: 167 167
Jan 23 05:43:06 np0005593232 systemd[1]: libpod-2dc939be6575dabd4f329b44ceef3c842530ffe9cdd8867499c75513925c7af9.scope: Deactivated successfully.
Jan 23 05:43:06 np0005593232 podman[389398]: 2026-01-23 10:43:06.700804965 +0000 UTC m=+0.172272218 container died 2dc939be6575dabd4f329b44ceef3c842530ffe9cdd8867499c75513925c7af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 23 05:43:06 np0005593232 systemd[1]: var-lib-containers-storage-overlay-55c72ec7f8ec67977d7193b25bc8e87884c2081fc82c7ee47c64dd172207f769-merged.mount: Deactivated successfully.
Jan 23 05:43:06 np0005593232 podman[389398]: 2026-01-23 10:43:06.746622487 +0000 UTC m=+0.218089740 container remove 2dc939be6575dabd4f329b44ceef3c842530ffe9cdd8867499c75513925c7af9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_feynman, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 05:43:06 np0005593232 systemd[1]: libpod-conmon-2dc939be6575dabd4f329b44ceef3c842530ffe9cdd8867499c75513925c7af9.scope: Deactivated successfully.
Jan 23 05:43:06 np0005593232 podman[389436]: 2026-01-23 10:43:06.942981198 +0000 UTC m=+0.060334786 container create 18fd0972bd963ef03a3b9ae9b8f7ccad884778814801b453ae77e4f426f7c0ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_maxwell, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:43:06 np0005593232 systemd[1]: Started libpod-conmon-18fd0972bd963ef03a3b9ae9b8f7ccad884778814801b453ae77e4f426f7c0ac.scope.
Jan 23 05:43:07 np0005593232 podman[389436]: 2026-01-23 10:43:06.904427352 +0000 UTC m=+0.021780960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:43:07 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:43:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/868cb3a4fcbcbc5d3e09351b2be2d529e69cedbc3e5cff869ae817c65062c43b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:43:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/868cb3a4fcbcbc5d3e09351b2be2d529e69cedbc3e5cff869ae817c65062c43b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:43:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/868cb3a4fcbcbc5d3e09351b2be2d529e69cedbc3e5cff869ae817c65062c43b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:43:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/868cb3a4fcbcbc5d3e09351b2be2d529e69cedbc3e5cff869ae817c65062c43b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:43:07 np0005593232 podman[389436]: 2026-01-23 10:43:07.022276442 +0000 UTC m=+0.139630070 container init 18fd0972bd963ef03a3b9ae9b8f7ccad884778814801b453ae77e4f426f7c0ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 05:43:07 np0005593232 podman[389436]: 2026-01-23 10:43:07.029349323 +0000 UTC m=+0.146702921 container start 18fd0972bd963ef03a3b9ae9b8f7ccad884778814801b453ae77e4f426f7c0ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 05:43:07 np0005593232 podman[389436]: 2026-01-23 10:43:07.034051087 +0000 UTC m=+0.151404685 container attach 18fd0972bd963ef03a3b9ae9b8f7ccad884778814801b453ae77e4f426f7c0ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_maxwell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 05:43:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:07.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:07.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:43:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:43:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:43:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:43:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:43:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]: {
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:    "0": [
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:        {
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:            "devices": [
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:                "/dev/loop3"
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:            ],
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:            "lv_name": "ceph_lv0",
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:            "lv_size": "7511998464",
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:            "name": "ceph_lv0",
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:            "tags": {
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:                "ceph.cluster_name": "ceph",
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:                "ceph.crush_device_class": "",
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:                "ceph.encrypted": "0",
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:                "ceph.osd_id": "0",
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:                "ceph.type": "block",
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:                "ceph.vdo": "0"
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:            },
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:            "type": "block",
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:            "vg_name": "ceph_vg0"
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:        }
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]:    ]
Jan 23 05:43:07 np0005593232 objective_maxwell[389452]: }
Jan 23 05:43:07 np0005593232 systemd[1]: libpod-18fd0972bd963ef03a3b9ae9b8f7ccad884778814801b453ae77e4f426f7c0ac.scope: Deactivated successfully.
Jan 23 05:43:07 np0005593232 podman[389436]: 2026-01-23 10:43:07.814081948 +0000 UTC m=+0.931435536 container died 18fd0972bd963ef03a3b9ae9b8f7ccad884778814801b453ae77e4f426f7c0ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 05:43:07 np0005593232 systemd[1]: var-lib-containers-storage-overlay-868cb3a4fcbcbc5d3e09351b2be2d529e69cedbc3e5cff869ae817c65062c43b-merged.mount: Deactivated successfully.
Jan 23 05:43:07 np0005593232 podman[389436]: 2026-01-23 10:43:07.872358594 +0000 UTC m=+0.989712172 container remove 18fd0972bd963ef03a3b9ae9b8f7ccad884778814801b453ae77e4f426f7c0ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:43:07 np0005593232 systemd[1]: libpod-conmon-18fd0972bd963ef03a3b9ae9b8f7ccad884778814801b453ae77e4f426f7c0ac.scope: Deactivated successfully.
Jan 23 05:43:08 np0005593232 nova_compute[250269]: 2026-01-23 10:43:08.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:43:08 np0005593232 nova_compute[250269]: 2026-01-23 10:43:08.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:43:08 np0005593232 nova_compute[250269]: 2026-01-23 10:43:08.326 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3484: 321 pgs: 321 active+clean; 647 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 401 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 23 05:43:08 np0005593232 podman[389613]: 2026-01-23 10:43:08.443909389 +0000 UTC m=+0.044096235 container create 4d5eee3915a8813baa7bf2cc8766f13afa9202b480e3fef6577872f2d489d7f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chatterjee, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 23 05:43:08 np0005593232 systemd[1]: Started libpod-conmon-4d5eee3915a8813baa7bf2cc8766f13afa9202b480e3fef6577872f2d489d7f0.scope.
Jan 23 05:43:08 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:43:08 np0005593232 podman[389613]: 2026-01-23 10:43:08.425908557 +0000 UTC m=+0.026095443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:43:08 np0005593232 podman[389613]: 2026-01-23 10:43:08.537395396 +0000 UTC m=+0.137582272 container init 4d5eee3915a8813baa7bf2cc8766f13afa9202b480e3fef6577872f2d489d7f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 05:43:08 np0005593232 podman[389613]: 2026-01-23 10:43:08.545140406 +0000 UTC m=+0.145327242 container start 4d5eee3915a8813baa7bf2cc8766f13afa9202b480e3fef6577872f2d489d7f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chatterjee, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:43:08 np0005593232 podman[389613]: 2026-01-23 10:43:08.548224494 +0000 UTC m=+0.148411360 container attach 4d5eee3915a8813baa7bf2cc8766f13afa9202b480e3fef6577872f2d489d7f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:43:08 np0005593232 xenodochial_chatterjee[389629]: 167 167
Jan 23 05:43:08 np0005593232 systemd[1]: libpod-4d5eee3915a8813baa7bf2cc8766f13afa9202b480e3fef6577872f2d489d7f0.scope: Deactivated successfully.
Jan 23 05:43:08 np0005593232 conmon[389629]: conmon 4d5eee3915a8813baa7b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4d5eee3915a8813baa7bf2cc8766f13afa9202b480e3fef6577872f2d489d7f0.scope/container/memory.events
Jan 23 05:43:08 np0005593232 podman[389613]: 2026-01-23 10:43:08.55303337 +0000 UTC m=+0.153220206 container died 4d5eee3915a8813baa7bf2cc8766f13afa9202b480e3fef6577872f2d489d7f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:43:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e45bd0bc3ac103ee7658a50439fd1d5bf70ce6acd988b08260b1716b248ea44b-merged.mount: Deactivated successfully.
Jan 23 05:43:08 np0005593232 podman[389613]: 2026-01-23 10:43:08.597951257 +0000 UTC m=+0.198138093 container remove 4d5eee3915a8813baa7bf2cc8766f13afa9202b480e3fef6577872f2d489d7f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_chatterjee, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:43:08 np0005593232 systemd[1]: libpod-conmon-4d5eee3915a8813baa7bf2cc8766f13afa9202b480e3fef6577872f2d489d7f0.scope: Deactivated successfully.
Jan 23 05:43:08 np0005593232 podman[389653]: 2026-01-23 10:43:08.82742432 +0000 UTC m=+0.062400355 container create 1ae3903af0f0325a36dadc213f27e28bd6abcbcb62c9e06a23dbeceabbf92fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shannon, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 05:43:08 np0005593232 systemd[1]: Started libpod-conmon-1ae3903af0f0325a36dadc213f27e28bd6abcbcb62c9e06a23dbeceabbf92fbd.scope.
Jan 23 05:43:08 np0005593232 podman[389653]: 2026-01-23 10:43:08.796697506 +0000 UTC m=+0.031673611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:43:08 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:43:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c89759d4064492c64875b3dcd1f876acc2d4669c90dd3c7e2b17cace0fd5f9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:43:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c89759d4064492c64875b3dcd1f876acc2d4669c90dd3c7e2b17cace0fd5f9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:43:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c89759d4064492c64875b3dcd1f876acc2d4669c90dd3c7e2b17cace0fd5f9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:43:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c89759d4064492c64875b3dcd1f876acc2d4669c90dd3c7e2b17cace0fd5f9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:43:08 np0005593232 podman[389653]: 2026-01-23 10:43:08.918692004 +0000 UTC m=+0.153668049 container init 1ae3903af0f0325a36dadc213f27e28bd6abcbcb62c9e06a23dbeceabbf92fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shannon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:43:08 np0005593232 podman[389653]: 2026-01-23 10:43:08.926923518 +0000 UTC m=+0.161899553 container start 1ae3903af0f0325a36dadc213f27e28bd6abcbcb62c9e06a23dbeceabbf92fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:43:08 np0005593232 podman[389653]: 2026-01-23 10:43:08.931020724 +0000 UTC m=+0.165996749 container attach 1ae3903af0f0325a36dadc213f27e28bd6abcbcb62c9e06a23dbeceabbf92fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:43:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:09.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:43:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:09.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:43:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:43:09 np0005593232 kind_shannon[389670]: {
Jan 23 05:43:09 np0005593232 kind_shannon[389670]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:43:09 np0005593232 kind_shannon[389670]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:43:09 np0005593232 kind_shannon[389670]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:43:09 np0005593232 kind_shannon[389670]:        "osd_id": 0,
Jan 23 05:43:09 np0005593232 kind_shannon[389670]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:43:09 np0005593232 kind_shannon[389670]:        "type": "bluestore"
Jan 23 05:43:09 np0005593232 kind_shannon[389670]:    }
Jan 23 05:43:09 np0005593232 kind_shannon[389670]: }
Jan 23 05:43:09 np0005593232 systemd[1]: libpod-1ae3903af0f0325a36dadc213f27e28bd6abcbcb62c9e06a23dbeceabbf92fbd.scope: Deactivated successfully.
Jan 23 05:43:09 np0005593232 podman[389653]: 2026-01-23 10:43:09.800655802 +0000 UTC m=+1.035631837 container died 1ae3903af0f0325a36dadc213f27e28bd6abcbcb62c9e06a23dbeceabbf92fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 05:43:09 np0005593232 systemd[1]: var-lib-containers-storage-overlay-8c89759d4064492c64875b3dcd1f876acc2d4669c90dd3c7e2b17cace0fd5f9e-merged.mount: Deactivated successfully.
Jan 23 05:43:09 np0005593232 podman[389653]: 2026-01-23 10:43:09.869935472 +0000 UTC m=+1.104911497 container remove 1ae3903af0f0325a36dadc213f27e28bd6abcbcb62c9e06a23dbeceabbf92fbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_shannon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 23 05:43:09 np0005593232 systemd[1]: libpod-conmon-1ae3903af0f0325a36dadc213f27e28bd6abcbcb62c9e06a23dbeceabbf92fbd.scope: Deactivated successfully.
Jan 23 05:43:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:43:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:43:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:43:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:43:09 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 0f39752b-10ad-4db8-b07f-a4bf080a7710 does not exist
Jan 23 05:43:09 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev bf8b3865-eb4d-4a84-9db3-9dcf7d52b6aa does not exist
Jan 23 05:43:09 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 6e0dbc6e-2128-41d6-985c-215a909c84c4 does not exist
Jan 23 05:43:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3485: 321 pgs: 321 active+clean; 647 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 401 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 23 05:43:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:43:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:43:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:11.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:11.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:11 np0005593232 nova_compute[250269]: 2026-01-23 10:43:11.234 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:11 np0005593232 nova_compute[250269]: 2026-01-23 10:43:11.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:43:11 np0005593232 nova_compute[250269]: 2026-01-23 10:43:11.813 250273 DEBUG oslo_concurrency.lockutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "9ae9a99b-039d-46f8-a3ca-42ee68bae3e8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:43:11 np0005593232 nova_compute[250269]: 2026-01-23 10:43:11.814 250273 DEBUG oslo_concurrency.lockutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "9ae9a99b-039d-46f8-a3ca-42ee68bae3e8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:43:11 np0005593232 nova_compute[250269]: 2026-01-23 10:43:11.836 250273 DEBUG nova.compute.manager [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:43:11 np0005593232 nova_compute[250269]: 2026-01-23 10:43:11.971 250273 DEBUG oslo_concurrency.lockutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:43:11 np0005593232 nova_compute[250269]: 2026-01-23 10:43:11.971 250273 DEBUG oslo_concurrency.lockutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:43:11 np0005593232 nova_compute[250269]: 2026-01-23 10:43:11.977 250273 DEBUG nova.virt.hardware [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:43:11 np0005593232 nova_compute[250269]: 2026-01-23 10:43:11.978 250273 INFO nova.compute.claims [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:43:12 np0005593232 nova_compute[250269]: 2026-01-23 10:43:12.239 250273 DEBUG oslo_concurrency.processutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:43:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3486: 321 pgs: 321 active+clean; 647 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 401 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 23 05:43:12 np0005593232 podman[389755]: 2026-01-23 10:43:12.448724109 +0000 UTC m=+0.107678772 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:43:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:43:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/180267541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:43:12 np0005593232 nova_compute[250269]: 2026-01-23 10:43:12.674 250273 DEBUG oslo_concurrency.processutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:43:12 np0005593232 nova_compute[250269]: 2026-01-23 10:43:12.680 250273 DEBUG nova.compute.provider_tree [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:43:12 np0005593232 nova_compute[250269]: 2026-01-23 10:43:12.850 250273 DEBUG nova.scheduler.client.report [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:43:12 np0005593232 nova_compute[250269]: 2026-01-23 10:43:12.880 250273 DEBUG oslo_concurrency.lockutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:43:12 np0005593232 nova_compute[250269]: 2026-01-23 10:43:12.881 250273 DEBUG nova.compute.manager [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:43:12 np0005593232 nova_compute[250269]: 2026-01-23 10:43:12.936 250273 DEBUG nova.compute.manager [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:43:12 np0005593232 nova_compute[250269]: 2026-01-23 10:43:12.936 250273 DEBUG nova.network.neutron [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:43:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:13.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:13.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:13 np0005593232 nova_compute[250269]: 2026-01-23 10:43:13.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:43:13 np0005593232 nova_compute[250269]: 2026-01-23 10:43:13.330 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3487: 321 pgs: 321 active+clean; 647 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 199 KiB/s rd, 1.4 MiB/s wr, 41 op/s
Jan 23 05:43:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:43:14 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 23 05:43:14 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 23 05:43:15 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 23 05:43:15 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 23 05:43:15 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 23 05:43:15 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 23 05:43:15 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 23 05:43:15 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 23 05:43:15 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Jan 23 05:43:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:43:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:15.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:43:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:15.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:15 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Jan 23 05:43:15 np0005593232 podman[389803]: 2026-01-23 10:43:15.404301607 +0000 UTC m=+0.066194082 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true)
Jan 23 05:43:15 np0005593232 nova_compute[250269]: 2026-01-23 10:43:15.536 250273 INFO nova.virt.libvirt.driver [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:43:15 np0005593232 nova_compute[250269]: 2026-01-23 10:43:15.573 250273 DEBUG nova.compute.manager [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:43:15 np0005593232 nova_compute[250269]: 2026-01-23 10:43:15.587 250273 DEBUG nova.policy [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '93cd560e84264023877c47122b5919de', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6e762fca3b634c7aa1d994314c059c54', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:43:15 np0005593232 nova_compute[250269]: 2026-01-23 10:43:15.667 250273 INFO nova.virt.block_device [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Booting with volume d8ca748e-6bb8-4ae2-9729-d2be77ce520f at /dev/vda#033[00m
Jan 23 05:43:16 np0005593232 nova_compute[250269]: 2026-01-23 10:43:16.084 250273 DEBUG os_brick.utils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 23 05:43:16 np0005593232 nova_compute[250269]: 2026-01-23 10:43:16.086 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:43:16 np0005593232 nova_compute[250269]: 2026-01-23 10:43:16.107 265031 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:43:16 np0005593232 nova_compute[250269]: 2026-01-23 10:43:16.108 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[f2d68be0-a580-41d6-9f32-7a655d80600b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:16 np0005593232 nova_compute[250269]: 2026-01-23 10:43:16.109 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:43:16 np0005593232 nova_compute[250269]: 2026-01-23 10:43:16.120 265031 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:43:16 np0005593232 nova_compute[250269]: 2026-01-23 10:43:16.120 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[46784ff6-7bd5-4090-9e3d-81f0628a9624]: (4, ('InitiatorName=iqn.1994-05.com.redhat:c6473626b456', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:16 np0005593232 nova_compute[250269]: 2026-01-23 10:43:16.122 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:43:16 np0005593232 nova_compute[250269]: 2026-01-23 10:43:16.134 265031 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:43:16 np0005593232 nova_compute[250269]: 2026-01-23 10:43:16.135 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[7f312a19-8199-4e90-9519-871355a5f9e7]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:16 np0005593232 nova_compute[250269]: 2026-01-23 10:43:16.137 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[83011de7-f3b2-48fa-a525-0c11b3f09609]: (4, 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:16 np0005593232 nova_compute[250269]: 2026-01-23 10:43:16.138 250273 DEBUG oslo_concurrency.processutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:43:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e388 do_prune osdmap full prune enabled
Jan 23 05:43:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e389 e389: 3 total, 3 up, 3 in
Jan 23 05:43:16 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e389: 3 total, 3 up, 3 in
Jan 23 05:43:16 np0005593232 nova_compute[250269]: 2026-01-23 10:43:16.175 250273 DEBUG oslo_concurrency.processutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "nvme version" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:43:16 np0005593232 nova_compute[250269]: 2026-01-23 10:43:16.178 250273 DEBUG os_brick.initiator.connectors.lightos [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 23 05:43:16 np0005593232 nova_compute[250269]: 2026-01-23 10:43:16.179 250273 DEBUG os_brick.initiator.connectors.lightos [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 23 05:43:16 np0005593232 nova_compute[250269]: 2026-01-23 10:43:16.179 250273 DEBUG os_brick.initiator.connectors.lightos [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 23 05:43:16 np0005593232 nova_compute[250269]: 2026-01-23 10:43:16.179 250273 DEBUG os_brick.utils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] <== get_connector_properties: return (95ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:c6473626b456', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 23 05:43:16 np0005593232 nova_compute[250269]: 2026-01-23 10:43:16.180 250273 DEBUG nova.virt.block_device [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Updating existing volume attachment record: 89187d7b-afed-4e1f-b3ab-486c0472db38 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 23 05:43:16 np0005593232 nova_compute[250269]: 2026-01-23 10:43:16.236 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3489: 321 pgs: 321 active+clean; 647 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 31 KiB/s wr, 9 op/s
Jan 23 05:43:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:17.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:43:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:17.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:43:18 np0005593232 nova_compute[250269]: 2026-01-23 10:43:18.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:43:18 np0005593232 nova_compute[250269]: 2026-01-23 10:43:18.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:43:18 np0005593232 nova_compute[250269]: 2026-01-23 10:43:18.333 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3490: 321 pgs: 321 active+clean; 647 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 101 KiB/s rd, 16 KiB/s wr, 167 op/s
Jan 23 05:43:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:43:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:19.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:43:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:19.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:43:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3491: 321 pgs: 321 active+clean; 647 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 101 KiB/s rd, 16 KiB/s wr, 167 op/s
Jan 23 05:43:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e389 do_prune osdmap full prune enabled
Jan 23 05:43:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e390 e390: 3 total, 3 up, 3 in
Jan 23 05:43:20 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e390: 3 total, 3 up, 3 in
Jan 23 05:43:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:21.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:21 np0005593232 nova_compute[250269]: 2026-01-23 10:43:21.239 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:21.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3493: 321 pgs: 321 active+clean; 647 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 148 KiB/s rd, 1023 B/s wr, 243 op/s
Jan 23 05:43:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:23.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:43:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:23.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:43:23 np0005593232 nova_compute[250269]: 2026-01-23 10:43:23.337 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:23 np0005593232 nova_compute[250269]: 2026-01-23 10:43:23.579 250273 DEBUG nova.network.neutron [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Successfully created port: db66e02a-ad5a-4ebb-9aab-112782b3fec7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:43:24 np0005593232 nova_compute[250269]: 2026-01-23 10:43:24.031 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-f34f1af9-6c51-42ec-97f8-fb5bb146aeb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:43:24 np0005593232 nova_compute[250269]: 2026-01-23 10:43:24.031 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-f34f1af9-6c51-42ec-97f8-fb5bb146aeb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:43:24 np0005593232 nova_compute[250269]: 2026-01-23 10:43:24.031 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:43:24 np0005593232 nova_compute[250269]: 2026-01-23 10:43:24.115 250273 DEBUG nova.compute.manager [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 05:43:24 np0005593232 nova_compute[250269]: 2026-01-23 10:43:24.117 250273 DEBUG nova.virt.libvirt.driver [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:43:24 np0005593232 nova_compute[250269]: 2026-01-23 10:43:24.117 250273 INFO nova.virt.libvirt.driver [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Creating image(s)#033[00m
Jan 23 05:43:24 np0005593232 nova_compute[250269]: 2026-01-23 10:43:24.118 250273 DEBUG nova.virt.libvirt.driver [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 23 05:43:24 np0005593232 nova_compute[250269]: 2026-01-23 10:43:24.118 250273 DEBUG nova.virt.libvirt.driver [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Ensure instance console log exists: /var/lib/nova/instances/9ae9a99b-039d-46f8-a3ca-42ee68bae3e8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:43:24 np0005593232 nova_compute[250269]: 2026-01-23 10:43:24.119 250273 DEBUG oslo_concurrency.lockutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:43:24 np0005593232 nova_compute[250269]: 2026-01-23 10:43:24.119 250273 DEBUG oslo_concurrency.lockutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:43:24 np0005593232 nova_compute[250269]: 2026-01-23 10:43:24.119 250273 DEBUG oslo_concurrency.lockutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:43:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3494: 321 pgs: 321 active+clean; 647 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 154 KiB/s rd, 3.3 KiB/s wr, 252 op/s
Jan 23 05:43:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:43:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:25.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:25.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:26 np0005593232 nova_compute[250269]: 2026-01-23 10:43:26.186 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Updating instance_info_cache with network_info: [{"id": "3b1ac782-1188-42b9-a89f-eb26c7876140", "address": "fa:16:3e:ee:36:e0", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b1ac782-11", "ovs_interfaceid": "3b1ac782-1188-42b9-a89f-eb26c7876140", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:43:26 np0005593232 nova_compute[250269]: 2026-01-23 10:43:26.238 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-f34f1af9-6c51-42ec-97f8-fb5bb146aeb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:43:26 np0005593232 nova_compute[250269]: 2026-01-23 10:43:26.238 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:43:26 np0005593232 nova_compute[250269]: 2026-01-23 10:43:26.239 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:43:26 np0005593232 nova_compute[250269]: 2026-01-23 10:43:26.239 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:43:26 np0005593232 nova_compute[250269]: 2026-01-23 10:43:26.239 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:43:26 np0005593232 nova_compute[250269]: 2026-01-23 10:43:26.241 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:26 np0005593232 nova_compute[250269]: 2026-01-23 10:43:26.265 250273 DEBUG nova.network.neutron [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Successfully updated port: db66e02a-ad5a-4ebb-9aab-112782b3fec7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:43:26 np0005593232 nova_compute[250269]: 2026-01-23 10:43:26.288 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:43:26 np0005593232 nova_compute[250269]: 2026-01-23 10:43:26.288 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:43:26 np0005593232 nova_compute[250269]: 2026-01-23 10:43:26.288 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:43:26 np0005593232 nova_compute[250269]: 2026-01-23 10:43:26.288 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:43:26 np0005593232 nova_compute[250269]: 2026-01-23 10:43:26.289 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:43:26 np0005593232 nova_compute[250269]: 2026-01-23 10:43:26.334 250273 DEBUG oslo_concurrency.lockutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "refresh_cache-9ae9a99b-039d-46f8-a3ca-42ee68bae3e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:43:26 np0005593232 nova_compute[250269]: 2026-01-23 10:43:26.335 250273 DEBUG oslo_concurrency.lockutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquired lock "refresh_cache-9ae9a99b-039d-46f8-a3ca-42ee68bae3e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:43:26 np0005593232 nova_compute[250269]: 2026-01-23 10:43:26.335 250273 DEBUG nova.network.neutron [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:43:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3495: 321 pgs: 321 active+clean; 647 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 127 KiB/s rd, 2.7 KiB/s wr, 207 op/s
Jan 23 05:43:26 np0005593232 nova_compute[250269]: 2026-01-23 10:43:26.498 250273 DEBUG nova.compute.manager [req-5aeff11b-817d-4fd3-87ea-e6cecd418d02 req-cbd8b582-a7cb-4899-9891-0067ac31ef73 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Received event network-changed-db66e02a-ad5a-4ebb-9aab-112782b3fec7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:43:26 np0005593232 nova_compute[250269]: 2026-01-23 10:43:26.498 250273 DEBUG nova.compute.manager [req-5aeff11b-817d-4fd3-87ea-e6cecd418d02 req-cbd8b582-a7cb-4899-9891-0067ac31ef73 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Refreshing instance network info cache due to event network-changed-db66e02a-ad5a-4ebb-9aab-112782b3fec7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:43:26 np0005593232 nova_compute[250269]: 2026-01-23 10:43:26.499 250273 DEBUG oslo_concurrency.lockutils [req-5aeff11b-817d-4fd3-87ea-e6cecd418d02 req-cbd8b582-a7cb-4899-9891-0067ac31ef73 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-9ae9a99b-039d-46f8-a3ca-42ee68bae3e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:43:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:43:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/105870309' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:43:26 np0005593232 nova_compute[250269]: 2026-01-23 10:43:26.784 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:43:26 np0005593232 nova_compute[250269]: 2026-01-23 10:43:26.893 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:43:26 np0005593232 nova_compute[250269]: 2026-01-23 10:43:26.893 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000c5 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:43:26 np0005593232 nova_compute[250269]: 2026-01-23 10:43:26.896 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000c6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:43:26 np0005593232 nova_compute[250269]: 2026-01-23 10:43:26.896 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000c6 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:43:27 np0005593232 nova_compute[250269]: 2026-01-23 10:43:27.011 250273 DEBUG nova.network.neutron [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:43:27 np0005593232 nova_compute[250269]: 2026-01-23 10:43:27.035 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:43:27 np0005593232 nova_compute[250269]: 2026-01-23 10:43:27.036 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3639MB free_disk=20.805469512939453GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:43:27 np0005593232 nova_compute[250269]: 2026-01-23 10:43:27.036 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:43:27 np0005593232 nova_compute[250269]: 2026-01-23 10:43:27.036 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:43:27 np0005593232 nova_compute[250269]: 2026-01-23 10:43:27.141 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 0888913c-71a6-45fe-97bf-9dddd2b7b521 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:43:27 np0005593232 nova_compute[250269]: 2026-01-23 10:43:27.142 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance f34f1af9-6c51-42ec-97f8-fb5bb146aeb6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:43:27 np0005593232 nova_compute[250269]: 2026-01-23 10:43:27.142 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:43:27 np0005593232 nova_compute[250269]: 2026-01-23 10:43:27.142 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:43:27 np0005593232 nova_compute[250269]: 2026-01-23 10:43:27.142 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:43:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:27.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:43:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:27.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:43:27 np0005593232 nova_compute[250269]: 2026-01-23 10:43:27.254 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:43:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:43:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2478274321' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:43:27 np0005593232 nova_compute[250269]: 2026-01-23 10:43:27.692 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:43:27 np0005593232 nova_compute[250269]: 2026-01-23 10:43:27.699 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:43:27 np0005593232 nova_compute[250269]: 2026-01-23 10:43:27.726 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:43:27 np0005593232 nova_compute[250269]: 2026-01-23 10:43:27.795 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:43:27 np0005593232 nova_compute[250269]: 2026-01-23 10:43:27.796 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.339 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3496: 321 pgs: 321 active+clean; 647 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 41 op/s
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.568 250273 DEBUG nova.network.neutron [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Updating instance_info_cache with network_info: [{"id": "db66e02a-ad5a-4ebb-9aab-112782b3fec7", "address": "fa:16:3e:cd:7a:9c", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb66e02a-ad", "ovs_interfaceid": "db66e02a-ad5a-4ebb-9aab-112782b3fec7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.626 250273 DEBUG oslo_concurrency.lockutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Releasing lock "refresh_cache-9ae9a99b-039d-46f8-a3ca-42ee68bae3e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.627 250273 DEBUG nova.compute.manager [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Instance network_info: |[{"id": "db66e02a-ad5a-4ebb-9aab-112782b3fec7", "address": "fa:16:3e:cd:7a:9c", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb66e02a-ad", "ovs_interfaceid": "db66e02a-ad5a-4ebb-9aab-112782b3fec7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.627 250273 DEBUG oslo_concurrency.lockutils [req-5aeff11b-817d-4fd3-87ea-e6cecd418d02 req-cbd8b582-a7cb-4899-9891-0067ac31ef73 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-9ae9a99b-039d-46f8-a3ca-42ee68bae3e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.628 250273 DEBUG nova.network.neutron [req-5aeff11b-817d-4fd3-87ea-e6cecd418d02 req-cbd8b582-a7cb-4899-9891-0067ac31ef73 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Refreshing network info cache for port db66e02a-ad5a-4ebb-9aab-112782b3fec7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.632 250273 DEBUG nova.virt.libvirt.driver [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Start _get_guest_xml network_info=[{"id": "db66e02a-ad5a-4ebb-9aab-112782b3fec7", "address": "fa:16:3e:cd:7a:9c", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb66e02a-ad", "ovs_interfaceid": "db66e02a-ad5a-4ebb-9aab-112782b3fec7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'delete_on_termination': False, 'attachment_id': '89187d7b-afed-4e1f-b3ab-486c0472db38', 'device_type': 'disk', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-d8ca748e-6bb8-4ae2-9729-d2be77ce520f', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'd8ca748e-6bb8-4ae2-9729-d2be77ce520f', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '9ae9a99b-039d-46f8-a3ca-42ee68bae3e8', 'attached_at': '', 'detached_at': '', 'volume_id': 'd8ca748e-6bb8-4ae2-9729-d2be77ce520f', 'serial': 'd8ca748e-6bb8-4ae2-9729-d2be77ce520f', 'multiattach': True}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.637 250273 WARNING nova.virt.libvirt.driver [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.644 250273 DEBUG nova.virt.libvirt.host [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.645 250273 DEBUG nova.virt.libvirt.host [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.652 250273 DEBUG nova.virt.libvirt.host [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.653 250273 DEBUG nova.virt.libvirt.host [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.655 250273 DEBUG nova.virt.libvirt.driver [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.656 250273 DEBUG nova.virt.hardware [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.656 250273 DEBUG nova.virt.hardware [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.657 250273 DEBUG nova.virt.hardware [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.657 250273 DEBUG nova.virt.hardware [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.658 250273 DEBUG nova.virt.hardware [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.658 250273 DEBUG nova.virt.hardware [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.658 250273 DEBUG nova.virt.hardware [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.659 250273 DEBUG nova.virt.hardware [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.659 250273 DEBUG nova.virt.hardware [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.660 250273 DEBUG nova.virt.hardware [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.660 250273 DEBUG nova.virt.hardware [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.697 250273 DEBUG nova.storage.rbd_utils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] rbd image 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:43:28 np0005593232 nova_compute[250269]: 2026-01-23 10:43:28.701 250273 DEBUG oslo_concurrency.processutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:43:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:43:29 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1269590756' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:43:29 np0005593232 nova_compute[250269]: 2026-01-23 10:43:29.149 250273 DEBUG oslo_concurrency.processutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
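The `ceph mon dump --format=json` call above returns the monitor map that supplies the RBD host/port lists seen later in the domain XML. A minimal sketch of consuming that JSON follows; the sample payload is hand-written to resemble a real `mon dump` reply (the `mons`/`public_addr` fields do exist in that output), not captured from this cluster.

```python
import json

# Hand-written stand-in shaped like `ceph mon dump --format=json` output;
# the addresses mirror the monitors appearing elsewhere in this log.
sample = json.dumps({
    "mons": [
        {"name": "a", "public_addr": "192.168.122.100:6789/0"},
        {"name": "b", "public_addr": "192.168.122.102:6789/0"},
        {"name": "c", "public_addr": "192.168.122.101:6789/0"},
    ]
})

def mon_hosts(mon_dump_json):
    """Extract (host, port) pairs from a mon dump; helper name is illustrative."""
    out = []
    for mon in json.loads(mon_dump_json)["mons"]:
        addr = mon["public_addr"].split("/")[0]   # drop the "/nonce" suffix
        host, port = addr.rsplit(":", 1)          # rsplit tolerates IPv6-style hosts
        out.append((host, port))
    return out

print(mon_hosts(sample))
```

The three (host, 6789) pairs recovered this way are exactly what lands in the `<host name=... port="6789"/>` elements of the RBD disk sources below.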
Jan 23 05:43:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:29.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:29.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:43:29 np0005593232 nova_compute[250269]: 2026-01-23 10:43:29.722 250273 DEBUG nova.virt.libvirt.vif [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:43:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-305908653',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-305908653',id=202,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6e762fca3b634c7aa1d994314c059c54',ramdisk_id='',reservation_id='r-dk2qyv5z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-63035580',owner_user_name='tempest-AttachVolumeMultiAttachTest-63035580-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01
-23T10:43:15Z,user_data=None,user_id='93cd560e84264023877c47122b5919de',uuid=9ae9a99b-039d-46f8-a3ca-42ee68bae3e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db66e02a-ad5a-4ebb-9aab-112782b3fec7", "address": "fa:16:3e:cd:7a:9c", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb66e02a-ad", "ovs_interfaceid": "db66e02a-ad5a-4ebb-9aab-112782b3fec7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:43:29 np0005593232 nova_compute[250269]: 2026-01-23 10:43:29.722 250273 DEBUG nova.network.os_vif_util [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Converting VIF {"id": "db66e02a-ad5a-4ebb-9aab-112782b3fec7", "address": "fa:16:3e:cd:7a:9c", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb66e02a-ad", "ovs_interfaceid": "db66e02a-ad5a-4ebb-9aab-112782b3fec7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:43:29 np0005593232 nova_compute[250269]: 2026-01-23 10:43:29.723 250273 DEBUG nova.network.os_vif_util [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:7a:9c,bridge_name='br-int',has_traffic_filtering=True,id=db66e02a-ad5a-4ebb-9aab-112782b3fec7,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb66e02a-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:43:29 np0005593232 nova_compute[250269]: 2026-01-23 10:43:29.724 250273 DEBUG nova.objects.instance [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:43:29 np0005593232 nova_compute[250269]: 2026-01-23 10:43:29.770 250273 DEBUG nova.virt.libvirt.driver [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:43:29 np0005593232 nova_compute[250269]:  <uuid>9ae9a99b-039d-46f8-a3ca-42ee68bae3e8</uuid>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:  <name>instance-000000ca</name>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <nova:name>tempest-AttachVolumeMultiAttachTest-server-305908653</nova:name>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:43:28</nova:creationTime>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:43:29 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:        <nova:user uuid="93cd560e84264023877c47122b5919de">tempest-AttachVolumeMultiAttachTest-63035580-project-member</nova:user>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:        <nova:project uuid="6e762fca3b634c7aa1d994314c059c54">tempest-AttachVolumeMultiAttachTest-63035580</nova:project>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:        <nova:port uuid="db66e02a-ad5a-4ebb-9aab-112782b3fec7">
Jan 23 05:43:29 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <entry name="serial">9ae9a99b-039d-46f8-a3ca-42ee68bae3e8</entry>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <entry name="uuid">9ae9a99b-039d-46f8-a3ca-42ee68bae3e8</entry>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/9ae9a99b-039d-46f8-a3ca-42ee68bae3e8_disk.config">
Jan 23 05:43:29 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:43:29 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="volumes/volume-d8ca748e-6bb8-4ae2-9729-d2be77ce520f">
Jan 23 05:43:29 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:43:29 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <serial>d8ca748e-6bb8-4ae2-9729-d2be77ce520f</serial>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <shareable/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:cd:7a:9c"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <target dev="tapdb66e02a-ad"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/9ae9a99b-039d-46f8-a3ca-42ee68bae3e8/console.log" append="off"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:43:29 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:43:29 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:43:29 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:43:29 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
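The domain XML emitted above is plain libvirt XML, so its key fields (instance UUID, memory in KiB, and the `<shareable/>` flag that marks the multi-attach volume on vda) can be pulled out with the standard library. The snippet below parses a trimmed, hand-written stand-in for the logged document, not the log text itself.

```python
import xml.etree.ElementTree as ET

# Trimmed stand-in for the domain XML logged by _get_guest_xml; element names
# and values are copied from the log, but most of the document is omitted.
domain_xml = """<domain type="kvm">
  <uuid>9ae9a99b-039d-46f8-a3ca-42ee68bae3e8</uuid>
  <memory>131072</memory>
  <vcpu>1</vcpu>
  <devices>
    <disk type="network" device="disk">
      <target dev="vda" bus="virtio"/>
      <shareable/>
    </disk>
  </devices>
</domain>"""

root = ET.fromstring(domain_xml)
uuid = root.findtext("uuid")
memory_kib = int(root.findtext("memory"))          # libvirt <memory> is KiB: 131072 KiB = 128 MiB
shared_disks = [d.find("target").get("dev")
                for d in root.findall("./devices/disk")
                if d.find("shareable") is not None]  # <shareable/> => multi-attach
print(uuid, memory_kib, shared_disks)
```

Note the 131072 KiB matches the flavor's 128 MiB, and vda's `<shareable/>` is what allows the AttachVolumeMultiAttachTest volume to be attached to more than one guest.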
Jan 23 05:43:29 np0005593232 nova_compute[250269]: 2026-01-23 10:43:29.772 250273 DEBUG nova.compute.manager [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Preparing to wait for external event network-vif-plugged-db66e02a-ad5a-4ebb-9aab-112782b3fec7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:43:29 np0005593232 nova_compute[250269]: 2026-01-23 10:43:29.773 250273 DEBUG oslo_concurrency.lockutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "9ae9a99b-039d-46f8-a3ca-42ee68bae3e8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:43:29 np0005593232 nova_compute[250269]: 2026-01-23 10:43:29.773 250273 DEBUG oslo_concurrency.lockutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "9ae9a99b-039d-46f8-a3ca-42ee68bae3e8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:43:29 np0005593232 nova_compute[250269]: 2026-01-23 10:43:29.774 250273 DEBUG oslo_concurrency.lockutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "9ae9a99b-039d-46f8-a3ca-42ee68bae3e8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:43:29 np0005593232 nova_compute[250269]: 2026-01-23 10:43:29.775 250273 DEBUG nova.virt.libvirt.vif [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:43:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-305908653',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-305908653',id=202,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6e762fca3b634c7aa1d994314c059c54',ramdisk_id='',reservation_id='r-dk2qyv5z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-AttachVolumeMultiAttachTest-63035580',owner_user_name='tempest-AttachVolumeMultiAttachTest-63035580-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_
at=2026-01-23T10:43:15Z,user_data=None,user_id='93cd560e84264023877c47122b5919de',uuid=9ae9a99b-039d-46f8-a3ca-42ee68bae3e8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "db66e02a-ad5a-4ebb-9aab-112782b3fec7", "address": "fa:16:3e:cd:7a:9c", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb66e02a-ad", "ovs_interfaceid": "db66e02a-ad5a-4ebb-9aab-112782b3fec7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:43:29 np0005593232 nova_compute[250269]: 2026-01-23 10:43:29.775 250273 DEBUG nova.network.os_vif_util [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Converting VIF {"id": "db66e02a-ad5a-4ebb-9aab-112782b3fec7", "address": "fa:16:3e:cd:7a:9c", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb66e02a-ad", "ovs_interfaceid": "db66e02a-ad5a-4ebb-9aab-112782b3fec7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:43:29 np0005593232 nova_compute[250269]: 2026-01-23 10:43:29.776 250273 DEBUG nova.network.os_vif_util [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:7a:9c,bridge_name='br-int',has_traffic_filtering=True,id=db66e02a-ad5a-4ebb-9aab-112782b3fec7,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb66e02a-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:43:29 np0005593232 nova_compute[250269]: 2026-01-23 10:43:29.777 250273 DEBUG os_vif [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:7a:9c,bridge_name='br-int',has_traffic_filtering=True,id=db66e02a-ad5a-4ebb-9aab-112782b3fec7,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb66e02a-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:43:29 np0005593232 nova_compute[250269]: 2026-01-23 10:43:29.778 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:29 np0005593232 nova_compute[250269]: 2026-01-23 10:43:29.778 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:43:29 np0005593232 nova_compute[250269]: 2026-01-23 10:43:29.779 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:43:29 np0005593232 nova_compute[250269]: 2026-01-23 10:43:29.791 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:29 np0005593232 nova_compute[250269]: 2026-01-23 10:43:29.792 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb66e02a-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:43:29 np0005593232 nova_compute[250269]: 2026-01-23 10:43:29.793 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdb66e02a-ad, col_values=(('external_ids', {'iface-id': 'db66e02a-ad5a-4ebb-9aab-112782b3fec7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:7a:9c', 'vm-uuid': '9ae9a99b-039d-46f8-a3ca-42ee68bae3e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:43:29 np0005593232 nova_compute[250269]: 2026-01-23 10:43:29.797 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:29 np0005593232 NetworkManager[49057]: <info>  [1769165009.7996] manager: (tapdb66e02a-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/372)
Jan 23 05:43:29 np0005593232 nova_compute[250269]: 2026-01-23 10:43:29.804 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:43:29 np0005593232 nova_compute[250269]: 2026-01-23 10:43:29.809 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:29 np0005593232 nova_compute[250269]: 2026-01-23 10:43:29.810 250273 INFO os_vif [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:7a:9c,bridge_name='br-int',has_traffic_filtering=True,id=db66e02a-ad5a-4ebb-9aab-112782b3fec7,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb66e02a-ad')#033[00m
Jan 23 05:43:30 np0005593232 nova_compute[250269]: 2026-01-23 10:43:30.007 250273 DEBUG nova.virt.libvirt.driver [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:43:30 np0005593232 nova_compute[250269]: 2026-01-23 10:43:30.007 250273 DEBUG nova.virt.libvirt.driver [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:43:30 np0005593232 nova_compute[250269]: 2026-01-23 10:43:30.007 250273 DEBUG nova.virt.libvirt.driver [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] No VIF found with MAC fa:16:3e:cd:7a:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:43:30 np0005593232 nova_compute[250269]: 2026-01-23 10:43:30.008 250273 INFO nova.virt.libvirt.driver [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Using config drive#033[00m
Jan 23 05:43:30 np0005593232 nova_compute[250269]: 2026-01-23 10:43:30.035 250273 DEBUG nova.storage.rbd_utils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] rbd image 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:43:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3497: 321 pgs: 321 active+clean; 647 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 41 op/s
Jan 23 05:43:30 np0005593232 nova_compute[250269]: 2026-01-23 10:43:30.924 250273 INFO nova.virt.libvirt.driver [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Creating config drive at /var/lib/nova/instances/9ae9a99b-039d-46f8-a3ca-42ee68bae3e8/disk.config#033[00m
Jan 23 05:43:30 np0005593232 nova_compute[250269]: 2026-01-23 10:43:30.934 250273 DEBUG oslo_concurrency.processutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9ae9a99b-039d-46f8-a3ca-42ee68bae3e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph4x24uvb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:43:31 np0005593232 nova_compute[250269]: 2026-01-23 10:43:31.038 250273 DEBUG nova.network.neutron [req-5aeff11b-817d-4fd3-87ea-e6cecd418d02 req-cbd8b582-a7cb-4899-9891-0067ac31ef73 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Updated VIF entry in instance network info cache for port db66e02a-ad5a-4ebb-9aab-112782b3fec7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:43:31 np0005593232 nova_compute[250269]: 2026-01-23 10:43:31.040 250273 DEBUG nova.network.neutron [req-5aeff11b-817d-4fd3-87ea-e6cecd418d02 req-cbd8b582-a7cb-4899-9891-0067ac31ef73 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Updating instance_info_cache with network_info: [{"id": "db66e02a-ad5a-4ebb-9aab-112782b3fec7", "address": "fa:16:3e:cd:7a:9c", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb66e02a-ad", "ovs_interfaceid": "db66e02a-ad5a-4ebb-9aab-112782b3fec7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:43:31 np0005593232 nova_compute[250269]: 2026-01-23 10:43:31.091 250273 DEBUG oslo_concurrency.processutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9ae9a99b-039d-46f8-a3ca-42ee68bae3e8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph4x24uvb" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:43:31 np0005593232 nova_compute[250269]: 2026-01-23 10:43:31.142 250273 DEBUG nova.storage.rbd_utils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] rbd image 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:43:31 np0005593232 nova_compute[250269]: 2026-01-23 10:43:31.149 250273 DEBUG oslo_concurrency.processutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9ae9a99b-039d-46f8-a3ca-42ee68bae3e8/disk.config 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:43:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:43:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:31.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:43:31 np0005593232 nova_compute[250269]: 2026-01-23 10:43:31.246 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:43:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:31.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:43:31 np0005593232 nova_compute[250269]: 2026-01-23 10:43:31.407 250273 DEBUG oslo_concurrency.processutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9ae9a99b-039d-46f8-a3ca-42ee68bae3e8/disk.config 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:43:31 np0005593232 nova_compute[250269]: 2026-01-23 10:43:31.408 250273 INFO nova.virt.libvirt.driver [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Deleting local config drive /var/lib/nova/instances/9ae9a99b-039d-46f8-a3ca-42ee68bae3e8/disk.config because it was imported into RBD.#033[00m
Jan 23 05:43:31 np0005593232 kernel: tapdb66e02a-ad: entered promiscuous mode
Jan 23 05:43:31 np0005593232 NetworkManager[49057]: <info>  [1769165011.4919] manager: (tapdb66e02a-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/373)
Jan 23 05:43:31 np0005593232 ovn_controller[151001]: 2026-01-23T10:43:31Z|00789|binding|INFO|Claiming lport db66e02a-ad5a-4ebb-9aab-112782b3fec7 for this chassis.
Jan 23 05:43:31 np0005593232 ovn_controller[151001]: 2026-01-23T10:43:31Z|00790|binding|INFO|db66e02a-ad5a-4ebb-9aab-112782b3fec7: Claiming fa:16:3e:cd:7a:9c 10.100.0.11
Jan 23 05:43:31 np0005593232 nova_compute[250269]: 2026-01-23 10:43:31.493 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:31 np0005593232 ovn_controller[151001]: 2026-01-23T10:43:31Z|00791|binding|INFO|Setting lport db66e02a-ad5a-4ebb-9aab-112782b3fec7 ovn-installed in OVS
Jan 23 05:43:31 np0005593232 nova_compute[250269]: 2026-01-23 10:43:31.518 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:31 np0005593232 systemd-udevd[390042]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:43:31 np0005593232 nova_compute[250269]: 2026-01-23 10:43:31.525 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:31 np0005593232 systemd-machined[215836]: New machine qemu-90-instance-000000ca.
Jan 23 05:43:31 np0005593232 NetworkManager[49057]: <info>  [1769165011.5512] device (tapdb66e02a-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:43:31 np0005593232 NetworkManager[49057]: <info>  [1769165011.5529] device (tapdb66e02a-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:43:31 np0005593232 systemd[1]: Started Virtual Machine qemu-90-instance-000000ca.
Jan 23 05:43:31 np0005593232 nova_compute[250269]: 2026-01-23 10:43:31.772 250273 DEBUG oslo_concurrency.lockutils [req-5aeff11b-817d-4fd3-87ea-e6cecd418d02 req-cbd8b582-a7cb-4899-9891-0067ac31ef73 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-9ae9a99b-039d-46f8-a3ca-42ee68bae3e8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:43:32 np0005593232 nova_compute[250269]: 2026-01-23 10:43:32.134 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769165012.1336281, 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:43:32 np0005593232 nova_compute[250269]: 2026-01-23 10:43:32.134 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] VM Started (Lifecycle Event)#033[00m
Jan 23 05:43:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3498: 321 pgs: 321 active+clean; 647 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 2.0 KiB/s wr, 35 op/s
Jan 23 05:43:32 np0005593232 ovn_controller[151001]: 2026-01-23T10:43:32Z|00792|binding|INFO|Setting lport db66e02a-ad5a-4ebb-9aab-112782b3fec7 up in Southbound
Jan 23 05:43:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:32.769 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:7a:9c 10.100.0.11'], port_security=['fa:16:3e:cd:7a:9c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9ae9a99b-039d-46f8-a3ca-42ee68bae3e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6e762fca3b634c7aa1d994314c059c54', 'neutron:revision_number': '2', 'neutron:security_group_ids': '48274d25-9599-424c-bfd1-ff8c0b4eb8cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0936335-b706-4400-8411-bdd084c8cdf7, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=db66e02a-ad5a-4ebb-9aab-112782b3fec7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:43:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:32.772 161902 INFO neutron.agent.ovn.metadata.agent [-] Port db66e02a-ad5a-4ebb-9aab-112782b3fec7 in datapath fba2ba4a-d82c-4f8b-9754-c13fbec41a04 bound to our chassis#033[00m
Jan 23 05:43:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:32.775 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fba2ba4a-d82c-4f8b-9754-c13fbec41a04#033[00m
Jan 23 05:43:32 np0005593232 nova_compute[250269]: 2026-01-23 10:43:32.778 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:43:32 np0005593232 nova_compute[250269]: 2026-01-23 10:43:32.784 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769165012.1344872, 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:43:32 np0005593232 nova_compute[250269]: 2026-01-23 10:43:32.785 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:43:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:32.804 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9c8af661-b572-4f79-9494-b6bf761f0f1a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:32 np0005593232 nova_compute[250269]: 2026-01-23 10:43:32.816 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:43:32 np0005593232 nova_compute[250269]: 2026-01-23 10:43:32.822 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:43:32 np0005593232 nova_compute[250269]: 2026-01-23 10:43:32.851 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:43:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:32.859 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[c5d20946-eb7d-4de8-a0e5-1086e5e3215b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:32.865 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[18093ca0-d4f1-40ed-958f-3ada0917d262]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:32.907 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[8334d2e7-6117-49e8-8246-6faf8d7c4f5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:32.931 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5134c9f8-5de5-46be-9949-aacf0364d499]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfba2ba4a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:db:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 236], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 877546, 'reachable_time': 25719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390102, 'error': None, 'target': 'ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:32.955 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[47b6b849-ed9d-42fa-aa1e-5a1dd42fbabb]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfba2ba4a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 877561, 'tstamp': 877561}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390103, 'error': None, 'target': 'ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfba2ba4a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 877566, 'tstamp': 877566}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390103, 'error': None, 'target': 'ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:32.957 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfba2ba4a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:43:32 np0005593232 nova_compute[250269]: 2026-01-23 10:43:32.959 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:32 np0005593232 nova_compute[250269]: 2026-01-23 10:43:32.960 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:32.962 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfba2ba4a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:43:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:32.962 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:43:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:32.963 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfba2ba4a-d0, col_values=(('external_ids', {'iface-id': '2348ddba-3dc3-4456-a637-f3065ba0d8f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:43:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:32.964 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:43:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:33.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:33.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:33 np0005593232 nova_compute[250269]: 2026-01-23 10:43:33.345 250273 DEBUG nova.compute.manager [req-9e009ade-be56-4e19-84df-19b056744bb5 req-e3393ac3-987b-449d-b72e-1f6ba07e8241 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Received event network-vif-plugged-db66e02a-ad5a-4ebb-9aab-112782b3fec7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:43:33 np0005593232 nova_compute[250269]: 2026-01-23 10:43:33.346 250273 DEBUG oslo_concurrency.lockutils [req-9e009ade-be56-4e19-84df-19b056744bb5 req-e3393ac3-987b-449d-b72e-1f6ba07e8241 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9ae9a99b-039d-46f8-a3ca-42ee68bae3e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:43:33 np0005593232 nova_compute[250269]: 2026-01-23 10:43:33.346 250273 DEBUG oslo_concurrency.lockutils [req-9e009ade-be56-4e19-84df-19b056744bb5 req-e3393ac3-987b-449d-b72e-1f6ba07e8241 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9ae9a99b-039d-46f8-a3ca-42ee68bae3e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:43:33 np0005593232 nova_compute[250269]: 2026-01-23 10:43:33.347 250273 DEBUG oslo_concurrency.lockutils [req-9e009ade-be56-4e19-84df-19b056744bb5 req-e3393ac3-987b-449d-b72e-1f6ba07e8241 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9ae9a99b-039d-46f8-a3ca-42ee68bae3e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:43:33 np0005593232 nova_compute[250269]: 2026-01-23 10:43:33.347 250273 DEBUG nova.compute.manager [req-9e009ade-be56-4e19-84df-19b056744bb5 req-e3393ac3-987b-449d-b72e-1f6ba07e8241 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Processing event network-vif-plugged-db66e02a-ad5a-4ebb-9aab-112782b3fec7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:43:33 np0005593232 nova_compute[250269]: 2026-01-23 10:43:33.348 250273 DEBUG nova.compute.manager [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:43:33 np0005593232 nova_compute[250269]: 2026-01-23 10:43:33.354 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769165013.3538508, 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:43:33 np0005593232 nova_compute[250269]: 2026-01-23 10:43:33.355 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:43:33 np0005593232 nova_compute[250269]: 2026-01-23 10:43:33.359 250273 DEBUG nova.virt.libvirt.driver [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:43:33 np0005593232 nova_compute[250269]: 2026-01-23 10:43:33.363 250273 INFO nova.virt.libvirt.driver [-] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Instance spawned successfully.#033[00m
Jan 23 05:43:33 np0005593232 nova_compute[250269]: 2026-01-23 10:43:33.364 250273 DEBUG nova.virt.libvirt.driver [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:43:33 np0005593232 nova_compute[250269]: 2026-01-23 10:43:33.407 250273 DEBUG nova.virt.libvirt.driver [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:43:33 np0005593232 nova_compute[250269]: 2026-01-23 10:43:33.408 250273 DEBUG nova.virt.libvirt.driver [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:43:33 np0005593232 nova_compute[250269]: 2026-01-23 10:43:33.409 250273 DEBUG nova.virt.libvirt.driver [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:43:33 np0005593232 nova_compute[250269]: 2026-01-23 10:43:33.410 250273 DEBUG nova.virt.libvirt.driver [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:43:33 np0005593232 nova_compute[250269]: 2026-01-23 10:43:33.411 250273 DEBUG nova.virt.libvirt.driver [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:43:33 np0005593232 nova_compute[250269]: 2026-01-23 10:43:33.411 250273 DEBUG nova.virt.libvirt.driver [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:43:33 np0005593232 nova_compute[250269]: 2026-01-23 10:43:33.549 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:43:33 np0005593232 nova_compute[250269]: 2026-01-23 10:43:33.560 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:43:33 np0005593232 nova_compute[250269]: 2026-01-23 10:43:33.627 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:43:33 np0005593232 nova_compute[250269]: 2026-01-23 10:43:33.673 250273 INFO nova.compute.manager [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Took 9.56 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:43:33 np0005593232 nova_compute[250269]: 2026-01-23 10:43:33.674 250273 DEBUG nova.compute.manager [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:43:33 np0005593232 nova_compute[250269]: 2026-01-23 10:43:33.836 250273 INFO nova.compute.manager [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Took 21.92 seconds to build instance.#033[00m
Jan 23 05:43:33 np0005593232 nova_compute[250269]: 2026-01-23 10:43:33.882 250273 DEBUG oslo_concurrency.lockutils [None req-0fbf7ed3-d0e1-4841-bc73-b13a24697436 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "9ae9a99b-039d-46f8-a3ca-42ee68bae3e8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 22.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:43:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3499: 321 pgs: 321 active+clean; 647 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 14 KiB/s wr, 20 op/s
Jan 23 05:43:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:43:34 np0005593232 nova_compute[250269]: 2026-01-23 10:43:34.849 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:35.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:35.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:35 np0005593232 nova_compute[250269]: 2026-01-23 10:43:35.513 250273 DEBUG nova.compute.manager [req-eaefa0d4-e04d-49ba-9428-16539166a605 req-496d726e-5884-430f-843d-eb7195668bd9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Received event network-vif-plugged-db66e02a-ad5a-4ebb-9aab-112782b3fec7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:43:35 np0005593232 nova_compute[250269]: 2026-01-23 10:43:35.515 250273 DEBUG oslo_concurrency.lockutils [req-eaefa0d4-e04d-49ba-9428-16539166a605 req-496d726e-5884-430f-843d-eb7195668bd9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9ae9a99b-039d-46f8-a3ca-42ee68bae3e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:43:35 np0005593232 nova_compute[250269]: 2026-01-23 10:43:35.516 250273 DEBUG oslo_concurrency.lockutils [req-eaefa0d4-e04d-49ba-9428-16539166a605 req-496d726e-5884-430f-843d-eb7195668bd9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9ae9a99b-039d-46f8-a3ca-42ee68bae3e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:43:35 np0005593232 nova_compute[250269]: 2026-01-23 10:43:35.516 250273 DEBUG oslo_concurrency.lockutils [req-eaefa0d4-e04d-49ba-9428-16539166a605 req-496d726e-5884-430f-843d-eb7195668bd9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9ae9a99b-039d-46f8-a3ca-42ee68bae3e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:43:35 np0005593232 nova_compute[250269]: 2026-01-23 10:43:35.516 250273 DEBUG nova.compute.manager [req-eaefa0d4-e04d-49ba-9428-16539166a605 req-496d726e-5884-430f-843d-eb7195668bd9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] No waiting events found dispatching network-vif-plugged-db66e02a-ad5a-4ebb-9aab-112782b3fec7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:43:35 np0005593232 nova_compute[250269]: 2026-01-23 10:43:35.517 250273 WARNING nova.compute.manager [req-eaefa0d4-e04d-49ba-9428-16539166a605 req-496d726e-5884-430f-843d-eb7195668bd9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Received unexpected event network-vif-plugged-db66e02a-ad5a-4ebb-9aab-112782b3fec7 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:43:36 np0005593232 nova_compute[250269]: 2026-01-23 10:43:36.249 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3500: 321 pgs: 321 active+clean; 647 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Jan 23 05:43:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:37.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:43:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:37.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:43:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:43:37
Jan 23 05:43:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:43:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:43:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.log', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'volumes', '.rgw.root']
Jan 23 05:43:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:43:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e390 do_prune osdmap full prune enabled
Jan 23 05:43:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e391 e391: 3 total, 3 up, 3 in
Jan 23 05:43:37 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e391: 3 total, 3 up, 3 in
Jan 23 05:43:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:43:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:43:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:43:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:43:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:43:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:43:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3502: 321 pgs: 321 active+clean; 647 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 98 op/s
Jan 23 05:43:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e391 do_prune osdmap full prune enabled
Jan 23 05:43:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e392 e392: 3 total, 3 up, 3 in
Jan 23 05:43:38 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e392: 3 total, 3 up, 3 in
Jan 23 05:43:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:43:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:43:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:43:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:43:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:43:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:43:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:43:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:43:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:43:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:43:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:39.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:39.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:43:39 np0005593232 nova_compute[250269]: 2026-01-23 10:43:39.852 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3504: 321 pgs: 321 active+clean; 647 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 122 op/s
Jan 23 05:43:40 np0005593232 nova_compute[250269]: 2026-01-23 10:43:40.665 250273 DEBUG oslo_concurrency.lockutils [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "9ae9a99b-039d-46f8-a3ca-42ee68bae3e8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:43:40 np0005593232 nova_compute[250269]: 2026-01-23 10:43:40.666 250273 DEBUG oslo_concurrency.lockutils [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "9ae9a99b-039d-46f8-a3ca-42ee68bae3e8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:43:40 np0005593232 nova_compute[250269]: 2026-01-23 10:43:40.666 250273 DEBUG oslo_concurrency.lockutils [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "9ae9a99b-039d-46f8-a3ca-42ee68bae3e8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:43:40 np0005593232 nova_compute[250269]: 2026-01-23 10:43:40.666 250273 DEBUG oslo_concurrency.lockutils [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "9ae9a99b-039d-46f8-a3ca-42ee68bae3e8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:43:40 np0005593232 nova_compute[250269]: 2026-01-23 10:43:40.667 250273 DEBUG oslo_concurrency.lockutils [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "9ae9a99b-039d-46f8-a3ca-42ee68bae3e8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:43:40 np0005593232 nova_compute[250269]: 2026-01-23 10:43:40.668 250273 INFO nova.compute.manager [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Terminating instance#033[00m
Jan 23 05:43:40 np0005593232 nova_compute[250269]: 2026-01-23 10:43:40.669 250273 DEBUG nova.compute.manager [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:43:40 np0005593232 kernel: tapdb66e02a-ad (unregistering): left promiscuous mode
Jan 23 05:43:40 np0005593232 NetworkManager[49057]: <info>  [1769165020.7230] device (tapdb66e02a-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:43:40 np0005593232 ovn_controller[151001]: 2026-01-23T10:43:40Z|00793|binding|INFO|Releasing lport db66e02a-ad5a-4ebb-9aab-112782b3fec7 from this chassis (sb_readonly=0)
Jan 23 05:43:40 np0005593232 ovn_controller[151001]: 2026-01-23T10:43:40Z|00794|binding|INFO|Setting lport db66e02a-ad5a-4ebb-9aab-112782b3fec7 down in Southbound
Jan 23 05:43:40 np0005593232 nova_compute[250269]: 2026-01-23 10:43:40.731 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:40 np0005593232 ovn_controller[151001]: 2026-01-23T10:43:40Z|00795|binding|INFO|Removing iface tapdb66e02a-ad ovn-installed in OVS
Jan 23 05:43:40 np0005593232 nova_compute[250269]: 2026-01-23 10:43:40.733 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:40.738 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:7a:9c 10.100.0.11'], port_security=['fa:16:3e:cd:7a:9c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '9ae9a99b-039d-46f8-a3ca-42ee68bae3e8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6e762fca3b634c7aa1d994314c059c54', 'neutron:revision_number': '4', 'neutron:security_group_ids': '48274d25-9599-424c-bfd1-ff8c0b4eb8cc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0936335-b706-4400-8411-bdd084c8cdf7, chassis=[], tunnel_key=8, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=db66e02a-ad5a-4ebb-9aab-112782b3fec7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:43:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:40.739 161902 INFO neutron.agent.ovn.metadata.agent [-] Port db66e02a-ad5a-4ebb-9aab-112782b3fec7 in datapath fba2ba4a-d82c-4f8b-9754-c13fbec41a04 unbound from our chassis#033[00m
Jan 23 05:43:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:40.740 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fba2ba4a-d82c-4f8b-9754-c13fbec41a04#033[00m
Jan 23 05:43:40 np0005593232 nova_compute[250269]: 2026-01-23 10:43:40.749 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:40.761 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[25a8bab0-3440-42cd-a6c4-599f001f7355]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:40 np0005593232 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000ca.scope: Deactivated successfully.
Jan 23 05:43:40 np0005593232 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d000000ca.scope: Consumed 8.081s CPU time.
Jan 23 05:43:40 np0005593232 systemd-machined[215836]: Machine qemu-90-instance-000000ca terminated.
Jan 23 05:43:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:40.802 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[e8f20230-a05d-42bc-817d-7773f9c3a6f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:40.806 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[bb9393e9-0761-4e7a-b457-ec30b7512fef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:40.834 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[bd113339-a37b-495b-9361-f28b41dbd61f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:40.851 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[76405886-45d9-4aff-82f3-2b7ebc1b1ebb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfba2ba4a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:db:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 236], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 877546, 'reachable_time': 25719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390119, 'error': None, 'target': 'ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:40.869 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6f44e5ed-a23c-4c74-bb57-f9930c828b0e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfba2ba4a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 877561, 'tstamp': 877561}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390120, 'error': None, 'target': 'ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfba2ba4a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 877566, 'tstamp': 877566}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390120, 'error': None, 'target': 'ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:40.871 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfba2ba4a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:43:40 np0005593232 nova_compute[250269]: 2026-01-23 10:43:40.874 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:40 np0005593232 nova_compute[250269]: 2026-01-23 10:43:40.879 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:40.879 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfba2ba4a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:43:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:40.880 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:43:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:40.881 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfba2ba4a-d0, col_values=(('external_ids', {'iface-id': '2348ddba-3dc3-4456-a637-f3065ba0d8f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:43:40 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:40.881 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:43:40 np0005593232 nova_compute[250269]: 2026-01-23 10:43:40.910 250273 INFO nova.virt.libvirt.driver [-] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Instance destroyed successfully.#033[00m
Jan 23 05:43:40 np0005593232 nova_compute[250269]: 2026-01-23 10:43:40.910 250273 DEBUG nova.objects.instance [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lazy-loading 'resources' on Instance uuid 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:43:40 np0005593232 nova_compute[250269]: 2026-01-23 10:43:40.945 250273 DEBUG nova.virt.libvirt.vif [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:43:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachVolumeMultiAttachTest-server-305908653',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachvolumemultiattachtest-server-305908653',id=202,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:43:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6e762fca3b634c7aa1d994314c059c54',ramdisk_id='',reservation_id='r-dk2qyv5z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-AttachVolumeMultiAttachTest-63035580',owner_user_name='tempest-AttachVolumeMultiAttachTest-63035580-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:43:33Z,user_data=None,user_id='93cd560e84264023877c47122b5919de',uuid=9ae9a99b-039d-46f8-a3ca-42ee68bae3e8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "db66e02a-ad5a-4ebb-9aab-112782b3fec7", "address": "fa:16:3e:cd:7a:9c", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb66e02a-ad", "ovs_interfaceid": "db66e02a-ad5a-4ebb-9aab-112782b3fec7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:43:40 np0005593232 nova_compute[250269]: 2026-01-23 10:43:40.945 250273 DEBUG nova.network.os_vif_util [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Converting VIF {"id": "db66e02a-ad5a-4ebb-9aab-112782b3fec7", "address": "fa:16:3e:cd:7a:9c", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdb66e02a-ad", "ovs_interfaceid": "db66e02a-ad5a-4ebb-9aab-112782b3fec7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:43:40 np0005593232 nova_compute[250269]: 2026-01-23 10:43:40.946 250273 DEBUG nova.network.os_vif_util [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:7a:9c,bridge_name='br-int',has_traffic_filtering=True,id=db66e02a-ad5a-4ebb-9aab-112782b3fec7,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb66e02a-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:43:40 np0005593232 nova_compute[250269]: 2026-01-23 10:43:40.947 250273 DEBUG os_vif [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:7a:9c,bridge_name='br-int',has_traffic_filtering=True,id=db66e02a-ad5a-4ebb-9aab-112782b3fec7,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb66e02a-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:43:40 np0005593232 nova_compute[250269]: 2026-01-23 10:43:40.949 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:40 np0005593232 nova_compute[250269]: 2026-01-23 10:43:40.949 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb66e02a-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:43:40 np0005593232 nova_compute[250269]: 2026-01-23 10:43:40.950 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:40 np0005593232 nova_compute[250269]: 2026-01-23 10:43:40.953 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:40 np0005593232 nova_compute[250269]: 2026-01-23 10:43:40.955 250273 INFO os_vif [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:7a:9c,bridge_name='br-int',has_traffic_filtering=True,id=db66e02a-ad5a-4ebb-9aab-112782b3fec7,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdb66e02a-ad')#033[00m
Jan 23 05:43:41 np0005593232 nova_compute[250269]: 2026-01-23 10:43:41.250 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:41.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:43:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:41.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:43:41 np0005593232 nova_compute[250269]: 2026-01-23 10:43:41.278 250273 DEBUG nova.compute.manager [req-84828284-93d4-4aca-a217-bb93c7bb53a1 req-97e0108f-26c0-4d40-9665-50576fc01560 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Received event network-vif-unplugged-db66e02a-ad5a-4ebb-9aab-112782b3fec7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:43:41 np0005593232 nova_compute[250269]: 2026-01-23 10:43:41.278 250273 DEBUG oslo_concurrency.lockutils [req-84828284-93d4-4aca-a217-bb93c7bb53a1 req-97e0108f-26c0-4d40-9665-50576fc01560 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9ae9a99b-039d-46f8-a3ca-42ee68bae3e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:43:41 np0005593232 nova_compute[250269]: 2026-01-23 10:43:41.279 250273 DEBUG oslo_concurrency.lockutils [req-84828284-93d4-4aca-a217-bb93c7bb53a1 req-97e0108f-26c0-4d40-9665-50576fc01560 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9ae9a99b-039d-46f8-a3ca-42ee68bae3e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:43:41 np0005593232 nova_compute[250269]: 2026-01-23 10:43:41.279 250273 DEBUG oslo_concurrency.lockutils [req-84828284-93d4-4aca-a217-bb93c7bb53a1 req-97e0108f-26c0-4d40-9665-50576fc01560 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9ae9a99b-039d-46f8-a3ca-42ee68bae3e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:43:41 np0005593232 nova_compute[250269]: 2026-01-23 10:43:41.279 250273 DEBUG nova.compute.manager [req-84828284-93d4-4aca-a217-bb93c7bb53a1 req-97e0108f-26c0-4d40-9665-50576fc01560 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] No waiting events found dispatching network-vif-unplugged-db66e02a-ad5a-4ebb-9aab-112782b3fec7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:43:41 np0005593232 nova_compute[250269]: 2026-01-23 10:43:41.280 250273 DEBUG nova.compute.manager [req-84828284-93d4-4aca-a217-bb93c7bb53a1 req-97e0108f-26c0-4d40-9665-50576fc01560 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Received event network-vif-unplugged-db66e02a-ad5a-4ebb-9aab-112782b3fec7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:43:41 np0005593232 nova_compute[250269]: 2026-01-23 10:43:41.535 250273 INFO nova.virt.libvirt.driver [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Deleting instance files /var/lib/nova/instances/9ae9a99b-039d-46f8-a3ca-42ee68bae3e8_del#033[00m
Jan 23 05:43:41 np0005593232 nova_compute[250269]: 2026-01-23 10:43:41.537 250273 INFO nova.virt.libvirt.driver [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Deletion of /var/lib/nova/instances/9ae9a99b-039d-46f8-a3ca-42ee68bae3e8_del complete#033[00m
Jan 23 05:43:41 np0005593232 nova_compute[250269]: 2026-01-23 10:43:41.597 250273 INFO nova.compute.manager [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:43:41 np0005593232 nova_compute[250269]: 2026-01-23 10:43:41.598 250273 DEBUG oslo.service.loopingcall [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:43:41 np0005593232 nova_compute[250269]: 2026-01-23 10:43:41.598 250273 DEBUG nova.compute.manager [-] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:43:41 np0005593232 nova_compute[250269]: 2026-01-23 10:43:41.598 250273 DEBUG nova.network.neutron [-] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:43:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3505: 321 pgs: 321 active+clean; 647 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 5.9 KiB/s wr, 126 op/s
Jan 23 05:43:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:42.661 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:43:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:42.662 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:43:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:42.662 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:43:42 np0005593232 nova_compute[250269]: 2026-01-23 10:43:42.883 250273 DEBUG nova.network.neutron [-] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:43:42 np0005593232 nova_compute[250269]: 2026-01-23 10:43:42.922 250273 INFO nova.compute.manager [-] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Took 1.32 seconds to deallocate network for instance.#033[00m
Jan 23 05:43:43 np0005593232 nova_compute[250269]: 2026-01-23 10:43:43.005 250273 DEBUG nova.compute.manager [req-66a38f6b-e1d2-4adb-87b0-2119d911e999 req-b8b4c90d-8b16-48fa-a6ac-8e9a5458113f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Received event network-vif-deleted-db66e02a-ad5a-4ebb-9aab-112782b3fec7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:43:43 np0005593232 nova_compute[250269]: 2026-01-23 10:43:43.198 250273 INFO nova.compute.manager [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Took 0.28 seconds to detach 1 volumes for instance.#033[00m
Jan 23 05:43:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:43.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:43 np0005593232 nova_compute[250269]: 2026-01-23 10:43:43.260 250273 DEBUG oslo_concurrency.lockutils [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:43:43 np0005593232 nova_compute[250269]: 2026-01-23 10:43:43.261 250273 DEBUG oslo_concurrency.lockutils [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:43:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:43.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:43 np0005593232 nova_compute[250269]: 2026-01-23 10:43:43.406 250273 DEBUG oslo_concurrency.processutils [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:43:43 np0005593232 nova_compute[250269]: 2026-01-23 10:43:43.440 250273 DEBUG nova.compute.manager [req-be56a4ce-afef-4bbf-8e8d-19e3a10676aa req-66e4efca-4e6d-424f-9fed-9fc7e712a3d6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Received event network-vif-plugged-db66e02a-ad5a-4ebb-9aab-112782b3fec7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:43:43 np0005593232 nova_compute[250269]: 2026-01-23 10:43:43.440 250273 DEBUG oslo_concurrency.lockutils [req-be56a4ce-afef-4bbf-8e8d-19e3a10676aa req-66e4efca-4e6d-424f-9fed-9fc7e712a3d6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "9ae9a99b-039d-46f8-a3ca-42ee68bae3e8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:43:43 np0005593232 nova_compute[250269]: 2026-01-23 10:43:43.441 250273 DEBUG oslo_concurrency.lockutils [req-be56a4ce-afef-4bbf-8e8d-19e3a10676aa req-66e4efca-4e6d-424f-9fed-9fc7e712a3d6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9ae9a99b-039d-46f8-a3ca-42ee68bae3e8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:43:43 np0005593232 nova_compute[250269]: 2026-01-23 10:43:43.441 250273 DEBUG oslo_concurrency.lockutils [req-be56a4ce-afef-4bbf-8e8d-19e3a10676aa req-66e4efca-4e6d-424f-9fed-9fc7e712a3d6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "9ae9a99b-039d-46f8-a3ca-42ee68bae3e8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:43:43 np0005593232 nova_compute[250269]: 2026-01-23 10:43:43.441 250273 DEBUG nova.compute.manager [req-be56a4ce-afef-4bbf-8e8d-19e3a10676aa req-66e4efca-4e6d-424f-9fed-9fc7e712a3d6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] No waiting events found dispatching network-vif-plugged-db66e02a-ad5a-4ebb-9aab-112782b3fec7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:43:43 np0005593232 nova_compute[250269]: 2026-01-23 10:43:43.441 250273 WARNING nova.compute.manager [req-be56a4ce-afef-4bbf-8e8d-19e3a10676aa req-66e4efca-4e6d-424f-9fed-9fc7e712a3d6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Received unexpected event network-vif-plugged-db66e02a-ad5a-4ebb-9aab-112782b3fec7 for instance with vm_state deleted and task_state None.#033[00m
Jan 23 05:43:43 np0005593232 podman[390154]: 2026-01-23 10:43:43.506330028 +0000 UTC m=+0.166684109 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:43:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:43:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1131023722' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:43:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:43:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3531436105' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:43:43 np0005593232 nova_compute[250269]: 2026-01-23 10:43:43.900 250273 DEBUG oslo_concurrency.processutils [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:43:43 np0005593232 nova_compute[250269]: 2026-01-23 10:43:43.909 250273 DEBUG nova.compute.provider_tree [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:43:43 np0005593232 nova_compute[250269]: 2026-01-23 10:43:43.926 250273 DEBUG nova.scheduler.client.report [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:43:43 np0005593232 nova_compute[250269]: 2026-01-23 10:43:43.954 250273 DEBUG oslo_concurrency.lockutils [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:43:43 np0005593232 nova_compute[250269]: 2026-01-23 10:43:43.993 250273 INFO nova.scheduler.client.report [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Deleted allocations for instance 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8#033[00m
Jan 23 05:43:44 np0005593232 nova_compute[250269]: 2026-01-23 10:43:44.096 250273 DEBUG oslo_concurrency.lockutils [None req-659f6e68-b1f6-414e-b91f-6fe56088a794 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "9ae9a99b-039d-46f8-a3ca-42ee68bae3e8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.431s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:43:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3506: 321 pgs: 321 active+clean; 647 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 8.0 KiB/s wr, 154 op/s
Jan 23 05:43:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:43:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:43:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:45.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:43:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:43:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:45.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:43:45 np0005593232 nova_compute[250269]: 2026-01-23 10:43:45.951 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:46 np0005593232 nova_compute[250269]: 2026-01-23 10:43:46.251 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3507: 321 pgs: 321 active+clean; 647 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 393 KiB/s rd, 4.2 KiB/s wr, 62 op/s
Jan 23 05:43:46 np0005593232 podman[390255]: 2026-01-23 10:43:46.417105582 +0000 UTC m=+0.063568928 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:43:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e392 do_prune osdmap full prune enabled
Jan 23 05:43:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e393 e393: 3 total, 3 up, 3 in
Jan 23 05:43:46 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e393: 3 total, 3 up, 3 in
Jan 23 05:43:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:43:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:47.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:43:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:47.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:43:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:43:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:43:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:43:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00871018140470873 of space, bias 1.0, pg target 2.6130544214126186 quantized to 32 (current 32)
Jan 23 05:43:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:43:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.007489705471803462 of space, bias 1.0, pg target 2.231932230597432 quantized to 32 (current 32)
Jan 23 05:43:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:43:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.3799088032756375e-05 quantized to 32 (current 32)
Jan 23 05:43:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:43:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019042332612134749 of space, bias 1.0, pg target 0.5636530453191886 quantized to 32 (current 32)
Jan 23 05:43:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:43:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.001721570817048204 quantized to 16 (current 32)
Jan 23 05:43:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:43:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:43:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:43:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.0002151963521310255 quantized to 32 (current 32)
Jan 23 05:43:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:43:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018291689931137169 quantized to 32 (current 32)
Jan 23 05:43:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:43:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:43:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:43:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.000430392704262051 quantized to 32 (current 32)
Jan 23 05:43:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3509: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 612 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 32 KiB/s wr, 117 op/s
Jan 23 05:43:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e393 do_prune osdmap full prune enabled
Jan 23 05:43:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e394 e394: 3 total, 3 up, 3 in
Jan 23 05:43:49 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e394: 3 total, 3 up, 3 in
Jan 23 05:43:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:49.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:49.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:43:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3511: 321 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 318 active+clean; 612 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 38 KiB/s wr, 125 op/s
Jan 23 05:43:50 np0005593232 nova_compute[250269]: 2026-01-23 10:43:50.953 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:51 np0005593232 nova_compute[250269]: 2026-01-23 10:43:51.254 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:43:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:51.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:43:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:51.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:51 np0005593232 nova_compute[250269]: 2026-01-23 10:43:51.839 250273 DEBUG oslo_concurrency.lockutils [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:43:51 np0005593232 nova_compute[250269]: 2026-01-23 10:43:51.840 250273 DEBUG oslo_concurrency.lockutils [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:43:51 np0005593232 nova_compute[250269]: 2026-01-23 10:43:51.840 250273 DEBUG oslo_concurrency.lockutils [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:43:51 np0005593232 nova_compute[250269]: 2026-01-23 10:43:51.841 250273 DEBUG oslo_concurrency.lockutils [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:43:51 np0005593232 nova_compute[250269]: 2026-01-23 10:43:51.842 250273 DEBUG oslo_concurrency.lockutils [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:43:51 np0005593232 nova_compute[250269]: 2026-01-23 10:43:51.843 250273 INFO nova.compute.manager [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Terminating instance#033[00m
Jan 23 05:43:51 np0005593232 nova_compute[250269]: 2026-01-23 10:43:51.845 250273 DEBUG nova.compute.manager [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:43:51 np0005593232 kernel: tap3b1ac782-11 (unregistering): left promiscuous mode
Jan 23 05:43:51 np0005593232 NetworkManager[49057]: <info>  [1769165031.9806] device (tap3b1ac782-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:43:51 np0005593232 nova_compute[250269]: 2026-01-23 10:43:51.988 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:51 np0005593232 ovn_controller[151001]: 2026-01-23T10:43:51Z|00796|binding|INFO|Releasing lport 3b1ac782-1188-42b9-a89f-eb26c7876140 from this chassis (sb_readonly=0)
Jan 23 05:43:51 np0005593232 ovn_controller[151001]: 2026-01-23T10:43:51Z|00797|binding|INFO|Setting lport 3b1ac782-1188-42b9-a89f-eb26c7876140 down in Southbound
Jan 23 05:43:51 np0005593232 ovn_controller[151001]: 2026-01-23T10:43:51Z|00798|binding|INFO|Removing iface tap3b1ac782-11 ovn-installed in OVS
Jan 23 05:43:51 np0005593232 nova_compute[250269]: 2026-01-23 10:43:51.991 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:51.998 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:36:e0 10.100.0.4'], port_security=['fa:16:3e:ee:36:e0 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'f34f1af9-6c51-42ec-97f8-fb5bb146aeb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6e762fca3b634c7aa1d994314c059c54', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'ed138636-f650-4a09-b808-0b05f9067a5a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0936335-b706-4400-8411-bdd084c8cdf7, chassis=[], tunnel_key=7, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=3b1ac782-1188-42b9-a89f-eb26c7876140) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:43:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:51.999 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 3b1ac782-1188-42b9-a89f-eb26c7876140 in datapath fba2ba4a-d82c-4f8b-9754-c13fbec41a04 unbound from our chassis#033[00m
Jan 23 05:43:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:52.000 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fba2ba4a-d82c-4f8b-9754-c13fbec41a04#033[00m
Jan 23 05:43:52 np0005593232 nova_compute[250269]: 2026-01-23 10:43:52.009 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:52.024 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[97a4ef70-976d-49d8-9e9c-f563af3f1823]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:52 np0005593232 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000c6.scope: Deactivated successfully.
Jan 23 05:43:52 np0005593232 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d000000c6.scope: Consumed 19.154s CPU time.
Jan 23 05:43:52 np0005593232 systemd-machined[215836]: Machine qemu-89-instance-000000c6 terminated.
Jan 23 05:43:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:52.055 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[330f6a53-a672-42b1-ae4b-c891ad4fbf49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:52.059 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f038d9-4d21-4227-9b28-8ba52d37b31d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:52 np0005593232 nova_compute[250269]: 2026-01-23 10:43:52.087 250273 INFO nova.virt.libvirt.driver [-] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Instance destroyed successfully.#033[00m
Jan 23 05:43:52 np0005593232 nova_compute[250269]: 2026-01-23 10:43:52.087 250273 DEBUG nova.objects.instance [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lazy-loading 'resources' on Instance uuid f34f1af9-6c51-42ec-97f8-fb5bb146aeb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:43:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:52.091 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[072adc15-6a9f-43e6-9deb-c1fef6fea558]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:52 np0005593232 nova_compute[250269]: 2026-01-23 10:43:52.105 250273 DEBUG nova.virt.libvirt.vif [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:40:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-1',id=198,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP3FfIOd2lnI+tPBfDtyl7+3bVUJP3jvoQEZS2+zpCm94FEzq78d4QEW/4ixP6N6S+NwXEvQperhCcfeORiYVMygQWeTqWJgqUherQ/1aiNrcs4OJRb36XBDXhjh6k5P/Q==',key_name='tempest-keypair-529522234',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:41:59Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6e762fca3b634c7aa1d994314c059c54',ramdisk_id='',reservation_id='r-1k48k1p2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',
image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-63035580',owner_user_name='tempest-AttachVolumeMultiAttachTest-63035580-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:42:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93cd560e84264023877c47122b5919de',uuid=f34f1af9-6c51-42ec-97f8-fb5bb146aeb6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b1ac782-1188-42b9-a89f-eb26c7876140", "address": "fa:16:3e:ee:36:e0", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b1ac782-11", "ovs_interfaceid": "3b1ac782-1188-42b9-a89f-eb26c7876140", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:43:52 np0005593232 nova_compute[250269]: 2026-01-23 10:43:52.106 250273 DEBUG nova.network.os_vif_util [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Converting VIF {"id": "3b1ac782-1188-42b9-a89f-eb26c7876140", "address": "fa:16:3e:ee:36:e0", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b1ac782-11", "ovs_interfaceid": "3b1ac782-1188-42b9-a89f-eb26c7876140", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:43:52 np0005593232 nova_compute[250269]: 2026-01-23 10:43:52.107 250273 DEBUG nova.network.os_vif_util [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ee:36:e0,bridge_name='br-int',has_traffic_filtering=True,id=3b1ac782-1188-42b9-a89f-eb26c7876140,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b1ac782-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:43:52 np0005593232 nova_compute[250269]: 2026-01-23 10:43:52.107 250273 DEBUG os_vif [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ee:36:e0,bridge_name='br-int',has_traffic_filtering=True,id=3b1ac782-1188-42b9-a89f-eb26c7876140,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b1ac782-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:43:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:52.108 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[663f9130-106c-4477-b823-ee4a05467316]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfba2ba4a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:db:55'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 700, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 236], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 877546, 'reachable_time': 25719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390300, 'error': None, 'target': 'ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:52 np0005593232 nova_compute[250269]: 2026-01-23 10:43:52.109 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:52 np0005593232 nova_compute[250269]: 2026-01-23 10:43:52.109 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b1ac782-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:43:52 np0005593232 nova_compute[250269]: 2026-01-23 10:43:52.111 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:52 np0005593232 nova_compute[250269]: 2026-01-23 10:43:52.114 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:52 np0005593232 nova_compute[250269]: 2026-01-23 10:43:52.115 250273 INFO os_vif [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ee:36:e0,bridge_name='br-int',has_traffic_filtering=True,id=3b1ac782-1188-42b9-a89f-eb26c7876140,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b1ac782-11')#033[00m
Jan 23 05:43:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:52.122 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d0097285-0021-4a60-b817-5f264724f801]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfba2ba4a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 877561, 'tstamp': 877561}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390301, 'error': None, 'target': 'ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfba2ba4a-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 877566, 'tstamp': 877566}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390301, 'error': None, 'target': 'ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:52.125 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfba2ba4a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:43:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:52.128 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfba2ba4a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:43:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:52.129 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:43:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:52.130 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfba2ba4a-d0, col_values=(('external_ids', {'iface-id': '2348ddba-3dc3-4456-a637-f3065ba0d8f6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:43:52 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:52.131 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:43:52 np0005593232 nova_compute[250269]: 2026-01-23 10:43:52.176 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:52 np0005593232 nova_compute[250269]: 2026-01-23 10:43:52.316 250273 DEBUG nova.compute.manager [req-5379f5b8-03fe-4e97-883c-0a51a92ae234 req-2afc98db-a573-46ba-8a1b-0e3a43b3136f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Received event network-vif-unplugged-3b1ac782-1188-42b9-a89f-eb26c7876140 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:43:52 np0005593232 nova_compute[250269]: 2026-01-23 10:43:52.317 250273 DEBUG oslo_concurrency.lockutils [req-5379f5b8-03fe-4e97-883c-0a51a92ae234 req-2afc98db-a573-46ba-8a1b-0e3a43b3136f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:43:52 np0005593232 nova_compute[250269]: 2026-01-23 10:43:52.317 250273 DEBUG oslo_concurrency.lockutils [req-5379f5b8-03fe-4e97-883c-0a51a92ae234 req-2afc98db-a573-46ba-8a1b-0e3a43b3136f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:43:52 np0005593232 nova_compute[250269]: 2026-01-23 10:43:52.317 250273 DEBUG oslo_concurrency.lockutils [req-5379f5b8-03fe-4e97-883c-0a51a92ae234 req-2afc98db-a573-46ba-8a1b-0e3a43b3136f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:43:52 np0005593232 nova_compute[250269]: 2026-01-23 10:43:52.317 250273 DEBUG nova.compute.manager [req-5379f5b8-03fe-4e97-883c-0a51a92ae234 req-2afc98db-a573-46ba-8a1b-0e3a43b3136f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] No waiting events found dispatching network-vif-unplugged-3b1ac782-1188-42b9-a89f-eb26c7876140 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:43:52 np0005593232 nova_compute[250269]: 2026-01-23 10:43:52.317 250273 DEBUG nova.compute.manager [req-5379f5b8-03fe-4e97-883c-0a51a92ae234 req-2afc98db-a573-46ba-8a1b-0e3a43b3136f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Received event network-vif-unplugged-3b1ac782-1188-42b9-a89f-eb26c7876140 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:43:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3512: 321 pgs: 321 active+clean; 601 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 37 KiB/s wr, 137 op/s
Jan 23 05:43:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:43:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:53.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:43:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:53.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:54 np0005593232 nova_compute[250269]: 2026-01-23 10:43:54.103 250273 INFO nova.virt.libvirt.driver [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Deleting instance files /var/lib/nova/instances/f34f1af9-6c51-42ec-97f8-fb5bb146aeb6_del#033[00m
Jan 23 05:43:54 np0005593232 nova_compute[250269]: 2026-01-23 10:43:54.104 250273 INFO nova.virt.libvirt.driver [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Deletion of /var/lib/nova/instances/f34f1af9-6c51-42ec-97f8-fb5bb146aeb6_del complete#033[00m
Jan 23 05:43:54 np0005593232 nova_compute[250269]: 2026-01-23 10:43:54.198 250273 INFO nova.compute.manager [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Took 2.35 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:43:54 np0005593232 nova_compute[250269]: 2026-01-23 10:43:54.199 250273 DEBUG oslo.service.loopingcall [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:43:54 np0005593232 nova_compute[250269]: 2026-01-23 10:43:54.200 250273 DEBUG nova.compute.manager [-] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:43:54 np0005593232 nova_compute[250269]: 2026-01-23 10:43:54.200 250273 DEBUG nova.network.neutron [-] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:43:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3513: 321 pgs: 321 active+clean; 527 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 39 KiB/s wr, 212 op/s
Jan 23 05:43:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:43:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e394 do_prune osdmap full prune enabled
Jan 23 05:43:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e395 e395: 3 total, 3 up, 3 in
Jan 23 05:43:54 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e395: 3 total, 3 up, 3 in
Jan 23 05:43:54 np0005593232 nova_compute[250269]: 2026-01-23 10:43:54.464 250273 DEBUG nova.compute.manager [req-f5bd1521-f4f3-4c41-a614-ef871799e196 req-18694647-b970-4adf-ad87-8bf4e41ec220 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Received event network-vif-plugged-3b1ac782-1188-42b9-a89f-eb26c7876140 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:43:54 np0005593232 nova_compute[250269]: 2026-01-23 10:43:54.465 250273 DEBUG oslo_concurrency.lockutils [req-f5bd1521-f4f3-4c41-a614-ef871799e196 req-18694647-b970-4adf-ad87-8bf4e41ec220 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:43:54 np0005593232 nova_compute[250269]: 2026-01-23 10:43:54.465 250273 DEBUG oslo_concurrency.lockutils [req-f5bd1521-f4f3-4c41-a614-ef871799e196 req-18694647-b970-4adf-ad87-8bf4e41ec220 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:43:54 np0005593232 nova_compute[250269]: 2026-01-23 10:43:54.466 250273 DEBUG oslo_concurrency.lockutils [req-f5bd1521-f4f3-4c41-a614-ef871799e196 req-18694647-b970-4adf-ad87-8bf4e41ec220 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:43:54 np0005593232 nova_compute[250269]: 2026-01-23 10:43:54.466 250273 DEBUG nova.compute.manager [req-f5bd1521-f4f3-4c41-a614-ef871799e196 req-18694647-b970-4adf-ad87-8bf4e41ec220 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] No waiting events found dispatching network-vif-plugged-3b1ac782-1188-42b9-a89f-eb26c7876140 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:43:54 np0005593232 nova_compute[250269]: 2026-01-23 10:43:54.466 250273 WARNING nova.compute.manager [req-f5bd1521-f4f3-4c41-a614-ef871799e196 req-18694647-b970-4adf-ad87-8bf4e41ec220 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Received unexpected event network-vif-plugged-3b1ac782-1188-42b9-a89f-eb26c7876140 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:43:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:55.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:55.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:55 np0005593232 nova_compute[250269]: 2026-01-23 10:43:55.771 250273 DEBUG nova.network.neutron [-] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:43:55 np0005593232 nova_compute[250269]: 2026-01-23 10:43:55.794 250273 INFO nova.compute.manager [-] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Took 1.59 seconds to deallocate network for instance.#033[00m
Jan 23 05:43:55 np0005593232 nova_compute[250269]: 2026-01-23 10:43:55.909 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769165020.9078417, 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:43:55 np0005593232 nova_compute[250269]: 2026-01-23 10:43:55.909 250273 INFO nova.compute.manager [-] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:43:56 np0005593232 nova_compute[250269]: 2026-01-23 10:43:56.093 250273 DEBUG nova.compute.manager [req-03d8695f-8b4b-4aba-a666-40d7e836cf7e req-8ab1e824-dcbf-499d-bdae-ac7ba34e5f36 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Received event network-vif-deleted-3b1ac782-1188-42b9-a89f-eb26c7876140 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:43:56 np0005593232 nova_compute[250269]: 2026-01-23 10:43:56.101 250273 DEBUG oslo_concurrency.lockutils [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:43:56 np0005593232 nova_compute[250269]: 2026-01-23 10:43:56.102 250273 DEBUG oslo_concurrency.lockutils [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:43:56 np0005593232 nova_compute[250269]: 2026-01-23 10:43:56.108 250273 DEBUG nova.compute.manager [None req-efc677bb-510a-4c13-82c8-084ed9f91cd8 - - - - - -] [instance: 9ae9a99b-039d-46f8-a3ca-42ee68bae3e8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:43:56 np0005593232 nova_compute[250269]: 2026-01-23 10:43:56.209 250273 DEBUG oslo_concurrency.processutils [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:43:56 np0005593232 nova_compute[250269]: 2026-01-23 10:43:56.256 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3515: 321 pgs: 321 active+clean; 527 MiB data, 1.7 GiB used, 19 GiB / 21 GiB avail; 1.5 MiB/s rd, 2.4 KiB/s wr, 114 op/s
Jan 23 05:43:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:43:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/541398757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:43:56 np0005593232 nova_compute[250269]: 2026-01-23 10:43:56.689 250273 DEBUG oslo_concurrency.processutils [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:43:56 np0005593232 nova_compute[250269]: 2026-01-23 10:43:56.698 250273 DEBUG nova.compute.provider_tree [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:43:56 np0005593232 nova_compute[250269]: 2026-01-23 10:43:56.787 250273 DEBUG nova.scheduler.client.report [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:43:56 np0005593232 nova_compute[250269]: 2026-01-23 10:43:56.900 250273 DEBUG oslo_concurrency.lockutils [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:43:56 np0005593232 nova_compute[250269]: 2026-01-23 10:43:56.930 250273 INFO nova.scheduler.client.report [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Deleted allocations for instance f34f1af9-6c51-42ec-97f8-fb5bb146aeb6#033[00m
Jan 23 05:43:57 np0005593232 nova_compute[250269]: 2026-01-23 10:43:57.145 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:57 np0005593232 nova_compute[250269]: 2026-01-23 10:43:57.247 250273 DEBUG oslo_concurrency.lockutils [None req-d9e2f294-7b85-4444-8b9a-a883f21b904a 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "f34f1af9-6c51-42ec-97f8-fb5bb146aeb6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.407s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:43:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:57.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:57.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3516: 321 pgs: 321 active+clean; 520 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.8 KiB/s wr, 118 op/s
Jan 23 05:43:58 np0005593232 nova_compute[250269]: 2026-01-23 10:43:58.800 250273 DEBUG oslo_concurrency.lockutils [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "0888913c-71a6-45fe-97bf-9dddd2b7b521" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:43:58 np0005593232 nova_compute[250269]: 2026-01-23 10:43:58.800 250273 DEBUG oslo_concurrency.lockutils [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:43:58 np0005593232 nova_compute[250269]: 2026-01-23 10:43:58.801 250273 DEBUG oslo_concurrency.lockutils [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:43:58 np0005593232 nova_compute[250269]: 2026-01-23 10:43:58.801 250273 DEBUG oslo_concurrency.lockutils [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:43:58 np0005593232 nova_compute[250269]: 2026-01-23 10:43:58.802 250273 DEBUG oslo_concurrency.lockutils [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:43:58 np0005593232 nova_compute[250269]: 2026-01-23 10:43:58.803 250273 INFO nova.compute.manager [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Terminating instance#033[00m
Jan 23 05:43:58 np0005593232 nova_compute[250269]: 2026-01-23 10:43:58.805 250273 DEBUG nova.compute.manager [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:43:58 np0005593232 kernel: tap456d51f3-4b (unregistering): left promiscuous mode
Jan 23 05:43:58 np0005593232 NetworkManager[49057]: <info>  [1769165038.8642] device (tap456d51f3-4b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:43:58 np0005593232 ovn_controller[151001]: 2026-01-23T10:43:58Z|00799|binding|INFO|Releasing lport 456d51f3-4b45-4e54-acee-c50facafcd50 from this chassis (sb_readonly=0)
Jan 23 05:43:58 np0005593232 ovn_controller[151001]: 2026-01-23T10:43:58Z|00800|binding|INFO|Setting lport 456d51f3-4b45-4e54-acee-c50facafcd50 down in Southbound
Jan 23 05:43:58 np0005593232 nova_compute[250269]: 2026-01-23 10:43:58.870 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:58 np0005593232 ovn_controller[151001]: 2026-01-23T10:43:58Z|00801|binding|INFO|Removing iface tap456d51f3-4b ovn-installed in OVS
Jan 23 05:43:58 np0005593232 nova_compute[250269]: 2026-01-23 10:43:58.871 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:58.883 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:5d:ea 10.100.0.8'], port_security=['fa:16:3e:06:5d:ea 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0888913c-71a6-45fe-97bf-9dddd2b7b521', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6e762fca3b634c7aa1d994314c059c54', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'ed138636-f650-4a09-b808-0b05f9067a5a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a0936335-b706-4400-8411-bdd084c8cdf7, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=456d51f3-4b45-4e54-acee-c50facafcd50) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:43:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:58.884 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 456d51f3-4b45-4e54-acee-c50facafcd50 in datapath fba2ba4a-d82c-4f8b-9754-c13fbec41a04 unbound from our chassis#033[00m
Jan 23 05:43:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:58.886 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fba2ba4a-d82c-4f8b-9754-c13fbec41a04, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:43:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:58.887 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2b3dbec3-3804-4fc4-b9a7-be62370b67ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:58 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:58.888 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04 namespace which is not needed anymore#033[00m
Jan 23 05:43:58 np0005593232 nova_compute[250269]: 2026-01-23 10:43:58.898 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:58 np0005593232 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000c5.scope: Deactivated successfully.
Jan 23 05:43:58 np0005593232 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d000000c5.scope: Consumed 21.970s CPU time.
Jan 23 05:43:58 np0005593232 systemd-machined[215836]: Machine qemu-88-instance-000000c5 terminated.
Jan 23 05:43:59 np0005593232 neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04[386984]: [NOTICE]   (386988) : haproxy version is 2.8.14-c23fe91
Jan 23 05:43:59 np0005593232 neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04[386984]: [NOTICE]   (386988) : path to executable is /usr/sbin/haproxy
Jan 23 05:43:59 np0005593232 neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04[386984]: [WARNING]  (386988) : Exiting Master process...
Jan 23 05:43:59 np0005593232 neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04[386984]: [WARNING]  (386988) : Exiting Master process...
Jan 23 05:43:59 np0005593232 neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04[386984]: [ALERT]    (386988) : Current worker (386990) exited with code 143 (Terminated)
Jan 23 05:43:59 np0005593232 neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04[386984]: [WARNING]  (386988) : All workers exited. Exiting... (0)
Jan 23 05:43:59 np0005593232 systemd[1]: libpod-f7d9420006b852b926415ec2055af761175a89522d4cffce2cb073ff5b94486f.scope: Deactivated successfully.
Jan 23 05:43:59 np0005593232 podman[390372]: 2026-01-23 10:43:59.011951852 +0000 UTC m=+0.041011437 container died f7d9420006b852b926415ec2055af761175a89522d4cffce2cb073ff5b94486f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 23 05:43:59 np0005593232 NetworkManager[49057]: <info>  [1769165039.0237] manager: (tap456d51f3-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/374)
Jan 23 05:43:59 np0005593232 kernel: tap456d51f3-4b: entered promiscuous mode
Jan 23 05:43:59 np0005593232 kernel: tap456d51f3-4b (unregistering): left promiscuous mode
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.031 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:59 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b40bd483516f699516ff9a160cd8337f77bb3fa7547b3184d329b65853aab4b1-merged.mount: Deactivated successfully.
Jan 23 05:43:59 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f7d9420006b852b926415ec2055af761175a89522d4cffce2cb073ff5b94486f-userdata-shm.mount: Deactivated successfully.
Jan 23 05:43:59 np0005593232 podman[390372]: 2026-01-23 10:43:59.049789557 +0000 UTC m=+0.078849122 container cleanup f7d9420006b852b926415ec2055af761175a89522d4cffce2cb073ff5b94486f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.051 250273 INFO nova.virt.libvirt.driver [-] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Instance destroyed successfully.#033[00m
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.052 250273 DEBUG nova.objects.instance [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lazy-loading 'resources' on Instance uuid 0888913c-71a6-45fe-97bf-9dddd2b7b521 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:43:59 np0005593232 systemd[1]: libpod-conmon-f7d9420006b852b926415ec2055af761175a89522d4cffce2cb073ff5b94486f.scope: Deactivated successfully.
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.071 250273 DEBUG nova.virt.libvirt.vif [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:39:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='multiattach-server-0',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(2),hidden=False,host='compute-0.ctlplane.example.com',hostname='multiattach-server-0',id=197,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=2,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP3FfIOd2lnI+tPBfDtyl7+3bVUJP3jvoQEZS2+zpCm94FEzq78d4QEW/4ixP6N6S+NwXEvQperhCcfeORiYVMygQWeTqWJgqUherQ/1aiNrcs4OJRb36XBDXhjh6k5P/Q==',key_name='tempest-keypair-529522234',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:41:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6e762fca3b634c7aa1d994314c059c54',ramdisk_id='',reservation_id='r-v6jorgzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',
image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachVolumeMultiAttachTest-63035580',owner_user_name='tempest-AttachVolumeMultiAttachTest-63035580-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:41:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='93cd560e84264023877c47122b5919de',uuid=0888913c-71a6-45fe-97bf-9dddd2b7b521,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.072 250273 DEBUG nova.network.os_vif_util [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Converting VIF {"id": "456d51f3-4b45-4e54-acee-c50facafcd50", "address": "fa:16:3e:06:5d:ea", "network": {"id": "fba2ba4a-d82c-4f8b-9754-c13fbec41a04", "bridge": "br-int", "label": "tempest-AttachVolumeMultiAttachTest-1976243271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6e762fca3b634c7aa1d994314c059c54", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap456d51f3-4b", "ovs_interfaceid": "456d51f3-4b45-4e54-acee-c50facafcd50", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.073 250273 DEBUG nova.network.os_vif_util [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:06:5d:ea,bridge_name='br-int',has_traffic_filtering=True,id=456d51f3-4b45-4e54-acee-c50facafcd50,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap456d51f3-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.073 250273 DEBUG os_vif [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:06:5d:ea,bridge_name='br-int',has_traffic_filtering=True,id=456d51f3-4b45-4e54-acee-c50facafcd50,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap456d51f3-4b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.075 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.075 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap456d51f3-4b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.077 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.078 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.080 250273 INFO os_vif [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:06:5d:ea,bridge_name='br-int',has_traffic_filtering=True,id=456d51f3-4b45-4e54-acee-c50facafcd50,network=Network(fba2ba4a-d82c-4f8b-9754-c13fbec41a04),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap456d51f3-4b')#033[00m
Jan 23 05:43:59 np0005593232 podman[390409]: 2026-01-23 10:43:59.115962248 +0000 UTC m=+0.041710746 container remove f7d9420006b852b926415ec2055af761175a89522d4cffce2cb073ff5b94486f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:43:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:59.121 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[acdbf10c-3ee4-460f-a240-cf7865c73dd4]: (4, ('Fri Jan 23 10:43:58 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04 (f7d9420006b852b926415ec2055af761175a89522d4cffce2cb073ff5b94486f)\nf7d9420006b852b926415ec2055af761175a89522d4cffce2cb073ff5b94486f\nFri Jan 23 10:43:59 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04 (f7d9420006b852b926415ec2055af761175a89522d4cffce2cb073ff5b94486f)\nf7d9420006b852b926415ec2055af761175a89522d4cffce2cb073ff5b94486f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:59.123 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[cdff7fb5-b358-479b-966d-a0ff9f4be821]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:59.124 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfba2ba4a-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.125 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:59 np0005593232 kernel: tapfba2ba4a-d0: left promiscuous mode
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.138 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:43:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:59.141 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[89cb9eda-ff98-4014-a8d3-1442233e924b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:59.156 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9e57b735-1141-4dd5-acde-394dcfa187e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:59.158 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d2822b46-6c3b-4a21-9544-96839ed26fbc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:59.176 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[85a08e72-dd49-492c-9fdf-89439719dcaf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 877539, 'reachable_time': 20849, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390442, 'error': None, 'target': 'ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:59.179 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fba2ba4a-d82c-4f8b-9754-c13fbec41a04 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:43:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:43:59.179 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[88fbbc0e-1358-4b1e-bcdf-4c7662104d3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:43:59 np0005593232 systemd[1]: run-netns-ovnmeta\x2dfba2ba4a\x2dd82c\x2d4f8b\x2d9754\x2dc13fbec41a04.mount: Deactivated successfully.
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.249 250273 DEBUG nova.compute.manager [req-05f710b9-5404-40fd-9e7a-a9da8e979dcc req-946a3c5f-b391-41a0-9023-b233d16ba5ac 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Received event network-vif-unplugged-456d51f3-4b45-4e54-acee-c50facafcd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.250 250273 DEBUG oslo_concurrency.lockutils [req-05f710b9-5404-40fd-9e7a-a9da8e979dcc req-946a3c5f-b391-41a0-9023-b233d16ba5ac 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.250 250273 DEBUG oslo_concurrency.lockutils [req-05f710b9-5404-40fd-9e7a-a9da8e979dcc req-946a3c5f-b391-41a0-9023-b233d16ba5ac 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.250 250273 DEBUG oslo_concurrency.lockutils [req-05f710b9-5404-40fd-9e7a-a9da8e979dcc req-946a3c5f-b391-41a0-9023-b233d16ba5ac 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.250 250273 DEBUG nova.compute.manager [req-05f710b9-5404-40fd-9e7a-a9da8e979dcc req-946a3c5f-b391-41a0-9023-b233d16ba5ac 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] No waiting events found dispatching network-vif-unplugged-456d51f3-4b45-4e54-acee-c50facafcd50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.251 250273 DEBUG nova.compute.manager [req-05f710b9-5404-40fd-9e7a-a9da8e979dcc req-946a3c5f-b391-41a0-9023-b233d16ba5ac 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Received event network-vif-unplugged-456d51f3-4b45-4e54-acee-c50facafcd50 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:43:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:43:59.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:43:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:43:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:43:59.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:43:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e395 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:43:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e395 do_prune osdmap full prune enabled
Jan 23 05:43:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e396 e396: 3 total, 3 up, 3 in
Jan 23 05:43:59 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e396: 3 total, 3 up, 3 in
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.842 250273 INFO nova.virt.libvirt.driver [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Deleting instance files /var/lib/nova/instances/0888913c-71a6-45fe-97bf-9dddd2b7b521_del#033[00m
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.844 250273 INFO nova.virt.libvirt.driver [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Deletion of /var/lib/nova/instances/0888913c-71a6-45fe-97bf-9dddd2b7b521_del complete#033[00m
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.922 250273 INFO nova.compute.manager [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Took 1.12 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.923 250273 DEBUG oslo.service.loopingcall [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.923 250273 DEBUG nova.compute.manager [-] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:43:59 np0005593232 nova_compute[250269]: 2026-01-23 10:43:59.923 250273 DEBUG nova.network.neutron [-] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:44:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3518: 321 pgs: 321 active+clean; 520 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 2.1 KiB/s wr, 96 op/s
Jan 23 05:44:00 np0005593232 nova_compute[250269]: 2026-01-23 10:44:00.934 250273 DEBUG nova.network.neutron [-] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:44:01 np0005593232 nova_compute[250269]: 2026-01-23 10:44:01.049 250273 DEBUG nova.compute.manager [req-c9c19039-e203-44c4-a6e2-fc6efebdcafc req-8fabd511-de51-420b-a862-fb6b96fc93f8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Received event network-vif-deleted-456d51f3-4b45-4e54-acee-c50facafcd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:44:01 np0005593232 nova_compute[250269]: 2026-01-23 10:44:01.050 250273 INFO nova.compute.manager [req-c9c19039-e203-44c4-a6e2-fc6efebdcafc req-8fabd511-de51-420b-a862-fb6b96fc93f8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Neutron deleted interface 456d51f3-4b45-4e54-acee-c50facafcd50; detaching it from the instance and deleting it from the info cache#033[00m
Jan 23 05:44:01 np0005593232 nova_compute[250269]: 2026-01-23 10:44:01.050 250273 DEBUG nova.network.neutron [req-c9c19039-e203-44c4-a6e2-fc6efebdcafc req-8fabd511-de51-420b-a862-fb6b96fc93f8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:44:01 np0005593232 nova_compute[250269]: 2026-01-23 10:44:01.058 250273 INFO nova.compute.manager [-] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Took 1.13 seconds to deallocate network for instance.#033[00m
Jan 23 05:44:01 np0005593232 nova_compute[250269]: 2026-01-23 10:44:01.077 250273 DEBUG nova.compute.manager [req-c9c19039-e203-44c4-a6e2-fc6efebdcafc req-8fabd511-de51-420b-a862-fb6b96fc93f8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Detach interface failed, port_id=456d51f3-4b45-4e54-acee-c50facafcd50, reason: Instance 0888913c-71a6-45fe-97bf-9dddd2b7b521 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Jan 23 05:44:01 np0005593232 nova_compute[250269]: 2026-01-23 10:44:01.215 250273 DEBUG oslo_concurrency.lockutils [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:44:01 np0005593232 nova_compute[250269]: 2026-01-23 10:44:01.216 250273 DEBUG oslo_concurrency.lockutils [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:44:01 np0005593232 nova_compute[250269]: 2026-01-23 10:44:01.259 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:01.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:01 np0005593232 nova_compute[250269]: 2026-01-23 10:44:01.281 250273 DEBUG oslo_concurrency.processutils [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:44:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:01.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:01 np0005593232 nova_compute[250269]: 2026-01-23 10:44:01.475 250273 DEBUG nova.compute.manager [req-07dbd980-0d65-4249-ad99-bfd959d9c3ae req-0c829601-b8d1-47f6-84c6-c902ade6578f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Received event network-vif-plugged-456d51f3-4b45-4e54-acee-c50facafcd50 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:44:01 np0005593232 nova_compute[250269]: 2026-01-23 10:44:01.476 250273 DEBUG oslo_concurrency.lockutils [req-07dbd980-0d65-4249-ad99-bfd959d9c3ae req-0c829601-b8d1-47f6-84c6-c902ade6578f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:44:01 np0005593232 nova_compute[250269]: 2026-01-23 10:44:01.476 250273 DEBUG oslo_concurrency.lockutils [req-07dbd980-0d65-4249-ad99-bfd959d9c3ae req-0c829601-b8d1-47f6-84c6-c902ade6578f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:44:01 np0005593232 nova_compute[250269]: 2026-01-23 10:44:01.476 250273 DEBUG oslo_concurrency.lockutils [req-07dbd980-0d65-4249-ad99-bfd959d9c3ae req-0c829601-b8d1-47f6-84c6-c902ade6578f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:44:01 np0005593232 nova_compute[250269]: 2026-01-23 10:44:01.477 250273 DEBUG nova.compute.manager [req-07dbd980-0d65-4249-ad99-bfd959d9c3ae req-0c829601-b8d1-47f6-84c6-c902ade6578f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] No waiting events found dispatching network-vif-plugged-456d51f3-4b45-4e54-acee-c50facafcd50 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:44:01 np0005593232 nova_compute[250269]: 2026-01-23 10:44:01.477 250273 WARNING nova.compute.manager [req-07dbd980-0d65-4249-ad99-bfd959d9c3ae req-0c829601-b8d1-47f6-84c6-c902ade6578f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Received unexpected event network-vif-plugged-456d51f3-4b45-4e54-acee-c50facafcd50 for instance with vm_state deleted and task_state None.#033[00m
Jan 23 05:44:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:44:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1882007234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:44:01 np0005593232 nova_compute[250269]: 2026-01-23 10:44:01.795 250273 DEBUG oslo_concurrency.processutils [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:44:01 np0005593232 nova_compute[250269]: 2026-01-23 10:44:01.805 250273 DEBUG nova.compute.provider_tree [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:44:01 np0005593232 nova_compute[250269]: 2026-01-23 10:44:01.958 250273 DEBUG nova.scheduler.client.report [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:44:02 np0005593232 nova_compute[250269]: 2026-01-23 10:44:02.039 250273 DEBUG oslo_concurrency.lockutils [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:44:02 np0005593232 nova_compute[250269]: 2026-01-23 10:44:02.282 250273 INFO nova.scheduler.client.report [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Deleted allocations for instance 0888913c-71a6-45fe-97bf-9dddd2b7b521#033[00m
Jan 23 05:44:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3519: 321 pgs: 321 active+clean; 508 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 646 KiB/s rd, 257 KiB/s wr, 42 op/s
Jan 23 05:44:02 np0005593232 nova_compute[250269]: 2026-01-23 10:44:02.434 250273 DEBUG oslo_concurrency.lockutils [None req-8a41d7e9-c7ac-407c-be63-5904ccea4cf8 93cd560e84264023877c47122b5919de 6e762fca3b634c7aa1d994314c059c54 - - default default] Lock "0888913c-71a6-45fe-97bf-9dddd2b7b521" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:44:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:44:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:03.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:44:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:03.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:04 np0005593232 nova_compute[250269]: 2026-01-23 10:44:04.077 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3520: 321 pgs: 321 active+clean; 452 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 595 KiB/s wr, 107 op/s
Jan 23 05:44:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:44:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:44:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:05.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:44:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:05.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:06 np0005593232 nova_compute[250269]: 2026-01-23 10:44:06.263 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3521: 321 pgs: 321 active+clean; 452 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 594 KiB/s wr, 107 op/s
Jan 23 05:44:07 np0005593232 nova_compute[250269]: 2026-01-23 10:44:07.083 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769165032.0811868, f34f1af9-6c51-42ec-97f8-fb5bb146aeb6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:44:07 np0005593232 nova_compute[250269]: 2026-01-23 10:44:07.084 250273 INFO nova.compute.manager [-] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:44:07 np0005593232 nova_compute[250269]: 2026-01-23 10:44:07.154 250273 DEBUG nova.compute.manager [None req-e5156439-7e25-454a-b261-b8cb5bb64bad - - - - - -] [instance: f34f1af9-6c51-42ec-97f8-fb5bb146aeb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:44:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:07.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:07.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:44:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:44:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:44:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:44:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:44:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:44:07 np0005593232 nova_compute[250269]: 2026-01-23 10:44:07.848 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:44:07 np0005593232 nova_compute[250269]: 2026-01-23 10:44:07.848 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:44:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3522: 321 pgs: 321 active+clean; 376 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 651 KiB/s wr, 147 op/s
Jan 23 05:44:08 np0005593232 nova_compute[250269]: 2026-01-23 10:44:08.609 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:44:08.609 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=79, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=78) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:44:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:44:08.611 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:44:08 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:44:08.612 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '79'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:44:09 np0005593232 nova_compute[250269]: 2026-01-23 10:44:09.079 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:09.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:44:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:09.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:44:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:44:10 np0005593232 nova_compute[250269]: 2026-01-23 10:44:10.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:44:10 np0005593232 nova_compute[250269]: 2026-01-23 10:44:10.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:44:10 np0005593232 nova_compute[250269]: 2026-01-23 10:44:10.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:44:10 np0005593232 nova_compute[250269]: 2026-01-23 10:44:10.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 23 05:44:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3523: 321 pgs: 321 active+clean; 376 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 599 KiB/s wr, 135 op/s
Jan 23 05:44:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:44:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:44:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:44:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:44:11 np0005593232 nova_compute[250269]: 2026-01-23 10:44:11.265 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:44:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:11.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:44:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:11.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:44:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:44:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:44:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:44:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:44:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:44:11 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 70753afa-6af7-446d-94d1-835b1fea3f37 does not exist
Jan 23 05:44:11 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev f463fafc-0a1d-4ec5-adc1-a7ceae495cf2 does not exist
Jan 23 05:44:11 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 96847a76-4c34-427f-a0a6-da3b56eb2340 does not exist
Jan 23 05:44:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:44:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:44:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:44:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:44:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:44:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:44:11 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:44:11 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:44:11 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:44:11 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:44:11 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:44:12 np0005593232 nova_compute[250269]: 2026-01-23 10:44:12.271 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:44:12 np0005593232 podman[390914]: 2026-01-23 10:44:12.343532707 +0000 UTC m=+0.036467197 container create c4b1a7f4b4e147af49a1f1c23e518274a244582c051a6b53972c348e2e7bb4b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Jan 23 05:44:12 np0005593232 systemd[1]: Started libpod-conmon-c4b1a7f4b4e147af49a1f1c23e518274a244582c051a6b53972c348e2e7bb4b0.scope.
Jan 23 05:44:12 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:44:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3524: 321 pgs: 321 active+clean; 376 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 543 KiB/s wr, 122 op/s
Jan 23 05:44:12 np0005593232 podman[390914]: 2026-01-23 10:44:12.419672041 +0000 UTC m=+0.112606521 container init c4b1a7f4b4e147af49a1f1c23e518274a244582c051a6b53972c348e2e7bb4b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_galileo, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:44:12 np0005593232 podman[390914]: 2026-01-23 10:44:12.327203423 +0000 UTC m=+0.020137903 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:44:12 np0005593232 podman[390914]: 2026-01-23 10:44:12.42808672 +0000 UTC m=+0.121021210 container start c4b1a7f4b4e147af49a1f1c23e518274a244582c051a6b53972c348e2e7bb4b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_galileo, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:44:12 np0005593232 podman[390914]: 2026-01-23 10:44:12.432390583 +0000 UTC m=+0.125325113 container attach c4b1a7f4b4e147af49a1f1c23e518274a244582c051a6b53972c348e2e7bb4b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_galileo, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Jan 23 05:44:12 np0005593232 hopeful_galileo[390930]: 167 167
Jan 23 05:44:12 np0005593232 systemd[1]: libpod-c4b1a7f4b4e147af49a1f1c23e518274a244582c051a6b53972c348e2e7bb4b0.scope: Deactivated successfully.
Jan 23 05:44:12 np0005593232 conmon[390930]: conmon c4b1a7f4b4e147af49a1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c4b1a7f4b4e147af49a1f1c23e518274a244582c051a6b53972c348e2e7bb4b0.scope/container/memory.events
Jan 23 05:44:12 np0005593232 podman[390914]: 2026-01-23 10:44:12.435674246 +0000 UTC m=+0.128608746 container died c4b1a7f4b4e147af49a1f1c23e518274a244582c051a6b53972c348e2e7bb4b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_galileo, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 05:44:12 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b1855c4dca54d47465650086d81ec8eaf854e915d1e7d671aad5643efeda48d0-merged.mount: Deactivated successfully.
Jan 23 05:44:12 np0005593232 podman[390914]: 2026-01-23 10:44:12.470835325 +0000 UTC m=+0.163769805 container remove c4b1a7f4b4e147af49a1f1c23e518274a244582c051a6b53972c348e2e7bb4b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:44:12 np0005593232 systemd[1]: libpod-conmon-c4b1a7f4b4e147af49a1f1c23e518274a244582c051a6b53972c348e2e7bb4b0.scope: Deactivated successfully.
Jan 23 05:44:12 np0005593232 podman[390954]: 2026-01-23 10:44:12.627322233 +0000 UTC m=+0.036915570 container create b52003e4dd9ab624687cfdb701f843414bff2ca9350132a40e8b57404d337ce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:44:12 np0005593232 systemd[1]: Started libpod-conmon-b52003e4dd9ab624687cfdb701f843414bff2ca9350132a40e8b57404d337ce7.scope.
Jan 23 05:44:12 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:44:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2074eada49ae4cc2ffbaae5ebe7610ca8afd26c73a09a7d9354814247eba72f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:44:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2074eada49ae4cc2ffbaae5ebe7610ca8afd26c73a09a7d9354814247eba72f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:44:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2074eada49ae4cc2ffbaae5ebe7610ca8afd26c73a09a7d9354814247eba72f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:44:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2074eada49ae4cc2ffbaae5ebe7610ca8afd26c73a09a7d9354814247eba72f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:44:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2074eada49ae4cc2ffbaae5ebe7610ca8afd26c73a09a7d9354814247eba72f6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:44:12 np0005593232 podman[390954]: 2026-01-23 10:44:12.706897144 +0000 UTC m=+0.116490501 container init b52003e4dd9ab624687cfdb701f843414bff2ca9350132a40e8b57404d337ce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ramanujan, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:44:12 np0005593232 podman[390954]: 2026-01-23 10:44:12.611441872 +0000 UTC m=+0.021035229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:44:12 np0005593232 podman[390954]: 2026-01-23 10:44:12.714043947 +0000 UTC m=+0.123637284 container start b52003e4dd9ab624687cfdb701f843414bff2ca9350132a40e8b57404d337ce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ramanujan, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:44:12 np0005593232 podman[390954]: 2026-01-23 10:44:12.717403613 +0000 UTC m=+0.126996950 container attach b52003e4dd9ab624687cfdb701f843414bff2ca9350132a40e8b57404d337ce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:44:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:44:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:13.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:44:13 np0005593232 nova_compute[250269]: 2026-01-23 10:44:13.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:44:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:44:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:13.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:44:13 np0005593232 exciting_ramanujan[390970]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:44:13 np0005593232 exciting_ramanujan[390970]: --> relative data size: 1.0
Jan 23 05:44:13 np0005593232 exciting_ramanujan[390970]: --> All data devices are unavailable
Jan 23 05:44:13 np0005593232 systemd[1]: libpod-b52003e4dd9ab624687cfdb701f843414bff2ca9350132a40e8b57404d337ce7.scope: Deactivated successfully.
Jan 23 05:44:13 np0005593232 podman[390954]: 2026-01-23 10:44:13.56201928 +0000 UTC m=+0.971612647 container died b52003e4dd9ab624687cfdb701f843414bff2ca9350132a40e8b57404d337ce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 05:44:13 np0005593232 systemd[1]: var-lib-containers-storage-overlay-2074eada49ae4cc2ffbaae5ebe7610ca8afd26c73a09a7d9354814247eba72f6-merged.mount: Deactivated successfully.
Jan 23 05:44:13 np0005593232 podman[390954]: 2026-01-23 10:44:13.620695178 +0000 UTC m=+1.030288515 container remove b52003e4dd9ab624687cfdb701f843414bff2ca9350132a40e8b57404d337ce7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_ramanujan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:44:13 np0005593232 systemd[1]: libpod-conmon-b52003e4dd9ab624687cfdb701f843414bff2ca9350132a40e8b57404d337ce7.scope: Deactivated successfully.
Jan 23 05:44:13 np0005593232 podman[390986]: 2026-01-23 10:44:13.745861125 +0000 UTC m=+0.158480805 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 05:44:14 np0005593232 nova_compute[250269]: 2026-01-23 10:44:14.050 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769165039.0488327, 0888913c-71a6-45fe-97bf-9dddd2b7b521 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:44:14 np0005593232 nova_compute[250269]: 2026-01-23 10:44:14.051 250273 INFO nova.compute.manager [-] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:44:14 np0005593232 nova_compute[250269]: 2026-01-23 10:44:14.081 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:14 np0005593232 podman[391165]: 2026-01-23 10:44:14.178568434 +0000 UTC m=+0.022203622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:44:14 np0005593232 nova_compute[250269]: 2026-01-23 10:44:14.331 250273 DEBUG nova.compute.manager [None req-fc31be5e-beb3-4fdf-8983-85e5ea76537d - - - - - -] [instance: 0888913c-71a6-45fe-97bf-9dddd2b7b521] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:44:14 np0005593232 podman[391165]: 2026-01-23 10:44:14.350512732 +0000 UTC m=+0.194147900 container create 397ce10e15f87062bc712498edd1e6800d2e1c7ceb8b6d51f988636a0d2d22b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lamarr, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:44:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3525: 321 pgs: 321 active+clean; 376 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 688 KiB/s rd, 372 KiB/s wr, 108 op/s
Jan 23 05:44:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:44:14 np0005593232 systemd[1]: Started libpod-conmon-397ce10e15f87062bc712498edd1e6800d2e1c7ceb8b6d51f988636a0d2d22b7.scope.
Jan 23 05:44:14 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:44:14 np0005593232 podman[391165]: 2026-01-23 10:44:14.691306328 +0000 UTC m=+0.534941526 container init 397ce10e15f87062bc712498edd1e6800d2e1c7ceb8b6d51f988636a0d2d22b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lamarr, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:44:14 np0005593232 podman[391165]: 2026-01-23 10:44:14.697600037 +0000 UTC m=+0.541235205 container start 397ce10e15f87062bc712498edd1e6800d2e1c7ceb8b6d51f988636a0d2d22b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 05:44:14 np0005593232 podman[391165]: 2026-01-23 10:44:14.700450758 +0000 UTC m=+0.544086106 container attach 397ce10e15f87062bc712498edd1e6800d2e1c7ceb8b6d51f988636a0d2d22b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lamarr, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:44:14 np0005593232 silly_lamarr[391181]: 167 167
Jan 23 05:44:14 np0005593232 systemd[1]: libpod-397ce10e15f87062bc712498edd1e6800d2e1c7ceb8b6d51f988636a0d2d22b7.scope: Deactivated successfully.
Jan 23 05:44:14 np0005593232 podman[391165]: 2026-01-23 10:44:14.703305609 +0000 UTC m=+0.546940777 container died 397ce10e15f87062bc712498edd1e6800d2e1c7ceb8b6d51f988636a0d2d22b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lamarr, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:44:14 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f72f9bae6cadec65e11c05e827f7a6b8dab84bac3f51b96ed36f5e79eb4d4b8f-merged.mount: Deactivated successfully.
Jan 23 05:44:14 np0005593232 podman[391165]: 2026-01-23 10:44:14.741566197 +0000 UTC m=+0.585201365 container remove 397ce10e15f87062bc712498edd1e6800d2e1c7ceb8b6d51f988636a0d2d22b7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Jan 23 05:44:14 np0005593232 systemd[1]: libpod-conmon-397ce10e15f87062bc712498edd1e6800d2e1c7ceb8b6d51f988636a0d2d22b7.scope: Deactivated successfully.
Jan 23 05:44:14 np0005593232 podman[391204]: 2026-01-23 10:44:14.90422713 +0000 UTC m=+0.040106431 container create 7e776dc7befb4d3c0826173c961d06e64d40f594757cc59120af6046768e52df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_goldwasser, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 05:44:14 np0005593232 systemd[1]: Started libpod-conmon-7e776dc7befb4d3c0826173c961d06e64d40f594757cc59120af6046768e52df.scope.
Jan 23 05:44:14 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:44:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29f741195a73204c16e8d90a35ad6705559f10d6bde60b6fc2441449de76a7c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:44:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29f741195a73204c16e8d90a35ad6705559f10d6bde60b6fc2441449de76a7c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:44:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29f741195a73204c16e8d90a35ad6705559f10d6bde60b6fc2441449de76a7c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:44:14 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29f741195a73204c16e8d90a35ad6705559f10d6bde60b6fc2441449de76a7c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:44:14 np0005593232 podman[391204]: 2026-01-23 10:44:14.886926398 +0000 UTC m=+0.022805729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:44:14 np0005593232 podman[391204]: 2026-01-23 10:44:14.988604108 +0000 UTC m=+0.124483429 container init 7e776dc7befb4d3c0826173c961d06e64d40f594757cc59120af6046768e52df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Jan 23 05:44:14 np0005593232 podman[391204]: 2026-01-23 10:44:14.996355199 +0000 UTC m=+0.132234500 container start 7e776dc7befb4d3c0826173c961d06e64d40f594757cc59120af6046768e52df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 05:44:15 np0005593232 podman[391204]: 2026-01-23 10:44:15.002091572 +0000 UTC m=+0.137970873 container attach 7e776dc7befb4d3c0826173c961d06e64d40f594757cc59120af6046768e52df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Jan 23 05:44:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:44:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:15.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:44:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:15.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]: {
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:    "0": [
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:        {
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:            "devices": [
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:                "/dev/loop3"
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:            ],
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:            "lv_name": "ceph_lv0",
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:            "lv_size": "7511998464",
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:            "name": "ceph_lv0",
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:            "tags": {
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:                "ceph.cluster_name": "ceph",
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:                "ceph.crush_device_class": "",
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:                "ceph.encrypted": "0",
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:                "ceph.osd_id": "0",
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:                "ceph.type": "block",
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:                "ceph.vdo": "0"
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:            },
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:            "type": "block",
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:            "vg_name": "ceph_vg0"
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:        }
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]:    ]
Jan 23 05:44:15 np0005593232 affectionate_goldwasser[391220]: }
Jan 23 05:44:15 np0005593232 systemd[1]: libpod-7e776dc7befb4d3c0826173c961d06e64d40f594757cc59120af6046768e52df.scope: Deactivated successfully.
Jan 23 05:44:15 np0005593232 podman[391204]: 2026-01-23 10:44:15.754111177 +0000 UTC m=+0.889990478 container died 7e776dc7befb4d3c0826173c961d06e64d40f594757cc59120af6046768e52df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:44:16 np0005593232 nova_compute[250269]: 2026-01-23 10:44:16.267 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:16 np0005593232 nova_compute[250269]: 2026-01-23 10:44:16.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:44:16 np0005593232 nova_compute[250269]: 2026-01-23 10:44:16.292 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:44:16 np0005593232 nova_compute[250269]: 2026-01-23 10:44:16.293 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:44:16 np0005593232 nova_compute[250269]: 2026-01-23 10:44:16.293 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:44:16 np0005593232 nova_compute[250269]: 2026-01-23 10:44:16.294 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:44:16 np0005593232 nova_compute[250269]: 2026-01-23 10:44:16.294 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:44:16 np0005593232 nova_compute[250269]: 2026-01-23 10:44:16.294 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:44:16 np0005593232 nova_compute[250269]: 2026-01-23 10:44:16.357 250273 DEBUG nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Jan 23 05:44:16 np0005593232 nova_compute[250269]: 2026-01-23 10:44:16.366 250273 DEBUG nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Jan 23 05:44:16 np0005593232 nova_compute[250269]: 2026-01-23 10:44:16.366 250273 WARNING nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2#033[00m
Jan 23 05:44:16 np0005593232 nova_compute[250269]: 2026-01-23 10:44:16.367 250273 WARNING nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/8edc4c18d7d1964a485fb1b305c460bdc5a45b20#033[00m
Jan 23 05:44:16 np0005593232 nova_compute[250269]: 2026-01-23 10:44:16.367 250273 INFO nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Removable base files: /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 /var/lib/nova/instances/_base/8edc4c18d7d1964a485fb1b305c460bdc5a45b20#033[00m
Jan 23 05:44:16 np0005593232 nova_compute[250269]: 2026-01-23 10:44:16.368 250273 INFO nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2#033[00m
Jan 23 05:44:16 np0005593232 nova_compute[250269]: 2026-01-23 10:44:16.368 250273 INFO nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/8edc4c18d7d1964a485fb1b305c460bdc5a45b20#033[00m
Jan 23 05:44:16 np0005593232 nova_compute[250269]: 2026-01-23 10:44:16.368 250273 DEBUG nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Jan 23 05:44:16 np0005593232 nova_compute[250269]: 2026-01-23 10:44:16.369 250273 DEBUG nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Jan 23 05:44:16 np0005593232 nova_compute[250269]: 2026-01-23 10:44:16.369 250273 DEBUG nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Jan 23 05:44:16 np0005593232 nova_compute[250269]: 2026-01-23 10:44:16.369 250273 INFO nova.virt.libvirt.imagecache [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Jan 23 05:44:16 np0005593232 systemd[1]: var-lib-containers-storage-overlay-29f741195a73204c16e8d90a35ad6705559f10d6bde60b6fc2441449de76a7c9-merged.mount: Deactivated successfully.
Jan 23 05:44:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3526: 321 pgs: 321 active+clean; 376 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 152 KiB/s rd, 48 KiB/s wr, 47 op/s
Jan 23 05:44:16 np0005593232 podman[391204]: 2026-01-23 10:44:16.480004928 +0000 UTC m=+1.615884219 container remove 7e776dc7befb4d3c0826173c961d06e64d40f594757cc59120af6046768e52df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 05:44:16 np0005593232 systemd[1]: libpod-conmon-7e776dc7befb4d3c0826173c961d06e64d40f594757cc59120af6046768e52df.scope: Deactivated successfully.
Jan 23 05:44:16 np0005593232 podman[391244]: 2026-01-23 10:44:16.622997333 +0000 UTC m=+0.061143199 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:44:17 np0005593232 podman[391405]: 2026-01-23 10:44:17.13621236 +0000 UTC m=+0.032434923 container create 053ca9e4ff069f09c76c6243a43361938c3706dacf83b67bce01fe23e9ee8a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_burnell, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Jan 23 05:44:17 np0005593232 systemd[1]: Started libpod-conmon-053ca9e4ff069f09c76c6243a43361938c3706dacf83b67bce01fe23e9ee8a33.scope.
Jan 23 05:44:17 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:44:17 np0005593232 podman[391405]: 2026-01-23 10:44:17.210366128 +0000 UTC m=+0.106588711 container init 053ca9e4ff069f09c76c6243a43361938c3706dacf83b67bce01fe23e9ee8a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:44:17 np0005593232 podman[391405]: 2026-01-23 10:44:17.217206082 +0000 UTC m=+0.113428645 container start 053ca9e4ff069f09c76c6243a43361938c3706dacf83b67bce01fe23e9ee8a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 05:44:17 np0005593232 podman[391405]: 2026-01-23 10:44:17.122613103 +0000 UTC m=+0.018835696 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:44:17 np0005593232 podman[391405]: 2026-01-23 10:44:17.220192397 +0000 UTC m=+0.116414960 container attach 053ca9e4ff069f09c76c6243a43361938c3706dacf83b67bce01fe23e9ee8a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_burnell, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:44:17 np0005593232 practical_burnell[391421]: 167 167
Jan 23 05:44:17 np0005593232 systemd[1]: libpod-053ca9e4ff069f09c76c6243a43361938c3706dacf83b67bce01fe23e9ee8a33.scope: Deactivated successfully.
Jan 23 05:44:17 np0005593232 podman[391426]: 2026-01-23 10:44:17.259914246 +0000 UTC m=+0.022430328 container died 053ca9e4ff069f09c76c6243a43361938c3706dacf83b67bce01fe23e9ee8a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_burnell, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:44:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay-9d6c05654688ec17bab5668230603dd05029586693158412674bead15488ff26-merged.mount: Deactivated successfully.
Jan 23 05:44:17 np0005593232 podman[391426]: 2026-01-23 10:44:17.291549845 +0000 UTC m=+0.054065907 container remove 053ca9e4ff069f09c76c6243a43361938c3706dacf83b67bce01fe23e9ee8a33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_burnell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:44:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:17.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:17 np0005593232 systemd[1]: libpod-conmon-053ca9e4ff069f09c76c6243a43361938c3706dacf83b67bce01fe23e9ee8a33.scope: Deactivated successfully.
Jan 23 05:44:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:44:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:17.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:44:17 np0005593232 podman[391448]: 2026-01-23 10:44:17.44863177 +0000 UTC m=+0.038954638 container create 530d0f9aab3255999cda3e883efa7e72657ed79290b3b30050bf8047f02ee679 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_yonath, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:44:17 np0005593232 systemd[1]: Started libpod-conmon-530d0f9aab3255999cda3e883efa7e72657ed79290b3b30050bf8047f02ee679.scope.
Jan 23 05:44:17 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:44:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb8982c8727557204c7144999f6942bbb01e1f11abbabcfc718845165967b54c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:44:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb8982c8727557204c7144999f6942bbb01e1f11abbabcfc718845165967b54c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:44:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb8982c8727557204c7144999f6942bbb01e1f11abbabcfc718845165967b54c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:44:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb8982c8727557204c7144999f6942bbb01e1f11abbabcfc718845165967b54c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:44:17 np0005593232 podman[391448]: 2026-01-23 10:44:17.527307886 +0000 UTC m=+0.117630774 container init 530d0f9aab3255999cda3e883efa7e72657ed79290b3b30050bf8047f02ee679 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_yonath, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 05:44:17 np0005593232 podman[391448]: 2026-01-23 10:44:17.434232851 +0000 UTC m=+0.024555749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:44:17 np0005593232 podman[391448]: 2026-01-23 10:44:17.533934555 +0000 UTC m=+0.124257423 container start 530d0f9aab3255999cda3e883efa7e72657ed79290b3b30050bf8047f02ee679 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:44:17 np0005593232 podman[391448]: 2026-01-23 10:44:17.538440853 +0000 UTC m=+0.128763741 container attach 530d0f9aab3255999cda3e883efa7e72657ed79290b3b30050bf8047f02ee679 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_yonath, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 23 05:44:18 np0005593232 eloquent_yonath[391464]: {
Jan 23 05:44:18 np0005593232 eloquent_yonath[391464]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:44:18 np0005593232 eloquent_yonath[391464]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:44:18 np0005593232 eloquent_yonath[391464]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:44:18 np0005593232 eloquent_yonath[391464]:        "osd_id": 0,
Jan 23 05:44:18 np0005593232 eloquent_yonath[391464]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:44:18 np0005593232 eloquent_yonath[391464]:        "type": "bluestore"
Jan 23 05:44:18 np0005593232 eloquent_yonath[391464]:    }
Jan 23 05:44:18 np0005593232 eloquent_yonath[391464]: }
Jan 23 05:44:18 np0005593232 nova_compute[250269]: 2026-01-23 10:44:18.370 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:44:18 np0005593232 nova_compute[250269]: 2026-01-23 10:44:18.371 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:44:18 np0005593232 nova_compute[250269]: 2026-01-23 10:44:18.371 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:44:18 np0005593232 systemd[1]: libpod-530d0f9aab3255999cda3e883efa7e72657ed79290b3b30050bf8047f02ee679.scope: Deactivated successfully.
Jan 23 05:44:18 np0005593232 podman[391448]: 2026-01-23 10:44:18.377399979 +0000 UTC m=+0.967722857 container died 530d0f9aab3255999cda3e883efa7e72657ed79290b3b30050bf8047f02ee679 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_yonath, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 23 05:44:18 np0005593232 systemd[1]: var-lib-containers-storage-overlay-cb8982c8727557204c7144999f6942bbb01e1f11abbabcfc718845165967b54c-merged.mount: Deactivated successfully.
Jan 23 05:44:18 np0005593232 nova_compute[250269]: 2026-01-23 10:44:18.414 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:44:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3527: 321 pgs: 321 active+clean; 320 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 167 KiB/s rd, 52 KiB/s wr, 69 op/s
Jan 23 05:44:18 np0005593232 podman[391448]: 2026-01-23 10:44:18.42318645 +0000 UTC m=+1.013509318 container remove 530d0f9aab3255999cda3e883efa7e72657ed79290b3b30050bf8047f02ee679 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_yonath, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 05:44:18 np0005593232 systemd[1]: libpod-conmon-530d0f9aab3255999cda3e883efa7e72657ed79290b3b30050bf8047f02ee679.scope: Deactivated successfully.
Jan 23 05:44:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:44:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:44:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:44:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:44:18 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 6603d898-4b14-4e34-a638-87618d361a71 does not exist
Jan 23 05:44:18 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 77389876-442d-42a2-a3e4-2e70196b7eb7 does not exist
Jan 23 05:44:18 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 0ea648f1-f545-4156-ae8a-30159d127e99 does not exist
Jan 23 05:44:19 np0005593232 nova_compute[250269]: 2026-01-23 10:44:19.083 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:19 np0005593232 nova_compute[250269]: 2026-01-23 10:44:19.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:44:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:19.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:19.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:19 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:44:19 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:44:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:44:20 np0005593232 nova_compute[250269]: 2026-01-23 10:44:20.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:44:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3528: 321 pgs: 321 active+clean; 320 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 3.9 KiB/s wr, 21 op/s
Jan 23 05:44:21 np0005593232 nova_compute[250269]: 2026-01-23 10:44:21.269 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:21.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:21.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3529: 321 pgs: 321 active+clean; 297 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 4.2 KiB/s wr, 22 op/s
Jan 23 05:44:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:23.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:23.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:24 np0005593232 nova_compute[250269]: 2026-01-23 10:44:24.085 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3530: 321 pgs: 321 active+clean; 297 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Jan 23 05:44:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:44:25 np0005593232 nova_compute[250269]: 2026-01-23 10:44:25.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:44:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:44:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:25.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:44:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:44:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:25.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:44:25 np0005593232 nova_compute[250269]: 2026-01-23 10:44:25.569 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:44:25 np0005593232 nova_compute[250269]: 2026-01-23 10:44:25.569 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:44:25 np0005593232 nova_compute[250269]: 2026-01-23 10:44:25.569 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:44:25 np0005593232 nova_compute[250269]: 2026-01-23 10:44:25.570 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:44:25 np0005593232 nova_compute[250269]: 2026-01-23 10:44:25.570 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:44:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:44:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/806519921' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:44:25 np0005593232 nova_compute[250269]: 2026-01-23 10:44:25.996 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:44:26 np0005593232 nova_compute[250269]: 2026-01-23 10:44:26.172 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:44:26 np0005593232 nova_compute[250269]: 2026-01-23 10:44:26.173 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4054MB free_disk=20.987796783447266GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:44:26 np0005593232 nova_compute[250269]: 2026-01-23 10:44:26.174 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:44:26 np0005593232 nova_compute[250269]: 2026-01-23 10:44:26.174 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:44:26 np0005593232 nova_compute[250269]: 2026-01-23 10:44:26.243 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:44:26 np0005593232 nova_compute[250269]: 2026-01-23 10:44:26.244 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:44:26 np0005593232 nova_compute[250269]: 2026-01-23 10:44:26.260 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:44:26 np0005593232 nova_compute[250269]: 2026-01-23 10:44:26.304 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3531: 321 pgs: 321 active+clean; 297 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 6.8 KiB/s wr, 28 op/s
Jan 23 05:44:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:44:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3082056124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:44:26 np0005593232 nova_compute[250269]: 2026-01-23 10:44:26.748 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:44:26 np0005593232 nova_compute[250269]: 2026-01-23 10:44:26.754 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:44:26 np0005593232 nova_compute[250269]: 2026-01-23 10:44:26.772 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:44:26 np0005593232 nova_compute[250269]: 2026-01-23 10:44:26.812 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:44:26 np0005593232 nova_compute[250269]: 2026-01-23 10:44:26.812 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:44:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:44:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:27.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:44:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:27.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3532: 321 pgs: 321 active+clean; 297 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 8.0 KiB/s wr, 54 op/s
Jan 23 05:44:29 np0005593232 nova_compute[250269]: 2026-01-23 10:44:29.087 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:29.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:44:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:29.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:44:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:44:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3533: 321 pgs: 321 active+clean; 297 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 4.1 KiB/s wr, 32 op/s
Jan 23 05:44:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e396 do_prune osdmap full prune enabled
Jan 23 05:44:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e397 e397: 3 total, 3 up, 3 in
Jan 23 05:44:30 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e397: 3 total, 3 up, 3 in
Jan 23 05:44:31 np0005593232 nova_compute[250269]: 2026-01-23 10:44:31.273 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:31.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:31.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:31 np0005593232 nova_compute[250269]: 2026-01-23 10:44:31.807 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:44:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3535: 321 pgs: 321 active+clean; 263 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 5.4 KiB/s wr, 66 op/s
Jan 23 05:44:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:33.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:33.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:34 np0005593232 nova_compute[250269]: 2026-01-23 10:44:34.089 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3536: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 3.3 KiB/s wr, 81 op/s
Jan 23 05:44:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:44:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:44:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:35.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:44:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:44:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:35.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:44:36 np0005593232 nova_compute[250269]: 2026-01-23 10:44:36.275 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3537: 321 pgs: 321 active+clean; 200 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 60 KiB/s rd, 3.3 KiB/s wr, 81 op/s
Jan 23 05:44:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:37.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:44:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:37.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:44:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:44:37
Jan 23 05:44:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:44:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:44:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'images', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'backups', '.rgw.root']
Jan 23 05:44:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:44:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:44:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:44:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:44:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:44:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:44:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:44:37 np0005593232 ceph-mgr[74726]: client.0 ms_handle_reset on v2:192.168.122.100:6800/530399322
Jan 23 05:44:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3538: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 50 KiB/s rd, 2.6 KiB/s wr, 68 op/s
Jan 23 05:44:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:44:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:44:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:44:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:44:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:44:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:44:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:44:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:44:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:44:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:44:39 np0005593232 nova_compute[250269]: 2026-01-23 10:44:39.091 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:39.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:44:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:39.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:44:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:44:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e397 do_prune osdmap full prune enabled
Jan 23 05:44:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e398 e398: 3 total, 3 up, 3 in
Jan 23 05:44:39 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e398: 3 total, 3 up, 3 in
Jan 23 05:44:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3540: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 51 KiB/s rd, 2.7 KiB/s wr, 69 op/s
Jan 23 05:44:41 np0005593232 nova_compute[250269]: 2026-01-23 10:44:41.277 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:41.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:41.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3541: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Jan 23 05:44:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:44:42.663 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:44:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:44:42.664 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:44:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:44:42.664 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:44:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:43.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:43.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:44 np0005593232 nova_compute[250269]: 2026-01-23 10:44:44.093 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3542: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 716 B/s wr, 17 op/s
Jan 23 05:44:44 np0005593232 podman[391658]: 2026-01-23 10:44:44.447801802 +0000 UTC m=+0.101236919 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:44:44.550010) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165084550081, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 1298, "num_deletes": 259, "total_data_size": 2036901, "memory_usage": 2063520, "flush_reason": "Manual Compaction"}
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3788968065' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3788968065' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165084636798, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 1992865, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77237, "largest_seqno": 78534, "table_properties": {"data_size": 1986614, "index_size": 3453, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13643, "raw_average_key_size": 20, "raw_value_size": 1973875, "raw_average_value_size": 2911, "num_data_blocks": 152, "num_entries": 678, "num_filter_entries": 678, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769164982, "oldest_key_time": 1769164982, "file_creation_time": 1769165084, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 86881 microseconds, and 5154 cpu microseconds.
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:44:44.636840) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 1992865 bytes OK
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:44:44.636910) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:44:44.693504) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:44:44.693534) EVENT_LOG_v1 {"time_micros": 1769165084693527, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:44:44.693556) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 2030988, prev total WAL file size 2030988, number of live WAL files 2.
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:44:44.694275) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323637' seq:72057594037927935, type:22 .. '6C6F676D0033353230' seq:0, type:0; will stop at (end)
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(1946KB)], [179(10MB)]
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165084694301, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 13372194, "oldest_snapshot_seqno": -1}
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 10316 keys, 13230908 bytes, temperature: kUnknown
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165084796773, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 13230908, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13164595, "index_size": 39448, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25797, "raw_key_size": 272750, "raw_average_key_size": 26, "raw_value_size": 12984485, "raw_average_value_size": 1258, "num_data_blocks": 1504, "num_entries": 10316, "num_filter_entries": 10316, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769165084, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:44:44.797263) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 13230908 bytes
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:44:44.822995) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.3 rd, 128.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 10.9 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(13.3) write-amplify(6.6) OK, records in: 10851, records dropped: 535 output_compression: NoCompression
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:44:44.823061) EVENT_LOG_v1 {"time_micros": 1769165084823037, "job": 112, "event": "compaction_finished", "compaction_time_micros": 102635, "compaction_time_cpu_micros": 34836, "output_level": 6, "num_output_files": 1, "total_output_size": 13230908, "num_input_records": 10851, "num_output_records": 10316, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165084824407, "job": 112, "event": "table_file_deletion", "file_number": 181}
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165084829133, "job": 112, "event": "table_file_deletion", "file_number": 179}
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:44:44.694195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:44:44.829216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:44:44.829222) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:44:44.829224) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:44:44.829225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:44:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:44:44.829227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:44:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:44:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:45.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:44:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:45.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:46 np0005593232 nova_compute[250269]: 2026-01-23 10:44:46.278 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3543: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 716 B/s wr, 17 op/s
Jan 23 05:44:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:47.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:44:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:47.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:44:47 np0005593232 podman[391736]: 2026-01-23 10:44:47.458480206 +0000 UTC m=+0.104870841 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 23 05:44:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:44:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:44:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:44:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:44:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 05:44:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:44:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004338823748371487 of space, bias 1.0, pg target 1.301647124511446 quantized to 32 (current 32)
Jan 23 05:44:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:44:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 23 05:44:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:44:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019036880001861158 of space, bias 1.0, pg target 0.5692027120556487 quantized to 32 (current 32)
Jan 23 05:44:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:44:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 05:44:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:44:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:44:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:44:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 05:44:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:44:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 05:44:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:44:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:44:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:44:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 05:44:48 np0005593232 nova_compute[250269]: 2026-01-23 10:44:48.220 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:44:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3544: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 716 B/s wr, 20 op/s
Jan 23 05:44:49 np0005593232 nova_compute[250269]: 2026-01-23 10:44:49.094 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:44:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:49.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:44:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:49.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:44:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3641639292' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:44:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:44:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3641639292' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:44:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:44:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3545: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 657 B/s wr, 18 op/s
Jan 23 05:44:50 np0005593232 nova_compute[250269]: 2026-01-23 10:44:50.504 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:50 np0005593232 nova_compute[250269]: 2026-01-23 10:44:50.683 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:51 np0005593232 nova_compute[250269]: 2026-01-23 10:44:51.280 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:51.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:51.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:52 np0005593232 nova_compute[250269]: 2026-01-23 10:44:52.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:44:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3546: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 16 op/s
Jan 23 05:44:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e398 do_prune osdmap full prune enabled
Jan 23 05:44:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e399 e399: 3 total, 3 up, 3 in
Jan 23 05:44:52 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e399: 3 total, 3 up, 3 in
Jan 23 05:44:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:53.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:53.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:54 np0005593232 nova_compute[250269]: 2026-01-23 10:44:54.097 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3548: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 921 B/s wr, 27 op/s
Jan 23 05:44:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:44:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:44:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:55.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:44:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:44:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:55.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:44:56 np0005593232 nova_compute[250269]: 2026-01-23 10:44:56.282 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:56 np0005593232 nova_compute[250269]: 2026-01-23 10:44:56.305 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:44:56 np0005593232 nova_compute[250269]: 2026-01-23 10:44:56.305 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 23 05:44:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3549: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 921 B/s wr, 27 op/s
Jan 23 05:44:56 np0005593232 nova_compute[250269]: 2026-01-23 10:44:56.658 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 23 05:44:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:57.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:44:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:57.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:44:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3550: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 921 B/s wr, 23 op/s
Jan 23 05:44:59 np0005593232 nova_compute[250269]: 2026-01-23 10:44:59.099 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:44:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:44:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:44:59.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:44:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:44:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:44:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:44:59.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:44:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:44:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e399 do_prune osdmap full prune enabled
Jan 23 05:44:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e400 e400: 3 total, 3 up, 3 in
Jan 23 05:44:59 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e400: 3 total, 3 up, 3 in
Jan 23 05:45:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3552: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.1 KiB/s wr, 28 op/s
Jan 23 05:45:01 np0005593232 nova_compute[250269]: 2026-01-23 10:45:01.285 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:01.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:01.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3553: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 741 B/s wr, 16 op/s
Jan 23 05:45:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:03.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:45:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:03.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:45:04 np0005593232 nova_compute[250269]: 2026-01-23 10:45:04.101 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3554: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 716 B/s wr, 16 op/s
Jan 23 05:45:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:45:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:45:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:05.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:45:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:45:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:05.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:45:06 np0005593232 nova_compute[250269]: 2026-01-23 10:45:06.287 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3555: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 12 KiB/s rd, 716 B/s wr, 16 op/s
Jan 23 05:45:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:07.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:07.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:45:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:45:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:45:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:45:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:45:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:45:07 np0005593232 nova_compute[250269]: 2026-01-23 10:45:07.645 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:45:07 np0005593232 nova_compute[250269]: 2026-01-23 10:45:07.646 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:45:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3556: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Jan 23 05:45:09 np0005593232 nova_compute[250269]: 2026-01-23 10:45:09.103 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:09.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:09.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:45:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3557: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 94 B/s wr, 7 op/s
Jan 23 05:45:11 np0005593232 nova_compute[250269]: 2026-01-23 10:45:11.331 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:11.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:11.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:12 np0005593232 nova_compute[250269]: 2026-01-23 10:45:12.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:45:12 np0005593232 nova_compute[250269]: 2026-01-23 10:45:12.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:45:12 np0005593232 nova_compute[250269]: 2026-01-23 10:45:12.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:45:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3558: 321 pgs: 321 active+clean; 120 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 255 B/s wr, 17 op/s
Jan 23 05:45:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:13.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:45:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:13.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:45:14 np0005593232 nova_compute[250269]: 2026-01-23 10:45:14.106 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:14 np0005593232 nova_compute[250269]: 2026-01-23 10:45:14.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:45:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3559: 321 pgs: 321 active+clean; 140 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 407 KiB/s wr, 25 op/s
Jan 23 05:45:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:45:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:15.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:15.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:15 np0005593232 podman[391834]: 2026-01-23 10:45:15.424952926 +0000 UTC m=+0.081238820 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller)
Jan 23 05:45:16 np0005593232 nova_compute[250269]: 2026-01-23 10:45:16.364 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3560: 321 pgs: 321 active+clean; 140 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 407 KiB/s wr, 25 op/s
Jan 23 05:45:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:17.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:17.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:18 np0005593232 nova_compute[250269]: 2026-01-23 10:45:18.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:45:18 np0005593232 nova_compute[250269]: 2026-01-23 10:45:18.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:45:18 np0005593232 nova_compute[250269]: 2026-01-23 10:45:18.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:45:18 np0005593232 nova_compute[250269]: 2026-01-23 10:45:18.310 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:45:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3561: 321 pgs: 321 active+clean; 173 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.9 MiB/s wr, 51 op/s
Jan 23 05:45:18 np0005593232 podman[391862]: 2026-01-23 10:45:18.450025824 +0000 UTC m=+0.095336961 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:45:18 np0005593232 nova_compute[250269]: 2026-01-23 10:45:18.970 250273 DEBUG oslo_concurrency.lockutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Acquiring lock "b0f3c685-d13a-41b8-925b-67b144c237b8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:45:18 np0005593232 nova_compute[250269]: 2026-01-23 10:45:18.971 250273 DEBUG oslo_concurrency.lockutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "b0f3c685-d13a-41b8-925b-67b144c237b8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:45:18 np0005593232 nova_compute[250269]: 2026-01-23 10:45:18.993 250273 DEBUG nova.compute.manager [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:45:19 np0005593232 nova_compute[250269]: 2026-01-23 10:45:19.100 250273 DEBUG oslo_concurrency.lockutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:45:19 np0005593232 nova_compute[250269]: 2026-01-23 10:45:19.101 250273 DEBUG oslo_concurrency.lockutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:45:19 np0005593232 nova_compute[250269]: 2026-01-23 10:45:19.106 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:19 np0005593232 nova_compute[250269]: 2026-01-23 10:45:19.109 250273 DEBUG nova.virt.hardware [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:45:19 np0005593232 nova_compute[250269]: 2026-01-23 10:45:19.109 250273 INFO nova.compute.claims [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:45:19 np0005593232 nova_compute[250269]: 2026-01-23 10:45:19.217 250273 DEBUG oslo_concurrency.processutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:45:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:19.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:19.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 05:45:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:45:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:45:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3637899726' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:45:19 np0005593232 nova_compute[250269]: 2026-01-23 10:45:19.669 250273 DEBUG oslo_concurrency.processutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:45:19 np0005593232 nova_compute[250269]: 2026-01-23 10:45:19.677 250273 DEBUG nova.compute.provider_tree [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:45:19 np0005593232 nova_compute[250269]: 2026-01-23 10:45:19.708 250273 DEBUG nova.scheduler.client.report [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:45:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:45:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 05:45:19 np0005593232 nova_compute[250269]: 2026-01-23 10:45:19.734 250273 DEBUG oslo_concurrency.lockutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:45:19 np0005593232 nova_compute[250269]: 2026-01-23 10:45:19.735 250273 DEBUG nova.compute.manager [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:45:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:45:19 np0005593232 nova_compute[250269]: 2026-01-23 10:45:19.792 250273 DEBUG nova.compute.manager [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:45:19 np0005593232 nova_compute[250269]: 2026-01-23 10:45:19.793 250273 DEBUG nova.network.neutron [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:45:19 np0005593232 nova_compute[250269]: 2026-01-23 10:45:19.842 250273 INFO nova.virt.libvirt.driver [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:45:19 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:45:19 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:45:19 np0005593232 nova_compute[250269]: 2026-01-23 10:45:19.865 250273 DEBUG nova.compute.manager [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:45:19 np0005593232 nova_compute[250269]: 2026-01-23 10:45:19.933 250273 INFO nova.virt.block_device [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Booting with volume e5e1339d-ae3b-4104-9429-034d0ceea4fd at /dev/vda#033[00m
Jan 23 05:45:20 np0005593232 nova_compute[250269]: 2026-01-23 10:45:20.082 250273 DEBUG os_brick.utils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 23 05:45:20 np0005593232 nova_compute[250269]: 2026-01-23 10:45:20.084 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:45:20 np0005593232 nova_compute[250269]: 2026-01-23 10:45:20.098 265031 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:45:20 np0005593232 nova_compute[250269]: 2026-01-23 10:45:20.099 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[54be23f7-961f-481e-b1b3-e89be64b69dd]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:20 np0005593232 nova_compute[250269]: 2026-01-23 10:45:20.100 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:45:20 np0005593232 nova_compute[250269]: 2026-01-23 10:45:20.110 265031 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:45:20 np0005593232 nova_compute[250269]: 2026-01-23 10:45:20.110 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[14e66ae8-cff6-404d-b0b3-c389d7fae71b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:c6473626b456', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:20 np0005593232 nova_compute[250269]: 2026-01-23 10:45:20.113 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:45:20 np0005593232 nova_compute[250269]: 2026-01-23 10:45:20.124 265031 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:45:20 np0005593232 nova_compute[250269]: 2026-01-23 10:45:20.125 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[021995b2-1338-4b28-8416-113683fc1130]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:20 np0005593232 nova_compute[250269]: 2026-01-23 10:45:20.128 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[e8858a3e-c6ae-4b8e-a758-4ac428b4fca1]: (4, 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:20 np0005593232 nova_compute[250269]: 2026-01-23 10:45:20.129 250273 DEBUG oslo_concurrency.processutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:45:20 np0005593232 nova_compute[250269]: 2026-01-23 10:45:20.190 250273 DEBUG oslo_concurrency.processutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] CMD "nvme version" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:45:20 np0005593232 nova_compute[250269]: 2026-01-23 10:45:20.193 250273 DEBUG os_brick.initiator.connectors.lightos [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 23 05:45:20 np0005593232 nova_compute[250269]: 2026-01-23 10:45:20.193 250273 DEBUG os_brick.initiator.connectors.lightos [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 23 05:45:20 np0005593232 nova_compute[250269]: 2026-01-23 10:45:20.193 250273 DEBUG os_brick.initiator.connectors.lightos [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 23 05:45:20 np0005593232 nova_compute[250269]: 2026-01-23 10:45:20.194 250273 DEBUG os_brick.utils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] <== get_connector_properties: return (110ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:c6473626b456', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 23 05:45:20 np0005593232 nova_compute[250269]: 2026-01-23 10:45:20.194 250273 DEBUG nova.virt.block_device [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Updating existing volume attachment record: 60c75f9e-eb69-4216-83bb-d164765eef16 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 23 05:45:20 np0005593232 nova_compute[250269]: 2026-01-23 10:45:20.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:45:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3562: 321 pgs: 321 active+clean; 173 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 1.9 MiB/s wr, 45 op/s
Jan 23 05:45:20 np0005593232 nova_compute[250269]: 2026-01-23 10:45:20.447 250273 DEBUG nova.policy [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb70c3aee8b64273a1930c0c2c231aff', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd27c5465284b48a5818ef931d6251c43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:45:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:45:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:45:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:45:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:45:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:45:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:45:20 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 5d4324f4-9a56-49e4-83eb-06eeb377dabe does not exist
Jan 23 05:45:20 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 57c3088e-664c-4c1f-a8e3-fa264d111e5c does not exist
Jan 23 05:45:20 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1d188e91-4b92-4c3d-ada1-6c4bf1cfca06 does not exist
Jan 23 05:45:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:45:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:45:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:45:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:45:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:45:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:45:20 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:45:20 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:45:20 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:45:21 np0005593232 nova_compute[250269]: 2026-01-23 10:45:21.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:45:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:21.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:21.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:21 np0005593232 podman[392182]: 2026-01-23 10:45:21.377087015 +0000 UTC m=+0.045998239 container create 375bbfa383bf70488e814ec83e5df890e0ee2502fb020dc131972ebd896f9b20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mahavira, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:45:21 np0005593232 nova_compute[250269]: 2026-01-23 10:45:21.415 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:21 np0005593232 systemd[1]: Started libpod-conmon-375bbfa383bf70488e814ec83e5df890e0ee2502fb020dc131972ebd896f9b20.scope.
Jan 23 05:45:21 np0005593232 podman[392182]: 2026-01-23 10:45:21.356015486 +0000 UTC m=+0.024926740 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:45:21 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:45:21 np0005593232 podman[392182]: 2026-01-23 10:45:21.497134797 +0000 UTC m=+0.166046061 container init 375bbfa383bf70488e814ec83e5df890e0ee2502fb020dc131972ebd896f9b20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mahavira, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:45:21 np0005593232 podman[392182]: 2026-01-23 10:45:21.510307292 +0000 UTC m=+0.179218526 container start 375bbfa383bf70488e814ec83e5df890e0ee2502fb020dc131972ebd896f9b20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mahavira, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:45:21 np0005593232 podman[392182]: 2026-01-23 10:45:21.513670327 +0000 UTC m=+0.182581561 container attach 375bbfa383bf70488e814ec83e5df890e0ee2502fb020dc131972ebd896f9b20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mahavira, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Jan 23 05:45:21 np0005593232 infallible_mahavira[392198]: 167 167
Jan 23 05:45:21 np0005593232 systemd[1]: libpod-375bbfa383bf70488e814ec83e5df890e0ee2502fb020dc131972ebd896f9b20.scope: Deactivated successfully.
Jan 23 05:45:21 np0005593232 podman[392182]: 2026-01-23 10:45:21.520641516 +0000 UTC m=+0.189552750 container died 375bbfa383bf70488e814ec83e5df890e0ee2502fb020dc131972ebd896f9b20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 05:45:21 np0005593232 systemd[1]: var-lib-containers-storage-overlay-20ab51e96891395e280e4c7022e68923ebbefdf9a4e44c7d5cb06ff06d735b43-merged.mount: Deactivated successfully.
Jan 23 05:45:21 np0005593232 podman[392182]: 2026-01-23 10:45:21.568784924 +0000 UTC m=+0.237696168 container remove 375bbfa383bf70488e814ec83e5df890e0ee2502fb020dc131972ebd896f9b20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:45:21 np0005593232 systemd[1]: libpod-conmon-375bbfa383bf70488e814ec83e5df890e0ee2502fb020dc131972ebd896f9b20.scope: Deactivated successfully.
Jan 23 05:45:21 np0005593232 podman[392221]: 2026-01-23 10:45:21.742928994 +0000 UTC m=+0.040538093 container create fb9dc7fbd7f29abdf33eeba247c00908b71ba4e0d43fa5bcc1f6cf3be9ef61b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 05:45:21 np0005593232 systemd[1]: Started libpod-conmon-fb9dc7fbd7f29abdf33eeba247c00908b71ba4e0d43fa5bcc1f6cf3be9ef61b4.scope.
Jan 23 05:45:21 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:45:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a18e1dfb1f8c8c7fa95d220caeb24e677b87973de7cfff87225d598a926728bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:45:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a18e1dfb1f8c8c7fa95d220caeb24e677b87973de7cfff87225d598a926728bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:45:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a18e1dfb1f8c8c7fa95d220caeb24e677b87973de7cfff87225d598a926728bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:45:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a18e1dfb1f8c8c7fa95d220caeb24e677b87973de7cfff87225d598a926728bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:45:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a18e1dfb1f8c8c7fa95d220caeb24e677b87973de7cfff87225d598a926728bd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:45:21 np0005593232 podman[392221]: 2026-01-23 10:45:21.726375954 +0000 UTC m=+0.023985073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:45:21 np0005593232 podman[392221]: 2026-01-23 10:45:21.827967361 +0000 UTC m=+0.125576480 container init fb9dc7fbd7f29abdf33eeba247c00908b71ba4e0d43fa5bcc1f6cf3be9ef61b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 05:45:21 np0005593232 podman[392221]: 2026-01-23 10:45:21.83354361 +0000 UTC m=+0.131152709 container start fb9dc7fbd7f29abdf33eeba247c00908b71ba4e0d43fa5bcc1f6cf3be9ef61b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:45:21 np0005593232 podman[392221]: 2026-01-23 10:45:21.836583406 +0000 UTC m=+0.134192505 container attach fb9dc7fbd7f29abdf33eeba247c00908b71ba4e0d43fa5bcc1f6cf3be9ef61b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 23 05:45:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3563: 321 pgs: 321 active+clean; 181 MiB data, 1.4 GiB used, 20 GiB / 21 GiB avail; 28 KiB/s rd, 2.2 MiB/s wr, 45 op/s
Jan 23 05:45:22 np0005593232 unruffled_gauss[392238]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:45:22 np0005593232 unruffled_gauss[392238]: --> relative data size: 1.0
Jan 23 05:45:22 np0005593232 unruffled_gauss[392238]: --> All data devices are unavailable
Jan 23 05:45:22 np0005593232 systemd[1]: libpod-fb9dc7fbd7f29abdf33eeba247c00908b71ba4e0d43fa5bcc1f6cf3be9ef61b4.scope: Deactivated successfully.
Jan 23 05:45:22 np0005593232 podman[392221]: 2026-01-23 10:45:22.668426492 +0000 UTC m=+0.966035591 container died fb9dc7fbd7f29abdf33eeba247c00908b71ba4e0d43fa5bcc1f6cf3be9ef61b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 05:45:22 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a18e1dfb1f8c8c7fa95d220caeb24e677b87973de7cfff87225d598a926728bd-merged.mount: Deactivated successfully.
Jan 23 05:45:22 np0005593232 podman[392221]: 2026-01-23 10:45:22.841641115 +0000 UTC m=+1.139250254 container remove fb9dc7fbd7f29abdf33eeba247c00908b71ba4e0d43fa5bcc1f6cf3be9ef61b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 05:45:22 np0005593232 systemd[1]: libpod-conmon-fb9dc7fbd7f29abdf33eeba247c00908b71ba4e0d43fa5bcc1f6cf3be9ef61b4.scope: Deactivated successfully.
Jan 23 05:45:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:23.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:23.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:23 np0005593232 podman[392408]: 2026-01-23 10:45:23.684472643 +0000 UTC m=+0.048043347 container create b8acac51b45d60ef62da628a78b37181bea5a2d3a4fa7589dd43a7700e5b616f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_robinson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 23 05:45:23 np0005593232 systemd[1]: Started libpod-conmon-b8acac51b45d60ef62da628a78b37181bea5a2d3a4fa7589dd43a7700e5b616f.scope.
Jan 23 05:45:23 np0005593232 podman[392408]: 2026-01-23 10:45:23.661263203 +0000 UTC m=+0.024833967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:45:23 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:45:23 np0005593232 podman[392408]: 2026-01-23 10:45:23.782451468 +0000 UTC m=+0.146022212 container init b8acac51b45d60ef62da628a78b37181bea5a2d3a4fa7589dd43a7700e5b616f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_robinson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:45:23 np0005593232 podman[392408]: 2026-01-23 10:45:23.789218811 +0000 UTC m=+0.152789555 container start b8acac51b45d60ef62da628a78b37181bea5a2d3a4fa7589dd43a7700e5b616f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 23 05:45:23 np0005593232 podman[392408]: 2026-01-23 10:45:23.793046849 +0000 UTC m=+0.156617563 container attach b8acac51b45d60ef62da628a78b37181bea5a2d3a4fa7589dd43a7700e5b616f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_robinson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:45:23 np0005593232 cool_robinson[392424]: 167 167
Jan 23 05:45:23 np0005593232 systemd[1]: libpod-b8acac51b45d60ef62da628a78b37181bea5a2d3a4fa7589dd43a7700e5b616f.scope: Deactivated successfully.
Jan 23 05:45:23 np0005593232 conmon[392424]: conmon b8acac51b45d60ef62da <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8acac51b45d60ef62da628a78b37181bea5a2d3a4fa7589dd43a7700e5b616f.scope/container/memory.events
Jan 23 05:45:23 np0005593232 podman[392408]: 2026-01-23 10:45:23.796280401 +0000 UTC m=+0.159851145 container died b8acac51b45d60ef62da628a78b37181bea5a2d3a4fa7589dd43a7700e5b616f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:45:23 np0005593232 systemd[1]: var-lib-containers-storage-overlay-058cadeada2f559d3248cd454df344fb22ad206ee2373ac9d9dbb1d4909f1828-merged.mount: Deactivated successfully.
Jan 23 05:45:23 np0005593232 podman[392408]: 2026-01-23 10:45:23.847576239 +0000 UTC m=+0.211146983 container remove b8acac51b45d60ef62da628a78b37181bea5a2d3a4fa7589dd43a7700e5b616f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_robinson, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 05:45:23 np0005593232 systemd[1]: libpod-conmon-b8acac51b45d60ef62da628a78b37181bea5a2d3a4fa7589dd43a7700e5b616f.scope: Deactivated successfully.
Jan 23 05:45:24 np0005593232 podman[392448]: 2026-01-23 10:45:24.006653741 +0000 UTC m=+0.045155124 container create 871aedd4d76c341fa5becb2ae93f459fab6d4fa2b94a74ba44f0937314bdbaab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swanson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:45:24 np0005593232 systemd[1]: Started libpod-conmon-871aedd4d76c341fa5becb2ae93f459fab6d4fa2b94a74ba44f0937314bdbaab.scope.
Jan 23 05:45:24 np0005593232 podman[392448]: 2026-01-23 10:45:23.981909618 +0000 UTC m=+0.020411041 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:45:24 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:45:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c16707bd7c200bc61e3b690ed4f6f617f6124c33fe83e8909888839c045f3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:45:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c16707bd7c200bc61e3b690ed4f6f617f6124c33fe83e8909888839c045f3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:45:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c16707bd7c200bc61e3b690ed4f6f617f6124c33fe83e8909888839c045f3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:45:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05c16707bd7c200bc61e3b690ed4f6f617f6124c33fe83e8909888839c045f3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:45:24 np0005593232 podman[392448]: 2026-01-23 10:45:24.103679339 +0000 UTC m=+0.142180762 container init 871aedd4d76c341fa5becb2ae93f459fab6d4fa2b94a74ba44f0937314bdbaab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 05:45:24 np0005593232 nova_compute[250269]: 2026-01-23 10:45:24.107 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:24 np0005593232 podman[392448]: 2026-01-23 10:45:24.116435612 +0000 UTC m=+0.154937005 container start 871aedd4d76c341fa5becb2ae93f459fab6d4fa2b94a74ba44f0937314bdbaab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:45:24 np0005593232 podman[392448]: 2026-01-23 10:45:24.120979031 +0000 UTC m=+0.159480464 container attach 871aedd4d76c341fa5becb2ae93f459fab6d4fa2b94a74ba44f0937314bdbaab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swanson, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:45:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3564: 321 pgs: 321 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 3.5 MiB/s wr, 51 op/s
Jan 23 05:45:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:45:24 np0005593232 nova_compute[250269]: 2026-01-23 10:45:24.584 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:24 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:24.587 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=80, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=79) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:45:24 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:24.592 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:45:24 np0005593232 nova_compute[250269]: 2026-01-23 10:45:24.646 250273 DEBUG nova.compute.manager [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 05:45:24 np0005593232 nova_compute[250269]: 2026-01-23 10:45:24.648 250273 DEBUG nova.virt.libvirt.driver [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:45:24 np0005593232 nova_compute[250269]: 2026-01-23 10:45:24.648 250273 INFO nova.virt.libvirt.driver [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Creating image(s)#033[00m
Jan 23 05:45:24 np0005593232 nova_compute[250269]: 2026-01-23 10:45:24.649 250273 DEBUG nova.virt.libvirt.driver [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 23 05:45:24 np0005593232 nova_compute[250269]: 2026-01-23 10:45:24.649 250273 DEBUG nova.virt.libvirt.driver [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Ensure instance console log exists: /var/lib/nova/instances/b0f3c685-d13a-41b8-925b-67b144c237b8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:45:24 np0005593232 nova_compute[250269]: 2026-01-23 10:45:24.650 250273 DEBUG oslo_concurrency.lockutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:45:24 np0005593232 nova_compute[250269]: 2026-01-23 10:45:24.650 250273 DEBUG oslo_concurrency.lockutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:45:24 np0005593232 nova_compute[250269]: 2026-01-23 10:45:24.650 250273 DEBUG oslo_concurrency.lockutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:45:24 np0005593232 nova_compute[250269]: 2026-01-23 10:45:24.738 250273 DEBUG nova.network.neutron [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Successfully created port: 0e770338-09f5-4af8-b79a-dc01e251fc6a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]: {
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:    "0": [
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:        {
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:            "devices": [
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:                "/dev/loop3"
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:            ],
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:            "lv_name": "ceph_lv0",
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:            "lv_size": "7511998464",
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:            "name": "ceph_lv0",
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:            "tags": {
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:                "ceph.cluster_name": "ceph",
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:                "ceph.crush_device_class": "",
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:                "ceph.encrypted": "0",
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:                "ceph.osd_id": "0",
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:                "ceph.type": "block",
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:                "ceph.vdo": "0"
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:            },
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:            "type": "block",
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:            "vg_name": "ceph_vg0"
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:        }
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]:    ]
Jan 23 05:45:24 np0005593232 goofy_swanson[392464]: }
Jan 23 05:45:24 np0005593232 systemd[1]: libpod-871aedd4d76c341fa5becb2ae93f459fab6d4fa2b94a74ba44f0937314bdbaab.scope: Deactivated successfully.
Jan 23 05:45:24 np0005593232 podman[392448]: 2026-01-23 10:45:24.86681565 +0000 UTC m=+0.905317073 container died 871aedd4d76c341fa5becb2ae93f459fab6d4fa2b94a74ba44f0937314bdbaab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swanson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:45:24 np0005593232 systemd[1]: var-lib-containers-storage-overlay-05c16707bd7c200bc61e3b690ed4f6f617f6124c33fe83e8909888839c045f3d-merged.mount: Deactivated successfully.
Jan 23 05:45:24 np0005593232 podman[392448]: 2026-01-23 10:45:24.931914241 +0000 UTC m=+0.970415634 container remove 871aedd4d76c341fa5becb2ae93f459fab6d4fa2b94a74ba44f0937314bdbaab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 23 05:45:24 np0005593232 systemd[1]: libpod-conmon-871aedd4d76c341fa5becb2ae93f459fab6d4fa2b94a74ba44f0937314bdbaab.scope: Deactivated successfully.
Jan 23 05:45:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:25.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:25.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:25 np0005593232 podman[392628]: 2026-01-23 10:45:25.58727332 +0000 UTC m=+0.049022815 container create fef2c36ea1d1dbe5d1880efddb639986458913e7a6bc340448094f0b0ef569d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_poincare, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:45:25 np0005593232 systemd[1]: Started libpod-conmon-fef2c36ea1d1dbe5d1880efddb639986458913e7a6bc340448094f0b0ef569d9.scope.
Jan 23 05:45:25 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:45:25 np0005593232 podman[392628]: 2026-01-23 10:45:25.566242422 +0000 UTC m=+0.027991907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:45:25 np0005593232 podman[392628]: 2026-01-23 10:45:25.665292537 +0000 UTC m=+0.127042072 container init fef2c36ea1d1dbe5d1880efddb639986458913e7a6bc340448094f0b0ef569d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:45:25 np0005593232 podman[392628]: 2026-01-23 10:45:25.671267867 +0000 UTC m=+0.133017322 container start fef2c36ea1d1dbe5d1880efddb639986458913e7a6bc340448094f0b0ef569d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_poincare, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:45:25 np0005593232 podman[392628]: 2026-01-23 10:45:25.674793178 +0000 UTC m=+0.136542713 container attach fef2c36ea1d1dbe5d1880efddb639986458913e7a6bc340448094f0b0ef569d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_poincare, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:45:25 np0005593232 systemd[1]: libpod-fef2c36ea1d1dbe5d1880efddb639986458913e7a6bc340448094f0b0ef569d9.scope: Deactivated successfully.
Jan 23 05:45:25 np0005593232 suspicious_poincare[392644]: 167 167
Jan 23 05:45:25 np0005593232 podman[392628]: 2026-01-23 10:45:25.677741561 +0000 UTC m=+0.139491016 container died fef2c36ea1d1dbe5d1880efddb639986458913e7a6bc340448094f0b0ef569d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_poincare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Jan 23 05:45:25 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c7954248e614d222fd3dcb93a79297929338ed71d2c93c3a7b4cd90aa78752c7-merged.mount: Deactivated successfully.
Jan 23 05:45:25 np0005593232 podman[392628]: 2026-01-23 10:45:25.713557529 +0000 UTC m=+0.175306984 container remove fef2c36ea1d1dbe5d1880efddb639986458913e7a6bc340448094f0b0ef569d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_poincare, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 05:45:25 np0005593232 systemd[1]: libpod-conmon-fef2c36ea1d1dbe5d1880efddb639986458913e7a6bc340448094f0b0ef569d9.scope: Deactivated successfully.
Jan 23 05:45:25 np0005593232 podman[392714]: 2026-01-23 10:45:25.880965238 +0000 UTC m=+0.038972799 container create 4bbad3c791fcf28c161c19085d1f0fcdd9251e16ba71bc69b62da9280fead510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:45:25 np0005593232 systemd[1]: Started libpod-conmon-4bbad3c791fcf28c161c19085d1f0fcdd9251e16ba71bc69b62da9280fead510.scope.
Jan 23 05:45:25 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:45:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f926f97658688826afcaa8ae1f61d58a1a7a83101c4f1b4326785080038784f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:45:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f926f97658688826afcaa8ae1f61d58a1a7a83101c4f1b4326785080038784f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:45:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f926f97658688826afcaa8ae1f61d58a1a7a83101c4f1b4326785080038784f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:45:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f926f97658688826afcaa8ae1f61d58a1a7a83101c4f1b4326785080038784f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:45:25 np0005593232 podman[392714]: 2026-01-23 10:45:25.864307394 +0000 UTC m=+0.022314975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:45:25 np0005593232 podman[392714]: 2026-01-23 10:45:25.970008959 +0000 UTC m=+0.128016540 container init 4bbad3c791fcf28c161c19085d1f0fcdd9251e16ba71bc69b62da9280fead510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_joliot, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 05:45:25 np0005593232 podman[392714]: 2026-01-23 10:45:25.978524581 +0000 UTC m=+0.136532142 container start 4bbad3c791fcf28c161c19085d1f0fcdd9251e16ba71bc69b62da9280fead510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_joliot, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:45:25 np0005593232 podman[392714]: 2026-01-23 10:45:25.981739792 +0000 UTC m=+0.139747353 container attach 4bbad3c791fcf28c161c19085d1f0fcdd9251e16ba71bc69b62da9280fead510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Jan 23 05:45:26 np0005593232 nova_compute[250269]: 2026-01-23 10:45:26.415 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3565: 321 pgs: 321 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 3.1 MiB/s wr, 42 op/s
Jan 23 05:45:26 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:26.594 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '80'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:45:26 np0005593232 cranky_joliot[392733]: {
Jan 23 05:45:26 np0005593232 cranky_joliot[392733]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:45:26 np0005593232 cranky_joliot[392733]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:45:26 np0005593232 cranky_joliot[392733]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:45:26 np0005593232 cranky_joliot[392733]:        "osd_id": 0,
Jan 23 05:45:26 np0005593232 cranky_joliot[392733]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:45:26 np0005593232 cranky_joliot[392733]:        "type": "bluestore"
Jan 23 05:45:26 np0005593232 cranky_joliot[392733]:    }
Jan 23 05:45:26 np0005593232 cranky_joliot[392733]: }
Jan 23 05:45:26 np0005593232 systemd[1]: libpod-4bbad3c791fcf28c161c19085d1f0fcdd9251e16ba71bc69b62da9280fead510.scope: Deactivated successfully.
Jan 23 05:45:26 np0005593232 podman[392714]: 2026-01-23 10:45:26.953657039 +0000 UTC m=+1.111664600 container died 4bbad3c791fcf28c161c19085d1f0fcdd9251e16ba71bc69b62da9280fead510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_joliot, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 05:45:26 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7f926f97658688826afcaa8ae1f61d58a1a7a83101c4f1b4326785080038784f-merged.mount: Deactivated successfully.
Jan 23 05:45:27 np0005593232 podman[392714]: 2026-01-23 10:45:27.01664656 +0000 UTC m=+1.174654121 container remove 4bbad3c791fcf28c161c19085d1f0fcdd9251e16ba71bc69b62da9280fead510 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_joliot, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:45:27 np0005593232 systemd[1]: libpod-conmon-4bbad3c791fcf28c161c19085d1f0fcdd9251e16ba71bc69b62da9280fead510.scope: Deactivated successfully.
Jan 23 05:45:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:45:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:45:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:45:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:45:27 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 8d6c889a-7ebb-48c9-8739-b29bcc5d9e22 does not exist
Jan 23 05:45:27 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b15bb502-862f-435b-b0b3-91fead068f37 does not exist
Jan 23 05:45:27 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 07f32814-6637-406c-81e6-0166505b5625 does not exist
Jan 23 05:45:27 np0005593232 nova_compute[250269]: 2026-01-23 10:45:27.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:45:27 np0005593232 nova_compute[250269]: 2026-01-23 10:45:27.320 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:45:27 np0005593232 nova_compute[250269]: 2026-01-23 10:45:27.321 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:45:27 np0005593232 nova_compute[250269]: 2026-01-23 10:45:27.321 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:45:27 np0005593232 nova_compute[250269]: 2026-01-23 10:45:27.321 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:45:27 np0005593232 nova_compute[250269]: 2026-01-23 10:45:27.322 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:45:27 np0005593232 nova_compute[250269]: 2026-01-23 10:45:27.352 250273 DEBUG nova.network.neutron [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Successfully updated port: 0e770338-09f5-4af8-b79a-dc01e251fc6a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:45:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:27.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:27 np0005593232 nova_compute[250269]: 2026-01-23 10:45:27.370 250273 DEBUG oslo_concurrency.lockutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Acquiring lock "refresh_cache-b0f3c685-d13a-41b8-925b-67b144c237b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:45:27 np0005593232 nova_compute[250269]: 2026-01-23 10:45:27.370 250273 DEBUG oslo_concurrency.lockutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Acquired lock "refresh_cache-b0f3c685-d13a-41b8-925b-67b144c237b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:45:27 np0005593232 nova_compute[250269]: 2026-01-23 10:45:27.371 250273 DEBUG nova.network.neutron [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:45:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:45:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:27.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:45:27 np0005593232 nova_compute[250269]: 2026-01-23 10:45:27.422 250273 DEBUG nova.compute.manager [req-86f8c2cb-1382-435e-924d-b88c4a2237a0 req-0fd6d38e-b65b-42e3-b6d1-2a3f04040743 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Received event network-changed-0e770338-09f5-4af8-b79a-dc01e251fc6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:45:27 np0005593232 nova_compute[250269]: 2026-01-23 10:45:27.423 250273 DEBUG nova.compute.manager [req-86f8c2cb-1382-435e-924d-b88c4a2237a0 req-0fd6d38e-b65b-42e3-b6d1-2a3f04040743 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Refreshing instance network info cache due to event network-changed-0e770338-09f5-4af8-b79a-dc01e251fc6a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:45:27 np0005593232 nova_compute[250269]: 2026-01-23 10:45:27.423 250273 DEBUG oslo_concurrency.lockutils [req-86f8c2cb-1382-435e-924d-b88c4a2237a0 req-0fd6d38e-b65b-42e3-b6d1-2a3f04040743 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-b0f3c685-d13a-41b8-925b-67b144c237b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:45:27 np0005593232 nova_compute[250269]: 2026-01-23 10:45:27.528 250273 DEBUG nova.network.neutron [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:45:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:45:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4139820264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:45:27 np0005593232 nova_compute[250269]: 2026-01-23 10:45:27.768 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:45:27 np0005593232 nova_compute[250269]: 2026-01-23 10:45:27.910 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:45:27 np0005593232 nova_compute[250269]: 2026-01-23 10:45:27.912 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4059MB free_disk=20.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:45:27 np0005593232 nova_compute[250269]: 2026-01-23 10:45:27.912 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:45:27 np0005593232 nova_compute[250269]: 2026-01-23 10:45:27.913 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:45:27 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:45:27 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:45:28 np0005593232 nova_compute[250269]: 2026-01-23 10:45:28.040 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance b0f3c685-d13a-41b8-925b-67b144c237b8 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:45:28 np0005593232 nova_compute[250269]: 2026-01-23 10:45:28.040 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:45:28 np0005593232 nova_compute[250269]: 2026-01-23 10:45:28.041 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:45:28 np0005593232 nova_compute[250269]: 2026-01-23 10:45:28.112 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:45:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3566: 321 pgs: 321 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 3.1 MiB/s wr, 42 op/s
Jan 23 05:45:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:45:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3220754004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:45:28 np0005593232 nova_compute[250269]: 2026-01-23 10:45:28.557 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:45:28 np0005593232 nova_compute[250269]: 2026-01-23 10:45:28.563 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:45:28 np0005593232 nova_compute[250269]: 2026-01-23 10:45:28.585 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:45:28 np0005593232 nova_compute[250269]: 2026-01-23 10:45:28.617 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:45:28 np0005593232 nova_compute[250269]: 2026-01-23 10:45:28.617 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.110 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:29.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:29.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.727 250273 DEBUG nova.network.neutron [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Updating instance_info_cache with network_info: [{"id": "0e770338-09f5-4af8-b79a-dc01e251fc6a", "address": "fa:16:3e:84:27:1e", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e770338-09", "ovs_interfaceid": "0e770338-09f5-4af8-b79a-dc01e251fc6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.753 250273 DEBUG oslo_concurrency.lockutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Releasing lock "refresh_cache-b0f3c685-d13a-41b8-925b-67b144c237b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.753 250273 DEBUG nova.compute.manager [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Instance network_info: |[{"id": "0e770338-09f5-4af8-b79a-dc01e251fc6a", "address": "fa:16:3e:84:27:1e", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e770338-09", "ovs_interfaceid": "0e770338-09f5-4af8-b79a-dc01e251fc6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.753 250273 DEBUG oslo_concurrency.lockutils [req-86f8c2cb-1382-435e-924d-b88c4a2237a0 req-0fd6d38e-b65b-42e3-b6d1-2a3f04040743 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-b0f3c685-d13a-41b8-925b-67b144c237b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.754 250273 DEBUG nova.network.neutron [req-86f8c2cb-1382-435e-924d-b88c4a2237a0 req-0fd6d38e-b65b-42e3-b6d1-2a3f04040743 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Refreshing network info cache for port 0e770338-09f5-4af8-b79a-dc01e251fc6a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.756 250273 DEBUG nova.virt.libvirt.driver [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Start _get_guest_xml network_info=[{"id": "0e770338-09f5-4af8-b79a-dc01e251fc6a", "address": "fa:16:3e:84:27:1e", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e770338-09", "ovs_interfaceid": "0e770338-09f5-4af8-b79a-dc01e251fc6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'delete_on_termination': False, 'attachment_id': '60c75f9e-eb69-4216-83bb-d164765eef16', 'device_type': 'disk', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-e5e1339d-ae3b-4104-9429-034d0ceea4fd', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'e5e1339d-ae3b-4104-9429-034d0ceea4fd', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'b0f3c685-d13a-41b8-925b-67b144c237b8', 'attached_at': '', 'detached_at': '', 'volume_id': 'e5e1339d-ae3b-4104-9429-034d0ceea4fd', 'serial': 'e5e1339d-ae3b-4104-9429-034d0ceea4fd'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.761 250273 WARNING nova.virt.libvirt.driver [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.764 250273 DEBUG nova.virt.libvirt.host [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.765 250273 DEBUG nova.virt.libvirt.host [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.773 250273 DEBUG nova.virt.libvirt.host [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.774 250273 DEBUG nova.virt.libvirt.host [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.776 250273 DEBUG nova.virt.libvirt.driver [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.778 250273 DEBUG nova.virt.hardware [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.778 250273 DEBUG nova.virt.hardware [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.778 250273 DEBUG nova.virt.hardware [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.779 250273 DEBUG nova.virt.hardware [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.779 250273 DEBUG nova.virt.hardware [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.779 250273 DEBUG nova.virt.hardware [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.779 250273 DEBUG nova.virt.hardware [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.779 250273 DEBUG nova.virt.hardware [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.780 250273 DEBUG nova.virt.hardware [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.780 250273 DEBUG nova.virt.hardware [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.780 250273 DEBUG nova.virt.hardware [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.810 250273 DEBUG nova.storage.rbd_utils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] rbd image b0f3c685-d13a-41b8-925b-67b144c237b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:45:29 np0005593232 nova_compute[250269]: 2026-01-23 10:45:29.814 250273 DEBUG oslo_concurrency.processutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:45:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:45:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1784996980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.257 250273 DEBUG oslo_concurrency.processutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.288 250273 DEBUG nova.virt.libvirt.vif [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:45:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-600225361',display_name='tempest-TestVolumeBootPattern-server-600225361',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-600225361',id=205,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIC+r2L0+q9PBY99aBdo5XmxGSuvAYYpgonPD6n/fI0f2obfQ97Vwf3Ee9Eeaa+EcBJLE3HyG34aomsAcNn3g+1JMNr5TfA5Vs6CyFMbO26Wy1l0eWLB9DEd+vOA6lCnxQ==',key_name='tempest-TestVolumeBootPattern-909159234',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d27c5465284b48a5818ef931d6251c43',ramdisk_id='',reservation_id='r-18mfun4o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-2139361132',owner_user_name='tempest-TestVolumeBootPattern-2139361132-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:45:19Z,user_data=None,user_id='eb70c3aee8b64273a1930c0c2c231aff',uuid=b0f3c685-d13a-41b8-925b-67b144c237b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0e770338-09f5-4af8-b79a-dc01e251fc6a", "address": "fa:16:3e:84:27:1e", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e770338-09", "ovs_interfaceid": "0e770338-09f5-4af8-b79a-dc01e251fc6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.288 250273 DEBUG nova.network.os_vif_util [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Converting VIF {"id": "0e770338-09f5-4af8-b79a-dc01e251fc6a", "address": "fa:16:3e:84:27:1e", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e770338-09", "ovs_interfaceid": "0e770338-09f5-4af8-b79a-dc01e251fc6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.289 250273 DEBUG nova.network.os_vif_util [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:27:1e,bridge_name='br-int',has_traffic_filtering=True,id=0e770338-09f5-4af8-b79a-dc01e251fc6a,network=Network(72854481-c2f9-4651-8ba1-fe321a8a5546),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e770338-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.290 250273 DEBUG nova.objects.instance [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lazy-loading 'pci_devices' on Instance uuid b0f3c685-d13a-41b8-925b-67b144c237b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.312 250273 DEBUG nova.virt.libvirt.driver [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:45:30 np0005593232 nova_compute[250269]:  <uuid>b0f3c685-d13a-41b8-925b-67b144c237b8</uuid>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:  <name>instance-000000cd</name>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <nova:name>tempest-TestVolumeBootPattern-server-600225361</nova:name>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:45:29</nova:creationTime>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:45:30 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:        <nova:user uuid="eb70c3aee8b64273a1930c0c2c231aff">tempest-TestVolumeBootPattern-2139361132-project-member</nova:user>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:        <nova:project uuid="d27c5465284b48a5818ef931d6251c43">tempest-TestVolumeBootPattern-2139361132</nova:project>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:        <nova:port uuid="0e770338-09f5-4af8-b79a-dc01e251fc6a">
Jan 23 05:45:30 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <entry name="serial">b0f3c685-d13a-41b8-925b-67b144c237b8</entry>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <entry name="uuid">b0f3c685-d13a-41b8-925b-67b144c237b8</entry>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/b0f3c685-d13a-41b8-925b-67b144c237b8_disk.config">
Jan 23 05:45:30 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:45:30 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="volumes/volume-e5e1339d-ae3b-4104-9429-034d0ceea4fd">
Jan 23 05:45:30 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:45:30 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <serial>e5e1339d-ae3b-4104-9429-034d0ceea4fd</serial>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:84:27:1e"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <target dev="tap0e770338-09"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/b0f3c685-d13a-41b8-925b-67b144c237b8/console.log" append="off"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:45:30 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:45:30 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:45:30 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:45:30 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.314 250273 DEBUG nova.compute.manager [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Preparing to wait for external event network-vif-plugged-0e770338-09f5-4af8-b79a-dc01e251fc6a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.314 250273 DEBUG oslo_concurrency.lockutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Acquiring lock "b0f3c685-d13a-41b8-925b-67b144c237b8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.315 250273 DEBUG oslo_concurrency.lockutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "b0f3c685-d13a-41b8-925b-67b144c237b8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.315 250273 DEBUG oslo_concurrency.lockutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "b0f3c685-d13a-41b8-925b-67b144c237b8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.316 250273 DEBUG nova.virt.libvirt.vif [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:45:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-600225361',display_name='tempest-TestVolumeBootPattern-server-600225361',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-600225361',id=205,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIC+r2L0+q9PBY99aBdo5XmxGSuvAYYpgonPD6n/fI0f2obfQ97Vwf3Ee9Eeaa+EcBJLE3HyG34aomsAcNn3g+1JMNr5TfA5Vs6CyFMbO26Wy1l0eWLB9DEd+vOA6lCnxQ==',key_name='tempest-TestVolumeBootPattern-909159234',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d27c5465284b48a5818ef931d6251c43',ramdisk_id='',reservation_id='r-18mfun4o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-2139361132',owner_user_name='tempest-TestVolumeBootPattern-2139361132-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:45:19Z,user_data=None,user_id='eb70c3aee8b64273a1930c0c2c231aff',uuid=b0f3c685-d13a-41b8-925b-67b144c237b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0e770338-09f5-4af8-b79a-dc01e251fc6a", "address": "fa:16:3e:84:27:1e", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e770338-09", "ovs_interfaceid": "0e770338-09f5-4af8-b79a-dc01e251fc6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.316 250273 DEBUG nova.network.os_vif_util [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Converting VIF {"id": "0e770338-09f5-4af8-b79a-dc01e251fc6a", "address": "fa:16:3e:84:27:1e", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e770338-09", "ovs_interfaceid": "0e770338-09f5-4af8-b79a-dc01e251fc6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.317 250273 DEBUG nova.network.os_vif_util [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:27:1e,bridge_name='br-int',has_traffic_filtering=True,id=0e770338-09f5-4af8-b79a-dc01e251fc6a,network=Network(72854481-c2f9-4651-8ba1-fe321a8a5546),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e770338-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.317 250273 DEBUG os_vif [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:27:1e,bridge_name='br-int',has_traffic_filtering=True,id=0e770338-09f5-4af8-b79a-dc01e251fc6a,network=Network(72854481-c2f9-4651-8ba1-fe321a8a5546),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e770338-09') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.318 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.318 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.319 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.322 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.322 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0e770338-09, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.323 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0e770338-09, col_values=(('external_ids', {'iface-id': '0e770338-09f5-4af8-b79a-dc01e251fc6a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:27:1e', 'vm-uuid': 'b0f3c685-d13a-41b8-925b-67b144c237b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.324 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:30 np0005593232 NetworkManager[49057]: <info>  [1769165130.3254] manager: (tap0e770338-09): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/375)
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.326 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.331 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.332 250273 INFO os_vif [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:27:1e,bridge_name='br-int',has_traffic_filtering=True,id=0e770338-09f5-4af8-b79a-dc01e251fc6a,network=Network(72854481-c2f9-4651-8ba1-fe321a8a5546),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e770338-09')#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.383 250273 DEBUG nova.virt.libvirt.driver [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.383 250273 DEBUG nova.virt.libvirt.driver [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.383 250273 DEBUG nova.virt.libvirt.driver [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] No VIF found with MAC fa:16:3e:84:27:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.384 250273 INFO nova.virt.libvirt.driver [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Using config drive#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.413 250273 DEBUG nova.storage.rbd_utils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] rbd image b0f3c685-d13a-41b8-925b-67b144c237b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:45:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3567: 321 pgs: 321 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 1.7 MiB/s wr, 16 op/s
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.828 250273 INFO nova.virt.libvirt.driver [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Creating config drive at /var/lib/nova/instances/b0f3c685-d13a-41b8-925b-67b144c237b8/disk.config#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.833 250273 DEBUG oslo_concurrency.processutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b0f3c685-d13a-41b8-925b-67b144c237b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp__b7_uxb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.968 250273 DEBUG oslo_concurrency.processutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b0f3c685-d13a-41b8-925b-67b144c237b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp__b7_uxb" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:45:30 np0005593232 nova_compute[250269]: 2026-01-23 10:45:30.998 250273 DEBUG nova.storage.rbd_utils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] rbd image b0f3c685-d13a-41b8-925b-67b144c237b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.002 250273 DEBUG oslo_concurrency.processutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b0f3c685-d13a-41b8-925b-67b144c237b8/disk.config b0f3c685-d13a-41b8-925b-67b144c237b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:45:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:31.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:31.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.417 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.462 250273 DEBUG oslo_concurrency.processutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b0f3c685-d13a-41b8-925b-67b144c237b8/disk.config b0f3c685-d13a-41b8-925b-67b144c237b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.462 250273 INFO nova.virt.libvirt.driver [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Deleting local config drive /var/lib/nova/instances/b0f3c685-d13a-41b8-925b-67b144c237b8/disk.config because it was imported into RBD.#033[00m
Jan 23 05:45:31 np0005593232 kernel: tap0e770338-09: entered promiscuous mode
Jan 23 05:45:31 np0005593232 NetworkManager[49057]: <info>  [1769165131.5167] manager: (tap0e770338-09): new Tun device (/org/freedesktop/NetworkManager/Devices/376)
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.516 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:31 np0005593232 ovn_controller[151001]: 2026-01-23T10:45:31Z|00802|binding|INFO|Claiming lport 0e770338-09f5-4af8-b79a-dc01e251fc6a for this chassis.
Jan 23 05:45:31 np0005593232 ovn_controller[151001]: 2026-01-23T10:45:31Z|00803|binding|INFO|0e770338-09f5-4af8-b79a-dc01e251fc6a: Claiming fa:16:3e:84:27:1e 10.100.0.5
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.530 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:27:1e 10.100.0.5'], port_security=['fa:16:3e:84:27:1e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b0f3c685-d13a-41b8-925b-67b144c237b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72854481-c2f9-4651-8ba1-fe321a8a5546', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd27c5465284b48a5818ef931d6251c43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dbb54d44-fc85-485d-96a6-e6e12258a95a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=95d9ae35-aabe-45f7-a103-f14858b94e31, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=0e770338-09f5-4af8-b79a-dc01e251fc6a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.531 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 0e770338-09f5-4af8-b79a-dc01e251fc6a in datapath 72854481-c2f9-4651-8ba1-fe321a8a5546 bound to our chassis#033[00m
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.532 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72854481-c2f9-4651-8ba1-fe321a8a5546#033[00m
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.548 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[703abc13-e346-4b1c-aa58-4ceb902864c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.549 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap72854481-c1 in ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:45:31 np0005593232 systemd-udevd[392981]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:45:31 np0005593232 systemd-machined[215836]: New machine qemu-91-instance-000000cd.
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.552 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap72854481-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.553 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c6565ae5-bb6f-4985-b3d7-fb6a60f09d39]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.554 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[741c9a1b-0b51-4ebe-b6e9-3e7797097dad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:31 np0005593232 NetworkManager[49057]: <info>  [1769165131.5665] device (tap0e770338-09): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:45:31 np0005593232 NetworkManager[49057]: <info>  [1769165131.5672] device (tap0e770338-09): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:45:31 np0005593232 systemd[1]: Started Virtual Machine qemu-91-instance-000000cd.
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.573 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[c6aea40e-e5e9-4055-a263-5fca3db15552]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.588 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.591 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2228bd07-3b54-4d1f-98fe-83431637bc3b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:31 np0005593232 ovn_controller[151001]: 2026-01-23T10:45:31Z|00804|binding|INFO|Setting lport 0e770338-09f5-4af8-b79a-dc01e251fc6a ovn-installed in OVS
Jan 23 05:45:31 np0005593232 ovn_controller[151001]: 2026-01-23T10:45:31Z|00805|binding|INFO|Setting lport 0e770338-09f5-4af8-b79a-dc01e251fc6a up in Southbound
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.599 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.617 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[e520a2ab-e378-4289-9a02-5d9dd333062a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.621 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5f0862a0-3c24-4d38-be00-952e8709feab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:31 np0005593232 NetworkManager[49057]: <info>  [1769165131.6227] manager: (tap72854481-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/377)
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.648 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[feca6d46-9385-47b5-b7aa-374ef50f5dc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.651 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[be537576-4d76-4458-ac51-b41c4ffc3e6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:31 np0005593232 NetworkManager[49057]: <info>  [1769165131.6691] device (tap72854481-c0): carrier: link connected
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.673 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[c136700c-9e7c-49d2-b7a5-470559e32f22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.690 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f7314e22-2b9a-4eef-bfb6-6d91548fa002]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72854481-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:b6:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 243], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 904481, 'reachable_time': 25705, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393013, 'error': None, 'target': 'ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.703 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d8212c00-ce43-48da-b8c5-7c26bc51d73c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:b660'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 904481, 'tstamp': 904481}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393014, 'error': None, 'target': 'ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.721 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[251584b8-7965-430e-887f-6e735b540316]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72854481-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:b6:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 243], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 904481, 'reachable_time': 25705, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 393015, 'error': None, 'target': 'ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.738 250273 DEBUG nova.network.neutron [req-86f8c2cb-1382-435e-924d-b88c4a2237a0 req-0fd6d38e-b65b-42e3-b6d1-2a3f04040743 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Updated VIF entry in instance network info cache for port 0e770338-09f5-4af8-b79a-dc01e251fc6a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.738 250273 DEBUG nova.network.neutron [req-86f8c2cb-1382-435e-924d-b88c4a2237a0 req-0fd6d38e-b65b-42e3-b6d1-2a3f04040743 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Updating instance_info_cache with network_info: [{"id": "0e770338-09f5-4af8-b79a-dc01e251fc6a", "address": "fa:16:3e:84:27:1e", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e770338-09", "ovs_interfaceid": "0e770338-09f5-4af8-b79a-dc01e251fc6a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.753 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[feec25b2-a113-40bc-ba30-11415ad62716]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.763 250273 DEBUG oslo_concurrency.lockutils [req-86f8c2cb-1382-435e-924d-b88c4a2237a0 req-0fd6d38e-b65b-42e3-b6d1-2a3f04040743 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-b0f3c685-d13a-41b8-925b-67b144c237b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.812 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[343db62f-3174-4912-bdf0-d0725689b3b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.814 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72854481-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.815 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.816 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72854481-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:45:31 np0005593232 kernel: tap72854481-c0: entered promiscuous mode
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.818 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:31 np0005593232 NetworkManager[49057]: <info>  [1769165131.8195] manager: (tap72854481-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/378)
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.820 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72854481-c0, col_values=(('external_ids', {'iface-id': '6b08537e-a263-4eec-b987-1e42878f483a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:45:31 np0005593232 ovn_controller[151001]: 2026-01-23T10:45:31Z|00806|binding|INFO|Releasing lport 6b08537e-a263-4eec-b987-1e42878f483a from this chassis (sb_readonly=0)
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.821 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.834 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.835 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/72854481-c2f9-4651-8ba1-fe321a8a5546.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/72854481-c2f9-4651-8ba1-fe321a8a5546.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.837 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[9242ba37-3a99-4b72-8bf6-d780480748b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.839 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-72854481-c2f9-4651-8ba1-fe321a8a5546
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/72854481-c2f9-4651-8ba1-fe321a8a5546.pid.haproxy
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 72854481-c2f9-4651-8ba1-fe321a8a5546
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:45:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:31.840 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546', 'env', 'PROCESS_TAG=haproxy-72854481-c2f9-4651-8ba1-fe321a8a5546', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/72854481-c2f9-4651-8ba1-fe321a8a5546.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.929 250273 DEBUG nova.compute.manager [req-bd99d8b2-bc0e-4831-b5c1-fcb1a27b90cd req-b1438eb9-8005-4a42-88fa-cbd41d14d225 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Received event network-vif-plugged-0e770338-09f5-4af8-b79a-dc01e251fc6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.930 250273 DEBUG oslo_concurrency.lockutils [req-bd99d8b2-bc0e-4831-b5c1-fcb1a27b90cd req-b1438eb9-8005-4a42-88fa-cbd41d14d225 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "b0f3c685-d13a-41b8-925b-67b144c237b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.930 250273 DEBUG oslo_concurrency.lockutils [req-bd99d8b2-bc0e-4831-b5c1-fcb1a27b90cd req-b1438eb9-8005-4a42-88fa-cbd41d14d225 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b0f3c685-d13a-41b8-925b-67b144c237b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.930 250273 DEBUG oslo_concurrency.lockutils [req-bd99d8b2-bc0e-4831-b5c1-fcb1a27b90cd req-b1438eb9-8005-4a42-88fa-cbd41d14d225 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b0f3c685-d13a-41b8-925b-67b144c237b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.931 250273 DEBUG nova.compute.manager [req-bd99d8b2-bc0e-4831-b5c1-fcb1a27b90cd req-b1438eb9-8005-4a42-88fa-cbd41d14d225 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Processing event network-vif-plugged-0e770338-09f5-4af8-b79a-dc01e251fc6a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.938 250273 DEBUG nova.compute.manager [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.939 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769165131.938523, b0f3c685-d13a-41b8-925b-67b144c237b8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.939 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] VM Started (Lifecycle Event)#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.942 250273 DEBUG nova.virt.libvirt.driver [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.946 250273 INFO nova.virt.libvirt.driver [-] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Instance spawned successfully.#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.946 250273 DEBUG nova.virt.libvirt.driver [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.964 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.972 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.976 250273 DEBUG nova.virt.libvirt.driver [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.976 250273 DEBUG nova.virt.libvirt.driver [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.977 250273 DEBUG nova.virt.libvirt.driver [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.978 250273 DEBUG nova.virt.libvirt.driver [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.978 250273 DEBUG nova.virt.libvirt.driver [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:45:31 np0005593232 nova_compute[250269]: 2026-01-23 10:45:31.979 250273 DEBUG nova.virt.libvirt.driver [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:45:32 np0005593232 nova_compute[250269]: 2026-01-23 10:45:32.016 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:45:32 np0005593232 nova_compute[250269]: 2026-01-23 10:45:32.017 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769165131.939151, b0f3c685-d13a-41b8-925b-67b144c237b8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:45:32 np0005593232 nova_compute[250269]: 2026-01-23 10:45:32.017 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:45:32 np0005593232 nova_compute[250269]: 2026-01-23 10:45:32.056 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:45:32 np0005593232 nova_compute[250269]: 2026-01-23 10:45:32.061 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769165131.9419463, b0f3c685-d13a-41b8-925b-67b144c237b8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:45:32 np0005593232 nova_compute[250269]: 2026-01-23 10:45:32.062 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:45:32 np0005593232 nova_compute[250269]: 2026-01-23 10:45:32.067 250273 INFO nova.compute.manager [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Took 7.42 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:45:32 np0005593232 nova_compute[250269]: 2026-01-23 10:45:32.067 250273 DEBUG nova.compute.manager [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:45:32 np0005593232 nova_compute[250269]: 2026-01-23 10:45:32.161 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:45:32 np0005593232 nova_compute[250269]: 2026-01-23 10:45:32.165 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:45:32 np0005593232 nova_compute[250269]: 2026-01-23 10:45:32.193 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:45:32 np0005593232 podman[393090]: 2026-01-23 10:45:32.197543665 +0000 UTC m=+0.046963346 container create a263390a30468db2ed65a10fc02fd5908fc2d4ff28bd06691a89ae7e7f623fa8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 05:45:32 np0005593232 nova_compute[250269]: 2026-01-23 10:45:32.205 250273 INFO nova.compute.manager [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Took 13.14 seconds to build instance.#033[00m
Jan 23 05:45:32 np0005593232 nova_compute[250269]: 2026-01-23 10:45:32.226 250273 DEBUG oslo_concurrency.lockutils [None req-8fa85d02-6f16-4f30-b34a-b967dbcb350a eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "b0f3c685-d13a-41b8-925b-67b144c237b8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:45:32 np0005593232 systemd[1]: Started libpod-conmon-a263390a30468db2ed65a10fc02fd5908fc2d4ff28bd06691a89ae7e7f623fa8.scope.
Jan 23 05:45:32 np0005593232 podman[393090]: 2026-01-23 10:45:32.173919724 +0000 UTC m=+0.023339425 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:45:32 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:45:32 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf901b3cbd061b3172a36cff47c4f73a269c1ae24628d6ad3b791433b853c32/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:45:32 np0005593232 podman[393090]: 2026-01-23 10:45:32.317475344 +0000 UTC m=+0.166895095 container init a263390a30468db2ed65a10fc02fd5908fc2d4ff28bd06691a89ae7e7f623fa8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:45:32 np0005593232 podman[393090]: 2026-01-23 10:45:32.329033253 +0000 UTC m=+0.178452964 container start a263390a30468db2ed65a10fc02fd5908fc2d4ff28bd06691a89ae7e7f623fa8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true)
Jan 23 05:45:32 np0005593232 neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546[393105]: [NOTICE]   (393109) : New worker (393111) forked
Jan 23 05:45:32 np0005593232 neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546[393105]: [NOTICE]   (393109) : Loading success.
Jan 23 05:45:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3568: 321 pgs: 321 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 1.7 MiB/s wr, 19 op/s
Jan 23 05:45:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:45:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:33.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:45:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:33.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:34 np0005593232 nova_compute[250269]: 2026-01-23 10:45:34.187 250273 DEBUG nova.compute.manager [req-babaa0a8-e425-49a0-b644-64fbedd150ea req-ff573789-3adc-4570-92f4-3c6d3b704038 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Received event network-vif-plugged-0e770338-09f5-4af8-b79a-dc01e251fc6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:45:34 np0005593232 nova_compute[250269]: 2026-01-23 10:45:34.188 250273 DEBUG oslo_concurrency.lockutils [req-babaa0a8-e425-49a0-b644-64fbedd150ea req-ff573789-3adc-4570-92f4-3c6d3b704038 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "b0f3c685-d13a-41b8-925b-67b144c237b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:45:34 np0005593232 nova_compute[250269]: 2026-01-23 10:45:34.188 250273 DEBUG oslo_concurrency.lockutils [req-babaa0a8-e425-49a0-b644-64fbedd150ea req-ff573789-3adc-4570-92f4-3c6d3b704038 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b0f3c685-d13a-41b8-925b-67b144c237b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:45:34 np0005593232 nova_compute[250269]: 2026-01-23 10:45:34.188 250273 DEBUG oslo_concurrency.lockutils [req-babaa0a8-e425-49a0-b644-64fbedd150ea req-ff573789-3adc-4570-92f4-3c6d3b704038 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b0f3c685-d13a-41b8-925b-67b144c237b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:45:34 np0005593232 nova_compute[250269]: 2026-01-23 10:45:34.188 250273 DEBUG nova.compute.manager [req-babaa0a8-e425-49a0-b644-64fbedd150ea req-ff573789-3adc-4570-92f4-3c6d3b704038 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] No waiting events found dispatching network-vif-plugged-0e770338-09f5-4af8-b79a-dc01e251fc6a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:45:34 np0005593232 nova_compute[250269]: 2026-01-23 10:45:34.188 250273 WARNING nova.compute.manager [req-babaa0a8-e425-49a0-b644-64fbedd150ea req-ff573789-3adc-4570-92f4-3c6d3b704038 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Received unexpected event network-vif-plugged-0e770338-09f5-4af8-b79a-dc01e251fc6a for instance with vm_state active and task_state None.#033[00m
Jan 23 05:45:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3569: 321 pgs: 321 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 292 KiB/s rd, 1.4 MiB/s wr, 43 op/s
Jan 23 05:45:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:45:35 np0005593232 nova_compute[250269]: 2026-01-23 10:45:35.326 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:45:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:35.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:45:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:45:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:35.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:45:36 np0005593232 nova_compute[250269]: 2026-01-23 10:45:36.420 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3570: 321 pgs: 321 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 281 KiB/s rd, 27 KiB/s wr, 27 op/s
Jan 23 05:45:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:45:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:37.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:45:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:37.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:45:37
Jan 23 05:45:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:45:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:45:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'default.rgw.control', 'volumes', 'images', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta']
Jan 23 05:45:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:45:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:45:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:45:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:45:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:45:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:45:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:45:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3571: 321 pgs: 321 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Jan 23 05:45:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:45:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:45:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:45:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:45:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:45:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:45:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:45:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:45:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:45:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:45:38 np0005593232 NetworkManager[49057]: <info>  [1769165138.8175] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/379)
Jan 23 05:45:38 np0005593232 NetworkManager[49057]: <info>  [1769165138.8183] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Jan 23 05:45:38 np0005593232 nova_compute[250269]: 2026-01-23 10:45:38.816 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:38 np0005593232 ovn_controller[151001]: 2026-01-23T10:45:38Z|00807|binding|INFO|Releasing lport 6b08537e-a263-4eec-b987-1e42878f483a from this chassis (sb_readonly=0)
Jan 23 05:45:38 np0005593232 ovn_controller[151001]: 2026-01-23T10:45:38Z|00808|binding|INFO|Releasing lport 6b08537e-a263-4eec-b987-1e42878f483a from this chassis (sb_readonly=0)
Jan 23 05:45:38 np0005593232 nova_compute[250269]: 2026-01-23 10:45:38.997 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:39 np0005593232 nova_compute[250269]: 2026-01-23 10:45:39.300 250273 DEBUG nova.compute.manager [req-557c4314-1654-4814-8abe-f176c44272d1 req-f3fcc084-3b2c-45ca-905e-7c370cc294b6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Received event network-changed-0e770338-09f5-4af8-b79a-dc01e251fc6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:45:39 np0005593232 nova_compute[250269]: 2026-01-23 10:45:39.300 250273 DEBUG nova.compute.manager [req-557c4314-1654-4814-8abe-f176c44272d1 req-f3fcc084-3b2c-45ca-905e-7c370cc294b6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Refreshing instance network info cache due to event network-changed-0e770338-09f5-4af8-b79a-dc01e251fc6a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:45:39 np0005593232 nova_compute[250269]: 2026-01-23 10:45:39.301 250273 DEBUG oslo_concurrency.lockutils [req-557c4314-1654-4814-8abe-f176c44272d1 req-f3fcc084-3b2c-45ca-905e-7c370cc294b6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-b0f3c685-d13a-41b8-925b-67b144c237b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:45:39 np0005593232 nova_compute[250269]: 2026-01-23 10:45:39.301 250273 DEBUG oslo_concurrency.lockutils [req-557c4314-1654-4814-8abe-f176c44272d1 req-f3fcc084-3b2c-45ca-905e-7c370cc294b6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-b0f3c685-d13a-41b8-925b-67b144c237b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:45:39 np0005593232 nova_compute[250269]: 2026-01-23 10:45:39.302 250273 DEBUG nova.network.neutron [req-557c4314-1654-4814-8abe-f176c44272d1 req-f3fcc084-3b2c-45ca-905e-7c370cc294b6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Refreshing network info cache for port 0e770338-09f5-4af8-b79a-dc01e251fc6a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:45:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:45:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:39.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:45:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:45:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:39.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:45:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:45:40 np0005593232 nova_compute[250269]: 2026-01-23 10:45:40.331 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3572: 321 pgs: 321 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Jan 23 05:45:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:45:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:41.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:45:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.059001677s ======
Jan 23 05:45:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:41.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.059001677s
Jan 23 05:45:41 np0005593232 nova_compute[250269]: 2026-01-23 10:45:41.491 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3573: 321 pgs: 321 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Jan 23 05:45:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:42.664 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:45:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:42.666 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:45:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:42.668 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:45:43 np0005593232 nova_compute[250269]: 2026-01-23 10:45:43.125 250273 DEBUG nova.network.neutron [req-557c4314-1654-4814-8abe-f176c44272d1 req-f3fcc084-3b2c-45ca-905e-7c370cc294b6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Updated VIF entry in instance network info cache for port 0e770338-09f5-4af8-b79a-dc01e251fc6a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:45:43 np0005593232 nova_compute[250269]: 2026-01-23 10:45:43.126 250273 DEBUG nova.network.neutron [req-557c4314-1654-4814-8abe-f176c44272d1 req-f3fcc084-3b2c-45ca-905e-7c370cc294b6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Updating instance_info_cache with network_info: [{"id": "0e770338-09f5-4af8-b79a-dc01e251fc6a", "address": "fa:16:3e:84:27:1e", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e770338-09", "ovs_interfaceid": "0e770338-09f5-4af8-b79a-dc01e251fc6a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:45:43 np0005593232 nova_compute[250269]: 2026-01-23 10:45:43.165 250273 DEBUG oslo_concurrency.lockutils [req-557c4314-1654-4814-8abe-f176c44272d1 req-f3fcc084-3b2c-45ca-905e-7c370cc294b6 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-b0f3c685-d13a-41b8-925b-67b144c237b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:45:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:43.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:43.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:44 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #53. Immutable memtables: 9.
Jan 23 05:45:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3574: 321 pgs: 321 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.8 MiB/s rd, 13 KiB/s wr, 144 op/s
Jan 23 05:45:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:45:45 np0005593232 nova_compute[250269]: 2026-01-23 10:45:45.334 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:45:45Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:84:27:1e 10.100.0.5
Jan 23 05:45:45 np0005593232 ovn_controller[151001]: 2026-01-23T10:45:45Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:84:27:1e 10.100.0.5
Jan 23 05:45:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:45.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:45.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:46 np0005593232 podman[393151]: 2026-01-23 10:45:46.126744841 +0000 UTC m=+0.091412640 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Jan 23 05:45:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3575: 321 pgs: 321 active+clean; 213 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.6 MiB/s rd, 120 op/s
Jan 23 05:45:46 np0005593232 nova_compute[250269]: 2026-01-23 10:45:46.528 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:47.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:47.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:45:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:45:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:45:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:45:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001003825551367951 of space, bias 1.0, pg target 0.3011476654103853 quantized to 32 (current 32)
Jan 23 05:45:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:45:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003163967988088591 of space, bias 1.0, pg target 0.9491903964265773 quantized to 32 (current 32)
Jan 23 05:45:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:45:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 05:45:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:45:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 05:45:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:45:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 05:45:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:45:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:45:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:45:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 05:45:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:45:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 05:45:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:45:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:45:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:45:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 05:45:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3576: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.3 MiB/s wr, 242 op/s
Jan 23 05:45:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:45:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:49.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:45:49 np0005593232 podman[393204]: 2026-01-23 10:45:49.427112073 +0000 UTC m=+0.069974660 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 23 05:45:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:45:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:49.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:45:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:45:50 np0005593232 nova_compute[250269]: 2026-01-23 10:45:50.336 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3577: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 578 KiB/s rd, 4.3 MiB/s wr, 122 op/s
Jan 23 05:45:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:45:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:51.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:45:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:51.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:51 np0005593232 nova_compute[250269]: 2026-01-23 10:45:51.570 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3578: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 579 KiB/s rd, 4.3 MiB/s wr, 122 op/s
Jan 23 05:45:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:53.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:53.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3579: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 579 KiB/s rd, 4.3 MiB/s wr, 123 op/s
Jan 23 05:45:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:45:55 np0005593232 nova_compute[250269]: 2026-01-23 10:45:55.383 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:55.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:45:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:55.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:45:55 np0005593232 nova_compute[250269]: 2026-01-23 10:45:55.537 250273 DEBUG oslo_concurrency.lockutils [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Acquiring lock "b0f3c685-d13a-41b8-925b-67b144c237b8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:45:55 np0005593232 nova_compute[250269]: 2026-01-23 10:45:55.538 250273 DEBUG oslo_concurrency.lockutils [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "b0f3c685-d13a-41b8-925b-67b144c237b8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:45:55 np0005593232 nova_compute[250269]: 2026-01-23 10:45:55.538 250273 DEBUG oslo_concurrency.lockutils [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Acquiring lock "b0f3c685-d13a-41b8-925b-67b144c237b8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:45:55 np0005593232 nova_compute[250269]: 2026-01-23 10:45:55.538 250273 DEBUG oslo_concurrency.lockutils [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "b0f3c685-d13a-41b8-925b-67b144c237b8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:45:55 np0005593232 nova_compute[250269]: 2026-01-23 10:45:55.539 250273 DEBUG oslo_concurrency.lockutils [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "b0f3c685-d13a-41b8-925b-67b144c237b8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:45:55 np0005593232 nova_compute[250269]: 2026-01-23 10:45:55.541 250273 INFO nova.compute.manager [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Terminating instance#033[00m
Jan 23 05:45:55 np0005593232 nova_compute[250269]: 2026-01-23 10:45:55.542 250273 DEBUG nova.compute.manager [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:45:55 np0005593232 kernel: tap0e770338-09 (unregistering): left promiscuous mode
Jan 23 05:45:55 np0005593232 NetworkManager[49057]: <info>  [1769165155.7621] device (tap0e770338-09): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:45:55 np0005593232 ovn_controller[151001]: 2026-01-23T10:45:55Z|00809|binding|INFO|Releasing lport 0e770338-09f5-4af8-b79a-dc01e251fc6a from this chassis (sb_readonly=0)
Jan 23 05:45:55 np0005593232 ovn_controller[151001]: 2026-01-23T10:45:55Z|00810|binding|INFO|Setting lport 0e770338-09f5-4af8-b79a-dc01e251fc6a down in Southbound
Jan 23 05:45:55 np0005593232 nova_compute[250269]: 2026-01-23 10:45:55.775 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:55 np0005593232 ovn_controller[151001]: 2026-01-23T10:45:55Z|00811|binding|INFO|Removing iface tap0e770338-09 ovn-installed in OVS
Jan 23 05:45:55 np0005593232 nova_compute[250269]: 2026-01-23 10:45:55.777 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:55.786 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:27:1e 10.100.0.5'], port_security=['fa:16:3e:84:27:1e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b0f3c685-d13a-41b8-925b-67b144c237b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72854481-c2f9-4651-8ba1-fe321a8a5546', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd27c5465284b48a5818ef931d6251c43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dbb54d44-fc85-485d-96a6-e6e12258a95a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=95d9ae35-aabe-45f7-a103-f14858b94e31, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=0e770338-09f5-4af8-b79a-dc01e251fc6a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:45:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:55.789 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 0e770338-09f5-4af8-b79a-dc01e251fc6a in datapath 72854481-c2f9-4651-8ba1-fe321a8a5546 unbound from our chassis#033[00m
Jan 23 05:45:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:55.791 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 72854481-c2f9-4651-8ba1-fe321a8a5546, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:45:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:55.794 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[eca74210-8711-4d74-81b7-a07e01be8281]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:55.795 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546 namespace which is not needed anymore#033[00m
Jan 23 05:45:55 np0005593232 nova_compute[250269]: 2026-01-23 10:45:55.807 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:55 np0005593232 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000cd.scope: Deactivated successfully.
Jan 23 05:45:55 np0005593232 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d000000cd.scope: Consumed 14.527s CPU time.
Jan 23 05:45:55 np0005593232 systemd-machined[215836]: Machine qemu-91-instance-000000cd terminated.
Jan 23 05:45:55 np0005593232 neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546[393105]: [NOTICE]   (393109) : haproxy version is 2.8.14-c23fe91
Jan 23 05:45:55 np0005593232 neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546[393105]: [NOTICE]   (393109) : path to executable is /usr/sbin/haproxy
Jan 23 05:45:55 np0005593232 neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546[393105]: [WARNING]  (393109) : Exiting Master process...
Jan 23 05:45:55 np0005593232 neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546[393105]: [ALERT]    (393109) : Current worker (393111) exited with code 143 (Terminated)
Jan 23 05:45:55 np0005593232 neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546[393105]: [WARNING]  (393109) : All workers exited. Exiting... (0)
Jan 23 05:45:55 np0005593232 systemd[1]: libpod-a263390a30468db2ed65a10fc02fd5908fc2d4ff28bd06691a89ae7e7f623fa8.scope: Deactivated successfully.
Jan 23 05:45:55 np0005593232 podman[393251]: 2026-01-23 10:45:55.991753114 +0000 UTC m=+0.064470914 container died a263390a30468db2ed65a10fc02fd5908fc2d4ff28bd06691a89ae7e7f623fa8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 23 05:45:56 np0005593232 nova_compute[250269]: 2026-01-23 10:45:55.999 250273 INFO nova.virt.libvirt.driver [-] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Instance destroyed successfully.#033[00m
Jan 23 05:45:56 np0005593232 nova_compute[250269]: 2026-01-23 10:45:55.999 250273 DEBUG nova.objects.instance [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lazy-loading 'resources' on Instance uuid b0f3c685-d13a-41b8-925b-67b144c237b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:45:56 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a263390a30468db2ed65a10fc02fd5908fc2d4ff28bd06691a89ae7e7f623fa8-userdata-shm.mount: Deactivated successfully.
Jan 23 05:45:56 np0005593232 systemd[1]: var-lib-containers-storage-overlay-aaf901b3cbd061b3172a36cff47c4f73a269c1ae24628d6ad3b791433b853c32-merged.mount: Deactivated successfully.
Jan 23 05:45:56 np0005593232 nova_compute[250269]: 2026-01-23 10:45:56.030 250273 DEBUG nova.virt.libvirt.vif [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:45:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-600225361',display_name='tempest-TestVolumeBootPattern-server-600225361',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-600225361',id=205,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIC+r2L0+q9PBY99aBdo5XmxGSuvAYYpgonPD6n/fI0f2obfQ97Vwf3Ee9Eeaa+EcBJLE3HyG34aomsAcNn3g+1JMNr5TfA5Vs6CyFMbO26Wy1l0eWLB9DEd+vOA6lCnxQ==',key_name='tempest-TestVolumeBootPattern-909159234',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:45:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d27c5465284b48a5818ef931d6251c43',ramdisk_id='',reservation_id='r-18mfun4o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-2139361132',owner_user_name='tempest-TestVolumeBootPattern-2139361132-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:45:32Z,user_data=None,user_id='eb70c3aee8b64273a1930c0c2c231aff',uuid=b0f3c685-d13a-41b8-925b-67b144c237b8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0e770338-09f5-4af8-b79a-dc01e251fc6a", "address": "fa:16:3e:84:27:1e", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e770338-09", "ovs_interfaceid": "0e770338-09f5-4af8-b79a-dc01e251fc6a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:45:56 np0005593232 nova_compute[250269]: 2026-01-23 10:45:56.032 250273 DEBUG nova.network.os_vif_util [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Converting VIF {"id": "0e770338-09f5-4af8-b79a-dc01e251fc6a", "address": "fa:16:3e:84:27:1e", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0e770338-09", "ovs_interfaceid": "0e770338-09f5-4af8-b79a-dc01e251fc6a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:45:56 np0005593232 nova_compute[250269]: 2026-01-23 10:45:56.034 250273 DEBUG nova.network.os_vif_util [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:84:27:1e,bridge_name='br-int',has_traffic_filtering=True,id=0e770338-09f5-4af8-b79a-dc01e251fc6a,network=Network(72854481-c2f9-4651-8ba1-fe321a8a5546),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e770338-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:45:56 np0005593232 nova_compute[250269]: 2026-01-23 10:45:56.035 250273 DEBUG os_vif [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:27:1e,bridge_name='br-int',has_traffic_filtering=True,id=0e770338-09f5-4af8-b79a-dc01e251fc6a,network=Network(72854481-c2f9-4651-8ba1-fe321a8a5546),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e770338-09') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:45:56 np0005593232 nova_compute[250269]: 2026-01-23 10:45:56.038 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:56 np0005593232 nova_compute[250269]: 2026-01-23 10:45:56.039 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0e770338-09, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:45:56 np0005593232 podman[393251]: 2026-01-23 10:45:56.042052623 +0000 UTC m=+0.114770463 container cleanup a263390a30468db2ed65a10fc02fd5908fc2d4ff28bd06691a89ae7e7f623fa8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 05:45:56 np0005593232 nova_compute[250269]: 2026-01-23 10:45:56.044 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:56 np0005593232 nova_compute[250269]: 2026-01-23 10:45:56.046 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:45:56 np0005593232 nova_compute[250269]: 2026-01-23 10:45:56.051 250273 INFO os_vif [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:27:1e,bridge_name='br-int',has_traffic_filtering=True,id=0e770338-09f5-4af8-b79a-dc01e251fc6a,network=Network(72854481-c2f9-4651-8ba1-fe321a8a5546),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0e770338-09')#033[00m
Jan 23 05:45:56 np0005593232 systemd[1]: libpod-conmon-a263390a30468db2ed65a10fc02fd5908fc2d4ff28bd06691a89ae7e7f623fa8.scope: Deactivated successfully.
Jan 23 05:45:56 np0005593232 podman[393290]: 2026-01-23 10:45:56.13869468 +0000 UTC m=+0.050912588 container remove a263390a30468db2ed65a10fc02fd5908fc2d4ff28bd06691a89ae7e7f623fa8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 05:45:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:56.148 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[984597f1-685f-496c-9720-f9022f19f0c3]: (4, ('Fri Jan 23 10:45:55 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546 (a263390a30468db2ed65a10fc02fd5908fc2d4ff28bd06691a89ae7e7f623fa8)\na263390a30468db2ed65a10fc02fd5908fc2d4ff28bd06691a89ae7e7f623fa8\nFri Jan 23 10:45:56 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546 (a263390a30468db2ed65a10fc02fd5908fc2d4ff28bd06691a89ae7e7f623fa8)\na263390a30468db2ed65a10fc02fd5908fc2d4ff28bd06691a89ae7e7f623fa8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:56.151 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[47f935b1-70ba-48ed-8d00-cb4375645486]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:56.153 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72854481-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:45:56 np0005593232 kernel: tap72854481-c0: left promiscuous mode
Jan 23 05:45:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:56.180 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[052b6970-de34-4946-b7a9-60c950d4519f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:56.196 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[246ffb57-c057-407f-8625-177c1474f1dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:56.198 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[450dd2c3-8785-4902-b077-81185b949eaa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:56.230 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3b018f49-4e42-4989-89d0-545bebc55e4c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 904475, 'reachable_time': 43116, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393322, 'error': None, 'target': 'ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:56.238 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:45:56 np0005593232 systemd[1]: run-netns-ovnmeta\x2d72854481\x2dc2f9\x2d4651\x2d8ba1\x2dfe321a8a5546.mount: Deactivated successfully.
Jan 23 05:45:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:45:56.238 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[880a3068-0ac1-4251-8dbd-e40ba4f04616]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:45:56 np0005593232 nova_compute[250269]: 2026-01-23 10:45:56.407 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:56 np0005593232 nova_compute[250269]: 2026-01-23 10:45:56.411 250273 DEBUG nova.compute.manager [req-62402de7-7172-4a0b-a8ed-3c03cc77c24a req-c8813d98-1fd3-4ffe-af87-07388adb0cd8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Received event network-vif-unplugged-0e770338-09f5-4af8-b79a-dc01e251fc6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:45:56 np0005593232 nova_compute[250269]: 2026-01-23 10:45:56.411 250273 DEBUG oslo_concurrency.lockutils [req-62402de7-7172-4a0b-a8ed-3c03cc77c24a req-c8813d98-1fd3-4ffe-af87-07388adb0cd8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "b0f3c685-d13a-41b8-925b-67b144c237b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:45:56 np0005593232 nova_compute[250269]: 2026-01-23 10:45:56.411 250273 DEBUG oslo_concurrency.lockutils [req-62402de7-7172-4a0b-a8ed-3c03cc77c24a req-c8813d98-1fd3-4ffe-af87-07388adb0cd8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b0f3c685-d13a-41b8-925b-67b144c237b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:45:56 np0005593232 nova_compute[250269]: 2026-01-23 10:45:56.411 250273 DEBUG oslo_concurrency.lockutils [req-62402de7-7172-4a0b-a8ed-3c03cc77c24a req-c8813d98-1fd3-4ffe-af87-07388adb0cd8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b0f3c685-d13a-41b8-925b-67b144c237b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:45:56 np0005593232 nova_compute[250269]: 2026-01-23 10:45:56.412 250273 DEBUG nova.compute.manager [req-62402de7-7172-4a0b-a8ed-3c03cc77c24a req-c8813d98-1fd3-4ffe-af87-07388adb0cd8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] No waiting events found dispatching network-vif-unplugged-0e770338-09f5-4af8-b79a-dc01e251fc6a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:45:56 np0005593232 nova_compute[250269]: 2026-01-23 10:45:56.412 250273 DEBUG nova.compute.manager [req-62402de7-7172-4a0b-a8ed-3c03cc77c24a req-c8813d98-1fd3-4ffe-af87-07388adb0cd8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Received event network-vif-unplugged-0e770338-09f5-4af8-b79a-dc01e251fc6a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:45:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3580: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 579 KiB/s rd, 4.3 MiB/s wr, 123 op/s
Jan 23 05:45:56 np0005593232 nova_compute[250269]: 2026-01-23 10:45:56.573 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:45:56 np0005593232 nova_compute[250269]: 2026-01-23 10:45:56.961 250273 INFO nova.virt.libvirt.driver [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Deleting instance files /var/lib/nova/instances/b0f3c685-d13a-41b8-925b-67b144c237b8_del#033[00m
Jan 23 05:45:56 np0005593232 nova_compute[250269]: 2026-01-23 10:45:56.964 250273 INFO nova.virt.libvirt.driver [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Deletion of /var/lib/nova/instances/b0f3c685-d13a-41b8-925b-67b144c237b8_del complete#033[00m
Jan 23 05:45:57 np0005593232 nova_compute[250269]: 2026-01-23 10:45:57.028 250273 INFO nova.compute.manager [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Took 1.49 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:45:57 np0005593232 nova_compute[250269]: 2026-01-23 10:45:57.030 250273 DEBUG oslo.service.loopingcall [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:45:57 np0005593232 nova_compute[250269]: 2026-01-23 10:45:57.031 250273 DEBUG nova.compute.manager [-] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:45:57 np0005593232 nova_compute[250269]: 2026-01-23 10:45:57.032 250273 DEBUG nova.network.neutron [-] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:45:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:57.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:45:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:57.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:45:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3581: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 592 KiB/s rd, 4.3 MiB/s wr, 140 op/s
Jan 23 05:45:58 np0005593232 nova_compute[250269]: 2026-01-23 10:45:58.585 250273 DEBUG nova.compute.manager [req-5a87d1f1-5b35-46b4-902a-3a2c0f22575f req-712857e4-8aeb-43a8-bc5f-c831f3a47e63 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Received event network-vif-plugged-0e770338-09f5-4af8-b79a-dc01e251fc6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:45:58 np0005593232 nova_compute[250269]: 2026-01-23 10:45:58.586 250273 DEBUG oslo_concurrency.lockutils [req-5a87d1f1-5b35-46b4-902a-3a2c0f22575f req-712857e4-8aeb-43a8-bc5f-c831f3a47e63 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "b0f3c685-d13a-41b8-925b-67b144c237b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:45:58 np0005593232 nova_compute[250269]: 2026-01-23 10:45:58.586 250273 DEBUG oslo_concurrency.lockutils [req-5a87d1f1-5b35-46b4-902a-3a2c0f22575f req-712857e4-8aeb-43a8-bc5f-c831f3a47e63 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b0f3c685-d13a-41b8-925b-67b144c237b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:45:58 np0005593232 nova_compute[250269]: 2026-01-23 10:45:58.586 250273 DEBUG oslo_concurrency.lockutils [req-5a87d1f1-5b35-46b4-902a-3a2c0f22575f req-712857e4-8aeb-43a8-bc5f-c831f3a47e63 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "b0f3c685-d13a-41b8-925b-67b144c237b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:45:58 np0005593232 nova_compute[250269]: 2026-01-23 10:45:58.586 250273 DEBUG nova.compute.manager [req-5a87d1f1-5b35-46b4-902a-3a2c0f22575f req-712857e4-8aeb-43a8-bc5f-c831f3a47e63 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] No waiting events found dispatching network-vif-plugged-0e770338-09f5-4af8-b79a-dc01e251fc6a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:45:58 np0005593232 nova_compute[250269]: 2026-01-23 10:45:58.586 250273 WARNING nova.compute.manager [req-5a87d1f1-5b35-46b4-902a-3a2c0f22575f req-712857e4-8aeb-43a8-bc5f-c831f3a47e63 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Received unexpected event network-vif-plugged-0e770338-09f5-4af8-b79a-dc01e251fc6a for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:45:58 np0005593232 nova_compute[250269]: 2026-01-23 10:45:58.674 250273 DEBUG nova.network.neutron [-] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:45:58 np0005593232 nova_compute[250269]: 2026-01-23 10:45:58.706 250273 INFO nova.compute.manager [-] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Took 1.67 seconds to deallocate network for instance.#033[00m
Jan 23 05:45:58 np0005593232 nova_compute[250269]: 2026-01-23 10:45:58.928 250273 INFO nova.compute.manager [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Took 0.22 seconds to detach 1 volumes for instance.#033[00m
Jan 23 05:45:59 np0005593232 nova_compute[250269]: 2026-01-23 10:45:59.014 250273 DEBUG oslo_concurrency.lockutils [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:45:59 np0005593232 nova_compute[250269]: 2026-01-23 10:45:59.015 250273 DEBUG oslo_concurrency.lockutils [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:45:59 np0005593232 nova_compute[250269]: 2026-01-23 10:45:59.096 250273 DEBUG oslo_concurrency.processutils [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:45:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:45:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:45:59.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:45:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:45:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:45:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:45:59.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:45:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:45:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:45:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/520748909' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:45:59 np0005593232 nova_compute[250269]: 2026-01-23 10:45:59.647 250273 DEBUG oslo_concurrency.processutils [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:45:59 np0005593232 nova_compute[250269]: 2026-01-23 10:45:59.658 250273 DEBUG nova.compute.provider_tree [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:45:59 np0005593232 nova_compute[250269]: 2026-01-23 10:45:59.686 250273 DEBUG nova.scheduler.client.report [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:45:59 np0005593232 nova_compute[250269]: 2026-01-23 10:45:59.724 250273 DEBUG oslo_concurrency.lockutils [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:45:59 np0005593232 nova_compute[250269]: 2026-01-23 10:45:59.758 250273 INFO nova.scheduler.client.report [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Deleted allocations for instance b0f3c685-d13a-41b8-925b-67b144c237b8#033[00m
Jan 23 05:45:59 np0005593232 nova_compute[250269]: 2026-01-23 10:45:59.833 250273 DEBUG oslo_concurrency.lockutils [None req-1bd39687-5b17-4c57-a174-f49a24d99ffe eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "b0f3c685-d13a-41b8-925b-67b144c237b8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.295s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:46:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3582: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 44 KiB/s wr, 18 op/s
Jan 23 05:46:00 np0005593232 nova_compute[250269]: 2026-01-23 10:46:00.706 250273 DEBUG nova.compute.manager [req-2447df58-68e9-4b20-9155-d1825e07d6d7 req-6c5291b4-bf51-4f57-81ea-d284870e87db 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Received event network-vif-deleted-0e770338-09f5-4af8-b79a-dc01e251fc6a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:46:01 np0005593232 nova_compute[250269]: 2026-01-23 10:46:01.043 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:46:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:01.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:46:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:01.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:01 np0005593232 nova_compute[250269]: 2026-01-23 10:46:01.578 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e400 do_prune osdmap full prune enabled
Jan 23 05:46:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e401 e401: 3 total, 3 up, 3 in
Jan 23 05:46:02 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e401: 3 total, 3 up, 3 in
Jan 23 05:46:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3584: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 52 KiB/s wr, 22 op/s
Jan 23 05:46:03 np0005593232 nova_compute[250269]: 2026-01-23 10:46:03.243 250273 DEBUG oslo_concurrency.lockutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Acquiring lock "28049b58-a86b-4eeb-8faa-239ab046508b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:46:03 np0005593232 nova_compute[250269]: 2026-01-23 10:46:03.244 250273 DEBUG oslo_concurrency.lockutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "28049b58-a86b-4eeb-8faa-239ab046508b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:46:03 np0005593232 nova_compute[250269]: 2026-01-23 10:46:03.264 250273 DEBUG nova.compute.manager [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 05:46:03 np0005593232 nova_compute[250269]: 2026-01-23 10:46:03.333 250273 DEBUG oslo_concurrency.lockutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:46:03 np0005593232 nova_compute[250269]: 2026-01-23 10:46:03.334 250273 DEBUG oslo_concurrency.lockutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:46:03 np0005593232 nova_compute[250269]: 2026-01-23 10:46:03.380 250273 DEBUG nova.virt.hardware [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:46:03 np0005593232 nova_compute[250269]: 2026-01-23 10:46:03.380 250273 INFO nova.compute.claims [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:46:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:03.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:46:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:03.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:46:03 np0005593232 nova_compute[250269]: 2026-01-23 10:46:03.555 250273 DEBUG oslo_concurrency.processutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:46:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:46:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/615658967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.094 250273 DEBUG oslo_concurrency.processutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.101 250273 DEBUG nova.compute.provider_tree [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.119 250273 DEBUG nova.scheduler.client.report [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.146 250273 DEBUG oslo_concurrency.lockutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.147 250273 DEBUG nova.compute.manager [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.205 250273 DEBUG nova.compute.manager [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.206 250273 DEBUG nova.network.neutron [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.227 250273 INFO nova.virt.libvirt.driver [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.247 250273 DEBUG nova.compute.manager [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.312 250273 INFO nova.virt.block_device [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Booting with volume e5e1339d-ae3b-4104-9429-034d0ceea4fd at /dev/vda#033[00m
Jan 23 05:46:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3585: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 44 KiB/s wr, 39 op/s
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.538 250273 DEBUG nova.policy [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb70c3aee8b64273a1930c0c2c231aff', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd27c5465284b48a5818ef931d6251c43', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.563 250273 DEBUG os_brick.utils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.567 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:46:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.594 265031 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.595 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[4e6c888d-eacc-41b1-9499-533f44ecea20]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.597 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.612 265031 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.613 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[983a6933-0249-4019-83fb-cd487e5eaeb8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:c6473626b456', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.615 265031 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.631 265031 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.631 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[29d0e483-3fff-42e8-903b-cd7fd5151e62]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.633 265031 DEBUG oslo.privsep.daemon [-] privsep: reply[23ece719-a3c7-4c92-9799-f7f78cbd1d34]: (4, 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.634 250273 DEBUG oslo_concurrency.processutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.681 250273 DEBUG oslo_concurrency.processutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] CMD "nvme version" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.684 250273 DEBUG os_brick.initiator.connectors.lightos [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.684 250273 DEBUG os_brick.initiator.connectors.lightos [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.684 250273 DEBUG os_brick.initiator.connectors.lightos [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.685 250273 DEBUG os_brick.utils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] <== get_connector_properties: return (120ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:c6473626b456', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': 'ebc95db9-b389-4cf6-b3a5-7ae0afc322d2', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 23 05:46:04 np0005593232 nova_compute[250269]: 2026-01-23 10:46:04.685 250273 DEBUG nova.virt.block_device [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Updating existing volume attachment record: 0cae7e16-8cc7-4dcc-82f1-3e66046ac933 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 23 05:46:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:46:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:05.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:46:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:05.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:05.917 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=81, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=80) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:46:05 np0005593232 nova_compute[250269]: 2026-01-23 10:46:05.918 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:05.919 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:46:06 np0005593232 nova_compute[250269]: 2026-01-23 10:46:06.013 250273 DEBUG nova.network.neutron [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Successfully created port: e89e1397-f031-41c8-a81d-9efecff06096 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 05:46:06 np0005593232 nova_compute[250269]: 2026-01-23 10:46:06.046 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:06 np0005593232 nova_compute[250269]: 2026-01-23 10:46:06.430 250273 DEBUG nova.compute.manager [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 05:46:06 np0005593232 nova_compute[250269]: 2026-01-23 10:46:06.433 250273 DEBUG nova.virt.libvirt.driver [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 05:46:06 np0005593232 nova_compute[250269]: 2026-01-23 10:46:06.434 250273 INFO nova.virt.libvirt.driver [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Creating image(s)#033[00m
Jan 23 05:46:06 np0005593232 nova_compute[250269]: 2026-01-23 10:46:06.435 250273 DEBUG nova.virt.libvirt.driver [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 23 05:46:06 np0005593232 nova_compute[250269]: 2026-01-23 10:46:06.435 250273 DEBUG nova.virt.libvirt.driver [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Ensure instance console log exists: /var/lib/nova/instances/28049b58-a86b-4eeb-8faa-239ab046508b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:46:06 np0005593232 nova_compute[250269]: 2026-01-23 10:46:06.436 250273 DEBUG oslo_concurrency.lockutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:46:06 np0005593232 nova_compute[250269]: 2026-01-23 10:46:06.437 250273 DEBUG oslo_concurrency.lockutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:46:06 np0005593232 nova_compute[250269]: 2026-01-23 10:46:06.437 250273 DEBUG oslo_concurrency.lockutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:46:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3586: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 44 KiB/s wr, 39 op/s
Jan 23 05:46:06 np0005593232 nova_compute[250269]: 2026-01-23 10:46:06.616 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e401 do_prune osdmap full prune enabled
Jan 23 05:46:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e402 e402: 3 total, 3 up, 3 in
Jan 23 05:46:06 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e402: 3 total, 3 up, 3 in
Jan 23 05:46:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:46:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:07.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:46:07 np0005593232 nova_compute[250269]: 2026-01-23 10:46:07.428 250273 DEBUG nova.network.neutron [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Successfully updated port: e89e1397-f031-41c8-a81d-9efecff06096 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 05:46:07 np0005593232 nova_compute[250269]: 2026-01-23 10:46:07.447 250273 DEBUG oslo_concurrency.lockutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Acquiring lock "refresh_cache-28049b58-a86b-4eeb-8faa-239ab046508b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:46:07 np0005593232 nova_compute[250269]: 2026-01-23 10:46:07.447 250273 DEBUG oslo_concurrency.lockutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Acquired lock "refresh_cache-28049b58-a86b-4eeb-8faa-239ab046508b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:46:07 np0005593232 nova_compute[250269]: 2026-01-23 10:46:07.447 250273 DEBUG nova.network.neutron [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:46:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:07.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:07 np0005593232 nova_compute[250269]: 2026-01-23 10:46:07.585 250273 DEBUG nova.compute.manager [req-cbe3f2f8-bb21-4ca6-9232-962f2a8e0ed3 req-f03fe010-b5cc-4eba-9883-b8c3377f5a47 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Received event network-changed-e89e1397-f031-41c8-a81d-9efecff06096 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:46:07 np0005593232 nova_compute[250269]: 2026-01-23 10:46:07.585 250273 DEBUG nova.compute.manager [req-cbe3f2f8-bb21-4ca6-9232-962f2a8e0ed3 req-f03fe010-b5cc-4eba-9883-b8c3377f5a47 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Refreshing instance network info cache due to event network-changed-e89e1397-f031-41c8-a81d-9efecff06096. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:46:07 np0005593232 nova_compute[250269]: 2026-01-23 10:46:07.585 250273 DEBUG oslo_concurrency.lockutils [req-cbe3f2f8-bb21-4ca6-9232-962f2a8e0ed3 req-f03fe010-b5cc-4eba-9883-b8c3377f5a47 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-28049b58-a86b-4eeb-8faa-239ab046508b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:46:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:46:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:46:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:46:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:46:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:46:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:46:07 np0005593232 nova_compute[250269]: 2026-01-23 10:46:07.651 250273 DEBUG nova.network.neutron [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 05:46:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e402 do_prune osdmap full prune enabled
Jan 23 05:46:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e403 e403: 3 total, 3 up, 3 in
Jan 23 05:46:07 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e403: 3 total, 3 up, 3 in
Jan 23 05:46:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3589: 321 pgs: 321 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.3 MiB/s rd, 7.3 MiB/s wr, 141 op/s
Jan 23 05:46:08 np0005593232 nova_compute[250269]: 2026-01-23 10:46:08.618 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:46:08 np0005593232 nova_compute[250269]: 2026-01-23 10:46:08.618 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:46:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:46:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:09.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:46:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:46:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:09.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.540 250273 DEBUG nova.network.neutron [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Updating instance_info_cache with network_info: [{"id": "e89e1397-f031-41c8-a81d-9efecff06096", "address": "fa:16:3e:de:58:64", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89e1397-f0", "ovs_interfaceid": "e89e1397-f031-41c8-a81d-9efecff06096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.568 250273 DEBUG oslo_concurrency.lockutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Releasing lock "refresh_cache-28049b58-a86b-4eeb-8faa-239ab046508b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.568 250273 DEBUG nova.compute.manager [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Instance network_info: |[{"id": "e89e1397-f031-41c8-a81d-9efecff06096", "address": "fa:16:3e:de:58:64", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89e1397-f0", "ovs_interfaceid": "e89e1397-f031-41c8-a81d-9efecff06096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.569 250273 DEBUG oslo_concurrency.lockutils [req-cbe3f2f8-bb21-4ca6-9232-962f2a8e0ed3 req-f03fe010-b5cc-4eba-9883-b8c3377f5a47 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-28049b58-a86b-4eeb-8faa-239ab046508b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.569 250273 DEBUG nova.network.neutron [req-cbe3f2f8-bb21-4ca6-9232-962f2a8e0ed3 req-f03fe010-b5cc-4eba-9883-b8c3377f5a47 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Refreshing network info cache for port e89e1397-f031-41c8-a81d-9efecff06096 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.575 250273 DEBUG nova.virt.libvirt.driver [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Start _get_guest_xml network_info=[{"id": "e89e1397-f031-41c8-a81d-9efecff06096", "address": "fa:16:3e:de:58:64", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89e1397-f0", "ovs_interfaceid": "e89e1397-f031-41c8-a81d-9efecff06096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'mount_device': '/dev/vda', 'disk_bus': 'virtio', 'delete_on_termination': False, 'attachment_id': '0cae7e16-8cc7-4dcc-82f1-3e66046ac933', 'device_type': 'disk', 'boot_index': 0, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-e5e1339d-ae3b-4104-9429-034d0ceea4fd', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'e5e1339d-ae3b-4104-9429-034d0ceea4fd', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '28049b58-a86b-4eeb-8faa-239ab046508b', 'attached_at': '', 'detached_at': '', 'volume_id': 'e5e1339d-ae3b-4104-9429-034d0ceea4fd', 'serial': 'e5e1339d-ae3b-4104-9429-034d0ceea4fd'}, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:46:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:46:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e403 do_prune osdmap full prune enabled
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.584 250273 WARNING nova.virt.libvirt.driver [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:46:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e404 e404: 3 total, 3 up, 3 in
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.590 250273 DEBUG nova.virt.libvirt.host [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.591 250273 DEBUG nova.virt.libvirt.host [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:46:09 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e404: 3 total, 3 up, 3 in
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.605 250273 DEBUG nova.virt.libvirt.host [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.606 250273 DEBUG nova.virt.libvirt.host [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.612 250273 DEBUG nova.virt.libvirt.driver [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.613 250273 DEBUG nova.virt.hardware [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.614 250273 DEBUG nova.virt.hardware [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.614 250273 DEBUG nova.virt.hardware [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.614 250273 DEBUG nova.virt.hardware [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.615 250273 DEBUG nova.virt.hardware [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.615 250273 DEBUG nova.virt.hardware [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.616 250273 DEBUG nova.virt.hardware [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.616 250273 DEBUG nova.virt.hardware [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.617 250273 DEBUG nova.virt.hardware [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.617 250273 DEBUG nova.virt.hardware [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.618 250273 DEBUG nova.virt.hardware [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.660 250273 DEBUG nova.storage.rbd_utils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] rbd image 28049b58-a86b-4eeb-8faa-239ab046508b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:46:09 np0005593232 nova_compute[250269]: 2026-01-23 10:46:09.665 250273 DEBUG oslo_concurrency.processutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:46:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:46:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4000691082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.158 250273 DEBUG oslo_concurrency.processutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.189 250273 DEBUG nova.virt.libvirt.vif [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:46:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1352362499',display_name='tempest-TestVolumeBootPattern-server-1352362499',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1352362499',id=206,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIC+r2L0+q9PBY99aBdo5XmxGSuvAYYpgonPD6n/fI0f2obfQ97Vwf3Ee9Eeaa+EcBJLE3HyG34aomsAcNn3g+1JMNr5TfA5Vs6CyFMbO26Wy1l0eWLB9DEd+vOA6lCnxQ==',key_name='tempest-TestVolumeBootPattern-909159234',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d27c5465284b48a5818ef931d6251c43',ramdisk_id='',reservation_id='r-dbypuz6j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-2139361132',owner_user_name='tempest-TestVolumeBootPattern-2139361132-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:46:04Z,user_data=None,user_id='eb70c3aee8b64273a1930c0c2c231aff',uuid=28049b58-a86b-4eeb-8faa-239ab046508b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e89e1397-f031-41c8-a81d-9efecff06096", "address": "fa:16:3e:de:58:64", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89e1397-f0", "ovs_interfaceid": "e89e1397-f031-41c8-a81d-9efecff06096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.190 250273 DEBUG nova.network.os_vif_util [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Converting VIF {"id": "e89e1397-f031-41c8-a81d-9efecff06096", "address": "fa:16:3e:de:58:64", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89e1397-f0", "ovs_interfaceid": "e89e1397-f031-41c8-a81d-9efecff06096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.191 250273 DEBUG nova.network.os_vif_util [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:58:64,bridge_name='br-int',has_traffic_filtering=True,id=e89e1397-f031-41c8-a81d-9efecff06096,network=Network(72854481-c2f9-4651-8ba1-fe321a8a5546),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89e1397-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.193 250273 DEBUG nova.objects.instance [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lazy-loading 'pci_devices' on Instance uuid 28049b58-a86b-4eeb-8faa-239ab046508b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.218 250273 DEBUG nova.virt.libvirt.driver [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:46:10 np0005593232 nova_compute[250269]:  <uuid>28049b58-a86b-4eeb-8faa-239ab046508b</uuid>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:  <name>instance-000000ce</name>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <nova:name>tempest-TestVolumeBootPattern-server-1352362499</nova:name>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:46:09</nova:creationTime>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:46:10 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:        <nova:user uuid="eb70c3aee8b64273a1930c0c2c231aff">tempest-TestVolumeBootPattern-2139361132-project-member</nova:user>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:        <nova:project uuid="d27c5465284b48a5818ef931d6251c43">tempest-TestVolumeBootPattern-2139361132</nova:project>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:        <nova:port uuid="e89e1397-f031-41c8-a81d-9efecff06096">
Jan 23 05:46:10 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <entry name="serial">28049b58-a86b-4eeb-8faa-239ab046508b</entry>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <entry name="uuid">28049b58-a86b-4eeb-8faa-239ab046508b</entry>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/28049b58-a86b-4eeb-8faa-239ab046508b_disk.config">
Jan 23 05:46:10 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:46:10 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="volumes/volume-e5e1339d-ae3b-4104-9429-034d0ceea4fd">
Jan 23 05:46:10 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:46:10 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <serial>e5e1339d-ae3b-4104-9429-034d0ceea4fd</serial>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:de:58:64"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <target dev="tape89e1397-f0"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/28049b58-a86b-4eeb-8faa-239ab046508b/console.log" append="off"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:46:10 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:46:10 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:46:10 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:46:10 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.220 250273 DEBUG nova.compute.manager [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Preparing to wait for external event network-vif-plugged-e89e1397-f031-41c8-a81d-9efecff06096 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.221 250273 DEBUG oslo_concurrency.lockutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Acquiring lock "28049b58-a86b-4eeb-8faa-239ab046508b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.221 250273 DEBUG oslo_concurrency.lockutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "28049b58-a86b-4eeb-8faa-239ab046508b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.222 250273 DEBUG oslo_concurrency.lockutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "28049b58-a86b-4eeb-8faa-239ab046508b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.223 250273 DEBUG nova.virt.libvirt.vif [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T10:46:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1352362499',display_name='tempest-TestVolumeBootPattern-server-1352362499',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1352362499',id=206,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIC+r2L0+q9PBY99aBdo5XmxGSuvAYYpgonPD6n/fI0f2obfQ97Vwf3Ee9Eeaa+EcBJLE3HyG34aomsAcNn3g+1JMNr5TfA5Vs6CyFMbO26Wy1l0eWLB9DEd+vOA6lCnxQ==',key_name='tempest-TestVolumeBootPattern-909159234',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d27c5465284b48a5818ef931d6251c43',ramdisk_id='',reservation_id='r-dbypuz6j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-2139361132',owner_user_name='tempest-TestVolumeBootPattern-2139361132-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:46:04Z,user_data=None,user_id='eb70c3aee8b64273a1930c0c2c231aff',uuid=28049b58-a86b-4eeb-8faa-239ab046508b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e89e1397-f031-41c8-a81d-9efecff06096", "address": "fa:16:3e:de:58:64", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89e1397-f0", "ovs_interfaceid": "e89e1397-f031-41c8-a81d-9efecff06096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.223 250273 DEBUG nova.network.os_vif_util [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Converting VIF {"id": "e89e1397-f031-41c8-a81d-9efecff06096", "address": "fa:16:3e:de:58:64", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89e1397-f0", "ovs_interfaceid": "e89e1397-f031-41c8-a81d-9efecff06096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.224 250273 DEBUG nova.network.os_vif_util [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:58:64,bridge_name='br-int',has_traffic_filtering=True,id=e89e1397-f031-41c8-a81d-9efecff06096,network=Network(72854481-c2f9-4651-8ba1-fe321a8a5546),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89e1397-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.225 250273 DEBUG os_vif [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:58:64,bridge_name='br-int',has_traffic_filtering=True,id=e89e1397-f031-41c8-a81d-9efecff06096,network=Network(72854481-c2f9-4651-8ba1-fe321a8a5546),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89e1397-f0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.226 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.227 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.228 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.233 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.233 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape89e1397-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.234 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape89e1397-f0, col_values=(('external_ids', {'iface-id': 'e89e1397-f031-41c8-a81d-9efecff06096', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:58:64', 'vm-uuid': '28049b58-a86b-4eeb-8faa-239ab046508b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.236 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:10 np0005593232 NetworkManager[49057]: <info>  [1769165170.2383] manager: (tape89e1397-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/381)
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.241 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.242 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.242 250273 INFO os_vif [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:58:64,bridge_name='br-int',has_traffic_filtering=True,id=e89e1397-f031-41c8-a81d-9efecff06096,network=Network(72854481-c2f9-4651-8ba1-fe321a8a5546),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89e1397-f0')#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.344 250273 DEBUG nova.virt.libvirt.driver [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.344 250273 DEBUG nova.virt.libvirt.driver [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.345 250273 DEBUG nova.virt.libvirt.driver [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] No VIF found with MAC fa:16:3e:de:58:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.346 250273 INFO nova.virt.libvirt.driver [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Using config drive#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.379 250273 DEBUG nova.storage.rbd_utils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] rbd image 28049b58-a86b-4eeb-8faa-239ab046508b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:46:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3591: 321 pgs: 321 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 119 op/s
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.996 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769165155.9959474, b0f3c685-d13a-41b8-925b-67b144c237b8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:46:10 np0005593232 nova_compute[250269]: 2026-01-23 10:46:10.997 250273 INFO nova.compute.manager [-] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:46:11 np0005593232 nova_compute[250269]: 2026-01-23 10:46:11.020 250273 DEBUG nova.compute.manager [None req-19a66862-591a-4ace-940a-b440b6b464fb - - - - - -] [instance: b0f3c685-d13a-41b8-925b-67b144c237b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:46:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:11.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:46:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:11.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:46:11 np0005593232 nova_compute[250269]: 2026-01-23 10:46:11.535 250273 INFO nova.virt.libvirt.driver [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Creating config drive at /var/lib/nova/instances/28049b58-a86b-4eeb-8faa-239ab046508b/disk.config#033[00m
Jan 23 05:46:11 np0005593232 nova_compute[250269]: 2026-01-23 10:46:11.540 250273 DEBUG oslo_concurrency.processutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/28049b58-a86b-4eeb-8faa-239ab046508b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_p4h8114 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:46:11 np0005593232 nova_compute[250269]: 2026-01-23 10:46:11.657 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:11 np0005593232 nova_compute[250269]: 2026-01-23 10:46:11.690 250273 DEBUG oslo_concurrency.processutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/28049b58-a86b-4eeb-8faa-239ab046508b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_p4h8114" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:46:11 np0005593232 nova_compute[250269]: 2026-01-23 10:46:11.719 250273 DEBUG nova.storage.rbd_utils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] rbd image 28049b58-a86b-4eeb-8faa-239ab046508b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 05:46:11 np0005593232 nova_compute[250269]: 2026-01-23 10:46:11.722 250273 DEBUG oslo_concurrency.processutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/28049b58-a86b-4eeb-8faa-239ab046508b/disk.config 28049b58-a86b-4eeb-8faa-239ab046508b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:46:11 np0005593232 nova_compute[250269]: 2026-01-23 10:46:11.906 250273 DEBUG oslo_concurrency.processutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/28049b58-a86b-4eeb-8faa-239ab046508b/disk.config 28049b58-a86b-4eeb-8faa-239ab046508b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:46:11 np0005593232 nova_compute[250269]: 2026-01-23 10:46:11.908 250273 INFO nova.virt.libvirt.driver [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Deleting local config drive /var/lib/nova/instances/28049b58-a86b-4eeb-8faa-239ab046508b/disk.config because it was imported into RBD.#033[00m
Jan 23 05:46:11 np0005593232 kernel: tape89e1397-f0: entered promiscuous mode
Jan 23 05:46:11 np0005593232 NetworkManager[49057]: <info>  [1769165171.9757] manager: (tape89e1397-f0): new Tun device (/org/freedesktop/NetworkManager/Devices/382)
Jan 23 05:46:11 np0005593232 nova_compute[250269]: 2026-01-23 10:46:11.977 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:11 np0005593232 ovn_controller[151001]: 2026-01-23T10:46:11Z|00812|binding|INFO|Claiming lport e89e1397-f031-41c8-a81d-9efecff06096 for this chassis.
Jan 23 05:46:11 np0005593232 ovn_controller[151001]: 2026-01-23T10:46:11Z|00813|binding|INFO|e89e1397-f031-41c8-a81d-9efecff06096: Claiming fa:16:3e:de:58:64 10.100.0.6
Jan 23 05:46:11 np0005593232 nova_compute[250269]: 2026-01-23 10:46:11.992 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:11 np0005593232 ovn_controller[151001]: 2026-01-23T10:46:11Z|00814|binding|INFO|Setting lport e89e1397-f031-41c8-a81d-9efecff06096 ovn-installed in OVS
Jan 23 05:46:11 np0005593232 nova_compute[250269]: 2026-01-23 10:46:11.994 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:11 np0005593232 nova_compute[250269]: 2026-01-23 10:46:11.998 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:12 np0005593232 ovn_controller[151001]: 2026-01-23T10:46:12Z|00815|binding|INFO|Setting lport e89e1397-f031-41c8-a81d-9efecff06096 up in Southbound
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.000 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:58:64 10.100.0.6'], port_security=['fa:16:3e:de:58:64 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '28049b58-a86b-4eeb-8faa-239ab046508b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72854481-c2f9-4651-8ba1-fe321a8a5546', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd27c5465284b48a5818ef931d6251c43', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dbb54d44-fc85-485d-96a6-e6e12258a95a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=95d9ae35-aabe-45f7-a103-f14858b94e31, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=e89e1397-f031-41c8-a81d-9efecff06096) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.001 161902 INFO neutron.agent.ovn.metadata.agent [-] Port e89e1397-f031-41c8-a81d-9efecff06096 in datapath 72854481-c2f9-4651-8ba1-fe321a8a5546 bound to our chassis#033[00m
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.002 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72854481-c2f9-4651-8ba1-fe321a8a5546#033[00m
Jan 23 05:46:12 np0005593232 systemd-udevd[393549]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.012 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d340fd15-5f58-4c71-9880-4b629ac8f0b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.013 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap72854481-c1 in ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:46:12 np0005593232 systemd-machined[215836]: New machine qemu-92-instance-000000ce.
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.019 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap72854481-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.020 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[dc4a97f2-75d6-47c6-b9a5-413c481829d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.021 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[26c3d68c-6832-42b5-afcd-97869945e7e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:46:12 np0005593232 NetworkManager[49057]: <info>  [1769165172.0263] device (tape89e1397-f0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:46:12 np0005593232 NetworkManager[49057]: <info>  [1769165172.0280] device (tape89e1397-f0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:46:12 np0005593232 systemd[1]: Started Virtual Machine qemu-92-instance-000000ce.
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.032 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[621c3403-a14e-45e8-931e-57e6546a4cd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.049 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[85804167-4b8a-484d-b4fe-28480bfdb352]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.087 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[5eb6ce4c-a947-4f84-8613-b88dff889e59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:46:12 np0005593232 NetworkManager[49057]: <info>  [1769165172.0955] manager: (tap72854481-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/383)
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.095 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d7551c87-d329-4edd-b949-ae8e1738459e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.142 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[934aab8b-9650-4c02-a646-fb7941e5c589]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.146 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[89355735-ff5e-44fc-be2f-6ffe66bc5567]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:46:12 np0005593232 NetworkManager[49057]: <info>  [1769165172.1798] device (tap72854481-c0): carrier: link connected
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.187 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[1b226e7a-5f64-4d0f-95d6-d7034417a16b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.215 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[182e77d2-eacd-4749-a783-ed828808bf78]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72854481-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:b6:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 908532, 'reachable_time': 44344, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393584, 'error': None, 'target': 'ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.239 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[50978e22-d73c-4f05-9c05-cd62a361b895]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe81:b660'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 908532, 'tstamp': 908532}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393585, 'error': None, 'target': 'ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.270 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[686fc142-14ce-48da-91a0-8819d94b7619]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72854481-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:81:b6:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 246], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 908532, 'reachable_time': 44344, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 393586, 'error': None, 'target': 'ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:46:12 np0005593232 nova_compute[250269]: 2026-01-23 10:46:12.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:46:12 np0005593232 nova_compute[250269]: 2026-01-23 10:46:12.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.312 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[65db2cd2-173e-4e63-a5bd-22c12a862649]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.397 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1e8ddfb6-0914-4082-bf3e-83a51c1ffafd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.399 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72854481-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.400 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.400 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72854481-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:46:12 np0005593232 nova_compute[250269]: 2026-01-23 10:46:12.402 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:12 np0005593232 kernel: tap72854481-c0: entered promiscuous mode
Jan 23 05:46:12 np0005593232 NetworkManager[49057]: <info>  [1769165172.4039] manager: (tap72854481-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/384)
Jan 23 05:46:12 np0005593232 nova_compute[250269]: 2026-01-23 10:46:12.407 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.408 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72854481-c0, col_values=(('external_ids', {'iface-id': '6b08537e-a263-4eec-b987-1e42878f483a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:46:12 np0005593232 nova_compute[250269]: 2026-01-23 10:46:12.409 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:12 np0005593232 ovn_controller[151001]: 2026-01-23T10:46:12Z|00816|binding|INFO|Releasing lport 6b08537e-a263-4eec-b987-1e42878f483a from this chassis (sb_readonly=0)
Jan 23 05:46:12 np0005593232 nova_compute[250269]: 2026-01-23 10:46:12.410 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.411 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/72854481-c2f9-4651-8ba1-fe321a8a5546.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/72854481-c2f9-4651-8ba1-fe321a8a5546.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:46:12 np0005593232 nova_compute[250269]: 2026-01-23 10:46:12.422 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.422 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8b8057c5-49c4-40a6-af86-adc4bbb0dd61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.423 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-72854481-c2f9-4651-8ba1-fe321a8a5546
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/72854481-c2f9-4651-8ba1-fe321a8a5546.pid.haproxy
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 72854481-c2f9-4651-8ba1-fe321a8a5546
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:46:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:12.424 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546', 'env', 'PROCESS_TAG=haproxy-72854481-c2f9-4651-8ba1-fe321a8a5546', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/72854481-c2f9-4651-8ba1-fe321a8a5546.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:46:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3592: 321 pgs: 321 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 119 op/s
Jan 23 05:46:12 np0005593232 podman[393648]: 2026-01-23 10:46:12.786257549 +0000 UTC m=+0.047180552 container create b67037fb8f734ee8d4a663432cc3c92ab2f449214e1e6da45f88023811f3e616 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 23 05:46:12 np0005593232 systemd[1]: Started libpod-conmon-b67037fb8f734ee8d4a663432cc3c92ab2f449214e1e6da45f88023811f3e616.scope.
Jan 23 05:46:12 np0005593232 nova_compute[250269]: 2026-01-23 10:46:12.837 250273 DEBUG nova.network.neutron [req-cbe3f2f8-bb21-4ca6-9232-962f2a8e0ed3 req-f03fe010-b5cc-4eba-9883-b8c3377f5a47 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Updated VIF entry in instance network info cache for port e89e1397-f031-41c8-a81d-9efecff06096. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:46:12 np0005593232 nova_compute[250269]: 2026-01-23 10:46:12.839 250273 DEBUG nova.network.neutron [req-cbe3f2f8-bb21-4ca6-9232-962f2a8e0ed3 req-f03fe010-b5cc-4eba-9883-b8c3377f5a47 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Updating instance_info_cache with network_info: [{"id": "e89e1397-f031-41c8-a81d-9efecff06096", "address": "fa:16:3e:de:58:64", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89e1397-f0", "ovs_interfaceid": "e89e1397-f031-41c8-a81d-9efecff06096", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:46:12 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:46:12 np0005593232 podman[393648]: 2026-01-23 10:46:12.761084393 +0000 UTC m=+0.022007416 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:46:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b5dbfd5e53dcf768f4904ffb4f733d9bffad7737cc3b3ed4fa956f665a492b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:46:12 np0005593232 nova_compute[250269]: 2026-01-23 10:46:12.868 250273 DEBUG oslo_concurrency.lockutils [req-cbe3f2f8-bb21-4ca6-9232-962f2a8e0ed3 req-f03fe010-b5cc-4eba-9883-b8c3377f5a47 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-28049b58-a86b-4eeb-8faa-239ab046508b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:46:12 np0005593232 nova_compute[250269]: 2026-01-23 10:46:12.872 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769165172.8710759, 28049b58-a86b-4eeb-8faa-239ab046508b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:46:12 np0005593232 nova_compute[250269]: 2026-01-23 10:46:12.872 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] VM Started (Lifecycle Event)#033[00m
Jan 23 05:46:12 np0005593232 podman[393648]: 2026-01-23 10:46:12.874126006 +0000 UTC m=+0.135049019 container init b67037fb8f734ee8d4a663432cc3c92ab2f449214e1e6da45f88023811f3e616 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 23 05:46:12 np0005593232 podman[393648]: 2026-01-23 10:46:12.885597553 +0000 UTC m=+0.146520576 container start b67037fb8f734ee8d4a663432cc3c92ab2f449214e1e6da45f88023811f3e616 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 05:46:12 np0005593232 nova_compute[250269]: 2026-01-23 10:46:12.897 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:46:12 np0005593232 nova_compute[250269]: 2026-01-23 10:46:12.902 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769165172.8717263, 28049b58-a86b-4eeb-8faa-239ab046508b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:46:12 np0005593232 nova_compute[250269]: 2026-01-23 10:46:12.903 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] VM Paused (Lifecycle Event)#033[00m
Jan 23 05:46:12 np0005593232 neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546[393671]: [NOTICE]   (393676) : New worker (393678) forked
Jan 23 05:46:12 np0005593232 neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546[393671]: [NOTICE]   (393676) : Loading success.
Jan 23 05:46:12 np0005593232 nova_compute[250269]: 2026-01-23 10:46:12.925 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:46:12 np0005593232 nova_compute[250269]: 2026-01-23 10:46:12.929 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:46:12 np0005593232 nova_compute[250269]: 2026-01-23 10:46:12.957 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:46:13 np0005593232 nova_compute[250269]: 2026-01-23 10:46:13.264 250273 DEBUG nova.compute.manager [req-7fad9855-c526-48d4-aee6-293d00cc97ae req-0070a38a-5a98-44c5-a638-f708679ffa65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Received event network-vif-plugged-e89e1397-f031-41c8-a81d-9efecff06096 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:46:13 np0005593232 nova_compute[250269]: 2026-01-23 10:46:13.265 250273 DEBUG oslo_concurrency.lockutils [req-7fad9855-c526-48d4-aee6-293d00cc97ae req-0070a38a-5a98-44c5-a638-f708679ffa65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "28049b58-a86b-4eeb-8faa-239ab046508b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:46:13 np0005593232 nova_compute[250269]: 2026-01-23 10:46:13.266 250273 DEBUG oslo_concurrency.lockutils [req-7fad9855-c526-48d4-aee6-293d00cc97ae req-0070a38a-5a98-44c5-a638-f708679ffa65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "28049b58-a86b-4eeb-8faa-239ab046508b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:46:13 np0005593232 nova_compute[250269]: 2026-01-23 10:46:13.267 250273 DEBUG oslo_concurrency.lockutils [req-7fad9855-c526-48d4-aee6-293d00cc97ae req-0070a38a-5a98-44c5-a638-f708679ffa65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "28049b58-a86b-4eeb-8faa-239ab046508b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:46:13 np0005593232 nova_compute[250269]: 2026-01-23 10:46:13.268 250273 DEBUG nova.compute.manager [req-7fad9855-c526-48d4-aee6-293d00cc97ae req-0070a38a-5a98-44c5-a638-f708679ffa65 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Processing event network-vif-plugged-e89e1397-f031-41c8-a81d-9efecff06096 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 05:46:13 np0005593232 nova_compute[250269]: 2026-01-23 10:46:13.269 250273 DEBUG nova.compute.manager [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:46:13 np0005593232 nova_compute[250269]: 2026-01-23 10:46:13.272 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769165173.272272, 28049b58-a86b-4eeb-8faa-239ab046508b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:46:13 np0005593232 nova_compute[250269]: 2026-01-23 10:46:13.273 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:46:13 np0005593232 nova_compute[250269]: 2026-01-23 10:46:13.275 250273 DEBUG nova.virt.libvirt.driver [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 05:46:13 np0005593232 nova_compute[250269]: 2026-01-23 10:46:13.280 250273 INFO nova.virt.libvirt.driver [-] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Instance spawned successfully.#033[00m
Jan 23 05:46:13 np0005593232 nova_compute[250269]: 2026-01-23 10:46:13.281 250273 DEBUG nova.virt.libvirt.driver [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 05:46:13 np0005593232 nova_compute[250269]: 2026-01-23 10:46:13.326 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:46:13 np0005593232 nova_compute[250269]: 2026-01-23 10:46:13.337 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:46:13 np0005593232 nova_compute[250269]: 2026-01-23 10:46:13.343 250273 DEBUG nova.virt.libvirt.driver [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:46:13 np0005593232 nova_compute[250269]: 2026-01-23 10:46:13.344 250273 DEBUG nova.virt.libvirt.driver [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:46:13 np0005593232 nova_compute[250269]: 2026-01-23 10:46:13.345 250273 DEBUG nova.virt.libvirt.driver [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:46:13 np0005593232 nova_compute[250269]: 2026-01-23 10:46:13.345 250273 DEBUG nova.virt.libvirt.driver [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:46:13 np0005593232 nova_compute[250269]: 2026-01-23 10:46:13.346 250273 DEBUG nova.virt.libvirt.driver [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:46:13 np0005593232 nova_compute[250269]: 2026-01-23 10:46:13.347 250273 DEBUG nova.virt.libvirt.driver [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 05:46:13 np0005593232 nova_compute[250269]: 2026-01-23 10:46:13.399 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 05:46:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:13.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:13 np0005593232 nova_compute[250269]: 2026-01-23 10:46:13.476 250273 INFO nova.compute.manager [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Took 7.05 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 05:46:13 np0005593232 nova_compute[250269]: 2026-01-23 10:46:13.477 250273 DEBUG nova.compute.manager [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:46:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:13.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:13 np0005593232 nova_compute[250269]: 2026-01-23 10:46:13.584 250273 INFO nova.compute.manager [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Took 10.27 seconds to build instance.#033[00m
Jan 23 05:46:13 np0005593232 nova_compute[250269]: 2026-01-23 10:46:13.614 250273 DEBUG oslo_concurrency.lockutils [None req-135f2736-b698-4a30-aeaf-afdfae355633 eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "28049b58-a86b-4eeb-8faa-239ab046508b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.370s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:46:13 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:13.922 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '81'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:46:14 np0005593232 nova_compute[250269]: 2026-01-23 10:46:14.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:46:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3593: 321 pgs: 321 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.1 MiB/s rd, 6.0 MiB/s wr, 124 op/s
Jan 23 05:46:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:46:15 np0005593232 nova_compute[250269]: 2026-01-23 10:46:15.236 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:15 np0005593232 nova_compute[250269]: 2026-01-23 10:46:15.402 250273 DEBUG nova.compute.manager [req-b0cedd57-c1b7-4a80-a2fe-2ca9cdd8755a req-f6d98498-4f47-46a2-907b-860da41578f8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Received event network-vif-plugged-e89e1397-f031-41c8-a81d-9efecff06096 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:46:15 np0005593232 nova_compute[250269]: 2026-01-23 10:46:15.402 250273 DEBUG oslo_concurrency.lockutils [req-b0cedd57-c1b7-4a80-a2fe-2ca9cdd8755a req-f6d98498-4f47-46a2-907b-860da41578f8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "28049b58-a86b-4eeb-8faa-239ab046508b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:46:15 np0005593232 nova_compute[250269]: 2026-01-23 10:46:15.403 250273 DEBUG oslo_concurrency.lockutils [req-b0cedd57-c1b7-4a80-a2fe-2ca9cdd8755a req-f6d98498-4f47-46a2-907b-860da41578f8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "28049b58-a86b-4eeb-8faa-239ab046508b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:46:15 np0005593232 nova_compute[250269]: 2026-01-23 10:46:15.403 250273 DEBUG oslo_concurrency.lockutils [req-b0cedd57-c1b7-4a80-a2fe-2ca9cdd8755a req-f6d98498-4f47-46a2-907b-860da41578f8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "28049b58-a86b-4eeb-8faa-239ab046508b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:46:15 np0005593232 nova_compute[250269]: 2026-01-23 10:46:15.403 250273 DEBUG nova.compute.manager [req-b0cedd57-c1b7-4a80-a2fe-2ca9cdd8755a req-f6d98498-4f47-46a2-907b-860da41578f8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] No waiting events found dispatching network-vif-plugged-e89e1397-f031-41c8-a81d-9efecff06096 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:46:15 np0005593232 nova_compute[250269]: 2026-01-23 10:46:15.404 250273 WARNING nova.compute.manager [req-b0cedd57-c1b7-4a80-a2fe-2ca9cdd8755a req-f6d98498-4f47-46a2-907b-860da41578f8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Received unexpected event network-vif-plugged-e89e1397-f031-41c8-a81d-9efecff06096 for instance with vm_state active and task_state None.#033[00m
Jan 23 05:46:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:15.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:15.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:16 np0005593232 nova_compute[250269]: 2026-01-23 10:46:16.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:46:16 np0005593232 podman[393689]: 2026-01-23 10:46:16.466890441 +0000 UTC m=+0.103659678 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 05:46:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3594: 321 pgs: 321 active+clean; 358 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 18 KiB/s wr, 28 op/s
Jan 23 05:46:16 np0005593232 nova_compute[250269]: 2026-01-23 10:46:16.659 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:17.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:17.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:17 np0005593232 nova_compute[250269]: 2026-01-23 10:46:17.595 250273 DEBUG nova.compute.manager [req-077daff5-0463-4fa4-a812-64c4be87d9aa req-c5de6b71-6bdf-4da9-88b9-75c0ca13a1df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Received event network-changed-e89e1397-f031-41c8-a81d-9efecff06096 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:46:17 np0005593232 nova_compute[250269]: 2026-01-23 10:46:17.596 250273 DEBUG nova.compute.manager [req-077daff5-0463-4fa4-a812-64c4be87d9aa req-c5de6b71-6bdf-4da9-88b9-75c0ca13a1df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Refreshing instance network info cache due to event network-changed-e89e1397-f031-41c8-a81d-9efecff06096. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:46:17 np0005593232 nova_compute[250269]: 2026-01-23 10:46:17.596 250273 DEBUG oslo_concurrency.lockutils [req-077daff5-0463-4fa4-a812-64c4be87d9aa req-c5de6b71-6bdf-4da9-88b9-75c0ca13a1df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-28049b58-a86b-4eeb-8faa-239ab046508b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:46:17 np0005593232 nova_compute[250269]: 2026-01-23 10:46:17.596 250273 DEBUG oslo_concurrency.lockutils [req-077daff5-0463-4fa4-a812-64c4be87d9aa req-c5de6b71-6bdf-4da9-88b9-75c0ca13a1df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-28049b58-a86b-4eeb-8faa-239ab046508b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:46:17 np0005593232 nova_compute[250269]: 2026-01-23 10:46:17.596 250273 DEBUG nova.network.neutron [req-077daff5-0463-4fa4-a812-64c4be87d9aa req-c5de6b71-6bdf-4da9-88b9-75c0ca13a1df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Refreshing network info cache for port e89e1397-f031-41c8-a81d-9efecff06096 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:46:17 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Jan 23 05:46:17 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:46:17.912096) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:46:17 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Jan 23 05:46:17 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165177912176, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 1148, "num_deletes": 254, "total_data_size": 1755249, "memory_usage": 1782592, "flush_reason": "Manual Compaction"}
Jan 23 05:46:17 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Jan 23 05:46:17 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165177930388, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 1736357, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 78535, "largest_seqno": 79682, "table_properties": {"data_size": 1730819, "index_size": 2932, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12456, "raw_average_key_size": 20, "raw_value_size": 1719490, "raw_average_value_size": 2814, "num_data_blocks": 128, "num_entries": 611, "num_filter_entries": 611, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769165085, "oldest_key_time": 1769165085, "file_creation_time": 1769165177, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:46:17 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 18334 microseconds, and 4677 cpu microseconds.
Jan 23 05:46:17 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:46:17 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:46:17.930426) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 1736357 bytes OK
Jan 23 05:46:17 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:46:17.930449) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Jan 23 05:46:17 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:46:17.932845) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Jan 23 05:46:17 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:46:17.932858) EVENT_LOG_v1 {"time_micros": 1769165177932854, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:46:17 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:46:17.932889) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:46:17 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 1749983, prev total WAL file size 1749983, number of live WAL files 2.
Jan 23 05:46:17 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:46:17 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:46:17.933507) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Jan 23 05:46:17 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:46:17 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(1695KB)], [182(12MB)]
Jan 23 05:46:17 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165177933647, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 14967265, "oldest_snapshot_seqno": -1}
Jan 23 05:46:18 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 10400 keys, 13035867 bytes, temperature: kUnknown
Jan 23 05:46:18 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165178102420, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 13035867, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12969156, "index_size": 39590, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26053, "raw_key_size": 275246, "raw_average_key_size": 26, "raw_value_size": 12787632, "raw_average_value_size": 1229, "num_data_blocks": 1504, "num_entries": 10400, "num_filter_entries": 10400, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769165177, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:46:18 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:46:18 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:46:18.102778) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 13035867 bytes
Jan 23 05:46:18 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:46:18.104510) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 88.6 rd, 77.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 12.6 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(16.1) write-amplify(7.5) OK, records in: 10927, records dropped: 527 output_compression: NoCompression
Jan 23 05:46:18 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:46:18.104539) EVENT_LOG_v1 {"time_micros": 1769165178104525, "job": 114, "event": "compaction_finished", "compaction_time_micros": 168907, "compaction_time_cpu_micros": 62545, "output_level": 6, "num_output_files": 1, "total_output_size": 13035867, "num_input_records": 10927, "num_output_records": 10400, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:46:18 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:46:18 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165178105798, "job": 114, "event": "table_file_deletion", "file_number": 184}
Jan 23 05:46:18 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:46:18 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165178109777, "job": 114, "event": "table_file_deletion", "file_number": 182}
Jan 23 05:46:18 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:46:17.933373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:46:18 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:46:18.110059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:46:18 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:46:18.110067) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:46:18 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:46:18.110070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:46:18 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:46:18.110072) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:46:18 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:46:18.110074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:46:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3595: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 17 KiB/s wr, 137 op/s
Jan 23 05:46:19 np0005593232 nova_compute[250269]: 2026-01-23 10:46:19.104 250273 DEBUG nova.network.neutron [req-077daff5-0463-4fa4-a812-64c4be87d9aa req-c5de6b71-6bdf-4da9-88b9-75c0ca13a1df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Updated VIF entry in instance network info cache for port e89e1397-f031-41c8-a81d-9efecff06096. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:46:19 np0005593232 nova_compute[250269]: 2026-01-23 10:46:19.105 250273 DEBUG nova.network.neutron [req-077daff5-0463-4fa4-a812-64c4be87d9aa req-c5de6b71-6bdf-4da9-88b9-75c0ca13a1df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Updating instance_info_cache with network_info: [{"id": "e89e1397-f031-41c8-a81d-9efecff06096", "address": "fa:16:3e:de:58:64", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89e1397-f0", "ovs_interfaceid": "e89e1397-f031-41c8-a81d-9efecff06096", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:46:19 np0005593232 nova_compute[250269]: 2026-01-23 10:46:19.130 250273 DEBUG oslo_concurrency.lockutils [req-077daff5-0463-4fa4-a812-64c4be87d9aa req-c5de6b71-6bdf-4da9-88b9-75c0ca13a1df 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-28049b58-a86b-4eeb-8faa-239ab046508b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:46:19 np0005593232 nova_compute[250269]: 2026-01-23 10:46:19.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:46:19 np0005593232 nova_compute[250269]: 2026-01-23 10:46:19.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:46:19 np0005593232 nova_compute[250269]: 2026-01-23 10:46:19.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:46:19 np0005593232 nova_compute[250269]: 2026-01-23 10:46:19.446 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-28049b58-a86b-4eeb-8faa-239ab046508b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:46:19 np0005593232 nova_compute[250269]: 2026-01-23 10:46:19.446 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-28049b58-a86b-4eeb-8faa-239ab046508b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:46:19 np0005593232 nova_compute[250269]: 2026-01-23 10:46:19.447 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:46:19 np0005593232 nova_compute[250269]: 2026-01-23 10:46:19.447 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 28049b58-a86b-4eeb-8faa-239ab046508b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:46:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:19.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:46:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:19.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:46:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:46:20 np0005593232 nova_compute[250269]: 2026-01-23 10:46:20.238 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:20 np0005593232 podman[393718]: 2026-01-23 10:46:20.407584315 +0000 UTC m=+0.066458160 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 05:46:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3596: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 16 KiB/s wr, 126 op/s
Jan 23 05:46:20 np0005593232 nova_compute[250269]: 2026-01-23 10:46:20.810 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Updating instance_info_cache with network_info: [{"id": "e89e1397-f031-41c8-a81d-9efecff06096", "address": "fa:16:3e:de:58:64", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89e1397-f0", "ovs_interfaceid": "e89e1397-f031-41c8-a81d-9efecff06096", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:46:20 np0005593232 nova_compute[250269]: 2026-01-23 10:46:20.827 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-28049b58-a86b-4eeb-8faa-239ab046508b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:46:20 np0005593232 nova_compute[250269]: 2026-01-23 10:46:20.828 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:46:21 np0005593232 nova_compute[250269]: 2026-01-23 10:46:21.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:46:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:21.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:46:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:21.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:46:21 np0005593232 nova_compute[250269]: 2026-01-23 10:46:21.662 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:22 np0005593232 nova_compute[250269]: 2026-01-23 10:46:22.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:46:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3597: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 114 op/s
Jan 23 05:46:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:23.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:46:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:23.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:46:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3598: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 114 op/s
Jan 23 05:46:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:46:25 np0005593232 nova_compute[250269]: 2026-01-23 10:46:25.241 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:46:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:25.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:46:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:25.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3599: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 15 KiB/s wr, 137 op/s
Jan 23 05:46:26 np0005593232 nova_compute[250269]: 2026-01-23 10:46:26.665 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:46:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:27.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:46:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:27.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:27 np0005593232 ovn_controller[151001]: 2026-01-23T10:46:27Z|00110|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.5 does not match offer 10.100.0.6
Jan 23 05:46:27 np0005593232 ovn_controller[151001]: 2026-01-23T10:46:27Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:de:58:64 10.100.0.6
Jan 23 05:46:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 05:46:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:46:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 05:46:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:46:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 23 05:46:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 05:46:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3600: 321 pgs: 321 active+clean; 338 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.5 MiB/s rd, 3.4 MiB/s wr, 195 op/s
Jan 23 05:46:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 23 05:46:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 05:46:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:46:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:46:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:46:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:46:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:46:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:46:28 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 737c7d93-e772-4ac7-9495-3292512a7d6a does not exist
Jan 23 05:46:28 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 3c3e493a-ab25-48ad-a016-e6922189589e does not exist
Jan 23 05:46:28 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7eeb76b1-de64-443f-a060-a3121b42f7c2 does not exist
Jan 23 05:46:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:46:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:46:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:46:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:46:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:46:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:46:29 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:46:29 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:46:29 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 05:46:29 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 05:46:29 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:46:29 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:46:29 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:46:29 np0005593232 nova_compute[250269]: 2026-01-23 10:46:29.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:46:29 np0005593232 nova_compute[250269]: 2026-01-23 10:46:29.325 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:46:29 np0005593232 nova_compute[250269]: 2026-01-23 10:46:29.325 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:46:29 np0005593232 nova_compute[250269]: 2026-01-23 10:46:29.326 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:46:29 np0005593232 nova_compute[250269]: 2026-01-23 10:46:29.326 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:46:29 np0005593232 nova_compute[250269]: 2026-01-23 10:46:29.326 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:46:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019586f0 =====
Jan 23 05:46:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:29.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019586f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:29 np0005593232 radosgw[94687]: beast: 0x7f0a019586f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:29.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:29 np0005593232 podman[394064]: 2026-01-23 10:46:29.566808765 +0000 UTC m=+0.086456749 container create 0e0149a9efdc265ff1585c3e66dda337bf0589525627cb041ec1aadbd4f65b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Jan 23 05:46:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:46:29 np0005593232 podman[394064]: 2026-01-23 10:46:29.54725518 +0000 UTC m=+0.066903184 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:46:29 np0005593232 systemd[1]: Started libpod-conmon-0e0149a9efdc265ff1585c3e66dda337bf0589525627cb041ec1aadbd4f65b4b.scope.
Jan 23 05:46:29 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:46:29 np0005593232 podman[394064]: 2026-01-23 10:46:29.674021702 +0000 UTC m=+0.193669706 container init 0e0149a9efdc265ff1585c3e66dda337bf0589525627cb041ec1aadbd4f65b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:46:29 np0005593232 podman[394064]: 2026-01-23 10:46:29.68240094 +0000 UTC m=+0.202048954 container start 0e0149a9efdc265ff1585c3e66dda337bf0589525627cb041ec1aadbd4f65b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_dewdney, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:46:29 np0005593232 podman[394064]: 2026-01-23 10:46:29.687091513 +0000 UTC m=+0.206739517 container attach 0e0149a9efdc265ff1585c3e66dda337bf0589525627cb041ec1aadbd4f65b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 05:46:29 np0005593232 ecstatic_dewdney[394095]: 167 167
Jan 23 05:46:29 np0005593232 systemd[1]: libpod-0e0149a9efdc265ff1585c3e66dda337bf0589525627cb041ec1aadbd4f65b4b.scope: Deactivated successfully.
Jan 23 05:46:29 np0005593232 podman[394100]: 2026-01-23 10:46:29.745954026 +0000 UTC m=+0.035257843 container died 0e0149a9efdc265ff1585c3e66dda337bf0589525627cb041ec1aadbd4f65b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 05:46:29 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5ea217d0c5943133a20422656dcfdb83d24db7d618fb3796923897b55a0156a8-merged.mount: Deactivated successfully.
Jan 23 05:46:29 np0005593232 podman[394100]: 2026-01-23 10:46:29.784179263 +0000 UTC m=+0.073483070 container remove 0e0149a9efdc265ff1585c3e66dda337bf0589525627cb041ec1aadbd4f65b4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_dewdney, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:46:29 np0005593232 systemd[1]: libpod-conmon-0e0149a9efdc265ff1585c3e66dda337bf0589525627cb041ec1aadbd4f65b4b.scope: Deactivated successfully.
Jan 23 05:46:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:46:29 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1478669569' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:46:29 np0005593232 nova_compute[250269]: 2026-01-23 10:46:29.835 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:46:29 np0005593232 nova_compute[250269]: 2026-01-23 10:46:29.934 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000ce as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:46:29 np0005593232 nova_compute[250269]: 2026-01-23 10:46:29.934 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000ce as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:46:30 np0005593232 podman[394125]: 2026-01-23 10:46:30.005758181 +0000 UTC m=+0.069644640 container create ecf3d4c92d9d7b1d4914b5deceaf4fc05894ff9145d945f6f25342ca98a2112d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meitner, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 05:46:30 np0005593232 systemd[1]: Started libpod-conmon-ecf3d4c92d9d7b1d4914b5deceaf4fc05894ff9145d945f6f25342ca98a2112d.scope.
Jan 23 05:46:30 np0005593232 podman[394125]: 2026-01-23 10:46:29.978009703 +0000 UTC m=+0.041896232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:46:30 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:46:30 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43820fb5b39cd2fdc48acfefed2404a8e6c716ce1b0f9ab275211504dbad9fcd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:46:30 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43820fb5b39cd2fdc48acfefed2404a8e6c716ce1b0f9ab275211504dbad9fcd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:46:30 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43820fb5b39cd2fdc48acfefed2404a8e6c716ce1b0f9ab275211504dbad9fcd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:46:30 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43820fb5b39cd2fdc48acfefed2404a8e6c716ce1b0f9ab275211504dbad9fcd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:46:30 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43820fb5b39cd2fdc48acfefed2404a8e6c716ce1b0f9ab275211504dbad9fcd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:46:30 np0005593232 podman[394125]: 2026-01-23 10:46:30.109375327 +0000 UTC m=+0.173261866 container init ecf3d4c92d9d7b1d4914b5deceaf4fc05894ff9145d945f6f25342ca98a2112d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meitner, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:46:30 np0005593232 podman[394125]: 2026-01-23 10:46:30.115324386 +0000 UTC m=+0.179210845 container start ecf3d4c92d9d7b1d4914b5deceaf4fc05894ff9145d945f6f25342ca98a2112d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meitner, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:46:30 np0005593232 podman[394125]: 2026-01-23 10:46:30.118969659 +0000 UTC m=+0.182856128 container attach ecf3d4c92d9d7b1d4914b5deceaf4fc05894ff9145d945f6f25342ca98a2112d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 05:46:30 np0005593232 nova_compute[250269]: 2026-01-23 10:46:30.134 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:46:30 np0005593232 nova_compute[250269]: 2026-01-23 10:46:30.136 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3912MB free_disk=20.947010040283203GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:46:30 np0005593232 nova_compute[250269]: 2026-01-23 10:46:30.136 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:46:30 np0005593232 nova_compute[250269]: 2026-01-23 10:46:30.136 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:46:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e404 do_prune osdmap full prune enabled
Jan 23 05:46:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e405 e405: 3 total, 3 up, 3 in
Jan 23 05:46:30 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e405: 3 total, 3 up, 3 in
Jan 23 05:46:30 np0005593232 nova_compute[250269]: 2026-01-23 10:46:30.243 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:30 np0005593232 nova_compute[250269]: 2026-01-23 10:46:30.261 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 28049b58-a86b-4eeb-8faa-239ab046508b actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:46:30 np0005593232 nova_compute[250269]: 2026-01-23 10:46:30.262 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:46:30 np0005593232 nova_compute[250269]: 2026-01-23 10:46:30.262 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:46:30 np0005593232 nova_compute[250269]: 2026-01-23 10:46:30.308 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:46:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3602: 321 pgs: 321 active+clean; 338 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 4.1 MiB/s wr, 121 op/s
Jan 23 05:46:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:46:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4088931173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:46:30 np0005593232 nova_compute[250269]: 2026-01-23 10:46:30.739 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:46:30 np0005593232 nova_compute[250269]: 2026-01-23 10:46:30.748 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:46:30 np0005593232 nova_compute[250269]: 2026-01-23 10:46:30.783 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:46:30 np0005593232 nova_compute[250269]: 2026-01-23 10:46:30.811 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:46:30 np0005593232 nova_compute[250269]: 2026-01-23 10:46:30.812 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:46:30 np0005593232 exciting_meitner[394141]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:46:30 np0005593232 exciting_meitner[394141]: --> relative data size: 1.0
Jan 23 05:46:30 np0005593232 exciting_meitner[394141]: --> All data devices are unavailable
Jan 23 05:46:30 np0005593232 systemd[1]: libpod-ecf3d4c92d9d7b1d4914b5deceaf4fc05894ff9145d945f6f25342ca98a2112d.scope: Deactivated successfully.
Jan 23 05:46:30 np0005593232 podman[394125]: 2026-01-23 10:46:30.965170533 +0000 UTC m=+1.029056992 container died ecf3d4c92d9d7b1d4914b5deceaf4fc05894ff9145d945f6f25342ca98a2112d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:46:31 np0005593232 ovn_controller[151001]: 2026-01-23T10:46:31Z|00112|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.5 does not match offer 10.100.0.6
Jan 23 05:46:31 np0005593232 ovn_controller[151001]: 2026-01-23T10:46:31Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:de:58:64 10.100.0.6
Jan 23 05:46:31 np0005593232 systemd[1]: var-lib-containers-storage-overlay-43820fb5b39cd2fdc48acfefed2404a8e6c716ce1b0f9ab275211504dbad9fcd-merged.mount: Deactivated successfully.
Jan 23 05:46:31 np0005593232 podman[394125]: 2026-01-23 10:46:31.23243086 +0000 UTC m=+1.296317359 container remove ecf3d4c92d9d7b1d4914b5deceaf4fc05894ff9145d945f6f25342ca98a2112d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:46:31 np0005593232 systemd[1]: libpod-conmon-ecf3d4c92d9d7b1d4914b5deceaf4fc05894ff9145d945f6f25342ca98a2112d.scope: Deactivated successfully.
Jan 23 05:46:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019586f0 =====
Jan 23 05:46:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019586f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:46:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:31.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:31 np0005593232 radosgw[94687]: beast: 0x7f0a019586f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:31.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:46:31 np0005593232 nova_compute[250269]: 2026-01-23 10:46:31.668 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:32 np0005593232 podman[394334]: 2026-01-23 10:46:32.097278903 +0000 UTC m=+0.048511830 container create 814d51f7e35c0e7bd890644ab59586f729e739b021a041e301fc2be9727e4499 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tharp, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:46:32 np0005593232 systemd[1]: Started libpod-conmon-814d51f7e35c0e7bd890644ab59586f729e739b021a041e301fc2be9727e4499.scope.
Jan 23 05:46:32 np0005593232 podman[394334]: 2026-01-23 10:46:32.077426969 +0000 UTC m=+0.028659936 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:46:32 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:46:32 np0005593232 podman[394334]: 2026-01-23 10:46:32.200224569 +0000 UTC m=+0.151457586 container init 814d51f7e35c0e7bd890644ab59586f729e739b021a041e301fc2be9727e4499 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tharp, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 05:46:32 np0005593232 podman[394334]: 2026-01-23 10:46:32.214555377 +0000 UTC m=+0.165788334 container start 814d51f7e35c0e7bd890644ab59586f729e739b021a041e301fc2be9727e4499 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:46:32 np0005593232 podman[394334]: 2026-01-23 10:46:32.219900919 +0000 UTC m=+0.171133846 container attach 814d51f7e35c0e7bd890644ab59586f729e739b021a041e301fc2be9727e4499 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tharp, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 05:46:32 np0005593232 magical_tharp[394351]: 167 167
Jan 23 05:46:32 np0005593232 systemd[1]: libpod-814d51f7e35c0e7bd890644ab59586f729e739b021a041e301fc2be9727e4499.scope: Deactivated successfully.
Jan 23 05:46:32 np0005593232 podman[394334]: 2026-01-23 10:46:32.221816173 +0000 UTC m=+0.173049130 container died 814d51f7e35c0e7bd890644ab59586f729e739b021a041e301fc2be9727e4499 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tharp, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:46:32 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1581c738293e0374188cb4fd048027897d5492dcfc8e3dc0f0c85ea78d03d1ee-merged.mount: Deactivated successfully.
Jan 23 05:46:32 np0005593232 podman[394334]: 2026-01-23 10:46:32.267329876 +0000 UTC m=+0.218562813 container remove 814d51f7e35c0e7bd890644ab59586f729e739b021a041e301fc2be9727e4499 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_tharp, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:46:32 np0005593232 systemd[1]: libpod-conmon-814d51f7e35c0e7bd890644ab59586f729e739b021a041e301fc2be9727e4499.scope: Deactivated successfully.
Jan 23 05:46:32 np0005593232 podman[394376]: 2026-01-23 10:46:32.475499903 +0000 UTC m=+0.046132912 container create 868324e957652abb84af65aac2021faf1b67b8ca6e4059cad6818774b5d19edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 23 05:46:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3603: 321 pgs: 321 active+clean; 298 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 6.1 MiB/s rd, 4.7 MiB/s wr, 202 op/s
Jan 23 05:46:32 np0005593232 systemd[1]: Started libpod-conmon-868324e957652abb84af65aac2021faf1b67b8ca6e4059cad6818774b5d19edb.scope.
Jan 23 05:46:32 np0005593232 ovn_controller[151001]: 2026-01-23T10:46:32Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:de:58:64 10.100.0.6
Jan 23 05:46:32 np0005593232 ovn_controller[151001]: 2026-01-23T10:46:32Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:de:58:64 10.100.0.6
Jan 23 05:46:32 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:46:32 np0005593232 podman[394376]: 2026-01-23 10:46:32.451054878 +0000 UTC m=+0.021687897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:46:32 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c58162751e0f0dbc239807946673697c0c6f0e2f559f9f6c8b9b138fb72aa2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:46:32 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c58162751e0f0dbc239807946673697c0c6f0e2f559f9f6c8b9b138fb72aa2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:46:32 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c58162751e0f0dbc239807946673697c0c6f0e2f559f9f6c8b9b138fb72aa2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:46:32 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c58162751e0f0dbc239807946673697c0c6f0e2f559f9f6c8b9b138fb72aa2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:46:32 np0005593232 podman[394376]: 2026-01-23 10:46:32.56578956 +0000 UTC m=+0.136422599 container init 868324e957652abb84af65aac2021faf1b67b8ca6e4059cad6818774b5d19edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 05:46:32 np0005593232 podman[394376]: 2026-01-23 10:46:32.572797199 +0000 UTC m=+0.143430198 container start 868324e957652abb84af65aac2021faf1b67b8ca6e4059cad6818774b5d19edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 05:46:32 np0005593232 podman[394376]: 2026-01-23 10:46:32.576348 +0000 UTC m=+0.146981039 container attach 868324e957652abb84af65aac2021faf1b67b8ca6e4059cad6818774b5d19edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 05:46:33 np0005593232 modest_booth[394392]: {
Jan 23 05:46:33 np0005593232 modest_booth[394392]:    "0": [
Jan 23 05:46:33 np0005593232 modest_booth[394392]:        {
Jan 23 05:46:33 np0005593232 modest_booth[394392]:            "devices": [
Jan 23 05:46:33 np0005593232 modest_booth[394392]:                "/dev/loop3"
Jan 23 05:46:33 np0005593232 modest_booth[394392]:            ],
Jan 23 05:46:33 np0005593232 modest_booth[394392]:            "lv_name": "ceph_lv0",
Jan 23 05:46:33 np0005593232 modest_booth[394392]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:46:33 np0005593232 modest_booth[394392]:            "lv_size": "7511998464",
Jan 23 05:46:33 np0005593232 modest_booth[394392]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:46:33 np0005593232 modest_booth[394392]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:46:33 np0005593232 modest_booth[394392]:            "name": "ceph_lv0",
Jan 23 05:46:33 np0005593232 modest_booth[394392]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:46:33 np0005593232 modest_booth[394392]:            "tags": {
Jan 23 05:46:33 np0005593232 modest_booth[394392]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:46:33 np0005593232 modest_booth[394392]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:46:33 np0005593232 modest_booth[394392]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:46:33 np0005593232 modest_booth[394392]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:46:33 np0005593232 modest_booth[394392]:                "ceph.cluster_name": "ceph",
Jan 23 05:46:33 np0005593232 modest_booth[394392]:                "ceph.crush_device_class": "",
Jan 23 05:46:33 np0005593232 modest_booth[394392]:                "ceph.encrypted": "0",
Jan 23 05:46:33 np0005593232 modest_booth[394392]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:46:33 np0005593232 modest_booth[394392]:                "ceph.osd_id": "0",
Jan 23 05:46:33 np0005593232 modest_booth[394392]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:46:33 np0005593232 modest_booth[394392]:                "ceph.type": "block",
Jan 23 05:46:33 np0005593232 modest_booth[394392]:                "ceph.vdo": "0"
Jan 23 05:46:33 np0005593232 modest_booth[394392]:            },
Jan 23 05:46:33 np0005593232 modest_booth[394392]:            "type": "block",
Jan 23 05:46:33 np0005593232 modest_booth[394392]:            "vg_name": "ceph_vg0"
Jan 23 05:46:33 np0005593232 modest_booth[394392]:        }
Jan 23 05:46:33 np0005593232 modest_booth[394392]:    ]
Jan 23 05:46:33 np0005593232 modest_booth[394392]: }
Jan 23 05:46:33 np0005593232 systemd[1]: libpod-868324e957652abb84af65aac2021faf1b67b8ca6e4059cad6818774b5d19edb.scope: Deactivated successfully.
Jan 23 05:46:33 np0005593232 podman[394376]: 2026-01-23 10:46:33.341580532 +0000 UTC m=+0.912213531 container died 868324e957652abb84af65aac2021faf1b67b8ca6e4059cad6818774b5d19edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 05:46:33 np0005593232 systemd[1]: var-lib-containers-storage-overlay-31c58162751e0f0dbc239807946673697c0c6f0e2f559f9f6c8b9b138fb72aa2-merged.mount: Deactivated successfully.
Jan 23 05:46:33 np0005593232 podman[394376]: 2026-01-23 10:46:33.41116228 +0000 UTC m=+0.981795279 container remove 868324e957652abb84af65aac2021faf1b67b8ca6e4059cad6818774b5d19edb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 05:46:33 np0005593232 systemd[1]: libpod-conmon-868324e957652abb84af65aac2021faf1b67b8ca6e4059cad6818774b5d19edb.scope: Deactivated successfully.
Jan 23 05:46:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:33.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:33.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:34 np0005593232 podman[394555]: 2026-01-23 10:46:34.100453333 +0000 UTC m=+0.042660144 container create 55660566660de40c3b432e0e07ce3264230e964f5d7b36b57bd317affd8195df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:46:34 np0005593232 systemd[1]: Started libpod-conmon-55660566660de40c3b432e0e07ce3264230e964f5d7b36b57bd317affd8195df.scope.
Jan 23 05:46:34 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:46:34 np0005593232 podman[394555]: 2026-01-23 10:46:34.174487197 +0000 UTC m=+0.116694028 container init 55660566660de40c3b432e0e07ce3264230e964f5d7b36b57bd317affd8195df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_jones, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:46:34 np0005593232 podman[394555]: 2026-01-23 10:46:34.084487039 +0000 UTC m=+0.026693870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:46:34 np0005593232 podman[394555]: 2026-01-23 10:46:34.181659561 +0000 UTC m=+0.123866382 container start 55660566660de40c3b432e0e07ce3264230e964f5d7b36b57bd317affd8195df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 05:46:34 np0005593232 podman[394555]: 2026-01-23 10:46:34.184855762 +0000 UTC m=+0.127062573 container attach 55660566660de40c3b432e0e07ce3264230e964f5d7b36b57bd317affd8195df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_jones, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 05:46:34 np0005593232 keen_jones[394572]: 167 167
Jan 23 05:46:34 np0005593232 systemd[1]: libpod-55660566660de40c3b432e0e07ce3264230e964f5d7b36b57bd317affd8195df.scope: Deactivated successfully.
Jan 23 05:46:34 np0005593232 podman[394555]: 2026-01-23 10:46:34.186805667 +0000 UTC m=+0.129012478 container died 55660566660de40c3b432e0e07ce3264230e964f5d7b36b57bd317affd8195df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_jones, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:46:34 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5ad801990f98eb6ea02b0942871d98442407a0494c8488e5236bfb5a653deef6-merged.mount: Deactivated successfully.
Jan 23 05:46:34 np0005593232 podman[394555]: 2026-01-23 10:46:34.225688733 +0000 UTC m=+0.167895544 container remove 55660566660de40c3b432e0e07ce3264230e964f5d7b36b57bd317affd8195df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_jones, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 05:46:34 np0005593232 systemd[1]: libpod-conmon-55660566660de40c3b432e0e07ce3264230e964f5d7b36b57bd317affd8195df.scope: Deactivated successfully.
Jan 23 05:46:34 np0005593232 podman[394595]: 2026-01-23 10:46:34.407589253 +0000 UTC m=+0.059410810 container create 164a40c7ffa77b41fa9674c4c4608de8f56fe4e8095cd1b2cd8adaf77f06a0b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ritchie, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:46:34 np0005593232 systemd[1]: Started libpod-conmon-164a40c7ffa77b41fa9674c4c4608de8f56fe4e8095cd1b2cd8adaf77f06a0b5.scope.
Jan 23 05:46:34 np0005593232 podman[394595]: 2026-01-23 10:46:34.383006064 +0000 UTC m=+0.034827701 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:46:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3604: 321 pgs: 321 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.7 MiB/s rd, 4.7 MiB/s wr, 266 op/s
Jan 23 05:46:34 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:46:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969f778e86ca83effa6dbd5ade57832801cd9528e7af9f311dda61ed2ff6bd2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:46:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969f778e86ca83effa6dbd5ade57832801cd9528e7af9f311dda61ed2ff6bd2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:46:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969f778e86ca83effa6dbd5ade57832801cd9528e7af9f311dda61ed2ff6bd2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:46:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969f778e86ca83effa6dbd5ade57832801cd9528e7af9f311dda61ed2ff6bd2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:46:34 np0005593232 podman[394595]: 2026-01-23 10:46:34.501826282 +0000 UTC m=+0.153647859 container init 164a40c7ffa77b41fa9674c4c4608de8f56fe4e8095cd1b2cd8adaf77f06a0b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ritchie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:46:34 np0005593232 podman[394595]: 2026-01-23 10:46:34.519041761 +0000 UTC m=+0.170863308 container start 164a40c7ffa77b41fa9674c4c4608de8f56fe4e8095cd1b2cd8adaf77f06a0b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Jan 23 05:46:34 np0005593232 podman[394595]: 2026-01-23 10:46:34.523106037 +0000 UTC m=+0.174927584 container attach 164a40c7ffa77b41fa9674c4c4608de8f56fe4e8095cd1b2cd8adaf77f06a0b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 05:46:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e405 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:46:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e405 do_prune osdmap full prune enabled
Jan 23 05:46:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e406 e406: 3 total, 3 up, 3 in
Jan 23 05:46:34 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e406: 3 total, 3 up, 3 in
Jan 23 05:46:35 np0005593232 nova_compute[250269]: 2026-01-23 10:46:35.245 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:35 np0005593232 vigilant_ritchie[394612]: {
Jan 23 05:46:35 np0005593232 vigilant_ritchie[394612]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:46:35 np0005593232 vigilant_ritchie[394612]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:46:35 np0005593232 vigilant_ritchie[394612]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:46:35 np0005593232 vigilant_ritchie[394612]:        "osd_id": 0,
Jan 23 05:46:35 np0005593232 vigilant_ritchie[394612]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:46:35 np0005593232 vigilant_ritchie[394612]:        "type": "bluestore"
Jan 23 05:46:35 np0005593232 vigilant_ritchie[394612]:    }
Jan 23 05:46:35 np0005593232 vigilant_ritchie[394612]: }
Jan 23 05:46:35 np0005593232 systemd[1]: libpod-164a40c7ffa77b41fa9674c4c4608de8f56fe4e8095cd1b2cd8adaf77f06a0b5.scope: Deactivated successfully.
Jan 23 05:46:35 np0005593232 podman[394595]: 2026-01-23 10:46:35.458229168 +0000 UTC m=+1.110050715 container died 164a40c7ffa77b41fa9674c4c4608de8f56fe4e8095cd1b2cd8adaf77f06a0b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ritchie, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 23 05:46:35 np0005593232 systemd[1]: var-lib-containers-storage-overlay-969f778e86ca83effa6dbd5ade57832801cd9528e7af9f311dda61ed2ff6bd2b-merged.mount: Deactivated successfully.
Jan 23 05:46:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:46:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:35.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:46:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:46:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:35.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:46:35 np0005593232 podman[394595]: 2026-01-23 10:46:35.52655656 +0000 UTC m=+1.178378117 container remove 164a40c7ffa77b41fa9674c4c4608de8f56fe4e8095cd1b2cd8adaf77f06a0b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ritchie, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:46:35 np0005593232 systemd[1]: libpod-conmon-164a40c7ffa77b41fa9674c4c4608de8f56fe4e8095cd1b2cd8adaf77f06a0b5.scope: Deactivated successfully.
Jan 23 05:46:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:46:35 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:46:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:46:35 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:46:35 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 17e20da5-3177-4df6-8566-392ef8616e14 does not exist
Jan 23 05:46:35 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 9068c277-a6c9-40e4-a885-41d0075ec114 does not exist
Jan 23 05:46:35 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 3ba02350-0766-463f-87e5-01916d68b972 does not exist
Jan 23 05:46:35 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:46:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3606: 321 pgs: 321 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.2 MiB/s rd, 743 KiB/s wr, 180 op/s
Jan 23 05:46:36 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:46:36 np0005593232 nova_compute[250269]: 2026-01-23 10:46:36.671 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:36 np0005593232 nova_compute[250269]: 2026-01-23 10:46:36.808 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:46:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:46:37
Jan 23 05:46:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:46:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:46:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'images', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'backups', 'cephfs.cephfs.data', '.mgr', 'vms', 'cephfs.cephfs.meta']
Jan 23 05:46:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:46:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:37.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:46:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:37.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:46:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:46:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:46:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:46:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:46:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:46:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:46:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3607: 321 pgs: 321 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 724 KiB/s wr, 175 op/s
Jan 23 05:46:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:46:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:46:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:46:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:46:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:46:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:46:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:46:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:46:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:46:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:46:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:39.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019586f0 =====
Jan 23 05:46:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019586f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:46:39 np0005593232 radosgw[94687]: beast: 0x7f0a019586f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:39.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:46:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:46:40 np0005593232 nova_compute[250269]: 2026-01-23 10:46:40.249 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3608: 321 pgs: 321 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 598 KiB/s wr, 144 op/s
Jan 23 05:46:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019586f0 =====
Jan 23 05:46:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:46:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019586f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:46:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:41.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:46:41 np0005593232 radosgw[94687]: beast: 0x7f0a019586f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:41.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:46:41 np0005593232 nova_compute[250269]: 2026-01-23 10:46:41.676 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3609: 321 pgs: 321 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 43 KiB/s wr, 72 op/s
Jan 23 05:46:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:42.665 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:46:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:42.666 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:46:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:46:42.667 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:46:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:43.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019586f0 =====
Jan 23 05:46:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019586f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:46:43 np0005593232 radosgw[94687]: beast: 0x7f0a019586f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:43.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:46:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3610: 321 pgs: 321 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 313 KiB/s rd, 18 KiB/s wr, 30 op/s
Jan 23 05:46:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:46:45 np0005593232 nova_compute[250269]: 2026-01-23 10:46:45.252 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019586f0 =====
Jan 23 05:46:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:46:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019586f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:45.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:46:45 np0005593232 radosgw[94687]: beast: 0x7f0a019586f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:45.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3611: 321 pgs: 321 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 363 KiB/s rd, 27 KiB/s wr, 37 op/s
Jan 23 05:46:46 np0005593232 nova_compute[250269]: 2026-01-23 10:46:46.678 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:46 np0005593232 podman[394728]: 2026-01-23 10:46:46.682573989 +0000 UTC m=+0.103766851 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:46:47 np0005593232 ovn_controller[151001]: 2026-01-23T10:46:47Z|00817|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 23 05:46:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019586f0 =====
Jan 23 05:46:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:46:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:47.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:46:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019586f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:46:47 np0005593232 radosgw[94687]: beast: 0x7f0a019586f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:47.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:46:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:46:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:46:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:46:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:46:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021808623557602828 of space, bias 1.0, pg target 0.6542587067280848 quantized to 32 (current 32)
Jan 23 05:46:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:46:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004344458112320863 of space, bias 1.0, pg target 1.303337433696259 quantized to 32 (current 32)
Jan 23 05:46:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:46:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 23 05:46:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:46:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 23 05:46:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:46:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 05:46:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:46:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:46:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:46:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 05:46:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:46:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 05:46:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:46:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:46:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:46:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 05:46:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3612: 321 pgs: 321 active+clean; 283 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 28 KiB/s wr, 46 op/s
Jan 23 05:46:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:49.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:49.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:46:50 np0005593232 nova_compute[250269]: 2026-01-23 10:46:50.254 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3613: 321 pgs: 321 active+clean; 283 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 25 KiB/s wr, 46 op/s
Jan 23 05:46:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e406 do_prune osdmap full prune enabled
Jan 23 05:46:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e407 e407: 3 total, 3 up, 3 in
Jan 23 05:46:50 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e407: 3 total, 3 up, 3 in
Jan 23 05:46:51 np0005593232 podman[394782]: 2026-01-23 10:46:51.427437733 +0000 UTC m=+0.078905964 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 23 05:46:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:46:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:51.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:46:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:46:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:51.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:46:51 np0005593232 nova_compute[250269]: 2026-01-23 10:46:51.680 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3615: 321 pgs: 321 active+clean; 283 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 849 KiB/s rd, 31 KiB/s wr, 62 op/s
Jan 23 05:46:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:53.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:53.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3616: 321 pgs: 321 active+clean; 283 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 586 KiB/s rd, 27 KiB/s wr, 45 op/s
Jan 23 05:46:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:46:55 np0005593232 nova_compute[250269]: 2026-01-23 10:46:55.257 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:55.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:55.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3617: 321 pgs: 321 active+clean; 283 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 476 KiB/s rd, 17 KiB/s wr, 41 op/s
Jan 23 05:46:56 np0005593232 nova_compute[250269]: 2026-01-23 10:46:56.736 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:46:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:57.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:57.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3618: 321 pgs: 321 active+clean; 283 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 270 KiB/s rd, 19 KiB/s wr, 31 op/s
Jan 23 05:46:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:46:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4141575075' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:46:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:46:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:46:59.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:46:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:46:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:46:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:46:59.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:46:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:47:00 np0005593232 nova_compute[250269]: 2026-01-23 10:47:00.285 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:47:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3944936089' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:47:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3619: 321 pgs: 321 active+clean; 283 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 270 KiB/s rd, 19 KiB/s wr, 31 op/s
Jan 23 05:47:00 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:47:00.916 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=82, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=81) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:47:00 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:47:00.918 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:47:00 np0005593232 nova_compute[250269]: 2026-01-23 10:47:00.918 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:01.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:01.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:01 np0005593232 nova_compute[250269]: 2026-01-23 10:47:01.738 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3620: 321 pgs: 321 active+clean; 283 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 227 KiB/s rd, 23 KiB/s wr, 27 op/s
Jan 23 05:47:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:47:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:03.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:47:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:47:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:03.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:47:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3621: 321 pgs: 321 active+clean; 283 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 13 KiB/s wr, 14 op/s
Jan 23 05:47:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:47:05 np0005593232 nova_compute[250269]: 2026-01-23 10:47:05.289 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:05.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:05.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:47:05.920 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '82'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:47:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3622: 321 pgs: 321 active+clean; 283 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.5 KiB/s rd, 26 KiB/s wr, 12 op/s
Jan 23 05:47:06 np0005593232 nova_compute[250269]: 2026-01-23 10:47:06.739 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:47:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:07.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:47:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:07.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:47:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:47:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:47:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:47:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:47:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:47:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3623: 321 pgs: 321 active+clean; 283 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 23 KiB/s wr, 18 op/s
Jan 23 05:47:09 np0005593232 nova_compute[250269]: 2026-01-23 10:47:09.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:47:09 np0005593232 nova_compute[250269]: 2026-01-23 10:47:09.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:47:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:09.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:09.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:47:10 np0005593232 nova_compute[250269]: 2026-01-23 10:47:10.328 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3624: 321 pgs: 321 active+clean; 283 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 20 KiB/s wr, 16 op/s
Jan 23 05:47:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:47:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:11.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:47:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:11.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:11 np0005593232 nova_compute[250269]: 2026-01-23 10:47:11.741 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3625: 321 pgs: 321 active+clean; 241 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 21 KiB/s wr, 77 op/s
Jan 23 05:47:13 np0005593232 nova_compute[250269]: 2026-01-23 10:47:13.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:47:13 np0005593232 nova_compute[250269]: 2026-01-23 10:47:13.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:47:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:13.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:13.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3626: 321 pgs: 321 active+clean; 202 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 105 op/s
Jan 23 05:47:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:47:15 np0005593232 nova_compute[250269]: 2026-01-23 10:47:15.330 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 05:47:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:15.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 05:47:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:15.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:16 np0005593232 nova_compute[250269]: 2026-01-23 10:47:16.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:47:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3627: 321 pgs: 321 active+clean; 202 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 105 op/s
Jan 23 05:47:16 np0005593232 nova_compute[250269]: 2026-01-23 10:47:16.742 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:17 np0005593232 podman[394864]: 2026-01-23 10:47:17.448884401 +0000 UTC m=+0.110374689 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:47:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:17.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:17.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:18 np0005593232 nova_compute[250269]: 2026-01-23 10:47:18.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:47:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3628: 321 pgs: 321 active+clean; 202 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.4 KiB/s wr, 105 op/s
Jan 23 05:47:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:19.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:19.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:47:20 np0005593232 nova_compute[250269]: 2026-01-23 10:47:20.335 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3629: 321 pgs: 321 active+clean; 202 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 4.2 KiB/s wr, 90 op/s
Jan 23 05:47:21 np0005593232 nova_compute[250269]: 2026-01-23 10:47:21.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:47:21 np0005593232 nova_compute[250269]: 2026-01-23 10:47:21.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:47:21 np0005593232 nova_compute[250269]: 2026-01-23 10:47:21.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:47:21 np0005593232 nova_compute[250269]: 2026-01-23 10:47:21.492 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-28049b58-a86b-4eeb-8faa-239ab046508b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:47:21 np0005593232 nova_compute[250269]: 2026-01-23 10:47:21.493 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-28049b58-a86b-4eeb-8faa-239ab046508b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:47:21 np0005593232 nova_compute[250269]: 2026-01-23 10:47:21.493 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 05:47:21 np0005593232 nova_compute[250269]: 2026-01-23 10:47:21.493 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 28049b58-a86b-4eeb-8faa-239ab046508b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:47:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:47:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:21.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:47:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:21.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:21 np0005593232 nova_compute[250269]: 2026-01-23 10:47:21.788 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:22 np0005593232 podman[394893]: 2026-01-23 10:47:22.403502454 +0000 UTC m=+0.057714980 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:47:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3630: 321 pgs: 321 active+clean; 233 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 664 KiB/s wr, 137 op/s
Jan 23 05:47:22 np0005593232 nova_compute[250269]: 2026-01-23 10:47:22.946 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Updating instance_info_cache with network_info: [{"id": "e89e1397-f031-41c8-a81d-9efecff06096", "address": "fa:16:3e:de:58:64", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89e1397-f0", "ovs_interfaceid": "e89e1397-f031-41c8-a81d-9efecff06096", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:47:22 np0005593232 nova_compute[250269]: 2026-01-23 10:47:22.970 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-28049b58-a86b-4eeb-8faa-239ab046508b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:47:22 np0005593232 nova_compute[250269]: 2026-01-23 10:47:22.970 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 05:47:22 np0005593232 nova_compute[250269]: 2026-01-23 10:47:22.971 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:47:23 np0005593232 nova_compute[250269]: 2026-01-23 10:47:23.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:47:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:23.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:23.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3631: 321 pgs: 321 active+clean; 262 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.3 MiB/s wr, 108 op/s
Jan 23 05:47:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:47:25 np0005593232 nova_compute[250269]: 2026-01-23 10:47:25.381 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:25.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:25.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3632: 321 pgs: 321 active+clean; 266 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 2.3 MiB/s wr, 83 op/s
Jan 23 05:47:26 np0005593232 nova_compute[250269]: 2026-01-23 10:47:26.790 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:47:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:27.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:47:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:27.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3633: 321 pgs: 321 active+clean; 267 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.4 MiB/s wr, 144 op/s
Jan 23 05:47:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:47:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:29.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:47:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:29.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:47:30 np0005593232 nova_compute[250269]: 2026-01-23 10:47:30.383 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3634: 321 pgs: 321 active+clean; 267 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 MiB/s rd, 2.4 MiB/s wr, 143 op/s
Jan 23 05:47:31 np0005593232 nova_compute[250269]: 2026-01-23 10:47:31.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:47:31 np0005593232 nova_compute[250269]: 2026-01-23 10:47:31.323 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:47:31 np0005593232 nova_compute[250269]: 2026-01-23 10:47:31.323 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:47:31 np0005593232 nova_compute[250269]: 2026-01-23 10:47:31.324 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:47:31 np0005593232 nova_compute[250269]: 2026-01-23 10:47:31.324 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:47:31 np0005593232 nova_compute[250269]: 2026-01-23 10:47:31.324 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:47:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:47:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:31.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:47:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:31.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:31 np0005593232 nova_compute[250269]: 2026-01-23 10:47:31.793 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:47:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3886666306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:47:31 np0005593232 nova_compute[250269]: 2026-01-23 10:47:31.819 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:47:31 np0005593232 nova_compute[250269]: 2026-01-23 10:47:31.879 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000ce as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:47:31 np0005593232 nova_compute[250269]: 2026-01-23 10:47:31.880 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000ce as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 05:47:32 np0005593232 nova_compute[250269]: 2026-01-23 10:47:32.023 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:47:32 np0005593232 nova_compute[250269]: 2026-01-23 10:47:32.024 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3942MB free_disk=20.96706771850586GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:47:32 np0005593232 nova_compute[250269]: 2026-01-23 10:47:32.025 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:47:32 np0005593232 nova_compute[250269]: 2026-01-23 10:47:32.025 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:47:32 np0005593232 nova_compute[250269]: 2026-01-23 10:47:32.174 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 28049b58-a86b-4eeb-8faa-239ab046508b actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:47:32 np0005593232 nova_compute[250269]: 2026-01-23 10:47:32.174 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:47:32 np0005593232 nova_compute[250269]: 2026-01-23 10:47:32.175 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:47:32 np0005593232 nova_compute[250269]: 2026-01-23 10:47:32.228 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing inventories for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 23 05:47:32 np0005593232 nova_compute[250269]: 2026-01-23 10:47:32.282 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating ProviderTree inventory for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 23 05:47:32 np0005593232 nova_compute[250269]: 2026-01-23 10:47:32.282 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 05:47:32 np0005593232 nova_compute[250269]: 2026-01-23 10:47:32.311 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing aggregate associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 23 05:47:32 np0005593232 nova_compute[250269]: 2026-01-23 10:47:32.340 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing trait associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 23 05:47:32 np0005593232 nova_compute[250269]: 2026-01-23 10:47:32.376 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:47:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3635: 321 pgs: 321 active+clean; 267 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 2.4 MiB/s wr, 155 op/s
Jan 23 05:47:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:47:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3997848839' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:47:32 np0005593232 nova_compute[250269]: 2026-01-23 10:47:32.806 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:47:32 np0005593232 nova_compute[250269]: 2026-01-23 10:47:32.812 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:47:32 np0005593232 nova_compute[250269]: 2026-01-23 10:47:32.830 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:47:32 np0005593232 nova_compute[250269]: 2026-01-23 10:47:32.832 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:47:32 np0005593232 nova_compute[250269]: 2026-01-23 10:47:32.832 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:47:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:33.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:33.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3636: 321 pgs: 321 active+clean; 267 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 1.7 MiB/s wr, 109 op/s
Jan 23 05:47:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:47:35 np0005593232 nova_compute[250269]: 2026-01-23 10:47:35.433 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:35.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:35.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3637: 321 pgs: 321 active+clean; 267 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 98 KiB/s wr, 77 op/s
Jan 23 05:47:36 np0005593232 nova_compute[250269]: 2026-01-23 10:47:36.796 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 23 05:47:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 05:47:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:47:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:47:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:47:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:47:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:47:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 05:47:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:47:37
Jan 23 05:47:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:47:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:47:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'backups', 'default.rgw.log', '.mgr', 'vms', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes']
Jan 23 05:47:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:47:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:47:37 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 9445d261-2147-4998-be10-5eb1a53db09c does not exist
Jan 23 05:47:37 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d5e3c779-ec2b-47bd-9d42-9f201bc52364 does not exist
Jan 23 05:47:37 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 2bbddc24-6f0d-48e3-9f4f-e780d47243a4 does not exist
Jan 23 05:47:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:47:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:47:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:47:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:47:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:47:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:47:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:47:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:37.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:47:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:47:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:37.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:47:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:47:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:47:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:47:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:47:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:47:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:47:38 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:47:38 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:47:38 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:47:38 np0005593232 podman[395287]: 2026-01-23 10:47:38.322855232 +0000 UTC m=+0.046763130 container create 3b40850b8de1a1fbd34c20d78f1c2a46297b040ba06bb07939c722a96a5d276a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_villani, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Jan 23 05:47:38 np0005593232 systemd[1]: Started libpod-conmon-3b40850b8de1a1fbd34c20d78f1c2a46297b040ba06bb07939c722a96a5d276a.scope.
Jan 23 05:47:38 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:47:38 np0005593232 podman[395287]: 2026-01-23 10:47:38.297051128 +0000 UTC m=+0.020959036 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:47:38 np0005593232 podman[395287]: 2026-01-23 10:47:38.397885105 +0000 UTC m=+0.121792983 container init 3b40850b8de1a1fbd34c20d78f1c2a46297b040ba06bb07939c722a96a5d276a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_villani, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:47:38 np0005593232 podman[395287]: 2026-01-23 10:47:38.404733629 +0000 UTC m=+0.128641487 container start 3b40850b8de1a1fbd34c20d78f1c2a46297b040ba06bb07939c722a96a5d276a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_villani, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:47:38 np0005593232 awesome_villani[395303]: 167 167
Jan 23 05:47:38 np0005593232 systemd[1]: libpod-3b40850b8de1a1fbd34c20d78f1c2a46297b040ba06bb07939c722a96a5d276a.scope: Deactivated successfully.
Jan 23 05:47:38 np0005593232 podman[395287]: 2026-01-23 10:47:38.410199795 +0000 UTC m=+0.134107663 container attach 3b40850b8de1a1fbd34c20d78f1c2a46297b040ba06bb07939c722a96a5d276a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_villani, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:47:38 np0005593232 podman[395287]: 2026-01-23 10:47:38.411045909 +0000 UTC m=+0.134953767 container died 3b40850b8de1a1fbd34c20d78f1c2a46297b040ba06bb07939c722a96a5d276a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_villani, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:47:38 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4f854fb58dbf3c228fd6495448c27004df5296d8aff6bfc523b80ba4dfbb045b-merged.mount: Deactivated successfully.
Jan 23 05:47:38 np0005593232 podman[395287]: 2026-01-23 10:47:38.472457944 +0000 UTC m=+0.196365802 container remove 3b40850b8de1a1fbd34c20d78f1c2a46297b040ba06bb07939c722a96a5d276a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:47:38 np0005593232 systemd[1]: libpod-conmon-3b40850b8de1a1fbd34c20d78f1c2a46297b040ba06bb07939c722a96a5d276a.scope: Deactivated successfully.
Jan 23 05:47:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3638: 321 pgs: 321 active+clean; 267 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 19 KiB/s wr, 74 op/s
Jan 23 05:47:38 np0005593232 podman[395329]: 2026-01-23 10:47:38.678929313 +0000 UTC m=+0.071875274 container create 5655f3f6e318fec05de3656ca0f4cfd73639eb84c641aee2b344f1a78787bc50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:47:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:47:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:47:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:47:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:47:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:47:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:47:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:47:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:47:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:47:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:47:38 np0005593232 systemd[1]: Started libpod-conmon-5655f3f6e318fec05de3656ca0f4cfd73639eb84c641aee2b344f1a78787bc50.scope.
Jan 23 05:47:38 np0005593232 podman[395329]: 2026-01-23 10:47:38.641940202 +0000 UTC m=+0.034886243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:47:38 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:47:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252355a2c2fc89e02818610e9a1f43bf786c4aa83ba08018ba8fc47e1d945185/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:47:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252355a2c2fc89e02818610e9a1f43bf786c4aa83ba08018ba8fc47e1d945185/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:47:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252355a2c2fc89e02818610e9a1f43bf786c4aa83ba08018ba8fc47e1d945185/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:47:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252355a2c2fc89e02818610e9a1f43bf786c4aa83ba08018ba8fc47e1d945185/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:47:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/252355a2c2fc89e02818610e9a1f43bf786c4aa83ba08018ba8fc47e1d945185/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:47:38 np0005593232 podman[395329]: 2026-01-23 10:47:38.76639475 +0000 UTC m=+0.159340731 container init 5655f3f6e318fec05de3656ca0f4cfd73639eb84c641aee2b344f1a78787bc50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 05:47:38 np0005593232 podman[395329]: 2026-01-23 10:47:38.775013414 +0000 UTC m=+0.167959366 container start 5655f3f6e318fec05de3656ca0f4cfd73639eb84c641aee2b344f1a78787bc50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:47:38 np0005593232 podman[395329]: 2026-01-23 10:47:38.778647538 +0000 UTC m=+0.171593519 container attach 5655f3f6e318fec05de3656ca0f4cfd73639eb84c641aee2b344f1a78787bc50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Jan 23 05:47:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:47:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:39.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:47:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:47:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:47:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:39.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:47:39 np0005593232 condescending_greider[395345]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:47:39 np0005593232 condescending_greider[395345]: --> relative data size: 1.0
Jan 23 05:47:39 np0005593232 condescending_greider[395345]: --> All data devices are unavailable
Jan 23 05:47:39 np0005593232 systemd[1]: libpod-5655f3f6e318fec05de3656ca0f4cfd73639eb84c641aee2b344f1a78787bc50.scope: Deactivated successfully.
Jan 23 05:47:39 np0005593232 podman[395329]: 2026-01-23 10:47:39.653320921 +0000 UTC m=+1.046266892 container died 5655f3f6e318fec05de3656ca0f4cfd73639eb84c641aee2b344f1a78787bc50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 05:47:39 np0005593232 systemd[1]: var-lib-containers-storage-overlay-252355a2c2fc89e02818610e9a1f43bf786c4aa83ba08018ba8fc47e1d945185-merged.mount: Deactivated successfully.
Jan 23 05:47:39 np0005593232 podman[395329]: 2026-01-23 10:47:39.70467845 +0000 UTC m=+1.097624401 container remove 5655f3f6e318fec05de3656ca0f4cfd73639eb84c641aee2b344f1a78787bc50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_greider, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 05:47:39 np0005593232 systemd[1]: libpod-conmon-5655f3f6e318fec05de3656ca0f4cfd73639eb84c641aee2b344f1a78787bc50.scope: Deactivated successfully.
Jan 23 05:47:40 np0005593232 podman[395514]: 2026-01-23 10:47:40.319470985 +0000 UTC m=+0.052260057 container create 182b37651b16db6b256392ff81a589c2322333352de78757b95ae98c929eef9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 05:47:40 np0005593232 systemd[1]: Started libpod-conmon-182b37651b16db6b256392ff81a589c2322333352de78757b95ae98c929eef9e.scope.
Jan 23 05:47:40 np0005593232 podman[395514]: 2026-01-23 10:47:40.29149453 +0000 UTC m=+0.024283582 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:47:40 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:47:40 np0005593232 podman[395514]: 2026-01-23 10:47:40.41253405 +0000 UTC m=+0.145323112 container init 182b37651b16db6b256392ff81a589c2322333352de78757b95ae98c929eef9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_varahamihira, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:47:40 np0005593232 podman[395514]: 2026-01-23 10:47:40.421290489 +0000 UTC m=+0.154079541 container start 182b37651b16db6b256392ff81a589c2322333352de78757b95ae98c929eef9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_varahamihira, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 05:47:40 np0005593232 podman[395514]: 2026-01-23 10:47:40.424681546 +0000 UTC m=+0.157470578 container attach 182b37651b16db6b256392ff81a589c2322333352de78757b95ae98c929eef9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_varahamihira, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:47:40 np0005593232 upbeat_varahamihira[395530]: 167 167
Jan 23 05:47:40 np0005593232 systemd[1]: libpod-182b37651b16db6b256392ff81a589c2322333352de78757b95ae98c929eef9e.scope: Deactivated successfully.
Jan 23 05:47:40 np0005593232 podman[395514]: 2026-01-23 10:47:40.425802237 +0000 UTC m=+0.158591279 container died 182b37651b16db6b256392ff81a589c2322333352de78757b95ae98c929eef9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 05:47:40 np0005593232 nova_compute[250269]: 2026-01-23 10:47:40.435 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:40 np0005593232 systemd[1]: var-lib-containers-storage-overlay-22d81de2430451d83ee1628d7b52fe08553d1ccc12f640fffd60969a2cccbf68-merged.mount: Deactivated successfully.
Jan 23 05:47:40 np0005593232 podman[395514]: 2026-01-23 10:47:40.466044061 +0000 UTC m=+0.198833123 container remove 182b37651b16db6b256392ff81a589c2322333352de78757b95ae98c929eef9e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_varahamihira, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 05:47:40 np0005593232 systemd[1]: libpod-conmon-182b37651b16db6b256392ff81a589c2322333352de78757b95ae98c929eef9e.scope: Deactivated successfully.
Jan 23 05:47:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3639: 321 pgs: 321 active+clean; 267 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 337 KiB/s rd, 3.7 KiB/s wr, 13 op/s
Jan 23 05:47:40 np0005593232 podman[395554]: 2026-01-23 10:47:40.693700763 +0000 UTC m=+0.071385631 container create 9214a62b049e6f0e83b87e284464575ea2e97b2613f65e4372ab746e05512b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_golick, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 23 05:47:40 np0005593232 systemd[1]: Started libpod-conmon-9214a62b049e6f0e83b87e284464575ea2e97b2613f65e4372ab746e05512b25.scope.
Jan 23 05:47:40 np0005593232 podman[395554]: 2026-01-23 10:47:40.664233485 +0000 UTC m=+0.041918403 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:47:40 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:47:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61909bb9fd695fd56e6e933bd914373ae3077ccd669f4184ce1476253b4f2b33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:47:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61909bb9fd695fd56e6e933bd914373ae3077ccd669f4184ce1476253b4f2b33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:47:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61909bb9fd695fd56e6e933bd914373ae3077ccd669f4184ce1476253b4f2b33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:47:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61909bb9fd695fd56e6e933bd914373ae3077ccd669f4184ce1476253b4f2b33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:47:40 np0005593232 podman[395554]: 2026-01-23 10:47:40.787573491 +0000 UTC m=+0.165258409 container init 9214a62b049e6f0e83b87e284464575ea2e97b2613f65e4372ab746e05512b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_golick, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:47:40 np0005593232 podman[395554]: 2026-01-23 10:47:40.799673975 +0000 UTC m=+0.177358803 container start 9214a62b049e6f0e83b87e284464575ea2e97b2613f65e4372ab746e05512b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:47:40 np0005593232 podman[395554]: 2026-01-23 10:47:40.802951468 +0000 UTC m=+0.180636416 container attach 9214a62b049e6f0e83b87e284464575ea2e97b2613f65e4372ab746e05512b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 05:47:41 np0005593232 nervous_golick[395571]: {
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:    "0": [
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:        {
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:            "devices": [
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:                "/dev/loop3"
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:            ],
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:            "lv_name": "ceph_lv0",
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:            "lv_size": "7511998464",
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:            "name": "ceph_lv0",
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:            "tags": {
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:                "ceph.cluster_name": "ceph",
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:                "ceph.crush_device_class": "",
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:                "ceph.encrypted": "0",
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:                "ceph.osd_id": "0",
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:                "ceph.type": "block",
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:                "ceph.vdo": "0"
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:            },
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:            "type": "block",
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:            "vg_name": "ceph_vg0"
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:        }
Jan 23 05:47:41 np0005593232 nervous_golick[395571]:    ]
Jan 23 05:47:41 np0005593232 nervous_golick[395571]: }
Jan 23 05:47:41 np0005593232 systemd[1]: libpod-9214a62b049e6f0e83b87e284464575ea2e97b2613f65e4372ab746e05512b25.scope: Deactivated successfully.
Jan 23 05:47:41 np0005593232 podman[395554]: 2026-01-23 10:47:41.59915049 +0000 UTC m=+0.976835438 container died 9214a62b049e6f0e83b87e284464575ea2e97b2613f65e4372ab746e05512b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_golick, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 05:47:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:47:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:41.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:47:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:47:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:41.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:47:41 np0005593232 systemd[1]: var-lib-containers-storage-overlay-61909bb9fd695fd56e6e933bd914373ae3077ccd669f4184ce1476253b4f2b33-merged.mount: Deactivated successfully.
Jan 23 05:47:41 np0005593232 podman[395554]: 2026-01-23 10:47:41.683325443 +0000 UTC m=+1.061010281 container remove 9214a62b049e6f0e83b87e284464575ea2e97b2613f65e4372ab746e05512b25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:47:41 np0005593232 systemd[1]: libpod-conmon-9214a62b049e6f0e83b87e284464575ea2e97b2613f65e4372ab746e05512b25.scope: Deactivated successfully.
Jan 23 05:47:41 np0005593232 nova_compute[250269]: 2026-01-23 10:47:41.798 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:42 np0005593232 podman[395735]: 2026-01-23 10:47:42.290280456 +0000 UTC m=+0.039502514 container create 5ac9aacb5e6f93616204a2dcdc6a644b486becbd751ab31c4894e4eb2f0f351b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_faraday, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:47:42 np0005593232 systemd[1]: Started libpod-conmon-5ac9aacb5e6f93616204a2dcdc6a644b486becbd751ab31c4894e4eb2f0f351b.scope.
Jan 23 05:47:42 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:47:42 np0005593232 podman[395735]: 2026-01-23 10:47:42.358429973 +0000 UTC m=+0.107652021 container init 5ac9aacb5e6f93616204a2dcdc6a644b486becbd751ab31c4894e4eb2f0f351b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 05:47:42 np0005593232 podman[395735]: 2026-01-23 10:47:42.365824343 +0000 UTC m=+0.115046401 container start 5ac9aacb5e6f93616204a2dcdc6a644b486becbd751ab31c4894e4eb2f0f351b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_faraday, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:47:42 np0005593232 podman[395735]: 2026-01-23 10:47:42.271417849 +0000 UTC m=+0.020639947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:47:42 np0005593232 podman[395735]: 2026-01-23 10:47:42.368755296 +0000 UTC m=+0.117977364 container attach 5ac9aacb5e6f93616204a2dcdc6a644b486becbd751ab31c4894e4eb2f0f351b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:47:42 np0005593232 trusting_faraday[395752]: 167 167
Jan 23 05:47:42 np0005593232 systemd[1]: libpod-5ac9aacb5e6f93616204a2dcdc6a644b486becbd751ab31c4894e4eb2f0f351b.scope: Deactivated successfully.
Jan 23 05:47:42 np0005593232 podman[395735]: 2026-01-23 10:47:42.374257263 +0000 UTC m=+0.123479321 container died 5ac9aacb5e6f93616204a2dcdc6a644b486becbd751ab31c4894e4eb2f0f351b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 23 05:47:42 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e3beca3ec98c86de04fb0129faa393f2900f1962d56c261e02425c802df58afa-merged.mount: Deactivated successfully.
Jan 23 05:47:42 np0005593232 podman[395735]: 2026-01-23 10:47:42.410514313 +0000 UTC m=+0.159736361 container remove 5ac9aacb5e6f93616204a2dcdc6a644b486becbd751ab31c4894e4eb2f0f351b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_faraday, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 23 05:47:42 np0005593232 systemd[1]: libpod-conmon-5ac9aacb5e6f93616204a2dcdc6a644b486becbd751ab31c4894e4eb2f0f351b.scope: Deactivated successfully.
Jan 23 05:47:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3640: 321 pgs: 321 active+clean; 274 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 628 KiB/s rd, 754 KiB/s wr, 63 op/s
Jan 23 05:47:42 np0005593232 podman[395775]: 2026-01-23 10:47:42.559425676 +0000 UTC m=+0.036148038 container create c1786d694a1adaae704b65fadba95dd6f27c50a79b364949ce8c9d7f275bc845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:47:42 np0005593232 systemd[1]: Started libpod-conmon-c1786d694a1adaae704b65fadba95dd6f27c50a79b364949ce8c9d7f275bc845.scope.
Jan 23 05:47:42 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:47:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902700341acd48fe75b7da564a0a58876c3e0da3116fdabcbe794d002ca627db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:47:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902700341acd48fe75b7da564a0a58876c3e0da3116fdabcbe794d002ca627db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:47:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902700341acd48fe75b7da564a0a58876c3e0da3116fdabcbe794d002ca627db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:47:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902700341acd48fe75b7da564a0a58876c3e0da3116fdabcbe794d002ca627db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:47:42 np0005593232 podman[395775]: 2026-01-23 10:47:42.54443933 +0000 UTC m=+0.021161702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:47:42 np0005593232 podman[395775]: 2026-01-23 10:47:42.646837321 +0000 UTC m=+0.123559683 container init c1786d694a1adaae704b65fadba95dd6f27c50a79b364949ce8c9d7f275bc845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_galileo, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 23 05:47:42 np0005593232 podman[395775]: 2026-01-23 10:47:42.653028767 +0000 UTC m=+0.129751119 container start c1786d694a1adaae704b65fadba95dd6f27c50a79b364949ce8c9d7f275bc845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 05:47:42 np0005593232 podman[395775]: 2026-01-23 10:47:42.655504587 +0000 UTC m=+0.132226969 container attach c1786d694a1adaae704b65fadba95dd6f27c50a79b364949ce8c9d7f275bc845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 23 05:47:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:47:42.666 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:47:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:47:42.669 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:47:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:47:42.670 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:47:43 np0005593232 festive_galileo[395791]: {
Jan 23 05:47:43 np0005593232 festive_galileo[395791]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:47:43 np0005593232 festive_galileo[395791]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:47:43 np0005593232 festive_galileo[395791]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:47:43 np0005593232 festive_galileo[395791]:        "osd_id": 0,
Jan 23 05:47:43 np0005593232 festive_galileo[395791]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:47:43 np0005593232 festive_galileo[395791]:        "type": "bluestore"
Jan 23 05:47:43 np0005593232 festive_galileo[395791]:    }
Jan 23 05:47:43 np0005593232 festive_galileo[395791]: }
Jan 23 05:47:43 np0005593232 systemd[1]: libpod-c1786d694a1adaae704b65fadba95dd6f27c50a79b364949ce8c9d7f275bc845.scope: Deactivated successfully.
Jan 23 05:47:43 np0005593232 podman[395775]: 2026-01-23 10:47:43.492924441 +0000 UTC m=+0.969646813 container died c1786d694a1adaae704b65fadba95dd6f27c50a79b364949ce8c9d7f275bc845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_galileo, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:47:43 np0005593232 systemd[1]: var-lib-containers-storage-overlay-902700341acd48fe75b7da564a0a58876c3e0da3116fdabcbe794d002ca627db-merged.mount: Deactivated successfully.
Jan 23 05:47:43 np0005593232 podman[395775]: 2026-01-23 10:47:43.538394013 +0000 UTC m=+1.015116375 container remove c1786d694a1adaae704b65fadba95dd6f27c50a79b364949ce8c9d7f275bc845 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_galileo, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:47:43 np0005593232 systemd[1]: libpod-conmon-c1786d694a1adaae704b65fadba95dd6f27c50a79b364949ce8c9d7f275bc845.scope: Deactivated successfully.
Jan 23 05:47:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:47:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:47:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:43.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:47:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:43.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:47:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:47:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:47:43 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev e0ad174b-562f-40d6-8a2a-091f546aae96 does not exist
Jan 23 05:47:43 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 4cdb52c4-b146-4b4d-9b44-76ecdc3138c4 does not exist
Jan 23 05:47:43 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 08105464-2e86-48be-a6b4-71b8d0022dca does not exist
Jan 23 05:47:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3641: 321 pgs: 321 active+clean; 303 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 362 KiB/s rd, 2.2 MiB/s wr, 64 op/s
Jan 23 05:47:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:47:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:47:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/920840808' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:47:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:47:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/920840808' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:47:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:47:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:47:45 np0005593232 nova_compute[250269]: 2026-01-23 10:47:45.440 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:47:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:45.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:47:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:47:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:45.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:47:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3642: 321 pgs: 321 active+clean; 303 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 423 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Jan 23 05:47:46 np0005593232 nova_compute[250269]: 2026-01-23 10:47:46.801 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:47:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:47.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:47:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:47.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:47:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:47:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:47:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:47:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002178136050623488 of space, bias 1.0, pg target 0.6534408151870463 quantized to 32 (current 32)
Jan 23 05:47:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:47:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0046774308463614364 of space, bias 1.0, pg target 1.403229253908431 quantized to 32 (current 32)
Jan 23 05:47:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:47:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 23 05:47:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:47:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 23 05:47:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:47:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 05:47:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:47:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:47:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:47:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 05:47:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:47:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 05:47:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:47:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:47:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:47:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 05:47:48 np0005593232 podman[395929]: 2026-01-23 10:47:48.49375878 +0000 UTC m=+0.137514620 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 23 05:47:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3643: 321 pgs: 321 active+clean; 303 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 577 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Jan 23 05:47:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:47:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:49.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:47:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:47:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:49.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:50 np0005593232 nova_compute[250269]: 2026-01-23 10:47:50.445 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3644: 321 pgs: 321 active+clean; 303 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 575 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Jan 23 05:47:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:51.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:51.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:51 np0005593232 nova_compute[250269]: 2026-01-23 10:47:51.803 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3645: 321 pgs: 321 active+clean; 303 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 577 KiB/s rd, 2.2 MiB/s wr, 74 op/s
Jan 23 05:47:53 np0005593232 podman[395958]: 2026-01-23 10:47:53.42107102 +0000 UTC m=+0.071327999 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 23 05:47:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:53.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:53.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3646: 321 pgs: 321 active+clean; 303 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 293 KiB/s rd, 1.5 MiB/s wr, 33 op/s
Jan 23 05:47:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:47:55 np0005593232 nova_compute[250269]: 2026-01-23 10:47:55.447 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:55.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:47:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:55.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:47:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3647: 321 pgs: 321 active+clean; 303 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 224 KiB/s rd, 13 KiB/s wr, 19 op/s
Jan 23 05:47:56 np0005593232 nova_compute[250269]: 2026-01-23 10:47:56.844 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:47:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:47:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:57.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:47:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:57.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:47:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3648: 321 pgs: 321 active+clean; 303 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 168 KiB/s rd, 13 KiB/s wr, 23 op/s
Jan 23 05:47:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:47:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:47:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:47:59.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:47:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:47:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:47:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:47:59.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:00 np0005593232 nova_compute[250269]: 2026-01-23 10:48:00.451 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3649: 321 pgs: 321 active+clean; 303 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 12 KiB/s wr, 18 op/s
Jan 23 05:48:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e407 do_prune osdmap full prune enabled
Jan 23 05:48:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e408 e408: 3 total, 3 up, 3 in
Jan 23 05:48:01 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e408: 3 total, 3 up, 3 in
Jan 23 05:48:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:48:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:01.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:48:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:01.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:01 np0005593232 nova_compute[250269]: 2026-01-23 10:48:01.847 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.457 250273 DEBUG nova.compute.manager [req-3eb0f85a-229b-4942-8f59-5d25de1277e0 req-a08074ac-601d-4f5d-8b96-365f81ea2f58 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Received event network-changed-e89e1397-f031-41c8-a81d-9efecff06096 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.458 250273 DEBUG nova.compute.manager [req-3eb0f85a-229b-4942-8f59-5d25de1277e0 req-a08074ac-601d-4f5d-8b96-365f81ea2f58 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Refreshing instance network info cache due to event network-changed-e89e1397-f031-41c8-a81d-9efecff06096. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.458 250273 DEBUG oslo_concurrency.lockutils [req-3eb0f85a-229b-4942-8f59-5d25de1277e0 req-a08074ac-601d-4f5d-8b96-365f81ea2f58 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-28049b58-a86b-4eeb-8faa-239ab046508b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.458 250273 DEBUG oslo_concurrency.lockutils [req-3eb0f85a-229b-4942-8f59-5d25de1277e0 req-a08074ac-601d-4f5d-8b96-365f81ea2f58 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-28049b58-a86b-4eeb-8faa-239ab046508b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.459 250273 DEBUG nova.network.neutron [req-3eb0f85a-229b-4942-8f59-5d25de1277e0 req-a08074ac-601d-4f5d-8b96-365f81ea2f58 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Refreshing network info cache for port e89e1397-f031-41c8-a81d-9efecff06096 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.496 250273 DEBUG oslo_concurrency.lockutils [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Acquiring lock "28049b58-a86b-4eeb-8faa-239ab046508b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.496 250273 DEBUG oslo_concurrency.lockutils [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "28049b58-a86b-4eeb-8faa-239ab046508b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.496 250273 DEBUG oslo_concurrency.lockutils [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Acquiring lock "28049b58-a86b-4eeb-8faa-239ab046508b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.497 250273 DEBUG oslo_concurrency.lockutils [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "28049b58-a86b-4eeb-8faa-239ab046508b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.497 250273 DEBUG oslo_concurrency.lockutils [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "28049b58-a86b-4eeb-8faa-239ab046508b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.498 250273 INFO nova.compute.manager [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Terminating instance#033[00m
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.499 250273 DEBUG nova.compute.manager [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:48:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:48:02.516 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=83, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=82) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:48:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:48:02.518 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:48:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:48:02.519 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '83'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:48:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3651: 321 pgs: 321 active+clean; 288 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 2.4 KiB/s wr, 36 op/s
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.552 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:02 np0005593232 kernel: tape89e1397-f0 (unregistering): left promiscuous mode
Jan 23 05:48:02 np0005593232 NetworkManager[49057]: <info>  [1769165282.6873] device (tape89e1397-f0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:48:02 np0005593232 ovn_controller[151001]: 2026-01-23T10:48:02Z|00818|binding|INFO|Releasing lport e89e1397-f031-41c8-a81d-9efecff06096 from this chassis (sb_readonly=0)
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.703 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:02 np0005593232 ovn_controller[151001]: 2026-01-23T10:48:02Z|00819|binding|INFO|Setting lport e89e1397-f031-41c8-a81d-9efecff06096 down in Southbound
Jan 23 05:48:02 np0005593232 ovn_controller[151001]: 2026-01-23T10:48:02Z|00820|binding|INFO|Removing iface tape89e1397-f0 ovn-installed in OVS
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.705 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:48:02.717 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:58:64 10.100.0.6'], port_security=['fa:16:3e:de:58:64 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '28049b58-a86b-4eeb-8faa-239ab046508b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72854481-c2f9-4651-8ba1-fe321a8a5546', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd27c5465284b48a5818ef931d6251c43', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dbb54d44-fc85-485d-96a6-e6e12258a95a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=95d9ae35-aabe-45f7-a103-f14858b94e31, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=e89e1397-f031-41c8-a81d-9efecff06096) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:48:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:48:02.718 161902 INFO neutron.agent.ovn.metadata.agent [-] Port e89e1397-f031-41c8-a81d-9efecff06096 in datapath 72854481-c2f9-4651-8ba1-fe321a8a5546 unbound from our chassis#033[00m
Jan 23 05:48:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:48:02.720 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 72854481-c2f9-4651-8ba1-fe321a8a5546, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:48:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:48:02.721 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3d8167ff-1991-4419-bd58-724fcf450f65]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:48:02 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:48:02.722 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546 namespace which is not needed anymore#033[00m
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.733 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:02 np0005593232 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000ce.scope: Deactivated successfully.
Jan 23 05:48:02 np0005593232 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d000000ce.scope: Consumed 18.321s CPU time.
Jan 23 05:48:02 np0005593232 systemd-machined[215836]: Machine qemu-92-instance-000000ce terminated.
Jan 23 05:48:02 np0005593232 neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546[393671]: [NOTICE]   (393676) : haproxy version is 2.8.14-c23fe91
Jan 23 05:48:02 np0005593232 neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546[393671]: [NOTICE]   (393676) : path to executable is /usr/sbin/haproxy
Jan 23 05:48:02 np0005593232 neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546[393671]: [WARNING]  (393676) : Exiting Master process...
Jan 23 05:48:02 np0005593232 neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546[393671]: [WARNING]  (393676) : Exiting Master process...
Jan 23 05:48:02 np0005593232 neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546[393671]: [ALERT]    (393676) : Current worker (393678) exited with code 143 (Terminated)
Jan 23 05:48:02 np0005593232 neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546[393671]: [WARNING]  (393676) : All workers exited. Exiting... (0)
Jan 23 05:48:02 np0005593232 systemd[1]: libpod-b67037fb8f734ee8d4a663432cc3c92ab2f449214e1e6da45f88023811f3e616.scope: Deactivated successfully.
Jan 23 05:48:02 np0005593232 podman[396008]: 2026-01-23 10:48:02.864651 +0000 UTC m=+0.046040289 container died b67037fb8f734ee8d4a663432cc3c92ab2f449214e1e6da45f88023811f3e616 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 05:48:02 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b67037fb8f734ee8d4a663432cc3c92ab2f449214e1e6da45f88023811f3e616-userdata-shm.mount: Deactivated successfully.
Jan 23 05:48:02 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c8b5dbfd5e53dcf768f4904ffb4f733d9bffad7737cc3b3ed4fa956f665a492b-merged.mount: Deactivated successfully.
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.925 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.931 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.944 250273 INFO nova.virt.libvirt.driver [-] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Instance destroyed successfully.#033[00m
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.944 250273 DEBUG nova.objects.instance [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lazy-loading 'resources' on Instance uuid 28049b58-a86b-4eeb-8faa-239ab046508b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:48:02 np0005593232 podman[396008]: 2026-01-23 10:48:02.946380574 +0000 UTC m=+0.127769903 container cleanup b67037fb8f734ee8d4a663432cc3c92ab2f449214e1e6da45f88023811f3e616 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 23 05:48:02 np0005593232 systemd[1]: libpod-conmon-b67037fb8f734ee8d4a663432cc3c92ab2f449214e1e6da45f88023811f3e616.scope: Deactivated successfully.
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.977 250273 DEBUG nova.virt.libvirt.vif [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:46:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1352362499',display_name='tempest-TestVolumeBootPattern-server-1352362499',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1352362499',id=206,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIC+r2L0+q9PBY99aBdo5XmxGSuvAYYpgonPD6n/fI0f2obfQ97Vwf3Ee9Eeaa+EcBJLE3HyG34aomsAcNn3g+1JMNr5TfA5Vs6CyFMbO26Wy1l0eWLB9DEd+vOA6lCnxQ==',key_name='tempest-TestVolumeBootPattern-909159234',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:46:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d27c5465284b48a5818ef931d6251c43',ramdisk_id='',reservation_id='r-dbypuz6j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-2139361132',owner_user_name='tempest-TestVolumeBootPattern-2139361132-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:46:13Z,user_data=None,user_id='eb70c3aee8b64273a1930c0c2c231aff',uuid=28049b58-a86b-4eeb-8faa-239ab046508b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e89e1397-f031-41c8-a81d-9efecff06096", "address": "fa:16:3e:de:58:64", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89e1397-f0", "ovs_interfaceid": "e89e1397-f031-41c8-a81d-9efecff06096", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.977 250273 DEBUG nova.network.os_vif_util [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Converting VIF {"id": "e89e1397-f031-41c8-a81d-9efecff06096", "address": "fa:16:3e:de:58:64", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89e1397-f0", "ovs_interfaceid": "e89e1397-f031-41c8-a81d-9efecff06096", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.978 250273 DEBUG nova.network.os_vif_util [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:de:58:64,bridge_name='br-int',has_traffic_filtering=True,id=e89e1397-f031-41c8-a81d-9efecff06096,network=Network(72854481-c2f9-4651-8ba1-fe321a8a5546),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89e1397-f0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.979 250273 DEBUG os_vif [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:58:64,bridge_name='br-int',has_traffic_filtering=True,id=e89e1397-f031-41c8-a81d-9efecff06096,network=Network(72854481-c2f9-4651-8ba1-fe321a8a5546),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89e1397-f0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.980 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.980 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape89e1397-f0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.982 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.983 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:02 np0005593232 nova_compute[250269]: 2026-01-23 10:48:02.987 250273 INFO os_vif [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:de:58:64,bridge_name='br-int',has_traffic_filtering=True,id=e89e1397-f031-41c8-a81d-9efecff06096,network=Network(72854481-c2f9-4651-8ba1-fe321a8a5546),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape89e1397-f0')#033[00m
Jan 23 05:48:03 np0005593232 podman[396050]: 2026-01-23 10:48:03.068191276 +0000 UTC m=+0.098480170 container remove b67037fb8f734ee8d4a663432cc3c92ab2f449214e1e6da45f88023811f3e616 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:48:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:48:03.074 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0089f920-deb2-47ca-8da0-ee6dfc366d46]: (4, ('Fri Jan 23 10:48:02 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546 (b67037fb8f734ee8d4a663432cc3c92ab2f449214e1e6da45f88023811f3e616)\nb67037fb8f734ee8d4a663432cc3c92ab2f449214e1e6da45f88023811f3e616\nFri Jan 23 10:48:02 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546 (b67037fb8f734ee8d4a663432cc3c92ab2f449214e1e6da45f88023811f3e616)\nb67037fb8f734ee8d4a663432cc3c92ab2f449214e1e6da45f88023811f3e616\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:48:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:48:03.076 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e108d37a-d120-4d88-a1d2-e349757fdaac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:48:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:48:03.078 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72854481-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:48:03 np0005593232 nova_compute[250269]: 2026-01-23 10:48:03.079 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:03 np0005593232 kernel: tap72854481-c0: left promiscuous mode
Jan 23 05:48:03 np0005593232 nova_compute[250269]: 2026-01-23 10:48:03.092 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:48:03.096 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a45d06e1-5d1b-49f1-8c2e-32c7c6aba3d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:48:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:48:03.113 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4cc01891-cab9-43d0-a3f8-9c6dfed57e5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:48:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:48:03.115 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6f7226-57d7-4ca4-82f9-a5d56ca9e298]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:48:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:48:03.139 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e73b08-49cc-4ca4-8443-9f756d0b50bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 908522, 'reachable_time': 20857, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396082, 'error': None, 'target': 'ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:48:03 np0005593232 systemd[1]: run-netns-ovnmeta\x2d72854481\x2dc2f9\x2d4651\x2d8ba1\x2dfe321a8a5546.mount: Deactivated successfully.
Jan 23 05:48:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:48:03.144 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-72854481-c2f9-4651-8ba1-fe321a8a5546 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:48:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:48:03.144 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[dfd18f84-b2e0-4aff-84ff-e04b43595f68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:48:03 np0005593232 nova_compute[250269]: 2026-01-23 10:48:03.621 250273 DEBUG nova.compute.manager [req-5e319826-19ef-460f-9aa0-701592fd5fb9 req-0475413e-22ad-4a1a-b213-549202bede90 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Received event network-vif-unplugged-e89e1397-f031-41c8-a81d-9efecff06096 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:48:03 np0005593232 nova_compute[250269]: 2026-01-23 10:48:03.622 250273 DEBUG oslo_concurrency.lockutils [req-5e319826-19ef-460f-9aa0-701592fd5fb9 req-0475413e-22ad-4a1a-b213-549202bede90 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "28049b58-a86b-4eeb-8faa-239ab046508b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:48:03 np0005593232 nova_compute[250269]: 2026-01-23 10:48:03.623 250273 DEBUG oslo_concurrency.lockutils [req-5e319826-19ef-460f-9aa0-701592fd5fb9 req-0475413e-22ad-4a1a-b213-549202bede90 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "28049b58-a86b-4eeb-8faa-239ab046508b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:48:03 np0005593232 nova_compute[250269]: 2026-01-23 10:48:03.624 250273 DEBUG oslo_concurrency.lockutils [req-5e319826-19ef-460f-9aa0-701592fd5fb9 req-0475413e-22ad-4a1a-b213-549202bede90 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "28049b58-a86b-4eeb-8faa-239ab046508b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:48:03 np0005593232 nova_compute[250269]: 2026-01-23 10:48:03.624 250273 DEBUG nova.compute.manager [req-5e319826-19ef-460f-9aa0-701592fd5fb9 req-0475413e-22ad-4a1a-b213-549202bede90 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] No waiting events found dispatching network-vif-unplugged-e89e1397-f031-41c8-a81d-9efecff06096 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:48:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:03 np0005593232 nova_compute[250269]: 2026-01-23 10:48:03.625 250273 DEBUG nova.compute.manager [req-5e319826-19ef-460f-9aa0-701592fd5fb9 req-0475413e-22ad-4a1a-b213-549202bede90 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Received event network-vif-unplugged-e89e1397-f031-41c8-a81d-9efecff06096 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:48:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:48:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:03.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:48:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:03.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3652: 321 pgs: 321 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 3.0 KiB/s wr, 35 op/s
Jan 23 05:48:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:48:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:05.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:05.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:05 np0005593232 nova_compute[250269]: 2026-01-23 10:48:05.795 250273 DEBUG nova.compute.manager [req-fbb5ba00-de51-4c1c-a461-f8b889866891 req-4822f7be-d480-43da-8dd7-282ef1e5cab5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Received event network-vif-plugged-e89e1397-f031-41c8-a81d-9efecff06096 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:48:05 np0005593232 nova_compute[250269]: 2026-01-23 10:48:05.796 250273 DEBUG oslo_concurrency.lockutils [req-fbb5ba00-de51-4c1c-a461-f8b889866891 req-4822f7be-d480-43da-8dd7-282ef1e5cab5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "28049b58-a86b-4eeb-8faa-239ab046508b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:48:05 np0005593232 nova_compute[250269]: 2026-01-23 10:48:05.796 250273 DEBUG oslo_concurrency.lockutils [req-fbb5ba00-de51-4c1c-a461-f8b889866891 req-4822f7be-d480-43da-8dd7-282ef1e5cab5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "28049b58-a86b-4eeb-8faa-239ab046508b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:48:05 np0005593232 nova_compute[250269]: 2026-01-23 10:48:05.796 250273 DEBUG oslo_concurrency.lockutils [req-fbb5ba00-de51-4c1c-a461-f8b889866891 req-4822f7be-d480-43da-8dd7-282ef1e5cab5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "28049b58-a86b-4eeb-8faa-239ab046508b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:48:05 np0005593232 nova_compute[250269]: 2026-01-23 10:48:05.797 250273 DEBUG nova.compute.manager [req-fbb5ba00-de51-4c1c-a461-f8b889866891 req-4822f7be-d480-43da-8dd7-282ef1e5cab5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] No waiting events found dispatching network-vif-plugged-e89e1397-f031-41c8-a81d-9efecff06096 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:48:05 np0005593232 nova_compute[250269]: 2026-01-23 10:48:05.797 250273 WARNING nova.compute.manager [req-fbb5ba00-de51-4c1c-a461-f8b889866891 req-4822f7be-d480-43da-8dd7-282ef1e5cab5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Received unexpected event network-vif-plugged-e89e1397-f031-41c8-a81d-9efecff06096 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 05:48:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3653: 321 pgs: 321 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 10 KiB/s wr, 40 op/s
Jan 23 05:48:06 np0005593232 nova_compute[250269]: 2026-01-23 10:48:06.794 250273 INFO nova.virt.libvirt.driver [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Deleting instance files /var/lib/nova/instances/28049b58-a86b-4eeb-8faa-239ab046508b_del#033[00m
Jan 23 05:48:06 np0005593232 nova_compute[250269]: 2026-01-23 10:48:06.795 250273 INFO nova.virt.libvirt.driver [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Deletion of /var/lib/nova/instances/28049b58-a86b-4eeb-8faa-239ab046508b_del complete#033[00m
Jan 23 05:48:06 np0005593232 nova_compute[250269]: 2026-01-23 10:48:06.848 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:06 np0005593232 nova_compute[250269]: 2026-01-23 10:48:06.883 250273 INFO nova.compute.manager [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Took 4.38 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:48:06 np0005593232 nova_compute[250269]: 2026-01-23 10:48:06.884 250273 DEBUG oslo.service.loopingcall [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:48:06 np0005593232 nova_compute[250269]: 2026-01-23 10:48:06.884 250273 DEBUG nova.compute.manager [-] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:48:06 np0005593232 nova_compute[250269]: 2026-01-23 10:48:06.884 250273 DEBUG nova.network.neutron [-] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:48:07 np0005593232 nova_compute[250269]: 2026-01-23 10:48:07.089 250273 DEBUG nova.network.neutron [req-3eb0f85a-229b-4942-8f59-5d25de1277e0 req-a08074ac-601d-4f5d-8b96-365f81ea2f58 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Updated VIF entry in instance network info cache for port e89e1397-f031-41c8-a81d-9efecff06096. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:48:07 np0005593232 nova_compute[250269]: 2026-01-23 10:48:07.090 250273 DEBUG nova.network.neutron [req-3eb0f85a-229b-4942-8f59-5d25de1277e0 req-a08074ac-601d-4f5d-8b96-365f81ea2f58 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Updating instance_info_cache with network_info: [{"id": "e89e1397-f031-41c8-a81d-9efecff06096", "address": "fa:16:3e:de:58:64", "network": {"id": "72854481-c2f9-4651-8ba1-fe321a8a5546", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-823196877-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d27c5465284b48a5818ef931d6251c43", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape89e1397-f0", "ovs_interfaceid": "e89e1397-f031-41c8-a81d-9efecff06096", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:48:07 np0005593232 nova_compute[250269]: 2026-01-23 10:48:07.133 250273 DEBUG oslo_concurrency.lockutils [req-3eb0f85a-229b-4942-8f59-5d25de1277e0 req-a08074ac-601d-4f5d-8b96-365f81ea2f58 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-28049b58-a86b-4eeb-8faa-239ab046508b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:48:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:07.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:48:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:48:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:48:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:48:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:48:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:48:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:07.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:07 np0005593232 nova_compute[250269]: 2026-01-23 10:48:07.983 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:08 np0005593232 nova_compute[250269]: 2026-01-23 10:48:08.169 250273 DEBUG nova.network.neutron [-] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:48:08 np0005593232 nova_compute[250269]: 2026-01-23 10:48:08.214 250273 INFO nova.compute.manager [-] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Took 1.33 seconds to deallocate network for instance.#033[00m
Jan 23 05:48:08 np0005593232 nova_compute[250269]: 2026-01-23 10:48:08.340 250273 DEBUG nova.compute.manager [req-2d02233f-dc5d-4232-a095-b82e59300db8 req-b3231cdd-4d52-4fc3-90cc-188ed4377f2c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Received event network-vif-deleted-e89e1397-f031-41c8-a81d-9efecff06096 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:48:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3654: 321 pgs: 321 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 35 KiB/s rd, 25 KiB/s wr, 51 op/s
Jan 23 05:48:08 np0005593232 nova_compute[250269]: 2026-01-23 10:48:08.561 250273 INFO nova.compute.manager [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Took 0.35 seconds to detach 1 volumes for instance.#033[00m
Jan 23 05:48:08 np0005593232 nova_compute[250269]: 2026-01-23 10:48:08.647 250273 DEBUG oslo_concurrency.lockutils [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:48:08 np0005593232 nova_compute[250269]: 2026-01-23 10:48:08.649 250273 DEBUG oslo_concurrency.lockutils [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:48:08 np0005593232 nova_compute[250269]: 2026-01-23 10:48:08.727 250273 DEBUG oslo_concurrency.processutils [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:48:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:48:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3353908367' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:48:09 np0005593232 nova_compute[250269]: 2026-01-23 10:48:09.247 250273 DEBUG oslo_concurrency.processutils [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:48:09 np0005593232 nova_compute[250269]: 2026-01-23 10:48:09.259 250273 DEBUG nova.compute.provider_tree [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:48:09 np0005593232 nova_compute[250269]: 2026-01-23 10:48:09.286 250273 DEBUG nova.scheduler.client.report [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:48:09 np0005593232 nova_compute[250269]: 2026-01-23 10:48:09.331 250273 DEBUG oslo_concurrency.lockutils [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:48:09 np0005593232 nova_compute[250269]: 2026-01-23 10:48:09.378 250273 INFO nova.scheduler.client.report [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Deleted allocations for instance 28049b58-a86b-4eeb-8faa-239ab046508b#033[00m
Jan 23 05:48:09 np0005593232 nova_compute[250269]: 2026-01-23 10:48:09.514 250273 DEBUG oslo_concurrency.lockutils [None req-6425256d-8def-40c6-adc0-83c3ae38189b eb70c3aee8b64273a1930c0c2c231aff d27c5465284b48a5818ef931d6251c43 - - default default] Lock "28049b58-a86b-4eeb-8faa-239ab046508b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.017s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:48:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:48:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e408 do_prune osdmap full prune enabled
Jan 23 05:48:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:09.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:48:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:09.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:48:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e409 e409: 3 total, 3 up, 3 in
Jan 23 05:48:09 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e409: 3 total, 3 up, 3 in
Jan 23 05:48:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3656: 321 pgs: 321 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 26 KiB/s wr, 54 op/s
Jan 23 05:48:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e409 do_prune osdmap full prune enabled
Jan 23 05:48:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e410 e410: 3 total, 3 up, 3 in
Jan 23 05:48:10 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e410: 3 total, 3 up, 3 in
Jan 23 05:48:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:48:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2584578071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:48:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:11.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:11.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:11 np0005593232 nova_compute[250269]: 2026-01-23 10:48:11.833 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:48:11 np0005593232 nova_compute[250269]: 2026-01-23 10:48:11.834 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:48:11 np0005593232 nova_compute[250269]: 2026-01-23 10:48:11.895 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3658: 321 pgs: 321 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 28 KiB/s wr, 29 op/s
Jan 23 05:48:12 np0005593232 nova_compute[250269]: 2026-01-23 10:48:12.990 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:13.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:13.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3659: 321 pgs: 321 active+clean; 281 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 45 op/s
Jan 23 05:48:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:48:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:48:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2011873793' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:48:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:48:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2011873793' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:48:15 np0005593232 nova_compute[250269]: 2026-01-23 10:48:15.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:48:15 np0005593232 nova_compute[250269]: 2026-01-23 10:48:15.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:48:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e410 do_prune osdmap full prune enabled
Jan 23 05:48:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e411 e411: 3 total, 3 up, 3 in
Jan 23 05:48:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:15.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:15 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e411: 3 total, 3 up, 3 in
Jan 23 05:48:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:15.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3661: 321 pgs: 321 active+clean; 293 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.2 MiB/s wr, 109 op/s
Jan 23 05:48:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e411 do_prune osdmap full prune enabled
Jan 23 05:48:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e412 e412: 3 total, 3 up, 3 in
Jan 23 05:48:16 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e412: 3 total, 3 up, 3 in
Jan 23 05:48:16 np0005593232 nova_compute[250269]: 2026-01-23 10:48:16.921 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:17 np0005593232 nova_compute[250269]: 2026-01-23 10:48:17.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:48:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:48:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:17.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:48:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:48:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:17.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:48:17 np0005593232 nova_compute[250269]: 2026-01-23 10:48:17.943 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769165282.9421763, 28049b58-a86b-4eeb-8faa-239ab046508b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:48:17 np0005593232 nova_compute[250269]: 2026-01-23 10:48:17.944 250273 INFO nova.compute.manager [-] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] VM Stopped (Lifecycle Event)#033[00m
Jan 23 05:48:17 np0005593232 nova_compute[250269]: 2026-01-23 10:48:17.982 250273 DEBUG nova.compute.manager [None req-62f808e2-0493-46ce-81fe-63efd8d528cd - - - - - -] [instance: 28049b58-a86b-4eeb-8faa-239ab046508b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:48:18 np0005593232 nova_compute[250269]: 2026-01-23 10:48:18.024 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3663: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.2 MiB/s rd, 6.1 MiB/s wr, 138 op/s
Jan 23 05:48:19 np0005593232 nova_compute[250269]: 2026-01-23 10:48:19.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:48:19 np0005593232 podman[396164]: 2026-01-23 10:48:19.4299119 +0000 UTC m=+0.091806930 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:48:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e412 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:48:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:48:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:19.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:48:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:19.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3664: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 129 op/s
Jan 23 05:48:21 np0005593232 nova_compute[250269]: 2026-01-23 10:48:21.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:48:21 np0005593232 nova_compute[250269]: 2026-01-23 10:48:21.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:48:21 np0005593232 nova_compute[250269]: 2026-01-23 10:48:21.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:48:21 np0005593232 nova_compute[250269]: 2026-01-23 10:48:21.336 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:48:21 np0005593232 nova_compute[250269]: 2026-01-23 10:48:21.560 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:21.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:48:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:21.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:48:21 np0005593232 nova_compute[250269]: 2026-01-23 10:48:21.723 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:21 np0005593232 nova_compute[250269]: 2026-01-23 10:48:21.955 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3665: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.6 MiB/s rd, 5.8 MiB/s wr, 129 op/s
Jan 23 05:48:23 np0005593232 nova_compute[250269]: 2026-01-23 10:48:23.058 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:23 np0005593232 nova_compute[250269]: 2026-01-23 10:48:23.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:48:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:23.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:23.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:24 np0005593232 podman[396195]: 2026-01-23 10:48:24.399052557 +0000 UTC m=+0.053436200 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 23 05:48:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3666: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 MiB/s rd, 5.2 MiB/s wr, 118 op/s
Jan 23 05:48:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e412 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:48:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e412 do_prune osdmap full prune enabled
Jan 23 05:48:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e413 e413: 3 total, 3 up, 3 in
Jan 23 05:48:24 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e413: 3 total, 3 up, 3 in
Jan 23 05:48:25 np0005593232 nova_compute[250269]: 2026-01-23 10:48:25.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:48:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.003000085s ======
Jan 23 05:48:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:25.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000085s
Jan 23 05:48:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:48:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:25.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:48:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3668: 321 pgs: 321 active+clean; 262 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 61 op/s
Jan 23 05:48:26 np0005593232 nova_compute[250269]: 2026-01-23 10:48:26.956 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:48:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:27.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:48:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:48:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:27.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:48:28 np0005593232 nova_compute[250269]: 2026-01-23 10:48:28.099 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3669: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 KiB/s wr, 52 op/s
Jan 23 05:48:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:48:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:29.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:29.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3670: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 KiB/s wr, 52 op/s
Jan 23 05:48:31 np0005593232 nova_compute[250269]: 2026-01-23 10:48:31.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:48:31 np0005593232 nova_compute[250269]: 2026-01-23 10:48:31.355 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:48:31 np0005593232 nova_compute[250269]: 2026-01-23 10:48:31.356 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:48:31 np0005593232 nova_compute[250269]: 2026-01-23 10:48:31.356 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:48:31 np0005593232 nova_compute[250269]: 2026-01-23 10:48:31.356 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:48:31 np0005593232 nova_compute[250269]: 2026-01-23 10:48:31.357 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:48:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:31.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:31.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:31 np0005593232 nova_compute[250269]: 2026-01-23 10:48:31.852 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:48:32 np0005593232 nova_compute[250269]: 2026-01-23 10:48:32.009 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:32 np0005593232 nova_compute[250269]: 2026-01-23 10:48:32.078 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:48:32 np0005593232 nova_compute[250269]: 2026-01-23 10:48:32.080 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4150MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:48:32 np0005593232 nova_compute[250269]: 2026-01-23 10:48:32.080 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:48:32 np0005593232 nova_compute[250269]: 2026-01-23 10:48:32.081 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:48:32 np0005593232 nova_compute[250269]: 2026-01-23 10:48:32.244 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:48:32 np0005593232 nova_compute[250269]: 2026-01-23 10:48:32.244 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:48:32 np0005593232 nova_compute[250269]: 2026-01-23 10:48:32.279 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:48:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3671: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 34 op/s
Jan 23 05:48:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:48:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1988745399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:48:32 np0005593232 nova_compute[250269]: 2026-01-23 10:48:32.801 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:48:32 np0005593232 nova_compute[250269]: 2026-01-23 10:48:32.809 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:48:32 np0005593232 nova_compute[250269]: 2026-01-23 10:48:32.835 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:48:32 np0005593232 nova_compute[250269]: 2026-01-23 10:48:32.866 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:48:32 np0005593232 nova_compute[250269]: 2026-01-23 10:48:32.867 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:48:33 np0005593232 nova_compute[250269]: 2026-01-23 10:48:33.145 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:48:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:33.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:48:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:48:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:33.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:48:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3672: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Jan 23 05:48:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:48:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:48:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:35.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:48:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:35.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3673: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 05:48:36 np0005593232 nova_compute[250269]: 2026-01-23 10:48:36.862 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:48:37 np0005593232 nova_compute[250269]: 2026-01-23 10:48:37.009 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:48:37
Jan 23 05:48:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:48:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:48:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'default.rgw.control', 'backups', 'volumes', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', '.rgw.root']
Jan 23 05:48:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:48:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:48:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:48:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:48:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:48:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:48:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:48:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:37.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:48:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:37.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:48:38 np0005593232 nova_compute[250269]: 2026-01-23 10:48:38.162 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3674: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 0 B/s wr, 18 op/s
Jan 23 05:48:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:48:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:48:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:48:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:48:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:48:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:48:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:48:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:48:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:48:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:48:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:48:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:39.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:39.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3675: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 05:48:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:41.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:41.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:42 np0005593232 nova_compute[250269]: 2026-01-23 10:48:42.011 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3676: 321 pgs: 321 active+clean; 225 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 1.1 MiB/s wr, 30 op/s
Jan 23 05:48:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:48:42.669 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:48:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:48:42.670 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:48:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:48:42.670 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:48:43 np0005593232 nova_compute[250269]: 2026-01-23 10:48:43.165 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:43.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:48:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:43.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:48:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e413 do_prune osdmap full prune enabled
Jan 23 05:48:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e414 e414: 3 total, 3 up, 3 in
Jan 23 05:48:44 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e414: 3 total, 3 up, 3 in
Jan 23 05:48:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3678: 321 pgs: 321 active+clean; 279 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 92 op/s
Jan 23 05:48:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:48:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:48:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:48:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:48:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:48:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:48:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:48:45 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 2f855085-cffc-4fa5-b232-160f6bf192ea does not exist
Jan 23 05:48:45 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev e61d9f0d-4ab4-4b8e-b1b7-ff1c3d29613e does not exist
Jan 23 05:48:45 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev eef265cf-e7da-4b45-8387-b5bab5c575fc does not exist
Jan 23 05:48:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:48:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:48:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:48:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:48:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:48:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:48:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:48:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:48:45 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:48:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:45.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:45.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:45 np0005593232 podman[396592]: 2026-01-23 10:48:45.923185148 +0000 UTC m=+0.046946685 container create f806b118ef10a71acf7bbdeea3884e2273f1650c75f6cc85ad16d5598fcda1d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 05:48:45 np0005593232 systemd[1]: Started libpod-conmon-f806b118ef10a71acf7bbdeea3884e2273f1650c75f6cc85ad16d5598fcda1d2.scope.
Jan 23 05:48:45 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:48:45 np0005593232 podman[396592]: 2026-01-23 10:48:45.898060804 +0000 UTC m=+0.021822361 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:48:46 np0005593232 podman[396592]: 2026-01-23 10:48:46.003052719 +0000 UTC m=+0.126814276 container init f806b118ef10a71acf7bbdeea3884e2273f1650c75f6cc85ad16d5598fcda1d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 23 05:48:46 np0005593232 podman[396592]: 2026-01-23 10:48:46.010194962 +0000 UTC m=+0.133956499 container start f806b118ef10a71acf7bbdeea3884e2273f1650c75f6cc85ad16d5598fcda1d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendeleev, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 05:48:46 np0005593232 podman[396592]: 2026-01-23 10:48:46.013574298 +0000 UTC m=+0.137335865 container attach f806b118ef10a71acf7bbdeea3884e2273f1650c75f6cc85ad16d5598fcda1d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendeleev, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 23 05:48:46 np0005593232 dreamy_mendeleev[396608]: 167 167
Jan 23 05:48:46 np0005593232 systemd[1]: libpod-f806b118ef10a71acf7bbdeea3884e2273f1650c75f6cc85ad16d5598fcda1d2.scope: Deactivated successfully.
Jan 23 05:48:46 np0005593232 podman[396592]: 2026-01-23 10:48:46.016906793 +0000 UTC m=+0.140668330 container died f806b118ef10a71acf7bbdeea3884e2273f1650c75f6cc85ad16d5598fcda1d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendeleev, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 23 05:48:46 np0005593232 systemd[1]: var-lib-containers-storage-overlay-2d6e58fec3600903bd6e40720aa0def75c030cd293b9ad636ee12e07735d474e-merged.mount: Deactivated successfully.
Jan 23 05:48:46 np0005593232 podman[396592]: 2026-01-23 10:48:46.056403385 +0000 UTC m=+0.180164922 container remove f806b118ef10a71acf7bbdeea3884e2273f1650c75f6cc85ad16d5598fcda1d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_mendeleev, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:48:46 np0005593232 systemd[1]: libpod-conmon-f806b118ef10a71acf7bbdeea3884e2273f1650c75f6cc85ad16d5598fcda1d2.scope: Deactivated successfully.
Jan 23 05:48:46 np0005593232 podman[396632]: 2026-01-23 10:48:46.218129562 +0000 UTC m=+0.046487272 container create 16dae0f0cb477adb09b2f7d4b1a872ca6528f65e394dc7bb41e0ad18a06ece2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:48:46 np0005593232 systemd[1]: Started libpod-conmon-16dae0f0cb477adb09b2f7d4b1a872ca6528f65e394dc7bb41e0ad18a06ece2f.scope.
Jan 23 05:48:46 np0005593232 podman[396632]: 2026-01-23 10:48:46.199290427 +0000 UTC m=+0.027648137 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:48:46 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:48:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf17ca79997221886eec748fadbf59f0406c4b8548d39b15f9e20d5ad0dee3b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:48:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf17ca79997221886eec748fadbf59f0406c4b8548d39b15f9e20d5ad0dee3b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:48:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf17ca79997221886eec748fadbf59f0406c4b8548d39b15f9e20d5ad0dee3b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:48:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf17ca79997221886eec748fadbf59f0406c4b8548d39b15f9e20d5ad0dee3b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:48:46 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf17ca79997221886eec748fadbf59f0406c4b8548d39b15f9e20d5ad0dee3b4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:48:46 np0005593232 podman[396632]: 2026-01-23 10:48:46.320469751 +0000 UTC m=+0.148827541 container init 16dae0f0cb477adb09b2f7d4b1a872ca6528f65e394dc7bb41e0ad18a06ece2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 05:48:46 np0005593232 podman[396632]: 2026-01-23 10:48:46.32710168 +0000 UTC m=+0.155459410 container start 16dae0f0cb477adb09b2f7d4b1a872ca6528f65e394dc7bb41e0ad18a06ece2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goldwasser, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Jan 23 05:48:46 np0005593232 podman[396632]: 2026-01-23 10:48:46.330951729 +0000 UTC m=+0.159309439 container attach 16dae0f0cb477adb09b2f7d4b1a872ca6528f65e394dc7bb41e0ad18a06ece2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 05:48:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3679: 321 pgs: 321 active+clean; 245 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 117 op/s
Jan 23 05:48:47 np0005593232 nova_compute[250269]: 2026-01-23 10:48:47.013 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:47 np0005593232 nice_goldwasser[396648]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:48:47 np0005593232 nice_goldwasser[396648]: --> relative data size: 1.0
Jan 23 05:48:47 np0005593232 nice_goldwasser[396648]: --> All data devices are unavailable
Jan 23 05:48:47 np0005593232 systemd[1]: libpod-16dae0f0cb477adb09b2f7d4b1a872ca6528f65e394dc7bb41e0ad18a06ece2f.scope: Deactivated successfully.
Jan 23 05:48:47 np0005593232 podman[396632]: 2026-01-23 10:48:47.206219289 +0000 UTC m=+1.034576989 container died 16dae0f0cb477adb09b2f7d4b1a872ca6528f65e394dc7bb41e0ad18a06ece2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 05:48:47 np0005593232 systemd[1]: var-lib-containers-storage-overlay-cf17ca79997221886eec748fadbf59f0406c4b8548d39b15f9e20d5ad0dee3b4-merged.mount: Deactivated successfully.
Jan 23 05:48:47 np0005593232 podman[396632]: 2026-01-23 10:48:47.269147198 +0000 UTC m=+1.097504908 container remove 16dae0f0cb477adb09b2f7d4b1a872ca6528f65e394dc7bb41e0ad18a06ece2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_goldwasser, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 05:48:47 np0005593232 systemd[1]: libpod-conmon-16dae0f0cb477adb09b2f7d4b1a872ca6528f65e394dc7bb41e0ad18a06ece2f.scope: Deactivated successfully.
Jan 23 05:48:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:47.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:48:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:48:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:48:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:48:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002174319223431975 of space, bias 1.0, pg target 0.6522957670295925 quantized to 32 (current 32)
Jan 23 05:48:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:48:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021759550065140517 of space, bias 1.0, pg target 0.6527865019542155 quantized to 32 (current 32)
Jan 23 05:48:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:48:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 05:48:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:48:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.003059459624511446 of space, bias 1.0, pg target 0.9178378873534339 quantized to 32 (current 32)
Jan 23 05:48:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:48:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 05:48:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:48:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:48:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:48:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 05:48:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:48:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 05:48:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:48:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:48:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:48:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 05:48:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:47.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 05:48:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.0 total, 600.0 interval#012Cumulative writes: 18K writes, 80K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.02 MB/s#012Cumulative WAL: 18K writes, 18K syncs, 1.00 writes per sync, written: 0.12 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1587 writes, 6963 keys, 1587 commit groups, 1.0 writes per commit group, ingest: 10.74 MB, 0.02 MB/s#012Interval WAL: 1587 writes, 1587 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     48.4      2.24              0.42        57    0.039       0      0       0.0       0.0#012  L6      1/0   12.43 MB   0.0      0.6     0.1      0.5       0.6      0.0       0.0   5.2     86.1     73.6      7.67              1.98        56    0.137    423K    30K       0.0       0.0#012 Sum      1/0   12.43 MB   0.0      0.6     0.1      0.5       0.7      0.1       0.0   6.2     66.6     67.9      9.91              2.41       113    0.088    423K    30K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.1     74.7     74.1      0.91              0.29        10    0.091     53K   2601       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.6      0.0       0.0   0.0     86.1     73.6      7.67              1.98        56    0.137    423K    30K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     48.4      2.24              0.42        56    0.040       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6600.0 total, 600.0 interval#012Flush(GB): cumulative 0.106, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.66 GB write, 0.10 MB/s write, 0.64 GB read, 0.10 MB/s read, 9.9 seconds#012Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a844231f0#2 capacity: 304.00 MB usage: 73.20 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000509 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4221,70.20 MB,23.0909%) FilterBlock(114,1.14 MB,0.375703%) IndexBlock(114,1.86 MB,0.611762%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 23 05:48:47 np0005593232 podman[396867]: 2026-01-23 10:48:47.91052882 +0000 UTC m=+0.040987386 container create 782f1bfe6e50343304812d4913035000207676f01522e70031ad711442418345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 05:48:47 np0005593232 systemd[1]: Started libpod-conmon-782f1bfe6e50343304812d4913035000207676f01522e70031ad711442418345.scope.
Jan 23 05:48:47 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:48:47 np0005593232 podman[396867]: 2026-01-23 10:48:47.983940217 +0000 UTC m=+0.114398793 container init 782f1bfe6e50343304812d4913035000207676f01522e70031ad711442418345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jang, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:48:47 np0005593232 podman[396867]: 2026-01-23 10:48:47.892195929 +0000 UTC m=+0.022654515 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:48:47 np0005593232 podman[396867]: 2026-01-23 10:48:47.990592566 +0000 UTC m=+0.121051152 container start 782f1bfe6e50343304812d4913035000207676f01522e70031ad711442418345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jang, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 05:48:47 np0005593232 podman[396867]: 2026-01-23 10:48:47.99463344 +0000 UTC m=+0.125092106 container attach 782f1bfe6e50343304812d4913035000207676f01522e70031ad711442418345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 23 05:48:47 np0005593232 elastic_jang[396883]: 167 167
Jan 23 05:48:47 np0005593232 systemd[1]: libpod-782f1bfe6e50343304812d4913035000207676f01522e70031ad711442418345.scope: Deactivated successfully.
Jan 23 05:48:47 np0005593232 podman[396867]: 2026-01-23 10:48:47.99601808 +0000 UTC m=+0.126476646 container died 782f1bfe6e50343304812d4913035000207676f01522e70031ad711442418345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:48:48 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e816ab7b66457697ae753c19a85001bd5dffbe3a10f19dedb17ffff3dfdfe19f-merged.mount: Deactivated successfully.
Jan 23 05:48:48 np0005593232 podman[396867]: 2026-01-23 10:48:48.032429135 +0000 UTC m=+0.162887691 container remove 782f1bfe6e50343304812d4913035000207676f01522e70031ad711442418345 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:48:48 np0005593232 systemd[1]: libpod-conmon-782f1bfe6e50343304812d4913035000207676f01522e70031ad711442418345.scope: Deactivated successfully.
Jan 23 05:48:48 np0005593232 nova_compute[250269]: 2026-01-23 10:48:48.166 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:48 np0005593232 podman[396908]: 2026-01-23 10:48:48.193213115 +0000 UTC m=+0.047580003 container create f8a3a891958f664b31880c480d5d82e43003d32a81dc04f5016e8f735a40f13a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_brown, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 23 05:48:48 np0005593232 systemd[1]: Started libpod-conmon-f8a3a891958f664b31880c480d5d82e43003d32a81dc04f5016e8f735a40f13a.scope.
Jan 23 05:48:48 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:48:48 np0005593232 podman[396908]: 2026-01-23 10:48:48.173660829 +0000 UTC m=+0.028027707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:48:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ede95aa62a30453d6c3a1280c0d4a39debf2734aca288a82197e7a0f5ad978b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:48:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ede95aa62a30453d6c3a1280c0d4a39debf2734aca288a82197e7a0f5ad978b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:48:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ede95aa62a30453d6c3a1280c0d4a39debf2734aca288a82197e7a0f5ad978b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:48:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ede95aa62a30453d6c3a1280c0d4a39debf2734aca288a82197e7a0f5ad978b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:48:48 np0005593232 podman[396908]: 2026-01-23 10:48:48.286975669 +0000 UTC m=+0.141342557 container init f8a3a891958f664b31880c480d5d82e43003d32a81dc04f5016e8f735a40f13a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 05:48:48 np0005593232 podman[396908]: 2026-01-23 10:48:48.29366846 +0000 UTC m=+0.148035318 container start f8a3a891958f664b31880c480d5d82e43003d32a81dc04f5016e8f735a40f13a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_brown, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:48:48 np0005593232 podman[396908]: 2026-01-23 10:48:48.297149049 +0000 UTC m=+0.151515937 container attach f8a3a891958f664b31880c480d5d82e43003d32a81dc04f5016e8f735a40f13a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_brown, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:48:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3680: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 208 op/s
Jan 23 05:48:49 np0005593232 recursing_brown[396926]: {
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:    "0": [
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:        {
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:            "devices": [
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:                "/dev/loop3"
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:            ],
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:            "lv_name": "ceph_lv0",
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:            "lv_size": "7511998464",
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:            "name": "ceph_lv0",
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:            "tags": {
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:                "ceph.cluster_name": "ceph",
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:                "ceph.crush_device_class": "",
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:                "ceph.encrypted": "0",
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:                "ceph.osd_id": "0",
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:                "ceph.type": "block",
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:                "ceph.vdo": "0"
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:            },
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:            "type": "block",
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:            "vg_name": "ceph_vg0"
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:        }
Jan 23 05:48:49 np0005593232 recursing_brown[396926]:    ]
Jan 23 05:48:49 np0005593232 recursing_brown[396926]: }
Jan 23 05:48:49 np0005593232 systemd[1]: libpod-f8a3a891958f664b31880c480d5d82e43003d32a81dc04f5016e8f735a40f13a.scope: Deactivated successfully.
Jan 23 05:48:49 np0005593232 podman[396908]: 2026-01-23 10:48:49.067465045 +0000 UTC m=+0.921831943 container died f8a3a891958f664b31880c480d5d82e43003d32a81dc04f5016e8f735a40f13a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_brown, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:48:49 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5ede95aa62a30453d6c3a1280c0d4a39debf2734aca288a82197e7a0f5ad978b-merged.mount: Deactivated successfully.
Jan 23 05:48:49 np0005593232 podman[396908]: 2026-01-23 10:48:49.221107462 +0000 UTC m=+1.075474330 container remove f8a3a891958f664b31880c480d5d82e43003d32a81dc04f5016e8f735a40f13a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_brown, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 23 05:48:49 np0005593232 systemd[1]: libpod-conmon-f8a3a891958f664b31880c480d5d82e43003d32a81dc04f5016e8f735a40f13a.scope: Deactivated successfully.
Jan 23 05:48:49 np0005593232 podman[397023]: 2026-01-23 10:48:49.584725918 +0000 UTC m=+0.103215305 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:48:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:48:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e414 do_prune osdmap full prune enabled
Jan 23 05:48:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e415 e415: 3 total, 3 up, 3 in
Jan 23 05:48:49 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e415: 3 total, 3 up, 3 in
Jan 23 05:48:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:49.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:49.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:49 np0005593232 podman[397115]: 2026-01-23 10:48:49.874447094 +0000 UTC m=+0.039987538 container create cc1039639193079187d15f03f5372ec15c0070ed53285bbbd7ce79e5c96433be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dhawan, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:48:49 np0005593232 systemd[1]: Started libpod-conmon-cc1039639193079187d15f03f5372ec15c0070ed53285bbbd7ce79e5c96433be.scope.
Jan 23 05:48:49 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:48:49 np0005593232 podman[397115]: 2026-01-23 10:48:49.855018101 +0000 UTC m=+0.020558565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:48:49 np0005593232 podman[397115]: 2026-01-23 10:48:49.958676718 +0000 UTC m=+0.124217202 container init cc1039639193079187d15f03f5372ec15c0070ed53285bbbd7ce79e5c96433be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dhawan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 23 05:48:49 np0005593232 podman[397115]: 2026-01-23 10:48:49.966389697 +0000 UTC m=+0.131930151 container start cc1039639193079187d15f03f5372ec15c0070ed53285bbbd7ce79e5c96433be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:48:49 np0005593232 podman[397115]: 2026-01-23 10:48:49.971144762 +0000 UTC m=+0.136685236 container attach cc1039639193079187d15f03f5372ec15c0070ed53285bbbd7ce79e5c96433be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 23 05:48:49 np0005593232 brave_dhawan[397132]: 167 167
Jan 23 05:48:49 np0005593232 systemd[1]: libpod-cc1039639193079187d15f03f5372ec15c0070ed53285bbbd7ce79e5c96433be.scope: Deactivated successfully.
Jan 23 05:48:49 np0005593232 podman[397115]: 2026-01-23 10:48:49.973989843 +0000 UTC m=+0.139530287 container died cc1039639193079187d15f03f5372ec15c0070ed53285bbbd7ce79e5c96433be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dhawan, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 23 05:48:49 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ed21a3e106602b16cf9c9048c86b162b4e65340a3dd9c9256b0f6b39ea3147e7-merged.mount: Deactivated successfully.
Jan 23 05:48:50 np0005593232 podman[397115]: 2026-01-23 10:48:50.010057238 +0000 UTC m=+0.175597672 container remove cc1039639193079187d15f03f5372ec15c0070ed53285bbbd7ce79e5c96433be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 23 05:48:50 np0005593232 systemd[1]: libpod-conmon-cc1039639193079187d15f03f5372ec15c0070ed53285bbbd7ce79e5c96433be.scope: Deactivated successfully.
Jan 23 05:48:50 np0005593232 podman[397157]: 2026-01-23 10:48:50.191499886 +0000 UTC m=+0.050311571 container create c609f1ed906cbfdb54e71e15e46f9d39dbab85af84ea2aa88686bcedc3779372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_beaver, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 05:48:50 np0005593232 systemd[1]: Started libpod-conmon-c609f1ed906cbfdb54e71e15e46f9d39dbab85af84ea2aa88686bcedc3779372.scope.
Jan 23 05:48:50 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:48:50 np0005593232 podman[397157]: 2026-01-23 10:48:50.169187252 +0000 UTC m=+0.027998917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:48:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117aae022cd91c136f0ae89ca3ea64af07376bce6520dbbb7bc2de466034fa7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:48:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117aae022cd91c136f0ae89ca3ea64af07376bce6520dbbb7bc2de466034fa7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:48:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117aae022cd91c136f0ae89ca3ea64af07376bce6520dbbb7bc2de466034fa7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:48:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/117aae022cd91c136f0ae89ca3ea64af07376bce6520dbbb7bc2de466034fa7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:48:50 np0005593232 podman[397157]: 2026-01-23 10:48:50.285744805 +0000 UTC m=+0.144556460 container init c609f1ed906cbfdb54e71e15e46f9d39dbab85af84ea2aa88686bcedc3779372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_beaver, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 23 05:48:50 np0005593232 podman[397157]: 2026-01-23 10:48:50.292338732 +0000 UTC m=+0.151150377 container start c609f1ed906cbfdb54e71e15e46f9d39dbab85af84ea2aa88686bcedc3779372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_beaver, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 05:48:50 np0005593232 podman[397157]: 2026-01-23 10:48:50.29541232 +0000 UTC m=+0.154223965 container attach c609f1ed906cbfdb54e71e15e46f9d39dbab85af84ea2aa88686bcedc3779372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 05:48:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3682: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.1 MiB/s rd, 4.2 MiB/s wr, 214 op/s
Jan 23 05:48:51 np0005593232 quizzical_beaver[397175]: {
Jan 23 05:48:51 np0005593232 quizzical_beaver[397175]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:48:51 np0005593232 quizzical_beaver[397175]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:48:51 np0005593232 quizzical_beaver[397175]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:48:51 np0005593232 quizzical_beaver[397175]:        "osd_id": 0,
Jan 23 05:48:51 np0005593232 quizzical_beaver[397175]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:48:51 np0005593232 quizzical_beaver[397175]:        "type": "bluestore"
Jan 23 05:48:51 np0005593232 quizzical_beaver[397175]:    }
Jan 23 05:48:51 np0005593232 quizzical_beaver[397175]: }
Jan 23 05:48:51 np0005593232 systemd[1]: libpod-c609f1ed906cbfdb54e71e15e46f9d39dbab85af84ea2aa88686bcedc3779372.scope: Deactivated successfully.
Jan 23 05:48:51 np0005593232 podman[397157]: 2026-01-23 10:48:51.148997683 +0000 UTC m=+1.007809338 container died c609f1ed906cbfdb54e71e15e46f9d39dbab85af84ea2aa88686bcedc3779372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_beaver, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 23 05:48:51 np0005593232 systemd[1]: var-lib-containers-storage-overlay-117aae022cd91c136f0ae89ca3ea64af07376bce6520dbbb7bc2de466034fa7b-merged.mount: Deactivated successfully.
Jan 23 05:48:51 np0005593232 podman[397157]: 2026-01-23 10:48:51.219092715 +0000 UTC m=+1.077904360 container remove c609f1ed906cbfdb54e71e15e46f9d39dbab85af84ea2aa88686bcedc3779372 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 05:48:51 np0005593232 systemd[1]: libpod-conmon-c609f1ed906cbfdb54e71e15e46f9d39dbab85af84ea2aa88686bcedc3779372.scope: Deactivated successfully.
Jan 23 05:48:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:48:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:48:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:48:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:48:51 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev aaeebc81-3e27-4c28-a784-010a7a8d5965 does not exist
Jan 23 05:48:51 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 5e951579-3ec2-4d4b-bbdb-8d97cdfc107b does not exist
Jan 23 05:48:51 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 411a9a55-fdbc-460e-8dc5-ad228ff8870a does not exist
Jan 23 05:48:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:48:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:51.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:48:51 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:48:51 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:48:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:51.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:52 np0005593232 nova_compute[250269]: 2026-01-23 10:48:52.017 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3683: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 144 op/s
Jan 23 05:48:53 np0005593232 nova_compute[250269]: 2026-01-23 10:48:53.169 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:48:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:53.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:48:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:53.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3684: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 19 KiB/s wr, 116 op/s
Jan 23 05:48:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:48:55 np0005593232 podman[397258]: 2026-01-23 10:48:55.40990975 +0000 UTC m=+0.062996442 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 23 05:48:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:48:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:55.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:48:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:55.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3685: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 1.7 KiB/s wr, 92 op/s
Jan 23 05:48:57 np0005593232 nova_compute[250269]: 2026-01-23 10:48:57.019 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:57 np0005593232 ovn_controller[151001]: 2026-01-23T10:48:57Z|00821|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Jan 23 05:48:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:57.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:57.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:58 np0005593232 nova_compute[250269]: 2026-01-23 10:48:58.212 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:48:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3686: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 582 KiB/s rd, 15 KiB/s wr, 46 op/s
Jan 23 05:48:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:48:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:48:59.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:48:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:48:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:48:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:48:59.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3687: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 534 KiB/s rd, 14 KiB/s wr, 42 op/s
Jan 23 05:49:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:49:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:01.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:49:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:01.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:02 np0005593232 nova_compute[250269]: 2026-01-23 10:49:02.061 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3688: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 12 KiB/s wr, 48 op/s
Jan 23 05:49:03 np0005593232 nova_compute[250269]: 2026-01-23 10:49:03.248 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:03.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:49:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:03.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:49:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3689: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 12 KiB/s wr, 48 op/s
Jan 23 05:49:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:49:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:49:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:05.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:49:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:05.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3690: 321 pgs: 321 active+clean; 189 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 533 KiB/s rd, 23 KiB/s wr, 51 op/s
Jan 23 05:49:07 np0005593232 nova_compute[250269]: 2026-01-23 10:49:07.063 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:49:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:49:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:49:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:49:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:49:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:49:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:49:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:07.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:49:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:07.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:49:07.861 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=84, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=83) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:49:07 np0005593232 nova_compute[250269]: 2026-01-23 10:49:07.862 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:49:07.863 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:49:08 np0005593232 nova_compute[250269]: 2026-01-23 10:49:08.292 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3691: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 547 KiB/s rd, 23 KiB/s wr, 75 op/s
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #186. Immutable memtables: 0.
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:09.670644) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 186
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165349670688, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 1882, "num_deletes": 254, "total_data_size": 3264411, "memory_usage": 3331480, "flush_reason": "Manual Compaction"}
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #187: started
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165349693437, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 187, "file_size": 1972973, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79683, "largest_seqno": 81564, "table_properties": {"data_size": 1966413, "index_size": 3441, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 17287, "raw_average_key_size": 21, "raw_value_size": 1951870, "raw_average_value_size": 2409, "num_data_blocks": 152, "num_entries": 810, "num_filter_entries": 810, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769165178, "oldest_key_time": 1769165178, "file_creation_time": 1769165349, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 187, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 23065 microseconds, and 12025 cpu microseconds.
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:09.693698) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #187: 1972973 bytes OK
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:09.693785) [db/memtable_list.cc:519] [default] Level-0 commit table #187 started
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:09.695672) [db/memtable_list.cc:722] [default] Level-0 commit table #187: memtable #1 done
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:09.695696) EVENT_LOG_v1 {"time_micros": 1769165349695688, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:09.695720) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 3256431, prev total WAL file size 3256431, number of live WAL files 2.
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000183.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:09.697942) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303039' seq:72057594037927935, type:22 .. '6D6772737461740033323630' seq:0, type:0; will stop at (end)
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [187(1926KB)], [185(12MB)]
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165349698001, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [187], "files_L6": [185], "score": -1, "input_data_size": 15008840, "oldest_snapshot_seqno": -1}
Jan 23 05:49:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:09.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:09.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #188: 10764 keys, 12381743 bytes, temperature: kUnknown
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165349817958, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 188, "file_size": 12381743, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12314727, "index_size": 38999, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26949, "raw_key_size": 283160, "raw_average_key_size": 26, "raw_value_size": 12129061, "raw_average_value_size": 1126, "num_data_blocks": 1483, "num_entries": 10764, "num_filter_entries": 10764, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769165349, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:09.819180) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 12381743 bytes
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:09.821989) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 124.3 rd, 102.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 12.4 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(13.9) write-amplify(6.3) OK, records in: 11210, records dropped: 446 output_compression: NoCompression
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:09.822030) EVENT_LOG_v1 {"time_micros": 1769165349822012, "job": 116, "event": "compaction_finished", "compaction_time_micros": 120777, "compaction_time_cpu_micros": 54845, "output_level": 6, "num_output_files": 1, "total_output_size": 12381743, "num_input_records": 11210, "num_output_records": 10764, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000187.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165349823538, "job": 116, "event": "table_file_deletion", "file_number": 187}
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165349828801, "job": 116, "event": "table_file_deletion", "file_number": 185}
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:09.697836) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:09.828925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:09.828933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:09.828938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:09.828942) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:49:09 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:09.828946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:49:10 np0005593232 nova_compute[250269]: 2026-01-23 10:49:10.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:49:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3692: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 12 KiB/s wr, 38 op/s
Jan 23 05:49:11 np0005593232 nova_compute[250269]: 2026-01-23 10:49:11.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:49:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:49:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:11.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:49:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:11.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:12 np0005593232 nova_compute[250269]: 2026-01-23 10:49:12.065 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3693: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 68 KiB/s rd, 12 KiB/s wr, 39 op/s
Jan 23 05:49:13 np0005593232 nova_compute[250269]: 2026-01-23 10:49:13.346 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:49:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:13.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:49:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:13.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3694: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 12 KiB/s wr, 41 op/s
Jan 23 05:49:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:49:15 np0005593232 nova_compute[250269]: 2026-01-23 10:49:15.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:49:15 np0005593232 nova_compute[250269]: 2026-01-23 10:49:15.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:49:15 np0005593232 nova_compute[250269]: 2026-01-23 10:49:15.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:49:15 np0005593232 nova_compute[250269]: 2026-01-23 10:49:15.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 23 05:49:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:15.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:15.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:15 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:49:15.865 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '84'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:49:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3695: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 32 KiB/s rd, 12 KiB/s wr, 44 op/s
Jan 23 05:49:17 np0005593232 nova_compute[250269]: 2026-01-23 10:49:17.065 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 05:49:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:17.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 05:49:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:17.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:18 np0005593232 nova_compute[250269]: 2026-01-23 10:49:18.387 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3696: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 39 KiB/s rd, 1.2 KiB/s wr, 52 op/s
Jan 23 05:49:19 np0005593232 nova_compute[250269]: 2026-01-23 10:49:19.318 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:49:19 np0005593232 nova_compute[250269]: 2026-01-23 10:49:19.319 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:49:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:49:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:49:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:19.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:49:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:19.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:20 np0005593232 podman[397341]: 2026-01-23 10:49:20.446902626 +0000 UTC m=+0.093734165 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 23 05:49:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3697: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 511 B/s wr, 26 op/s
Jan 23 05:49:21 np0005593232 nova_compute[250269]: 2026-01-23 10:49:21.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:49:21 np0005593232 nova_compute[250269]: 2026-01-23 10:49:21.294 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:49:21 np0005593232 nova_compute[250269]: 2026-01-23 10:49:21.294 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:49:21 np0005593232 nova_compute[250269]: 2026-01-23 10:49:21.339 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:49:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 05:49:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:21.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 05:49:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:21.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:22 np0005593232 nova_compute[250269]: 2026-01-23 10:49:22.068 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3698: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 20 KiB/s rd, 511 B/s wr, 26 op/s
Jan 23 05:49:23 np0005593232 nova_compute[250269]: 2026-01-23 10:49:23.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:49:23 np0005593232 nova_compute[250269]: 2026-01-23 10:49:23.390 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:23.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:23.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3699: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 511 B/s wr, 25 op/s
Jan 23 05:49:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:49:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:49:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:25.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:49:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:49:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:25.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:49:25 np0005593232 podman[397369]: 2026-01-23 10:49:25.943249797 +0000 UTC m=+0.090172164 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 05:49:26 np0005593232 nova_compute[250269]: 2026-01-23 10:49:26.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:49:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3700: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 511 B/s wr, 14 op/s
Jan 23 05:49:27 np0005593232 nova_compute[250269]: 2026-01-23 10:49:27.071 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:27.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:27.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:28 np0005593232 nova_compute[250269]: 2026-01-23 10:49:28.407 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3701: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 9.0 KiB/s rd, 0 B/s wr, 11 op/s
Jan 23 05:49:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:49:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:29.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:29.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3702: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:49:31 np0005593232 nova_compute[250269]: 2026-01-23 10:49:31.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:49:31 np0005593232 nova_compute[250269]: 2026-01-23 10:49:31.358 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:49:31 np0005593232 nova_compute[250269]: 2026-01-23 10:49:31.359 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:49:31 np0005593232 nova_compute[250269]: 2026-01-23 10:49:31.359 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:49:31 np0005593232 nova_compute[250269]: 2026-01-23 10:49:31.359 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:49:31 np0005593232 nova_compute[250269]: 2026-01-23 10:49:31.360 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:49:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:49:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:31.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:49:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:31.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:49:31 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3897209201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:49:31 np0005593232 nova_compute[250269]: 2026-01-23 10:49:31.884 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:49:32 np0005593232 nova_compute[250269]: 2026-01-23 10:49:32.056 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:49:32 np0005593232 nova_compute[250269]: 2026-01-23 10:49:32.057 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4135MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:49:32 np0005593232 nova_compute[250269]: 2026-01-23 10:49:32.058 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:49:32 np0005593232 nova_compute[250269]: 2026-01-23 10:49:32.058 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:49:32 np0005593232 nova_compute[250269]: 2026-01-23 10:49:32.073 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:32 np0005593232 nova_compute[250269]: 2026-01-23 10:49:32.175 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:49:32 np0005593232 nova_compute[250269]: 2026-01-23 10:49:32.176 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:49:32 np0005593232 nova_compute[250269]: 2026-01-23 10:49:32.198 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:49:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3703: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:49:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:49:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4025321835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:49:32 np0005593232 nova_compute[250269]: 2026-01-23 10:49:32.751 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:49:32 np0005593232 nova_compute[250269]: 2026-01-23 10:49:32.760 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:49:33 np0005593232 nova_compute[250269]: 2026-01-23 10:49:33.410 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:49:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:33.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:49:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:33.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3704: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:49:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:49:35 np0005593232 nova_compute[250269]: 2026-01-23 10:49:35.408 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:49:35 np0005593232 nova_compute[250269]: 2026-01-23 10:49:35.411 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:49:35 np0005593232 nova_compute[250269]: 2026-01-23 10:49:35.412 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.354s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:49:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:49:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:35.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:49:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:35.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3705: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:49:37 np0005593232 nova_compute[250269]: 2026-01-23 10:49:37.252 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:49:37
Jan 23 05:49:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:49:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:49:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'volumes', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'vms']
Jan 23 05:49:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:49:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:49:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:49:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:49:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:49:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:49:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:49:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:49:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:37.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:49:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:37.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:38 np0005593232 nova_compute[250269]: 2026-01-23 10:49:38.446 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3706: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:49:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:49:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:49:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:49:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:49:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:49:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:49:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:49:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:49:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:49:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:49:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:49:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:39.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:39.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3707: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e415 do_prune osdmap full prune enabled
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e416 e416: 3 total, 3 up, 3 in
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e416: 3 total, 3 up, 3 in
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #189. Immutable memtables: 0.
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:41.701131) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 189
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165381701225, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 530, "num_deletes": 251, "total_data_size": 556701, "memory_usage": 566680, "flush_reason": "Manual Compaction"}
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #190: started
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165381710780, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 190, "file_size": 551401, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 81565, "largest_seqno": 82094, "table_properties": {"data_size": 548474, "index_size": 898, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7009, "raw_average_key_size": 19, "raw_value_size": 542600, "raw_average_value_size": 1478, "num_data_blocks": 40, "num_entries": 367, "num_filter_entries": 367, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769165349, "oldest_key_time": 1769165349, "file_creation_time": 1769165381, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 190, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 9787 microseconds, and 4509 cpu microseconds.
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:41.710860) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #190: 551401 bytes OK
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:41.710948) [db/memtable_list.cc:519] [default] Level-0 commit table #190 started
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:41.713330) [db/memtable_list.cc:722] [default] Level-0 commit table #190: memtable #1 done
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:41.713378) EVENT_LOG_v1 {"time_micros": 1769165381713350, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:41.713403) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 553700, prev total WAL file size 553700, number of live WAL files 2.
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000186.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:41.714530) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [190(538KB)], [188(11MB)]
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165381714618, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [190], "files_L6": [188], "score": -1, "input_data_size": 12933144, "oldest_snapshot_seqno": -1}
Jan 23 05:49:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:41.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #191: 10617 keys, 11027031 bytes, temperature: kUnknown
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165381803825, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 191, "file_size": 11027031, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10962185, "index_size": 37213, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26565, "raw_key_size": 280807, "raw_average_key_size": 26, "raw_value_size": 10780239, "raw_average_value_size": 1015, "num_data_blocks": 1401, "num_entries": 10617, "num_filter_entries": 10617, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769165381, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:41.804310) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 11027031 bytes
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:41.806263) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.7 rd, 123.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 11.8 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(43.5) write-amplify(20.0) OK, records in: 11131, records dropped: 514 output_compression: NoCompression
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:41.806289) EVENT_LOG_v1 {"time_micros": 1769165381806276, "job": 118, "event": "compaction_finished", "compaction_time_micros": 89398, "compaction_time_cpu_micros": 34847, "output_level": 6, "num_output_files": 1, "total_output_size": 11027031, "num_input_records": 11131, "num_output_records": 10617, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000190.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165381806638, "job": 118, "event": "table_file_deletion", "file_number": 190}
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165381810407, "job": 118, "event": "table_file_deletion", "file_number": 188}
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:41.714411) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:41.810540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:41.810553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:41.810557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:41.810560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:49:41 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:49:41.810564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:49:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:49:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:41.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:49:42 np0005593232 nova_compute[250269]: 2026-01-23 10:49:42.282 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3709: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 307 B/s rd, 409 B/s wr, 0 op/s
Jan 23 05:49:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:49:42.671 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:49:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:49:42.672 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:49:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:49:42.672 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:49:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e416 do_prune osdmap full prune enabled
Jan 23 05:49:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e417 e417: 3 total, 3 up, 3 in
Jan 23 05:49:42 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e417: 3 total, 3 up, 3 in
Jan 23 05:49:43 np0005593232 nova_compute[250269]: 2026-01-23 10:49:43.492 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:49:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:43.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:49:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:43.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3711: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Jan 23 05:49:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:49:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:49:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/806638353' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:49:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:49:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/806638353' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:49:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:49:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:45.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:49:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:45.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3712: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 2.1 KiB/s wr, 44 op/s
Jan 23 05:49:47 np0005593232 nova_compute[250269]: 2026-01-23 10:49:47.285 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:49:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:49:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:49:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:49:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 05:49:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:49:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 05:49:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:49:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 05:49:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:49:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 05:49:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:49:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 05:49:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:49:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:49:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:49:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 05:49:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:49:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 05:49:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:49:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:49:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:49:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 05:49:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:49:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:47.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:49:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:49:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:47.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:49:48 np0005593232 nova_compute[250269]: 2026-01-23 10:49:48.531 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3713: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 2.1 KiB/s wr, 45 op/s
Jan 23 05:49:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:49:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e417 do_prune osdmap full prune enabled
Jan 23 05:49:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 e418: 3 total, 3 up, 3 in
Jan 23 05:49:49 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e418: 3 total, 3 up, 3 in
Jan 23 05:49:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:49.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:49:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:49.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:49:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3715: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 33 KiB/s rd, 1.6 KiB/s wr, 44 op/s
Jan 23 05:49:51 np0005593232 podman[397547]: 2026-01-23 10:49:51.487346156 +0000 UTC m=+0.145391683 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 05:49:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:49:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:51.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:49:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:51.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:52 np0005593232 nova_compute[250269]: 2026-01-23 10:49:52.287 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3716: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 27 KiB/s rd, 1.3 KiB/s wr, 35 op/s
Jan 23 05:49:53 np0005593232 nova_compute[250269]: 2026-01-23 10:49:53.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:49:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 05:49:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:49:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 05:49:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:49:53 np0005593232 nova_compute[250269]: 2026-01-23 10:49:53.534 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:53.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:53 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:49:53 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:49:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:53.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:49:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:49:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:49:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:49:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:49:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:49:54 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 9a3d3808-efe1-416a-8d17-f654487711b1 does not exist
Jan 23 05:49:54 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 80c203bf-9640-4df8-aa7f-93cfacd3961e does not exist
Jan 23 05:49:54 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 9776b032-ea46-4e36-8389-9f79081cb19a does not exist
Jan 23 05:49:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:49:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:49:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:49:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:49:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:49:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:49:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3717: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 26 KiB/s rd, 1.3 KiB/s wr, 34 op/s
Jan 23 05:49:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:49:54 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:49:54 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:49:54 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:49:54 np0005593232 podman[397848]: 2026-01-23 10:49:54.948142259 +0000 UTC m=+0.039962857 container create ffb1d11dac10f74bedca8b6ca0cf3475bf1e5659b6c69e118ce6f28028634533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 05:49:54 np0005593232 systemd[1]: Started libpod-conmon-ffb1d11dac10f74bedca8b6ca0cf3475bf1e5659b6c69e118ce6f28028634533.scope.
Jan 23 05:49:55 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:49:55 np0005593232 podman[397848]: 2026-01-23 10:49:55.018057566 +0000 UTC m=+0.109878184 container init ffb1d11dac10f74bedca8b6ca0cf3475bf1e5659b6c69e118ce6f28028634533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lederberg, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:49:55 np0005593232 podman[397848]: 2026-01-23 10:49:55.024385626 +0000 UTC m=+0.116206234 container start ffb1d11dac10f74bedca8b6ca0cf3475bf1e5659b6c69e118ce6f28028634533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 05:49:55 np0005593232 podman[397848]: 2026-01-23 10:49:54.929312554 +0000 UTC m=+0.021133182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:49:55 np0005593232 podman[397848]: 2026-01-23 10:49:55.027530826 +0000 UTC m=+0.119351454 container attach ffb1d11dac10f74bedca8b6ca0cf3475bf1e5659b6c69e118ce6f28028634533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 05:49:55 np0005593232 boring_lederberg[397864]: 167 167
Jan 23 05:49:55 np0005593232 systemd[1]: libpod-ffb1d11dac10f74bedca8b6ca0cf3475bf1e5659b6c69e118ce6f28028634533.scope: Deactivated successfully.
Jan 23 05:49:55 np0005593232 podman[397848]: 2026-01-23 10:49:55.031464318 +0000 UTC m=+0.123284946 container died ffb1d11dac10f74bedca8b6ca0cf3475bf1e5659b6c69e118ce6f28028634533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lederberg, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 05:49:55 np0005593232 systemd[1]: var-lib-containers-storage-overlay-528cd4a6259cac04632e197047f82671d10bb9f50da5a66450ecfce535033d7c-merged.mount: Deactivated successfully.
Jan 23 05:49:55 np0005593232 podman[397848]: 2026-01-23 10:49:55.067162622 +0000 UTC m=+0.158983230 container remove ffb1d11dac10f74bedca8b6ca0cf3475bf1e5659b6c69e118ce6f28028634533 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Jan 23 05:49:55 np0005593232 systemd[1]: libpod-conmon-ffb1d11dac10f74bedca8b6ca0cf3475bf1e5659b6c69e118ce6f28028634533.scope: Deactivated successfully.
Jan 23 05:49:55 np0005593232 podman[397888]: 2026-01-23 10:49:55.23983174 +0000 UTC m=+0.044123115 container create 0b21433191c6d21c87e2d9d74507af183c0123262e59f65e1ae8391f895eaf5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shaw, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:49:55 np0005593232 systemd[1]: Started libpod-conmon-0b21433191c6d21c87e2d9d74507af183c0123262e59f65e1ae8391f895eaf5e.scope.
Jan 23 05:49:55 np0005593232 podman[397888]: 2026-01-23 10:49:55.218927126 +0000 UTC m=+0.023218501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:49:55 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:49:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0130c475ae6b3906e6bea35ddfe8ed6a7cc01a433f8d462c5c3d895615b7865/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:49:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0130c475ae6b3906e6bea35ddfe8ed6a7cc01a433f8d462c5c3d895615b7865/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:49:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0130c475ae6b3906e6bea35ddfe8ed6a7cc01a433f8d462c5c3d895615b7865/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:49:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0130c475ae6b3906e6bea35ddfe8ed6a7cc01a433f8d462c5c3d895615b7865/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:49:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0130c475ae6b3906e6bea35ddfe8ed6a7cc01a433f8d462c5c3d895615b7865/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:49:55 np0005593232 podman[397888]: 2026-01-23 10:49:55.345686549 +0000 UTC m=+0.149977924 container init 0b21433191c6d21c87e2d9d74507af183c0123262e59f65e1ae8391f895eaf5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:49:55 np0005593232 podman[397888]: 2026-01-23 10:49:55.351440003 +0000 UTC m=+0.155731358 container start 0b21433191c6d21c87e2d9d74507af183c0123262e59f65e1ae8391f895eaf5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shaw, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 05:49:55 np0005593232 podman[397888]: 2026-01-23 10:49:55.354445358 +0000 UTC m=+0.158736743 container attach 0b21433191c6d21c87e2d9d74507af183c0123262e59f65e1ae8391f895eaf5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shaw, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:49:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:49:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:55.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:49:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:55.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:56 np0005593232 frosty_shaw[397904]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:49:56 np0005593232 frosty_shaw[397904]: --> relative data size: 1.0
Jan 23 05:49:56 np0005593232 frosty_shaw[397904]: --> All data devices are unavailable
Jan 23 05:49:56 np0005593232 systemd[1]: libpod-0b21433191c6d21c87e2d9d74507af183c0123262e59f65e1ae8391f895eaf5e.scope: Deactivated successfully.
Jan 23 05:49:56 np0005593232 conmon[397904]: conmon 0b21433191c6d21c87e2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0b21433191c6d21c87e2d9d74507af183c0123262e59f65e1ae8391f895eaf5e.scope/container/memory.events
Jan 23 05:49:56 np0005593232 podman[397920]: 2026-01-23 10:49:56.248151032 +0000 UTC m=+0.029832119 container died 0b21433191c6d21c87e2d9d74507af183c0123262e59f65e1ae8391f895eaf5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shaw, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:49:56 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b0130c475ae6b3906e6bea35ddfe8ed6a7cc01a433f8d462c5c3d895615b7865-merged.mount: Deactivated successfully.
Jan 23 05:49:56 np0005593232 podman[397920]: 2026-01-23 10:49:56.328587717 +0000 UTC m=+0.110268734 container remove 0b21433191c6d21c87e2d9d74507af183c0123262e59f65e1ae8391f895eaf5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 05:49:56 np0005593232 systemd[1]: libpod-conmon-0b21433191c6d21c87e2d9d74507af183c0123262e59f65e1ae8391f895eaf5e.scope: Deactivated successfully.
Jan 23 05:49:56 np0005593232 podman[397921]: 2026-01-23 10:49:56.340672741 +0000 UTC m=+0.094014983 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 23 05:49:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3718: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 921 B/s rd, 307 B/s wr, 1 op/s
Jan 23 05:49:57 np0005593232 podman[398096]: 2026-01-23 10:49:57.052164855 +0000 UTC m=+0.055834058 container create c9a298bd664e027d47e07f519cbf686a62bedac589f79d68e472df62df707be8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ganguly, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 05:49:57 np0005593232 systemd[1]: Started libpod-conmon-c9a298bd664e027d47e07f519cbf686a62bedac589f79d68e472df62df707be8.scope.
Jan 23 05:49:57 np0005593232 podman[398096]: 2026-01-23 10:49:57.030822558 +0000 UTC m=+0.034491791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:49:57 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:49:57 np0005593232 podman[398096]: 2026-01-23 10:49:57.15963699 +0000 UTC m=+0.163306253 container init c9a298bd664e027d47e07f519cbf686a62bedac589f79d68e472df62df707be8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ganguly, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:49:57 np0005593232 podman[398096]: 2026-01-23 10:49:57.169920152 +0000 UTC m=+0.173589375 container start c9a298bd664e027d47e07f519cbf686a62bedac589f79d68e472df62df707be8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ganguly, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:49:57 np0005593232 podman[398096]: 2026-01-23 10:49:57.173825163 +0000 UTC m=+0.177494396 container attach c9a298bd664e027d47e07f519cbf686a62bedac589f79d68e472df62df707be8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ganguly, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:49:57 np0005593232 relaxed_ganguly[398113]: 167 167
Jan 23 05:49:57 np0005593232 systemd[1]: libpod-c9a298bd664e027d47e07f519cbf686a62bedac589f79d68e472df62df707be8.scope: Deactivated successfully.
Jan 23 05:49:57 np0005593232 podman[398096]: 2026-01-23 10:49:57.178069674 +0000 UTC m=+0.181738877 container died c9a298bd664e027d47e07f519cbf686a62bedac589f79d68e472df62df707be8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ganguly, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 05:49:57 np0005593232 systemd[1]: var-lib-containers-storage-overlay-02fe5f46986a658360c286b4995227b75d721a5965125f770ca84b7064d23dfd-merged.mount: Deactivated successfully.
Jan 23 05:49:57 np0005593232 podman[398096]: 2026-01-23 10:49:57.221586141 +0000 UTC m=+0.225255364 container remove c9a298bd664e027d47e07f519cbf686a62bedac589f79d68e472df62df707be8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ganguly, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:49:57 np0005593232 systemd[1]: libpod-conmon-c9a298bd664e027d47e07f519cbf686a62bedac589f79d68e472df62df707be8.scope: Deactivated successfully.
Jan 23 05:49:57 np0005593232 nova_compute[250269]: 2026-01-23 10:49:57.326 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:57 np0005593232 podman[398137]: 2026-01-23 10:49:57.483131245 +0000 UTC m=+0.061704665 container create 10b9409f7d3155f5680e1b531197da868a796ae60cd9cf9538c4b0516f0a6768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ellis, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:49:57 np0005593232 systemd[1]: Started libpod-conmon-10b9409f7d3155f5680e1b531197da868a796ae60cd9cf9538c4b0516f0a6768.scope.
Jan 23 05:49:57 np0005593232 podman[398137]: 2026-01-23 10:49:57.467599543 +0000 UTC m=+0.046172983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:49:57 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:49:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce7024e2e5bfe337fcfeeab152bf1ff69029606df13a99a6bb7230f5768c8b3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:49:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce7024e2e5bfe337fcfeeab152bf1ff69029606df13a99a6bb7230f5768c8b3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:49:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce7024e2e5bfe337fcfeeab152bf1ff69029606df13a99a6bb7230f5768c8b3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:49:57 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce7024e2e5bfe337fcfeeab152bf1ff69029606df13a99a6bb7230f5768c8b3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:49:57 np0005593232 podman[398137]: 2026-01-23 10:49:57.593895193 +0000 UTC m=+0.172468623 container init 10b9409f7d3155f5680e1b531197da868a796ae60cd9cf9538c4b0516f0a6768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ellis, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 05:49:57 np0005593232 podman[398137]: 2026-01-23 10:49:57.609460186 +0000 UTC m=+0.188033606 container start 10b9409f7d3155f5680e1b531197da868a796ae60cd9cf9538c4b0516f0a6768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ellis, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 05:49:57 np0005593232 podman[398137]: 2026-01-23 10:49:57.613354196 +0000 UTC m=+0.191927686 container attach 10b9409f7d3155f5680e1b531197da868a796ae60cd9cf9538c4b0516f0a6768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ellis, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:49:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:57.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:57.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:58 np0005593232 ovn_controller[151001]: 2026-01-23T10:49:58Z|00822|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]: {
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:    "0": [
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:        {
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:            "devices": [
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:                "/dev/loop3"
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:            ],
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:            "lv_name": "ceph_lv0",
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:            "lv_size": "7511998464",
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:            "name": "ceph_lv0",
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:            "tags": {
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:                "ceph.cluster_name": "ceph",
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:                "ceph.crush_device_class": "",
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:                "ceph.encrypted": "0",
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:                "ceph.osd_id": "0",
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:                "ceph.type": "block",
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:                "ceph.vdo": "0"
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:            },
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:            "type": "block",
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:            "vg_name": "ceph_vg0"
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:        }
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]:    ]
Jan 23 05:49:58 np0005593232 intelligent_ellis[398154]: }
Jan 23 05:49:58 np0005593232 systemd[1]: libpod-10b9409f7d3155f5680e1b531197da868a796ae60cd9cf9538c4b0516f0a6768.scope: Deactivated successfully.
Jan 23 05:49:58 np0005593232 podman[398164]: 2026-01-23 10:49:58.455162535 +0000 UTC m=+0.030055955 container died 10b9409f7d3155f5680e1b531197da868a796ae60cd9cf9538c4b0516f0a6768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 23 05:49:58 np0005593232 nova_compute[250269]: 2026-01-23 10:49:58.578 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:49:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3719: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 818 B/s wr, 2 op/s
Jan 23 05:49:58 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ce7024e2e5bfe337fcfeeab152bf1ff69029606df13a99a6bb7230f5768c8b3d-merged.mount: Deactivated successfully.
Jan 23 05:49:58 np0005593232 podman[398164]: 2026-01-23 10:49:58.957736201 +0000 UTC m=+0.532629541 container remove 10b9409f7d3155f5680e1b531197da868a796ae60cd9cf9538c4b0516f0a6768 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:49:58 np0005593232 systemd[1]: libpod-conmon-10b9409f7d3155f5680e1b531197da868a796ae60cd9cf9538c4b0516f0a6768.scope: Deactivated successfully.
Jan 23 05:49:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:49:59 np0005593232 podman[398320]: 2026-01-23 10:49:59.765496542 +0000 UTC m=+0.066992575 container create c129ae57c394656a6b2b6070768b79fe81a24c7871a1e52723eaeb60c6936d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mahavira, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 23 05:49:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:49:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:49:59.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:49:59 np0005593232 systemd[1]: Started libpod-conmon-c129ae57c394656a6b2b6070768b79fe81a24c7871a1e52723eaeb60c6936d19.scope.
Jan 23 05:49:59 np0005593232 podman[398320]: 2026-01-23 10:49:59.735507749 +0000 UTC m=+0.037003842 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:49:59 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:49:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:49:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:49:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:49:59.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:49:59 np0005593232 podman[398320]: 2026-01-23 10:49:59.873943873 +0000 UTC m=+0.175439916 container init c129ae57c394656a6b2b6070768b79fe81a24c7871a1e52723eaeb60c6936d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mahavira, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:49:59 np0005593232 podman[398320]: 2026-01-23 10:49:59.881736845 +0000 UTC m=+0.183232858 container start c129ae57c394656a6b2b6070768b79fe81a24c7871a1e52723eaeb60c6936d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mahavira, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:49:59 np0005593232 podman[398320]: 2026-01-23 10:49:59.885220514 +0000 UTC m=+0.186716607 container attach c129ae57c394656a6b2b6070768b79fe81a24c7871a1e52723eaeb60c6936d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 05:49:59 np0005593232 mystifying_mahavira[398336]: 167 167
Jan 23 05:49:59 np0005593232 systemd[1]: libpod-c129ae57c394656a6b2b6070768b79fe81a24c7871a1e52723eaeb60c6936d19.scope: Deactivated successfully.
Jan 23 05:49:59 np0005593232 conmon[398336]: conmon c129ae57c394656a6b2b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c129ae57c394656a6b2b6070768b79fe81a24c7871a1e52723eaeb60c6936d19.scope/container/memory.events
Jan 23 05:49:59 np0005593232 podman[398320]: 2026-01-23 10:49:59.892048128 +0000 UTC m=+0.193544201 container died c129ae57c394656a6b2b6070768b79fe81a24c7871a1e52723eaeb60c6936d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mahavira, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 05:49:59 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e16def8ce5a9c1022996fc8573e0c802b3644b5ed93e7846de3fee76a24b7e2b-merged.mount: Deactivated successfully.
Jan 23 05:49:59 np0005593232 podman[398320]: 2026-01-23 10:49:59.940060853 +0000 UTC m=+0.241556866 container remove c129ae57c394656a6b2b6070768b79fe81a24c7871a1e52723eaeb60c6936d19 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 05:49:59 np0005593232 systemd[1]: libpod-conmon-c129ae57c394656a6b2b6070768b79fe81a24c7871a1e52723eaeb60c6936d19.scope: Deactivated successfully.
Jan 23 05:50:00 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 23 05:50:00 np0005593232 ceph-mon[74423]: overall HEALTH_OK
Jan 23 05:50:00 np0005593232 podman[398362]: 2026-01-23 10:50:00.175508986 +0000 UTC m=+0.051173586 container create 6857293a91e4ca4ffbc3fb7281695dc6b17b0187653dfd51f4df01f54114af6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hofstadter, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:50:00 np0005593232 systemd[1]: Started libpod-conmon-6857293a91e4ca4ffbc3fb7281695dc6b17b0187653dfd51f4df01f54114af6d.scope.
Jan 23 05:50:00 np0005593232 podman[398362]: 2026-01-23 10:50:00.152627055 +0000 UTC m=+0.028291645 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:50:00 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:50:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/808eae4198e2bba8ef763835c5da67f68324dd73092465935f741e818af05eb0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:50:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/808eae4198e2bba8ef763835c5da67f68324dd73092465935f741e818af05eb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:50:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/808eae4198e2bba8ef763835c5da67f68324dd73092465935f741e818af05eb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:50:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/808eae4198e2bba8ef763835c5da67f68324dd73092465935f741e818af05eb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:50:00 np0005593232 podman[398362]: 2026-01-23 10:50:00.274766197 +0000 UTC m=+0.150430857 container init 6857293a91e4ca4ffbc3fb7281695dc6b17b0187653dfd51f4df01f54114af6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 05:50:00 np0005593232 podman[398362]: 2026-01-23 10:50:00.287691974 +0000 UTC m=+0.163356564 container start 6857293a91e4ca4ffbc3fb7281695dc6b17b0187653dfd51f4df01f54114af6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hofstadter, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:50:00 np0005593232 podman[398362]: 2026-01-23 10:50:00.291204794 +0000 UTC m=+0.166869394 container attach 6857293a91e4ca4ffbc3fb7281695dc6b17b0187653dfd51f4df01f54114af6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hofstadter, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Jan 23 05:50:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3720: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 KiB/s rd, 751 B/s wr, 2 op/s
Jan 23 05:50:01 np0005593232 boring_hofstadter[398380]: {
Jan 23 05:50:01 np0005593232 boring_hofstadter[398380]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:50:01 np0005593232 boring_hofstadter[398380]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:50:01 np0005593232 boring_hofstadter[398380]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:50:01 np0005593232 boring_hofstadter[398380]:        "osd_id": 0,
Jan 23 05:50:01 np0005593232 boring_hofstadter[398380]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:50:01 np0005593232 boring_hofstadter[398380]:        "type": "bluestore"
Jan 23 05:50:01 np0005593232 boring_hofstadter[398380]:    }
Jan 23 05:50:01 np0005593232 boring_hofstadter[398380]: }
Jan 23 05:50:01 np0005593232 systemd[1]: libpod-6857293a91e4ca4ffbc3fb7281695dc6b17b0187653dfd51f4df01f54114af6d.scope: Deactivated successfully.
Jan 23 05:50:01 np0005593232 podman[398401]: 2026-01-23 10:50:01.229950028 +0000 UTC m=+0.029843739 container died 6857293a91e4ca4ffbc3fb7281695dc6b17b0187653dfd51f4df01f54114af6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hofstadter, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 23 05:50:01 np0005593232 systemd[1]: var-lib-containers-storage-overlay-808eae4198e2bba8ef763835c5da67f68324dd73092465935f741e818af05eb0-merged.mount: Deactivated successfully.
Jan 23 05:50:01 np0005593232 podman[398401]: 2026-01-23 10:50:01.283295935 +0000 UTC m=+0.083189626 container remove 6857293a91e4ca4ffbc3fb7281695dc6b17b0187653dfd51f4df01f54114af6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hofstadter, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:50:01 np0005593232 systemd[1]: libpod-conmon-6857293a91e4ca4ffbc3fb7281695dc6b17b0187653dfd51f4df01f54114af6d.scope: Deactivated successfully.
Jan 23 05:50:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:50:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:50:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:50:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:50:01 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c8ae2166-0559-4129-9f68-6481ec370e72 does not exist
Jan 23 05:50:01 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 2316452f-336f-4fd3-ac47-d95f8121ecf6 does not exist
Jan 23 05:50:01 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 211ddd30-4026-4767-8dc4-469a585e715b does not exist
Jan 23 05:50:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:01.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:01.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:02 np0005593232 nova_compute[250269]: 2026-01-23 10:50:02.329 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:02 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:50:02 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:50:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3721: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s
Jan 23 05:50:03 np0005593232 nova_compute[250269]: 2026-01-23 10:50:03.608 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:03 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0[81752]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 23 05:50:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:03.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:03.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3722: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 1023 B/s wr, 2 op/s
Jan 23 05:50:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:50:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:50:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3424775311' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:50:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:50:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3424775311' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:50:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:05.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:05.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:06 np0005593232 nova_compute[250269]: 2026-01-23 10:50:06.316 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:50:06 np0005593232 nova_compute[250269]: 2026-01-23 10:50:06.317 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 23 05:50:06 np0005593232 nova_compute[250269]: 2026-01-23 10:50:06.350 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 23 05:50:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3723: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 1.7 KiB/s wr, 20 op/s
Jan 23 05:50:07 np0005593232 nova_compute[250269]: 2026-01-23 10:50:07.385 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:50:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:50:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:50:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:50:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:50:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:50:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:50:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:07.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:50:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:07.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3724: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 31 KiB/s rd, 1.5 KiB/s wr, 42 op/s
Jan 23 05:50:08 np0005593232 nova_compute[250269]: 2026-01-23 10:50:08.649 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:50:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:09.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:09.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3725: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.1 KiB/s wr, 41 op/s
Jan 23 05:50:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:50:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:11.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:50:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:50:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:11.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:50:12 np0005593232 nova_compute[250269]: 2026-01-23 10:50:12.326 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:50:12 np0005593232 nova_compute[250269]: 2026-01-23 10:50:12.387 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3726: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 1.1 KiB/s wr, 41 op/s
Jan 23 05:50:13 np0005593232 nova_compute[250269]: 2026-01-23 10:50:13.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:50:13 np0005593232 nova_compute[250269]: 2026-01-23 10:50:13.651 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:50:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:13.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:50:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:50:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:13.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:50:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3727: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 767 B/s wr, 40 op/s
Jan 23 05:50:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:50:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:50:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:15.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:50:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:15.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:16 np0005593232 nova_compute[250269]: 2026-01-23 10:50:16.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:50:16 np0005593232 nova_compute[250269]: 2026-01-23 10:50:16.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:50:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3728: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 767 B/s wr, 40 op/s
Jan 23 05:50:17 np0005593232 nova_compute[250269]: 2026-01-23 10:50:17.414 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:17.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:17.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3729: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 18 KiB/s rd, 0 B/s wr, 22 op/s
Jan 23 05:50:18 np0005593232 nova_compute[250269]: 2026-01-23 10:50:18.827 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:50:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:50:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:19.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:50:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:19.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:20 np0005593232 nova_compute[250269]: 2026-01-23 10:50:20.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:50:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3730: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:50:21 np0005593232 nova_compute[250269]: 2026-01-23 10:50:21.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:50:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:50:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:21.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:50:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:21.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:22 np0005593232 nova_compute[250269]: 2026-01-23 10:50:22.415 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:22 np0005593232 podman[398529]: 2026-01-23 10:50:22.436277857 +0000 UTC m=+0.090358789 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:50:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3731: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:50:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:50:22.917 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=85, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=84) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:50:22 np0005593232 nova_compute[250269]: 2026-01-23 10:50:22.917 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:50:22.918 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:50:23 np0005593232 nova_compute[250269]: 2026-01-23 10:50:23.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:50:23 np0005593232 nova_compute[250269]: 2026-01-23 10:50:23.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:50:23 np0005593232 nova_compute[250269]: 2026-01-23 10:50:23.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:50:23 np0005593232 nova_compute[250269]: 2026-01-23 10:50:23.328 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:50:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:50:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:23.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:50:23 np0005593232 nova_compute[250269]: 2026-01-23 10:50:23.830 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:23.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:24 np0005593232 nova_compute[250269]: 2026-01-23 10:50:24.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:50:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3732: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:50:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:50:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:50:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:25.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:50:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:25.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3733: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:50:27 np0005593232 nova_compute[250269]: 2026-01-23 10:50:27.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:50:27 np0005593232 nova_compute[250269]: 2026-01-23 10:50:27.418 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:27 np0005593232 podman[398558]: 2026-01-23 10:50:27.450772943 +0000 UTC m=+0.098860101 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:50:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:27.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:27.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3734: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:50:28 np0005593232 nova_compute[250269]: 2026-01-23 10:50:28.832 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:50:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:29.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:50:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:29.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:50:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3735: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:50:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:50:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:31.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:50:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:31.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:32 np0005593232 nova_compute[250269]: 2026-01-23 10:50:32.421 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3736: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:50:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:50:32.920 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '85'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:50:33 np0005593232 nova_compute[250269]: 2026-01-23 10:50:33.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:50:33 np0005593232 nova_compute[250269]: 2026-01-23 10:50:33.330 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:50:33 np0005593232 nova_compute[250269]: 2026-01-23 10:50:33.331 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:50:33 np0005593232 nova_compute[250269]: 2026-01-23 10:50:33.331 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:50:33 np0005593232 nova_compute[250269]: 2026-01-23 10:50:33.332 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:50:33 np0005593232 nova_compute[250269]: 2026-01-23 10:50:33.332 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:50:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:50:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1895236892' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:50:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:50:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:33.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:50:33 np0005593232 nova_compute[250269]: 2026-01-23 10:50:33.834 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:33 np0005593232 nova_compute[250269]: 2026-01-23 10:50:33.836 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:50:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:33.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:34 np0005593232 nova_compute[250269]: 2026-01-23 10:50:34.129 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:50:34 np0005593232 nova_compute[250269]: 2026-01-23 10:50:34.131 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4144MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:50:34 np0005593232 nova_compute[250269]: 2026-01-23 10:50:34.131 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:50:34 np0005593232 nova_compute[250269]: 2026-01-23 10:50:34.132 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:50:34 np0005593232 nova_compute[250269]: 2026-01-23 10:50:34.223 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:50:34 np0005593232 nova_compute[250269]: 2026-01-23 10:50:34.224 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:50:34 np0005593232 nova_compute[250269]: 2026-01-23 10:50:34.243 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:50:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3737: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:50:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:50:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1969937587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:50:34 np0005593232 nova_compute[250269]: 2026-01-23 10:50:34.724 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:50:34 np0005593232 nova_compute[250269]: 2026-01-23 10:50:34.730 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:50:34 np0005593232 nova_compute[250269]: 2026-01-23 10:50:34.751 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:50:34 np0005593232 nova_compute[250269]: 2026-01-23 10:50:34.754 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:50:34 np0005593232 nova_compute[250269]: 2026-01-23 10:50:34.754 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:50:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:50:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:35.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:50:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:35.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:50:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3738: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:50:37 np0005593232 nova_compute[250269]: 2026-01-23 10:50:37.422 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:50:37
Jan 23 05:50:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:50:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:50:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['volumes', '.rgw.root', 'default.rgw.meta', 'backups', 'images', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data']
Jan 23 05:50:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:50:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:50:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:50:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:50:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:50:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:50:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:50:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:50:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:37.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:50:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:37.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3739: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:50:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:50:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:50:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:50:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:50:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:50:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:50:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:50:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:50:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:50:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:50:38 np0005593232 nova_compute[250269]: 2026-01-23 10:50:38.836 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:39 np0005593232 nova_compute[250269]: 2026-01-23 10:50:39.750 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:50:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:39.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:50:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:39.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3740: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:50:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:41.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:50:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:41.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:50:42 np0005593232 nova_compute[250269]: 2026-01-23 10:50:42.424 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3741: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:50:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:50:42.672 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:50:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:50:42.672 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:50:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:50:42.672 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:50:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:50:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:43.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:50:43 np0005593232 nova_compute[250269]: 2026-01-23 10:50:43.839 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:43.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3742: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:50:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:50:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:50:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:45.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:50:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:45.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3743: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:50:47 np0005593232 nova_compute[250269]: 2026-01-23 10:50:47.426 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:50:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:50:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:50:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:50:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 05:50:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:50:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 05:50:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:50:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 05:50:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:50:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 05:50:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:50:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 05:50:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:50:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:50:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:50:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 05:50:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:50:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 05:50:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:50:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:50:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:50:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 05:50:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:50:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:47.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:50:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:47.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3744: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:50:48 np0005593232 nova_compute[250269]: 2026-01-23 10:50:48.842 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:49.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:50:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:49.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3745: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:50:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:51.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:51.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:52 np0005593232 nova_compute[250269]: 2026-01-23 10:50:52.428 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3746: 321 pgs: 321 active+clean; 121 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 23 05:50:53 np0005593232 podman[398735]: 2026-01-23 10:50:53.483780802 +0000 UTC m=+0.136980225 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:50:53 np0005593232 ovn_controller[151001]: 2026-01-23T10:50:53Z|00823|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 23 05:50:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:50:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:53.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:50:53 np0005593232 nova_compute[250269]: 2026-01-23 10:50:53.844 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:53.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3747: 321 pgs: 321 active+clean; 121 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 23 05:50:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:50:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:55.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:50:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:55.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:50:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3748: 321 pgs: 321 active+clean; 121 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 23 05:50:57 np0005593232 nova_compute[250269]: 2026-01-23 10:50:57.430 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:50:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:57.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:50:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:57.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:58 np0005593232 podman[398766]: 2026-01-23 10:50:58.390666489 +0000 UTC m=+0.049628251 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 23 05:50:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3749: 321 pgs: 321 active+clean; 121 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 23 05:50:58 np0005593232 nova_compute[250269]: 2026-01-23 10:50:58.846 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:50:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:50:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:50:59.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:50:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:50:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:50:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:50:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:50:59.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:51:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3750: 321 pgs: 321 active+clean; 121 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Jan 23 05:51:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:01.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:01.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:02 np0005593232 nova_compute[250269]: 2026-01-23 10:51:02.432 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:51:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3751: 321 pgs: 321 active+clean; 145 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 13 KiB/s rd, 1.3 MiB/s wr, 18 op/s
Jan 23 05:51:02 np0005593232 podman[398960]: 2026-01-23 10:51:02.82449661 +0000 UTC m=+0.091926574 container exec 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:51:02 np0005593232 podman[398960]: 2026-01-23 10:51:02.957259964 +0000 UTC m=+0.224689918 container exec_died 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:51:03 np0005593232 podman[399113]: 2026-01-23 10:51:03.754597008 +0000 UTC m=+0.123212433 container exec b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 05:51:03 np0005593232 podman[399113]: 2026-01-23 10:51:03.781329968 +0000 UTC m=+0.149945373 container exec_died b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 05:51:03 np0005593232 nova_compute[250269]: 2026-01-23 10:51:03.848 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:51:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:03.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:03.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:04 np0005593232 podman[399178]: 2026-01-23 10:51:04.037979344 +0000 UTC m=+0.066209553 container exec 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, version=2.2.4, vendor=Red Hat, Inc., architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, name=keepalived, release=1793, io.openshift.expose-services=, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, distribution-scope=public, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 23 05:51:04 np0005593232 podman[399178]: 2026-01-23 10:51:04.050204521 +0000 UTC m=+0.078434750 container exec_died 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, release=1793, architecture=x86_64, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4)
Jan 23 05:51:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:51:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:51:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:51:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:51:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3752: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:51:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:51:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:51:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:51:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:51:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:51:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:51:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:51:05 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d83fbad5-c171-402b-94c8-fa1da112b8a5 does not exist
Jan 23 05:51:05 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 309fcfb3-6900-452c-b76f-b714b2522b58 does not exist
Jan 23 05:51:05 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c894ab84-6617-4c8c-b35f-b765c275c7f8 does not exist
Jan 23 05:51:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:51:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:51:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:51:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:51:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:51:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:51:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:51:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:51:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:51:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:51:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:05.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:51:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:05.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:06 np0005593232 podman[399482]: 2026-01-23 10:51:06.132347576 +0000 UTC m=+0.040716438 container create f90b2b4e3d8ac094950002003d22b5dc0c518f77d051c6d1c054c4f1c389af78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:51:06 np0005593232 systemd[1]: Started libpod-conmon-f90b2b4e3d8ac094950002003d22b5dc0c518f77d051c6d1c054c4f1c389af78.scope.
Jan 23 05:51:06 np0005593232 podman[399482]: 2026-01-23 10:51:06.115498367 +0000 UTC m=+0.023867249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:51:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:51:06 np0005593232 podman[399482]: 2026-01-23 10:51:06.285087128 +0000 UTC m=+0.193456080 container init f90b2b4e3d8ac094950002003d22b5dc0c518f77d051c6d1c054c4f1c389af78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bohr, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:51:06 np0005593232 podman[399482]: 2026-01-23 10:51:06.298820538 +0000 UTC m=+0.207189430 container start f90b2b4e3d8ac094950002003d22b5dc0c518f77d051c6d1c054c4f1c389af78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Jan 23 05:51:06 np0005593232 podman[399482]: 2026-01-23 10:51:06.303746388 +0000 UTC m=+0.212115300 container attach f90b2b4e3d8ac094950002003d22b5dc0c518f77d051c6d1c054c4f1c389af78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bohr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:51:06 np0005593232 goofy_bohr[399499]: 167 167
Jan 23 05:51:06 np0005593232 systemd[1]: libpod-f90b2b4e3d8ac094950002003d22b5dc0c518f77d051c6d1c054c4f1c389af78.scope: Deactivated successfully.
Jan 23 05:51:06 np0005593232 conmon[399499]: conmon f90b2b4e3d8ac0949500 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f90b2b4e3d8ac094950002003d22b5dc0c518f77d051c6d1c054c4f1c389af78.scope/container/memory.events
Jan 23 05:51:06 np0005593232 podman[399482]: 2026-01-23 10:51:06.309942354 +0000 UTC m=+0.218311226 container died f90b2b4e3d8ac094950002003d22b5dc0c518f77d051c6d1c054c4f1c389af78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bohr, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:51:06 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5e0b6e7354352c8341f48014560dc9c0ad741a88b96a8a59da470747652e9b67-merged.mount: Deactivated successfully.
Jan 23 05:51:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:51:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:51:06 np0005593232 podman[399482]: 2026-01-23 10:51:06.533830458 +0000 UTC m=+0.442199360 container remove f90b2b4e3d8ac094950002003d22b5dc0c518f77d051c6d1c054c4f1c389af78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 05:51:06 np0005593232 systemd[1]: libpod-conmon-f90b2b4e3d8ac094950002003d22b5dc0c518f77d051c6d1c054c4f1c389af78.scope: Deactivated successfully.
Jan 23 05:51:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3753: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:51:06 np0005593232 podman[399524]: 2026-01-23 10:51:06.788629501 +0000 UTC m=+0.068649093 container create 32238e3a16e30501b40114479762c61754757e9e780da7367de402c5f88e78f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wu, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 23 05:51:06 np0005593232 systemd[1]: Started libpod-conmon-32238e3a16e30501b40114479762c61754757e9e780da7367de402c5f88e78f0.scope.
Jan 23 05:51:06 np0005593232 podman[399524]: 2026-01-23 10:51:06.76680264 +0000 UTC m=+0.046822212 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:51:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:51:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9d26af9feb2faf5b0d382ed73a9a7e6539677c084904740716069b4f2dc9a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:51:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9d26af9feb2faf5b0d382ed73a9a7e6539677c084904740716069b4f2dc9a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:51:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9d26af9feb2faf5b0d382ed73a9a7e6539677c084904740716069b4f2dc9a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:51:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9d26af9feb2faf5b0d382ed73a9a7e6539677c084904740716069b4f2dc9a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:51:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed9d26af9feb2faf5b0d382ed73a9a7e6539677c084904740716069b4f2dc9a7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:51:07 np0005593232 podman[399524]: 2026-01-23 10:51:07.091969583 +0000 UTC m=+0.371989165 container init 32238e3a16e30501b40114479762c61754757e9e780da7367de402c5f88e78f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wu, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 05:51:07 np0005593232 podman[399524]: 2026-01-23 10:51:07.101058942 +0000 UTC m=+0.381078534 container start 32238e3a16e30501b40114479762c61754757e9e780da7367de402c5f88e78f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 05:51:07 np0005593232 podman[399524]: 2026-01-23 10:51:07.106322202 +0000 UTC m=+0.386341824 container attach 32238e3a16e30501b40114479762c61754757e9e780da7367de402c5f88e78f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:51:07 np0005593232 nova_compute[250269]: 2026-01-23 10:51:07.434 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:51:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:51:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:51:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:51:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:51:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:51:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:51:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:07.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:07 np0005593232 magical_wu[399540]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:51:07 np0005593232 magical_wu[399540]: --> relative data size: 1.0
Jan 23 05:51:07 np0005593232 magical_wu[399540]: --> All data devices are unavailable
Jan 23 05:51:07 np0005593232 systemd[1]: libpod-32238e3a16e30501b40114479762c61754757e9e780da7367de402c5f88e78f0.scope: Deactivated successfully.
Jan 23 05:51:07 np0005593232 podman[399524]: 2026-01-23 10:51:07.950155167 +0000 UTC m=+1.230174719 container died 32238e3a16e30501b40114479762c61754757e9e780da7367de402c5f88e78f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wu, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:51:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:07.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ed9d26af9feb2faf5b0d382ed73a9a7e6539677c084904740716069b4f2dc9a7-merged.mount: Deactivated successfully.
Jan 23 05:51:08 np0005593232 podman[399524]: 2026-01-23 10:51:08.460288338 +0000 UTC m=+1.740307880 container remove 32238e3a16e30501b40114479762c61754757e9e780da7367de402c5f88e78f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wu, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:51:08 np0005593232 systemd[1]: libpod-conmon-32238e3a16e30501b40114479762c61754757e9e780da7367de402c5f88e78f0.scope: Deactivated successfully.
Jan 23 05:51:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3754: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:51:08 np0005593232 nova_compute[250269]: 2026-01-23 10:51:08.851 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:51:09 np0005593232 podman[399756]: 2026-01-23 10:51:09.116586833 +0000 UTC m=+0.040067940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:51:09 np0005593232 podman[399756]: 2026-01-23 10:51:09.410860598 +0000 UTC m=+0.334341735 container create 0a6edea54c4be10b310fb3f575559f37935bf4b42552f77328f32f0d8bdd6334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kilby, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:51:09 np0005593232 systemd[1]: Started libpod-conmon-0a6edea54c4be10b310fb3f575559f37935bf4b42552f77328f32f0d8bdd6334.scope.
Jan 23 05:51:09 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:51:09 np0005593232 podman[399756]: 2026-01-23 10:51:09.653189006 +0000 UTC m=+0.576670103 container init 0a6edea54c4be10b310fb3f575559f37935bf4b42552f77328f32f0d8bdd6334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 05:51:09 np0005593232 podman[399756]: 2026-01-23 10:51:09.66459418 +0000 UTC m=+0.588075297 container start 0a6edea54c4be10b310fb3f575559f37935bf4b42552f77328f32f0d8bdd6334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 05:51:09 np0005593232 podman[399756]: 2026-01-23 10:51:09.668750378 +0000 UTC m=+0.592231565 container attach 0a6edea54c4be10b310fb3f575559f37935bf4b42552f77328f32f0d8bdd6334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 05:51:09 np0005593232 strange_kilby[399773]: 167 167
Jan 23 05:51:09 np0005593232 systemd[1]: libpod-0a6edea54c4be10b310fb3f575559f37935bf4b42552f77328f32f0d8bdd6334.scope: Deactivated successfully.
Jan 23 05:51:09 np0005593232 podman[399756]: 2026-01-23 10:51:09.672633999 +0000 UTC m=+0.596115086 container died 0a6edea54c4be10b310fb3f575559f37935bf4b42552f77328f32f0d8bdd6334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 05:51:09 np0005593232 systemd[1]: var-lib-containers-storage-overlay-3a4b97718cc01432083176f507df09f1d763d83e4c8530cdeff6b4911794f46c-merged.mount: Deactivated successfully.
Jan 23 05:51:09 np0005593232 podman[399756]: 2026-01-23 10:51:09.722384243 +0000 UTC m=+0.645865330 container remove 0a6edea54c4be10b310fb3f575559f37935bf4b42552f77328f32f0d8bdd6334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_kilby, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:51:09 np0005593232 systemd[1]: libpod-conmon-0a6edea54c4be10b310fb3f575559f37935bf4b42552f77328f32f0d8bdd6334.scope: Deactivated successfully.
Jan 23 05:51:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:51:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:09.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:09.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:10 np0005593232 podman[399796]: 2026-01-23 10:51:09.924726054 +0000 UTC m=+0.030766505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:51:10 np0005593232 podman[399796]: 2026-01-23 10:51:10.559797327 +0000 UTC m=+0.665837738 container create 22d606c0fd5f2d5dc110533c73c12ef2bee4b5b7563707615ef44e2af6f375bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 05:51:10 np0005593232 systemd[1]: Started libpod-conmon-22d606c0fd5f2d5dc110533c73c12ef2bee4b5b7563707615ef44e2af6f375bd.scope.
Jan 23 05:51:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3755: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:51:10 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:51:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c44a1d4d71cde88d04b718c541701435991c4f582b5d3d55b8cba6bf66d7b7b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:51:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c44a1d4d71cde88d04b718c541701435991c4f582b5d3d55b8cba6bf66d7b7b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:51:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c44a1d4d71cde88d04b718c541701435991c4f582b5d3d55b8cba6bf66d7b7b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:51:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c44a1d4d71cde88d04b718c541701435991c4f582b5d3d55b8cba6bf66d7b7b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:51:10 np0005593232 podman[399796]: 2026-01-23 10:51:10.799662235 +0000 UTC m=+0.905702716 container init 22d606c0fd5f2d5dc110533c73c12ef2bee4b5b7563707615ef44e2af6f375bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:51:10 np0005593232 podman[399796]: 2026-01-23 10:51:10.807694353 +0000 UTC m=+0.913734774 container start 22d606c0fd5f2d5dc110533c73c12ef2bee4b5b7563707615ef44e2af6f375bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mendeleev, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:51:10 np0005593232 podman[399796]: 2026-01-23 10:51:10.824050348 +0000 UTC m=+0.930090779 container attach 22d606c0fd5f2d5dc110533c73c12ef2bee4b5b7563707615ef44e2af6f375bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]: {
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:    "0": [
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:        {
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:            "devices": [
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:                "/dev/loop3"
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:            ],
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:            "lv_name": "ceph_lv0",
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:            "lv_size": "7511998464",
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:            "name": "ceph_lv0",
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:            "tags": {
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:                "ceph.cluster_name": "ceph",
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:                "ceph.crush_device_class": "",
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:                "ceph.encrypted": "0",
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:                "ceph.osd_id": "0",
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:                "ceph.type": "block",
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:                "ceph.vdo": "0"
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:            },
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:            "type": "block",
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:            "vg_name": "ceph_vg0"
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:        }
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]:    ]
Jan 23 05:51:11 np0005593232 tender_mendeleev[399814]: }
Jan 23 05:51:11 np0005593232 systemd[1]: libpod-22d606c0fd5f2d5dc110533c73c12ef2bee4b5b7563707615ef44e2af6f375bd.scope: Deactivated successfully.
Jan 23 05:51:11 np0005593232 podman[399796]: 2026-01-23 10:51:11.647828193 +0000 UTC m=+1.753868584 container died 22d606c0fd5f2d5dc110533c73c12ef2bee4b5b7563707615ef44e2af6f375bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mendeleev, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:51:11 np0005593232 systemd[1]: var-lib-containers-storage-overlay-0c44a1d4d71cde88d04b718c541701435991c4f582b5d3d55b8cba6bf66d7b7b-merged.mount: Deactivated successfully.
Jan 23 05:51:11 np0005593232 podman[399796]: 2026-01-23 10:51:11.805084283 +0000 UTC m=+1.911124694 container remove 22d606c0fd5f2d5dc110533c73c12ef2bee4b5b7563707615ef44e2af6f375bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_mendeleev, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 05:51:11 np0005593232 systemd[1]: libpod-conmon-22d606c0fd5f2d5dc110533c73c12ef2bee4b5b7563707615ef44e2af6f375bd.scope: Deactivated successfully.
Jan 23 05:51:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:51:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:11.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:51:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:51:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:12.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:51:12 np0005593232 nova_compute[250269]: 2026-01-23 10:51:12.436 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:51:12 np0005593232 podman[399981]: 2026-01-23 10:51:12.534218789 +0000 UTC m=+0.027253486 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:51:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3756: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:51:12 np0005593232 podman[399981]: 2026-01-23 10:51:12.847855404 +0000 UTC m=+0.340890141 container create df107902b764561c10a960406fb7bd9ee3059101a2be9dbb45e59b0a34ba2d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_diffie, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:51:12 np0005593232 systemd[1]: Started libpod-conmon-df107902b764561c10a960406fb7bd9ee3059101a2be9dbb45e59b0a34ba2d57.scope.
Jan 23 05:51:12 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:51:12 np0005593232 podman[399981]: 2026-01-23 10:51:12.981479023 +0000 UTC m=+0.474513770 container init df107902b764561c10a960406fb7bd9ee3059101a2be9dbb45e59b0a34ba2d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_diffie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:51:12 np0005593232 podman[399981]: 2026-01-23 10:51:12.990047646 +0000 UTC m=+0.483082373 container start df107902b764561c10a960406fb7bd9ee3059101a2be9dbb45e59b0a34ba2d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_diffie, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:51:12 np0005593232 zealous_diffie[399998]: 167 167
Jan 23 05:51:12 np0005593232 systemd[1]: libpod-df107902b764561c10a960406fb7bd9ee3059101a2be9dbb45e59b0a34ba2d57.scope: Deactivated successfully.
Jan 23 05:51:13 np0005593232 podman[399981]: 2026-01-23 10:51:13.06862159 +0000 UTC m=+0.561656317 container attach df107902b764561c10a960406fb7bd9ee3059101a2be9dbb45e59b0a34ba2d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_diffie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:51:13 np0005593232 podman[399981]: 2026-01-23 10:51:13.070434821 +0000 UTC m=+0.563469558 container died df107902b764561c10a960406fb7bd9ee3059101a2be9dbb45e59b0a34ba2d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:51:13 np0005593232 systemd[1]: var-lib-containers-storage-overlay-0ede9ce11fdb4b9b3bff9a3eec8826ca1ad0f971367d082b7daf4af753da88af-merged.mount: Deactivated successfully.
Jan 23 05:51:13 np0005593232 podman[399981]: 2026-01-23 10:51:13.209216006 +0000 UTC m=+0.702250703 container remove df107902b764561c10a960406fb7bd9ee3059101a2be9dbb45e59b0a34ba2d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:51:13 np0005593232 systemd[1]: libpod-conmon-df107902b764561c10a960406fb7bd9ee3059101a2be9dbb45e59b0a34ba2d57.scope: Deactivated successfully.
Jan 23 05:51:13 np0005593232 nova_compute[250269]: 2026-01-23 10:51:13.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:51:13 np0005593232 nova_compute[250269]: 2026-01-23 10:51:13.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:51:13 np0005593232 podman[400023]: 2026-01-23 10:51:13.393980138 +0000 UTC m=+0.031837426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:51:13 np0005593232 podman[400023]: 2026-01-23 10:51:13.528226184 +0000 UTC m=+0.166083412 container create 81dc5b1828a2b2c1fda112a605b24a6a8697fdc6b01a7ecfb30a70d5ceabe37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 05:51:13 np0005593232 systemd[1]: Started libpod-conmon-81dc5b1828a2b2c1fda112a605b24a6a8697fdc6b01a7ecfb30a70d5ceabe37e.scope.
Jan 23 05:51:13 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:51:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b396465bd9abee8992856fe3d44c2e94ebc77e8362984b5f38d79782778a830/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:51:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b396465bd9abee8992856fe3d44c2e94ebc77e8362984b5f38d79782778a830/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:51:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b396465bd9abee8992856fe3d44c2e94ebc77e8362984b5f38d79782778a830/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:51:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b396465bd9abee8992856fe3d44c2e94ebc77e8362984b5f38d79782778a830/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:51:13 np0005593232 podman[400023]: 2026-01-23 10:51:13.773047763 +0000 UTC m=+0.410905061 container init 81dc5b1828a2b2c1fda112a605b24a6a8697fdc6b01a7ecfb30a70d5ceabe37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bell, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:51:13 np0005593232 podman[400023]: 2026-01-23 10:51:13.790907751 +0000 UTC m=+0.428764939 container start 81dc5b1828a2b2c1fda112a605b24a6a8697fdc6b01a7ecfb30a70d5ceabe37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bell, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 05:51:13 np0005593232 podman[400023]: 2026-01-23 10:51:13.79511653 +0000 UTC m=+0.432973738 container attach 81dc5b1828a2b2c1fda112a605b24a6a8697fdc6b01a7ecfb30a70d5ceabe37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bell, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:51:13 np0005593232 nova_compute[250269]: 2026-01-23 10:51:13.857 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:51:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:51:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:13.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:51:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:51:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:14.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:51:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3757: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.0 KiB/s rd, 522 KiB/s wr, 13 op/s
Jan 23 05:51:14 np0005593232 condescending_bell[400040]: {
Jan 23 05:51:14 np0005593232 condescending_bell[400040]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:51:14 np0005593232 condescending_bell[400040]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:51:14 np0005593232 condescending_bell[400040]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:51:14 np0005593232 condescending_bell[400040]:        "osd_id": 0,
Jan 23 05:51:14 np0005593232 condescending_bell[400040]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:51:14 np0005593232 condescending_bell[400040]:        "type": "bluestore"
Jan 23 05:51:14 np0005593232 condescending_bell[400040]:    }
Jan 23 05:51:14 np0005593232 condescending_bell[400040]: }
Jan 23 05:51:14 np0005593232 systemd[1]: libpod-81dc5b1828a2b2c1fda112a605b24a6a8697fdc6b01a7ecfb30a70d5ceabe37e.scope: Deactivated successfully.
Jan 23 05:51:14 np0005593232 podman[400023]: 2026-01-23 10:51:14.768982193 +0000 UTC m=+1.406839421 container died 81dc5b1828a2b2c1fda112a605b24a6a8697fdc6b01a7ecfb30a70d5ceabe37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:51:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:51:14 np0005593232 systemd[1]: var-lib-containers-storage-overlay-2b396465bd9abee8992856fe3d44c2e94ebc77e8362984b5f38d79782778a830-merged.mount: Deactivated successfully.
Jan 23 05:51:14 np0005593232 podman[400023]: 2026-01-23 10:51:14.976435989 +0000 UTC m=+1.614293177 container remove 81dc5b1828a2b2c1fda112a605b24a6a8697fdc6b01a7ecfb30a70d5ceabe37e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:51:15 np0005593232 systemd[1]: libpod-conmon-81dc5b1828a2b2c1fda112a605b24a6a8697fdc6b01a7ecfb30a70d5ceabe37e.scope: Deactivated successfully.
Jan 23 05:51:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:51:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:51:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:51:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:51:15 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 93f06f99-2e1f-4351-a540-e1c0cac2dd8a does not exist
Jan 23 05:51:15 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d6d87dd7-755b-4245-8efb-37a548535a37 does not exist
Jan 23 05:51:15 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 405ac894-5a8e-47f5-9272-405df8109f39 does not exist
Jan 23 05:51:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:51:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:15.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:51:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:16.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:51:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:51:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3758: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.2 KiB/s rd, 2 op/s
Jan 23 05:51:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 05:51:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 59K writes, 219K keys, 59K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.03 MB/s#012Cumulative WAL: 59K writes, 22K syncs, 2.63 writes per sync, written: 0.20 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4075 writes, 11K keys, 4075 commit groups, 1.0 writes per commit group, ingest: 9.73 MB, 0.02 MB/s#012Interval WAL: 4075 writes, 1682 syncs, 2.42 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 05:51:17 np0005593232 nova_compute[250269]: 2026-01-23 10:51:17.470 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:51:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 05:51:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:17.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 05:51:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:51:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:18.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:51:18 np0005593232 nova_compute[250269]: 2026-01-23 10:51:18.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:51:18 np0005593232 nova_compute[250269]: 2026-01-23 10:51:18.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:51:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3759: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 14 KiB/s wr, 48 op/s
Jan 23 05:51:18 np0005593232 nova_compute[250269]: 2026-01-23 10:51:18.860 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:51:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:51:19 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #192. Immutable memtables: 0.
Jan 23 05:51:19 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:51:19.874678) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:51:19 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 192
Jan 23 05:51:19 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165479874735, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 1175, "num_deletes": 256, "total_data_size": 1853155, "memory_usage": 1880128, "flush_reason": "Manual Compaction"}
Jan 23 05:51:19 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #193: started
Jan 23 05:51:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:19.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:19 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165479892484, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 193, "file_size": 1812004, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82095, "largest_seqno": 83269, "table_properties": {"data_size": 1806451, "index_size": 2884, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12372, "raw_average_key_size": 19, "raw_value_size": 1794978, "raw_average_value_size": 2876, "num_data_blocks": 127, "num_entries": 624, "num_filter_entries": 624, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769165382, "oldest_key_time": 1769165382, "file_creation_time": 1769165479, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 193, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:51:19 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 17877 microseconds, and 6774 cpu microseconds.
Jan 23 05:51:19 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:51:19 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:51:19.892547) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #193: 1812004 bytes OK
Jan 23 05:51:19 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:51:19.892578) [db/memtable_list.cc:519] [default] Level-0 commit table #193 started
Jan 23 05:51:19 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:51:19.897929) [db/memtable_list.cc:722] [default] Level-0 commit table #193: memtable #1 done
Jan 23 05:51:19 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:51:19.897964) EVENT_LOG_v1 {"time_micros": 1769165479897955, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:51:19 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:51:19.897988) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:51:19 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 1847770, prev total WAL file size 1847770, number of live WAL files 2.
Jan 23 05:51:19 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000189.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:51:19 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:51:19.898998) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353139' seq:72057594037927935, type:22 .. '6C6F676D0033373731' seq:0, type:0; will stop at (end)
Jan 23 05:51:19 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:51:19 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [193(1769KB)], [191(10MB)]
Jan 23 05:51:19 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165479899122, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [193], "files_L6": [191], "score": -1, "input_data_size": 12839035, "oldest_snapshot_seqno": -1}
Jan 23 05:51:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:51:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:20.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:51:20 np0005593232 nova_compute[250269]: 2026-01-23 10:51:20.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:51:20 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #194: 10710 keys, 12699417 bytes, temperature: kUnknown
Jan 23 05:51:20 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165480318398, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 194, "file_size": 12699417, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12631977, "index_size": 39551, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26821, "raw_key_size": 283734, "raw_average_key_size": 26, "raw_value_size": 12446366, "raw_average_value_size": 1162, "num_data_blocks": 1499, "num_entries": 10710, "num_filter_entries": 10710, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769165479, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:51:20 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:51:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:51:20.318735) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 12699417 bytes
Jan 23 05:51:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:51:20.320827) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 30.6 rd, 30.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 10.5 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(14.1) write-amplify(7.0) OK, records in: 11241, records dropped: 531 output_compression: NoCompression
Jan 23 05:51:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:51:20.320850) EVENT_LOG_v1 {"time_micros": 1769165480320839, "job": 120, "event": "compaction_finished", "compaction_time_micros": 419370, "compaction_time_cpu_micros": 46771, "output_level": 6, "num_output_files": 1, "total_output_size": 12699417, "num_input_records": 11241, "num_output_records": 10710, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:51:20 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000193.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:51:20 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165480321575, "job": 120, "event": "table_file_deletion", "file_number": 193}
Jan 23 05:51:20 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:51:20 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165480324131, "job": 120, "event": "table_file_deletion", "file_number": 191}
Jan 23 05:51:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:51:19.898812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:51:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:51:20.324241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:51:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:51:20.324250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:51:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:51:20.324253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:51:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:51:20.324256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:51:20 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:51:20.324259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:51:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3760: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 14 KiB/s wr, 48 op/s
Jan 23 05:51:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:21.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:22.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:22 np0005593232 nova_compute[250269]: 2026-01-23 10:51:22.473 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:51:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3761: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 23 05:51:23 np0005593232 nova_compute[250269]: 2026-01-23 10:51:23.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:51:23 np0005593232 nova_compute[250269]: 2026-01-23 10:51:23.862 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:51:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:23.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:24.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:24 np0005593232 nova_compute[250269]: 2026-01-23 10:51:24.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:51:24 np0005593232 podman[400127]: 2026-01-23 10:51:24.427095254 +0000 UTC m=+0.088667752 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 05:51:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3762: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 23 05:51:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:51:25 np0005593232 nova_compute[250269]: 2026-01-23 10:51:25.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:51:25 np0005593232 nova_compute[250269]: 2026-01-23 10:51:25.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:51:25 np0005593232 nova_compute[250269]: 2026-01-23 10:51:25.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:51:25 np0005593232 nova_compute[250269]: 2026-01-23 10:51:25.316 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:51:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:25.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:26.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3763: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 23 05:51:27 np0005593232 nova_compute[250269]: 2026-01-23 10:51:27.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:51:27 np0005593232 nova_compute[250269]: 2026-01-23 10:51:27.474 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:51:27 np0005593232 ceph-mgr[74726]: [devicehealth INFO root] Check health
Jan 23 05:51:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:27.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:51:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:28.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:51:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3764: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 75 op/s
Jan 23 05:51:28 np0005593232 nova_compute[250269]: 2026-01-23 10:51:28.865 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:51:29 np0005593232 podman[400180]: 2026-01-23 10:51:29.124155687 +0000 UTC m=+0.074595541 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:51:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:51:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:51:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:29.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:51:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:51:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:30.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:51:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3765: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 811 KiB/s rd, 29 op/s
Jan 23 05:51:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:51:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:31.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:51:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:51:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:32.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:51:32 np0005593232 nova_compute[250269]: 2026-01-23 10:51:32.475 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:51:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3766: 321 pgs: 321 active+clean; 173 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 942 KiB/s rd, 797 KiB/s wr, 42 op/s
Jan 23 05:51:33 np0005593232 nova_compute[250269]: 2026-01-23 10:51:33.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:51:33 np0005593232 nova_compute[250269]: 2026-01-23 10:51:33.336 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:51:33 np0005593232 nova_compute[250269]: 2026-01-23 10:51:33.337 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:51:33 np0005593232 nova_compute[250269]: 2026-01-23 10:51:33.337 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:51:33 np0005593232 nova_compute[250269]: 2026-01-23 10:51:33.338 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:51:33 np0005593232 nova_compute[250269]: 2026-01-23 10:51:33.338 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:51:33 np0005593232 nova_compute[250269]: 2026-01-23 10:51:33.868 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:51:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:51:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2605131236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:51:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:51:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:33.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:51:33 np0005593232 nova_compute[250269]: 2026-01-23 10:51:33.918 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:51:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:34.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:34 np0005593232 nova_compute[250269]: 2026-01-23 10:51:34.126 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:51:34 np0005593232 nova_compute[250269]: 2026-01-23 10:51:34.128 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4100MB free_disk=20.958274841308594GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:51:34 np0005593232 nova_compute[250269]: 2026-01-23 10:51:34.128 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:51:34 np0005593232 nova_compute[250269]: 2026-01-23 10:51:34.128 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:51:34 np0005593232 nova_compute[250269]: 2026-01-23 10:51:34.187 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:51:34 np0005593232 nova_compute[250269]: 2026-01-23 10:51:34.187 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:51:34 np0005593232 nova_compute[250269]: 2026-01-23 10:51:34.204 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:51:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:51:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2624817665' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:51:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3767: 321 pgs: 321 active+clean; 188 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 256 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Jan 23 05:51:34 np0005593232 nova_compute[250269]: 2026-01-23 10:51:34.661 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:51:34 np0005593232 nova_compute[250269]: 2026-01-23 10:51:34.668 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:51:34 np0005593232 nova_compute[250269]: 2026-01-23 10:51:34.692 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:51:34 np0005593232 nova_compute[250269]: 2026-01-23 10:51:34.693 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:51:34 np0005593232 nova_compute[250269]: 2026-01-23 10:51:34.694 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:51:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:51:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:51:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:35.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:51:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:51:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:36.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:51:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3768: 321 pgs: 321 active+clean; 197 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Jan 23 05:51:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:51:37
Jan 23 05:51:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:51:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:51:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['volumes', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'images', 'backups']
Jan 23 05:51:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:51:37 np0005593232 nova_compute[250269]: 2026-01-23 10:51:37.478 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:51:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:51:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:51:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:51:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:51:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:51:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:51:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:51:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:37.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:51:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:38.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3769: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 23 05:51:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:51:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:51:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:51:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:51:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:51:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:51:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:51:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:51:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:51:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:51:38 np0005593232 nova_compute[250269]: 2026-01-23 10:51:38.870 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:51:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:51:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:51:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:39.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:51:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:40.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3770: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 314 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 23 05:51:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:41.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:42.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:42 np0005593232 nova_compute[250269]: 2026-01-23 10:51:42.479 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:51:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3771: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 314 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 23 05:51:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:51:42.673 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:51:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:51:42.673 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:51:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:51:42.674 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:51:43 np0005593232 nova_compute[250269]: 2026-01-23 10:51:43.873 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:51:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:43.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:44.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:51:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2868405052' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:51:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:51:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2868405052' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:51:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3772: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 183 KiB/s rd, 1.4 MiB/s wr, 47 op/s
Jan 23 05:51:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:51:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:45.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:46.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3773: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 74 KiB/s rd, 108 KiB/s wr, 18 op/s
Jan 23 05:51:47 np0005593232 nova_compute[250269]: 2026-01-23 10:51:47.482 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:51:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:51:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:51:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:51:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:51:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002170865903592034 of space, bias 1.0, pg target 0.6512597710776102 quantized to 32 (current 32)
Jan 23 05:51:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:51:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021872237344128047 of space, bias 1.0, pg target 0.6561671203238414 quantized to 32 (current 32)
Jan 23 05:51:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:51:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 05:51:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:51:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 05:51:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:51:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 05:51:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:51:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:51:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:51:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 05:51:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:51:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 05:51:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:51:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:51:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:51:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 05:51:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:47.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:48.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3774: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 42 KiB/s rd, 77 KiB/s wr, 11 op/s
Jan 23 05:51:48 np0005593232 nova_compute[250269]: 2026-01-23 10:51:48.876 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:51:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:51:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:49.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:50.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3775: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s wr, 0 op/s
Jan 23 05:51:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:51:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:51.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:51:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:52.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:52 np0005593232 nova_compute[250269]: 2026-01-23 10:51:52.524 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:51:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3776: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s wr, 0 op/s
Jan 23 05:51:53 np0005593232 nova_compute[250269]: 2026-01-23 10:51:53.879 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:51:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:51:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:53.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:51:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:51:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:54.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:51:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3777: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s wr, 0 op/s
Jan 23 05:51:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:51:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:55.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:56.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:56 np0005593232 podman[400332]: 2026-01-23 10:51:56.19456553 +0000 UTC m=+0.090059441 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:51:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3778: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 23 05:51:57 np0005593232 nova_compute[250269]: 2026-01-23 10:51:57.526 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:51:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:57.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:51:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:51:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:51:58.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:51:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3779: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Jan 23 05:51:58 np0005593232 nova_compute[250269]: 2026-01-23 10:51:58.880 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:51:59 np0005593232 podman[400360]: 2026-01-23 10:51:59.444926882 +0000 UTC m=+0.084325158 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:51:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:51:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:51:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:51:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:51:59.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:00.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3780: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 1023 B/s wr, 0 op/s
Jan 23 05:52:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:01.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:02.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:02 np0005593232 nova_compute[250269]: 2026-01-23 10:52:02.529 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3781: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Jan 23 05:52:03 np0005593232 nova_compute[250269]: 2026-01-23 10:52:03.882 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:03.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:04.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3782: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 2.8 KiB/s wr, 3 op/s
Jan 23 05:52:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:52:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:52:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:05.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:52:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:06.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3783: 321 pgs: 321 active+clean; 185 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 32 KiB/s rd, 3.0 KiB/s wr, 14 op/s
Jan 23 05:52:07 np0005593232 nova_compute[250269]: 2026-01-23 10:52:07.571 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:07 np0005593232 nova_compute[250269]: 2026-01-23 10:52:07.574 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:52:07.575 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=86, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=85) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:52:07 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:52:07.576 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:52:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:52:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:52:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:52:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:52:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:52:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:52:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:07.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:08.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3784: 321 pgs: 321 active+clean; 121 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 5.3 KiB/s wr, 31 op/s
Jan 23 05:52:08 np0005593232 nova_compute[250269]: 2026-01-23 10:52:08.884 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:52:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:09.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:52:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:10.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:52:10 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:52:10.577 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '86'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:52:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3785: 321 pgs: 321 active+clean; 121 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 42 KiB/s rd, 5.3 KiB/s wr, 31 op/s
Jan 23 05:52:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:11.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:52:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:12.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:52:12 np0005593232 nova_compute[250269]: 2026-01-23 10:52:12.575 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3786: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 52 KiB/s rd, 5.7 KiB/s wr, 44 op/s
Jan 23 05:52:13 np0005593232 nova_compute[250269]: 2026-01-23 10:52:13.885 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:13.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:52:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:14.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:52:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3787: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 29 KiB/s rd, 5.5 KiB/s wr, 42 op/s
Jan 23 05:52:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:52:15 np0005593232 nova_compute[250269]: 2026-01-23 10:52:15.694 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:52:15 np0005593232 nova_compute[250269]: 2026-01-23 10:52:15.695 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:52:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:15.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:52:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:16.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:52:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3788: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 30 KiB/s rd, 4.1 KiB/s wr, 43 op/s
Jan 23 05:52:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:52:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:52:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:52:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:52:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:52:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:52:16 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b14376c1-8396-4221-a078-61f6bd19582b does not exist
Jan 23 05:52:16 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 2a55271c-362a-4f09-a51f-a283287020c8 does not exist
Jan 23 05:52:16 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 0510df98-f78e-480e-ac5b-bc7741ae4fdf does not exist
Jan 23 05:52:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:52:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:52:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:52:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:52:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:52:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:52:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:52:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:52:16 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:52:17 np0005593232 podman[400712]: 2026-01-23 10:52:17.313650097 +0000 UTC m=+0.048360666 container create 8d5f108ec9bb5ad6d11fdff72e86f361aef295c1f1589d68f0faf89ce09492a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:52:17 np0005593232 systemd[1]: Started libpod-conmon-8d5f108ec9bb5ad6d11fdff72e86f361aef295c1f1589d68f0faf89ce09492a0.scope.
Jan 23 05:52:17 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:52:17 np0005593232 podman[400712]: 2026-01-23 10:52:17.290284623 +0000 UTC m=+0.024995222 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:52:17 np0005593232 podman[400712]: 2026-01-23 10:52:17.499033217 +0000 UTC m=+0.233743806 container init 8d5f108ec9bb5ad6d11fdff72e86f361aef295c1f1589d68f0faf89ce09492a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:52:17 np0005593232 podman[400712]: 2026-01-23 10:52:17.510153903 +0000 UTC m=+0.244864472 container start 8d5f108ec9bb5ad6d11fdff72e86f361aef295c1f1589d68f0faf89ce09492a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:52:17 np0005593232 intelligent_proskuriakova[400728]: 167 167
Jan 23 05:52:17 np0005593232 systemd[1]: libpod-8d5f108ec9bb5ad6d11fdff72e86f361aef295c1f1589d68f0faf89ce09492a0.scope: Deactivated successfully.
Jan 23 05:52:17 np0005593232 podman[400712]: 2026-01-23 10:52:17.575033897 +0000 UTC m=+0.309744456 container attach 8d5f108ec9bb5ad6d11fdff72e86f361aef295c1f1589d68f0faf89ce09492a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_proskuriakova, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 05:52:17 np0005593232 podman[400712]: 2026-01-23 10:52:17.576295313 +0000 UTC m=+0.311005882 container died 8d5f108ec9bb5ad6d11fdff72e86f361aef295c1f1589d68f0faf89ce09492a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:52:17 np0005593232 nova_compute[250269]: 2026-01-23 10:52:17.576 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay-bd7eb2f382f6fafea5823335beaf3d4257d2c7c0e5fc24279269f5d14a0e05c1-merged.mount: Deactivated successfully.
Jan 23 05:52:17 np0005593232 podman[400712]: 2026-01-23 10:52:17.802059861 +0000 UTC m=+0.536770430 container remove 8d5f108ec9bb5ad6d11fdff72e86f361aef295c1f1589d68f0faf89ce09492a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 05:52:17 np0005593232 systemd[1]: libpod-conmon-8d5f108ec9bb5ad6d11fdff72e86f361aef295c1f1589d68f0faf89ce09492a0.scope: Deactivated successfully.
Jan 23 05:52:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:17.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:18 np0005593232 podman[400754]: 2026-01-23 10:52:18.005333479 +0000 UTC m=+0.058641498 container create f50269d235208f4b12b4f80a7a51a1d3fdc98aa50cceb08072c3be647dfe4d29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 05:52:18 np0005593232 systemd[1]: Started libpod-conmon-f50269d235208f4b12b4f80a7a51a1d3fdc98aa50cceb08072c3be647dfe4d29.scope.
Jan 23 05:52:18 np0005593232 podman[400754]: 2026-01-23 10:52:17.976931862 +0000 UTC m=+0.030239931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:52:18 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:52:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae421dfaec16cafc8fc2bb43db1c9997086664dd80f4202bb820b6f867b31d47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:52:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae421dfaec16cafc8fc2bb43db1c9997086664dd80f4202bb820b6f867b31d47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:52:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae421dfaec16cafc8fc2bb43db1c9997086664dd80f4202bb820b6f867b31d47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:52:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae421dfaec16cafc8fc2bb43db1c9997086664dd80f4202bb820b6f867b31d47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:52:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae421dfaec16cafc8fc2bb43db1c9997086664dd80f4202bb820b6f867b31d47/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:52:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:18.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:18 np0005593232 podman[400754]: 2026-01-23 10:52:18.14712749 +0000 UTC m=+0.200435559 container init f50269d235208f4b12b4f80a7a51a1d3fdc98aa50cceb08072c3be647dfe4d29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:52:18 np0005593232 podman[400754]: 2026-01-23 10:52:18.15628946 +0000 UTC m=+0.209597439 container start f50269d235208f4b12b4f80a7a51a1d3fdc98aa50cceb08072c3be647dfe4d29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Jan 23 05:52:18 np0005593232 podman[400754]: 2026-01-23 10:52:18.182523136 +0000 UTC m=+0.235831105 container attach f50269d235208f4b12b4f80a7a51a1d3fdc98aa50cceb08072c3be647dfe4d29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:52:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3789: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 22 KiB/s rd, 2.9 KiB/s wr, 32 op/s
Jan 23 05:52:18 np0005593232 nova_compute[250269]: 2026-01-23 10:52:18.887 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:18 np0005593232 affectionate_heyrovsky[400771]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:52:18 np0005593232 affectionate_heyrovsky[400771]: --> relative data size: 1.0
Jan 23 05:52:18 np0005593232 affectionate_heyrovsky[400771]: --> All data devices are unavailable
Jan 23 05:52:19 np0005593232 systemd[1]: libpod-f50269d235208f4b12b4f80a7a51a1d3fdc98aa50cceb08072c3be647dfe4d29.scope: Deactivated successfully.
Jan 23 05:52:19 np0005593232 podman[400754]: 2026-01-23 10:52:19.006835987 +0000 UTC m=+1.060143966 container died f50269d235208f4b12b4f80a7a51a1d3fdc98aa50cceb08072c3be647dfe4d29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_heyrovsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:52:19 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ae421dfaec16cafc8fc2bb43db1c9997086664dd80f4202bb820b6f867b31d47-merged.mount: Deactivated successfully.
Jan 23 05:52:19 np0005593232 podman[400754]: 2026-01-23 10:52:19.171064035 +0000 UTC m=+1.224372014 container remove f50269d235208f4b12b4f80a7a51a1d3fdc98aa50cceb08072c3be647dfe4d29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_heyrovsky, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 05:52:19 np0005593232 systemd[1]: libpod-conmon-f50269d235208f4b12b4f80a7a51a1d3fdc98aa50cceb08072c3be647dfe4d29.scope: Deactivated successfully.
Jan 23 05:52:19 np0005593232 nova_compute[250269]: 2026-01-23 10:52:19.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:52:19 np0005593232 nova_compute[250269]: 2026-01-23 10:52:19.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:52:19 np0005593232 podman[400937]: 2026-01-23 10:52:19.767799787 +0000 UTC m=+0.058763172 container create f183fdc4eb03e61f8f7691cd6fc1b5d9059ab3bebe6de360f2d28771404d15bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:52:19 np0005593232 podman[400937]: 2026-01-23 10:52:19.732958506 +0000 UTC m=+0.023921941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:52:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:19.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:52:20 np0005593232 systemd[1]: Started libpod-conmon-f183fdc4eb03e61f8f7691cd6fc1b5d9059ab3bebe6de360f2d28771404d15bd.scope.
Jan 23 05:52:20 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:52:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:20.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:20 np0005593232 podman[400937]: 2026-01-23 10:52:20.231816586 +0000 UTC m=+0.522779991 container init f183fdc4eb03e61f8f7691cd6fc1b5d9059ab3bebe6de360f2d28771404d15bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 23 05:52:20 np0005593232 podman[400937]: 2026-01-23 10:52:20.24319858 +0000 UTC m=+0.534161985 container start f183fdc4eb03e61f8f7691cd6fc1b5d9059ab3bebe6de360f2d28771404d15bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_feynman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 23 05:52:20 np0005593232 confident_feynman[400953]: 167 167
Jan 23 05:52:20 np0005593232 systemd[1]: libpod-f183fdc4eb03e61f8f7691cd6fc1b5d9059ab3bebe6de360f2d28771404d15bd.scope: Deactivated successfully.
Jan 23 05:52:20 np0005593232 podman[400937]: 2026-01-23 10:52:20.344796338 +0000 UTC m=+0.635759733 container attach f183fdc4eb03e61f8f7691cd6fc1b5d9059ab3bebe6de360f2d28771404d15bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_feynman, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 05:52:20 np0005593232 podman[400937]: 2026-01-23 10:52:20.345417155 +0000 UTC m=+0.636380550 container died f183fdc4eb03e61f8f7691cd6fc1b5d9059ab3bebe6de360f2d28771404d15bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_feynman, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:52:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3790: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 23 05:52:20 np0005593232 systemd[1]: var-lib-containers-storage-overlay-8a6ad63c3f7d737f7371cc8ad1b8063d943f982dfcb3fa2c6e56ccac91a7eba4-merged.mount: Deactivated successfully.
Jan 23 05:52:20 np0005593232 podman[400937]: 2026-01-23 10:52:20.759046603 +0000 UTC m=+1.050009998 container remove f183fdc4eb03e61f8f7691cd6fc1b5d9059ab3bebe6de360f2d28771404d15bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 05:52:20 np0005593232 systemd[1]: libpod-conmon-f183fdc4eb03e61f8f7691cd6fc1b5d9059ab3bebe6de360f2d28771404d15bd.scope: Deactivated successfully.
Jan 23 05:52:20 np0005593232 podman[400978]: 2026-01-23 10:52:20.955072275 +0000 UTC m=+0.042385796 container create 9142dbce0169009cd8490bcdc0a7e7190170d95c647bfc05b913710a370886fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 05:52:20 np0005593232 systemd[1]: Started libpod-conmon-9142dbce0169009cd8490bcdc0a7e7190170d95c647bfc05b913710a370886fe.scope.
Jan 23 05:52:21 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:52:21 np0005593232 podman[400978]: 2026-01-23 10:52:20.934199632 +0000 UTC m=+0.021513153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:52:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a926de9fbd6f590228898045db7dad34d3f12813735f9946849d9c7f1bf8ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:52:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a926de9fbd6f590228898045db7dad34d3f12813735f9946849d9c7f1bf8ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:52:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a926de9fbd6f590228898045db7dad34d3f12813735f9946849d9c7f1bf8ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:52:21 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7a926de9fbd6f590228898045db7dad34d3f12813735f9946849d9c7f1bf8ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:52:21 np0005593232 podman[400978]: 2026-01-23 10:52:21.05268546 +0000 UTC m=+0.139998981 container init 9142dbce0169009cd8490bcdc0a7e7190170d95c647bfc05b913710a370886fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_saha, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:52:21 np0005593232 podman[400978]: 2026-01-23 10:52:21.06748625 +0000 UTC m=+0.154799741 container start 9142dbce0169009cd8490bcdc0a7e7190170d95c647bfc05b913710a370886fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 05:52:21 np0005593232 podman[400978]: 2026-01-23 10:52:21.071371311 +0000 UTC m=+0.158684862 container attach 9142dbce0169009cd8490bcdc0a7e7190170d95c647bfc05b913710a370886fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 23 05:52:21 np0005593232 eager_saha[400994]: {
Jan 23 05:52:21 np0005593232 eager_saha[400994]:    "0": [
Jan 23 05:52:21 np0005593232 eager_saha[400994]:        {
Jan 23 05:52:21 np0005593232 eager_saha[400994]:            "devices": [
Jan 23 05:52:21 np0005593232 eager_saha[400994]:                "/dev/loop3"
Jan 23 05:52:21 np0005593232 eager_saha[400994]:            ],
Jan 23 05:52:21 np0005593232 eager_saha[400994]:            "lv_name": "ceph_lv0",
Jan 23 05:52:21 np0005593232 eager_saha[400994]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:52:21 np0005593232 eager_saha[400994]:            "lv_size": "7511998464",
Jan 23 05:52:21 np0005593232 eager_saha[400994]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:52:21 np0005593232 eager_saha[400994]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:52:21 np0005593232 eager_saha[400994]:            "name": "ceph_lv0",
Jan 23 05:52:21 np0005593232 eager_saha[400994]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:52:21 np0005593232 eager_saha[400994]:            "tags": {
Jan 23 05:52:21 np0005593232 eager_saha[400994]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:52:21 np0005593232 eager_saha[400994]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:52:21 np0005593232 eager_saha[400994]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:52:21 np0005593232 eager_saha[400994]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:52:21 np0005593232 eager_saha[400994]:                "ceph.cluster_name": "ceph",
Jan 23 05:52:21 np0005593232 eager_saha[400994]:                "ceph.crush_device_class": "",
Jan 23 05:52:21 np0005593232 eager_saha[400994]:                "ceph.encrypted": "0",
Jan 23 05:52:21 np0005593232 eager_saha[400994]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:52:21 np0005593232 eager_saha[400994]:                "ceph.osd_id": "0",
Jan 23 05:52:21 np0005593232 eager_saha[400994]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:52:21 np0005593232 eager_saha[400994]:                "ceph.type": "block",
Jan 23 05:52:21 np0005593232 eager_saha[400994]:                "ceph.vdo": "0"
Jan 23 05:52:21 np0005593232 eager_saha[400994]:            },
Jan 23 05:52:21 np0005593232 eager_saha[400994]:            "type": "block",
Jan 23 05:52:21 np0005593232 eager_saha[400994]:            "vg_name": "ceph_vg0"
Jan 23 05:52:21 np0005593232 eager_saha[400994]:        }
Jan 23 05:52:21 np0005593232 eager_saha[400994]:    ]
Jan 23 05:52:21 np0005593232 eager_saha[400994]: }
Jan 23 05:52:21 np0005593232 systemd[1]: libpod-9142dbce0169009cd8490bcdc0a7e7190170d95c647bfc05b913710a370886fe.scope: Deactivated successfully.
Jan 23 05:52:21 np0005593232 podman[400978]: 2026-01-23 10:52:21.833386601 +0000 UTC m=+0.920700112 container died 9142dbce0169009cd8490bcdc0a7e7190170d95c647bfc05b913710a370886fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 05:52:21 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d7a926de9fbd6f590228898045db7dad34d3f12813735f9946849d9c7f1bf8ca-merged.mount: Deactivated successfully.
Jan 23 05:52:21 np0005593232 podman[400978]: 2026-01-23 10:52:21.901189249 +0000 UTC m=+0.988502750 container remove 9142dbce0169009cd8490bcdc0a7e7190170d95c647bfc05b913710a370886fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 05:52:21 np0005593232 systemd[1]: libpod-conmon-9142dbce0169009cd8490bcdc0a7e7190170d95c647bfc05b913710a370886fe.scope: Deactivated successfully.
Jan 23 05:52:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:21.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:52:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:22.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:52:22 np0005593232 nova_compute[250269]: 2026-01-23 10:52:22.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:52:22 np0005593232 nova_compute[250269]: 2026-01-23 10:52:22.580 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:22 np0005593232 podman[401156]: 2026-01-23 10:52:22.605959532 +0000 UTC m=+0.065683998 container create 310f220f96c2ebc115a33f3b88771f95dbf1e2a3403c89542071125941b34468 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_napier, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:52:22 np0005593232 systemd[1]: Started libpod-conmon-310f220f96c2ebc115a33f3b88771f95dbf1e2a3403c89542071125941b34468.scope.
Jan 23 05:52:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3791: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 11 KiB/s rd, 597 B/s wr, 14 op/s
Jan 23 05:52:22 np0005593232 podman[401156]: 2026-01-23 10:52:22.574664082 +0000 UTC m=+0.034388548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:52:22 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:52:22 np0005593232 podman[401156]: 2026-01-23 10:52:22.686006057 +0000 UTC m=+0.145730523 container init 310f220f96c2ebc115a33f3b88771f95dbf1e2a3403c89542071125941b34468 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_napier, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 05:52:22 np0005593232 podman[401156]: 2026-01-23 10:52:22.693961013 +0000 UTC m=+0.153685449 container start 310f220f96c2ebc115a33f3b88771f95dbf1e2a3403c89542071125941b34468 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Jan 23 05:52:22 np0005593232 sad_napier[401172]: 167 167
Jan 23 05:52:22 np0005593232 systemd[1]: libpod-310f220f96c2ebc115a33f3b88771f95dbf1e2a3403c89542071125941b34468.scope: Deactivated successfully.
Jan 23 05:52:22 np0005593232 podman[401156]: 2026-01-23 10:52:22.705668036 +0000 UTC m=+0.165392472 container attach 310f220f96c2ebc115a33f3b88771f95dbf1e2a3403c89542071125941b34468 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_napier, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 05:52:22 np0005593232 podman[401156]: 2026-01-23 10:52:22.706046947 +0000 UTC m=+0.165771403 container died 310f220f96c2ebc115a33f3b88771f95dbf1e2a3403c89542071125941b34468 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_napier, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:52:22 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a36f90265caebd33564adbc4f2d82fdd77ea4505c83a373f55713fca9ebb65d7-merged.mount: Deactivated successfully.
Jan 23 05:52:22 np0005593232 podman[401156]: 2026-01-23 10:52:22.747262339 +0000 UTC m=+0.206986775 container remove 310f220f96c2ebc115a33f3b88771f95dbf1e2a3403c89542071125941b34468 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_napier, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:52:22 np0005593232 systemd[1]: libpod-conmon-310f220f96c2ebc115a33f3b88771f95dbf1e2a3403c89542071125941b34468.scope: Deactivated successfully.
Jan 23 05:52:22 np0005593232 podman[401195]: 2026-01-23 10:52:22.92741929 +0000 UTC m=+0.041259884 container create 578f2be3bd77737fbf1f867aa80e1f3c0ca39c44021bcfd4eee046e7e2800865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nash, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 23 05:52:22 np0005593232 systemd[1]: Started libpod-conmon-578f2be3bd77737fbf1f867aa80e1f3c0ca39c44021bcfd4eee046e7e2800865.scope.
Jan 23 05:52:22 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:52:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f5debacbae55364de5c9652914ee878eb04e41f4493da738e8b1c9660fc815/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:52:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f5debacbae55364de5c9652914ee878eb04e41f4493da738e8b1c9660fc815/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:52:23 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f5debacbae55364de5c9652914ee878eb04e41f4493da738e8b1c9660fc815/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:52:23 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5f5debacbae55364de5c9652914ee878eb04e41f4493da738e8b1c9660fc815/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:52:23 np0005593232 podman[401195]: 2026-01-23 10:52:22.912796354 +0000 UTC m=+0.026636968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:52:23 np0005593232 podman[401195]: 2026-01-23 10:52:23.011092278 +0000 UTC m=+0.124932892 container init 578f2be3bd77737fbf1f867aa80e1f3c0ca39c44021bcfd4eee046e7e2800865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:52:23 np0005593232 podman[401195]: 2026-01-23 10:52:23.019358352 +0000 UTC m=+0.133198946 container start 578f2be3bd77737fbf1f867aa80e1f3c0ca39c44021bcfd4eee046e7e2800865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 05:52:23 np0005593232 podman[401195]: 2026-01-23 10:52:23.022651246 +0000 UTC m=+0.136491860 container attach 578f2be3bd77737fbf1f867aa80e1f3c0ca39c44021bcfd4eee046e7e2800865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 05:52:23 np0005593232 nova_compute[250269]: 2026-01-23 10:52:23.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:52:23 np0005593232 interesting_nash[401210]: {
Jan 23 05:52:23 np0005593232 interesting_nash[401210]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:52:23 np0005593232 interesting_nash[401210]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:52:23 np0005593232 interesting_nash[401210]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:52:23 np0005593232 interesting_nash[401210]:        "osd_id": 0,
Jan 23 05:52:23 np0005593232 interesting_nash[401210]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:52:23 np0005593232 interesting_nash[401210]:        "type": "bluestore"
Jan 23 05:52:23 np0005593232 interesting_nash[401210]:    }
Jan 23 05:52:23 np0005593232 interesting_nash[401210]: }
Jan 23 05:52:23 np0005593232 systemd[1]: libpod-578f2be3bd77737fbf1f867aa80e1f3c0ca39c44021bcfd4eee046e7e2800865.scope: Deactivated successfully.
Jan 23 05:52:23 np0005593232 podman[401195]: 2026-01-23 10:52:23.883374782 +0000 UTC m=+0.997215396 container died 578f2be3bd77737fbf1f867aa80e1f3c0ca39c44021bcfd4eee046e7e2800865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nash, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:52:23 np0005593232 nova_compute[250269]: 2026-01-23 10:52:23.889 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:52:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:23.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:52:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:52:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:24.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:52:24 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b5f5debacbae55364de5c9652914ee878eb04e41f4493da738e8b1c9660fc815-merged.mount: Deactivated successfully.
Jan 23 05:52:24 np0005593232 podman[401195]: 2026-01-23 10:52:24.307610701 +0000 UTC m=+1.421451295 container remove 578f2be3bd77737fbf1f867aa80e1f3c0ca39c44021bcfd4eee046e7e2800865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:52:24 np0005593232 systemd[1]: libpod-conmon-578f2be3bd77737fbf1f867aa80e1f3c0ca39c44021bcfd4eee046e7e2800865.scope: Deactivated successfully.
Jan 23 05:52:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:52:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:52:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:52:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:52:24 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 363e47c8-8d76-46c9-9d33-556fe43ba9ed does not exist
Jan 23 05:52:24 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ac12b5f9-85f0-414d-9b57-4e5569401510 does not exist
Jan 23 05:52:24 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 147cf900-1ae7-4aa0-a4b4-b9190b15b3ba does not exist
Jan 23 05:52:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3792: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Jan 23 05:52:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:52:25 np0005593232 nova_compute[250269]: 2026-01-23 10:52:25.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:52:25 np0005593232 nova_compute[250269]: 2026-01-23 10:52:25.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:52:25 np0005593232 nova_compute[250269]: 2026-01-23 10:52:25.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:52:25 np0005593232 nova_compute[250269]: 2026-01-23 10:52:25.314 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:52:25 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:52:25 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:52:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:25.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:52:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:26.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:52:26 np0005593232 nova_compute[250269]: 2026-01-23 10:52:26.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:52:26 np0005593232 podman[401297]: 2026-01-23 10:52:26.437319689 +0000 UTC m=+0.091161642 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:52:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3793: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 852 B/s rd, 255 B/s wr, 1 op/s
Jan 23 05:52:27 np0005593232 nova_compute[250269]: 2026-01-23 10:52:27.582 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:52:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:27.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:52:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:52:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:28.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:52:28 np0005593232 nova_compute[250269]: 2026-01-23 10:52:28.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:52:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3794: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:52:28 np0005593232 nova_compute[250269]: 2026-01-23 10:52:28.891 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:29 np0005593232 podman[401349]: 2026-01-23 10:52:29.686520247 +0000 UTC m=+0.067165490 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:52:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:52:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:29.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:52:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:52:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:52:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:30.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:52:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3795: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:52:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:31.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:32.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:32 np0005593232 nova_compute[250269]: 2026-01-23 10:52:32.586 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3796: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:52:33 np0005593232 nova_compute[250269]: 2026-01-23 10:52:33.893 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:33.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:34.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3797: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:52:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:52:35 np0005593232 nova_compute[250269]: 2026-01-23 10:52:35.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:52:35 np0005593232 nova_compute[250269]: 2026-01-23 10:52:35.339 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:52:35 np0005593232 nova_compute[250269]: 2026-01-23 10:52:35.339 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:52:35 np0005593232 nova_compute[250269]: 2026-01-23 10:52:35.340 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:52:35 np0005593232 nova_compute[250269]: 2026-01-23 10:52:35.340 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:52:35 np0005593232 nova_compute[250269]: 2026-01-23 10:52:35.341 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:52:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:52:35 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3395310914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:52:35 np0005593232 nova_compute[250269]: 2026-01-23 10:52:35.863 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:52:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:35.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:36 np0005593232 nova_compute[250269]: 2026-01-23 10:52:36.127 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:52:36 np0005593232 nova_compute[250269]: 2026-01-23 10:52:36.129 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4099MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:52:36 np0005593232 nova_compute[250269]: 2026-01-23 10:52:36.130 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:52:36 np0005593232 nova_compute[250269]: 2026-01-23 10:52:36.131 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:52:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:36.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:36 np0005593232 nova_compute[250269]: 2026-01-23 10:52:36.369 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:52:36 np0005593232 nova_compute[250269]: 2026-01-23 10:52:36.370 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:52:36 np0005593232 nova_compute[250269]: 2026-01-23 10:52:36.454 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing inventories for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 23 05:52:36 np0005593232 nova_compute[250269]: 2026-01-23 10:52:36.532 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating ProviderTree inventory for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 23 05:52:36 np0005593232 nova_compute[250269]: 2026-01-23 10:52:36.532 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 05:52:36 np0005593232 nova_compute[250269]: 2026-01-23 10:52:36.554 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing aggregate associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 23 05:52:36 np0005593232 nova_compute[250269]: 2026-01-23 10:52:36.574 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing trait associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 23 05:52:36 np0005593232 nova_compute[250269]: 2026-01-23 10:52:36.593 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:52:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3798: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:52:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:52:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1019337917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:52:37 np0005593232 nova_compute[250269]: 2026-01-23 10:52:37.097 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:52:37 np0005593232 nova_compute[250269]: 2026-01-23 10:52:37.108 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:52:37 np0005593232 nova_compute[250269]: 2026-01-23 10:52:37.142 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:52:37 np0005593232 nova_compute[250269]: 2026-01-23 10:52:37.145 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:52:37 np0005593232 nova_compute[250269]: 2026-01-23 10:52:37.146 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:52:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:52:37
Jan 23 05:52:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:52:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:52:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'default.rgw.log', 'images', 'vms', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', '.mgr']
Jan 23 05:52:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:52:37 np0005593232 nova_compute[250269]: 2026-01-23 10:52:37.631 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:52:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:52:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:52:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:52:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:52:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:52:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:37.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:52:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:38.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:52:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3799: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:52:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:52:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:52:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:52:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:52:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:52:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:52:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:52:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:52:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:52:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:52:38 np0005593232 nova_compute[250269]: 2026-01-23 10:52:38.894 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:52:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:52:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:39.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:52:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:40.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3800: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:52:41 np0005593232 nova_compute[250269]: 2026-01-23 10:52:41.142 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:52:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:41.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:42.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:42 np0005593232 nova_compute[250269]: 2026-01-23 10:52:42.633 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:52:42.674 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:52:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:52:42.675 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:52:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:52:42.676 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:52:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3801: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:52:43 np0005593232 nova_compute[250269]: 2026-01-23 10:52:43.896 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:43.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:44.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3802: 321 pgs: 321 active+clean; 125 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.1 KiB/s rd, 258 KiB/s wr, 3 op/s
Jan 23 05:52:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:52:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:45.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:46.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3803: 321 pgs: 321 active+clean; 142 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 2.7 KiB/s rd, 603 KiB/s wr, 5 op/s
Jan 23 05:52:47 np0005593232 nova_compute[250269]: 2026-01-23 10:52:47.636 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:52:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:52:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:52:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:52:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006783047180346176 of space, bias 1.0, pg target 0.20349141541038526 quantized to 32 (current 32)
Jan 23 05:52:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:52:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 05:52:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:52:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 05:52:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:52:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 05:52:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:52:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 05:52:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:52:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:52:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:52:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 05:52:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:52:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 05:52:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:52:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:52:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:52:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 05:52:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.004000114s ======
Jan 23 05:52:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:47.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000114s
Jan 23 05:52:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:52:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:48.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:52:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3804: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:52:48 np0005593232 nova_compute[250269]: 2026-01-23 10:52:48.899 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:49.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:52:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:50.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3805: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:52:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 05:52:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:51.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 05:52:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:52.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:52 np0005593232 nova_compute[250269]: 2026-01-23 10:52:52.639 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3806: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:52:53 np0005593232 nova_compute[250269]: 2026-01-23 10:52:53.901 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:52:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:53.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:52:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:52:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:54.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:52:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3807: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:52:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:52:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:55.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:56.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3808: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 350 KiB/s rd, 1.5 MiB/s wr, 37 op/s
Jan 23 05:52:57 np0005593232 podman[401503]: 2026-01-23 10:52:57.439384891 +0000 UTC m=+0.096792532 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 05:52:57 np0005593232 nova_compute[250269]: 2026-01-23 10:52:57.642 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:57.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:52:58.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:52:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3809: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 95 op/s
Jan 23 05:52:58 np0005593232 nova_compute[250269]: 2026-01-23 10:52:58.904 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:52:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:52:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:52:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:52:59.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:00.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:53:00 np0005593232 podman[401533]: 2026-01-23 10:53:00.401960662 +0000 UTC m=+0.062463706 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 23 05:53:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3810: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 05:53:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:53:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:01.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:53:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:02.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:02 np0005593232 nova_compute[250269]: 2026-01-23 10:53:02.677 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3811: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 05:53:03 np0005593232 nova_compute[250269]: 2026-01-23 10:53:03.905 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:03.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:53:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:04.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:53:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3812: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 05:53:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:53:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:06.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:06.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3813: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 23 05:53:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:53:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:53:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:53:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:53:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:53:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:53:07 np0005593232 nova_compute[250269]: 2026-01-23 10:53:07.679 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:08.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:53:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:08.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:53:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3814: 321 pgs: 321 active+clean; 177 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.6 MiB/s rd, 1.1 MiB/s wr, 77 op/s
Jan 23 05:53:08 np0005593232 nova_compute[250269]: 2026-01-23 10:53:08.906 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:10.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:10.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:53:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3815: 321 pgs: 321 active+clean; 177 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 57 KiB/s rd, 1.1 MiB/s wr, 17 op/s
Jan 23 05:53:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:53:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:12.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:53:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:12.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:12 np0005593232 nova_compute[250269]: 2026-01-23 10:53:12.684 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3816: 321 pgs: 321 active+clean; 194 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 181 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Jan 23 05:53:13 np0005593232 nova_compute[250269]: 2026-01-23 10:53:13.909 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:53:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:14.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:53:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:53:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:14.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:53:14 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 23 05:53:14 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 23 05:53:14 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 23 05:53:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3817: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 274 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 23 05:53:14 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 23 05:53:14 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 23 05:53:14 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 23 05:53:14 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 23 05:53:14 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 23 05:53:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #195. Immutable memtables: 0.
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:53:16.004932) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 195
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165596005002, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 1183, "num_deletes": 251, "total_data_size": 1966783, "memory_usage": 1991712, "flush_reason": "Manual Compaction"}
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #196: started
Jan 23 05:53:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:53:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:16.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165596026723, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 196, "file_size": 1947382, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83270, "largest_seqno": 84452, "table_properties": {"data_size": 1941751, "index_size": 3025, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12135, "raw_average_key_size": 19, "raw_value_size": 1930427, "raw_average_value_size": 3164, "num_data_blocks": 135, "num_entries": 610, "num_filter_entries": 610, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769165480, "oldest_key_time": 1769165480, "file_creation_time": 1769165596, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 196, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 22083 microseconds, and 7750 cpu microseconds.
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:53:16.027003) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #196: 1947382 bytes OK
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:53:16.027050) [db/memtable_list.cc:519] [default] Level-0 commit table #196 started
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:53:16.030141) [db/memtable_list.cc:722] [default] Level-0 commit table #196: memtable #1 done
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:53:16.030186) EVENT_LOG_v1 {"time_micros": 1769165596030173, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:53:16.030236) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 1961455, prev total WAL file size 1961455, number of live WAL files 2.
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000192.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:53:16.031845) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [196(1901KB)], [194(12MB)]
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165596032025, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [196], "files_L6": [194], "score": -1, "input_data_size": 14646799, "oldest_snapshot_seqno": -1}
Jan 23 05:53:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:16.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:16 np0005593232 nova_compute[250269]: 2026-01-23 10:53:16.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #197: 10805 keys, 12731566 bytes, temperature: kUnknown
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165596328591, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 197, "file_size": 12731566, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12663571, "index_size": 39850, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27077, "raw_key_size": 286407, "raw_average_key_size": 26, "raw_value_size": 12476435, "raw_average_value_size": 1154, "num_data_blocks": 1506, "num_entries": 10805, "num_filter_entries": 10805, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769165596, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:53:16.329124) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 12731566 bytes
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:53:16.425641) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 49.4 rd, 42.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 12.1 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(14.1) write-amplify(6.5) OK, records in: 11320, records dropped: 515 output_compression: NoCompression
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:53:16.425710) EVENT_LOG_v1 {"time_micros": 1769165596425684, "job": 122, "event": "compaction_finished", "compaction_time_micros": 296723, "compaction_time_cpu_micros": 58553, "output_level": 6, "num_output_files": 1, "total_output_size": 12731566, "num_input_records": 11320, "num_output_records": 10805, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000196.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165596426896, "job": 122, "event": "table_file_deletion", "file_number": 196}
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165596430473, "job": 122, "event": "table_file_deletion", "file_number": 194}
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:53:16.031644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:53:16.430718) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:53:16.430733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:53:16.430737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:53:16.430741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:53:16 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:53:16.430745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:53:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3818: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 275 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 23 05:53:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:17.109 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=87, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=86) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:53:17 np0005593232 nova_compute[250269]: 2026-01-23 10:53:17.109 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:17 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:17.111 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:53:17 np0005593232 nova_compute[250269]: 2026-01-23 10:53:17.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:53:17 np0005593232 nova_compute[250269]: 2026-01-23 10:53:17.686 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:18.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:18.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3819: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 309 KiB/s rd, 2.1 MiB/s wr, 123 op/s
Jan 23 05:53:18 np0005593232 nova_compute[250269]: 2026-01-23 10:53:18.910 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:20.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:53:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:53:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:20.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:53:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3820: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 257 KiB/s rd, 1.1 MiB/s wr, 106 op/s
Jan 23 05:53:21 np0005593232 nova_compute[250269]: 2026-01-23 10:53:21.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:53:21 np0005593232 nova_compute[250269]: 2026-01-23 10:53:21.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:53:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:22.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:22.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:22 np0005593232 nova_compute[250269]: 2026-01-23 10:53:22.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:53:22 np0005593232 nova_compute[250269]: 2026-01-23 10:53:22.691 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3821: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 305 KiB/s rd, 1.1 MiB/s wr, 186 op/s
Jan 23 05:53:23 np0005593232 nova_compute[250269]: 2026-01-23 10:53:23.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:53:23 np0005593232 nova_compute[250269]: 2026-01-23 10:53:23.890 250273 DEBUG nova.compute.manager [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Jan 23 05:53:23 np0005593232 nova_compute[250269]: 2026-01-23 10:53:23.913 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:23 np0005593232 nova_compute[250269]: 2026-01-23 10:53:23.986 250273 DEBUG oslo_concurrency.lockutils [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:53:23 np0005593232 nova_compute[250269]: 2026-01-23 10:53:23.987 250273 DEBUG oslo_concurrency.lockutils [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:53:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:24.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:24 np0005593232 nova_compute[250269]: 2026-01-23 10:53:24.021 250273 DEBUG nova.objects.instance [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Lazy-loading 'pci_requests' on Instance uuid 21edef97-3531-4772-8aa5-a3feeb9ff3f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:53:24 np0005593232 nova_compute[250269]: 2026-01-23 10:53:24.041 250273 DEBUG nova.virt.hardware [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 05:53:24 np0005593232 nova_compute[250269]: 2026-01-23 10:53:24.042 250273 INFO nova.compute.claims [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 05:53:24 np0005593232 nova_compute[250269]: 2026-01-23 10:53:24.043 250273 DEBUG nova.objects.instance [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Lazy-loading 'resources' on Instance uuid 21edef97-3531-4772-8aa5-a3feeb9ff3f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:53:24 np0005593232 nova_compute[250269]: 2026-01-23 10:53:24.070 250273 DEBUG nova.objects.instance [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Lazy-loading 'numa_topology' on Instance uuid 21edef97-3531-4772-8aa5-a3feeb9ff3f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:53:24 np0005593232 nova_compute[250269]: 2026-01-23 10:53:24.089 250273 DEBUG nova.objects.instance [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Lazy-loading 'pci_devices' on Instance uuid 21edef97-3531-4772-8aa5-a3feeb9ff3f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:53:24 np0005593232 nova_compute[250269]: 2026-01-23 10:53:24.142 250273 INFO nova.compute.resource_tracker [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Updating resource usage from migration 8ab60eb4-5a21-4192-9b1a-3ef645cee3f1#033[00m
Jan 23 05:53:24 np0005593232 nova_compute[250269]: 2026-01-23 10:53:24.142 250273 DEBUG nova.compute.resource_tracker [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Starting to track incoming migration 8ab60eb4-5a21-4192-9b1a-3ef645cee3f1 with flavor 68d42077-c749-4366-ba3e-07758debb02d _update_usage_from_migration /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1431#033[00m
Jan 23 05:53:24 np0005593232 nova_compute[250269]: 2026-01-23 10:53:24.194 250273 DEBUG oslo_concurrency.processutils [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:53:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:24.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3822: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 183 KiB/s rd, 54 KiB/s wr, 160 op/s
Jan 23 05:53:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:53:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4237478819' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:53:24 np0005593232 nova_compute[250269]: 2026-01-23 10:53:24.742 250273 DEBUG oslo_concurrency.processutils [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:53:24 np0005593232 nova_compute[250269]: 2026-01-23 10:53:24.748 250273 DEBUG nova.compute.provider_tree [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:53:24 np0005593232 nova_compute[250269]: 2026-01-23 10:53:24.765 250273 DEBUG nova.scheduler.client.report [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:53:24 np0005593232 nova_compute[250269]: 2026-01-23 10:53:24.791 250273 DEBUG oslo_concurrency.lockutils [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:53:24 np0005593232 nova_compute[250269]: 2026-01-23 10:53:24.792 250273 INFO nova.compute.manager [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Migrating#033[00m
Jan 23 05:53:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:25.113 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '87'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:53:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:53:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:26.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:26.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:26 np0005593232 nova_compute[250269]: 2026-01-23 10:53:26.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:53:26 np0005593232 nova_compute[250269]: 2026-01-23 10:53:26.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:53:26 np0005593232 nova_compute[250269]: 2026-01-23 10:53:26.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:53:26 np0005593232 nova_compute[250269]: 2026-01-23 10:53:26.525 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:53:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3823: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 88 KiB/s rd, 14 KiB/s wr, 147 op/s
Jan 23 05:53:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 05:53:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:53:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 05:53:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:53:27 np0005593232 nova_compute[250269]: 2026-01-23 10:53:27.693 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:53:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:53:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:53:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:53:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:53:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:53:27 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 39bb8e44-689c-4920-bcec-e66883393b55 does not exist
Jan 23 05:53:27 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 46c2a01f-a9f8-45ab-9f30-7a0bd5e50a4f does not exist
Jan 23 05:53:27 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d555973a-e072-467e-9d54-7f5b81ca8a03 does not exist
Jan 23 05:53:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:53:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:53:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:53:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:53:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:53:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:53:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:28.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:53:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:53:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:53:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:53:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:53:28 np0005593232 podman[401792]: 2026-01-23 10:53:28.215952613 +0000 UTC m=+0.115525305 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:53:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:28.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:28 np0005593232 nova_compute[250269]: 2026-01-23 10:53:28.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:53:28 np0005593232 systemd[1]: Created slice User Slice of UID 42436.
Jan 23 05:53:28 np0005593232 systemd[1]: Starting User Runtime Directory /run/user/42436...
Jan 23 05:53:28 np0005593232 systemd-logind[808]: New session 61 of user nova.
Jan 23 05:53:28 np0005593232 systemd[1]: Finished User Runtime Directory /run/user/42436.
Jan 23 05:53:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3824: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 87 KiB/s rd, 2.0 KiB/s wr, 145 op/s
Jan 23 05:53:28 np0005593232 systemd[1]: Starting User Manager for UID 42436...
Jan 23 05:53:28 np0005593232 podman[401940]: 2026-01-23 10:53:28.797653938 +0000 UTC m=+0.058128723 container create 11b1899d172c906062a7bd0f32ec9671cbc47203de64ea2d105fa86f2b532d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_jepsen, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 05:53:28 np0005593232 systemd[1]: Started libpod-conmon-11b1899d172c906062a7bd0f32ec9671cbc47203de64ea2d105fa86f2b532d57.scope.
Jan 23 05:53:28 np0005593232 podman[401940]: 2026-01-23 10:53:28.774986754 +0000 UTC m=+0.035461579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:53:28 np0005593232 systemd[401939]: Queued start job for default target Main User Target.
Jan 23 05:53:28 np0005593232 systemd[401939]: Created slice User Application Slice.
Jan 23 05:53:28 np0005593232 systemd[401939]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 23 05:53:28 np0005593232 systemd[401939]: Started Daily Cleanup of User's Temporary Directories.
Jan 23 05:53:28 np0005593232 systemd[401939]: Reached target Paths.
Jan 23 05:53:28 np0005593232 systemd[401939]: Reached target Timers.
Jan 23 05:53:28 np0005593232 systemd[401939]: Starting D-Bus User Message Bus Socket...
Jan 23 05:53:28 np0005593232 systemd[401939]: Starting Create User's Volatile Files and Directories...
Jan 23 05:53:28 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:53:28 np0005593232 nova_compute[250269]: 2026-01-23 10:53:28.915 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:28 np0005593232 systemd[401939]: Finished Create User's Volatile Files and Directories.
Jan 23 05:53:28 np0005593232 systemd[401939]: Listening on D-Bus User Message Bus Socket.
Jan 23 05:53:28 np0005593232 systemd[401939]: Reached target Sockets.
Jan 23 05:53:28 np0005593232 systemd[401939]: Reached target Basic System.
Jan 23 05:53:28 np0005593232 systemd[401939]: Reached target Main User Target.
Jan 23 05:53:28 np0005593232 systemd[401939]: Startup finished in 169ms.
Jan 23 05:53:28 np0005593232 systemd[1]: Started User Manager for UID 42436.
Jan 23 05:53:28 np0005593232 podman[401940]: 2026-01-23 10:53:28.928201089 +0000 UTC m=+0.188675914 container init 11b1899d172c906062a7bd0f32ec9671cbc47203de64ea2d105fa86f2b532d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_jepsen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Jan 23 05:53:28 np0005593232 podman[401940]: 2026-01-23 10:53:28.938467061 +0000 UTC m=+0.198941846 container start 11b1899d172c906062a7bd0f32ec9671cbc47203de64ea2d105fa86f2b532d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 05:53:28 np0005593232 systemd[1]: Started Session 61 of User nova.
Jan 23 05:53:28 np0005593232 podman[401940]: 2026-01-23 10:53:28.941510007 +0000 UTC m=+0.201984792 container attach 11b1899d172c906062a7bd0f32ec9671cbc47203de64ea2d105fa86f2b532d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:53:28 np0005593232 keen_jepsen[401969]: 167 167
Jan 23 05:53:28 np0005593232 systemd[1]: libpod-11b1899d172c906062a7bd0f32ec9671cbc47203de64ea2d105fa86f2b532d57.scope: Deactivated successfully.
Jan 23 05:53:28 np0005593232 conmon[401969]: conmon 11b1899d172c906062a7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-11b1899d172c906062a7bd0f32ec9671cbc47203de64ea2d105fa86f2b532d57.scope/container/memory.events
Jan 23 05:53:29 np0005593232 podman[401976]: 2026-01-23 10:53:29.023806607 +0000 UTC m=+0.053174833 container died 11b1899d172c906062a7bd0f32ec9671cbc47203de64ea2d105fa86f2b532d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 05:53:29 np0005593232 systemd[1]: session-61.scope: Deactivated successfully.
Jan 23 05:53:29 np0005593232 systemd-logind[808]: Session 61 logged out. Waiting for processes to exit.
Jan 23 05:53:29 np0005593232 systemd-logind[808]: Removed session 61.
Jan 23 05:53:29 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d1d0b1b20fdb07aa88e844c876f0b17f5ded2509b932ab6f2da01960f005317b-merged.mount: Deactivated successfully.
Jan 23 05:53:29 np0005593232 podman[401976]: 2026-01-23 10:53:29.082923747 +0000 UTC m=+0.112291953 container remove 11b1899d172c906062a7bd0f32ec9671cbc47203de64ea2d105fa86f2b532d57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_jepsen, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:53:29 np0005593232 systemd[1]: libpod-conmon-11b1899d172c906062a7bd0f32ec9671cbc47203de64ea2d105fa86f2b532d57.scope: Deactivated successfully.
Jan 23 05:53:29 np0005593232 systemd-logind[808]: New session 63 of user nova.
Jan 23 05:53:29 np0005593232 systemd[1]: Started Session 63 of User nova.
Jan 23 05:53:29 np0005593232 systemd[1]: session-63.scope: Deactivated successfully.
Jan 23 05:53:29 np0005593232 systemd-logind[808]: Session 63 logged out. Waiting for processes to exit.
Jan 23 05:53:29 np0005593232 systemd-logind[808]: Removed session 63.
Jan 23 05:53:29 np0005593232 podman[402004]: 2026-01-23 10:53:29.324703799 +0000 UTC m=+0.057228587 container create 74285b280475df5d0538a813ec14098f8a9859397154594f4ffbe24221d570c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 23 05:53:29 np0005593232 systemd[1]: Started libpod-conmon-74285b280475df5d0538a813ec14098f8a9859397154594f4ffbe24221d570c9.scope.
Jan 23 05:53:29 np0005593232 podman[402004]: 2026-01-23 10:53:29.302159359 +0000 UTC m=+0.034684217 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:53:29 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:53:29 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/007915a8921f13370a1696d79ea8fab254d5de7f006e0318cd1518237b3774bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:53:29 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/007915a8921f13370a1696d79ea8fab254d5de7f006e0318cd1518237b3774bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:53:29 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/007915a8921f13370a1696d79ea8fab254d5de7f006e0318cd1518237b3774bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:53:29 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/007915a8921f13370a1696d79ea8fab254d5de7f006e0318cd1518237b3774bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:53:29 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/007915a8921f13370a1696d79ea8fab254d5de7f006e0318cd1518237b3774bd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:53:29 np0005593232 podman[402004]: 2026-01-23 10:53:29.44361989 +0000 UTC m=+0.176144768 container init 74285b280475df5d0538a813ec14098f8a9859397154594f4ffbe24221d570c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:53:29 np0005593232 podman[402004]: 2026-01-23 10:53:29.461051675 +0000 UTC m=+0.193576503 container start 74285b280475df5d0538a813ec14098f8a9859397154594f4ffbe24221d570c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 23 05:53:29 np0005593232 podman[402004]: 2026-01-23 10:53:29.46614065 +0000 UTC m=+0.198665548 container attach 74285b280475df5d0538a813ec14098f8a9859397154594f4ffbe24221d570c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 05:53:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:53:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:30.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:53:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:53:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:53:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:30.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:53:30 np0005593232 nova_compute[250269]: 2026-01-23 10:53:30.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:53:30 np0005593232 elegant_vaughan[402022]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:53:30 np0005593232 elegant_vaughan[402022]: --> relative data size: 1.0
Jan 23 05:53:30 np0005593232 elegant_vaughan[402022]: --> All data devices are unavailable
Jan 23 05:53:30 np0005593232 systemd[1]: libpod-74285b280475df5d0538a813ec14098f8a9859397154594f4ffbe24221d570c9.scope: Deactivated successfully.
Jan 23 05:53:30 np0005593232 podman[402004]: 2026-01-23 10:53:30.418891762 +0000 UTC m=+1.151416560 container died 74285b280475df5d0538a813ec14098f8a9859397154594f4ffbe24221d570c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:53:30 np0005593232 systemd[1]: var-lib-containers-storage-overlay-007915a8921f13370a1696d79ea8fab254d5de7f006e0318cd1518237b3774bd-merged.mount: Deactivated successfully.
Jan 23 05:53:30 np0005593232 podman[402004]: 2026-01-23 10:53:30.491616939 +0000 UTC m=+1.224141737 container remove 74285b280475df5d0538a813ec14098f8a9859397154594f4ffbe24221d570c9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_vaughan, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 23 05:53:30 np0005593232 systemd[1]: libpod-conmon-74285b280475df5d0538a813ec14098f8a9859397154594f4ffbe24221d570c9.scope: Deactivated successfully.
Jan 23 05:53:30 np0005593232 podman[402089]: 2026-01-23 10:53:30.5676615 +0000 UTC m=+0.101083134 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:53:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3825: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 51 KiB/s rd, 1023 B/s wr, 84 op/s
Jan 23 05:53:31 np0005593232 podman[402256]: 2026-01-23 10:53:31.402394327 +0000 UTC m=+0.054849220 container create 59f2c3adccb7ea24f5d324ee30438e12602baf1d3e1a9af193a82d5454a4f385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_pascal, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 23 05:53:31 np0005593232 systemd[1]: Started libpod-conmon-59f2c3adccb7ea24f5d324ee30438e12602baf1d3e1a9af193a82d5454a4f385.scope.
Jan 23 05:53:31 np0005593232 podman[402256]: 2026-01-23 10:53:31.380502084 +0000 UTC m=+0.032957007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:53:31 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:53:31 np0005593232 podman[402256]: 2026-01-23 10:53:31.503406758 +0000 UTC m=+0.155861671 container init 59f2c3adccb7ea24f5d324ee30438e12602baf1d3e1a9af193a82d5454a4f385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 05:53:31 np0005593232 podman[402256]: 2026-01-23 10:53:31.51508199 +0000 UTC m=+0.167536873 container start 59f2c3adccb7ea24f5d324ee30438e12602baf1d3e1a9af193a82d5454a4f385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_pascal, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 05:53:31 np0005593232 podman[402256]: 2026-01-23 10:53:31.519149835 +0000 UTC m=+0.171604748 container attach 59f2c3adccb7ea24f5d324ee30438e12602baf1d3e1a9af193a82d5454a4f385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 23 05:53:31 np0005593232 stoic_pascal[402273]: 167 167
Jan 23 05:53:31 np0005593232 systemd[1]: libpod-59f2c3adccb7ea24f5d324ee30438e12602baf1d3e1a9af193a82d5454a4f385.scope: Deactivated successfully.
Jan 23 05:53:31 np0005593232 podman[402256]: 2026-01-23 10:53:31.523983493 +0000 UTC m=+0.176438426 container died 59f2c3adccb7ea24f5d324ee30438e12602baf1d3e1a9af193a82d5454a4f385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:53:31 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a2f701b760ca453fbd501c85199157713f1a6fd5d84665d53a5dbd4a0a85e599-merged.mount: Deactivated successfully.
Jan 23 05:53:31 np0005593232 podman[402256]: 2026-01-23 10:53:31.586212382 +0000 UTC m=+0.238667285 container remove 59f2c3adccb7ea24f5d324ee30438e12602baf1d3e1a9af193a82d5454a4f385 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 05:53:31 np0005593232 systemd[1]: libpod-conmon-59f2c3adccb7ea24f5d324ee30438e12602baf1d3e1a9af193a82d5454a4f385.scope: Deactivated successfully.
Jan 23 05:53:31 np0005593232 podman[402298]: 2026-01-23 10:53:31.843749152 +0000 UTC m=+0.083562646 container create 6bd9a76347198bf33b414a76d12e9f5d7c840f05b40c7f71834d59ed988d5729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 05:53:31 np0005593232 podman[402298]: 2026-01-23 10:53:31.810976671 +0000 UTC m=+0.050790215 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:53:31 np0005593232 systemd[1]: Started libpod-conmon-6bd9a76347198bf33b414a76d12e9f5d7c840f05b40c7f71834d59ed988d5729.scope.
Jan 23 05:53:31 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:53:31 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18164c9572e8a0fec850f7c3d9910074c146091d3540dfd69ebbcce310e93969/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:53:31 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18164c9572e8a0fec850f7c3d9910074c146091d3540dfd69ebbcce310e93969/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:53:31 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18164c9572e8a0fec850f7c3d9910074c146091d3540dfd69ebbcce310e93969/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:53:31 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18164c9572e8a0fec850f7c3d9910074c146091d3540dfd69ebbcce310e93969/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:53:31 np0005593232 podman[402298]: 2026-01-23 10:53:31.979348087 +0000 UTC m=+0.219161561 container init 6bd9a76347198bf33b414a76d12e9f5d7c840f05b40c7f71834d59ed988d5729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 05:53:31 np0005593232 podman[402298]: 2026-01-23 10:53:31.993246752 +0000 UTC m=+0.233060206 container start 6bd9a76347198bf33b414a76d12e9f5d7c840f05b40c7f71834d59ed988d5729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:53:31 np0005593232 podman[402298]: 2026-01-23 10:53:31.99844451 +0000 UTC m=+0.238257964 container attach 6bd9a76347198bf33b414a76d12e9f5d7c840f05b40c7f71834d59ed988d5729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_kirch, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:53:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:32.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:53:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:32.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:53:32 np0005593232 nova_compute[250269]: 2026-01-23 10:53:32.695 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3826: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 52 KiB/s rd, 6.7 KiB/s wr, 86 op/s
Jan 23 05:53:32 np0005593232 silly_kirch[402314]: {
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:    "0": [
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:        {
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:            "devices": [
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:                "/dev/loop3"
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:            ],
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:            "lv_name": "ceph_lv0",
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:            "lv_size": "7511998464",
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:            "name": "ceph_lv0",
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:            "tags": {
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:                "ceph.cluster_name": "ceph",
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:                "ceph.crush_device_class": "",
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:                "ceph.encrypted": "0",
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:                "ceph.osd_id": "0",
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:                "ceph.type": "block",
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:                "ceph.vdo": "0"
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:            },
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:            "type": "block",
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:            "vg_name": "ceph_vg0"
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:        }
Jan 23 05:53:32 np0005593232 silly_kirch[402314]:    ]
Jan 23 05:53:32 np0005593232 silly_kirch[402314]: }
Jan 23 05:53:32 np0005593232 systemd[1]: libpod-6bd9a76347198bf33b414a76d12e9f5d7c840f05b40c7f71834d59ed988d5729.scope: Deactivated successfully.
Jan 23 05:53:32 np0005593232 podman[402298]: 2026-01-23 10:53:32.870778846 +0000 UTC m=+1.110592360 container died 6bd9a76347198bf33b414a76d12e9f5d7c840f05b40c7f71834d59ed988d5729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Jan 23 05:53:32 np0005593232 systemd[1]: var-lib-containers-storage-overlay-18164c9572e8a0fec850f7c3d9910074c146091d3540dfd69ebbcce310e93969-merged.mount: Deactivated successfully.
Jan 23 05:53:32 np0005593232 podman[402298]: 2026-01-23 10:53:32.949036091 +0000 UTC m=+1.188849555 container remove 6bd9a76347198bf33b414a76d12e9f5d7c840f05b40c7f71834d59ed988d5729 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 05:53:32 np0005593232 systemd[1]: libpod-conmon-6bd9a76347198bf33b414a76d12e9f5d7c840f05b40c7f71834d59ed988d5729.scope: Deactivated successfully.
Jan 23 05:53:33 np0005593232 nova_compute[250269]: 2026-01-23 10:53:33.657 250273 DEBUG nova.compute.manager [req-adea3133-33e8-4155-be19-d024bc44aff8 req-df20ec4f-b402-4a12-b4b2-800ed1d6276f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Received event network-vif-unplugged-d08a5642-c043-410d-8d3a-e63134c79cd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:53:33 np0005593232 nova_compute[250269]: 2026-01-23 10:53:33.659 250273 DEBUG oslo_concurrency.lockutils [req-adea3133-33e8-4155-be19-d024bc44aff8 req-df20ec4f-b402-4a12-b4b2-800ed1d6276f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "21edef97-3531-4772-8aa5-a3feeb9ff3f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:53:33 np0005593232 nova_compute[250269]: 2026-01-23 10:53:33.659 250273 DEBUG oslo_concurrency.lockutils [req-adea3133-33e8-4155-be19-d024bc44aff8 req-df20ec4f-b402-4a12-b4b2-800ed1d6276f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "21edef97-3531-4772-8aa5-a3feeb9ff3f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:53:33 np0005593232 nova_compute[250269]: 2026-01-23 10:53:33.659 250273 DEBUG oslo_concurrency.lockutils [req-adea3133-33e8-4155-be19-d024bc44aff8 req-df20ec4f-b402-4a12-b4b2-800ed1d6276f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "21edef97-3531-4772-8aa5-a3feeb9ff3f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:53:33 np0005593232 nova_compute[250269]: 2026-01-23 10:53:33.659 250273 DEBUG nova.compute.manager [req-adea3133-33e8-4155-be19-d024bc44aff8 req-df20ec4f-b402-4a12-b4b2-800ed1d6276f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] No waiting events found dispatching network-vif-unplugged-d08a5642-c043-410d-8d3a-e63134c79cd2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:53:33 np0005593232 nova_compute[250269]: 2026-01-23 10:53:33.659 250273 WARNING nova.compute.manager [req-adea3133-33e8-4155-be19-d024bc44aff8 req-df20ec4f-b402-4a12-b4b2-800ed1d6276f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Received unexpected event network-vif-unplugged-d08a5642-c043-410d-8d3a-e63134c79cd2 for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 23 05:53:33 np0005593232 podman[402474]: 2026-01-23 10:53:33.669348326 +0000 UTC m=+0.039441502 container create a2a0f4f69bbd35a32bd58c9e3904880e929d0b8280d4e7e30ea1abdbb670cffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gagarin, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 05:53:33 np0005593232 systemd[1]: Started libpod-conmon-a2a0f4f69bbd35a32bd58c9e3904880e929d0b8280d4e7e30ea1abdbb670cffa.scope.
Jan 23 05:53:33 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:53:33 np0005593232 podman[402474]: 2026-01-23 10:53:33.652287971 +0000 UTC m=+0.022381167 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:53:33 np0005593232 podman[402474]: 2026-01-23 10:53:33.758347116 +0000 UTC m=+0.128440392 container init a2a0f4f69bbd35a32bd58c9e3904880e929d0b8280d4e7e30ea1abdbb670cffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gagarin, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:53:33 np0005593232 podman[402474]: 2026-01-23 10:53:33.765446067 +0000 UTC m=+0.135539263 container start a2a0f4f69bbd35a32bd58c9e3904880e929d0b8280d4e7e30ea1abdbb670cffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gagarin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2)
Jan 23 05:53:33 np0005593232 vigorous_gagarin[402490]: 167 167
Jan 23 05:53:33 np0005593232 podman[402474]: 2026-01-23 10:53:33.769588305 +0000 UTC m=+0.139681521 container attach a2a0f4f69bbd35a32bd58c9e3904880e929d0b8280d4e7e30ea1abdbb670cffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 05:53:33 np0005593232 conmon[402490]: conmon a2a0f4f69bbd35a32bd5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a2a0f4f69bbd35a32bd58c9e3904880e929d0b8280d4e7e30ea1abdbb670cffa.scope/container/memory.events
Jan 23 05:53:33 np0005593232 systemd[1]: libpod-a2a0f4f69bbd35a32bd58c9e3904880e929d0b8280d4e7e30ea1abdbb670cffa.scope: Deactivated successfully.
Jan 23 05:53:33 np0005593232 podman[402474]: 2026-01-23 10:53:33.771587642 +0000 UTC m=+0.141680858 container died a2a0f4f69bbd35a32bd58c9e3904880e929d0b8280d4e7e30ea1abdbb670cffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gagarin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:53:33 np0005593232 systemd[1]: var-lib-containers-storage-overlay-fa5372afc70e10be8da306215cfdf00e90e128b09cafba9e622ba11b9f8fdf0e-merged.mount: Deactivated successfully.
Jan 23 05:53:33 np0005593232 podman[402474]: 2026-01-23 10:53:33.825181945 +0000 UTC m=+0.195275121 container remove a2a0f4f69bbd35a32bd58c9e3904880e929d0b8280d4e7e30ea1abdbb670cffa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 05:53:33 np0005593232 systemd[1]: libpod-conmon-a2a0f4f69bbd35a32bd58c9e3904880e929d0b8280d4e7e30ea1abdbb670cffa.scope: Deactivated successfully.
Jan 23 05:53:33 np0005593232 nova_compute[250269]: 2026-01-23 10:53:33.919 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:33 np0005593232 podman[402513]: 2026-01-23 10:53:33.976393253 +0000 UTC m=+0.043395204 container create e963c3c59174fe3bd47148f74a674e7bf7b78878b88f7e8bbac72c25527c5c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Jan 23 05:53:34 np0005593232 nova_compute[250269]: 2026-01-23 10:53:34.009 250273 INFO nova.network.neutron [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Updating port d08a5642-c043-410d-8d3a-e63134c79cd2 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Jan 23 05:53:34 np0005593232 systemd[1]: Started libpod-conmon-e963c3c59174fe3bd47148f74a674e7bf7b78878b88f7e8bbac72c25527c5c28.scope.
Jan 23 05:53:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:34.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:34 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:53:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d3acb0b5405a061fc4c9ad157403790dda74b6639b87e819e4ba47fe24534f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:53:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d3acb0b5405a061fc4c9ad157403790dda74b6639b87e819e4ba47fe24534f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:53:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d3acb0b5405a061fc4c9ad157403790dda74b6639b87e819e4ba47fe24534f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:53:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4d3acb0b5405a061fc4c9ad157403790dda74b6639b87e819e4ba47fe24534f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:53:34 np0005593232 podman[402513]: 2026-01-23 10:53:33.959010429 +0000 UTC m=+0.026012410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:53:34 np0005593232 podman[402513]: 2026-01-23 10:53:34.056230133 +0000 UTC m=+0.123232104 container init e963c3c59174fe3bd47148f74a674e7bf7b78878b88f7e8bbac72c25527c5c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_joliot, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:53:34 np0005593232 podman[402513]: 2026-01-23 10:53:34.068295326 +0000 UTC m=+0.135297277 container start e963c3c59174fe3bd47148f74a674e7bf7b78878b88f7e8bbac72c25527c5c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_joliot, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 05:53:34 np0005593232 podman[402513]: 2026-01-23 10:53:34.071314172 +0000 UTC m=+0.138316123 container attach e963c3c59174fe3bd47148f74a674e7bf7b78878b88f7e8bbac72c25527c5c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_joliot, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 05:53:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:34.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3827: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.9 KiB/s rd, 18 KiB/s wr, 8 op/s
Jan 23 05:53:34 np0005593232 stoic_joliot[402529]: {
Jan 23 05:53:34 np0005593232 stoic_joliot[402529]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:53:34 np0005593232 stoic_joliot[402529]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:53:34 np0005593232 stoic_joliot[402529]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:53:34 np0005593232 stoic_joliot[402529]:        "osd_id": 0,
Jan 23 05:53:34 np0005593232 stoic_joliot[402529]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:53:34 np0005593232 stoic_joliot[402529]:        "type": "bluestore"
Jan 23 05:53:34 np0005593232 stoic_joliot[402529]:    }
Jan 23 05:53:34 np0005593232 stoic_joliot[402529]: }
Jan 23 05:53:34 np0005593232 systemd[1]: libpod-e963c3c59174fe3bd47148f74a674e7bf7b78878b88f7e8bbac72c25527c5c28.scope: Deactivated successfully.
Jan 23 05:53:34 np0005593232 podman[402513]: 2026-01-23 10:53:34.898122503 +0000 UTC m=+0.965124474 container died e963c3c59174fe3bd47148f74a674e7bf7b78878b88f7e8bbac72c25527c5c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_joliot, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:53:34 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e4d3acb0b5405a061fc4c9ad157403790dda74b6639b87e819e4ba47fe24534f-merged.mount: Deactivated successfully.
Jan 23 05:53:34 np0005593232 podman[402513]: 2026-01-23 10:53:34.966132916 +0000 UTC m=+1.033134897 container remove e963c3c59174fe3bd47148f74a674e7bf7b78878b88f7e8bbac72c25527c5c28 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_joliot, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 05:53:34 np0005593232 systemd[1]: libpod-conmon-e963c3c59174fe3bd47148f74a674e7bf7b78878b88f7e8bbac72c25527c5c28.scope: Deactivated successfully.
Jan 23 05:53:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:53:35 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:53:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:53:35 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:53:35 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 561ca93f-f15f-49e4-bca9-bcf5e6491a99 does not exist
Jan 23 05:53:35 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 683fb051-32b8-45bd-8598-669c788fe693 does not exist
Jan 23 05:53:35 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 48b8a7d9-e0b5-47e9-aaa2-6dd288a3dfda does not exist
Jan 23 05:53:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:53:35 np0005593232 nova_compute[250269]: 2026-01-23 10:53:35.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:53:35 np0005593232 nova_compute[250269]: 2026-01-23 10:53:35.324 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:53:35 np0005593232 nova_compute[250269]: 2026-01-23 10:53:35.325 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:53:35 np0005593232 nova_compute[250269]: 2026-01-23 10:53:35.326 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:53:35 np0005593232 nova_compute[250269]: 2026-01-23 10:53:35.326 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:53:35 np0005593232 nova_compute[250269]: 2026-01-23 10:53:35.326 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:53:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:53:35 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1124358948' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:53:35 np0005593232 nova_compute[250269]: 2026-01-23 10:53:35.800 250273 DEBUG nova.compute.manager [req-2b6b2ba7-984c-4ea1-9870-80c30e11d9da req-fdeb1e0a-b050-4a89-b8d2-f3f78362a9d9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Received event network-vif-plugged-d08a5642-c043-410d-8d3a-e63134c79cd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:53:35 np0005593232 nova_compute[250269]: 2026-01-23 10:53:35.801 250273 DEBUG oslo_concurrency.lockutils [req-2b6b2ba7-984c-4ea1-9870-80c30e11d9da req-fdeb1e0a-b050-4a89-b8d2-f3f78362a9d9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "21edef97-3531-4772-8aa5-a3feeb9ff3f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:53:35 np0005593232 nova_compute[250269]: 2026-01-23 10:53:35.801 250273 DEBUG oslo_concurrency.lockutils [req-2b6b2ba7-984c-4ea1-9870-80c30e11d9da req-fdeb1e0a-b050-4a89-b8d2-f3f78362a9d9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "21edef97-3531-4772-8aa5-a3feeb9ff3f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:53:35 np0005593232 nova_compute[250269]: 2026-01-23 10:53:35.802 250273 DEBUG oslo_concurrency.lockutils [req-2b6b2ba7-984c-4ea1-9870-80c30e11d9da req-fdeb1e0a-b050-4a89-b8d2-f3f78362a9d9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "21edef97-3531-4772-8aa5-a3feeb9ff3f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:53:35 np0005593232 nova_compute[250269]: 2026-01-23 10:53:35.802 250273 DEBUG nova.compute.manager [req-2b6b2ba7-984c-4ea1-9870-80c30e11d9da req-fdeb1e0a-b050-4a89-b8d2-f3f78362a9d9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] No waiting events found dispatching network-vif-plugged-d08a5642-c043-410d-8d3a-e63134c79cd2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:53:35 np0005593232 nova_compute[250269]: 2026-01-23 10:53:35.802 250273 WARNING nova.compute.manager [req-2b6b2ba7-984c-4ea1-9870-80c30e11d9da req-fdeb1e0a-b050-4a89-b8d2-f3f78362a9d9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Received unexpected event network-vif-plugged-d08a5642-c043-410d-8d3a-e63134c79cd2 for instance with vm_state active and task_state resize_migrated.#033[00m
Jan 23 05:53:35 np0005593232 nova_compute[250269]: 2026-01-23 10:53:35.803 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:53:35 np0005593232 nova_compute[250269]: 2026-01-23 10:53:35.990 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:53:35 np0005593232 nova_compute[250269]: 2026-01-23 10:53:35.991 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4068MB free_disk=20.94268035888672GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:53:35 np0005593232 nova_compute[250269]: 2026-01-23 10:53:35.991 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:53:35 np0005593232 nova_compute[250269]: 2026-01-23 10:53:35.992 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:53:36 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:53:36 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:53:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:36.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:36 np0005593232 nova_compute[250269]: 2026-01-23 10:53:36.103 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Applying migration context for instance 21edef97-3531-4772-8aa5-a3feeb9ff3f5 as it has an incoming, in-progress migration 8ab60eb4-5a21-4192-9b1a-3ef645cee3f1. Migration status is post-migrating _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950#033[00m
Jan 23 05:53:36 np0005593232 nova_compute[250269]: 2026-01-23 10:53:36.104 250273 INFO nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Updating resource usage from migration 8ab60eb4-5a21-4192-9b1a-3ef645cee3f1#033[00m
Jan 23 05:53:36 np0005593232 nova_compute[250269]: 2026-01-23 10:53:36.141 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 21edef97-3531-4772-8aa5-a3feeb9ff3f5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 05:53:36 np0005593232 nova_compute[250269]: 2026-01-23 10:53:36.142 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:53:36 np0005593232 nova_compute[250269]: 2026-01-23 10:53:36.142 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:53:36 np0005593232 nova_compute[250269]: 2026-01-23 10:53:36.198 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:53:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:53:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:36.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:53:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:53:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/777290526' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:53:36 np0005593232 nova_compute[250269]: 2026-01-23 10:53:36.658 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:53:36 np0005593232 nova_compute[250269]: 2026-01-23 10:53:36.664 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:53:36 np0005593232 nova_compute[250269]: 2026-01-23 10:53:36.700 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:53:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3828: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 KiB/s rd, 18 KiB/s wr, 4 op/s
Jan 23 05:53:36 np0005593232 nova_compute[250269]: 2026-01-23 10:53:36.741 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:53:36 np0005593232 nova_compute[250269]: 2026-01-23 10:53:36.742 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:53:37 np0005593232 nova_compute[250269]: 2026-01-23 10:53:37.207 250273 DEBUG oslo_concurrency.lockutils [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Acquiring lock "refresh_cache-21edef97-3531-4772-8aa5-a3feeb9ff3f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:53:37 np0005593232 nova_compute[250269]: 2026-01-23 10:53:37.208 250273 DEBUG oslo_concurrency.lockutils [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Acquired lock "refresh_cache-21edef97-3531-4772-8aa5-a3feeb9ff3f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:53:37 np0005593232 nova_compute[250269]: 2026-01-23 10:53:37.209 250273 DEBUG nova.network.neutron [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 05:53:37 np0005593232 nova_compute[250269]: 2026-01-23 10:53:37.331 250273 DEBUG nova.compute.manager [req-117bd410-eb86-40f7-ab92-c914af82a08d req-eefc5a7c-470a-498f-b6a5-6f8a1a0a70a9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Received event network-changed-d08a5642-c043-410d-8d3a-e63134c79cd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:53:37 np0005593232 nova_compute[250269]: 2026-01-23 10:53:37.331 250273 DEBUG nova.compute.manager [req-117bd410-eb86-40f7-ab92-c914af82a08d req-eefc5a7c-470a-498f-b6a5-6f8a1a0a70a9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Refreshing instance network info cache due to event network-changed-d08a5642-c043-410d-8d3a-e63134c79cd2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:53:37 np0005593232 nova_compute[250269]: 2026-01-23 10:53:37.332 250273 DEBUG oslo_concurrency.lockutils [req-117bd410-eb86-40f7-ab92-c914af82a08d req-eefc5a7c-470a-498f-b6a5-6f8a1a0a70a9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-21edef97-3531-4772-8aa5-a3feeb9ff3f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:53:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:53:37
Jan 23 05:53:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:53:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:53:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['backups', 'default.rgw.log', '.rgw.root', 'vms', 'images', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta']
Jan 23 05:53:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:53:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:53:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:53:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:53:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:53:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:53:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:53:37 np0005593232 nova_compute[250269]: 2026-01-23 10:53:37.703 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:53:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:38.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:53:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:38.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3829: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.2 KiB/s rd, 17 KiB/s wr, 4 op/s
Jan 23 05:53:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:53:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:53:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:53:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:53:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:53:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:53:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:53:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:53:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:53:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:53:38 np0005593232 nova_compute[250269]: 2026-01-23 10:53:38.920 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:39 np0005593232 systemd[1]: Stopping User Manager for UID 42436...
Jan 23 05:53:39 np0005593232 systemd[401939]: Activating special unit Exit the Session...
Jan 23 05:53:39 np0005593232 systemd[401939]: Stopped target Main User Target.
Jan 23 05:53:39 np0005593232 systemd[401939]: Stopped target Basic System.
Jan 23 05:53:39 np0005593232 systemd[401939]: Stopped target Paths.
Jan 23 05:53:39 np0005593232 systemd[401939]: Stopped target Sockets.
Jan 23 05:53:39 np0005593232 systemd[401939]: Stopped target Timers.
Jan 23 05:53:39 np0005593232 systemd[401939]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 23 05:53:39 np0005593232 systemd[401939]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 23 05:53:39 np0005593232 systemd[401939]: Closed D-Bus User Message Bus Socket.
Jan 23 05:53:39 np0005593232 systemd[401939]: Stopped Create User's Volatile Files and Directories.
Jan 23 05:53:39 np0005593232 systemd[401939]: Removed slice User Application Slice.
Jan 23 05:53:39 np0005593232 systemd[401939]: Reached target Shutdown.
Jan 23 05:53:39 np0005593232 systemd[401939]: Finished Exit the Session.
Jan 23 05:53:39 np0005593232 systemd[401939]: Reached target Exit the Session.
Jan 23 05:53:39 np0005593232 systemd[1]: user@42436.service: Deactivated successfully.
Jan 23 05:53:39 np0005593232 systemd[1]: Stopped User Manager for UID 42436.
Jan 23 05:53:39 np0005593232 systemd[1]: Stopping User Runtime Directory /run/user/42436...
Jan 23 05:53:39 np0005593232 systemd[1]: run-user-42436.mount: Deactivated successfully.
Jan 23 05:53:39 np0005593232 systemd[1]: user-runtime-dir@42436.service: Deactivated successfully.
Jan 23 05:53:39 np0005593232 systemd[1]: Stopped User Runtime Directory /run/user/42436.
Jan 23 05:53:39 np0005593232 systemd[1]: Removed slice User Slice of UID 42436.
Jan 23 05:53:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:40.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:53:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:40.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.380 250273 DEBUG nova.network.neutron [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Updating instance_info_cache with network_info: [{"id": "d08a5642-c043-410d-8d3a-e63134c79cd2", "address": "fa:16:3e:59:40:79", "network": {"id": "fae0cfd0-9ee7-400c-bda6-94fd3af3625d", "bridge": "br-int", "label": "tempest-network-smoke--1773492812", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c06f98b51aeb48de91d116fda54a161f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd08a5642-c0", "ovs_interfaceid": "d08a5642-c043-410d-8d3a-e63134c79cd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.405 250273 DEBUG oslo_concurrency.lockutils [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Releasing lock "refresh_cache-21edef97-3531-4772-8aa5-a3feeb9ff3f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.410 250273 DEBUG oslo_concurrency.lockutils [req-117bd410-eb86-40f7-ab92-c914af82a08d req-eefc5a7c-470a-498f-b6a5-6f8a1a0a70a9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-21edef97-3531-4772-8aa5-a3feeb9ff3f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.410 250273 DEBUG nova.network.neutron [req-117bd410-eb86-40f7-ab92-c914af82a08d req-eefc5a7c-470a-498f-b6a5-6f8a1a0a70a9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Refreshing network info cache for port d08a5642-c043-410d-8d3a-e63134c79cd2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.502 250273 DEBUG nova.virt.libvirt.driver [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.504 250273 DEBUG nova.virt.libvirt.driver [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.505 250273 INFO nova.virt.libvirt.driver [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Creating image(s)#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.565 250273 DEBUG nova.storage.rbd_utils [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] creating snapshot(nova-resize) on rbd image(21edef97-3531-4772-8aa5-a3feeb9ff3f5_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 05:53:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e418 do_prune osdmap full prune enabled
Jan 23 05:53:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e419 e419: 3 total, 3 up, 3 in
Jan 23 05:53:40 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e419: 3 total, 3 up, 3 in
Jan 23 05:53:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3831: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.9 KiB/s rd, 20 KiB/s wr, 5 op/s
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.745 250273 DEBUG nova.objects.instance [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Lazy-loading 'trusted_certs' on Instance uuid 21edef97-3531-4772-8aa5-a3feeb9ff3f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.879 250273 DEBUG nova.virt.libvirt.driver [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.879 250273 DEBUG nova.virt.libvirt.driver [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Ensure instance console log exists: /var/lib/nova/instances/21edef97-3531-4772-8aa5-a3feeb9ff3f5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.880 250273 DEBUG oslo_concurrency.lockutils [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.880 250273 DEBUG oslo_concurrency.lockutils [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.880 250273 DEBUG oslo_concurrency.lockutils [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.882 250273 DEBUG nova.virt.libvirt.driver [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Start _get_guest_xml network_info=[{"id": "d08a5642-c043-410d-8d3a-e63134c79cd2", "address": "fa:16:3e:59:40:79", "network": {"id": "fae0cfd0-9ee7-400c-bda6-94fd3af3625d", "bridge": "br-int", "label": "tempest-network-smoke--1773492812", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1773492812", "vif_mac": "fa:16:3e:59:40:79"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c06f98b51aeb48de91d116fda54a161f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd08a5642-c0", "ovs_interfaceid": "d08a5642-c043-410d-8d3a-e63134c79cd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.888 250273 WARNING nova.virt.libvirt.driver [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.893 250273 DEBUG nova.virt.libvirt.host [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.894 250273 DEBUG nova.virt.libvirt.host [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.896 250273 DEBUG nova.virt.libvirt.host [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.897 250273 DEBUG nova.virt.libvirt.host [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.898 250273 DEBUG nova.virt.libvirt.driver [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.898 250273 DEBUG nova.virt.hardware [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.899 250273 DEBUG nova.virt.hardware [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.899 250273 DEBUG nova.virt.hardware [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.899 250273 DEBUG nova.virt.hardware [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.899 250273 DEBUG nova.virt.hardware [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.900 250273 DEBUG nova.virt.hardware [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.900 250273 DEBUG nova.virt.hardware [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.900 250273 DEBUG nova.virt.hardware [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.900 250273 DEBUG nova.virt.hardware [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.901 250273 DEBUG nova.virt.hardware [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.901 250273 DEBUG nova.virt.hardware [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.901 250273 DEBUG nova.objects.instance [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Lazy-loading 'vcpu_model' on Instance uuid 21edef97-3531-4772-8aa5-a3feeb9ff3f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:53:40 np0005593232 nova_compute[250269]: 2026-01-23 10:53:40.925 250273 DEBUG oslo_concurrency.processutils [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:53:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:53:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/311689685' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.394 250273 DEBUG oslo_concurrency.processutils [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.435 250273 DEBUG oslo_concurrency.processutils [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:53:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 05:53:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1366660501' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.885 250273 DEBUG oslo_concurrency.processutils [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.887 250273 DEBUG nova.virt.libvirt.vif [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:52:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1044974910',display_name='tempest-TestNetworkAdvancedServerOps-server-1044974910',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1044974910',id=210,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN1Au+/nbNJXswn+/lBT/NUlG+C4qWpKV8LfTLcEIq/JQHMntJ0r6AZpvvHSolbGhgNEDJ2I0R+q+8ASoXZhoeZdCjKoEqwhuN6XwpPG1I72EeOI415/JreWIAcqSMa5Mw==',key_name='tempest-TestNetworkAdvancedServerOps-840850991',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:52:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c06f98b51aeb48de91d116fda54a161f',ramdisk_id='',reservation_id='r-70lt2fy5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1886747874',owner_user_name='tempest-TestNetworkAdvancedServerOps-1886747874-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:53:33Z,user_data=None,user_id='420c366dc5dc45a48da4e0b18c93043f',uuid=21edef97-3531-4772-8aa5-a3feeb9ff3f5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d08a5642-c043-410d-8d3a-e63134c79cd2", "address": "fa:16:3e:59:40:79", "network": {"id": "fae0cfd0-9ee7-400c-bda6-94fd3af3625d", "bridge": "br-int", "label": "tempest-network-smoke--1773492812", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1773492812", "vif_mac": "fa:16:3e:59:40:79"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c06f98b51aeb48de91d116fda54a161f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd08a5642-c0", "ovs_interfaceid": "d08a5642-c043-410d-8d3a-e63134c79cd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.887 250273 DEBUG nova.network.os_vif_util [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Converting VIF {"id": "d08a5642-c043-410d-8d3a-e63134c79cd2", "address": "fa:16:3e:59:40:79", "network": {"id": "fae0cfd0-9ee7-400c-bda6-94fd3af3625d", "bridge": "br-int", "label": "tempest-network-smoke--1773492812", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1773492812", "vif_mac": "fa:16:3e:59:40:79"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c06f98b51aeb48de91d116fda54a161f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd08a5642-c0", "ovs_interfaceid": "d08a5642-c043-410d-8d3a-e63134c79cd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.888 250273 DEBUG nova.network.os_vif_util [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:40:79,bridge_name='br-int',has_traffic_filtering=True,id=d08a5642-c043-410d-8d3a-e63134c79cd2,network=Network(fae0cfd0-9ee7-400c-bda6-94fd3af3625d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd08a5642-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.891 250273 DEBUG nova.virt.libvirt.driver [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] End _get_guest_xml xml=<domain type="kvm">
Jan 23 05:53:41 np0005593232 nova_compute[250269]:  <uuid>21edef97-3531-4772-8aa5-a3feeb9ff3f5</uuid>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:  <name>instance-000000d2</name>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1044974910</nova:name>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 10:53:40</nova:creationTime>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 05:53:41 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:        <nova:user uuid="420c366dc5dc45a48da4e0b18c93043f">tempest-TestNetworkAdvancedServerOps-1886747874-project-member</nova:user>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:        <nova:project uuid="c06f98b51aeb48de91d116fda54a161f">tempest-TestNetworkAdvancedServerOps-1886747874</nova:project>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:        <nova:port uuid="d08a5642-c043-410d-8d3a-e63134c79cd2">
Jan 23 05:53:41 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <system>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <entry name="serial">21edef97-3531-4772-8aa5-a3feeb9ff3f5</entry>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <entry name="uuid">21edef97-3531-4772-8aa5-a3feeb9ff3f5</entry>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    </system>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:  <os>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:  </os>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:  <features>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:  </features>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:  </clock>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:  <devices>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/21edef97-3531-4772-8aa5-a3feeb9ff3f5_disk">
Jan 23 05:53:41 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:53:41 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/21edef97-3531-4772-8aa5-a3feeb9ff3f5_disk.config">
Jan 23 05:53:41 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      </source>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 05:53:41 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      </auth>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    </disk>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:59:40:79"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <target dev="tapd08a5642-c0"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    </interface>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/21edef97-3531-4772-8aa5-a3feeb9ff3f5/console.log" append="off"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    </serial>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <video>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    </video>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    </rng>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 05:53:41 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 05:53:41 np0005593232 nova_compute[250269]:  </devices>
Jan 23 05:53:41 np0005593232 nova_compute[250269]: </domain>
Jan 23 05:53:41 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.893 250273 DEBUG nova.virt.libvirt.vif [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:52:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1044974910',display_name='tempest-TestNetworkAdvancedServerOps-server-1044974910',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1044974910',id=210,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN1Au+/nbNJXswn+/lBT/NUlG+C4qWpKV8LfTLcEIq/JQHMntJ0r6AZpvvHSolbGhgNEDJ2I0R+q+8ASoXZhoeZdCjKoEqwhuN6XwpPG1I72EeOI415/JreWIAcqSMa5Mw==',key_name='tempest-TestNetworkAdvancedServerOps-840850991',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:52:54Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=MigrationContext,new_flavor=Flavor(1),node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=Flavor(1),os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c06f98b51aeb48de91d116fda54a161f',ramdisk_id='',reservation_id='r-70lt2fy5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=ServiceList,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-1886747874',owner_user_name='tempest-TestNetworkAdvancedServerOps-1886747874-project-member'},tags=<?>,task_state='resize_finish',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T10:53:33Z,user_data=None,user_id='420c366dc5dc45a48da4e0b18c93043f',uuid=21edef97-3531-4772-8aa5-a3feeb9ff3f5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d08a5642-c043-410d-8d3a-e63134c79cd2", "address": "fa:16:3e:59:40:79", "network": {"id": "fae0cfd0-9ee7-400c-bda6-94fd3af3625d", "bridge": "br-int", "label": "tempest-network-smoke--1773492812", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1773492812", "vif_mac": "fa:16:3e:59:40:79"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c06f98b51aeb48de91d116fda54a161f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd08a5642-c0", "ovs_interfaceid": "d08a5642-c043-410d-8d3a-e63134c79cd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.893 250273 DEBUG nova.network.os_vif_util [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Converting VIF {"id": "d08a5642-c043-410d-8d3a-e63134c79cd2", "address": "fa:16:3e:59:40:79", "network": {"id": "fae0cfd0-9ee7-400c-bda6-94fd3af3625d", "bridge": "br-int", "label": "tempest-network-smoke--1773492812", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}], "label": "tempest-network-smoke--1773492812", "vif_mac": "fa:16:3e:59:40:79"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c06f98b51aeb48de91d116fda54a161f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd08a5642-c0", "ovs_interfaceid": "d08a5642-c043-410d-8d3a-e63134c79cd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.893 250273 DEBUG nova.network.os_vif_util [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:59:40:79,bridge_name='br-int',has_traffic_filtering=True,id=d08a5642-c043-410d-8d3a-e63134c79cd2,network=Network(fae0cfd0-9ee7-400c-bda6-94fd3af3625d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd08a5642-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.894 250273 DEBUG os_vif [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:40:79,bridge_name='br-int',has_traffic_filtering=True,id=d08a5642-c043-410d-8d3a-e63134c79cd2,network=Network(fae0cfd0-9ee7-400c-bda6-94fd3af3625d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd08a5642-c0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.894 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.895 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.895 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.899 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.900 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd08a5642-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.900 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd08a5642-c0, col_values=(('external_ids', {'iface-id': 'd08a5642-c043-410d-8d3a-e63134c79cd2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:59:40:79', 'vm-uuid': '21edef97-3531-4772-8aa5-a3feeb9ff3f5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.902 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:41 np0005593232 NetworkManager[49057]: <info>  [1769165621.9033] manager: (tapd08a5642-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.904 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.910 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.911 250273 INFO os_vif [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:59:40:79,bridge_name='br-int',has_traffic_filtering=True,id=d08a5642-c043-410d-8d3a-e63134c79cd2,network=Network(fae0cfd0-9ee7-400c-bda6-94fd3af3625d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd08a5642-c0')#033[00m
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.963 250273 DEBUG nova.virt.libvirt.driver [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.963 250273 DEBUG nova.virt.libvirt.driver [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.964 250273 DEBUG nova.virt.libvirt.driver [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] No VIF found with MAC fa:16:3e:59:40:79, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 05:53:41 np0005593232 nova_compute[250269]: 2026-01-23 10:53:41.964 250273 INFO nova.virt.libvirt.driver [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Using config drive#033[00m
Jan 23 05:53:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:42.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:42 np0005593232 kernel: tapd08a5642-c0: entered promiscuous mode
Jan 23 05:53:42 np0005593232 NetworkManager[49057]: <info>  [1769165622.0566] manager: (tapd08a5642-c0): new Tun device (/org/freedesktop/NetworkManager/Devices/386)
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.057 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:42 np0005593232 ovn_controller[151001]: 2026-01-23T10:53:42Z|00824|binding|INFO|Claiming lport d08a5642-c043-410d-8d3a-e63134c79cd2 for this chassis.
Jan 23 05:53:42 np0005593232 ovn_controller[151001]: 2026-01-23T10:53:42Z|00825|binding|INFO|d08a5642-c043-410d-8d3a-e63134c79cd2: Claiming fa:16:3e:59:40:79 10.100.0.12
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.061 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.064 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.070 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:42 np0005593232 NetworkManager[49057]: <info>  [1769165622.0733] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Jan 23 05:53:42 np0005593232 NetworkManager[49057]: <info>  [1769165622.0738] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/388)
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.073 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:40:79 10.100.0.12'], port_security=['fa:16:3e:59:40:79 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '21edef97-3531-4772-8aa5-a3feeb9ff3f5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fae0cfd0-9ee7-400c-bda6-94fd3af3625d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c06f98b51aeb48de91d116fda54a161f', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'dcfc02e2-e46d-4519-82a3-86ab6ef1b36e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=272555c7-6cc4-4fe4-972e-530a53d5843d, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=d08a5642-c043-410d-8d3a-e63134c79cd2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.075 161902 INFO neutron.agent.ovn.metadata.agent [-] Port d08a5642-c043-410d-8d3a-e63134c79cd2 in datapath fae0cfd0-9ee7-400c-bda6-94fd3af3625d bound to our chassis#033[00m
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.076 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fae0cfd0-9ee7-400c-bda6-94fd3af3625d#033[00m
Jan 23 05:53:42 np0005593232 systemd-udevd[402829]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.086 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8b7e6e41-1eba-448f-8760-5cd51df212a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.087 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfae0cfd0-91 in ovnmeta-fae0cfd0-9ee7-400c-bda6-94fd3af3625d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.089 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfae0cfd0-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.089 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f4bf52d1-e85e-4e69-a1c4-085f3f3ec1aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.090 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b415f4dc-c33b-49ff-9d07-a9133b2d3ecf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:53:42 np0005593232 systemd-machined[215836]: New machine qemu-93-instance-000000d2.
Jan 23 05:53:42 np0005593232 NetworkManager[49057]: <info>  [1769165622.0986] device (tapd08a5642-c0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 05:53:42 np0005593232 NetworkManager[49057]: <info>  [1769165622.0992] device (tapd08a5642-c0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.105 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[4560a441-9e1d-411c-ab94-6e86d5f9a52f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.111 250273 DEBUG nova.network.neutron [req-117bd410-eb86-40f7-ab92-c914af82a08d req-eefc5a7c-470a-498f-b6a5-6f8a1a0a70a9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Updated VIF entry in instance network info cache for port d08a5642-c043-410d-8d3a-e63134c79cd2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.111 250273 DEBUG nova.network.neutron [req-117bd410-eb86-40f7-ab92-c914af82a08d req-eefc5a7c-470a-498f-b6a5-6f8a1a0a70a9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Updating instance_info_cache with network_info: [{"id": "d08a5642-c043-410d-8d3a-e63134c79cd2", "address": "fa:16:3e:59:40:79", "network": {"id": "fae0cfd0-9ee7-400c-bda6-94fd3af3625d", "bridge": "br-int", "label": "tempest-network-smoke--1773492812", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c06f98b51aeb48de91d116fda54a161f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd08a5642-c0", "ovs_interfaceid": "d08a5642-c043-410d-8d3a-e63134c79cd2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.132 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f40daa94-e8be-4db8-8ad1-781eaba66f5a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:53:42 np0005593232 systemd[1]: Started Virtual Machine qemu-93-instance-000000d2.
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.142 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.144 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.150 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.152 250273 DEBUG oslo_concurrency.lockutils [req-117bd410-eb86-40f7-ab92-c914af82a08d req-eefc5a7c-470a-498f-b6a5-6f8a1a0a70a9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-21edef97-3531-4772-8aa5-a3feeb9ff3f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 05:53:42 np0005593232 ovn_controller[151001]: 2026-01-23T10:53:42Z|00826|binding|INFO|Setting lport d08a5642-c043-410d-8d3a-e63134c79cd2 ovn-installed in OVS
Jan 23 05:53:42 np0005593232 ovn_controller[151001]: 2026-01-23T10:53:42Z|00827|binding|INFO|Setting lport d08a5642-c043-410d-8d3a-e63134c79cd2 up in Southbound
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.162 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.170 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc8a9bb-5247-47bb-a35a-abd179731591]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:53:42 np0005593232 NetworkManager[49057]: <info>  [1769165622.1758] manager: (tapfae0cfd0-90): new Veth device (/org/freedesktop/NetworkManager/Devices/389)
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.175 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[54cb2c66-a7c2-4c3f-a2ac-0a0dbcf9a76a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:53:42 np0005593232 systemd-udevd[402832]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.206 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[772a9c77-7882-467e-b6bb-48c240d54670]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.210 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[7b3d503a-5fbc-4e7a-822a-ddc246e581e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:53:42 np0005593232 NetworkManager[49057]: <info>  [1769165622.2297] device (tapfae0cfd0-90): carrier: link connected
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.234 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[bbd44bd2-78d5-446b-bdaa-f886a413b441]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:53:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:42.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.250 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2da651e9-856d-435b-9d07-f72df26b0aee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfae0cfd0-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:ca:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 953537, 'reachable_time': 30762, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402862, 'error': None, 'target': 'ovnmeta-fae0cfd0-9ee7-400c-bda6-94fd3af3625d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.264 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[501a416a-9d9a-4a82-ac18-fee736cf1204]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4c:ca82'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 953537, 'tstamp': 953537}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402863, 'error': None, 'target': 'ovnmeta-fae0cfd0-9ee7-400c-bda6-94fd3af3625d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.279 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d93e32ff-1730-4373-8c57-453a647ecfbb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfae0cfd0-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4c:ca:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 249], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 953537, 'reachable_time': 30762, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 402864, 'error': None, 'target': 'ovnmeta-fae0cfd0-9ee7-400c-bda6-94fd3af3625d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.311 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[b167db85-b778-463e-969a-0cc51ee91f17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.373 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[76fbc440-4ba9-464f-83be-57d2fb9c088d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.374 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfae0cfd0-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.374 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.375 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfae0cfd0-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.376 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:42 np0005593232 NetworkManager[49057]: <info>  [1769165622.3772] manager: (tapfae0cfd0-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Jan 23 05:53:42 np0005593232 kernel: tapfae0cfd0-90: entered promiscuous mode
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.382 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfae0cfd0-90, col_values=(('external_ids', {'iface-id': '60ff24a2-99cd-4fae-9593-d07701636aa7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.383 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:42 np0005593232 ovn_controller[151001]: 2026-01-23T10:53:42Z|00828|binding|INFO|Releasing lport 60ff24a2-99cd-4fae-9593-d07701636aa7 from this chassis (sb_readonly=0)
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.386 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fae0cfd0-9ee7-400c-bda6-94fd3af3625d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fae0cfd0-9ee7-400c-bda6-94fd3af3625d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.386 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2d3c27ba-72fc-48b3-a48d-5914409b7d7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.387 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-fae0cfd0-9ee7-400c-bda6-94fd3af3625d
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/fae0cfd0-9ee7-400c-bda6-94fd3af3625d.pid.haproxy
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID fae0cfd0-9ee7-400c-bda6-94fd3af3625d
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.388 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fae0cfd0-9ee7-400c-bda6-94fd3af3625d', 'env', 'PROCESS_TAG=haproxy-fae0cfd0-9ee7-400c-bda6-94fd3af3625d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fae0cfd0-9ee7-400c-bda6-94fd3af3625d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.391 250273 DEBUG nova.compute.manager [req-a2b460ea-8260-4b38-affa-965f26eff5ac req-64259559-744e-4c2a-8eca-db432cf7068e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Received event network-vif-plugged-d08a5642-c043-410d-8d3a-e63134c79cd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.391 250273 DEBUG oslo_concurrency.lockutils [req-a2b460ea-8260-4b38-affa-965f26eff5ac req-64259559-744e-4c2a-8eca-db432cf7068e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "21edef97-3531-4772-8aa5-a3feeb9ff3f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.392 250273 DEBUG oslo_concurrency.lockutils [req-a2b460ea-8260-4b38-affa-965f26eff5ac req-64259559-744e-4c2a-8eca-db432cf7068e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "21edef97-3531-4772-8aa5-a3feeb9ff3f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.392 250273 DEBUG oslo_concurrency.lockutils [req-a2b460ea-8260-4b38-affa-965f26eff5ac req-64259559-744e-4c2a-8eca-db432cf7068e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "21edef97-3531-4772-8aa5-a3feeb9ff3f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.392 250273 DEBUG nova.compute.manager [req-a2b460ea-8260-4b38-affa-965f26eff5ac req-64259559-744e-4c2a-8eca-db432cf7068e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] No waiting events found dispatching network-vif-plugged-d08a5642-c043-410d-8d3a-e63134c79cd2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.392 250273 WARNING nova.compute.manager [req-a2b460ea-8260-4b38-affa-965f26eff5ac req-64259559-744e-4c2a-8eca-db432cf7068e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Received unexpected event network-vif-plugged-d08a5642-c043-410d-8d3a-e63134c79cd2 for instance with vm_state active and task_state resize_finish.#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.396 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.608 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769165622.6076727, 21edef97-3531-4772-8aa5-a3feeb9ff3f5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.608 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] VM Resumed (Lifecycle Event)#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.611 250273 DEBUG nova.compute.manager [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.614 250273 INFO nova.virt.libvirt.driver [-] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Instance running successfully.#033[00m
Jan 23 05:53:42 np0005593232 virtqemud[249592]: argument unsupported: QEMU guest agent is not configured
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.615 250273 DEBUG nova.virt.libvirt.guest [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.616 250273 DEBUG nova.virt.libvirt.driver [None req-4761d977-e2ca-41cc-84df-3adf7f03a56c 22656a4d33784250b8f522a77dc0909d eac31b2500aa40729c9ae6441d1a3f2e - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.639 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.642 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.675 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.676 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:53:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:53:42.676 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.685 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.685 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769165622.6110084, 21edef97-3531-4772-8aa5-a3feeb9ff3f5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.685 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] VM Started (Lifecycle Event)#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.705 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3832: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.5 KiB/s rd, 13 KiB/s wr, 9 op/s
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.722 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 05:53:42 np0005593232 nova_compute[250269]: 2026-01-23 10:53:42.736 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 05:53:42 np0005593232 podman[402938]: 2026-01-23 10:53:42.766998046 +0000 UTC m=+0.053240825 container create b08a63d4878d2161ead90b769c93622b24edf364fff2225cc6d1067bf4a1fccf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fae0cfd0-9ee7-400c-bda6-94fd3af3625d, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 23 05:53:42 np0005593232 systemd[1]: Started libpod-conmon-b08a63d4878d2161ead90b769c93622b24edf364fff2225cc6d1067bf4a1fccf.scope.
Jan 23 05:53:42 np0005593232 podman[402938]: 2026-01-23 10:53:42.742975863 +0000 UTC m=+0.029218662 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 05:53:42 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:53:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb98a7ccec78f04bce060c95a8cdd9846627f5557640b60d648f39c37ae55910/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 05:53:42 np0005593232 podman[402938]: 2026-01-23 10:53:42.859902017 +0000 UTC m=+0.146144796 container init b08a63d4878d2161ead90b769c93622b24edf364fff2225cc6d1067bf4a1fccf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fae0cfd0-9ee7-400c-bda6-94fd3af3625d, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 05:53:42 np0005593232 podman[402938]: 2026-01-23 10:53:42.864650652 +0000 UTC m=+0.150893431 container start b08a63d4878d2161ead90b769c93622b24edf364fff2225cc6d1067bf4a1fccf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fae0cfd0-9ee7-400c-bda6-94fd3af3625d, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 23 05:53:42 np0005593232 neutron-haproxy-ovnmeta-fae0cfd0-9ee7-400c-bda6-94fd3af3625d[402954]: [NOTICE]   (402958) : New worker (402960) forked
Jan 23 05:53:42 np0005593232 neutron-haproxy-ovnmeta-fae0cfd0-9ee7-400c-bda6-94fd3af3625d[402954]: [NOTICE]   (402958) : Loading success.
Jan 23 05:53:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:44.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:44.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:44 np0005593232 nova_compute[250269]: 2026-01-23 10:53:44.516 250273 DEBUG nova.compute.manager [req-4185b033-d0cd-410a-8fc6-8b9826f00c17 req-18f9d693-3451-4455-aaf2-56ab66b1e017 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Received event network-vif-plugged-d08a5642-c043-410d-8d3a-e63134c79cd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:53:44 np0005593232 nova_compute[250269]: 2026-01-23 10:53:44.516 250273 DEBUG oslo_concurrency.lockutils [req-4185b033-d0cd-410a-8fc6-8b9826f00c17 req-18f9d693-3451-4455-aaf2-56ab66b1e017 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "21edef97-3531-4772-8aa5-a3feeb9ff3f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:53:44 np0005593232 nova_compute[250269]: 2026-01-23 10:53:44.516 250273 DEBUG oslo_concurrency.lockutils [req-4185b033-d0cd-410a-8fc6-8b9826f00c17 req-18f9d693-3451-4455-aaf2-56ab66b1e017 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "21edef97-3531-4772-8aa5-a3feeb9ff3f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:53:44 np0005593232 nova_compute[250269]: 2026-01-23 10:53:44.517 250273 DEBUG oslo_concurrency.lockutils [req-4185b033-d0cd-410a-8fc6-8b9826f00c17 req-18f9d693-3451-4455-aaf2-56ab66b1e017 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "21edef97-3531-4772-8aa5-a3feeb9ff3f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:53:44 np0005593232 nova_compute[250269]: 2026-01-23 10:53:44.517 250273 DEBUG nova.compute.manager [req-4185b033-d0cd-410a-8fc6-8b9826f00c17 req-18f9d693-3451-4455-aaf2-56ab66b1e017 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] No waiting events found dispatching network-vif-plugged-d08a5642-c043-410d-8d3a-e63134c79cd2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:53:44 np0005593232 nova_compute[250269]: 2026-01-23 10:53:44.517 250273 WARNING nova.compute.manager [req-4185b033-d0cd-410a-8fc6-8b9826f00c17 req-18f9d693-3451-4455-aaf2-56ab66b1e017 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Received unexpected event network-vif-plugged-d08a5642-c043-410d-8d3a-e63134c79cd2 for instance with vm_state resized and task_state None.#033[00m
Jan 23 05:53:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3833: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 409 B/s wr, 23 op/s
Jan 23 05:53:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:53:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:46.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:53:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:46.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:53:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3834: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 429 KiB/s rd, 409 B/s wr, 39 op/s
Jan 23 05:53:46 np0005593232 nova_compute[250269]: 2026-01-23 10:53:46.903 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:47 np0005593232 nova_compute[250269]: 2026-01-23 10:53:47.708 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:53:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:53:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:53:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:53:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002172865194025684 of space, bias 1.0, pg target 0.6518595582077051 quantized to 32 (current 32)
Jan 23 05:53:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:53:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 05:53:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:53:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 05:53:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:53:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 05:53:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:53:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 05:53:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:53:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:53:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:53:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 05:53:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:53:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 05:53:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:53:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:53:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:53:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 05:53:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:53:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:48.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:53:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:53:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:48.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:53:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3835: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 511 B/s wr, 104 op/s
Jan 23 05:53:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e419 do_prune osdmap full prune enabled
Jan 23 05:53:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e420 e420: 3 total, 3 up, 3 in
Jan 23 05:53:49 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e420: 3 total, 3 up, 3 in
Jan 23 05:53:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:53:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:50.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:53:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:53:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:50.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3837: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 511 B/s wr, 104 op/s
Jan 23 05:53:51 np0005593232 nova_compute[250269]: 2026-01-23 10:53:51.905 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:53:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:52.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:53:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:52.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3838: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 409 B/s wr, 99 op/s
Jan 23 05:53:52 np0005593232 nova_compute[250269]: 2026-01-23 10:53:52.737 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:53:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:54.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:53:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:54.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3839: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 614 B/s wr, 90 op/s
Jan 23 05:53:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:53:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e420 do_prune osdmap full prune enabled
Jan 23 05:53:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 e421: 3 total, 3 up, 3 in
Jan 23 05:53:55 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e421: 3 total, 3 up, 3 in
Jan 23 05:53:55 np0005593232 ovn_controller[151001]: 2026-01-23T10:53:55Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:59:40:79 10.100.0.12
Jan 23 05:53:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:56.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:56.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3841: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 181 KiB/s rd, 2.6 KiB/s wr, 30 op/s
Jan 23 05:53:56 np0005593232 nova_compute[250269]: 2026-01-23 10:53:56.908 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:57 np0005593232 nova_compute[250269]: 2026-01-23 10:53:57.767 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:53:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:53:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:53:58.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:53:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:53:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:53:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:53:58.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:53:58 np0005593232 podman[403028]: 2026-01-23 10:53:58.513280615 +0000 UTC m=+0.148766010 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:53:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3842: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 707 KiB/s rd, 17 KiB/s wr, 71 op/s
Jan 23 05:54:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:54:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:00.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:54:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:54:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:54:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:00.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:54:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3843: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 644 KiB/s rd, 15 KiB/s wr, 65 op/s
Jan 23 05:54:00 np0005593232 nova_compute[250269]: 2026-01-23 10:54:00.900 250273 INFO nova.compute.manager [None req-d4480d1a-8577-4692-8d45-8fa8d94ec088 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Get console output#033[00m
Jan 23 05:54:00 np0005593232 nova_compute[250269]: 2026-01-23 10:54:00.916 312104 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Jan 23 05:54:01 np0005593232 podman[403056]: 2026-01-23 10:54:01.464162343 +0000 UTC m=+0.102763932 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 05:54:01 np0005593232 nova_compute[250269]: 2026-01-23 10:54:01.911 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:54:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:02.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:54:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:02.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:54:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3844: 321 pgs: 321 active+clean; 202 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 645 KiB/s rd, 27 KiB/s wr, 65 op/s
Jan 23 05:54:02 np0005593232 nova_compute[250269]: 2026-01-23 10:54:02.772 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:54:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:54:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:04.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:54:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:54:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:04.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.280 250273 DEBUG nova.compute.manager [req-edd0c04d-750d-46e2-a349-c3d5bcfb5e5b req-a43a8e91-6fb4-4593-a9bb-5b8513cfc139 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Received event network-changed-d08a5642-c043-410d-8d3a-e63134c79cd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.281 250273 DEBUG nova.compute.manager [req-edd0c04d-750d-46e2-a349-c3d5bcfb5e5b req-a43a8e91-6fb4-4593-a9bb-5b8513cfc139 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Refreshing instance network info cache due to event network-changed-d08a5642-c043-410d-8d3a-e63134c79cd2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.282 250273 DEBUG oslo_concurrency.lockutils [req-edd0c04d-750d-46e2-a349-c3d5bcfb5e5b req-a43a8e91-6fb4-4593-a9bb-5b8513cfc139 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-21edef97-3531-4772-8aa5-a3feeb9ff3f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.282 250273 DEBUG oslo_concurrency.lockutils [req-edd0c04d-750d-46e2-a349-c3d5bcfb5e5b req-a43a8e91-6fb4-4593-a9bb-5b8513cfc139 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-21edef97-3531-4772-8aa5-a3feeb9ff3f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.282 250273 DEBUG nova.network.neutron [req-edd0c04d-750d-46e2-a349-c3d5bcfb5e5b req-a43a8e91-6fb4-4593-a9bb-5b8513cfc139 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Refreshing network info cache for port d08a5642-c043-410d-8d3a-e63134c79cd2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.364 250273 DEBUG oslo_concurrency.lockutils [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] Acquiring lock "21edef97-3531-4772-8aa5-a3feeb9ff3f5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.365 250273 DEBUG oslo_concurrency.lockutils [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] Lock "21edef97-3531-4772-8aa5-a3feeb9ff3f5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.365 250273 DEBUG oslo_concurrency.lockutils [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] Acquiring lock "21edef97-3531-4772-8aa5-a3feeb9ff3f5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.366 250273 DEBUG oslo_concurrency.lockutils [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] Lock "21edef97-3531-4772-8aa5-a3feeb9ff3f5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.366 250273 DEBUG oslo_concurrency.lockutils [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] Lock "21edef97-3531-4772-8aa5-a3feeb9ff3f5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.368 250273 INFO nova.compute.manager [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Terminating instance#033[00m
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.370 250273 DEBUG nova.compute.manager [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 05:54:04 np0005593232 kernel: tapd08a5642-c0 (unregistering): left promiscuous mode
Jan 23 05:54:04 np0005593232 NetworkManager[49057]: <info>  [1769165644.4540] device (tapd08a5642-c0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 05:54:04 np0005593232 ovn_controller[151001]: 2026-01-23T10:54:04Z|00829|binding|INFO|Releasing lport d08a5642-c043-410d-8d3a-e63134c79cd2 from this chassis (sb_readonly=0)
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.472 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:54:04 np0005593232 ovn_controller[151001]: 2026-01-23T10:54:04Z|00830|binding|INFO|Setting lport d08a5642-c043-410d-8d3a-e63134c79cd2 down in Southbound
Jan 23 05:54:04 np0005593232 ovn_controller[151001]: 2026-01-23T10:54:04Z|00831|binding|INFO|Removing iface tapd08a5642-c0 ovn-installed in OVS
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.476 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:54:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:54:04.492 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:59:40:79 10.100.0.12'], port_security=['fa:16:3e:59:40:79 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '21edef97-3531-4772-8aa5-a3feeb9ff3f5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fae0cfd0-9ee7-400c-bda6-94fd3af3625d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c06f98b51aeb48de91d116fda54a161f', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'dcfc02e2-e46d-4519-82a3-86ab6ef1b36e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=272555c7-6cc4-4fe4-972e-530a53d5843d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=d08a5642-c043-410d-8d3a-e63134c79cd2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:54:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:54:04.495 161902 INFO neutron.agent.ovn.metadata.agent [-] Port d08a5642-c043-410d-8d3a-e63134c79cd2 in datapath fae0cfd0-9ee7-400c-bda6-94fd3af3625d unbound from our chassis#033[00m
Jan 23 05:54:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:54:04.498 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fae0cfd0-9ee7-400c-bda6-94fd3af3625d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 05:54:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:54:04.500 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[14412258-ce83-4f70-9f34-bcce1f39c449]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:54:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:54:04.501 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fae0cfd0-9ee7-400c-bda6-94fd3af3625d namespace which is not needed anymore#033[00m
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.518 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:54:04 np0005593232 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000d2.scope: Deactivated successfully.
Jan 23 05:54:04 np0005593232 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d000000d2.scope: Consumed 13.936s CPU time.
Jan 23 05:54:04 np0005593232 systemd-machined[215836]: Machine qemu-93-instance-000000d2 terminated.
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.633 250273 INFO nova.virt.libvirt.driver [-] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Instance destroyed successfully.#033[00m
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.634 250273 DEBUG nova.objects.instance [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] Lazy-loading 'resources' on Instance uuid 21edef97-3531-4772-8aa5-a3feeb9ff3f5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.678 250273 DEBUG nova.virt.libvirt.vif [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T10:52:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1044974910',display_name='tempest-TestNetworkAdvancedServerOps-server-1044974910',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1044974910',id=210,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN1Au+/nbNJXswn+/lBT/NUlG+C4qWpKV8LfTLcEIq/JQHMntJ0r6AZpvvHSolbGhgNEDJ2I0R+q+8ASoXZhoeZdCjKoEqwhuN6XwpPG1I72EeOI415/JreWIAcqSMa5Mw==',key_name='tempest-TestNetworkAdvancedServerOps-840850991',keypairs=<?>,launch_index=0,launched_at=2026-01-23T10:53:42Z,launched_on='compute-1.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c06f98b51aeb48de91d116fda54a161f',ramdisk_id='',reservation_id='r-70lt2fy5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-1886747874',owner_user_name='tempest-TestNetworkAdvancedServerOps-1886747874-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T10:53:50Z,user_data=None,user_id='420c366dc5dc45a48da4e0b18c93043f',uuid=21edef97-3531-4772-8aa5-a3feeb9ff3f5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d08a5642-c043-410d-8d3a-e63134c79cd2", "address": "fa:16:3e:59:40:79", "network": {"id": "fae0cfd0-9ee7-400c-bda6-94fd3af3625d", "bridge": "br-int", "label": "tempest-network-smoke--1773492812", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c06f98b51aeb48de91d116fda54a161f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd08a5642-c0", "ovs_interfaceid": "d08a5642-c043-410d-8d3a-e63134c79cd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.679 250273 DEBUG nova.network.os_vif_util [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] Converting VIF {"id": "d08a5642-c043-410d-8d3a-e63134c79cd2", "address": "fa:16:3e:59:40:79", "network": {"id": "fae0cfd0-9ee7-400c-bda6-94fd3af3625d", "bridge": "br-int", "label": "tempest-network-smoke--1773492812", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c06f98b51aeb48de91d116fda54a161f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd08a5642-c0", "ovs_interfaceid": "d08a5642-c043-410d-8d3a-e63134c79cd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.680 250273 DEBUG nova.network.os_vif_util [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:59:40:79,bridge_name='br-int',has_traffic_filtering=True,id=d08a5642-c043-410d-8d3a-e63134c79cd2,network=Network(fae0cfd0-9ee7-400c-bda6-94fd3af3625d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd08a5642-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.680 250273 DEBUG os_vif [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:59:40:79,bridge_name='br-int',has_traffic_filtering=True,id=d08a5642-c043-410d-8d3a-e63134c79cd2,network=Network(fae0cfd0-9ee7-400c-bda6-94fd3af3625d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd08a5642-c0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.682 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.682 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd08a5642-c0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.684 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.686 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.686 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.690 250273 INFO os_vif [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:59:40:79,bridge_name='br-int',has_traffic_filtering=True,id=d08a5642-c043-410d-8d3a-e63134c79cd2,network=Network(fae0cfd0-9ee7-400c-bda6-94fd3af3625d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd08a5642-c0')#033[00m
Jan 23 05:54:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3845: 321 pgs: 321 active+clean; 202 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 639 KiB/s rd, 27 KiB/s wr, 56 op/s
Jan 23 05:54:04 np0005593232 neutron-haproxy-ovnmeta-fae0cfd0-9ee7-400c-bda6-94fd3af3625d[402954]: [NOTICE]   (402958) : haproxy version is 2.8.14-c23fe91
Jan 23 05:54:04 np0005593232 neutron-haproxy-ovnmeta-fae0cfd0-9ee7-400c-bda6-94fd3af3625d[402954]: [NOTICE]   (402958) : path to executable is /usr/sbin/haproxy
Jan 23 05:54:04 np0005593232 neutron-haproxy-ovnmeta-fae0cfd0-9ee7-400c-bda6-94fd3af3625d[402954]: [WARNING]  (402958) : Exiting Master process...
Jan 23 05:54:04 np0005593232 neutron-haproxy-ovnmeta-fae0cfd0-9ee7-400c-bda6-94fd3af3625d[402954]: [ALERT]    (402958) : Current worker (402960) exited with code 143 (Terminated)
Jan 23 05:54:04 np0005593232 neutron-haproxy-ovnmeta-fae0cfd0-9ee7-400c-bda6-94fd3af3625d[402954]: [WARNING]  (402958) : All workers exited. Exiting... (0)
Jan 23 05:54:04 np0005593232 systemd[1]: libpod-b08a63d4878d2161ead90b769c93622b24edf364fff2225cc6d1067bf4a1fccf.scope: Deactivated successfully.
Jan 23 05:54:04 np0005593232 podman[403112]: 2026-01-23 10:54:04.753783381 +0000 UTC m=+0.102209597 container died b08a63d4878d2161ead90b769c93622b24edf364fff2225cc6d1067bf4a1fccf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fae0cfd0-9ee7-400c-bda6-94fd3af3625d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:54:04 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b08a63d4878d2161ead90b769c93622b24edf364fff2225cc6d1067bf4a1fccf-userdata-shm.mount: Deactivated successfully.
Jan 23 05:54:04 np0005593232 systemd[1]: var-lib-containers-storage-overlay-fb98a7ccec78f04bce060c95a8cdd9846627f5557640b60d648f39c37ae55910-merged.mount: Deactivated successfully.
Jan 23 05:54:04 np0005593232 podman[403112]: 2026-01-23 10:54:04.80900351 +0000 UTC m=+0.157429726 container cleanup b08a63d4878d2161ead90b769c93622b24edf364fff2225cc6d1067bf4a1fccf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fae0cfd0-9ee7-400c-bda6-94fd3af3625d, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 23 05:54:04 np0005593232 systemd[1]: libpod-conmon-b08a63d4878d2161ead90b769c93622b24edf364fff2225cc6d1067bf4a1fccf.scope: Deactivated successfully.
Jan 23 05:54:04 np0005593232 podman[403161]: 2026-01-23 10:54:04.909467836 +0000 UTC m=+0.070139705 container remove b08a63d4878d2161ead90b769c93622b24edf364fff2225cc6d1067bf4a1fccf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fae0cfd0-9ee7-400c-bda6-94fd3af3625d, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 23 05:54:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:54:04.921 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7e7ad715-055c-4dcb-bb9a-349bb19b364f]: (4, ('Fri Jan 23 10:54:04 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-fae0cfd0-9ee7-400c-bda6-94fd3af3625d (b08a63d4878d2161ead90b769c93622b24edf364fff2225cc6d1067bf4a1fccf)\nb08a63d4878d2161ead90b769c93622b24edf364fff2225cc6d1067bf4a1fccf\nFri Jan 23 10:54:04 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-fae0cfd0-9ee7-400c-bda6-94fd3af3625d (b08a63d4878d2161ead90b769c93622b24edf364fff2225cc6d1067bf4a1fccf)\nb08a63d4878d2161ead90b769c93622b24edf364fff2225cc6d1067bf4a1fccf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:54:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:54:04.925 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf12005-61fa-4874-8879-fcb1bb11dcac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:54:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:54:04.927 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfae0cfd0-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.930 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:54:04 np0005593232 kernel: tapfae0cfd0-90: left promiscuous mode
Jan 23 05:54:04 np0005593232 nova_compute[250269]: 2026-01-23 10:54:04.958 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:54:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:54:04.966 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[c0f87b8e-072b-4713-a975-4fd828336f9b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:54:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:54:04.984 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[6eaa7af6-1c13-4270-b1c5-1ca4e848582c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:54:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:54:04.987 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d724ec76-fb72-4d95-8b94-abfcdfea48bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:54:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:54:05.016 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0088066f-1625-4d0b-abc5-e69c9453de32]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 953530, 'reachable_time': 31187, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 403176, 'error': None, 'target': 'ovnmeta-fae0cfd0-9ee7-400c-bda6-94fd3af3625d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:54:05 np0005593232 systemd[1]: run-netns-ovnmeta\x2dfae0cfd0\x2d9ee7\x2d400c\x2dbda6\x2d94fd3af3625d.mount: Deactivated successfully.
Jan 23 05:54:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:54:05.024 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fae0cfd0-9ee7-400c-bda6-94fd3af3625d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 05:54:05 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:54:05.025 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[3697f9ba-81dd-4fdc-b5c6-2113e9ccd96c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 05:54:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:54:05 np0005593232 nova_compute[250269]: 2026-01-23 10:54:05.281 250273 DEBUG nova.compute.manager [req-4edf5aae-c7ef-43b5-a73e-1f639dbc689e req-a22ff71e-7e49-4ec6-bb03-259fc272a333 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Received event network-vif-unplugged-d08a5642-c043-410d-8d3a-e63134c79cd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 05:54:05 np0005593232 nova_compute[250269]: 2026-01-23 10:54:05.282 250273 DEBUG oslo_concurrency.lockutils [req-4edf5aae-c7ef-43b5-a73e-1f639dbc689e req-a22ff71e-7e49-4ec6-bb03-259fc272a333 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "21edef97-3531-4772-8aa5-a3feeb9ff3f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:54:05 np0005593232 nova_compute[250269]: 2026-01-23 10:54:05.282 250273 DEBUG oslo_concurrency.lockutils [req-4edf5aae-c7ef-43b5-a73e-1f639dbc689e req-a22ff71e-7e49-4ec6-bb03-259fc272a333 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "21edef97-3531-4772-8aa5-a3feeb9ff3f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:54:05 np0005593232 nova_compute[250269]: 2026-01-23 10:54:05.282 250273 DEBUG oslo_concurrency.lockutils [req-4edf5aae-c7ef-43b5-a73e-1f639dbc689e req-a22ff71e-7e49-4ec6-bb03-259fc272a333 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "21edef97-3531-4772-8aa5-a3feeb9ff3f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:54:05 np0005593232 nova_compute[250269]: 2026-01-23 10:54:05.282 250273 DEBUG nova.compute.manager [req-4edf5aae-c7ef-43b5-a73e-1f639dbc689e req-a22ff71e-7e49-4ec6-bb03-259fc272a333 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] No waiting events found dispatching network-vif-unplugged-d08a5642-c043-410d-8d3a-e63134c79cd2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 05:54:05 np0005593232 nova_compute[250269]: 2026-01-23 10:54:05.283 250273 DEBUG nova.compute.manager [req-4edf5aae-c7ef-43b5-a73e-1f639dbc689e req-a22ff71e-7e49-4ec6-bb03-259fc272a333 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Received event network-vif-unplugged-d08a5642-c043-410d-8d3a-e63134c79cd2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 05:54:05 np0005593232 nova_compute[250269]: 2026-01-23 10:54:05.647 250273 INFO nova.virt.libvirt.driver [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Deleting instance files /var/lib/nova/instances/21edef97-3531-4772-8aa5-a3feeb9ff3f5_del#033[00m
Jan 23 05:54:05 np0005593232 nova_compute[250269]: 2026-01-23 10:54:05.648 250273 INFO nova.virt.libvirt.driver [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Deletion of /var/lib/nova/instances/21edef97-3531-4772-8aa5-a3feeb9ff3f5_del complete#033[00m
Jan 23 05:54:05 np0005593232 nova_compute[250269]: 2026-01-23 10:54:05.711 250273 INFO nova.compute.manager [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Took 1.34 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 05:54:05 np0005593232 nova_compute[250269]: 2026-01-23 10:54:05.712 250273 DEBUG oslo.service.loopingcall [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 05:54:05 np0005593232 nova_compute[250269]: 2026-01-23 10:54:05.714 250273 DEBUG nova.compute.manager [-] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 05:54:05 np0005593232 nova_compute[250269]: 2026-01-23 10:54:05.714 250273 DEBUG nova.network.neutron [-] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 05:54:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:54:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:06.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:54:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:06.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3846: 321 pgs: 321 active+clean; 171 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 560 KiB/s rd, 24 KiB/s wr, 52 op/s
Jan 23 05:54:07 np0005593232 nova_compute[250269]: 2026-01-23 10:54:07.386 250273 DEBUG nova.compute.manager [req-a80f591e-16d4-4125-91da-5b2e8b7f9a02 req-9d797849-6393-49cb-8b6a-e87891cc5f00 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Received event network-vif-plugged-d08a5642-c043-410d-8d3a-e63134c79cd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:54:07 np0005593232 nova_compute[250269]: 2026-01-23 10:54:07.387 250273 DEBUG oslo_concurrency.lockutils [req-a80f591e-16d4-4125-91da-5b2e8b7f9a02 req-9d797849-6393-49cb-8b6a-e87891cc5f00 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "21edef97-3531-4772-8aa5-a3feeb9ff3f5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:54:07 np0005593232 nova_compute[250269]: 2026-01-23 10:54:07.387 250273 DEBUG oslo_concurrency.lockutils [req-a80f591e-16d4-4125-91da-5b2e8b7f9a02 req-9d797849-6393-49cb-8b6a-e87891cc5f00 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "21edef97-3531-4772-8aa5-a3feeb9ff3f5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:54:07 np0005593232 nova_compute[250269]: 2026-01-23 10:54:07.387 250273 DEBUG oslo_concurrency.lockutils [req-a80f591e-16d4-4125-91da-5b2e8b7f9a02 req-9d797849-6393-49cb-8b6a-e87891cc5f00 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "21edef97-3531-4772-8aa5-a3feeb9ff3f5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:54:07 np0005593232 nova_compute[250269]: 2026-01-23 10:54:07.388 250273 DEBUG nova.compute.manager [req-a80f591e-16d4-4125-91da-5b2e8b7f9a02 req-9d797849-6393-49cb-8b6a-e87891cc5f00 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] No waiting events found dispatching network-vif-plugged-d08a5642-c043-410d-8d3a-e63134c79cd2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 05:54:07 np0005593232 nova_compute[250269]: 2026-01-23 10:54:07.388 250273 WARNING nova.compute.manager [req-a80f591e-16d4-4125-91da-5b2e8b7f9a02 req-9d797849-6393-49cb-8b6a-e87891cc5f00 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Received unexpected event network-vif-plugged-d08a5642-c043-410d-8d3a-e63134c79cd2 for instance with vm_state active and task_state deleting.
Jan 23 05:54:07 np0005593232 nova_compute[250269]: 2026-01-23 10:54:07.402 250273 DEBUG nova.network.neutron [-] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 05:54:07 np0005593232 nova_compute[250269]: 2026-01-23 10:54:07.405 250273 DEBUG nova.network.neutron [req-edd0c04d-750d-46e2-a349-c3d5bcfb5e5b req-a43a8e91-6fb4-4593-a9bb-5b8513cfc139 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Updated VIF entry in instance network info cache for port d08a5642-c043-410d-8d3a-e63134c79cd2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 23 05:54:07 np0005593232 nova_compute[250269]: 2026-01-23 10:54:07.405 250273 DEBUG nova.network.neutron [req-edd0c04d-750d-46e2-a349-c3d5bcfb5e5b req-a43a8e91-6fb4-4593-a9bb-5b8513cfc139 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Updating instance_info_cache with network_info: [{"id": "d08a5642-c043-410d-8d3a-e63134c79cd2", "address": "fa:16:3e:59:40:79", "network": {"id": "fae0cfd0-9ee7-400c-bda6-94fd3af3625d", "bridge": "br-int", "label": "tempest-network-smoke--1773492812", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c06f98b51aeb48de91d116fda54a161f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd08a5642-c0", "ovs_interfaceid": "d08a5642-c043-410d-8d3a-e63134c79cd2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 05:54:07 np0005593232 nova_compute[250269]: 2026-01-23 10:54:07.430 250273 DEBUG oslo_concurrency.lockutils [req-edd0c04d-750d-46e2-a349-c3d5bcfb5e5b req-a43a8e91-6fb4-4593-a9bb-5b8513cfc139 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-21edef97-3531-4772-8aa5-a3feeb9ff3f5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 05:54:07 np0005593232 nova_compute[250269]: 2026-01-23 10:54:07.433 250273 INFO nova.compute.manager [-] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Took 1.72 seconds to deallocate network for instance.
Jan 23 05:54:07 np0005593232 nova_compute[250269]: 2026-01-23 10:54:07.476 250273 DEBUG oslo_concurrency.lockutils [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 05:54:07 np0005593232 nova_compute[250269]: 2026-01-23 10:54:07.477 250273 DEBUG oslo_concurrency.lockutils [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 05:54:07 np0005593232 nova_compute[250269]: 2026-01-23 10:54:07.494 250273 DEBUG nova.compute.manager [req-8cbe6287-7e91-4a04-b209-eb3c09e45f60 req-7abf7910-d3ed-4331-9b9b-fd89f0d7fa4e 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Received event network-vif-deleted-d08a5642-c043-410d-8d3a-e63134c79cd2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 05:54:07 np0005593232 nova_compute[250269]: 2026-01-23 10:54:07.533 250273 DEBUG oslo_concurrency.processutils [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 05:54:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:54:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:54:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:54:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:54:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:54:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:54:07 np0005593232 nova_compute[250269]: 2026-01-23 10:54:07.776 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:54:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:54:08 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1465737491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:54:08 np0005593232 nova_compute[250269]: 2026-01-23 10:54:08.052 250273 DEBUG oslo_concurrency.processutils [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 05:54:08 np0005593232 nova_compute[250269]: 2026-01-23 10:54:08.066 250273 DEBUG nova.compute.provider_tree [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 05:54:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:08.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:08 np0005593232 nova_compute[250269]: 2026-01-23 10:54:08.089 250273 DEBUG nova.scheduler.client.report [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 05:54:08 np0005593232 nova_compute[250269]: 2026-01-23 10:54:08.115 250273 DEBUG oslo_concurrency.lockutils [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:54:08 np0005593232 nova_compute[250269]: 2026-01-23 10:54:08.146 250273 INFO nova.scheduler.client.report [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] Deleted allocations for instance 21edef97-3531-4772-8aa5-a3feeb9ff3f5
Jan 23 05:54:08 np0005593232 nova_compute[250269]: 2026-01-23 10:54:08.214 250273 DEBUG oslo_concurrency.lockutils [None req-00e3448c-72bc-4c4e-8815-8cd2b963dc6d 420c366dc5dc45a48da4e0b18c93043f c06f98b51aeb48de91d116fda54a161f - - default default] Lock "21edef97-3531-4772-8aa5-a3feeb9ff3f5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 05:54:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:54:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:08.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:54:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3847: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 437 KiB/s rd, 22 KiB/s wr, 63 op/s
Jan 23 05:54:09 np0005593232 nova_compute[250269]: 2026-01-23 10:54:09.095 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:54:09 np0005593232 nova_compute[250269]: 2026-01-23 10:54:09.686 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:54:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:54:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:10.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:54:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:54:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:10.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3848: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 11 KiB/s wr, 28 op/s
Jan 23 05:54:11 np0005593232 nova_compute[250269]: 2026-01-23 10:54:11.513 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:54:11 np0005593232 nova_compute[250269]: 2026-01-23 10:54:11.584 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:54:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:12.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:54:12.234 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=88, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=87) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 05:54:12 np0005593232 nova_compute[250269]: 2026-01-23 10:54:12.236 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:54:12 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:54:12.236 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 23 05:54:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:54:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:12.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:54:12 np0005593232 nova_compute[250269]: 2026-01-23 10:54:12.779 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:54:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3849: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 11 KiB/s wr, 28 op/s
Jan 23 05:54:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:54:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:14.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:54:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:54:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:14.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:54:14 np0005593232 nova_compute[250269]: 2026-01-23 10:54:14.688 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:54:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3850: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.1 KiB/s wr, 27 op/s
Jan 23 05:54:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:54:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:54:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:16.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:54:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:16.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3851: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 05:54:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:54:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1085101738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:54:17 np0005593232 nova_compute[250269]: 2026-01-23 10:54:17.822 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:54:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:54:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:18.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:54:18 np0005593232 nova_compute[250269]: 2026-01-23 10:54:18.296 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:54:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:54:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:18.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:54:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3852: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 673 B/s wr, 24 op/s
Jan 23 05:54:19 np0005593232 nova_compute[250269]: 2026-01-23 10:54:19.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:54:19 np0005593232 nova_compute[250269]: 2026-01-23 10:54:19.629 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769165644.6281755, 21edef97-3531-4772-8aa5-a3feeb9ff3f5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 05:54:19 np0005593232 nova_compute[250269]: 2026-01-23 10:54:19.630 250273 INFO nova.compute.manager [-] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] VM Stopped (Lifecycle Event)
Jan 23 05:54:19 np0005593232 nova_compute[250269]: 2026-01-23 10:54:19.691 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:54:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:20.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:54:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:20.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3853: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:54:20 np0005593232 nova_compute[250269]: 2026-01-23 10:54:20.906 250273 DEBUG nova.compute.manager [None req-9d4c10c3-5787-4e16-beeb-0dc13f30f0b8 - - - - - -] [instance: 21edef97-3531-4772-8aa5-a3feeb9ff3f5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 05:54:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:54:21.240 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '88'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 23 05:54:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:54:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:22.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:54:22 np0005593232 nova_compute[250269]: 2026-01-23 10:54:22.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:54:22 np0005593232 nova_compute[250269]: 2026-01-23 10:54:22.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 05:54:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:22.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:22 np0005593232 nova_compute[250269]: 2026-01-23 10:54:22.824 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:54:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3854: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:54:23 np0005593232 nova_compute[250269]: 2026-01-23 10:54:23.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:54:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:54:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:24.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:54:24 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 05:54:24 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 05:54:24 np0005593232 nova_compute[250269]: 2026-01-23 10:54:24.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:54:24 np0005593232 nova_compute[250269]: 2026-01-23 10:54:24.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 05:54:24 np0005593232 nova_compute[250269]: 2026-01-23 10:54:24.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 23 05:54:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:54:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:24.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:54:24 np0005593232 nova_compute[250269]: 2026-01-23 10:54:24.694 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:54:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3855: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:54:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:54:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:26.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:26.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3856: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:54:27 np0005593232 nova_compute[250269]: 2026-01-23 10:54:27.826 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:54:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:28.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:54:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:28.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:54:28 np0005593232 nova_compute[250269]: 2026-01-23 10:54:28.656 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:54:28 np0005593232 nova_compute[250269]: 2026-01-23 10:54:28.656 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:54:28 np0005593232 nova_compute[250269]: 2026-01-23 10:54:28.656 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:54:28 np0005593232 nova_compute[250269]: 2026-01-23 10:54:28.680 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:54:28 np0005593232 nova_compute[250269]: 2026-01-23 10:54:28.680 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:54:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3857: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:54:29 np0005593232 podman[403264]: 2026-01-23 10:54:29.463807239 +0000 UTC m=+0.114523186 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:54:29 np0005593232 nova_compute[250269]: 2026-01-23 10:54:29.696 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:54:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:30.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:54:30 np0005593232 nova_compute[250269]: 2026-01-23 10:54:30.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:54:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:30.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3858: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:54:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:32.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:54:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:32.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:54:32 np0005593232 podman[403343]: 2026-01-23 10:54:32.42178161 +0000 UTC m=+0.064544116 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:54:32 np0005593232 nova_compute[250269]: 2026-01-23 10:54:32.828 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:54:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3859: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:54:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:34.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:54:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:34.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:54:34 np0005593232 nova_compute[250269]: 2026-01-23 10:54:34.698 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:54:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3860: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:54:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:54:35 np0005593232 nova_compute[250269]: 2026-01-23 10:54:35.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:54:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:54:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:54:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:54:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:54:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:36.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:54:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:54:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:36.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:36 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:54:36 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:54:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:54:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:54:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:54:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:54:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:54:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3861: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:54:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:54:37 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev e6765b79-e9e8-4b54-a258-23951ffca7fe does not exist
Jan 23 05:54:37 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev e0ea2422-ee0e-4213-8d05-8438edfdf467 does not exist
Jan 23 05:54:37 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 46623e5b-f85c-4a9d-a715-b38741c18b67 does not exist
Jan 23 05:54:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:54:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:54:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:54:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:54:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:54:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:54:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:54:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:54:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:54:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:54:37
Jan 23 05:54:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:54:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:54:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'volumes', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms']
Jan 23 05:54:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:54:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:54:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:54:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:54:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:54:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:54:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:54:37 np0005593232 podman[403757]: 2026-01-23 10:54:37.710897344 +0000 UTC m=+0.042585792 container create 9993aab0b99f1369ffa8600af8f7574a19f18d5b4eef3c7f8e3eb7fb412e77a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 23 05:54:37 np0005593232 systemd[1]: Started libpod-conmon-9993aab0b99f1369ffa8600af8f7574a19f18d5b4eef3c7f8e3eb7fb412e77a7.scope.
Jan 23 05:54:37 np0005593232 nova_compute[250269]: 2026-01-23 10:54:37.768 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:54:37 np0005593232 nova_compute[250269]: 2026-01-23 10:54:37.769 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:54:37 np0005593232 nova_compute[250269]: 2026-01-23 10:54:37.770 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:54:37 np0005593232 nova_compute[250269]: 2026-01-23 10:54:37.770 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:54:37 np0005593232 nova_compute[250269]: 2026-01-23 10:54:37.770 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:54:37 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:54:37 np0005593232 podman[403757]: 2026-01-23 10:54:37.691988766 +0000 UTC m=+0.023677224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:54:37 np0005593232 podman[403757]: 2026-01-23 10:54:37.794050607 +0000 UTC m=+0.125739055 container init 9993aab0b99f1369ffa8600af8f7574a19f18d5b4eef3c7f8e3eb7fb412e77a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wiles, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 05:54:37 np0005593232 podman[403757]: 2026-01-23 10:54:37.800332716 +0000 UTC m=+0.132021174 container start 9993aab0b99f1369ffa8600af8f7574a19f18d5b4eef3c7f8e3eb7fb412e77a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wiles, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:54:37 np0005593232 podman[403757]: 2026-01-23 10:54:37.803417274 +0000 UTC m=+0.135105722 container attach 9993aab0b99f1369ffa8600af8f7574a19f18d5b4eef3c7f8e3eb7fb412e77a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wiles, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:54:37 np0005593232 bold_wiles[403773]: 167 167
Jan 23 05:54:37 np0005593232 systemd[1]: libpod-9993aab0b99f1369ffa8600af8f7574a19f18d5b4eef3c7f8e3eb7fb412e77a7.scope: Deactivated successfully.
Jan 23 05:54:37 np0005593232 podman[403757]: 2026-01-23 10:54:37.806789559 +0000 UTC m=+0.138478007 container died 9993aab0b99f1369ffa8600af8f7574a19f18d5b4eef3c7f8e3eb7fb412e77a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Jan 23 05:54:37 np0005593232 systemd[1]: var-lib-containers-storage-overlay-9cda0054668f1f7d0a25b1943ba220620bfc457f600ae3293b22aa2f5fb942fa-merged.mount: Deactivated successfully.
Jan 23 05:54:37 np0005593232 nova_compute[250269]: 2026-01-23 10:54:37.829 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:54:37 np0005593232 podman[403757]: 2026-01-23 10:54:37.851218512 +0000 UTC m=+0.182906960 container remove 9993aab0b99f1369ffa8600af8f7574a19f18d5b4eef3c7f8e3eb7fb412e77a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wiles, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 05:54:37 np0005593232 systemd[1]: libpod-conmon-9993aab0b99f1369ffa8600af8f7574a19f18d5b4eef3c7f8e3eb7fb412e77a7.scope: Deactivated successfully.
Jan 23 05:54:38 np0005593232 podman[403819]: 2026-01-23 10:54:38.025514637 +0000 UTC m=+0.040843152 container create 320a450ba256a451286a9968347479ce7fbe8ad042b95febe36ddd938d5e700d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:54:38 np0005593232 systemd[1]: Started libpod-conmon-320a450ba256a451286a9968347479ce7fbe8ad042b95febe36ddd938d5e700d.scope.
Jan 23 05:54:38 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:54:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4ac6c4452a86e9496e85fb020eb89306d1f2fc330a38038b0eb0ff183668ffa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:54:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4ac6c4452a86e9496e85fb020eb89306d1f2fc330a38038b0eb0ff183668ffa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:54:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4ac6c4452a86e9496e85fb020eb89306d1f2fc330a38038b0eb0ff183668ffa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:54:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4ac6c4452a86e9496e85fb020eb89306d1f2fc330a38038b0eb0ff183668ffa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:54:38 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4ac6c4452a86e9496e85fb020eb89306d1f2fc330a38038b0eb0ff183668ffa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:54:38 np0005593232 podman[403819]: 2026-01-23 10:54:38.006368192 +0000 UTC m=+0.021696727 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:54:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:54:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:38.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:54:38 np0005593232 podman[403819]: 2026-01-23 10:54:38.137213432 +0000 UTC m=+0.152541967 container init 320a450ba256a451286a9968347479ce7fbe8ad042b95febe36ddd938d5e700d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_saha, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 05:54:38 np0005593232 podman[403819]: 2026-01-23 10:54:38.143779708 +0000 UTC m=+0.159108233 container start 320a450ba256a451286a9968347479ce7fbe8ad042b95febe36ddd938d5e700d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_saha, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:54:38 np0005593232 podman[403819]: 2026-01-23 10:54:38.233232991 +0000 UTC m=+0.248561526 container attach 320a450ba256a451286a9968347479ce7fbe8ad042b95febe36ddd938d5e700d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_saha, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:54:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:54:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1280582932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:54:38 np0005593232 nova_compute[250269]: 2026-01-23 10:54:38.263 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:54:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:54:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:38.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:54:38 np0005593232 nova_compute[250269]: 2026-01-23 10:54:38.448 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:54:38 np0005593232 nova_compute[250269]: 2026-01-23 10:54:38.449 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4072MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:54:38 np0005593232 nova_compute[250269]: 2026-01-23 10:54:38.450 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:54:38 np0005593232 nova_compute[250269]: 2026-01-23 10:54:38.450 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:54:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:54:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:54:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:54:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:54:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:54:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:54:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:54:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:54:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:54:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:54:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3862: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:54:39 np0005593232 magical_saha[403835]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:54:39 np0005593232 magical_saha[403835]: --> relative data size: 1.0
Jan 23 05:54:39 np0005593232 magical_saha[403835]: --> All data devices are unavailable
Jan 23 05:54:39 np0005593232 systemd[1]: libpod-320a450ba256a451286a9968347479ce7fbe8ad042b95febe36ddd938d5e700d.scope: Deactivated successfully.
Jan 23 05:54:39 np0005593232 podman[403819]: 2026-01-23 10:54:39.047100905 +0000 UTC m=+1.062429430 container died 320a450ba256a451286a9968347479ce7fbe8ad042b95febe36ddd938d5e700d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 05:54:39 np0005593232 nova_compute[250269]: 2026-01-23 10:54:39.700 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:54:39 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e4ac6c4452a86e9496e85fb020eb89306d1f2fc330a38038b0eb0ff183668ffa-merged.mount: Deactivated successfully.
Jan 23 05:54:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:40.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:40 np0005593232 nova_compute[250269]: 2026-01-23 10:54:40.138 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:54:40 np0005593232 nova_compute[250269]: 2026-01-23 10:54:40.139 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:54:40 np0005593232 nova_compute[250269]: 2026-01-23 10:54:40.154 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:54:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:54:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:54:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:40.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:54:40 np0005593232 podman[403819]: 2026-01-23 10:54:40.454549942 +0000 UTC m=+2.469878467 container remove 320a450ba256a451286a9968347479ce7fbe8ad042b95febe36ddd938d5e700d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_saha, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:54:40 np0005593232 systemd[1]: libpod-conmon-320a450ba256a451286a9968347479ce7fbe8ad042b95febe36ddd938d5e700d.scope: Deactivated successfully.
Jan 23 05:54:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:54:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3852676264' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:54:40 np0005593232 nova_compute[250269]: 2026-01-23 10:54:40.627 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:54:40 np0005593232 nova_compute[250269]: 2026-01-23 10:54:40.634 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:54:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3863: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:54:41 np0005593232 podman[404030]: 2026-01-23 10:54:41.090123579 +0000 UTC m=+0.030333904 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:54:41 np0005593232 podman[404030]: 2026-01-23 10:54:41.280942893 +0000 UTC m=+0.221153238 container create 985dda3f78a1ed5668c0eff849004007a9fa8f23277b371178fd182f1baceb3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 23 05:54:41 np0005593232 systemd[1]: Started libpod-conmon-985dda3f78a1ed5668c0eff849004007a9fa8f23277b371178fd182f1baceb3a.scope.
Jan 23 05:54:41 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:54:41 np0005593232 podman[404030]: 2026-01-23 10:54:41.387590734 +0000 UTC m=+0.327801039 container init 985dda3f78a1ed5668c0eff849004007a9fa8f23277b371178fd182f1baceb3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 23 05:54:41 np0005593232 podman[404030]: 2026-01-23 10:54:41.397042563 +0000 UTC m=+0.337252868 container start 985dda3f78a1ed5668c0eff849004007a9fa8f23277b371178fd182f1baceb3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 23 05:54:41 np0005593232 elastic_joliot[404046]: 167 167
Jan 23 05:54:41 np0005593232 systemd[1]: libpod-985dda3f78a1ed5668c0eff849004007a9fa8f23277b371178fd182f1baceb3a.scope: Deactivated successfully.
Jan 23 05:54:41 np0005593232 podman[404030]: 2026-01-23 10:54:41.410496155 +0000 UTC m=+0.350706490 container attach 985dda3f78a1ed5668c0eff849004007a9fa8f23277b371178fd182f1baceb3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_joliot, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 05:54:41 np0005593232 podman[404030]: 2026-01-23 10:54:41.411080232 +0000 UTC m=+0.351290537 container died 985dda3f78a1ed5668c0eff849004007a9fa8f23277b371178fd182f1baceb3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_joliot, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 05:54:41 np0005593232 systemd[1]: var-lib-containers-storage-overlay-974e11d14c26018a40b48844472b276adb88103ca04c0bf1de9c1c0f0d6d04ff-merged.mount: Deactivated successfully.
Jan 23 05:54:41 np0005593232 podman[404030]: 2026-01-23 10:54:41.52289517 +0000 UTC m=+0.463105475 container remove 985dda3f78a1ed5668c0eff849004007a9fa8f23277b371178fd182f1baceb3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:54:41 np0005593232 systemd[1]: libpod-conmon-985dda3f78a1ed5668c0eff849004007a9fa8f23277b371178fd182f1baceb3a.scope: Deactivated successfully.
Jan 23 05:54:41 np0005593232 podman[404070]: 2026-01-23 10:54:41.750062988 +0000 UTC m=+0.084423991 container create a81e285ddfeb9af802d3b6d70632aa151c0a1582b1fb8d2c14a5ea3d98cd80d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:54:41 np0005593232 podman[404070]: 2026-01-23 10:54:41.693970113 +0000 UTC m=+0.028331096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:54:41 np0005593232 systemd[1]: Started libpod-conmon-a81e285ddfeb9af802d3b6d70632aa151c0a1582b1fb8d2c14a5ea3d98cd80d8.scope.
Jan 23 05:54:41 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:54:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34391cbe7a4ab99cf597981795ed7de0f14d5272c9e0b8626f3721d7205f2636/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:54:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34391cbe7a4ab99cf597981795ed7de0f14d5272c9e0b8626f3721d7205f2636/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:54:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34391cbe7a4ab99cf597981795ed7de0f14d5272c9e0b8626f3721d7205f2636/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:54:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34391cbe7a4ab99cf597981795ed7de0f14d5272c9e0b8626f3721d7205f2636/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:54:41 np0005593232 podman[404070]: 2026-01-23 10:54:41.873472296 +0000 UTC m=+0.207833279 container init a81e285ddfeb9af802d3b6d70632aa151c0a1582b1fb8d2c14a5ea3d98cd80d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 23 05:54:41 np0005593232 podman[404070]: 2026-01-23 10:54:41.881081692 +0000 UTC m=+0.215442655 container start a81e285ddfeb9af802d3b6d70632aa151c0a1582b1fb8d2c14a5ea3d98cd80d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:54:41 np0005593232 podman[404070]: 2026-01-23 10:54:41.901461161 +0000 UTC m=+0.235822124 container attach a81e285ddfeb9af802d3b6d70632aa151c0a1582b1fb8d2c14a5ea3d98cd80d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_benz, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 05:54:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:42.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:42.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:42 np0005593232 clever_benz[404086]: {
Jan 23 05:54:42 np0005593232 clever_benz[404086]:    "0": [
Jan 23 05:54:42 np0005593232 clever_benz[404086]:        {
Jan 23 05:54:42 np0005593232 clever_benz[404086]:            "devices": [
Jan 23 05:54:42 np0005593232 clever_benz[404086]:                "/dev/loop3"
Jan 23 05:54:42 np0005593232 clever_benz[404086]:            ],
Jan 23 05:54:42 np0005593232 clever_benz[404086]:            "lv_name": "ceph_lv0",
Jan 23 05:54:42 np0005593232 clever_benz[404086]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:54:42 np0005593232 clever_benz[404086]:            "lv_size": "7511998464",
Jan 23 05:54:42 np0005593232 clever_benz[404086]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:54:42 np0005593232 clever_benz[404086]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:54:42 np0005593232 clever_benz[404086]:            "name": "ceph_lv0",
Jan 23 05:54:42 np0005593232 clever_benz[404086]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:54:42 np0005593232 clever_benz[404086]:            "tags": {
Jan 23 05:54:42 np0005593232 clever_benz[404086]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:54:42 np0005593232 clever_benz[404086]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:54:42 np0005593232 clever_benz[404086]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:54:42 np0005593232 clever_benz[404086]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:54:42 np0005593232 clever_benz[404086]:                "ceph.cluster_name": "ceph",
Jan 23 05:54:42 np0005593232 clever_benz[404086]:                "ceph.crush_device_class": "",
Jan 23 05:54:42 np0005593232 clever_benz[404086]:                "ceph.encrypted": "0",
Jan 23 05:54:42 np0005593232 clever_benz[404086]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:54:42 np0005593232 clever_benz[404086]:                "ceph.osd_id": "0",
Jan 23 05:54:42 np0005593232 clever_benz[404086]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:54:42 np0005593232 clever_benz[404086]:                "ceph.type": "block",
Jan 23 05:54:42 np0005593232 clever_benz[404086]:                "ceph.vdo": "0"
Jan 23 05:54:42 np0005593232 clever_benz[404086]:            },
Jan 23 05:54:42 np0005593232 clever_benz[404086]:            "type": "block",
Jan 23 05:54:42 np0005593232 clever_benz[404086]:            "vg_name": "ceph_vg0"
Jan 23 05:54:42 np0005593232 clever_benz[404086]:        }
Jan 23 05:54:42 np0005593232 clever_benz[404086]:    ]
Jan 23 05:54:42 np0005593232 clever_benz[404086]: }
Jan 23 05:54:42 np0005593232 systemd[1]: libpod-a81e285ddfeb9af802d3b6d70632aa151c0a1582b1fb8d2c14a5ea3d98cd80d8.scope: Deactivated successfully.
Jan 23 05:54:42 np0005593232 podman[404070]: 2026-01-23 10:54:42.636534245 +0000 UTC m=+0.970895208 container died a81e285ddfeb9af802d3b6d70632aa151c0a1582b1fb8d2c14a5ea3d98cd80d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 23 05:54:42 np0005593232 systemd[1]: var-lib-containers-storage-overlay-34391cbe7a4ab99cf597981795ed7de0f14d5272c9e0b8626f3721d7205f2636-merged.mount: Deactivated successfully.
Jan 23 05:54:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:54:42.675 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:54:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:54:42.678 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:54:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:54:42.678 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:54:42 np0005593232 podman[404070]: 2026-01-23 10:54:42.696025256 +0000 UTC m=+1.030386229 container remove a81e285ddfeb9af802d3b6d70632aa151c0a1582b1fb8d2c14a5ea3d98cd80d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_benz, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:54:42 np0005593232 systemd[1]: libpod-conmon-a81e285ddfeb9af802d3b6d70632aa151c0a1582b1fb8d2c14a5ea3d98cd80d8.scope: Deactivated successfully.
Jan 23 05:54:42 np0005593232 nova_compute[250269]: 2026-01-23 10:54:42.831 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:54:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3864: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:54:43 np0005593232 podman[404248]: 2026-01-23 10:54:43.318105679 +0000 UTC m=+0.026601878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:54:43 np0005593232 podman[404248]: 2026-01-23 10:54:43.447817446 +0000 UTC m=+0.156313685 container create 5caf08e677e15d098c0cb103877c7759178898ed2ef27b62c10b5282493d845c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Jan 23 05:54:43 np0005593232 systemd[1]: Started libpod-conmon-5caf08e677e15d098c0cb103877c7759178898ed2ef27b62c10b5282493d845c.scope.
Jan 23 05:54:43 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:54:43 np0005593232 podman[404248]: 2026-01-23 10:54:43.856306337 +0000 UTC m=+0.564802586 container init 5caf08e677e15d098c0cb103877c7759178898ed2ef27b62c10b5282493d845c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_napier, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:54:43 np0005593232 podman[404248]: 2026-01-23 10:54:43.863229323 +0000 UTC m=+0.571725532 container start 5caf08e677e15d098c0cb103877c7759178898ed2ef27b62c10b5282493d845c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_napier, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 05:54:43 np0005593232 hungry_napier[404265]: 167 167
Jan 23 05:54:43 np0005593232 systemd[1]: libpod-5caf08e677e15d098c0cb103877c7759178898ed2ef27b62c10b5282493d845c.scope: Deactivated successfully.
Jan 23 05:54:43 np0005593232 podman[404248]: 2026-01-23 10:54:43.870826299 +0000 UTC m=+0.579322488 container attach 5caf08e677e15d098c0cb103877c7759178898ed2ef27b62c10b5282493d845c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 05:54:43 np0005593232 podman[404248]: 2026-01-23 10:54:43.871311223 +0000 UTC m=+0.579807412 container died 5caf08e677e15d098c0cb103877c7759178898ed2ef27b62c10b5282493d845c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_napier, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:54:44 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ef7da25f1fd6b7c3f8b81a2459ec53b341fe4c7732dd7e241c53e345edfb6b67-merged.mount: Deactivated successfully.
Jan 23 05:54:44 np0005593232 podman[404248]: 2026-01-23 10:54:44.057846875 +0000 UTC m=+0.766343054 container remove 5caf08e677e15d098c0cb103877c7759178898ed2ef27b62c10b5282493d845c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_napier, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:54:44 np0005593232 systemd[1]: libpod-conmon-5caf08e677e15d098c0cb103877c7759178898ed2ef27b62c10b5282493d845c.scope: Deactivated successfully.
Jan 23 05:54:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:44.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:44 np0005593232 podman[404291]: 2026-01-23 10:54:44.226021186 +0000 UTC m=+0.045833544 container create 9e6585faa72a11b63cc71c1822dc25b27ed04bfd7ce3d24b2a3410e25cb41ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 23 05:54:44 np0005593232 systemd[1]: Started libpod-conmon-9e6585faa72a11b63cc71c1822dc25b27ed04bfd7ce3d24b2a3410e25cb41ace.scope.
Jan 23 05:54:44 np0005593232 podman[404291]: 2026-01-23 10:54:44.203137065 +0000 UTC m=+0.022949443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:54:44 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:54:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/502f9d645c57e96e90a4c616b5ea49779f2894ed84514332c0837d1002a1b792/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:54:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/502f9d645c57e96e90a4c616b5ea49779f2894ed84514332c0837d1002a1b792/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:54:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/502f9d645c57e96e90a4c616b5ea49779f2894ed84514332c0837d1002a1b792/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:54:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/502f9d645c57e96e90a4c616b5ea49779f2894ed84514332c0837d1002a1b792/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:54:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:54:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:44.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:54:44 np0005593232 podman[404291]: 2026-01-23 10:54:44.385366825 +0000 UTC m=+0.205179203 container init 9e6585faa72a11b63cc71c1822dc25b27ed04bfd7ce3d24b2a3410e25cb41ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 05:54:44 np0005593232 podman[404291]: 2026-01-23 10:54:44.39928297 +0000 UTC m=+0.219095328 container start 9e6585faa72a11b63cc71c1822dc25b27ed04bfd7ce3d24b2a3410e25cb41ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:54:44 np0005593232 podman[404291]: 2026-01-23 10:54:44.40244546 +0000 UTC m=+0.222257818 container attach 9e6585faa72a11b63cc71c1822dc25b27ed04bfd7ce3d24b2a3410e25cb41ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_archimedes, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 05:54:44 np0005593232 nova_compute[250269]: 2026-01-23 10:54:44.703 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:54:44 np0005593232 nova_compute[250269]: 2026-01-23 10:54:44.841 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:54:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:54:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3993084551' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:54:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:54:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3993084551' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:54:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3865: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:54:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:54:45 np0005593232 admiring_archimedes[404308]: {
Jan 23 05:54:45 np0005593232 admiring_archimedes[404308]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:54:45 np0005593232 admiring_archimedes[404308]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:54:45 np0005593232 admiring_archimedes[404308]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:54:45 np0005593232 admiring_archimedes[404308]:        "osd_id": 0,
Jan 23 05:54:45 np0005593232 admiring_archimedes[404308]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:54:45 np0005593232 admiring_archimedes[404308]:        "type": "bluestore"
Jan 23 05:54:45 np0005593232 admiring_archimedes[404308]:    }
Jan 23 05:54:45 np0005593232 admiring_archimedes[404308]: }
Jan 23 05:54:45 np0005593232 systemd[1]: libpod-9e6585faa72a11b63cc71c1822dc25b27ed04bfd7ce3d24b2a3410e25cb41ace.scope: Deactivated successfully.
Jan 23 05:54:45 np0005593232 podman[404291]: 2026-01-23 10:54:45.307347472 +0000 UTC m=+1.127159840 container died 9e6585faa72a11b63cc71c1822dc25b27ed04bfd7ce3d24b2a3410e25cb41ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_archimedes, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 05:54:45 np0005593232 systemd[1]: var-lib-containers-storage-overlay-502f9d645c57e96e90a4c616b5ea49779f2894ed84514332c0837d1002a1b792-merged.mount: Deactivated successfully.
Jan 23 05:54:45 np0005593232 podman[404291]: 2026-01-23 10:54:45.569767551 +0000 UTC m=+1.389579909 container remove 9e6585faa72a11b63cc71c1822dc25b27ed04bfd7ce3d24b2a3410e25cb41ace (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_archimedes, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:54:45 np0005593232 systemd[1]: libpod-conmon-9e6585faa72a11b63cc71c1822dc25b27ed04bfd7ce3d24b2a3410e25cb41ace.scope: Deactivated successfully.
Jan 23 05:54:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:54:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:54:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:54:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:54:45 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7029a877-d329-4c11-96b8-f65eb69ff165 does not exist
Jan 23 05:54:45 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev e597ccaa-d7db-41c8-92b2-c4748bc690b9 does not exist
Jan 23 05:54:45 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev e42f982c-0ae8-47e6-92ce-8d7ce4fefb52 does not exist
Jan 23 05:54:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:46.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:46 np0005593232 nova_compute[250269]: 2026-01-23 10:54:46.222 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:54:46 np0005593232 nova_compute[250269]: 2026-01-23 10:54:46.223 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 7.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:54:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:46.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:46 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:54:46 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:54:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3866: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:54:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:54:47 np0005593232 nova_compute[250269]: 2026-01-23 10:54:47.832 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:54:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:54:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:54:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:54:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 05:54:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:54:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 05:54:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:54:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 05:54:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:54:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 05:54:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:54:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 05:54:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:54:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:54:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:54:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 05:54:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:54:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 05:54:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:54:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:54:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:54:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 05:54:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:54:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:48.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:54:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:54:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:48.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:54:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3867: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:54:49 np0005593232 nova_compute[250269]: 2026-01-23 10:54:49.706 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:54:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:50.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:54:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:54:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:50.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:54:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3868: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:54:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:52.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:52.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:52 np0005593232 nova_compute[250269]: 2026-01-23 10:54:52.897 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:54:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3869: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:54:53 np0005593232 nova_compute[250269]: 2026-01-23 10:54:53.218 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:54:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:54.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:54:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:54.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:54:54 np0005593232 nova_compute[250269]: 2026-01-23 10:54:54.708 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:54:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3870: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:54:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:54:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:56.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:54:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:56.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:54:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3871: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:54:57 np0005593232 nova_compute[250269]: 2026-01-23 10:54:57.899 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:54:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:54:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:54:58.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:54:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:54:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:54:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:54:58.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:54:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3872: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:54:59 np0005593232 nova_compute[250269]: 2026-01-23 10:54:59.710 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:00.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:55:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:00.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:55:00 np0005593232 podman[404450]: 2026-01-23 10:55:00.476613755 +0000 UTC m=+0.127226047 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 05:55:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:55:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3873: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:55:01 np0005593232 nova_compute[250269]: 2026-01-23 10:55:01.221 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:55:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:55:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:02.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:55:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:02.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:02 np0005593232 nova_compute[250269]: 2026-01-23 10:55:02.373 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:55:02 np0005593232 nova_compute[250269]: 2026-01-23 10:55:02.903 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3874: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:55:03 np0005593232 podman[404477]: 2026-01-23 10:55:03.39325443 +0000 UTC m=+0.051221467 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:55:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:04.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:04.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:04 np0005593232 nova_compute[250269]: 2026-01-23 10:55:04.713 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3875: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:55:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:55:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:55:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:06.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:55:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:55:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:06.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:55:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3876: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:55:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:55:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:55:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:55:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:55:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:55:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:55:07 np0005593232 nova_compute[250269]: 2026-01-23 10:55:07.883 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:55:07 np0005593232 nova_compute[250269]: 2026-01-23 10:55:07.884 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 23 05:55:07 np0005593232 nova_compute[250269]: 2026-01-23 10:55:07.948 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:07 np0005593232 nova_compute[250269]: 2026-01-23 10:55:07.985 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 23 05:55:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:08.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:08.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3877: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:55:09 np0005593232 nova_compute[250269]: 2026-01-23 10:55:09.716 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:55:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:10.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:55:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:10.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:55:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3878: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:55:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:12.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:12.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3879: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Jan 23 05:55:12 np0005593232 nova_compute[250269]: 2026-01-23 10:55:12.951 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:55:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:14.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:55:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:14.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:14 np0005593232 nova_compute[250269]: 2026-01-23 10:55:14.718 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3880: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Jan 23 05:55:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:55:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:16.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:16.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3881: 321 pgs: 321 active+clean; 141 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 796 KiB/s wr, 13 op/s
Jan 23 05:55:17 np0005593232 nova_compute[250269]: 2026-01-23 10:55:17.953 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:18.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:55:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:18.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:55:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3882: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:55:19 np0005593232 nova_compute[250269]: 2026-01-23 10:55:19.394 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:55:19 np0005593232 nova_compute[250269]: 2026-01-23 10:55:19.395 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:55:19 np0005593232 nova_compute[250269]: 2026-01-23 10:55:19.722 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:55:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:20.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:55:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:20.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:55:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3883: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:55:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:55:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:22.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:55:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:22.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3884: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:55:22 np0005593232 nova_compute[250269]: 2026-01-23 10:55:22.955 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:23 np0005593232 nova_compute[250269]: 2026-01-23 10:55:23.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:55:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:24.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:24 np0005593232 nova_compute[250269]: 2026-01-23 10:55:24.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:55:24 np0005593232 nova_compute[250269]: 2026-01-23 10:55:24.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:55:24 np0005593232 nova_compute[250269]: 2026-01-23 10:55:24.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:55:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:24.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:24 np0005593232 nova_compute[250269]: 2026-01-23 10:55:24.724 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3885: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Jan 23 05:55:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:55:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:26.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:26.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3886: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:55:27 np0005593232 ovn_controller[151001]: 2026-01-23T10:55:27Z|00832|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Jan 23 05:55:27 np0005593232 nova_compute[250269]: 2026-01-23 10:55:27.957 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:55:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:28.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:55:28 np0005593232 nova_compute[250269]: 2026-01-23 10:55:28.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:55:28 np0005593232 nova_compute[250269]: 2026-01-23 10:55:28.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:55:28 np0005593232 nova_compute[250269]: 2026-01-23 10:55:28.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:55:28 np0005593232 nova_compute[250269]: 2026-01-23 10:55:28.318 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:55:28 np0005593232 nova_compute[250269]: 2026-01-23 10:55:28.319 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:55:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:55:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:28.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:55:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3887: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 9.2 KiB/s rd, 1019 KiB/s wr, 17 op/s
Jan 23 05:55:29 np0005593232 nova_compute[250269]: 2026-01-23 10:55:29.726 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:30.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:30.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:55:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3888: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 KiB/s rd, 341 B/s wr, 3 op/s
Jan 23 05:55:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:55:31.389 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=89, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=88) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:55:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:55:31.390 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:55:31 np0005593232 nova_compute[250269]: 2026-01-23 10:55:31.431 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:31 np0005593232 podman[404562]: 2026-01-23 10:55:31.516742696 +0000 UTC m=+0.155784759 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 05:55:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:32.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:32 np0005593232 nova_compute[250269]: 2026-01-23 10:55:32.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:55:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:32.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3889: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.1 KiB/s rd, 12 KiB/s wr, 5 op/s
Jan 23 05:55:32 np0005593232 nova_compute[250269]: 2026-01-23 10:55:32.961 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:34.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:34 np0005593232 podman[404642]: 2026-01-23 10:55:34.416437729 +0000 UTC m=+0.070805984 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible)
Jan 23 05:55:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:55:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:34.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:55:34 np0005593232 nova_compute[250269]: 2026-01-23 10:55:34.729 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3890: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.1 KiB/s rd, 12 KiB/s wr, 5 op/s
Jan 23 05:55:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:55:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:36.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:36.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3891: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 769 KiB/s rd, 12 KiB/s wr, 32 op/s
Jan 23 05:55:37 np0005593232 nova_compute[250269]: 2026-01-23 10:55:37.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:55:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:55:37
Jan 23 05:55:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:55:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:55:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'backups', '.mgr', 'volumes']
Jan 23 05:55:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:55:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:55:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:55:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:55:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:55:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:55:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:55:37 np0005593232 nova_compute[250269]: 2026-01-23 10:55:37.963 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:38.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:38 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:55:38.393 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '89'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:55:38 np0005593232 nova_compute[250269]: 2026-01-23 10:55:38.422 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:55:38 np0005593232 nova_compute[250269]: 2026-01-23 10:55:38.423 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:55:38 np0005593232 nova_compute[250269]: 2026-01-23 10:55:38.423 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:55:38 np0005593232 nova_compute[250269]: 2026-01-23 10:55:38.423 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:55:38 np0005593232 nova_compute[250269]: 2026-01-23 10:55:38.423 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:55:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:38.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:55:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:55:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:55:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:55:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:55:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:55:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:55:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:55:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:55:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:55:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3892: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Jan 23 05:55:39 np0005593232 nova_compute[250269]: 2026-01-23 10:55:39.732 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:55:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3139681051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:55:39 np0005593232 nova_compute[250269]: 2026-01-23 10:55:39.970 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:55:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:40.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:40 np0005593232 nova_compute[250269]: 2026-01-23 10:55:40.203 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:55:40 np0005593232 nova_compute[250269]: 2026-01-23 10:55:40.204 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4121MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:55:40 np0005593232 nova_compute[250269]: 2026-01-23 10:55:40.204 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:55:40 np0005593232 nova_compute[250269]: 2026-01-23 10:55:40.205 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:55:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:40.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:55:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3893: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 69 op/s
Jan 23 05:55:41 np0005593232 nova_compute[250269]: 2026-01-23 10:55:41.734 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:55:41 np0005593232 nova_compute[250269]: 2026-01-23 10:55:41.735 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:55:41 np0005593232 nova_compute[250269]: 2026-01-23 10:55:41.768 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:55:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:42.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 05:55:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:42.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 05:55:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:55:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1920530398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:55:42 np0005593232 nova_compute[250269]: 2026-01-23 10:55:42.526 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.757s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:55:42 np0005593232 nova_compute[250269]: 2026-01-23 10:55:42.536 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:55:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:55:42.677 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:55:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:55:42.678 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:55:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:55:42.678 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:55:42 np0005593232 nova_compute[250269]: 2026-01-23 10:55:42.752 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:55:42 np0005593232 nova_compute[250269]: 2026-01-23 10:55:42.755 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:55:42 np0005593232 nova_compute[250269]: 2026-01-23 10:55:42.755 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:55:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3894: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 69 op/s
Jan 23 05:55:42 np0005593232 nova_compute[250269]: 2026-01-23 10:55:42.966 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:44.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:44.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:55:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/430926934' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:55:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:55:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/430926934' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:55:44 np0005593232 nova_compute[250269]: 2026-01-23 10:55:44.738 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3895: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 68 op/s
Jan 23 05:55:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:55:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:55:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:46.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:55:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:46.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 05:55:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3896: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 73 op/s
Jan 23 05:55:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:55:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 05:55:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:55:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:55:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:55:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:55:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:55:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009958283896333519 of space, bias 1.0, pg target 0.2987485168900056 quantized to 32 (current 32)
Jan 23 05:55:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:55:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 05:55:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:55:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 05:55:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:55:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 05:55:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:55:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 05:55:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:55:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:55:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:55:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 05:55:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:55:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 05:55:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:55:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:55:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:55:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 05:55:48 np0005593232 nova_compute[250269]: 2026-01-23 10:55:48.021 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:48.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:48.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:55:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:55:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:55:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:55:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:55:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:55:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:55:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:55:48 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev aacf94e4-29b2-4736-9122-5a1ec87ddd2c does not exist
Jan 23 05:55:48 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a4fee267-b501-4880-a52b-acd8ac28d5eb does not exist
Jan 23 05:55:48 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 94dbe4d4-be40-4b73-a774-12f1b6c1a751 does not exist
Jan 23 05:55:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:55:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:55:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:55:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:55:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:55:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:55:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3897: 321 pgs: 321 active+clean; 182 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.4 MiB/s wr, 62 op/s
Jan 23 05:55:49 np0005593232 podman[404984]: 2026-01-23 10:55:49.580161321 +0000 UTC m=+0.026713171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:55:49 np0005593232 nova_compute[250269]: 2026-01-23 10:55:49.740 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:50.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:55:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:50.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:55:50 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:55:50 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:55:50 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:55:50 np0005593232 podman[404984]: 2026-01-23 10:55:50.488994444 +0000 UTC m=+0.935546224 container create 143ee71e9f59b74321c24f07327e35ec47ed3d714666afd509ff9b345cadf81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jepsen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 05:55:50 np0005593232 systemd[1]: Started libpod-conmon-143ee71e9f59b74321c24f07327e35ec47ed3d714666afd509ff9b345cadf81b.scope.
Jan 23 05:55:50 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:55:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3898: 321 pgs: 321 active+clean; 182 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 1.4 MiB/s wr, 20 op/s
Jan 23 05:55:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:55:51 np0005593232 podman[404984]: 2026-01-23 10:55:51.063283337 +0000 UTC m=+1.509835157 container init 143ee71e9f59b74321c24f07327e35ec47ed3d714666afd509ff9b345cadf81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:55:51 np0005593232 podman[404984]: 2026-01-23 10:55:51.070886983 +0000 UTC m=+1.517438753 container start 143ee71e9f59b74321c24f07327e35ec47ed3d714666afd509ff9b345cadf81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jepsen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 05:55:51 np0005593232 condescending_jepsen[405001]: 167 167
Jan 23 05:55:51 np0005593232 systemd[1]: libpod-143ee71e9f59b74321c24f07327e35ec47ed3d714666afd509ff9b345cadf81b.scope: Deactivated successfully.
Jan 23 05:55:51 np0005593232 podman[404984]: 2026-01-23 10:55:51.098430216 +0000 UTC m=+1.544981986 container attach 143ee71e9f59b74321c24f07327e35ec47ed3d714666afd509ff9b345cadf81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:55:51 np0005593232 podman[404984]: 2026-01-23 10:55:51.100260978 +0000 UTC m=+1.546812748 container died 143ee71e9f59b74321c24f07327e35ec47ed3d714666afd509ff9b345cadf81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jepsen, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 05:55:51 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4900a40d0275f61f77230eeef58432afa7f6a312c7d958767c69cca6b491845c-merged.mount: Deactivated successfully.
Jan 23 05:55:51 np0005593232 podman[404984]: 2026-01-23 10:55:51.17491327 +0000 UTC m=+1.621465050 container remove 143ee71e9f59b74321c24f07327e35ec47ed3d714666afd509ff9b345cadf81b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_jepsen, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 05:55:51 np0005593232 systemd[1]: libpod-conmon-143ee71e9f59b74321c24f07327e35ec47ed3d714666afd509ff9b345cadf81b.scope: Deactivated successfully.
Jan 23 05:55:51 np0005593232 podman[405027]: 2026-01-23 10:55:51.362938094 +0000 UTC m=+0.067592002 container create b88f50abf1de4600ea7912001cead56aadb7f68ee438d1cd8ac4244f4cc3a3c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_nobel, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 05:55:51 np0005593232 systemd[1]: Started libpod-conmon-b88f50abf1de4600ea7912001cead56aadb7f68ee438d1cd8ac4244f4cc3a3c7.scope.
Jan 23 05:55:51 np0005593232 podman[405027]: 2026-01-23 10:55:51.316979018 +0000 UTC m=+0.021632916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:55:51 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:55:51 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1129e0d3e855f38db3e99e6ec521bac096094bef190d24f983e87b804d311d87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:55:51 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1129e0d3e855f38db3e99e6ec521bac096094bef190d24f983e87b804d311d87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:55:51 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1129e0d3e855f38db3e99e6ec521bac096094bef190d24f983e87b804d311d87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:55:51 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1129e0d3e855f38db3e99e6ec521bac096094bef190d24f983e87b804d311d87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:55:51 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1129e0d3e855f38db3e99e6ec521bac096094bef190d24f983e87b804d311d87/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:55:51 np0005593232 podman[405027]: 2026-01-23 10:55:51.463772731 +0000 UTC m=+0.168426649 container init b88f50abf1de4600ea7912001cead56aadb7f68ee438d1cd8ac4244f4cc3a3c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 05:55:51 np0005593232 podman[405027]: 2026-01-23 10:55:51.471395017 +0000 UTC m=+0.176048895 container start b88f50abf1de4600ea7912001cead56aadb7f68ee438d1cd8ac4244f4cc3a3c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:55:51 np0005593232 podman[405027]: 2026-01-23 10:55:51.47501939 +0000 UTC m=+0.179673308 container attach b88f50abf1de4600ea7912001cead56aadb7f68ee438d1cd8ac4244f4cc3a3c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 05:55:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:52.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:52 np0005593232 vibrant_nobel[405044]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:55:52 np0005593232 vibrant_nobel[405044]: --> relative data size: 1.0
Jan 23 05:55:52 np0005593232 vibrant_nobel[405044]: --> All data devices are unavailable
Jan 23 05:55:52 np0005593232 systemd[1]: libpod-b88f50abf1de4600ea7912001cead56aadb7f68ee438d1cd8ac4244f4cc3a3c7.scope: Deactivated successfully.
Jan 23 05:55:52 np0005593232 podman[405027]: 2026-01-23 10:55:52.366667416 +0000 UTC m=+1.071321304 container died b88f50abf1de4600ea7912001cead56aadb7f68ee438d1cd8ac4244f4cc3a3c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_nobel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 05:55:52 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1129e0d3e855f38db3e99e6ec521bac096094bef190d24f983e87b804d311d87-merged.mount: Deactivated successfully.
Jan 23 05:55:52 np0005593232 podman[405027]: 2026-01-23 10:55:52.433490795 +0000 UTC m=+1.138144653 container remove b88f50abf1de4600ea7912001cead56aadb7f68ee438d1cd8ac4244f4cc3a3c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 05:55:52 np0005593232 systemd[1]: libpod-conmon-b88f50abf1de4600ea7912001cead56aadb7f68ee438d1cd8ac4244f4cc3a3c7.scope: Deactivated successfully.
Jan 23 05:55:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:52.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3899: 321 pgs: 321 active+clean; 196 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 382 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 23 05:55:53 np0005593232 nova_compute[250269]: 2026-01-23 10:55:53.024 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:53 np0005593232 podman[405263]: 2026-01-23 10:55:53.096591984 +0000 UTC m=+0.049902440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:55:53 np0005593232 podman[405263]: 2026-01-23 10:55:53.476139163 +0000 UTC m=+0.429449509 container create 99707237539c12b9669d784ad362a1dc03b2b123095d42986fb66bcc4e369ca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 23 05:55:54 np0005593232 systemd[1]: Started libpod-conmon-99707237539c12b9669d784ad362a1dc03b2b123095d42986fb66bcc4e369ca4.scope.
Jan 23 05:55:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:55:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:54.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:55:54 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:55:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:55:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:54.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:55:54 np0005593232 nova_compute[250269]: 2026-01-23 10:55:54.742 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:54 np0005593232 podman[405263]: 2026-01-23 10:55:54.748515849 +0000 UTC m=+1.701826225 container init 99707237539c12b9669d784ad362a1dc03b2b123095d42986fb66bcc4e369ca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_montalcini, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 23 05:55:54 np0005593232 podman[405263]: 2026-01-23 10:55:54.762843557 +0000 UTC m=+1.716153933 container start 99707237539c12b9669d784ad362a1dc03b2b123095d42986fb66bcc4e369ca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:55:54 np0005593232 agitated_montalcini[405280]: 167 167
Jan 23 05:55:54 np0005593232 systemd[1]: libpod-99707237539c12b9669d784ad362a1dc03b2b123095d42986fb66bcc4e369ca4.scope: Deactivated successfully.
Jan 23 05:55:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3900: 321 pgs: 321 active+clean; 196 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 382 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 23 05:55:55 np0005593232 podman[405263]: 2026-01-23 10:55:55.12250343 +0000 UTC m=+2.075813836 container attach 99707237539c12b9669d784ad362a1dc03b2b123095d42986fb66bcc4e369ca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_montalcini, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 23 05:55:55 np0005593232 podman[405263]: 2026-01-23 10:55:55.123076096 +0000 UTC m=+2.076386482 container died 99707237539c12b9669d784ad362a1dc03b2b123095d42986fb66bcc4e369ca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_montalcini, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:55:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:55:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:56.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:55:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 05:55:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:56.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 05:55:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:55:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3901: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 394 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 23 05:55:57 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c1ed560b44887a066429a3ed621651edf26b8745bae59226a2942b82e8afb466-merged.mount: Deactivated successfully.
Jan 23 05:55:58 np0005593232 nova_compute[250269]: 2026-01-23 10:55:58.027 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:55:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:55:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:55:58.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:55:58 np0005593232 podman[405263]: 2026-01-23 10:55:58.424424967 +0000 UTC m=+5.377735323 container remove 99707237539c12b9669d784ad362a1dc03b2b123095d42986fb66bcc4e369ca4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:55:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:55:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:55:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:55:58.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:55:58 np0005593232 systemd[1]: libpod-conmon-99707237539c12b9669d784ad362a1dc03b2b123095d42986fb66bcc4e369ca4.scope: Deactivated successfully.
Jan 23 05:55:58 np0005593232 podman[405308]: 2026-01-23 10:55:58.623554307 +0000 UTC m=+0.031867586 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:55:58 np0005593232 podman[405308]: 2026-01-23 10:55:58.884798554 +0000 UTC m=+0.293111803 container create 4949a8363bf2b6aa57b7b9b30dd20cf54fc55da3ab660248ede5fbab08d43323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_faraday, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 23 05:55:58 np0005593232 systemd[1]: Started libpod-conmon-4949a8363bf2b6aa57b7b9b30dd20cf54fc55da3ab660248ede5fbab08d43323.scope.
Jan 23 05:55:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3902: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 375 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 23 05:55:58 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:55:59 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/231974dbf0434d5651059763992eb713abba947baf476c54ae75d22fc14353b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:55:59 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/231974dbf0434d5651059763992eb713abba947baf476c54ae75d22fc14353b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:55:59 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/231974dbf0434d5651059763992eb713abba947baf476c54ae75d22fc14353b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:55:59 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/231974dbf0434d5651059763992eb713abba947baf476c54ae75d22fc14353b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:55:59 np0005593232 podman[405308]: 2026-01-23 10:55:59.297169996 +0000 UTC m=+0.705483335 container init 4949a8363bf2b6aa57b7b9b30dd20cf54fc55da3ab660248ede5fbab08d43323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_faraday, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:55:59 np0005593232 podman[405308]: 2026-01-23 10:55:59.309581708 +0000 UTC m=+0.717895007 container start 4949a8363bf2b6aa57b7b9b30dd20cf54fc55da3ab660248ede5fbab08d43323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_faraday, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 05:55:59 np0005593232 nova_compute[250269]: 2026-01-23 10:55:59.747 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:55:59 np0005593232 podman[405308]: 2026-01-23 10:55:59.899498927 +0000 UTC m=+1.307849467 container attach 4949a8363bf2b6aa57b7b9b30dd20cf54fc55da3ab660248ede5fbab08d43323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]: {
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:    "0": [
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:        {
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:            "devices": [
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:                "/dev/loop3"
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:            ],
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:            "lv_name": "ceph_lv0",
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:            "lv_size": "7511998464",
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:            "name": "ceph_lv0",
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:            "tags": {
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:                "ceph.cluster_name": "ceph",
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:                "ceph.crush_device_class": "",
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:                "ceph.encrypted": "0",
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:                "ceph.osd_id": "0",
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:                "ceph.type": "block",
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:                "ceph.vdo": "0"
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:            },
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:            "type": "block",
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:            "vg_name": "ceph_vg0"
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:        }
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]:    ]
Jan 23 05:56:00 np0005593232 blissful_faraday[405325]: }
Jan 23 05:56:00 np0005593232 systemd[1]: libpod-4949a8363bf2b6aa57b7b9b30dd20cf54fc55da3ab660248ede5fbab08d43323.scope: Deactivated successfully.
Jan 23 05:56:00 np0005593232 podman[405308]: 2026-01-23 10:56:00.151270954 +0000 UTC m=+1.559584243 container died 4949a8363bf2b6aa57b7b9b30dd20cf54fc55da3ab660248ede5fbab08d43323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 05:56:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:56:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:00.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:56:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:00.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3903: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 364 KiB/s rd, 722 KiB/s wr, 46 op/s
Jan 23 05:56:01 np0005593232 systemd[1]: var-lib-containers-storage-overlay-231974dbf0434d5651059763992eb713abba947baf476c54ae75d22fc14353b2-merged.mount: Deactivated successfully.
Jan 23 05:56:01 np0005593232 podman[405308]: 2026-01-23 10:56:01.427459648 +0000 UTC m=+2.835772917 container remove 4949a8363bf2b6aa57b7b9b30dd20cf54fc55da3ab660248ede5fbab08d43323 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_faraday, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:56:01 np0005593232 systemd[1]: libpod-conmon-4949a8363bf2b6aa57b7b9b30dd20cf54fc55da3ab660248ede5fbab08d43323.scope: Deactivated successfully.
Jan 23 05:56:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:56:01 np0005593232 podman[405358]: 2026-01-23 10:56:01.735781163 +0000 UTC m=+0.156189771 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 05:56:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:56:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:02.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:56:02 np0005593232 podman[405514]: 2026-01-23 10:56:02.408618768 +0000 UTC m=+0.060773808 container create e8cd02a1be0710035da281021d6acf6b1420bb56c2f8696f9a5517fbed79f4d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 05:56:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:02.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:02 np0005593232 podman[405514]: 2026-01-23 10:56:02.386763787 +0000 UTC m=+0.038918807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:56:02 np0005593232 systemd[1]: Started libpod-conmon-e8cd02a1be0710035da281021d6acf6b1420bb56c2f8696f9a5517fbed79f4d7.scope.
Jan 23 05:56:02 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:56:02 np0005593232 podman[405514]: 2026-01-23 10:56:02.948487324 +0000 UTC m=+0.600642414 container init e8cd02a1be0710035da281021d6acf6b1420bb56c2f8696f9a5517fbed79f4d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_swanson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:56:02 np0005593232 podman[405514]: 2026-01-23 10:56:02.961703359 +0000 UTC m=+0.613858389 container start e8cd02a1be0710035da281021d6acf6b1420bb56c2f8696f9a5517fbed79f4d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:56:02 np0005593232 podman[405514]: 2026-01-23 10:56:02.966478805 +0000 UTC m=+0.618633905 container attach e8cd02a1be0710035da281021d6acf6b1420bb56c2f8696f9a5517fbed79f4d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_swanson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 05:56:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3904: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 364 KiB/s rd, 734 KiB/s wr, 47 op/s
Jan 23 05:56:02 np0005593232 keen_swanson[405530]: 167 167
Jan 23 05:56:02 np0005593232 systemd[1]: libpod-e8cd02a1be0710035da281021d6acf6b1420bb56c2f8696f9a5517fbed79f4d7.scope: Deactivated successfully.
Jan 23 05:56:02 np0005593232 podman[405514]: 2026-01-23 10:56:02.972507327 +0000 UTC m=+0.624662367 container died e8cd02a1be0710035da281021d6acf6b1420bb56c2f8696f9a5517fbed79f4d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_swanson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:56:03 np0005593232 systemd[1]: var-lib-containers-storage-overlay-8ffd161293d12cc23d50ff4b2a702566e64313370fa64dc311d3331139d62889-merged.mount: Deactivated successfully.
Jan 23 05:56:03 np0005593232 nova_compute[250269]: 2026-01-23 10:56:03.031 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:56:03 np0005593232 podman[405514]: 2026-01-23 10:56:03.121598134 +0000 UTC m=+0.773753144 container remove e8cd02a1be0710035da281021d6acf6b1420bb56c2f8696f9a5517fbed79f4d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_swanson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:56:03 np0005593232 systemd[1]: libpod-conmon-e8cd02a1be0710035da281021d6acf6b1420bb56c2f8696f9a5517fbed79f4d7.scope: Deactivated successfully.
Jan 23 05:56:03 np0005593232 podman[405557]: 2026-01-23 10:56:03.288062206 +0000 UTC m=+0.033877224 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:56:03 np0005593232 podman[405557]: 2026-01-23 10:56:03.474611679 +0000 UTC m=+0.220426717 container create c6bdcc0708df0dcb4f030c93d3c3a8b9ea249b705f24ee71e5b3347ae14ded65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_payne, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:56:03 np0005593232 systemd[1]: Started libpod-conmon-c6bdcc0708df0dcb4f030c93d3c3a8b9ea249b705f24ee71e5b3347ae14ded65.scope.
Jan 23 05:56:03 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:56:03 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e026404eb39d0703e25dc23fee4108f4709821c645c9403137d6ff2525d57a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:56:03 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e026404eb39d0703e25dc23fee4108f4709821c645c9403137d6ff2525d57a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:56:03 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e026404eb39d0703e25dc23fee4108f4709821c645c9403137d6ff2525d57a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:56:03 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e026404eb39d0703e25dc23fee4108f4709821c645c9403137d6ff2525d57a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:56:03 np0005593232 podman[405557]: 2026-01-23 10:56:03.922687745 +0000 UTC m=+0.668502843 container init c6bdcc0708df0dcb4f030c93d3c3a8b9ea249b705f24ee71e5b3347ae14ded65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_payne, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:56:03 np0005593232 podman[405557]: 2026-01-23 10:56:03.938744542 +0000 UTC m=+0.684559580 container start c6bdcc0708df0dcb4f030c93d3c3a8b9ea249b705f24ee71e5b3347ae14ded65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:56:04 np0005593232 podman[405557]: 2026-01-23 10:56:04.124340988 +0000 UTC m=+0.870156026 container attach c6bdcc0708df0dcb4f030c93d3c3a8b9ea249b705f24ee71e5b3347ae14ded65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_payne, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 05:56:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:56:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:04.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:56:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:56:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:04.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:56:04 np0005593232 nova_compute[250269]: 2026-01-23 10:56:04.750 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:56:04 np0005593232 goofy_payne[405573]: {
Jan 23 05:56:04 np0005593232 goofy_payne[405573]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:56:04 np0005593232 goofy_payne[405573]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:56:04 np0005593232 goofy_payne[405573]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:56:04 np0005593232 goofy_payne[405573]:        "osd_id": 0,
Jan 23 05:56:04 np0005593232 goofy_payne[405573]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:56:04 np0005593232 goofy_payne[405573]:        "type": "bluestore"
Jan 23 05:56:04 np0005593232 goofy_payne[405573]:    }
Jan 23 05:56:04 np0005593232 goofy_payne[405573]: }
Jan 23 05:56:04 np0005593232 systemd[1]: libpod-c6bdcc0708df0dcb4f030c93d3c3a8b9ea249b705f24ee71e5b3347ae14ded65.scope: Deactivated successfully.
Jan 23 05:56:04 np0005593232 podman[405557]: 2026-01-23 10:56:04.922180796 +0000 UTC m=+1.667995834 container died c6bdcc0708df0dcb4f030c93d3c3a8b9ea249b705f24ee71e5b3347ae14ded65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_payne, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:56:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3905: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 23 KiB/s wr, 5 op/s
Jan 23 05:56:06 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5e026404eb39d0703e25dc23fee4108f4709821c645c9403137d6ff2525d57a4-merged.mount: Deactivated successfully.
Jan 23 05:56:06 np0005593232 podman[405596]: 2026-01-23 10:56:06.128497806 +0000 UTC m=+1.151405431 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Jan 23 05:56:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:06.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:06 np0005593232 podman[405557]: 2026-01-23 10:56:06.332671389 +0000 UTC m=+3.078486377 container remove c6bdcc0708df0dcb4f030c93d3c3a8b9ea249b705f24ee71e5b3347ae14ded65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 05:56:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:56:06 np0005593232 systemd[1]: libpod-conmon-c6bdcc0708df0dcb4f030c93d3c3a8b9ea249b705f24ee71e5b3347ae14ded65.scope: Deactivated successfully.
Jan 23 05:56:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:06.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:56:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:56:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:56:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3906: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 23 KiB/s wr, 5 op/s
Jan 23 05:56:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:56:07 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev ada84126-99a6-4a73-b1f8-5e1f401db9eb does not exist
Jan 23 05:56:07 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d2b0b2e9-b4d5-4b32-8623-46808a4e6ca8 does not exist
Jan 23 05:56:07 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 84afd165-46fa-4257-87aa-aba6e82074d4 does not exist
Jan 23 05:56:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:56:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:56:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:56:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:56:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:56:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:56:08 np0005593232 nova_compute[250269]: 2026-01-23 10:56:08.053 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:56:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:56:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:08.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:56:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:08.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3907: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 23 05:56:09 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:56:09 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:56:09 np0005593232 nova_compute[250269]: 2026-01-23 10:56:09.754 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:56:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:56:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:10.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:56:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:10.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3908: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 23 05:56:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:56:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:56:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:12.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:56:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:56:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:12.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:56:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3909: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 23 05:56:13 np0005593232 nova_compute[250269]: 2026-01-23 10:56:13.056 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:56:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:56:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:14.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:56:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:56:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:14.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:56:14 np0005593232 nova_compute[250269]: 2026-01-23 10:56:14.757 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:56:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3910: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 KiB/s wr, 0 op/s
Jan 23 05:56:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:56:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:16.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:56:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:16.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:56:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3911: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 3.5 KiB/s wr, 0 op/s
Jan 23 05:56:18 np0005593232 nova_compute[250269]: 2026-01-23 10:56:18.058 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:56:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:18.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:18.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3912: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 KiB/s rd, 19 KiB/s wr, 3 op/s
Jan 23 05:56:19 np0005593232 nova_compute[250269]: 2026-01-23 10:56:19.760 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:56:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:20.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:56:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:20.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:56:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3913: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.7 KiB/s rd, 17 KiB/s wr, 3 op/s
Jan 23 05:56:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:56:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:22.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:56:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:22.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:56:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3914: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 KiB/s rd, 17 KiB/s wr, 3 op/s
Jan 23 05:56:23 np0005593232 nova_compute[250269]: 2026-01-23 10:56:23.061 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:56:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:56:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:24.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:56:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:24.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:24 np0005593232 nova_compute[250269]: 2026-01-23 10:56:24.763 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:56:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3915: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.8 KiB/s rd, 17 KiB/s wr, 3 op/s
Jan 23 05:56:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e421 do_prune osdmap full prune enabled
Jan 23 05:56:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e422 e422: 3 total, 3 up, 3 in
Jan 23 05:56:25 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e422: 3 total, 3 up, 3 in
Jan 23 05:56:25 np0005593232 nova_compute[250269]: 2026-01-23 10:56:25.755 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:56:25 np0005593232 nova_compute[250269]: 2026-01-23 10:56:25.756 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:56:25 np0005593232 nova_compute[250269]: 2026-01-23 10:56:25.756 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:56:25 np0005593232 nova_compute[250269]: 2026-01-23 10:56:25.757 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:56:25 np0005593232 nova_compute[250269]: 2026-01-23 10:56:25.757 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:56:25 np0005593232 nova_compute[250269]: 2026-01-23 10:56:25.758 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:56:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:26.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:26.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:56:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3917: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 19 KiB/s wr, 16 op/s
Jan 23 05:56:28 np0005593232 nova_compute[250269]: 2026-01-23 10:56:28.063 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:56:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:28.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:28.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3918: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 409 B/s wr, 20 op/s
Jan 23 05:56:29 np0005593232 nova_compute[250269]: 2026-01-23 10:56:29.294 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:56:29 np0005593232 nova_compute[250269]: 2026-01-23 10:56:29.295 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:56:29 np0005593232 nova_compute[250269]: 2026-01-23 10:56:29.296 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:56:29 np0005593232 nova_compute[250269]: 2026-01-23 10:56:29.391 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:56:29 np0005593232 nova_compute[250269]: 2026-01-23 10:56:29.391 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:56:29 np0005593232 nova_compute[250269]: 2026-01-23 10:56:29.766 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:56:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:30.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:56:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:30.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:56:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3919: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 409 B/s wr, 20 op/s
Jan 23 05:56:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:56:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:32.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:32 np0005593232 podman[405764]: 2026-01-23 10:56:32.387770904 +0000 UTC m=+0.191781272 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:56:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:32.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3920: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 511 B/s wr, 104 op/s
Jan 23 05:56:33 np0005593232 nova_compute[250269]: 2026-01-23 10:56:33.065 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:56:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:34.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:34 np0005593232 nova_compute[250269]: 2026-01-23 10:56:34.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:56:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:34.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:34 np0005593232 nova_compute[250269]: 2026-01-23 10:56:34.770 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:56:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3921: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 511 B/s wr, 104 op/s
Jan 23 05:56:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:36.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:36 np0005593232 podman[405819]: 2026-01-23 10:56:36.420721452 +0000 UTC m=+0.067731867 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 23 05:56:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:56:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:36.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:56:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:56:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3922: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 443 B/s wr, 90 op/s
Jan 23 05:56:37 np0005593232 nova_compute[250269]: 2026-01-23 10:56:37.233 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:56:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:56:37.244 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=90, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=89) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:56:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:56:37.245 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:56:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:56:37
Jan 23 05:56:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:56:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:56:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'backups', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'default.rgw.log', 'volumes', '.mgr']
Jan 23 05:56:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:56:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:56:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:56:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:56:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:56:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:56:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:56:38 np0005593232 nova_compute[250269]: 2026-01-23 10:56:38.119 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:56:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:38.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:38 np0005593232 nova_compute[250269]: 2026-01-23 10:56:38.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:56:38 np0005593232 nova_compute[250269]: 2026-01-23 10:56:38.462 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:56:38 np0005593232 nova_compute[250269]: 2026-01-23 10:56:38.463 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:56:38 np0005593232 nova_compute[250269]: 2026-01-23 10:56:38.463 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:56:38 np0005593232 nova_compute[250269]: 2026-01-23 10:56:38.464 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:56:38 np0005593232 nova_compute[250269]: 2026-01-23 10:56:38.464 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:56:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:56:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:38.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:56:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:56:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:56:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:56:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:56:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:56:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:56:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:56:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:56:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:56:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:56:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:56:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/955693843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:56:38 np0005593232 nova_compute[250269]: 2026-01-23 10:56:38.967 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:56:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3923: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 76 op/s
Jan 23 05:56:39 np0005593232 nova_compute[250269]: 2026-01-23 10:56:39.138 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:56:39 np0005593232 nova_compute[250269]: 2026-01-23 10:56:39.139 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4110MB free_disk=20.942668914794922GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:56:39 np0005593232 nova_compute[250269]: 2026-01-23 10:56:39.140 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:56:39 np0005593232 nova_compute[250269]: 2026-01-23 10:56:39.140 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:56:39 np0005593232 nova_compute[250269]: 2026-01-23 10:56:39.199 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:56:39 np0005593232 nova_compute[250269]: 2026-01-23 10:56:39.199 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:56:39 np0005593232 nova_compute[250269]: 2026-01-23 10:56:39.214 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:56:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:56:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/711046315' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:56:39 np0005593232 nova_compute[250269]: 2026-01-23 10:56:39.685 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:56:39 np0005593232 nova_compute[250269]: 2026-01-23 10:56:39.691 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:56:39 np0005593232 nova_compute[250269]: 2026-01-23 10:56:39.712 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:56:39 np0005593232 nova_compute[250269]: 2026-01-23 10:56:39.714 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:56:39 np0005593232 nova_compute[250269]: 2026-01-23 10:56:39.714 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:56:39 np0005593232 nova_compute[250269]: 2026-01-23 10:56:39.772 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:56:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:40.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:56:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:40.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:56:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3924: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 69 op/s
Jan 23 05:56:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e422 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:56:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:56:42.248 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '90'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:56:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:42.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:42.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:56:42.678 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:56:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:56:42.678 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:56:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:56:42.679 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:56:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3925: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 70 op/s
Jan 23 05:56:43 np0005593232 nova_compute[250269]: 2026-01-23 10:56:43.121 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:56:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e422 do_prune osdmap full prune enabled
Jan 23 05:56:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:44.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:44.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:44 np0005593232 nova_compute[250269]: 2026-01-23 10:56:44.814 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:56:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3926: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 23 05:56:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e423 e423: 3 total, 3 up, 3 in
Jan 23 05:56:45 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e423: 3 total, 3 up, 3 in
Jan 23 05:56:45 np0005593232 nova_compute[250269]: 2026-01-23 10:56:45.709 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:56:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:46.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:56:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:46.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:56:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3928: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 409 B/s wr, 12 op/s
Jan 23 05:56:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:56:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:56:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:56:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:56:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:56:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021730469477014703 of space, bias 1.0, pg target 0.6519140843104411 quantized to 32 (current 32)
Jan 23 05:56:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:56:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 05:56:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:56:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 05:56:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:56:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 05:56:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:56:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 05:56:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:56:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:56:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:56:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 05:56:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:56:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 05:56:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:56:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:56:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:56:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 05:56:48 np0005593232 nova_compute[250269]: 2026-01-23 10:56:48.131 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:56:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:48.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:48.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3929: 321 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 317 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 409 B/s wr, 14 op/s
Jan 23 05:56:49 np0005593232 nova_compute[250269]: 2026-01-23 10:56:49.817 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:56:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:50.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:50.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3930: 321 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 317 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s rd, 409 B/s wr, 14 op/s
Jan 23 05:56:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:56:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:56:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:52.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:56:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:52.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3931: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 48 KiB/s rd, 409 B/s wr, 21 op/s
Jan 23 05:56:53 np0005593232 nova_compute[250269]: 2026-01-23 10:56:53.172 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:56:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:54.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:54.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:54 np0005593232 nova_compute[250269]: 2026-01-23 10:56:54.819 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:56:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3932: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 48 KiB/s rd, 409 B/s wr, 21 op/s
Jan 23 05:56:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:56.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:56.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3933: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 845 KiB/s rd, 357 B/s wr, 47 op/s
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e423 do_prune osdmap full prune enabled
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #198. Immutable memtables: 0.
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:56:57.599483) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 198
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165817599565, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 2121, "num_deletes": 252, "total_data_size": 3925182, "memory_usage": 3997968, "flush_reason": "Manual Compaction"}
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #199: started
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165817682094, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 199, "file_size": 3852097, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84453, "largest_seqno": 86573, "table_properties": {"data_size": 3842288, "index_size": 6238, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19971, "raw_average_key_size": 20, "raw_value_size": 3822848, "raw_average_value_size": 3928, "num_data_blocks": 271, "num_entries": 973, "num_filter_entries": 973, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769165597, "oldest_key_time": 1769165597, "file_creation_time": 1769165817, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 199, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 82691 microseconds, and 9151 cpu microseconds.
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:56:57.682170) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #199: 3852097 bytes OK
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:56:57.682196) [db/memtable_list.cc:519] [default] Level-0 commit table #199 started
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:56:57.696299) [db/memtable_list.cc:722] [default] Level-0 commit table #199: memtable #1 done
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:56:57.696402) EVENT_LOG_v1 {"time_micros": 1769165817696381, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:56:57.697127) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 3916431, prev total WAL file size 3924058, number of live WAL files 2.
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000195.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:56:57.699247) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [199(3761KB)], [197(12MB)]
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165817699444, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [199], "files_L6": [197], "score": -1, "input_data_size": 16583663, "oldest_snapshot_seqno": -1}
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 e424: 3 total, 3 up, 3 in
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e424: 3 total, 3 up, 3 in
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #200: 11253 keys, 14633420 bytes, temperature: kUnknown
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165817878147, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 200, "file_size": 14633420, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14560825, "index_size": 43362, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28165, "raw_key_size": 296498, "raw_average_key_size": 26, "raw_value_size": 14364200, "raw_average_value_size": 1276, "num_data_blocks": 1651, "num_entries": 11253, "num_filter_entries": 11253, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769165817, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:56:57.878407) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 14633420 bytes
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:56:57.880170) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 92.8 rd, 81.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 12.1 +0.0 blob) out(14.0 +0.0 blob), read-write-amplify(8.1) write-amplify(3.8) OK, records in: 11778, records dropped: 525 output_compression: NoCompression
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:56:57.880190) EVENT_LOG_v1 {"time_micros": 1769165817880181, "job": 124, "event": "compaction_finished", "compaction_time_micros": 178766, "compaction_time_cpu_micros": 50636, "output_level": 6, "num_output_files": 1, "total_output_size": 14633420, "num_input_records": 11778, "num_output_records": 11253, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000199.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165817881339, "job": 124, "event": "table_file_deletion", "file_number": 199}
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165817884928, "job": 124, "event": "table_file_deletion", "file_number": 197}
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:56:57.699014) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:56:57.885131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:56:57.885136) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:56:57.885138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:56:57.885140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:56:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:56:57.885143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:56:58 np0005593232 nova_compute[250269]: 2026-01-23 10:56:58.215 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:56:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:56:58.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:56:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:56:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:56:58.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:56:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3935: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 102 B/s wr, 86 op/s
Jan 23 05:56:59 np0005593232 nova_compute[250269]: 2026-01-23 10:56:59.821 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:57:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:00.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:00.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3936: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 102 B/s wr, 86 op/s
Jan 23 05:57:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:57:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:02.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:57:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:57:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:57:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:02.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:57:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3937: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 77 op/s
Jan 23 05:57:03 np0005593232 nova_compute[250269]: 2026-01-23 10:57:03.253 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:57:03 np0005593232 podman[405945]: 2026-01-23 10:57:03.449784567 +0000 UTC m=+0.107551988 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2)
Jan 23 05:57:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:57:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:04.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:57:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:04.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:04 np0005593232 nova_compute[250269]: 2026-01-23 10:57:04.823 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:57:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3938: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 77 op/s
Jan 23 05:57:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:06.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:06.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3939: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.6 MiB/s rd, 2.4 KiB/s wr, 72 op/s
Jan 23 05:57:07 np0005593232 podman[405975]: 2026-01-23 10:57:07.404667225 +0000 UTC m=+0.065193924 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 05:57:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:57:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:57:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:57:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:57:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:57:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:57:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:57:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:08.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:08 np0005593232 nova_compute[250269]: 2026-01-23 10:57:08.308 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:57:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:57:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:08.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:57:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 05:57:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3940: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 74 op/s
Jan 23 05:57:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 23 05:57:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 05:57:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:57:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 05:57:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 23 05:57:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 05:57:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:57:09 np0005593232 nova_compute[250269]: 2026-01-23 10:57:09.825 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:57:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:57:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:57:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:57:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:57:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:10.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:57:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 05:57:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:57:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 05:57:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:57:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:10.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:57:10 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 79a42fdc-6e9b-4c2f-bd17-4ecf77df4efe does not exist
Jan 23 05:57:10 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b60a6c8f-1906-499d-b65c-6ef753dba787 does not exist
Jan 23 05:57:10 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 5dfbba4b-f3f1-406b-ba1b-16be6907d08a does not exist
Jan 23 05:57:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:57:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:57:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:57:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:57:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:57:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:57:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3941: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 529 KiB/s rd, 12 KiB/s wr, 42 op/s
Jan 23 05:57:11 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:57:11 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:57:11 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:57:11 np0005593232 podman[406269]: 2026-01-23 10:57:11.487855389 +0000 UTC m=+0.043047684 container create 397d77d3cc36a3d324f07c919dd434a1a0399e82203a1d397db725586f461966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shaw, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 05:57:11 np0005593232 systemd[1]: Started libpod-conmon-397d77d3cc36a3d324f07c919dd434a1a0399e82203a1d397db725586f461966.scope.
Jan 23 05:57:11 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:57:11 np0005593232 podman[406269]: 2026-01-23 10:57:11.466266336 +0000 UTC m=+0.021458671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:57:11 np0005593232 podman[406269]: 2026-01-23 10:57:11.572706851 +0000 UTC m=+0.127899156 container init 397d77d3cc36a3d324f07c919dd434a1a0399e82203a1d397db725586f461966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shaw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:57:11 np0005593232 podman[406269]: 2026-01-23 10:57:11.582961443 +0000 UTC m=+0.138153728 container start 397d77d3cc36a3d324f07c919dd434a1a0399e82203a1d397db725586f461966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shaw, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:57:11 np0005593232 podman[406269]: 2026-01-23 10:57:11.586148133 +0000 UTC m=+0.141340428 container attach 397d77d3cc36a3d324f07c919dd434a1a0399e82203a1d397db725586f461966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shaw, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:57:11 np0005593232 nifty_shaw[406286]: 167 167
Jan 23 05:57:11 np0005593232 systemd[1]: libpod-397d77d3cc36a3d324f07c919dd434a1a0399e82203a1d397db725586f461966.scope: Deactivated successfully.
Jan 23 05:57:11 np0005593232 podman[406269]: 2026-01-23 10:57:11.589726975 +0000 UTC m=+0.144919260 container died 397d77d3cc36a3d324f07c919dd434a1a0399e82203a1d397db725586f461966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shaw, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:57:11 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f226515f11488242300722985159c0f0ee12b586a44f1213485eaade3b566666-merged.mount: Deactivated successfully.
Jan 23 05:57:11 np0005593232 podman[406269]: 2026-01-23 10:57:11.640978612 +0000 UTC m=+0.196170897 container remove 397d77d3cc36a3d324f07c919dd434a1a0399e82203a1d397db725586f461966 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_shaw, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:57:11 np0005593232 systemd[1]: libpod-conmon-397d77d3cc36a3d324f07c919dd434a1a0399e82203a1d397db725586f461966.scope: Deactivated successfully.
Jan 23 05:57:11 np0005593232 podman[406308]: 2026-01-23 10:57:11.814885775 +0000 UTC m=+0.044714202 container create 2c63e8db80a9cd4c56f7541d804bfb982e8d7de928f902628792ca514055f8d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:57:11 np0005593232 systemd[1]: Started libpod-conmon-2c63e8db80a9cd4c56f7541d804bfb982e8d7de928f902628792ca514055f8d9.scope.
Jan 23 05:57:11 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:57:11 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8ea96e458227ecc13843c613a83d27183dbb2c80576e5f37566e218c08b387/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:57:11 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8ea96e458227ecc13843c613a83d27183dbb2c80576e5f37566e218c08b387/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:57:11 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8ea96e458227ecc13843c613a83d27183dbb2c80576e5f37566e218c08b387/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:57:11 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8ea96e458227ecc13843c613a83d27183dbb2c80576e5f37566e218c08b387/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:57:11 np0005593232 podman[406308]: 2026-01-23 10:57:11.794785744 +0000 UTC m=+0.024614161 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:57:11 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc8ea96e458227ecc13843c613a83d27183dbb2c80576e5f37566e218c08b387/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:57:11 np0005593232 podman[406308]: 2026-01-23 10:57:11.901381694 +0000 UTC m=+0.131210151 container init 2c63e8db80a9cd4c56f7541d804bfb982e8d7de928f902628792ca514055f8d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 05:57:11 np0005593232 podman[406308]: 2026-01-23 10:57:11.910064491 +0000 UTC m=+0.139892918 container start 2c63e8db80a9cd4c56f7541d804bfb982e8d7de928f902628792ca514055f8d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_allen, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 23 05:57:11 np0005593232 podman[406308]: 2026-01-23 10:57:11.913356194 +0000 UTC m=+0.143184651 container attach 2c63e8db80a9cd4c56f7541d804bfb982e8d7de928f902628792ca514055f8d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_allen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 23 05:57:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:12.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:57:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:57:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:12.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:57:12 np0005593232 practical_allen[406324]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:57:12 np0005593232 practical_allen[406324]: --> relative data size: 1.0
Jan 23 05:57:12 np0005593232 practical_allen[406324]: --> All data devices are unavailable
Jan 23 05:57:12 np0005593232 systemd[1]: libpod-2c63e8db80a9cd4c56f7541d804bfb982e8d7de928f902628792ca514055f8d9.scope: Deactivated successfully.
Jan 23 05:57:12 np0005593232 podman[406308]: 2026-01-23 10:57:12.758694003 +0000 UTC m=+0.988522470 container died 2c63e8db80a9cd4c56f7541d804bfb982e8d7de928f902628792ca514055f8d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_allen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:57:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3942: 321 pgs: 321 active+clean; 202 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 530 KiB/s rd, 22 KiB/s wr, 43 op/s
Jan 23 05:57:13 np0005593232 nova_compute[250269]: 2026-01-23 10:57:13.308 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:57:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:14.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:14.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:14 np0005593232 nova_compute[250269]: 2026-01-23 10:57:14.827 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:57:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3943: 321 pgs: 321 active+clean; 202 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 530 KiB/s rd, 22 KiB/s wr, 43 op/s
Jan 23 05:57:15 np0005593232 systemd[1]: var-lib-containers-storage-overlay-fc8ea96e458227ecc13843c613a83d27183dbb2c80576e5f37566e218c08b387-merged.mount: Deactivated successfully.
Jan 23 05:57:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:16.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:16.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:16 np0005593232 podman[406308]: 2026-01-23 10:57:16.619169396 +0000 UTC m=+4.848997853 container remove 2c63e8db80a9cd4c56f7541d804bfb982e8d7de928f902628792ca514055f8d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:57:16 np0005593232 systemd[1]: libpod-conmon-2c63e8db80a9cd4c56f7541d804bfb982e8d7de928f902628792ca514055f8d9.scope: Deactivated successfully.
Jan 23 05:57:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3944: 321 pgs: 321 active+clean; 202 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 531 KiB/s rd, 22 KiB/s wr, 44 op/s
Jan 23 05:57:17 np0005593232 podman[406544]: 2026-01-23 10:57:17.379677283 +0000 UTC m=+0.064056891 container create 3403672c5321bf6a3c156d3e2d7870c53bf718ae65a037a10dd9b34343e4a479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 23 05:57:17 np0005593232 systemd[1]: Started libpod-conmon-3403672c5321bf6a3c156d3e2d7870c53bf718ae65a037a10dd9b34343e4a479.scope.
Jan 23 05:57:17 np0005593232 podman[406544]: 2026-01-23 10:57:17.351515613 +0000 UTC m=+0.035895241 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:57:17 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:57:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:57:17 np0005593232 podman[406544]: 2026-01-23 10:57:17.489603108 +0000 UTC m=+0.173982736 container init 3403672c5321bf6a3c156d3e2d7870c53bf718ae65a037a10dd9b34343e4a479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bardeen, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:57:17 np0005593232 podman[406544]: 2026-01-23 10:57:17.496444483 +0000 UTC m=+0.180824091 container start 3403672c5321bf6a3c156d3e2d7870c53bf718ae65a037a10dd9b34343e4a479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bardeen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 23 05:57:17 np0005593232 vibrant_bardeen[406560]: 167 167
Jan 23 05:57:17 np0005593232 systemd[1]: libpod-3403672c5321bf6a3c156d3e2d7870c53bf718ae65a037a10dd9b34343e4a479.scope: Deactivated successfully.
Jan 23 05:57:17 np0005593232 podman[406544]: 2026-01-23 10:57:17.811363444 +0000 UTC m=+0.495743152 container attach 3403672c5321bf6a3c156d3e2d7870c53bf718ae65a037a10dd9b34343e4a479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:57:17 np0005593232 podman[406544]: 2026-01-23 10:57:17.813411613 +0000 UTC m=+0.497791261 container died 3403672c5321bf6a3c156d3e2d7870c53bf718ae65a037a10dd9b34343e4a479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 23 05:57:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay-dc2b40ed271076bcc5cccc5279624cf745f2e3e1025a1532bf8c2efa972776f0-merged.mount: Deactivated successfully.
Jan 23 05:57:17 np0005593232 podman[406544]: 2026-01-23 10:57:17.994701506 +0000 UTC m=+0.679081134 container remove 3403672c5321bf6a3c156d3e2d7870c53bf718ae65a037a10dd9b34343e4a479 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 05:57:18 np0005593232 systemd[1]: libpod-conmon-3403672c5321bf6a3c156d3e2d7870c53bf718ae65a037a10dd9b34343e4a479.scope: Deactivated successfully.
Jan 23 05:57:18 np0005593232 podman[406587]: 2026-01-23 10:57:18.167849658 +0000 UTC m=+0.043578000 container create c2d084f939494072b1896aad9f64fa8427d1594e0f2077dc64bd1937a4d86ad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 23 05:57:18 np0005593232 systemd[1]: Started libpod-conmon-c2d084f939494072b1896aad9f64fa8427d1594e0f2077dc64bd1937a4d86ad0.scope.
Jan 23 05:57:18 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:57:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1876028110d1b2cd581f49ded6a559bf15f9af094d4182aaa6ca20a1ccef42c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:57:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1876028110d1b2cd581f49ded6a559bf15f9af094d4182aaa6ca20a1ccef42c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:57:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1876028110d1b2cd581f49ded6a559bf15f9af094d4182aaa6ca20a1ccef42c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:57:18 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1876028110d1b2cd581f49ded6a559bf15f9af094d4182aaa6ca20a1ccef42c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:57:18 np0005593232 podman[406587]: 2026-01-23 10:57:18.149363882 +0000 UTC m=+0.025092244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:57:18 np0005593232 podman[406587]: 2026-01-23 10:57:18.249603681 +0000 UTC m=+0.125332053 container init c2d084f939494072b1896aad9f64fa8427d1594e0f2077dc64bd1937a4d86ad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_lederberg, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:57:18 np0005593232 podman[406587]: 2026-01-23 10:57:18.255580321 +0000 UTC m=+0.131308663 container start c2d084f939494072b1896aad9f64fa8427d1594e0f2077dc64bd1937a4d86ad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_lederberg, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:57:18 np0005593232 podman[406587]: 2026-01-23 10:57:18.259174724 +0000 UTC m=+0.134903066 container attach c2d084f939494072b1896aad9f64fa8427d1594e0f2077dc64bd1937a4d86ad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:57:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:57:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:18.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:57:18 np0005593232 nova_compute[250269]: 2026-01-23 10:57:18.360 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 05:57:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:18.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3945: 321 pgs: 321 active+clean; 202 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 345 KiB/s rd, 20 KiB/s wr, 21 op/s
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]: {
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:    "0": [
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:        {
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:            "devices": [
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:                "/dev/loop3"
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:            ],
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:            "lv_name": "ceph_lv0",
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:            "lv_size": "7511998464",
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:            "name": "ceph_lv0",
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:            "tags": {
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:                "ceph.cluster_name": "ceph",
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:                "ceph.crush_device_class": "",
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:                "ceph.encrypted": "0",
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:                "ceph.osd_id": "0",
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:                "ceph.type": "block",
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:                "ceph.vdo": "0"
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:            },
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:            "type": "block",
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:            "vg_name": "ceph_vg0"
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:        }
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]:    ]
Jan 23 05:57:19 np0005593232 distracted_lederberg[406603]: }
Jan 23 05:57:19 np0005593232 systemd[1]: libpod-c2d084f939494072b1896aad9f64fa8427d1594e0f2077dc64bd1937a4d86ad0.scope: Deactivated successfully.
Jan 23 05:57:19 np0005593232 podman[406587]: 2026-01-23 10:57:19.104803601 +0000 UTC m=+0.980531943 container died c2d084f939494072b1896aad9f64fa8427d1594e0f2077dc64bd1937a4d86ad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_lederberg, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:57:19 np0005593232 nova_compute[250269]: 2026-01-23 10:57:19.829 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:57:20 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1876028110d1b2cd581f49ded6a559bf15f9af094d4182aaa6ca20a1ccef42c5-merged.mount: Deactivated successfully.
Jan 23 05:57:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:20 np0005593232 nova_compute[250269]: 2026-01-23 10:57:20.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:57:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:57:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:20.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:57:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:57:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:20.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:57:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3946: 321 pgs: 321 active+clean; 202 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 KiB/s rd, 10 KiB/s wr, 1 op/s
Jan 23 05:57:21 np0005593232 nova_compute[250269]: 2026-01-23 10:57:21.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:57:21 np0005593232 podman[406587]: 2026-01-23 10:57:21.946013433 +0000 UTC m=+3.821741785 container remove c2d084f939494072b1896aad9f64fa8427d1594e0f2077dc64bd1937a4d86ad0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_lederberg, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:57:21 np0005593232 systemd[1]: libpod-conmon-c2d084f939494072b1896aad9f64fa8427d1594e0f2077dc64bd1937a4d86ad0.scope: Deactivated successfully.
Jan 23 05:57:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:22.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:57:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:22.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:22 np0005593232 podman[406767]: 2026-01-23 10:57:22.712641124 +0000 UTC m=+0.031022222 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:57:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3947: 321 pgs: 321 active+clean; 202 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.0 KiB/s rd, 13 KiB/s wr, 1 op/s
Jan 23 05:57:23 np0005593232 nova_compute[250269]: 2026-01-23 10:57:23.363 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:57:23 np0005593232 podman[406767]: 2026-01-23 10:57:23.457225949 +0000 UTC m=+0.775606977 container create ec6b59851c08c159bee08d1704b006d0ab890f7fb0db718ddff5e4cd8703da08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_curran, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:57:23 np0005593232 systemd[1]: Started libpod-conmon-ec6b59851c08c159bee08d1704b006d0ab890f7fb0db718ddff5e4cd8703da08.scope.
Jan 23 05:57:24 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:57:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:24.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:24 np0005593232 podman[406767]: 2026-01-23 10:57:24.315374771 +0000 UTC m=+1.633755819 container init ec6b59851c08c159bee08d1704b006d0ab890f7fb0db718ddff5e4cd8703da08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_curran, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:57:24 np0005593232 podman[406767]: 2026-01-23 10:57:24.324417098 +0000 UTC m=+1.642798126 container start ec6b59851c08c159bee08d1704b006d0ab890f7fb0db718ddff5e4cd8703da08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_curran, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 23 05:57:24 np0005593232 suspicious_curran[406783]: 167 167
Jan 23 05:57:24 np0005593232 systemd[1]: libpod-ec6b59851c08c159bee08d1704b006d0ab890f7fb0db718ddff5e4cd8703da08.scope: Deactivated successfully.
Jan 23 05:57:24 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:57:24.431 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=91, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=90) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:57:24 np0005593232 nova_compute[250269]: 2026-01-23 10:57:24.432 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:57:24 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:57:24.433 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:57:24 np0005593232 podman[406767]: 2026-01-23 10:57:24.524046523 +0000 UTC m=+1.842427551 container attach ec6b59851c08c159bee08d1704b006d0ab890f7fb0db718ddff5e4cd8703da08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_curran, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 23 05:57:24 np0005593232 podman[406767]: 2026-01-23 10:57:24.524884817 +0000 UTC m=+1.843265835 container died ec6b59851c08c159bee08d1704b006d0ab890f7fb0db718ddff5e4cd8703da08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 05:57:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:24.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:24 np0005593232 nova_compute[250269]: 2026-01-23 10:57:24.831 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:57:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3948: 321 pgs: 321 active+clean; 202 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1023 B/s rd, 2.7 KiB/s wr, 0 op/s
Jan 23 05:57:25 np0005593232 nova_compute[250269]: 2026-01-23 10:57:25.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:57:25 np0005593232 nova_compute[250269]: 2026-01-23 10:57:25.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:57:25 np0005593232 nova_compute[250269]: 2026-01-23 10:57:25.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:57:26 np0005593232 nova_compute[250269]: 2026-01-23 10:57:26.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:57:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:26.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:26 np0005593232 systemd[1]: var-lib-containers-storage-overlay-29ce84552d58beb8f16c1e94bb8e753c3c8b023032a03773817934c2e2ccffe0-merged.mount: Deactivated successfully.
Jan 23 05:57:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:26.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:26 np0005593232 podman[406767]: 2026-01-23 10:57:26.65373018 +0000 UTC m=+3.972111208 container remove ec6b59851c08c159bee08d1704b006d0ab890f7fb0db718ddff5e4cd8703da08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_curran, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:57:26 np0005593232 systemd[1]: libpod-conmon-ec6b59851c08c159bee08d1704b006d0ab890f7fb0db718ddff5e4cd8703da08.scope: Deactivated successfully.
Jan 23 05:57:26 np0005593232 podman[406810]: 2026-01-23 10:57:26.805240567 +0000 UTC m=+0.028236024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:57:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3949: 321 pgs: 321 active+clean; 202 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1023 B/s rd, 7.1 KiB/s wr, 1 op/s
Jan 23 05:57:27 np0005593232 podman[406810]: 2026-01-23 10:57:27.044023444 +0000 UTC m=+0.267018851 container create cac9b1aa76d67d71e7280359e224bc7e117b64e5035e4a2f07a545be8f2cfbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 05:57:27 np0005593232 systemd[1]: Started libpod-conmon-cac9b1aa76d67d71e7280359e224bc7e117b64e5035e4a2f07a545be8f2cfbe1.scope.
Jan 23 05:57:27 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:57:27 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e48a3a3e65f56db599c8cc215281bdf8e859186932eb94a42e7afbdef41e4a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:57:27 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e48a3a3e65f56db599c8cc215281bdf8e859186932eb94a42e7afbdef41e4a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:57:27 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e48a3a3e65f56db599c8cc215281bdf8e859186932eb94a42e7afbdef41e4a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:57:27 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e48a3a3e65f56db599c8cc215281bdf8e859186932eb94a42e7afbdef41e4a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:57:27 np0005593232 podman[406810]: 2026-01-23 10:57:27.329031785 +0000 UTC m=+0.552027212 container init cac9b1aa76d67d71e7280359e224bc7e117b64e5035e4a2f07a545be8f2cfbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 23 05:57:27 np0005593232 podman[406810]: 2026-01-23 10:57:27.335373395 +0000 UTC m=+0.558368842 container start cac9b1aa76d67d71e7280359e224bc7e117b64e5035e4a2f07a545be8f2cfbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:57:27 np0005593232 podman[406810]: 2026-01-23 10:57:27.382795703 +0000 UTC m=+0.605791110 container attach cac9b1aa76d67d71e7280359e224bc7e117b64e5035e4a2f07a545be8f2cfbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:57:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:57:28 np0005593232 optimistic_cerf[406826]: {
Jan 23 05:57:28 np0005593232 optimistic_cerf[406826]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:57:28 np0005593232 optimistic_cerf[406826]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:57:28 np0005593232 optimistic_cerf[406826]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:57:28 np0005593232 optimistic_cerf[406826]:        "osd_id": 0,
Jan 23 05:57:28 np0005593232 optimistic_cerf[406826]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:57:28 np0005593232 optimistic_cerf[406826]:        "type": "bluestore"
Jan 23 05:57:28 np0005593232 optimistic_cerf[406826]:    }
Jan 23 05:57:28 np0005593232 optimistic_cerf[406826]: }
Jan 23 05:57:28 np0005593232 systemd[1]: libpod-cac9b1aa76d67d71e7280359e224bc7e117b64e5035e4a2f07a545be8f2cfbe1.scope: Deactivated successfully.
Jan 23 05:57:28 np0005593232 podman[406810]: 2026-01-23 10:57:28.239559887 +0000 UTC m=+1.462555294 container died cac9b1aa76d67d71e7280359e224bc7e117b64e5035e4a2f07a545be8f2cfbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cerf, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:57:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:28.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:28 np0005593232 nova_compute[250269]: 2026-01-23 10:57:28.364 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:57:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:57:28.435 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '91'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:57:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:57:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:28.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:57:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3950: 321 pgs: 321 active+clean; 168 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.1 KiB/s rd, 12 KiB/s wr, 8 op/s
Jan 23 05:57:29 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1e48a3a3e65f56db599c8cc215281bdf8e859186932eb94a42e7afbdef41e4a9-merged.mount: Deactivated successfully.
Jan 23 05:57:29 np0005593232 nova_compute[250269]: 2026-01-23 10:57:29.832 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:57:30 np0005593232 nova_compute[250269]: 2026-01-23 10:57:30.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:57:30 np0005593232 nova_compute[250269]: 2026-01-23 10:57:30.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:57:30 np0005593232 nova_compute[250269]: 2026-01-23 10:57:30.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:57:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:30.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:57:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:30.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:57:30 np0005593232 nova_compute[250269]: 2026-01-23 10:57:30.867 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:57:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3951: 321 pgs: 321 active+clean; 168 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.1 KiB/s rd, 12 KiB/s wr, 8 op/s
Jan 23 05:57:31 np0005593232 nova_compute[250269]: 2026-01-23 10:57:31.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:57:31 np0005593232 podman[406810]: 2026-01-23 10:57:31.447589874 +0000 UTC m=+4.670585321 container remove cac9b1aa76d67d71e7280359e224bc7e117b64e5035e4a2f07a545be8f2cfbe1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 05:57:31 np0005593232 systemd[1]: libpod-conmon-cac9b1aa76d67d71e7280359e224bc7e117b64e5035e4a2f07a545be8f2cfbe1.scope: Deactivated successfully.
Jan 23 05:57:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:57:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:57:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:57:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:57:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:32.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:57:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:57:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:57:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:32.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:57:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:57:32 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 22d771a2-852a-498c-afa1-645d6d257749 does not exist
Jan 23 05:57:32 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c2f8e64d-f184-415a-bd2f-7bbf3fb0d117 does not exist
Jan 23 05:57:32 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 984d92a5-84f2-42df-89f2-33578eaa7b9b does not exist
Jan 23 05:57:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3952: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 29 op/s
Jan 23 05:57:33 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:57:33 np0005593232 nova_compute[250269]: 2026-01-23 10:57:33.366 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:57:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:34.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:34 np0005593232 podman[406963]: 2026-01-23 10:57:34.486850126 +0000 UTC m=+0.127297148 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 23 05:57:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:34.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:34 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:57:34 np0005593232 nova_compute[250269]: 2026-01-23 10:57:34.864 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:57:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3953: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 10 KiB/s wr, 28 op/s
Jan 23 05:57:35 np0005593232 nova_compute[250269]: 2026-01-23 10:57:35.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:57:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:57:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:36.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:57:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:57:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:36.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:57:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3954: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 10 KiB/s wr, 28 op/s
Jan 23 05:57:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:57:37
Jan 23 05:57:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:57:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:57:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'vms', 'backups']
Jan 23 05:57:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:57:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:57:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:57:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:57:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:57:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:57:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:57:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:57:38 np0005593232 nova_compute[250269]: 2026-01-23 10:57:38.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:57:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:57:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:38.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:57:38 np0005593232 nova_compute[250269]: 2026-01-23 10:57:38.332 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:57:38 np0005593232 nova_compute[250269]: 2026-01-23 10:57:38.332 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:57:38 np0005593232 nova_compute[250269]: 2026-01-23 10:57:38.332 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:57:38 np0005593232 nova_compute[250269]: 2026-01-23 10:57:38.332 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:57:38 np0005593232 nova_compute[250269]: 2026-01-23 10:57:38.333 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:57:38 np0005593232 nova_compute[250269]: 2026-01-23 10:57:38.407 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:57:38 np0005593232 podman[406992]: 2026-01-23 10:57:38.470047089 +0000 UTC m=+0.118298363 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:57:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:57:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:38.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:57:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:57:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:57:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:57:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:57:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:57:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:57:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:57:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:57:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:57:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:57:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:57:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/184115445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:57:38 np0005593232 nova_compute[250269]: 2026-01-23 10:57:38.852 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:57:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3955: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 28 op/s
Jan 23 05:57:39 np0005593232 nova_compute[250269]: 2026-01-23 10:57:39.076 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:57:39 np0005593232 nova_compute[250269]: 2026-01-23 10:57:39.077 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4136MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:57:39 np0005593232 nova_compute[250269]: 2026-01-23 10:57:39.078 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:57:39 np0005593232 nova_compute[250269]: 2026-01-23 10:57:39.078 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:57:39 np0005593232 nova_compute[250269]: 2026-01-23 10:57:39.262 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:57:39 np0005593232 nova_compute[250269]: 2026-01-23 10:57:39.262 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:57:39 np0005593232 nova_compute[250269]: 2026-01-23 10:57:39.345 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing inventories for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 23 05:57:39 np0005593232 nova_compute[250269]: 2026-01-23 10:57:39.466 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating ProviderTree inventory for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 23 05:57:39 np0005593232 nova_compute[250269]: 2026-01-23 10:57:39.467 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 05:57:39 np0005593232 nova_compute[250269]: 2026-01-23 10:57:39.487 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing aggregate associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 23 05:57:39 np0005593232 nova_compute[250269]: 2026-01-23 10:57:39.511 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing trait associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 23 05:57:39 np0005593232 nova_compute[250269]: 2026-01-23 10:57:39.529 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:57:39 np0005593232 nova_compute[250269]: 2026-01-23 10:57:39.868 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:57:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:57:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2081273794' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:57:40 np0005593232 nova_compute[250269]: 2026-01-23 10:57:40.067 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:57:40 np0005593232 nova_compute[250269]: 2026-01-23 10:57:40.075 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:57:40 np0005593232 nova_compute[250269]: 2026-01-23 10:57:40.132 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:57:40 np0005593232 nova_compute[250269]: 2026-01-23 10:57:40.135 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:57:40 np0005593232 nova_compute[250269]: 2026-01-23 10:57:40.135 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:57:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:40.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:40.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3956: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Jan 23 05:57:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:57:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:42.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:57:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:57:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:42.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:57:42.679 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:57:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:57:42.680 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:57:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:57:42.681 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:57:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3957: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Jan 23 05:57:43 np0005593232 nova_compute[250269]: 2026-01-23 10:57:43.430 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:57:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:57:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:44.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:57:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:44.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:57:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3402458660' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:57:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:57:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3402458660' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:57:44 np0005593232 nova_compute[250269]: 2026-01-23 10:57:44.871 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:57:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3958: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:57:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:46.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:57:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:46.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:57:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3959: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:57:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:57:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:57:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:57:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:57:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:57:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 05:57:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:57:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 05:57:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:57:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 05:57:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:57:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 05:57:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:57:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 05:57:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:57:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:57:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:57:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 05:57:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:57:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 05:57:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:57:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:57:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:57:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 05:57:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:48.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:48 np0005593232 nova_compute[250269]: 2026-01-23 10:57:48.492 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:57:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:57:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:48.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:57:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3960: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:57:49 np0005593232 nova_compute[250269]: 2026-01-23 10:57:49.873 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:57:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:57:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:50.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:57:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:50.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3961: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:57:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:52.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:57:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:57:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:52.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:57:52 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #201. Immutable memtables: 0.
Jan 23 05:57:52 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:57:52.883854) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:57:52 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 201
Jan 23 05:57:52 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165872883953, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 729, "num_deletes": 258, "total_data_size": 988270, "memory_usage": 1002496, "flush_reason": "Manual Compaction"}
Jan 23 05:57:52 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #202: started
Jan 23 05:57:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3962: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:57:53 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165873246837, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 202, "file_size": 673149, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 86574, "largest_seqno": 87302, "table_properties": {"data_size": 669770, "index_size": 1158, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9221, "raw_average_key_size": 21, "raw_value_size": 662569, "raw_average_value_size": 1519, "num_data_blocks": 50, "num_entries": 436, "num_filter_entries": 436, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769165817, "oldest_key_time": 1769165817, "file_creation_time": 1769165872, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 202, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:57:53 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 363036 microseconds, and 3739 cpu microseconds.
Jan 23 05:57:53 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:57:53 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:57:53.246914) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #202: 673149 bytes OK
Jan 23 05:57:53 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:57:53.246934) [db/memtable_list.cc:519] [default] Level-0 commit table #202 started
Jan 23 05:57:53 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:57:53.445953) [db/memtable_list.cc:722] [default] Level-0 commit table #202: memtable #1 done
Jan 23 05:57:53 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:57:53.445992) EVENT_LOG_v1 {"time_micros": 1769165873445982, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:57:53 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:57:53.446018) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:57:53 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 984493, prev total WAL file size 984493, number of live WAL files 2.
Jan 23 05:57:53 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000198.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:57:53 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:57:53.446837) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323539' seq:72057594037927935, type:22 .. '6D6772737461740033353137' seq:0, type:0; will stop at (end)
Jan 23 05:57:53 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:57:53 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [202(657KB)], [200(13MB)]
Jan 23 05:57:53 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165873446936, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [202], "files_L6": [200], "score": -1, "input_data_size": 15306569, "oldest_snapshot_seqno": -1}
Jan 23 05:57:53 np0005593232 nova_compute[250269]: 2026-01-23 10:57:53.496 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:57:53 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #203: 11173 keys, 11655283 bytes, temperature: kUnknown
Jan 23 05:57:53 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165873891472, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 203, "file_size": 11655283, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11587504, "index_size": 38738, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27973, "raw_key_size": 295054, "raw_average_key_size": 26, "raw_value_size": 11396614, "raw_average_value_size": 1020, "num_data_blocks": 1460, "num_entries": 11173, "num_filter_entries": 11173, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769165873, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:57:53 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:57:54 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:57:53.891972) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 11655283 bytes
Jan 23 05:57:54 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:57:54.074818) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 34.4 rd, 26.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 14.0 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(40.1) write-amplify(17.3) OK, records in: 11689, records dropped: 516 output_compression: NoCompression
Jan 23 05:57:54 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:57:54.074935) EVENT_LOG_v1 {"time_micros": 1769165874074860, "job": 126, "event": "compaction_finished", "compaction_time_micros": 444663, "compaction_time_cpu_micros": 63396, "output_level": 6, "num_output_files": 1, "total_output_size": 11655283, "num_input_records": 11689, "num_output_records": 11173, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:57:54 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000202.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:57:54 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165874075439, "job": 126, "event": "table_file_deletion", "file_number": 202}
Jan 23 05:57:54 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:57:54 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165874079735, "job": 126, "event": "table_file_deletion", "file_number": 200}
Jan 23 05:57:54 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:57:53.446710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:57:54 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:57:54.079956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:57:54 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:57:54.079968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:57:54 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:57:54.079970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:57:54 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:57:54.079972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:57:54 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:57:54.079974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:57:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:54.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:57:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:54.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:57:54 np0005593232 nova_compute[250269]: 2026-01-23 10:57:54.875 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:57:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3963: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:57:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:56.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:56.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3964: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:57:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:57:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:57:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:57:58.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:57:58 np0005593232 nova_compute[250269]: 2026-01-23 10:57:58.529 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:57:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:57:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:57:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:57:58.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:57:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3965: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:57:59 np0005593232 nova_compute[250269]: 2026-01-23 10:57:59.877 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:00.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:00.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3966: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:58:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:58:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:02.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:58:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:02.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:58:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3967: 321 pgs: 321 active+clean; 148 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 1.0 MiB/s wr, 14 op/s
Jan 23 05:58:03 np0005593232 nova_compute[250269]: 2026-01-23 10:58:03.568 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:04.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:04.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:04 np0005593232 nova_compute[250269]: 2026-01-23 10:58:04.879 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3968: 321 pgs: 321 active+clean; 148 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 1.0 MiB/s wr, 14 op/s
Jan 23 05:58:05 np0005593232 podman[407118]: 2026-01-23 10:58:05.428909934 +0000 UTC m=+0.080403587 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:58:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:06.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:58:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:06.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:58:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3969: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 05:58:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:58:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:58:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:58:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:58:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:58:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:58:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:58:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:08.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:08 np0005593232 nova_compute[250269]: 2026-01-23 10:58:08.607 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:58:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:08.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:58:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3970: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 23 05:58:09 np0005593232 podman[407146]: 2026-01-23 10:58:09.40692114 +0000 UTC m=+0.066582004 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 23 05:58:09 np0005593232 nova_compute[250269]: 2026-01-23 10:58:09.881 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:10.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:58:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:10.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:58:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3971: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 23 05:58:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:12.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:12.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:58:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3972: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 23 05:58:13 np0005593232 nova_compute[250269]: 2026-01-23 10:58:13.610 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:14.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:14.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:14 np0005593232 nova_compute[250269]: 2026-01-23 10:58:14.882 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3973: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 786 KiB/s wr, 86 op/s
Jan 23 05:58:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:16.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:16.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3974: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 786 KiB/s wr, 86 op/s
Jan 23 05:58:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:58:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:18.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:18 np0005593232 nova_compute[250269]: 2026-01-23 10:58:18.652 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:18.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3975: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 05:58:19 np0005593232 nova_compute[250269]: 2026-01-23 10:58:19.884 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:20.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:20.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3976: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 69 op/s
Jan 23 05:58:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:58:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:22.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:58:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:58:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:22.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:58:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:58:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3977: 321 pgs: 321 active+clean; 197 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 117 op/s
Jan 23 05:58:23 np0005593232 nova_compute[250269]: 2026-01-23 10:58:23.136 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:58:23 np0005593232 nova_compute[250269]: 2026-01-23 10:58:23.137 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:58:23 np0005593232 nova_compute[250269]: 2026-01-23 10:58:23.654 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:24.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:58:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:24.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:58:24 np0005593232 nova_compute[250269]: 2026-01-23 10:58:24.887 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3978: 321 pgs: 321 active+clean; 197 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 299 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Jan 23 05:58:25 np0005593232 nova_compute[250269]: 2026-01-23 10:58:25.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:58:25 np0005593232 nova_compute[250269]: 2026-01-23 10:58:25.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:58:26 np0005593232 nova_compute[250269]: 2026-01-23 10:58:26.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:58:26 np0005593232 nova_compute[250269]: 2026-01-23 10:58:26.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:58:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:26.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:58:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:26.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:58:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3979: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 05:58:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:58:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:58:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:28.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:58:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:58:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:28.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:58:28 np0005593232 nova_compute[250269]: 2026-01-23 10:58:28.699 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3980: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 05:58:29 np0005593232 nova_compute[250269]: 2026-01-23 10:58:29.890 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:30 np0005593232 nova_compute[250269]: 2026-01-23 10:58:30.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:58:30 np0005593232 nova_compute[250269]: 2026-01-23 10:58:30.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:58:30 np0005593232 nova_compute[250269]: 2026-01-23 10:58:30.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:58:30 np0005593232 nova_compute[250269]: 2026-01-23 10:58:30.318 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:58:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:30.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:58:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:30.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:58:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3981: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 05:58:32 np0005593232 nova_compute[250269]: 2026-01-23 10:58:32.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:58:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:32.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:58:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:32.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:58:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:58:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3982: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 05:58:33 np0005593232 nova_compute[250269]: 2026-01-23 10:58:33.749 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 23 05:58:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 05:58:34 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 05:58:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:58:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:58:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:58:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:58:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:58:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:58:34 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 5bb15636-4f5b-445b-bdb4-14e38c64c5db does not exist
Jan 23 05:58:34 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 67e02b64-dacc-4303-8302-79f61417fa4c does not exist
Jan 23 05:58:34 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 2f569df3-20e0-4ffd-b00a-8e2f4a61d05e does not exist
Jan 23 05:58:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:58:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:58:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:58:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:58:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:58:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:58:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:58:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:34.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:58:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:34.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:34 np0005593232 podman[407548]: 2026-01-23 10:58:34.76622149 +0000 UTC m=+0.043000233 container create 2224a47310b530a61c49222225810780f6950abdbe5d4f2fe482084c711cc68c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 23 05:58:34 np0005593232 systemd[1]: Started libpod-conmon-2224a47310b530a61c49222225810780f6950abdbe5d4f2fe482084c711cc68c.scope.
Jan 23 05:58:34 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:58:34 np0005593232 podman[407548]: 2026-01-23 10:58:34.839838543 +0000 UTC m=+0.116617306 container init 2224a47310b530a61c49222225810780f6950abdbe5d4f2fe482084c711cc68c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:58:34 np0005593232 podman[407548]: 2026-01-23 10:58:34.745621594 +0000 UTC m=+0.022400377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:58:34 np0005593232 podman[407548]: 2026-01-23 10:58:34.8481709 +0000 UTC m=+0.124949643 container start 2224a47310b530a61c49222225810780f6950abdbe5d4f2fe482084c711cc68c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:58:34 np0005593232 podman[407548]: 2026-01-23 10:58:34.851627368 +0000 UTC m=+0.128406131 container attach 2224a47310b530a61c49222225810780f6950abdbe5d4f2fe482084c711cc68c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 23 05:58:34 np0005593232 unruffled_leakey[407565]: 167 167
Jan 23 05:58:34 np0005593232 systemd[1]: libpod-2224a47310b530a61c49222225810780f6950abdbe5d4f2fe482084c711cc68c.scope: Deactivated successfully.
Jan 23 05:58:34 np0005593232 podman[407548]: 2026-01-23 10:58:34.853717507 +0000 UTC m=+0.130496250 container died 2224a47310b530a61c49222225810780f6950abdbe5d4f2fe482084c711cc68c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 05:58:34 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6e23a103e974cfe1192fcc9875bceca9af07413d51555e6a5156bb78f017ed67-merged.mount: Deactivated successfully.
Jan 23 05:58:34 np0005593232 podman[407548]: 2026-01-23 10:58:34.889669199 +0000 UTC m=+0.166447942 container remove 2224a47310b530a61c49222225810780f6950abdbe5d4f2fe482084c711cc68c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_leakey, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 05:58:34 np0005593232 nova_compute[250269]: 2026-01-23 10:58:34.893 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:34 np0005593232 systemd[1]: libpod-conmon-2224a47310b530a61c49222225810780f6950abdbe5d4f2fe482084c711cc68c.scope: Deactivated successfully.
Jan 23 05:58:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3983: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 92 KiB/s rd, 57 KiB/s wr, 16 op/s
Jan 23 05:58:35 np0005593232 podman[407587]: 2026-01-23 10:58:35.089826729 +0000 UTC m=+0.051483455 container create 426d71bd1baaa2bfbca664ba25ef1f51ba3e28a3b5bb87708f750a2c9414cfda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:58:35 np0005593232 systemd[1]: Started libpod-conmon-426d71bd1baaa2bfbca664ba25ef1f51ba3e28a3b5bb87708f750a2c9414cfda.scope.
Jan 23 05:58:35 np0005593232 podman[407587]: 2026-01-23 10:58:35.070766097 +0000 UTC m=+0.032422823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:58:35 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:58:35 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d90d88bb51366b6dfd6e477acc606edb930e2213fe289d668e3c47a9a42087/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:58:35 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d90d88bb51366b6dfd6e477acc606edb930e2213fe289d668e3c47a9a42087/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:58:35 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d90d88bb51366b6dfd6e477acc606edb930e2213fe289d668e3c47a9a42087/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:58:35 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d90d88bb51366b6dfd6e477acc606edb930e2213fe289d668e3c47a9a42087/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:58:35 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d90d88bb51366b6dfd6e477acc606edb930e2213fe289d668e3c47a9a42087/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:58:35 np0005593232 podman[407587]: 2026-01-23 10:58:35.197151489 +0000 UTC m=+0.158808245 container init 426d71bd1baaa2bfbca664ba25ef1f51ba3e28a3b5bb87708f750a2c9414cfda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:58:35 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:58:35 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:58:35 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:58:35 np0005593232 podman[407587]: 2026-01-23 10:58:35.211336143 +0000 UTC m=+0.172992909 container start 426d71bd1baaa2bfbca664ba25ef1f51ba3e28a3b5bb87708f750a2c9414cfda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 05:58:35 np0005593232 podman[407587]: 2026-01-23 10:58:35.215395038 +0000 UTC m=+0.177051784 container attach 426d71bd1baaa2bfbca664ba25ef1f51ba3e28a3b5bb87708f750a2c9414cfda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bhaskara, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:58:35 np0005593232 nova_compute[250269]: 2026-01-23 10:58:35.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:58:35 np0005593232 keen_bhaskara[407603]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:58:35 np0005593232 keen_bhaskara[407603]: --> relative data size: 1.0
Jan 23 05:58:35 np0005593232 keen_bhaskara[407603]: --> All data devices are unavailable
Jan 23 05:58:35 np0005593232 systemd[1]: libpod-426d71bd1baaa2bfbca664ba25ef1f51ba3e28a3b5bb87708f750a2c9414cfda.scope: Deactivated successfully.
Jan 23 05:58:35 np0005593232 podman[407587]: 2026-01-23 10:58:35.992233659 +0000 UTC m=+0.953890375 container died 426d71bd1baaa2bfbca664ba25ef1f51ba3e28a3b5bb87708f750a2c9414cfda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:58:36 np0005593232 systemd[1]: var-lib-containers-storage-overlay-14d90d88bb51366b6dfd6e477acc606edb930e2213fe289d668e3c47a9a42087-merged.mount: Deactivated successfully.
Jan 23 05:58:36 np0005593232 podman[407587]: 2026-01-23 10:58:36.352223941 +0000 UTC m=+1.313880667 container remove 426d71bd1baaa2bfbca664ba25ef1f51ba3e28a3b5bb87708f750a2c9414cfda (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 05:58:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:36.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:36 np0005593232 systemd[1]: libpod-conmon-426d71bd1baaa2bfbca664ba25ef1f51ba3e28a3b5bb87708f750a2c9414cfda.scope: Deactivated successfully.
Jan 23 05:58:36 np0005593232 podman[407618]: 2026-01-23 10:58:36.47777417 +0000 UTC m=+0.445307669 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:58:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:58:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:36.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:58:36 np0005593232 podman[407796]: 2026-01-23 10:58:36.975257421 +0000 UTC m=+0.069721433 container create 5cd12cd2d19f2cfee947260abfeb72c27a6afddea0a08df08782fa8e1668f865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:58:37 np0005593232 systemd[1]: Started libpod-conmon-5cd12cd2d19f2cfee947260abfeb72c27a6afddea0a08df08782fa8e1668f865.scope.
Jan 23 05:58:37 np0005593232 podman[407796]: 2026-01-23 10:58:36.927485943 +0000 UTC m=+0.021950005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:58:37 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:58:37 np0005593232 podman[407796]: 2026-01-23 10:58:37.04523342 +0000 UTC m=+0.139697442 container init 5cd12cd2d19f2cfee947260abfeb72c27a6afddea0a08df08782fa8e1668f865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_meninsky, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:58:37 np0005593232 podman[407796]: 2026-01-23 10:58:37.053924217 +0000 UTC m=+0.148388239 container start 5cd12cd2d19f2cfee947260abfeb72c27a6afddea0a08df08782fa8e1668f865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 05:58:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3984: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 92 KiB/s rd, 57 KiB/s wr, 16 op/s
Jan 23 05:58:37 np0005593232 podman[407796]: 2026-01-23 10:58:37.058004433 +0000 UTC m=+0.152468475 container attach 5cd12cd2d19f2cfee947260abfeb72c27a6afddea0a08df08782fa8e1668f865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 23 05:58:37 np0005593232 quirky_meninsky[407812]: 167 167
Jan 23 05:58:37 np0005593232 systemd[1]: libpod-5cd12cd2d19f2cfee947260abfeb72c27a6afddea0a08df08782fa8e1668f865.scope: Deactivated successfully.
Jan 23 05:58:37 np0005593232 podman[407796]: 2026-01-23 10:58:37.059358792 +0000 UTC m=+0.153822804 container died 5cd12cd2d19f2cfee947260abfeb72c27a6afddea0a08df08782fa8e1668f865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_meninsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 05:58:37 np0005593232 systemd[1]: var-lib-containers-storage-overlay-63f30232432073ced685b0c293453d040f6e1ff73213e4aa53cec30d118280c2-merged.mount: Deactivated successfully.
Jan 23 05:58:37 np0005593232 podman[407796]: 2026-01-23 10:58:37.102306523 +0000 UTC m=+0.196770565 container remove 5cd12cd2d19f2cfee947260abfeb72c27a6afddea0a08df08782fa8e1668f865 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_meninsky, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:58:37 np0005593232 systemd[1]: libpod-conmon-5cd12cd2d19f2cfee947260abfeb72c27a6afddea0a08df08782fa8e1668f865.scope: Deactivated successfully.
Jan 23 05:58:37 np0005593232 podman[407836]: 2026-01-23 10:58:37.314031811 +0000 UTC m=+0.053936164 container create fe52327e0dea693537efcdd63dc3c589664ae26b875348a8691aba50a9fe75ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 05:58:37 np0005593232 systemd[1]: Started libpod-conmon-fe52327e0dea693537efcdd63dc3c589664ae26b875348a8691aba50a9fe75ec.scope.
Jan 23 05:58:37 np0005593232 podman[407836]: 2026-01-23 10:58:37.285761927 +0000 UTC m=+0.025666360 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:58:37 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:58:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c5932f0f1de4e41c269c816571acf26d9840b849d87231e8d7517ac33c3ffba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:58:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c5932f0f1de4e41c269c816571acf26d9840b849d87231e8d7517ac33c3ffba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:58:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c5932f0f1de4e41c269c816571acf26d9840b849d87231e8d7517ac33c3ffba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:58:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c5932f0f1de4e41c269c816571acf26d9840b849d87231e8d7517ac33c3ffba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:58:37 np0005593232 podman[407836]: 2026-01-23 10:58:37.417541273 +0000 UTC m=+0.157445646 container init fe52327e0dea693537efcdd63dc3c589664ae26b875348a8691aba50a9fe75ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3)
Jan 23 05:58:37 np0005593232 podman[407836]: 2026-01-23 10:58:37.424605074 +0000 UTC m=+0.164509447 container start fe52327e0dea693537efcdd63dc3c589664ae26b875348a8691aba50a9fe75ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brown, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 23 05:58:37 np0005593232 podman[407836]: 2026-01-23 10:58:37.428276348 +0000 UTC m=+0.168180721 container attach fe52327e0dea693537efcdd63dc3c589664ae26b875348a8691aba50a9fe75ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brown, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 05:58:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:58:37
Jan 23 05:58:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:58:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:58:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', '.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'default.rgw.meta', 'backups', 'vms', 'default.rgw.log']
Jan 23 05:58:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:58:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:58:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:58:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:58:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:58:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:58:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:58:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:58:38 np0005593232 kind_brown[407852]: {
Jan 23 05:58:38 np0005593232 kind_brown[407852]:    "0": [
Jan 23 05:58:38 np0005593232 kind_brown[407852]:        {
Jan 23 05:58:38 np0005593232 kind_brown[407852]:            "devices": [
Jan 23 05:58:38 np0005593232 kind_brown[407852]:                "/dev/loop3"
Jan 23 05:58:38 np0005593232 kind_brown[407852]:            ],
Jan 23 05:58:38 np0005593232 kind_brown[407852]:            "lv_name": "ceph_lv0",
Jan 23 05:58:38 np0005593232 kind_brown[407852]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:58:38 np0005593232 kind_brown[407852]:            "lv_size": "7511998464",
Jan 23 05:58:38 np0005593232 kind_brown[407852]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:58:38 np0005593232 kind_brown[407852]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:58:38 np0005593232 kind_brown[407852]:            "name": "ceph_lv0",
Jan 23 05:58:38 np0005593232 kind_brown[407852]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:58:38 np0005593232 kind_brown[407852]:            "tags": {
Jan 23 05:58:38 np0005593232 kind_brown[407852]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:58:38 np0005593232 kind_brown[407852]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:58:38 np0005593232 kind_brown[407852]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:58:38 np0005593232 kind_brown[407852]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:58:38 np0005593232 kind_brown[407852]:                "ceph.cluster_name": "ceph",
Jan 23 05:58:38 np0005593232 kind_brown[407852]:                "ceph.crush_device_class": "",
Jan 23 05:58:38 np0005593232 kind_brown[407852]:                "ceph.encrypted": "0",
Jan 23 05:58:38 np0005593232 kind_brown[407852]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:58:38 np0005593232 kind_brown[407852]:                "ceph.osd_id": "0",
Jan 23 05:58:38 np0005593232 kind_brown[407852]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:58:38 np0005593232 kind_brown[407852]:                "ceph.type": "block",
Jan 23 05:58:38 np0005593232 kind_brown[407852]:                "ceph.vdo": "0"
Jan 23 05:58:38 np0005593232 kind_brown[407852]:            },
Jan 23 05:58:38 np0005593232 kind_brown[407852]:            "type": "block",
Jan 23 05:58:38 np0005593232 kind_brown[407852]:            "vg_name": "ceph_vg0"
Jan 23 05:58:38 np0005593232 kind_brown[407852]:        }
Jan 23 05:58:38 np0005593232 kind_brown[407852]:    ]
Jan 23 05:58:38 np0005593232 kind_brown[407852]: }
Jan 23 05:58:38 np0005593232 systemd[1]: libpod-fe52327e0dea693537efcdd63dc3c589664ae26b875348a8691aba50a9fe75ec.scope: Deactivated successfully.
Jan 23 05:58:38 np0005593232 podman[407836]: 2026-01-23 10:58:38.215208997 +0000 UTC m=+0.955113390 container died fe52327e0dea693537efcdd63dc3c589664ae26b875348a8691aba50a9fe75ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brown, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 23 05:58:38 np0005593232 systemd[1]: var-lib-containers-storage-overlay-2c5932f0f1de4e41c269c816571acf26d9840b849d87231e8d7517ac33c3ffba-merged.mount: Deactivated successfully.
Jan 23 05:58:38 np0005593232 podman[407836]: 2026-01-23 10:58:38.283832478 +0000 UTC m=+1.023736841 container remove fe52327e0dea693537efcdd63dc3c589664ae26b875348a8691aba50a9fe75ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:58:38 np0005593232 systemd[1]: libpod-conmon-fe52327e0dea693537efcdd63dc3c589664ae26b875348a8691aba50a9fe75ec.scope: Deactivated successfully.
Jan 23 05:58:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:58:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:38.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:58:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:38.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:38 np0005593232 nova_compute[250269]: 2026-01-23 10:58:38.792 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:58:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:58:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:58:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:58:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:58:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:58:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:58:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:58:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:58:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:58:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3985: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 14 KiB/s wr, 0 op/s
Jan 23 05:58:39 np0005593232 podman[408016]: 2026-01-23 10:58:39.108656903 +0000 UTC m=+0.046667068 container create 0c607c98c6e9482ec99abb84f1a1f8860de6643a003d321dcf19ce5cbea0c824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jepsen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:58:39 np0005593232 systemd[1]: Started libpod-conmon-0c607c98c6e9482ec99abb84f1a1f8860de6643a003d321dcf19ce5cbea0c824.scope.
Jan 23 05:58:39 np0005593232 podman[408016]: 2026-01-23 10:58:39.088291734 +0000 UTC m=+0.026301959 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:58:39 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:58:39 np0005593232 podman[408016]: 2026-01-23 10:58:39.203496958 +0000 UTC m=+0.141507143 container init 0c607c98c6e9482ec99abb84f1a1f8860de6643a003d321dcf19ce5cbea0c824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:58:39 np0005593232 podman[408016]: 2026-01-23 10:58:39.209821648 +0000 UTC m=+0.147831813 container start 0c607c98c6e9482ec99abb84f1a1f8860de6643a003d321dcf19ce5cbea0c824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jepsen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 23 05:58:39 np0005593232 podman[408016]: 2026-01-23 10:58:39.213354009 +0000 UTC m=+0.151364204 container attach 0c607c98c6e9482ec99abb84f1a1f8860de6643a003d321dcf19ce5cbea0c824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 05:58:39 np0005593232 systemd[1]: libpod-0c607c98c6e9482ec99abb84f1a1f8860de6643a003d321dcf19ce5cbea0c824.scope: Deactivated successfully.
Jan 23 05:58:39 np0005593232 fervent_jepsen[408033]: 167 167
Jan 23 05:58:39 np0005593232 conmon[408033]: conmon 0c607c98c6e9482ec99a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0c607c98c6e9482ec99abb84f1a1f8860de6643a003d321dcf19ce5cbea0c824.scope/container/memory.events
Jan 23 05:58:39 np0005593232 podman[408016]: 2026-01-23 10:58:39.218042112 +0000 UTC m=+0.156052357 container died 0c607c98c6e9482ec99abb84f1a1f8860de6643a003d321dcf19ce5cbea0c824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 05:58:39 np0005593232 systemd[1]: var-lib-containers-storage-overlay-dc19320bdbbf678bbaa97d442ead40c3d4abd7b057bcc8455d2fe1c45cd2aa5f-merged.mount: Deactivated successfully.
Jan 23 05:58:39 np0005593232 podman[408016]: 2026-01-23 10:58:39.259635384 +0000 UTC m=+0.197645549 container remove 0c607c98c6e9482ec99abb84f1a1f8860de6643a003d321dcf19ce5cbea0c824 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:58:39 np0005593232 systemd[1]: libpod-conmon-0c607c98c6e9482ec99abb84f1a1f8860de6643a003d321dcf19ce5cbea0c824.scope: Deactivated successfully.
Jan 23 05:58:39 np0005593232 podman[408057]: 2026-01-23 10:58:39.439876578 +0000 UTC m=+0.043452357 container create d1d120d204233c7887ccbf8c0469fd1cdc3e9531888c245cc62f0c30099ac5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:58:39 np0005593232 systemd[1]: Started libpod-conmon-d1d120d204233c7887ccbf8c0469fd1cdc3e9531888c245cc62f0c30099ac5c6.scope.
Jan 23 05:58:39 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:58:39 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd60e2ecd36aaaa2e91967fdbcda219bb8646bf43a6f9cee911726b82259d9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:58:39 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd60e2ecd36aaaa2e91967fdbcda219bb8646bf43a6f9cee911726b82259d9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:58:39 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd60e2ecd36aaaa2e91967fdbcda219bb8646bf43a6f9cee911726b82259d9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:58:39 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bd60e2ecd36aaaa2e91967fdbcda219bb8646bf43a6f9cee911726b82259d9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:58:39 np0005593232 podman[408057]: 2026-01-23 10:58:39.508459727 +0000 UTC m=+0.112035556 container init d1d120d204233c7887ccbf8c0469fd1cdc3e9531888c245cc62f0c30099ac5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 05:58:39 np0005593232 podman[408057]: 2026-01-23 10:58:39.421390402 +0000 UTC m=+0.024966211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:58:39 np0005593232 podman[408057]: 2026-01-23 10:58:39.518941575 +0000 UTC m=+0.122517354 container start d1d120d204233c7887ccbf8c0469fd1cdc3e9531888c245cc62f0c30099ac5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_edison, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 05:58:39 np0005593232 podman[408057]: 2026-01-23 10:58:39.529158265 +0000 UTC m=+0.132734064 container attach d1d120d204233c7887ccbf8c0469fd1cdc3e9531888c245cc62f0c30099ac5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 05:58:39 np0005593232 podman[408071]: 2026-01-23 10:58:39.549604487 +0000 UTC m=+0.069888008 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 05:58:39 np0005593232 nova_compute[250269]: 2026-01-23 10:58:39.932 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:40 np0005593232 nova_compute[250269]: 2026-01-23 10:58:40.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:58:40 np0005593232 nova_compute[250269]: 2026-01-23 10:58:40.319 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:58:40 np0005593232 nova_compute[250269]: 2026-01-23 10:58:40.320 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:58:40 np0005593232 nova_compute[250269]: 2026-01-23 10:58:40.320 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:58:40 np0005593232 nova_compute[250269]: 2026-01-23 10:58:40.320 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:58:40 np0005593232 nova_compute[250269]: 2026-01-23 10:58:40.320 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:58:40 np0005593232 sad_edison[408075]: {
Jan 23 05:58:40 np0005593232 sad_edison[408075]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:58:40 np0005593232 sad_edison[408075]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:58:40 np0005593232 sad_edison[408075]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:58:40 np0005593232 sad_edison[408075]:        "osd_id": 0,
Jan 23 05:58:40 np0005593232 sad_edison[408075]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:58:40 np0005593232 sad_edison[408075]:        "type": "bluestore"
Jan 23 05:58:40 np0005593232 sad_edison[408075]:    }
Jan 23 05:58:40 np0005593232 sad_edison[408075]: }
Jan 23 05:58:40 np0005593232 systemd[1]: libpod-d1d120d204233c7887ccbf8c0469fd1cdc3e9531888c245cc62f0c30099ac5c6.scope: Deactivated successfully.
Jan 23 05:58:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:40 np0005593232 podman[408057]: 2026-01-23 10:58:40.382989806 +0000 UTC m=+0.986565585 container died d1d120d204233c7887ccbf8c0469fd1cdc3e9531888c245cc62f0c30099ac5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_edison, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:58:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:40.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:40 np0005593232 systemd[1]: var-lib-containers-storage-overlay-5bd60e2ecd36aaaa2e91967fdbcda219bb8646bf43a6f9cee911726b82259d9e-merged.mount: Deactivated successfully.
Jan 23 05:58:40 np0005593232 podman[408057]: 2026-01-23 10:58:40.43835902 +0000 UTC m=+1.041934799 container remove d1d120d204233c7887ccbf8c0469fd1cdc3e9531888c245cc62f0c30099ac5c6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_edison, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 23 05:58:40 np0005593232 systemd[1]: libpod-conmon-d1d120d204233c7887ccbf8c0469fd1cdc3e9531888c245cc62f0c30099ac5c6.scope: Deactivated successfully.
Jan 23 05:58:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:58:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:58:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:58:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:58:40 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 450dfb2d-3142-45c6-a3c1-3320d3ed0a60 does not exist
Jan 23 05:58:40 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c8daa426-f8ed-42ff-a7ad-96d01eb7c295 does not exist
Jan 23 05:58:40 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d2a5ff74-04aa-459d-bf6c-130b8cf64f7a does not exist
Jan 23 05:58:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:40.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:58:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1735419528' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:58:40 np0005593232 nova_compute[250269]: 2026-01-23 10:58:40.829 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:58:40 np0005593232 nova_compute[250269]: 2026-01-23 10:58:40.989 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:58:40 np0005593232 nova_compute[250269]: 2026-01-23 10:58:40.993 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4051MB free_disk=20.942729949951172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:58:40 np0005593232 nova_compute[250269]: 2026-01-23 10:58:40.994 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:58:40 np0005593232 nova_compute[250269]: 2026-01-23 10:58:40.994 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:58:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3986: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 3.0 KiB/s wr, 0 op/s
Jan 23 05:58:41 np0005593232 nova_compute[250269]: 2026-01-23 10:58:41.062 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:58:41 np0005593232 nova_compute[250269]: 2026-01-23 10:58:41.063 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:58:41 np0005593232 nova_compute[250269]: 2026-01-23 10:58:41.083 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:58:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:58:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1869907766' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:58:41 np0005593232 nova_compute[250269]: 2026-01-23 10:58:41.513 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:58:41 np0005593232 nova_compute[250269]: 2026-01-23 10:58:41.519 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:58:41 np0005593232 nova_compute[250269]: 2026-01-23 10:58:41.537 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:58:41 np0005593232 nova_compute[250269]: 2026-01-23 10:58:41.539 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:58:41 np0005593232 nova_compute[250269]: 2026-01-23 10:58:41.540 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:58:41 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:58:41 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:58:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:42.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:58:42.681 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:58:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:58:42.681 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:58:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:58:42.682 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:58:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:42.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:58:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3987: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.1 KiB/s rd, 4.0 KiB/s wr, 5 op/s
Jan 23 05:58:43 np0005593232 nova_compute[250269]: 2026-01-23 10:58:43.794 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:44.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 05:58:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/952995353' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 05:58:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 05:58:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/952995353' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 05:58:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:44.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:44 np0005593232 nova_compute[250269]: 2026-01-23 10:58:44.934 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3988: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.4 KiB/s rd, 3.0 KiB/s wr, 5 op/s
Jan 23 05:58:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:46.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:46.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3989: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.4 KiB/s rd, 3.0 KiB/s wr, 5 op/s
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #204. Immutable memtables: 0.
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:58:47.804159) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 204
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165927804194, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 717, "num_deletes": 256, "total_data_size": 982941, "memory_usage": 996224, "flush_reason": "Manual Compaction"}
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #205: started
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165927812373, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 205, "file_size": 974112, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 87303, "largest_seqno": 88019, "table_properties": {"data_size": 970325, "index_size": 1566, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8391, "raw_average_key_size": 19, "raw_value_size": 962768, "raw_average_value_size": 2183, "num_data_blocks": 69, "num_entries": 441, "num_filter_entries": 441, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769165874, "oldest_key_time": 1769165874, "file_creation_time": 1769165927, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 205, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 8264 microseconds, and 3280 cpu microseconds.
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:58:47.812422) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #205: 974112 bytes OK
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:58:47.812439) [db/memtable_list.cc:519] [default] Level-0 commit table #205 started
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:58:47.814397) [db/memtable_list.cc:722] [default] Level-0 commit table #205: memtable #1 done
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:58:47.814410) EVENT_LOG_v1 {"time_micros": 1769165927814405, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:58:47.814424) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 979238, prev total WAL file size 984639, number of live WAL files 2.
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000201.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:58:47.815163) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373730' seq:72057594037927935, type:22 .. '6C6F676D0034303232' seq:0, type:0; will stop at (end)
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [205(951KB)], [203(11MB)]
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165927815219, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [205], "files_L6": [203], "score": -1, "input_data_size": 12629395, "oldest_snapshot_seqno": -1}
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.0 total, 600.0 interval#012Cumulative writes: 19K writes, 87K keys, 19K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.02 MB/s#012Cumulative WAL: 19K writes, 19K syncs, 1.00 writes per sync, written: 0.13 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1456 writes, 6914 keys, 1456 commit groups, 1.0 writes per commit group, ingest: 10.20 MB, 0.02 MB/s#012Interval WAL: 1456 writes, 1456 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/1   951.28 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     43.2      2.77              0.47        64    0.043       0      0       0.0       0.0#012  L6      1/1   11.12 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.4     80.7     69.0      9.22              2.29        62    0.149    492K    33K       0.0       0.0#012 Sum      2/2   12.04 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.3     62.0     63.0     11.98              2.76       126    0.095    492K    33K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.4     40.1     39.9      2.08              0.36        13    0.160     68K   3047       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0     80.7     69.0      9.22              2.29        62    0.149    492K    33K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     43.2      2.76              0.47        63    0.044       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 7200.0 total, 600.0 interval#012Flush(GB): cumulative 0.117, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.74 GB write, 0.10 MB/s write, 0.73 GB read, 0.10 MB/s read, 12.0 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 2.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a844231f0#2 capacity: 304.00 MB usage: 81.65 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000719 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4703,78.18 MB,25.7176%) FilterBlock(127,1.34 MB,0.439358%) IndexBlock(127,2.12 MB,0.698341%) Misc(2,8.08 KB,0.002595%)#012#012** File Read Latency Histogram By Level [default] **
Jan 23 05:58:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:58:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:58:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:58:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:58:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002170865903592034 of space, bias 1.0, pg target 0.6512597710776102 quantized to 32 (current 32)
Jan 23 05:58:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:58:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 05:58:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:58:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 05:58:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:58:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 05:58:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:58:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 05:58:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:58:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:58:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:58:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 05:58:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:58:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 05:58:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:58:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:58:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:58:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #206: 11090 keys, 12495175 bytes, temperature: kUnknown
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165927961111, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 206, "file_size": 12495175, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12426600, "index_size": 39726, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27781, "raw_key_size": 294290, "raw_average_key_size": 26, "raw_value_size": 12235809, "raw_average_value_size": 1103, "num_data_blocks": 1499, "num_entries": 11090, "num_filter_entries": 11090, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769165927, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 206, "seqno_to_time_mapping": "N/A"}}
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:58:47.961418) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 12495175 bytes
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:58:47.982250) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 86.5 rd, 85.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.1 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(25.8) write-amplify(12.8) OK, records in: 11614, records dropped: 524 output_compression: NoCompression
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:58:47.982305) EVENT_LOG_v1 {"time_micros": 1769165927982286, "job": 128, "event": "compaction_finished", "compaction_time_micros": 145969, "compaction_time_cpu_micros": 59293, "output_level": 6, "num_output_files": 1, "total_output_size": 12495175, "num_input_records": 11614, "num_output_records": 11090, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000205.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165927982735, "job": 128, "event": "table_file_deletion", "file_number": 205}
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769165927984973, "job": 128, "event": "table_file_deletion", "file_number": 203}
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:58:47.815011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:58:47.985078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:58:47.985085) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:58:47.985086) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:58:47.985088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:58:47 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-10:58:47.985089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 05:58:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:58:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:48.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:58:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:58:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:48.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:58:48 np0005593232 nova_compute[250269]: 2026-01-23 10:58:48.851 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3990: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.4 KiB/s rd, 3.0 KiB/s wr, 5 op/s
Jan 23 05:58:49 np0005593232 nova_compute[250269]: 2026-01-23 10:58:49.535 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:58:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:58:49.579 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=92, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=91) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:58:49 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:58:49.579 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:58:49 np0005593232 nova_compute[250269]: 2026-01-23 10:58:49.580 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:49 np0005593232 nova_compute[250269]: 2026-01-23 10:58:49.971 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:58:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:50.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:58:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:58:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:50.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:58:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3991: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 5 op/s
Jan 23 05:58:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:58:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:52.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:58:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:58:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:52.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:58:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:58:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3992: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 2.2 KiB/s wr, 32 op/s
Jan 23 05:58:53 np0005593232 nova_compute[250269]: 2026-01-23 10:58:53.853 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:58:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:54.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:58:54 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:58:54.582 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '92'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 05:58:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:58:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:54.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:58:54 np0005593232 nova_compute[250269]: 2026-01-23 10:58:54.974 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3993: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 05:58:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:58:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:56.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:58:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:58:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:56.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:58:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3994: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 05:58:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:58:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:58:58.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:58:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:58:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:58:58.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:58:58 np0005593232 nova_compute[250269]: 2026-01-23 10:58:58.856 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:58:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3995: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 05:59:00 np0005593232 nova_compute[250269]: 2026-01-23 10:59:00.013 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:00.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:00.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3996: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 05:59:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:02.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:02.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:59:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3997: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 05:59:03 np0005593232 nova_compute[250269]: 2026-01-23 10:59:03.859 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:04.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:59:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:04.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:59:05 np0005593232 nova_compute[250269]: 2026-01-23 10:59:05.017 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3998: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 23 05:59:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:59:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:06.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:59:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:59:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:06.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:59:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v3999: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 23 05:59:07 np0005593232 podman[408286]: 2026-01-23 10:59:07.522948696 +0000 UTC m=+0.165095073 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 05:59:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:59:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:59:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:59:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:59:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:59:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:59:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:59:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:59:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:08.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:59:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:08.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:08 np0005593232 nova_compute[250269]: 2026-01-23 10:59:08.908 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4000: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:59:10 np0005593232 nova_compute[250269]: 2026-01-23 10:59:10.079 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:59:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:10.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:59:10 np0005593232 podman[408316]: 2026-01-23 10:59:10.439263053 +0000 UTC m=+0.084688978 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 05:59:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:59:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:10.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:59:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4001: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:59:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:12.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:12.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:59:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4002: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:59:13 np0005593232 nova_compute[250269]: 2026-01-23 10:59:13.911 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:59:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:14.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:59:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:14.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4003: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 05:59:15 np0005593232 nova_compute[250269]: 2026-01-23 10:59:15.081 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:16.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:59:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:16.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:59:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4004: 321 pgs: 321 active+clean; 141 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 511 B/s rd, 934 KiB/s wr, 1 op/s
Jan 23 05:59:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:59:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:18.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:18.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:18 np0005593232 nova_compute[250269]: 2026-01-23 10:59:18.940 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4005: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Jan 23 05:59:20 np0005593232 nova_compute[250269]: 2026-01-23 10:59:20.084 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:59:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:20.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:59:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:20.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4006: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Jan 23 05:59:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:22.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:59:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:22.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4007: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Jan 23 05:59:23 np0005593232 nova_compute[250269]: 2026-01-23 10:59:23.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:59:23 np0005593232 nova_compute[250269]: 2026-01-23 10:59:23.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:59:23 np0005593232 nova_compute[250269]: 2026-01-23 10:59:23.943 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:59:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:24.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:59:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:24.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4008: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 72 op/s
Jan 23 05:59:25 np0005593232 nova_compute[250269]: 2026-01-23 10:59:25.087 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:26 np0005593232 nova_compute[250269]: 2026-01-23 10:59:26.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:59:26 np0005593232 nova_compute[250269]: 2026-01-23 10:59:26.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:59:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:26.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:26.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4009: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 23 05:59:27 np0005593232 nova_compute[250269]: 2026-01-23 10:59:27.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:59:27 np0005593232 nova_compute[250269]: 2026-01-23 10:59:27.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 05:59:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:59:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:59:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:28.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:59:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:28.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:29 np0005593232 nova_compute[250269]: 2026-01-23 10:59:29.001 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4010: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 893 KiB/s wr, 99 op/s
Jan 23 05:59:29 np0005593232 nova_compute[250269]: 2026-01-23 10:59:29.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:59:29 np0005593232 nova_compute[250269]: 2026-01-23 10:59:29.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 23 05:59:30 np0005593232 nova_compute[250269]: 2026-01-23 10:59:30.132 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:59:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:30.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:59:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:30.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4011: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 05:59:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:32.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:59:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:32.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4012: 321 pgs: 321 active+clean; 170 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 436 KiB/s wr, 78 op/s
Jan 23 05:59:34 np0005593232 nova_compute[250269]: 2026-01-23 10:59:34.005 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:34.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:59:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:34.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:59:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4013: 321 pgs: 321 active+clean; 170 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 904 KiB/s rd, 423 KiB/s wr, 33 op/s
Jan 23 05:59:35 np0005593232 nova_compute[250269]: 2026-01-23 10:59:35.134 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 05:59:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:36.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 05:59:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:36.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4014: 321 pgs: 321 active+clean; 188 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 981 KiB/s rd, 1.5 MiB/s wr, 50 op/s
Jan 23 05:59:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_10:59:37
Jan 23 05:59:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 05:59:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 05:59:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['volumes', '.rgw.root', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'vms', '.mgr', 'cephfs.cephfs.data', 'backups']
Jan 23 05:59:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 05:59:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:59:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:59:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:59:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:59:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 05:59:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 05:59:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:59:37 np0005593232 ceph-mgr[74726]: client.0 ms_handle_reset on v2:192.168.122.100:6800/530399322
Jan 23 05:59:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:38.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:38 np0005593232 podman[408453]: 2026-01-23 10:59:38.505925608 +0000 UTC m=+0.160001239 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 23 05:59:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:59:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:38.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:59:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 05:59:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 05:59:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:59:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 05:59:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:59:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 05:59:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:59:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 05:59:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:59:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 05:59:39 np0005593232 nova_compute[250269]: 2026-01-23 10:59:39.008 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4015: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 23 05:59:39 np0005593232 nova_compute[250269]: 2026-01-23 10:59:39.673 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:59:39 np0005593232 nova_compute[250269]: 2026-01-23 10:59:39.673 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 05:59:39 np0005593232 nova_compute[250269]: 2026-01-23 10:59:39.674 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 05:59:39 np0005593232 nova_compute[250269]: 2026-01-23 10:59:39.697 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 05:59:39 np0005593232 nova_compute[250269]: 2026-01-23 10:59:39.697 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:59:39 np0005593232 nova_compute[250269]: 2026-01-23 10:59:39.697 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:59:40 np0005593232 nova_compute[250269]: 2026-01-23 10:59:40.137 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:40.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:40.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4016: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 23 05:59:41 np0005593232 podman[408505]: 2026-01-23 10:59:41.161981876 +0000 UTC m=+0.059164163 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 05:59:41 np0005593232 nova_compute[250269]: 2026-01-23 10:59:41.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 05:59:41 np0005593232 nova_compute[250269]: 2026-01-23 10:59:41.316 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:59:41 np0005593232 nova_compute[250269]: 2026-01-23 10:59:41.317 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:59:41 np0005593232 nova_compute[250269]: 2026-01-23 10:59:41.317 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:59:41 np0005593232 nova_compute[250269]: 2026-01-23 10:59:41.317 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 05:59:41 np0005593232 nova_compute[250269]: 2026-01-23 10:59:41.317 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:59:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:59:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1559935842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:59:41 np0005593232 nova_compute[250269]: 2026-01-23 10:59:41.775 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:59:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:59:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:59:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 05:59:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:59:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 05:59:41 np0005593232 nova_compute[250269]: 2026-01-23 10:59:41.965 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 05:59:41 np0005593232 nova_compute[250269]: 2026-01-23 10:59:41.966 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4107MB free_disk=20.942886352539062GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 05:59:41 np0005593232 nova_compute[250269]: 2026-01-23 10:59:41.967 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:59:41 np0005593232 nova_compute[250269]: 2026-01-23 10:59:41.967 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:59:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:59:41 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7fd87f24-30fc-4ec1-ad61-9600d71cd34d does not exist
Jan 23 05:59:41 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 74c4fad5-06aa-40a9-80be-b8bed377ed80 does not exist
Jan 23 05:59:41 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 244984b3-4458-42cb-b8c4-a81704ecb51d does not exist
Jan 23 05:59:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 05:59:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 05:59:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 05:59:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:59:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 05:59:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 05:59:42 np0005593232 nova_compute[250269]: 2026-01-23 10:59:42.024 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 05:59:42 np0005593232 nova_compute[250269]: 2026-01-23 10:59:42.025 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 05:59:42 np0005593232 nova_compute[250269]: 2026-01-23 10:59:42.061 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 05:59:42 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 05:59:42 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:59:42 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 05:59:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:42.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 05:59:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1658423723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 05:59:42 np0005593232 nova_compute[250269]: 2026-01-23 10:59:42.504 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 05:59:42 np0005593232 nova_compute[250269]: 2026-01-23 10:59:42.510 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 05:59:42 np0005593232 nova_compute[250269]: 2026-01-23 10:59:42.531 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 05:59:42 np0005593232 nova_compute[250269]: 2026-01-23 10:59:42.533 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 05:59:42 np0005593232 nova_compute[250269]: 2026-01-23 10:59:42.533 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:59:42 np0005593232 podman[408812]: 2026-01-23 10:59:42.542456706 +0000 UTC m=+0.044781593 container create cbe37e78f6d8d680840e6779f8202e88bd1ff0ae50c8bcdcd70e03e6a2425710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:59:42 np0005593232 systemd[1]: Started libpod-conmon-cbe37e78f6d8d680840e6779f8202e88bd1ff0ae50c8bcdcd70e03e6a2425710.scope.
Jan 23 05:59:42 np0005593232 podman[408812]: 2026-01-23 10:59:42.518562007 +0000 UTC m=+0.020886924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:59:42 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:59:42 np0005593232 podman[408812]: 2026-01-23 10:59:42.635417729 +0000 UTC m=+0.137742646 container init cbe37e78f6d8d680840e6779f8202e88bd1ff0ae50c8bcdcd70e03e6a2425710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lamarr, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:59:42 np0005593232 podman[408812]: 2026-01-23 10:59:42.641667077 +0000 UTC m=+0.143991974 container start cbe37e78f6d8d680840e6779f8202e88bd1ff0ae50c8bcdcd70e03e6a2425710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lamarr, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:59:42 np0005593232 podman[408812]: 2026-01-23 10:59:42.645803074 +0000 UTC m=+0.148127971 container attach cbe37e78f6d8d680840e6779f8202e88bd1ff0ae50c8bcdcd70e03e6a2425710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:59:42 np0005593232 recursing_lamarr[408829]: 167 167
Jan 23 05:59:42 np0005593232 systemd[1]: libpod-cbe37e78f6d8d680840e6779f8202e88bd1ff0ae50c8bcdcd70e03e6a2425710.scope: Deactivated successfully.
Jan 23 05:59:42 np0005593232 podman[408812]: 2026-01-23 10:59:42.647922794 +0000 UTC m=+0.150247701 container died cbe37e78f6d8d680840e6779f8202e88bd1ff0ae50c8bcdcd70e03e6a2425710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lamarr, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 05:59:42 np0005593232 systemd[1]: var-lib-containers-storage-overlay-619ea776b60906ddb46ce4559e0b83c724fce5541d62f8237b0f47ed2ad70c11-merged.mount: Deactivated successfully.
Jan 23 05:59:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:59:42.681 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 05:59:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:59:42.683 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 05:59:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:59:42.684 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 05:59:42 np0005593232 podman[408812]: 2026-01-23 10:59:42.691291247 +0000 UTC m=+0.193616134 container remove cbe37e78f6d8d680840e6779f8202e88bd1ff0ae50c8bcdcd70e03e6a2425710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:59:42 np0005593232 systemd[1]: libpod-conmon-cbe37e78f6d8d680840e6779f8202e88bd1ff0ae50c8bcdcd70e03e6a2425710.scope: Deactivated successfully.
Jan 23 05:59:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:59:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:42.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:42 np0005593232 podman[408852]: 2026-01-23 10:59:42.875788462 +0000 UTC m=+0.049295513 container create 4f91cc63264f63e10efdbcd4bcd6489b01c8855ab1300a18ad0d752ab7296617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 23 05:59:42 np0005593232 systemd[1]: Started libpod-conmon-4f91cc63264f63e10efdbcd4bcd6489b01c8855ab1300a18ad0d752ab7296617.scope.
Jan 23 05:59:42 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:59:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee7f6c21cbacd8788be6de9637f1d026234c9bbe190775bab82fb2afac2f11e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:59:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee7f6c21cbacd8788be6de9637f1d026234c9bbe190775bab82fb2afac2f11e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:59:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee7f6c21cbacd8788be6de9637f1d026234c9bbe190775bab82fb2afac2f11e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:59:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee7f6c21cbacd8788be6de9637f1d026234c9bbe190775bab82fb2afac2f11e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:59:42 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee7f6c21cbacd8788be6de9637f1d026234c9bbe190775bab82fb2afac2f11e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 05:59:42 np0005593232 podman[408852]: 2026-01-23 10:59:42.856923825 +0000 UTC m=+0.030430926 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:59:42 np0005593232 podman[408852]: 2026-01-23 10:59:42.962936409 +0000 UTC m=+0.136443490 container init 4f91cc63264f63e10efdbcd4bcd6489b01c8855ab1300a18ad0d752ab7296617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 05:59:42 np0005593232 podman[408852]: 2026-01-23 10:59:42.970601827 +0000 UTC m=+0.144108878 container start 4f91cc63264f63e10efdbcd4bcd6489b01c8855ab1300a18ad0d752ab7296617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_meninsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:59:42 np0005593232 podman[408852]: 2026-01-23 10:59:42.976219876 +0000 UTC m=+0.149726967 container attach 4f91cc63264f63e10efdbcd4bcd6489b01c8855ab1300a18ad0d752ab7296617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:59:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4017: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 23 05:59:43 np0005593232 quizzical_meninsky[408869]: --> passed data devices: 0 physical, 1 LVM
Jan 23 05:59:43 np0005593232 quizzical_meninsky[408869]: --> relative data size: 1.0
Jan 23 05:59:43 np0005593232 quizzical_meninsky[408869]: --> All data devices are unavailable
Jan 23 05:59:43 np0005593232 systemd[1]: libpod-4f91cc63264f63e10efdbcd4bcd6489b01c8855ab1300a18ad0d752ab7296617.scope: Deactivated successfully.
Jan 23 05:59:43 np0005593232 podman[408852]: 2026-01-23 10:59:43.789568565 +0000 UTC m=+0.963075646 container died 4f91cc63264f63e10efdbcd4bcd6489b01c8855ab1300a18ad0d752ab7296617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 05:59:43 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4ee7f6c21cbacd8788be6de9637f1d026234c9bbe190775bab82fb2afac2f11e-merged.mount: Deactivated successfully.
Jan 23 05:59:43 np0005593232 podman[408852]: 2026-01-23 10:59:43.854217043 +0000 UTC m=+1.027724094 container remove 4f91cc63264f63e10efdbcd4bcd6489b01c8855ab1300a18ad0d752ab7296617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 23 05:59:43 np0005593232 systemd[1]: libpod-conmon-4f91cc63264f63e10efdbcd4bcd6489b01c8855ab1300a18ad0d752ab7296617.scope: Deactivated successfully.
Jan 23 05:59:44 np0005593232 nova_compute[250269]: 2026-01-23 10:59:44.009 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:44.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:44 np0005593232 podman[409038]: 2026-01-23 10:59:44.549373533 +0000 UTC m=+0.050175758 container create e626938df3a2a5f22baaa7896c7b5c2840a10fcfe48df79d925695409cb72eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 05:59:44 np0005593232 systemd[1]: Started libpod-conmon-e626938df3a2a5f22baaa7896c7b5c2840a10fcfe48df79d925695409cb72eeb.scope.
Jan 23 05:59:44 np0005593232 podman[409038]: 2026-01-23 10:59:44.529550809 +0000 UTC m=+0.030353034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:59:44 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:59:44 np0005593232 podman[409038]: 2026-01-23 10:59:44.657503176 +0000 UTC m=+0.158305431 container init e626938df3a2a5f22baaa7896c7b5c2840a10fcfe48df79d925695409cb72eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 05:59:44 np0005593232 podman[409038]: 2026-01-23 10:59:44.670147486 +0000 UTC m=+0.170949711 container start e626938df3a2a5f22baaa7896c7b5c2840a10fcfe48df79d925695409cb72eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 05:59:44 np0005593232 podman[409038]: 2026-01-23 10:59:44.674386176 +0000 UTC m=+0.175188461 container attach e626938df3a2a5f22baaa7896c7b5c2840a10fcfe48df79d925695409cb72eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_montalcini, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 05:59:44 np0005593232 zealous_montalcini[409054]: 167 167
Jan 23 05:59:44 np0005593232 systemd[1]: libpod-e626938df3a2a5f22baaa7896c7b5c2840a10fcfe48df79d925695409cb72eeb.scope: Deactivated successfully.
Jan 23 05:59:44 np0005593232 podman[409038]: 2026-01-23 10:59:44.680012416 +0000 UTC m=+0.180814641 container died e626938df3a2a5f22baaa7896c7b5c2840a10fcfe48df79d925695409cb72eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_montalcini, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 05:59:44 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4748700da0524506694c8f26214e85ded43c96545d56abe324bcdaa0912f14f5-merged.mount: Deactivated successfully.
Jan 23 05:59:44 np0005593232 podman[409038]: 2026-01-23 10:59:44.736945004 +0000 UTC m=+0.237747229 container remove e626938df3a2a5f22baaa7896c7b5c2840a10fcfe48df79d925695409cb72eeb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_montalcini, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:59:44 np0005593232 systemd[1]: libpod-conmon-e626938df3a2a5f22baaa7896c7b5c2840a10fcfe48df79d925695409cb72eeb.scope: Deactivated successfully.
Jan 23 05:59:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:44.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:44 np0005593232 podman[409076]: 2026-01-23 10:59:44.940036057 +0000 UTC m=+0.045615627 container create 1c9de34ccad05499800ce8b5e26bae31666ef4ee374dafa41ed309e9c9a3701d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcnulty, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 23 05:59:44 np0005593232 systemd[1]: Started libpod-conmon-1c9de34ccad05499800ce8b5e26bae31666ef4ee374dafa41ed309e9c9a3701d.scope.
Jan 23 05:59:45 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:59:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77335f19f8161e185dd37a42e432cad340caeae5d87836b1139f78fc7b405bc9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:59:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77335f19f8161e185dd37a42e432cad340caeae5d87836b1139f78fc7b405bc9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:59:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77335f19f8161e185dd37a42e432cad340caeae5d87836b1139f78fc7b405bc9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:59:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77335f19f8161e185dd37a42e432cad340caeae5d87836b1139f78fc7b405bc9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:59:45 np0005593232 podman[409076]: 2026-01-23 10:59:44.921590923 +0000 UTC m=+0.027170473 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:59:45 np0005593232 podman[409076]: 2026-01-23 10:59:45.023460349 +0000 UTC m=+0.129039909 container init 1c9de34ccad05499800ce8b5e26bae31666ef4ee374dafa41ed309e9c9a3701d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcnulty, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 23 05:59:45 np0005593232 podman[409076]: 2026-01-23 10:59:45.033211716 +0000 UTC m=+0.138791246 container start 1c9de34ccad05499800ce8b5e26bae31666ef4ee374dafa41ed309e9c9a3701d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcnulty, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 05:59:45 np0005593232 podman[409076]: 2026-01-23 10:59:45.038831516 +0000 UTC m=+0.144411076 container attach 1c9de34ccad05499800ce8b5e26bae31666ef4ee374dafa41ed309e9c9a3701d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcnulty, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 05:59:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4018: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 324 KiB/s rd, 1.7 MiB/s wr, 59 op/s
Jan 23 05:59:45 np0005593232 nova_compute[250269]: 2026-01-23 10:59:45.140 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]: {
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:    "0": [
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:        {
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:            "devices": [
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:                "/dev/loop3"
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:            ],
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:            "lv_name": "ceph_lv0",
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:            "lv_size": "7511998464",
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:            "name": "ceph_lv0",
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:            "tags": {
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:                "ceph.cephx_lockbox_secret": "",
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:                "ceph.cluster_name": "ceph",
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:                "ceph.crush_device_class": "",
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:                "ceph.encrypted": "0",
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:                "ceph.osd_id": "0",
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:                "ceph.type": "block",
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:                "ceph.vdo": "0"
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:            },
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:            "type": "block",
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:            "vg_name": "ceph_vg0"
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:        }
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]:    ]
Jan 23 05:59:45 np0005593232 modest_mcnulty[409093]: }
Jan 23 05:59:45 np0005593232 systemd[1]: libpod-1c9de34ccad05499800ce8b5e26bae31666ef4ee374dafa41ed309e9c9a3701d.scope: Deactivated successfully.
Jan 23 05:59:45 np0005593232 podman[409076]: 2026-01-23 10:59:45.817681895 +0000 UTC m=+0.923261465 container died 1c9de34ccad05499800ce8b5e26bae31666ef4ee374dafa41ed309e9c9a3701d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcnulty, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 23 05:59:45 np0005593232 systemd[1]: var-lib-containers-storage-overlay-77335f19f8161e185dd37a42e432cad340caeae5d87836b1139f78fc7b405bc9-merged.mount: Deactivated successfully.
Jan 23 05:59:45 np0005593232 podman[409076]: 2026-01-23 10:59:45.895077295 +0000 UTC m=+1.000656835 container remove 1c9de34ccad05499800ce8b5e26bae31666ef4ee374dafa41ed309e9c9a3701d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_mcnulty, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 05:59:45 np0005593232 systemd[1]: libpod-conmon-1c9de34ccad05499800ce8b5e26bae31666ef4ee374dafa41ed309e9c9a3701d.scope: Deactivated successfully.
Jan 23 05:59:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:59:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:46.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:59:46 np0005593232 podman[409259]: 2026-01-23 10:59:46.670749773 +0000 UTC m=+0.085208733 container create 2f546a1eb0a27d24ed47d7645dadde0c1a5192635cdd0794dd4d130d3b0e9a7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 05:59:46 np0005593232 podman[409259]: 2026-01-23 10:59:46.61542206 +0000 UTC m=+0.029881090 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:59:46 np0005593232 systemd[1]: Started libpod-conmon-2f546a1eb0a27d24ed47d7645dadde0c1a5192635cdd0794dd4d130d3b0e9a7e.scope.
Jan 23 05:59:46 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:59:46 np0005593232 podman[409259]: 2026-01-23 10:59:46.756317745 +0000 UTC m=+0.170776705 container init 2f546a1eb0a27d24ed47d7645dadde0c1a5192635cdd0794dd4d130d3b0e9a7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mclaren, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 05:59:46 np0005593232 podman[409259]: 2026-01-23 10:59:46.762231864 +0000 UTC m=+0.176690804 container start 2f546a1eb0a27d24ed47d7645dadde0c1a5192635cdd0794dd4d130d3b0e9a7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 05:59:46 np0005593232 podman[409259]: 2026-01-23 10:59:46.765143506 +0000 UTC m=+0.179602546 container attach 2f546a1eb0a27d24ed47d7645dadde0c1a5192635cdd0794dd4d130d3b0e9a7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 05:59:46 np0005593232 elastic_mclaren[409275]: 167 167
Jan 23 05:59:46 np0005593232 systemd[1]: libpod-2f546a1eb0a27d24ed47d7645dadde0c1a5192635cdd0794dd4d130d3b0e9a7e.scope: Deactivated successfully.
Jan 23 05:59:46 np0005593232 podman[409259]: 2026-01-23 10:59:46.769562752 +0000 UTC m=+0.184021702 container died 2f546a1eb0a27d24ed47d7645dadde0c1a5192635cdd0794dd4d130d3b0e9a7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 05:59:46 np0005593232 systemd[1]: var-lib-containers-storage-overlay-8db561514d84d64103431eb279394b637b6867625e1879eaabad77e7c4b13395-merged.mount: Deactivated successfully.
Jan 23 05:59:46 np0005593232 podman[409259]: 2026-01-23 10:59:46.815113637 +0000 UTC m=+0.229572597 container remove 2f546a1eb0a27d24ed47d7645dadde0c1a5192635cdd0794dd4d130d3b0e9a7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mclaren, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 23 05:59:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:46 np0005593232 systemd[1]: libpod-conmon-2f546a1eb0a27d24ed47d7645dadde0c1a5192635cdd0794dd4d130d3b0e9a7e.scope: Deactivated successfully.
Jan 23 05:59:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:46.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:47 np0005593232 podman[409298]: 2026-01-23 10:59:47.002557374 +0000 UTC m=+0.047165122 container create bd5827e1c20a99ed87e0fdb371a50edbb33ea0e568fb5eca1523585d221693a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 23 05:59:47 np0005593232 systemd[1]: Started libpod-conmon-bd5827e1c20a99ed87e0fdb371a50edbb33ea0e568fb5eca1523585d221693a2.scope.
Jan 23 05:59:47 np0005593232 systemd[1]: Started libcrun container.
Jan 23 05:59:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af3e3918d823e24bc819461a12194da7a229b7bddf02ce04afb20a92ba8b3868/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 05:59:47 np0005593232 podman[409298]: 2026-01-23 10:59:46.981426703 +0000 UTC m=+0.026034441 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 05:59:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af3e3918d823e24bc819461a12194da7a229b7bddf02ce04afb20a92ba8b3868/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 05:59:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af3e3918d823e24bc819461a12194da7a229b7bddf02ce04afb20a92ba8b3868/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 05:59:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af3e3918d823e24bc819461a12194da7a229b7bddf02ce04afb20a92ba8b3868/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 05:59:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4019: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 324 KiB/s rd, 1.7 MiB/s wr, 59 op/s
Jan 23 05:59:47 np0005593232 podman[409298]: 2026-01-23 10:59:47.106457947 +0000 UTC m=+0.151065755 container init bd5827e1c20a99ed87e0fdb371a50edbb33ea0e568fb5eca1523585d221693a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 05:59:47 np0005593232 podman[409298]: 2026-01-23 10:59:47.118360936 +0000 UTC m=+0.162968654 container start bd5827e1c20a99ed87e0fdb371a50edbb33ea0e568fb5eca1523585d221693a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 05:59:47 np0005593232 podman[409298]: 2026-01-23 10:59:47.121607338 +0000 UTC m=+0.166215096 container attach bd5827e1c20a99ed87e0fdb371a50edbb33ea0e568fb5eca1523585d221693a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 23 05:59:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:59:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 05:59:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:59:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 05:59:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:59:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021688666131583845 of space, bias 1.0, pg target 0.6506599839475153 quantized to 32 (current 32)
Jan 23 05:59:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:59:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 05:59:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:59:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 05:59:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:59:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 05:59:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:59:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 05:59:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:59:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:59:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:59:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 05:59:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:59:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 05:59:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:59:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 05:59:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 05:59:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 05:59:48 np0005593232 epic_brahmagupta[409315]: {
Jan 23 05:59:48 np0005593232 epic_brahmagupta[409315]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 05:59:48 np0005593232 epic_brahmagupta[409315]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 05:59:48 np0005593232 epic_brahmagupta[409315]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 05:59:48 np0005593232 epic_brahmagupta[409315]:        "osd_id": 0,
Jan 23 05:59:48 np0005593232 epic_brahmagupta[409315]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 05:59:48 np0005593232 epic_brahmagupta[409315]:        "type": "bluestore"
Jan 23 05:59:48 np0005593232 epic_brahmagupta[409315]:    }
Jan 23 05:59:48 np0005593232 epic_brahmagupta[409315]: }
Jan 23 05:59:48 np0005593232 systemd[1]: libpod-bd5827e1c20a99ed87e0fdb371a50edbb33ea0e568fb5eca1523585d221693a2.scope: Deactivated successfully.
Jan 23 05:59:48 np0005593232 podman[409298]: 2026-01-23 10:59:48.06135663 +0000 UTC m=+1.105964358 container died bd5827e1c20a99ed87e0fdb371a50edbb33ea0e568fb5eca1523585d221693a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 05:59:48 np0005593232 systemd[1]: var-lib-containers-storage-overlay-af3e3918d823e24bc819461a12194da7a229b7bddf02ce04afb20a92ba8b3868-merged.mount: Deactivated successfully.
Jan 23 05:59:48 np0005593232 podman[409298]: 2026-01-23 10:59:48.11621388 +0000 UTC m=+1.160821588 container remove bd5827e1c20a99ed87e0fdb371a50edbb33ea0e568fb5eca1523585d221693a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 05:59:48 np0005593232 systemd[1]: libpod-conmon-bd5827e1c20a99ed87e0fdb371a50edbb33ea0e568fb5eca1523585d221693a2.scope: Deactivated successfully.
Jan 23 05:59:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 05:59:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:59:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 05:59:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:59:48 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev dd8eea61-025c-48af-ac66-8e0fd047f1f6 does not exist
Jan 23 05:59:48 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 3db2a6b5-2ca3-4104-82de-5ea2e48a641e does not exist
Jan 23 05:59:48 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 30001d84-11b7-45d2-8d67-a92e17b7fbae does not exist
Jan 23 05:59:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:59:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 05:59:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:59:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:48.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:59:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:48.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:49 np0005593232 nova_compute[250269]: 2026-01-23 10:59:49.012 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4020: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 248 KiB/s rd, 619 KiB/s wr, 42 op/s
Jan 23 05:59:50 np0005593232 nova_compute[250269]: 2026-01-23 10:59:50.142 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:50.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:50.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4021: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 11 KiB/s wr, 0 op/s
Jan 23 05:59:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 05:59:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:52.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 05:59:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:59:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:52.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4022: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 11 KiB/s wr, 0 op/s
Jan 23 05:59:54 np0005593232 nova_compute[250269]: 2026-01-23 10:59:54.012 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:54.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:54.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4023: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 0 op/s
Jan 23 05:59:55 np0005593232 nova_compute[250269]: 2026-01-23 10:59:55.146 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:56 np0005593232 nova_compute[250269]: 2026-01-23 10:59:56.210 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:59:56.211 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=93, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=92) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 05:59:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 10:59:56.213 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 05:59:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:56.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:56.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4024: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 682 B/s rd, 341 B/s wr, 0 op/s
Jan 23 05:59:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 05:59:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:10:59:58.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 05:59:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 05:59:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:10:59:58.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 05:59:59 np0005593232 nova_compute[250269]: 2026-01-23 10:59:59.041 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 05:59:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4025: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 852 B/s rd, 4.7 KiB/s wr, 0 op/s
Jan 23 06:00:00 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 23 06:00:00 np0005593232 nova_compute[250269]: 2026-01-23 11:00:00.201 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:00:00 np0005593232 ceph-mon[74423]: overall HEALTH_OK
Jan 23 06:00:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:00.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:00.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4026: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 4.7 KiB/s wr, 0 op/s
Jan 23 06:00:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:02.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:00:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:00:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:02.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:00:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4027: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Jan 23 06:00:03 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:00:03.216 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '93'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:00:03 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0[81752]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 23 06:00:04 np0005593232 nova_compute[250269]: 2026-01-23 11:00:04.087 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:00:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:00:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:04.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:00:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:00:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:04.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:00:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4028: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Jan 23 06:00:05 np0005593232 nova_compute[250269]: 2026-01-23 11:00:05.253 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:00:05 np0005593232 nova_compute[250269]: 2026-01-23 11:00:05.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:00:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:00:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:06.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:00:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:06.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4029: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 5.8 KiB/s wr, 27 op/s
Jan 23 06:00:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:00:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:00:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:00:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:00:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:00:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:00:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:00:08 np0005593232 nova_compute[250269]: 2026-01-23 11:00:08.484 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:00:08 np0005593232 nova_compute[250269]: 2026-01-23 11:00:08.484 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 23 06:00:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:08.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:08 np0005593232 nova_compute[250269]: 2026-01-23 11:00:08.502 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 23 06:00:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:00:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:08.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:00:09 np0005593232 nova_compute[250269]: 2026-01-23 11:00:09.089 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:00:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4030: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 5.5 KiB/s wr, 27 op/s
Jan 23 06:00:09 np0005593232 podman[409462]: 2026-01-23 11:00:09.422139711 +0000 UTC m=+0.080138399 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 06:00:10 np0005593232 nova_compute[250269]: 2026-01-23 11:00:10.255 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:00:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:00:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:10.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:00:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:00:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:10.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:00:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4031: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 06:00:11 np0005593232 podman[409489]: 2026-01-23 11:00:11.383034909 +0000 UTC m=+0.048337005 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 23 06:00:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:00:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:12.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:00:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:00:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:12.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4032: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 06:00:14 np0005593232 nova_compute[250269]: 2026-01-23 11:00:14.135 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:00:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:14.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:14.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4033: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:00:15 np0005593232 nova_compute[250269]: 2026-01-23 11:00:15.293 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:00:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:16.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:16.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4034: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:00:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:00:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:00:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:18.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:00:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:18.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4035: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:00:19 np0005593232 nova_compute[250269]: 2026-01-23 11:00:19.137 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:00:20 np0005593232 nova_compute[250269]: 2026-01-23 11:00:20.296 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:00:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:20.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:20.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4036: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:00:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:22.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:00:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:00:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:22.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:00:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4037: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:00:24 np0005593232 nova_compute[250269]: 2026-01-23 11:00:24.140 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:00:24 np0005593232 nova_compute[250269]: 2026-01-23 11:00:24.310 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:00:24 np0005593232 nova_compute[250269]: 2026-01-23 11:00:24.311 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:00:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:24.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:24.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4038: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:00:25 np0005593232 nova_compute[250269]: 2026-01-23 11:00:25.299 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:00:26 np0005593232 nova_compute[250269]: 2026-01-23 11:00:26.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:00:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:00:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:26.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:00:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:26.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4039: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:00:27 np0005593232 nova_compute[250269]: 2026-01-23 11:00:27.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:00:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:00:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:28.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:28.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4040: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:00:29 np0005593232 nova_compute[250269]: 2026-01-23 11:00:29.142 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:00:29 np0005593232 nova_compute[250269]: 2026-01-23 11:00:29.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:00:29 np0005593232 nova_compute[250269]: 2026-01-23 11:00:29.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 06:00:30 np0005593232 nova_compute[250269]: 2026-01-23 11:00:30.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:00:30 np0005593232 nova_compute[250269]: 2026-01-23 11:00:30.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 06:00:30 np0005593232 nova_compute[250269]: 2026-01-23 11:00:30.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 06:00:30 np0005593232 nova_compute[250269]: 2026-01-23 11:00:30.330 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:00:30 np0005593232 nova_compute[250269]: 2026-01-23 11:00:30.333 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 06:00:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:30.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:30.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4041: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:00:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:32.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:00:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:32.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4042: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 06:00:33 np0005593232 nova_compute[250269]: 2026-01-23 11:00:33.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:00:34 np0005593232 nova_compute[250269]: 2026-01-23 11:00:34.203 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:00:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:34.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:34.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4043: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 06:00:35 np0005593232 nova_compute[250269]: 2026-01-23 11:00:35.365 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:00:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:00:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:36.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:00:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:00:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:36.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:00:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4044: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 06:00:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_11:00:37
Jan 23 06:00:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 06:00:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 06:00:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'default.rgw.log', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'images', '.mgr']
Jan 23 06:00:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 06:00:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:00:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:00:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:00:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:00:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:00:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:00:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:00:38 np0005593232 nova_compute[250269]: 2026-01-23 11:00:38.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #207. Immutable memtables: 0.
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:00:38.413253) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 207
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166038413288, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 1161, "num_deletes": 251, "total_data_size": 1874668, "memory_usage": 1900592, "flush_reason": "Manual Compaction"}
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #208: started
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166038431948, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 208, "file_size": 1855448, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88020, "largest_seqno": 89180, "table_properties": {"data_size": 1849926, "index_size": 2916, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11948, "raw_average_key_size": 19, "raw_value_size": 1838809, "raw_average_value_size": 3054, "num_data_blocks": 130, "num_entries": 602, "num_filter_entries": 602, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769165927, "oldest_key_time": 1769165927, "file_creation_time": 1769166038, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 208, "seqno_to_time_mapping": "N/A"}}
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 18885 microseconds, and 4708 cpu microseconds.
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:00:38.432125) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #208: 1855448 bytes OK
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:00:38.432160) [db/memtable_list.cc:519] [default] Level-0 commit table #208 started
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:00:38.436134) [db/memtable_list.cc:722] [default] Level-0 commit table #208: memtable #1 done
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:00:38.436162) EVENT_LOG_v1 {"time_micros": 1769166038436152, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:00:38.436187) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 1869434, prev total WAL file size 1869434, number of live WAL files 2.
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000204.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:00:38.437305) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [208(1811KB)], [206(11MB)]
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166038437358, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [208], "files_L6": [206], "score": -1, "input_data_size": 14350623, "oldest_snapshot_seqno": -1}
Jan 23 06:00:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:38.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #209: 11173 keys, 12421984 bytes, temperature: kUnknown
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166038587778, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 209, "file_size": 12421984, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12352861, "index_size": 40082, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27973, "raw_key_size": 296679, "raw_average_key_size": 26, "raw_value_size": 12160641, "raw_average_value_size": 1088, "num_data_blocks": 1509, "num_entries": 11173, "num_filter_entries": 11173, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769166038, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 209, "seqno_to_time_mapping": "N/A"}}
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:00:38.588514) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 12421984 bytes
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:00:38.601145) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 95.3 rd, 82.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 11.9 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(14.4) write-amplify(6.7) OK, records in: 11692, records dropped: 519 output_compression: NoCompression
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:00:38.601207) EVENT_LOG_v1 {"time_micros": 1769166038601182, "job": 130, "event": "compaction_finished", "compaction_time_micros": 150632, "compaction_time_cpu_micros": 38904, "output_level": 6, "num_output_files": 1, "total_output_size": 12421984, "num_input_records": 11692, "num_output_records": 11173, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000208.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166038602278, "job": 130, "event": "table_file_deletion", "file_number": 208}
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000206.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166038608453, "job": 130, "event": "table_file_deletion", "file_number": 206}
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:00:38.437181) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:00:38.608590) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:00:38.608598) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:00:38.608602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:00:38.608605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:00:38 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:00:38.608608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:00:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 06:00:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:00:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 06:00:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:00:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:00:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:00:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:00:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:00:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:00:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:00:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:38.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4045: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 06:00:39 np0005593232 nova_compute[250269]: 2026-01-23 11:00:39.241 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:00:40 np0005593232 nova_compute[250269]: 2026-01-23 11:00:40.367 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:00:40 np0005593232 podman[409623]: 2026-01-23 11:00:40.446554458 +0000 UTC m=+0.097212774 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 06:00:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:40.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:00:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:40.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:00:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4046: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 06:00:42 np0005593232 nova_compute[250269]: 2026-01-23 11:00:42.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:00:42 np0005593232 nova_compute[250269]: 2026-01-23 11:00:42.314 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:00:42 np0005593232 nova_compute[250269]: 2026-01-23 11:00:42.314 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:00:42 np0005593232 nova_compute[250269]: 2026-01-23 11:00:42.315 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:00:42 np0005593232 nova_compute[250269]: 2026-01-23 11:00:42.315 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 06:00:42 np0005593232 nova_compute[250269]: 2026-01-23 11:00:42.315 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:00:42 np0005593232 podman[409649]: 2026-01-23 11:00:42.403214005 +0000 UTC m=+0.058035580 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 06:00:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:42.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:00:42.683 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:00:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:00:42.684 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:00:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:00:42.684 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:00:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:00:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4041272742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:00:42 np0005593232 nova_compute[250269]: 2026-01-23 11:00:42.756 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:00:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:00:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:42.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:42 np0005593232 nova_compute[250269]: 2026-01-23 11:00:42.928 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 06:00:42 np0005593232 nova_compute[250269]: 2026-01-23 11:00:42.929 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4129MB free_disk=20.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 06:00:42 np0005593232 nova_compute[250269]: 2026-01-23 11:00:42.929 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:00:42 np0005593232 nova_compute[250269]: 2026-01-23 11:00:42.930 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:00:43 np0005593232 nova_compute[250269]: 2026-01-23 11:00:43.027 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 06:00:43 np0005593232 nova_compute[250269]: 2026-01-23 11:00:43.028 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 06:00:43 np0005593232 nova_compute[250269]: 2026-01-23 11:00:43.058 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:00:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4047: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1018 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Jan 23 06:00:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:00:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3497294164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:00:43 np0005593232 nova_compute[250269]: 2026-01-23 11:00:43.519 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:00:43 np0005593232 nova_compute[250269]: 2026-01-23 11:00:43.526 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:00:43 np0005593232 nova_compute[250269]: 2026-01-23 11:00:43.552 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:00:43 np0005593232 nova_compute[250269]: 2026-01-23 11:00:43.554 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 06:00:43 np0005593232 nova_compute[250269]: 2026-01-23 11:00:43.555 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:00:44 np0005593232 nova_compute[250269]: 2026-01-23 11:00:44.243 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:00:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:00:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:44.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:00:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:00:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:44.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:00:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4048: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1001 KiB/s rd, 12 KiB/s wr, 39 op/s
Jan 23 06:00:45 np0005593232 nova_compute[250269]: 2026-01-23 11:00:45.369 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:00:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:00:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:46.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:00:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:00:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:46.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:00:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4049: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 47 op/s
Jan 23 06:00:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:00:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 06:00:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:00:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:00:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:00:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009958283896333519 of space, bias 1.0, pg target 0.2987485168900056 quantized to 32 (current 32)
Jan 23 06:00:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:00:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 06:00:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:00:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:00:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:00:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:00:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:00:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:00:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:00:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:00:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:00:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:00:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:00:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:00:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:00:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:00:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:00:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:00:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:00:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:48.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:00:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:00:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:48.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:00:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4050: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 06:00:49 np0005593232 nova_compute[250269]: 2026-01-23 11:00:49.244 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:00:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 06:00:50 np0005593232 nova_compute[250269]: 2026-01-23 11:00:50.371 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:00:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:00:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 06:00:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:00:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:00:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:50.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:00:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:50.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4051: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 06:00:51 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:00:51 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:00:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:00:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:00:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 06:00:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:00:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 06:00:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:00:51 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 68e6a1ed-d13e-42d9-bdd2-a59c57c0104e does not exist
Jan 23 06:00:51 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d1c6eebf-e7a5-4a8d-ad77-8d45ae45997f does not exist
Jan 23 06:00:51 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 0b699ab2-c109-4d38-bc6e-369b949352dc does not exist
Jan 23 06:00:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 06:00:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 06:00:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 06:00:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:00:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:00:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:00:52 np0005593232 podman[409988]: 2026-01-23 11:00:52.152223131 +0000 UTC m=+0.059927325 container create 73e0b38f15f0993881d9335f621bc3e941f5b6385dbde29d19bfad86f6f47473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:00:52 np0005593232 systemd[1]: Started libpod-conmon-73e0b38f15f0993881d9335f621bc3e941f5b6385dbde29d19bfad86f6f47473.scope.
Jan 23 06:00:52 np0005593232 podman[409988]: 2026-01-23 11:00:52.134847547 +0000 UTC m=+0.042551761 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:00:52 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:00:52 np0005593232 podman[409988]: 2026-01-23 11:00:52.256656269 +0000 UTC m=+0.164360493 container init 73e0b38f15f0993881d9335f621bc3e941f5b6385dbde29d19bfad86f6f47473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hawking, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:00:52 np0005593232 podman[409988]: 2026-01-23 11:00:52.264887833 +0000 UTC m=+0.172592037 container start 73e0b38f15f0993881d9335f621bc3e941f5b6385dbde29d19bfad86f6f47473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hawking, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 23 06:00:52 np0005593232 podman[409988]: 2026-01-23 11:00:52.268103365 +0000 UTC m=+0.175807559 container attach 73e0b38f15f0993881d9335f621bc3e941f5b6385dbde29d19bfad86f6f47473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 23 06:00:52 np0005593232 systemd[1]: libpod-73e0b38f15f0993881d9335f621bc3e941f5b6385dbde29d19bfad86f6f47473.scope: Deactivated successfully.
Jan 23 06:00:52 np0005593232 dreamy_hawking[410005]: 167 167
Jan 23 06:00:52 np0005593232 conmon[410005]: conmon 73e0b38f15f0993881d9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-73e0b38f15f0993881d9335f621bc3e941f5b6385dbde29d19bfad86f6f47473.scope/container/memory.events
Jan 23 06:00:52 np0005593232 podman[409988]: 2026-01-23 11:00:52.273425206 +0000 UTC m=+0.181129440 container died 73e0b38f15f0993881d9335f621bc3e941f5b6385dbde29d19bfad86f6f47473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hawking, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Jan 23 06:00:52 np0005593232 systemd[1]: var-lib-containers-storage-overlay-293eb27720011b072c2646bc2ae126e496ff4136e17200fd7fd86fc3066055b4-merged.mount: Deactivated successfully.
Jan 23 06:00:52 np0005593232 podman[409988]: 2026-01-23 11:00:52.321930055 +0000 UTC m=+0.229634249 container remove 73e0b38f15f0993881d9335f621bc3e941f5b6385dbde29d19bfad86f6f47473 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_hawking, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Jan 23 06:00:52 np0005593232 systemd[1]: libpod-conmon-73e0b38f15f0993881d9335f621bc3e941f5b6385dbde29d19bfad86f6f47473.scope: Deactivated successfully.
Jan 23 06:00:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:00:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:00:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:00:52 np0005593232 podman[410029]: 2026-01-23 11:00:52.487000347 +0000 UTC m=+0.047698907 container create 466f5799dbcf49d19bcd173d2a5e5f69a3012db5d49ce5b2c57f1b4c7b8ea51b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_borg, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Jan 23 06:00:52 np0005593232 systemd[1]: Started libpod-conmon-466f5799dbcf49d19bcd173d2a5e5f69a3012db5d49ce5b2c57f1b4c7b8ea51b.scope.
Jan 23 06:00:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:52.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:52 np0005593232 nova_compute[250269]: 2026-01-23 11:00:52.550 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 06:00:52 np0005593232 podman[410029]: 2026-01-23 11:00:52.46601966 +0000 UTC m=+0.026718250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:00:52 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:00:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7196cc79ba31464495c52e92edc932176e502a72cc132897a7e480c9b994047/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:00:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7196cc79ba31464495c52e92edc932176e502a72cc132897a7e480c9b994047/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:00:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7196cc79ba31464495c52e92edc932176e502a72cc132897a7e480c9b994047/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:00:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7196cc79ba31464495c52e92edc932176e502a72cc132897a7e480c9b994047/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:00:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7196cc79ba31464495c52e92edc932176e502a72cc132897a7e480c9b994047/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 06:00:52 np0005593232 podman[410029]: 2026-01-23 11:00:52.578080866 +0000 UTC m=+0.138779416 container init 466f5799dbcf49d19bcd173d2a5e5f69a3012db5d49ce5b2c57f1b4c7b8ea51b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:00:52 np0005593232 podman[410029]: 2026-01-23 11:00:52.591291771 +0000 UTC m=+0.151990331 container start 466f5799dbcf49d19bcd173d2a5e5f69a3012db5d49ce5b2c57f1b4c7b8ea51b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 23 06:00:52 np0005593232 podman[410029]: 2026-01-23 11:00:52.596322154 +0000 UTC m=+0.157020724 container attach 466f5799dbcf49d19bcd173d2a5e5f69a3012db5d49ce5b2c57f1b4c7b8ea51b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_borg, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Jan 23 06:00:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:00:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:00:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:52.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:00:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4052: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 23 06:00:53 np0005593232 cranky_borg[410046]: --> passed data devices: 0 physical, 1 LVM
Jan 23 06:00:53 np0005593232 cranky_borg[410046]: --> relative data size: 1.0
Jan 23 06:00:53 np0005593232 cranky_borg[410046]: --> All data devices are unavailable
Jan 23 06:00:53 np0005593232 systemd[1]: libpod-466f5799dbcf49d19bcd173d2a5e5f69a3012db5d49ce5b2c57f1b4c7b8ea51b.scope: Deactivated successfully.
Jan 23 06:00:53 np0005593232 podman[410029]: 2026-01-23 11:00:53.529660115 +0000 UTC m=+1.090358665 container died 466f5799dbcf49d19bcd173d2a5e5f69a3012db5d49ce5b2c57f1b4c7b8ea51b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 23 06:00:53 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b7196cc79ba31464495c52e92edc932176e502a72cc132897a7e480c9b994047-merged.mount: Deactivated successfully.
Jan 23 06:00:53 np0005593232 podman[410029]: 2026-01-23 11:00:53.718917164 +0000 UTC m=+1.279615714 container remove 466f5799dbcf49d19bcd173d2a5e5f69a3012db5d49ce5b2c57f1b4c7b8ea51b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:00:53 np0005593232 systemd[1]: libpod-conmon-466f5799dbcf49d19bcd173d2a5e5f69a3012db5d49ce5b2c57f1b4c7b8ea51b.scope: Deactivated successfully.
Jan 23 06:00:54 np0005593232 nova_compute[250269]: 2026-01-23 11:00:54.246 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:00:54 np0005593232 podman[410215]: 2026-01-23 11:00:54.465651291 +0000 UTC m=+0.070697801 container create 83f17858e3726576c4b6063bd74ec86cfcf44b8cd1c2966cd562d905f27b5196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:00:54 np0005593232 systemd[1]: Started libpod-conmon-83f17858e3726576c4b6063bd74ec86cfcf44b8cd1c2966cd562d905f27b5196.scope.
Jan 23 06:00:54 np0005593232 podman[410215]: 2026-01-23 11:00:54.440520456 +0000 UTC m=+0.045567056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:00:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:54.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:54 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:00:54 np0005593232 podman[410215]: 2026-01-23 11:00:54.565249502 +0000 UTC m=+0.170296022 container init 83f17858e3726576c4b6063bd74ec86cfcf44b8cd1c2966cd562d905f27b5196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_einstein, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:00:54 np0005593232 podman[410215]: 2026-01-23 11:00:54.574425523 +0000 UTC m=+0.179472063 container start 83f17858e3726576c4b6063bd74ec86cfcf44b8cd1c2966cd562d905f27b5196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_einstein, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 23 06:00:54 np0005593232 boring_einstein[410280]: 167 167
Jan 23 06:00:54 np0005593232 podman[410215]: 2026-01-23 11:00:54.580529296 +0000 UTC m=+0.185575816 container attach 83f17858e3726576c4b6063bd74ec86cfcf44b8cd1c2966cd562d905f27b5196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 06:00:54 np0005593232 systemd[1]: libpod-83f17858e3726576c4b6063bd74ec86cfcf44b8cd1c2966cd562d905f27b5196.scope: Deactivated successfully.
Jan 23 06:00:54 np0005593232 podman[410215]: 2026-01-23 11:00:54.581631777 +0000 UTC m=+0.186678307 container died 83f17858e3726576c4b6063bd74ec86cfcf44b8cd1c2966cd562d905f27b5196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 06:00:54 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6f94caab82a13a2c8dc9d5f09fdd1ec62a1989ebd3ab6798cf3c74609d8e856e-merged.mount: Deactivated successfully.
Jan 23 06:00:54 np0005593232 podman[410215]: 2026-01-23 11:00:54.645980496 +0000 UTC m=+0.251027036 container remove 83f17858e3726576c4b6063bd74ec86cfcf44b8cd1c2966cd562d905f27b5196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_einstein, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 23 06:00:54 np0005593232 systemd[1]: libpod-conmon-83f17858e3726576c4b6063bd74ec86cfcf44b8cd1c2966cd562d905f27b5196.scope: Deactivated successfully.
Jan 23 06:00:54 np0005593232 podman[410305]: 2026-01-23 11:00:54.889388534 +0000 UTC m=+0.058237515 container create ecb098ed779ad1815eb52e13cd2ad57c4bacb416ee8b16165f0d28f804e88bb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 23 06:00:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:54.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:54 np0005593232 systemd[1]: Started libpod-conmon-ecb098ed779ad1815eb52e13cd2ad57c4bacb416ee8b16165f0d28f804e88bb9.scope.
Jan 23 06:00:54 np0005593232 podman[410305]: 2026-01-23 11:00:54.868415078 +0000 UTC m=+0.037264079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:00:54 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:00:54 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb03e31c9044b98897b3e1796bf3685e08bbe19fa4ec31398b3252edb3196dd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:00:54 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb03e31c9044b98897b3e1796bf3685e08bbe19fa4ec31398b3252edb3196dd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:00:54 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb03e31c9044b98897b3e1796bf3685e08bbe19fa4ec31398b3252edb3196dd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:00:54 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb03e31c9044b98897b3e1796bf3685e08bbe19fa4ec31398b3252edb3196dd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:00:55 np0005593232 podman[410305]: 2026-01-23 11:00:55.003846808 +0000 UTC m=+0.172695809 container init ecb098ed779ad1815eb52e13cd2ad57c4bacb416ee8b16165f0d28f804e88bb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:00:55 np0005593232 podman[410305]: 2026-01-23 11:00:55.016883398 +0000 UTC m=+0.185732379 container start ecb098ed779ad1815eb52e13cd2ad57c4bacb416ee8b16165f0d28f804e88bb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:00:55 np0005593232 podman[410305]: 2026-01-23 11:00:55.019842022 +0000 UTC m=+0.188691003 container attach ecb098ed779ad1815eb52e13cd2ad57c4bacb416ee8b16165f0d28f804e88bb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 23 06:00:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4053: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 960 KiB/s rd, 34 op/s
Jan 23 06:00:55 np0005593232 nova_compute[250269]: 2026-01-23 11:00:55.373 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]: {
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:    "0": [
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:        {
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:            "devices": [
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:                "/dev/loop3"
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:            ],
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:            "lv_name": "ceph_lv0",
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:            "lv_size": "7511998464",
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:            "name": "ceph_lv0",
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:            "tags": {
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:                "ceph.cephx_lockbox_secret": "",
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:                "ceph.cluster_name": "ceph",
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:                "ceph.crush_device_class": "",
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:                "ceph.encrypted": "0",
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:                "ceph.osd_id": "0",
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:                "ceph.type": "block",
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:                "ceph.vdo": "0"
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:            },
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:            "type": "block",
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:            "vg_name": "ceph_vg0"
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:        }
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]:    ]
Jan 23 06:00:55 np0005593232 flamboyant_shamir[410322]: }
Jan 23 06:00:55 np0005593232 systemd[1]: libpod-ecb098ed779ad1815eb52e13cd2ad57c4bacb416ee8b16165f0d28f804e88bb9.scope: Deactivated successfully.
Jan 23 06:00:55 np0005593232 podman[410305]: 2026-01-23 11:00:55.826764549 +0000 UTC m=+0.995613530 container died ecb098ed779ad1815eb52e13cd2ad57c4bacb416ee8b16165f0d28f804e88bb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 06:00:55 np0005593232 systemd[1]: var-lib-containers-storage-overlay-cb03e31c9044b98897b3e1796bf3685e08bbe19fa4ec31398b3252edb3196dd3-merged.mount: Deactivated successfully.
Jan 23 06:00:56 np0005593232 podman[410305]: 2026-01-23 11:00:56.523130524 +0000 UTC m=+1.691979545 container remove ecb098ed779ad1815eb52e13cd2ad57c4bacb416ee8b16165f0d28f804e88bb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_shamir, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 23 06:00:56 np0005593232 systemd[1]: libpod-conmon-ecb098ed779ad1815eb52e13cd2ad57c4bacb416ee8b16165f0d28f804e88bb9.scope: Deactivated successfully.
Jan 23 06:00:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:56.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:00:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:56.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:00:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4054: 321 pgs: 321 active+clean; 174 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1023 KiB/s rd, 243 KiB/s wr, 43 op/s
Jan 23 06:00:57 np0005593232 podman[410485]: 2026-01-23 11:00:57.338683796 +0000 UTC m=+0.041086659 container create d5fa08830ff1a3bf32e10c3d7ba25c0e23a27333b0645c97dfa60268a449d35f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 06:00:57 np0005593232 systemd[1]: Started libpod-conmon-d5fa08830ff1a3bf32e10c3d7ba25c0e23a27333b0645c97dfa60268a449d35f.scope.
Jan 23 06:00:57 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:00:57 np0005593232 podman[410485]: 2026-01-23 11:00:57.319156451 +0000 UTC m=+0.021559334 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:00:57 np0005593232 podman[410485]: 2026-01-23 11:00:57.440640504 +0000 UTC m=+0.143043377 container init d5fa08830ff1a3bf32e10c3d7ba25c0e23a27333b0645c97dfa60268a449d35f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 23 06:00:57 np0005593232 podman[410485]: 2026-01-23 11:00:57.448089326 +0000 UTC m=+0.150492189 container start d5fa08830ff1a3bf32e10c3d7ba25c0e23a27333b0645c97dfa60268a449d35f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:00:57 np0005593232 podman[410485]: 2026-01-23 11:00:57.451514823 +0000 UTC m=+0.153917696 container attach d5fa08830ff1a3bf32e10c3d7ba25c0e23a27333b0645c97dfa60268a449d35f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:00:57 np0005593232 ecstatic_chaplygin[410502]: 167 167
Jan 23 06:00:57 np0005593232 systemd[1]: libpod-d5fa08830ff1a3bf32e10c3d7ba25c0e23a27333b0645c97dfa60268a449d35f.scope: Deactivated successfully.
Jan 23 06:00:57 np0005593232 podman[410485]: 2026-01-23 11:00:57.456511145 +0000 UTC m=+0.158914008 container died d5fa08830ff1a3bf32e10c3d7ba25c0e23a27333b0645c97dfa60268a449d35f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaplygin, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:00:57 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6b2ce4b1212f7dcd9d358e63af8e69a80a4a14a0427bfe6e5f6392833f410305-merged.mount: Deactivated successfully.
Jan 23 06:00:57 np0005593232 podman[410485]: 2026-01-23 11:00:57.543671203 +0000 UTC m=+0.246074066 container remove d5fa08830ff1a3bf32e10c3d7ba25c0e23a27333b0645c97dfa60268a449d35f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:00:57 np0005593232 systemd[1]: libpod-conmon-d5fa08830ff1a3bf32e10c3d7ba25c0e23a27333b0645c97dfa60268a449d35f.scope: Deactivated successfully.
Jan 23 06:00:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:00:57 np0005593232 podman[410526]: 2026-01-23 11:00:57.749649488 +0000 UTC m=+0.048154990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:00:57 np0005593232 podman[410526]: 2026-01-23 11:00:57.872339505 +0000 UTC m=+0.170844947 container create d88497422a800e25462a4d01cd131db5b4aa61acee0fbfb275f4ef85be7f7c66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:00:57 np0005593232 systemd[1]: Started libpod-conmon-d88497422a800e25462a4d01cd131db5b4aa61acee0fbfb275f4ef85be7f7c66.scope.
Jan 23 06:00:58 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:00:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d947de97918f9aefcdd42c69aa52980719a5db0328b20b212545577d0b1d038f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:00:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d947de97918f9aefcdd42c69aa52980719a5db0328b20b212545577d0b1d038f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:00:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d947de97918f9aefcdd42c69aa52980719a5db0328b20b212545577d0b1d038f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:00:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d947de97918f9aefcdd42c69aa52980719a5db0328b20b212545577d0b1d038f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:00:58 np0005593232 podman[410526]: 2026-01-23 11:00:58.050106708 +0000 UTC m=+0.348612160 container init d88497422a800e25462a4d01cd131db5b4aa61acee0fbfb275f4ef85be7f7c66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_haibt, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:00:58 np0005593232 podman[410526]: 2026-01-23 11:00:58.062575873 +0000 UTC m=+0.361081295 container start d88497422a800e25462a4d01cd131db5b4aa61acee0fbfb275f4ef85be7f7c66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:00:58 np0005593232 podman[410526]: 2026-01-23 11:00:58.066586177 +0000 UTC m=+0.365091599 container attach d88497422a800e25462a4d01cd131db5b4aa61acee0fbfb275f4ef85be7f7c66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_haibt, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 06:00:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:00:58.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:00:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:00:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:00:58.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:00:58 np0005593232 thirsty_haibt[410542]: {
Jan 23 06:00:58 np0005593232 thirsty_haibt[410542]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 06:00:58 np0005593232 thirsty_haibt[410542]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:00:58 np0005593232 thirsty_haibt[410542]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 06:00:58 np0005593232 thirsty_haibt[410542]:        "osd_id": 0,
Jan 23 06:00:58 np0005593232 thirsty_haibt[410542]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:00:58 np0005593232 thirsty_haibt[410542]:        "type": "bluestore"
Jan 23 06:00:58 np0005593232 thirsty_haibt[410542]:    }
Jan 23 06:00:58 np0005593232 thirsty_haibt[410542]: }
Jan 23 06:00:59 np0005593232 systemd[1]: libpod-d88497422a800e25462a4d01cd131db5b4aa61acee0fbfb275f4ef85be7f7c66.scope: Deactivated successfully.
Jan 23 06:00:59 np0005593232 podman[410564]: 2026-01-23 11:00:59.099521927 +0000 UTC m=+0.039906085 container died d88497422a800e25462a4d01cd131db5b4aa61acee0fbfb275f4ef85be7f7c66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_haibt, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 06:00:59 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d947de97918f9aefcdd42c69aa52980719a5db0328b20b212545577d0b1d038f-merged.mount: Deactivated successfully.
Jan 23 06:00:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4055: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 89 op/s
Jan 23 06:00:59 np0005593232 podman[410564]: 2026-01-23 11:00:59.162788276 +0000 UTC m=+0.103172424 container remove d88497422a800e25462a4d01cd131db5b4aa61acee0fbfb275f4ef85be7f7c66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:00:59 np0005593232 systemd[1]: libpod-conmon-d88497422a800e25462a4d01cd131db5b4aa61acee0fbfb275f4ef85be7f7c66.scope: Deactivated successfully.
Jan 23 06:00:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 06:00:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:00:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 06:00:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:00:59 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev fcb7e8f4-d9f3-43ad-bfff-043360bb58fb does not exist
Jan 23 06:00:59 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a5e7feb3-4ecd-4c86-ab02-e10796d18d93 does not exist
Jan 23 06:00:59 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 8647261d-c4ba-4b1f-84ea-9032b52e781c does not exist
Jan 23 06:00:59 np0005593232 nova_compute[250269]: 2026-01-23 11:00:59.249 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:00 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:01:00 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:01:00 np0005593232 nova_compute[250269]: 2026-01-23 11:01:00.377 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:00.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:01:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:00.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:01:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4056: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 23 06:01:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:02.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:01:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:02.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4057: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 23 06:01:04 np0005593232 nova_compute[250269]: 2026-01-23 11:01:04.250 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:04.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:04.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4058: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 23 06:01:05 np0005593232 nova_compute[250269]: 2026-01-23 11:01:05.381 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:06.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:06.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4059: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 23 06:01:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:01:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:01:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:01:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:01:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:01:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:01:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:01:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:08.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:08.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4060: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 265 KiB/s rd, 1.9 MiB/s wr, 59 op/s
Jan 23 06:01:09 np0005593232 nova_compute[250269]: 2026-01-23 11:01:09.299 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:10 np0005593232 nova_compute[250269]: 2026-01-23 11:01:10.411 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:01:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:10.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:01:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:10.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4061: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 3.7 KiB/s rd, 31 KiB/s wr, 6 op/s
Jan 23 06:01:11 np0005593232 podman[410646]: 2026-01-23 11:01:11.502319587 +0000 UTC m=+0.149832540 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 23 06:01:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:12.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:01:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:12.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4062: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 31 KiB/s wr, 72 op/s
Jan 23 06:01:13 np0005593232 podman[410673]: 2026-01-23 11:01:13.419322638 +0000 UTC m=+0.075866208 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 06:01:14 np0005593232 nova_compute[250269]: 2026-01-23 11:01:14.301 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:14.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:01:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:14.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:01:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4063: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 72 op/s
Jan 23 06:01:15 np0005593232 nova_compute[250269]: 2026-01-23 11:01:15.414 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:01:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:16.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 06:01:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 61K writes, 226K keys, 61K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.03 MB/s#012Cumulative WAL: 61K writes, 23K syncs, 2.63 writes per sync, written: 0.21 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1936 writes, 7029 keys, 1936 commit groups, 1.0 writes per commit group, ingest: 7.72 MB, 0.01 MB/s#012Interval WAL: 1936 writes, 756 syncs, 2.56 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 06:01:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:01:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:16.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:01:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4064: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 72 op/s
Jan 23 06:01:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:01:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:18.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:01:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:18.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:01:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4065: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 19 KiB/s wr, 71 op/s
Jan 23 06:01:19 np0005593232 nova_compute[250269]: 2026-01-23 11:01:19.346 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:20 np0005593232 nova_compute[250269]: 2026-01-23 11:01:20.458 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:01:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:20.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:01:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:20.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4066: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 0 B/s wr, 66 op/s
Jan 23 06:01:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:01:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:22.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:01:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:01:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:01:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:22.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:01:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4067: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 12 KiB/s wr, 109 op/s
Jan 23 06:01:24 np0005593232 nova_compute[250269]: 2026-01-23 11:01:24.349 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:24.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:01:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:24.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:01:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4068: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 533 KiB/s rd, 12 KiB/s wr, 43 op/s
Jan 23 06:01:25 np0005593232 nova_compute[250269]: 2026-01-23 11:01:25.461 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:25 np0005593232 nova_compute[250269]: 2026-01-23 11:01:25.942 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:01:25.944 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=94, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=93) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 06:01:25 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:01:25.945 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 06:01:26 np0005593232 nova_compute[250269]: 2026-01-23 11:01:26.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:01:26 np0005593232 nova_compute[250269]: 2026-01-23 11:01:26.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:01:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:26.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:26.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4069: 321 pgs: 321 active+clean; 166 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 537 KiB/s rd, 23 KiB/s wr, 48 op/s
Jan 23 06:01:27 np0005593232 nova_compute[250269]: 2026-01-23 11:01:27.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:01:27 np0005593232 ceph-mgr[74726]: [devicehealth INFO root] Check health
Jan 23 06:01:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:01:28 np0005593232 nova_compute[250269]: 2026-01-23 11:01:28.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:01:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:01:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:28.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:01:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:01:28.948 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '94'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:01:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:28.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4070: 321 pgs: 321 active+clean; 135 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 546 KiB/s rd, 23 KiB/s wr, 61 op/s
Jan 23 06:01:29 np0005593232 nova_compute[250269]: 2026-01-23 11:01:29.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:01:29 np0005593232 nova_compute[250269]: 2026-01-23 11:01:29.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 06:01:29 np0005593232 nova_compute[250269]: 2026-01-23 11:01:29.352 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:30 np0005593232 nova_compute[250269]: 2026-01-23 11:01:30.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:01:30 np0005593232 nova_compute[250269]: 2026-01-23 11:01:30.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 06:01:30 np0005593232 nova_compute[250269]: 2026-01-23 11:01:30.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 06:01:30 np0005593232 nova_compute[250269]: 2026-01-23 11:01:30.376 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 06:01:30 np0005593232 nova_compute[250269]: 2026-01-23 11:01:30.463 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:01:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:30.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:01:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:01:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:30.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:01:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4071: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 554 KiB/s rd, 24 KiB/s wr, 72 op/s
Jan 23 06:01:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:32.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:01:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:01:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:32.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:01:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4072: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 554 KiB/s rd, 24 KiB/s wr, 72 op/s
Jan 23 06:01:34 np0005593232 nova_compute[250269]: 2026-01-23 11:01:34.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:01:34 np0005593232 nova_compute[250269]: 2026-01-23 11:01:34.388 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:01:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:34.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:01:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:01:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:34.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:01:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4073: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 28 op/s
Jan 23 06:01:35 np0005593232 nova_compute[250269]: 2026-01-23 11:01:35.507 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:36.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:01:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:36.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:01:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4074: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 12 KiB/s wr, 28 op/s
Jan 23 06:01:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_11:01:37
Jan 23 06:01:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 06:01:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 06:01:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['images', 'default.rgw.control', 'volumes', 'vms', '.rgw.root', 'backups', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr']
Jan 23 06:01:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 06:01:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:01:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:01:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:01:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:01:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:01:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:01:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:01:38 np0005593232 nova_compute[250269]: 2026-01-23 11:01:38.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:01:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:01:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:38.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:01:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 06:01:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:01:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 06:01:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:01:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:01:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:01:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:01:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:01:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:01:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:01:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:01:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:39.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:01:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4075: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 852 B/s wr, 23 op/s
Jan 23 06:01:39 np0005593232 nova_compute[250269]: 2026-01-23 11:01:39.390 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:40 np0005593232 nova_compute[250269]: 2026-01-23 11:01:40.510 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:01:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:40.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:01:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:41.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4076: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 10 op/s
Jan 23 06:01:42 np0005593232 podman[410808]: 2026-01-23 11:01:42.447235287 +0000 UTC m=+0.098000337 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 23 06:01:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:42.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:01:42.683 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:01:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:01:42.684 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:01:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:01:42.684 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:01:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:01:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:43.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4077: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:01:43 np0005593232 nova_compute[250269]: 2026-01-23 11:01:43.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:01:43 np0005593232 nova_compute[250269]: 2026-01-23 11:01:43.346 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:01:43 np0005593232 nova_compute[250269]: 2026-01-23 11:01:43.346 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:01:43 np0005593232 nova_compute[250269]: 2026-01-23 11:01:43.347 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:01:43 np0005593232 nova_compute[250269]: 2026-01-23 11:01:43.347 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 06:01:43 np0005593232 nova_compute[250269]: 2026-01-23 11:01:43.347 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:01:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:01:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2810517408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:01:43 np0005593232 nova_compute[250269]: 2026-01-23 11:01:43.818 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:01:44 np0005593232 nova_compute[250269]: 2026-01-23 11:01:44.095 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 06:01:44 np0005593232 nova_compute[250269]: 2026-01-23 11:01:44.096 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4123MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 06:01:44 np0005593232 nova_compute[250269]: 2026-01-23 11:01:44.097 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:01:44 np0005593232 nova_compute[250269]: 2026-01-23 11:01:44.097 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:01:44 np0005593232 nova_compute[250269]: 2026-01-23 11:01:44.363 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 06:01:44 np0005593232 nova_compute[250269]: 2026-01-23 11:01:44.364 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 06:01:44 np0005593232 nova_compute[250269]: 2026-01-23 11:01:44.392 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:44 np0005593232 podman[410859]: 2026-01-23 11:01:44.41494719 +0000 UTC m=+0.067584542 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 06:01:44 np0005593232 nova_compute[250269]: 2026-01-23 11:01:44.492 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:01:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:44.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:01:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3992029700' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:01:44 np0005593232 nova_compute[250269]: 2026-01-23 11:01:44.978 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:01:44 np0005593232 nova_compute[250269]: 2026-01-23 11:01:44.985 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:01:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:45.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:45 np0005593232 nova_compute[250269]: 2026-01-23 11:01:45.019 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:01:45 np0005593232 nova_compute[250269]: 2026-01-23 11:01:45.021 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 06:01:45 np0005593232 nova_compute[250269]: 2026-01-23 11:01:45.022 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:01:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4078: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:01:45 np0005593232 nova_compute[250269]: 2026-01-23 11:01:45.512 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:46.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:47.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4079: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:01:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:01:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 06:01:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:01:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:01:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:01:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:01:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:01:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 06:01:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:01:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:01:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:01:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:01:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:01:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:01:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:01:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:01:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:01:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:01:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:01:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:01:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:01:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:01:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:01:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:01:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:48.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:49.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4080: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:01:49 np0005593232 nova_compute[250269]: 2026-01-23 11:01:49.428 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:50 np0005593232 nova_compute[250269]: 2026-01-23 11:01:50.530 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:01:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:50.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:01:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:51.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4081: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:01:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:01:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:52.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:01:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:01:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:01:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:53.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:01:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4082: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:01:54 np0005593232 nova_compute[250269]: 2026-01-23 11:01:54.477 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:01:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:54.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:01:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:01:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:55.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:01:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4083: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:01:55 np0005593232 nova_compute[250269]: 2026-01-23 11:01:55.565 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:01:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:01:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:56.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:01:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:01:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:57.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:01:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4084: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:01:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:01:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:01:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:01:58.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:01:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:01:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:01:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:01:59.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:01:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4085: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:01:59 np0005593232 nova_compute[250269]: 2026-01-23 11:01:59.480 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:00 np0005593232 nova_compute[250269]: 2026-01-23 11:02:00.604 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:00.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:01.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4086: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:02:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 06:02:01 np0005593232 podman[411129]: 2026-01-23 11:02:01.518905197 +0000 UTC m=+0.806453584 container exec 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:02:01 np0005593232 podman[411129]: 2026-01-23 11:02:01.903553171 +0000 UTC m=+1.191101598 container exec_died 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:02:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:02:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 06:02:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:02.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:02:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:02:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:03.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:02:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4087: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:02:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:02:04 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:02:04 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:02:04 np0005593232 podman[411284]: 2026-01-23 11:02:04.221709153 +0000 UTC m=+0.075676902 container exec b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 06:02:04 np0005593232 podman[411284]: 2026-01-23 11:02:04.236337279 +0000 UTC m=+0.090305008 container exec_died b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 06:02:04 np0005593232 nova_compute[250269]: 2026-01-23 11:02:04.482 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:04 np0005593232 podman[411349]: 2026-01-23 11:02:04.497529194 +0000 UTC m=+0.069773405 container exec 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, io.buildah.version=1.28.2, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc.)
Jan 23 06:02:04 np0005593232 podman[411349]: 2026-01-23 11:02:04.512001185 +0000 UTC m=+0.084245376 container exec_died 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, build-date=2023-02-22T09:23:20, release=1793, description=keepalived for Ceph, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., io.buildah.version=1.28.2, version=2.2.4, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9)
Jan 23 06:02:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 06:02:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:02:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 06:02:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:02:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:04.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:02:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:05.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:02:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4088: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:02:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:02:05 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:02:05 np0005593232 nova_compute[250269]: 2026-01-23 11:02:05.605 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 06:02:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:02:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 06:02:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:02:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:02:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:02:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 06:02:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:02:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 06:02:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:02:05 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 0b1248a2-6c24-4ef7-b614-5a4ca5678f79 does not exist
Jan 23 06:02:05 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 95d65461-33c5-4d75-94ce-3ef153ae9634 does not exist
Jan 23 06:02:05 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1969381c-b9b0-4b4d-b979-83831072cb07 does not exist
Jan 23 06:02:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 06:02:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 06:02:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 06:02:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:02:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:02:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:02:06 np0005593232 podman[411772]: 2026-01-23 11:02:06.320819121 +0000 UTC m=+0.038250248 container create 25710ca147a1b549f92e6ed3b76df7a767fee9827316fd22b9a9c130dd32259a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mahavira, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 06:02:06 np0005593232 systemd[1]: Started libpod-conmon-25710ca147a1b549f92e6ed3b76df7a767fee9827316fd22b9a9c130dd32259a.scope.
Jan 23 06:02:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:02:06 np0005593232 podman[411772]: 2026-01-23 11:02:06.394944408 +0000 UTC m=+0.112375555 container init 25710ca147a1b549f92e6ed3b76df7a767fee9827316fd22b9a9c130dd32259a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:02:06 np0005593232 podman[411772]: 2026-01-23 11:02:06.304995801 +0000 UTC m=+0.022426918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:02:06 np0005593232 podman[411772]: 2026-01-23 11:02:06.401649469 +0000 UTC m=+0.119080586 container start 25710ca147a1b549f92e6ed3b76df7a767fee9827316fd22b9a9c130dd32259a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:02:06 np0005593232 podman[411772]: 2026-01-23 11:02:06.405304092 +0000 UTC m=+0.122735269 container attach 25710ca147a1b549f92e6ed3b76df7a767fee9827316fd22b9a9c130dd32259a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 23 06:02:06 np0005593232 upbeat_mahavira[411788]: 167 167
Jan 23 06:02:06 np0005593232 systemd[1]: libpod-25710ca147a1b549f92e6ed3b76df7a767fee9827316fd22b9a9c130dd32259a.scope: Deactivated successfully.
Jan 23 06:02:06 np0005593232 podman[411772]: 2026-01-23 11:02:06.40839101 +0000 UTC m=+0.125822127 container died 25710ca147a1b549f92e6ed3b76df7a767fee9827316fd22b9a9c130dd32259a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mahavira, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 06:02:06 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a8ec5820208cf46c48b32ef6ca42564e622c5c0bd65b764de75c29f165deeaea-merged.mount: Deactivated successfully.
Jan 23 06:02:06 np0005593232 podman[411772]: 2026-01-23 11:02:06.450236049 +0000 UTC m=+0.167667176 container remove 25710ca147a1b549f92e6ed3b76df7a767fee9827316fd22b9a9c130dd32259a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 23 06:02:06 np0005593232 systemd[1]: libpod-conmon-25710ca147a1b549f92e6ed3b76df7a767fee9827316fd22b9a9c130dd32259a.scope: Deactivated successfully.
Jan 23 06:02:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:06.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:06 np0005593232 podman[411812]: 2026-01-23 11:02:06.643856452 +0000 UTC m=+0.057208947 container create b4e8cae750c8021a14214fb6dd3c21133661b3cfed0cba235fc25a6ece5377d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 06:02:06 np0005593232 systemd[1]: Started libpod-conmon-b4e8cae750c8021a14214fb6dd3c21133661b3cfed0cba235fc25a6ece5377d3.scope.
Jan 23 06:02:06 np0005593232 podman[411812]: 2026-01-23 11:02:06.619413778 +0000 UTC m=+0.032766283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:02:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:02:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/747a22e9d697fd11ba530bb512eb6a0ea278dec6da1de19e4fb7f384917504e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:02:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/747a22e9d697fd11ba530bb512eb6a0ea278dec6da1de19e4fb7f384917504e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:02:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/747a22e9d697fd11ba530bb512eb6a0ea278dec6da1de19e4fb7f384917504e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:02:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/747a22e9d697fd11ba530bb512eb6a0ea278dec6da1de19e4fb7f384917504e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:02:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/747a22e9d697fd11ba530bb512eb6a0ea278dec6da1de19e4fb7f384917504e9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 06:02:06 np0005593232 podman[411812]: 2026-01-23 11:02:06.742786545 +0000 UTC m=+0.156139070 container init b4e8cae750c8021a14214fb6dd3c21133661b3cfed0cba235fc25a6ece5377d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:02:06 np0005593232 podman[411812]: 2026-01-23 11:02:06.748881868 +0000 UTC m=+0.162234343 container start b4e8cae750c8021a14214fb6dd3c21133661b3cfed0cba235fc25a6ece5377d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brahmagupta, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 06:02:06 np0005593232 podman[411812]: 2026-01-23 11:02:06.753157379 +0000 UTC m=+0.166509864 container attach b4e8cae750c8021a14214fb6dd3c21133661b3cfed0cba235fc25a6ece5377d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brahmagupta, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:02:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:02:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:02:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:02:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:02:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:02:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:07.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4089: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:02:07 np0005593232 quirky_brahmagupta[411828]: --> passed data devices: 0 physical, 1 LVM
Jan 23 06:02:07 np0005593232 quirky_brahmagupta[411828]: --> relative data size: 1.0
Jan 23 06:02:07 np0005593232 quirky_brahmagupta[411828]: --> All data devices are unavailable
Jan 23 06:02:07 np0005593232 systemd[1]: libpod-b4e8cae750c8021a14214fb6dd3c21133661b3cfed0cba235fc25a6ece5377d3.scope: Deactivated successfully.
Jan 23 06:02:07 np0005593232 podman[411812]: 2026-01-23 11:02:07.593541147 +0000 UTC m=+1.006893622 container died b4e8cae750c8021a14214fb6dd3c21133661b3cfed0cba235fc25a6ece5377d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brahmagupta, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:02:07 np0005593232 systemd[1]: var-lib-containers-storage-overlay-747a22e9d697fd11ba530bb512eb6a0ea278dec6da1de19e4fb7f384917504e9-merged.mount: Deactivated successfully.
Jan 23 06:02:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:02:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:02:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:02:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:02:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:02:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:02:07 np0005593232 podman[411812]: 2026-01-23 11:02:07.68088255 +0000 UTC m=+1.094235025 container remove b4e8cae750c8021a14214fb6dd3c21133661b3cfed0cba235fc25a6ece5377d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_brahmagupta, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 06:02:07 np0005593232 systemd[1]: libpod-conmon-b4e8cae750c8021a14214fb6dd3c21133661b3cfed0cba235fc25a6ece5377d3.scope: Deactivated successfully.
Jan 23 06:02:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:02:08 np0005593232 podman[411996]: 2026-01-23 11:02:08.399178728 +0000 UTC m=+0.042881270 container create aa34159a1fcee1a0fd5058970f28b8d71d6a3df4f95dc8a3747f25e06092ca20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 23 06:02:08 np0005593232 systemd[1]: Started libpod-conmon-aa34159a1fcee1a0fd5058970f28b8d71d6a3df4f95dc8a3747f25e06092ca20.scope.
Jan 23 06:02:08 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:02:08 np0005593232 podman[411996]: 2026-01-23 11:02:08.380522398 +0000 UTC m=+0.024224960 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:02:08 np0005593232 podman[411996]: 2026-01-23 11:02:08.48016003 +0000 UTC m=+0.123862622 container init aa34159a1fcee1a0fd5058970f28b8d71d6a3df4f95dc8a3747f25e06092ca20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclean, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 06:02:08 np0005593232 podman[411996]: 2026-01-23 11:02:08.488452826 +0000 UTC m=+0.132155368 container start aa34159a1fcee1a0fd5058970f28b8d71d6a3df4f95dc8a3747f25e06092ca20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:02:08 np0005593232 podman[411996]: 2026-01-23 11:02:08.492436169 +0000 UTC m=+0.136138731 container attach aa34159a1fcee1a0fd5058970f28b8d71d6a3df4f95dc8a3747f25e06092ca20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclean, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 23 06:02:08 np0005593232 sharp_mclean[412012]: 167 167
Jan 23 06:02:08 np0005593232 systemd[1]: libpod-aa34159a1fcee1a0fd5058970f28b8d71d6a3df4f95dc8a3747f25e06092ca20.scope: Deactivated successfully.
Jan 23 06:02:08 np0005593232 podman[411996]: 2026-01-23 11:02:08.496564116 +0000 UTC m=+0.140266678 container died aa34159a1fcee1a0fd5058970f28b8d71d6a3df4f95dc8a3747f25e06092ca20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclean, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 23 06:02:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e7e46d951ad71526b143429f2e4f4ef8a3b569288f16b196cb02e9c846ae89b2-merged.mount: Deactivated successfully.
Jan 23 06:02:08 np0005593232 podman[411996]: 2026-01-23 11:02:08.534839664 +0000 UTC m=+0.178542206 container remove aa34159a1fcee1a0fd5058970f28b8d71d6a3df4f95dc8a3747f25e06092ca20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_mclean, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 06:02:08 np0005593232 systemd[1]: libpod-conmon-aa34159a1fcee1a0fd5058970f28b8d71d6a3df4f95dc8a3747f25e06092ca20.scope: Deactivated successfully.
Jan 23 06:02:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:08.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:08 np0005593232 podman[412036]: 2026-01-23 11:02:08.70110441 +0000 UTC m=+0.048145289 container create a60874759857519ec264c9f6301c6a2e05e76b47a67c30ea151fd7e517f57a92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_kapitsa, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:02:08 np0005593232 systemd[1]: Started libpod-conmon-a60874759857519ec264c9f6301c6a2e05e76b47a67c30ea151fd7e517f57a92.scope.
Jan 23 06:02:08 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:02:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e716541ed7fb1cce2083c60c30747337f1477145e4eb944553c4c4748c0e54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:02:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e716541ed7fb1cce2083c60c30747337f1477145e4eb944553c4c4748c0e54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:02:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e716541ed7fb1cce2083c60c30747337f1477145e4eb944553c4c4748c0e54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:02:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48e716541ed7fb1cce2083c60c30747337f1477145e4eb944553c4c4748c0e54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:02:08 np0005593232 podman[412036]: 2026-01-23 11:02:08.678735294 +0000 UTC m=+0.025776183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:02:08 np0005593232 podman[412036]: 2026-01-23 11:02:08.78412271 +0000 UTC m=+0.131163629 container init a60874759857519ec264c9f6301c6a2e05e76b47a67c30ea151fd7e517f57a92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_kapitsa, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 06:02:08 np0005593232 podman[412036]: 2026-01-23 11:02:08.793031153 +0000 UTC m=+0.140072042 container start a60874759857519ec264c9f6301c6a2e05e76b47a67c30ea151fd7e517f57a92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 06:02:08 np0005593232 podman[412036]: 2026-01-23 11:02:08.79891019 +0000 UTC m=+0.145951079 container attach a60874759857519ec264c9f6301c6a2e05e76b47a67c30ea151fd7e517f57a92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:02:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:09.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4090: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:02:09 np0005593232 nova_compute[250269]: 2026-01-23 11:02:09.484 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]: {
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:    "0": [
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:        {
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:            "devices": [
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:                "/dev/loop3"
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:            ],
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:            "lv_name": "ceph_lv0",
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:            "lv_size": "7511998464",
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:            "name": "ceph_lv0",
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:            "tags": {
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:                "ceph.cephx_lockbox_secret": "",
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:                "ceph.cluster_name": "ceph",
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:                "ceph.crush_device_class": "",
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:                "ceph.encrypted": "0",
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:                "ceph.osd_id": "0",
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:                "ceph.type": "block",
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:                "ceph.vdo": "0"
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:            },
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:            "type": "block",
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:            "vg_name": "ceph_vg0"
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:        }
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]:    ]
Jan 23 06:02:09 np0005593232 modest_kapitsa[412052]: }
Jan 23 06:02:09 np0005593232 systemd[1]: libpod-a60874759857519ec264c9f6301c6a2e05e76b47a67c30ea151fd7e517f57a92.scope: Deactivated successfully.
Jan 23 06:02:09 np0005593232 podman[412036]: 2026-01-23 11:02:09.604019266 +0000 UTC m=+0.951060145 container died a60874759857519ec264c9f6301c6a2e05e76b47a67c30ea151fd7e517f57a92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_kapitsa, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:02:10 np0005593232 systemd[1]: var-lib-containers-storage-overlay-48e716541ed7fb1cce2083c60c30747337f1477145e4eb944553c4c4748c0e54-merged.mount: Deactivated successfully.
Jan 23 06:02:10 np0005593232 nova_compute[250269]: 2026-01-23 11:02:10.607 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:10.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:11.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4091: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:02:11 np0005593232 podman[412036]: 2026-01-23 11:02:11.292389193 +0000 UTC m=+2.639430102 container remove a60874759857519ec264c9f6301c6a2e05e76b47a67c30ea151fd7e517f57a92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_kapitsa, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 06:02:11 np0005593232 systemd[1]: libpod-conmon-a60874759857519ec264c9f6301c6a2e05e76b47a67c30ea151fd7e517f57a92.scope: Deactivated successfully.
Jan 23 06:02:12 np0005593232 podman[412215]: 2026-01-23 11:02:11.929189743 +0000 UTC m=+0.020765353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:02:12 np0005593232 podman[412215]: 2026-01-23 11:02:12.025989631 +0000 UTC m=+0.117565221 container create 950dd67a5cd2a8c23e95dbcdb9a90494a0ecd870ceb2584fdc316c042acbd1e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 23 06:02:12 np0005593232 systemd[1]: Started libpod-conmon-950dd67a5cd2a8c23e95dbcdb9a90494a0ecd870ceb2584fdc316c042acbd1e6.scope.
Jan 23 06:02:12 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:02:12 np0005593232 podman[412215]: 2026-01-23 11:02:12.111725294 +0000 UTC m=+0.203300914 container init 950dd67a5cd2a8c23e95dbcdb9a90494a0ecd870ceb2584fdc316c042acbd1e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_darwin, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 06:02:12 np0005593232 podman[412215]: 2026-01-23 11:02:12.118719937 +0000 UTC m=+0.210295527 container start 950dd67a5cd2a8c23e95dbcdb9a90494a0ecd870ceb2584fdc316c042acbd1e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_darwin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 23 06:02:12 np0005593232 podman[412215]: 2026-01-23 11:02:12.121901754 +0000 UTC m=+0.213477434 container attach 950dd67a5cd2a8c23e95dbcdb9a90494a0ecd870ceb2584fdc316c042acbd1e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:02:12 np0005593232 relaxed_darwin[412231]: 167 167
Jan 23 06:02:12 np0005593232 systemd[1]: libpod-950dd67a5cd2a8c23e95dbcdb9a90494a0ecd870ceb2584fdc316c042acbd1e6.scope: Deactivated successfully.
Jan 23 06:02:12 np0005593232 podman[412215]: 2026-01-23 11:02:12.124732852 +0000 UTC m=+0.216308442 container died 950dd67a5cd2a8c23e95dbcdb9a90494a0ecd870ceb2584fdc316c042acbd1e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_darwin, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 23 06:02:12 np0005593232 systemd[1]: var-lib-containers-storage-overlay-df829036d15ea5f0054d58d143e97572d876e186f2d08633556464e46d092019-merged.mount: Deactivated successfully.
Jan 23 06:02:12 np0005593232 podman[412215]: 2026-01-23 11:02:12.167156262 +0000 UTC m=+0.258731852 container remove 950dd67a5cd2a8c23e95dbcdb9a90494a0ecd870ceb2584fdc316c042acbd1e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:02:12 np0005593232 systemd[1]: libpod-conmon-950dd67a5cd2a8c23e95dbcdb9a90494a0ecd870ceb2584fdc316c042acbd1e6.scope: Deactivated successfully.
Jan 23 06:02:12 np0005593232 podman[412259]: 2026-01-23 11:02:12.311776387 +0000 UTC m=+0.026422069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:02:12 np0005593232 podman[412259]: 2026-01-23 11:02:12.604605737 +0000 UTC m=+0.319251399 container create 1d42e5fa6e1409b5ddfdc15ff72a67f3dd85b38edbd62de5ae3058ae65f43002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 06:02:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:12.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:12 np0005593232 systemd[1]: Started libpod-conmon-1d42e5fa6e1409b5ddfdc15ff72a67f3dd85b38edbd62de5ae3058ae65f43002.scope.
Jan 23 06:02:12 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:02:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f3ca6430e29fdf5a906634a29572d847a61098b6f63c929414e735a2b069bf0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:02:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f3ca6430e29fdf5a906634a29572d847a61098b6f63c929414e735a2b069bf0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:02:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f3ca6430e29fdf5a906634a29572d847a61098b6f63c929414e735a2b069bf0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:02:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f3ca6430e29fdf5a906634a29572d847a61098b6f63c929414e735a2b069bf0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:02:12 np0005593232 podman[412259]: 2026-01-23 11:02:12.702259299 +0000 UTC m=+0.416904971 container init 1d42e5fa6e1409b5ddfdc15ff72a67f3dd85b38edbd62de5ae3058ae65f43002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:02:12 np0005593232 podman[412259]: 2026-01-23 11:02:12.711324968 +0000 UTC m=+0.425970630 container start 1d42e5fa6e1409b5ddfdc15ff72a67f3dd85b38edbd62de5ae3058ae65f43002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 23 06:02:12 np0005593232 podman[412259]: 2026-01-23 11:02:12.715664268 +0000 UTC m=+0.430309960 container attach 1d42e5fa6e1409b5ddfdc15ff72a67f3dd85b38edbd62de5ae3058ae65f43002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bouman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 06:02:12 np0005593232 podman[412271]: 2026-01-23 11:02:12.790033068 +0000 UTC m=+0.138558750 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:02:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:02:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:02:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:13.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:02:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4092: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:02:13 np0005593232 sharp_bouman[412275]: {
Jan 23 06:02:13 np0005593232 sharp_bouman[412275]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 06:02:13 np0005593232 sharp_bouman[412275]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:02:13 np0005593232 sharp_bouman[412275]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 06:02:13 np0005593232 sharp_bouman[412275]:        "osd_id": 0,
Jan 23 06:02:13 np0005593232 sharp_bouman[412275]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:02:13 np0005593232 sharp_bouman[412275]:        "type": "bluestore"
Jan 23 06:02:13 np0005593232 sharp_bouman[412275]:    }
Jan 23 06:02:13 np0005593232 sharp_bouman[412275]: }
Jan 23 06:02:13 np0005593232 systemd[1]: libpod-1d42e5fa6e1409b5ddfdc15ff72a67f3dd85b38edbd62de5ae3058ae65f43002.scope: Deactivated successfully.
Jan 23 06:02:13 np0005593232 podman[412259]: 2026-01-23 11:02:13.575639239 +0000 UTC m=+1.290284941 container died 1d42e5fa6e1409b5ddfdc15ff72a67f3dd85b38edbd62de5ae3058ae65f43002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bouman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 06:02:13 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7f3ca6430e29fdf5a906634a29572d847a61098b6f63c929414e735a2b069bf0-merged.mount: Deactivated successfully.
Jan 23 06:02:13 np0005593232 podman[412259]: 2026-01-23 11:02:13.744525952 +0000 UTC m=+1.459171614 container remove 1d42e5fa6e1409b5ddfdc15ff72a67f3dd85b38edbd62de5ae3058ae65f43002 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 06:02:13 np0005593232 systemd[1]: libpod-conmon-1d42e5fa6e1409b5ddfdc15ff72a67f3dd85b38edbd62de5ae3058ae65f43002.scope: Deactivated successfully.
Jan 23 06:02:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 06:02:13 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:02:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 06:02:13 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:02:13 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 4d73b049-dbd6-43e4-92b5-adce3bc92d37 does not exist
Jan 23 06:02:13 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 38d0635f-7232-4e7c-b141-8416345dfcfe does not exist
Jan 23 06:02:13 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b0bc25a2-6e2e-4446-84f5-c3e8705649f8 does not exist
Jan 23 06:02:14 np0005593232 nova_compute[250269]: 2026-01-23 11:02:14.486 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:14.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:14 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:02:14 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:02:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:15.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4093: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail
Jan 23 06:02:15 np0005593232 podman[412411]: 2026-01-23 11:02:15.306947281 +0000 UTC m=+0.084054898 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 23 06:02:15 np0005593232 nova_compute[250269]: 2026-01-23 11:02:15.610 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:02:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:16.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:02:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:17.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4094: 321 pgs: 321 active+clean; 145 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 7.0 KiB/s rd, 912 KiB/s wr, 12 op/s
Jan 23 06:02:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:02:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:18.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:19.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4095: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 06:02:19 np0005593232 nova_compute[250269]: 2026-01-23 11:02:19.488 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:20 np0005593232 nova_compute[250269]: 2026-01-23 11:02:20.613 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:20.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:21.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4096: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 06:02:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:22.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:02:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:23.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4097: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 06:02:24 np0005593232 nova_compute[250269]: 2026-01-23 11:02:24.490 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:24.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:25.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4098: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 06:02:25 np0005593232 nova_compute[250269]: 2026-01-23 11:02:25.615 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:26.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:27.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4099: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 23 06:02:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:02:27.380 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=95, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=94) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 06:02:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:02:27.381 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 06:02:27 np0005593232 nova_compute[250269]: 2026-01-23 11:02:27.380 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:02:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:02:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:28.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:02:29 np0005593232 nova_compute[250269]: 2026-01-23 11:02:29.023 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:02:29 np0005593232 nova_compute[250269]: 2026-01-23 11:02:29.023 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:02:29 np0005593232 nova_compute[250269]: 2026-01-23 11:02:29.024 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:02:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:29.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4100: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 125 KiB/s rd, 915 KiB/s wr, 28 op/s
Jan 23 06:02:29 np0005593232 nova_compute[250269]: 2026-01-23 11:02:29.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:02:29 np0005593232 nova_compute[250269]: 2026-01-23 11:02:29.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:02:29 np0005593232 nova_compute[250269]: 2026-01-23 11:02:29.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 06:02:29 np0005593232 nova_compute[250269]: 2026-01-23 11:02:29.492 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:30 np0005593232 nova_compute[250269]: 2026-01-23 11:02:30.617 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:02:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:30.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:02:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:31.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4101: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 928 KiB/s rd, 12 KiB/s wr, 40 op/s
Jan 23 06:02:32 np0005593232 nova_compute[250269]: 2026-01-23 11:02:32.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:02:32 np0005593232 nova_compute[250269]: 2026-01-23 11:02:32.294 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 06:02:32 np0005593232 nova_compute[250269]: 2026-01-23 11:02:32.294 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 06:02:32 np0005593232 nova_compute[250269]: 2026-01-23 11:02:32.318 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 06:02:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:02:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:32.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:02:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:02:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:33.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4102: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 06:02:34 np0005593232 nova_compute[250269]: 2026-01-23 11:02:34.493 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:02:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:34.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:02:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:02:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:35.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:02:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4103: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 06:02:35 np0005593232 nova_compute[250269]: 2026-01-23 11:02:35.618 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:36 np0005593232 nova_compute[250269]: 2026-01-23 11:02:36.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:02:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:36.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:37.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4104: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 06:02:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:02:37.384 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '95'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:02:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_11:02:37
Jan 23 06:02:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 06:02:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 06:02:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', '.rgw.root', '.mgr', 'default.rgw.log', 'volumes']
Jan 23 06:02:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 06:02:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:02:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:02:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:02:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:02:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:02:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:02:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:02:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:38.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 06:02:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:02:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 06:02:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:02:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:02:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:02:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:02:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:02:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:02:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:02:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:39.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4105: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 68 op/s
Jan 23 06:02:39 np0005593232 nova_compute[250269]: 2026-01-23 11:02:39.495 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:40 np0005593232 nova_compute[250269]: 2026-01-23 11:02:40.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:02:40 np0005593232 nova_compute[250269]: 2026-01-23 11:02:40.621 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:02:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:40.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:02:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:02:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:41.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:02:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4106: 321 pgs: 321 active+clean; 179 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 82 op/s
Jan 23 06:02:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:02:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:42.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:02:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:02:42.684 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:02:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:02:42.685 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:02:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:02:42.685 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:02:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:02:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:43.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4107: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 96 op/s
Jan 23 06:02:43 np0005593232 nova_compute[250269]: 2026-01-23 11:02:43.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:02:43 np0005593232 nova_compute[250269]: 2026-01-23 11:02:43.329 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:02:43 np0005593232 nova_compute[250269]: 2026-01-23 11:02:43.330 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:02:43 np0005593232 nova_compute[250269]: 2026-01-23 11:02:43.330 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:02:43 np0005593232 nova_compute[250269]: 2026-01-23 11:02:43.330 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 06:02:43 np0005593232 nova_compute[250269]: 2026-01-23 11:02:43.330 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:02:43 np0005593232 podman[412526]: 2026-01-23 11:02:43.440813387 +0000 UTC m=+0.094354451 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 23 06:02:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:02:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1787346162' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:02:43 np0005593232 nova_compute[250269]: 2026-01-23 11:02:43.776 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:02:43 np0005593232 nova_compute[250269]: 2026-01-23 11:02:43.961 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 06:02:43 np0005593232 nova_compute[250269]: 2026-01-23 11:02:43.962 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4114MB free_disk=20.955421447753906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 06:02:43 np0005593232 nova_compute[250269]: 2026-01-23 11:02:43.962 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:02:43 np0005593232 nova_compute[250269]: 2026-01-23 11:02:43.963 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:02:44 np0005593232 nova_compute[250269]: 2026-01-23 11:02:44.205 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 06:02:44 np0005593232 nova_compute[250269]: 2026-01-23 11:02:44.205 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 06:02:44 np0005593232 nova_compute[250269]: 2026-01-23 11:02:44.286 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing inventories for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 23 06:02:44 np0005593232 nova_compute[250269]: 2026-01-23 11:02:44.382 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating ProviderTree inventory for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 23 06:02:44 np0005593232 nova_compute[250269]: 2026-01-23 11:02:44.382 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 06:02:44 np0005593232 nova_compute[250269]: 2026-01-23 11:02:44.410 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing aggregate associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 23 06:02:44 np0005593232 nova_compute[250269]: 2026-01-23 11:02:44.433 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing trait associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 23 06:02:44 np0005593232 nova_compute[250269]: 2026-01-23 11:02:44.473 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:02:44 np0005593232 nova_compute[250269]: 2026-01-23 11:02:44.543 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:44.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:02:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2856261302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:02:44 np0005593232 nova_compute[250269]: 2026-01-23 11:02:44.943 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:02:44 np0005593232 nova_compute[250269]: 2026-01-23 11:02:44.948 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:02:44 np0005593232 nova_compute[250269]: 2026-01-23 11:02:44.974 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:02:44 np0005593232 nova_compute[250269]: 2026-01-23 11:02:44.976 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 06:02:44 np0005593232 nova_compute[250269]: 2026-01-23 11:02:44.976 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.014s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:02:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:45.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4108: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 23 06:02:45 np0005593232 nova_compute[250269]: 2026-01-23 11:02:45.648 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:46 np0005593232 podman[412600]: 2026-01-23 11:02:46.413028528 +0000 UTC m=+0.063981154 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 06:02:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:46.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:47.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4109: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 23 06:02:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 06:02:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:02:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:02:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:02:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021692301205099573 of space, bias 1.0, pg target 0.6507690361529872 quantized to 32 (current 32)
Jan 23 06:02:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:02:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 06:02:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:02:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:02:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:02:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:02:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:02:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:02:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:02:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:02:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:02:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:02:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:02:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:02:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:02:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:02:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:02:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:02:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:02:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:02:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:48.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:02:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:49.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4110: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 23 06:02:49 np0005593232 nova_compute[250269]: 2026-01-23 11:02:49.546 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:50 np0005593232 nova_compute[250269]: 2026-01-23 11:02:50.650 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:50.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:51.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4111: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 06:02:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:52.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:02:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:53.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4112: 321 pgs: 321 active+clean; 182 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 237 KiB/s rd, 2.0 MiB/s wr, 73 op/s
Jan 23 06:02:54 np0005593232 nova_compute[250269]: 2026-01-23 11:02:54.548 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:02:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:54.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:02:54 np0005593232 nova_compute[250269]: 2026-01-23 11:02:54.971 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:02:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:55.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4113: 321 pgs: 321 active+clean; 182 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 869 KiB/s wr, 31 op/s
Jan 23 06:02:55 np0005593232 nova_compute[250269]: 2026-01-23 11:02:55.690 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:02:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:56.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:02:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:57.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:02:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4114: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 23 06:02:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:02:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:02:58.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:02:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:02:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:02:59.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:02:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4115: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 311 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Jan 23 06:02:59 np0005593232 nova_compute[250269]: 2026-01-23 11:02:59.551 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:00 np0005593232 nova_compute[250269]: 2026-01-23 11:03:00.693 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:00.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:01.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4116: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 449 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 23 06:03:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:03:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:02.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:03:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:03:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:03:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:03.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:03:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4117: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 130 op/s
Jan 23 06:03:04 np0005593232 nova_compute[250269]: 2026-01-23 11:03:04.553 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:03:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:04.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:03:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:05.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4118: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 989 KiB/s wr, 100 op/s
Jan 23 06:03:05 np0005593232 nova_compute[250269]: 2026-01-23 11:03:05.697 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:06.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:07.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4119: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 989 KiB/s wr, 100 op/s
Jan 23 06:03:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:03:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:03:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:03:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:03:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:03:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:03:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:03:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:08.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:09.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4120: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 20 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Jan 23 06:03:09 np0005593232 nova_compute[250269]: 2026-01-23 11:03:09.555 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:10 np0005593232 nova_compute[250269]: 2026-01-23 11:03:10.699 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:10.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:03:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:11.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:03:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4121: 321 pgs: 321 active+clean; 184 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 614 KiB/s wr, 61 op/s
Jan 23 06:03:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:12.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:03:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:13.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4122: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Jan 23 06:03:14 np0005593232 podman[412710]: 2026-01-23 11:03:14.447770761 +0000 UTC m=+0.098078553 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 23 06:03:14 np0005593232 nova_compute[250269]: 2026-01-23 11:03:14.556 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:14.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:15 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 23 06:03:15 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 23 06:03:15 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 23 06:03:15 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 23 06:03:15 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Jan 23 06:03:15 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Jan 23 06:03:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:15.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4123: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 79 op/s
Jan 23 06:03:15 np0005593232 nova_compute[250269]: 2026-01-23 11:03:15.741 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 06:03:15 np0005593232 podman[413034]: 2026-01-23 11:03:15.891843869 +0000 UTC m=+0.024732763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:03:16 np0005593232 podman[413034]: 2026-01-23 11:03:16.021119601 +0000 UTC m=+0.154008515 container create e6f72fdf708c3c62363e507db2da397f3e6a80e7052cef53063b726eb644d06a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jang, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 23 06:03:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:03:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 06:03:16 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:03:16 np0005593232 systemd[1]: Started libpod-conmon-e6f72fdf708c3c62363e507db2da397f3e6a80e7052cef53063b726eb644d06a.scope.
Jan 23 06:03:16 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:03:16 np0005593232 podman[413034]: 2026-01-23 11:03:16.128699376 +0000 UTC m=+0.261588300 container init e6f72fdf708c3c62363e507db2da397f3e6a80e7052cef53063b726eb644d06a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:03:16 np0005593232 podman[413034]: 2026-01-23 11:03:16.136180373 +0000 UTC m=+0.269069247 container start e6f72fdf708c3c62363e507db2da397f3e6a80e7052cef53063b726eb644d06a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 06:03:16 np0005593232 nostalgic_jang[413050]: 167 167
Jan 23 06:03:16 np0005593232 systemd[1]: libpod-e6f72fdf708c3c62363e507db2da397f3e6a80e7052cef53063b726eb644d06a.scope: Deactivated successfully.
Jan 23 06:03:16 np0005593232 podman[413034]: 2026-01-23 11:03:16.192978618 +0000 UTC m=+0.325867532 container attach e6f72fdf708c3c62363e507db2da397f3e6a80e7052cef53063b726eb644d06a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jang, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:03:16 np0005593232 podman[413034]: 2026-01-23 11:03:16.194671035 +0000 UTC m=+0.327559929 container died e6f72fdf708c3c62363e507db2da397f3e6a80e7052cef53063b726eb644d06a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 23 06:03:16 np0005593232 systemd[1]: var-lib-containers-storage-overlay-0fd900c6e4688c7f0a8dd90d8a19068d540cd0233277a9442b75ee153a8efef3-merged.mount: Deactivated successfully.
Jan 23 06:03:16 np0005593232 podman[413034]: 2026-01-23 11:03:16.392504097 +0000 UTC m=+0.525392981 container remove e6f72fdf708c3c62363e507db2da397f3e6a80e7052cef53063b726eb644d06a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_jang, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:03:16 np0005593232 systemd[1]: libpod-conmon-e6f72fdf708c3c62363e507db2da397f3e6a80e7052cef53063b726eb644d06a.scope: Deactivated successfully.
Jan 23 06:03:16 np0005593232 podman[413077]: 2026-01-23 11:03:16.605305051 +0000 UTC m=+0.066520944 container create 4e4a59e261865638c46a1cd8a55a03e5b4b591105af8ef8b0429983a1630be05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 06:03:16 np0005593232 systemd[1]: Started libpod-conmon-4e4a59e261865638c46a1cd8a55a03e5b4b591105af8ef8b0429983a1630be05.scope.
Jan 23 06:03:16 np0005593232 podman[413077]: 2026-01-23 11:03:16.583315465 +0000 UTC m=+0.044531378 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:03:16 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:03:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c752c33ad25c04d7045165269a42b04c3fe1beda6a924b34f09dd3bbd7d238/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:03:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c752c33ad25c04d7045165269a42b04c3fe1beda6a924b34f09dd3bbd7d238/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:03:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c752c33ad25c04d7045165269a42b04c3fe1beda6a924b34f09dd3bbd7d238/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:03:16 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c752c33ad25c04d7045165269a42b04c3fe1beda6a924b34f09dd3bbd7d238/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:03:16 np0005593232 podman[413077]: 2026-01-23 11:03:16.716173107 +0000 UTC m=+0.177389050 container init 4e4a59e261865638c46a1cd8a55a03e5b4b591105af8ef8b0429983a1630be05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tu, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:03:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:16.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:16 np0005593232 podman[413091]: 2026-01-23 11:03:16.724923518 +0000 UTC m=+0.078624678 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:03:16 np0005593232 podman[413077]: 2026-01-23 11:03:16.726501612 +0000 UTC m=+0.187717515 container start 4e4a59e261865638c46a1cd8a55a03e5b4b591105af8ef8b0429983a1630be05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tu, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 23 06:03:16 np0005593232 podman[413077]: 2026-01-23 11:03:16.73115289 +0000 UTC m=+0.192368793 container attach 4e4a59e261865638c46a1cd8a55a03e5b4b591105af8ef8b0429983a1630be05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 06:03:17 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:03:17 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:03:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:17.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4124: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 107 op/s
Jan 23 06:03:17 np0005593232 kind_tu[413094]: [
Jan 23 06:03:17 np0005593232 kind_tu[413094]:    {
Jan 23 06:03:17 np0005593232 kind_tu[413094]:        "available": false,
Jan 23 06:03:17 np0005593232 kind_tu[413094]:        "ceph_device": false,
Jan 23 06:03:17 np0005593232 kind_tu[413094]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 23 06:03:17 np0005593232 kind_tu[413094]:        "lsm_data": {},
Jan 23 06:03:17 np0005593232 kind_tu[413094]:        "lvs": [],
Jan 23 06:03:17 np0005593232 kind_tu[413094]:        "path": "/dev/sr0",
Jan 23 06:03:17 np0005593232 kind_tu[413094]:        "rejected_reasons": [
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "Insufficient space (<5GB)",
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "Has a FileSystem"
Jan 23 06:03:17 np0005593232 kind_tu[413094]:        ],
Jan 23 06:03:17 np0005593232 kind_tu[413094]:        "sys_api": {
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "actuators": null,
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "device_nodes": "sr0",
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "devname": "sr0",
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "human_readable_size": "482.00 KB",
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "id_bus": "ata",
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "model": "QEMU DVD-ROM",
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "nr_requests": "2",
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "parent": "/dev/sr0",
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "partitions": {},
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "path": "/dev/sr0",
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "removable": "1",
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "rev": "2.5+",
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "ro": "0",
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "rotational": "1",
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "sas_address": "",
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "sas_device_handle": "",
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "scheduler_mode": "mq-deadline",
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "sectors": 0,
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "sectorsize": "2048",
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "size": 493568.0,
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "support_discard": "2048",
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "type": "disk",
Jan 23 06:03:17 np0005593232 kind_tu[413094]:            "vendor": "QEMU"
Jan 23 06:03:17 np0005593232 kind_tu[413094]:        }
Jan 23 06:03:17 np0005593232 kind_tu[413094]:    }
Jan 23 06:03:17 np0005593232 kind_tu[413094]: ]
Jan 23 06:03:18 np0005593232 systemd[1]: libpod-4e4a59e261865638c46a1cd8a55a03e5b4b591105af8ef8b0429983a1630be05.scope: Deactivated successfully.
Jan 23 06:03:18 np0005593232 systemd[1]: libpod-4e4a59e261865638c46a1cd8a55a03e5b4b591105af8ef8b0429983a1630be05.scope: Consumed 1.284s CPU time.
Jan 23 06:03:18 np0005593232 podman[413077]: 2026-01-23 11:03:18.024262356 +0000 UTC m=+1.485478239 container died 4e4a59e261865638c46a1cd8a55a03e5b4b591105af8ef8b0429983a1630be05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 06:03:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:03:18 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c6c752c33ad25c04d7045165269a42b04c3fe1beda6a924b34f09dd3bbd7d238-merged.mount: Deactivated successfully.
Jan 23 06:03:18 np0005593232 podman[413077]: 2026-01-23 11:03:18.667863424 +0000 UTC m=+2.129079307 container remove 4e4a59e261865638c46a1cd8a55a03e5b4b591105af8ef8b0429983a1630be05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_tu, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:03:18 np0005593232 systemd[1]: libpod-conmon-4e4a59e261865638c46a1cd8a55a03e5b4b591105af8ef8b0429983a1630be05.scope: Deactivated successfully.
Jan 23 06:03:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:18.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 06:03:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 06:03:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:19.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4125: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 122 op/s
Jan 23 06:03:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:03:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 06:03:19 np0005593232 nova_compute[250269]: 2026-01-23 11:03:19.558 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:03:19.615 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=96, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=95) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 06:03:19 np0005593232 nova_compute[250269]: 2026-01-23 11:03:19.616 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:03:19.616 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 06:03:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:03:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 06:03:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:03:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:20.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:20 np0005593232 nova_compute[250269]: 2026-01-23 11:03:20.742 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:03:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:03:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:03:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 06:03:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:03:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 06:03:20 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:03:20 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:03:20 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:03:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:03:20 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev dcb19e90-bdc9-4745-ad4d-b806c5f21430 does not exist
Jan 23 06:03:20 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev cab2807d-2cf8-4d8e-b02c-17d135369681 does not exist
Jan 23 06:03:20 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a472eb11-d07c-4205-a88b-fa6d3ad00378 does not exist
Jan 23 06:03:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 06:03:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 06:03:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 06:03:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:03:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:03:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:03:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:03:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:21.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:03:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4126: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 145 op/s
Jan 23 06:03:21 np0005593232 podman[414466]: 2026-01-23 11:03:21.499457301 +0000 UTC m=+0.020706331 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:03:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:03:21.617 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '96'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:03:21 np0005593232 podman[414466]: 2026-01-23 11:03:21.637542656 +0000 UTC m=+0.158791636 container create 6d773f5dabee65c259ef8ab0edb09af68ebf466c18a695dfbca95298490afb1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kalam, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 06:03:21 np0005593232 systemd[1]: Started libpod-conmon-6d773f5dabee65c259ef8ab0edb09af68ebf466c18a695dfbca95298490afb1c.scope.
Jan 23 06:03:21 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:03:21 np0005593232 podman[414466]: 2026-01-23 11:03:21.841575729 +0000 UTC m=+0.362824749 container init 6d773f5dabee65c259ef8ab0edb09af68ebf466c18a695dfbca95298490afb1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 06:03:21 np0005593232 podman[414466]: 2026-01-23 11:03:21.850761022 +0000 UTC m=+0.372010012 container start 6d773f5dabee65c259ef8ab0edb09af68ebf466c18a695dfbca95298490afb1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kalam, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:03:21 np0005593232 youthful_kalam[414482]: 167 167
Jan 23 06:03:21 np0005593232 systemd[1]: libpod-6d773f5dabee65c259ef8ab0edb09af68ebf466c18a695dfbca95298490afb1c.scope: Deactivated successfully.
Jan 23 06:03:22 np0005593232 podman[414466]: 2026-01-23 11:03:22.252543235 +0000 UTC m=+0.773792255 container attach 6d773f5dabee65c259ef8ab0edb09af68ebf466c18a695dfbca95298490afb1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kalam, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 23 06:03:22 np0005593232 podman[414466]: 2026-01-23 11:03:22.253924803 +0000 UTC m=+0.775173823 container died 6d773f5dabee65c259ef8ab0edb09af68ebf466c18a695dfbca95298490afb1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kalam, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 23 06:03:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:03:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:03:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:03:22 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:03:22 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7f5313f70d83ede8e4a83275b8dbd7b792f6bd451b9547772f2701cd57eb6d68-merged.mount: Deactivated successfully.
Jan 23 06:03:22 np0005593232 podman[414466]: 2026-01-23 11:03:22.340015546 +0000 UTC m=+0.861264536 container remove 6d773f5dabee65c259ef8ab0edb09af68ebf466c18a695dfbca95298490afb1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_kalam, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 23 06:03:22 np0005593232 systemd[1]: libpod-conmon-6d773f5dabee65c259ef8ab0edb09af68ebf466c18a695dfbca95298490afb1c.scope: Deactivated successfully.
Jan 23 06:03:22 np0005593232 podman[414508]: 2026-01-23 11:03:22.523635906 +0000 UTC m=+0.048087536 container create 78b22a14bfd721b4be5f9684cb9237b68f12ee19c539f43ade67f20551905822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bose, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:03:22 np0005593232 systemd[1]: Started libpod-conmon-78b22a14bfd721b4be5f9684cb9237b68f12ee19c539f43ade67f20551905822.scope.
Jan 23 06:03:22 np0005593232 podman[414508]: 2026-01-23 11:03:22.503532422 +0000 UTC m=+0.027984052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:03:22 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:03:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86764b666231624f437d2fef861282fe651e6d5bf3817c66c9d6843f3c390e4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:03:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86764b666231624f437d2fef861282fe651e6d5bf3817c66c9d6843f3c390e4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:03:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86764b666231624f437d2fef861282fe651e6d5bf3817c66c9d6843f3c390e4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:03:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86764b666231624f437d2fef861282fe651e6d5bf3817c66c9d6843f3c390e4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:03:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86764b666231624f437d2fef861282fe651e6d5bf3817c66c9d6843f3c390e4c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 06:03:22 np0005593232 podman[414508]: 2026-01-23 11:03:22.720096111 +0000 UTC m=+0.244547741 container init 78b22a14bfd721b4be5f9684cb9237b68f12ee19c539f43ade67f20551905822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bose, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:03:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:03:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:22.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:03:22 np0005593232 podman[414508]: 2026-01-23 11:03:22.734564309 +0000 UTC m=+0.259015909 container start 78b22a14bfd721b4be5f9684cb9237b68f12ee19c539f43ade67f20551905822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 06:03:22 np0005593232 podman[414508]: 2026-01-23 11:03:22.883072692 +0000 UTC m=+0.407524342 container attach 78b22a14bfd721b4be5f9684cb9237b68f12ee19c539f43ade67f20551905822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bose, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:03:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:23.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:03:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4127: 321 pgs: 321 active+clean; 165 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 383 KiB/s rd, 1.5 MiB/s wr, 210 op/s
Jan 23 06:03:23 np0005593232 busy_bose[414524]: --> passed data devices: 0 physical, 1 LVM
Jan 23 06:03:23 np0005593232 busy_bose[414524]: --> relative data size: 1.0
Jan 23 06:03:23 np0005593232 busy_bose[414524]: --> All data devices are unavailable
Jan 23 06:03:23 np0005593232 systemd[1]: libpod-78b22a14bfd721b4be5f9684cb9237b68f12ee19c539f43ade67f20551905822.scope: Deactivated successfully.
Jan 23 06:03:23 np0005593232 podman[414508]: 2026-01-23 11:03:23.608751052 +0000 UTC m=+1.133202672 container died 78b22a14bfd721b4be5f9684cb9237b68f12ee19c539f43ade67f20551905822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bose, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 06:03:23 np0005593232 systemd[1]: var-lib-containers-storage-overlay-86764b666231624f437d2fef861282fe651e6d5bf3817c66c9d6843f3c390e4c-merged.mount: Deactivated successfully.
Jan 23 06:03:23 np0005593232 podman[414508]: 2026-01-23 11:03:23.67583753 +0000 UTC m=+1.200289140 container remove 78b22a14bfd721b4be5f9684cb9237b68f12ee19c539f43ade67f20551905822 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:03:23 np0005593232 systemd[1]: libpod-conmon-78b22a14bfd721b4be5f9684cb9237b68f12ee19c539f43ade67f20551905822.scope: Deactivated successfully.
Jan 23 06:03:24 np0005593232 podman[414691]: 2026-01-23 11:03:24.313975867 +0000 UTC m=+0.036476486 container create d3d034fafbb5d6657cbb5a1256ebb676f75b8ce5eace4bfb8afbd9b53f824677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ishizaka, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 06:03:24 np0005593232 systemd[1]: Started libpod-conmon-d3d034fafbb5d6657cbb5a1256ebb676f75b8ce5eace4bfb8afbd9b53f824677.scope.
Jan 23 06:03:24 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:03:24 np0005593232 podman[414691]: 2026-01-23 11:03:24.298386218 +0000 UTC m=+0.020886857 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:03:24 np0005593232 podman[414691]: 2026-01-23 11:03:24.396015698 +0000 UTC m=+0.118516337 container init d3d034fafbb5d6657cbb5a1256ebb676f75b8ce5eace4bfb8afbd9b53f824677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 23 06:03:24 np0005593232 podman[414691]: 2026-01-23 11:03:24.402641751 +0000 UTC m=+0.125142390 container start d3d034fafbb5d6657cbb5a1256ebb676f75b8ce5eace4bfb8afbd9b53f824677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 06:03:24 np0005593232 podman[414691]: 2026-01-23 11:03:24.405785668 +0000 UTC m=+0.128286287 container attach d3d034fafbb5d6657cbb5a1256ebb676f75b8ce5eace4bfb8afbd9b53f824677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 06:03:24 np0005593232 xenodochial_ishizaka[414707]: 167 167
Jan 23 06:03:24 np0005593232 systemd[1]: libpod-d3d034fafbb5d6657cbb5a1256ebb676f75b8ce5eace4bfb8afbd9b53f824677.scope: Deactivated successfully.
Jan 23 06:03:24 np0005593232 podman[414691]: 2026-01-23 11:03:24.408305937 +0000 UTC m=+0.130806556 container died d3d034fafbb5d6657cbb5a1256ebb676f75b8ce5eace4bfb8afbd9b53f824677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 06:03:24 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d456b335e6204d57ed2a80d28dcd6794acfe66fd218802174d82329657218866-merged.mount: Deactivated successfully.
Jan 23 06:03:24 np0005593232 podman[414691]: 2026-01-23 11:03:24.442415847 +0000 UTC m=+0.164916466 container remove d3d034fafbb5d6657cbb5a1256ebb676f75b8ce5eace4bfb8afbd9b53f824677 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 06:03:24 np0005593232 systemd[1]: libpod-conmon-d3d034fafbb5d6657cbb5a1256ebb676f75b8ce5eace4bfb8afbd9b53f824677.scope: Deactivated successfully.
Jan 23 06:03:24 np0005593232 nova_compute[250269]: 2026-01-23 11:03:24.603 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:24 np0005593232 podman[414730]: 2026-01-23 11:03:24.644593069 +0000 UTC m=+0.091557754 container create b8d0adc9ad1ec2d9eeeba7586741140fa85345423ef08c2926110106a0c464b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 06:03:24 np0005593232 systemd[1]: Started libpod-conmon-b8d0adc9ad1ec2d9eeeba7586741140fa85345423ef08c2926110106a0c464b2.scope.
Jan 23 06:03:24 np0005593232 podman[414730]: 2026-01-23 11:03:24.621130322 +0000 UTC m=+0.068095057 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:03:24 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:03:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2173076d5b50fa91cdd5699a94d433e5920e3792826a5e9150759e83fa1c8613/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:03:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2173076d5b50fa91cdd5699a94d433e5920e3792826a5e9150759e83fa1c8613/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:03:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2173076d5b50fa91cdd5699a94d433e5920e3792826a5e9150759e83fa1c8613/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:03:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2173076d5b50fa91cdd5699a94d433e5920e3792826a5e9150759e83fa1c8613/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:03:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:24.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:24 np0005593232 podman[414730]: 2026-01-23 11:03:24.734076635 +0000 UTC m=+0.181041300 container init b8d0adc9ad1ec2d9eeeba7586741140fa85345423ef08c2926110106a0c464b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 23 06:03:24 np0005593232 podman[414730]: 2026-01-23 11:03:24.740501642 +0000 UTC m=+0.187466287 container start b8d0adc9ad1ec2d9eeeba7586741140fa85345423ef08c2926110106a0c464b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mclean, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:03:24 np0005593232 podman[414730]: 2026-01-23 11:03:24.743821694 +0000 UTC m=+0.190786369 container attach b8d0adc9ad1ec2d9eeeba7586741140fa85345423ef08c2926110106a0c464b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mclean, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:03:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:03:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:25.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:03:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4128: 321 pgs: 321 active+clean; 165 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 88 KiB/s rd, 13 KiB/s wr, 136 op/s
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]: {
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:    "0": [
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:        {
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:            "devices": [
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:                "/dev/loop3"
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:            ],
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:            "lv_name": "ceph_lv0",
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:            "lv_size": "7511998464",
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:            "name": "ceph_lv0",
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:            "tags": {
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:                "ceph.cephx_lockbox_secret": "",
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:                "ceph.cluster_name": "ceph",
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:                "ceph.crush_device_class": "",
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:                "ceph.encrypted": "0",
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:                "ceph.osd_id": "0",
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:                "ceph.type": "block",
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:                "ceph.vdo": "0"
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:            },
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:            "type": "block",
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:            "vg_name": "ceph_vg0"
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:        }
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]:    ]
Jan 23 06:03:25 np0005593232 fervent_mclean[414747]: }
Jan 23 06:03:25 np0005593232 systemd[1]: libpod-b8d0adc9ad1ec2d9eeeba7586741140fa85345423ef08c2926110106a0c464b2.scope: Deactivated successfully.
Jan 23 06:03:25 np0005593232 podman[414730]: 2026-01-23 11:03:25.488327861 +0000 UTC m=+0.935292526 container died b8d0adc9ad1ec2d9eeeba7586741140fa85345423ef08c2926110106a0c464b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 23 06:03:25 np0005593232 systemd[1]: var-lib-containers-storage-overlay-2173076d5b50fa91cdd5699a94d433e5920e3792826a5e9150759e83fa1c8613-merged.mount: Deactivated successfully.
Jan 23 06:03:25 np0005593232 podman[414730]: 2026-01-23 11:03:25.548836199 +0000 UTC m=+0.995800854 container remove b8d0adc9ad1ec2d9eeeba7586741140fa85345423ef08c2926110106a0c464b2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_mclean, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 06:03:25 np0005593232 systemd[1]: libpod-conmon-b8d0adc9ad1ec2d9eeeba7586741140fa85345423ef08c2926110106a0c464b2.scope: Deactivated successfully.
Jan 23 06:03:25 np0005593232 nova_compute[250269]: 2026-01-23 11:03:25.754 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:26 np0005593232 podman[414910]: 2026-01-23 11:03:26.184472256 +0000 UTC m=+0.040075845 container create a201e1f6ce50ba122136399e4cddeef5608a9cfad59ef6f692af2838825758a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_torvalds, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:03:26 np0005593232 systemd[1]: Started libpod-conmon-a201e1f6ce50ba122136399e4cddeef5608a9cfad59ef6f692af2838825758a8.scope.
Jan 23 06:03:26 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:03:26 np0005593232 podman[414910]: 2026-01-23 11:03:26.259183335 +0000 UTC m=+0.114786924 container init a201e1f6ce50ba122136399e4cddeef5608a9cfad59ef6f692af2838825758a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 23 06:03:26 np0005593232 podman[414910]: 2026-01-23 11:03:26.162972004 +0000 UTC m=+0.018575613 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:03:26 np0005593232 podman[414910]: 2026-01-23 11:03:26.267153295 +0000 UTC m=+0.122756904 container start a201e1f6ce50ba122136399e4cddeef5608a9cfad59ef6f692af2838825758a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:03:26 np0005593232 podman[414910]: 2026-01-23 11:03:26.270839097 +0000 UTC m=+0.126442686 container attach a201e1f6ce50ba122136399e4cddeef5608a9cfad59ef6f692af2838825758a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Jan 23 06:03:26 np0005593232 systemd[1]: libpod-a201e1f6ce50ba122136399e4cddeef5608a9cfad59ef6f692af2838825758a8.scope: Deactivated successfully.
Jan 23 06:03:26 np0005593232 quizzical_torvalds[414926]: 167 167
Jan 23 06:03:26 np0005593232 conmon[414926]: conmon a201e1f6ce50ba122136 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a201e1f6ce50ba122136399e4cddeef5608a9cfad59ef6f692af2838825758a8.scope/container/memory.events
Jan 23 06:03:26 np0005593232 podman[414910]: 2026-01-23 11:03:26.27384647 +0000 UTC m=+0.129450059 container died a201e1f6ce50ba122136399e4cddeef5608a9cfad59ef6f692af2838825758a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_torvalds, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 23 06:03:26 np0005593232 systemd[1]: var-lib-containers-storage-overlay-cd8c87f9b4f8094e456ecd3aba93a06ec8499db88a452b06a3e642c397b23a75-merged.mount: Deactivated successfully.
Jan 23 06:03:26 np0005593232 podman[414910]: 2026-01-23 11:03:26.335698444 +0000 UTC m=+0.191302043 container remove a201e1f6ce50ba122136399e4cddeef5608a9cfad59ef6f692af2838825758a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_torvalds, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 06:03:26 np0005593232 systemd[1]: libpod-conmon-a201e1f6ce50ba122136399e4cddeef5608a9cfad59ef6f692af2838825758a8.scope: Deactivated successfully.
Jan 23 06:03:26 np0005593232 podman[414952]: 2026-01-23 11:03:26.476121074 +0000 UTC m=+0.020975459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:03:26 np0005593232 podman[414952]: 2026-01-23 11:03:26.584943563 +0000 UTC m=+0.129797928 container create 33055d6cb0f4270fcb7eff2d789e579c45a2b7392a3c9cf11882b487da9eb58e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 06:03:26 np0005593232 systemd[1]: Started libpod-conmon-33055d6cb0f4270fcb7eff2d789e579c45a2b7392a3c9cf11882b487da9eb58e.scope.
Jan 23 06:03:26 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:03:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac3beb7309895358a499034422cc7d5119bf650134af336394172dbe4a9af7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:03:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac3beb7309895358a499034422cc7d5119bf650134af336394172dbe4a9af7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:03:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac3beb7309895358a499034422cc7d5119bf650134af336394172dbe4a9af7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:03:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.007000193s ======
Jan 23 06:03:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:26.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.007000193s
Jan 23 06:03:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bac3beb7309895358a499034422cc7d5119bf650134af336394172dbe4a9af7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:03:27 np0005593232 podman[414952]: 2026-01-23 11:03:27.020092646 +0000 UTC m=+0.564947031 container init 33055d6cb0f4270fcb7eff2d789e579c45a2b7392a3c9cf11882b487da9eb58e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_franklin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 23 06:03:27 np0005593232 podman[414952]: 2026-01-23 11:03:27.034331228 +0000 UTC m=+0.579185593 container start 33055d6cb0f4270fcb7eff2d789e579c45a2b7392a3c9cf11882b487da9eb58e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_franklin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:03:27 np0005593232 podman[414952]: 2026-01-23 11:03:27.047927833 +0000 UTC m=+0.592782238 container attach 33055d6cb0f4270fcb7eff2d789e579c45a2b7392a3c9cf11882b487da9eb58e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:03:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:27.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4129: 321 pgs: 321 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 133 KiB/s rd, 13 KiB/s wr, 208 op/s
Jan 23 06:03:27 np0005593232 reverent_franklin[414968]: {
Jan 23 06:03:27 np0005593232 reverent_franklin[414968]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 06:03:27 np0005593232 reverent_franklin[414968]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:03:27 np0005593232 reverent_franklin[414968]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 06:03:27 np0005593232 reverent_franklin[414968]:        "osd_id": 0,
Jan 23 06:03:27 np0005593232 reverent_franklin[414968]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:03:27 np0005593232 reverent_franklin[414968]:        "type": "bluestore"
Jan 23 06:03:27 np0005593232 reverent_franklin[414968]:    }
Jan 23 06:03:27 np0005593232 reverent_franklin[414968]: }
Jan 23 06:03:27 np0005593232 systemd[1]: libpod-33055d6cb0f4270fcb7eff2d789e579c45a2b7392a3c9cf11882b487da9eb58e.scope: Deactivated successfully.
Jan 23 06:03:27 np0005593232 podman[414952]: 2026-01-23 11:03:27.963730362 +0000 UTC m=+1.508584767 container died 33055d6cb0f4270fcb7eff2d789e579c45a2b7392a3c9cf11882b487da9eb58e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_franklin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Jan 23 06:03:27 np0005593232 systemd[1]: var-lib-containers-storage-overlay-bac3beb7309895358a499034422cc7d5119bf650134af336394172dbe4a9af7e-merged.mount: Deactivated successfully.
Jan 23 06:03:28 np0005593232 podman[414952]: 2026-01-23 11:03:28.029244258 +0000 UTC m=+1.574098623 container remove 33055d6cb0f4270fcb7eff2d789e579c45a2b7392a3c9cf11882b487da9eb58e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_franklin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:03:28 np0005593232 systemd[1]: libpod-conmon-33055d6cb0f4270fcb7eff2d789e579c45a2b7392a3c9cf11882b487da9eb58e.scope: Deactivated successfully.
Jan 23 06:03:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 06:03:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:03:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 06:03:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:03:28 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev eea8ae5b-a5d0-4a22-8c43-ab2cb1f24570 does not exist
Jan 23 06:03:28 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1cf89c42-ae8d-47c3-9633-0fff8d966ebb does not exist
Jan 23 06:03:28 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 3d75e99a-351e-4ea7-83e6-a4f02dc90f0d does not exist
Jan 23 06:03:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:03:28 np0005593232 nova_compute[250269]: 2026-01-23 11:03:28.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:03:28 np0005593232 nova_compute[250269]: 2026-01-23 11:03:28.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:03:28 np0005593232 nova_compute[250269]: 2026-01-23 11:03:28.294 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:03:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:03:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:03:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:03:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:28.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:03:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:29.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4130: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 118 KiB/s rd, 13 KiB/s wr, 193 op/s
Jan 23 06:03:29 np0005593232 nova_compute[250269]: 2026-01-23 11:03:29.645 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:30 np0005593232 nova_compute[250269]: 2026-01-23 11:03:30.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:03:30 np0005593232 nova_compute[250269]: 2026-01-23 11:03:30.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:03:30 np0005593232 nova_compute[250269]: 2026-01-23 11:03:30.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 06:03:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:03:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:30.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:03:30 np0005593232 nova_compute[250269]: 2026-01-23 11:03:30.800 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:03:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:31.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:03:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4131: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 109 KiB/s rd, 2.2 KiB/s wr, 177 op/s
Jan 23 06:03:32 np0005593232 nova_compute[250269]: 2026-01-23 11:03:32.294 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:03:32 np0005593232 nova_compute[250269]: 2026-01-23 11:03:32.294 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 06:03:32 np0005593232 nova_compute[250269]: 2026-01-23 11:03:32.294 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 06:03:32 np0005593232 nova_compute[250269]: 2026-01-23 11:03:32.313 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 06:03:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:32.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:03:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:03:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:33.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:03:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4132: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 96 KiB/s rd, 1.2 KiB/s wr, 155 op/s
Jan 23 06:03:34 np0005593232 nova_compute[250269]: 2026-01-23 11:03:34.654 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:03:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:34.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:03:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:03:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:35.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:03:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4133: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 852 B/s wr, 84 op/s
Jan 23 06:03:35 np0005593232 nova_compute[250269]: 2026-01-23 11:03:35.803 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:36.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:37.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4134: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 53 KiB/s rd, 852 B/s wr, 84 op/s
Jan 23 06:03:37 np0005593232 nova_compute[250269]: 2026-01-23 11:03:37.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:03:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_11:03:37
Jan 23 06:03:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 06:03:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 06:03:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'backups', '.rgw.root', 'vms']
Jan 23 06:03:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 06:03:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:03:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:03:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:03:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:03:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:03:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:03:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:03:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:03:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:38.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:03:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 06:03:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 06:03:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:03:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:03:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:03:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:03:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:03:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:03:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:03:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:03:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:03:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:39.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:03:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4135: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.4 KiB/s rd, 0 B/s wr, 13 op/s
Jan 23 06:03:39 np0005593232 nova_compute[250269]: 2026-01-23 11:03:39.657 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:40.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:40 np0005593232 nova_compute[250269]: 2026-01-23 11:03:40.804 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:03:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:41.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:03:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4136: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:03:41 np0005593232 nova_compute[250269]: 2026-01-23 11:03:41.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:03:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:03:42.685 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:03:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:03:42.686 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:03:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:03:42.686 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:03:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:03:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:42.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:03:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:03:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:03:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:43.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:03:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4137: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:03:44 np0005593232 nova_compute[250269]: 2026-01-23 11:03:44.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:03:44 np0005593232 nova_compute[250269]: 2026-01-23 11:03:44.332 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:03:44 np0005593232 nova_compute[250269]: 2026-01-23 11:03:44.332 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:03:44 np0005593232 nova_compute[250269]: 2026-01-23 11:03:44.332 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:03:44 np0005593232 nova_compute[250269]: 2026-01-23 11:03:44.332 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 06:03:44 np0005593232 nova_compute[250269]: 2026-01-23 11:03:44.333 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:03:44 np0005593232 nova_compute[250269]: 2026-01-23 11:03:44.659 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:44.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:03:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3766397968' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:03:44 np0005593232 nova_compute[250269]: 2026-01-23 11:03:44.876 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:03:45 np0005593232 nova_compute[250269]: 2026-01-23 11:03:45.058 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 06:03:45 np0005593232 nova_compute[250269]: 2026-01-23 11:03:45.060 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4109MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 06:03:45 np0005593232 nova_compute[250269]: 2026-01-23 11:03:45.060 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:03:45 np0005593232 nova_compute[250269]: 2026-01-23 11:03:45.060 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:03:45 np0005593232 nova_compute[250269]: 2026-01-23 11:03:45.136 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 06:03:45 np0005593232 nova_compute[250269]: 2026-01-23 11:03:45.137 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 06:03:45 np0005593232 nova_compute[250269]: 2026-01-23 11:03:45.155 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:03:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:03:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:45.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:03:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4138: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:03:45 np0005593232 podman[415152]: 2026-01-23 11:03:45.498471309 +0000 UTC m=+0.159562439 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 06:03:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:03:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/827555710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:03:45 np0005593232 nova_compute[250269]: 2026-01-23 11:03:45.674 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:03:45 np0005593232 nova_compute[250269]: 2026-01-23 11:03:45.684 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:03:45 np0005593232 nova_compute[250269]: 2026-01-23 11:03:45.709 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:03:45 np0005593232 nova_compute[250269]: 2026-01-23 11:03:45.712 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 06:03:45 np0005593232 nova_compute[250269]: 2026-01-23 11:03:45.713 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.653s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:03:45 np0005593232 nova_compute[250269]: 2026-01-23 11:03:45.806 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:46.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:47.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4139: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:03:47 np0005593232 podman[415181]: 2026-01-23 11:03:47.413580957 +0000 UTC m=+0.064425476 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 23 06:03:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 06:03:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:03:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:03:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:03:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:03:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:03:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 06:03:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:03:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:03:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:03:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:03:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:03:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:03:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:03:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:03:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:03:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:03:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:03:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:03:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:03:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:03:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:03:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:03:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:03:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:48.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:49.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4140: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:03:49 np0005593232 nova_compute[250269]: 2026-01-23 11:03:49.662 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:50.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:50 np0005593232 nova_compute[250269]: 2026-01-23 11:03:50.808 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:03:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:51.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:03:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4141: 321 pgs: 321 active+clean; 147 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 341 B/s rd, 1.4 MiB/s wr, 2 op/s
Jan 23 06:03:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:52.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:03:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:53.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4142: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Jan 23 06:03:54 np0005593232 nova_compute[250269]: 2026-01-23 11:03:54.664 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:54.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:55.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4143: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Jan 23 06:03:55 np0005593232 nova_compute[250269]: 2026-01-23 11:03:55.811 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:03:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:56.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:57.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4144: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 06:03:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:03:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:03:58.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:03:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:03:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:03:59.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:03:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4145: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 06:03:59 np0005593232 nova_compute[250269]: 2026-01-23 11:03:59.666 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:04:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:00.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:04:00 np0005593232 nova_compute[250269]: 2026-01-23 11:04:00.812 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:01.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4146: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 78 op/s
Jan 23 06:04:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:04:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:02.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:04:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:04:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:03.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4147: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 393 KiB/s wr, 98 op/s
Jan 23 06:04:04 np0005593232 nova_compute[250269]: 2026-01-23 11:04:04.668 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:04:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:04.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:04:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4148: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Jan 23 06:04:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:05.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:05 np0005593232 nova_compute[250269]: 2026-01-23 11:04:05.814 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:06.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4149: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Jan 23 06:04:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:07.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:04:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:04:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:04:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:04:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:04:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:04:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:04:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:08.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4150: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 06:04:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:09.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:09 np0005593232 nova_compute[250269]: 2026-01-23 11:04:09.670 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:04:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:10.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:04:10 np0005593232 nova_compute[250269]: 2026-01-23 11:04:10.817 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4151: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 06:04:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:11.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #210. Immutable memtables: 0.
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:04:12.632909) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 131] Flushing memtable with next log file: 210
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166252632993, "job": 131, "event": "flush_started", "num_memtables": 1, "num_entries": 2102, "num_deletes": 251, "total_data_size": 4008448, "memory_usage": 4071568, "flush_reason": "Manual Compaction"}
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 131] Level-0 flush table #211: started
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166252666452, "cf_name": "default", "job": 131, "event": "table_file_creation", "file_number": 211, "file_size": 3896466, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 89181, "largest_seqno": 91282, "table_properties": {"data_size": 3886792, "index_size": 6167, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19489, "raw_average_key_size": 20, "raw_value_size": 3867713, "raw_average_value_size": 4033, "num_data_blocks": 269, "num_entries": 959, "num_filter_entries": 959, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769166039, "oldest_key_time": 1769166039, "file_creation_time": 1769166252, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 211, "seqno_to_time_mapping": "N/A"}}
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 131] Flush lasted 33584 microseconds, and 8384 cpu microseconds.
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:04:12.666500) [db/flush_job.cc:967] [default] [JOB 131] Level-0 flush table #211: 3896466 bytes OK
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:04:12.666523) [db/memtable_list.cc:519] [default] Level-0 commit table #211 started
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:04:12.668205) [db/memtable_list.cc:722] [default] Level-0 commit table #211: memtable #1 done
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:04:12.668222) EVENT_LOG_v1 {"time_micros": 1769166252668217, "job": 131, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:04:12.668241) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 131] Try to delete WAL files size 3999759, prev total WAL file size 3999759, number of live WAL files 2.
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000207.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:04:12.669701) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038373835' seq:72057594037927935, type:22 .. '7061786F730039303337' seq:0, type:0; will stop at (end)
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 132] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 131 Base level 0, inputs: [211(3805KB)], [209(11MB)]
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166252669730, "job": 132, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [211], "files_L6": [209], "score": -1, "input_data_size": 16318450, "oldest_snapshot_seqno": -1}
Jan 23 06:04:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:12.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 132] Generated table #212: 11613 keys, 14318341 bytes, temperature: kUnknown
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166252822028, "cf_name": "default", "job": 132, "event": "table_file_creation", "file_number": 212, "file_size": 14318341, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14244717, "index_size": 43495, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29061, "raw_key_size": 306490, "raw_average_key_size": 26, "raw_value_size": 14043279, "raw_average_value_size": 1209, "num_data_blocks": 1651, "num_entries": 11613, "num_filter_entries": 11613, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769166252, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 212, "seqno_to_time_mapping": "N/A"}}
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:04:12.822339) [db/compaction/compaction_job.cc:1663] [default] [JOB 132] Compacted 1@0 + 1@6 files to L6 => 14318341 bytes
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:04:12.844114) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 107.1 rd, 93.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 11.8 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(7.9) write-amplify(3.7) OK, records in: 12132, records dropped: 519 output_compression: NoCompression
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:04:12.844154) EVENT_LOG_v1 {"time_micros": 1769166252844140, "job": 132, "event": "compaction_finished", "compaction_time_micros": 152427, "compaction_time_cpu_micros": 35565, "output_level": 6, "num_output_files": 1, "total_output_size": 14318341, "num_input_records": 12132, "num_output_records": 11613, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000211.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166252844978, "job": 132, "event": "table_file_deletion", "file_number": 211}
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000209.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166252847061, "job": 132, "event": "table_file_deletion", "file_number": 209}
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:04:12.669620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:04:12.847100) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:04:12.847104) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:04:12.847106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:04:12.847108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:04:12 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:04:12.847110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:04:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4152: 321 pgs: 321 active+clean; 171 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 548 KiB/s rd, 191 KiB/s wr, 28 op/s
Jan 23 06:04:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:04:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:04:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:13.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:04:14 np0005593232 nova_compute[250269]: 2026-01-23 11:04:14.671 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:14.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4153: 321 pgs: 321 active+clean; 171 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 179 KiB/s wr, 6 op/s
Jan 23 06:04:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:04:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:15.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:04:15 np0005593232 nova_compute[250269]: 2026-01-23 11:04:15.819 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:16 np0005593232 podman[415292]: 2026-01-23 11:04:16.343025417 +0000 UTC m=+0.091610806 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:04:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:16.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4154: 321 pgs: 321 active+clean; 184 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 275 KiB/s rd, 1.2 MiB/s wr, 41 op/s
Jan 23 06:04:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:17.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:04:18 np0005593232 podman[415345]: 2026-01-23 11:04:18.401466937 +0000 UTC m=+0.054321588 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 06:04:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:18.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4155: 321 pgs: 321 active+clean; 193 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Jan 23 06:04:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:19.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:19 np0005593232 nova_compute[250269]: 2026-01-23 11:04:19.673 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:20.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:20 np0005593232 nova_compute[250269]: 2026-01-23 11:04:20.820 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4156: 321 pgs: 321 active+clean; 195 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 23 06:04:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:04:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:21.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:04:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:22.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:04:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4157: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 392 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 23 06:04:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:23.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:24 np0005593232 nova_compute[250269]: 2026-01-23 11:04:24.676 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:24.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4158: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 374 KiB/s rd, 2.0 MiB/s wr, 60 op/s
Jan 23 06:04:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:25.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:25 np0005593232 nova_compute[250269]: 2026-01-23 11:04:25.822 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:04:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:26.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:04:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4159: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 375 KiB/s rd, 2.0 MiB/s wr, 61 op/s
Jan 23 06:04:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:27.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:04:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:28.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4160: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 117 KiB/s rd, 1000 KiB/s wr, 27 op/s
Jan 23 06:04:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:04:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:29.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:04:29 np0005593232 nova_compute[250269]: 2026-01-23 11:04:29.677 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 06:04:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:04:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 06:04:30 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:04:30 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:04:30 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:04:30 np0005593232 nova_compute[250269]: 2026-01-23 11:04:30.709 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:04:30 np0005593232 nova_compute[250269]: 2026-01-23 11:04:30.709 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:04:30 np0005593232 nova_compute[250269]: 2026-01-23 11:04:30.709 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:04:30 np0005593232 nova_compute[250269]: 2026-01-23 11:04:30.710 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:04:30 np0005593232 nova_compute[250269]: 2026-01-23 11:04:30.710 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:04:30 np0005593232 nova_compute[250269]: 2026-01-23 11:04:30.710 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 06:04:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:30.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:30 np0005593232 nova_compute[250269]: 2026-01-23 11:04:30.878 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4161: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 63 KiB/s rd, 42 KiB/s wr, 14 op/s
Jan 23 06:04:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:31.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:32.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:04:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4162: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 46 KiB/s wr, 12 op/s
Jan 23 06:04:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:04:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:33.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:04:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 06:04:33 np0005593232 nova_compute[250269]: 2026-01-23 11:04:33.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:04:33 np0005593232 nova_compute[250269]: 2026-01-23 11:04:33.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 06:04:33 np0005593232 nova_compute[250269]: 2026-01-23 11:04:33.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 06:04:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:04:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 06:04:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:04:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:04:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:04:33 np0005593232 nova_compute[250269]: 2026-01-23 11:04:33.314 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 06:04:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 06:04:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:04:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 06:04:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:04:33 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c453fe99-cba3-4642-bd69-e19b53dfa172 does not exist
Jan 23 06:04:33 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 9717de91-e586-47ea-94f5-5a7ac9c35e35 does not exist
Jan 23 06:04:33 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 02de668f-7432-4f9b-aa35-7ba43a68bf21 does not exist
Jan 23 06:04:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 06:04:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 06:04:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 06:04:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:04:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:04:33 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:04:33 np0005593232 nova_compute[250269]: 2026-01-23 11:04:33.377 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:04:33.379 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=97, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=96) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 06:04:33 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:04:33.380 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 06:04:33 np0005593232 podman[415644]: 2026-01-23 11:04:33.982965511 +0000 UTC m=+0.048682702 container create ca9bd872d1ea52f809d61fec55395f14036e0dfaafe31771b9a7833ae22ffc1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dhawan, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 23 06:04:34 np0005593232 systemd[1]: Started libpod-conmon-ca9bd872d1ea52f809d61fec55395f14036e0dfaafe31771b9a7833ae22ffc1c.scope.
Jan 23 06:04:34 np0005593232 podman[415644]: 2026-01-23 11:04:33.960037309 +0000 UTC m=+0.025754490 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:04:34 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:04:34 np0005593232 podman[415644]: 2026-01-23 11:04:34.081535408 +0000 UTC m=+0.147252599 container init ca9bd872d1ea52f809d61fec55395f14036e0dfaafe31771b9a7833ae22ffc1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dhawan, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 06:04:34 np0005593232 podman[415644]: 2026-01-23 11:04:34.091588435 +0000 UTC m=+0.157305606 container start ca9bd872d1ea52f809d61fec55395f14036e0dfaafe31771b9a7833ae22ffc1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dhawan, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:04:34 np0005593232 podman[415644]: 2026-01-23 11:04:34.095834792 +0000 UTC m=+0.161551993 container attach ca9bd872d1ea52f809d61fec55395f14036e0dfaafe31771b9a7833ae22ffc1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dhawan, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:04:34 np0005593232 systemd[1]: libpod-ca9bd872d1ea52f809d61fec55395f14036e0dfaafe31771b9a7833ae22ffc1c.scope: Deactivated successfully.
Jan 23 06:04:34 np0005593232 distracted_dhawan[415660]: 167 167
Jan 23 06:04:34 np0005593232 conmon[415660]: conmon ca9bd872d1ea52f809d6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ca9bd872d1ea52f809d61fec55395f14036e0dfaafe31771b9a7833ae22ffc1c.scope/container/memory.events
Jan 23 06:04:34 np0005593232 podman[415644]: 2026-01-23 11:04:34.100757117 +0000 UTC m=+0.166474288 container died ca9bd872d1ea52f809d61fec55395f14036e0dfaafe31771b9a7833ae22ffc1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dhawan, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 06:04:34 np0005593232 systemd[1]: var-lib-containers-storage-overlay-3647ea1b6412f74ea1cd8ffe24343547788d78f7dc348d5d77eeab9e72424d4c-merged.mount: Deactivated successfully.
Jan 23 06:04:34 np0005593232 podman[415644]: 2026-01-23 11:04:34.142120597 +0000 UTC m=+0.207837768 container remove ca9bd872d1ea52f809d61fec55395f14036e0dfaafe31771b9a7833ae22ffc1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:04:34 np0005593232 systemd[1]: libpod-conmon-ca9bd872d1ea52f809d61fec55395f14036e0dfaafe31771b9a7833ae22ffc1c.scope: Deactivated successfully.
Jan 23 06:04:34 np0005593232 podman[415682]: 2026-01-23 11:04:34.300863252 +0000 UTC m=+0.042400899 container create fbcb50cca3507d0f603c017eb112df307e06c11abe12b2ba79d81d68afa84189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nightingale, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:04:34 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:04:34 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:04:34 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:04:34 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:04:34 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:04:34 np0005593232 systemd[1]: Started libpod-conmon-fbcb50cca3507d0f603c017eb112df307e06c11abe12b2ba79d81d68afa84189.scope.
Jan 23 06:04:34 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:04:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b5badbd33228173bf25803c7bc8a274a34b3053969b1cfe82057d9c47495a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:04:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b5badbd33228173bf25803c7bc8a274a34b3053969b1cfe82057d9c47495a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:04:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b5badbd33228173bf25803c7bc8a274a34b3053969b1cfe82057d9c47495a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:04:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b5badbd33228173bf25803c7bc8a274a34b3053969b1cfe82057d9c47495a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:04:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b5badbd33228173bf25803c7bc8a274a34b3053969b1cfe82057d9c47495a5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 06:04:34 np0005593232 podman[415682]: 2026-01-23 11:04:34.282325011 +0000 UTC m=+0.023862678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:04:34 np0005593232 podman[415682]: 2026-01-23 11:04:34.385741091 +0000 UTC m=+0.127278768 container init fbcb50cca3507d0f603c017eb112df307e06c11abe12b2ba79d81d68afa84189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Jan 23 06:04:34 np0005593232 podman[415682]: 2026-01-23 11:04:34.397089894 +0000 UTC m=+0.138627531 container start fbcb50cca3507d0f603c017eb112df307e06c11abe12b2ba79d81d68afa84189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nightingale, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:04:34 np0005593232 podman[415682]: 2026-01-23 11:04:34.401027613 +0000 UTC m=+0.142565250 container attach fbcb50cca3507d0f603c017eb112df307e06c11abe12b2ba79d81d68afa84189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:04:34 np0005593232 nova_compute[250269]: 2026-01-23 11:04:34.679 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:34.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:35 np0005593232 frosty_nightingale[415698]: --> passed data devices: 0 physical, 1 LVM
Jan 23 06:04:35 np0005593232 frosty_nightingale[415698]: --> relative data size: 1.0
Jan 23 06:04:35 np0005593232 frosty_nightingale[415698]: --> All data devices are unavailable
Jan 23 06:04:35 np0005593232 systemd[1]: libpod-fbcb50cca3507d0f603c017eb112df307e06c11abe12b2ba79d81d68afa84189.scope: Deactivated successfully.
Jan 23 06:04:35 np0005593232 podman[415682]: 2026-01-23 11:04:35.23150554 +0000 UTC m=+0.973043197 container died fbcb50cca3507d0f603c017eb112df307e06c11abe12b2ba79d81d68afa84189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:04:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4163: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 KiB/s rd, 29 KiB/s wr, 3 op/s
Jan 23 06:04:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:35.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:35 np0005593232 systemd[1]: var-lib-containers-storage-overlay-76b5badbd33228173bf25803c7bc8a274a34b3053969b1cfe82057d9c47495a5-merged.mount: Deactivated successfully.
Jan 23 06:04:35 np0005593232 nova_compute[250269]: 2026-01-23 11:04:35.880 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:36 np0005593232 podman[415682]: 2026-01-23 11:04:36.063031027 +0000 UTC m=+1.804568664 container remove fbcb50cca3507d0f603c017eb112df307e06c11abe12b2ba79d81d68afa84189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 06:04:36 np0005593232 systemd[1]: libpod-conmon-fbcb50cca3507d0f603c017eb112df307e06c11abe12b2ba79d81d68afa84189.scope: Deactivated successfully.
Jan 23 06:04:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e424 do_prune osdmap full prune enabled
Jan 23 06:04:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e425 e425: 3 total, 3 up, 3 in
Jan 23 06:04:36 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e425: 3 total, 3 up, 3 in
Jan 23 06:04:36 np0005593232 podman[415916]: 2026-01-23 11:04:36.701862302 +0000 UTC m=+0.023797066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:04:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:04:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:36.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:04:36 np0005593232 podman[415916]: 2026-01-23 11:04:36.899416746 +0000 UTC m=+0.221351460 container create 56948141f35eaea1b51464c34b220672edfb4882c8517590a87eee36eae9dc02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:04:36 np0005593232 systemd[1]: Started libpod-conmon-56948141f35eaea1b51464c34b220672edfb4882c8517590a87eee36eae9dc02.scope.
Jan 23 06:04:36 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:04:36 np0005593232 podman[415916]: 2026-01-23 11:04:36.971242065 +0000 UTC m=+0.293176799 container init 56948141f35eaea1b51464c34b220672edfb4882c8517590a87eee36eae9dc02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_heisenberg, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:04:36 np0005593232 podman[415916]: 2026-01-23 11:04:36.978087844 +0000 UTC m=+0.300022558 container start 56948141f35eaea1b51464c34b220672edfb4882c8517590a87eee36eae9dc02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:04:36 np0005593232 podman[415916]: 2026-01-23 11:04:36.981353074 +0000 UTC m=+0.303287818 container attach 56948141f35eaea1b51464c34b220672edfb4882c8517590a87eee36eae9dc02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:04:36 np0005593232 xenodochial_heisenberg[415933]: 167 167
Jan 23 06:04:36 np0005593232 systemd[1]: libpod-56948141f35eaea1b51464c34b220672edfb4882c8517590a87eee36eae9dc02.scope: Deactivated successfully.
Jan 23 06:04:36 np0005593232 podman[415916]: 2026-01-23 11:04:36.983912225 +0000 UTC m=+0.305846949 container died 56948141f35eaea1b51464c34b220672edfb4882c8517590a87eee36eae9dc02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_heisenberg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 23 06:04:37 np0005593232 systemd[1]: var-lib-containers-storage-overlay-bcf02dab31e2f9d2ac479e20b25ab92df71d6e96293d5a52f93c5de982d4fd1b-merged.mount: Deactivated successfully.
Jan 23 06:04:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4165: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.4 KiB/s rd, 33 KiB/s wr, 6 op/s
Jan 23 06:04:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:37.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:37 np0005593232 nova_compute[250269]: 2026-01-23 11:04:37.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:04:37 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:04:37.383 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '97'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:04:37 np0005593232 podman[415916]: 2026-01-23 11:04:37.445056304 +0000 UTC m=+0.766991018 container remove 56948141f35eaea1b51464c34b220672edfb4882c8517590a87eee36eae9dc02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_heisenberg, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 23 06:04:37 np0005593232 systemd[1]: libpod-conmon-56948141f35eaea1b51464c34b220672edfb4882c8517590a87eee36eae9dc02.scope: Deactivated successfully.
Jan 23 06:04:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_11:04:37
Jan 23 06:04:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 06:04:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 06:04:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.mgr', 'backups', 'default.rgw.meta', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root']
Jan 23 06:04:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 06:04:37 np0005593232 podman[415958]: 2026-01-23 11:04:37.629002613 +0000 UTC m=+0.065236869 container create dfe42a16580f531cc4cdb6189619353411bd5f018c508c7c45d6896152984ac2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 06:04:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:04:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:04:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:04:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:04:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:04:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:04:37 np0005593232 systemd[1]: Started libpod-conmon-dfe42a16580f531cc4cdb6189619353411bd5f018c508c7c45d6896152984ac2.scope.
Jan 23 06:04:37 np0005593232 podman[415958]: 2026-01-23 11:04:37.586137832 +0000 UTC m=+0.022372108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:04:37 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:04:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aed17cf43108cb198ca56c991b3bc0afbb56523092eebb1debdc0dea6594cfd2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:04:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aed17cf43108cb198ca56c991b3bc0afbb56523092eebb1debdc0dea6594cfd2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:04:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aed17cf43108cb198ca56c991b3bc0afbb56523092eebb1debdc0dea6594cfd2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:04:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aed17cf43108cb198ca56c991b3bc0afbb56523092eebb1debdc0dea6594cfd2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:04:37 np0005593232 podman[415958]: 2026-01-23 11:04:37.790571706 +0000 UTC m=+0.226805982 container init dfe42a16580f531cc4cdb6189619353411bd5f018c508c7c45d6896152984ac2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lovelace, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 23 06:04:37 np0005593232 podman[415958]: 2026-01-23 11:04:37.803509372 +0000 UTC m=+0.239743638 container start dfe42a16580f531cc4cdb6189619353411bd5f018c508c7c45d6896152984ac2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lovelace, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 23 06:04:37 np0005593232 podman[415958]: 2026-01-23 11:04:37.807278746 +0000 UTC m=+0.243513002 container attach dfe42a16580f531cc4cdb6189619353411bd5f018c508c7c45d6896152984ac2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lovelace, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 23 06:04:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]: {
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:    "0": [
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:        {
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:            "devices": [
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:                "/dev/loop3"
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:            ],
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:            "lv_name": "ceph_lv0",
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:            "lv_size": "7511998464",
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:            "name": "ceph_lv0",
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:            "tags": {
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:                "ceph.cephx_lockbox_secret": "",
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:                "ceph.cluster_name": "ceph",
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:                "ceph.crush_device_class": "",
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:                "ceph.encrypted": "0",
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:                "ceph.osd_id": "0",
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:                "ceph.type": "block",
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:                "ceph.vdo": "0"
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:            },
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:            "type": "block",
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:            "vg_name": "ceph_vg0"
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:        }
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]:    ]
Jan 23 06:04:38 np0005593232 stupefied_lovelace[415974]: }
Jan 23 06:04:38 np0005593232 systemd[1]: libpod-dfe42a16580f531cc4cdb6189619353411bd5f018c508c7c45d6896152984ac2.scope: Deactivated successfully.
Jan 23 06:04:38 np0005593232 podman[415958]: 2026-01-23 11:04:38.612440026 +0000 UTC m=+1.048674282 container died dfe42a16580f531cc4cdb6189619353411bd5f018c508c7c45d6896152984ac2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lovelace, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:04:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:04:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:38.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:04:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 06:04:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 06:04:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:04:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:04:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:04:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:04:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:04:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:04:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:04:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:04:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4166: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 12 KiB/s wr, 17 op/s
Jan 23 06:04:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:39.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:39 np0005593232 systemd[1]: var-lib-containers-storage-overlay-aed17cf43108cb198ca56c991b3bc0afbb56523092eebb1debdc0dea6594cfd2-merged.mount: Deactivated successfully.
Jan 23 06:04:39 np0005593232 podman[415958]: 2026-01-23 11:04:39.470485574 +0000 UTC m=+1.906719830 container remove dfe42a16580f531cc4cdb6189619353411bd5f018c508c7c45d6896152984ac2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_lovelace, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:04:39 np0005593232 systemd[1]: libpod-conmon-dfe42a16580f531cc4cdb6189619353411bd5f018c508c7c45d6896152984ac2.scope: Deactivated successfully.
Jan 23 06:04:39 np0005593232 nova_compute[250269]: 2026-01-23 11:04:39.681 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:40 np0005593232 podman[416136]: 2026-01-23 11:04:40.137602979 +0000 UTC m=+0.063663855 container create 4dbc43219fcffeb392078c4b0dd6fab2892ec9e77b7451758e71d3adf862401c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_feistel, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Jan 23 06:04:40 np0005593232 systemd[1]: Started libpod-conmon-4dbc43219fcffeb392078c4b0dd6fab2892ec9e77b7451758e71d3adf862401c.scope.
Jan 23 06:04:40 np0005593232 podman[416136]: 2026-01-23 11:04:40.106593845 +0000 UTC m=+0.032654801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:04:40 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:04:40 np0005593232 podman[416136]: 2026-01-23 11:04:40.240934677 +0000 UTC m=+0.166995553 container init 4dbc43219fcffeb392078c4b0dd6fab2892ec9e77b7451758e71d3adf862401c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_feistel, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:04:40 np0005593232 podman[416136]: 2026-01-23 11:04:40.256336682 +0000 UTC m=+0.182397558 container start 4dbc43219fcffeb392078c4b0dd6fab2892ec9e77b7451758e71d3adf862401c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_feistel, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:04:40 np0005593232 podman[416136]: 2026-01-23 11:04:40.259502409 +0000 UTC m=+0.185563285 container attach 4dbc43219fcffeb392078c4b0dd6fab2892ec9e77b7451758e71d3adf862401c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_feistel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:04:40 np0005593232 systemd[1]: libpod-4dbc43219fcffeb392078c4b0dd6fab2892ec9e77b7451758e71d3adf862401c.scope: Deactivated successfully.
Jan 23 06:04:40 np0005593232 tender_feistel[416153]: 167 167
Jan 23 06:04:40 np0005593232 conmon[416153]: conmon 4dbc43219fcffeb39207 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4dbc43219fcffeb392078c4b0dd6fab2892ec9e77b7451758e71d3adf862401c.scope/container/memory.events
Jan 23 06:04:40 np0005593232 podman[416136]: 2026-01-23 11:04:40.264952149 +0000 UTC m=+0.191013015 container died 4dbc43219fcffeb392078c4b0dd6fab2892ec9e77b7451758e71d3adf862401c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 06:04:40 np0005593232 nova_compute[250269]: 2026-01-23 11:04:40.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:04:40 np0005593232 nova_compute[250269]: 2026-01-23 11:04:40.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 23 06:04:40 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b5c8a99c51f9385693447dcd38ad782e234245a04f7628ca8d2e087838ab95e3-merged.mount: Deactivated successfully.
Jan 23 06:04:40 np0005593232 podman[416136]: 2026-01-23 11:04:40.315090261 +0000 UTC m=+0.241151137 container remove 4dbc43219fcffeb392078c4b0dd6fab2892ec9e77b7451758e71d3adf862401c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_feistel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:04:40 np0005593232 systemd[1]: libpod-conmon-4dbc43219fcffeb392078c4b0dd6fab2892ec9e77b7451758e71d3adf862401c.scope: Deactivated successfully.
Jan 23 06:04:40 np0005593232 podman[416176]: 2026-01-23 11:04:40.477433104 +0000 UTC m=+0.042970955 container create c83a92d8c0c5a85c4abd24a3c20116c6def1ce9cb44728f4ade01d6e13ed0330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 06:04:40 np0005593232 systemd[1]: Started libpod-conmon-c83a92d8c0c5a85c4abd24a3c20116c6def1ce9cb44728f4ade01d6e13ed0330.scope.
Jan 23 06:04:40 np0005593232 podman[416176]: 2026-01-23 11:04:40.459731156 +0000 UTC m=+0.025269027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:04:40 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:04:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c1f4b8ffacbbbf2e218c959e7d4ece3e95437fe899e0c71a40d50af04f51638/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:04:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c1f4b8ffacbbbf2e218c959e7d4ece3e95437fe899e0c71a40d50af04f51638/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:04:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c1f4b8ffacbbbf2e218c959e7d4ece3e95437fe899e0c71a40d50af04f51638/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:04:40 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c1f4b8ffacbbbf2e218c959e7d4ece3e95437fe899e0c71a40d50af04f51638/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:04:40 np0005593232 podman[416176]: 2026-01-23 11:04:40.765143973 +0000 UTC m=+0.330681824 container init c83a92d8c0c5a85c4abd24a3c20116c6def1ce9cb44728f4ade01d6e13ed0330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_swirles, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 23 06:04:40 np0005593232 podman[416176]: 2026-01-23 11:04:40.779691864 +0000 UTC m=+0.345229715 container start c83a92d8c0c5a85c4abd24a3c20116c6def1ce9cb44728f4ade01d6e13ed0330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_swirles, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:04:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:40.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:40 np0005593232 nova_compute[250269]: 2026-01-23 11:04:40.883 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:41 np0005593232 podman[416176]: 2026-01-23 11:04:41.147424149 +0000 UTC m=+0.712962000 container attach c83a92d8c0c5a85c4abd24a3c20116c6def1ce9cb44728f4ade01d6e13ed0330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_swirles, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 06:04:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4167: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 46 KiB/s rd, 12 KiB/s wr, 27 op/s
Jan 23 06:04:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:41.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:41 np0005593232 mystifying_swirles[416192]: {
Jan 23 06:04:41 np0005593232 mystifying_swirles[416192]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 06:04:41 np0005593232 mystifying_swirles[416192]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:04:41 np0005593232 mystifying_swirles[416192]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 06:04:41 np0005593232 mystifying_swirles[416192]:        "osd_id": 0,
Jan 23 06:04:41 np0005593232 mystifying_swirles[416192]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:04:41 np0005593232 mystifying_swirles[416192]:        "type": "bluestore"
Jan 23 06:04:41 np0005593232 mystifying_swirles[416192]:    }
Jan 23 06:04:41 np0005593232 mystifying_swirles[416192]: }
Jan 23 06:04:41 np0005593232 systemd[1]: libpod-c83a92d8c0c5a85c4abd24a3c20116c6def1ce9cb44728f4ade01d6e13ed0330.scope: Deactivated successfully.
Jan 23 06:04:41 np0005593232 podman[416176]: 2026-01-23 11:04:41.705077407 +0000 UTC m=+1.270615278 container died c83a92d8c0c5a85c4abd24a3c20116c6def1ce9cb44728f4ade01d6e13ed0330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_swirles, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Jan 23 06:04:41 np0005593232 nova_compute[250269]: 2026-01-23 11:04:41.914 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:04:42 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6c1f4b8ffacbbbf2e218c959e7d4ece3e95437fe899e0c71a40d50af04f51638-merged.mount: Deactivated successfully.
Jan 23 06:04:42 np0005593232 podman[416176]: 2026-01-23 11:04:42.106802129 +0000 UTC m=+1.672339990 container remove c83a92d8c0c5a85c4abd24a3c20116c6def1ce9cb44728f4ade01d6e13ed0330 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_swirles, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:04:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 06:04:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:04:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 06:04:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:04:42 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1bb8a6d5-0ac4-412d-b40f-5d4c77caf653 does not exist
Jan 23 06:04:42 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev bbee3538-04d1-4df3-9e3e-94c62ba70363 does not exist
Jan 23 06:04:42 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev f51ab393-2c71-475d-aa71-429b942b8fbd does not exist
Jan 23 06:04:42 np0005593232 systemd[1]: libpod-conmon-c83a92d8c0c5a85c4abd24a3c20116c6def1ce9cb44728f4ade01d6e13ed0330.scope: Deactivated successfully.
Jan 23 06:04:42 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:04:42 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:04:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:04:42.686 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:04:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:04:42.687 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:04:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:04:42.687 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:04:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:42.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:04:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4168: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 691 KiB/s rd, 409 B/s wr, 50 op/s
Jan 23 06:04:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:43.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 06:04:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3690829589' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 06:04:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 06:04:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3690829589' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 06:04:44 np0005593232 nova_compute[250269]: 2026-01-23 11:04:44.683 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:44.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4169: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 691 KiB/s rd, 409 B/s wr, 50 op/s
Jan 23 06:04:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:45.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:45 np0005593232 nova_compute[250269]: 2026-01-23 11:04:45.888 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:46 np0005593232 nova_compute[250269]: 2026-01-23 11:04:46.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:04:46 np0005593232 nova_compute[250269]: 2026-01-23 11:04:46.829 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:04:46 np0005593232 nova_compute[250269]: 2026-01-23 11:04:46.829 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:04:46 np0005593232 nova_compute[250269]: 2026-01-23 11:04:46.829 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:04:46 np0005593232 nova_compute[250269]: 2026-01-23 11:04:46.830 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 06:04:46 np0005593232 nova_compute[250269]: 2026-01-23 11:04:46.830 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:04:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:46.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4170: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 382 B/s wr, 96 op/s
Jan 23 06:04:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:04:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2898170363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:04:47 np0005593232 nova_compute[250269]: 2026-01-23 11:04:47.283 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:04:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:04:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:47.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:04:47 np0005593232 nova_compute[250269]: 2026-01-23 11:04:47.454 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 06:04:47 np0005593232 nova_compute[250269]: 2026-01-23 11:04:47.455 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4049MB free_disk=20.94268798828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 06:04:47 np0005593232 nova_compute[250269]: 2026-01-23 11:04:47.455 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:04:47 np0005593232 nova_compute[250269]: 2026-01-23 11:04:47.456 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:04:47 np0005593232 podman[416301]: 2026-01-23 11:04:47.456631687 +0000 UTC m=+0.103191905 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, 
org.label-schema.build-date=20251202, tcib_managed=true)
Jan 23 06:04:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 06:04:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:04:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:04:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:04:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002172319932998325 of space, bias 1.0, pg target 0.6516959798994976 quantized to 32 (current 32)
Jan 23 06:04:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:04:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 06:04:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:04:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:04:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:04:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:04:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:04:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:04:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:04:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:04:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:04:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:04:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:04:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:04:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:04:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:04:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:04:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:04:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:04:48 np0005593232 nova_compute[250269]: 2026-01-23 11:04:48.763 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 06:04:48 np0005593232 nova_compute[250269]: 2026-01-23 11:04:48.765 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 06:04:48 np0005593232 nova_compute[250269]: 2026-01-23 11:04:48.789 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:04:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:48.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:04:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1186166213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:04:49 np0005593232 nova_compute[250269]: 2026-01-23 11:04:49.252 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:04:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4171: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 84 op/s
Jan 23 06:04:49 np0005593232 nova_compute[250269]: 2026-01-23 11:04:49.263 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:04:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:04:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:49.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:04:49 np0005593232 nova_compute[250269]: 2026-01-23 11:04:49.349 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:04:49 np0005593232 nova_compute[250269]: 2026-01-23 11:04:49.353 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 06:04:49 np0005593232 nova_compute[250269]: 2026-01-23 11:04:49.354 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:04:49 np0005593232 podman[416351]: 2026-01-23 11:04:49.462930789 +0000 UTC m=+0.104543172 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 23 06:04:49 np0005593232 nova_compute[250269]: 2026-01-23 11:04:49.685 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:50.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:50 np0005593232 nova_compute[250269]: 2026-01-23 11:04:50.890 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4172: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 74 op/s
Jan 23 06:04:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:51.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e425 do_prune osdmap full prune enabled
Jan 23 06:04:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e426 e426: 3 total, 3 up, 3 in
Jan 23 06:04:52 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e426: 3 total, 3 up, 3 in
Jan 23 06:04:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:52.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:04:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4174: 321 pgs: 2 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 308 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 511 B/s wr, 66 op/s
Jan 23 06:04:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:04:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:53.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:04:54 np0005593232 nova_compute[250269]: 2026-01-23 11:04:54.686 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:04:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:54.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:04:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4175: 321 pgs: 2 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 308 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 511 B/s wr, 66 op/s
Jan 23 06:04:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:55.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:55 np0005593232 nova_compute[250269]: 2026-01-23 11:04:55.892 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:04:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:04:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:56.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:04:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4176: 321 pgs: 2 active+clean+snaptrim, 11 active+clean+snaptrim_wait, 308 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 476 KiB/s rd, 13 KiB/s wr, 47 op/s
Jan 23 06:04:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:04:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:57.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:04:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:04:58 np0005593232 nova_compute[250269]: 2026-01-23 11:04:58.350 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:04:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:04:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:04:58.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:04:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4177: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 646 KiB/s rd, 15 KiB/s wr, 67 op/s
Jan 23 06:04:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:04:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:04:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:04:59.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:04:59 np0005593232 nova_compute[250269]: 2026-01-23 11:04:59.708 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:00.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:01 np0005593232 nova_compute[250269]: 2026-01-23 11:05:01.052 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4178: 321 pgs: 321 active+clean; 200 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 647 KiB/s rd, 15 KiB/s wr, 68 op/s
Jan 23 06:05:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:01.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:02.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e426 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:05:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e426 do_prune osdmap full prune enabled
Jan 23 06:05:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 e427: 3 total, 3 up, 3 in
Jan 23 06:05:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4180: 321 pgs: 321 active+clean; 202 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 610 KiB/s rd, 27 KiB/s wr, 56 op/s
Jan 23 06:05:03 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e427: 3 total, 3 up, 3 in
Jan 23 06:05:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:03.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:04 np0005593232 nova_compute[250269]: 2026-01-23 11:05:04.774 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:04.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4181: 321 pgs: 321 active+clean; 202 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 611 KiB/s rd, 27 KiB/s wr, 56 op/s
Jan 23 06:05:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:05.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:06 np0005593232 nova_compute[250269]: 2026-01-23 11:05:06.118 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:06 np0005593232 nova_compute[250269]: 2026-01-23 11:05:06.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:05:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:06.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4182: 321 pgs: 321 active+clean; 202 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 172 KiB/s rd, 14 KiB/s wr, 22 op/s
Jan 23 06:05:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:05:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:07.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:05:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:05:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:05:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:05:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:05:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:05:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:05:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:05:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:08.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4183: 321 pgs: 321 active+clean; 202 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 12 KiB/s wr, 8 op/s
Jan 23 06:05:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:09.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:09 np0005593232 nova_compute[250269]: 2026-01-23 11:05:09.778 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:05:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:10.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:05:11 np0005593232 nova_compute[250269]: 2026-01-23 11:05:11.121 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4184: 321 pgs: 321 active+clean; 165 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 16 KiB/s rd, 13 KiB/s wr, 21 op/s
Jan 23 06:05:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:11.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:12 np0005593232 nova_compute[250269]: 2026-01-23 11:05:12.232 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:05:12 np0005593232 nova_compute[250269]: 2026-01-23 11:05:12.348 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:05:12 np0005593232 nova_compute[250269]: 2026-01-23 11:05:12.349 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 23 06:05:12 np0005593232 nova_compute[250269]: 2026-01-23 11:05:12.368 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 23 06:05:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:05:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:12.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:05:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4185: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Jan 23 06:05:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:05:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:13.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:05:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:05:14 np0005593232 nova_compute[250269]: 2026-01-23 11:05:14.783 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:05:14.805 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=98, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=97) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 06:05:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:05:14.807 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 06:05:14 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:05:14.808 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '98'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:05:14 np0005593232 nova_compute[250269]: 2026-01-23 11:05:14.809 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:05:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:14.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:05:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4186: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 06:05:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:05:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:15.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:05:16 np0005593232 nova_compute[250269]: 2026-01-23 11:05:16.123 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:05:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:16.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:05:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4187: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 06:05:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:17.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:18 np0005593232 podman[416483]: 2026-01-23 11:05:18.454831791 +0000 UTC m=+0.111584297 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 23 06:05:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:05:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:05:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:18.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:05:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4188: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 06:05:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:19.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:19 np0005593232 nova_compute[250269]: 2026-01-23 11:05:19.782 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:20 np0005593232 podman[416510]: 2026-01-23 11:05:20.384404739 +0000 UTC m=+0.045588558 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 23 06:05:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:20.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:21 np0005593232 nova_compute[250269]: 2026-01-23 11:05:21.125 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4189: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Jan 23 06:05:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:05:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:21.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:05:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:22.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4190: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.9 KiB/s rd, 682 B/s wr, 10 op/s
Jan 23 06:05:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:23.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:05:24 np0005593232 nova_compute[250269]: 2026-01-23 11:05:24.784 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:05:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:24.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:05:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4191: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 23 06:05:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:05:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3781836549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:05:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:05:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:25.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:05:26 np0005593232 nova_compute[250269]: 2026-01-23 11:05:26.126 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:05:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:26.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:05:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4192: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 23 06:05:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:05:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:27.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:05:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:05:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:05:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:28.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:05:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4193: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:05:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:05:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:29.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:05:29 np0005593232 nova_compute[250269]: 2026-01-23 11:05:29.786 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:30.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:31 np0005593232 nova_compute[250269]: 2026-01-23 11:05:31.156 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4194: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:05:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:31.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:31 np0005593232 nova_compute[250269]: 2026-01-23 11:05:31.422 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:05:31 np0005593232 nova_compute[250269]: 2026-01-23 11:05:31.423 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:05:32 np0005593232 nova_compute[250269]: 2026-01-23 11:05:32.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:05:32 np0005593232 nova_compute[250269]: 2026-01-23 11:05:32.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:05:32 np0005593232 nova_compute[250269]: 2026-01-23 11:05:32.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:05:32 np0005593232 nova_compute[250269]: 2026-01-23 11:05:32.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 06:05:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:32.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4195: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:05:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:33.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:33 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:05:34 np0005593232 nova_compute[250269]: 2026-01-23 11:05:34.790 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:05:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:34.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:05:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4196: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:05:35 np0005593232 nova_compute[250269]: 2026-01-23 11:05:35.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:05:35 np0005593232 nova_compute[250269]: 2026-01-23 11:05:35.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 06:05:35 np0005593232 nova_compute[250269]: 2026-01-23 11:05:35.294 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 06:05:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:35.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:36 np0005593232 nova_compute[250269]: 2026-01-23 11:05:36.159 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:36.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4197: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:05:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:05:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:37.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:05:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_11:05:37
Jan 23 06:05:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 06:05:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 06:05:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'vms', 'volumes', 'cephfs.cephfs.meta', 'images', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'default.rgw.meta']
Jan 23 06:05:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 06:05:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:05:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:05:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:05:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:05:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:05:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:05:38 np0005593232 nova_compute[250269]: 2026-01-23 11:05:38.011 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 06:05:38 np0005593232 nova_compute[250269]: 2026-01-23 11:05:38.012 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:05:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:05:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 06:05:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:05:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 06:05:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:05:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:05:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:05:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:05:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:05:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:05:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:05:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:38.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4198: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:05:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:39.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:39 np0005593232 nova_compute[250269]: 2026-01-23 11:05:39.791 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:40.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:41 np0005593232 nova_compute[250269]: 2026-01-23 11:05:41.161 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4199: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:05:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:41.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:05:42.687 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:05:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:05:42.688 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:05:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:05:42.688 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:05:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:42.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 06:05:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:05:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 06:05:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:05:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4200: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:05:43 np0005593232 nova_compute[250269]: 2026-01-23 11:05:43.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:05:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:05:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:43.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:05:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:05:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:05:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:05:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 06:05:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:05:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 06:05:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:05:43 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 5303ffbe-4b63-4a76-be02-28700efcc16c does not exist
Jan 23 06:05:43 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 2ef4fa4d-46e1-4fce-be3c-fc857d8710fc does not exist
Jan 23 06:05:43 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 4ca52b8d-0b0b-44b0-b4ef-af26ddd8a12f does not exist
Jan 23 06:05:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 06:05:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 06:05:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 06:05:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:05:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:05:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:05:44 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:05:44 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:05:44 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:05:44 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:05:44 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:05:44 np0005593232 podman[416983]: 2026-01-23 11:05:44.496896626 +0000 UTC m=+0.041206726 container create e5823ccfffa7043abefe59878b0815946e4aa8cb9304d9c018c9c6f9e9c6c5fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sutherland, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:05:44 np0005593232 systemd[1]: Started libpod-conmon-e5823ccfffa7043abefe59878b0815946e4aa8cb9304d9c018c9c6f9e9c6c5fc.scope.
Jan 23 06:05:44 np0005593232 podman[416983]: 2026-01-23 11:05:44.476833933 +0000 UTC m=+0.021144053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:05:44 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:05:44 np0005593232 podman[416983]: 2026-01-23 11:05:44.591568995 +0000 UTC m=+0.135879125 container init e5823ccfffa7043abefe59878b0815946e4aa8cb9304d9c018c9c6f9e9c6c5fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sutherland, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 06:05:44 np0005593232 podman[416983]: 2026-01-23 11:05:44.598736243 +0000 UTC m=+0.143046343 container start e5823ccfffa7043abefe59878b0815946e4aa8cb9304d9c018c9c6f9e9c6c5fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sutherland, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 23 06:05:44 np0005593232 podman[416983]: 2026-01-23 11:05:44.601922381 +0000 UTC m=+0.146232481 container attach e5823ccfffa7043abefe59878b0815946e4aa8cb9304d9c018c9c6f9e9c6c5fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sutherland, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 06:05:44 np0005593232 systemd[1]: libpod-e5823ccfffa7043abefe59878b0815946e4aa8cb9304d9c018c9c6f9e9c6c5fc.scope: Deactivated successfully.
Jan 23 06:05:44 np0005593232 zealous_sutherland[416999]: 167 167
Jan 23 06:05:44 np0005593232 conmon[416999]: conmon e5823ccfffa7043abefe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e5823ccfffa7043abefe59878b0815946e4aa8cb9304d9c018c9c6f9e9c6c5fc.scope/container/memory.events
Jan 23 06:05:44 np0005593232 podman[416983]: 2026-01-23 11:05:44.606398874 +0000 UTC m=+0.150708974 container died e5823ccfffa7043abefe59878b0815946e4aa8cb9304d9c018c9c6f9e9c6c5fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:05:44 np0005593232 systemd[1]: var-lib-containers-storage-overlay-8b1811c4dab051a0d5381670d195841416531cd0a2dd74b558a63b46a007a6c7-merged.mount: Deactivated successfully.
Jan 23 06:05:44 np0005593232 podman[416983]: 2026-01-23 11:05:44.647788835 +0000 UTC m=+0.192098935 container remove e5823ccfffa7043abefe59878b0815946e4aa8cb9304d9c018c9c6f9e9c6c5fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sutherland, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 06:05:44 np0005593232 systemd[1]: libpod-conmon-e5823ccfffa7043abefe59878b0815946e4aa8cb9304d9c018c9c6f9e9c6c5fc.scope: Deactivated successfully.
Jan 23 06:05:44 np0005593232 nova_compute[250269]: 2026-01-23 11:05:44.794 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:44 np0005593232 podman[417022]: 2026-01-23 11:05:44.798546149 +0000 UTC m=+0.042534123 container create ba490b0e379b75eacd1a67327cbec3ce5f70084aff13565cb1d9b4f9863087ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_matsumoto, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:05:44 np0005593232 systemd[1]: Started libpod-conmon-ba490b0e379b75eacd1a67327cbec3ce5f70084aff13565cb1d9b4f9863087ea.scope.
Jan 23 06:05:44 np0005593232 podman[417022]: 2026-01-23 11:05:44.77829379 +0000 UTC m=+0.022281774 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:05:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 06:05:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3048973408' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 06:05:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 06:05:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3048973408' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 06:05:44 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:05:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce7ee89316a97b4a39945b7454887fcb9547836e2f7a91db5899751e566631a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:05:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce7ee89316a97b4a39945b7454887fcb9547836e2f7a91db5899751e566631a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:05:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce7ee89316a97b4a39945b7454887fcb9547836e2f7a91db5899751e566631a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:05:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce7ee89316a97b4a39945b7454887fcb9547836e2f7a91db5899751e566631a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:05:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce7ee89316a97b4a39945b7454887fcb9547836e2f7a91db5899751e566631a9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 06:05:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:44.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:45 np0005593232 podman[417022]: 2026-01-23 11:05:45.073345122 +0000 UTC m=+0.317333096 container init ba490b0e379b75eacd1a67327cbec3ce5f70084aff13565cb1d9b4f9863087ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_matsumoto, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 06:05:45 np0005593232 podman[417022]: 2026-01-23 11:05:45.08017899 +0000 UTC m=+0.324166954 container start ba490b0e379b75eacd1a67327cbec3ce5f70084aff13565cb1d9b4f9863087ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:05:45 np0005593232 podman[417022]: 2026-01-23 11:05:45.246206346 +0000 UTC m=+0.490194310 container attach ba490b0e379b75eacd1a67327cbec3ce5f70084aff13565cb1d9b4f9863087ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_matsumoto, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:05:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4201: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:05:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:45.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:45 np0005593232 thirsty_matsumoto[417038]: --> passed data devices: 0 physical, 1 LVM
Jan 23 06:05:45 np0005593232 thirsty_matsumoto[417038]: --> relative data size: 1.0
Jan 23 06:05:45 np0005593232 thirsty_matsumoto[417038]: --> All data devices are unavailable
Jan 23 06:05:45 np0005593232 systemd[1]: libpod-ba490b0e379b75eacd1a67327cbec3ce5f70084aff13565cb1d9b4f9863087ea.scope: Deactivated successfully.
Jan 23 06:05:45 np0005593232 podman[417022]: 2026-01-23 11:05:45.915713597 +0000 UTC m=+1.159701571 container died ba490b0e379b75eacd1a67327cbec3ce5f70084aff13565cb1d9b4f9863087ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_matsumoto, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:05:46 np0005593232 nova_compute[250269]: 2026-01-23 11:05:46.163 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:46 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ce7ee89316a97b4a39945b7454887fcb9547836e2f7a91db5899751e566631a9-merged.mount: Deactivated successfully.
Jan 23 06:05:46 np0005593232 podman[417022]: 2026-01-23 11:05:46.482376965 +0000 UTC m=+1.726364929 container remove ba490b0e379b75eacd1a67327cbec3ce5f70084aff13565cb1d9b4f9863087ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Jan 23 06:05:46 np0005593232 systemd[1]: libpod-conmon-ba490b0e379b75eacd1a67327cbec3ce5f70084aff13565cb1d9b4f9863087ea.scope: Deactivated successfully.
Jan 23 06:05:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:46.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:47 np0005593232 podman[417206]: 2026-01-23 11:05:47.251594914 +0000 UTC m=+0.118494307 container create 099c7cf852adcd431424458c25f7f1799fd02fc890d338b1522203c5c4c90e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:05:47 np0005593232 podman[417206]: 2026-01-23 11:05:47.156092962 +0000 UTC m=+0.022992385 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:05:47 np0005593232 systemd[1]: Started libpod-conmon-099c7cf852adcd431424458c25f7f1799fd02fc890d338b1522203c5c4c90e20.scope.
Jan 23 06:05:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4202: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:05:47 np0005593232 nova_compute[250269]: 2026-01-23 11:05:47.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:05:47 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:05:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:47.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:47 np0005593232 podman[417206]: 2026-01-23 11:05:47.562577745 +0000 UTC m=+0.429477168 container init 099c7cf852adcd431424458c25f7f1799fd02fc890d338b1522203c5c4c90e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_turing, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:05:47 np0005593232 podman[417206]: 2026-01-23 11:05:47.571604613 +0000 UTC m=+0.438504006 container start 099c7cf852adcd431424458c25f7f1799fd02fc890d338b1522203c5c4c90e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 06:05:47 np0005593232 podman[417206]: 2026-01-23 11:05:47.575088569 +0000 UTC m=+0.441987962 container attach 099c7cf852adcd431424458c25f7f1799fd02fc890d338b1522203c5c4c90e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 06:05:47 np0005593232 adoring_turing[417222]: 167 167
Jan 23 06:05:47 np0005593232 systemd[1]: libpod-099c7cf852adcd431424458c25f7f1799fd02fc890d338b1522203c5c4c90e20.scope: Deactivated successfully.
Jan 23 06:05:47 np0005593232 podman[417206]: 2026-01-23 11:05:47.57692919 +0000 UTC m=+0.443828593 container died 099c7cf852adcd431424458c25f7f1799fd02fc890d338b1522203c5c4c90e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 23 06:05:47 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e02b7d8d6210244ff5ac534af5efa9b0f5c463783271b10de577298613772d44-merged.mount: Deactivated successfully.
Jan 23 06:05:47 np0005593232 podman[417206]: 2026-01-23 11:05:47.672815693 +0000 UTC m=+0.539715076 container remove 099c7cf852adcd431424458c25f7f1799fd02fc890d338b1522203c5c4c90e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_turing, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:05:47 np0005593232 systemd[1]: libpod-conmon-099c7cf852adcd431424458c25f7f1799fd02fc890d338b1522203c5c4c90e20.scope: Deactivated successfully.
Jan 23 06:05:47 np0005593232 podman[417246]: 2026-01-23 11:05:47.814393904 +0000 UTC m=+0.024106975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:05:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 06:05:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:05:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:05:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:05:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:05:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:05:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 06:05:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:05:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:05:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:05:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:05:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:05:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:05:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:05:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:05:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:05:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:05:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:05:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:05:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:05:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:05:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:05:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:05:48 np0005593232 podman[417246]: 2026-01-23 11:05:48.098960287 +0000 UTC m=+0.308673338 container create 52ae6168bb25f36946dd71ad6fbb7ef7ce956c92d06d88c6c36a5032e7c6d565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_herschel, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 06:05:48 np0005593232 systemd[1]: Started libpod-conmon-52ae6168bb25f36946dd71ad6fbb7ef7ce956c92d06d88c6c36a5032e7c6d565.scope.
Jan 23 06:05:48 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:05:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b58a137ee1da39b7efb7e20d675a93f9815ea4d7290361a73c39ec9fa12da99f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:05:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b58a137ee1da39b7efb7e20d675a93f9815ea4d7290361a73c39ec9fa12da99f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:05:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b58a137ee1da39b7efb7e20d675a93f9815ea4d7290361a73c39ec9fa12da99f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:05:48 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b58a137ee1da39b7efb7e20d675a93f9815ea4d7290361a73c39ec9fa12da99f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:05:48 np0005593232 podman[417246]: 2026-01-23 11:05:48.573483053 +0000 UTC m=+0.783196134 container init 52ae6168bb25f36946dd71ad6fbb7ef7ce956c92d06d88c6c36a5032e7c6d565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_herschel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:05:48 np0005593232 podman[417246]: 2026-01-23 11:05:48.58425162 +0000 UTC m=+0.793964671 container start 52ae6168bb25f36946dd71ad6fbb7ef7ce956c92d06d88c6c36a5032e7c6d565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_herschel, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:05:48 np0005593232 podman[417246]: 2026-01-23 11:05:48.681020337 +0000 UTC m=+0.890733408 container attach 52ae6168bb25f36946dd71ad6fbb7ef7ce956c92d06d88c6c36a5032e7c6d565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 06:05:48 np0005593232 nova_compute[250269]: 2026-01-23 11:05:48.771 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:05:48 np0005593232 nova_compute[250269]: 2026-01-23 11:05:48.771 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:05:48 np0005593232 nova_compute[250269]: 2026-01-23 11:05:48.771 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:05:48 np0005593232 nova_compute[250269]: 2026-01-23 11:05:48.771 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 06:05:48 np0005593232 nova_compute[250269]: 2026-01-23 11:05:48.772 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:05:48 np0005593232 podman[417265]: 2026-01-23 11:05:48.778240396 +0000 UTC m=+0.287296698 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 23 06:05:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:05:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:05:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:48.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:05:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:05:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3224040876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:05:49 np0005593232 nova_compute[250269]: 2026-01-23 11:05:49.234 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:05:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4203: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]: {
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:    "0": [
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:        {
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:            "devices": [
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:                "/dev/loop3"
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:            ],
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:            "lv_name": "ceph_lv0",
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:            "lv_size": "7511998464",
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:            "name": "ceph_lv0",
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:            "tags": {
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:                "ceph.cephx_lockbox_secret": "",
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:                "ceph.cluster_name": "ceph",
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:                "ceph.crush_device_class": "",
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:                "ceph.encrypted": "0",
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:                "ceph.osd_id": "0",
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:                "ceph.type": "block",
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:                "ceph.vdo": "0"
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:            },
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:            "type": "block",
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:            "vg_name": "ceph_vg0"
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:        }
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]:    ]
Jan 23 06:05:49 np0005593232 friendly_herschel[417262]: }
Jan 23 06:05:49 np0005593232 systemd[1]: libpod-52ae6168bb25f36946dd71ad6fbb7ef7ce956c92d06d88c6c36a5032e7c6d565.scope: Deactivated successfully.
Jan 23 06:05:49 np0005593232 conmon[417262]: conmon 52ae6168bb25f36946dd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52ae6168bb25f36946dd71ad6fbb7ef7ce956c92d06d88c6c36a5032e7c6d565.scope/container/memory.events
Jan 23 06:05:49 np0005593232 nova_compute[250269]: 2026-01-23 11:05:49.379 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 06:05:49 np0005593232 nova_compute[250269]: 2026-01-23 11:05:49.380 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4044MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 06:05:49 np0005593232 nova_compute[250269]: 2026-01-23 11:05:49.380 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:05:49 np0005593232 nova_compute[250269]: 2026-01-23 11:05:49.381 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:05:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:05:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:49.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:05:49 np0005593232 podman[417321]: 2026-01-23 11:05:49.398382257 +0000 UTC m=+0.025416821 container died 52ae6168bb25f36946dd71ad6fbb7ef7ce956c92d06d88c6c36a5032e7c6d565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_herschel, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:05:49 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b58a137ee1da39b7efb7e20d675a93f9815ea4d7290361a73c39ec9fa12da99f-merged.mount: Deactivated successfully.
Jan 23 06:05:49 np0005593232 podman[417321]: 2026-01-23 11:05:49.589406732 +0000 UTC m=+0.216441266 container remove 52ae6168bb25f36946dd71ad6fbb7ef7ce956c92d06d88c6c36a5032e7c6d565 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 06:05:49 np0005593232 systemd[1]: libpod-conmon-52ae6168bb25f36946dd71ad6fbb7ef7ce956c92d06d88c6c36a5032e7c6d565.scope: Deactivated successfully.
Jan 23 06:05:49 np0005593232 nova_compute[250269]: 2026-01-23 11:05:49.796 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:50 np0005593232 podman[417476]: 2026-01-23 11:05:50.213560443 +0000 UTC m=+0.044128227 container create 73e2c922a13ba5bc377e53375c1d98327e7d491b4b9cbee79ad7a37c9cc7b5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goldstine, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 06:05:50 np0005593232 systemd[1]: Started libpod-conmon-73e2c922a13ba5bc377e53375c1d98327e7d491b4b9cbee79ad7a37c9cc7b5d1.scope.
Jan 23 06:05:50 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:05:50 np0005593232 podman[417476]: 2026-01-23 11:05:50.196235216 +0000 UTC m=+0.026803020 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:05:50 np0005593232 podman[417476]: 2026-01-23 11:05:50.301491617 +0000 UTC m=+0.132059421 container init 73e2c922a13ba5bc377e53375c1d98327e7d491b4b9cbee79ad7a37c9cc7b5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goldstine, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:05:50 np0005593232 podman[417476]: 2026-01-23 11:05:50.310215227 +0000 UTC m=+0.140783011 container start 73e2c922a13ba5bc377e53375c1d98327e7d491b4b9cbee79ad7a37c9cc7b5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 06:05:50 np0005593232 podman[417476]: 2026-01-23 11:05:50.31430345 +0000 UTC m=+0.144871234 container attach 73e2c922a13ba5bc377e53375c1d98327e7d491b4b9cbee79ad7a37c9cc7b5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goldstine, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 23 06:05:50 np0005593232 naughty_goldstine[417493]: 167 167
Jan 23 06:05:50 np0005593232 systemd[1]: libpod-73e2c922a13ba5bc377e53375c1d98327e7d491b4b9cbee79ad7a37c9cc7b5d1.scope: Deactivated successfully.
Jan 23 06:05:50 np0005593232 podman[417476]: 2026-01-23 11:05:50.317868198 +0000 UTC m=+0.148436002 container died 73e2c922a13ba5bc377e53375c1d98327e7d491b4b9cbee79ad7a37c9cc7b5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goldstine, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 06:05:50 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6734db9516160042bb634a95fc8b389e0a8d2d347db5ae5f62c27b0c16999887-merged.mount: Deactivated successfully.
Jan 23 06:05:50 np0005593232 podman[417476]: 2026-01-23 11:05:50.357544492 +0000 UTC m=+0.188112296 container remove 73e2c922a13ba5bc377e53375c1d98327e7d491b4b9cbee79ad7a37c9cc7b5d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 06:05:50 np0005593232 systemd[1]: libpod-conmon-73e2c922a13ba5bc377e53375c1d98327e7d491b4b9cbee79ad7a37c9cc7b5d1.scope: Deactivated successfully.
Jan 23 06:05:50 np0005593232 nova_compute[250269]: 2026-01-23 11:05:50.479 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 06:05:50 np0005593232 nova_compute[250269]: 2026-01-23 11:05:50.481 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 06:05:50 np0005593232 nova_compute[250269]: 2026-01-23 11:05:50.513 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:05:50 np0005593232 podman[417516]: 2026-01-23 11:05:50.527389892 +0000 UTC m=+0.046024479 container create a7a1a56d6f1ed2eb70c3b4fc62603c6a8710de9cd39038156d15faa9a49fff80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_moser, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:05:50 np0005593232 systemd[1]: Started libpod-conmon-a7a1a56d6f1ed2eb70c3b4fc62603c6a8710de9cd39038156d15faa9a49fff80.scope.
Jan 23 06:05:50 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:05:50 np0005593232 podman[417516]: 2026-01-23 11:05:50.506513847 +0000 UTC m=+0.025148454 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:05:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bf98e107f816abf08e0b8fadfe41ea095e752b9555070138912aed1005449d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:05:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bf98e107f816abf08e0b8fadfe41ea095e752b9555070138912aed1005449d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:05:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bf98e107f816abf08e0b8fadfe41ea095e752b9555070138912aed1005449d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:05:50 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bf98e107f816abf08e0b8fadfe41ea095e752b9555070138912aed1005449d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:05:50 np0005593232 podman[417516]: 2026-01-23 11:05:50.622288318 +0000 UTC m=+0.140922935 container init a7a1a56d6f1ed2eb70c3b4fc62603c6a8710de9cd39038156d15faa9a49fff80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_moser, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:05:50 np0005593232 podman[417516]: 2026-01-23 11:05:50.632548531 +0000 UTC m=+0.151183128 container start a7a1a56d6f1ed2eb70c3b4fc62603c6a8710de9cd39038156d15faa9a49fff80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_moser, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 06:05:50 np0005593232 podman[417516]: 2026-01-23 11:05:50.63651957 +0000 UTC m=+0.155154177 container attach a7a1a56d6f1ed2eb70c3b4fc62603c6a8710de9cd39038156d15faa9a49fff80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 06:05:50 np0005593232 podman[417531]: 2026-01-23 11:05:50.654248129 +0000 UTC m=+0.088214963 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 23 06:05:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:50.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:05:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/581174679' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:05:51 np0005593232 nova_compute[250269]: 2026-01-23 11:05:51.008 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:05:51 np0005593232 nova_compute[250269]: 2026-01-23 11:05:51.016 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:05:51 np0005593232 nova_compute[250269]: 2026-01-23 11:05:51.168 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4204: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:05:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:51.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:51 np0005593232 awesome_moser[417540]: {
Jan 23 06:05:51 np0005593232 awesome_moser[417540]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 06:05:51 np0005593232 awesome_moser[417540]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:05:51 np0005593232 awesome_moser[417540]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 06:05:51 np0005593232 awesome_moser[417540]:        "osd_id": 0,
Jan 23 06:05:51 np0005593232 awesome_moser[417540]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:05:51 np0005593232 awesome_moser[417540]:        "type": "bluestore"
Jan 23 06:05:51 np0005593232 awesome_moser[417540]:    }
Jan 23 06:05:51 np0005593232 awesome_moser[417540]: }
Jan 23 06:05:51 np0005593232 systemd[1]: libpod-a7a1a56d6f1ed2eb70c3b4fc62603c6a8710de9cd39038156d15faa9a49fff80.scope: Deactivated successfully.
Jan 23 06:05:51 np0005593232 podman[417516]: 2026-01-23 11:05:51.565537713 +0000 UTC m=+1.084172330 container died a7a1a56d6f1ed2eb70c3b4fc62603c6a8710de9cd39038156d15faa9a49fff80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:05:52 np0005593232 systemd[1]: var-lib-containers-storage-overlay-3bf98e107f816abf08e0b8fadfe41ea095e752b9555070138912aed1005449d8-merged.mount: Deactivated successfully.
Jan 23 06:05:52 np0005593232 podman[417516]: 2026-01-23 11:05:52.825614529 +0000 UTC m=+2.344249116 container remove a7a1a56d6f1ed2eb70c3b4fc62603c6a8710de9cd39038156d15faa9a49fff80 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_moser, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 23 06:05:52 np0005593232 systemd[1]: libpod-conmon-a7a1a56d6f1ed2eb70c3b4fc62603c6a8710de9cd39038156d15faa9a49fff80.scope: Deactivated successfully.
Jan 23 06:05:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 06:05:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:52.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:53 np0005593232 nova_compute[250269]: 2026-01-23 11:05:53.068 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:05:53 np0005593232 nova_compute[250269]: 2026-01-23 11:05:53.071 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 06:05:53 np0005593232 nova_compute[250269]: 2026-01-23 11:05:53.071 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:05:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4205: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:05:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:53.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:05:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 06:05:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:05:53 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 844c3c98-6a96-4fd2-9bc6-ac28d017a83c does not exist
Jan 23 06:05:53 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7be87e9a-5b0b-41f5-96b2-372c16adb2a5 does not exist
Jan 23 06:05:53 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 980ceb24-dc8c-48f7-bc24-4e1fc7f2c7ea does not exist
Jan 23 06:05:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:05:54 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:05:54 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:05:54 np0005593232 nova_compute[250269]: 2026-01-23 11:05:54.798 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:54.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4206: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:05:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:05:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:55.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:05:56 np0005593232 nova_compute[250269]: 2026-01-23 11:05:56.170 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:05:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:05:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:56.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:05:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4207: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:05:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:57.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:05:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:05:58.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4208: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:05:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:05:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:05:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:05:59.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:05:59 np0005593232 nova_compute[250269]: 2026-01-23 11:05:59.800 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:06:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:00.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:01 np0005593232 nova_compute[250269]: 2026-01-23 11:06:01.173 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:06:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4209: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.2 KiB/s rd, 255 B/s wr, 3 op/s
Jan 23 06:06:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:01.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:02.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4210: 321 pgs: 321 active+clean; 135 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 KiB/s rd, 734 KiB/s wr, 4 op/s
Jan 23 06:06:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:03.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #213. Immutable memtables: 0.
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:04.222343) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 133] Flushing memtable with next log file: 213
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166364222398, "job": 133, "event": "flush_started", "num_memtables": 1, "num_entries": 1271, "num_deletes": 258, "total_data_size": 2110634, "memory_usage": 2149728, "flush_reason": "Manual Compaction"}
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 133] Level-0 flush table #214: started
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166364240280, "cf_name": "default", "job": 133, "event": "table_file_creation", "file_number": 214, "file_size": 2056536, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 91283, "largest_seqno": 92553, "table_properties": {"data_size": 2050540, "index_size": 3262, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 12993, "raw_average_key_size": 19, "raw_value_size": 2038280, "raw_average_value_size": 3107, "num_data_blocks": 144, "num_entries": 656, "num_filter_entries": 656, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769166253, "oldest_key_time": 1769166253, "file_creation_time": 1769166364, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 214, "seqno_to_time_mapping": "N/A"}}
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 133] Flush lasted 17988 microseconds, and 5896 cpu microseconds.
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:04.240325) [db/flush_job.cc:967] [default] [JOB 133] Level-0 flush table #214: 2056536 bytes OK
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:04.240347) [db/memtable_list.cc:519] [default] Level-0 commit table #214 started
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:04.247733) [db/memtable_list.cc:722] [default] Level-0 commit table #214: memtable #1 done
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:04.247776) EVENT_LOG_v1 {"time_micros": 1769166364247766, "job": 133, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:04.247801) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 133] Try to delete WAL files size 2104911, prev total WAL file size 2121048, number of live WAL files 2.
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000210.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:04.248639) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303231' seq:72057594037927935, type:22 .. '6C6F676D0034323734' seq:0, type:0; will stop at (end)
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 134] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 133 Base level 0, inputs: [214(2008KB)], [212(13MB)]
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166364248700, "job": 134, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [214], "files_L6": [212], "score": -1, "input_data_size": 16374877, "oldest_snapshot_seqno": -1}
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 134] Generated table #215: 11734 keys, 16240185 bytes, temperature: kUnknown
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166364495761, "cf_name": "default", "job": 134, "event": "table_file_creation", "file_number": 215, "file_size": 16240185, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16163496, "index_size": 46240, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29381, "raw_key_size": 309960, "raw_average_key_size": 26, "raw_value_size": 15957698, "raw_average_value_size": 1359, "num_data_blocks": 1766, "num_entries": 11734, "num_filter_entries": 11734, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769166364, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 215, "seqno_to_time_mapping": "N/A"}}
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:04.496176) [db/compaction/compaction_job.cc:1663] [default] [JOB 134] Compacted 1@0 + 1@6 files to L6 => 16240185 bytes
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:04.498562) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 66.2 rd, 65.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 13.7 +0.0 blob) out(15.5 +0.0 blob), read-write-amplify(15.9) write-amplify(7.9) OK, records in: 12269, records dropped: 535 output_compression: NoCompression
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:04.498587) EVENT_LOG_v1 {"time_micros": 1769166364498576, "job": 134, "event": "compaction_finished", "compaction_time_micros": 247266, "compaction_time_cpu_micros": 37894, "output_level": 6, "num_output_files": 1, "total_output_size": 16240185, "num_input_records": 12269, "num_output_records": 11734, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000214.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166364499185, "job": 134, "event": "table_file_deletion", "file_number": 214}
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000212.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166364502223, "job": 134, "event": "table_file_deletion", "file_number": 212}
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:04.248534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:04.502356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:04.502364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:04.502366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:04.502367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:06:04 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:04.502369) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:06:04 np0005593232 nova_compute[250269]: 2026-01-23 11:06:04.825 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:06:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:04.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4211: 321 pgs: 321 active+clean; 135 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 2.3 KiB/s rd, 734 KiB/s wr, 4 op/s
Jan 23 06:06:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:06:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:05.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:06:06 np0005593232 nova_compute[250269]: 2026-01-23 11:06:06.175 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:06:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:06:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:06.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:06:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4212: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 06:06:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:07.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:06:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:06:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:06:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:06:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:06:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:06:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:06:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:06:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:08.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:06:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4213: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 06:06:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:09.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:09 np0005593232 nova_compute[250269]: 2026-01-23 11:06:09.827 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:06:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:06:09.967 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=99, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=98) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 06:06:09 np0005593232 nova_compute[250269]: 2026-01-23 11:06:09.967 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:06:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:06:09.969 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 06:06:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:06:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:10.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:06:11 np0005593232 nova_compute[250269]: 2026-01-23 11:06:11.178 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:06:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4214: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 757 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 23 06:06:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:11.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:06:11.971 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '99'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:06:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:06:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:12.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:06:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4215: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Jan 23 06:06:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:06:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:13.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:06:13 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:06:14 np0005593232 nova_compute[250269]: 2026-01-23 11:06:14.830 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:06:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:14.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4216: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 96 op/s
Jan 23 06:06:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:15.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:16 np0005593232 nova_compute[250269]: 2026-01-23 11:06:16.181 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:06:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:06:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:16.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:06:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4217: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 96 op/s
Jan 23 06:06:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:17.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:06:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:18.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:06:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:06:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4218: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 06:06:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:06:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:19.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:06:19 np0005593232 podman[417774]: 2026-01-23 11:06:19.458008941 +0000 UTC m=+0.111754781 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 23 06:06:19 np0005593232 nova_compute[250269]: 2026-01-23 11:06:19.859 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:06:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:20.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:21 np0005593232 nova_compute[250269]: 2026-01-23 11:06:21.184 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:06:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4219: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 41 KiB/s wr, 75 op/s
Jan 23 06:06:21 np0005593232 podman[417803]: 2026-01-23 11:06:21.40581161 +0000 UTC m=+0.065660931 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 23 06:06:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:21.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:06:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:22.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:06:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4220: 321 pgs: 321 active+clean; 182 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 1.3 MiB/s wr, 63 op/s
Jan 23 06:06:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:23.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #216. Immutable memtables: 0.
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:24.051586) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 135] Flushing memtable with next log file: 216
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166384051639, "job": 135, "event": "flush_started", "num_memtables": 1, "num_entries": 414, "num_deletes": 256, "total_data_size": 369994, "memory_usage": 379040, "flush_reason": "Manual Compaction"}
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 135] Level-0 flush table #217: started
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166384058137, "cf_name": "default", "job": 135, "event": "table_file_creation", "file_number": 217, "file_size": 290117, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 92554, "largest_seqno": 92967, "table_properties": {"data_size": 287744, "index_size": 472, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6387, "raw_average_key_size": 20, "raw_value_size": 283030, "raw_average_value_size": 898, "num_data_blocks": 21, "num_entries": 315, "num_filter_entries": 315, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769166364, "oldest_key_time": 1769166364, "file_creation_time": 1769166384, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 217, "seqno_to_time_mapping": "N/A"}}
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 135] Flush lasted 6609 microseconds, and 1717 cpu microseconds.
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:24.058190) [db/flush_job.cc:967] [default] [JOB 135] Level-0 flush table #217: 290117 bytes OK
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:24.058212) [db/memtable_list.cc:519] [default] Level-0 commit table #217 started
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:24.060720) [db/memtable_list.cc:722] [default] Level-0 commit table #217: memtable #1 done
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:24.060742) EVENT_LOG_v1 {"time_micros": 1769166384060735, "job": 135, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:24.060763) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 135] Try to delete WAL files size 367392, prev total WAL file size 367392, number of live WAL files 2.
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000213.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:24.061279) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353136' seq:72057594037927935, type:22 .. '6D6772737461740033373733' seq:0, type:0; will stop at (end)
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 136] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 135 Base level 0, inputs: [217(283KB)], [215(15MB)]
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166384061333, "job": 136, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [217], "files_L6": [215], "score": -1, "input_data_size": 16530302, "oldest_snapshot_seqno": -1}
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 136] Generated table #218: 11534 keys, 12675870 bytes, temperature: kUnknown
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166384331320, "cf_name": "default", "job": 136, "event": "table_file_creation", "file_number": 218, "file_size": 12675870, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12605235, "index_size": 40698, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28869, "raw_key_size": 305987, "raw_average_key_size": 26, "raw_value_size": 12407649, "raw_average_value_size": 1075, "num_data_blocks": 1535, "num_entries": 11534, "num_filter_entries": 11534, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769166384, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 218, "seqno_to_time_mapping": "N/A"}}
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:24.331757) [db/compaction/compaction_job.cc:1663] [default] [JOB 136] Compacted 1@0 + 1@6 files to L6 => 12675870 bytes
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:24.333618) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 61.2 rd, 46.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 15.5 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(100.7) write-amplify(43.7) OK, records in: 12049, records dropped: 515 output_compression: NoCompression
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:24.333648) EVENT_LOG_v1 {"time_micros": 1769166384333635, "job": 136, "event": "compaction_finished", "compaction_time_micros": 270166, "compaction_time_cpu_micros": 34649, "output_level": 6, "num_output_files": 1, "total_output_size": 12675870, "num_input_records": 12049, "num_output_records": 11534, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000217.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166384333982, "job": 136, "event": "table_file_deletion", "file_number": 217}
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000215.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166384337577, "job": 136, "event": "table_file_deletion", "file_number": 215}
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:24.061186) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:24.337695) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:24.337702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:24.337703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:24.337705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:06:24 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:06:24.337706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:06:24 np0005593232 nova_compute[250269]: 2026-01-23 11:06:24.861 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:06:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:24.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4221: 321 pgs: 321 active+clean; 182 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 29 KiB/s rd, 1.3 MiB/s wr, 16 op/s
Jan 23 06:06:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:06:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:25.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:06:26 np0005593232 nova_compute[250269]: 2026-01-23 11:06:26.187 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:06:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:06:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:26.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:06:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4222: 321 pgs: 321 active+clean; 193 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 137 KiB/s rd, 2.1 MiB/s wr, 39 op/s
Jan 23 06:06:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:27.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:06:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:28.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:06:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:06:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4223: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 23 06:06:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:29.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:29 np0005593232 nova_compute[250269]: 2026-01-23 11:06:29.897 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:06:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:30.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:31 np0005593232 nova_compute[250269]: 2026-01-23 11:06:31.191 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:06:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4224: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 23 06:06:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:06:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:31.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:06:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:32.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4225: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 266 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 23 06:06:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:06:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:33.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:06:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:06:34 np0005593232 nova_compute[250269]: 2026-01-23 11:06:34.903 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:06:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:34.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4226: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 240 KiB/s rd, 854 KiB/s wr, 43 op/s
Jan 23 06:06:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:35.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:36 np0005593232 nova_compute[250269]: 2026-01-23 11:06:36.195 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:06:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:37.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:37 np0005593232 nova_compute[250269]: 2026-01-23 11:06:37.073 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 06:06:37 np0005593232 nova_compute[250269]: 2026-01-23 11:06:37.074 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 06:06:37 np0005593232 nova_compute[250269]: 2026-01-23 11:06:37.074 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 06:06:37 np0005593232 nova_compute[250269]: 2026-01-23 11:06:37.074 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 06:06:37 np0005593232 nova_compute[250269]: 2026-01-23 11:06:37.101 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 06:06:37 np0005593232 nova_compute[250269]: 2026-01-23 11:06:37.102 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 06:06:37 np0005593232 nova_compute[250269]: 2026-01-23 11:06:37.103 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 06:06:37 np0005593232 nova_compute[250269]: 2026-01-23 11:06:37.103 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 06:06:37 np0005593232 nova_compute[250269]: 2026-01-23 11:06:37.103 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 06:06:37 np0005593232 nova_compute[250269]: 2026-01-23 11:06:37.103 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 06:06:37 np0005593232 nova_compute[250269]: 2026-01-23 11:06:37.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 06:06:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4227: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 241 KiB/s rd, 872 KiB/s wr, 44 op/s
Jan 23 06:06:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:37.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_11:06:37
Jan 23 06:06:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 06:06:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 06:06:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['backups', 'volumes', 'default.rgw.control', '.mgr', 'vms', 'images', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data']
Jan 23 06:06:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 06:06:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:06:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:06:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:06:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:06:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:06:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:06:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 06:06:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:06:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 06:06:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:06:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:06:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:06:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:06:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:06:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:06:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:06:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:06:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:39.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:06:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:06:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4228: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 133 KiB/s rd, 81 KiB/s wr, 23 op/s
Jan 23 06:06:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:06:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:39.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:06:39 np0005593232 nova_compute[250269]: 2026-01-23 11:06:39.903 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:06:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:06:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:41.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:06:41 np0005593232 nova_compute[250269]: 2026-01-23 11:06:41.242 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:06:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4229: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 852 B/s rd, 29 KiB/s wr, 3 op/s
Jan 23 06:06:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:06:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:41.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:06:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:06:42.689 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 06:06:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:06:42.689 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 06:06:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:06:42.690 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 06:06:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:06:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:43.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:06:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4230: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 KiB/s rd, 29 KiB/s wr, 6 op/s
Jan 23 06:06:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:43.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:06:44 np0005593232 nova_compute[250269]: 2026-01-23 11:06:44.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 06:06:44 np0005593232 nova_compute[250269]: 2026-01-23 11:06:44.951 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:06:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:45.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4231: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.7 KiB/s rd, 21 KiB/s wr, 4 op/s
Jan 23 06:06:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:06:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:45.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:06:46 np0005593232 nova_compute[250269]: 2026-01-23 11:06:46.244 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:06:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:47.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:47 np0005593232 nova_compute[250269]: 2026-01-23 11:06:47.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:06:47 np0005593232 nova_compute[250269]: 2026-01-23 11:06:47.316 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:06:47 np0005593232 nova_compute[250269]: 2026-01-23 11:06:47.317 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:06:47 np0005593232 nova_compute[250269]: 2026-01-23 11:06:47.317 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:06:47 np0005593232 nova_compute[250269]: 2026-01-23 11:06:47.317 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 06:06:47 np0005593232 nova_compute[250269]: 2026-01-23 11:06:47.318 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:06:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4232: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 21 KiB/s wr, 46 op/s
Jan 23 06:06:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:47.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:06:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2030577243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:06:47 np0005593232 nova_compute[250269]: 2026-01-23 11:06:47.787 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:06:47 np0005593232 nova_compute[250269]: 2026-01-23 11:06:47.985 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 06:06:47 np0005593232 nova_compute[250269]: 2026-01-23 11:06:47.987 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4112MB free_disk=20.942691802978516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 06:06:47 np0005593232 nova_compute[250269]: 2026-01-23 11:06:47.987 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:06:47 np0005593232 nova_compute[250269]: 2026-01-23 11:06:47.987 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:06:47 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 06:06:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:06:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:06:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:06:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021721381793225387 of space, bias 1.0, pg target 0.6516414537967616 quantized to 32 (current 32)
Jan 23 06:06:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:06:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 06:06:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:06:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:06:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:06:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:06:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:06:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:06:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:06:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:06:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:06:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:06:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:06:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:06:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:06:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:06:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:06:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:06:48 np0005593232 nova_compute[250269]: 2026-01-23 11:06:48.054 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 06:06:48 np0005593232 nova_compute[250269]: 2026-01-23 11:06:48.055 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 06:06:48 np0005593232 nova_compute[250269]: 2026-01-23 11:06:48.071 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:06:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:06:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1490843490' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:06:48 np0005593232 nova_compute[250269]: 2026-01-23 11:06:48.585 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:06:48 np0005593232 nova_compute[250269]: 2026-01-23 11:06:48.591 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:06:48 np0005593232 nova_compute[250269]: 2026-01-23 11:06:48.613 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:06:48 np0005593232 nova_compute[250269]: 2026-01-23 11:06:48.614 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 06:06:48 np0005593232 nova_compute[250269]: 2026-01-23 11:06:48.615 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:06:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:06:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:49.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:06:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:06:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4233: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.0 KiB/s wr, 71 op/s
Jan 23 06:06:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:49.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:49 np0005593232 nova_compute[250269]: 2026-01-23 11:06:49.952 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:06:50 np0005593232 podman[417931]: 2026-01-23 11:06:50.423749715 +0000 UTC m=+0.086272059 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 06:06:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:51.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:51 np0005593232 nova_compute[250269]: 2026-01-23 11:06:51.246 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:06:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4234: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 71 op/s
Jan 23 06:06:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:06:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:51.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:06:52 np0005593232 podman[417958]: 2026-01-23 11:06:52.391128565 +0000 UTC m=+0.051981834 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 23 06:06:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:06:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:53.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:06:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4235: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 71 op/s
Jan 23 06:06:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:06:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:53.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:06:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:06:54 np0005593232 nova_compute[250269]: 2026-01-23 11:06:54.955 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:06:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:55.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4236: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 68 op/s
Jan 23 06:06:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:55.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 06:06:56 np0005593232 nova_compute[250269]: 2026-01-23 11:06:56.249 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:06:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:06:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 06:06:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:57.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:06:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4237: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 1.3 KiB/s wr, 75 op/s
Jan 23 06:06:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:06:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:57.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:06:57 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:06:57 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:06:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:06:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:06:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 06:06:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:06:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 06:06:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:06:58 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 319c9f9b-81e9-4d4a-aafe-dd3f2d290d26 does not exist
Jan 23 06:06:58 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b2012fe7-4168-4ed4-80b4-62428cc136f0 does not exist
Jan 23 06:06:58 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 1621a1ba-60b2-4f22-9598-33f1c0b1a020 does not exist
Jan 23 06:06:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 06:06:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 06:06:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 06:06:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:06:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:06:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:06:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:06:59.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:06:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4238: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.0 MiB/s rd, 11 KiB/s wr, 55 op/s
Jan 23 06:06:59 np0005593232 podman[418305]: 2026-01-23 11:06:59.464599445 +0000 UTC m=+0.071053029 container create a02899746551049e1b7fd46a4fa905e3b57c7fbf6b80e42b61d0b1da368a3b13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:06:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:06:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:06:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:06:59.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:06:59 np0005593232 podman[418305]: 2026-01-23 11:06:59.414736271 +0000 UTC m=+0.021189865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:06:59 np0005593232 nova_compute[250269]: 2026-01-23 11:06:59.957 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:00 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:07:00 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:07:00 np0005593232 systemd[1]: Started libpod-conmon-a02899746551049e1b7fd46a4fa905e3b57c7fbf6b80e42b61d0b1da368a3b13.scope.
Jan 23 06:07:00 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:07:00 np0005593232 podman[418305]: 2026-01-23 11:07:00.820109272 +0000 UTC m=+1.426562886 container init a02899746551049e1b7fd46a4fa905e3b57c7fbf6b80e42b61d0b1da368a3b13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:07:00 np0005593232 podman[418305]: 2026-01-23 11:07:00.833790319 +0000 UTC m=+1.440243903 container start a02899746551049e1b7fd46a4fa905e3b57c7fbf6b80e42b61d0b1da368a3b13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 06:07:00 np0005593232 fervent_carson[418321]: 167 167
Jan 23 06:07:00 np0005593232 systemd[1]: libpod-a02899746551049e1b7fd46a4fa905e3b57c7fbf6b80e42b61d0b1da368a3b13.scope: Deactivated successfully.
Jan 23 06:07:00 np0005593232 podman[418305]: 2026-01-23 11:07:00.846342795 +0000 UTC m=+1.452796409 container attach a02899746551049e1b7fd46a4fa905e3b57c7fbf6b80e42b61d0b1da368a3b13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:07:00 np0005593232 podman[418305]: 2026-01-23 11:07:00.846856089 +0000 UTC m=+1.453309673 container died a02899746551049e1b7fd46a4fa905e3b57c7fbf6b80e42b61d0b1da368a3b13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carson, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 06:07:00 np0005593232 systemd[1]: var-lib-containers-storage-overlay-75360b14276b238f2068d2289cabce30f5a18084b4b16799d052455ba566ccc4-merged.mount: Deactivated successfully.
Jan 23 06:07:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:01.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:01 np0005593232 nova_compute[250269]: 2026-01-23 11:07:01.312 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4239: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 358 KiB/s rd, 12 KiB/s wr, 32 op/s
Jan 23 06:07:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:01.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:01 np0005593232 nova_compute[250269]: 2026-01-23 11:07:01.610 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:07:01 np0005593232 podman[418305]: 2026-01-23 11:07:01.75108763 +0000 UTC m=+2.357541214 container remove a02899746551049e1b7fd46a4fa905e3b57c7fbf6b80e42b61d0b1da368a3b13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_carson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:07:01 np0005593232 systemd[1]: libpod-conmon-a02899746551049e1b7fd46a4fa905e3b57c7fbf6b80e42b61d0b1da368a3b13.scope: Deactivated successfully.
Jan 23 06:07:01 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:07:01 np0005593232 podman[418347]: 2026-01-23 11:07:01.89334274 +0000 UTC m=+0.022518942 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:07:02 np0005593232 podman[418347]: 2026-01-23 11:07:02.006601651 +0000 UTC m=+0.135777843 container create 3a146afc25e7b60dac4df12b00f3dc5d223dca8d532c9f64adba6bd0c6037c67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_haibt, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 23 06:07:02 np0005593232 systemd[1]: Started libpod-conmon-3a146afc25e7b60dac4df12b00f3dc5d223dca8d532c9f64adba6bd0c6037c67.scope.
Jan 23 06:07:02 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:07:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98f26efd181ff0a46ed435341b672f73e5f640ca8b4d16601e49ab0c937349e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:07:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98f26efd181ff0a46ed435341b672f73e5f640ca8b4d16601e49ab0c937349e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:07:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98f26efd181ff0a46ed435341b672f73e5f640ca8b4d16601e49ab0c937349e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:07:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98f26efd181ff0a46ed435341b672f73e5f640ca8b4d16601e49ab0c937349e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:07:02 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98f26efd181ff0a46ed435341b672f73e5f640ca8b4d16601e49ab0c937349e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 06:07:02 np0005593232 podman[418347]: 2026-01-23 11:07:02.259793739 +0000 UTC m=+0.388969951 container init 3a146afc25e7b60dac4df12b00f3dc5d223dca8d532c9f64adba6bd0c6037c67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_haibt, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 23 06:07:02 np0005593232 podman[418347]: 2026-01-23 11:07:02.269231279 +0000 UTC m=+0.398407461 container start 3a146afc25e7b60dac4df12b00f3dc5d223dca8d532c9f64adba6bd0c6037c67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_haibt, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 23 06:07:02 np0005593232 podman[418347]: 2026-01-23 11:07:02.409656639 +0000 UTC m=+0.538832851 container attach 3a146afc25e7b60dac4df12b00f3dc5d223dca8d532c9f64adba6bd0c6037c67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_haibt, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:07:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:03.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:03 np0005593232 charming_haibt[418363]: --> passed data devices: 0 physical, 1 LVM
Jan 23 06:07:03 np0005593232 charming_haibt[418363]: --> relative data size: 1.0
Jan 23 06:07:03 np0005593232 charming_haibt[418363]: --> All data devices are unavailable
Jan 23 06:07:03 np0005593232 systemd[1]: libpod-3a146afc25e7b60dac4df12b00f3dc5d223dca8d532c9f64adba6bd0c6037c67.scope: Deactivated successfully.
Jan 23 06:07:03 np0005593232 podman[418347]: 2026-01-23 11:07:03.100248381 +0000 UTC m=+1.229424563 container died 3a146afc25e7b60dac4df12b00f3dc5d223dca8d532c9f64adba6bd0c6037c67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:07:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4240: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 523 KiB/s rd, 12 KiB/s wr, 42 op/s
Jan 23 06:07:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:07:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:03.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:07:04 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a98f26efd181ff0a46ed435341b672f73e5f640ca8b4d16601e49ab0c937349e-merged.mount: Deactivated successfully.
Jan 23 06:07:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:07:04 np0005593232 nova_compute[250269]: 2026-01-23 11:07:04.959 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:05.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:05 np0005593232 podman[418347]: 2026-01-23 11:07:05.159735048 +0000 UTC m=+3.288911230 container remove 3a146afc25e7b60dac4df12b00f3dc5d223dca8d532c9f64adba6bd0c6037c67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_haibt, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:07:05 np0005593232 systemd[1]: libpod-conmon-3a146afc25e7b60dac4df12b00f3dc5d223dca8d532c9f64adba6bd0c6037c67.scope: Deactivated successfully.
Jan 23 06:07:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4241: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 523 KiB/s rd, 12 KiB/s wr, 42 op/s
Jan 23 06:07:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:05.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:05 np0005593232 podman[418532]: 2026-01-23 11:07:05.748816183 +0000 UTC m=+0.019801377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:07:06 np0005593232 podman[418532]: 2026-01-23 11:07:06.042334642 +0000 UTC m=+0.313319816 container create b384a91c4ee69fafa70382d300b3bcc4581fe353084f29cb650fd6152df9160f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ramanujan, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:07:06 np0005593232 systemd[1]: Started libpod-conmon-b384a91c4ee69fafa70382d300b3bcc4581fe353084f29cb650fd6152df9160f.scope.
Jan 23 06:07:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:07:06 np0005593232 podman[418532]: 2026-01-23 11:07:06.231732632 +0000 UTC m=+0.502717836 container init b384a91c4ee69fafa70382d300b3bcc4581fe353084f29cb650fd6152df9160f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:07:06 np0005593232 podman[418532]: 2026-01-23 11:07:06.239401764 +0000 UTC m=+0.510386938 container start b384a91c4ee69fafa70382d300b3bcc4581fe353084f29cb650fd6152df9160f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Jan 23 06:07:06 np0005593232 great_ramanujan[418549]: 167 167
Jan 23 06:07:06 np0005593232 systemd[1]: libpod-b384a91c4ee69fafa70382d300b3bcc4581fe353084f29cb650fd6152df9160f.scope: Deactivated successfully.
Jan 23 06:07:06 np0005593232 podman[418532]: 2026-01-23 11:07:06.274431769 +0000 UTC m=+0.545416963 container attach b384a91c4ee69fafa70382d300b3bcc4581fe353084f29cb650fd6152df9160f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 06:07:06 np0005593232 podman[418532]: 2026-01-23 11:07:06.276403073 +0000 UTC m=+0.547388257 container died b384a91c4ee69fafa70382d300b3bcc4581fe353084f29cb650fd6152df9160f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 23 06:07:06 np0005593232 nova_compute[250269]: 2026-01-23 11:07:06.315 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:06 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a02685636dbd09fe1ab651cf4c14153c6f50e9444aeb7560d527b5afdb566491-merged.mount: Deactivated successfully.
Jan 23 06:07:06 np0005593232 podman[418532]: 2026-01-23 11:07:06.810812181 +0000 UTC m=+1.081797355 container remove b384a91c4ee69fafa70382d300b3bcc4581fe353084f29cb650fd6152df9160f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_ramanujan, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 06:07:06 np0005593232 systemd[1]: libpod-conmon-b384a91c4ee69fafa70382d300b3bcc4581fe353084f29cb650fd6152df9160f.scope: Deactivated successfully.
Jan 23 06:07:07 np0005593232 podman[418573]: 2026-01-23 11:07:07.034121546 +0000 UTC m=+0.077586980 container create 36d96d2c468d6908039338201d661428bd7159891b3e94905757d14a15abe13b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_gates, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 06:07:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:07.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:07 np0005593232 podman[418573]: 2026-01-23 11:07:06.982727029 +0000 UTC m=+0.026192483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:07:07 np0005593232 systemd[1]: Started libpod-conmon-36d96d2c468d6908039338201d661428bd7159891b3e94905757d14a15abe13b.scope.
Jan 23 06:07:07 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:07:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab258e0544a01bf8a584fc0952e9fdb6487056eb0cac0e609e392006e4b5beae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:07:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab258e0544a01bf8a584fc0952e9fdb6487056eb0cac0e609e392006e4b5beae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:07:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab258e0544a01bf8a584fc0952e9fdb6487056eb0cac0e609e392006e4b5beae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:07:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab258e0544a01bf8a584fc0952e9fdb6487056eb0cac0e609e392006e4b5beae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:07:07 np0005593232 podman[418573]: 2026-01-23 11:07:07.230064525 +0000 UTC m=+0.273529979 container init 36d96d2c468d6908039338201d661428bd7159891b3e94905757d14a15abe13b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_gates, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:07:07 np0005593232 podman[418573]: 2026-01-23 11:07:07.237072218 +0000 UTC m=+0.280537652 container start 36d96d2c468d6908039338201d661428bd7159891b3e94905757d14a15abe13b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_gates, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:07:07 np0005593232 podman[418573]: 2026-01-23 11:07:07.248914424 +0000 UTC m=+0.292379858 container attach 36d96d2c468d6908039338201d661428bd7159891b3e94905757d14a15abe13b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_gates, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 06:07:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4242: 321 pgs: 321 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 525 KiB/s rd, 22 KiB/s wr, 44 op/s
Jan 23 06:07:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:07.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:07:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:07:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:07:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:07:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:07:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:07:08 np0005593232 sweet_gates[418592]: {
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:    "0": [
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:        {
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:            "devices": [
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:                "/dev/loop3"
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:            ],
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:            "lv_name": "ceph_lv0",
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:            "lv_size": "7511998464",
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:            "name": "ceph_lv0",
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:            "tags": {
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:                "ceph.cephx_lockbox_secret": "",
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:                "ceph.cluster_name": "ceph",
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:                "ceph.crush_device_class": "",
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:                "ceph.encrypted": "0",
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:                "ceph.osd_id": "0",
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:                "ceph.type": "block",
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:                "ceph.vdo": "0"
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:            },
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:            "type": "block",
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:            "vg_name": "ceph_vg0"
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:        }
Jan 23 06:07:08 np0005593232 sweet_gates[418592]:    ]
Jan 23 06:07:08 np0005593232 sweet_gates[418592]: }
Jan 23 06:07:08 np0005593232 systemd[1]: libpod-36d96d2c468d6908039338201d661428bd7159891b3e94905757d14a15abe13b.scope: Deactivated successfully.
Jan 23 06:07:08 np0005593232 podman[418601]: 2026-01-23 11:07:08.096661098 +0000 UTC m=+0.025456673 container died 36d96d2c468d6908039338201d661428bd7159891b3e94905757d14a15abe13b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_gates, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:07:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ab258e0544a01bf8a584fc0952e9fdb6487056eb0cac0e609e392006e4b5beae-merged.mount: Deactivated successfully.
Jan 23 06:07:08 np0005593232 podman[418601]: 2026-01-23 11:07:08.154110071 +0000 UTC m=+0.082905646 container remove 36d96d2c468d6908039338201d661428bd7159891b3e94905757d14a15abe13b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_gates, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 06:07:08 np0005593232 systemd[1]: libpod-conmon-36d96d2c468d6908039338201d661428bd7159891b3e94905757d14a15abe13b.scope: Deactivated successfully.
Jan 23 06:07:08 np0005593232 podman[418754]: 2026-01-23 11:07:08.784519085 +0000 UTC m=+0.051503281 container create 9fc9da55bd3d5234b4d28dd34c8dbd98b8c68ff15a2e4d22da1d505a44fe5125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heyrovsky, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 06:07:08 np0005593232 systemd[1]: Started libpod-conmon-9fc9da55bd3d5234b4d28dd34c8dbd98b8c68ff15a2e4d22da1d505a44fe5125.scope.
Jan 23 06:07:08 np0005593232 podman[418754]: 2026-01-23 11:07:08.762613231 +0000 UTC m=+0.029597447 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:07:08 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:07:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:09.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:07:09 np0005593232 podman[418754]: 2026-01-23 11:07:09.158616405 +0000 UTC m=+0.425600631 container init 9fc9da55bd3d5234b4d28dd34c8dbd98b8c68ff15a2e4d22da1d505a44fe5125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heyrovsky, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:07:09 np0005593232 podman[418754]: 2026-01-23 11:07:09.167720936 +0000 UTC m=+0.434705142 container start 9fc9da55bd3d5234b4d28dd34c8dbd98b8c68ff15a2e4d22da1d505a44fe5125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heyrovsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:07:09 np0005593232 hardcore_heyrovsky[418770]: 167 167
Jan 23 06:07:09 np0005593232 systemd[1]: libpod-9fc9da55bd3d5234b4d28dd34c8dbd98b8c68ff15a2e4d22da1d505a44fe5125.scope: Deactivated successfully.
Jan 23 06:07:09 np0005593232 podman[418754]: 2026-01-23 11:07:09.174538544 +0000 UTC m=+0.441522740 container attach 9fc9da55bd3d5234b4d28dd34c8dbd98b8c68ff15a2e4d22da1d505a44fe5125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heyrovsky, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 06:07:09 np0005593232 podman[418754]: 2026-01-23 11:07:09.175328316 +0000 UTC m=+0.442312512 container died 9fc9da55bd3d5234b4d28dd34c8dbd98b8c68ff15a2e4d22da1d505a44fe5125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:07:09 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6b3604ab9e7d69c0432d78e9023cfcd04fb7fab3f023097c6b05ef487fbe88e6-merged.mount: Deactivated successfully.
Jan 23 06:07:09 np0005593232 podman[418754]: 2026-01-23 11:07:09.230218168 +0000 UTC m=+0.497202364 container remove 9fc9da55bd3d5234b4d28dd34c8dbd98b8c68ff15a2e4d22da1d505a44fe5125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 23 06:07:09 np0005593232 systemd[1]: libpod-conmon-9fc9da55bd3d5234b4d28dd34c8dbd98b8c68ff15a2e4d22da1d505a44fe5125.scope: Deactivated successfully.
Jan 23 06:07:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4243: 321 pgs: 321 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 395 KiB/s rd, 21 KiB/s wr, 37 op/s
Jan 23 06:07:09 np0005593232 podman[418793]: 2026-01-23 11:07:09.41680352 +0000 UTC m=+0.046426240 container create cfa96940faedb131b07c8b15a7ec78c71083ec79ef16c29c0832b0257b312592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_blackwell, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:07:09 np0005593232 systemd[1]: Started libpod-conmon-cfa96940faedb131b07c8b15a7ec78c71083ec79ef16c29c0832b0257b312592.scope.
Jan 23 06:07:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:07:09.457 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=100, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=99) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 06:07:09 np0005593232 nova_compute[250269]: 2026-01-23 11:07:09.458 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:07:09.459 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 06:07:09 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:07:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d88734d5459d56ecf17c6cdec0198abc096a3e259697fb12b385dcfbd8dffc2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:07:09 np0005593232 podman[418793]: 2026-01-23 11:07:09.396157641 +0000 UTC m=+0.025780391 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:07:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d88734d5459d56ecf17c6cdec0198abc096a3e259697fb12b385dcfbd8dffc2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:07:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d88734d5459d56ecf17c6cdec0198abc096a3e259697fb12b385dcfbd8dffc2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:07:09 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d88734d5459d56ecf17c6cdec0198abc096a3e259697fb12b385dcfbd8dffc2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:07:09 np0005593232 podman[418793]: 2026-01-23 11:07:09.508435976 +0000 UTC m=+0.138058716 container init cfa96940faedb131b07c8b15a7ec78c71083ec79ef16c29c0832b0257b312592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_blackwell, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 06:07:09 np0005593232 podman[418793]: 2026-01-23 11:07:09.516111507 +0000 UTC m=+0.145734227 container start cfa96940faedb131b07c8b15a7ec78c71083ec79ef16c29c0832b0257b312592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_blackwell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 06:07:09 np0005593232 podman[418793]: 2026-01-23 11:07:09.520319443 +0000 UTC m=+0.149942183 container attach cfa96940faedb131b07c8b15a7ec78c71083ec79ef16c29c0832b0257b312592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_blackwell, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:07:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:09.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:09 np0005593232 nova_compute[250269]: 2026-01-23 11:07:09.961 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:10 np0005593232 compassionate_blackwell[418809]: {
Jan 23 06:07:10 np0005593232 compassionate_blackwell[418809]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 06:07:10 np0005593232 compassionate_blackwell[418809]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:07:10 np0005593232 compassionate_blackwell[418809]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 06:07:10 np0005593232 compassionate_blackwell[418809]:        "osd_id": 0,
Jan 23 06:07:10 np0005593232 compassionate_blackwell[418809]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:07:10 np0005593232 compassionate_blackwell[418809]:        "type": "bluestore"
Jan 23 06:07:10 np0005593232 compassionate_blackwell[418809]:    }
Jan 23 06:07:10 np0005593232 compassionate_blackwell[418809]: }
Jan 23 06:07:10 np0005593232 systemd[1]: libpod-cfa96940faedb131b07c8b15a7ec78c71083ec79ef16c29c0832b0257b312592.scope: Deactivated successfully.
Jan 23 06:07:10 np0005593232 podman[418793]: 2026-01-23 11:07:10.43059606 +0000 UTC m=+1.060218780 container died cfa96940faedb131b07c8b15a7ec78c71083ec79ef16c29c0832b0257b312592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:07:10 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1d88734d5459d56ecf17c6cdec0198abc096a3e259697fb12b385dcfbd8dffc2-merged.mount: Deactivated successfully.
Jan 23 06:07:10 np0005593232 podman[418793]: 2026-01-23 11:07:10.822204822 +0000 UTC m=+1.451827542 container remove cfa96940faedb131b07c8b15a7ec78c71083ec79ef16c29c0832b0257b312592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 06:07:10 np0005593232 systemd[1]: libpod-conmon-cfa96940faedb131b07c8b15a7ec78c71083ec79ef16c29c0832b0257b312592.scope: Deactivated successfully.
Jan 23 06:07:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 06:07:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:07:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 06:07:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:11.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:07:11 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 0bffd98f-4b4e-4831-8e01-9e447bfb5e0c does not exist
Jan 23 06:07:11 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 0a9cb2d1-f90a-4d5e-bc0c-18cb801b8a3f does not exist
Jan 23 06:07:11 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev aa69c630-a2d1-4f6e-9ee9-fcf4eeaa9e2b does not exist
Jan 23 06:07:11 np0005593232 nova_compute[250269]: 2026-01-23 11:07:11.318 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4244: 321 pgs: 321 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 237 KiB/s rd, 11 KiB/s wr, 22 op/s
Jan 23 06:07:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:07:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:11.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:07:12 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:07:12 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:07:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:13.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4245: 321 pgs: 321 active+clean; 144 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 173 KiB/s rd, 10 KiB/s wr, 21 op/s
Jan 23 06:07:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:07:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:13.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:07:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:07:14 np0005593232 nova_compute[250269]: 2026-01-23 11:07:14.963 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:07:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:15.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:07:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4246: 321 pgs: 321 active+clean; 144 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.7 KiB/s rd, 10 KiB/s wr, 10 op/s
Jan 23 06:07:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:15.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:16 np0005593232 nova_compute[250269]: 2026-01-23 11:07:16.320 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:07:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:17.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:07:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4247: 321 pgs: 321 active+clean; 121 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s rd, 11 KiB/s wr, 17 op/s
Jan 23 06:07:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:17.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:18 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:07:18.461 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '100'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:07:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:07:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:19.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:07:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:07:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4248: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 06:07:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:19.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:19 np0005593232 nova_compute[250269]: 2026-01-23 11:07:19.965 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:21.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4249: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 06:07:21 np0005593232 nova_compute[250269]: 2026-01-23 11:07:21.371 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:21 np0005593232 podman[418950]: 2026-01-23 11:07:21.501812696 +0000 UTC m=+0.115703859 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 23 06:07:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:21.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:23.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4250: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Jan 23 06:07:23 np0005593232 podman[418977]: 2026-01-23 11:07:23.408712779 +0000 UTC m=+0.064180970 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 06:07:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:07:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:23.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:07:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:07:24 np0005593232 nova_compute[250269]: 2026-01-23 11:07:24.970 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:25.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4251: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Jan 23 06:07:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:25.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:26 np0005593232 nova_compute[250269]: 2026-01-23 11:07:26.374 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:27.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4252: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Jan 23 06:07:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:27.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:07:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2393615124' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:07:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:29.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:07:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4253: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.4 KiB/s rd, 341 B/s wr, 11 op/s
Jan 23 06:07:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:07:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:29.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:07:29 np0005593232 nova_compute[250269]: 2026-01-23 11:07:29.973 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:31.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:31 np0005593232 nova_compute[250269]: 2026-01-23 11:07:31.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:07:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4254: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:07:31 np0005593232 nova_compute[250269]: 2026-01-23 11:07:31.376 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:31.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:32 np0005593232 nova_compute[250269]: 2026-01-23 11:07:32.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:07:32 np0005593232 nova_compute[250269]: 2026-01-23 11:07:32.290 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:07:32 np0005593232 nova_compute[250269]: 2026-01-23 11:07:32.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:07:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:07:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:33.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:07:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4255: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:07:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:07:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:33.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:07:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:07:34 np0005593232 nova_compute[250269]: 2026-01-23 11:07:34.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:07:34 np0005593232 nova_compute[250269]: 2026-01-23 11:07:34.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 06:07:34 np0005593232 nova_compute[250269]: 2026-01-23 11:07:34.975 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:07:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:35.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:07:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4256: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:07:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:07:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:35.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:07:36 np0005593232 nova_compute[250269]: 2026-01-23 11:07:36.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:07:36 np0005593232 nova_compute[250269]: 2026-01-23 11:07:36.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 06:07:36 np0005593232 nova_compute[250269]: 2026-01-23 11:07:36.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 06:07:36 np0005593232 nova_compute[250269]: 2026-01-23 11:07:36.378 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:37 np0005593232 nova_compute[250269]: 2026-01-23 11:07:36.999 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 06:07:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:37.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4257: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:07:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_11:07:37
Jan 23 06:07:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 06:07:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 06:07:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['images', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'vms', 'default.rgw.log']
Jan 23 06:07:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 06:07:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:37.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:07:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:07:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:07:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:07:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:07:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:07:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 06:07:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:07:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 06:07:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:07:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:07:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:07:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:07:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:07:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:07:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:07:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:39.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:39 np0005593232 nova_compute[250269]: 2026-01-23 11:07:39.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:07:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4258: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:07:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:39.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:07:39 np0005593232 nova_compute[250269]: 2026-01-23 11:07:39.977 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:41.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4259: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:07:41 np0005593232 nova_compute[250269]: 2026-01-23 11:07:41.381 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:41.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:07:42.690 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:07:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:07:42.690 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:07:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:07:42.690 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:07:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:43.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4260: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:07:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:43.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 23 06:07:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/739007400' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 23 06:07:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 23 06:07:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/739007400' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 23 06:07:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:07:44 np0005593232 nova_compute[250269]: 2026-01-23 11:07:44.979 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:45.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4261: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:07:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:07:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:45.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:07:46 np0005593232 nova_compute[250269]: 2026-01-23 11:07:46.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:07:46 np0005593232 nova_compute[250269]: 2026-01-23 11:07:46.383 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:47.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4262: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:07:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:47.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 06:07:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:07:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:07:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:07:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:07:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:07:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 06:07:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:07:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:07:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:07:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:07:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:07:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:07:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:07:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:07:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:07:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:07:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:07:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:07:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:07:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:07:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:07:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:07:48 np0005593232 nova_compute[250269]: 2026-01-23 11:07:48.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:07:48 np0005593232 nova_compute[250269]: 2026-01-23 11:07:48.336 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:07:48 np0005593232 nova_compute[250269]: 2026-01-23 11:07:48.336 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:07:48 np0005593232 nova_compute[250269]: 2026-01-23 11:07:48.336 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:07:48 np0005593232 nova_compute[250269]: 2026-01-23 11:07:48.337 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 06:07:48 np0005593232 nova_compute[250269]: 2026-01-23 11:07:48.337 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:07:48 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #219. Immutable memtables: 0.
Jan 23 06:07:48 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:07:48.648648) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 06:07:48 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 137] Flushing memtable with next log file: 219
Jan 23 06:07:48 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166468648686, "job": 137, "event": "flush_started", "num_memtables": 1, "num_entries": 939, "num_deletes": 251, "total_data_size": 1480901, "memory_usage": 1498080, "flush_reason": "Manual Compaction"}
Jan 23 06:07:48 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 137] Level-0 flush table #220: started
Jan 23 06:07:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:07:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:49.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166469181129, "cf_name": "default", "job": 137, "event": "table_file_creation", "file_number": 220, "file_size": 1466995, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 92968, "largest_seqno": 93906, "table_properties": {"data_size": 1462255, "index_size": 2327, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10433, "raw_average_key_size": 19, "raw_value_size": 1452808, "raw_average_value_size": 2772, "num_data_blocks": 102, "num_entries": 524, "num_filter_entries": 524, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769166384, "oldest_key_time": 1769166384, "file_creation_time": 1769166468, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 220, "seqno_to_time_mapping": "N/A"}}
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 137] Flush lasted 532528 microseconds, and 4089 cpu microseconds.
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3861838720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:07:49 np0005593232 nova_compute[250269]: 2026-01-23 11:07:49.278 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.941s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:07:49.181172) [db/flush_job.cc:967] [default] [JOB 137] Level-0 flush table #220: 1466995 bytes OK
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:07:49.181192) [db/memtable_list.cc:519] [default] Level-0 commit table #220 started
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:07:49.346184) [db/memtable_list.cc:722] [default] Level-0 commit table #220: memtable #1 done
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:07:49.346235) EVENT_LOG_v1 {"time_micros": 1769166469346225, "job": 137, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:07:49.346261) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 137] Try to delete WAL files size 1476405, prev total WAL file size 1476405, number of live WAL files 2.
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000216.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:07:49.347346) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039303336' seq:72057594037927935, type:22 .. '7061786F730039323838' seq:0, type:0; will stop at (end)
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 138] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 137 Base level 0, inputs: [220(1432KB)], [218(12MB)]
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166469347387, "job": 138, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [220], "files_L6": [218], "score": -1, "input_data_size": 14142865, "oldest_snapshot_seqno": -1}
Jan 23 06:07:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4263: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:07:49 np0005593232 nova_compute[250269]: 2026-01-23 11:07:49.474 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 06:07:49 np0005593232 nova_compute[250269]: 2026-01-23 11:07:49.476 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4121MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 06:07:49 np0005593232 nova_compute[250269]: 2026-01-23 11:07:49.477 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:07:49 np0005593232 nova_compute[250269]: 2026-01-23 11:07:49.477 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:07:49 np0005593232 nova_compute[250269]: 2026-01-23 11:07:49.574 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 06:07:49 np0005593232 nova_compute[250269]: 2026-01-23 11:07:49.574 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 06:07:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:49.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:49 np0005593232 nova_compute[250269]: 2026-01-23 11:07:49.619 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing inventories for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 23 06:07:49 np0005593232 nova_compute[250269]: 2026-01-23 11:07:49.670 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating ProviderTree inventory for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 23 06:07:49 np0005593232 nova_compute[250269]: 2026-01-23 11:07:49.670 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 06:07:49 np0005593232 nova_compute[250269]: 2026-01-23 11:07:49.685 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing aggregate associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 23 06:07:49 np0005593232 nova_compute[250269]: 2026-01-23 11:07:49.722 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing trait associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 23 06:07:49 np0005593232 nova_compute[250269]: 2026-01-23 11:07:49.737 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 138] Generated table #221: 11541 keys, 12190969 bytes, temperature: kUnknown
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166469923031, "cf_name": "default", "job": 138, "event": "table_file_creation", "file_number": 221, "file_size": 12190969, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12120741, "index_size": 40291, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28869, "raw_key_size": 306850, "raw_average_key_size": 26, "raw_value_size": 11923415, "raw_average_value_size": 1033, "num_data_blocks": 1512, "num_entries": 11541, "num_filter_entries": 11541, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769166469, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 221, "seqno_to_time_mapping": "N/A"}}
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:07:49 np0005593232 nova_compute[250269]: 2026-01-23 11:07:49.980 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:07:49.923309) [db/compaction/compaction_job.cc:1663] [default] [JOB 138] Compacted 1@0 + 1@6 files to L6 => 12190969 bytes
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:07:49.997790) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 24.6 rd, 21.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 12.1 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(18.0) write-amplify(8.3) OK, records in: 12058, records dropped: 517 output_compression: NoCompression
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:07:49.997837) EVENT_LOG_v1 {"time_micros": 1769166469997821, "job": 138, "event": "compaction_finished", "compaction_time_micros": 575729, "compaction_time_cpu_micros": 53649, "output_level": 6, "num_output_files": 1, "total_output_size": 12190969, "num_input_records": 12058, "num_output_records": 11541, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000220.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:07:49 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166469998457, "job": 138, "event": "table_file_deletion", "file_number": 220}
Jan 23 06:07:50 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000218.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:07:50 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166470001239, "job": 138, "event": "table_file_deletion", "file_number": 218}
Jan 23 06:07:50 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:07:49.346939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:07:50 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:07:50.001320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:07:50 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:07:50.001325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:07:50 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:07:50.001327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:07:50 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:07:50.001329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:07:50 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:07:50.001331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:07:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:07:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3661968437' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:07:50 np0005593232 nova_compute[250269]: 2026-01-23 11:07:50.310 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:07:50 np0005593232 nova_compute[250269]: 2026-01-23 11:07:50.316 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:07:50 np0005593232 nova_compute[250269]: 2026-01-23 11:07:50.336 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:07:50 np0005593232 nova_compute[250269]: 2026-01-23 11:07:50.338 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 06:07:50 np0005593232 nova_compute[250269]: 2026-01-23 11:07:50.338 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:07:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:07:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:51.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:07:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4264: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 0 op/s
Jan 23 06:07:51 np0005593232 nova_compute[250269]: 2026-01-23 11:07:51.384 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:51.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:52 np0005593232 podman[419104]: 2026-01-23 11:07:52.436925978 +0000 UTC m=+0.086952577 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 23 06:07:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:53.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4265: 321 pgs: 321 active+clean; 160 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 1.6 MiB/s wr, 15 op/s
Jan 23 06:07:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:53.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:54 np0005593232 podman[419132]: 2026-01-23 11:07:54.397054978 +0000 UTC m=+0.062063722 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 06:07:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:07:54 np0005593232 nova_compute[250269]: 2026-01-23 11:07:54.982 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:55.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4266: 321 pgs: 321 active+clean; 160 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 1.6 MiB/s wr, 15 op/s
Jan 23 06:07:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:55.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:56 np0005593232 nova_compute[250269]: 2026-01-23 11:07:56.386 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:07:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:57.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4267: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 23 06:07:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:57.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:07:59.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4268: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 23 06:07:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:07:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:07:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:07:59.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:07:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:08:00 np0005593232 nova_compute[250269]: 2026-01-23 11:08:00.074 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:01.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4269: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 226 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 23 06:08:01 np0005593232 nova_compute[250269]: 2026-01-23 11:08:01.389 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:08:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:01.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:08:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:03.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4270: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 23 06:08:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:03.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:08:05 np0005593232 nova_compute[250269]: 2026-01-23 11:08:05.077 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:05.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4271: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 201 KiB/s wr, 85 op/s
Jan 23 06:08:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:08:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:05.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:08:06 np0005593232 nova_compute[250269]: 2026-01-23 11:08:06.391 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:08:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:07.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:08:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4272: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 201 KiB/s wr, 85 op/s
Jan 23 06:08:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:07.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:08:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:08:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:08:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:08:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:08:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:08:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:09.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4273: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 170 B/s wr, 66 op/s
Jan 23 06:08:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:08:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:09.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:08:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:08:10 np0005593232 nova_compute[250269]: 2026-01-23 11:08:10.078 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:11.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4274: 321 pgs: 321 active+clean; 168 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 178 KiB/s wr, 68 op/s
Jan 23 06:08:11 np0005593232 nova_compute[250269]: 2026-01-23 11:08:11.394 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:11.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 06:08:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 23 06:08:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 06:08:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 23 06:08:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 06:08:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:08:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 06:08:12 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:08:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:13.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4275: 321 pgs: 321 active+clean; 188 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 2.0 MiB/s wr, 80 op/s
Jan 23 06:08:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:13.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:13 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 06:08:13 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 06:08:13 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:08:13 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:08:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:08:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:08:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 06:08:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:08:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 06:08:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:08:15 np0005593232 nova_compute[250269]: 2026-01-23 11:08:15.080 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:08:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:15.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:08:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:08:15 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a4319a95-6bff-4d47-8348-82e3a38611df does not exist
Jan 23 06:08:15 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 01e55d75-2218-46f4-8aa5-1ddadd185eb4 does not exist
Jan 23 06:08:15 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev acabf2de-602a-48ae-9bb1-f42fb88a4979 does not exist
Jan 23 06:08:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4276: 321 pgs: 321 active+clean; 188 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 30 KiB/s rd, 2.0 MiB/s wr, 23 op/s
Jan 23 06:08:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 06:08:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 06:08:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 06:08:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:08:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:08:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:08:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:15.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:15 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:08:16 np0005593232 podman[419484]: 2026-01-23 11:08:16.182809642 +0000 UTC m=+0.028684801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:08:16 np0005593232 nova_compute[250269]: 2026-01-23 11:08:16.395 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:17 np0005593232 podman[419484]: 2026-01-23 11:08:17.063397881 +0000 UTC m=+0.909273060 container create ac9d4fea795cc3d47affccd3bd94b1f1909923a29923844dca9bd36153e30a5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaum, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:08:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:17.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4277: 321 pgs: 321 active+clean; 192 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 102 KiB/s rd, 2.0 MiB/s wr, 37 op/s
Jan 23 06:08:17 np0005593232 systemd[1]: Started libpod-conmon-ac9d4fea795cc3d47affccd3bd94b1f1909923a29923844dca9bd36153e30a5b.scope.
Jan 23 06:08:17 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:08:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:17.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:17 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:08:17 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:08:18 np0005593232 podman[419484]: 2026-01-23 11:08:18.112222024 +0000 UTC m=+1.958097173 container init ac9d4fea795cc3d47affccd3bd94b1f1909923a29923844dca9bd36153e30a5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:08:18 np0005593232 podman[419484]: 2026-01-23 11:08:18.11968185 +0000 UTC m=+1.965556989 container start ac9d4fea795cc3d47affccd3bd94b1f1909923a29923844dca9bd36153e30a5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaum, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:08:18 np0005593232 ecstatic_chaum[419501]: 167 167
Jan 23 06:08:18 np0005593232 systemd[1]: libpod-ac9d4fea795cc3d47affccd3bd94b1f1909923a29923844dca9bd36153e30a5b.scope: Deactivated successfully.
Jan 23 06:08:18 np0005593232 conmon[419501]: conmon ac9d4fea795cc3d47aff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ac9d4fea795cc3d47affccd3bd94b1f1909923a29923844dca9bd36153e30a5b.scope/container/memory.events
Jan 23 06:08:18 np0005593232 podman[419484]: 2026-01-23 11:08:18.377062913 +0000 UTC m=+2.222938052 container attach ac9d4fea795cc3d47affccd3bd94b1f1909923a29923844dca9bd36153e30a5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaum, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 06:08:18 np0005593232 podman[419484]: 2026-01-23 11:08:18.378665067 +0000 UTC m=+2.224540226 container died ac9d4fea795cc3d47affccd3bd94b1f1909923a29923844dca9bd36153e30a5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 06:08:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:19.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4278: 321 pgs: 321 active+clean; 193 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 253 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Jan 23 06:08:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:19.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:08:20 np0005593232 nova_compute[250269]: 2026-01-23 11:08:20.082 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:21.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4279: 321 pgs: 321 active+clean; 194 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 256 KiB/s rd, 2.1 MiB/s wr, 56 op/s
Jan 23 06:08:21 np0005593232 nova_compute[250269]: 2026-01-23 11:08:21.399 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:21 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b74f9214076614c8103549f752d6ec679a879a270ee62dae6f849042489cf7d7-merged.mount: Deactivated successfully.
Jan 23 06:08:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:21.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:22 np0005593232 podman[419484]: 2026-01-23 11:08:22.426192843 +0000 UTC m=+6.272067982 container remove ac9d4fea795cc3d47affccd3bd94b1f1909923a29923844dca9bd36153e30a5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 06:08:22 np0005593232 systemd[1]: libpod-conmon-ac9d4fea795cc3d47affccd3bd94b1f1909923a29923844dca9bd36153e30a5b.scope: Deactivated successfully.
Jan 23 06:08:22 np0005593232 podman[419578]: 2026-01-23 11:08:22.609544626 +0000 UTC m=+0.050718499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:08:22 np0005593232 podman[419578]: 2026-01-23 11:08:22.967181312 +0000 UTC m=+0.408355155 container create 32bc14b595bee66274340363e45ba90b2f57d1b4c8bc9b00522eccb16ade5630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_feynman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 23 06:08:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:23.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4280: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 274 KiB/s rd, 2.0 MiB/s wr, 55 op/s
Jan 23 06:08:23 np0005593232 systemd[1]: Started libpod-conmon-32bc14b595bee66274340363e45ba90b2f57d1b4c8bc9b00522eccb16ade5630.scope.
Jan 23 06:08:23 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:08:23 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0e53a673a83a4f5deff17adf704c6e67dbdd4dc2f2405d68e4eeeb022963bfd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:08:23 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0e53a673a83a4f5deff17adf704c6e67dbdd4dc2f2405d68e4eeeb022963bfd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:08:23 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0e53a673a83a4f5deff17adf704c6e67dbdd4dc2f2405d68e4eeeb022963bfd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:08:23 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0e53a673a83a4f5deff17adf704c6e67dbdd4dc2f2405d68e4eeeb022963bfd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:08:23 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0e53a673a83a4f5deff17adf704c6e67dbdd4dc2f2405d68e4eeeb022963bfd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 06:08:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:23.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:24 np0005593232 podman[419578]: 2026-01-23 11:08:24.03933935 +0000 UTC m=+1.480513313 container init 32bc14b595bee66274340363e45ba90b2f57d1b4c8bc9b00522eccb16ade5630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:08:24 np0005593232 podman[419578]: 2026-01-23 11:08:24.054140618 +0000 UTC m=+1.495314451 container start 32bc14b595bee66274340363e45ba90b2f57d1b4c8bc9b00522eccb16ade5630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_feynman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 23 06:08:24 np0005593232 podman[419578]: 2026-01-23 11:08:24.393489541 +0000 UTC m=+1.834663404 container attach 32bc14b595bee66274340363e45ba90b2f57d1b4c8bc9b00522eccb16ade5630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:08:24 np0005593232 podman[419592]: 2026-01-23 11:08:24.397043159 +0000 UTC m=+1.376867107 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 06:08:24 np0005593232 cool_feynman[419606]: --> passed data devices: 0 physical, 1 LVM
Jan 23 06:08:24 np0005593232 cool_feynman[419606]: --> relative data size: 1.0
Jan 23 06:08:24 np0005593232 cool_feynman[419606]: --> All data devices are unavailable
Jan 23 06:08:24 np0005593232 systemd[1]: libpod-32bc14b595bee66274340363e45ba90b2f57d1b4c8bc9b00522eccb16ade5630.scope: Deactivated successfully.
Jan 23 06:08:24 np0005593232 podman[419578]: 2026-01-23 11:08:24.955327845 +0000 UTC m=+2.396501698 container died 32bc14b595bee66274340363e45ba90b2f57d1b4c8bc9b00522eccb16ade5630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_feynman, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:08:25 np0005593232 nova_compute[250269]: 2026-01-23 11:08:25.084 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:08:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:25.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4281: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 252 KiB/s rd, 97 KiB/s wr, 35 op/s
Jan 23 06:08:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:08:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:25.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:08:26 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b0e53a673a83a4f5deff17adf704c6e67dbdd4dc2f2405d68e4eeeb022963bfd-merged.mount: Deactivated successfully.
Jan 23 06:08:26 np0005593232 nova_compute[250269]: 2026-01-23 11:08:26.402 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:27 np0005593232 podman[419578]: 2026-01-23 11:08:27.096383611 +0000 UTC m=+4.537557464 container remove 32bc14b595bee66274340363e45ba90b2f57d1b4c8bc9b00522eccb16ade5630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 23 06:08:27 np0005593232 systemd[1]: libpod-conmon-32bc14b595bee66274340363e45ba90b2f57d1b4c8bc9b00522eccb16ade5630.scope: Deactivated successfully.
Jan 23 06:08:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:27.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:27 np0005593232 podman[419638]: 2026-01-23 11:08:27.206191267 +0000 UTC m=+2.220532957 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Jan 23 06:08:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4282: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 265 KiB/s rd, 97 KiB/s wr, 38 op/s
Jan 23 06:08:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:08:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:27.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:08:27 np0005593232 podman[419809]: 2026-01-23 11:08:27.691227294 +0000 UTC m=+0.021710579 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:08:27 np0005593232 podman[419809]: 2026-01-23 11:08:27.992774135 +0000 UTC m=+0.323257420 container create 8e23b7a3c440543c024841dceac6aca7218ce18468f006935c408c6a7a8c05f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_brattain, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:08:28 np0005593232 systemd[1]: Started libpod-conmon-8e23b7a3c440543c024841dceac6aca7218ce18468f006935c408c6a7a8c05f1.scope.
Jan 23 06:08:28 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:08:28 np0005593232 podman[419809]: 2026-01-23 11:08:28.387578166 +0000 UTC m=+0.718061441 container init 8e23b7a3c440543c024841dceac6aca7218ce18468f006935c408c6a7a8c05f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_brattain, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 23 06:08:28 np0005593232 podman[419809]: 2026-01-23 11:08:28.395996468 +0000 UTC m=+0.726479723 container start 8e23b7a3c440543c024841dceac6aca7218ce18468f006935c408c6a7a8c05f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_brattain, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:08:28 np0005593232 podman[419809]: 2026-01-23 11:08:28.400493202 +0000 UTC m=+0.730976487 container attach 8e23b7a3c440543c024841dceac6aca7218ce18468f006935c408c6a7a8c05f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_brattain, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 23 06:08:28 np0005593232 clever_brattain[419825]: 167 167
Jan 23 06:08:28 np0005593232 systemd[1]: libpod-8e23b7a3c440543c024841dceac6aca7218ce18468f006935c408c6a7a8c05f1.scope: Deactivated successfully.
Jan 23 06:08:28 np0005593232 podman[419809]: 2026-01-23 11:08:28.404222995 +0000 UTC m=+0.734706260 container died 8e23b7a3c440543c024841dceac6aca7218ce18468f006935c408c6a7a8c05f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 23 06:08:29 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d2d99f01b6f90870bb849928ae87e1477dd68dbe7190ee22d9546e266f0e7033-merged.mount: Deactivated successfully.
Jan 23 06:08:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:29.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4283: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 193 KiB/s rd, 91 KiB/s wr, 25 op/s
Jan 23 06:08:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:08:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:29.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:08:30 np0005593232 nova_compute[250269]: 2026-01-23 11:08:30.087 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:30 np0005593232 podman[419809]: 2026-01-23 11:08:30.345469944 +0000 UTC m=+2.675953199 container remove 8e23b7a3c440543c024841dceac6aca7218ce18468f006935c408c6a7a8c05f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_brattain, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 06:08:30 np0005593232 systemd[1]: libpod-conmon-8e23b7a3c440543c024841dceac6aca7218ce18468f006935c408c6a7a8c05f1.scope: Deactivated successfully.
Jan 23 06:08:30 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:08:30 np0005593232 podman[419850]: 2026-01-23 11:08:30.518682847 +0000 UTC m=+0.025235256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:08:30 np0005593232 podman[419850]: 2026-01-23 11:08:30.690291317 +0000 UTC m=+0.196843726 container create eb2d78633f33f8f3aa8ece45626dbb1c27af342c6f3d068b7fee2257936ac051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:08:31 np0005593232 systemd[1]: Started libpod-conmon-eb2d78633f33f8f3aa8ece45626dbb1c27af342c6f3d068b7fee2257936ac051.scope.
Jan 23 06:08:31 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:08:31 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/686c0777ee15262b465bac630f9f15ac8e7ce696ce9f1cf959ba92ab4cd9592f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:08:31 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/686c0777ee15262b465bac630f9f15ac8e7ce696ce9f1cf959ba92ab4cd9592f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:08:31 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/686c0777ee15262b465bac630f9f15ac8e7ce696ce9f1cf959ba92ab4cd9592f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:08:31 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/686c0777ee15262b465bac630f9f15ac8e7ce696ce9f1cf959ba92ab4cd9592f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:08:31 np0005593232 podman[419850]: 2026-01-23 11:08:31.111273509 +0000 UTC m=+0.617825928 container init eb2d78633f33f8f3aa8ece45626dbb1c27af342c6f3d068b7fee2257936ac051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 06:08:31 np0005593232 podman[419850]: 2026-01-23 11:08:31.119737722 +0000 UTC m=+0.626290121 container start eb2d78633f33f8f3aa8ece45626dbb1c27af342c6f3d068b7fee2257936ac051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 23 06:08:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:31.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:31 np0005593232 podman[419850]: 2026-01-23 11:08:31.166462779 +0000 UTC m=+0.673015178 container attach eb2d78633f33f8f3aa8ece45626dbb1c27af342c6f3d068b7fee2257936ac051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_nightingale, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:08:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4284: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 42 KiB/s rd, 19 KiB/s wr, 11 op/s
Jan 23 06:08:31 np0005593232 nova_compute[250269]: 2026-01-23 11:08:31.404 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:08:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:31.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]: {
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:    "0": [
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:        {
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:            "devices": [
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:                "/dev/loop3"
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:            ],
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:            "lv_name": "ceph_lv0",
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:            "lv_size": "7511998464",
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:            "name": "ceph_lv0",
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:            "tags": {
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:                "ceph.cephx_lockbox_secret": "",
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:                "ceph.cluster_name": "ceph",
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:                "ceph.crush_device_class": "",
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:                "ceph.encrypted": "0",
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:                "ceph.osd_id": "0",
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:                "ceph.type": "block",
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:                "ceph.vdo": "0"
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:            },
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:            "type": "block",
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:            "vg_name": "ceph_vg0"
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:        }
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]:    ]
Jan 23 06:08:31 np0005593232 nervous_nightingale[419867]: }
Jan 23 06:08:31 np0005593232 systemd[1]: libpod-eb2d78633f33f8f3aa8ece45626dbb1c27af342c6f3d068b7fee2257936ac051.scope: Deactivated successfully.
Jan 23 06:08:31 np0005593232 podman[419850]: 2026-01-23 11:08:31.919343519 +0000 UTC m=+1.425895918 container died eb2d78633f33f8f3aa8ece45626dbb1c27af342c6f3d068b7fee2257936ac051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:08:33 np0005593232 systemd[1]: var-lib-containers-storage-overlay-686c0777ee15262b465bac630f9f15ac8e7ce696ce9f1cf959ba92ab4cd9592f-merged.mount: Deactivated successfully.
Jan 23 06:08:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:08:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:33.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:08:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4285: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 6 op/s
Jan 23 06:08:33 np0005593232 podman[419850]: 2026-01-23 11:08:33.387206931 +0000 UTC m=+2.893759350 container remove eb2d78633f33f8f3aa8ece45626dbb1c27af342c6f3d068b7fee2257936ac051 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_nightingale, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:08:33 np0005593232 systemd[1]: libpod-conmon-eb2d78633f33f8f3aa8ece45626dbb1c27af342c6f3d068b7fee2257936ac051.scope: Deactivated successfully.
Jan 23 06:08:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:33.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:34 np0005593232 podman[420031]: 2026-01-23 11:08:34.015209659 +0000 UTC m=+0.039121629 container create 76f2c153a155cb1900a7d3c50c4214e5bb57ac4ba137e0b501cf90d7718d3ed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_nash, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:08:34 np0005593232 systemd[1]: Started libpod-conmon-76f2c153a155cb1900a7d3c50c4214e5bb57ac4ba137e0b501cf90d7718d3ed7.scope.
Jan 23 06:08:34 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:08:34 np0005593232 podman[420031]: 2026-01-23 11:08:33.996794921 +0000 UTC m=+0.020706911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:08:34 np0005593232 podman[420031]: 2026-01-23 11:08:34.10597085 +0000 UTC m=+0.129882840 container init 76f2c153a155cb1900a7d3c50c4214e5bb57ac4ba137e0b501cf90d7718d3ed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:08:34 np0005593232 podman[420031]: 2026-01-23 11:08:34.112555462 +0000 UTC m=+0.136467432 container start 76f2c153a155cb1900a7d3c50c4214e5bb57ac4ba137e0b501cf90d7718d3ed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_nash, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 06:08:34 np0005593232 podman[420031]: 2026-01-23 11:08:34.116095839 +0000 UTC m=+0.140007809 container attach 76f2c153a155cb1900a7d3c50c4214e5bb57ac4ba137e0b501cf90d7718d3ed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:08:34 np0005593232 adoring_nash[420047]: 167 167
Jan 23 06:08:34 np0005593232 systemd[1]: libpod-76f2c153a155cb1900a7d3c50c4214e5bb57ac4ba137e0b501cf90d7718d3ed7.scope: Deactivated successfully.
Jan 23 06:08:34 np0005593232 podman[420031]: 2026-01-23 11:08:34.119267517 +0000 UTC m=+0.143179487 container died 76f2c153a155cb1900a7d3c50c4214e5bb57ac4ba137e0b501cf90d7718d3ed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_nash, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 23 06:08:34 np0005593232 systemd[1]: var-lib-containers-storage-overlay-856c42467c80db235bcf76b90ffc015020380f0f2331d27b279902f3122cd55d-merged.mount: Deactivated successfully.
Jan 23 06:08:34 np0005593232 podman[420031]: 2026-01-23 11:08:34.16110333 +0000 UTC m=+0.185015300 container remove 76f2c153a155cb1900a7d3c50c4214e5bb57ac4ba137e0b501cf90d7718d3ed7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_nash, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 06:08:34 np0005593232 systemd[1]: libpod-conmon-76f2c153a155cb1900a7d3c50c4214e5bb57ac4ba137e0b501cf90d7718d3ed7.scope: Deactivated successfully.
Jan 23 06:08:34 np0005593232 podman[420073]: 2026-01-23 11:08:34.333435179 +0000 UTC m=+0.046277146 container create 67b2f74293e2f5a81834f1e0126ea14652c71bb865b1ff8cb5a73bcee1745011 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:08:34 np0005593232 nova_compute[250269]: 2026-01-23 11:08:34.339 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:08:34 np0005593232 nova_compute[250269]: 2026-01-23 11:08:34.340 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:08:34 np0005593232 nova_compute[250269]: 2026-01-23 11:08:34.340 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:08:34 np0005593232 nova_compute[250269]: 2026-01-23 11:08:34.341 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:08:34 np0005593232 systemd[1]: Started libpod-conmon-67b2f74293e2f5a81834f1e0126ea14652c71bb865b1ff8cb5a73bcee1745011.scope.
Jan 23 06:08:34 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:08:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c303b3f7f2dccf64eb8db16465d1397ac55a5fff92125879d1b29730965da65a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:08:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c303b3f7f2dccf64eb8db16465d1397ac55a5fff92125879d1b29730965da65a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:08:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c303b3f7f2dccf64eb8db16465d1397ac55a5fff92125879d1b29730965da65a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:08:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c303b3f7f2dccf64eb8db16465d1397ac55a5fff92125879d1b29730965da65a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:08:34 np0005593232 podman[420073]: 2026-01-23 11:08:34.409454184 +0000 UTC m=+0.122296171 container init 67b2f74293e2f5a81834f1e0126ea14652c71bb865b1ff8cb5a73bcee1745011 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:08:34 np0005593232 podman[420073]: 2026-01-23 11:08:34.312558284 +0000 UTC m=+0.025400271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:08:34 np0005593232 podman[420073]: 2026-01-23 11:08:34.420440757 +0000 UTC m=+0.133282724 container start 67b2f74293e2f5a81834f1e0126ea14652c71bb865b1ff8cb5a73bcee1745011 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_allen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 23 06:08:34 np0005593232 podman[420073]: 2026-01-23 11:08:34.424153079 +0000 UTC m=+0.136995046 container attach 67b2f74293e2f5a81834f1e0126ea14652c71bb865b1ff8cb5a73bcee1745011 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_allen, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:08:35 np0005593232 nova_compute[250269]: 2026-01-23 11:08:35.089 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:35.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:35 np0005593232 zen_allen[420090]: {
Jan 23 06:08:35 np0005593232 zen_allen[420090]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 06:08:35 np0005593232 zen_allen[420090]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:08:35 np0005593232 zen_allen[420090]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 06:08:35 np0005593232 zen_allen[420090]:        "osd_id": 0,
Jan 23 06:08:35 np0005593232 zen_allen[420090]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:08:35 np0005593232 zen_allen[420090]:        "type": "bluestore"
Jan 23 06:08:35 np0005593232 zen_allen[420090]:    }
Jan 23 06:08:35 np0005593232 zen_allen[420090]: }
Jan 23 06:08:35 np0005593232 nova_compute[250269]: 2026-01-23 11:08:35.290 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:08:35 np0005593232 nova_compute[250269]: 2026-01-23 11:08:35.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 06:08:35 np0005593232 systemd[1]: libpod-67b2f74293e2f5a81834f1e0126ea14652c71bb865b1ff8cb5a73bcee1745011.scope: Deactivated successfully.
Jan 23 06:08:35 np0005593232 podman[420112]: 2026-01-23 11:08:35.350956712 +0000 UTC m=+0.029799153 container died 67b2f74293e2f5a81834f1e0126ea14652c71bb865b1ff8cb5a73bcee1745011 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_allen, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:08:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4286: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 682 B/s wr, 3 op/s
Jan 23 06:08:35 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c303b3f7f2dccf64eb8db16465d1397ac55a5fff92125879d1b29730965da65a-merged.mount: Deactivated successfully.
Jan 23 06:08:35 np0005593232 podman[420112]: 2026-01-23 11:08:35.419587063 +0000 UTC m=+0.098429484 container remove 67b2f74293e2f5a81834f1e0126ea14652c71bb865b1ff8cb5a73bcee1745011 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_allen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:08:35 np0005593232 systemd[1]: libpod-conmon-67b2f74293e2f5a81834f1e0126ea14652c71bb865b1ff8cb5a73bcee1745011.scope: Deactivated successfully.
Jan 23 06:08:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 06:08:35 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:08:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 06:08:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:08:35 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:08:35 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b188f6ca-5313-402d-9a60-20e4b01db0a8 does not exist
Jan 23 06:08:35 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 9762367b-09a8-4dd6-b37c-36e9a53ef318 does not exist
Jan 23 06:08:35 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 41627451-ce52-41f4-9f70-67cd051ada65 does not exist
Jan 23 06:08:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:35.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:36 np0005593232 nova_compute[250269]: 2026-01-23 11:08:36.294 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:08:36 np0005593232 nova_compute[250269]: 2026-01-23 11:08:36.295 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 06:08:36 np0005593232 nova_compute[250269]: 2026-01-23 11:08:36.296 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 06:08:36 np0005593232 nova_compute[250269]: 2026-01-23 11:08:36.318 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 06:08:36 np0005593232 nova_compute[250269]: 2026-01-23 11:08:36.405 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:37.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:08:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:08:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4287: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 15 KiB/s rd, 682 B/s wr, 5 op/s
Jan 23 06:08:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_11:08:37
Jan 23 06:08:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 06:08:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 06:08:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', '.rgw.root', 'images', '.mgr', 'default.rgw.meta']
Jan 23 06:08:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 06:08:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:08:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:08:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:37.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:08:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:08:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:08:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:08:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 06:08:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 06:08:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:08:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:08:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:08:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:08:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:08:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:08:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:08:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:08:39 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:08:39.005 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=101, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=100) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 06:08:39 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:08:39.006 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 06:08:39 np0005593232 nova_compute[250269]: 2026-01-23 11:08:39.019 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:39.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4288: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.6 KiB/s rd, 0 B/s wr, 5 op/s
Jan 23 06:08:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:39.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:40 np0005593232 nova_compute[250269]: 2026-01-23 11:08:40.098 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:08:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:41.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:41 np0005593232 nova_compute[250269]: 2026-01-23 11:08:41.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:08:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4289: 321 pgs: 321 active+clean; 191 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.7 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Jan 23 06:08:41 np0005593232 nova_compute[250269]: 2026-01-23 11:08:41.407 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:08:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:41.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:08:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:08:42.691 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:08:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:08:42.691 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:08:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:08:42.692 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:08:43 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:08:43.008 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '101'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:08:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:43.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4290: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 14 KiB/s wr, 33 op/s
Jan 23 06:08:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:08:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:43.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:08:45 np0005593232 nova_compute[250269]: 2026-01-23 11:08:45.100 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:08:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:45.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:08:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4291: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 14 KiB/s wr, 33 op/s
Jan 23 06:08:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:08:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:45.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:46 np0005593232 nova_compute[250269]: 2026-01-23 11:08:46.409 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:47.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:47 np0005593232 nova_compute[250269]: 2026-01-23 11:08:47.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:08:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4292: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 14 KiB/s wr, 33 op/s
Jan 23 06:08:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:47.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 06:08:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.0 total, 600.0 interval#012Cumulative writes: 21K writes, 94K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s#012Cumulative WAL: 21K writes, 21K syncs, 1.00 writes per sync, written: 0.14 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1441 writes, 6351 keys, 1441 commit groups, 1.0 writes per commit group, ingest: 10.33 MB, 0.02 MB/s#012Interval WAL: 1441 writes, 1441 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     38.1      3.38              0.50        69    0.049       0      0       0.0       0.0#012  L6      1/0   11.63 MB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   5.5     77.1     66.2     10.76              2.55        68    0.158    564K    36K       0.0       0.0#012 Sum      1/0   11.63 MB   0.0      0.8     0.1      0.7       0.8      0.1       0.0   6.5     58.7     59.5     14.14              3.05       137    0.103    564K    36K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.4     40.0     39.8      2.15              0.28        11    0.196     71K   3129       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   0.0     77.1     66.2     10.76              2.55        68    0.158    564K    36K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     38.1      3.37              0.50        68    0.050       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 7800.0 total, 600.0 interval#012Flush(GB): cumulative 0.126, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.82 GB write, 0.11 MB/s write, 0.81 GB read, 0.11 MB/s read, 14.1 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 2.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557a844231f0#2 capacity: 304.00 MB usage: 89.83 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.000737 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(5165,85.91 MB,28.2592%) FilterBlock(138,1.53 MB,0.503796%) IndexBlock(138,2.39 MB,0.785903%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 23 06:08:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 06:08:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:08:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:08:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:08:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:08:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:08:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 06:08:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:08:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:08:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:08:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:08:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:08:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:08:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:08:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:08:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:08:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:08:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:08:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:08:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:08:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:08:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:08:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:08:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:08:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:49.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:08:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4293: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 14 KiB/s wr, 30 op/s
Jan 23 06:08:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:49.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:50 np0005593232 nova_compute[250269]: 2026-01-23 11:08:50.149 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:50 np0005593232 nova_compute[250269]: 2026-01-23 11:08:50.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:08:50 np0005593232 nova_compute[250269]: 2026-01-23 11:08:50.442 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:08:50 np0005593232 nova_compute[250269]: 2026-01-23 11:08:50.443 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:08:50 np0005593232 nova_compute[250269]: 2026-01-23 11:08:50.443 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:08:50 np0005593232 nova_compute[250269]: 2026-01-23 11:08:50.443 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 06:08:50 np0005593232 nova_compute[250269]: 2026-01-23 11:08:50.443 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:08:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:08:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:51.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:08:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3970097222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:08:51 np0005593232 nova_compute[250269]: 2026-01-23 11:08:51.337 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.894s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:08:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4294: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Jan 23 06:08:51 np0005593232 nova_compute[250269]: 2026-01-23 11:08:51.412 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:51 np0005593232 nova_compute[250269]: 2026-01-23 11:08:51.495 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 06:08:51 np0005593232 nova_compute[250269]: 2026-01-23 11:08:51.496 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4111MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 06:08:51 np0005593232 nova_compute[250269]: 2026-01-23 11:08:51.496 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:08:51 np0005593232 nova_compute[250269]: 2026-01-23 11:08:51.497 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:08:51 np0005593232 nova_compute[250269]: 2026-01-23 11:08:51.570 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 06:08:51 np0005593232 nova_compute[250269]: 2026-01-23 11:08:51.571 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 06:08:51 np0005593232 nova_compute[250269]: 2026-01-23 11:08:51.586 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:08:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:51.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:08:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3980730450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:08:52 np0005593232 nova_compute[250269]: 2026-01-23 11:08:52.009 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:08:52 np0005593232 nova_compute[250269]: 2026-01-23 11:08:52.015 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:08:52 np0005593232 nova_compute[250269]: 2026-01-23 11:08:52.048 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:08:52 np0005593232 nova_compute[250269]: 2026-01-23 11:08:52.050 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 06:08:52 np0005593232 nova_compute[250269]: 2026-01-23 11:08:52.050 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:08:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:08:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:53.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:08:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4295: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s rd, 12 KiB/s wr, 19 op/s
Jan 23 06:08:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:53.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:55 np0005593232 nova_compute[250269]: 2026-01-23 11:08:55.151 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:55.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4296: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:08:55 np0005593232 podman[420280]: 2026-01-23 11:08:55.446988294 +0000 UTC m=+0.102572178 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, 
org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 06:08:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:55.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:08:56 np0005593232 nova_compute[250269]: 2026-01-23 11:08:56.414 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:08:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:57.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4297: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:08:57 np0005593232 podman[420311]: 2026-01-23 11:08:57.390659381 +0000 UTC m=+0.050772190 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 23 06:08:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:57.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:08:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:08:59.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:08:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4298: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:08:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:08:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:08:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:08:59.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:09:00 np0005593232 nova_compute[250269]: 2026-01-23 11:09:00.219 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:09:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:01.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4299: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:09:01 np0005593232 nova_compute[250269]: 2026-01-23 11:09:01.455 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:09:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:01.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:09:03 np0005593232 nova_compute[250269]: 2026-01-23 11:09:03.046 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:09:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:09:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:03.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:09:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4300: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:09:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:03.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:09:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:05.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:09:05 np0005593232 nova_compute[250269]: 2026-01-23 11:09:05.220 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4301: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:09:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:05.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:09:06 np0005593232 nova_compute[250269]: 2026-01-23 11:09:06.458 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:09:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:07.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:09:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4302: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:09:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:09:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:09:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:09:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:09:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:09:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:09:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:07.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:09:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:09.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:09:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4303: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:09:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:09.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:10 np0005593232 nova_compute[250269]: 2026-01-23 11:09:10.223 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:09:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:11.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4304: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:09:11 np0005593232 nova_compute[250269]: 2026-01-23 11:09:11.463 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:09:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:11.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:09:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:09:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:13.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:09:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4305: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:09:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:09:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:13.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:09:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:09:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:15.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:09:15 np0005593232 nova_compute[250269]: 2026-01-23 11:09:15.225 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4306: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:09:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:09:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:15.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:09:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:09:16 np0005593232 nova_compute[250269]: 2026-01-23 11:09:16.465 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:17.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4307: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:09:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:09:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:17.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:09:18 np0005593232 nova_compute[250269]: 2026-01-23 11:09:18.444 250273 DEBUG oslo_concurrency.lockutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Acquiring lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:09:18 np0005593232 nova_compute[250269]: 2026-01-23 11:09:18.445 250273 DEBUG oslo_concurrency.lockutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:09:18 np0005593232 nova_compute[250269]: 2026-01-23 11:09:18.637 250273 DEBUG nova.compute.manager [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 06:09:18 np0005593232 nova_compute[250269]: 2026-01-23 11:09:18.717 250273 DEBUG oslo_concurrency.lockutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:09:18 np0005593232 nova_compute[250269]: 2026-01-23 11:09:18.718 250273 DEBUG oslo_concurrency.lockutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:09:18 np0005593232 nova_compute[250269]: 2026-01-23 11:09:18.726 250273 DEBUG nova.virt.hardware [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 06:09:18 np0005593232 nova_compute[250269]: 2026-01-23 11:09:18.726 250273 INFO nova.compute.claims [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 06:09:19 np0005593232 nova_compute[250269]: 2026-01-23 11:09:19.165 250273 DEBUG oslo_concurrency.processutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:09:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:19.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4308: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:09:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:09:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3507766561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:09:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:19.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:19 np0005593232 nova_compute[250269]: 2026-01-23 11:09:19.736 250273 DEBUG oslo_concurrency.processutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:09:19 np0005593232 nova_compute[250269]: 2026-01-23 11:09:19.743 250273 DEBUG nova.compute.provider_tree [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:09:19 np0005593232 nova_compute[250269]: 2026-01-23 11:09:19.760 250273 DEBUG nova.scheduler.client.report [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:09:19 np0005593232 nova_compute[250269]: 2026-01-23 11:09:19.779 250273 DEBUG oslo_concurrency.lockutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:09:19 np0005593232 nova_compute[250269]: 2026-01-23 11:09:19.780 250273 DEBUG nova.compute.manager [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 06:09:19 np0005593232 nova_compute[250269]: 2026-01-23 11:09:19.828 250273 DEBUG nova.compute.manager [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 06:09:19 np0005593232 nova_compute[250269]: 2026-01-23 11:09:19.829 250273 DEBUG nova.network.neutron [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 06:09:19 np0005593232 nova_compute[250269]: 2026-01-23 11:09:19.847 250273 INFO nova.virt.libvirt.driver [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 06:09:19 np0005593232 nova_compute[250269]: 2026-01-23 11:09:19.865 250273 DEBUG nova.compute.manager [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 06:09:19 np0005593232 nova_compute[250269]: 2026-01-23 11:09:19.979 250273 DEBUG nova.policy [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1768560f7fe74b5284d84f78d3da759b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6dfc80710a6b4e6385f114bbc1c309b2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 06:09:19 np0005593232 nova_compute[250269]: 2026-01-23 11:09:19.990 250273 DEBUG nova.compute.manager [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 06:09:19 np0005593232 nova_compute[250269]: 2026-01-23 11:09:19.991 250273 DEBUG nova.virt.libvirt.driver [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 06:09:19 np0005593232 nova_compute[250269]: 2026-01-23 11:09:19.991 250273 INFO nova.virt.libvirt.driver [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Creating image(s)#033[00m
Jan 23 06:09:20 np0005593232 nova_compute[250269]: 2026-01-23 11:09:20.018 250273 DEBUG nova.storage.rbd_utils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] rbd image d734c36e-0a78-4f75-b951-dd9e07b4ab08_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 06:09:20 np0005593232 nova_compute[250269]: 2026-01-23 11:09:20.047 250273 DEBUG nova.storage.rbd_utils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] rbd image d734c36e-0a78-4f75-b951-dd9e07b4ab08_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 06:09:20 np0005593232 nova_compute[250269]: 2026-01-23 11:09:20.072 250273 DEBUG nova.storage.rbd_utils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] rbd image d734c36e-0a78-4f75-b951-dd9e07b4ab08_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 06:09:20 np0005593232 nova_compute[250269]: 2026-01-23 11:09:20.076 250273 DEBUG oslo_concurrency.processutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:09:20 np0005593232 nova_compute[250269]: 2026-01-23 11:09:20.163 250273 DEBUG oslo_concurrency.processutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:09:20 np0005593232 nova_compute[250269]: 2026-01-23 11:09:20.164 250273 DEBUG oslo_concurrency.lockutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:09:20 np0005593232 nova_compute[250269]: 2026-01-23 11:09:20.165 250273 DEBUG oslo_concurrency.lockutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:09:20 np0005593232 nova_compute[250269]: 2026-01-23 11:09:20.165 250273 DEBUG oslo_concurrency.lockutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:09:20 np0005593232 nova_compute[250269]: 2026-01-23 11:09:20.194 250273 DEBUG nova.storage.rbd_utils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] rbd image d734c36e-0a78-4f75-b951-dd9e07b4ab08_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 06:09:20 np0005593232 nova_compute[250269]: 2026-01-23 11:09:20.198 250273 DEBUG oslo_concurrency.processutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 d734c36e-0a78-4f75-b951-dd9e07b4ab08_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:09:20 np0005593232 nova_compute[250269]: 2026-01-23 11:09:20.267 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:21 np0005593232 nova_compute[250269]: 2026-01-23 11:09:21.060 250273 DEBUG oslo_concurrency.processutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 d734c36e-0a78-4f75-b951-dd9e07b4ab08_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.863s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:09:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:09:21 np0005593232 nova_compute[250269]: 2026-01-23 11:09:21.133 250273 DEBUG nova.storage.rbd_utils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] resizing rbd image d734c36e-0a78-4f75-b951-dd9e07b4ab08_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 06:09:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:09:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:21.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:09:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4309: 321 pgs: 321 active+clean; 135 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.0 KiB/s rd, 542 KiB/s wr, 7 op/s
Jan 23 06:09:21 np0005593232 nova_compute[250269]: 2026-01-23 11:09:21.609 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:21 np0005593232 nova_compute[250269]: 2026-01-23 11:09:21.730 250273 DEBUG nova.network.neutron [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Successfully created port: c0891f44-e154-47cd-89b7-a6be56d5a196 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 06:09:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:09:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:21.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:09:22 np0005593232 nova_compute[250269]: 2026-01-23 11:09:22.410 250273 DEBUG nova.objects.instance [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lazy-loading 'migration_context' on Instance uuid d734c36e-0a78-4f75-b951-dd9e07b4ab08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 06:09:22 np0005593232 nova_compute[250269]: 2026-01-23 11:09:22.425 250273 DEBUG nova.virt.libvirt.driver [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 06:09:22 np0005593232 nova_compute[250269]: 2026-01-23 11:09:22.426 250273 DEBUG nova.virt.libvirt.driver [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Ensure instance console log exists: /var/lib/nova/instances/d734c36e-0a78-4f75-b951-dd9e07b4ab08/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 06:09:22 np0005593232 nova_compute[250269]: 2026-01-23 11:09:22.427 250273 DEBUG oslo_concurrency.lockutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:09:22 np0005593232 nova_compute[250269]: 2026-01-23 11:09:22.427 250273 DEBUG oslo_concurrency.lockutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:09:22 np0005593232 nova_compute[250269]: 2026-01-23 11:09:22.427 250273 DEBUG oslo_concurrency.lockutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:09:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:09:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:23.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:09:23 np0005593232 nova_compute[250269]: 2026-01-23 11:09:23.324 250273 DEBUG nova.network.neutron [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Successfully updated port: c0891f44-e154-47cd-89b7-a6be56d5a196 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 06:09:23 np0005593232 nova_compute[250269]: 2026-01-23 11:09:23.340 250273 DEBUG oslo_concurrency.lockutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Acquiring lock "refresh_cache-d734c36e-0a78-4f75-b951-dd9e07b4ab08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 06:09:23 np0005593232 nova_compute[250269]: 2026-01-23 11:09:23.340 250273 DEBUG oslo_concurrency.lockutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Acquired lock "refresh_cache-d734c36e-0a78-4f75-b951-dd9e07b4ab08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 06:09:23 np0005593232 nova_compute[250269]: 2026-01-23 11:09:23.341 250273 DEBUG nova.network.neutron [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 06:09:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4310: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 1.8 MiB/s wr, 22 op/s
Jan 23 06:09:23 np0005593232 nova_compute[250269]: 2026-01-23 11:09:23.440 250273 DEBUG nova.compute.manager [req-7caa4548-e3d7-4148-b095-e7fa3647d207 req-eefffbd6-56f7-47c0-b0e0-f02289a6b763 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received event network-changed-c0891f44-e154-47cd-89b7-a6be56d5a196 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:09:23 np0005593232 nova_compute[250269]: 2026-01-23 11:09:23.440 250273 DEBUG nova.compute.manager [req-7caa4548-e3d7-4148-b095-e7fa3647d207 req-eefffbd6-56f7-47c0-b0e0-f02289a6b763 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Refreshing instance network info cache due to event network-changed-c0891f44-e154-47cd-89b7-a6be56d5a196. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 06:09:23 np0005593232 nova_compute[250269]: 2026-01-23 11:09:23.440 250273 DEBUG oslo_concurrency.lockutils [req-7caa4548-e3d7-4148-b095-e7fa3647d207 req-eefffbd6-56f7-47c0-b0e0-f02289a6b763 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-d734c36e-0a78-4f75-b951-dd9e07b4ab08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 06:09:23 np0005593232 nova_compute[250269]: 2026-01-23 11:09:23.534 250273 DEBUG nova.network.neutron [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 06:09:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:09:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:23.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:09:24 np0005593232 nova_compute[250269]: 2026-01-23 11:09:24.866 250273 DEBUG nova.network.neutron [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Updating instance_info_cache with network_info: [{"id": "c0891f44-e154-47cd-89b7-a6be56d5a196", "address": "fa:16:3e:19:1d:da", "network": {"id": "8e9827ce-730f-43ce-9ce9-b2a89ad36a71", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1191115642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6dfc80710a6b4e6385f114bbc1c309b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0891f44-e1", "ovs_interfaceid": "c0891f44-e154-47cd-89b7-a6be56d5a196", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 06:09:24 np0005593232 nova_compute[250269]: 2026-01-23 11:09:24.898 250273 DEBUG oslo_concurrency.lockutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Releasing lock "refresh_cache-d734c36e-0a78-4f75-b951-dd9e07b4ab08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 06:09:24 np0005593232 nova_compute[250269]: 2026-01-23 11:09:24.898 250273 DEBUG nova.compute.manager [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Instance network_info: |[{"id": "c0891f44-e154-47cd-89b7-a6be56d5a196", "address": "fa:16:3e:19:1d:da", "network": {"id": "8e9827ce-730f-43ce-9ce9-b2a89ad36a71", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1191115642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6dfc80710a6b4e6385f114bbc1c309b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0891f44-e1", "ovs_interfaceid": "c0891f44-e154-47cd-89b7-a6be56d5a196", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 06:09:24 np0005593232 nova_compute[250269]: 2026-01-23 11:09:24.899 250273 DEBUG oslo_concurrency.lockutils [req-7caa4548-e3d7-4148-b095-e7fa3647d207 req-eefffbd6-56f7-47c0-b0e0-f02289a6b763 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-d734c36e-0a78-4f75-b951-dd9e07b4ab08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 06:09:24 np0005593232 nova_compute[250269]: 2026-01-23 11:09:24.899 250273 DEBUG nova.network.neutron [req-7caa4548-e3d7-4148-b095-e7fa3647d207 req-eefffbd6-56f7-47c0-b0e0-f02289a6b763 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Refreshing network info cache for port c0891f44-e154-47cd-89b7-a6be56d5a196 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 06:09:24 np0005593232 nova_compute[250269]: 2026-01-23 11:09:24.902 250273 DEBUG nova.virt.libvirt.driver [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Start _get_guest_xml network_info=[{"id": "c0891f44-e154-47cd-89b7-a6be56d5a196", "address": "fa:16:3e:19:1d:da", "network": {"id": "8e9827ce-730f-43ce-9ce9-b2a89ad36a71", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1191115642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6dfc80710a6b4e6385f114bbc1c309b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0891f44-e1", "ovs_interfaceid": "c0891f44-e154-47cd-89b7-a6be56d5a196", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 06:09:24 np0005593232 nova_compute[250269]: 2026-01-23 11:09:24.907 250273 WARNING nova.virt.libvirt.driver [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 06:09:24 np0005593232 nova_compute[250269]: 2026-01-23 11:09:24.913 250273 DEBUG nova.virt.libvirt.host [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 06:09:24 np0005593232 nova_compute[250269]: 2026-01-23 11:09:24.914 250273 DEBUG nova.virt.libvirt.host [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 06:09:24 np0005593232 nova_compute[250269]: 2026-01-23 11:09:24.919 250273 DEBUG nova.virt.libvirt.host [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 06:09:24 np0005593232 nova_compute[250269]: 2026-01-23 11:09:24.920 250273 DEBUG nova.virt.libvirt.host [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 06:09:24 np0005593232 nova_compute[250269]: 2026-01-23 11:09:24.923 250273 DEBUG nova.virt.libvirt.driver [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 06:09:24 np0005593232 nova_compute[250269]: 2026-01-23 11:09:24.923 250273 DEBUG nova.virt.hardware [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 06:09:24 np0005593232 nova_compute[250269]: 2026-01-23 11:09:24.924 250273 DEBUG nova.virt.hardware [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 06:09:24 np0005593232 nova_compute[250269]: 2026-01-23 11:09:24.925 250273 DEBUG nova.virt.hardware [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 06:09:24 np0005593232 nova_compute[250269]: 2026-01-23 11:09:24.925 250273 DEBUG nova.virt.hardware [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 06:09:24 np0005593232 nova_compute[250269]: 2026-01-23 11:09:24.926 250273 DEBUG nova.virt.hardware [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 06:09:24 np0005593232 nova_compute[250269]: 2026-01-23 11:09:24.926 250273 DEBUG nova.virt.hardware [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 06:09:24 np0005593232 nova_compute[250269]: 2026-01-23 11:09:24.927 250273 DEBUG nova.virt.hardware [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 06:09:24 np0005593232 nova_compute[250269]: 2026-01-23 11:09:24.927 250273 DEBUG nova.virt.hardware [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 06:09:24 np0005593232 nova_compute[250269]: 2026-01-23 11:09:24.928 250273 DEBUG nova.virt.hardware [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 06:09:24 np0005593232 nova_compute[250269]: 2026-01-23 11:09:24.929 250273 DEBUG nova.virt.hardware [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 06:09:24 np0005593232 nova_compute[250269]: 2026-01-23 11:09:24.929 250273 DEBUG nova.virt.hardware [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 06:09:24 np0005593232 nova_compute[250269]: 2026-01-23 11:09:24.933 250273 DEBUG oslo_concurrency.processutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:09:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:25.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.267 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 06:09:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2744907874' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.389 250273 DEBUG oslo_concurrency.processutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:09:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4311: 321 pgs: 321 active+clean; 167 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 1.8 MiB/s wr, 22 op/s
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.414 250273 DEBUG nova.storage.rbd_utils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] rbd image d734c36e-0a78-4f75-b951-dd9e07b4ab08_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.422 250273 DEBUG oslo_concurrency.processutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:09:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:25.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 06:09:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/242389056' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.876 250273 DEBUG oslo_concurrency.processutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.878 250273 DEBUG nova.virt.libvirt.vif [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T11:09:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1156873023',display_name='tempest-TestServerAdvancedOps-server-1156873023',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1156873023',id=219,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6dfc80710a6b4e6385f114bbc1c309b2',ramdisk_id='',reservation_id='r-36ty6xe8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-823818588',owner_user_name='tempest-TestServerAdvancedOps-823818588-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T11:09:19Z,user_data=None,user_id='1768560f7fe74b5284d84f78d3da759b',uuid=d734c36e-0a78-4f75-b951-dd9e07b4ab08,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c0891f44-e154-47cd-89b7-a6be56d5a196", "address": "fa:16:3e:19:1d:da", "network": {"id": "8e9827ce-730f-43ce-9ce9-b2a89ad36a71", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1191115642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6dfc80710a6b4e6385f114bbc1c309b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0891f44-e1", "ovs_interfaceid": "c0891f44-e154-47cd-89b7-a6be56d5a196", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.879 250273 DEBUG nova.network.os_vif_util [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Converting VIF {"id": "c0891f44-e154-47cd-89b7-a6be56d5a196", "address": "fa:16:3e:19:1d:da", "network": {"id": "8e9827ce-730f-43ce-9ce9-b2a89ad36a71", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1191115642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6dfc80710a6b4e6385f114bbc1c309b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0891f44-e1", "ovs_interfaceid": "c0891f44-e154-47cd-89b7-a6be56d5a196", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.880 250273 DEBUG nova.network.os_vif_util [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:1d:da,bridge_name='br-int',has_traffic_filtering=True,id=c0891f44-e154-47cd-89b7-a6be56d5a196,network=Network(8e9827ce-730f-43ce-9ce9-b2a89ad36a71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0891f44-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.881 250273 DEBUG nova.objects.instance [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lazy-loading 'pci_devices' on Instance uuid d734c36e-0a78-4f75-b951-dd9e07b4ab08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.897 250273 DEBUG nova.virt.libvirt.driver [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] End _get_guest_xml xml=<domain type="kvm">
Jan 23 06:09:25 np0005593232 nova_compute[250269]:  <uuid>d734c36e-0a78-4f75-b951-dd9e07b4ab08</uuid>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:  <name>instance-000000db</name>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <nova:name>tempest-TestServerAdvancedOps-server-1156873023</nova:name>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 11:09:24</nova:creationTime>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 06:09:25 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:        <nova:user uuid="1768560f7fe74b5284d84f78d3da759b">tempest-TestServerAdvancedOps-823818588-project-member</nova:user>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:        <nova:project uuid="6dfc80710a6b4e6385f114bbc1c309b2">tempest-TestServerAdvancedOps-823818588</nova:project>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:        <nova:port uuid="c0891f44-e154-47cd-89b7-a6be56d5a196">
Jan 23 06:09:25 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <system>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <entry name="serial">d734c36e-0a78-4f75-b951-dd9e07b4ab08</entry>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <entry name="uuid">d734c36e-0a78-4f75-b951-dd9e07b4ab08</entry>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    </system>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:  <os>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:  </os>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:  <features>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:  </features>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:  </clock>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:  <devices>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/d734c36e-0a78-4f75-b951-dd9e07b4ab08_disk">
Jan 23 06:09:25 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      </source>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 06:09:25 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      </auth>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    </disk>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/d734c36e-0a78-4f75-b951-dd9e07b4ab08_disk.config">
Jan 23 06:09:25 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      </source>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 06:09:25 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      </auth>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    </disk>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:19:1d:da"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <target dev="tapc0891f44-e1"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    </interface>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/d734c36e-0a78-4f75-b951-dd9e07b4ab08/console.log" append="off"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    </serial>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <video>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    </video>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    </rng>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 06:09:25 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 06:09:25 np0005593232 nova_compute[250269]:  </devices>
Jan 23 06:09:25 np0005593232 nova_compute[250269]: </domain>
Jan 23 06:09:25 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.899 250273 DEBUG nova.compute.manager [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Preparing to wait for external event network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.899 250273 DEBUG oslo_concurrency.lockutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Acquiring lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.899 250273 DEBUG oslo_concurrency.lockutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.899 250273 DEBUG oslo_concurrency.lockutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.900 250273 DEBUG nova.virt.libvirt.vif [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T11:09:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1156873023',display_name='tempest-TestServerAdvancedOps-server-1156873023',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1156873023',id=219,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6dfc80710a6b4e6385f114bbc1c309b2',ramdisk_id='',reservation_id='r-36ty6xe8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-823818588',owner_user_name='tempest-TestServerAdvancedOps-823818588-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T11:09:19Z,user_data=None,user_id='1768560f7fe74b5284d84f78d3da759b',uuid=d734c36e-0a78-4f75-b951-dd9e07b4ab08,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c0891f44-e154-47cd-89b7-a6be56d5a196", "address": "fa:16:3e:19:1d:da", "network": {"id": "8e9827ce-730f-43ce-9ce9-b2a89ad36a71", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1191115642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6dfc80710a6b4e6385f114bbc1c309b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0891f44-e1", "ovs_interfaceid": "c0891f44-e154-47cd-89b7-a6be56d5a196", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.900 250273 DEBUG nova.network.os_vif_util [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Converting VIF {"id": "c0891f44-e154-47cd-89b7-a6be56d5a196", "address": "fa:16:3e:19:1d:da", "network": {"id": "8e9827ce-730f-43ce-9ce9-b2a89ad36a71", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1191115642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6dfc80710a6b4e6385f114bbc1c309b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0891f44-e1", "ovs_interfaceid": "c0891f44-e154-47cd-89b7-a6be56d5a196", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.901 250273 DEBUG nova.network.os_vif_util [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:1d:da,bridge_name='br-int',has_traffic_filtering=True,id=c0891f44-e154-47cd-89b7-a6be56d5a196,network=Network(8e9827ce-730f-43ce-9ce9-b2a89ad36a71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0891f44-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.901 250273 DEBUG os_vif [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:1d:da,bridge_name='br-int',has_traffic_filtering=True,id=c0891f44-e154-47cd-89b7-a6be56d5a196,network=Network(8e9827ce-730f-43ce-9ce9-b2a89ad36a71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0891f44-e1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.902 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.903 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.903 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.907 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.907 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc0891f44-e1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.908 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc0891f44-e1, col_values=(('external_ids', {'iface-id': 'c0891f44-e154-47cd-89b7-a6be56d5a196', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:1d:da', 'vm-uuid': 'd734c36e-0a78-4f75-b951-dd9e07b4ab08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.909 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:25 np0005593232 NetworkManager[49057]: <info>  [1769166565.9108] manager: (tapc0891f44-e1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/391)
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.912 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.916 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:25 np0005593232 nova_compute[250269]: 2026-01-23 11:09:25.917 250273 INFO os_vif [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:1d:da,bridge_name='br-int',has_traffic_filtering=True,id=c0891f44-e154-47cd-89b7-a6be56d5a196,network=Network(8e9827ce-730f-43ce-9ce9-b2a89ad36a71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0891f44-e1')#033[00m
Jan 23 06:09:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:09:26 np0005593232 podman[420697]: 2026-01-23 11:09:26.513946682 +0000 UTC m=+0.155117866 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 06:09:26 np0005593232 nova_compute[250269]: 2026-01-23 11:09:26.830 250273 DEBUG nova.virt.libvirt.driver [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 06:09:26 np0005593232 nova_compute[250269]: 2026-01-23 11:09:26.830 250273 DEBUG nova.virt.libvirt.driver [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 06:09:26 np0005593232 nova_compute[250269]: 2026-01-23 11:09:26.830 250273 DEBUG nova.virt.libvirt.driver [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] No VIF found with MAC fa:16:3e:19:1d:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 06:09:26 np0005593232 nova_compute[250269]: 2026-01-23 11:09:26.831 250273 INFO nova.virt.libvirt.driver [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Using config drive#033[00m
Jan 23 06:09:26 np0005593232 nova_compute[250269]: 2026-01-23 11:09:26.920 250273 DEBUG nova.storage.rbd_utils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] rbd image d734c36e-0a78-4f75-b951-dd9e07b4ab08_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 06:09:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:27.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4312: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 06:09:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:27.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:27 np0005593232 nova_compute[250269]: 2026-01-23 11:09:27.824 250273 INFO nova.virt.libvirt.driver [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Creating config drive at /var/lib/nova/instances/d734c36e-0a78-4f75-b951-dd9e07b4ab08/disk.config#033[00m
Jan 23 06:09:27 np0005593232 nova_compute[250269]: 2026-01-23 11:09:27.835 250273 DEBUG oslo_concurrency.processutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d734c36e-0a78-4f75-b951-dd9e07b4ab08/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi302gn1j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:09:27 np0005593232 nova_compute[250269]: 2026-01-23 11:09:27.987 250273 DEBUG oslo_concurrency.processutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d734c36e-0a78-4f75-b951-dd9e07b4ab08/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi302gn1j" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:09:28 np0005593232 nova_compute[250269]: 2026-01-23 11:09:28.035 250273 DEBUG nova.storage.rbd_utils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] rbd image d734c36e-0a78-4f75-b951-dd9e07b4ab08_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 06:09:28 np0005593232 nova_compute[250269]: 2026-01-23 11:09:28.041 250273 DEBUG oslo_concurrency.processutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d734c36e-0a78-4f75-b951-dd9e07b4ab08/disk.config d734c36e-0a78-4f75-b951-dd9e07b4ab08_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:09:28 np0005593232 nova_compute[250269]: 2026-01-23 11:09:28.081 250273 DEBUG nova.network.neutron [req-7caa4548-e3d7-4148-b095-e7fa3647d207 req-eefffbd6-56f7-47c0-b0e0-f02289a6b763 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Updated VIF entry in instance network info cache for port c0891f44-e154-47cd-89b7-a6be56d5a196. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 06:09:28 np0005593232 nova_compute[250269]: 2026-01-23 11:09:28.082 250273 DEBUG nova.network.neutron [req-7caa4548-e3d7-4148-b095-e7fa3647d207 req-eefffbd6-56f7-47c0-b0e0-f02289a6b763 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Updating instance_info_cache with network_info: [{"id": "c0891f44-e154-47cd-89b7-a6be56d5a196", "address": "fa:16:3e:19:1d:da", "network": {"id": "8e9827ce-730f-43ce-9ce9-b2a89ad36a71", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1191115642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6dfc80710a6b4e6385f114bbc1c309b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0891f44-e1", "ovs_interfaceid": "c0891f44-e154-47cd-89b7-a6be56d5a196", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 06:09:28 np0005593232 nova_compute[250269]: 2026-01-23 11:09:28.106 250273 DEBUG oslo_concurrency.lockutils [req-7caa4548-e3d7-4148-b095-e7fa3647d207 req-eefffbd6-56f7-47c0-b0e0-f02289a6b763 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-d734c36e-0a78-4f75-b951-dd9e07b4ab08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 06:09:28 np0005593232 podman[420781]: 2026-01-23 11:09:28.385700476 +0000 UTC m=+0.047301805 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 23 06:09:28 np0005593232 nova_compute[250269]: 2026-01-23 11:09:28.406 250273 DEBUG oslo_concurrency.processutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d734c36e-0a78-4f75-b951-dd9e07b4ab08/disk.config d734c36e-0a78-4f75-b951-dd9e07b4ab08_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.365s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:09:28 np0005593232 nova_compute[250269]: 2026-01-23 11:09:28.407 250273 INFO nova.virt.libvirt.driver [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Deleting local config drive /var/lib/nova/instances/d734c36e-0a78-4f75-b951-dd9e07b4ab08/disk.config because it was imported into RBD.#033[00m
Jan 23 06:09:28 np0005593232 kernel: tapc0891f44-e1: entered promiscuous mode
Jan 23 06:09:28 np0005593232 NetworkManager[49057]: <info>  [1769166568.4710] manager: (tapc0891f44-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/392)
Jan 23 06:09:28 np0005593232 nova_compute[250269]: 2026-01-23 11:09:28.471 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:28 np0005593232 ovn_controller[151001]: 2026-01-23T11:09:28Z|00833|binding|INFO|Claiming lport c0891f44-e154-47cd-89b7-a6be56d5a196 for this chassis.
Jan 23 06:09:28 np0005593232 ovn_controller[151001]: 2026-01-23T11:09:28Z|00834|binding|INFO|c0891f44-e154-47cd-89b7-a6be56d5a196: Claiming fa:16:3e:19:1d:da 10.100.0.12
Jan 23 06:09:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:28.481 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:1d:da 10.100.0.12'], port_security=['fa:16:3e:19:1d:da 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd734c36e-0a78-4f75-b951-dd9e07b4ab08', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e9827ce-730f-43ce-9ce9-b2a89ad36a71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6dfc80710a6b4e6385f114bbc1c309b2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b2d4b004-7fb3-4ff8-88b3-fb7e97152533', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=932eac00-f6df-48e9-9c95-76ada1168e79, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=c0891f44-e154-47cd-89b7-a6be56d5a196) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 06:09:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:28.483 161902 INFO neutron.agent.ovn.metadata.agent [-] Port c0891f44-e154-47cd-89b7-a6be56d5a196 in datapath 8e9827ce-730f-43ce-9ce9-b2a89ad36a71 bound to our chassis#033[00m
Jan 23 06:09:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:28.483 161902 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8e9827ce-730f-43ce-9ce9-b2a89ad36a71 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 23 06:09:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:28.485 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4679bd87-5e68-4376-ad6c-b2051931dec8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:09:28 np0005593232 systemd-udevd[420815]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 06:09:28 np0005593232 systemd-machined[215836]: New machine qemu-94-instance-000000db.
Jan 23 06:09:28 np0005593232 nova_compute[250269]: 2026-01-23 11:09:28.515 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:28 np0005593232 NetworkManager[49057]: <info>  [1769166568.5174] device (tapc0891f44-e1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 06:09:28 np0005593232 NetworkManager[49057]: <info>  [1769166568.5182] device (tapc0891f44-e1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 06:09:28 np0005593232 ovn_controller[151001]: 2026-01-23T11:09:28Z|00835|binding|INFO|Setting lport c0891f44-e154-47cd-89b7-a6be56d5a196 ovn-installed in OVS
Jan 23 06:09:28 np0005593232 ovn_controller[151001]: 2026-01-23T11:09:28Z|00836|binding|INFO|Setting lport c0891f44-e154-47cd-89b7-a6be56d5a196 up in Southbound
Jan 23 06:09:28 np0005593232 nova_compute[250269]: 2026-01-23 11:09:28.520 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:28 np0005593232 systemd[1]: Started Virtual Machine qemu-94-instance-000000db.
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.023 250273 DEBUG nova.compute.manager [req-a4981cfa-a344-4be0-809d-9f5120f91d36 req-117e299f-377b-47bb-8979-875aef4bd52f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received event network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.024 250273 DEBUG oslo_concurrency.lockutils [req-a4981cfa-a344-4be0-809d-9f5120f91d36 req-117e299f-377b-47bb-8979-875aef4bd52f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.025 250273 DEBUG oslo_concurrency.lockutils [req-a4981cfa-a344-4be0-809d-9f5120f91d36 req-117e299f-377b-47bb-8979-875aef4bd52f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.025 250273 DEBUG oslo_concurrency.lockutils [req-a4981cfa-a344-4be0-809d-9f5120f91d36 req-117e299f-377b-47bb-8979-875aef4bd52f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.025 250273 DEBUG nova.compute.manager [req-a4981cfa-a344-4be0-809d-9f5120f91d36 req-117e299f-377b-47bb-8979-875aef4bd52f 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Processing event network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.197 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769166569.1965542, d734c36e-0a78-4f75-b951-dd9e07b4ab08 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.198 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] VM Started (Lifecycle Event)#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.200 250273 DEBUG nova.compute.manager [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 06:09:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:29.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.204 250273 DEBUG nova.virt.libvirt.driver [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.209 250273 INFO nova.virt.libvirt.driver [-] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Instance spawned successfully.#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.209 250273 DEBUG nova.virt.libvirt.driver [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.258 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.263 250273 DEBUG nova.virt.libvirt.driver [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.264 250273 DEBUG nova.virt.libvirt.driver [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.264 250273 DEBUG nova.virt.libvirt.driver [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.265 250273 DEBUG nova.virt.libvirt.driver [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.265 250273 DEBUG nova.virt.libvirt.driver [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.266 250273 DEBUG nova.virt.libvirt.driver [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.270 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.298 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.298 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769166569.1967025, d734c36e-0a78-4f75-b951-dd9e07b4ab08 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.299 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] VM Paused (Lifecycle Event)#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.322 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.326 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769166569.2037103, d734c36e-0a78-4f75-b951-dd9e07b4ab08 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.326 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] VM Resumed (Lifecycle Event)#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.332 250273 INFO nova.compute.manager [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Took 9.34 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.332 250273 DEBUG nova.compute.manager [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.341 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.344 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.370 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.395 250273 INFO nova.compute.manager [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Took 10.71 seconds to build instance.#033[00m
Jan 23 06:09:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4313: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 06:09:29 np0005593232 nova_compute[250269]: 2026-01-23 11:09:29.415 250273 DEBUG oslo_concurrency.lockutils [None req-b32d80d0-7417-42dd-8122-89050aa98a2e 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:09:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:29.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:30 np0005593232 nova_compute[250269]: 2026-01-23 11:09:30.299 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:30 np0005593232 nova_compute[250269]: 2026-01-23 11:09:30.910 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:31 np0005593232 nova_compute[250269]: 2026-01-23 11:09:31.114 250273 DEBUG nova.compute.manager [req-868c4c18-d8c0-42b4-96fe-7a75e0873002 req-76ce5167-e7de-4920-ae4a-d506af84eb59 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received event network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:09:31 np0005593232 nova_compute[250269]: 2026-01-23 11:09:31.115 250273 DEBUG oslo_concurrency.lockutils [req-868c4c18-d8c0-42b4-96fe-7a75e0873002 req-76ce5167-e7de-4920-ae4a-d506af84eb59 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:09:31 np0005593232 nova_compute[250269]: 2026-01-23 11:09:31.115 250273 DEBUG oslo_concurrency.lockutils [req-868c4c18-d8c0-42b4-96fe-7a75e0873002 req-76ce5167-e7de-4920-ae4a-d506af84eb59 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:09:31 np0005593232 nova_compute[250269]: 2026-01-23 11:09:31.116 250273 DEBUG oslo_concurrency.lockutils [req-868c4c18-d8c0-42b4-96fe-7a75e0873002 req-76ce5167-e7de-4920-ae4a-d506af84eb59 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:09:31 np0005593232 nova_compute[250269]: 2026-01-23 11:09:31.116 250273 DEBUG nova.compute.manager [req-868c4c18-d8c0-42b4-96fe-7a75e0873002 req-76ce5167-e7de-4920-ae4a-d506af84eb59 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] No waiting events found dispatching network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 06:09:31 np0005593232 nova_compute[250269]: 2026-01-23 11:09:31.116 250273 WARNING nova.compute.manager [req-868c4c18-d8c0-42b4-96fe-7a75e0873002 req-76ce5167-e7de-4920-ae4a-d506af84eb59 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received unexpected event network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 for instance with vm_state active and task_state None.#033[00m
Jan 23 06:09:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:09:31 np0005593232 nova_compute[250269]: 2026-01-23 11:09:31.160 250273 DEBUG nova.objects.instance [None req-6b97158e-0c23-46b6-9fe5-db8d87d6098d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lazy-loading 'pci_devices' on Instance uuid d734c36e-0a78-4f75-b951-dd9e07b4ab08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 06:09:31 np0005593232 nova_compute[250269]: 2026-01-23 11:09:31.185 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769166571.184896, d734c36e-0a78-4f75-b951-dd9e07b4ab08 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 06:09:31 np0005593232 nova_compute[250269]: 2026-01-23 11:09:31.185 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] VM Paused (Lifecycle Event)#033[00m
Jan 23 06:09:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:31.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:31 np0005593232 nova_compute[250269]: 2026-01-23 11:09:31.211 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:09:31 np0005593232 nova_compute[250269]: 2026-01-23 11:09:31.215 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 06:09:31 np0005593232 nova_compute[250269]: 2026-01-23 11:09:31.235 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Jan 23 06:09:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4314: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 714 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Jan 23 06:09:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:09:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:31.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:09:32 np0005593232 kernel: tapc0891f44-e1 (unregistering): left promiscuous mode
Jan 23 06:09:32 np0005593232 NetworkManager[49057]: <info>  [1769166572.1587] device (tapc0891f44-e1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 06:09:32 np0005593232 ovn_controller[151001]: 2026-01-23T11:09:32Z|00837|binding|INFO|Releasing lport c0891f44-e154-47cd-89b7-a6be56d5a196 from this chassis (sb_readonly=0)
Jan 23 06:09:32 np0005593232 ovn_controller[151001]: 2026-01-23T11:09:32Z|00838|binding|INFO|Setting lport c0891f44-e154-47cd-89b7-a6be56d5a196 down in Southbound
Jan 23 06:09:32 np0005593232 ovn_controller[151001]: 2026-01-23T11:09:32Z|00839|binding|INFO|Removing iface tapc0891f44-e1 ovn-installed in OVS
Jan 23 06:09:32 np0005593232 nova_compute[250269]: 2026-01-23 11:09:32.165 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:32 np0005593232 nova_compute[250269]: 2026-01-23 11:09:32.185 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:32.191 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:1d:da 10.100.0.12'], port_security=['fa:16:3e:19:1d:da 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd734c36e-0a78-4f75-b951-dd9e07b4ab08', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e9827ce-730f-43ce-9ce9-b2a89ad36a71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6dfc80710a6b4e6385f114bbc1c309b2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b2d4b004-7fb3-4ff8-88b3-fb7e97152533', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=932eac00-f6df-48e9-9c95-76ada1168e79, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=c0891f44-e154-47cd-89b7-a6be56d5a196) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 06:09:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:32.192 161902 INFO neutron.agent.ovn.metadata.agent [-] Port c0891f44-e154-47cd-89b7-a6be56d5a196 in datapath 8e9827ce-730f-43ce-9ce9-b2a89ad36a71 unbound from our chassis#033[00m
Jan 23 06:09:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:32.193 161902 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8e9827ce-730f-43ce-9ce9-b2a89ad36a71 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 23 06:09:32 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:32.193 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[50e43d0d-e972-4958-b683-50f9b5fa5c01]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:09:32 np0005593232 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000db.scope: Deactivated successfully.
Jan 23 06:09:32 np0005593232 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d000000db.scope: Consumed 2.929s CPU time.
Jan 23 06:09:32 np0005593232 systemd-machined[215836]: Machine qemu-94-instance-000000db terminated.
Jan 23 06:09:32 np0005593232 nova_compute[250269]: 2026-01-23 11:09:32.324 250273 DEBUG nova.compute.manager [None req-6b97158e-0c23-46b6-9fe5-db8d87d6098d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:09:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:33.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:33 np0005593232 nova_compute[250269]: 2026-01-23 11:09:33.225 250273 DEBUG nova.compute.manager [req-ba31465f-9257-40fb-a02c-ff4e9cf91d36 req-9944eb99-5cde-4446-a857-df059d7b6de3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received event network-vif-unplugged-c0891f44-e154-47cd-89b7-a6be56d5a196 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:09:33 np0005593232 nova_compute[250269]: 2026-01-23 11:09:33.225 250273 DEBUG oslo_concurrency.lockutils [req-ba31465f-9257-40fb-a02c-ff4e9cf91d36 req-9944eb99-5cde-4446-a857-df059d7b6de3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:09:33 np0005593232 nova_compute[250269]: 2026-01-23 11:09:33.226 250273 DEBUG oslo_concurrency.lockutils [req-ba31465f-9257-40fb-a02c-ff4e9cf91d36 req-9944eb99-5cde-4446-a857-df059d7b6de3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:09:33 np0005593232 nova_compute[250269]: 2026-01-23 11:09:33.226 250273 DEBUG oslo_concurrency.lockutils [req-ba31465f-9257-40fb-a02c-ff4e9cf91d36 req-9944eb99-5cde-4446-a857-df059d7b6de3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:09:33 np0005593232 nova_compute[250269]: 2026-01-23 11:09:33.226 250273 DEBUG nova.compute.manager [req-ba31465f-9257-40fb-a02c-ff4e9cf91d36 req-9944eb99-5cde-4446-a857-df059d7b6de3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] No waiting events found dispatching network-vif-unplugged-c0891f44-e154-47cd-89b7-a6be56d5a196 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 06:09:33 np0005593232 nova_compute[250269]: 2026-01-23 11:09:33.227 250273 WARNING nova.compute.manager [req-ba31465f-9257-40fb-a02c-ff4e9cf91d36 req-9944eb99-5cde-4446-a857-df059d7b6de3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received unexpected event network-vif-unplugged-c0891f44-e154-47cd-89b7-a6be56d5a196 for instance with vm_state suspended and task_state resuming.#033[00m
Jan 23 06:09:33 np0005593232 nova_compute[250269]: 2026-01-23 11:09:33.227 250273 DEBUG nova.compute.manager [req-ba31465f-9257-40fb-a02c-ff4e9cf91d36 req-9944eb99-5cde-4446-a857-df059d7b6de3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received event network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:09:33 np0005593232 nova_compute[250269]: 2026-01-23 11:09:33.227 250273 DEBUG oslo_concurrency.lockutils [req-ba31465f-9257-40fb-a02c-ff4e9cf91d36 req-9944eb99-5cde-4446-a857-df059d7b6de3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:09:33 np0005593232 nova_compute[250269]: 2026-01-23 11:09:33.227 250273 DEBUG oslo_concurrency.lockutils [req-ba31465f-9257-40fb-a02c-ff4e9cf91d36 req-9944eb99-5cde-4446-a857-df059d7b6de3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:09:33 np0005593232 nova_compute[250269]: 2026-01-23 11:09:33.227 250273 DEBUG oslo_concurrency.lockutils [req-ba31465f-9257-40fb-a02c-ff4e9cf91d36 req-9944eb99-5cde-4446-a857-df059d7b6de3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:09:33 np0005593232 nova_compute[250269]: 2026-01-23 11:09:33.228 250273 DEBUG nova.compute.manager [req-ba31465f-9257-40fb-a02c-ff4e9cf91d36 req-9944eb99-5cde-4446-a857-df059d7b6de3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] No waiting events found dispatching network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 06:09:33 np0005593232 nova_compute[250269]: 2026-01-23 11:09:33.228 250273 WARNING nova.compute.manager [req-ba31465f-9257-40fb-a02c-ff4e9cf91d36 req-9944eb99-5cde-4446-a857-df059d7b6de3 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received unexpected event network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 for instance with vm_state suspended and task_state resuming.#033[00m
Jan 23 06:09:33 np0005593232 nova_compute[250269]: 2026-01-23 11:09:33.249 250273 INFO nova.compute.manager [None req-9c886ec7-c820-476e-b320-8a32a3b026d4 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Resuming#033[00m
Jan 23 06:09:33 np0005593232 nova_compute[250269]: 2026-01-23 11:09:33.250 250273 DEBUG nova.objects.instance [None req-9c886ec7-c820-476e-b320-8a32a3b026d4 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lazy-loading 'flavor' on Instance uuid d734c36e-0a78-4f75-b951-dd9e07b4ab08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 06:09:33 np0005593232 nova_compute[250269]: 2026-01-23 11:09:33.283 250273 DEBUG oslo_concurrency.lockutils [None req-9c886ec7-c820-476e-b320-8a32a3b026d4 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Acquiring lock "refresh_cache-d734c36e-0a78-4f75-b951-dd9e07b4ab08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 06:09:33 np0005593232 nova_compute[250269]: 2026-01-23 11:09:33.283 250273 DEBUG oslo_concurrency.lockutils [None req-9c886ec7-c820-476e-b320-8a32a3b026d4 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Acquired lock "refresh_cache-d734c36e-0a78-4f75-b951-dd9e07b4ab08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 06:09:33 np0005593232 nova_compute[250269]: 2026-01-23 11:09:33.283 250273 DEBUG nova.network.neutron [None req-9c886ec7-c820-476e-b320-8a32a3b026d4 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 06:09:33 np0005593232 nova_compute[250269]: 2026-01-23 11:09:33.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:09:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4315: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 92 op/s
Jan 23 06:09:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:09:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:33.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:09:34 np0005593232 nova_compute[250269]: 2026-01-23 11:09:34.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:09:34 np0005593232 nova_compute[250269]: 2026-01-23 11:09:34.782 250273 DEBUG nova.network.neutron [None req-9c886ec7-c820-476e-b320-8a32a3b026d4 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Updating instance_info_cache with network_info: [{"id": "c0891f44-e154-47cd-89b7-a6be56d5a196", "address": "fa:16:3e:19:1d:da", "network": {"id": "8e9827ce-730f-43ce-9ce9-b2a89ad36a71", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1191115642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6dfc80710a6b4e6385f114bbc1c309b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0891f44-e1", "ovs_interfaceid": "c0891f44-e154-47cd-89b7-a6be56d5a196", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 06:09:34 np0005593232 nova_compute[250269]: 2026-01-23 11:09:34.804 250273 DEBUG oslo_concurrency.lockutils [None req-9c886ec7-c820-476e-b320-8a32a3b026d4 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Releasing lock "refresh_cache-d734c36e-0a78-4f75-b951-dd9e07b4ab08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 06:09:34 np0005593232 nova_compute[250269]: 2026-01-23 11:09:34.810 250273 DEBUG nova.virt.libvirt.vif [None req-9c886ec7-c820-476e-b320-8a32a3b026d4 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T11:09:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1156873023',display_name='tempest-TestServerAdvancedOps-server-1156873023',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1156873023',id=219,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T11:09:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6dfc80710a6b4e6385f114bbc1c309b2',ramdisk_id='',reservation_id='r-36ty6xe8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-823818588',owner_user_name='tempest-TestServerAdvancedOps-823818588-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T11:09:32Z,user_data=None,user_id='1768560f7fe74b5284d84f78d3da759b',uuid=d734c36e-0a78-4f75-b951-dd9e07b4ab08,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "c0891f44-e154-47cd-89b7-a6be56d5a196", "address": "fa:16:3e:19:1d:da", "network": {"id": "8e9827ce-730f-43ce-9ce9-b2a89ad36a71", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1191115642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6dfc80710a6b4e6385f114bbc1c309b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0891f44-e1", "ovs_interfaceid": "c0891f44-e154-47cd-89b7-a6be56d5a196", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 06:09:34 np0005593232 nova_compute[250269]: 2026-01-23 11:09:34.810 250273 DEBUG nova.network.os_vif_util [None req-9c886ec7-c820-476e-b320-8a32a3b026d4 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Converting VIF {"id": "c0891f44-e154-47cd-89b7-a6be56d5a196", "address": "fa:16:3e:19:1d:da", "network": {"id": "8e9827ce-730f-43ce-9ce9-b2a89ad36a71", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1191115642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6dfc80710a6b4e6385f114bbc1c309b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0891f44-e1", "ovs_interfaceid": "c0891f44-e154-47cd-89b7-a6be56d5a196", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 06:09:34 np0005593232 nova_compute[250269]: 2026-01-23 11:09:34.811 250273 DEBUG nova.network.os_vif_util [None req-9c886ec7-c820-476e-b320-8a32a3b026d4 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:1d:da,bridge_name='br-int',has_traffic_filtering=True,id=c0891f44-e154-47cd-89b7-a6be56d5a196,network=Network(8e9827ce-730f-43ce-9ce9-b2a89ad36a71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0891f44-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 06:09:34 np0005593232 nova_compute[250269]: 2026-01-23 11:09:34.811 250273 DEBUG os_vif [None req-9c886ec7-c820-476e-b320-8a32a3b026d4 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:1d:da,bridge_name='br-int',has_traffic_filtering=True,id=c0891f44-e154-47cd-89b7-a6be56d5a196,network=Network(8e9827ce-730f-43ce-9ce9-b2a89ad36a71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0891f44-e1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 06:09:34 np0005593232 nova_compute[250269]: 2026-01-23 11:09:34.812 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:34 np0005593232 nova_compute[250269]: 2026-01-23 11:09:34.812 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:09:34 np0005593232 nova_compute[250269]: 2026-01-23 11:09:34.813 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 06:09:34 np0005593232 nova_compute[250269]: 2026-01-23 11:09:34.816 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:34 np0005593232 nova_compute[250269]: 2026-01-23 11:09:34.816 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc0891f44-e1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:09:34 np0005593232 nova_compute[250269]: 2026-01-23 11:09:34.816 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc0891f44-e1, col_values=(('external_ids', {'iface-id': 'c0891f44-e154-47cd-89b7-a6be56d5a196', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:1d:da', 'vm-uuid': 'd734c36e-0a78-4f75-b951-dd9e07b4ab08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:09:34 np0005593232 nova_compute[250269]: 2026-01-23 11:09:34.816 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 06:09:34 np0005593232 nova_compute[250269]: 2026-01-23 11:09:34.816 250273 INFO os_vif [None req-9c886ec7-c820-476e-b320-8a32a3b026d4 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:1d:da,bridge_name='br-int',has_traffic_filtering=True,id=c0891f44-e154-47cd-89b7-a6be56d5a196,network=Network(8e9827ce-730f-43ce-9ce9-b2a89ad36a71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0891f44-e1')#033[00m
Jan 23 06:09:34 np0005593232 nova_compute[250269]: 2026-01-23 11:09:34.968 250273 DEBUG nova.objects.instance [None req-9c886ec7-c820-476e-b320-8a32a3b026d4 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lazy-loading 'numa_topology' on Instance uuid d734c36e-0a78-4f75-b951-dd9e07b4ab08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 06:09:35 np0005593232 kernel: tapc0891f44-e1: entered promiscuous mode
Jan 23 06:09:35 np0005593232 systemd-udevd[420874]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 06:09:35 np0005593232 ovn_controller[151001]: 2026-01-23T11:09:35Z|00840|binding|INFO|Claiming lport c0891f44-e154-47cd-89b7-a6be56d5a196 for this chassis.
Jan 23 06:09:35 np0005593232 ovn_controller[151001]: 2026-01-23T11:09:35Z|00841|binding|INFO|c0891f44-e154-47cd-89b7-a6be56d5a196: Claiming fa:16:3e:19:1d:da 10.100.0.12
Jan 23 06:09:35 np0005593232 NetworkManager[49057]: <info>  [1769166575.0590] manager: (tapc0891f44-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/393)
Jan 23 06:09:35 np0005593232 nova_compute[250269]: 2026-01-23 11:09:35.059 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:35.064 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:1d:da 10.100.0.12'], port_security=['fa:16:3e:19:1d:da 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd734c36e-0a78-4f75-b951-dd9e07b4ab08', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e9827ce-730f-43ce-9ce9-b2a89ad36a71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6dfc80710a6b4e6385f114bbc1c309b2', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'b2d4b004-7fb3-4ff8-88b3-fb7e97152533', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=932eac00-f6df-48e9-9c95-76ada1168e79, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=c0891f44-e154-47cd-89b7-a6be56d5a196) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 06:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:35.066 161902 INFO neutron.agent.ovn.metadata.agent [-] Port c0891f44-e154-47cd-89b7-a6be56d5a196 in datapath 8e9827ce-730f-43ce-9ce9-b2a89ad36a71 bound to our chassis#033[00m
Jan 23 06:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:35.066 161902 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8e9827ce-730f-43ce-9ce9-b2a89ad36a71 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 23 06:09:35 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:35.067 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ad38d5-e2c7-4341-af5b-873e81273389]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:09:35 np0005593232 ovn_controller[151001]: 2026-01-23T11:09:35Z|00842|binding|INFO|Setting lport c0891f44-e154-47cd-89b7-a6be56d5a196 up in Southbound
Jan 23 06:09:35 np0005593232 NetworkManager[49057]: <info>  [1769166575.0735] device (tapc0891f44-e1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 06:09:35 np0005593232 NetworkManager[49057]: <info>  [1769166575.0742] device (tapc0891f44-e1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 06:09:35 np0005593232 ovn_controller[151001]: 2026-01-23T11:09:35Z|00843|binding|INFO|Setting lport c0891f44-e154-47cd-89b7-a6be56d5a196 ovn-installed in OVS
Jan 23 06:09:35 np0005593232 nova_compute[250269]: 2026-01-23 11:09:35.114 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:35 np0005593232 nova_compute[250269]: 2026-01-23 11:09:35.117 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:35 np0005593232 systemd-machined[215836]: New machine qemu-95-instance-000000db.
Jan 23 06:09:35 np0005593232 systemd[1]: Started Virtual Machine qemu-95-instance-000000db.
Jan 23 06:09:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:35.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:35 np0005593232 nova_compute[250269]: 2026-01-23 11:09:35.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:09:35 np0005593232 nova_compute[250269]: 2026-01-23 11:09:35.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:09:35 np0005593232 nova_compute[250269]: 2026-01-23 11:09:35.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:09:35 np0005593232 nova_compute[250269]: 2026-01-23 11:09:35.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 06:09:35 np0005593232 nova_compute[250269]: 2026-01-23 11:09:35.301 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4316: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 78 op/s
Jan 23 06:09:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:35.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:35 np0005593232 nova_compute[250269]: 2026-01-23 11:09:35.912 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:36 np0005593232 nova_compute[250269]: 2026-01-23 11:09:36.006 250273 DEBUG nova.compute.manager [req-6aec83dd-d6bd-494c-b725-b33f05b087fd req-7d9081c9-bb2c-457b-b01c-ace79d6c9976 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received event network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:09:36 np0005593232 nova_compute[250269]: 2026-01-23 11:09:36.008 250273 DEBUG oslo_concurrency.lockutils [req-6aec83dd-d6bd-494c-b725-b33f05b087fd req-7d9081c9-bb2c-457b-b01c-ace79d6c9976 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:09:36 np0005593232 nova_compute[250269]: 2026-01-23 11:09:36.008 250273 DEBUG oslo_concurrency.lockutils [req-6aec83dd-d6bd-494c-b725-b33f05b087fd req-7d9081c9-bb2c-457b-b01c-ace79d6c9976 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:09:36 np0005593232 nova_compute[250269]: 2026-01-23 11:09:36.009 250273 DEBUG oslo_concurrency.lockutils [req-6aec83dd-d6bd-494c-b725-b33f05b087fd req-7d9081c9-bb2c-457b-b01c-ace79d6c9976 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:09:36 np0005593232 nova_compute[250269]: 2026-01-23 11:09:36.009 250273 DEBUG nova.compute.manager [req-6aec83dd-d6bd-494c-b725-b33f05b087fd req-7d9081c9-bb2c-457b-b01c-ace79d6c9976 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] No waiting events found dispatching network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 06:09:36 np0005593232 nova_compute[250269]: 2026-01-23 11:09:36.009 250273 WARNING nova.compute.manager [req-6aec83dd-d6bd-494c-b725-b33f05b087fd req-7d9081c9-bb2c-457b-b01c-ace79d6c9976 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received unexpected event network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 for instance with vm_state suspended and task_state resuming.#033[00m
Jan 23 06:09:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:09:36 np0005593232 nova_compute[250269]: 2026-01-23 11:09:36.231 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Removed pending event for d734c36e-0a78-4f75-b951-dd9e07b4ab08 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 23 06:09:36 np0005593232 nova_compute[250269]: 2026-01-23 11:09:36.232 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769166576.231187, d734c36e-0a78-4f75-b951-dd9e07b4ab08 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 06:09:36 np0005593232 nova_compute[250269]: 2026-01-23 11:09:36.232 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] VM Started (Lifecycle Event)#033[00m
Jan 23 06:09:36 np0005593232 nova_compute[250269]: 2026-01-23 11:09:36.258 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:09:36 np0005593232 nova_compute[250269]: 2026-01-23 11:09:36.267 250273 DEBUG nova.compute.manager [None req-9c886ec7-c820-476e-b320-8a32a3b026d4 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 06:09:36 np0005593232 nova_compute[250269]: 2026-01-23 11:09:36.268 250273 DEBUG nova.objects.instance [None req-9c886ec7-c820-476e-b320-8a32a3b026d4 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lazy-loading 'pci_devices' on Instance uuid d734c36e-0a78-4f75-b951-dd9e07b4ab08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 06:09:36 np0005593232 nova_compute[250269]: 2026-01-23 11:09:36.278 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 06:09:36 np0005593232 nova_compute[250269]: 2026-01-23 11:09:36.286 250273 INFO nova.virt.libvirt.driver [-] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Instance running successfully.#033[00m
Jan 23 06:09:36 np0005593232 virtqemud[249592]: argument unsupported: QEMU guest agent is not configured
Jan 23 06:09:36 np0005593232 nova_compute[250269]: 2026-01-23 11:09:36.293 250273 DEBUG nova.virt.libvirt.guest [None req-9c886ec7-c820-476e-b320-8a32a3b026d4 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 23 06:09:36 np0005593232 nova_compute[250269]: 2026-01-23 11:09:36.294 250273 DEBUG nova.compute.manager [None req-9c886ec7-c820-476e-b320-8a32a3b026d4 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:09:36 np0005593232 nova_compute[250269]: 2026-01-23 11:09:36.296 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Jan 23 06:09:36 np0005593232 nova_compute[250269]: 2026-01-23 11:09:36.297 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769166576.235433, d734c36e-0a78-4f75-b951-dd9e07b4ab08 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 06:09:36 np0005593232 nova_compute[250269]: 2026-01-23 11:09:36.297 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] VM Resumed (Lifecycle Event)#033[00m
Jan 23 06:09:36 np0005593232 nova_compute[250269]: 2026-01-23 11:09:36.321 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:09:36 np0005593232 nova_compute[250269]: 2026-01-23 11:09:36.326 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 06:09:36 np0005593232 nova_compute[250269]: 2026-01-23 11:09:36.351 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Jan 23 06:09:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 23 06:09:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 06:09:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:09:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:09:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 06:09:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:09:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 06:09:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:09:36 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 5fd7cbf8-3787-4758-9625-1d96dbb8bd49 does not exist
Jan 23 06:09:36 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 52aaed56-540c-43da-8988-72b89defc296 does not exist
Jan 23 06:09:36 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 5af78847-4ad2-4a93-982f-431d63bdf1c9 does not exist
Jan 23 06:09:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 06:09:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 06:09:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 06:09:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:09:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:09:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:09:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:09:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:37.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:09:37 np0005593232 nova_compute[250269]: 2026-01-23 11:09:37.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:09:37 np0005593232 nova_compute[250269]: 2026-01-23 11:09:37.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 06:09:37 np0005593232 nova_compute[250269]: 2026-01-23 11:09:37.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 06:09:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 06:09:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:09:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:09:37 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:09:37 np0005593232 podman[421225]: 2026-01-23 11:09:37.37229466 +0000 UTC m=+0.046565584 container create 20df8068530d5f2f362c9535c43dd8ff0b160c444a6aa0e0f555c309b530a622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_germain, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 06:09:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4317: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 82 op/s
Jan 23 06:09:37 np0005593232 systemd[1]: Started libpod-conmon-20df8068530d5f2f362c9535c43dd8ff0b160c444a6aa0e0f555c309b530a622.scope.
Jan 23 06:09:37 np0005593232 podman[421225]: 2026-01-23 11:09:37.347150327 +0000 UTC m=+0.021421271 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:09:37 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:09:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_11:09:37
Jan 23 06:09:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 06:09:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 06:09:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'default.rgw.log', 'backups', 'volumes', '.mgr', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images']
Jan 23 06:09:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 06:09:37 np0005593232 podman[421225]: 2026-01-23 11:09:37.540251048 +0000 UTC m=+0.214521992 container init 20df8068530d5f2f362c9535c43dd8ff0b160c444a6aa0e0f555c309b530a622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_germain, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:09:37 np0005593232 podman[421225]: 2026-01-23 11:09:37.549124212 +0000 UTC m=+0.223395126 container start 20df8068530d5f2f362c9535c43dd8ff0b160c444a6aa0e0f555c309b530a622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_germain, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 23 06:09:37 np0005593232 podman[421225]: 2026-01-23 11:09:37.55302079 +0000 UTC m=+0.227291704 container attach 20df8068530d5f2f362c9535c43dd8ff0b160c444a6aa0e0f555c309b530a622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_germain, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:09:37 np0005593232 gracious_germain[421241]: 167 167
Jan 23 06:09:37 np0005593232 systemd[1]: libpod-20df8068530d5f2f362c9535c43dd8ff0b160c444a6aa0e0f555c309b530a622.scope: Deactivated successfully.
Jan 23 06:09:37 np0005593232 podman[421225]: 2026-01-23 11:09:37.555341874 +0000 UTC m=+0.229612788 container died 20df8068530d5f2f362c9535c43dd8ff0b160c444a6aa0e0f555c309b530a622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_germain, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 06:09:37 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d9419443034a313c4e0a89356dd1ef594e1cc8ddac2898b847e06e89b8e31e3d-merged.mount: Deactivated successfully.
Jan 23 06:09:37 np0005593232 podman[421225]: 2026-01-23 11:09:37.638439844 +0000 UTC m=+0.312710758 container remove 20df8068530d5f2f362c9535c43dd8ff0b160c444a6aa0e0f555c309b530a622 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_germain, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 23 06:09:37 np0005593232 systemd[1]: libpod-conmon-20df8068530d5f2f362c9535c43dd8ff0b160c444a6aa0e0f555c309b530a622.scope: Deactivated successfully.
Jan 23 06:09:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:09:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:09:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:09:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:09:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:09:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:09:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:37.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:37 np0005593232 nova_compute[250269]: 2026-01-23 11:09:37.824 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-d734c36e-0a78-4f75-b951-dd9e07b4ab08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 06:09:37 np0005593232 nova_compute[250269]: 2026-01-23 11:09:37.825 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-d734c36e-0a78-4f75-b951-dd9e07b4ab08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 06:09:37 np0005593232 nova_compute[250269]: 2026-01-23 11:09:37.826 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 06:09:37 np0005593232 nova_compute[250269]: 2026-01-23 11:09:37.826 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d734c36e-0a78-4f75-b951-dd9e07b4ab08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 06:09:37 np0005593232 podman[421264]: 2026-01-23 11:09:37.853199583 +0000 UTC m=+0.086162766 container create 7a3e5e6453816d5c1622f6aca947758ba15c2f1c436d3a32b55703f68bfd5eb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:09:37 np0005593232 podman[421264]: 2026-01-23 11:09:37.792198822 +0000 UTC m=+0.025162025 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:09:37 np0005593232 systemd[1]: Started libpod-conmon-7a3e5e6453816d5c1622f6aca947758ba15c2f1c436d3a32b55703f68bfd5eb8.scope.
Jan 23 06:09:37 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:09:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ccb845c505e4e8001ba4432b518cfeedacc0ac6f7978d5c1fb778b8d0204461/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:09:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ccb845c505e4e8001ba4432b518cfeedacc0ac6f7978d5c1fb778b8d0204461/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:09:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ccb845c505e4e8001ba4432b518cfeedacc0ac6f7978d5c1fb778b8d0204461/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:09:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ccb845c505e4e8001ba4432b518cfeedacc0ac6f7978d5c1fb778b8d0204461/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:09:37 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ccb845c505e4e8001ba4432b518cfeedacc0ac6f7978d5c1fb778b8d0204461/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 06:09:37 np0005593232 podman[421264]: 2026-01-23 11:09:37.99387984 +0000 UTC m=+0.226843043 container init 7a3e5e6453816d5c1622f6aca947758ba15c2f1c436d3a32b55703f68bfd5eb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mcclintock, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Jan 23 06:09:38 np0005593232 podman[421264]: 2026-01-23 11:09:38.001839859 +0000 UTC m=+0.234803042 container start 7a3e5e6453816d5c1622f6aca947758ba15c2f1c436d3a32b55703f68bfd5eb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mcclintock, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:09:38 np0005593232 nova_compute[250269]: 2026-01-23 11:09:38.100 250273 DEBUG nova.compute.manager [req-2a090dc3-487f-45e4-8a94-8829341b6908 req-3891f62e-2dd0-421d-bfda-c8e9a6e9f089 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received event network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:09:38 np0005593232 nova_compute[250269]: 2026-01-23 11:09:38.100 250273 DEBUG oslo_concurrency.lockutils [req-2a090dc3-487f-45e4-8a94-8829341b6908 req-3891f62e-2dd0-421d-bfda-c8e9a6e9f089 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 06:09:38 np0005593232 nova_compute[250269]: 2026-01-23 11:09:38.100 250273 DEBUG oslo_concurrency.lockutils [req-2a090dc3-487f-45e4-8a94-8829341b6908 req-3891f62e-2dd0-421d-bfda-c8e9a6e9f089 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 06:09:38 np0005593232 nova_compute[250269]: 2026-01-23 11:09:38.100 250273 DEBUG oslo_concurrency.lockutils [req-2a090dc3-487f-45e4-8a94-8829341b6908 req-3891f62e-2dd0-421d-bfda-c8e9a6e9f089 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 06:09:38 np0005593232 nova_compute[250269]: 2026-01-23 11:09:38.101 250273 DEBUG nova.compute.manager [req-2a090dc3-487f-45e4-8a94-8829341b6908 req-3891f62e-2dd0-421d-bfda-c8e9a6e9f089 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] No waiting events found dispatching network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 06:09:38 np0005593232 nova_compute[250269]: 2026-01-23 11:09:38.101 250273 WARNING nova.compute.manager [req-2a090dc3-487f-45e4-8a94-8829341b6908 req-3891f62e-2dd0-421d-bfda-c8e9a6e9f089 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received unexpected event network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 for instance with vm_state active and task_state None.
Jan 23 06:09:38 np0005593232 podman[421264]: 2026-01-23 11:09:38.2308301 +0000 UTC m=+0.463793303 container attach 7a3e5e6453816d5c1622f6aca947758ba15c2f1c436d3a32b55703f68bfd5eb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mcclintock, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 06:09:38 np0005593232 nova_compute[250269]: 2026-01-23 11:09:38.497 250273 DEBUG nova.objects.instance [None req-4d5d24ea-3a32-492c-bd21-575e15af0ddf 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lazy-loading 'pci_devices' on Instance uuid d734c36e-0a78-4f75-b951-dd9e07b4ab08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 06:09:38 np0005593232 nova_compute[250269]: 2026-01-23 11:09:38.528 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769166578.5280163, d734c36e-0a78-4f75-b951-dd9e07b4ab08 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 23 06:09:38 np0005593232 nova_compute[250269]: 2026-01-23 11:09:38.529 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] VM Paused (Lifecycle Event)
Jan 23 06:09:38 np0005593232 nova_compute[250269]: 2026-01-23 11:09:38.555 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 06:09:38 np0005593232 nova_compute[250269]: 2026-01-23 11:09:38.573 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 23 06:09:38 np0005593232 nova_compute[250269]: 2026-01-23 11:09:38.593 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 23 06:09:38 np0005593232 pensive_mcclintock[421280]: --> passed data devices: 0 physical, 1 LVM
Jan 23 06:09:38 np0005593232 pensive_mcclintock[421280]: --> relative data size: 1.0
Jan 23 06:09:38 np0005593232 pensive_mcclintock[421280]: --> All data devices are unavailable
Jan 23 06:09:38 np0005593232 systemd[1]: libpod-7a3e5e6453816d5c1622f6aca947758ba15c2f1c436d3a32b55703f68bfd5eb8.scope: Deactivated successfully.
Jan 23 06:09:38 np0005593232 podman[421264]: 2026-01-23 11:09:38.840665227 +0000 UTC m=+1.073628420 container died 7a3e5e6453816d5c1622f6aca947758ba15c2f1c436d3a32b55703f68bfd5eb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mcclintock, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:09:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 06:09:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:09:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 06:09:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:09:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:09:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:09:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:09:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:09:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:09:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:09:39 np0005593232 nova_compute[250269]: 2026-01-23 11:09:39.199 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Updating instance_info_cache with network_info: [{"id": "c0891f44-e154-47cd-89b7-a6be56d5a196", "address": "fa:16:3e:19:1d:da", "network": {"id": "8e9827ce-730f-43ce-9ce9-b2a89ad36a71", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1191115642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6dfc80710a6b4e6385f114bbc1c309b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0891f44-e1", "ovs_interfaceid": "c0891f44-e154-47cd-89b7-a6be56d5a196", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 23 06:09:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:39.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:39 np0005593232 nova_compute[250269]: 2026-01-23 11:09:39.222 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-d734c36e-0a78-4f75-b951-dd9e07b4ab08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 23 06:09:39 np0005593232 nova_compute[250269]: 2026-01-23 11:09:39.222 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 23 06:09:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4318: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 78 op/s
Jan 23 06:09:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:39.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:40 np0005593232 nova_compute[250269]: 2026-01-23 11:09:40.303 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:09:40 np0005593232 nova_compute[250269]: 2026-01-23 11:09:40.914 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:09:41 np0005593232 systemd[1]: var-lib-containers-storage-overlay-3ccb845c505e4e8001ba4432b518cfeedacc0ac6f7978d5c1fb778b8d0204461-merged.mount: Deactivated successfully.
Jan 23 06:09:41 np0005593232 kernel: tapc0891f44-e1 (unregistering): left promiscuous mode
Jan 23 06:09:41 np0005593232 NetworkManager[49057]: <info>  [1769166581.1007] device (tapc0891f44-e1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 06:09:41 np0005593232 ovn_controller[151001]: 2026-01-23T11:09:41Z|00844|binding|INFO|Releasing lport c0891f44-e154-47cd-89b7-a6be56d5a196 from this chassis (sb_readonly=0)
Jan 23 06:09:41 np0005593232 nova_compute[250269]: 2026-01-23 11:09:41.108 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:09:41 np0005593232 ovn_controller[151001]: 2026-01-23T11:09:41Z|00845|binding|INFO|Setting lport c0891f44-e154-47cd-89b7-a6be56d5a196 down in Southbound
Jan 23 06:09:41 np0005593232 ovn_controller[151001]: 2026-01-23T11:09:41Z|00846|binding|INFO|Removing iface tapc0891f44-e1 ovn-installed in OVS
Jan 23 06:09:41 np0005593232 nova_compute[250269]: 2026-01-23 11:09:41.112 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:09:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:41.117 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:1d:da 10.100.0.12'], port_security=['fa:16:3e:19:1d:da 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd734c36e-0a78-4f75-b951-dd9e07b4ab08', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e9827ce-730f-43ce-9ce9-b2a89ad36a71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6dfc80710a6b4e6385f114bbc1c309b2', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b2d4b004-7fb3-4ff8-88b3-fb7e97152533', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=932eac00-f6df-48e9-9c95-76ada1168e79, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=c0891f44-e154-47cd-89b7-a6be56d5a196) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 23 06:09:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:41.119 161902 INFO neutron.agent.ovn.metadata.agent [-] Port c0891f44-e154-47cd-89b7-a6be56d5a196 in datapath 8e9827ce-730f-43ce-9ce9-b2a89ad36a71 unbound from our chassis
Jan 23 06:09:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:41.119 161902 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8e9827ce-730f-43ce-9ce9-b2a89ad36a71 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 23 06:09:41 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:41.120 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e751d0a0-e97f-48e5-91ee-44c88356e9db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 23 06:09:41 np0005593232 nova_compute[250269]: 2026-01-23 11:09:41.129 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:09:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:09:41 np0005593232 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000db.scope: Deactivated successfully.
Jan 23 06:09:41 np0005593232 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d000000db.scope: Consumed 3.262s CPU time.
Jan 23 06:09:41 np0005593232 systemd-machined[215836]: Machine qemu-95-instance-000000db terminated.
Jan 23 06:09:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:41.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:41 np0005593232 nova_compute[250269]: 2026-01-23 11:09:41.282 250273 DEBUG nova.compute.manager [None req-4d5d24ea-3a32-492c-bd21-575e15af0ddf 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 23 06:09:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4319: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 78 op/s
Jan 23 06:09:41 np0005593232 podman[421264]: 2026-01-23 11:09:41.765781382 +0000 UTC m=+3.998744565 container remove 7a3e5e6453816d5c1622f6aca947758ba15c2f1c436d3a32b55703f68bfd5eb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:09:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:09:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:41.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:09:41 np0005593232 systemd[1]: libpod-conmon-7a3e5e6453816d5c1622f6aca947758ba15c2f1c436d3a32b55703f68bfd5eb8.scope: Deactivated successfully.
Jan 23 06:09:41 np0005593232 nova_compute[250269]: 2026-01-23 11:09:41.904 250273 DEBUG nova.compute.manager [req-bd50fefe-4bcc-41b4-8bd1-3ebbeb27f47f req-45378f97-c673-4fb6-9e05-cb33a12c7d19 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received event network-vif-unplugged-c0891f44-e154-47cd-89b7-a6be56d5a196 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 06:09:41 np0005593232 nova_compute[250269]: 2026-01-23 11:09:41.905 250273 DEBUG oslo_concurrency.lockutils [req-bd50fefe-4bcc-41b4-8bd1-3ebbeb27f47f req-45378f97-c673-4fb6-9e05-cb33a12c7d19 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 06:09:41 np0005593232 nova_compute[250269]: 2026-01-23 11:09:41.905 250273 DEBUG oslo_concurrency.lockutils [req-bd50fefe-4bcc-41b4-8bd1-3ebbeb27f47f req-45378f97-c673-4fb6-9e05-cb33a12c7d19 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 06:09:41 np0005593232 nova_compute[250269]: 2026-01-23 11:09:41.905 250273 DEBUG oslo_concurrency.lockutils [req-bd50fefe-4bcc-41b4-8bd1-3ebbeb27f47f req-45378f97-c673-4fb6-9e05-cb33a12c7d19 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 06:09:41 np0005593232 nova_compute[250269]: 2026-01-23 11:09:41.906 250273 DEBUG nova.compute.manager [req-bd50fefe-4bcc-41b4-8bd1-3ebbeb27f47f req-45378f97-c673-4fb6-9e05-cb33a12c7d19 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] No waiting events found dispatching network-vif-unplugged-c0891f44-e154-47cd-89b7-a6be56d5a196 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 06:09:41 np0005593232 nova_compute[250269]: 2026-01-23 11:09:41.906 250273 WARNING nova.compute.manager [req-bd50fefe-4bcc-41b4-8bd1-3ebbeb27f47f req-45378f97-c673-4fb6-9e05-cb33a12c7d19 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received unexpected event network-vif-unplugged-c0891f44-e154-47cd-89b7-a6be56d5a196 for instance with vm_state suspended and task_state None.
Jan 23 06:09:42 np0005593232 podman[421522]: 2026-01-23 11:09:42.456055615 +0000 UTC m=+0.029707740 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:09:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:42.692 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 06:09:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:42.693 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 06:09:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:42.694 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 06:09:42 np0005593232 podman[421522]: 2026-01-23 11:09:42.957256318 +0000 UTC m=+0.530908383 container create e2821f14baaf8cc386d53d71ac12d431434f432b926a46e0c0db395785b49c05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:09:43 np0005593232 systemd[1]: Started libpod-conmon-e2821f14baaf8cc386d53d71ac12d431434f432b926a46e0c0db395785b49c05.scope.
Jan 23 06:09:43 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:09:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:43.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:43 np0005593232 nova_compute[250269]: 2026-01-23 11:09:43.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 06:09:43 np0005593232 nova_compute[250269]: 2026-01-23 11:09:43.351 250273 INFO nova.compute.manager [None req-86e9cc77-fd19-4191-8e68-e057d80351fe 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Resuming
Jan 23 06:09:43 np0005593232 nova_compute[250269]: 2026-01-23 11:09:43.352 250273 DEBUG nova.objects.instance [None req-86e9cc77-fd19-4191-8e68-e057d80351fe 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lazy-loading 'flavor' on Instance uuid d734c36e-0a78-4f75-b951-dd9e07b4ab08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 23 06:09:43 np0005593232 podman[421522]: 2026-01-23 11:09:43.383461164 +0000 UTC m=+0.957113259 container init e2821f14baaf8cc386d53d71ac12d431434f432b926a46e0c0db395785b49c05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_carver, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 06:09:43 np0005593232 podman[421522]: 2026-01-23 11:09:43.394383165 +0000 UTC m=+0.968035200 container start e2821f14baaf8cc386d53d71ac12d431434f432b926a46e0c0db395785b49c05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_carver, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 06:09:43 np0005593232 mystifying_carver[421540]: 167 167
Jan 23 06:09:43 np0005593232 systemd[1]: libpod-e2821f14baaf8cc386d53d71ac12d431434f432b926a46e0c0db395785b49c05.scope: Deactivated successfully.
Jan 23 06:09:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4320: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 511 B/s wr, 52 op/s
Jan 23 06:09:43 np0005593232 nova_compute[250269]: 2026-01-23 11:09:43.413 250273 DEBUG oslo_concurrency.lockutils [None req-86e9cc77-fd19-4191-8e68-e057d80351fe 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Acquiring lock "refresh_cache-d734c36e-0a78-4f75-b951-dd9e07b4ab08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 23 06:09:43 np0005593232 nova_compute[250269]: 2026-01-23 11:09:43.413 250273 DEBUG oslo_concurrency.lockutils [None req-86e9cc77-fd19-4191-8e68-e057d80351fe 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Acquired lock "refresh_cache-d734c36e-0a78-4f75-b951-dd9e07b4ab08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 23 06:09:43 np0005593232 nova_compute[250269]: 2026-01-23 11:09:43.413 250273 DEBUG nova.network.neutron [None req-86e9cc77-fd19-4191-8e68-e057d80351fe 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 23 06:09:43 np0005593232 podman[421522]: 2026-01-23 11:09:43.430221003 +0000 UTC m=+1.003873068 container attach e2821f14baaf8cc386d53d71ac12d431434f432b926a46e0c0db395785b49c05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_carver, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:09:43 np0005593232 podman[421522]: 2026-01-23 11:09:43.431286562 +0000 UTC m=+1.004938597 container died e2821f14baaf8cc386d53d71ac12d431434f432b926a46e0c0db395785b49c05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_carver, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:09:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:09:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:43.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:09:43 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c34a6d34f514548dc91e5fe64c7c1a670a075a43c18225f58837c3777919a791-merged.mount: Deactivated successfully.
Jan 23 06:09:43 np0005593232 podman[421522]: 2026-01-23 11:09:43.88839934 +0000 UTC m=+1.462051365 container remove e2821f14baaf8cc386d53d71ac12d431434f432b926a46e0c0db395785b49c05 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:09:43 np0005593232 systemd[1]: libpod-conmon-e2821f14baaf8cc386d53d71ac12d431434f432b926a46e0c0db395785b49c05.scope: Deactivated successfully.
Jan 23 06:09:44 np0005593232 nova_compute[250269]: 2026-01-23 11:09:44.012 250273 DEBUG nova.compute.manager [req-8f53451a-29f9-4f00-97ae-a3eca6646d64 req-e20dff50-d990-4f98-a33b-9de2bec5e72a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received event network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 23 06:09:44 np0005593232 nova_compute[250269]: 2026-01-23 11:09:44.014 250273 DEBUG oslo_concurrency.lockutils [req-8f53451a-29f9-4f00-97ae-a3eca6646d64 req-e20dff50-d990-4f98-a33b-9de2bec5e72a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 06:09:44 np0005593232 nova_compute[250269]: 2026-01-23 11:09:44.014 250273 DEBUG oslo_concurrency.lockutils [req-8f53451a-29f9-4f00-97ae-a3eca6646d64 req-e20dff50-d990-4f98-a33b-9de2bec5e72a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 06:09:44 np0005593232 nova_compute[250269]: 2026-01-23 11:09:44.015 250273 DEBUG oslo_concurrency.lockutils [req-8f53451a-29f9-4f00-97ae-a3eca6646d64 req-e20dff50-d990-4f98-a33b-9de2bec5e72a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 06:09:44 np0005593232 nova_compute[250269]: 2026-01-23 11:09:44.015 250273 DEBUG nova.compute.manager [req-8f53451a-29f9-4f00-97ae-a3eca6646d64 req-e20dff50-d990-4f98-a33b-9de2bec5e72a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] No waiting events found dispatching network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 23 06:09:44 np0005593232 nova_compute[250269]: 2026-01-23 11:09:44.015 250273 WARNING nova.compute.manager [req-8f53451a-29f9-4f00-97ae-a3eca6646d64 req-e20dff50-d990-4f98-a33b-9de2bec5e72a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received unexpected event network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 for instance with vm_state suspended and task_state resuming.
Jan 23 06:09:44 np0005593232 podman[421566]: 2026-01-23 11:09:44.119736926 +0000 UTC m=+0.119705090 container create ceffe28976faaddf9f095a1d1086ef5c7dc396e0caa12716e24a748a0e1a3f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lewin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:09:44 np0005593232 podman[421566]: 2026-01-23 11:09:44.029015915 +0000 UTC m=+0.028984139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:09:44 np0005593232 systemd[1]: Started libpod-conmon-ceffe28976faaddf9f095a1d1086ef5c7dc396e0caa12716e24a748a0e1a3f8a.scope.
Jan 23 06:09:44 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:09:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a41dfd1356f54c040e9d1fdddb64ccd3de087baddc7b73b9fd403b294ca4bab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:09:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a41dfd1356f54c040e9d1fdddb64ccd3de087baddc7b73b9fd403b294ca4bab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:09:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a41dfd1356f54c040e9d1fdddb64ccd3de087baddc7b73b9fd403b294ca4bab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:09:44 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a41dfd1356f54c040e9d1fdddb64ccd3de087baddc7b73b9fd403b294ca4bab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:09:44 np0005593232 podman[421566]: 2026-01-23 11:09:44.259783875 +0000 UTC m=+0.259752049 container init ceffe28976faaddf9f095a1d1086ef5c7dc396e0caa12716e24a748a0e1a3f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lewin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 23 06:09:44 np0005593232 podman[421566]: 2026-01-23 11:09:44.268678931 +0000 UTC m=+0.268647095 container start ceffe28976faaddf9f095a1d1086ef5c7dc396e0caa12716e24a748a0e1a3f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:09:44 np0005593232 podman[421566]: 2026-01-23 11:09:44.274129491 +0000 UTC m=+0.274097655 container attach ceffe28976faaddf9f095a1d1086ef5c7dc396e0caa12716e24a748a0e1a3f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:09:44 np0005593232 nova_compute[250269]: 2026-01-23 11:09:44.990 250273 DEBUG nova.network.neutron [None req-86e9cc77-fd19-4191-8e68-e057d80351fe 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Updating instance_info_cache with network_info: [{"id": "c0891f44-e154-47cd-89b7-a6be56d5a196", "address": "fa:16:3e:19:1d:da", "network": {"id": "8e9827ce-730f-43ce-9ce9-b2a89ad36a71", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1191115642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6dfc80710a6b4e6385f114bbc1c309b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0891f44-e1", "ovs_interfaceid": "c0891f44-e154-47cd-89b7-a6be56d5a196", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.013 250273 DEBUG oslo_concurrency.lockutils [None req-86e9cc77-fd19-4191-8e68-e057d80351fe 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Releasing lock "refresh_cache-d734c36e-0a78-4f75-b951-dd9e07b4ab08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.019 250273 DEBUG nova.virt.libvirt.vif [None req-86e9cc77-fd19-4191-8e68-e057d80351fe 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T11:09:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1156873023',display_name='tempest-TestServerAdvancedOps-server-1156873023',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1156873023',id=219,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T11:09:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='6dfc80710a6b4e6385f114bbc1c309b2',ramdisk_id='',reservation_id='r-36ty6xe8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_m
in_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-823818588',owner_user_name='tempest-TestServerAdvancedOps-823818588-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T11:09:41Z,user_data=None,user_id='1768560f7fe74b5284d84f78d3da759b',uuid=d734c36e-0a78-4f75-b951-dd9e07b4ab08,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "c0891f44-e154-47cd-89b7-a6be56d5a196", "address": "fa:16:3e:19:1d:da", "network": {"id": "8e9827ce-730f-43ce-9ce9-b2a89ad36a71", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1191115642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6dfc80710a6b4e6385f114bbc1c309b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0891f44-e1", "ovs_interfaceid": "c0891f44-e154-47cd-89b7-a6be56d5a196", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.019 250273 DEBUG nova.network.os_vif_util [None req-86e9cc77-fd19-4191-8e68-e057d80351fe 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Converting VIF {"id": "c0891f44-e154-47cd-89b7-a6be56d5a196", "address": "fa:16:3e:19:1d:da", "network": {"id": "8e9827ce-730f-43ce-9ce9-b2a89ad36a71", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1191115642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6dfc80710a6b4e6385f114bbc1c309b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0891f44-e1", "ovs_interfaceid": "c0891f44-e154-47cd-89b7-a6be56d5a196", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.020 250273 DEBUG nova.network.os_vif_util [None req-86e9cc77-fd19-4191-8e68-e057d80351fe 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:1d:da,bridge_name='br-int',has_traffic_filtering=True,id=c0891f44-e154-47cd-89b7-a6be56d5a196,network=Network(8e9827ce-730f-43ce-9ce9-b2a89ad36a71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0891f44-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.020 250273 DEBUG os_vif [None req-86e9cc77-fd19-4191-8e68-e057d80351fe 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:1d:da,bridge_name='br-int',has_traffic_filtering=True,id=c0891f44-e154-47cd-89b7-a6be56d5a196,network=Network(8e9827ce-730f-43ce-9ce9-b2a89ad36a71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0891f44-e1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.021 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.021 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.021 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.024 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.025 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc0891f44-e1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.025 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc0891f44-e1, col_values=(('external_ids', {'iface-id': 'c0891f44-e154-47cd-89b7-a6be56d5a196', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:1d:da', 'vm-uuid': 'd734c36e-0a78-4f75-b951-dd9e07b4ab08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.025 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.026 250273 INFO os_vif [None req-86e9cc77-fd19-4191-8e68-e057d80351fe 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:1d:da,bridge_name='br-int',has_traffic_filtering=True,id=c0891f44-e154-47cd-89b7-a6be56d5a196,network=Network(8e9827ce-730f-43ce-9ce9-b2a89ad36a71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0891f44-e1')#033[00m
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]: {
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:    "0": [
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:        {
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:            "devices": [
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:                "/dev/loop3"
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:            ],
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:            "lv_name": "ceph_lv0",
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:            "lv_size": "7511998464",
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:            "name": "ceph_lv0",
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:            "tags": {
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:                "ceph.cephx_lockbox_secret": "",
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:                "ceph.cluster_name": "ceph",
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:                "ceph.crush_device_class": "",
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:                "ceph.encrypted": "0",
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:                "ceph.osd_id": "0",
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:                "ceph.type": "block",
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:                "ceph.vdo": "0"
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:            },
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:            "type": "block",
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:            "vg_name": "ceph_vg0"
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:        }
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]:    ]
Jan 23 06:09:45 np0005593232 exciting_lewin[421583]: }
Jan 23 06:09:45 np0005593232 systemd[1]: libpod-ceffe28976faaddf9f095a1d1086ef5c7dc396e0caa12716e24a748a0e1a3f8a.scope: Deactivated successfully.
Jan 23 06:09:45 np0005593232 podman[421566]: 2026-01-23 11:09:45.060211334 +0000 UTC m=+1.060179488 container died ceffe28976faaddf9f095a1d1086ef5c7dc396e0caa12716e24a748a0e1a3f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.063 250273 DEBUG nova.objects.instance [None req-86e9cc77-fd19-4191-8e68-e057d80351fe 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lazy-loading 'numa_topology' on Instance uuid d734c36e-0a78-4f75-b951-dd9e07b4ab08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 06:09:45 np0005593232 kernel: tapc0891f44-e1: entered promiscuous mode
Jan 23 06:09:45 np0005593232 NetworkManager[49057]: <info>  [1769166585.1311] manager: (tapc0891f44-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/394)
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.169 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:45 np0005593232 ovn_controller[151001]: 2026-01-23T11:09:45Z|00847|binding|INFO|Claiming lport c0891f44-e154-47cd-89b7-a6be56d5a196 for this chassis.
Jan 23 06:09:45 np0005593232 ovn_controller[151001]: 2026-01-23T11:09:45Z|00848|binding|INFO|c0891f44-e154-47cd-89b7-a6be56d5a196: Claiming fa:16:3e:19:1d:da 10.100.0.12
Jan 23 06:09:45 np0005593232 systemd-udevd[421616]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 06:09:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:45.184 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:1d:da 10.100.0.12'], port_security=['fa:16:3e:19:1d:da 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd734c36e-0a78-4f75-b951-dd9e07b4ab08', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e9827ce-730f-43ce-9ce9-b2a89ad36a71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6dfc80710a6b4e6385f114bbc1c309b2', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'b2d4b004-7fb3-4ff8-88b3-fb7e97152533', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=932eac00-f6df-48e9-9c95-76ada1168e79, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=c0891f44-e154-47cd-89b7-a6be56d5a196) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 06:09:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:45.185 161902 INFO neutron.agent.ovn.metadata.agent [-] Port c0891f44-e154-47cd-89b7-a6be56d5a196 in datapath 8e9827ce-730f-43ce-9ce9-b2a89ad36a71 bound to our chassis#033[00m
Jan 23 06:09:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:45.186 161902 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8e9827ce-730f-43ce-9ce9-b2a89ad36a71 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 23 06:09:45 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:45.187 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2a4f9b29-3f51-49f6-8ff0-bf7bf1bd3382]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.188 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:45 np0005593232 ovn_controller[151001]: 2026-01-23T11:09:45Z|00849|binding|INFO|Setting lport c0891f44-e154-47cd-89b7-a6be56d5a196 ovn-installed in OVS
Jan 23 06:09:45 np0005593232 ovn_controller[151001]: 2026-01-23T11:09:45Z|00850|binding|INFO|Setting lport c0891f44-e154-47cd-89b7-a6be56d5a196 up in Southbound
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.190 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.193 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:45 np0005593232 NetworkManager[49057]: <info>  [1769166585.1959] device (tapc0891f44-e1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 06:09:45 np0005593232 NetworkManager[49057]: <info>  [1769166585.1969] device (tapc0891f44-e1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 06:09:45 np0005593232 systemd-machined[215836]: New machine qemu-96-instance-000000db.
Jan 23 06:09:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:09:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:45.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:09:45 np0005593232 systemd[1]: Started Virtual Machine qemu-96-instance-000000db.
Jan 23 06:09:45 np0005593232 systemd[1]: var-lib-containers-storage-overlay-8a41dfd1356f54c040e9d1fdddb64ccd3de087baddc7b73b9fd403b294ca4bab-merged.mount: Deactivated successfully.
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.305 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:45 np0005593232 podman[421566]: 2026-01-23 11:09:45.361522158 +0000 UTC m=+1.361490322 container remove ceffe28976faaddf9f095a1d1086ef5c7dc396e0caa12716e24a748a0e1a3f8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lewin, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:09:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4321: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 KiB/s rd, 4 op/s
Jan 23 06:09:45 np0005593232 systemd[1]: libpod-conmon-ceffe28976faaddf9f095a1d1086ef5c7dc396e0caa12716e24a748a0e1a3f8a.scope: Deactivated successfully.
Jan 23 06:09:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:45.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.792 250273 DEBUG nova.virt.libvirt.host [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Removed pending event for d734c36e-0a78-4f75-b951-dd9e07b4ab08 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.793 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769166585.7918272, d734c36e-0a78-4f75-b951-dd9e07b4ab08 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.793 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] VM Started (Lifecycle Event)#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.811 250273 DEBUG nova.compute.manager [None req-86e9cc77-fd19-4191-8e68-e057d80351fe 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.811 250273 DEBUG nova.objects.instance [None req-86e9cc77-fd19-4191-8e68-e057d80351fe 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lazy-loading 'pci_devices' on Instance uuid d734c36e-0a78-4f75-b951-dd9e07b4ab08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.818 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.821 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.827 250273 INFO nova.virt.libvirt.driver [-] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Instance running successfully.#033[00m
Jan 23 06:09:45 np0005593232 virtqemud[249592]: argument unsupported: QEMU guest agent is not configured
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.830 250273 DEBUG nova.virt.libvirt.guest [None req-86e9cc77-fd19-4191-8e68-e057d80351fe 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.830 250273 DEBUG nova.compute.manager [None req-86e9cc77-fd19-4191-8e68-e057d80351fe 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.840 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.840 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769166585.7962286, d734c36e-0a78-4f75-b951-dd9e07b4ab08 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.841 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] VM Resumed (Lifecycle Event)#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.868 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.873 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 06:09:45 np0005593232 nova_compute[250269]: 2026-01-23 11:09:45.916 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:46 np0005593232 podman[421809]: 2026-01-23 11:09:45.981826463 +0000 UTC m=+0.031009845 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:09:46 np0005593232 nova_compute[250269]: 2026-01-23 11:09:46.109 250273 DEBUG nova.compute.manager [req-14d45a76-0a09-42ee-9db0-0b8ad7b3caf0 req-2d4864ef-f09b-4422-bc91-8d3eb42ba365 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received event network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:09:46 np0005593232 nova_compute[250269]: 2026-01-23 11:09:46.110 250273 DEBUG oslo_concurrency.lockutils [req-14d45a76-0a09-42ee-9db0-0b8ad7b3caf0 req-2d4864ef-f09b-4422-bc91-8d3eb42ba365 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:09:46 np0005593232 nova_compute[250269]: 2026-01-23 11:09:46.111 250273 DEBUG oslo_concurrency.lockutils [req-14d45a76-0a09-42ee-9db0-0b8ad7b3caf0 req-2d4864ef-f09b-4422-bc91-8d3eb42ba365 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:09:46 np0005593232 nova_compute[250269]: 2026-01-23 11:09:46.111 250273 DEBUG oslo_concurrency.lockutils [req-14d45a76-0a09-42ee-9db0-0b8ad7b3caf0 req-2d4864ef-f09b-4422-bc91-8d3eb42ba365 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:09:46 np0005593232 nova_compute[250269]: 2026-01-23 11:09:46.111 250273 DEBUG nova.compute.manager [req-14d45a76-0a09-42ee-9db0-0b8ad7b3caf0 req-2d4864ef-f09b-4422-bc91-8d3eb42ba365 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] No waiting events found dispatching network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 06:09:46 np0005593232 nova_compute[250269]: 2026-01-23 11:09:46.112 250273 WARNING nova.compute.manager [req-14d45a76-0a09-42ee-9db0-0b8ad7b3caf0 req-2d4864ef-f09b-4422-bc91-8d3eb42ba365 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received unexpected event network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 for instance with vm_state active and task_state None.#033[00m
Jan 23 06:09:46 np0005593232 nova_compute[250269]: 2026-01-23 11:09:46.112 250273 DEBUG nova.compute.manager [req-14d45a76-0a09-42ee-9db0-0b8ad7b3caf0 req-2d4864ef-f09b-4422-bc91-8d3eb42ba365 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received event network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:09:46 np0005593232 nova_compute[250269]: 2026-01-23 11:09:46.112 250273 DEBUG oslo_concurrency.lockutils [req-14d45a76-0a09-42ee-9db0-0b8ad7b3caf0 req-2d4864ef-f09b-4422-bc91-8d3eb42ba365 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:09:46 np0005593232 nova_compute[250269]: 2026-01-23 11:09:46.112 250273 DEBUG oslo_concurrency.lockutils [req-14d45a76-0a09-42ee-9db0-0b8ad7b3caf0 req-2d4864ef-f09b-4422-bc91-8d3eb42ba365 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:09:46 np0005593232 nova_compute[250269]: 2026-01-23 11:09:46.113 250273 DEBUG oslo_concurrency.lockutils [req-14d45a76-0a09-42ee-9db0-0b8ad7b3caf0 req-2d4864ef-f09b-4422-bc91-8d3eb42ba365 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:09:46 np0005593232 nova_compute[250269]: 2026-01-23 11:09:46.113 250273 DEBUG nova.compute.manager [req-14d45a76-0a09-42ee-9db0-0b8ad7b3caf0 req-2d4864ef-f09b-4422-bc91-8d3eb42ba365 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] No waiting events found dispatching network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 06:09:46 np0005593232 nova_compute[250269]: 2026-01-23 11:09:46.113 250273 WARNING nova.compute.manager [req-14d45a76-0a09-42ee-9db0-0b8ad7b3caf0 req-2d4864ef-f09b-4422-bc91-8d3eb42ba365 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received unexpected event network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 for instance with vm_state active and task_state None.#033[00m
Jan 23 06:09:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:09:46 np0005593232 podman[421809]: 2026-01-23 11:09:46.435383113 +0000 UTC m=+0.484566515 container create 40ebd12d03384e239f5a5367512c387a71e9b53976043ccc5a897fe78f8bceb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hermann, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:09:46 np0005593232 systemd[1]: Started libpod-conmon-40ebd12d03384e239f5a5367512c387a71e9b53976043ccc5a897fe78f8bceb8.scope.
Jan 23 06:09:46 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:09:46 np0005593232 podman[421809]: 2026-01-23 11:09:46.876031977 +0000 UTC m=+0.925215429 container init 40ebd12d03384e239f5a5367512c387a71e9b53976043ccc5a897fe78f8bceb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hermann, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:09:46 np0005593232 podman[421809]: 2026-01-23 11:09:46.885312983 +0000 UTC m=+0.934496385 container start 40ebd12d03384e239f5a5367512c387a71e9b53976043ccc5a897fe78f8bceb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hermann, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:09:46 np0005593232 heuristic_hermann[421826]: 167 167
Jan 23 06:09:46 np0005593232 systemd[1]: libpod-40ebd12d03384e239f5a5367512c387a71e9b53976043ccc5a897fe78f8bceb8.scope: Deactivated successfully.
Jan 23 06:09:46 np0005593232 conmon[421826]: conmon 40ebd12d03384e239f5a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-40ebd12d03384e239f5a5367512c387a71e9b53976043ccc5a897fe78f8bceb8.scope/container/memory.events
Jan 23 06:09:46 np0005593232 podman[421809]: 2026-01-23 11:09:46.968764093 +0000 UTC m=+1.017947555 container attach 40ebd12d03384e239f5a5367512c387a71e9b53976043ccc5a897fe78f8bceb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hermann, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 06:09:46 np0005593232 podman[421809]: 2026-01-23 11:09:46.97119604 +0000 UTC m=+1.020379442 container died 40ebd12d03384e239f5a5367512c387a71e9b53976043ccc5a897fe78f8bceb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 06:09:47 np0005593232 nova_compute[250269]: 2026-01-23 11:09:47.099 250273 DEBUG oslo_concurrency.lockutils [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Acquiring lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:09:47 np0005593232 nova_compute[250269]: 2026-01-23 11:09:47.101 250273 DEBUG oslo_concurrency.lockutils [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:09:47 np0005593232 nova_compute[250269]: 2026-01-23 11:09:47.103 250273 DEBUG oslo_concurrency.lockutils [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Acquiring lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:09:47 np0005593232 nova_compute[250269]: 2026-01-23 11:09:47.103 250273 DEBUG oslo_concurrency.lockutils [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:09:47 np0005593232 nova_compute[250269]: 2026-01-23 11:09:47.104 250273 DEBUG oslo_concurrency.lockutils [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:09:47 np0005593232 nova_compute[250269]: 2026-01-23 11:09:47.107 250273 INFO nova.compute.manager [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Terminating instance#033[00m
Jan 23 06:09:47 np0005593232 nova_compute[250269]: 2026-01-23 11:09:47.109 250273 DEBUG nova.compute.manager [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 06:09:47 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7f6315327b33e5551c46d8775212bd50f59e1ccb0af15747a945882da9a882c7-merged.mount: Deactivated successfully.
Jan 23 06:09:47 np0005593232 kernel: tapc0891f44-e1 (unregistering): left promiscuous mode
Jan 23 06:09:47 np0005593232 NetworkManager[49057]: <info>  [1769166587.2127] device (tapc0891f44-e1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 06:09:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:47.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:47 np0005593232 nova_compute[250269]: 2026-01-23 11:09:47.223 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:47 np0005593232 ovn_controller[151001]: 2026-01-23T11:09:47Z|00851|binding|INFO|Releasing lport c0891f44-e154-47cd-89b7-a6be56d5a196 from this chassis (sb_readonly=0)
Jan 23 06:09:47 np0005593232 ovn_controller[151001]: 2026-01-23T11:09:47Z|00852|binding|INFO|Setting lport c0891f44-e154-47cd-89b7-a6be56d5a196 down in Southbound
Jan 23 06:09:47 np0005593232 ovn_controller[151001]: 2026-01-23T11:09:47Z|00853|binding|INFO|Removing iface tapc0891f44-e1 ovn-installed in OVS
Jan 23 06:09:47 np0005593232 nova_compute[250269]: 2026-01-23 11:09:47.226 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:47.233 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:1d:da 10.100.0.12'], port_security=['fa:16:3e:19:1d:da 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd734c36e-0a78-4f75-b951-dd9e07b4ab08', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e9827ce-730f-43ce-9ce9-b2a89ad36a71', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6dfc80710a6b4e6385f114bbc1c309b2', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'b2d4b004-7fb3-4ff8-88b3-fb7e97152533', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=932eac00-f6df-48e9-9c95-76ada1168e79, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=c0891f44-e154-47cd-89b7-a6be56d5a196) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 06:09:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:47.235 161902 INFO neutron.agent.ovn.metadata.agent [-] Port c0891f44-e154-47cd-89b7-a6be56d5a196 in datapath 8e9827ce-730f-43ce-9ce9-b2a89ad36a71 unbound from our chassis#033[00m
Jan 23 06:09:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:47.235 161902 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 8e9827ce-730f-43ce-9ce9-b2a89ad36a71 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Jan 23 06:09:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:47.236 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[4382b51b-df04-4fc4-88ee-2b276cab16e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:09:47 np0005593232 nova_compute[250269]: 2026-01-23 11:09:47.243 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:47 np0005593232 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000db.scope: Deactivated successfully.
Jan 23 06:09:47 np0005593232 podman[421809]: 2026-01-23 11:09:47.272481833 +0000 UTC m=+1.321665195 container remove 40ebd12d03384e239f5a5367512c387a71e9b53976043ccc5a897fe78f8bceb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 23 06:09:47 np0005593232 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d000000db.scope: Consumed 1.729s CPU time.
Jan 23 06:09:47 np0005593232 systemd-machined[215836]: Machine qemu-96-instance-000000db terminated.
Jan 23 06:09:47 np0005593232 systemd[1]: libpod-conmon-40ebd12d03384e239f5a5367512c387a71e9b53976043ccc5a897fe78f8bceb8.scope: Deactivated successfully.
Jan 23 06:09:47 np0005593232 NetworkManager[49057]: <info>  [1769166587.3331] manager: (tapc0891f44-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/395)
Jan 23 06:09:47 np0005593232 nova_compute[250269]: 2026-01-23 11:09:47.349 250273 INFO nova.virt.libvirt.driver [-] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Instance destroyed successfully.#033[00m
Jan 23 06:09:47 np0005593232 nova_compute[250269]: 2026-01-23 11:09:47.350 250273 DEBUG nova.objects.instance [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lazy-loading 'resources' on Instance uuid d734c36e-0a78-4f75-b951-dd9e07b4ab08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 06:09:47 np0005593232 nova_compute[250269]: 2026-01-23 11:09:47.363 250273 DEBUG nova.virt.libvirt.vif [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T11:09:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1156873023',display_name='tempest-TestServerAdvancedOps-server-1156873023',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1156873023',id=219,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T11:09:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6dfc80710a6b4e6385f114bbc1c309b2',ramdisk_id='',reservation_id='r-36ty6xe8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-823818588',owner_user_name='tempest-TestServerAdvancedOps-823818588-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T11:09:45Z,user_data=None,user_id='1768560f7fe74b5284d84f78d3da759b',uuid=d734c36e-0a78-4f75-b951-dd9e07b4ab08,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c0891f44-e154-47cd-89b7-a6be56d5a196", "address": "fa:16:3e:19:1d:da", "network": {"id": "8e9827ce-730f-43ce-9ce9-b2a89ad36a71", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1191115642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6dfc80710a6b4e6385f114bbc1c309b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0891f44-e1", "ovs_interfaceid": "c0891f44-e154-47cd-89b7-a6be56d5a196", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 06:09:47 np0005593232 nova_compute[250269]: 2026-01-23 11:09:47.363 250273 DEBUG nova.network.os_vif_util [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Converting VIF {"id": "c0891f44-e154-47cd-89b7-a6be56d5a196", "address": "fa:16:3e:19:1d:da", "network": {"id": "8e9827ce-730f-43ce-9ce9-b2a89ad36a71", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-1191115642-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "6dfc80710a6b4e6385f114bbc1c309b2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0891f44-e1", "ovs_interfaceid": "c0891f44-e154-47cd-89b7-a6be56d5a196", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 06:09:47 np0005593232 nova_compute[250269]: 2026-01-23 11:09:47.364 250273 DEBUG nova.network.os_vif_util [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:1d:da,bridge_name='br-int',has_traffic_filtering=True,id=c0891f44-e154-47cd-89b7-a6be56d5a196,network=Network(8e9827ce-730f-43ce-9ce9-b2a89ad36a71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0891f44-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 06:09:47 np0005593232 nova_compute[250269]: 2026-01-23 11:09:47.364 250273 DEBUG os_vif [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:1d:da,bridge_name='br-int',has_traffic_filtering=True,id=c0891f44-e154-47cd-89b7-a6be56d5a196,network=Network(8e9827ce-730f-43ce-9ce9-b2a89ad36a71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0891f44-e1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 06:09:47 np0005593232 nova_compute[250269]: 2026-01-23 11:09:47.366 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:47 np0005593232 nova_compute[250269]: 2026-01-23 11:09:47.367 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc0891f44-e1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:09:47 np0005593232 nova_compute[250269]: 2026-01-23 11:09:47.368 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:47 np0005593232 nova_compute[250269]: 2026-01-23 11:09:47.369 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:47 np0005593232 nova_compute[250269]: 2026-01-23 11:09:47.371 250273 INFO os_vif [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:1d:da,bridge_name='br-int',has_traffic_filtering=True,id=c0891f44-e154-47cd-89b7-a6be56d5a196,network=Network(8e9827ce-730f-43ce-9ce9-b2a89ad36a71),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0891f44-e1')#033[00m
Jan 23 06:09:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4322: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.2 KiB/s rd, 9 op/s
Jan 23 06:09:47 np0005593232 podman[421869]: 2026-01-23 11:09:47.419343631 +0000 UTC m=+0.026689557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:09:47 np0005593232 podman[421869]: 2026-01-23 11:09:47.649970347 +0000 UTC m=+0.257316243 container create 8d06baa6c56c83f1fcf888e8e1682ddb841f2cf2063e887026f4c27d7e9a7aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_nightingale, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:09:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:09:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:47.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:09:47 np0005593232 systemd[1]: Started libpod-conmon-8d06baa6c56c83f1fcf888e8e1682ddb841f2cf2063e887026f4c27d7e9a7aea.scope.
Jan 23 06:09:47 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:09:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd07a041c8564edc9640f48b9694ca5478a8ea512dd23b001726974530707a86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:09:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd07a041c8564edc9640f48b9694ca5478a8ea512dd23b001726974530707a86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:09:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd07a041c8564edc9640f48b9694ca5478a8ea512dd23b001726974530707a86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:09:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd07a041c8564edc9640f48b9694ca5478a8ea512dd23b001726974530707a86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:09:48 np0005593232 podman[421869]: 2026-01-23 11:09:48.020785887 +0000 UTC m=+0.628131823 container init 8d06baa6c56c83f1fcf888e8e1682ddb841f2cf2063e887026f4c27d7e9a7aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Jan 23 06:09:48 np0005593232 podman[421869]: 2026-01-23 11:09:48.029290981 +0000 UTC m=+0.636636877 container start 8d06baa6c56c83f1fcf888e8e1682ddb841f2cf2063e887026f4c27d7e9a7aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Jan 23 06:09:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 06:09:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:09:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:09:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:09:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0009958283896333519 of space, bias 1.0, pg target 0.2987485168900056 quantized to 32 (current 32)
Jan 23 06:09:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:09:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 06:09:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:09:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:09:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:09:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:09:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:09:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:09:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:09:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:09:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:09:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:09:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:09:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:09:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:09:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:09:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:09:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:09:48 np0005593232 podman[421869]: 2026-01-23 11:09:48.053659953 +0000 UTC m=+0.661005869 container attach 8d06baa6c56c83f1fcf888e8e1682ddb841f2cf2063e887026f4c27d7e9a7aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 23 06:09:48 np0005593232 nova_compute[250269]: 2026-01-23 11:09:48.285 250273 DEBUG nova.compute.manager [req-a4caeb64-3b35-437d-8554-c60af8ec51ef req-588a1d45-97fa-4043-84a3-a4fe4bee4642 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received event network-vif-unplugged-c0891f44-e154-47cd-89b7-a6be56d5a196 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:09:48 np0005593232 nova_compute[250269]: 2026-01-23 11:09:48.287 250273 DEBUG oslo_concurrency.lockutils [req-a4caeb64-3b35-437d-8554-c60af8ec51ef req-588a1d45-97fa-4043-84a3-a4fe4bee4642 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:09:48 np0005593232 nova_compute[250269]: 2026-01-23 11:09:48.287 250273 DEBUG oslo_concurrency.lockutils [req-a4caeb64-3b35-437d-8554-c60af8ec51ef req-588a1d45-97fa-4043-84a3-a4fe4bee4642 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:09:48 np0005593232 nova_compute[250269]: 2026-01-23 11:09:48.288 250273 DEBUG oslo_concurrency.lockutils [req-a4caeb64-3b35-437d-8554-c60af8ec51ef req-588a1d45-97fa-4043-84a3-a4fe4bee4642 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:09:48 np0005593232 nova_compute[250269]: 2026-01-23 11:09:48.288 250273 DEBUG nova.compute.manager [req-a4caeb64-3b35-437d-8554-c60af8ec51ef req-588a1d45-97fa-4043-84a3-a4fe4bee4642 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] No waiting events found dispatching network-vif-unplugged-c0891f44-e154-47cd-89b7-a6be56d5a196 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 06:09:48 np0005593232 nova_compute[250269]: 2026-01-23 11:09:48.288 250273 DEBUG nova.compute.manager [req-a4caeb64-3b35-437d-8554-c60af8ec51ef req-588a1d45-97fa-4043-84a3-a4fe4bee4642 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received event network-vif-unplugged-c0891f44-e154-47cd-89b7-a6be56d5a196 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 06:09:48 np0005593232 nova_compute[250269]: 2026-01-23 11:09:48.289 250273 DEBUG nova.compute.manager [req-a4caeb64-3b35-437d-8554-c60af8ec51ef req-588a1d45-97fa-4043-84a3-a4fe4bee4642 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received event network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:09:48 np0005593232 nova_compute[250269]: 2026-01-23 11:09:48.289 250273 DEBUG oslo_concurrency.lockutils [req-a4caeb64-3b35-437d-8554-c60af8ec51ef req-588a1d45-97fa-4043-84a3-a4fe4bee4642 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:09:48 np0005593232 nova_compute[250269]: 2026-01-23 11:09:48.289 250273 DEBUG oslo_concurrency.lockutils [req-a4caeb64-3b35-437d-8554-c60af8ec51ef req-588a1d45-97fa-4043-84a3-a4fe4bee4642 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:09:48 np0005593232 nova_compute[250269]: 2026-01-23 11:09:48.290 250273 DEBUG oslo_concurrency.lockutils [req-a4caeb64-3b35-437d-8554-c60af8ec51ef req-588a1d45-97fa-4043-84a3-a4fe4bee4642 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:09:48 np0005593232 nova_compute[250269]: 2026-01-23 11:09:48.290 250273 DEBUG nova.compute.manager [req-a4caeb64-3b35-437d-8554-c60af8ec51ef req-588a1d45-97fa-4043-84a3-a4fe4bee4642 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] No waiting events found dispatching network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 06:09:48 np0005593232 nova_compute[250269]: 2026-01-23 11:09:48.290 250273 WARNING nova.compute.manager [req-a4caeb64-3b35-437d-8554-c60af8ec51ef req-588a1d45-97fa-4043-84a3-a4fe4bee4642 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received unexpected event network-vif-plugged-c0891f44-e154-47cd-89b7-a6be56d5a196 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 06:09:48 np0005593232 nova_compute[250269]: 2026-01-23 11:09:48.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:09:48 np0005593232 nova_compute[250269]: 2026-01-23 11:09:48.601 250273 INFO nova.virt.libvirt.driver [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Deleting instance files /var/lib/nova/instances/d734c36e-0a78-4f75-b951-dd9e07b4ab08_del#033[00m
Jan 23 06:09:48 np0005593232 nova_compute[250269]: 2026-01-23 11:09:48.602 250273 INFO nova.virt.libvirt.driver [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Deletion of /var/lib/nova/instances/d734c36e-0a78-4f75-b951-dd9e07b4ab08_del complete#033[00m
Jan 23 06:09:48 np0005593232 nova_compute[250269]: 2026-01-23 11:09:48.741 250273 INFO nova.compute.manager [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Took 1.63 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 06:09:48 np0005593232 nova_compute[250269]: 2026-01-23 11:09:48.742 250273 DEBUG oslo.service.loopingcall [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 06:09:48 np0005593232 nova_compute[250269]: 2026-01-23 11:09:48.742 250273 DEBUG nova.compute.manager [-] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 06:09:48 np0005593232 nova_compute[250269]: 2026-01-23 11:09:48.742 250273 DEBUG nova.network.neutron [-] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 06:09:48 np0005593232 unruffled_nightingale[421903]: {
Jan 23 06:09:48 np0005593232 unruffled_nightingale[421903]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 06:09:48 np0005593232 unruffled_nightingale[421903]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:09:48 np0005593232 unruffled_nightingale[421903]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 06:09:48 np0005593232 unruffled_nightingale[421903]:        "osd_id": 0,
Jan 23 06:09:48 np0005593232 unruffled_nightingale[421903]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:09:48 np0005593232 unruffled_nightingale[421903]:        "type": "bluestore"
Jan 23 06:09:48 np0005593232 unruffled_nightingale[421903]:    }
Jan 23 06:09:48 np0005593232 unruffled_nightingale[421903]: }
Jan 23 06:09:49 np0005593232 systemd[1]: libpod-8d06baa6c56c83f1fcf888e8e1682ddb841f2cf2063e887026f4c27d7e9a7aea.scope: Deactivated successfully.
Jan 23 06:09:49 np0005593232 conmon[421903]: conmon 8d06baa6c56c83f1fcf8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8d06baa6c56c83f1fcf888e8e1682ddb841f2cf2063e887026f4c27d7e9a7aea.scope/container/memory.events
Jan 23 06:09:49 np0005593232 podman[421869]: 2026-01-23 11:09:49.011857299 +0000 UTC m=+1.619203205 container died 8d06baa6c56c83f1fcf888e8e1682ddb841f2cf2063e887026f4c27d7e9a7aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_nightingale, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Jan 23 06:09:49 np0005593232 systemd[1]: var-lib-containers-storage-overlay-bd07a041c8564edc9640f48b9694ca5478a8ea512dd23b001726974530707a86-merged.mount: Deactivated successfully.
Jan 23 06:09:49 np0005593232 podman[421869]: 2026-01-23 11:09:49.064786698 +0000 UTC m=+1.672132594 container remove 8d06baa6c56c83f1fcf888e8e1682ddb841f2cf2063e887026f4c27d7e9a7aea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_nightingale, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 06:09:49 np0005593232 systemd[1]: libpod-conmon-8d06baa6c56c83f1fcf888e8e1682ddb841f2cf2063e887026f4c27d7e9a7aea.scope: Deactivated successfully.
Jan 23 06:09:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 06:09:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:09:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 06:09:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:09:49 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 2182c761-49db-4d25-91b4-d24d5bac9f16 does not exist
Jan 23 06:09:49 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev be98f93a-ce44-4202-9bc0-60760163bd5d does not exist
Jan 23 06:09:49 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a0fbfc01-c36f-4002-ba1d-f0036b12ccfb does not exist
Jan 23 06:09:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:49.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4323: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.7 KiB/s rd, 5 op/s
Jan 23 06:09:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:49.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:50 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:09:50 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:09:50 np0005593232 nova_compute[250269]: 2026-01-23 11:09:50.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:09:50 np0005593232 nova_compute[250269]: 2026-01-23 11:09:50.322 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:50 np0005593232 nova_compute[250269]: 2026-01-23 11:09:50.333 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:09:50 np0005593232 nova_compute[250269]: 2026-01-23 11:09:50.333 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:09:50 np0005593232 nova_compute[250269]: 2026-01-23 11:09:50.334 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:09:50 np0005593232 nova_compute[250269]: 2026-01-23 11:09:50.334 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 06:09:50 np0005593232 nova_compute[250269]: 2026-01-23 11:09:50.334 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:09:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:09:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3939305068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:09:50 np0005593232 nova_compute[250269]: 2026-01-23 11:09:50.803 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:09:50 np0005593232 nova_compute[250269]: 2026-01-23 11:09:50.959 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 06:09:50 np0005593232 nova_compute[250269]: 2026-01-23 11:09:50.960 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3995MB free_disk=20.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 06:09:50 np0005593232 nova_compute[250269]: 2026-01-23 11:09:50.961 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:09:50 np0005593232 nova_compute[250269]: 2026-01-23 11:09:50.961 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:09:51 np0005593232 nova_compute[250269]: 2026-01-23 11:09:51.064 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance d734c36e-0a78-4f75-b951-dd9e07b4ab08 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 06:09:51 np0005593232 nova_compute[250269]: 2026-01-23 11:09:51.065 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 06:09:51 np0005593232 nova_compute[250269]: 2026-01-23 11:09:51.065 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 06:09:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:09:51 np0005593232 nova_compute[250269]: 2026-01-23 11:09:51.155 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:09:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:51.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:51 np0005593232 nova_compute[250269]: 2026-01-23 11:09:51.246 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:51.246 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=102, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=101) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 06:09:51 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:51.247 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 06:09:51 np0005593232 nova_compute[250269]: 2026-01-23 11:09:51.316 250273 DEBUG nova.network.neutron [-] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 06:09:51 np0005593232 nova_compute[250269]: 2026-01-23 11:09:51.360 250273 INFO nova.compute.manager [-] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Took 2.62 seconds to deallocate network for instance.#033[00m
Jan 23 06:09:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4324: 321 pgs: 321 active+clean; 153 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s rd, 511 B/s wr, 15 op/s
Jan 23 06:09:51 np0005593232 nova_compute[250269]: 2026-01-23 11:09:51.413 250273 DEBUG oslo_concurrency.lockutils [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:09:51 np0005593232 nova_compute[250269]: 2026-01-23 11:09:51.436 250273 DEBUG nova.compute.manager [req-4965b7e3-3456-4037-af4e-56795eaf151b req-70570321-adaf-4d77-89cb-721f987ec35c 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Received event network-vif-deleted-c0891f44-e154-47cd-89b7-a6be56d5a196 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:09:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:09:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3524979374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:09:51 np0005593232 nova_compute[250269]: 2026-01-23 11:09:51.634 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:09:51 np0005593232 nova_compute[250269]: 2026-01-23 11:09:51.640 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:09:51 np0005593232 nova_compute[250269]: 2026-01-23 11:09:51.658 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:09:51 np0005593232 nova_compute[250269]: 2026-01-23 11:09:51.689 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 06:09:51 np0005593232 nova_compute[250269]: 2026-01-23 11:09:51.689 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:09:51 np0005593232 nova_compute[250269]: 2026-01-23 11:09:51.690 250273 DEBUG oslo_concurrency.lockutils [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.277s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:09:51 np0005593232 nova_compute[250269]: 2026-01-23 11:09:51.690 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:09:51 np0005593232 nova_compute[250269]: 2026-01-23 11:09:51.690 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 23 06:09:51 np0005593232 nova_compute[250269]: 2026-01-23 11:09:51.741 250273 DEBUG oslo_concurrency.processutils [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:09:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:51.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:09:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/651656143' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:09:52 np0005593232 nova_compute[250269]: 2026-01-23 11:09:52.263 250273 DEBUG oslo_concurrency.processutils [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:09:52 np0005593232 nova_compute[250269]: 2026-01-23 11:09:52.269 250273 DEBUG nova.compute.provider_tree [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:09:52 np0005593232 nova_compute[250269]: 2026-01-23 11:09:52.288 250273 DEBUG nova.scheduler.client.report [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:09:52 np0005593232 nova_compute[250269]: 2026-01-23 11:09:52.307 250273 DEBUG oslo_concurrency.lockutils [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:09:52 np0005593232 nova_compute[250269]: 2026-01-23 11:09:52.330 250273 INFO nova.scheduler.client.report [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Deleted allocations for instance d734c36e-0a78-4f75-b951-dd9e07b4ab08#033[00m
Jan 23 06:09:52 np0005593232 nova_compute[250269]: 2026-01-23 11:09:52.369 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:52 np0005593232 nova_compute[250269]: 2026-01-23 11:09:52.423 250273 DEBUG oslo_concurrency.lockutils [None req-2c69b89e-0eef-4e8d-9561-89663e4a686d 1768560f7fe74b5284d84f78d3da759b 6dfc80710a6b4e6385f114bbc1c309b2 - - default default] Lock "d734c36e-0a78-4f75-b951-dd9e07b4ab08" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.322s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:09:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:53.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4325: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Jan 23 06:09:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:09:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:53.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:09:54 np0005593232 nova_compute[250269]: 2026-01-23 11:09:54.366 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:09:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:55.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:09:55 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:09:55.249 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '102'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:09:55 np0005593232 nova_compute[250269]: 2026-01-23 11:09:55.324 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4326: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Jan 23 06:09:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:09:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:55.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:09:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:09:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:09:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:57.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:09:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4327: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Jan 23 06:09:57 np0005593232 nova_compute[250269]: 2026-01-23 11:09:57.414 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:09:57 np0005593232 podman[422060]: 2026-01-23 11:09:57.622019969 +0000 UTC m=+0.279698489 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 23 06:09:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:09:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:57.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:09:58 np0005593232 podman[422110]: 2026-01-23 11:09:58.780516187 +0000 UTC m=+0.066227956 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 23 06:09:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:09:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:09:59.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:09:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4328: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 23 06:09:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:09:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:09:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:09:59.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:10:00 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 23 06:10:00 np0005593232 nova_compute[250269]: 2026-01-23 11:10:00.326 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:10:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:01.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:01 np0005593232 ceph-mon[74423]: overall HEALTH_OK
Jan 23 06:10:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4329: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Jan 23 06:10:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:01.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:02 np0005593232 nova_compute[250269]: 2026-01-23 11:10:02.348 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769166587.3471801, d734c36e-0a78-4f75-b951-dd9e07b4ab08 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 06:10:02 np0005593232 nova_compute[250269]: 2026-01-23 11:10:02.349 250273 INFO nova.compute.manager [-] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] VM Stopped (Lifecycle Event)#033[00m
Jan 23 06:10:02 np0005593232 nova_compute[250269]: 2026-01-23 11:10:02.372 250273 DEBUG nova.compute.manager [None req-2a830cc4-826f-449a-a9fd-b46334978a8e - - - - - -] [instance: d734c36e-0a78-4f75-b951-dd9e07b4ab08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:10:02 np0005593232 nova_compute[250269]: 2026-01-23 11:10:02.436 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:03.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4330: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 10 KiB/s rd, 682 B/s wr, 15 op/s
Jan 23 06:10:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:03.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:03 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0[81752]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 23 06:10:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:10:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:05.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:10:05 np0005593232 nova_compute[250269]: 2026-01-23 11:10:05.329 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4331: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:10:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:05.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:10:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:07.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4332: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:10:07 np0005593232 nova_compute[250269]: 2026-01-23 11:10:07.475 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:10:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:10:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:10:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:10:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:10:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:10:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:07.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:09.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4333: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:10:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:09.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:10 np0005593232 nova_compute[250269]: 2026-01-23 11:10:10.331 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:11.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:10:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4334: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:10:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:11.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:12 np0005593232 nova_compute[250269]: 2026-01-23 11:10:12.477 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:13.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4335: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:10:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:10:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:13.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:10:14 np0005593232 nova_compute[250269]: 2026-01-23 11:10:14.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:10:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:15.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:15 np0005593232 nova_compute[250269]: 2026-01-23 11:10:15.332 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4336: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:10:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:15.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:10:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:10:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:17.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:10:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4337: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:10:17 np0005593232 nova_compute[250269]: 2026-01-23 11:10:17.479 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:10:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:17.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:10:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:19.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4338: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:10:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:19.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:20 np0005593232 nova_compute[250269]: 2026-01-23 11:10:20.311 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:10:20 np0005593232 nova_compute[250269]: 2026-01-23 11:10:20.311 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 23 06:10:20 np0005593232 nova_compute[250269]: 2026-01-23 11:10:20.333 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:20 np0005593232 nova_compute[250269]: 2026-01-23 11:10:20.336 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 23 06:10:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:21.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:10:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4339: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:10:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:21.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:22 np0005593232 nova_compute[250269]: 2026-01-23 11:10:22.542 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:23.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4340: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:10:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:23.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:25.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:25 np0005593232 nova_compute[250269]: 2026-01-23 11:10:25.336 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:25 np0005593232 ovn_controller[151001]: 2026-01-23T11:10:25Z|00854|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 23 06:10:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4341: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:10:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:25.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:10:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:27.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4342: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:10:27 np0005593232 nova_compute[250269]: 2026-01-23 11:10:27.570 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:27.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:28 np0005593232 nova_compute[250269]: 2026-01-23 11:10:28.177 250273 DEBUG oslo_concurrency.lockutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Acquiring lock "52a35f72-182e-43fe-882b-db754817996b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:10:28 np0005593232 nova_compute[250269]: 2026-01-23 11:10:28.178 250273 DEBUG oslo_concurrency.lockutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Lock "52a35f72-182e-43fe-882b-db754817996b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:10:28 np0005593232 nova_compute[250269]: 2026-01-23 11:10:28.192 250273 DEBUG nova.compute.manager [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 06:10:28 np0005593232 nova_compute[250269]: 2026-01-23 11:10:28.286 250273 DEBUG oslo_concurrency.lockutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:10:28 np0005593232 nova_compute[250269]: 2026-01-23 11:10:28.288 250273 DEBUG oslo_concurrency.lockutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:10:28 np0005593232 nova_compute[250269]: 2026-01-23 11:10:28.296 250273 DEBUG nova.virt.hardware [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 06:10:28 np0005593232 nova_compute[250269]: 2026-01-23 11:10:28.296 250273 INFO nova.compute.claims [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 06:10:28 np0005593232 podman[422220]: 2026-01-23 11:10:28.412773574 +0000 UTC m=+0.077008674 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 06:10:28 np0005593232 nova_compute[250269]: 2026-01-23 11:10:28.433 250273 DEBUG oslo_concurrency.processutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:10:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:10:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/527711412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:10:28 np0005593232 nova_compute[250269]: 2026-01-23 11:10:28.914 250273 DEBUG oslo_concurrency.processutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:10:28 np0005593232 nova_compute[250269]: 2026-01-23 11:10:28.923 250273 DEBUG nova.compute.provider_tree [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:10:28 np0005593232 nova_compute[250269]: 2026-01-23 11:10:28.942 250273 DEBUG nova.scheduler.client.report [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:10:28 np0005593232 nova_compute[250269]: 2026-01-23 11:10:28.966 250273 DEBUG oslo_concurrency.lockutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:10:28 np0005593232 nova_compute[250269]: 2026-01-23 11:10:28.968 250273 DEBUG nova.compute.manager [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.018 250273 DEBUG nova.compute.manager [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.019 250273 DEBUG nova.network.neutron [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.043 250273 INFO nova.virt.libvirt.driver [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.063 250273 DEBUG nova.compute.manager [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.146 250273 DEBUG nova.compute.manager [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.148 250273 DEBUG nova.virt.libvirt.driver [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.148 250273 INFO nova.virt.libvirt.driver [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Creating image(s)#033[00m
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.177 250273 DEBUG nova.storage.rbd_utils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] rbd image 52a35f72-182e-43fe-882b-db754817996b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.208 250273 DEBUG nova.storage.rbd_utils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] rbd image 52a35f72-182e-43fe-882b-db754817996b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.238 250273 DEBUG nova.storage.rbd_utils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] rbd image 52a35f72-182e-43fe-882b-db754817996b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.243 250273 DEBUG oslo_concurrency.processutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:10:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:29.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.337 250273 DEBUG oslo_concurrency.processutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.338 250273 DEBUG oslo_concurrency.lockutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.339 250273 DEBUG oslo_concurrency.lockutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.339 250273 DEBUG oslo_concurrency.lockutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.371 250273 DEBUG nova.storage.rbd_utils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] rbd image 52a35f72-182e-43fe-882b-db754817996b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.376 250273 DEBUG oslo_concurrency.processutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 52a35f72-182e-43fe-882b-db754817996b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:10:29 np0005593232 podman[422324]: 2026-01-23 11:10:29.39199076 +0000 UTC m=+0.052362404 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 06:10:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4343: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.654 250273 DEBUG oslo_concurrency.processutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 52a35f72-182e-43fe-882b-db754817996b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.724 250273 DEBUG nova.storage.rbd_utils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] resizing rbd image 52a35f72-182e-43fe-882b-db754817996b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.833 250273 DEBUG nova.objects.instance [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Lazy-loading 'migration_context' on Instance uuid 52a35f72-182e-43fe-882b-db754817996b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 06:10:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:29.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.861 250273 DEBUG nova.virt.libvirt.driver [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.862 250273 DEBUG nova.virt.libvirt.driver [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Ensure instance console log exists: /var/lib/nova/instances/52a35f72-182e-43fe-882b-db754817996b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.862 250273 DEBUG oslo_concurrency.lockutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.862 250273 DEBUG oslo_concurrency.lockutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:10:29 np0005593232 nova_compute[250269]: 2026-01-23 11:10:29.863 250273 DEBUG oslo_concurrency.lockutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:10:30 np0005593232 nova_compute[250269]: 2026-01-23 11:10:30.339 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:10:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:31.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:10:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:10:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4344: 321 pgs: 321 active+clean; 134 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 7.4 KiB/s rd, 445 KiB/s wr, 14 op/s
Jan 23 06:10:31 np0005593232 nova_compute[250269]: 2026-01-23 11:10:31.656 250273 DEBUG nova.network.neutron [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Successfully created port: 9c953981-c820-4533-8bf7-e5bcac25fc22 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 06:10:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:10:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:31.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:10:32 np0005593232 nova_compute[250269]: 2026-01-23 11:10:32.325 250273 DEBUG nova.network.neutron [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Successfully updated port: 9c953981-c820-4533-8bf7-e5bcac25fc22 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 06:10:32 np0005593232 nova_compute[250269]: 2026-01-23 11:10:32.350 250273 DEBUG oslo_concurrency.lockutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Acquiring lock "refresh_cache-52a35f72-182e-43fe-882b-db754817996b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 06:10:32 np0005593232 nova_compute[250269]: 2026-01-23 11:10:32.350 250273 DEBUG oslo_concurrency.lockutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Acquired lock "refresh_cache-52a35f72-182e-43fe-882b-db754817996b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 06:10:32 np0005593232 nova_compute[250269]: 2026-01-23 11:10:32.351 250273 DEBUG nova.network.neutron [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 06:10:32 np0005593232 nova_compute[250269]: 2026-01-23 11:10:32.422 250273 DEBUG nova.compute.manager [req-ca1b3188-93fa-4aa2-8649-105f4791ddb2 req-59f551d3-97ee-4c6c-8d7e-9ace9a8ffaa9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Received event network-changed-9c953981-c820-4533-8bf7-e5bcac25fc22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:10:32 np0005593232 nova_compute[250269]: 2026-01-23 11:10:32.423 250273 DEBUG nova.compute.manager [req-ca1b3188-93fa-4aa2-8649-105f4791ddb2 req-59f551d3-97ee-4c6c-8d7e-9ace9a8ffaa9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Refreshing instance network info cache due to event network-changed-9c953981-c820-4533-8bf7-e5bcac25fc22. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 06:10:32 np0005593232 nova_compute[250269]: 2026-01-23 11:10:32.423 250273 DEBUG oslo_concurrency.lockutils [req-ca1b3188-93fa-4aa2-8649-105f4791ddb2 req-59f551d3-97ee-4c6c-8d7e-9ace9a8ffaa9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-52a35f72-182e-43fe-882b-db754817996b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 06:10:32 np0005593232 nova_compute[250269]: 2026-01-23 11:10:32.509 250273 DEBUG nova.network.neutron [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 06:10:32 np0005593232 nova_compute[250269]: 2026-01-23 11:10:32.605 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:10:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:33.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:10:33 np0005593232 nova_compute[250269]: 2026-01-23 11:10:33.319 250273 DEBUG nova.network.neutron [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Updating instance_info_cache with network_info: [{"id": "9c953981-c820-4533-8bf7-e5bcac25fc22", "address": "fa:16:3e:89:d3:05", "network": {"id": "f4706ca2-15b6-4141-8d7b-8d4cab159f24", "bridge": "br-int", "label": "tempest-TestServerMultinode-1792921973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76608d1b79f84e2385a2dcadacaea9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c953981-c8", "ovs_interfaceid": "9c953981-c820-4533-8bf7-e5bcac25fc22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 06:10:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4345: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 06:10:33 np0005593232 nova_compute[250269]: 2026-01-23 11:10:33.598 250273 DEBUG oslo_concurrency.lockutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Releasing lock "refresh_cache-52a35f72-182e-43fe-882b-db754817996b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 06:10:33 np0005593232 nova_compute[250269]: 2026-01-23 11:10:33.599 250273 DEBUG nova.compute.manager [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Instance network_info: |[{"id": "9c953981-c820-4533-8bf7-e5bcac25fc22", "address": "fa:16:3e:89:d3:05", "network": {"id": "f4706ca2-15b6-4141-8d7b-8d4cab159f24", "bridge": "br-int", "label": "tempest-TestServerMultinode-1792921973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76608d1b79f84e2385a2dcadacaea9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c953981-c8", "ovs_interfaceid": "9c953981-c820-4533-8bf7-e5bcac25fc22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 06:10:33 np0005593232 nova_compute[250269]: 2026-01-23 11:10:33.600 250273 DEBUG oslo_concurrency.lockutils [req-ca1b3188-93fa-4aa2-8649-105f4791ddb2 req-59f551d3-97ee-4c6c-8d7e-9ace9a8ffaa9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-52a35f72-182e-43fe-882b-db754817996b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 06:10:33 np0005593232 nova_compute[250269]: 2026-01-23 11:10:33.601 250273 DEBUG nova.network.neutron [req-ca1b3188-93fa-4aa2-8649-105f4791ddb2 req-59f551d3-97ee-4c6c-8d7e-9ace9a8ffaa9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Refreshing network info cache for port 9c953981-c820-4533-8bf7-e5bcac25fc22 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 06:10:33 np0005593232 nova_compute[250269]: 2026-01-23 11:10:33.608 250273 DEBUG nova.virt.libvirt.driver [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Start _get_guest_xml network_info=[{"id": "9c953981-c820-4533-8bf7-e5bcac25fc22", "address": "fa:16:3e:89:d3:05", "network": {"id": "f4706ca2-15b6-4141-8d7b-8d4cab159f24", "bridge": "br-int", "label": "tempest-TestServerMultinode-1792921973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76608d1b79f84e2385a2dcadacaea9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c953981-c8", "ovs_interfaceid": "9c953981-c820-4533-8bf7-e5bcac25fc22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 06:10:33 np0005593232 nova_compute[250269]: 2026-01-23 11:10:33.616 250273 WARNING nova.virt.libvirt.driver [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 06:10:33 np0005593232 nova_compute[250269]: 2026-01-23 11:10:33.625 250273 DEBUG nova.virt.libvirt.host [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 06:10:33 np0005593232 nova_compute[250269]: 2026-01-23 11:10:33.626 250273 DEBUG nova.virt.libvirt.host [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 06:10:33 np0005593232 nova_compute[250269]: 2026-01-23 11:10:33.630 250273 DEBUG nova.virt.libvirt.host [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 06:10:33 np0005593232 nova_compute[250269]: 2026-01-23 11:10:33.630 250273 DEBUG nova.virt.libvirt.host [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 06:10:33 np0005593232 nova_compute[250269]: 2026-01-23 11:10:33.632 250273 DEBUG nova.virt.libvirt.driver [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 06:10:33 np0005593232 nova_compute[250269]: 2026-01-23 11:10:33.633 250273 DEBUG nova.virt.hardware [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 06:10:33 np0005593232 nova_compute[250269]: 2026-01-23 11:10:33.633 250273 DEBUG nova.virt.hardware [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 06:10:33 np0005593232 nova_compute[250269]: 2026-01-23 11:10:33.633 250273 DEBUG nova.virt.hardware [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 06:10:33 np0005593232 nova_compute[250269]: 2026-01-23 11:10:33.634 250273 DEBUG nova.virt.hardware [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 06:10:33 np0005593232 nova_compute[250269]: 2026-01-23 11:10:33.634 250273 DEBUG nova.virt.hardware [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 06:10:33 np0005593232 nova_compute[250269]: 2026-01-23 11:10:33.634 250273 DEBUG nova.virt.hardware [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 06:10:33 np0005593232 nova_compute[250269]: 2026-01-23 11:10:33.635 250273 DEBUG nova.virt.hardware [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 06:10:33 np0005593232 nova_compute[250269]: 2026-01-23 11:10:33.635 250273 DEBUG nova.virt.hardware [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 06:10:33 np0005593232 nova_compute[250269]: 2026-01-23 11:10:33.635 250273 DEBUG nova.virt.hardware [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 06:10:33 np0005593232 nova_compute[250269]: 2026-01-23 11:10:33.636 250273 DEBUG nova.virt.hardware [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 06:10:33 np0005593232 nova_compute[250269]: 2026-01-23 11:10:33.636 250273 DEBUG nova.virt.hardware [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 06:10:33 np0005593232 nova_compute[250269]: 2026-01-23 11:10:33.640 250273 DEBUG oslo_concurrency.processutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:10:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:33.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 06:10:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4193388443' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.165 250273 DEBUG oslo_concurrency.processutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.193 250273 DEBUG nova.storage.rbd_utils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] rbd image 52a35f72-182e-43fe-882b-db754817996b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.198 250273 DEBUG oslo_concurrency.processutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.317 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.318 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:10:34 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 06:10:34 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2678406409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.688 250273 DEBUG oslo_concurrency.processutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.691 250273 DEBUG nova.virt.libvirt.vif [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T11:10:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-763432880',display_name='tempest-TestServerMultinode-server-763432880',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-763432880',id=220,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4691e06029a4b11bbda2856a451bd88',ramdisk_id='',reservation_id='r-fi8mfmvf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1152571872',owner_user_name='tempest-TestServerMultinode-115
2571872-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T11:10:29Z,user_data=None,user_id='ac51edf400184ec0b11ee5acc335ff21',uuid=52a35f72-182e-43fe-882b-db754817996b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c953981-c820-4533-8bf7-e5bcac25fc22", "address": "fa:16:3e:89:d3:05", "network": {"id": "f4706ca2-15b6-4141-8d7b-8d4cab159f24", "bridge": "br-int", "label": "tempest-TestServerMultinode-1792921973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76608d1b79f84e2385a2dcadacaea9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c953981-c8", "ovs_interfaceid": "9c953981-c820-4533-8bf7-e5bcac25fc22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.691 250273 DEBUG nova.network.os_vif_util [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Converting VIF {"id": "9c953981-c820-4533-8bf7-e5bcac25fc22", "address": "fa:16:3e:89:d3:05", "network": {"id": "f4706ca2-15b6-4141-8d7b-8d4cab159f24", "bridge": "br-int", "label": "tempest-TestServerMultinode-1792921973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76608d1b79f84e2385a2dcadacaea9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c953981-c8", "ovs_interfaceid": "9c953981-c820-4533-8bf7-e5bcac25fc22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.693 250273 DEBUG nova.network.os_vif_util [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:d3:05,bridge_name='br-int',has_traffic_filtering=True,id=9c953981-c820-4533-8bf7-e5bcac25fc22,network=Network(f4706ca2-15b6-4141-8d7b-8d4cab159f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c953981-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.695 250273 DEBUG nova.objects.instance [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Lazy-loading 'pci_devices' on Instance uuid 52a35f72-182e-43fe-882b-db754817996b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.715 250273 DEBUG nova.virt.libvirt.driver [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] End _get_guest_xml xml=<domain type="kvm">
Jan 23 06:10:34 np0005593232 nova_compute[250269]:  <uuid>52a35f72-182e-43fe-882b-db754817996b</uuid>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:  <name>instance-000000dc</name>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <nova:name>tempest-TestServerMultinode-server-763432880</nova:name>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 11:10:33</nova:creationTime>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 06:10:34 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:        <nova:user uuid="ac51edf400184ec0b11ee5acc335ff21">tempest-TestServerMultinode-1152571872-project-admin</nova:user>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:        <nova:project uuid="d4691e06029a4b11bbda2856a451bd88">tempest-TestServerMultinode-1152571872</nova:project>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:        <nova:port uuid="9c953981-c820-4533-8bf7-e5bcac25fc22">
Jan 23 06:10:34 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <system>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <entry name="serial">52a35f72-182e-43fe-882b-db754817996b</entry>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <entry name="uuid">52a35f72-182e-43fe-882b-db754817996b</entry>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    </system>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:  <os>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:  </os>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:  <features>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:  </features>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:  </clock>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:  <devices>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/52a35f72-182e-43fe-882b-db754817996b_disk">
Jan 23 06:10:34 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      </source>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 06:10:34 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      </auth>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    </disk>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/52a35f72-182e-43fe-882b-db754817996b_disk.config">
Jan 23 06:10:34 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      </source>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 06:10:34 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      </auth>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    </disk>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:89:d3:05"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <target dev="tap9c953981-c8"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    </interface>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/52a35f72-182e-43fe-882b-db754817996b/console.log" append="off"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    </serial>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <video>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    </video>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    </rng>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 06:10:34 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 06:10:34 np0005593232 nova_compute[250269]:  </devices>
Jan 23 06:10:34 np0005593232 nova_compute[250269]: </domain>
Jan 23 06:10:34 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.717 250273 DEBUG nova.compute.manager [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Preparing to wait for external event network-vif-plugged-9c953981-c820-4533-8bf7-e5bcac25fc22 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.718 250273 DEBUG oslo_concurrency.lockutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Acquiring lock "52a35f72-182e-43fe-882b-db754817996b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.719 250273 DEBUG oslo_concurrency.lockutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Lock "52a35f72-182e-43fe-882b-db754817996b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.720 250273 DEBUG oslo_concurrency.lockutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Lock "52a35f72-182e-43fe-882b-db754817996b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.721 250273 DEBUG nova.virt.libvirt.vif [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T11:10:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerMultinode-server-763432880',display_name='tempest-TestServerMultinode-server-763432880',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-763432880',id=220,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d4691e06029a4b11bbda2856a451bd88',ramdisk_id='',reservation_id='r-fi8mfmvf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerMultinode-1152571872',owner_user_name='tempest-TestServerMultinode-1152571872-project-admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T11:10:29Z,user_data=None,user_id='ac51edf400184ec0b11ee5acc335ff21',uuid=52a35f72-182e-43fe-882b-db754817996b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c953981-c820-4533-8bf7-e5bcac25fc22", "address": "fa:16:3e:89:d3:05", "network": {"id": "f4706ca2-15b6-4141-8d7b-8d4cab159f24", "bridge": "br-int", "label": "tempest-TestServerMultinode-1792921973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76608d1b79f84e2385a2dcadacaea9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c953981-c8", "ovs_interfaceid": "9c953981-c820-4533-8bf7-e5bcac25fc22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.721 250273 DEBUG nova.network.os_vif_util [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Converting VIF {"id": "9c953981-c820-4533-8bf7-e5bcac25fc22", "address": "fa:16:3e:89:d3:05", "network": {"id": "f4706ca2-15b6-4141-8d7b-8d4cab159f24", "bridge": "br-int", "label": "tempest-TestServerMultinode-1792921973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76608d1b79f84e2385a2dcadacaea9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c953981-c8", "ovs_interfaceid": "9c953981-c820-4533-8bf7-e5bcac25fc22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.723 250273 DEBUG nova.network.os_vif_util [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:d3:05,bridge_name='br-int',has_traffic_filtering=True,id=9c953981-c820-4533-8bf7-e5bcac25fc22,network=Network(f4706ca2-15b6-4141-8d7b-8d4cab159f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c953981-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.723 250273 DEBUG os_vif [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:d3:05,bridge_name='br-int',has_traffic_filtering=True,id=9c953981-c820-4533-8bf7-e5bcac25fc22,network=Network(f4706ca2-15b6-4141-8d7b-8d4cab159f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c953981-c8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.725 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.726 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.727 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.732 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.732 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9c953981-c8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.733 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9c953981-c8, col_values=(('external_ids', {'iface-id': '9c953981-c820-4533-8bf7-e5bcac25fc22', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:d3:05', 'vm-uuid': '52a35f72-182e-43fe-882b-db754817996b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.735 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.736 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 06:10:34 np0005593232 NetworkManager[49057]: <info>  [1769166634.7365] manager: (tap9c953981-c8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/396)
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.742 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:34 np0005593232 nova_compute[250269]: 2026-01-23 11:10:34.743 250273 INFO os_vif [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:d3:05,bridge_name='br-int',has_traffic_filtering=True,id=9c953981-c820-4533-8bf7-e5bcac25fc22,network=Network(f4706ca2-15b6-4141-8d7b-8d4cab159f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c953981-c8')#033[00m
Jan 23 06:10:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000055s ======
Jan 23 06:10:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:35.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Jan 23 06:10:35 np0005593232 nova_compute[250269]: 2026-01-23 11:10:35.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:10:35 np0005593232 nova_compute[250269]: 2026-01-23 11:10:35.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 06:10:35 np0005593232 nova_compute[250269]: 2026-01-23 11:10:35.340 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4346: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 06:10:35 np0005593232 nova_compute[250269]: 2026-01-23 11:10:35.690 250273 DEBUG nova.virt.libvirt.driver [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 06:10:35 np0005593232 nova_compute[250269]: 2026-01-23 11:10:35.691 250273 DEBUG nova.virt.libvirt.driver [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 06:10:35 np0005593232 nova_compute[250269]: 2026-01-23 11:10:35.692 250273 DEBUG nova.virt.libvirt.driver [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] No VIF found with MAC fa:16:3e:89:d3:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 06:10:35 np0005593232 nova_compute[250269]: 2026-01-23 11:10:35.693 250273 INFO nova.virt.libvirt.driver [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Using config drive#033[00m
Jan 23 06:10:35 np0005593232 nova_compute[250269]: 2026-01-23 11:10:35.733 250273 DEBUG nova.storage.rbd_utils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] rbd image 52a35f72-182e-43fe-882b-db754817996b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 06:10:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:10:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:35.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:10:36 np0005593232 nova_compute[250269]: 2026-01-23 11:10:36.183 250273 DEBUG nova.network.neutron [req-ca1b3188-93fa-4aa2-8649-105f4791ddb2 req-59f551d3-97ee-4c6c-8d7e-9ace9a8ffaa9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Updated VIF entry in instance network info cache for port 9c953981-c820-4533-8bf7-e5bcac25fc22. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 06:10:36 np0005593232 nova_compute[250269]: 2026-01-23 11:10:36.184 250273 DEBUG nova.network.neutron [req-ca1b3188-93fa-4aa2-8649-105f4791ddb2 req-59f551d3-97ee-4c6c-8d7e-9ace9a8ffaa9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Updating instance_info_cache with network_info: [{"id": "9c953981-c820-4533-8bf7-e5bcac25fc22", "address": "fa:16:3e:89:d3:05", "network": {"id": "f4706ca2-15b6-4141-8d7b-8d4cab159f24", "bridge": "br-int", "label": "tempest-TestServerMultinode-1792921973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76608d1b79f84e2385a2dcadacaea9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c953981-c8", "ovs_interfaceid": "9c953981-c820-4533-8bf7-e5bcac25fc22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 06:10:36 np0005593232 nova_compute[250269]: 2026-01-23 11:10:36.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:10:36 np0005593232 nova_compute[250269]: 2026-01-23 11:10:36.321 250273 DEBUG oslo_concurrency.lockutils [req-ca1b3188-93fa-4aa2-8649-105f4791ddb2 req-59f551d3-97ee-4c6c-8d7e-9ace9a8ffaa9 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-52a35f72-182e-43fe-882b-db754817996b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 06:10:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:10:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:37.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:37 np0005593232 nova_compute[250269]: 2026-01-23 11:10:37.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:10:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4347: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Jan 23 06:10:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_11:10:37
Jan 23 06:10:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 06:10:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 06:10:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'vms', 'volumes', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'images', 'default.rgw.meta', '.rgw.root']
Jan 23 06:10:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 06:10:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:10:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:10:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:10:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:10:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:10:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:10:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:10:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:37.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:10:37 np0005593232 nova_compute[250269]: 2026-01-23 11:10:37.869 250273 INFO nova.virt.libvirt.driver [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Creating config drive at /var/lib/nova/instances/52a35f72-182e-43fe-882b-db754817996b/disk.config#033[00m
Jan 23 06:10:37 np0005593232 nova_compute[250269]: 2026-01-23 11:10:37.875 250273 DEBUG oslo_concurrency.processutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52a35f72-182e-43fe-882b-db754817996b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxu1_7f0v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:10:38 np0005593232 nova_compute[250269]: 2026-01-23 11:10:38.032 250273 DEBUG oslo_concurrency.processutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52a35f72-182e-43fe-882b-db754817996b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxu1_7f0v" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:10:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 06:10:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:10:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 06:10:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:10:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:10:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:10:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:10:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:10:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:10:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:10:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:39.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4348: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 23 06:10:39 np0005593232 nova_compute[250269]: 2026-01-23 11:10:39.470 250273 DEBUG nova.storage.rbd_utils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] rbd image 52a35f72-182e-43fe-882b-db754817996b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 06:10:39 np0005593232 nova_compute[250269]: 2026-01-23 11:10:39.475 250273 DEBUG oslo_concurrency.processutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52a35f72-182e-43fe-882b-db754817996b/disk.config 52a35f72-182e-43fe-882b-db754817996b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:10:39 np0005593232 nova_compute[250269]: 2026-01-23 11:10:39.508 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:10:39 np0005593232 nova_compute[250269]: 2026-01-23 11:10:39.509 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 06:10:39 np0005593232 nova_compute[250269]: 2026-01-23 11:10:39.509 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 06:10:39 np0005593232 nova_compute[250269]: 2026-01-23 11:10:39.735 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:39 np0005593232 nova_compute[250269]: 2026-01-23 11:10:39.795 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: 52a35f72-182e-43fe-882b-db754817996b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 23 06:10:39 np0005593232 nova_compute[250269]: 2026-01-23 11:10:39.796 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 06:10:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:10:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:39.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:10:40 np0005593232 nova_compute[250269]: 2026-01-23 11:10:40.342 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:41.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:10:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4349: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 23 06:10:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:41.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:42.694 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:10:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:42.695 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:10:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:42.695 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:10:43 np0005593232 nova_compute[250269]: 2026-01-23 11:10:43.154 250273 DEBUG oslo_concurrency.processutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52a35f72-182e-43fe-882b-db754817996b/disk.config 52a35f72-182e-43fe-882b-db754817996b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.679s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:10:43 np0005593232 nova_compute[250269]: 2026-01-23 11:10:43.155 250273 INFO nova.virt.libvirt.driver [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Deleting local config drive /var/lib/nova/instances/52a35f72-182e-43fe-882b-db754817996b/disk.config because it was imported into RBD.#033[00m
Jan 23 06:10:43 np0005593232 kernel: tap9c953981-c8: entered promiscuous mode
Jan 23 06:10:43 np0005593232 NetworkManager[49057]: <info>  [1769166643.2228] manager: (tap9c953981-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/397)
Jan 23 06:10:43 np0005593232 ovn_controller[151001]: 2026-01-23T11:10:43Z|00855|binding|INFO|Claiming lport 9c953981-c820-4533-8bf7-e5bcac25fc22 for this chassis.
Jan 23 06:10:43 np0005593232 ovn_controller[151001]: 2026-01-23T11:10:43Z|00856|binding|INFO|9c953981-c820-4533-8bf7-e5bcac25fc22: Claiming fa:16:3e:89:d3:05 10.100.0.5
Jan 23 06:10:43 np0005593232 nova_compute[250269]: 2026-01-23 11:10:43.224 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:43 np0005593232 nova_compute[250269]: 2026-01-23 11:10:43.229 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:43 np0005593232 systemd-udevd[422648]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 06:10:43 np0005593232 systemd-machined[215836]: New machine qemu-97-instance-000000dc.
Jan 23 06:10:43 np0005593232 NetworkManager[49057]: <info>  [1769166643.2733] device (tap9c953981-c8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 06:10:43 np0005593232 NetworkManager[49057]: <info>  [1769166643.2738] device (tap9c953981-c8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 06:10:43 np0005593232 nova_compute[250269]: 2026-01-23 11:10:43.291 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:43 np0005593232 ovn_controller[151001]: 2026-01-23T11:10:43Z|00857|binding|INFO|Setting lport 9c953981-c820-4533-8bf7-e5bcac25fc22 ovn-installed in OVS
Jan 23 06:10:43 np0005593232 nova_compute[250269]: 2026-01-23 11:10:43.298 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:43 np0005593232 systemd[1]: Started Virtual Machine qemu-97-instance-000000dc.
Jan 23 06:10:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:43.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4350: 321 pgs: 321 active+clean; 242 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 27 KiB/s rd, 4.4 MiB/s wr, 44 op/s
Jan 23 06:10:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:43.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:44 np0005593232 nova_compute[250269]: 2026-01-23 11:10:44.264 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769166644.2643623, 52a35f72-182e-43fe-882b-db754817996b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 06:10:44 np0005593232 nova_compute[250269]: 2026-01-23 11:10:44.265 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 52a35f72-182e-43fe-882b-db754817996b] VM Started (Lifecycle Event)#033[00m
Jan 23 06:10:44 np0005593232 nova_compute[250269]: 2026-01-23 11:10:44.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:10:44 np0005593232 nova_compute[250269]: 2026-01-23 11:10:44.738 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:45.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:45 np0005593232 nova_compute[250269]: 2026-01-23 11:10:45.344 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4351: 321 pgs: 321 active+clean; 242 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 3.1 MiB/s wr, 32 op/s
Jan 23 06:10:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:45.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:10:46 np0005593232 ovn_controller[151001]: 2026-01-23T11:10:46Z|00858|binding|INFO|Setting lport 9c953981-c820-4533-8bf7-e5bcac25fc22 up in Southbound
Jan 23 06:10:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:46.709 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:d3:05 10.100.0.5'], port_security=['fa:16:3e:89:d3:05 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '52a35f72-182e-43fe-882b-db754817996b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4706ca2-15b6-4141-8d7b-8d4cab159f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4691e06029a4b11bbda2856a451bd88', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0ca02cfe-9b98-40f4-8c92-4cc40f5f9499', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9205c727-159c-48df-8bc6-3771f4de4cfc, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=9c953981-c820-4533-8bf7-e5bcac25fc22) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 06:10:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:46.710 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 9c953981-c820-4533-8bf7-e5bcac25fc22 in datapath f4706ca2-15b6-4141-8d7b-8d4cab159f24 bound to our chassis#033[00m
Jan 23 06:10:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:46.712 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f4706ca2-15b6-4141-8d7b-8d4cab159f24#033[00m
Jan 23 06:10:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:46.724 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[5fafac3e-abcf-4222-87ec-3128b27efdb2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:10:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:46.724 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf4706ca2-11 in ovnmeta-f4706ca2-15b6-4141-8d7b-8d4cab159f24 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 06:10:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:46.726 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf4706ca2-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 06:10:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:46.726 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[caf9fd41-3c72-4ade-a281-4f841ffad237]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:10:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:46.727 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[cde860ad-4654-4c17-a7b1-b7f0eeb6ee91]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:10:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:46.740 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[ad25d105-f8c6-415d-b4d2-21fac5fd68d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:10:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:46.753 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[faee68ef-2188-4dc9-b973-2c02cdff7ce7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:10:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:46.784 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[6a2745b0-a827-409d-8ca9-2437f398f712]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:10:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:46.789 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb9641a-120c-4e21-9286-d145a019a6df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:10:46 np0005593232 NetworkManager[49057]: <info>  [1769166646.7902] manager: (tapf4706ca2-10): new Veth device (/org/freedesktop/NetworkManager/Devices/398)
Jan 23 06:10:46 np0005593232 systemd-udevd[422707]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 06:10:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:46.827 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[bfae674f-21d2-4f7c-992c-2ab1e81510e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:10:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:46.831 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[a10157bc-ac7d-4f45-b0c8-54ac4ac036a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:10:46 np0005593232 NetworkManager[49057]: <info>  [1769166646.8617] device (tapf4706ca2-10): carrier: link connected
Jan 23 06:10:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:46.866 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[56fd885d-b5f1-4516-b8fd-5de6c0531ca6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:10:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:46.883 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[818eeb14-4bfa-4d2f-a82d-2b47b3b7cca1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4706ca2-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:aa:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 258], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1056000, 'reachable_time': 42763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422726, 'error': None, 'target': 'ovnmeta-f4706ca2-15b6-4141-8d7b-8d4cab159f24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:10:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:46.896 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[812e29a5-d8d5-4cb9-92d0-c03a9cf79319]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feba:aa5b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1056000, 'tstamp': 1056000}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 422727, 'error': None, 'target': 'ovnmeta-f4706ca2-15b6-4141-8d7b-8d4cab159f24', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:10:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:46.917 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[2261b91d-ecdb-4783-8cea-3dc5868aa011]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4706ca2-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:aa:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 258], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1056000, 'reachable_time': 42763, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 422728, 'error': None, 'target': 'ovnmeta-f4706ca2-15b6-4141-8d7b-8d4cab159f24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:10:46 np0005593232 nova_compute[250269]: 2026-01-23 11:10:46.945 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 52a35f72-182e-43fe-882b-db754817996b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:10:46 np0005593232 nova_compute[250269]: 2026-01-23 11:10:46.949 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769166644.2674618, 52a35f72-182e-43fe-882b-db754817996b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 06:10:46 np0005593232 nova_compute[250269]: 2026-01-23 11:10:46.949 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 52a35f72-182e-43fe-882b-db754817996b] VM Paused (Lifecycle Event)#033[00m
Jan 23 06:10:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:46.949 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[d3dfb606-d226-458a-b3ec-c9b6c19d4325]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:47.002 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[0e1b3001-b46b-421d-8ad6-ad618babf1a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:47.004 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4706ca2-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:47.004 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:47.005 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4706ca2-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:10:47 np0005593232 NetworkManager[49057]: <info>  [1769166647.0069] manager: (tapf4706ca2-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/399)
Jan 23 06:10:47 np0005593232 nova_compute[250269]: 2026-01-23 11:10:47.006 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:47 np0005593232 kernel: tapf4706ca2-10: entered promiscuous mode
Jan 23 06:10:47 np0005593232 nova_compute[250269]: 2026-01-23 11:10:47.009 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:47.010 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf4706ca2-10, col_values=(('external_ids', {'iface-id': '5655a848-aba1-4fa8-84e5-387dc4198f8a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:10:47 np0005593232 ovn_controller[151001]: 2026-01-23T11:10:47Z|00859|binding|INFO|Releasing lport 5655a848-aba1-4fa8-84e5-387dc4198f8a from this chassis (sb_readonly=0)
Jan 23 06:10:47 np0005593232 nova_compute[250269]: 2026-01-23 11:10:47.012 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:47 np0005593232 nova_compute[250269]: 2026-01-23 11:10:47.012 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:47.013 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f4706ca2-15b6-4141-8d7b-8d4cab159f24.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f4706ca2-15b6-4141-8d7b-8d4cab159f24.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:47.014 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[743cb32f-175f-46ab-94c1-411a95a7a7a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:47.016 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-f4706ca2-15b6-4141-8d7b-8d4cab159f24
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/f4706ca2-15b6-4141-8d7b-8d4cab159f24.pid.haproxy
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID f4706ca2-15b6-4141-8d7b-8d4cab159f24
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 06:10:47 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:47.017 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f4706ca2-15b6-4141-8d7b-8d4cab159f24', 'env', 'PROCESS_TAG=haproxy-f4706ca2-15b6-4141-8d7b-8d4cab159f24', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f4706ca2-15b6-4141-8d7b-8d4cab159f24.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 06:10:47 np0005593232 nova_compute[250269]: 2026-01-23 11:10:47.029 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:47 np0005593232 nova_compute[250269]: 2026-01-23 11:10:47.032 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 52a35f72-182e-43fe-882b-db754817996b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:10:47 np0005593232 nova_compute[250269]: 2026-01-23 11:10:47.036 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 52a35f72-182e-43fe-882b-db754817996b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 06:10:47 np0005593232 nova_compute[250269]: 2026-01-23 11:10:47.070 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 52a35f72-182e-43fe-882b-db754817996b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 06:10:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:47.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:47 np0005593232 podman[422762]: 2026-01-23 11:10:47.384056954 +0000 UTC m=+0.050849424 container create 2cec3f28ae95fc80d28a097c7a579b84a8a5b5e8e9342c81f6e8f2fcf92bf518 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4706ca2-15b6-4141-8d7b-8d4cab159f24, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 23 06:10:47 np0005593232 systemd[1]: Started libpod-conmon-2cec3f28ae95fc80d28a097c7a579b84a8a5b5e8e9342c81f6e8f2fcf92bf518.scope.
Jan 23 06:10:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4352: 321 pgs: 321 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 31 KiB/s rd, 3.6 MiB/s wr, 51 op/s
Jan 23 06:10:47 np0005593232 podman[422762]: 2026-01-23 11:10:47.355354449 +0000 UTC m=+0.022146939 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 06:10:47 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:10:47 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87ee9806b58fdb12a39c05cda05c7dbc7a1cb5143294b26c091441ac70ec2513/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 06:10:47 np0005593232 podman[422762]: 2026-01-23 11:10:47.479978695 +0000 UTC m=+0.146771195 container init 2cec3f28ae95fc80d28a097c7a579b84a8a5b5e8e9342c81f6e8f2fcf92bf518 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4706ca2-15b6-4141-8d7b-8d4cab159f24, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 06:10:47 np0005593232 podman[422762]: 2026-01-23 11:10:47.486648925 +0000 UTC m=+0.153441395 container start 2cec3f28ae95fc80d28a097c7a579b84a8a5b5e8e9342c81f6e8f2fcf92bf518 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4706ca2-15b6-4141-8d7b-8d4cab159f24, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 06:10:47 np0005593232 neutron-haproxy-ovnmeta-f4706ca2-15b6-4141-8d7b-8d4cab159f24[422779]: [NOTICE]   (422783) : New worker (422785) forked
Jan 23 06:10:47 np0005593232 neutron-haproxy-ovnmeta-f4706ca2-15b6-4141-8d7b-8d4cab159f24[422779]: [NOTICE]   (422783) : Loading success.
Jan 23 06:10:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:47.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 06:10:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:10:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:10:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:10:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029733083821887212 of space, bias 1.0, pg target 0.8919925146566163 quantized to 32 (current 32)
Jan 23 06:10:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:10:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 06:10:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:10:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:10:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:10:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:10:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:10:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:10:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:10:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:10:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:10:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:10:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:10:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:10:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:10:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:10:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:10:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:10:48 np0005593232 nova_compute[250269]: 2026-01-23 11:10:48.232 250273 DEBUG nova.compute.manager [req-b15573d4-f471-4d27-a0e6-986b57d18600 req-159ab168-eb2b-4121-a42d-6a41827d06e8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Received event network-vif-plugged-9c953981-c820-4533-8bf7-e5bcac25fc22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:10:48 np0005593232 nova_compute[250269]: 2026-01-23 11:10:48.233 250273 DEBUG oslo_concurrency.lockutils [req-b15573d4-f471-4d27-a0e6-986b57d18600 req-159ab168-eb2b-4121-a42d-6a41827d06e8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "52a35f72-182e-43fe-882b-db754817996b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:10:48 np0005593232 nova_compute[250269]: 2026-01-23 11:10:48.234 250273 DEBUG oslo_concurrency.lockutils [req-b15573d4-f471-4d27-a0e6-986b57d18600 req-159ab168-eb2b-4121-a42d-6a41827d06e8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "52a35f72-182e-43fe-882b-db754817996b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:10:48 np0005593232 nova_compute[250269]: 2026-01-23 11:10:48.234 250273 DEBUG oslo_concurrency.lockutils [req-b15573d4-f471-4d27-a0e6-986b57d18600 req-159ab168-eb2b-4121-a42d-6a41827d06e8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "52a35f72-182e-43fe-882b-db754817996b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:10:48 np0005593232 nova_compute[250269]: 2026-01-23 11:10:48.234 250273 DEBUG nova.compute.manager [req-b15573d4-f471-4d27-a0e6-986b57d18600 req-159ab168-eb2b-4121-a42d-6a41827d06e8 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Processing event network-vif-plugged-9c953981-c820-4533-8bf7-e5bcac25fc22 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 06:10:48 np0005593232 nova_compute[250269]: 2026-01-23 11:10:48.235 250273 DEBUG nova.compute.manager [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 06:10:48 np0005593232 nova_compute[250269]: 2026-01-23 11:10:48.243 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769166648.2432673, 52a35f72-182e-43fe-882b-db754817996b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 06:10:48 np0005593232 nova_compute[250269]: 2026-01-23 11:10:48.243 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 52a35f72-182e-43fe-882b-db754817996b] VM Resumed (Lifecycle Event)#033[00m
Jan 23 06:10:48 np0005593232 nova_compute[250269]: 2026-01-23 11:10:48.245 250273 DEBUG nova.virt.libvirt.driver [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 06:10:48 np0005593232 nova_compute[250269]: 2026-01-23 11:10:48.249 250273 INFO nova.virt.libvirt.driver [-] [instance: 52a35f72-182e-43fe-882b-db754817996b] Instance spawned successfully.#033[00m
Jan 23 06:10:48 np0005593232 nova_compute[250269]: 2026-01-23 11:10:48.250 250273 DEBUG nova.virt.libvirt.driver [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 06:10:48 np0005593232 nova_compute[250269]: 2026-01-23 11:10:48.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:10:48 np0005593232 nova_compute[250269]: 2026-01-23 11:10:48.327 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 52a35f72-182e-43fe-882b-db754817996b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:10:48 np0005593232 nova_compute[250269]: 2026-01-23 11:10:48.330 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 52a35f72-182e-43fe-882b-db754817996b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 06:10:48 np0005593232 nova_compute[250269]: 2026-01-23 11:10:48.707 250273 DEBUG nova.virt.libvirt.driver [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 06:10:48 np0005593232 nova_compute[250269]: 2026-01-23 11:10:48.709 250273 DEBUG nova.virt.libvirt.driver [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 06:10:48 np0005593232 nova_compute[250269]: 2026-01-23 11:10:48.709 250273 DEBUG nova.virt.libvirt.driver [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 06:10:48 np0005593232 nova_compute[250269]: 2026-01-23 11:10:48.710 250273 DEBUG nova.virt.libvirt.driver [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 06:10:48 np0005593232 nova_compute[250269]: 2026-01-23 11:10:48.710 250273 DEBUG nova.virt.libvirt.driver [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 06:10:48 np0005593232 nova_compute[250269]: 2026-01-23 11:10:48.711 250273 DEBUG nova.virt.libvirt.driver [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 06:10:48 np0005593232 nova_compute[250269]: 2026-01-23 11:10:48.864 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: 52a35f72-182e-43fe-882b-db754817996b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 06:10:48 np0005593232 nova_compute[250269]: 2026-01-23 11:10:48.983 250273 INFO nova.compute.manager [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Took 19.84 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 06:10:48 np0005593232 nova_compute[250269]: 2026-01-23 11:10:48.983 250273 DEBUG nova.compute.manager [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:10:49 np0005593232 nova_compute[250269]: 2026-01-23 11:10:49.164 250273 INFO nova.compute.manager [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Took 20.91 seconds to build instance.#033[00m
Jan 23 06:10:49 np0005593232 nova_compute[250269]: 2026-01-23 11:10:49.188 250273 DEBUG oslo_concurrency.lockutils [None req-cc49df4b-e2e2-49df-a982-65b22891f2c5 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Lock "52a35f72-182e-43fe-882b-db754817996b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.010s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:10:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:10:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:49.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:10:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4353: 321 pgs: 321 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 39 KiB/s rd, 3.6 MiB/s wr, 60 op/s
Jan 23 06:10:49 np0005593232 nova_compute[250269]: 2026-01-23 11:10:49.741 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:49.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:50 np0005593232 nova_compute[250269]: 2026-01-23 11:10:50.346 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:50 np0005593232 nova_compute[250269]: 2026-01-23 11:10:50.466 250273 DEBUG nova.compute.manager [req-dc569653-b11e-4d28-b991-85801071fb24 req-09cf2c28-8a17-4528-aa7a-0fbb035baf23 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Received event network-vif-plugged-9c953981-c820-4533-8bf7-e5bcac25fc22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:10:50 np0005593232 nova_compute[250269]: 2026-01-23 11:10:50.467 250273 DEBUG oslo_concurrency.lockutils [req-dc569653-b11e-4d28-b991-85801071fb24 req-09cf2c28-8a17-4528-aa7a-0fbb035baf23 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "52a35f72-182e-43fe-882b-db754817996b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:10:50 np0005593232 nova_compute[250269]: 2026-01-23 11:10:50.467 250273 DEBUG oslo_concurrency.lockutils [req-dc569653-b11e-4d28-b991-85801071fb24 req-09cf2c28-8a17-4528-aa7a-0fbb035baf23 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "52a35f72-182e-43fe-882b-db754817996b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:10:50 np0005593232 nova_compute[250269]: 2026-01-23 11:10:50.467 250273 DEBUG oslo_concurrency.lockutils [req-dc569653-b11e-4d28-b991-85801071fb24 req-09cf2c28-8a17-4528-aa7a-0fbb035baf23 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "52a35f72-182e-43fe-882b-db754817996b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:10:50 np0005593232 nova_compute[250269]: 2026-01-23 11:10:50.467 250273 DEBUG nova.compute.manager [req-dc569653-b11e-4d28-b991-85801071fb24 req-09cf2c28-8a17-4528-aa7a-0fbb035baf23 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] No waiting events found dispatching network-vif-plugged-9c953981-c820-4533-8bf7-e5bcac25fc22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 06:10:50 np0005593232 nova_compute[250269]: 2026-01-23 11:10:50.468 250273 WARNING nova.compute.manager [req-dc569653-b11e-4d28-b991-85801071fb24 req-09cf2c28-8a17-4528-aa7a-0fbb035baf23 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Received unexpected event network-vif-plugged-9c953981-c820-4533-8bf7-e5bcac25fc22 for instance with vm_state active and task_state None.#033[00m
Jan 23 06:10:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:10:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:10:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 06:10:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:10:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 06:10:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:10:51 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev be1d7480-8467-41c8-817a-68b51f33a362 does not exist
Jan 23 06:10:51 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 387690a0-f46b-4ef4-8a1f-8aef9edd5ff7 does not exist
Jan 23 06:10:51 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a4b66ccd-9453-4e89-ac97-1b322f43fa5b does not exist
Jan 23 06:10:51 np0005593232 nova_compute[250269]: 2026-01-23 11:10:51.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:10:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 06:10:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 06:10:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:51.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 06:10:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:10:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:10:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:10:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:10:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4354: 321 pgs: 321 active+clean; 259 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 37 KiB/s rd, 3.6 MiB/s wr, 57 op/s
Jan 23 06:10:51 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:10:51 np0005593232 nova_compute[250269]: 2026-01-23 11:10:51.618 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:10:51 np0005593232 nova_compute[250269]: 2026-01-23 11:10:51.619 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:10:51 np0005593232 nova_compute[250269]: 2026-01-23 11:10:51.619 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:10:51 np0005593232 nova_compute[250269]: 2026-01-23 11:10:51.619 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 06:10:51 np0005593232 nova_compute[250269]: 2026-01-23 11:10:51.620 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:10:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:51.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:10:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3779554260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:10:52 np0005593232 podman[423088]: 2026-01-23 11:10:52.060299892 +0000 UTC m=+0.028797938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:10:52 np0005593232 nova_compute[250269]: 2026-01-23 11:10:52.168 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:10:52 np0005593232 podman[423088]: 2026-01-23 11:10:52.254691858 +0000 UTC m=+0.223189884 container create a34510f1ab4704daaad0761952249349b32411a57c72785d5ce0649228fac92b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ritchie, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:10:52 np0005593232 systemd[1]: Started libpod-conmon-a34510f1ab4704daaad0761952249349b32411a57c72785d5ce0649228fac92b.scope.
Jan 23 06:10:52 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:10:52 np0005593232 podman[423088]: 2026-01-23 11:10:52.643051787 +0000 UTC m=+0.611549833 container init a34510f1ab4704daaad0761952249349b32411a57c72785d5ce0649228fac92b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ritchie, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:10:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:10:52 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:10:52 np0005593232 podman[423088]: 2026-01-23 11:10:52.652729131 +0000 UTC m=+0.621227157 container start a34510f1ab4704daaad0761952249349b32411a57c72785d5ce0649228fac92b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ritchie, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:10:52 np0005593232 podman[423088]: 2026-01-23 11:10:52.657915828 +0000 UTC m=+0.626413864 container attach a34510f1ab4704daaad0761952249349b32411a57c72785d5ce0649228fac92b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Jan 23 06:10:52 np0005593232 eloquent_ritchie[423107]: 167 167
Jan 23 06:10:52 np0005593232 systemd[1]: libpod-a34510f1ab4704daaad0761952249349b32411a57c72785d5ce0649228fac92b.scope: Deactivated successfully.
Jan 23 06:10:52 np0005593232 conmon[423107]: conmon a34510f1ab4704daaad0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a34510f1ab4704daaad0761952249349b32411a57c72785d5ce0649228fac92b.scope/container/memory.events
Jan 23 06:10:52 np0005593232 podman[423088]: 2026-01-23 11:10:52.665762731 +0000 UTC m=+0.634260797 container died a34510f1ab4704daaad0761952249349b32411a57c72785d5ce0649228fac92b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:10:52 np0005593232 systemd[1]: var-lib-containers-storage-overlay-8c6f8b7e1f0d117ef07118a716eb4604a3ebb7c788429e2e0e68a44e490b2c0b-merged.mount: Deactivated successfully.
Jan 23 06:10:52 np0005593232 podman[423088]: 2026-01-23 11:10:52.729960043 +0000 UTC m=+0.698458069 container remove a34510f1ab4704daaad0761952249349b32411a57c72785d5ce0649228fac92b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:10:52 np0005593232 systemd[1]: libpod-conmon-a34510f1ab4704daaad0761952249349b32411a57c72785d5ce0649228fac92b.scope: Deactivated successfully.
Jan 23 06:10:52 np0005593232 podman[423131]: 2026-01-23 11:10:52.930233055 +0000 UTC m=+0.057124652 container create ddd21ff0c413a67af9cfb9f278fed92d0bfe5ce80ad37be69c7d87c6e7a4c4c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kare, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:10:52 np0005593232 systemd[1]: Started libpod-conmon-ddd21ff0c413a67af9cfb9f278fed92d0bfe5ce80ad37be69c7d87c6e7a4c4c0.scope.
Jan 23 06:10:53 np0005593232 podman[423131]: 2026-01-23 11:10:52.907587463 +0000 UTC m=+0.034479070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:10:53 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:10:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7bcc586c3e2ac5711c5cfbfdfb3ec0ec7cde83ca825c430b66306913a8e48ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:10:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7bcc586c3e2ac5711c5cfbfdfb3ec0ec7cde83ca825c430b66306913a8e48ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:10:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7bcc586c3e2ac5711c5cfbfdfb3ec0ec7cde83ca825c430b66306913a8e48ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:10:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7bcc586c3e2ac5711c5cfbfdfb3ec0ec7cde83ca825c430b66306913a8e48ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:10:53 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7bcc586c3e2ac5711c5cfbfdfb3ec0ec7cde83ca825c430b66306913a8e48ec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 06:10:53 np0005593232 podman[423131]: 2026-01-23 11:10:53.050737495 +0000 UTC m=+0.177629092 container init ddd21ff0c413a67af9cfb9f278fed92d0bfe5ce80ad37be69c7d87c6e7a4c4c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kare, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 06:10:53 np0005593232 podman[423131]: 2026-01-23 11:10:53.064142165 +0000 UTC m=+0.191033792 container start ddd21ff0c413a67af9cfb9f278fed92d0bfe5ce80ad37be69c7d87c6e7a4c4c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 06:10:53 np0005593232 podman[423131]: 2026-01-23 11:10:53.068816118 +0000 UTC m=+0.195707715 container attach ddd21ff0c413a67af9cfb9f278fed92d0bfe5ce80ad37be69c7d87c6e7a4c4c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kare, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:10:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:53.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4355: 321 pgs: 321 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 130 op/s
Jan 23 06:10:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:53.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:53 np0005593232 elegant_kare[423148]: --> passed data devices: 0 physical, 1 LVM
Jan 23 06:10:53 np0005593232 elegant_kare[423148]: --> relative data size: 1.0
Jan 23 06:10:53 np0005593232 elegant_kare[423148]: --> All data devices are unavailable
Jan 23 06:10:53 np0005593232 systemd[1]: libpod-ddd21ff0c413a67af9cfb9f278fed92d0bfe5ce80ad37be69c7d87c6e7a4c4c0.scope: Deactivated successfully.
Jan 23 06:10:53 np0005593232 podman[423131]: 2026-01-23 11:10:53.965958044 +0000 UTC m=+1.092849631 container died ddd21ff0c413a67af9cfb9f278fed92d0bfe5ce80ad37be69c7d87c6e7a4c4c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kare, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:10:53 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e7bcc586c3e2ac5711c5cfbfdfb3ec0ec7cde83ca825c430b66306913a8e48ec-merged.mount: Deactivated successfully.
Jan 23 06:10:54 np0005593232 podman[423131]: 2026-01-23 11:10:54.022978372 +0000 UTC m=+1.149869959 container remove ddd21ff0c413a67af9cfb9f278fed92d0bfe5ce80ad37be69c7d87c6e7a4c4c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:10:54 np0005593232 systemd[1]: libpod-conmon-ddd21ff0c413a67af9cfb9f278fed92d0bfe5ce80ad37be69c7d87c6e7a4c4c0.scope: Deactivated successfully.
Jan 23 06:10:54 np0005593232 nova_compute[250269]: 2026-01-23 11:10:54.159 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000dc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 06:10:54 np0005593232 nova_compute[250269]: 2026-01-23 11:10:54.161 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000dc as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 06:10:54 np0005593232 nova_compute[250269]: 2026-01-23 11:10:54.358 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 06:10:54 np0005593232 nova_compute[250269]: 2026-01-23 11:10:54.359 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3852MB free_disk=20.92587661743164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 06:10:54 np0005593232 nova_compute[250269]: 2026-01-23 11:10:54.360 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:10:54 np0005593232 nova_compute[250269]: 2026-01-23 11:10:54.360 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:10:54 np0005593232 podman[423314]: 2026-01-23 11:10:54.604701249 +0000 UTC m=+0.038731880 container create 0680191627bf61085b44c770d11ed2949f1cd4dd3eb053762aeadfddbdc02a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 06:10:54 np0005593232 systemd[1]: Started libpod-conmon-0680191627bf61085b44c770d11ed2949f1cd4dd3eb053762aeadfddbdc02a0a.scope.
Jan 23 06:10:54 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:10:54 np0005593232 podman[423314]: 2026-01-23 11:10:54.682731253 +0000 UTC m=+0.116761904 container init 0680191627bf61085b44c770d11ed2949f1cd4dd3eb053762aeadfddbdc02a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_galois, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:10:54 np0005593232 podman[423314]: 2026-01-23 11:10:54.586842042 +0000 UTC m=+0.020872693 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:10:54 np0005593232 podman[423314]: 2026-01-23 11:10:54.688821706 +0000 UTC m=+0.122852337 container start 0680191627bf61085b44c770d11ed2949f1cd4dd3eb053762aeadfddbdc02a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_galois, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:10:54 np0005593232 podman[423314]: 2026-01-23 11:10:54.691942764 +0000 UTC m=+0.125973425 container attach 0680191627bf61085b44c770d11ed2949f1cd4dd3eb053762aeadfddbdc02a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_galois, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:10:54 np0005593232 nice_galois[423330]: 167 167
Jan 23 06:10:54 np0005593232 systemd[1]: libpod-0680191627bf61085b44c770d11ed2949f1cd4dd3eb053762aeadfddbdc02a0a.scope: Deactivated successfully.
Jan 23 06:10:54 np0005593232 conmon[423330]: conmon 0680191627bf61085b44 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0680191627bf61085b44c770d11ed2949f1cd4dd3eb053762aeadfddbdc02a0a.scope/container/memory.events
Jan 23 06:10:54 np0005593232 podman[423314]: 2026-01-23 11:10:54.69461226 +0000 UTC m=+0.128642921 container died 0680191627bf61085b44c770d11ed2949f1cd4dd3eb053762aeadfddbdc02a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Jan 23 06:10:54 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7bb7dd86df8449163bfcdcf031b0bb2d625bc789b61a690ad74b5d623982aa92-merged.mount: Deactivated successfully.
Jan 23 06:10:54 np0005593232 podman[423314]: 2026-01-23 11:10:54.740529653 +0000 UTC m=+0.174560274 container remove 0680191627bf61085b44c770d11ed2949f1cd4dd3eb053762aeadfddbdc02a0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_galois, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 06:10:54 np0005593232 nova_compute[250269]: 2026-01-23 11:10:54.742 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:54 np0005593232 nova_compute[250269]: 2026-01-23 11:10:54.752 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance 52a35f72-182e-43fe-882b-db754817996b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 06:10:54 np0005593232 nova_compute[250269]: 2026-01-23 11:10:54.752 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 06:10:54 np0005593232 nova_compute[250269]: 2026-01-23 11:10:54.753 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 06:10:54 np0005593232 systemd[1]: libpod-conmon-0680191627bf61085b44c770d11ed2949f1cd4dd3eb053762aeadfddbdc02a0a.scope: Deactivated successfully.
Jan 23 06:10:54 np0005593232 nova_compute[250269]: 2026-01-23 11:10:54.792 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:10:54 np0005593232 podman[423353]: 2026-01-23 11:10:54.926269403 +0000 UTC m=+0.041651013 container create fad6bb1cf44a95e2a20ead5846a3b73656b42c8eab2df68b606b709dc055fe9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 23 06:10:54 np0005593232 systemd[1]: Started libpod-conmon-fad6bb1cf44a95e2a20ead5846a3b73656b42c8eab2df68b606b709dc055fe9d.scope.
Jan 23 06:10:54 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:10:54 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646c01354be2d0ae2a72bf5fe60adb2500929656e14ee4e5ad7d12979385cec4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:10:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646c01354be2d0ae2a72bf5fe60adb2500929656e14ee4e5ad7d12979385cec4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:10:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646c01354be2d0ae2a72bf5fe60adb2500929656e14ee4e5ad7d12979385cec4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:10:55 np0005593232 podman[423353]: 2026-01-23 11:10:54.908321564 +0000 UTC m=+0.023703194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:10:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/646c01354be2d0ae2a72bf5fe60adb2500929656e14ee4e5ad7d12979385cec4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:10:55 np0005593232 podman[423353]: 2026-01-23 11:10:55.02342298 +0000 UTC m=+0.138804620 container init fad6bb1cf44a95e2a20ead5846a3b73656b42c8eab2df68b606b709dc055fe9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_feistel, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Jan 23 06:10:55 np0005593232 podman[423353]: 2026-01-23 11:10:55.032745934 +0000 UTC m=+0.148127554 container start fad6bb1cf44a95e2a20ead5846a3b73656b42c8eab2df68b606b709dc055fe9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_feistel, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:10:55 np0005593232 podman[423353]: 2026-01-23 11:10:55.036663056 +0000 UTC m=+0.152044696 container attach fad6bb1cf44a95e2a20ead5846a3b73656b42c8eab2df68b606b709dc055fe9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_feistel, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:10:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:10:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4276180198' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:10:55 np0005593232 nova_compute[250269]: 2026-01-23 11:10:55.257 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:10:55 np0005593232 nova_compute[250269]: 2026-01-23 11:10:55.265 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:10:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:55.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:55 np0005593232 nova_compute[250269]: 2026-01-23 11:10:55.431 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:10:55 np0005593232 nova_compute[250269]: 2026-01-23 11:10:55.436 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4356: 321 pgs: 321 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 492 KiB/s wr, 105 op/s
Jan 23 06:10:55 np0005593232 nova_compute[250269]: 2026-01-23 11:10:55.607 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 06:10:55 np0005593232 nova_compute[250269]: 2026-01-23 11:10:55.608 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.248s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:10:55 np0005593232 epic_feistel[423388]: {
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:    "0": [
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:        {
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:            "devices": [
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:                "/dev/loop3"
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:            ],
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:            "lv_name": "ceph_lv0",
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:            "lv_size": "7511998464",
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:            "name": "ceph_lv0",
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:            "tags": {
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:                "ceph.cephx_lockbox_secret": "",
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:                "ceph.cluster_name": "ceph",
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:                "ceph.crush_device_class": "",
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:                "ceph.encrypted": "0",
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:                "ceph.osd_id": "0",
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:                "ceph.type": "block",
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:                "ceph.vdo": "0"
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:            },
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:            "type": "block",
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:            "vg_name": "ceph_vg0"
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:        }
Jan 23 06:10:55 np0005593232 epic_feistel[423388]:    ]
Jan 23 06:10:55 np0005593232 epic_feistel[423388]: }
Jan 23 06:10:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:55 np0005593232 podman[423353]: 2026-01-23 11:10:55.888994341 +0000 UTC m=+1.004375961 container died fad6bb1cf44a95e2a20ead5846a3b73656b42c8eab2df68b606b709dc055fe9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:10:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:10:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:55.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:10:55 np0005593232 systemd[1]: libpod-fad6bb1cf44a95e2a20ead5846a3b73656b42c8eab2df68b606b709dc055fe9d.scope: Deactivated successfully.
Jan 23 06:10:55 np0005593232 systemd[1]: var-lib-containers-storage-overlay-646c01354be2d0ae2a72bf5fe60adb2500929656e14ee4e5ad7d12979385cec4-merged.mount: Deactivated successfully.
Jan 23 06:10:55 np0005593232 podman[423353]: 2026-01-23 11:10:55.941806199 +0000 UTC m=+1.057187809 container remove fad6bb1cf44a95e2a20ead5846a3b73656b42c8eab2df68b606b709dc055fe9d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Jan 23 06:10:55 np0005593232 systemd[1]: libpod-conmon-fad6bb1cf44a95e2a20ead5846a3b73656b42c8eab2df68b606b709dc055fe9d.scope: Deactivated successfully.
Jan 23 06:10:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:10:56 np0005593232 podman[423555]: 2026-01-23 11:10:56.465775316 +0000 UTC m=+0.024614860 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:10:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:56.969 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=103, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=102) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 06:10:56 np0005593232 nova_compute[250269]: 2026-01-23 11:10:56.968 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:56 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:56.971 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 06:10:57 np0005593232 podman[423555]: 2026-01-23 11:10:57.075746124 +0000 UTC m=+0.634585648 container create 6fa5cfdbe00d70d0c2ae2fb1bb8895932f670d6265fb7c0638405f8cfdb59daa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_moore, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 06:10:57 np0005593232 systemd[1]: Started libpod-conmon-6fa5cfdbe00d70d0c2ae2fb1bb8895932f670d6265fb7c0638405f8cfdb59daa.scope.
Jan 23 06:10:57 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:10:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:57.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4357: 321 pgs: 321 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 MiB/s rd, 492 KiB/s wr, 148 op/s
Jan 23 06:10:57 np0005593232 podman[423555]: 2026-01-23 11:10:57.61940157 +0000 UTC m=+1.178241104 container init 6fa5cfdbe00d70d0c2ae2fb1bb8895932f670d6265fb7c0638405f8cfdb59daa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_moore, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 23 06:10:57 np0005593232 podman[423555]: 2026-01-23 11:10:57.62643546 +0000 UTC m=+1.185274974 container start 6fa5cfdbe00d70d0c2ae2fb1bb8895932f670d6265fb7c0638405f8cfdb59daa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:10:57 np0005593232 crazy_moore[423573]: 167 167
Jan 23 06:10:57 np0005593232 systemd[1]: libpod-6fa5cfdbe00d70d0c2ae2fb1bb8895932f670d6265fb7c0638405f8cfdb59daa.scope: Deactivated successfully.
Jan 23 06:10:57 np0005593232 podman[423555]: 2026-01-23 11:10:57.81181359 +0000 UTC m=+1.370653104 container attach 6fa5cfdbe00d70d0c2ae2fb1bb8895932f670d6265fb7c0638405f8cfdb59daa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_moore, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 06:10:57 np0005593232 podman[423555]: 2026-01-23 11:10:57.812666384 +0000 UTC m=+1.371505898 container died 6fa5cfdbe00d70d0c2ae2fb1bb8895932f670d6265fb7c0638405f8cfdb59daa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_moore, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:10:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:57.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:58 np0005593232 systemd[1]: var-lib-containers-storage-overlay-28e0efda3ca258d6d2bce84e2c73c3d9986fc9432811b9716a36d5ad2cc1312f-merged.mount: Deactivated successfully.
Jan 23 06:10:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:10:59.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4358: 321 pgs: 321 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 159 op/s
Jan 23 06:10:59 np0005593232 podman[423555]: 2026-01-23 11:10:59.634681554 +0000 UTC m=+3.193521068 container remove 6fa5cfdbe00d70d0c2ae2fb1bb8895932f670d6265fb7c0638405f8cfdb59daa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_moore, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:10:59 np0005593232 systemd[1]: libpod-conmon-6fa5cfdbe00d70d0c2ae2fb1bb8895932f670d6265fb7c0638405f8cfdb59daa.scope: Deactivated successfully.
Jan 23 06:10:59 np0005593232 nova_compute[250269]: 2026-01-23 11:10:59.746 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:10:59 np0005593232 podman[423653]: 2026-01-23 11:10:59.787683325 +0000 UTC m=+0.049981569 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 23 06:10:59 np0005593232 podman[423590]: 2026-01-23 11:10:59.811989294 +0000 UTC m=+1.255225267 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 23 06:10:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:10:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:10:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:10:59.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:10:59 np0005593232 podman[423682]: 2026-01-23 11:10:59.826749293 +0000 UTC m=+0.031887226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:10:59 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:10:59.972 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '103'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:11:00 np0005593232 podman[423682]: 2026-01-23 11:11:00.035744793 +0000 UTC m=+0.240882706 container create 2abf803118e166396806d6f3b8d17d8dbff9b9d0cdfe8ef2ead8f55ebaa20514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_wiles, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 06:11:00 np0005593232 nova_compute[250269]: 2026-01-23 11:11:00.430 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:00 np0005593232 systemd[1]: Started libpod-conmon-2abf803118e166396806d6f3b8d17d8dbff9b9d0cdfe8ef2ead8f55ebaa20514.scope.
Jan 23 06:11:00 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:11:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e439dd0afddc4c80ae88ab0ded56436e000eaea4e8f47648f0d2f37cf7f52cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:11:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e439dd0afddc4c80ae88ab0ded56436e000eaea4e8f47648f0d2f37cf7f52cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:11:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e439dd0afddc4c80ae88ab0ded56436e000eaea4e8f47648f0d2f37cf7f52cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:11:00 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e439dd0afddc4c80ae88ab0ded56436e000eaea4e8f47648f0d2f37cf7f52cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:11:01 np0005593232 podman[423682]: 2026-01-23 11:11:01.241527388 +0000 UTC m=+1.446665351 container init 2abf803118e166396806d6f3b8d17d8dbff9b9d0cdfe8ef2ead8f55ebaa20514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:11:01 np0005593232 podman[423682]: 2026-01-23 11:11:01.25041416 +0000 UTC m=+1.455552083 container start 2abf803118e166396806d6f3b8d17d8dbff9b9d0cdfe8ef2ead8f55ebaa20514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_wiles, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:11:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:01.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:01 np0005593232 podman[423682]: 2026-01-23 11:11:01.38364015 +0000 UTC m=+1.588778083 container attach 2abf803118e166396806d6f3b8d17d8dbff9b9d0cdfe8ef2ead8f55ebaa20514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_wiles, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:11:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:11:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4359: 321 pgs: 321 active+clean; 260 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Jan 23 06:11:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:11:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:01.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:11:02 np0005593232 awesome_wiles[423704]: {
Jan 23 06:11:02 np0005593232 awesome_wiles[423704]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 06:11:02 np0005593232 awesome_wiles[423704]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:11:02 np0005593232 awesome_wiles[423704]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 06:11:02 np0005593232 awesome_wiles[423704]:        "osd_id": 0,
Jan 23 06:11:02 np0005593232 awesome_wiles[423704]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:11:02 np0005593232 awesome_wiles[423704]:        "type": "bluestore"
Jan 23 06:11:02 np0005593232 awesome_wiles[423704]:    }
Jan 23 06:11:02 np0005593232 awesome_wiles[423704]: }
Jan 23 06:11:02 np0005593232 systemd[1]: libpod-2abf803118e166396806d6f3b8d17d8dbff9b9d0cdfe8ef2ead8f55ebaa20514.scope: Deactivated successfully.
Jan 23 06:11:02 np0005593232 podman[423728]: 2026-01-23 11:11:02.249758877 +0000 UTC m=+0.031041412 container died 2abf803118e166396806d6f3b8d17d8dbff9b9d0cdfe8ef2ead8f55ebaa20514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 06:11:02 np0005593232 systemd[1]: var-lib-containers-storage-overlay-3e439dd0afddc4c80ae88ab0ded56436e000eaea4e8f47648f0d2f37cf7f52cc-merged.mount: Deactivated successfully.
Jan 23 06:11:02 np0005593232 podman[423728]: 2026-01-23 11:11:02.310106559 +0000 UTC m=+0.091389064 container remove 2abf803118e166396806d6f3b8d17d8dbff9b9d0cdfe8ef2ead8f55ebaa20514 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 06:11:02 np0005593232 systemd[1]: libpod-conmon-2abf803118e166396806d6f3b8d17d8dbff9b9d0cdfe8ef2ead8f55ebaa20514.scope: Deactivated successfully.
Jan 23 06:11:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 06:11:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:11:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 06:11:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:11:02 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev bf6adbfe-c3a0-4a39-8ffc-196c5ac55a65 does not exist
Jan 23 06:11:02 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 20b63441-c92d-4dd9-9e87-bf4f4a437b9d does not exist
Jan 23 06:11:02 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a23f489a-a019-442f-a9d6-d22bc7f07fee does not exist
Jan 23 06:11:02 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:11:02 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:11:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:11:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:03.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:11:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4360: 321 pgs: 321 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 2.0 MiB/s wr, 180 op/s
Jan 23 06:11:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:11:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:03.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:11:03 np0005593232 ovn_controller[151001]: 2026-01-23T11:11:03Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:89:d3:05 10.100.0.5
Jan 23 06:11:03 np0005593232 ovn_controller[151001]: 2026-01-23T11:11:03Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:89:d3:05 10.100.0.5
Jan 23 06:11:04 np0005593232 nova_compute[250269]: 2026-01-23 11:11:04.797 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:05.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:05 np0005593232 nova_compute[250269]: 2026-01-23 11:11:05.432 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4361: 321 pgs: 321 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 107 op/s
Jan 23 06:11:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:05.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:11:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:07.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4362: 321 pgs: 321 active+clean; 281 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.4 MiB/s rd, 2.1 MiB/s wr, 165 op/s
Jan 23 06:11:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:11:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:11:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:11:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:11:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:11:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:11:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:07.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:08 np0005593232 nova_compute[250269]: 2026-01-23 11:11:08.603 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:11:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:09.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4363: 321 pgs: 321 active+clean; 285 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.1 MiB/s rd, 2.1 MiB/s wr, 155 op/s
Jan 23 06:11:09 np0005593232 nova_compute[250269]: 2026-01-23 11:11:09.798 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:09.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:10 np0005593232 nova_compute[250269]: 2026-01-23 11:11:10.435 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:10 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Jan 23 06:11:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:11.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:11:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4364: 321 pgs: 321 active+clean; 285 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Jan 23 06:11:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:11.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:13.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4365: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 MiB/s rd, 4.1 MiB/s wr, 181 op/s
Jan 23 06:11:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:11:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:13.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:11:14 np0005593232 nova_compute[250269]: 2026-01-23 11:11:14.800 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:15.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4366: 321 pgs: 321 active+clean; 246 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 147 op/s
Jan 23 06:11:15 np0005593232 nova_compute[250269]: 2026-01-23 11:11:15.475 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:15.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:11:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 06:11:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 63K writes, 233K keys, 63K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s#012Cumulative WAL: 63K writes, 24K syncs, 2.62 writes per sync, written: 0.22 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2022 writes, 7204 keys, 2022 commit groups, 1.0 writes per commit group, ingest: 7.29 MB, 0.01 MB/s#012Interval WAL: 2022 writes, 809 syncs, 2.50 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 06:11:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:17.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4367: 321 pgs: 321 active+clean; 239 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 172 op/s
Jan 23 06:11:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:11:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:17.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:11:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:19.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4368: 321 pgs: 321 active+clean; 216 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.2 MiB/s rd, 4.0 MiB/s wr, 158 op/s
Jan 23 06:11:19 np0005593232 nova_compute[250269]: 2026-01-23 11:11:19.802 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:19.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:20 np0005593232 nova_compute[250269]: 2026-01-23 11:11:20.477 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:11:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:21.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:11:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:11:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4369: 321 pgs: 321 active+clean; 216 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 394 KiB/s rd, 4.0 MiB/s wr, 126 op/s
Jan 23 06:11:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:11:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:21.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:11:22 np0005593232 nova_compute[250269]: 2026-01-23 11:11:22.103 250273 DEBUG oslo_concurrency.lockutils [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Acquiring lock "52a35f72-182e-43fe-882b-db754817996b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:11:22 np0005593232 nova_compute[250269]: 2026-01-23 11:11:22.104 250273 DEBUG oslo_concurrency.lockutils [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Lock "52a35f72-182e-43fe-882b-db754817996b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:11:22 np0005593232 nova_compute[250269]: 2026-01-23 11:11:22.104 250273 DEBUG oslo_concurrency.lockutils [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Acquiring lock "52a35f72-182e-43fe-882b-db754817996b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:11:22 np0005593232 nova_compute[250269]: 2026-01-23 11:11:22.104 250273 DEBUG oslo_concurrency.lockutils [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Lock "52a35f72-182e-43fe-882b-db754817996b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:11:22 np0005593232 nova_compute[250269]: 2026-01-23 11:11:22.105 250273 DEBUG oslo_concurrency.lockutils [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Lock "52a35f72-182e-43fe-882b-db754817996b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:11:22 np0005593232 nova_compute[250269]: 2026-01-23 11:11:22.106 250273 INFO nova.compute.manager [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Terminating instance#033[00m
Jan 23 06:11:22 np0005593232 nova_compute[250269]: 2026-01-23 11:11:22.107 250273 DEBUG nova.compute.manager [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 23 06:11:23 np0005593232 kernel: tap9c953981-c8 (unregistering): left promiscuous mode
Jan 23 06:11:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:23.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:23 np0005593232 NetworkManager[49057]: <info>  [1769166683.3707] device (tap9c953981-c8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 06:11:23 np0005593232 ovn_controller[151001]: 2026-01-23T11:11:23Z|00860|binding|INFO|Releasing lport 9c953981-c820-4533-8bf7-e5bcac25fc22 from this chassis (sb_readonly=0)
Jan 23 06:11:23 np0005593232 ovn_controller[151001]: 2026-01-23T11:11:23Z|00861|binding|INFO|Setting lport 9c953981-c820-4533-8bf7-e5bcac25fc22 down in Southbound
Jan 23 06:11:23 np0005593232 ovn_controller[151001]: 2026-01-23T11:11:23Z|00862|binding|INFO|Removing iface tap9c953981-c8 ovn-installed in OVS
Jan 23 06:11:23 np0005593232 nova_compute[250269]: 2026-01-23 11:11:23.382 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:23 np0005593232 nova_compute[250269]: 2026-01-23 11:11:23.384 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:11:23.391 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:d3:05 10.100.0.5'], port_security=['fa:16:3e:89:d3:05 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '52a35f72-182e-43fe-882b-db754817996b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4706ca2-15b6-4141-8d7b-8d4cab159f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd4691e06029a4b11bbda2856a451bd88', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0ca02cfe-9b98-40f4-8c92-4cc40f5f9499', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9205c727-159c-48df-8bc6-3771f4de4cfc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=9c953981-c820-4533-8bf7-e5bcac25fc22) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 06:11:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:11:23.393 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 9c953981-c820-4533-8bf7-e5bcac25fc22 in datapath f4706ca2-15b6-4141-8d7b-8d4cab159f24 unbound from our chassis#033[00m
Jan 23 06:11:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:11:23.395 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f4706ca2-15b6-4141-8d7b-8d4cab159f24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 06:11:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:11:23.397 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a92040cf-d2d0-42ec-a61b-1c3438d95968]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:11:23 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:11:23.398 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f4706ca2-15b6-4141-8d7b-8d4cab159f24 namespace which is not needed anymore#033[00m
Jan 23 06:11:23 np0005593232 nova_compute[250269]: 2026-01-23 11:11:23.407 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:23 np0005593232 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000dc.scope: Deactivated successfully.
Jan 23 06:11:23 np0005593232 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d000000dc.scope: Consumed 14.964s CPU time.
Jan 23 06:11:23 np0005593232 systemd-machined[215836]: Machine qemu-97-instance-000000dc terminated.
Jan 23 06:11:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4370: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 399 KiB/s rd, 4.0 MiB/s wr, 133 op/s
Jan 23 06:11:23 np0005593232 nova_compute[250269]: 2026-01-23 11:11:23.548 250273 INFO nova.virt.libvirt.driver [-] [instance: 52a35f72-182e-43fe-882b-db754817996b] Instance destroyed successfully.#033[00m
Jan 23 06:11:23 np0005593232 nova_compute[250269]: 2026-01-23 11:11:23.550 250273 DEBUG nova.objects.instance [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Lazy-loading 'resources' on Instance uuid 52a35f72-182e-43fe-882b-db754817996b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 06:11:23 np0005593232 nova_compute[250269]: 2026-01-23 11:11:23.572 250273 DEBUG nova.virt.libvirt.vif [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T11:10:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerMultinode-server-763432880',display_name='tempest-TestServerMultinode-server-763432880',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testservermultinode-server-763432880',id=220,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-23T11:10:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d4691e06029a4b11bbda2856a451bd88',ramdisk_id='',reservation_id='r-fi8mfmvf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerMultinode-1152571872',owner_user_name='tempest-TestServerMultinode-1152571872-project-admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T11:10:49Z,user_data=None,user_id='ac51edf400184ec0b11ee5acc335ff21',uuid=52a35f72-182e-43fe-882b-db754817996b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9c953981-c820-4533-8bf7-e5bcac25fc22", "address": "fa:16:3e:89:d3:05", "network": {"id": "f4706ca2-15b6-4141-8d7b-8d4cab159f24", "bridge": "br-int", "label": "tempest-TestServerMultinode-1792921973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76608d1b79f84e2385a2dcadacaea9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c953981-c8", "ovs_interfaceid": "9c953981-c820-4533-8bf7-e5bcac25fc22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 06:11:23 np0005593232 nova_compute[250269]: 2026-01-23 11:11:23.572 250273 DEBUG nova.network.os_vif_util [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Converting VIF {"id": "9c953981-c820-4533-8bf7-e5bcac25fc22", "address": "fa:16:3e:89:d3:05", "network": {"id": "f4706ca2-15b6-4141-8d7b-8d4cab159f24", "bridge": "br-int", "label": "tempest-TestServerMultinode-1792921973-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "76608d1b79f84e2385a2dcadacaea9f3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c953981-c8", "ovs_interfaceid": "9c953981-c820-4533-8bf7-e5bcac25fc22", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 06:11:23 np0005593232 nova_compute[250269]: 2026-01-23 11:11:23.573 250273 DEBUG nova.network.os_vif_util [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:d3:05,bridge_name='br-int',has_traffic_filtering=True,id=9c953981-c820-4533-8bf7-e5bcac25fc22,network=Network(f4706ca2-15b6-4141-8d7b-8d4cab159f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c953981-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 06:11:23 np0005593232 nova_compute[250269]: 2026-01-23 11:11:23.574 250273 DEBUG os_vif [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:d3:05,bridge_name='br-int',has_traffic_filtering=True,id=9c953981-c820-4533-8bf7-e5bcac25fc22,network=Network(f4706ca2-15b6-4141-8d7b-8d4cab159f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c953981-c8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 06:11:23 np0005593232 nova_compute[250269]: 2026-01-23 11:11:23.576 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:23 np0005593232 nova_compute[250269]: 2026-01-23 11:11:23.576 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9c953981-c8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:11:23 np0005593232 nova_compute[250269]: 2026-01-23 11:11:23.578 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:23 np0005593232 nova_compute[250269]: 2026-01-23 11:11:23.579 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:23 np0005593232 nova_compute[250269]: 2026-01-23 11:11:23.582 250273 INFO os_vif [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:d3:05,bridge_name='br-int',has_traffic_filtering=True,id=9c953981-c820-4533-8bf7-e5bcac25fc22,network=Network(f4706ca2-15b6-4141-8d7b-8d4cab159f24),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c953981-c8')#033[00m
Jan 23 06:11:23 np0005593232 neutron-haproxy-ovnmeta-f4706ca2-15b6-4141-8d7b-8d4cab159f24[422779]: [NOTICE]   (422783) : haproxy version is 2.8.14-c23fe91
Jan 23 06:11:23 np0005593232 neutron-haproxy-ovnmeta-f4706ca2-15b6-4141-8d7b-8d4cab159f24[422779]: [NOTICE]   (422783) : path to executable is /usr/sbin/haproxy
Jan 23 06:11:23 np0005593232 neutron-haproxy-ovnmeta-f4706ca2-15b6-4141-8d7b-8d4cab159f24[422779]: [WARNING]  (422783) : Exiting Master process...
Jan 23 06:11:23 np0005593232 neutron-haproxy-ovnmeta-f4706ca2-15b6-4141-8d7b-8d4cab159f24[422779]: [WARNING]  (422783) : Exiting Master process...
Jan 23 06:11:23 np0005593232 neutron-haproxy-ovnmeta-f4706ca2-15b6-4141-8d7b-8d4cab159f24[422779]: [ALERT]    (422783) : Current worker (422785) exited with code 143 (Terminated)
Jan 23 06:11:23 np0005593232 neutron-haproxy-ovnmeta-f4706ca2-15b6-4141-8d7b-8d4cab159f24[422779]: [WARNING]  (422783) : All workers exited. Exiting... (0)
Jan 23 06:11:23 np0005593232 systemd[1]: libpod-2cec3f28ae95fc80d28a097c7a579b84a8a5b5e8e9342c81f6e8f2fcf92bf518.scope: Deactivated successfully.
Jan 23 06:11:23 np0005593232 podman[423879]: 2026-01-23 11:11:23.861414163 +0000 UTC m=+0.326300630 container died 2cec3f28ae95fc80d28a097c7a579b84a8a5b5e8e9342c81f6e8f2fcf92bf518 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4706ca2-15b6-4141-8d7b-8d4cab159f24, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 23 06:11:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:23 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 06:11:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:23.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:23 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 06:11:23 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2cec3f28ae95fc80d28a097c7a579b84a8a5b5e8e9342c81f6e8f2fcf92bf518-userdata-shm.mount: Deactivated successfully.
Jan 23 06:11:23 np0005593232 systemd[1]: var-lib-containers-storage-overlay-87ee9806b58fdb12a39c05cda05c7dbc7a1cb5143294b26c091441ac70ec2513-merged.mount: Deactivated successfully.
Jan 23 06:11:23 np0005593232 podman[423879]: 2026-01-23 11:11:23.96951208 +0000 UTC m=+0.434398537 container cleanup 2cec3f28ae95fc80d28a097c7a579b84a8a5b5e8e9342c81f6e8f2fcf92bf518 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4706ca2-15b6-4141-8d7b-8d4cab159f24, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 06:11:23 np0005593232 systemd[1]: libpod-conmon-2cec3f28ae95fc80d28a097c7a579b84a8a5b5e8e9342c81f6e8f2fcf92bf518.scope: Deactivated successfully.
Jan 23 06:11:24 np0005593232 nova_compute[250269]: 2026-01-23 11:11:24.097 250273 DEBUG nova.compute.manager [req-4b924df1-17d3-4fbd-be3c-5065933a5b1b req-a01f1c7a-9298-40ae-8a91-f6b56d180758 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Received event network-vif-unplugged-9c953981-c820-4533-8bf7-e5bcac25fc22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:11:24 np0005593232 nova_compute[250269]: 2026-01-23 11:11:24.098 250273 DEBUG oslo_concurrency.lockutils [req-4b924df1-17d3-4fbd-be3c-5065933a5b1b req-a01f1c7a-9298-40ae-8a91-f6b56d180758 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "52a35f72-182e-43fe-882b-db754817996b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:11:24 np0005593232 nova_compute[250269]: 2026-01-23 11:11:24.098 250273 DEBUG oslo_concurrency.lockutils [req-4b924df1-17d3-4fbd-be3c-5065933a5b1b req-a01f1c7a-9298-40ae-8a91-f6b56d180758 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "52a35f72-182e-43fe-882b-db754817996b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:11:24 np0005593232 nova_compute[250269]: 2026-01-23 11:11:24.099 250273 DEBUG oslo_concurrency.lockutils [req-4b924df1-17d3-4fbd-be3c-5065933a5b1b req-a01f1c7a-9298-40ae-8a91-f6b56d180758 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "52a35f72-182e-43fe-882b-db754817996b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:11:24 np0005593232 nova_compute[250269]: 2026-01-23 11:11:24.099 250273 DEBUG nova.compute.manager [req-4b924df1-17d3-4fbd-be3c-5065933a5b1b req-a01f1c7a-9298-40ae-8a91-f6b56d180758 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] No waiting events found dispatching network-vif-unplugged-9c953981-c820-4533-8bf7-e5bcac25fc22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 06:11:24 np0005593232 nova_compute[250269]: 2026-01-23 11:11:24.099 250273 DEBUG nova.compute.manager [req-4b924df1-17d3-4fbd-be3c-5065933a5b1b req-a01f1c7a-9298-40ae-8a91-f6b56d180758 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Received event network-vif-unplugged-9c953981-c820-4533-8bf7-e5bcac25fc22 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 23 06:11:24 np0005593232 podman[423938]: 2026-01-23 11:11:24.763239313 +0000 UTC m=+0.659628739 container remove 2cec3f28ae95fc80d28a097c7a579b84a8a5b5e8e9342c81f6e8f2fcf92bf518 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4706ca2-15b6-4141-8d7b-8d4cab159f24, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 06:11:24 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:11:24.772 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[937b0b0c-6a0a-4940-be52-bcbfbf878ee4]: (4, ('Fri Jan 23 11:11:23 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f4706ca2-15b6-4141-8d7b-8d4cab159f24 (2cec3f28ae95fc80d28a097c7a579b84a8a5b5e8e9342c81f6e8f2fcf92bf518)\n2cec3f28ae95fc80d28a097c7a579b84a8a5b5e8e9342c81f6e8f2fcf92bf518\nFri Jan 23 11:11:23 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f4706ca2-15b6-4141-8d7b-8d4cab159f24 (2cec3f28ae95fc80d28a097c7a579b84a8a5b5e8e9342c81f6e8f2fcf92bf518)\n2cec3f28ae95fc80d28a097c7a579b84a8a5b5e8e9342c81f6e8f2fcf92bf518\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:11:24 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:11:24.774 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[8ba45b90-842f-430a-b860-f863b8952f00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:11:24 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:11:24.776 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4706ca2-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:11:24 np0005593232 nova_compute[250269]: 2026-01-23 11:11:24.778 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:24 np0005593232 nova_compute[250269]: 2026-01-23 11:11:24.797 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:24 np0005593232 kernel: tapf4706ca2-10: left promiscuous mode
Jan 23 06:11:24 np0005593232 nova_compute[250269]: 2026-01-23 11:11:24.799 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:24 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:11:24.802 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3214d497-e390-4ddb-a001-afd400f1a49d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:11:24 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:11:24.827 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[a6cce735-4015-4abc-9b03-fb04f0f1ab0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:11:24 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:11:24.828 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ea725b90-cdb2-45b5-84e3-509e0bdfa82b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:11:24 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:11:24.843 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[3fc08f8a-2bc9-4e6b-8b77-70c8a3b7f50e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1055992, 'reachable_time': 41271, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 423953, 'error': None, 'target': 'ovnmeta-f4706ca2-15b6-4141-8d7b-8d4cab159f24', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:11:24 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:11:24.847 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f4706ca2-15b6-4141-8d7b-8d4cab159f24 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 06:11:24 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:11:24.847 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[f198e10a-1d62-4873-b004-6f2d7749088f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:11:24 np0005593232 systemd[1]: run-netns-ovnmeta\x2df4706ca2\x2d15b6\x2d4141\x2d8d7b\x2d8d4cab159f24.mount: Deactivated successfully.
Jan 23 06:11:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:25.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4371: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 289 KiB/s rd, 2.0 MiB/s wr, 76 op/s
Jan 23 06:11:25 np0005593232 nova_compute[250269]: 2026-01-23 11:11:25.478 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:25.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:26 np0005593232 nova_compute[250269]: 2026-01-23 11:11:26.158 250273 DEBUG nova.compute.manager [req-31a62269-1484-4df7-ad23-cd606382b894 req-d0511b3f-362b-4afd-8f78-13512ae329bf 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Received event network-vif-plugged-9c953981-c820-4533-8bf7-e5bcac25fc22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:11:26 np0005593232 nova_compute[250269]: 2026-01-23 11:11:26.159 250273 DEBUG oslo_concurrency.lockutils [req-31a62269-1484-4df7-ad23-cd606382b894 req-d0511b3f-362b-4afd-8f78-13512ae329bf 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "52a35f72-182e-43fe-882b-db754817996b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:11:26 np0005593232 nova_compute[250269]: 2026-01-23 11:11:26.159 250273 DEBUG oslo_concurrency.lockutils [req-31a62269-1484-4df7-ad23-cd606382b894 req-d0511b3f-362b-4afd-8f78-13512ae329bf 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "52a35f72-182e-43fe-882b-db754817996b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:11:26 np0005593232 nova_compute[250269]: 2026-01-23 11:11:26.159 250273 DEBUG oslo_concurrency.lockutils [req-31a62269-1484-4df7-ad23-cd606382b894 req-d0511b3f-362b-4afd-8f78-13512ae329bf 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "52a35f72-182e-43fe-882b-db754817996b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:11:26 np0005593232 nova_compute[250269]: 2026-01-23 11:11:26.160 250273 DEBUG nova.compute.manager [req-31a62269-1484-4df7-ad23-cd606382b894 req-d0511b3f-362b-4afd-8f78-13512ae329bf 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] No waiting events found dispatching network-vif-plugged-9c953981-c820-4533-8bf7-e5bcac25fc22 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 06:11:26 np0005593232 nova_compute[250269]: 2026-01-23 11:11:26.160 250273 WARNING nova.compute.manager [req-31a62269-1484-4df7-ad23-cd606382b894 req-d0511b3f-362b-4afd-8f78-13512ae329bf 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Received unexpected event network-vif-plugged-9c953981-c820-4533-8bf7-e5bcac25fc22 for instance with vm_state active and task_state deleting.#033[00m
Jan 23 06:11:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:11:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:27.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4372: 321 pgs: 321 active+clean; 161 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 299 KiB/s rd, 2.0 MiB/s wr, 91 op/s
Jan 23 06:11:27 np0005593232 ceph-mgr[74726]: [devicehealth INFO root] Check health
Jan 23 06:11:27 np0005593232 nova_compute[250269]: 2026-01-23 11:11:27.778 250273 INFO nova.virt.libvirt.driver [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Deleting instance files /var/lib/nova/instances/52a35f72-182e-43fe-882b-db754817996b_del#033[00m
Jan 23 06:11:27 np0005593232 nova_compute[250269]: 2026-01-23 11:11:27.779 250273 INFO nova.virt.libvirt.driver [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Deletion of /var/lib/nova/instances/52a35f72-182e-43fe-882b-db754817996b_del complete#033[00m
Jan 23 06:11:27 np0005593232 nova_compute[250269]: 2026-01-23 11:11:27.859 250273 INFO nova.compute.manager [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Took 5.75 seconds to destroy the instance on the hypervisor.#033[00m
Jan 23 06:11:27 np0005593232 nova_compute[250269]: 2026-01-23 11:11:27.859 250273 DEBUG oslo.service.loopingcall [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 23 06:11:27 np0005593232 nova_compute[250269]: 2026-01-23 11:11:27.860 250273 DEBUG nova.compute.manager [-] [instance: 52a35f72-182e-43fe-882b-db754817996b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 23 06:11:27 np0005593232 nova_compute[250269]: 2026-01-23 11:11:27.860 250273 DEBUG nova.network.neutron [-] [instance: 52a35f72-182e-43fe-882b-db754817996b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 23 06:11:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:27.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:28 np0005593232 nova_compute[250269]: 2026-01-23 11:11:28.579 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:28 np0005593232 nova_compute[250269]: 2026-01-23 11:11:28.787 250273 DEBUG nova.network.neutron [-] [instance: 52a35f72-182e-43fe-882b-db754817996b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 06:11:28 np0005593232 nova_compute[250269]: 2026-01-23 11:11:28.803 250273 INFO nova.compute.manager [-] [instance: 52a35f72-182e-43fe-882b-db754817996b] Took 0.94 seconds to deallocate network for instance.#033[00m
Jan 23 06:11:28 np0005593232 nova_compute[250269]: 2026-01-23 11:11:28.833 250273 DEBUG nova.compute.manager [req-784bd550-2b1e-485b-86d8-7023222c5bcb req-06514b33-60e1-4594-a229-d371e0d46d1a 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: 52a35f72-182e-43fe-882b-db754817996b] Received event network-vif-deleted-9c953981-c820-4533-8bf7-e5bcac25fc22 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:11:28 np0005593232 nova_compute[250269]: 2026-01-23 11:11:28.856 250273 DEBUG oslo_concurrency.lockutils [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:11:28 np0005593232 nova_compute[250269]: 2026-01-23 11:11:28.857 250273 DEBUG oslo_concurrency.lockutils [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:11:28 np0005593232 nova_compute[250269]: 2026-01-23 11:11:28.908 250273 DEBUG oslo_concurrency.processutils [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:11:29 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:11:29 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4025300804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:11:29 np0005593232 nova_compute[250269]: 2026-01-23 11:11:29.367 250273 DEBUG oslo_concurrency.processutils [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:11:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:29.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:29 np0005593232 nova_compute[250269]: 2026-01-23 11:11:29.374 250273 DEBUG nova.compute.provider_tree [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:11:29 np0005593232 nova_compute[250269]: 2026-01-23 11:11:29.414 250273 DEBUG nova.scheduler.client.report [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:11:29 np0005593232 nova_compute[250269]: 2026-01-23 11:11:29.432 250273 DEBUG oslo_concurrency.lockutils [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:11:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4373: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 225 KiB/s rd, 1.2 MiB/s wr, 76 op/s
Jan 23 06:11:29 np0005593232 nova_compute[250269]: 2026-01-23 11:11:29.467 250273 INFO nova.scheduler.client.report [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Deleted allocations for instance 52a35f72-182e-43fe-882b-db754817996b#033[00m
Jan 23 06:11:29 np0005593232 nova_compute[250269]: 2026-01-23 11:11:29.563 250273 DEBUG oslo_concurrency.lockutils [None req-9c596f21-0d65-4537-aed2-671a1169aa96 ac51edf400184ec0b11ee5acc335ff21 d4691e06029a4b11bbda2856a451bd88 - - default default] Lock "52a35f72-182e-43fe-882b-db754817996b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.459s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:11:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:29.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:30 np0005593232 podman[423981]: 2026-01-23 11:11:30.401460736 +0000 UTC m=+0.058559632 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 23 06:11:30 np0005593232 podman[423980]: 2026-01-23 11:11:30.443019065 +0000 UTC m=+0.102350445 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 23 06:11:30 np0005593232 nova_compute[250269]: 2026-01-23 11:11:30.480 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:31.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4374: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Jan 23 06:11:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:11:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.016000453s ======
Jan 23 06:11:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:31.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.016000453s
Jan 23 06:11:33 np0005593232 nova_compute[250269]: 2026-01-23 11:11:33.085 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:33.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4375: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 34 op/s
Jan 23 06:11:33 np0005593232 nova_compute[250269]: 2026-01-23 11:11:33.621 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:11:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:33.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:11:34 np0005593232 nova_compute[250269]: 2026-01-23 11:11:34.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:11:35 np0005593232 nova_compute[250269]: 2026-01-23 11:11:35.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:11:35 np0005593232 nova_compute[250269]: 2026-01-23 11:11:35.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 06:11:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:11:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:35.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:11:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4376: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 06:11:35 np0005593232 nova_compute[250269]: 2026-01-23 11:11:35.482 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:35.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:36 np0005593232 nova_compute[250269]: 2026-01-23 11:11:36.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:11:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:11:37 np0005593232 nova_compute[250269]: 2026-01-23 11:11:37.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:11:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:37.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4377: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 23 06:11:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_11:11:37
Jan 23 06:11:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 06:11:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 06:11:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'images', 'backups', '.rgw.root', 'default.rgw.meta']
Jan 23 06:11:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 06:11:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:11:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:11:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:11:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:11:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:11:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:11:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:11:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:37.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:11:38 np0005593232 nova_compute[250269]: 2026-01-23 11:11:38.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:11:38 np0005593232 nova_compute[250269]: 2026-01-23 11:11:38.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 06:11:38 np0005593232 nova_compute[250269]: 2026-01-23 11:11:38.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 06:11:38 np0005593232 nova_compute[250269]: 2026-01-23 11:11:38.317 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 06:11:38 np0005593232 nova_compute[250269]: 2026-01-23 11:11:38.317 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:11:38 np0005593232 nova_compute[250269]: 2026-01-23 11:11:38.546 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769166683.5455444, 52a35f72-182e-43fe-882b-db754817996b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 06:11:38 np0005593232 nova_compute[250269]: 2026-01-23 11:11:38.547 250273 INFO nova.compute.manager [-] [instance: 52a35f72-182e-43fe-882b-db754817996b] VM Stopped (Lifecycle Event)#033[00m
Jan 23 06:11:38 np0005593232 nova_compute[250269]: 2026-01-23 11:11:38.573 250273 DEBUG nova.compute.manager [None req-51c8bfa6-5a92-4dcb-8dcb-b377cece46bd - - - - - -] [instance: 52a35f72-182e-43fe-882b-db754817996b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:11:38 np0005593232 nova_compute[250269]: 2026-01-23 11:11:38.625 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 06:11:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 06:11:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:11:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:11:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:11:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:11:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:11:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:11:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:11:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:11:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:39.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4378: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 8.7 KiB/s rd, 341 B/s wr, 12 op/s
Jan 23 06:11:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:11:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:39.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:11:40 np0005593232 nova_compute[250269]: 2026-01-23 11:11:40.484 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:11:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:41.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:11:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4379: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 KiB/s rd, 341 B/s wr, 2 op/s
Jan 23 06:11:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:11:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:11:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:41.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:11:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:11:42.695 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:11:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:11:42.696 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:11:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:11:42.696 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:11:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:43.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4380: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 1.4 KiB/s rd, 341 B/s wr, 2 op/s
Jan 23 06:11:43 np0005593232 nova_compute[250269]: 2026-01-23 11:11:43.628 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:43.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:44 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #222. Immutable memtables: 0.
Jan 23 06:11:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:11:44.185091) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 06:11:44 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 139] Flushing memtable with next log file: 222
Jan 23 06:11:44 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166704185168, "job": 139, "event": "flush_started", "num_memtables": 1, "num_entries": 2107, "num_deletes": 251, "total_data_size": 3905695, "memory_usage": 3957536, "flush_reason": "Manual Compaction"}
Jan 23 06:11:44 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 139] Level-0 flush table #223: started
Jan 23 06:11:44 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166704481944, "cf_name": "default", "job": 139, "event": "table_file_creation", "file_number": 223, "file_size": 3844408, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 93907, "largest_seqno": 96013, "table_properties": {"data_size": 3834799, "index_size": 6102, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19802, "raw_average_key_size": 20, "raw_value_size": 3815687, "raw_average_value_size": 3941, "num_data_blocks": 265, "num_entries": 968, "num_filter_entries": 968, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769166469, "oldest_key_time": 1769166469, "file_creation_time": 1769166704, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 223, "seqno_to_time_mapping": "N/A"}}
Jan 23 06:11:44 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 139] Flush lasted 296931 microseconds, and 12396 cpu microseconds.
Jan 23 06:11:44 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 06:11:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:11:44.482018) [db/flush_job.cc:967] [default] [JOB 139] Level-0 flush table #223: 3844408 bytes OK
Jan 23 06:11:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:11:44.482047) [db/memtable_list.cc:519] [default] Level-0 commit table #223 started
Jan 23 06:11:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:11:44.596080) [db/memtable_list.cc:722] [default] Level-0 commit table #223: memtable #1 done
Jan 23 06:11:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:11:44.596114) EVENT_LOG_v1 {"time_micros": 1769166704596106, "job": 139, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 06:11:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:11:44.596138) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 06:11:44 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 139] Try to delete WAL files size 3896995, prev total WAL file size 3912439, number of live WAL files 2.
Jan 23 06:11:44 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000219.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:11:44 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:11:44.597473) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039323837' seq:72057594037927935, type:22 .. '7061786F730039353339' seq:0, type:0; will stop at (end)
Jan 23 06:11:44 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 140] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 06:11:44 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 139 Base level 0, inputs: [223(3754KB)], [221(11MB)]
Jan 23 06:11:44 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166704597564, "job": 140, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [223], "files_L6": [221], "score": -1, "input_data_size": 16035377, "oldest_snapshot_seqno": -1}
Jan 23 06:11:45 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 140] Generated table #224: 11992 keys, 14002831 bytes, temperature: kUnknown
Jan 23 06:11:45 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166705129554, "cf_name": "default", "job": 140, "event": "table_file_creation", "file_number": 224, "file_size": 14002831, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13928063, "index_size": 43679, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30021, "raw_key_size": 317082, "raw_average_key_size": 26, "raw_value_size": 13721433, "raw_average_value_size": 1144, "num_data_blocks": 1651, "num_entries": 11992, "num_filter_entries": 11992, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769166704, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 224, "seqno_to_time_mapping": "N/A"}}
Jan 23 06:11:45 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 06:11:45 np0005593232 nova_compute[250269]: 2026-01-23 11:11:45.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:11:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:45.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4381: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:11:45 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:11:45.129895) [db/compaction/compaction_job.cc:1663] [default] [JOB 140] Compacted 1@0 + 1@6 files to L6 => 14002831 bytes
Jan 23 06:11:45 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:11:45.484079) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 30.1 rd, 26.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 11.6 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 12509, records dropped: 517 output_compression: NoCompression
Jan 23 06:11:45 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:11:45.484138) EVENT_LOG_v1 {"time_micros": 1769166705484115, "job": 140, "event": "compaction_finished", "compaction_time_micros": 532075, "compaction_time_cpu_micros": 60598, "output_level": 6, "num_output_files": 1, "total_output_size": 14002831, "num_input_records": 12509, "num_output_records": 11992, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 06:11:45 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000223.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:11:45 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166705485999, "job": 140, "event": "table_file_deletion", "file_number": 223}
Jan 23 06:11:45 np0005593232 nova_compute[250269]: 2026-01-23 11:11:45.487 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:45 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000221.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:11:45 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166705490993, "job": 140, "event": "table_file_deletion", "file_number": 221}
Jan 23 06:11:45 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:11:44.597376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:11:45 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:11:45.491153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:11:45 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:11:45.491165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:11:45 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:11:45.491170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:11:45 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:11:45.491175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:11:45 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:11:45.491179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:11:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:45.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:11:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:47.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4382: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:11:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:47.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 06:11:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:11:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:11:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:11:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:11:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:11:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 06:11:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:11:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:11:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:11:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:11:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:11:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:11:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:11:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:11:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:11:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:11:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:11:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:11:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:11:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:11:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:11:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:11:48 np0005593232 nova_compute[250269]: 2026-01-23 11:11:48.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:11:48 np0005593232 nova_compute[250269]: 2026-01-23 11:11:48.631 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:49.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4383: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:11:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:11:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:49.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:11:50 np0005593232 nova_compute[250269]: 2026-01-23 11:11:50.488 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:51.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4384: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:11:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:11:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:51.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:53 np0005593232 nova_compute[250269]: 2026-01-23 11:11:53.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:11:53 np0005593232 nova_compute[250269]: 2026-01-23 11:11:53.318 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:11:53 np0005593232 nova_compute[250269]: 2026-01-23 11:11:53.318 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:11:53 np0005593232 nova_compute[250269]: 2026-01-23 11:11:53.319 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:11:53 np0005593232 nova_compute[250269]: 2026-01-23 11:11:53.319 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 06:11:53 np0005593232 nova_compute[250269]: 2026-01-23 11:11:53.319 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:11:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:53.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4385: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:11:53 np0005593232 nova_compute[250269]: 2026-01-23 11:11:53.634 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:11:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/240202074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:11:53 np0005593232 nova_compute[250269]: 2026-01-23 11:11:53.775 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:11:53 np0005593232 nova_compute[250269]: 2026-01-23 11:11:53.935 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 06:11:53 np0005593232 nova_compute[250269]: 2026-01-23 11:11:53.937 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4074MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 06:11:53 np0005593232 nova_compute[250269]: 2026-01-23 11:11:53.937 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:11:53 np0005593232 nova_compute[250269]: 2026-01-23 11:11:53.938 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:11:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:53.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:54 np0005593232 nova_compute[250269]: 2026-01-23 11:11:54.004 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 06:11:54 np0005593232 nova_compute[250269]: 2026-01-23 11:11:54.004 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 06:11:54 np0005593232 nova_compute[250269]: 2026-01-23 11:11:54.020 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:11:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:11:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4258368466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:11:54 np0005593232 nova_compute[250269]: 2026-01-23 11:11:54.454 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:11:54 np0005593232 nova_compute[250269]: 2026-01-23 11:11:54.460 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:11:54 np0005593232 nova_compute[250269]: 2026-01-23 11:11:54.481 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:11:54 np0005593232 nova_compute[250269]: 2026-01-23 11:11:54.513 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 06:11:54 np0005593232 nova_compute[250269]: 2026-01-23 11:11:54.513 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:11:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:11:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:55.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:11:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4386: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:11:55 np0005593232 nova_compute[250269]: 2026-01-23 11:11:55.490 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:55.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:11:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:57.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4387: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:11:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:57.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:58 np0005593232 nova_compute[250269]: 2026-01-23 11:11:58.697 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:11:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:11:59.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:11:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4388: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:11:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:11:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:11:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:11:59.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:00 np0005593232 nova_compute[250269]: 2026-01-23 11:12:00.535 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:01.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:01 np0005593232 podman[424186]: 2026-01-23 11:12:01.418718256 +0000 UTC m=+0.063155003 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 06:12:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4389: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:12:01 np0005593232 podman[424185]: 2026-01-23 11:12:01.470350191 +0000 UTC m=+0.108644423 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 23 06:12:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:12:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:12:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:01.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:12:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:03.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4390: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:12:03 np0005593232 nova_compute[250269]: 2026-01-23 11:12:03.701 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:03.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 06:12:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:12:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 06:12:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:12:04 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:12:04 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:12:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:12:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:12:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 06:12:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:12:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 06:12:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:12:05 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c8b3b9a1-fafd-4824-9e0b-9387c65ac48c does not exist
Jan 23 06:12:05 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 89a63866-5803-454b-93b9-3ae026d094b6 does not exist
Jan 23 06:12:05 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a9b8cc8b-f5df-4415-aee0-c1a3020f2ea8 does not exist
Jan 23 06:12:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 06:12:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 06:12:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 06:12:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:12:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:12:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:12:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:05.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4391: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:12:05 np0005593232 nova_compute[250269]: 2026-01-23 11:12:05.537 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:05 np0005593232 podman[424501]: 2026-01-23 11:12:05.737227724 +0000 UTC m=+0.023113637 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:12:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:05.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:06 np0005593232 podman[424501]: 2026-01-23 11:12:06.022848339 +0000 UTC m=+0.308734232 container create e06d1ad944acf7cd11ea9256ee4c6a431114378c04051d3fc3bdf11af2b2bd93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_curie, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:12:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:12:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:12:06 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:12:06 np0005593232 systemd[1]: Started libpod-conmon-e06d1ad944acf7cd11ea9256ee4c6a431114378c04051d3fc3bdf11af2b2bd93.scope.
Jan 23 06:12:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:12:06 np0005593232 podman[424501]: 2026-01-23 11:12:06.51660415 +0000 UTC m=+0.802490163 container init e06d1ad944acf7cd11ea9256ee4c6a431114378c04051d3fc3bdf11af2b2bd93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_curie, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Jan 23 06:12:06 np0005593232 podman[424501]: 2026-01-23 11:12:06.52646969 +0000 UTC m=+0.812355603 container start e06d1ad944acf7cd11ea9256ee4c6a431114378c04051d3fc3bdf11af2b2bd93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_curie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:12:06 np0005593232 friendly_curie[424517]: 167 167
Jan 23 06:12:06 np0005593232 systemd[1]: libpod-e06d1ad944acf7cd11ea9256ee4c6a431114378c04051d3fc3bdf11af2b2bd93.scope: Deactivated successfully.
Jan 23 06:12:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:12:06 np0005593232 podman[424501]: 2026-01-23 11:12:06.829705793 +0000 UTC m=+1.115591706 container attach e06d1ad944acf7cd11ea9256ee4c6a431114378c04051d3fc3bdf11af2b2bd93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_curie, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:12:06 np0005593232 podman[424501]: 2026-01-23 11:12:06.83134697 +0000 UTC m=+1.117232933 container died e06d1ad944acf7cd11ea9256ee4c6a431114378c04051d3fc3bdf11af2b2bd93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:12:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:07.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:07 np0005593232 systemd[1]: var-lib-containers-storage-overlay-52d90a0c9da9934626fe2b288ad604fc47f27bca493a686e6129f208aac5eaa4-merged.mount: Deactivated successfully.
Jan 23 06:12:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4392: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:12:07 np0005593232 ovn_controller[151001]: 2026-01-23T11:12:07Z|00863|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Jan 23 06:12:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:12:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:12:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:12:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:12:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:12:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:12:07 np0005593232 podman[424501]: 2026-01-23 11:12:07.722469806 +0000 UTC m=+2.008355699 container remove e06d1ad944acf7cd11ea9256ee4c6a431114378c04051d3fc3bdf11af2b2bd93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:12:07 np0005593232 systemd[1]: libpod-conmon-e06d1ad944acf7cd11ea9256ee4c6a431114378c04051d3fc3bdf11af2b2bd93.scope: Deactivated successfully.
Jan 23 06:12:07 np0005593232 podman[424541]: 2026-01-23 11:12:07.906906359 +0000 UTC m=+0.068219517 container create c6fe8bd8c850e477596861f39cd65cbda2476091a765fa42bc7639f24e6ee0de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 06:12:07 np0005593232 systemd[1]: Started libpod-conmon-c6fe8bd8c850e477596861f39cd65cbda2476091a765fa42bc7639f24e6ee0de.scope.
Jan 23 06:12:07 np0005593232 podman[424541]: 2026-01-23 11:12:07.862597981 +0000 UTC m=+0.023911169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:12:07 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:12:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d894aec651ec0d75f21f9229cff224286295fc114b67424c96816ebdac62a802/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:12:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d894aec651ec0d75f21f9229cff224286295fc114b67424c96816ebdac62a802/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:12:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d894aec651ec0d75f21f9229cff224286295fc114b67424c96816ebdac62a802/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:12:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d894aec651ec0d75f21f9229cff224286295fc114b67424c96816ebdac62a802/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:12:07 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d894aec651ec0d75f21f9229cff224286295fc114b67424c96816ebdac62a802/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 06:12:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:08.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:08 np0005593232 podman[424541]: 2026-01-23 11:12:08.023035094 +0000 UTC m=+0.184348302 container init c6fe8bd8c850e477596861f39cd65cbda2476091a765fa42bc7639f24e6ee0de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Jan 23 06:12:08 np0005593232 podman[424541]: 2026-01-23 11:12:08.03170018 +0000 UTC m=+0.193013338 container start c6fe8bd8c850e477596861f39cd65cbda2476091a765fa42bc7639f24e6ee0de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:12:08 np0005593232 podman[424541]: 2026-01-23 11:12:08.03631127 +0000 UTC m=+0.197624428 container attach c6fe8bd8c850e477596861f39cd65cbda2476091a765fa42bc7639f24e6ee0de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 06:12:08 np0005593232 nova_compute[250269]: 2026-01-23 11:12:08.704 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:08 np0005593232 thirsty_meitner[424559]: --> passed data devices: 0 physical, 1 LVM
Jan 23 06:12:08 np0005593232 thirsty_meitner[424559]: --> relative data size: 1.0
Jan 23 06:12:08 np0005593232 thirsty_meitner[424559]: --> All data devices are unavailable
Jan 23 06:12:08 np0005593232 systemd[1]: libpod-c6fe8bd8c850e477596861f39cd65cbda2476091a765fa42bc7639f24e6ee0de.scope: Deactivated successfully.
Jan 23 06:12:08 np0005593232 conmon[424559]: conmon c6fe8bd8c850e4775968 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c6fe8bd8c850e477596861f39cd65cbda2476091a765fa42bc7639f24e6ee0de.scope/container/memory.events
Jan 23 06:12:08 np0005593232 podman[424541]: 2026-01-23 11:12:08.895826099 +0000 UTC m=+1.057139257 container died c6fe8bd8c850e477596861f39cd65cbda2476091a765fa42bc7639f24e6ee0de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 06:12:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d894aec651ec0d75f21f9229cff224286295fc114b67424c96816ebdac62a802-merged.mount: Deactivated successfully.
Jan 23 06:12:08 np0005593232 podman[424541]: 2026-01-23 11:12:08.963396807 +0000 UTC m=+1.124709975 container remove c6fe8bd8c850e477596861f39cd65cbda2476091a765fa42bc7639f24e6ee0de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:12:08 np0005593232 systemd[1]: libpod-conmon-c6fe8bd8c850e477596861f39cd65cbda2476091a765fa42bc7639f24e6ee0de.scope: Deactivated successfully.
Jan 23 06:12:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:12:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:09.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:12:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4393: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:12:09 np0005593232 podman[424727]: 2026-01-23 11:12:09.557787713 +0000 UTC m=+0.041507789 container create 603a21c6a731da0a13605475c1b47369947962e20ed504e91963a88005128d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 06:12:09 np0005593232 systemd[1]: Started libpod-conmon-603a21c6a731da0a13605475c1b47369947962e20ed504e91963a88005128d26.scope.
Jan 23 06:12:09 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:12:09 np0005593232 podman[424727]: 2026-01-23 11:12:09.539624777 +0000 UTC m=+0.023344833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:12:09 np0005593232 podman[424727]: 2026-01-23 11:12:09.639081559 +0000 UTC m=+0.122801595 container init 603a21c6a731da0a13605475c1b47369947962e20ed504e91963a88005128d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 23 06:12:09 np0005593232 podman[424727]: 2026-01-23 11:12:09.645893813 +0000 UTC m=+0.129613839 container start 603a21c6a731da0a13605475c1b47369947962e20ed504e91963a88005128d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_archimedes, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 06:12:09 np0005593232 bold_archimedes[424743]: 167 167
Jan 23 06:12:09 np0005593232 systemd[1]: libpod-603a21c6a731da0a13605475c1b47369947962e20ed504e91963a88005128d26.scope: Deactivated successfully.
Jan 23 06:12:09 np0005593232 podman[424727]: 2026-01-23 11:12:09.651246314 +0000 UTC m=+0.134966350 container attach 603a21c6a731da0a13605475c1b47369947962e20ed504e91963a88005128d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:12:09 np0005593232 podman[424727]: 2026-01-23 11:12:09.651512962 +0000 UTC m=+0.135232998 container died 603a21c6a731da0a13605475c1b47369947962e20ed504e91963a88005128d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:12:09 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c2d5793a8872abe524362e64fc190b973101844dcbef65413d1c6ffea482c1d0-merged.mount: Deactivated successfully.
Jan 23 06:12:09 np0005593232 podman[424727]: 2026-01-23 11:12:09.692707891 +0000 UTC m=+0.176427937 container remove 603a21c6a731da0a13605475c1b47369947962e20ed504e91963a88005128d26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_archimedes, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:12:09 np0005593232 systemd[1]: libpod-conmon-603a21c6a731da0a13605475c1b47369947962e20ed504e91963a88005128d26.scope: Deactivated successfully.
Jan 23 06:12:09 np0005593232 podman[424766]: 2026-01-23 11:12:09.827451634 +0000 UTC m=+0.022988363 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:12:09 np0005593232 podman[424766]: 2026-01-23 11:12:09.929909512 +0000 UTC m=+0.125446221 container create 2ffea9adbf630c454706eaac1c6b7ab07238490011f09b5fa3c38f273603aead (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mendel, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 23 06:12:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:12:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:10.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:12:10 np0005593232 systemd[1]: Started libpod-conmon-2ffea9adbf630c454706eaac1c6b7ab07238490011f09b5fa3c38f273603aead.scope.
Jan 23 06:12:10 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:12:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d2c1fb73df763ab852099ef39007f10ddf7dfb8354f0c0c0bd7120d991c5f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:12:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d2c1fb73df763ab852099ef39007f10ddf7dfb8354f0c0c0bd7120d991c5f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:12:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d2c1fb73df763ab852099ef39007f10ddf7dfb8354f0c0c0bd7120d991c5f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:12:10 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d2c1fb73df763ab852099ef39007f10ddf7dfb8354f0c0c0bd7120d991c5f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:12:10 np0005593232 podman[424766]: 2026-01-23 11:12:10.401488733 +0000 UTC m=+0.597025472 container init 2ffea9adbf630c454706eaac1c6b7ab07238490011f09b5fa3c38f273603aead (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mendel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:12:10 np0005593232 podman[424766]: 2026-01-23 11:12:10.409397587 +0000 UTC m=+0.604934296 container start 2ffea9adbf630c454706eaac1c6b7ab07238490011f09b5fa3c38f273603aead (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 23 06:12:10 np0005593232 nova_compute[250269]: 2026-01-23 11:12:10.574 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:10 np0005593232 podman[424766]: 2026-01-23 11:12:10.662937231 +0000 UTC m=+0.858473950 container attach 2ffea9adbf630c454706eaac1c6b7ab07238490011f09b5fa3c38f273603aead (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]: {
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:    "0": [
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:        {
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:            "devices": [
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:                "/dev/loop3"
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:            ],
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:            "lv_name": "ceph_lv0",
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:            "lv_size": "7511998464",
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:            "name": "ceph_lv0",
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:            "tags": {
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:                "ceph.cephx_lockbox_secret": "",
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:                "ceph.cluster_name": "ceph",
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:                "ceph.crush_device_class": "",
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:                "ceph.encrypted": "0",
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:                "ceph.osd_id": "0",
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:                "ceph.type": "block",
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:                "ceph.vdo": "0"
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:            },
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:            "type": "block",
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:            "vg_name": "ceph_vg0"
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:        }
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]:    ]
Jan 23 06:12:11 np0005593232 recursing_mendel[424783]: }
Jan 23 06:12:11 np0005593232 systemd[1]: libpod-2ffea9adbf630c454706eaac1c6b7ab07238490011f09b5fa3c38f273603aead.scope: Deactivated successfully.
Jan 23 06:12:11 np0005593232 podman[424766]: 2026-01-23 11:12:11.227078809 +0000 UTC m=+1.422615518 container died 2ffea9adbf630c454706eaac1c6b7ab07238490011f09b5fa3c38f273603aead (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mendel, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:12:11 np0005593232 nova_compute[250269]: 2026-01-23 11:12:11.231 250273 DEBUG oslo_concurrency.lockutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Acquiring lock "d4c75524-52b8-4c2b-b0cb-18d94089013b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:12:11 np0005593232 nova_compute[250269]: 2026-01-23 11:12:11.231 250273 DEBUG oslo_concurrency.lockutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Lock "d4c75524-52b8-4c2b-b0cb-18d94089013b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:12:11 np0005593232 nova_compute[250269]: 2026-01-23 11:12:11.248 250273 DEBUG nova.compute.manager [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 23 06:12:11 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e1d2c1fb73df763ab852099ef39007f10ddf7dfb8354f0c0c0bd7120d991c5f2-merged.mount: Deactivated successfully.
Jan 23 06:12:11 np0005593232 podman[424766]: 2026-01-23 11:12:11.285534108 +0000 UTC m=+1.481070817 container remove 2ffea9adbf630c454706eaac1c6b7ab07238490011f09b5fa3c38f273603aead (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:12:11 np0005593232 systemd[1]: libpod-conmon-2ffea9adbf630c454706eaac1c6b7ab07238490011f09b5fa3c38f273603aead.scope: Deactivated successfully.
Jan 23 06:12:11 np0005593232 nova_compute[250269]: 2026-01-23 11:12:11.315 250273 DEBUG oslo_concurrency.lockutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:12:11 np0005593232 nova_compute[250269]: 2026-01-23 11:12:11.316 250273 DEBUG oslo_concurrency.lockutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:12:11 np0005593232 nova_compute[250269]: 2026-01-23 11:12:11.323 250273 DEBUG nova.virt.hardware [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 23 06:12:11 np0005593232 nova_compute[250269]: 2026-01-23 11:12:11.324 250273 INFO nova.compute.claims [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Jan 23 06:12:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:12:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:11.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:12:11 np0005593232 nova_compute[250269]: 2026-01-23 11:12:11.425 250273 DEBUG oslo_concurrency.processutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:12:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4394: 321 pgs: 321 active+clean; 120 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:12:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:12:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:12:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3499502235' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:12:11 np0005593232 podman[424964]: 2026-01-23 11:12:11.864582367 +0000 UTC m=+0.037741492 container create e4542dd061853f3180dbe04d1fcb7de1abfc0a3a9c3723f6c1a264b80d0d7923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 06:12:11 np0005593232 nova_compute[250269]: 2026-01-23 11:12:11.867 250273 DEBUG oslo_concurrency.processutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:12:11 np0005593232 nova_compute[250269]: 2026-01-23 11:12:11.874 250273 DEBUG nova.compute.provider_tree [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:12:11 np0005593232 nova_compute[250269]: 2026-01-23 11:12:11.891 250273 DEBUG nova.scheduler.client.report [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:12:11 np0005593232 systemd[1]: Started libpod-conmon-e4542dd061853f3180dbe04d1fcb7de1abfc0a3a9c3723f6c1a264b80d0d7923.scope.
Jan 23 06:12:11 np0005593232 nova_compute[250269]: 2026-01-23 11:12:11.913 250273 DEBUG oslo_concurrency.lockutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:12:11 np0005593232 nova_compute[250269]: 2026-01-23 11:12:11.914 250273 DEBUG nova.compute.manager [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 23 06:12:11 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:12:11 np0005593232 podman[424964]: 2026-01-23 11:12:11.934659536 +0000 UTC m=+0.107818671 container init e4542dd061853f3180dbe04d1fcb7de1abfc0a3a9c3723f6c1a264b80d0d7923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:12:11 np0005593232 podman[424964]: 2026-01-23 11:12:11.940212653 +0000 UTC m=+0.113371778 container start e4542dd061853f3180dbe04d1fcb7de1abfc0a3a9c3723f6c1a264b80d0d7923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_elgamal, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 06:12:11 np0005593232 heuristic_elgamal[424983]: 167 167
Jan 23 06:12:11 np0005593232 podman[424964]: 2026-01-23 11:12:11.848493161 +0000 UTC m=+0.021652306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:12:11 np0005593232 podman[424964]: 2026-01-23 11:12:11.943726703 +0000 UTC m=+0.116885848 container attach e4542dd061853f3180dbe04d1fcb7de1abfc0a3a9c3723f6c1a264b80d0d7923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 06:12:11 np0005593232 systemd[1]: libpod-e4542dd061853f3180dbe04d1fcb7de1abfc0a3a9c3723f6c1a264b80d0d7923.scope: Deactivated successfully.
Jan 23 06:12:11 np0005593232 podman[424964]: 2026-01-23 11:12:11.944484644 +0000 UTC m=+0.117643769 container died e4542dd061853f3180dbe04d1fcb7de1abfc0a3a9c3723f6c1a264b80d0d7923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_elgamal, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:12:11 np0005593232 nova_compute[250269]: 2026-01-23 11:12:11.961 250273 DEBUG nova.compute.manager [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 23 06:12:11 np0005593232 nova_compute[250269]: 2026-01-23 11:12:11.963 250273 DEBUG nova.network.neutron [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 23 06:12:11 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4d9fbb39d0dc445cf8cd8f89960892fda1a6fbff98fd7180aaeae35a7f46af15-merged.mount: Deactivated successfully.
Jan 23 06:12:11 np0005593232 podman[424964]: 2026-01-23 11:12:11.979738035 +0000 UTC m=+0.152897160 container remove e4542dd061853f3180dbe04d1fcb7de1abfc0a3a9c3723f6c1a264b80d0d7923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 23 06:12:11 np0005593232 systemd[1]: libpod-conmon-e4542dd061853f3180dbe04d1fcb7de1abfc0a3a9c3723f6c1a264b80d0d7923.scope: Deactivated successfully.
Jan 23 06:12:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:12.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:12 np0005593232 podman[425006]: 2026-01-23 11:12:12.139175409 +0000 UTC m=+0.040585273 container create afd885ef86fd6083ab40ce64e8763695024f17277cb3a455e499f903bd526a90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_benz, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 06:12:12 np0005593232 nova_compute[250269]: 2026-01-23 11:12:12.155 250273 INFO nova.virt.libvirt.driver [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 23 06:12:12 np0005593232 systemd[1]: Started libpod-conmon-afd885ef86fd6083ab40ce64e8763695024f17277cb3a455e499f903bd526a90.scope.
Jan 23 06:12:12 np0005593232 nova_compute[250269]: 2026-01-23 11:12:12.187 250273 DEBUG nova.compute.manager [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 23 06:12:12 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:12:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec876cbfad8bd91679d08f6a6f2c2954f700496bbb77745267bed082cf3aa922/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:12:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec876cbfad8bd91679d08f6a6f2c2954f700496bbb77745267bed082cf3aa922/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:12:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec876cbfad8bd91679d08f6a6f2c2954f700496bbb77745267bed082cf3aa922/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:12:12 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec876cbfad8bd91679d08f6a6f2c2954f700496bbb77745267bed082cf3aa922/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:12:12 np0005593232 podman[425006]: 2026-01-23 11:12:12.213963821 +0000 UTC m=+0.115373705 container init afd885ef86fd6083ab40ce64e8763695024f17277cb3a455e499f903bd526a90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:12:12 np0005593232 podman[425006]: 2026-01-23 11:12:12.121811486 +0000 UTC m=+0.023221370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:12:12 np0005593232 podman[425006]: 2026-01-23 11:12:12.219893639 +0000 UTC m=+0.121303493 container start afd885ef86fd6083ab40ce64e8763695024f17277cb3a455e499f903bd526a90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_benz, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 06:12:12 np0005593232 podman[425006]: 2026-01-23 11:12:12.223222254 +0000 UTC m=+0.124632118 container attach afd885ef86fd6083ab40ce64e8763695024f17277cb3a455e499f903bd526a90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 23 06:12:12 np0005593232 nova_compute[250269]: 2026-01-23 11:12:12.272 250273 DEBUG nova.compute.manager [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 23 06:12:12 np0005593232 nova_compute[250269]: 2026-01-23 11:12:12.274 250273 DEBUG nova.virt.libvirt.driver [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 23 06:12:12 np0005593232 nova_compute[250269]: 2026-01-23 11:12:12.275 250273 INFO nova.virt.libvirt.driver [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Creating image(s)#033[00m
Jan 23 06:12:12 np0005593232 nova_compute[250269]: 2026-01-23 11:12:12.304 250273 DEBUG nova.storage.rbd_utils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] rbd image d4c75524-52b8-4c2b-b0cb-18d94089013b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 06:12:12 np0005593232 nova_compute[250269]: 2026-01-23 11:12:12.336 250273 DEBUG nova.storage.rbd_utils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] rbd image d4c75524-52b8-4c2b-b0cb-18d94089013b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 06:12:12 np0005593232 nova_compute[250269]: 2026-01-23 11:12:12.363 250273 DEBUG nova.storage.rbd_utils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] rbd image d4c75524-52b8-4c2b-b0cb-18d94089013b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 06:12:12 np0005593232 nova_compute[250269]: 2026-01-23 11:12:12.367 250273 DEBUG oslo_concurrency.processutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:12:12 np0005593232 nova_compute[250269]: 2026-01-23 11:12:12.436 250273 DEBUG oslo_concurrency.processutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:12:12 np0005593232 nova_compute[250269]: 2026-01-23 11:12:12.437 250273 DEBUG oslo_concurrency.lockutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Acquiring lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:12:12 np0005593232 nova_compute[250269]: 2026-01-23 11:12:12.438 250273 DEBUG oslo_concurrency.lockutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:12:12 np0005593232 nova_compute[250269]: 2026-01-23 11:12:12.438 250273 DEBUG oslo_concurrency.lockutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Lock "a6f655456a04e1d13ef2e44ed4544c38917863a2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:12:12 np0005593232 nova_compute[250269]: 2026-01-23 11:12:12.467 250273 DEBUG nova.storage.rbd_utils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] rbd image d4c75524-52b8-4c2b-b0cb-18d94089013b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 06:12:12 np0005593232 nova_compute[250269]: 2026-01-23 11:12:12.471 250273 DEBUG oslo_concurrency.processutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 d4c75524-52b8-4c2b-b0cb-18d94089013b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:12:12 np0005593232 nova_compute[250269]: 2026-01-23 11:12:12.918 250273 DEBUG nova.policy [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5d6a458f5d9345379b05f0cdb69a7b0f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3a245f7970f14fffa60af2ff972b4bfd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 23 06:12:12 np0005593232 nova_compute[250269]: 2026-01-23 11:12:12.948 250273 DEBUG oslo_concurrency.processutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/a6f655456a04e1d13ef2e44ed4544c38917863a2 d4c75524-52b8-4c2b-b0cb-18d94089013b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:12:13 np0005593232 nova_compute[250269]: 2026-01-23 11:12:13.044 250273 DEBUG nova.storage.rbd_utils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] resizing rbd image d4c75524-52b8-4c2b-b0cb-18d94089013b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 23 06:12:13 np0005593232 sweet_benz[425022]: {
Jan 23 06:12:13 np0005593232 sweet_benz[425022]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 06:12:13 np0005593232 sweet_benz[425022]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:12:13 np0005593232 sweet_benz[425022]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 06:12:13 np0005593232 sweet_benz[425022]:        "osd_id": 0,
Jan 23 06:12:13 np0005593232 sweet_benz[425022]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:12:13 np0005593232 sweet_benz[425022]:        "type": "bluestore"
Jan 23 06:12:13 np0005593232 sweet_benz[425022]:    }
Jan 23 06:12:13 np0005593232 sweet_benz[425022]: }
Jan 23 06:12:13 np0005593232 systemd[1]: libpod-afd885ef86fd6083ab40ce64e8763695024f17277cb3a455e499f903bd526a90.scope: Deactivated successfully.
Jan 23 06:12:13 np0005593232 podman[425006]: 2026-01-23 11:12:13.100801305 +0000 UTC m=+1.002211189 container died afd885ef86fd6083ab40ce64e8763695024f17277cb3a455e499f903bd526a90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_benz, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:12:13 np0005593232 nova_compute[250269]: 2026-01-23 11:12:13.159 250273 DEBUG nova.objects.instance [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Lazy-loading 'migration_context' on Instance uuid d4c75524-52b8-4c2b-b0cb-18d94089013b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 06:12:13 np0005593232 nova_compute[250269]: 2026-01-23 11:12:13.176 250273 DEBUG nova.virt.libvirt.driver [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 23 06:12:13 np0005593232 nova_compute[250269]: 2026-01-23 11:12:13.177 250273 DEBUG nova.virt.libvirt.driver [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Ensure instance console log exists: /var/lib/nova/instances/d4c75524-52b8-4c2b-b0cb-18d94089013b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 23 06:12:13 np0005593232 nova_compute[250269]: 2026-01-23 11:12:13.177 250273 DEBUG oslo_concurrency.lockutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:12:13 np0005593232 nova_compute[250269]: 2026-01-23 11:12:13.178 250273 DEBUG oslo_concurrency.lockutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:12:13 np0005593232 nova_compute[250269]: 2026-01-23 11:12:13.178 250273 DEBUG oslo_concurrency.lockutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:12:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:13.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4395: 321 pgs: 321 active+clean; 131 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.4 KiB/s rd, 149 KiB/s wr, 9 op/s
Jan 23 06:12:13 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ec876cbfad8bd91679d08f6a6f2c2954f700496bbb77745267bed082cf3aa922-merged.mount: Deactivated successfully.
Jan 23 06:12:13 np0005593232 nova_compute[250269]: 2026-01-23 11:12:13.613 250273 DEBUG nova.network.neutron [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Successfully created port: 56bb5fc8-f112-47c7-84d3-d47e53c4d481 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 23 06:12:13 np0005593232 nova_compute[250269]: 2026-01-23 11:12:13.708 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:13 np0005593232 podman[425006]: 2026-01-23 11:12:13.97180289 +0000 UTC m=+1.873212744 container remove afd885ef86fd6083ab40ce64e8763695024f17277cb3a455e499f903bd526a90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_benz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 06:12:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 06:12:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:14.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:14 np0005593232 systemd[1]: libpod-conmon-afd885ef86fd6083ab40ce64e8763695024f17277cb3a455e499f903bd526a90.scope: Deactivated successfully.
Jan 23 06:12:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:12:14 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 06:12:14 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:12:14 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 73774cd1-2e43-4af4-adb6-bd6921ac856d does not exist
Jan 23 06:12:14 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c9b1c835-2b42-46a1-8679-b58e7d95508a does not exist
Jan 23 06:12:14 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev bbe6da90-3a36-4955-93bb-2d24f1d1fc36 does not exist
Jan 23 06:12:14 np0005593232 nova_compute[250269]: 2026-01-23 11:12:14.891 250273 DEBUG nova.network.neutron [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Successfully updated port: 56bb5fc8-f112-47c7-84d3-d47e53c4d481 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 23 06:12:14 np0005593232 nova_compute[250269]: 2026-01-23 11:12:14.919 250273 DEBUG oslo_concurrency.lockutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Acquiring lock "refresh_cache-d4c75524-52b8-4c2b-b0cb-18d94089013b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 06:12:14 np0005593232 nova_compute[250269]: 2026-01-23 11:12:14.919 250273 DEBUG oslo_concurrency.lockutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Acquired lock "refresh_cache-d4c75524-52b8-4c2b-b0cb-18d94089013b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 06:12:14 np0005593232 nova_compute[250269]: 2026-01-23 11:12:14.919 250273 DEBUG nova.network.neutron [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 06:12:15 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:12:15 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:12:15 np0005593232 nova_compute[250269]: 2026-01-23 11:12:15.059 250273 DEBUG nova.network.neutron [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 23 06:12:15 np0005593232 nova_compute[250269]: 2026-01-23 11:12:15.067 250273 DEBUG nova.compute.manager [req-a3aa8303-daac-4ef1-be40-8f8f66241694 req-6818bb9e-7322-4086-a8ec-151f270e02ae 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Received event network-changed-56bb5fc8-f112-47c7-84d3-d47e53c4d481 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:12:15 np0005593232 nova_compute[250269]: 2026-01-23 11:12:15.067 250273 DEBUG nova.compute.manager [req-a3aa8303-daac-4ef1-be40-8f8f66241694 req-6818bb9e-7322-4086-a8ec-151f270e02ae 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Refreshing instance network info cache due to event network-changed-56bb5fc8-f112-47c7-84d3-d47e53c4d481. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 06:12:15 np0005593232 nova_compute[250269]: 2026-01-23 11:12:15.067 250273 DEBUG oslo_concurrency.lockutils [req-a3aa8303-daac-4ef1-be40-8f8f66241694 req-6818bb9e-7322-4086-a8ec-151f270e02ae 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-d4c75524-52b8-4c2b-b0cb-18d94089013b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 06:12:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:12:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:15.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:12:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4396: 321 pgs: 321 active+clean; 131 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 5.4 KiB/s rd, 149 KiB/s wr, 9 op/s
Jan 23 06:12:15 np0005593232 nova_compute[250269]: 2026-01-23 11:12:15.606 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:16.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:12:16 np0005593232 nova_compute[250269]: 2026-01-23 11:12:16.924 250273 DEBUG nova.network.neutron [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Updating instance_info_cache with network_info: [{"id": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "address": "fa:16:3e:be:cd:e8", "network": {"id": "42899517-91b9-42e3-96a7-29180211a7a4", "bridge": "br-int", "label": "tempest-TestShelveInstance-1168103236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a245f7970f14fffa60af2ff972b4bfd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56bb5fc8-f1", "ovs_interfaceid": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 06:12:16 np0005593232 nova_compute[250269]: 2026-01-23 11:12:16.945 250273 DEBUG oslo_concurrency.lockutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Releasing lock "refresh_cache-d4c75524-52b8-4c2b-b0cb-18d94089013b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 06:12:16 np0005593232 nova_compute[250269]: 2026-01-23 11:12:16.945 250273 DEBUG nova.compute.manager [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Instance network_info: |[{"id": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "address": "fa:16:3e:be:cd:e8", "network": {"id": "42899517-91b9-42e3-96a7-29180211a7a4", "bridge": "br-int", "label": "tempest-TestShelveInstance-1168103236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a245f7970f14fffa60af2ff972b4bfd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56bb5fc8-f1", "ovs_interfaceid": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 23 06:12:16 np0005593232 nova_compute[250269]: 2026-01-23 11:12:16.946 250273 DEBUG oslo_concurrency.lockutils [req-a3aa8303-daac-4ef1-be40-8f8f66241694 req-6818bb9e-7322-4086-a8ec-151f270e02ae 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-d4c75524-52b8-4c2b-b0cb-18d94089013b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 06:12:16 np0005593232 nova_compute[250269]: 2026-01-23 11:12:16.946 250273 DEBUG nova.network.neutron [req-a3aa8303-daac-4ef1-be40-8f8f66241694 req-6818bb9e-7322-4086-a8ec-151f270e02ae 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Refreshing network info cache for port 56bb5fc8-f112-47c7-84d3-d47e53c4d481 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 06:12:16 np0005593232 nova_compute[250269]: 2026-01-23 11:12:16.950 250273 DEBUG nova.virt.libvirt.driver [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Start _get_guest_xml network_info=[{"id": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "address": "fa:16:3e:be:cd:e8", "network": {"id": "42899517-91b9-42e3-96a7-29180211a7a4", "bridge": "br-int", "label": "tempest-TestShelveInstance-1168103236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a245f7970f14fffa60af2ff972b4bfd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56bb5fc8-f1", "ovs_interfaceid": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_options': None, 'device_name': '/dev/vda', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'device_type': 'disk', 'encryption_format': None, 'boot_index': 0, 'size': 0, 'encrypted': False, 'guest_format': None, 'image_id': '84c0ef19-7f67-4bd3-95d8-507c3e0942ed'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 23 06:12:16 np0005593232 nova_compute[250269]: 2026-01-23 11:12:16.956 250273 WARNING nova.virt.libvirt.driver [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 06:12:16 np0005593232 nova_compute[250269]: 2026-01-23 11:12:16.962 250273 DEBUG nova.virt.libvirt.host [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 23 06:12:16 np0005593232 nova_compute[250269]: 2026-01-23 11:12:16.963 250273 DEBUG nova.virt.libvirt.host [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 23 06:12:16 np0005593232 nova_compute[250269]: 2026-01-23 11:12:16.966 250273 DEBUG nova.virt.libvirt.host [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 23 06:12:16 np0005593232 nova_compute[250269]: 2026-01-23 11:12:16.967 250273 DEBUG nova.virt.libvirt.host [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 23 06:12:16 np0005593232 nova_compute[250269]: 2026-01-23 11:12:16.968 250273 DEBUG nova.virt.libvirt.driver [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 23 06:12:16 np0005593232 nova_compute[250269]: 2026-01-23 11:12:16.969 250273 DEBUG nova.virt.hardware [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-23T09:27:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='68d42077-c749-4366-ba3e-07758debb02d',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-23T09:27:25Z,direct_url=<?>,disk_format='qcow2',id=84c0ef19-7f67-4bd3-95d8-507c3e0942ed,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7ace0d3e1d354841bc1ddea0c12699d6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-23T09:27:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 23 06:12:16 np0005593232 nova_compute[250269]: 2026-01-23 11:12:16.969 250273 DEBUG nova.virt.hardware [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 23 06:12:16 np0005593232 nova_compute[250269]: 2026-01-23 11:12:16.969 250273 DEBUG nova.virt.hardware [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 23 06:12:16 np0005593232 nova_compute[250269]: 2026-01-23 11:12:16.970 250273 DEBUG nova.virt.hardware [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 23 06:12:16 np0005593232 nova_compute[250269]: 2026-01-23 11:12:16.970 250273 DEBUG nova.virt.hardware [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 23 06:12:16 np0005593232 nova_compute[250269]: 2026-01-23 11:12:16.970 250273 DEBUG nova.virt.hardware [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 23 06:12:16 np0005593232 nova_compute[250269]: 2026-01-23 11:12:16.971 250273 DEBUG nova.virt.hardware [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 23 06:12:16 np0005593232 nova_compute[250269]: 2026-01-23 11:12:16.971 250273 DEBUG nova.virt.hardware [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 23 06:12:16 np0005593232 nova_compute[250269]: 2026-01-23 11:12:16.971 250273 DEBUG nova.virt.hardware [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 23 06:12:16 np0005593232 nova_compute[250269]: 2026-01-23 11:12:16.971 250273 DEBUG nova.virt.hardware [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 23 06:12:16 np0005593232 nova_compute[250269]: 2026-01-23 11:12:16.972 250273 DEBUG nova.virt.hardware [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 23 06:12:16 np0005593232 nova_compute[250269]: 2026-01-23 11:12:16.975 250273 DEBUG oslo_concurrency.processutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:12:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:17.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.437 250273 DEBUG oslo_concurrency.processutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.462 250273 DEBUG nova.storage.rbd_utils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] rbd image d4c75524-52b8-4c2b-b0cb-18d94089013b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.466 250273 DEBUG oslo_concurrency.processutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:12:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4397: 321 pgs: 321 active+clean; 155 MiB data, 1.5 GiB used, 19 GiB / 21 GiB avail; 6.4 KiB/s rd, 961 KiB/s wr, 12 op/s
Jan 23 06:12:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 06:12:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3693928580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.891 250273 DEBUG oslo_concurrency.processutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.893 250273 DEBUG nova.virt.libvirt.vif [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T11:12:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-926357377',display_name='tempest-TestShelveInstance-server-926357377',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-926357377',id=223,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEFOGuv6jcY9DwIy/xYvt2X4Vb1HISACx7cX6o9lDAD3l3O1QRG07pd8MQdd17GGOSBZRG+y+TaN6Gc5Y3oNpLF+mD7AORx9ZprSr452pQ3EZnBrolaOtjqq79YfGAlew==',key_name='tempest-TestShelveInstance-93398597',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3a245f7970f14fffa60af2ff972b4bfd',ramdisk_id='',reservation_id='r-9cvenz37',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-869807080',owner_user_name='tempest-TestShelveInstance-869807080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T11:12:12Z,user_data=None,user_id='5d6a458f5d9345379b05f0cdb69a7b0f',uuid=d4c75524-52b8-4c2b-b0cb-18d94089013b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "address": "fa:16:3e:be:cd:e8", "network": {"id": "42899517-91b9-42e3-96a7-29180211a7a4", "bridge": "br-int", "label": "tempest-TestShelveInstance-1168103236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3a245f7970f14fffa60af2ff972b4bfd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56bb5fc8-f1", "ovs_interfaceid": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.894 250273 DEBUG nova.network.os_vif_util [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Converting VIF {"id": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "address": "fa:16:3e:be:cd:e8", "network": {"id": "42899517-91b9-42e3-96a7-29180211a7a4", "bridge": "br-int", "label": "tempest-TestShelveInstance-1168103236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a245f7970f14fffa60af2ff972b4bfd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56bb5fc8-f1", "ovs_interfaceid": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.895 250273 DEBUG nova.network.os_vif_util [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:cd:e8,bridge_name='br-int',has_traffic_filtering=True,id=56bb5fc8-f112-47c7-84d3-d47e53c4d481,network=Network(42899517-91b9-42e3-96a7-29180211a7a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56bb5fc8-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.896 250273 DEBUG nova.objects.instance [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Lazy-loading 'pci_devices' on Instance uuid d4c75524-52b8-4c2b-b0cb-18d94089013b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.921 250273 DEBUG nova.virt.libvirt.driver [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] End _get_guest_xml xml=<domain type="kvm">
Jan 23 06:12:17 np0005593232 nova_compute[250269]:  <uuid>d4c75524-52b8-4c2b-b0cb-18d94089013b</uuid>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:  <name>instance-000000df</name>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:  <memory>131072</memory>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:  <vcpu>1</vcpu>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:  <metadata>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <nova:name>tempest-TestShelveInstance-server-926357377</nova:name>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <nova:creationTime>2026-01-23 11:12:16</nova:creationTime>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <nova:flavor name="m1.nano">
Jan 23 06:12:17 np0005593232 nova_compute[250269]:        <nova:memory>128</nova:memory>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:        <nova:disk>1</nova:disk>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:        <nova:swap>0</nova:swap>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:        <nova:ephemeral>0</nova:ephemeral>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:        <nova:vcpus>1</nova:vcpus>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      </nova:flavor>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <nova:owner>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:        <nova:user uuid="5d6a458f5d9345379b05f0cdb69a7b0f">tempest-TestShelveInstance-869807080-project-member</nova:user>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:        <nova:project uuid="3a245f7970f14fffa60af2ff972b4bfd">tempest-TestShelveInstance-869807080</nova:project>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      </nova:owner>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <nova:root type="image" uuid="84c0ef19-7f67-4bd3-95d8-507c3e0942ed"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <nova:ports>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:        <nova:port uuid="56bb5fc8-f112-47c7-84d3-d47e53c4d481">
Jan 23 06:12:17 np0005593232 nova_compute[250269]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:        </nova:port>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      </nova:ports>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    </nova:instance>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:  </metadata>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:  <sysinfo type="smbios">
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <system>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <entry name="manufacturer">RDO</entry>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <entry name="product">OpenStack Compute</entry>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <entry name="serial">d4c75524-52b8-4c2b-b0cb-18d94089013b</entry>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <entry name="uuid">d4c75524-52b8-4c2b-b0cb-18d94089013b</entry>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <entry name="family">Virtual Machine</entry>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    </system>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:  </sysinfo>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:  <os>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <boot dev="hd"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <smbios mode="sysinfo"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:  </os>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:  <features>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <acpi/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <apic/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <vmcoreinfo/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:  </features>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:  <clock offset="utc">
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <timer name="pit" tickpolicy="delay"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <timer name="hpet" present="no"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:  </clock>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:  <cpu mode="custom" match="exact">
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <model>Nehalem</model>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <topology sockets="1" cores="1" threads="1"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:  </cpu>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:  <devices>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <disk type="network" device="disk">
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/d4c75524-52b8-4c2b-b0cb-18d94089013b_disk">
Jan 23 06:12:17 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      </source>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 06:12:17 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      </auth>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <target dev="vda" bus="virtio"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    </disk>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <disk type="network" device="cdrom">
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <driver type="raw" cache="none"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <source protocol="rbd" name="vms/d4c75524-52b8-4c2b-b0cb-18d94089013b_disk.config">
Jan 23 06:12:17 np0005593232 nova_compute[250269]:        <host name="192.168.122.100" port="6789"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:        <host name="192.168.122.102" port="6789"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:        <host name="192.168.122.101" port="6789"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      </source>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <auth username="openstack">
Jan 23 06:12:17 np0005593232 nova_compute[250269]:        <secret type="ceph" uuid="e1533653-0a5a-584c-b34b-8689f0d32e77"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      </auth>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <target dev="sda" bus="sata"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    </disk>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <interface type="ethernet">
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <mac address="fa:16:3e:be:cd:e8"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <driver name="vhost" rx_queue_size="512"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <mtu size="1442"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <target dev="tap56bb5fc8-f1"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    </interface>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <serial type="pty">
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <log file="/var/lib/nova/instances/d4c75524-52b8-4c2b-b0cb-18d94089013b/console.log" append="off"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    </serial>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <video>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <model type="virtio"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    </video>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <input type="tablet" bus="usb"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <rng model="virtio">
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <backend model="random">/dev/urandom</backend>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    </rng>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="pci" model="pcie-root-port"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <controller type="usb" index="0"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    <memballoon model="virtio">
Jan 23 06:12:17 np0005593232 nova_compute[250269]:      <stats period="10"/>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:    </memballoon>
Jan 23 06:12:17 np0005593232 nova_compute[250269]:  </devices>
Jan 23 06:12:17 np0005593232 nova_compute[250269]: </domain>
Jan 23 06:12:17 np0005593232 nova_compute[250269]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.923 250273 DEBUG nova.compute.manager [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Preparing to wait for external event network-vif-plugged-56bb5fc8-f112-47c7-84d3-d47e53c4d481 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.923 250273 DEBUG oslo_concurrency.lockutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Acquiring lock "d4c75524-52b8-4c2b-b0cb-18d94089013b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.924 250273 DEBUG oslo_concurrency.lockutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Lock "d4c75524-52b8-4c2b-b0cb-18d94089013b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.924 250273 DEBUG oslo_concurrency.lockutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Lock "d4c75524-52b8-4c2b-b0cb-18d94089013b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.925 250273 DEBUG nova.virt.libvirt.vif [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-23T11:12:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-926357377',display_name='tempest-TestShelveInstance-server-926357377',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-926357377',id=223,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEFOGuv6jcY9DwIy/xYvt2X4Vb1HISACx7cX6o9lDAD3l3O1QRG07pd8MQdd17GGOSBZRG+y+TaN6Gc5Y3oNpLF+mD7AORx9ZprSr452pQ3EZnBrolaOtjqq79YfGAlew==',key_name='tempest-TestShelveInstance-93398597',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3a245f7970f14fffa60af2ff972b4bfd',ramdisk_id='',reservation_id='r-9cvenz37',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-869807080',owner_user_name='tempest-TestShelveInstance-869807080-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-23T11:12:12Z,user_data=None,user_id='5d6a458f5d9345379b05f0cdb69a7b0f',uuid=d4c75524-52b8-4c2b-b0cb-18d94089013b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "address": "fa:16:3e:be:cd:e8", "network": {"id": "42899517-91b9-42e3-96a7-29180211a7a4", "bridge": "br-int", "label": "tempest-TestShelveInstance-1168103236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "3a245f7970f14fffa60af2ff972b4bfd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56bb5fc8-f1", "ovs_interfaceid": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.925 250273 DEBUG nova.network.os_vif_util [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Converting VIF {"id": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "address": "fa:16:3e:be:cd:e8", "network": {"id": "42899517-91b9-42e3-96a7-29180211a7a4", "bridge": "br-int", "label": "tempest-TestShelveInstance-1168103236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a245f7970f14fffa60af2ff972b4bfd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56bb5fc8-f1", "ovs_interfaceid": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.925 250273 DEBUG nova.network.os_vif_util [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:cd:e8,bridge_name='br-int',has_traffic_filtering=True,id=56bb5fc8-f112-47c7-84d3-d47e53c4d481,network=Network(42899517-91b9-42e3-96a7-29180211a7a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56bb5fc8-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.926 250273 DEBUG os_vif [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:cd:e8,bridge_name='br-int',has_traffic_filtering=True,id=56bb5fc8-f112-47c7-84d3-d47e53c4d481,network=Network(42899517-91b9-42e3-96a7-29180211a7a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56bb5fc8-f1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.926 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.927 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.927 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.932 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.932 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap56bb5fc8-f1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.932 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap56bb5fc8-f1, col_values=(('external_ids', {'iface-id': '56bb5fc8-f112-47c7-84d3-d47e53c4d481', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:cd:e8', 'vm-uuid': 'd4c75524-52b8-4c2b-b0cb-18d94089013b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.933 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:17 np0005593232 NetworkManager[49057]: <info>  [1769166737.9348] manager: (tap56bb5fc8-f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.937 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.941 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:17 np0005593232 nova_compute[250269]: 2026-01-23 11:12:17.941 250273 INFO os_vif [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:cd:e8,bridge_name='br-int',has_traffic_filtering=True,id=56bb5fc8-f112-47c7-84d3-d47e53c4d481,network=Network(42899517-91b9-42e3-96a7-29180211a7a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56bb5fc8-f1')#033[00m
Jan 23 06:12:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:18.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:18 np0005593232 nova_compute[250269]: 2026-01-23 11:12:18.064 250273 DEBUG nova.virt.libvirt.driver [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 06:12:18 np0005593232 nova_compute[250269]: 2026-01-23 11:12:18.065 250273 DEBUG nova.virt.libvirt.driver [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 23 06:12:18 np0005593232 nova_compute[250269]: 2026-01-23 11:12:18.065 250273 DEBUG nova.virt.libvirt.driver [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] No VIF found with MAC fa:16:3e:be:cd:e8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 23 06:12:18 np0005593232 nova_compute[250269]: 2026-01-23 11:12:18.066 250273 INFO nova.virt.libvirt.driver [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Using config drive#033[00m
Jan 23 06:12:18 np0005593232 nova_compute[250269]: 2026-01-23 11:12:18.102 250273 DEBUG nova.storage.rbd_utils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] rbd image d4c75524-52b8-4c2b-b0cb-18d94089013b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 06:12:18 np0005593232 nova_compute[250269]: 2026-01-23 11:12:18.202 250273 DEBUG nova.network.neutron [req-a3aa8303-daac-4ef1-be40-8f8f66241694 req-6818bb9e-7322-4086-a8ec-151f270e02ae 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Updated VIF entry in instance network info cache for port 56bb5fc8-f112-47c7-84d3-d47e53c4d481. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 06:12:18 np0005593232 nova_compute[250269]: 2026-01-23 11:12:18.203 250273 DEBUG nova.network.neutron [req-a3aa8303-daac-4ef1-be40-8f8f66241694 req-6818bb9e-7322-4086-a8ec-151f270e02ae 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Updating instance_info_cache with network_info: [{"id": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "address": "fa:16:3e:be:cd:e8", "network": {"id": "42899517-91b9-42e3-96a7-29180211a7a4", "bridge": "br-int", "label": "tempest-TestShelveInstance-1168103236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a245f7970f14fffa60af2ff972b4bfd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56bb5fc8-f1", "ovs_interfaceid": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 06:12:18 np0005593232 nova_compute[250269]: 2026-01-23 11:12:18.263 250273 DEBUG oslo_concurrency.lockutils [req-a3aa8303-daac-4ef1-be40-8f8f66241694 req-6818bb9e-7322-4086-a8ec-151f270e02ae 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-d4c75524-52b8-4c2b-b0cb-18d94089013b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 06:12:18 np0005593232 nova_compute[250269]: 2026-01-23 11:12:18.575 250273 INFO nova.virt.libvirt.driver [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Creating config drive at /var/lib/nova/instances/d4c75524-52b8-4c2b-b0cb-18d94089013b/disk.config#033[00m
Jan 23 06:12:18 np0005593232 nova_compute[250269]: 2026-01-23 11:12:18.580 250273 DEBUG oslo_concurrency.processutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d4c75524-52b8-4c2b-b0cb-18d94089013b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9qib2j15 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:12:18 np0005593232 nova_compute[250269]: 2026-01-23 11:12:18.719 250273 DEBUG oslo_concurrency.processutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d4c75524-52b8-4c2b-b0cb-18d94089013b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9qib2j15" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:12:18 np0005593232 nova_compute[250269]: 2026-01-23 11:12:18.748 250273 DEBUG nova.storage.rbd_utils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] rbd image d4c75524-52b8-4c2b-b0cb-18d94089013b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 23 06:12:18 np0005593232 nova_compute[250269]: 2026-01-23 11:12:18.752 250273 DEBUG oslo_concurrency.processutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d4c75524-52b8-4c2b-b0cb-18d94089013b/disk.config d4c75524-52b8-4c2b-b0cb-18d94089013b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:12:18 np0005593232 nova_compute[250269]: 2026-01-23 11:12:18.970 250273 DEBUG oslo_concurrency.processutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d4c75524-52b8-4c2b-b0cb-18d94089013b/disk.config d4c75524-52b8-4c2b-b0cb-18d94089013b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.218s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:12:18 np0005593232 nova_compute[250269]: 2026-01-23 11:12:18.971 250273 INFO nova.virt.libvirt.driver [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Deleting local config drive /var/lib/nova/instances/d4c75524-52b8-4c2b-b0cb-18d94089013b/disk.config because it was imported into RBD.#033[00m
Jan 23 06:12:19 np0005593232 kernel: tap56bb5fc8-f1: entered promiscuous mode
Jan 23 06:12:19 np0005593232 NetworkManager[49057]: <info>  [1769166739.0443] manager: (tap56bb5fc8-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/401)
Jan 23 06:12:19 np0005593232 ovn_controller[151001]: 2026-01-23T11:12:19Z|00864|binding|INFO|Claiming lport 56bb5fc8-f112-47c7-84d3-d47e53c4d481 for this chassis.
Jan 23 06:12:19 np0005593232 ovn_controller[151001]: 2026-01-23T11:12:19Z|00865|binding|INFO|56bb5fc8-f112-47c7-84d3-d47e53c4d481: Claiming fa:16:3e:be:cd:e8 10.100.0.10
Jan 23 06:12:19 np0005593232 nova_compute[250269]: 2026-01-23 11:12:19.047 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:19 np0005593232 nova_compute[250269]: 2026-01-23 11:12:19.054 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:19 np0005593232 nova_compute[250269]: 2026-01-23 11:12:19.056 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:19 np0005593232 nova_compute[250269]: 2026-01-23 11:12:19.066 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:19 np0005593232 systemd-machined[215836]: New machine qemu-98-instance-000000df.
Jan 23 06:12:19 np0005593232 systemd-udevd[425410]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 06:12:19 np0005593232 NetworkManager[49057]: <info>  [1769166739.1061] device (tap56bb5fc8-f1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 23 06:12:19 np0005593232 NetworkManager[49057]: <info>  [1769166739.1076] device (tap56bb5fc8-f1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 23 06:12:19 np0005593232 systemd[1]: Started Virtual Machine qemu-98-instance-000000df.
Jan 23 06:12:19 np0005593232 ovn_controller[151001]: 2026-01-23T11:12:19Z|00866|binding|INFO|Setting lport 56bb5fc8-f112-47c7-84d3-d47e53c4d481 ovn-installed in OVS
Jan 23 06:12:19 np0005593232 nova_compute[250269]: 2026-01-23 11:12:19.125 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:19 np0005593232 ovn_controller[151001]: 2026-01-23T11:12:19Z|00867|binding|INFO|Setting lport 56bb5fc8-f112-47c7-84d3-d47e53c4d481 up in Southbound
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.187 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:cd:e8 10.100.0.10'], port_security=['fa:16:3e:be:cd:e8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd4c75524-52b8-4c2b-b0cb-18d94089013b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-42899517-91b9-42e3-96a7-29180211a7a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a245f7970f14fffa60af2ff972b4bfd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3f0cc653-92a4-4b83-958a-564f01bb9144', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef3519b3-9b5b-4b40-8630-d2487396abc0, chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=56bb5fc8-f112-47c7-84d3-d47e53c4d481) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.189 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 56bb5fc8-f112-47c7-84d3-d47e53c4d481 in datapath 42899517-91b9-42e3-96a7-29180211a7a4 bound to our chassis#033[00m
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.190 161902 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 42899517-91b9-42e3-96a7-29180211a7a4#033[00m
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.206 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1fb59b99-b85f-4ab3-9c9d-017803998009]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.207 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap42899517-91 in ovnmeta-42899517-91b9-42e3-96a7-29180211a7a4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.209 258153 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap42899517-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.209 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[bbab96cc-8c23-4b41-bed4-a40388b98750]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.210 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[e43224ee-444a-4b49-aac9-225bc74f27fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.228 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[b71fd018-8305-4403-9f5b-273c51077878]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.252 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ec059fff-0207-4b78-a520-405d59fce268]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.283 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[db9b472c-c50c-4820-955d-9bd4c270ec72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.290 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[037d875e-e5f3-4a2c-be8e-69db2ceb5445]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:12:19 np0005593232 NetworkManager[49057]: <info>  [1769166739.2910] manager: (tap42899517-90): new Veth device (/org/freedesktop/NetworkManager/Devices/402)
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.326 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[5c39519b-7b3d-4f1a-b116-68dbc403d1f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.332 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[71b0c1a3-2238-4a28-94e3-568ac897108e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:12:19 np0005593232 NetworkManager[49057]: <info>  [1769166739.3520] device (tap42899517-90): carrier: link connected
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.357 258324 DEBUG oslo.privsep.daemon [-] privsep: reply[782a6b7d-fb4f-4523-8377-a3308c918c9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.372 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[1977ad0d-84d1-4205-9c1e-375bd9f259ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap42899517-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:09:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 261], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1065249, 'reachable_time': 40310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 425444, 'error': None, 'target': 'ovnmeta-42899517-91b9-42e3-96a7-29180211a7a4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.385 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[91962497-8911-4ca7-9e45-8bbe20b34dc6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe74:998'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1065249, 'tstamp': 1065249}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 425445, 'error': None, 'target': 'ovnmeta-42899517-91b9-42e3-96a7-29180211a7a4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.398 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[790bb154-3ca1-4d16-b3d8-88d1a3270188]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap42899517-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:09:98'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 261], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1065249, 'reachable_time': 40310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 425446, 'error': None, 'target': 'ovnmeta-42899517-91b9-42e3-96a7-29180211a7a4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:12:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:19.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.425 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[32a2eadb-e891-40db-a5bc-9b605254369b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:12:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4398: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.495 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9adff9-9016-4570-9d08-2643e2855098]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.498 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap42899517-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.499 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.499 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap42899517-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:12:19 np0005593232 nova_compute[250269]: 2026-01-23 11:12:19.501 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:19 np0005593232 NetworkManager[49057]: <info>  [1769166739.5020] manager: (tap42899517-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Jan 23 06:12:19 np0005593232 kernel: tap42899517-90: entered promiscuous mode
Jan 23 06:12:19 np0005593232 nova_compute[250269]: 2026-01-23 11:12:19.504 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.505 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap42899517-90, col_values=(('external_ids', {'iface-id': '82ae71e6-e83a-4506-8f0f-261163163937'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:12:19 np0005593232 ovn_controller[151001]: 2026-01-23T11:12:19Z|00868|binding|INFO|Releasing lport 82ae71e6-e83a-4506-8f0f-261163163937 from this chassis (sb_readonly=0)
Jan 23 06:12:19 np0005593232 nova_compute[250269]: 2026-01-23 11:12:19.519 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.521 161902 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/42899517-91b9-42e3-96a7-29180211a7a4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/42899517-91b9-42e3-96a7-29180211a7a4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.522 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[edf94bc0-b4c1-48a0-8296-50b0e75f628f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.523 161902 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: global
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]:    log         /dev/log local0 debug
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]:    log-tag     haproxy-metadata-proxy-42899517-91b9-42e3-96a7-29180211a7a4
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]:    user        root
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]:    group       root
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]:    maxconn     1024
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]:    pidfile     /var/lib/neutron/external/pids/42899517-91b9-42e3-96a7-29180211a7a4.pid.haproxy
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]:    daemon
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: defaults
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]:    log global
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]:    mode http
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]:    option httplog
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]:    option dontlognull
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]:    option http-server-close
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]:    option forwardfor
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]:    retries                 3
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]:    timeout http-request    30s
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]:    timeout connect         30s
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]:    timeout client          32s
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]:    timeout server          32s
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]:    timeout http-keep-alive 30s
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: listen listener
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]:    bind 169.254.169.254:80
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]:    server metadata /var/lib/neutron/metadata_proxy
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]:    http-request add-header X-OVN-Network-ID 42899517-91b9-42e3-96a7-29180211a7a4
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 23 06:12:19 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:19.523 161902 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-42899517-91b9-42e3-96a7-29180211a7a4', 'env', 'PROCESS_TAG=haproxy-42899517-91b9-42e3-96a7-29180211a7a4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/42899517-91b9-42e3-96a7-29180211a7a4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 23 06:12:19 np0005593232 podman[425513]: 2026-01-23 11:12:19.900544597 +0000 UTC m=+0.054074376 container create f8eaadb4fa0603d959acbe91a6463d6063a47909cd9a6e721416783a8556cc55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-42899517-91b9-42e3-96a7-29180211a7a4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 23 06:12:19 np0005593232 systemd[1]: Started libpod-conmon-f8eaadb4fa0603d959acbe91a6463d6063a47909cd9a6e721416783a8556cc55.scope.
Jan 23 06:12:19 np0005593232 podman[425513]: 2026-01-23 11:12:19.87353306 +0000 UTC m=+0.027062859 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 23 06:12:19 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:12:19 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c72ae2b4a59e86be69aee7bf296c7d02b7ba03825973f6369221d1101bd36471/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 23 06:12:19 np0005593232 podman[425513]: 2026-01-23 11:12:19.997814747 +0000 UTC m=+0.151344546 container init f8eaadb4fa0603d959acbe91a6463d6063a47909cd9a6e721416783a8556cc55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-42899517-91b9-42e3-96a7-29180211a7a4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:12:20 np0005593232 podman[425513]: 2026-01-23 11:12:20.004706232 +0000 UTC m=+0.158236011 container start f8eaadb4fa0603d959acbe91a6463d6063a47909cd9a6e721416783a8556cc55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-42899517-91b9-42e3-96a7-29180211a7a4, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 23 06:12:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:20.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:20 np0005593232 neutron-haproxy-ovnmeta-42899517-91b9-42e3-96a7-29180211a7a4[425539]: [NOTICE]   (425543) : New worker (425545) forked
Jan 23 06:12:20 np0005593232 neutron-haproxy-ovnmeta-42899517-91b9-42e3-96a7-29180211a7a4[425539]: [NOTICE]   (425543) : Loading success.
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.204 250273 DEBUG nova.compute.manager [req-211cf428-08a3-4dce-8ae7-786c62b91cf5 req-88e08158-2172-417b-abc0-30001d3ac7d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Received event network-vif-plugged-56bb5fc8-f112-47c7-84d3-d47e53c4d481 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.205 250273 DEBUG oslo_concurrency.lockutils [req-211cf428-08a3-4dce-8ae7-786c62b91cf5 req-88e08158-2172-417b-abc0-30001d3ac7d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "d4c75524-52b8-4c2b-b0cb-18d94089013b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.206 250273 DEBUG oslo_concurrency.lockutils [req-211cf428-08a3-4dce-8ae7-786c62b91cf5 req-88e08158-2172-417b-abc0-30001d3ac7d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d4c75524-52b8-4c2b-b0cb-18d94089013b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.207 250273 DEBUG oslo_concurrency.lockutils [req-211cf428-08a3-4dce-8ae7-786c62b91cf5 req-88e08158-2172-417b-abc0-30001d3ac7d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d4c75524-52b8-4c2b-b0cb-18d94089013b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.207 250273 DEBUG nova.compute.manager [req-211cf428-08a3-4dce-8ae7-786c62b91cf5 req-88e08158-2172-417b-abc0-30001d3ac7d5 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Processing event network-vif-plugged-56bb5fc8-f112-47c7-84d3-d47e53c4d481 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.427 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769166740.426678, d4c75524-52b8-4c2b-b0cb-18d94089013b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.428 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] VM Started (Lifecycle Event)#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.431 250273 DEBUG nova.compute.manager [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.434 250273 DEBUG nova.virt.libvirt.driver [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.438 250273 INFO nova.virt.libvirt.driver [-] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Instance spawned successfully.#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.438 250273 DEBUG nova.virt.libvirt.driver [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.470 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.474 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.514 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.515 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769166740.428273, d4c75524-52b8-4c2b-b0cb-18d94089013b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.515 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] VM Paused (Lifecycle Event)#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.519 250273 DEBUG nova.virt.libvirt.driver [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.520 250273 DEBUG nova.virt.libvirt.driver [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.520 250273 DEBUG nova.virt.libvirt.driver [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.521 250273 DEBUG nova.virt.libvirt.driver [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.521 250273 DEBUG nova.virt.libvirt.driver [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.522 250273 DEBUG nova.virt.libvirt.driver [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.544 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.549 250273 DEBUG nova.virt.driver [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] Emitting event <LifecycleEvent: 1769166740.4338481, d4c75524-52b8-4c2b-b0cb-18d94089013b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.549 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] VM Resumed (Lifecycle Event)#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.609 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.610 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.614 250273 DEBUG nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.636 250273 INFO nova.compute.manager [None req-e31e3719-f0c2-492e-9a2d-239e9116af30 - - - - - -] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.645 250273 INFO nova.compute.manager [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Took 8.37 seconds to spawn the instance on the hypervisor.#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.645 250273 DEBUG nova.compute.manager [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.701 250273 INFO nova.compute.manager [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Took 9.41 seconds to build instance.#033[00m
Jan 23 06:12:20 np0005593232 nova_compute[250269]: 2026-01-23 11:12:20.717 250273 DEBUG oslo_concurrency.lockutils [None req-3d1cf924-2ade-4cf7-a7da-378cf73cb973 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Lock "d4c75524-52b8-4c2b-b0cb-18d94089013b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.485s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:12:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:21.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4399: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 23 06:12:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:12:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:21.718 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=104, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=103) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 06:12:21 np0005593232 nova_compute[250269]: 2026-01-23 11:12:21.719 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:21 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:21.722 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 06:12:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:12:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:22.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:12:22 np0005593232 nova_compute[250269]: 2026-01-23 11:12:22.749 250273 DEBUG nova.compute.manager [req-89f1ecc7-603e-450f-8b55-6831c33cf6b2 req-97422478-a6f2-4039-9e1e-e40fca29c230 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Received event network-vif-plugged-56bb5fc8-f112-47c7-84d3-d47e53c4d481 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:12:22 np0005593232 nova_compute[250269]: 2026-01-23 11:12:22.749 250273 DEBUG oslo_concurrency.lockutils [req-89f1ecc7-603e-450f-8b55-6831c33cf6b2 req-97422478-a6f2-4039-9e1e-e40fca29c230 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "d4c75524-52b8-4c2b-b0cb-18d94089013b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:12:22 np0005593232 nova_compute[250269]: 2026-01-23 11:12:22.749 250273 DEBUG oslo_concurrency.lockutils [req-89f1ecc7-603e-450f-8b55-6831c33cf6b2 req-97422478-a6f2-4039-9e1e-e40fca29c230 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d4c75524-52b8-4c2b-b0cb-18d94089013b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:12:22 np0005593232 nova_compute[250269]: 2026-01-23 11:12:22.749 250273 DEBUG oslo_concurrency.lockutils [req-89f1ecc7-603e-450f-8b55-6831c33cf6b2 req-97422478-a6f2-4039-9e1e-e40fca29c230 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d4c75524-52b8-4c2b-b0cb-18d94089013b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:12:22 np0005593232 nova_compute[250269]: 2026-01-23 11:12:22.750 250273 DEBUG nova.compute.manager [req-89f1ecc7-603e-450f-8b55-6831c33cf6b2 req-97422478-a6f2-4039-9e1e-e40fca29c230 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] No waiting events found dispatching network-vif-plugged-56bb5fc8-f112-47c7-84d3-d47e53c4d481 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 06:12:22 np0005593232 nova_compute[250269]: 2026-01-23 11:12:22.750 250273 WARNING nova.compute.manager [req-89f1ecc7-603e-450f-8b55-6831c33cf6b2 req-97422478-a6f2-4039-9e1e-e40fca29c230 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Received unexpected event network-vif-plugged-56bb5fc8-f112-47c7-84d3-d47e53c4d481 for instance with vm_state active and task_state None.#033[00m
Jan 23 06:12:22 np0005593232 nova_compute[250269]: 2026-01-23 11:12:22.935 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:23.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4400: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 23 06:12:23 np0005593232 nova_compute[250269]: 2026-01-23 11:12:23.987 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:23 np0005593232 NetworkManager[49057]: <info>  [1769166743.9877] manager: (patch-br-int-to-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/404)
Jan 23 06:12:23 np0005593232 NetworkManager[49057]: <info>  [1769166743.9889] manager: (patch-provnet-d9c92ce5-db5b-485c-b1fb-94d4128a85f1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Jan 23 06:12:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:24.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:24 np0005593232 nova_compute[250269]: 2026-01-23 11:12:24.058 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:24 np0005593232 ovn_controller[151001]: 2026-01-23T11:12:24Z|00869|binding|INFO|Releasing lport 82ae71e6-e83a-4506-8f0f-261163163937 from this chassis (sb_readonly=0)
Jan 23 06:12:24 np0005593232 nova_compute[250269]: 2026-01-23 11:12:24.070 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:24 np0005593232 nova_compute[250269]: 2026-01-23 11:12:24.480 250273 DEBUG nova.compute.manager [req-38ff6951-fd7e-45ea-a6ce-a9bf1fb25641 req-bdf42ce1-8e63-4f73-983a-bf1f73280598 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Received event network-changed-56bb5fc8-f112-47c7-84d3-d47e53c4d481 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:12:24 np0005593232 nova_compute[250269]: 2026-01-23 11:12:24.481 250273 DEBUG nova.compute.manager [req-38ff6951-fd7e-45ea-a6ce-a9bf1fb25641 req-bdf42ce1-8e63-4f73-983a-bf1f73280598 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Refreshing instance network info cache due to event network-changed-56bb5fc8-f112-47c7-84d3-d47e53c4d481. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 06:12:24 np0005593232 nova_compute[250269]: 2026-01-23 11:12:24.481 250273 DEBUG oslo_concurrency.lockutils [req-38ff6951-fd7e-45ea-a6ce-a9bf1fb25641 req-bdf42ce1-8e63-4f73-983a-bf1f73280598 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-d4c75524-52b8-4c2b-b0cb-18d94089013b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 06:12:24 np0005593232 nova_compute[250269]: 2026-01-23 11:12:24.481 250273 DEBUG oslo_concurrency.lockutils [req-38ff6951-fd7e-45ea-a6ce-a9bf1fb25641 req-bdf42ce1-8e63-4f73-983a-bf1f73280598 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-d4c75524-52b8-4c2b-b0cb-18d94089013b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 06:12:24 np0005593232 nova_compute[250269]: 2026-01-23 11:12:24.481 250273 DEBUG nova.network.neutron [req-38ff6951-fd7e-45ea-a6ce-a9bf1fb25641 req-bdf42ce1-8e63-4f73-983a-bf1f73280598 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Refreshing network info cache for port 56bb5fc8-f112-47c7-84d3-d47e53c4d481 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 06:12:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:12:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:25.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:12:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4401: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 91 op/s
Jan 23 06:12:25 np0005593232 nova_compute[250269]: 2026-01-23 11:12:25.611 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:26.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:12:27 np0005593232 nova_compute[250269]: 2026-01-23 11:12:27.365 250273 DEBUG nova.network.neutron [req-38ff6951-fd7e-45ea-a6ce-a9bf1fb25641 req-bdf42ce1-8e63-4f73-983a-bf1f73280598 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Updated VIF entry in instance network info cache for port 56bb5fc8-f112-47c7-84d3-d47e53c4d481. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 06:12:27 np0005593232 nova_compute[250269]: 2026-01-23 11:12:27.366 250273 DEBUG nova.network.neutron [req-38ff6951-fd7e-45ea-a6ce-a9bf1fb25641 req-bdf42ce1-8e63-4f73-983a-bf1f73280598 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Updating instance_info_cache with network_info: [{"id": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "address": "fa:16:3e:be:cd:e8", "network": {"id": "42899517-91b9-42e3-96a7-29180211a7a4", "bridge": "br-int", "label": "tempest-TestShelveInstance-1168103236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a245f7970f14fffa60af2ff972b4bfd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56bb5fc8-f1", "ovs_interfaceid": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 06:12:27 np0005593232 nova_compute[250269]: 2026-01-23 11:12:27.384 250273 DEBUG oslo_concurrency.lockutils [req-38ff6951-fd7e-45ea-a6ce-a9bf1fb25641 req-bdf42ce1-8e63-4f73-983a-bf1f73280598 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-d4c75524-52b8-4c2b-b0cb-18d94089013b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 06:12:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:27.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4402: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 91 op/s
Jan 23 06:12:27 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:27.725 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '104'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:12:27 np0005593232 nova_compute[250269]: 2026-01-23 11:12:27.940 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:12:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:28.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:12:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:12:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:29.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:12:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4403: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 867 KiB/s wr, 87 op/s
Jan 23 06:12:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:12:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:30.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:12:30 np0005593232 nova_compute[250269]: 2026-01-23 11:12:30.646 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:31.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4404: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 06:12:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:12:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:12:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:32.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:12:32 np0005593232 podman[425604]: 2026-01-23 11:12:32.407706126 +0000 UTC m=+0.062022570 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 06:12:32 np0005593232 podman[425603]: 2026-01-23 11:12:32.443852292 +0000 UTC m=+0.101875752 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 06:12:32 np0005593232 nova_compute[250269]: 2026-01-23 11:12:32.942 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:12:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:33.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:12:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4405: 321 pgs: 321 active+clean; 185 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 23 06:12:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:34.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:34 np0005593232 ovn_controller[151001]: 2026-01-23T11:12:34Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:be:cd:e8 10.100.0.10
Jan 23 06:12:34 np0005593232 ovn_controller[151001]: 2026-01-23T11:12:34Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:be:cd:e8 10.100.0.10
Jan 23 06:12:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:35.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4406: 321 pgs: 321 active+clean; 185 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 73 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Jan 23 06:12:35 np0005593232 nova_compute[250269]: 2026-01-23 11:12:35.692 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:36.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:36 np0005593232 nova_compute[250269]: 2026-01-23 11:12:36.514 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:12:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:12:37 np0005593232 nova_compute[250269]: 2026-01-23 11:12:37.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:12:37 np0005593232 nova_compute[250269]: 2026-01-23 11:12:37.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:12:37 np0005593232 nova_compute[250269]: 2026-01-23 11:12:37.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 06:12:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:37.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4407: 321 pgs: 321 active+clean; 191 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Jan 23 06:12:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_11:12:37
Jan 23 06:12:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 06:12:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 06:12:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.control', 'images', 'backups', '.rgw.root', 'volumes', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr']
Jan 23 06:12:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 06:12:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:12:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:12:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:12:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:12:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:12:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:12:37 np0005593232 nova_compute[250269]: 2026-01-23 11:12:37.944 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:12:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:38.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:12:38 np0005593232 nova_compute[250269]: 2026-01-23 11:12:38.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:12:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 06:12:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:12:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 06:12:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:12:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:12:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:12:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:12:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:12:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:12:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:12:39 np0005593232 nova_compute[250269]: 2026-01-23 11:12:39.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:12:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:12:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:39.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:12:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4408: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 23 06:12:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:12:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:40.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:12:40 np0005593232 nova_compute[250269]: 2026-01-23 11:12:40.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:12:40 np0005593232 nova_compute[250269]: 2026-01-23 11:12:40.294 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 06:12:40 np0005593232 nova_compute[250269]: 2026-01-23 11:12:40.294 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 06:12:40 np0005593232 nova_compute[250269]: 2026-01-23 11:12:40.696 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:40 np0005593232 nova_compute[250269]: 2026-01-23 11:12:40.891 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "refresh_cache-d4c75524-52b8-4c2b-b0cb-18d94089013b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 06:12:40 np0005593232 nova_compute[250269]: 2026-01-23 11:12:40.892 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquired lock "refresh_cache-d4c75524-52b8-4c2b-b0cb-18d94089013b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 06:12:40 np0005593232 nova_compute[250269]: 2026-01-23 11:12:40.892 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 23 06:12:40 np0005593232 nova_compute[250269]: 2026-01-23 11:12:40.892 250273 DEBUG nova.objects.instance [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d4c75524-52b8-4c2b-b0cb-18d94089013b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 06:12:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:41.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4409: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 23 06:12:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:12:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:42.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:42 np0005593232 nova_compute[250269]: 2026-01-23 11:12:42.191 250273 DEBUG nova.network.neutron [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Updating instance_info_cache with network_info: [{"id": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "address": "fa:16:3e:be:cd:e8", "network": {"id": "42899517-91b9-42e3-96a7-29180211a7a4", "bridge": "br-int", "label": "tempest-TestShelveInstance-1168103236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a245f7970f14fffa60af2ff972b4bfd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56bb5fc8-f1", "ovs_interfaceid": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 06:12:42 np0005593232 nova_compute[250269]: 2026-01-23 11:12:42.222 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Releasing lock "refresh_cache-d4c75524-52b8-4c2b-b0cb-18d94089013b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 06:12:42 np0005593232 nova_compute[250269]: 2026-01-23 11:12:42.222 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 23 06:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:42.697 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:42.698 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:12:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:12:42.698 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:12:42 np0005593232 nova_compute[250269]: 2026-01-23 11:12:42.946 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:43.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4410: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 389 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 23 06:12:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:44.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:12:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:45.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:12:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4411: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 316 KiB/s rd, 372 KiB/s wr, 39 op/s
Jan 23 06:12:45 np0005593232 nova_compute[250269]: 2026-01-23 11:12:45.698 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:46.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:46 np0005593232 nova_compute[250269]: 2026-01-23 11:12:46.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:12:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:12:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:47.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4412: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 316 KiB/s rd, 372 KiB/s wr, 39 op/s
Jan 23 06:12:47 np0005593232 nova_compute[250269]: 2026-01-23 11:12:47.949 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:48.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 06:12:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:12:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:12:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:12:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021692301205099573 of space, bias 1.0, pg target 0.6507690361529872 quantized to 32 (current 32)
Jan 23 06:12:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:12:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 06:12:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:12:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:12:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:12:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:12:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:12:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:12:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:12:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:12:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:12:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:12:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:12:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:12:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:12:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:12:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:12:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:12:49 np0005593232 nova_compute[250269]: 2026-01-23 11:12:49.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:12:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:12:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:49.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:12:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4413: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 92 KiB/s rd, 50 KiB/s wr, 16 op/s
Jan 23 06:12:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:12:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:50.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:12:50 np0005593232 nova_compute[250269]: 2026-01-23 11:12:50.700 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:51.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4414: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s wr, 0 op/s
Jan 23 06:12:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:12:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:52.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:52 np0005593232 nova_compute[250269]: 2026-01-23 11:12:52.950 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:53 np0005593232 nova_compute[250269]: 2026-01-23 11:12:53.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:12:53 np0005593232 nova_compute[250269]: 2026-01-23 11:12:53.332 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:12:53 np0005593232 nova_compute[250269]: 2026-01-23 11:12:53.333 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:12:53 np0005593232 nova_compute[250269]: 2026-01-23 11:12:53.333 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:12:53 np0005593232 nova_compute[250269]: 2026-01-23 11:12:53.333 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 06:12:53 np0005593232 nova_compute[250269]: 2026-01-23 11:12:53.333 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:12:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:53.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4415: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s wr, 0 op/s
Jan 23 06:12:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:12:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3665544869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:12:53 np0005593232 nova_compute[250269]: 2026-01-23 11:12:53.797 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:12:53 np0005593232 nova_compute[250269]: 2026-01-23 11:12:53.860 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000df as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 06:12:53 np0005593232 nova_compute[250269]: 2026-01-23 11:12:53.860 250273 DEBUG nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] skipping disk for instance-000000df as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 23 06:12:53 np0005593232 ovn_controller[151001]: 2026-01-23T11:12:53Z|00870|memory_trim|INFO|Detected inactivity (last active 30022 ms ago): trimming memory
Jan 23 06:12:54 np0005593232 nova_compute[250269]: 2026-01-23 11:12:54.050 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 06:12:54 np0005593232 nova_compute[250269]: 2026-01-23 11:12:54.052 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3853MB free_disk=20.942752838134766GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 06:12:54 np0005593232 nova_compute[250269]: 2026-01-23 11:12:54.052 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:12:54 np0005593232 nova_compute[250269]: 2026-01-23 11:12:54.052 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:12:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:12:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:54.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:12:54 np0005593232 nova_compute[250269]: 2026-01-23 11:12:54.244 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Instance d4c75524-52b8-4c2b-b0cb-18d94089013b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 23 06:12:54 np0005593232 nova_compute[250269]: 2026-01-23 11:12:54.244 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 06:12:54 np0005593232 nova_compute[250269]: 2026-01-23 11:12:54.245 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 06:12:54 np0005593232 nova_compute[250269]: 2026-01-23 11:12:54.332 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing inventories for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 23 06:12:54 np0005593232 nova_compute[250269]: 2026-01-23 11:12:54.395 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating ProviderTree inventory for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 23 06:12:54 np0005593232 nova_compute[250269]: 2026-01-23 11:12:54.395 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 06:12:54 np0005593232 nova_compute[250269]: 2026-01-23 11:12:54.409 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing aggregate associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 23 06:12:54 np0005593232 nova_compute[250269]: 2026-01-23 11:12:54.431 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing trait associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 23 06:12:54 np0005593232 nova_compute[250269]: 2026-01-23 11:12:54.527 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:12:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:12:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4187116043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:12:55 np0005593232 nova_compute[250269]: 2026-01-23 11:12:55.004 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:12:55 np0005593232 nova_compute[250269]: 2026-01-23 11:12:55.010 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:12:55 np0005593232 nova_compute[250269]: 2026-01-23 11:12:55.399 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:12:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:55.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4416: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 23 06:12:55 np0005593232 nova_compute[250269]: 2026-01-23 11:12:55.734 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:55 np0005593232 nova_compute[250269]: 2026-01-23 11:12:55.884 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 06:12:55 np0005593232 nova_compute[250269]: 2026-01-23 11:12:55.885 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:12:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:12:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:56.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:12:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:12:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:57.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4417: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 23 06:12:57 np0005593232 nova_compute[250269]: 2026-01-23 11:12:57.952 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:12:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:12:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:12:58.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:12:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:12:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 23 06:12:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:12:59.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 23 06:13:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:00.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:00 np0005593232 nova_compute[250269]: 2026-01-23 11:13:00.094 250273 DEBUG oslo_concurrency.lockutils [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Acquiring lock "d4c75524-52b8-4c2b-b0cb-18d94089013b" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:13:00 np0005593232 nova_compute[250269]: 2026-01-23 11:13:00.095 250273 DEBUG oslo_concurrency.lockutils [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Lock "d4c75524-52b8-4c2b-b0cb-18d94089013b" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:13:00 np0005593232 nova_compute[250269]: 2026-01-23 11:13:00.095 250273 INFO nova.compute.manager [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Shelving#033[00m
Jan 23 06:13:00 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev[96687]: Fri Jan 23 11:13:00 2026: A thread timer expired 2.586690 seconds ago
Jan 23 06:13:00 np0005593232 nova_compute[250269]: 2026-01-23 11:13:00.813 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4418: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.1 KiB/s wr, 0 op/s
Jan 23 06:13:00 np0005593232 nova_compute[250269]: 2026-01-23 11:13:00.817 250273 DEBUG nova.virt.libvirt.driver [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 23 06:13:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:01.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:13:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:13:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:02.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:13:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4419: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 KiB/s rd, 25 KiB/s wr, 4 op/s
Jan 23 06:13:02 np0005593232 nova_compute[250269]: 2026-01-23 11:13:02.954 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:03 np0005593232 kernel: tap56bb5fc8-f1 (unregistering): left promiscuous mode
Jan 23 06:13:03 np0005593232 NetworkManager[49057]: <info>  [1769166783.3227] device (tap56bb5fc8-f1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 23 06:13:03 np0005593232 nova_compute[250269]: 2026-01-23 11:13:03.332 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:03 np0005593232 ovn_controller[151001]: 2026-01-23T11:13:03Z|00871|binding|INFO|Releasing lport 56bb5fc8-f112-47c7-84d3-d47e53c4d481 from this chassis (sb_readonly=0)
Jan 23 06:13:03 np0005593232 ovn_controller[151001]: 2026-01-23T11:13:03Z|00872|binding|INFO|Setting lport 56bb5fc8-f112-47c7-84d3-d47e53c4d481 down in Southbound
Jan 23 06:13:03 np0005593232 ovn_controller[151001]: 2026-01-23T11:13:03Z|00873|binding|INFO|Removing iface tap56bb5fc8-f1 ovn-installed in OVS
Jan 23 06:13:03 np0005593232 nova_compute[250269]: 2026-01-23 11:13:03.336 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:03 np0005593232 nova_compute[250269]: 2026-01-23 11:13:03.370 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:03 np0005593232 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000df.scope: Deactivated successfully.
Jan 23 06:13:03 np0005593232 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d000000df.scope: Consumed 16.383s CPU time.
Jan 23 06:13:03 np0005593232 systemd-machined[215836]: Machine qemu-98-instance-000000df terminated.
Jan 23 06:13:03 np0005593232 podman[425814]: 2026-01-23 11:13:03.412654258 +0000 UTC m=+0.063429821 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 06:13:03 np0005593232 podman[425811]: 2026-01-23 11:13:03.431507833 +0000 UTC m=+0.086911228 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251202, tcib_managed=true)
Jan 23 06:13:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:03.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:03 np0005593232 nova_compute[250269]: 2026-01-23 11:13:03.836 250273 INFO nova.virt.libvirt.driver [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Instance shutdown successfully after 3 seconds.#033[00m
Jan 23 06:13:03 np0005593232 nova_compute[250269]: 2026-01-23 11:13:03.843 250273 INFO nova.virt.libvirt.driver [-] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Instance destroyed successfully.#033[00m
Jan 23 06:13:03 np0005593232 nova_compute[250269]: 2026-01-23 11:13:03.843 250273 DEBUG nova.objects.instance [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Lazy-loading 'numa_topology' on Instance uuid d4c75524-52b8-4c2b-b0cb-18d94089013b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 06:13:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:04.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:13:04.171 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:cd:e8 10.100.0.10'], port_security=['fa:16:3e:be:cd:e8 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'd4c75524-52b8-4c2b-b0cb-18d94089013b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-42899517-91b9-42e3-96a7-29180211a7a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a245f7970f14fffa60af2ff972b4bfd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3f0cc653-92a4-4b83-958a-564f01bb9144', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef3519b3-9b5b-4b40-8630-d2487396abc0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f955c559970>], logical_port=56bb5fc8-f112-47c7-84d3-d47e53c4d481) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f955c559970>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 06:13:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:13:04.173 161902 INFO neutron.agent.ovn.metadata.agent [-] Port 56bb5fc8-f112-47c7-84d3-d47e53c4d481 in datapath 42899517-91b9-42e3-96a7-29180211a7a4 unbound from our chassis#033[00m
Jan 23 06:13:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:13:04.175 161902 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 42899517-91b9-42e3-96a7-29180211a7a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 23 06:13:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:13:04.176 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[ec8da234-940b-4dae-8bcb-cf688539d903]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:13:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:13:04.177 161902 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-42899517-91b9-42e3-96a7-29180211a7a4 namespace which is not needed anymore#033[00m
Jan 23 06:13:04 np0005593232 neutron-haproxy-ovnmeta-42899517-91b9-42e3-96a7-29180211a7a4[425539]: [NOTICE]   (425543) : haproxy version is 2.8.14-c23fe91
Jan 23 06:13:04 np0005593232 neutron-haproxy-ovnmeta-42899517-91b9-42e3-96a7-29180211a7a4[425539]: [NOTICE]   (425543) : path to executable is /usr/sbin/haproxy
Jan 23 06:13:04 np0005593232 neutron-haproxy-ovnmeta-42899517-91b9-42e3-96a7-29180211a7a4[425539]: [WARNING]  (425543) : Exiting Master process...
Jan 23 06:13:04 np0005593232 neutron-haproxy-ovnmeta-42899517-91b9-42e3-96a7-29180211a7a4[425539]: [ALERT]    (425543) : Current worker (425545) exited with code 143 (Terminated)
Jan 23 06:13:04 np0005593232 neutron-haproxy-ovnmeta-42899517-91b9-42e3-96a7-29180211a7a4[425539]: [WARNING]  (425543) : All workers exited. Exiting... (0)
Jan 23 06:13:04 np0005593232 systemd[1]: libpod-f8eaadb4fa0603d959acbe91a6463d6063a47909cd9a6e721416783a8556cc55.scope: Deactivated successfully.
Jan 23 06:13:04 np0005593232 podman[425887]: 2026-01-23 11:13:04.314301392 +0000 UTC m=+0.053561361 container died f8eaadb4fa0603d959acbe91a6463d6063a47909cd9a6e721416783a8556cc55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-42899517-91b9-42e3-96a7-29180211a7a4, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 06:13:04 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c72ae2b4a59e86be69aee7bf296c7d02b7ba03825973f6369221d1101bd36471-merged.mount: Deactivated successfully.
Jan 23 06:13:04 np0005593232 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f8eaadb4fa0603d959acbe91a6463d6063a47909cd9a6e721416783a8556cc55-userdata-shm.mount: Deactivated successfully.
Jan 23 06:13:04 np0005593232 podman[425887]: 2026-01-23 11:13:04.352375753 +0000 UTC m=+0.091635722 container cleanup f8eaadb4fa0603d959acbe91a6463d6063a47909cd9a6e721416783a8556cc55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-42899517-91b9-42e3-96a7-29180211a7a4, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 23 06:13:04 np0005593232 systemd[1]: libpod-conmon-f8eaadb4fa0603d959acbe91a6463d6063a47909cd9a6e721416783a8556cc55.scope: Deactivated successfully.
Jan 23 06:13:04 np0005593232 podman[425917]: 2026-01-23 11:13:04.410258925 +0000 UTC m=+0.038687869 container remove f8eaadb4fa0603d959acbe91a6463d6063a47909cd9a6e721416783a8556cc55 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-42899517-91b9-42e3-96a7-29180211a7a4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 23 06:13:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:13:04.415 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[56b95b3c-2421-4426-a20d-400a35d19d8a]: (4, ('Fri Jan 23 11:13:04 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-42899517-91b9-42e3-96a7-29180211a7a4 (f8eaadb4fa0603d959acbe91a6463d6063a47909cd9a6e721416783a8556cc55)\nf8eaadb4fa0603d959acbe91a6463d6063a47909cd9a6e721416783a8556cc55\nFri Jan 23 11:13:04 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-42899517-91b9-42e3-96a7-29180211a7a4 (f8eaadb4fa0603d959acbe91a6463d6063a47909cd9a6e721416783a8556cc55)\nf8eaadb4fa0603d959acbe91a6463d6063a47909cd9a6e721416783a8556cc55\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:13:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:13:04.417 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[68becfc4-0abe-4cc3-99fd-df402769110e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:13:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:13:04.417 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap42899517-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:13:04 np0005593232 nova_compute[250269]: 2026-01-23 11:13:04.419 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:04 np0005593232 kernel: tap42899517-90: left promiscuous mode
Jan 23 06:13:04 np0005593232 nova_compute[250269]: 2026-01-23 11:13:04.434 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:13:04.437 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[da34a110-6aa5-4492-8393-99890fccba5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:13:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:13:04.453 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[f182c8de-f953-4c0d-b080-db926dcee1e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:13:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:13:04.454 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[13170a42-9939-41f3-90fc-4253398497f1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:13:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:13:04.468 258153 DEBUG oslo.privsep.daemon [-] privsep: reply[7ddf874d-326d-4774-8249-d772096f2e7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1065242, 'reachable_time': 27903, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 425935, 'error': None, 'target': 'ovnmeta-42899517-91b9-42e3-96a7-29180211a7a4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:13:04 np0005593232 systemd[1]: run-netns-ovnmeta\x2d42899517\x2d91b9\x2d42e3\x2d96a7\x2d29180211a7a4.mount: Deactivated successfully.
Jan 23 06:13:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:13:04.472 162637 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-42899517-91b9-42e3-96a7-29180211a7a4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 23 06:13:04 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:13:04.472 162637 DEBUG oslo.privsep.daemon [-] privsep: reply[ca1466a8-8986-4e46-b666-196d773fca37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 23 06:13:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4420: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.6 KiB/s rd, 25 KiB/s wr, 4 op/s
Jan 23 06:13:05 np0005593232 nova_compute[250269]: 2026-01-23 11:13:05.201 250273 INFO nova.virt.libvirt.driver [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Beginning cold snapshot process#033[00m
Jan 23 06:13:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:13:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:05.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:13:05 np0005593232 nova_compute[250269]: 2026-01-23 11:13:05.549 250273 DEBUG nova.virt.libvirt.imagebackend [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] No parent info for 84c0ef19-7f67-4bd3-95d8-507c3e0942ed; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Jan 23 06:13:05 np0005593232 nova_compute[250269]: 2026-01-23 11:13:05.716 250273 DEBUG nova.storage.rbd_utils [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] creating snapshot(02fcd8eafd734e47bcc97dbe03347c1e) on rbd image(d4c75524-52b8-4c2b-b0cb-18d94089013b_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 06:13:05 np0005593232 nova_compute[250269]: 2026-01-23 11:13:05.816 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e427 do_prune osdmap full prune enabled
Jan 23 06:13:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:06.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e428 e428: 3 total, 3 up, 3 in
Jan 23 06:13:06 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e428: 3 total, 3 up, 3 in
Jan 23 06:13:06 np0005593232 nova_compute[250269]: 2026-01-23 11:13:06.167 250273 DEBUG nova.storage.rbd_utils [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] cloning vms/d4c75524-52b8-4c2b-b0cb-18d94089013b_disk@02fcd8eafd734e47bcc97dbe03347c1e to images/7da2f187-f7de-4714-b817-454a50a6b19a clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Jan 23 06:13:06 np0005593232 nova_compute[250269]: 2026-01-23 11:13:06.314 250273 DEBUG nova.storage.rbd_utils [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] flattening images/7da2f187-f7de-4714-b817-454a50a6b19a flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Jan 23 06:13:06 np0005593232 nova_compute[250269]: 2026-01-23 11:13:06.652 250273 DEBUG nova.compute.manager [req-b4f3fe8d-a800-47f5-8b5e-e53df696819f req-519493fe-537d-413f-816b-ce58c45efbff 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Received event network-vif-unplugged-56bb5fc8-f112-47c7-84d3-d47e53c4d481 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:13:06 np0005593232 nova_compute[250269]: 2026-01-23 11:13:06.653 250273 DEBUG oslo_concurrency.lockutils [req-b4f3fe8d-a800-47f5-8b5e-e53df696819f req-519493fe-537d-413f-816b-ce58c45efbff 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "d4c75524-52b8-4c2b-b0cb-18d94089013b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:13:06 np0005593232 nova_compute[250269]: 2026-01-23 11:13:06.653 250273 DEBUG oslo_concurrency.lockutils [req-b4f3fe8d-a800-47f5-8b5e-e53df696819f req-519493fe-537d-413f-816b-ce58c45efbff 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d4c75524-52b8-4c2b-b0cb-18d94089013b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:13:06 np0005593232 nova_compute[250269]: 2026-01-23 11:13:06.653 250273 DEBUG oslo_concurrency.lockutils [req-b4f3fe8d-a800-47f5-8b5e-e53df696819f req-519493fe-537d-413f-816b-ce58c45efbff 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d4c75524-52b8-4c2b-b0cb-18d94089013b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:13:06 np0005593232 nova_compute[250269]: 2026-01-23 11:13:06.653 250273 DEBUG nova.compute.manager [req-b4f3fe8d-a800-47f5-8b5e-e53df696819f req-519493fe-537d-413f-816b-ce58c45efbff 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] No waiting events found dispatching network-vif-unplugged-56bb5fc8-f112-47c7-84d3-d47e53c4d481 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 06:13:06 np0005593232 nova_compute[250269]: 2026-01-23 11:13:06.654 250273 WARNING nova.compute.manager [req-b4f3fe8d-a800-47f5-8b5e-e53df696819f req-519493fe-537d-413f-816b-ce58c45efbff 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Received unexpected event network-vif-unplugged-56bb5fc8-f112-47c7-84d3-d47e53c4d481 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Jan 23 06:13:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e428 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:13:06 np0005593232 nova_compute[250269]: 2026-01-23 11:13:06.709 250273 DEBUG nova.storage.rbd_utils [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] removing snapshot(02fcd8eafd734e47bcc97dbe03347c1e) on rbd image(d4c75524-52b8-4c2b-b0cb-18d94089013b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Jan 23 06:13:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4422: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.2 KiB/s rd, 28 KiB/s wr, 4 op/s
Jan 23 06:13:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e428 do_prune osdmap full prune enabled
Jan 23 06:13:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e429 e429: 3 total, 3 up, 3 in
Jan 23 06:13:07 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e429: 3 total, 3 up, 3 in
Jan 23 06:13:07 np0005593232 nova_compute[250269]: 2026-01-23 11:13:07.407 250273 DEBUG nova.storage.rbd_utils [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] creating snapshot(snap) on rbd image(7da2f187-f7de-4714-b817-454a50a6b19a) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Jan 23 06:13:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:07.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:13:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:13:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:13:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:13:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:13:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:13:07 np0005593232 nova_compute[250269]: 2026-01-23 11:13:07.957 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:13:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:08.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:13:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e429 do_prune osdmap full prune enabled
Jan 23 06:13:08 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e430 e430: 3 total, 3 up, 3 in
Jan 23 06:13:08 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e430: 3 total, 3 up, 3 in
Jan 23 06:13:08 np0005593232 nova_compute[250269]: 2026-01-23 11:13:08.732 250273 DEBUG nova.compute.manager [req-47e42f8d-7501-4226-a44f-89f616f6925f req-389fe10c-76e5-45f0-8e22-6f6984309fca 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Received event network-vif-plugged-56bb5fc8-f112-47c7-84d3-d47e53c4d481 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:13:08 np0005593232 nova_compute[250269]: 2026-01-23 11:13:08.733 250273 DEBUG oslo_concurrency.lockutils [req-47e42f8d-7501-4226-a44f-89f616f6925f req-389fe10c-76e5-45f0-8e22-6f6984309fca 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "d4c75524-52b8-4c2b-b0cb-18d94089013b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:13:08 np0005593232 nova_compute[250269]: 2026-01-23 11:13:08.733 250273 DEBUG oslo_concurrency.lockutils [req-47e42f8d-7501-4226-a44f-89f616f6925f req-389fe10c-76e5-45f0-8e22-6f6984309fca 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d4c75524-52b8-4c2b-b0cb-18d94089013b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:13:08 np0005593232 nova_compute[250269]: 2026-01-23 11:13:08.733 250273 DEBUG oslo_concurrency.lockutils [req-47e42f8d-7501-4226-a44f-89f616f6925f req-389fe10c-76e5-45f0-8e22-6f6984309fca 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Lock "d4c75524-52b8-4c2b-b0cb-18d94089013b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:13:08 np0005593232 nova_compute[250269]: 2026-01-23 11:13:08.734 250273 DEBUG nova.compute.manager [req-47e42f8d-7501-4226-a44f-89f616f6925f req-389fe10c-76e5-45f0-8e22-6f6984309fca 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] No waiting events found dispatching network-vif-plugged-56bb5fc8-f112-47c7-84d3-d47e53c4d481 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 23 06:13:08 np0005593232 nova_compute[250269]: 2026-01-23 11:13:08.734 250273 WARNING nova.compute.manager [req-47e42f8d-7501-4226-a44f-89f616f6925f req-389fe10c-76e5-45f0-8e22-6f6984309fca 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Received unexpected event network-vif-plugged-56bb5fc8-f112-47c7-84d3-d47e53c4d481 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Jan 23 06:13:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4425: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 144 op/s
Jan 23 06:13:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:09.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:10.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:10 np0005593232 nova_compute[250269]: 2026-01-23 11:13:10.514 250273 INFO nova.virt.libvirt.driver [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Snapshot image upload complete#033[00m
Jan 23 06:13:10 np0005593232 nova_compute[250269]: 2026-01-23 11:13:10.515 250273 DEBUG nova.compute.manager [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:13:10 np0005593232 nova_compute[250269]: 2026-01-23 11:13:10.578 250273 INFO nova.compute.manager [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Shelve offloading#033[00m
Jan 23 06:13:10 np0005593232 nova_compute[250269]: 2026-01-23 11:13:10.585 250273 INFO nova.virt.libvirt.driver [-] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Instance destroyed successfully.#033[00m
Jan 23 06:13:10 np0005593232 nova_compute[250269]: 2026-01-23 11:13:10.586 250273 DEBUG nova.compute.manager [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:13:10 np0005593232 nova_compute[250269]: 2026-01-23 11:13:10.589 250273 DEBUG oslo_concurrency.lockutils [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Acquiring lock "refresh_cache-d4c75524-52b8-4c2b-b0cb-18d94089013b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 06:13:10 np0005593232 nova_compute[250269]: 2026-01-23 11:13:10.589 250273 DEBUG oslo_concurrency.lockutils [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Acquired lock "refresh_cache-d4c75524-52b8-4c2b-b0cb-18d94089013b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 06:13:10 np0005593232 nova_compute[250269]: 2026-01-23 11:13:10.589 250273 DEBUG nova.network.neutron [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 23 06:13:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4426: 321 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 313 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 144 op/s
Jan 23 06:13:10 np0005593232 nova_compute[250269]: 2026-01-23 11:13:10.817 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:11.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:13:11 np0005593232 nova_compute[250269]: 2026-01-23 11:13:11.879 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:13:11 np0005593232 nova_compute[250269]: 2026-01-23 11:13:11.945 250273 DEBUG nova.network.neutron [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Updating instance_info_cache with network_info: [{"id": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "address": "fa:16:3e:be:cd:e8", "network": {"id": "42899517-91b9-42e3-96a7-29180211a7a4", "bridge": "br-int", "label": "tempest-TestShelveInstance-1168103236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a245f7970f14fffa60af2ff972b4bfd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56bb5fc8-f1", "ovs_interfaceid": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 06:13:11 np0005593232 nova_compute[250269]: 2026-01-23 11:13:11.969 250273 DEBUG oslo_concurrency.lockutils [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Releasing lock "refresh_cache-d4c75524-52b8-4c2b-b0cb-18d94089013b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 06:13:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:12.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:12 np0005593232 nova_compute[250269]: 2026-01-23 11:13:12.700 250273 INFO nova.virt.libvirt.driver [-] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Instance destroyed successfully.#033[00m
Jan 23 06:13:12 np0005593232 nova_compute[250269]: 2026-01-23 11:13:12.700 250273 DEBUG nova.objects.instance [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Lazy-loading 'resources' on Instance uuid d4c75524-52b8-4c2b-b0cb-18d94089013b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 23 06:13:12 np0005593232 nova_compute[250269]: 2026-01-23 11:13:12.714 250273 DEBUG nova.virt.libvirt.vif [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-23T11:12:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-926357377',display_name='tempest-TestShelveInstance-server-926357377',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-926357377',id=223,image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEFOGuv6jcY9DwIy/xYvt2X4Vb1HISACx7cX6o9lDAD3l3O1QRG07pd8MQdd17GGOSBZRG+y+TaN6Gc5Y3oNpLF+mD7AORx9ZprSr452pQ3EZnBrolaOtjqq79YfGAlew==',key_name='tempest-TestShelveInstance-93398597',keypairs=<?>,launch_index=0,launched_at=2026-01-23T11:12:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='3a245f7970f14fffa60af2ff972b4bfd',ramdisk_id='',reservation_id='r-9cvenz37',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='84c0ef19-7f67-4bd3-95d8-507c3e0942ed',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-869807080',owner_user_name='tempest-TestShelveInstance-869807080-project-member',shelved_at='2026-01-23T11:13:10.515610',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='7da2f187-f7de-4714-b817-454a50a6b19a'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-23T11:13:05Z,user_data=None,user_id='5d6a458f5d9345379b05f0cdb69a7b0f',uuid=d4c75524-52b8-4c2b-b0cb-18d94089013b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "address": "fa:16:3e:be:cd:e8", "network": {"id": "42899517-91b9-42e3-96a7-29180211a7a4", "bridge": "br-int", "label": "tempest-TestShelveInstance-1168103236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a245f7970f14fffa60af2ff972b4bfd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56bb5fc8-f1", "ovs_interfaceid": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 23 06:13:12 np0005593232 nova_compute[250269]: 2026-01-23 11:13:12.714 250273 DEBUG nova.network.os_vif_util [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Converting VIF {"id": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "address": "fa:16:3e:be:cd:e8", "network": {"id": "42899517-91b9-42e3-96a7-29180211a7a4", "bridge": "br-int", "label": "tempest-TestShelveInstance-1168103236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a245f7970f14fffa60af2ff972b4bfd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap56bb5fc8-f1", "ovs_interfaceid": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 23 06:13:12 np0005593232 nova_compute[250269]: 2026-01-23 11:13:12.715 250273 DEBUG nova.network.os_vif_util [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:cd:e8,bridge_name='br-int',has_traffic_filtering=True,id=56bb5fc8-f112-47c7-84d3-d47e53c4d481,network=Network(42899517-91b9-42e3-96a7-29180211a7a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56bb5fc8-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 23 06:13:12 np0005593232 nova_compute[250269]: 2026-01-23 11:13:12.715 250273 DEBUG os_vif [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:cd:e8,bridge_name='br-int',has_traffic_filtering=True,id=56bb5fc8-f112-47c7-84d3-d47e53c4d481,network=Network(42899517-91b9-42e3-96a7-29180211a7a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56bb5fc8-f1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 23 06:13:12 np0005593232 nova_compute[250269]: 2026-01-23 11:13:12.717 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:12 np0005593232 nova_compute[250269]: 2026-01-23 11:13:12.717 250273 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap56bb5fc8-f1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:13:12 np0005593232 nova_compute[250269]: 2026-01-23 11:13:12.764 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:12 np0005593232 nova_compute[250269]: 2026-01-23 11:13:12.767 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:12 np0005593232 nova_compute[250269]: 2026-01-23 11:13:12.769 250273 INFO os_vif [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:cd:e8,bridge_name='br-int',has_traffic_filtering=True,id=56bb5fc8-f112-47c7-84d3-d47e53c4d481,network=Network(42899517-91b9-42e3-96a7-29180211a7a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap56bb5fc8-f1')#033[00m
Jan 23 06:13:12 np0005593232 nova_compute[250269]: 2026-01-23 11:13:12.801 250273 DEBUG nova.compute.manager [req-3ee8aed5-71b5-42fd-a8ba-c682bb2b463e req-b7b3e9b0-4a5f-443e-a6cd-b172eb2d9500 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Received event network-changed-56bb5fc8-f112-47c7-84d3-d47e53c4d481 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 23 06:13:12 np0005593232 nova_compute[250269]: 2026-01-23 11:13:12.802 250273 DEBUG nova.compute.manager [req-3ee8aed5-71b5-42fd-a8ba-c682bb2b463e req-b7b3e9b0-4a5f-443e-a6cd-b172eb2d9500 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Refreshing instance network info cache due to event network-changed-56bb5fc8-f112-47c7-84d3-d47e53c4d481. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 23 06:13:12 np0005593232 nova_compute[250269]: 2026-01-23 11:13:12.802 250273 DEBUG oslo_concurrency.lockutils [req-3ee8aed5-71b5-42fd-a8ba-c682bb2b463e req-b7b3e9b0-4a5f-443e-a6cd-b172eb2d9500 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquiring lock "refresh_cache-d4c75524-52b8-4c2b-b0cb-18d94089013b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 23 06:13:12 np0005593232 nova_compute[250269]: 2026-01-23 11:13:12.803 250273 DEBUG oslo_concurrency.lockutils [req-3ee8aed5-71b5-42fd-a8ba-c682bb2b463e req-b7b3e9b0-4a5f-443e-a6cd-b172eb2d9500 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Acquired lock "refresh_cache-d4c75524-52b8-4c2b-b0cb-18d94089013b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 23 06:13:12 np0005593232 nova_compute[250269]: 2026-01-23 11:13:12.803 250273 DEBUG nova.network.neutron [req-3ee8aed5-71b5-42fd-a8ba-c682bb2b463e req-b7b3e9b0-4a5f-443e-a6cd-b172eb2d9500 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Refreshing network info cache for port 56bb5fc8-f112-47c7-84d3-d47e53c4d481 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 23 06:13:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4427: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.0 MiB/s rd, 7.0 MiB/s wr, 154 op/s
Jan 23 06:13:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:13.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:13 np0005593232 nova_compute[250269]: 2026-01-23 11:13:13.734 250273 INFO nova.virt.libvirt.driver [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Deleting instance files /var/lib/nova/instances/d4c75524-52b8-4c2b-b0cb-18d94089013b_del#033[00m
Jan 23 06:13:13 np0005593232 nova_compute[250269]: 2026-01-23 11:13:13.736 250273 INFO nova.virt.libvirt.driver [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Deletion of /var/lib/nova/instances/d4c75524-52b8-4c2b-b0cb-18d94089013b_del complete#033[00m
Jan 23 06:13:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:14.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4428: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 129 op/s
Jan 23 06:13:15 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 23 06:13:15 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 23 06:13:15 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 23 06:13:15 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 23 06:13:15 np0005593232 radosgw[94687]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 23 06:13:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:13:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:15.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:13:15 np0005593232 nova_compute[250269]: 2026-01-23 11:13:15.819 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:16.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:16 np0005593232 podman[426274]: 2026-01-23 11:13:16.126743099 +0000 UTC m=+0.341556493 container exec 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 06:13:16 np0005593232 podman[426274]: 2026-01-23 11:13:16.424393855 +0000 UTC m=+0.639207149 container exec_died 5952394e9ece6dd41238b6dc4483561ebdd720fe3d31b44bdbe576ff46ac866d (image=quay.io/ceph/ceph:v18, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 23 06:13:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:13:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e430 do_prune osdmap full prune enabled
Jan 23 06:13:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4429: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.3 MiB/s rd, 4.0 MiB/s wr, 105 op/s
Jan 23 06:13:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e431 e431: 3 total, 3 up, 3 in
Jan 23 06:13:16 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e431: 3 total, 3 up, 3 in
Jan 23 06:13:16 np0005593232 nova_compute[250269]: 2026-01-23 11:13:16.940 250273 DEBUG nova.network.neutron [req-3ee8aed5-71b5-42fd-a8ba-c682bb2b463e req-b7b3e9b0-4a5f-443e-a6cd-b172eb2d9500 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Updated VIF entry in instance network info cache for port 56bb5fc8-f112-47c7-84d3-d47e53c4d481. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 23 06:13:16 np0005593232 nova_compute[250269]: 2026-01-23 11:13:16.940 250273 DEBUG nova.network.neutron [req-3ee8aed5-71b5-42fd-a8ba-c682bb2b463e req-b7b3e9b0-4a5f-443e-a6cd-b172eb2d9500 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Updating instance_info_cache with network_info: [{"id": "56bb5fc8-f112-47c7-84d3-d47e53c4d481", "address": "fa:16:3e:be:cd:e8", "network": {"id": "42899517-91b9-42e3-96a7-29180211a7a4", "bridge": null, "label": "tempest-TestShelveInstance-1168103236-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3a245f7970f14fffa60af2ff972b4bfd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap56bb5fc8-f1", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 23 06:13:17 np0005593232 podman[426431]: 2026-01-23 11:13:17.482951312 +0000 UTC m=+0.065249673 container exec b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 06:13:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:13:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:17.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:13:17 np0005593232 podman[426431]: 2026-01-23 11:13:17.495857378 +0000 UTC m=+0.078155739 container exec_died b8dc60a3d5511144f5c9aac5deac66cf6b58bd2fc1789905b8a0512e7e46885d (image=quay.io/ceph/haproxy:2.3, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-haproxy-rgw-default-compute-0-iyrury)
Jan 23 06:13:17 np0005593232 podman[426497]: 2026-01-23 11:13:17.727244134 +0000 UTC m=+0.055875597 container exec 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, distribution-scope=public, release=1793, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, version=2.2.4, io.buildah.version=1.28.2, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, architecture=x86_64)
Jan 23 06:13:17 np0005593232 podman[426497]: 2026-01-23 11:13:17.740006986 +0000 UTC m=+0.068638419 container exec_died 2ba7e0c43b41819a42ea30e9d2bb4ab4394f81fac2aa56eea857e3b3af6a4027 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-keepalived-rgw-default-compute-0-qsixev, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.buildah.version=1.28.2, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, distribution-scope=public, release=1793, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived)
Jan 23 06:13:17 np0005593232 nova_compute[250269]: 2026-01-23 11:13:17.765 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 06:13:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:13:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 06:13:17 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:13:17 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:13:17 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:13:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:18.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:18 np0005593232 nova_compute[250269]: 2026-01-23 11:13:18.132 250273 DEBUG oslo_concurrency.lockutils [req-3ee8aed5-71b5-42fd-a8ba-c682bb2b463e req-b7b3e9b0-4a5f-443e-a6cd-b172eb2d9500 010960fbe58245b384c2cbebe84d3b1f 87ac1761717c4b48bea28f65374beaf8 - - default default] Releasing lock "refresh_cache-d4c75524-52b8-4c2b-b0cb-18d94089013b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 23 06:13:18 np0005593232 nova_compute[250269]: 2026-01-23 11:13:18.243 250273 INFO nova.scheduler.client.report [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Deleted allocations for instance d4c75524-52b8-4c2b-b0cb-18d94089013b#033[00m
Jan 23 06:13:18 np0005593232 nova_compute[250269]: 2026-01-23 11:13:18.369 250273 DEBUG oslo_concurrency.lockutils [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:13:18 np0005593232 nova_compute[250269]: 2026-01-23 11:13:18.370 250273 DEBUG oslo_concurrency.lockutils [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:13:18 np0005593232 nova_compute[250269]: 2026-01-23 11:13:18.398 250273 DEBUG oslo_concurrency.processutils [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:13:18 np0005593232 nova_compute[250269]: 2026-01-23 11:13:18.565 250273 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769166783.564562, d4c75524-52b8-4c2b-b0cb-18d94089013b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 23 06:13:18 np0005593232 nova_compute[250269]: 2026-01-23 11:13:18.566 250273 INFO nova.compute.manager [-] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] VM Stopped (Lifecycle Event)#033[00m
Jan 23 06:13:18 np0005593232 nova_compute[250269]: 2026-01-23 11:13:18.594 250273 DEBUG nova.compute.manager [None req-5f3f8671-41dc-4dae-894b-859a94267070 - - - - - -] [instance: d4c75524-52b8-4c2b-b0cb-18d94089013b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 23 06:13:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:13:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:13:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 06:13:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:13:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 06:13:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:13:18 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 0959ebc6-4cc6-4f7f-9d76-0cac6f3d9fc2 does not exist
Jan 23 06:13:18 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 67a823e9-28a2-4221-91bc-48fd3754c5c2 does not exist
Jan 23 06:13:18 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev f5978b89-826f-4e20-b1a1-32eced7e5356 does not exist
Jan 23 06:13:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 06:13:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 06:13:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 06:13:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:13:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:13:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:13:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4431: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 1.9 KiB/s wr, 81 op/s
Jan 23 06:13:18 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:13:18 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/200341085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:13:18 np0005593232 nova_compute[250269]: 2026-01-23 11:13:18.870 250273 DEBUG oslo_concurrency.processutils [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:13:18 np0005593232 nova_compute[250269]: 2026-01-23 11:13:18.876 250273 DEBUG nova.compute.provider_tree [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:13:18 np0005593232 nova_compute[250269]: 2026-01-23 11:13:18.891 250273 DEBUG nova.scheduler.client.report [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:13:18 np0005593232 nova_compute[250269]: 2026-01-23 11:13:18.915 250273 DEBUG oslo_concurrency.lockutils [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:13:18 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:13:18 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:13:18 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:13:18 np0005593232 nova_compute[250269]: 2026-01-23 11:13:18.970 250273 DEBUG oslo_concurrency.lockutils [None req-67b75e49-dbac-40c6-8ef9-37b82ab00eb1 5d6a458f5d9345379b05f0cdb69a7b0f 3a245f7970f14fffa60af2ff972b4bfd - - default default] Lock "d4c75524-52b8-4c2b-b0cb-18d94089013b" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 18.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:13:19 np0005593232 podman[426823]: 2026-01-23 11:13:19.21893416 +0000 UTC m=+0.039459560 container create 0cc6c0692169a9b13e00bf3a4f889eedebfaf76e8237ae59aacd8212bfeb1831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:13:19 np0005593232 systemd[1]: Started libpod-conmon-0cc6c0692169a9b13e00bf3a4f889eedebfaf76e8237ae59aacd8212bfeb1831.scope.
Jan 23 06:13:19 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:13:19 np0005593232 podman[426823]: 2026-01-23 11:13:19.202758301 +0000 UTC m=+0.023283731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:13:19 np0005593232 podman[426823]: 2026-01-23 11:13:19.374652158 +0000 UTC m=+0.195177598 container init 0cc6c0692169a9b13e00bf3a4f889eedebfaf76e8237ae59aacd8212bfeb1831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:13:19 np0005593232 podman[426823]: 2026-01-23 11:13:19.382922013 +0000 UTC m=+0.203447433 container start 0cc6c0692169a9b13e00bf3a4f889eedebfaf76e8237ae59aacd8212bfeb1831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 06:13:19 np0005593232 happy_austin[426839]: 167 167
Jan 23 06:13:19 np0005593232 systemd[1]: libpod-0cc6c0692169a9b13e00bf3a4f889eedebfaf76e8237ae59aacd8212bfeb1831.scope: Deactivated successfully.
Jan 23 06:13:19 np0005593232 podman[426823]: 2026-01-23 11:13:19.475319884 +0000 UTC m=+0.295845394 container attach 0cc6c0692169a9b13e00bf3a4f889eedebfaf76e8237ae59aacd8212bfeb1831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:13:19 np0005593232 podman[426823]: 2026-01-23 11:13:19.476639662 +0000 UTC m=+0.297165102 container died 0cc6c0692169a9b13e00bf3a4f889eedebfaf76e8237ae59aacd8212bfeb1831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:13:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:19.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:19 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ec3f8a5c6379a64efc787c802ac66992b1138e4d11cc1a39e003153787de59eb-merged.mount: Deactivated successfully.
Jan 23 06:13:20 np0005593232 podman[426823]: 2026-01-23 11:13:20.047716016 +0000 UTC m=+0.868241426 container remove 0cc6c0692169a9b13e00bf3a4f889eedebfaf76e8237ae59aacd8212bfeb1831 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:13:20 np0005593232 systemd[1]: libpod-conmon-0cc6c0692169a9b13e00bf3a4f889eedebfaf76e8237ae59aacd8212bfeb1831.scope: Deactivated successfully.
Jan 23 06:13:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:20.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:20 np0005593232 podman[426865]: 2026-01-23 11:13:20.213794289 +0000 UTC m=+0.049306250 container create 6c6af58527fe0b4253ec7aa0bb46bf85a63705fcbedf9d4b7f13d785f8540d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_galois, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:13:20 np0005593232 systemd[1]: Started libpod-conmon-6c6af58527fe0b4253ec7aa0bb46bf85a63705fcbedf9d4b7f13d785f8540d5d.scope.
Jan 23 06:13:20 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:13:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e0cbe7fd06c5d8e865d9a6e26e1b74d291190a35c8da9ff1624d36332967dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:13:20 np0005593232 podman[426865]: 2026-01-23 11:13:20.192242557 +0000 UTC m=+0.027754548 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:13:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e0cbe7fd06c5d8e865d9a6e26e1b74d291190a35c8da9ff1624d36332967dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:13:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e0cbe7fd06c5d8e865d9a6e26e1b74d291190a35c8da9ff1624d36332967dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:13:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e0cbe7fd06c5d8e865d9a6e26e1b74d291190a35c8da9ff1624d36332967dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:13:20 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83e0cbe7fd06c5d8e865d9a6e26e1b74d291190a35c8da9ff1624d36332967dc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 06:13:20 np0005593232 podman[426865]: 2026-01-23 11:13:20.304600005 +0000 UTC m=+0.140111986 container init 6c6af58527fe0b4253ec7aa0bb46bf85a63705fcbedf9d4b7f13d785f8540d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_galois, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:13:20 np0005593232 podman[426865]: 2026-01-23 11:13:20.310961176 +0000 UTC m=+0.146473157 container start 6c6af58527fe0b4253ec7aa0bb46bf85a63705fcbedf9d4b7f13d785f8540d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_galois, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 06:13:20 np0005593232 podman[426865]: 2026-01-23 11:13:20.316187924 +0000 UTC m=+0.151699895 container attach 6c6af58527fe0b4253ec7aa0bb46bf85a63705fcbedf9d4b7f13d785f8540d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_galois, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:13:20 np0005593232 nova_compute[250269]: 2026-01-23 11:13:20.821 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4432: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 54 KiB/s rd, 1.9 KiB/s wr, 81 op/s
Jan 23 06:13:21 np0005593232 condescending_galois[426881]: --> passed data devices: 0 physical, 1 LVM
Jan 23 06:13:21 np0005593232 condescending_galois[426881]: --> relative data size: 1.0
Jan 23 06:13:21 np0005593232 condescending_galois[426881]: --> All data devices are unavailable
Jan 23 06:13:21 np0005593232 systemd[1]: libpod-6c6af58527fe0b4253ec7aa0bb46bf85a63705fcbedf9d4b7f13d785f8540d5d.scope: Deactivated successfully.
Jan 23 06:13:21 np0005593232 podman[426865]: 2026-01-23 11:13:21.090410833 +0000 UTC m=+0.925922804 container died 6c6af58527fe0b4253ec7aa0bb46bf85a63705fcbedf9d4b7f13d785f8540d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_galois, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 23 06:13:21 np0005593232 systemd[1]: var-lib-containers-storage-overlay-83e0cbe7fd06c5d8e865d9a6e26e1b74d291190a35c8da9ff1624d36332967dc-merged.mount: Deactivated successfully.
Jan 23 06:13:21 np0005593232 podman[426865]: 2026-01-23 11:13:21.144979911 +0000 UTC m=+0.980491882 container remove 6c6af58527fe0b4253ec7aa0bb46bf85a63705fcbedf9d4b7f13d785f8540d5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 06:13:21 np0005593232 systemd[1]: libpod-conmon-6c6af58527fe0b4253ec7aa0bb46bf85a63705fcbedf9d4b7f13d785f8540d5d.scope: Deactivated successfully.
Jan 23 06:13:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:21.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:13:21 np0005593232 podman[427098]: 2026-01-23 11:13:21.755082913 +0000 UTC m=+0.052578353 container create 620874c8c1224fbefeb3b773aea1af99adcb1317cac194f7f0ea0c69b312bc43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:13:21 np0005593232 podman[427098]: 2026-01-23 11:13:21.726316927 +0000 UTC m=+0.023812417 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:13:21 np0005593232 systemd[1]: Started libpod-conmon-620874c8c1224fbefeb3b773aea1af99adcb1317cac194f7f0ea0c69b312bc43.scope.
Jan 23 06:13:21 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:13:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:22.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:22 np0005593232 podman[427098]: 2026-01-23 11:13:22.270360075 +0000 UTC m=+0.567855595 container init 620874c8c1224fbefeb3b773aea1af99adcb1317cac194f7f0ea0c69b312bc43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 23 06:13:22 np0005593232 podman[427098]: 2026-01-23 11:13:22.282337705 +0000 UTC m=+0.579833165 container start 620874c8c1224fbefeb3b773aea1af99adcb1317cac194f7f0ea0c69b312bc43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:13:22 np0005593232 dazzling_thompson[427115]: 167 167
Jan 23 06:13:22 np0005593232 systemd[1]: libpod-620874c8c1224fbefeb3b773aea1af99adcb1317cac194f7f0ea0c69b312bc43.scope: Deactivated successfully.
Jan 23 06:13:22 np0005593232 podman[427098]: 2026-01-23 11:13:22.299958044 +0000 UTC m=+0.597453584 container attach 620874c8c1224fbefeb3b773aea1af99adcb1317cac194f7f0ea0c69b312bc43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:13:22 np0005593232 podman[427098]: 2026-01-23 11:13:22.300550151 +0000 UTC m=+0.598045591 container died 620874c8c1224fbefeb3b773aea1af99adcb1317cac194f7f0ea0c69b312bc43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 06:13:22 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6ffa935c887b4e986f90969b54b513d1eccf88b842409776fbd49d0cc9874c53-merged.mount: Deactivated successfully.
Jan 23 06:13:22 np0005593232 podman[427098]: 2026-01-23 11:13:22.442832058 +0000 UTC m=+0.740327538 container remove 620874c8c1224fbefeb3b773aea1af99adcb1317cac194f7f0ea0c69b312bc43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:13:22 np0005593232 systemd[1]: libpod-conmon-620874c8c1224fbefeb3b773aea1af99adcb1317cac194f7f0ea0c69b312bc43.scope: Deactivated successfully.
Jan 23 06:13:22 np0005593232 podman[427137]: 2026-01-23 11:13:22.607960083 +0000 UTC m=+0.044187444 container create c28d64c5d459f7079a78a41c0ab67d192eb8f78e6ff479deeaf0ecff32ba3650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_galois, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:13:22 np0005593232 systemd[1]: Started libpod-conmon-c28d64c5d459f7079a78a41c0ab67d192eb8f78e6ff479deeaf0ecff32ba3650.scope.
Jan 23 06:13:22 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:13:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c55614bace8cd56bd1f7198910472b0cb46acc47f87691a919c6db9d165af48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:13:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c55614bace8cd56bd1f7198910472b0cb46acc47f87691a919c6db9d165af48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:13:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c55614bace8cd56bd1f7198910472b0cb46acc47f87691a919c6db9d165af48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:13:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c55614bace8cd56bd1f7198910472b0cb46acc47f87691a919c6db9d165af48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:13:22 np0005593232 podman[427137]: 2026-01-23 11:13:22.587889404 +0000 UTC m=+0.024116785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:13:22 np0005593232 podman[427137]: 2026-01-23 11:13:22.688415226 +0000 UTC m=+0.124642617 container init c28d64c5d459f7079a78a41c0ab67d192eb8f78e6ff479deeaf0ecff32ba3650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_galois, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:13:22 np0005593232 podman[427137]: 2026-01-23 11:13:22.694992673 +0000 UTC m=+0.131220034 container start c28d64c5d459f7079a78a41c0ab67d192eb8f78e6ff479deeaf0ecff32ba3650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_galois, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 23 06:13:22 np0005593232 podman[427137]: 2026-01-23 11:13:22.710845203 +0000 UTC m=+0.147072654 container attach c28d64c5d459f7079a78a41c0ab67d192eb8f78e6ff479deeaf0ecff32ba3650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 23 06:13:22 np0005593232 nova_compute[250269]: 2026-01-23 11:13:22.767 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4433: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 148 KiB/s rd, 1.4 KiB/s wr, 242 op/s
Jan 23 06:13:23 np0005593232 eager_galois[427153]: {
Jan 23 06:13:23 np0005593232 eager_galois[427153]:    "0": [
Jan 23 06:13:23 np0005593232 eager_galois[427153]:        {
Jan 23 06:13:23 np0005593232 eager_galois[427153]:            "devices": [
Jan 23 06:13:23 np0005593232 eager_galois[427153]:                "/dev/loop3"
Jan 23 06:13:23 np0005593232 eager_galois[427153]:            ],
Jan 23 06:13:23 np0005593232 eager_galois[427153]:            "lv_name": "ceph_lv0",
Jan 23 06:13:23 np0005593232 eager_galois[427153]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:13:23 np0005593232 eager_galois[427153]:            "lv_size": "7511998464",
Jan 23 06:13:23 np0005593232 eager_galois[427153]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 06:13:23 np0005593232 eager_galois[427153]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:13:23 np0005593232 eager_galois[427153]:            "name": "ceph_lv0",
Jan 23 06:13:23 np0005593232 eager_galois[427153]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:13:23 np0005593232 eager_galois[427153]:            "tags": {
Jan 23 06:13:23 np0005593232 eager_galois[427153]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:13:23 np0005593232 eager_galois[427153]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:13:23 np0005593232 eager_galois[427153]:                "ceph.cephx_lockbox_secret": "",
Jan 23 06:13:23 np0005593232 eager_galois[427153]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:13:23 np0005593232 eager_galois[427153]:                "ceph.cluster_name": "ceph",
Jan 23 06:13:23 np0005593232 eager_galois[427153]:                "ceph.crush_device_class": "",
Jan 23 06:13:23 np0005593232 eager_galois[427153]:                "ceph.encrypted": "0",
Jan 23 06:13:23 np0005593232 eager_galois[427153]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:13:23 np0005593232 eager_galois[427153]:                "ceph.osd_id": "0",
Jan 23 06:13:23 np0005593232 eager_galois[427153]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 06:13:23 np0005593232 eager_galois[427153]:                "ceph.type": "block",
Jan 23 06:13:23 np0005593232 eager_galois[427153]:                "ceph.vdo": "0"
Jan 23 06:13:23 np0005593232 eager_galois[427153]:            },
Jan 23 06:13:23 np0005593232 eager_galois[427153]:            "type": "block",
Jan 23 06:13:23 np0005593232 eager_galois[427153]:            "vg_name": "ceph_vg0"
Jan 23 06:13:23 np0005593232 eager_galois[427153]:        }
Jan 23 06:13:23 np0005593232 eager_galois[427153]:    ]
Jan 23 06:13:23 np0005593232 eager_galois[427153]: }
Jan 23 06:13:23 np0005593232 systemd[1]: libpod-c28d64c5d459f7079a78a41c0ab67d192eb8f78e6ff479deeaf0ecff32ba3650.scope: Deactivated successfully.
Jan 23 06:13:23 np0005593232 podman[427137]: 2026-01-23 11:13:23.462729157 +0000 UTC m=+0.898956508 container died c28d64c5d459f7079a78a41c0ab67d192eb8f78e6ff479deeaf0ecff32ba3650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_galois, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:13:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:23.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:23 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7c55614bace8cd56bd1f7198910472b0cb46acc47f87691a919c6db9d165af48-merged.mount: Deactivated successfully.
Jan 23 06:13:23 np0005593232 podman[427137]: 2026-01-23 11:13:23.758817838 +0000 UTC m=+1.195045199 container remove c28d64c5d459f7079a78a41c0ab67d192eb8f78e6ff479deeaf0ecff32ba3650 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 06:13:23 np0005593232 systemd[1]: libpod-conmon-c28d64c5d459f7079a78a41c0ab67d192eb8f78e6ff479deeaf0ecff32ba3650.scope: Deactivated successfully.
Jan 23 06:13:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:13:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:24.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:13:24 np0005593232 podman[427316]: 2026-01-23 11:13:24.393071446 +0000 UTC m=+0.077697776 container create 27e964b3f5a2e8de6b49518a662b32addbbdf8a223a48425c54829ab0b98e37a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 23 06:13:24 np0005593232 podman[427316]: 2026-01-23 11:13:24.340973377 +0000 UTC m=+0.025599737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:13:24 np0005593232 systemd[1]: Started libpod-conmon-27e964b3f5a2e8de6b49518a662b32addbbdf8a223a48425c54829ab0b98e37a.scope.
Jan 23 06:13:24 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:13:24 np0005593232 podman[427316]: 2026-01-23 11:13:24.72945147 +0000 UTC m=+0.414077820 container init 27e964b3f5a2e8de6b49518a662b32addbbdf8a223a48425c54829ab0b98e37a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_wozniak, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:13:24 np0005593232 podman[427316]: 2026-01-23 11:13:24.73579511 +0000 UTC m=+0.420421440 container start 27e964b3f5a2e8de6b49518a662b32addbbdf8a223a48425c54829ab0b98e37a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:13:24 np0005593232 awesome_wozniak[427332]: 167 167
Jan 23 06:13:24 np0005593232 systemd[1]: libpod-27e964b3f5a2e8de6b49518a662b32addbbdf8a223a48425c54829ab0b98e37a.scope: Deactivated successfully.
Jan 23 06:13:24 np0005593232 podman[427316]: 2026-01-23 11:13:24.769526367 +0000 UTC m=+0.454152697 container attach 27e964b3f5a2e8de6b49518a662b32addbbdf8a223a48425c54829ab0b98e37a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 23 06:13:24 np0005593232 podman[427316]: 2026-01-23 11:13:24.770026361 +0000 UTC m=+0.454652691 container died 27e964b3f5a2e8de6b49518a662b32addbbdf8a223a48425c54829ab0b98e37a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 06:13:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4434: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 148 KiB/s rd, 1.4 KiB/s wr, 242 op/s
Jan 23 06:13:24 np0005593232 systemd[1]: var-lib-containers-storage-overlay-1e516ca5ead026a60a51b593c5315d436791f5ebf51b92612099376f35608e3f-merged.mount: Deactivated successfully.
Jan 23 06:13:25 np0005593232 podman[427316]: 2026-01-23 11:13:25.019161581 +0000 UTC m=+0.703787911 container remove 27e964b3f5a2e8de6b49518a662b32addbbdf8a223a48425c54829ab0b98e37a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_wozniak, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:13:25 np0005593232 systemd[1]: libpod-conmon-27e964b3f5a2e8de6b49518a662b32addbbdf8a223a48425c54829ab0b98e37a.scope: Deactivated successfully.
Jan 23 06:13:25 np0005593232 podman[427358]: 2026-01-23 11:13:25.22387834 +0000 UTC m=+0.027519712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:13:25 np0005593232 podman[427358]: 2026-01-23 11:13:25.394043279 +0000 UTC m=+0.197684671 container create 6653afe594bbeb4578adbe4f8b522329bbe514bfe5f7e0a62d0d3d439c6353ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_greider, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 23 06:13:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:25.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:25 np0005593232 systemd[1]: Started libpod-conmon-6653afe594bbeb4578adbe4f8b522329bbe514bfe5f7e0a62d0d3d439c6353ce.scope.
Jan 23 06:13:25 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:13:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d31ba0d4aec3995fe5b627c66880f4dd91ffbdd360e976301914faa38578d79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:13:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d31ba0d4aec3995fe5b627c66880f4dd91ffbdd360e976301914faa38578d79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:13:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d31ba0d4aec3995fe5b627c66880f4dd91ffbdd360e976301914faa38578d79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:13:25 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d31ba0d4aec3995fe5b627c66880f4dd91ffbdd360e976301914faa38578d79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:13:25 np0005593232 podman[427358]: 2026-01-23 11:13:25.720056179 +0000 UTC m=+0.523697681 container init 6653afe594bbeb4578adbe4f8b522329bbe514bfe5f7e0a62d0d3d439c6353ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:13:25 np0005593232 podman[427358]: 2026-01-23 11:13:25.729120246 +0000 UTC m=+0.532761598 container start 6653afe594bbeb4578adbe4f8b522329bbe514bfe5f7e0a62d0d3d439c6353ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_greider, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:13:25 np0005593232 nova_compute[250269]: 2026-01-23 11:13:25.826 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:25 np0005593232 podman[427358]: 2026-01-23 11:13:25.84874224 +0000 UTC m=+0.652383602 container attach 6653afe594bbeb4578adbe4f8b522329bbe514bfe5f7e0a62d0d3d439c6353ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_greider, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Jan 23 06:13:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:26.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:26 np0005593232 nice_greider[427374]: {
Jan 23 06:13:26 np0005593232 nice_greider[427374]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 06:13:26 np0005593232 nice_greider[427374]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:13:26 np0005593232 nice_greider[427374]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 06:13:26 np0005593232 nice_greider[427374]:        "osd_id": 0,
Jan 23 06:13:26 np0005593232 nice_greider[427374]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:13:26 np0005593232 nice_greider[427374]:        "type": "bluestore"
Jan 23 06:13:26 np0005593232 nice_greider[427374]:    }
Jan 23 06:13:26 np0005593232 nice_greider[427374]: }
Jan 23 06:13:26 np0005593232 systemd[1]: libpod-6653afe594bbeb4578adbe4f8b522329bbe514bfe5f7e0a62d0d3d439c6353ce.scope: Deactivated successfully.
Jan 23 06:13:26 np0005593232 podman[427358]: 2026-01-23 11:13:26.591507965 +0000 UTC m=+1.395149327 container died 6653afe594bbeb4578adbe4f8b522329bbe514bfe5f7e0a62d0d3d439c6353ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_greider, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 23 06:13:26 np0005593232 systemd[1]: var-lib-containers-storage-overlay-3d31ba0d4aec3995fe5b627c66880f4dd91ffbdd360e976301914faa38578d79-merged.mount: Deactivated successfully.
Jan 23 06:13:26 np0005593232 podman[427358]: 2026-01-23 11:13:26.645472066 +0000 UTC m=+1.449113448 container remove 6653afe594bbeb4578adbe4f8b522329bbe514bfe5f7e0a62d0d3d439c6353ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:13:26 np0005593232 systemd[1]: libpod-conmon-6653afe594bbeb4578adbe4f8b522329bbe514bfe5f7e0a62d0d3d439c6353ce.scope: Deactivated successfully.
Jan 23 06:13:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 06:13:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:13:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 06:13:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:13:26 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c9213454-e2f9-4cdf-9bc1-8ddde2fc3ef8 does not exist
Jan 23 06:13:26 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 38905664-171a-46ae-86df-44b378881d54 does not exist
Jan 23 06:13:26 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a2c2bcfd-63fa-45f6-ac64-36a29ad6ef87 does not exist
Jan 23 06:13:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:13:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4435: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 148 KiB/s rd, 1.4 KiB/s wr, 242 op/s
Jan 23 06:13:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:13:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:27.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:13:27 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:13:27 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:13:27 np0005593232 nova_compute[250269]: 2026-01-23 11:13:27.772 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:13:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:28.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:13:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:13:28.151 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=105, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=104) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 06:13:28 np0005593232 nova_compute[250269]: 2026-01-23 11:13:28.151 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:28 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:13:28.152 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 06:13:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4436: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 125 KiB/s rd, 1.2 KiB/s wr, 203 op/s
Jan 23 06:13:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:29.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:13:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:30.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:13:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4437: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 89 KiB/s rd, 0 B/s wr, 148 op/s
Jan 23 06:13:30 np0005593232 nova_compute[250269]: 2026-01-23 11:13:30.861 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:31 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:13:31.154 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '105'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:13:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:13:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:31.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e431 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #225. Immutable memtables: 0.
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:13:31.734053) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 141] Flushing memtable with next log file: 225
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166811734103, "job": 141, "event": "flush_started", "num_memtables": 1, "num_entries": 1231, "num_deletes": 256, "total_data_size": 2000180, "memory_usage": 2024832, "flush_reason": "Manual Compaction"}
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 141] Level-0 flush table #226: started
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166811747999, "cf_name": "default", "job": 141, "event": "table_file_creation", "file_number": 226, "file_size": 1958201, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 96014, "largest_seqno": 97244, "table_properties": {"data_size": 1952288, "index_size": 3243, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12748, "raw_average_key_size": 19, "raw_value_size": 1940261, "raw_average_value_size": 3041, "num_data_blocks": 141, "num_entries": 638, "num_filter_entries": 638, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769166704, "oldest_key_time": 1769166704, "file_creation_time": 1769166811, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 226, "seqno_to_time_mapping": "N/A"}}
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 141] Flush lasted 13971 microseconds, and 5116 cpu microseconds.
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:13:31.748027) [db/flush_job.cc:967] [default] [JOB 141] Level-0 flush table #226: 1958201 bytes OK
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:13:31.748043) [db/memtable_list.cc:519] [default] Level-0 commit table #226 started
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:13:31.750410) [db/memtable_list.cc:722] [default] Level-0 commit table #226: memtable #1 done
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:13:31.750422) EVENT_LOG_v1 {"time_micros": 1769166811750418, "job": 141, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:13:31.750437) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 141] Try to delete WAL files size 1994619, prev total WAL file size 2010063, number of live WAL files 2.
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000222.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:13:31.750992) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323733' seq:72057594037927935, type:22 .. '6C6F676D0034353234' seq:0, type:0; will stop at (end)
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 142] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 141 Base level 0, inputs: [226(1912KB)], [224(13MB)]
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166811751040, "job": 142, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [226], "files_L6": [224], "score": -1, "input_data_size": 15961032, "oldest_snapshot_seqno": -1}
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 142] Generated table #227: 12099 keys, 15831019 bytes, temperature: kUnknown
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166811946967, "cf_name": "default", "job": 142, "event": "table_file_creation", "file_number": 227, "file_size": 15831019, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15753282, "index_size": 46392, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30277, "raw_key_size": 320375, "raw_average_key_size": 26, "raw_value_size": 15542525, "raw_average_value_size": 1284, "num_data_blocks": 1762, "num_entries": 12099, "num_filter_entries": 12099, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769166811, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 227, "seqno_to_time_mapping": "N/A"}}
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:13:31.947248) [db/compaction/compaction_job.cc:1663] [default] [JOB 142] Compacted 1@0 + 1@6 files to L6 => 15831019 bytes
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:13:31.951209) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 81.4 rd, 80.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 13.4 +0.0 blob) out(15.1 +0.0 blob), read-write-amplify(16.2) write-amplify(8.1) OK, records in: 12630, records dropped: 531 output_compression: NoCompression
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:13:31.951231) EVENT_LOG_v1 {"time_micros": 1769166811951222, "job": 142, "event": "compaction_finished", "compaction_time_micros": 196022, "compaction_time_cpu_micros": 36815, "output_level": 6, "num_output_files": 1, "total_output_size": 15831019, "num_input_records": 12630, "num_output_records": 12099, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000226.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166811951740, "job": 142, "event": "table_file_deletion", "file_number": 226}
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000224.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166811954261, "job": 142, "event": "table_file_deletion", "file_number": 224}
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:13:31.750929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:13:31.954351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:13:31.954360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:13:31.954362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:13:31.954364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:13:31 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:13:31.954367) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:13:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:32.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:32 np0005593232 nova_compute[250269]: 2026-01-23 11:13:32.776 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4438: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.0 MiB/s rd, 3.9 MiB/s wr, 224 op/s
Jan 23 06:13:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:33.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:34.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:34 np0005593232 podman[427462]: 2026-01-23 11:13:34.424593399 +0000 UTC m=+0.062985778 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 23 06:13:34 np0005593232 podman[427461]: 2026-01-23 11:13:34.526609153 +0000 UTC m=+0.173651228 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 06:13:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4439: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 76 op/s
Jan 23 06:13:35 np0005593232 nova_compute[250269]: 2026-01-23 11:13:35.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:13:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:13:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:35.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:13:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e431 do_prune osdmap full prune enabled
Jan 23 06:13:35 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e432 e432: 3 total, 3 up, 3 in
Jan 23 06:13:35 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e432: 3 total, 3 up, 3 in
Jan 23 06:13:35 np0005593232 nova_compute[250269]: 2026-01-23 11:13:35.863 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:36.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:13:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4441: 321 pgs: 321 active+clean; 279 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 92 op/s
Jan 23 06:13:37 np0005593232 nova_compute[250269]: 2026-01-23 11:13:37.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:13:37 np0005593232 nova_compute[250269]: 2026-01-23 11:13:37.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 06:13:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:37.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_11:13:37
Jan 23 06:13:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 06:13:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 06:13:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['backups', 'default.rgw.log', '.mgr', '.rgw.root', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.meta']
Jan 23 06:13:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 06:13:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:13:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:13:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:13:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:13:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:13:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:13:37 np0005593232 nova_compute[250269]: 2026-01-23 11:13:37.778 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:13:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:38.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:13:38 np0005593232 nova_compute[250269]: 2026-01-23 11:13:38.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:13:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4442: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 208 op/s
Jan 23 06:13:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 06:13:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:13:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 06:13:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:13:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:13:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:13:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:13:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:13:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:13:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:13:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:39.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:40.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:40 np0005593232 nova_compute[250269]: 2026-01-23 11:13:40.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:13:40 np0005593232 nova_compute[250269]: 2026-01-23 11:13:40.290 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:13:40 np0005593232 nova_compute[250269]: 2026-01-23 11:13:40.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 06:13:40 np0005593232 nova_compute[250269]: 2026-01-23 11:13:40.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 06:13:40 np0005593232 nova_compute[250269]: 2026-01-23 11:13:40.307 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 06:13:40 np0005593232 nova_compute[250269]: 2026-01-23 11:13:40.308 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:13:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4443: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 208 op/s
Jan 23 06:13:40 np0005593232 nova_compute[250269]: 2026-01-23 11:13:40.865 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:13:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:41.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:13:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e432 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:13:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e432 do_prune osdmap full prune enabled
Jan 23 06:13:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 e433: 3 total, 3 up, 3 in
Jan 23 06:13:41 np0005593232 ceph-mon[74423]: log_channel(cluster) log [DBG] : osdmap e433: 3 total, 3 up, 3 in
Jan 23 06:13:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:42.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:42 np0005593232 ovn_controller[151001]: 2026-01-23T11:13:42Z|00874|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 23 06:13:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:13:42.699 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:13:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:13:42.699 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:13:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:13:42.700 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:13:42 np0005593232 nova_compute[250269]: 2026-01-23 11:13:42.781 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4445: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.9 MiB/s rd, 20 KiB/s wr, 145 op/s
Jan 23 06:13:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:13:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:43.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:13:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:13:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:44.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:13:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4446: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 MiB/s rd, 18 KiB/s wr, 128 op/s
Jan 23 06:13:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:13:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:45.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:13:45 np0005593232 nova_compute[250269]: 2026-01-23 11:13:45.868 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:46.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:13:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4447: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 116 op/s
Jan 23 06:13:47 np0005593232 nova_compute[250269]: 2026-01-23 11:13:47.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:13:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:47.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:47 np0005593232 nova_compute[250269]: 2026-01-23 11:13:47.783 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 06:13:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:13:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:13:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:13:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0021759550065140517 of space, bias 1.0, pg target 0.6527865019542155 quantized to 32 (current 32)
Jan 23 06:13:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:13:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 06:13:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:13:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:13:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:13:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:13:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:13:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:13:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:13:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:13:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:13:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:13:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:13:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:13:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:13:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:13:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:13:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:13:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:48.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4448: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 175 KiB/s rd, 3.2 KiB/s wr, 11 op/s
Jan 23 06:13:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:49.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:13:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:50.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:13:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4449: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 175 KiB/s rd, 3.2 KiB/s wr, 11 op/s
Jan 23 06:13:50 np0005593232 nova_compute[250269]: 2026-01-23 11:13:50.917 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:51 np0005593232 nova_compute[250269]: 2026-01-23 11:13:51.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:13:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:13:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:51.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:13:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:13:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:52.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:52 np0005593232 nova_compute[250269]: 2026-01-23 11:13:52.786 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4450: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 575 KiB/s rd, 14 KiB/s wr, 45 op/s
Jan 23 06:13:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:13:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:53.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:13:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:54.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4451: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 529 KiB/s rd, 13 KiB/s wr, 42 op/s
Jan 23 06:13:55 np0005593232 nova_compute[250269]: 2026-01-23 11:13:55.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:13:55 np0005593232 nova_compute[250269]: 2026-01-23 11:13:55.333 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:13:55 np0005593232 nova_compute[250269]: 2026-01-23 11:13:55.334 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:13:55 np0005593232 nova_compute[250269]: 2026-01-23 11:13:55.334 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:13:55 np0005593232 nova_compute[250269]: 2026-01-23 11:13:55.334 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 06:13:55 np0005593232 nova_compute[250269]: 2026-01-23 11:13:55.334 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:13:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:55.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:13:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4156029275' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:13:55 np0005593232 nova_compute[250269]: 2026-01-23 11:13:55.828 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:13:55 np0005593232 nova_compute[250269]: 2026-01-23 11:13:55.920 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:56 np0005593232 nova_compute[250269]: 2026-01-23 11:13:56.011 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 06:13:56 np0005593232 nova_compute[250269]: 2026-01-23 11:13:56.012 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4002MB free_disk=20.942611694335938GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 06:13:56 np0005593232 nova_compute[250269]: 2026-01-23 11:13:56.013 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:13:56 np0005593232 nova_compute[250269]: 2026-01-23 11:13:56.013 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:13:56 np0005593232 nova_compute[250269]: 2026-01-23 11:13:56.086 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 06:13:56 np0005593232 nova_compute[250269]: 2026-01-23 11:13:56.087 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 06:13:56 np0005593232 nova_compute[250269]: 2026-01-23 11:13:56.125 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:13:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:13:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:56.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:13:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:13:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1110700608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:13:56 np0005593232 nova_compute[250269]: 2026-01-23 11:13:56.553 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:13:56 np0005593232 nova_compute[250269]: 2026-01-23 11:13:56.559 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:13:56 np0005593232 nova_compute[250269]: 2026-01-23 11:13:56.593 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:13:56 np0005593232 nova_compute[250269]: 2026-01-23 11:13:56.618 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 06:13:56 np0005593232 nova_compute[250269]: 2026-01-23 11:13:56.618 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:13:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:13:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4452: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 529 KiB/s rd, 13 KiB/s wr, 42 op/s
Jan 23 06:13:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:57.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:57 np0005593232 nova_compute[250269]: 2026-01-23 11:13:57.788 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:13:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:13:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:13:58.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:13:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4453: 321 pgs: 321 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 529 KiB/s rd, 24 KiB/s wr, 42 op/s
Jan 23 06:13:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:13:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:13:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:13:59.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:14:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:00.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4454: 321 pgs: 321 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 383 KiB/s rd, 21 KiB/s wr, 33 op/s
Jan 23 06:14:00 np0005593232 nova_compute[250269]: 2026-01-23 11:14:00.968 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:14:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:01.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:14:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:02.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:02 np0005593232 nova_compute[250269]: 2026-01-23 11:14:02.790 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:14:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4455: 321 pgs: 321 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 383 KiB/s rd, 25 KiB/s wr, 33 op/s
Jan 23 06:14:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:14:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:03.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:14:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:14:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:04.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:14:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4456: 321 pgs: 321 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s wr, 1 op/s
Jan 23 06:14:05 np0005593232 podman[427670]: 2026-01-23 11:14:05.430353326 +0000 UTC m=+0.082797700 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 06:14:05 np0005593232 podman[427669]: 2026-01-23 11:14:05.434828003 +0000 UTC m=+0.088474861 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, 
config_id=ovn_controller, org.label-schema.schema-version=1.0)
Jan 23 06:14:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:05.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:05 np0005593232 nova_compute[250269]: 2026-01-23 11:14:05.971 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:14:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:14:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:06.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:14:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:14:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4457: 321 pgs: 321 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 14 KiB/s wr, 1 op/s
Jan 23 06:14:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:07.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:14:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:14:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:14:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:14:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:14:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:14:07 np0005593232 nova_compute[250269]: 2026-01-23 11:14:07.792 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:14:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:08.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4458: 321 pgs: 321 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 19 KiB/s wr, 1 op/s
Jan 23 06:14:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:09.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:09 np0005593232 nova_compute[250269]: 2026-01-23 11:14:09.941 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:14:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:14:09.941 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=106, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=105) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 06:14:09 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:14:09.944 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 06:14:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:14:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:10.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:14:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4459: 321 pgs: 321 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 8.7 KiB/s wr, 1 op/s
Jan 23 06:14:10 np0005593232 nova_compute[250269]: 2026-01-23 11:14:10.973 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:14:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:11.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:14:11 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:14:11.945 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '106'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:14:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:12.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:12 np0005593232 nova_compute[250269]: 2026-01-23 11:14:12.794 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:14:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4460: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 231 KiB/s rd, 9.8 KiB/s wr, 34 op/s
Jan 23 06:14:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:13.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:14:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:14.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:14:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4461: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 231 KiB/s rd, 6.2 KiB/s wr, 34 op/s
Jan 23 06:14:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:15.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:16 np0005593232 nova_compute[250269]: 2026-01-23 11:14:16.020 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:14:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:16.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:14:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4462: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 231 KiB/s rd, 6.2 KiB/s wr, 34 op/s
Jan 23 06:14:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:17.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:17 np0005593232 nova_compute[250269]: 2026-01-23 11:14:17.797 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:14:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:18.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4463: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 231 KiB/s rd, 6.2 KiB/s wr, 34 op/s
Jan 23 06:14:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:19.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:20.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4464: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Jan 23 06:14:21 np0005593232 nova_compute[250269]: 2026-01-23 11:14:21.022 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:14:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:14:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:21.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:14:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:14:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:22.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:22 np0005593232 nova_compute[250269]: 2026-01-23 11:14:22.800 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:14:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4465: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 1.6 KiB/s wr, 41 op/s
Jan 23 06:14:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:14:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:23.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:14:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:14:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:24.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:14:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4466: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 23 06:14:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:25.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:26 np0005593232 nova_compute[250269]: 2026-01-23 11:14:26.025 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:14:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:26.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:14:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4467: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 7 op/s
Jan 23 06:14:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:14:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:27.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:14:27 np0005593232 nova_compute[250269]: 2026-01-23 11:14:27.801 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:14:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:14:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:14:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 06:14:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:14:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 06:14:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:14:28 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev f7c1df1e-6e88-47db-9721-51838f1a6b18 does not exist
Jan 23 06:14:28 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 3b0e8de6-8cc7-4d76-9580-e817eeb901e1 does not exist
Jan 23 06:14:28 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 9c7aa592-b57c-4dec-a1bd-6db59cc8dcca does not exist
Jan 23 06:14:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 06:14:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 06:14:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 06:14:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:14:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:14:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:14:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:28.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:28 np0005593232 podman[428048]: 2026-01-23 11:14:28.575119445 +0000 UTC m=+0.021986895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:14:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4468: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 23 06:14:28 np0005593232 podman[428048]: 2026-01-23 11:14:28.880182622 +0000 UTC m=+0.327050022 container create d66b9bb4ab528f4a499a32e9c1ed43e0646ed62bf9b6058aebf4a2c35593f339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jepsen, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:14:29 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:14:29 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:14:29 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:14:29 np0005593232 systemd[1]: Started libpod-conmon-d66b9bb4ab528f4a499a32e9c1ed43e0646ed62bf9b6058aebf4a2c35593f339.scope.
Jan 23 06:14:29 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:14:29 np0005593232 podman[428048]: 2026-01-23 11:14:29.34144929 +0000 UTC m=+0.788316740 container init d66b9bb4ab528f4a499a32e9c1ed43e0646ed62bf9b6058aebf4a2c35593f339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 23 06:14:29 np0005593232 podman[428048]: 2026-01-23 11:14:29.352241606 +0000 UTC m=+0.799109046 container start d66b9bb4ab528f4a499a32e9c1ed43e0646ed62bf9b6058aebf4a2c35593f339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:14:29 np0005593232 systemd[1]: libpod-d66b9bb4ab528f4a499a32e9c1ed43e0646ed62bf9b6058aebf4a2c35593f339.scope: Deactivated successfully.
Jan 23 06:14:29 np0005593232 intelligent_jepsen[428065]: 167 167
Jan 23 06:14:29 np0005593232 conmon[428065]: conmon d66b9bb4ab528f4a499a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d66b9bb4ab528f4a499a32e9c1ed43e0646ed62bf9b6058aebf4a2c35593f339.scope/container/memory.events
Jan 23 06:14:29 np0005593232 podman[428048]: 2026-01-23 11:14:29.368151097 +0000 UTC m=+0.815018537 container attach d66b9bb4ab528f4a499a32e9c1ed43e0646ed62bf9b6058aebf4a2c35593f339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jepsen, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 23 06:14:29 np0005593232 podman[428048]: 2026-01-23 11:14:29.369546147 +0000 UTC m=+0.816413547 container died d66b9bb4ab528f4a499a32e9c1ed43e0646ed62bf9b6058aebf4a2c35593f339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 23 06:14:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:29.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:29 np0005593232 systemd[1]: var-lib-containers-storage-overlay-95aa77f5d09e418ad39989a4d09dbb7e6b56d081b02fad88cbeaeb1ad860796c-merged.mount: Deactivated successfully.
Jan 23 06:14:29 np0005593232 podman[428048]: 2026-01-23 11:14:29.896021126 +0000 UTC m=+1.342888516 container remove d66b9bb4ab528f4a499a32e9c1ed43e0646ed62bf9b6058aebf4a2c35593f339 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_jepsen, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 23 06:14:29 np0005593232 systemd[1]: libpod-conmon-d66b9bb4ab528f4a499a32e9c1ed43e0646ed62bf9b6058aebf4a2c35593f339.scope: Deactivated successfully.
Jan 23 06:14:30 np0005593232 podman[428089]: 2026-01-23 11:14:30.162787236 +0000 UTC m=+0.091579100 container create ee86e4f79ef45cd4a643e9e0fbe61de4cc194e60b1e81fc23e1a06b568e60db6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_panini, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 23 06:14:30 np0005593232 podman[428089]: 2026-01-23 11:14:30.101197368 +0000 UTC m=+0.029989232 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:14:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:14:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:30.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:14:30 np0005593232 systemd[1]: Started libpod-conmon-ee86e4f79ef45cd4a643e9e0fbe61de4cc194e60b1e81fc23e1a06b568e60db6.scope.
Jan 23 06:14:30 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:14:30 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9907704d4c081255171dc72925f05e534a76139345f748e42af282268ad76857/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:14:30 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9907704d4c081255171dc72925f05e534a76139345f748e42af282268ad76857/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:14:30 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9907704d4c081255171dc72925f05e534a76139345f748e42af282268ad76857/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:14:30 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9907704d4c081255171dc72925f05e534a76139345f748e42af282268ad76857/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:14:30 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9907704d4c081255171dc72925f05e534a76139345f748e42af282268ad76857/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 06:14:30 np0005593232 podman[428089]: 2026-01-23 11:14:30.36347313 +0000 UTC m=+0.292265004 container init ee86e4f79ef45cd4a643e9e0fbe61de4cc194e60b1e81fc23e1a06b568e60db6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Jan 23 06:14:30 np0005593232 podman[428089]: 2026-01-23 11:14:30.370554671 +0000 UTC m=+0.299346525 container start ee86e4f79ef45cd4a643e9e0fbe61de4cc194e60b1e81fc23e1a06b568e60db6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_panini, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:14:30 np0005593232 podman[428089]: 2026-01-23 11:14:30.389334554 +0000 UTC m=+0.318126428 container attach ee86e4f79ef45cd4a643e9e0fbe61de4cc194e60b1e81fc23e1a06b568e60db6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 06:14:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4469: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 23 06:14:31 np0005593232 nova_compute[250269]: 2026-01-23 11:14:31.025 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:14:31 np0005593232 thirsty_panini[428105]: --> passed data devices: 0 physical, 1 LVM
Jan 23 06:14:31 np0005593232 thirsty_panini[428105]: --> relative data size: 1.0
Jan 23 06:14:31 np0005593232 thirsty_panini[428105]: --> All data devices are unavailable
Jan 23 06:14:31 np0005593232 systemd[1]: libpod-ee86e4f79ef45cd4a643e9e0fbe61de4cc194e60b1e81fc23e1a06b568e60db6.scope: Deactivated successfully.
Jan 23 06:14:31 np0005593232 podman[428089]: 2026-01-23 11:14:31.269659432 +0000 UTC m=+1.198451296 container died ee86e4f79ef45cd4a643e9e0fbe61de4cc194e60b1e81fc23e1a06b568e60db6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_panini, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Jan 23 06:14:31 np0005593232 systemd[1]: var-lib-containers-storage-overlay-9907704d4c081255171dc72925f05e534a76139345f748e42af282268ad76857-merged.mount: Deactivated successfully.
Jan 23 06:14:31 np0005593232 podman[428089]: 2026-01-23 11:14:31.325753644 +0000 UTC m=+1.254545498 container remove ee86e4f79ef45cd4a643e9e0fbe61de4cc194e60b1e81fc23e1a06b568e60db6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:14:31 np0005593232 systemd[1]: libpod-conmon-ee86e4f79ef45cd4a643e9e0fbe61de4cc194e60b1e81fc23e1a06b568e60db6.scope: Deactivated successfully.
Jan 23 06:14:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:14:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:31.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:14:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:14:32 np0005593232 podman[428275]: 2026-01-23 11:14:32.007779436 +0000 UTC m=+0.052605603 container create 1f52b74ac2989d946f04d00955e55fad4b5257468bd67a3b5633930f26a6957c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 23 06:14:32 np0005593232 systemd[1]: Started libpod-conmon-1f52b74ac2989d946f04d00955e55fad4b5257468bd67a3b5633930f26a6957c.scope.
Jan 23 06:14:32 np0005593232 podman[428275]: 2026-01-23 11:14:31.982467448 +0000 UTC m=+0.027293615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:14:32 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:14:32 np0005593232 podman[428275]: 2026-01-23 11:14:32.109517573 +0000 UTC m=+0.154343770 container init 1f52b74ac2989d946f04d00955e55fad4b5257468bd67a3b5633930f26a6957c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 23 06:14:32 np0005593232 podman[428275]: 2026-01-23 11:14:32.122287346 +0000 UTC m=+0.167113503 container start 1f52b74ac2989d946f04d00955e55fad4b5257468bd67a3b5633930f26a6957c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 06:14:32 np0005593232 keen_kowalevski[428291]: 167 167
Jan 23 06:14:32 np0005593232 systemd[1]: libpod-1f52b74ac2989d946f04d00955e55fad4b5257468bd67a3b5633930f26a6957c.scope: Deactivated successfully.
Jan 23 06:14:32 np0005593232 conmon[428291]: conmon 1f52b74ac2989d946f04 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1f52b74ac2989d946f04d00955e55fad4b5257468bd67a3b5633930f26a6957c.scope/container/memory.events
Jan 23 06:14:32 np0005593232 podman[428275]: 2026-01-23 11:14:32.126933717 +0000 UTC m=+0.171759874 container attach 1f52b74ac2989d946f04d00955e55fad4b5257468bd67a3b5633930f26a6957c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kowalevski, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 23 06:14:32 np0005593232 podman[428275]: 2026-01-23 11:14:32.12914076 +0000 UTC m=+0.173966917 container died 1f52b74ac2989d946f04d00955e55fad4b5257468bd67a3b5633930f26a6957c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kowalevski, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 23 06:14:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:14:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:32.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:14:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 23 06:14:32 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/624927327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 23 06:14:32 np0005593232 systemd[1]: var-lib-containers-storage-overlay-3f1980a67c1be98f12b69edc67fddc40789bf4f33550f8eab3418e958c92a31f-merged.mount: Deactivated successfully.
Jan 23 06:14:32 np0005593232 podman[428275]: 2026-01-23 11:14:32.302634944 +0000 UTC m=+0.347461081 container remove 1f52b74ac2989d946f04d00955e55fad4b5257468bd67a3b5633930f26a6957c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kowalevski, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 23 06:14:32 np0005593232 systemd[1]: libpod-conmon-1f52b74ac2989d946f04d00955e55fad4b5257468bd67a3b5633930f26a6957c.scope: Deactivated successfully.
Jan 23 06:14:32 np0005593232 podman[428315]: 2026-01-23 11:14:32.49495276 +0000 UTC m=+0.046713496 container create b1a0fd655dae2ddf715ba3047679c6ec01bb7917f978c132ebd00e0ead79a94c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shannon, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:14:32 np0005593232 systemd[1]: Started libpod-conmon-b1a0fd655dae2ddf715ba3047679c6ec01bb7917f978c132ebd00e0ead79a94c.scope.
Jan 23 06:14:32 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:14:32 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c60f4494989840a50f5ee59fdfd3942fec98359847fbc90e5955d5f219fbe835/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:14:32 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c60f4494989840a50f5ee59fdfd3942fec98359847fbc90e5955d5f219fbe835/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:14:32 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c60f4494989840a50f5ee59fdfd3942fec98359847fbc90e5955d5f219fbe835/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:14:32 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c60f4494989840a50f5ee59fdfd3942fec98359847fbc90e5955d5f219fbe835/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:14:32 np0005593232 podman[428315]: 2026-01-23 11:14:32.477044352 +0000 UTC m=+0.028805128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:14:32 np0005593232 podman[428315]: 2026-01-23 11:14:32.575989049 +0000 UTC m=+0.127749845 container init b1a0fd655dae2ddf715ba3047679c6ec01bb7917f978c132ebd00e0ead79a94c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 23 06:14:32 np0005593232 podman[428315]: 2026-01-23 11:14:32.583886204 +0000 UTC m=+0.135646950 container start b1a0fd655dae2ddf715ba3047679c6ec01bb7917f978c132ebd00e0ead79a94c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shannon, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 23 06:14:32 np0005593232 podman[428315]: 2026-01-23 11:14:32.588103013 +0000 UTC m=+0.139863759 container attach b1a0fd655dae2ddf715ba3047679c6ec01bb7917f978c132ebd00e0ead79a94c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shannon, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:14:32 np0005593232 nova_compute[250269]: 2026-01-23 11:14:32.803 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:14:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4470: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 41 op/s
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]: {
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:    "0": [
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:        {
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:            "devices": [
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:                "/dev/loop3"
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:            ],
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:            "lv_name": "ceph_lv0",
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:            "lv_size": "7511998464",
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:            "name": "ceph_lv0",
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:            "tags": {
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:                "ceph.cephx_lockbox_secret": "",
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:                "ceph.cluster_name": "ceph",
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:                "ceph.crush_device_class": "",
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:                "ceph.encrypted": "0",
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:                "ceph.osd_id": "0",
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:                "ceph.type": "block",
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:                "ceph.vdo": "0"
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:            },
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:            "type": "block",
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:            "vg_name": "ceph_vg0"
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:        }
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]:    ]
Jan 23 06:14:33 np0005593232 elastic_shannon[428331]: }
Jan 23 06:14:33 np0005593232 systemd[1]: libpod-b1a0fd655dae2ddf715ba3047679c6ec01bb7917f978c132ebd00e0ead79a94c.scope: Deactivated successfully.
Jan 23 06:14:33 np0005593232 podman[428315]: 2026-01-23 11:14:33.326825135 +0000 UTC m=+0.878585911 container died b1a0fd655dae2ddf715ba3047679c6ec01bb7917f978c132ebd00e0ead79a94c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shannon, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:14:33 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c60f4494989840a50f5ee59fdfd3942fec98359847fbc90e5955d5f219fbe835-merged.mount: Deactivated successfully.
Jan 23 06:14:33 np0005593232 podman[428315]: 2026-01-23 11:14:33.392782906 +0000 UTC m=+0.944543652 container remove b1a0fd655dae2ddf715ba3047679c6ec01bb7917f978c132ebd00e0ead79a94c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_shannon, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 06:14:33 np0005593232 systemd[1]: libpod-conmon-b1a0fd655dae2ddf715ba3047679c6ec01bb7917f978c132ebd00e0ead79a94c.scope: Deactivated successfully.
Jan 23 06:14:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:14:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:33.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:14:34 np0005593232 podman[428494]: 2026-01-23 11:14:33.985288729 +0000 UTC m=+0.026539434 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:14:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:34.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:34 np0005593232 podman[428494]: 2026-01-23 11:14:34.256238357 +0000 UTC m=+0.297489072 container create c327f651dc7770599a188b0d743efe4f00ec13c50edf29d65af9c07044ca8f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 23 06:14:34 np0005593232 systemd[1]: Started libpod-conmon-c327f651dc7770599a188b0d743efe4f00ec13c50edf29d65af9c07044ca8f74.scope.
Jan 23 06:14:34 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:14:34 np0005593232 podman[428494]: 2026-01-23 11:14:34.614712818 +0000 UTC m=+0.655963523 container init c327f651dc7770599a188b0d743efe4f00ec13c50edf29d65af9c07044ca8f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_buck, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:14:34 np0005593232 podman[428494]: 2026-01-23 11:14:34.623141027 +0000 UTC m=+0.664391702 container start c327f651dc7770599a188b0d743efe4f00ec13c50edf29d65af9c07044ca8f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 23 06:14:34 np0005593232 nostalgic_buck[428510]: 167 167
Jan 23 06:14:34 np0005593232 systemd[1]: libpod-c327f651dc7770599a188b0d743efe4f00ec13c50edf29d65af9c07044ca8f74.scope: Deactivated successfully.
Jan 23 06:14:34 np0005593232 podman[428494]: 2026-01-23 11:14:34.635398165 +0000 UTC m=+0.676648860 container attach c327f651dc7770599a188b0d743efe4f00ec13c50edf29d65af9c07044ca8f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_buck, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:14:34 np0005593232 podman[428494]: 2026-01-23 11:14:34.636192377 +0000 UTC m=+0.677443052 container died c327f651dc7770599a188b0d743efe4f00ec13c50edf29d65af9c07044ca8f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_buck, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 23 06:14:34 np0005593232 systemd[1]: var-lib-containers-storage-overlay-03caf837abe1783bd584f1884bd89f1207e303ec8260b1bbf31bb9414b7e082d-merged.mount: Deactivated successfully.
Jan 23 06:14:34 np0005593232 podman[428494]: 2026-01-23 11:14:34.679533407 +0000 UTC m=+0.720784092 container remove c327f651dc7770599a188b0d743efe4f00ec13c50edf29d65af9c07044ca8f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_buck, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 06:14:34 np0005593232 systemd[1]: libpod-conmon-c327f651dc7770599a188b0d743efe4f00ec13c50edf29d65af9c07044ca8f74.scope: Deactivated successfully.
Jan 23 06:14:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4471: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 23 06:14:34 np0005593232 podman[428534]: 2026-01-23 11:14:34.890410581 +0000 UTC m=+0.054476367 container create ecedb8454136dba1cdc75c21c21b3af0c115c16e9905577c893c57c4d2f60ec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_margulis, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef)
Jan 23 06:14:34 np0005593232 systemd[1]: Started libpod-conmon-ecedb8454136dba1cdc75c21c21b3af0c115c16e9905577c893c57c4d2f60ec8.scope.
Jan 23 06:14:34 np0005593232 podman[428534]: 2026-01-23 11:14:34.864237578 +0000 UTC m=+0.028303444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:14:34 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:14:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/164ad8d1e9194980b24648a4189fb9c887a0dca4a596dbe7099dd75b5d954f26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:14:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/164ad8d1e9194980b24648a4189fb9c887a0dca4a596dbe7099dd75b5d954f26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:14:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/164ad8d1e9194980b24648a4189fb9c887a0dca4a596dbe7099dd75b5d954f26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:14:34 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/164ad8d1e9194980b24648a4189fb9c887a0dca4a596dbe7099dd75b5d954f26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:14:35 np0005593232 podman[428534]: 2026-01-23 11:14:35.022728475 +0000 UTC m=+0.186794301 container init ecedb8454136dba1cdc75c21c21b3af0c115c16e9905577c893c57c4d2f60ec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_margulis, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:14:35 np0005593232 podman[428534]: 2026-01-23 11:14:35.032522193 +0000 UTC m=+0.196587979 container start ecedb8454136dba1cdc75c21c21b3af0c115c16e9905577c893c57c4d2f60ec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:14:35 np0005593232 podman[428534]: 2026-01-23 11:14:35.036665321 +0000 UTC m=+0.200731117 container attach ecedb8454136dba1cdc75c21c21b3af0c115c16e9905577c893c57c4d2f60ec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_margulis, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 23 06:14:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:35.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:35 np0005593232 epic_margulis[428551]: {
Jan 23 06:14:35 np0005593232 epic_margulis[428551]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 06:14:35 np0005593232 epic_margulis[428551]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:14:35 np0005593232 epic_margulis[428551]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 06:14:35 np0005593232 epic_margulis[428551]:        "osd_id": 0,
Jan 23 06:14:35 np0005593232 epic_margulis[428551]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:14:35 np0005593232 epic_margulis[428551]:        "type": "bluestore"
Jan 23 06:14:35 np0005593232 epic_margulis[428551]:    }
Jan 23 06:14:35 np0005593232 epic_margulis[428551]: }
Jan 23 06:14:35 np0005593232 systemd[1]: libpod-ecedb8454136dba1cdc75c21c21b3af0c115c16e9905577c893c57c4d2f60ec8.scope: Deactivated successfully.
Jan 23 06:14:35 np0005593232 podman[428534]: 2026-01-23 11:14:35.942490814 +0000 UTC m=+1.106556600 container died ecedb8454136dba1cdc75c21c21b3af0c115c16e9905577c893c57c4d2f60ec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_margulis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 23 06:14:35 np0005593232 systemd[1]: var-lib-containers-storage-overlay-164ad8d1e9194980b24648a4189fb9c887a0dca4a596dbe7099dd75b5d954f26-merged.mount: Deactivated successfully.
Jan 23 06:14:36 np0005593232 podman[428534]: 2026-01-23 11:14:36.018652495 +0000 UTC m=+1.182718281 container remove ecedb8454136dba1cdc75c21c21b3af0c115c16e9905577c893c57c4d2f60ec8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_margulis, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:14:36 np0005593232 systemd[1]: libpod-conmon-ecedb8454136dba1cdc75c21c21b3af0c115c16e9905577c893c57c4d2f60ec8.scope: Deactivated successfully.
Jan 23 06:14:36 np0005593232 nova_compute[250269]: 2026-01-23 11:14:36.026 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:14:36 np0005593232 podman[428578]: 2026-01-23 11:14:36.03575777 +0000 UTC m=+0.063909644 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 06:14:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 06:14:36 np0005593232 podman[428573]: 2026-01-23 11:14:36.066842252 +0000 UTC m=+0.093014640 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:14:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:14:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 06:14:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:14:36 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev e8ceefdf-496f-4b02-948c-678d50926fcf does not exist
Jan 23 06:14:36 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 4e21b882-2ba0-45ce-a825-51dc4c34002a does not exist
Jan 23 06:14:36 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 999cd800-a31f-4ae4-ad3a-49ccfa14dd4e does not exist
Jan 23 06:14:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:36.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:36 np0005593232 nova_compute[250269]: 2026-01-23 11:14:36.618 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:14:36 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:14:36 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:14:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:14:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4472: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 23 06:14:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_11:14:37
Jan 23 06:14:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 06:14:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 06:14:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['.rgw.root', 'backups', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', 'vms', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'volumes']
Jan 23 06:14:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 06:14:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:37.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:14:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:14:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:14:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:14:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:14:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:14:37 np0005593232 nova_compute[250269]: 2026-01-23 11:14:37.806 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:14:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:14:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:38.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:14:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4473: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Jan 23 06:14:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 06:14:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:14:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 06:14:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:14:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:14:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:14:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:14:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:14:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:14:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:14:39 np0005593232 nova_compute[250269]: 2026-01-23 11:14:39.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:14:39 np0005593232 nova_compute[250269]: 2026-01-23 11:14:39.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 06:14:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:39.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:40.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:40 np0005593232 nova_compute[250269]: 2026-01-23 11:14:40.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:14:40 np0005593232 nova_compute[250269]: 2026-01-23 11:14:40.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:14:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4474: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.0 KiB/s rd, 511 B/s wr, 4 op/s
Jan 23 06:14:41 np0005593232 nova_compute[250269]: 2026-01-23 11:14:41.029 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:14:41 np0005593232 nova_compute[250269]: 2026-01-23 11:14:41.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:14:41 np0005593232 nova_compute[250269]: 2026-01-23 11:14:41.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 06:14:41 np0005593232 nova_compute[250269]: 2026-01-23 11:14:41.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 06:14:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:41.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:41 np0005593232 nova_compute[250269]: 2026-01-23 11:14:41.589 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 06:14:41 np0005593232 nova_compute[250269]: 2026-01-23 11:14:41.589 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:14:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:14:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:42.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:14:42.699 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:14:42.700 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:14:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:14:42.700 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:14:42 np0005593232 nova_compute[250269]: 2026-01-23 11:14:42.808 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:14:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4475: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 06:14:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:43.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:14:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:44.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:14:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4476: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 06:14:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:45.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:46 np0005593232 nova_compute[250269]: 2026-01-23 11:14:46.030 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:14:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:14:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:46.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:14:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:14:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4477: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 06:14:47 np0005593232 nova_compute[250269]: 2026-01-23 11:14:47.293 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:14:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:14:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:47.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:14:47 np0005593232 nova_compute[250269]: 2026-01-23 11:14:47.809 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:14:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 06:14:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:14:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:14:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:14:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 7.088393355667225e-06 of space, bias 1.0, pg target 0.0021265180067001677 quantized to 32 (current 32)
Jan 23 06:14:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:14:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.003163967988088591 of space, bias 1.0, pg target 0.9491903964265773 quantized to 32 (current 32)
Jan 23 06:14:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:14:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:14:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:14:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:14:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:14:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:14:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:14:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:14:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:14:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:14:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:14:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:14:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:14:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:14:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:14:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:14:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:14:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:48.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:14:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4478: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 23 06:14:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:49.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:50.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4479: 321 pgs: 321 active+clean; 167 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 68 op/s
Jan 23 06:14:51 np0005593232 nova_compute[250269]: 2026-01-23 11:14:51.034 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:14:51 np0005593232 nova_compute[250269]: 2026-01-23 11:14:51.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:14:51 np0005593232 nova_compute[250269]: 2026-01-23 11:14:51.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 23 06:14:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:51.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:14:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:14:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:52.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:14:52 np0005593232 nova_compute[250269]: 2026-01-23 11:14:52.312 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:14:52 np0005593232 nova_compute[250269]: 2026-01-23 11:14:52.811 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:14:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4480: 321 pgs: 321 active+clean; 188 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 103 op/s
Jan 23 06:14:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:14:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:53.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:14:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:54.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4481: 321 pgs: 321 active+clean; 188 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 190 KiB/s rd, 2.0 MiB/s wr, 34 op/s
Jan 23 06:14:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:55.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:56 np0005593232 nova_compute[250269]: 2026-01-23 11:14:56.037 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:14:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:56.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:56 np0005593232 nova_compute[250269]: 2026-01-23 11:14:56.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:14:56 np0005593232 nova_compute[250269]: 2026-01-23 11:14:56.329 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:14:56 np0005593232 nova_compute[250269]: 2026-01-23 11:14:56.330 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:14:56 np0005593232 nova_compute[250269]: 2026-01-23 11:14:56.330 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:14:56 np0005593232 nova_compute[250269]: 2026-01-23 11:14:56.330 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 06:14:56 np0005593232 nova_compute[250269]: 2026-01-23 11:14:56.331 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:14:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:14:56 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #228. Immutable memtables: 0.
Jan 23 06:14:56 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:14:56.801621) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 06:14:56 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 143] Flushing memtable with next log file: 228
Jan 23 06:14:56 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166896801668, "job": 143, "event": "flush_started", "num_memtables": 1, "num_entries": 996, "num_deletes": 251, "total_data_size": 1512827, "memory_usage": 1533672, "flush_reason": "Manual Compaction"}
Jan 23 06:14:56 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 143] Level-0 flush table #229: started
Jan 23 06:14:56 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166896818386, "cf_name": "default", "job": 143, "event": "table_file_creation", "file_number": 229, "file_size": 931973, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 97245, "largest_seqno": 98240, "table_properties": {"data_size": 928020, "index_size": 1604, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10798, "raw_average_key_size": 21, "raw_value_size": 919386, "raw_average_value_size": 1802, "num_data_blocks": 69, "num_entries": 510, "num_filter_entries": 510, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769166811, "oldest_key_time": 1769166811, "file_creation_time": 1769166896, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 229, "seqno_to_time_mapping": "N/A"}}
Jan 23 06:14:56 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 143] Flush lasted 16869 microseconds, and 8011 cpu microseconds.
Jan 23 06:14:56 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 06:14:56 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:14:56.818484) [db/flush_job.cc:967] [default] [JOB 143] Level-0 flush table #229: 931973 bytes OK
Jan 23 06:14:56 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:14:56.818516) [db/memtable_list.cc:519] [default] Level-0 commit table #229 started
Jan 23 06:14:56 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:14:56.820147) [db/memtable_list.cc:722] [default] Level-0 commit table #229: memtable #1 done
Jan 23 06:14:56 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:14:56.820176) EVENT_LOG_v1 {"time_micros": 1769166896820167, "job": 143, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 06:14:56 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:14:56.820201) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 06:14:56 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 143] Try to delete WAL files size 1508148, prev total WAL file size 1508148, number of live WAL files 2.
Jan 23 06:14:56 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000225.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:14:56 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:14:56.821280) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373732' seq:72057594037927935, type:22 .. '6D6772737461740034303233' seq:0, type:0; will stop at (end)
Jan 23 06:14:56 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 144] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 06:14:56 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 143 Base level 0, inputs: [229(910KB)], [227(15MB)]
Jan 23 06:14:56 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166896821349, "job": 144, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [229], "files_L6": [227], "score": -1, "input_data_size": 16762992, "oldest_snapshot_seqno": -1}
Jan 23 06:14:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:14:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/765086566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:14:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4482: 321 pgs: 321 active+clean; 188 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 190 KiB/s rd, 2.0 MiB/s wr, 34 op/s
Jan 23 06:14:56 np0005593232 nova_compute[250269]: 2026-01-23 11:14:56.881 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:14:57 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 144] Generated table #230: 12127 keys, 13490834 bytes, temperature: kUnknown
Jan 23 06:14:57 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166897029345, "cf_name": "default", "job": 144, "event": "table_file_creation", "file_number": 230, "file_size": 13490834, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13416538, "index_size": 42887, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30341, "raw_key_size": 321283, "raw_average_key_size": 26, "raw_value_size": 13208943, "raw_average_value_size": 1089, "num_data_blocks": 1616, "num_entries": 12127, "num_filter_entries": 12127, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769166896, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 230, "seqno_to_time_mapping": "N/A"}}
Jan 23 06:14:57 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 06:14:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:14:57.029993) [db/compaction/compaction_job.cc:1663] [default] [JOB 144] Compacted 1@0 + 1@6 files to L6 => 13490834 bytes
Jan 23 06:14:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:14:57.031613) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.5 rd, 64.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 15.1 +0.0 blob) out(12.9 +0.0 blob), read-write-amplify(32.5) write-amplify(14.5) OK, records in: 12609, records dropped: 482 output_compression: NoCompression
Jan 23 06:14:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:14:57.031651) EVENT_LOG_v1 {"time_micros": 1769166897031635, "job": 144, "event": "compaction_finished", "compaction_time_micros": 208291, "compaction_time_cpu_micros": 74661, "output_level": 6, "num_output_files": 1, "total_output_size": 13490834, "num_input_records": 12609, "num_output_records": 12127, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 06:14:57 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000229.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:14:57 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166897032068, "job": 144, "event": "table_file_deletion", "file_number": 229}
Jan 23 06:14:57 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000227.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:14:57 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166897036675, "job": 144, "event": "table_file_deletion", "file_number": 227}
Jan 23 06:14:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:14:56.821167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:14:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:14:57.036789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:14:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:14:57.036796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:14:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:14:57.036799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:14:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:14:57.036801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:14:57 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:14:57.036803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:14:57 np0005593232 nova_compute[250269]: 2026-01-23 11:14:57.098 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 06:14:57 np0005593232 nova_compute[250269]: 2026-01-23 11:14:57.100 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4007MB free_disk=20.98813247680664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 06:14:57 np0005593232 nova_compute[250269]: 2026-01-23 11:14:57.100 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 23 06:14:57 np0005593232 nova_compute[250269]: 2026-01-23 11:14:57.101 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 23 06:14:57 np0005593232 nova_compute[250269]: 2026-01-23 11:14:57.193 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 23 06:14:57 np0005593232 nova_compute[250269]: 2026-01-23 11:14:57.194 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 23 06:14:57 np0005593232 nova_compute[250269]: 2026-01-23 11:14:57.212 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 23 06:14:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:57.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:14:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3967003672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:14:57 np0005593232 nova_compute[250269]: 2026-01-23 11:14:57.644 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 23 06:14:57 np0005593232 nova_compute[250269]: 2026-01-23 11:14:57.650 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 23 06:14:57 np0005593232 nova_compute[250269]: 2026-01-23 11:14:57.665 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 23 06:14:57 np0005593232 nova_compute[250269]: 2026-01-23 11:14:57.667 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 23 06:14:57 np0005593232 nova_compute[250269]: 2026-01-23 11:14:57.668 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 23 06:14:57 np0005593232 nova_compute[250269]: 2026-01-23 11:14:57.954 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:14:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:14:58.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:14:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4483: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 387 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 06:14:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:14:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:14:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:14:59.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:00.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4484: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 387 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 06:15:01 np0005593232 nova_compute[250269]: 2026-01-23 11:15:01.041 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:15:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:01.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:15:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:02.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4485: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 387 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 23 06:15:02 np0005593232 nova_compute[250269]: 2026-01-23 11:15:02.956 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:15:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:03.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:04.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4486: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 196 KiB/s rd, 107 KiB/s wr, 31 op/s
Jan 23 06:15:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:05.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:06 np0005593232 nova_compute[250269]: 2026-01-23 11:15:06.077 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:15:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:06.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:06 np0005593232 podman[428836]: 2026-01-23 11:15:06.407984559 +0000 UTC m=+0.064085279 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 23 06:15:06 np0005593232 podman[428835]: 2026-01-23 11:15:06.429213821 +0000 UTC m=+0.086612998 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 23 06:15:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:15:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4487: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 196 KiB/s rd, 107 KiB/s wr, 31 op/s
Jan 23 06:15:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:15:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:07.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:15:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:15:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:15:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:15:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:15:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:15:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:15:07 np0005593232 nova_compute[250269]: 2026-01-23 11:15:07.959 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:15:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:08.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4488: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 196 KiB/s rd, 107 KiB/s wr, 31 op/s
Jan 23 06:15:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:09.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:15:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:10.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:15:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4489: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 12 KiB/s wr, 0 op/s
Jan 23 06:15:11 np0005593232 nova_compute[250269]: 2026-01-23 11:15:11.078 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:15:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:11.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:11 np0005593232 ovn_controller[151001]: 2026-01-23T11:15:11Z|00875|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 23 06:15:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:15:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:15:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:12.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:15:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4490: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 13 KiB/s wr, 0 op/s
Jan 23 06:15:12 np0005593232 nova_compute[250269]: 2026-01-23 11:15:12.963 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:15:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:13.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:14 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #231. Immutable memtables: 0.
Jan 23 06:15:14 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:15:14.014966) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 06:15:14 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 145] Flushing memtable with next log file: 231
Jan 23 06:15:14 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166914015029, "job": 145, "event": "flush_started", "num_memtables": 1, "num_entries": 385, "num_deletes": 251, "total_data_size": 304289, "memory_usage": 312584, "flush_reason": "Manual Compaction"}
Jan 23 06:15:14 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 145] Level-0 flush table #232: started
Jan 23 06:15:14 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166914138176, "cf_name": "default", "job": 145, "event": "table_file_creation", "file_number": 232, "file_size": 301788, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 98242, "largest_seqno": 98625, "table_properties": {"data_size": 299466, "index_size": 485, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5716, "raw_average_key_size": 18, "raw_value_size": 294876, "raw_average_value_size": 963, "num_data_blocks": 21, "num_entries": 306, "num_filter_entries": 306, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769166898, "oldest_key_time": 1769166898, "file_creation_time": 1769166914, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 232, "seqno_to_time_mapping": "N/A"}}
Jan 23 06:15:14 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 145] Flush lasted 123273 microseconds, and 2029 cpu microseconds.
Jan 23 06:15:14 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 06:15:14 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:15:14.138235) [db/flush_job.cc:967] [default] [JOB 145] Level-0 flush table #232: 301788 bytes OK
Jan 23 06:15:14 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:15:14.138259) [db/memtable_list.cc:519] [default] Level-0 commit table #232 started
Jan 23 06:15:14 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:15:14.219785) [db/memtable_list.cc:722] [default] Level-0 commit table #232: memtable #1 done
Jan 23 06:15:14 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:15:14.219908) EVENT_LOG_v1 {"time_micros": 1769166914219848, "job": 145, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 06:15:14 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:15:14.219949) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 06:15:14 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 145] Try to delete WAL files size 301823, prev total WAL file size 301823, number of live WAL files 2.
Jan 23 06:15:14 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000228.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:15:14 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:15:14.220804) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039353338' seq:72057594037927935, type:22 .. '7061786F730039373930' seq:0, type:0; will stop at (end)
Jan 23 06:15:14 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 146] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 06:15:14 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 145 Base level 0, inputs: [232(294KB)], [230(12MB)]
Jan 23 06:15:14 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166914220847, "job": 146, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [232], "files_L6": [230], "score": -1, "input_data_size": 13792622, "oldest_snapshot_seqno": -1}
Jan 23 06:15:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:14.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:14 np0005593232 nova_compute[250269]: 2026-01-23 11:15:14.662 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 06:15:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4491: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 23 06:15:14 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 146] Generated table #233: 11923 keys, 11803032 bytes, temperature: kUnknown
Jan 23 06:15:14 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166914888307, "cf_name": "default", "job": 146, "event": "table_file_creation", "file_number": 233, "file_size": 11803032, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11731525, "index_size": 40610, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29829, "raw_key_size": 317745, "raw_average_key_size": 26, "raw_value_size": 11528651, "raw_average_value_size": 966, "num_data_blocks": 1514, "num_entries": 11923, "num_filter_entries": 11923, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769166914, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 233, "seqno_to_time_mapping": "N/A"}}
Jan 23 06:15:14 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 06:15:15 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:15:14.888776) [db/compaction/compaction_job.cc:1663] [default] [JOB 146] Compacted 1@0 + 1@6 files to L6 => 11803032 bytes
Jan 23 06:15:15 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:15:15.060053) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 20.7 rd, 17.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 12.9 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(84.8) write-amplify(39.1) OK, records in: 12433, records dropped: 510 output_compression: NoCompression
Jan 23 06:15:15 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:15:15.060122) EVENT_LOG_v1 {"time_micros": 1769166915060095, "job": 146, "event": "compaction_finished", "compaction_time_micros": 667644, "compaction_time_cpu_micros": 59788, "output_level": 6, "num_output_files": 1, "total_output_size": 11803032, "num_input_records": 12433, "num_output_records": 11923, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 06:15:15 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000232.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:15:15 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166915060634, "job": 146, "event": "table_file_deletion", "file_number": 232}
Jan 23 06:15:15 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000230.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:15:15 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769166915066568, "job": 146, "event": "table_file_deletion", "file_number": 230}
Jan 23 06:15:15 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:15:14.220667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:15:15 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:15:15.066658) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:15:15 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:15:15.066668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:15:15 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:15:15.066672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:15:15 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:15:15.066675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:15:15 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:15:15.066679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:15:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:15:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:15.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:15:16 np0005593232 nova_compute[250269]: 2026-01-23 11:15:16.134 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:15:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:16.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4492: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1023 B/s wr, 0 op/s
Jan 23 06:15:16 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:15:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:17.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:17 np0005593232 nova_compute[250269]: 2026-01-23 11:15:17.965 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:15:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:18.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4493: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.7 KiB/s rd, 28 KiB/s wr, 4 op/s
Jan 23 06:15:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:19.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:15:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:20.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:15:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4494: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.7 KiB/s rd, 28 KiB/s wr, 4 op/s
Jan 23 06:15:21 np0005593232 nova_compute[250269]: 2026-01-23 11:15:21.135 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:15:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:15:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:21.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:15:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:15:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:22.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4495: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.8 KiB/s rd, 28 KiB/s wr, 5 op/s
Jan 23 06:15:22 np0005593232 nova_compute[250269]: 2026-01-23 11:15:22.969 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:15:22 np0005593232 nova_compute[250269]: 2026-01-23 11:15:22.979 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:15:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:15:22.979 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=107, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=106) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 06:15:22 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:15:22.981 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 06:15:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:23.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:24 np0005593232 nova_compute[250269]: 2026-01-23 11:15:24.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:15:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:24.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4496: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.8 KiB/s rd, 27 KiB/s wr, 5 op/s
Jan 23 06:15:24 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:15:24.982 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '107'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:15:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:25.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:26 np0005593232 nova_compute[250269]: 2026-01-23 11:15:26.136 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:15:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:26.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4497: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.8 KiB/s rd, 27 KiB/s wr, 5 op/s
Jan 23 06:15:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:15:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:27.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:27 np0005593232 nova_compute[250269]: 2026-01-23 11:15:27.971 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:15:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:28.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4498: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 5.8 KiB/s rd, 27 KiB/s wr, 5 op/s
Jan 23 06:15:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:29.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:30.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4499: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 23 06:15:31 np0005593232 nova_compute[250269]: 2026-01-23 11:15:31.148 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:15:31 np0005593232 nova_compute[250269]: 2026-01-23 11:15:31.281 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:15:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:31.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:31 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:15:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:15:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:32.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:15:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4500: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.6 KiB/s rd, 0 B/s wr, 3 op/s
Jan 23 06:15:32 np0005593232 nova_compute[250269]: 2026-01-23 11:15:32.974 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:15:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:15:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:33.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:15:33 np0005593232 nova_compute[250269]: 2026-01-23 11:15:33.846 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:15:33 np0005593232 nova_compute[250269]: 2026-01-23 11:15:33.846 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 23 06:15:34 np0005593232 nova_compute[250269]: 2026-01-23 11:15:34.002 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 23 06:15:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:15:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:34.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:15:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4501: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 2 op/s
Jan 23 06:15:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:15:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:35.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:15:36 np0005593232 nova_compute[250269]: 2026-01-23 11:15:36.152 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:15:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:36.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:36 np0005593232 podman[428973]: 2026-01-23 11:15:36.701894054 +0000 UTC m=+0.054341323 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 23 06:15:36 np0005593232 podman[428972]: 2026-01-23 11:15:36.730902617 +0000 UTC m=+0.089096319 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 23 06:15:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4502: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.4 KiB/s rd, 0 B/s wr, 2 op/s
Jan 23 06:15:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:15:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_11:15:37
Jan 23 06:15:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 06:15:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 06:15:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'volumes', 'images', 'default.rgw.control', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr']
Jan 23 06:15:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 06:15:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:15:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:37.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:15:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:15:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:15:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:15:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:15:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:15:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:15:37 np0005593232 nova_compute[250269]: 2026-01-23 11:15:37.977 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:15:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 06:15:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:15:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 06:15:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:15:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:38.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4503: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.4 KiB/s rd, 597 B/s wr, 12 op/s
Jan 23 06:15:38 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:15:38 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:15:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 06:15:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:15:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 06:15:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:15:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:15:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:15:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:15:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:15:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:15:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:15:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:15:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:15:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 06:15:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:15:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 06:15:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:15:39 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 3c4c3800-237a-499f-b7c5-7b9221ac49f0 does not exist
Jan 23 06:15:39 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 878e0839-9908-4a98-bbc7-85d219f02e97 does not exist
Jan 23 06:15:39 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c1959333-596e-408d-b02e-a25647ee71a5 does not exist
Jan 23 06:15:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 06:15:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 06:15:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 06:15:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:15:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:15:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:15:39 np0005593232 nova_compute[250269]: 2026-01-23 11:15:39.449 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:15:39 np0005593232 nova_compute[250269]: 2026-01-23 11:15:39.449 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:15:39 np0005593232 nova_compute[250269]: 2026-01-23 11:15:39.449 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 06:15:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:15:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:39.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:15:39 np0005593232 podman[429261]: 2026-01-23 11:15:39.804391808 +0000 UTC m=+0.020654377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:15:40 np0005593232 podman[429261]: 2026-01-23 11:15:40.186202092 +0000 UTC m=+0.402464641 container create 675a7f66ddf7cc57bde479b7d47efe4055556aa1c5e35858e0b540566e5ae657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_matsumoto, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:15:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:15:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:15:40 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:15:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:40.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:40 np0005593232 systemd[1]: Started libpod-conmon-675a7f66ddf7cc57bde479b7d47efe4055556aa1c5e35858e0b540566e5ae657.scope.
Jan 23 06:15:40 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:15:40 np0005593232 podman[429261]: 2026-01-23 11:15:40.793597636 +0000 UTC m=+1.009860205 container init 675a7f66ddf7cc57bde479b7d47efe4055556aa1c5e35858e0b540566e5ae657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_matsumoto, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 06:15:40 np0005593232 podman[429261]: 2026-01-23 11:15:40.801308575 +0000 UTC m=+1.017571124 container start 675a7f66ddf7cc57bde479b7d47efe4055556aa1c5e35858e0b540566e5ae657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_matsumoto, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:15:40 np0005593232 vigilant_matsumoto[429277]: 167 167
Jan 23 06:15:40 np0005593232 systemd[1]: libpod-675a7f66ddf7cc57bde479b7d47efe4055556aa1c5e35858e0b540566e5ae657.scope: Deactivated successfully.
Jan 23 06:15:40 np0005593232 podman[429261]: 2026-01-23 11:15:40.827943161 +0000 UTC m=+1.044205730 container attach 675a7f66ddf7cc57bde479b7d47efe4055556aa1c5e35858e0b540566e5ae657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_matsumoto, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:15:40 np0005593232 podman[429261]: 2026-01-23 11:15:40.82896568 +0000 UTC m=+1.045228229 container died 675a7f66ddf7cc57bde479b7d47efe4055556aa1c5e35858e0b540566e5ae657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Jan 23 06:15:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4504: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.4 KiB/s rd, 597 B/s wr, 12 op/s
Jan 23 06:15:41 np0005593232 systemd[1]: var-lib-containers-storage-overlay-de83afc23e3c3a69ecbcde1dccb53dce0e5dd8c3916623141febafe23fb2b5d8-merged.mount: Deactivated successfully.
Jan 23 06:15:41 np0005593232 podman[429261]: 2026-01-23 11:15:41.045213266 +0000 UTC m=+1.261475815 container remove 675a7f66ddf7cc57bde479b7d47efe4055556aa1c5e35858e0b540566e5ae657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_matsumoto, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Jan 23 06:15:41 np0005593232 systemd[1]: libpod-conmon-675a7f66ddf7cc57bde479b7d47efe4055556aa1c5e35858e0b540566e5ae657.scope: Deactivated successfully.
Jan 23 06:15:41 np0005593232 nova_compute[250269]: 2026-01-23 11:15:41.154 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:15:41 np0005593232 podman[429305]: 2026-01-23 11:15:41.228634561 +0000 UTC m=+0.039801221 container create cd5d2213c90e4384bee05dc3090835c2e73641613c29f1282197475e31816058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_liskov, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 23 06:15:41 np0005593232 systemd[1]: Started libpod-conmon-cd5d2213c90e4384bee05dc3090835c2e73641613c29f1282197475e31816058.scope.
Jan 23 06:15:41 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:15:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d791f093266f08c0d962aaef6128980da0017a32f0d9032dd0b841cf49a6341f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:15:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d791f093266f08c0d962aaef6128980da0017a32f0d9032dd0b841cf49a6341f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:15:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d791f093266f08c0d962aaef6128980da0017a32f0d9032dd0b841cf49a6341f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:15:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d791f093266f08c0d962aaef6128980da0017a32f0d9032dd0b841cf49a6341f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:15:41 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d791f093266f08c0d962aaef6128980da0017a32f0d9032dd0b841cf49a6341f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 06:15:41 np0005593232 nova_compute[250269]: 2026-01-23 11:15:41.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:15:41 np0005593232 podman[429305]: 2026-01-23 11:15:41.304169244 +0000 UTC m=+0.115335934 container init cd5d2213c90e4384bee05dc3090835c2e73641613c29f1282197475e31816058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 23 06:15:41 np0005593232 podman[429305]: 2026-01-23 11:15:41.211149144 +0000 UTC m=+0.022315814 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:15:41 np0005593232 podman[429305]: 2026-01-23 11:15:41.310651628 +0000 UTC m=+0.121818288 container start cd5d2213c90e4384bee05dc3090835c2e73641613c29f1282197475e31816058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_liskov, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 06:15:41 np0005593232 podman[429305]: 2026-01-23 11:15:41.315102034 +0000 UTC m=+0.126268694 container attach cd5d2213c90e4384bee05dc3090835c2e73641613c29f1282197475e31816058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_liskov, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 06:15:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:15:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:41.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:15:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:15:42 np0005593232 xenodochial_liskov[429321]: --> passed data devices: 0 physical, 1 LVM
Jan 23 06:15:42 np0005593232 xenodochial_liskov[429321]: --> relative data size: 1.0
Jan 23 06:15:42 np0005593232 xenodochial_liskov[429321]: --> All data devices are unavailable
Jan 23 06:15:42 np0005593232 systemd[1]: libpod-cd5d2213c90e4384bee05dc3090835c2e73641613c29f1282197475e31816058.scope: Deactivated successfully.
Jan 23 06:15:42 np0005593232 podman[429305]: 2026-01-23 11:15:42.188555919 +0000 UTC m=+0.999722579 container died cd5d2213c90e4384bee05dc3090835c2e73641613c29f1282197475e31816058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 23 06:15:42 np0005593232 nova_compute[250269]: 2026-01-23 11:15:42.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:15:42 np0005593232 nova_compute[250269]: 2026-01-23 11:15:42.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:15:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:42.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:42 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d791f093266f08c0d962aaef6128980da0017a32f0d9032dd0b841cf49a6341f-merged.mount: Deactivated successfully.
Jan 23 06:15:42 np0005593232 podman[429305]: 2026-01-23 11:15:42.380192416 +0000 UTC m=+1.191359076 container remove cd5d2213c90e4384bee05dc3090835c2e73641613c29f1282197475e31816058 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_liskov, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:15:42 np0005593232 systemd[1]: libpod-conmon-cd5d2213c90e4384bee05dc3090835c2e73641613c29f1282197475e31816058.scope: Deactivated successfully.
Jan 23 06:15:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:15:42.700 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:15:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:15:42.701 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:15:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:15:42.701 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:15:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4505: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 9.4 KiB/s rd, 597 B/s wr, 12 op/s
Jan 23 06:15:42 np0005593232 podman[429540]: 2026-01-23 11:15:42.947488212 +0000 UTC m=+0.037477244 container create 4357c3d57f79d3e5c86c0324dbccec3972399b41806b8e68b63c6ed8ca346f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:15:42 np0005593232 nova_compute[250269]: 2026-01-23 11:15:42.979 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:15:42 np0005593232 systemd[1]: Started libpod-conmon-4357c3d57f79d3e5c86c0324dbccec3972399b41806b8e68b63c6ed8ca346f7e.scope.
Jan 23 06:15:43 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:15:43 np0005593232 podman[429540]: 2026-01-23 11:15:43.025076084 +0000 UTC m=+0.115065136 container init 4357c3d57f79d3e5c86c0324dbccec3972399b41806b8e68b63c6ed8ca346f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_heyrovsky, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:15:43 np0005593232 podman[429540]: 2026-01-23 11:15:42.93189698 +0000 UTC m=+0.021886042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:15:43 np0005593232 podman[429540]: 2026-01-23 11:15:43.030988092 +0000 UTC m=+0.120977124 container start 4357c3d57f79d3e5c86c0324dbccec3972399b41806b8e68b63c6ed8ca346f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_heyrovsky, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:15:43 np0005593232 podman[429540]: 2026-01-23 11:15:43.034398019 +0000 UTC m=+0.124387081 container attach 4357c3d57f79d3e5c86c0324dbccec3972399b41806b8e68b63c6ed8ca346f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 06:15:43 np0005593232 xenodochial_heyrovsky[429555]: 167 167
Jan 23 06:15:43 np0005593232 systemd[1]: libpod-4357c3d57f79d3e5c86c0324dbccec3972399b41806b8e68b63c6ed8ca346f7e.scope: Deactivated successfully.
Jan 23 06:15:43 np0005593232 podman[429540]: 2026-01-23 11:15:43.037429335 +0000 UTC m=+0.127418367 container died 4357c3d57f79d3e5c86c0324dbccec3972399b41806b8e68b63c6ed8ca346f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_heyrovsky, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 23 06:15:43 np0005593232 systemd[1]: var-lib-containers-storage-overlay-816785e8873a81d81396d962d0f341c5fd909f80085a012c4984a1c2c8aba347-merged.mount: Deactivated successfully.
Jan 23 06:15:43 np0005593232 podman[429540]: 2026-01-23 11:15:43.077379038 +0000 UTC m=+0.167368070 container remove 4357c3d57f79d3e5c86c0324dbccec3972399b41806b8e68b63c6ed8ca346f7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_heyrovsky, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 23 06:15:43 np0005593232 systemd[1]: libpod-conmon-4357c3d57f79d3e5c86c0324dbccec3972399b41806b8e68b63c6ed8ca346f7e.scope: Deactivated successfully.
Jan 23 06:15:43 np0005593232 podman[429580]: 2026-01-23 11:15:43.233302372 +0000 UTC m=+0.037641249 container create 412b9efeacc48e315ceac7069aa6488e38b2619d518b1e7c6ff3bb9ec756f612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 06:15:43 np0005593232 systemd[1]: Started libpod-conmon-412b9efeacc48e315ceac7069aa6488e38b2619d518b1e7c6ff3bb9ec756f612.scope.
Jan 23 06:15:43 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:15:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27caf8484a00e46e61ee0e5afe1ac49d2a3d817b5af3ccbdc78813834f038336/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:15:43 np0005593232 nova_compute[250269]: 2026-01-23 11:15:43.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:15:43 np0005593232 nova_compute[250269]: 2026-01-23 11:15:43.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 06:15:43 np0005593232 nova_compute[250269]: 2026-01-23 11:15:43.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 06:15:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27caf8484a00e46e61ee0e5afe1ac49d2a3d817b5af3ccbdc78813834f038336/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:15:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27caf8484a00e46e61ee0e5afe1ac49d2a3d817b5af3ccbdc78813834f038336/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:15:43 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27caf8484a00e46e61ee0e5afe1ac49d2a3d817b5af3ccbdc78813834f038336/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:15:43 np0005593232 podman[429580]: 2026-01-23 11:15:43.303844264 +0000 UTC m=+0.108183161 container init 412b9efeacc48e315ceac7069aa6488e38b2619d518b1e7c6ff3bb9ec756f612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 06:15:43 np0005593232 podman[429580]: 2026-01-23 11:15:43.311639485 +0000 UTC m=+0.115978362 container start 412b9efeacc48e315ceac7069aa6488e38b2619d518b1e7c6ff3bb9ec756f612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Jan 23 06:15:43 np0005593232 podman[429580]: 2026-01-23 11:15:43.217757491 +0000 UTC m=+0.022096388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:15:43 np0005593232 podman[429580]: 2026-01-23 11:15:43.315583477 +0000 UTC m=+0.119922374 container attach 412b9efeacc48e315ceac7069aa6488e38b2619d518b1e7c6ff3bb9ec756f612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banzai, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:15:43 np0005593232 nova_compute[250269]: 2026-01-23 11:15:43.321 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 06:15:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:43.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]: {
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:    "0": [
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:        {
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:            "devices": [
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:                "/dev/loop3"
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:            ],
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:            "lv_name": "ceph_lv0",
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:            "lv_size": "7511998464",
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:            "name": "ceph_lv0",
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:            "tags": {
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:                "ceph.cephx_lockbox_secret": "",
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:                "ceph.cluster_name": "ceph",
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:                "ceph.crush_device_class": "",
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:                "ceph.encrypted": "0",
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:                "ceph.osd_id": "0",
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:                "ceph.type": "block",
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:                "ceph.vdo": "0"
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:            },
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:            "type": "block",
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:            "vg_name": "ceph_vg0"
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:        }
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]:    ]
Jan 23 06:15:44 np0005593232 crazy_banzai[429597]: }
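The JSON emitted above (by a short-lived `ceph-volume lvm list` container) carries the same metadata twice: once as the flat, comma-separated `lv_tags` string and once as the structured `tags` object. A minimal sketch of how that flat string maps onto the structured form — `parse_lv_tags` is a hypothetical helper, not part of ceph-volume:

```python
def parse_lv_tags(lv_tags: str) -> dict:
    """Split an LVM tag string like 'ceph.osd_id=0,ceph.type=block'
    into a {key: value} dict; empty values stay as empty strings."""
    tags = {}
    for entry in lv_tags.split(","):
        if not entry:
            continue
        key, _, value = entry.partition("=")
        tags[key] = value
    return tags

# Abbreviated sample taken from the lv_tags line logged above.
sample = ("ceph.block_device=/dev/ceph_vg0/ceph_lv0,"
          "ceph.cluster_name=ceph,ceph.crush_device_class=,"
          "ceph.osd_id=0,ceph.type=block")
parsed = parse_lv_tags(sample)
```

Note that empty tags such as `ceph.crush_device_class=` come through as empty strings, matching the `""` values in the `tags` object above.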
Jan 23 06:15:44 np0005593232 systemd[1]: libpod-412b9efeacc48e315ceac7069aa6488e38b2619d518b1e7c6ff3bb9ec756f612.scope: Deactivated successfully.
Jan 23 06:15:44 np0005593232 podman[429580]: 2026-01-23 11:15:44.091591667 +0000 UTC m=+0.895930564 container died 412b9efeacc48e315ceac7069aa6488e38b2619d518b1e7c6ff3bb9ec756f612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banzai, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 23 06:15:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:44.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:44 np0005593232 systemd[1]: var-lib-containers-storage-overlay-27caf8484a00e46e61ee0e5afe1ac49d2a3d817b5af3ccbdc78813834f038336-merged.mount: Deactivated successfully.
Jan 23 06:15:44 np0005593232 podman[429580]: 2026-01-23 11:15:44.672717817 +0000 UTC m=+1.477056694 container remove 412b9efeacc48e315ceac7069aa6488e38b2619d518b1e7c6ff3bb9ec756f612 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:15:44 np0005593232 systemd[1]: libpod-conmon-412b9efeacc48e315ceac7069aa6488e38b2619d518b1e7c6ff3bb9ec756f612.scope: Deactivated successfully.
Jan 23 06:15:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4506: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.0 KiB/s rd, 597 B/s wr, 9 op/s
Jan 23 06:15:45 np0005593232 podman[429761]: 2026-01-23 11:15:45.273958137 +0000 UTC m=+0.039425180 container create 74fcd5fae7897e69e28c884697f0bf7845f3157426a53bd32e802b7f836b82c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lovelace, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 06:15:45 np0005593232 systemd[1]: Started libpod-conmon-74fcd5fae7897e69e28c884697f0bf7845f3157426a53bd32e802b7f836b82c2.scope.
Jan 23 06:15:45 np0005593232 podman[429761]: 2026-01-23 11:15:45.256633645 +0000 UTC m=+0.022100718 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:15:45 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:15:45 np0005593232 podman[429761]: 2026-01-23 11:15:45.375327793 +0000 UTC m=+0.140794856 container init 74fcd5fae7897e69e28c884697f0bf7845f3157426a53bd32e802b7f836b82c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 23 06:15:45 np0005593232 podman[429761]: 2026-01-23 11:15:45.381712934 +0000 UTC m=+0.147179977 container start 74fcd5fae7897e69e28c884697f0bf7845f3157426a53bd32e802b7f836b82c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 06:15:45 np0005593232 podman[429761]: 2026-01-23 11:15:45.384947246 +0000 UTC m=+0.150414299 container attach 74fcd5fae7897e69e28c884697f0bf7845f3157426a53bd32e802b7f836b82c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 23 06:15:45 np0005593232 clever_lovelace[429778]: 167 167
Jan 23 06:15:45 np0005593232 systemd[1]: libpod-74fcd5fae7897e69e28c884697f0bf7845f3157426a53bd32e802b7f836b82c2.scope: Deactivated successfully.
Jan 23 06:15:45 np0005593232 podman[429761]: 2026-01-23 11:15:45.390852343 +0000 UTC m=+0.156319446 container died 74fcd5fae7897e69e28c884697f0bf7845f3157426a53bd32e802b7f836b82c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lovelace, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 23 06:15:45 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b668f861295ec174c5692cd71e338eed33573e7b55d7f30996f2d7b921765985-merged.mount: Deactivated successfully.
Jan 23 06:15:45 np0005593232 podman[429761]: 2026-01-23 11:15:45.439546775 +0000 UTC m=+0.205013848 container remove 74fcd5fae7897e69e28c884697f0bf7845f3157426a53bd32e802b7f836b82c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lovelace, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:15:45 np0005593232 systemd[1]: libpod-conmon-74fcd5fae7897e69e28c884697f0bf7845f3157426a53bd32e802b7f836b82c2.scope: Deactivated successfully.
Jan 23 06:15:45 np0005593232 podman[429802]: 2026-01-23 11:15:45.61839002 +0000 UTC m=+0.037687151 container create 5332275037d154c832171ce4a79a3cea9ccca9fe73ddac4e771b0081757883d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_snyder, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:15:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:15:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:45.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:15:45 np0005593232 systemd[1]: Started libpod-conmon-5332275037d154c832171ce4a79a3cea9ccca9fe73ddac4e771b0081757883d6.scope.
Jan 23 06:15:45 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:15:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f9cdaff5b2f809a9a8aad468cbd81222c7792bd818a67689c23b2651e00669/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:15:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f9cdaff5b2f809a9a8aad468cbd81222c7792bd818a67689c23b2651e00669/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:15:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f9cdaff5b2f809a9a8aad468cbd81222c7792bd818a67689c23b2651e00669/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:15:45 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95f9cdaff5b2f809a9a8aad468cbd81222c7792bd818a67689c23b2651e00669/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:15:45 np0005593232 podman[429802]: 2026-01-23 11:15:45.69100617 +0000 UTC m=+0.110303321 container init 5332275037d154c832171ce4a79a3cea9ccca9fe73ddac4e771b0081757883d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_snyder, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 06:15:45 np0005593232 podman[429802]: 2026-01-23 11:15:45.601712997 +0000 UTC m=+0.021010148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:15:45 np0005593232 podman[429802]: 2026-01-23 11:15:45.700279734 +0000 UTC m=+0.119576865 container start 5332275037d154c832171ce4a79a3cea9ccca9fe73ddac4e771b0081757883d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_snyder, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 23 06:15:45 np0005593232 podman[429802]: 2026-01-23 11:15:45.703466614 +0000 UTC m=+0.122763765 container attach 5332275037d154c832171ce4a79a3cea9ccca9fe73ddac4e771b0081757883d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 23 06:15:46 np0005593232 nova_compute[250269]: 2026-01-23 11:15:46.159 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:15:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:46.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:46 np0005593232 strange_snyder[429818]: {
Jan 23 06:15:46 np0005593232 strange_snyder[429818]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 06:15:46 np0005593232 strange_snyder[429818]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:15:46 np0005593232 strange_snyder[429818]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 06:15:46 np0005593232 strange_snyder[429818]:        "osd_id": 0,
Jan 23 06:15:46 np0005593232 strange_snyder[429818]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:15:46 np0005593232 strange_snyder[429818]:        "type": "bluestore"
Jan 23 06:15:46 np0005593232 strange_snyder[429818]:    }
Jan 23 06:15:46 np0005593232 strange_snyder[429818]: }
Jan 23 06:15:46 np0005593232 systemd[1]: libpod-5332275037d154c832171ce4a79a3cea9ccca9fe73ddac4e771b0081757883d6.scope: Deactivated successfully.
Jan 23 06:15:46 np0005593232 podman[429802]: 2026-01-23 11:15:46.547479292 +0000 UTC m=+0.966776463 container died 5332275037d154c832171ce4a79a3cea9ccca9fe73ddac4e771b0081757883d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_snyder, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 23 06:15:46 np0005593232 systemd[1]: var-lib-containers-storage-overlay-95f9cdaff5b2f809a9a8aad468cbd81222c7792bd818a67689c23b2651e00669-merged.mount: Deactivated successfully.
Jan 23 06:15:46 np0005593232 podman[429802]: 2026-01-23 11:15:46.606892678 +0000 UTC m=+1.026189809 container remove 5332275037d154c832171ce4a79a3cea9ccca9fe73ddac4e771b0081757883d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_snyder, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Jan 23 06:15:46 np0005593232 systemd[1]: libpod-conmon-5332275037d154c832171ce4a79a3cea9ccca9fe73ddac4e771b0081757883d6.scope: Deactivated successfully.
Jan 23 06:15:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 06:15:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4507: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.0 KiB/s rd, 597 B/s wr, 9 op/s
Jan 23 06:15:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:15:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:15:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 06:15:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:15:47 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a1f49717-0a7b-4147-b51e-38b9ce43bd0e does not exist
Jan 23 06:15:47 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev fed1fa3b-078e-4f7a-adfd-1c8ebdc2428c does not exist
Jan 23 06:15:47 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev edb7548c-b830-407f-b985-06a79ae7c72b does not exist
Jan 23 06:15:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:47.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:47 np0005593232 nova_compute[250269]: 2026-01-23 11:15:47.981 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:15:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 06:15:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:15:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:15:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:15:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:15:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:15:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0043440946049692905 of space, bias 1.0, pg target 1.3032283814907872 quantized to 32 (current 32)
Jan 23 06:15:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:15:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 23 06:15:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:15:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 23 06:15:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:15:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 06:15:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:15:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:15:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:15:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 06:15:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:15:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 06:15:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:15:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:15:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:15:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
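Each pg_autoscaler line above computes a raw pg target from the pool's share of cluster space (times its bias) and then "quantizes" it. A toy sketch of the power-of-two rounding step, under the assumption that it rounds to the nearest power of two with a floor of 1; the real autoscaler also applies per-pool minimums and a change threshold, which is why most pools above keep their current 32 despite tiny raw targets:

```python
import math

def quantize_pg_num(raw_target: float, floor: int = 1) -> int:
    """Round a raw pg target to the nearest power of two, never below `floor`.

    Hypothetical illustration only -- not the ceph-mgr implementation.
    """
    target = max(raw_target, floor)
    lower = 2 ** math.floor(math.log2(target))  # power of two at or below target
    upper = lower * 2                           # next power of two above
    return lower if (target - lower) <= (upper - target) else upper
```

For example, the `.mgr` pool's raw target of ~0.006 lands on 1 under this scheme, consistent with its logged `quantized to 1 (current 1)`.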
Jan 23 06:15:48 np0005593232 nova_compute[250269]: 2026-01-23 11:15:48.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 06:15:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:48.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:15:48 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:15:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4508: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 7.0 KiB/s rd, 597 B/s wr, 9 op/s
Jan 23 06:15:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:49.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:50.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4509: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:15:51 np0005593232 nova_compute[250269]: 2026-01-23 11:15:51.158 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:15:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:51.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:15:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:15:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:52.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:15:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4510: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:15:52 np0005593232 nova_compute[250269]: 2026-01-23 11:15:52.984 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:15:53 np0005593232 nova_compute[250269]: 2026-01-23 11:15:53.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 06:15:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:53.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:54.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4511: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:15:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:15:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:55.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:15:56 np0005593232 nova_compute[250269]: 2026-01-23 11:15:56.159 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:15:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:56.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4512: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:15:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:15:57 np0005593232 nova_compute[250269]: 2026-01-23 11:15:57.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 06:15:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:15:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:57.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:15:57 np0005593232 nova_compute[250269]: 2026-01-23 11:15:57.986 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:15:58 np0005593232 ovn_controller[151001]: 2026-01-23T11:15:58Z|00876|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Jan 23 06:15:58 np0005593232 nova_compute[250269]: 2026-01-23 11:15:58.365 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:15:58 np0005593232 nova_compute[250269]: 2026-01-23 11:15:58.366 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:15:58 np0005593232 nova_compute[250269]: 2026-01-23 11:15:58.366 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:15:58 np0005593232 nova_compute[250269]: 2026-01-23 11:15:58.366 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 06:15:58 np0005593232 nova_compute[250269]: 2026-01-23 11:15:58.366 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:15:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:15:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:15:58.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:15:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:15:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3829235904' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:15:58 np0005593232 nova_compute[250269]: 2026-01-23 11:15:58.834 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:15:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4513: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:15:59 np0005593232 nova_compute[250269]: 2026-01-23 11:15:59.116 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 06:15:59 np0005593232 nova_compute[250269]: 2026-01-23 11:15:59.117 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4031MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 06:15:59 np0005593232 nova_compute[250269]: 2026-01-23 11:15:59.118 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:15:59 np0005593232 nova_compute[250269]: 2026-01-23 11:15:59.118 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:15:59 np0005593232 nova_compute[250269]: 2026-01-23 11:15:59.234 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 06:15:59 np0005593232 nova_compute[250269]: 2026-01-23 11:15:59.234 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 06:15:59 np0005593232 nova_compute[250269]: 2026-01-23 11:15:59.291 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:15:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:15:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:15:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:15:59.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:15:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:15:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/541178950' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:15:59 np0005593232 nova_compute[250269]: 2026-01-23 11:15:59.740 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:15:59 np0005593232 nova_compute[250269]: 2026-01-23 11:15:59.747 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:16:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:00.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4514: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:16:01 np0005593232 nova_compute[250269]: 2026-01-23 11:16:01.161 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:01.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:02 np0005593232 nova_compute[250269]: 2026-01-23 11:16:02.173 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:16:02 np0005593232 nova_compute[250269]: 2026-01-23 11:16:02.175 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 06:16:02 np0005593232 nova_compute[250269]: 2026-01-23 11:16:02.175 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:16:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:16:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:02.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4515: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:16:02 np0005593232 nova_compute[250269]: 2026-01-23 11:16:02.989 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:03.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:16:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:04.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:16:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4516: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:16:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:05.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:06 np0005593232 nova_compute[250269]: 2026-01-23 11:16:06.165 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:06.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4517: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:16:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:16:07 np0005593232 podman[430010]: 2026-01-23 11:16:07.395466532 +0000 UTC m=+0.049990209 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 23 06:16:07 np0005593232 podman[430009]: 2026-01-23 11:16:07.43168075 +0000 UTC m=+0.085755455 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 06:16:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:16:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:07.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:16:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:16:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:16:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:16:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:16:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:16:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:16:07 np0005593232 nova_compute[250269]: 2026-01-23 11:16:07.992 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:08.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4518: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 KiB/s rd, 12 KiB/s wr, 5 op/s
Jan 23 06:16:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:09.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:10.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4519: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 3.2 KiB/s rd, 12 KiB/s wr, 5 op/s
Jan 23 06:16:11 np0005593232 nova_compute[250269]: 2026-01-23 11:16:11.167 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:16:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:11.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:16:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:16:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:12.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4520: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Jan 23 06:16:12 np0005593232 nova_compute[250269]: 2026-01-23 11:16:12.994 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:13.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:14.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4521: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Jan 23 06:16:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:15.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:16 np0005593232 nova_compute[250269]: 2026-01-23 11:16:16.169 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:16.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4522: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Jan 23 06:16:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:16:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:17.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:17 np0005593232 nova_compute[250269]: 2026-01-23 11:16:17.997 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:18.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4523: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Jan 23 06:16:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:19.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:20.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4524: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 1.9 MiB/s rd, 85 B/s wr, 68 op/s
Jan 23 06:16:21 np0005593232 nova_compute[250269]: 2026-01-23 11:16:21.172 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:16:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:21.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:16:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:16:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:16:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:22.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:16:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4525: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 2.3 MiB/s rd, 13 KiB/s wr, 105 op/s
Jan 23 06:16:23 np0005593232 nova_compute[250269]: 2026-01-23 11:16:23.001 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:23.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:16:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:24.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:16:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4526: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 372 KiB/s rd, 13 KiB/s wr, 37 op/s
Jan 23 06:16:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:16:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:25.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:16:26 np0005593232 nova_compute[250269]: 2026-01-23 11:16:26.173 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:26.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4527: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 372 KiB/s rd, 13 KiB/s wr, 37 op/s
Jan 23 06:16:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:16:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:27.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:28 np0005593232 nova_compute[250269]: 2026-01-23 11:16:28.028 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:16:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:28.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:16:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4528: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 540 KiB/s rd, 13 KiB/s wr, 49 op/s
Jan 23 06:16:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:29.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:30.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4529: 321 pgs: 321 active+clean; 200 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 540 KiB/s rd, 13 KiB/s wr, 49 op/s
Jan 23 06:16:31 np0005593232 nova_compute[250269]: 2026-01-23 11:16:31.175 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:31.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:16:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:32.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4530: 321 pgs: 321 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 540 KiB/s rd, 23 KiB/s wr, 50 op/s
Jan 23 06:16:33 np0005593232 nova_compute[250269]: 2026-01-23 11:16:33.031 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:33.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:34.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4531: 321 pgs: 321 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 167 KiB/s rd, 10 KiB/s wr, 13 op/s
Jan 23 06:16:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:16:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:35.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:16:36 np0005593232 nova_compute[250269]: 2026-01-23 11:16:36.177 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:36.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4532: 321 pgs: 321 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 167 KiB/s rd, 10 KiB/s wr, 13 op/s
Jan 23 06:16:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:16:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_11:16:37
Jan 23 06:16:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 06:16:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 06:16:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'default.rgw.meta', 'images', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'volumes']
Jan 23 06:16:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 06:16:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:16:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:16:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:16:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:16:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:16:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:16:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:37.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:38 np0005593232 nova_compute[250269]: 2026-01-23 11:16:38.034 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:38.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:38 np0005593232 podman[430120]: 2026-01-23 11:16:38.431928148 +0000 UTC m=+0.086926218 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 23 06:16:38 np0005593232 podman[430119]: 2026-01-23 11:16:38.498197548 +0000 UTC m=+0.147516226 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 23 06:16:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4533: 321 pgs: 321 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 167 KiB/s rd, 21 KiB/s wr, 14 op/s
Jan 23 06:16:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 06:16:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:16:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 06:16:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:16:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:16:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:16:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:16:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:16:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:16:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:16:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:39.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:40.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4534: 321 pgs: 321 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 21 KiB/s wr, 2 op/s
Jan 23 06:16:41 np0005593232 nova_compute[250269]: 2026-01-23 11:16:41.180 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:16:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2976828886' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:16:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:41.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:16:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:42.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:16:42.701 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:16:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:16:42.702 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:16:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:16:42.702 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:16:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4535: 321 pgs: 321 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 22 KiB/s wr, 2 op/s
Jan 23 06:16:43 np0005593232 nova_compute[250269]: 2026-01-23 11:16:43.037 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:16:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:43.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:16:44 np0005593232 nova_compute[250269]: 2026-01-23 11:16:44.176 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:16:44 np0005593232 nova_compute[250269]: 2026-01-23 11:16:44.177 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:16:44 np0005593232 nova_compute[250269]: 2026-01-23 11:16:44.177 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:16:44 np0005593232 nova_compute[250269]: 2026-01-23 11:16:44.177 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:16:44 np0005593232 nova_compute[250269]: 2026-01-23 11:16:44.177 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 06:16:44 np0005593232 nova_compute[250269]: 2026-01-23 11:16:44.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:16:44 np0005593232 nova_compute[250269]: 2026-01-23 11:16:44.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 06:16:44 np0005593232 nova_compute[250269]: 2026-01-23 11:16:44.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 06:16:44 np0005593232 nova_compute[250269]: 2026-01-23 11:16:44.331 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 06:16:44 np0005593232 nova_compute[250269]: 2026-01-23 11:16:44.331 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:16:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:44.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4536: 321 pgs: 321 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s wr, 1 op/s
Jan 23 06:16:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:45.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:16:46.162 161902 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=108, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'a2:14:5d', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'e2:f0:52:36:44:3c'}, ipsec=False) old=SB_Global(nb_cfg=107) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 23 06:16:46 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:16:46.163 161902 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 23 06:16:46 np0005593232 nova_compute[250269]: 2026-01-23 11:16:46.164 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:46 np0005593232 nova_compute[250269]: 2026-01-23 11:16:46.181 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:16:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:46.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:16:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4537: 321 pgs: 321 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 11 KiB/s wr, 1 op/s
Jan 23 06:16:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:16:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:16:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:47.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:16:48 np0005593232 nova_compute[250269]: 2026-01-23 11:16:48.039 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 06:16:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:16:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:16:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:16:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 7.088393355667225e-06 of space, bias 1.0, pg target 0.0021265180067001677 quantized to 32 (current 32)
Jan 23 06:16:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:16:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.004346093895402941 of space, bias 1.0, pg target 1.3038281686208821 quantized to 32 (current 32)
Jan 23 06:16:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:16:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4344349060115393e-05 quantized to 32 (current 32)
Jan 23 06:16:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:16:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.5690396790084683 quantized to 32 (current 32)
Jan 23 06:16:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:16:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017390191699236926 quantized to 16 (current 32)
Jan 23 06:16:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:16:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:16:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:16:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021737739624046157 quantized to 32 (current 32)
Jan 23 06:16:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:16:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018477078680439233 quantized to 32 (current 32)
Jan 23 06:16:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:16:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:16:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:16:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.00043475479248092315 quantized to 32 (current 32)
Jan 23 06:16:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:16:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:48.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:16:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 06:16:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:16:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 06:16:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:16:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4538: 321 pgs: 321 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 212 KiB/s rd, 11 KiB/s wr, 7 op/s
Jan 23 06:16:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:16:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:16:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 06:16:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:16:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 06:16:49 np0005593232 nova_compute[250269]: 2026-01-23 11:16:49.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:16:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:16:49 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 3fe5f7ab-f15a-495a-bf3b-75ad01a0922d does not exist
Jan 23 06:16:49 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d797ef08-dac7-472e-94d9-a21e57ce9c63 does not exist
Jan 23 06:16:49 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b298a3ef-293c-4f99-8786-d1e6ad0b4547 does not exist
Jan 23 06:16:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 06:16:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 06:16:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 06:16:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:16:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:16:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:16:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:16:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:16:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:16:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:16:49 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:16:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:49.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:50 np0005593232 podman[430614]: 2026-01-23 11:16:49.960950753 +0000 UTC m=+0.025855675 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:16:50 np0005593232 podman[430614]: 2026-01-23 11:16:50.399106796 +0000 UTC m=+0.464011728 container create 3a9fa86e45bbfad83c43156c1cf05f2c1c890ceaa34dd1959c7107c9f3d87545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_neumann, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:16:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:50.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4539: 321 pgs: 321 active+clean; 202 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 212 KiB/s rd, 1023 B/s wr, 6 op/s
Jan 23 06:16:50 np0005593232 systemd[1]: Started libpod-conmon-3a9fa86e45bbfad83c43156c1cf05f2c1c890ceaa34dd1959c7107c9f3d87545.scope.
Jan 23 06:16:51 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:16:51 np0005593232 nova_compute[250269]: 2026-01-23 11:16:51.183 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:51 np0005593232 podman[430614]: 2026-01-23 11:16:51.222486859 +0000 UTC m=+1.287391771 container init 3a9fa86e45bbfad83c43156c1cf05f2c1c890ceaa34dd1959c7107c9f3d87545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:16:51 np0005593232 podman[430614]: 2026-01-23 11:16:51.231031531 +0000 UTC m=+1.295936483 container start 3a9fa86e45bbfad83c43156c1cf05f2c1c890ceaa34dd1959c7107c9f3d87545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:16:51 np0005593232 loving_neumann[430631]: 167 167
Jan 23 06:16:51 np0005593232 systemd[1]: libpod-3a9fa86e45bbfad83c43156c1cf05f2c1c890ceaa34dd1959c7107c9f3d87545.scope: Deactivated successfully.
Jan 23 06:16:51 np0005593232 conmon[430631]: conmon 3a9fa86e45bbfad83c43 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a9fa86e45bbfad83c43156c1cf05f2c1c890ceaa34dd1959c7107c9f3d87545.scope/container/memory.events
Jan 23 06:16:51 np0005593232 podman[430614]: 2026-01-23 11:16:51.468915961 +0000 UTC m=+1.533820953 container attach 3a9fa86e45bbfad83c43156c1cf05f2c1c890ceaa34dd1959c7107c9f3d87545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Jan 23 06:16:51 np0005593232 podman[430614]: 2026-01-23 11:16:51.469553429 +0000 UTC m=+1.534458371 container died 3a9fa86e45bbfad83c43156c1cf05f2c1c890ceaa34dd1959c7107c9f3d87545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_neumann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Jan 23 06:16:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:51.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:51 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f85be7164e07732e59fcfe5aaaafdb3d6aec748ac92ec3e08815eebb321efad2-merged.mount: Deactivated successfully.
Jan 23 06:16:51 np0005593232 podman[430614]: 2026-01-23 11:16:51.974894928 +0000 UTC m=+2.039799830 container remove 3a9fa86e45bbfad83c43156c1cf05f2c1c890ceaa34dd1959c7107c9f3d87545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 23 06:16:52 np0005593232 systemd[1]: libpod-conmon-3a9fa86e45bbfad83c43156c1cf05f2c1c890ceaa34dd1959c7107c9f3d87545.scope: Deactivated successfully.
Jan 23 06:16:52 np0005593232 podman[430654]: 2026-01-23 11:16:52.140932709 +0000 UTC m=+0.044129693 container create 9a4dd0327f5f97c34198f5c4a02facda963eb72e5106e92a49ad45c2fbd2e507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 23 06:16:52 np0005593232 systemd[1]: Started libpod-conmon-9a4dd0327f5f97c34198f5c4a02facda963eb72e5106e92a49ad45c2fbd2e507.scope.
Jan 23 06:16:52 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:16:52 np0005593232 podman[430654]: 2026-01-23 11:16:52.120587662 +0000 UTC m=+0.023784666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:16:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61642ae356ab2a76943b27e9658c72aae57146682a87184cadcba9353e8142b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:16:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61642ae356ab2a76943b27e9658c72aae57146682a87184cadcba9353e8142b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:16:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61642ae356ab2a76943b27e9658c72aae57146682a87184cadcba9353e8142b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:16:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61642ae356ab2a76943b27e9658c72aae57146682a87184cadcba9353e8142b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:16:52 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61642ae356ab2a76943b27e9658c72aae57146682a87184cadcba9353e8142b4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 06:16:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:16:52 np0005593232 podman[430654]: 2026-01-23 11:16:52.390710717 +0000 UTC m=+0.293907761 container init 9a4dd0327f5f97c34198f5c4a02facda963eb72e5106e92a49ad45c2fbd2e507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 23 06:16:52 np0005593232 podman[430654]: 2026-01-23 11:16:52.399754804 +0000 UTC m=+0.302951788 container start 9a4dd0327f5f97c34198f5c4a02facda963eb72e5106e92a49ad45c2fbd2e507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Jan 23 06:16:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:52.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:52 np0005593232 podman[430654]: 2026-01-23 11:16:52.543568304 +0000 UTC m=+0.446765318 container attach 9a4dd0327f5f97c34198f5c4a02facda963eb72e5106e92a49ad45c2fbd2e507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:16:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4540: 321 pgs: 321 active+clean; 137 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 232 KiB/s rd, 2.0 KiB/s wr, 34 op/s
Jan 23 06:16:53 np0005593232 nova_compute[250269]: 2026-01-23 11:16:53.043 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:53 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:16:53.165 161902 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d80bc768-e67f-4e48-bcf3-42912cda98f1, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '108'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 23 06:16:53 np0005593232 nifty_hypatia[430670]: --> passed data devices: 0 physical, 1 LVM
Jan 23 06:16:53 np0005593232 nifty_hypatia[430670]: --> relative data size: 1.0
Jan 23 06:16:53 np0005593232 nifty_hypatia[430670]: --> All data devices are unavailable
Jan 23 06:16:53 np0005593232 systemd[1]: libpod-9a4dd0327f5f97c34198f5c4a02facda963eb72e5106e92a49ad45c2fbd2e507.scope: Deactivated successfully.
Jan 23 06:16:53 np0005593232 podman[430654]: 2026-01-23 11:16:53.274036721 +0000 UTC m=+1.177233735 container died 9a4dd0327f5f97c34198f5c4a02facda963eb72e5106e92a49ad45c2fbd2e507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 06:16:53 np0005593232 nova_compute[250269]: 2026-01-23 11:16:53.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:16:53 np0005593232 systemd[1]: var-lib-containers-storage-overlay-61642ae356ab2a76943b27e9658c72aae57146682a87184cadcba9353e8142b4-merged.mount: Deactivated successfully.
Jan 23 06:16:53 np0005593232 podman[430654]: 2026-01-23 11:16:53.39909767 +0000 UTC m=+1.302294654 container remove 9a4dd0327f5f97c34198f5c4a02facda963eb72e5106e92a49ad45c2fbd2e507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Jan 23 06:16:53 np0005593232 systemd[1]: libpod-conmon-9a4dd0327f5f97c34198f5c4a02facda963eb72e5106e92a49ad45c2fbd2e507.scope: Deactivated successfully.
Jan 23 06:16:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:53.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:54 np0005593232 podman[430840]: 2026-01-23 11:16:53.980284961 +0000 UTC m=+0.023808606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:16:54 np0005593232 podman[430840]: 2026-01-23 11:16:54.41199634 +0000 UTC m=+0.455520005 container create a821fc750bebe2c8677045eb16493cc91d03bfff6b589b75269d0cd441f07b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bardeen, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 23 06:16:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:16:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:54.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:16:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4541: 321 pgs: 321 active+clean; 137 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 232 KiB/s rd, 1023 B/s wr, 34 op/s
Jan 23 06:16:54 np0005593232 systemd[1]: Started libpod-conmon-a821fc750bebe2c8677045eb16493cc91d03bfff6b589b75269d0cd441f07b02.scope.
Jan 23 06:16:55 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:16:55 np0005593232 podman[430840]: 2026-01-23 11:16:55.07209103 +0000 UTC m=+1.115614695 container init a821fc750bebe2c8677045eb16493cc91d03bfff6b589b75269d0cd441f07b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bardeen, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 23 06:16:55 np0005593232 podman[430840]: 2026-01-23 11:16:55.079987184 +0000 UTC m=+1.123510819 container start a821fc750bebe2c8677045eb16493cc91d03bfff6b589b75269d0cd441f07b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 06:16:55 np0005593232 festive_bardeen[430858]: 167 167
Jan 23 06:16:55 np0005593232 systemd[1]: libpod-a821fc750bebe2c8677045eb16493cc91d03bfff6b589b75269d0cd441f07b02.scope: Deactivated successfully.
Jan 23 06:16:55 np0005593232 podman[430840]: 2026-01-23 11:16:55.099325243 +0000 UTC m=+1.142848888 container attach a821fc750bebe2c8677045eb16493cc91d03bfff6b589b75269d0cd441f07b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bardeen, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Jan 23 06:16:55 np0005593232 podman[430840]: 2026-01-23 11:16:55.099810027 +0000 UTC m=+1.143333652 container died a821fc750bebe2c8677045eb16493cc91d03bfff6b589b75269d0cd441f07b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:16:55 np0005593232 systemd[1]: var-lib-containers-storage-overlay-9fd2f9e8a0b8fb0d7d7e13cb430834195b57ad7181fe931ef3c94df2331188e4-merged.mount: Deactivated successfully.
Jan 23 06:16:55 np0005593232 podman[430840]: 2026-01-23 11:16:55.184957743 +0000 UTC m=+1.228481378 container remove a821fc750bebe2c8677045eb16493cc91d03bfff6b589b75269d0cd441f07b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:16:55 np0005593232 systemd[1]: libpod-conmon-a821fc750bebe2c8677045eb16493cc91d03bfff6b589b75269d0cd441f07b02.scope: Deactivated successfully.
Jan 23 06:16:55 np0005593232 podman[430883]: 2026-01-23 11:16:55.350998304 +0000 UTC m=+0.044940806 container create 190f21307163bcb11e95c914cd6a7b5735b13c5cc43f3512109cf9425681aff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Jan 23 06:16:55 np0005593232 systemd[1]: Started libpod-conmon-190f21307163bcb11e95c914cd6a7b5735b13c5cc43f3512109cf9425681aff2.scope.
Jan 23 06:16:55 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:16:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a646782c104a89a3341ffd597b3567976d1f1e783145a9299180af9fea597f2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:16:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a646782c104a89a3341ffd597b3567976d1f1e783145a9299180af9fea597f2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:16:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a646782c104a89a3341ffd597b3567976d1f1e783145a9299180af9fea597f2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:16:55 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a646782c104a89a3341ffd597b3567976d1f1e783145a9299180af9fea597f2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:16:55 np0005593232 podman[430883]: 2026-01-23 11:16:55.333037295 +0000 UTC m=+0.026979827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:16:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:55.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:55 np0005593232 podman[430883]: 2026-01-23 11:16:55.717312289 +0000 UTC m=+0.411254801 container init 190f21307163bcb11e95c914cd6a7b5735b13c5cc43f3512109cf9425681aff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 23 06:16:55 np0005593232 podman[430883]: 2026-01-23 11:16:55.724989157 +0000 UTC m=+0.418931679 container start 190f21307163bcb11e95c914cd6a7b5735b13c5cc43f3512109cf9425681aff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bose, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:16:55 np0005593232 podman[430883]: 2026-01-23 11:16:55.729455383 +0000 UTC m=+0.423397895 container attach 190f21307163bcb11e95c914cd6a7b5735b13c5cc43f3512109cf9425681aff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:16:56 np0005593232 nova_compute[250269]: 2026-01-23 11:16:56.185 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:56.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]: {
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:    "0": [
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:        {
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:            "devices": [
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:                "/dev/loop3"
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:            ],
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:            "lv_name": "ceph_lv0",
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:            "lv_size": "7511998464",
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:            "name": "ceph_lv0",
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:            "tags": {
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:                "ceph.cephx_lockbox_secret": "",
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:                "ceph.cluster_name": "ceph",
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:                "ceph.crush_device_class": "",
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:                "ceph.encrypted": "0",
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:                "ceph.osd_id": "0",
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:                "ceph.type": "block",
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:                "ceph.vdo": "0"
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:            },
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:            "type": "block",
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:            "vg_name": "ceph_vg0"
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:        }
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]:    ]
Jan 23 06:16:56 np0005593232 heuristic_bose[430899]: }
Jan 23 06:16:56 np0005593232 systemd[1]: libpod-190f21307163bcb11e95c914cd6a7b5735b13c5cc43f3512109cf9425681aff2.scope: Deactivated successfully.
Jan 23 06:16:56 np0005593232 podman[430883]: 2026-01-23 11:16:56.563470599 +0000 UTC m=+1.257413131 container died 190f21307163bcb11e95c914cd6a7b5735b13c5cc43f3512109cf9425681aff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bose, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:16:56 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a646782c104a89a3341ffd597b3567976d1f1e783145a9299180af9fea597f2b-merged.mount: Deactivated successfully.
Jan 23 06:16:56 np0005593232 podman[430883]: 2026-01-23 11:16:56.61744313 +0000 UTC m=+1.311385632 container remove 190f21307163bcb11e95c914cd6a7b5735b13c5cc43f3512109cf9425681aff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_bose, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:16:56 np0005593232 systemd[1]: libpod-conmon-190f21307163bcb11e95c914cd6a7b5735b13c5cc43f3512109cf9425681aff2.scope: Deactivated successfully.
Jan 23 06:16:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4542: 321 pgs: 321 active+clean; 137 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 232 KiB/s rd, 1023 B/s wr, 34 op/s
Jan 23 06:16:57 np0005593232 podman[431063]: 2026-01-23 11:16:57.227921622 +0000 UTC m=+0.041058096 container create 0a955d21e18102c87642e6626e56d4f191925df91e48664d6b6e6ae8bab757a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:16:57 np0005593232 systemd[1]: Started libpod-conmon-0a955d21e18102c87642e6626e56d4f191925df91e48664d6b6e6ae8bab757a6.scope.
Jan 23 06:16:57 np0005593232 nova_compute[250269]: 2026-01-23 11:16:57.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:16:57 np0005593232 podman[431063]: 2026-01-23 11:16:57.209204351 +0000 UTC m=+0.022340845 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:16:57 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:16:57 np0005593232 nova_compute[250269]: 2026-01-23 11:16:57.339 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:16:57 np0005593232 nova_compute[250269]: 2026-01-23 11:16:57.340 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:16:57 np0005593232 nova_compute[250269]: 2026-01-23 11:16:57.340 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:16:57 np0005593232 nova_compute[250269]: 2026-01-23 11:16:57.340 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 06:16:57 np0005593232 nova_compute[250269]: 2026-01-23 11:16:57.341 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:16:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:16:57 np0005593232 podman[431063]: 2026-01-23 11:16:57.415903836 +0000 UTC m=+0.229040310 container init 0a955d21e18102c87642e6626e56d4f191925df91e48664d6b6e6ae8bab757a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jepsen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 06:16:57 np0005593232 podman[431063]: 2026-01-23 11:16:57.422962697 +0000 UTC m=+0.236099161 container start 0a955d21e18102c87642e6626e56d4f191925df91e48664d6b6e6ae8bab757a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jepsen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:16:57 np0005593232 charming_jepsen[431079]: 167 167
Jan 23 06:16:57 np0005593232 systemd[1]: libpod-0a955d21e18102c87642e6626e56d4f191925df91e48664d6b6e6ae8bab757a6.scope: Deactivated successfully.
Jan 23 06:16:57 np0005593232 conmon[431079]: conmon 0a955d21e18102c87642 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0a955d21e18102c87642e6626e56d4f191925df91e48664d6b6e6ae8bab757a6.scope/container/memory.events
Jan 23 06:16:57 np0005593232 podman[431063]: 2026-01-23 11:16:57.652716876 +0000 UTC m=+0.465853360 container attach 0a955d21e18102c87642e6626e56d4f191925df91e48664d6b6e6ae8bab757a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jepsen, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:16:57 np0005593232 podman[431063]: 2026-01-23 11:16:57.653764516 +0000 UTC m=+0.466900990 container died 0a955d21e18102c87642e6626e56d4f191925df91e48664d6b6e6ae8bab757a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:16:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:16:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:57.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:16:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:16:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2578273595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:16:57 np0005593232 systemd[1]: var-lib-containers-storage-overlay-6f594344a35c254c5840bda6a9d6cdddf415569188c02fa51ea4c4d8d3e4c9d5-merged.mount: Deactivated successfully.
Jan 23 06:16:57 np0005593232 nova_compute[250269]: 2026-01-23 11:16:57.787 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:16:57 np0005593232 nova_compute[250269]: 2026-01-23 11:16:57.954 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 06:16:57 np0005593232 nova_compute[250269]: 2026-01-23 11:16:57.955 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3972MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 06:16:57 np0005593232 nova_compute[250269]: 2026-01-23 11:16:57.955 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:16:57 np0005593232 nova_compute[250269]: 2026-01-23 11:16:57.956 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:16:58 np0005593232 nova_compute[250269]: 2026-01-23 11:16:58.050 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 06:16:58 np0005593232 nova_compute[250269]: 2026-01-23 11:16:58.051 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 06:16:58 np0005593232 nova_compute[250269]: 2026-01-23 11:16:58.073 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:16:58 np0005593232 nova_compute[250269]: 2026-01-23 11:16:58.102 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:58 np0005593232 podman[431063]: 2026-01-23 11:16:58.394180034 +0000 UTC m=+1.207316498 container remove 0a955d21e18102c87642e6626e56d4f191925df91e48664d6b6e6ae8bab757a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jepsen, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:16:58 np0005593232 systemd[1]: libpod-conmon-0a955d21e18102c87642e6626e56d4f191925df91e48664d6b6e6ae8bab757a6.scope: Deactivated successfully.
Jan 23 06:16:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:16:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:16:58.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:16:58 np0005593232 nova_compute[250269]: 2026-01-23 11:16:58.536 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:58 np0005593232 podman[431145]: 2026-01-23 11:16:58.567233935 +0000 UTC m=+0.043924458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:16:58 np0005593232 nova_compute[250269]: 2026-01-23 11:16:58.666 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:16:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:16:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1509296538' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:16:58 np0005593232 nova_compute[250269]: 2026-01-23 11:16:58.700 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:16:58 np0005593232 nova_compute[250269]: 2026-01-23 11:16:58.706 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:16:58 np0005593232 nova_compute[250269]: 2026-01-23 11:16:58.750 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:16:58 np0005593232 nova_compute[250269]: 2026-01-23 11:16:58.752 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 06:16:58 np0005593232 nova_compute[250269]: 2026-01-23 11:16:58.752 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:16:58 np0005593232 podman[431145]: 2026-01-23 11:16:58.762175216 +0000 UTC m=+0.238865719 container create 7b3f82be70a9cc8900ea925359b01aa9e6c490288824f7375b2b8be5439d1aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ellis, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Jan 23 06:16:58 np0005593232 systemd[1]: Started libpod-conmon-7b3f82be70a9cc8900ea925359b01aa9e6c490288824f7375b2b8be5439d1aef.scope.
Jan 23 06:16:58 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:16:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ec526ddb44cf322cdbfc3c9fd7e3c14f94f8354f0110bcb325afb08e56fe64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:16:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ec526ddb44cf322cdbfc3c9fd7e3c14f94f8354f0110bcb325afb08e56fe64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:16:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ec526ddb44cf322cdbfc3c9fd7e3c14f94f8354f0110bcb325afb08e56fe64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:16:58 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98ec526ddb44cf322cdbfc3c9fd7e3c14f94f8354f0110bcb325afb08e56fe64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:16:58 np0005593232 podman[431145]: 2026-01-23 11:16:58.907178671 +0000 UTC m=+0.383869194 container init 7b3f82be70a9cc8900ea925359b01aa9e6c490288824f7375b2b8be5439d1aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 06:16:58 np0005593232 podman[431145]: 2026-01-23 11:16:58.91491691 +0000 UTC m=+0.391607413 container start 7b3f82be70a9cc8900ea925359b01aa9e6c490288824f7375b2b8be5439d1aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ellis, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Jan 23 06:16:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4543: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 233 KiB/s rd, 1.2 KiB/s wr, 35 op/s
Jan 23 06:16:58 np0005593232 podman[431145]: 2026-01-23 11:16:58.930314297 +0000 UTC m=+0.407004810 container attach 7b3f82be70a9cc8900ea925359b01aa9e6c490288824f7375b2b8be5439d1aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ellis, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 06:16:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:16:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:16:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:16:59.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:16:59 np0005593232 wonderful_ellis[431165]: {
Jan 23 06:16:59 np0005593232 wonderful_ellis[431165]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 06:16:59 np0005593232 wonderful_ellis[431165]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:16:59 np0005593232 wonderful_ellis[431165]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 06:16:59 np0005593232 wonderful_ellis[431165]:        "osd_id": 0,
Jan 23 06:16:59 np0005593232 wonderful_ellis[431165]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:16:59 np0005593232 wonderful_ellis[431165]:        "type": "bluestore"
Jan 23 06:16:59 np0005593232 wonderful_ellis[431165]:    }
Jan 23 06:16:59 np0005593232 wonderful_ellis[431165]: }
Jan 23 06:16:59 np0005593232 systemd[1]: libpod-7b3f82be70a9cc8900ea925359b01aa9e6c490288824f7375b2b8be5439d1aef.scope: Deactivated successfully.
Jan 23 06:16:59 np0005593232 podman[431145]: 2026-01-23 11:16:59.765416903 +0000 UTC m=+1.242107446 container died 7b3f82be70a9cc8900ea925359b01aa9e6c490288824f7375b2b8be5439d1aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 23 06:17:00 np0005593232 systemd[1]: var-lib-containers-storage-overlay-98ec526ddb44cf322cdbfc3c9fd7e3c14f94f8354f0110bcb325afb08e56fe64-merged.mount: Deactivated successfully.
Jan 23 06:17:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:00.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:00 np0005593232 podman[431145]: 2026-01-23 11:17:00.677099292 +0000 UTC m=+2.153789795 container remove 7b3f82be70a9cc8900ea925359b01aa9e6c490288824f7375b2b8be5439d1aef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:17:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 06:17:00 np0005593232 systemd[1]: libpod-conmon-7b3f82be70a9cc8900ea925359b01aa9e6c490288824f7375b2b8be5439d1aef.scope: Deactivated successfully.
Jan 23 06:17:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4544: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 23 06:17:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:17:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 06:17:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:17:01 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 3bd672bd-586e-4393-be08-d1d3c6802d35 does not exist
Jan 23 06:17:01 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev d7bf5d51-fadc-4402-8b5e-3efe9c6a8c6a does not exist
Jan 23 06:17:01 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 381d5b42-7560-4fdf-83e1-644416efb860 does not exist
Jan 23 06:17:01 np0005593232 nova_compute[250269]: 2026-01-23 11:17:01.187 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:17:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:17:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:01.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:17:02 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:17:02 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:17:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:17:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:02.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4545: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 23 06:17:03 np0005593232 nova_compute[250269]: 2026-01-23 11:17:03.104 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:17:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:17:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:03.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:17:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:04.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4546: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 597 B/s rd, 170 B/s wr, 1 op/s
Jan 23 06:17:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:05.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:06 np0005593232 nova_compute[250269]: 2026-01-23 11:17:06.189 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:17:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:06.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4547: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 597 B/s rd, 170 B/s wr, 1 op/s
Jan 23 06:17:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:17:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:17:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:17:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:17:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:17:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:17:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:17:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:07.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:08 np0005593232 nova_compute[250269]: 2026-01-23 11:17:08.107 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:17:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:08.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4548: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail; 597 B/s rd, 170 B/s wr, 1 op/s
Jan 23 06:17:09 np0005593232 podman[431304]: 2026-01-23 11:17:09.414023021 +0000 UTC m=+0.062741461 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 23 06:17:09 np0005593232 podman[431303]: 2026-01-23 11:17:09.428714008 +0000 UTC m=+0.091210279 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 23 06:17:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:09.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:10.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4549: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:11 np0005593232 nova_compute[250269]: 2026-01-23 11:17:11.195 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:17:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:11.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:17:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:12.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4550: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:13 np0005593232 nova_compute[250269]: 2026-01-23 11:17:13.108 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:17:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:13.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:14.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4551: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:15.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:15 np0005593232 nova_compute[250269]: 2026-01-23 11:17:15.748 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:17:16 np0005593232 nova_compute[250269]: 2026-01-23 11:17:16.194 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:17:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:16.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4552: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:17:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:17:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:17.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:17:18 np0005593232 nova_compute[250269]: 2026-01-23 11:17:18.110 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:17:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:17:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:18.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:17:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4553: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:19.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:20.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4554: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:21 np0005593232 nova_compute[250269]: 2026-01-23 11:17:21.196 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:17:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:21.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:17:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:22.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4555: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:23 np0005593232 nova_compute[250269]: 2026-01-23 11:17:23.114 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:17:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:23.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:17:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:24.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:17:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4556: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:25.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:26 np0005593232 nova_compute[250269]: 2026-01-23 11:17:26.197 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:17:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:17:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:26.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:17:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4557: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:17:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:27.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:28 np0005593232 nova_compute[250269]: 2026-01-23 11:17:28.118 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:17:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:28.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4558: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:29.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:30.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4559: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:31 np0005593232 nova_compute[250269]: 2026-01-23 11:17:31.198 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:17:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:31.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:17:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:32.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4560: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:33 np0005593232 nova_compute[250269]: 2026-01-23 11:17:33.120 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:17:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:33.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:34.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4561: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:17:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:35.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:17:36 np0005593232 nova_compute[250269]: 2026-01-23 11:17:36.200 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:17:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:17:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:36.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:17:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4562: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:17:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_11:17:37
Jan 23 06:17:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 06:17:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 06:17:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.control', '.rgw.root', 'images', 'backups', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms']
Jan 23 06:17:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 06:17:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:17:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:17:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:17:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:17:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:17:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:17:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:17:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:37.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:17:38 np0005593232 nova_compute[250269]: 2026-01-23 11:17:38.123 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:17:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:38.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4563: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 06:17:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:17:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 06:17:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:17:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:17:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:17:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:17:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:17:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:17:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:17:39 np0005593232 ovn_controller[151001]: 2026-01-23T11:17:39Z|00877|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 23 06:17:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:17:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:39.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:17:40 np0005593232 nova_compute[250269]: 2026-01-23 11:17:40.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:17:40 np0005593232 nova_compute[250269]: 2026-01-23 11:17:40.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 06:17:40 np0005593232 podman[431417]: 2026-01-23 11:17:40.407773227 +0000 UTC m=+0.061240189 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 23 06:17:40 np0005593232 podman[431416]: 2026-01-23 11:17:40.443973484 +0000 UTC m=+0.096764897 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 23 06:17:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:17:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:40.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:17:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4564: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:41 np0005593232 nova_compute[250269]: 2026-01-23 11:17:41.201 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:17:41 np0005593232 nova_compute[250269]: 2026-01-23 11:17:41.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:17:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:41.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:42 np0005593232 nova_compute[250269]: 2026-01-23 11:17:42.287 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:17:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:17:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:42.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:17:42.702 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:17:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:17:42.703 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:17:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:17:42.703 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:17:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4565: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:43 np0005593232 nova_compute[250269]: 2026-01-23 11:17:43.125 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:17:43 np0005593232 nova_compute[250269]: 2026-01-23 11:17:43.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:17:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:43.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:44.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4566: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:45 np0005593232 nova_compute[250269]: 2026-01-23 11:17:45.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:17:45 np0005593232 nova_compute[250269]: 2026-01-23 11:17:45.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 06:17:45 np0005593232 nova_compute[250269]: 2026-01-23 11:17:45.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 06:17:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:17:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:45.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:17:45 np0005593232 nova_compute[250269]: 2026-01-23 11:17:45.960 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 06:17:46 np0005593232 nova_compute[250269]: 2026-01-23 11:17:46.202 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:17:46 np0005593232 nova_compute[250269]: 2026-01-23 11:17:46.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:17:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:46.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4567: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:17:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:17:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:47.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:17:48 np0005593232 nova_compute[250269]: 2026-01-23 11:17:48.129 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:17:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 06:17:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:17:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:17:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:17:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:17:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:17:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 06:17:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:17:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:17:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:17:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:17:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:17:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:17:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:17:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:17:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:17:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:17:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:17:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:17:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:17:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:17:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:17:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:17:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:17:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:48.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:17:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4568: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:17:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:49.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:17:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:50.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4569: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:51 np0005593232 nova_compute[250269]: 2026-01-23 11:17:51.205 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:17:51 np0005593232 nova_compute[250269]: 2026-01-23 11:17:51.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:17:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:51.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:17:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:52.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4570: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:53 np0005593232 nova_compute[250269]: 2026-01-23 11:17:53.131 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:17:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:17:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:53.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:17:54 np0005593232 nova_compute[250269]: 2026-01-23 11:17:54.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:17:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:54.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4571: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:17:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:55.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:17:56 np0005593232 nova_compute[250269]: 2026-01-23 11:17:56.207 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:17:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:56.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4572: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:17:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:57.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:58 np0005593232 nova_compute[250269]: 2026-01-23 11:17:58.135 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:17:58 np0005593232 nova_compute[250269]: 2026-01-23 11:17:58.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:17:58 np0005593232 nova_compute[250269]: 2026-01-23 11:17:58.422 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:17:58 np0005593232 nova_compute[250269]: 2026-01-23 11:17:58.422 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:17:58 np0005593232 nova_compute[250269]: 2026-01-23 11:17:58.423 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:17:58 np0005593232 nova_compute[250269]: 2026-01-23 11:17:58.423 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 06:17:58 np0005593232 nova_compute[250269]: 2026-01-23 11:17:58.424 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:17:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:17:58.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:17:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3376831593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:17:58 np0005593232 nova_compute[250269]: 2026-01-23 11:17:58.927 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:17:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4573: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:17:59 np0005593232 nova_compute[250269]: 2026-01-23 11:17:59.078 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 06:17:59 np0005593232 nova_compute[250269]: 2026-01-23 11:17:59.079 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4061MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 06:17:59 np0005593232 nova_compute[250269]: 2026-01-23 11:17:59.080 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:17:59 np0005593232 nova_compute[250269]: 2026-01-23 11:17:59.080 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:17:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:17:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:17:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:17:59.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:17:59 np0005593232 nova_compute[250269]: 2026-01-23 11:17:59.924 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 06:17:59 np0005593232 nova_compute[250269]: 2026-01-23 11:17:59.925 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 06:18:00 np0005593232 nova_compute[250269]: 2026-01-23 11:18:00.451 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing inventories for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 23 06:18:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:00.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:00 np0005593232 nova_compute[250269]: 2026-01-23 11:18:00.617 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating ProviderTree inventory for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 23 06:18:00 np0005593232 nova_compute[250269]: 2026-01-23 11:18:00.617 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Updating inventory in ProviderTree for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 23 06:18:00 np0005593232 nova_compute[250269]: 2026-01-23 11:18:00.858 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing aggregate associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 23 06:18:00 np0005593232 nova_compute[250269]: 2026-01-23 11:18:00.879 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Refreshing trait associations for resource provider 0e4a8508-835c-4c0a-aa74-aae2c6536573, traits: COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_TRUSTED_CERTS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_QCOW2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 23 06:18:00 np0005593232 nova_compute[250269]: 2026-01-23 11:18:00.927 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:18:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4574: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:01 np0005593232 nova_compute[250269]: 2026-01-23 11:18:01.210 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:18:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:18:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/995716462' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:18:01 np0005593232 nova_compute[250269]: 2026-01-23 11:18:01.386 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:18:01 np0005593232 nova_compute[250269]: 2026-01-23 11:18:01.392 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:18:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:18:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:01.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:18:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0) v1
Jan 23 06:18:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:18:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0) v1
Jan 23 06:18:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:18:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:18:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:02.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:02 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:18:02 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:18:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4575: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:18:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:18:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 06:18:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:18:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 06:18:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:18:03 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b3eec2ee-046a-42da-bb43-dc1b243bf258 does not exist
Jan 23 06:18:03 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 4cc5237a-31ab-429c-bc19-2846ae77213e does not exist
Jan 23 06:18:03 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 625d1948-3047-4cbb-b52f-7104780cce85 does not exist
Jan 23 06:18:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 06:18:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 06:18:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 06:18:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:18:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:18:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:18:03 np0005593232 nova_compute[250269]: 2026-01-23 11:18:03.137 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:18:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:18:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:18:03 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:18:03 np0005593232 podman[431894]: 2026-01-23 11:18:03.70384059 +0000 UTC m=+0.063090521 container create 5530a10c8ab7040352eda153294e6dfa94a198713407c4f25bf9bed900f0c1ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:18:03 np0005593232 systemd[1]: Started libpod-conmon-5530a10c8ab7040352eda153294e6dfa94a198713407c4f25bf9bed900f0c1ca.scope.
Jan 23 06:18:03 np0005593232 podman[431894]: 2026-01-23 11:18:03.676106233 +0000 UTC m=+0.035356244 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:18:03 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:18:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:18:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:03.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:18:03 np0005593232 podman[431894]: 2026-01-23 11:18:03.814776478 +0000 UTC m=+0.174026479 container init 5530a10c8ab7040352eda153294e6dfa94a198713407c4f25bf9bed900f0c1ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_roentgen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 06:18:03 np0005593232 podman[431894]: 2026-01-23 11:18:03.82119147 +0000 UTC m=+0.180441431 container start 5530a10c8ab7040352eda153294e6dfa94a198713407c4f25bf9bed900f0c1ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_roentgen, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Jan 23 06:18:03 np0005593232 podman[431894]: 2026-01-23 11:18:03.825314667 +0000 UTC m=+0.184564608 container attach 5530a10c8ab7040352eda153294e6dfa94a198713407c4f25bf9bed900f0c1ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_roentgen, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:18:03 np0005593232 systemd[1]: libpod-5530a10c8ab7040352eda153294e6dfa94a198713407c4f25bf9bed900f0c1ca.scope: Deactivated successfully.
Jan 23 06:18:03 np0005593232 kind_roentgen[431910]: 167 167
Jan 23 06:18:03 np0005593232 conmon[431910]: conmon 5530a10c8ab7040352ed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5530a10c8ab7040352eda153294e6dfa94a198713407c4f25bf9bed900f0c1ca.scope/container/memory.events
Jan 23 06:18:03 np0005593232 podman[431894]: 2026-01-23 11:18:03.830510595 +0000 UTC m=+0.189760526 container died 5530a10c8ab7040352eda153294e6dfa94a198713407c4f25bf9bed900f0c1ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_roentgen, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:18:03 np0005593232 systemd[1]: var-lib-containers-storage-overlay-e50d838c1cd767f979a285a8f4dacd5d0157571cb36bab0f00eb85dc89fc945f-merged.mount: Deactivated successfully.
Jan 23 06:18:03 np0005593232 podman[431894]: 2026-01-23 11:18:03.874223355 +0000 UTC m=+0.233473286 container remove 5530a10c8ab7040352eda153294e6dfa94a198713407c4f25bf9bed900f0c1ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_roentgen, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 23 06:18:03 np0005593232 systemd[1]: libpod-conmon-5530a10c8ab7040352eda153294e6dfa94a198713407c4f25bf9bed900f0c1ca.scope: Deactivated successfully.
Jan 23 06:18:04 np0005593232 podman[431931]: 2026-01-23 11:18:04.119192806 +0000 UTC m=+0.078042445 container create 9b0633c609fc3f31e6f14f165517d26e3d9028d5541ebeee4229714eb32992ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:18:04 np0005593232 systemd[1]: Started libpod-conmon-9b0633c609fc3f31e6f14f165517d26e3d9028d5541ebeee4229714eb32992ae.scope.
Jan 23 06:18:04 np0005593232 podman[431931]: 2026-01-23 11:18:04.085460429 +0000 UTC m=+0.044310118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:18:04 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:18:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0428c189657f2710f192341af74c3283d75f8780ba36e4e1e8b5144a920b4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:18:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0428c189657f2710f192341af74c3283d75f8780ba36e4e1e8b5144a920b4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:18:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0428c189657f2710f192341af74c3283d75f8780ba36e4e1e8b5144a920b4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:18:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0428c189657f2710f192341af74c3283d75f8780ba36e4e1e8b5144a920b4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:18:04 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0428c189657f2710f192341af74c3283d75f8780ba36e4e1e8b5144a920b4d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 06:18:04 np0005593232 podman[431931]: 2026-01-23 11:18:04.237647587 +0000 UTC m=+0.196497286 container init 9b0633c609fc3f31e6f14f165517d26e3d9028d5541ebeee4229714eb32992ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Jan 23 06:18:04 np0005593232 podman[431931]: 2026-01-23 11:18:04.245369236 +0000 UTC m=+0.204218875 container start 9b0633c609fc3f31e6f14f165517d26e3d9028d5541ebeee4229714eb32992ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 23 06:18:04 np0005593232 podman[431931]: 2026-01-23 11:18:04.249947276 +0000 UTC m=+0.208796995 container attach 9b0633c609fc3f31e6f14f165517d26e3d9028d5541ebeee4229714eb32992ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:18:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:04.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4576: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:05 np0005593232 affectionate_germain[431948]: --> passed data devices: 0 physical, 1 LVM
Jan 23 06:18:05 np0005593232 affectionate_germain[431948]: --> relative data size: 1.0
Jan 23 06:18:05 np0005593232 affectionate_germain[431948]: --> All data devices are unavailable
Jan 23 06:18:05 np0005593232 systemd[1]: libpod-9b0633c609fc3f31e6f14f165517d26e3d9028d5541ebeee4229714eb32992ae.scope: Deactivated successfully.
Jan 23 06:18:05 np0005593232 podman[431931]: 2026-01-23 11:18:05.13747787 +0000 UTC m=+1.096327489 container died 9b0633c609fc3f31e6f14f165517d26e3d9028d5541ebeee4229714eb32992ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 23 06:18:05 np0005593232 systemd[1]: var-lib-containers-storage-overlay-7a0428c189657f2710f192341af74c3283d75f8780ba36e4e1e8b5144a920b4d-merged.mount: Deactivated successfully.
Jan 23 06:18:05 np0005593232 podman[431931]: 2026-01-23 11:18:05.208846455 +0000 UTC m=+1.167696064 container remove 9b0633c609fc3f31e6f14f165517d26e3d9028d5541ebeee4229714eb32992ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_germain, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:18:05 np0005593232 systemd[1]: libpod-conmon-9b0633c609fc3f31e6f14f165517d26e3d9028d5541ebeee4229714eb32992ae.scope: Deactivated successfully.
Jan 23 06:18:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:18:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:05.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:18:05 np0005593232 podman[432116]: 2026-01-23 11:18:05.921166366 +0000 UTC m=+0.042611260 container create ae9efeaa2dba390d3069de6bdf16d3410c9e34533fb9d38bf2b525dc0d8ab2ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_poitras, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 23 06:18:05 np0005593232 systemd[1]: Started libpod-conmon-ae9efeaa2dba390d3069de6bdf16d3410c9e34533fb9d38bf2b525dc0d8ab2ad.scope.
Jan 23 06:18:05 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:18:05 np0005593232 podman[432116]: 2026-01-23 11:18:05.900642744 +0000 UTC m=+0.022087658 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:18:06 np0005593232 podman[432116]: 2026-01-23 11:18:06.009942155 +0000 UTC m=+0.131387049 container init ae9efeaa2dba390d3069de6bdf16d3410c9e34533fb9d38bf2b525dc0d8ab2ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 06:18:06 np0005593232 podman[432116]: 2026-01-23 11:18:06.015419221 +0000 UTC m=+0.136864115 container start ae9efeaa2dba390d3069de6bdf16d3410c9e34533fb9d38bf2b525dc0d8ab2ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 06:18:06 np0005593232 podman[432116]: 2026-01-23 11:18:06.019136426 +0000 UTC m=+0.140581320 container attach ae9efeaa2dba390d3069de6bdf16d3410c9e34533fb9d38bf2b525dc0d8ab2ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 23 06:18:06 np0005593232 magical_poitras[432134]: 167 167
Jan 23 06:18:06 np0005593232 systemd[1]: libpod-ae9efeaa2dba390d3069de6bdf16d3410c9e34533fb9d38bf2b525dc0d8ab2ad.scope: Deactivated successfully.
Jan 23 06:18:06 np0005593232 podman[432116]: 2026-01-23 11:18:06.020743992 +0000 UTC m=+0.142188886 container died ae9efeaa2dba390d3069de6bdf16d3410c9e34533fb9d38bf2b525dc0d8ab2ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_poitras, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:18:06 np0005593232 systemd[1]: var-lib-containers-storage-overlay-ad42618b2010bcb2891229237e24bfb220c1a9fb01ca0e3e08e798d401caac29-merged.mount: Deactivated successfully.
Jan 23 06:18:06 np0005593232 podman[432116]: 2026-01-23 11:18:06.067650223 +0000 UTC m=+0.189095117 container remove ae9efeaa2dba390d3069de6bdf16d3410c9e34533fb9d38bf2b525dc0d8ab2ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_poitras, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:18:06 np0005593232 systemd[1]: libpod-conmon-ae9efeaa2dba390d3069de6bdf16d3410c9e34533fb9d38bf2b525dc0d8ab2ad.scope: Deactivated successfully.
Jan 23 06:18:06 np0005593232 nova_compute[250269]: 2026-01-23 11:18:06.210 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:18:06 np0005593232 podman[432158]: 2026-01-23 11:18:06.225554393 +0000 UTC m=+0.044018030 container create 27d89342592dac1eaf4da11cffa559e0d70fb3407367c364284e3eaf729c22fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_williams, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:18:06 np0005593232 systemd[1]: Started libpod-conmon-27d89342592dac1eaf4da11cffa559e0d70fb3407367c364284e3eaf729c22fa.scope.
Jan 23 06:18:06 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:18:06 np0005593232 podman[432158]: 2026-01-23 11:18:06.203795446 +0000 UTC m=+0.022259123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:18:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8dda61e41c93728a26d9aa5a46c7dde155f96484bf09b0f4792219bd48cc4d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:18:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8dda61e41c93728a26d9aa5a46c7dde155f96484bf09b0f4792219bd48cc4d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:18:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8dda61e41c93728a26d9aa5a46c7dde155f96484bf09b0f4792219bd48cc4d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:18:06 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8dda61e41c93728a26d9aa5a46c7dde155f96484bf09b0f4792219bd48cc4d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:18:06 np0005593232 nova_compute[250269]: 2026-01-23 11:18:06.306 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:18:06 np0005593232 nova_compute[250269]: 2026-01-23 11:18:06.309 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 06:18:06 np0005593232 nova_compute[250269]: 2026-01-23 11:18:06.310 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 7.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:18:06 np0005593232 podman[432158]: 2026-01-23 11:18:06.317819021 +0000 UTC m=+0.136282648 container init 27d89342592dac1eaf4da11cffa559e0d70fb3407367c364284e3eaf729c22fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_williams, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:18:06 np0005593232 podman[432158]: 2026-01-23 11:18:06.326910959 +0000 UTC m=+0.145374626 container start 27d89342592dac1eaf4da11cffa559e0d70fb3407367c364284e3eaf729c22fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_williams, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 23 06:18:06 np0005593232 podman[432158]: 2026-01-23 11:18:06.331377486 +0000 UTC m=+0.149841133 container attach 27d89342592dac1eaf4da11cffa559e0d70fb3407367c364284e3eaf729c22fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_williams, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 23 06:18:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:06.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4577: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]: {
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:    "0": [
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:        {
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:            "devices": [
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:                "/dev/loop3"
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:            ],
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:            "lv_name": "ceph_lv0",
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:            "lv_size": "7511998464",
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:            "name": "ceph_lv0",
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:            "tags": {
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:                "ceph.cephx_lockbox_secret": "",
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:                "ceph.cluster_name": "ceph",
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:                "ceph.crush_device_class": "",
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:                "ceph.encrypted": "0",
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:                "ceph.osd_id": "0",
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:                "ceph.type": "block",
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:                "ceph.vdo": "0"
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:            },
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:            "type": "block",
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:            "vg_name": "ceph_vg0"
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:        }
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]:    ]
Jan 23 06:18:07 np0005593232 eloquent_williams[432175]: }
Jan 23 06:18:07 np0005593232 systemd[1]: libpod-27d89342592dac1eaf4da11cffa559e0d70fb3407367c364284e3eaf729c22fa.scope: Deactivated successfully.
Jan 23 06:18:07 np0005593232 podman[432158]: 2026-01-23 11:18:07.127514267 +0000 UTC m=+0.945977894 container died 27d89342592dac1eaf4da11cffa559e0d70fb3407367c364284e3eaf729c22fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:18:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:18:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:18:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:18:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:18:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:18:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #234. Immutable memtables: 0.
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:18:07.679056) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 147] Flushing memtable with next log file: 234
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769167087679129, "job": 147, "event": "flush_started", "num_memtables": 1, "num_entries": 1733, "num_deletes": 250, "total_data_size": 3070191, "memory_usage": 3110432, "flush_reason": "Manual Compaction"}
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 147] Level-0 flush table #235: started
Jan 23 06:18:07 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c8dda61e41c93728a26d9aa5a46c7dde155f96484bf09b0f4792219bd48cc4d8-merged.mount: Deactivated successfully.
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769167087713141, "cf_name": "default", "job": 147, "event": "table_file_creation", "file_number": 235, "file_size": 3018078, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 98626, "largest_seqno": 100358, "table_properties": {"data_size": 3009983, "index_size": 4909, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 15809, "raw_average_key_size": 19, "raw_value_size": 2994027, "raw_average_value_size": 3651, "num_data_blocks": 211, "num_entries": 820, "num_filter_entries": 820, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769166914, "oldest_key_time": 1769166914, "file_creation_time": 1769167087, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 235, "seqno_to_time_mapping": "N/A"}}
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 147] Flush lasted 34144 microseconds, and 13784 cpu microseconds.
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:18:07.713201) [db/flush_job.cc:967] [default] [JOB 147] Level-0 flush table #235: 3018078 bytes OK
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:18:07.713228) [db/memtable_list.cc:519] [default] Level-0 commit table #235 started
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:18:07.743315) [db/memtable_list.cc:722] [default] Level-0 commit table #235: memtable #1 done
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:18:07.743372) EVENT_LOG_v1 {"time_micros": 1769167087743362, "job": 147, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:18:07.743399) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 147] Try to delete WAL files size 3062886, prev total WAL file size 3078668, number of live WAL files 2.
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000231.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:18:07.745138) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600353030' seq:72057594037927935, type:22 .. '6B7600373531' seq:0, type:0; will stop at (end)
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 148] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 147 Base level 0, inputs: [235(2947KB)], [233(11MB)]
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769167087745276, "job": 148, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [235], "files_L6": [233], "score": -1, "input_data_size": 14821110, "oldest_snapshot_seqno": -1}
Jan 23 06:18:07 np0005593232 podman[432158]: 2026-01-23 11:18:07.768588337 +0000 UTC m=+1.587051984 container remove 27d89342592dac1eaf4da11cffa559e0d70fb3407367c364284e3eaf729c22fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 23 06:18:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:07.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:07 np0005593232 systemd[1]: libpod-conmon-27d89342592dac1eaf4da11cffa559e0d70fb3407367c364284e3eaf729c22fa.scope: Deactivated successfully.
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 148] Generated table #236: 12226 keys, 13698863 bytes, temperature: kUnknown
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769167087892534, "cf_name": "default", "job": 148, "event": "table_file_creation", "file_number": 236, "file_size": 13698863, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13623660, "index_size": 43538, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30597, "raw_key_size": 325915, "raw_average_key_size": 26, "raw_value_size": 13413665, "raw_average_value_size": 1097, "num_data_blocks": 1621, "num_entries": 12226, "num_filter_entries": 12226, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769167087, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 236, "seqno_to_time_mapping": "N/A"}}
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:18:07.893031) [db/compaction/compaction_job.cc:1663] [default] [JOB 148] Compacted 1@0 + 1@6 files to L6 => 13698863 bytes
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:18:07.894672) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 100.6 rd, 93.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 11.3 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(9.4) write-amplify(4.5) OK, records in: 12743, records dropped: 517 output_compression: NoCompression
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:18:07.894693) EVENT_LOG_v1 {"time_micros": 1769167087894683, "job": 148, "event": "compaction_finished", "compaction_time_micros": 147342, "compaction_time_cpu_micros": 62324, "output_level": 6, "num_output_files": 1, "total_output_size": 13698863, "num_input_records": 12743, "num_output_records": 12226, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000235.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769167087895397, "job": 148, "event": "table_file_deletion", "file_number": 235}
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000233.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769167087898152, "job": 148, "event": "table_file_deletion", "file_number": 233}
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:18:07.744959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:18:07.898320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:18:07.898329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:18:07.898333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:18:07.898336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:18:07 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:18:07.898338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:18:08 np0005593232 nova_compute[250269]: 2026-01-23 11:18:08.140 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:18:08 np0005593232 podman[432338]: 2026-01-23 11:18:08.521783439 +0000 UTC m=+0.042200638 container create bc10efa8f12e681f8abce3a98a08ab25d33ccd676ee4432ab5eb9f164eef3252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 23 06:18:08 np0005593232 systemd[1]: Started libpod-conmon-bc10efa8f12e681f8abce3a98a08ab25d33ccd676ee4432ab5eb9f164eef3252.scope.
Jan 23 06:18:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:08.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:08 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:18:08 np0005593232 podman[432338]: 2026-01-23 11:18:08.598428654 +0000 UTC m=+0.118845833 container init bc10efa8f12e681f8abce3a98a08ab25d33ccd676ee4432ab5eb9f164eef3252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_fermat, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 06:18:08 np0005593232 podman[432338]: 2026-01-23 11:18:08.503329205 +0000 UTC m=+0.023746414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:18:08 np0005593232 podman[432338]: 2026-01-23 11:18:08.604123126 +0000 UTC m=+0.124540305 container start bc10efa8f12e681f8abce3a98a08ab25d33ccd676ee4432ab5eb9f164eef3252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_fermat, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 23 06:18:08 np0005593232 podman[432338]: 2026-01-23 11:18:08.607225434 +0000 UTC m=+0.127642613 container attach bc10efa8f12e681f8abce3a98a08ab25d33ccd676ee4432ab5eb9f164eef3252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Jan 23 06:18:08 np0005593232 fervent_fermat[432355]: 167 167
Jan 23 06:18:08 np0005593232 systemd[1]: libpod-bc10efa8f12e681f8abce3a98a08ab25d33ccd676ee4432ab5eb9f164eef3252.scope: Deactivated successfully.
Jan 23 06:18:08 np0005593232 conmon[432355]: conmon bc10efa8f12e681f8abc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc10efa8f12e681f8abce3a98a08ab25d33ccd676ee4432ab5eb9f164eef3252.scope/container/memory.events
Jan 23 06:18:08 np0005593232 podman[432338]: 2026-01-23 11:18:08.610008093 +0000 UTC m=+0.130425272 container died bc10efa8f12e681f8abce3a98a08ab25d33ccd676ee4432ab5eb9f164eef3252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 06:18:08 np0005593232 systemd[1]: var-lib-containers-storage-overlay-d9341c65772ebbd027d0b366d8bc95d1e6fd372718e69d0ea54297dcab82a1cd-merged.mount: Deactivated successfully.
Jan 23 06:18:08 np0005593232 podman[432338]: 2026-01-23 11:18:08.652080756 +0000 UTC m=+0.172497945 container remove bc10efa8f12e681f8abce3a98a08ab25d33ccd676ee4432ab5eb9f164eef3252 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:18:08 np0005593232 systemd[1]: libpod-conmon-bc10efa8f12e681f8abce3a98a08ab25d33ccd676ee4432ab5eb9f164eef3252.scope: Deactivated successfully.
Jan 23 06:18:08 np0005593232 podman[432378]: 2026-01-23 11:18:08.81221543 +0000 UTC m=+0.040817289 container create 468cfa3e879fc05318ce8b5a7daf61a4dfd694c3b4b9655f4b72e287e2c2702c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Jan 23 06:18:08 np0005593232 systemd[1]: Started libpod-conmon-468cfa3e879fc05318ce8b5a7daf61a4dfd694c3b4b9655f4b72e287e2c2702c.scope.
Jan 23 06:18:08 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:18:08 np0005593232 podman[432378]: 2026-01-23 11:18:08.794969581 +0000 UTC m=+0.023571440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:18:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9bfe2d7597630611f93196e3cf4ec3e365c088f6899fa2228491b3e99bbf2c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:18:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9bfe2d7597630611f93196e3cf4ec3e365c088f6899fa2228491b3e99bbf2c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:18:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9bfe2d7597630611f93196e3cf4ec3e365c088f6899fa2228491b3e99bbf2c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:18:08 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9bfe2d7597630611f93196e3cf4ec3e365c088f6899fa2228491b3e99bbf2c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:18:08 np0005593232 podman[432378]: 2026-01-23 11:18:08.900734832 +0000 UTC m=+0.129336721 container init 468cfa3e879fc05318ce8b5a7daf61a4dfd694c3b4b9655f4b72e287e2c2702c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:18:08 np0005593232 podman[432378]: 2026-01-23 11:18:08.909554152 +0000 UTC m=+0.138156031 container start 468cfa3e879fc05318ce8b5a7daf61a4dfd694c3b4b9655f4b72e287e2c2702c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mahavira, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:18:08 np0005593232 podman[432378]: 2026-01-23 11:18:08.91301806 +0000 UTC m=+0.141619929 container attach 468cfa3e879fc05318ce8b5a7daf61a4dfd694c3b4b9655f4b72e287e2c2702c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mahavira, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 23 06:18:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4578: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:18:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:09.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:18:09 np0005593232 focused_mahavira[432395]: {
Jan 23 06:18:09 np0005593232 focused_mahavira[432395]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 06:18:09 np0005593232 focused_mahavira[432395]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:18:09 np0005593232 focused_mahavira[432395]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 06:18:09 np0005593232 focused_mahavira[432395]:        "osd_id": 0,
Jan 23 06:18:09 np0005593232 focused_mahavira[432395]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:18:09 np0005593232 focused_mahavira[432395]:        "type": "bluestore"
Jan 23 06:18:09 np0005593232 focused_mahavira[432395]:    }
Jan 23 06:18:09 np0005593232 focused_mahavira[432395]: }
Jan 23 06:18:09 np0005593232 systemd[1]: libpod-468cfa3e879fc05318ce8b5a7daf61a4dfd694c3b4b9655f4b72e287e2c2702c.scope: Deactivated successfully.
Jan 23 06:18:09 np0005593232 podman[432378]: 2026-01-23 11:18:09.834500207 +0000 UTC m=+1.063102076 container died 468cfa3e879fc05318ce8b5a7daf61a4dfd694c3b4b9655f4b72e287e2c2702c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 06:18:09 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b9bfe2d7597630611f93196e3cf4ec3e365c088f6899fa2228491b3e99bbf2c7-merged.mount: Deactivated successfully.
Jan 23 06:18:09 np0005593232 podman[432378]: 2026-01-23 11:18:09.893801909 +0000 UTC m=+1.122403788 container remove 468cfa3e879fc05318ce8b5a7daf61a4dfd694c3b4b9655f4b72e287e2c2702c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mahavira, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:18:09 np0005593232 systemd[1]: libpod-conmon-468cfa3e879fc05318ce8b5a7daf61a4dfd694c3b4b9655f4b72e287e2c2702c.scope: Deactivated successfully.
Jan 23 06:18:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 06:18:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:18:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 06:18:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:18:09 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev eb463616-bcb9-4644-b385-d8c891e41b93 does not exist
Jan 23 06:18:09 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 81f5eb00-dead-4705-b55f-975ad77878a8 does not exist
Jan 23 06:18:09 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 715797b4-3b73-47b0-ab11-440f59192d30 does not exist
Jan 23 06:18:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:10.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:18:10 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:18:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4579: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:11 np0005593232 nova_compute[250269]: 2026-01-23 11:18:11.214 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:18:11 np0005593232 podman[432484]: 2026-01-23 11:18:11.410827285 +0000 UTC m=+0.056964287 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 23 06:18:11 np0005593232 podman[432483]: 2026-01-23 11:18:11.444778268 +0000 UTC m=+0.095513631 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 06:18:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:11.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:18:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:12.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4580: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:13 np0005593232 nova_compute[250269]: 2026-01-23 11:18:13.143 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:18:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:13.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:18:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:14.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:18:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4581: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:15.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:16 np0005593232 nova_compute[250269]: 2026-01-23 11:18:16.245 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:18:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:18:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:16.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:18:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4582: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:18:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:18:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:17.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:18:18 np0005593232 nova_compute[250269]: 2026-01-23 11:18:18.146 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:18:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:18.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4583: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:19.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:20.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4584: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:21 np0005593232 nova_compute[250269]: 2026-01-23 11:18:21.247 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:18:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:21.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:18:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:18:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:22.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:18:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4585: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:23 np0005593232 nova_compute[250269]: 2026-01-23 11:18:23.149 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:18:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:23.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:24.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:24 np0005593232 systemd-logind[808]: New session 64 of user zuul.
Jan 23 06:18:24 np0005593232 systemd[1]: Started Session 64 of User zuul.
Jan 23 06:18:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4586: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:25.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:26 np0005593232 nova_compute[250269]: 2026-01-23 11:18:26.247 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:18:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:26.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4587: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:18:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.003000085s ======
Jan 23 06:18:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:27.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000085s
Jan 23 06:18:28 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41199 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:28 np0005593232 nova_compute[250269]: 2026-01-23 11:18:28.207 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:18:28 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.47948 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:28 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41211 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:18:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:28.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:18:28 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 23 06:18:28 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3751812900' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 23 06:18:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4588: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:28 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.47960 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:29.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:30 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.50887 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:30.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:30 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.50893 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4589: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:31 np0005593232 nova_compute[250269]: 2026-01-23 11:18:31.249 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:18:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:18:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:31.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:18:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:18:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:18:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:32.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:18:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4590: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:33 np0005593232 nova_compute[250269]: 2026-01-23 11:18:33.210 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:18:33 np0005593232 ovs-vsctl[432871]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 23 06:18:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:33.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:34 np0005593232 virtqemud[249592]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 23 06:18:34 np0005593232 virtqemud[249592]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 23 06:18:34 np0005593232 virtqemud[249592]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 23 06:18:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:34.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4591: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:35 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: cache status {prefix=cache status} (starting...)
Jan 23 06:18:35 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Can't run that command on an inactive MDS!
Jan 23 06:18:35 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: client ls {prefix=client ls} (starting...)
Jan 23 06:18:35 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Can't run that command on an inactive MDS!
Jan 23 06:18:35 np0005593232 lvm[433217]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 06:18:35 np0005593232 lvm[433217]: VG ceph_vg0 finished
Jan 23 06:18:35 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41226 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:18:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:35.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:18:36 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: damage ls {prefix=damage ls} (starting...)
Jan 23 06:18:36 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Can't run that command on an inactive MDS!
Jan 23 06:18:36 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41238 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 23 06:18:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/977854235' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 23 06:18:36 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: dump loads {prefix=dump loads} (starting...)
Jan 23 06:18:36 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Can't run that command on an inactive MDS!
Jan 23 06:18:36 np0005593232 nova_compute[250269]: 2026-01-23 11:18:36.294 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:18:36 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 23 06:18:36 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Can't run that command on an inactive MDS!
Jan 23 06:18:36 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.50908 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:36 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 23 06:18:36 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Can't run that command on an inactive MDS!
Jan 23 06:18:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:18:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/819279770' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:18:36 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 23 06:18:36 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Can't run that command on an inactive MDS!
Jan 23 06:18:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:18:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:36.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:18:36 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 23 06:18:36 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Can't run that command on an inactive MDS!
Jan 23 06:18:36 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.50926 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 23 06:18:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 23 06:18:36 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41256 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:36 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Jan 23 06:18:36 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3565977720' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 23 06:18:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4592: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:36 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.50938 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:36 np0005593232 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 06:18:36 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T11:18:36.967+0000 7f341f708640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 06:18:37 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 23 06:18:37 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Can't run that command on an inactive MDS!
Jan 23 06:18:37 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 23 06:18:37 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Can't run that command on an inactive MDS!
Jan 23 06:18:37 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.47996 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:37 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: ops {prefix=ops} (starting...)
Jan 23 06:18:37 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Can't run that command on an inactive MDS!
Jan 23 06:18:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Jan 23 06:18:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4148775052' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 23 06:18:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Jan 23 06:18:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2432745836' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 23 06:18:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:18:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_11:18:37
Jan 23 06:18:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 06:18:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 06:18:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', '.rgw.root', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'volumes']
Jan 23 06:18:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 06:18:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 23 06:18:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 23 06:18:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:18:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:18:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:18:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:18:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:18:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:18:37 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.50965 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:37 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T11:18:37.683+0000 7f341f708640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 06:18:37 np0005593232 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 06:18:37 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41292 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:37.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 23 06:18:37 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2453858110' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 23 06:18:38 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: session ls {prefix=session ls} (starting...)
Jan 23 06:18:38 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Can't run that command on an inactive MDS!
Jan 23 06:18:38 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41316 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:38 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48035 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:38 np0005593232 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 06:18:38 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T11:18:38.179+0000 7f341f708640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 06:18:38 np0005593232 nova_compute[250269]: 2026-01-23 11:18:38.213 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:18:38 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: status {prefix=status} (starting...)
Jan 23 06:18:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 23 06:18:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3168813281' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 23 06:18:38 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51004 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 23 06:18:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4115656863' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 23 06:18:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:38.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:38 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 23 06:18:38 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4204212517' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 23 06:18:38 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51016 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4593: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 06:18:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:18:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 06:18:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:18:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:18:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:18:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:18:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:18:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:18:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:18:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Jan 23 06:18:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1227224071' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 23 06:18:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 23 06:18:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4159516237' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 06:18:39 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48080 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 23 06:18:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 23 06:18:39 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41376 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:39 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T11:18:39.419+0000 7f341f708640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 06:18:39 np0005593232 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 06:18:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 23 06:18:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1209217392' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 23 06:18:39 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48095 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:39.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Jan 23 06:18:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1199267263' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 23 06:18:39 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 23 06:18:39 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/83180597' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 23 06:18:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 23 06:18:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 23 06:18:40 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51076 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:40 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T11:18:40.189+0000 7f341f708640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 06:18:40 np0005593232 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 06:18:40 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41430 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Jan 23 06:18:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2701486589' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 23 06:18:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:40.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:40 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41445 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:40 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 23 06:18:40 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3338999212' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 23 06:18:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4594: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:40 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48146 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:40 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T11:18:40.979+0000 7f341f708640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 06:18:40 np0005593232 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 06:18:41 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51106 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:41 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41463 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:41 np0005593232 nova_compute[250269]: 2026-01-23 11:18:41.296 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:18:41 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51124 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:41 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41481 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:41 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 23 06:18:41 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2677386062' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 23 06:18:41 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48191 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:18:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:41.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:18:41 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51139 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4631577 data_alloc: 234881024 data_used: 28614656
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408428544 unmapped: 45056000 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 ms_handle_reset con 0x55fba23ee400 session 0x55fba37590e0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408428544 unmapped: 45056000 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 ms_handle_reset con 0x55fbb2bdc800 session 0x55fba354af00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.254592896s of 16.409994125s, submitted: 6
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 ms_handle_reset con 0x55fba23a2000 session 0x55fba2ed7860
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408428544 unmapped: 45056000 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a3337000/0x0/0x1bfc00000, data 0x43b8444/0x45d7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406118400 unmapped: 47366144 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a3e99000/0x0/0x1bfc00000, data 0x3856444/0x3a75000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 ms_handle_reset con 0x55fba004c800 session 0x55fba16e94a0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406142976 unmapped: 47341568 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4511939 data_alloc: 234881024 data_used: 27041792
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a3e99000/0x0/0x1bfc00000, data 0x3856444/0x3a75000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406142976 unmapped: 47341568 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406142976 unmapped: 47341568 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a3e99000/0x0/0x1bfc00000, data 0x3856444/0x3a75000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406142976 unmapped: 47341568 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406142976 unmapped: 47341568 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406142976 unmapped: 47341568 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4512003 data_alloc: 234881024 data_used: 27037696
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406142976 unmapped: 47341568 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a3e99000/0x0/0x1bfc00000, data 0x3856444/0x3a75000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406142976 unmapped: 47341568 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406142976 unmapped: 47341568 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406142976 unmapped: 47341568 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406142976 unmapped: 47341568 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4512003 data_alloc: 234881024 data_used: 27037696
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.788344383s of 12.874544144s, submitted: 26
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405422080 unmapped: 48062464 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a3c8f000/0x0/0x1bfc00000, data 0x3a60444/0x3c7f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 ms_handle_reset con 0x55fba1193000 session 0x55fba044fc20
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405766144 unmapped: 47718400 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405766144 unmapped: 47718400 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405774336 unmapped: 47710208 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405807104 unmapped: 47677440 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4560841 data_alloc: 234881024 data_used: 27287552
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405807104 unmapped: 47677440 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 ms_handle_reset con 0x55fba23ee400 session 0x55fba2e1c1e0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a3b07000/0x0/0x1bfc00000, data 0x3c56444/0x3e07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406175744 unmapped: 47308800 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406192128 unmapped: 47292416 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a3b07000/0x0/0x1bfc00000, data 0x3c56444/0x3e07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406192128 unmapped: 47292416 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406192128 unmapped: 47292416 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4576816 data_alloc: 234881024 data_used: 28782592
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406192128 unmapped: 47292416 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a3b07000/0x0/0x1bfc00000, data 0x3c56444/0x3e07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406200320 unmapped: 47284224 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406200320 unmapped: 47284224 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a3b07000/0x0/0x1bfc00000, data 0x3c56444/0x3e07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406200320 unmapped: 47284224 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a3b07000/0x0/0x1bfc00000, data 0x3c56444/0x3e07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406200320 unmapped: 47284224 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4576816 data_alloc: 234881024 data_used: 28782592
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406200320 unmapped: 47284224 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a3b07000/0x0/0x1bfc00000, data 0x3c56444/0x3e07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406200320 unmapped: 47284224 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a3b07000/0x0/0x1bfc00000, data 0x3c56444/0x3e07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406200320 unmapped: 47284224 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.023752213s of 18.140953064s, submitted: 22
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407650304 unmapped: 45834240 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407650304 unmapped: 45834240 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4619916 data_alloc: 234881024 data_used: 30167040
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409174016 unmapped: 44310528 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409346048 unmapped: 44138496 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a3686000/0x0/0x1bfc00000, data 0x40cf444/0x4280000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409346048 unmapped: 44138496 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a3684000/0x0/0x1bfc00000, data 0x40d1444/0x4282000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409346048 unmapped: 44138496 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409346048 unmapped: 44138496 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4629306 data_alloc: 234881024 data_used: 30339072
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409346048 unmapped: 44138496 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409354240 unmapped: 44130304 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409354240 unmapped: 44130304 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409354240 unmapped: 44130304 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a3684000/0x0/0x1bfc00000, data 0x40d1444/0x4282000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 ms_handle_reset con 0x55fba9dd0400 session 0x55fba1737c20
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409354240 unmapped: 44130304 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4629306 data_alloc: 234881024 data_used: 30339072
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409354240 unmapped: 44130304 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.533243179s of 12.727615356s, submitted: 82
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 ms_handle_reset con 0x55fba32b7400 session 0x55fba0951e00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409354240 unmapped: 44130304 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409354240 unmapped: 44130304 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409354240 unmapped: 44130304 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409354240 unmapped: 44130304 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a3684000/0x0/0x1bfc00000, data 0x40d1444/0x4282000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4629306 data_alloc: 234881024 data_used: 30339072
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409354240 unmapped: 44130304 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409354240 unmapped: 44130304 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 ms_handle_reset con 0x55fba004c800 session 0x55fba37854a0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409354240 unmapped: 44130304 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 heartbeat osd_stat(store_statfs(0x1a3684000/0x0/0x1bfc00000, data 0x40d1444/0x4282000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409354240 unmapped: 44130304 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409370624 unmapped: 44113920 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 ms_handle_reset con 0x55fba1193000 session 0x55fba234be00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4601454 data_alloc: 234881024 data_used: 30138368
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409370624 unmapped: 44113920 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 ms_handle_reset con 0x55fba23ee400 session 0x55fba045fc20
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.246494293s of 10.278975487s, submitted: 6
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 408 ms_handle_reset con 0x55fba9dd0400 session 0x55fba352a960
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409378816 unmapped: 44105728 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 408 heartbeat osd_stat(store_statfs(0x1a3897000/0x0/0x1bfc00000, data 0x3ec7434/0x4077000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 408 ms_handle_reset con 0x55fba28a5800 session 0x55fba08bdc20
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409395200 unmapped: 44089344 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 408 ms_handle_reset con 0x55fba56cc800 session 0x55fba04630e0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409395200 unmapped: 44089344 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 408 heartbeat osd_stat(store_statfs(0x1a3894000/0x0/0x1bfc00000, data 0x3e5a143/0x407a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409395200 unmapped: 44089344 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4602896 data_alloc: 234881024 data_used: 30138368
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409403392 unmapped: 44081152 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409411584 unmapped: 44072960 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 408 heartbeat osd_stat(store_statfs(0x1a3894000/0x0/0x1bfc00000, data 0x3e5a143/0x407a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 408 ms_handle_reset con 0x55fba28a5800 session 0x55fba2e1d680
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409411584 unmapped: 44072960 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 408 ms_handle_reset con 0x55fbaad23000 session 0x55fba045fe00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 408 ms_handle_reset con 0x55fbb2bdc800 session 0x55fba354a1e0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409411584 unmapped: 44072960 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 408 ms_handle_reset con 0x55fba046b400 session 0x55fba3c93c20
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409411584 unmapped: 44072960 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 408 handle_osd_map epochs [408,409], i have 408, src has [1,409]
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4605530 data_alloc: 234881024 data_used: 30146560
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409411584 unmapped: 44072960 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409411584 unmapped: 44072960 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.514863968s of 10.643457413s, submitted: 61
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409411584 unmapped: 44072960 heap: 453484544 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 410 ms_handle_reset con 0x55fba56cc800 session 0x55fba37c8f00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 410 heartbeat osd_stat(store_statfs(0x1a388c000/0x0/0x1bfc00000, data 0x3e5d8db/0x4080000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 410 ms_handle_reset con 0x55fbb2bdc800 session 0x55fba32e34a0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407035904 unmapped: 49881088 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400678912 unmapped: 56238080 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 410 ms_handle_reset con 0x55fbaad23000 session 0x55fb9ffc8000
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 410 ms_handle_reset con 0x55fba004c800 session 0x55fba37594a0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4476296 data_alloc: 218103808 data_used: 17866752
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400678912 unmapped: 56238080 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 411 heartbeat osd_stat(store_statfs(0x1a3f60000/0x0/0x1bfc00000, data 0x3789526/0x39ac000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400678912 unmapped: 56238080 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 411 ms_handle_reset con 0x55fba1193000 session 0x55fba3260960
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 412 ms_handle_reset con 0x55fba004c800 session 0x55fba0951e00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398336000 unmapped: 58580992 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 412 ms_handle_reset con 0x55fba56cc800 session 0x55fba0463e00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398336000 unmapped: 58580992 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 412 ms_handle_reset con 0x55fbaad23000 session 0x55fba1736000
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 412 ms_handle_reset con 0x55fbb2bdc800 session 0x55fba234be00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398336000 unmapped: 58580992 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4475076 data_alloc: 218103808 data_used: 17879040
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398336000 unmapped: 58580992 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398336000 unmapped: 58580992 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 412 heartbeat osd_stat(store_statfs(0x1a3f5e000/0x0/0x1bfc00000, data 0x378b19b/0x39af000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398336000 unmapped: 58580992 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398336000 unmapped: 58580992 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.614243507s of 11.920937538s, submitted: 47
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 412 ms_handle_reset con 0x55fba23ee400 session 0x55fba0b121e0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 395075584 unmapped: 61841408 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4395386 data_alloc: 218103808 data_used: 11624448
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 395075584 unmapped: 61841408 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a455d000/0x0/0x1bfc00000, data 0x318acda/0x33b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 395075584 unmapped: 61841408 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 395075584 unmapped: 61841408 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a455d000/0x0/0x1bfc00000, data 0x318acda/0x33b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 395075584 unmapped: 61841408 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 395083776 unmapped: 61833216 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4395386 data_alloc: 218103808 data_used: 11624448
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 395083776 unmapped: 61833216 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a455d000/0x0/0x1bfc00000, data 0x318acda/0x33b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 61825024 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 61825024 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 61825024 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 61825024 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4395386 data_alloc: 218103808 data_used: 11624448
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 61825024 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 61825024 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a455d000/0x0/0x1bfc00000, data 0x318acda/0x33b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 395091968 unmapped: 61825024 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a455d000/0x0/0x1bfc00000, data 0x318acda/0x33b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 395100160 unmapped: 61816832 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 395100160 unmapped: 61816832 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 413 ms_handle_reset con 0x55fba004c800 session 0x55fba1d2be00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 413 ms_handle_reset con 0x55fba56cc800 session 0x55fba045f0e0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422266 data_alloc: 234881024 data_used: 21811200
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.315380096s of 16.399528503s, submitted: 31
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a455d000/0x0/0x1bfc00000, data 0x318acda/0x33b0000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406192128 unmapped: 50724864 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 413 ms_handle_reset con 0x55fbaad23000 session 0x55fba354a000
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406192128 unmapped: 50724864 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406192128 unmapped: 50724864 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 413 ms_handle_reset con 0x55fbb2bdc800 session 0x55fba0b12000
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406208512 unmapped: 50708480 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 413 heartbeat osd_stat(store_statfs(0x1a3c64000/0x0/0x1bfc00000, data 0x3a84cda/0x3caa000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406208512 unmapped: 50708480 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 414 ms_handle_reset con 0x55fbb37bd000 session 0x55fba09f9860
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373562 data_alloc: 218103808 data_used: 14499840
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401637376 unmapped: 55279616 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401637376 unmapped: 55279616 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401637376 unmapped: 55279616 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401637376 unmapped: 55279616 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 414 heartbeat osd_stat(store_statfs(0x1a4bfb000/0x0/0x1bfc00000, data 0x2aec977/0x2d12000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401637376 unmapped: 55279616 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4377976 data_alloc: 218103808 data_used: 14725120
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401637376 unmapped: 55279616 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401637376 unmapped: 55279616 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401637376 unmapped: 55279616 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401637376 unmapped: 55279616 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a4bf8000/0x0/0x1bfc00000, data 0x2aee4b6/0x2d15000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401637376 unmapped: 55279616 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4377976 data_alloc: 218103808 data_used: 14725120
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a4bf8000/0x0/0x1bfc00000, data 0x2aee4b6/0x2d15000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401637376 unmapped: 55279616 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.684103966s of 15.898918152s, submitted: 54
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401637376 unmapped: 55279616 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401645568 unmapped: 55271424 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401645568 unmapped: 55271424 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a4bf9000/0x0/0x1bfc00000, data 0x2aee4b6/0x2d15000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401645568 unmapped: 55271424 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4395944 data_alloc: 218103808 data_used: 16711680
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401645568 unmapped: 55271424 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a4bf9000/0x0/0x1bfc00000, data 0x2aee4b6/0x2d15000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401645568 unmapped: 55271424 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401645568 unmapped: 55271424 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401645568 unmapped: 55271424 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 415 ms_handle_reset con 0x55fba332ec00 session 0x55fba2e1c1e0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 415 ms_handle_reset con 0x55fba9dd0400 session 0x55fba36390e0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a4bf9000/0x0/0x1bfc00000, data 0x2aee4b6/0x2d15000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [0,0,1])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398589952 unmapped: 58327040 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 415 ms_handle_reset con 0x55fba004c800 session 0x55fb9fda4960
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4287160 data_alloc: 218103808 data_used: 11628544
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398589952 unmapped: 58327040 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398589952 unmapped: 58327040 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398589952 unmapped: 58327040 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398589952 unmapped: 58327040 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a54f3000/0x0/0x1bfc00000, data 0x21f44b6/0x241b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398589952 unmapped: 58327040 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4287160 data_alloc: 218103808 data_used: 11628544
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398589952 unmapped: 58327040 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.391709328s of 15.478992462s, submitted: 40
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398589952 unmapped: 58327040 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 415 ms_handle_reset con 0x55fba56cc800 session 0x55fba354b4a0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a54f3000/0x0/0x1bfc00000, data 0x21f44b6/0x241b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398589952 unmapped: 58327040 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398598144 unmapped: 58318848 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 415 ms_handle_reset con 0x55fbaad23000 session 0x55fba3c93860
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a54f5000/0x0/0x1bfc00000, data 0x21f43f2/0x2419000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398598144 unmapped: 58318848 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4285766 data_alloc: 218103808 data_used: 11628544
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a54f5000/0x0/0x1bfc00000, data 0x21f43f2/0x2419000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398598144 unmapped: 58318848 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398598144 unmapped: 58318848 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398598144 unmapped: 58318848 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398598144 unmapped: 58318848 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398598144 unmapped: 58318848 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4285766 data_alloc: 218103808 data_used: 11628544
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398598144 unmapped: 58318848 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a54f5000/0x0/0x1bfc00000, data 0x21f43f2/0x2419000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398598144 unmapped: 58318848 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398598144 unmapped: 58318848 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398598144 unmapped: 58318848 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398598144 unmapped: 58318848 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a54f5000/0x0/0x1bfc00000, data 0x21f43f2/0x2419000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4285766 data_alloc: 218103808 data_used: 11628544
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398598144 unmapped: 58318848 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398598144 unmapped: 58318848 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398598144 unmapped: 58318848 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a54f5000/0x0/0x1bfc00000, data 0x21f43f2/0x2419000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398598144 unmapped: 58318848 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a54f5000/0x0/0x1bfc00000, data 0x21f43f2/0x2419000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398598144 unmapped: 58318848 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4285766 data_alloc: 218103808 data_used: 11628544
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398598144 unmapped: 58318848 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398598144 unmapped: 58318848 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398606336 unmapped: 58310656 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398606336 unmapped: 58310656 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398606336 unmapped: 58310656 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4285766 data_alloc: 218103808 data_used: 11628544
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a54f5000/0x0/0x1bfc00000, data 0x21f43f2/0x2419000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398606336 unmapped: 58310656 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398614528 unmapped: 58302464 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a54f5000/0x0/0x1bfc00000, data 0x21f43f2/0x2419000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a54f5000/0x0/0x1bfc00000, data 0x21f43f2/0x2419000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398614528 unmapped: 58302464 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 58294272 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a54f5000/0x0/0x1bfc00000, data 0x21f43f2/0x2419000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 58294272 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4285766 data_alloc: 218103808 data_used: 11628544
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a54f5000/0x0/0x1bfc00000, data 0x21f43f2/0x2419000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 58294272 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.272722244s of 30.330598831s, submitted: 16
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 58294272 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 416 ms_handle_reset con 0x55fba004c800 session 0x55fba09efa40
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398630912 unmapped: 58286080 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398688256 unmapped: 58228736 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398688256 unmapped: 58228736 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 417 ms_handle_reset con 0x55fba332ec00 session 0x55fb9ffc9c20
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a54ed000/0x0/0x1bfc00000, data 0x21f7cf8/0x241f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4293586 data_alloc: 218103808 data_used: 11636736
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398688256 unmapped: 58228736 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398688256 unmapped: 58228736 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398688256 unmapped: 58228736 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a54ed000/0x0/0x1bfc00000, data 0x21f7cf8/0x241f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398696448 unmapped: 58220544 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398696448 unmapped: 58220544 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4295888 data_alloc: 218103808 data_used: 11636736
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398704640 unmapped: 58212352 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398704640 unmapped: 58212352 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54eb000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398704640 unmapped: 58212352 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.836272240s of 11.905964851s, submitted: 29
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba56cc800 session 0x55fba3766960
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398704640 unmapped: 58212352 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398704640 unmapped: 58212352 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4298310 data_alloc: 218103808 data_used: 11636736
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 58204160 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba9dd0400 session 0x55fba09f6f00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 58204160 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54eb000/0x0/0x1bfc00000, data 0x21f9847/0x2423000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 58204160 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398721024 unmapped: 58195968 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398721024 unmapped: 58195968 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4298310 data_alloc: 218103808 data_used: 11636736
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398721024 unmapped: 58195968 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54eb000/0x0/0x1bfc00000, data 0x21f9847/0x2423000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398721024 unmapped: 58195968 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54eb000/0x0/0x1bfc00000, data 0x21f9847/0x2423000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398721024 unmapped: 58195968 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fbaad23000 session 0x55fba044fa40
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398721024 unmapped: 58195968 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398721024 unmapped: 58195968 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.303462029s of 11.313310623s, submitted: 3
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba004c800 session 0x55fba0b13680
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4298134 data_alloc: 218103808 data_used: 11636736
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54eb000/0x0/0x1bfc00000, data 0x21f9847/0x2423000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba332ec00 session 0x55fb9ffc9c20
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398729216 unmapped: 58187776 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba56cc800 session 0x55fba354b4a0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398729216 unmapped: 58187776 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398729216 unmapped: 58187776 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398729216 unmapped: 58187776 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398737408 unmapped: 58179584 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4296845 data_alloc: 218103808 data_used: 11636736
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398737408 unmapped: 58179584 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398737408 unmapped: 58179584 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398737408 unmapped: 58179584 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398737408 unmapped: 58179584 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398745600 unmapped: 58171392 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4296845 data_alloc: 218103808 data_used: 11636736
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398745600 unmapped: 58171392 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398745600 unmapped: 58171392 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398745600 unmapped: 58171392 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398745600 unmapped: 58171392 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398745600 unmapped: 58171392 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4296845 data_alloc: 218103808 data_used: 11636736
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398745600 unmapped: 58171392 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398753792 unmapped: 58163200 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 58146816 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 58146816 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 58146816 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4296845 data_alloc: 218103808 data_used: 11636736
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 58146816 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 58146816 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 58146816 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 58146816 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 58146816 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4296845 data_alloc: 218103808 data_used: 11636736
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398778368 unmapped: 58138624 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398778368 unmapped: 58138624 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398778368 unmapped: 58138624 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398778368 unmapped: 58138624 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398778368 unmapped: 58138624 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4296845 data_alloc: 218103808 data_used: 11636736
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398778368 unmapped: 58138624 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398786560 unmapped: 58130432 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398786560 unmapped: 58130432 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398794752 unmapped: 58122240 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398794752 unmapped: 58122240 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4296845 data_alloc: 218103808 data_used: 11636736
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398794752 unmapped: 58122240 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398794752 unmapped: 58122240 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398794752 unmapped: 58122240 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398794752 unmapped: 58122240 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398802944 unmapped: 58114048 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4296845 data_alloc: 218103808 data_used: 11636736
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398811136 unmapped: 58105856 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398811136 unmapped: 58105856 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398811136 unmapped: 58105856 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 43.471817017s of 43.603153229s, submitted: 19
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398835712 unmapped: 58081280 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398835712 unmapped: 58081280 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303952 data_alloc: 218103808 data_used: 11636736
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba9dd0400 session 0x55fb9ffc8000
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398811136 unmapped: 58105856 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398811136 unmapped: 58105856 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ab000/0x0/0x1bfc00000, data 0x223989a/0x2463000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398811136 unmapped: 58105856 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398811136 unmapped: 58105856 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398819328 unmapped: 58097664 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303609 data_alloc: 218103808 data_used: 11640832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398819328 unmapped: 58097664 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ab000/0x0/0x1bfc00000, data 0x223989a/0x2463000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398819328 unmapped: 58097664 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ab000/0x0/0x1bfc00000, data 0x223989a/0x2463000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398819328 unmapped: 58097664 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.290052414s of 10.332016945s, submitted: 10
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fbb2bdc800 session 0x55fba045fe00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 61366272 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 61366272 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4396467 data_alloc: 218103808 data_used: 11640832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 61366272 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 61366272 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399228928 unmapped: 61358080 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a488a000/0x0/0x1bfc00000, data 0x2e5a89a/0x3084000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399228928 unmapped: 61358080 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399228928 unmapped: 61358080 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4396467 data_alloc: 218103808 data_used: 11640832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399228928 unmapped: 61358080 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399228928 unmapped: 61358080 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399228928 unmapped: 61358080 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a488a000/0x0/0x1bfc00000, data 0x2e5a89a/0x3084000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399237120 unmapped: 61349888 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399237120 unmapped: 61349888 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4396467 data_alloc: 218103808 data_used: 11640832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399245312 unmapped: 61341696 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399245312 unmapped: 61341696 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba004c800 session 0x55fba045fc20
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 61333504 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba332ec00 session 0x55fba0b12000
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba56cc800 session 0x55fba3766960
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 61333504 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba9dd0400 session 0x55fba3767a40
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a488a000/0x0/0x1bfc00000, data 0x2e5a89a/0x3084000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 61333504 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4396467 data_alloc: 218103808 data_used: 11640832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 61333504 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 6600.1 total, 600.0 interval
                                              Cumulative writes: 59K writes, 219K keys, 59K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.03 MB/s
                                              Cumulative WAL: 59K writes, 22K syncs, 2.63 writes per sync, written: 0.20 GB, 0.03 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 4075 writes, 11K keys, 4075 commit groups, 1.0 writes per commit group, ingest: 9.73 MB, 0.02 MB/s
                                              Interval WAL: 4075 writes, 1682 syncs, 2.42 writes per sync, written: 0.01 GB, 0.02 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 61333504 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 61325312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a488a000/0x0/0x1bfc00000, data 0x2e5a89a/0x3084000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 61325312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a488a000/0x0/0x1bfc00000, data 0x2e5a89a/0x3084000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 61325312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4477747 data_alloc: 234881024 data_used: 20426752
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 61325312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a488a000/0x0/0x1bfc00000, data 0x2e5a89a/0x3084000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 61325312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 61325312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 61325312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 61325312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4477747 data_alloc: 234881024 data_used: 20426752
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 61325312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a488a000/0x0/0x1bfc00000, data 0x2e5a89a/0x3084000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 61325312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.893667221s of 29.003288269s, submitted: 12
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404291584 unmapped: 56295424 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402489344 unmapped: 58097664 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402948096 unmapped: 57638912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4537885 data_alloc: 234881024 data_used: 21372928
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402948096 unmapped: 57638912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402948096 unmapped: 57638912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a406b000/0x0/0x1bfc00000, data 0x367989a/0x38a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a406b000/0x0/0x1bfc00000, data 0x367989a/0x38a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402948096 unmapped: 57638912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402825216 unmapped: 57761792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402825216 unmapped: 57761792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4541165 data_alloc: 234881024 data_used: 21823488
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402825216 unmapped: 57761792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402825216 unmapped: 57761792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402825216 unmapped: 57761792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402825216 unmapped: 57761792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402825216 unmapped: 57761792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4541165 data_alloc: 234881024 data_used: 21823488
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402825216 unmapped: 57761792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402825216 unmapped: 57761792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402833408 unmapped: 57753600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402833408 unmapped: 57753600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402833408 unmapped: 57753600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4541165 data_alloc: 234881024 data_used: 21823488
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402833408 unmapped: 57753600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402833408 unmapped: 57753600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402833408 unmapped: 57753600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402841600 unmapped: 57745408 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402841600 unmapped: 57745408 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4541165 data_alloc: 234881024 data_used: 21823488
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402841600 unmapped: 57745408 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402849792 unmapped: 57737216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402849792 unmapped: 57737216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402849792 unmapped: 57737216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402849792 unmapped: 57737216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4541165 data_alloc: 234881024 data_used: 21823488
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402849792 unmapped: 57737216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402866176 unmapped: 57720832 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402866176 unmapped: 57720832 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402874368 unmapped: 57712640 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 29.411119461s of 32.324645996s, submitted: 68
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402874368 unmapped: 57712640 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4540417 data_alloc: 234881024 data_used: 21827584
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402874368 unmapped: 57712640 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402874368 unmapped: 57712640 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402874368 unmapped: 57712640 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba4981400 session 0x55fba0b09a40
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 57704448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba23d6c00 session 0x55fba0b081e0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 57704448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba004c800 session 0x55fba0b08d20
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4310809 data_alloc: 218103808 data_used: 11902976
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400015360 unmapped: 60571648 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ab000/0x0/0x1bfc00000, data 0x223989a/0x2463000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400015360 unmapped: 60571648 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400015360 unmapped: 60571648 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ab000/0x0/0x1bfc00000, data 0x223989a/0x2463000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400015360 unmapped: 60571648 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400015360 unmapped: 60571648 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4310809 data_alloc: 218103808 data_used: 11902976
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.833645821s of 10.873172760s, submitted: 13
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba332ec00 session 0x55fba2ed5e00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba56cc800 session 0x55fba2ed54a0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54eb000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4306155 data_alloc: 218103808 data_used: 11636736
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54eb000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4306155 data_alloc: 218103808 data_used: 11636736
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54eb000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4306155 data_alloc: 218103808 data_used: 11636736
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54eb000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400031744 unmapped: 60555264 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400031744 unmapped: 60555264 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4306155 data_alloc: 218103808 data_used: 11636736
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400031744 unmapped: 60555264 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54eb000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400031744 unmapped: 60555264 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400031744 unmapped: 60555264 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400039936 unmapped: 60547072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400039936 unmapped: 60547072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4306155 data_alloc: 218103808 data_used: 11636736
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400039936 unmapped: 60547072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400039936 unmapped: 60547072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54eb000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400039936 unmapped: 60547072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400039936 unmapped: 60547072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400039936 unmapped: 60547072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4306155 data_alloc: 218103808 data_used: 11636736
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400048128 unmapped: 60538880 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.403900146s of 31.496391296s, submitted: 27
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400048128 unmapped: 60538880 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba9dd0400 session 0x55fba32614a0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba004c800 session 0x55fba2e1d680
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 59334656 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4e3f000/0x0/0x1bfc00000, data 0x28a5899/0x2acf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 59334656 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 59334656 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4363542 data_alloc: 218103808 data_used: 11636736
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401260544 unmapped: 59326464 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401260544 unmapped: 59326464 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4e3f000/0x0/0x1bfc00000, data 0x28a5899/0x2acf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401260544 unmapped: 59326464 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4e3f000/0x0/0x1bfc00000, data 0x28a5899/0x2acf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401260544 unmapped: 59326464 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4e3f000/0x0/0x1bfc00000, data 0x28a5899/0x2acf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4e3f000/0x0/0x1bfc00000, data 0x28a5899/0x2acf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401260544 unmapped: 59326464 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4363542 data_alloc: 218103808 data_used: 11636736
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401260544 unmapped: 59326464 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401260544 unmapped: 59326464 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.726271629s of 10.869317055s, submitted: 32
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba23d6c00 session 0x55fba3639680
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401235968 unmapped: 59351040 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401235968 unmapped: 59351040 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 59334656 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4e3f000/0x0/0x1bfc00000, data 0x28a5899/0x2acf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4379843 data_alloc: 218103808 data_used: 13783040
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 58613760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 58613760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 58613760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 58613760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4e3f000/0x0/0x1bfc00000, data 0x28a5899/0x2acf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 58613760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4413123 data_alloc: 218103808 data_used: 18534400
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 58613760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 58613760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 58613760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 58613760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 58613760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4e3f000/0x0/0x1bfc00000, data 0x28a5899/0x2acf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4413603 data_alloc: 218103808 data_used: 18546688
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.796961784s of 12.985931396s, submitted: 6
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404193280 unmapped: 56393728 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404652032 unmapped: 55934976 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404660224 unmapped: 55926784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4a11000/0x0/0x1bfc00000, data 0x2cd3899/0x2efd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404660224 unmapped: 55926784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404660224 unmapped: 55926784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4448949 data_alloc: 234881024 data_used: 19124224
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404660224 unmapped: 55926784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404660224 unmapped: 55926784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4a0b000/0x0/0x1bfc00000, data 0x2cd9899/0x2f03000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4a0b000/0x0/0x1bfc00000, data 0x2cd9899/0x2f03000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404668416 unmapped: 55918592 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404676608 unmapped: 55910400 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404676608 unmapped: 55910400 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4447509 data_alloc: 234881024 data_used: 19128320
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.086083412s of 10.270460129s, submitted: 94
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404692992 unmapped: 55894016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405872640 unmapped: 54714368 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a45fb000/0x0/0x1bfc00000, data 0x2cd9899/0x2f03000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a45fb000/0x0/0x1bfc00000, data 0x2cd9899/0x2f03000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4447509 data_alloc: 234881024 data_used: 19128320
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a45fb000/0x0/0x1bfc00000, data 0x2cd9899/0x2f03000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a45fb000/0x0/0x1bfc00000, data 0x2cd9899/0x2f03000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a45fb000/0x0/0x1bfc00000, data 0x2cd9899/0x2f03000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4447509 data_alloc: 234881024 data_used: 19128320
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a45fb000/0x0/0x1bfc00000, data 0x2cd9899/0x2f03000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.769052505s of 12.658998489s, submitted: 296
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a45fb000/0x0/0x1bfc00000, data 0x2cd9899/0x2f03000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4448181 data_alloc: 234881024 data_used: 19136512
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba56cc800 session 0x55fba3758f00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba332ec00 session 0x55fba26ed4a0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a45fb000/0x0/0x1bfc00000, data 0x2cd9899/0x2f03000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405905408 unmapped: 54681600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405905408 unmapped: 54681600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405905408 unmapped: 54681600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405905408 unmapped: 54681600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405905408 unmapped: 54681600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4447625 data_alloc: 234881024 data_used: 19136512
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405905408 unmapped: 54681600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a45fb000/0x0/0x1bfc00000, data 0x2cd9899/0x2f03000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405905408 unmapped: 54681600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405913600 unmapped: 54673408 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405913600 unmapped: 54673408 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.277843475s of 11.294801712s, submitted: 4
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 419 ms_handle_reset con 0x55fba3d03400 session 0x55fba3c923c0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405921792 unmapped: 54665216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4453631 data_alloc: 234881024 data_used: 19140608
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 419 ms_handle_reset con 0x55fba004c800 session 0x55fba2683c20
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405921792 unmapped: 54665216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 419 ms_handle_reset con 0x55fba23d6c00 session 0x55fba3c92b40
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a45f6000/0x0/0x1bfc00000, data 0x2cdb554/0x2f07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405929984 unmapped: 54657024 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405954560 unmapped: 54632448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405954560 unmapped: 54632448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a45f6000/0x0/0x1bfc00000, data 0x2cdb554/0x2f07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405954560 unmapped: 54632448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4454403 data_alloc: 234881024 data_used: 19210240
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405954560 unmapped: 54632448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a45f6000/0x0/0x1bfc00000, data 0x2cdb554/0x2f07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405954560 unmapped: 54632448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405954560 unmapped: 54632448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 420 ms_handle_reset con 0x55fba56cc800 session 0x55fba328a000
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405954560 unmapped: 54632448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405954560 unmapped: 54632448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a45f3000/0x0/0x1bfc00000, data 0x2cdd19f/0x2f09000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4456807 data_alloc: 234881024 data_used: 19218432
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a45f3000/0x0/0x1bfc00000, data 0x2cdd19f/0x2f09000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405954560 unmapped: 54632448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405954560 unmapped: 54632448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405954560 unmapped: 54632448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.155465126s of 13.584081650s, submitted: 19
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a45f3000/0x0/0x1bfc00000, data 0x2cdd19f/0x2f09000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405987328 unmapped: 54599680 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a45f5000/0x0/0x1bfc00000, data 0x2cdd19f/0x2f09000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406020096 unmapped: 54566912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4476989 data_alloc: 234881024 data_used: 20905984
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a45f1000/0x0/0x1bfc00000, data 0x2cdecde/0x2f0c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406020096 unmapped: 54566912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406020096 unmapped: 54566912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406028288 unmapped: 54558720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406028288 unmapped: 54558720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a45f1000/0x0/0x1bfc00000, data 0x2cdecde/0x2f0c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406028288 unmapped: 54558720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4477789 data_alloc: 234881024 data_used: 20926464
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406028288 unmapped: 54558720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406028288 unmapped: 54558720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406028288 unmapped: 54558720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a45f1000/0x0/0x1bfc00000, data 0x2cdecde/0x2f0c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.147567749s of 10.176469803s, submitted: 40
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba3d03400 session 0x55fba354a3c0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba332ec00 session 0x55fba354b0e0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406036480 unmapped: 54550528 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a45f2000/0x0/0x1bfc00000, data 0x2cdecde/0x2f0c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [0,0,0,1])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba004c800 session 0x55fba352a3c0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406044672 unmapped: 54542336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406044672 unmapped: 54542336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406044672 unmapped: 54542336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406044672 unmapped: 54542336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406044672 unmapped: 54542336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406044672 unmapped: 54542336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406044672 unmapped: 54542336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406044672 unmapped: 54542336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406044672 unmapped: 54542336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406052864 unmapped: 54534144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406052864 unmapped: 54534144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406052864 unmapped: 54534144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406052864 unmapped: 54534144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406052864 unmapped: 54534144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406052864 unmapped: 54534144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406061056 unmapped: 54525952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406061056 unmapped: 54525952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 54517760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 54517760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 54517760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406077440 unmapped: 54509568 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406077440 unmapped: 54509568 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406077440 unmapped: 54509568 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406077440 unmapped: 54509568 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 54501376 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 54501376 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 54501376 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 54501376 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 54501376 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406093824 unmapped: 54493184 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406093824 unmapped: 54493184 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406093824 unmapped: 54493184 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406093824 unmapped: 54493184 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 54484992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 54484992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 54484992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 54484992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 54484992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 54484992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 54484992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406110208 unmapped: 54476800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406110208 unmapped: 54476800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406110208 unmapped: 54476800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406110208 unmapped: 54476800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406110208 unmapped: 54476800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406110208 unmapped: 54476800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406110208 unmapped: 54476800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406118400 unmapped: 54468608 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406118400 unmapped: 54468608 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 54452224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 54452224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 54452224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 54452224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 54452224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 54452224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 54452224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 54452224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406142976 unmapped: 54444032 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406142976 unmapped: 54444032 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406142976 unmapped: 54444032 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406151168 unmapped: 54435840 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406151168 unmapped: 54435840 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406151168 unmapped: 54435840 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406151168 unmapped: 54435840 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406159360 unmapped: 54427648 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406159360 unmapped: 54427648 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba23d6c00 session 0x55fba08bc1e0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba3d03400 session 0x55fba04d9860
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba56cc800 session 0x55fba3261860
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406159360 unmapped: 54427648 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba332f800 session 0x55fba09f70e0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 68.158592224s of 68.289207458s, submitted: 32
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408330240 unmapped: 52256768 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba004c800 session 0x55fba16e9c20
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba23d6c00 session 0x55fba2394f00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba3d03400 session 0x55fba3784000
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba56cc800 session 0x55fba38794a0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba7585000 session 0x55fba04d92c0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406175744 unmapped: 54411264 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406175744 unmapped: 54411264 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4e39000/0x0/0x1bfc00000, data 0x2496cee/0x26c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406175744 unmapped: 54411264 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4351878 data_alloc: 218103808 data_used: 11661312
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406175744 unmapped: 54411264 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406183936 unmapped: 54403072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406183936 unmapped: 54403072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406183936 unmapped: 54403072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4e39000/0x0/0x1bfc00000, data 0x2496cee/0x26c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406183936 unmapped: 54403072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4351878 data_alloc: 218103808 data_used: 11661312
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406183936 unmapped: 54403072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406183936 unmapped: 54403072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406183936 unmapped: 54403072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4e39000/0x0/0x1bfc00000, data 0x2496cee/0x26c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406183936 unmapped: 54403072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406192128 unmapped: 54394880 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4351878 data_alloc: 218103808 data_used: 11661312
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406192128 unmapped: 54394880 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406192128 unmapped: 54394880 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406192128 unmapped: 54394880 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406192128 unmapped: 54394880 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4e39000/0x0/0x1bfc00000, data 0x2496cee/0x26c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406192128 unmapped: 54394880 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4351878 data_alloc: 218103808 data_used: 11661312
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406192128 unmapped: 54394880 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406192128 unmapped: 54394880 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406593536 unmapped: 53993472 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406593536 unmapped: 53993472 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4e39000/0x0/0x1bfc00000, data 0x2496cee/0x26c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406593536 unmapped: 53993472 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4371238 data_alloc: 218103808 data_used: 14352384
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406593536 unmapped: 53993472 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406593536 unmapped: 53993472 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406593536 unmapped: 53993472 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4e39000/0x0/0x1bfc00000, data 0x2496cee/0x26c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406593536 unmapped: 53993472 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406593536 unmapped: 53993472 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4371238 data_alloc: 218103808 data_used: 14352384
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406601728 unmapped: 53985280 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406601728 unmapped: 53985280 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406601728 unmapped: 53985280 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4e39000/0x0/0x1bfc00000, data 0x2496cee/0x26c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.134342194s of 32.618972778s, submitted: 26
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406601728 unmapped: 53985280 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407805952 unmapped: 52781056 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4405496 data_alloc: 218103808 data_used: 14360576
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404111360 unmapped: 56475648 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4ad5000/0x0/0x1bfc00000, data 0x27facee/0x2a29000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404578304 unmapped: 56008704 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404578304 unmapped: 56008704 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404529152 unmapped: 56057856 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405594112 unmapped: 54992896 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402330 data_alloc: 218103808 data_used: 14401536
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4a8d000/0x0/0x1bfc00000, data 0x283acee/0x2a69000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405594112 unmapped: 54992896 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4a8b000/0x0/0x1bfc00000, data 0x283ccee/0x2a6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405594112 unmapped: 54992896 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405594112 unmapped: 54992896 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405594112 unmapped: 54992896 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405594112 unmapped: 54992896 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402330 data_alloc: 218103808 data_used: 14401536
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4a8b000/0x0/0x1bfc00000, data 0x283ccee/0x2a6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405594112 unmapped: 54992896 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405594112 unmapped: 54992896 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405594112 unmapped: 54992896 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405594112 unmapped: 54992896 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405594112 unmapped: 54992896 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402330 data_alloc: 218103808 data_used: 14401536
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4a8b000/0x0/0x1bfc00000, data 0x283ccee/0x2a6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405602304 unmapped: 54984704 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405602304 unmapped: 54984704 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405602304 unmapped: 54984704 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405602304 unmapped: 54984704 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402330 data_alloc: 218103808 data_used: 14401536
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4a8b000/0x0/0x1bfc00000, data 0x283ccee/0x2a6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4a8b000/0x0/0x1bfc00000, data 0x283ccee/0x2a6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4a8b000/0x0/0x1bfc00000, data 0x283ccee/0x2a6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402330 data_alloc: 218103808 data_used: 14401536
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405618688 unmapped: 54968320 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405618688 unmapped: 54968320 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.385208130s of 28.853710175s, submitted: 67
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405635072 unmapped: 54951936 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405635072 unmapped: 54951936 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405635072 unmapped: 54951936 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402122 data_alloc: 218103808 data_used: 14393344
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4a93000/0x0/0x1bfc00000, data 0x283ccee/0x2a6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405635072 unmapped: 54951936 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba004c800 session 0x55fba0462000
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4a93000/0x0/0x1bfc00000, data 0x283ccee/0x2a6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402050 data_alloc: 218103808 data_used: 14393344
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba23d6c00 session 0x55fba3785680
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.038470268s of 10.760538101s, submitted: 24
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405651456 unmapped: 54935552 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 422 ms_handle_reset con 0x55fba3d03400 session 0x55fba234be00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 422 ms_handle_reset con 0x55fba56cc800 session 0x55fba0b08780
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 422 ms_handle_reset con 0x55fba7585000 session 0x55fba3759a40
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405651456 unmapped: 54935552 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402336 data_alloc: 218103808 data_used: 14401536
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a4a8f000/0x0/0x1bfc00000, data 0x283e947/0x2a6e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405651456 unmapped: 54935552 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405659648 unmapped: 54927360 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405659648 unmapped: 54927360 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405659648 unmapped: 54927360 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405659648 unmapped: 54927360 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4403296 data_alloc: 218103808 data_used: 14512128
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405659648 unmapped: 54927360 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a4a8f000/0x0/0x1bfc00000, data 0x283e947/0x2a6e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405659648 unmapped: 54927360 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a4a8f000/0x0/0x1bfc00000, data 0x283e947/0x2a6e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405659648 unmapped: 54927360 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 422 ms_handle_reset con 0x55fba004c800 session 0x55fba2ed6960
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405659648 unmapped: 54927360 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405659648 unmapped: 54927360 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4403296 data_alloc: 218103808 data_used: 14512128
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405659648 unmapped: 54927360 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a4a8f000/0x0/0x1bfc00000, data 0x283e947/0x2a6e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405667840 unmapped: 54919168 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405667840 unmapped: 54919168 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a4a8f000/0x0/0x1bfc00000, data 0x283e947/0x2a6e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405667840 unmapped: 54919168 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405667840 unmapped: 54919168 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 422 ms_handle_reset con 0x55fba23d6c00 session 0x55fba16e90e0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4403296 data_alloc: 218103808 data_used: 14512128
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.117155075s of 17.333608627s, submitted: 25
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 422 ms_handle_reset con 0x55fba3d03400 session 0x55fba26a90e0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405667840 unmapped: 54919168 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405667840 unmapped: 54919168 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405676032 unmapped: 54910976 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a4a8c000/0x0/0x1bfc00000, data 0x2e7e5f4/0x2a71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405684224 unmapped: 54902784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a4a8c000/0x0/0x1bfc00000, data 0x2e7e5f4/0x2a71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405684224 unmapped: 54902784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441032 data_alloc: 218103808 data_used: 14520320
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405684224 unmapped: 54902784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 423 ms_handle_reset con 0x55fba56cc800 session 0x55fba3785a40
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405684224 unmapped: 54902784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a4a8d000/0x0/0x1bfc00000, data 0x28405f4/0x2a71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405684224 unmapped: 54902784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405684224 unmapped: 54902784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405684224 unmapped: 54902784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4426062 data_alloc: 218103808 data_used: 14520320
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a4a8d000/0x0/0x1bfc00000, data 0x28405f4/0x2a71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4426062 data_alloc: 218103808 data_used: 14520320
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.353925705s of 15.476061821s, submitted: 24
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4a89000/0x0/0x1bfc00000, data 0x2842133/0x2a74000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4a89000/0x0/0x1bfc00000, data 0x2842133/0x2a74000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405618688 unmapped: 54968320 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405618688 unmapped: 54968320 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4430236 data_alloc: 218103808 data_used: 14528512
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405618688 unmapped: 54968320 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405618688 unmapped: 54968320 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405618688 unmapped: 54968320 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4a70000/0x0/0x1bfc00000, data 0x2842133/0x2a74000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405618688 unmapped: 54968320 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405618688 unmapped: 54968320 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444210 data_alloc: 218103808 data_used: 15032320
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405618688 unmapped: 54968320 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405626880 unmapped: 54960128 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4a70000/0x0/0x1bfc00000, data 0x2842133/0x2a74000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405626880 unmapped: 54960128 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405626880 unmapped: 54960128 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405626880 unmapped: 54960128 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444210 data_alloc: 218103808 data_used: 15032320
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405626880 unmapped: 54960128 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4a70000/0x0/0x1bfc00000, data 0x2842133/0x2a74000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405635072 unmapped: 54951936 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405635072 unmapped: 54951936 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405635072 unmapped: 54951936 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405635072 unmapped: 54951936 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444210 data_alloc: 218103808 data_used: 15032320
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4a70000/0x0/0x1bfc00000, data 0x2842133/0x2a74000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405635072 unmapped: 54951936 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444210 data_alloc: 218103808 data_used: 15032320
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4a70000/0x0/0x1bfc00000, data 0x2842133/0x2a74000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.122325897s of 27.159317017s, submitted: 24
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405684224 unmapped: 54902784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba7585000 session 0x55fba354ad20
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405692416 unmapped: 54894592 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4443018 data_alloc: 218103808 data_used: 15032320
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 55508992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 55508992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4dfb000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 55508992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba004c800 session 0x55fba09f63c0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 55508992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 55508992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4362890 data_alloc: 218103808 data_used: 12701696
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4dfb000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4dfb000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 55508992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 55508992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405086208 unmapped: 55500800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405086208 unmapped: 55500800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405086208 unmapped: 55500800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4362890 data_alloc: 218103808 data_used: 12701696
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4dfb000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405020672 unmapped: 55566336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405020672 unmapped: 55566336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4dfb000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405020672 unmapped: 55566336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405028864 unmapped: 55558144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405028864 unmapped: 55558144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4362890 data_alloc: 218103808 data_used: 12701696
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405037056 unmapped: 55549952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4dfb000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405037056 unmapped: 55549952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405037056 unmapped: 55549952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405037056 unmapped: 55549952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405037056 unmapped: 55549952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4362890 data_alloc: 218103808 data_used: 12701696
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405037056 unmapped: 55549952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405037056 unmapped: 55549952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4dfb000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405037056 unmapped: 55549952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405037056 unmapped: 55549952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405037056 unmapped: 55549952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4362890 data_alloc: 218103808 data_used: 12701696
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405037056 unmapped: 55549952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405045248 unmapped: 55541760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4dfb000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405045248 unmapped: 55541760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405045248 unmapped: 55541760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4dfb000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405045248 unmapped: 55541760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4362890 data_alloc: 218103808 data_used: 12701696
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4dfb000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405053440 unmapped: 55533568 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4dfb000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405053440 unmapped: 55533568 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23d6c00 session 0x55fba09ee3c0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3d03400 session 0x55fba2e1d680
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba56cc800 session 0x55fba3784d20
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3d00400 session 0x55fba3c92b40
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405053440 unmapped: 55533568 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 33.370250702s of 35.421714783s, submitted: 24
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404201472 unmapped: 56385536 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba004c800 session 0x55fba354b0e0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040d1/0x2435000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404430848 unmapped: 56156160 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23d6c00 session 0x55fba045e5a0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3d03400 session 0x55fba26a8000
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4352964 data_alloc: 218103808 data_used: 11689984
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba56cc800 session 0x55fba3cb1e00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba332f400 session 0x55fba2ed50e0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404430848 unmapped: 56156160 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404430848 unmapped: 56156160 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404430848 unmapped: 56156160 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404430848 unmapped: 56156160 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a5062000/0x0/0x1bfc00000, data 0x226a0f4/0x249c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404381696 unmapped: 56205312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba004c800 session 0x55fba16e9860
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4354265 data_alloc: 218103808 data_used: 11689984
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404381696 unmapped: 56205312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404381696 unmapped: 56205312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404381696 unmapped: 56205312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404381696 unmapped: 56205312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a5062000/0x0/0x1bfc00000, data 0x226a0f4/0x249c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404381696 unmapped: 56205312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4355545 data_alloc: 218103808 data_used: 11816960
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404381696 unmapped: 56205312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404381696 unmapped: 56205312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404381696 unmapped: 56205312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404381696 unmapped: 56205312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a5062000/0x0/0x1bfc00000, data 0x226a0f4/0x249c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404389888 unmapped: 56197120 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4355545 data_alloc: 218103808 data_used: 11816960
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a5062000/0x0/0x1bfc00000, data 0x226a0f4/0x249c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404389888 unmapped: 56197120 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404389888 unmapped: 56197120 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.212444305s of 19.736858368s, submitted: 10
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406609920 unmapped: 53977088 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406609920 unmapped: 53977088 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a495d000/0x0/0x1bfc00000, data 0x29670f4/0x2b99000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407134208 unmapped: 53452800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416727 data_alloc: 218103808 data_used: 12005376
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407134208 unmapped: 53452800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407134208 unmapped: 53452800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a48e7000/0x0/0x1bfc00000, data 0x29e50f4/0x2c17000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407134208 unmapped: 53452800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407142400 unmapped: 53444608 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 53321728 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416491 data_alloc: 218103808 data_used: 12005376
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 53321728 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a48c6000/0x0/0x1bfc00000, data 0x2a060f4/0x2c38000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 53321728 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 53321728 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 53313536 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 53313536 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416491 data_alloc: 218103808 data_used: 12005376
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 53313536 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.595334053s of 13.891137123s, submitted: 62
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a48c6000/0x0/0x1bfc00000, data 0x2a060f4/0x2c38000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a48c0000/0x0/0x1bfc00000, data 0x2a0c0f4/0x2c3e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a48c0000/0x0/0x1bfc00000, data 0x2a0c0f4/0x2c3e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a48c0000/0x0/0x1bfc00000, data 0x2a0c0f4/0x2c3e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 53297152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416547 data_alloc: 218103808 data_used: 12005376
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 53297152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 53297152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23d6c00 session 0x55fba0b12780
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 53297152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 53297152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 53297152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a48bd000/0x0/0x1bfc00000, data 0x2a0f0f4/0x2c41000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416623 data_alloc: 218103808 data_used: 12005376
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 53297152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 53297152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407298048 unmapped: 53288960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407298048 unmapped: 53288960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a48bd000/0x0/0x1bfc00000, data 0x2a0f0f4/0x2c41000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407298048 unmapped: 53288960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416623 data_alloc: 218103808 data_used: 12005376
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407298048 unmapped: 53288960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407298048 unmapped: 53288960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407306240 unmapped: 53280768 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3d03400 session 0x55fba0b04b40
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.955385208s of 16.972343445s, submitted: 4
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba56cc800 session 0x55fba0b134a0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407322624 unmapped: 53264384 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407322624 unmapped: 53264384 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4351257 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407322624 unmapped: 53264384 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407322624 unmapped: 53264384 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407330816 unmapped: 53256192 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407330816 unmapped: 53256192 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407330816 unmapped: 53256192 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4351257 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407330816 unmapped: 53256192 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407330816 unmapped: 53256192 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407330816 unmapped: 53256192 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407330816 unmapped: 53256192 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407339008 unmapped: 53248000 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4351257 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407339008 unmapped: 53248000 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407339008 unmapped: 53248000 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407339008 unmapped: 53248000 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407339008 unmapped: 53248000 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407339008 unmapped: 53248000 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4351257 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407339008 unmapped: 53248000 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407339008 unmapped: 53248000 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407347200 unmapped: 53239808 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407347200 unmapped: 53239808 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407347200 unmapped: 53239808 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4351257 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407355392 unmapped: 53231616 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407355392 unmapped: 53231616 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba686f800 session 0x55fba399d2c0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba004c800 session 0x55fba399c5a0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23d6c00 session 0x55fba399cf00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3d03400 session 0x55fba399cb40
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.548852921s of 24.614151001s, submitted: 18
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407355392 unmapped: 53231616 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba56cc800 session 0x55fba399c1e0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba2e5f800 session 0x55fba37c9c20
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba004c800 session 0x55fba37c9a40
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23d6c00 session 0x55fba37c9680
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba2e5f800 session 0x55fba37c9860
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407355392 unmapped: 53231616 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407355392 unmapped: 53231616 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404667 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407363584 unmapped: 53223424 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3d03400 session 0x55fba37c94a0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a49c3000/0x0/0x1bfc00000, data 0x290a0d1/0x2b3b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba56cc800 session 0x55fba37c9e00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407363584 unmapped: 53223424 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba004c800 session 0x55fba26ecb40
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23d6c00 session 0x55fba26ed860
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407511040 unmapped: 53075968 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 53321728 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4459695 data_alloc: 218103808 data_used: 19058688
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a499f000/0x0/0x1bfc00000, data 0x292e0d1/0x2b5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a499f000/0x0/0x1bfc00000, data 0x292e0d1/0x2b5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4459695 data_alloc: 218103808 data_used: 19058688
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a499f000/0x0/0x1bfc00000, data 0x292e0d1/0x2b5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.001239777s of 17.084104538s, submitted: 8
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407863296 unmapped: 52723712 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4465407 data_alloc: 218103808 data_used: 19075072
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405626880 unmapped: 54960128 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a45fa000/0x0/0x1bfc00000, data 0x2cc50d1/0x2ef6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4505947 data_alloc: 218103808 data_used: 19189760
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a45fa000/0x0/0x1bfc00000, data 0x2cc50d1/0x2ef6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4505947 data_alloc: 218103808 data_used: 19189760
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a45fa000/0x0/0x1bfc00000, data 0x2cc50d1/0x2ef6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 54861824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4505947 data_alloc: 218103808 data_used: 19189760
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 54861824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a45fa000/0x0/0x1bfc00000, data 0x2cc50d1/0x2ef6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 54861824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 54861824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 54861824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 54861824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4505947 data_alloc: 218103808 data_used: 19189760
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a45fa000/0x0/0x1bfc00000, data 0x2cc50d1/0x2ef6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 54861824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 54861824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 54861824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a45fa000/0x0/0x1bfc00000, data 0x2cc50d1/0x2ef6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba2e5f800 session 0x55fba26ec3c0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3d03400 session 0x55fba2ed6000
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 54861824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 54861824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4505947 data_alloc: 218103808 data_used: 19189760
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a45fa000/0x0/0x1bfc00000, data 0x2cc50d1/0x2ef6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.626197815s of 26.226358414s, submitted: 40
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba686fc00 session 0x55fba37c8f00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4361547 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4361547 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4361547 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4361547 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4361547 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4361547 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba004c800 session 0x55fba3767e00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23d6c00 session 0x55fba1d2a960
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba2e5f800 session 0x55fba399d4a0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3d03400 session 0x55fba37581e0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.153734207s of 31.213592529s, submitted: 13
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3338c00 session 0x55fba0b0a1e0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404676608 unmapped: 55910400 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba004c800 session 0x55fba3260780
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404684800 unmapped: 55902208 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4b18000/0x0/0x1bfc00000, data 0x27b5123/0x29e6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404684800 unmapped: 55902208 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415032 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404684800 unmapped: 55902208 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404684800 unmapped: 55902208 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4b18000/0x0/0x1bfc00000, data 0x27b5123/0x29e6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404684800 unmapped: 55902208 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404684800 unmapped: 55902208 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404684800 unmapped: 55902208 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4b18000/0x0/0x1bfc00000, data 0x27b5123/0x29e6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415032 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404684800 unmapped: 55902208 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404692992 unmapped: 55894016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404692992 unmapped: 55894016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404692992 unmapped: 55894016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404692992 unmapped: 55894016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4b18000/0x0/0x1bfc00000, data 0x27b5123/0x29e6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4447192 data_alloc: 218103808 data_used: 16142336
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 55885824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 55885824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 55885824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4b18000/0x0/0x1bfc00000, data 0x27b5123/0x29e6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 55885824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 55885824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4447192 data_alloc: 218103808 data_used: 16142336
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 55885824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 55885824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 55885824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 55885824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4b18000/0x0/0x1bfc00000, data 0x27b5123/0x29e6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 55885824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4447192 data_alloc: 218103808 data_used: 16142336
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4b18000/0x0/0x1bfc00000, data 0x27b5123/0x29e6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 55885824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.331642151s of 23.421398163s, submitted: 38
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411361280 unmapped: 49225728 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411811840 unmapped: 48775168 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411811840 unmapped: 48775168 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411811840 unmapped: 48775168 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4568398 data_alloc: 218103808 data_used: 17227776
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411811840 unmapped: 48775168 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3c9b000/0x0/0x1bfc00000, data 0x3631123/0x3862000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411811840 unmapped: 48775168 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411942912 unmapped: 48644096 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411942912 unmapped: 48644096 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411942912 unmapped: 48644096 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4559734 data_alloc: 218103808 data_used: 17231872
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411959296 unmapped: 48627712 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3c7b000/0x0/0x1bfc00000, data 0x3652123/0x3883000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.038951874s of 10.530030251s, submitted: 129
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23d6c00 session 0x55fba0463860
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3c64000/0x0/0x1bfc00000, data 0x3669123/0x389a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4561130 data_alloc: 218103808 data_used: 17244160
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3c64000/0x0/0x1bfc00000, data 0x3669123/0x389a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4562890 data_alloc: 218103808 data_used: 17440768
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 61K writes, 226K keys, 61K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.03 MB/s#012Cumulative WAL: 61K writes, 23K syncs, 2.63 writes per sync, written: 0.21 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1936 writes, 7029 keys, 1936 commit groups, 1.0 writes per commit group, ingest: 7.72 MB, 0.01 MB/s#012Interval WAL: 1936 writes, 756 syncs, 2.56 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
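The multi-line RocksDB stats dump above arrives at syslog as a single record, with embedded newlines escaped as `#012` (octal control-character escaping as done by rsyslog; octal 012 is `\n`). A minimal sketch, assuming plain `#NNN` octal escapes, that restores the original line breaks:

```python
import re

def unescape_octal(msg: str) -> str:
    """Replace rsyslog-style #NNN octal escapes with the characters they encode."""
    return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), msg)

# '#012' is octal 12 == decimal 10 == '\n', so this prints two lines.
sample = "** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval"
print(unescape_octal(sample))
```

Applied to the record above, this recovers the DB Stats block as RocksDB originally emitted it, one statistic per line (cumulative and interval writes, WAL syncs, and stall time).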
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413016064 unmapped: 47570944 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3c64000/0x0/0x1bfc00000, data 0x3669123/0x389a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4562890 data_alloc: 218103808 data_used: 17440768
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413024256 unmapped: 47562752 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.942539215s of 15.453146935s, submitted: 13
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 47554560 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3c62000/0x0/0x1bfc00000, data 0x3669123/0x389a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 47554560 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 47554560 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 47554560 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4583378 data_alloc: 218103808 data_used: 19492864
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 47554560 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 47554560 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 47554560 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3c62000/0x0/0x1bfc00000, data 0x3669123/0x389a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba2e5f800 session 0x55fba3cb0d20
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 47554560 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: mgrc ms_handle_reset ms_handle_reset con 0x55fba3d03c00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/530399322
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/530399322,v1:192.168.122.100:6801/530399322]
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: mgrc handle_mgr_configure stats_period=5
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3d03400 session 0x55fba3784b40
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23a3400 session 0x55fba09743c0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410116096 unmapped: 50470912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373492 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410116096 unmapped: 50470912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410116096 unmapped: 50470912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410116096 unmapped: 50470912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410116096 unmapped: 50470912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410116096 unmapped: 50470912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373492 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410116096 unmapped: 50470912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410116096 unmapped: 50470912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410116096 unmapped: 50470912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410116096 unmapped: 50470912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410116096 unmapped: 50470912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373492 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373492 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373492 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373492 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373492 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373492 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 50454528 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 50454528 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373492 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 50454528 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 50454528 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 50454528 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 50454528 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 50454528 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373492 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 50454528 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410140672 unmapped: 50446336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 55.570503235s of 55.644828796s, submitted: 29
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23a3400 session 0x55fba3c93860
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410148864 unmapped: 50438144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4ebd000/0x0/0x1bfc00000, data 0x24110c1/0x2641000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410148864 unmapped: 50438144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410148864 unmapped: 50438144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4391918 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4ebd000/0x0/0x1bfc00000, data 0x24110c1/0x2641000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410148864 unmapped: 50438144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410148864 unmapped: 50438144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4ebd000/0x0/0x1bfc00000, data 0x24110c1/0x2641000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410148864 unmapped: 50438144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4ebd000/0x0/0x1bfc00000, data 0x24110c1/0x2641000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410157056 unmapped: 50429952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410157056 unmapped: 50429952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23d6c00 session 0x55fba3cb0f00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4391918 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410157056 unmapped: 50429952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba2e5f800 session 0x55fba37c92c0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410157056 unmapped: 50429952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4ebd000/0x0/0x1bfc00000, data 0x24110c1/0x2641000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3d03400 session 0x55fba3c93c20
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba5994000 session 0x55fba3260000
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410157056 unmapped: 50429952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410157056 unmapped: 50429952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410157056 unmapped: 50429952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406638 data_alloc: 218103808 data_used: 13824000
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411377664 unmapped: 49209344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4ebd000/0x0/0x1bfc00000, data 0x24110c1/0x2641000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411377664 unmapped: 49209344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411385856 unmapped: 49201152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411385856 unmapped: 49201152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411385856 unmapped: 49201152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406638 data_alloc: 218103808 data_used: 13824000
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411385856 unmapped: 49201152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411385856 unmapped: 49201152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4ebd000/0x0/0x1bfc00000, data 0x24110c1/0x2641000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411385856 unmapped: 49201152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411385856 unmapped: 49201152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411385856 unmapped: 49201152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406638 data_alloc: 218103808 data_used: 13824000
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 49192960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.814170837s of 23.856830597s, submitted: 4
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413089792 unmapped: 47497216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413089792 unmapped: 47497216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4731000/0x0/0x1bfc00000, data 0x2b9d0c1/0x2dcd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413089792 unmapped: 47497216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413089792 unmapped: 47497216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4729000/0x0/0x1bfc00000, data 0x2ba50c1/0x2dd5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4480804 data_alloc: 218103808 data_used: 14610432
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413089792 unmapped: 47497216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413089792 unmapped: 47497216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413089792 unmapped: 47497216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413089792 unmapped: 47497216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413089792 unmapped: 47497216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4481592 data_alloc: 218103808 data_used: 14622720
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413089792 unmapped: 47497216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4727000/0x0/0x1bfc00000, data 0x2ba60c1/0x2dd6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413097984 unmapped: 47489024 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23a3400 session 0x55fba3c92d20
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413097984 unmapped: 47489024 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.167284012s of 12.376974106s, submitted: 47
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23d6c00 session 0x55fba26a85a0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408592384 unmapped: 51994624 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba2e5f800 session 0x55fba3c92000
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408903680 unmapped: 54837248 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3d03400 session 0x55fba3b6d680
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a46e5000/0x0/0x1bfc00000, data 0x2be90c1/0x2e19000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4452064 data_alloc: 218103808 data_used: 11685888
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3561000 session 0x55fba08bd680
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408903680 unmapped: 54837248 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408903680 unmapped: 54837248 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23a3400 session 0x55fba15b01e0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408903680 unmapped: 54837248 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23d6c00 session 0x55fba09f9860
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408903680 unmapped: 54837248 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408911872 unmapped: 54829056 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a46e5000/0x0/0x1bfc00000, data 0x2be90c1/0x2e19000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525344 data_alloc: 234881024 data_used: 21975040
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408977408 unmapped: 54763520 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408977408 unmapped: 54763520 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408977408 unmapped: 54763520 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a46e5000/0x0/0x1bfc00000, data 0x2be90c1/0x2e19000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408977408 unmapped: 54763520 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408977408 unmapped: 54763520 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525344 data_alloc: 234881024 data_used: 21975040
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408977408 unmapped: 54763520 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408977408 unmapped: 54763520 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408977408 unmapped: 54763520 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408977408 unmapped: 54763520 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a46e5000/0x0/0x1bfc00000, data 0x2be90c1/0x2e19000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408977408 unmapped: 54763520 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.264961243s of 17.363716125s, submitted: 16
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4570916 data_alloc: 234881024 data_used: 21975040
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414523392 unmapped: 49217536 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba28a5800 session 0x55fba38790e0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411533312 unmapped: 52207616 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411541504 unmapped: 52199424 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3c3f000/0x0/0x1bfc00000, data 0x368f0c1/0x38bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411541504 unmapped: 52199424 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411607040 unmapped: 52133888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4607832 data_alloc: 234881024 data_used: 22740992
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411607040 unmapped: 52133888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411607040 unmapped: 52133888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411639808 unmapped: 52101120 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3c3f000/0x0/0x1bfc00000, data 0x368f0c1/0x38bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411639808 unmapped: 52101120 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411705344 unmapped: 52035584 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.158336163s of 10.001039505s, submitted: 278
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4607480 data_alloc: 234881024 data_used: 22740992
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411705344 unmapped: 52035584 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411721728 unmapped: 52019200 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba2e5f800 session 0x55fba1737860
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411721728 unmapped: 52019200 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3c3f000/0x0/0x1bfc00000, data 0x368f0c1/0x38bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [0,0,0,0,0,0,0,2])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411738112 unmapped: 52002816 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408485888 unmapped: 55255040 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba28a5800 session 0x55fba045fe00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4386748 data_alloc: 218103808 data_used: 11706368
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4386748 data_alloc: 218103808 data_used: 11706368
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4386748 data_alloc: 218103808 data_used: 11706368
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4386748 data_alloc: 218103808 data_used: 11706368
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 55205888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 55205888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 55205888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 55205888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 55205888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4386748 data_alloc: 218103808 data_used: 11706368
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 55205888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 55205888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 55205888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 55205888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 55205888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4386748 data_alloc: 218103808 data_used: 11706368
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 55205888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 55205888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.407199860s of 31.508493423s, submitted: 205
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040ea/0x2435000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3633400 session 0x55fba3758b40
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408543232 unmapped: 55197696 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23a3400 session 0x55fba354be00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a5022000/0x0/0x1bfc00000, data 0x22ab123/0x24dc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408543232 unmapped: 55197696 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408543232 unmapped: 55197696 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4401153 data_alloc: 218103808 data_used: 11706368
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408543232 unmapped: 55197696 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a5022000/0x0/0x1bfc00000, data 0x22ab123/0x24dc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408543232 unmapped: 55197696 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408543232 unmapped: 55197696 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408543232 unmapped: 55197696 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23d6c00 session 0x55fba23870e0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4401638 data_alloc: 218103808 data_used: 11706368
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a5021000/0x0/0x1bfc00000, data 0x22ab146/0x24dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4405638 data_alloc: 218103808 data_used: 12288000
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a5021000/0x0/0x1bfc00000, data 0x22ab146/0x24dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a5021000/0x0/0x1bfc00000, data 0x22ab146/0x24dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a5021000/0x0/0x1bfc00000, data 0x22ab146/0x24dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4405638 data_alloc: 218103808 data_used: 12288000
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408559616 unmapped: 55181312 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a5021000/0x0/0x1bfc00000, data 0x22ab146/0x24dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.829019547s of 21.756328583s, submitted: 30
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408576000 unmapped: 55164928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410378240 unmapped: 53362688 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4440832 data_alloc: 218103808 data_used: 12500992
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4c98000/0x0/0x1bfc00000, data 0x2626146/0x2858000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410378240 unmapped: 53362688 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410378240 unmapped: 53362688 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410755072 unmapped: 52985856 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410755072 unmapped: 52985856 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4c6f000/0x0/0x1bfc00000, data 0x2657146/0x2889000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410755072 unmapped: 52985856 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444738 data_alloc: 218103808 data_used: 12320768
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410755072 unmapped: 52985856 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410755072 unmapped: 52985856 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4c6f000/0x0/0x1bfc00000, data 0x2657146/0x2889000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410755072 unmapped: 52985856 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410755072 unmapped: 52985856 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.852149963s of 11.262113571s, submitted: 75
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410755072 unmapped: 52985856 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4443138 data_alloc: 218103808 data_used: 12324864
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410755072 unmapped: 52985856 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410755072 unmapped: 52985856 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410755072 unmapped: 52985856 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4c72000/0x0/0x1bfc00000, data 0x265a146/0x288c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [1])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4c71000/0x0/0x1bfc00000, data 0x265b146/0x288d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4447094 data_alloc: 218103808 data_used: 12312576
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4c71000/0x0/0x1bfc00000, data 0x265b146/0x288d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba28a5800 session 0x55fba0b13c20
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4c71000/0x0/0x1bfc00000, data 0x265b146/0x288d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4447022 data_alloc: 218103808 data_used: 12312576
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4c71000/0x0/0x1bfc00000, data 0x265b146/0x288d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.434495926s of 12.933977127s, submitted: 31
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 425 ms_handle_reset con 0x55fba2e5f800 session 0x55fb9ffc8000
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a4c6c000/0x0/0x1bfc00000, data 0x265ce01/0x2891000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4448596 data_alloc: 218103808 data_used: 12316672
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a4c6c000/0x0/0x1bfc00000, data 0x265ce01/0x2891000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4449716 data_alloc: 218103808 data_used: 12439552
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a4c6c000/0x0/0x1bfc00000, data 0x265ce01/0x2891000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4449716 data_alloc: 218103808 data_used: 12439552
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a4c6c000/0x0/0x1bfc00000, data 0x265ce01/0x2891000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.502620697s of 15.832051277s, submitted: 6
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 426 ms_handle_reset con 0x55fba3d02000 session 0x55fba37c8b40
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a4c6e000/0x0/0x1bfc00000, data 0x265cd9f/0x2890000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410771456 unmapped: 52969472 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460144 data_alloc: 218103808 data_used: 13041664
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410771456 unmapped: 52969472 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a4c6a000/0x0/0x1bfc00000, data 0x265ea4c/0x2893000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410771456 unmapped: 52969472 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410771456 unmapped: 52969472 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410771456 unmapped: 52969472 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410779648 unmapped: 52961280 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4459456 data_alloc: 218103808 data_used: 13041664
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a4c66000/0x0/0x1bfc00000, data 0x2663a4c/0x2898000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410779648 unmapped: 52961280 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410779648 unmapped: 52961280 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410779648 unmapped: 52961280 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410779648 unmapped: 52961280 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.013331413s of 11.288456917s, submitted: 38
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410779648 unmapped: 52961280 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4462206 data_alloc: 218103808 data_used: 13053952
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410779648 unmapped: 52961280 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4c62000/0x0/0x1bfc00000, data 0x266558b/0x289b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba37c9680
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410779648 unmapped: 52961280 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410779648 unmapped: 52961280 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410779648 unmapped: 52961280 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4853000/0x0/0x1bfc00000, data 0x266558b/0x289b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410779648 unmapped: 52961280 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4461414 data_alloc: 218103808 data_used: 13058048
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410779648 unmapped: 52961280 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba354a780
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406262 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406262 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406262 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406262 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406262 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406262 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406262 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 52936704 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 52936704 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 52936704 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406262 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 52936704 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 52936704 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 52936704 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 52936704 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 52936704 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406262 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 52936704 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 52936704 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 52936704 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 52936704 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410812416 unmapped: 52928512 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406262 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410812416 unmapped: 52928512 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410812416 unmapped: 52928512 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410812416 unmapped: 52928512 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 56.423934937s of 58.936065674s, submitted: 37
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415596544 unmapped: 48144384 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba3759860
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a452a000/0x0/0x1bfc00000, data 0x26fb506/0x292f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410836992 unmapped: 52903936 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a47bf000/0x0/0x1bfc00000, data 0x26fb506/0x292f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4448980 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba28a5800 session 0x55fba3784b40
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410836992 unmapped: 52903936 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410836992 unmapped: 52903936 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba2e5f800 session 0x55fba32e3860
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410836992 unmapped: 52903936 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a47bf000/0x0/0x1bfc00000, data 0x26fb506/0x292f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba3c93c20
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba37c94a0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410836992 unmapped: 52903936 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410836992 unmapped: 52903936 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4451458 data_alloc: 218103808 data_used: 11796480
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410836992 unmapped: 52903936 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411033600 unmapped: 52707328 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411033600 unmapped: 52707328 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a47be000/0x0/0x1bfc00000, data 0x26fb516/0x2930000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411033600 unmapped: 52707328 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411033600 unmapped: 52707328 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486978 data_alloc: 218103808 data_used: 16818176
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a47be000/0x0/0x1bfc00000, data 0x26fb516/0x2930000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411033600 unmapped: 52707328 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411033600 unmapped: 52707328 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411033600 unmapped: 52707328 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411033600 unmapped: 52707328 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a47be000/0x0/0x1bfc00000, data 0x26fb516/0x2930000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411033600 unmapped: 52707328 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486978 data_alloc: 218103808 data_used: 16818176
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411033600 unmapped: 52707328 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.668106079s of 18.706832886s, submitted: 11
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413728768 unmapped: 50012160 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415031296 unmapped: 48709632 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415604736 unmapped: 48136192 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3c92000/0x0/0x1bfc00000, data 0x3227516/0x345c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415604736 unmapped: 48136192 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4579124 data_alloc: 218103808 data_used: 17281024
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415604736 unmapped: 48136192 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416669696 unmapped: 47071232 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416669696 unmapped: 47071232 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416669696 unmapped: 47071232 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4f000/0x0/0x1bfc00000, data 0x32ca516/0x34ff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416669696 unmapped: 47071232 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4580874 data_alloc: 218103808 data_used: 17285120
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416669696 unmapped: 47071232 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.567571640s of 10.119446754s, submitted: 95
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 47063040 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 47063040 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4c000/0x0/0x1bfc00000, data 0x32cd516/0x3502000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 47063040 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4b000/0x0/0x1bfc00000, data 0x32ce516/0x3503000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 47063040 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4580462 data_alloc: 218103808 data_used: 17293312
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 47063040 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba32603c0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba28a5800 session 0x55fba32610e0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 47063040 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 47063040 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4b000/0x0/0x1bfc00000, data 0x32ce516/0x3503000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 47063040 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 47063040 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4580462 data_alloc: 218103808 data_used: 17293312
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4b000/0x0/0x1bfc00000, data 0x32ce516/0x3503000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 47063040 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4b000/0x0/0x1bfc00000, data 0x32ce516/0x3503000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416686080 unmapped: 47054848 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4b000/0x0/0x1bfc00000, data 0x32ce516/0x3503000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba686e400 session 0x55fba0b0ad20
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416686080 unmapped: 47054848 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 47046656 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 47046656 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4581422 data_alloc: 218103808 data_used: 17391616
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 47046656 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 47046656 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 47046656 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4b000/0x0/0x1bfc00000, data 0x32ce516/0x3503000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 47046656 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4b000/0x0/0x1bfc00000, data 0x32ce516/0x3503000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 47046656 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4581422 data_alloc: 218103808 data_used: 17391616
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 47046656 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 47046656 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4b000/0x0/0x1bfc00000, data 0x32ce516/0x3503000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 47046656 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 47046656 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 47046656 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4b000/0x0/0x1bfc00000, data 0x32ce516/0x3503000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4581422 data_alloc: 218103808 data_used: 17391616
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.709341049s of 24.039503098s, submitted: 7
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416702464 unmapped: 47038464 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416702464 unmapped: 47038464 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416702464 unmapped: 47038464 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4b000/0x0/0x1bfc00000, data 0x32ce516/0x3503000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4606926 data_alloc: 218103808 data_used: 20221952
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4b000/0x0/0x1bfc00000, data 0x32ce516/0x3503000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4608734 data_alloc: 218103808 data_used: 20221952
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a46000/0x0/0x1bfc00000, data 0x32d3516/0x3508000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a46000/0x0/0x1bfc00000, data 0x32d3516/0x3508000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a46000/0x0/0x1bfc00000, data 0x32d3516/0x3508000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba352a960
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba37c9a40
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4609694 data_alloc: 218103808 data_used: 20246528
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.323343277s of 16.135540009s, submitted: 6
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413163520 unmapped: 50577408 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209516/0x243e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba28a5800 session 0x55fba2e1c960
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415703 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415703 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415703 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415703 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415703 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415703 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415703 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415703 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba0b0a5a0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba1702400 session 0x55fba17363c0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba36394a0
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba3879a40
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 38.897144318s of 39.861175537s, submitted: 12
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba28a5800 session 0x55fba3b6de00
Jan 23 06:18:41 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba3b6de00
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 55820288 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba686e000 session 0x55fba3879a40
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba17363c0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba0b0a5a0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x271d516/0x2952000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 55820288 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 55820288 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 55820288 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4456612 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412131328 unmapped: 55812096 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba28a5800 session 0x55fba2e1c960
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412147712 unmapped: 55795712 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412147712 unmapped: 55795712 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x271d516/0x2952000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 55599104 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 55566336 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486600 data_alloc: 218103808 data_used: 15933440
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 55566336 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 55566336 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 55566336 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x271d516/0x2952000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 55566336 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 55566336 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486600 data_alloc: 218103808 data_used: 15933440
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 55566336 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 55558144 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 55558144 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x271d516/0x2952000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 55558144 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.937059402s of 18.226503372s, submitted: 13
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415154176 unmapped: 52789248 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4521352 data_alloc: 218103808 data_used: 15970304
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412827648 unmapped: 55115776 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412827648 unmapped: 55115776 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412827648 unmapped: 55115776 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32b8000/0x0/0x1bfc00000, data 0x2a61516/0x2c96000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32b8000/0x0/0x1bfc00000, data 0x2a61516/0x2c96000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412827648 unmapped: 55115776 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412827648 unmapped: 55115776 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4516660 data_alloc: 218103808 data_used: 16220160
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412827648 unmapped: 55115776 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412827648 unmapped: 55115776 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412835840 unmapped: 55107584 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412835840 unmapped: 55107584 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32ae000/0x0/0x1bfc00000, data 0x2a6a516/0x2c9f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [0,0,0,0,0,0,0,2])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4519966 data_alloc: 218103808 data_used: 16281600
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.488749027s of 11.441194534s, submitted: 23
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32ab000/0x0/0x1bfc00000, data 0x2a6e516/0x2ca3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4519566 data_alloc: 218103808 data_used: 16318464
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32ab000/0x0/0x1bfc00000, data 0x2a6e516/0x2ca3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba32603c0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba40f9400 session 0x55fba352a780
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4519434 data_alloc: 218103808 data_used: 16318464
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32ab000/0x0/0x1bfc00000, data 0x2a6e516/0x2ca3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412852224 unmapped: 55091200 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32ab000/0x0/0x1bfc00000, data 0x2a6e516/0x2ca3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4519434 data_alloc: 218103808 data_used: 16318464
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412852224 unmapped: 55091200 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.342370987s of 15.328845024s, submitted: 2
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412852224 unmapped: 55091200 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412852224 unmapped: 55091200 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32ab000/0x0/0x1bfc00000, data 0x2a6e516/0x2ca3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412860416 unmapped: 55083008 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba234a3c0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba0b13e00
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412860416 unmapped: 55083008 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32ab000/0x0/0x1bfc00000, data 0x2a6e516/0x2ca3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba28a5800 session 0x55fba3cb0000
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422474 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422474 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422474 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411402240 unmapped: 56541184 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422474 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411402240 unmapped: 56541184 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411402240 unmapped: 56541184 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411402240 unmapped: 56541184 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411402240 unmapped: 56541184 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411402240 unmapped: 56541184 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422474 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411402240 unmapped: 56541184 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411402240 unmapped: 56541184 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411410432 unmapped: 56532992 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411410432 unmapped: 56532992 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411410432 unmapped: 56532992 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422474 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411410432 unmapped: 56532992 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411410432 unmapped: 56532992 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411410432 unmapped: 56532992 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411418624 unmapped: 56524800 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411418624 unmapped: 56524800 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41502 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422474 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411418624 unmapped: 56524800 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411426816 unmapped: 56516608 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411426816 unmapped: 56516608 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411426816 unmapped: 56516608 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411426816 unmapped: 56516608 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422474 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411426816 unmapped: 56516608 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411426816 unmapped: 56516608 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411426816 unmapped: 56516608 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 56508416 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 56508416 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba32610e0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fbaad23400 session 0x55fba16e9680
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba3784780
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba09f65a0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422474 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 44.588714600s of 44.694046021s, submitted: 24
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415113216 unmapped: 52830208 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba28a5800 session 0x55fba3759680
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408059904 unmapped: 59883520 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3773000/0x0/0x1bfc00000, data 0x25a7506/0x27db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408059904 unmapped: 59883520 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408059904 unmapped: 59883520 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408059904 unmapped: 59883520 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba3c93e00
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4452372 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408059904 unmapped: 59883520 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3773000/0x0/0x1bfc00000, data 0x25a7506/0x27db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba3d03800 session 0x55fba3c92960
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408059904 unmapped: 59883520 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408059904 unmapped: 59883520 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba3b6f0e0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba328be00
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408068096 unmapped: 59875328 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408084480 unmapped: 59858944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4484068 data_alloc: 218103808 data_used: 15433728
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408092672 unmapped: 59850752 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3771000/0x0/0x1bfc00000, data 0x25a7539/0x27dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba09743c0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.832244873s of 11.179603577s, submitted: 16
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408100864 unmapped: 59842560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba28a5800 session 0x55fba045fc20
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3771000/0x0/0x1bfc00000, data 0x25a7539/0x27dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408100864 unmapped: 59842560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408100864 unmapped: 59842560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408100864 unmapped: 59842560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4483936 data_alloc: 218103808 data_used: 15433728
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408100864 unmapped: 59842560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3771000/0x0/0x1bfc00000, data 0x25a7539/0x27dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408100864 unmapped: 59842560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408100864 unmapped: 59842560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408100864 unmapped: 59842560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3771000/0x0/0x1bfc00000, data 0x25a7539/0x27dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408100864 unmapped: 59842560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4483844 data_alloc: 218103808 data_used: 15437824
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3771000/0x0/0x1bfc00000, data 0x25a7539/0x27dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408109056 unmapped: 59834368 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba332a400 session 0x55fba328ab40
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408109056 unmapped: 59834368 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408109056 unmapped: 59834368 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408109056 unmapped: 59834368 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3771000/0x0/0x1bfc00000, data 0x25a7539/0x27dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408109056 unmapped: 59834368 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.298506737s of 13.644285202s, submitted: 3
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4484004 data_alloc: 218103808 data_used: 15441920
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408109056 unmapped: 59834368 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408109056 unmapped: 59834368 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba32e25a0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408109056 unmapped: 59834368 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba09f6960
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409174016 unmapped: 58769408 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409174016 unmapped: 58769408 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4428257 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409174016 unmapped: 58769408 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409174016 unmapped: 58769408 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409174016 unmapped: 58769408 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409174016 unmapped: 58769408 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409174016 unmapped: 58769408 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4428257 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409174016 unmapped: 58769408 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409174016 unmapped: 58769408 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409174016 unmapped: 58769408 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409182208 unmapped: 58761216 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409182208 unmapped: 58761216 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4428257 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409182208 unmapped: 58761216 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409182208 unmapped: 58761216 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409182208 unmapped: 58761216 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409182208 unmapped: 58761216 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409182208 unmapped: 58761216 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4428257 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409182208 unmapped: 58761216 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409190400 unmapped: 58753024 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409190400 unmapped: 58753024 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409190400 unmapped: 58753024 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409190400 unmapped: 58753024 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4428257 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409190400 unmapped: 58753024 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409190400 unmapped: 58753024 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409190400 unmapped: 58753024 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409190400 unmapped: 58753024 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409198592 unmapped: 58744832 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4428257 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409198592 unmapped: 58744832 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409198592 unmapped: 58744832 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409198592 unmapped: 58744832 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409206784 unmapped: 58736640 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409206784 unmapped: 58736640 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4428257 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409214976 unmapped: 58728448 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409214976 unmapped: 58728448 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409214976 unmapped: 58728448 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409214976 unmapped: 58728448 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409214976 unmapped: 58728448 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4428257 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409223168 unmapped: 58720256 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409223168 unmapped: 58720256 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409223168 unmapped: 58720256 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409231360 unmapped: 58712064 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 43.674488068s of 43.959205627s, submitted: 31
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba28a5800 session 0x55fba38785a0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409313280 unmapped: 58630144 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4464645 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409321472 unmapped: 58621952 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a36be000/0x0/0x1bfc00000, data 0x265c506/0x2890000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409321472 unmapped: 58621952 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409321472 unmapped: 58621952 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba3c93680
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba4f7fc00 session 0x55fba0950d20
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409321472 unmapped: 58621952 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba1d2ba40
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba15b0780
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba3cb1a40
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba4ec8c00 session 0x55fba0463860
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a36bd000/0x0/0x1bfc00000, data 0x265c516/0x2891000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba2e5e400 session 0x55fba354b2c0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba23943c0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409321472 unmapped: 58621952 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4467413 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409321472 unmapped: 58621952 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409321472 unmapped: 58621952 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413515776 unmapped: 54427648 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3274000/0x0/0x1bfc00000, data 0x2aa4526/0x2cda000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,4])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 54419456 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.059988022s of 10.040143967s, submitted: 15
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410910720 unmapped: 57032704 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4528931 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410910720 unmapped: 57032704 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410910720 unmapped: 57032704 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba28a5800 session 0x55fba26ec960
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba3639680
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba4ec8c00 session 0x55fba328a000
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fbb2bdc000 session 0x55fba3784b40
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba3c930e0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba3c923c0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba2e5e400 session 0x55fba26ed4a0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409206784 unmapped: 58736640 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba15b0960
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba28a5800 session 0x55fba0b12960
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba32e3e00
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba26a8b40
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409206784 unmapped: 58736640 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4128000/0x0/0x1bfc00000, data 0x2c2f549/0x2e66000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409206784 unmapped: 58736640 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4524925 data_alloc: 218103808 data_used: 11739136
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409206784 unmapped: 58736640 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409206784 unmapped: 58736640 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba09f70e0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409206784 unmapped: 58736640 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fbb2bdc000 session 0x55fba354b860
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409575424 unmapped: 58368000 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba4ddfc00 session 0x55fba0463c20
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409608192 unmapped: 58335232 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4128000/0x0/0x1bfc00000, data 0x2c2f549/0x2e66000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.600816727s of 11.452827454s, submitted: 18
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4558707 data_alloc: 218103808 data_used: 16261120
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409608192 unmapped: 58335232 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba4ddfc00 session 0x55fba0b081e0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba26a8d20
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409608192 unmapped: 58335232 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409608192 unmapped: 58335232 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409608192 unmapped: 58335232 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409608192 unmapped: 58335232 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4127000/0x0/0x1bfc00000, data 0x2c2f56c/0x2e67000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4557826 data_alloc: 218103808 data_used: 16265216
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409608192 unmapped: 58335232 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 57810944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 57810944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 57810944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4127000/0x0/0x1bfc00000, data 0x2c2f56c/0x2e67000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 57810944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4569986 data_alloc: 218103808 data_used: 17883136
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.936899185s of 10.173114777s, submitted: 4
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 57810944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4127000/0x0/0x1bfc00000, data 0x2c2f56c/0x2e67000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,22])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412418048 unmapped: 55525376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413540352 unmapped: 54403072 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413548544 unmapped: 54394880 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3adb000/0x0/0x1bfc00000, data 0x327556c/0x34ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413630464 unmapped: 54312960 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4652240 data_alloc: 218103808 data_used: 20611072
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413630464 unmapped: 54312960 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413638656 unmapped: 54304768 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413638656 unmapped: 54304768 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 52813824 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416006144 unmapped: 51937280 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba32e3680
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a364e000/0x0/0x1bfc00000, data 0x370856c/0x3940000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [1,2] op hist [0,0,1])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4699400 data_alloc: 218103808 data_used: 20676608
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.537017345s of 10.000730515s, submitted: 106
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417087488 unmapped: 50855936 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a29f2000/0x0/0x1bfc00000, data 0x30f855c/0x332f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 418136064 unmapped: 49807360 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 418136064 unmapped: 49807360 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fbb2bdc000 session 0x55fba37845a0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 418136064 unmapped: 49807360 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 418635776 unmapped: 49307648 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a279e000/0x0/0x1bfc00000, data 0x3419539/0x364f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4647054 data_alloc: 218103808 data_used: 19095552
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba2e1d0e0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 418635776 unmapped: 49307648 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 7800.1 total, 600.0 interval
Cumulative writes: 63K writes, 233K keys, 63K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s
Cumulative WAL: 63K writes, 24K syncs, 2.62 writes per sync, written: 0.22 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2022 writes, 7204 keys, 2022 commit groups, 1.0 writes per commit group, ingest: 7.29 MB, 0.01 MB/s
Interval WAL: 2022 writes, 809 syncs, 2.50 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 53747712 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba38792c0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 53747712 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 53747712 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 53747712 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2f07000/0x0/0x1bfc00000, data 0x2cb2529/0x2ee7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4560688 data_alloc: 218103808 data_used: 14761984
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 53747712 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 53747712 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba4ec8c00 session 0x55fba38790e0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.918930054s of 11.380777359s, submitted: 38
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba2e5e400 session 0x55fba09ee780
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 53747712 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 53747712 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2f07000/0x0/0x1bfc00000, data 0x2cb2529/0x2ee7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 53747712 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4448208 data_alloc: 218103808 data_used: 11735040
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 53747712 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414212096 unmapped: 53731328 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba37585a0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4446696 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4446696 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4446696 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4446696 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4446696 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4446696 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4446696 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4446696 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4446696 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba4ddfc00 session 0x55fba32e2780
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba0462780
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba0b08d20
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba2e5e400 session 0x55fba36390e0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 47.287857056s of 49.987705231s, submitted: 31
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415842304 unmapped: 52101120 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba4ec8c00 session 0x55fba37590e0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fbb2bdc000 session 0x55fb9ffc7680
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba2ed5860
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba16e94a0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba2e5e400 session 0x55fba0b094a0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3616000/0x0/0x1bfc00000, data 0x25a3516/0x27d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3616000/0x0/0x1bfc00000, data 0x25a3516/0x27d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3616000/0x0/0x1bfc00000, data 0x25a3516/0x27d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4475096 data_alloc: 218103808 data_used: 11730944
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba4ec8c00 session 0x55fba3767a40
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414236672 unmapped: 53706752 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba2393400 session 0x55fba3785e00
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba2393400 session 0x55fba38faf00
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414236672 unmapped: 53706752 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba3758f00
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3615000/0x0/0x1bfc00000, data 0x25a3526/0x27d9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414236672 unmapped: 53706752 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414244864 unmapped: 53698560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4491974 data_alloc: 218103808 data_used: 13836288
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3615000/0x0/0x1bfc00000, data 0x25a3526/0x27d9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414244864 unmapped: 53698560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3615000/0x0/0x1bfc00000, data 0x25a3526/0x27d9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414244864 unmapped: 53698560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414244864 unmapped: 53698560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414244864 unmapped: 53698560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414244864 unmapped: 53698560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503654 data_alloc: 218103808 data_used: 15507456
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414244864 unmapped: 53698560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414244864 unmapped: 53698560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3615000/0x0/0x1bfc00000, data 0x25a3526/0x27d9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414244864 unmapped: 53698560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3615000/0x0/0x1bfc00000, data 0x25a3526/0x27d9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414244864 unmapped: 53698560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414253056 unmapped: 53690368 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3615000/0x0/0x1bfc00000, data 0x25a3526/0x27d9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503654 data_alloc: 218103808 data_used: 15507456
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414253056 unmapped: 53690368 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.310138702s of 19.604589462s, submitted: 4
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 52019200 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32fa000/0x0/0x1bfc00000, data 0x28be526/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 52019200 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32fa000/0x0/0x1bfc00000, data 0x28be526/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 52019200 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 52019200 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4538618 data_alloc: 218103808 data_used: 16359424
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 52019200 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 52019200 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 52019200 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 52019200 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32fa000/0x0/0x1bfc00000, data 0x28be526/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32fa000/0x0/0x1bfc00000, data 0x28be526/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 52019200 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4538618 data_alloc: 218103808 data_used: 16359424
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415924224 unmapped: 52019200 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415932416 unmapped: 52011008 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415932416 unmapped: 52011008 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32fa000/0x0/0x1bfc00000, data 0x28be526/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415932416 unmapped: 52011008 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415932416 unmapped: 52011008 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4538618 data_alloc: 218103808 data_used: 16359424
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32fa000/0x0/0x1bfc00000, data 0x28be526/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415940608 unmapped: 52002816 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415940608 unmapped: 52002816 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32fa000/0x0/0x1bfc00000, data 0x28be526/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32fa000/0x0/0x1bfc00000, data 0x28be526/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415940608 unmapped: 52002816 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415940608 unmapped: 52002816 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415940608 unmapped: 52002816 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4538618 data_alloc: 218103808 data_used: 16359424
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415940608 unmapped: 52002816 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32fa000/0x0/0x1bfc00000, data 0x28be526/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415940608 unmapped: 52002816 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415940608 unmapped: 52002816 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415948800 unmapped: 51994624 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32fa000/0x0/0x1bfc00000, data 0x28be526/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415948800 unmapped: 51994624 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32fa000/0x0/0x1bfc00000, data 0x28be526/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4538618 data_alloc: 218103808 data_used: 16359424
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415948800 unmapped: 51994624 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415948800 unmapped: 51994624 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415948800 unmapped: 51994624 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415948800 unmapped: 51994624 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32fa000/0x0/0x1bfc00000, data 0x28be526/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415948800 unmapped: 51994624 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4539738 data_alloc: 218103808 data_used: 16388096
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415948800 unmapped: 51994624 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415956992 unmapped: 51986432 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32fa000/0x0/0x1bfc00000, data 0x28be526/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba2ed5860
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba2e5e400 session 0x55fba23870e0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415956992 unmapped: 51986432 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415956992 unmapped: 51986432 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba4ec8c00 session 0x55fba08bd2c0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32fa000/0x0/0x1bfc00000, data 0x28be526/0x2af4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415956992 unmapped: 51986432 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba2393400 session 0x55fba3c921e0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.110565186s of 34.201934814s, submitted: 23
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4636966 data_alloc: 218103808 data_used: 18698240
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 428 ms_handle_reset con 0x55fba23a3400 session 0x55fba3639c20
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416915456 unmapped: 63209472 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 429 ms_handle_reset con 0x55fba23d6c00 session 0x55fb9ffc63c0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416923648 unmapped: 63201280 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412008448 unmapped: 68116480 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412008448 unmapped: 68116480 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412016640 unmapped: 68108288 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4629346 data_alloc: 218103808 data_used: 18698240
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a29d8000/0x0/0x1bfc00000, data 0x31dbaa1/0x3415000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412016640 unmapped: 68108288 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412016640 unmapped: 68108288 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 430 ms_handle_reset con 0x55fba2e5e400 session 0x55fba32e3680
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412016640 unmapped: 68108288 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412016640 unmapped: 68108288 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 430 heartbeat osd_stat(store_statfs(0x1a3090000/0x0/0x1bfc00000, data 0x2b26a81/0x2d5e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412024832 unmapped: 68100096 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4531723 data_alloc: 218103808 data_used: 11739136
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 430 handle_osd_map epochs [430,431], i have 430, src has [1,431]
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.589056969s of 10.530668259s, submitted: 58
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412033024 unmapped: 68091904 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412106752 unmapped: 68018176 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412205056 unmapped: 67919872 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 67903488 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 67903488 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a308d000/0x0/0x1bfc00000, data 0x2b285c0/0x2d61000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4535017 data_alloc: 218103808 data_used: 11747328
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 67903488 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 67903488 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 67903488 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a308d000/0x0/0x1bfc00000, data 0x2b285c0/0x2d61000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 67903488 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a308d000/0x0/0x1bfc00000, data 0x2b285c0/0x2d61000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 67903488 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4535017 data_alloc: 218103808 data_used: 11747328
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 67903488 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 67903488 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 67903488 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 67903488 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 431 ms_handle_reset con 0x55fba726f400 session 0x55fba26a8d20
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 431 ms_handle_reset con 0x55fba2393400 session 0x55fba04621e0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 431 ms_handle_reset con 0x55fba23a3400 session 0x55fba09f9860
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.594408035s of 13.980184555s, submitted: 330
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 431 ms_handle_reset con 0x55fba23d6c00 session 0x55fba0b0a960
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 431 ms_handle_reset con 0x55fba2e5e400 session 0x55fba399cb40
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 431 ms_handle_reset con 0x55fba2a5a800 session 0x55fba37c9680
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 431 ms_handle_reset con 0x55fba2393400 session 0x55fba3b6d680
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412221440 unmapped: 67903488 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a308d000/0x0/0x1bfc00000, data 0x2b285c0/0x2d61000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 431 ms_handle_reset con 0x55fba23a3400 session 0x55fb9ffc8000
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4671775 data_alloc: 218103808 data_used: 20242432
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 431 ms_handle_reset con 0x55fba23d6c00 session 0x55fba0b130e0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424206336 unmapped: 55918592 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 431 ms_handle_reset con 0x55fba2a5a800 session 0x55fba04625a0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424206336 unmapped: 55918592 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 431 ms_handle_reset con 0x55fba2e5e400 session 0x55fba0b125a0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 431 ms_handle_reset con 0x55fba2393400 session 0x55fba045eb40
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424206336 unmapped: 55918592 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 431 ms_handle_reset con 0x55fba23a3400 session 0x55fba399c960
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424214528 unmapped: 55910400 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 431 heartbeat osd_stat(store_statfs(0x1a23e0000/0x0/0x1bfc00000, data 0x37d35e0/0x3a0e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 431 handle_osd_map epochs [432,432], i have 432, src has [1,432]
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417644544 unmapped: 62480384 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 432 ms_handle_reset con 0x55fba2883c00 session 0x55fba1d2a960
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4598420 data_alloc: 218103808 data_used: 15450112
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417652736 unmapped: 62472192 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417652736 unmapped: 62472192 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417652736 unmapped: 62472192 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2cf4000/0x0/0x1bfc00000, data 0x2ebd28d/0x30f9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417652736 unmapped: 62472192 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417652736 unmapped: 62472192 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 432 heartbeat osd_stat(store_statfs(0x1a2cf4000/0x0/0x1bfc00000, data 0x2ebd28d/0x30f9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4598420 data_alloc: 218103808 data_used: 15450112
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417652736 unmapped: 62472192 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.207132339s of 11.453425407s, submitted: 32
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417652736 unmapped: 62472192 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417652736 unmapped: 62472192 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417652736 unmapped: 62472192 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417652736 unmapped: 62472192 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4601394 data_alloc: 218103808 data_used: 15450112
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a2cf1000/0x0/0x1bfc00000, data 0x2ebedcc/0x30fc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417652736 unmapped: 62472192 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417660928 unmapped: 62464000 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a2cf1000/0x0/0x1bfc00000, data 0x2ebedcc/0x30fc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417660928 unmapped: 62464000 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417660928 unmapped: 62464000 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417660928 unmapped: 62464000 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4610994 data_alloc: 218103808 data_used: 16551936
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417660928 unmapped: 62464000 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417660928 unmapped: 62464000 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.050232887s of 11.063552856s, submitted: 11
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a2cef000/0x0/0x1bfc00000, data 0x2ec0dcc/0x30fe000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,4])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417660928 unmapped: 62464000 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417660928 unmapped: 62464000 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417660928 unmapped: 62464000 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612892 data_alloc: 218103808 data_used: 16551936
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417660928 unmapped: 62464000 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a2cec000/0x0/0x1bfc00000, data 0x2ec4dcc/0x3102000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417660928 unmapped: 62464000 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417660928 unmapped: 62464000 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417660928 unmapped: 62464000 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417660928 unmapped: 62464000 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612892 data_alloc: 218103808 data_used: 16551936
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417669120 unmapped: 62455808 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417669120 unmapped: 62455808 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a2ceb000/0x0/0x1bfc00000, data 0x2ec4dcc/0x3102000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417669120 unmapped: 62455808 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417669120 unmapped: 62455808 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a2ceb000/0x0/0x1bfc00000, data 0x2ec4dcc/0x3102000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417677312 unmapped: 62447616 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4612908 data_alloc: 218103808 data_used: 16551936
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a2ceb000/0x0/0x1bfc00000, data 0x2ec4dcc/0x3102000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417677312 unmapped: 62447616 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417677312 unmapped: 62447616 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417677312 unmapped: 62447616 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 ms_handle_reset con 0x55fba23d6c00 session 0x55fba32605a0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 ms_handle_reset con 0x55fba2a5a800 session 0x55fba2e80780
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417677312 unmapped: 62447616 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.594015121s of 17.422815323s, submitted: 6
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 ms_handle_reset con 0x55fba2393400 session 0x55fba26ecb40
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417685504 unmapped: 62439424 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4482255 data_alloc: 218103808 data_used: 11755520
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417685504 unmapped: 62439424 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a399f000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a399f000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417685504 unmapped: 62439424 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 62431232 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a399f000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 62431232 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 62431232 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4482255 data_alloc: 218103808 data_used: 11755520
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a399f000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 62431232 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 62431232 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a399f000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a399f000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 ms_handle_reset con 0x55fba23a3400 session 0x55fba04d8780
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 62431232 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 ms_handle_reset con 0x55fba23d6c00 session 0x55fba045f2c0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417693696 unmapped: 62431232 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.544411659s of 10.574276924s, submitted: 11
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 ms_handle_reset con 0x55fba2883c00 session 0x55fba2e1cb40
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417701888 unmapped: 62423040 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4493019 data_alloc: 218103808 data_used: 13922304
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417718272 unmapped: 62406656 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a399d000/0x0/0x1bfc00000, data 0x2213e1e/0x2451000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417718272 unmapped: 62406656 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417718272 unmapped: 62406656 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 ms_handle_reset con 0x55fba2a5a800 session 0x55fba352a000
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 ms_handle_reset con 0x55fba2393400 session 0x55fba37c8000
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 428072960 unmapped: 52051968 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 ms_handle_reset con 0x55fba23a3400 session 0x55fba1736000
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 ms_handle_reset con 0x55fba23d6c00 session 0x55fba3b6cf00
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 418840576 unmapped: 61284352 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4560885 data_alloc: 218103808 data_used: 13922304
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 418840576 unmapped: 61284352 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3125000/0x0/0x1bfc00000, data 0x2a8be1e/0x2cc9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 418840576 unmapped: 61284352 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 418840576 unmapped: 61284352 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 418840576 unmapped: 61284352 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3125000/0x0/0x1bfc00000, data 0x2a8be1e/0x2cc9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 418840576 unmapped: 61284352 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4560885 data_alloc: 218103808 data_used: 13922304
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 418840576 unmapped: 61284352 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 418840576 unmapped: 61284352 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 418848768 unmapped: 61276160 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 418848768 unmapped: 61276160 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 418848768 unmapped: 61276160 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3125000/0x0/0x1bfc00000, data 0x2a8be1e/0x2cc9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4560885 data_alloc: 218103808 data_used: 13922304
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.831798553s of 16.097116470s, submitted: 67
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 ms_handle_reset con 0x55fba2883c00 session 0x55fba2ed4b40
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 419004416 unmapped: 61120512 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 419004416 unmapped: 61120512 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 422117376 unmapped: 58007552 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 422117376 unmapped: 58007552 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 422117376 unmapped: 58007552 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4625309 data_alloc: 234881024 data_used: 22720512
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3101000/0x0/0x1bfc00000, data 0x2aafe1e/0x2ced000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 422117376 unmapped: 58007552 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 422117376 unmapped: 58007552 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3101000/0x0/0x1bfc00000, data 0x2aafe1e/0x2ced000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 422117376 unmapped: 58007552 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 422117376 unmapped: 58007552 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 422117376 unmapped: 58007552 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4625309 data_alloc: 234881024 data_used: 22720512
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 422117376 unmapped: 58007552 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 422117376 unmapped: 58007552 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.041227341s of 12.054943085s, submitted: 2
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3101000/0x0/0x1bfc00000, data 0x2aafe1e/0x2ced000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 422117376 unmapped: 58007552 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 422117376 unmapped: 58007552 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423862272 unmapped: 56262656 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4688497 data_alloc: 234881024 data_used: 22884352
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423411712 unmapped: 56713216 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424550400 unmapped: 55574528 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424550400 unmapped: 55574528 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29a1000/0x0/0x1bfc00000, data 0x31f9e1e/0x3437000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29a1000/0x0/0x1bfc00000, data 0x31f9e1e/0x3437000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424550400 unmapped: 55574528 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424550400 unmapped: 55574528 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4706903 data_alloc: 234881024 data_used: 23638016
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424583168 unmapped: 55541760 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424583168 unmapped: 55541760 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.653481960s of 10.046114922s, submitted: 83
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29a1000/0x0/0x1bfc00000, data 0x31f9e1e/0x3437000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424583168 unmapped: 55541760 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424583168 unmapped: 55541760 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424583168 unmapped: 55541760 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4706919 data_alloc: 234881024 data_used: 23638016
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424583168 unmapped: 55541760 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29a1000/0x0/0x1bfc00000, data 0x31f9e1e/0x3437000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424583168 unmapped: 55541760 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424583168 unmapped: 55541760 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424583168 unmapped: 55541760 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29a1000/0x0/0x1bfc00000, data 0x31f9e1e/0x3437000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424583168 unmapped: 55541760 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4706919 data_alloc: 234881024 data_used: 23638016
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424583168 unmapped: 55541760 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424583168 unmapped: 55541760 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424583168 unmapped: 55541760 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29a1000/0x0/0x1bfc00000, data 0x31f9e1e/0x3437000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424591360 unmapped: 55533568 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424591360 unmapped: 55533568 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4706919 data_alloc: 234881024 data_used: 23638016
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424591360 unmapped: 55533568 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.165441513s of 14.176957130s, submitted: 1
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423927808 unmapped: 56197120 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29b5000/0x0/0x1bfc00000, data 0x31fae1e/0x3438000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423936000 unmapped: 56188928 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423944192 unmapped: 56180736 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423944192 unmapped: 56180736 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4701147 data_alloc: 234881024 data_used: 23654400
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423944192 unmapped: 56180736 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29b6000/0x0/0x1bfc00000, data 0x31fae1e/0x3438000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 ms_handle_reset con 0x55fba332cc00 session 0x55fba3759860
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 ms_handle_reset con 0x55fbb2bddc00 session 0x55fba0975680
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423952384 unmapped: 56172544 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423952384 unmapped: 56172544 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423952384 unmapped: 56172544 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423952384 unmapped: 56172544 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4698131 data_alloc: 234881024 data_used: 23654400
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423960576 unmapped: 56164352 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423960576 unmapped: 56164352 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29b6000/0x0/0x1bfc00000, data 0x31fae1e/0x3438000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423960576 unmapped: 56164352 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423960576 unmapped: 56164352 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 56156160 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4698131 data_alloc: 234881024 data_used: 23654400
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29b6000/0x0/0x1bfc00000, data 0x31fae1e/0x3438000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 56156160 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 56156160 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 56156160 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 56156160 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 56156160 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4698131 data_alloc: 234881024 data_used: 23654400
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.992525101s of 19.094150543s, submitted: 31
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 ms_handle_reset con 0x55fba2393400 session 0x55fb9ffc65a0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 56156160 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29b6000/0x0/0x1bfc00000, data 0x31fae1e/0x3438000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423968768 unmapped: 56156160 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423976960 unmapped: 56147968 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423976960 unmapped: 56147968 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423976960 unmapped: 56147968 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4691815 data_alloc: 234881024 data_used: 23543808
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423976960 unmapped: 56147968 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29da000/0x0/0x1bfc00000, data 0x31d6e1e/0x3414000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423976960 unmapped: 56147968 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423976960 unmapped: 56147968 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29da000/0x0/0x1bfc00000, data 0x31d6e1e/0x3414000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423985152 unmapped: 56139776 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423985152 unmapped: 56139776 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4691815 data_alloc: 234881024 data_used: 23543808
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423985152 unmapped: 56139776 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423985152 unmapped: 56139776 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423985152 unmapped: 56139776 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29da000/0x0/0x1bfc00000, data 0x31d6e1e/0x3414000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29da000/0x0/0x1bfc00000, data 0x31d6e1e/0x3414000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423993344 unmapped: 56131584 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423993344 unmapped: 56131584 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4691815 data_alloc: 234881024 data_used: 23543808
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423993344 unmapped: 56131584 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423993344 unmapped: 56131584 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 423993344 unmapped: 56131584 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424001536 unmapped: 56123392 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29da000/0x0/0x1bfc00000, data 0x31d6e1e/0x3414000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424001536 unmapped: 56123392 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4691815 data_alloc: 234881024 data_used: 23543808
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424001536 unmapped: 56123392 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29da000/0x0/0x1bfc00000, data 0x31d6e1e/0x3414000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424001536 unmapped: 56123392 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424001536 unmapped: 56123392 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29da000/0x0/0x1bfc00000, data 0x31d6e1e/0x3414000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424001536 unmapped: 56123392 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29da000/0x0/0x1bfc00000, data 0x31d6e1e/0x3414000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424001536 unmapped: 56123392 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4691815 data_alloc: 234881024 data_used: 23543808
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29da000/0x0/0x1bfc00000, data 0x31d6e1e/0x3414000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424001536 unmapped: 56123392 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424009728 unmapped: 56115200 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424009728 unmapped: 56115200 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424009728 unmapped: 56115200 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424009728 unmapped: 56115200 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4691815 data_alloc: 234881024 data_used: 23543808
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424009728 unmapped: 56115200 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.731601715s of 30.739130020s, submitted: 2
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29da000/0x0/0x1bfc00000, data 0x31d6e1e/0x3414000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424009728 unmapped: 56115200 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424009728 unmapped: 56115200 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29d9000/0x0/0x1bfc00000, data 0x31d6e41/0x3415000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424009728 unmapped: 56115200 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 ms_handle_reset con 0x55fba23a3400 session 0x55fba09f8780
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424017920 unmapped: 56107008 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4695093 data_alloc: 234881024 data_used: 23547904
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424034304 unmapped: 56090624 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29d9000/0x0/0x1bfc00000, data 0x31d6e41/0x3415000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424034304 unmapped: 56090624 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424042496 unmapped: 56082432 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424042496 unmapped: 56082432 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424042496 unmapped: 56082432 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4696317 data_alloc: 234881024 data_used: 23642112
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424042496 unmapped: 56082432 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424042496 unmapped: 56082432 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29d9000/0x0/0x1bfc00000, data 0x31d6e41/0x3415000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424042496 unmapped: 56082432 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29d9000/0x0/0x1bfc00000, data 0x31d6e41/0x3415000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424042496 unmapped: 56082432 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424042496 unmapped: 56082432 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29d9000/0x0/0x1bfc00000, data 0x31d6e41/0x3415000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4696317 data_alloc: 234881024 data_used: 23642112
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424042496 unmapped: 56082432 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29d9000/0x0/0x1bfc00000, data 0x31d6e41/0x3415000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424042496 unmapped: 56082432 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424042496 unmapped: 56082432 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424050688 unmapped: 56074240 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.986763000s of 17.849494934s, submitted: 12
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424050688 unmapped: 56074240 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4703585 data_alloc: 234881024 data_used: 23969792
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29d7000/0x0/0x1bfc00000, data 0x31d6e41/0x3415000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424124416 unmapped: 56000512 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424124416 unmapped: 56000512 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424124416 unmapped: 56000512 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424124416 unmapped: 56000512 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424124416 unmapped: 56000512 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 ms_handle_reset con 0x55fba004c800 session 0x55fba37c83c0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4707585 data_alloc: 234881024 data_used: 24350720
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424222720 unmapped: 55902208 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29d4000/0x0/0x1bfc00000, data 0x31dbe41/0x341a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424222720 unmapped: 55902208 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424222720 unmapped: 55902208 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424230912 unmapped: 55894016 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29d4000/0x0/0x1bfc00000, data 0x31dbe41/0x341a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424230912 unmapped: 55894016 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4708199 data_alloc: 234881024 data_used: 24346624
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424230912 unmapped: 55894016 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424230912 unmapped: 55894016 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.980678558s of 13.459820747s, submitted: 29
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424230912 unmapped: 55894016 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29d4000/0x0/0x1bfc00000, data 0x31dbe41/0x341a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424230912 unmapped: 55894016 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424247296 unmapped: 55877632 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4707959 data_alloc: 234881024 data_used: 24342528
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424247296 unmapped: 55877632 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424255488 unmapped: 55869440 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29ca000/0x0/0x1bfc00000, data 0x31e1e41/0x3420000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424263680 unmapped: 55861248 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424263680 unmapped: 55861248 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424263680 unmapped: 55861248 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4706727 data_alloc: 234881024 data_used: 24342528
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29ce000/0x0/0x1bfc00000, data 0x31e1e41/0x3420000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424263680 unmapped: 55861248 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424263680 unmapped: 55861248 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29ce000/0x0/0x1bfc00000, data 0x31e1e41/0x3420000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424312832 unmapped: 55812096 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424329216 unmapped: 55795712 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.815032959s of 11.849040031s, submitted: 14
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 ms_handle_reset con 0x55fba23d6c00 session 0x55fba3639680
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 ms_handle_reset con 0x55fba2883c00 session 0x55fba3758780
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424329216 unmapped: 55795712 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4711707 data_alloc: 234881024 data_used: 25329664
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 ms_handle_reset con 0x55fba2393400 session 0x55fba354a000
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424329216 unmapped: 55795712 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424329216 unmapped: 55795712 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424329216 unmapped: 55795712 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29ce000/0x0/0x1bfc00000, data 0x31e1e1e/0x341f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 ms_handle_reset con 0x55fba23a3400 session 0x55fba16e92c0
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 424337408 unmapped: 55787520 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a29ce000/0x0/0x1bfc00000, data 0x31e1e1e/0x341f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 ms_handle_reset con 0x55fba23d6c00 session 0x55fba37c8780
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503677 data_alloc: 218103808 data_used: 13852672
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503677 data_alloc: 218103808 data_used: 13852672
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503677 data_alloc: 218103808 data_used: 13852672
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503677 data_alloc: 218103808 data_used: 13852672
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503677 data_alloc: 218103808 data_used: 13852672
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417234944 unmapped: 62889984 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503677 data_alloc: 218103808 data_used: 13852672
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417243136 unmapped: 62881792 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417243136 unmapped: 62881792 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417243136 unmapped: 62881792 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417243136 unmapped: 62881792 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417243136 unmapped: 62881792 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503677 data_alloc: 218103808 data_used: 13852672
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417243136 unmapped: 62881792 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417243136 unmapped: 62881792 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417243136 unmapped: 62881792 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417243136 unmapped: 62881792 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503677 data_alloc: 218103808 data_used: 13852672
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503677 data_alloc: 218103808 data_used: 13852672
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503677 data_alloc: 218103808 data_used: 13852672
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503677 data_alloc: 218103808 data_used: 13852672
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503677 data_alloc: 218103808 data_used: 13852672
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503677 data_alloc: 218103808 data_used: 13852672
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503677 data_alloc: 218103808 data_used: 13852672
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417251328 unmapped: 62873600 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503677 data_alloc: 218103808 data_used: 13852672
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417259520 unmapped: 62865408 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417259520 unmapped: 62865408 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417259520 unmapped: 62865408 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417259520 unmapped: 62865408 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417259520 unmapped: 62865408 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503677 data_alloc: 218103808 data_used: 13852672
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417259520 unmapped: 62865408 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 ms_handle_reset con 0x55fba3d03400 session 0x55fba3878d20
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417267712 unmapped: 62857216 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417267712 unmapped: 62857216 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417267712 unmapped: 62857216 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417275904 unmapped: 62849024 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503677 data_alloc: 218103808 data_used: 13852672
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417275904 unmapped: 62849024 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417275904 unmapped: 62849024 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417275904 unmapped: 62849024 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417275904 unmapped: 62849024 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417275904 unmapped: 62849024 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503677 data_alloc: 218103808 data_used: 13852672
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417275904 unmapped: 62849024 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417275904 unmapped: 62849024 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417275904 unmapped: 62849024 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417275904 unmapped: 62849024 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417284096 unmapped: 62840832 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503677 data_alloc: 218103808 data_used: 13852672
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417284096 unmapped: 62840832 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417284096 unmapped: 62840832 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417284096 unmapped: 62840832 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417284096 unmapped: 62840832 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417284096 unmapped: 62840832 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503677 data_alloc: 218103808 data_used: 13852672
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417284096 unmapped: 62840832 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417284096 unmapped: 62840832 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417284096 unmapped: 62840832 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417284096 unmapped: 62840832 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417284096 unmapped: 62840832 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503677 data_alloc: 218103808 data_used: 13852672
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417284096 unmapped: 62840832 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417292288 unmapped: 62832640 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417292288 unmapped: 62832640 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417292288 unmapped: 62832640 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417292288 unmapped: 62832640 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4503677 data_alloc: 218103808 data_used: 13852672
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417292288 unmapped: 62832640 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: do_command 'config diff' '{prefix=config diff}'
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: do_command 'config show' '{prefix=config show}'
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: do_command 'counter dump' '{prefix=counter dump}'
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: do_command 'counter schema' '{prefix=counter schema}'
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416399360 unmapped: 63725568 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416727040 unmapped: 63397888 heap: 480124928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: osd.0 433 heartbeat osd_stat(store_statfs(0x1a3936000/0x0/0x1bfc00000, data 0x2213dac/0x244f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:18:42 np0005593232 ceph-osd[85010]: do_command 'log dump' '{prefix=log dump}'
Jan 23 06:18:42 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48209 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 23 06:18:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2800526684' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 06:18:42 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51154 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:42 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41526 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:42 np0005593232 podman[434326]: 2026-01-23 11:18:42.46806901 +0000 UTC m=+0.122157188 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 23 06:18:42 np0005593232 podman[434322]: 2026-01-23 11:18:42.481546262 +0000 UTC m=+0.139078397 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 23 06:18:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:18:42 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48230 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:42 np0005593232 rsyslogd[1008]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 23 06:18:42 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51172 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:42.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 23 06:18:42 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1476254398' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 23 06:18:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:18:42.703 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:18:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:18:42.704 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:18:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:18:42.704 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:18:42 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41544 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4595: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:42 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48257 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Jan 23 06:18:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2336656535' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 23 06:18:43 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51190 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:43 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41571 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:43 np0005593232 nova_compute[250269]: 2026-01-23 11:18:43.215 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:18:43 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48275 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:43 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51205 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:43 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41595 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 23 06:18:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1227484201' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 06:18:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:18:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:43.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:18:43 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51226 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:43 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48284 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:43 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Jan 23 06:18:43 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2664301871' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 23 06:18:44 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48308 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:44 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51244 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:44 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41625 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:44 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T11:18:44.336+0000 7f341f708640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 06:18:44 np0005593232 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 06:18:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Jan 23 06:18:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1657687588' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 23 06:18:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:18:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:44.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:18:44 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48332 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Jan 23 06:18:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/604920721' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 23 06:18:44 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Jan 23 06:18:44 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2874381228' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 23 06:18:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4596: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:45 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51277 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:45 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T11:18:45.042+0000 7f341f708640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 06:18:45 np0005593232 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 06:18:45 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48356 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Jan 23 06:18:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1029486760' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 23 06:18:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Jan 23 06:18:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/68881958' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 23 06:18:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Jan 23 06:18:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3565966318' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 23 06:18:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Jan 23 06:18:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3055871918' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 23 06:18:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:18:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:45.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:18:45 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48395 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:45 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T11:18:45.852+0000 7f341f708640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 06:18:45 np0005593232 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 06:18:45 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Jan 23 06:18:45 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3779178001' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 23 06:18:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Jan 23 06:18:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4266910294' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 23 06:18:46 np0005593232 nova_compute[250269]: 2026-01-23 11:18:46.330 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:18:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Jan 23 06:18:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1986377747' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 23 06:18:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:46.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Jan 23 06:18:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3373049018' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 23 06:18:46 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 23 06:18:46 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3532756080' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 23 06:18:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4597: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:47 np0005593232 systemd[1]: Starting Hostname Service...
Jan 23 06:18:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Jan 23 06:18:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3738362822' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 23 06:18:47 np0005593232 systemd[1]: Started Hostname Service.
Jan 23 06:18:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Jan 23 06:18:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3403905633' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 23 06:18:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:18:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Jan 23 06:18:47 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1785737730' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 23 06:18:47 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41787 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:18:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:47.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:18:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 06:18:47 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 8400.0 total, 600.0 interval
Cumulative writes: 22K writes, 100K keys, 22K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
Cumulative WAL: 22K writes, 22K syncs, 1.00 writes per sync, written: 0.15 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1447 writes, 6731 keys, 1447 commit groups, 1.0 writes per commit group, ingest: 10.14 MB, 0.02 MB/s
Interval WAL: 1447 writes, 1447 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     35.8      3.86              0.54        74    0.052       0      0       0.0       0.0
  L6      1/0   13.06 MB   0.0      0.9     0.1      0.7       0.8      0.0       0.0   5.6     72.2     62.2     12.51              2.85        73    0.171    626K    38K       0.0       0.0
 Sum      1/0   13.06 MB   0.0      0.9     0.1      0.7       0.9      0.1       0.0   6.6     55.2     56.0     16.37              3.38       147    0.111    626K    38K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.8     33.0     33.6      2.24              0.34        10    0.224     62K   2557       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.9     0.1      0.7       0.8      0.0       0.0   0.0     72.2     62.2     12.51              2.85        73    0.171    626K    38K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     35.8      3.86              0.54        73    0.053       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     13.9      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 8400.0 total, 600.0 interval
Flush(GB): cumulative 0.135, interval 0.009
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.90 GB write, 0.11 MB/s write, 0.88 GB read, 0.11 MB/s read, 16.4 seconds
Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.12 MB/s read, 2.2 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x557a844231f0#2 capacity: 304.00 MB usage: 94.58 MB table_size: 0 occupancy: 18446744073709551615 collections: 15 last_copies: 0 last_secs: 0.000935 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(5883,90.25 MB,29.6865%) FilterBlock(148,1.70 MB,0.558105%) IndexBlock(148,2.63 MB,0.865981%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Jan 23 06:18:47 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41805 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Jan 23 06:18:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1195613062' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51406 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41814 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:48 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Jan 23 06:18:48 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/560739521' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 23 06:18:48 np0005593232 nova_compute[250269]: 2026-01-23 11:18:48.217 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41826 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51427 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51421 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:48.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41844 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4598: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Jan 23 06:18:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3145466784' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 23 06:18:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Jan 23 06:18:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1852011323' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 23 06:18:49 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41865 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:49 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51442 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:49 np0005593232 nova_compute[250269]: 2026-01-23 11:18:49.311 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 06:18:49 np0005593232 nova_compute[250269]: 2026-01-23 11:18:49.312 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 06:18:49 np0005593232 nova_compute[250269]: 2026-01-23 11:18:49.312 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 23 06:18:49 np0005593232 nova_compute[250269]: 2026-01-23 11:18:49.312 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 23 06:18:49 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41880 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Jan 23 06:18:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2009564047' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 23 06:18:49 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51448 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:49 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48521 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:49 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48527 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:18:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:49.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:18:49 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51463 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:49 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 23 06:18:49 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/598814225' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 23 06:18:50 np0005593232 nova_compute[250269]: 2026-01-23 11:18:50.062 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 23 06:18:50 np0005593232 nova_compute[250269]: 2026-01-23 11:18:50.063 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 06:18:50 np0005593232 nova_compute[250269]: 2026-01-23 11:18:50.063 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 06:18:50 np0005593232 nova_compute[250269]: 2026-01-23 11:18:50.063 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 06:18:50 np0005593232 nova_compute[250269]: 2026-01-23 11:18:50.064 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 23 06:18:50 np0005593232 nova_compute[250269]: 2026-01-23 11:18:50.064 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 23 06:18:50 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41901 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:50 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48539 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:50 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51484 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:50 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48542 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:50 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Jan 23 06:18:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4037865179' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 23 06:18:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 06:18:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 06:18:50 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.41919 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:50 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51499 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:50.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 06:18:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 06:18:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4599: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 06:18:50 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 06:18:50 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51517 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 06:18:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 06:18:51 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48569 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:51 np0005593232 nova_compute[250269]: 2026-01-23 11:18:51.331 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 23 06:18:51 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48608 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:51 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Jan 23 06:18:51 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/441285049' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 23 06:18:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:18:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:51.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:18:52 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48632 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:52 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42021 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:52 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48647 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:52 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51613 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:18:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Jan 23 06:18:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2549605634' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 23 06:18:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:18:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:52.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:18:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 23 06:18:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 23 06:18:52 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48662 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:18:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4600: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 23 06:18:52 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 23 06:18:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0) v1
Jan 23 06:18:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2751895531' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 23 06:18:53 np0005593232 nova_compute[250269]: 2026-01-23 11:18:53.220 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:18:53 np0005593232 nova_compute[250269]: 2026-01-23 11:18:53.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:18:53 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Jan 23 06:18:53 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3060671535' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 23 06:18:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:18:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:53.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:18:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Jan 23 06:18:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1672781115' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 23 06:18:54 np0005593232 nova_compute[250269]: 2026-01-23 11:18:54.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:18:54 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42087 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:54.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:54 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48734 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:54 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Jan 23 06:18:54 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2446097099' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 23 06:18:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4601: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:55 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51676 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:55 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Jan 23 06:18:55 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2802767441' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 23 06:18:55 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42120 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:18:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:55.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:18:56 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Jan 23 06:18:56 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3780705346' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 23 06:18:56 np0005593232 nova_compute[250269]: 2026-01-23 11:18:56.333 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:18:56 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51697 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:56 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42144 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:18:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:56.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:18:56 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42150 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:56 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4602: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:56 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42156 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:57 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51709 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Jan 23 06:18:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3503762423' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 23 06:18:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:18:57 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51715 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Jan 23 06:18:57 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/301073343' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 23 06:18:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:18:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:57.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42177 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:58 np0005593232 nova_compute[250269]: 2026-01-23 11:18:58.223 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48809 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42183 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51739 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:18:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:18:58.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:18:58 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Jan 23 06:18:58 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3173471238' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4603: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51748 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:18:58 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:18:59 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Jan 23 06:18:59 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2699683567' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 23 06:18:59 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42210 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:18:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:18:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:18:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:18:59.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:18:59 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42216 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:19:00 np0005593232 nova_compute[250269]: 2026-01-23 11:19:00.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:19:00 np0005593232 nova_compute[250269]: 2026-01-23 11:19:00.366 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:19:00 np0005593232 nova_compute[250269]: 2026-01-23 11:19:00.367 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:19:00 np0005593232 nova_compute[250269]: 2026-01-23 11:19:00.367 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:19:00 np0005593232 nova_compute[250269]: 2026-01-23 11:19:00.367 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 06:19:00 np0005593232 nova_compute[250269]: 2026-01-23 11:19:00.367 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:19:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 23 06:19:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3714353941' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 23 06:19:00 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48830 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:19:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:00.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:19:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4103655932' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:19:00 np0005593232 nova_compute[250269]: 2026-01-23 11:19:00.878 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:19:00 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48839 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:19:00 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51772 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:19:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4604: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:00 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Jan 23 06:19:00 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1896923457' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Jan 23 06:19:01 np0005593232 nova_compute[250269]: 2026-01-23 11:19:01.043 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 06:19:01 np0005593232 nova_compute[250269]: 2026-01-23 11:19:01.045 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3806MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 06:19:01 np0005593232 nova_compute[250269]: 2026-01-23 11:19:01.045 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:19:01 np0005593232 nova_compute[250269]: 2026-01-23 11:19:01.045 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:19:01 np0005593232 ovs-appctl[437506]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 23 06:19:01 np0005593232 ovs-appctl[437511]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 23 06:19:01 np0005593232 ovs-appctl[437515]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 23 06:19:01 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51781 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:19:01 np0005593232 nova_compute[250269]: 2026-01-23 11:19:01.384 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:19:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Jan 23 06:19:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2148196641' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Jan 23 06:19:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:19:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:01.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42267 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:02 np0005593232 nova_compute[250269]: 2026-01-23 11:19:02.377 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 06:19:02 np0005593232 nova_compute[250269]: 2026-01-23 11:19:02.377 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48857 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:19:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:19:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:19:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:02.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:19:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 23 06:19:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4061418698' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48866 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:19:02 np0005593232 nova_compute[250269]: 2026-01-23 11:19:02.806 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:19:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4605: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:03 np0005593232 nova_compute[250269]: 2026-01-23 11:19:03.226 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:19:03 np0005593232 nova_compute[250269]: 2026-01-23 11:19:03.446 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:19:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0) v1
Jan 23 06:19:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/959883474' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Jan 23 06:19:03 np0005593232 nova_compute[250269]: 2026-01-23 11:19:03.453 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:19:03 np0005593232 nova_compute[250269]: 2026-01-23 11:19:03.614 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:19:03 np0005593232 nova_compute[250269]: 2026-01-23 11:19:03.616 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 06:19:03 np0005593232 nova_compute[250269]: 2026-01-23 11:19:03.616 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:19:03 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48893 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Jan 23 06:19:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/171527360' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 23 06:19:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:03.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0) v1
Jan 23 06:19:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1028431742' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 23 06:19:04 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48899 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:19:04 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0) v1
Jan 23 06:19:04 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/38676753' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Jan 23 06:19:04 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48908 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:19:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:04.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:04 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42324 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4606: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0) v1
Jan 23 06:19:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3452197552' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Jan 23 06:19:05 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0) v1
Jan 23 06:19:05 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2821656871' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Jan 23 06:19:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:05.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:06 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42348 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:06 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0) v1
Jan 23 06:19:06 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/95584690' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Jan 23 06:19:06 np0005593232 nova_compute[250269]: 2026-01-23 11:19:06.385 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:19:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.002000056s ======
Jan 23 06:19:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:06.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000056s
Jan 23 06:19:06 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51862 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:06 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42360 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4607: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:07 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42375 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:19:07 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.48965 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0) v1
Jan 23 06:19:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/158260845' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Jan 23 06:19:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:19:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:19:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:19:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:19:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:19:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:19:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:07.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0) v1
Jan 23 06:19:07 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2235014915' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51889 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:08 np0005593232 nova_compute[250269]: 2026-01-23 11:19:08.229 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42399 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42405 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:19:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:08.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51901 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4608: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 23 06:19:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3029642925' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 23 06:19:09 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51907 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:09 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0) v1
Jan 23 06:19:09 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2860312639' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Jan 23 06:19:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:09.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:09 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42429 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.49016 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42435 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51925 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:19:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:10.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51931 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:19:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Jan 23 06:19:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/35866544' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 23 06:19:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0) v1
Jan 23 06:19:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:19:10 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0) v1
Jan 23 06:19:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4609: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:10 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:19:11 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:19:11 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:19:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Jan 23 06:19:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 06:19:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0) v1
Jan 23 06:19:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 06:19:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0) v1
Jan 23 06:19:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3310317056' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Jan 23 06:19:11 np0005593232 nova_compute[250269]: 2026-01-23 11:19:11.422 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:19:11 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.49040 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:11 np0005593232 virtqemud[249592]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 23 06:19:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:19:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:11.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:19:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:19:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:19:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 06:19:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:19:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 06:19:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:19:11 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51952 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:11 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 89b5de39-18f9-4f5c-9254-4ccd078d2b17 does not exist
Jan 23 06:19:11 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 7c01604c-af16-4660-a863-97eacb2bb199 does not exist
Jan 23 06:19:11 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev b0c92ed5-0990-4c71-bc57-bebfeb6097e8 does not exist
Jan 23 06:19:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 06:19:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 06:19:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 06:19:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:19:11 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:19:11 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:19:12 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 23 06:19:12 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 23 06:19:12 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:19:12 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:19:12 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:19:12 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.51958 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:12 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.49052 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:19:12 np0005593232 podman[439662]: 2026-01-23 11:19:12.610626441 +0000 UTC m=+0.104262749 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 23 06:19:12 np0005593232 podman[439669]: 2026-01-23 11:19:12.644687368 +0000 UTC m=+0.131883603 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 23 06:19:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:12.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:12 np0005593232 podman[439744]: 2026-01-23 11:19:12.670665995 +0000 UTC m=+0.055778914 container create ba959ff5503c74e121555472a2d4172c01a1ea079686d35ca62dcb13b4f385a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 23 06:19:12 np0005593232 systemd[1]: Started libpod-conmon-ba959ff5503c74e121555472a2d4172c01a1ea079686d35ca62dcb13b4f385a4.scope.
Jan 23 06:19:12 np0005593232 podman[439744]: 2026-01-23 11:19:12.648354142 +0000 UTC m=+0.033467081 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:19:12 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:19:12 np0005593232 podman[439744]: 2026-01-23 11:19:12.788452777 +0000 UTC m=+0.173565726 container init ba959ff5503c74e121555472a2d4172c01a1ea079686d35ca62dcb13b4f385a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kare, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 23 06:19:12 np0005593232 podman[439744]: 2026-01-23 11:19:12.79701496 +0000 UTC m=+0.182127879 container start ba959ff5503c74e121555472a2d4172c01a1ea079686d35ca62dcb13b4f385a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:19:12 np0005593232 podman[439744]: 2026-01-23 11:19:12.801488247 +0000 UTC m=+0.186601166 container attach ba959ff5503c74e121555472a2d4172c01a1ea079686d35ca62dcb13b4f385a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kare, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:19:12 np0005593232 boring_kare[439774]: 167 167
Jan 23 06:19:12 np0005593232 systemd[1]: libpod-ba959ff5503c74e121555472a2d4172c01a1ea079686d35ca62dcb13b4f385a4.scope: Deactivated successfully.
Jan 23 06:19:12 np0005593232 podman[439744]: 2026-01-23 11:19:12.805270204 +0000 UTC m=+0.190383133 container died ba959ff5503c74e121555472a2d4172c01a1ea079686d35ca62dcb13b4f385a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kare, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:19:12 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.49058 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:12 np0005593232 systemd[1]: var-lib-containers-storage-overlay-74b3fbcfb373b41a3beea585f06cf84360ac4e6e0042a9b16b9e038de6b2849a-merged.mount: Deactivated successfully.
Jan 23 06:19:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4610: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:13 np0005593232 nova_compute[250269]: 2026-01-23 11:19:13.231 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:19:13 np0005593232 podman[439744]: 2026-01-23 11:19:13.234855804 +0000 UTC m=+0.619968733 container remove ba959ff5503c74e121555472a2d4172c01a1ea079686d35ca62dcb13b4f385a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kare, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:19:13 np0005593232 systemd[1]: libpod-conmon-ba959ff5503c74e121555472a2d4172c01a1ea079686d35ca62dcb13b4f385a4.scope: Deactivated successfully.
Jan 23 06:19:13 np0005593232 podman[439839]: 2026-01-23 11:19:13.419289947 +0000 UTC m=+0.053250522 container create 07c73c0f1afe6a3dbae5a403873378f5bb799a01937bbd5548f0ef904a52977e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:19:13 np0005593232 systemd[1]: Started libpod-conmon-07c73c0f1afe6a3dbae5a403873378f5bb799a01937bbd5548f0ef904a52977e.scope.
Jan 23 06:19:13 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:19:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd027f739126347becb21a9f2071bbafb6c9ab56094d1da184cbf6ba110d46f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:19:13 np0005593232 podman[439839]: 2026-01-23 11:19:13.396862871 +0000 UTC m=+0.030823436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:19:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd027f739126347becb21a9f2071bbafb6c9ab56094d1da184cbf6ba110d46f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:19:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd027f739126347becb21a9f2071bbafb6c9ab56094d1da184cbf6ba110d46f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:19:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd027f739126347becb21a9f2071bbafb6c9ab56094d1da184cbf6ba110d46f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:19:13 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd027f739126347becb21a9f2071bbafb6c9ab56094d1da184cbf6ba110d46f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 06:19:13 np0005593232 podman[439839]: 2026-01-23 11:19:13.502618852 +0000 UTC m=+0.136579427 container init 07c73c0f1afe6a3dbae5a403873378f5bb799a01937bbd5548f0ef904a52977e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mahavira, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 23 06:19:13 np0005593232 podman[439839]: 2026-01-23 11:19:13.510554087 +0000 UTC m=+0.144514642 container start 07c73c0f1afe6a3dbae5a403873378f5bb799a01937bbd5548f0ef904a52977e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mahavira, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:19:13 np0005593232 podman[439839]: 2026-01-23 11:19:13.51595086 +0000 UTC m=+0.149911465 container attach 07c73c0f1afe6a3dbae5a403873378f5bb799a01937bbd5548f0ef904a52977e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mahavira, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Jan 23 06:19:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:13.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:13 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42480 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:13 np0005593232 systemd[1]: Starting Time & Date Service...
Jan 23 06:19:14 np0005593232 systemd[1]: Started Time & Date Service.
Jan 23 06:19:14 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42483 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:14 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:14 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:19:14 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:14 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:19:14 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:14 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 06:19:14 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:14 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:19:14 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:14 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:19:14 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:14 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:19:14 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:14 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:19:14 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:14 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:19:14 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:14 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:19:14 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:14 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:19:14 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:14 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:19:14 np0005593232 awesome_mahavira[439869]: --> passed data devices: 0 physical, 1 LVM
Jan 23 06:19:14 np0005593232 awesome_mahavira[439869]: --> relative data size: 1.0
Jan 23 06:19:14 np0005593232 awesome_mahavira[439869]: --> All data devices are unavailable
Jan 23 06:19:14 np0005593232 systemd[1]: libpod-07c73c0f1afe6a3dbae5a403873378f5bb799a01937bbd5548f0ef904a52977e.scope: Deactivated successfully.
Jan 23 06:19:14 np0005593232 podman[439839]: 2026-01-23 11:19:14.396500115 +0000 UTC m=+1.030460690 container died 07c73c0f1afe6a3dbae5a403873378f5bb799a01937bbd5548f0ef904a52977e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 23 06:19:14 np0005593232 systemd[1]: var-lib-containers-storage-overlay-afd027f739126347becb21a9f2071bbafb6c9ab56094d1da184cbf6ba110d46f-merged.mount: Deactivated successfully.
Jan 23 06:19:14 np0005593232 podman[439839]: 2026-01-23 11:19:14.469789284 +0000 UTC m=+1.103749849 container remove 07c73c0f1afe6a3dbae5a403873378f5bb799a01937bbd5548f0ef904a52977e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 06:19:14 np0005593232 systemd[1]: libpod-conmon-07c73c0f1afe6a3dbae5a403873378f5bb799a01937bbd5548f0ef904a52977e.scope: Deactivated successfully.
Jan 23 06:19:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:14.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:14 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4611: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:15 np0005593232 podman[440061]: 2026-01-23 11:19:15.089953772 +0000 UTC m=+0.040583793 container create 2ff0908f9554df300a1fd0d0650088fe876294bfe94922649cc2f4c373e3f1fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lewin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:19:15 np0005593232 systemd[1]: Started libpod-conmon-2ff0908f9554df300a1fd0d0650088fe876294bfe94922649cc2f4c373e3f1fd.scope.
Jan 23 06:19:15 np0005593232 podman[440061]: 2026-01-23 11:19:15.071388545 +0000 UTC m=+0.022018596 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:19:15 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:19:15 np0005593232 podman[440061]: 2026-01-23 11:19:15.197251326 +0000 UTC m=+0.147881367 container init 2ff0908f9554df300a1fd0d0650088fe876294bfe94922649cc2f4c373e3f1fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 23 06:19:15 np0005593232 podman[440061]: 2026-01-23 11:19:15.206371165 +0000 UTC m=+0.157001186 container start 2ff0908f9554df300a1fd0d0650088fe876294bfe94922649cc2f4c373e3f1fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lewin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 23 06:19:15 np0005593232 podman[440061]: 2026-01-23 11:19:15.209539575 +0000 UTC m=+0.160169596 container attach 2ff0908f9554df300a1fd0d0650088fe876294bfe94922649cc2f4c373e3f1fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:19:15 np0005593232 inspiring_lewin[440077]: 167 167
Jan 23 06:19:15 np0005593232 systemd[1]: libpod-2ff0908f9554df300a1fd0d0650088fe876294bfe94922649cc2f4c373e3f1fd.scope: Deactivated successfully.
Jan 23 06:19:15 np0005593232 podman[440061]: 2026-01-23 11:19:15.212490519 +0000 UTC m=+0.163120540 container died 2ff0908f9554df300a1fd0d0650088fe876294bfe94922649cc2f4c373e3f1fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lewin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 06:19:15 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f65a32b7378bf0edf2d0b517fa521216874e6ab0e0d6dfec1be93f46109a961a-merged.mount: Deactivated successfully.
Jan 23 06:19:15 np0005593232 podman[440061]: 2026-01-23 11:19:15.258255007 +0000 UTC m=+0.208885038 container remove 2ff0908f9554df300a1fd0d0650088fe876294bfe94922649cc2f4c373e3f1fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_lewin, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 23 06:19:15 np0005593232 systemd[1]: libpod-conmon-2ff0908f9554df300a1fd0d0650088fe876294bfe94922649cc2f4c373e3f1fd.scope: Deactivated successfully.
Jan 23 06:19:15 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.49097 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:15 np0005593232 podman[440101]: 2026-01-23 11:19:15.428666353 +0000 UTC m=+0.056390511 container create 43aee4704c0d0961aad54caa03b26dce7de1cc16815040c4d2bf2d3157aa9c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:19:15 np0005593232 systemd[1]: Started libpod-conmon-43aee4704c0d0961aad54caa03b26dce7de1cc16815040c4d2bf2d3157aa9c3c.scope.
Jan 23 06:19:15 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:19:15 np0005593232 podman[440101]: 2026-01-23 11:19:15.396327475 +0000 UTC m=+0.024051643 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:19:15 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3ac890f052769899f9e7c5da854c14b0b0685e3d0317a9c0d53ad3220fcc86d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:19:15 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3ac890f052769899f9e7c5da854c14b0b0685e3d0317a9c0d53ad3220fcc86d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:19:15 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3ac890f052769899f9e7c5da854c14b0b0685e3d0317a9c0d53ad3220fcc86d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:19:15 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3ac890f052769899f9e7c5da854c14b0b0685e3d0317a9c0d53ad3220fcc86d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:19:15 np0005593232 podman[440101]: 2026-01-23 11:19:15.524808891 +0000 UTC m=+0.152533059 container init 43aee4704c0d0961aad54caa03b26dce7de1cc16815040c4d2bf2d3157aa9c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:19:15 np0005593232 podman[440101]: 2026-01-23 11:19:15.531168011 +0000 UTC m=+0.158892169 container start 43aee4704c0d0961aad54caa03b26dce7de1cc16815040c4d2bf2d3157aa9c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 06:19:15 np0005593232 podman[440101]: 2026-01-23 11:19:15.534665561 +0000 UTC m=+0.162389769 container attach 43aee4704c0d0961aad54caa03b26dce7de1cc16815040c4d2bf2d3157aa9c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:19:15 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.49103 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 23 06:19:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:19:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:15.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]: {
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:    "0": [
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:        {
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:            "devices": [
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:                "/dev/loop3"
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:            ],
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:            "lv_name": "ceph_lv0",
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:            "lv_size": "7511998464",
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:            "name": "ceph_lv0",
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:            "tags": {
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:                "ceph.cephx_lockbox_secret": "",
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:                "ceph.cluster_name": "ceph",
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:                "ceph.crush_device_class": "",
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:                "ceph.encrypted": "0",
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:                "ceph.osd_id": "0",
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:                "ceph.type": "block",
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:                "ceph.vdo": "0"
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:            },
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:            "type": "block",
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:            "vg_name": "ceph_vg0"
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:        }
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]:    ]
Jan 23 06:19:16 np0005593232 gifted_proskuriakova[440118]: }
Jan 23 06:19:16 np0005593232 systemd[1]: libpod-43aee4704c0d0961aad54caa03b26dce7de1cc16815040c4d2bf2d3157aa9c3c.scope: Deactivated successfully.
Jan 23 06:19:16 np0005593232 podman[440101]: 2026-01-23 11:19:16.3131334 +0000 UTC m=+0.940857558 container died 43aee4704c0d0961aad54caa03b26dce7de1cc16815040c4d2bf2d3157aa9c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 06:19:16 np0005593232 systemd[1]: var-lib-containers-storage-overlay-a3ac890f052769899f9e7c5da854c14b0b0685e3d0317a9c0d53ad3220fcc86d-merged.mount: Deactivated successfully.
Jan 23 06:19:16 np0005593232 podman[440101]: 2026-01-23 11:19:16.370168518 +0000 UTC m=+0.997892676 container remove 43aee4704c0d0961aad54caa03b26dce7de1cc16815040c4d2bf2d3157aa9c3c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:19:16 np0005593232 systemd[1]: libpod-conmon-43aee4704c0d0961aad54caa03b26dce7de1cc16815040c4d2bf2d3157aa9c3c.scope: Deactivated successfully.
Jan 23 06:19:16 np0005593232 nova_compute[250269]: 2026-01-23 11:19:16.423 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:19:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:16.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:16 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4612: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:17 np0005593232 podman[440281]: 2026-01-23 11:19:17.02293708 +0000 UTC m=+0.049184466 container create 6bc5593a393eac49f581577a89546043efff64155992a671c73e58bd61a08f41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_buck, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:19:17 np0005593232 systemd[1]: Started libpod-conmon-6bc5593a393eac49f581577a89546043efff64155992a671c73e58bd61a08f41.scope.
Jan 23 06:19:17 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:19:17 np0005593232 podman[440281]: 2026-01-23 11:19:17.002621394 +0000 UTC m=+0.028868810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:19:17 np0005593232 podman[440281]: 2026-01-23 11:19:17.101486679 +0000 UTC m=+0.127734085 container init 6bc5593a393eac49f581577a89546043efff64155992a671c73e58bd61a08f41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_buck, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 23 06:19:17 np0005593232 podman[440281]: 2026-01-23 11:19:17.112011738 +0000 UTC m=+0.138259114 container start 6bc5593a393eac49f581577a89546043efff64155992a671c73e58bd61a08f41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_buck, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 23 06:19:17 np0005593232 podman[440281]: 2026-01-23 11:19:17.115797515 +0000 UTC m=+0.142044911 container attach 6bc5593a393eac49f581577a89546043efff64155992a671c73e58bd61a08f41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 23 06:19:17 np0005593232 sleepy_buck[440298]: 167 167
Jan 23 06:19:17 np0005593232 systemd[1]: libpod-6bc5593a393eac49f581577a89546043efff64155992a671c73e58bd61a08f41.scope: Deactivated successfully.
Jan 23 06:19:17 np0005593232 podman[440281]: 2026-01-23 11:19:17.117933126 +0000 UTC m=+0.144180512 container died 6bc5593a393eac49f581577a89546043efff64155992a671c73e58bd61a08f41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:19:17 np0005593232 systemd[1]: var-lib-containers-storage-overlay-b7f8fa5c20b414fd768a03128b4d2bf7dccc8745136637973287fc12cbcf29bd-merged.mount: Deactivated successfully.
Jan 23 06:19:17 np0005593232 podman[440281]: 2026-01-23 11:19:17.163066437 +0000 UTC m=+0.189313823 container remove 6bc5593a393eac49f581577a89546043efff64155992a671c73e58bd61a08f41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:19:17 np0005593232 systemd[1]: libpod-conmon-6bc5593a393eac49f581577a89546043efff64155992a671c73e58bd61a08f41.scope: Deactivated successfully.
Jan 23 06:19:17 np0005593232 podman[440322]: 2026-01-23 11:19:17.325185966 +0000 UTC m=+0.045882723 container create 4599aa327ba12b695ae4a109cb0107bac7c1423feaf03ce7ad9287cd98e5202e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_burnell, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:19:17 np0005593232 systemd[1]: Started libpod-conmon-4599aa327ba12b695ae4a109cb0107bac7c1423feaf03ce7ad9287cd98e5202e.scope.
Jan 23 06:19:17 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:19:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28501c767515cdbb65a3c9288f4eb6df6c4f10fef8ae6fd7eb53722eee4eea20/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:19:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28501c767515cdbb65a3c9288f4eb6df6c4f10fef8ae6fd7eb53722eee4eea20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:19:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28501c767515cdbb65a3c9288f4eb6df6c4f10fef8ae6fd7eb53722eee4eea20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:19:17 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28501c767515cdbb65a3c9288f4eb6df6c4f10fef8ae6fd7eb53722eee4eea20/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:19:17 np0005593232 podman[440322]: 2026-01-23 11:19:17.30664692 +0000 UTC m=+0.027343717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:19:17 np0005593232 podman[440322]: 2026-01-23 11:19:17.407014058 +0000 UTC m=+0.127710825 container init 4599aa327ba12b695ae4a109cb0107bac7c1423feaf03ce7ad9287cd98e5202e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:19:17 np0005593232 podman[440322]: 2026-01-23 11:19:17.413278505 +0000 UTC m=+0.133975262 container start 4599aa327ba12b695ae4a109cb0107bac7c1423feaf03ce7ad9287cd98e5202e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 23 06:19:17 np0005593232 podman[440322]: 2026-01-23 11:19:17.416736984 +0000 UTC m=+0.137433741 container attach 4599aa327ba12b695ae4a109cb0107bac7c1423feaf03ce7ad9287cd98e5202e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 23 06:19:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:19:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:19:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:17.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:19:18 np0005593232 confident_burnell[440336]: {
Jan 23 06:19:18 np0005593232 confident_burnell[440336]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 06:19:18 np0005593232 confident_burnell[440336]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:19:18 np0005593232 confident_burnell[440336]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 06:19:18 np0005593232 confident_burnell[440336]:        "osd_id": 0,
Jan 23 06:19:18 np0005593232 confident_burnell[440336]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:19:18 np0005593232 confident_burnell[440336]:        "type": "bluestore"
Jan 23 06:19:18 np0005593232 confident_burnell[440336]:    }
Jan 23 06:19:18 np0005593232 confident_burnell[440336]: }
Jan 23 06:19:18 np0005593232 systemd[1]: libpod-4599aa327ba12b695ae4a109cb0107bac7c1423feaf03ce7ad9287cd98e5202e.scope: Deactivated successfully.
Jan 23 06:19:18 np0005593232 podman[440322]: 2026-01-23 11:19:18.234423056 +0000 UTC m=+0.955119833 container died 4599aa327ba12b695ae4a109cb0107bac7c1423feaf03ce7ad9287cd98e5202e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_burnell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 23 06:19:18 np0005593232 nova_compute[250269]: 2026-01-23 11:19:18.234 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:19:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:18.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:18 np0005593232 systemd[1]: var-lib-containers-storage-overlay-28501c767515cdbb65a3c9288f4eb6df6c4f10fef8ae6fd7eb53722eee4eea20-merged.mount: Deactivated successfully.
Jan 23 06:19:18 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4613: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:19 np0005593232 podman[440322]: 2026-01-23 11:19:19.558355262 +0000 UTC m=+2.279052039 container remove 4599aa327ba12b695ae4a109cb0107bac7c1423feaf03ce7ad9287cd98e5202e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:19:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 06:19:19 np0005593232 systemd[1]: libpod-conmon-4599aa327ba12b695ae4a109cb0107bac7c1423feaf03ce7ad9287cd98e5202e.scope: Deactivated successfully.
Jan 23 06:19:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:19:19 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 06:19:19 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:19:19 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev cec68e8e-4488-4e48-87db-4a576ec81c39 does not exist
Jan 23 06:19:19 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev c413d45d-b056-4897-a9ae-c35ba5f34bfe does not exist
Jan 23 06:19:19 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev a4d3eefc-2159-450a-85de-4a06438cb39a does not exist
Jan 23 06:19:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:19.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:20.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:20 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:19:20 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:19:20 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4614: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:21 np0005593232 nova_compute[250269]: 2026-01-23 11:19:21.512 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:19:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:19:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:21.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:19:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:19:22 np0005593232 nova_compute[250269]: 2026-01-23 11:19:22.612 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:19:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:22.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:22 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4615: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:23 np0005593232 nova_compute[250269]: 2026-01-23 11:19:23.240 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:19:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:23.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:24.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:24 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4616: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:25.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:26 np0005593232 nova_compute[250269]: 2026-01-23 11:19:26.516 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:19:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:26.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:26 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4617: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:19:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:19:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:27.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:19:28 np0005593232 nova_compute[250269]: 2026-01-23 11:19:28.243 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:19:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:28.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:28 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4618: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:29.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:30.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:30 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4619: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:31 np0005593232 nova_compute[250269]: 2026-01-23 11:19:31.515 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:19:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:31.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:19:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:32.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:32 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4620: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:33 np0005593232 nova_compute[250269]: 2026-01-23 11:19:33.246 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:19:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 23 06:19:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:33.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 23 06:19:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:34.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:34 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4621: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:19:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:35.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:19:36 np0005593232 nova_compute[250269]: 2026-01-23 11:19:36.547 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:19:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:36.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:36 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4622: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:19:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_11:19:37
Jan 23 06:19:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 06:19:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 06:19:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'cephfs.cephfs.data', 'vms', '.rgw.root', 'default.rgw.log', '.mgr', 'default.rgw.control', 'volumes', 'backups', 'cephfs.cephfs.meta']
Jan 23 06:19:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 06:19:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:19:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:19:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:19:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:19:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:19:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:19:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:19:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:37.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:19:38 np0005593232 nova_compute[250269]: 2026-01-23 11:19:38.250 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:19:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:38.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:38 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4623: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 06:19:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:19:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 06:19:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:19:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:19:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:19:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:19:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:19:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:19:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:19:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:39.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:40.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:40 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4624: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:41 np0005593232 nova_compute[250269]: 2026-01-23 11:19:41.602 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:19:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:41.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:42 np0005593232 nova_compute[250269]: 2026-01-23 11:19:42.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:19:42 np0005593232 nova_compute[250269]: 2026-01-23 11:19:42.291 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 06:19:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:19:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:19:42.704 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:19:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:19:42.705 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:19:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:19:42.705 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:19:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:19:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:42.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:19:42 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4625: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:43 np0005593232 nova_compute[250269]: 2026-01-23 11:19:43.254 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:19:43 np0005593232 nova_compute[250269]: 2026-01-23 11:19:43.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:19:43 np0005593232 podman[440487]: 2026-01-23 11:19:43.396827698 +0000 UTC m=+0.064306638 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 23 06:19:43 np0005593232 podman[440486]: 2026-01-23 11:19:43.42584939 +0000 UTC m=+0.093767522 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 06:19:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:43.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:44 np0005593232 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 23 06:19:44 np0005593232 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 06:19:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:19:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:44.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:19:44 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4626: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:45 np0005593232 nova_compute[250269]: 2026-01-23 11:19:45.285 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:19:45 np0005593232 nova_compute[250269]: 2026-01-23 11:19:45.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:19:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:45.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:46 np0005593232 nova_compute[250269]: 2026-01-23 11:19:46.654 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:19:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:46.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:46 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4627: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:19:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:47.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 06:19:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:19:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:19:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 06:19:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:19:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:19:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:19:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:19:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:19:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:19:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:19:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:19:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:19:48 np0005593232 nova_compute[250269]: 2026-01-23 11:19:48.257 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:19:48 np0005593232 nova_compute[250269]: 2026-01-23 11:19:48.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:19:48 np0005593232 nova_compute[250269]: 2026-01-23 11:19:48.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 06:19:48 np0005593232 nova_compute[250269]: 2026-01-23 11:19:48.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 06:19:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:19:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:48.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:19:48 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4628: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:49 np0005593232 nova_compute[250269]: 2026-01-23 11:19:49.084 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 06:19:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:19:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:49.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:19:50 np0005593232 nova_compute[250269]: 2026-01-23 11:19:50.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:19:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:50.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:50 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4629: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:51 np0005593232 nova_compute[250269]: 2026-01-23 11:19:51.656 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:19:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:51.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:19:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:19:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:52.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:19:52 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4630: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:53 np0005593232 nova_compute[250269]: 2026-01-23 11:19:53.258 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:19:53 np0005593232 nova_compute[250269]: 2026-01-23 11:19:53.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:19:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:53.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:54 np0005593232 nova_compute[250269]: 2026-01-23 11:19:54.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:19:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:54.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:54 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4631: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:19:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:55.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:19:56 np0005593232 nova_compute[250269]: 2026-01-23 11:19:56.657 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:19:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:56.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:19:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4632: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:19:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:57.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:19:58 np0005593232 nova_compute[250269]: 2026-01-23 11:19:58.303 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:19:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:19:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:19:58.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:19:58 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4633: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:19:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:19:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:19:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:19:59.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:20:00 np0005593232 ceph-mon[74423]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 23 06:20:00 np0005593232 ceph-mon[74423]: overall HEALTH_OK
Jan 23 06:20:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:00.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:00 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4634: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:01 np0005593232 nova_compute[250269]: 2026-01-23 11:20:01.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:20:01 np0005593232 nova_compute[250269]: 2026-01-23 11:20:01.364 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:20:01 np0005593232 nova_compute[250269]: 2026-01-23 11:20:01.365 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:20:01 np0005593232 nova_compute[250269]: 2026-01-23 11:20:01.365 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:20:01 np0005593232 nova_compute[250269]: 2026-01-23 11:20:01.365 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 06:20:01 np0005593232 nova_compute[250269]: 2026-01-23 11:20:01.365 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:20:01 np0005593232 nova_compute[250269]: 2026-01-23 11:20:01.704 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:20:01 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:20:01 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/317905029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:20:01 np0005593232 nova_compute[250269]: 2026-01-23 11:20:01.802 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:20:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:20:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:01.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:20:01 np0005593232 nova_compute[250269]: 2026-01-23 11:20:01.946 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 06:20:01 np0005593232 nova_compute[250269]: 2026-01-23 11:20:01.947 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3910MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 06:20:01 np0005593232 nova_compute[250269]: 2026-01-23 11:20:01.947 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:20:01 np0005593232 nova_compute[250269]: 2026-01-23 11:20:01.948 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:20:02 np0005593232 nova_compute[250269]: 2026-01-23 11:20:02.031 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 06:20:02 np0005593232 nova_compute[250269]: 2026-01-23 11:20:02.032 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 06:20:02 np0005593232 nova_compute[250269]: 2026-01-23 11:20:02.184 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:20:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:20:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:20:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3258430889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:20:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:02.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:02 np0005593232 nova_compute[250269]: 2026-01-23 11:20:02.756 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:20:02 np0005593232 nova_compute[250269]: 2026-01-23 11:20:02.762 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:20:02 np0005593232 nova_compute[250269]: 2026-01-23 11:20:02.780 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:20:02 np0005593232 nova_compute[250269]: 2026-01-23 11:20:02.782 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 06:20:02 np0005593232 nova_compute[250269]: 2026-01-23 11:20:02.782 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:20:02 np0005593232 nova_compute[250269]: 2026-01-23 11:20:02.783 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:20:02 np0005593232 nova_compute[250269]: 2026-01-23 11:20:02.783 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 23 06:20:02 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4635: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:03 np0005593232 nova_compute[250269]: 2026-01-23 11:20:03.305 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:20:03 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-crash-compute-0[81752]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 23 06:20:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:20:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:03.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:20:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:04.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:04 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4636: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:05.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:06 np0005593232 nova_compute[250269]: 2026-01-23 11:20:06.741 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:20:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:06.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:06 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4637: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:20:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:20:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:20:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:20:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:20:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:20:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:20:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:20:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:07.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:20:08 np0005593232 nova_compute[250269]: 2026-01-23 11:20:08.307 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:20:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:08.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:08 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4638: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:09.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:10 np0005593232 systemd-logind[808]: Session 64 logged out. Waiting for processes to exit.
Jan 23 06:20:10 np0005593232 systemd[1]: session-64.scope: Deactivated successfully.
Jan 23 06:20:10 np0005593232 systemd[1]: session-64.scope: Consumed 3min 2.193s CPU time, 1.0G memory peak, read 462.1M from disk, written 420.1M to disk.
Jan 23 06:20:10 np0005593232 systemd-logind[808]: Removed session 64.
Jan 23 06:20:10 np0005593232 systemd-logind[808]: New session 65 of user zuul.
Jan 23 06:20:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:10.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:10 np0005593232 systemd[1]: Started Session 65 of User zuul.
Jan 23 06:20:10 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4639: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:11 np0005593232 systemd[1]: session-65.scope: Deactivated successfully.
Jan 23 06:20:11 np0005593232 systemd-logind[808]: Session 65 logged out. Waiting for processes to exit.
Jan 23 06:20:11 np0005593232 systemd-logind[808]: Removed session 65.
Jan 23 06:20:11 np0005593232 systemd-logind[808]: New session 66 of user zuul.
Jan 23 06:20:11 np0005593232 systemd[1]: Started Session 66 of User zuul.
Jan 23 06:20:11 np0005593232 systemd[1]: session-66.scope: Deactivated successfully.
Jan 23 06:20:11 np0005593232 systemd-logind[808]: Session 66 logged out. Waiting for processes to exit.
Jan 23 06:20:11 np0005593232 systemd-logind[808]: Removed session 66.
Jan 23 06:20:11 np0005593232 nova_compute[250269]: 2026-01-23 11:20:11.743 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:20:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:11.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:20:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:12.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:12 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4640: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:13 np0005593232 nova_compute[250269]: 2026-01-23 11:20:13.355 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:20:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:13.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:14 np0005593232 podman[440755]: 2026-01-23 11:20:14.4239374 +0000 UTC m=+0.071215019 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 06:20:14 np0005593232 podman[440754]: 2026-01-23 11:20:14.493367338 +0000 UTC m=+0.136943645 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, 
managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 23 06:20:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:14.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4641: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:20:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:15.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:20:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:20:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:16.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:20:16 np0005593232 nova_compute[250269]: 2026-01-23 11:20:16.793 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:20:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4642: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:20:17 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:17 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:17 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:17.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:18 np0005593232 nova_compute[250269]: 2026-01-23 11:20:18.361 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:20:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:18.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4643: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:19 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:19 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:19 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:19.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:20.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4644: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0) v1
Jan 23 06:20:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 06:20:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:20:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:20:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Jan 23 06:20:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:20:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Jan 23 06:20:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:20:21 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 33354e53-a4e8-4466-b418-97b5d4643c5e does not exist
Jan 23 06:20:21 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev cbdf5379-1493-4abc-a279-e49c2f70b416 does not exist
Jan 23 06:20:21 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev fc02cd94-bbe5-4b80-b7ea-1c92548c07c9 does not exist
Jan 23 06:20:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Jan 23 06:20:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 23 06:20:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Jan 23 06:20:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:20:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:20:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:20:21 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 23 06:20:21 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 23 06:20:21 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:20:21 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 23 06:20:21 np0005593232 nova_compute[250269]: 2026-01-23 11:20:21.797 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:20:21 np0005593232 podman[441072]: 2026-01-23 11:20:21.810209373 +0000 UTC m=+0.051570766 container create 4ab6fe3c415325427795276adab73f5c03d7b0c039326245122b8531dba0eccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Jan 23 06:20:21 np0005593232 systemd[1]: Started libpod-conmon-4ab6fe3c415325427795276adab73f5c03d7b0c039326245122b8531dba0eccd.scope.
Jan 23 06:20:21 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:20:21 np0005593232 podman[441072]: 2026-01-23 11:20:21.787512686 +0000 UTC m=+0.028874129 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:20:21 np0005593232 podman[441072]: 2026-01-23 11:20:21.891299283 +0000 UTC m=+0.132660696 container init 4ab6fe3c415325427795276adab73f5c03d7b0c039326245122b8531dba0eccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 06:20:21 np0005593232 podman[441072]: 2026-01-23 11:20:21.89807609 +0000 UTC m=+0.139437483 container start 4ab6fe3c415325427795276adab73f5c03d7b0c039326245122b8531dba0eccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_murdock, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:20:21 np0005593232 loving_murdock[441089]: 167 167
Jan 23 06:20:21 np0005593232 systemd[1]: libpod-4ab6fe3c415325427795276adab73f5c03d7b0c039326245122b8531dba0eccd.scope: Deactivated successfully.
Jan 23 06:20:21 np0005593232 podman[441072]: 2026-01-23 11:20:21.902570175 +0000 UTC m=+0.143931618 container attach 4ab6fe3c415325427795276adab73f5c03d7b0c039326245122b8531dba0eccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_murdock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Jan 23 06:20:21 np0005593232 podman[441072]: 2026-01-23 11:20:21.904049505 +0000 UTC m=+0.145410918 container died 4ab6fe3c415325427795276adab73f5c03d7b0c039326245122b8531dba0eccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:20:21 np0005593232 systemd[1]: var-lib-containers-storage-overlay-c2ca5e5a3631c2f40831ba974dafb0593a3a28af0b509b45d0b0ef1efcd8f316-merged.mount: Deactivated successfully.
Jan 23 06:20:21 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:21 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:21 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:21.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:21 np0005593232 podman[441072]: 2026-01-23 11:20:21.94618939 +0000 UTC m=+0.187550783 container remove 4ab6fe3c415325427795276adab73f5c03d7b0c039326245122b8531dba0eccd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_murdock, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:20:21 np0005593232 systemd[1]: libpod-conmon-4ab6fe3c415325427795276adab73f5c03d7b0c039326245122b8531dba0eccd.scope: Deactivated successfully.
Jan 23 06:20:22 np0005593232 podman[441112]: 2026-01-23 11:20:22.128171347 +0000 UTC m=+0.052776509 container create 79f4132ce17aaa77a2c358bc05b3b9bdd35bf13714a9a61a78845b43d2127dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:20:22 np0005593232 podman[441112]: 2026-01-23 11:20:22.100792281 +0000 UTC m=+0.025397523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:20:22 np0005593232 systemd[1]: Started libpod-conmon-79f4132ce17aaa77a2c358bc05b3b9bdd35bf13714a9a61a78845b43d2127dad.scope.
Jan 23 06:20:22 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:20:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f329846fb8504cdffc9abcc6b82503d2aea7f5094b5e3af7694e21fdee987655/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:20:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f329846fb8504cdffc9abcc6b82503d2aea7f5094b5e3af7694e21fdee987655/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:20:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f329846fb8504cdffc9abcc6b82503d2aea7f5094b5e3af7694e21fdee987655/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:20:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f329846fb8504cdffc9abcc6b82503d2aea7f5094b5e3af7694e21fdee987655/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:20:22 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f329846fb8504cdffc9abcc6b82503d2aea7f5094b5e3af7694e21fdee987655/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 23 06:20:22 np0005593232 podman[441112]: 2026-01-23 11:20:22.466533815 +0000 UTC m=+0.391139057 container init 79f4132ce17aaa77a2c358bc05b3b9bdd35bf13714a9a61a78845b43d2127dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kare, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 06:20:22 np0005593232 podman[441112]: 2026-01-23 11:20:22.480373348 +0000 UTC m=+0.404978510 container start 79f4132ce17aaa77a2c358bc05b3b9bdd35bf13714a9a61a78845b43d2127dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kare, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 23 06:20:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:20:22 np0005593232 podman[441112]: 2026-01-23 11:20:22.657675276 +0000 UTC m=+0.582280518 container attach 79f4132ce17aaa77a2c358bc05b3b9bdd35bf13714a9a61a78845b43d2127dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kare, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 23 06:20:22 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #237. Immutable memtables: 0.
Jan 23 06:20:22 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:20:22.771027) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 23 06:20:22 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:856] [default] [JOB 149] Flushing memtable with next log file: 237
Jan 23 06:20:22 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769167222771077, "job": 149, "event": "flush_started", "num_memtables": 1, "num_entries": 2072, "num_deletes": 506, "total_data_size": 2819786, "memory_usage": 2881872, "flush_reason": "Manual Compaction"}
Jan 23 06:20:22 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:885] [default] [JOB 149] Level-0 flush table #238: started
Jan 23 06:20:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:20:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:22.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:20:22 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769167222836815, "cf_name": "default", "job": 149, "event": "table_file_creation", "file_number": 238, "file_size": 2780748, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 100359, "largest_seqno": 102430, "table_properties": {"data_size": 2770783, "index_size": 5498, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 28941, "raw_average_key_size": 21, "raw_value_size": 2747336, "raw_average_value_size": 2050, "num_data_blocks": 234, "num_entries": 1340, "num_filter_entries": 1340, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769167087, "oldest_key_time": 1769167087, "file_creation_time": 1769167222, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 238, "seqno_to_time_mapping": "N/A"}}
Jan 23 06:20:22 np0005593232 ceph-mon[74423]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 149] Flush lasted 65868 microseconds, and 9226 cpu microseconds.
Jan 23 06:20:22 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 06:20:22 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:20:22.836894) [db/flush_job.cc:967] [default] [JOB 149] Level-0 flush table #238: 2780748 bytes OK
Jan 23 06:20:22 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:20:22.836917) [db/memtable_list.cc:519] [default] Level-0 commit table #238 started
Jan 23 06:20:22 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:20:22.977104) [db/memtable_list.cc:722] [default] Level-0 commit table #238: memtable #1 done
Jan 23 06:20:22 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:20:22.977157) EVENT_LOG_v1 {"time_micros": 1769167222977144, "job": 149, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 23 06:20:22 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:20:22.977186) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 23 06:20:22 np0005593232 ceph-mon[74423]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 149] Try to delete WAL files size 2808880, prev total WAL file size 2809520, number of live WAL files 2.
Jan 23 06:20:22 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000234.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:20:22 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:20:22.978703) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034353233' seq:72057594037927935, type:22 .. '6C6F676D0034373734' seq:0, type:0; will stop at (end)
Jan 23 06:20:22 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 150] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 23 06:20:22 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 149 Base level 0, inputs: [238(2715KB)], [236(13MB)]
Jan 23 06:20:22 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769167222978776, "job": 150, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [238], "files_L6": [236], "score": -1, "input_data_size": 16479611, "oldest_snapshot_seqno": -1}
Jan 23 06:20:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4645: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:23 np0005593232 ceph-mon[74423]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 150] Generated table #239: 12538 keys, 14393635 bytes, temperature: kUnknown
Jan 23 06:20:23 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769167223131292, "cf_name": "default", "job": 150, "event": "table_file_creation", "file_number": 239, "file_size": 14393635, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14315289, "index_size": 45913, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31365, "raw_key_size": 336456, "raw_average_key_size": 26, "raw_value_size": 14098708, "raw_average_value_size": 1124, "num_data_blocks": 1708, "num_entries": 12538, "num_filter_entries": 12538, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769158724, "oldest_key_time": 0, "file_creation_time": 1769167222, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a81b8e15-1a54-43c0-85f4-c7c545c93281", "db_session_id": "V2SI29UDTZDZQYJD4090", "orig_file_number": 239, "seqno_to_time_mapping": "N/A"}}
Jan 23 06:20:23 np0005593232 ceph-mon[74423]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 23 06:20:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:20:23.131730) [db/compaction/compaction_job.cc:1663] [default] [JOB 150] Compacted 1@0 + 1@6 files to L6 => 14393635 bytes
Jan 23 06:20:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:20:23.179273) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 108.0 rd, 94.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 13.1 +0.0 blob) out(13.7 +0.0 blob), read-write-amplify(11.1) write-amplify(5.2) OK, records in: 13566, records dropped: 1028 output_compression: NoCompression
Jan 23 06:20:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:20:23.179313) EVENT_LOG_v1 {"time_micros": 1769167223179297, "job": 150, "event": "compaction_finished", "compaction_time_micros": 152625, "compaction_time_cpu_micros": 35513, "output_level": 6, "num_output_files": 1, "total_output_size": 14393635, "num_input_records": 13566, "num_output_records": 12538, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 23 06:20:23 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000238.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:20:23 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769167223179983, "job": 150, "event": "table_file_deletion", "file_number": 238}
Jan 23 06:20:23 np0005593232 ceph-mon[74423]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000236.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 23 06:20:23 np0005593232 ceph-mon[74423]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769167223182124, "job": 150, "event": "table_file_deletion", "file_number": 236}
Jan 23 06:20:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:20:22.978586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:20:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:20:23.182164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:20:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:20:23.182168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:20:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:20:23.182170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:20:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:20:23.182171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:20:23 np0005593232 ceph-mon[74423]: rocksdb: (Original Log Time 2026/01/23-11:20:23.182172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 23 06:20:23 np0005593232 focused_kare[441128]: --> passed data devices: 0 physical, 1 LVM
Jan 23 06:20:23 np0005593232 focused_kare[441128]: --> relative data size: 1.0
Jan 23 06:20:23 np0005593232 focused_kare[441128]: --> All data devices are unavailable
Jan 23 06:20:23 np0005593232 systemd[1]: libpod-79f4132ce17aaa77a2c358bc05b3b9bdd35bf13714a9a61a78845b43d2127dad.scope: Deactivated successfully.
Jan 23 06:20:23 np0005593232 podman[441112]: 2026-01-23 11:20:23.33068075 +0000 UTC m=+1.255285912 container died 79f4132ce17aaa77a2c358bc05b3b9bdd35bf13714a9a61a78845b43d2127dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:20:23 np0005593232 systemd[1]: var-lib-containers-storage-overlay-f329846fb8504cdffc9abcc6b82503d2aea7f5094b5e3af7694e21fdee987655-merged.mount: Deactivated successfully.
Jan 23 06:20:23 np0005593232 nova_compute[250269]: 2026-01-23 11:20:23.363 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:20:23 np0005593232 podman[441112]: 2026-01-23 11:20:23.387748616 +0000 UTC m=+1.312353778 container remove 79f4132ce17aaa77a2c358bc05b3b9bdd35bf13714a9a61a78845b43d2127dad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_kare, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 23 06:20:23 np0005593232 systemd[1]: libpod-conmon-79f4132ce17aaa77a2c358bc05b3b9bdd35bf13714a9a61a78845b43d2127dad.scope: Deactivated successfully.
Jan 23 06:20:23 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:23 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:23 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:23.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:24 np0005593232 podman[441295]: 2026-01-23 11:20:23.998997014 +0000 UTC m=+0.020855577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:20:24 np0005593232 podman[441295]: 2026-01-23 11:20:24.225512842 +0000 UTC m=+0.247371405 container create c50d771007e98cea42e4b2399d044aba3a288438a4942066f55ac5e88adada5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 23 06:20:24 np0005593232 systemd[1]: Started libpod-conmon-c50d771007e98cea42e4b2399d044aba3a288438a4942066f55ac5e88adada5d.scope.
Jan 23 06:20:24 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:20:24 np0005593232 podman[441295]: 2026-01-23 11:20:24.316783344 +0000 UTC m=+0.338641927 container init c50d771007e98cea42e4b2399d044aba3a288438a4942066f55ac5e88adada5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rubin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 06:20:24 np0005593232 podman[441295]: 2026-01-23 11:20:24.32462599 +0000 UTC m=+0.346484553 container start c50d771007e98cea42e4b2399d044aba3a288438a4942066f55ac5e88adada5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rubin, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 06:20:24 np0005593232 podman[441295]: 2026-01-23 11:20:24.328444066 +0000 UTC m=+0.350302659 container attach c50d771007e98cea42e4b2399d044aba3a288438a4942066f55ac5e88adada5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:20:24 np0005593232 fervent_rubin[441334]: 167 167
Jan 23 06:20:24 np0005593232 podman[441295]: 2026-01-23 11:20:24.331244503 +0000 UTC m=+0.353103066 container died c50d771007e98cea42e4b2399d044aba3a288438a4942066f55ac5e88adada5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 23 06:20:24 np0005593232 systemd[1]: libpod-c50d771007e98cea42e4b2399d044aba3a288438a4942066f55ac5e88adada5d.scope: Deactivated successfully.
Jan 23 06:20:24 np0005593232 systemd[1]: var-lib-containers-storage-overlay-4c93718e6dbbfe249c57b309bffc84f07efda8415c2df40c296d082e2dd5c0cd-merged.mount: Deactivated successfully.
Jan 23 06:20:24 np0005593232 podman[441295]: 2026-01-23 11:20:24.373510971 +0000 UTC m=+0.395369534 container remove c50d771007e98cea42e4b2399d044aba3a288438a4942066f55ac5e88adada5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 23 06:20:24 np0005593232 systemd[1]: libpod-conmon-c50d771007e98cea42e4b2399d044aba3a288438a4942066f55ac5e88adada5d.scope: Deactivated successfully.
Jan 23 06:20:24 np0005593232 podman[441384]: 2026-01-23 11:20:24.549949925 +0000 UTC m=+0.051391210 container create 61c348abb8bd4c3d92d27afdbbcd8268a982eb17aab300f4820b266328ee3155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_johnson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 23 06:20:24 np0005593232 systemd[1]: Started libpod-conmon-61c348abb8bd4c3d92d27afdbbcd8268a982eb17aab300f4820b266328ee3155.scope.
Jan 23 06:20:24 np0005593232 podman[441384]: 2026-01-23 11:20:24.52913198 +0000 UTC m=+0.030573285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:20:24 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:20:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3149bb371937797d9822955f5af9de3b140066fd84bb6cca7bca8da6bfd516c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:20:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3149bb371937797d9822955f5af9de3b140066fd84bb6cca7bca8da6bfd516c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:20:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3149bb371937797d9822955f5af9de3b140066fd84bb6cca7bca8da6bfd516c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:20:24 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3149bb371937797d9822955f5af9de3b140066fd84bb6cca7bca8da6bfd516c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:20:24 np0005593232 podman[441384]: 2026-01-23 11:20:24.648816407 +0000 UTC m=+0.150257702 container init 61c348abb8bd4c3d92d27afdbbcd8268a982eb17aab300f4820b266328ee3155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_johnson, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Jan 23 06:20:24 np0005593232 podman[441384]: 2026-01-23 11:20:24.663816631 +0000 UTC m=+0.165257896 container start 61c348abb8bd4c3d92d27afdbbcd8268a982eb17aab300f4820b266328ee3155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_johnson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 23 06:20:24 np0005593232 podman[441384]: 2026-01-23 11:20:24.668017267 +0000 UTC m=+0.169458552 container attach 61c348abb8bd4c3d92d27afdbbcd8268a982eb17aab300f4820b266328ee3155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_johnson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 06:20:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:24.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4646: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]: {
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:    "0": [
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:        {
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:            "devices": [
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:                "/dev/loop3"
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:            ],
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:            "lv_name": "ceph_lv0",
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:            "lv_size": "7511998464",
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=e1533653-0a5a-584c-b34b-8689f0d32e77,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=90ada044-56e3-48f6-a1ea-fdd1f369ab0a,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:            "lv_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:            "name": "ceph_lv0",
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:            "tags": {
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:                "ceph.block_uuid": "M3FOyq-WW1t-wqhC-aAPY-BL3I-wwqQ-ghiZG7",
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:                "ceph.cephx_lockbox_secret": "",
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:                "ceph.cluster_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:                "ceph.cluster_name": "ceph",
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:                "ceph.crush_device_class": "",
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:                "ceph.encrypted": "0",
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:                "ceph.osd_fsid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:                "ceph.osd_id": "0",
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:                "ceph.type": "block",
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:                "ceph.vdo": "0"
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:            },
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:            "type": "block",
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:            "vg_name": "ceph_vg0"
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:        }
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]:    ]
Jan 23 06:20:25 np0005593232 inspiring_johnson[441400]: }
Jan 23 06:20:25 np0005593232 systemd[1]: libpod-61c348abb8bd4c3d92d27afdbbcd8268a982eb17aab300f4820b266328ee3155.scope: Deactivated successfully.
Jan 23 06:20:25 np0005593232 podman[441384]: 2026-01-23 11:20:25.465849479 +0000 UTC m=+0.967290754 container died 61c348abb8bd4c3d92d27afdbbcd8268a982eb17aab300f4820b266328ee3155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_johnson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 23 06:20:25 np0005593232 systemd[1]: var-lib-containers-storage-overlay-3149bb371937797d9822955f5af9de3b140066fd84bb6cca7bca8da6bfd516c5-merged.mount: Deactivated successfully.
Jan 23 06:20:25 np0005593232 podman[441384]: 2026-01-23 11:20:25.530786263 +0000 UTC m=+1.032227548 container remove 61c348abb8bd4c3d92d27afdbbcd8268a982eb17aab300f4820b266328ee3155 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_johnson, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 06:20:25 np0005593232 systemd[1]: libpod-conmon-61c348abb8bd4c3d92d27afdbbcd8268a982eb17aab300f4820b266328ee3155.scope: Deactivated successfully.
Jan 23 06:20:25 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:25 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:25 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:25.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:26 np0005593232 podman[441560]: 2026-01-23 11:20:26.216085976 +0000 UTC m=+0.045732364 container create 6bf6e0aabc09c8b963e11e0df5fc625dabe9b3ba7626a7d885d179d7acb244e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_knuth, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 23 06:20:26 np0005593232 systemd[1]: Started libpod-conmon-6bf6e0aabc09c8b963e11e0df5fc625dabe9b3ba7626a7d885d179d7acb244e9.scope.
Jan 23 06:20:26 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:20:26 np0005593232 podman[441560]: 2026-01-23 11:20:26.290585114 +0000 UTC m=+0.120231532 container init 6bf6e0aabc09c8b963e11e0df5fc625dabe9b3ba7626a7d885d179d7acb244e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 23 06:20:26 np0005593232 podman[441560]: 2026-01-23 11:20:26.197830492 +0000 UTC m=+0.027476890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:20:26 np0005593232 podman[441560]: 2026-01-23 11:20:26.298175874 +0000 UTC m=+0.127822292 container start 6bf6e0aabc09c8b963e11e0df5fc625dabe9b3ba7626a7d885d179d7acb244e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_knuth, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:20:26 np0005593232 podman[441560]: 2026-01-23 11:20:26.302606706 +0000 UTC m=+0.132253094 container attach 6bf6e0aabc09c8b963e11e0df5fc625dabe9b3ba7626a7d885d179d7acb244e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_knuth, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 23 06:20:26 np0005593232 relaxed_knuth[441576]: 167 167
Jan 23 06:20:26 np0005593232 systemd[1]: libpod-6bf6e0aabc09c8b963e11e0df5fc625dabe9b3ba7626a7d885d179d7acb244e9.scope: Deactivated successfully.
Jan 23 06:20:26 np0005593232 podman[441560]: 2026-01-23 11:20:26.305034464 +0000 UTC m=+0.134680852 container died 6bf6e0aabc09c8b963e11e0df5fc625dabe9b3ba7626a7d885d179d7acb244e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Jan 23 06:20:26 np0005593232 systemd[1]: var-lib-containers-storage-overlay-422bf2fb417f3ea36dd0c36026777241b8ca8860dd3dd473595be61ff8139b42-merged.mount: Deactivated successfully.
Jan 23 06:20:26 np0005593232 podman[441560]: 2026-01-23 11:20:26.348404772 +0000 UTC m=+0.178051140 container remove 6bf6e0aabc09c8b963e11e0df5fc625dabe9b3ba7626a7d885d179d7acb244e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_knuth, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 23 06:20:26 np0005593232 systemd[1]: libpod-conmon-6bf6e0aabc09c8b963e11e0df5fc625dabe9b3ba7626a7d885d179d7acb244e9.scope: Deactivated successfully.
Jan 23 06:20:26 np0005593232 podman[441598]: 2026-01-23 11:20:26.532807626 +0000 UTC m=+0.051640017 container create d396a3419f445857fee270379f7054cf05eae31346d89ff54f280d11dc0b4d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 23 06:20:26 np0005593232 systemd[1]: Started libpod-conmon-d396a3419f445857fee270379f7054cf05eae31346d89ff54f280d11dc0b4d79.scope.
Jan 23 06:20:26 np0005593232 systemd[1]: Started libcrun container.
Jan 23 06:20:26 np0005593232 podman[441598]: 2026-01-23 11:20:26.51157892 +0000 UTC m=+0.030411351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 23 06:20:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ff058b4744f5f1f3e297b295b4c0310d0eb72e36345628b2b8ea8e9e79ef57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 23 06:20:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ff058b4744f5f1f3e297b295b4c0310d0eb72e36345628b2b8ea8e9e79ef57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 23 06:20:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ff058b4744f5f1f3e297b295b4c0310d0eb72e36345628b2b8ea8e9e79ef57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 23 06:20:26 np0005593232 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ff058b4744f5f1f3e297b295b4c0310d0eb72e36345628b2b8ea8e9e79ef57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 23 06:20:26 np0005593232 podman[441598]: 2026-01-23 11:20:26.62017169 +0000 UTC m=+0.139004091 container init d396a3419f445857fee270379f7054cf05eae31346d89ff54f280d11dc0b4d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Jan 23 06:20:26 np0005593232 podman[441598]: 2026-01-23 11:20:26.626853315 +0000 UTC m=+0.145685706 container start d396a3419f445857fee270379f7054cf05eae31346d89ff54f280d11dc0b4d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_almeida, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:20:26 np0005593232 podman[441598]: 2026-01-23 11:20:26.631076091 +0000 UTC m=+0.149908492 container attach d396a3419f445857fee270379f7054cf05eae31346d89ff54f280d11dc0b4d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:20:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:20:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:26.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:20:26 np0005593232 nova_compute[250269]: 2026-01-23 11:20:26.797 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:20:27 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4647: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:27 np0005593232 peaceful_almeida[441614]: {
Jan 23 06:20:27 np0005593232 peaceful_almeida[441614]:    "90ada044-56e3-48f6-a1ea-fdd1f369ab0a": {
Jan 23 06:20:27 np0005593232 peaceful_almeida[441614]:        "ceph_fsid": "e1533653-0a5a-584c-b34b-8689f0d32e77",
Jan 23 06:20:27 np0005593232 peaceful_almeida[441614]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 23 06:20:27 np0005593232 peaceful_almeida[441614]:        "osd_id": 0,
Jan 23 06:20:27 np0005593232 peaceful_almeida[441614]:        "osd_uuid": "90ada044-56e3-48f6-a1ea-fdd1f369ab0a",
Jan 23 06:20:27 np0005593232 peaceful_almeida[441614]:        "type": "bluestore"
Jan 23 06:20:27 np0005593232 peaceful_almeida[441614]:    }
Jan 23 06:20:27 np0005593232 peaceful_almeida[441614]: }
Jan 23 06:20:27 np0005593232 systemd[1]: libpod-d396a3419f445857fee270379f7054cf05eae31346d89ff54f280d11dc0b4d79.scope: Deactivated successfully.
Jan 23 06:20:27 np0005593232 podman[441598]: 2026-01-23 11:20:27.469263559 +0000 UTC m=+0.988095950 container died d396a3419f445857fee270379f7054cf05eae31346d89ff54f280d11dc0b4d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_almeida, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 23 06:20:27 np0005593232 systemd[1]: var-lib-containers-storage-overlay-55ff058b4744f5f1f3e297b295b4c0310d0eb72e36345628b2b8ea8e9e79ef57-merged.mount: Deactivated successfully.
Jan 23 06:20:27 np0005593232 podman[441598]: 2026-01-23 11:20:27.538010118 +0000 UTC m=+1.056842509 container remove d396a3419f445857fee270379f7054cf05eae31346d89ff54f280d11dc0b4d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_almeida, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 23 06:20:27 np0005593232 systemd[1]: libpod-conmon-d396a3419f445857fee270379f7054cf05eae31346d89ff54f280d11dc0b4d79.scope: Deactivated successfully.
Jan 23 06:20:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Jan 23 06:20:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:20:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Jan 23 06:20:27 np0005593232 ceph-mon[74423]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:20:27 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 8eda5ca0-478b-45a7-9554-f9fbcc7e74dc does not exist
Jan 23 06:20:27 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 8d3e4d31-dc30-427f-8c73-1e50711ab94e does not exist
Jan 23 06:20:27 np0005593232 ceph-mgr[74726]: [progress WARNING root] complete: ev 82053471-d273-430c-9931-8303d6a2f915 does not exist
Jan 23 06:20:27 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:20:27 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:27 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:27 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:27.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:28 np0005593232 nova_compute[250269]: 2026-01-23 11:20:28.366 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:20:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:20:28 np0005593232 ceph-mon[74423]: from='mgr.14132 192.168.122.100:0/1215693943' entity='mgr.compute-0.yntofk' 
Jan 23 06:20:28 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:28 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:28 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:28.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:29 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4648: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:29 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:29 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:20:29 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:29.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:20:30 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:30 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:20:30 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:30.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:20:31 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4649: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:31 np0005593232 nova_compute[250269]: 2026-01-23 11:20:31.842 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:20:31 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:31 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:20:31 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:31.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:20:32 np0005593232 nova_compute[250269]: 2026-01-23 11:20:32.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:20:32 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:20:32 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:32 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:32 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:32.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:33 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4650: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:33 np0005593232 nova_compute[250269]: 2026-01-23 11:20:33.368 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:20:33 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:33 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:20:33 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:33.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:20:34 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:34 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:20:34 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:34.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:20:35 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4651: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:35 np0005593232 nova_compute[250269]: 2026-01-23 11:20:35.325 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:20:35 np0005593232 nova_compute[250269]: 2026-01-23 11:20:35.325 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 23 06:20:35 np0005593232 nova_compute[250269]: 2026-01-23 11:20:35.355 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 23 06:20:35 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:35 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:20:35 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:35.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:20:36 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:36 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:36 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:36.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:36 np0005593232 nova_compute[250269]: 2026-01-23 11:20:36.844 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:20:37 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4652: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Optimize plan auto_2026-01-23_11:20:37
Jan 23 06:20:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 23 06:20:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] do_upmap
Jan 23 06:20:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', '.mgr', 'vms']
Jan 23 06:20:37 np0005593232 ceph-mgr[74726]: [balancer INFO root] prepared 0/10 changes
Jan 23 06:20:37 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:20:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:20:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:20:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:20:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:20:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:20:37 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:20:37 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:37 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:37 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:37.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:38 np0005593232 nova_compute[250269]: 2026-01-23 11:20:38.370 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:20:38 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:38 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:38 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:38.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 23 06:20:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:20:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 23 06:20:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 23 06:20:38 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:20:39 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:20:39 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:20:39 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 23 06:20:39 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 23 06:20:39 np0005593232 ceph-mgr[74726]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 23 06:20:39 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4653: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:39 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:39 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:20:39 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:39.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:20:40 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:40 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:40 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:40.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:41 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4654: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:41 np0005593232 nova_compute[250269]: 2026-01-23 11:20:41.881 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:20:41 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:41 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:41 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:41.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:42 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:20:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:20:42.705 161902 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:20:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:20:42.706 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:20:42 np0005593232 ovn_metadata_agent[161895]: 2026-01-23 11:20:42.706 161902 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:20:42 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:42 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:20:42 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:42.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:20:43 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4655: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:43 np0005593232 nova_compute[250269]: 2026-01-23 11:20:43.372 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:20:43 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:43 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:43 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:43.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:44 np0005593232 nova_compute[250269]: 2026-01-23 11:20:44.322 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:20:44 np0005593232 nova_compute[250269]: 2026-01-23 11:20:44.322 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 23 06:20:44 np0005593232 podman[441730]: 2026-01-23 11:20:44.540316401 +0000 UTC m=+0.059492134 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 23 06:20:44 np0005593232 podman[441773]: 2026-01-23 11:20:44.667820584 +0000 UTC m=+0.103975714 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 23 06:20:44 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:44 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:44 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:44.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:45 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4656: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:45 np0005593232 nova_compute[250269]: 2026-01-23 11:20:45.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:20:45 np0005593232 nova_compute[250269]: 2026-01-23 11:20:45.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:20:45 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:45 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:45 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:45.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:46 np0005593232 nova_compute[250269]: 2026-01-23 11:20:46.286 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:20:46 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:46 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:46 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:46.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:46 np0005593232 nova_compute[250269]: 2026-01-23 11:20:46.883 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:20:47 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4657: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:47 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:20:47 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:47 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:20:47 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:47.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:20:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] _maybe_adjust
Jan 23 06:20:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:20:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.0538165363856318e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Jan 23 06:20:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:20:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:20:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:20:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021752279918109065 of space, bias 1.0, pg target 0.6525683975432719 quantized to 32 (current 32)
Jan 23 06:20:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:20:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.817536757863391e-07 of space, bias 1.0, pg target 5.4526102735901735e-05 quantized to 32 (current 32)
Jan 23 06:20:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:20:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0019031427391587568 of space, bias 1.0, pg target 0.570942821747627 quantized to 32 (current 32)
Jan 23 06:20:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:20:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4540294062907128e-06 of space, bias 4.0, pg target 0.0017448352875488555 quantized to 16 (current 32)
Jan 23 06:20:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:20:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:20:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:20:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 7.270147031453564e-07 of space, bias 1.0, pg target 0.00021810441094360694 quantized to 32 (current 32)
Jan 23 06:20:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:20:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 6.17962497673553e-06 of space, bias 1.0, pg target 0.0018538874930206588 quantized to 32 (current 32)
Jan 23 06:20:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:20:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 23 06:20:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 22535995392
Jan 23 06:20:48 np0005593232 ceph-mgr[74726]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 3.635073515726782e-07 of space, bias 4.0, pg target 0.0004362088218872139 quantized to 32 (current 32)
Jan 23 06:20:48 np0005593232 nova_compute[250269]: 2026-01-23 11:20:48.375 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:20:48 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:48 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:48 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:48.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:49 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4658: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:49 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:49 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:20:49 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:49.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:20:50 np0005593232 nova_compute[250269]: 2026-01-23 11:20:50.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:20:50 np0005593232 nova_compute[250269]: 2026-01-23 11:20:50.292 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 23 06:20:50 np0005593232 nova_compute[250269]: 2026-01-23 11:20:50.293 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 23 06:20:50 np0005593232 nova_compute[250269]: 2026-01-23 11:20:50.314 250273 DEBUG nova.compute.manager [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 23 06:20:50 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:50 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:50 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:50.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:51 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4659: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:51 np0005593232 nova_compute[250269]: 2026-01-23 11:20:51.926 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:20:51 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:51 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:20:51 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:51.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:20:52 np0005593232 nova_compute[250269]: 2026-01-23 11:20:52.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:20:52 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:20:52 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:52 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:52 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:52.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:53 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4660: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:53 np0005593232 nova_compute[250269]: 2026-01-23 11:20:53.378 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:20:53 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:53 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:20:53 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:53.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:20:54 np0005593232 nova_compute[250269]: 2026-01-23 11:20:54.292 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:20:54 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:54 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:20:54 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:54.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:20:55 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4661: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:55 np0005593232 nova_compute[250269]: 2026-01-23 11:20:55.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:20:55 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:55 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:20:55 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:55.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:20:56 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:56 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:20:56 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:56.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:20:56 np0005593232 nova_compute[250269]: 2026-01-23 11:20:56.928 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:20:57 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4662: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:57 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:20:57 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:57 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:57 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:57.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:58 np0005593232 nova_compute[250269]: 2026-01-23 11:20:58.380 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:20:58 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:58 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:58 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:20:58.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:20:59 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4663: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:20:59 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:20:59 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:20:59 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:20:59.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:21:00 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:00 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:21:00 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:21:00.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:21:01 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4664: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:21:01 np0005593232 nova_compute[250269]: 2026-01-23 11:21:01.968 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:21:01 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:01 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:21:01 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:21:01.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:21:02 np0005593232 nova_compute[250269]: 2026-01-23 11:21:02.291 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:21:02 np0005593232 nova_compute[250269]: 2026-01-23 11:21:02.330 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:21:02 np0005593232 nova_compute[250269]: 2026-01-23 11:21:02.330 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:21:02 np0005593232 nova_compute[250269]: 2026-01-23 11:21:02.330 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:21:02 np0005593232 nova_compute[250269]: 2026-01-23 11:21:02.330 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 23 06:21:02 np0005593232 nova_compute[250269]: 2026-01-23 11:21:02.331 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:21:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:21:02 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:21:02 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1882521723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:21:02 np0005593232 nova_compute[250269]: 2026-01-23 11:21:02.761 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:21:02 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:02 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:21:02 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:21:02.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:21:02 np0005593232 nova_compute[250269]: 2026-01-23 11:21:02.926 250273 WARNING nova.virt.libvirt.driver [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 23 06:21:02 np0005593232 nova_compute[250269]: 2026-01-23 11:21:02.928 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3957MB free_disk=20.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 23 06:21:02 np0005593232 nova_compute[250269]: 2026-01-23 11:21:02.928 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 23 06:21:02 np0005593232 nova_compute[250269]: 2026-01-23 11:21:02.929 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 23 06:21:02 np0005593232 nova_compute[250269]: 2026-01-23 11:21:02.997 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 23 06:21:02 np0005593232 nova_compute[250269]: 2026-01-23 11:21:02.997 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 23 06:21:03 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4665: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:21:03 np0005593232 nova_compute[250269]: 2026-01-23 11:21:03.022 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 23 06:21:03 np0005593232 nova_compute[250269]: 2026-01-23 11:21:03.382 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:21:03 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 23 06:21:03 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/735186680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 23 06:21:03 np0005593232 nova_compute[250269]: 2026-01-23 11:21:03.471 250273 DEBUG oslo_concurrency.processutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 23 06:21:03 np0005593232 nova_compute[250269]: 2026-01-23 11:21:03.477 250273 DEBUG nova.compute.provider_tree [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed in ProviderTree for provider: 0e4a8508-835c-4c0a-aa74-aae2c6536573 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 23 06:21:03 np0005593232 nova_compute[250269]: 2026-01-23 11:21:03.503 250273 DEBUG nova.scheduler.client.report [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Inventory has not changed for provider 0e4a8508-835c-4c0a-aa74-aae2c6536573 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 23 06:21:03 np0005593232 nova_compute[250269]: 2026-01-23 11:21:03.505 250273 DEBUG nova.compute.resource_tracker [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 23 06:21:03 np0005593232 nova_compute[250269]: 2026-01-23 11:21:03.505 250273 DEBUG oslo_concurrency.lockutils [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 23 06:21:03 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:03 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:21:03 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:21:03.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:21:04 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:04 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:21:04 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:21:04.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:21:05 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4666: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:21:05 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:05 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:21:05 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:21:05.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:21:06 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:06 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:21:06 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:21:06.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:21:06 np0005593232 nova_compute[250269]: 2026-01-23 11:21:06.970 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:21:07 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4667: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:21:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:21:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:21:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:21:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:21:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] scanning for idle connections..
Jan 23 06:21:07 np0005593232 ceph-mgr[74726]: [volumes INFO mgr_util] cleaning up connections: []
Jan 23 06:21:07 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:21:07 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:07 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:21:07 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:21:07.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:21:08 np0005593232 nova_compute[250269]: 2026-01-23 11:21:08.385 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:21:08 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:08 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:21:08 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:21:08.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:21:09 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4668: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:21:09 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:09 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:21:09 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:21:09.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:21:10 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:10 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:21:10 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:21:10.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:21:11 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4669: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:21:11 np0005593232 systemd-logind[808]: New session 67 of user zuul.
Jan 23 06:21:11 np0005593232 systemd[1]: Started Session 67 of User zuul.
Jan 23 06:21:11 np0005593232 nova_compute[250269]: 2026-01-23 11:21:11.972 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:21:11 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:11 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:21:11 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:21:11.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:21:12 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:21:12 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:12 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:21:12 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:21:12.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:21:13 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4670: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:21:13 np0005593232 nova_compute[250269]: 2026-01-23 11:21:13.387 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:21:13 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:13 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:21:13 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:21:13.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:21:14 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42558 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:14 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42570 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:14 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.49166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:14 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:14 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:21:14 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:21:14.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:21:15 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4671: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:21:15 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 23 06:21:15 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/843986664' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 23 06:21:15 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.49172 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:15 np0005593232 podman[442161]: 2026-01-23 11:21:15.400893431 +0000 UTC m=+0.057858599 container health_status 8c748d34dc67fbbe348dcadd651841d4a7aa042a06f64484e44f77ee38d37d0f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 23 06:21:15 np0005593232 podman[442160]: 2026-01-23 11:21:15.426798396 +0000 UTC m=+0.083611820 container health_status 709c5f904621e9699338f0156549fa8b8d41599d8b79d099449f3a7cb176e3a5 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '5578e70e41b73d3290da8fd2fbea1ee513fdfd20a4cfb3f7eb83917d8bc18a25-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a-bef93ef0e1986fa5fb51ca50e50f416967b546288301471fa1318e5a4765151a'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 23 06:21:15 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:15 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:21:15 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:21:15.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:21:16 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.52069 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:16 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:16 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:21:16 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:21:16.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:21:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 06:21:16 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 8400.1 total, 600.0 interval#012Cumulative writes: 65K writes, 237K keys, 65K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s#012Cumulative WAL: 65K writes, 25K syncs, 2.61 writes per sync, written: 0.22 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1474 writes, 4001 keys, 1474 commit groups, 1.0 writes per commit group, ingest: 3.71 MB, 0.01 MB/s#012Interval WAL: 1474 writes, 657 syncs, 2.24 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 06:21:16 np0005593232 nova_compute[250269]: 2026-01-23 11:21:16.974 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:21:16 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.52075 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:17 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4672: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:21:17 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:21:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:21:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:21:17.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:21:18 np0005593232 ovs-vsctl[442239]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 23 06:21:18 np0005593232 nova_compute[250269]: 2026-01-23 11:21:18.389 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:21:18 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:18 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:21:18 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:21:18.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:21:19 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4673: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:21:19 np0005593232 virtqemud[249592]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 23 06:21:19 np0005593232 virtqemud[249592]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 23 06:21:19 np0005593232 virtqemud[249592]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 23 06:21:19 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: cache status {prefix=cache status} (starting...)
Jan 23 06:21:19 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Can't run that command on an inactive MDS!
Jan 23 06:21:19 np0005593232 lvm[442555]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 23 06:21:19 np0005593232 lvm[442555]: VG ceph_vg0 finished
Jan 23 06:21:19 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: client ls {prefix=client ls} (starting...)
Jan 23 06:21:19 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Can't run that command on an inactive MDS!
Jan 23 06:21:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 23 06:21:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:21:20.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 23 06:21:20 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42597 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:20 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: damage ls {prefix=damage ls} (starting...)
Jan 23 06:21:20 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Can't run that command on an inactive MDS!
Jan 23 06:21:20 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: dump loads {prefix=dump loads} (starting...)
Jan 23 06:21:20 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Can't run that command on an inactive MDS!
Jan 23 06:21:20 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 23 06:21:20 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1947181109' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 23 06:21:20 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42609 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:20 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 23 06:21:20 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Can't run that command on an inactive MDS!
Jan 23 06:21:20 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 23 06:21:20 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Can't run that command on an inactive MDS!
Jan 23 06:21:20 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:20 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:21:20 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:21:20.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:21:20 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 23 06:21:20 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Can't run that command on an inactive MDS!
Jan 23 06:21:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 23 06:21:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2659005178' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 23 06:21:21 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4674: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:21:21 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 23 06:21:21 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Can't run that command on an inactive MDS!
Jan 23 06:21:21 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Jan 23 06:21:21 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1893493038' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 23 06:21:21 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 23 06:21:21 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Can't run that command on an inactive MDS!
Jan 23 06:21:21 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42630 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:21 np0005593232 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 06:21:21 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T11:21:21.408+0000 7f341f708640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 06:21:21 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.52093 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:21 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 23 06:21:21 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Can't run that command on an inactive MDS!
Jan 23 06:21:21 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: ops {prefix=ops} (starting...)
Jan 23 06:21:21 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Can't run that command on an inactive MDS!
Jan 23 06:21:21 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.49196 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:21 np0005593232 nova_compute[250269]: 2026-01-23 11:21:21.975 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:21:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:21:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:21:22.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:21:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Jan 23 06:21:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4186241396' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 23 06:21:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Jan 23 06:21:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4086045223' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 23 06:21:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 23 06:21:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 23 06:21:22 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: session ls {prefix=session ls} (starting...)
Jan 23 06:21:22 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk Can't run that command on an inactive MDS!
Jan 23 06:21:22 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.52102 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:22 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42675 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:22 np0005593232 nova_compute[250269]: 2026-01-23 11:21:22.501 250273 DEBUG oslo_service.periodic_task [None req-fd91e57a-eb00-4a28-aeee-811ceb0d5f50 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 23 06:21:22 np0005593232 ceph-mds[95253]: mds.cephfs.compute-0.djntrk asok_command: status {prefix=status} (starting...)
Jan 23 06:21:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 23 06:21:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2159811116' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 23 06:21:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 23 06:21:22 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 23 06:21:22 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 23 06:21:22 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.52114 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:22 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42690 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:22 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:22 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:21:22 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:21:22.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:21:22 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.49226 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:22 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T11:21:22.933+0000 7f341f708640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 06:21:22 np0005593232 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 06:21:23 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4675: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:21:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 23 06:21:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1030964004' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 23 06:21:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 23 06:21:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3196943268' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 23 06:21:23 np0005593232 nova_compute[250269]: 2026-01-23 11:21:23.390 250273 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 23 06:21:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 23 06:21:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/273169891' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 23 06:21:23 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.52147 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:23 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T11:21:23.577+0000 7f341f708640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 06:21:23 np0005593232 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 23 06:21:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Jan 23 06:21:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3592567376' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 23 06:21:23 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 23 06:21:23 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/802802994' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 06:21:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 23 06:21:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:21:24.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 23 06:21:24 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.52165 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:24 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T11:21:24.117+0000 7f341f708640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 06:21:24 np0005593232 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 06:21:24 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.49262 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:24 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.52174 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 23 06:21:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/392542386' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 23 06:21:24 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.49280 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Jan 23 06:21:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1718382584' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 23 06:21:24 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.52195 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:24 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 23 06:21:24 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/211050914' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 23 06:21:24 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:24 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:21:24 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.102 - anonymous [23/Jan/2026:11:21:24.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:21:25 np0005593232 ceph-mgr[74726]: log_channel(cluster) log [DBG] : pgmap v4676: 321 pgs: 321 active+clean; 120 MiB data, 1.6 GiB used, 19 GiB / 21 GiB avail
Jan 23 06:21:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 23 06:21:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 23 06:21:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Jan 23 06:21:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1920241470' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 23 06:21:25 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42792 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 23 06:21:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 23 06:21:25 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42801 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 23 06:21:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3775224595' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 23 06:21:25 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 23 06:21:25 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2962081090' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 23 06:21:25 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42822 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:25 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.49340 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:25 np0005593232 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 06:21:25 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T11:21:25.989+0000 7f341f708640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 06:21:26 np0005593232 radosgw[94687]: ====== starting new request req=0x7f0a019d96f0 =====
Jan 23 06:21:26 np0005593232 radosgw[94687]: ====== req done req=0x7f0a019d96f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 23 06:21:26 np0005593232 radosgw[94687]: beast: 0x7f0a019d96f0: 192.168.122.100 - anonymous [23/Jan/2026:11:21:26.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 23 06:21:26 np0005593232 ceph-mon[74423]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 23 06:21:26 np0005593232 ceph-mon[74423]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2735278065' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 23 06:21:26 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.52255 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:26 np0005593232 ceph-mgr[74726]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 06:21:26 np0005593232 ceph-e1533653-0a5a-584c-b34b-8689f0d32e77-mgr-compute-0-yntofk[74722]: 2026-01-23T11:21:26.131+0000 7f341f708640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4285766 data_alloc: 218103808 data_used: 11628544
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a54f5000/0x0/0x1bfc00000, data 0x21f43f2/0x2419000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398606336 unmapped: 58310656 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398614528 unmapped: 58302464 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a54f5000/0x0/0x1bfc00000, data 0x21f43f2/0x2419000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a54f5000/0x0/0x1bfc00000, data 0x21f43f2/0x2419000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398614528 unmapped: 58302464 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 58294272 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a54f5000/0x0/0x1bfc00000, data 0x21f43f2/0x2419000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 58294272 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4285766 data_alloc: 218103808 data_used: 11628544
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 415 heartbeat osd_stat(store_statfs(0x1a54f5000/0x0/0x1bfc00000, data 0x21f43f2/0x2419000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 58294272 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.272722244s of 30.330598831s, submitted: 16
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398622720 unmapped: 58294272 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 416 ms_handle_reset con 0x55fba004c800 session 0x55fba09efa40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398630912 unmapped: 58286080 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398688256 unmapped: 58228736 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398688256 unmapped: 58228736 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 417 ms_handle_reset con 0x55fba332ec00 session 0x55fb9ffc9c20
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a54ed000/0x0/0x1bfc00000, data 0x21f7cf8/0x241f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4293586 data_alloc: 218103808 data_used: 11636736
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398688256 unmapped: 58228736 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398688256 unmapped: 58228736 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398688256 unmapped: 58228736 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 417 heartbeat osd_stat(store_statfs(0x1a54ed000/0x0/0x1bfc00000, data 0x21f7cf8/0x241f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398696448 unmapped: 58220544 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398696448 unmapped: 58220544 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4295888 data_alloc: 218103808 data_used: 11636736
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398704640 unmapped: 58212352 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398704640 unmapped: 58212352 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54eb000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398704640 unmapped: 58212352 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.836272240s of 11.905964851s, submitted: 29
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba56cc800 session 0x55fba3766960
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398704640 unmapped: 58212352 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398704640 unmapped: 58212352 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4298310 data_alloc: 218103808 data_used: 11636736
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 58204160 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba9dd0400 session 0x55fba09f6f00
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 58204160 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54eb000/0x0/0x1bfc00000, data 0x21f9847/0x2423000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398712832 unmapped: 58204160 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398721024 unmapped: 58195968 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398721024 unmapped: 58195968 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4298310 data_alloc: 218103808 data_used: 11636736
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398721024 unmapped: 58195968 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54eb000/0x0/0x1bfc00000, data 0x21f9847/0x2423000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398721024 unmapped: 58195968 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54eb000/0x0/0x1bfc00000, data 0x21f9847/0x2423000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398721024 unmapped: 58195968 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fbaad23000 session 0x55fba044fa40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398721024 unmapped: 58195968 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398721024 unmapped: 58195968 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.303462029s of 11.313310623s, submitted: 3
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba004c800 session 0x55fba0b13680
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4298134 data_alloc: 218103808 data_used: 11636736
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54eb000/0x0/0x1bfc00000, data 0x21f9847/0x2423000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba332ec00 session 0x55fb9ffc9c20
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398729216 unmapped: 58187776 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba56cc800 session 0x55fba354b4a0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398729216 unmapped: 58187776 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398729216 unmapped: 58187776 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398729216 unmapped: 58187776 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398737408 unmapped: 58179584 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4296845 data_alloc: 218103808 data_used: 11636736
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398737408 unmapped: 58179584 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398737408 unmapped: 58179584 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398737408 unmapped: 58179584 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398737408 unmapped: 58179584 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398745600 unmapped: 58171392 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4296845 data_alloc: 218103808 data_used: 11636736
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398745600 unmapped: 58171392 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398745600 unmapped: 58171392 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398745600 unmapped: 58171392 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398745600 unmapped: 58171392 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398745600 unmapped: 58171392 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4296845 data_alloc: 218103808 data_used: 11636736
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398745600 unmapped: 58171392 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398753792 unmapped: 58163200 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 58146816 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 58146816 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 58146816 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4296845 data_alloc: 218103808 data_used: 11636736
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 58146816 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 58146816 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 58146816 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 58146816 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398770176 unmapped: 58146816 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4296845 data_alloc: 218103808 data_used: 11636736
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398778368 unmapped: 58138624 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398778368 unmapped: 58138624 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398778368 unmapped: 58138624 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398778368 unmapped: 58138624 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398778368 unmapped: 58138624 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4296845 data_alloc: 218103808 data_used: 11636736
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398778368 unmapped: 58138624 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398786560 unmapped: 58130432 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398786560 unmapped: 58130432 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398794752 unmapped: 58122240 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398794752 unmapped: 58122240 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4296845 data_alloc: 218103808 data_used: 11636736
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398794752 unmapped: 58122240 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398794752 unmapped: 58122240 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398794752 unmapped: 58122240 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398794752 unmapped: 58122240 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398802944 unmapped: 58114048 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4296845 data_alloc: 218103808 data_used: 11636736
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398811136 unmapped: 58105856 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398811136 unmapped: 58105856 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ec000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398811136 unmapped: 58105856 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 43.471817017s of 43.603153229s, submitted: 19
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398835712 unmapped: 58081280 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398835712 unmapped: 58081280 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303952 data_alloc: 218103808 data_used: 11636736
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba9dd0400 session 0x55fb9ffc8000
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398811136 unmapped: 58105856 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398811136 unmapped: 58105856 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ab000/0x0/0x1bfc00000, data 0x223989a/0x2463000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398811136 unmapped: 58105856 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398811136 unmapped: 58105856 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398819328 unmapped: 58097664 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4303609 data_alloc: 218103808 data_used: 11640832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398819328 unmapped: 58097664 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ab000/0x0/0x1bfc00000, data 0x223989a/0x2463000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398819328 unmapped: 58097664 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ab000/0x0/0x1bfc00000, data 0x223989a/0x2463000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 398819328 unmapped: 58097664 heap: 456916992 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.290052414s of 10.332016945s, submitted: 10
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fbb2bdc800 session 0x55fba045fe00
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 61366272 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 61366272 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4396467 data_alloc: 218103808 data_used: 11640832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 61366272 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399220736 unmapped: 61366272 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399228928 unmapped: 61358080 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a488a000/0x0/0x1bfc00000, data 0x2e5a89a/0x3084000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399228928 unmapped: 61358080 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399228928 unmapped: 61358080 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4396467 data_alloc: 218103808 data_used: 11640832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399228928 unmapped: 61358080 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399228928 unmapped: 61358080 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399228928 unmapped: 61358080 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a488a000/0x0/0x1bfc00000, data 0x2e5a89a/0x3084000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399237120 unmapped: 61349888 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399237120 unmapped: 61349888 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4396467 data_alloc: 218103808 data_used: 11640832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399245312 unmapped: 61341696 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399245312 unmapped: 61341696 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba004c800 session 0x55fba045fc20
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 61333504 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba332ec00 session 0x55fba0b12000
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba56cc800 session 0x55fba3766960
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 61333504 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba9dd0400 session 0x55fba3767a40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a488a000/0x0/0x1bfc00000, data 0x2e5a89a/0x3084000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 61333504 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4396467 data_alloc: 218103808 data_used: 11640832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 61333504 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 59K writes, 219K keys, 59K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.03 MB/s#012Cumulative WAL: 59K writes, 22K syncs, 2.63 writes per sync, written: 0.20 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4075 writes, 11K keys, 4075 commit groups, 1.0 writes per commit group, ingest: 9.73 MB, 0.02 MB/s#012Interval WAL: 4075 writes, 1682 syncs, 2.42 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399253504 unmapped: 61333504 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 61325312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a488a000/0x0/0x1bfc00000, data 0x2e5a89a/0x3084000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 61325312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a488a000/0x0/0x1bfc00000, data 0x2e5a89a/0x3084000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 61325312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4477747 data_alloc: 234881024 data_used: 20426752
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 61325312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a488a000/0x0/0x1bfc00000, data 0x2e5a89a/0x3084000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 61325312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 61325312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 61325312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 61325312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4477747 data_alloc: 234881024 data_used: 20426752
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 61325312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a488a000/0x0/0x1bfc00000, data 0x2e5a89a/0x3084000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 399261696 unmapped: 61325312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.893667221s of 29.003288269s, submitted: 12
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404291584 unmapped: 56295424 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402489344 unmapped: 58097664 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402948096 unmapped: 57638912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4537885 data_alloc: 234881024 data_used: 21372928
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402948096 unmapped: 57638912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402948096 unmapped: 57638912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a406b000/0x0/0x1bfc00000, data 0x367989a/0x38a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a406b000/0x0/0x1bfc00000, data 0x367989a/0x38a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402948096 unmapped: 57638912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402825216 unmapped: 57761792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402825216 unmapped: 57761792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4541165 data_alloc: 234881024 data_used: 21823488
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402825216 unmapped: 57761792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402825216 unmapped: 57761792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402825216 unmapped: 57761792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402825216 unmapped: 57761792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402825216 unmapped: 57761792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4541165 data_alloc: 234881024 data_used: 21823488
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402825216 unmapped: 57761792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402825216 unmapped: 57761792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402833408 unmapped: 57753600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402833408 unmapped: 57753600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402833408 unmapped: 57753600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4541165 data_alloc: 234881024 data_used: 21823488
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402833408 unmapped: 57753600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402833408 unmapped: 57753600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402833408 unmapped: 57753600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402841600 unmapped: 57745408 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402841600 unmapped: 57745408 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4541165 data_alloc: 234881024 data_used: 21823488
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402841600 unmapped: 57745408 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402849792 unmapped: 57737216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402849792 unmapped: 57737216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402849792 unmapped: 57737216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402849792 unmapped: 57737216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4541165 data_alloc: 234881024 data_used: 21823488
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402849792 unmapped: 57737216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402866176 unmapped: 57720832 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402866176 unmapped: 57720832 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402874368 unmapped: 57712640 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 29.411119461s of 32.324645996s, submitted: 68
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402874368 unmapped: 57712640 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4540417 data_alloc: 234881024 data_used: 21827584
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402874368 unmapped: 57712640 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402874368 unmapped: 57712640 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402874368 unmapped: 57712640 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba4981400 session 0x55fba0b09a40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 57704448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba23d6c00 session 0x55fba0b081e0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 402882560 unmapped: 57704448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4065000/0x0/0x1bfc00000, data 0x367f89a/0x38a9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba004c800 session 0x55fba0b08d20
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4310809 data_alloc: 218103808 data_used: 11902976
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400015360 unmapped: 60571648 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ab000/0x0/0x1bfc00000, data 0x223989a/0x2463000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400015360 unmapped: 60571648 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400015360 unmapped: 60571648 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54ab000/0x0/0x1bfc00000, data 0x223989a/0x2463000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400015360 unmapped: 60571648 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400015360 unmapped: 60571648 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4310809 data_alloc: 218103808 data_used: 11902976
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.833645821s of 10.873172760s, submitted: 13
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba332ec00 session 0x55fba2ed5e00
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba56cc800 session 0x55fba2ed54a0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54eb000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4306155 data_alloc: 218103808 data_used: 11636736
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54eb000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4306155 data_alloc: 218103808 data_used: 11636736
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54eb000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4306155 data_alloc: 218103808 data_used: 11636736
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400023552 unmapped: 60563456 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54eb000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400031744 unmapped: 60555264 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400031744 unmapped: 60555264 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4306155 data_alloc: 218103808 data_used: 11636736
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400031744 unmapped: 60555264 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54eb000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400031744 unmapped: 60555264 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400031744 unmapped: 60555264 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400039936 unmapped: 60547072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400039936 unmapped: 60547072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4306155 data_alloc: 218103808 data_used: 11636736
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400039936 unmapped: 60547072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400039936 unmapped: 60547072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a54eb000/0x0/0x1bfc00000, data 0x21f9837/0x2422000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400039936 unmapped: 60547072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400039936 unmapped: 60547072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400039936 unmapped: 60547072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4306155 data_alloc: 218103808 data_used: 11636736
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400048128 unmapped: 60538880 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.403900146s of 31.496391296s, submitted: 27
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 400048128 unmapped: 60538880 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba9dd0400 session 0x55fba32614a0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba004c800 session 0x55fba2e1d680
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 59334656 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4e3f000/0x0/0x1bfc00000, data 0x28a5899/0x2acf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 59334656 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 59334656 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4363542 data_alloc: 218103808 data_used: 11636736
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401260544 unmapped: 59326464 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401260544 unmapped: 59326464 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4e3f000/0x0/0x1bfc00000, data 0x28a5899/0x2acf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401260544 unmapped: 59326464 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4e3f000/0x0/0x1bfc00000, data 0x28a5899/0x2acf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401260544 unmapped: 59326464 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4e3f000/0x0/0x1bfc00000, data 0x28a5899/0x2acf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4e3f000/0x0/0x1bfc00000, data 0x28a5899/0x2acf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401260544 unmapped: 59326464 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4363542 data_alloc: 218103808 data_used: 11636736
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401260544 unmapped: 59326464 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401260544 unmapped: 59326464 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.726271629s of 10.869317055s, submitted: 32
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba23d6c00 session 0x55fba3639680
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401235968 unmapped: 59351040 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401235968 unmapped: 59351040 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401252352 unmapped: 59334656 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4e3f000/0x0/0x1bfc00000, data 0x28a5899/0x2acf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4379843 data_alloc: 218103808 data_used: 13783040
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 58613760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 58613760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 58613760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 58613760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4e3f000/0x0/0x1bfc00000, data 0x28a5899/0x2acf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 58613760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4413123 data_alloc: 218103808 data_used: 18534400
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 58613760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 58613760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 58613760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 58613760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 401973248 unmapped: 58613760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4e3f000/0x0/0x1bfc00000, data 0x28a5899/0x2acf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4413603 data_alloc: 218103808 data_used: 18546688
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.796961784s of 12.985931396s, submitted: 6
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404193280 unmapped: 56393728 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404652032 unmapped: 55934976 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404660224 unmapped: 55926784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4a11000/0x0/0x1bfc00000, data 0x2cd3899/0x2efd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404660224 unmapped: 55926784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404660224 unmapped: 55926784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4448949 data_alloc: 234881024 data_used: 19124224
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404660224 unmapped: 55926784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404660224 unmapped: 55926784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4a0b000/0x0/0x1bfc00000, data 0x2cd9899/0x2f03000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a4a0b000/0x0/0x1bfc00000, data 0x2cd9899/0x2f03000, compress 0x0/0x0/0x0, omap 0x639, meta 0x182ef9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404668416 unmapped: 55918592 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404676608 unmapped: 55910400 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404676608 unmapped: 55910400 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4447509 data_alloc: 234881024 data_used: 19128320
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.086083412s of 10.270460129s, submitted: 94
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404692992 unmapped: 55894016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405872640 unmapped: 54714368 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a45fb000/0x0/0x1bfc00000, data 0x2cd9899/0x2f03000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a45fb000/0x0/0x1bfc00000, data 0x2cd9899/0x2f03000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4447509 data_alloc: 234881024 data_used: 19128320
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a45fb000/0x0/0x1bfc00000, data 0x2cd9899/0x2f03000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a45fb000/0x0/0x1bfc00000, data 0x2cd9899/0x2f03000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a45fb000/0x0/0x1bfc00000, data 0x2cd9899/0x2f03000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4447509 data_alloc: 234881024 data_used: 19128320
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a45fb000/0x0/0x1bfc00000, data 0x2cd9899/0x2f03000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.769052505s of 12.658998489s, submitted: 296
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405897216 unmapped: 54689792 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a45fb000/0x0/0x1bfc00000, data 0x2cd9899/0x2f03000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4448181 data_alloc: 234881024 data_used: 19136512
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba56cc800 session 0x55fba3758f00
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 ms_handle_reset con 0x55fba332ec00 session 0x55fba26ed4a0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a45fb000/0x0/0x1bfc00000, data 0x2cd9899/0x2f03000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405905408 unmapped: 54681600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405905408 unmapped: 54681600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405905408 unmapped: 54681600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405905408 unmapped: 54681600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405905408 unmapped: 54681600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4447625 data_alloc: 234881024 data_used: 19136512
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405905408 unmapped: 54681600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 heartbeat osd_stat(store_statfs(0x1a45fb000/0x0/0x1bfc00000, data 0x2cd9899/0x2f03000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405905408 unmapped: 54681600 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405913600 unmapped: 54673408 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405913600 unmapped: 54673408 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.277843475s of 11.294801712s, submitted: 4
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 419 ms_handle_reset con 0x55fba3d03400 session 0x55fba3c923c0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405921792 unmapped: 54665216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4453631 data_alloc: 234881024 data_used: 19140608
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 419 ms_handle_reset con 0x55fba004c800 session 0x55fba2683c20
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405921792 unmapped: 54665216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 419 ms_handle_reset con 0x55fba23d6c00 session 0x55fba3c92b40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a45f6000/0x0/0x1bfc00000, data 0x2cdb554/0x2f07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405929984 unmapped: 54657024 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405954560 unmapped: 54632448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405954560 unmapped: 54632448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a45f6000/0x0/0x1bfc00000, data 0x2cdb554/0x2f07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405954560 unmapped: 54632448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4454403 data_alloc: 234881024 data_used: 19210240
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405954560 unmapped: 54632448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 419 heartbeat osd_stat(store_statfs(0x1a45f6000/0x0/0x1bfc00000, data 0x2cdb554/0x2f07000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405954560 unmapped: 54632448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405954560 unmapped: 54632448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 420 ms_handle_reset con 0x55fba56cc800 session 0x55fba328a000
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405954560 unmapped: 54632448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405954560 unmapped: 54632448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a45f3000/0x0/0x1bfc00000, data 0x2cdd19f/0x2f09000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4456807 data_alloc: 234881024 data_used: 19218432
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a45f3000/0x0/0x1bfc00000, data 0x2cdd19f/0x2f09000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405954560 unmapped: 54632448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405954560 unmapped: 54632448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405954560 unmapped: 54632448 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.155465126s of 13.584081650s, submitted: 19
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a45f3000/0x0/0x1bfc00000, data 0x2cdd19f/0x2f09000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405987328 unmapped: 54599680 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 420 heartbeat osd_stat(store_statfs(0x1a45f5000/0x0/0x1bfc00000, data 0x2cdd19f/0x2f09000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406020096 unmapped: 54566912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4476989 data_alloc: 234881024 data_used: 20905984
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a45f1000/0x0/0x1bfc00000, data 0x2cdecde/0x2f0c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406020096 unmapped: 54566912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406020096 unmapped: 54566912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406028288 unmapped: 54558720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406028288 unmapped: 54558720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a45f1000/0x0/0x1bfc00000, data 0x2cdecde/0x2f0c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406028288 unmapped: 54558720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4477789 data_alloc: 234881024 data_used: 20926464
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406028288 unmapped: 54558720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406028288 unmapped: 54558720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406028288 unmapped: 54558720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a45f1000/0x0/0x1bfc00000, data 0x2cdecde/0x2f0c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.147567749s of 10.176469803s, submitted: 40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba3d03400 session 0x55fba354a3c0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba332ec00 session 0x55fba354b0e0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406036480 unmapped: 54550528 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a45f2000/0x0/0x1bfc00000, data 0x2cdecde/0x2f0c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [0,0,0,1])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba004c800 session 0x55fba352a3c0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406044672 unmapped: 54542336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406044672 unmapped: 54542336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406044672 unmapped: 54542336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406044672 unmapped: 54542336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406044672 unmapped: 54542336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406044672 unmapped: 54542336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406044672 unmapped: 54542336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406044672 unmapped: 54542336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406044672 unmapped: 54542336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406052864 unmapped: 54534144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406052864 unmapped: 54534144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406052864 unmapped: 54534144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406052864 unmapped: 54534144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406052864 unmapped: 54534144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406052864 unmapped: 54534144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406061056 unmapped: 54525952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406061056 unmapped: 54525952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 54517760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 54517760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406069248 unmapped: 54517760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406077440 unmapped: 54509568 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406077440 unmapped: 54509568 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406077440 unmapped: 54509568 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406077440 unmapped: 54509568 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 54501376 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 54501376 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 54501376 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 54501376 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406085632 unmapped: 54501376 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406093824 unmapped: 54493184 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406093824 unmapped: 54493184 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406093824 unmapped: 54493184 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406093824 unmapped: 54493184 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 54484992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 54484992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 54484992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 54484992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 54484992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 54484992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406102016 unmapped: 54484992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406110208 unmapped: 54476800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406110208 unmapped: 54476800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406110208 unmapped: 54476800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406110208 unmapped: 54476800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406110208 unmapped: 54476800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406110208 unmapped: 54476800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406110208 unmapped: 54476800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406118400 unmapped: 54468608 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406118400 unmapped: 54468608 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 54452224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 54452224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 54452224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 54452224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 54452224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 54452224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 54452224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406134784 unmapped: 54452224 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406142976 unmapped: 54444032 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406142976 unmapped: 54444032 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406142976 unmapped: 54444032 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406151168 unmapped: 54435840 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406151168 unmapped: 54435840 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406151168 unmapped: 54435840 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406151168 unmapped: 54435840 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406159360 unmapped: 54427648 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406159360 unmapped: 54427648 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4d93000/0x0/0x1bfc00000, data 0x21fec7c/0x242b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4323777 data_alloc: 218103808 data_used: 11661312
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba23d6c00 session 0x55fba08bc1e0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba3d03400 session 0x55fba04d9860
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba56cc800 session 0x55fba3261860
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406159360 unmapped: 54427648 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba332f800 session 0x55fba09f70e0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 68.158592224s of 68.289207458s, submitted: 32
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408330240 unmapped: 52256768 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba004c800 session 0x55fba16e9c20
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba23d6c00 session 0x55fba2394f00
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba3d03400 session 0x55fba3784000
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba56cc800 session 0x55fba38794a0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba7585000 session 0x55fba04d92c0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406175744 unmapped: 54411264 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406175744 unmapped: 54411264 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4e39000/0x0/0x1bfc00000, data 0x2496cee/0x26c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406175744 unmapped: 54411264 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4351878 data_alloc: 218103808 data_used: 11661312
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406175744 unmapped: 54411264 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406183936 unmapped: 54403072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406183936 unmapped: 54403072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406183936 unmapped: 54403072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4e39000/0x0/0x1bfc00000, data 0x2496cee/0x26c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406183936 unmapped: 54403072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4351878 data_alloc: 218103808 data_used: 11661312
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406183936 unmapped: 54403072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406183936 unmapped: 54403072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406183936 unmapped: 54403072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4e39000/0x0/0x1bfc00000, data 0x2496cee/0x26c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406183936 unmapped: 54403072 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406192128 unmapped: 54394880 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4351878 data_alloc: 218103808 data_used: 11661312
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406192128 unmapped: 54394880 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406192128 unmapped: 54394880 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406192128 unmapped: 54394880 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406192128 unmapped: 54394880 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4e39000/0x0/0x1bfc00000, data 0x2496cee/0x26c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406192128 unmapped: 54394880 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4351878 data_alloc: 218103808 data_used: 11661312
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406192128 unmapped: 54394880 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406192128 unmapped: 54394880 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406593536 unmapped: 53993472 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406593536 unmapped: 53993472 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4e39000/0x0/0x1bfc00000, data 0x2496cee/0x26c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406593536 unmapped: 53993472 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4371238 data_alloc: 218103808 data_used: 14352384
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406593536 unmapped: 53993472 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406593536 unmapped: 53993472 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406593536 unmapped: 53993472 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4e39000/0x0/0x1bfc00000, data 0x2496cee/0x26c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406593536 unmapped: 53993472 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406593536 unmapped: 53993472 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4371238 data_alloc: 218103808 data_used: 14352384
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406601728 unmapped: 53985280 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406601728 unmapped: 53985280 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406601728 unmapped: 53985280 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4e39000/0x0/0x1bfc00000, data 0x2496cee/0x26c5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.134342194s of 32.618972778s, submitted: 26
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406601728 unmapped: 53985280 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407805952 unmapped: 52781056 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4405496 data_alloc: 218103808 data_used: 14360576
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404111360 unmapped: 56475648 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4ad5000/0x0/0x1bfc00000, data 0x27facee/0x2a29000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404578304 unmapped: 56008704 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404578304 unmapped: 56008704 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404529152 unmapped: 56057856 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405594112 unmapped: 54992896 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402330 data_alloc: 218103808 data_used: 14401536
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4a8d000/0x0/0x1bfc00000, data 0x283acee/0x2a69000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405594112 unmapped: 54992896 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4a8b000/0x0/0x1bfc00000, data 0x283ccee/0x2a6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405594112 unmapped: 54992896 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405594112 unmapped: 54992896 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405594112 unmapped: 54992896 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405594112 unmapped: 54992896 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402330 data_alloc: 218103808 data_used: 14401536
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4a8b000/0x0/0x1bfc00000, data 0x283ccee/0x2a6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405594112 unmapped: 54992896 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405594112 unmapped: 54992896 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405594112 unmapped: 54992896 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405594112 unmapped: 54992896 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405594112 unmapped: 54992896 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402330 data_alloc: 218103808 data_used: 14401536
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4a8b000/0x0/0x1bfc00000, data 0x283ccee/0x2a6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405602304 unmapped: 54984704 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405602304 unmapped: 54984704 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405602304 unmapped: 54984704 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405602304 unmapped: 54984704 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402330 data_alloc: 218103808 data_used: 14401536
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4a8b000/0x0/0x1bfc00000, data 0x283ccee/0x2a6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4a8b000/0x0/0x1bfc00000, data 0x283ccee/0x2a6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4a8b000/0x0/0x1bfc00000, data 0x283ccee/0x2a6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402330 data_alloc: 218103808 data_used: 14401536
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405618688 unmapped: 54968320 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405618688 unmapped: 54968320 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.385208130s of 28.853710175s, submitted: 67
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405635072 unmapped: 54951936 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405635072 unmapped: 54951936 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405635072 unmapped: 54951936 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402122 data_alloc: 218103808 data_used: 14393344
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4a93000/0x0/0x1bfc00000, data 0x283ccee/0x2a6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405635072 unmapped: 54951936 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba004c800 session 0x55fba0462000
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 heartbeat osd_stat(store_statfs(0x1a4a93000/0x0/0x1bfc00000, data 0x283ccee/0x2a6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402050 data_alloc: 218103808 data_used: 14393344
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 ms_handle_reset con 0x55fba23d6c00 session 0x55fba3785680
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.038470268s of 10.760538101s, submitted: 24
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405651456 unmapped: 54935552 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 422 ms_handle_reset con 0x55fba3d03400 session 0x55fba234be00
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 422 ms_handle_reset con 0x55fba56cc800 session 0x55fba0b08780
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 422 ms_handle_reset con 0x55fba7585000 session 0x55fba3759a40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405651456 unmapped: 54935552 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402336 data_alloc: 218103808 data_used: 14401536
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a4a8f000/0x0/0x1bfc00000, data 0x283e947/0x2a6e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405651456 unmapped: 54935552 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405659648 unmapped: 54927360 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405659648 unmapped: 54927360 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405659648 unmapped: 54927360 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405659648 unmapped: 54927360 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4403296 data_alloc: 218103808 data_used: 14512128
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405659648 unmapped: 54927360 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a4a8f000/0x0/0x1bfc00000, data 0x283e947/0x2a6e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405659648 unmapped: 54927360 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a4a8f000/0x0/0x1bfc00000, data 0x283e947/0x2a6e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405659648 unmapped: 54927360 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 422 ms_handle_reset con 0x55fba004c800 session 0x55fba2ed6960
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405659648 unmapped: 54927360 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405659648 unmapped: 54927360 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4403296 data_alloc: 218103808 data_used: 14512128
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405659648 unmapped: 54927360 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a4a8f000/0x0/0x1bfc00000, data 0x283e947/0x2a6e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405667840 unmapped: 54919168 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405667840 unmapped: 54919168 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 422 heartbeat osd_stat(store_statfs(0x1a4a8f000/0x0/0x1bfc00000, data 0x283e947/0x2a6e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405667840 unmapped: 54919168 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405667840 unmapped: 54919168 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 422 ms_handle_reset con 0x55fba23d6c00 session 0x55fba16e90e0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4403296 data_alloc: 218103808 data_used: 14512128
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.117155075s of 17.333608627s, submitted: 25
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 422 ms_handle_reset con 0x55fba3d03400 session 0x55fba26a90e0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405667840 unmapped: 54919168 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405667840 unmapped: 54919168 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405676032 unmapped: 54910976 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a4a8c000/0x0/0x1bfc00000, data 0x2e7e5f4/0x2a71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405684224 unmapped: 54902784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a4a8c000/0x0/0x1bfc00000, data 0x2e7e5f4/0x2a71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405684224 unmapped: 54902784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4441032 data_alloc: 218103808 data_used: 14520320
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405684224 unmapped: 54902784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 423 ms_handle_reset con 0x55fba56cc800 session 0x55fba3785a40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405684224 unmapped: 54902784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a4a8d000/0x0/0x1bfc00000, data 0x28405f4/0x2a71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405684224 unmapped: 54902784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405684224 unmapped: 54902784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405684224 unmapped: 54902784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4426062 data_alloc: 218103808 data_used: 14520320
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 423 heartbeat osd_stat(store_statfs(0x1a4a8d000/0x0/0x1bfc00000, data 0x28405f4/0x2a71000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4426062 data_alloc: 218103808 data_used: 14520320
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.353925705s of 15.476061821s, submitted: 24
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4a89000/0x0/0x1bfc00000, data 0x2842133/0x2a74000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405610496 unmapped: 54976512 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4a89000/0x0/0x1bfc00000, data 0x2842133/0x2a74000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405618688 unmapped: 54968320 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405618688 unmapped: 54968320 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4430236 data_alloc: 218103808 data_used: 14528512
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405618688 unmapped: 54968320 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405618688 unmapped: 54968320 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405618688 unmapped: 54968320 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4a70000/0x0/0x1bfc00000, data 0x2842133/0x2a74000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405618688 unmapped: 54968320 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405618688 unmapped: 54968320 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444210 data_alloc: 218103808 data_used: 15032320
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405618688 unmapped: 54968320 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405626880 unmapped: 54960128 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4a70000/0x0/0x1bfc00000, data 0x2842133/0x2a74000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405626880 unmapped: 54960128 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405626880 unmapped: 54960128 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405626880 unmapped: 54960128 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444210 data_alloc: 218103808 data_used: 15032320
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405626880 unmapped: 54960128 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4a70000/0x0/0x1bfc00000, data 0x2842133/0x2a74000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405635072 unmapped: 54951936 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405635072 unmapped: 54951936 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405635072 unmapped: 54951936 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405635072 unmapped: 54951936 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444210 data_alloc: 218103808 data_used: 15032320
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4a70000/0x0/0x1bfc00000, data 0x2842133/0x2a74000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405635072 unmapped: 54951936 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444210 data_alloc: 218103808 data_used: 15032320
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4a70000/0x0/0x1bfc00000, data 0x2842133/0x2a74000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.122325897s of 27.159317017s, submitted: 24
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405643264 unmapped: 54943744 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405684224 unmapped: 54902784 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba7585000 session 0x55fba354ad20
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405692416 unmapped: 54894592 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4443018 data_alloc: 218103808 data_used: 15032320
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 55508992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 55508992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4dfb000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 55508992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba004c800 session 0x55fba09f63c0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 55508992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 55508992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4362890 data_alloc: 218103808 data_used: 12701696
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4dfb000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4dfb000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 55508992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405078016 unmapped: 55508992 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405086208 unmapped: 55500800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405086208 unmapped: 55500800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405086208 unmapped: 55500800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4362890 data_alloc: 218103808 data_used: 12701696
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4dfb000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405020672 unmapped: 55566336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405020672 unmapped: 55566336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4dfb000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405020672 unmapped: 55566336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405028864 unmapped: 55558144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405028864 unmapped: 55558144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4362890 data_alloc: 218103808 data_used: 12701696
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405037056 unmapped: 55549952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4dfb000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405037056 unmapped: 55549952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405037056 unmapped: 55549952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405037056 unmapped: 55549952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405037056 unmapped: 55549952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4362890 data_alloc: 218103808 data_used: 12701696
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405037056 unmapped: 55549952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405037056 unmapped: 55549952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4dfb000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405037056 unmapped: 55549952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405037056 unmapped: 55549952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405037056 unmapped: 55549952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4362890 data_alloc: 218103808 data_used: 12701696
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405037056 unmapped: 55549952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405045248 unmapped: 55541760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4dfb000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405045248 unmapped: 55541760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405045248 unmapped: 55541760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4dfb000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405045248 unmapped: 55541760 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4362890 data_alloc: 218103808 data_used: 12701696
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4dfb000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405053440 unmapped: 55533568 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4dfb000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405053440 unmapped: 55533568 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23d6c00 session 0x55fba09ee3c0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3d03400 session 0x55fba2e1d680
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba56cc800 session 0x55fba3784d20
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3d00400 session 0x55fba3c92b40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405053440 unmapped: 55533568 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 33.370250702s of 35.421714783s, submitted: 24
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404201472 unmapped: 56385536 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba004c800 session 0x55fba354b0e0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040d1/0x2435000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404430848 unmapped: 56156160 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23d6c00 session 0x55fba045e5a0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3d03400 session 0x55fba26a8000
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4352964 data_alloc: 218103808 data_used: 11689984
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba56cc800 session 0x55fba3cb1e00
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba332f400 session 0x55fba2ed50e0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404430848 unmapped: 56156160 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404430848 unmapped: 56156160 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404430848 unmapped: 56156160 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404430848 unmapped: 56156160 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a5062000/0x0/0x1bfc00000, data 0x226a0f4/0x249c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404381696 unmapped: 56205312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba004c800 session 0x55fba16e9860
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4354265 data_alloc: 218103808 data_used: 11689984
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404381696 unmapped: 56205312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404381696 unmapped: 56205312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404381696 unmapped: 56205312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404381696 unmapped: 56205312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a5062000/0x0/0x1bfc00000, data 0x226a0f4/0x249c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404381696 unmapped: 56205312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4355545 data_alloc: 218103808 data_used: 11816960
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404381696 unmapped: 56205312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404381696 unmapped: 56205312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404381696 unmapped: 56205312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404381696 unmapped: 56205312 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a5062000/0x0/0x1bfc00000, data 0x226a0f4/0x249c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404389888 unmapped: 56197120 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4355545 data_alloc: 218103808 data_used: 11816960
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a5062000/0x0/0x1bfc00000, data 0x226a0f4/0x249c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404389888 unmapped: 56197120 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404389888 unmapped: 56197120 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.212444305s of 19.736858368s, submitted: 10
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406609920 unmapped: 53977088 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 406609920 unmapped: 53977088 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a495d000/0x0/0x1bfc00000, data 0x29670f4/0x2b99000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407134208 unmapped: 53452800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416727 data_alloc: 218103808 data_used: 12005376
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407134208 unmapped: 53452800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407134208 unmapped: 53452800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a48e7000/0x0/0x1bfc00000, data 0x29e50f4/0x2c17000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407134208 unmapped: 53452800 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407142400 unmapped: 53444608 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 53321728 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416491 data_alloc: 218103808 data_used: 12005376
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 53321728 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a48c6000/0x0/0x1bfc00000, data 0x2a060f4/0x2c38000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 53321728 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 53321728 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 53313536 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 53313536 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416491 data_alloc: 218103808 data_used: 12005376
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407273472 unmapped: 53313536 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.595334053s of 13.891137123s, submitted: 62
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a48c6000/0x0/0x1bfc00000, data 0x2a060f4/0x2c38000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a48c0000/0x0/0x1bfc00000, data 0x2a0c0f4/0x2c3e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a48c0000/0x0/0x1bfc00000, data 0x2a0c0f4/0x2c3e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a48c0000/0x0/0x1bfc00000, data 0x2a0c0f4/0x2c3e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 53297152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416547 data_alloc: 218103808 data_used: 12005376
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 53297152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 53297152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23d6c00 session 0x55fba0b12780
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 53297152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 53297152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 53297152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a48bd000/0x0/0x1bfc00000, data 0x2a0f0f4/0x2c41000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416623 data_alloc: 218103808 data_used: 12005376
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 53297152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407289856 unmapped: 53297152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407298048 unmapped: 53288960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407298048 unmapped: 53288960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a48bd000/0x0/0x1bfc00000, data 0x2a0f0f4/0x2c41000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407298048 unmapped: 53288960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4416623 data_alloc: 218103808 data_used: 12005376
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407298048 unmapped: 53288960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407298048 unmapped: 53288960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407306240 unmapped: 53280768 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3d03400 session 0x55fba0b04b40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.955385208s of 16.972343445s, submitted: 4
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba56cc800 session 0x55fba0b134a0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407322624 unmapped: 53264384 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407322624 unmapped: 53264384 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4351257 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407322624 unmapped: 53264384 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407322624 unmapped: 53264384 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407330816 unmapped: 53256192 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407330816 unmapped: 53256192 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407330816 unmapped: 53256192 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4351257 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407330816 unmapped: 53256192 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407330816 unmapped: 53256192 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407330816 unmapped: 53256192 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407330816 unmapped: 53256192 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407339008 unmapped: 53248000 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4351257 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407339008 unmapped: 53248000 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407339008 unmapped: 53248000 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407339008 unmapped: 53248000 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407339008 unmapped: 53248000 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407339008 unmapped: 53248000 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4351257 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407339008 unmapped: 53248000 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407339008 unmapped: 53248000 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407347200 unmapped: 53239808 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407347200 unmapped: 53239808 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407347200 unmapped: 53239808 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4351257 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407355392 unmapped: 53231616 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407355392 unmapped: 53231616 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba686f800 session 0x55fba399d2c0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba004c800 session 0x55fba399c5a0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23d6c00 session 0x55fba399cf00
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3d03400 session 0x55fba399cb40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.548852921s of 24.614151001s, submitted: 18
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407355392 unmapped: 53231616 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba56cc800 session 0x55fba399c1e0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba2e5f800 session 0x55fba37c9c20
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba004c800 session 0x55fba37c9a40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23d6c00 session 0x55fba37c9680
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba2e5f800 session 0x55fba37c9860
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407355392 unmapped: 53231616 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407355392 unmapped: 53231616 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4404667 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407363584 unmapped: 53223424 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3d03400 session 0x55fba37c94a0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a49c3000/0x0/0x1bfc00000, data 0x290a0d1/0x2b3b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba56cc800 session 0x55fba37c9e00
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407363584 unmapped: 53223424 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba004c800 session 0x55fba26ecb40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23d6c00 session 0x55fba26ed860
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407511040 unmapped: 53075968 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407265280 unmapped: 53321728 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4459695 data_alloc: 218103808 data_used: 19058688
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a499f000/0x0/0x1bfc00000, data 0x292e0d1/0x2b5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a499f000/0x0/0x1bfc00000, data 0x292e0d1/0x2b5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4459695 data_alloc: 218103808 data_used: 19058688
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a499f000/0x0/0x1bfc00000, data 0x292e0d1/0x2b5f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407281664 unmapped: 53305344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.001239777s of 17.084104538s, submitted: 8
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 407863296 unmapped: 52723712 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4465407 data_alloc: 218103808 data_used: 19075072
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405626880 unmapped: 54960128 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a45fa000/0x0/0x1bfc00000, data 0x2cc50d1/0x2ef6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4505947 data_alloc: 218103808 data_used: 19189760
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a45fa000/0x0/0x1bfc00000, data 0x2cc50d1/0x2ef6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4505947 data_alloc: 218103808 data_used: 19189760
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a45fa000/0x0/0x1bfc00000, data 0x2cc50d1/0x2ef6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405716992 unmapped: 54870016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 54861824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4505947 data_alloc: 218103808 data_used: 19189760
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 54861824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a45fa000/0x0/0x1bfc00000, data 0x2cc50d1/0x2ef6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 54861824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 54861824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 54861824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 54861824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4505947 data_alloc: 218103808 data_used: 19189760
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a45fa000/0x0/0x1bfc00000, data 0x2cc50d1/0x2ef6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 54861824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 54861824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 54861824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a45fa000/0x0/0x1bfc00000, data 0x2cc50d1/0x2ef6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba2e5f800 session 0x55fba26ec3c0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3d03400 session 0x55fba2ed6000
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 54861824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 405725184 unmapped: 54861824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4505947 data_alloc: 218103808 data_used: 19189760
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a45fa000/0x0/0x1bfc00000, data 0x2cc50d1/0x2ef6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.626197815s of 26.226358414s, submitted: 40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba686fc00 session 0x55fba37c8f00
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4361547 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4361547 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4361547 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4361547 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4361547 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4361547 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404422656 unmapped: 56164352 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba004c800 session 0x55fba3767e00
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23d6c00 session 0x55fba1d2a960
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba2e5f800 session 0x55fba399d4a0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3d03400 session 0x55fba37581e0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.153734207s of 31.213592529s, submitted: 13
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3338c00 session 0x55fba0b0a1e0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404676608 unmapped: 55910400 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba004c800 session 0x55fba3260780
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404684800 unmapped: 55902208 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4b18000/0x0/0x1bfc00000, data 0x27b5123/0x29e6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404684800 unmapped: 55902208 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415032 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404684800 unmapped: 55902208 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404684800 unmapped: 55902208 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4b18000/0x0/0x1bfc00000, data 0x27b5123/0x29e6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404684800 unmapped: 55902208 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404684800 unmapped: 55902208 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404684800 unmapped: 55902208 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4b18000/0x0/0x1bfc00000, data 0x27b5123/0x29e6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415032 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404684800 unmapped: 55902208 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404692992 unmapped: 55894016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404692992 unmapped: 55894016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404692992 unmapped: 55894016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404692992 unmapped: 55894016 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4b18000/0x0/0x1bfc00000, data 0x27b5123/0x29e6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4447192 data_alloc: 218103808 data_used: 16142336
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 55885824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 55885824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 55885824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4b18000/0x0/0x1bfc00000, data 0x27b5123/0x29e6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 55885824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 55885824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4447192 data_alloc: 218103808 data_used: 16142336
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 55885824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 55885824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 55885824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 55885824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4b18000/0x0/0x1bfc00000, data 0x27b5123/0x29e6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 55885824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4447192 data_alloc: 218103808 data_used: 16142336
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4b18000/0x0/0x1bfc00000, data 0x27b5123/0x29e6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 404701184 unmapped: 55885824 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.331642151s of 23.421398163s, submitted: 38
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411361280 unmapped: 49225728 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411811840 unmapped: 48775168 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411811840 unmapped: 48775168 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411811840 unmapped: 48775168 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4568398 data_alloc: 218103808 data_used: 17227776
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411811840 unmapped: 48775168 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3c9b000/0x0/0x1bfc00000, data 0x3631123/0x3862000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411811840 unmapped: 48775168 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411942912 unmapped: 48644096 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411942912 unmapped: 48644096 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411942912 unmapped: 48644096 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4559734 data_alloc: 218103808 data_used: 17231872
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411959296 unmapped: 48627712 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3c7b000/0x0/0x1bfc00000, data 0x3652123/0x3883000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.038951874s of 10.530030251s, submitted: 129
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23d6c00 session 0x55fba0463860
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3c64000/0x0/0x1bfc00000, data 0x3669123/0x389a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4561130 data_alloc: 218103808 data_used: 17244160
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3c64000/0x0/0x1bfc00000, data 0x3669123/0x389a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4562890 data_alloc: 218103808 data_used: 17440768
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413007872 unmapped: 47579136 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.1 total, 600.0 interval#012Cumulative writes: 61K writes, 226K keys, 61K commit groups, 1.0 writes per commit group, ingest: 0.21 GB, 0.03 MB/s#012Cumulative WAL: 61K writes, 23K syncs, 2.63 writes per sync, written: 0.21 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1936 writes, 7029 keys, 1936 commit groups, 1.0 writes per commit group, ingest: 7.72 MB, 0.01 MB/s#012Interval WAL: 1936 writes, 756 syncs, 2.56 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413016064 unmapped: 47570944 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3c64000/0x0/0x1bfc00000, data 0x3669123/0x389a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4562890 data_alloc: 218103808 data_used: 17440768
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413024256 unmapped: 47562752 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.942539215s of 15.453146935s, submitted: 13
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 47554560 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3c62000/0x0/0x1bfc00000, data 0x3669123/0x389a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 47554560 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 47554560 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 47554560 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4583378 data_alloc: 218103808 data_used: 19492864
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 47554560 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 47554560 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 47554560 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3c62000/0x0/0x1bfc00000, data 0x3669123/0x389a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba2e5f800 session 0x55fba3cb0d20
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413032448 unmapped: 47554560 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: mgrc ms_handle_reset ms_handle_reset con 0x55fba3d03c00
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/530399322
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/530399322,v1:192.168.122.100:6801/530399322]
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: mgrc handle_mgr_configure stats_period=5
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3d03400 session 0x55fba3784b40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23a3400 session 0x55fba09743c0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410116096 unmapped: 50470912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373492 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410116096 unmapped: 50470912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410116096 unmapped: 50470912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410116096 unmapped: 50470912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410116096 unmapped: 50470912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410116096 unmapped: 50470912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373492 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410116096 unmapped: 50470912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410116096 unmapped: 50470912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410116096 unmapped: 50470912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410116096 unmapped: 50470912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410116096 unmapped: 50470912 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373492 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373492 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373492 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373492 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373492 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373492 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410124288 unmapped: 50462720 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 50454528 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 50454528 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373492 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 50454528 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 50454528 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 50454528 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 50454528 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 50454528 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4373492 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 50454528 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410140672 unmapped: 50446336 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 55.570503235s of 55.644828796s, submitted: 29
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23a3400 session 0x55fba3c93860
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410148864 unmapped: 50438144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4ebd000/0x0/0x1bfc00000, data 0x24110c1/0x2641000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410148864 unmapped: 50438144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410148864 unmapped: 50438144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4391918 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4ebd000/0x0/0x1bfc00000, data 0x24110c1/0x2641000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410148864 unmapped: 50438144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410148864 unmapped: 50438144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4ebd000/0x0/0x1bfc00000, data 0x24110c1/0x2641000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410148864 unmapped: 50438144 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4ebd000/0x0/0x1bfc00000, data 0x24110c1/0x2641000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410157056 unmapped: 50429952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410157056 unmapped: 50429952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23d6c00 session 0x55fba3cb0f00
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4391918 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410157056 unmapped: 50429952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba2e5f800 session 0x55fba37c92c0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410157056 unmapped: 50429952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4ebd000/0x0/0x1bfc00000, data 0x24110c1/0x2641000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3d03400 session 0x55fba3c93c20
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba5994000 session 0x55fba3260000
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410157056 unmapped: 50429952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410157056 unmapped: 50429952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410157056 unmapped: 50429952 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406638 data_alloc: 218103808 data_used: 13824000
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411377664 unmapped: 49209344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4ebd000/0x0/0x1bfc00000, data 0x24110c1/0x2641000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411377664 unmapped: 49209344 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411385856 unmapped: 49201152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411385856 unmapped: 49201152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411385856 unmapped: 49201152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406638 data_alloc: 218103808 data_used: 13824000
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411385856 unmapped: 49201152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411385856 unmapped: 49201152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4ebd000/0x0/0x1bfc00000, data 0x24110c1/0x2641000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411385856 unmapped: 49201152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411385856 unmapped: 49201152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411385856 unmapped: 49201152 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406638 data_alloc: 218103808 data_used: 13824000
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 49192960 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.814170837s of 23.856830597s, submitted: 4
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413089792 unmapped: 47497216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413089792 unmapped: 47497216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4731000/0x0/0x1bfc00000, data 0x2b9d0c1/0x2dcd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413089792 unmapped: 47497216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413089792 unmapped: 47497216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4729000/0x0/0x1bfc00000, data 0x2ba50c1/0x2dd5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4480804 data_alloc: 218103808 data_used: 14610432
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413089792 unmapped: 47497216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413089792 unmapped: 47497216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413089792 unmapped: 47497216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413089792 unmapped: 47497216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413089792 unmapped: 47497216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4481592 data_alloc: 218103808 data_used: 14622720
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413089792 unmapped: 47497216 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4727000/0x0/0x1bfc00000, data 0x2ba60c1/0x2dd6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413097984 unmapped: 47489024 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23a3400 session 0x55fba3c92d20
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413097984 unmapped: 47489024 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.167284012s of 12.376974106s, submitted: 47
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23d6c00 session 0x55fba26a85a0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408592384 unmapped: 51994624 heap: 460587008 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba2e5f800 session 0x55fba3c92000
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408903680 unmapped: 54837248 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3d03400 session 0x55fba3b6d680
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a46e5000/0x0/0x1bfc00000, data 0x2be90c1/0x2e19000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4452064 data_alloc: 218103808 data_used: 11685888
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3561000 session 0x55fba08bd680
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408903680 unmapped: 54837248 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408903680 unmapped: 54837248 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23a3400 session 0x55fba15b01e0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408903680 unmapped: 54837248 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23d6c00 session 0x55fba09f9860
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408903680 unmapped: 54837248 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408911872 unmapped: 54829056 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a46e5000/0x0/0x1bfc00000, data 0x2be90c1/0x2e19000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525344 data_alloc: 234881024 data_used: 21975040
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408977408 unmapped: 54763520 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408977408 unmapped: 54763520 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408977408 unmapped: 54763520 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a46e5000/0x0/0x1bfc00000, data 0x2be90c1/0x2e19000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408977408 unmapped: 54763520 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408977408 unmapped: 54763520 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4525344 data_alloc: 234881024 data_used: 21975040
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408977408 unmapped: 54763520 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408977408 unmapped: 54763520 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408977408 unmapped: 54763520 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408977408 unmapped: 54763520 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a46e5000/0x0/0x1bfc00000, data 0x2be90c1/0x2e19000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408977408 unmapped: 54763520 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.264961243s of 17.363716125s, submitted: 16
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4570916 data_alloc: 234881024 data_used: 21975040
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414523392 unmapped: 49217536 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba28a5800 session 0x55fba38790e0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411533312 unmapped: 52207616 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411541504 unmapped: 52199424 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3c3f000/0x0/0x1bfc00000, data 0x368f0c1/0x38bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411541504 unmapped: 52199424 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411607040 unmapped: 52133888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4607832 data_alloc: 234881024 data_used: 22740992
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411607040 unmapped: 52133888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411607040 unmapped: 52133888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411639808 unmapped: 52101120 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3c3f000/0x0/0x1bfc00000, data 0x368f0c1/0x38bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411639808 unmapped: 52101120 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411705344 unmapped: 52035584 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.158336163s of 10.001039505s, submitted: 278
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4607480 data_alloc: 234881024 data_used: 22740992
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411705344 unmapped: 52035584 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411721728 unmapped: 52019200 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba2e5f800 session 0x55fba1737860
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411721728 unmapped: 52019200 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a3c3f000/0x0/0x1bfc00000, data 0x368f0c1/0x38bf000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [0,0,0,0,0,0,0,2])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411738112 unmapped: 52002816 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408485888 unmapped: 55255040 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba28a5800 session 0x55fba045fe00
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4386748 data_alloc: 218103808 data_used: 11706368
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4386748 data_alloc: 218103808 data_used: 11706368
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4386748 data_alloc: 218103808 data_used: 11706368
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408526848 unmapped: 55214080 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4386748 data_alloc: 218103808 data_used: 11706368
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 55205888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 55205888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 55205888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 55205888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 55205888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4386748 data_alloc: 218103808 data_used: 11706368
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 55205888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 55205888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 55205888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 55205888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50ca000/0x0/0x1bfc00000, data 0x22040c1/0x2434000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 55205888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4386748 data_alloc: 218103808 data_used: 11706368
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 55205888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408535040 unmapped: 55205888 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 28.407199860s of 31.508493423s, submitted: 205
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a50c9000/0x0/0x1bfc00000, data 0x22040ea/0x2435000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba3633400 session 0x55fba3758b40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408543232 unmapped: 55197696 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23a3400 session 0x55fba354be00
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a5022000/0x0/0x1bfc00000, data 0x22ab123/0x24dc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408543232 unmapped: 55197696 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408543232 unmapped: 55197696 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4401153 data_alloc: 218103808 data_used: 11706368
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408543232 unmapped: 55197696 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a5022000/0x0/0x1bfc00000, data 0x22ab123/0x24dc000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408543232 unmapped: 55197696 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408543232 unmapped: 55197696 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408543232 unmapped: 55197696 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba23d6c00 session 0x55fba23870e0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4401638 data_alloc: 218103808 data_used: 11706368
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a5021000/0x0/0x1bfc00000, data 0x22ab146/0x24dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4405638 data_alloc: 218103808 data_used: 12288000
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a5021000/0x0/0x1bfc00000, data 0x22ab146/0x24dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a5021000/0x0/0x1bfc00000, data 0x22ab146/0x24dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a5021000/0x0/0x1bfc00000, data 0x22ab146/0x24dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4405638 data_alloc: 218103808 data_used: 12288000
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408551424 unmapped: 55189504 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408559616 unmapped: 55181312 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a5021000/0x0/0x1bfc00000, data 0x22ab146/0x24dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.829019547s of 21.756328583s, submitted: 30
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408576000 unmapped: 55164928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410378240 unmapped: 53362688 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4440832 data_alloc: 218103808 data_used: 12500992
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4c98000/0x0/0x1bfc00000, data 0x2626146/0x2858000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410378240 unmapped: 53362688 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410378240 unmapped: 53362688 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410755072 unmapped: 52985856 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410755072 unmapped: 52985856 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4c6f000/0x0/0x1bfc00000, data 0x2657146/0x2889000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410755072 unmapped: 52985856 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4444738 data_alloc: 218103808 data_used: 12320768
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410755072 unmapped: 52985856 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410755072 unmapped: 52985856 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4c6f000/0x0/0x1bfc00000, data 0x2657146/0x2889000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410755072 unmapped: 52985856 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410755072 unmapped: 52985856 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.852149963s of 11.262113571s, submitted: 75
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410755072 unmapped: 52985856 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4443138 data_alloc: 218103808 data_used: 12324864
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410755072 unmapped: 52985856 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410755072 unmapped: 52985856 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410755072 unmapped: 52985856 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4c72000/0x0/0x1bfc00000, data 0x265a146/0x288c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [1])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4c71000/0x0/0x1bfc00000, data 0x265b146/0x288d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4447094 data_alloc: 218103808 data_used: 12312576
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4c71000/0x0/0x1bfc00000, data 0x265b146/0x288d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 ms_handle_reset con 0x55fba28a5800 session 0x55fba0b13c20
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4c71000/0x0/0x1bfc00000, data 0x265b146/0x288d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4447022 data_alloc: 218103808 data_used: 12312576
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 heartbeat osd_stat(store_statfs(0x1a4c71000/0x0/0x1bfc00000, data 0x265b146/0x288d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.434495926s of 12.933977127s, submitted: 31
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 425 ms_handle_reset con 0x55fba2e5f800 session 0x55fb9ffc8000
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a4c6c000/0x0/0x1bfc00000, data 0x265ce01/0x2891000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4448596 data_alloc: 218103808 data_used: 12316672
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a4c6c000/0x0/0x1bfc00000, data 0x265ce01/0x2891000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4449716 data_alloc: 218103808 data_used: 12439552
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a4c6c000/0x0/0x1bfc00000, data 0x265ce01/0x2891000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4449716 data_alloc: 218103808 data_used: 12439552
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 425 heartbeat osd_stat(store_statfs(0x1a4c6c000/0x0/0x1bfc00000, data 0x265ce01/0x2891000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.502620697s of 15.832051277s, submitted: 6
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410763264 unmapped: 52977664 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 426 ms_handle_reset con 0x55fba3d02000 session 0x55fba37c8b40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a4c6e000/0x0/0x1bfc00000, data 0x265cd9f/0x2890000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410771456 unmapped: 52969472 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4460144 data_alloc: 218103808 data_used: 13041664
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410771456 unmapped: 52969472 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a4c6a000/0x0/0x1bfc00000, data 0x265ea4c/0x2893000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410771456 unmapped: 52969472 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410771456 unmapped: 52969472 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410771456 unmapped: 52969472 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410779648 unmapped: 52961280 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4459456 data_alloc: 218103808 data_used: 13041664
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 426 heartbeat osd_stat(store_statfs(0x1a4c66000/0x0/0x1bfc00000, data 0x2663a4c/0x2898000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410779648 unmapped: 52961280 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410779648 unmapped: 52961280 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410779648 unmapped: 52961280 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410779648 unmapped: 52961280 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.013331413s of 11.288456917s, submitted: 38
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410779648 unmapped: 52961280 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4462206 data_alloc: 218103808 data_used: 13053952
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410779648 unmapped: 52961280 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4c62000/0x0/0x1bfc00000, data 0x266558b/0x289b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x186ff9c7), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba37c9680
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410779648 unmapped: 52961280 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410779648 unmapped: 52961280 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410779648 unmapped: 52961280 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4853000/0x0/0x1bfc00000, data 0x266558b/0x289b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410779648 unmapped: 52961280 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4461414 data_alloc: 218103808 data_used: 13058048
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410779648 unmapped: 52961280 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba354a780
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406262 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406262 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406262 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406262 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406262 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406262 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406262 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410796032 unmapped: 52944896 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 52936704 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 52936704 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 52936704 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406262 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 52936704 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 52936704 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 52936704 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 52936704 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 52936704 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406262 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 52936704 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 52936704 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 52936704 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410804224 unmapped: 52936704 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4a1c000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410812416 unmapped: 52928512 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4406262 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410812416 unmapped: 52928512 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410812416 unmapped: 52928512 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410812416 unmapped: 52928512 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 56.423934937s of 58.936065674s, submitted: 37
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415596544 unmapped: 48144384 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba3759860
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a452a000/0x0/0x1bfc00000, data 0x26fb506/0x292f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410836992 unmapped: 52903936 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a47bf000/0x0/0x1bfc00000, data 0x26fb506/0x292f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4448980 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba28a5800 session 0x55fba3784b40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410836992 unmapped: 52903936 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410836992 unmapped: 52903936 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba2e5f800 session 0x55fba32e3860
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410836992 unmapped: 52903936 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a47bf000/0x0/0x1bfc00000, data 0x26fb506/0x292f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba3c93c20
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba37c94a0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410836992 unmapped: 52903936 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410836992 unmapped: 52903936 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4451458 data_alloc: 218103808 data_used: 11796480
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410836992 unmapped: 52903936 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411033600 unmapped: 52707328 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411033600 unmapped: 52707328 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a47be000/0x0/0x1bfc00000, data 0x26fb516/0x2930000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411033600 unmapped: 52707328 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411033600 unmapped: 52707328 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486978 data_alloc: 218103808 data_used: 16818176
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a47be000/0x0/0x1bfc00000, data 0x26fb516/0x2930000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411033600 unmapped: 52707328 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411033600 unmapped: 52707328 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411033600 unmapped: 52707328 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411033600 unmapped: 52707328 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a47be000/0x0/0x1bfc00000, data 0x26fb516/0x2930000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411033600 unmapped: 52707328 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486978 data_alloc: 218103808 data_used: 16818176
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411033600 unmapped: 52707328 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.668106079s of 18.706832886s, submitted: 11
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413728768 unmapped: 50012160 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415031296 unmapped: 48709632 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415604736 unmapped: 48136192 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3c92000/0x0/0x1bfc00000, data 0x3227516/0x345c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18b0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415604736 unmapped: 48136192 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4579124 data_alloc: 218103808 data_used: 17281024
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415604736 unmapped: 48136192 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416669696 unmapped: 47071232 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416669696 unmapped: 47071232 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416669696 unmapped: 47071232 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4f000/0x0/0x1bfc00000, data 0x32ca516/0x34ff000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416669696 unmapped: 47071232 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4580874 data_alloc: 218103808 data_used: 17285120
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416669696 unmapped: 47071232 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.567571640s of 10.119446754s, submitted: 95
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 47063040 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 47063040 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4c000/0x0/0x1bfc00000, data 0x32cd516/0x3502000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 47063040 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4b000/0x0/0x1bfc00000, data 0x32ce516/0x3503000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 47063040 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4580462 data_alloc: 218103808 data_used: 17293312
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 47063040 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba32603c0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba28a5800 session 0x55fba32610e0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 47063040 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 47063040 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4b000/0x0/0x1bfc00000, data 0x32ce516/0x3503000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 47063040 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 47063040 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4580462 data_alloc: 218103808 data_used: 17293312
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4b000/0x0/0x1bfc00000, data 0x32ce516/0x3503000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416677888 unmapped: 47063040 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4b000/0x0/0x1bfc00000, data 0x32ce516/0x3503000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416686080 unmapped: 47054848 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4b000/0x0/0x1bfc00000, data 0x32ce516/0x3503000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba686e400 session 0x55fba0b0ad20
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416686080 unmapped: 47054848 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 47046656 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 47046656 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4581422 data_alloc: 218103808 data_used: 17391616
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 47046656 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 47046656 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 47046656 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4b000/0x0/0x1bfc00000, data 0x32ce516/0x3503000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 47046656 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4b000/0x0/0x1bfc00000, data 0x32ce516/0x3503000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 47046656 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4581422 data_alloc: 218103808 data_used: 17391616
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 47046656 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 47046656 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4b000/0x0/0x1bfc00000, data 0x32ce516/0x3503000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 47046656 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 47046656 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416694272 unmapped: 47046656 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4b000/0x0/0x1bfc00000, data 0x32ce516/0x3503000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4581422 data_alloc: 218103808 data_used: 17391616
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.709341049s of 24.039503098s, submitted: 7
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416702464 unmapped: 47038464 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416702464 unmapped: 47038464 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416702464 unmapped: 47038464 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4b000/0x0/0x1bfc00000, data 0x32ce516/0x3503000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4606926 data_alloc: 218103808 data_used: 20221952
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a4b000/0x0/0x1bfc00000, data 0x32ce516/0x3503000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4608734 data_alloc: 218103808 data_used: 20221952
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a46000/0x0/0x1bfc00000, data 0x32d3516/0x3508000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a46000/0x0/0x1bfc00000, data 0x32d3516/0x3508000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2a46000/0x0/0x1bfc00000, data 0x32d3516/0x3508000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba352a960
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba37c9a40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4609694 data_alloc: 218103808 data_used: 20246528
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416768000 unmapped: 46972928 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.323343277s of 16.135540009s, submitted: 6
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413163520 unmapped: 50577408 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209516/0x243e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba28a5800 session 0x55fba2e1c960
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415703 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415703 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415703 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415703 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415703 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415703 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415703 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4415703 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412114944 unmapped: 51625984 heap: 463740928 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba0b0a5a0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba1702400 session 0x55fba17363c0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba36394a0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba3879a40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 38.897144318s of 39.861175537s, submitted: 12
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba28a5800 session 0x55fba3b6de00
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba3b6de00
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 55820288 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba686e000 session 0x55fba3879a40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba17363c0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba0b0a5a0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x271d516/0x2952000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 55820288 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 55820288 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412123136 unmapped: 55820288 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4456612 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412131328 unmapped: 55812096 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba28a5800 session 0x55fba2e1c960
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412147712 unmapped: 55795712 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412147712 unmapped: 55795712 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x271d516/0x2952000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412344320 unmapped: 55599104 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 55566336 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486600 data_alloc: 218103808 data_used: 15933440
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 55566336 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 55566336 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 55566336 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x271d516/0x2952000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 55566336 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 55566336 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4486600 data_alloc: 218103808 data_used: 15933440
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412377088 unmapped: 55566336 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 55558144 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 55558144 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a35fc000/0x0/0x1bfc00000, data 0x271d516/0x2952000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412385280 unmapped: 55558144 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.937059402s of 18.226503372s, submitted: 13
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415154176 unmapped: 52789248 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4521352 data_alloc: 218103808 data_used: 15970304
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412827648 unmapped: 55115776 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412827648 unmapped: 55115776 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412827648 unmapped: 55115776 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32b8000/0x0/0x1bfc00000, data 0x2a61516/0x2c96000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32b8000/0x0/0x1bfc00000, data 0x2a61516/0x2c96000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412827648 unmapped: 55115776 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412827648 unmapped: 55115776 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4516660 data_alloc: 218103808 data_used: 16220160
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412827648 unmapped: 55115776 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412827648 unmapped: 55115776 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412835840 unmapped: 55107584 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412835840 unmapped: 55107584 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32ae000/0x0/0x1bfc00000, data 0x2a6a516/0x2c9f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [0,0,0,0,0,0,0,2])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4519966 data_alloc: 218103808 data_used: 16281600
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.488749027s of 11.441194534s, submitted: 23
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32ab000/0x0/0x1bfc00000, data 0x2a6e516/0x2ca3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4519566 data_alloc: 218103808 data_used: 16318464
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32ab000/0x0/0x1bfc00000, data 0x2a6e516/0x2ca3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba32603c0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba40f9400 session 0x55fba352a780
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4519434 data_alloc: 218103808 data_used: 16318464
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32ab000/0x0/0x1bfc00000, data 0x2a6e516/0x2ca3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412844032 unmapped: 55099392 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412852224 unmapped: 55091200 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32ab000/0x0/0x1bfc00000, data 0x2a6e516/0x2ca3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4519434 data_alloc: 218103808 data_used: 16318464
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412852224 unmapped: 55091200 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.342370987s of 15.328845024s, submitted: 2
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412852224 unmapped: 55091200 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412852224 unmapped: 55091200 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32ab000/0x0/0x1bfc00000, data 0x2a6e516/0x2ca3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412860416 unmapped: 55083008 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba234a3c0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba0b13e00
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412860416 unmapped: 55083008 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a32ab000/0x0/0x1bfc00000, data 0x2a6e516/0x2ca3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba28a5800 session 0x55fba3cb0000
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422474 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422474 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422474 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411394048 unmapped: 56549376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411402240 unmapped: 56541184 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422474 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411402240 unmapped: 56541184 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411402240 unmapped: 56541184 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411402240 unmapped: 56541184 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411402240 unmapped: 56541184 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411402240 unmapped: 56541184 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422474 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411402240 unmapped: 56541184 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411402240 unmapped: 56541184 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411410432 unmapped: 56532992 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411410432 unmapped: 56532992 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411410432 unmapped: 56532992 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422474 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411410432 unmapped: 56532992 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411410432 unmapped: 56532992 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411410432 unmapped: 56532992 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411418624 unmapped: 56524800 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411418624 unmapped: 56524800 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422474 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411418624 unmapped: 56524800 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411426816 unmapped: 56516608 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411426816 unmapped: 56516608 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411426816 unmapped: 56516608 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411426816 unmapped: 56516608 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422474 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411426816 unmapped: 56516608 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411426816 unmapped: 56516608 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411426816 unmapped: 56516608 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 56508416 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 411435008 unmapped: 56508416 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba32610e0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fbaad23400 session 0x55fba16e9680
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba3784780
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba09f65a0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4422474 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 44.588714600s of 44.694046021s, submitted: 24
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415113216 unmapped: 52830208 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba28a5800 session 0x55fba3759680
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b11000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408059904 unmapped: 59883520 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3773000/0x0/0x1bfc00000, data 0x25a7506/0x27db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408059904 unmapped: 59883520 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408059904 unmapped: 59883520 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408059904 unmapped: 59883520 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba3c93e00
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4452372 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408059904 unmapped: 59883520 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3773000/0x0/0x1bfc00000, data 0x25a7506/0x27db000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba3d03800 session 0x55fba3c92960
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408059904 unmapped: 59883520 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408059904 unmapped: 59883520 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba3b6f0e0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba328be00
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408068096 unmapped: 59875328 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408084480 unmapped: 59858944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4484068 data_alloc: 218103808 data_used: 15433728
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408092672 unmapped: 59850752 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3771000/0x0/0x1bfc00000, data 0x25a7539/0x27dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba09743c0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.832244873s of 11.179603577s, submitted: 16
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408100864 unmapped: 59842560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba28a5800 session 0x55fba045fc20
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3771000/0x0/0x1bfc00000, data 0x25a7539/0x27dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408100864 unmapped: 59842560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408100864 unmapped: 59842560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408100864 unmapped: 59842560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4483936 data_alloc: 218103808 data_used: 15433728
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408100864 unmapped: 59842560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3771000/0x0/0x1bfc00000, data 0x25a7539/0x27dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408100864 unmapped: 59842560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408100864 unmapped: 59842560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408100864 unmapped: 59842560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3771000/0x0/0x1bfc00000, data 0x25a7539/0x27dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408100864 unmapped: 59842560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4483844 data_alloc: 218103808 data_used: 15437824
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3771000/0x0/0x1bfc00000, data 0x25a7539/0x27dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408109056 unmapped: 59834368 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba332a400 session 0x55fba328ab40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408109056 unmapped: 59834368 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408109056 unmapped: 59834368 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408109056 unmapped: 59834368 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3771000/0x0/0x1bfc00000, data 0x25a7539/0x27dd000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408109056 unmapped: 59834368 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.298506737s of 13.644285202s, submitted: 3
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4484004 data_alloc: 218103808 data_used: 15441920
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408109056 unmapped: 59834368 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408109056 unmapped: 59834368 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba32e25a0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 408109056 unmapped: 59834368 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba09f6960
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409174016 unmapped: 58769408 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409174016 unmapped: 58769408 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4428257 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409174016 unmapped: 58769408 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409174016 unmapped: 58769408 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409174016 unmapped: 58769408 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409174016 unmapped: 58769408 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409174016 unmapped: 58769408 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4428257 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409174016 unmapped: 58769408 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409174016 unmapped: 58769408 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409174016 unmapped: 58769408 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409182208 unmapped: 58761216 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409182208 unmapped: 58761216 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4428257 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409182208 unmapped: 58761216 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409182208 unmapped: 58761216 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409182208 unmapped: 58761216 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409182208 unmapped: 58761216 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409182208 unmapped: 58761216 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4428257 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409182208 unmapped: 58761216 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409190400 unmapped: 58753024 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409190400 unmapped: 58753024 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409190400 unmapped: 58753024 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409190400 unmapped: 58753024 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4428257 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409190400 unmapped: 58753024 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409190400 unmapped: 58753024 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409190400 unmapped: 58753024 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409190400 unmapped: 58753024 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409198592 unmapped: 58744832 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4428257 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409198592 unmapped: 58744832 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409198592 unmapped: 58744832 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409198592 unmapped: 58744832 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409206784 unmapped: 58736640 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409206784 unmapped: 58736640 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4428257 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409214976 unmapped: 58728448 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409214976 unmapped: 58728448 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409214976 unmapped: 58728448 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409214976 unmapped: 58728448 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409214976 unmapped: 58728448 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4428257 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409223168 unmapped: 58720256 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409223168 unmapped: 58720256 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409223168 unmapped: 58720256 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3b10000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409231360 unmapped: 58712064 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 43.674488068s of 43.959205627s, submitted: 31
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba28a5800 session 0x55fba38785a0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409313280 unmapped: 58630144 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4464645 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409321472 unmapped: 58621952 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a36be000/0x0/0x1bfc00000, data 0x265c506/0x2890000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409321472 unmapped: 58621952 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409321472 unmapped: 58621952 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba3c93680
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba4f7fc00 session 0x55fba0950d20
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409321472 unmapped: 58621952 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba1d2ba40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba15b0780
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba3cb1a40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba4ec8c00 session 0x55fba0463860
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a36bd000/0x0/0x1bfc00000, data 0x265c516/0x2891000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba2e5e400 session 0x55fba354b2c0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba23943c0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409321472 unmapped: 58621952 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4467413 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409321472 unmapped: 58621952 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409321472 unmapped: 58621952 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413515776 unmapped: 54427648 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3274000/0x0/0x1bfc00000, data 0x2aa4526/0x2cda000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19caf9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,4])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413523968 unmapped: 54419456 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.059988022s of 10.040143967s, submitted: 15
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410910720 unmapped: 57032704 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4528931 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410910720 unmapped: 57032704 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410910720 unmapped: 57032704 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba28a5800 session 0x55fba26ec960
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba3639680
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba4ec8c00 session 0x55fba328a000
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fbb2bdc000 session 0x55fba3784b40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba3c930e0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba3c923c0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba2e5e400 session 0x55fba26ed4a0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409206784 unmapped: 58736640 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba15b0960
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba28a5800 session 0x55fba0b12960
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba32e3e00
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba26a8b40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409206784 unmapped: 58736640 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4128000/0x0/0x1bfc00000, data 0x2c2f549/0x2e66000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409206784 unmapped: 58736640 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4524925 data_alloc: 218103808 data_used: 11739136
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409206784 unmapped: 58736640 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409206784 unmapped: 58736640 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba09f70e0
Jan 23 06:21:26 np0005593232 ceph-mgr[74726]: log_channel(audit) log [DBG] : from='client.42843 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409206784 unmapped: 58736640 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fbb2bdc000 session 0x55fba354b860
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409575424 unmapped: 58368000 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba4ddfc00 session 0x55fba0463c20
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409608192 unmapped: 58335232 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4128000/0x0/0x1bfc00000, data 0x2c2f549/0x2e66000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.600816727s of 11.452827454s, submitted: 18
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4558707 data_alloc: 218103808 data_used: 16261120
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409608192 unmapped: 58335232 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba4ddfc00 session 0x55fba0b081e0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba26a8d20
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409608192 unmapped: 58335232 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409608192 unmapped: 58335232 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409608192 unmapped: 58335232 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409608192 unmapped: 58335232 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4127000/0x0/0x1bfc00000, data 0x2c2f56c/0x2e67000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4557826 data_alloc: 218103808 data_used: 16265216
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 409608192 unmapped: 58335232 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 57810944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 57810944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 57810944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4127000/0x0/0x1bfc00000, data 0x2c2f56c/0x2e67000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 57810944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4569986 data_alloc: 218103808 data_used: 17883136
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.936899185s of 10.173114777s, submitted: 4
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 410132480 unmapped: 57810944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a4127000/0x0/0x1bfc00000, data 0x2c2f56c/0x2e67000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,0,22])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 412418048 unmapped: 55525376 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413540352 unmapped: 54403072 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413548544 unmapped: 54394880 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3adb000/0x0/0x1bfc00000, data 0x327556c/0x34ad000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413630464 unmapped: 54312960 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4652240 data_alloc: 218103808 data_used: 20611072
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413630464 unmapped: 54312960 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413638656 unmapped: 54304768 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 413638656 unmapped: 54304768 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415129600 unmapped: 52813824 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 416006144 unmapped: 51937280 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba32e3680
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a364e000/0x0/0x1bfc00000, data 0x370856c/0x3940000, compress 0x0/0x0/0x0, omap 0x639, meta 0x18c6f9c7), peers [1,2] op hist [0,0,1])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #54. Immutable memtables: 10.
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4699400 data_alloc: 218103808 data_used: 20676608
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.537017345s of 10.000730515s, submitted: 106
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 417087488 unmapped: 50855936 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a29f2000/0x0/0x1bfc00000, data 0x30f855c/0x332f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 418136064 unmapped: 49807360 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 418136064 unmapped: 49807360 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fbb2bdc000 session 0x55fba37845a0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 418136064 unmapped: 49807360 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 418635776 unmapped: 49307648 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a279e000/0x0/0x1bfc00000, data 0x3419539/0x364f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4647054 data_alloc: 218103808 data_used: 19095552
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba5994400 session 0x55fba2e1d0e0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 418635776 unmapped: 49307648 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.1 total, 600.0 interval#012Cumulative writes: 63K writes, 233K keys, 63K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s#012Cumulative WAL: 63K writes, 24K syncs, 2.62 writes per sync, written: 0.22 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2022 writes, 7204 keys, 2022 commit groups, 1.0 writes per commit group, ingest: 7.29 MB, 0.01 MB/s#012Interval WAL: 2022 writes, 809 syncs, 2.50 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 53747712 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba38792c0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 53747712 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 53747712 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 53747712 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2f07000/0x0/0x1bfc00000, data 0x2cb2529/0x2ee7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4560688 data_alloc: 218103808 data_used: 14761984
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 53747712 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 53747712 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba4ec8c00 session 0x55fba38790e0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.918930054s of 11.380777359s, submitted: 38
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba2e5e400 session 0x55fba09ee780
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 53747712 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 53747712 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a2f07000/0x0/0x1bfc00000, data 0x2cb2529/0x2ee7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 53747712 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4448208 data_alloc: 218103808 data_used: 11735040
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414195712 unmapped: 53747712 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414212096 unmapped: 53731328 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba37585a0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4446696 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4446696 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4446696 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4446696 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414220288 unmapped: 53723136 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4446696 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4446696 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4446696 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4446696 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a39b0000/0x0/0x1bfc00000, data 0x2209506/0x243d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4446696 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba4ddfc00 session 0x55fba32e2780
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba0462780
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba0b08d20
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba2e5e400 session 0x55fba36390e0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 47.287857056s of 49.987705231s, submitted: 31
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 415842304 unmapped: 52101120 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba4ec8c00 session 0x55fba37590e0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fbb2bdc000 session 0x55fb9ffc7680
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba2ed5860
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23d6c00 session 0x55fba16e94a0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba2e5e400 session 0x55fba0b094a0
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3616000/0x0/0x1bfc00000, data 0x25a3516/0x27d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3616000/0x0/0x1bfc00000, data 0x25a3516/0x27d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3616000/0x0/0x1bfc00000, data 0x25a3516/0x27d8000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4475096 data_alloc: 218103808 data_used: 11730944
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414228480 unmapped: 53714944 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba4ec8c00 session 0x55fba3767a40
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414236672 unmapped: 53706752 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba2393400 session 0x55fba3785e00
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba2393400 session 0x55fba38faf00
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414236672 unmapped: 53706752 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 ms_handle_reset con 0x55fba23a3400 session 0x55fba3758f00
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3615000/0x0/0x1bfc00000, data 0x25a3526/0x27d9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414236672 unmapped: 53706752 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414244864 unmapped: 53698560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: bluestore.MempoolThread(0x55fb9f09fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4491974 data_alloc: 218103808 data_used: 13836288
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3615000/0x0/0x1bfc00000, data 0x25a3526/0x27d9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414244864 unmapped: 53698560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: osd.0 427 heartbeat osd_stat(store_statfs(0x1a3615000/0x0/0x1bfc00000, data 0x25a3526/0x27d9000, compress 0x0/0x0/0x0, omap 0x639, meta 0x19e0f9c7), peers [1,2] op hist [])
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414244864 unmapped: 53698560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
Jan 23 06:21:26 np0005593232 ceph-osd[85010]: prioritycache tune_memory target: 4294967296 mapped: 414244864 unmapped: 53698560 heap: 467943424 old mem: 2845415832 new mem: 2845415832
